WWS 507c Assignment 4 Solutions: October 27, 2015
Assignment 4 Solutions
Problem 1
Part a.
$Y_1 \in \{0, 1\},\ Y_2 \in \{0, 1\},\ \dots,\ Y_n \in \{0, 1\}$, so taking the sum $Y_1 + \cdots + Y_n$ simply counts the number of $Y_i$'s that are equal to 1 in the sample. Hence, the sample proportion of 1's is
\[
\hat{p} = \frac{\#\text{1's in sample}}{n} = \frac{Y_1 + \cdots + Y_n}{n} = \bar{Y}_n
\]
Part b.
\begin{align*}
E[\hat{p}] = E[\bar{Y}_n]
&= E\left[\frac{1}{n}\sum_{i=1}^{n} Y_i\right] \\
&= \frac{1}{n}\sum_{i=1}^{n} E[Y_i] \\
&= \frac{1}{n}\underbrace{(p + \cdots + p)}_{n} \\
&= p
\end{align*}
Part c.
Suppose $\sigma_y^2 = \operatorname{Var}[Y_i]$ is the population variance of a single observation $Y_i$. Then $\sigma_y^2 = p(1-p)$.
\begin{align*}
\operatorname{Var}[\hat{p}] = \operatorname{Var}[\bar{Y}_n]
&= \operatorname{Var}\left[\frac{1}{n}\sum_{i=1}^{n} Y_i\right] \\
&= \left(\frac{1}{n}\right)^2 \operatorname{Var}\left[\sum_{i=1}^{n} Y_i\right]
\end{align*}
Because the $Y_i$'s are independent, the variance of a sum is the sum of the variances. Then
\begin{align*}
\operatorname{Var}[\hat{p}] &= \left(\frac{1}{n}\right)^2 \operatorname{Var}\left[\sum_{i=1}^{n} Y_i\right] \\
&= \left(\frac{1}{n}\right)^2 \sum_{i=1}^{n} \operatorname{Var}[Y_i] \\
&= \left(\frac{1}{n}\right)^2 \cdot (n \sigma_y^2) \\
&= \frac{1}{n}\sigma_y^2 \\
&= \frac{1}{n} p(1-p)
\end{align*}
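As a quick numerical sanity check (not part of the original solution), a short Monte Carlo simulation can confirm both results: across many simulated samples, the average of $\hat{p}$ should be close to $p$ and its variance close to $p(1-p)/n$. The values of $p$, $n$, and the replication count below are arbitrary illustrative choices; a minimal sketch assuming NumPy is available:

```python
import numpy as np

# Monte Carlo check that E[p_hat] = p and Var[p_hat] = p(1 - p)/n.
# p, n, and the number of replications are arbitrary illustrative choices.
rng = np.random.default_rng(0)
p, n, reps = 0.3, 400, 100_000

# Each row is one sample of n Bernoulli(p) draws; p_hat is the row mean.
samples = rng.binomial(1, p, size=(reps, n))
p_hat = samples.mean(axis=1)

print(p_hat.mean(), p)                # should be close to p = 0.3
print(p_hat.var(), p * (1 - p) / n)   # should be close to p(1-p)/n = 0.000525
```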
Problem 2
Part a.
\[
\hat{p} = \bar{Y}_n = \frac{215}{400} = .5375
\]
Part b.
By the central limit theorem,
\[
t\text{-stat} = \sqrt{n}\,\frac{\bar{Y}_n - .5}{\sigma_y} = \sqrt{n}\,\frac{\hat{p} - .5}{\sigma_y} \sim_a N(0, 1),
\]
where $\sigma_y$ is the population standard deviation of a single observation. But $\sigma_y$ is very close to the value $\sqrt{\hat{p}(1-\hat{p})}$, so we substitute $\sqrt{\hat{p}(1-\hat{p})}$ in the denominator:
\[
\sqrt{n}\,\frac{\hat{p} - .5}{\sqrt{\hat{p}(1-\hat{p})}} \sim_a N(0, 1)
\]
The p-value asks how “rare” your observation is, in the sense of: what is the probability of observing a t-stat which is more “extreme” than what you see in-sample? In a two-sided hypothesis test
\[
H_0: p = .5 \quad \text{vs.} \quad H_1: p \neq .5
\]
this means the probability of observing a t-stat with absolute value greater than what you see.
In-sample, the t-stat is
\[
\sqrt{400}\,\frac{.5375 - .5}{\sqrt{.5375(1 - .5375)}} \approx 1.5,
\]
and the probability of seeing something more extreme is
\[
\Pr(|N(0, 1)| > 1.5) = \Pr(N(0, 1) > 1.5) + \Pr(N(0, 1) < -1.5) = .1336
\]
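A minimal sketch of this calculation in Python, assuming SciPy is available:

```python
from math import sqrt
from scipy.stats import norm

# Two-sided test of H0: p = .5 vs. H1: p != .5, with 215 successes out of 400.
n, p_hat, p0 = 400, 215 / 400, 0.5
t_stat = sqrt(n) * (p_hat - p0) / sqrt(p_hat * (1 - p_hat))
p_value = 2 * norm.sf(abs(t_stat))   # Pr(|N(0,1)| > |t|)

print(t_stat)   # ~1.50
print(p_value)  # ~0.133 (the solution rounds t to 1.5, giving .1336)
```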
Part c.
By the central limit theorem,
\[
t\text{-stat} = \sqrt{n}\,\frac{\bar{Y}_n - .5}{\sigma_y} = \sqrt{n}\,\frac{\hat{p} - .5}{\sigma_y} \sim_a N(0, 1),
\]
where $\sigma_y$ is the population standard deviation of a single observation. We do not know the true population value $\sigma_y$, but $\sigma_y$ should be very close to the value $\sqrt{\hat{p}(1-\hat{p})}$, so we substitute $\sqrt{\hat{p}(1-\hat{p})}$ in the denominator:
\[
\sqrt{n}\,\frac{\hat{p} - .5}{\sqrt{\hat{p}(1-\hat{p})}} \sim_a N(0, 1)
\]
The p-value asks how “rare” your observation is, in the sense of: what is the probability of observing
a t-stat which is more “extreme” than what you see in-sample? In a one-sided hypothesis test
\[
H_0: p = .5 \quad \text{vs.} \quad H_1: p > .5
\]
this means the probability of observing a t-stat greater than what you see. The in-sample t-stat is again approximately 1.5, so the one-sided p-value is $\Pr(N(0, 1) > 1.5) = .0668$.
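The same sketch as in Part b, with the one-sided tail probability (again assuming SciPy):

```python
from math import sqrt
from scipy.stats import norm

# One-sided test of H0: p = .5 vs. H1: p > .5.
n, p_hat, p0 = 400, 215 / 400, 0.5
t_stat = sqrt(n) * (p_hat - p0) / sqrt(p_hat * (1 - p_hat))
p_value = norm.sf(t_stat)   # Pr(N(0,1) > t), no absolute value

print(p_value)  # ~0.066 (the solution rounds t to 1.5, giving .0668)
```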
Part d.
Our answers differ because we have different definitions of what it means to be more “extreme.” In a
two-sided hypothesis test, we have to allow for the possibility that the t-statistic is more extreme in
absolute value, in either direction. In the one-sided test
\[
H_0: p = .5 \quad \text{vs.} \quad H_1: p > .5
\]
we only care about t-statistics being more extreme in the positive direction.
Part e.
It depends on what level of test you're considering. For a two-sided 10% test, we do not reject because the p-value is .1336 > .10. For a one-sided 10% test, we reject because the p-value is .0668 < .10. Clearly, neither test rejects at the 5% level or lower.
Problem 3
Part a.
\begin{align*}
t\text{-stat} &= \sqrt{n}\,\frac{\hat{p} - .5}{\sqrt{\hat{p}(1-\hat{p})}} \\
&= \sqrt{1055}\,\frac{.54 - .5}{\sqrt{.54(1 - .54)}} \\
&= 2.6068
\end{align*}
The two-sided p-value is $\Pr(|N(0, 1)| > 2.6068) = .0091 < .05$. We reject the null.
Part b.
\begin{align*}
t\text{-stat} &= \sqrt{n}\,\frac{\hat{p} - .5}{\sqrt{\hat{p}(1-\hat{p})}} \\
&= \sqrt{1055}\,\frac{.54 - .5}{\sqrt{.54(1 - .54)}} \\
&= 2.6068
\end{align*}
The one-sided p-value is $\Pr(N(0, 1) > 2.6068) = .0046 < .05$. We reject the null.
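Both of Problem 3's p-values can be checked the same way (a sketch assuming SciPy):

```python
from math import sqrt
from scipy.stats import norm

n, p_hat, p0 = 1055, 0.54, 0.5
t_stat = sqrt(n) * (p_hat - p0) / sqrt(p_hat * (1 - p_hat))

print(t_stat)               # ~2.6068
print(2 * norm.sf(t_stat))  # two-sided p-value, ~.0091
print(norm.sf(t_stat))      # one-sided p-value, ~.0046
```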
Problem 4
The survey is drawn from some population of people, which may or may not be the “given” population.
We want to see if p is equal to the value .11 or not.
Then
\begin{align*}
t\text{-stat} &= \sqrt{n}\,\frac{\hat{p} - .11}{\sqrt{\hat{p}(1-\hat{p})}} \\
&= \sqrt{600}\,\frac{.24 - .11}{\sqrt{.24(1 - .24)}} \\
&= 7.456
\end{align*}
The p-value is $\approx 0$ (notice that this is true for both the one-sided and two-sided alternative hypotheses).
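A sketch of this computation (again assuming SciPy):

```python
from math import sqrt
from scipy.stats import norm

n, p_hat, p0 = 600, 0.24, 0.11
t_stat = sqrt(n) * (p_hat - p0) / sqrt(p_hat * (1 - p_hat))

print(t_stat)               # ~7.456
print(norm.sf(t_stat))      # one-sided p-value: effectively zero
print(2 * norm.sf(t_stat))  # two-sided p-value: also effectively zero
```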
Problem 5
Part a.
We are asking if $E[\bar{Y}_n] = \mu_Y$:
\begin{align*}
E[\bar{Y}_n] &= E\left[\frac{1}{n}(Y_1 + \cdots + Y_n)\right] \\
&= \frac{1}{n} E[Y_1 + \cdots + Y_n] \\
&= \frac{1}{n}\left(E[Y_1] + \cdots + E[Y_n]\right) \\
&= \frac{1}{n}\underbrace{(\mu_Y + \cdots + \mu_Y)}_{n} \\
&= \frac{1}{n} \cdot n\mu_Y \\
&= \mu_Y
\end{align*}
so yes, it is unbiased.
Part b.
\begin{align*}
E[\bar{Y}_n^2] &= \operatorname{Var}[\bar{Y}_n] + \left(E[\bar{Y}_n]\right)^2 \\
&= \frac{1}{n}\sigma_y^2 + \mu_y^2 \\
&\neq \mu_y^2
\end{align*}
so no, $\bar{Y}_n^2$ is a biased estimator of $\mu_y^2$.
Part c.
We’re asking if the distribution of Ȳn gets closer to µy . This follows from the law of large numbers,
since Y1 , . . . , Yn are iid. So yes, it is consistent.
Part d.
Part c. tells us that $\bar{Y}_n \to_p \mu_y$, and the continuous mapping theorem tells us that any continuous function of $\bar{Y}_n$ converges in probability to the same function of $\mu_y$, so long as the function is continuous at $\mu_y$ itself. That is, for any function $G(\cdot)$ continuous at $\mu_y$, $G(\bar{Y}_n) \to_p G(\mu_y)$. Since $G(x) = x^2$ is continuous,
\[
\bar{Y}_n^2 \to_p \mu_y^2
\]
so $\bar{Y}_n^2$ is a consistent estimator of $\mu_y^2$.
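A small simulation (not part of the original solution) illustrates both Part b. and Part d.: at any fixed $n$, the average of $\bar{Y}_n^2$ across samples exceeds $\mu_y^2$ by roughly $\sigma_y^2/n$, yet the estimates concentrate around $\mu_y^2$ as $n$ grows. The Exponential(1) population below is an arbitrary choice with $\mu_y = \sigma_y^2 = 1$; a sketch assuming NumPy:

```python
import numpy as np

# Illustrate bias (Part b) and consistency (Part d) of Ybar^2 for mu^2.
# The Exponential(1) population (mu = 1, sigma^2 = 1) is an arbitrary choice.
rng = np.random.default_rng(0)
mu, sigma2, reps = 1.0, 1.0, 50_000

for n in (10, 100, 1000):
    ybar = rng.exponential(mu, size=(reps, n)).mean(axis=1)
    est = ybar**2
    # The mean exceeds mu^2 by about sigma^2/n, and the excess shrinks with n.
    print(n, est.mean(), mu**2 + sigma2 / n)
```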