Hypothesis Test Power Calculation
First, under $H_0$,
$$\bar X_n \sim N\!\left(0,\ \tfrac{1}{n}\right), \qquad n = 10\,000 \implies \sigma_{\bar X} = \frac{1}{\sqrt{n}} = \frac{1}{100}.$$
Choose $c$ so that $P(|\bar X_n| \ge c \mid \mu = 0) = 0.05$, which gives
$$c = z_{0.975} \cdot \frac{1}{100} \approx 1.96 \times 0.01 = 0.0196.$$
Under the alternative,
$$\bar X_n \sim N(0.01,\ 0.0001),$$
so
$$P(|\bar X_n| \ge 0.0196) = P(\bar X_n \ge 0.0196) + P(\bar X_n \le -0.0196).$$
The first term is
$$P(\bar X_n \ge 0.0196) = P\!\left(Z \ge \frac{0.0196 - 0.01}{0.01}\right) = P(Z \ge 0.96) = 1 - \Phi(0.96) \approx 0.1685.$$
The second term is
$$P(\bar X_n \le -0.0196) = P\!\left(Z \le \frac{-0.0196 - 0.01}{0.01}\right) = P(Z \le -2.96) = \Phi(-2.96) \approx 0.0015.$$
Hence the power is approximately $0.1685 + 0.0015 \approx 0.17$.
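As a quick numerical check, the whole calculation can be reproduced with Python's standard library (`statistics.NormalDist`); the numbers are the ones from the derivation above.

```python
from statistics import NormalDist

Z = NormalDist()                      # standard normal
n = 10_000
sigma_xbar = 1 / n**0.5               # 0.01
c = Z.inv_cdf(0.975) * sigma_xbar     # critical value, about 0.0196
mu = 0.01                             # true mean under the alternative

# P(Xbar >= c) + P(Xbar <= -c) when Xbar ~ N(mu, sigma_xbar^2)
power = (1 - Z.cdf((c - mu) / sigma_xbar)) + Z.cdf((-c - mu) / sigma_xbar)
print(round(power, 3))                # about 0.17
```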
Most scientific calculators (and apps) let you evaluate the normal CDF via one of two routes:

normalcdf(lower, upper, μ, σ)

So you'd do

normalcdf(-1E99, 0.96, 0, 1)
$$\Phi(x) = \tfrac{1}{2}\left[1 + \operatorname{erf}\!\left(x/\sqrt{2}\right)\right].$$
So for $x = 0.96$:
1. Compute $t = 0.96/\sqrt{2}$.
2. Compute $\operatorname{erf}(t)$.
Then $\Phi(0.96) \approx 0.8315$.
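In Python the same erf route is available in the standard `math` module; a minimal sketch:

```python
import math

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

print(round(phi(0.96), 4))   # 0.8315
```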
Yes, on the TI-84 you can get Φ(0.96) with the built-in normalcdf function:
1. Hit 2nd VARS.
2. Choose 2:normalcdf(.
3. The syntax is normalcdf(lower,upper,μ,σ).
4. Enter normalcdf(-1E99,0.96,0,1).
The calculator returns approximately 0.83147.
To get the critical value $z_{0.975}$, use invNorm:
1. Press 2nd VARS.
2. Select 3:invNorm(.
3. At the prompt enter invNorm(0.975,0,1), where 0.975 is the cumulative area, and 0, 1 are the mean and σ of the standard normal.
The calculator returns approximately 1.95996, so $z_{0.975} \approx 1.96$.
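If you'd rather check this off-calculator, Python's `statistics.NormalDist.inv_cdf` performs the same inverse-CDF computation:

```python
from statistics import NormalDist

# Quantile z_{0.975} of the standard normal
z = NormalDist(0, 1).inv_cdf(0.975)
print(round(z, 5))   # 1.95996
```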
(a) MLE of θ
The likelihood for an i.i.d. sample $X_1, \dots, X_n$ is
$$L(\theta) = \prod_{i=1}^n \theta x_i^{\theta-1} = \theta^n \left(\prod_{i=1}^n x_i\right)^{\theta-1}.$$
The log-likelihood is
$$\ell(\theta) = n \ln\theta + (\theta - 1)\sum_{i=1}^n \ln x_i.$$
Setting the score to zero,
$$\frac{d\ell}{d\theta} = \frac{n}{\theta} + \sum_{i=1}^n \ln x_i \overset{!}{=} 0 \implies \hat\theta = -\frac{n}{\sum_{i=1}^n \ln x_i}.$$
The second derivative is
$$\frac{d^2\ell}{d\theta^2} = -\frac{n}{\theta^2}, \qquad I(\theta) = -E\!\left[\frac{d^2\ell}{d\theta^2}\right] = \frac{n}{\theta^2}.$$
or equivalently
$$\hat\theta \approx N\!\left(\theta,\ \frac{\theta^2}{n}\right) \quad \text{for large } n.$$
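A quick simulation check of this MLE and its asymptotic variance, assuming the density $f(x;\theta) = \theta x^{\theta-1}$ on $(0,1)$ derived above (the value $\theta = 2.5$ is mine, chosen for illustration); note that $F(x) = x^\theta$, so $U^{1/\theta}$ has this density when $U \sim \text{Uniform}(0,1)$:

```python
import math
import random

random.seed(0)
theta = 2.5                      # true parameter (illustrative choice)
n = 100_000

# Inverse-CDF sampling: F(x) = x**theta, so X = U**(1/theta)
xs = [random.random() ** (1 / theta) for _ in range(n)]

theta_hat = -n / sum(math.log(x) for x in xs)
se = theta / math.sqrt(n)        # asymptotic standard error theta/sqrt(n)
print(theta_hat)                 # should land within a few SEs of 2.5
```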
(a) Least-squares estimator $\hat\beta$

We minimize
$$S(\beta) = \sum_{i=1}^n (y_i - \beta x_i)^2 \implies \hat\beta = \frac{\sum_{i=1}^n x_i y_i}{\sum_{i=1}^n x_i^2}.$$
(b) Distribution of $\hat\beta$

Under the model $Y_i = \beta x_i + \varepsilon_i$ with $\varepsilon_i \overset{\text{iid}}{\sim} N(0, \sigma^2)$, $\hat\beta$ is a linear combination of independent normals and is therefore normal.

Its mean is
$$E[\hat\beta] = \frac{\sum x_i E[Y_i]}{\sum x_i^2} = \frac{\sum x_i (\beta x_i)}{\sum x_i^2} = \beta.$$

Its variance is
$$\operatorname{Var}(\hat\beta) = \frac{\sum x_i^2 \operatorname{Var}(Y_i)}{\left(\sum x_i^2\right)^2} = \frac{\sigma^2 \sum x_i^2}{\left(\sum x_i^2\right)^2} = \frac{\sigma^2}{\sum x_i^2}.$$

Thus
$$\hat\beta \sim N\!\left(\beta,\ \sigma^2 \Big/ \textstyle\sum x_i^2\right).$$
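A short simulation that checks both facts (the design points, $\beta$, and $\sigma$ are hypothetical values chosen for illustration): it repeats the regression many times and compares the empirical mean and variance of $\hat\beta$ to $\beta$ and $\sigma^2/\sum x_i^2$.

```python
import random

random.seed(1)
beta, sigma = 2.0, 0.5
xs = [0.1 * i for i in range(1, 51)]   # fixed design points
sxx = sum(x * x for x in xs)

def beta_hat():
    ys = [beta * x + random.gauss(0, sigma) for x in xs]
    return sum(x * y for x, y in zip(xs, ys)) / sxx

estimates = [beta_hat() for _ in range(2000)]
mean_est = sum(estimates) / len(estimates)
var_est = sum((b - mean_est) ** 2 for b in estimates) / len(estimates)

print(mean_est)                    # close to beta = 2.0
print(var_est, sigma**2 / sxx)     # empirical vs. theoretical variance
```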
The residual sum of squares is
$$\text{SSE} = \sum_{i=1}^n r_i^2.$$
In the zero-intercept model we've used one parameter ($\beta$), so there are $n-1$ "free" directions in the data. One shows (by writing the projection matrix or via Cochran's theorem) that
$$\frac{\text{SSE}}{\sigma^2} \sim \chi^2_{n-1}.$$
Given all this, can you generalize the concepts here and
provide me a cheat sheet for an exam on this content
Formulas

Test of $H_0: \mu = \mu_0$ vs $H_1: \mu \ne \mu_0$ at level $\alpha$.

Test statistic:
$$Z = \frac{\bar X - \mu_0}{\sigma/\sqrt{n}} \sim N(0, 1) \quad \text{under } H_0.$$

Rejection region:
$$|Z| \ge z_{1-\alpha/2} \iff |\bar X - \mu_0| \ge z_{1-\alpha/2}\,\frac{\sigma}{\sqrt{n}}.$$

Critical value:
$$z_{1-\alpha/2} = \texttt{invNorm}(1 - \alpha/2,\ 0,\ 1).$$
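These three lines translate directly into code; a minimal sketch (the function name and sample numbers are mine, not from the notes):

```python
from statistics import NormalDist

def z_test(xbar, mu0, sigma, n, alpha=0.05):
    """Two-sided z test of H0: mu = mu0. Returns (z, reject?)."""
    z = (xbar - mu0) / (sigma / n**0.5)
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    return z, abs(z) >= z_crit

print(z_test(0.025, 0, 1, 10_000))   # z = 2.5, reject at the 5% level
```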
Power at a true mean $\mu$:
$$\text{power}(\mu) = \Phi\!\left(-z_{1-\alpha/2} + \frac{\mu - \mu_0}{\sigma/\sqrt{n}}\right) + \Phi\!\left(-z_{1-\alpha/2} - \frac{\mu - \mu_0}{\sigma/\sqrt{n}}\right).$$
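The power formula as a reusable function (a sketch; it reproduces the 0.17 from the worked example with $\mu = 0.01$, $\mu_0 = 0$, $\sigma = 1$, $n = 10\,000$):

```python
from statistics import NormalDist

def power(mu, mu0, sigma, n, alpha=0.05):
    """Power of the two-sided z test at a given true mean mu."""
    Z = NormalDist()
    zc = Z.inv_cdf(1 - alpha / 2)
    shift = (mu - mu0) / (sigma / n**0.5)
    return Z.cdf(-zc + shift) + Z.cdf(-zc - shift)

print(round(power(0.01, 0, 1, 10_000), 3))   # about 0.17
```

At $\mu = \mu_0$ the function returns exactly $\alpha$, a handy sanity check.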
1. Likelihood: $L(\theta) = \prod_{i=1}^n f(x_i; \theta)$.
2. Log-likelihood: $\ell(\theta) = \ln L(\theta)$.
3. Score equation: $\dfrac{d\ell}{d\theta} = 0 \implies \hat\theta$.
4. Check the second derivative to confirm a maximum.
Fisher information:
$$I(\theta) = -E\!\left[\frac{d^2\ell}{d\theta^2}\right] \quad \text{(for one observation)}.$$
Asymptotic normality:
$$\sqrt{n}\,(\hat\theta - \theta) \xrightarrow{d} N\!\left(0,\ I(\theta)^{-1}\right) \iff \hat\theta \approx N\!\left(\theta,\ \frac{1}{n\,I(\theta)}\right).$$
Example: $f(x;\theta) = \theta x^{\theta-1}$ on $(0,1)$.
1. Log-likelihood: $\ell(\theta) = n\ln\theta + (\theta - 1)\sum_{i=1}^n \ln x_i$.
2. MLE:
$$\frac{d\ell}{d\theta} = 0 \implies \hat\theta = -\frac{n}{\sum_{i=1}^n \ln x_i}.$$
3. Fisher information:
$$\frac{d^2\ell}{d\theta^2} = -\frac{n}{\theta^2} \implies I(\theta) = \frac{n}{\theta^2}.$$
4. Asymptotic law:
$$\sqrt{n}\,(\hat\theta - \theta) \xrightarrow{d} N(0, \theta^2), \qquad \hat\theta \approx N\!\left(\theta,\ \frac{\theta^2}{n}\right).$$
Model: $Y_i = \beta x_i + \varepsilon_i$, $\varepsilon_i \sim N(0, \sigma^2)$.

(a) Least-squares estimator:
$$\hat\beta = \frac{\sum_{i=1}^n x_i y_i}{\sum_{i=1}^n x_i^2}.$$

(b) Distribution of $\hat\beta$:
$$\hat\beta \sim N\!\left(\beta,\ \frac{\sigma^2}{\sum x_i^2}\right).$$
Residuals and SSE:
$$\text{SSE} = \sum_{i=1}^n r_i^2, \qquad r_i = y_i - \hat\beta x_i.$$
Degrees of freedom: $n - 1$ (one fitted parameter).
Chi-square law:
$$\frac{\text{SSE}}{\sigma^2} \sim \chi^2_{n-1}.$$
Inverse Normal:
invNorm(p, μ, σ) → quantile zp .
Usage examples:
Φ(0.96) ≈ normalcdf(-1E99,0.96,0,1)
z0.975 ≈ invNorm(0.975,0,1)
Tip: Keep this sheet handy, and be sure you understand when to apply each formula
(test vs. estimation vs. regression). Good luck!
1. Confidence Intervals

Normal mean, σ known:
$$\bar X \pm z_{1-\alpha/2}\,\frac{\sigma}{\sqrt{n}}.$$

Regression slope:
$$\hat\beta \pm t_{1-\alpha/2,\,n-1}\,\frac{\hat\sigma}{\sqrt{\sum x_i^2}}, \qquad \hat\sigma^2 = \text{SSE}/(n-1).$$

Test of $H_0: \beta = \beta_0$:
$$T = \frac{\hat\beta - \beta_0}{\hat\sigma/\sqrt{\sum x_i^2}} \sim t_{n-1}.$$
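Putting the slope-test pieces together on a tiny made-up dataset (the xs and ys are illustrative, not from the notes):

```python
import math

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [0.9, 2.1, 2.9, 4.2, 5.1]        # hypothetical observations
beta0 = 0.0                            # H0: beta = 0

n = len(xs)
sxx = sum(x * x for x in xs)
beta_hat = sum(x * y for x, y in zip(xs, ys)) / sxx

sse = sum((y - beta_hat * x) ** 2 for x, y in zip(xs, ys))
sigma_hat = math.sqrt(sse / (n - 1))   # sigma-hat^2 = SSE/(n-1)

T = (beta_hat - beta0) / (sigma_hat / math.sqrt(sxx))
print(beta_hat, T)   # beta_hat is about 1.02; T is large, so reject H0
```

Compare $T$ against $t_{1-\alpha/2,\,n-1}$ (from a t table or calculator) to decide.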
For a one-sided test, use $z_{1-\alpha}$ (or $t_{1-\alpha,\,n-1}$) in place of the $\alpha/2$ quantile.
Linearity in parameters
Independence of errors
6. Degrees of Freedom
Tests/intervals with one parameter estimated “use up” one degree of freedom:
One‐sample t: df = n − 1
Always state your rejection region, and check you’ve got the right tails.
Keep this checklist in your head, and you’ll be ready to recognize which formula or
distribution applies. Good luck!