Hypothesis Test Power Calculation

This document serves as an exam cheat sheet covering key concepts and formulas related to statistical hypothesis testing, maximum likelihood estimation, Fisher information, and simple linear regression. It includes specific formulas for one-sample normal mean tests, MLE procedures, and the distribution of least-squares estimators. Additionally, it provides quick references for using the TI-84 calculator for normal CDF and inverse normal calculations.


First, under H₀,

X̄_n ∼ N(0, 1/n),    and n = 10 000  ⟹  σ_X̄ = 1/√n = 1/100.

To get a two-sided level-0.05 test we choose c so that

P(|X̄_n| ≥ c | μ = 0) = 0.05  ⟹  c = z_{0.975} · (1/100) ≈ 1.96 × 0.01 = 0.0196.
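If you'd rather check the critical value on a computer than a calculator, here is a minimal Python sketch (not part of the original solution; it assumes Python 3.8+ for `statistics.NormalDist`):

```python
from statistics import NormalDist

# Two-sided level-0.05 test of H0: mu = 0 with sigma_xbar = 1/sqrt(n)
n = 10_000
sigma_xbar = 1 / n**0.5          # = 0.01
z = NormalDist().inv_cdf(0.975)  # standard normal 97.5% quantile, ~1.95996
c = z * sigma_xbar
print(round(c, 4))               # 0.0196
```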

(a) Power at μ = 0.01

Under the true mean μ = 0.01,

X̄_n ∼ N(0.01, 0.0001),

so

P(|X̄_n| ≥ 0.0196) = P(X̄_n ≥ 0.0196) + P(X̄_n ≤ −0.0196).

Standardize with Z ∼ N(0, 1):

P(X̄_n ≥ 0.0196) = P(Z ≥ (0.0196 − 0.01)/0.01) = P(Z ≥ 0.96) ≈ 1 − Φ(0.96) = 0.1685,

P(X̄_n ≤ −0.0196) = P(Z ≤ (−0.0196 − 0.01)/0.01) = P(Z ≤ −2.96) ≈ Φ(−2.96) = 0.0015.

Hence

power at μ = 0.01 ≈ 0.1685 + 0.0015 = 0.17.

(b) Power at μ = 0.02

Now X̄_n ∼ N(0.02, 0.0001), so

P(X̄_n ≥ 0.0196) = P(Z ≥ (0.0196 − 0.02)/0.01) = P(Z ≥ −0.04) = 1 − Φ(−0.04) = Φ(0.04) ≈ 0.5160,

P(X̄_n ≤ −0.0196) = P(Z ≤ (−0.0196 − 0.02)/0.01) = P(Z ≤ −3.96) ≈ 0.00005,

so

power at μ = 0.02 ≈ 0.5160 + 0.00005 = 0.516.
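Both power calculations can be reproduced in a few lines of Python as a sanity check (this is an aside, not part of the original derivation):

```python
from statistics import NormalDist

Phi = NormalDist().cdf
c, sd = 0.0196, 0.01  # critical value and sd of the sample mean

def power(mu):
    # P(Xbar >= c) + P(Xbar <= -c) when the true mean is mu
    return (1 - Phi((c - mu) / sd)) + Phi((-c - mu) / sd)

print(round(power(0.01), 2))  # ~0.17
print(round(power(0.02), 3))  # ~0.516
```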

how do you do this: Φ(0.96), via calculator?

Most scientific calculators (and apps) let you evaluate the normal CDF via one of two routes:

1. Built-in "normalcdf" (or "Φ") function

On many graphing calculators (e.g. TI-83/84):

normalcdf(lower, upper, μ, σ)

So you'd do

normalcdf(-1E99, 0.96, 0, 1)

which returns Φ(0.96).

2. Via the error-function identity

If your calculator has an erf key, use

Φ(x) = ½ [1 + erf(x/√2)].

So for x = 0.96:

1. Compute t = 0.96/√2.
2. Compute erf(t).
3. Compute 0.5 [1 + erf(t)].

Either way you'll get approximately

Φ(0.96) ≈ 0.8315.
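The same erf identity works verbatim in Python's standard library, if you want to double-check calculator output:

```python
import math

def phi(x):
    # Phi(x) = (1/2) * (1 + erf(x / sqrt(2)))
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

print(round(phi(0.96), 4))  # 0.8315
```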

I have a ti84 for this exam, can I do it

Yes—on the TI-84 you can get Φ(0.96) with the built-in normalcdf function:

1. Hit 2nd VARS (that brings up the DISTR menu).

2. Choose 2:normalcdf(

3. You'll see on screen

normalcdf(lower,upper,μ,σ)

4. Enter

normalcdf(-1E99,0.96,0,1)

– use the (-) key, then 1, then 2nd [EE], then 9 9 to type "−1E99"; then comma, 0.96,0,1.

5. Press ENTER, and you'll get about

0.83147

which is Φ(0.96) ≈ 0.8315.

(Note: you can also use something like −10 or −9999 instead of −1E99 for a "very negative" lower bound.)

how would u find z.975 on a ti84

On the TI-84 you use the inverse normal function:

1. Press 2nd VARS (to open the DISTR menu).

2. Select 3:invNorm(

3. At the prompt enter

invNorm(0.975,0,1)

– where 0.975 is the cumulative area, and 0,1 are the mean and σ of the standard normal.

4. Press ENTER, and you'll get approximately

1.95996

So z_{0.975} ≈ 1.96.
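The Python equivalent of invNorm, useful for checking your calculator work, is `statistics.NormalDist.inv_cdf` (Python 3.8+):

```python
from statistics import NormalDist

# Quantile z_{0.975} of the standard normal, same as invNorm(0.975,0,1)
z = NormalDist(mu=0, sigma=1).inv_cdf(0.975)
print(round(z, 5))  # 1.95996
```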

Here are the answers step by step.

(a) MLE of θ

The likelihood for an i.i.d. sample X₁, …, X_n is

L(θ) = ∏_{i=1}^n [θ x_i^{θ−1}] = θ^n (∏_{i=1}^n x_i)^{θ−1}.

The log-likelihood is

ℓ(θ) = n ln θ + (θ − 1) ∑_{i=1}^n ln x_i.

Differentiating and setting to zero:

dℓ/dθ = n/θ + ∑_{i=1}^n ln x_i = 0  ⟹  θ̂ = −n / ∑_{i=1}^n ln x_i.

(b) Fisher information I(θ)

The second derivative of the log-likelihood is

d²ℓ/dθ² = −n/θ²,

so the Fisher information is

I(θ) = −E[d²ℓ/dθ²] = n/θ².

(c) Asymptotic normality

By standard MLE theory,

√n (θ̂ − θ) →d N(0, θ²),

or equivalently

θ̂ ≈ N(θ, θ²/n) for large n.
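A quick way to sanity-check both the MLE formula and its asymptotic spread is simulation. This Python sketch (not part of the original solution; the seed and θ = 3 are arbitrary choices) draws from f(x|θ) = θx^{θ−1} by inverse-CDF sampling:

```python
import math
import random

random.seed(0)
theta = 3.0
n = 50_000

# The CDF is F(x) = x^theta on (0,1), so inverse-CDF sampling gives X = U^(1/theta)
xs = [random.random() ** (1 / theta) for _ in range(n)]

# MLE derived above: theta_hat = -n / sum(ln x_i)
theta_hat = -n / sum(math.log(x) for x in xs)
print(theta_hat)  # close to 3.0; asymptotic sd is theta/sqrt(n), about 0.013 here
```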

Here's the solution, step by step:

(a) Least-squares estimator β̂

We minimize

S(β) = ∑_{i=1}^n (y_i − β x_i)²

w.r.t. β. Setting ∂S/∂β = 0 gives

−2 ∑_{i=1}^n x_i (y_i − β x_i) = 0  ⟹  β̂ = ∑_{i=1}^n x_i y_i / ∑_{i=1}^n x_i².

(b) Distribution of β̂

Under the model Y_i = β x_i + ε_i with ε_i iid ∼ N(0, σ²),

β̂ is a linear combination of the Y_i, hence Gaussian.

Its mean is

E[β̂] = ∑ x_i E[Y_i] / ∑ x_i² = ∑ x_i (β x_i) / ∑ x_i² = β.

Its variance is

Var(β̂) = ∑ x_i² Var(Y_i) / (∑ x_i²)² = σ² ∑ x_i² / (∑ x_i²)² = σ² / ∑_{i=1}^n x_i².

Thus

β̂ ∼ N(β, σ² / ∑ x_i²).

(c) Distribution of the sum of squared residuals

Define the residuals r_i = Y_i − β̂ x_i, and

SSE = ∑_{i=1}^n r_i².

In the zero-intercept model we've used one parameter (β), so there are n − 1 "free" directions in the data. One shows (by writing the projection matrix or via Cochran's theorem) that

SSE/σ² ∼ χ²_{n−1}.

(d) Unbiased estimator of σ²

Since E[SSE] = (n − 1) σ², an unbiased estimator is

σ̂² = SSE/(n − 1) = (1/(n − 1)) ∑_{i=1}^n (Y_i − β̂ x_i)².
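The whole zero-intercept fit (β̂, SSE, σ̂²) is a few lines of Python. This is a sketch on made-up data, not the exam's data; the true β = 2 and σ = 0.5 are arbitrary:

```python
import random

random.seed(1)
beta_true, sigma = 2.0, 0.5

# Hypothetical design points and responses for the zero-intercept model
xs = [i / 10 for i in range(1, 21)]
ys = [beta_true * x + random.gauss(0, sigma) for x in xs]

# Least-squares estimator: beta_hat = sum(x_i * y_i) / sum(x_i^2)
beta_hat = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# SSE and the unbiased variance estimate sigma_hat^2 = SSE / (n - 1)
sse = sum((y - beta_hat * x) ** 2 for x, y in zip(xs, ys))
sigma2_hat = sse / (len(xs) - 1)
print(beta_hat, sigma2_hat)  # beta_hat should land near 2.0
```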

Given all this, can you generalize the concepts here and
provide me a cheat sheet for an exam on this content



Exam Cheat Sheet: Key Concepts & Formulas

1. One-Sample Normal Mean Test (σ² Known)

Model:
X₁, …, X_n iid N(μ, σ²), with σ² known.

Test of
H₀: μ = μ₀ vs H₁: μ ≠ μ₀ at level α.

Test statistic:

Z = (X̄ − μ₀) / (σ/√n) ∼ N(0, 1) under H₀.

Rejection region:

|Z| ≥ z_{1−α/2}  ⟺  |X̄ − μ₀| ≥ z_{1−α/2} · σ/√n.

Critical value:
z_{1−α/2} = invNorm(1 − α/2, 0, 1).

Power function at true μ:

β(μ) = P_μ(|Z| ≥ z_{1−α/2}) = 1 − Φ(z_{1−α/2} − (μ − μ₀)/(σ/√n)) + Φ(−z_{1−α/2} − (μ − μ₀)/(σ/√n)).
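The general power function above translates directly into code. A small sketch (the default arguments are hypothetical, chosen to match the worked problem earlier with μ₀ = 0, σ = 1, n = 10 000):

```python
from statistics import NormalDist

N = NormalDist()

def power(mu, mu0=0.0, sigma=1.0, n=10_000, alpha=0.05):
    # Power of the two-sided z-test at true mean mu:
    # 1 - Phi(z - shift) + Phi(-z - shift), shift = (mu - mu0)/(sigma/sqrt(n))
    z = N.inv_cdf(1 - alpha / 2)
    shift = (mu - mu0) / (sigma / n**0.5)
    return 1 - N.cdf(z - shift) + N.cdf(-z - shift)

print(round(power(0.01), 2))  # matches part (a): ~0.17
```

Note that power(μ₀) returns exactly α, a handy consistency check.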

2. Maximum Likelihood Estimation & Fisher Information

General MLE recipe
1. Write likelihood:
L(θ) = ∏_{i=1}^n f(X_i | θ).
2. Log-likelihood:
ℓ(θ) = ln L(θ).
3. Score equation:
dℓ/dθ = 0 ⟹ θ̂.
4. Check second derivative for maximum.

Fisher information

I(θ) = −E[d²ℓ/dθ²] (for one sample).

For n iid draws, total information is n I(θ).

Asymptotic normality

√n (θ̂ − θ) →d N(0, I(θ)^{−1})  ⟺  θ̂ ≈ N(θ, 1/(n I(θ))).

3. Example: f(x | θ) = θ x^{θ−1}, 0 < x < 1

1. Log-likelihood:
ℓ(θ) = n ln θ + (θ − 1) ∑ ln x_i.
2. MLE:

dℓ/dθ = 0  ⟹  θ̂ = −n / ∑_{i=1}^n ln x_i.

3. Fisher information (total, for n draws):

d²ℓ/dθ² = −n/θ²  ⟹  I(θ) = n/θ².

4. Asymptotic law:

√n (θ̂ − θ) →d N(0, θ²),  θ̂ ≈ N(θ, θ²/n).

4. Simple Linear Regression (Zero Intercept)

Model:
Y_i = β x_i + ε_i,  ε_i iid ∼ N(0, σ²).

(a) Least-squares estimator

β̂ = argmin_β ∑_{i=1}^n (y_i − β x_i)² = ∑_{i=1}^n x_i y_i / ∑_{i=1}^n x_i².

(b) Distribution of β̂

β̂ ∼ N(β, σ² / ∑ x_i²).

(c) SSE and its distribution

Residual sum of squares:

SSE = ∑_{i=1}^n (Y_i − β̂ x_i)².

Degrees of freedom:
n − 1 (one fitted parameter).
Chi-square law:

SSE/σ² ∼ χ²_{n−1}.

(d) Unbiased σ² estimator

σ̂² = SSE/(n − 1) = (1/(n − 1)) ∑ (Y_i − β̂ x_i)².

5. Quick TI-84 Reference

Normal CDF:
normalcdf(lower, upper, μ, σ) → Φ-values.

Inverse Normal:
invNorm(p, μ, σ) → quantile z_p.

Usage examples:

Φ(0.96) ≈ normalcdf(-1E99,0.96,0,1)
z_{0.975} ≈ invNorm(0.975,0,1)

Tip: Keep this sheet handy, and be sure you understand when to apply each formula (test vs. estimation vs. regression). Good luck!

Anything else I should know?

A few final pointers you might see on the exam:

1. Confidence Intervals
Normal mean, σ known:

X̄ ± z_{1−α/2} · σ/√n

Regression slope, σ unknown:

β̂ ± t_{1−α/2, n−1} · σ̂/√(∑ x_i²)

where σ̂² = SSE/(n − 1).

2. Hypothesis Tests in Regression

Test H₀: β = β₀ using

T = (β̂ − β₀) / (σ̂/√(∑ x_i²)) ∼ t_{n−1}.

Remember the distinction between a two-sided test (compare |T| to t_{1−α/2, n−1}) and one-sided.
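Computing T is just arithmetic once you have β̂ and σ̂. A sketch on hypothetical data (the x and y values below are invented for illustration; the critical value 2.776 is t_{0.975,4}):

```python
import math

# Hypothetical data for testing H0: beta = 0 in the zero-intercept model
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.1, 9.8]
beta0 = 0.0

sxx = sum(x * x for x in xs)
beta_hat = sum(x * y for x, y in zip(xs, ys)) / sxx
sse = sum((y - beta_hat * x) ** 2 for x, y in zip(xs, ys))
sigma_hat = math.sqrt(sse / (len(xs) - 1))

# T = (beta_hat - beta0) / (sigma_hat / sqrt(sum x_i^2)) ~ t_{n-1} under H0
T = (beta_hat - beta0) / (sigma_hat / math.sqrt(sxx))
print(T)  # compare |T| to t_{1-alpha/2, n-1}; for alpha = 0.05, n - 1 = 4 that's ~2.776
```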

3. MLE Properties & CRLB

MLEs are consistent, asymptotically normal, and in regular problems they achieve the Cramér–Rao lower bound.

Check regularity conditions: support not depending on θ, differentiability, etc.

4. Model Assumptions & Diagnostics

The normal-errors assumption underpins exact small-sample inference; if it fails, the large-sample CLT may still rescue MLEs/LS estimators.

In regression, always check for

Linearity in parameters

Homoskedasticity (constant Var(ε_i))

Independence of errors

No perfect multicollinearity (here trivial with one regressor)

5. When σ² Is Unknown vs. Known

Known σ² → use Z (normal) inference.

Unknown σ² → replace σ with σ̂ and use the t distribution with appropriate df.

6. Degrees of Freedom
Tests/intervals with one parameter estimated "use up" one degree of freedom:

One-sample t: df = n − 1

Regression (zero-intercept) SSE: df = n − 1


7. Quick Reminders
On TI-84:

normalcdf and invNorm for Z;

tcdf and invT for t.

Always state your rejection region, and check you’ve got the right tails.

Keep this checklist in your head, and you’ll be ready to recognize which formula or
distribution applies. Good luck!
