
FACULTY OF ECONOMICS STUDY AIDS 2019

R300 – Advanced Econometric Methods

The Faculty Board has agreed to release outline solutions to the 2019
examinations as a study aid for exam revision. They are abridged solutions,
not 'definitive' ones, and should therefore not be regarded as exemplars of
'complete' answers.
Also note that the Faculty will not respond to any queries regarding these
solutions.
ADVANCED ECONOMETRIC METHODS
Midterm Exam
January 2019

The exam consists of 4 questions and lasts for 3 hours. You are required to
answer 3 out of 4 questions. The choice of which question not to answer is
entirely yours. Each question carries the same weight. Statistical tables are
attached. Be succinct in your answers.

1. Consider a random sample $x_1, \ldots, x_n$ from a normal distribution with mean θ and (known) variance 1.

(i) Propose a test for the null H0 : θ = 0 against the alternative H1 : θ < 0
that has size α for any sample size n.

(ii) Derive the power function of your test.

(iii) Show that your test is unbiased and consistent.

(iv) Construct a 1 − α confidence set by inverting your test.

(i) We know that, here, a likelihood-ratio test is uniformly most powerful.
The test amounts to comparing the statistic

$$ \frac{1}{\sqrt{n}} \sum_{i=1}^{n} x_i $$

to the quantiles of the standard normal distribution (which is the statistic's
distribution under the null). Moreover, with $z_\alpha$ the αth quantile of the
standard normal, we reject the null if $\frac{1}{\sqrt{n}} \sum_{i=1}^{n} x_i < z_\alpha$ and accept the null
if not.

(ii) The power against a given alternative θ is

$$ P_\theta\!\left( \frac{1}{\sqrt{n}} \sum_{i=1}^{n} x_i < z_\alpha \right) = \Phi\!\left( z_\alpha - \sqrt{n}\,\theta \right). $$

Verify that at θ = 0 we get $\Phi(z_\alpha) = \alpha$, so that our test is indeed size correct
for any n.
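As a quick sanity check on this power formula, here is a minimal Monte Carlo sketch (purely illustrative; it assumes α = 0.05, n = 25, and a few alternatives θ ≤ 0, none of which are specified in the question) comparing the simulated rejection frequency with $\Phi(z_\alpha - \sqrt{n}\,\theta)$:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
alpha, n, reps = 0.05, 25, 20_000
z_alpha = norm.ppf(alpha)  # alpha-th quantile of the standard normal

for theta in [0.0, -0.1, -0.3, -0.5]:
    x = rng.normal(loc=theta, scale=1.0, size=(reps, n))
    stat = x.sum(axis=1) / np.sqrt(n)               # (1/sqrt(n)) * sum_i x_i
    rejected = (stat < z_alpha).mean()              # simulated rejection frequency
    power = norm.cdf(z_alpha - np.sqrt(n) * theta)  # theoretical power
    print(f"theta={theta:+.1f}  simulated={rejected:.3f}  theory={power:.3f}")
```

At θ = 0 the simulated frequency should be close to α, and it should increase towards one as θ becomes more negative.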

(iii) Note that the power increases to one as θ moves from zero to −∞ and
does so monotonically. Hence, the test is unbiased. Further, as n → ∞
the power converges to one for each fixed alternative θ < 0. So, our test is
consistent.

(iv) We would not reject the null for any θ for which

$$ \frac{1}{\sqrt{n}} \sum_{i=1}^{n} (x_i - \theta) \ge z_\alpha. $$

Solving for θ gives

$$ \bar{x} - \frac{z_\alpha}{\sqrt{n}} \ge \theta, $$

yielding the (random) interval $\Theta = \left( -\infty, \; \bar{x} - \frac{z_\alpha}{\sqrt{n}} \right]$. We have

$$ P_0(0 \in \Theta) = P_0\!\left( \bar{x} - \frac{z_\alpha}{\sqrt{n}} \ge 0 \right) = 1 - P_0\!\left( \bar{x} - \frac{z_\alpha}{\sqrt{n}} < 0 \right) = 1 - \alpha. $$
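A short simulation sketch (again assuming α = 0.05 and n = 25 for illustration) confirms the 1 − α coverage of the one-sided set Θ under θ = 0:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
alpha, n, reps = 0.05, 25, 20_000
z_alpha = norm.ppf(alpha)

x = rng.normal(loc=0.0, scale=1.0, size=(reps, n))   # data generated under theta = 0
upper = x.mean(axis=1) - z_alpha / np.sqrt(n)        # upper end of (-inf, xbar - z_alpha/sqrt(n)]
coverage = (upper >= 0.0).mean()                     # fraction of sets containing theta = 0
print(f"empirical coverage: {coverage:.3f}  (target: {1 - alpha:.3f})")
```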

2. Suppose that

$$ y_i = x_i \theta + \varepsilon_i $$

where

$$ \varepsilon_i \mid x_i \sim \text{i.i.d.}\,(0, 1), \qquad x_i \ge \epsilon > 0, \qquad E(x_i^2) = Q < \infty, $$

and $\epsilon$ is some constant.

(i) Derive the limit distribution of the ordinary least-squares estimator of θ, say $\hat{\theta}$.

(ii) Consider the alternative estimator

$$ \check{\theta} = n^{-1} \sum_{i=1}^{n} (y_i / x_i). $$

Is this estimator consistent?


(iii) Does $\sqrt{n}(\check{\theta} - \theta)$ converge in distribution? If it does, state the limit distribution.

(iv) Which of the two estimators would you prefer, θ̂ or θ̌? Explain.

(v) Now suppose that

$$ \varepsilon_i \mid x_i \sim \text{i.i.d.}\,(0, x_i^2). $$

Does your answer to (iv) change and, if so, how? Explain.

(i) The least-squares estimator is

$$ \hat{\theta} = \frac{\sum_{i=1}^{n} y_i x_i}{\sum_{i=1}^{n} x_i^2}. $$

This is a ratio of sample averages. The variances of $x_i$ and $\varepsilon_i$ both exist and,
hence, so does the variance of $y_i$. Therefore,

$$ \hat{\theta} = \frac{n^{-1} \sum_{i=1}^{n} y_i x_i}{n^{-1} \sum_{i=1}^{n} x_i^2} = \theta + \frac{n^{-1} \sum_{i=1}^{n} x_i \varepsilon_i}{Q} + o_p(n^{-1/2}). $$

Moreover,

$$ \sqrt{n}(\hat{\theta} - \theta) = \frac{1}{\sqrt{n}} \sum_{i=1}^{n} \frac{x_i \varepsilon_i}{Q} + o_p(1) \xrightarrow{d} N(0, Q^{-1}), $$

where we have used that $E(x_i \varepsilon_i) = E(x_i E(\varepsilon_i \mid x_i)) = 0$ and $\mathrm{var}(x_i \varepsilon_i) = E(x_i^2 \varepsilon_i^2) = E(x_i^2 E(\varepsilon_i^2 \mid x_i)) = E(x_i^2) = Q$.
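To illustrate the result, a small simulation sketch follows; the design is an assumption made only for this example ($x_i$ uniform on [0.5, 1.5], so that $x_i$ is bounded away from zero and $Q = E(x_i^2) = 13/12$):

```python
import numpy as np

rng = np.random.default_rng(2)
theta, n, reps = 1.0, 500, 10_000

x = rng.uniform(0.5, 1.5, size=(reps, n))   # bounded away from zero, Q = E(x^2) = 13/12
eps = rng.normal(size=(reps, n))            # homoskedastic errors with variance 1
y = x * theta + eps

theta_hat = (y * x).sum(axis=1) / (x**2).sum(axis=1)   # OLS estimator, one per replication
z = np.sqrt(n) * (theta_hat - theta)

Q = 13.0 / 12.0
print(f"simulated variance of sqrt(n)(theta_hat - theta): {z.var():.3f}")
print(f"theoretical variance Q^(-1):                      {1.0 / Q:.3f}")
```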

(ii) The alternative estimator is a sample average of $y_i / x_i$. Note that

$$ E\!\left( \frac{y_i}{x_i} \right) = E\!\left( \frac{E(y_i \mid x_i)}{x_i} \right) = E\!\left( \frac{x_i \theta}{x_i} \right) = \theta. $$

A standard law of large numbers then ensures that, indeed, $\check{\theta} \xrightarrow{p} \theta$, so that
the alternative estimator is consistent.

(iii) By the law of total variance,

$$ \mathrm{var}\!\left( \frac{y_i}{x_i} \right) = \mathrm{var}\!\left( E\!\left( \frac{y_i}{x_i} \,\Big|\, x_i \right) \right) + E\!\left( \mathrm{var}\!\left( \frac{y_i}{x_i} \,\Big|\, x_i \right) \right) = E\!\left( \frac{\mathrm{var}(y_i \mid x_i)}{x_i^2} \right), $$

where the first term vanishes because $E(y_i/x_i \mid x_i) = \theta$ does not vary with $x_i$. The remaining term equals

$$ E\!\left( \frac{1}{x_i^2} \right) =: R \ \text{(say)}. $$

The limit distribution of the estimator is thus

$$ \sqrt{n}(\check{\theta} - \theta) \xrightarrow{d} N(0, R). $$

(iv) The Gauss-Markov theorem implies that $\check{\theta}$ cannot be better than $\hat{\theta}$ in the
sense of having a smaller variance. Indeed, by Cauchy-Schwarz, $1 = E(x_i \cdot x_i^{-1})^2 \le E(x_i^2)\, E(x_i^{-2})$, so that

$$ Q^{-1} = \frac{1}{E(x_i^2)} \le E\!\left( \frac{1}{x_i^2} \right) = R, $$

and so we should prefer the least-squares estimator here.

(v) Now the errors are heteroskedastic and the optimal estimator will, in
general, no longer be the least-squares estimator. The optimal estimator
based on the conditional moment condition

$$ E(y_i - x_i \theta \mid x_i) = 0 $$

is the solution to

$$ \sum_{i=1}^{n} \frac{y_i - x_i \theta}{x_i} = 0. $$

But this is $\check{\theta} = n^{-1} \sum_{i=1}^{n} (y_i / x_i)$.
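To illustrate parts (iv) and (v) together, the sketch below (same assumed uniform design for $x_i$ as in the earlier example) compares the sampling variances of $\hat{\theta}$ and $\check{\theta}$ under homoskedastic errors and under $\mathrm{var}(\varepsilon_i \mid x_i) = x_i^2$:

```python
import numpy as np

rng = np.random.default_rng(3)
theta, n, reps = 1.0, 500, 10_000
x = rng.uniform(0.5, 1.5, size=(reps, n))   # x_i bounded away from zero
u = rng.normal(size=(reps, n))              # standard normal shocks

for label, eps in [("homoskedastic, var = 1", u), ("heteroskedastic, var = x^2", x * u)]:
    y = x * theta + eps
    theta_hat = (y * x).sum(axis=1) / (x**2).sum(axis=1)   # least squares
    theta_check = (y / x).mean(axis=1)                     # alternative estimator
    print(f"{label}:  var(theta_hat) = {theta_hat.var():.2e}  "
          f"var(theta_check) = {theta_check.var():.2e}")
```

Under homoskedasticity the least-squares estimator should show the smaller variance; under the conditional variance $x_i^2$ the ordering should reverse.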

3. Consider the setting

$$ y_i = x_i \alpha + z_i \beta + \varepsilon_i, \qquad E(x_i \varepsilon_i) \ne 0, \qquad E(\varepsilon_i \mid z_i) = 0, \qquad \beta \ne 0. $$

(i) Propose moment conditions to estimate the parameters (α, β).

(ii) Give sufficient conditions for your moment conditions to point identify
(α, β).

(iii) Derive the limit distribution of your proposed estimator.

(iv) How would you go about testing your specification? That is, supposing
you doubt whether E(εi |zi ) = 0, how would you test one of its implications?

(i) We have conditional moments

$$ E(\varepsilon_i \mid z_i) = E(y_i - x_i \alpha - z_i \beta \mid z_i) = 0. $$

This implies unconditional moments of the generic form

$$ E(\varphi(z_i)\, \varepsilon_i) = 0 $$

for chosen functions $\varphi$. We need at least two of these functions, as we have
two unknown parameters to estimate. The optimal unconditional moment
condition here is complicated, in general. When $z_i \in \{0, 1\}$, though, it
amounts to using

$$ E(\mathbf{1}\{z_i = 0\}(y_i - x_i \alpha - z_i \beta)) = 0, \qquad E(\mathbf{1}\{z_i = 1\}(y_i - x_i \alpha - z_i \beta)) = 0. $$

In this case, provided that $P(z_i = 1) \in (0, 1)$ and that $E(x_i \mid z_i = 0) \ne 0$, we have

$$ \alpha = \frac{E(y_i \mid z_i = 0)}{E(x_i \mid z_i = 0)}, \qquad \beta = E(y_i \mid z_i = 1) - E(x_i \mid z_i = 1)\,\alpha. $$

More generally, one simple set of unconditional moments would be

$$ E(z_i (y_i - x_i \alpha - z_i \beta)) = 0, \qquad E(z_i^2 (y_i - x_i \alpha - z_i \beta)) = 0. $$
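As a sketch of how these last two moments can be used in practice, the following code solves their empirical counterparts for (α, β); the data-generating process ($x_i$ endogenous through a common shock $v_i$, $z_i$ a valid instrument) is an assumption made purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
n, alpha, beta = 5_000, 0.5, 1.0

# Assumed DGP for illustration: z exogenous, x endogenous via the common shock v.
z = rng.normal(size=n)
v = rng.normal(size=n)
x = 1.0 + z + z**2 + v          # correlated with the error through v
eps = v + rng.normal(size=n)    # E(eps | z) = 0 but E(x * eps) != 0
y = x * alpha + z * beta + eps

# Empirical counterparts of E(z * eps) = 0 and E(z^2 * eps) = 0:
# a 2x2 linear system in (alpha, beta).
A = np.array([[np.mean(z * x),    np.mean(z**2)],
              [np.mean(z**2 * x), np.mean(z**3)]])
b = np.array([np.mean(z * y), np.mean(z**2 * y)])
alpha_hat, beta_hat = np.linalg.solve(A, b)

ols = np.linalg.lstsq(np.column_stack([x, z]), y, rcond=None)[0]  # inconsistent benchmark
print("method-of-moments estimates:", alpha_hat, beta_hat)
print("OLS (inconsistent here):    ", ols)
```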

(ii) The Jacobian matrix is

$$ \Sigma = -E \begin{pmatrix} z_i x_i & z_i^2 \\ z_i^2 x_i & z_i^3 \end{pmatrix} $$

and (as the moments are linear in the parameters) identification boils down to
this matrix having maximal rank. Equivalently, its determinant,

$$ E(z_i^2 x_i)\, E(z_i^2) - E(z_i x_i)\, E(z_i^3), $$

should be non-zero. Sufficient conditions for this are easy to derive. They
involve the regression coefficients of $x_i$ on $z_i$ and of $x_i$ on $z_i^2$. A necessary
condition is that both of these slopes are non-zero.
(iii) This is a standard method-of-moments estimator whose limit distribution
is normal with zero mean and variance matrix

$$ \Sigma^{-1} \Omega\, (\Sigma^{-1})', \qquad \Omega = E \begin{pmatrix} z_i^2 \varepsilon_i^2 & z_i^3 \varepsilon_i^2 \\ z_i^3 \varepsilon_i^2 & z_i^4 \varepsilon_i^2 \end{pmatrix}. $$
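A plug-in version of this sandwich variance, re-using the same assumed illustrative DGP as in the estimation sketch above, might look as follows; Σ and Ω are replaced by sample analogues evaluated at the estimated residuals:

```python
import numpy as np

rng = np.random.default_rng(4)                 # same illustrative sample as before
n, alpha, beta = 5_000, 0.5, 1.0
z = rng.normal(size=n); v = rng.normal(size=n)
x = 1.0 + z + z**2 + v
y = x * alpha + z * beta + (v + rng.normal(size=n))

A = np.array([[np.mean(z * x),    np.mean(z**2)],
              [np.mean(z**2 * x), np.mean(z**3)]])
b = np.array([np.mean(z * y), np.mean(z**2 * y)])
alpha_hat, beta_hat = np.linalg.solve(A, b)

# Plug-in sandwich: Sigma-hat = -A, Omega-hat built from the residuals.
e = y - x * alpha_hat - z * beta_hat
Omega = np.array([[np.mean(z**2 * e**2), np.mean(z**3 * e**2)],
                  [np.mean(z**3 * e**2), np.mean(z**4 * e**2)]])
Sigma_inv = np.linalg.inv(-A)
avar = Sigma_inv @ Omega @ Sigma_inv.T          # asymptotic variance of sqrt(n)*(estimate - truth)
se = np.sqrt(np.diag(avar) / n)                 # standard errors for (alpha_hat, beta_hat)
print("estimates:      ", alpha_hat, beta_hat)
print("standard errors:", se)
```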

(iv) We can use an overidentification test. To do so, first select a set of
unconditional moment conditions implied by $E(\varepsilon_i \mid z_i) = 0$ (remember that you
need strictly more moments than the number of parameters to estimate;
that is, three or more here). Next, set up the optimal GMM criterion based
on the chosen moments and evaluate it at its minimizer. If $E(\varepsilon_i \mid z_i) = 0$ this
statistic should behave like a chi-squared random variable (with degrees of
freedom equal to the number of moments minus the number of parameters).
If it is large relative to this benchmark you have found evidence against the
moment conditions being valid.
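Sketched below is one concrete way to carry this out, again under the assumed illustrative DGP from above, using the three instruments $z_i$, $z_i^2$, $z_i^3$ (so one overidentifying restriction) and a standard two-step GMM J statistic:

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(5)
n, alpha, beta = 5_000, 0.5, 1.0
z = rng.normal(size=n); v = rng.normal(size=n)
x = 1.0 + z + z**2 + v
y = x * alpha + z * beta + (v + rng.normal(size=n))

X = np.column_stack([x, z])            # regressors (2 parameters)
W = np.column_stack([z, z**2, z**3])   # 3 instruments -> 1 overidentifying restriction

def gmm(V):
    """Linear GMM estimate for weight matrix V on the moments W'(y - X g) / n."""
    XW, Wy = X.T @ W, W.T @ y
    return np.linalg.solve(XW @ V @ XW.T, XW @ V @ Wy)

g1 = gmm(np.linalg.inv(W.T @ W / n))        # first step (2SLS-type weight)
e1 = y - X @ g1
Omega = (W * e1[:, None]).T @ (W * e1[:, None]) / n   # Omega-hat from first-step residuals
g2 = gmm(np.linalg.inv(Omega))              # second step: optimal GMM
gbar = W.T @ (y - X @ g2) / n
J = n * gbar @ np.linalg.inv(Omega) @ gbar  # overidentification (J) statistic
print("two-step GMM estimates:", g2)
print(f"J = {J:.2f},  chi-squared(1) 95% critical value = {chi2.ppf(0.95, df=1):.2f}")
```

Because the assumed DGP satisfies $E(\varepsilon_i \mid z_i) = 0$, the statistic should typically fall below the critical value; a large value would signal a violated moment condition.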

4. Are the following statements right or wrong? Justify your answers.

(i) If the sequence $\{x_i\}$ is stationary then $\lim_{h \to \infty} \mathrm{cov}(x_i, x_{i-h}) = 0$.

(ii) Consider the model

$$ y_i = x_i \theta + \varepsilon_i, $$

where $E(\varepsilon_i \mid x_1, \ldots, x_n) = 0$ and

$$ \varepsilon_i = \varepsilon_{i-1} + \eta_i $$

for $\eta_i$ independent and identically distributed with zero mean and finite
variance. We should not estimate θ from a regression of $y_i$ on $x_i$ but, rather,
from a regression of $\Delta y_i = y_i - y_{i-1}$ on $\Delta x_i = x_i - x_{i-1}$.
(iii) Suppose that $y_i = x_i \beta + \varepsilon_i$ for $\varepsilon_i \sim N(0, \sigma^2)$. We do not observe $y_i$ but only

$$ y_i^* = \begin{cases} y_i & \text{if } y_i \ge 0 \\ 0 & \text{otherwise;} \end{cases} $$

the covariate $x_i$ is always observed. To estimate β we should remove all observations with $y_i^* = 0$ from the sample and run a regression of the remaining $y_i^*$ on $x_i$.

(iv) Consider the linear model $y_i = x_i \beta + \varepsilon_i$ for a random sample of size
n. Assume that $E(\varepsilon_i) = 0$ and that $\sum_{i=1}^{n} x_i^2 > 0$. For unbiasedness of the
least-squares estimator it suffices that $E(x_i \varepsilon_i) = 0$.

(v) The least-squares estimator of β in $y_i = x_i \beta + \varepsilon_i$ is inconsistent if $\varepsilon_i$ has
both mean and variance equal to zero.

(i) Incorrect. The statement relates to mixing/ergodicity, not to stationarity.


Stationarity only implies that cov(xi , xi−h ) is independent of i.

(ii) Correct. Note that εi is a unit-root process and so the levels data will
be non-stationary. In first differences we have ∆yi = ∆xi θ + ηi where the
error is stationary. Note that we should still ensure that ∆xi is stationary
and that var(∆xi ) > 0.
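A small simulation sketch (assuming, for illustration only, an i.i.d. standard normal regressor) makes the point: the estimator from the levels regression does not concentrate around θ as n grows, while the estimator from the differenced regression does:

```python
import numpy as np

rng = np.random.default_rng(6)
theta, reps = 1.0, 500

for n in (100, 1_000, 10_000):
    x = rng.normal(size=(reps, n))
    eps = rng.normal(size=(reps, n)).cumsum(axis=1)      # random-walk (unit-root) errors
    y = x * theta + eps
    levels = (y * x).sum(axis=1) / (x**2).sum(axis=1)    # regression in levels
    dy, dx = np.diff(y, axis=1), np.diff(x, axis=1)
    diffs = (dy * dx).sum(axis=1) / (dx**2).sum(axis=1)  # regression in first differences
    print(f"n={n:>6}: sd(levels estimator)={levels.std():.3f}  "
          f"sd(differenced estimator)={diffs.std():.3f}")
```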

(iii) Incorrect. The censoring destroys linearity of the conditional expecta-


tion and so a linear regression model for the subsample will be misspecified.
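For concreteness, a minimal simulation sketch (assuming a standard normal regressor, β = 1, and no intercept, all illustrative choices) shows the bias induced by discarding the censored observations:

```python
import numpy as np

rng = np.random.default_rng(7)
beta, n = 1.0, 100_000

x = rng.normal(size=n)
y = x * beta + rng.normal(size=n)        # latent outcome
y_star = np.where(y >= 0, y, 0.0)        # censored observation

keep = y_star > 0                        # drop the censored observations
b_trunc = (y_star[keep] * x[keep]).sum() / (x[keep]**2).sum()
print(f"OLS on the truncated subsample: {b_trunc:.3f}  (true beta = {beta})")
```

The truncated-sample estimate is noticeably biased, reflecting that the conditional mean of the retained observations is no longer linear in $x_i$.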

(iv) Incorrect. This suffices for consistency but not for unbiasedness.

(v) Incorrect. In this case we would have the deterministic problem yi = xi β.


So the least-squares estimator is exactly equal to β without any sampling
error.

