ECON0019 Lecture8 Slides
Econometrics
Lecture 8: Heteroskedasticity, proxy variables, and
measurement errors
yi = β0 + β1 xi + ui
$$
\mathrm{Var}(\hat\beta_1 \mid X_n)
= \mathrm{Var}\!\left(\beta_1 + \frac{\frac{1}{n}\sum_{i=1}^n (x_i - \bar{x})\,u_i}{\hat\sigma_x^2} \,\Big|\, X_n\right)
= \frac{1}{\hat\sigma_x^4\, n^2}\,\mathrm{Var}\!\left(\sum_{i=1}^n (x_i - \bar{x})\,u_i \,\Big|\, X_n\right)
$$
$$
= \frac{1}{\hat\sigma_x^4\, n^2}\sum_{i=1}^n (x_i - \bar{x})^2\,\mathrm{Var}(u_i \mid X_n)
= \frac{1}{\hat\sigma_x^4\, n^2}\sum_{i=1}^n (x_i - \bar{x})^2\,\sigma_i^2 .
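Replacing the unknown $\sigma_i^2$ in the last expression with the squared OLS residuals $\hat u_i^2$ gives White's heteroskedasticity-robust variance estimator. A minimal numerical check (the data-generating process and sample size below are made up for illustration, not taken from the lecture): for a simple regression this plug-in formula coincides with the HC0 covariance reported by statsmodels.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
u = rng.normal(size=n) * np.sqrt(0.5 + x**2)   # heteroskedastic: Var(u|x) = 0.5 + x^2
y = 1.0 + 2.0 * x + u

ols = sm.OLS(y, sm.add_constant(x)).fit()
uhat = ols.resid

# Plug-in version of the slide formula, with sigma_i^2 replaced by uhat_i^2
sig2x_hat = np.mean((x - x.mean())**2)
var_slope = np.sum((x - x.mean())**2 * uhat**2) / (n**2 * sig2x_hat**2)

# Should match the (slope, slope) element of statsmodels' White/HC0 covariance
print(var_slope, ols.cov_HC0[1, 1])
```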
$$
\hat\beta_1 = \beta_1 + \frac{\frac{1}{n}\sum_{i=1}^n (x_i - \bar{x})\,u_i}{\hat\sigma_x^2},
$$
where, by the LLN and CLT as $n \to \infty$, $\hat\sigma_x^2 \to_p \sigma_x^2$ and
$$
\frac{1}{n}\sum_{i=1}^n (x_i - \bar{x})\,u_i \;\overset{a}{\sim}\; N\!\left(0,\; \frac{E[(x_i - \mu_x)^2 u_i^2]}{n}\right),
$$
so that
$$
\hat\beta_1 \;\overset{a}{\sim}\; N\!\left(\beta_1,\; \frac{E[(x_i - \mu_x)^2 u_i^2]}{\sigma_x^4\, n}\right).
$$
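A small Monte Carlo sketch of this approximation (the distribution of $x$, the skedastic function, $n$ and the number of replications below are illustrative assumptions, not from the lecture): with $x \sim N(0,1)$ and $\mathrm{Var}(u \mid x) = 0.5 + x^2$, the formula gives $E[x^2(0.5 + x^2)]/n = 3.5/n$, which the simulated sampling variance of $\hat\beta_1$ should be close to.

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 1000, 5000
beta0, beta1 = 1.0, 2.0

b1_hats = np.empty(reps)
for r in range(reps):
    x = rng.normal(size=n)
    u = rng.normal(size=n) * np.sqrt(0.5 + x**2)            # Var(u|x) = 0.5 + x^2
    y = beta0 + beta1 * x + u
    b1_hats[r] = np.cov(x, y, bias=True)[0, 1] / np.var(x)  # OLS slope

# Simulated variance of beta1_hat vs. asymptotic formula E[(x - mu_x)^2 u^2] / (sigma_x^4 n) = 3.5 / n
print(np.var(b1_hats), 3.5 / n)
```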
y = β0 + β1 x1 + . . . + βk xk + u
$$
t_{\hat\beta_j} = \frac{\hat\beta_j - \beta_j}{se_{HR}(\hat\beta_j)} \;\overset{a}{\sim}\; N(0,1).
$$
The usual critical values are used – the only difference is that we are using a different estimator of std(β̂j).
Robust confidence intervals are obtained by using seHR(β̂j) in place of seHOM(β̂j).
Also possible to derive robust F -tests (although we will not go into
derivations here)
Note that robust inference is justified ONLY when sample size is
sufficiently large (all results are asymptotic)
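In practice (a sketch, not part of the slides; the simulated data and parameter values below are made up), statsmodels computes $se_{HR}(\hat\beta_j)$, robust t statistics, robust confidence intervals and a robust Wald/F test once a heteroskedasticity-consistent cov_type is passed to fit():

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 400
x1, x2 = rng.normal(size=n), rng.normal(size=n)
u = rng.normal(size=n) * np.exp(0.5 * x1)          # heteroskedastic errors
y = 1.0 + 0.5 * x1 + 0.0 * x2 + u

X = sm.add_constant(np.column_stack([x1, x2]))
res = sm.OLS(y, X).fit(cov_type="HC1")             # HC1-robust covariance

print(res.bse)          # robust standard errors se_HR(beta_j)
print(res.tvalues)      # robust t statistics (H0: beta_j = 0)
print(res.conf_int())   # robust confidence intervals

# Robust (Wald-type) F test of beta_1 = beta_2 = 0
R = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
print(res.f_test(R))
```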
$$
\tilde y_i = \beta_0 \tilde x_{i0} + \beta_1 \tilde x_{i1} + \tilde u_i,
$$
where $\tilde y_i = y_i/\sigma_i$, $\tilde x_{i0} = 1/\sigma_i$, $\tilde x_{i1} = x_{i1}/\sigma_i$ and $\tilde u_i = u_i/\sigma_i$, so that
$$
\mathrm{Var}(\tilde u_i \mid x_i) = \mathrm{Var}\!\left(\frac{u_i}{\sigma_i} \,\Big|\, x_i\right) = \frac{\mathrm{Var}(u_i \mid x_i)}{\sigma_i^2} = \frac{\sigma_i^2}{\sigma_i^2} = 1.
$$
Infeasible GLS estimator of β0 , β1 : Run OLS in this transformed SLR.
Since the error ũi in this transformed version is homoskedastic, the GLS
estimators are BLUE and we don’t need to adjust standard errors.
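A minimal sketch of this step, assuming (purely for illustration) that the conditional standard deviation $\sigma_i$ is known: divide $y_i$, the constant and $x_{i1}$ by $\sigma_i$ and run OLS on the transformed variables, so the usual non-robust standard errors are valid.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 500
x = rng.uniform(1.0, 5.0, size=n)
sigma_i = 0.5 * x                          # pretend sd(u_i | x_i) = 0.5 x_i is known
u = sigma_i * rng.normal(size=n)
y = 1.0 + 2.0 * x + u

# Transformed (homoskedastic) regression: divide everything by sigma_i
y_t = y / sigma_i
X_t = np.column_stack([1.0 / sigma_i, x / sigma_i])   # transformed constant and regressor
gls = sm.OLS(y_t, X_t).fit()                          # no extra constant added
print(gls.params, gls.bse)                            # usual standard errors are valid here
```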
The above assumed we know σ²(x) – wholly unrealistic. Instead use the
feasible GLS estimator:
1. Use OLS to obtain $y_i = \hat\beta_0 + \hat\beta_1 x_{i1} + \ldots + \hat\beta_k x_{ik} + \hat u_i$, i.e. the residuals $\hat u_i$.
2. Estimate $\sigma^2(x)$. E.g., if $\sigma^2(x) = \alpha_0 + \alpha_1 x^2$, then $\alpha_0$ and $\alpha_1$ can be estimated by the OLS regression $\hat u_i^2 = \hat\alpha_0 + \hat\alpha_1 x_i^2 + \hat e_i$, where $\hat e_i$ is the regression error.
3. Estimate $\hat\sigma_i = \sqrt{\hat\alpha_0 + \hat\alpha_1 x_i^2}$ and run OLS in
$$
\hat y_i = \beta_0 \hat x_{i0} + \beta_1 \hat x_{i1} + u_i,
$$
where $\hat y_i = y_i/\hat\sigma_i$, $\hat x_{i0} = 1/\hat\sigma_i$, and $\hat x_{i1} = x_{i1}/\hat\sigma_i$.
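A sketch of these three steps in code (the data-generating process and the guard against non-positive fitted variances are illustrative assumptions; step 3 is implemented via weighted least squares with weights $1/\hat\sigma_i^2$, which is equivalent to OLS on the transformed variables):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 1000
x = rng.normal(size=n)
sig2 = 0.5 + 1.5 * x**2                        # true skedastic function (unknown to the researcher)
y = 1.0 + 2.0 * x + np.sqrt(sig2) * rng.normal(size=n)

# Step 1: OLS to obtain the residuals u_hat
X = sm.add_constant(x)
uhat = sm.OLS(y, X).fit().resid

# Step 2: estimate sigma^2(x) = alpha_0 + alpha_1 x^2 by regressing u_hat^2 on x^2
Z = sm.add_constant(x**2)
alpha_hat = sm.OLS(uhat**2, Z).fit().params
sig2_hat = np.clip(Z @ alpha_hat, 1e-3, None)  # keep fitted variances strictly positive

# Step 3: WLS with weights 1/sigma_hat_i^2 (same as OLS on the transformed variables)
fgls = sm.WLS(y, X, weights=1.0 / sig2_hat).fit()
print(fgls.params, fgls.bse)
```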
Generalised Least Squares - summary
$$
y = \beta_0 + \beta_1 x_1 + \beta_2 x_2^* + u,
$$
where $x_2^*$ is unobserved and $x_2$ is an observed proxy for it. Will the OLS estimators $\hat\beta_1$ and $\hat\beta_2$ from regressing $y$ on $x_1$ and $x_2$ provide meaningful estimates of $\beta_1$ and $\beta_2$?
$$
x_2^* = \delta_0 + \delta_1 x_2 + v, \qquad E[v \mid x_2] = 0 \qquad (1)
$$
Substituting in $x_2^* = \delta_0 + \delta_1 x_2 + v$,
$$
y = \beta_0 + \beta_1 x_1 + \beta_2 x_2^* + u
= (\beta_0 + \beta_2 \delta_0) + \beta_1 x_1 + \beta_2 \delta_1 x_2 + \tilde u, \qquad \tilde u = \beta_2 v + u.
$$
If we can verify that E [ũ jx1 , x2 ] = 0 then MLR.1-MLR.4 will hold for
the above regression, and we can conclude that
$$
\hat\beta_1 \to_p \beta_1, \qquad \hat\beta_2 \to_p \beta_2 \delta_1 .
$$
But
$$
E[\tilde u \mid x_1, x_2] = \beta_2 E[v \mid x_1, x_2] + E[u \mid x_1, x_2],
$$
where
$$
E[v \mid x_1, x_2] \overset{\text{P.2, P.3}}{=} E[v \mid x_2] = 0, \qquad
E[u \mid x_1, x_2] \overset{\text{P.1 + MLR.4}}{=} 0.
$$
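An illustrative simulation of this plug-in solution (all parameter values and distributions below are made up): when the data are generated so that the conditions above hold, the OLS coefficient on $x_1$ settles near $\beta_1$ while the coefficient on the proxy $x_2$ settles near $\beta_2 \delta_1$, as the probability limits state.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 200_000                           # large n, so estimates are close to their plims
beta0, beta1, beta2 = 1.0, 0.5, 2.0
delta0, delta1 = 0.3, 0.8

x1 = rng.normal(size=n)
x2 = rng.normal(size=n)               # observed proxy
v = rng.normal(size=n)                # proxy error, independent of (x1, x2)
x2_star = delta0 + delta1 * x2 + v    # unobserved variable
u = rng.normal(size=n)                # structural error, independent of everything else
y = beta0 + beta1 * x1 + beta2 * x2_star + u

# Regress y on the observables: x1 and the proxy x2
res = sm.OLS(y, sm.add_constant(np.column_stack([x1, x2]))).fit()
print(res.params[1], beta1)           # approx. beta1
print(res.params[2], beta2 * delta1)  # approx. beta2 * delta1
```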
$$
y = \beta_0 + \beta_1 x + \tilde u, \qquad \tilde u = u - \beta_1 e,
$$
where, using $\mathrm{Cov}(u, x) = 0$ and $\mathrm{Cov}(e, x^*) = 0$ (the classical errors-in-variables assumption for $x = x^* + e$),
$$
\mathrm{Cov}(\tilde u, x) = \mathrm{Cov}(u, x) - \beta_1 \mathrm{Cov}(e, x) = -\beta_1 \mathrm{Var}(e) \neq 0.
$$
This implies that MLR.4 fails in the above regression and so OLS will
be biased and inconsistent:
$$
\hat\beta_1 = \beta_1 + \frac{\frac{1}{n}\sum_{i=1}^n (x_i - \bar{x})\,\tilde u_i}{\frac{1}{n}\sum_{i=1}^n (x_i - \bar{x})^2}
\;\to_p\; \beta_1 + \frac{\mathrm{Cov}(\tilde u, x)}{\mathrm{Var}(x)}
= \beta_1 \left(1 - \frac{\mathrm{Var}(e)}{\mathrm{Var}(x)}\right).
$$
So OLS will suffer from a bias towards zero – we will tend to
underestimate (in absolute value) the actual effect of x. This is referred to as
attenuation bias.
This result can be generalised to MLR, but size and sign of biases are
less clear-cut.
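A short simulation of attenuation bias under classical measurement error (the parameter values and error variance below are illustrative only): the OLS slope from regressing $y$ on the mismeasured $x = x^* + e$ settles near $\beta_1\bigl(1 - \mathrm{Var}(e)/\mathrm{Var}(x)\bigr)$, strictly below the true $\beta_1$.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 200_000
beta0, beta1 = 1.0, 2.0

x_star = rng.normal(size=n)                 # true (unobserved) regressor
e = rng.normal(scale=0.7, size=n)           # classical measurement error, Var(e) = 0.49
x = x_star + e                              # mismeasured regressor we actually observe
y = beta0 + beta1 * x_star + rng.normal(size=n)

b1_hat = np.cov(x, y, bias=True)[0, 1] / np.var(x)   # OLS slope of y on x
plim = beta1 * (1 - np.var(e) / np.var(x))            # beta1 (1 - Var(e)/Var(x))
print(b1_hat, plim)
```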
Summary of today’s lecture