Econ-2042 - Unit 6-W12-13
Definition (6.2)
An estimator of a parameter $\theta$ is a statistic $\hat{\theta} = t(x_1, x_2, \ldots, x_n)$ used to estimate $\theta$.
F. Guta (CoBE) Econ 2042 March, 2024 6 / 77
Example (6.1)
$\bar{x} = \frac{x_1 + x_2 + \cdots + x_n}{n}$ is proposed as an estimator for the parameter $\mu$, the population mean.
Definition (6.3)
An estimate of a parameter $\theta$ is the specific value of an estimator $\hat{\theta}$.
Example (6.2)
If we have $x_1 = 10$, $x_2 = 9.5$, $x_3 = 11.2$, we have an estimate of $\mu$ as follows: $\bar{x} = \frac{10 + 9.5 + 11.2}{3} \approx 10.23$. Such an estimate is called a point estimate.
Assume that the values $x_1, x_2, \ldots, x_n$ of a random sample $X_1, X_2, \ldots, X_n$ from $f(\cdot\,; \theta)$ can be observed. On the basis of the observed sample values $x_1, x_2, \ldots, x_n$ it is desired to estimate the value of the unknown parameter $\theta$, or the value of some function, say $\tau(\theta)$, of the unknown parameter.
This estimation can be made in two ways.
The first, called point estimation, is to let the value of some statistic, say $t(x_1, x_2, \ldots, x_n)$, represent, or estimate, the unknown $\tau(\theta)$; such a statistic is called a point estimator.
Definition (6.5)
The bias of an estimator $\hat{\theta}$, denoted by $B(\hat{\theta})$, is defined as $B(\hat{\theta}) = E(\hat{\theta}) - \theta$, where $\hat{\theta}$ is a statistic used to estimate the parameter $\theta$.
$$E(\hat{\mu}_2) = \frac{1}{n}\sum_{i=1}^{n}\sigma^2 - \sigma^2_{\bar{x}} = \sigma^2 - \frac{\sigma^2}{n} = \left(1 - \frac{1}{n}\right)\sigma^2$$
$$\Rightarrow B(\hat{\mu}_2) = E(\hat{\mu}_2) - \sigma^2 = \left(1 - \frac{1}{n}\right)\sigma^2 - \sigma^2 = -\frac{\sigma^2}{n}$$
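The bias $B(\hat{\mu}_2) = -\sigma^2/n$ of the $\frac{1}{n}$-divisor variance estimator can be checked by simulation. A minimal sketch, not part of the lecture notes; the values of $n$, $\sigma^2$, and the seed are illustrative assumptions:

```python
# Simulate E(mu2_hat) for mu2_hat = (1/n) * sum (x_i - x_bar)^2
# and compare with the derived bias -sigma^2/n (illustrative values).
import random

random.seed(42)
n = 5
sigma2 = 4.0            # assumed true variance
reps = 100_000

total = 0.0
for _ in range(reps):
    x = [random.gauss(0.0, sigma2 ** 0.5) for _ in range(n)]
    xbar = sum(x) / n
    total += sum((xi - xbar) ** 2 for xi in x) / n   # divide by n, not n - 1

e_mu2 = total / reps
bias = e_mu2 - sigma2
print(round(bias, 2))    # should be close to -sigma2/n = -0.8
```

With $n = 5$ and $\sigma^2 = 4$, the simulated bias settles near $-0.8$, matching $-\sigma^2/n$.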
Definition (6.6)
Let $\hat{\theta}$ be an estimator of $\theta$. $E\big[(\hat{\theta} - \theta)^2\big]$ is defined to be the MSE of the estimator $\hat{\theta}$, denoted by $MSE(\hat{\theta})$.
Let $\theta$ be the parameter of interest to be estimated, and $h(x_1, x_2, \ldots, x_n)$ be the estimator of $\theta$, i.e., $\hat{\theta} = h(x_1, x_2, \ldots, x_n)$. Then $\hat{\theta} - \theta$ is the sampling error, and $E\big[(\hat{\theta} - \theta)^2\big]$ is the MSE of $\hat{\theta}$.
Example (6.4)
Given a random sample $x_1, x_2, \ldots, x_n$, let $\theta$ be the population mean, i.e., the population parameter of interest. Then,
$$\hat{\theta}_1 = \bar{X}, \qquad \hat{\theta}_2 = \frac{X_1 + X_2}{2},$$
$$\hat{\theta}_3 = \frac{X_1 + X_3 + X_5 + \cdots + X_{2m+1}}{m+1}, \qquad \hat{\theta}_4 = \frac{X_2 + X_4 + X_6 + \cdots + X_{2m}}{m}.$$
Proof.
$$E\big[(\hat{\theta} - \theta)^2\big] = E\big[\big(\hat{\theta} - E(\hat{\theta}) + E(\hat{\theta}) - \theta\big)^2\big]$$
$$= E\Big[\big(\hat{\theta} - E(\hat{\theta})\big)^2 + 2\big(\hat{\theta} - E(\hat{\theta})\big)\big(E(\hat{\theta}) - \theta\big) + \big(E(\hat{\theta}) - \theta\big)^2\Big]$$
$$= \underbrace{E\big[\big(\hat{\theta} - E(\hat{\theta})\big)^2\big]}_{var(\hat{\theta})} + \underbrace{\big(E(\hat{\theta}) - \theta\big)^2}_{[B(\hat{\theta})]^2}, \quad \text{as } E\big[\hat{\theta} - E(\hat{\theta})\big] = 0.$$
$$\hat{\theta}_1 = \bar{X} \Rightarrow E(\hat{\theta}_1) = \mu \ \text{and} \ var(\hat{\theta}_1) = \frac{\sigma^2}{n}$$
$$\hat{\theta}_2 = \frac{X_1 + X_2}{2} \Rightarrow E(\hat{\theta}_2) = \mu \ \text{and} \ var(\hat{\theta}_2) = \frac{\sigma^2}{2}$$
$$\hat{\theta}_3 = \frac{X_1 + X_3 + \cdots + X_{2m+1}}{m+1} \Rightarrow E(\hat{\theta}_3) = \mu \ \text{and} \ var(\hat{\theta}_3) = \frac{\sigma^2}{m+1}$$
$$\hat{\theta}_4 = \frac{X_2 + X_4 + \cdots + X_{2m}}{m} \Rightarrow E(\hat{\theta}_4) = \mu \ \text{and} \ var(\hat{\theta}_4) = \frac{\sigma^2}{m}$$
But if we let $\hat{\theta}_5 = \frac{X_1 + X_2}{n}$ with $n \neq 2$, then $E(\hat{\theta}_5) \neq \mu$. Therefore, $\hat{\theta}_5$ is biased.
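A short simulation can confirm that $\hat{\theta}_1$ is unbiased while $\hat{\theta}_5$ is not. This is a sketch with illustrative values of $\mu$, $\sigma$, and $n$ (none are from the notes):

```python
# Compare E(theta1_hat) = mu (unbiased) with E(theta5_hat) = 2*mu/n (biased).
import random

random.seed(1)
mu, sigma, n = 10.0, 3.0, 6      # assumed population values, n != 2
reps = 100_000

s1 = s5 = 0.0
for _ in range(reps):
    x = [random.gauss(mu, sigma) for _ in range(n)]
    s1 += sum(x) / n             # theta1_hat = sample mean
    s5 += (x[0] + x[1]) / n      # theta5_hat, biased since n != 2

print(round(s1 / reps, 1))       # close to mu = 10.0
print(round(s5 / reps, 1))       # close to 2*mu/n ~ 3.3, far from mu
```

The average of $\hat{\theta}_5$ converges to $2\mu/n$, not $\mu$, exactly as the expectation calculation predicts.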
From the set of unbiased estimators we would still like to restrict our choice to the one with minimum $MSE(\hat{\theta}) = var(\hat{\theta})$.
Minimum variance
$$c_i^2 = \left(d_i + \frac{1}{n}\right)^2 = d_i^2 + \frac{1}{n^2} + \frac{2d_i}{n}$$
$$\Rightarrow \sum_{i=1}^{n} c_i^2 = \sum_{i=1}^{n} d_i^2 + n \cdot \frac{1}{n^2} + \frac{2}{n}\sum_{i=1}^{n} d_i$$
$$\Rightarrow \sum_{i=1}^{n} c_i^2 = \sum_{i=1}^{n} d_i^2 + \frac{1}{n}, \quad \text{as } \sum_{i=1}^{n} d_i = 0$$
$$\Rightarrow \sum_{i=1}^{n} c_i^2 \geq \frac{1}{n}$$
Proof.
Therefore, $var(\bar{x}) \leq var(\hat{\mu})$, where $\hat{\mu}$ is any other linear unbiased estimator of $\mu$.
Definition (6.9)
An estimator $\hat{\theta}$ of a parameter $\theta$ is a consistent estimator if $\text{plim}\, \hat{\theta} = \theta$.
If
i). $\lim_{n \to \infty} E(\hat{\theta}_n) = \theta$, and
ii). $\lim_{n \to \infty} var(\hat{\theta}_n) = 0$,
then $\hat{\theta}_n$ is a consistent estimator of $\theta$.

i). $\lim_{n \to \infty} E(\hat{\theta}_n) = \theta \Rightarrow \lim_{n \to \infty} B(\hat{\theta}_n) = 0 \Rightarrow \lim_{n \to \infty} \big[B(\hat{\theta}_n)\big]^2 = 0$. Together with ii), $\lim_{n \to \infty} MSE(\hat{\theta}_n) = 0$, so by Chebyshev's inequality
$$\Rightarrow \lim_{n \to \infty} \Pr\big(\big|\hat{\theta}_n - \theta\big| \geq \varepsilon\big) = 0$$
$$\Rightarrow \lim_{n \to \infty} \Pr\big(\big|\hat{\theta}_n - \theta\big| < \varepsilon\big) = 1.$$
Example (6.5)
Let $X_1, X_2, \ldots, X_n$ be a random sample of size $n$ with $E(X_i) = \mu$ and $var(X_i) = \sigma^2$ for all $i$. Let $\bar{X}_n = \frac{1}{n}\sum_{i=1}^{n} X_i$; then $E(\bar{X}_n) = \mu$ and $var(\bar{X}_n) = \frac{\sigma^2}{n}$.
$$\Rightarrow \lim_{n \to \infty} E(\bar{X}_n) = \mu \ \text{and} \ \lim_{n \to \infty} var(\bar{X}_n) = \lim_{n \to \infty} \frac{\sigma^2}{n} = 0$$
$\Rightarrow \bar{X}_n$ is a consistent estimator of $\mu$.
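The shrinking-variance condition checked in Example 6.5 is easy to see numerically: the spread of $\bar{X}_n$ around $\mu$ falls like $\sigma^2/n$. A sketch with assumed values of $\mu$ and $\sigma$:

```python
# Estimate var(X_bar_n) by Monte Carlo for two sample sizes and
# compare with sigma^2/n (values are illustrative).
import random

random.seed(7)
mu, sigma = 5.0, 2.0
reps = 20_000

def var_of_mean(n):
    means = []
    for _ in range(reps):
        means.append(sum(random.gauss(mu, sigma) for _ in range(n)) / n)
    m = sum(means) / reps
    return sum((v - m) ** 2 for v in means) / reps

v10, v100 = var_of_mean(10), var_of_mean(100)
print(v10, v100)   # v100 should be roughly a tenth of v10
```

Here $var(\bar{X}_{10}) \approx \sigma^2/10 = 0.4$ and $var(\bar{X}_{100}) \approx \sigma^2/100 = 0.04$, consistent with $\bar{X}_n \to \mu$.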
Example (6.6)
Let $X_1, X_2, \ldots, X_n$ be a random sample of size $n$ from a normal distribution with $E(X_i) = \mu$ and $var(X_i) = \sigma^2\ \forall i$. Let $S^2 = \frac{1}{n-1}\sum_{i=1}^{n}(X_i - \bar{X})^2$ be an estimator of $\sigma^2$; then $E(S^2) = \sigma^2$ and $var(S^2) = \frac{2\sigma^4}{n-1}$.
$$\Rightarrow \lim_{n \to \infty} E(S^2) = \sigma^2 \ \text{and} \ \lim_{n \to \infty} var(S^2) = \lim_{n \to \infty} \frac{2\sigma^4}{n-1} = 0$$
$\Rightarrow S^2$ is a consistent estimator of $\sigma^2$.
Asymptotic efficiency
There are two concepts of asymptotic efficiency (relative and absolute).
Definition (6.10)
The efficiency of an estimator $\hat{\theta}$ relative to an estimator $\tilde{\theta}$, denoted by $Eff(\hat{\theta}, \tilde{\theta})$, is defined as
$$Eff(\hat{\theta}, \tilde{\theta}) = \frac{MSE(\tilde{\theta})}{MSE(\hat{\theta})};$$
this is a measure of the relative efficiency of an estimator.
Example (6.7)
Let $X_1, X_2, \ldots, X_n$ be a random sample of size $n$, and let $\tilde{\theta} = \bar{X}_n$ and $\hat{\theta} = (X_1 + X_2)/2$; then $E(\hat{\theta}) = \mu$ and $E(\tilde{\theta}) = \mu$.
$$\Rightarrow Eff(\hat{\theta}, \tilde{\theta}) = \frac{MSE(\tilde{\theta})}{MSE(\hat{\theta})} = \frac{\sigma^2/n}{\sigma^2/2} = \frac{2}{n}$$
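Since both estimators in Example 6.7 are unbiased, their MSEs equal their variances and the relative efficiency reduces to a ratio of known variances. A direct computation (the function name is an illustrative choice):

```python
# Relative efficiency Eff(theta_hat, theta_tilde) = MSE(tilde)/MSE(hat)
# for theta_tilde = X_bar_n and theta_hat = (X1 + X2)/2, as in Example 6.7.
def relative_efficiency(n, sigma2=1.0):
    mse_tilde = sigma2 / n       # variance of the sample mean (unbiased)
    mse_hat = sigma2 / 2         # variance of (X1 + X2)/2 (unbiased)
    return mse_tilde / mse_hat   # = 2/n

print(relative_efficiency(10))   # 0.2
```

For $n = 10$ the ratio is $2/10 = 0.2$: the two-observation average needs five times the variance of $\bar{X}_n$ to do the same job.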
Absolute efficiency:
Given that $\tilde{\theta}$ is an unbiased estimator of $\theta$, the measure of its absolute efficiency is given when we
$$\mu'_j(\theta_1, \theta_2, \ldots, \theta_k) = M'_j; \quad j = 1, 2, \ldots, k$$
Solution
Recall that $\mu = \mu'_1$ and $\sigma^2 = \mu'_2 - (\mu'_1)^2$. Now, the method of moments equations become:
$$x_i = \mu + e_i$$
$$e_i = x_i - \mu$$
$$E(e_i) = E(x_i) - \mu = 0$$
$$var(e_i) = E\big[(x_i - \mu)^2\big] = \sigma^2 \quad \text{and} \quad cov(e_i, e_j) = 0$$
$$\frac{\partial S(\mu)}{\partial \mu} = \frac{\partial \sum_{i=1}^{n}(x_i - \mu)^2}{\partial \mu} = \sum_{i=1}^{n} 2(x_i - \mu)(-1)$$
$$\Rightarrow \sum_{i=1}^{n} 2(x_i - \hat{\mu})(-1) = 0 \Rightarrow \sum_{i=1}^{n} x_i - n\hat{\mu} = 0$$
$$\Rightarrow \sum_{i=1}^{n} x_i = n\hat{\mu}$$
$$\Rightarrow \hat{\mu} = \frac{1}{n}\sum_{i=1}^{n} x_i = \bar{X}$$
Note that the least squares estimator of µ is BLUE .
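The first order condition above says $\bar{x}$ minimizes $S(\mu) = \sum_{i=1}^{n}(x_i - \mu)^2$. A small check, reusing the data of Example 6.2:

```python
# Verify numerically that S(mu) = sum (x_i - mu)^2 is smallest at mu = x_bar.
data = [10.0, 9.5, 11.2]          # sample from Example 6.2
xbar = sum(data) / len(data)

def S(mu):
    return sum((x - mu) ** 2 for x in data)

# S at the sample mean is no larger than at nearby candidate values
candidates = [xbar - 0.5, xbar - 0.1, xbar, xbar + 0.1, xbar + 0.5]
best = min(candidates, key=S)
print(round(best, 2))             # 10.23, i.e. x_bar
```

Because $S(\mu)$ is a quadratic in $\mu$, the grid comparison picks out $\bar{x}$ exactly.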
$$L(\theta; x_1, x_2, \ldots, x_n) = \prod_{i=1}^{n} f(x_i; \theta).$$
Note: If $X_1, X_2, \ldots, X_n$ is a random sample, then
$$f(x_1, x_2, \ldots, x_n; \theta) = \prod_{i=1}^{n} f(x_i; \theta).$$
Hence,
$$\ln L(\theta; x_1, x_2, \ldots, x_n) = \sum_{i=1}^{n} \ln f(x_i; \theta)$$
is known as the log likelihood function.
Definition (6.12)
A maximum likelihood estimator (MLE) of $\theta$ is a solution $\hat{\theta}$ to $\max_{\theta} L(\theta; x_1, x_2, \ldots, x_n)$, where
$$L(\theta; x_1, x_2, \ldots, x_n) = \prod_{i=1}^{n} f(x_i; \theta).$$
We try to select a $\hat{\theta}$ which maximizes the likelihood function. Such an estimator is called the MLE of $\theta$.

It is often computationally helpful to take the natural logarithm of the likelihood function (LF), because it comes out as a sum rather than a product of the density functions of the observations, and maximizing the LF is equivalent to maximizing the log likelihood function (LLF), since the latter is a monotonic transformation of the former. Thus,
$$\ln L(\theta; x_1, x_2, \ldots, x_n) = \ln f(x_1; \theta) + \ln f(x_2; \theta) + \cdots + \ln f(x_n; \theta) = \sum_{i=1}^{n} \ln f(x_i; \theta)$$
Example (6.9)
Let $X_i$, $i = 1, 2, \ldots, n$, be a random sample from a normal distribution with mean $\mu$ and variance $\sigma^2$.
$$\ln L(\mu, \sigma^2; x_1, x_2, \ldots, x_n) = -\frac{n}{2}\ln 2\pi - \frac{n}{2}\ln \sigma^2 - \frac{1}{2\sigma^2}\sum_{i=1}^{n}(x_i - \mu)^2$$
Thus, the first order conditions for a maximum are:
$$\left.\frac{\partial \ln L(\mu, \sigma^2; x_1, x_2, \ldots, x_n)}{\partial \mu}\right|_{\hat{\mu}, \hat{\sigma}^2} = -\frac{1}{2\hat{\sigma}^2}\sum_{i=1}^{n}(-2)(x_i - \hat{\mu}) = 0$$
$$\Rightarrow \sum_{i=1}^{n}(x_i - \hat{\mu}) = 0 \Rightarrow \hat{\mu} = \bar{X}$$
Therefore the MLE of $\mu$ is $\hat{\mu} = \bar{X}$.
$$\left.\frac{\partial \ln L(\mu, \sigma^2; x_1, \ldots, x_n)}{\partial \sigma^2}\right|_{\hat{\mu}, \hat{\sigma}^2} = -\frac{n}{2\hat{\sigma}^2} + \frac{1}{2\hat{\sigma}^4}\sum_{i=1}^{n}(x_i - \hat{\mu})^2 = 0,$$
which gives $\hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{X})^2$.
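The two first order conditions solve in closed form, so the normal MLE can be coded directly. A sketch (the function name and test data are illustrative, not from the notes):

```python
# Closed-form normal MLEs from the first order conditions above:
# mu_hat = x_bar and sigma2_hat = (1/n) * sum (x_i - x_bar)^2.
def normal_mle(x):
    n = len(x)
    mu_hat = sum(x) / n
    sigma2_hat = sum((xi - mu_hat) ** 2 for xi in x) / n
    return mu_hat, sigma2_hat

mu_hat, s2_hat = normal_mle([1.0, 2.0, 3.0, 4.0])
print(mu_hat, s2_hat)   # 2.5 1.25
```

Note the MLE divides by $n$, so it is the biased variance estimator discussed earlier, not the unbiased $S^2$ with divisor $n - 1$.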
Example (6.10)
Let $X \sim Ber(\pi)$; then $f(x, \pi) = \pi^x (1 - \pi)^{1-x}$,
$$\ln f(x, \pi) = x \ln \pi + (1 - x)\ln(1 - \pi)$$
$$\Rightarrow \frac{\partial \ln f(x, \pi)}{\partial \pi} = \frac{x}{\pi} - \frac{1 - x}{1 - \pi}$$
$$\Rightarrow \frac{\partial^2 \ln f(x, \pi)}{\partial \pi^2} = -\frac{x}{\pi^2} - \frac{1 - x}{(1 - \pi)^2}$$
$$\Rightarrow E\left[\frac{\partial^2 \ln f(x, \pi)}{\partial \pi^2}\right] = -\frac{E(x)}{\pi^2} - \frac{1 - E(x)}{(1 - \pi)^2} = -\frac{1}{\pi} - \frac{1}{1 - \pi} = -\frac{1}{\pi(1 - \pi)}$$
Thus if $t$ is any other unbiased estimator of $\pi$, its variance will be greater than or equal to this lower bound, i.e., $var(t) \geq \frac{\pi(1 - \pi)}{n}$. Therefore, the CRLB for any unbiased estimator of $\pi$ based on a random sample of size $n$ is $\frac{\pi(1 - \pi)}{n}$.
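The sample proportion is unbiased for $\pi$ with variance exactly $\pi(1-\pi)/n$, so it attains this CRLB. A simulation sketch with assumed $\pi$, $n$, and seed:

```python
# Check that the variance of the sample proportion matches the CRLB
# pi*(1 - pi)/n derived above (illustrative parameter values).
import random

random.seed(3)
pi, n, reps = 0.3, 50, 50_000
crlb = pi * (1 - pi) / n                 # = 0.0042

props = []
for _ in range(reps):
    props.append(sum(1 for _ in range(n) if random.random() < pi) / n)

m = sum(props) / reps
v = sum((p - m) ** 2 for p in props) / reps
print(v)   # close to crlb = 0.0042
```

The simulated variance sits right at the bound, illustrating that the sample proportion is an efficient estimator of $\pi$.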
Example (6.11)
Let $X \sim Poi(\theta)$, i.e., $f(x; \theta) = \frac{\theta^x e^{-\theta}}{x!}$, $x = 0, 1, 2, \ldots$; then it follows that:
$$\frac{\partial \ln f(x, \theta)}{\partial \theta} = \frac{x}{\theta} - 1 \Rightarrow \frac{\partial^2 \ln f(x, \theta)}{\partial \theta^2} = -\frac{x}{\theta^2}$$
$$\Rightarrow -E\left[\frac{\partial^2 \ln f(x, \theta)}{\partial \theta^2}\right] = \frac{E(x)}{\theta^2} = \frac{\theta}{\theta^2} = \frac{1}{\theta}$$
Example (6.12)
Let $X \sim N(\mu, \sigma^2)$ with a known variance.
$$f\big(x; \mu, \sigma^2\big) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{1}{2\sigma^2}(x - \mu)^2\right)$$
$$\Rightarrow \ln f\big(x; \mu, \sigma^2\big) = -\frac{1}{2}\ln 2\pi - \frac{1}{2}\ln \sigma^2 - \frac{1}{2\sigma^2}(x - \mu)^2$$
$$\Rightarrow \frac{\partial \ln f\big(x; \mu, \sigma^2\big)}{\partial \mu} = -\frac{1}{2\sigma^2}(2)(x - \mu)(-1) = \frac{1}{\sigma^2}(x - \mu)$$
$$\Rightarrow \frac{\partial^2 \ln f\big(x; \mu, \sigma^2\big)}{\partial \mu^2} = -\frac{1}{\sigma^2}$$
$$\Rightarrow E\left[\frac{\partial^2 \ln f\big(x; \mu, \sigma^2\big)}{\partial \mu^2}\right] = -\frac{1}{\sigma^2}$$
i). Invariance: If $\hat{\theta}$ is the MLE of $\theta$, and $g(\cdot)$ is any continuous function of $\theta$, then the MLE of $g(\theta)$ is $g(\hat{\theta})$.
ii). Asymptotic properties of the MLE: under certain regularity conditions:
a). The MLE is consistent, i.e., $\text{plim}\, \hat{\theta}_{MLE} = \theta$.
b). The MLE is asymptotically normal, i.e., $\hat{\theta}_{MLE} \overset{A}{\sim} N(\theta, CRLB)$.
Therefore,
$$\Pr\left\{-z_{\alpha/2} < \frac{\bar{x} - \mu}{\sigma/\sqrt{n}} < z_{\alpha/2}\right\} = 1 - \alpha$$
$$\Rightarrow \Pr\left\{-z_{\alpha/2}\frac{\sigma}{\sqrt{n}} < \bar{x} - \mu < z_{\alpha/2}\frac{\sigma}{\sqrt{n}}\right\} = 1 - \alpha$$
$$\Rightarrow \Pr\left\{\bar{x} - z_{\alpha/2}\frac{\sigma}{\sqrt{n}} < \mu < \bar{x} + z_{\alpha/2}\frac{\sigma}{\sqrt{n}}\right\} = 1 - \alpha$$
$$\left(140 - 1.96\frac{10}{\sqrt{25}},\ 140 + 1.96\frac{10}{\sqrt{25}}\right) = (140 - 1.96 \times 2,\ 140 + 1.96 \times 2) = (136.08,\ 143.92)$$
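The interval just computed, $\bar{x} = 140$, $\sigma = 10$, $n = 25$ with $z_{0.025} = 1.96$, can be reproduced in a few lines (the function name is an illustrative choice):

```python
# Known-variance z interval for the mean: x_bar +/- z * sigma / sqrt(n),
# reproducing the (136.08, 143.92) interval from the notes.
import math

def z_interval(xbar, sigma, n, z):
    half = z * sigma / math.sqrt(n)
    return (xbar - half, xbar + half)

lo, hi = z_interval(140, 10, 25, 1.96)
print(round(lo, 2), round(hi, 2))   # 136.08 143.92
```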
$$t = \frac{(\bar{x} - \bar{y}) - (\mu_x - \mu_y)}{S\sqrt{\frac{1}{n} + \frac{1}{m}}} \sim t_{n+m-2}$$
$$\Rightarrow \Pr\{-t_{n+m-2}(\alpha/2) < t < t_{n+m-2}(\alpha/2)\} = 1 - \alpha$$
Note that:
$$s.e.(\bar{x} - \bar{y}) = \sigma\sqrt{\frac{1}{n} + \frac{1}{m}} \quad \text{and} \quad S\sqrt{\frac{1}{n} + \frac{1}{m}} = \widehat{s.e.}(\bar{x} - \bar{y})$$
Therefore a $100(1 - \alpha)\%$ confidence interval for the difference of two means is given by
$$(\bar{x} - \bar{y}) \pm t_{n+m-2}(\alpha/2)\, S\sqrt{\frac{1}{n} + \frac{1}{m}}.$$
$$\Pr\left\{\chi^2_{n-1}(1 - \alpha/2) < \frac{(n-1)S^2}{\sigma^2} < \chi^2_{n-1}(\alpha/2)\right\} = 1 - \alpha$$
$$\Rightarrow \Pr\left\{\frac{\chi^2_{n-1}(1 - \alpha/2)}{(n-1)S^2} < \frac{1}{\sigma^2} < \frac{\chi^2_{n-1}(\alpha/2)}{(n-1)S^2}\right\} = 1 - \alpha$$
$$\Rightarrow \Pr\left\{\frac{(n-1)S^2}{\chi^2_{n-1}(\alpha/2)} < \sigma^2 < \frac{(n-1)S^2}{\chi^2_{n-1}(1 - \alpha/2)}\right\} = 1 - \alpha$$
$$= (60.98,\ 193.55)$$
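The chi-square interval for $\sigma^2$ is simple arithmetic once the critical values are read from a table. A sketch; the sample figures and critical values below are illustrative assumptions, not the elided example that produced $(60.98, 193.55)$:

```python
# Chi-square interval for sigma^2: ((n-1)S^2/chi2(alpha/2), (n-1)S^2/chi2(1-alpha/2)).
def var_interval(n, s2, chi2_hi, chi2_lo):
    """chi2_hi = chi2_{n-1}(alpha/2), chi2_lo = chi2_{n-1}(1 - alpha/2)."""
    return ((n - 1) * s2 / chi2_hi, (n - 1) * s2 / chi2_lo)

# e.g. n = 10, S^2 = 8, 95% critical values chi2_9(0.025) = 19.023 and
# chi2_9(0.975) = 2.700 (from a chi-square table)
lo, hi = var_interval(10, 8.0, 19.023, 2.700)
print(round(lo, 2), round(hi, 2))
```

Note the interval is not symmetric about $S^2$, because the chi-square distribution is skewed.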
6.4.4 Confidence Interval for Variance Ratio
Therefore,
$$\left(\frac{s_y^2}{s_x^2}\, F_{n-1,m-1}(1 - \alpha/2),\ \frac{s_y^2}{s_x^2}\, F_{n-1,m-1}(\alpha/2)\right)$$
is a $100(1 - \alpha)\%$ CI of $\frac{\sigma_y^2}{\sigma_x^2}$.
Example (6.15)
Let $n = 13$, $m = 16$, $s_x^2 = 1.2$, $s_y^2 = 1.5$, $\alpha = 0.02 \Rightarrow \alpha/2 = 0.01$. Find the 98% CI for the variance ratio $\sigma_y^2/\sigma_x^2$.
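The interval of Section 6.4.4 is again a ratio times two table values. A sketch of the computation; the $F$ quantiles below are placeholders for illustration, not the actual $F_{12,15}$ values needed for Example 6.15, which must be read from an $F$ table:

```python
# F interval for a variance ratio sigma_y^2/sigma_x^2:
# (sy2/sx2 * F(1 - alpha/2), sy2/sx2 * F(alpha/2)).
def var_ratio_interval(sy2, sx2, f_lo, f_hi):
    """f_lo = F_{n-1,m-1}(1 - alpha/2), f_hi = F_{n-1,m-1}(alpha/2)."""
    ratio = sy2 / sx2
    return (ratio * f_lo, ratio * f_hi)

# sy2 = 1.5 and sx2 = 1.2 from Example 6.15; the two quantiles are
# hypothetical placeholders, not real F_{12,15} table values
lo, hi = var_ratio_interval(1.5, 1.2, 0.25, 4.0)
print(lo, hi)
```

With real $F_{12,15}$ quantiles substituted in, the same two lines produce the 98% interval the example asks for.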