
STAT 410                                                                Fall 2016

1.  Let $X_1, X_2, \ldots, X_n$ be a random sample of size $n$ from a uniform
    distribution on the interval $(0, \theta)$.

    $f(x;\theta) = \begin{cases} \dfrac{1}{\theta}, & 0 < x < \theta \\[4pt] 0, & \text{otherwise} \end{cases}
    \qquad \mathrm{E}(X) = \dfrac{\theta}{2}, \qquad \mathrm{Var}(X) = \dfrac{\theta^2}{12}.$

    $F(x;\theta) = \begin{cases} 0, & x < 0 \\ x/\theta, & 0 < x < \theta \\ 1, & x > \theta. \end{cases}$

a)  Obtain the method of moments estimator of $\theta$, $\tilde\theta$.

    $\mathrm{E}(X) = \dfrac{\theta}{2}$.    Set $\bar X = \dfrac{\tilde\theta}{2}$.    $\Rightarrow\ \tilde\theta = 2\,\bar X$.

b)  Is $\tilde\theta$ unbiased for $\theta$? That is, does $\mathrm{E}(\tilde\theta)$ equal $\theta$?

    $\mathrm{E}(\bar X) = \mathrm{E}(X) = \dfrac{\theta}{2}$.    $\mathrm{E}(\tilde\theta) = \mathrm{E}(2\,\bar X) = \theta$.    $\Rightarrow\ \tilde\theta$ is unbiased for $\theta$.

c)  Compute $\mathrm{Var}(\tilde\theta)$.

    $\tilde\theta = 2\,\bar X$.    For Uniform$(0,\theta)$, $\mathrm{Var}(X) = \dfrac{\theta^2}{12}$.

    $\mathrm{Var}(\tilde\theta) = \mathrm{Var}(2\,\bar X) = 4\,\mathrm{Var}(\bar X) = 4\cdot\dfrac{\theta^2/12}{n}$.

    $\mathrm{Var}(\tilde\theta) = \dfrac{\theta^2}{3\,n}$.
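As a numerical sanity check (not part of the original solution), the sketch below simulates $\tilde\theta = 2\bar X$ repeatedly; $\theta = 5$ and $n = 20$ are arbitrary illustrative choices, and the sample mean and variance of the estimates should land near $\theta$ and $\theta^2/(3n)$:

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 5.0, 20, 100_000              # arbitrary illustrative values

samples = rng.uniform(0, theta, size=(reps, n))
theta_mom = 2 * samples.mean(axis=1)           # method of moments estimate: 2 * sample mean

print(theta_mom.mean())                        # close to theta = 5  (unbiased)
print(theta_mom.var(), theta**2 / (3 * n))     # both close to 25/60 ≈ 0.417
```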

d)  Obtain the maximum likelihood estimator of $\theta$, $\hat\theta$.

    Likelihood function:

    $L(\theta) = \displaystyle\prod_{i=1}^{n} f(X_i;\theta) = \dfrac{1}{\theta^{\,n}}$,  for $\theta \ge \max X_i$,

    $L(\theta) = 0$,  for $\theta < \max X_i$.

    Since $1/\theta^{\,n}$ is decreasing in $\theta$, $L(\theta)$ is maximized at the smallest value of $\theta$ consistent with the sample. Therefore, $\hat\theta = \max X_i$.
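Because $L(\theta) = \theta^{-n}$ is strictly decreasing on $[\max X_i, \infty)$, evaluating it on a grid of candidate values makes the maximizer visible. The sketch below uses a made-up sample purely for illustration:

```python
import numpy as np

x = np.array([1.2, 3.7, 0.9, 4.4, 2.1])        # hypothetical observed sample
grid = np.linspace(0.1, 8.0, 800)              # candidate values of theta

# L(theta) = theta**(-n) when theta >= max(x), and 0 otherwise
L = np.where(grid >= x.max(), grid ** (-len(x)), 0.0)

print(grid[np.argmax(L)], x.max())             # grid maximizer sits right at max(x) = 4.4
```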

e)  Is $\hat\theta$ unbiased for $\theta$? That is, does $\mathrm{E}(\hat\theta)$ equal $\theta$?

    $F_{\max X_i}(x) = P(\max X_i \le x) = P(X_1 \le x,\, X_2 \le x,\, \ldots,\, X_n \le x)$
    $= P(X_1 \le x)\,P(X_2 \le x)\cdots P(X_n \le x) = \left(\dfrac{x}{\theta}\right)^{\!n}$,   $0 < x < \theta$.

    $f_{\max X_i}(x) = F'_{\max X_i}(x) = \dfrac{n\,x^{\,n-1}}{\theta^{\,n}}$,   $0 < x < \theta$.

    $\mathrm{E}(\hat\theta) = \displaystyle\int_0^{\theta} x\cdot\frac{n\,x^{\,n-1}}{\theta^{\,n}}\,dx
    = \frac{n}{\theta^{\,n}}\int_0^{\theta} x^{\,n}\,dx
    = \frac{n}{\theta^{\,n}}\cdot\frac{\theta^{\,n+1}}{n+1}
    = \frac{n\,\theta}{n+1}$.

    $\hat\theta$ is NOT unbiased for $\theta$.
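A short simulation sketch (same arbitrary $\theta$ and $n$ as before) illustrates this downward bias of the MLE:

```python
import numpy as np

rng = np.random.default_rng(1)
theta, n, reps = 5.0, 20, 100_000              # arbitrary illustrative values

theta_mle = rng.uniform(0, theta, size=(reps, n)).max(axis=1)   # MLE = largest observation

print(theta_mle.mean(), n * theta / (n + 1))   # both close to 100/21 ≈ 4.76, below theta = 5
```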

f)  What must $c$ equal if $c\,\hat\theta$ is to be an unbiased estimator for $\theta$?

    $\mathrm{E}\!\left(\dfrac{n+1}{n}\,\hat\theta\right) = \dfrac{n+1}{n}\,\mathrm{E}(\hat\theta) = \dfrac{n+1}{n}\cdot\dfrac{n\,\theta}{n+1} = \theta$.

    $c = \dfrac{n+1}{n}$.

g)  Compute $\mathrm{Var}(\hat\theta)$ and $\mathrm{Var}\!\left(\dfrac{n+1}{n}\,\hat\theta\right)$.

    $\mathrm{E}(\hat\theta^{\,2}) = \displaystyle\int_0^{\theta} x^2\cdot\frac{n\,x^{\,n-1}}{\theta^{\,n}}\,dx
    = \frac{n}{\theta^{\,n}}\int_0^{\theta} x^{\,n+1}\,dx
    = \frac{n}{\theta^{\,n}}\cdot\frac{\theta^{\,n+2}}{n+2}
    = \frac{n\,\theta^2}{n+2}$.

    $\mathrm{Var}(\hat\theta) = \mathrm{E}(\hat\theta^{\,2}) - \left[\mathrm{E}(\hat\theta)\right]^2
    = \dfrac{n\,\theta^2}{n+2} - \left(\dfrac{n\,\theta}{n+1}\right)^{\!2}
    = \dfrac{n\,\theta^2}{(n+2)(n+1)^2}$.

    $\mathrm{Var}\!\left(\dfrac{n+1}{n}\,\hat\theta\right) = \left(\dfrac{n+1}{n}\right)^{\!2}\mathrm{Var}(\hat\theta) = \dfrac{\theta^2}{(n+2)\,n}$.
Def   Let $\hat\theta_1$ and $\hat\theta_2$ be two unbiased estimators for $\theta$. $\hat\theta_1$ is said to be
      more efficient than $\hat\theta_2$ if $\mathrm{Var}(\hat\theta_1) < \mathrm{Var}(\hat\theta_2)$.
      The relative efficiency of $\hat\theta_1$ with respect to $\hat\theta_2$ is $\mathrm{Var}(\hat\theta_2)\,/\,\mathrm{Var}(\hat\theta_1)$.

h)  Which estimator for $\theta$ is more efficient, $\tilde\theta$ or $\dfrac{n+1}{n}\,\hat\theta$? What is the
    relative efficiency of $\dfrac{n+1}{n}\,\hat\theta$ with respect to $\tilde\theta$?

    Since $\dfrac{\theta^2}{(n+2)\,n} < \dfrac{\theta^2}{3\,n}$ for $n > 1$,   $\dfrac{n+1}{n}\,\hat\theta$ is more efficient than $\tilde\theta$.

    Relative efficiency of $\dfrac{n+1}{n}\,\hat\theta$ with respect to $\tilde\theta$
    $= \dfrac{\theta^2/(3n)}{\theta^2/[(n+2)\,n]} = \dfrac{n+2}{3}$.
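For concreteness, here is the relative efficiency $(n+2)/3$ tabulated for a few sample sizes (a trivial sketch; the values of $n$ are arbitrary):

```python
# relative efficiency of (n+1)/n * max(X_i) with respect to 2 * X_bar is (n+2)/3
for n in (3, 10, 30, 100):
    print(n, (n + 2) / 3)          # 1.67, 4.0, 10.67, 34.0
```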

For an estimator $\hat\theta$ of $\theta$, define the Mean Squared Error of $\hat\theta$ by

    $\mathrm{MSE}(\hat\theta) = \mathrm{E}\!\left[(\hat\theta - \theta)^2\right]$.

    $\mathrm{E}\!\left[(\hat\theta - \theta)^2\right] = \left(\mathrm{E}(\hat\theta) - \theta\right)^2 + \mathrm{Var}(\hat\theta) = \left(\mathrm{bias}(\hat\theta)\right)^2 + \mathrm{Var}(\hat\theta)$.

i)  Find $\mathrm{MSE}(\tilde\theta)$.

    $\mathrm{bias}(\tilde\theta) = 0$ and $\mathrm{Var}(\tilde\theta) = \dfrac{\theta^2}{3\,n}$.

    $\mathrm{MSE}(\tilde\theta) = 0 + \dfrac{\theta^2}{3\,n} = \dfrac{\theta^2}{3\,n} \to 0$ as $n \to \infty$.

j)  Find $\mathrm{MSE}(\hat\theta)$.

    $\mathrm{bias}(\hat\theta) = \dfrac{n\,\theta}{n+1} - \theta = -\dfrac{\theta}{n+1}$   and   $\mathrm{Var}(\hat\theta) = \dfrac{n\,\theta^2}{(n+1)^2(n+2)}$.

    $\mathrm{MSE}(\hat\theta) = \mathrm{E}\!\left[(\hat\theta - \theta)^2\right]
    = \dfrac{\theta^2}{(n+1)^2} + \dfrac{n\,\theta^2}{(n+1)^2(n+2)}
    = \dfrac{2\,\theta^2}{(n+1)(n+2)} \to 0$ as $n \to \infty$.

k)  Which estimator is better, $\tilde\theta$ or $\hat\theta$?

    $\mathrm{MSE}(\tilde\theta) = \dfrac{\theta^2}{3\,n}$.        $\mathrm{MSE}(\hat\theta) = \dfrac{2\,\theta^2}{(n+1)(n+2)}$.

    Note that even though $\tilde\theta = 2\,\bar X$ is unbiased for $\theta$ and $\hat\theta = \max X_i$ is not unbiased for $\theta$,
    $\mathrm{MSE}(\hat\theta) \ll \mathrm{MSE}(\tilde\theta)$ for large $n$. $\hat\theta$ is better.
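A Monte Carlo sketch comparing the two mean squared errors (arbitrary $\theta = 5$, $n = 50$) makes the gap visible:

```python
import numpy as np

rng = np.random.default_rng(3)
theta, n, reps = 5.0, 50, 200_000

samples = rng.uniform(0, theta, size=(reps, n))
mse_mom = np.mean((2 * samples.mean(axis=1) - theta)**2)   # ~ theta**2/(3n)            ≈ 0.167
mse_mle = np.mean((samples.max(axis=1) - theta)**2)        # ~ 2*theta**2/((n+1)(n+2))  ≈ 0.019

print(mse_mom, mse_mle)                                    # the biased MLE has the smaller MSE
```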

$\mathrm{MSE}(c\,\hat\theta) = \mathrm{E}\!\left[(c\,\hat\theta - \theta)^2\right] = c^2\,\mathrm{E}(\hat\theta^{\,2}) - 2\,c\,\theta\,\mathrm{E}(\hat\theta) + \theta^2$.

Minimizing this quadratic in $c$ gives

$c_{\min} = \dfrac{\theta\,\mathrm{E}(\hat\theta)}{\mathrm{E}(\hat\theta^{\,2})}$.

An estimator could be improved by multiplying it by a constant if $c_{\min}$
does NOT depend on $\theta$.

For $\tilde\theta = 2\,\bar X$,

$c_{\min} = \dfrac{\theta\,\mathrm{E}(2\,\bar X)}{\mathrm{E}\!\left[(2\,\bar X)^2\right]}
= \dfrac{\theta\cdot\theta}{\mathrm{Var}(2\,\bar X) + \left[\mathrm{E}(2\,\bar X)\right]^2}
= \dfrac{\theta^2}{\dfrac{\theta^2}{3n} + \theta^2}
= \dfrac{3n}{3n+1}$.

$\tilde{\tilde\theta} = \dfrac{3n}{3n+1}\cdot 2\,\bar X = \dfrac{6n}{3n+1}\,\bar X$.

$\mathrm{E}(\tilde{\tilde\theta}) = \dfrac{3n\,\theta}{3n+1}$   and   $\mathrm{Var}(\tilde{\tilde\theta}) = \left(\dfrac{3n}{3n+1}\right)^{\!2}\dfrac{\theta^2}{3n} = \dfrac{3n\,\theta^2}{(3n+1)^2}$.

$\mathrm{bias}(\tilde{\tilde\theta}) = \dfrac{3n\,\theta}{3n+1} - \theta = -\dfrac{\theta}{3n+1}$.

$\mathrm{MSE}(\tilde{\tilde\theta}) = \dfrac{\theta^2}{(3n+1)^2} + \dfrac{3n\,\theta^2}{(3n+1)^2} = \dfrac{\theta^2}{3n+1}$.

For $\hat\theta = \max X_i$,

$c_{\min} = \dfrac{\theta\,\mathrm{E}(\hat\theta)}{\mathrm{E}(\hat\theta^{\,2})}
= \dfrac{\theta\cdot\dfrac{n\,\theta}{n+1}}{\dfrac{n\,\theta^2}{n+2}}
= \dfrac{n+2}{n+1}$.

$\hat{\hat\theta} = \dfrac{n+2}{n+1}\,\max X_i$.

$\mathrm{bias}(\hat{\hat\theta}) = \dfrac{(n+2)\,n\,\theta}{(n+1)^2} - \theta = -\dfrac{\theta}{(n+1)^2}$
  and   $\mathrm{Var}(\hat{\hat\theta}) = \left(\dfrac{n+2}{n+1}\right)^{\!2}\dfrac{n\,\theta^2}{(n+1)^2(n+2)} = \dfrac{(n+2)\,n\,\theta^2}{(n+1)^4}$.

$\mathrm{MSE}(\hat{\hat\theta}) = \dfrac{\theta^2}{(n+1)^4} + \dfrac{(n+2)\,n\,\theta^2}{(n+1)^4} = \dfrac{\theta^2}{(n+1)^2}$.
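Both shrunken estimators are straightforward to compute; the sketch below (arbitrary $\theta$ and $n$) compares their simulated MSEs with the closed forms $\theta^2/(3n+1)$ and $\theta^2/(n+1)^2$:

```python
import numpy as np

rng = np.random.default_rng(4)
theta, n, reps = 5.0, 20, 200_000

samples = rng.uniform(0, theta, size=(reps, n))
t1 = 6 * n / (3 * n + 1) * samples.mean(axis=1)      # (3n/(3n+1)) * 2*X_bar
t2 = (n + 2) / (n + 1) * samples.max(axis=1)         # ((n+2)/(n+1)) * max X_i

print(np.mean((t1 - theta)**2), theta**2 / (3 * n + 1))    # both ≈ 25/61  ≈ 0.410
print(np.mean((t2 - theta)**2), theta**2 / (n + 1)**2)     # both ≈ 25/441 ≈ 0.0567
```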

Indeed,

$\mathrm{MSE}(\tilde{\tilde\theta}) = \dfrac{\theta^2}{3n+1} < \dfrac{\theta^2}{3n} = \mathrm{MSE}(\tilde\theta)$.

$\mathrm{MSE}(\hat{\hat\theta}) = \dfrac{\theta^2}{(n+1)^2} = \dfrac{\theta^2}{n^2+2n+1} < \dfrac{\theta^2}{n^2+2n} = \mathrm{MSE}\!\left(\dfrac{n+1}{n}\,\hat\theta\right)$,
where $\dfrac{n+1}{n}\,\hat\theta = \dfrac{n+1}{n}\,\max X_i$.

$\mathrm{MSE}(\hat{\hat\theta}) = \dfrac{\theta^2}{(n+1)^2} < \dfrac{2\,\theta^2}{(n+1)(n+2)} = \mathrm{MSE}(\hat\theta)$.

More on the Method of Moments:

For $U(0,\theta)$,   $\mathrm{E}(X^k) = \dfrac{\theta^k}{k+1}$,   $k > -1$.

$\tilde\theta_k = \left[(k+1)\,\overline{X^k}\right]^{1/k}$,   where $\overline{X^k} = \dfrac{1}{n}\displaystyle\sum_{i=1}^{n} X_i^k$.

For example,   $\mathrm{E}(X^2) = \dfrac{\theta^2}{3}$,   so   $\tilde\theta_2 = \sqrt{3\,\overline{X^2}}$.
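A short sketch (arbitrary $\theta$ and $n$) computing the second-moment version $\tilde\theta_2 = \sqrt{3\,\overline{X^2}}$ next to the usual $\tilde\theta = 2\bar X$:

```python
import numpy as np

rng = np.random.default_rng(5)
theta, n = 5.0, 1000                       # arbitrary illustrative values
x = rng.uniform(0, theta, size=n)

theta_1 = 2 * x.mean()                     # k = 1: the usual method of moments estimate
theta_2 = np.sqrt(3 * np.mean(x**2))       # k = 2: ((k+1) * k-th sample moment)^(1/k)

print(theta_1, theta_2)                    # both close to theta = 5
```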
