
THE HONG KONG POLYTECHNIC UNIVERSITY

DEPARTMENT OF APPLIED MATHEMATICS


AMA3640 STATISTICAL INFERENCE

Tutorial 10 Solution

1. The posterior probabilities are:
\[
P(\theta = 0.3 \mid Y = 9)
= \frac{P(Y = 9 \mid \theta = 0.3)\,P(\theta = 0.3)}{P(Y = 9 \mid \theta = 0.3)\,P(\theta = 0.3) + P(Y = 9 \mid \theta = 0.5)\,P(\theta = 0.5)}
\]
\[
= \frac{\binom{20}{9}(0.3)^9 (1 - 0.3)^{20-9}\,(2/3)}{\binom{20}{9}(0.3)^9 (1 - 0.3)^{20-9}\,(2/3) + \binom{20}{9}(0.5)^9 (1 - 0.5)^{20-9}\,(1/3)}
= 0.4494,
\]
\[
P(\theta = 0.5 \mid Y = 9) = 1 - P(\theta = 0.3 \mid Y = 9) = 0.5506.
\]
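As a quick numerical check (not part of the original solution), the posterior probabilities can be reproduced in a few lines of Python; `binom_pmf` is a helper defined here, not a course function:

```python
from math import comb

def binom_pmf(n, y, p):
    # Binomial(n, p) probability of exactly y successes
    return comb(n, y) * p**y * (1 - p)**(n - y)

prior = {0.3: 2/3, 0.5: 1/3}
n, y = 20, 9
joint = {t: binom_pmf(n, y, t) * w for t, w in prior.items()}
total = sum(joint.values())
post = {t: j / total for t, j in joint.items()}
print(round(post[0.3], 4), round(post[0.5], 4))  # 0.4494 0.5506
```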
2. (a) Clearly, \(\bar X \mid \theta \sim N(\theta, \sigma^2/n)\). The joint density of \((\bar X, \theta)\) is
\[
f(\bar x \mid \theta)\pi(\theta)
= \frac{1}{\sqrt{2\pi\sigma^2/n}} \exp\left\{-\frac{n}{2\sigma^2}(\bar x - \theta)^2\right\}
\cdot \frac{1}{\sqrt{2\pi\tau^2}} \exp\left\{-\frac{1}{2\tau^2}(\theta - \mu)^2\right\}
\]
\[
= \left(\frac{1}{2\pi}\right)\left(\frac{\sqrt n}{\sigma\tau}\right)
\exp\left\{-\frac{n}{2\sigma^2}(\bar x - \theta)^2 - \frac{1}{2\tau^2}(\theta - \mu)^2\right\}.
\]
(b) The marginal density of \(\bar X\) is
\[
\begin{aligned}
m(\bar x) &= \int_{-\infty}^{\infty} \left(\frac{1}{2\pi}\right)\left(\frac{\sqrt n}{\sigma\tau}\right)
\exp\left\{-\frac{n}{2\sigma^2}(\bar x - \theta)^2 - \frac{1}{2\tau^2}(\theta - \mu)^2\right\} d\theta \\
&\propto \int_{-\infty}^{\infty} \exp\left\{-\frac{n}{2\sigma^2}(\bar x - \theta)^2 - \frac{1}{2\tau^2}(\theta - \mu)^2\right\} d\theta \\
&= \int_{-\infty}^{\infty} \exp\left\{-\frac{n}{2\sigma^2}(\bar x^2 - 2\theta\bar x + \theta^2) - \frac{1}{2\tau^2}(\theta^2 - 2\mu\theta + \mu^2)\right\} d\theta \\
&\propto \exp\left(-\frac{n\bar x^2}{2\sigma^2}\right)
\int_{-\infty}^{\infty} \exp\left(\frac{n\theta\bar x}{\sigma^2} - \frac{n\theta^2}{2\sigma^2} - \frac{\theta^2}{2\tau^2} + \frac{\mu\theta}{\tau^2}\right) d\theta \\
&= \exp\left(-\frac{n\bar x^2}{2\sigma^2}\right)
\int_{-\infty}^{\infty} \exp\left\{-\frac{1}{2}\left(\frac{n}{\sigma^2} + \frac{1}{\tau^2}\right)\theta^2 + \left(\frac{n\bar x}{\sigma^2} + \frac{\mu}{\tau^2}\right)\theta\right\} d\theta.
\end{aligned}
\]
Here,
\[
\begin{aligned}
&-\frac{1}{2}\left(\frac{n}{\sigma^2} + \frac{1}{\tau^2}\right)\theta^2
+ \left(\frac{n\bar x}{\sigma^2} + \frac{\mu}{\tau^2}\right)\theta \\
&= -\frac{1}{2}\left(\frac{n}{\sigma^2} + \frac{1}{\tau^2}\right)
\left\{\theta^2 - 2\left(\frac{n\bar x}{\sigma^2} + \frac{\mu}{\tau^2}\right)\theta \Big/ \left(\frac{n}{\sigma^2} + \frac{1}{\tau^2}\right)\right\} \\
&= -\frac{1}{2}\left(\frac{n}{\sigma^2} + \frac{1}{\tau^2}\right)
\left\{\theta^2 - 2\,\frac{n\bar x\tau^2 + \mu\sigma^2}{n\tau^2 + \sigma^2}\,\theta\right\} \\
&= -\frac{1}{2}\left(\frac{n}{\sigma^2} + \frac{1}{\tau^2}\right)
\left\{\theta^2 - 2\,\frac{n\bar x\tau^2 + \mu\sigma^2}{n\tau^2 + \sigma^2}\,\theta
+ \left(\frac{n\bar x\tau^2 + \mu\sigma^2}{n\tau^2 + \sigma^2}\right)^2
- \left(\frac{n\bar x\tau^2 + \mu\sigma^2}{n\tau^2 + \sigma^2}\right)^2\right\} \\
&= -\frac{1}{2}\left(\frac{n}{\sigma^2} + \frac{1}{\tau^2}\right)
\left\{\left(\theta - \frac{n\bar x\tau^2 + \mu\sigma^2}{n\tau^2 + \sigma^2}\right)^2
- \left(\frac{n\bar x\tau^2 + \mu\sigma^2}{n\tau^2 + \sigma^2}\right)^2\right\}.
\end{aligned}
\]

This study source was downloaded by 100000834373816 from CourseHero.com on 12-03-2022 05:14:35 GMT -06:00 (https://www.coursehero.com/file/121959962/T10-solpdf/).
THE HONG KONG POLYTECHNIC UNIVERSITY
DEPARTMENT OF APPLIED MATHEMATICS
AMA3640 STATISTICAL INFERENCE

Tutorial 11

1. Consider a random sample of size \(n = 2\) from a normal distribution with mean \(\theta\) and known variance 1. It is known that \(0 \le \theta \le 1\). Define the estimators \(\hat\theta_1 = X_1/2 + X_2/2\), \(\hat\theta_2 = X_1/4 + 3X_2/4\), and \(\hat\theta_3 = 2X_1/3\). Consider the squared error loss \(L(\theta, a) = (\theta - a)^2\).
(a) Compute the risk functions for these estimators.
(b) Find the Bayes risk of the estimators under the prior \(\theta \sim \mathrm{Uniform}(0, 1)\).
(c) Find the Bayes risk of the estimators under the prior \(\pi(\theta) = 2\theta\) for \(0 \le \theta \le 1\).
2. Consider a random sample of size \(n\) from the Bernoulli distribution with parameter \(\theta\). Consider a uniform prior distribution \(\theta \sim \mathrm{Uniform}(0, 1)\) and the squared error loss. Find the following:
(a) the Bayes estimator of \(\theta\);
(b) the Bayes estimator of \(\theta(1 - \theta)\);
(c) the Bayes risk for the estimator in (a).
3. (a) For a continuous random variable \(X\) with density \(f\), the median of \(X\) is the value \(m\) such that \(\int_{-\infty}^{m} f(x)\,dx = \int_{m}^{\infty} f(x)\,dx\). Show that the minimizer of \(E|X - a|\) with respect to \(a\) is \(a = m\).
(b) Let \(X_1, \ldots, X_n\) be a random sample from the \(N(\theta, \sigma^2)\) distribution. Suppose that the prior distribution of \(\theta\) is \(N(\mu, \tau^2)\). Here, \(\sigma^2\), \(\mu\), and \(\tau^2\) are known. Find the Bayes estimator of \(\theta\) under the absolute error loss \(L(\theta, a) = |\theta - a|\).
4. Let \(X_1, \ldots, X_n\) be a random sample with density
\[
f(x \mid \theta) = \theta x^{\theta - 1} \quad \text{for } 0 < x < 1.
\]
Set the prior density for \(\theta\) to be
\[
\pi(\theta) = \frac{\beta^k}{(k-1)!}\,\theta^{k-1} e^{-\beta\theta},
\]
where \(k\) is a known integer and \(\beta\) is a known positive constant. Find a \((1 - \alpha)\) credible interval for \(\theta\); express the interval in terms of quantiles of a chi-square distribution. Find a \((1 - \alpha)\) credible interval for \(1/\theta\).
5. Let \(X_1, \ldots, X_5\) be a random sample from the normal distribution with mean \(\theta\) and variance \(\sigma^2 = 2\). The prior distribution of \(\theta\) is \(N(0, 2)\). What is the coverage of the 95% HPD credible interval for \(\theta\) under the classical framework, when the true value of \(\theta\) is 2?
6. Let \(X_1, \ldots, X_n\) be a random sample from the Poisson distribution with mean \(\theta\). Set an improper prior for \(\theta\) with
\[
\pi(\theta) \propto \theta^{-1/2}.
\]
Find a \((1 - \alpha)\) credible interval for \(\theta\).


Tutorial 11 Solution

1. Note that \(\hat\theta_1 \sim N(\theta, 1/2)\), \(\hat\theta_2 \sim N(\theta, 5/8)\), and \(\hat\theta_3 \sim N(2\theta/3, 4/9)\).

(a) The risk functions are:
\[
\begin{aligned}
R(\theta, \hat\theta_1) &= E\{(\theta - \hat\theta_1)^2\} = \mathrm{Var}(\hat\theta_1) = \frac{1}{2}, \\
R(\theta, \hat\theta_2) &= E\{(\theta - \hat\theta_2)^2\} = \mathrm{Var}(\hat\theta_2) = \frac{5}{8}, \\
R(\theta, \hat\theta_3) &= E\{(\theta - \hat\theta_3)^2\}
= \mathrm{Var}(\hat\theta_3) + \{\mathrm{Bias}(\hat\theta_3)\}^2
= \frac{4}{9} + \left(\frac{2\theta}{3} - \theta\right)^2
= \frac{4}{9} + \frac{\theta^2}{9}.
\end{aligned}
\]

(b) Since \(R(\theta, \hat\theta_1)\) and \(R(\theta, \hat\theta_2)\) are constant, the Bayes risks of \(\hat\theta_1\) and \(\hat\theta_2\) are 1/2 and 5/8, respectively. The Bayes risk of \(\hat\theta_3\) is
\[
\int_0^1 \left(\frac{4}{9} + \frac{\theta^2}{9}\right) d\theta
= \left[\frac{4}{9}\theta\right]_0^1 + \left[\frac{1}{27}\theta^3\right]_0^1
= \frac{4}{9} + \frac{1}{27} = \frac{13}{27}.
\]

(c) Again, the Bayes risks for \(\hat\theta_1\) and \(\hat\theta_2\) are 1/2 and 5/8, respectively. The Bayes risk of \(\hat\theta_3\) is
\[
\int_0^1 \left(\frac{4}{9} + \frac{\theta^2}{9}\right) 2\theta\, d\theta
= \left[\frac{4}{9}\theta^2\right]_0^1 + \left[\frac{1}{18}\theta^4\right]_0^1
= \frac{4}{9} + \frac{1}{18} = \frac{1}{2}.
\]
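These Bayes risks can be sanity-checked by numerical integration; this sketch is not part of the original solution, and the grid size `N` is an arbitrary choice:

```python
# Midpoint-rule check of the Bayes risks of theta_hat_3 under both priors.
risk3 = lambda t: 4/9 + t**2/9  # R(theta, theta_hat_3) from part (a)

N = 100_000
h = 1.0 / N
mids = [(i + 0.5) * h for i in range(N)]

bayes_uniform = sum(risk3(t) for t in mids) * h         # prior Uniform(0, 1)
bayes_linear = sum(risk3(t) * 2 * t for t in mids) * h  # prior pi(theta) = 2 theta

print(round(bayes_uniform, 6), round(bayes_linear, 6))  # ~13/27 and ~1/2
```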

2. (a) Under the squared error loss, the Bayes estimator is the posterior mean. Let \(X_1, \ldots, X_n\) be the random sample and \(Y \equiv \sum_{i=1}^n X_i\). We have
\[
\pi(\theta \mid y) \propto \theta^y (1 - \theta)^{n-y}
\implies \theta \mid Y = y \sim \mathrm{Beta}(y + 1,\, n - y + 1).
\]
Hence, the Bayes estimator of \(\theta\) is \(E(\theta \mid Y) = (Y + 1)/(n + 2)\).

(b) The Bayes estimator of \(\theta(1 - \theta)\) is
\[
E\{\theta(1 - \theta) \mid Y\}
= \frac{(n + 1)!}{Y!\,(n - Y)!} \int_0^1 \theta^{Y+1} (1 - \theta)^{n - Y + 1}\, d\theta
= \frac{(n + 1)!}{Y!\,(n - Y)!} \cdot \frac{(Y + 1)!\,(n - Y + 1)!}{(n + 3)!}
= \frac{(Y + 1)(n - Y + 1)}{(n + 3)(n + 2)}.
\]

(c) Note that \(Y \sim \mathrm{Binomial}(n, \theta)\). The risk function is given by
\[
\begin{aligned}
E_\theta\left\{\left(\frac{Y + 1}{n + 2} - \theta\right)^2\right\}
&= \mathrm{Var}\left(\frac{Y + 1}{n + 2}\right) + \left\{\mathrm{Bias}\left(\frac{Y + 1}{n + 2}\right)\right\}^2 \\
&= \frac{\mathrm{Var}(Y)}{(n + 2)^2} + \left\{\frac{E(Y) + 1}{n + 2} - \theta\right\}^2 \\
&= \frac{n\theta(1 - \theta)}{(n + 2)^2} + \frac{\{n\theta + 1 - \theta(n + 2)\}^2}{(n + 2)^2} \\
&= \frac{n\theta(1 - \theta) + (1 - 2\theta)^2}{(n + 2)^2} \\
&= \frac{(4 - n)\theta^2 + (n - 4)\theta + 1}{(n + 2)^2}.
\end{aligned}
\]
The Bayes risk is
\[
\int_0^1 \frac{(4 - n)\theta^2 + (n - 4)\theta + 1}{(n + 2)^2}\, d\theta
= \frac{(4 - n)(1/3) + (n - 4)(1/2) + 1}{(n + 2)^2}
= \frac{1}{6(n + 2)}.
\]
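The closed-form risk and Bayes risk can be verified directly from the Binomial pmf; this check is not part of the original solution, and `n = 10` is an arbitrary illustrative choice:

```python
from math import comb

n = 10  # arbitrary sample size for the check

def risk(theta):
    # E_theta[((Y+1)/(n+2) - theta)^2] summed over the Binomial(n, theta) pmf
    return sum(comb(n, y) * theta**y * (1 - theta)**(n - y)
               * ((y + 1)/(n + 2) - theta)**2 for y in range(n + 1))

for theta in (0.1, 0.37, 0.5, 0.9):
    closed_form = ((4 - n)*theta**2 + (n - 4)*theta + 1) / (n + 2)**2
    assert abs(risk(theta) - closed_form) < 1e-12

# Bayes risk under Uniform(0, 1) by midpoint rule vs 1/(6(n+2))
M = 20_000
bayes = sum(risk((i + 0.5)/M) for i in range(M)) / M
assert abs(bayes - 1/(6*(n + 2))) < 1e-8
```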

3. (a) We have
\[
\begin{aligned}
E|X - a| &= \int_{-\infty}^{\infty} |x - a| f(x)\, dx
= \int_a^{\infty} (x - a) f(x)\, dx + \int_{-\infty}^a (a - x) f(x)\, dx \\
&= \int_a^{\infty} x f(x)\, dx - a \int_a^{\infty} f(x)\, dx
+ a \int_{-\infty}^a f(x)\, dx - \int_{-\infty}^a x f(x)\, dx.
\end{aligned}
\]
Differentiating with respect to \(a\), we have
\[
\frac{\partial}{\partial a} E|X - a|
= -a f(a) + a f(a) - \int_a^{\infty} f(x)\, dx + \int_{-\infty}^a f(x)\, dx + a f(a) - a f(a)
= -\int_a^{\infty} f(x)\, dx + \int_{-\infty}^a f(x)\, dx.
\]
Hence, the solution of
\[
\frac{\partial}{\partial a} E|X - a| = 0
\]
is \(a = m\). Moreover, we can check that
\[
\frac{\partial^2}{\partial a^2} E|X - a| = 2 f(a) \ge 0,
\]
which shows that the median is the minimizer of the expected absolute error loss.
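A small Monte Carlo illustration (not in the original solution): for an Exponential(1) sample, the grid value minimizing the empirical mean absolute error should sit at the sample median, which is near \(\ln 2\):

```python
import math
import random
import statistics

random.seed(0)
xs = [random.expovariate(1.0) for _ in range(10_000)]  # Exponential(1) draws; median = ln 2

grid = [i / 100 for i in range(1, 201)]  # candidate values a in (0, 2]
best = min(grid, key=lambda a: sum(abs(x - a) for x in xs))

assert abs(best - statistics.median(xs)) <= 0.01  # minimizer ~ sample median
assert abs(best - math.log(2)) < 0.06             # ln 2 ~ 0.6931
```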
(b) From Q.2(c) of Tutorial 10, the posterior distribution of \(\theta\) is
\[
\theta \mid \bar X \sim N\left(\frac{\tau^2}{\sigma^2/n + \tau^2}\,\bar X + \frac{\sigma^2/n}{\sigma^2/n + \tau^2}\,\mu,\;
\frac{\tau^2\,\sigma^2/n}{\sigma^2/n + \tau^2}\right).
\]
By (a), the Bayes estimator under the absolute error loss is the median of the posterior distribution, that is,
\[
\frac{\tau^2}{\sigma^2/n + \tau^2}\,\bar X + \frac{\sigma^2/n}{\sigma^2/n + \tau^2}\,\mu.
\]

4. The posterior density of \(\theta\) is
\[
\pi(\theta \mid x) \propto \theta^n \left(\prod_{i=1}^n x_i\right)^{\theta} \theta^{k-1} e^{-\beta\theta}
= \theta^{n+k-1}\, e^{-\left(\beta - \sum_{i=1}^n \log x_i\right)\theta}.
\]
Therefore, \(\theta \mid X \sim \mathrm{Gamma}\left(n + k,\; \beta - \sum_{i=1}^n \log X_i\right)\). We have
\[
2\left(\beta - \sum_{i=1}^n \log X_i\right)\theta \,\Big|\, X
\sim \mathrm{Gamma}\left(n + k,\, \frac{1}{2}\right) \equiv \chi^2_{2(n+k)}.
\]
It follows that
\[
P\left(\chi^2_{2(n+k),\,1-\alpha/2} < 2\left(\beta - \sum_{i=1}^n \log X_i\right)\theta < \chi^2_{2(n+k),\,\alpha/2}\right) = 1 - \alpha.
\]
The \((1 - \alpha)\) credible interval for \(\theta\) is
\[
\left(\frac{\chi^2_{2(n+k),\,1-\alpha/2}}{2\left(\beta - \sum_{i=1}^n \log X_i\right)},\;
\frac{\chi^2_{2(n+k),\,\alpha/2}}{2\left(\beta - \sum_{i=1}^n \log X_i\right)}\right).
\]
The \((1 - \alpha)\) credible interval for \(1/\theta\) is
\[
\left(\frac{2\left(\beta - \sum_{i=1}^n \log X_i\right)}{\chi^2_{2(n+k),\,\alpha/2}},\;
\frac{2\left(\beta - \sum_{i=1}^n \log X_i\right)}{\chi^2_{2(n+k),\,1-\alpha/2}}\right).
\]
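A simulation sketch of the chi-square identity, not part of the original solution (the values `k = 3`, `beta = 2`, `n = 10`, `theta_true = 1.5` are hypothetical): scaled posterior draws should have the mean and variance of a chi-square with 2(n + k) degrees of freedom:

```python
import math
import random

random.seed(1)
k, beta, n, theta_true = 3, 2.0, 10, 1.5

# Draw data from f(x | theta) = theta x^(theta-1) via inverse CDF: X = U^(1/theta)
xs = [random.random() ** (1 / theta_true) for _ in range(n)]
rate = beta - sum(math.log(x) for x in xs)  # posterior rate; positive since log x < 0

# Posterior theta | x ~ Gamma(n + k, rate); random.gammavariate takes a SCALE parameter
draws = [2 * rate * random.gammavariate(n + k, 1 / rate) for _ in range(100_000)]

df = 2 * (n + k)
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
assert abs(mean - df) < 0.2     # chi-square mean = df
assert abs(var - 2 * df) < 2.0  # chi-square variance = 2 df
```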

5. From Q.2(c) of Tutorial 10, the posterior distribution of \(\theta\) is
\[
\theta \mid \bar X \sim N\left(\frac{\tau^2}{\sigma^2/n + \tau^2}\,\bar X + \frac{\sigma^2/n}{\sigma^2/n + \tau^2}\,\mu,\;
\frac{\tau^2\,\sigma^2/n}{\sigma^2/n + \tau^2}\right).
\]
Because the normal density is symmetric, the 95% HPD credible interval is
\[
\frac{\tau^2}{\sigma^2/n + \tau^2}\,\bar X + \frac{\sigma^2/n}{\sigma^2/n + \tau^2}\,\mu
\pm z_{0.025} \sqrt{\frac{\sigma^2\tau^2}{\sigma^2 + n\tau^2}}.
\]
Writing \(\lambda = \sigma^2/(n\tau^2)\), the coverage is
\[
\begin{aligned}
&P\left(\frac{\bar X}{1 + \lambda} + \frac{\lambda\mu}{1 + \lambda} - z_{0.025}\sqrt{\frac{\sigma^2}{n(1 + \lambda)}}
< \theta <
\frac{\bar X}{1 + \lambda} + \frac{\lambda\mu}{1 + \lambda} + z_{0.025}\sqrt{\frac{\sigma^2}{n(1 + \lambda)}}\right) \\
&= P\left((1 + \lambda)\theta - \lambda\mu - \frac{z_{0.025}\sqrt{1 + \lambda}\,\sigma}{\sqrt n}
< \bar X <
(1 + \lambda)\theta - \lambda\mu + \frac{z_{0.025}\sqrt{1 + \lambda}\,\sigma}{\sqrt n}\right) \\
&= P\left(-z_{0.025}\sqrt{1 + \lambda} + \frac{\lambda(\theta - \mu)}{\sigma/\sqrt n}
< \underbrace{\frac{\bar X - \theta}{\sigma/\sqrt n}}_{Z \sim N(0, 1)}
< z_{0.025}\sqrt{1 + \lambda} + \frac{\lambda(\theta - \mu)}{\sigma/\sqrt n}\right).
\end{aligned}
\]
With \(\sigma^2 = 2\), \(\tau^2 = 2\), \(n = 5\), \(\mu = 0\), \(\lambda = 0.2\), and \(z_{0.025} = 1.96\), the above probability is \(P(-1.5146 < Z < 2.7795) = 0.9323\).
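The final number can be checked with the standard normal CDF written via `math.erf` (this check is not part of the original solution):

```python
from math import erf, sqrt

Phi = lambda z: 0.5 * (1 + erf(z / sqrt(2)))  # standard normal CDF

sigma2, tau2, n, mu, theta, z = 2.0, 2.0, 5, 0.0, 2.0, 1.96
lam = sigma2 / (n * tau2)                      # lambda = 0.2
shift = lam * (theta - mu) / sqrt(sigma2 / n)  # lambda (theta - mu) / (sigma / sqrt(n))
half = z * sqrt(1 + lam)

coverage = Phi(shift + half) - Phi(shift - half)
print(round(shift - half, 4), round(shift + half, 4), round(coverage, 4))
```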

6. The posterior density of \(\theta\) is
\[
\pi(\theta \mid x) \propto e^{-n\theta}\, \theta^{\sum_{i=1}^n x_i}\, \theta^{-1/2}
= e^{-n\theta}\, \theta^{\sum_{i=1}^n x_i - 1/2}
\]
\[
\implies \theta \mid X \sim \mathrm{Gamma}\left(\sum_{i=1}^n X_i + \frac{1}{2},\; n\right)
\implies 2n\theta \mid X \sim \mathrm{Gamma}\left(\sum_{i=1}^n X_i + \frac{1}{2},\; \frac{1}{2}\right)
\equiv \chi^2_{2\sum_{i=1}^n X_i + 1}.
\]
A \((1 - \alpha)\) credible interval for \(\theta\) is
\[
\left(\frac{\chi^2_{2\sum X_i + 1,\,1-\alpha/2}}{2n},\;
\frac{\chi^2_{2\sum X_i + 1,\,\alpha/2}}{2n}\right).
\]
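A grid-integration check, not part of the original solution (the Poisson counts below are hypothetical data): the normalized posterior under \(\pi(\theta) \propto \theta^{-1/2}\) should have the \(\mathrm{Gamma}(\sum x_i + 1/2,\, n)\) mean \((\sum x_i + 1/2)/n\):

```python
import math

xs = [3, 1, 4, 1, 5, 9, 2, 6]  # hypothetical Poisson counts
n, s = len(xs), sum(xs)

# Unnormalized posterior e^{-n theta} theta^{s - 1/2} on a fine grid
grid = [i * 0.001 for i in range(1, 20001)]  # theta in (0, 20]
w = [math.exp(-n * t) * t ** (s - 0.5) for t in grid]

post_mean = sum(t * wi for t, wi in zip(grid, w)) / sum(w)
assert abs(post_mean - (s + 0.5) / n) < 1e-3  # Gamma(s + 1/2, n) mean
```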

Tutorial 10 Solution (continued)

The second term in the bracket does not depend on \(\theta\), so
\[
\begin{aligned}
m(\bar x) &\propto \exp\left(-\frac{n\bar x^2}{2\sigma^2}\right)
\exp\left\{\frac{1}{2}\left(\frac{n}{\sigma^2} + \frac{1}{\tau^2}\right)
\left(\frac{n\bar x\tau^2 + \mu\sigma^2}{n\tau^2 + \sigma^2}\right)^2\right\} \\
&\quad\times \underbrace{\int_{-\infty}^{\infty}
\exp\left\{-\frac{1}{2}\left(\frac{n}{\sigma^2} + \frac{1}{\tau^2}\right)
\left(\theta - \frac{n\bar x\tau^2 + \mu\sigma^2}{n\tau^2 + \sigma^2}\right)^2\right\} d\theta}_{\text{proportional to a normal density}} \\
&\propto \exp\left(-\frac{n\bar x^2}{2\sigma^2}\right)
\exp\left\{\frac{1}{2}\left(\frac{n}{\sigma^2} + \frac{1}{\tau^2}\right)
\left(\frac{n\bar x\tau^2 + \mu\sigma^2}{n\tau^2 + \sigma^2}\right)^2\right\} \\
&\propto \exp\left\{-\frac{1}{2}\,\frac{n}{n\tau^2 + \sigma^2}\,(\bar x^2 - 2\mu\bar x)\right\} \\
&\propto \exp\left\{-\frac{1}{2}\,\frac{n}{n\tau^2 + \sigma^2}\,(\bar x - \mu)^2\right\},
\end{aligned}
\]
which is the \(N(\mu,\, \tau^2 + \sigma^2/n)\) density.
(c) The posterior density of \(\theta\) is
\[
\begin{aligned}
\pi(\theta \mid \bar x) &\propto f(\bar x \mid \theta)\pi(\theta) \\
&\propto \exp\left\{-\frac{n}{2\sigma^2}(\bar x - \theta)^2 - \frac{1}{2\tau^2}(\theta - \mu)^2\right\} \\
&\propto \exp\left\{-\frac{1}{2}\left(\frac{n}{\sigma^2} + \frac{1}{\tau^2}\right)
\left(\theta - \frac{n\bar x\tau^2 + \mu\sigma^2}{n\tau^2 + \sigma^2}\right)^2\right\},
\end{aligned}
\]
which implies that
\[
\theta \mid \bar X \sim N\left(\frac{\bar X\tau^2 + \mu\,\sigma^2/n}{\tau^2 + \sigma^2/n},\;
\frac{\tau^2\,\sigma^2/n}{\tau^2 + \sigma^2/n}\right).
\]
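A Monte Carlo sanity check of the posterior mean formula, not part of the original solution (the values \(\mu = 1\), \(\tau^2 = 4\), \(\sigma^2 = 2\), \(n = 8\) and the conditioning window are hypothetical): among simulated \((\theta, \bar X)\) pairs with \(\bar X\) near a target, the average \(\theta\) should approach the posterior mean:

```python
import random
import statistics

random.seed(2)
mu, tau2, sig2, n = 1.0, 4.0, 2.0, 8
target = 2.0

kept = []
for _ in range(500_000):
    th = random.gauss(mu, tau2 ** 0.5)          # theta ~ N(mu, tau^2)
    xbar = random.gauss(th, (sig2 / n) ** 0.5)  # X-bar | theta ~ N(theta, sigma^2/n)
    if abs(xbar - target) < 0.02:               # condition on X-bar ~ target
        kept.append(th)

post_mean = (target * tau2 + mu * sig2 / n) / (tau2 + sig2 / n)  # ~ 1.9412
assert abs(statistics.fmean(kept) - post_mean) < 0.05
```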
3. (a) Note that \(Y \equiv \sum_{i=1}^n X_i\) is a sufficient statistic for \(\theta\) and \(Y\) follows the Poisson(\(n\theta\)) distribution. The posterior density of \(\theta\) is
\[
\begin{aligned}
\pi(\theta \mid y) &\propto \frac{(n\theta)^y e^{-n\theta}}{y!} \cdot \frac{\beta^\alpha}{\Gamma(\alpha)}\,\theta^{\alpha-1} e^{-\beta\theta} \\
&\propto (n\theta)^y e^{-n\theta}\, \theta^{\alpha-1} e^{-\beta\theta} \\
&\propto \theta^{\alpha+y-1} e^{-(n+\beta)\theta},
\end{aligned}
\]
which implies that
\[
\theta \mid Y \sim \mathrm{Gamma}(\alpha + Y,\; n + \beta).
\]
(b) Generally, for \(W \sim \mathrm{Gamma}(\alpha, \beta)\), we have
\[
E(W) = \int_0^\infty w\,\frac{\beta^\alpha}{\Gamma(\alpha)}\, w^{\alpha-1} e^{-\beta w}\, dw
= \frac{\Gamma(\alpha + 1)}{\Gamma(\alpha)\,\beta}
\int_0^\infty \frac{\beta^{\alpha+1}}{\Gamma(\alpha + 1)}\, w^{\alpha} e^{-\beta w}\, dw
= \frac{\alpha}{\beta}.
\]
Similarly,
\[
E(W^2) = \int_0^\infty w^2\,\frac{\beta^\alpha}{\Gamma(\alpha)}\, w^{\alpha-1} e^{-\beta w}\, dw
= \frac{\Gamma(\alpha + 2)}{\Gamma(\alpha)\,\beta^2}
= \frac{\alpha(\alpha + 1)}{\beta^2}.
\]

Hence,
\[
\mathrm{Var}(W) = E(W^2) - \{E(W)\}^2 = \frac{\alpha}{\beta^2}.
\]
Therefore, the posterior mean and variance of \(\theta\) are \((\alpha + Y)/(n + \beta)\) and \((\alpha + Y)/(n + \beta)^2\), respectively.
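The conjugate update itself reduces to two additions; a minimal helper sketch, not from the original solution (the function name and the numbers are hypothetical):

```python
def poisson_gamma_update(alpha, beta, counts):
    """Gamma(alpha, beta) prior + iid Poisson counts -> posterior (shape, rate)."""
    return alpha + sum(counts), beta + len(counts)

shape, rate = poisson_gamma_update(alpha=2.0, beta=1.0, counts=[4, 0, 3, 2, 5])
post_mean, post_var = shape / rate, shape / rate**2  # posterior mean and variance

assert (shape, rate) == (16.0, 6.0)
assert abs(post_mean - 16 / 6) < 1e-12
```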
4. (a) Note that \(Y \equiv \sum_{i=1}^n X_i\) is a sufficient statistic for \(\theta\) and \(Y\) follows the Gamma(\(n, \theta\)) distribution. The posterior density of \(\theta\) is
\[
\pi(\theta \mid y) \propto \frac{\theta^n}{\Gamma(n)}\, y^{n-1} e^{-\theta y} \cdot \beta e^{-\beta\theta}
\propto \theta^n e^{-\theta y} e^{-\beta\theta}
= \theta^n e^{-(y + \beta)\theta},
\]
which implies that
\[
\theta \mid Y \sim \mathrm{Gamma}(n + 1,\; Y + \beta).
\]
(b) The posterior mean of \(\theta\) is
\[
E(\theta \mid Y) = \frac{n + 1}{Y + \beta}.
\]
Generally, for \(W \sim \mathrm{Gamma}(\alpha, \beta)\), we have
\[
E\left(\frac{1}{W}\right) = \int_0^\infty w^{-1}\,\frac{\beta^\alpha}{\Gamma(\alpha)}\, w^{\alpha-1} e^{-\beta w}\, dw
= \frac{\beta^\alpha\,\Gamma(\alpha - 1)}{\Gamma(\alpha)\,\beta^{\alpha-1}}
= \frac{\beta}{\alpha - 1},
\]
where \(\alpha > 1\). Therefore, the posterior mean of \(1/\theta\) is given by
\[
E\left(\frac{1}{\theta} \,\Big|\, Y\right) = \frac{Y + \beta}{n}.
\]
5. Note that \(Y \equiv \max_i X_i\) is a sufficient statistic for \(\theta\) with density
\[
f(y \mid \theta) = \frac{n y^{n-1}}{\theta^n}\, I(\theta > y).
\]
The posterior density of \(\theta\) is
\[
\begin{aligned}
\pi(\theta \mid y) &\propto \frac{n y^{n-1}}{\theta^n}\, I(\theta > y)
\cdot \frac{\beta\alpha^\beta}{\theta^{\beta+1}}\, I(\theta > \alpha) \\
&\propto \theta^{-n} I(\theta > y)\, \theta^{-(\beta+1)} I(\theta > \alpha) \\
&= \theta^{-(n+\beta+1)}\, I(\theta > \max(\alpha, y)).
\end{aligned}
\]
Since \(\int_{-\infty}^{\infty} \pi(\theta \mid y)\, d\theta = 1\), we have, for some \(k(y)\),
\[
k(y) \int_{\max(\alpha, y)}^{\infty} \theta^{-(n+\beta+1)}\, d\theta = 1
\implies k(y) = (n + \beta)\max(\alpha, y)^{n+\beta}.
\]
Therefore, the posterior mean of \(\theta\) is
\[
(n + \beta)\max(\alpha, y)^{n+\beta} \int_{\max(\alpha, y)}^{\infty} \theta \cdot \theta^{-(n+\beta+1)}\, d\theta
= (n + \beta)\max(\alpha, y)^{n+\beta}\, \frac{\max(\alpha, y)^{-(n+\beta)+1}}{(n + \beta) - 1}
= \frac{n + \beta}{n + \beta - 1}\,\max(\alpha, y).
\]
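A numerical integration check, not part of the original solution (\(\alpha = 1.5\), \(\beta = 2\), \(n = 4\), \(y = 2.3\) are hypothetical values): the posterior mean computed from the normalized density should match \((n + \beta)/(n + \beta - 1) \cdot \max(\alpha, y)\):

```python
alpha, beta, n, y = 1.5, 2.0, 4, 2.3
m = max(alpha, y)
k = (n + beta) * m ** (n + beta)  # normalizing constant of the Pareto-type posterior

# Midpoint rule over theta in (m, m + 200]; the integrand is theta * k * theta^-(n+beta+1)
# and the tail beyond m + 200 is negligible
h = 0.001
post_mean = sum(k * (m + (i + 0.5) * h) ** (-(n + beta)) for i in range(200_000)) * h
assert abs(post_mean - (n + beta) / (n + beta - 1) * m) < 1e-3
```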
