STAT 150 SPRING 2010: MIDTERM EXAM

Problems by Jim Pitman. Solutions by George Chen


1. Let $X_0, Y_1, Y_2, \ldots$ be independent random variables, $X_0$ with values in $\{0, 1, 2, \ldots\}$ and each $Y_i$ an indicator random variable with $P(Y_i = 1) = \frac{1}{i}$ and $P(Y_i = 0) = 1 - \frac{1}{i} = \frac{i-1}{i}$ for each $i = 1, 2, \ldots$ For $n = 1, 2, \ldots$ let
$$X_{n+1} := \begin{cases} \max\{k : 1 \le k < X_n \text{ and } Y_k = 1\} & \text{if } X_n > 1, \\ 0 & \text{if } X_n \le 1. \end{cases}$$
Explain why $(X_n)$ is a Markov chain, and describe its state space and transition probabilities.
Solution: The state space is clearly $\{0, 1, 2, \ldots\}$ and, moreover, $X_{n+1} < X_n$ when $X_n > 1$. Suppose $x_i > 1$ and $0 < x_{i+1} < x_i$ for $i \in \{0, 1, 2, \ldots, n\}$. Then
$$P(X_{n+1} = x_{n+1} \mid X_i = x_i \text{ for } i = 0, 1, \ldots, n) = \frac{P(X_i = x_i \text{ for } i = 0, 1, 2, \ldots, n+1)}{P(X_i = x_i \text{ for } i = 0, 1, 2, \ldots, n)}.$$
Since the $Y_i$ are independent of each other and of $X_0$, the event $\{X_i = x_i \text{ for } i = 0, \ldots, n\}$ is the event
$$\{X_0 = x_0,\ Y_{x_0-1} = \cdots = Y_{x_1+1} = 0,\ Y_{x_1} = 1,\ Y_{x_1-1} = \cdots = Y_{x_2+1} = 0,\ Y_{x_2} = 1,\ \ldots,\ Y_{x_n} = 1\},$$
so the ratio above equals
$$\frac{P(X_0 = x_0)\,\prod_{i=1}^{n+1}\left[\left(\prod_{j=x_i+1}^{x_{i-1}-1} P(Y_j = 0)\right) P(Y_{x_i} = 1)\right]}{P(X_0 = x_0)\,\prod_{i=1}^{n}\left[\left(\prod_{j=x_i+1}^{x_{i-1}-1} P(Y_j = 0)\right) P(Y_{x_i} = 1)\right]}.$$
Many numerator/denominator cancellations occur and all that remains after cancellations is one factor of the numerator's outer product:
$$\left(\prod_{j=x_{n+1}+1}^{x_n-1} P(Y_j = 0)\right)\underbrace{P(Y_{x_{n+1}} = 1)}_{1/x_{n+1}} = \frac{1}{x_{n+1}}\prod_{j=x_{n+1}+1}^{x_n-1}\frac{j-1}{j} = \frac{1}{x_{n+1}}\left(\frac{x_{n+1}}{x_{n+1}+1}\cdot\frac{x_{n+1}+1}{x_{n+1}+2}\cdots\frac{x_n-2}{x_n-1}\right) = \frac{1}{x_n-1}.$$
Conclude that
$$P(X_{n+1} = x_{n+1} \mid X_i = x_i \text{ for } i = 0, 1, \ldots, n) = \frac{1}{x_n - 1}. \quad (1)$$
The above result holds for all $n$ such that $x_i > 1$ and $0 < x_{i+1} < x_i$ for all $0 \le i \le n$. The only other case is if there is an $m$ such that $X_m \le 1$. Note that by how $(X_n)$ is defined, we must then have $X_{m+1} = 0$, and trivially we have, for all $n$,
$$P(X_{n+1} = x_{n+1} \mid X_0 = x_0, X_1 = x_1, \ldots, X_{n-1} = x_{n-1}, X_n \le 1) = 1(x_{n+1} = 0). \quad (2)$$
Therefore, combining both cases (Equations (1) and (2)), we have
$$P(X_{n+1} = x_{n+1} \mid X_i = x_i \text{ for } i = 0, 1, \ldots, n) = \begin{cases} \dfrac{1}{x_n - 1} & \text{if } x_n > 1 \text{ and } 0 < x_{n+1} < x_n, \\ 1(x_{n+1} = 0) & \text{if } x_n \le 1, \\ 0 & \text{otherwise.} \end{cases} \quad (3)$$
In particular, $P(X_{n+1} = x_{n+1} \mid X_i = x_i \text{ for } i = 0, 1, \ldots, n)$ does not depend on $x_0, x_1, \ldots, x_{n-1}$, so $P(X_{n+1} \mid X_0, \ldots, X_n) = P(X_{n+1} \mid X_n)$, i.e. $(X_n)$ is a Markov chain with transition probabilities given by Equation (3).
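A quick computational check of Equation (1), not part of the original exam: the derivation says that from $X_n = x > 1$ the next state is uniform on $\{1, \ldots, x-1\}$. The minimal Python sketch below (the helper name `p_next` is ours) recomputes each transition probability exactly from the definition of the $Y_j$.

```python
from fractions import Fraction

def p_next(x, k):
    """P(X_{n+1} = k | X_n = x) for 1 <= k < x, computed directly from the
    definition of the event: Y_k = 1 and Y_{k+1} = ... = Y_{x-1} = 0."""
    prob = Fraction(1, k)                      # P(Y_k = 1) = 1/k
    for j in range(k + 1, x):
        prob *= Fraction(j - 1, j)             # P(Y_j = 0) = (j-1)/j
    return prob

for x in range(2, 12):
    row = [p_next(x, k) for k in range(1, x)]
    # every transition probability matches 1/(x_n - 1) ...
    assert all(p == Fraction(1, x - 1) for p in row)
    # ... and the row sums to 1, so X_{n+1} is uniform on {1, ..., x-1}
    assert sum(row) == 1
```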
2. For $Y_1, Y_2, \ldots$ as in the previous question, let $T_0 := 0$ and for $n = 1, 2, \ldots$ let
$$T_n := \min\{k : k > T_{n-1} \text{ and } Y_k = 1\}.$$
Explain why $(T_n)$ is a Markov chain, and describe its state space and transition probabilities.
Solution: The state space is clearly $\{0, 1, 2, \ldots\}$ and, moreover, $T_{n+1} > T_n$ for all $n$. Note that $P(T_1 = 1 \mid T_0 = 0) = 1$ since $Y_1 = 1$ with probability 1. Consider $n \ge 2$. We have for $t_{n+1} > t_n > t_{n-1} > \cdots > t_2 > 1$:
$$P(T_{n+1} = t_{n+1} \mid T_0 = 0, T_1 = 1, T_2 = t_2, \ldots, T_n = t_n) = \frac{P(T_0 = 0, T_1 = 1, T_2 = t_2, \ldots, T_n = t_n, T_{n+1} = t_{n+1})}{P(T_0 = 0, T_1 = 1, T_2 = t_2, \ldots, T_n = t_n)}$$
$$= \frac{P(T_0 = 0)\, P(T_1 = 1 \mid T_0 = 0)\,\prod_{i=2}^{n+1}\left[\left(\prod_{j=t_{i-1}+1}^{t_i-1} P(Y_j = 0)\right) P(Y_{t_i} = 1)\right]}{P(T_0 = 0)\, P(T_1 = 1 \mid T_0 = 0)\,\prod_{i=2}^{n}\left[\left(\prod_{j=t_{i-1}+1}^{t_i-1} P(Y_j = 0)\right) P(Y_{t_i} = 1)\right]}.$$
Many numerator/denominator cancellations occur and all that remains after cancellations is one factor of the numerator's outer product:
$$\left(\prod_{j=t_n+1}^{t_{n+1}-1} P(Y_j = 0)\right)\underbrace{P(Y_{t_{n+1}} = 1)}_{1/t_{n+1}} = \frac{1}{t_{n+1}}\prod_{j=t_n+1}^{t_{n+1}-1}\frac{j-1}{j} = \frac{1}{t_{n+1}}\left(\frac{t_n}{t_n+1}\cdot\frac{t_n+1}{t_n+2}\cdots\frac{t_{n+1}-2}{t_{n+1}-1}\right) = \frac{t_n}{t_{n+1}(t_{n+1}-1)}.$$
Conclude that for $n \ge 2$,
$$P(T_{n+1} = t_{n+1} \mid T_0 = 0, T_1 = 1, T_2 = t_2, \ldots, T_n = t_n) = \begin{cases} \dfrac{t_n}{t_{n+1}(t_{n+1}-1)} & \text{if } t_{n+1} > t_n, \\ 0 & \text{otherwise.} \end{cases} \quad (4)$$
In particular, $P(T_{n+1} = t_{n+1} \mid T_i = t_i \text{ for } i = 0, 1, \ldots, n)$ does not depend on $t_0, t_1, \ldots, t_{n-1}$, so $P(T_{n+1} \mid T_0, \ldots, T_n) = P(T_{n+1} \mid T_n)$, i.e. $(T_n)$ is a Markov chain with transition probabilities given by Equation (4).
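As a sanity check outside the exam itself, the transition rule in Equation (4) can be recomputed exactly from the definition of the $Y_j$, and its rows can be seen to sum to 1 by the telescoping sum $\sum_{t > u} \frac{u}{t(t-1)} = 1$. A minimal sketch (the helper name `p_next_T` is ours):

```python
from fractions import Fraction

def p_next_T(u, t):
    """P(T_{n+1} = t | T_n = u) for t > u >= 1:
    the event is Y_{u+1} = ... = Y_{t-1} = 0 and Y_t = 1."""
    prob = Fraction(1, t)                      # P(Y_t = 1) = 1/t
    for j in range(u + 1, t):
        prob *= Fraction(j - 1, j)             # P(Y_j = 0) = (j-1)/j
    return prob

for u in range(1, 8):
    # each transition probability matches u / (t (t-1)) -- Equation (4)
    for t in range(u + 1, u + 20):
        assert p_next_T(u, t) == Fraction(u, t * (t - 1))
    # the partial sums telescope: sum up to t = M equals 1 - u/M -> 1
    M = u + 200
    partial = sum(p_next_T(u, t) for t in range(u + 1, M + 1))
    assert partial == 1 - Fraction(u, M)
```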
3. Let $X, Y, Z$ be random variables defined on a common probability space, each with a discrete distribution. Explain why the function $\varphi(x) := E(Y \mid X = x)$ is characterized by the property
$$E(Y\, g(X)) = E[\varphi(X)\, g(X)] \quad (5)$$
for every bounded function $g$ whose domain is the range of $X$. Use this characterization of $E(Y \mid X)$ to verify the formula
$$E(E(Y \mid X) \mid f(X)) = E[Y \mid f(X)] \quad (6)$$
for every function $f$ whose domain is the range of $X$, and the formula
$$E(E(Y \mid X, Z) \mid X) = E[Y \mid X]. \quad (7)$$
Solution: We first show that $\varphi(x) = E(Y \mid X = x)$ satisfies Equation (5):
$$E(Y g(X)) = \sum_x P(X = x)\, E(Y g(X) \mid X = x) = \sum_x P(X = x)\, g(x) \underbrace{E(Y \mid X = x)}_{\varphi(x)} = E(g(X)\, \varphi(X)).$$
Next we show that $\varphi$ is unique, i.e. if a function $\psi$ satisfies Equation (5), then we must have $\psi(x) = E(Y \mid X = x)$. Note that the domain of $\psi$ is $\{x : P(X = x) > 0\}$. Let $x \in \{x : P(X = x) > 0\}$. To see that $\psi(x)$ must be equal to $E(Y \mid X = x)$, apply Equation (5) with $g = 1(\cdot = x)$:
$$E(Y\, 1(X = x)) = E(\psi(X)\, 1(X = x)) = \psi(x)\, P(X = x).$$
This implies that
$$\psi(x) = \frac{E(Y\, 1(X = x))}{P(X = x)} = E(Y \mid X = x),$$
using the identity $E(A \mid B) = E(A\, 1_B)/P(B)$. To verify Equation (6), observe that
$$E(E(Y \mid X) \mid f(X) = f(x)) = E(\varphi(X) \mid f(X) = f(x)) = \frac{E(\varphi(X)\, 1(f(X) = f(x)))}{P(f(X) = f(x))} \quad \text{(recall that } E(A \mid B) = E(A\, 1_B)/P(B)\text{)}$$
$$= \frac{E(Y\, 1(f(X) = f(x)))}{P(f(X) = f(x))} \quad \text{(by Equation (5) with } g(\cdot) = 1(f(\cdot) = f(x))\text{)}$$
$$= E(Y \mid f(X) = f(x)).$$
We can verify Equation (7) by direct computation:
$$E(E(Y \mid X, Z) \mid X = x) = \sum_z E(Y \mid X = x, Z = z)\, P(Z = z \mid X = x) = \sum_z \sum_y y\, P(Y = y \mid X = x, Z = z)\, P(Z = z \mid X = x)$$
$$= \sum_z \sum_y y\, \frac{P(X = x, Y = y, Z = z)}{P(X = x, Z = z)} \cdot \frac{P(X = x, Z = z)}{P(X = x)} = \sum_z \sum_y y\, \frac{P(X = x, Y = y, Z = z)}{P(X = x)}$$
$$= \sum_y y\, P(Y = y \mid X = x) = E(Y \mid X = x).$$
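Equations (6) and (7) can also be confirmed exactly on a small example. The sketch below (not part of the exam; the joint pmf and all helper names are our own illustrative choices) builds a random rational-valued joint distribution for $(X, Y, Z)$ and checks both identities with exact arithmetic.

```python
import itertools
import random
from fractions import Fraction

random.seed(0)
# a small random joint pmf for (X, Y, Z) with exact rational probabilities
support = list(itertools.product([0, 1, 2], [0, 1], [0, 1]))   # (x, y, z)
w = [random.randint(1, 9) for _ in support]
pmf = {triple: Fraction(wi, sum(w)) for triple, wi in zip(support, w)}

def E(h):                       # E[h(X, Y, Z)] under the joint pmf
    return sum(p * h(x, y, z) for (x, y, z), p in pmf.items())

def condE_Y(event):             # E(Y | event) = E(Y 1_event) / P(event)
    return E(lambda x, y, z: y * event(x, z)) / E(lambda x, y, z: event(x, z))

f = lambda x: x % 2             # any function of X works here
# Equation (6): E(E(Y|X) | f(X) = v) equals E(Y | f(X) = v)
phi = {x0: condE_Y(lambda x, z, x0=x0: int(x == x0)) for x0 in (0, 1, 2)}
for v in (0, 1):
    ev = lambda x, z, v=v: int(f(x) == v)
    lhs = E(lambda x, y, z: phi[x] * ev(x, z)) / E(lambda x, y, z: ev(x, z))
    assert lhs == condE_Y(ev)
# Equation (7): E(E(Y|X,Z) | X = x0) equals E(Y | X = x0)
psi = {(xx, zz): condE_Y(lambda x, z, xx=xx, zz=zz: int((x, z) == (xx, zz)))
       for xx in (0, 1, 2) for zz in (0, 1)}
for x0 in (0, 1, 2):
    ev = lambda x, z, x0=x0: int(x == x0)
    lhs = E(lambda x, y, z: psi[(x, z)] * ev(x, z)) / E(lambda x, y, z: ev(x, z))
    assert lhs == condE_Y(ev)
```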
4. Suppose that a sequence of random variables $X_0, X_1, \ldots$ and a function $f$ are such that
$$E(f(X_{n+1}) \mid X_0, \ldots, X_n) = f(X_n) \quad (8)$$
for every $n = 0, 1, 2, \ldots$ Explain why this implies
$$E(f(X_{n+1}) \mid f(X_0), \ldots, f(X_n)) = f(X_n). \quad (9)$$
Give an example of such an $f$ which is not constant for $(X_n)$ a $p$, $1-p$ random walk on the integers.
Solution: Define the random vectors $X^{(n)} = \begin{pmatrix} X_0 & X_1 & \cdots & X_{n-1} \end{pmatrix}^{\top}$ and $Y^{(n)} = \begin{pmatrix} f(X_n) & 0 & \cdots & 0 \end{pmatrix}^{\top}$, taking values in $\mathbb{R}^n$, and define the function $g$ by $g(X^{(n)}) = \begin{pmatrix} f(X_0) & f(X_1) & \cdots & f(X_{n-1}) \end{pmatrix}^{\top}$. Writing $e_1 = \begin{pmatrix} 1 & 0 & \cdots & 0 \end{pmatrix}$, we have $f(X_n) = e_1 Y^{(n)}$, so
$$E(f(X_n) \mid f(X_0), \ldots, f(X_{n-1})) = e_1\, E\big(Y^{(n)} \,\big|\, g(X^{(n)})\big) = e_1\, E\big(E\big(Y^{(n)} \mid X^{(n)}\big) \,\big|\, g(X^{(n)})\big) \quad \text{(by Equation (6))}$$
$$= e_1\, E\left(\begin{pmatrix} f(X_{n-1}) & 0 & \cdots & 0 \end{pmatrix}^{\top} \,\middle|\, g(X^{(n)})\right) \quad \text{(by Equation (8))}$$
$$= E\big(f(X_{n-1}) \,\big|\, f(X_0), f(X_1), \ldots, f(X_{n-1})\big) = f(X_{n-1}),$$
which is precisely Equation (9) (with $n - 1$ in place of $n$).
As an example, let $f(x) = \left(\frac{q}{p}\right)^x$ with $q = 1 - p$. If $(X_n)$ is a $p$, $1-p$ walk on the integers, then $(f(X_n))$ is a martingale since
$$E(f(X_{n+1}) \mid X_0, \ldots, X_n) = E\left(\left(\frac{q}{p}\right)^{X_{n+1}} \,\middle|\, X_0, \ldots, X_n\right) = p\left(\frac{q}{p}\right)^{X_n + 1} + q\left(\frac{q}{p}\right)^{X_n - 1} = \frac{q^{X_n + 1}}{p^{X_n}} + \frac{q^{X_n}\, p}{p^{X_n}} = \frac{q^{X_n}}{p^{X_n}}\,(q + p) = f(X_n),$$
so by the result above, we have $E(f(X_{n+1}) \mid f(X_0), \ldots, f(X_n)) = f(X_n)$.
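The one-step martingale identity for $f(x) = (q/p)^x$ can be checked exactly with rational arithmetic (a quick sketch, not part of the exam; the value of $p$ is an arbitrary illustrative choice):

```python
from fractions import Fraction

p = Fraction(2, 3)
q = 1 - p
f = lambda x: (q / p) ** x    # the candidate martingale function

# one-step check: E(f(X_{n+1}) | X_n = x) = p f(x+1) + q f(x-1) = f(x)
for x in range(-5, 6):
    assert p * f(x + 1) + q * f(x - 1) == f(x)
```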
5. Let $S := X_1 + \cdots + X_N$ be the number of successes and $F := N - S$ be the number of failures in a $\mathrm{Poisson}(\lambda)$-distributed random number $N$ of Bernoulli trials, where given $N = n$ the $X_1, \ldots, X_n$ are independent with $P(X_i = 1) = 1 - P(X_i = 0) = p$ for some $0 \le p \le 1$. Derive the joint distribution of $S$ and $F$. How can the conclusion be generalized to multinomial trials?
Solution: Let $q = 1 - p$. We have
$$P(S = s, F = f) = \sum_{n=0}^{\infty} P(S = s, F = f \mid N = n)\, P(N = n) = \sum_{n=0}^{\infty} P\!\left(\sum_{i=1}^{n} X_i = s,\ \sum_{i=1}^{n} X_i = n - f\right) P(N = n)$$
$$= \sum_{n=0}^{\infty} 1(n = s + f)\, P\!\left(\sum_{i=1}^{n} X_i = s\right) P(N = n) = P\!\left(\sum_{i=1}^{s+f} X_i = s\right) P(N = s + f)$$
$$= \binom{s+f}{s} p^s q^f \cdot \frac{\lambda^{s+f} e^{-\lambda}}{(s+f)!} = \frac{(s+f)!}{s!\, f!} \cdot \frac{p^s \lambda^s\, q^f \lambda^f\, e^{-\lambda(p+q)}}{(s+f)!} = \frac{(\lambda p)^s e^{-\lambda p}}{s!} \cdot \frac{(\lambda q)^f e^{-\lambda q}}{f!}$$
$$= P(\mathrm{Poisson}(\lambda p) = s)\, P(\mathrm{Poisson}(\lambda q) = f).$$
In particular, $S$ and $F$ are independent Poisson random variables. In the multinomial case with $k$ categories with probabilities $p_1, p_2, \ldots, p_k$ and $N \sim \mathrm{Poisson}(\lambda)$ trials, let $S_1, S_2, \ldots, S_k$ denote the number of trials falling into categories $1, 2, \ldots, k$ respectively. Then, generalizing the result above, we have
$$P(S_1 = s_1, S_2 = s_2, \ldots, S_k = s_k) = \prod_{i=1}^{k} P(\mathrm{Poisson}(\lambda p_i) = s_i).$$
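The factorization above (Poisson thinning) is easy to spot-check numerically; a brief sketch, not part of the exam, with arbitrary illustrative values of $\lambda$ and $p$:

```python
import math

lam, p = 2.5, 0.3
q = 1 - p

def pois(mu, n):
    """Poisson(mu) pmf at n."""
    return math.exp(-mu) * mu ** n / math.factorial(n)

# P(S=s, F=f) collapses to the single term n = s + f of the sum over n,
# and should equal the product of two Poisson pmfs
for s in range(6):
    for f_ in range(6):
        n = s + f_
        joint = math.comb(n, s) * p ** s * q ** f_ * pois(lam, n)
        assert abs(joint - pois(lam * p, s) * pois(lam * q, f_)) < 1e-12
```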
6. Let $P_i$ govern a $p$, $q = 1 - p$ walk $(S_n)$ on the integers started at $S_0 = i$, with $p > q$. Let
$$f_{ij} := P_i(S_n = j \text{ for some } n \ge 1).$$
Use results derived in lectures and/or the text to present a formula for $f_{ij}$ in each of the two cases $i > j$ and $i < j$. Deduce a formula for $f_{ij}$ for $i = j$.
Solution: Case $i > j$: This can be viewed as the gambler's ruin problem for a biased coin where the bottom absorbing state is $j$ and the top absorbing state is $+\infty$; $f_{ij}$ is the probability of starting at $i$ and hitting $j$ before hitting $+\infty$. Using a result from lecture, with $a = i - j$ and $b \to +\infty$,
$$f_{ij} = P_i(\text{hit } j \text{ before } +\infty) = \lim_{b \to \infty} P_a(\text{hit } 0 \text{ before } b) = \left(\frac{q}{p}\right)^a = \left(\frac{q}{p}\right)^{i-j}.$$
Case $i < j$: Claim: since $p > q$, we are guaranteed to hit $j$ starting from $i$, so $f_{ij} = P_i(\text{hit } j) = 1$. To show this, consider the gambler's ruin problem where we flip the walk upside down: starting from $-i$, we want to reach $-j$ before reaching $+\infty$, where now a step up has probability $q$ and a step down has probability $p$, with $p > q$. Using the result from class, with $a = (-i) - (-j) = j - i$,
$$f_{ij} = \lim_{b \to \infty} P_a(\text{hit } 0 \text{ before } b) = \lim_{b \to \infty}\left[1 - \frac{\left(p/q\right)^a - 1}{\left(p/q\right)^b - 1}\right] = 1 - \lim_{b \to \infty} \frac{\left(p/q\right)^{j-i} - 1}{\left(p/q\right)^b - 1}.$$
Since $p > q$, the right-most term's denominator goes to $+\infty$ whereas the numerator is fixed, so the limit is $0$. Thus $f_{ij} = 1 - 0 = 1$.
Case $i = j$: From first-step analysis, we have
$$f_{ii} = P(\text{go 1 step up})\, P_{i+1}(\text{hit } i \text{ before } +\infty) + P(\text{go 1 step down})\, P_{i-1}(\text{hit } i) = p\left(\frac{q}{p}\right)^{(i+1)-i} + q \cdot 1 \quad \text{(using the previous results)}$$
$$= p \cdot \frac{q}{p} + q = 2q.$$
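The gambler's-ruin ingredients used above can be verified exactly: the closed form for $P_a(\text{hit } 0 \text{ before } b)$ satisfies the first-step recurrence, and it tends to $(q/p)^a$ as $b \to \infty$. A minimal sketch, not part of the exam (the value of $p$ and the helper name `hit0_before` are ours):

```python
from fractions import Fraction

p = Fraction(3, 5)
q = 1 - p
r = q / p                      # r < 1 since p > q

def hit0_before(a, M):
    """Gambler's-ruin probability of hitting 0 before M, starting from a,
    for a walk with up-probability p (the standard lecture formula)."""
    return (r ** a - r ** M) / (1 - r ** M)

# the closed form satisfies h(a) = p h(a+1) + q h(a-1) with h(0)=1, h(M)=0
M = 30
for a in range(1, M):
    assert hit0_before(a, M) == p * hit0_before(a + 1, M) + q * hit0_before(a - 1, M)
# as M -> infinity it tends to (q/p)^a, giving f_ij = (q/p)^(i-j) for i > j
for a in range(1, 6):
    assert abs(hit0_before(a, 200) - r ** a) < Fraction(1, 10 ** 20)
# first-step analysis for i = j: f_ii = p (q/p) + q = 2q
assert p * r + q == 2 * q
```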
7. Let $P_i$ govern $(X_n)$ as a Markov chain starting from $X_0 = i$, with finite state space $S$ and transition matrix $P$ which has a set of absorbing states $B$. Let $T := \min\{n \ge 1 : X_n \in B\}$ and assume that $P_i(T < \infty) = 1$ for all $i$. Derive a formula for
$$P_i(X_{T-1} = j, X_T = k) \quad \text{for } i, j \in B^c \text{ and } k \in B$$
in terms of the matrices $W := (I - Q)^{-1}$ and $R$, where $Q$ is the restriction of $P$ to $B^c \times B^c$ and $R$ is the restriction of $P$ to $B^c \times B$.
Solution:
$$P_i(X_{T-1} = j, X_T = k) = \sum_{n=1}^{\infty} P_i(X_{T-1} = j, X_T = k, T = n) = \sum_{n=1}^{\infty} Q^{n-1}(i, j)\, R(j, k),$$
since the states in $B$ are absorbing, so a path from $i \in B^c$ to $j \in B^c$ in $n - 1$ steps never enters $B$ and has probability $Q^{n-1}(i, j)$, and $P(j, k) = R(j, k)$ for $j \in B^c$, $k \in B$. Therefore
$$P_i(X_{T-1} = j, X_T = k) = \underbrace{\left(\sum_{m=0}^{\infty} Q^m(i, j)\right)}_{W(i, j)} R(j, k) = W(i, j)\, R(j, k),$$
using the fact that $\sum_{m=0}^{\infty} Q^m = (I - Q)^{-1} = W$.
8. In the same setting, let $f_{ij} := P_i(X_n = j \text{ for some } n \ge 1)$. For $i, j \in B^c$, find and explain a formula for $f_{ij}$ in terms of $W_{ij}$ and $W_{jj}$.

Solution: Let $N_j$ be the total number of times we visit state $j$ before absorption. Recall that $W_{ij} = E_i(N_j)$ and $W_{jj} = E_j(N_j)$. The event $\{X_n = j \text{ for some } n \ge 1\}$, which has probability $f_{ij}$, is the event that there is a first time at which the chain reaches $j$; by the strong Markov property, given that this happens, the number of visits to $j$ from that time onward is distributed as $N_j$ under $P_j$. First-visit analysis therefore gives
$$E_i(N_j) = f_{ij}\, E_j(N_j) + (1 - f_{ij}) \cdot 0.$$
Hence, we have
$$f_{ij} = \frac{E_i(N_j)}{E_j(N_j)} = \frac{W_{ij}}{W_{jj}}.$$
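Both absorption formulas above can be confirmed on a small concrete chain. The sketch below (not part of the exam; the chain and its numbers are our own illustrative choices) takes $B^c = \{0, 1\}$ and $B = \{2, 3\}$, computes $W = (I - Q)^{-1}$ exactly for the $2 \times 2$ case, and checks it against the series $\sum_m Q^m$, the exit-probability formula $W(i,j)R(j,k)$, and $f_{ij} = W_{ij}/W_{jj}$.

```python
from fractions import Fraction as F

# transient states B^c = {0, 1}; absorbing states B = {2, 3} (illustrative)
Q = [[F(1, 5), F(1, 2)], [F(3, 10), F(1, 10)]]   # P restricted to B^c x B^c
R = [[F(3, 10), F(0)], [F(0), F(3, 5)]]          # P restricted to B^c x B

# W = (I - Q)^{-1}, computed exactly for the 2x2 case
a, b = 1 - Q[0][0], -Q[0][1]
c, d = -Q[1][0], 1 - Q[1][1]
det = a * d - b * c
W = [[d / det, -b / det], [-c / det, a / det]]

# check W(i,j) = sum_m Q^m(i,j) via a truncated series (geometric convergence)
S = [[F(0)] * 2 for _ in range(2)]
Qm = [[F(1), F(0)], [F(0), F(1)]]                # Q^0 = I
for _ in range(200):
    for i in range(2):
        for j in range(2):
            S[i][j] += Qm[i][j]
    Qm = [[sum(Qm[i][l] * Q[l][j] for l in range(2)) for j in range(2)]
          for i in range(2)]
assert all(abs(W[i][j] - S[i][j]) < F(1, 10 ** 15)
           for i in range(2) for j in range(2))

# P_i(X_{T-1}=j, X_T=k) = W(i,j) R(j,k): the exit probabilities sum to 1
for i in range(2):
    assert sum(W[i][j] * R[j][k] for j in range(2) for k in range(2)) == 1
# f_01 = W_01 / W_11 agrees with direct first-step analysis:
# h0 = P(0,1) + P(0,0) h0  =>  h0 = (1/2) / (1 - 1/5) = 5/8
assert W[0][1] / W[1][1] == F(5, 8)
```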
9. In the same setting, let $\Phi_i(s)$ denote the probability generating function of $T$ for the Markov chain started at state $i$. Derive a system of equations which could be used to determine $\Phi_i(s)$ for all $i \in S$.

Solution: Note that for $i \in B$, $P_i(T = 0) = 1$, i.e. $\Phi_i(s) = 1$ for $i \in B$ (adopting the convention that a chain started in $B$ is absorbed at time 0). For $i \notin B$, clearly $P_i(T = 0) = 0$, and for $n \ge 1$, first-step analysis gives
$$P_i(T = n) = \sum_j P(i, j)\, P_j(T = n - 1) = \sum_{j \in B^c} Q(i, j)\, P_j(T = n - 1) + \sum_{k \in B} R(i, k)\, P_k(T = n - 1)$$
$$= \sum_{j \in B^c} Q(i, j)\, P_j(T = n - 1) + 1(n = 1) \sum_{k \in B} R(i, k).$$
So
$$\Phi_i(s) = \underbrace{P_i(T = 0)}_{0} + \sum_{n=1}^{\infty} P_i(T = n)\, s^n = \sum_{n=1}^{\infty} \left[\sum_{j \in B^c} Q(i, j)\, P_j(T = n - 1) + 1(n = 1) \sum_{k \in B} R(i, k)\right] s^n$$
$$= \sum_{j \in B^c} Q(i, j) \sum_{m=0}^{\infty} P_j(T = m)\, s^{m+1} + s \sum_{k \in B} R(i, k) = s \sum_{j \in B^c} Q(i, j)\, \Phi_j(s) + s \sum_{k \in B} R(i, k).$$
Rearranging terms gives the system
$$s \sum_{j \in B^c \setminus \{i\}} Q(i, j)\, \Phi_j(s) + (s\, Q(i, i) - 1)\, \Phi_i(s) + s \sum_{k \in B} R(i, k) = 0, \quad \text{for } i \in B^c,$$
which together with $\Phi_i(s) = 1$ for $i \in B$ determines $\Phi_i(s)$ for all $i \in S$.
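For a concrete chain and a fixed $s$, the derived linear system can be solved and compared with the series $\sum_n P_i(T = n)\, s^n$ computed by the first-step recursion. A sketch, not part of the exam (the $2$-state transient block and all numbers are our own illustrative choices):

```python
from fractions import Fraction as F

# transient block B^c = {0, 1} of an absorbing chain (illustrative numbers)
Q = [[F(1, 5), F(1, 2)], [F(3, 10), F(1, 10)]]
r = [F(3, 10), F(3, 5)]        # r[i] = sum over k in B of R(i, k)
s = F(1, 2)                    # evaluate the PGFs at a fixed s

# solve the derived 2x2 system (s Q - I) Phi = -s r exactly (Cramer's rule)
a, b = s * Q[0][0] - 1, s * Q[0][1]
c, d = s * Q[1][0], s * Q[1][1] - 1
det = a * d - b * c
rhs = [-s * r[0], -s * r[1]]
phi = [(rhs[0] * d - b * rhs[1]) / det, (a * rhs[1] - rhs[0] * c) / det]

# compare with the truncated series sum_n P_i(T = n) s^n, where
# P_i(T = 1) = r[i] and P_i(T = n) = sum_j Q(i,j) P_j(T = n - 1) for n >= 2
pT = r[:]
series = [pT[0] * s, pT[1] * s]
sn = s
for _ in range(200):
    pT = [sum(Q[i][j] * pT[j] for j in range(2)) for i in range(2)]
    sn *= s
    for i in range(2):
        series[i] += pT[i] * sn
assert all(abs(phi[i] - series[i]) < F(1, 10 ** 15) for i in range(2))
```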
10. Let $X$ be a non-negative integer valued random variable with probability generating function $\varphi(s)$ for $0 \le s \le 1$. Let $N$ be independent of $X$ with the $\mathrm{Geometric}(p)$ distribution $P(N = n) = (1 - p)^n p$ for $n = 0, 1, 2, \ldots$ where $0 < p < 1$. Find a formula for $P(N < X)$ in terms of $\varphi$ and $p$.

Solution:
$$P(N < X) = \sum_{x=0}^{\infty} P(N < X \mid X = x)\, P(X = x) = \sum_{x=0}^{\infty} P(N \le x - 1)\, P(X = x) = \sum_{x=0}^{\infty} \left(1 - (1 - p)^x\right) P(X = x)$$
$$= \sum_{x=0}^{\infty} P(X = x) - \sum_{x=0}^{\infty} (1 - p)^x\, P(X = x) = 1 - \varphi(1 - p).$$
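As a quick numerical check outside the exam, take $X \sim \mathrm{Poisson}(\lambda)$, whose PGF is $\varphi(s) = e^{\lambda(s-1)}$, and compare the truncated double sum for $P(N < X)$ with $1 - \varphi(1-p)$ (the values of $\lambda$ and $p$ are arbitrary illustrative choices):

```python
import math

lam, p = 1.7, 0.4
phi = lambda s: math.exp(lam * (s - 1))    # PGF of X ~ Poisson(lam)

# direct sum: P(N < X) = sum_x P(X = x) P(N <= x - 1), with
# P(N <= x - 1) = 1 - (1 - p)^x for the Geometric(p) on {0, 1, 2, ...}
direct = sum(math.exp(-lam) * lam ** x / math.factorial(x) * (1 - (1 - p) ** x)
             for x in range(80))
assert abs(direct - (1 - phi(1 - p))) < 1e-12
```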
11. Let $X$ be a non-negative integer valued random variable with usual probability generating function $\varphi(s)$ for $0 \le s \le 1$. Define the tail probability generating function $\Psi(s)$ by
$$\Psi(s) := \sum_{n=1}^{\infty} P(X \ge n)\, s^n.$$
Use the identity
$$P(X = n) = P(X \ge n) - P(X \ge n + 1)$$
to derive a formula for $\Psi(s)$ in terms of $s$ and $\varphi(s)$ for $0 \le s \le 1$. Discuss what happens for $s = 1$.

Solution: We have
$$\varphi(s) = \sum_{n=0}^{\infty} P(X = n)\, s^n = \sum_{n=0}^{\infty} \left(P(X \ge n) - P(X \ge n + 1)\right) s^n = \sum_{n=0}^{\infty} P(X \ge n)\, s^n - \sum_{n=0}^{\infty} P(X \ge n + 1)\, s^n$$
$$= \underbrace{P(X \ge 0)}_{1} + \sum_{n=1}^{\infty} P(X \ge n)\, s^n - s^{-1} \sum_{m=1}^{\infty} P(X \ge m)\, s^m = 1 + \Psi(s) - s^{-1}\, \Psi(s) = 1 + \Psi(s)\left(1 - s^{-1}\right),$$
so
$$\Psi(s) = \frac{\varphi(s) - 1}{1 - s^{-1}}.$$
It is clear by the definition of $\Psi(s)$ that when $s = 1$, we have $\Psi(1) = \sum_{n=1}^{\infty} P(X \ge n) = E(X)$. We can also see this via l'Hôpital's rule:
$$\lim_{s \to 1} \Psi(s) = \lim_{s \to 1} \frac{\varphi(s) - 1}{1 - s^{-1}} = \lim_{s \to 1} \frac{\varphi'(s)}{s^{-2}} = \frac{\varphi'(1)}{1} = \varphi'(1) = E(X).$$
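A short numerical check, not part of the exam: for $X$ geometric on $\{0, 1, 2, \ldots\}$ with $P(X = n) = (1 - \beta)\beta^n$, we have $P(X \ge n) = \beta^n$ and $\varphi(s) = (1-\beta)/(1-\beta s)$, so both sides of the formula can be computed directly (the values of $\beta$ and $s$ are illustrative):

```python
# X geometric on {0, 1, 2, ...}: P(X = n) = (1 - b) b^n, so P(X >= n) = b^n
b = 0.6
phi = lambda s: (1 - b) / (1 - b * s)    # PGF, valid for |s| < 1/b
s = 0.5

# tail PGF computed directly from the tail probabilities ...
psi_direct = sum(b ** n * s ** n for n in range(1, 200))
# ... matches (phi(s) - 1) / (1 - 1/s)
assert abs(psi_direct - (phi(s) - 1) / (1 - 1 / s)) < 1e-12
# and Psi(1) = sum_n P(X >= n) = E(X) = b / (1 - b)
assert abs(sum(b ** n for n in range(1, 200)) - b / (1 - b)) < 1e-12
```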
12. Consider a random walk on the 3 vertices of a triangle labeled clockwise $0, 1, 2$. At each step, the walk moves clockwise with probability $p$ and counter-clockwise with probability $q$, where $p + q = 1$. Let $P$ denote the transition matrix. Observe that
$$P^2(0, 0) = 2pq; \qquad P^3(0, 0) = p^3 + q^3; \qquad P^4(0, 0) = 6p^2 q^2.$$
Derive a similar formula for $P^5(0, 0)$.

Solution: Consider a $p$, $q$ random walk on $\mathbb{Z}$. Modulo 3, it traverses the triangle described. We restrict the rest of our discussion to the random walk on $\mathbb{Z}$ started at the origin, which we want to return to state 0 of the triangle (i.e. any multiple of 3 for the walk on $\mathbb{Z}$) in 5 steps. Observe that in 5 steps, we cannot possibly reach any multiple of 3 farther than 3 away from the origin. Also, since we move an odd number of steps, we cannot return to the origin itself. However, we can reach $+3$ (4 up and 1 down, in any order) and $-3$ (4 down and 1 up, in any order). Therefore,
$$P^5(0, 0) = \underbrace{\binom{5}{1}}_{\substack{\text{in 5 moves, 1 is down} \\ \text{and the rest are up}}} p^4 q + \underbrace{\binom{5}{1}}_{\substack{\text{in 5 moves, 1 is up} \\ \text{and the rest are down}}} p\, q^4 = 5p^4 q + 5p q^4.$$
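All four return-probability formulas can be confirmed by raising the $3 \times 3$ transition matrix to powers directly (a sketch, not part of the exam; the value of $p$ is an arbitrary illustrative choice):

```python
p = 0.35
q = 1 - p
# clockwise (prob p) / counter-clockwise (prob q) walk on vertices {0, 1, 2}
P = [[0, p, q], [q, 0, p], [p, q, 0]]

def matmul(A, B):
    return [[sum(A[i][l] * B[l][j] for l in range(3)) for j in range(3)]
            for i in range(3)]

P2 = matmul(P, P)
P3 = matmul(P2, P)
P4 = matmul(P3, P)
P5 = matmul(P4, P)
assert abs(P2[0][0] - 2 * p * q) < 1e-12
assert abs(P3[0][0] - (p ** 3 + q ** 3)) < 1e-12
assert abs(P4[0][0] - 6 * p ** 2 * q ** 2) < 1e-12
assert abs(P5[0][0] - (5 * p ** 4 * q + 5 * p * q ** 4)) < 1e-12
```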
13. A branching process with $\mathrm{Poisson}(\lambda)$ offspring distribution started with one individual has extinction probability $p$ with $0 < p < 1$. Find a formula for $\lambda$ in terms of $p$.

Solution: The offspring distribution has PGF
$$\varphi(s) = \sum_{n=0}^{\infty} \frac{\lambda^n e^{-\lambda}}{n!}\, s^n = e^{-\lambda} \sum_{n=0}^{\infty} \frac{(\lambda s)^n}{n!} = e^{-\lambda} e^{\lambda s} = e^{\lambda(s - 1)}.$$
The extinction probability $p$ satisfies $p = \varphi(p) = e^{\lambda(p - 1)}$. Taking the log of both sides gives $\log p = \lambda(p - 1)$, so
$$\lambda = \frac{\log p}{p - 1}.$$
14. Suppose $(X_n)$ is a Markov chain with state space $\{0, 1, \ldots, b\}$ for some positive integer $b$, with states $0$ and $b$ absorbing and no other absorbing states. Suppose also that $(X_n)$ is a martingale. Evaluate
$$\lim_{n \to \infty} P_a(X_n = b)$$
and explain your answer carefully.

Solution: We start at $X_0 = a$. Since $(X_n)$ is a martingale, $E[X_n] = E[X_0] = a$ for all $n$. So
$$a = E[X_n] = \sum_{i=0}^{b} i\, P_a(X_n = i) = \sum_{i=1}^{b-1} i\, P_a(X_n = i) + b\, P_a(X_n = b). \quad (10)$$
Claim: from any state $i \in \{1, 2, \ldots, b-1\}$, the chain eventually reaches an absorbing state with probability 1. Assuming that this claim is true, for any state $i \in \{1, 2, \ldots, b-1\}$ we have $\lim_{n \to \infty} P_a(X_n = i) = 0$. Therefore, taking the limit as $n \to \infty$ in Equation (10) gives
$$a = b \lim_{n \to \infty} P_a(X_n = b), \quad \text{so} \quad \lim_{n \to \infty} P_a(X_n = b) = \frac{a}{b}.$$
Proof of claim: Suppose that from some state $i \in \{1, 2, \ldots, b-1\}$ we cannot eventually reach an absorbing state with probability 1. Let $k$ be the state closest to $0$ that we can eventually reach from state $i$. Then from state $k$, we cannot reach any state in $\{0, 1, \ldots, k-1\}$. Since $(X_n)$ is a martingale, $E[X_{n+1} \mid X_n = k] = k$; but since $k$ is not an absorbing state, there must be some probability of reaching a state in $\{0, 1, \ldots, k-1\}$ (otherwise, we would have $E[X_{n+1} \mid X_n = k] > k$). Hence, we reach a contradiction: the chain can indeed reach the absorbing state $0$. By considering the highest state $< b$ that we can eventually reach from state $i$, a similar argument proves that the chain can eventually reach state $b$ from state $i$.
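The $a/b$ answer is easy to confirm for the simplest example of such a martingale, the simple symmetric walk on $\{0, \ldots, b\}$ with $0$ and $b$ absorbing (a sketch, not part of the exam; $b = 6$ and the number of iterations are illustrative choices):

```python
b = 6
# simple symmetric walk on {0, ..., b} with 0 and b absorbing: a martingale
P = [[0.0] * (b + 1) for _ in range(b + 1)]
P[0][0] = P[b][b] = 1.0
for i in range(1, b):
    P[i][i - 1] = P[i][i + 1] = 0.5

for a in range(b + 1):
    dist = [float(i == a) for i in range(b + 1)]   # start at state a
    for _ in range(2000):                          # push far toward absorption
        dist = [sum(dist[i] * P[i][j] for i in range(b + 1))
                for j in range(b + 1)]
    assert abs(dist[b] - a / b) < 1e-9             # lim P_a(X_n = b) = a/b
```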