
18.445 Problem Set 3. Solutions

Exercise 11 (K&T 2.5 p.105) A Markov chain Xn ∈ {0, 1, 2}, starting from X0 = 0, has
the transition probability matrix
 
P = \begin{pmatrix}
0.7 & 0.2 & 0.1 \\
0.3 & 0.5 & 0.2 \\
0 & 0 & 1
\end{pmatrix}

Let T = inf{n ≥ 0| Xn = 2} be the first time that the process reaches state 2, where it is
absorbed. If in some experiment we observed such a process and noted that absorption
has not taken place yet, we might be interested in the conditional probability that the
process is in state 0 (or 1), given that absorption has not taken place. Determine
P[X3 = 0 | T > 3].

We first calculate

P^3 = \begin{pmatrix}
0.457 & 0.23 & 0.313 \\
0.345 & 0.227 & 0.428 \\
0 & 0 & 1
\end{pmatrix}.

Since T > 3, we know that X3 = 0 or X3 = 1. We are given that X0 = 0. Thus,

P[X3 = 0 | T > 3] = \frac{P[X3 = 0]}{P[X3 = 0] + P[X3 = 1]} = \frac{p^{(3)}_{00}}{p^{(3)}_{00} + p^{(3)}_{01}} = \frac{.457}{.457 + .23} = \frac{457}{687}.
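As a quick numerical check (a sketch, not part of the original solution; it assumes numpy is available):

import numpy as np

# Transition matrix from Exercise 11; state 2 is absorbing.
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.0, 0.0, 1.0]])

P3 = np.linalg.matrix_power(P, 3)   # three-step transition probabilities
p00, p01 = P3[0, 0], P3[0, 1]

# P[X3 = 0 | T > 3] with X0 = 0: condition on not having hit state 2 by time 3.
print(P3)
print(p00 / (p00 + p01))            # approximately 0.6652 = 457/687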

Exercise 12 (K&T 3.8 p.115) Two urns A and B contain a total of n balls. Assume that
at time t there were exactly k balls in A. At time t + 1 an urn is selected at random in
proportion to its content (i.e. A is selected with probability k/n and B with probability
(n − k)/n). Then a ball is selected from A with probability p and from B with probability
q = 1 − p and placed in the previously chosen urn. Determine the transition probability
matrix for the Markov chain Xt = number of balls in urn A at time t.

There are four possibilities if Xt = k:

1. If A is picked to receive and A is picked to give, Xt+1 = k. This occurs with probability (k/n) · p.

2. If A is picked to receive and B is picked to give, Xt+1 = k + 1. This occurs with probability (k/n) · q.

3. If B is picked to receive and A is picked to give, Xt+1 = k − 1. This occurs with probability ((n − k)/n) · p.

4. If B is picked to receive and B is picked to give, Xt+1 = k. This occurs with probability ((n − k)/n) · q.

Clearly, k = 0 is an absorbing state, since A is selected to gain a ball with probability 0; likewise, k = n is an absorbing state, since A is always selected to gain a ball, but the ball comes from A, so there is no change. From the above probabilities, we have P[Xt+1 = k + 1 | Xt = k] = kq/n and P[Xt+1 = k − 1 | Xt = k] = (n − k)p/n. Finally,

P[Xt+1 = k | Xt = k] = kp/n + (n − k)q/n = (kp + (n − k)q)/n.

Putting this into a matrix gives:
 
P = \begin{pmatrix}
1 & 0 & 0 & 0 & \cdots & 0 \\
\frac{(n-k)p}{n} & \frac{kp+(n-k)q}{n} & \frac{kq}{n} & 0 & \cdots & 0 \\
0 & \frac{(n-k)p}{n} & \frac{kp+(n-k)q}{n} & \frac{kq}{n} & \cdots & 0 \\
\vdots & \ddots & \ddots & \ddots & \ddots & \vdots \\
0 & \cdots & 0 & \frac{(n-k)p}{n} & \frac{kp+(n-k)q}{n} & \frac{kq}{n} \\
0 & \cdots & 0 & 0 & 0 & 1
\end{pmatrix},

where row k (for 0 < k < n) is evaluated at that value of k, so the chain is tridiagonal with absorbing states 0 and n.
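For concreteness, here is a short sketch that builds this matrix for given n and p (the helper name transition_matrix is ours, not from the text, and the code assumes numpy; states 0 and n are hard-coded as absorbing, as argued above):

import numpy as np

def transition_matrix(n, p):
    # Urn chain of Exercise 12: row k has entries (n-k)p/n, (kp+(n-k)q)/n, kq/n.
    q = 1.0 - p
    P = np.zeros((n + 1, n + 1))
    P[0, 0] = 1.0      # k = 0 treated as absorbing
    P[n, n] = 1.0      # k = n treated as absorbing
    for k in range(1, n):
        P[k, k - 1] = (n - k) * p / n
        P[k, k]     = (k * p + (n - k) * q) / n
        P[k, k + 1] = k * q / n
    return P

print(transition_matrix(4, 0.3))   # each row sums to 1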

Exercise 13 (K&T 4.4 p.131) Consider the Markov chain Xn ∈ {0, 1, 2, 3} starting with
state X0 = 1 and with the following transition probability matrix:
 
P = \begin{pmatrix}
1 & 0 & 0 & 0 \\
0.1 & 0.2 & 0.5 & 0.2 \\
0.1 & 0.2 & 0.6 & 0.1 \\
0.2 & 0.2 & 0.3 & 0.3
\end{pmatrix}.

Determine the probability that the process never visits state 2.

Because 0 is an absorbing state, the process will eventually end up in state 0. What we
want to know is whether or not the process visits state 2 before that point. To do this, we
will stop the process if it visits state 2 by pretending that state 2 is an absorbing state:
 
P^* = \begin{pmatrix}
1 & 0 & 0 & 0 \\
0.1 & 0.2 & 0.5 & 0.2 \\
0 & 0 & 1 & 0 \\
0.2 & 0.2 & 0.3 & 0.3
\end{pmatrix}.

Then, after infinitely long time, the system will either be absorbed into state 0 or state 2.
The desired probability that the process never visits state 2 is the probability that this new
process is absorbed into state 0. We compute this using a first step analysis.

Let T = min{n ≥ 0| Xn = 0 or Xn = 2} and ui = P[ XT = 0| X0 = i ] for i = 1, 3.


Considering X0 = 1, we obtain

u1 = P10 + P11 u1 + P13 u3 = .1 + .2u1 + .2u3 .

Similarly, considering X0 = 3, we obtain

u3 = P30 + P31 u1 + P33 u3 = .2 + .2u1 + .3u3 .

Solving these equations simultaneously gives u1 = 11/52 and u3 = 9/26. Since our chain starts
in state 1, the probability that it will end up in state 0 (never visiting state 2) is

u1 = 11/52.
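The 2-by-2 linear system above is easy to verify numerically; the following sketch (assuming numpy) solves it directly:

import numpy as np

# First step analysis for Exercise 13, unknowns (u1, u3):
#   0.8*u1 - 0.2*u3 = 0.1   and   -0.2*u1 + 0.7*u3 = 0.2
A = np.array([[ 0.8, -0.2],
              [-0.2,  0.7]])
b = np.array([0.1, 0.2])
u1, u3 = np.linalg.solve(A, b)
print(u1, u3)   # approximately 0.2115 = 11/52 and 0.3462 = 9/26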


Exercise 14 (K&T 4.15 p.134) A simplified model for the spread of a rumor goes this
way: there are N = 5 people in a group of friends, of which some have heard the rumor
and others have not. During any single period of time two people are selected at random
from the group and assumed to interact. The selection is such that an encounter between
any pair of friends is just as likely as any other pair. If one of these persons has heard the
rumor and the other has not, then with probability α = 0.1 the rumor is transmitted. Let
Xn be the number of friends who have heard the rumor at time n. Assuming that the
process begins at time 0 with a single person knowing the rumor, what is the mean time
that it takes for everyone to hear it?

If k = 1, 2, 3, 4 people know the rumor, and an interaction occurs, the number of people
who will know the rumor will be either k or k + 1. The only way that a new person will
learn the rumor is if, of the two people chosen to interact, one knows the rumor and one
does not, and the person who knows the rumor transmits it (α = 0.1 probability). Since
there are k people who know the rumor and 5 − k people who do not, the number of
ways to choose such a pair is k(5 − k), and there are a total of \binom{5}{2} = 10 pairs. Thus, this
probability is precisely

P[Xn+1 = k + 1 | Xn = k] = \frac{k(5-k)}{10} \cdot (0.1) = \frac{k(5-k)}{100}

for k = 1, 2, 3, 4. Then, P[Xn+1 = k | Xn = k] = 1 − k(5 − k)/100. We know that k = 5 is an
absorbing state, since if everyone knows the rumor, no more people can learn it. Thus,
we have the transition probability matrix
 24 1 
25 25 0 0 0
 0 47 3 0 0 
 50 50 
P= 47 3
 0 0 50 50 0  .

 0 0 0 24 1 
25 25
0 0 0 0 1

We then calculate, with Q the restriction of P to the transient states {1, 2, 3, 4},

I - Q = \begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{pmatrix}
-
\begin{pmatrix}
\frac{24}{25} & \frac{1}{25} & 0 & 0 \\
0 & \frac{47}{50} & \frac{3}{50} & 0 \\
0 & 0 & \frac{47}{50} & \frac{3}{50} \\
0 & 0 & 0 & \frac{24}{25}
\end{pmatrix}
=
\begin{pmatrix}
\frac{1}{25} & -\frac{1}{25} & 0 & 0 \\
0 & \frac{3}{50} & -\frac{3}{50} & 0 \\
0 & 0 & \frac{3}{50} & -\frac{3}{50} \\
0 & 0 & 0 & \frac{1}{25}
\end{pmatrix}.

We then calculate (I − Q)^{-1} as

(I - Q)^{-1} = \begin{pmatrix}
25 & \frac{50}{3} & \frac{50}{3} & 25 \\
0 & \frac{50}{3} & \frac{50}{3} & 25 \\
0 & 0 & \frac{50}{3} & 25 \\
0 & 0 & 0 & 25
\end{pmatrix}.

Since we begin in state 1 (one person knows the rumor), the expected time until absorption (when everyone has heard the rumor) is the sum of the elements in row 1 of (I − Q)^{-1}:

25 + 50/3 + 50/3 + 25 = 250/3.

Remark: We could also solve this by considering the first step analysis equations

v1 = 1 + P11 v1 + P12 v2
v2 = 1 + P22 v2 + P23 v3
v3 = 1 + P33 v3 + P34 v4
v4 = 1 + P44 v4 + P45 v5
where vi = E[T | X0 = i] and T = min{n ≥ 0 | Xn = 5}. Noting that v5 = 0, we can solve this
system to obtain v1 = 250/3.
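Either computation can be checked numerically; the sketch below (assuming numpy, with Q the transient block written above) recovers the fundamental matrix and the expected absorption time:

import numpy as np

# Transient block of the rumor chain (states 1..4).
Q = np.array([[24/25, 1/25,  0.0,   0.0],
              [0.0,   47/50, 3/50,  0.0],
              [0.0,   0.0,   47/50, 3/50],
              [0.0,   0.0,   0.0,   24/25]])

N = np.linalg.inv(np.eye(4) - Q)   # fundamental matrix (I - Q)^{-1}
print(N)
print(N[0].sum())                  # expected absorption time from state 1: 250/3 ~ 83.33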

Exercise 15 (K&T 4.17 p.134) The damage Xn ∈ {0, 1, 2} of a system subjected to wear
is a Markov chain with transition probability matrix
 
P = \begin{pmatrix}
0.7 & 0.3 & 0 \\
0 & 0.6 & 0.4 \\
0 & 0 & 1
\end{pmatrix}.

The system starts in state 0 and it fails when it first reaches state 2. Let

T = min{n ≥ 0| Xn = 2}

be the time of failure. Evaluate the moment generating function

u(s) = E[s^T]

for 0 < s < 1.

We again use first step analysis. Denote ui(s) = E[s^T | X0 = i]. Then,

u0(s) = P00 · E[s^{T+1} | X0 = 0] + P01 · E[s^{T+1} | X0 = 1] = P00 · s · u0(s) + P01 · s · u1(s)

⇒ u0(s) = s(.7u0(s) + .3u1(s)),

and

u1(s) = P11 · E[s^{T+1} | X0 = 1] + P12 · E[s^{T+1} | X0 = 2] = P11 · s · u1(s) + P12 · s · 1

⇒ u1(s) = s(.6u1(s) + .4).


This last equation gives

u1(s) = \frac{.4s}{1 - .6s}.

Then,

u0(s) = \frac{.3s \cdot u1(s)}{1 - .7s} = \frac{.3s}{1 - .7s} \cdot \frac{.4s}{1 - .6s} = \frac{.12s^2}{(1 - .7s)(1 - .6s)}.

