
Probability for Data Science – Master’s Degree in Data Science – A.Y. 2023/2024
Academic Staff: Francesca Collet, Paolo Dai Pra

PROBLEM SET 10
Discrete-time Markov chains I: transition probabilities and law of the chain

P10.1. Three white and three black balls are distributed in two urns in such a way that each
contains three balls. We say that the system is in state i (with i = 0, 1, 2, 3) if the first urn contains
i white balls. At each step, we draw one ball from each urn and place the ball drawn from the
first urn into the second, and conversely with the ball from the second urn. Let Xn denote the
state of the system after the n-th step. Explain why (Xn )n∈N is a Markov chain and determine its
transition matrix.

Solution. The process described in the text is a Markov chain since, to determine the transition
probabilities, it suffices to know the current composition of the urns at the moment of the draw, that
is, it suffices to know only the current state of the chain.
The state space of this chain is S = {0, 1, 2, 3}. We determine the transition probabilities.
• If the chain is in state 0, the two urns have the following composition of balls: U1 | • • • | and
U2 | ◦ ◦ ◦ |. Therefore, if we draw one ball from each urn and we swap the two balls, we end
up with a white ball in the first urn (state 1) with certainty. As a consequence, we obtain
P01 = 1 and, due to the stochasticity of the transition matrix, P00 = P02 = P03 = 0.

• If the chain is in state 1, the two urns have the following composition of balls: U1 | • • ◦ | and
U2 | ◦ ◦ • |. Therefore, the chain will jump to
– state 0, if we draw the white ball from the first urn and the black ball from the second
urn, giving P10 = 1/3 · 1/3 = 1/9;
– state 1, if we swap either two white balls or two black balls, giving P11 = 1/3 · 2/3 + 2/3 · 1/3 = 4/9;
– state 2, if we draw a black ball from the first urn and a white ball from the second urn,
giving P12 = 2/3 · 2/3 = 4/9.
Moreover, it follows that P13 = 0, due to the stochasticity of the transition matrix.
• If the chain is in state 2, the two urns have the following composition of balls: U1 | • ◦ ◦ | and
U2 | ◦ • • |. Therefore, the chain will jump to
– state 1, if we draw a white ball from the first urn and a black ball from the second
urn, giving P21 = 2/3 · 2/3 = 4/9;
– state 2, if we swap either two white balls or two black balls, giving P22 = 2/3 · 1/3 + 1/3 · 2/3 = 4/9;
– state 3, if we draw a black ball from the first urn and a white ball from the second urn,
giving P23 = 1/3 · 1/3 = 1/9.
Moreover, it follows that P20 = 0, due to the stochasticity of the transition matrix.
• If the chain is in state 3, the two urns have the following composition of balls: U1 | ◦ ◦ ◦ | and
U2 | • • • |. Therefore, if we draw one ball from each urn and we swap the two balls, we end
up with a black ball in the first urn (state 2) with certainty. As a consequence, we obtain
P32 = 1 and, due to the stochasticity of the transition matrix, P30 = P31 = P33 = 0.
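As a quick numerical cross-check (not part of the original solution; it assumes Python with NumPy), the transition matrix above can be rebuilt directly from the drawing probabilities:

import numpy as np

# State i = number of white balls in the first urn (0, 1, 2 or 3).
# Urn 1 holds i white and 3 - i black balls; urn 2 holds 3 - i white and i black balls.
P = np.zeros((4, 4))
for i in range(4):
    p_white_u1 = i / 3          # probability of drawing a white ball from urn 1
    p_white_u2 = (3 - i) / 3    # probability of drawing a white ball from urn 2
    if i > 0:
        P[i, i - 1] = p_white_u1 * (1 - p_white_u2)   # urn 1 loses a white ball
    if i < 3:
        P[i, i + 1] = (1 - p_white_u1) * p_white_u2   # urn 1 gains a white ball
    P[i, i] = p_white_u1 * p_white_u2 + (1 - p_white_u1) * (1 - p_white_u2)  # same-colour swap

print(P)               # row 1 is [1/9, 4/9, 4/9, 0] and row 2 is [0, 4/9, 4/9, 1/9], as above
print(P.sum(axis=1))   # each row sums to 1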

P10.2 (Random walk on a cycle graph). Fix N ∈ N. A particle moves on a cycle graph∗ with
vertices 0, 1, . . . , N (labeled in a clockwise order). At each step it has a probability p of moving
clockwise and 1 − p of moving counterclockwise. Let Xn denote its location on the cycle after the
n-th step. The process (Xn )n∈N is a Markov chain. Determine its transition probabilities.

Solution. The state space is S = {0, 1, . . . , N }. If i ∈ {1, . . . , N − 1} the chain jumps from i to i + 1
with probability p and from i to i − 1 with probability 1 − p. We have to take into account that
the cyclic arrangement of the vertices also allows transitions from N to 0 and vice versa. Thus, the
transition probabilities are given by

  Pij = 1 − p   if j = i − 1
        p       if j = i + 1
        0       otherwise

if i ∈ {1, . . . , N − 1}, and P01 = PN0 = p, P0N = PN,N−1 = 1 − p.

∗ A cycle graph is a graph that consists of N nodes connected in a closed chain.
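The wrap-around transitions are easy to encode with indices taken modulo N + 1; the following sketch (again Python with NumPy; the function name cycle_walk_matrix and the values N = 5, p = 0.7 are only illustrative) builds the corresponding transition matrix:

import numpy as np

def cycle_walk_matrix(N, p):
    # Random walk on the cycle with vertices 0, 1, ..., N:
    # clockwise step with probability p, counterclockwise with 1 - p.
    P = np.zeros((N + 1, N + 1))
    for i in range(N + 1):
        P[i, (i + 1) % (N + 1)] = p        # clockwise neighbour (N wraps around to 0)
        P[i, (i - 1) % (N + 1)] = 1 - p    # counterclockwise neighbour (0 wraps around to N)
    return P

P = cycle_walk_matrix(N=5, p=0.7)
print(P.sum(axis=1))    # every row sums to 1
print(P[0], P[-1])      # P[0, 1] = 0.7, P[0, 5] = 0.3 and P[5, 0] = 0.7, P[5, 4] = 0.3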

P10.3 (Occupancy model). Consider a sequence of independent trials, each consisting of placing
a ball at random in one of k given urns. Let Xn be the number of occupied urns after the n-th trial.
The process (Xn )n∈N is a Markov chain. Determine its state space and its transition probabilities.

Solution. The state space is S = {0, 1, . . . , k}. The chain is in state i if exactly i urns are occupied.
If the chain is in state i ∈ {1, . . . , k − 1} and we place a ball at random, it will jump to state i + 1
if the ball is placed in an empty urn; whereas, it will stay put if the ball is placed in an occupied
urn. Thus, the transition probabilities are given by
  Pij = i/k         if j = i
        (k − i)/k   if j = i + 1
        0           otherwise

if i ∈ {1, . . . , k − 1}, and P01 = 1 (if all urns are empty, when placing a ball at random, one urn
certainly becomes occupied) and Pkk = 1 (if all urns are occupied, when placing a ball at random, it is
certainly placed in an occupied urn).
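A minimal sketch of this matrix in Python (NumPy assumed; occupancy_matrix is an illustrative name). Note that the formulas i/k and (k − i)/k also cover the boundary rows, reproducing P01 = 1 and Pkk = 1:

import numpy as np

def occupancy_matrix(k):
    # Transition matrix for the number of occupied urns among k urns.
    P = np.zeros((k + 1, k + 1))
    for i in range(k + 1):
        P[i, i] = i / k                # the ball lands in an already occupied urn
        if i < k:
            P[i, i + 1] = (k - i) / k  # the ball lands in one of the k - i empty urns
    return P

print(occupancy_matrix(4))             # rows sum to 1; row 0 gives P01 = 1, row 4 gives P44 = 1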

P10.4 (Example from cell genetics). Each cell of a certain organism contains N particles, some
of which are of type A, the others of type B. Daughter cells are formed by cell division, but prior to
the division each particle replicates itself; the daughter cell inherits N particles chosen at random.
Let Xn be the number of particles of type A contained in a cell of the n-th generation. Determine
its state space and its transition probabilities.

Solution. The state space is S = {0, 1, . . . , N }. The chain is in state i if the mother cell contains
exactly i particles of type A. If the chain is in state i ∈ {1, . . . , N − 1}, it jumps to state j (i.e., the
daughter cell contains exactly j particles of type A) whenever the daughter cell inherits N particles
as follows: j chosen from the 2i particles of type A of the parental cell and N − j chosen from the
2N − 2i particles of type B of the parental cell. Thus, the transition probabilities are given by

  Pij = (2i choose j)(2N − 2i choose N − j) / (2N choose N)   if max{0, 2i − N} ≤ j ≤ min{2i, N}§
        0                                                     otherwise

if i ∈ {1, . . . , N − 1}, and P00 = PNN = 1 (all particles contained in the mother cell are of the same
type and then, with certainty, the daughter cell inherits only particles of the same type).

§ How to determine the lower and upper bounds for j? The following inequalities must be satisfied at the same
time: 0 ≤ j ≤ N (state j belongs to the state space S), j ≤ 2i and N − j ≤ 2N − 2i (the binomial coefficients are
strictly positive). Equivalently, we get max{0, 2i − N} ≤ j ≤ min{2i, N}.
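These hypergeometric probabilities are easy to tabulate; the sketch below (plain Python, using math.comb; cell_transition and the choice N = 4 are illustrative) computes Pij and checks that each row sums to 1:

from math import comb

def cell_transition(i, j, N):
    # P_ij: the daughter cell inherits j type-A particles out of N, drawn without
    # replacement from the 2i type-A and 2N - 2i type-B copies of the mother cell.
    if max(0, 2 * i - N) <= j <= min(2 * i, N):
        return comb(2 * i, j) * comb(2 * N - 2 * i, N - j) / comb(2 * N, N)
    return 0.0

N = 4
P = [[cell_transition(i, j, N) for j in range(N + 1)] for i in range(N + 1)]
print([sum(row) for row in P])   # each row sums to 1 (hypergeometric distribution)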

P10.5. Let (Xn )n∈N be a Markov chain with state space S = {0, 1, 2} and transition matrix
1 1 1
2 3 6
0 1 2
3 3 .
1 1
2 0 2

If P (X0 = 0) = P (X0 = 1) = 14 , find E(X3 ).

Solution. To find E(X3) we need the distribution of the Markov chain at time n = 3. This distribution
is given by αP³, where α = (1/4  1/4  α2) is the initial distribution. Due to the normalization
constraint, we immediately get α2 = 1/2. Moreover, since

        [ 13/36  11/54  47/108 ]
  P³ =  [  4/9    4/27  11/27  ]
        [  5/12   2/9   13/36  ] ,

we compute

                           [ 13/36  11/54  47/108 ]
  αP³ = (1/4  1/4  1/2)    [  4/9    4/27  11/27  ]  =  (59/144  43/216  169/432)
                           [  5/12   2/9   13/36  ]

and finally calculate E(X3) = 0 · 59/144 + 1 · 43/216 + 2 · 169/432 = 53/54.
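The arithmetic above can be cross-checked numerically (not part of the original solution; Python with NumPy assumed); the printed decimals match 59/144, 43/216, 169/432 and 53/54:

import numpy as np

P = np.array([[1/2, 1/3, 1/6],
              [0,   1/3, 2/3],
              [1/2, 0,   1/2]])
alpha = np.array([1/4, 1/4, 1/2])

dist3 = alpha @ np.linalg.matrix_power(P, 3)   # distribution of X3, i.e. alpha P^3
print(dist3)                                   # [0.4097... 0.1990... 0.3912...]
print(dist3 @ np.array([0, 1, 2]))             # E(X3) = 0.9814... = 53/54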

P10.6 (From past exam). Consider a discrete-time Markov chain (Xn )n∈N on the state space
S = {0, 1, 2, 3, 4} and with the transition graph shown in the figure below, where the labels Pij , for
i, j ∈ S, are suitable positive constants.
[Figure: transition graph on S = {0, 1, 2, 3, 4} with labeled edges: 0 → 0 (1/3), 0 → 1 (1/3), 0 → 4 (P04);
1 → 2 (1/8), 1 → 3 (1/4), 1 → 4 (P14); 2 → 2 (P22), 2 → 3 (4/5); 3 → 2 (P32); 4 → 4 (P44).]

(a) Determine the numeric values of the labels Pij , for i, j ∈ S, and write the transition probability
matrix of the chain.

(b) Let α = (1 0 0 0 0) be an initial distribution for the chain. Determine the probabilities
P(X0 = 1, X1 = 3, X2 = 2, X3 = 2), P(X3 = 2 | X0 = 0) and P(X3 = 4). To spare you
pointless calculations, you may use, if needed, that
1 1 19 11 41 
27 27 180 180 54
0 31 11 5 
 0 200 50 8 
41 84
P3 = 
 
0 0 125 125 0.
0 21 4
 0 25 25 0
0 0 0 0 1

Solution.

(a) From the transition graph shown in the figure above we deduce the transition probability
matrix 1 1 
3 3 0 0 P04
1 1
0 0 P14 
 
 8 4 
4
P=  0 0 P22 5 0 .
 0 0 P32 0 0 
 

0 0 0 0 P44
The transition probability matrix P is a stochastic matrix, therefore the entries of each row
must sum to 1. As a consequence, to find the labels, we have to solve the system of linear
equations

  1/3 + 1/3 + P04 = 1
  1/8 + 1/4 + P14 = 1
  P22 + 4/5 = 1
  P32 = 1
  P44 = 1,

from which it follows P04 = 1/3, P14 = 5/8, P22 = 1/5 and P32 = P44 = 1. Therefore, we obtain
the transition probability matrix
1 1 1

3 3 0 0 3
1 1 5
0 0

 8 4 8
1 4
0
P= 0 5 5 0
0 0 1 0 0
 

0 0 0 0 1

and the transition graph


[Figure: transition graph with the computed labels: 0 → 0 (1/3), 0 → 1 (1/3), 0 → 4 (1/3);
1 → 2 (1/8), 1 → 3 (1/4), 1 → 4 (5/8); 2 → 2 (1/5), 2 → 3 (4/5); 3 → 2 (1); 4 → 4 (1).]

(b) We determine P(X0 = 1, X1 = 3, X2 = 2, X3 = 2). By applying the multiplication rule, we
obtain

  P(X0 = 1, X1 = 3, X2 = 2, X3 = 2) = P(X1 = 3, X2 = 2, X3 = 2 | X0 = 1) P(X0 = 1) = 0,

since P(X0 = 1) = α1 = 0, where α1 is the second entry (relative to state 1) of the initial
distribution α = (1 0 0 0 0).
We are asked for P(X3 = 2 | X0 = 0), the probability of jumping from state 0 to state 2 in three
steps. This probability corresponds to the entry in position (0, 2) of the transition matrix P³,
that is P(X3 = 2 | X0 = 0) = (P³)02 = 19/180.
Observe that P(X3 = 4) is the fifth component (relative to state 4) of the distribution of the
chain at time 3, in other words P(X3 = 4) = (αP³)4. Since

                        [ 1/27  1/27  19/180  11/180  41/54 ]
                        [  0     0    31/200  11/50    5/8  ]
  αP³ = (1 0 0 0 0)     [  0     0    41/125  84/125    0   ]  =  (1/27  1/27  19/180  11/180  41/54),
                        [  0     0    21/25    4/25     0   ]
                        [  0     0     0        0       1   ]

we obtain P(X3 = 4) = 41/54.
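As a final cross-check (not part of the original solution; Python with NumPy assumed), cubing the matrix found in part (a) recovers both probabilities:

import numpy as np

P = np.array([[1/3, 1/3, 0,   0,   1/3],
              [0,   0,   1/8, 1/4, 5/8],
              [0,   0,   1/5, 4/5, 0  ],
              [0,   0,   1,   0,   0  ],
              [0,   0,   0,   0,   1  ]])
alpha = np.array([1, 0, 0, 0, 0])

P3 = np.linalg.matrix_power(P, 3)
print(P3[0, 2])          # P(X3 = 2 | X0 = 0) = 19/180 = 0.1055...
print((alpha @ P3)[4])   # P(X3 = 4) = 41/54 = 0.7592...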
