PRP UNIT IV Markov Process
Markov Process
Definition :
A random process X(t) is called a Markov process if, for
t0 < t1 < t2 < · · · < tn < tn+1, we have
P[X(tn+1) ≤ xn+1 / X(tn) ≤ xn, X(tn−1) ≤ xn−1, ..., X(t0) ≤ x0] = P[X(tn+1) ≤ xn+1 / X(tn) ≤ xn]
i.e., in words: if the future behaviour of a process depends only on the
present state and not on the past, then the process is called a Markov
process.
Note :
(1) Thus from the definition, the future state xn+1 of the process
at time tn+1 {i.e., X (tn+1 ) = xn+1 }, depends only on the
present state xn of the process at tn {i.e., X (tn ) = xn } and
not on the past values x0 , x1 , · · · , xn−1 .
(2) X (t0 ), X (t1 ), · · · , X (tn+1 ) are random variables
t0 , t1 , · · · , tn+1 are times
x0 , x1 , · · · , xn+1 are states of random process
Markov Chain
Definition :
A Markov process whose state space is discrete (finite or countable) is
called a Markov chain.
nth step Transition probability :
P[Xn = aj / X0 = ai] = Pij^(n)
P[X1 = aj / X0 = ai] = Pij^(1)    [ai to aj in 1 step]
P[X2 = aj / X1 = ai] = Pij^(1)    [ai to aj in 1 step]
P[X2 = aj / X0 = ai] = Pij^(2)    [ai to aj in 2 steps]
By the Chapman-Kolmogorov equation, the n-step t.p.m. is the nth power of
the one-step t.p.m.:
P^(n) = P^n, i.e., {Pij^(n)} = {Pij}^n
Pij^(n) = Σ_{∀k} Pik^(n−1) · Pkj    or    Pij^(n) = Σ_{∀k} Pik · Pkj^(n−1)
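As a numerical illustration of these relations, here is a minimal sketch assuming Python with numpy (the matrix is the 3-state t.p.m. used in a worked example later in this unit): it computes P^(n) as the nth matrix power and checks the recursion Pij^(n) = Σ_k Pik^(n−1) Pkj.

```python
import numpy as np

# 3-state t.p.m. (each row sums to 1); taken from a worked example below
P = np.array([[0.0, 2/3, 1/3],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])

n = 4
Pn = np.linalg.matrix_power(P, n)            # P^(n) = P^n
Pn_1 = np.linalg.matrix_power(P, n - 1)      # P^(n-1)

# Chapman-Kolmogorov recursion: P^(n) = P^(n-1) · P, entry by entry
assert np.allclose(Pn, Pn_1 @ P)

print(Pn)   # entry (i, j) is the n-step transition probability Pij^(n)
```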
Irreducible :
A Markov chain is said to be irreducible if every state can be reached
from every other state, i.e., Pij^(n) > 0 for some n and for all i and j.
The t.p.m. of an irreducible chain is an irreducible matrix.
Otherwise, the chain is said to be reducible (non-irreducible).
Return state :
If Pii^(n) > 0 for some n > 1, then the state i of the Markov chain is
called a return state.
Period :
Let i be a return state, i.e., Pii^(m) > 0 for some m. Then the period di
of state i is defined as
di = GCD{m : Pii^(m) > 0},
where GCD = greatest common divisor.
If di = 1, then state i is said to be aperiodic.
If di > 1, then state i is said to be periodic with period di.
Regular matrix :
A stochastic matrix P is said to be a regular matrix if all the entries of
P^m are positive for some positive integer m.
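A minimal sketch, assuming Python with numpy, of how these definitions can be checked numerically; the helper names is_regular and period are introduced here for illustration only. Regularity is tested by looking for a power P^m with all entries positive, and the period of state i is taken as the GCD of the step counts m for which Pii^(m) > 0.

```python
import numpy as np
from math import gcd
from functools import reduce

P = np.array([[0.0, 1.0],      # a small 2-state example t.p.m.
              [0.5, 0.5]])

def is_regular(P, max_power=50):
    """Regular: some power P^m has all entries strictly positive."""
    Pm = P.copy()
    for _ in range(max_power):
        if np.all(Pm > 0):
            return True
        Pm = Pm @ P
    return False

def period(P, i, max_power=50):
    """Period of state i: GCD of all m (up to max_power) with P_ii^(m) > 0."""
    returns = [m for m in range(1, max_power + 1)
               if np.linalg.matrix_power(P, m)[i, i] > 0]
    return reduce(gcd, returns) if returns else None

print(is_regular(P))   # True: P^2 already has all entries positive
print(period(P, 0))    # 1, so state 0 is aperiodic
```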
Stationary (long-run) distribution :
If π = [π1, π2, ..., πn] is the stationary distribution of a Markov chain
with t.p.m. P, then
π1 + π2 + ... + πn = 1
and πP = π.
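These two conditions form a linear system that determines π. A minimal sketch, assuming Python with numpy (solve_stationary is a helper name introduced here, not a library function):

```python
import numpy as np

def solve_stationary(P):
    """Solve pi P = pi together with sum(pi) = 1."""
    n = P.shape[0]
    # Stack (P^T - I) pi = 0 with the normalising equation sum(pi) = 1
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.append(np.zeros(n), 1.0)
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

P = np.array([[0.0, 2/3, 1/3],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])
print(solve_stationary(P))   # ~[0.3333 0.3704 0.2963] = [9/27 10/27 8/27]
```

The same helper reproduces the long-run probabilities worked out by hand in the examples that follow.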
A 3-state Markov chain has the t.p.m.
      0    2/3   1/3
     1/2    0    1/2
     1/2   1/2    0
Find the long-run probabilities of the Markov chain.
Solution :
Let the t.p.m. of the Markov chain be
          0    2/3   1/3
P =      1/2    0    1/2        (1)
         1/2   1/2    0
Let π = [π1, π2, π3] be the stationary state distribution of the 3-state
Markov chain.
πP = π     (2)
Σ_{∀i} πi = 1     (3)
Now,
                      0    2/3   1/3
∴ (2) ⇒ [π1 π2 π3]   1/2    0    1/2   = [π1 π2 π3]
                     1/2   1/2    0
i.e., π2/2 + π3/2 = π1         (4)
      2π1/3 + π3/2 = π2        (5)
      π1/3 + π2/2 = π3         (6)
Solving (4), (5) and (6) together with (3) gives
π1 = 9/27 = long-run probability for the I state
π2 = 10/27 = long-run probability for the II state
π3 = 8/27 = long-run probability for the III state
i.e., the long-run probability distribution of the given 3 states
= lim_{n→∞} P^(n)
= [π1 π2 π3]
= [9/27  10/27  8/27]
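As a numerical cross-check of this limit statement, assuming Python with numpy, the rows of P^n settle to [9/27, 10/27, 8/27] for moderately large n:

```python
import numpy as np

P = np.array([[0.0, 2/3, 1/3],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])

# For a regular chain every row of P^n approaches the stationary distribution
print(np.linalg.matrix_power(P, 50))
# each row ~ [0.3333 0.3704 0.2963] = [9/27 10/27 8/27]
```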
A man travels every day either by train (T) or by car (C). He never takes
the train on two consecutive days; after a day by car, he is equally likely
to take the train or the car the next day. Find the probability that, in
the long run, he travels by car.
Solution :
The states are T and C, and the t.p.m. is
            T     C
P =   T     0     1          (1)
      C    1/2   1/2
W.K.T. πP = π     (2)
and π1 + π2 = 1     (3)
i.e.,
                  0     1
(2) ⇒ [π1 π2]    1/2   1/2   = [π1 π2]
i.e., 0 · π1 + π2/2 = π1      (4)
      π1 + π2/2 = π2          (5)
From (4), π1 = π2/2; substituting in (3), π2/2 + π2 = 1, so that
π1 = 1/3 = long-run probability that he travels by train
π2 = 2/3 = long-run probability that he travels by car
∴ P[the man travels by car in the long run] = 2/3.
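The same answer can be checked by simulation. A minimal sketch, assuming Python with numpy, that follows the man's choice day by day (the next state is drawn using only the current state, as the Markov property requires) and counts the fraction of car days:

```python
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[0.0, 1.0],    # row T: after a train day he always takes the car
              [0.5, 0.5]])   # row C: after a car day, train or car equally likely

state, car_days, n_days = 0, 0, 100_000   # state 0 = train, 1 = car
for _ in range(n_days):
    state = rng.choice(2, p=P[state])     # tomorrow depends only on today
    car_days += (state == 1)

print(car_days / n_days)   # ~0.667, close to the theoretical 2/3
```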
A Markov chain {Xn}, n = 0, 1, 2, ..., with three states 0, 1, 2 has the
t.p.m.
          3/4   1/4    0
P =      1/4   1/2   1/4        (1)
           0    3/4   1/4
and initial probability distribution P{X0 = i} = 1/3, i = 0, 1, 2. Find
(a) P[X2 = 1 / X1 = 0]
(b) P[X3 = 1, X2 = 2, X1 = 1, X0 = 2]
(c) P[X2 = 1].
Solution :
The initial probability distribution is P{X0 = i} = 1/3, i = 0, 1, 2,
i.e., P{X0 = 0} = 1/3     (2)
      P{X0 = 1} = 1/3     (3)
      P{X0 = 2} = 1/3     (4)
i.e., P^(0) = (1/3  1/3  1/3)     (5)
(a) P[X2 = 1 / X1 = 0] = P01^(1) = 1/4     {∵ P[Xn+1 = aj / Xn = ai] = Pij}
(b) P[X3 = 1, X2 = 2, X1 = 1, X0 = 2] :
Use P[A/B] = P(A ∩ B)/P(B) = P(AB)/P(B)
⇒ P(AB) = P[A/B] · P(B)
Similarly, P(ABC) = P[A/BC] · P[B/C] · P(C)
P(ABCD) = P[A/BCD] · P[B/CD] · P[C/D] · P(D)
∴ P[X3 = 1, X2 = 2, X1 = 1, X0 = 2]
= P[X3 = 1 / X2 = 2, X1 = 1, X0 = 2] · P[X2 = 2 / X1 = 1, X0 = 2]
  · P[X1 = 1 / X0 = 2] · P[X0 = 2]
= P[X3 = 1 / X2 = 2] · P[X2 = 2 / X1 = 1] · P[X1 = 1 / X0 = 2] · P[X0 = 2]
  (by the Markov property)
= P21^(1) · P12^(1) · P21^(1) · P2^(0)
= (3/4) · (1/4) · (3/4) · (1/3)     (Refer (1) and (4))
= 3/64
(c) P[X2 = 1] = Σ_{i=0}^{2} P[X2 = 1 / X0 = i] · P[X0 = i]
= P[X2 = 1 / X0 = 0] · P[X0 = 0]
  + P[X2 = 1 / X0 = 1] · P[X0 = 1]
  + P[X2 = 1 / X0 = 2] · P[X0 = 2]
= P01^(2) · P0^(0) + P11^(2) · P1^(0) + P21^(2) · P2^(0)     (6)
P^(2) = P^2 = P × P
         3/4   1/4    0         3/4   1/4    0
    =   1/4   1/2   1/4    ×   1/4   1/2   1/4
          0    3/4   1/4          0    3/4   1/4
         5/8    5/16   1/16        P00^(2)  P01^(2)  P02^(2)
    =   5/16    1/2    3/16   =    P10^(2)  P11^(2)  P12^(2)
        3/16    9/16   4/16        P20^(2)  P21^(2)  P22^(2)
∴ (6) ⇒ P[X2 = 1] = (5/16) · (1/3) + (1/2) · (1/3) + (9/16) · (1/3)
= (5 + 8 + 9)/48 = 22/48
= 11/24
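The answers 3/64 for (b) and 11/24 for (c) can be reproduced exactly with a short script, assuming Python with numpy and the standard fractions module:

```python
import numpy as np
from fractions import Fraction as F

P = np.array([[F(3, 4), F(1, 4), F(0)],
              [F(1, 4), F(1, 2), F(1, 4)],
              [F(0),    F(3, 4), F(1, 4)]], dtype=object)
p0 = np.array([F(1, 3)] * 3, dtype=object)   # initial distribution (1/3, 1/3, 1/3)

# (b) P[X3=1, X2=2, X1=1, X0=2] = P[X0=2] · P21 · P12 · P21
print(p0[2] * P[2, 1] * P[1, 2] * P[2, 1])   # 3/64

# (c) P[X2=1] is the second entry of p0 · P^2
P2 = P.dot(P)
print(p0.dot(P2)[1])                         # 11/24
```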
W.K.T. πP = π     (7)
and π1 + π2 + π3 = 1     (8),
where π = [π1 π2 π3] is the stationary distribution of the 3-state Markov
chain.
Consider a Markov chain Xn, n = 1, 2, 3, ..., with states S = {1, 2, 3},
t.p.m.
         .1   .5   .4
P =     .6   .2   .2
         .3   .4   .3
and initial probability distribution P^(0) = ( .7  .2  .1 ). Find
(a) P[X2 = 3 / X0 = 1]
(b) P(X2 = 3)
(c) P[X3 = 2, X2 = 3, X1 = 3, X0 = 2].
Solution :
(a) P[X2 = 3 / X0 = 1] = P13^(2) = .26
(b) P[X2 = 3] = Σ_{i=1}^{3} P[X2 = 3, X0 = i]
= Σ_{i=1}^{3} P[X2 = 3 / X0 = i] · P[X0 = i]     [∵ P(A ∩ B) = P(A/B) · P(B)]
= P[X2 = 3 / X0 = 1] · P[X0 = 1]
  + P[X2 = 3 / X0 = 2] · P[X0 = 2]
  + P[X2 = 3 / X0 = 3] · P[X0 = 3]
= P13^(2) · P[X0 = 1] + P23^(2) · P[X0 = 2] + P33^(2) · P[X0 = 3]
= (.26)(.7) + (.34)(.2) + (.29)(.1)
= .279
(c) P[X3 = 2, X2 = 3, X1 = 3, X0 = 2]
= P[X3 = 2 / X2 = 3] · P[X2 = 3 / X1 = 3] · P[X1 = 3 / X0 = 2] · P[X0 = 2]
= P32^(1) · P33^(1) · P23^(1) · P[X0 = 2]
= (.4)(.3)(.2)(.2)
= .0048
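A quick numerical check of all three answers, assuming Python with numpy (states 1, 2, 3 correspond to indices 0, 1, 2):

```python
import numpy as np

P = np.array([[.1, .5, .4],
              [.6, .2, .2],
              [.3, .4, .3]])
p0 = np.array([.7, .2, .1])

P2 = P @ P
print(P2[0, 2])                              # (a) P13^(2) ~ .26
print(p0 @ P2[:, 2])                         # (b) P[X2 = 3] ~ .279
print(p0[1] * P[1, 2] * P[2, 2] * P[2, 1])   # (c) ~ .0048
```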
A gambler has Rs. 2. He bets Re. 1 at a time and wins Re. 1 with
probability 1/2. He stops playing if he loses Rs. 2 or wins Rs. 4.
(a) Find the t.p.m. of the related Markov chain.
(b) Find the probability that he has lost his money at the end of 5 plays.
(c) Find the probability that the game lasts more than 7 plays.
Solution : Let Xn represent the amount with the player at the end of the
nth round of the play. The game is over if the player loses all his money,
i.e., (Xn = 0), or wins Rs. 4, i.e., (Xn = 6).
The state space is {0, 1, 2, 3, 4, 5, 6}.
If he wins a round, the probability is p = 1/2.
If he loses a round, the probability is q = 1 − p = 1/2.
He starts with Rs. 2.
(b) Probability that the player has lost his money at the end of five
plays :
Since the player initially has Rs. 2, the initial state probability
distribution of {Xn} is
P^(0) = (0  0  1  0  0  0  0)     (2)
The t.p.m. of the chain asked for in (a), with states Rs. 0, 1, ..., 6, is

          1     0     0     0     0     0     0
         1/2    0    1/2    0     0     0     0
          0    1/2    0    1/2    0     0     0
P =       0     0    1/2    0    1/2    0     0         (1)
          0     0     0    1/2    0    1/2    0
          0     0     0     0    1/2    0    1/2
          0     0     0     0     0     0     1

P^(1) = P^(0) · P = (0  0  1  0  0  0  0) · P
      = (0  1/2  0  1/2  0  0  0)
      = (P_Rs.0^(1)  P_Rs.1^(1)  P_Rs.2^(1)  P_Rs.3^(1)  P_Rs.4^(1)  P_Rs.5^(1)  P_Rs.6^(1))
(the state probability distribution after the I play)
Similarly,
P^(2) = P^(1) · P = (1/4  0  1/2  0  1/4  0  0)        (after the II play)
P^(3) = P^(2) · P = (1/4  1/4  0  3/8  0  1/8  0)      (after the III play)
P^(4) = P^(3) · P = (3/8  0  5/16  0  1/4  0  1/16)        (after the IV play)
P^(5) = P^(4) · P = (3/8  5/32  0  9/32  0  1/8  1/16)     (after the V play)
∴ P[the man has lost his money at the end of the 5th play]
= the probability that the gambler has zero rupees after the V play
= P_Rs.0^(5)
= the entry corresponding to the state '0' in P^(5)
= 3/8
(c) The probability that the game lasts more than 7 plays :
P^(6) = P^(5) · P = (29/64  0  7/32  0  13/64  0  1/8)          (after the VI play)
P^(7) = P^(6) · P = (29/64  7/64  0  27/128  0  13/128  1/8)    (after the VII play)
Now,
P[the game lasts more than 7 plays]
= P[the system is neither in state 0 nor in state 6 at the end of the
  seventh play]
= P[X7 = 1, 2, 3, 4 or 5]
= 7/64 + 0 + 27/128 + 0 + 13/128
= 27/64
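Both answers, 3/8 for (b) and 27/64 for (c), can be reproduced by propagating the state distribution P^(n) = P^(n−1) · P, assuming Python with numpy and the standard fractions module:

```python
import numpy as np
from fractions import Fraction as F

# t.p.m. of the gambler's chain; states Rs. 0, 1, ..., 6 (0 and 6 absorbing)
P = np.zeros((7, 7), dtype=object)
P[0, 0] = P[6, 6] = F(1)
for i in range(1, 6):
    P[i, i - 1] = P[i, i + 1] = F(1, 2)

p = np.array([F(0)] * 7, dtype=object)
p[2] = F(1)                    # he starts with Rs. 2

for n in range(1, 8):
    p = p.dot(P)               # state distribution after n plays
    if n == 5:
        print(p[0])            # (b) P[lost all his money by play 5] = 3/8
print(sum(p[1:6]))             # (c) P[game lasts more than 7 plays] = 27/64
```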
A Markov chain moves among three cities A, B and C with the t.p.m. given
below. Find the long-run probability of each city.
Solution :
             A     B     C
       A     0     1     0
P =    B    2/3    0    1/3         (1)
       C    2/3   1/3    0
Let π = [π1 π2 π3] be the stationary distribution of the chain. Then
πP = π     (2)
Σ_{∀i} πi = 1     (3)
Now,
                       0     1     0
∴ (2) ⇒ [π1 π2 π3]    2/3    0    1/3   = [π1 π2 π3]
                      2/3   1/3    0
i.e., 2π2/3 + 2π3/3 = π1         (4)
      π1 + π3/3 = π2             (5)
      π2/3 = π3                  (6)
Solving (4), (5) and (6) together with (3) gives
π1 = 8/20 = long-run probability for the city A
π2 = 9/20 = long-run probability for the city B
π3 = 3/20 = long-run probability for the city C
i.e., the long-run probability distribution of the given 3 states
= lim_{n→∞} P^(n)
= [π1 π2 π3]
= [8/20  9/20  3/20]
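An alternative way to obtain the same long-run probabilities, assuming Python with numpy, is to take the left eigenvector of P for eigenvalue 1 and normalise it so that its entries sum to 1:

```python
import numpy as np

P = np.array([[0.0, 1.0, 0.0],
              [2/3, 0.0, 1/3],
              [2/3, 1/3, 0.0]])

# pi P = pi  <=>  P^T pi^T = pi^T, so pi is an eigenvector of P^T for eigenvalue 1
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
pi = pi / pi.sum()             # normalise so the probabilities sum to 1
print(pi)                      # ~[0.40 0.45 0.15] = [8/20 9/20 3/20]
```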
THANK YOU