PRP UNIT IV Markov Process

The document defines and gives examples of Markov processes and Markov chains. A Markov process is a random process in which the probability of future states depends only on the present state, not on past states. Examples given are weather prediction based on the previous two days and market shares of brands based on the previous month. The document also defines key terms related to Markov chains, such as states, transition probabilities and steady-state probabilities, and works through example problems, including the long-run probabilities of a 3-state Markov chain.


Markov Process

Definition:
A random process X(t) is called a Markov process if, for t0 < t1 < t2 < ... < tn < tn+1,

    P[X(tn+1) ≤ xn+1 / X(tn) ≤ xn, X(tn-1) ≤ xn-1, ..., X(t0) ≤ x0] = P[X(tn+1) ≤ xn+1 / X(tn) ≤ xn].

In words: if the future behaviour of a process depends only on the present state and not on the past, then the process is called a Markov process.

Note:
(1) Thus, from the definition, the future state xn+1 of the process at time tn+1 (i.e., X(tn+1) = xn+1) depends only on the present state xn of the process at tn (i.e., X(tn) = xn) and not on the past values x0, x1, ..., xn-1.
(2) X(t0), X(t1), ..., X(tn+1) are random variables;
    t0, t1, ..., tn+1 are times;
    x0, x1, ..., xn+1 are states of the random process.

Examples of Markov process:

(1) The probability of rain today depends only on the weather conditions of the previous two days, not on earlier weather conditions.
(2) The market shares of three brands of toothpaste A, B, C in a month depend only on their market shares in the previous month, not on those of earlier months.

Types of Markov Process

(1) Continuous random process + Markov property = continuous-parameter Markov process
(2) Continuous random sequence + Markov property = discrete-parameter Markov process
(3) Discrete random process + Markov property = continuous-parameter Markov chain
(4) Discrete random sequence + Markov property = discrete-parameter Markov chain

Markov Chain

A random process {X(t)} satisfying the Markov property, where X(t) takes only discrete values (whether t is discrete or continuous), is called a Markov chain.

States of a Markov chain and state space:
The random sequence {Xn}, n = 0, 1, 2, ..., where

    P[Xn = an / Xn-1 = an-1, Xn-2 = an-2, ..., X1 = a1, X0 = a0] = P[Xn = an / Xn-1 = an-1]

for all n, is called a Markov chain.
Here a0, a1, ..., ak are called the states of the Markov chain, and {a0, a1, ..., ak} is the state space.

One step transition probability:

The conditional probability P[Xn = aj / Xn-1 = ai] is called the one-step transition probability from state ai to state aj at the nth step (trial), and is denoted by pij(n-1, n).

Homogeneous Markov chain:
If the one-step transition probability does not depend on the step, i.e., pij(n-1, n) = pij(m-1, m), then the Markov chain is called a homogeneous Markov chain, or the chain is said to have stationary (constant) transition probabilities.

Transition probability matrix (t.p.m.):

When the Markov chain is homogeneous, the one-step (stationary) transition probability is denoted by pij, and the matrix of all such values, P = {pij}, is called the (one-step) t.p.m.

Note:
The t.p.m. of a Markov chain is a stochastic matrix, since
(1) pij ≥ 0 for all i, j, and
(2) Σj pij = 1 for all i (i.e., the sum of the elements in each row is 1).
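
As a quick numerical illustration (a minimal sketch in Python/NumPy, not part of the original notes; the matrix used is the t.p.m. of Problem 1 later in this unit), a stochastic matrix can be checked by verifying that every entry is non-negative and every row sums to 1:

    import numpy as np

    # t.p.m. of the 3-state chain from Problem 1 (used here only for illustration)
    P = np.array([[0,   2/3, 1/3],
                  [1/2, 0,   1/2],
                  [1/2, 1/2, 0  ]])

    def is_stochastic(P, tol=1e-12):
        """Check that all entries are >= 0 and each row sums to 1."""
        return bool(np.all(P >= 0)) and np.allclose(P.sum(axis=1), 1.0, atol=tol)

    print(is_stochastic(P))   # True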

nth step Transition probability

The n-step transition probability is P[Xn = aj / X0 = ai] = Pij^(n). For example,

    P[X1 = aj / X0 = ai] = Pij^(1)      [ai to aj in 1 step]
    P[X2 = aj / X1 = ai] = Pij^(1)      [ai to aj in 1 step]
    P[X2 = aj / X0 = ai] = Pij^(2)      [ai to aj in 2 steps]

Chapman - Kolmogorov theorem (for the n-step t.p.m.):

If P is the t.p.m. of a homogeneous Markov chain, then the n-step t.p.m. is

    P^(n) = P^n,   i.e.,   {Pij^(n)} = {pij}^n,

with entries

    Pij^(n) = Σk Pik^(n-1) pkj = Σk pik Pkj^(n-1),

where P is the (one-step) t.p.m.
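
A minimal NumPy sketch of the Chapman-Kolmogorov relation (assuming the same illustrative matrix as above): the n-step t.p.m. is the nth matrix power of P, and each entry of P^2 equals the sum over intermediate states.

    import numpy as np

    P = np.array([[0,   2/3, 1/3],
                  [1/2, 0,   1/2],
                  [1/2, 1/2, 0  ]])

    # n-step t.p.m. via Chapman-Kolmogorov: P^(n) = P^n
    P2 = np.linalg.matrix_power(P, 2)

    # entrywise check: P2[i, j] = sum over k of P[i, k] * P[k, j]
    i, j = 0, 2
    print(P2[i, j], sum(P[i, k] * P[k, j] for k in range(3)))   # both values agree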

Probability distributions of a Markov chain:

If the probability that the process is in state ai at any arbitrary step is pi (i = 1, 2, ..., k), then the row vector p = (p1, p2, ..., pk) is called the probability distribution of the process at that time (step).
At step n, write P[Xn = a1] = p1^(n), P[Xn = a2] = p2^(n), ..., so that

    p^(n) = [p1^(n), p2^(n), ..., pk^(n)]

is the probability distribution of the Markov chain at the nth step.

Probability distribution for the nth step

If p^(n) = [p1^(n), p2^(n), ..., pk^(n)] is the (state) probability distribution of the process at step (time) t = n, then the probability distribution at step t = n + 1 is

    p^(n+1) = p^(n) · P,

where P is the t.p.m. of the chain. Iterating, p^(n) = p^(0) · P^n.
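
A short sketch of this recursion (Python/NumPy; the t.p.m. and the uniform initial distribution are illustrative assumptions): propagating step by step with p^(n+1) = p^(n)·P gives the same result as multiplying by P^n once.

    import numpy as np

    P  = np.array([[0,   2/3, 1/3],
                   [1/2, 0,   1/2],
                   [1/2, 1/2, 0  ]])
    p0 = np.array([1/3, 1/3, 1/3])        # illustrative initial distribution

    # step-by-step propagation: p^(n+1) = p^(n) . P
    p = p0.copy()
    for _ in range(4):
        p = p @ P

    # the same thing in one shot: p^(4) = p^(0) . P^4
    print(np.allclose(p, p0 @ np.linalg.matrix_power(P, 4)))   # True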

Classification of states of a Markov chain

Irreducible:
A Markov chain is said to be irreducible if every state can be reached from every other state, i.e., Pij^(n) > 0 for some n, for all i and j.
The t.p.m. of an irreducible chain is an irreducible matrix. Otherwise, the chain is said to be reducible (non-irreducible).

Return state:
If Pii^(n) > 0 for some n ≥ 1, then state i of the Markov chain is called a return state.

Period:
Let i be a return state. The period di of state i is defined as

    di = GCD{m : Pii^(m) > 0},

where GCD denotes the greatest common divisor.
If di = 1, state i is said to be aperiodic.
If di > 1, state i is said to be periodic with period di.

Regular matrix:
A stochastic matrix P is said to be regular if all the entries of P^m are positive for some positive integer m.
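
A small sketch of the regularity test (Python/NumPy, again with the illustrative matrix from Problem 1): P is regular if some power P^m has all entries strictly positive.

    import numpy as np

    P = np.array([[0,   2/3, 1/3],
                  [1/2, 0,   1/2],
                  [1/2, 1/2, 0  ]])

    def first_regular_power(P, max_power=20):
        """Return the smallest m <= max_power with all entries of P^m positive, else None."""
        Pm = np.eye(P.shape[0])
        for m in range(1, max_power + 1):
            Pm = Pm @ P
            if np.all(Pm > 0):
                return m
        return None

    print(first_regular_power(P))   # 2, since every entry of P^2 is positive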

Steady state probability of Markov chain

Steady-state probability of a Markov chain (also called the stationary distribution, long-run probability, or limiting form of the state probability distribution):

Let π = (π1 π2 ... πn) be the steady-state distribution of an n-state chain, i.e., the vector satisfying

    π1 + π2 + ... + πn = 1   and   πP = π.
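
A minimal numerical sketch for computing the steady-state vector (Python/NumPy; the helper name is our own, not a standard API): solve πP = π together with Σ πi = 1 as a linear least-squares system.

    import numpy as np

    def stationary_distribution(P):
        """Solve pi P = pi with sum(pi) = 1 for a finite-state t.p.m. P."""
        n = P.shape[0]
        # stack (P^T - I) with a row of ones for the normalisation constraint
        A = np.vstack([P.T - np.eye(n), np.ones(n)])
        b = np.zeros(n + 1)
        b[-1] = 1.0
        pi, *_ = np.linalg.lstsq(A, b, rcond=None)
        return pi

The problems below can all be checked with this kind of computation.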

Markov Process Problem 1

A 3-state Markov chain has t.p.m.

    P = [  0    2/3  1/3 ]
        [ 1/2    0   1/2 ]
        [ 1/2   1/2   0  ]          (1)

Find the long-run probabilities of the Markov chain.

Solution:
Let π = [π1, π2, π3] be the stationary distribution of the 3-state Markov chain.

Find the values of π1, π2, π3 using the relations

    πP = π                           (2)
    Σi πi = 1                        (3)

Now (2) gives

    [π1 π2 π3] [  0    2/3  1/3 ]  = [π1 π2 π3]
               [ 1/2    0   1/2 ]
               [ 1/2   1/2   0  ]

i.e.,
    π2/2 + π3/2 = π1                 (4)
    2π1/3 + π3/2 = π2                (5)
    π1/3 + π2/2 = π3                 (6)

Solving (4), (5), (6) together with (3), we get

    π1 = 9/27  = long-run probability of state I
    π2 = 10/27 = long-run probability of state II
    π3 = 8/27  = long-run probability of state III

i.e., the long-run probability distribution over the 3 states is

    lim(n→∞) p^(n) = [π1 π2 π3] = [ 9/27  10/27  8/27 ].
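
A quick numerical check of Problem 1 (a NumPy sketch, not part of the original solution): for this regular chain, every row of P^n approaches the stationary distribution, which should match [9/27, 10/27, 8/27].

    import numpy as np

    P = np.array([[0,   2/3, 1/3],
                  [1/2, 0,   1/2],
                  [1/2, 1/2, 0  ]])

    pi = np.linalg.matrix_power(P, 100)[0]   # any row of a high power of P
    print(pi)                                # approx [0.3333 0.3704 0.2963]
    print(np.array([9, 10, 8]) / 27)         # [0.3333 0.3704 0.2963]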

Markov Process Problem 2


A man either drives a car or catches a train to go to the office each day. He never goes two days in a row by train, but if he drives one day, then the next day he is just as likely to drive again as he is to travel by train. Now suppose that on the first day he tossed a fair die and drove to work if and only if a six appeared. Find
(a) the probability that he takes a train on the third day;
(b) the probability that he drives to work in the long run.

Solution: Let Xn be the mode of travel on day n.
The state space is S = {Train, Car} = {T, C} and the t.p.m. is

              T     C
    P =  T [  0     1  ]
         C [ 1/2   1/2 ]            (1)

(a) The initial probability distribution is determined by the outcome of the toss of the fair die:

    P[X1 = C] = probability of travelling by car   = 1/6
    P[X1 = T] = probability of travelling by train = 5/6

∴ The initial distribution of the Markov chain is

    p^(1) = [ 5/6   1/6 ]                      (T, C)

and, using p^(n+1) = p^(n) · P,

    p^(2) = p^(1) · P = [ 5/6  1/6 ] [  0    1  ]  = [ 1/12   11/12 ]
                                     [ 1/2  1/2 ]

    p^(3) = p^(2) · P = [ 1/12  11/12 ] [  0    1  ]  = [ 11/24   13/24 ]
                                        [ 1/2  1/2 ]      (T)      (C)


∴ P[the man travels by train on the third day] = 11/24.

(b) Long-run probability distribution for the two states Train and Car:
Let π = [π1 π2] be the stationary distribution of the process. We know that

    πP = π                           (2)
    π1 + π2 = 1                      (3)

(2) ⇒ [π1 π2] [  0    1  ]  = [π1 π2]
              [ 1/2  1/2 ]

i.e.,
    0·π1 + π2/2 = π1                 (4)
    π1 + π2/2 = π2                   (5)

Solving (4), (5) together with (3), we get

    π1 = 1/3 = long-run probability of travelling by train
    π2 = 2/3 = long-run probability of travelling by car

∴ P[the man drives to work in the long run] = 2/3.
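
A numerical check of Problem 2 (a NumPy sketch under the same state ordering (T, C) as above):

    import numpy as np

    P  = np.array([[0,   1  ],
                   [1/2, 1/2]])
    p1 = np.array([5/6, 1/6])                   # day-1 distribution from the die toss

    p3 = p1 @ np.linalg.matrix_power(P, 2)      # distribution on day 3
    print(p3)                                   # [0.4583 0.5417] = [11/24, 13/24]

    pi = np.linalg.matrix_power(P, 100)[0]      # long-run distribution
    print(pi)                                   # approx [0.3333 0.6667] = [1/3, 2/3]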

Markov Process Problem 3

The t.p.m. ofa Markov chain {X n , n = 0,1,2,3,. . . } with 3 states


3/4 1/4 0
0,1,2 is P =  1/4 1/2 1/4  and the initial distribution is
0 3/4 1/4
P{X0 = i} = 31 , i = 0, 1, 2. Find
(a) P[x2 = 2/x1 = 0]
(b) P[x3 = 1, x2 = 2, x1 = 1, x0 = 2]
(c) P[x2 = 1]
(d) state distribution of the process at the second step
(e) Stationary distribution of the process.
Markov Process(Markovian process)

Markov Process Problem 3

Solution: The given t.p.m. of the 3-state Markov chain is

              0     1     2
    P =  0 [ 3/4   1/4    0  ]
         1 [ 1/4   1/2   1/4 ]
         2 [  0    3/4   1/4 ]      (1)

The initial probability distribution is P{X0 = i} = 1/3, i = 0, 1, 2, i.e.,

    P{X0 = 0} = 1/3                  (2)
    P{X0 = 1} = 1/3                  (3)
    P{X0 = 2} = 1/3                  (4)

i.e.,  p^(0) = [ 1/3  1/3  1/3 ]     (5)

(a)
    P[X2 = 1 / X1 = 0] = P01 = 1/4      (since P[Xn+1 = aj / Xn = ai] = pij)

(b) P[X3 = 1, X2 = 2, X1 = 1, X0 = 2]:
Use P[A/B] = P(A ∩ B)/P(B) = P(AB)/P(B), so that

    P(AB) = P[A/B] · P(B).

Similarly, P(ABC) = P[A/BC] · P[B/C] · P(C) and
    P(ABCD) = P[A/BCD] · P[B/CD] · P[C/D] · P(D).

∴ P[X3 = 1, X2 = 2, X1 = 1, X0 = 2]
  = P[X3 = 1 / X2 = 2, X1 = 1, X0 = 2] · P[X2 = 2 / X1 = 1, X0 = 2] · P[X1 = 1 / X0 = 2] · P[X0 = 2]
  = P[X3 = 1 / X2 = 2] · P[X2 = 2 / X1 = 1] · P[X1 = 1 / X0 = 2] · P[X0 = 2]       (Markov property)
  = P21 · P12 · P21 · P[X0 = 2]
  = (3/4) · (1/4) · (3/4) · (1/3)       (refer (1) and (4))
  = 3/64

(c) By the total probability theorem,

    P[X2 = 1] = Σi P[X2 = 1 / X0 = i] · P[X0 = i]
              = P[X2 = 1 / X0 = 0] · P[X0 = 0]
                + P[X2 = 1 / X0 = 1] · P[X0 = 1]
                + P[X2 = 1 / X0 = 2] · P[X0 = 2]
              = P01^(2) · P[X0 = 0] + P11^(2) · P[X0 = 1] + P21^(2) · P[X0 = 2]     (6)

To find P01^(2), P11^(2), P21^(2), we need P^(2) = P² = P × P:

    P^(2) = P² = [ 3/4  1/4   0  ] [ 3/4  1/4   0  ]
                 [ 1/4  1/2  1/4 ] [ 1/4  1/2  1/4 ]
                 [  0   3/4  1/4 ] [  0   3/4  1/4 ]

                 [ 5/8   5/16  1/16 ]   [ P00^(2)  P01^(2)  P02^(2) ]
               = [ 5/16  1/2   3/16 ] = [ P10^(2)  P11^(2)  P12^(2) ]
                 [ 3/16  9/16  4/16 ]   [ P20^(2)  P21^(2)  P22^(2) ]

∴ (6) ⇒ P[X2 = 1] = (5/16)·(1/3) + (1/2)·(1/3) + (9/16)·(1/3)
                  = (5 + 8 + 9)/48
                  = 22/48
                  = 11/24

(d) The state distribution of the process at the second step:

    p^(2) = p^(0) · P^(2) = p^(0) · P²        (∵ p^(n) = p^(0) · P^n)

          = [ 1/3  1/3  1/3 ] [ 5/8   5/16  1/16 ]
                              [ 5/16  1/2   3/16 ]
                              [ 3/16  9/16  4/16 ]

          = (1/3) [ 5/8 + 5/16 + 3/16    5/16 + 1/2 + 9/16    1/16 + 3/16 + 4/16 ]
          = (1/3) [ 18/16  22/16  8/16 ]
          = [ 3/8  11/24  1/6 ],

which is the state distribution at the 2nd step.

(e) Stationary distribution of the process:

Let π = [π1 π2 π3] be the stationary distribution of the 3-state Markov chain. We know that

    πP = π                           (7)
    π1 + π2 + π3 = 1                 (8)


Now (7) gives

    [π1 π2 π3] [ 3/4  1/4   0  ]  = [π1 π2 π3]
               [ 1/4  1/2  1/4 ]
               [  0   3/4  1/4 ]

i.e.,
    (3/4)π1 + (1/4)π2 + 0·π3 = π1             (9)
    (1/4)π1 + (1/2)π2 + (3/4)π3 = π2          (10)
    0·π1 + (1/4)π2 + (1/4)π3 = π3             (11)

Solving (9), (10), (11) together with (8), we get

    π = [π1 π2 π3] = [ 3/7  3/7  1/7 ],

which is the required stationary distribution.
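
A numerical check of Problem 3 (a NumPy sketch; states 0, 1, 2 correspond to row/column indices 0, 1, 2):

    import numpy as np

    P  = np.array([[3/4, 1/4, 0  ],
                   [1/4, 1/2, 1/4],
                   [0,   3/4, 1/4]])
    p0 = np.array([1/3, 1/3, 1/3])

    P2 = np.linalg.matrix_power(P, 2)
    print(P2)                                   # matches the two-step t.p.m. above
    print((p0 @ P2)[1])                         # (c) P[X2 = 1] = 0.4583 = 11/24
    print(p0 @ P2)                              # (d) [0.375 0.4583 0.1667] = [3/8, 11/24, 1/6]
    print(np.linalg.matrix_power(P, 200)[0])    # (e) approx [0.4286 0.4286 0.1429] = [3/7, 3/7, 1/7]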

Markov Process Problem 4

Consider a Markov chain Xn, n = 1, 2, 3, ..., with states S = {1, 2, 3}, t.p.m.

    P = [ .1  .5  .4 ]
        [ .6  .2  .2 ]
        [ .3  .4  .3 ]

and initial probability distribution p^(0) = ( .7  .2  .1 ). Find
(a) P[X2 = 3 / X0 = 1]
(b) P[X2 = 3]
(c) P[X3 = 2, X2 = 3, X1 = 3, X0 = 2].


 
Solution: The given t.p.m. is

    P = [ .1  .5  .4 ]
        [ .6  .2  .2 ]
        [ .3  .4  .3 ]

with p^(0) = ( .7  .2  .1 ). The second-step t.p.m. is

    P^(2) = P² = [ .1  .5  .4 ] [ .1  .5  .4 ]   [ .43  .31  .26 ]
                 [ .6  .2  .2 ] [ .6  .2  .2 ] = [ .24  .42  .34 ]
                 [ .3  .4  .3 ] [ .3  .4  .3 ]   [ .36  .35  .29 ]

(a) P[X2 = 3 / X0 = 1] = P13^(2) = .26

(b) By the total probability theorem,

    P[X2 = 3] = Σ(i=1 to 3) P[X2 = 3, X0 = i]
              = Σ(i=1 to 3) P[X2 = 3 / X0 = i] · P[X0 = i]
              = P[X2 = 3 / X0 = 1] P[X0 = 1] + P[X2 = 3 / X0 = 2] P[X0 = 2] + P[X2 = 3 / X0 = 3] P[X0 = 3]
              = P13^(2) P[X0 = 1] + P23^(2) P[X0 = 2] + P33^(2) P[X0 = 3]
              = (.26)(.7) + (.34)(.2) + (.29)(.1)
              = .279

(c) P[X3 = 2, X2 = 3, X1 = 3, X0 = 2]
    = P[X3 = 2 / X2 = 3] · P[X2 = 3 / X1 = 3] · P[X1 = 3 / X0 = 2] · P[X0 = 2]
    = P32 P33 P23 P[X0 = 2]
    = (.4)(.3)(.2)(.2)
    = .0048
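
A numerical check of Problem 4 (a NumPy sketch; states 1, 2, 3 correspond to row/column indices 0, 1, 2):

    import numpy as np

    P  = np.array([[.1, .5, .4],
                   [.6, .2, .2],
                   [.3, .4, .3]])
    p0 = np.array([.7, .2, .1])

    P2 = np.linalg.matrix_power(P, 2)
    print(P2[0, 2])                             # (a) P[X2 = 3 / X0 = 1] = 0.26
    print((p0 @ P2)[2])                         # (b) P[X2 = 3] = 0.279
    print(p0[1] * P[1, 2] * P[2, 2] * P[2, 1])  # (c) 0.0048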

Markov Process Problem 5

A gambler has Rs. 2. He bets Re. 1 at a time and wins Re. 1 with probability 1/2. He stops playing if he loses Rs. 2 or wins Rs. 4.
(a) Find the t.p.m. of the related Markov chain.
(b) Find the probability that he has lost his money at the end of 5 plays.
(c) Find the probability that the game lasts more than 7 plays.

Solution: Let Xn represent the amount (in Rs.) with the player at the end of the nth round of play. The game is over when the player loses all his money (Xn = 0) or wins Rs. 4, i.e., reaches Rs. 6 (Xn = 6).
The state space is S = {0, 1, 2, 3, 4, 5, 6}.
The probability of winning a bet is p = 1/2 and of losing a bet is q = 1 - p = 1/2.
He starts with Rs. 2.

(a) The corresponding t.p.m. of the related Markov chain (rows: present state Xn, columns: future state Xn+1, amounts in Rs.) is

               0    1    2    3    4    5    6
        0  [   1    0    0    0    0    0    0  ]
        1  [  1/2   0   1/2   0    0    0    0  ]
        2  [   0   1/2   0   1/2   0    0    0  ]
    P = 3  [   0    0   1/2   0   1/2   0    0  ]          (1)
        4  [   0    0    0   1/2   0   1/2   0  ]
        5  [   0    0    0    0   1/2   0   1/2 ]
        6  [   0    0    0    0    0    0    1  ]

(b) Probability that the player has lost his money at the end of the fifth play:
Since the player starts with Rs. 2, the initial state probability distribution of {Xn} is

    p^(0) = [ 0  0  1  0  0  0  0 ]          (2)


Writing p^(n) = [ P_Rs.0^(n)  P_Rs.1^(n)  ...  P_Rs.6^(n) ] for the state distribution after the nth play, the distribution after the first play is

    p^(1) = p^(0) · P = [ 0  0  1  0  0  0  0 ] · P
          = [ 0  1/2  0  1/2  0  0  0 ]                        (after play I)

Similarly,

    p^(2) = p^(1) · P = [ 1/4  0  1/2  0  1/4  0  0 ]          (after play II)

    p^(3) = p^(2) · P = [ 1/4  1/4  0  3/8  0  1/8  0 ]        (after play III)

 3 5 1 1 
8 0 16 0 4 0 16
P (4) = P (3) · P =  
(4) (4) (4) (4) (4) (4) (4)
PRs.0 PRs.1 PRs.2 PRs.3 PRs.4 PRs.5 PRs.6
(After IV)

!
3 5 9 1 1
(5) (4) 8 32 0 32 0 8 16
P =P ·P = (5) (5) (5) (5) (5) (5) (5)
PRs.0 PRs.1 PRs.2 PRs.3 PRs.4 PRs.5 PRs.6
(After V)
Markov Process(Markovian process)

Markov Process Problem 5

∴ P[the gambler has lost his money at the end of the 5th play]
  = the probability that the gambler has Rs. 0 after play V
  = P_Rs.0^(5), the entry corresponding to state 0 in p^(5)
  = 3/8

(c) The probability that the game lasts more than 7 plays:

    p^(6) = p^(5) · P = [ 29/64  0  7/32  0  13/64  0  1/8 ]          (after play VI)

    p^(7) = p^(6) · P = [ 29/64  7/64  0  27/128  0  13/128  1/8 ]    (after play VII)

Now,

P[the game lasts more than 7 plays]
  = P[the system is in neither state 0 nor state 6 at the end of the seventh play]
  = P[X7 ∈ {1, 2, 3, 4, 5}]
  = 7/64 + 0 + 27/128 + 0 + 13/128
  = 27/64
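
A numerical check of Problem 5 (a NumPy sketch; the absorbing gambler's chain on states 0..6 is built directly from the description):

    import numpy as np

    P = np.zeros((7, 7))
    P[0, 0] = P[6, 6] = 1.0                     # absorbing states: ruined (0) and target reached (6)
    for s in range(1, 6):
        P[s, s - 1] = P[s, s + 1] = 0.5         # win or lose Re. 1 with equal probability

    p0 = np.zeros(7)
    p0[2] = 1.0                                 # starts with Rs. 2

    p5 = p0 @ np.linalg.matrix_power(P, 5)
    print(p5[0])                                # (b) P[lost his money by play 5] = 0.375 = 3/8

    p7 = p0 @ np.linalg.matrix_power(P, 7)
    print(p7[1:6].sum())                        # (c) P[game lasts more than 7 plays] = 0.421875 = 27/64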

Markov Process Problem 6

A salesman's territory consists of three cities A, B and C. He never sells in the same city on successive days. If he sells in city A, then the next day he sells in city B. However, if he sells in either city B or city C, the next day he is twice as likely to sell in city A as in the other city. In the long run, how often does he sell in each of the cities?

Solution:
Since he never sells in the same city on successive days,

    P(A → A) = 0,  P(B → B) = 0,  P(C → C) = 0.

If he sells in A, the next day he sells in B ⇒ P(A → B) = 1.
If he sells in B, he is twice as likely to sell next in A as in C ⇒ P(B → A) = 2/3, P(B → C) = 1/3.
If he sells in C, he is twice as likely to sell next in A as in B ⇒ P(C → A) = 2/3, P(C → B) = 1/3.

The corresponding t.p.m. is

              A     B     C
    P =  A [  0     1     0  ]
         B [ 2/3    0    1/3 ]
         C [ 2/3   1/3    0  ]          (1)

Let π = [π1, π2, π3] be the stationary distribution of the 3-state Markov chain, where

    π1 = long-run probability for city A
    π2 = long-run probability for city B
    π3 = long-run probability for city C

Find the values of π1, π2, π3 using the relations

    πP = π                           (2)
    Σi πi = 1                        (3)

Now (2) gives

    [π1 π2 π3] [  0     1     0  ]  = [π1 π2 π3]
               [ 2/3    0    1/3 ]
               [ 2/3   1/3    0  ]


i.e.,
    2π2/3 + 2π3/3 = π1               (4)
    π1 + π3/3 = π2                   (5)
    π2/3 = π3                        (6)

Substituting (6) in (4), we get

    π1 = (8/9)π2                     (7)

Substituting (6) and (7) in (3), we get

    π2 = 9/20

Hence, from (6), (7) and (3),

    π1 = 8/20 = long-run probability for city A
    π2 = 9/20 = long-run probability for city B
    π3 = 3/20 = long-run probability for city C

i.e., the long-run probability distribution over the 3 cities is

    lim(n→∞) p^(n) = [π1 π2 π3] = [ 8/20  9/20  3/20 ].

Thus, in the long run, the salesman sells in city A on 40% of the days, in city B on 45% of the days and in city C on 15% of the days.
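
A numerical check of Problem 6 (a NumPy sketch with states ordered (A, B, C)):

    import numpy as np

    P = np.array([[0,   1,   0  ],
                  [2/3, 0,   1/3],
                  [2/3, 1/3, 0  ]])

    pi = np.linalg.matrix_power(P, 200)[0]      # long-run distribution
    print(pi)                                   # approx [0.40 0.45 0.15] = [8/20, 9/20, 3/20]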

THANK YOU
