Markov Chain:
Janardan Mahanta, Statistics, CU
The stochastic process $\{X_n,\ n \ge 0\}$ is called a Markov chain if, for all $j, k, j_1, \ldots, j_{n-1}$ in the state space,
$$\Pr\{X_n = k \mid X_{n-1} = j,\ X_{n-2} = j_1,\ \ldots,\ X_0 = j_{n-1}\} = \Pr\{X_n = k \mid X_{n-1} = j\} = P_{jk}. \qquad (1)$$
The outcomes are called the states of the Markov chain; if $X_n$ has the outcome $j$, i.e., $X_n = j$, the process is said to be at state $j$ at the $n$th trial of the chain.
Transition Probability:
The conditional probability $\Pr\{X_{n+1} = j \mid X_n = i\} = P_{ij}$ is known as a transition probability. That is, the transition probability refers to the probability that the process is in state $i$ and will be in state $j$ at the next step. Here the transition takes one step, and $P_{ij}$ is called the one-step transition probability.

The conditional probability $\Pr\{X_{n+m} = k \mid X_n = j\} = P_{jk}^{(m)}$ is known as the $m$-step transition probability. That is, it refers to the probability that the process is in state $j$ and will be in state $k$ after $m$ steps.

Transition probabilities must satisfy
$$P_{jk} \ge 0 \quad \text{and} \quad \sum_{k} P_{jk} = 1 \quad \text{for all } j.$$
Stochastic Matrix:
The transition probability matrix $P$ is said to be a stochastic matrix if it is a square matrix with non-negative elements and unit row sums.
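As a minimal sketch (the function name is ours, not from the notes), the two defining conditions can be checked numerically:

```python
import numpy as np

def is_stochastic(P, tol=1e-12):
    """Check that P is square, non-negative, and has unit row sums."""
    P = np.asarray(P, dtype=float)
    return (
        P.ndim == 2
        and P.shape[0] == P.shape[1]
        and np.all(P >= 0)
        and np.allclose(P.sum(axis=1), 1.0, atol=tol)
    )

# The TPM used in the example below
P = np.array([[3/4, 1/4, 0],
              [1/4, 1/2, 1/4],
              [0,   3/4, 1/4]])
print(is_stochastic(P))  # True
```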
Order of a Markov Chain:
A chain is said to be of order $s$ if
$$\Pr\{X_n = k \mid X_{n-1} = j,\ X_{n-2} = j_1,\ \ldots,\ X_{n-s} = j_{s-1},\ \ldots\} = \Pr\{X_n = k \mid X_{n-1} = j,\ X_{n-2} = j_1,\ \ldots,\ X_{n-s} = j_{s-1}\},$$
i.e., the next state depends only on the $s$ most recent states. For a chain of order one (the ordinary Markov chain),
$$\Pr\{X_n = k \mid X_{n-1} = j,\ X_{n-2} = j_1,\ \ldots\} = \Pr\{X_n = k \mid X_{n-1} = j\} = P_{jk}.$$
A chain is said to be of order zero if $P_{jk} = P_k$ for all $j$. It is a memoryless process, which means that the next state does not depend on the present state.
Example:
Consider a Markov chain with the following TPM
$$P = \begin{pmatrix} 3/4 & 1/4 & 0 \\ 1/4 & 1/2 & 1/4 \\ 0 & 3/4 & 1/4 \end{pmatrix}$$
and the initial distribution $\pi_i = \Pr\{X_0 = i\} = \dfrac{1}{3}$; $i = 0, 1, 2$. Find

1. $p_{01}^{(2)}$, $p_{02}^{(2)}$
2. $\Pr\{X_2 = 1,\ X_0 = 0\}$
3. $p(0, 1, 1) = \Pr\{X_0 = 0,\ X_1 = 1,\ X_2 = 1\}$
4. $\Pr\{X_1 = 2\}$
5. $p(0, 0, 1, 1) = \Pr\{X_0 = 0,\ X_1 = 0,\ X_2 = 1,\ X_3 = 1\}$
Solution:
1.
$$P^{(2)} = P \cdot P = \begin{pmatrix} 3/4 & 1/4 & 0 \\ 1/4 & 1/2 & 1/4 \\ 0 & 3/4 & 1/4 \end{pmatrix} \begin{pmatrix} 3/4 & 1/4 & 0 \\ 1/4 & 1/2 & 1/4 \\ 0 & 3/4 & 1/4 \end{pmatrix} = \begin{pmatrix} 5/8 & 5/16 & 1/16 \\ 5/16 & 1/2 & 3/16 \\ 3/16 & 9/16 & 1/4 \end{pmatrix}$$
Therefore,
$$p_{01}^{(2)} = \Pr\{X_2 = 1 \mid X_0 = 0\} = \frac{5}{16}, \qquad p_{02}^{(2)} = \Pr\{X_2 = 2 \mid X_0 = 0\} = \frac{1}{16}.$$
2. We know,
$$\Pr\{X_2 = 1,\ X_0 = 0\} = \Pr\{X_2 = 1 \mid X_0 = 0\} \Pr\{X_0 = 0\} = \frac{5}{16} \cdot \frac{1}{3} = \frac{5}{48}.$$
3. We know,
$$p(0, 1, 1) = \Pr\{X_0 = 0,\ X_1 = 1,\ X_2 = 1\} = \Pr\{X_1 = 1 \mid X_0 = 0\} \Pr\{X_2 = 1 \mid X_1 = 1\} \Pr\{X_0 = 0\} = \frac{1}{4} \cdot \frac{1}{2} \cdot \frac{1}{3} = \frac{1}{24}.$$
4.
$$\Pr\{X_1 = 2\} = \sum_{i=0}^{2} p_{i2}\, \pi_i = p_{02}\pi_0 + p_{12}\pi_1 + p_{22}\pi_2 = 0 \cdot \frac{1}{3} + \frac{1}{4} \cdot \frac{1}{3} + \frac{1}{4} \cdot \frac{1}{3} = \frac{1}{6}.$$
5. We know,
$$p(0, 0, 1, 1) = \Pr\{X_0 = 0,\ X_1 = 0,\ X_2 = 1,\ X_3 = 1\} = \Pr\{X_1 = 0 \mid X_0 = 0\} \Pr\{X_2 = 1 \mid X_1 = 0\} \Pr\{X_3 = 1 \mid X_2 = 1\} \Pr\{X_0 = 0\}$$
$$= \frac{3}{4} \cdot \frac{1}{4} \cdot \frac{1}{2} \cdot \frac{1}{3} = \frac{1}{32}.$$
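These hand computations are easy to cross-check numerically; a minimal sketch with numpy (the variable names are ours):

```python
import numpy as np

P = np.array([[3/4, 1/4, 0],
              [1/4, 1/2, 1/4],
              [0,   3/4, 1/4]])
pi0 = np.array([1/3, 1/3, 1/3])              # initial distribution

P2 = P @ P                                   # two-step TPM
print(P2[0, 1], P2[0, 2])                    # 5/16, 1/16
print(P2[0, 1] * pi0[0])                     # Pr{X2=1, X0=0} = 5/48
print(pi0[0] * P[0, 1] * P[1, 1])            # p(0,1,1) = 1/24
print(pi0 @ P)                               # Pr{X1=j}; last entry = 1/6
print(pi0[0] * P[0, 0] * P[0, 1] * P[1, 1])  # p(0,0,1,1) = 1/32
```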
Chapman-Kolmogorov Equation:
The $m$-step transition probability is denoted by $P_{jk}^{(m)} = \Pr\{X_{n+m} = k \mid X_n = j\}$; it gives the probability that from state $j$ at the $n$th trial, state $k$ is reached at the $(m+n)$th trial in $m$ steps, i.e., the probability of transition from state $j$ to state $k$ in exactly $m$ steps.
Since the right-hand side of equation (1) is independent of $n$, the chain is homogeneous; in particular, the one-step transition probabilities $P_{jk}^{(1)} = P_{jk}$ do not depend on $n$.
State $k$ can be reached from state $j$ in two steps through some intermediate state $r$.
Now, we have
$$P_{jk}^{(2)} = \sum_{r} P_{jr} P_{rk}.$$
By induction, we have
$$P_{jk}^{(m+1)} = \sum_{r} P_{jr}^{(m)} P_{rk}.$$
Similarly, we get
$$P_{jk}^{(m+1)} = \sum_{r} P_{jr} P_{rk}^{(m)}.$$
This equation is a special case of the Chapman-Kolmogorov equation, which is satisfied by the transition probabilities of a Markov chain.
The Chapman-Kolmogorov equation provides a method for computing these transition probabilities.
Let $P = (P_{jk})$ denote the transition matrix of the unit-step transitions and $P^{(m)} = (P_{jk}^{(m)})$ denote the transition matrix of the $m$-step transitions; then $P^{(m)} = P^m$. For $m = 2$ we have
$$P^{(2)} = P \cdot P = P^2.$$
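As a quick numerical illustration (reusing the example TPM; the check itself is our addition, not part of the notes), $P^{(m+n)} = P^{(m)} P^{(n)}$ can be confirmed with matrix powers:

```python
import numpy as np

P = np.array([[3/4, 1/4, 0],
              [1/4, 1/2, 1/4],
              [0,   3/4, 1/4]])

m, n = 2, 3
lhs = np.linalg.matrix_power(P, m + n)  # P^(m+n)
rhs = np.linalg.matrix_power(P, m) @ np.linalg.matrix_power(P, n)
print(np.allclose(lhs, rhs))            # True
```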
Alternative Proof:
The Chapman-Kolmogorov equation provides a method for finding higher-step transition probabilities. If $\{X_n,\ n \ge 0\}$ is a Markov chain, then for all $m, n \ge 0$,
$$P_{ij}^{(m+n)} = \sum_{k=0}^{\infty} P_{ik}^{(m)} P_{kj}^{(n)}.$$
Proof:
We know that,
$$P_{ij}^{(m+n)} = \Pr\{X_{m+n} = j \mid X_0 = i\}$$
$$= \sum_{k=0}^{\infty} \Pr\{X_{m+n} = j,\ X_m = k \mid X_0 = i\}$$
$$= \sum_{k=0}^{\infty} \Pr\{X_{m+n} = j \mid X_m = k,\ X_0 = i\} \Pr\{X_m = k \mid X_0 = i\}$$
$$= \sum_{k=0}^{\infty} \Pr\{X_{m+n} = j \mid X_m = k\} \Pr\{X_m = k \mid X_0 = i\}$$
$$= \sum_{k=0}^{\infty} P_{kj}^{(n)} P_{ik}^{(m)}$$
$$\therefore\ P_{ij}^{(m+n)} = \sum_{k=0}^{\infty} P_{ik}^{(m)} P_{kj}^{(n)}. \qquad \text{(Proved)}$$
If the unconditional distribution of the state at time $n$ is desired, it is necessary to specify the probability distribution of the initial state. Let us denote this by
$$\pi_i = \Pr\{X_0 = i\};\ i \ge 0, \qquad \sum_{i=0}^{\infty} \pi_i = 1.$$
So,
$$\Pr\{X_n = j\} = \sum_{i=0}^{\infty} \Pr\{X_n = j \mid X_0 = i\} \Pr\{X_0 = i\} = \sum_{i=0}^{\infty} P_{ij}^{(n)} \pi_i. \qquad \text{(Proved)}$$
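In matrix form, $\Pr\{X_n = j\}$ is the $j$th entry of the row vector $\pi P^n$; a brief sketch with the example chain (our own illustration):

```python
import numpy as np

P = np.array([[3/4, 1/4, 0],
              [1/4, 1/2, 1/4],
              [0,   3/4, 1/4]])
pi0 = np.array([1/3, 1/3, 1/3])

n = 4
dist_n = pi0 @ np.linalg.matrix_power(P, n)  # Pr{X_n = j} for each j
print(dist_n, dist_n.sum())                  # the entries sum to 1
```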
i) Accessible State:
State $j$ of a Markov chain is said to be accessible from state $i$ if $P_{ij}^{(n)} > 0$ for some $n \ge 1$. That is, state $j$ is accessible from state $i$ if and only if, starting in $i$, it is possible that the process will ever enter state $j$. The relation is denoted by $i \to j$.
ii) Non-Accessible:
If $P_{ij}^{(n)} = 0$ for all $n$, then $j$ is not accessible from $i$, and it is denoted by $i \not\to j$.
iii) Communicating States:
a) Two states $i$ and $j$ are said to communicate, written $i \leftrightarrow j$, if each is accessible from the other, i.e., $i \to j$ and $j \to i$; then there exist integers $m$ and $n$ such that $P_{ij}^{(n)} > 0$ and $P_{ji}^{(m)} > 0$.
b) The relation is clearly symmetric, i.e., if state $i$ communicates with state $j$, then state $j$ also communicates with state $i$. Notation: if $i \leftrightarrow j$, then $j \leftrightarrow i$.
c) Transitivity property: if state $i$ communicates with state $j$ and state $j$ communicates with state $k$, then state $i$ communicates with state $k$.
Theorem:
Prove that, if state $i$ communicates with state $j$ and state $j$ communicates with state $k$, then state $i$ communicates with state $k$.
Proof:
Since $i$ and $j$ communicate with each other, i.e., $i \leftrightarrow j$, there exist integers $m$ and $s$ such that $P_{ij}^{(m)} > 0$ and $P_{ji}^{(s)} > 0$; and since $j \leftrightarrow k$, there exist integers $n$ and $t$ such that $P_{jk}^{(n)} > 0$ and $P_{kj}^{(t)} > 0$.
Now,
$$P_{ik}^{(m+n)} = \sum_{l=0}^{\infty} P_{il}^{(m)} P_{lk}^{(n)} \ge P_{ij}^{(m)} P_{jk}^{(n)} > 0.$$
Again,
$$P_{ki}^{(t+s)} = \sum_{q=0}^{\infty} P_{kq}^{(t)} P_{qi}^{(s)} \ge P_{kj}^{(t)} P_{ji}^{(s)} > 0.$$
Hence $i \to k$ and $k \to i$, i.e., $i \leftrightarrow k$. (Proved)
Return State:
A state $i$ is a return state if $P_{ii}^{(n)} > 0$ for some $n \ge 1$.
Periodicity:
Periodicity is a class property. State $i$ is a return state if $P_{ii}^{(n)} > 0$ for some $n \ge 1$. The period $d(i)$ of a return state $i$ is the greatest common divisor of all $m$ for which $P_{ii}^{(m)} > 0$. Thus $d(i) = \text{G.C.D.}\{m : P_{ii}^{(m)} > 0\}$.
State $i$ is said to be aperiodic if $d(i) = 1$ and periodic if $d(i) > 1$.
Periodic State:
A state $i$ is said to be periodic with period $t$ ($t > 1$) if a return to state $i$ is possible only at steps $t, 2t, 3t, \ldots$, where $t$ is the greatest integer with this property.
Aperiodic State:
If the period of a return state $i$ is 1, then $i$ is known as aperiodic (non-periodic).
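As a rough sketch, $d(i)$ can be estimated as the g.c.d. of the return times seen in the first several matrix powers (the function name and truncation level are ours; a finite truncation can only approximate the true g.c.d.):

```python
import numpy as np
from math import gcd
from functools import reduce

def period(P, i, max_n=50):
    """gcd{ n >= 1 : P^n[i, i] > 0 }, truncated at max_n steps."""
    P = np.asarray(P, dtype=float)
    Pn = np.eye(len(P))
    returns = []
    for n in range(1, max_n + 1):
        Pn = Pn @ P
        if Pn[i, i] > 0:
            returns.append(n)
    return reduce(gcd, returns) if returns else 0

# A chain that alternates between two states deterministically
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])
print(period(P, 0))  # 2, so state 0 is periodic with period 2
```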
Closed Set:
A closed set is a set of states such that no state outside the set is reachable from any state inside the set.
Explanation: Let $C$ be a set of states. If $P_{jk}^{(n)} = 0$ for every $j \in C$, $k \notin C$ and all $n$, and $\sum_{j \in C} P_{ij} = 1$ for every $i \in C$, then $C$ is a closed set.
Absorbing State:
If a closed set contains only one state, then that state is known as an absorbing state.
Classification of Chains:
There are two types of chains:
i) Irreducible or Indecomposable
ii) Reducible or Decomposable
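A chain is irreducible exactly when every state communicates with every other state. A minimal reachability sketch (the function names are ours), which also flags absorbing states:

```python
import numpy as np

def reachable(P, i):
    """Set of states accessible from i (including i itself)."""
    P = np.asarray(P)
    seen, stack = {i}, [i]
    while stack:
        u = stack.pop()
        for v in np.nonzero(P[u] > 0)[0]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def is_irreducible(P):
    n = len(P)
    return all(reachable(P, i) == set(range(n)) for i in range(n))

def absorbing_states(P):
    return [i for i in range(len(P)) if P[i][i] == 1]

P = np.array([[3/4, 1/4, 0],
              [1/4, 1/2, 1/4],
              [0,   3/4, 1/4]])
print(is_irreducible(P), absorbing_states(P))  # True []
```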
Interpretation of $P_{ij}^{(n)}$:
$P_{ij}^{(n)} = \Pr\{X_n = j \mid X_0 = i\}$ is the probability that the process, starting in state $i$, will be in state $j$ at the $n$th step, not necessarily for the first time.
Interpretation of $f_{ij}^{(n)}$:
$f_{ij}^{(n)} = \Pr\{X_n = j;\ X_k \ne j,\ k = 1, 2, \ldots, n-1 \mid X_0 = i\}$ is the probability that the process, starting in state $i$, will enter state $j$ for the first time at the $n$th step.
Let $F_{jk} = \sum_{n=1}^{\infty} f_{jk}^{(n)}$ denote the probability of ever reaching state $k$ from state $j$. We have to consider two cases, $F_{jk} = 1$ and $F_{jk} < 1$. When $F_{jk} = 1$, it is certain that the system starting in state $j$ will reach state $k$; in this case $\{f_{jk}^{(n)};\ n = 1, 2, \ldots\}$ is a proper probability distribution, and it gives the first passage time distribution for $k$ given that the system starts in $j$.
Recurrent State:
A state $j$ is said to be recurrent (persistent) if $F_{jj} = 1$, i.e., if return to the state $j$ is certain. In this case $\mu_{jj} = \sum_{n=1}^{\infty} n f_{jj}^{(n)}$ is known as the mean recurrence time for the state $j$.
A recurrent state may be either a null recurrent state or a non-null recurrent state.
A recurrent state $j$ is said to be null recurrent if $\mu_{jj} = \infty$, i.e., if the mean recurrence time is infinite.
A recurrent state $j$ is said to be non-null recurrent if $\mu_{jj} < \infty$, i.e., if the mean recurrence time is finite.
Transient State:
A state $j$ is said to be transient if return to state $j$ is uncertain, i.e., starting from state $j$, the probability of ever returning to $j$ is less than one: $F_{jj} < 1$.
Ergodic State:
A recurrent, non-null, aperiodic state is known as an ergodic state; for such a state $j$,
(i) $F_{jj} = 1$, (ii) $\mu_{jj} < \infty$, (iii) $d(j) = 1$.
Theorem:
For any states $i$, $j$ and any $n \ge 1$,
$$P_{ij}^{(n)} = \sum_{r=1}^{n} f_{ij}^{(r)} P_{jj}^{(n-r)}.$$
Proof:
Starting from state $i$, the probability that the process will be in state $j$ at the $n$th step, not necessarily for the first time, is $P_{ij}^{(n)}$. Starting from state $i$, the process can enter state $j$ for the first time:
i) at the 1st step, and be in state $j$ again at the $(n-1)$st step thereafter, i.e., $f_{ij}^{(1)} P_{jj}^{(n-1)}$;
ii) at the 2nd step, and be in state $j$ again $n-2$ steps later, i.e., $f_{ij}^{(2)} P_{jj}^{(n-2)}$. Similarly, for the first time at the $r$th step, and in state $j$ again $n-r$ steps later, i.e., $f_{ij}^{(r)} P_{jj}^{(n-r)}$.
Since these possibilities are mutually exclusive,
$$P_{ij}^{(n)} = f_{ij}^{(1)} P_{jj}^{(n-1)} + f_{ij}^{(2)} P_{jj}^{(n-2)} + \cdots + f_{ij}^{(r)} P_{jj}^{(n-r)} + \cdots + f_{ij}^{(n)} P_{jj}^{(0)}$$
$$\therefore\ P_{ij}^{(n)} = \sum_{r=1}^{n} f_{ij}^{(r)} P_{jj}^{(n-r)} = \sum_{r=0}^{n} f_{ij}^{(r)} P_{jj}^{(n-r)} \qquad \left[f_{ij}^{(0)} = 0\right]. \qquad \text{(Proved)}$$
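Rearranging this decomposition gives a recursion for the first-passage probabilities, $f_{ij}^{(n)} = P_{ij}^{(n)} - \sum_{r=1}^{n-1} f_{ij}^{(r)} P_{jj}^{(n-r)}$, which is straightforward to compute; a sketch (the function name is ours):

```python
import numpy as np

def first_passage_probs(P, i, j, N):
    """f_ij^(n) for n = 1..N from the first-entrance decomposition."""
    P = np.asarray(P, dtype=float)
    powers = [np.linalg.matrix_power(P, n) for n in range(N + 1)]
    f = [0.0] * (N + 1)  # f_ij^(0) = 0 by convention
    for n in range(1, N + 1):
        f[n] = powers[n][i, j] - sum(
            f[r] * powers[n - r][j, j] for r in range(1, n))
    return f[1:]

P = np.array([[3/4, 1/4, 0],
              [1/4, 1/2, 1/4],
              [0,   3/4, 1/4]])
print(first_passage_probs(P, 0, 1, 5))  # f_01^(1), ..., f_01^(5)
```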
Theorem:
Show that, for any state $j$,
$$\sum_{n=0}^{\infty} P_{jj}^{(n)} = \frac{1}{1 - F_{jj}},$$
and that if state $j$ is recurrent then $\sum_{n=0}^{\infty} P_{jj}^{(n)} = \infty$.
Proof:
We know that,
$$P_{jj}^{(n)} = \sum_{r=0}^{n} f_{jj}^{(r)} P_{jj}^{(n-r)}, \qquad n \ge 1.$$
Multiplying both sides by $t^n$ ($0 < t < 1$) and summing over $n$,
$$\sum_{n=1}^{\infty} P_{jj}^{(n)} t^n = \sum_{n=1}^{\infty} \sum_{r=0}^{n} f_{jj}^{(r)} P_{jj}^{(n-r)} t^n = \sum_{n=1}^{\infty} \sum_{r=0}^{n} \left( f_{jj}^{(r)} t^r \right) \left( P_{jj}^{(n-r)} t^{n-r} \right)$$
$$\Rightarrow\ \sum_{n=0}^{\infty} P_{jj}^{(n)} t^n - 1 = \left( \sum_{r=0}^{\infty} f_{jj}^{(r)} t^r \right) \left( \sum_{n=0}^{\infty} P_{jj}^{(n)} t^n \right) \qquad \left[ P_{jj}^{(0)} = 1 \right]$$
Letting $t \to 1^{-}$, we get
$$\sum_{n=0}^{\infty} P_{jj}^{(n)} - 1 = \left( \sum_{r=0}^{\infty} f_{jj}^{(r)} \right) \left( \sum_{n=0}^{\infty} P_{jj}^{(n)} \right)$$
$$\Rightarrow\ \sum_{n=0}^{\infty} P_{jj}^{(n)} \left( 1 - \sum_{r=0}^{\infty} f_{jj}^{(r)} \right) = 1$$
$$\Rightarrow\ \sum_{n=0}^{\infty} P_{jj}^{(n)} = \frac{1}{1 - \sum_{r=0}^{\infty} f_{jj}^{(r)}} = \frac{1}{1 - F_{jj}}, \qquad F_{jj} = \sum_{r=0}^{\infty} f_{jj}^{(r)}.$$
If state $j$ is recurrent, then $F_{jj} = 1$, so
$$\sum_{n=0}^{\infty} P_{jj}^{(n)} = \frac{1}{1 - 1} = \infty. \qquad \text{(Proved)}$$
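For a numerical illustration, consider a toy chain of our own (not from the notes) in which state 0 is transient and state 1 is absorbing; truncated sums approximate both sides of the identity:

```python
import numpy as np

# State 0 stays put w.p. 1/2 or leaks to absorbing state 1 w.p. 1/2,
# so F_00 = f_00^(1) = 1/2 and P_00^(n) = (1/2)^n.
P = np.array([[0.5, 0.5],
              [0.0, 1.0]])

N = 60  # truncation level
sum_Pjj = sum(np.linalg.matrix_power(P, n)[0, 0] for n in range(N + 1))
print(sum_Pjj, 1 / (1 - 0.5))  # both approximately 2
```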
Theorem:
From state $j$, the mean number of returns to state $j$ is $\dfrac{F_{jj}}{1 - F_{jj}}$.
Proof:
Suppose
$$R_n = \begin{cases} 1, & X_n = j \\ 0, & X_n \ne j. \end{cases}$$
Therefore, $\sum_{n=1}^{\infty} R_n$ represents the total number of returns to $j$.
Now,
$$E\left[\sum_{n=1}^{\infty} R_n \,\middle|\, X_0 = j\right] = \sum_{n=1}^{\infty} E[R_n \mid X_0 = j]$$
$$= \sum_{n=1}^{\infty} \big[\, 1 \cdot \Pr\{X_n = j \mid X_0 = j\} + 0 \cdot \Pr\{X_n \ne j \mid X_0 = j\} \,\big]$$
$$= \sum_{n=1}^{\infty} \Pr\{X_n = j \mid X_0 = j\} = \sum_{n=1}^{\infty} P_{jj}^{(n)} = \sum_{n=0}^{\infty} P_{jj}^{(n)} - 1$$
$$= \frac{1}{1 - F_{jj}} - 1 = \frac{F_{jj}}{1 - F_{jj}}. \qquad \text{(Proved)}$$
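The same toy chain from above gives a quick Monte Carlo check: from state 0 we have $F_{00} = 1/2$, so the mean number of returns should be $0.5/0.5 = 1$ (the simulation is our sketch):

```python
import random

def simulate_returns(n_paths=100_000):
    """Average number of returns to state 0 before absorption."""
    total = 0
    for _ in range(n_paths):
        while random.random() < 0.5:  # stay in state 0: one more return
            total += 1
        # otherwise the chain moves to the absorbing state 1
    return total / n_paths

print(simulate_returns())  # approximately 1.0
```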
Theorem:
If state $i$ is recurrent and $i \leftrightarrow j$, then state $j$ is recurrent.
Proof:
Since $i$ is recurrent and $i \leftrightarrow j$, there exist integers $k$ and $m$ such that
$$P_{ij}^{(k)} > 0, \qquad P_{ji}^{(m)} > 0 \qquad \text{and} \qquad \sum_{n=0}^{\infty} P_{ii}^{(n)} = \infty.$$
Now, by the Chapman-Kolmogorov equation,
$$P_{jj}^{(m+n+k)} \ge P_{ji}^{(m)} P_{ii}^{(n)} P_{ij}^{(k)},$$
so that
$$\sum_{n=0}^{\infty} P_{jj}^{(m+n+k)} \ge P_{ji}^{(m)} P_{ij}^{(k)} \sum_{n=0}^{\infty} P_{ii}^{(n)} = \infty.$$
Hence $\sum_n P_{jj}^{(n)} = \infty$, and state $j$ is recurrent. (Proved)
Limiting Probability:
The limiting probability $\pi_j$; $j = 0, 1, \ldots$ is the long-run proportion of time that the Markov chain is in state $j$, given by
a) $\pi_j = \lim\limits_{n \to \infty} \sum_{i=0}^{\infty} P_{ij}^{(n)} \pi_i$
b) $\sum_j \pi_j = 1$
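For the example chain, the limiting probabilities can be approximated by iterating the TPM or found exactly by solving $\pi P = \pi$ together with $\sum_j \pi_j = 1$; a sketch (the least-squares solver is our own choice):

```python
import numpy as np

P = np.array([[3/4, 1/4, 0],
              [1/4, 1/2, 1/4],
              [0,   3/4, 1/4]])

# Approximate: each row of P^n approaches the limiting distribution
print(np.linalg.matrix_power(P, 50)[0])

# Exact: solve pi (P - I) = 0 subject to sum(pi) = 1
n = len(P)
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.append(np.zeros(n), 1.0)
pi = np.linalg.lstsq(A, b, rcond=None)[0]
print(pi)  # (3/7, 3/7, 1/7) for this chain
```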
Theorem:
If state $j$ is persistent non-null, then as $n \to \infty$,
i) $P_{jj}^{(nt)} \to \dfrac{t}{\mu_{jj}}$ when state $j$ is periodic with period $t$, and
ii) $P_{jj}^{(n)} \to \dfrac{1}{\mu_{jj}}$ when state $j$ is aperiodic.
If state $j$ is persistent null, then $P_{jj}^{(n)} \to 0$ as $n \to \infty$.
Proof:
If state $j$ is persistent, then $\sum_n f_{jj}^{(n)} = 1$ and the mean recurrence time is
$$\mu_{jj} = \sum_{n} n f_{jj}^{(n)}.$$
The stated limits follow from the lemma below.
Lemma:
Let $\{f_n\}$ be a sequence such that $f_n \ge 0$, $\sum_n f_n = 1$, and let $t \ge 1$ be the greatest common divisor of those $n$ for which $f_n > 0$. Let $\{u_n\}$ be another sequence such that $u_0 = 1$ and $u_n = \sum_{r=1}^{n} f_r u_{n-r}$ ($n \ge 1$). Then
$$\lim_{n \to \infty} u_{nt} = \frac{t}{\mu}, \qquad \text{where } \mu = \sum_{n} n f_n,$$
the limit being zero when $\mu = \infty$.
Now putting $f_{jj}^{(n)}$ for $f_n$, $P_{jj}^{(n)}$ for $u_n$ and $\mu_{jj}$ for $\mu$, we get
$$P_{jj}^{(nt)} \to \frac{t}{\mu_{jj}} \quad \text{as } n \to \infty \qquad \text{(period } t\text{)},$$
$$P_{jj}^{(n)} \to \frac{1}{\mu_{jj}} \quad \text{as } n \to \infty \qquad (t = 1,\ \text{aperiodic}),$$
and
$$P_{jj}^{(n)} \to 0 \quad \text{as } n \to \infty \qquad \text{when } \mu_{jj} = \infty.$$