
Janardan Mahanta, Statistics, CU

Markov Chain

Markov Chain:
The stochastic process $\{X_n,\ n \ge 0\}$ is called a Markov chain if, for all $j, k, j_1, \ldots, j_{n-1}$ in the state space,

$$\Pr\{X_n = k \mid X_{n-1} = j, X_{n-2} = j_1, \ldots, X_0 = j_{n-1}\} = \Pr\{X_n = k \mid X_{n-1} = j\} = p_{jk}$$

whenever the first member is defined.

The outcomes are called the states of the Markov chain; if $X_n$ has the outcome $j$ (i.e., $X_n = j$), the process is said to be in state $j$ at the $n$th trial. $p_{jk}$ denotes the transition probability.

Homogeneous Markov Chain:
If the transition probability $p_{jk}$ is independent of $n$, the Markov chain is said to be homogeneous (or to have stationary transition probabilities).

Non-homogeneous Markov Chain:
If the transition probability $p_{jk}$ depends on $n$, the Markov chain is said to be non-homogeneous.

Transition Probability:
The conditional probability $\Pr\{X_{n+1} = j \mid X_n = i\} = p_{ij}$ is known as the transition probability: the probability that the process, being in state $i$, will be in state $j$ at the next step. The transition here is one step, and $p_{ij}$ is called the one-step transition probability. Transition probabilities must satisfy

(i) $p_{ij} \ge 0$ and (ii) $\sum_j p_{ij} = 1$.

The conditional probability $\Pr\{X_{n+m} = k \mid X_n = j\} = p_{jk}^{(m)}$ is known as the $m$-step transition probability: the probability that the process, being in state $j$, will be in state $k$ after $m$ steps. These must satisfy

(i) $p_{jk}^{(m)} \ge 0$ and (ii) $\sum_k p_{jk}^{(m)} = 1$.


Transition Probability Matrix (TPM):
The matrix of first-order transition probabilities of a Markov chain is called the transition probability matrix. If $\{X_n,\ n \ge 0\}$ is a Markov chain with transition probabilities $p_{jk}$, the transition probability matrix is

$$P = \begin{pmatrix} p_{11} & p_{12} & \cdots & p_{1n} \\ p_{21} & p_{22} & \cdots & p_{2n} \\ \vdots & \vdots & & \vdots \\ p_{n1} & p_{n2} & \cdots & p_{nn} \end{pmatrix} = \left(p_{jk}\right)$$

where $p_{jk} \ge 0$ and $\sum_{k=1}^{n} p_{jk} = 1$ for all $j$.

Stochastic Matrix:
The transition probability matrix $P$ is a stochastic matrix: a square matrix with non-negative elements and unit row sums.
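As a quick illustration, here is a minimal Python sketch (the helper name `is_stochastic` is our own, and NumPy is assumed) that checks the two defining conditions on a candidate TPM; the matrix used is the one from the worked example later in these notes.

```python
import numpy as np

def is_stochastic(P, tol=1e-12):
    """True iff P is square, has non-negative entries, and unit row sums."""
    P = np.asarray(P, dtype=float)
    return (P.ndim == 2 and P.shape[0] == P.shape[1]
            and bool(np.all(P >= 0))
            and bool(np.allclose(P.sum(axis=1), 1.0, atol=tol)))

P = [[3/4, 1/4, 0], [1/4, 1/2, 1/4], [0, 3/4, 1/4]]
print(is_stochastic(P))  # True
```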

Finite Markov Chain:
A Markov chain $\{X_n,\ n \ge 0\}$ with $k$ states, where $k$ is finite, is said to be a finite Markov chain. In this case the transition matrix $P$ is a square matrix with $k$ rows and $k$ columns.

Infinite Markov Chain:
The number of states may, however, be infinite. When the possible values of $X_n$ form an infinite set, the Markov chain is said to be denumerably infinite (or denumerable), and the chain is said to have a countable state space.

Probability Distribution of a Stochastic Process:
The probability distribution of $X_r, X_{r+1}, \ldots, X_{r+n}$ can be computed in terms of the transition probabilities $p_{jk}$ and the initial distribution of $X_r$. Suppose, for simplicity, that $r = 0$, and consider the sequence $X_0, X_1, \ldots, X_n$.

The joint distribution is:

$$\begin{aligned}
\Pr\{X_0 = a, X_1 = b, \ldots, X_{n-2} = i, X_{n-1} = j, X_n = k\}
&= \Pr\{X_n = k \mid X_{n-1} = j, \ldots, X_0 = a\} \Pr\{X_{n-1} = j, \ldots, X_0 = a\} \\
&= \Pr\{X_n = k \mid X_{n-1} = j\} \Pr\{X_{n-1} = j \mid X_{n-2} = i\} \Pr\{X_{n-2} = i, \ldots, X_0 = a\} \\
&= \Pr\{X_n = k \mid X_{n-1} = j\} \Pr\{X_{n-1} = j \mid X_{n-2} = i\} \cdots \Pr\{X_1 = b \mid X_0 = a\} \Pr\{X_0 = a\} \\
&= \Pr\{X_0 = a\}\, p_{ab} \cdots p_{ij}\, p_{jk}
\end{aligned}$$

Thus $\Pr\{X_0 = a, X_1 = b, \ldots, X_{n-2} = i, X_{n-1} = j, X_n = k\} = \Pr\{X_0 = a\}\, p_{ab} \cdots p_{ij}\, p_{jk}$; i.e., the joint probability can be expressed as the product of the transition probabilities and the initial distribution.
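A short Python sketch of this product formula (the function name `path_probability` is illustrative; `P` is a TPM indexed by state and `pi0` the initial distribution; the numbers anticipate the worked example below):

```python
def path_probability(P, pi0, path):
    """Pr{X_0 = path[0], X_1 = path[1], ...}: the initial probability
    times the product of one-step transition probabilities."""
    prob = pi0[path[0]]
    for a, b in zip(path, path[1:]):
        prob *= P[a][b]
    return prob

P = [[3/4, 1/4, 0], [1/4, 1/2, 1/4], [0, 3/4, 1/4]]
pi0 = [1/3, 1/3, 1/3]
print(path_probability(P, pi0, [0, 1, 1]))  # 1/3 * 1/4 * 1/2 = 1/24
```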


Order of a Markov Chain:
A Markov chain $\{X_n\}$ is said to be of order $s$ ($s = 1, 2, \ldots$) if, for all $n$,

$$\Pr\{X_n = k \mid X_{n-1} = j, X_{n-2} = j_1, \ldots, X_{n-s} = j_{s-1}, \ldots\} = \Pr\{X_n = k \mid X_{n-1} = j, X_{n-2} = j_1, \ldots, X_{n-s} = j_{s-1}\}$$

A Markov chain $\{X_n\}$ is said to be of order one (or simply a Markov chain) if

$$\Pr\{X_n = k \mid X_{n-1} = j, X_{n-2} = j_1, \ldots\} = \Pr\{X_n = k \mid X_{n-1} = j\} = p_{jk}$$

Note: in practice, chains of order greater than one are rarely encountered.

A chain is said to be of order zero if $p_{jk} = p_k$ for all $j$. This is a memoryless process, which means that the observations are independent.

Example:
Consider a Markov chain with the following TPM

$$P = \begin{pmatrix} 3/4 & 1/4 & 0 \\ 1/4 & 1/2 & 1/4 \\ 0 & 3/4 & 1/4 \end{pmatrix}$$

and initial distribution $\pi_i = \Pr\{X_0 = i\} = \dfrac{1}{3}$, $i = 0, 1, 2$. Find

1. $p_{01}^{(2)}$ and $p_{02}^{(2)}$
2. $\Pr\{X_2 = 1, X_0 = 0\}$
3. $p(0, 1, 1) = \Pr\{X_0 = 0, X_1 = 1, X_2 = 1\}$
4. $\Pr\{X_1 = 2\}$
5. $p(0, 0, 1, 1) = \Pr\{X_0 = 0, X_1 = 0, X_2 = 1, X_3 = 1\}$

Solution:
1.

$$P^2 = P \cdot P = \begin{pmatrix} 3/4 & 1/4 & 0 \\ 1/4 & 1/2 & 1/4 \\ 0 & 3/4 & 1/4 \end{pmatrix} \begin{pmatrix} 3/4 & 1/4 & 0 \\ 1/4 & 1/2 & 1/4 \\ 0 & 3/4 & 1/4 \end{pmatrix} = \begin{pmatrix} 5/8 & 5/16 & 1/16 \\ 5/16 & 1/2 & 3/16 \\ 3/16 & 9/16 & 1/4 \end{pmatrix}$$

$$\therefore\ p_{01}^{(2)} = \Pr\{X_2 = 1 \mid X_0 = 0\} = \frac{5}{16}, \qquad p_{02}^{(2)} = \Pr\{X_2 = 2 \mid X_0 = 0\} = \frac{1}{16}$$

Markov Chain - 3
Janardan Mahanta, Statistics, CU

2. We know,

$$\Pr\{X_2 = 1, X_0 = 0\} = \Pr\{X_2 = 1 \mid X_0 = 0\} \Pr\{X_0 = 0\} = \frac{5}{16} \times \frac{1}{3} = \frac{5}{48}$$

3. We know,

$$p(0, 1, 1) = \Pr\{X_0 = 0, X_1 = 1, X_2 = 1\} = \Pr\{X_0 = 0\} \Pr\{X_1 = 1 \mid X_0 = 0\} \Pr\{X_2 = 1 \mid X_1 = 1\} = \frac{1}{3} \times \frac{1}{4} \times \frac{1}{2} = \frac{1}{24}$$

4.

$$\Pr\{X_1 = 2\} = \sum_{i=0}^{2} p_{i2}\, \pi_i = p_{02}\, \pi_0 + p_{12}\, \pi_1 + p_{22}\, \pi_2 = 0 \times \frac{1}{3} + \frac{1}{4} \times \frac{1}{3} + \frac{1}{4} \times \frac{1}{3} = \frac{1}{6}$$

5. We know,

$$p(0, 0, 1, 1) = \Pr\{X_0 = 0, X_1 = 0, X_2 = 1, X_3 = 1\} = \Pr\{X_0 = 0\} \Pr\{X_1 = 0 \mid X_0 = 0\} \Pr\{X_2 = 1 \mid X_1 = 0\} \Pr\{X_3 = 1 \mid X_2 = 1\} = \frac{1}{3} \times \frac{3}{4} \times \frac{1}{4} \times \frac{1}{2} = \frac{1}{32}$$
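All five answers can be verified mechanically with exact arithmetic; a sketch using Python's fractions module (the helper name `matmul` is ours):

```python
from fractions import Fraction as F

P = [[F(3, 4), F(1, 4), F(0)],
     [F(1, 4), F(1, 2), F(1, 4)],
     [F(0),    F(3, 4), F(1, 4)]]
pi0 = [F(1, 3)] * 3

def matmul(A, B):
    """Multiply two square matrices (lists of lists) exactly."""
    n = len(A)
    return [[sum(A[i][r] * B[r][j] for r in range(n)) for j in range(n)]
            for i in range(n)]

P2 = matmul(P, P)
print(P2[0][1], P2[0][2])                       # 1. 5/16 and 1/16
print(P2[0][1] * pi0[0])                        # 2. 5/48
print(pi0[0] * P[0][1] * P[1][1])               # 3. 1/24
print(sum(pi0[i] * P[i][2] for i in range(3)))  # 4. 1/6
print(pi0[0] * P[0][0] * P[0][1] * P[1][1])     # 5. 1/32
```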

Chapman-Kolmogorov Equation:
The $m$-step transition probability is denoted by

$$\Pr\{X_{n+m} = k \mid X_n = j\} = p_{jk}^{(m)} \qquad \cdots (1)$$

$p_{jk}^{(m)}$ gives the probability that from state $j$ at the $n$th trial, state $k$ is reached at the $(m+n)$th trial in $m$ steps, i.e., the probability of transition from state $j$ to state $k$ in exactly $m$ steps.

Since the right-hand side of equation (1) is independent of $n$, the chain is homogeneous. The one-step transition probabilities $p_{jk}^{(1)}$ are written $p_{jk}$ for simplicity.

Consider the two-step transition probability $p_{jk}^{(2)} = \Pr\{X_{n+2} = k \mid X_n = j\}$. The state $k$ can be reached from the state $j$ in two steps through some intermediate state $r$.


For a fixed value of $r$ we have

$$\begin{aligned}
\Pr\{X_{n+2} = k, X_{n+1} = r \mid X_n = j\} &= \Pr\{X_{n+2} = k \mid X_{n+1} = r, X_n = j\} \Pr\{X_{n+1} = r \mid X_n = j\} \\
&= \Pr\{X_{n+2} = k \mid X_{n+1} = r\} \Pr\{X_{n+1} = r \mid X_n = j\} \\
&= p_{rk}\, p_{jr}\,; \quad r = 1, 2, \ldots
\end{aligned}$$

Now, we have

$$p_{jk}^{(2)} = \Pr\{X_{n+2} = k \mid X_n = j\} = \sum_r \Pr\{X_{n+2} = k, X_{n+1} = r \mid X_n = j\} = \sum_r p_{rk}\, p_{jr}$$

By induction, we have

$$p_{jk}^{(m+1)} = \Pr\{X_{n+m+1} = k \mid X_n = j\} = \sum_r \Pr\{X_{n+m+1} = k \mid X_{n+m} = r\} \Pr\{X_{n+m} = r \mid X_n = j\} = \sum_r p_{rk}\, p_{jr}^{(m)}$$

Similarly, we get

$$p_{jk}^{(m+1)} = \sum_r p_{rk}^{(m)}\, p_{jr}$$

In general, we can write

$$p_{jk}^{(m+n)} = \sum_r p_{rk}^{(m)}\, p_{jr}^{(n)} = \sum_r p_{jr}^{(m)}\, p_{rk}^{(n)} \qquad \cdots (2)$$

This equation is a special case of the Chapman-Kolmogorov equation, which is satisfied by the transition probabilities of a Markov chain.

From equation (2), since the sum over all intermediate states is at least any single term, we can write

$$p_{jk}^{(m+n)} \ge p_{rk}^{(m)}\, p_{jr}^{(n)} \quad \text{for any } r$$

The Chapman-Kolmogorov equation provides a method for computing these transition probabilities.

   
Let $P = (p_{jk})$ denote the transition matrix of the unit-step transitions and $P^{(m)} = \left(p_{jk}^{(m)}\right)$ denote the transition matrix of the $m$-step transitions.

For $m = 2$ we have $P^{(2)} = P \cdot P = P^2$. Similarly, $P^{(m+1)} = P^{(m)} \cdot P$, and by induction $P^{(m)} = P^{(m-1)} \cdot P = P^m$. That is, the $m$-step transition matrix may be obtained by multiplying the matrix $P$ by itself $m$ times.
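A quick numerical check of equation (2) in matrix form, assuming NumPy and the example TPM from earlier:

```python
import numpy as np

P = np.array([[3/4, 1/4, 0],
              [1/4, 1/2, 1/4],
              [0,   3/4, 1/4]])

# Chapman-Kolmogorov in matrix form: P^(m+n) = P^(m) P^(n)
P2, P3 = np.linalg.matrix_power(P, 2), np.linalg.matrix_power(P, 3)
print(np.allclose(np.linalg.matrix_power(P, 5), P2 @ P3))  # True
```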


Alternative Proof:
The Chapman-Kolmogorov equation provides a method for finding higher-step transition probabilities. If $\{X_n\}_{n \ge 0}$ is a Markov chain, then for $m, n \ge 0$

$$p_{ij}^{(m+n)} = \sum_{k=0}^{\infty} p_{ik}^{(m)}\, p_{kj}^{(n)}$$

Proof:
We know that

$$\begin{aligned}
p_{ij}^{(m+n)} &= \Pr\{X_{m+n} = j \mid X_0 = i\} \\
&= \sum_{k=0}^{\infty} \Pr\{X_{m+n} = j, X_m = k \mid X_0 = i\} \\
&= \sum_{k=0}^{\infty} \Pr\{X_{m+n} = j \mid X_m = k, X_0 = i\} \Pr\{X_m = k \mid X_0 = i\} \\
&= \sum_{k=0}^{\infty} \Pr\{X_{m+n} = j \mid X_m = k\} \Pr\{X_m = k \mid X_0 = i\} \\
&= \sum_{k=0}^{\infty} p_{kj}^{(n)}\, p_{ik}^{(m)} = \sum_{k=0}^{\infty} p_{ik}^{(m)}\, p_{kj}^{(n)} \qquad \text{(Proved)}
\end{aligned}$$

All Unconditional Probabilities May Be Computed by Conditioning on the Initial State:
$p_{ij}^{(n)}$ is the probability that the state at time $n$ is $j$ given that the initial state at time $0$ is $i$. If the unconditional distribution of the state at time $n$ is desired, it is necessary to specify the probability distribution of the initial state. Let us denote this by

$$\pi_i = \Pr\{X_0 = i\},\ i \ge 0, \qquad \sum_{i=0}^{\infty} \pi_i = 1$$

So,

$$\Pr\{X_n = j\} = \sum_{i=0}^{\infty} \Pr\{X_n = j \mid X_0 = i\} \Pr\{X_0 = i\} = \sum_{i=0}^{\infty} p_{ij}^{(n)}\, \pi_i \qquad \text{(Proved)}$$

Classification of States According to Communication:
A state may be related to another in the following ways:
i) Accessible state
ii) Non-accessible state
iii) Communicating states


i) Accessible State:
State $j$ of a Markov chain is said to be accessible from state $i$ if $p_{ij}^{(n)} > 0$ for some $n \ge 1$. That is, state $j$ is accessible from state $i$ if and only if, starting in $i$, it is possible that the process will ever enter state $j$. The relation is denoted by $i \to j$.

ii) Non-Accessible State:
If $p_{ij}^{(n)} = 0$ for all $n$, then $j$ is not accessible from $i$; this is denoted by $i \not\to j$.

iii) Communicating States:
Two states $i$ and $j$ are said to communicate if each is accessible from the other; this is denoted by $i \leftrightarrow j$. Then there exist integers $m$ and $n$ such that $p_{ij}^{(n)} > 0$ and $p_{ji}^{(m)} > 0$.

Properties of Communicating States:
a) Reflexivity: state $i$ communicates with itself ($i \leftrightarrow i$), since $p_{ii}^{(0)} = 1 > 0$.
b) Symmetry: if state $i$ communicates with state $j$, then state $j$ communicates with state $i$; if $i \leftrightarrow j$ then $j \leftrightarrow i$.
c) Transitivity: if state $i$ communicates with state $j$ and state $j$ communicates with state $k$, then state $i$ communicates with state $k$; if $i \leftrightarrow j$ and $j \leftrightarrow k$ then $i \leftrightarrow k$.
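For a finite chain, accessibility and communication can be computed directly from the positivity pattern of powers of $P$; a sketch (the function name `communicates` is ours, and paths of length up to the number of states suffice for reachability):

```python
import numpy as np

def communicates(P):
    """C[i, j] = True iff states i and j communicate (i <-> j): each is
    accessible from the other along paths of positive probability."""
    A = (np.asarray(P) > 0).astype(int)    # adjacency: positive one-step moves
    n = A.shape[0]
    R = np.eye(n, dtype=int)               # paths of length 0
    for _ in range(n):                     # extend paths one step at a time
        R = ((R + R @ A) > 0).astype(int)
    reach = R > 0                          # reach[i, j]: j accessible from i
    return reach & reach.T

P = [[3/4, 1/4, 0], [1/4, 1/2, 1/4], [0, 3/4, 1/4]]
print(communicates(P))                     # all True: every pair communicates
```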

Theorem:
If state $i$ communicates with state $j$ and state $j$ communicates with state $k$, then state $i$ communicates with state $k$; i.e., if $i \leftrightarrow j$ and $j \leftrightarrow k$ then $i \leftrightarrow k$.

Proof:
Since $i$ and $j$ communicate with each other, $i \leftrightarrow j$:

$$p_{ij}^{(m)} > 0 \quad \text{and} \quad p_{ji}^{(s)} > 0$$

and since $j \leftrightarrow k$:

$$p_{jk}^{(n)} > 0 \quad \text{and} \quad p_{kj}^{(t)} > 0$$

Now,

$$p_{ik}^{(m+n)} = \sum_{l=0}^{\infty} p_{il}^{(m)}\, p_{lk}^{(n)} \ge p_{ij}^{(m)}\, p_{jk}^{(n)}$$

$$\Rightarrow p_{ik}^{(m+n)} > 0 \qquad \left(p_{ij}^{(m)} > 0 \text{ and } p_{jk}^{(n)} > 0\right)$$


Again,

$$p_{ki}^{(s+t)} = \sum_{q=0}^{\infty} p_{kq}^{(t)}\, p_{qi}^{(s)} \ge p_{kj}^{(t)}\, p_{ji}^{(s)}$$

$$\text{i.e., } p_{ki}^{(s+t)} > 0 \qquad \left(p_{kj}^{(t)} > 0 \text{ and } p_{ji}^{(s)} > 0\right)$$

Since $p_{ik}^{(m+n)} > 0$ and $p_{ki}^{(s+t)} > 0$, we have $i \leftrightarrow k$. (Proved)

Class Property or Class of States:
A class of states is a subset of the state space such that every state of the class communicates with every other state in the class, and no state outside the class communicates with every state in the class.

Return State:
A state $i$ is a return state if $p_{ii}^{(n)} > 0$ for some $n \ge 1$.

Periodicity:
Periodicity is a class property. The period $d_i$ of a return state $i$ is defined as the greatest common divisor of all $m$ such that $p_{ii}^{(m)} > 0$:

$$d_i = \gcd\left\{m : p_{ii}^{(m)} > 0\right\}$$

State $i$ is said to be aperiodic if $d_i = 1$ and periodic if $d_i > 1$.

Periodic State:
A state $i$ is said to be periodic with period $t$ ($t > 1$) if a return to state $i$ is possible only at steps $t, 2t, 3t, \ldots$, where $t$ is the greatest such integer. In this case $p_{ii}^{(n)} = 0$ unless $n$ is an integral multiple of $t$.

Aperiodic State:
If the period of a return state $i$ is 1, then $i$ is known as aperiodic (non-periodic).
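The period $d_i$ can be computed by scanning which powers of $P$ return to $i$ with positive probability; a sketch assuming NumPy (the cutoff `max_n` and the tolerance are pragmatic choices for small chains):

```python
import numpy as np
from functools import reduce
from math import gcd

def period(P, i, max_n=50):
    """d_i = gcd{ n >= 1 : p_ii^(n) > 0 }, scanned up to max_n steps."""
    P = np.asarray(P, dtype=float)
    Q = np.eye(P.shape[0])
    return_times = []
    for n in range(1, max_n + 1):
        Q = Q @ P                          # Q is now P^n
        if Q[i, i] > 1e-12:
            return_times.append(n)
    return reduce(gcd, return_times) if return_times else 0

print(period([[0, 1], [1, 0]], 0))         # 2: deterministic two-cycle
print(period([[3/4, 1/4, 0], [1/4, 1/2, 1/4], [0, 3/4, 1/4]], 0))  # 1: aperiodic
```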


Closed Set:
A closed set is a set of states such that no state outside the set is reachable from any state inside the set.

Explanation: Let $C$ be a set of states. If $p_{jk}^{(n)} = 0$ for all $j \in C$, $k \notin C$ and for all $n$, equivalently if $\sum_{j \in C} p_{ij} = 1$ for all $i \in C$, then $C$ is a closed set.

Absorbing State:
If a closed set contains only one state, then that state is known as an absorbing state.

Explanation: $j$ is absorbing iff $p_{jj} = 1$ and $p_{ji} = 0$ for $i \ne j$.
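Both definitions are easy to check numerically; a sketch assuming NumPy (the helpers `is_closed` and `is_absorbing` and the toy chain are ours):

```python
import numpy as np

def is_closed(P, C):
    """C is closed iff no one-step transition leaves it:
    sum over j in C of p_ij equals 1 for every i in C."""
    P = np.asarray(P, dtype=float)
    C = sorted(C)
    return bool(np.allclose(P[np.ix_(C, C)].sum(axis=1), 1.0))

def is_absorbing(P, j):
    """State j is absorbing iff p_jj = 1 (a one-state closed set)."""
    return bool(np.isclose(np.asarray(P, dtype=float)[j, j], 1.0))

P = [[0.5, 0.5, 0.0],
     [0.2, 0.3, 0.5],
     [0.0, 0.0, 1.0]]
print(is_closed(P, {2}), is_absorbing(P, 2))  # True True
```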

Classification of Chains:
There are two types of chains:
i) Irreducible or indecomposable
ii) Reducible or decomposable

i) Irreducible or Indecomposable Chain:
Every finite Markov chain contains at least one closed set, namely the set of all states (the state space). If the chain does not contain any proper closed subset other than the state space, then the chain is called irreducible. The transition probability matrix of an irreducible chain is an irreducible matrix. In an irreducible Markov chain every state can be reached from every other state.

ii) Reducible, Non-irreducible, or Decomposable Chain:
If the chain contains more than one closed set of states, then the chain is said to be reducible. Its transition probability matrix is reducible.

Interpretation of $p_{ij}^{(n)}$:
$p_{ij}^{(n)} = \Pr\{X_n = j \mid X_0 = i\}$ is the probability that the process, starting in state $i$, is in state $j$ at the $n$th step, not necessarily for the first time.

Interpretation of $f_{ij}^{(n)}$:
$f_{ij}^{(n)} = \Pr\{X_n = j;\ X_k \ne j,\ k = 1, 2, \ldots, n-1 \mid X_0 = i\}$ is the probability that the process, starting in state $i$, enters state $j$ at the $n$th step for the first time.

Interpretation of $F_{ij}$ or $f_{ij}^*$:
This is the probability that the process, starting in state $i$, will ever enter state $j$:

$$F_{ij} = \sum_{n=1}^{\infty} f_{ij}^{(n)}$$


Recurrent or Persistent State:
A state $j$ is said to be recurrent or persistent if return to state $j$ is certain, i.e.,

$$F_{jj} = \sum_{n=1}^{\infty} f_{jj}^{(n)} = 1$$

First Passage Time Distribution:
Let $F_{jk}$ denote the probability that, starting from state $j$, the system will ever reach state $k$. Clearly

$$F_{jk} = \sum_{n=1}^{\infty} f_{jk}^{(n)}$$

We have

$$\sup_{n \ge 1} p_{jk}^{(n)} \le F_{jk} \le \sum_{m=1}^{\infty} p_{jk}^{(m)}$$

We have to consider two cases, $F_{jk} = 1$ and $F_{jk} < 1$. When $F_{jk} = 1$, it is certain that the system starting in state $j$ will reach state $k$; in this case $\left\{f_{jk}^{(n)};\ n = 1, 2, \ldots\right\}$ is a proper probability distribution, and this gives the first passage time distribution for $k$ given that the system starts in $j$.

Mean First Passage Time:
The mean first passage time from state $j$ to state $k$ is given by

$$\mu_{jk} = \sum_{n=1}^{\infty} n f_{jk}^{(n)}$$

Mean Recurrence Time:
$\left\{f_{jj}^{(n)};\ n = 1, 2, \ldots\right\}$ represents the distribution of the recurrence times of $j$, and $F_{jj} = 1$ implies that return to the state $j$ is certain. In this case

$$\mu_{jj} = \sum_{n=1}^{\infty} n f_{jj}^{(n)}$$

is known as the mean recurrence time for the state $j$.

A recurrent state may be null recurrent or non-null recurrent. A recurrent state $j$ is said to be null recurrent if $\mu_{jj} = \infty$, i.e., if the mean recurrence time is infinite. A recurrent state $j$ is said to be non-null recurrent if $\mu_{jj} < \infty$, i.e., if the mean recurrence time is finite.

Transient State:
A state $j$ is said to be transient if return to state $j$ is uncertain, i.e., starting from state $j$, the probability of ever returning to state $j$ is less than one: $F_{jj} < 1$.

Ergodic State:
A recurrent, non-null, aperiodic state is known as an ergodic state; then
(i) $F_{jj} = 1$, (ii) $\mu_{jj} < \infty$, (iii) $d_j = 1$.


First Entrance Decomposition Formula:
For a Markov chain $\{X_n\}_{n \ge 0}$,

$$p_{ij}^{(n)} = \sum_{r=0}^{n} f_{ij}^{(r)}\, p_{jj}^{(n-r)}$$

Proof:
Starting from state $i$, the event that the process is in state $j$ at the $n$th step (not necessarily for the first time), which has probability $p_{ij}^{(n)}$, is the union of the following mutually exclusive events. Starting from state $i$, the process enters state $j$ for the first time:

i) at the 1st step, and is in state $j$ again $n - 1$ steps later, with probability $f_{ij}^{(1)} p_{jj}^{(n-1)}$;
ii) at the 2nd step, and is in state $j$ again $n - 2$ steps later, with probability $f_{ij}^{(2)} p_{jj}^{(n-2)}$; and in general, for the first time at the $r$th step and in state $j$ again $n - r$ steps later, with probability $f_{ij}^{(r)} p_{jj}^{(n-r)}$.

$$p_{ij}^{(n)} = f_{ij}^{(1)} p_{jj}^{(n-1)} + f_{ij}^{(2)} p_{jj}^{(n-2)} + \cdots + f_{ij}^{(r)} p_{jj}^{(n-r)} + \cdots + f_{ij}^{(n)} p_{jj}^{(0)} = \sum_{r=1}^{n} f_{ij}^{(r)}\, p_{jj}^{(n-r)}$$

i.e., $p_{ij}^{(n)} = \sum_{r=0}^{n} f_{ij}^{(r)}\, p_{jj}^{(n-r)}$, since $f_{ij}^{(0)} = 0$. (Proved)

Another Form of This Theorem:

$$p_{ij}^{(n)} = \sum_{r=0}^{n} f_{ij}^{(r)}\, p_{jj}^{(n-r)} = \sum_{r=0}^{n-1} f_{ij}^{(r)}\, p_{jj}^{(n-r)} + f_{ij}^{(n)}\, p_{jj}^{(0)}$$

Since $p_{jj}^{(0)} = 1$,

$$f_{ij}^{(n)} = p_{ij}^{(n)} - \sum_{r=0}^{n-1} f_{ij}^{(r)}\, p_{jj}^{(n-r)}$$

If $j = i$, then

$$p_{ii}^{(n)} = \sum_{r=0}^{n} f_{ii}^{(r)}\, p_{ii}^{(n-r)} = \sum_{r=0}^{n-1} f_{ii}^{(r)}\, p_{ii}^{(n-r)} + f_{ii}^{(n)}\, p_{ii}^{(0)} \qquad \left(p_{ii}^{(0)} = 1\right)$$

$$\Rightarrow f_{ii}^{(n)} = p_{ii}^{(n)} - \sum_{r=0}^{n-1} f_{ii}^{(r)}\, p_{ii}^{(n-r)}$$


Theorem:
Show that, for any state $j$,

$$\sum_{n=0}^{\infty} p_{jj}^{(n)} = \frac{1}{1 - F_{jj}}$$

and that if state $j$ is recurrent, then $\sum_{n=0}^{\infty} p_{jj}^{(n)} = \infty$.

Proof:
We know that

$$p_{jj}^{(n)} = \sum_{r=0}^{n} f_{jj}^{(r)}\, p_{jj}^{(n-r)}$$

Multiplying both sides by $t^n$ and summing over all $n \ge 1$, we get

$$\sum_{n=1}^{\infty} p_{jj}^{(n)} t^n = \sum_{n=1}^{\infty} \sum_{r=0}^{n} f_{jj}^{(r)}\, p_{jj}^{(n-r)}\, t^n = \sum_{r=0}^{\infty} f_{jj}^{(r)} t^r \sum_{n \ge r} p_{jj}^{(n-r)} t^{n-r}$$

$$\Rightarrow \sum_{n=0}^{\infty} p_{jj}^{(n)} t^n - 1 = \left(\sum_{r=0}^{\infty} f_{jj}^{(r)} t^r\right) \left(\sum_{n=0}^{\infty} p_{jj}^{(n)} t^n\right) \qquad \left(p_{jj}^{(0)} = 1\right)$$

Putting $t = 1$, we get

$$\sum_{n=0}^{\infty} p_{jj}^{(n)} - 1 = \left(\sum_{r=0}^{\infty} f_{jj}^{(r)}\right) \sum_{n=0}^{\infty} p_{jj}^{(n)}$$

$$\Rightarrow \sum_{n=0}^{\infty} p_{jj}^{(n)} \left(1 - \sum_{r=0}^{\infty} f_{jj}^{(r)}\right) = 1$$

$$\Rightarrow \sum_{n=0}^{\infty} p_{jj}^{(n)} = \frac{1}{1 - F_{jj}} \qquad \left(F_{jj} = \sum_{r=0}^{\infty} f_{jj}^{(r)}\right)$$

If state $j$ is recurrent, i.e., $F_{jj} = 1$, then

$$\sum_{n=0}^{\infty} p_{jj}^{(n)} = \frac{1}{1 - 1} = \frac{1}{0} = \infty \qquad \text{(Proved)}$$


Theorem:
Starting from state $j$, the mean number of returns to state $j$ is $\dfrac{F_{jj}}{1 - F_{jj}}$.

Proof:
Suppose

$$R_n = \begin{cases} 1 & ;\ X_n = j \\ 0 & ;\ X_n \ne j \end{cases}$$

Therefore, $\sum_{n=1}^{\infty} R_n$ represents the total number of returns to $j$.

Now,

$$\begin{aligned}
E\left[\sum_{n=1}^{\infty} R_n \,\middle|\, X_0 = j\right] &= \sum_{n=1}^{\infty} E\left[R_n \mid X_0 = j\right] \\
&= \sum_{n=1}^{\infty} \left(1 \cdot \Pr\{X_n = j \mid X_0 = j\} + 0 \cdot \Pr\{X_n \ne j \mid X_0 = j\}\right) \\
&= \sum_{n=1}^{\infty} \Pr\{X_n = j \mid X_0 = j\} \\
&= \sum_{n=1}^{\infty} p_{jj}^{(n)} = \sum_{n=0}^{\infty} p_{jj}^{(n)} - 1 \\
&= \frac{1}{1 - F_{jj}} - 1 = \frac{F_{jj}}{1 - F_{jj}}
\end{aligned}$$
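A simulation view of this theorem: for a transient state the number of returns is finite with mean $F_{jj}/(1 - F_{jj})$, while for a recurrent state it grows without bound. A sketch using the (recurrent) example chain, assuming only the standard library:

```python
import random

def count_returns(P, j, steps):
    """Simulate one path of length `steps` from state j; count returns to j."""
    state, returns = j, 0
    for _ in range(steps):
        state = random.choices(range(len(P)), weights=P[state])[0]
        returns += (state == j)
    return returns

P = [[3/4, 1/4, 0], [1/4, 1/2, 1/4], [0, 3/4, 1/4]]
# State 0 is recurrent (F_00 = 1), so F_00 / (1 - F_00) is infinite:
# the observed number of returns keeps growing with the path length.
for steps in (100, 1000, 10000):
    print(steps, count_returns(P, 0, steps))
```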

Theorem:
If state $i$ is recurrent and $i \leftrightarrow j$, then state $j$ is recurrent.

Proof:
Since $i$ is recurrent and $i \leftrightarrow j$,

$$p_{ij}^{(k)} > 0, \qquad p_{ji}^{(m)} > 0, \qquad \text{and} \qquad \sum_{n=0}^{\infty} p_{ii}^{(n)} = \infty$$

Now,

$$p_{jj}^{(m+n+k)} = \sum_{r=0}^{\infty} p_{jr}^{(m)}\, p_{rj}^{(n+k)} \ge p_{ji}^{(m)}\, p_{ij}^{(n+k)} \qquad \cdots (1)$$

Again,

$$p_{ij}^{(n+k)} = \sum_{r=0}^{\infty} p_{ir}^{(n)}\, p_{rj}^{(k)} \ge p_{ii}^{(n)}\, p_{ij}^{(k)} \qquad \cdots (2)$$


From equations (1) and (2) we get

$$p_{jj}^{(m+n+k)} \ge p_{ji}^{(m)}\, p_{ii}^{(n)}\, p_{ij}^{(k)}$$

$$\Rightarrow \sum_{n=0}^{\infty} p_{jj}^{(m+n+k)} \ge p_{ji}^{(m)}\, p_{ij}^{(k)} \sum_{n=0}^{\infty} p_{ii}^{(n)} = \infty$$

i.e., $\sum_n p_{jj}^{(n)} = \infty$, so state $j$ is recurrent.

So, if state $i$ is recurrent and $i \leftrightarrow j$, then state $j$ is recurrent. (Proved)

Limiting Probability:
The limiting probability $\pi_j$ ($j = 0, 1, \ldots$) is the long-run proportion of time that the Markov chain spends in state $j$, given by

a) $\pi_j = \sum_{i=0}^{\infty} p_{ij}^{(n)}\, \pi_i$

b) $\sum_j \pi_j = 1$
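For the aperiodic, irreducible example chain the limiting probabilities can be read off a high power of $P$; a minimal sketch assuming NumPy:

```python
import numpy as np

P = np.array([[3/4, 1/4, 0],
              [1/4, 1/2, 1/4],
              [0,   3/4, 1/4]])

Pn = np.linalg.matrix_power(P, 100)  # rows of P^n converge to (pi_0, pi_1, pi_2)
pi = Pn[0]
print(pi, pi.sum())                  # limiting probabilities; sum to 1
print(np.allclose(pi @ P, pi))       # True: pi is stationary, pi P = pi
```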

Theorem:
If state $j$ is persistent non-null, then as $n \to \infty$:

i) $p_{jj}^{(nt)} \to \dfrac{t}{\mu_{jj}}$ when state $j$ is periodic with period $t$, and

ii) $p_{jj}^{(n)} \to \dfrac{1}{\mu_{jj}}$ when state $j$ is aperiodic.

In case state $j$ is persistent null (whether periodic or aperiodic), $p_{jj}^{(n)} \to 0$ as $n \to \infty$.

Proof:
If state $j$ is persistent, then

$$\mu_{jj} = \sum_{n} n f_{jj}^{(n)} \qquad \text{and} \qquad p_{jj}^{(n)} = \sum_{r} f_{jj}^{(r)}\, p_{jj}^{(n-r)}$$


In order to prove the theorem we use the following lemma.

Lemma:
Let $\{f_n\}$ be a sequence such that $f_n \ge 0$, $\sum f_n = 1$, and let $t \ge 1$ be the greatest common divisor of those $n$ for which $f_n > 0$. Let $\{u_n\}$ be another sequence such that $u_0 = 1$ and $u_n = \sum_{r=1}^{n} f_r u_{n-r}$ ($n \ge 1$). Then

$$\lim_{n \to \infty} u_{nt} = \frac{t}{\mu}, \qquad \text{where } \mu = \sum_{n} n f_n,$$

the limit being zero when $\mu = \infty$; and $\lim_{N \to \infty} u_N = 0$ whenever $N$ is not divisible by $t$.

Now, substituting

$$f_{jj}^{(n)} \text{ for } f_n, \qquad p_{jj}^{(n)} \text{ for } u_n, \qquad \mu_{jj} \text{ for } \mu,$$

we get

$$p_{jj}^{(nt)} \to \frac{t}{\mu_{jj}} \quad \text{as } n \to \infty$$

If $j$ is aperiodic (i.e., $t = 1$), then

$$p_{jj}^{(n)} \to \frac{1}{\mu_{jj}} \quad \text{as } n \to \infty$$

In case state $j$ is persistent null ($\mu_{jj} = \infty$),

$$p_{jj}^{(n)} \to 0 \quad \text{as } n \to \infty$$
