
Discrete-Time Markov Chains

ELEC345 1
Introduction
• Markov chains model phenomena or systems with states
and transitions between states.
• States can represent quantities such as:
– ON-OFF traffic sources
– The number of packets in a buffer.
• Google also uses Markov chains (PageRank) to rank search results.

ELEC345 2
Example Chain
• Variable bit rate source (e.g. a coded video
source):

[Diagram: two-state chain; the source transmits at 2 Mbps in one state and 6 Mbps in the other]

• What is the average rate of the source?


ELEC345 3
Discrete-time and continuous-time
chains
• In a discrete-time Markov chain, changes of state occur only at
discrete points in time (e.g. at the start of a slot in a slotted time system).

[Diagram: in discrete time the chain can only change state at the arrows along the time axis]

• Continuous-time Markov chains allow transitions to occur at any point
in time.
– A key assumption here is that the time spent in a state is an exponentially
distributed random variable.
• These notes only consider discrete-time Markov chains
ELEC345 4
Probability
• Probability is a numerical value given to the
likelihood of a random outcome:
1 = certain
0 = impossible
• Fair die:
Probability of a particular
side turning up = 1/6

ELEC345 5
Probability density
• Repeating a random
experiment allows a
histogram to be obtained.
• As the number of experiments
increases, the histogram
approaches a theoretical curve
(the probability density).
• E.g. in rolling a fair die, in
the long run a 4 will occur 1/6
of the time (about 16.7%).
ELEC345 6
Sequences of events and
realisations
• Repeating an experiment or
observing a random variable
over time produces a
realisation.
• At time t1 we can determine the
probability of a value
occurring.
• If at another arbitrary time
t2 we get the same
probability distribution, we
say the process is stationary.
ELEC345 7
Transition Probabilities
• Suppose the states are labelled 1, 2, …, N.
• The transition probability pij gives the probability of a
transition from state i to state j (at the time point when a
state change is allowed).
• The probabilities are usually written in matrix form
(row: from state; column: to state):

        ( p11  p12  …  p1N )
    P = ( p21  p22  …  p2N )
        (  ⋮    ⋮       ⋮  )
        ( pN1  pN2  …  pNN )
ELEC345 8
Example Chain

[Diagram: State 1 has a self-loop with probability 0.8 and a transition to
State 2 with probability 0.2; State 2 has a self-loop with probability 0.6 and
a transition to State 1 with probability 0.4]

        ( 0.8  0.2 )
    P = ( 0.4  0.6 )

Current state   Next state   Transition probability   Value
      1             1                 P11              0.8
      1             2                 P12              0.2
      2             1                 P21              0.4
      2             2                 P22              0.6

For the two-state chain in general:

          1    2
    1  ( p11  p12 )
    2  ( p21  p22 )

ELEC345 9
Properties of probability
transition matrices
• Since the probabilities of transitions out of state i add up
to 1, the values in each row of P sum to 1 (this property
defines what is called a stochastic matrix).
• A state is allowed to transition back to itself.

ELEC345 10
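As an aside (not part of the original slides), the row-sum property is easy to check mechanically. A minimal pure-Python sketch, with the matrix taken from the two-state example chain in these notes:

```python
def is_stochastic(P, tol=1e-9):
    """True if every row of P sums to 1 and all entries lie in [0, 1]."""
    for row in P:
        if any(p < 0 or p > 1 for p in row):
            return False
        if abs(sum(row) - 1.0) > tol:
            return False
    return True

# Transition matrix of the two-state example chain
P = [[0.8, 0.2],
     [0.4, 0.6]]
print(is_stochastic(P))  # True
```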
Steady state
• Typically, the chain starts (at time 0) in a given state with a
certain probability, and with each time step the state changes
according to the probabilities in P.
• The time steps are numbered 0, 1, 2, 3, …, n, …
• After a long period of time a steady state condition is reached,
in which there is a fixed probability of being in each particular state.

[Diagram: a sample path of the chain moving between State 1 and State 2
over time steps 0–6]
ELEC345 11
Two-state example
The chain starts in State 1

[Plot: the probability of being in state 1 starts at 1 and the probability of
being in state 2 starts at 0; over time steps 0, 1, 2, … both curves settle to
their steady state values]
ELEC345 12
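The convergence shown in the plot can be reproduced numerically. A pure-Python sketch (an aside, not part of the original slides): starting from state 1, repeatedly multiply the state-probability row vector by P:

```python
def step(pi, P):
    """One time step: the new distribution is the row vector pi times P."""
    n = len(P)
    return [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]

P = [[0.8, 0.2],
     [0.4, 0.6]]
pi = [1.0, 0.0]      # the chain starts in state 1
for t in range(20):
    pi = step(pi, P)
print(pi)            # close to (2/3, 1/3), the steady state found later in these notes
```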
Steady state probabilities
• The steady state probability of being in state i is called πi (i = 1, 2, …, N)
• With N states there are N steady state probabilities:
π1 = Prob (State = 1)
π2 = Prob (State = 2)
⋮
πN = Prob (State = N)
These can be put into row vector form:
π = (π1, π2, …, πN)

ELEC345 13
Equations for steady state
probabilities - Two-state example
• In steady state the probability of being in state 1 is equal
to:
– The probability of being in state 1 in the previous time slot and
making a transition to itself with probability P11
plus
– the probability of being in state 2 in the previous time slot and
making a transition to state 1 with probability P21.
• This gives an equilibrium equation for state 1:
 1   1.P11   2.P 21

ELEC345 14
Picture (State 1)

Previous time point Current time point


[Diagram: arrows into State 1 at the current time point, from State 1
(probability P11) and from State 2 (probability P21) at the previous
time point]

π1 = π1·P11 + π2·P21

ELEC345 15
In words
 1   1.P11   2.P 21
Prob of being Prob of being X Prob of
=
in state 1 in state 1 transition from (A)
1 to 1

+ Prob of being X
Prob of
in state 2 transition from 2 (B)
n.b. the two events: to 1
(A) In state 1 and transition from 1 to 1
(B) In state 2 and transition from 2 to 1
cover every possibility are mutually exclusive, so and the probabilities add
ELEC345 16
Picture (State 2)

Previous time point Current time point


[Diagram: arrows into State 2 at the current time point, from State 1
(probability P12) and from State 2 (probability P22) at the previous
time point]

π2 = π1·P12 + π2·P22

ELEC345 17
Conditioning
π2·P21 = Prob of being in state 2 × Prob of transition from 2 to 1

This is an example of conditional probability (conditional means
"knowing that" or "given that"):

Prob (A and B) = Prob (A) × Prob (B conditional on A happening)

A = Previous state is 2
B = Current state is 1

ELEC345 18
Balance equations
• The equilibrium equations are also called balance equations.
• Written together they are:

π1 = π1·P11 + π2·P21
π2 = π1·P12 + π2·P22

• We often write this in matrix form as

(π1  π2) = (π1  π2) ( P11  P12 )
                    ( P21  P22 )

• Or as the matrix equation

π = π·P

with the row vector π = (π1  π2)
ELEC345 19
Two-state example
• Balance equations:

π1 = 0.8·π1 + 0.4·π2
π2 = 0.2·π1 + 0.6·π2

• Or, in matrix form:

(π1  π2) = (π1  π2) ( 0.8  0.2 )
                    ( 0.4  0.6 )

• Normalisation equation (all steady state probabilities add up
to 1):

π1 + π2 = 1

ELEC345 20
Solving for steady state probabilities
• Our goal is to find π1 and π2.

ELEC345 21
Direct solution
• Solving the balance equations gives π1 in
terms of π2, as the equations are linearly dependent (see
next slide).
• We must add the requirement that π1 + π2 = 1
(normalisation requirement) to find the unique solution for
π1 and π2.

ELEC345 22
Two-state example
• Solving the balance equations:

π1 = 0.8·π1 + 0.4·π2
π2 = 0.2·π1 + 0.6·π2

0.2·π1 = 0.4·π2
0.4·π2 = 0.2·π1

π1 = 2·π2
• Hence the equations are linearly dependent: there is an infinity of
solutions for π1, π2. We add the normalisation equation to find a unique
solution:

π1 + π2 = 1
ELEC345 23
Unique solution
• The balance equations give

π1 = 2·π2

• Inserting this into the normalisation equation π1 + π2 = 1 gives

2·π2 + π2 = 1

i.e.

3·π2 = 1

The solution is thus:

π2 = 1/3
π1 = 2/3
ELEC345 24
Verification
Solution:
π1 = 2/3
π2 = 1/3

Equations:
π1 = 0.8·π1 + 0.4·π2
π2 = 0.2·π1 + 0.6·π2
1 = π1 + π2

Verification:
2/3 = 0.8 × 2/3 + 0.4 × 1/3
1/3 = 0.2 × 2/3 + 0.6 × 1/3
1 = 2/3 + 1/3
25
Average data rate
• Two-state example:

Average data rate
= Prob(in state 1) × Rate in state 1 + Prob(in state 2) × Rate in state 2
= π1 × 2 + π2 × 6
= (2/3) × 2 + (1/3) × 6 = 10/3 Mbps

ELEC345 26
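The calculation above generalises to any number of states: weight each state's rate by its steady state probability. A minimal pure-Python sketch (an aside, not in the original slides), with the rates and probabilities from the example:

```python
def average_rate(pi, rates):
    """Steady-state average: weight each state's rate by its probability."""
    return sum(p * r for p, r in zip(pi, rates))

pi = [2/3, 1/3]     # steady state probabilities from the example
rates = [2.0, 6.0]  # Mbps in state 1 and state 2
print(average_rate(pi, rates))  # 10/3, i.e. about 3.33 Mbps
```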
Matrix method of solution
• We will develop a technique that can be used to solve for
arbitrary chains. We use the example as a guide.
• Start with the balance equations together with the
normalisation equation:

π1 = π1·P11 + π2·P21
π2 = π1·P12 + π2·P22
1 = π1 + π2

• Put all variables on the right-hand side:

0 = π1·(P11 − 1) + π2·P21
0 = π1·P12 + π2·(P22 − 1)
1 = π1 + π2
27
• Write these equations in matrix form as follows:

    ( 0 )   ( P11 − 1    P21    )
    ( 0 ) = (  P12     P22 − 1  ) ( π1 )
    ( 1 )   (   1         1     ) ( π2 )

• Group blocks of elements in this matrix as matrices and
vectors themselves:

    ( 0 )   ( ( P11  P21 )   ( 1  0 ) )
    ( 0 ) = ( ( P12  P22 ) − ( 0  1 ) ) ( π1 )
    ( 1 )   (        ( 1  1 )         ) ( π2 )

ELEC345 28
• Write the matrix equation in block matrix form using
symbols for the known inner matrices:

    ( 0 )   ( P′ − I )
    (   ) = (        ) π′
    ( 1 )   (   1    )

where

    0 = ( 0 )
        ( 0 )
    1 = ( 1  1 )   (n.b. the 1 on the LHS is just "one")
    I = ( 1  0 )   identity matrix
        ( 0  1 )
    P′ = ( P11  P21 )   transpose of the probability transition matrix P
         ( P12  P22 )
    π′ = ( π1 )   transpose of the probability vector π
         ( π2 )
ELEC345 29
• We can write this equation in terms of a matrix Q and a
vector b, where we have to solve for x:

    b = Q·x

    b = ( 0 )
        ( 1 )
    Q = ( P′ − I )
        (   1    )
    x = π′

ELEC345 30
Matlab solution
• The Matlab code to find the steady state probabilities for the example
is:

N = 2;                        % Number of states
P = [0.8,0.2;0.4,0.6];        % Prob. transition matrix for chain
Q = [P' - eye(N);ones(1,N)];  % eye creates identity matrix
                              % ones creates matrices of all 1's
b = [zeros(N,1);1];           % zeros creates matrices of all 0's
x = linsolve(Q,b);            % linsolve solves linear equations
fprintf('pi1=%g pi2=%g\n',x(1),x(2))
31
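For readers without Matlab, the same b = Q·x system can be solved in plain Python. This is a hedged sketch, not part of the original notes (the function name steady_state_2 is illustrative): since Q has three rows but only two unknowns, one redundant balance equation is dropped and the remaining 2×2 system is solved by Cramer's rule:

```python
def steady_state_2(P):
    """Steady state of a two-state chain with transition matrix P.

    Solves the 2x2 system formed by one balance equation and the
    normalisation equation, using Cramer's rule:
        pi1*(P11 - 1) + pi2*P21 = 0
        pi1           + pi2     = 1
    """
    a11, a12 = P[0][0] - 1.0, P[1][0]  # balance equation coefficients
    a21, a22 = 1.0, 1.0                # normalisation coefficients
    b1, b2 = 0.0, 1.0
    det = a11 * a22 - a12 * a21
    pi1 = (b1 * a22 - a12 * b2) / det
    pi2 = (a11 * b2 - b1 * a21) / det
    return pi1, pi2

print(steady_state_2([[0.8, 0.2], [0.4, 0.6]]))  # approximately (2/3, 1/3)
```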
Markov chain structures
• State x communicates with state y if it is possible to get
from x to y in a finite number of transitions and from y to x
in a finite number of transitions.
• A chain is irreducible if every state communicates with
every other state
– It is possible to have disconnected sets of states in a chain.
• A state is called recurrent if it is returned to an infinite
number of times (with probability 1).
• Otherwise the state is called transient – it is only visited a
finite number of times
• Some chains can have absorbing states which, once entered,
cannot be left.
ELEC345 32
Examples

[Diagrams: three four-state chains over states 1–4]
Not irreducible (e.g. 1 and 4 do not communicate)
Not irreducible (e.g. cannot go from 4 to 1)
Irreducible

[Diagram: an eight-state chain over states 1–8]
Recurrent states: 1, 2, 3, 4
Transient states: 5, 6, 7
Absorbing state: 8
33
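The "communicates" definition above can be turned into a mechanical test. A pure-Python sketch (an aside, not part of the original slides; the two four-state chains below are hypothetical, with states numbered 0–3): breadth-first search over the directed graph of possible transitions, since a chain is irreducible iff every state can reach every other.

```python
from collections import deque

def reachable(adj, start):
    """Set of states reachable from `start`; adj[i] lists the states j
    with a positive one-step transition probability from i."""
    seen = {start}
    queue = deque([start])
    while queue:
        i = queue.popleft()
        for j in adj[i]:
            if j not in seen:
                seen.add(j)
                queue.append(j)
    return seen

def is_irreducible(adj):
    """A chain is irreducible iff every state communicates with every other."""
    n = len(adj)
    return all(reachable(adj, s) == set(range(n)) for s in range(n))

# Two hypothetical four-state chains
print(is_irreducible([[1], [3], [0], [2]]))  # True: 0->1->3->2->0 is a cycle
print(is_irreducible([[1], [0], [3], [2]]))  # False: {0,1} and {2,3} never communicate
```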
