Applied Stochastic Processes

This document provides an introduction to applied stochastic processes. It begins by defining a stochastic process as a family of random variables indexed by time or another parameter. It then describes different types of stochastic processes based on whether the indexing parameter and possible values are discrete or continuous. The document also introduces some key concepts in stochastic processes, including processes with independent increments, second order processes, Gaussian processes, and Markov processes. It previews topics that will be covered in more depth later, such as martingales, Markov chains, and Markov processes with discrete state spaces.


Applied Stochastic Processes

Ramakrushna Mishra1 and Jyotirmayee Panda2

February 9, 2021
Contents

1 Introduction
  1.1 Specification of a Stochastic Process
    1.1.1 Process with independent increments
    1.1.2 Second order Process
    1.1.3 Gaussian process
    1.1.4 Markov process
  1.2 Covariance function

2 Martingales
  2.1 Examples
  2.2 Polya's Urn Scheme
  2.3 Super- and sub-martingales
  2.4 Polya's Theorem
    2.4.1 1D simple symmetric random walk
    2.4.2 2D simple symmetric random walk
    2.4.3 3D simple symmetric random walk
  2.5 Gambler's Ruin Problem

3 Markov Chains
  3.1 Transition Probability Matrix
  3.2 Initial Distribution
  3.3 Two-step Transition Probabilities

4 Markov process with discrete state space

Chapter 1

Introduction

The theory of stochastic processes is generally regarded as the dynamic part of probability theory, in which one studies a collection of random variables indexed by a parameter. One is observing a stochastic process whenever one examines a system developing in time in a manner controlled by probabilistic laws. In other words, a stochastic process can be regarded as an empirical abstraction of a phenomenon developing in nature according to some probabilistic laws.

If a scientist is to take into account the probabilistic nature of the phenomenon with which he is dealing, he should undoubtedly make use of the theory of stochastic processes. The scientist making measurements in his laboratory, the meteorologist attempting to forecast the weather, the control system engineer designing a servomechanism, the electrical engineer designing a communication system, the hardware engineer developing a computer network, the economist studying price fluctuations and business cycles, the seismologist studying earthquake vibrations, and the neurosurgeon studying the electrocardiogram all encounter problems to which stochastic processes can be applied. Financial modelling and insurance mathematics are emerging areas where the theory of stochastic processes is widely used.

Examples of stochastic processes are provided by the generation sizes of a population such as a bacterial colony, the lifetimes of items under successive renewals, service times in a queueing system, waiting times in front of a service center, the displacement of a particle executing Brownian motion, the number of events during a particular time interval, the number of deaths in a hospital on different days, the voltage in an electrical system at different time instants, the maximum temperature at a particular place on different days, the deviation of an artificial satellite from its stipulated path at each instant of time after its launch, the quantity purchased of a particular inventory item on different days, etc. Suppose that a scientist is observing the trajectory of a satellite after its launch. At random time intervals, the scientist observes whether it is deviating from the designated path and, if so, the magnitude of the deviation.
Definition 1.1. A stochastic process is a random process which depends upon a time parameter. Families of random variables which are functions of time are known as stochastic processes, random processes, or random functions. Thus a stochastic process is a family of indexed random variables {X(t, ω) : t ∈ T, ω ∈ Ω} defined on a probability space (Ω, S, P), where the index set T is called the "parameter space" and the set of values assumed by X(t, ω) is called the "state space".
A function defined on a sample space is a random variable. Typical examples of random variables are the number of aces in a hand of bridge and the number of common birthdays in a company of n people; these are defined on discrete sample spaces. A gambling game also gives rise to random variables: every loss or gain of the gambler is a random variable corresponding to the respective game.
1. For each fixed t ∈ T, X(t, ω) is a random variable (a function of ω).

2. For each fixed ω ∈ Ω, X(t, ω) is a function of t, called a sample path or realization of the process.

3. For each fixed pair (t, ω), X(t, ω) is a number.

4. In general, the process is a family of functions X(t, ω), where both t and ω range over their possible values.

We denote the stochastic process by Xn or X(t) according as the indexing parameter is discrete or continuous (omitting ω). The values assumed by the random variable X(t) are called states, and the set of all possible values of X(t) is called the state space of the process, denoted by S.

The state space S can be discrete or continuous. When S is discrete, by a proper labelling we can take the state space to be a subset of the natural numbers N; it may be finite or infinite. The main elements distinguishing stochastic processes are the nature of the state space S, the nature of the parameter space T, and the dependence relations among the random variables X(t). Accordingly there are four types of processes.

1. Type-1: Both S and T are discrete. Examples are provided by the number of customers reported at a bank counter on the nth day, the nth generation size of a population, the number of births in a hospital on the nth day, etc.

2. Type-2: T is continuous and S is discrete. Examples include the number of persons in a queue at time t, the number of telephone calls during (0, t), the number of vehicles passing through a specific junction during (0, t), etc.

3. Type-3: T is discrete and S is continuous. Examples are provided by the lifetime of the nth renewed item, the service time of the nth customer, the waiting time on the nth day for transport, the maximum temperature in a city on the nth day, etc.

4. Type-4: Both T and S are continuous. Examples include the voltage in an electrical system at time t, the ECG level of a patient at time t, the speed of a vehicle at time t, the displacement of a particle undergoing Brownian motion at time t, the altitude of a satellite at time t, etc.

Sometimes the discrete-parameter family is called a stochastic sequence and the continuous-parameter family a stochastic process. We now describe some of the classical types of stochastic processes, characterized by different dependence relations among the X(t).

1.1 Specification of a Stochastic Process


1.1.1 Process with independent increments
Let {X(t) : t ∈ T} be a stochastic process. Suppose t0, t1, t2, ... are points of time such that t0 < t1 < t2 < ...

Define Z(ti) = X(ti) − X(ti−1), for i = 1, 2, ...

If Z(t1), Z(t2), ... are independent random variables, then the stochastic process {X(t) : t ∈ T} is called a process with independent increments.

e.g. X(t): number of customers arrived at a retail counter in time (0, t).

X(t0): number of customers arrived in time (0, t0).

X(t1): number of customers arrived in time (0, t1).

Then the number of customers arriving between times t0 and t1 is X(t1) − X(t0) = Z(t1).

A process with independent increments is thus a special kind of stochastic process.

1.1.2 Second order Process


A stochastic process {X(t) : t ∈ T} is called a second-order process if it has finite second-order moments, i.e. E{X(t)^2} < ∞ for every t ∈ T.

The function M(t) = E{X(t)} = µ1(t) is called the mean function of the process,

and C(s, t) = E{X(s)X(t)} − E{X(s)}E{X(t)}, for s, t ∈ T,

is known as the covariance function of the stochastic process {X(t) : t ∈ T}.

A stochastic process {X(t) : t ∈ T} is called a covariance stationary process, or simply a stationary process, if its mean function M(t) is independent of t and its covariance function C(s, t) is a function of the time difference t − s only. Let us consider a particle whose displacement at time t is given by

X(t) = A1 cos ωt + A2 sin ωt,

where A1 and A2 are independent random variables with mean 0 and common variance σ^2, and ω is the angular frequency. Here

M(t) = E{X(t)}
= E{A1 cos ωt + A2 sin ωt}
= E{A1} cos ωt + E{A2} sin ωt = 0,

C(s, t) = E[X(s)X(t)] − E{X(s)}E{X(t)}
= E[X(s)X(t)]
= E[{A1 cos ωs + A2 sin ωs}{A1 cos ωt + A2 sin ωt}]
= E[A1^2 cos ωs cos ωt + A1 A2 cos ωs sin ωt + A2 A1 sin ωs cos ωt + A2^2 sin ωs sin ωt]
= σ^2 [cos ωs cos ωt + sin ωs sin ωt]      (since E[A1 A2] = 0 and E[Ai^2] = σ^2)
= σ^2 cos ω(t − s)
= ρ(t − s).

So the stochastic process {X(t) : t ∈ T} is a covariance stationary process.
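As a numerical sanity check (a sketch, not part of the original notes), one can estimate C(s, t) for this process by Monte Carlo and verify that it depends only on the lag t − s. The Gaussian choice for A1, A2 and the parameter values below are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
omega, sigma, N = 2.0, 1.5, 200_000

# A1, A2: independent, mean 0, variance sigma^2, drawn once per sample path.
A1 = rng.normal(0.0, sigma, N)
A2 = rng.normal(0.0, sigma, N)

def X(t):
    """X(t) = A1 cos(wt) + A2 sin(wt), evaluated across all sample paths."""
    return A1 * np.cos(omega * t) + A2 * np.sin(omega * t)

def C(s, t):
    """Sample covariance of X(s) and X(t)."""
    xs, xt = X(s), X(t)
    return np.mean(xs * xt) - np.mean(xs) * np.mean(xt)

# C(s, t) should depend only on the lag t - s and equal sigma^2 cos(w(t - s)).
for s, t in [(0.3, 1.0), (1.3, 2.0), (5.3, 6.0)]:   # all with lag 0.7
    print(round(C(s, t), 3), round(sigma**2 * np.cos(omega * (t - s)), 3))
```

The three estimated covariances agree with each other and with σ^2 cos ω(t − s), as the derivation predicts.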

1.1.3 Gaussian process


A stochastic process {X(t) : t ∈ T} is called a Gaussian process if the joint distribution of (X(t1), X(t2), ..., X(tn)) is multivariate normal for every finite set of indices t1, t2, ..., tn ∈ T.

1.1.4 Markov process


If {X(t) : t ∈ T} is a stochastic process such that, given the value of X(s), the values of X(t) for t > s do not depend on X(u) for u < s, then the process is called a Markov process.

Past (u < s)   Present (s)   Future (t > s)

The future depends only on the present, not on the past. So a Markov process can be defined as follows: for t1 < t2 < ... < tn < t,

Pr{a ≤ X(t) ≤ b | X(t1) = x1, X(t2) = x2, ..., X(tn) = xn}
= Pr{a ≤ X(t) ≤ b | X(tn) = xn}.

A discrete-parameter Markov process is called a Markov chain. Here

P(Xn+1 = xn+1 | Xn = xn, Xn−1 = xn−1, ..., X0 = x0) = P(Xn+1 = xn+1 | Xn = xn).

Example-1: Let {Xn : n ≥ 1} be a sequence of uncorrelated random variables with mean 0 and unit variance. Then

C(n, m) = Cov(Xn, Xm) = E(Xn Xm) = 0 if m ≠ n, and 1 if m = n.

So the stochastic process {Xn : n ≥ 1} is covariance stationary. But it is not necessarily strictly stationary; it will be strictly stationary when the Xn are also identically distributed.

Example-2: Let {X(t) : t ≥ 0} be a Poisson process, so that

Pr(X(t) = n) = exp(−λt)(λt)^n / n!,   n = 0, 1, 2, ...

Here M(t) = E{X(t)} = λt, which is not independent of t, so the Poisson process is not a stationary process.
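The non-stationarity of the mean function can be seen directly by simulation; the rate λ and sample sizes below are illustrative choices, not from the text.

```python
import numpy as np

rng = np.random.default_rng(1)
lam = 3.0   # illustrative rate

# For each t, X(t) ~ Poisson(lam * t); estimate the mean function M(t).
means = {t: rng.poisson(lam * t, size=100_000).mean() for t in [0.5, 1.0, 2.0]}
for t, m_hat in means.items():
    print(t, round(m_hat, 2), lam * t)   # M(t) = lam * t grows with t
```

The estimated means track λt and grow with t, confirming that M(t) is not constant.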

Example-3: Let X(t) = A1 + A2 t, where A1 and A2 are independent random variables with

E(Ai) = ai,  V(Ai) = σi^2,  i = 1, 2.

Find the mean and covariance functions and verify that the process is evolutionary (i.e. non-stationary).

Example-4: Consider a stochastic process {X(t) : t ∈ T} whose probability distribution, under a certain condition, is

Pr{X(t) = n} = (at)^(n−1) / (1 + at)^(n+1),   n = 1, 2, 3, ...
Pr{X(t) = 0} = at / (1 + at).

Find the mean and variance of this process and show that it is not stationary.
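For readers who want to check their answer to this exercise numerically, the truncated-series computation below (a sketch; the truncation point and parameter values are arbitrary) suggests that the mean is 1 for every t while the variance is 2at, which grows with t — hence the process is not stationary.

```python
def moments(a, t, nmax=500):
    """Mean and variance of the distribution
    P(X=0) = at/(1+at),  P(X=n) = (at)^(n-1)/(1+at)^(n+1) for n >= 1,
    computed by truncating the series at nmax."""
    x = a * t
    r = x / (1 + x)                       # common ratio, r < 1, so no overflow
    probs = [x / (1 + x)] + [r ** (n - 1) / (1 + x) ** 2 for n in range(1, nmax)]
    mean = sum(n * p for n, p in enumerate(probs))
    var = sum(n * n * p for n, p in enumerate(probs)) - mean ** 2
    return sum(probs), mean, var

for a, t in [(0.5, 1.0), (0.5, 4.0), (2.0, 3.0)]:
    total, mean, var = moments(a, t)
    print(a, t, round(total, 6), round(mean, 6), round(var, 4))
```

The total probability sums to 1 in each case, the mean stays at 1, and the variance matches 2at.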

1.2 Covariance function


C(s, t) = E{X(s)X(t)} − E{X(s)}E{X(t)}.

Properties

1. It is symmetric in s and t, i.e. C(s, t) = C(t, s) for all s, t ∈ T.

2. By the Cauchy–Schwarz inequality, |C(s, t)| ≤ sqrt(C(s, s) C(t, t)).

3. The covariance function is non-negative definite, i.e.

Σ_{j=1}^{n} Σ_{k=1}^{n} aj ak C(tj, tk) = E[ Σ_{j=1}^{n} aj {X(tj) − E X(tj)} ]^2 ≥ 0,

where aj, ak ∈ R.

4. Closure property: the sum and the product of two covariance functions are again covariance functions.
Remark. If a stochastic process {X(t) : t ∈ T} is covariance stationary (also called weakly stationary or wide-sense stationary), then for any t0 ∈ T we can write

C(s + t0, t + t0) = Cov(X(s + t0), X(t + t0))
= E[X(s + t0)X(t + t0)] − E[X(s + t0)]E[X(t + t0)]
= E[X(s)X(t)] − E[X(s)]E[X(t)]
= Cov(X(s), X(t))
= C(s, t),

i.e. the covariance of X(s + t0) and X(t + t0) is the same as that of X(s) and X(t) for any value of t0 ∈ T.

More generally, a stochastic process is said to be stationary of order n if the joint distribution of X(t1), X(t2), ..., X(tn) is the same as the joint distribution of X(t1 + h), X(t2 + h), ..., X(tn + h) for arbitrary t1, t2, ..., tn and any h > 0. A process that is stationary of order n for every n is called strictly stationary.

Chapter 2

Martingales

A stochastic process is often characterized by the dependence relationships between the members of the family. A particular type of dependence through the conditional mean is known as the martingale property.
The word martingale is a French word which originally referred to a class of betting strategies popular in 18th-century France. At present, martingale theory is an important part of probability theory, used to model many real-life situations analytically.
Definition 2.1. A discrete-parameter stochastic process {Xn : n ≥ 0} is called a martingale or martingale process if for all n we have

1. E{|Xn|} < ∞

2. E{Xn+1 | Xn, Xn−1, ..., X0} = Xn
Example: Let {Zi : i = 1, 2, ...} be a sequence of i.i.d. random variables with mean 0, and let Xn = Z1 + Z2 + ... + Zn. Then {Xn : n ≥ 1} is a martingale. Here

E{|Xn|} = E|Z1 + ... + Zn| ≤ E|Z1| + ... + E|Zn| < ∞,

since each E|Zi| is finite. Also Xn+1 = Xn + Zn+1, so

E{Xn+1 | Xn, Xn−1, ..., X0} = E{Xn + Zn+1 | Xn, Xn−1, ..., X0}
= E{Xn | Xn, ..., X0} + E{Zn+1 | Xn, ..., X0}
= Xn + E{Zn+1}
= Xn,

since Zn+1 is independent of X0, ..., Xn and E{Zn+1} = 0.

Remark. If b1, b2, ... are real numbers and Xn = b1 Z1 + ... + bn Zn, where the Zi are i.i.d. random variables with zero mean, then {Xn : n ≥ 1} is a martingale.
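One testable consequence of the martingale property is that the expectation is constant: E Xn = E X0 for every n. The simulation below checks this for the sum-of-i.i.d. example, using ±1 steps as an arbitrary mean-zero choice.

```python
import numpy as np

rng = np.random.default_rng(2)
paths, steps = 50_000, 100

# Z_i i.i.d. with mean 0 (here +/-1 with probability 1/2, an arbitrary choice);
# X_n = Z_1 + ... + Z_n is then a martingale.
Z = rng.choice([-1, 1], size=(paths, steps))
X = Z.cumsum(axis=1)

# A martingale has constant expectation: E X_n = E X_0 = 0 for every n.
sample_means = X.mean(axis=0)
print(np.abs(sample_means).max())   # stays near 0 for all n
```

The sample mean of Xn stays near 0 at every step, consistent with E Xn = 0.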

2.1 Examples
Example-1
Gambler's Ruin Problem: Suppose that a gambler with initial capital Rs. a plays a series of games against a rich adversary (say, a gambling machine). He stands to gain one unit of money in each game with probability p and to lose one unit with probability q, such that p + q = 1.

Let Xn be the gain of the player in the first n games and Zi the loss/gain in the ith game, so that Xn = Z1 + Z2 + ... + Zn. We say that the gambler is ruined after the nth game if Xn = −a.

Here EZi = 1·p + (−1)·q = p − q for all i, so EZi = 0 if p = q.

When p = q we can prove that {Xn : n ≥ 1} is a martingale; the game is then said to be fair.

The 1-dimensional simple symmetric random walk is also a martingale.

Example-2
Gambler's ruin problem: Let the gambler play a series of games with p = q, and let Yn be the gambler's fortune after the nth game. Then {Yn : n ≥ 1} is also a martingale.

Define Xn = Yn^2 − n.
Given Yn, Yn−1, ..., Y0, the fortune Yn+1 equals Yn + 1 or Yn − 1, each with probability 1/2, so

Pr{Xn+1 = (Yn + 1)^2 − (n + 1)} = Pr{Xn+1 = (Yn − 1)^2 − (n + 1)} = 1/2,

so that

E{Xn+1 | Yn, Yn−1, ..., Y0}
= (1/2)[(Yn + 1)^2 − (n + 1)] + (1/2)[(Yn − 1)^2 − (n + 1)]
= Yn^2 + 1 − (n + 1)
= Yn^2 − n
= Xn.

So {Xn : n ≥ 1} is a martingale.

Example-3
Consider the case p ≠ q.

Define Vn = (q/p)^Yn, where Yn is the fortune of the gambler after the nth game.

Given Yn, Yn−1, ..., Y0, the variable Vn+1 equals (q/p)^Yn+1, so

Pr{Vn+1 = (q/p)^(Yn + 1)} = p,
Pr{Vn+1 = (q/p)^(Yn − 1)} = q.

Now

E{Vn+1 | Yn, Yn−1, ..., Y0}
= p (q/p)^(Yn + 1) + q (q/p)^(Yn − 1)
= q^(Yn + 1)/p^Yn + q^Yn/p^(Yn − 1)
= (q/p)^Yn [q + p]
= (q/p)^Yn = Vn.

So {Vn : n ≥ 1} is a martingale w.r.t. {Yn : n ≥ 1}.
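The one-step identity at the heart of this example can be verified numerically for a few illustrative values of p and of the current fortune y (these values are not from the text):

```python
# Check E[V_{n+1} | Y_n = y] = p (q/p)^(y+1) + q (q/p)^(y-1) = (q/p)^y
# for a few illustrative values of p and of the current fortune y.
for p in (0.3, 0.45, 0.6):
    q = 1 - p
    for y in (-2, 0, 1, 5):
        lhs = p * (q / p) ** (y + 1) + q * (q / p) ** (y - 1)
        rhs = (q / p) ** y
        assert abs(lhs - rhs) < 1e-10 * max(1.0, rhs), (p, y)
print("one-step martingale identity verified")
```

The identity holds exactly (up to floating-point rounding), which is the algebraic content of the martingale property here.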

2.2 Polya’s Urn Scheme


An urn initially contains r red balls and b black balls. A ball is drawn from the urn and its colour is noted. It is then replaced into the urn together with another ball of the same colour.
Let Xn denote the number of red balls in the urn after the nth draw, and let

Yn = Xn / (n + b + r)

be the proportion of red balls in the urn after the nth draw (the urn then contains n + b + r balls).
The number of red balls after the (n + 1)th draw will be Xn + 1 or Xn, according as the (n + 1)th draw results in a red ball or not. Thus

Pr{Yn+1 = (Xn + 1)/((n + 1) + b + r)} = Xn/(n + r + b),
Pr{Yn+1 = Xn/((n + 1) + b + r)} = 1 − Xn/(n + r + b).

Now

E{Yn+1 | Xn, Xn−1, ..., X0}
= [Xn/(n + r + b)] · [(Xn + 1)/((n + 1) + b + r)] + [1 − Xn/(n + r + b)] · [Xn/((n + 1) + b + r)]
= [Xn^2 + Xn + Xn(n + r + b − Xn)] / [(n + r + b)(n + 1 + r + b)]
= Xn(n + r + b + 1) / [(n + r + b)(n + 1 + r + b)]
= Xn / (n + r + b)
= Yn.

Hence {Yn : n ≥ 0} is a non-negative martingale w.r.t. {Xn : n ≥ 0}.
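Since {Yn} is a martingale, E Yn stays at the initial proportion r/(r + b) for every n. A quick simulation of the urn (with illustrative initial counts) confirms this:

```python
import numpy as np

rng = np.random.default_rng(3)
r0, b0 = 2, 3                # initial red and black balls (illustrative)
paths, draws = 20_000, 50

red = np.full(paths, r0, dtype=float)
Y_means = []
for n in range(1, draws + 1):
    total = r0 + b0 + (n - 1)              # balls in the urn before the nth draw
    drew_red = rng.random(paths) < red / total
    red += drew_red                         # ball replaced plus one of same colour
    Y_means.append((red / (r0 + b0 + n)).mean())   # Y_n = X_n / (n + b + r)

# E Y_n stays at the initial proportion r0 / (r0 + b0) = 0.4 for every n.
print(round(Y_means[0], 3), round(Y_means[-1], 3))
```

The individual proportions Yn fluctuate widely from path to path, but their average stays pinned at 0.4, exactly as the martingale property predicts.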

Example-4
Let {Zn : n ≥ 1} be a sequence of i.i.d. random variables with EZn = 1 for all n, and define Xn = Z1 Z2 ··· Zn with X0 = 1. Verify that {Xn : n ≥ 0} is a martingale.

ANS: Since Xn+1 = Zn+1 Xn,

E[Xn+1 | X1, X2, ..., Xn]
= E[Zn+1 Xn | X1, ..., Xn]
= Xn E[Zn+1 | X1, ..., Xn]
= Xn EZn+1 = Xn.

Hence {Xn : n ≥ 0} is a martingale when X0 = 1.

Example-5: Let U1, U2, ..., Un, ... be independent random variables having the uniform distribution on [0, 1].

Let Xn = 2^n · U1 · U2 ··· Un with X0 = 1. Show that {Xn : n ≥ 0} is a martingale.

2.3 Super- and sub-martingales

A stochastic process {Xn : n ≥ 0} with E{|Xn|} < ∞ is called a super-martingale if

E{Xn+1 | Xn, Xn−1, ..., X0} ≤ Xn

and is called a sub-martingale if

E{Xn+1 | Xn, Xn−1, ..., X0} ≥ Xn.

Remark. 1. Every martingale is both a sub-martingale and a super-martingale.

2. A stochastic process which is both a sub-martingale and a super-martingale is a martingale.

2.4 Polya's Theorem

In a simple symmetric random walk in 1 or 2 dimensions, the particle sooner or later (and thereafter infinitely often) returns to its initial position with probability 1. But in the 3-dimensional random walk there is a positive probability that the particle never returns to its initial position.

2.4.1 1D simple symmetric random walk

Suppose a particle moves one step forward with probability p and one step backward with probability q, with p + q = 1 (symmetric means p = q = 1/2).
The state space is S = {..., −3, −2, −1, 0, 1, 2, 3, ...}.
The particle can return to its initial position only after an even number of steps: if it moves n steps forward and n steps backward, it returns after k = 2n steps.
Let P_jk^(n) be the probability that the particle, starting from state j, reaches state k after n steps.
By the binomial distribution,

P00^(2n) = C(2n, n) p^n q^n = [(2n)! / (n! n!)] p^n q^n,

and P00^(2n+1) = 0.

By Stirling's approximation, n! ~ sqrt(2π) e^(−n) n^(n + 1/2). With p = q = 1/2,

P00^(2n) = [(2n)! / (n! n!)] (1/4)^n
≈ [sqrt(2π) e^(−2n) (2n)^(2n + 1/2) / (2π e^(−2n) n^(2n + 1))] (1/4)^n
= [2^(2n) / sqrt(πn)] (1/4)^n
= 1 / sqrt(πn).

Hence

Σ_n P00^(2n) ≈ (1/sqrt(π)) Σ_n 1/sqrt(n)

is a divergent series. Therefore state 0 is a persistent (recurrent) state, and the particle returns to its initial position with probability 1.
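The exact probabilities and the Stirling estimate can be compared directly, and the divergence of the partial sums observed numerically (a sketch with arbitrary cut-offs):

```python
import math

# Exact P00^(2n) = C(2n, n) / 4^n against the Stirling estimate 1/sqrt(pi n).
for n in (1, 10, 100, 1000):
    exact = math.comb(2 * n, n) / 4 ** n
    print(n, exact, 1 / math.sqrt(math.pi * n))

# Partial sums of P00^(2n) grow without bound: divergence <=> recurrence.
sums = [sum(math.comb(2 * n, n) / 4 ** n for n in range((1), N + 1))
        for N in (10, 100, 1000)]
print(sums)
```

The exact terms approach 1/sqrt(πn) as n grows, and the partial sums keep increasing like 2 sqrt(N/π), confirming divergence.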

2.4.2 2D simple symmetric random walk

Suppose a particle moves in 2 dimensions, i.e. forward and backward, left and right, each with probability 1/4. Again the particle can return to its initial position only after an even number of steps, say 2n. Suppose it takes i steps in the positive direction and i steps in the negative direction along the X-axis, and j = n − i steps forward and j steps backward along the Y-axis. Then

P00^(2n) = Σ_{i+j=n} [(2n)! / (i! i! j! j!)] (1/4)^(2n)
= Σ_i (2n)! 4^(−2n) / [(i!)^2 ((n − i)!)^2]
= Σ_{i=0}^{n} (2n)! (n!)^2 4^(−2n) / [(n!)^2 (i!)^2 ((n − i)!)^2]
= C(2n, n) 4^(−2n) Σ_{i=0}^{n} C(n, i) C(n, n − i)
= C(2n, n) 4^(−2n) C(2n, n)
= [C(2n, n)]^2 4^(−2n)
≈ [2^(2n) / sqrt(πn)]^2 4^(−2n)
= 1/(πn).

Hence

Σ_n P00^(2n) ≈ (1/π) Σ_n 1/n

is a divergent series. So state 0 is persistent.
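The combinatorial step above, Σ_i C(n, i) C(n, n − i) = C(2n, n) (Vandermonde's identity), and the resulting 1/(πn) asymptotics can be checked exactly (a sketch with arbitrary test values):

```python
import math

# Key step above: sum_i C(n, i)^2 = C(2n, n) (Vandermonde's identity), giving
# P00^(2n) = C(2n, n)^2 / 4^(2n) ~ 1/(pi n).
for n in (3, 10, 50):
    assert sum(math.comb(n, i) ** 2 for i in range(n + 1)) == math.comb(2 * n, n)

for n in (10, 100, 1000):
    p2n = math.comb(2 * n, n) ** 2 / 4 ** (2 * n)
    print(n, p2n, 1 / (math.pi * n))
```

The identity holds exactly, and the exact return probabilities approach 1/(πn), whose sum diverges.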

2.4.3 3D simple symmetric random walk

Suppose a particle moves in 3 dimensions, i.e. forward and backward, left and right, up and down, each with probability 1/6. Again the particle can return to its initial position only after an even number of steps, say 2n. Suppose it takes i forward and i backward steps along the X-axis, j forward and j backward steps along the Y-axis, and k = n − i − j forward and k backward steps along the Z-axis. Then

P00^(2n) = Σ_{i+j+k=n} [(2n)! / (i! j! k!)^2] (1/6)^(2n)
= Σ_{i+j+k=n} (2n)! 6^(−2n) (n!)^2 / [(n!)^2 (i! j! k!)^2]
= C(2n, n) 6^(−2n) Σ_{i+j+k=n} [n! / (i! j! k!)]^2
= C(2n, n) 2^(−2n) Σ_{i+j+k=n} [3^(−n) n! / (i! j! (n − i − j)!)]^2.

Now 3^(−n) n! / (i! j! (n − i − j)!) is the probability of placing n balls at random in 3 boxes with i, j and n − i − j balls respectively; it is maximized when i, j, k are as close to n/3 as possible. Hence, using C(2n, n) 2^(−2n) ≈ 1/sqrt(πn),

P00^(2n) ≤ (1/sqrt(πn)) · [3^(−n) n! / {([n/3]!)^3}] · Σ_{i+j+k=n} 3^(−n) n! / (i! j! (n − i − j)!)
= (1/sqrt(πn)) · 3^(−n) n! / {([n/3]!)^3},

since the multinomial probabilities sum to 1. By Stirling's formula, 3^(−n) n! / {((n/3)!)^3} ≈ 3 sqrt(3)/(2πn), so P00^(2n) = O(n^(−3/2)) and Σ_n P00^(2n) converges. Therefore state 0 is transient: there is a positive probability that the particle never returns to its initial position.
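The convergence in 3 dimensions can be seen by computing P00^(2n) exactly (not in the original notes; the cut-off n ≤ 30 is an arbitrary choice):

```python
import math

def p00_3d(n):
    """Exact P00^(2n) for the 3D walk: C(2n, n) 6^(-2n) times the sum of
    squared trinomial coefficients n!/(i! j! k!) over i + j + k = n."""
    s = 0
    for i in range(n + 1):
        for j in range(n - i + 1):
            k = n - i - j
            m = math.factorial(n) // (math.factorial(i) * math.factorial(j) * math.factorial(k))
            s += m * m
    return math.comb(2 * n, n) * s / 6 ** (2 * n)

terms = [p00_3d(n) for n in range(1, 31)]
partial = sum(terms)
# Terms decay like n^(-3/2), so the series converges (transience).
print(round(terms[0], 4), round(terms[-1], 6), round(partial, 4))
```

The first term is exactly 1/6, the terms decay rapidly, and the partial sums stay bounded, in contrast with the 1D and 2D cases.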

2.5 Gambler’s Ruin Problem

Chapter 3

Markov Chains

A discrete-parameter Markov process is called a Markov chain, written {Xn : n ≥ 0}. For a Markov chain {Xn : n ≥ 0} we have

Pr{Xn+1 = j | Xn = i, Xn−1 = i1, Xn−2 = i2, ..., X0 = in}
= Pr{Xn+1 = j | Xn = i}
= Pij,   i, j ∈ S.

Here Pij denotes the probability that the system, being in state i at time n, goes to state j at the next time (n + 1). The Pij are known as the one-step transition probabilities.

3.1 Transition Probability Matrix


Let {Xn : n ≥ 0} be a Markov chain with state space S, and define Pij = Pr{Xn+1 = j | Xn = i} for i, j ∈ S. The collection of the Pij can be put in the form of a matrix

P = [ P00  P01  P02  ...
      P10  P11  P12  ...
      P20  P21  P22  ...
      ...  ...  ...  ... ]

such that Σ_j Pij = 1 for every i,

i.e. each row sum of the matrix P is 1. The matrix P is called the transition probability matrix, or a stochastic matrix.

3.2 Initial Distribution


For the Markov chain {Xn : n ≥ 0}, the distribution of the random variable X0 is called the initial distribution; write p_i0 = P(X0 = i0).

The joint distribution of X0, X1, X2, ..., Xn is then

P(X0 = i0, X1 = i1, ..., Xn = in)
= P(X0 = i0) P(X1 = i1 | X0 = i0) P(X2 = i2 | X1 = i1, X0 = i0) ··· P(Xn = in | Xn−1 = in−1, ..., X0 = i0)
= P(X0 = i0) P(X1 = i1 | X0 = i0) P(X2 = i2 | X1 = i1) ··· P(Xn = in | Xn−1 = in−1)
= p_i0 · P_{i0 i1} · P_{i1 i2} ··· P_{i(n−1) in}.

3.3 Two-step Transition Probabilities

Define P_ij^(2) = Pr{Xn+2 = j | Xn = i}. By conditioning on the intermediate state,

P_ij^(2) = Σ_r P(Xn+2 = j | Xn+1 = r) P(Xn+1 = r | Xn = i) = Σ_r P_ir P_rj.

More generally, the n-step transition probabilities are

P_ij^(n) = Pr(Xm+n = j | Xm = i),   with P^(n) = P^n = (P_ij^(n)),

i.e. the n-step transition probability matrix is the nth power of P.

Chapman–Kolmogorov Equation
Statement: The n-step transition probability

P_ij^(n) = P(Xm+n = j | Xm = i)

is the probability that the system, being in state i at time m, reaches state j after n transitions, i.e. is in state j at time (m + n). The (m + n)-step transition probabilities satisfy

P_ij^(m+n) = Σ_r P_ir^(m) P_rj^(n).

Proof: First let r be any fixed value. We have

P(Xn+2 = j, Xn+1 = r | Xn = i) = Pr(Xn+2 = j | Xn+1 = r, Xn = i) Pr(Xn+1 = r | Xn = i)
= Pr(Xn+2 = j | Xn+1 = r) Pr(Xn+1 = r | Xn = i)
= P_ir^(1) P_rj^(1).

Summing over all values of r in the state space S,

P(Xn+2 = j | Xn = i) = Σ_r P_ir^(1) P_rj^(1) = P_ij^(2),

which establishes the Chapman–Kolmogorov equation for m = 1 and n = 1; the general case follows by induction.
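In matrix form the Chapman–Kolmogorov equation is just P^(m+n) = P^m P^n, which can be checked with any stochastic matrix (the 3-state matrix below is an illustrative choice, not from the text):

```python
import numpy as np

# A 3-state transition probability matrix (illustrative values).
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])
assert np.allclose(P.sum(axis=1), 1.0)       # each row sums to 1: stochastic

# Chapman-Kolmogorov in matrix form: P^(m+n) = P^m P^n.
m, n = 2, 3
lhs = np.linalg.matrix_power(P, m + n)
rhs = np.linalg.matrix_power(P, m) @ np.linalg.matrix_power(P, n)
print(np.allclose(lhs, rhs))
```

Note that every power of a stochastic matrix is again stochastic, so the n-step probabilities also have unit row sums.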

Chapter 4

Markov process with discrete state space

Consider, as an example, a Markov process {X(t) : t ≥ 0} with discrete state space {0, 1, 2, ...} in which arrivals occur at rate λ and each of the j individuals present departs at rate µ. Writing Pj(t) = Pr{X(t) = j}, the state probabilities satisfy the differential equations

P0′(t) = −λP0(t) + µP1(t),
Pj′(t) = λPj−1(t) − (λ + jµ)Pj(t) + µ(j + 1)Pj+1(t),   j ≥ 1.
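These equations can be integrated numerically; below is a minimal forward-Euler sketch, assuming the birth–death reading of the rates given above. The rate values, the truncation level N, and the step size are all arbitrary choices; the truncated top state is made reflecting so that total probability is conserved.

```python
import numpy as np

lam, mu = 2.0, 1.0           # illustrative birth and per-individual death rates
N = 60                        # truncate the state space at N (top state reflecting)
dt, T = 1e-3, 4.0

P = np.zeros(N + 1)
P[0] = 1.0                    # start in state 0

for _ in range(int(T / dt)):
    dP = np.empty_like(P)
    dP[0] = -lam * P[0] + mu * P[1]
    for j in range(1, N):
        dP[j] = lam * P[j - 1] - (lam + j * mu) * P[j] + mu * (j + 1) * P[j + 1]
    dP[N] = lam * P[N - 1] - N * mu * P[N]   # no births out of the truncated top
    P += dt * dP              # one forward-Euler step

mean = np.arange(N + 1) @ P
print(round(P.sum(), 6), round(mean, 3))
```

With these rates the mean satisfies m′(t) = λ − µ m(t), so m(t) = (λ/µ)(1 − e^(−µt)); the numerical mean approaches λ/µ = 2 while the probabilities continue to sum to 1.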
