
E6204 Systems Analysis

Random Processes and Queuing Models

by

Dr Patricia Wong
Office: S1-B1b-58
Tel: 67904219

Random Processes:
1. Geometric random variable
2. Exponential random variable
3. Stochastic process
4. Poisson process
5. Discrete time Markov chain
6. Chapman-Kolmogorov equation
7. Continuous time Markov chain
8. Chapman-Kolmogorov equation
9. Kolmogorov differential equation
10. Birth-death process

Mean of X = E(X) = 1/p    (prove this, problem 1.1)    (2)

i.e., E(X) is the no. of trials required, on the average, to obtain the 1st success.
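The relation E(X) = 1/p in (2) can be checked by simulation. The following sketch (plain Python; the success probability p = 0.2 is a hypothetical value, not from the notes) counts Bernoulli trials up to and including the 1st success:

```python
import random

def geometric_trial(p, rng):
    """Count Bernoulli(p) trials up to and including the 1st success."""
    n = 1
    while rng.random() >= p:   # failure with probability 1 - p
        n += 1
    return n

rng = random.Random(0)
p = 0.2
samples = [geometric_trial(p, rng) for _ in range(100_000)]
mean = sum(samples) / len(samples)
print(mean)   # close to 1/p = 5
```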

What can X model?

1. No. of time units required to process a workpiece, to fix a tool, to move a part from one machine centre to another, to set up a machine.

2. No. of time units for which a tool remains in working condition before a failure occurs.

3. No. of time units the repair of a spoilt component would take.
Memoryless Property (or Markovian property)

P (X = m + n | X > m) = P (X = n),  m, n ≥ 1.    (3)

Proof. We have

P (X = m + n | X > m) = P (X = m + n and X > m) / P (X > m)

                      = P (X = m + n) / P (X > m).

Since

P (X > m) = (1 − p)^m    (prove this, problem 1.2)    (4)

it follows that

P (X = m + n | X > m) = (1 − p)^{m+n−1} p / (1 − p)^m

                      = (1 − p)^{n−1} p

                      = P (X = n).
Problem 1.3: Show that

P (X > m + n | X > m) = P (X > n),  m, n ≥ 1    (5)

P (X ≤ m + n | X > m) = P (X ≤ n),  m, n ≥ 1    (6)

Eg 1.1. Let X be the no. of operations a tool will last till breakdown (i.e., there are (X − 1) successful operations, and the tool breaks down at the X-th operation). X is a geometric r.v. with parameter p = 0.01.

(a) How long will the tool last, on the average?

    E(X) = 1/p = 1/0.01 = 100

    The tool will last an average of 100 operations.

(b) If the tool has already lasted 10 operations, what is the probability that it will last 30 more operations?

    P (X > 30 + 10 | X > 10) = P (X > 30)

                             = (1 − 0.01)^30 = 0.7397.
(c) It is known that the tool is still working after m (< 40) operations. Find the probability that it will still work after 40 operations.

    P (X > 40 | X > m) = P (X > 40 − m + m | X > m)

                       = P (X > 40 − m)

                       = (1 − 0.01)^{40−m} = 0.99^{40−m}.
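Part (b) can be verified numerically using the survival function P (X > n) = (1 − p)^n from (4); a small Python check:

```python
# Numeric check of Eg 1.1(b): P(X > 40 | X > 10) equals P(X > 30).
p = 0.01

def surv(n):
    """P(X > n) = (1 - p)^n for a geometric r.v., from (4)."""
    return (1 - p) ** n

lhs = surv(40) / surv(10)   # P(X > 40 and X > 10) / P(X > 10)
rhs = surv(30)
print(lhs, rhs)             # both 0.7397...
```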

Problem 1.4: In eg 1.1,

(i) what is the probability that the tool lasts at least 2 operations? (0.9801)

(ii) given that the tool has already lasted 98 operations, what is the probability that it will break only after 100 operations in all? (0.0099)

Problem 1.5: If G1 and G2 are independent geometric random variables with parameters p1 and p2 respectively, show that

P (G1 < G2) = p1(1 − p2) / (p1 + p2 − p1 p2).
What can X model?
1. Time between successive arrivals of raw work-
pieces
2. Processing time on a machine
3. Machine setup time
4. Material handling time
5. Message transmission time
6. Lifetime of a tool
7. Time to failure of a machine
8. Time required to repair a spoilt component

Memoryless Property (or Markovian property)

P (X > x + y | X > x) = P (X > y),  x, y ≥ 0.    (9)

P (X ≤ x + y | X > x) = P (X ≤ y),  x, y ≥ 0.    (10)

Proof. We shall prove (9).

P (X > x + y | X > x) = P (X > x + y and X > x) / P (X > x)

                      = P (X > x + y) / P (X > x)

                      = e^{−λ(x+y)} / e^{−λx} = e^{−λy} = P (X > y).

Problem 2.2: Show that (10) holds.

Eg 2.1. Let the time to failure, X, of a tool be exponentially distributed with rate 0.01 per hour. Find

(a) Mean time between failures (MTBF) = 1/λ = 1/0.01 = 100 hours

(b) P (X > 50) = e^{−λ(50)} = e^{−0.01(50)} = 0.6065.

(c) P (X > 50 | X > 40) = P (X > 10) = e^{−λ(10)} = e^{−0.01(10)} = 0.9048.

(d) P (X ≤ 15 | X > 10) = P (X ≤ 5) = 1 − e^{−λ(5)} = 1 − e^{−0.01(5)} = 0.04877.
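The four answers in Eg 2.1 follow from the survival function P (X > t) = e^{−λt}; a quick Python check:

```python
import math

lam = 0.01                    # failure rate per hour

def surv(t):
    """P(X > t) = exp(-lam * t) for X = EXP(lam)."""
    return math.exp(-lam * t)

mtbf = 1 / lam                # (a) 100 hours
b = surv(50)                  # (b) P(X > 50)
c = surv(50) / surv(40)       # (c) = P(X > 10) by memorylessness
d = 1 - surv(15) / surv(10)   # (d) = P(X <= 5) by memorylessness
print(mtbf, b, c, d)
```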

Problem 2.3: The time required to repair a machine is an exponentially distributed r.v. with mean 30 minutes.

(a) What is the probability that a repair time exceeds 30 min? (0.3679)

(b) Given that a repair duration exceeds 1 hour, what is the probability that the repair takes at least 1.5 hours? (0.3679)

Problem 2.4: Let X1 = EXP (λ1), X2 = EXP (λ2), and X1, X2 are independent.

(a) Show that P (X1 < X2) = λ1 / (λ1 + λ2).

(b) Let X = min{X1, X2}. Show that X = EXP (λ1 + λ2), i.e., X is exponentially distributed with parameter (λ1 + λ2).
It can be shown that

(a) the geometric r.v. is the unique discrete r.v. satisfying the memoryless property (3).

(b) the exponential r.v. is the unique continuous r.v. satisfying the memoryless property (9).

3. Stochastic Processes

A stochastic process is a collection of random variables {X(t) : t ∈ T }. Note that X(t) is a r.v. for each t ∈ T.

T is called the index set of the process, and each t ∈ T is called an index. Very often, the index t is interpreted as time.

If T is a countable set, then the stochastic process is called a discrete time process.

If T is an interval of ℝ, then the stochastic process is called a continuous time process.
The set of all values that X(t) may assume is called the state space, denoted by S.

S can be a discrete state space (countable). In this case, the stochastic process is called a chain. Eg, a machine has only 2 states, working or failed.

S can be a continuous state space. Eg, X(t) represents the distance covered by a robot arm in time t.

There are 4 different types of stochastic processes:

1. discrete time, discrete state space processes (discrete time chain)
2. discrete time, continuous state space processes
3. continuous time, discrete state space processes (continuous time chain)
4. continuous time, continuous state space processes
Eg 3.1. Let Xi denote the state of a certain machine at time ti, i = 1, 2, 3, · · · .

Let the state be designated 1 if the machine is down, and 0 if the machine is up.

Then, S = {0, 1}. The index set is T = {t1, t2, t3, · · ·}. The stochastic process {Xi : i = 1, 2, 3, · · ·} is a discrete time, discrete state space process (discrete time chain).

Eg 3.2. In Eg 3.1, if the state of the machine is examined at any instant of time, then the index set is T = [0, ∞). The stochastic process {X(t) : t ∈ T } is a continuous time, discrete state space process (continuous time chain).

Eg 3.3. Random arrival of parts. Let X(t) denote the no. of parts that have arrived in the time interval [0, t]. X(t) takes values in the set {0, 1, 2, · · ·}. The stochastic process {X(t) : t ∈ [0, ∞)} is a continuous time, discrete state space process (continuous time chain). This is a counting process. It is also a Poisson process.

Eg 3.4. An NC machine can process n (≥ 0) different types of parts. The different types of parts arrive randomly. The machine can be in (n + 1) states {0, 1, 2, · · · , n}, where state 0 means the machine is idle, and state i, 1 ≤ i ≤ n, means the machine is processing a part of type i. If X(t) specifies the state of the machine at time t, then the stochastic process {X(t) : t ≥ 0} is a continuous time, discrete state space process (continuous time chain).

4. Poisson Process

We shall start with a Poisson random variable, X(t):

P (X(t) = k) = e^{−λt} (λt)^k / k!,  k = 0, 1, 2, · · ·    (11)

A collection of Poisson random variables {X(t) : t ≥ 0} is a Poisson process. λ is a parameter called the rate of the Poisson process.
Problem 4.1: Show that the mean of the Poisson r.v. X(t) with probability mass function (pmf) given by (11) is λt.

The Poisson process is a special case of continuous time, discrete state space process (continuous time chain). It models

1. arrival of parts into a manufacturing facility
2. arrival of jobs into a computing centre
3. arrival of telephone calls in a telephone exchange
4. arrival of messages to be transmitted on a network

As an example, let X(t) be the no. of parts arriving in the time interval [0, t] at a machine. Then, it can be shown that

P (X(t) = k) = e^{−λt} (λt)^k / k!,  k = 0, 1, 2, · · ·

Thus, X(t) is a Poisson r.v. and {X(t) : t ≥ 0} is a Poisson process.
The probability of no arrival in the interval [0, t] is given by

P (X(t) = 0) = e^{−λt},  (t > 0).    (12)

Let T be the inter-arrival time, i.e., the time between successive arrivals. Then, (12) can be interpreted as

P (T > t) = e^{−λt},  (t > 0).    (13)

It follows that

P (T ≤ t) = 1 − e^{−λt},  (t > 0).    (14)

Hence, T is exponentially distributed with parameter λ (see (7)).

In summary,

• the inter-arrival time of a Poisson process with rate λ is exponentially distributed with rate λ.

• the no. of arrivals in the time interval [0, t] is a Poisson r.v. with mean λt.
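The summary can be illustrated by simulation: generate EXP(λ) inter-arrival times, count the arrivals in [0, t], and compare the sample mean count with λt. The rate and horizon below are hypothetical.

```python
import random

rng = random.Random(2)
lam, t = 3.0, 2.0        # hypothetical rate and time horizon

def arrivals(lam, t, rng):
    """No. of arrivals in [0, t] when inter-arrival times are EXP(lam)."""
    n, clock = 0, rng.expovariate(lam)
    while clock <= t:
        n += 1
        clock += rng.expovariate(lam)
    return n

runs = 50_000
mean_count = sum(arrivals(lam, t, rng) for _ in range(runs)) / runs
print(mean_count)        # close to lam * t = 6
```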
5. Discrete Time Markov Chain (DTMC)

This is a special case of discrete time, discrete state space process. Let T = {t0, t1, t2, · · ·} and let X(tn) be denoted by Xn, n = 0, 1, 2, · · · . The state space is S = {0, 1, 2, · · ·} ≡ N.

If Xm = k, we say the state of the stochastic process at time tm is k. X0 is the initial state of the system.

Definition A discrete time Markov chain (DTMC) is a discrete time stochastic process {Xn : n ∈ N } with countable state space S, such that the Markov property holds:

P (Xn = j | Xn−1 = i, Xn−2 = i2, Xn−3 = i3, · · · , X0 = in)

= P (Xn = j | Xn−1 = i).    (15)

Intuitively, (15) implies that given the current state (Xn−1 = i), the future (Xn = j) is independent of the past (Xn−2 = i2, Xn−3 = i3, · · · , X0 = in).
One-Step Transition Probability

pij ≡ pij (1) = P (Xn = j | Xn−1 = i),  n ≥ 1    (18)

Transition Probability Matrix (TPM)

P = [pij] = [ p00 p01 p02 · · · ]
            [ p10 p11 p12 · · · ]
            [ p20 p21 p22 · · · ]    (19)
            [  ·   ·   ·  · · · ]

0 ≤ pij ≤ 1,  i, j ∈ N;   Σ_j pij = 1,  i ∈ N    (20)

A DTMC can be described by a state transition diagram, which is a labeled directed graph where each node represents a particular state of the DTMC and a directed arc from a node i to another node j represents a one-step transition from state i to state j. This directed arc is labeled by pij. If pij = 0, no directed arc is included from node i to node j.
Sojourn Time

Given a state i of a DTMC {Xn ∈ S : n ∈ N }, the sojourn time Ti of the state i is the discrete r.v. that gives the no. of time steps for which the DTMC resides in state i before transiting to a different state.

We shall show that Ti is a geometric r.v. First, we note that

P (Ti > n) = P (Xn = i | Xn−1 = i) P (Xn−1 = i | Xn−2 = i) · · · P (X2 = i | X1 = i) P (X1 = i | X0 = i)

           = pii^n.    (21)

Now, using (21) we find

P (Ti = n) = Σ_{k=n}^∞ P (Ti = k) − Σ_{k=n+1}^∞ P (Ti = k)

           = P (Ti > n − 1) − P (Ti > n)

           = pii^{n−1} − pii^n = pii^{n−1} (1 − pii).

The above is the pmf of a geometric r.v. with success probability (1 − pii) (see (1)).

Thus, Ti is a geometric r.v. with success probability (1 − pii).

E(Ti) = mean no. of time steps the DTMC spends in state i = 1 / (1 − pii)

pii = 0 =⇒ E(Ti) = 1

pii = 1 =⇒ E(Ti) = ∞
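The geometric sojourn time can be simulated directly: stay in state i with probability pii each step and count the steps until leaving. With the hypothetical value pii = 0.75, the sample mean should approach 1/(1 − pii) = 4.

```python
import random

rng = random.Random(3)
p_ii = 0.75              # hypothetical self-loop probability

def sojourn(p_ii, rng):
    """No. of time steps spent in state i before leaving (at least 1)."""
    t = 1
    while rng.random() < p_ii:   # stay one more step with probability p_ii
        t += 1
    return t

n = 100_000
mean_T = sum(sojourn(p_ii, rng) for _ in range(n)) / n
print(mean_T)            # close to 1 / (1 - p_ii) = 4
```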

Eg 5.1. A machine has 2 states: working (state 0), and undergoing repair following a breakdown (state 1). The state of the machine is examined every hour.

The above system can be formulated as a homogeneous DTMC {Xn ∈ S : n ∈ N }, where S = {0, 1} and the time instants t0, t1, t2, · · · correspond to 0, 1h, 2h, · · · , respectively. Let

a = probability that the machine fails in a given hour

  = probability that the machine is in the failed condition by the next observation given that it is working in the current observation

  = p01, and

b = probability that the failed machine gets repaired in a given hour

  = probability that the machine gets repaired by the next observation given that it is in the failed condition in the current observation

  = p10.

The TPM is

P = [ 1 − a    a   ]
    [   b    1 − b ] ,    0 ≤ a, b ≤ 1.
Let there be a single job in the system. The job can be in (m + 1) states:

0 (if getting serviced by the AGV)
i (if getting serviced by Mi), 1 ≤ i ≤ m

The flow of the job can be described by a DTMC model. The TPM of the DTMC model is

P = [ q0 q1 q2 · · · qm ]
    [ 1  0  0  · · · 0  ]
    [ 1  0  0  · · · 0  ]
    [ ·  ·  ·  · · · ·  ]
    [ 1  0  0  · · · 0  ] .
The TPM is

P = [ (1 − a)^2   2a(1 − a)   a^2 ]
    [     0         1 − a      a  ]
    [     0           0        1  ] .

p00 = P (no machines fail in a given hour) = (1 − a)^2

p01 = P (exactly 1 machine fails in a given hour) = a(1 − a) + a(1 − a) = 2a(1 − a)

p02 = P (both machines fail in a given hour) = a^2

p10 = p20 = p21 = P (impossible event) = 0

p22 = P (sure event) = 1
pij (m + n) = P (Xm+n = j | X0 = i)

= Σ_{k∈S} P (Xm+n = j, Xm = k | X0 = i)    (total probability thm)

= Σ_{k∈S} P (Xm+n = j | Xm = k, X0 = i) P (Xm = k | X0 = i)

= Σ_{k∈S} P (Xm+n = j | Xm = k) P (Xm = k | X0 = i)    (Markov property)

= Σ_{k∈S} pik (m) pkj (n)    (22)

=⇒ P (m + n) = P (m) P (n).

Eqn (22) is one form of the C-K eqn. Using this eqn, we can compute n-step transition probabilities in terms of one-step transition probabilities.
Let P (n) = [pij (n)] be the matrix of n-step transition probabilities. Note that P (0) = I. Let m = n − 1 and n = 1 in (22). Then, we have

pij (n) = Σ_{k∈S} pik (n − 1) pkj (1)

or

P (n) = P (n − 1) · P,  n ≥ 1    (23)

where P = [pij ] is the one-step TPM. Hence,

P (n) = P^n,  n ≥ 1.    (24)

State Probabilities

Assume the state space is S = {0, 1, 2, · · ·}. Denote the state probabilities

pj (n) = P (Xn = j),  n = 0, 1, 2, · · · ,  j = 0, 1, 2, · · · .    (25)
By total probability thm,

pj (n) = P (Xn = j)

       = Σ_{i∈S} P (Xn = j, X0 = i)

       = Σ_{i∈S} P (Xn = j | X0 = i) P (X0 = i)

       = Σ_{i∈S} pi(0) pij (n),  j = 0, 1, 2, · · · .    (26)

Let

π(n) = [p0(n) p1(n) p2(n) · · ·].    (27)

π(n) gives the pmf of the r.v. Xn. π(0) gives the pmf of the initial r.v. X0. Eqn (26) gives

π(n) = π(0) P (n) = π(0) P^n,  n = 0, 1, 2, · · · .    (28)

Thus, knowing π(0) and P, we can compute π(n).
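Eqn (28) can be iterated directly. The sketch below applies π(n) = π(n − 1)P to the 2-state machine chain of Eg 5.1, with hypothetical values a = 0.1, b = 0.6 and initial state 0:

```python
# pi(n) = pi(0) P^n for the 2-state chain of Eg 5.1,
# with hypothetical a = 0.1, b = 0.6 and initial state 0.
a, b = 0.1, 0.6
P = [[1 - a, a], [b, 1 - b]]

def step(pi, P):
    """One step pi(n) = pi(n-1) P for a row vector pi."""
    return [sum(pi[i] * P[i][j] for i in range(len(pi)))
            for j in range(len(P[0]))]

pi = [1.0, 0.0]          # pi(0): start in state 0
for _ in range(50):
    pi = step(pi, P)
print(pi)                # approaches [b/(a+b), a/(a+b)] = [6/7, 1/7]
```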
Eg 6.1. Consider the 2-state DTMC of Eg 5.1. The TPM is

P = [ 1 − a    a   ]
    [   b    1 − b ] ,    0 ≤ a, b ≤ 1.

Case 1: a = b = 0

P = [ 1 0 ]
    [ 0 1 ] = I

Hence, P (n) = P^n = I. From (28) we have

π(n) = π(0) P^n = π(0) I = π(0).

If the initial state X0 = 0, then π(0) = [1 0] = π(n), n ≥ 0. Therefore, the system will remain forever in state 0.

If the initial state X0 = 1, then π(0) = [0 1] = π(n), n ≥ 0. Therefore, the system will remain forever in state 1.
Case 2: a = b = 1

P = [ 0 1 ]
    [ 1 0 ]

Clearly,

P^2 = I,  P^3 = P^2 P = I P = P,  P^4 = P^3 P = P P = I,  P^5 = P^4 P = P,  · · · .

Hence,

P^n = I if n is even,  P^n = P if n is odd.

Suppose the initial state is 0, i.e., π(0) = [1 0]. From (28) we have

π(n) = π(0) P^n = π(0) I = [1 0] if n is even,

π(n) = π(0) P^n = π(0) P = [0 1] if n is odd.

So the system will be in state 0 after an even no. of steps, and in state 1 after an odd no. of steps.
Case 3: a, b ∈ (0, 1). In this case, |1 − a − b| < 1.

P = [ 1 − a    a   ]
    [   b    1 − b ]

It can be shown that

P^n = [ (b + a x^n)/(a + b)   (a − a x^n)/(a + b) ]
      [ (b − b x^n)/(a + b)   (a + b x^n)/(a + b) ]    (29)

where x = 1 − a − b, |x| < 1. Note that

lim_{n→∞} P^n = [ b/(a + b)   a/(a + b) ]
                [ b/(a + b)   a/(a + b) ] .    (30)

If the initial state is 0, then π(0) = [1 0]. From (28) we have

π(n) = π(0) P^n = [ (b + a x^n)/(a + b)   (a − a x^n)/(a + b) ],  n ≥ 0.    (31)

If the initial state is 1, then π(0) = [0 1]. From (28) we have

π(n) = π(0) P^n = [ (b − b x^n)/(a + b)   (a + b x^n)/(a + b) ],  n ≥ 0.    (32)

As n → ∞, both (31) and (32) give

π(∞) ≡ lim_{n→∞} π(n) = [ b/(a + b)   a/(a + b) ].    (33)
(33) can also be obtained by using (30):

π(n) = π(0) P^n

=⇒ lim_{n→∞} π(n) = π(0) lim_{n→∞} P^n

                  = π(0) [ b/(a + b)   a/(a + b) ]
                         [ b/(a + b)   a/(a + b) ]

                  = [ b/(a + b)   a/(a + b) ]

for π(0) = [1 0], [0 1].

Note that lim_{n→∞} π(n) is independent of π(0).

The physical interpretation is that after sufficient time has elapsed, the DTMC settles down to a behavior whereby it visits state 0 for b/(a + b) of the time, and state 1 for a/(a + b) of the time. We say that the DTMC has reached steady state. The probabilities b/(a + b), a/(a + b) are called the steady-state probabilities or limiting probabilities of states 0 and 1, respectively.
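The closed form (29) can be checked against direct matrix powers; the values a = 0.3, b = 0.2 below are hypothetical:

```python
# Check the closed form (29) for P^n against direct matrix powers,
# with hypothetical a = 0.3, b = 0.2 (so x = 1 - a - b = 0.5).
a, b = 0.3, 0.2
x = 1 - a - b
P = [[1 - a, a], [b, 1 - b]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def closed_form(n):
    """P^n according to (29)."""
    d = a + b
    return [[(b + a * x**n) / d, (a - a * x**n) / d],
            [(b - b * x**n) / d, (a + b * x**n) / d]]

Pn = [[1.0, 0.0], [0.0, 1.0]]
for n in range(1, 8):
    Pn = matmul(Pn, P)
    C = closed_form(n)
    assert all(abs(Pn[i][j] - C[i][j]) < 1e-12
               for i in range(2) for j in range(2))
print("closed form (29) matches direct powers")
```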
Steady State Analysis

Assume that the following limiting probabilities exist and are unique:

lim_{n→∞} pj (n),  j = 0, 1, · · ·

lim_{n→∞} pij (n),  i, j = 0, 1, · · · .

It can be shown that for any j,

lim_{n→∞} pj (n) = lim_{n→∞} pij (n),  i = 0, 1, · · · .

Let

yj = lim_{n→∞} pj (n).

Hence,

yj = lim_{n→∞} pj (n) = lim_{n→∞} pij (n),  i = 0, 1, · · · .    (34)

Let

Y = lim_{n→∞} π(n) = [y0 y1 y2 · · ·].    (35)

Y is called the vector of steady-state or limiting probabilities. It follows from (24), (34) and (35) that

lim_{n→∞} P^n = lim_{n→∞} P (n) = [ Y ]
                                  [ Y ]
                                  [ · ]
                                  [ · ]
                                  [ Y ] .    (36)

Observe how the above results are illustrated by Eg 6.1.
From (28) we have

π(n) = π(0) P^n = π(0) P^{n−1} P = π(n − 1) P.

So

lim_{n→∞} π(n) = lim_{n→∞} π(n − 1) P

which, in view of (35), yields

Y = Y P.    (37)

Also, it is clear that

Σ_j yj = 1  and  yj ≥ 0,  j ≥ 0.    (38)

We can compute Y using (37) and (38).

Eg 6.2. Consider Eg 5.1 again. Here, Y = [y0 y1].

Case 1: a = b = 0

P = [ 1 0 ]
    [ 0 1 ] = I

(37): Y = Y P = Y I = Y (not useful)

(38): y0 + y1 = 1    (39)

Hence, there are an infinite no. of solutions

Y = [y0  1 − y0],  0 ≤ y0 ≤ 1.

If y0 = 0, then Y = [0 1]. If y0 = 1, then Y = [1 0].
Case 2: a = b = 1

P = [ 0 1 ]
    [ 1 0 ]

(37) gives

[y0 y1] = [y0 y1] [ 0 1 ]
                  [ 1 0 ]

=⇒ y0 = y1.    (40)

Coupling with (39), we get y0 = y1 = 0.5. Thus, Y = [0.5 0.5], which means that the DTMC visits each state, on the average, 50% of the time.

Case 3: a, b ∈ (0, 1).

P = [ 1 − a    a   ]
    [   b    1 − b ]

(37) provides

[y0 y1] = [y0 y1] [ 1 − a    a   ]
                  [   b    1 − b ]

=⇒ y0 a = y1 b.    (41)

Together with (39), we obtain y0 = b/(a + b), y1 = a/(a + b). Therefore,

Y = [ b/(a + b)   a/(a + b) ].

We interpret this as: on the average, the DTMC visits state 0 for b/(a + b) of the total no. of time steps, and state 1 for a/(a + b) of the total no. of time steps.
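For larger chains, (37) and (38) can be solved numerically; one simple route is power iteration, since π(n) = π(n − 1)P converges to Y for an ergodic chain. The 3-state TPM below is hypothetical (it is not the one in Problem 6.3):

```python
# Power iteration for Y = Y P with sum(y_j) = 1,
# on a hypothetical 3-state ergodic TPM.
P = [[0.5, 0.3, 0.2],
     [0.2, 0.6, 0.2],
     [0.1, 0.4, 0.5]]

Y = [1.0, 0.0, 0.0]      # any initial pmf works for an ergodic chain
for _ in range(500):
    Y = [sum(Y[i] * P[i][j] for i in range(3)) for j in range(3)]

residual = max(abs(sum(Y[i] * P[i][j] for i in range(3)) - Y[j])
               for j in range(3))
print(Y, residual)       # residual ~ 0, i.e. Y = Y P holds
```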

Problem 6.1: Compute the mean sojourn times of the states in the DTMCs discussed in Egs 5.2 and 5.3.

Problem 6.2: Extend Eg 5.1 to the case where the machine can be in 3 states: 0 (busy), 1 (under repair) and 2 (idle). Let

P = [ 0   0.5  0.5 ]
    [ 1    0    0  ]
    [ 1    0    0  ] .

Draw the state transition diagram, compute the mean sojourn times, and the matrix of n-step transition probabilities.

Problem 6.3: Determine the steady-state probabilities of a DTMC, given that

P = [ 0.6  0.2  0.2 ]
    [ 0.1  0.8  0.1 ]
    [ 0.6   0   0.4 ] .
Problem 6.4: In a certain manufacturing sys-
tem, there are 2 machines M1 and M2. M1 is
a fast and high precision machine whereas M2
is a slow and low precision machine. M2 is em-
ployed only when M1 is down, and it is assumed
that M2 does not fail. Assume that the process-
ing time of parts on M1, the processing time of
parts on M2, the time to failure of M1, and the
repair time of M1 are independent geometric
random variables with parameters p1, p2, f and
r, respectively. Identify a suitable state space
for the DTMC model of the above system and
compute the TPM. Investigate the steady-state
behavior of the DTMC.

7. Continuous Time Markov Chain (CTMC)

Definition A continuous time discrete state space process {X(t) : t ≥ 0} with state space S is called a continuous time Markov chain (CTMC) if the following Markov property (memoryless property) is satisfied: for all s ≥ 0, u ≥ 0, t ≥ s and i, j, x(u) ∈ S,

P (X(t) = j | X(s) = i, X(u) = x(u) for 0 ≤ u < s)

= P (X(t) = j | X(s) = i).    (42)

Given a CTMC {X(t) : t ≥ 0} with state space S, the probabilities

pij (s, t) = P (X(t) = j | X(s) = i)    (43)

are called the transition probabilities corresponding to states i, j ∈ S and s ≥ 0, t ≥ s. Note that

pij (s, s) = 1 if i = j,  and  pij (s, s) = 0 if i ≠ j.
Sojourn Time

Let Ti denote the sojourn time of the CTMC in state i, i.e., the time for which the CTMC resides in state i before transiting to a different state. We shall show that Ti is an exponential r.v.

First, by the Markov property, the future evolution of a CTMC is completely specified by its current state and is independent of the past trajectory of the process. In particular, it does not matter how long the process has been in its current state. Formally,

P (Ti > s + x | Ti > s) = h(x),  s, x ≥ 0    (45)

where h(x) is a function of x only. Now,

P (Ti > s + x | Ti > s) = P (Ti > s + x, Ti > s) / P (Ti > s)

                        = P (Ti > s + x) / P (Ti > s)

leads to

h(x) = P (Ti > s + x) / P (Ti > s).    (46)

Letting s = 0 in (46) and noting that P (Ti > 0) = 1, we find

h(x) = P (Ti > x).    (47)

Hence, it follows from (45) and (47) that

P (Ti > s + x | Ti > s) = P (Ti > x),  s, x ≥ 0.    (48)

(48) is equivalent to the memoryless property (9). Since the exponential r.v. is the unique memoryless continuous r.v., the sojourn time Ti is an exponential r.v., i.e., Ti = EXP (λi):

P (Ti > x) = e^{−λi x},  x ≥ 0.

If λi = 0, the state i is an absorbing state.

If λi = ∞, the state i is an instantaneous state.

If λi ∈ (0, ∞), the state i is a stable state.

Eg 7.1. An NC machine works on parts, one at a time, and takes an exponentially distributed amount of time to process each part. Each time the machine finishes processing a part, it is set up for the next part; the setup operation takes an exponentially distributed amount of time. The machine is also prone to failures, with time between failures exponentially distributed. The failed machine is immediately repaired, the repair time being exponentially distributed.

If all the r.v.'s involved are mutually independent, then we have a model with state space {0, 1, 2} where

0: machine being set up for the next part
1: machine processing a part
2: machine failed, being repaired

Let X(t), t ≥ 0 represent the state of the system at time t; then {X(t) : t ≥ 0} is a CTMC.

Let the rates of the r.v.'s be given by

s: setup rate
p: processing rate
f : failure rate
r: repair rate

The sojourn times T0, T1, T2 are exponentially distributed. Indeed,

T0 = EXP (s),  T2 = EXP (r)

since at state 0 only the setup operation takes place, and at state 2 only the repair operation takes place.

At state 1, there are 2 possibilities:

(i) the machine may complete processing the part
(ii) the machine may fail before it finishes processing the part

In either case, the CTMC will leave state 1. The sojourn time in state 1 depends on whichever of the 2 possibilities happens first.

Let X1 (= EXP (p)) be the processing time and X2 (= EXP (f )) be the time to failure. Then (refer to Problem 2.4(b)),

T1 = min{X1, X2} = EXP (p + f ).

8. Chapman-Kolmogorov Equation (C-K eqn)

Consider a CTMC {X(t) : t ≥ 0} with state space {0, 1, 2, · · ·}. We use i, j, k to denote states and s, u, t to denote time parameters. For 0 ≤ s ≤ t, consider the transition probabilities

pij (s, t) = P (X(t) = j | X(s) = i).

The transition probabilities can be expressed in matrix form as

H(s, t) = [pij (s, t)].

Note that H(s, s) = I. Now, for 0 ≤ s ≤ u ≤ t,

pij (s, t) = P (X(t) = j | X(s) = i)

= Σ_{k∈S} P (X(t) = j, X(u) = k | X(s) = i)    (total probability thm)

= Σ_{k∈S} P (X(t) = j | X(u) = k, X(s) = i) P (X(u) = k | X(s) = i)

= Σ_{k∈S} P (X(t) = j | X(u) = k) P (X(u) = k | X(s) = i)    (Markov property)

= Σ_{k∈S} pik (s, u) pkj (u, t).    (49)

In matrix form, (49) is the same as

H(s, t) = H(s, u) H(u, t),  0 ≤ s ≤ u ≤ t.    (50)

These are the Chapman-Kolmogorov equations for a CTMC.

9. Kolmogorov Differential Equation

Let h be an infinitesimal increment in time. In (50), put t = t + h and u = t. Then, we have

H(s, t + h) = H(s, t) H(t, t + h)

=⇒ H(s, t + h) − H(s, t) = H(s, t) H(t, t + h) − H(s, t)

=⇒ H(s, t + h) − H(s, t) = H(s, t) [H(t, t + h) − I]

=⇒ lim_{h→0} [H(s, t + h) − H(s, t)] / h = H(s, t) lim_{h→0} [H(t, t + h) − I] / h.

Let

Q(t) = lim_{h→0} [H(t, t + h) − I] / h.    (51)

We have the partial differential equation

∂H(s, t)/∂t = H(s, t) Q(t),  0 ≤ s ≤ t    (52)

with initial condition H(s, s) = I.

(52) is called the forward Kolmogorov equation. Q(t) is called the infinitesimal generator of the CTMC and is also called the transition rate matrix.
Now in (50), take u = s + h. Then, we have

H(s, t) = H(s, s + h) H(s + h, t)

=⇒ H(s, t) − H(s + h, t) = H(s, s + h) H(s + h, t) − H(s + h, t)

=⇒ H(s, t) − H(s + h, t) = [H(s, s + h) − I] H(s + h, t)

=⇒ lim_{h→0} [H(s, t) − H(s + h, t)] / h = lim_{h→0} ([H(s, s + h) − I] / h) H(s + h, t).

The left side equals −∂H(s, t)/∂s, while the right side tends to Q(s) H(s, t). This gives the partial differential equation

∂H(s, t)/∂s = −Q(s) H(s, t),  0 ≤ s ≤ t    (53)

with initial condition H(s, s) = I. (53) is called the backward Kolmogorov equation.
Interpretation of Q(t)

Let Q(t) = [qij (t)]. From (51) we have

qii(t) = lim_{h→0} [pii(t, t + h) − 1] / h    (54)

and

qij (t) = lim_{h→0} pij (t, t + h) / h,  i ≠ j.    (55)

Given that the CTMC is in state i at time t, we may interpret qij (t) as the rate at which the CTMC moves from state i to state j (i ≠ j) in the time interval (t, t + h).

Since

Σ_j pij (s, t) = 1,  for all i and s ≤ t,

adding (54) and (55) provides

Σ_j qij (t) = 0,  for all i.    (56)

Hence, the sum of the elements of any row of Q(t) is zero.
Homogeneous CTMC

In a homogeneous CTMC, the transition probabilities pij (x, x + t) do not depend on x and depend only on t. Hence, we write

pij (t) = pij (x, x + t),  for all x

H(t) = H(x, x + t) = [pij (t)],  for all x

qij = qij (x),  for all x

Q = Q(x) = [qij ],  for all x.

The Chapman-Kolmogorov eqn (50) becomes

H(x + t) = H(x) H(t).    (57)

The forward Kolmogorov eqn (52) becomes

dH(t)/dt = H(t) Q,  H(0) = I.    (58)

The solution of (58) is

H(t) = exp(Qt)    (59)

     = I + Qt + Q^2 t^2/2! + Q^3 t^3/3! + · · ·    (60)
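The series (60) gives a direct (if numerically naive) way to evaluate H(t). The sketch below truncates the series for a hypothetical 2-state generator with failure rate f and repair rate r; each row of the result sums to 1, as a transition matrix must.

```python
# H(t) = exp(Qt) via the truncated series (60), for a hypothetical
# 2-state generator Q with failure rate f and repair rate r.
f, r = 0.5, 2.0
Q = [[-f, f], [r, -r]]   # rows sum to 0, as required by (56)
t = 1.5

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

H = [[1.0, 0.0], [0.0, 1.0]]        # running sum, starts at I
term = [[1.0, 0.0], [0.0, 1.0]]     # term_k = (Qt)^k / k!
for k in range(1, 41):
    term = matmul(term, [[q * t / k for q in row] for row in Q])
    H = [[H[i][j] + term[i][j] for j in range(2)] for i in range(2)]

print(H)                 # each row sums to 1
```

In practice one would use a scaling-and-squaring routine (e.g. scipy.linalg.expm) rather than the raw series, which loses accuracy for large ‖Qt‖.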
State Probabilities

Assume the state space is S = {0, 1, 2, · · ·}. Denote

pj (t) = P (X(t) = j),  j = 0, 1, 2, · · · .    (61)

Let

π(t) = [p0(t) p1(t) p2(t) · · ·].    (62)

By total probability thm,

pj (t) = P (X(t) = j)

       = Σ_{i∈S} P (X(t) = j, X(0) = i)

       = Σ_{i∈S} P (X(t) = j | X(0) = i) P (X(0) = i)

       = Σ_{i∈S} pi(0) pij (t),  j = 0, 1, 2, · · · .

In matrix form,

π(t) = π(0) H(t) = π(0) exp(Qt).    (63)

Differentiating (63), we get

dπ(t)/dt = π(0) exp(Qt) Q = π(t) Q.    (64)
Steady State Analysis

Under steady state, the state probabilities assume limiting values

πj = lim_{t→∞} pj (t),  j = 0, 1, · · ·

The probability πj is interpreted as the long-run proportion of residence time in state j.

The probabilities πj , j = 0, 1, 2, · · · satisfy the following properties:

1. Σ_j πj = 1

2. These probabilities are unique, i.e., independent of the initial state.

The probabilities

π = [π0 π1 π2 · · ·]

are said to constitute the steady-state probability distribution of the CTMC, and the CTMC is referred to as an ergodic CTMC.
To obtain the steady-state probability vector π, set

dπ(t)/dt = 0

as these probabilities are constant. In view of (64), this is

π(t) Q = 0.

Hence, π can be obtained as the solution of

πQ = 0,   Σ_j πj = 1,   πj ≥ 0,  j = 0, 1, 2, · · ·    (65)

An individual term of πQ = 0 will be given by

qjj πj + Σ_{k≠j} qkj πk = 0,  j = 0, 1, 2, · · ·    (66)

From (56), we have

Σ_k qjk = 0,  for all j.

Therefore, substituting qjj = − Σ_{k≠j} qjk into (66), we can write

πj ( Σ_{k≠j} qjk ) = Σ_{k≠j} qkj πk.    (67)
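System (65) can also be solved numerically by uniformization: for any Λ ≥ max_j |qjj|, the matrix P = I + Q/Λ is a stochastic matrix with the same stationary vector as the CTMC (πP = π exactly when πQ = 0), so the DTMC power iteration applies. The 3-state generator below is hypothetical:

```python
# Solving pi Q = 0, sum(pi) = 1 by uniformization on a
# hypothetical 3-state generator (rows of Q sum to 0).
Q = [[-3.0,  2.0,  1.0],
     [ 1.0, -1.5,  0.5],
     [ 2.0,  2.0, -4.0]]

Lam = max(abs(Q[i][i]) for i in range(3)) + 1.0
P = [[(1.0 if i == j else 0.0) + Q[i][j] / Lam for j in range(3)]
     for i in range(3)]            # stochastic, same stationary vector as Q

pi = [1 / 3] * 3
for _ in range(2000):
    pi = [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]

residual = max(abs(sum(pi[i] * Q[i][j] for i in range(3))) for j in range(3))
print(pi, residual)                # residual ~ 0, i.e. pi Q = 0
```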
Eg 9.1. Let us consider Eg 7.1.

0: machine being set up for the next part
1: machine processing a part
2: machine failed, being repaired

Let X(t), t ≥ 0 represent the state of the system at time t; then {X(t) : t ≥ 0} is a CTMC.

s: setup rate
p: processing rate
f : failure rate
r: repair rate

Case 1: Assume the system follows the resume policy, i.e., in the event of machine failure and subsequent repair, the machine resumes processing of the unfinished part.
To find the steady-state probability vector π = [π0 π1 π2] using (65):

πQ = 0 =⇒   π0 s = π1 p,
            π1 (p + f ) = π0 s + π2 r,
            π2 r = π1 f,

and  π0 + π1 + π2 = 1.

Note that the first 3 equations above are also the rate balance equations (67), which can be obtained from the state transition diagram. Upon solving, we get (show it, Problem 9.1)

π0 = pr / (pr + rs + fs),   π1 = rs / (pr + rs + fs),   π2 = fs / (pr + rs + fs).    (69)

Suppose

s: setup rate = 20 per hour
p: processing rate = 4 per hour
f : failure rate = 0.05 per hour
r: repair rate = 1 per hour
then π0 = 0.16, π1 = 0.8, π2 = 0.04. This means that, on the average, the machine gets set up for an operation 16% of the total time, processes parts for 80% of the total time, and is down for 4% of the total time. Thus, the efficiency of the system is 80%.

Average production rate R

= no. of parts produced per hour

= π1 p    (70)

= rsp / (pr + rs + fs)

= [ 1/s + (1 + f/r)(1/p) ]^{−1} = 3.2.

Steady-state availability = (1 + f/r)^{−1}    (71)

Steady-state availability is the probability that the system is functioning in a productive way.
Mean completion time = (1 + f/r)(1/p)    (72)

= mean processing time / steady-state availability

Case 2: Assume the system follows the discard policy, i.e., in the event of machine failure and subsequent repair, the machine discards the unfinished part and takes up a fresh part for processing. This policy entails a fresh setup after each repair.
The steady-state probability vector π′ = [π′0 π′1 π′2] is found to be (Problem 9.2)

π′0 = (pr + fr) / (pr + fr + rs + fs),   π′1 = rs / (pr + fr + rs + fs),

π′2 = fs / (pr + fr + rs + fs).    (74)

Using the same values of p, r, s, f as in Case 1, we obtain π′0 = 0.1617, π′1 = 0.7984, π′2 = 0.0399. Thus, the efficiency of the system is 79.84%.

Average production rate R

= no. of parts produced per hour

= π′1 p    (75)

= rsp / (pr + fr + rs + fs)

= [ 1/s + (1 + f/s + f/r)(1/p) ]^{−1} ≈ 3.19.

It is clear that the resume policy leads to better throughput than the discard policy, which is quite intuitive.
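The comparison of the two policies can be reproduced from the closed forms (69) and (74):

```python
# Production rates under the resume policy (69) and the
# discard policy (74), with the rates used in Eg 9.1.
s, p, f, r = 20.0, 4.0, 0.05, 1.0

d_resume = p * r + r * s + f * s            # denominator in (69)
R_resume = (r * s / d_resume) * p           # = pi_1 * p

d_discard = p * r + f * r + r * s + f * s   # denominator in (74)
R_discard = (r * s / d_discard) * p         # = pi'_1 * p

print(R_resume, R_discard)   # 3.2 vs about 3.19: resume is better
```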
Problem 9.3: Examine the single-machine system (see Eg 9.1) with the resume policy, but processing batches of parts. Assume that after each setup operation, the machine processes exactly n parts, then it is set up again for the next batch of n parts. Each setup operation has a duration of EXP (s) and the processing time of an individual part is EXP (p). The machine may fail during processing, and after repair, it resumes processing. The time to failure is EXP (f ) and the repair time is EXP (r). All the r.v.'s involved are mutually independent, and the machine does not fail during setup. The state space of the CTMC model is given by

S = {0, 1, 2, · · · , n, n + 1, n + 2, · · · , 2n}

where

0 : machine being set up for the next batch of parts

i (1 ≤ i ≤ n) : machine processing the ith part of a batch

n + i (1 ≤ i ≤ n) : machine failed while processing the ith part of a batch, machine being repaired

(a) Draw the state transition diagram of the CTMC model.
(b) Give the transition rate matrix Q.
(c) Write down the rate balance equations.
(d) Find πi, 0 ≤ i ≤ 2n.
(e) Find the production rate R.

Problem 9.4: In Eg 9.1, it is determined that when the machine fails, the semi-finished part can be reworked with probability q or must be discarded with probability (1 − q). So, with probability q we resume the processing of the part, whereas with probability (1 − q), we discard the part. For this CTMC model, do (a)–(e) of Problem 9.3.

Problem 9.5: Assume the probabilistic resume and discard policies of Problem 9.4 for the case in Problem 9.3. For this CTMC model, do (a)–(e) of Problem 9.3.
Problem 10.1: Write down the transition rate matrix Q for a finite BD process with state space {0, 1, 2, · · · , N }.

Steady State Analysis

Let πk denote the steady state probability of state k. The rate balance equations of a BD process are

λ0 π0 = µ1 π1    (76)

(λk + µk ) πk = λk−1 πk−1 + µk+1 πk+1,  k ≥ 1.    (77)

Using (77) recursively and also (76), we get

λk πk − µk+1 πk+1 = λk−1 πk−1 − µk πk

                  = λk−2 πk−2 − µk−1 πk−1

                  = λk−3 πk−3 − µk−2 πk−2

                  = · · · = λ0 π0 − µ1 π1

                  = 0.
Hence, it follows that

πk = (λk−1/µk) πk−1,  k ≥ 1

   = (λk−1 λk−2)/(µk µk−1) πk−2

   = (λk−1 λk−2 λk−3)/(µk µk−1 µk−2) πk−3

   = · · · = (λk−1 · · · λ1 λ0)/(µk · · · µ2 µ1) π0

or

πk = π0 Π_{i=0}^{k−1} λi/µi+1,  k ≥ 1.    (78)

Since

Σ_j πj = 1,

using (78) we get

π0 + π0 Σ_{k≥1} Π_{i=0}^{k−1} λi/µi+1 = 1

=⇒ π0 = [ 1 + Σ_{k≥1} Π_{i=0}^{k−1} λi/µi+1 ]^{−1}.    (79)
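(78) and (79) translate directly into code for a finite BD process. The sketch below uses constant hypothetical rates λk = λ and µk = µ on states 0, . . . , N (an M/M/1/N-type chain):

```python
# Steady-state probabilities of a finite birth-death process via
# (78)-(79), with hypothetical constant rates on states 0..N.
lam, mu, N = 2.0, 3.0, 10

# prods[k] = product over i = 0..k-1 of lam_i / mu_{i+1}
prods = [1.0]
for k in range(1, N + 1):
    prods.append(prods[-1] * lam / mu)

pi0 = 1.0 / sum(prods)            # (79), finite state space
pi = [pi0 * c for c in prods]     # (78)

print(pi)
```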
