

Homework # 5 – 61008 - Solution

Problem 1
Assume that a fair coin is tossed repeatedly, with tosses being independent. We want
to determine the expected number of tosses necessary to first observe a head directly
followed by a tail. To do so, we define a Markov chain with states: S (start), H (head),
T (tail), HT (a head was followed by a tail over the last two tosses). This Markov chain
is illustrated in the following figure:

a. What is the expected number of tosses necessary to first observe a head
directly followed by a tail?
b. Assuming we just observed a head followed by a tail, what is the expected
number of additional tosses until we observe again a head followed by a tail?

Solution
a.
Define:
t_S = expected number of tosses to first observe a head directly followed by a tail, when starting at S.
t_H = expected number of tosses to first observe a head directly followed by a tail, when starting at H.
t_T = expected number of tosses to first observe a head directly followed by a tail, when starting at T.
The first-step equations are:

t_S = 1 + (1/2)·t_H + (1/2)·t_T
t_H = 1 + (1/2)·t_H
t_T = 1 + (1/2)·t_T + (1/2)·t_H

Solving: t_H = 2, t_T = 4, t_S = 4.

b.
After observing HT, the next toss leads to state H with probability 1/2 and to state T with probability 1/2, so:

t_HT = 1 + (1/2)·t_H + (1/2)·t_T = 1 + (1/2)·2 + (1/2)·4 = 4
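The first-step analysis above can be sanity-checked with a short Monte Carlo simulation (a sketch; the helper name `tosses_until_ht` is ours, not part of the homework):

```python
import random

def tosses_until_ht(rng):
    """Count fair-coin tosses until a head directly followed by a tail appears."""
    n = 0
    prev = None
    while True:
        toss = rng.choice("HT")
        n += 1
        if prev == "H" and toss == "T":
            return n
        prev = toss

rng = random.Random(0)
trials = 100_000
mean = sum(tosses_until_ht(rng) for _ in range(trials)) / trials
print(mean)  # close to the analytical answer t_S = 4
```

Because the chain restarts from the same H/T dynamics after each HT, the same estimate also answers part (b).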

Problem 3
Consider the following Markov chain:

For all parts of this problem, the process is in state 3 immediately before the first
transition.
a. Find the expected value of J, the number of transitions up to and
including the transition on which the process leaves state 3 for the last time.
b. Find the expected value of K, the number of transitions up to and including
the transition on which the process enters state 4 for the first time.
c. Find π_i^(10) for i = 1, 2, ..., 7, the probability that the process is in state i after 10
transitions.
d. Given that the process never enters state 4, find the π_i's as defined in part (c).

Solution
a.
The process is in state 3 immediately before the first transition. After leaving state 3
for the first time, the process cannot return to state 3. Hence J, which
represents the number of transitions up to and including the transition on which the
process leaves state 3 for the last (= first) time, is a geometric random variable with
success probability 0.6:

J ~ Geometric(0.6)  ⇒  E(J) = 1/0.6 = 10/6 ≈ 1.67
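A quick numeric check of E(J) (a sketch that assumes only what part (a) states: each transition leaves state 3 with probability 0.6):

```python
import random

rng = random.Random(1)

def transitions_to_leave(rng, p_leave=0.6):
    """Count transitions until the chain first leaves state 3."""
    n = 1
    while rng.random() >= p_leave:  # with probability 0.4 the chain stays in state 3
        n += 1
    return n

trials = 100_000
mean = sum(transitions_to_leave(rng) for _ in range(trials)) / trials
print(mean)  # close to 1/0.6 = 1.6667
```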

b.
There is a positive probability that we never enter state 4; i.e., P(K= ∞) > 0. Hence the
expected value of K is ∞.
. t3 -‫ואנו מעוניינים ב‬.4 ‫ למצב‬i ‫ = ממוצע (תוחלת) מספר הצעדים ממצב‬ti :‫נגדיר‬
7
t3  1   p3i ti
i 1

:‫משרשת מקרוב הנתונה מקבלים‬


2 1 3
t3  1  t 4  t 2  t7
10 10 10
)4 ‫ למצב‬4 ‫ (ממוצע מספר צעדים ממצב‬t4  0

.)‫ הוא אינסוף‬4 ‫ למצב‬2 ‫ הרי שממוצע מספר צעדים ממצב‬,4 ‫ למצב‬2 ‫ (מאחר ולא ניתן להגיע ממצב‬t2  

.)‫ הוא אינסוף‬4 ‫ למצב‬7 ‫ הרי שממוצע מספר צעדים ממצב‬,4 ‫ למצב‬7 ‫ (מאחר ולא ניתן להגיע ממצב‬t7  

:‫נקבל‬
2 1 3
t3  1  0     
10 10 10
E ( K )  t3   :‫כלומר‬

c.
The Markov chain has 3 different recurrent classes. The first recurrent class consists
of states {1, 2}, the second recurrent class consists of states {4, 5, 6}, and the third
recurrent class consists of state {7}. The probability of getting absorbed into the first
recurrent class starting from the transient state 3 is:

P(absorbed in 1st | absorbed) = (1/10) / (1/10 + 2/10 + 3/10) = 1/6

which is the probability of a transition into the first recurrent class given that there is a change
of state.
Similarly, the probability of absorption into the second class is:

P(absorbed in 2nd | absorbed) = (2/10) / (1/10 + 2/10 + 3/10) = 2/6 = 1/3

And the probability of absorption into the third class is:

P(absorbed in 3rd | absorbed) = (3/10) / (1/10 + 2/10 + 3/10) = 3/6 = 1/2

We solve the balance equations within each recurrent class, which give us the
probabilities conditioned on getting absorbed from state 3 into that recurrent class.
Conditioned on the process being absorbed in {1, 2}, the steady-state probabilities for
the 1st class are:

π_1 = (1/2)·π_2        ⇒  π_1 = 1/3
π_1 + π_2 = 1          ⇒  π_2 = 2/3
Conditioned on the process being absorbed in {7}, the steady-state probability for the
3rd class is: π_7 = 1.
Conditioned on the process being absorbed in {4, 5, 6}, the steady-state probabilities for
the 2nd class are:

(1/4)·π_4 = (1/2)·π_5                  ⇒  π_4 = 4/7
(3/4)·π_5 = (1/4)·π_4 + (1/2)·π_6      ⇒  π_5 = 2/7
π_4 + π_5 + π_6 = 1                    ⇒  π_6 = 1/7

The unconditional steady-state probabilities for all the states are:

π_1 = (1/3)·(1/6) = 1/18
π_2 = (2/3)·(1/6) = 2/18 = 1/9
π_4 = (4/7)·(1/3) = 4/21
π_5 = (2/7)·(1/3) = 2/21
π_6 = (1/7)·(1/3) = 1/21
π_7 = 1·(1/2) = 1/2
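As a numeric cross-check, the conditional steady-state vector of the {4, 5, 6} class can be computed directly. The original transition diagram is not reproduced in this document, so the 3×3 matrix below is an assumption: it is one within-class transition matrix consistent with the balance equations above.

```python
import numpy as np

# Assumed within-class transition matrix consistent with the balance equations
# (rows: from state 4, 5, 6; columns: to state 4, 5, 6).
P = np.array([
    [3/4, 1/4, 0.0],
    [1/2, 1/4, 1/4],
    [0.0, 1/2, 1/2],
])

# Stationary distribution: solve pi P = pi together with sum(pi) = 1.
A = np.vstack([P.T - np.eye(3), np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)  # approximately [4/7, 2/7, 1/7]
```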

d.
P(absorbed in 1st | absorbed in 1st or 3rd) = (1/10) / (1/10 + 3/10) = 1/4

P(absorbed in 3rd | absorbed in 1st or 3rd) = (3/10) / (1/10 + 3/10) = 3/4

The steady-state probabilities for all the states, conditioned on never entering state 4:

π_1 = (1/3)·(1/4) = 1/12
π_2 = (2/3)·(1/4) = 2/12 = 1/6
π_7 = 1·(3/4) = 3/4

Problem 4
Consider an M/M/1 system with λ = 16 arrivals / hour, and µ = 18 served / hour.
a. Compute: Ls, Wq.
b. Compute: Ws, Lq.
c. What is the probability that the system is empty?

Solution
a.
ρ = λ/µ = 16/18 = 8/9

Ls = ρ/(1 − ρ) = (8/9)/(1 − 8/9) = 8

Wq = Ws − 1/µ = Ls/λ − 1/µ = 8/16 − 1/18 = 0.444 hours

b.
Ws = Ls/λ = 8/16 = 0.5 hours

Lq = λ·Wq = 16·0.444 = 7.11

c.
P0 = 1 − ρ = 1 − 8/9 = 0.11

Problem 5
Consider an M/M/1/4 system with λ = 26 arrivals / hour, and µ = 15 served / hour.
Compute: Ls, Ws.

Solution

Equilibrium (balance) equations:

n = 0:  λ·P0 = µ·P1
n = 1:  λ·P0 + µ·P2 = (λ + µ)·P1
n = 2:  λ·P1 + µ·P3 = (λ + µ)·P2
n = 3:  λ·P2 + µ·P4 = (λ + µ)·P3
n = 4:  λ·P3 = µ·P4

Solving recursively in terms of P0:

n = 0:  P1 = (λ/µ)·P0
n = 1:  P2 = (λ/µ)^2·P0
n = 2:  P3 = (λ/µ)^3·P0
n = 3:  P4 = (λ/µ)^4·P0

Normalization:

P0·[1 + (λ/µ) + (λ/µ)^2 + (λ/µ)^3 + (λ/µ)^4] = 1
 


λ = 26, µ = 15  ⇒  ρ = λ/µ = 1.7333

P0 = (1 − ρ)/(1 − ρ^5) = 0.0501

P1 = ρ·P0 = 0.0868
P2 = ρ^2·P0 = 0.1504
P3 = ρ^3·P0 = 0.2607
P4 = ρ^4·P0 = 0.4520

λ_eff = λ·P(n ≤ 3) = λ·(1 − P4) = 26·(1 − 0.4520) = 14.249

Ls = 0·P0 + 1·P1 + 2·P2 + 3·P3 + 4·P4
Ls = 1·0.0868 + 2·0.1504 + 3·0.2607 + 4·0.4520 = 2.978

Ws = Ls/λ_eff = 2.978/14.249 = 0.209 hours
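The whole M/M/1/4 computation can be scripted (a sketch; variable names are ours):

```python
lam, mu = 26.0, 15.0
rho = lam / mu
K = 4  # system capacity

# Truncated geometric distribution for the number in system.
norm = sum(rho**n for n in range(K + 1))
P = [rho**n / norm for n in range(K + 1)]

Ls = sum(n * P[n] for n in range(K + 1))   # expected number in system
lam_eff = lam * (1 - P[K])                 # arrivals that actually enter
Ws = Ls / lam_eff                          # Little's law with lambda_eff

print(P[0], Ls, Ws)  # close to 0.0501, 2.978, 0.209
```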

Problem – Extra example


Oscar goes on a run each morning. When he leaves home in the morning for his run,
he is equally likely to go out either from the front door or the back door, and similarly
when he returns, he is equally likely to come back through the front door or the back
door. Oscar owns five pairs of running shoes. When he comes back from a run he
immediately takes off his shoes and leaves them at the door of entrance. In the
morning, if there are no shoes at the door he chooses to go out from, he runs barefoot.
a. Set up the "story" as a Markov chain. Define the states and the transition
probabilities.
b. Determine the long-run proportion of time that he runs barefoot.

Solution
a.
State: S = number of pairs of shoes at the front door, S ∈ {0, 1, ..., 5}.
Transition probabilities: for 1 ≤ S ≤ 4, S decreases by 1 with probability 1/4 (leave front,
return back), increases by 1 with probability 1/4 (leave back, return front), and stays the
same with probability 1/2. From S = 0, S increases with probability 1/4 and stays with
probability 3/4; symmetrically from S = 5.

b.
Balance equations (outflow = inflow for each state):

(1/4)·π_0 = (1/4)·π_1                  ⇒  π_0 = π_1
(1/2)·π_1 = (1/4)·π_0 + (1/4)·π_2      ⇒  π_1 = π_2
(1/2)·π_2 = (1/4)·π_1 + (1/4)·π_3      ⇒  π_2 = π_3
(1/2)·π_3 = (1/4)·π_2 + (1/4)·π_4      ⇒  π_3 = π_4
(1/2)·π_4 = (1/4)·π_3 + (1/4)·π_5      ⇒  π_4 = π_5
(1/4)·π_5 = (1/4)·π_4                  ⇒  π_5 = π_4

π_0 + π_1 + π_2 + π_3 + π_4 + π_5 = 1  ⇒  6·π_0 = 1  ⇒  π_0 = 1/6

P(barefoot) = (1/2)·π_0 + (1/2)·π_5 = 1/6

Note:
S = 0 means 0 pairs at the front door, and with probability 1/2 Oscar will leave through the
front door, where there are no shoes.
S = 5 means all 5 pairs at the front door, but with probability 1/2 Oscar will leave through the
back door, where there are no shoes.
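The 1/6 answer can also be checked by simulating Oscar's mornings directly (a sketch; variable names are ours):

```python
import random

rng = random.Random(42)
shoes_front = 2           # pairs at the front door (the back door has 5 - shoes_front)
mornings = 200_000
barefoot = 0

for _ in range(mornings):
    leave_front = rng.random() < 0.5
    pairs_at_exit = shoes_front if leave_front else 5 - shoes_front
    if pairs_at_exit == 0:
        barefoot += 1     # no shoes at the chosen door: runs barefoot, state unchanged
        continue
    # He takes a pair from the exit door and drops it at a random return door.
    return_front = rng.random() < 0.5
    shoes_front += (-1 if leave_front else 0) + (1 if return_front else 0)

print(barefoot / mornings)  # close to 1/6 = 0.1667
```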
