
Homework 5 (Stats 620, Winter 2017)

Due Tuesday Feb 21, in class


Questions are derived from problems in Stochastic Processes by S. Ross.

1. Prove that if the number of states is n, and if state j is accessible from state i, then it is
accessible in n or fewer steps.
Solution: $j$ is accessible from $i$ if, for some $k \ge 0$, $P_{ij}^k > 0$. Now

$$P_{ij}^k = \sum \prod_{m=0}^{k-1} P_{i_m i_{m+1}},$$

where the sum is taken over all sequences $(i_0, i_1, \dots, i_k) \in \{1, \dots, n\}^{k+1}$ of states with $i_0 = i$
and $i_k = j$. Now, $P_{ij}^k > 0$ implies that at least one term is positive, say $\prod_{m=0}^{k-1} P_{i_m i_{m+1}} > 0$.
If a state $s$ occurs twice, say $i_a = i_b = s$ for $a < b$, and $(a, b) \ne (0, k)$, then the sequence
of states $(i_0, \dots, i_{a-1}, i_b, \dots, i_k)$ also has positive probability, without this repetition. Thus
the sequence $i_0, \dots, i_k$ can be reduced to another sequence, say $j_0, \dots, j_r$, in which no state
is repeated. This gives $r \le n - 1$, so if $i \ne j$, then $j$ is accessible in at most $n - 1$ steps. If $i = j$,
the repetition $i_0 = i_k = i$ cannot be removed; this allows the possibility of $r = n$ when $i = j$ and
there are no other repetitions.
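As a sanity check, the bound can be verified numerically: if $(P^k)_{ij} > 0$ for some $k \ge 1$, the smallest such $k$ is at most $n$. A minimal Python sketch (the 4-state matrix here is a hypothetical example, not from the text):

```python
import numpy as np

# Hypothetical 4-state chain: state 3 is reachable from 0 only via 0 -> 1 -> 2 -> 3
P = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 0.0, 1.0]])
n = len(P)

def steps_to_reach(i, j):
    """Smallest k >= 1 with (P^k)_{ij} > 0, or None if j is not accessible from i."""
    Pk = np.eye(n)
    for k in range(1, n + 1):        # by the argument above, k <= n suffices
        Pk = Pk @ P
        if Pk[i, j] > 0:
            return k
    return None

assert steps_to_reach(0, 3) == 3     # accessible, and within n = 4 steps
assert steps_to_reach(3, 0) is None  # state 3 is absorbing
```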

2. For states $i, j, k$ with $k \ne j$, let

$$P_{ij/k}^n = P\{X_n = j,\ X_\ell \ne k,\ \ell = 1, \dots, n-1 \mid X_0 = i\}.$$

(a) Explain in words what $P_{ij/k}^n$ represents.

(b) Prove that, for $i \ne j$, $P_{ij}^n = \sum_{k=0}^{n} P_{ii}^k P_{ij/i}^{n-k}$.

Solution:
(a) $P_{ij/k}^n$ is the probability of being in $j$ at time $n$, starting in $i$ at time 0, while avoiding $k$.
(b) Let $N$ be the (random) time at which $\{X_k\}$ is last in $i$ before time $n$. Then, since
$0 \le N \le n$,

$$P_{ij}^n = P[X_n = j \mid X_0 = i]$$
$$= \sum_{k=0}^{n} P[X_n = j,\ N = k \mid X_0 = i]$$
$$= \sum_{k=0}^{n} P[X_n = j,\ X_k = i,\ X_l \ne i : k+1 \le l \le n \mid X_0 = i]$$
$$= \sum_{k=0}^{n} P[X_n = j,\ X_l \ne i : k+1 \le l \le n \mid X_0 = i, X_k = i]\, P[X_k = i \mid X_0 = i].$$

Now, using the Markov property,

$$P_{ij}^n = \sum_{k=0}^{n} P[X_n = j,\ X_l \ne i : k+1 \le l \le n \mid X_k = i]\, P[X_k = i \mid X_0 = i]$$
$$= \sum_{k=0}^{n} P_{ij/i}^{n-k} P_{ii}^k.$$

Note that one can also calculate $P_{ij}^n = E[P[X_n = j \mid X_0 = i, N]]$, but this works out slightly
less easily.
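The decomposition can be checked numerically on a small chain. One way to compute the taboo probabilities $P_{ij/i}^m$ is to take one step out of $i$ and then restrict the dynamics to the states other than $i$; this is a sketch under that construction, with an arbitrary 3-state matrix as the example:

```python
import numpy as np

# Arbitrary 3-state transition matrix (hypothetical example)
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4]])
i, j, n = 0, 2, 6

lhs = np.linalg.matrix_power(P, n)[i, j]      # P^n_{ij}

# Taboo probability P^m_{ij/i}: reach j at time m while avoiding i at times 1..m-1
others = [s for s in range(len(P)) if s != i]
B = P[np.ix_(others, others)]                 # dynamics restricted to states != i
v = P[i, others]                              # first step out of i

def taboo(m):
    return (v @ np.linalg.matrix_power(B, m - 1))[others.index(j)]

# P^n_{ij} = sum_k P^k_{ii} P^{n-k}_{ij/i}; the k = n term vanishes since i != j
rhs = sum(np.linalg.matrix_power(P, k)[i, i] * taboo(n - k) for k in range(n))
assert abs(lhs - rhs) < 1e-12
```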

3. Show that the symmetric random walk is recurrent in two dimensions and transient in three
dimensions.
Comments: This asks you to extend the argument of Ross Example 4.2(A) to two and three
dimensions. You may use either of the definitions of the simple symmetric random walk in d
dimensions from the notes.
Solution: Define the symmetric random walk in $d$ dimensions, $X_n^{(d)} = (X_{n,1}, \dots, X_{n,d})$,
by

$$X_{n+1,j} = \begin{cases} X_{n,j} + 1 & \text{with probability } 0.5 \\ X_{n,j} - 1 & \text{with probability } 0.5 \end{cases}$$

i.e. each component of $X_n^{(d)}$ carries out an independent symmetric random walk in one
dimension. Let $P_n^{(d)} = P[X_n^{(d)} = X_0^{(d)}]$. From Example 4.2(A), $P_{2n}^{(1)} \sim 1/\sqrt{\pi n}$, and $P_{2n+1}^{(1)} = 0$.
By independence,

$$P_{2n}^{(d)} \sim \left(\frac{1}{\sqrt{\pi n}}\right)^d.$$

Therefore,

$$\sum_{n=1}^{\infty} P_n^{(2)} \sim \frac{1}{\pi} \sum_{n=1}^{\infty} \frac{1}{n} = \infty$$

and

$$\sum_{n=1}^{\infty} P_n^{(3)} \sim \frac{1}{\pi^{3/2}} \sum_{n=1}^{\infty} \frac{1}{n^{3/2}} < \infty.$$

Thus by Proposition 4.2.3 we get the recurrence of the two-dimensional symmetric random
walk, and the transience in three (or more) dimensions. Note that it would take a bit more
work to fully justify our implicit interchange of $\sim$ with infinite summation.
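The asymptotics can be illustrated numerically from the exact value $P_{2n}^{(1)} = \binom{2n}{n}/4^n$, computed stably by the recursion $P_{2n}^{(1)} = P_{2(n-1)}^{(1)} \cdot (2n-1)/(2n)$. A rough sketch:

```python
from math import pi, sqrt

p = 1.0            # p will hold P^{(1)}_{2n} = C(2n, n) / 4^n
s2 = s3 = 0.0      # partial sums of P^{(2)}_{2n} and P^{(3)}_{2n}
for n in range(1, 100001):
    p *= (2 * n - 1) / (2 * n)
    s2 += p ** 2
    s3 += p ** 3

# P^{(1)}_{2n} ~ 1 / sqrt(pi n): the ratio approaches 1
assert abs(p * sqrt(pi * 100000) - 1) < 1e-3

# d = 2: partial sums grow like (1/pi) log N, so they diverge;
# d = 3: the series converges to a finite limit
print(s2, s3)
```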

4. A transition probability matrix $P$ is said to be doubly stochastic if

$$\sum_i P_{ij} = 1 \quad \text{for all } j.$$

That is, the column sums all equal 1. If a doubly stochastic chain has $n$ states and is ergodic,
calculate its limiting probabilities.

Hint: guess the answer, and then show that your guess satisfies the required equations. Then,
by arguing for the uniqueness of the limiting distribution, you will have solved the problem.
Solution: Let $\pi = (1/n, \dots, 1/n)$. Then, since $\sum_j P_{ji} = 1$,

$$(\pi P)_i = \sum_{j=0}^{n-1} \pi_j P_{ji} = \sum_j P_{ji}/n = 1/n.$$

Thus, $\pi P = \pi$. The uniqueness of the limiting distribution for an er-
godic Markov chain implies that any solution to $\pi P = \pi$ with $\pi_i \ge 0$ and $\sum_i \pi_i = 1$ is the
required limiting distribution.
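A quick numerical illustration of the result: a convex combination of permutation matrices is automatically doubly stochastic (this construction is an assumption for the example, not from the text), and the uniform vector satisfies $\pi P = \pi$ for any such matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# A convex combination of permutation matrices is doubly stochastic
P = np.mean([np.eye(n)[rng.permutation(n)] for _ in range(10)], axis=0)
assert np.allclose(P.sum(axis=0), 1) and np.allclose(P.sum(axis=1), 1)

pi = np.full(n, 1.0 / n)
assert np.allclose(pi @ P, pi)   # the uniform distribution satisfies pi P = pi
```

Note that the check $\pi P = \pi$ holds for any doubly stochastic $P$; ergodicity is what upgrades $\pi$ to the unique limiting distribution.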

5. Jobs arrive at a processing center in accordance with a Poisson process with rate $\lambda$. However,
the center has waiting space for only N jobs and so an arriving job finding N others waiting
goes away. At most 1 job per day can be processed, and the processing of this job must
start at the beginning of the day. Thus, if there are any jobs waiting for processing at the
beginning of a day, then one of them is processed that day, and if no jobs are waiting at the
beginning of a day then no jobs are processed that day. Let Xn denote the number of jobs
at the center at the beginning of day n.
(a) Find the transition probabilities of the Markov chain $\{X_n, n \ge 0\}$.
(b) Is this chain ergodic? Explain.
(c) Write the equations for the stationary probabilities.
Instructions:
(a). Suppose that the arrival rate $\lambda$ has units $\text{day}^{-1}$.
(b). You may assert the property that a finite state, irreducible, aperiodic Markov chain is
ergodic (see Theorem 4.3.3, the discussion following this theorem, and Problem 4.14).
(c). There is no particularly elegant way to write these equations, and you are not expected
to solve them.
Solution: The state space is $\{0, 1, \dots, N\}$. Let $p(j) = \lambda^j e^{-\lambda}/j!$.
(a)

$$P_{0k} = \begin{cases} P[k \text{ arrivals}] = p(k) & \text{for } k = 0, \dots, N-1 \\ P[\ge N \text{ arrivals}] = \sum_{l=N}^{\infty} p(l) & \text{for } k = N \end{cases}$$

and, for $j \ge 1$,

$$P_{jk} = \begin{cases} P[k - j + 1 \text{ arrivals}] = p(k - j + 1) & \text{for } k = j-1, \dots, N-1 \\ P[\ge N - j + 1 \text{ arrivals}] = \sum_{l=N-j+1}^{\infty} p(l) & \text{for } k = N \end{cases}$$

(b) The Markov chain is irreducible, since $P_{0k} > 0$ for all $k$ and $P_{k0}^k = (p(0))^k > 0$. It is
aperiodic, since $P_{00} > 0$. A finite-state Markov chain which is irreducible and aperiodic is
ergodic (since it is not possible for all states to be transient, or for any state to be null
recurrent).
(c) For $j < N$, the identity $\pi_j = \sum_k \pi_k P_{kj}$ becomes $\pi_j = \sum_{k=0}^{j+1} \pi_k P_{kj}$. This can be rewritten
as a recursion:

$$\pi_{j+1} = \frac{\pi_j (1 - P_{jj}) - \sum_{k=0}^{j-1} \pi_k P_{kj}}{P_{j+1,j}}.$$

An alternative expression is given as follows. Since the long-run rate of entering $\{0, \dots, j\}$
must equal the rate of leaving $\{0, \dots, j\}$,

$$\pi_{j+1}\, p(0) = \pi_0 F(j) + \sum_{k=1}^{j} \pi_k F(j - k + 1),$$

where $F(j) = \sum_{k=j+1}^{\infty} p(k)$.
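To make the equations concrete, the transition matrix of part (a) can be assembled and the stationary equations solved numerically; the values $\lambda = 1$ and $N = 5$ below are arbitrary choices for illustration, not from the problem.

```python
import numpy as np
from math import exp, factorial

lam, N = 1.0, 5                          # assumed arrival rate (day^-1) and buffer size
p = lambda j: lam ** j * exp(-lam) / factorial(j)

P = np.zeros((N + 1, N + 1))
for j in range(N + 1):
    start = max(j - 1, 0)                # one waiting job (if any) is processed today
    for k in range(start, N):
        P[j, k] = p(k - start)           # exactly k - start arrivals during the day
    P[j, N] = 1 - P[j, start:N].sum()    # enough arrivals to fill the buffer
assert np.allclose(P.sum(axis=1), 1)

# Stationary distribution: left eigenvector of P for eigenvalue 1, normalized
w, V = np.linalg.eig(P.T)
pi = np.real(V[:, np.argmin(np.abs(w - 1))])
pi = pi / pi.sum()
assert np.allclose(pi @ P, pi)

# Check the cut identity: pi_{j+1} p(0) = pi_0 F(j) + sum_{k=1}^{j} pi_k F(j-k+1)
F = lambda j: 1 - sum(p(m) for m in range(j + 1))
for j in range(N):
    lhs = pi[j + 1] * p(0)
    rhs = pi[0] * F(j) + sum(pi[k] * F(j - k + 1) for k in range(1, j + 1))
    assert abs(lhs - rhs) < 1e-10
```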

Recommended reading:
Sections 4.1 through 4.3, excluding examples 4.3(A,B,C).

Supplementary exercises: 4.13, 4.14


These are optional, but recommended. Do not turn in solutions; they are in the back of the book.
