DATA NETWORKS
Dimitri Bertsekas, Massachusetts Institute of Technology
Robert Gallager, Massachusetts Institute of Technology
Prentice-Hall International, Inc.

This edition may be sold only in those countries to which it is consigned by Prentice-Hall International. It is not to be re-exported and is not for sale in the U.S.A., Mexico, or Canada. © 1987 by Prentice-Hall, Inc., A Division of Simon & Schuster, Englewood Cliffs, NJ 07632. All rights reserved. No part of this book may be reproduced, in any form or by any means, without permission in writing from the publisher. Reproduced with publisher's permission. Printed in the United States of America. ISBN 0-13-196981-1.

Delay Models in Data Networks    Chap. 3

APPENDIX A: Review of Markov Chain Theory

The purpose of this appendix is to provide a brief summary of the results we need from discrete- and continuous-time Markov chain theory. We refer the reader to books on stochastic processes for detailed accounts.

3A.1 Discrete-Time Markov Chains

Consider a discrete-time stochastic process {X_n | n = 0, 1, 2, ...} that takes values from the set of nonnegative integers, so the states that the process can be in are i = 0, 1, .... The process is said to be a Markov chain if whenever it is in state i, there is a fixed probability P_ij that it will next be in state j, regardless of the process history prior to arriving at i. That is, for all n ≥ 0, i_{n-1}, ..., i_0, i, j,

    P_ij = P{X_{n+1} = j | X_n = i, X_{n-1} = i_{n-1}, ..., X_0 = i_0} = P{X_{n+1} = j | X_n = i}.

We refer to P_ij as the transition probabilities.
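The defining property above, that the next state is drawn using only the current state and the matrix P, can be illustrated with a short simulation. The following is a minimal sketch; the function names and the 3-state matrix are our own illustrative choices, not from the book:

```python
import random

def step(P, i, rng):
    """Draw the next state from row i of the transition matrix P."""
    u = rng.random()
    cum = 0.0
    for j, p_ij in enumerate(P[i]):
        cum += p_ij
        if u < cum:
            return j
    return len(P) - 1  # guard against floating-point round-off

def simulate(P, x0, n_steps, seed=0):
    """Simulate n_steps transitions of a discrete-time Markov chain.

    Only the current state (states[-1]) is consulted at each step:
    this is exactly the Markov property.
    """
    rng = random.Random(seed)
    states = [x0]
    for _ in range(n_steps):
        states.append(step(P, states[-1], rng))
    return states

# A toy 3-state chain; each row sums to 1, as required of transition probabilities.
P = [[0.50, 0.50, 0.00],
     [0.25, 0.50, 0.25],
     [0.00, 0.50, 0.50]]
path = simulate(P, x0=0, n_steps=1000)
```

Note that transitions with P_ij = 0 (such as 0 to 2 above) never occur in the simulated path, while the long-run visit frequencies approach the stationary distribution discussed next.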
They must satisfy

    P_ij ≥ 0,    Σ_{j=0}^∞ P_ij = 1,    i = 0, 1, ...

The corresponding transition probability matrix is denoted

        | P_00  P_01  P_02  ... |
    P = | P_10  P_11  P_12  ... |
        |  ...   ...   ...      |

Consider the n-step transition probabilities

    P_ij^n = P{X_{n+m} = j | X_m = i},    n ≥ 0, i, j ≥ 0.

The Chapman-Kolmogorov equations provide a method for calculating P_ij^n. They are given by

    P_ij^{n+m} = Σ_{k=0}^∞ P_ik^n P_kj^m,    n, m ≥ 0, i, j ≥ 0.

From these equations, we see that P_ij^n are the elements of the matrix P^n (the transition probability matrix P raised to the nth power). We say that two states i and j communicate if for some n and n', we have P_ij^n > 0 and P_ji^{n'} > 0. If all states communicate, we say that the Markov chain is irreducible. We say that the Markov chain is aperiodic if for each state i there is no integer d ≥ 2 such that P_ii^n = 0 except when n is a multiple of d. A probability distribution {p_j | j ≥ 0} is said to be a stationary distribution for the Markov chain if

    p_j = Σ_{i=0}^∞ p_i P_ij,    j ≥ 0.    (3A.1)

We will restrict attention to irreducible and aperiodic Markov chains, since this is the only type we will encounter. For such a chain, denote

    p_j = lim_{n→∞} P_ij^n,    j ≥ 0.

It can be shown that the limit above exists, and when p_j > 0, then 1/p_j equals the mean recurrence time of j, i.e., the expected number of transitions between two successive visits to state j. If p_j = 0, the mean recurrence time is infinite. Another interpretation is that p_j represents the proportion of time the process visits j on the average. The following result will be of primary interest:

Theorem. In an irreducible, aperiodic Markov chain, there are two possibilities:

1. p_j = 0 for all j ≥ 0, in which case the chain has no stationary distribution.
2. p_j > 0 for all j ≥ 0, in which case {p_j | j ≥ 0} is the unique stationary distribution of the chain.

A typical example of case 1 above is an M/M/1 queueing system where the arrival rate λ exceeds the service rate μ. In case 2, there arises the issue of characterizing the stationary distribution {p_j | j ≥ 0}. For queueing systems, the following technique is often useful. Multiplying the equation Σ_{i=0}^∞ P_ji = 1 by p_j and using Eq.
(3A.1), we have

    p_j Σ_{i=0}^∞ P_ji = Σ_{i=0}^∞ p_i P_ij,    j ≥ 0.    (3A.2)

These equations are known as the global balance equations. They state that, at equilibrium, the probability of a transition out of j (left side of Eq. (3A.2)) equals the probability of a transition into j (right side of Eq. (3A.2)). The global balance equations can be generalized to apply to an entire set of states. Consider a subset of states S. By adding Eq. (3A.2) over all j ∈ S, we obtain

    Σ_{j∈S} p_j Σ_{i∉S} P_ji = Σ_{j∈S} Σ_{i∉S} p_i P_ij,    (3A.3)

which means that the probability of a transition out of the set of states S equals the probability of a transition into S.

An intuitive explanation of these equations is based on the fact that when the Markov chain is irreducible, the state (with probability one) will return to the set S infinitely many times. Therefore, for each transition out of S there must be (with probability one) a reverse transition into S at some later time. As a result, the proportion of transitions out of S (over all transitions) equals the proportion of transitions into S. This is precisely the meaning of the global balance equations (3A.3).

3A.2 Detailed Balance Equations

As an application of the global balance equations, consider a Markov chain typical of queueing systems and, more generally, birth-death systems, where two successive states can only differ by unity, as in Fig. 3A.1. We assume that P_{i,i+1} > 0 and P_{i+1,i} > 0 for all i. This is a necessary and sufficient condition for the chain to be irreducible. Consider the sets of states

    S = {0, 1, ..., n}.

Application of Eq. (3A.3) yields

    p_n P_{n,n+1} = p_{n+1} P_{n+1,n},    n = 0, 1, ...    (3A.4)

i.e., in steady state, the probability of a transition from n to n+1 equals the probability of a transition from n+1 to n. These equations can be very useful in computing the stationary distribution {p_j | j ≥ 0} (see Sections 3.3 and 3.4).
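For a birth-death chain, Eq. (3A.4) determines the stationary distribution by a simple recursion: p_{n+1} = p_n P_{n,n+1} / P_{n+1,n}, after which the probabilities are normalized to sum to one. A sketch for a truncated chain follows; the function name and the constant-ratio example are our own assumptions, not the book's:

```python
def birth_death_stationary(up, down):
    """Solve the detailed balance recursion of Eq. (3A.4):
    p[n+1] = p[n] * up[n] / down[n], then normalize so the p[n] sum to 1.

    up[n] plays the role of P_{n,n+1}; down[n] plays the role of P_{n+1,n}.
    """
    p = [1.0]  # unnormalized; p[0] can be set arbitrarily
    for u, d in zip(up, down):
        p.append(p[-1] * u / d)
    total = sum(p)
    return [x / total for x in p]

# Constant ratio up/down = 0.5 on an 11-state truncated chain:
# the resulting distribution is (truncated) geometric, as in an M/M/1-type system.
up = [0.3] * 10
down = [0.6] * 10
p = birth_death_stationary(up, down)
```

The recursion makes visible why detailed balance is so convenient: each p_{n+1} follows from p_n alone, with no need to solve the full linear system (3A.2).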
Figure 3A.1 Transition probability diagram for a birth-death process.

Equation (3A.4) is a special case of the equations

    p_i P_ij = p_j P_ji,    i, j ≥ 0,    (3A.5)

known as the detailed balance equations. These equations need not hold in any given Markov chain. However, in many important special cases they do hold, and they greatly simplify the calculation of the stationary distribution. A common method of verifying the validity of the detailed balance equations for a given irreducible, aperiodic Markov chain is to hypothesize their validity and try to solve them for the steady-state probabilities p_j, j ≥ 0. There are two possibilities: either the system (3A.5) together with Σ_j p_j = 1 is inconsistent, or else a distribution {p_j | j ≥ 0} satisfying Eq. (3A.5) will be found. In the latter case, this distribution will clearly also satisfy the global balance equations (3A.2), which are equivalent to the condition (3A.1); so, by the theorem given earlier, {p_j | j ≥ 0} is the unique stationary distribution.

3A.3 Partial Balance Equations

Some Markov chains have the property that their stationary distribution {p_j | j ≥ 0} satisfies a set of equations which is intermediate between the global and the detailed balance equations. For every node j, consider a partition S_j^1, ..., S_j^k of the complementary set of nodes {i | i ≥ 0, i ≠ j}, and the equations

    p_j Σ_{i∈S_j^m} P_ji = Σ_{i∈S_j^m} p_i P_ij,    m = 1, ..., k.    (3A.6)

Equations of the form above are known as a set of partial balance equations. If a distribution {p_j | j ≥ 0} solves a set of partial balance equations, then it will also solve the global balance equations, so it will be the unique stationary distribution of the chain. A technique that often proves useful is to guess the right set of partial balance equations satisfied by the stationary distribution and then proceed to solve them.

3A.4 Continuous-Time Markov Chains

A continuous-time Markov chain is a process {X(t) | t ≥ 0} taking values from the set of states i = 0, 1, ...
that has the property that each time it enters state i:

1. The time it spends in state i is exponentially distributed with parameter ν_i. We may view ν_i as the average rate (in transitions/sec) at which the process makes a transition when at state i.
2. When the process leaves state i, it will enter state j with probability P_ij, where Σ_j P_ij = 1.

We will be interested in chains for which:

1. The number of transitions in any finite length of time is finite with probability one (such chains are called regular).
2. The discrete-time Markov chain with transition probabilities P_ij (called the imbedded chain) is irreducible.

Under the preceding conditions, it can be shown that the limit

    p_j = lim_{t→∞} P{X(t) = j | X(0) = i}    (3A.7)

exists and is independent of the initial state i. Furthermore, if the imbedded chain has a stationary distribution {π_j | j ≥ 0}, the steady-state probabilities p_j of the continuous chain are all positive and satisfy

    p_j = (π_j / ν_j) / (Σ_{i=0}^∞ π_i / ν_i).    (3A.8)

The interpretation here is that π_j represents the proportion of visits to state j, while p_j represents the proportion of time spent in state j in a typical system run. For every i and j, denote

    q_ij = ν_i P_ij.    (3A.9)

Since ν_i is the rate at which the process leaves i and P_ij is the probability that it then goes to j, it follows that q_ij is the rate at which the process makes a transition to j when at state i. Consequently, q_ij is called the transition rate from i to j.

Since we will often analyze continuous-time Markov chains in terms of their time-discretized versions, we describe the general method for doing this. Consider any δ > 0, and the discrete-time Markov chain {X̄_n | n ≥ 0}, where

    X̄_n = X(nδ),    n = 0, 1, ...

The stationary distribution of {X̄_n} is clearly {p_j | j ≥ 0}, the stationary distribution of the continuous chain (cf. Eq. (3A.7)). The transition probabilities of {X̄_n | n ≥ 0} are

    P̄_ij = δ q_ij + o(δ),    i ≠ j,
    P̄_ii = 1 − δ Σ_{j≠i} q_ij + o(δ).

Using these expressions in the global balance equations for the discrete chain (cf. Eq.
(3A.2)) and taking the limit as δ → 0, we obtain

    p_j Σ_{i=0}^∞ q_ji = Σ_{i=0}^∞ p_i q_ij,    j = 0, 1, ...    (3A.10)

These are the global balance equations for the continuous chain. Similarly, the detailed balance equations take the form

    p_j q_ji = p_i q_ij,    i, j ≥ 0.    (3A.11)

One can also write a set of partial balance equations and attempt to solve them for the distribution {p_j | j ≥ 0}. If a solution is found, it provides the stationary distribution of the continuous chain.
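Equation (3A.8) converts the imbedded chain's visit frequencies {π_j} into time-average probabilities by weighting each state with its mean holding time 1/ν_j and normalizing. A minimal sketch (the function name and the numerical values are our own illustrative choices):

```python
def time_stationary(pi, nu):
    """Eq. (3A.8): p_j = (pi_j / nu_j) / sum_i (pi_i / nu_i).

    pi: stationary distribution of the imbedded (discrete-time) chain.
    nu: exponential holding-time rates nu_j (transitions/sec) for each state.
    Returns the proportion of time spent in each state.
    """
    weights = [pi_j / nu_j for pi_j, nu_j in zip(pi, nu)]
    total = sum(weights)
    return [w / total for w in weights]

# Three states visited equally often by the imbedded chain, but state 0
# is left at half the rate of the others, so the continuous-time process
# spends proportionally more time there.
pi = [1 / 3, 1 / 3, 1 / 3]
nu = [1.0, 2.0, 2.0]
p = time_stationary(pi, nu)  # state 0 gets twice the time-share of states 1 and 2
```

This makes the distinction in the text concrete: π_j counts visits, while p_j measures occupancy time, and the two coincide only when all the rates ν_j are equal.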
