0% found this document useful (0 votes)
406 views6 pages

Sheldon Ross Stochastic Processes

Uploaded by

Aditi
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
406 views6 pages

Sheldon Ross Stochastic Processes

Uploaded by

Aditi
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
You are on page 1/ 6

Sheldon

ross stochastic processes pdf

Sheldon ross stochastic processes solutions pdf.

Want more?

Advanced embedding details, examples, and help! Download Product Flyer Download Product Flyer is to download PDF in new tab. This is a dummy description.

Download Product Flyer is to download PDF in new tab. This is a dummy description. Download Product Flyer is to download PDF in new tab. This is a dummy description. Download Product Flyer is to download PDF in new tab. This is a dummy description. This book contains material on compound Poisson random variables including an identity
which can be used to efficiently compute moments, Poisson approximations, and coverage of the mean time spent in transient states as well as examples relating to the Gibb's sampler, the Metropolis algorithm and mean cover time in star graphs. Request permission to reuse content from this site Preliminaries. The Poisson Process. Renewal Theory.
Markov Chains.
Continuous-Time Markov Chains. Martingales. Random Walks. Brownian Motion and Other Markov Processes.
Stochastic Order Relations. Poisson Approximations. Answers and Solutions to Selected Problems. Index. Download Product Flyer Download Product Flyer is to download PDF in new tab. This is a dummy description. Download Product Flyer is to download PDF in new tab. This is a dummy description. Download Product Flyer is to download PDF in
new tab. This is a dummy description. Download Product Flyer is to download PDF in new tab. This is a dummy description. gmetrix training test 1 answers This book contains material on compound Poisson random variables including an identity which can be used to efficiently compute moments, Poisson approximations, and coverage of the mean
time spent in transient states as well as examples relating to the Gibb's sampler, the Metropolis algorithm and mean cover time in star graphs. Request permission to reuse content from this site Preliminaries. The Poisson Process. laptop motherboard schematic diagram for repair pdf Renewal Theory. Markov Chains.
Continuous-Time Markov Chains. Martingales. Random Walks. Brownian Motion and Other Markov Processes. Stochastic Order Relations. Poisson Approximations. Answers and Solutions to Selected Problems. Index. STOCHASTICPROCESSES SecondEdition Sheldon M.Ross University of California, Berkeley JOHN WILEY &SONS.INC. New York
Chichester Brisbane Toronto Singapore ACQUISITIONSEDITORBradWileyII MARKETINGMANAGERDebraRiegert SENIORPRODUCfIONEDITORTonyVenGraitis MANUFACfURINGMANAGERDorothySinclair TEXT ANDCOVERDESIGNAGood Thing,Inc PRODUCfIONCOORDINATIONElmStreetPublishing Services,Inc
ThisbookwassetinTimesRomanbyBi-Comp,Incandprintedandboundby Courier/StoughtonThe cover wasprintedbyPhoenixColor Recognizingtheimportanceof preservingwhathasbeenwritten,itisapolicyof John Wiley&Sons,Inctohavebooksof enduring valuepublishedintheUnitedStatesprintedon acid-freepaper,andweexertourbesteffortsto thatend Thepaper
inthisbookwasmanufacturedbyamillwhoseforestmanagementprograms include sustainedyieldharvesting of itstimberlandsSustainedyieldharvestingprinciples ensurethatthenumber of treescuteachyear doesnotexceedtheamountof newgrowth Copyright1996,byJohnWiley&Sons,Inc AllrightsreservedPublishedsimultaneouslyinCanada Reproductionor
translationof anypartof thisworkbeyondthatpermittedbySections 107and108of the1976United StatesCopyright Actwithoutthepermissionof thecopyright owner isunlawfulRequestsforpermission or furtherinformationshOUldbeaddressedto thePermissionsDepartment, JohnWiley&Sons,Inc Library of CongressCataloging-in-PublicationData: Ross,SheldonM
Stochasticprocesses/SheldonMRoss -2nd ed pcm Includesbibliographicalreferencesandindex ISBN0-471-12062-6(cloth alkpaper) 1StochasticprocessesITitle QA274 R651996 5192-dc20 PrintedintheUnitedStatesof America 1098765432 95-38012 CIP OnMarch30,1980,abeautiful six-year-old girl died. Thisbook isdedicatedtothememory of Nichole
Pomaras Preface tothe FirstEdition Thistextisanonmeasuretheoretic introduction to stochastic processes,and as suchassumes a knowledge of calculus and elementary probability_In it we attempt to present someof thetheory of stochasticprocesses,to indicateits diverserangeof applications,and alsoto givethestudent some probabilistic intuition and
insight in thinking about problemsWe have attempted, wherever possible,toviewprocessesfromaprobabilisticinsteadof an analyticpoint of view.Thisattempt,forinstance,hasledusto studymostprocessesfrom a samplepath point ofview. I would like to thank Mark Brown, Cyrus Derman, Shun-Chen Niu, Michael Pinedo,and Zvi Schechnerfortheir
helpfulcomments SHELDONM.Ross vii Prefaceto the SecondEdition ThesecondeditionofStochasticProcessesincludesthefollowingchanges' (i)AdditionalmaterialinChapter2oncompoundPoissonrandomvari-ables,including anidentitythat can be usedto efficiently compute moments, andwhichleadstoanelegantrecursiveequationfortheprobabilItymass
functionof a nonnegative integer valued compound Poisson random variable; (ii)Aseparate chapter(Chapter6)onmartingales,includingsectionson theAzumainequality;and (iii)Anewchapter(Chapter10)onPoissonapproximations,including both theStein-Chen method forbounding the error of theseapproximations anda
methodforimprovingtheapproximationitself.
Inaddition,wehaveadded numerous exercisesandproblemsthroughout thetext.Additions toindividualchapters follow: InChapter1,wehavenewexamplesontheprobabilisticmethod,the multivanatenormaldistnbution,randomwalksongraphs,andthecomplete match problemAlso, we have new sections on probability inequalities (includ-ingChernoff bounds)and
on Bayes estimators (showing that they are almost neverunbiased).Aproof of thestrong lawof largenumbersisgiveninthe Appendixtothischapter. Newexamples on patterns and on memoryless optimal cointossing strate-giesaregiveninChapter 3. There is new matenal in Chapter 4 covering the mean time spent in transient
states,aswellasexamplesrelatingtotheGibb'ssampler,theMetropolis algonthm,andthemean covertimeinstar graphs. Chapter 5includesanexample onatwo-sexpopulationgrowthmodel. prepare to be tortured pdf free download

Chapter6hasadditionalexamplesillustratingtheuseofthemartingale stoppingtheorem. Chapter 7 includes new material on Spitzer's identity and using it to compute mean delaysinsingle-server queues withgamma-distnbutedinterarrivaland servicetimes. Chapter 8 on Brownian motionhasbeen movedtofollowthechapter on
martingalestoallowustoutilIZemartingalestoanalyzeBrownianmotion. ix xPREFACE TO THE SECOND EDITION Chapter 9 on stochastic order relations now includes a section on associated random variables, as well as new examples utilizing coupling in coupon collect-ingandbinpacking problems. diane arbus biography pdf

We wouldliketothank allthose who werekind enough to write and send commentsaboutthefirstedition,withparticularthankstoHeSheng-wu, Stephen Herschkorn, Robert Kertz, James Matis, Erol Pekoz, Maria Rieders, and TomaszRolskifortheirmanyhelpfulcomments. SHELDONM. mapamundi politico para colorear pdf Ross Contents
CHAPTER1.PRELIMINARIES1 1.1.Probability1 1.2.RandomVariables7 1.3.ExpectedValue9 1.4Moment Generating,Characteristic Functions,and LaplaceTransforms15 1.5.ConditionalExpeCtation20 1.5.1ConditionalExpectationsandBayes Estimators33 1.6.TheExponentialDistribution,Lackof Memory,and HazardRate Functions35
1.7.SomeProbabilityInequalities39 1.8.Limit Theorems41 1.9.StochasticProcesses41 Problems46 References55 Appendix56 CHAPTER2.THE POISSON PROCESS59 2 1.ThePoissonProcess59 22.InterarrivalandWaiting TimeDistributions64 2.3ConditionalDistributionof theArrivalTimes66 2.31.TheMIGIIBusyPeriod73
2.4.NonhomogeneousPoissonProcess78 2.5CompoundPoissonRandomVariablesand Processes82 2.51.ACompound PoissonIdentity84 25.2.CompoundPoissonProcesses87 xi xiiCONTENTS 2.6ConditionalPoissonProcesses88 Problems89 References97 CHAPTER3.RENEWAL THEORY98 3.1IntroductionandPreliminaries98 32Distribution of N(t)99 3
3SomeLimit Theorems101 331Wald's Equation104 332BacktoRenewalTheory106 34TheKeyRenewalTheoremandApplications109 341AlternatingRenewalProcesses114 342LimitingMeanExcessandExpansion of met)119 343Age-DependentBranchingProcesses121 3.5DelayedRenewalProcesses123 3 6RenewalRewardProcesses132 3 6
1AQueueingApplication138 3.7.RegenerativeProcesses140 3.7.1TheSymmetric RandomWalkandtheArc Sine Laws142 3.8StationaryPointProcesses149 Problems153 References161 CHAPTER4.MARKOV CHAINS163 41IntroductionandExamples163 42.Chapman-KolmogorovEquationsandClassification of States167 4 3Limit Theorems173 44.Transitions
amongClasses,theGambler'sRuinProblem, andMean Times inTransient States185 4 5Branching Processes191 46.Applications of MarkovChains193 4 6 1AMarkovChainModelof Algorithmic Efficiency193 462AnApplicationtoRuns-A MarkovChainwitha ContinuousStateSpace195 463ListOrdering Rules-Optimality of the TranspositionRule198 A
CONTENTSXIII 4.7Time-ReversibleMarkovChains203 48Semi-MarkovProcesses213 Problems219 References230 CHAPTER5.CONTINUOUS-TIME MARKOV CHAINS231 5 1Introduction231 52.Continuous-TimeMarkovChains231 5.3.-Birth andDeath Processes233 5.4TheKolmogorovDifferentialEquations239
5.4.1ComputingtheTransitionProbabilities249 5.5.Limiting251 5 6.TimeReversibility257 5.6.1TandemQueues262 5.62AStochasticPopulationModel263 5.7Applications of theReversedChaintoQueueing Theory270 57.1.Networkof Queues271 57 2.TheErlang LossFormula275 573TheMIG/1SharedProcessorSystem278 58.Uniformization282 Problems2R6
References294 CHAPTER6.MARTINGALES295 Introduction295 6 1Martingales295 62Stopping Times298 6 3.Azuma'sInequalityforMartingales305 64.Submartingales,Supermartingales.andtheMartingale ConvergenceTheorem313 65.AGeneralizedAzumaInequality319 Problems322 References327 CHAPTER7.RANDOM WALKS328 Introduction32R 7
1.DualityinRandomWalks329 xivCONTENTS 7.2Some RemarksConcerningExchangeableRandom Variables338 73Using MartingalestoAnalyzeRandomWalks 74ApplicationstoGI Gil Queues andRuin Problems344 7.4.1The GIGll Queue 7 4 2ARuinProblem 344 347 7 5Blackwell's Theorem on theLine349 Problems352 References355 341
CHAPTER8.BROWNIANMOTION ANDOTHER MARKOV PROCESSES356 8.1IntroductionandPreliminaries356 8.2.Hitting Times,MaximumVariable,andArc Sine Laws363 83.Variations onBrownianMotion366 83.1Brownian MotionAbsorbedataValue 8.3.2BrownianMotionReflectedattheOrigin 8 3 3Geometric BrownianMotion368 8.3.4Integrated
BrownianMotion369 8.4Brownian MotionwithDrift372 84.1Using Martingalesto AnalyzeBrownian Motion381 366 368 85BackwardandForward DiffusionEquations383 8.6Applications of theKolmogorovEquationstoObtaining Limiting Distributions385 8.61.Semi-MarkovProcesses385 862.The MIG/1Queue388 8.6.3.ARuinProblem inRiskTheory
87.AMarkovShot NoiseProcess'393 392 88Stationary Processes396 Problems399 References403 CHAPTER9.STOCHASTIC ORDER RELATIONS404 Introduction404 9 1StochasticallyLarger404 CONTENTSxv 9 2.Coupling409 9.2 1.Stochastic MonotonicityProperties of Birthand Death Processes416 9.2.2ExponentialConvergence inMarkov Chains418
0.3.HazardRateOrderingandApplicationstoCounting Processes420 94.LikelihoodRatio Ordering428 95.Stochastically MoreVariable433 9.6Applicationsof VariabilityOrderings437 9.6.1.Comparisonof GIGl1Queues439 9.6.2.ARenewalProcessApplication440 963.ABranching ProcessApplication443 9.7.AssociatedRandom Variables446 Probl ems449
References456 CHAPTER10.POISSON APPROXIMATIONS457 Introduction457 10.1Brun's Sieve457 10.2The Stein-ChenMethodforBoundingtheError of the PoissonApproximation462 10.3.ImprovingthePoissonApproximation467 Problems470 References472 ANSWERS ANDSOLUTIONS TOSELECTED PROBLEMS473 INDEX505 CHAPTER1
Preliminaries 1. 1PROBABILITY Abasicnotioninprobabilitytheoryisrandomexperimentanexperiment whoseoutcomecannotbedeterminedinadvance.Thesetofallpossible outcomes of an experiment iscalledthe sample space of that experiment, and wedenoteitby S. esclerosis multiple que es pdf
An event isa subset of asample space,and issaidto occur if the outcome of the experiment isan element of that subset.We shall suppose that for each eventEofthesamplespaceSanumberP(E)isdefinedandsatisfiesthe followingthreeaxioms*: Axiom(1)O:s;;P(E)~1. Axiom(2)P(S)=1 Axiom(3)ForanysequenceofeventsE.,E2,thataremutually exclusive, that is,
events for which E,E,= cf>when i ~j (where cf>isthenullset), P ( 9E,)=~P(E,). We refertoP(E)astheprobability of the eventE. Somesimpleconsequences of axioms(1).(2).and(3)are. 1.1.1.If ECF,thenP(E):SP(F). 1.1.2.P(EC)=1- P(E)whereECisthecomplement of E. 1.1.3.P ( U ~E,)=~ ~P(E,)whentheE,are mutuallyexclusive. 8015274723.pdf 1.1.4.P ( U
~E,):S~ ~P(E,). The inequality(114) isknownasBoole's inequality * Actually P(E) willonly be definedfor the so-calledmeasurable events of SButthis restriction neednotconcernus 1 2PRELIMIN ARIES An important property of the probability functionP is that it iscontinuous. Tomakethismoreprecise,weneedtheconceptof alimitingevent,which we define as
followsA sequence of even ts {E" , n 2=I} is said to be an increasing sequenceif E"CE,,+1>n;>1 andissaidtobe decreasingifE":JE,,+I,n;> 1.If {E",n2=I}isanincreasingsequenceofevents,thenwedefineanew event,denotedby lim"_,,,E"by OIllimE,,=U E;when E"CE,,+I,n 2=1. ,,_00 ,"'I Similarlyif {E", n2=I}isadecreasing sequence,thendefinelim,,_ooE"by
OIllimE" =n E" when E":J E,,+I,n 2=1. ,,_00 We may nowstate the following: PROPOSITION1.1.1 If {E",n2:1}iseither anincreasingordecreasing sequence of events,then ProofSuppose,first,that{E",n2:1}isanincreasingsequence,anddefineevents F", n2:1 by ( II-I) F"= E"YE,t= E " E ~ _ \ ,n>l That is,F"consistsof thosepointsinE"thatarenotinanyof
theearlier E,.i ~lim 2: P(E,) n_OCIc=n =0. andthe resultisproven. ExAMPlE1.1.(a)Let Xl, X2,be suchthat P{Xn= O}=lIn2 =1- P{Xn=1},n ~ 1 .
If weletEn={Xn=O},then,as~ :peEn}2: P(En)=00, n-l then P{aninfinitenumber of theEnoccur}= 1. 6PRELIMINARIES Proof P{an infinite number ofthe En occur} ==Ptd E,} ==limp(U Ei) n_ooNow, (by independence) = II (1- P(E, (by the inequality 1 - x se-Z) = exp ( - P(E,) '" =0since 2: P(E,)=00for all n. i-If Hencetheresultfollows. ExAMPLE1.1(c)Let
XI,X2,beindependentandsuchthat P{Xn=O}=lin=1- P{Xn=I}, If weletE"= {Xn= O},thenas P(E,,)=00it followsfrom Proposition1.1.3thatEnoccursinfinitelyoften.Also,as =00italso followsthatalsooccursinfinitely often. Hence,withprobability1,Xnwillequal 0infinitely oftenandwill also equal1 infinitely often. Hence,withprobabilityI, X"willnot
approachalImitingvalueasn -400. RANDOMVARIABLES.7 1.2RANDOMVARIABLES Consider arandom experiment having sample space SArandomvariableX isafunctionthatassignsarealvaluetoeachoutcomeinS.Foranysetof realnumbers A, theprobability that Xwillassumeavaluethat iscontained inthe set Aisequaltotheprobability thatthe outcome of the
experiment is containedinX-I(A).That is, P{X EA}=P(X-I(A, whereX-I(A)isthe event consisting of allpoints sESsuchthat X(s)EA.
ThedistributionfunctionF of therandomvariableXisdefinedforany realnumber xby F(x)=P{X::$;x}=P{X E(- 00,x]). Weshalldenote1- F(x)byF(x),and so F(x)= P{X> x} ArandomvariableXissaidtobediscreteifitssetof possiblevaluesis countableFordiscreterandomvariables, F(x)2: P{X = y} YSJ: Arandom variable is called continuous if there exists a
functionf(x), called the probabilitydensity function,suchthat P{X is in B} =fBf(x) dx for every set B.SinceF(x)J ~...f(x)dx,it followsthat d f(x) =dx F(x). The jointdistributionfunctionF of tworandomvariablesXandYisde finedby F(x,y)=P{X ::$;x,Y::$;y}. The distributionfunctionsof XandY, Fx(x)P{X s;x}andFy(y)=P{Y 1, are increasing and '" lim{X :5;x, Y:5;
Yn}U {X O.
since(7) =0 when i > m. resuscitation council guidelines 2020 11 u Hence,if welet ifN>O if N =0, then(1.3.5)and(1.3.6)yield 1 - I= ~( ~ )(-1)' or (1.3.7) 1= ~( ~ )(-1)1+1 Taking expectations of both sidesof (1.3.7)yields PRELIMINARIES (1.3.8)E[Il = E[N] - E[(:)] + ... + (-1)n+IE [(:)]. However, E[I] = P{N> O} ==P{at least one of the A, occurs} and
E[N] =E[ ~ ~ ]= ~P(A,), E[ ( ~ ) ]= E[number of pairs of the A, that occur] =LL P(A,A), 10,b>0 ProbabilityDensityFunctIOn, f(x) 1 0 Ae-.lr(Ax),,-1 (n-I)'11! --e-(X-") 12fT - 00< xk},validforallnonnegativeintegervalued randomvariablesN(seeProblem1.1) EXAMPLE1..5(1)ClassifyingaPoissonNumberofEvents. Suppose that we are observing events, and that
N, the total number thatoccur,isaPoissonrandomvariablewithmeanA.Suppose alsothateacheventthatoccursis,independentofotherevents, classifiedasatypejeventwithprobabilityPI'j=1,... ,k, k ~PI=1LetNIdenotethenumberoftypejeventsthatoccur, 1= 1 j=1,,k,andletusdeterminetheirjointprobabilitymass function k Foranynonnegativeintegersn" j=1, Then,
sinceN= ~NI,wehavethat ,k,letn=~nl 1= 1 I P{NI=n"j= 1,. ,k} =P{NI = n"j = 1, ... ,k IN = n}P{N = n} + P{NI = nl,j = 1, ... ,k I N~n}P{N =n} =P{NI = n"j =1,., kiN = n}P{N = n}.
Now,giventhatthereare atotalof N=nevents it follows,since each event isindependently a type jevent withprobability PI'1 :S jO.if itsprobability density functionisgivenby {A-Ax f(x)=oe x2::0 x s+ tlX> t}=P{X> s}for s, t ;;::O. 36PRELiMINARIES If wethink of Xas being the lifetime of someinstrument, then (1.62) states
thattheprobabilitythattheinstrumentlivesforat leasts+ thours,given that ithas survivedthours,isthesameastheinitialprobabilitythat it lives forat least shours.Inotherwords,iftheinstrumentisaliveattimet,then thedistributionof itsremaininglifeistheoriginallifetimedistribution.The condition(1.6.2)isequivalentto F(s+ t)F(s)F(t). andsincethisissatisfiedwhen F
isthe exponential, wesee that such random variablesare memoryless ExAMPlE1..6(A)Considerapost officehaving two cler ks,andsup-posethatwhenAentersthesystemhediscoversthatBisbeing served byoneof the clerksandC bytheother.Suppose alsothat AistoldthathisservicewillbeginassoonaseitherBorC leaves.If theamountof
timeaclerkspendswithacustomeris exponentiallydistributedwithmean11 A,whatistheprobability that,of thethree customers,Aisthe lastto leavethepost office? Theanswerisobtainedbyreasoningasfollows:Considerthe timeatwhichAfirstfindsafreeclerk.A tthispointeitherBor C wouldhave just left andtheother onewouldstillbeinservice. However, bythe lack of
memory of the exponential, it followsthat theamountof additionaltimethatthisother personhasto spend inthe post officeisexponentially distributed with mean11 A.That is,itisthesameasif hewasjust startinghisserviceat thispoint Hence, by symmetry, the probability that he finishes before Amust equal ~ ExAMPlE 1..6(B)Let XI, X2,be independent and
identically dis-tributed continuous random variables with distributionF.Wesay thatarecordoccursattimen,n>0,andhasvalueXnif Xn>max(XI!. ,Xn-,), whereXo=00.That is, a record occurs eachtimeanewhighisreached.Let1idenotethetimebetween theith andthe(i+ l)th record.What isits distribution? As a preliminary to computing the distribution of 1i, let
usnote thatthe recordtimes of thesequenceX"X2,willbethe same as for the sequence F(X,), F(X2),,and since F(X) has a uniform (0,1) distribution (see Problem 1.2), it followsthat the distribution of 1idoesnot depend ontheactualdistributionF (as longasit is continuous). So let us suppose that Fis the exponential distribution withparameterA1 To
computethedistributionofT"wewillconditiononR,the ithrecordvalue.NowRI=XIisexponentialwithrate1.R2has the distribution of an exponential with rate 1 giventhat it is greater than R,. But by the lack of memory property of the exponential this EXPONENTIALDIS I RIBUIION,LACKOFMEMORY.HAZARDRAIEFUNCTIONS37 means that R2hasthe same
distribution as RIplus an independent exponentialwithrate 1.Hence R2hasthe same distribution asthe sumoftwoindependentexponentialrandomvariableswithrate 1.Thesameargument showsthat R,hasthesamedistributionas thesumof iindependentexponentialswithrate1.Butitiswell known(seeProblem1.29)thatsucharandomvariablehasthe
gammadistributionwithparameters (i,1).That is,thedensityof R,isgivenby t~ O . Hence, conditioningonR,yields i ~1, where thelast equation followssinceif theith record value equals t,thennoneof thenext kvalueswillberecordsif theyare allless thant. fifa 21 obb download It turns out that not only isthe exponentialdistribution "memoryless," but
itistheunique distribution possessing this property.
To see this, suppose that Xismemorylessand letF(x)=P{X> x}Then F(s+ t)=F(s)F(t). Thatis,Fsatisfiesthefunctionalequation g(s+ t)=g(s)g(t). However,theonlysolutionsoftheaboveequationthatsatisfyanysortof reasonablecondition(suchasmonotonicity,rightor leftcontinuity,oreven measurability)areof theform g(x)=e-Ax for some suitable value of A.
[Asimple proof when g is assumed right continu-ousisasfollows.Sinceg(s+ t)= g(s)g(t),it followsthat g(2In)=g(lIn+ lin)= g2(lIn).Repeatingthis yields g(mln)= g"'(lIn).Also g(l)= g(lIn+ ... +lin)=gn(lIn).Hence,g(mln)=(g(l))mln,whichimplies,sincegis rightcontinuous,thatg(x)=(g(l)),Sinceg(l)=g2(1I2)~0,weobtain g(x)=eA....whereA=-
log(g(l))]Sinceadistributionfunctionisalways 38PRELIMINARIES rightcontinuous,wemusthave Thememorylesspropertyoftheexponentialisfurtherillustratedbythe failureratefunction(alsocalledthehazardratefunction)oftheexponen-tialdistribution ConsideracontinuousrandomvariableXhavingdistributionfunctionF anddensity fThe failure(or
hazard)ratefunctionA(t)isdefinedby (16.3) A(t)= [(f) F(t) To interpretA(t),thinkof Xasbeingthe lifetimeof some item, and suppose thatXhassurvivedforthoursandwedesiretheprobability thatitwillnot survive for an additional time dtThat is, consider P{X E(t, t + dt) I X> t}Now P{X(d) I X} =P{X E(t, t + dt), X> t} Et, t+t> tP{X> t} _P{XE (t,t + dt)} - P{X>
t} f(t) dt ~ - - - -F(t) = A(t) dt That is,A(t) represents the probability intensity that at-year-old item willfaiL SupposenowthatthelifetimedistributionisexponentialThen,bythe memorylessproperty,itfollowsthatthedistributionof remaininglifefora t-year-olditemisthe same as foranew item.HenceA(t)should be constant. This checks out since
Thus,thefailureratefunctionfortheexponentialdistributionisconstant The parameter A isoften referred to asthe rateof the distribution.(Note that therateisthereciprocalof themean,andviceversa) ItturnsoutthatthefailureratefunctionA(t)uniquelydeterminesthe distributionF.To provethis,wenotethat d-- - F(t) A( t)= ---=d::--t _ F(t)
SOMEPROBABILITYINEQUALITIES Integrationyields log F(t) =f>\(I) dt+ k or Letting t==0 showsthatc=1 and so F(t)~exp{ - f ~A(t) dt} 1.7SOMEPROBABILITYINEQUALITIES Westartwithaninequality known asMarkov's inequality. mamas_and_papas_switch_pram_instructions.pdf Lemma 1.7.1Markov's Inequality If Xisanonnegativerandom
variable,then forany a >0 P{X:;::a}=s;E[X]/a 39 ProofLet I{X ;:::a} be1 jf X;:::a and 0otherwise. Then,itiseasy to see sinceX;::: o that aI{X ;:::a}=s;X Taking expectationsyieldstheresult PROPOSITION1.7.2ChernoffBounds LetXbearandomvariablewithmomentgenerating functionM(t)E[e'X]Then for a> 0 P{X;::: a} =s;e-IOM(I) P{X =s;a} =s;e-lUM(t)
foral! t > 0 for all t < O. 40PRELIMINARIES ProofFor t >0 wheretheinequality followsfromMarkov'sinequality. The proof for t a}byusingthattthatmini-mizes r'DM(t) ExAMPLE1.7(A)Chemoff Bounds/or PoissonRandomVariables. If Xis Poisson with mean A,then M(t):=eA(e'-I)Hence, the Chernoff bound forP{X ~j}is The value of tthat minimizes the
preceding isthat value forwhich e':=j/ A.Provided that j/ A >1,this minimizing value will be positive andso weobtaininthiscase that Our nextinequality relatestoexpectations ratherthanprobabilities. PROPOSITION1.7.3Jensen'sInequality If f isaconvexfunction,then E[f(X)]~f(E[X]) providedtheexpectations exist. ProofWe willgive a proof under the
supposition that fhas a Taylor series expansion Expanding about IJ- = E[X] and using the Taylor series with a remainderformula yields f(x)= f(lJ-)+ f'(IJ-)(x- IJ-)+ f"(fJ(x - ~ ) 2 / 2 ~f(lJ-)+ f'(IJ-)(x- IJ-) since f " ( ~ )~0by convexityHence, STOCHASTICPROCESSES41 Taking expectationsgivesthat 1.8LIMITTHEOREMS Someof themostimportant
resultsinprobabilitytheory areintheformof limittheorems. The twomostimportantare: StrongJLaw of Large Numbers If XI,X2, are independent andidenticallydistnbuted withmean IL,then P {lim(XI + ... + X/I)/n= IL}1. /1-+'" Central Limit Theorem If XI,X2,areindependentandidenticallydistnbutedwithmeanILand variancea2,then lim P/I$a=IIe-x212 dx.
{XI+ ... + X- nIL}f1 n-+'"aVn-'" Th us if we let S"=:2::=1X"where XI,X2 , are independent and identically distributed, then the Strong Law of Large Numbers states that, with probability 1,S,,/nwillconvergetoE[X,j; whereasthecentrallimittheorem statesthat S"willhaveanasymptotic normaldistributionasn ~00. 1.9STOCHASTICPROCESSES
AstochasticprocessX={X(t),tET}isacollectionof randomvariables. Thatis,foreachtintheindexsetT,X(t)isarandomvariable.Weoften interpret tastimeand callX(t) thestate of the process at time t.If the index setTisa countable set, wecallXa discrete-time stochastic process,and if T isa continuum, wecallitacontinuous-time process. Anyrealizationof
Xiscalledasamplepath.Forinstance,ifeventsare occurringrandomlyintimeandX(t)representsthenumberof eventsthat occur in[0,t],thenFigure1 91givesa samplepath of Xwhichcorresponds Aproof of theStrongLawof LargeNumhersisgivenintheAppendixtothischapter 42PRELIMINARIES 3 2 234 t Figure1.9.1.Asample palhoj X(t)=number oj eventsin[0,I]. to
theinitialevent occumng at time1,the nexteventattime 3 and thethird attime4,andno events anywhereelse A continuous-time stochastic process {X(t), tET} issaidto have indepen-dent incrementsifforallto1, INTERARRIVAL ANDWAITING TIME DISTRIBUTIONS65' itiseasy to show,using momentgenerating functions,thatProposition 2.2.1
impliesthatSnhasagammadistributionwithparametersnandA.Thatis, itsprobability density is (AtY-1 f(t) =Ae-Ar ,t;;::: O. (n- 1)' The above couldalsohavebeen derived bynoting thatthe nth eventoccurs prioror attimetif,andonlyif,thenumberof eventsoccurring by timetis atleast n. That is, Hence, N(t)nSnt. P{Sn$t}=P{N(t) >n} _-AI (At)! - L.Je-.,-, ;=nJ
whichupon differentiationyieldsthatthedensityfunctionof Snis f(t)=- f Ae-AI + f Ae-Ar J.(j-1). _(At)n-I - Ae(n- 1)! RemarkAnother way of obtaining the density of Snis to use the independent incrementassumptionasfollows P{t < Sn< t + dt}= P{N(t) =n- 1,1event in (t, t + dt)}+ o(dt) = P{N(t) = n- I}P{levent in (t, t + dt)}+ o (dt) -M(At)n-1 = eAdt+ o(dt)
(n- I)! whichyields,upondividing byd(t)andthenletting itapproach 0,that Proposition2.2.1alsogivesusanother wayof definingaPoissonprocess. Forsupposethatwestartoutwithasequence{Xn,n1}ofindependent identicallydistributedexponentialrandomvariableseachhavingmean1/ A. Nowletusdefineacountingprocessbysayingthatthentheventofthis
66THEPOISSONPROCESS processoccursattimeSn,where Theresultantcounting process {N(t),t~o}willbePoissonwithrateA. 2.3CONDITIONALDISTRIBUTIONOFTHE ARRIVALTIMES SupposewearetoldthatexactlyoneeventofaPoissonprocesshastaken placebytimet,andweareaskedto determinethedistributionofthetime at which the event occurred.
zulalilejoweso.pdf Since a Poisson process possesses stationary and independentincrements,itseemsreasonablethateachintervalin{O,t]of equallengthshouldhavethesameprobabilityofcontainingtheeventIn other words, the time of the event should be uniformly distributed over [0, t]. This iseasilychecked since,fors:=::;t, P{XY2,...
,y,,), CONDITIONAL DISTRIBUTIONOF THE ARRIVAL TIMES67 and(ii)the probability densitythat (Yh Yz, ... ,Yn) isequal to Yit,Y'2'... , Yiis /(y,)/(y, )... /(y, )=II; /(y,) when (y, , Y,, ..,Yi) isa permutation 12.12. of (Yl>Y2,.., Yn). If the Y" i= 1,... , n, are uniformly distributed over (0,t), then it follows fromthe abovethat the joint density functionof the order
statistics Y(I),Y(2) .
. . , YIn)is n! r' 0< Y'< Y2< ... < Yn< t. Wearenowreadyforthefollowingusefultheorem.
THEOREM2.3.1 GiventhatN(t)= n,thenarrivaltimes 51,. , 5n havethesamedistributionasthe order statisticscorresponding tonindependent randomvariables uniformly distributed ontheinterval(0,t). ProofWeshallcomputetheconditionaldensityfunctionof 51,', 5n giventhat N(t)=nSolet0=2: E[e/(X,++XnlIN =n]e-A/{At)"lnl " ~ O ex>(2.5.1 ) =2:
E[e/(X,++Xnl]e-At{At)"lnI n=O ex>(25.2)=2: E[e'X']"e-A/{At)"ln! n=O where (2.5.1) follows from the independence of {XI, X2, }and N, and (2.5.2) followsfromtheindependenceof theX,Hence,letting denotethe momentgenerating functionof theX"wehavefrom(2.5.2)that 0; E [e'W] = 2:[4>x{t)]"e-A/{At)"lnl n ~ O (2.5.3)=exp{At{4>x{t)- 1)]
COMPOUNDPOISSONRANDOMVARIABLESANDPROCESSES 83. It iseasilyshown,eitherbydifferentiating(25.3)orbydirectlyusinga conditioningargument,that whereXhasdistributionF. E[W]=AE[X] Var(W) =AE [X2] EXAMPLE2.5(A)Asidefromthewayinwhichtheyaredefined, compoundPoissonrandomvariablesoftenariseinthefollowing
mannerSupposethateventsareoccurringinaccordancewitha Poissonprocesshavingrate(say)a,andthatwheneveranevent occursa certaincontributionresultsSpecifically,supposethatan event occurringattime swill,independent of the past,resultina contnbutionwhosevalueisarandomvariablewithdistribution Fs.LetW denote the sum of the contributions up to
time t-that is, whereN(t)isthenumber of events occurringbytimet,andXiis the contribution made when event; occurs. Then, even though the XIareneitherindependentnoridenticallydistributeditfollows thatW isacompoundPoissonrandomvariablewithparameters A = atand 1 II F(x)=- Fs(x)ds. t0 ThiscanbeshownbycalculatingthedistributionofWbyfirst
conditioningonN(t),andthenusingtheresultthat,givenN(t), theunorderedsetofN(t)eventtimesareindependentuniform (0,t)randomvariables(seeSection 2.3). When F is a discrete probability distribution function there is an interesting representation of W asalinear combinationof independentPoissonrandom variables.SupposethattheXIarediscreterandom
variables suchthat k P{XI=j}= p"j = 1, .. , k, L p,= 1. , ~ I If weletN,denotethenumber of theX/sthatareequalto j, j=1,... ,k, thenwecanexpressW as (25.4) 84THEPOISSONPROCESS where,usingtheresultsofExample1.5(1),theareindependentPoisson random variableswithrespectivemeansAPi'j=1,.,kAs acheck,letus usethe representation(2.5.4)to
computethemeanandvariance of W. E[W] = =2,jAp, =AE[X] , Var(W) = = 2,FAp, = AE[X2] ,, whichcheck withourpreviousresults. 2.5.1A Compound Poisson Identity N Asbefore,letW=XrbeacompoundPoissonrandomvariablewithN 1=1 being Poisson with mean A and the X,having distribution F.We now present ausefulidentity concerningW.
PROPOSITION2.5.2 Let Xbearandom variablehavingdistribution F that i0Let hex)={O lIn ifx# n if x= n SinceWheW)=I{W =n}.whichisdefinedtoequal1 ifW=nand 0otherwise.we obtainuponapplyingProposition 252 that P{W = n} = AE[Xh(W + X)] = A2: E[Xh(W+X)lx= naJ J = A 2:jE[h(W + j)]aJ J =A "j-l P{W + j=n}a L.Jn J J COMPOUNDPOISSONR.-:-
NDOMVARIABLESANDPROCESSES RemarkWhentheX,areidenticallyequalto1,theprecedingrecursion reducestothe well-knownidentity forPoissonprobabilities P{N = O}=e-A A P{N =n}=- P{N =n- I}, n ExAMPLE2.5(a)LetWbeacompoundPoissonrandomvariable withPoissonparameterA =4andwith p{X,=i}= 114,i= 1,2, 3,4.
TodetermineP{W=5},weusetherecursionof Corollary2.5.4 asfollows: Po=e-A = e-4 p.=Aa.Po = e-4 P2 = + 2a2PO}=P3 = {a. P2 + 2a2Pl+ 3a3PO}= 1: e-4 P4 ={a. P3 + 2a2P2+ 3a3P.+ 4a4PO}= e-4 A501-4 Ps =5" {a. P4 + 2a2P3+ 3a3P2+ 4a4P.+ 5asPo} =120 e 2.5.2Compound Poisson Processes
Astochasticprocess{X(t),tO}issaidtobeacompoundPoissonprocess if itcanberepresented,fort0,by N(,) X(t)=2: X, ,=. where{N(t),t>O}isaPoissonprocess,and{X"i=1,2,...} isafamilyof independent and identically distributedrandom variables that isindependent oftheprocess{N(t),tO}.Thus,if{X(t),tO}isacompoundPoisson processthenX(t)isa
compoundPoissonrandomvariable. Asanexampleof acompoundPoissonprocess,supposethatcustomers arrive ata store ata Poisson rate A.Suppose,also, that the amounts of money spent byeach customer forma set of independent and identically distributed randomvariablesthatisindependentof thearrivalprocess.If X(t)denotes 88THE POISSONPROCESS
thetotalamountspentinthestorebyallcustomersarrivingbytimet,then {X(t),t:>O}isa compoundPoissonprocess. 2.6CONDITIONALPOISSONPROCESSES LetA bea positiverandom variablehaving distributionGand let {N(t),t:> O}bea counting process suchthat, giventhat A =:::: A,{N(t), t:> O}isa Poisson processhavingrateA.Thus,forinstance, P{N(t +
s)N(s)n} The process {N(t),t2!:O}is calleda conditional Poissonprocess since, condi-tionalontheeventthatA= A,itisaPoissonprocess withrateA.It should benoted,however,that{N(t),t:>O}isnotaPoissonprocess.Forinstance, whereasitdoeshavestationaryincrements,itdoesnothaveindependent ones(Why not?) Letuscomputetheconditional distributionof A
giventhat N(t)=n.For dAsmall, P{AE(A,A + dA) I N(t)n} _P{N(t)n I A E(A,A + dA)}P{AE(A,A + dA)} e-Ar (At)ndG(A) n! = - - ~ - : - - - -J'"e-AI(At)ndG(A) on! P{N(t) = n} and sothe conditionaldistributionof A,giventhat N(t)= n,is givenby ExAMPLE2.6(A)Suppose that, depending on factorsnot at present
understood,theaveragerateatwhichseismicshocksoccurina certain region over a givenseason iseitherAIor A2. Supposealso that it isAIfor100 ppercent of theseasons andA2theremaining time.Asimplemodelforsuchasituationwouldbetosuppose that{N(t),0O}isa PoissonprocesswithrateAI+ A2Also, show thattheprobability that
thefirsteventofthecombinedprocesscomesfrom{N1 (t),t2:O}is AI/(AI+ A2).independently of thetimeof theevent. 2.6.Amachineneedstwotypesofcomponentsinordertofunction.We haveastockpileofntype-lcomponentsandmtype-2components. 90TH EPOISSONPROCESS Type-icomponentslastforanexponentialtimewithrateILlbefore failing.Computethemean
lengthof timethemachineisoperativeifa failed component isreplaced by one of the same type from the stockpile; that is, computeXI!Y,)], where the X,(Y,) are exponential withrateILl (IL2). 2.7.Computethe joint distributionof 51,52,53.
2.8.Generating a Poisson Random Variable. LetVI!V2, be independent uniform(0,1)randomvariables. monophthongs_and_diphthongs.pdf (a)If X,=(-logV,)/A,showthatX,isexponentiallydistnbutedwith rateA. (b)Use part (a) to show that N isPoisson distributed with mean A when Nisdefinedtoequalthat valueof nsuch that nn+1 II V,e-.1> II V" where
V,1.ComparewithProblem1 21of Chapter1. 2.9.SupposethateventsoccuraccordingtoaPoissonprocesswithrateA. Each timeanevent occurswemust decide whether or not to stop, with ourobjectivebeingtostopatthelasteventtooccurpriortosome specified timeT.That is,ifanevent occursat timet, $t-n} 2.13.Supposethat
shocksoccuraccordingtoaPoissonprocesswithrateA, andsupposethateachshock,independently.causesthesystemtofail with probability pLet Ndenotethenumber of shocksthat ittakesfor thesystemtofailandletTdenotethetimeoffailureFind P{Nnl T=t} 2.14.Consideranelevatorthatstartsinthebasementandtravelsupward. Let Ntdenotethenumber of peoplethat
getintheelevatorat floori. AssumetheN,areindependentandthatNtisPoissonwithmeanA, Eachpersonenteringatiwill,independent of everythingelse,getoff at j withprobability Plj'~ j > 'Plj = 1.Let OJnumber of people getting off theelevatoratfloorj (a)ComputeE[OJ (b)Whatisthedistributionof OJ? 80548734386.pdf
(c)Whatisthejointdistributionof OJandOk? 2.15.Consider an r-sided coin and suppose that on each flipof the coin exactly one of the sidesappears' side iwithprobability PI' ~ ~PI=1For given numbers n'l,., n"let N,denote the number of flipsrequired until side ihasappearedforthe nl time,i1,., r,andlet 92THEPOISSONPROCESS Thus N isthe number of
flipsrequired until sidei hasappeared n,times forsomei= 1,, r (a)What isthedistributionof N,? (b)AretheN,independent? Nowsupposethattheflipsareperformedatrandomtimesgenerated byaPoissonprocess withrateA =1LetT,denote thetimeuntilside ihasappearedforthen,time,i=1,, randlet T=minTI 1=1.r (c)What isthedistributionof TI?
'(d)Arethe11independent? (e)DeriveanexpressionforE[T] (f)Use(e)toderiveanexpressionforE[Nl 2.16.ThenumberoftrialstobeperformedisaPoissonrandomvariable withmeanAEachtrialhasnpossibleoutcomesand,independentof everythingelse,resultsinoutcomenumberiwithprobabilityPi. ~ ~P,1LetXIdenotethenumber of outcomesthatoccurexactly j times,
j0,1,.ComputeE[X;l,Var(X;). 2.17.LetXI,X2,,Xnbeindependent continuousrandomvariableswith common density function! Let XIt) denote the ith smallest of Xl,.. , Xn (a)Notethat inorder forX(I)to equal x,exactly i- 1 of theX's must belessthan x,onemustequalx.andtheother nimustallbe greater than xUsing this factargue that thedensity function of X(I)
isgivenby (b)X(I)willbelessthan xif,andonly if,howmany of theX's areless than x? (c)Use(b)toobtainanexpressionforP{Xlr)Sx} (d)Using(a)and(c)establishtheidentity for0~yS1 PROBLEMS 93. (e)Let S,denotethetime of theith event of the Poissonprocess {N(t). t;>O}.Find E[S,IN(t) =n l ~{ i> n 2.18.LetV(llI,V(n)denotethe order statistics of a set of n
uniform(0,1) random variablesShow that given V(n)= y, V(I),,V(n-I)aredistributed asthe order statistics of a set of n- 1 uniform (0, y)random variables 2.19.Busloadsof customersarriveataninfiniteserverqueueataPoisson rate A Let Gdenote the service distribution. A bus contains j customers with probability a"j = 1,.Let X(t) denote the number of
customers that havebeenservedbytime t (a)E[X(t)]=?, (b)IsX(t)Poissondistributed? 2.20.Suppose that each event of aPoisson process with rate A isclassifiedas being either of type1,2,,kIf the event occursat s,then,indepen-dentlyof allelse,itisclassifiedastypeiwithprobabilityP,(s),i=1, .. ,k,~ ~P,(s)= 1.LetN,(t)denotethenumberoftypeiarrivalsin
[0,t]ShowthattheN,(t).i= 1,,kareindependentandN,(t)is PoissondistributedwithmeanA J ~P,(s)ds 2.21.Individuals enter the system inaccordance with a Poisson process having rateAEacharrivalindependently makesitswaythroughthe statesof the systemLet a,(s) denote the probability that an individualisinstate iatime s afteritarrivedLet N,(t)denotethe
number of individuals in state iattimet.ShowthattheN,(t),i~1,areindependentandN,(t) isPoissonwithmeanequalto AE[amount of timeanindividualisinstateiduringitsfirsttunitsin the system] 2.22.Suppose cars enter a one-way infinitehighwayata PoissonrateAThe ith car to enter chooses a velocity V,and travels at this velocityAssume that the V,'s are
independent positive random variables having a common distributionF.Derivethedistributionof thenumberof carsthatare located intheinterval (a,h) at time tAssume that no time islostwhen one car overtakesanother car 2.23.Forthemodelof Example 2.3(C),find (a)Var[D(t)]. (b)Cov[D(t),D(t+ s)] 94THEPOISSONPROCESS 2.24.Supposethatcarsenter a
one-wayhighway of lengthLinaccordance witha Poissonprocess withrate AEach car travelsat a constant speed thatisrandomlydetermined,independentlyfromcartocar,fromthe distributionF.Whenafastercar encountersaslowerone,itpassesit withnolossof timeSupposethatacar entersthehighwayattimet.
Showthatast-+00thespeed of thecarthatminimizestheexpected number of encounters with other cars, where we sayan encounter occurs whena cariseitherpassedby or passesanother car,isthemedianof thedistributionG 2.25.SupposethateventsoccurinaccordancewithaPoissonprocesswith rateA,andthatanevent occurringat times,independentof thepast,
contributesarandomamounthavingdistributionFHS:>O.Showthat W,the sum of all contributions bytime t,is a compound Poisson random N variableThat is, showthatWhas the samedistribution as~x"where 1",1 theX,areindependentandidenticallydistributedrandomvariables andareindependentofN,aPoissonrandomvariableIdentifythe distribution of
theX,andthemean of N 2.26.Compute theconditional distribution of S" Sh. , Sngiventhat Sn=t 2.27.Computethemoment generatingfunctionof D(/)inExample23(C). 2.28.ProveLemma 2.3 3 2.29.Completetheproof thatfora nonhomogeneousPoisson process N(t+ s)- N(t)isPoissonwithmeanmet+ s)- met) 2.30.LetT" Tz, denotetheinterarrival times of events
of a nonhomoge-neousPoissonprocess having intensityfunctionA(t) (a)AretheTIindependent? (b)Arethe~identicallydistributed? (c)Findthedistribution of T, (d)FindthedistributionofT2 2.31.ConsideranonhomogeneousPoissonprocess{N(t),t2:O},where A(t)> 0 forallt.Let N*(t)=N(m-I(t)) Showthat {N*(t),t2:O}isaPoissonprocess withrateA1 PROBLEMS95
2.32.(a)Let {N(t),t :>O}beanonhomogeneous Poisson process withmean valuefunctionm(t)GivenN(t)=n,showthattheunorderedset ofarrivaltimeshasthesamedistributionasnindependentand identically distributed random variables having distribution function {m(x) F(x)=x :5,t x> t (b)Supposethat workers incuraccidentsinaccordance withanonho-
mogeneous Poisson process with mean value function m(t). Suppose further that each injured person is out of work for a random amount of timehavingdistributionF.LetX(t)bethenumberof workers whoareout of workattimetComputeE[X(t)]andVar(X(t)). 2.33.Atwo-dimensionalPoissonisaprocessof eventsintheplane
suchthat(i)foranyregionofareaA,thenumberof eventsinAis PoissondistributedwithmeanAA,and(ii)thenumbersofeventsin nonoverlappingregionsareindependent.Considerafixedpoint,and letXdenotethedistancefromthatpointtoitsnearestevent,where distanceismeasuredintheusualEuclidean mannerShowthat (a)P{X >t}=e-AlI,2. fubeleravad.pdf
(b)E[X]=1/(20). Let R"i1 denote the distancefromanarbitrary point totheith closest eventtoitShow that,withRo= 0, (c)1TRf- 1TRf-I'i1 areindependent exponentialrandomvariables, each withrateA. 2.34.Repeat Problem 2 25 when the events occur according to a nonhomoge-neousPoissonprocess withintensity A(t),t0 2.35.Let
{N(t),tO}beanonhomogeneousPoissonprocesswithintensity functionA(t), t0However, suppose one starts observing theprocess atarandomtimeThavingdistnbutionfunctionF.Let N*(t)= N( T+ t)- N( T)denote the number of events that occur inthe first t timeunits of observation (a)Doestheprocess{N*(t),t:>O}possessindependent increments? I (b)Repeat(a)
when{N(t),tO}isaPoissonprocess. 2.36.LetCdenotethenumberof customersservedinanMIG/1busype-riod.Find (a)E[C]. (b)Var(C) 96THEPOISSONPROCESS N(t) 2.37.Let {X(t), t2:O}be a compound Poisson process withX(t)== and 1=1 suppose that the XI can only assumea finite set of possible values. Argue that, fortlarge,thedistributionof
X(t)isapproximately normal N(I) 2.38.Let {X(t), t2: O}be a compound Poisson process with X(t) and 1=1 supposethatA=1andP{X1 ==j}==JIIO,j=1,2,3,4.Calculate P{X(4)20}, 2.39.Compute Cov(X(s),X(t))fora compound Poisson process 2.40.Give an example of a counting process {N(t), t:> O}that is not a Poisson
processbutwhichhasthepropertythatconditionalonN(t)=nthe firstn event timesaredistributed asthe order statisticsfroma of n independentuniform(0,t)randomvariables. 2.41.Fora conditionalPoissonprocess. (a)ExplainwhyaconditionalPoissonprocesshasstationarybutnot independent increments. (b)Computethe conditionaldistributionof
Agiven{N(s),0Sn~t. From(3.21)weobtain (322)P{N(t) =n} =P{N(t) ~n} - P{N(t) ~n+ 1} =P{Sn- I}(by (3.3.7)) , =P{S"- rilL > I - rilL} (Tv"(Tv" =P"I>_ Y1 +Y(T {s- r IL() -112} (Tv"vt,;. 109 Now,bythecentrallinut (S,- ',IL)I(Tv" convergestoanormalrandom vanablehavingmean0andvariancel'as I(andthusr,)approaches00Also,since ( Y(T) -112 -Y1+-- --Y
vt,;. weseethat and since f'"2IY2 e-X 12dx =e-X 12dx, -ytheresultfollows Remarks (i)There isa slight difficultyin theaboveargument since "should bean integerforustousetherelationship(33.7).It isnot,however,too difficultto maketheaboveargumentrigorous. (ii)Theorem3.3.5statesthatN(t)isasymptoticallynormalwithmean tIp.and variancetu2/p.3.
3.4THEKEYRENEWALTHEOREM ANDApPLICATIONS AnonnegativerandomvariableXissaidtobelatticeifthereexistsd2!0 suchthat :L:=oP{X =nd}=1.That is,Xislattice if it onlytakes on integral 110RENEWAL THEORY multiplesof somenonnegativenumber dThelargestd having thisproperty issaidto betheperiod of XIf XislatticeandF isthedistribution
function of X,then wesaythatF islattice. Weshallstatewithoutproof thefollowingtheorem THEOREM3.4.1 (Blackwell's Theorem). (i)If F isnot lattice,then m(t + a)- m(t)- alii-ast _00 foralla~0 (ii)If F islatticewithperiod d,then E[number of renewalsat nd]- dlli-asn_00 ThusBlackwell'stheorem statesthat ifF isnotlattice,then theexpected number of renewalsin
an interval of length a,farfromtheorigin, isapproxi-matelyalp.Thisisquiteintuitive,foraswego furtherawayfromtheorigin itwouldseemthattheinitialeffectswearawayandthus (3.4.1)g(a) ==lim[m(t+ a)- m(t)J I_a: shouldexistHowever,iftheabovelimitdoesinfactexist,then,asasimple consequenceoftheelementaryrenewaltheorem,it mustequalalp..To see
thisnotefirstthat g(a+ h)=lim[m(t + a + h)- m(t)] I_a: =lim[m(t + a + h)- m(t + a)+ m(t + a)- m(t)J 1-'" = g(h) + g(a) However,theonly(increasing)solution of g(a+ h)=g(a)+ g(h)is g(a)= ca,a>O THE KEYRENEWAL THEOREMANDAPPLICATIONS forsomeconstant c.To showthat c=1//Ldefine Then implyingthat or Xl=m(l) - m(O) X2=m(2) - m(1) Xn= m(n)
- m(n - 1) 1.Xl+.. + Xn 1m=C n-'"n lim m(n) =c n-'"n Hence,bytheelementary renewaltheorem,c=1//L. 111 WhenF islatticewithperiodd,then thelimitin(34 1)cannot existFor nowrenewalscan only occur at integral multiples of d andthus the expected numberof renewalsinanintervalfarfromtheoriginwouldclearlydepend not on the intervals' length per
sebut rather on how many points of theform nd, n:> 0,itcontains.Thus in thelatticecasethe'relevant limit isthat of the expected number of renewals at nd and, again, if limn_",E [number of renewals atnd J exists,then bytheelementary renewaltheoremitmust equaldl/L.If interarrivalsarealwayspositive,thenpart(ii)of Blackwell's theorem states
that,inthelatticecase, limP{renewal at nd} =!!.. n ~ O C/L Lethbeafunctiondefinedon[0,00 JForanya>0letmn(a)bethe supremum,andmn(a)theinfinumof h(t)overtheinterval(n- l)a- 0and " '" lima L mn(a) 0-.0n=1 lim a L mll(a) 0_0n=l-Asufficient condition forhto be directlyRiemann integrable isthat. " (i)h(t);;:0forallt;;:0, (ii)h(t) isnonincreasing, (iii)f; h(t)dt
x} ~F{x) Thatis,forany xit ismorelikelythatthelengthoftheintervalcontaining thepointtisgreaterthanxthanit isthatanordinaryrenewalintervalis greater than x. This result, whichat first glance may seem surprising, isknown astheinspectionparadox. Wewillnowusealternating renewalprocesstheory to obtain the limiting distributionofX N(r)+1Againletanon-
offcyclecorrespondtoarenewal interval,andsaythattheontimeinthecycleisthetotalcycletimeifthat time isgreater than xand is zero otherwise. That is,the system is either totally onduringacycle(iftherenewalintervalisgreaterthanx)ortotallyoff otherwise.Then P{XN(,)+I> x} = P{length of renewal interval containing t > x} =P{on at time t}. Thusby Theorem
3.4 4,providedF isnotlattice,weobtain IP{X} - E[on time in cycle] 1mN(I)+I> X- ----"------=------'-'-'"p. =E[XIX > xlF{x)/p. =r y dF{y)/p., 118RENEWAL THEORY or,equivalently, (3.4.2)limP{XN(r)+.:s;x}= JXy dF(y)/p.. ~ ~0 RemarkTobetterunderstandtheinspectionparadox,reasonasfollows: Since the line is covered by renewal intervals, isitnot more likely
that a larger interval-as opposed to a shorter one-covers the point t?In fact,in the limit (as t--+00)it isexactly true that an interval of length yis ytimesmorelikely tocover tthan one of length1.For ifthiswerethecase, then thedensityof theinterval containing thepoint t,callit g,would be g(y)= ydF(y)/c (since dF(y)istheprobabilitythatanarbitraryintervalisof
lengthyandylcthe conditionalprobability thatitcontains thepoint).Butby(3.4.2)weseethat thisisindeedthe limitingdensity. For another illustration of the variedusesof alternating renewalprocesses considerthe followingexample. EXAMPLE3.4(A)An InventoryExample.Supposethatcustomers arriveata store,which sellsa singletypeof commodity,inaccor-
dance with a renewal process having nonlattice interamval distribu-tionF.Theamountsdesiredbythecustomersareassumedtobe independentwithacommondistributionG.Thestoreusesthe following(s,S)ordering policy' If the inventory levelafter serving acustomer isbelow s,then an order isplacedto bring itupto S
OtherwisenoorderisplacedThusiftheinventorylevelafter servingacustomerisx,then theamountorderedis S - xif x < s, oifx;;?s. Theorder isassumedto beinstantaneously filled.
LetX(t)denotetheinventorylevelattimet,andsupposewe desirelim( ....
~P{X(t)>x}.If X(O)=S,thenifwesaythatthe system is"on" whenever the inventory level isat least xand "off" otherwise, the aboveisjust an alternating renewal process. Hence, fromTheorem 3.4.4, lim P{X(t) ;;?x} =E [amount of time. the inventory;;? x in a cycle]. (.... ~E[tlme of a cycle] Now if weletY.,Y2,,denote the successive customer demands andlet
(3.4.3)Nx =min{n:Y.+ ... + Yn > S - x}, THE KEYRENEWAL THEOREMANDAPPLICATIONS thenitistheNx customerinthecyclethatcausestheinventory levelto fallbelow x,and itistheNscustomerthat ends the cycle. Hence if x"i~1,denote the interarrival times of customers, then Nl amount of "on" time in cycle =Lx" 1=1 N, time of cycle =Lx,.
1=1 Assuming that the interarrival times are independent of the succes-sivedemands,wethushaveupontakingexpectations (344) [ N- ] 1 ELX, rP{X():>} =1=1= E[Nx] L ~tx[N,]E [Ns] ELX, 1= 1 However, as the Y" i:> 1, are independent and identically distrib-uted,itfollowsfrom(3.4.3)thatwecaninterpretNx - 1asthe numberofrenewalsbytimeS-
xofarenewalprocesswith interarrivaltimeY;,i~1.Hence, E[Nx]=mG(S - x) + 1, E[N,]= mG(S - s)+ 1, whereGisthecustomerdemanddistnbutionand '" ma(t) = L Gn(t). n=1 Hence,from(3.4.4),wearnve at :>_1 + mG(S- x) 11mP{X(t) - x} - 1(S)' 1-+'"+ mG- s s ~ x ~ S . 3.4.2Limiting Mean Excess and the Expansion of m(t) 119
Letusstartbycomputingthemeanexcessofanonlatticerenewalprocess. Conditioning on SN(/)yields(byLemma3.4.3) E[Y(t)]=E[Y(t)ISN(r)=O]F(t) + f ~E[Y(t)ISN(r)=y]F(t - y) dm(y). 120RENEWAL THEORY ---------x--------------x--------------------x----Now, y ---Y(t)---Figure 3.4.1.SN(I)=Y;X::renewal. E [Y(t)ISN(r)=0]=E[X - tlX > t], E[Y(t)ISN(t)=y]=E[X - (t - y)IX > t -
y]. wheretheabovefollowssinceSNIt)=ymeansthatthereisarenewalaty andthenextinterarrivaltime-call itX-is greaterthant- Y(seeFigure 3.4.1).Hence, E[Y(t)]=E[X - tlX> t]P(t) + J: E[X - (t - y)IX> t - y]F(t - y) dm(y). Nowitcan be shown thatthefunctionh(t)=E [X - tlX >t] F(t)isdirectly Riemann integrable provided E[X2] t] F(t) dtl IL =J; J ~(x- t) dF(x) dtllL
=J; J: (x- t) dt dF(x)11L =J; x2 dF(x )/21L =E[X2]/21L. Thus wehaveproven thefollowing. PROPosnION3.4.6 (by interchange of order of integration) If theinterarrivaldistributionisnonlatticeandE[X2]1 THEOREM3.4.8 If Xo=1,m>1,andFnot lattice,then whereaistheuniquenumber suchthat J'"1 e-a'dF(x)=-om 122RENEWAL THEORY
ProalByconditioningonT ..thelifetimeof theinitialorganism,weobtain M(t)= t IX(t)ITI= s]dF(s) However, (345) ~ '{I IX(t) ITI= s]= mM(t - s) ifs> t ifs st Toseewhy(345)istrue,supposethatT]=s,sst, andsupposefurtherthatthe organismhasjoffspringThenthenumber of organismsaliveattmaybewnttenas Y]+.+Y"whereY,isthenumber of
descendants(includinghimself)of theith offspring that are alive at tClearly,Y1,,Y,are independent with the same distribu-tionasX(t- s)Thus,(YI++Y,)=jM(t- s),and(345) followsbytaking theexpectation(withrespectto j) of jM(t- s) Thus,fromtheaboveweobtain (346)M(t)= F(t)+ m t M(t- s) dF(s). Now,letadenotetheuniquepositivenumber suchthat
anddefinethedistributionGby G(s)= mt eaydF(y),Oss< 00 Uponmultiplyingbothsidesof(346)byea'andusingthefactthatdG(s) me-a,dF(s),weobtain (347)e-a'M(t)= ea'F(t)+ J: e-a('-O.If a renewaldoesnotOccur att,thenthedistributionofthetimewemustwaituntilthefirstobserved renewalwillnotbethe sameastheremaininginterarrivaldistributions. Formally, let
{Xn,n=1,2, ...} be a sequence of independent nonnegative randomvariableswithXIhavingdistributionG,andXnhavingdistribution F,n>1LetSo=0, ,Sn=L ~Xi,n::=1,anddefine ND(t)= sup{n:Sn:St}. Definition Thestochasticprocess {No(t),t2:O}iscalledageneral or adelayed renewal process WhenG=F,wehave,of course,anordinary renewalprocess.Asin the
ordinarycase,wehave Let P{Nf)(t) =n} =P{Sn:S t}- P{Sn+1ErNB])' EXAMPLE3.5(a)Asystemconsistsof nindependentcomponents, each of whichacts likean exponentialalternating renewal process More specifically, component i, i =1, ... ,n, isup for an exponential timewithmeanA,andthen goesdown,in whichstateitremains
foranexponentialtimewithmeanJL"beforegoingbackupand startinganew. Supposethatthesystemissaidto befunctionalatanytimeif atleastonecomponent isupatthattime(sucha systemiscalled parallel).If weletN(t)denotethenumberoftimesthesystem becomesnonfunctional(that is,breaks down)in[0,t],then {N(t), t;>0)isadelayedrenewalprocess
Supposewewantto computethemeantimebetweensystem breakdowns.Todosoletusfirstlookattheprobabilityofa breakdown in(t,t+ h)forlargetandsmallh.Nowonewayfor a breakdown to occur in (t,t+ h) isto haveexactly1 component upat time t and all others down, and then have that component fail. Sinceallother possibilities taken together clearly
haveprobability a (h),weseethat limp{breakdOWnin(t,t+h)}=:t{A,ITJLI }lh+O(h). I_eo,;1A,+ JL,I""Al+ JLI A, But by Blackwell's theorem the above isjust h times the reciprocal of themean timebetween breakdowns. and so upon letting h.......0 weobtain ( nJLn1 )-1 Ertime between breakdowns]=IT)L-1=1Al+ JLI 1-1JL, Astheexpectedlengthof abreakdown
periodis( ~ ; = 11/JLlt\ wecancomputetheaveragelengthofanup(orfunctional) DELAYEDRENEWALPROCESSES periodfrom ( n1 )-1 E[length of up period]=E[time between breakdowns] - 2.:-1IL, Asa checkof theabove,note that the system may be regarded asa delayed alternating renewal process whose limiting probability of beingdownis n lim
P{system isdown at t}=nILl. 1-'"I ~ IAI+ ILl We can nowverify that theaboveisindeed equal to theexpected lengthof adown period dividedbytheexpected timelengthof a cycle(or timebetween breakdowns). ExAMPLE3.5(c)Considertwocoins,andsupposethateachtime coiniisflippeditlandsontailswithsomeunknownprobability p" i=1,2.Our objectiveisto
continuallyflipamongthesecoins so asto make the long-run proportion of tails equal to min(PI' P2)' Thefollowingstrategy,havinga very smallmemoryrequirement, willaccomplishthisobjectiveStartbyflippingcoin1untilatail occurs,at which point switch to coin 2 and flipituntila tail occurs.
Saythatcycle1 endsatthispointNowflipcoin1untiltwotails ina rowoccur,andthen switchanddo the same withcoin 2.Say that cycle 2 ends at this point. In general, when cycle n ends, return to coin1andflipituntiln+ 1 tailsinarowoccurandthenflip coin 2untilthisoccurs,whichends cyclen+ 1. To showthatthe preceding policy meets our objective,let P =
max(p), P2)and ap= min(p I,P2),where a0, P{Bm ~sGm forinfinitelymany m}= O. U9 130RENEWAL THEORY ProofWewillshowthat ., L P{Bm ~eGm} 1], E[RN(I)-dSN(,)=:s]E[Rnl Xn> t - s], and so (361) Now,let and notethatsince EIRd =f; EfiRd IX,x] dF(x) 1} isa regenerative process thatregenerates whenever Xntakes value O.To obtain some of the
properties of this regenerative process, we will firststudy the symmetricrandomwalk{Zn,n2O}. Let Un= P{Z2n= O} andnote that (3.7.1) 2n-1 Un=2nUn-I' NowletusrecallfromtheresultsofExamplel.S(E)ofChapter1(the ballotproblem example)the expression forthe probability thatthefirstvisit to 0inthe symmetricrandom walkoccursattime2n.Namely,
(3.72) e)mh P{ZI=1= 0, Z2=1= 0, ... ,Z2n-1=1= 0, Z2n= O}=2n_1 == 2n-1' Wewillneedthefollowinglemma,whichstatesthatun-the probability thatthesymmetncrandomwalkisat0attime2n-isalsoequaltothe probability that therandomwalkdoesnot hit0bytime2n. 144RENEW AL THEORy Lemma3.7.3 ProofFrom(37 2)weseethat Hencewemustshowthat
(37.3) which we willdobyinduction on n. When n= 1,the aboveidentityholds since UI=! Soassume(373) for n- 1Now nn-1 1- 2:_U_k_= 1- 2:_U_k ___U_ n _ k ~ 12k - 1k=12k - 12n- 1 Un = Un_I- 2n- 1 (by the induction hypothesis) = Un(by (371. Thustheproofiscomplete Since u.~( ~ )(i)'", itfollowsuponusinganapproximationduetoStirling-whichstatesthat
n!- nn+1J2e-nV2ii-that (2n)2n+IJ2e-2nV2ii1 Un- n2n+1e-2n(21T)22n = V;;' and so Un~ Oas n ~00. Thus from Lemma 3.7.3 we see that, with probability 1, thesymmetricrandomwalkwillreturntotheorigin. tucks medicated pads instructions
REGENERATIVEPROCESSES145 Thenextpropositiongivesthedistributionof thetimeof thelastvisitto o uptoandincluding time2n PROPOSITION3.7.4 For k= 0,1,, n, Proof P{Z2.=0, ZU+I-:/=0,, Z2n-:/=O} = P{Z2.= 0}P{Z2HI-:/=0, = U.Un_. where we have used Lemma 3 7 3 to evaluate the second term on the right inthe above We are now ready for our
major result, which is that if we plot the symmetric randomwalk(startingwithZo=0)byconnectingZkandZk+1byastraight line(see Figure 3.7.1), then theprobability that upto time 2nthe processhas beenpositivefor2ktimeunitsandnegativefor2n- 2ktimeunitsisthe sameastheprobabilitygiveninProposition3.7.4.(Forthesamplepath presentedinFigure371, of
thefirsteighttimeunitstherandomwalkwas positiveforsixandnegativefortwo) z" Figure 3.7. J.Asample path for the randomwalk. 146RENEWAL THEORY THEOREM3.7.5 LetEk ndenotetheeventthatbytime2nthesymmetricrandomwalkwil( bepositive for2ktimeunitsand negative for2n- 2ktimeunits,and letbkn =P(Ekn)Then (374) ProofTheproof
isbyinductiononnSince Uo= 1, itfollowsthat(374) isvalid whenn=1Soassumethatbkm =UkUm-kforallvalues of msuchthat m 2n}P{T > 2n}. ,=1 NowgiventhatT=2r,itisequallylikelythattherandomwalkhasalwaysbeen positiveor alwaysnegativein(0,2r)anditisat 0at 2rHence, P{EnniT>2n}=t andso n bn n=~L bn -, n_,P{T =2r}+ !P{T > 2n} ,=1 n = ~L un-
,P{T= 2r}+ ~ P { T >2n}, ,=1 wherethelastequality,bn-,n-,= Un-,Un,followsfromtheinductionhypothesisNow, nn L un-,P{T= 2r}=L P{Zz,,-2,= O}P{T= 2r} ,=1,=1 n =L P{Zz"= OiT= 2r}P{T= 2r} ,==1 ==Un, and so (by Lemma 3 7 3) REGENERATIVEPROCESSES147 Hence(374) isvalidfork=nand,infact,bysymmetry,alsofork=0The proof that (374)
isvalid for 0< k< n followsinasimilar fashionAgain,conditioning on T yields /I bkll = L P{Ehl T2r}P{T = 2r} ,-1 NowgiventhatT2r,itisequallylikelythattherandomwalkhaseither always beenpositiveoralwaysnegativein(0,2r)HenceinorderforEk ntooccur,the continuation fromtime2r to2nwouldneed 2k- 2r positiveunitsinthe former case and
2kinthelatterHence, /In bkn= L bk-rll-rP{T + t L bkn_rP{T = 2r} r-I /III = L uk_rP{T2r} + L U,,-r-kP{T = 2r}, r=l wherethelastequalityfollowsfromtheinductionhypothesisAs /I L uk-rP{T = 2r} ::::Uk, II L U"r-kP{T =2r} =UII _.. ,=1 weseethat which completes theproof Theprobability distributiongivenby Theorem 37.5, namely,
is called the discrete arcsine distribution. We call it such since for large k and n we have, by Stirling's approximation, that
b_{k,n} = u_k u_{n-k} ~ 1/(pi sqrt(k(n - k))).
Hence for any x, 0 < x < 1,
Sum_{k <= xn} b_{k,n} ~ (1/pi) Integral_0^{xn} dk/sqrt(k(n - k)) = (2/pi) arcsin(sqrt(x)).

3.8 STATIONARY POINT PROCESSES

Let f(t) = P{N(t) > 0} and note that f(t) is nonnegative and nondecreasing. Also
f(s + t) = P{N(s + t) - N(s) > 0 or N(s) > 0} <= P{N(s + t) - N(s) > 0} + P{N(s) > 0} =
f(t)+ f(s). f(t)::;2f(tI2) f(t)::;nf(tln)foralln= 1,2,.. 150 Thus,letting a be suchthat [(a)> 0,wehave (382) [(a) be suchthat [(s)ls> A - 6.Now,foranytE(0, s) thereisaninteger nsuchthat ss nn - 1 From themono tonicity of let) and from(3.82), weobtain that for allt inthisinterval (383) let)[(sin)_n- 1 [(sin)n- 1 [(s) -=:::------=:::.----tsl(n - 11nsinns Hence,
Since6isarbitrary,and sincen00ast 0,itfollowsthatIim,....o[(t)lt= A NowassumeA=00.Inthiscase,fixanylargeA>andchoosessuchthat [(s)ls> AThen,from(383), it fOIlOWSthatforalltE(0,s) let) =:::n- 1 [(s) > n- 1 A, tnsn whichimplies [(t)lt==00,and theproof iscomplete ExAMPLE3.8(A)For theequilibnum renewalprocess P{N(t) >O}= Fe(t) = F(y) dyllL
Hence,usingL'hospital'srule, A =limP{N(t) > O}=limF(t)=1.. HOtHOILIL STATIONARYPOINTPROCESSES Thus,fortheequilibriumrenewalprocess,A istherateofthe renewalprocess. Forany stationary pointprocess{N(t),t~O},wehavethat E[N(t + s)]==E[N(t + s)- N(s)]+ E[N(s)] = E[N(t)] + E[N(s)] implyingthat,forsome constant c, E[N(t)]=ct 151
WhatistherelationshipbetweencandA?(Inthecaseof theequilibrium renewalprocess,itfollowsfromExample 3 8(A) and Theorem 3.5.2 thatA = c=111-')In generalwenotethat since c =i nP{N(t) = n} n= It ~iP{N(t)n} n=1t _P{N(t) > O} t it followsthat c ~A.In order to determine when c =A,weneed the following concept.Astationary pointprocessissaidtobe
regular or orderlyif (38.4)P{N(t) ::>2} = o(t) It should be noted that,fora stationarypoint process,(3.8.4)impliesthat the probability that two or more events will occur simultaneously at any point isO.To seethis,dividetheinterval[0,1]intonequal parts.The probability of a simultaneous occurrence of eventsislessthantheprobability of twoor
moreeventsinanyof theintervals j=0, 1,,n1, and thusthisprobability isbounded bynP{N(lIn) ~2},which by(3.8.4) goes to zero asn ~00.When c k}, B.;fortbe even+ C: 1) - NW"'2}.
n-l Bn= U Bn!' }:O C fortbeevent {NC: 1) - NW'" 1.N(1)-NC: 1) = k} LetB> 0andapositiveinteger mbegivenFromtheassumedregularityof the process,itfollowsthat B P(Bn})< n(m + 1)' j=O,I,.,n-I forallsufficientlylargenHence, Therefore, (385) whereBnisthecomplement of BnHowever,alittlethoughtrevealsthat n-l A.Iin = U c,lin, }:O andhence n-1 P(A,Bn)
SL P(Cn,}), }:O PROBLEMS whichtogether with(385)impliesthat (386) mn-Im L P(Ak )::; L L P( Cnk)+ 8 k=OrOk-O ==nP{N(lIn) ::: 1}+ 8 ::; ,\ + 28 for allnsufficientlylargeNowsince(386)istruefor allm,itfollowsthat 00 L P(Ad ::; ,\ + 28 k=O Hence, 0000 c ==[N(1)l= L P{N(1) > k} ==L P(Ak)::;,\ + 28 and theresultisobtained
as delta is arbitrary and it is already known that c >= lambda.

PROBLEMS

3.1. Is it true that
(a) N(t) < n if and only if S_n > t?
(b) N(t) <= n if and only if S_n >= t?
(c) N(t) > n if and only if S_n < t?

EXAMPLE 4.1(A) The M/G/1 Queue. Suppose that customers arrive at a single-server service station in accordance with a Poisson process having rate lambda, and that the successive service times are independent with distribution G. Let X_n denote the number of customers left behind by the nth departure. When X_n > 0, the nth departure leaves behind X_n customers, of which one enters service and the other X_n - 1 wait in line. Hence, at the next departure the system will contain the X_n - 1 customers that were in line in addition to any arrivals during the service time of the (n + 1)st customer. Since a similar argument holds when X_n = 0, we see that

(4.1.2)  X_{n+1} = X_n - 1 + Y_{n+1}  if X_n > 0,
         X_{n+1} = Y_{n+1}            if X_n = 0,

where Y_{n+1} denotes the number of arrivals during the (n + 1)st service time. Since Y_n, n >= 1, represent the number of arrivals in nonoverlapping service intervals, it follows, the arrival process being a Poisson process, that they are independent and

(4.1.3)  P{Y_n = j} = Integral_0^infinity e^{-lambda x} (lambda x)^j / j! dG(x),  j = 0, 1, ....

From (4.1.2) and (4.1.3) it follows that {X_n, n = 1, 2, ...} is a Markov chain with transition probabilities given by

P_{0j} = Integral_0^infinity e^{-lambda x} (lambda x)^j / j! dG(x),  j >= 0,
P_{ij} = Integral_0^infinity e^{-lambda x} (lambda x)^{j-i+1} / (j - i + 1)! dG(x),  j >= i - 1, i >= 1,
P_{ij} = 0  otherwise.

EXAMPLE 4.1(B) The G/M/1 Queue. Suppose that customers arrive at a single-server service center in accordance with an
arbitrary renewal process having interarrival distribution G. Suppose further that the servicedistributionisexponential withrate p.. If weletXIIdenotethenumber of customersinthesystemas seen by the nth arrival, itiseasyto seethat the process {X"' n2:: I}isaMarkovchainTo computethetransitionprobabilitiesP"
for this Markov chain, let us first note that, as long as there are customers to be served, the number of services in any length of time t is a Poisson random variable with mean mu t. This is true since the time between successive services is exponential and, as we know, this implies that the number of services constitutes a Poisson process. Therefore,

P_{i, i+1-j} = Integral_0^infinity e^{-mu t} (mu t)^j / j! dG(t),  j = 0, 1, ..., i,

which follows since if an arrival finds i in the system, then the next arrival will find i + 1 minus the number served, and the probability that j will be served is easily seen (by conditioning on the time between the successive arrivals) to equal the right-hand side of the above.

The formula for P_{i0} is a little different (it is the probability that at least i + 1 Poisson events occur in a random length of time having distribution G) and thus is given by

P_{i0} = Integral_0^infinity Sum_{k=i+1}^infinity e^{-mu t} (mu t)^k / k! dG(t),  i >= 0.
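The transition probabilities of these embedded chains involve nothing more than mixing Poisson probabilities over the distribution G, so they are easy to evaluate numerically. The sketch below (not from the text; the uniform interarrival distribution and the rate mu = 1 are assumed purely for illustration) builds one row of the G/M/1 matrix of Example 4.1(B) by simple quadrature.

    import math
    import numpy as np

    def gm1_row(i, mu, g_pdf, t_max, n_grid=4000):
        # Row i of the embedded G/M/1 chain: an arrival finds i customers present,
        # services are exponential with rate mu, interarrival density g_pdf on [0, t_max].
        t = np.linspace(1e-9, t_max, n_grid)
        g = g_pdf(t)
        row = np.zeros(i + 2)                      # possible next states 0, 1, ..., i + 1
        for j in range(i + 1):                     # j customers served before the next arrival
            integrand = np.exp(-mu * t) * (mu * t) ** j / math.factorial(j) * g
            row[i + 1 - j] = np.trapz(integrand, t)
        row[0] = 1.0 - row[1:].sum()               # at least i + 1 potential services occur
        return row

    uniform_pdf = lambda t: np.where((t > 0) & (t < 2), 0.5, 0.0)   # assumed G: uniform on (0, 2)
    print(gm1_row(3, mu=1.0, g_pdf=uniform_pdf, t_max=2.0))         # the row sums to 1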
exploitthe lack of memory of the exponential distributionThis is often a fruitful approach forprocessesinwhichtheexponentialdistributionispresent EXAMPLE4.1(c)Sums ofIndependent, Identically Distributed Ran-domVariables.TheGeneral RandomWalk.LetX"i>1,be independent andidenticallydistributedwith P{X,=j}= a"J0,::tl, If welet " So0andS"2: X"
,-I then {SlItn>O}isaMarkovchainforwhich 166MARKOVCHAINS {Sn'n;>O}iscalledthegeneralrandomwalkandwillbe studied inChapter 7. ExAMPLE4.1(D)The Absolute Value of the Simple Random Walk. Therandomwalk{Sn,n~I},whereSn= ~ ~X"issaidtobea simple randomwalkif forsome p,0< Ppmpn> 0 'k- L.Jrr,k - 'IIk ,=0 Similarly,wemay
show there exists an s for which P^s_{ki} > 0.

Two states that communicate are said to be in the same class, and by Proposition 4.2.1, any two classes are either disjoint or identical. We say that the Markov chain is irreducible if there is only one class, that is, if all states communicate with each other.
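Determining the classes of a finite chain is a pure reachability computation, so it is easy to automate. The sketch below (an illustration, not from the text) computes the communicating classes of a transition matrix and thereby tests irreducibility.

    import numpy as np

    def communicating_classes(P, tol=1e-12):
        # i and j are in the same class iff each is accessible from the other
        P = np.asarray(P, dtype=float)
        n = len(P)
        reach = P > tol
        for k in range(n):                           # Boolean transitive closure
            reach = reach | (reach[:, [k]] & reach[[k], :])
        np.fill_diagonal(reach, True)                # each state is accessible from itself
        classes, seen = [], set()
        for i in range(n):
            if i not in seen:
                cls = [j for j in range(n) if reach[i, j] and reach[j, i]]
                classes.append(cls)
                seen.update(cls)
        return classes

    P = [[0.5, 0.5, 0.0],
         [0.3, 0.7, 0.0],
         [0.2, 0.3, 0.5]]                            # assumed example matrix
    print(communicating_classes(P))                  # [[0, 1], [2]]: the chain is not irreducible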
Statei issaidtohaveperiodd if P ~ ,= whenever nisnot divisiblebyd and d isthe greatest integer withthisproperty.(If P ~ ,=foralln> 0, then definetheperiodofitobeinfinite)Astatewithperiod]issaidtobe aperiodicLet d(i)denotetheperiod of i.Wenow show thatperiodicity isa classproperty PROPOSITION4.2.2 If i ~j, thend(i)=dO) ProofLet mandnbesuchthatp
~P7,> 0,andsupposethatP;,> 0Then pn jm:>pn pm> 0 IJ- I''I pn j, jm:>pn P' pm> 0 II- I'"'I' wherethesecondinequalityfollows,forinstance,sincetheleft-handsiderepresents the probability that starting in jthechain willbebackin jafter n+ s+ mtransitions, whereasthe right-handsideisthe probability of the same event subjecttothe further restrictionthat the
chainisini both after nand n+ stransitionsHence, dO)divides bothn+mandn+s+m,thusn+s+m- (n+m)=s,wheneverP;,>O. Therefore,dO)dividesd(i)Asimilarargumentyieldsthatd(i)dividesdO),thus d(i)= dO) For any states i and j define t ~ 1tobethe probability that, starting in i,the firsttransitioninto joccursat time nFormally, t?1= 0, til =P{Xn= j, Xk =;6j,
k=],,n - ] IXo=i}. Let Then t'l denotes the probability of ever making atransitioninto state j,given that the process starts in i.(Note that fori=;6j, t,1ispositiveif,and only if,jis accessible from i.) State j is said to be recurrent if ~ I=], and transient otherwise. 170 PROPOSITION4.2.3 State]isrecurrentif.andonlyif. ., '" pn= 00 L../I CHAINS
ProofState]isrecurrent if,withprobability 1,aprocess starting at j willeventually returnHowever, by the Markovian property it follows that the process pr 0, p ~> 0Nowforanys~0 pm+n+r:>pm P' pn }/- JIII1/ andthus "pm+n+r 2: pm pn "pr= 00 L..J11 I''1L..JIt' ,r andtheresultfollowsfromProposition423
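The criterion is easy to probe numerically for the chain treated in the example that follows, a walk that moves up one step with probability p and down with probability 1 - p, since P^{2n}_{00} has the closed binomial form derived there. In the sketch below (not from the text) the partial sums keep growing when p = 1/2 and settle down to a finite limit when p = 0.6.

    def partial_sum_p00(p, n_terms):
        # running sum of P_{00}^{2n} = C(2n, n) (p(1-p))^n, built term by term
        total, term = 0.0, 1.0
        for n in range(1, n_terms + 1):
            term *= (2 * n) * (2 * n - 1) / (n * n) * p * (1 - p)
            total += term
        return total

    for p in (0.5, 0.6):
        print(p, [round(partial_sum_p00(p, m), 2) for m in (10, 100, 1000, 10000)])
    # p = 0.5: the partial sums grow without bound (divergence, hence recurrence)
    # p = 0.6: the partial sums approach a finite limit (convergence, hence transience)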
EXAMPLE 4.2(A) The Simple Random Walk. The Markov chain whose state space is the set of all integers and has transition probabilities
P_{i,i+1} = p = 1 - P_{i,i-1},  i = 0, +-1, ...,
where 0 < p < 1, is called the simple random walk. One interpretation of this process is that it represents the wanderings of a drunken man as he walks along a straight line. Another is that it represents the winnings of a gambler who on each play of the game either wins or loses one dollar.

Since all states clearly communicate it follows from Corollary 4.2.4 that they are either all transient or all recurrent. So let us consider state 0 and attempt to determine if Sum_{n=1}^infinity P^n_{00} is finite or infinite.

Since it is impossible to be even (using the gambling model interpretation) after an odd number of plays, we must, of course, have that
P^{2n+1}_{00} = 0,  n = 1, 2, ....
On the other hand, the gambler would be even after 2n trials if, and only if, he won n of these and lost n of these. As each play of the game results in a win with probability p and a loss with probability 1 - p, the desired probability is thus the binomial probability
P^{2n}_{00} = C(2n, n) p^n (1 - p)^n,  n = 1, 2, 3, ....
By using an approximation, due to Stirling, which asserts that n! ~ n^{n+1/2} e^{-n} sqrt(2 pi), where we say that a_n ~ b_n when lim_{n -> infinity} (a_n / b_n) = 1, we obtain
P^{2n}_{00} ~ (4p(1 - p))^n / sqrt(pi n).
Now it is easy to verify that if a_n ~ b_n, then Sum_n a_n < infinity if, and only if, Sum_n b_n < infinity. Hence Sum_n P^{2n}_{00} converges if, and only if, 4p(1 - p) < 1, and since 4p(1 - p) <= 1 with equality holding if, and only if, p = 1/2, the simple random walk is recurrent when p = 1/2 and transient otherwise.

Consider now the Markov chain of Example 4.1(A), the M/G/1 queue, and recall that a_j = Integral_0^infinity e^{-lambda t} (lambda t)^j / j! dG(t) is the probability of j arrivals during a service time, so that
P_{0j} = a_j,  j >= 0,
P_{ij} = a_{j-i+1},  j >= i - 1, i >= 1,
P_{ij} = 0,  j < i - 1.
Let rho = Sum_j j a_j. Since rho equals the mean number of arrivals during a service period, it follows, upon conditioning on the length of that
period,i,ha t p= '\[S], whereSisaservicetimehavingdistributionG Weshallnow show that theMarkovchainispositiverecurrent whenpO.Then, as iand jdonotcommunicate (since je R). P;,= 0 for allnHence if theprocess starts instate i,thereisa positiveprobabilityof atleast P"thattheprocesswillneverreturnto iThis contradicts thefactthat iisrecurrent,
and so P_{ij} = 0.

Let j be a given recurrent state and let T denote the set of all transient states. For i in T, we are often interested in computing f_{ij}, the probability of ever entering j given that the process starts in i. The following proposition, by conditioning on the state after the initial transition, yields a set of equations satisfied by the f_{ij}.

PROPOSITION 4.4.2
If j is recurrent, then the set of probabilities {f_{ij}, i in T} satisfies
f_{ij} = Sum_{k in T} P_{ik} f_{kj} + Sum_{k in R} P_{ik},  i in T,
where R denotes the set of states communicating with j.

Proof.
f_{ij} = P{N_j(infinity) > 0 | X_0 = i} = Sum_k P{N_j(infinity) > 0 | X_0 = i, X_1 = k} P{X_1 = k | X_0 = i}
= Sum_{k in T} f_{kj} P_{ik} + Sum_{k in R} P_{ik},
where we have used Corollary 4.2.5 in asserting that f_{kj} = 1 for k in R, and Proposition 4.4.1 in asserting that f_{kj} = 0 for k not in T, k not in R.
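For a finite chain the equations of Proposition 4.4.2 are just a linear system in the unknowns f_{ij}, i in T, and can be solved directly. The sketch below (not from the text) does this for the chain of the gambler's ruin example that follows, with assumed values p = 0.4 and N = 4.

    import numpy as np

    def hit_probabilities(P, transient, target_class):
        # Solve f = P_TT f + P_TR 1, the equations of Proposition 4.4.2
        P = np.asarray(P, dtype=float)
        T = list(transient)
        r = P[np.ix_(T, list(target_class))].sum(axis=1)
        A = np.eye(len(T)) - P[np.ix_(T, T)]
        return dict(zip(T, np.linalg.solve(A, r)))

    p, N = 0.4, 4                                     # assumed illustrative values
    P = np.zeros((N + 1, N + 1))
    P[0, 0] = P[N, N] = 1.0
    for i in range(1, N):
        P[i, i + 1] = p
        P[i, i - 1] = 1 - p
    print(hit_probabilities(P, transient=range(1, N), target_class=[N]))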
EXAMPLE 4.4(A) The Gambler's Ruin Problem. Consider a gambler who at each play of the game has probability p of winning 1 unit and probability q = 1 - p of losing 1 unit. Assuming successive plays of the game are independent, what is the probability that, starting with i units, the gambler's fortune will reach N before reaching 0?

If we let X_n denote the player's fortune at time n, then the process {X_n, n = 0, 1, 2, ...} is a Markov chain with transition probabilities
P_{00} = P_{NN} = 1,
P_{i,i+1} = p = 1 - P_{i,i-1},  i = 1, 2, ..., N - 1.
This Markov chain has three classes, namely {0}, {1, 2, ..., N - 1}, and {N}, the first and third class being recurrent and the second transient. Since each transient state is only visited finitely often, it follows that, after some finite amount of time, the gambler will either attain her goal of N or go broke.

Let f_i = f_{iN} denote the probability that, starting with i, 0 <= i <= N, the gambler's fortune will eventually reach N.

THEOREM 4.5.1
Consider a branching process with offspring distribution {P_j, j >= 0} and mean offspring number mu = Sum_j j P_j, and let pi_0 denote the probability that the population eventually dies out. Suppose that P_0 > 0 and P_0 + P_1 < 1. Then
(i) pi_0 is the smallest positive number satisfying
(4.5.1)  pi_0 = Sum_{j=0}^infinity pi_0^j P_j,
(ii) pi_0 = 1 if, and only if, mu <= 1.
Proof. To show that pi_0 is the smallest solution of (4.5.1), let pi >= 0 satisfy (4.5.1). We'll first show by induction that pi >= P{X_n = 0} for all n. Now
pi = Sum_j pi^j P_j >= pi^0 P_0 = P_0 = P{X_1 = 0},
and assume that pi >= P{X_n = 0}. Then
P{X_{n+1} = 0} = Sum_j P{X_{n+1} = 0 | X_1 = j} P_j = Sum_j (P{X_n = 0})^j P_j <= Sum_j pi^j P_j  (by the induction hypothesis)  = pi.
Hence, pi >= P{X_n = 0} for all n,
andletting n_00, 1T~lim P{X"O}P{population dies out} = 1To " To prove(ii)definethe generating function APPLICATIONS OF MARKOV CHAINS (1,1) / / / Pot--___~ / /45/ / / / / / / / Figure 4.5.1Figure 4.5.2 SincePo+PI"(S)= L j(j - l)sJ-ZPJ > 0 J ~ O (1,1) / 193 foralls E(0,1)Hence,c/>(s)isastrictlyconvexfunctionintheopeninterval(0,1)
Wenowdistinguishtwocases(Figures45 1 and452)InFigure4.51c/>(s)>s for allsE(0,1),andinFigure452,c/>(s)=s forsomes E(0,1).It isgeometncally clear that Figure 45 1 representsthe appropnatepicture whenC/>'(1):::;1.andFigure 452 isappropriatewhenc/>'(1)>1Thus,sincec/>(170)= 170,170= 1 if,andonlyif, c/>'(1):::;1Theresultfollows,sincec/>'(1)=L
~jPJ = IJ. 4.6ApPLICATIONSOFMARKOVCHAINS 4.6.1AMarkovChain Model of Algorithmic Efficiency Certainalgorithmsinoperationsresearchandcomputerscienceactinthe followingmannerthe objective isto determine the best of a set of N ordered elementsThe algorithm starts with one of the elements and then successively
movestoabetterelementuntilitreachesthebest.(Themostimportant exampleisprobablythesimplexalgorithmoflinearprogramming,which attempts to maximize a linear function subject to linear constraints and where an element correspondstoanextreme point of thefeasibilityregion.)If one looksatthealgorithm'sefficiencyfroma"worsecase"pointof view,then
examples can usually be constructed that require roughly N- 1 steps to reach theoptimalelementInthissection,wewillpresentasimpleprobabilistic modelforthe number of necessarysteps.Specifically. weconsider aMarkov chain that whentransitingfromany state isequallylikelyto enter anyof the better ones 194 ConsideraMarkovchainforwhichPI I= 1 and
P_{ij} = 1/(i - 1),  j = 1, ..., i - 1, i > 1,

and let T_i denote the number of transitions to go from state i to state 1. A recursive formula for E[T_i] can be obtained by conditioning on the initial transition:

(4.6.1)  E[T_i] = 1 + (1/(i - 1)) Sum_{j=1}^{i-1} E[T_j].

Starting with E[T_1] = 0, we successively see that
E[T_2] = 1,
E[T_3] =
1 + 1/2,
E[T_4] = 1 + (1/3)(1 + 1 + 1/2) = 1 + 1/2 + 1/3,
and it is not difficult to guess and then prove inductively that
E[T_i] = Sum_{j=1}^{i-1} 1/j.
However, to obtain a more complete description of T_N, we will use the representation
T_N = Sum_{j=1}^{N-1} I_j,
where
I_j = 1 if the process ever enters j, and 0 otherwise.
The importance of the above representation stems from the following.

Lemma 4.6.1
I_1, ..., I_{N-1} are independent and
P{I_j = 1} = 1/j,  1 <= j <= N - 1.

Proof. Given I_{j+1}, ..., I_N, let n = min{i: i > j, I_i = 1} denote the lowest numbered state greater than j that is entered. Then
P{I_j = 1 | I_{j+1}, ..., I_N} = (1/(n - 1)) / (j/(n - 1)) = 1/j.

PROPOSITION 4.6.2
(i) E[T_N] = Sum_{j=1}^{N-1} 1/j.
(ii) Var(T_N) = Sum_{j=1}^{N-1} (1/j)(1 - 1/j).
(iii) For N large, T_N has approximately a Poisson distribution with mean log N.
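Before turning to the proof, a quick simulation of the chain (a sketch with assumed values of N and the number of runs, not part of the text) shows the assertions at work: the sample mean of T_N is close to the harmonic sum Sum_{j<N} 1/j, which is in turn close to log N.

    import math
    import random

    def sample_T(N, rng):
        # one run of the chain: from state i jump to one of 1, ..., i-1, each with probability 1/(i-1)
        i, steps = N, 0
        while i > 1:
            i = rng.randrange(1, i)
            steps += 1
        return steps

    rng = random.Random(0)
    N, trials = 1000, 20_000                         # assumed illustrative values
    samples = [sample_T(N, rng) for _ in range(trials)]
    harmonic = sum(1 / j for j in range(1, N))
    print(sum(samples) / trials, harmonic, math.log(N))   # three nearby numbers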
ProofParts(i)and(ii)followfromLemma46 1andtherepresentationTN= ' 2 : , ~ 1 1',. Since the sumof alargenumber of independentBernoulli random variables. each having a smallprobability of being nonzero, isapproximately Poisson distributed, part(iii)followssince or andso fNdX~ I1fN-1dx -mlx}=-x (m- 1)' since inorder forthe run length to be atleast
m, the next m- 1 values must allbegreater than xand they mustbe inincreasingorder. To obtaintheunconditionaldistributionof thelengthof agivenrun,let Indenote the initial value of the nth run.Now itiseasy to see that {In.nI} isaMarkovchainhavingacontinuousstatespaceTo compute p(Ylx), the probability density that the nextrunbeginswiththe value y
giventhatarun has justbegunwithinitialvalue x.reasonasfollows co E(y,y + dY)l/n=x}=LP{ln+1E(Y.y + dy). Ln=ml/n =x}. m&1 whereLnisthelengthof thenthrun.Nowtherunwillbeof lengthmand thenextonewillstartwithvalueyif (i)the next m- 1 values are in increasing order and are all greater thanx; (ii)the mth valuemustequal y; (iii)the maximum of the
firstm- 1 valuesmustexceedy. Hence, P{ln+ IE(y, Y + dy),Ln= ml/n = x} _(1_x)m-1 - (m-l)'dyP{max(X1,.,Xm_IyIX,>x,i=I ... ,m-l} (1- x)m-I (m-1)'dy ify< x = (l-x)m-1[(y_x)m-I] (m- 1)'dy1 - 1 - x if Y > x.
APPLICATIONS OF MARKOV CHAINS summing over myields {I-x p(ylx) =el_x Y-.t e-e ify x. 197 That is. {In'n~I}isa Markov chainhaving the continuous state space(0. 1) andatransition probability density P ( ylx)givenby theabove. Toobtainthelimitingdistnbutionof Inwewillfirsthazardaguessand thenvenfyourguessbyusingtheanalogof
Theorem4.3.3.NowII.
being the initial value ofthe firstrun,isuniformly distributed over (0,1). However, thelater runs beginwhenever avaluesmaller thantheprevious one occurs. So it seems plausible that the long-run proportion of such values that are less thanywouldequaltheprobability thatauniform(0,1)randomvariableis lessthanygiventhatitissmaller
thanasecond,andindependent,uniform (0,1)randomvanable.Since it seems plausiblethat 17'( y),thelimiting density of In'isgivenby 1T(Y)=2(1y),O 0 /;1+ 1 with the notation P""signifying that the above are limiting probabilities.
Before writing down the steady-state equations, it may be worth noting the following (I)Any elementmovestowardthebackof thelistatmost one position ata time. (ii)If anelementisinpositioniandneitheritnoranyof theelements in the following k(i) positions are requested, it will remain in position i. (iii)Anyelementinoneofthepositionsi,i+1,... ,i+k (i)willbe
movedtoaposition1- =b ,1 + (k(i)lp)q - 1 + qlp + ..+ (qlp)lrlll" 202MARKOV CHAINS NowdefineaMarkovchain withstatesO.1., nandtransitionprobabilities (466)P,,= {c, 1 - c, ifj=i-l. ifj = i+ k(i). i= 1,,n - 1 Let f,denotetheprobabilitythatthisMarkovchainever entersstate0giventhatit startsinstateiThen f,satisfies f,=C,f,-I+(1- c,)f, fk(l).1=1.,n - 1. fo= 1,
Hence,asitcanbeshownthattheabovesetofequationshasauniquesolution.it followsfrom(464) thatifwetakec,equalto a, foralli,then f,willequaltheIT,of ruleR,andfrom(465)ifwelet c,=b"then f,equals TI,Nowitisintuitivelyclear (andwedeferaformalproofuntilChapter 8)thattheprobabilitythattheMarkov chaindefinedby(466)willever enter
0isanincreasingfunctionof thevector f=: (c1Cn_l)Hence.since a,;:::b"i=1..n, we seethat IT,;:::n,for all i When p:s;lin,then a, ::s;b" i= 1,. n- 1.andtheaboveinequalityisreversed THEOREM4.6.4 Among therulesconsidered,thelimitingexpected positionof theelementrequested is minimized bythetranspositionrule
ProofLettingXdenotethepositionofelwehaveuponconditioningonwhether ornotel isrequestedthattheexpectedpositionoftherequestedelementcanbe expressedas E"'..]E[]() E[1+ 2++ n - X] LPosltlon= pX+1 - Pn_1 =(p_1 - p) E[X] + (1- p)n(n + 1) n- 12(n- 1) Thus.ifp;:::lin. the expectedpositionisminimizedbyminimizingE[X], andif p~
lin.bymaximizingE[X]SinceE[X]=2 . ; ~ 0PiX>it,theresultfollowsfrom Proposition 46 3 TIME_REVERSIBLEMARKOVCHAINS203 4.7TIME-REVERSIBLEMARKOVCHAINS An irreducible positive recurrent Markov chain is stationary if the initial state is chosen accordingto the stationary probabilities.(In the caseof an ergodic chainthis isequivalent to
imagining that the process begins at time t=-00.) Wesaythat sucha chainisin steady state. Consider now a stationary Markov chain having transition probabilities p,J andstationaryprobabilities1T,.andsupposethatstartingatsometimewe trace the sequence of states going backwards in time. That is. starting at time n consider the sequence of states Xn
Xn -I, ...It turns out that this sequence of statesisitself aMarkov chainwithtransition probabilitiesP ~definedby P ~P{Xm=jlXm+1 =i} _P{Xm+1= ilXm = j}P{Xm= j} - P{Xm+1=i} ToprovethatthereversedprocessisindeedaMarkovchainweneedto verifythat Toseethattheprecedingistrue,thinkofthepresenttimeasbeingtime m+
1.Then,sinceXn,n::>1isaMarkovchainitfollowsthatgiventhe presentstateXm+1thepaststateXmandthefuturestatesXm+2Xm+3, areindependent.Butthisisexactlywhatthepreceding equationstates. Thus the reversed process isalso a Markov chain with transition probabili-tiesgivenby 1T,P" P*-'/---' 1T, If P ~= p,Jforalli,j,thentheMarkovchainissaidto
betimereversible. Theconditionfortimereversibility,namely,that (4.7.1)foralli. j, can be interpreted asstating that, forallstates i and j, therateatwhichthe processgoesfromito j(namely,1T,P'J)isequaltotherateatwhichitgoes from jto i (namely, 1T,P,,).It should benoted that this is an obvious necessary conditionfortimereversibilitysinceatransition fromi to
jgoingbackward 204 MARKOV CHAINS intime isequivalent to a transition fromj to i going forwardintime; that is ifXm=iandXmI= j,thenatransitionfromitojisobservedifwe looking backward in time and one from j to i if weare looking forward in time If wecanfindnonnegativenumbers,summingto1,whichsatisfy(4.71)
thenitfollowsthattheMarkovchainistimereversibleandtherepresentthestationary probabilities. Thisisso sinceif-(47.2)for all i, j2: Xt 1, thensumming over i yields 2: Xl= 1 Sincethestationary probabilities'Frt aretheuniquesolutionof theabove,it followsthatX,='Fr,foralli. ExAMPLE4.7(A)An Ergodic RandomWalk.We can argue,with-out any need for
computations, that an ergodic chain withp/., + I+ Ptr I=1 is time reversible. This followsby noting that the number of transitionsfromitoi+ 1 mustatalltimes bewithin1 of the number from i + 1 to i.This is so since between any two transitions fromito i+ 1 there must be one fromi+ 1 to i (and conversely) sincetheonlywaytore-
enter i from a higher state is by way of state i + 1. Hence it follows that the rate of transitions from i to i + 1 equals the rate from i + 1 to i, and so the process is time reversible.

EXAMPLE 4.7(B) The Metropolis Algorithm. Let a_j, j = 1, ..., m be positive numbers, and let A = Sum_{j=1}^{m} a_j. Suppose that m is large and that A is difficult to compute, and suppose we ideally want to simulate the values of a sequence of independent random variables whose probabilities are p_j = a_j/A, j = 1, ..., m. One way of simulating a sequence of random variables whose distributions converge to {p_j, j = 1, ..., m} is to find a Markov chain that is both easy to simulate and whose limiting probabilities are the p_j. The Metropolis algorithm provides an approach for accomplishing this task.

Let Q be any irreducible transition probability matrix on the integers 1, ..., m such that q_{ij} = q_{ji} for all i and j. Now define a Markov chain {X_n, n >= 0} as follows. If X_n = i, then generate a random variable that is equal to j with probability q_{ij}, j = 1, ..., m. If this random variable takes on the value j, then set X_{n+1} equal to j with probability min{1, a_j/a_i}, and set it equal to i otherwise. That is, the transition probabilities of {X_n, n >= 0} are
P_{ij} = q_{ij} min(1, a_j/a_i)  if j != i,
P_{ii} = q_{ii} + Sum_{j != i} q_{ij} (1 - min(1, a_j/a_i)).
We will now show that the limiting probabilities of this Markov chain are precisely the p_j.

To prove that the p_j are the limiting probabilities, we will first show that the chain is time reversible with stationary probabilities p_j, j = 1, ..., m by showing that
p_i P_{ij} = p_j P_{ji}.
To verify the preceding we must show that
p_i q_{ij} min(1, a_j/a_i) = p_j q_{ji} min(1, a_i/a_j).
Now, q_{ij} = q_{ji} and a_j/a_i = p_j/p_i, and so we must verify that
p_i min(1, p_j/p_i) = p_j min(1, p_i/p_j).
However this is immediate since both sides of the equation are equal to min(p_i, p_j). That these stationary probabilities are also limiting probabilities follows from the fact that since Q is an irreducible transition probability matrix, {X_n} will also be irreducible, and as (except in the trivial case where p_i is identically 1/m) P_{ii} > 0 for some i, it is also aperiodic. By choosing a transition probability matrix Q that is easy to simulate, that is, for each i it is easy to generate the value of a random variable that is equal to j with probability q_{ij}, j = 1, ..., m, we can use the preceding to generate a Markov chain whose limiting probabilities are a_j/A, j = 1, ..., m. This can also be accomplished without computing A.
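A direct transcription of this example into code is short. In the sketch below (not from the text) the base chain Q is an assumed symmetric nearest-neighbour proposal on 1, ..., m; the weights a_j are never normalized, yet the empirical state frequencies approach a_j/A.

    import random

    def metropolis_frequencies(a, steps, seed=0):
        # Chain of the Metropolis example: symmetric proposal, acceptance min(1, a_j/a_i)
        rng = random.Random(seed)
        m = len(a)
        counts = [0] * m
        i = 0                                        # states indexed 0, ..., m-1
        for _ in range(steps):
            j = (i + rng.choice((-1, 1))) % m        # q_ij = q_ji = 1/2 for ring neighbours
            if rng.random() < min(1.0, a[j] / a[i]):
                i = j
            counts[i] += 1
        return [c / steps for c in counts]

    a = [1.0, 3.0, 2.0, 6.0]                         # assumed un-normalized weights; A is never computed
    print(metropolis_frequencies(a, 200_000))
    print([x / sum(a) for x in a])                   # target probabilities a_j / A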
Consider a graph having a positive number w_{ij} associated with each edge (i, j), and suppose that a particle moves from vertex to vertex in the following manner: If the particle is presently at vertex i then it will next move to vertex j with probability
P_{ij} = w_{ij} / Sum_j w_{ij},
where w_{ij} is 0 if (i, j) is not an edge of the graph. The Markov chain describing the sequence of vertices visited by the particle is called a random walk on an edge weighted graph.

PROPOSITION 4.7.1
Consider a random walk on an edge weighted graph with a finite number of vertices. If this Markov chain is irreducible then it is, in steady state, time reversible with stationary probabilities given by
pi_i = Sum_j w_{ij} / Sum_i Sum_j w_{ij}.

Proof. The time reversibility equations reduce to
pi_i w_{ij} / Sum_k w_{ik} = pi_j w_{ji} / Sum_k w_{jk},
or, equivalently, since w_{ij} = w_{ji},
pi_i / Sum_k w_{ik} = pi_j / Sum_k w_{jk},
implying that
pi_i = c Sum_k w_{ik},
which, since Sum_i pi_i = 1, proves the result.

EXAMPLE 4.7(D) Consider a star graph consisting of r rays, with each ray consisting of n vertices. (See Example 1.9(C) for the definition of a star graph.) Let leaf i denote the leaf on ray i. Assume that a particle moves along the vertices of the graph in the following
mannerWheneveritisatthecentralvertexO.itisthenequally likely to move to any ofits neighborsWhenever it is on an internal (nonJeaf) vertex of a ray. then it moves towards the leaf of that ray with probability p and towardsOwith probability 1 - p. Wheneverit isataleaf.itmovestoitsneighborvertexwithprobability1.
StartingatvertexO.weareinterestedinfindingtheexpected number of transitions that it takes to visitanthe vertices and then returnto O. TIME-REVERSIBLE MARKOV CHAINS Figure 4.7.1.Astar graphwithweights:w:=p/(l - p). Tobegin,letusdeterminethe expected number of transitions betweenreturnsto the centralvertex O.To evaluate thisquantity, note
the Markovchainof successivevertices visited isof thetype consideredinProposition 4.7.1.To seethis,attacha weightequal to1 witheachedgeconnected to 0,andaweightequaltow'on anedgeconnecting theithand(i+ 1)stvertex(from0)of a ray, wherew=pl(l- p)(seeFigure47.1)Then,withtheseedge weights,theprobabilitythataparticleatavertexistepsfrom0
movestowardsitsleaf isw'/(w'+ WI-I)p. Sincethetotalof thesumof theweightsontheedgesoutof eachof theverticesis andthesumof the weightsonthe edgesoutof vertex0isr,we see fromProposition 4 7 1 that Therefore, /Loo,theexpectednumber of stepsbetweenreturns to vertex 0,is Now,say thata newcyclebegins whenever the particlereturns to
vertex0,andletXIbethenumber of transitionsinthe jth cycle, j;;:::1Also,fixiandletNdenotethenumberof cyclesthatit 207 MARKOV CHAINS takes for the particle to visit leaf i and then return to 0With these N definitions,2: X,isequaltothenumber of stepsittakestovisit 1= 1 leaf iand then return to O.AsN isclearly a stopping timeforthe X"weobtain
fromWald'sequationthat E [f X,]=ILooE[N]::: --'-1-w-O}isirreducibleandpositive recurrent,andletitsstationary probabilitiesbe1TJ,j~O.Thatis,the1Tj,j2! 0,istheunique solutionof, and1Tjhas the interpretationof beingthe proportion of theX/s thatequals j(If theMarkovchainisaperiodic,then1Tjisalsoequaltolimn_ex>P{Xn= j}.) Nowas1TJ equals the proportion
of transitions thatare into state j, and /Lj isthemeantimespentinstatejpertransition,itseemsintuitivethatthe limitingprobabilitiesshouldbe proportionalto 1TJ/LrWe nowprove this. THEOREM4.8.3 Supposetheconditionsof Proposition481andsupposefurtherthattheembedded Markovchain{Xn'n2::O}ispositiverecurrentThen ProofDefinethenotationasfollows
Y, U) = amount of time spent in state i dunng the jth visit to that state, i,j::::: 0 N, (m)= number of visits to state i in the first m transitions of the semi-Markov process In termsof theabovenotation weseethat theproportion of timeinidunng thefirst m transitions,callitP,-m,isasfollows N,(m) LY,(j) (481) J= 1 P, = m=---'--N-c,(--'m>,---LLY,(j) ,J=I N,(m)
N:;>Y,(j) -- L.J--m/=1N,(m) L N,(m)N)Y,U) ,m/=1N,(m) 216MARKOVCHAINS Now sinceN,(m) _00as m _00,itfollowsfromthe strong lawof largenumbers that N,(m)Y,U) 2:-N()-ILl' ,_I, m and,bythestronglaw forrenewalprocesses,that N, (m)(E[bf..b..'])-1 --_numer 0transItionsetween VISitSto I= 1f, m Hence,lettingm_00in(48 1)showsthat andtheproof
is complete.
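Computationally, Theorem 4.8.3 amounts to finding the stationary vector of the embedded chain and then weighting it by the mean holding times. The sketch below (not from the text; the matrix P and the times mu are assumed purely for illustration) carries out exactly that calculation.

    import numpy as np

    def semi_markov_limits(P, mu):
        # P_i = pi_i mu_i / sum_j pi_j mu_j, with pi the stationary vector of the embedded chain
        P = np.asarray(P, dtype=float)
        n = P.shape[0]
        A = np.vstack([P.T - np.eye(n), np.ones(n)])     # pi P = pi together with sum(pi) = 1
        b = np.zeros(n + 1)
        b[-1] = 1.0
        pi, *_ = np.linalg.lstsq(A, b, rcond=None)
        w = pi * np.asarray(mu, dtype=float)
        return w / w.sum()

    P = [[0.0, 0.5, 0.5],
         [0.3, 0.0, 0.7],
         [1.0, 0.0, 0.0]]                                # assumed embedded chain
    mu = [1.0, 2.0, 3.0]                                 # assumed mean times spent per visit
    print(semi_markov_limits(P, mu))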
From Theorem 4.8.3 it follows that the limiting probabilities depend only on the transition probabilities P_{ij} and the mean times mu_i, i, j >= 0.

EXAMPLE 4.8(A) Consider a machine that can be in one of three states: good condition, fair condition, or broken down. Suppose that a machine in good condition will remain this way for a mean time mu_1 and will then go to either the fair condition or the broken condition with respective probabilities 3/4 and 1/4. A machine in the fair condition will remain that way for a mean time mu_2 and will then break down. A broken machine will be repaired, which takes a mean time mu_3, and when repaired will be in the good condition with probability 2/3 and the fair condition with probability 1/3. What proportion of time is the machine in each state?

Solution. Letting the states be 1, 2, 3, we have that the pi_i satisfy
pi_1 + pi_2 + pi_3 = 1,
pi_1 = (2/3) pi_3,
pi_2 = (3/4) pi_1 + (1/3) pi_3,
pi_3 = (1/4) pi_1 + pi_2.
The solution is pi_1 = 4/15, pi_2 = 1/3, pi_3 = 2/5. Hence, P_i, the proportion of time the machine is in state i, is given by
P_1 = 4 mu_1 / (4 mu_1 + 5 mu_2 + 6 mu_3),
P_2 = 5 mu_2 / (4 mu_1 + 5 mu_2 + 6 mu_3),
P_3 = 6 mu_3 / (4 mu_1 + 5 mu_2 + 6 mu_3).

The problem of determining the limiting distribution of a semi-
MarkovprocessisnotcompletelysolvedbyderivingtheP,.For wemayaskforthe limit,ast ~00,of being instate i attime tof making the next transition after time t+ x, and of thisnexttransi-tionbeingintostate jTo expressthisprobabilitylet yet) =time from t until the next transition, Set)= state entered at the first transition after t. To compute lim P{Z(t) =i,yet) >
x, Set)=n, , ~ ' " weagainusethe theory of alternatingrenewalprocesses THEOREM4.8.4 If thesemi-Markovprocess isirreducibleand not lattice,then (482)limP{Z(t)= i,Y(t)> x, S(t)= jIZ(O)= k} ,_00 p , J ~F,,(y)dy = 217 ProofSaythatacyclebeginseachtimetheprocessentersstateiandsaythatitis "on" ifthe stateisi andit willremaini for atleastthenext
xtimeunits andthenext stateisjSayitis"off"otherwiseThuswehaveanalternatingrenewalprocess Conditioning onwhether thestateafter iisjor not, weseethat ["on" timeinacycle]=P"[(X,,- x)'], 218MARKOVCHAINS whereX,)isarandomvanablehavingdistributionF,)andrepresentingthetimeto makeatransitionfromito j,andy+= max(O,y)Hence E["on" time in
cycle]= P"f ~P{X,) - X> a} da = P,)f ~F,)(a+ x) da =P,) r F"( y) dy AsE[cycletime]=iJ-,,,theresultfollowsfromalternating renewalprocesses By the same technique (or by summing (4.8.2) over j) we can prove the fol-lowing. Corollary4.8.5 If thesemi-Markovprocessisirreducibleandnotlattice,then (483)! ~ ~P{Z(t) = i,Y(t) > xIZ(O)= k} = r H, (y) dy1iJ-/1
Remarks (1)OfcoursethelimitingprobabilitiesinTheorem4.8.4andCorollary 4.8.5alsohaveinterpretationsaslong-runproportionsForinstance, thelong-runproportionoftimethatthesemi-Markovprocessisin stateiandwillspendthenextxtimeunitswithoutatransitionand willthengoto state jisgivenby Theorem 4.8.4.
(2)Multiplyinganddividing(4.8.3)byJLiJandusingP,=JL,IJLii'gives lim P{Z(t) = i,yet)> x} = P, H,.e(X), 1-'" whereH, eistheequilibriumdistributionofH,Hencethelimiting probabilityofbeinginstateiisP"and,giventhatthestateattis i,thetimeuntiltransition(astapproaches00)hastheequilibrium distributionof H" PROBLEMS219 PROBLEMS 4.1.A store that stocks a
certain commodity uses the following (s, S) ordering policy;if its supplyatthe beginning of a time penod isx, then it orders o S-x if xs, if x< s. The order isimmediately filled. The daily demands are independent and equaljwithprobability(Xi'Alldemandsthatcannotbeimmediately metarelost.LetXndenotetheinventorylevelattheendofthenth time penod. Argue
that {Xn,nI}isa Markovchainand compute its transitionprobabilities. - -4.2.For aMarkovchainprove that whenever n.O}a Markov chain? If so, give its transition probabilities. PROBLEMS 221 (c)Is {Yn,n2: O}a Markov chain? If so, give its transition probabilities. (d)Is {(Xn' Yn), n 2: O}a Markov chain? If so, give its transition probabil-ities. 4.11.If /" P) P
)+, w h e n ~>P,. 4.44.Consideratime-reversibleMarkovchainwithtransitionprobabilities p')and limiting probabilities 1f"andnowconsider the same chain trun-catedtothestates0,1,..,M.Thatis,forthetruncatedchainits transitionprobabilitiesp,)are P,,=P ')' 0, o ~ i ~M,j =i 05:i=l=j 0forall ioFj istimereversibleif,and only if, foralli, j, k PROBLEMS 229
4.46.Let {Xn' n~1} denote an irreducible Markov chainhavinga countable state space.Now consider a new stochastic process {Yn,n~O}that only acceptsvalues of theMarkovchainthatarebetween 0andNThat is, wedefineYn tobethe nth valueof theMarkovchainthatisbetween o andNForinstance,ifN= 3andXI= 1,Xl= 3,X3= 5,X4 = 6, X52,thenY1 :;::1,Y2
=3,Y3 =2. (a)Is{Yn,n;;:;:O}aMarkovchain?Explainbriefly (b)Let11)denote theproportion of timethat{Xn,n::>I}isinstate j If 11J > 0forallj, whatproportionof timeis{Yn,n;;:;:O}ineach of the states 0,1,... , N? (c)Suppose {Xn}isnull recurrent and let 1T,(N), i=0, 1,, N denote thelong-run proportions for {Yn,n.2 O}Showthat 1T)(N)=11,(N)E[time
theXprocess spends in j betweenreturns to i],j=1= i. (d)Use(c)toarguethatinasymmetricrandomwalktheexpected numberof visitsto state ibeforereturningtothe originequals1 (e)If {Xn,n;;:;:O}istimereversible,showthat {Yn,n;;:;:O}isalso 4.47.Mballsare initially distributed among murns.At each stage one of the balls is selected at random, taken from
whichever urn it is in, and placed, at random, inone of the other m- 1 urnsConsider the Markovchain whose state at any time isthe vector (nl', nm), where nl denotes the numberofballsinurniGuessatthelimitingprobabilitiesforthis Markovchainandthenverifyyourguessandshowatthe sametime that theMarkovchain istimereversible 4.48.For anergodic
semi-Markovprocess. (a)Computetherateatwhichtheprocessmakesatransitionfromi into j.
(b)Show that 2: PI) IlL It=11ILw I (c)Show that the proportion of timethat the processisinstate iand headed for state jisP,) 71'J IlL IIwhere71.)=I; F,)(I) dl (d)Show that the proportion of timethat the state isi and willnext be jwithina time x is P,) 71'J F'() './x, IL" whereF ~ Jisthe equilibriumdistributionof F.) 230MARKOV CHA INS 4.49.Foranergodicsemi-
Markovprocessderiveanexpression,ast~00 forthelimitingconditionalprobabilitythatthenextstatevisitedafte; t isstate j,givenX(t)= i 4.50.Ataxialternates between three locationsWhen itreaches location1 it isequallylikelytogonexttoeither2or3Whenitreaches2itwill nextgoto1withprobability~andto3withprobabilityi.From3 it
alwaysgoesto1Themeantimesbetweenlocationsiandjaretl2"" 20,tl3=30,t23 =30(t,)=t),) (a)Whatisthe(limiting)probabilitythatthetaxi'smostrecentstop wasatlocationi,i=1,2,3? (b)What isthe(limiting)probability thatthetaxiisheading forloca-tion2? (c)What fraction of time is the taxi traveling from 2 to 3?Note.Upon arrivalatalocationthe taxiimmediately
departs.

REFERENCES

References 1, 2, 3, 4, 6, 9, 10, 11, and 12 give alternate treatments of Markov chains. Reference 5 is a standard text in branching processes. References 7 and 8 have nice treatments of time reversibility.

1. N. Bhat, Elements of Applied Stochastic Processes, Wiley, New York, 1984.
2. C. L. Chiang, An Introduction to Stochastic Processes and Their Applications, Krieger, 1980.
3. E. Cinlar, Introduction to Stochastic Processes, Prentice-Hall, Englewood Cliffs, NJ, 1975.
4. D. R. Cox and H. D. Miller, The Theory of Stochastic Processes, Methuen, London, 1965.
5. T. Harris, The Theory of Branching Processes, Springer-Verlag, Berlin, 1963.
6. S. Karlin and H. Taylor, A First Course in Stochastic Processes, 2nd ed., Academic Press, Orlando, FL, 1975.
7. J. Keilson, Markov Chain Models - Rarity and Exponentiality, Springer-Verlag, Berlin, 1979.
8. F. Kelly, Reversibility and Stochastic Networks, Wiley, Chichester, England, 1979.
9. J. Kemeny, L. Snell, and A. Knapp, Denumerable Markov Chains, Van Nostrand, Princeton, NJ, 1966.
10. S. Resnick, Adventures in Stochastic Processes, Birkhauser, Boston, MA, 1992.
11. S. M. Ross, Introduction to Probability Models, 5th ed., Academic Press, Orlando, FL, 1993.
12. H. C. Tijms, Stochastic Models, An Algorithmic Approach.

Stochastic Processes SOLO HERMELIN Updated: 10.05.11 15.06.14 SOLO Stochastic Processes Table of Content Random Variables Stochastic Differential Equation (SDE) Brownian Motion Smoluchowski
Equation Langevin Equation Lévy Process Martingale Chapmann – Kolmogorov Equation Itô Lemma and Itô Processes Stratonovich Stochastic Calculus Fokker – Planck Equation Kolmogorov forward equation (KFE) and its adjoint the Kolmogorov backward equation (KBE) Propagation Equation SOLO Stochastic Processes Table of Content (continue)
Bartlett-Moyal TheoremFeller- Kolmogorov Equation Langevin and Fokker- Planck Equations Generalized Fokker - Planck Equation Karhunen-Loève Theorem References 4 Random ProcessesSOLO Random Variable: A variable x determined by the outcome Ω of a random experiment. ( )Ω= xx Random Process or Stochastic Process: A function of time x
determined by the outcome Ω of a random experiment. ( ) ( )Ω= ,txtx 1Ω 2Ω 3Ω 4Ω x t This is a family or an ensemble of functions of time, in general different for each outcome Ω.
Mean or Ensemble Average of the Random Process: ( ) ( )[ ] ( ) ( )∫+∞ ∞− =Ω= ξξξ dptxEtx tx,: Autocorrelation of the Random Process: ( ) ( ) ( )[ ] ( ) ( ) ( )∫ ∫+∞ ∞− +∞ ∞− =ΩΩ= ηξξξη ddptxtxEttR txtx 21 ,2121 ,,:, Autocovariance of the Random Process: ( ) ( ) ( )[ ] ( ) ( )[ ] 221121 ,,:, txtxtxtxEttC −Ω−Ω= ( ) ( ) ( )[ ] ( ) ( ) ( ) ( ) ( )2121212121 ,,,,
txtxttRtxtxtxtxEttC −=−ΩΩ= Table of Content 5 SOLO Stationarity of a Random Process 1. Wide Sense Stationarity of a Random Process: • Mean Average of the Random Process is time invariant: ( ) ( )[ ] ( ) ( ) .,: constxdptxEtx tx ===Ω= ∫+∞ ∞− ξξξ • Autocorrelation of the Random Process is of the form: ( ) ( ) ( )ττ RttRttRtt 21: 2121 ,−= =−= ( ) ( )
( )[ ] ( ) ( ) ( ) ( )12,2121 ,,,:,21 ttRddptxtxEttR txtx === ∫ ∫+∞ ∞− +∞ ∞− ηξξξηωωsince: We have: ( ) ( )ττ −= RR Power Spectrum or Power Spectral Density of a Stationary Random Process: ( ) ( ) ( )∫+∞ ∞− −= ττωτω djRS exp: 2. Strict Sense Stationarity of a Random Process: All probability density functions are time invariant: ( ) ( ) ( ) .,, constptp
xtx == ωωω Ergodicity: ( ) ( ) ( )[ ]Ω==Ω=Ω ∫+ −∞→ ,,2 1:, lim txExdttx Ttx ErgodicityT TT A Stationary Random Process for which Time Average = Assembly Average Random Processes 6 SOLO Time Autocorrelation: Ergodicity: ( ) ( ) ( ) ( ) ( )∫+ −∞→ Ω+Ω=Ω+Ω=T TT dttxtxT txtxR ,,2 1:,, lim τττ For a Ergodic Random Process define Finite Signal
Energy Assumption: ( ) ( ) ( ) ∞<Ω=Ω= ∫+ −∞→ T TT dttxT txR ,2 1,0 22 lim Define: ( ) ( ) ≤≤−Ω =Ωotherwise TtTtxtxT 0 ,:, ( ) ( ) ( )∫ +∞ ∞− Ω+Ω= dttxtxT R TTT ,,2 1: ττ ( ) ( ) ( ) ( ) ( ) ( ) ( ) ( ) ( ) ( ) ( ) ( ) ( )∫∫∫ ∫∫∫ −− − − +∞ − − − − ∞− Ω+Ω−Ω+Ω=Ω+Ω= Ω+Ω+Ω+Ω++Ω= T T TT T T TT T T TT T TT T T TT T TTT dttxtxT dttxtxT dttxtxT dttxtxT
dttxtxT dttxtxT R τ τ τ τ τττ ττωττ ,,2 1,, 2 1,, 2 1 ,,2 1,, 2 1,, 2 1 00 Let compute: ( ) ( ) ( ) ( ) ( )∫∫−∞→−∞→∞→ Ω+Ω−Ω+Ω=T T TTT T T TTT TT dttxtxT dttxtxT Rτ τττ ,,2 1,, 2 1limlimlim ( ) ( ) ( )ττ RdttxtxT T T TT T =Ω+Ω∫−∞→ ,,2 1lim ( ) ( ) ( ) ( )[ ] 0,,2 1,, 2 1 suplimlim → Ω+Ω≤Ω+Ω≤≤−∞→−∞→ ∫ τττττ txtxT dttxtxT TT TtTT T T TTT therefore: ( ) (
)ττ RRTT =→∞ lim ( ) ( ) ( )[ ]Ω==Ω=Ω ∫+ −∞→ ,,2 1:, lim txExdttx Ttx ErgodicityT TT T− T+ ( )txT t Random Processes 7 SOLO Ergodicity (continue): ( ) ( ) ( ) ( ) ( ) ( ) ( )[ ] ( ) ( )( )[ ] ( ) ( ) ( ) ( )( ) ( ) ( ) ( ) ( ) [ ]TTTT TT TT TTT XXT dvvjvxdttjtxT dtjtxdttjtxT ddttjtxtjtxT dttxtxdjT djR * 2 1exp,exp, 2 1 exp,exp,2 1 exp,exp,2 1 ,,exp2 1exp =−ΩΩ= +
−Ω+Ω= +−Ω+Ω= Ω+Ω−=− ∫∫ ∫∫ ∫ ∫ ∫ ∫∫ ∞+ ∞− ∞+ ∞− ∞+ ∞− ∞+ ∞− ∞+ ∞− ∞+ ∞− +∞ ∞− +∞ ∞− +∞ ∞− ωω ττωτω ττωτω τττωττωτLet compute: where: and * means complex-conjugate.( ) ( )∫+∞ ∞− −Ω= dvvjvxX TT ωexp,: Define: ( ) ( ) ( ) ( ) ( ) ( )[ ]∫ ∫∫+∞ ∞− + −∞→ +∞ ∞−∞→∞→ Ω+Ω−= −= = τττωττωτω ddttxtxE TjdjRE T XXES T T TTT
TT TT T ,,2 1expexp 2: limlimlim * Since the Random Process is Ergodic we can use the Wide Stationarity Assumption: ( ) ( )[ ] ( )ττ RtxtxE TT =Ω+Ω ,, ( ) ( ) ( ) ( ) ( ) ( ) ( )∫ ∫ ∫∫ ∫∞+ ∞− +∞ ∞− + −∞→ +∞ ∞− + −∞→∞→ −= −= −= = ττωτ ττωττττωω djR ddtT jRddtRT jT XXES T TT T TT TT T exp 2 1exp 2 1exp 2: 1 * limlimlim Random Processes 8
SOLO Ergodicity (continue): We obtained the Wiener-Khinchine Theorem (Wiener 1930): ( ) ( ) ( )∫+∞ ∞−→∞−= = dtjR T XXES TT T τωτω exp2 :* lim Norbert Wiener1894 - 1964 Alexander YakovlevichKhinchine1894 - 1959 The Power Spectrum or Power Spectral Density of a Stationary Random Process S (ω) is the Fourier Transform of the
Autocorrelation Function R (τ). Random Processes 9 SOLO White Noise A (not necessary stationary) Random Process whose Autocorrelation is zero for any two different times is called white noise in the wide sense. ( ) ( ) ( )[ ] ( ) ( )211 2 2121 ,,, ttttxtxEttR −=ΩΩ= δσ ( )1 2 tσ - instantaneous variance Wide Sense Whiteness Strict Sense Whiteness A
(not necessary stationary) Random Process in which the outcome for any two different times is independent is called white noise in the strict sense. ( ) ( ) ( ) ( )2121, ,,21 ttttp txtx −=Ω δ A Stationary White Noise Random has the Autocorrelation: ( ) ( ) ( )[ ] ( )τδσττ 2,, =Ω+Ω= txtxER Note In general whiteness requires Strict Sense Whiteness.
In practice we have only moments (typically up to second order) and thus only Wide Sense Whiteness. Random Processes 10 SOLO White Noise A Stationary White Noise Random has the Autocorrelation: ( ) ( ) ( )[ ] ( )τδσττ 2,, =Ω+Ω= txtxER The Power Spectral Density is given by performing the Fourier Transform of the Autocorrelation: ( ) ( ) ( ) ( ) (
) 22 expexp στωτδστωτω =−=−= ∫∫+∞ ∞− +∞ ∞− dtjdtjRS ( )ωS ω2σ We can see that the Power Spectrum Density contains all frequencies at the same amplitude. This is the reason that is called White Noise. The Power of the Noise is defined as: ( ) ( ) 20 σωτ ==== ∫+∞ ∞− SdtRP Random Processes 11 SOLO Markov Processes A Markov Process is
defined by: Andrei AndreevichMarkov 1856 - 1922 ( ) ( )( ) ( ) ( )( ) 111 ,|,,,|, tttxtxptxtxp >∀ΩΩ=≤ΩΩ ττ i.e. the Random Process, the past up to any time t1 is fully defined by the process at t1. Examples of Markov Processes: 1. Continuous Dynamic System( ) ( )( ) ( )wuxthtz vuxtftx ,,, ,,, == 2.
Discrete Dynamic System ( ) ( )( ) ( )kkkkk kkkkk wuxthtz vuxtftx ,,, ,,, 1 1 == + + x - state space vector (n x 1)u - input vector (m x 1)v - white input noise vector (n x 1) - measurement vector (p x 1)z - white measurement noise vector (p x 1)w Random Processes Table of Content SOLO Stochastic Processes The earliest work on SDEs was done to
describe Brownian motion in Einstein's famous paper, and at the same time by Smoluchowski. However, one of the earlier works related to Brownian motion is credited to Bachelier (1900) in his thesis 'Theory of Speculation'. This work was followed upon by Langevin. Later Itō and Stratonovich put SDEs on more solid mathematical footing. In
physical science, SDEs are usually written as Langevin Equations.
These are sometimes confusingly called "the Langevin Equation" even though there are many possible forms. These consist of an ordinary differential equation containing a deterministic part and an additional random white noise term. A second form is the Smoluchowski Equation and, more generally, the Fokker-Planck Equation. These are partial
differential equations that describe the time evolution of probability distribution functions. The third form is the stochastic differential equation that is used most frequently in mathematics and quantitative finance (see below). This is similar to the Langevin form, but it is usually written in differential form. SDEs come in two varieties, corresponding to
two versions of stochastic calculus.
Background Terminology A stochastic differential equation (SDE) is a differential equation in which one or more of the terms is a stochastic process, thus resulting in a solution which is itself a stochastic process. SDE are used to model diverse phenomena such as fluctuating stock prices or physical system subject to thermal fluctuations. Typically,
SDEs incorporate white noise which can be thought of as the derivative of Brownian motion (or the Wiener process); however, it should be mentioned that other types of random fluctuations are possible, such as jump processes. Stochastic Differential Equation (SDE) SOLO Stochastic Processes Brownian motion or the Wiener process was discovered
to be exceptionally complex mathematically. The Wiener process is non-differentiable; thus, it requires its own rules of calculus. There are two dominating versions of stochastic calculus, the Ito Stochastic Calculus and the Stratonovich Stochastic Calculus. Each of the two has advantages and disadvantages, and newcomers are often confused whether
the one is more appropriate than the other in a given situation. Guidelines exist and conveniently, one can readily convert an Ito SDE to an equivalent Stratonovich SDE and back again. Still, one must be careful which calculus to use when the SDE is initially written down. Stochastic Calculus Table of Content Stochastic ProcessesSOLO Brownian
Motion In 1827 Brown, a botanist, discovered the motion of pollen particles in water. At the beginning of the twentieth century, Brownian motion was studied by Einstein, Perrin and other physicists. In 1923, against this scientific background, Wiener defined probability measures in path spaces, and used the concept of Lebesgue integrals to lay the
mathematical foundations of stochastic analysis. In 1942, Ito began to reconstruct from scratch the concept of stochastic integrals, and its associated theory of analysis.
He created the theory of stochastic differential equations, which describe motion due to random events. Albert Einstein 1879 - 1955 Norbert Wiener1894 - 1964 Henri Léon Lebesgue 1875 - 1941 Robert Brown 1773–1858 Albert Einstein's (in his 1905 paper) and Marian Smoluchowski's (1906) independent research of the problem that brought the
solution to the attention of physicists, and presented it as a way to indirectly confirm the existence of atoms and molecules. Marian Ritter von Smolan Smoluchowski 1872 - 1917 Kiyosi Itô 1915 - 2008 Stochastic Processes SOLO Random Walk Assume the process of walking on a straight line at discrete
intervals T. At each timewe walk a distance s , randomly, to the left or to the right, with the same probability p=1/2. In this way we created a Stochastic Process called Random Walk. (This experiment is equivalent to tossing a coin to get, randomly, Head or Tail). Assume that at t = n T we have taken k steps to the right and n-k steps to the left, then
the distance traveled isx (nT) is a Random Walk, taking the values r s, wherer equals n, n-2,…, -(n-2),-n ( ) ( ) ( ) snksknsknTx −=−−= 2 ( ) ( )2 2nr ksnksrnTx+=⇒−== Therefore ( ) n nnr npnr nnr kPsrnTxP2 1 222 += += +=== Stochastic ProcessesSOLO Random Walk (continue – 1) The Random value is ( ) nxxxnTx +++= 21 We have at step i the
event xi: P xi = +s = p = 1/2 and P xi = - s = 1-p = 1/2 ( ) ( )( ) ( ) nrppn pnk en eppn nrkPsrnTxP 2/12 2 2 2/ 1 12 1 2−− −−= −≈ +=== ππ ( ) 0=−=−++== sxPssxPsxE iii ( ) 2222 ssxPssxPsxE iii =−=−++== ( ) ( ) 222 22 1 0 1 1 2 21 0 snxExExExxEnTxE xExExEnTxE n xxEn i n jji n ji ji =+++== =+++=≠= = =∑∑ === ≠==⇒jisxE jixExExxE i ii
tindependenxx ji ji 22 , 0 For large r ( )nr > and( ) +=+≈≤ ∫ − n rerfdyesrnTxP nry 2 1 2 1 2 1 / 0 2/2 π Stochastic ProcessesSOLO Random Walk (continue – 2) For n1 > n2 > n3 > n4 the number of steps to the right from n2T to n1T interval is independent of the number of steps to the right between n4T to n3T interval. Hence x (n1T) – x (n2T) is
independent of x (n4T) – x (n3T).
Table of Content SOLO Stochastic Processes Smoluchowski Equation In physics, the Diffusion Equation with drift term is often called Smoluchowski equation (after Marian von Smoluchowski). Let w(r, t) be a density, D a diffusion constant, ζ a friction coefficient, and U(r, t) a potential. Then the Smoluchowski equation states that the density evolves
according to The diffusivity term acts to smoothen out the density, while the drift term shifts the density towards regions of low potential U. The equation is consistent with each particle moving according to a stochastic differential equation, with a bias term and a diffusivity D. Physically, the drift term originates from a force being balanced by a
viscous drag given by ζ. The Smoluchowski equation is formally identical to the Fokker–Planck equation, the only difference being the physical meaning of w: a distribution of particles in space for the Smoluchowski equation, a distribution of particle velocities for the Fokker–Planck equation. SOLO Stochastic Processes Einstein-Smoluchowski
Equation In physics (namely, in kinetic theory) the Einstein relation (also known as Einstein–Smoluchowski relation) is a previously unexpected connection revealed independently by Albert Einstein in 1905 and by Marian Smoluchowski (1906) in their papers on Brownian motion. Two important special cases of the relation are: (diffusion of charged
particles) ("Einstein–Stokes equation", for diffusion of spherical particles through liquid with low Reynolds number) Where • ρ (x,t) density of the Brownian particles•D is the diffusion constant,•q is the electrical charge of a particle,•μq, the electrical mobility of the charged particle, i.e. the ratio of the particle's terminal drift velocity to an applied
electric field,•kB is Boltzmann's constant,•T is the absolute temperature,•η is viscosity•r is the radius of the spherical particle.The more general form of the equation is:where the "mobility" μ is the ratio of the particle's terminal drift velocity to an applied force, μ = vd / F. 2 2 xD t ∂∂= ∂∂ ρρ Einstein’s EquationFor Brownian Motion ( ) ( ) −= tD x tDtx
4exp 4 1, 2 2/1πρ Table of Content Paul Langevin1872-1946 Langevin Equation SOLO Stochastic Processes Langevin equation (Paul Langevin, 1908) is a stochastic differential equation describing the time evolution of a subset of the degrees of freedom. These degrees of freedom typically are collective (macroscopic) variables changing only slowly in
comparison to the other (microscopic) variables of the system. The fast (microscopic) variables are responsible for the stochastic nature of the Langevin equation. The original Langevin equation describes Brownian motion, the apparently random movement of a particle in a fluid due to collisions with the molecules of the fluid, Langevin, P.
(1908). "On the Theory of Brownian Motion". C. R.
Acad. Sci.
(Paris) 146: 530–533. ( )td xdvtv td vdm =+−= ηλ We are interested in the position x of a particle of mass m. The force on the particle is the sum of the viscous force proportional to particle’s velocity λ v (Stoke’s Law) plus a noise term η (t) that has a Gaussian Probability Distribution with Correlation Function ( ) ( ) ( )'2', , ttTktt jiBji −= δδληη where
kB is Boltzmann’s constant and T is the Temperature. Table of Content Paul_Langevin.jpg Propagation Equation SOLO Stochastic Processes Definition 1: Holder Continuity Condition ( )( ) 111 , mxnxmx Kttxk ∈Given a mx1 vector on a mx1 domain, we say that is Holder Continuous in K if for some constants C, α >0 and some norm || ||: ( ) ( ) α2121 ,,
xxCtxktxk −<− Holder Continuity is a generalization of Lipschitz Continuity (α = 1): Holder Continuity Lipschitz Continuity( ) ( ) 2121 ,, xxCtxktxk −<− Rudolf Lipschitz1832 - 1903 Otto Ludwig Hölder1859 - 1937 Propagation Equation SOLO Stochastic Processes Definition 2: Standard Stochastic State Realization (SSSR) The Stochastic Differential
Equation: ( ) ( ) ( ) ( ) [ ]fnxnxnnxnx ttttndtxGdttxftxd ,,, 0111 ∈+= ( ) ( ) ( ) ( ) ( ) ( ) 0===+= tndEtndEtndEtndtndtnd pgpg we can write ( ) ( ) ( ) ( ) ( ) ( )sttQswtwEtd tndtw Tg −== δ ( )tnd g ( ) ( ) ( ) dttQtntndE nxnT gg =Wiener (Gauss) Process ( )tnd p Poisson Process ( ) ( ) = na a a Tpp n tntndE λσ λσ λσ 2 22 12 00 00 00 2 1 (1) where is
independent of( ) 00 xtx = 0x ( )tnd (2) is Holder Continuous in t, Lipschitz Continuous in ( )txGnxn , x( ) ( )txGtxG T nxnnxn ,, is strictly Positive Definite( ) ( ) ji ij i ij xx txG x txG ∂∂∂ ∂∂ , ;, 2 are Globally Lipschitz Continuous in x, continuous in t, and globally bounded. (3) The vector f (x,t) is Continuous in t and Globally Lipschitz Continuous in , and ∂fi/
∂xi are Globally Lipschitz Continuous in , and continuous in t. x x The Stochastic Differential Equation is called a Standard Stochastic State Realization (SSSR) Table of Content Stochastic ProcessesSOLO Lévy Process In probability theory, a Lévy process, named after the French mathematician Paul Lévy, is any continuous-time stochastic process Paul
Pierre Lévy1886 - 1971 A Stochastic Process X = Xt: t ≥ 0 is said to be a Lévy Process if: 1.
X0 = 0 almost surely (with probability one). 2. Independent increments: For any 0 ≤ t1 < t2 < ... < tn < ∞, the increments Xt2 − Xt1, Xt3 − Xt2, ..., Xtn − Xtn−1 are independent. 3. Stationary increments: For any s < t, Xt − Xs is equal in distribution to Xt−s. 4. Xt is almost surely right continuous with left limits. Independent increments: A continuous-time stochastic process assigns a random variable Xt to each point t ≥ 0 in time. In
effect it is a random function of t.
The increments of such a process are the differences Xs − Xt between its values at different times t < s. To call the increments of a process independent means that increments Xs − Xt and Xu − Xv are independent random variables whenever the two time intervals do not overlap and, more generally, any finite number of increments assigned to
pairwise non-overlapping time intervals are mutually (not just pairwise) independent. Stochastic Processes SOLO Lévy Process (continue – 1) Paul Pierre Lévy 1886 - 1971 A Stochastic Process X = Xt: t ≥ 0 is said to be a Lévy Process if: 1. X0 = 0 almost surely (with probability one). 2. Independent increments: For any 0 ≤ t1 < t2 < ... < tn < ∞, the increments Xt2 − Xt1, ..., Xtn − Xtn−1 are independent. 3. Stationary increments: For any s < t, Xt − Xs is equal in distribution to Xt−s. 4. Xt is almost surely right continuous with left limits. Stationary increments: To call the increments stationary means that the probability distribution of any increment Xt − Xs depends only on the length t − s of the time interval; increments with equally long time intervals are
identically distributed. In the Wiener process, the probability distribution of Xs − Xt is normal with expected value 0 and variance s − t. In the (homogeneous) Poisson process, the probability distribution of Xs − Xt is a Poisson distribution with expected value λ(s − t), where λ > 0 is the "intensity" or "rate" of the process. Stochastic ProcessesSOLO
Lévy Process (continue – 2) Paul Pierre Lévy1886 - 1971 A Stochastic Process X = Xt: t ≥ 0 is said to be a Lévy Process if: 1. X0 = 0 almost surely (with probability one).
2.
Independent increments: For any , are independent. 3. Stationary increments: For any t < s, Xt – Xs is equal in distribution to X t-s . 4. is almost surely right continuous with left limits. DivisibilityLévy processes correspond to infinitely divisible probability distributions:The probability distributions of the increments of any Lévy process are infinitely
divisible, since the increment of length t is the sum of n increments of length t/n, which are i.i.d. by assumption (independent increments and stationarity). Conversely, there is a Lévy process for each infinitely divisible probability distribution: given such a distribution D, multiples and dividing define a stochastic process for positive rational time,
defining it as a Dirac delta distribution for time 0 defines it for time 0, and taking limits defines it for real time. Independent increments and stationarity follow by assumption of divisibility, though one must check continuity and that taking limits gives a well-defined function for irrational time. Table of Content Stochastic ProcessesSOLO Martingale
Originally, martingale referred to a class of betting strategies that was popular in 18th century France. The simplest of these strategies was designed for a game in which the gambler wins his stake if a coin comes up heads and loses it if the coin comes up tails. The strategy had the gambler double his bet after every loss so that the first win would
recover all previous losses plus win a profit equal to the original stake. As the gambler's wealth and available time jointly approach infinity, his probability of eventually flipping heads approaches 1, which makes the martingale betting strategy seem like a sure thing. However, the exponential growth of the bets eventually bankrupts its users History of
Martingale The concept of martingale in probability theory was introduced by Paul Pierre Lévy, and much of the original development of the theory was done by Joseph Leo Doob. Part of the motivation for that work was to show the impossibility of successful betting strategies. Paul Pierre Lévy 1886 - 1971 Joseph Leo Doob 1910 - 2004
Stochastic Processes SOLO Martingale In probability theory, a martingale is a stochastic process (i.e., a sequence of random variables) such that the conditional expected value of an observation at some time t, given all the observations up to some earlier time s, is equal to the observation at that earlier time s. A discrete-time martingale is a discrete-time stochastic process (i.e., a sequence of random variables) X1, X2, X3, … that satisfies for all n

E[Xn+1 | X1, …, Xn] = Xn,   E[|Xn|] < ∞,

i.e., the conditional expected value of the next observation, given all the past observations, is equal to the last observation. Somewhat more generally, a sequence Y1, Y2, Y3, … is said to be a martingale with respect to another sequence X1, X2, X3, … if for all n

E[Yn+1 | X1, …, Xn] = Yn.

Similarly, a continuous-time martingale with respect to the stochastic process Xt is a stochastic process Yt such that for all t

E[Yt | {Xτ, τ ≤ s}] = Ys,   s ≤ t.

This expresses the property that the conditional expectation of an observation at time t, given all the observations up to time s, is equal to the observation at time s (of course, provided that s ≤ t).
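A minimal numerical illustration of the discrete-time definition, assuming a symmetric ±1 coin-flip random walk (an illustrative choice, not a process discussed in the text): the empirical conditional mean of Xn+1 given Xn = x stays close to x.

import numpy as np

rng = np.random.default_rng(1)
paths, n = 100000, 20
steps = rng.choice([-1, 1], size=(paths, n))   # fair coin: +1 / -1 with prob 1/2
X = np.cumsum(steps, axis=1)                   # X_k = sum of the first k steps

# Empirical conditional expectation E[X_{11} | X_{10} = x] for each observed level x.
Xn, Xnp1 = X[:, 10], X[:, 11]
for x in np.unique(Xn):
    sel = Xn == x
    if sel.sum() > 500:                        # only levels with enough samples
        print(f"E[X11 | X10 = {x:+d}] ~ {Xnp1[sel].mean():+.3f}")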
Stochastic Processes SOLO Martingale In full generality, a stochastic process Y : T × Ω → S is a martingale with respect to a filtration Σ∗ and probability measure P if
* Σ∗ is a filtration of the underlying probability space (Ω, Σ, P);
* Y is adapted to the filtration Σ∗, i.e., for each t in the index set T, the random variable Yt is a Σt-measurable function;
* for each t, Yt lies in the Lp space L1(Ω, Σt, P; S), i.e. E[|Yt|] < ∞;
* for all s and t with s < t and all F ∈ Σs,

E[(Yt − Ys) χF] = 0,

where χF denotes the indicator function of the event F. In Grimmett and Stirzaker's Probability and Random Processes, this last condition is denoted as

E(Yt | Σs) = Ys,

which is a general form of conditional expectation. It is important to note that the property of being a martingale involves both the filtration and the probability measure (with respect to which the expectations are taken). It is possible that Y could be a martingale with respect to one measure but not another one; the Girsanov theorem offers a way to find a measure with respect to which an Itō process is a martingale. Table of Content Stochastic Processes SOLO Chapman –
Kolmogorov Equation Sydney Chapman 1888 – 1970 Andrey Nikolaevich Kolmogorov 1903 – 1987 Suppose that {fi} is an indexed collection of random variables, that is, a stochastic process. Let p_i1,…,in(f1,…,fn) be the joint probability density function of the values of the random variables f1 to fn. Then, the Chapman–Kolmogorov equation is

p_i1,…,in−1(f1,…,fn−1) = ∫ p_i1,…,in(f1,…,fn) dfn.

Note that we have not yet assumed anything about the temporal (or any other) ordering of the random variables -- the above equation applies equally to the marginalization of any of them. Particularization to Markov Chains When the stochastic process under consideration is Markovian, the Chapman–Kolmogorov equation is equivalent to an identity on transition densities. In the Markov chain setting, one assumes that i1 < … < in. Then, because of the Markov property,

p_i1,…,in(f1,…,fn) = p_i1(f1) p_i2;i1(f2 | f1) ⋯ p_in;in−1(fn | fn−1),

where the conditional probability p_ii;ij(fi | fj) is the transition probability between the times i > j. So, the Chapman–Kolmogorov equation takes the form

p_i3;i1(f3 | f1) = ∫ p_i3;i2(f3 | f2) p_i2;i1(f2 | f1) df2.

When the probability distribution on the state space of a Markov chain is discrete and the Markov chain is homogeneous, the Chapman–Kolmogorov equations can be expressed in terms of (possibly infinite-dimensional) matrix multiplication, thus:

P(t + s) = P(t) P(s),

where P(t) is the transition matrix, i.e., if Xt is the state of the process at time t, then for any two points i and j in the state space, we have [P(t)]ij = P(Xt = j | X0 = i).
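The matrix form of the identity is easy to check numerically for a homogeneous chain. In the Python sketch below the 3-state transition matrix is an illustrative assumption; the check P(t+s) = P(t)P(s) holds for any such matrix.

import numpy as np

# Illustrative one-step transition matrix of a homogeneous 3-state Markov chain.
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.1, 0.3, 0.6]])

def P_n(n):
    # n-step transition matrix P(n) = P^n
    return np.linalg.matrix_power(P, n)

t, s = 4, 3
lhs = P_n(t + s)
rhs = P_n(t) @ P_n(s)            # Chapman-Kolmogorov: P(t+s) = P(t) P(s)
print(np.allclose(lhs, rhs))     # True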
Stochastic Processes SOLO Chapman – Kolmogorov Equation (continue – 1) Particularization to Markov Processes Let p_x(t)|x(t0)(x,t | x0,t0) be the probability density function of the Markov process x(t) given that x(t0) = x0, and t0 < t; then, for any intermediate time t0 < t2 < t,

p(x,t | x0,t0) = ∫ p(x,t | x2,t2) p(x2,t2 | x0,t0) dx2.

Geometric Interpretation of the Chapman – Kolmogorov Equation Table of Content Stochastic Processes SOLO Kiyosi Itô 1915 – 2008 In 1942, Itô began to reconstruct from scratch the concept of stochastic integrals, and its associated theory of analysis. He created the theory of stochastic
differential equations, which describe motion due to random events. In 1945 Ito was awarded his doctorate. He continued to develop his ideas on stochastic analysis with many important papers on the topic. Among them were “On a stochastic integral equation” (1946), “On the stochastic integral” (1948), “Stochastic differential equations in a
differentiable manifold” (1950), “Brownian motions in a Lie group” (1950), and “On stochastic differential equations” (1951). Itô Lemma and Itô Processes In its simplest form, Itô's lemma states that for an Itô process

dXt = μt dt + σt dBt

and any twice continuously differentiable function f on the real numbers, f(X) is also an Itô process satisfying

df(Xt) = f′(Xt) dXt + ½ f″(Xt) σt σtᵀ dt = f′(Xt) σt dBt + [f′(Xt) μt + ½ f″(Xt) σt σtᵀ] dt.

Or, more generally: let X(t) be an Itô process given by the equation above, and let f(t,x) be a function with continuous first- and second-order partial derivatives; then by Itô's lemma

df(t,Xt) = [∂f/∂t + μt ∂f/∂x + ½ σt σtᵀ ∂²f/∂x²] dt + σt (∂f/∂x) dBt.

SOLO Stochastic Processes
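A quick numerical sanity check of the lemma (a sketch with illustrative constant coefficients, not a statement from the text): simulate dX = μ dt + σ dB with the Euler–Maruyama scheme, integrate the df given by Itô's lemma for the test function f(x) = x² along the same noise path, and compare with f(X_T) computed directly.

import numpy as np

rng = np.random.default_rng(2)
T, n = 1.0, 100000
dt = T / n
mu, sigma = 0.2, 0.4            # illustrative constant coefficients
f   = lambda x: x * x           # twice continuously differentiable test function
fx  = lambda x: 2 * x           # f'
fxx = 2.0                       # f''

X, F = 1.0, f(1.0)
for _ in range(n):
    dB = np.sqrt(dt) * rng.standard_normal()
    # Ito's lemma: df = (f' mu + 0.5 f'' sigma^2) dt + f' sigma dB
    F += (fx(X) * mu + 0.5 * fxx * sigma**2) * dt + fx(X) * sigma * dB
    X += mu * dt + sigma * dB   # Euler-Maruyama step for dX = mu dt + sigma dB

print("f(X_T) directly  :", f(X))
print("f via Ito's lemma:", F)  # the two agree up to O(dt) discretization error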
Itô Lemma and Itô processes (continue – 1) Informal derivation A formal proof of the lemma requires us to take the limit of a sequence of random variables, which is not done here. Instead, we can derive Itô's lemma by expanding a Taylor series and applying the rules of stochastic calculus. Assume the Itō process is in the form

dx = a dt + b dB.

Expanding f(x,t) in a Taylor series in x and t we have

df = ∂f/∂t dt + ∂f/∂x dx + ½ ∂²f/∂x² dx² + …

and substituting a dt + b dB for dx gives

df = ∂f/∂t dt + ∂f/∂x (a dt + b dB) + ½ ∂²f/∂x² (a² dt² + 2ab dt dB + b² dB²) + …

In the limit as dt tends to 0, the dt² and dt dB terms disappear but the dB² term tends to dt; the latter can be shown by proving that E[(dB² − dt)²] → 0, since E[dB²] = dt. Deleting the dt² and dt dB terms, substituting dt for dB², and collecting the dt and dB terms, we obtain

df = (∂f/∂t + a ∂f/∂x + ½ b² ∂²f/∂x²) dt + b (∂f/∂x) dB

as required. SOLO Stochastic Processes Table of Content Ruslan L. Stratonovich (1930 – 1997) Stratonovich invented a stochastic calculus which serves as an alternative to the Itô calculus; the
Stratonovich calculus is most natural when physical laws are being considered. The Stratonovich integral appears in his stochastic calculus. He also solved the problem of optimal non-linear filtering based on his theory of conditional Markov processes, which was published in his papers in 1959 and 1960. The Kalman-Bucy (linear) filter (1961) is a
special case of Stratonovich's filter. He also developed the value of information theory (1965). His latest book was on non-linear non-equilibrium thermodynamics. SOLO Stratonovich Stochastic Calculus Stochastic Processes Table of Content A solution to the one-dimensional Fokker–Planck equation, with both the drift and the diffusion term. The
initial condition is a Dirac delta function in x = 1, and the distribution drifts towards x = 0. The Fokker–Planck equation describes the time evolution of the probability density function of the position of a particle, and can be generalized to other observables as well. It is named after Adriaan Fokker and Max Planck and is also known as the Kolmogorov
forward equation. The first use of the Fokker–Planck equation was the statistical description of Brownian motion of a particle in a fluid. In one spatial dimension x, the Fokker–Planck equation for a process with drift D1(x,t) and diffusion D2(x,t) is More generally, the time-dependent probability distribution may depend on a set of N macrovariables xi.
More generally, the time-dependent probability distribution may depend on a set of N macrovariables xi. The general form of the Fokker–Planck equation is then

∂f/∂t = −Σi ∂/∂xi [D1i(x1,…,xN) f] + Σi Σj ∂²/∂xi∂xj [D2ij(x1,…,xN) f],

where D1 is the drift vector and D2 the diffusion tensor; the latter results from the presence of the stochastic force. Fokker – Planck Equation Adriaan Fokker 1887 – 1972 Max Planck 1858 – 1947 SOLO Adriaan Fokker, „Die mittlere Energie rotierender elektrischer Dipole im Strahlungsfeld“,
Annalen der Physik 43 (1914), 810–820; Max Planck, „Über einen Satz der statistischen Dynamik und seine Erweiterung in der Quantentheorie“, Sitzungsberichte der Preussischen Akademie der Wissenschaften (1917), 324–341. Stochastic Processes Fokker – Planck Equation (continue – 1) The Fokker–Planck equation can be used for computing the probability densities of stochastic differential equations

dXt = μ(Xt,t) dt + σ(Xt,t) dWt,

where Xt is the state and Wt is a standard M-dimensional Wiener process. If the initial probability distribution is p(x,0), then the probability distribution of the state is given by the Fokker–Planck equation with the drift and diffusion terms

D1(x,t) = μ(x,t),   D2(x,t) = ½ σ(x,t) σᵀ(x,t).

Similarly, a Fokker–Planck equation can be derived for Stratonovich stochastic differential equations. In this case, noise-induced drift terms appear if the noise strength is state-dependent. SOLO Consider the Itô
stochastic differential equation dXt = μ(Xt,t) dt + σ(Xt,t) dWt given above; its transition density then satisfies the Fokker–Planck equation with D1 = μ and D2 = ½ σσᵀ. Stochastic Processes Fokker – Planck Equation (continue – 2) Derivation of the Fokker–Planck Equation SOLO Start with the Chapman–Kolmogorov equation, using t − Δt as the intermediate time:

p_x(t)[x(t)] = ∫ p_x(t)|x(t−Δt)[x(t) | x(t−Δt)] p_x(t−Δt)[x(t−Δt)] d x(t−Δt).

Let us use the characteristic function of the increment x(t) − x(t−Δt),

Φ_Δx(s) := E{ exp[−s (x(t) − x(t−Δt))] | x(t−Δt) } = ∫ exp[−s (x(t) − x(t−Δt))] p_x(t)|x(t−Δt)[x(t) | x(t−Δt)] d x(t);

the inverse transform is

p_x(t)|x(t−Δt)[x(t) | x(t−Δt)] = (1/2πj) ∫_{−j∞}^{+j∞} exp[s (x(t) − x(t−Δt))] Φ_Δx(s) ds.

The characteristic function can be expressed in terms of the moments of the increment about x(t−Δt) as

Φ_Δx(s) = 1 + Σ_{i=1}^{∞} [(−s)^i / i!] E{ [x(t) − x(t−Δt)]^i | x(t−Δt) }.

Substituting this expansion into the Chapman–Kolmogorov equation and using the Dirac delta identity

(1/2πj) ∫_{−j∞}^{+j∞} s^i exp[s (x(t) − x(t−Δt))] ds = (−1)^i ∂^i δ[x(t) − x(t−Δt)] / ∂x(t)^i,   i = 0,1,2,…,

the delta functions can be integrated out. Rearranging, dividing by Δt, and taking the limit Δt → 0, we obtain

∂p_x(t)[x(t)] / ∂t = Σ_{i=1}^{∞} [(−1)^i / i!] ∂^i { m_i[x(t),t] p_x(t)[x(t)] } / ∂x(t)^i,

where the derivate moments are defined as

m_i[x(t),t] := lim_{Δt→0} E{ [x(t) − x(t−Δt)]^i | x(t−Δt) } / Δt.

This equation is called the Stochastic Equation or Kinetic Equation. It is a partial differential equation that we must solve, with the initial condition p_x(t=0)[x(0)] = p_0[x(0)]. We want to find p_x(t)[x(t)] where x(t) is the solution of

dx(t)/dt = f(x,t) + n_g(t),   t ∈ [t0, tf],

with n_g(t) a Wiener (Gauss) process, E{n_g(t)} = 0 and E{[n_g(t) − n̂_g(t)][n_g(τ) − n̂_g(τ)]} = Q(t) δ(t − τ). For this process

m_1[x(t),t] = f(x,t),   m_2[x(t),t] = Q(t),   m_i[x(t),t] = 0 for i > 2.

Therefore we obtain

∂p_x(t)[x(t)] / ∂t = −∂{ f(x,t) p_x(t)[x(t)] } / ∂x + ½ ∂²{ Q(t) p_x(t)[x(t)] } / ∂x².

Stochastic Processes Fokker–Planck Equation Kolmogorov forward equation (KFE) and its adjoint the Kolmogorov backward equation (KBE) Kolmogorov forward
equation (KFE) and its adjoint the Kolmogorov backward equation (KBE) are partial differential equations (PDE) that arise in the theory of continuous-time continuous-state Markov processes. Both were published by Andrey Kolmogorov in 1931. Later it was realized that the KFE was already known to physicists under the name Fokker–Planck
equation; the KBE on the other hand was new.
Kolmogorov forward equation addresses the following problem.
We have information about the state x of the system at time t (namely a probability distribution pt(x)); we want to know the probability distribution of the state at a later time s > t. The adjective 'forward' refers to the fact that pt(x) serves as the initial condition and the PDE is integrated forward in time. (In the common case where the initial state is
known exactly pt(x) is a Dirac delta function centered on the known initial state). Kolmogorov backward equation on the other hand is useful when we are interested at time t in whether at a future time s the system will be in a given subset of states, sometimes called the target set. The target is described by a given function us(x) which is equal to 1 if
state x is in the target set and zero otherwise. We want to know for every state x at time t (t < s) what is the probability of ending up in the target set at time s (sometimes called the hit probability). In this case us(x) serves as the final condition of the PDE, which is integrated backward in time, from s to t:

−∂p(x,t)/∂t = D1(x,t) ∂p(x,t)/∂x + D2(x,t) ∂²p(x,t)/∂x²

for t ≤ s, subject to the final condition p(x,s) = us(x). The corresponding forward equation is

∂p(x,t)/∂t = −∂/∂x [D1(x,t) p(x,t)] + ∂²/∂x² [D2(x,t) p(x,t)].

Andrey Nikolaevich Kolmogorov 1903 – 1987 SOLO Stochastic Processes Kolmogorov forward equation (KFE) and its adjoint the Kolmogorov backward equation (KBE) (continue – 1) Kolmogorov backward equation on
the other hand is useful when we are interested at time t in whether at a future time s the system will be in a given subset of states, sometimes called the target set. The target is described by a given function us(x) which is equal to 1 if state x is in the target set and zero otherwise. We want to know for every state x at time t (t < s) what is the
probability of ending up in the target set at time s (sometimes called the hit probability).
In this case us(x) serves as the final condition of the PDE, which is integrated backward in time, from s to t. Formulating the Kolmogorov backward equation Assume that the system state x(t) evolves according to the stochastic differential equation

dx(t) = μ(x,t) dt + σ(x,t) dW(t);

then the Kolmogorov backward equation is, using Itô's lemma on p(x,t):

∂p(x,t)/∂t + μ(x,t) ∂p(x,t)/∂x + ½ σ²(x,t) ∂²p(x,t)/∂x² = 0.
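As a hedged illustration of the backward viewpoint, the hit probability u(x,t) = E[us(x(s)) | x(t) = x] can also be estimated by Monte Carlo simulation of the SDE started from each state; the drift, diffusion and target set in the Python sketch below are illustrative assumptions, not taken from the text.

import numpy as np

rng = np.random.default_rng(3)
mu = lambda x: -x               # illustrative drift
sigma = 0.5                     # illustrative constant diffusion
t, s, nsteps, npaths = 0.0, 1.0, 200, 20000
dt = (s - t) / nsteps
target = lambda x: (x > 1.0).astype(float)   # u_s(x) = 1 on the target set {x > 1}

def hit_probability(x0):
    # Estimate u(x0, t) = E[u_s(X_s) | X_t = x0] by simulating the SDE forward.
    x = np.full(npaths, x0)
    for _ in range(nsteps):
        x += mu(x) * dt + sigma * np.sqrt(dt) * rng.standard_normal(npaths)
    return target(x).mean()

for x0 in (0.0, 0.5, 1.0, 1.5):
    print(f"u({x0}, t) ~ {hit_probability(x0):.3f}")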
Table of Content Bartlett-Moyal Theorem SOLO Stochastic Processes Let Φ_x(t)|x(t1)(s,t) be the characteristic function of the Markov process x(t), t ∈ T (some interval),

Φ_x(t)|x(t1)(s,t) = E{ exp[−sᵀ x(t)] | x(t1) } = ∫ exp[−sᵀ x(t)] p_x(t)|x(t1)[x(t) | x(t1)] d x(t),

i.e. the characteristic function of the transition density p_x(t)|x(t1)[x(t) | x(t1)], t > t1. Maurice Stevenson Bartlett 1910 – 2002 Jose Enrique Moyal 1910 – 1998 Theorem 1 Assume the following: (1) Φ_x(t)|x(t1)(s,t) is continuously differentiable in t, t ∈ T; (2) (1/Δt) E{ exp[−sᵀ(x(t+Δt) − x(t))] − 1 | x(t) } ≤ g(s; x(t), t), where E|g| is bounded on T; (3) the limit

φ(s; x(t), t) := lim_{Δt→0} (1/Δt) E{ exp[−sᵀ(x(t+Δt) − x(t))] − 1 | x(t) }

exists. Then

∂Φ_x(t)|x(t1)(s,t)/∂t = E{ exp[−sᵀ x(t)] φ(s; x(t), t) | x(t1) }.

Bartlett-Moyal Theorem SOLO Stochastic Processes
|,111 Proof By definition ( ) ( ) ( ) ( ) ( ) ( ) ( ) ( )[ ] ( ) ( ) ( ) ( ) ( ) 1|1|| |exp|exp,111 txtxsEtxdtxtxptxsts Ttxtxtxtx Ttxtx −=−=Φ ∫ +∞ ∞− ( ) ( ) ( ) ( ) ( ) ( ) ( ) ( )[ ] ( )∫+∞ ∞− ∆+∆+∆+−=∆+Φ ttxdtxttxpttxstts txtxT txtx 1|| |exp,11 But since x (t) is a Markov process, we can use the Chapman-Kolmogorov Equation ( ) ( ) ( ) ( )[ ] ( ) ( ) ( ) ( )[ ] ( ) ( ) ( ) ( )[ ] ( )∫
∆+=∆+ txdtxtxptxttxptxttxp txtxtxtxtxtx 1||1| |||111 ( ) ( ) ( ) ( ) ( ) ( ) ( ) ( )[ ] ( ) ( ) ( ) ( )[ ] ( ) ( )∫ ∫+∞ ∞− ∆+∆+∆+−=∆+Φ ttxdtxdtxtxptxttxpttxstts txtxtxtxT txtx 1||| ||exp,111 ( ) ( ) ( ) ( ) ( )[ ] ( ) ( )[ ] ( ) ( ) ( ) ( )[ ] ( ) ( )txdttxdtxttxptxttxstxtxptxs txtxT txtxT∫ ∫ ∆+∆+−∆+−−= |exp|exp 11 |1| ( ) ( ) ( )[ ] ( ) ( ) ( ) ( )[ ]( ) ( )[ ] ( ) 1|| ||expexp11
txtxtxttxsEtxsE Ttxtx Ttxtx −∆+−⋅−= Bartlett-Moyal Theorem SOLO Stochastic Processes ( )( ) ( )( ) ( )( )t txtstxtts t txts xx t x ∆Φ−∆+Φ= ∂Φ∂ →∆ 11 0 1 |,|,lim |, Proof (continue – 1) We found ( ) ( ) ( ) ( ) ( ) ( ) ( ) ( )[ ] ( ) ( ) ( ) ( ) ( ) 1|1|| |exp|exp,111 txtxsEtxdtxtxptxsts Ttxtxtxtx Ttxtx −=−=Φ ∫ +∞ ∞− ( ) ( ) ( ) ( ) ( ) ( ) ( ) ( )[ ] ( )∫+∞ ∞−∆− ∆+∆+∆+
−=∆+Φ ttxdtxttxpttxstts ttxtx Ttxtx 1|| |exp,1 ( ) ( ) ( )[ ] ( ) ( ) ( ) ( )[ ]( ) ( )[ ] ( ) 1|| ||expexp11 txtxtxttxsEtxsE Ttxtx Ttxtx −∆+−⋅−= Therefore ( ) ( ) ( )[ ] ( ) ( ) ( ) ( )[ ]( ) ( )[ ] ( ) ( ) ( ) ( )[ ] ( ) ( ) ( ) ( )[ ]( ) ( )[ ] ( )( ) ( ) ( ) ( ) ( )[ ] ( )( ) ( ) 1| 1 ,; | 0| 1| 0| |,;exp ||1exp limexp |1|exp limexp 1 1 1 1 1 txtxtstxsE txt txtxttxsEtxsE txt txtxttxsEtxsE Ttxtx txts Ttxtx
q.e.d. Bartlett-Moyal Theorem SOLO Stochastic Processes Discussion about the Bartlett-Moyal Theorem (1) The assumption that x(t) is a Markov process is essential to the derivation. (2) The function

φ(s; x(t), t) := E{ exp[−sᵀ dx] − 1 | x(t) } / dt

is called the Itô Differential of the Markov Process, or the Infinitesimal Generator of the Markov Process. (3) The function φ(s; x(t), t) is all we need to define the stochastic process (this will be proven in the next Lemma). Bartlett-Moyal Theorem SOLO Stochastic Processes Lemma Let x(t) be an (n×1) vector Markov process generated by dx = f(x,t) dt + dn, where dn = dn_g + dn_p and dn_p -
is an (nx1) Poisson Process with Zero Mean and Rate Vector and Jump Probability Density pa(α). gnd - is an (nx1) Wiener (Gauss) Process with Zero Mean and Covariance( ) ( ) ( ) dttQtndtndE T gg = then ( )( ) ( ) ( )[ ]∑= −−−−=n iiai TT sMsQstxfstxtsi 1 12 1,,; λφ Proof We have ( )( ) ( ) ( ) ( ) ( )[ ] ( ) ( ) ( )( )[ ] ( ) td txndnddttxfsE td txxdsEtxts pg Ttxtx
Ttxtx |1,exp|1exp :,; 11 || −++−= −−=φ ( ) ( ) ( )( )[ ] ( ) ( )[ ] [ ] [ ] pT gTT pgT txtx ndsEndsEdttxfstxndnddttxfsE −−−=++− expexp,exp|,exp1| Because are independentpg ndndxd ,, [ ] ( ) ( )dtdtdtdtndinjumponeonlyP i n ijjii 01 +=−= ∏ ≠ λλλ Bartlett-Moyal Theorem SOLO Stochastic Processes Lemma Let x(t) be an (nx1) Vector Markov Process
generated by ( ) pg ndnddttxfxd ++= , then ( )( ) ( ) ( )[ ]∑= −−−−=n iiai TT sMsQstxfstxtsi 1 12 1,,; λφ Proof (continue – 1) Because is Gaussiangnd [ ] −=− dtsQsndsE T gT 2 1expexp The Characteristic Function of the Generalized Poisson Process can be evaluated as follows. Let note that the Probability of two or more jumps occurring at dt is
0(dt)→0 [ ] [ ] [ ] [ ]∑= −+⋅=−n iiiip T ndinjumponeonlyPasEjumpsnoPndsE1 exp1exp But [ ] ( ) ( )dtdtdtjumpsnoPn ii n ii 011 11 +−=−= ∑∏== λλ [ ] ( ) ( )dtdtdtdtndinjumponeonlyP i n ijjii 01 +=−= ∏ ≠ λλλ [ ] [ ] ( ) ( ) ( )[ ]∑∑∑=== −−=+−+−=−n iiai n ii sM ii n iip T sMdtdtdtasEdtndsEi iia111 110exp1exp λλλ Bartlett-Moyal Theorem SOLO
Stochastic Processes Lemma Let x(t) be an (nx1) Vector Markov Process generated by ( ) pg ndnddttxfxd ++= , then ( )( ) ( ) ( )[ ]∑= −−−−=n iiai TT sMsQstxfstxtsi 1 12 1,,; λφ Proof (continue – 3) We found [ ] −=− dtsQsndsE T gT 2 1expexp [ ] [ ] ( ) ( ) ( )[ ]∑∑∑=== −−=+−+−=−n iiai n ii sM ii n iip T sMdtdtdtasEdtndsEi ita111 110exp1exp λλλ ( )
( ) ( ) ( ) ( ) ( )[ ] ( ) ( ) ( )( )[ ] ( ) ( )[ ] [ ] [ ] ( )[ ] ( )[ ]td sMdtdtsQsdttxfs td ndsEndsEdttxfs td txndnddttxfsE td txxdsEtxts n iiai TT pT gTT pgT txtxT txtx i111 21 exp,exp1expexp,exp |1,exp|1exp:,; 1 || 11 − −− −− =−−−− = −++−= −−= ∑= λ φ ( ) ( )[ ] ( ) ( )[ ]( ) ( )[ ]∑ ∑= = −−−−=− −− +−+− =n iiai TT n iiai TT sMdtsQstxfstd sMdtdtdtsQsdtdttxfs i
i 1 1 22 12 1, 111021 10,1 λλ q.e.d. Bartlett-Moyal Theorem SOLO Stochastic Processes Theorem 2 Let x(t) be an (nx1) Vector Markov Process generated by ( ) pg ndnddttxfxd ++= , ( ) [ ]∑∑∑∑== == ∗+−+∂∂ ∂+ ∂∂−= ∂∂ n iai n i n j ji ijn i i ii pppxx pQ x pf t p 11 1 2 1 2 1 λ Let be the Transition Probability Density Function for the Markov Process
x(t).
Then p satisfies the Partial Differential Equation ( ) ( ) ( ) ( )( ) ptxttxp txtx =1| |,1 where the convolution (*) is defined as ( ) ( ) ( ) ( )( )∫ −=∗ initxtxiiaa vdtxsvspvspppii 11| |,,,,: 1 ProofFrom Theorem 1 and the previous Lemma, we have: ( ) ( ) ( )( )( ) ( ) ( ) ( )( ) ( ) ( ) ( ) ( ) ( ) ( )[ ] ( ) −−−−−= −=∂ Φ∂ ∑= 11 | 1| 11| |12 1,exp |,;exp|, 1 1 1
txsMsQstxfstxsE txtxtstxsEt txts n iiai TTTtxtx Lemma Ttxtx Theoremtxtx iλ φ ( ) ( ) ( ) ( ) ( ) ( ) ( ) ( )[ ] ( ) ( ) ( ) ( ) ( )[ ] ( ) ( ) ( ) ( ) ( )∫∫∞+ ∞− +∞ ∞− Φ=⇔−=Φj j txtxT ntxtxtxtxT txtx sdtstxsj txttxptxdtxttxptxsts ,exp2 1|,|,exp, 1111 |1|1|| π ( ) ( ) ( ) ( )[ ] ( ) ( ) ( ) ( ) ( )∫∞+ ∞− Φ∂∂= ∂∂ j j txtxT ntxtx sdtst txsj txttxpt ,exp2 1|, 11 |1| π We also have:
Bartlett-Moyal Theorem SOLO Stochastic Processes Theorem 2 Let x(t) be an (nx1) Vector Markov Process generated by ( ) pg ndnddttxfxd ++= , ( ) [ ]∑∑∑∑== == ∗+−+∂∂ ∂+ ∂∂−= ∂∂ n iai n i n j ji ijn i i ii pppxx pQ x pf t p 11 1 2 1 2 1 λ Let be the Transition Probability Density Function for the Markov Process x(t). Then p satisfies the Partial
Differential Equation ( ) ( ) ( ) ( )( ) ptxttxp txtx =1| |,1 Proof (continue – 1) ( ) ( ) ( )( )( ) ( ) ( ) ( )( ) ( ) ( ) ( ) ( ) ( ) ( )[ ] ( ) −−−−−=−= ∂Φ∂ ∑= 11 |1| 11| |1 2 1,exp|,;exp |,11 1 txsMsQstxfstxsEtxtxtstxsEt txts n iiai TTTtxtx LemmaT txtx Theoremtxtx iλφ ( ) ( ) ( ) ( )[ ] ( ) ( ) ( ) ( ) ( )∫∞+ ∞− Φ∂∂= ∂∂ j j txtxT ntxtx sdtst txsj txttxpt ,exp2 1|, 11 |1| π ( ) ( ) ( ) (
) ( ) ( )[ ] ( ) ( ) ( ) ( ) ( ) ( ) ( )( ) ( )[ ] ( )[ ] ( ) ( ) ( ) ( ) ( ) ( ) ( )( ) ( )[ ] ( ) ( ) ( ) ( ) ( ) ( ) ( )( ) ( ) ( ) ( ) ( ) ( )( )[ ] ( ) ( ) ( ) ( ) ( )( )[ ]1|1 1|1| 1| 1| 1| |,|, |,exp2 1 exp|,exp2 1 ,exp|exp2 1 |,expexp2 1 1 1 1 1 1 1 txtxptxfx txtxptxfsdtxtxptxfLstxs j sdvdtvstxtvptvfstxsj sdvdtvfstvstxtvptxsj sdtxtxfstxsEtxsj txtxix n i i txtxij j txtxTT n j j Ttxtx TTn j j TTtxtx Tn j j TTtxtx
Tn ∇=∂ ∂=−= −−= −−= −− ∑∫ ∫ ∫ ∫ ∫ ∫ = ∞+ ∞− ∞+ ∞− ∞+ ∞− ∞+ ∞− π π π π Bartlett-Moyal Theorem SOLO Stochastic Processes Theorem 2 Let x(t) be an (nx1) Vector Markov Process generated by ( ) pg ndnddttxfxd ++= , ( ) [ ]∑∑∑∑== == ∗+−+∂∂ ∂+ ∂∂−= ∂∂ n iai n i n j ji ijn i i ii pppxx pQ x pf t p 11 1 2 1 2 1 λ Let be the Transition
Probability Density Function for the Markov Process x(t). Then p satisfies the Partial Differential Equation ( ) ( ) ( ) ( )( ) ptxttxp txtx =1| |,1 Proof (continue – 2) ( ) ( ) ( )( )( ) ( ) ( ) ( )( ) ( ) ( ) ( ) ( ) ( ) ( )[ ] ( ) −−−−−=−= ∂Φ∂ ∑= 11 |1| 11| |1 2 1,exp|,;exp |,11 1 txsMsQstxfstxsEtxtxtstxsEt txts n iiai TTTtxtx LemmaT txtx Theoremtxtx iλφ ( ) ( ) ( ) ( )[ ] ( ) (
) ( ) ( ) ( )∫∞+ ∞− Φ∂∂= ∂∂ j j txtxT ntxtx sdtst txsj txttxpt ,exp2 1|, 11 |1| π ( ) ( ) ( ) ( ) ( ) ( ) ( ) ( ) ( ) ( ) ( ) ( ) ( )( ) ( )[ ] ( ) ( ) ( ) ( ) ( ) ( ) ( ) ( )( ) ( )[ ] ( ) ( ) ( ) ( ) ( ) ( ) ( )( ) ( ) ( ) ( ) ( ) ( )( )[ ]∑∑∫ ∫ ∫ ∫ ∫ ∫ = = ∞+ ∞− ∞+ ∞− ∞+ ∞− ∞+ ∞− ∂∂∂ =−= −−= −= − n i n j ji txtxijj j txtxTT n j j Ttxtx TTn j j TTtxtx Tn j j TTtxtx Tn xx txtxptxQsdstxtxptQLstxs j
sdsvdtvstxtvptQstxsj sdvdstQstvstxtvptxsj sdtxstQstxsEtxsj 1 1 1|2 1| 1| 1| 1| |, 2 1|exp 2 1 exp|exp2 1 exp|exp2 1 |expexp2 1 1 1 1 1 1 π π π π Bartlett-Moyal Theorem SOLO Stochastic Processes Theorem 2 Let x(t) be an (nx1) Vector Markov Process generated by ( ) pg ndnddttxfxd ++= , ( ) [ ]∑∑∑∑== == ∗+−+∂∂ ∂+ ∂∂−= ∂∂ n iai n i n j ji ijn i i ii
pppxx pQ x pf t p 11 1 2 1 2 1 λ Let be the Transition Probability Density Function for the Markov Process x(t). Then p satisfies the Partial Differential Equation ( ) ( ) ( ) ( )( ) ptxttxp txtx =1| |,1 Proof (continue – 3) ( ) ( ) ( )( )( ) ( ) ( ) ( )( ) ( ) ( ) ( ) ( ) ( ) ( )[ ] ( ) −−−−−=−= ∂Φ∂ ∑= 11 |1| 11| |1 2 1,exp|,;exp |,11 1 txsMsQstxfstxsEtxtxtstxsEt txts n iiai
TTTtxtx LemmaT txtx Theoremtxtx iλφ ( ) ( ) ( ) ( )[ ] ( ) ( ) ( ) ( ) ( )∫∞+ ∞− Φ∂∂= ∂∂ j j txtxT ntxtx sdtst txsj txttxpt ,exp2 1|, 11 |1| π ( ) ( ) ( ) ( ) ( ) [ ] [ ] ( ) ( ) ( ) ( ) ( ) ( ) ( )( ) ( )[ ] [ ] [ ] ( ) ( ) [ ] [ ] ( ) ( ) ( ) ( )( ) ( )[ ] ( ) ( ) [ ] [ ] ( ) ( ) ( ) ( )( ) ( ) ( ) ( ) ( )( ) ( ) ( ) ( ) ( )( )∫∫ ∫ ∫ ∫ ∫ ∫ −−=−−−= −−−−= −−−= −−− ∞+ ∞− ∞+ ∞− ∞+ ∞− ∞+ ∞− initxtxiiaitxtxi
j j txtxiiiT n j j Ttxtxiii Tn j j iiiT txtxT n j j iiiT txtxT n vdtxsvspvsptxtxpsdtxtvpasELtxsj sdvdtvstxtvpasEtxsj sdvdasEtvstxtvptxsj sdtxasEtxsEtxsj i 11|1|1| 1| 1| 1| |,,,,||exp1exp2 1 exp|exp1exp2 1 exp1exp|exp2 1 |exp1expexp2 1 111 1 1 1 λλλπ λπ λπ λπ ( ) ( ) ( ) ( )( )∫ −=∗ initxtxiiaa vdtxsvspvspppii 11| |,,,,: 1Table of Content Fokker- Planck Equation
SOLO Stochastic Processes Feller- Kolmogorov Equation Let x(t) be an (n×1) vector Markov process generated by dx = f(x,t) dt + dn_p. Let p = p_x(t)|x(t1)(x,t | x(t1),t1) be the transition probability density function for the Markov process x(t). Then p satisfies the partial differential equation

∂p/∂t = −Σi ∂(fi p)/∂xi + Σi λi [p_ai ∗ p − p],

where the convolution (∗) is defined as p_ai ∗ p := ∫ p_ai(x − v) p(v,t | x(t1),t1) dv. Andrey Nikolaevich Kolmogorov 1903 – 1987 Proof: derived from Theorem 2 by taking dn_g = 0. Fokker- Planck Equation SOLO Stochastic Processes Fokker-Planck Equation Let x(t) be an (n×1) vector Markov process generated by dx = f(x,t) dt + dn_g. Let p = p_x(t)|x(t1)(x,t | x(t1),t1) be the transition probability density function for the Markov process x(t). Then p satisfies the partial differential equation

∂p/∂t = −Σi ∂(fi p)/∂xi + ½ Σi Σj ∂²(Qij p)/∂xi∂xj.

Proof: derived from Theorem 2 by taking dn_p = 0. Discussion of the Fokker-Planck Equation The Fokker-Planck equation can be written as a conservation law

∂p/∂t + Σi ∂Ji/∂xi = ∂p/∂t + ∇·J = 0,   where J := f p − ½ Q ∇p.

This conservation law is a consequence of the global conservation of probability, ∫ p(x,t | x(t1),t1) dx = 1. Table of Content Langevin and Fokker- Planck Equations SOLO Stochastic Processes The original Langevin equation describes Brownian
motion, the apparently random movement of a particle in a fluid due to collisions with the molecules of the fluid,

m dv(t)/dt = −λ v(t) + η(t),   v(t) = dx/dt   ⟹   dv/dt = −(λ/m) v + (1/m) η(t).

We are interested in the position x of a particle of mass m. The force on the particle is the sum of the viscous force proportional to the particle's velocity, −λv (Stokes' law), plus a noise term η(t) that has a Gaussian probability distribution with correlation function

⟨ηi(t) ηj(t′)⟩ = 2 λ kB T δij δ(t − t′),   Q := 2 λ kB T / m²,

where kB is Boltzmann's constant and T is the temperature. Let p = p_v(t)|v(t0)(v,t | v(t0),t0) be the transition probability density function that corresponds to the Langevin equation state.
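Before solving for the density, the Langevin dynamics can be simulated directly; the minimal Python sketch below (mass, friction and temperature are illustrative values in reduced units) propagates many velocity paths with the Euler–Maruyama scheme and compares the empirical variance with the value (kBT/m)[1 − exp(−2λ(t−t0)/m)] implied by the Gaussian solution derived next.

import numpy as np

rng = np.random.default_rng(4)
m, lam, kBT = 1.0, 2.0, 0.5        # illustrative mass, friction, k_B*T (reduced units)
Q = 2.0 * lam * kBT / m**2         # noise intensity of (1/m) * eta(t)
dt, nsteps, npaths = 1e-3, 2000, 50000

v = np.zeros(npaths)               # deterministic initial state v(t0) = 0
for _ in range(nsteps):
    # Euler-Maruyama step of dv = -(lam/m) v dt + sqrt(Q) dW
    v += -(lam / m) * v * dt + np.sqrt(Q * dt) * rng.standard_normal(npaths)

t = nsteps * dt
var_theory = (kBT / m) * (1.0 - np.exp(-2.0 * lam * t / m))
print("empirical var:", v.var(), " theory:", var_theory)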
Then p satisfies the partial differential equation given by the Fokker–Planck equation

∂p/∂t = ∂[(λ/m) v p]/∂v + (Q/2) ∂²p/∂v²,

where we assume that the initial state v(t0) at t0 is deterministic, so that p_v(t0)(v,t0) = δ(v − v(t0)). Langevin and Fokker- Planck Equations SOLO Stochastic Processes The solution to the Fokker–Planck equation is the Gaussian

p_v(t)|v(t0)(v,t | v(t0),t0) = (1/√(2πσ²)) exp[−(v − v̂)²/(2σ²)],

where

v̂ = v0 exp[−(λ/m)(t − t0)]   and   σ² = (Q m / 2λ) {1 − exp[−(2λ/m)(t − t0)]}.

A solution to the one-dimensional Fokker–Planck equation, with both the drift and the diffusion term: the initial condition is a Dirac delta function in x = 1, and the distribution drifts towards x = 0. Table of Content Generalized Fokker - Planck Equation SOLO Stochastic Processes Define (X,T) := (x1,…,xn; t1,…,tn), the set of past data; we need to find p_x(t,x | X,T), where we assume that (x(t),t) ∉ (X,T). Start the
analysis by defining the Conditional Characteristic Function of the Increment of the Process: ( ) ( )( ) ( ) ( ) ( )( )[ ] ( ) ( ) ( )( )[ ] ( ) ( )( ) ( ) ( ) ( )ttxtxxtxdTXttxtxpttxtxs TXttxttxtxsETXttxts TXttxxT TTXttxxTXttxx ∆−−=∆∆−∆−−−= ∆−∆−−−=∆−Φ ∫∞+ ∞−∆− ∆−∆∆−∆ :,,|,exp ,,|exp,,|, ,,| ,,|,,| ( ) ( ) ( )[ ] ( ) ( ) ( )[ ] ( ) ( )( )∫∞+ ∞−∆−∆∆− ∆−Φ∆−−==∆− j j
TXttxxT nTXttxtx sdTXttxtsttxtxsj TXvttxtxp ,,|,exp2 1,,|, ,,|,,| π The Inverse Transform is The Fokker-Planck Equation was derived under the assumption that is a Markov Process. Let assume that we don’t have a Markov Process, but an Arbitrary Random Process (nx1 vector), where an arbitrary set of past value , must be considered. nn txtxtx ,;;,;,
2211 ( )tx ( )tx ( ) ( )nTn T sssxxx 11 , == Generalized Fokker - Planck Equation SOLO Stochastic Processes Using Chapman – Kolmogorov Equation we obtain: ( ) ( ) [ ] ( ) ( ) ( )[ ] ( ) ( )( ) ( ) ( ) ( ) ( )[ ] ( ) ( )( ) ( ) ( ) ( )[ ] ( ) ( )( ) ( ) ( ) ( ) ( )[ ] ( ) ( )( ) ( ) ( )( ) ( )∫ ∫ ∫ ∫ ∫ ∞+ ∞− ∞+ ∞−∆−∆−∆ ∞+ ∞−∆− ∆− ∞+ ∞−∆−∆ +∞ ∞−∆−∆−∆− ∆−∆−∆−Φ∆−−=
∆−∆−∆−Φ∆−−= ∆−∆−∆−= ∆− j j TXttxTXttxxT n TXttx TXttxtxp j j TXttxxT n TXttxTXttxtxTXttxtx ttxdsdTXttxpTXttxtsttxtxsj ttxdTXttxpsdTXttxtsttxtxsj ttxdTXttxpTXttxtxpTXtxp TXttxtx ,|,,|,exp2 1 ,|,,|,exp2 1 ,|,,|,,|, ,|,,| ,| ,,|, ,,| ,|,,|,,| ,,| π π where Let expand the Conditional Characteristic Function in a Taylor Series about the vector 0=s ( ) ( )( ) ( ) ( ) ( )
( )[ ] ( ) ( ) ( )( )[ ] ( ) ( )( ) ( )∫ ∞+ ∞−∆− ∆−∆∆−∆ ∆−∆−∆−−−= −∆+−=∆−Φ ttxdTXttxtxpttxtxs TXtxtxttxsETXttxts TXttxxT TTXttxxTXttxx ,,|,exp ,,|exp,,|, ,,| ,,|,,| ( ) ( )( ) ( ) ( ) ( ) ∑∑ ∑ ∑∑∑ = ∞ = ∞ = ∆−∆ = = ∆−∆ = ∆−∆∆−∆ =∂∂ Φ∂= +∂∂ Φ∂+ ∂Φ∂ +=∆−Φ n ii m m mn m mn m TXttxxm n n i n iii ii TXttxxi n i i TXttxxTXttxx mmssssmm ssss ss TXttxts n
n n10 0 1 1 ,,| 1 1 1 ,,|2 1 ,,|,,| 1 1 1 1 2 21 21 1 1 1 !! 1 !2 11,,|, ( ) ( )( ) ( ) ( ) ( ) ( )( ) ( ) ( )( ) ( ) ( )( ) ( ) ∑= ∆−∆∆−∆ =∆−∆−−∆−−⋅∆−−−= ∂∂∂∆−Φ∂ n ii mnn mmTXttxx m mn mm TXttxxm mmTXttxttxtxttxtxttxtxEsss TXttxtsn n1 2211,,| 21 ,,| :,,|1,,|, 21 21 Generalized Fokker - Planck Equation SOLO Stochastic Processes ( ) ( ) [ ] ( ) ( ) ( )[ ] ( ) ( )( ) ( ) ( )
( ) ( )∫ ∫+∞ ∞− ∞+ ∞−∆−∆−∆∆− ∆−∆−∆−Φ∆−−= j j TXttxTXttxxT nTXttxtx ttxdsdTXttxpTXttxtsttxtxsj TXtxp ,|,,|,exp2 1,|, ,|,,|,,| π ( ) ( ) ( )[ ] ( )( ) ( )( ) ( )∫ ∫ ∑ ∑ +∞ ∞− ∞+ ∞−∆− ∞ = ∞ = ∆−∆ ∆−∆−∂∂ Φ∂∆−−= j j TXttxm m mn m mn m TXttxxm n Tn ttxdsdTXttxpss ssmmttxtxs jn n n,| !! 1exp 2 1.| 0 01 1 ,,| 11 1 1 π ( ) ( ) ( )[ ] ( )( ) ( )( ) (
)ttxdTXttxpdsdsss ssttxtxs jmm TXttxm m j j j j nm nm mn m TXttxxm Tn nn n n∆−∆− ∂∂Φ∂ ∆−−= ∆− ∞ = ∞ = +∞ ∞− ∞+ ∞− ∞+ ∞− ∆−∆∑ ∑ ∫ ∫ ∫ ,|exp2 1 !! 1,| 0 011 1 ,,| 11 1 1 π ( )( ) ( ) ( )[ ] ( ) ( ) ( )( ) ( ) ( )( ) ( ) ( ) ( )( ) ( )ttxdTXttxpdsdsssTXttxttxtxttxtxEttxtxs jmm TXttxm m j j j j nmn mmnn mTXttxx Tn n m n nn ∆−∆−∆−∆−−∆−−∆−−−= ∆− ∞ =
∞ = + ∞ ∞− ∞+ ∞− ∞+ ∞−∆−∆∑ ∑ ∫ ∫ ∫ ,|,,|exp 2 1 !! 1,| 0 01111,,| 11 11 π we obtained: ( )( ) ( ) ( )[ ] ( ) ( ) ( )( ) ( ) ( ) ( )( ) ( )ttxdTXttxpdssTXttxttxtxEttxtxs jm TXttxm m n i j j imi miiTXttxxiii i m n ii i ∆−∆− ∆−∆−−∆−−−= ∆− ∞ = ∞ = + ∞ ∞− = ∞+ ∞−∆−∆∑ ∑ ∫ ∏ ∫ ,|,,|exp 2 1 ! 1,| 0 0 1,,| 1π Generalized Fokker - Planck Equation SOLO Stochastic
Processes Using : [ ] ( ) ( ) ( ) ( ) ( ) ∫∫∫∞+ ∞− ∞+ ∞− ∞+ ∞− =→=−=−j j ii ij j j j ii i sdussFsj ufdu dsdussF jufsdauss jau ud dexp 2 1exp 2 1exp 2 1 πππδ we obtained: we obtain: ( ) ( ) [ ]( ) ( ) ( ) ( )[ ] ( ) ( ) ( )( ) ( ) ( ) ( )( ) ( )ttxdTXttxpTXttxttxtxEdsttxtxssjm TXtxp TXttxm m j j miiTXttxxiiii mi i mn i TXttxtx n ii i ∆−∆− ∆−∆−−∆−−−= ∆− ∞ = ∞ = ∞+
∞− ∞+ ∞−∆−∆ = ∆− ∑ ∑ ∫ ∫∏ .|,,|exp2 1 ! 1 ,|, .|0 0 ,,|1 ,,| 1π ( ) ( ) [ ]( ) ( ) ( )[ ] ( ) ( ) ( ) ( )( ) ( ) ( ) ( )( ) ( )ttxdTXttxpTXttxttxtxEtx ttxtx m TXtxp TXttxm m n i miiTXttxxm i iim i m TXttxtx n i i ii ∆−∆− ∆−∆−− ∂∆−−∂−= ∆− ∞ = ∞ = ∞+ ∞− =∆−∆ ∆− ∑ ∑ ∫ ∏ ,|,,|! 1 ,|, ,|0 0 1 ,,| ,,| 1 δ ( )( ) ( ) ( )[ ] ( ) ( ) ( )( ) ( ) ( ) ( )( ) ( ) ( )( ) ( ) ( ) ( )( ) ( ) ( ) ( )( )[ ]∑ ∑ ∏
∑ ∑ ∏ ∫∞ = ∞ = ==∆∆−∆ ∞ = ∞ = = +∞ ∞−∆−∆−∆ ∆−∆−∆−− ∂∂−= ∆−∆−∆−∆−−∆−− ∂∂−= 0 0 10,|,,| 0 0 1,|,,| 1 1 ,|,,|! 1 ,|,,|! 1 m m n itTXttx miiTXtxxm i m i m m m n iTXttx miiTXttxxiim i m i m n i i ii n i i ii TXttxpTXttxttxtxEtxm ttxdTXttxpTXttxttxtxEttxtxtxm δ For m1=…=mn=m=0 we obtain : ( ) ( ) [ ]TXttxp TXttxttx ,|,,,| ∆−∆−∆− Generalized
Fokker - Planck Equation SOLO Stochastic Processes we obtained: ( ) ( ) [ ] ( ) [ ]( ) ( ) ( ) ( ) ( )( ) ( ) ( ) ( )( )[ ] 0,|,,|! 1 ,|,,|, 10 0 10,|,,| ,|,,| 1 ≠= ∆−∆−∆−− ∂∂−= ∆−− ∑∑ ∑ ∏= ∞ = ∞ = ==∆∆−∆ ∆−∆− n ii m m n itTXttx miiTXtxxm i m i m TXttxTXttxtx mmTXttxpTXttxttxtxEtxm TXttxpTXtxp n i i ii Dividing both sides by Δt and taking Δt →0 we obtain: ( )
[ ] ( ) ( ) [ ] ( ) [ ] ( )( ) ( ) ( ) ( )( ) ( ) ( ) ( )( ) 0,| ,,|lim ! 1 ,|,,|,lim ,|, 10 0 1,| ,,| 0 ,|,,| 0 ,| 1 ≠= ∆∆−∆−− ∂∂−= ∆∆−− =∂ ∂ ∑∑ ∑ ∏= ∞ = ∞ = = ∆ →∆ ∆−∆− →∆ n ii m m n iTXtx miiTXtxx tmi m i m TXttxTXttxtx t TXtx mmTXtxpt TXttxttxtxE txm t TXttxpTXtxp t TXtxp n i i ii This is the Generalized Fokker - Planck Equation for Non-Markovian Random
Processes Generalized Fokker - Planck Equation SOLO Stochastic Processes Discussion of Generalized Fokker – Planck Equation ( ) [ ] ( )( ) ( ) ( ) ( )( )( ) ( ) ( ) ( )( ) ( ) ( )( ) ( ) t TXtxttxtxttxtxEA mmTXtxpAtxtxmmt TXtxp n p pn n mnn mTXtxx tmm n iiTXtxmmm nm m m m n mTXtx ∆∆−−∆−− = ≠=∂∂ ∂−=∂ ∂ ∆ →∆ = ∞ = ∞ =∑∑ ∑ ,,|lim: 0,|!! 1,|, 1 1 11 1
11,,| 0,, 1,| 10 0 1 ,| • The Generalized Fokker - Planck Equation is much more complex than the Fokker – Planck Equation because of the presence of the infinite number of derivative of the density function. • It requires certain types of density function, infinitely differentiable, and knowledge of all coefficients • To avoid those difficulties we seek
conditions on the process for which ∂p/∂t is defined by a finite set of derivatives.
pmmA ,,1 Generalized Fokker - Planck Equation SOLO Stochastic Processes Discussion of Generalized Fokker – Planck Equation ( ) [ ] ( )( ) ( ) ( ) ( )( )( ) ( ) ( ) ( )( ) ( ) ( )( ) ( ) t TXtxttxtxttxtxEA mmTXtxpAtxtxmmt TXtxp n p pn n mnn mTXtxx tmm n iiTXtxmmm nm m m m n mTXtx ∆∆−−∆−− = ≠=∂∂ ∂−=∂ ∂ ∆ →∆ = ∞ = ∞ =∑∑ ∑ ,,|lim: 0,|!! 1,|, 1 1 11 1
11,,| 0,, 1,| 10 0 1 ,| • To avoid those difficulties we seek conditions on the process for which ∂p/∂t is defined by a finite set of derivatives. Those were defined by Pawula, R.F. (1967) Lemma 1 Let( ) ( ) ( )( ) ( ) 0,,| lim: 111,,| 00,,0, 1 1≠= ∆∆−− = ∆ →∆mm t TXtxttxtxEA mTXtxx tm If is zero for some even m1, then Proof For m1 odd and m1 ≥ 3, we have
( ) ( ) ( )( ) ( ) ( ) ( ) ( )( ) ( ) ( )( ) ( )t TXtxttxtxttxtxE t TXtxttxtxEA mm TXtxx t mTXtxx tm ∆ ∆−−∆−− =∆ ∆−−= +− ∆ →∆ ∆ →∆ ,,|lim ,,|lim: 2 1 112 1 11,,| 0 11,,| 00,,0, 11 1 1 0,,0,1 mA 30 10,,0,1≥∀= mAm Generalized Fokker - Planck Equation SOLO Stochastic Processes Lemma 1 Let ( ) ( ) ( )( ) ( ) 0 ,,|lim: 1 11,,| 00,,0, 1 1≠= ∆∆−− = ∆ →∆mm t
TXtxttxtxEA mTXtxx tm Proof For m1 odd and m1 ≥ 3, we have ( ) ( ) ( )( ) ( ) ( ) ( ) ( )( ) ( ) ( )( ) ( )t TXtxttxtxttxtxE t TXtxttxtxEA mm TXtxx t mTXtxx tm ∆ ∆−−∆−− =∆ ∆−−= +− ∆ →∆ ∆ →∆ ,,|lim ,,|lim: 2 1 112 1 11,,| 0 11,,| 00,,0, 11 1 1 Using Schwarz Inequality, we have ( ) ( ) ( )( ) ( ) ( ) ( ) ( ) ( )( ) ( ) ( ) 0,,0,10,,0,1 111,,| 0 111,,| 0 20,,0, 11 11 1 ,,|lim
,,|lim +− +∆ →∆ −∆ →∆= ∆∆−− ∆∆−− ≤ mm mTXtxx t mTXtxx tm AA t TXtxttxtxE t TXtxttxtxEA In the same way, for m1 ≥ 4, and m1 even we have ( ) ( ) ( )( ) ( ) ( ) ( ) ( )( ) ( ) ( )( ) ( )t TXtxttxtxttxtxE t TXtxttxtxEA mm TXtxx t mTXtxx tm ∆ ∆−−∆−− =∆ ∆−−= +− ∆ →∆ ∆ →∆ ,,|lim ,,|lim: 2 2 112 2 11,,| 0 11,,| 00,,0, 11 1 1 0,,0,20,,0,22 0,,0, 111 +−≤
mmm AAAUsing Schwarz Inequality, again for m1 ≥ 4 If is zero for some even m1, then0,,0,1 mA 30 10,,0,1≥∀= mAm Generalized Fokker - Planck Equation SOLO Stochastic Processes Lemma 1 Let( ) ( ) ( )( ) ( ) 0,,| lim: 111,,| 00,,0, 1 1≠= ∆∆−− = ∆ →∆mm t TXtxttxtxEA mTXtxx tm Proof (continue) we haveevenmmAAA oddmmAAA mmm mmm
110,,0,20,,0,22 0,,0, 110,,0,10,,0,12 0,,0, 4 3 111 111 ≥≤ ≥≤ +− +− 00,,0, =rAFor some m1 = r even we have , and Therefore A r-2,0,…,0=0, A r-1,0,…,0 =0, A r+1,0,…,0 =0, A r+2,0,…,0 =0, if A r,0,…,0 = 0 and all A are bounded. This procedure will continue leaving A 1,0,…,0 not necessarily zero and achieving: 420 310 310 420 0,,0,0,,0,42 0,,0,2
0,,0,20,,0,2 0,,0,1 0,,0,0,,0,22 0,,0,1 0,,0,0,,0,42 0,,0,2 ≥+=≤ ≥+=≤ ≥−=≤ ≥−=≤ ++ ++ −− −− rAAA rAAA rAAA rAAA rrr rrr rrr rrr 00,,0,0,,0,30,,0,2 ==== ∞→ rAAAq.e.d. If is zero for some even m1, then0,,0,1 mA 30 10,,0,1≥∀= mAm Generalized Fokker - Planck Equation SOLO Stochastic Processes Lemma 2 Let If each of the moments is finite
and vanishes for some even mi, then nmmm AAA ,,0,0,,,00,,0, ,,,21 Proof 2,,0 321,0,0,0,,00,0, 321≥∀=== mmmAAA mmm ( ) ( ) ( )( ) ( ) ( )( ) ( ) 0 ,,|lim: 1 11,,| 0,, 1 1>= ∆∆−−∆−− = ∑= ∆ →∆ n ii mnn mTXtxx tmm mm t TXtxttxtxttxtxEA n p 20..1,0 3..00 1,, 1,, 1 1 ≤=<=∀ ≥=>∀= ∑ ∑ = =n iiimm n iiimm mmtsmzeronecessarlynotA mmtsmA p p We
shall prove this Lemma by Induction.Let start with n=3 ( ) ( ) ( )( ) ( ) ( )( ) ( ) ( )( ) ( ) 0 ,,|lim 1 332211,,| 0,, 321 321>= ∆∆−−∆−−∆−− = ∑= ∆ →∆ n ii mmmTXtxx tmmm mm t TXtxttxtxttxtxttxtxEA We proved in Lemma 1 that and A 1,0,0, A 0,1,0, A0,0,1 are not necessarily zero. ( ) ( ) ( )( ) ( ) ( )( ) ( ) ( ) ( ) ( )( ) ( ) ( ) ( ) ( )( ) ( ) 22,0,0 20,2,0 233,,| 0
222,,| 0 2 3322,,| 0 2,,0 32 32 32 32 ,,|lim ,,|lim ,,|lim mm mTXtxx t mTXtxx t mmTXtxx tmm AAt TXtxttxtxE t TXtxttxtxE t TXtxttxtxttxtxEA = ∆∆−− ∆∆−− ≤ ∆∆−−∆−− = ∆ →∆ ∆ →∆ ∆ →∆ Generalized Fokker - Planck Equation SOLO Stochastic Processes Lemma 2 Let If each of the moments is finite and vanishes for some even mi, then nmmm AAA
,,0,0,,,00,,0, ,,,21 Proof (continue – 1) 2,,0 321,0,0,0,,00,0, 321≥∀=== mmmAAA mmm ( ) ( ) ( )( ) ( ) ( )( ) ( ) 0 ,,|lim: 1 11,,| 0,, 1 1>= ∆∆−−∆−− = ∑= ∆ →∆ n ii mnn mTXtxx tmm mm t TXtxttxtxttxtxEA n p A 1,0,0, A 0,1,0, A0,0,1 are not necessarily zero. 22,0,0 20,2,0 2,,0 3232 mmmm AAA ≤ ≥+>= ⇒zeroynecessarilnotA mmmmA mm 1,1,0 3232,,0
3&0,032 20..1,0 3..00 1,, 1,, 1 1 ≤=<=∀ ≥=>∀= ∑ ∑ = =n iiimm n iiimm mmtsmzeronecessarlynotA mmtsmA p p 22,0,0 20,0,2 2,0, 3131 mmmm AAA ≤ ≥+>= ⇒zeroynecessarilnotA mmmmA mm 1,0,1 3131,0, 3&0,032 20,2,0 20,0,2 20,, 2121 mmmm AAA ≤ ≥+>= ⇒zeroynecessarilnotA mmmmA mm 0,1,1 21210,, 3&0,021 Generalized Fokker -
Planck Equation SOLO Stochastic Processes Lemma 2 Let If each of the moments is finite and vanishes for some even mi, then nmmm AAA ,,0,0,,,00,,0, ,,,21 Proof (continue – 2) ( ) ( ) ( )( ) ( ) ( )( ) ( ) 0 ,,|lim: 1 11,,| 0,, 1 1>= ∆∆−−∆−− = ∑= ∆ →∆ n ii mnn mTXtxx tmm mm t TXtxttxtxttxtxEA n p 20..1,0 3..00 1,, 1,, 1 1 ≤=<=∀ ≥=>∀= ∑ ∑ = =n iiimm n
iiimm mmtsmzeronecessarlynotA mmtsmA p p ( ) ( ) ( )( ) ( ) ( )( ) ( ) ( )( ) ( ) 4 332211,,| 0 4,, ,,|lim 321 321 ∆∆−−∆−−∆−− = ∆ →∆ t TXtxttxtxttxtxttxtxEA mmmTXtxx tmmm ( ) ( ) ( )( ) ( ) ( ) ( ) ( )( ) ( ) ( ) ( ) ( )( ) ( ) 321 3 22 4,0,00,4,02 0,0,2 433,,| 0 422,,| 0 2211,,| 0 ,,|lim ,,|lim ,,|lim mmm mTXtxx t mTXtxx t mTXtxx t AAAt TXtxttxtxE t TXtxttxtxE t
TXtxttxtxE = ∆∆−− ⋅ ∆∆−− ⋅ ∆∆−− ≤ ∆ →∆ ∆ →∆ ∆ →∆ 321321 4,0,00,4,02 0,0,24 mmmmmm AAAA ≤ Since 000,032132 ,,324,0,00,4,0 >∀=⇒>∀== immmmm mAmmAA Generalized Fokker - Planck Equation SOLO Stochastic Processes Lemma 2 Let If each of the moments is finite and vanishes for some even mi, then nmmm AAA ,,0,0,,,00,,0, ,,,21
Proof (continue – 3) q.e.d. ( ) ( ) ( )( ) ( ) ( )( ) ( ) 0 ,,|lim: 1 11,,| 0,, 1 1>= ∆∆−−∆−− = ∑= ∆ →∆ n ii mnn mTXtxx tmm mm t TXtxttxtxttxtxEA n p 20..1,0 3..00 1,, 1,, 1 1 ≤=<=∀ ≥=>∀= ∑ ∑ = =n iiimm n iiimm mmtsmzeronecessarlynotA mmtsmA p p We proved that only are not necessarily zero and1,1,01,0,10,1,11,0,00,1,00,0,1 ,,,,, AAAAAA 3..003 1,,
321 ≥=>∀= ∑=i iimmm mmtsmA In the same way, assuming that the result is true for (n-1) is straight forward to show that is true for n and 20..1,0 3..00 1,, 1,, 1 1 ≤=<=∀ ≥=>∀= ∑ ∑ = = n iiimm n iiimm mmtsmzeronecessarlynotA mmtsmA p p Generalized Fokker - Planck Equation SOLO Stochastic Processes Theorem 2 Let for some set (X,T) and
let each of the moments vanish for some even mi. Then the transition density satisfies the Generalized Fokker-Planck Equation nmmm AAA ,,0,0,,,00,,0, ,,,21 Proof q.e.d. ( ) ( ) ( ) ( ) ( )( ) ( ) ( ) ( ) ( )( ) ( ) ( )( ) ( ) 0,,1,,0,,1.,00 0,,1,,00 1 1 2 1 ,,|1 lim, ,,|1 lim, 2 1 ==→∆ =→∆ = == =−∆+−∆+∆ = =−∆+∆ = ∂∂∂ +∂ ∂−=∂∂ ∑∑∑ ji i mmjjiit ji miit i n i n j ji jin i i
i ATXtxtxttxtxttxEt txC ATXtxtxttxEt txB xx pC x pB t p ( )TXtxpp x ,|,= 0,,1,,0,,1,,00,,1,,0 , === jii mmm AASince vanishes for some even mi, from Lemma 2 the onlynon-necessarily zero Moments are nmmm AAA ,,0,0,,,00,,0, ,,,21 The Generalized Fokker – Planck Equation becomes ( ) [ ] ( )( ) ( ) ( ) ( )( )( ) ( ) ( )∑∑∑ ∑∑ ∑ = === == = ∞ = ∞ = ⋅∂∂
∂+⋅∂∂−= ≠=⋅∂∂ ∂−=∂ ∂ n i n jmm i n im i n iiTXtxmmm nm m m m n mTXtx pAxjx pAx mmTXtxpAtxtxmmt TXtxp jii pn n 1 10,,1,,0,,1,,0 2 10,,1,,0 1,| 10 0 1 ,| 2 1 0,|!! 1,|,11 1 Generalized Fokker - Planck Equation SOLO Stochastic Processes History The Fokker-Planck Equation was derived by Uhlenbeck and Orenstein for Wiener noise in the paper: “On
the Theory of Brownian Motion”, Phys. Rev. 36, pp.823 – 841 (September 1, 1930), (available on Internet) George EugèneUhlenbeck (1900-1988) Leonard Salomon Ornstein (1880 -1941) Ming Chen Wang (王明贞( (1906-2010( Un updated version was published by M.C. Wang and Uhlenbeck : “On the Theory of Brownian Motion II”,. Rev. Modern
Physics, 17, Nos. 2 and 3, pp.323 – 342 (April-July 1945), (available on Internet).They assumed that all Moments above second must vanish. The sufficiency of a finite set of Moments to obtain a Fokker-Planck Equation was shown by R.F. Pawula, “Generalization and Extensions of Fokker-Planck-Kolmogorov Equations,”, IEEE, IT-13, No.1, pp. 33-41
(January 1967). Table of Content Karhunen-Loève Theorem SOLO Stochastic Processes Michel Loève 1907 (Jaffa) – 1979 (Berkeley) In the theory of stochastic processes, the Karhunen-Loève theorem (named after Kari Karhunen and Michel Loève) is a representation of a stochastic process as an infinite linear combination of orthogonal functions,
analogous to a Fourier series representation of a function on a bounded interval. In contrast to a Fourier series where the coefficients are real numbers and the expansion basis consists of sinusoidal functions (that is, sine and cosine functions), the coefficients in the Karhunen-Loève theorem are random variables and the expansion basis depends on
the process. In fact, the orthogonal basis functions used in this representation are determined by the covariance function of the process. If we regard a stochastic process as a random function F, that is, one in which the random value is a function on an interval [a, b], then this theorem can be considered as a random orthonormal expansion of F. Given
a stochastic process x(t) defined on an interval [a,b], the Karhunen-Loève theorem states that

x(t) ≈ x̂(t) = Σ_{n=1}^{∞} bn φn(t),   a ≤ t ≤ b,

where the φn(t) are orthonormal functions,

∫_a^b φn(t) φm*(t) dt = 1 for n = m, 0 for n ≠ m,

defined by

∫_a^b R(t1,t2) φm(t2) dt2 = λm φm(t1),   m = 1,2,…,   with R(t1,t2) := E{x(t1) x*(t2)},

and the coefficients

bn = ∫_a^b x(t) φn*(t) dt,   n = 1,2,…

are random variables. If E{x(t)} = 0, then E{bn} = 0 and E{bn bm*} = λn for n = m, 0 for n ≠ m. Karhunen-Loève Theorem (continue – 1) SOLO Stochastic Processes Proof: 1. E{bn} = ∫_a^b E{x(t)} φn*(t) dt = 0 when E{x(t)} = 0. 2. E{x(t1) bm*} = ∫_a^b E{x(t1) x*(t2)} φm(t2) dt2 = ∫_a^b R(t1,t2) φm(t2) dt2 = λm φm(t1), for all a ≤ t1 ≤ b. 3. E{bn bm*} = ∫_a^b E{x(t1) bm*} φn*(t1) dt1 = λm ∫_a^b φm(t1) φn*(t1) dt1 = λm for n = m, 0 for n ≠ m, so λm = E{bm bm*} is real and positive. Karhunen-Loève Theorem (continue – 2, 3) SOLO Stochastic Processes 4. Convergence of the Karhunen–Loève expansion: using the results above,

E{ |x(t) − x̂(t)|² } = R(t,t) − Σ_{n=1}^{∞} λn |φn(t)|²,   a ≤ t ≤ b,

therefore the expansion converges in the mean-square sense, E{|x(t) − x̂(t)|²} = 0, if and only if Σ_{n=1}^{∞} λn |φn(t)|² = R(t,t). Table of Content
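A minimal numerical illustration of the theorem (the choice of a standard Wiener process on [0,1], whose covariance is R(t1,t2) = min(t1,t2), is an assumption made for the example): discretizing the eigenfunction equation turns it into an eigenvalue problem for the covariance matrix, and the empirical variances of the coefficients bn match the eigenvalues λn.

import numpy as np

rng = np.random.default_rng(5)
n, npaths = 200, 2000
t = np.linspace(1.0 / n, 1.0, n)
dt = t[1] - t[0]

# Sample paths of a standard Wiener process on [0,1]; R(t1,t2) = min(t1,t2).
X = np.cumsum(np.sqrt(dt) * rng.standard_normal((npaths, n)), axis=1)
R = np.minimum.outer(t, t)

# Discrete Karhunen-Loeve: eigenvectors of R*dt approximate the eigenfunctions phi_n.
lam, phi = np.linalg.eigh(R * dt)
lam, phi = lam[::-1], phi[:, ::-1] / np.sqrt(dt)   # sort descending, normalize in L2[0,1]

# Coefficients b_n = integral of x(t) phi_n(t) dt are uncorrelated with variance lambda_n.
b = X @ phi[:, :5] * dt
print("empirical Var(b_n):", b.var(axis=0))
print("eigenvalues lambda_n:", lam[:5])            # close to 1/((n - 1/2) pi)^2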
References: SOLO Wikipedia, “Stochastic processes”; Wikipedia, “Stochastic differential equations”. Papoulis, A., “Probability, Random Variables, and Stochastic Processes”, McGraw Hill, 1965, Ch. 14 and 15. Sage, A.P. and Melsa, J.L., “Estimation Theory with Applications to Communications and Control”, McGraw Hill, 1971. McGarty, T., “Stochastic Systems and State Estimation”, John Wiley & Sons, 1974. Maybeck, P.S., “Stochastic Models, Estimation and Control”, Academic Press, Mathematics in Science and Engineering, Volume 141-2, 1982, Ch. 11 and 12. Jazwinski, A.H., “Stochastic Processes and Filtering Theory”, Academic Press, 1970. Stochastic Processes Table of Content SOLO
Technion, Israel Institute of Technology 1964 – 1968 BSc EE, 1968 – 1971 MSc EE; Israeli Air Force 1970 – 1974; RAFAEL, Israeli Armament Development Authority 1974 – 2013; Stanford University 1983 – 1986 PhD AA. Functional Analysis SOLO Riemann Integral

∫_a^b f(t) dt = lim_{n→∞} Σ_{i=0}^{n−1} f(ti)(x_{i+1} − xi),   a = x0 < t0 < x1 < t1 < x2 < … < x_{n−1} < t_{n−1} < x_n = b,   δ = |x_{i+1} − xi| < ε.

In the Riemann integral we divide the interval [a,b] into n non-overlapping intervals that decrease as n increases; the value f(ti) is computed at a point ti inside each interval. The Riemann integral is not always defined; for example, for

f(x) = 2 if x is rational, 3 if x is irrational,

the Riemann integral of this function is not defined. Georg Friedrich Bernhard Riemann 1826 – 1866 Integration SOLO Stochastic Processes Thomas Joannes Stieltjes 1856 – 1894 Riemann–Stieltjes integral Bernhard Riemann 1826 – 1866 The Stieltjes integral is a generalization of the Riemann integral. Let f(x) and α(x) be real-valued functions defined in the
closed interval [a,b]. Take a partition of the interval a = x0 < x1 < … < xn < b and consider a Riemann sum

Σ_{i=1}^{n} f(ξi)[α(xi) − α(x_{i−1})],   ξi ∈ [x_{i−1}, xi].

If the sum tends to a fixed number I when max(xi − x_{i−1}) → 0, then I is called a Stieltjes integral or a Riemann-Stieltjes integral. The Stieltjes integral of f with respect to α is denoted ∫ f(x) dα(x), or sometimes simply ∫ f dα. If f and α have a common point of discontinuity, then the integral doesn't exist. However, if f is continuous and α′ is Riemann integrable over the specific interval,

∫ f(x) dα(x) = ∫ f(x) α′(x) dx,   where α′(x) := dα(x)/dx.

Functional Analysis SOLO Lebesgue Integral Measure The main
idea of the Lebesgue integral is the notion of Measure. Definition 1: E(M) ⊆ [a,b] is the region in x ∈ [a,b] for which the function f(x) satisfies f(x) > M. Definition 2: μ[E(M)], the measure of E(M), is

μ[E(M)] = ∫_{E(M)} dx ≥ 0.

We can see that μ[E(M)] is the sum of lengths on the x axis for which f(x) > M. From the figure we can see that for jumps M1 and M2, μ[E(M1)] = μ[E(M2)] = 0. Example: let us find the measure of the rational numbers, ratios of integers, which are countable: r1 = 1/2, r2 = 1/3, r3 = 2/3, r4 = 1/4, r5 = 3/4, …, rk = m/n, … Since the rational numbers are countable we can choose ε > 0 as small as we want and construct an open interval of length ε/2 centered around r1, an interval of ε/2² centered around r2, …, an interval of ε/2^k centered around rk:

μ[E(rationals)] ≤ ε/2 + ε/2² + … + ε/2^k + … = ε,   and ε → 0 ⟹ μ[E(rationals)] = 0.

Functional Analysis SOLO Lebesgue Integral Henri Léon Lebesgue 1875 – 1941 A function y = f(x) is said to be measurable if the set of points x at which f(x) < c is measurable for any and all choices of the constant c. The Lebesgue integral for a measurable function f(x) is defined as

∫_a^b f(t) dt = lim_{n→∞} Σ_{i=1}^{n} yi μ[E(y_{i−1} ≤ f(x) ≤ yi)],   inf_{a≤x≤b} f(x) = y0 < y1 < … < yn = sup_{a≤x≤b} f(x).

Example: for f(x) = 2 if x is rational, 3 if x is irrational, on 0 ≤ x ≤ 1,

∫_{0≤x≤1} f(x) dx = ∫_{E(rationals)} f(x) dx + ∫_{E(irrationals)} f(x) dx = 2·0 + 3·1 = 3.

For a continuous function the Riemann and Lebesgue integrals give the same results. Integration SOLO Stochastic Processes Lebesgue-Stieltjes integration Thomas Joannes Stieltjes 1856 – 1894 Henri Léon Lebesgue 1875 – 1941 In measure-theoretic analysis and
related branches of mathematics, Lebesgue-Stieltjes integration generalizes Riemann-Stieltjes and Lebesgue integration, preserving the many advantages of the latter in a more general measure-theoretic framework. Let α (x) a monotonic increasing function of x, and define an interval I =(x1,x2). Define the nonnegative function ( ) ( ) ( )12 xxIU αα
−=The Lebesgue integral with respect to a measure constructed using U (I) is called Lebesgue-Stieltjes integral, or sometimes Lebesgue-Radon integral. Johann Karl August Radon 1887– 1956 Integration SOLO Stochastic Processes Darboux Integral Lower (green) and upper (green plus lavender) Darboux sums for four subintervals Jean-Gaston
Darboux 1842 - 1917 In real analysis, a branch of mathematics, the Darboux integral or Darboux sum is one possible definition of the integral of a function. Darboux integrals are equivalent to Riemann integrals, meaning that a function is Darboux-integrable if and only if it is Riemann-integrable, and the values of the two integrals, if they exist, are
equal. Darboux integrals have the advantage of being simpler to define than Riemann integrals. Darboux integrals are named after their discoverer, Gaston Darboux. Definition A partition of an interval [a,b] is a finite sequence of values xi such that a = x0 < x1 < … < xn = b. Each interval [xi−1, xi] is called a subinterval of the partition. Let ƒ:[a,b]→R be a bounded function, and let P = (x0, x1, …, xn) be a partition of [a,b]. Let

Mi := sup_{x∈[xi−1,xi]} f(x),   mi := inf_{x∈[xi−1,xi]} f(x).

The upper Darboux sum of ƒ with respect to P is

Uf,P := Σ_{i=1}^{n} (xi − xi−1) Mi,

and the lower Darboux sum of ƒ with respect to P is

Lf,P := Σ_{i=1}^{n} (xi − xi−1) mi.

Integration SOLO Stochastic Processes Darboux Integral (continue – 1) Lower (green) and upper (green plus lavender) Darboux sums for four subintervals Jean-Gaston Darboux 1842 – 1917 The upper Darboux integral of ƒ is

Uf = inf { Uf,P : P is a partition of [a,b] },

and the lower Darboux integral of ƒ is

Lf = sup { Lf,P : P is a partition of [a,b] }.

If Uƒ = Lƒ, then we say that ƒ is Darboux-integrable and set

∫_a^b f(t) dt = Uf = Lf,

the common value of the upper and lower Darboux integrals. Integration SOLO Stochastic Processes Lebesgue Integration Henri Léon Lebesgue 1875 – 1941 Illustration of a Riemann integral (blue) and a
Lebesgue integral (red) Riemann Integral A sequence of Riemann sums. The numbers in the upper right are the areas of the grey rectangles. They converge to the integral of the function. Darboux Integral Lower (green) and upper (green plus lavender) Darboux sums for four subintervals Jean-Gaston Darboux 1842 - 1917 Bernhard Riemann1826 -
1866 SOLO Stochastic Processes Richard Snowden Bucy, Andrew James Viterbi (1935 – ), Harold J. Kushner (1932 – ), Moshe Zakai (1926 – ), Jose Enrique Moyal (1910 – 1998), Rudolf E. Kalman (1930 – ), Maurice Stevenson Bartlett (1910 – 2002), George Eugène Uhlenbeck (1900 – 1988), Leonard Salomon Ornstein (1880 – 1941), Bernard Osgood Koopman (1900 – 1981), Edwin James George Pitman (1897 – 1993), Georges Darmois (1888 – 1960) Stochastic Processes SOLO HERMELIN Updated: 10.05.11 15.06.14 SOLO Stochastic Processes Table of Content Random Variables Stochastic Differential Equation (SDE) Brownian Motion Smoluchowski Equation Langevin Equation Lévy Process Martingale Chapman
Itô Lemma and Itô Processes; Stratonovich Stochastic Calculus; Fokker–Planck Equation; Kolmogorov forward equation (KFE) and its adjoint, the Kolmogorov backward equation (KBE); Propagation Equation
SOLO Stochastic Processes — Table of Content (continue)
Bartlett–Moyal Theorem; Feller–Kolmogorov Equation; Langevin and Fokker–Planck Equations; Generalized Fokker–Planck Equation; Karhunen–Loève Theorem; References

SOLO Random Processes
Random Variable: a variable x determined by the outcome Ω of a random experiment, x = x(Ω).
Random Process or Stochastic Process: a function of time x determined by the outcome Ω of a random experiment, x(t) = x(t, Ω). This is a family or an ensemble of functions of time, in general different for each outcome Ω.
Mean or Ensemble Average of the Random Process: x̄(t) := E[x(t,Ω)] = ∫ ξ p_{x(t)}(ξ) dξ
Autocorrelation of the Random Process: R(t₁,t₂) := E[x(t₁,Ω) x(t₂,Ω)] = ∫∫ ξ η p_{x(t₁),x(t₂)}(ξ,η) dξ dη
Autocovariance of the Random Process: C(t₁,t₂) := E[(x(t₁,Ω) − x̄(t₁))(x(t₂,Ω) − x̄(t₂))] = R(t₁,t₂) − x̄(t₁) x̄(t₂)
Table of Content

SOLO Random Processes — Stationarity of a Random Process
1. Wide-Sense Stationarity of a Random Process:
• The mean average of the random process is time invariant: x̄(t) := E[x(t,Ω)] = ∫ ξ p_{x(t)}(ξ) dξ = x̄ = const.
• The autocorrelation of the random process is of the form R(t₁,t₂) = R(t₁ − t₂) =: R(τ), τ := t₁ − t₂. Since R(t₁,t₂) = R(t₂,t₁), we have R(τ) = R(−τ).
Power Spectrum or Power Spectral Density of a stationary random process: S(ω) := ∫ R(τ) exp(−jωτ) dτ
2. Strict-Sense Stationarity of a Random Process: all probability density functions are time invariant, p_{x(t)}(ω,t) = p_x(ω) = const. in t.
Ergodicity: a stationary random process for which the time average equals the ensemble average,
⟨x⟩ := lim_{T→∞} (1/2T) ∫_{−T}^{+T} x(t,Ω) dt = E[x(t,Ω)] = x̄

SOLO Random Processes — Ergodicity
Time autocorrelation: for an ergodic random process define
R(τ) := lim_{T→∞} (1/2T) ∫_{−T}^{+T} x(t,Ω) x(t+τ,Ω) dt
Finite signal energy assumption: R(0) = lim_{T→∞} (1/2T) ∫_{−T}^{+T} x²(t,Ω) dt < ∞
Define the truncated signal x_T(t,Ω) := x(t,Ω) for −T ≤ t ≤ T and 0 otherwise, and R_T(τ) := (1/2T) ∫_{−∞}^{+∞} x_T(t,Ω) x_T(t+τ,Ω) dt.
Splitting the defining integral of R_T at the edges of the support and bounding the edge contribution, |(1/2T) ∫_{T−τ}^{T} x_T(t,Ω) x_T(t+τ,Ω) dt| ≤ (τ/2T) sup_{|t|≤T} |x_T(t,Ω) x_T(t+τ,Ω)| → 0, therefore lim_{T→∞} R_T(τ) = R(τ).
Taking the Fourier transform of R_T,
∫ R_T(τ) exp(−jωτ) dτ = (1/2T) X_T(ω) X_T*(ω),  where X_T(ω) := ∫ x_T(v,Ω) exp(−jωv) dv
and * means complex conjugate. Define
S(ω) := lim_{T→∞} E[X_T(ω) X_T*(ω)] / (2T) = lim_{T→∞} E[ ∫ R_T(τ) exp(−jωτ) dτ ]
Since the random process is ergodic we can use the wide-sense stationarity assumption E[x_T(t,Ω) x_T(t+τ,Ω)] = R(τ), so S(ω) = ∫ R(τ) exp(−jωτ) dτ.

SOLO Random Processes — Ergodicity (continue)
We obtained the Wiener–Khinchine Theorem (Wiener 1930):
S(ω) = lim_{T→∞} E[|X_T(ω)|²] / (2T) = ∫_{−∞}^{+∞} R(τ) exp(−jωτ) dτ
Norbert Wiener 1894 – 1964, Alexander Yakovlevich Khinchine 1894 – 1959.
The Power Spectrum or Power Spectral Density of a stationary random process, S(ω), is the Fourier transform of the autocorrelation function R(τ).
SOLO Random Processes — White Noise
A (not necessarily stationary) random process whose autocorrelation is zero for any two different times is called white noise in the wide sense:
R(t₁,t₂) = E[x(t₁,Ω) x(t₂,Ω)] = σ²(t₁) δ(t₁ − t₂),  σ²(t₁) = instantaneous variance
Wide-Sense Whiteness vs. Strict-Sense Whiteness: a (not necessarily stationary) random process in which the outcomes at any two different times are independent is called white noise in the strict sense, i.e. p_{x(t₁),x(t₂)}(Ω; t₁, t₂) factors for t₁ ≠ t₂.
A stationary white-noise random process has the autocorrelation R(τ) = E[x(t,Ω) x(t+τ,Ω)] = σ² δ(τ).
Note: in general, whiteness requires strict-sense whiteness. In practice we have only moments (typically up to second order) and thus only wide-sense whiteness.

SOLO Random Processes — White Noise (continue)
A stationary white-noise random process has the autocorrelation R(τ) = σ² δ(τ). The power spectral density is given by the Fourier transform of the autocorrelation:
S(ω) = ∫ R(τ) exp(−jωτ) dτ = ∫ σ² δ(τ) exp(−jωτ) dτ = σ²
We can see that the power spectral density contains all frequencies at the same amplitude; this is the reason it is called white noise. The power of the noise is defined as P := ∫ R(τ) dτ = S(ω)|_{ω=0} = σ².

SOLO Random Processes — Markov Processes. Andrei Andreevich Markov 1856 – 1922
A Markov process is defined by
p(x(t), t | x(τ), τ ≤ t₁) = p(x(t), t | x(t₁), t₁)  ∀ t > t₁
i.e. for the random process, conditioning on the entire past up to any time t₁ is equivalent to conditioning on the value of the process at t₁.
Examples of Markov processes:
1. Continuous dynamic system: dx/dt = f(t, x, u, v), z = h(t, x, u, w)
2. Discrete dynamic system: x_{k+1} = f_k(t_k, x_k, u_k, v_k), z_k = h_k(t_k, x_k, u_k, w_k)
where x is the state-space vector (n×1), u the input vector (m×1), v a white input-noise vector (n×1), z the measurement vector (p×1), and w a white measurement-noise vector (p×1).
Table of Content

SOLO Stochastic Processes
The earliest work on SDEs was done to describe Brownian motion in Einstein's famous paper, and at the same time by Smoluchowski. However, one of the earlier works related to Brownian motion is credited to Bachelier (1900) in his thesis 'Theory of Speculation'. This work was followed up on by Langevin. Later Itō and Stratonovich put SDEs on
more solid mathematical footing. In physical science, SDEs are usually written as Langevin Equations. These are sometimes confusingly called "the Langevin Equation" even though there are many possible forms. These consist of an ordinary differential equation containing a deterministic part and an additional random white noise term.
A second form is the Smoluchowski Equation and, more generally, the Fokker-Planck Equation. These are partial differential equations that describe the time evolution of probability distribution functions. The third form is the stochastic differential equation that is used most frequently in mathematics and quantitative finance (see below). This is
similar to the Langevin form, but it is usually written in differential form. SDEs come in two varieties, corresponding to two versions of stochastic calculus. Background Terminology A stochastic differential equation (SDE) is a differential equation in which one or more of the terms is a stochastic process, thus resulting in a solution which is itself a
stochastic process. SDEs are used to model diverse phenomena such as fluctuating stock prices or physical systems subject to thermal fluctuations. Typically, SDEs incorporate white noise, which can be thought of as the derivative of Brownian motion (or the Wiener process); however, other types of random fluctuations are
possible, such as jump processes.
Stochastic Differential Equation (SDE) SOLO Stochastic Processes Brownian motion or the Wiener process was discovered to be exceptionally complex mathematically. The Wiener process is non-differentiable; thus, it requires its own rules of calculus. There are two dominating versions of stochastic calculus, the Ito Stochastic Calculus and the
Stratonovich Stochastic Calculus. Each of the two has advantages and disadvantages, and newcomers are often confused about which of the two is more appropriate in a given situation.
Guidelines exist and, conveniently, one can readily convert an Itô SDE to an equivalent Stratonovich SDE and back again. Still, one must be careful which calculus to use when the SDE is initially written down. Stochastic Calculus. Table of Content
SOLO Stochastic Processes — Brownian Motion
In 1827 Brown, a botanist, discovered the motion of pollen
particles in water. At the beginning of the twentieth century, Brownian motion was studied by Einstein, Perrin and other physicists. In 1923, against this scientific background, Wiener defined probability measures in path spaces, and used the concept of Lebesgue integrals to lay the mathematical foundations of stochastic analysis. In 1942, Ito began
to reconstruct from scratch the concept of stochastic integrals and its associated theory of analysis. He created the theory of stochastic differential equations, which describe motion due to random events. Albert Einstein 1879 – 1955, Norbert Wiener 1894 – 1964, Henri Léon Lebesgue 1875 – 1941, Robert Brown 1773 – 1858.
It was Albert Einstein's (in his 1905 paper) and Marian Smoluchowski's (1906) independent research of the problem that brought the solution to the attention of physicists and presented it as a way to indirectly confirm the existence of atoms and molecules. Marian Ritter von Smolan Smoluchowski 1872 – 1917, Kiyosi Itô 1915 – 2008.
SOLO Stochastic Processes — Random Walk
Assume the process of walking on a straight line at discrete intervals T. At each time we walk a distance s, randomly, to the left or to the right, with the same probability p = 1/2. In this way we create a stochastic process called a Random Walk. (This experiment is equivalent to tossing a coin to get, randomly, Head or Tail.)
Assume that at t = nT we have taken k steps to the right and n − k steps to the left; then the distance traveled is
x(nT) = k s − (n − k) s = (2k − n) s
so x(nT) is a random walk taking the values r s, where r equals n, n − 2, …, −(n − 2), −n, and x(nT) = r s ⇒ k = (n + r)/2. Therefore
P{x(nT) = r s} = P{k = (n+r)/2} = C(n, (n+r)/2) (1/2)ⁿ

SOLO Stochastic Processes — Random Walk (continue – 1)
The random value is x(nT) = x₁ + x₂ + ⋯ + xₙ, where at step i the event xᵢ satisfies P{xᵢ = +s} = p = 1/2 and P{xᵢ = −s} = 1 − p = 1/2, so that
E{xᵢ} = s P{xᵢ = +s} − s P{xᵢ = −s} = 0,  E{xᵢ²} = s² P{xᵢ = +s} + s² P{xᵢ = −s} = s²
E{x(nT)} = E{x₁} + ⋯ + E{xₙ} = 0
E{x²(nT)} = Σᵢ Σⱼ E{xᵢ xⱼ} = Σᵢ E{xᵢ²} = n s²   (for i ≠ j, E{xᵢ xⱼ} = E{xᵢ} E{xⱼ} = 0 since the xᵢ, xⱼ are independent)
For large n, the De Moivre–Laplace approximation gives
P{x(nT) = r s} = C(n, (n+r)/2) pᵏ(1−p)ⁿ⁻ᵏ ≈ √(2/(πn)) exp(−r²/(2n))
and, for large r (r > √n),
P{x(nT) ≤ r s} ≈ 1/2 + (1/√(2π)) ∫₀^{r/√n} e^{−y²/2} dy = ½ [1 + erf(r/√(2n))]
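A brief simulation sketch (not part of the original slides), assuming Python/NumPy with illustrative values of n, s and the number of realizations, checks the three results above: zero mean, variance n s², and the Gaussian approximation of the cumulative probability.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)
n, s, runs = 1000, 1.0, 5000

# Each realization: n independent steps of +/- s with probability 1/2 each.
steps = rng.choice([-s, s], size=(runs, n))
x = steps.sum(axis=1)                     # x(nT) for each realization

print(x.mean())                           # ~ 0        (E{x(nT)} = 0)
print(x.var(), n * s**2)                  # ~ 1000     (E{x^2(nT)} = n s^2)

# P{x(nT) <= r s} vs the Gaussian approximation (1 + erf(r / sqrt(2 n))) / 2
r = 30
print((x <= r * s).mean(), 0.5 * (1 + erf(r / sqrt(2 * n))))   # both ~ 0.83
```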
SOLO Stochastic Processes — Random Walk (continue – 2)
For n₁ > n₂ > n₃ > n₄, the number of steps to the right in the interval from n₂T to n₁T is independent of the number of steps to the right in the interval from n₄T to n₃T. Hence x(n₁T) − x(n₂T) is independent of x(n₄T) − x(n₃T). Table of Content

SOLO Stochastic Processes — Smoluchowski Equation
In physics, the diffusion equation with drift term is often called the Smoluchowski equation (after Marian von Smoluchowski). Let w(r,t) be a density, D a diffusion constant, ζ a friction coefficient, and U(r,t) a potential. Then the Smoluchowski equation states that the density evolves according to (in its standard form)
∂w/∂t = ∇ · [ D ∇w + (1/ζ) w ∇U ]
The diffusivity term acts to smooth out the density, while the drift term shifts the density towards regions of low potential U. The equation is consistent with each particle moving according to a stochastic differential equation with a bias term and a diffusivity D. Physically, the drift term originates from a force being balanced by a viscous drag given by ζ. The Smoluchowski equation is formally identical to the Fokker–Planck equation, the only difference being the physical meaning of w: a distribution of particles in space for the Smoluchowski equation, a distribution of particle velocities for the Fokker–Planck equation.

SOLO Stochastic Processes — Einstein–Smoluchowski Equation
In physics (namely, in kinetic theory) the Einstein relation (also known as the Einstein–
Smoluchowski relation) is a previously unexpected connection revealed independently by Albert Einstein in 1905 and by Marian Smoluchowski (1906) in their papers on Brownian motion. Two important special cases of the relation are
D = μ_q k_B T / q   (diffusion of charged particles)
D = k_B T / (6πηr)   (the "Einstein–Stokes equation", for diffusion of spherical particles through a liquid with low Reynolds number)
where
• ρ(x,t) is the density of the Brownian particles
• D is the diffusion constant
• q is the electrical charge of a particle
• μ_q is the electrical mobility of the charged particle, i.e. the ratio of the particle's terminal drift velocity to an applied electric field
• k_B is Boltzmann's constant
• T is the absolute temperature
• η is the viscosity
• r is the radius of the spherical particle.
The more general form of the equation is D = μ k_B T, where the "mobility" μ is the ratio of the particle's terminal drift velocity to an applied force, μ = v_d / F.
Einstein's equation for Brownian motion:
∂ρ/∂t = D ∂²ρ/∂x²,  with solution ρ(x,t) = (4πDt)^{−1/2} exp[ −x²/(4Dt) ]
Table of Content
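As a numeric illustration of the Einstein–Stokes special case (added here, not part of the original slides), the following sketch — assuming Python and round illustrative values for temperature, viscosity and particle radius — estimates the diffusion constant of a micron-sized sphere in water and the resulting root-mean-square displacement.

```python
from math import pi

# Stokes-Einstein estimate D = kB*T / (6*pi*eta*r) for a spherical particle.
kB  = 1.380649e-23      # Boltzmann constant, J/K
T   = 293.0             # temperature, K (illustrative)
eta = 1.0e-3            # viscosity of water, Pa*s (illustrative)
r   = 0.5e-6            # particle radius, m (1 micrometre diameter)

D = kB * T / (6 * pi * eta * r)
print(f"D = {D:.3e} m^2/s")                       # ~ 4.3e-13 m^2/s

# One-dimensional mean-square displacement after time t: <x^2> = 2 D t
t = 10.0
print(f"rms displacement after {t} s: {(2 * D * t) ** 0.5 * 1e6:.2f} micrometres")
```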
SOLO Stochastic Processes — Langevin Equation. Paul Langevin 1872 – 1946
The Langevin equation (Paul Langevin, 1908) is a stochastic differential equation describing the time evolution of a subset of the degrees of freedom. These degrees of freedom typically are collective (macroscopic) variables changing only slowly in comparison to the other (microscopic) variables of the system. The fast (microscopic) variables are responsible for the stochastic nature of the Langevin equation.
The original Langevin equation describes Brownian motion, the apparently random movement of a particle in a fluid due to collisions with the molecules of the fluid [Langevin, P. (1908). "On the Theory of Brownian Motion". C. R. Acad. Sci. (Paris) 146: 530–533]:
m dv/dt = −λ v + η(t),  v = dx/dt
We are interested in the position x of a particle of mass m. The force on the particle is the sum of a viscous force proportional to the particle's velocity, −λ v (Stokes' law), plus a noise term η(t) that has a Gaussian probability distribution with correlation function
⟨ηᵢ(t) ηⱼ(t′)⟩ = 2 λ k_B T δᵢⱼ δ(t − t′)
where k_B is Boltzmann's constant and T is the temperature.
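A minimal simulation sketch of the Langevin equation above (not part of the original slides): assuming Python/NumPy and an Euler–Maruyama discretization with illustrative parameter values, it checks that the stationary velocity variance approaches the equipartition value k_B T / m.

```python
import numpy as np

rng = np.random.default_rng(2)

# Euler-Maruyama integration of m dv = -lambda*v dt + sqrt(2*lambda*kB*T) dW
m, lam, kBT = 1.0, 2.0, 0.5                 # illustrative units
dt, n_steps, n_paths = 1e-3, 20000, 2000

v = np.zeros(n_paths)
noise_amp = np.sqrt(2.0 * lam * kBT * dt) / m
for _ in range(n_steps):
    v += (-lam / m) * v * dt + noise_amp * rng.standard_normal(n_paths)

# Equipartition: the stationary velocity variance should approach kB*T/m.
print(v.var(), kBT / m)                     # both ~ 0.5
```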
Table of Content

SOLO Stochastic Processes — Propagation Equation
Definition 1: Hölder Continuity Condition. Given an m×1 vector function k(x,t) of an n×1 argument x ∈ K, we say that k is Hölder continuous in K if for some constants C, α > 0 and some norm ‖·‖:
‖k(x₁,t) − k(x₂,t)‖ < C ‖x₁ − x₂‖^α
Hölder continuity is a generalization of Lipschitz continuity (α = 1): ‖k(x₁,t) − k(x₂,t)‖ < C ‖x₁ − x₂‖. (Rudolf Lipschitz 1832 – 1903, Otto Ludwig Hölder 1859 – 1937)

SOLO Stochastic Processes — Propagation Equation
Definition 2: Standard Stochastic State Realization (SSSR). Consider the stochastic differential equation
dx(t) = f(x,t) dt + G(x,t) dn(t),  t ∈ [t₀, t_f],  x an n×1 vector
where dn(t) = dn_g(t) + dn_p(t) and E[dn(t)] = E[dn_g(t)] = E[dn_p(t)] = 0:
• dn_g(t) is a Wiener (Gauss) process increment: writing dw(t) = dn_g(t), E[w(t) wᵀ(s)] = Q(t) δ(t − s), i.e. E[dn_g(t) dn_gᵀ(t)] = Q(t)_{n×n} dt
• dn_p(t) is a Poisson process increment: E[dn_p(t) dn_pᵀ(t)] = diag(λ₁ σ²_{a₁}, λ₂ σ²_{a₂}, …, λₙ σ²_{aₙ}) dt
and assume:
(1) x(t₀) = x₀, where x₀ is independent of dn(t);
(2) G(x,t)_{n×n} is Hölder continuous in t and Lipschitz continuous in x; G(x,t) Gᵀ(x,t) is strictly positive definite; ∂G(x,t)/∂xᵢ and ∂²G(x,t)/∂xᵢ∂xⱼ are globally Lipschitz continuous in x, continuous in t, and globally bounded;
(3) the vector f(x,t) is continuous in t and globally Lipschitz continuous in x, and the ∂fᵢ/∂xᵢ are globally Lipschitz continuous in x and continuous in t.
Such a stochastic differential equation is called a Standard Stochastic State Realization (SSSR). Table of Content

SOLO Stochastic Processes — Lévy Process. Paul Pierre Lévy 1886 – 1971
In probability theory, a Lévy process, named after the French mathematician Paul Lévy, is a continuous-time stochastic process X = {X_t : t ≥ 0} such that:
1. X₀ = 0 almost surely (with probability one).
2. Independent increments: for any 0 ≤ t₀ < t₁ < ⋯ < tₙ, the increments X_{t₁} − X_{t₀}, X_{t₂} − X_{t₁}, …, X_{tₙ} − X_{tₙ₋₁} are independent.
3. Stationary increments: for any t < s, X_s − X_t is equal in distribution to X_{s−t}.
4. t ↦ X_t is almost surely right continuous with left limits.
Independent increments: a continuous-time stochastic process assigns a random variable X_t to each point t ≥ 0 in time; in effect it is a random function of t. The increments of such a process are the differences X_s − X_t between its values at different times t < s. To call the increments of a process independent means that increments X_s − X_t and X_u − X_v are independent random variables whenever the two time intervals do not overlap and, more generally, any finite number of increments assigned to pairwise non-overlapping time intervals are mutually (not just pairwise) independent.

SOLO Stochastic Processes — Lévy Process (continue – 1)
Stationary increments: to call the increments stationary means that the probability distribution of any increment X_s − X_t depends only on the length s − t of the time interval; increments with equally long time intervals are identically distributed. In the Wiener process, the probability distribution of X_s − X_t is normal with expected value 0 and variance s − t. In the (homogeneous) Poisson process, the probability distribution of X_s − X_t is a Poisson distribution with expected value λ(s − t), where λ > 0 is the "intensity" or "rate" of the process.

SOLO Stochastic Processes — Lévy Process (continue – 2)
Divisibility: Lévy processes correspond to infinitely divisible probability distributions. The probability distributions of the increments of any Lévy process are infinitely divisible, since the increment of length t is the sum of n increments of length t/n, which are i.i.d. by assumption (independent increments and stationarity). Conversely, there is a Lévy process for each infinitely divisible probability distribution: given such a distribution D, multiples and divisions of it define the process at positive rational times, a Dirac delta distribution defines it at time 0, and taking limits defines it at real times. Independent increments and stationarity follow from the divisibility assumption, though one must check continuity and that taking limits gives a well-defined function for irrational times. Table of Content

SOLO Stochastic Processes — Martingale
Originally, martingale referred to a class of betting strategies
that was popular in 18th century France.
The simplest of these strategies was designed for a game in which the gambler wins his stake if a coin comes up heads and loses it if the coin comes up tails. The strategy had the gambler double his bet after every loss so that the first win would recover all previous losses plus win a profit equal to the original stake. As the gambler's wealth and
available time jointly approach infinity, his probability of eventually flipping heads approaches 1, which makes the martingale betting strategy seem like a sure thing. However, the exponential growth of the bets eventually bankrupts its users.
History of Martingale: the concept of martingale in probability theory was introduced by Paul Pierre Lévy, and much of the original development of the theory was done by Joseph Leo Doob. Part of the motivation for that work was to show the impossibility of successful betting strategies. Paul Pierre Lévy 1886 – 1971, Joseph Leo Doob 1910 – 2004.

SOLO Stochastic Processes — Martingale
In probability theory, a martingale is a stochastic process (i.e., a sequence of random variables) such that the conditional expected value of an observation at some time t, given all the observations up to some earlier time s, is equal to the observation at that earlier time s.
A discrete-time martingale is a discrete-time stochastic process (i.e., a sequence of random variables) X₁, X₂, X₃, … that satisfies, for all n,
E[X_{n+1} | X₁, …, Xₙ] = Xₙ
i.e., the conditional expected value of the next observation, given all the past observations, is equal to the last observation. Somewhat more generally, a sequence Y₁, Y₂, Y₃, … is said to be a martingale with respect to another sequence X₁, X₂, X₃, … if, for all n,
E[Y_{n+1} | X₁, …, Xₙ] = Yₙ
Similarly, a continuous-time martingale with respect to the stochastic process X_t is a stochastic process Y_t such that, for all t ≥ s,
E[Y_t | {X_τ : τ ≤ s}] = Y_s
This expresses the property that the conditional expectation of an observation at time t, given all the observations up to time s, is equal to the observation at time s (of course, provided that s ≤ t).

SOLO Stochastic Processes — Martingale
In full generality, a stochastic process Y : T × Ω → S is a martingale with respect to a filtration Σ∗ and probability measure P if
• Σ∗ is a filtration of the underlying probability space (Ω, Σ, P);
• Y is adapted to the filtration Σ∗, i.e., for each t in the index set T, the random variable Y_t is a Σ_t-measurable function;
• for each t, Y_t lies in the L¹ space L¹(Ω, Σ_t, P; S), i.e. E[|Y_t|] < ∞;
• for all s and t with s < t and all F ∈ Σ_s, E[(Y_t − Y_s) χ_F] = 0, where χ_F denotes the indicator function of the event F. In Grimmett and Stirzaker's Probability and Random Processes, this last condition is denoted as Y_s = E(Y_t | Σ_s), which is a general form of conditional expectation.
It is important to note that the property of being a martingale involves both the filtration and the probability measure (with respect to which the expectations are taken). It is possible that Y could be a martingale with respect to one measure but not another one; the Girsanov theorem offers a way to find a measure with respect to which an Itō process is a martingale. Table of Content

SOLO Stochastic Processes — Chapman–Kolmogorov Equation
Sydney Chapman 1888 – 1970, Andrey Nikolaevich Kolmogorov 1903 – 1987
Suppose that fi is an indexed collection of random variables, that is, a stochastic process.
Let p_{i₁,…,iₙ}(f₁,…,fₙ) be the joint probability density function of the values of the random variables f₁ to fₙ. Then, the Chapman–Kolmogorov equation is
p_{i₁,…,iₙ₋₁}(f₁,…,fₙ₋₁) = ∫ p_{i₁,…,iₙ}(f₁,…,fₙ) dfₙ
Note that we have not yet assumed anything about the temporal (or any other) ordering of the random variables; the above equation applies equally to the marginalization of any of them.
Particularization to Markov chains: when the stochastic process under consideration is Markovian, the Chapman–Kolmogorov equation is equivalent to an identity on transition densities. In the Markov chain setting, one assumes that i₁ < ⋯ < iₙ. Then, because of the Markov property,
p_{i₁,…,iₙ}(f₁,…,fₙ) = p_{i₁}(f₁) p_{i₂|i₁}(f₂|f₁) ⋯ p_{iₙ|iₙ₋₁}(fₙ|fₙ₋₁)
where the conditional probability p_{i|j}(fᵢ|fⱼ) is the transition probability between the times i > j. So, the Chapman–Kolmogorov equation takes the form
p_{i₃|i₁}(f₃|f₁) = ∫ p_{i₃|i₂}(f₃|f₂) p_{i₂|i₁}(f₂|f₁) df₂
When the probability distribution on the state space of a Markov chain is discrete and the Markov chain is homogeneous, the Chapman–Kolmogorov equations can be expressed in terms of (possibly infinite-dimensional) matrix multiplication, thus: P(t + s) = P(t) P(s), where P(t) is the transition matrix, i.e., if X_t is the state of the process at time t, then for any two points i and j in the state space we have P_{ij}(t) = P(X_t = j | X₀ = i).

SOLO Stochastic Processes — Chapman–Kolmogorov Equation (continue – 1)
Particularization to Markov processes: let p_{x(t)|x(t₀)}(x, t | x₀, t₀) be the probability density function of the Markov process x(t) given that x(t₀) = x₀; then, for t₀ < t₂ < t,
p_{x(t)|x(t₀)}(x, t | x₀, t₀) = ∫ p_{x(t)|x(t₂)}(x, t | x₂, t₂) p_{x(t₂)|x(t₀)}(x₂, t₂ | x₀, t₀) dx₂
Geometric interpretation of the Chapman–Kolmogorov equation. Table of Content
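A tiny numerical check of the matrix form of the Chapman–Kolmogorov equation (added here, not part of the original slides), assuming Python/NumPy and an arbitrary 3-state homogeneous Markov chain chosen for illustration.

```python
import numpy as np

# A small homogeneous Markov chain; row i of P is the conditional distribution
# p(x_{k+1} = j | x_k = i).
P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.5, 0.3],
              [0.0, 0.4, 0.6]])

# Chapman-Kolmogorov in matrix form: P(t+s) = P(t) P(s).
P2 = P @ P
i, j = 0, 2
# Direct marginalization over the intermediate state:
p_ij = sum(P[i, k] * P[k, j] for k in range(3))
print(p_ij, P2[i, j])                   # identical (0.03)

# A distribution propagates forward as p_{t+1} = p_t P.
p0 = np.array([1.0, 0.0, 0.0])
print(p0 @ P2, (p0 @ P) @ P)            # same result
```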
SOLO Stochastic Processes — Itô Lemma and Itô Processes. Kiyosi Itô 1915 – 2008
In 1942, Itô began to reconstruct from scratch the concept of stochastic integrals and its associated theory of analysis. He created the theory of stochastic differential equations, which describe motion due to random events. In 1945 Itô was awarded his doctorate. He continued to develop his ideas on stochastic analysis with many important papers on the topic. Among them were "On a stochastic integral equation" (1946), "On the stochastic integral" (1948), "Stochastic differential equations in a differentiable manifold" (1950), "Brownian motions in a Lie group" (1950), and "On stochastic differential equations" (1951).
Itô Lemma and Itô processes: in its simplest form, Itô's lemma states that for an Itô process
dX_t = μ_t dt + σ_t dB_t
and any twice continuously differentiable function f on the real numbers, f(X) is also an Itô process satisfying
df(X_t) = f′(X_t) dX_t + ½ f″(X_t) σ_t σ_tᵀ dt = [ f′(X_t) μ_t + ½ f″(X_t) σ_t σ_tᵀ ] dt + f′(X_t) σ_t dB_t
Or, more extended: let X(t) be an Itô process given by dX_t = μ_t dt + σ_t dB_t and let f(t,x) be a function with continuous first- and second-order partial derivatives. Then, by Itô's lemma,
df(t, X_t) = [ ∂f/∂t + μ_t ∂f/∂x + ½ σ_t² ∂²f/∂x² ] dt + σ_t (∂f/∂x) dB_t

SOLO Stochastic Processes — Itô Lemma and Itô Processes (continue – 1)
Informal derivation: a formal proof of the lemma requires us to take the limit of a sequence of random variables, which is not done here. Instead, we can derive Itô's lemma by expanding a Taylor series and applying the rules of stochastic calculus. Assume the Itô process is in the form dx = a dt + b dB. Expanding f(x,t) in a Taylor series in x and t we have
df = ∂f/∂t dt + ∂f/∂x dx + ½ ∂²f/∂x² dx² + ⋯
and substituting a dt + b dB for dx gives
df = ∂f/∂t dt + ∂f/∂x (a dt + b dB) + ½ ∂²f/∂x² (a² dt² + 2ab dt dB + b² dB²) + ⋯
In the limit as dt tends to 0, the dt² and dt dB terms disappear but the dB² term tends to dt (this follows because E[dB²] = dt while its fluctuations are of higher order in dt). Deleting the dt² and dt dB terms, substituting dt for dB², and collecting the dt and dB terms, we obtain
df = ( ∂f/∂t + a ∂f/∂x + ½ b² ∂²f/∂x² ) dt + b ∂f/∂x dB
as required. Table of Content
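The following short sketch (not part of the original slides), assuming Python/NumPy and the simplest case f(x) = x² with dX = dB, verifies Itô's lemma numerically on a single discretized Brownian path: the reconstruction of B_T² requires the extra dt term coming from the dB² → dt rule.

```python
import numpy as np

rng = np.random.default_rng(3)
T, n = 1.0, 100000
dt = T / n
dB = np.sqrt(dt) * rng.standard_normal(n)
B = np.concatenate(([0.0], np.cumsum(dB)))

# Ito's lemma for f(x) = x^2 with dX = dB (a = 0, b = 1):
#   d(B^2) = 2 B dB + dt        (the dt comes from the dB^2 -> dt rule)
lhs = B[-1] ** 2
rhs = np.sum(2.0 * B[:-1] * dB) + n * dt   # Ito (left-point) sum + correction
print(lhs, rhs)                            # agree up to O(1/sqrt(n))

# Without the Ito correction the reconstruction is biased by roughly T:
print(lhs - np.sum(2.0 * B[:-1] * dB))     # ~ 1.0 = T
```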
SOLO Stochastic Processes — Stratonovich Stochastic Calculus. Ruslan L. Stratonovich (1930 – 1997)
Stratonovich invented a stochastic calculus which serves as an alternative to the Itô calculus; the Stratonovich calculus is most natural when physical laws are being considered. The Stratonovich integral appears in his stochastic calculus. He also solved the problem of optimal non-linear filtering based on his theory of conditional Markov processes, which was published in his papers in 1959 and 1960. The Kalman–Bucy (linear) filter (1961) is a special case of Stratonovich's filter. He also developed the value of information theory (1965). His latest book was on non-linear non-equilibrium thermodynamics. Table of Content

SOLO Stochastic Processes — Fokker–Planck Equation
[Figure: a solution of the one-dimensional Fokker–Planck equation with both the drift and the diffusion term; the initial condition is a Dirac delta function at x = 1, and the distribution drifts towards x = 0.]
The Fokker–Planck equation describes the time evolution of the probability density function of the position of a particle, and can be generalized to other observables as well. It is named after Adriaan Fokker and Max Planck and is also known as the Kolmogorov forward equation. The first use of the Fokker–Planck equation was the statistical description of Brownian motion of a particle in a fluid.
In one spatial dimension x, the Fokker–Planck equation for a process with drift D₁(x,t) and diffusion D₂(x,t) is
∂f(x,t)/∂t = −∂/∂x [ D₁(x,t) f(x,t) ] + ∂²/∂x² [ D₂(x,t) f(x,t) ]
More generally, the time-dependent probability distribution may depend on a set of N macrovariables xᵢ. The general form of the Fokker–Planck equation is then
∂f/∂t = −Σᵢ₌₁ᴺ ∂/∂xᵢ [ D₁ⁱ(x₁,…,x_N) f ] + Σᵢ₌₁ᴺ Σⱼ₌₁ᴺ ∂²/∂xᵢ∂xⱼ [ D₂ⁱʲ(x₁,…,x_N) f ]
where D₁ is the drift vector and D₂ the diffusion tensor; the latter results from the presence of the stochastic force. Adriaan Fokker 1887 – 1972, Max Planck 1858 – 1947.
Adriaan Fokker, „Die mittlere Energie rotierender elektrischer Dipole im Strahlungsfeld", Annalen der Physik 43 (1914), 810–820. Max Planck, „Über einen Satz der statistischen Dynamik und seine Erweiterung in der Quantentheorie", Sitzungsberichte der Preussischen Akademie der Wissenschaften (1917), p. 324–341.

SOLO Stochastic Processes — Fokker–Planck Equation (continue – 1)
The Fokker–Planck equation can be used for computing the probability
densities of stochastic differential equations. Consider the Itô stochastic differential equation
dX_t = μ(X_t, t) dt + σ(X_t, t) dW_t
where X_t is the state and W_t is a standard M-dimensional Wiener process. If the initial probability distribution is p(x, t₀) = p₀(x), then the probability distribution of the state is given by the Fokker–Planck equation with the drift and diffusion terms
D₁(x,t) = μ(x,t),  D₂(x,t) = ½ σ(x,t) σᵀ(x,t)
Similarly, a Fokker–Planck equation can be derived for Stratonovich stochastic differential equations. In this case, noise-induced drift terms appear if the noise strength is state-dependent.
SOLO Stochastic Processes — Fokker–Planck Equation (continue – 2)
Derivation of the Fokker–Planck Equation
Start with p_{x_k, x_{k−1}}(x_k, x_{k−1}) = p_{x_k|x_{k−1}}(x_k | x_{k−1}) p_{x_{k−1}}(x_{k−1}) and
p_{x_k}(x_k) = ∫ p_{x_k, x_{k−1}}(x_k, x_{k−1}) dx_{k−1} = ∫ p_{x_k|x_{k−1}}(x_k | x_{k−1}) p_{x_{k−1}}(x_{k−1}) dx_{k−1}
Define t_k = t, t_{k−1} = t − Δt, x_k = x(t), x_{k−1} = x(t−Δt), so that
p_{x(t)}[x(t)] = ∫ p_{x(t)|x(t−Δt)}[x(t) | x(t−Δt)] p_{x(t−Δt)}[x(t−Δt)] dx(t−Δt)
Let us use the characteristic function of the increment,
Φ_{x(t)|x(t−Δt)}(s) := E{ exp[ −s (x(t) − x(t−Δt)) ] | x(t−Δt) } = ∫ exp[ −s (x(t) − x(t−Δt)) ] p_{x(t)|x(t−Δt)}[x(t) | x(t−Δt)] dΔx(t)
whose inverse transform is
p_{x(t)|x(t−Δt)}[x(t) | x(t−Δt)] = (1/2πj) ∫_{−j∞}^{+j∞} exp[ s (x(t) − x(t−Δt)) ] Φ_{x(t)|x(t−Δt)}(s) ds
Using the Chapman–Kolmogorov equation we obtain
p_{x(t)}[x(t)] = (1/2πj) ∫ ∫_{−j∞}^{+j∞} exp[ s (x(t) − x(t−Δt)) ] Φ_{x(t)|x(t−Δt)}(s) ds p_{x(t−Δt)}[x(t−Δt)] dx(t−Δt)

SOLO Stochastic Processes — Fokker–Planck Equation (continue – 3): Derivation (continue – 1)
The characteristic function can be expressed in terms of the moments about x(t−Δt) as
Φ_{x(t)|x(t−Δt)}(s) = 1 + Σ_{i=1}^{∞} [ (−s)ⁱ / i! ] E{ [x(t) − x(t−Δt)]ⁱ | x(t−Δt) }
Use the fact that
(1/2πj) ∫_{−j∞}^{+j∞} sⁱ exp[ −s (x(t) − x(t−Δt)) ] ds = (−1)ⁱ ∂ⁱ δ[x(t) − x(t−Δt)] / ∂xⁱ,  i = 0, 1, 2, …
where δ[u] is the Dirac delta function, ∫ F(u) δ(u) du = F(0).

SOLO Stochastic Processes — Fokker–Planck Equation (continue – 4, 5): Derivation (continue – 2, 3)
Useful results related to integrals involving the Dirac delta function:
∫ f(u) δ(u − a) du = f(a),  ∫ f(u) [ dⁱδ(u − a)/duⁱ ] du = (−1)ⁱ dⁱf(u)/duⁱ |_{u=a}
Using these identities term by term and integrating over x(t−Δt), we obtain
p_{x(t)}[x(t)] = p_{x(t−Δt)}[x(t)] + Σ_{i=1}^{∞} [ (−1)ⁱ / i! ] ∂ⁱ/∂xⁱ { E{ [x(t) − x(t−Δt)]ⁱ | x(t−Δt) = x(t) } p_{x(t−Δt)}[x(t)] }

SOLO Stochastic Processes — Fokker–Planck Equation (continue – 6): Derivation (continue – 4)
Rearranging, dividing by Δt, and taking the limit Δt → 0, we obtain
∂p_{x(t)}[x(t)]/∂t = Σ_{i=1}^{∞} [ (−1)ⁱ / i! ] ∂ⁱ/∂xⁱ { mᵢ[x(t), t] p_{x(t)}[x(t)] }
where mᵢ[x(t), t] := lim_{Δt→0} E{ [x(t) − x(t−Δt)]ⁱ | x(t−Δt) } / Δt and x(t) := lim_{Δt→0} x(t−Δt).
This equation is called the Stochastic Equation or Kinetic Equation. It is a partial differential equation that we must solve with the initial condition p_{x(t)}[x(t)]|_{t=t₀} = p₀[x(t₀)].

SOLO Stochastic Processes — Fokker–Planck Equation (continue – 7): Derivation (continue – 5)
We want to find p_{x(t)}[x(t)] where x(t) is the solution of
dx(t)/dt = f(x, t) + n_g(t),  t ∈ [t₀, t_f]
with n_g(t) a Wiener (Gauss) process: E[n_g] = 0 and E{ [n_g(t) − n̂_g][n_g(τ) − n̂_g] } = Q(t) δ(t − τ). Then
m₁[x(t), t] = lim_{Δt→0} E{ [x(t) − x(t−Δt)] | x(t−Δt) } / Δt = E{ dx/dt | x(t) } = f(x,t) + E[n_g] = f(x,t)
m₂[x(t), t] = lim_{Δt→0} E{ [x(t) − x(t−Δt)]² | x(t−Δt) } / Δt = Q(t)
mᵢ[x(t), t] = 0 for i > 2
Therefore we obtain
∂p_{x(t)}[x(t)]/∂t = −∂{ f(x,t) p_{x(t)}[x(t)] }/∂x + ½ Q(t) ∂²p_{x(t)}[x(t)]/∂x²
which is the Fokker–Planck equation.
SOLO Stochastic Processes — Kolmogorov Forward Equation (KFE) and Kolmogorov Backward Equation (KBE)
The Kolmogorov forward equation (KFE) and its adjoint, the Kolmogorov backward equation (KBE), are partial differential equations (PDE) that arise in the theory of continuous-time, continuous-state Markov processes. Both were published by Andrey Kolmogorov in 1931. Later it was realized that the KFE was already known to physicists under the name Fokker–Planck equation; the KBE, on the other hand, was new.
The Kolmogorov forward equation addresses the following problem: we have information about the state x of the system at time t (namely a probability distribution p_t(x)); we want to know the probability distribution of the state at a later time s > t. The adjective 'forward' refers to the fact that p_t(x) serves as the initial condition and the PDE is integrated forward in time. (In the common case where the initial state is known exactly, p_t(x) is a Dirac delta function centered on the known initial state.) In standard form,
∂p(x,t)/∂t = −∂/∂x [ D₁(x,t) p(x,t) ] + ∂²/∂x² [ D₂(x,t) p(x,t) ]   (KFE)
The Kolmogorov backward equation, on the other hand, is useful when we are interested at time t in whether at a future time s the system will be in a given subset of states, sometimes called the target set. The target is described by a given function u_s(x) which is equal to 1 if state x is in the target set and zero otherwise. We want to know for every state x at time t (t < s) what is the probability of ending up in the target set at time s (sometimes called the hit probability). In this case u_s(x) serves as the final condition of the PDE, which is integrated backward in time, from s to t:
−∂p(x,t)/∂t = D₁(x,t) ∂p(x,t)/∂x + D₂(x,t) ∂²p(x,t)/∂x²   (KBE)
for t ≤ s, subject to the final condition p(x,s) = u_s(x). Andrey Nikolaevich Kolmogorov 1903 – 1987.
SOLO Stochastic Processes — Kolmogorov Forward Equation (KFE) and Kolmogorov Backward Equation (KBE) (continue – 1)
Formulating the Kolmogorov backward equation: assume that the system state x(t) evolves according to the stochastic differential equation dx_t = μ(x_t,t) dt + σ(x_t,t) dW_t; then the Kolmogorov backward equation is, using Itô's lemma on p(x,t),
−∂p(x,t)/∂t = μ(x,t) ∂p(x,t)/∂x + ½ σ²(x,t) ∂²p(x,t)/∂x²
for t ≤ s, subject to the final condition p(x,s) = u_s(x). Table of Content

SOLO Stochastic Processes — Bartlett–Moyal Theorem
Let Φ_{x(t)|x(t₁)}(s,t) be the characteristic function of the Markov process x(t), t ∈ T (some interval),
Φ_{x(t)|x(t₁)}(s,t) = E{ exp[ −sᵀ x(t) ] | x(t₁) } = ∫ exp[ −sᵀ x(t) ] p_{x(t)|x(t₁)}[x(t) | x(t₁)] dx(t),  t > t₁
Assume the following:
(1) Φ_{x(t)|x(t₁)}(s,t) is continuously differentiable in t, t ∈ T;
(2) E{ exp[ −sᵀ(x(t+Δt) − x(t)) ] − 1 | x(t) } / Δt ≤ g(s; t, x(t)), where E|g| is bounded on T;
(3) the limit ψ(s; t, x(t)) := lim_{Δt→0} E{ exp[ −sᵀ(x(t+Δt) − x(t)) ] − 1 | x(t) } / Δt exists.
Theorem 1: then
∂Φ_{x(t)|x(t₁)}(s,t)/∂t = E{ exp[ −sᵀ x(t) ] ψ(s; t, x(t)) | x(t₁) }
(Maurice Stevenson Bartlett 1910 – 2002, Jose Enrique Moyal 1910 – 1998)

SOLO Stochastic Processes — Bartlett–Moyal Theorem
Proof: by definition,
∂Φ_{x(t)|x(t₁)}(s,t)/∂t = lim_{Δt→0} [ Φ_{x(t+Δt)|x(t₁)}(s, t+Δt) − Φ_{x(t)|x(t₁)}(s,t) ] / Δt
with Φ_{x(t+Δt)|x(t₁)}(s, t+Δt) = ∫ exp[ −sᵀ x(t+Δt) ] p_{x(t+Δt)|x(t₁)}[x(t+Δt) | x(t₁)] dx(t+Δt). But since x(t) is a Markov process, we can use the Chapman–Kolmogorov equation
p_{x(t+Δt)|x(t₁)}[x(t+Δt) | x(t₁)] = ∫ p_{x(t+Δt)|x(t)}[x(t+Δt) | x(t)] p_{x(t)|x(t₁)}[x(t) | x(t₁)] dx(t)
so that
Φ_{x(t+Δt)|x(t₁)}(s, t+Δt) = ∫∫ exp[ −sᵀ x(t+Δt) ] p_{x(t+Δt)|x(t)} p_{x(t)|x(t₁)} dx(t+Δt) dx(t) = E{ exp[ −sᵀ x(t) ] · E{ exp[ −sᵀ(x(t+Δt) − x(t)) ] | x(t) } | x(t₁) }
Therefore
∂Φ/∂t = lim_{Δt→0} E{ exp[ −sᵀ x(t) ] · [ E{ exp[ −sᵀ(x(t+Δt) − x(t)) ] | x(t) } − 1 ] / Δt | x(t₁) } = E{ exp[ −sᵀ x(t) ] ψ(s; t, x(t)) | x(t₁) }
q.e.d.

SOLO Stochastic Processes — Discussion about the Bartlett–Moyal Theorem
(1) The assumption that x(t) is a Markov process is essential to the derivation.
(2) The function ψ(s; t, x) := E{ exp[ −sᵀ dx(t) ] − 1 | x(t) } / dt is called the Itô differential of the Markov process, or the infinitesimal generator of the Markov process.
(3) The function ψ(s; t, x) is all we need to define the stochastic process (this will be proven in the next Lemma).

SOLO Stochastic Processes — Bartlett–Moyal Theorem
Lemma: let x(t) be an (n×1) vector Markov process generated by
dx = f(x,t) dt + dn,  dn = dn_g + dn_p
where dn_g is an (n×1) Wiener (Gauss) process increment with zero mean and covariance E[dn_g(t) dn_gᵀ(t)] = Q(t) dt, and dn_p is an (n×1) Poisson process increment with zero mean, rate vector λ = (λ₁, …, λₙ) and jump probability density p_{aᵢ}(α). Then
ψ(s; t, x) = −sᵀ f(x,t) + ½ sᵀ Q s − Σᵢ₌₁ⁿ λᵢ [ 1 − M_{aᵢ}(sᵢ) ]
where M_{aᵢ}(sᵢ) := E[ exp(−sᵢ aᵢ) ] is the transform of the jump-amplitude distribution.
Proof: we have
ψ(s; t, x) = E{ exp[ −sᵀ dx ] − 1 | x(t) } / dt = E{ exp[ −sᵀ( f(x,t) dt + dn_g + dn_p ) ] − 1 | x(t) } / dt
Because f dt, dn_g and dn_p are independent,
E{ exp[ −sᵀ( f dt + dn_g + dn_p ) ] | x(t) } = exp[ −sᵀ f(x,t) dt ] · E{ exp[ −sᵀ dn_g ] } · E{ exp[ −sᵀ dn_p ] }
Because dn_g is Gaussian, E{ exp[ −sᵀ dn_g ] } = exp[ ½ sᵀ Q s dt ].
For the generalized Poisson process, note that the probability of two or more jumps occurring in dt is 0(dt) → 0, so
P[no jump] = Πᵢ (1 − λᵢ dt) = 1 − Σᵢ λᵢ dt + 0(dt),  P[only one jump, in dnᵢ] = λᵢ dt Πⱼ≠ᵢ (1 − λⱼ dt) = λᵢ dt + 0(dt)
E{ exp[ −sᵀ dn_p ] } = P[no jump] · 1 + Σᵢ E{ exp(−sᵢ aᵢ) } P[only one jump, in dnᵢ] = 1 − Σᵢ λᵢ dt [ 1 − M_{aᵢ}(sᵢ) ] + 0(dt)
Collecting terms and letting dt → 0,
ψ(s; t, x) = { exp[ −sᵀ f dt ] exp[ ½ sᵀ Q s dt ] ( 1 − Σᵢ λᵢ dt [1 − M_{aᵢ}(sᵢ)] ) − 1 } / dt → −sᵀ f(x,t) + ½ sᵀ Q s − Σᵢ λᵢ [ 1 − M_{aᵢ}(sᵢ) ]
q.e.d.

SOLO Stochastic Processes — Bartlett–Moyal Theorem
Theorem 2: let x(t) be an (n×1) vector Markov process generated by dx = f(x,t) dt + dn_g + dn_p.
Let p := p_{x(t)|x(t₁)}[x(t), t | x(t₁)] be the transition probability density function for the Markov process x(t). Then p satisfies the partial differential equation
∂p/∂t = −Σᵢ₌₁ⁿ ∂(fᵢ p)/∂xᵢ + ½ Σᵢ₌₁ⁿ Σⱼ₌₁ⁿ Qᵢⱼ ∂²p/∂xᵢ∂xⱼ + Σᵢ₌₁ⁿ λᵢ [ −p + p ∗ p_{aᵢ} ]
where the convolution (∗) is defined as p ∗ p_{aᵢ} := ∫ p_{aᵢ}(vᵢ) p_{x(t)|x(t₁)}[x(t) − vᵢ, t | x(t₁)] dvᵢ.
Proof: from Theorem 1 and the previous Lemma we have
∂Φ_{x(t)|x(t₁)}(s,t)/∂t = E{ exp[ −sᵀ x(t) ] ( −sᵀ f(x,t) + ½ sᵀ Q s − Σᵢ λᵢ [1 − M_{aᵢ}(sᵢ)] ) | x(t₁) }
We also have the transform pair
Φ_{x(t)|x(t₁)}(s,t) = ∫ exp[ −sᵀ x(t) ] p[x(t), t | x(t₁)] dx(t)  ⇔  p[x(t), t | x(t₁)] = (1/2πj)ⁿ ∫ exp[ sᵀ x(t) ] Φ_{x(t)|x(t₁)}(s,t) ds
so that ∂p/∂t = (1/2πj)ⁿ ∫ exp[ sᵀ x(t) ] ∂Φ_{x(t)|x(t₁)}(s,t)/∂t ds. Substituting the expression for ∂Φ/∂t and inverting term by term:
• the drift term −sᵀ f(x,t) gives −Σᵢ ∂[ fᵢ(x,t) p ]/∂xᵢ;
• the diffusion term ½ sᵀ Q s gives ½ Σᵢ Σⱼ Qᵢⱼ(t) ∂²p/∂xᵢ∂xⱼ;
• the jump term −Σᵢ λᵢ [1 − M_{aᵢ}(sᵢ)] gives, using the convolution property of the transform, Σᵢ λᵢ [ −p + ∫ p_{aᵢ}(vᵢ) p[x(t) − vᵢ, t | x(t₁)] dvᵢ ] = Σᵢ λᵢ [ −p + p ∗ p_{aᵢ} ].
Collecting the three contributions yields the stated partial differential equation. q.e.d. Table of Content
SOLO Stochastic Processes — Feller–Kolmogorov Equation. Andrey Nikolaevich Kolmogorov 1903 – 1987
Let x(t) be an (n×1) vector Markov process generated by dx = f(x,t) dt + dn_p (Poisson noise only). Let p be the transition probability density function for the Markov process x(t). Then p satisfies the partial differential equation
∂p/∂t = −Σᵢ₌₁ⁿ ∂(fᵢ p)/∂xᵢ + Σᵢ₌₁ⁿ λᵢ [ −p + p ∗ p_{aᵢ} ]
where the convolution (∗) is defined as p ∗ p_{aᵢ} := ∫ p_{aᵢ}(vᵢ) p[x(t) − vᵢ, t | x(t₁)] dvᵢ.
Proof: derived from Theorem 2 by taking dn_g = 0.

SOLO Stochastic Processes — Fokker–Planck Equation
Let x(t) be an (n×1) vector Markov process generated by dx = f(x,t) dt + dn_g (Gaussian noise only). Let p be the transition probability density function for the Markov process x(t). Then p satisfies the partial differential equation
∂p/∂t = −Σᵢ₌₁ⁿ ∂(fᵢ p)/∂xᵢ + ½ Σᵢ₌₁ⁿ Σⱼ₌₁ⁿ Qᵢⱼ ∂²p/∂xᵢ∂xⱼ
Proof: derived from Theorem 2 by taking dn_p = 0.
Discussion of the Fokker–Planck Equation: the Fokker–Planck equation can be written as a conservation law
∂p/∂t + Σᵢ₌₁ⁿ ∂Jᵢ/∂xᵢ = ∂p/∂t + ∇·J = 0,  where J := f p − ½ Q ∇p
This conservation law is a consequence of the global conservation of probability, ∫ p[x(t), t | x(t₁)] dx = 1. Table of Content
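A short numerical check of the conservation-law form (added here, not part of the original slides): assuming Python/NumPy and the Langevin drift and diffusion used on the next slide (illustrative parameter values), the probability current J = f p − ½ Q ∂p/∂v evaluated on the stationary Maxwell density vanishes up to finite-difference error, consistent with ∂p/∂t + ∇·J = 0 in steady state.

```python
import numpy as np

# Stationary check of J = f p - (Q/2) dp/dv = 0 for the Langevin drift
# f(v) = -(lam/m) v and diffusion Q = 2 lam kB T / m^2 (illustrative values).
m, lam, kBT = 1.0, 2.0, 0.5
Q = 2.0 * lam * kBT / m**2

v = np.linspace(-5, 5, 2001)
p = np.exp(-0.5 * v**2 / (kBT / m))          # Maxwell density, variance kB*T/m
p /= np.trapz(p, v)                          # normalize

dpdv = np.gradient(p, v)
J = (-(lam / m) * v) * p - 0.5 * Q * dpdv    # probability current
print(np.max(np.abs(J)))                     # ~ 0 up to grid (finite-difference) error
```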
SOLO Stochastic Processes — Langevin and Fokker–Planck Equations
The original Langevin equation describes Brownian motion, the apparently random movement of a particle in a fluid due to collisions with the molecules of the fluid:
m dv/dt = −λ v + η(t),  v = dx/dt  ⇒  dv/dt = −(λ/m) v + (1/m) η(t)
We are interested in the position x of a particle of mass m. The force on the particle is the sum of a viscous force proportional to the particle's velocity, −λ v (Stokes' law), plus a noise term η(t) that has a Gaussian probability distribution with correlation function
⟨ηᵢ(t) ηⱼ(t′)⟩ = 2 λ k_B T δᵢⱼ δ(t − t′),  Q := 2 k_B T λ / m²
where k_B is Boltzmann's constant and T is the temperature. Let p = p_{v(t)|v(t₀)}(v, t | v(t₀), t₀) be the transition probability density function of the Langevin-equation state. Then p satisfies the partial differential equation given by the Fokker–Planck equation
∂p/∂t = ∂[ (λ/m) v p ]/∂v + (Q/2) ∂²p/∂v²
We assume that the initial state v(t₀) at t₀ is deterministic: p_{v(t)|v(t₀)}(v, t₀ | v(t₀), t₀) = δ(v − v(t₀)).

SOLO Stochastic Processes — Langevin and Fokker–Planck Equations
The solution to the Fokker–Planck equation is the Gaussian
p_{v(t)|v(t₀)}(v, t | v₀, t₀) = (2πσ²)^{−1/2} exp[ −(v − v̂)² / (2σ²) ]
where
v̂ = v₀ exp[ −(λ/m)(t − t₀) ]  and  σ² = (Q m / 2λ) { 1 − exp[ −(2λ/m)(t − t₀) ] }
[Figure: a solution of the one-dimensional Fokker–Planck equation with both the drift and the diffusion term; the initial condition is a Dirac delta function and the distribution drifts toward zero.] Table of Content
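The Gaussian transition density above can be checked by Monte Carlo. The sketch below (not part of the original slides) assumes Python/NumPy, an Euler–Maruyama discretization, and illustrative values of m, λ, k_B T, v₀ and t; it compares the empirical mean and variance of many simulated Langevin paths with v̂ and σ² from the analytic Fokker–Planck solution.

```python
import numpy as np

rng = np.random.default_rng(4)

# Monte Carlo check of the Gaussian transition density solving the
# Fokker-Planck equation for dv = -(lam/m) v dt + sqrt(Q) dW, v(t0) = v0.
m, lam, kBT = 1.0, 2.0, 0.5
Q = 2.0 * lam * kBT / m**2
v0, t, dt = 3.0, 0.7, 1e-3
n_paths = 50000

v = np.full(n_paths, v0)
for _ in range(int(round(t / dt))):
    v += -(lam / m) * v * dt + np.sqrt(Q * dt) * rng.standard_normal(n_paths)

v_hat = v0 * np.exp(-lam * t / m)
sigma2 = (Q * m / (2.0 * lam)) * (1.0 - np.exp(-2.0 * lam * t / m))
print(v.mean(), v_hat)        # both ~ 0.74
print(v.var(), sigma2)        # both ~ 0.47
```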
SOLO Stochastic Processes — Generalized Fokker–Planck Equation
Define the set of past data (X,T) := (x₁, x₂, …, xₙ; t₁, t₂, …, tₙ), where we assume that (x(t), t) ∉ (X,T). We need to find p_x(x, t | X, T). The Fokker–Planck equation was derived under the assumption that x(t) is a Markov process; here we assume instead an arbitrary random process (n×1 vector), for which an arbitrary set of past values x₁,t₁; x₂,t₂; …; xₙ,tₙ must be considered.
Start the analysis by defining the conditional characteristic function of the increment of the process,
Φ_{Δx|x(t−Δt),X,T}(s, t | X, T) := E{ exp[ −sᵀ( x(t) − x(t−Δt) ) ] | x(t−Δt), X, T } = ∫ exp[ −sᵀ Δx(t) ] p_{Δx|x(t−Δt),X,T}[Δx(t)] dΔx(t),  Δx(t) := x(t) − x(t−Δt),  s = (s₁, …, sₙ)ᵀ, x = (x₁, …, xₙ)ᵀ
The inverse transform is
p_{x(t)|x(t−Δt),X,T}[x(t) | x(t−Δt), X, T] = (1/2πj)ⁿ ∫ exp[ sᵀ( x(t) − x(t−Δt) ) ] Φ_{Δx|x(t−Δt),X,T}(s, t | X, T) ds

SOLO Stochastic Processes — Generalized Fokker–Planck Equation
Using the Chapman–Kolmogorov equation we obtain
p_x[x, t | X, T] = (1/2πj)ⁿ ∫∫ exp[ sᵀ( x(t) − x(t−Δt) ) ] Φ_{Δx|x(t−Δt),X,T}(s, t | X, T) p_{x(t−Δt)}[x(t−Δt) | X, T] ds dx(t−Δt)
Let us expand the conditional characteristic function in a Taylor series about the vector s = 0:
Φ_{Δx|x(t−Δt),X,T}(s, t | X, T) = 1 + Σ_{m₁,…,mₙ ≥ 0, Σmᵢ ≠ 0} [ (−s₁)^{m₁} ⋯ (−sₙ)^{mₙ} / (m₁! ⋯ mₙ!) ] E{ [x₁(t) − x₁(t−Δt)]^{m₁} ⋯ [xₙ(t) − xₙ(t−Δt)]^{mₙ} | x(t−Δt), X, T }
Substituting this expansion in the inversion integral and using the Dirac-delta identities
(1/2πj) ∫ sᵢ^{mᵢ} exp[ −sᵢ( xᵢ(t) − xᵢ(t−Δt) ) ] dsᵢ = (−1)^{mᵢ} ∂^{mᵢ} δ[ xᵢ(t) − xᵢ(t−Δt) ] / ∂xᵢ^{mᵢ}
we obtain, after integrating over x(t−Δt),
p_x[x, t | X, T] = p_{x(t−Δt)}[x, t−Δt | X, T] + Σ_{m₁,…,mₙ ≥ 0, Σmᵢ ≠ 0} [ (−1)^{Σmᵢ} / (m₁! ⋯ mₙ!) ] ∂^{Σmᵢ}/∂x₁^{m₁} ⋯ ∂xₙ^{mₙ} { E{ [Δx₁]^{m₁} ⋯ [Δxₙ]^{mₙ} | x, X, T } p_{x(t−Δt)}[x, t−Δt | X, T] }

SOLO Stochastic Processes — Generalized Fokker–Planck Equation
Dividing both sides by Δt and taking Δt → 0 we obtain
∂p_x[x, t | X, T]/∂t = Σ_{m₁,…,mₙ ≥ 0, Σmᵢ ≠ 0} [ (−1)^{Σmᵢ} / (m₁! ⋯ mₙ!) ] ∂^{Σmᵢ}/∂x₁^{m₁} ⋯ ∂xₙ^{mₙ} { A_{m₁⋯mₙ} p_x[x, t | X, T] }
with
A_{m₁⋯mₙ} := lim_{Δt→0} E{ [x₁(t) − x₁(t−Δt)]^{m₁} ⋯ [xₙ(t) − xₙ(t−Δt)]^{mₙ} | x, X, T } / Δt
This is the Generalized Fokker–Planck Equation for non-Markovian random processes.
SOLO Stochastic Processes — Discussion of the Generalized Fokker–Planck Equation
• The Generalized Fokker–Planck equation is much more complex than the Fokker–Planck equation because of the presence of an infinite number of derivatives of the density function.
• It requires certain types of density functions, infinitely differentiable, and knowledge of all the coefficients A_{m₁⋯mₙ}.
• To avoid those difficulties we seek conditions on the process for which ∂p/∂t is defined by a finite set of derivatives. Those were defined by Pawula, R.F. (1967).

SOLO Stochastic Processes — Generalized Fokker–Planck Equation
Lemma 1: let A_{m₁,0,…,0} := lim_{Δt→0} E{ [x₁(t) − x₁(t−Δt)]^{m₁} | x, X, T } / Δt, m₁ ≠ 0. If A_{m₁,0,…,0} is zero for some even m₁, then A_{m₁,0,…,0} = 0 for all m₁ ≥ 3.
Proof: for m₁ odd and m₁ ≥ 3, write
A_{m₁,0,…,0} = lim_{Δt→0} E{ [Δx₁]^{(m₁−1)/2} [Δx₁]^{(m₁+1)/2} | x, X, T } / Δt
and, using the Schwarz inequality,
A²_{m₁,0,…,0} ≤ A_{m₁−1,0,…,0} · A_{m₁+1,0,…,0}   (m₁ odd, m₁ ≥ 3)
In the same way, for m₁ even and m₁ ≥ 4, splitting [Δx₁]^{m₁} = [Δx₁]^{(m₁−2)/2} [Δx₁]^{(m₁+2)/2} and using the Schwarz inequality again,
A²_{m₁,0,…,0} ≤ A_{m₁−2,0,…,0} · A_{m₁+2,0,…,0}   (m₁ even, m₁ ≥ 4)
Now suppose A_{r,0,…,0} = 0 for some even m₁ = r. Then, provided all the A's are bounded,
A²_{r−1,0,…,0} ≤ A_{r−2,0,…,0} A_{r,0,…,0} = 0,  A²_{r+1,0,…,0} ≤ A_{r,0,…,0} A_{r+2,0,…,0} = 0,
A²_{r−2,0,…,0} ≤ A_{r−4,0,…,0} A_{r,0,…,0} = 0 (r − 2 ≥ 4),  A²_{r+2,0,…,0} ≤ A_{r,0,…,0} A_{r+4,0,…,0} = 0 (r + 2 ≥ 4)
This procedure continues up and down the chains of inequalities, leaving A_{1,0,…,0} (and the second-order coefficient) not necessarily zero and achieving
rrr 00,,0,0,,0,30,,0,2 ==== ∞→ rAAAq.e.d. If is zero for some even m1, then0,,0,1 mA 30 10,,0,1≥∀= mAm Generalized Fokker - Planck Equation SOLO Stochastic Processes Lemma 2 Let If each of the moments is finite and vanishes for some even mi, then nmmm AAA ,,0,0,,,00,,0, ,,,21 Proof 2,,0 321,0,0,0,,00,0, 321≥∀=== mmmAAA mmm ( ) ( ) ( )( ) (
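As a concrete instance of the zero-propagation argument (my own worked case, not a separate slide), take $n=1$ and suppose the fourth coefficient vanishes; assuming, as in the Lemma, that all coefficients are bounded:
\[
A_4=0\ \Rightarrow\
\begin{cases}
A_3^{2}\le A_2A_4=0 &\Rightarrow\ A_3=0,\\[2pt]
A_5^{2}\le A_4A_6=0 &\Rightarrow\ A_5=0,\\[2pt]
A_6^{2}\le A_4A_8=0 &\Rightarrow\ A_6=0,\ \text{and so on,}
\end{cases}
\]
so every $A_m$ with $m\ge 3$ vanishes while $A_1$ (drift) and $A_2$ (diffusion) survive, which is exactly the classical Fokker–Planck case.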
Lemma 2

Let
\[
A_{m_1,\dots,m_n}:=\lim_{\Delta t\to 0}\frac{1}{\Delta t}\,
E\!\left\{\prod_{i=1}^{n}\left[\Delta x_i\right]^{m_i}\,\Big|\,x(t-\Delta t),X,T\right\},\qquad
\Delta x_i:=x_i(t)-x_i(t-\Delta t),\quad \sum_{i=1}^{n}m_i>0 .
\]
If each of the moments $A_{m_1,0,\dots,0},A_{0,m_2,0,\dots,0},\dots,A_{0,\dots,0,m_n}$ is finite and vanishes for some even $m_i$, then
\[
A_{m_1,\dots,m_n}=0\qquad\text{for all } m_1,\dots,m_n\ \text{such that } \sum_{i=1}^{n}m_i\ge 3,
\]
while the coefficients with $0<\sum_{i=1}^{n}m_i\le 2$ are not necessarily zero.

Proof

We prove the Lemma by induction, starting with $n=3$. By Lemma 1, $A_{m_1,0,0}=A_{0,m_2,0}=A_{0,0,m_3}=0$ for all $m_1,m_2,m_3\ge 3$, while $A_{1,0,0},A_{0,1,0},A_{0,0,1}$ (together with the pure second moments) are not necessarily zero.

For the mixed moments with two non-zero indices, the Schwarz inequality gives
\[
A_{0,m_2,m_3}^{2}\le A_{0,2m_2,0}\,A_{0,0,2m_3},\qquad
A_{m_1,0,m_3}^{2}\le A_{2m_1,0,0}\,A_{0,0,2m_3},\qquad
A_{m_1,m_2,0}^{2}\le A_{2m_1,0,0}\,A_{0,2m_2,0}.
\]
If the two non-zero indices sum to at least 3, one of them is at least 2, so the corresponding right-hand factor has even order at least 4 and vanishes by Lemma 1; hence the mixed moment is zero. Only $A_{0,1,1}$, $A_{1,0,1}$ and $A_{1,1,0}$ are not necessarily zero.

For moments with three non-zero indices, applying the Schwarz inequality twice gives
\[
A_{m_1,m_2,m_3}^{4}\le A_{2m_1,0,0}^{2}\,A_{0,4m_2,0}\,A_{0,0,4m_3},
\]
and since $A_{0,4m_2,0}=A_{0,0,4m_3}=0$ for all $m_2,m_3>0$, we get $A_{m_1,m_2,m_3}=0$ whenever all $m_i>0$ and $\sum_i m_i\ge 3$.

Hence, for $n=3$, the only moments that are not necessarily zero are those with $0<m_1+m_2+m_3\le 2$. In the same way, assuming the result holds for $(n-1)$, it is straightforward to show that it holds for $n$:
\[
A_{m_1,\dots,m_n}=0\ \ \text{for}\ \sum_{i}m_i\ge 3,\qquad
A_{m_1,\dots,m_n}\ \text{not necessarily zero for}\ 0<\sum_{i}m_i\le 2 .\qquad\text{q.e.d.}
\]
Theorem 2

Let the coefficients $A_{m_1,\dots,m_n}$ be defined as above for some set $(X,T)$, and let each of the moments $A_{m_1,0,\dots,0},\dots,A_{0,\dots,0,m_n}$ vanish for some even $m_i$. Then the transition density $p=p(x,t\mid X,T)$ satisfies the Fokker–Planck equation
\[
\frac{\partial p}{\partial t}
=-\sum_{i=1}^{n}\frac{\partial}{\partial x_i}\left[B_i(x,t)\,p\right]
+\frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\frac{\partial^{2}}{\partial x_i\,\partial x_j}\left[C_{ij}(x,t)\,p\right]
\]
with
\[
B_i(x,t):=\lim_{\Delta t\to 0}\frac{1}{\Delta t}\,E\!\left\{\Delta x_i\mid x(t-\Delta t)=x,X,T\right\}=A_{0,\dots,1,\dots,0}\ \text{(1 in position }i\text{)},
\]
\[
C_{ij}(x,t):=\lim_{\Delta t\to 0}\frac{1}{\Delta t}\,E\!\left\{\Delta x_i\,\Delta x_j\mid x(t-\Delta t)=x,X,T\right\}=A_{0,\dots,1,\dots,1,\dots,0}\ \text{(1's in positions }i,j\text{)}.
\]

Proof

Since each marginal moment vanishes for some even $m_i$, Lemma 2 shows that the only coefficients that are not necessarily zero are those of total order one and two. The Generalized Fokker – Planck Equation
\[
\frac{\partial p}{\partial t}
=\sum_{\substack{m_1,\dots,m_n\ge 0\\ \sum_i m_i\neq 0}}\frac{(-1)^{m}}{m_1!\cdots m_n!}\,
\frac{\partial^{m}}{\partial x_1^{m_1}\cdots\partial x_n^{m_n}}\left[A_{m_1\cdots m_n}\,p\right]
\]
therefore reduces to
\[
\frac{\partial p}{\partial t}
=-\sum_{i=1}^{n}\frac{\partial}{\partial x_i}\left[B_i\,p\right]
+\frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\frac{\partial^{2}}{\partial x_i\,\partial x_j}\left[C_{ij}\,p\right].\qquad\text{q.e.d.}
\]
History

The Fokker–Planck Equation was derived by Uhlenbeck and Ornstein for Wiener noise in the paper "On the Theory of Brownian Motion", Phys. Rev. 36, pp. 823–841 (September 1, 1930) (available on the Internet).

George Eugène Uhlenbeck (1900–1988), Leonard Salomon Ornstein (1880–1941), Ming Chen Wang (王明贞) (1906–2010)

An updated version was published by M.C. Wang and Uhlenbeck: "On the Theory of Brownian Motion II", Rev. Modern Physics, 17, Nos. 2 and 3, pp. 323–342 (April–July 1945) (available on the Internet). They assumed that all moments above the second must vanish.

The sufficiency of a finite set of moments to obtain a Fokker–Planck Equation was shown by R.F. Pawula, "Generalizations and Extensions of the Fokker–Planck–Kolmogorov Equations", IEEE Trans. Information Theory, IT-13, No. 1, pp. 33–41 (January 1967).

Karhunen–Loève Theorem

Michel Loève (1907, Jaffa – 1979, Berkeley)

In the
theory of stochastic processes, the Karhunen-Loève theorem (named after Kari Karhunen and Michel Loève) is a representation of a stochastic process as an infinite linear combination of orthogonal functions, analogous to a Fourier series representation of a function on a bounded interval. In contrast to a Fourier series where the coefficients are real
numbers and the expansion basis consists of sinusoidal functions (that is, sine and cosine functions), the coefficients in the Karhunen-Loève theorem are random variables and the expansion basis depends on the process. In fact, the orthogonal basis functions used in this representation are determined by the covariance function of the process. If we
regard a stochastic process as a random function F, that is, one in which the random value is a function on an interval [a, b], then this theorem can be considered as a random orthonormal expansion of F.

Given a stochastic process x(t) defined on an interval [a,b], the Karhunen–Loève Theorem states that
\[
x(t)\approx\hat{x}(t)=\sum_{n=1}^{\infty}b_n\,\varphi_n(t),\qquad a\le t\le b,
\]
where the $\varphi_n(t)$ are orthonormal functions defined by
\[
\int_a^b\varphi_n(t)\,\varphi_m^{*}(t)\,dt=
\begin{cases}1 & n=m\\ 0 & n\neq m\end{cases}
\qquad\text{and}\qquad
\int_a^b R(t_1,t_2)\,\varphi_m(t_2)\,dt_2=\lambda_m\,\varphi_m(t_1),\quad m=1,2,\dots,
\]
with $R(t_1,t_2):=E\{x(t_1)\,x^{*}(t_2)\}$, and the coefficients
\[
b_n=\int_a^b x(t)\,\varphi_n^{*}(t)\,dt,\qquad n=1,2,\dots
\]
are random variables. If $E\{x(t)\}=0$, then
\[
E\{b_n\}=0,\qquad
E\{b_n\,b_m^{*}\}=\begin{cases}\lambda_n & n=m\\ 0 & n\neq m .\end{cases}
\]

Proof:

1. Since $E\{x(t)\}=0$,
\[
E\{b_n\}=E\left\{\int_a^b x(t)\,\varphi_n^{*}(t)\,dt\right\}=\int_a^b E\{x(t)\}\,\varphi_n^{*}(t)\,dt=0,\qquad n=1,2,\dots
\]

2. For all $a\le t_1\le b$,
\[
E\{x(t_1)\,b_m^{*}\}
=\int_a^b E\{x(t_1)\,x^{*}(t_2)\}\,\varphi_m(t_2)\,dt_2
=\int_a^b R(t_1,t_2)\,\varphi_m(t_2)\,dt_2
=\lambda_m\,\varphi_m(t_1).
\]

3. Using step 2 and the orthonormality of the $\varphi_n$,
\[
E\{b_n\,b_m^{*}\}
=\int_a^b E\{x(t_1)\,b_m^{*}\}\,\varphi_n^{*}(t_1)\,dt_1
=\lambda_m\int_a^b\varphi_m(t_1)\,\varphi_n^{*}(t_1)\,dt_1
=\begin{cases}\lambda_n & n=m\\ 0 & n\neq m ,\end{cases}
\]
with $\lambda_m=E\{b_m\,b_m^{*}\}$ real and positive.

4. Convergence of the Karhunen–Loève expansion. With $\hat{x}(t)=\sum_n b_n\varphi_n(t)$,
\[
E\{x(t)\,\hat{x}^{*}(t)\}=\sum_{n=1}^{\infty}E\{x(t)\,b_n^{*}\}\,\varphi_n^{*}(t)=\sum_{n=1}^{\infty}\lambda_n\,|\varphi_n(t)|^{2},
\qquad
E\{\hat{x}(t)\,\hat{x}^{*}(t)\}=\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}E\{b_n\,b_m^{*}\}\,\varphi_n(t)\,\varphi_m^{*}(t)=\sum_{n=1}^{\infty}\lambda_n\,|\varphi_n(t)|^{2},
\]
so that, for $a\le t\le b$,
\[
E\left\{\left|x(t)-\hat{x}(t)\right|^{2}\right\}
=E\{|x(t)|^{2}\}-E\{x(t)\hat{x}^{*}(t)\}-E\{\hat{x}(t)x^{*}(t)\}+E\{|\hat{x}(t)|^{2}\}
=R(t,t)-\sum_{n=1}^{\infty}\lambda_n\,|\varphi_n(t)|^{2}.
\]
Therefore
\[
E\left\{\left|x(t)-\hat{x}(t)\right|^{2}\right\}=0
\iff
R(t,t)=\sum_{n=1}^{\infty}\lambda_n\,|\varphi_n(t)|^{2},\qquad a\le t\le b .
\]
References

Papoulis, A., "Probability, Random Variables, and Stochastic Processes", McGraw-Hill, 1965, Ch. 14 and 15
Sage, A.P. and Melsa, J.L., "Estimation Theory with Applications to Communications and Control", McGraw-Hill, 1971
McGarty, T., "Stochastic Systems and State Estimation", John Wiley & Sons, 1974
Maybeck, P.S., "Stochastic Models, Estimation and Control", Academic Press, Mathematics in Science and Engineering, Vol. 141-2, 1982, Ch. 11 and 12
Jazwinski, A.H., "Stochastic Processes and Filtering Theory", Academic Press, 1970
Stochastic_processes; Stochastic_differential_equations

SOLO
Technion, Israel Institute of Technology: 1964–1968 BSc EE; 1968–1971 MSc EE
Israeli Air Force: 1970–1974
RAFAEL, Israel Armament Development Authority: 1974–2013
Stanford University: 1983–1986 PhD AA

Integration

Riemann Integral

\[
\int_a^b f(t)\,dt=\lim_{n\to\infty}\sum_{i=0}^{n-1}f(t_i)\,(x_{i+1}-x_i),\qquad
a=x_0<t_0<x_1<t_1<x_2<\cdots<x_{n-1}<t_{n-1}<x_n=b,\qquad
x_{i+1}-x_i=\delta<\varepsilon .
\]

In the Riemann integral we divide the interval [a,b] into n non-overlapping intervals that decrease as n increases. The value f(t_i) is computed inside each interval.

The Riemann integral is not always defined. For example, for
\[
f(x)=\begin{cases}2 & x\ \text{rational}\\ 3 & x\ \text{irrational}\end{cases}
\]
the Riemann integral of this function is not defined.

Georg Friedrich Bernhard Riemann (1826–1866)
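To make the limiting sum concrete, here is a small numerical sketch (my own example; the integrand $f(t)=t^2$ and the partition sizes are illustrative assumptions): a Riemann sum with the evaluation points $t_i$ taken inside each subinterval approaches $\int_0^1 t^2\,dt=1/3$ as the partition is refined.

import numpy as np

def riemann_sum(f, a, b, n):
    """Riemann sum with n equal subintervals, f evaluated at a point inside each."""
    x = np.linspace(a, b, n + 1)           # partition a = x_0 < ... < x_n = b
    t = 0.5 * (x[:-1] + x[1:])             # evaluation points t_i inside [x_i, x_{i+1}]
    return float(np.sum(f(t) * np.diff(x)))

for n in (10, 100, 1000):
    print(n, riemann_sum(lambda t: t**2, 0.0, 1.0, n))   # -> 1/3 as n grows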
Riemann–Stieltjes Integral

Thomas Joannes Stieltjes (1856–1894), Bernhard Riemann (1826–1866)

The Stieltjes integral is a generalization of the Riemann integral. Let f(x) and α(x) be real-valued functions defined in the closed interval [a,b]. Take a partition of the interval,
\[
a=x_0<x_1<\cdots<x_n=b,
\]
and consider the Riemann sum
\[
\sum_{i=1}^{n}f(\xi_i)\left[\alpha(x_i)-\alpha(x_{i-1})\right],\qquad \xi_i\in[x_{i-1},x_i].
\]
If the sum tends to a fixed number I when max(x_i − x_{i−1}) → 0, then I is called a Stieltjes integral, or a Riemann–Stieltjes integral. The Stieltjes integral of f with respect to α is denoted
\[
\int f(x)\,d\alpha(x)\qquad\text{or sometimes simply}\qquad\int f\,d\alpha .
\]
If f and α have a common point of discontinuity, then the integral doesn't exist. However, if f is continuous and α' is Riemann integrable over the interval, then
\[
\int f(x)\,d\alpha(x)=\int f(x)\,\alpha'(x)\,dx,\qquad d\alpha(x):=\alpha'(x)\,dx .
\]
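A quick numerical check of the sum definition (my own sketch; the choices $f(x)=x$ and $\alpha(x)=x^2$ are illustrative assumptions): with a smooth integrator the Riemann–Stieltjes sum should agree with $\int f(x)\,\alpha'(x)\,dx$, here $\int_0^1 x\cdot 2x\,dx=2/3$.

import numpy as np

def riemann_stieltjes(f, alpha, a, b, n):
    """Riemann-Stieltjes sum: sum_i f(xi_i) * (alpha(x_i) - alpha(x_{i-1}))."""
    x = np.linspace(a, b, n + 1)
    xi = 0.5 * (x[:-1] + x[1:])                # tags xi_i inside each subinterval
    return float(np.sum(f(xi) * np.diff(alpha(x))))

f = lambda x: x
alpha = lambda x: x**2                         # alpha'(x) = 2x
print(riemann_stieltjes(f, alpha, 0.0, 1.0, 1000))   # ~ 2/3 = int_0^1 x * 2x dx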
Lebesgue Measure

Henri Léon Lebesgue (1875–1941)

The main idea of the Lebesgue integral is the notion of measure.

Definition 1: E(M) ⊂ [a,b] is the region in x ∈ [a,b] of the function f(x) for which f(x) > M.

Definition 2: μ[E(M)], the measure of E(M), is
\[
\mu\left[E(M)\right]=\int_{E(M)}dx\ \ge 0 .
\]
We can see that μ[E(M)] is the sum of the lengths on the x axis for which f(x) > M. From the figure we can see that for the jumps M_1 and M_2, μ[E(M_1)] = μ[E(M_2)] = 0.

Example: let us find the measure of the rational numbers (ratios of integers), which are countable:
\[
r_1=\tfrac12,\ r_2=\tfrac13,\ r_3=\tfrac23,\ r_4=\tfrac14,\ r_5=\tfrac34,\ \dots,\ r_k=\tfrac{m}{n},\ \dots
\]
Since the rational numbers are countable, we can choose ε > 0 as small as we want and construct an open interval of length ε/2 centered around r_1, an interval of length ε/2² centered around r_2, …, an interval of length ε/2^k centered around r_k, … Then
\[
\mu\left[E(\text{rationals})\right]\le\frac{\varepsilon}{2}+\frac{\varepsilon}{2^{2}}+\cdots+\frac{\varepsilon}{2^{k}}+\cdots=\varepsilon,
\qquad
\varepsilon\to 0\ \Rightarrow\ \mu\left[E(\text{rationals})\right]=0 .
\]
Lebesgue Integral

Henri Léon Lebesgue (1875–1941)

Figure: the range of f is partitioned into levels y_0 < y_1 < … < y_n, with measures μ[E(y_k)] of the corresponding level sets on the x axis.

A function y = f(x) is said to be measurable if the set of points x at which f(x) < c is measurable for any and all choices of the constant c.

The Lebesgue integral of a measurable function f(x) is defined by partitioning the range of f rather than its domain:
\[
\int_a^b f(t)\,dt=\lim_{n\to\infty}\sum_{i=0}^{n-1}y_i\,\mu\!\left[E\left(y_i\le f(x)<y_{i+1}\right)\right],
\qquad
y_0=\inf_{a\le x\le b}f(x)<y_1<\cdots<y_n=\sup_{a\le x\le b}f(x).
\]

Example. For
\[
f(x)=\begin{cases}2 & x\ \text{rational}\\ 3 & x\ \text{irrational}\end{cases}
\qquad\text{on }0\le x\le 1,
\]
\[
\int_0^1 f(x)\,dx=\int_{E(\text{rationals})}f(x)\,dx+\int_{E(\text{irrationals})}f(x)\,dx
=2\cdot 0+3\cdot 1=3 .
\]

For a continuous function the Riemann and Lebesgue integrals give the same results.
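The range-partition idea can be mimicked numerically (my own sketch; the integrand and the grid-based stand-in for the measure $\mu$ are illustrative assumptions): approximate $\mu[E(y_i\le f<y_{i+1})]$ by the fraction of a fine $x$-grid whose image falls in each level set, and compare the resulting sum with the Riemann value.

import numpy as np

def lebesgue_sum(f, a, b, n_levels=200, n_grid=100_000):
    """Approximate the Lebesgue integral by partitioning the range of f.

    mu[E(y_i <= f < y_{i+1})] is approximated by the fraction of a fine grid
    on [a, b] whose image falls in the level set, times the interval length.
    """
    x = np.linspace(a, b, n_grid)
    fx = f(x)
    y = np.linspace(fx.min(), fx.max(), n_levels + 1)      # y_0 < y_1 < ... < y_n
    counts, _ = np.histogram(fx, bins=y)                   # grid points per level set
    mu = counts / n_grid * (b - a)                         # approximate measures
    return float(np.sum(y[:-1] * mu))                      # sum_i y_i * mu[E_i]

print(lebesgue_sum(lambda x: x**2, 0.0, 1.0))   # ~ 1/3, matching the Riemann value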
Lebesgue–Stieltjes Integration

Thomas Joannes Stieltjes (1856–1894), Henri Léon Lebesgue (1875–1941), Johann Karl August Radon (1887–1956)

In measure-theoretic analysis and related branches of mathematics, Lebesgue–Stieltjes integration generalizes Riemann–Stieltjes and Lebesgue integration, preserving the many advantages of the latter in a more general measure-theoretic framework. Let α(x) be a monotonically increasing function of x, and define an interval I = (x_1, x_2). Define the nonnegative function
\[
U(I)=\alpha(x_2)-\alpha(x_1).
\]
The Lebesgue integral with respect to the measure constructed using U(I) is called the Lebesgue–Stieltjes integral, or sometimes the Lebesgue–Radon integral.
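A small numerical sketch of the measure defined by $U(I)$ (my own example, not from the original slides; $\alpha$, $f$ and the grid are illustrative assumptions). Since $f$ is continuous here, the Lebesgue–Stieltjes integral coincides with a fine Riemann–Stieltjes sum, so the latter is used as a stand-in; the jump of $\alpha$ at $x=0.5$ places an atom of mass 0.5 there, which contributes $0.5\,f(0.5)$ to the integral.

import numpy as np

# Illustrative assumption: alpha has a continuous part plus a jump of 0.5 at x = 0.5
alpha = lambda x: 0.5 * x + 0.5 * (x >= 0.5)
f = lambda x: x**2

# Sum f(xi) * U(I) over a fine partition, where U(I) = alpha(x_i) - alpha(x_{i-1})
# is the measure assigned to each small interval I.
x = np.linspace(0.0, 1.0, 200_001)
xi = 0.5 * (x[:-1] + x[1:])
approx = np.sum(f(xi) * np.diff(alpha(x)))

exact = 0.5 * (1.0**3 / 3.0) + 0.5 * f(0.5)    # continuous part + atom contribution
print(approx, exact)                            # both ~ 0.29166...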
Darboux Integral

Jean-Gaston Darboux (1842–1917)

Figure: lower (green) and upper (green plus lavender) Darboux sums for four subintervals.

In real analysis, a branch of mathematics, the Darboux integral or Darboux sum is one possible definition of the integral of a function. Darboux integrals are equivalent to Riemann integrals, meaning that a function is Darboux-integrable if and only if it is Riemann-integrable, and the values of the two integrals, if they exist, are equal. Darboux integrals have the advantage of being simpler to define than Riemann integrals. Darboux integrals are named after their discoverer, Gaston Darboux.

Definition

A partition of an interval [a,b] is a finite sequence of values x_i such that
\[
a=x_0<x_1<\cdots<x_n=b .
\]
Each interval [x_{i−1},x_i] is called a subinterval of the partition. Let f:[a,b]→R be a bounded function, and let P = (x_0, x_1, …, x_n) be a partition of [a,b]. Let
\[
M_i:=\sup_{x\in[x_{i-1},x_i]}f(x),\qquad m_i:=\inf_{x\in[x_{i-1},x_i]}f(x).
\]
The upper Darboux sum of f with respect to P is
\[
U_{f,P}:=\sum_{i=1}^{n}(x_i-x_{i-1})\,M_i ,
\]
and the lower Darboux sum of f with respect to P is
\[
L_{f,P}:=\sum_{i=1}^{n}(x_i-x_{i-1})\,m_i .
\]
The upper Darboux integral of f is
\[
U_f=\inf\left\{U_{f,P}:P\ \text{a partition of}\ [a,b]\right\},
\]
and the lower Darboux integral of f is
\[
L_f=\sup\left\{L_{f,P}:P\ \text{a partition of}\ [a,b]\right\}.
\]
If U_f = L_f, then we say that f is Darboux-integrable and set
\[
\int_a^b f(t)\,dt=U_f=L_f ,
\]
the common value of the upper and lower Darboux integrals.
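A short numerical sketch of the two sums (my own example; the integrand $\sin$, the interval $[0,\pi]$ and the partition sizes are illustrative assumptions, and the suprema and infima are approximated by sampling): the lower and upper sums bracket the integral and squeeze together as the partition is refined.

import numpy as np

def darboux_sums(f, a, b, n, oversample=200):
    """Upper and lower Darboux sums on n equal subintervals.

    sup and inf of f on each subinterval are approximated by dense sampling.
    """
    x = np.linspace(a, b, n + 1)
    upper = lower = 0.0
    for x0, x1 in zip(x[:-1], x[1:]):
        s = f(np.linspace(x0, x1, oversample))
        upper += (x1 - x0) * s.max()        # M_i ~ sup f on [x_{i-1}, x_i]
        lower += (x1 - x0) * s.min()        # m_i ~ inf f on [x_{i-1}, x_i]
    return lower, upper

for n in (4, 16, 64):
    lo, up = darboux_sums(np.sin, 0.0, np.pi, n)
    print(n, lo, up)    # both approach 2 = integral of sin on [0, pi]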
Lebesgue Integration

Figure: illustration of a Riemann integral (blue) and a Lebesgue integral (red). Henri Léon Lebesgue (1875–1941).
Figure: a sequence of Riemann sums; the numbers in the upper right are the areas of the grey rectangles, which converge to the integral of the function.
Figure: lower (green) and upper (green plus lavender) Darboux sums for four subintervals. Jean-Gaston Darboux (1842–1917), Bernhard Riemann (1826–1866).

Photographs: Richard Snowden Bucy, Andrew James Viterbi (1935–), Harold J. Kushner (1932–), Moshe Zakai (1926–), José Enrique Moyal (1910–1998), Rudolf E. Kalman (1930–), Maurice Stevenson Bartlett (1910–2002), George Eugène Uhlenbeck (1900–1988), Leonard Salomon Ornstein (1880–1941), Bernard Osgood Koopman (1900–1981), Edwin James George Pitman (1897–1993), Georges Darmois (1888–1960)