
Chapter 4

Introduction to master equations

In this chapter we will briefly present the main results about master equations. They are differential equations that describe the evolution of the probabilities for Markov processes in which systems jump from one state to another in continuous time. In this sense they are the continuous-time version of the recurrence relations for Markov chains mentioned at the end of chapter 1. We will emphasize their use in the case that the number of available states is discrete, as corresponds to applications to chemical reactions, radioactive substances, epidemic spreading and many other cases of interest. We also give a brief account of the generating function technique to solve master equations and of some approximate methods of solution.

4.1 A two-state system with constant rates


Let us consider now a situation in which something can switch between two states that we name "1" and "2". There are many examples of this situation:
- A radioactive atom which is in state 1 has not yet disintegrated, whereas state 2 is the atom after the disintegration.
- A person can be healthy (state 1) or sick (state 2).
- In a chemical reaction, say the combination of sodium and chlorine to produce salt, Na + Cl → NaCl, an atom of sodium can be unbound (state 1) or bound (state 2) to a chlorine atom.
- A bilingual person can be speaking one language (state 1) or another (state 2).
In these and many other examples, one can represent the transitions between the two states as random events that happen at some rates. We denote by $\omega(1\to 2)$ the rate at which one particle¹ jumps from state 1 to state 2, and we assume that those transitions occur uniformly and randomly at the constant rate $\omega(1\to 2)$. This means that there is a probability $\omega(1\to 2)\,dt$ that the particle jumps from 1 to 2 in the time interval $(t, t+dt)$.

¹ For the sake of brevity, we will call "particles" the agents, systems, persons, atoms or whatever is making the jumps between the states.

The inverse process, that of switching from 2 to 1, might or might not occur. For example, a person can catch the flu (going from 1 → 2) but can usually recover from the flu (going from 2 → 1). A radioactive substance, however, cannot switch back from the decayed atom to the original atom. In reversible chemical reactions, the product molecule (NaCl, for instance) can be broken up into its constituent atoms. The bilingual person can switch many times a day between the two languages. When the inverse switching does occur, its rate $\omega(2\to 1)$ might or might not be related to the rate $\omega(1\to 2)$. If a particle in state 1 adopts the form X and a particle in state 2 the form Y, we can indicate this process schematically by:

$$X \to Y. \qquad (4.1)$$

We will sometimes use, for brevity, the notation $\omega_{i\to j} \equiv \omega(i\to j)$ to indicate the rate at which one particle goes from state $i$ to state $j$.

4.1.1 The particle point of view


Starting at some initial time $t_0$, we ask for the probabilities $P_1(t)$ and $P_2(t)$ that one given particle is in state 1 or 2, respectively, at time $t$. Obviously, they must satisfy $P_1(t) + P_2(t) = 1$. We will now derive a differential equation for $P_1(t)$. We do that by relating the probabilities at two close times, $t$ and $t+dt$. In doing so, we are implicitly using the Markov assumption, as the transition rate from one state to the other during the time interval $(t, t+dt)$ does not depend at all on the previous history, but only on the state at time $t$. The probability $P_1(t+dt)$ that the particle is in state 1 at time $t+dt$ has two contributions: that of being in 1 at time $t$ and not having jumped to state 2 during the interval $(t, t+dt)$, and that of being in 2 at time $t$ and having made a jump from 2 to 1 in the interval $(t, t+dt)$. Summing up these cases and using the rules of conditional probability, we have

$$P_1(t+dt) = P_1(t)\,\text{Prob(staying in 1)} + P_2(t)\,\text{Prob(jumping from 2 to 1)}. \qquad (4.2)$$

By the definition of the rate, the probability of jumping from 2 to 1 in the time interval $(t, t+dt)$ is $\omega(2\to 1)dt$, whereas the probability of staying in state 1 is one minus the probability of leaving state 1 in the same time interval, or $1-\omega(1\to 2)dt$. This leads to:

$$P_1(t+dt) = P_1(t)\left[1-\omega(1\to 2)dt\right] + P_2(t)\,\omega(2\to 1)dt + O(dt^2). \qquad (4.3)$$

Terms of order $O(dt^2)$ could arise if the particle is in state 1 at time $t+dt$ because it was in state 1 at time $t$ and made two jumps, one from 1 to 2 and another from 2 to 1, during the time interval $(t, t+dt)$. In this way, it could end up again in state 1, so contributing to Prob(staying in 1). This would happen with probability $\omega(1\to 2)dt\times\omega(2\to 1)dt = O(dt^2)$. Similarly, there could be higher order contributions from particles jumping from 2 to 1 and then back from 1 to 2. All these multiple events happen with vanishing probability in the limit $dt\to 0$. Rearranging, and taking the limit $dt\to 0$, we get the differential equation:

$$\frac{dP_1(t)}{dt} = -\omega(1\to 2)P_1(t) + \omega(2\to 1)P_2(t). \qquad (4.4)$$

A similar reasoning leads to the equivalent equation for $P_2(t)$:

$$\frac{dP_2(t)}{dt} = -\omega(2\to 1)P_2(t) + \omega(1\to 2)P_1(t). \qquad (4.5)$$
Eqs. (4.4)-(4.5) are a very simple example of master equations: equations for the probability that a stochastic particle that can jump between different states is in one of these states at a time $t$. These first-order differential equations have to be solved under the assumption of some initial conditions at an initial time $t_0$: $P_1(t_0)$ and $P_2(t_0)$. Note that

$$\frac{d}{dt}\left[P_1(t) + P_2(t)\right] = 0, \qquad (4.6)$$

implying that $P_1(t) + P_2(t) = 1$ at all times $t$ provided that the initial condition satisfies, as it should, $P_1(t_0) + P_2(t_0) = 1$. One defines the probability current $J(1\to 2)$ from state 1 to 2 as:

$$J(1\to 2) = -\omega(1\to 2)P_1(t) + \omega(2\to 1)P_2(t). \qquad (4.7)$$

It has a negative contribution coming from those jumps leaving state 1 to go to state 2, and a positive contribution from those jumps from state 2 to state 1. A similar definition leads to $J(2\to 1) = -J(1\to 2)$. In terms of these probability currents, the master equations are

$$\frac{dP_1(t)}{dt} = J(1\to 2), \qquad \frac{dP_2(t)}{dt} = J(2\to 1). \qquad (4.8)$$
In the case of constant rates $\omega(1\to 2)$ and $\omega(2\to 1)$ that we are considering throughout this section, it is possible to find the explicit solution of Eqs. (4.4)-(4.5):

$$P_1(t) = P_1(t_0)\,\frac{\omega_{2\to 1} + \omega_{1\to 2}\,e^{-(\omega_{2\to 1}+\omega_{1\to 2})(t-t_0)}}{\omega_{2\to 1}+\omega_{1\to 2}} + P_2(t_0)\,\frac{\omega_{2\to 1}\left(1-e^{-(\omega_{2\to 1}+\omega_{1\to 2})(t-t_0)}\right)}{\omega_{2\to 1}+\omega_{1\to 2}}, \qquad (4.9)$$

$$P_2(t) = P_1(t_0)\,\frac{\omega_{1\to 2}\left(1-e^{-(\omega_{2\to 1}+\omega_{1\to 2})(t-t_0)}\right)}{\omega_{2\to 1}+\omega_{1\to 2}} + P_2(t_0)\,\frac{\omega_{1\to 2} + \omega_{2\to 1}\,e^{-(\omega_{2\to 1}+\omega_{1\to 2})(t-t_0)}}{\omega_{2\to 1}+\omega_{1\to 2}}. \qquad (4.10)$$

From here, and using $P_1(t_0) + P_2(t_0) = 1$, we can obtain the stationary distribution as the limit $t\to\infty$:

$$P_1^{st} = \frac{\omega_{2\to 1}}{\omega_{2\to 1}+\omega_{1\to 2}}, \qquad (4.11)$$

$$P_2^{st} = \frac{\omega_{1\to 2}}{\omega_{2\to 1}+\omega_{1\to 2}}, \qquad (4.12)$$
a particularly simple solution. Note that in this case the stationary distribution satisfies

$$\omega(1\to 2)P_1^{st} = \omega(2\to 1)P_2^{st}, \qquad (4.13)$$

showing that in the case of two states the stationary distribution satisfies the detailed balance condition, equivalent to (1.141) for Markov chains.
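As a quick sanity check, the following sketch integrates Eqs. (4.4)-(4.5) with a forward-Euler scheme and compares the result with the explicit solution (4.9) and the stationary value (4.11); the rate values are illustrative, not taken from the text.

```python
# Forward-Euler integration of the two-state master equation (4.4)-(4.5),
# compared with the explicit solution (4.9) and the stationary value (4.11).
import numpy as np

w12, w21 = 1.5, 0.5          # illustrative rates omega(1->2), omega(2->1)
P1, P2 = 1.0, 0.0            # initial condition P1(t0) = 1, P2(t0) = 0
dt, T = 1e-4, 5.0

for _ in range(int(T / dt)):
    J = -w12 * P1 + w21 * P2          # probability current J(1->2), Eq. (4.7)
    P1, P2 = P1 + J * dt, P2 - J * dt

w = w12 + w21
P1_exact = (w21 + w12 * np.exp(-w * T)) / w
print(P1, P1_exact, w21 / w)          # numerical, exact (4.9), stationary (4.11)
```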
An interesting quantity is the probability $P(i,t|j,t_0)$ that the particle is in state $i$ at time $t$ given that it was in state $j$ at time $t_0$, for $i,j = 1,2$. In general, it is difficult to compute $P(i,t|j,t_0)$ directly from the rules of the process. The reason has already been mentioned before: in a finite interval $(t_0,t)$ there might have been many jumps to intermediate states, and all these have to be included in the calculation of the conditional probability. For instance, to compute $P(1,t|1,t_0)$ one has to include the jumps $1\to 2\to 1$, in which the particle has left state 1 to go to state 2 and returned from it, any number of times, from zero to infinity.
Luckily, there is a simple alternative way to proceed in the case of two states and constant rates. We can obtain $P(1,t|1,t_0)$ from $P_1(t)$ by setting as initial condition $P_1(t_0) = 1$, $P_2(t_0) = 0$, as we know (probability 1) that the particle is in state 1 at time $t_0$. Taking the explicit solution (4.9) with $P_1(t_0) = 1$, $P_2(t_0) = 0$, we obtain

$$P(1,t|1,t_0) = \frac{\omega_{2\to 1} + \omega_{1\to 2}\,e^{-(\omega_{2\to 1}+\omega_{1\to 2})(t-t_0)}}{\omega_{2\to 1}+\omega_{1\to 2}}, \qquad (4.14)$$

and, of course,

$$P(2,t|1,t_0) = 1 - P(1,t|1,t_0) = \frac{\omega_{1\to 2}\left(1-e^{-(\omega_{2\to 1}+\omega_{1\to 2})(t-t_0)}\right)}{\omega_{2\to 1}+\omega_{1\to 2}}, \qquad (4.15)$$

with equivalent expressions for $P(1,t|2,t_0)$ and $P(2,t|2,t_0)$:

$$P(1,t|2,t_0) = \frac{\omega_{2\to 1}\left(1-e^{-(\omega_{2\to 1}+\omega_{1\to 2})(t-t_0)}\right)}{\omega_{2\to 1}+\omega_{1\to 2}}, \qquad (4.16)$$

$$P(2,t|2,t_0) = \frac{\omega_{1\to 2} + \omega_{2\to 1}\,e^{-(\omega_{2\to 1}+\omega_{1\to 2})(t-t_0)}}{\omega_{2\to 1}+\omega_{1\to 2}}. \qquad (4.17)$$
In terms of these conditional probabilities, we can reason that the probability that the particle is in state 1 at time $t$ is the probability that it was in state 1 at time $t_0$ times the probability $P(1,t|1,t_0)$, plus the probability that it was in state 2 at $t_0$ times the probability $P(1,t|2,t_0)$:

$$P_1(t) = P_1(t_0)P(1,t|1,t_0) + P_2(t_0)P(1,t|2,t_0), \qquad (4.18)$$

and similarly:

$$P_2(t) = P_1(t_0)P(2,t|1,t_0) + P_2(t_0)P(2,t|2,t_0). \qquad (4.19)$$

In fact, the conditional probabilities $P(i,t|j,t_0)$ of a Markov process cannot be arbitrary functions. If we consider an intermediate time $t_0 < t_1 < t$, the probability that the particle is in state 1 at time $t$, provided it was in state 2 at time $t_0$, can be computed using the probabilities that the particle was at the intermediate time $t_1$ in one of the two states. Namely,

$$P(1,t|2,t_0) = P(1,t|1,t_1)P(1,t_1|2,t_0) + P(1,t|2,t_1)P(2,t_1|2,t_0), \qquad (4.20)$$

and similar equations hold for $P(1,t|1,t_0)$, $P(2,t|2,t_0)$, $P(2,t|1,t_0)$. The first term on the right-hand side is the probability that the particle went from 2 at time $t_0$ to 1 at time $t$ passing through 1 at time $t_1$, with a similar interpretation for the second term. These relations are called the Chapman-Kolmogorov equations for this process. If we knew the conditional probabilities $P(i,t|j,t_0)$ for arbitrary times $t$, $t_0$, the solution of the two functional equations (4.18)-(4.19) would allow one to compute $P_1(t)$ and $P_2(t)$. However, (1) it is difficult to obtain the conditional probabilities $P(i,t|j,t_0)$ directly from the rules of the process, and (2) it is difficult to solve a functional equation. We have circumvented these problems in our previous treatment by considering a small time increment $t-t_0 = dt$. In this limit of small $dt$ we find two advantages: (1) the conditional probabilities can be obtained from the rates, i.e. $P(1,t+dt|2,t) = \omega(2\to 1)dt + O(dt^2)$, etc., and (2) by expanding $P_1(t+dt) = P_1(t) + \frac{dP_1(t)}{dt}dt + O(dt^2)$ we obtain a differential equation instead of a functional one.
When we discuss in the next chapter the use of numerical algorithms to simulate master equations, we will need the probability density $f^{1st}(j,t|i,t_0)$ of the first jump from $i\to j$. This is defined such that $f^{1st}(j,t|i,t_0)\,dt$ is the probability that the particle is in state $i$ at time $t_0$ and stays there until it jumps to state $j$ in the time interval $(t,t+dt)$, with no intermediate jumps in the whole interval $(t_0,t)$. Let us now derive this function directly for the particular case of the two-state system. We divide the time interval $(t_0,t)$ into subintervals of length $dt$: $(t_0,t_0+dt), (t_0+dt,t_0+2dt), \ldots, (t-dt,t)$. The particle is in, say, state $i=1$ at time $t_0$ and does not make any jump to state 2 until the time interval $(t,t+dt)$. This means that it does not jump during any of the intervals $(t_0+(k-1)dt, t_0+kdt)$ for $k = 1,\ldots,K = (t-t_0)/dt$. The probability that it does not jump during the interval $(t_0+(k-1)dt, t_0+kdt)$ is $1-\omega(1\to 2)dt$. Therefore, the probability that it does not jump during any of these intervals is the product of all these probabilities, $\prod_{k=1}^{K}(1-\omega(1\to 2)dt) = (1-\omega(1\to 2)dt)^K = (1-\omega(1\to 2)dt)^{(t-t_0)/dt}$. In the limit $dt\to 0$ this becomes² equal to $e^{-\omega(1\to 2)(t-t_0)}$. Finally, we have to multiply by the probability that there is a jump from 1 to 2 during the interval $(t,t+dt)$, which is $\omega(1\to 2)dt$. This yields, finally:

$$f^{1st}(2,t|1,t_0) = \omega(1\to 2)\,e^{-\omega(1\to 2)(t-t_0)}. \qquad (4.21)$$

Similarly, we obtain

$$f^{1st}(1,t|2,t_0) = \omega(2\to 1)\,e^{-\omega(2\to 1)(t-t_0)}. \qquad (4.22)$$

² We use $\lim_{x\to 0}(1+x)^{a/x} = e^a$ with $x = -\omega(1\to 2)dt$ and $a = -\omega(1\to 2)(t-t_0)$.
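The exponential form of (4.21) can be recovered numerically. The sketch below, with an illustrative rate, draws the waiting time in state 1 by repeating Bernoulli trials of probability $\omega\,dt$, exactly as in the derivation above, and checks that the mean waiting time approaches $1/\omega(1\to 2)$.

```python
# Sampling the first-jump time out of state 1 as a sequence of Bernoulli
# trials of probability omega*dt; the mean waiting time implied by the
# density (4.21) is 1/omega. The rate value is illustrative.
import random

omega, dt, trials = 2.0, 1e-3, 5000
total = 0.0
for _ in range(trials):
    t = 0.0
    while random.random() >= omega * dt:   # no jump during (t, t+dt)
        t += dt
    total += t

print(total / trials, 1.0 / omega)         # both close to 0.5
```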

4.1.2 The occupation numbers point of view


We have considered so far a particle that can jump between two states and asked for the probability for that particle to be in state 1 or in state 2. We now consider a system composed of $N$ such particles, each one of them making the random jumps from state 1 to 2 or vice versa. We introduce the occupation number $n$ of state 1 and ask now for the probability $p(n;t)$ that at a given time $t$ exactly $n$ of the $N$ particles are in that state. If the $N$ particles are independent of each other and they all start in the same state, such that $P_1(t)$ is the same for all particles, then $p(n;t)$ is given by a binomial distribution:

$$p(n;t) = \binom{N}{n}\,P_1(t)^n\,(1-P_1(t))^{N-n}. \qquad (4.23)$$
We will now find directly a differential equation satisfied by $p(n;t)$. This is not really necessary in this simple case, but the techniques we will develop will be useful in the cases in which the jumps between states do not occur independently for each of the particles, or particles start in different initial conditions.
As before, we will relate the probabilities at times $t$ and $t+dt$. Recall first that the total number of particles is $N$, so if there are $n$ particles in state 1, then the remaining $N-n$ are in state 2. How can we have $n$ particles in state 1 at time $t+dt$? There are three possibilities:
(1) There were $n$ particles in state 1 and $N-n$ in state 2 at time $t$ and no one left the state it was in.
(2) There were $n-1$ particles in state 1 and one of the $N-(n-1)$ particles that were in state 2 jumped from 2 to 1 during that interval.
(3) There were $n+1$ particles in state 1 and one particle jumped from 1 to 2 during that interval.
As we are considering the limit of a small time interval $dt$, other possible processes (for instance, the jump of two or more particles from one state to another) need not be included, as their contribution will be of higher order in $dt$. So, we write:

$$p(n;t+dt) = p(n;t)\times\text{Prob(no particle jumped)} \\ + p(n-1;t)\times\text{Prob(any of the $N-n+1$ particles jumps from $2\to 1$)} \\ + p(n+1;t)\times\text{Prob(any of the $n+1$ particles jumps from $1\to 2$)} + O(dt^2). \qquad (4.24)$$

Let us analyze each term one by one:
(1) The probability that no particle jumped is the product of the probabilities that none of the $n$ particles in state 1 jumped to 2 and none of the $N-n$ particles in state 2 jumped to 1. The probability that one particle does not jump from 1 to 2 is $1-\omega_{1\to 2}dt$; hence, the probability that none of the $n$ particles jumps from 1 to 2 is the product of this probability over all those particles, or $[1-\omega_{1\to 2}dt]^n$. Expanding to first order in $dt$, this is equal to $1-n\omega_{1\to 2}dt + O(dt^2)$. Similarly, the probability that none of the $N-n$ particles in 2 jumps to 1 is $1-(N-n)\omega_{2\to 1}dt + O(dt^2)$. Finally, the probability that no jump occurs whatsoever is the product of these two quantities, or $1-(n\omega_{1\to 2}+(N-n)\omega_{2\to 1})dt + O(dt^2)$.
(2) The probability that one particle jumps from 2 to 1 is $\omega_{2\to 1}dt$; hence the probability that any of the $N-n+1$ particles jumps from $2\to 1$ is the sum of all these probabilities, or $(N-n+1)\omega_{2\to 1}dt$.
(3) The probability that one particle jumps from 1 to 2 is $\omega_{1\to 2}dt$; hence the probability that any of the $n+1$ particles jumps from $1\to 2$ is the sum of all these probabilities, or $(n+1)\omega_{1\to 2}dt$.
Again, in all these expressions there could be higher-order terms corresponding to multiple, intermediate, jumps. Putting all these terms together, we obtain:

$$p(n;t+dt) = p(n;t)\left(1-(n\omega_{1\to 2}+(N-n)\omega_{2\to 1})dt\right) \\ + p(n-1;t)(N-n+1)\omega_{2\to 1}dt \\ + p(n+1;t)(n+1)\omega_{1\to 2}dt + O(dt^2). \qquad (4.25)$$
Rearranging and taking the limit $dt\to 0$ we obtain the differential equations:

$$\frac{\partial p(n;t)}{\partial t} = -(n\omega_{1\to 2}+(N-n)\omega_{2\to 1})\,p(n;t) + (N-n+1)\omega_{2\to 1}\,p(n-1;t) + (n+1)\omega_{1\to 2}\,p(n+1;t). \qquad (4.26)$$

This set of $N+1$ equations for the functions $p(n;t)$, valid for $n = 0,1,\ldots,N$, constitutes the master equation from the occupation number point of view, complementary to the one-particle point of view considered in the previous subsection.
As this set of $N+1$ coupled differential equations is certainly much more complicated than (4.4)-(4.5), and given that we can reconstruct $p(n;t)$ from $P_1(t)$ using (4.23), one can ask what the usefulness of such an approach is. The answer is that (4.23) is only valid in the case of independent particles, an assumption which is not always correct. For example, it is true in the case of radioactive atoms, in which each atom decays independently of the others. But it is not correct in the vast majority of cases of interest. Consider the example in which state 1 is "healthy" and state 2 is "sick" for a contagious disease. The rate $\omega_{1\to 2}$ at which a healthy person gets infected (the rate at which they go from 1 to 2) depends naturally on the number $N-n$ of sick persons, as the more infected people there are around a person, the higher the probability that this person gets the disease, whereas we might consider that the recovery rate $\omega_{2\to 1}$ is independent of how many people are infected. In a chemical reaction, the rate at which Cl and Na react to form NaCl depends on how many free (not combined) atoms exist, and so on. In all these cases, it is not possible to write down closed equations for the probability that one particle is in state 1 independently of how many particles share this state. When this happens, master equations of the form of (4.26) are the necessary starting point.
The general structure of the master equation obtained here is not so different from the one derived in the previous subsection taking the particle point of view. We can introduce global rates to jump between global states with $n$ particles in state 1. In this case we define the rates:

$$\Omega(n\to n+1) = (N-n)\,\omega_{2\to 1}, \qquad (4.27)$$
$$\Omega(n\to n-1) = n\,\omega_{1\to 2}, \qquad (4.28)$$

and write the master equation as:

$$\frac{\partial p(n;t)}{\partial t} = -\left(\Omega(n\to n-1)+\Omega(n\to n+1)\right)p(n;t) + \Omega(n-1\to n)\,p(n-1;t) + \Omega(n+1\to n)\,p(n+1;t), \qquad (4.29)$$

the same kind of balance relation that we can find in (4.4)-(4.5). The first term on the right-hand side is the loss of probability from $n$ to either $n-1$ or $n+1$, whereas the second and third represent the gain of probability from states with $n-1$ or $n+1$ particles, respectively.
Before we generalize these results to particles that can be in more than two states, let us introduce a simplifying notation, the so-called "step" operator $E$. This is a linear operator, defined for any function $f(n)$ of an integer variable $n$ as

$$E[f(n)] = f(n+1), \qquad (4.30)$$

which allows us to define powers of this operator as

$$E^\ell[f(n)] = f(n+\ell), \qquad (4.31)$$

for $\ell\in\mathbb{Z}$ an integer number; in particular, $E^{-1}[f(n)] = f(n-1)$. Using this notation, (4.26) can be written in the compact form

$$\frac{\partial p(n;t)}{\partial t} = (E-1)\left[\Omega(n\to n-1)\,p(n;t)\right] + (E^{-1}-1)\left[\Omega(n\to n+1)\,p(n;t)\right]. \qquad (4.32)$$

Note that the first term, the one with the operator $(E-1)$, represents the loss of probability due to processes in which particles leave state 1 to go to state 2 and hence decrease the population of state 1. On the other hand, the term including the operator $(E^{-1}-1)$ represents those processes that increase the population of 1 by including the contribution of the jumps from 2 to 1. This will be a common feature of more general master equations.
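The following sketch, with illustrative rates, integrates the $N+1$ equations (4.26) numerically and verifies that, for independent particles, the solution stays binomial, Eq. (4.23), with the one-particle probability (4.9).

```python
# Integrating the occupation-number master equation (4.26) and checking
# the binomial solution (4.23) built from the one-particle result (4.9).
import numpy as np
from scipy.stats import binom

N, w12, w21, dt, T = 10, 1.0, 0.4, 1e-4, 2.0   # illustrative values
p = np.zeros(N + 1)
p[N] = 1.0                                     # all N particles start in state 1
n = np.arange(N + 1)

for _ in range(int(T / dt)):
    dp = -(n * w12 + (N - n) * w21) * p        # loss term of (4.26)
    dp[1:] += (N - n[1:] + 1) * w21 * p[:-1]   # gain from n-1 (a 2->1 jump)
    dp[:-1] += (n[:-1] + 1) * w12 * p[1:]      # gain from n+1 (a 1->2 jump)
    p = p + dp * dt

P1 = (w21 + w12 * np.exp(-(w12 + w21) * T)) / (w12 + w21)   # Eq. (4.9)
print(np.abs(p - binom.pmf(n, N, P1)).max())                # ~ 0
```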

4.2 The general case


We now briefly introduce the notation and equations in the case that there are more than two states, as the generalization is straightforward. If one particle can be in many, but countable, discrete states $i = \ldots,-2,-1,0,1,2,\ldots$, we denote by $P_i(t)$ the probability that it is in state $i$ at time $t$. From this state it can jump to state $j$ with rate $\omega(i\to j)$. As it can jump, in principle, to any other state, the probability that it leaves state $i$ in the time interval $(t,t+dt)$ is $\sum_{j\neq i}\omega(i\to j)dt$, and the probability that it remains in state $i$ during the same time interval is $1-\sum_{j\neq i}\omega(i\to j)dt$. At the same time, the probability that there is a jump from any other state $j$ into state $i$ in this time interval is $\sum_{j\neq i}\omega(j\to i)dt$. So we can write for the probability of being in state $i$ at time $t+dt$:

$$P_i(t+dt) = P_i(t)\Big[1-\sum_{j\neq i}\omega(i\to j)dt\Big] + \sum_{j\neq i}P_j(t)\,\omega(j\to i)dt + O(dt^2). \qquad (4.33)$$

Again, after rearranging and taking the limit $dt\to 0$, we obtain the master equation for a discrete process:

$$\frac{dP_i(t)}{dt} = \sum_{j\neq i}\left[-\omega(i\to j)P_i(t) + \omega(j\to i)P_j(t)\right], \qquad (4.34)$$

or, in terms of the currents $J(i\to j) = -\omega(i\to j)P_i(t) + \omega(j\to i)P_j(t)$:

$$\frac{dP_i(t)}{dt} = \sum_{j\neq i}J(i\to j). \qquad (4.35)$$

Although it is not very common in practice, nothing prevents us from considering the more general case where the transition rates depend on time. Hence, a more general master equation is:

$$\frac{dP_i(t)}{dt} = \sum_{j\neq i}\left[-\omega(i\to j;t)P_i(t) + \omega(j\to i;t)P_j(t)\right]. \qquad (4.36)$$

To find the solution of this set of equations we need to specify an initial condition $P_i(t=t_0)$, $\forall i$.
A natural extension is to consider that the states $i$ in which a particle can be form a continuum, instead of a discrete set. We can think, for instance, of the position of a Brownian particle that can jump randomly from point $x$ to point $y$. In this case, we need to define $f(x;t)$ as the probability density of being at location $x$ at time $t$. In other words, $f(x;t)dx$ is the probability that the particle is found in the interval $(x,x+dx)$. Similarly, we need to define the transition rate $w(y\to x)$ from $y$ to $x$ such that $w(y\to x)dx\,dt$ is the probability of jumping to the interval $(x,x+dx)$ during the time interval $(t,t+dt)$ given that it was at the point $y$ at time $t$. With these definitions, it is not difficult to generalize the master equation as:

$$\frac{\partial f(x;t)}{\partial t} = \int dy\,\left[w(y\to x)f(y;t) - w(x\to y)f(x;t)\right], \qquad (4.37)$$

which was already mentioned at the end of chapter 1.
Let us go back to the discrete case. We stress again that the transition rates $\omega_{i\to j} \equiv \omega(i\to j)$ do not need to satisfy any relation amongst themselves³.

³ The elements $\omega(i\to i)$ are not defined, and one usually takes $\omega(i\to i) = 0$, although their precise value is irrelevant in the majority of formulas. Note also that the sum in (4.36) can be replaced by $\sum_{\forall j}$, since the term $j = i$ does not contribute to this sum whatever the value of $\omega_{i\to i}$.
Remember also that $\omega(i\to j)$ are rates, not probabilities; besides having units of $[\text{time}]^{-1}$, they do not need to be bounded to the interval $[0,1]$ (although they are non-negative quantities). It is easy now to verify that, whatever the coefficients $\omega(i\to j)$, the conservation of the total probability follows from (4.36):

$$\frac{d}{dt}\sum_i P_i(t) = 0, \qquad (4.38)$$

and, again, we have the normalization condition $\sum_i P_i(t) = 1$ for all times $t$ provided that $\sum_i P_i(t_0) = 1$.
When the total number of states $N$ is finite, it is possible, and sometimes useful, to define the matrix $W$ as

$$W_{ij} = \omega(j\to i) \;\text{ if } i\neq j, \qquad W_{ii} = -\sum_{j\neq i}\omega(i\to j). \qquad (4.39)$$

$W_i \equiv |W_{ii}|$ is nothing but the total escape rate from the state $i$, the sum of the rates to all possible states. In terms of the coefficients $W_{ij}$ the rate equations admit the matrix form:

$$\frac{dP_i(t)}{dt} = \sum_j W_{ij}P_j(t). \qquad (4.40)$$

The matrix $W$ is such that its columns add to zero, which ensures the conservation of total probability; together with the non-negativity of the off-diagonal elements, this also ensures that the solutions $P_i(t)$ respect the positivity condition $P_i(t)\geq 0$ provided that $P_i(t_0)\geq 0$.
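As an illustration, the sketch below builds the matrix $W$ of (4.39) for a hypothetical three-state system with made-up rates and propagates the probabilities with the formal solution $P(t) = e^{W(t-t_0)}P(t_0)$ of (4.40).

```python
# Matrix form (4.40) of the master equation for a hypothetical 3-state
# system; all rate values are illustrative.
import numpy as np
from scipy.linalg import expm

# rates[i, j] = omega(i -> j), with zero diagonal
rates = np.array([[0.0, 1.0, 0.5],
                  [0.2, 0.0, 0.3],
                  [0.4, 0.1, 0.0]])

# W_ij = omega(j -> i) for i != j, W_ii = -sum_j omega(i -> j), Eq. (4.39)
W = rates.T - np.diag(rates.sum(axis=1))

P0 = np.array([1.0, 0.0, 0.0])
Pt = expm(W * 10.0) @ P0        # long-time propagation
print(Pt, Pt.sum())             # approaches the stationary distribution; sum stays 1
```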
We now determine the pdf $f^{1st}(j,t|i,t_0)$ of the first jump from $i\to j$. The definition is the same as before: $f^{1st}(j,t|i,t_0)\,dt$ is the probability that the particle is in state $i$ at time $t_0$ and stays there until it jumps to state $j$ in the time interval $(t,t+dt)$, with no intermediate jumps to any other state in the interval $(t_0,t)$. We need not repeat the proof we gave in the case of two states. Since the probability that the system jumps from $i$ to any other state in the time interval $(t,t+dt)$ is $W_i\,dt$, the probability that there have been no jumps in the interval $(t_0,t)$ is $e^{-W_i(t-t_0)}$. As the probability that there is a jump from $i$ to $j$ in the time interval $(t,t+dt)$ is $\omega(i\to j)dt$, the required probability density function is:

$$f^{1st}(j,t|i,t_0) = \omega(i\to j)\,e^{-W_i(t-t_0)}. \qquad (4.41)$$

This will be useful when discussing the numerical methods for simulating a master equation.
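A minimal sketch of how (4.41) is used in practice: the waiting time in state $i$ is exponentially distributed with the total escape rate $W_i$, and the destination $j$ is then chosen with probability $\omega(i\to j)/W_i$. The states and rates below are made up for illustration.

```python
# Sampling the next jump according to the first-jump density (4.41):
# an exponential waiting time with rate W_i, then a destination drawn
# with probability omega(i->j)/W_i. States and rates are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
rates = {('a', 'b'): 1.0, ('a', 'c'): 3.0}   # omega(i->j) out of state 'a'

def next_jump(state):
    out = {j: w for (i, j), w in rates.items() if i == state}
    W_i = sum(out.values())                  # total escape rate from state i
    tau = rng.exponential(1.0 / W_i)         # waiting time, from (4.41)
    j = rng.choice(list(out), p=np.array(list(out.values())) / W_i)
    return tau, j

print(next_jump('a'))    # e.g. (0.21..., 'c')
```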
We can also adopt the occupation numbers point of view and ask for the probability $p(n_1,n_2,\ldots;t)$ that there are $n_1$ particles in state 1, $n_2$ in state 2, and so on. Instead of writing general formulas, we will now consider some specific examples.

4.3 Examples
Radioactive decay
Let us take as an example a β-radioactive substance. The events are the emission of electrons by an atom at an individual rate $\omega$. Schematically:

$$X \to Y, \qquad (4.42)$$

where X (state 1) denotes a radioactive atom and Y (state 2) the product of the disintegration. In fact, this example is nothing but a simplification of the two-state case considered in section 4.1, as the reverse transition does not occur. All we need to do then is to take $\omega(1\to 2) = \omega$ and $\omega(2\to 1) = 0$. The initial condition at $t_0 = 0$ is that the atom is in state 1, or $P_1(t_0) = 1$. Then, the probability that this atom has not yet disintegrated at time $t$ is

$$P_1(t) = e^{-\omega t}. \qquad (4.43)$$

If at time $t = 0$ there are $N$ atoms which have not yet disintegrated, then, as atoms disintegrate independently of each other, the probability that at time $t$ there are $n$ atoms not yet disintegrated is given by the binomial distribution (4.23), or

$$p(n;t) = \binom{N}{n}\,e^{-n\omega t}\left(1-e^{-\omega t}\right)^{N-n}. \qquad (4.44)$$

The average value of the binomial distribution is:

$$\langle n(t)\rangle = \sum_n n\,p(n;t) = N e^{-\omega t}, \qquad (4.45)$$

the law of radioactive decay.
The probability $p(n;t)$ satisfies the master equation (4.32) with $\Omega(n\to n+1) = 0$, as follows from $\omega_{2\to 1} = 0$, consistent with the intuitive condition that there is no mechanism by which the number of non-disintegrated atoms can be increased:

$$\frac{\partial p(n;t)}{\partial t} = (E-1)\left[\Omega(n\to n-1)\,p(n;t)\right], \qquad (4.46)$$

with $\Omega(n\to n-1) = n\omega$, and it is a simple exercise to check that (4.44) indeed satisfies this equation.
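A quick Monte Carlo check of the decay law (4.45), with illustrative values of $N$ and $\omega$: each atom draws an independent exponential decay time, as implied by the first-jump density (4.21), and the number of survivors at time $t$ is compared with $Ne^{-\omega t}$.

```python
# Monte Carlo check of the decay law (4.45); N and omega are illustrative.
import numpy as np

rng = np.random.default_rng(1)
N, omega, t = 1000, 0.3, 2.0
decay_times = rng.exponential(1.0 / omega, size=N)  # independent atoms, pdf (4.21)
survivors = np.sum(decay_times > t)
print(survivors, N * np.exp(-omega * t))            # close to N e^{-omega t}
```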

Birth (from a reservoir) and death process

An important class of master equations follows the birth and death scheme. In one of its simplest forms, we assume that particles are created at a constant rate $\omega_A$ out of $N_A$ sources. Once created, these particles can disappear at another constant rate $\omega$. This is schematized as:

$$A \to X \to \emptyset. \qquad (4.47)$$

The sources A are assumed to be endless, so that the rate of production of the X-particles is always constant, independently of how many particles have already been created. The set of $N_A$ sources is the "reservoir" out of which the X-particles are created.
We take the occupation numbers point of view and focus on the probability $p(n;t)$ that there are $n$ X-particles at time $t$. We now have three elementary contributions to $p(n;t+dt)$ according to what happened in the time interval $(t,t+dt)$:
(1) There were $n$ X-particles at time $t$ and none was lost and none was created from the reservoir. The probability that one source A does not create a particle in the interval $(t,t+dt)$ is $1-\omega_A dt$, and the probability that none of the $N_A$ sources creates a particle is $(1-\omega_A dt)^{N_A} = 1-\Omega_A dt + O(dt^2)$, with $\Omega_A \equiv N_A\omega_A$. The probability that one of the X-particles does not disappear is $1-\omega dt$, and the probability that none of the $n$ X-particles disappears is $(1-\omega dt)^n = 1-n\omega dt + O(dt^2)$. Hence, the probability that nothing occurs in the interval $(t,t+dt)$ is the product of these two probabilities.
(2) There were $n+1$ X-particles at time $t$ and one particle disappeared. The probability of one particle disappearing is $\omega dt$, and the probability that any of the $n+1$ particles disappears is the sum of these probabilities, or $(n+1)\omega dt$.
(3) There were $n-1$ X-particles and one was created from the reservoir. As each one of the $N_A$ sources has a probability $\omega_A dt$ of creating a particle, the total probability for this event is $N_A\omega_A dt = \Omega_A dt$.
Combining the probabilities of these events we get:

$$p(n;t+dt) = p(n;t)[1-\omega n\,dt][1-\Omega_A dt] + p(n+1;t)(n+1)\omega dt + p(n-1;t)\,\Omega_A dt + O(dt^2). \qquad (4.48)$$

Rearranging, and taking the limit $dt\to 0$, we get the master equation:

$$\frac{\partial p(n;t)}{\partial t} = -(n\omega+\Omega_A)\,p(n;t) + (n+1)\omega\,p(n+1;t) + \Omega_A\,p(n-1;t), \qquad (4.49)$$

or, in terms of the step operator:

$$\frac{\partial p(n;t)}{\partial t} = (E-1)\left[\Omega(n\to n-1)\,p(n;t)\right] + (E^{-1}-1)\left[\Omega(n\to n+1)\,p(n;t)\right], \qquad (4.50)$$

with

$$\Omega(n\to n-1) = n\omega, \qquad (4.51)$$
$$\Omega(n\to n+1) = \Omega_A = N_A\omega_A. \qquad (4.52)$$

Again, the term $E-1$ corresponds to the destruction of X-particles, whereas the term $E^{-1}-1$ corresponds to their creation. This time, however, there is no a priori upper limit for the number of X-particles and therefore $n = 0,1,\ldots,\infty$. The master equation consists of infinitely many coupled equations which have to be solved using the initial condition $p(n;0) = \delta_{n,N_0}$, $n = 0,1,\ldots$, where $N_0$ is the number of X-particles present at the initial time $t = 0$. We will find its solution in a later section.

A chemical reaction
We consider the simple chemical reaction in which an atom A and an atom B combine to give the molecule AB. The reaction is reversible, so the molecule AB can break up into the constituent atoms:

$$A + B \rightleftharpoons AB. \qquad (4.53)$$

For simplicity, let us consider a situation in which initially there are the same number $N$ of A-atoms and B-atoms and no AB molecules. An A-atom can be unbound (state 1) or bound to B to form AB (state 2). We denote by $n(t)$ the number of A-atoms in state 1 at time $t$. As one A-atom combines with one B-atom, the number of B-atoms at time $t$ is also $n(t)$, and the number of AB-molecules is $N-n(t)$. The combination of an A- and a B-atom to form a molecule is a complicated process, for which we adopt a probabilistic point of view. We assume that the rate at which an individual A-atom combines with a B-atom to form the product AB is $\omega_{1\to 2}$, while the reverse reaction $AB\to A+B$ happens at a rate $\omega_{2\to 1}$.
We could focus on the particle point of view and write down equations for the probability that one A-atom is in state 1 (unbound) or in state 2 (bound). These equations for $P_1(t)$ and $P_2(t)$ would look similar to (4.4)-(4.5). A new ingredient appears in this case. It is not reasonable to assume that the rate $\omega_{1\to 2}$ is a constant independent of how many B-atoms there are. For A to react with B, they first have to meet. We can imagine that the reaction takes place in a container of volume $V$. If the atoms move freely throughout the whole volume, we can assume that the reaction rate $\omega_{1\to 2}$ is proportional to the particle density $n/V$ of B-atoms. We insist that this is an assumption we take to describe the chemical reaction. It will in general not be fully correct, as it assumes that the density of B-atoms in the neighborhood of an A-atom is homogeneous. If we accept this assumption, though, we are led to the dependence $\omega_{1\to 2}[n] = k_{12}\,n V^{-1}$, where $k_{12}$ is now a constant, independent of $n$ and $V$. On the other hand, it seems reasonable to assume that the breaking up of an AB-molecule into its constituent atoms, being an event involving only one molecule, does not depend on the density of atoms or molecules, so the rate $\omega_{2\to 1}$ is independent of $n$.
To cope with this difficulty, we adopt the occupation numbers point of view and consider the individual events that increase or decrease the number $n$ of A-atoms, in order to analyze the change in probability during the time interval $(t,t+dt)$. The terms that contribute to $p(n;t+dt)$, the probability that at time $t+dt$ there are $n$ A-atoms, are as follows:
(1) There are $n$ A-atoms at time $t$ and none of them reacts with a B-atom and none of the $N-n$ molecules breaks up. Probability: $(1-\omega_{1\to 2}[n]dt)^n(1-\omega_{2\to 1}dt)^{N-n} = 1-(n\omega_{1\to 2}[n]+(N-n)\omega_{2\to 1})dt + O(dt^2)$.
(2) There are $n+1$ A-atoms at time $t$ and one of them reacts with any of the $n+1$ B-atoms. The probability that one A-atom reacts is $\omega_{1\to 2}[n+1]dt$. The probability that any of the $n+1$ A-atoms reacts is $(n+1)\omega_{1\to 2}[n+1]dt$.
(3) There are $n-1$ A-atoms at time $t$ and one of the $N-(n-1)$ AB-molecules breaks up into its constituent atoms. This event happens with probability $(N-n+1)\omega_{2\to 1}dt$.
Combining all these events, we get:

$$p(n;t+dt) = p(n;t)\times\left(1-(n\omega_{1\to 2}[n]+(N-n)\omega_{2\to 1})dt\right) \\ + p(n+1;t)\times(n+1)\omega_{1\to 2}[n+1]dt \\ + p(n-1;t)\times(N-n+1)\omega_{2\to 1}dt. \qquad (4.54)$$

Rearranging, taking the limit $dt\to 0$ and introducing the step operators, we arrive at:

$$\frac{\partial p(n;t)}{\partial t} = (E-1)\left[\Omega(n\to n-1)\,p(n;t)\right] + (E^{-1}-1)\left[\Omega(n\to n+1)\,p(n;t)\right], \qquad (4.55)$$

with

$$\Omega(n\to n-1) = n\,\omega_{1\to 2}[n] = k_{12}V^{-1}n^2, \qquad (4.56)$$
$$\Omega(n\to n+1) = (N-n)\,\omega_{2\to 1}. \qquad (4.57)$$

The fact that the rate $\Omega(n\to n-1)\propto n^2$ is an example of the law of mass action. The process $n\to n-1$ requires that an A-atom meets a B-atom, an event that is postulated to happen with a probability proportional to the product $n_A n_B$ of the number $n_A$ of A-atoms and $n_B$ of B-atoms, which (in our simplified treatment) are exactly the same, $n_A = n_B = n$. More generally, if we consider the chemical reaction

$$aA + bB \rightleftharpoons cC + dD, \qquad (4.58)$$

where $a$ A-molecules and $b$ B-molecules react to form $c$ C-molecules and $d$ D-molecules, the global rates are assumed to be:

$$\Omega(n_A\to n_A-a,\; n_B\to n_B-b,\; n_C\to n_C+c,\; n_D\to n_D+d) = k\,n_A^a n_B^b, \qquad (4.59)$$
$$\Omega(n_A\to n_A+a,\; n_B\to n_B+b,\; n_C\to n_C-c,\; n_D\to n_D-d) = k'\,n_C^c n_D^d, \qquad (4.60)$$

with $k$ and $k'$ constants.

Self-annihilation
In this case, we consider that the X-particles are created out of an endless reservoir at a rate $\omega_A$, but disappear in pairs. Schematically,

$$A \to X, \qquad X + X \to \emptyset. \qquad (4.61)$$

We denote by $\omega$ the individual rate at which one particle encounters another, leading to the annihilation of both. This example combines ingredients from the two previous examples. While the creation part from the reservoir is identical to the one analyzed earlier in this section, the same arguments used in the chemical reaction case lead us to assume that the individual reaction rate depends on the density of available particles with which a given particle can interact, or $\omega[n] = kV^{-1}(n-1)$, for a system containing $n$ particles.
To find the master equation for $p(n;t)$, the probability of having $n$ X-particles at time $t$, we focus on the elementary processes that can occur between $t$ and $t+dt$:
(1) There were $n$ X-particles at time $t$ and nothing happened during the time interval $(t,t+dt)$. This includes that no particle was created, with probability $1-\Omega_A dt + O(dt^2)$, and that no X-particles were annihilated, with probability $1-n\omega[n]dt + O(dt^2)$, for a total probability $1-(n\omega[n]+\Omega_A)dt + O(dt^2)$.
(2) There were $n+2$ particles at time $t$ and two of them were annihilated, with probability $(n+2)\omega[n+2]dt$, corresponding to the product of the number of particles $n+2$ and the rate at which any of them can be annihilated, $\omega[n+2]$.
(3) There were $n-1$ X-particles and one was created from the reservoir, with probability $\Omega_A dt$.
Combining all these terms, we can write:

$$p(n;t+dt) = p(n;t)[1-(n\omega[n]+\Omega_A)dt] \quad\text{case (1)} \\ + p(n+2;t)(n+2)\omega[n+2]dt \quad\text{case (2)} \\ + p(n-1;t)\,\Omega_A dt + O(dt^2) \quad\text{case (3)} \qquad (4.62)$$

Rearranging, taking the limit $dt\to 0$ and introducing the step operators, this can be written as:

$$\frac{\partial p(n;t)}{\partial t} = (E^2-1)\left[\Omega(n\to n-2)\,p(n;t)\right] + (E^{-1}-1)\left[\Omega(n\to n+1)\,p(n;t)\right], \qquad (4.63)$$

with

$$\Omega(n\to n-2) = n\,\omega[n] = kV^{-1}n(n-1), \qquad (4.64)$$
$$\Omega(n\to n+1) = \Omega_A, \qquad (4.65)$$

where the term $E^2-1$ represents the annihilation of the two particles, and $E^{-1}-1$ the creation of one particle.
If we now added to the pair annihilation the death of a single particle at a rate $\omega'$, i.e. the scheme:

$$A \to X, \qquad X + X \to \emptyset, \qquad X \to \emptyset, \qquad (4.66)$$

we could do all the detailed algebra again, but the result

$$\frac{\partial p(n;t)}{\partial t} = (E^2-1)\left[\Omega(n\to n-2)\,p(n;t)\right] + (E^{-1}-1)\left[\Omega(n\to n+1)\,p(n;t)\right] + (E-1)\left[\Omega(n\to n-1)\,p(n;t)\right], \qquad (4.67)$$

with $\Omega(n\to n-1) = n\omega'$, could have been guessed given the interpretation of the terms $E^\ell-1$. The reader can fill in the gaps of the proof if necessary.

The prey-predator Lotka-Volterra model

Let us now see an example of the use of master equations in the context of population dynamics. We consider the so-called predator-prey Lotka-Volterra model. In this model a predator (e.g. a fox) survives and can reproduce thanks to the eating of a prey (e.g. a rabbit) which, in turn, survives and reproduces by eating a natural resource (e.g. grass). There are many simplifications assumed in this model and it can only be considered a sketch of the real dynamics. But this simple modeling is very popular, as it can grasp the essential (but not all) details of the process.
So, we consider an animal species X (the prey) which reproduces by eating an unlimited natural resource, A. The schematic reaction is as follows⁴:

$$A + X \to 2X, \qquad (4.68)$$

with some rate $\omega_0$. This means that there is a probability $\omega_0 dt$ that a rabbit gives rise to another rabbit in the time interval $(t,t+dt)$. As we have assumed that the resources (grass) are unlimited, the rate $\omega_0$ is considered to be a constant. The population of rabbits at time $t$ is $n_1(t)$. At the same time, the species Y (the predator, the foxes) reproduces by eating species X. Again schematically:

$$X + Y \to 2Y, \qquad (4.69)$$

with an individual rate $\omega_1$. This means that there is a probability $\omega_1 dt$ that a fox eats a rabbit and reproduces during the time interval $(t,t+dt)$. As this bears similarities with previous examples, it seems reasonable to assume that this individual rate depends on the density of rabbits present at time $t$, so we take the dependence⁵ $\omega_1[n_1] = k_1 V^{-1} n_1$. Finally, the species Y can die of natural causes at a rate $\omega_2$:

$$Y \to \emptyset. \qquad (4.70)$$

It is possible to add other processes, such as the spontaneous death of the prey X, but let us not complicate the model and study the consequences of these simple steps.

⁴ Note that this assumes some sort of "asexual" reproduction, as it is not necessary that two rabbits meet in order to have offspring.
⁵ As foxes and rabbits live in a two-dimensional space, $V$ has to be considered as a measure of the area, rather than the volume, where they live.

We denote by $p(n_1,n_2;t)$ the probability that there are $n_1$ animals of species X and $n_2$ animals of species Y at time $t$. The master equation can be obtained by enumerating the elementary processes occurring in the time interval $(t,t+dt)$ that might contribute to $p(n_1,n_2;t+dt)$, namely:
(i) The population was $(n_1,n_2)$ at time $t$ and no rabbit reproduced, no rabbit was eaten and no fox died.
(ii) The population was $(n_1-1,n_2)$ at time $t$ and a rabbit reproduced.
(iii) The population was $(n_1,n_2+1)$ at time $t$ and a fox died.
(iv) The population was $(n_1+1,n_2-1)$ at time $t$ and a fox ate a rabbit and reproduced.

The contributions to the probability are, respectively:

$$p(n_1,n_2;t+dt) = p(n_1,n_2;t)[1-n_1\omega_0 dt][1-n_2\omega_1[n_1]dt][1-n_2\omega_2 dt] \\ + p(n_1-1,n_2;t)(n_1-1)\omega_0 dt \\ + p(n_1,n_2+1;t)(n_2+1)\omega_2 dt \\ + p(n_1+1,n_2-1;t)(n_2-1)\omega_1[n_1+1]dt + O(dt^2). \qquad (4.71)$$

Rearranging and taking the limit $dt\to 0$ we obtain the desired master equation:

$$\frac{\partial p(n_1,n_2;t)}{\partial t} = -(n_1\omega_0 + n_2\omega_1[n_1] + n_2\omega_2)\,p(n_1,n_2;t) \\ + (n_1-1)\omega_0\,p(n_1-1,n_2;t) \\ + (n_2+1)\omega_2\,p(n_1,n_2+1;t) \\ + (n_2-1)\omega_1[n_1+1]\,p(n_1+1,n_2-1;t). \qquad (4.72)$$

It can also be written using the step operators $E_1$ and $E_2$, acting on the variables $n_1$ and $n_2$ respectively:

$$\frac{\partial p(n_1,n_2;t)}{\partial t} = (E_1^{-1}-1)\left[\Omega\left((n_1,n_2)\to(n_1+1,n_2)\right)p(n_1,n_2;t)\right] \\ + (E_2-1)\left[\Omega\left((n_1,n_2)\to(n_1,n_2-1)\right)p(n_1,n_2;t)\right] \\ + (E_1E_2^{-1}-1)\left[\Omega\left((n_1,n_2)\to(n_1-1,n_2+1)\right)p(n_1,n_2;t)\right], \qquad (4.73)$$

with:

$$\Omega\left((n_1,n_2)\to(n_1+1,n_2)\right) = n_1\omega_0, \qquad (4.74)$$
$$\Omega\left((n_1,n_2)\to(n_1,n_2-1)\right) = n_2\omega_2, \qquad (4.75)$$
$$\Omega\left((n_1,n_2)\to(n_1-1,n_2+1)\right) = n_2\omega_1[n_1] = k_1V^{-1}n_1n_2. \qquad (4.76)$$

The term $(E_1^{-1}-1)$ represents the creation of an X-particle, $(E_2-1)$ the annihilation of a Y-particle, and $(E_1E_2^{-1}-1)$ the simultaneous annihilation of an X-particle and creation of a Y-particle.
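A sketch of a stochastic simulation of the scheme (4.68)-(4.70) using the global rates (4.74)-(4.76); the algorithm anticipates the first-jump sampling of (4.41), and all parameter values are illustrative.

```python
# Gillespie-type realization of the Lotka-Volterra scheme with the
# global rates (4.74)-(4.76); parameters are illustrative.
import numpy as np

rng = np.random.default_rng(2)
w0, w2, k1, V = 1.0, 1.0, 1.0, 100.0   # rates and "volume" (area)
n1, n2, t = 120, 80, 0.0               # initial rabbits, foxes

while t < 10.0 and n1 > 0 and n2 > 0:
    R = np.array([n1 * w0,             # (n1,n2) -> (n1+1,n2), Eq. (4.74)
                  n2 * w2,             # (n1,n2) -> (n1,n2-1), Eq. (4.75)
                  k1 * n1 * n2 / V])   # (n1,n2) -> (n1-1,n2+1), Eq. (4.76)
    Rtot = R.sum()
    t += rng.exponential(1.0 / Rtot)   # waiting time until the next event
    event = rng.choice(3, p=R / Rtot)  # which event occurs
    if event == 0:
        n1 += 1
    elif event == 1:
        n2 -= 1
    else:
        n1 -= 1
        n2 += 1

print(t, n1, n2)   # populations keep cycling around (100, 100) here
```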
It should be clear by now what the general structure of a master equation is and, after some practice, the reader should be able to write down the final expression in terms of the step operators without going through all the detailed steps of the proof.
Now that we have learnt how to derive master equations, we have to solve them. This is not an easy task, and that is the main reason why numerical methods to simulate a master equation are so widespread. We will review them in the next chapter. Before that, however, we will see a powerful technique and some approximate methods of solution.

4.4 The generating function method for solving master equations
We now introduce an analytical method to solve master equations. We will start with the example of the two-state system and consider the set of equations (4.32). We define the generating function $G(s,t)$ by means of:

$$G(s,t) = \sum_{n=-\infty}^{\infty} s^n p(n;t). \qquad (4.77)$$

Note that the sum runs over all values of $n$. This is a technical point which is not always necessary, but it simplifies the derivation of the solution. We note that, although equations (4.32) have been derived for a situation in which the variable $n$ can only take values between 0 and $N$, we can consider them valid for all integer values of $n$. All we need to do is to set the initial condition such that $p(n;0) = 0$ for $n\notin[0,N]$. One can check that then $\frac{\partial p(n;t)}{\partial t} = 0$ for $n\notin[0,N]$, and hence $p(n;t) = 0$ for $n\notin[0,N]$ at all times.
If we know the generating function, we can expand it in a power series and, using (4.77), identify the coefficients of the series expansion with the probabilities $p(n;t)$. Note the property:

$$G(1,t) = \sum_{n=-\infty}^{\infty} p(n;t) = 1, \qquad (4.78)$$

coming from the normalization condition and valid at all times $t$. Moreover, the knowledge of the generating function allows the easy determination of the moments of the random variable $n$. For the first moment, we use the trick $n = \frac{\partial s^n}{\partial s}\Big|_{s=1}$:

$$\langle n(t)\rangle = \sum_n n\,p(n;t) = \sum_n \frac{\partial s^n}{\partial s}\Big|_{s=1} p(n;t) = \frac{\partial\left(\sum_n s^n p(n;t)\right)}{\partial s}\Bigg|_{s=1} = \frac{\partial G(s,t)}{\partial s}\Bigg|_{s=1}. \qquad (4.79)$$

For the second moment, we use $n^2 = \frac{\partial}{\partial s}\left(s\frac{\partial s^n}{\partial s}\right)\Big|_{s=1}$ to obtain:

$$\langle n^2(t)\rangle = \sum_n n^2 p(n;t) = \frac{\partial}{\partial s}\left(s\frac{\partial G(s,t)}{\partial s}\right)\Bigg|_{s=1}. \qquad (4.80)$$

We next find a differential equation for $G(s,t)$. We begin by taking the time derivative of (4.77) and replacing (4.32) with the rates (4.27)-(4.28):

$$\frac{\partial G(s,t)}{\partial t} = \sum_{n=-\infty}^{\infty} s^n \frac{\partial p(n;t)}{\partial t} = \sum_{n=-\infty}^{\infty} s^n\left[(E-1)[n\omega_{1\to 2}\,p(n;t)] + (E^{-1}-1)[(N-n)\omega_{2\to 1}\,p(n;t)]\right]. \qquad (4.81)$$

Now we use the following result, valid for $k\in\mathbb{Z}$:

$$\sum_n s^n(E^k-1)[f(n)] = \sum_n s^n\left(f(n+k)-f(n)\right) = \sum_n s^n f(n+k) - \sum_n s^n f(n) \\ = s^{-k}\sum_n s^{n+k}f(n+k) - \sum_n s^n f(n) = (s^{-k}-1)\sum_n s^n f(n), \qquad (4.82)$$

where in the last step we have used the change of variables $n+k\to n$. We also use the result:

$$\sum_n s^n\,n\,p(n;t) = s\sum_n \frac{\partial s^n}{\partial s}\,p(n;t) = s\,\frac{\partial\left(\sum_n s^n p(n;t)\right)}{\partial s} = s\,\frac{\partial G(s,t)}{\partial s}. \qquad (4.83)$$

Substitution of (4.82)-(4.83) in (4.81) leads, after some simple algebra, to

$$\frac{\partial G(s,t)}{\partial t} = (1-s)\left[(\omega_{1\to 2}+\omega_{2\to 1}s)\frac{\partial G(s,t)}{\partial s} - \omega_{2\to 1}N\,G(s,t)\right]. \qquad (4.84)$$

This is a partial differential equation for the generating function $G(s,t)$. We have hence reduced the problem of solving the set of $N+1$ ordinary differential equations (4.32) to that of solving one partial differential equation subject to the initial condition $G(s,t=0) = G_0(s) \equiv \sum_n s^n p(n;0)$. The solution of (4.84) can be found by the method of characteristics, and the reader is referred at this point to the wide bibliography in this vast area (see exercise 4).
We take a simpler and more limited approach here. Imagine we want to study just the stationary distribution, in which the probabilities $p_{st}(n)$ are no longer a function of time and the generating function is $G_{st}(s) = \sum_n s^n p_{st}(n) = \lim_{t\to\infty}G(s,t)$. This function, being independent of time, satisfies (4.84) with the time derivative equal to zero or, after simplifying the $(1-s)$ factor:

$$0 = (\omega_{1\to 2}+\omega_{2\to 1}s)\frac{dG_{st}(s)}{ds} - \omega_{2\to 1}N\,G_{st}(s). \qquad (4.85)$$

This is an ordinary differential equation whose solution, under the normalization condition $G_{st}(1) = 1$ following from (4.78), is

$$G_{st}(s) = \left[\frac{\omega_{1\to 2}+\omega_{2\to 1}s}{\omega_{1\to 2}+\omega_{2\to 1}}\right]^N. \qquad (4.86)$$
All that remains is to expand this in powers of s using Newton’s binomial theorem:
N ✓ ◆✓
X ◆n ✓ ◆N n
N !2!1 s !1!2
Gst (s) = , (4.87)
n=0
n !1!2 + !2!1 !1!2 + !2!1
86 Introduction to master equations

which gives
✓ ◆✓ ◆n ✓ ◆N n
N !2!1 !1!2
pst (n) = (4.88)
n !1!2 + !2!1 !1!2 + !2!1
!2!1
a binomial distribution of parameter P1st = , in full agreement with (4.23)
!1!2 + !2!1
in the steady state using (4.11).
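The expansion (4.87) can also be checked symbolically. The sketch below expands (4.86) for illustrative rates and a small $N$ and compares the coefficients with the binomial weights (4.88).

```python
# Expanding the stationary generating function (4.86) and recovering the
# binomial weights (4.88); rates and N are small illustrative values.
import sympy as sp

s = sp.symbols('s')
w12, w21, N = sp.Rational(3, 2), sp.Rational(1, 2), 4
Gst = ((w12 + w21 * s) / (w12 + w21)) ** N                 # Eq. (4.86)
coeffs = sp.Poly(sp.expand(Gst), s).all_coeffs()[::-1]     # p_st(0), ..., p_st(N)
P1st = w21 / (w12 + w21)                                   # Eq. (4.11)
target = [sp.binomial(N, k) * P1st**k * (1 - P1st)**(N - k) for k in range(N + 1)]
print([sp.simplify(a - b) for a, b in zip(coeffs, target)])   # all zeros
```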
Let us apply, as a last example, the technique of the generating function to the birth and death process of section 4.3. The resulting partial differential equation is:

$$\frac{\partial G(s,t)}{\partial t} = (s-1)\left[\Omega_A\,G(s,t) - \omega\frac{\partial G(s,t)}{\partial s}\right]. \qquad (4.89)$$

It is possible to find the solution of this equation using the method of characteristics. For simplicity, let us focus again on the stationary state, in which the time derivative is equal to zero:

$$0 = \Omega_A\,G_{st}(s) - \omega\frac{dG_{st}(s)}{ds}. \qquad (4.90)$$

The solution with the normalization condition $G_{st}(1) = 1$ is

$$G_{st}(s) = e^{\frac{\Omega_A}{\omega}(s-1)}. \qquad (4.91)$$

If we expand it in a power series of its argument:

$$G_{st}(s) = e^{-\Omega_A/\omega}\sum_{n=0}^{\infty}\left(\frac{\Omega_A}{\omega}\right)^n\frac{s^n}{n!}, \qquad (4.92)$$

hence

$$p_{st}(n) = \frac{e^{-\Omega_A/\omega}}{n!}\left(\frac{\Omega_A}{\omega}\right)^n, \qquad (4.93)$$

a Poisson distribution of parameter $\lambda = \dfrac{\Omega_A}{\omega}$.
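A numerical check of this result: the sketch below runs a long, time-weighted stochastic simulation of the scheme (4.47) with illustrative rates and verifies that the stationary mean and variance both approach $\Omega_A/\omega$, as expected for the Poisson distribution (4.93).

```python
# Long time-weighted run of the birth-and-death scheme (4.47); the lumped
# creation rate Omega_A and the death rate omega are illustrative.
import numpy as np

rng = np.random.default_rng(3)
Omega_A, omega = 5.0, 1.0
n, t, acc_n, acc_n2 = 0, 0.0, 0.0, 0.0

while t < 5000.0:
    Rtot = Omega_A + n * omega           # total rate of creation + death
    tau = rng.exponential(1.0 / Rtot)    # time spent at the current n
    acc_n += n * tau
    acc_n2 += n * n * tau
    t += tau
    if rng.random() < Omega_A / Rtot:
        n += 1                           # creation from the reservoir
    else:
        n -= 1                           # death of one X-particle

mean, var = acc_n / t, acc_n2 / t - (acc_n / t) ** 2
print(mean, var)                         # both close to Omega_A/omega = 5
```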

4.5 The mean-field theory


Sometimes (more often than desired) it is not possible to solve the master equation using the generating function technique or any other. However, it is possible to obtain very easily approximate equations for the first moments of the probability $p(n;t)$. On many occasions, the knowledge of the first moment $\langle n(t)\rangle$ gives important information about the underlying stochastic process.
Let us then consider the general master equation:

$$\frac{\partial p(n;t)}{\partial t} = \sum_\ell (E^\ell-1)\left[\Omega_{n\to n-\ell}\,p(n;t)\right]. \qquad (4.94)$$

Multiplying by $n$ and summing over $n$, one gets after some algebra the (exact) equation for the first moment:

$$\frac{d\langle n(t)\rangle}{dt} = -\sum_\ell \ell\,\langle\Omega_{n\to n-\ell}\rangle. \qquad (4.95)$$

Let us apply this result to the radioactive decay discussed in 4.3. The only contribution to the master equation comes from the term $\ell = 1$ with $\Omega(n\to n-1) = \omega n$; therefore:

$$\frac{d\langle n(t)\rangle}{dt} = -\langle\omega n\rangle = -\omega\langle n\rangle, \qquad (4.96)$$

a closed equation whose solution $\langle n(t)\rangle = Ne^{-\omega t}$ agrees, of course, with (4.45), obtained by other methods.
Other times we are not so lucky. If we take the master equation of the self-annihilation process of 4.3, as given in (4.63), the contributions come from the terms $\ell = 2$ and $\ell = -1$, with the rates (4.64)-(4.65). This gives:

$$\frac{d\langle n(t)\rangle}{dt} = -2\langle kV^{-1}n(n-1)\rangle + \langle\Omega_A\rangle \qquad (4.97)$$
$$= -2kV^{-1}\langle n^2\rangle + 2kV^{-1}\langle n\rangle + \Omega_A. \qquad (4.98)$$

But, alas!, this is not a closed equation, as the evolution of the first moment $\langle n\rangle$ depends on the second moment $\langle n^2\rangle$. It is possible to derive an equation for the evolution of the second moment using the general result

$$\frac{d\langle n^2(t)\rangle}{dt} = \sum_\ell \langle\ell(\ell-2n)\,\Omega_{n\to n-\ell}\rangle. \qquad (4.99)$$

However, when we replace the rates (4.64)-(4.65), the evolution of the second moment depends on the third moment, and so on, in an infinite hierarchy of equations.
The simplest scheme to break the hierarchy is to assume that the stochastic process is such that the fluctuations of the $n$ variable can be neglected, or $\sigma^2[n] = \langle n^2\rangle - \langle n\rangle^2 \approx 0$, which implies $\langle n^2\rangle \approx \langle n\rangle^2$. This is the mean-field approximation⁶. Replacing in (4.98) we obtain the closed equation:

$$\frac{d\langle n(t)\rangle}{dt} = -2kV^{-1}\langle n\rangle^2 + 2kV^{-1}\langle n\rangle + \Omega_A. \qquad (4.100)$$

It is convenient to consider the average density of the X-particles, defined as $x(t) = V^{-1}\langle n(t)\rangle$. A simple manipulation leads to

$$\frac{dx(t)}{dt} = -2kx^2 + 2kV^{-1}x + V^{-1}\Omega_A. \qquad (4.101)$$

⁶ The words "mean-field" have different meanings in different contexts. Here, it refers specifically to the assumption that the fluctuations can be neglected.
The last term is $V^{-1}\Omega_A = V^{-1}\omega_A N_A = \omega_A x_A$, where $x_A = V^{-1}N_A$ is the density of the reservoir. In the thermodynamic limit, $V\to\infty$, we can neglect the second term on the right-hand side and arrive at the macroscopic equation⁷:

$$\frac{dx(t)}{dt} = -2kx^2 + \omega_A x_A. \qquad (4.102)$$

⁷ The explicit solution is $x(t) = x_{st}\tanh\left(t/\tau + \operatorname{arctanh}(x(0)/x_{st})\right)$ with $x_{st} = \sqrt{\omega_A x_A/2k}$ and $\tau = 1/\sqrt{2k\,\omega_A x_A}$.
Let us turn now to the prey-predator Lotka-Volterra model. We begin by computing the average values of the numbers of prey and predators, $\langle n_1(t)\rangle = \sum_{n_1,n_2}n_1\,p(n_1,n_2;t)$ and $\langle n_2(t)\rangle = \sum_{n_1,n_2}n_2\,p(n_1,n_2;t)$. Taking the time derivative and replacing the master equation (4.73) and the rates (4.74)-(4.76), one obtains after some algebra:

$$\frac{d\langle n_1(t)\rangle}{dt} = \omega_0\langle n_1\rangle - k_1V^{-1}\langle n_1n_2\rangle, \qquad (4.103)$$
$$\frac{d\langle n_2(t)\rangle}{dt} = k_1V^{-1}\langle n_1n_2\rangle - \omega_2\langle n_2\rangle. \qquad (4.104)$$

These equations are again not closed. We could now compute the time evolution of $\langle n_1n_2\rangle = \sum_{n_1,n_2}n_1n_2\,p(n_1,n_2;t)$, but then it would be coupled to higher and higher order moments, a complete mess! We use again the mean-field approach and assume that the correlations between the populations of prey and predator can be neglected, or $\langle n_1n_2\rangle = \langle n_1\rangle\langle n_2\rangle$. Under this approximation, we obtain the closed equations:

$$\frac{d\langle n_1(t)\rangle}{dt} = \omega_0\langle n_1\rangle - k_1V^{-1}\langle n_1\rangle\langle n_2\rangle, \qquad (4.105)$$
$$\frac{d\langle n_2(t)\rangle}{dt} = k_1V^{-1}\langle n_1\rangle\langle n_2\rangle - \omega_2\langle n_2\rangle, \qquad (4.106)$$

which can be written in terms of the densities of the different species, $x_1(t) = V^{-1}\langle n_1(t)\rangle$ and $x_2(t) = V^{-1}\langle n_2(t)\rangle$:

$$\frac{dx_1(t)}{dt} = \omega_0 x_1 - k_1x_1x_2, \qquad (4.107)$$
$$\frac{dx_2(t)}{dt} = k_1x_1x_2 - \omega_2 x_2. \qquad (4.108)$$

These are the celebrated Lotka-Volterra equations.
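The closed equations (4.107)-(4.108) are easy to integrate numerically. The sketch below uses illustrative parameters and shows the familiar cycles around the fixed point $x_1^* = \omega_2/k_1$, $x_2^* = \omega_0/k_1$.

```python
# Integrating the mean-field Lotka-Volterra equations (4.107)-(4.108);
# parameter values are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

w0, w2, k1 = 1.0, 1.0, 1.0

def lotka_volterra(t, x):
    x1, x2 = x
    return [w0 * x1 - k1 * x1 * x2,    # Eq. (4.107)
            k1 * x1 * x2 - w2 * x2]    # Eq. (4.108)

sol = solve_ivp(lotka_volterra, (0.0, 30.0), [1.2, 0.8], max_step=0.01)
print(sol.y[:, -1])                    # still orbiting around (1, 1)
```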

4.6 The Fokker-Planck equation


The master equation is a complicated set of many coupled differential equations. We have already seen that it can be analyzed in terms of a partial differential equation for the generating function $G(s,t)$. We will now find an approximate partial differential equation valid directly for the probability $p(n;t)$. It is possible to develop a rigorous derivation of this equation, estimating the order of the approximation that occurs in the truncation. Here, we offer a very simplified derivation in which it is not possible to determine precisely the error of the approximation. We limit ourselves to master equations for one variable of the form (4.94), but similar expansions can be carried out in the case of having more than one variable. The idea is to consider that $n$ is a large, macroscopic variable that can be treated as a continuous one⁸. With this in mind, we use a Taylor series expansion in the definition of the step operator applied to any function $f(n)$:

$$E^\ell[f(n)] = f(n+\ell) = f(n) + \ell\frac{df(n)}{dn} + \frac{\ell^2}{2!}\frac{d^2f(n)}{dn^2} + \ldots, \qquad (4.109)$$

and

$$(E^\ell-1)[f(n)] = E^\ell[f(n)] - f(n) = \ell\frac{df(n)}{dn} + \frac{\ell^2}{2!}\frac{d^2f(n)}{dn^2} + \ldots, \qquad (4.110)$$

where (without much justification) we restrict the expansion to second order in $\ell$. The lack of justification of this truncation is one of the weak points of this simple derivation⁹. Replacing in (4.94) we obtain:
$$\frac{\partial p(n;t)}{\partial t} = \sum_\ell\left(\ell\,\frac{\partial(\Omega_{n\to n-\ell}\,p(n;t))}{\partial n} + \frac{\ell^2}{2!}\frac{\partial^2(\Omega_{n\to n-\ell}\,p(n;t))}{\partial n^2}\right), \qquad (4.111)$$

or, rearranging terms,

$$\frac{\partial p(n;t)}{\partial t} = \frac{\partial}{\partial n}\left(-F(n)p(n;t) + \frac{1}{2}\frac{\partial}{\partial n}\left(G(n)p(n;t)\right)\right), \qquad (4.112)$$

with

$$F(n) = -\sum_\ell \ell\,\Omega_{n\to n-\ell}, \qquad (4.113)$$
$$G(n) = \sum_\ell \ell^2\,\Omega_{n\to n-\ell}. \qquad (4.114)$$

Equation (4.112) has the form of a Fokker-Planck equation for the probability $p(n;t)$, with $F(n)$ and $G(n)$ the drift and diffusion coefficients that we have already encountered in previous chapters. Just to finish this chapter, we mention that one can write down the associated Langevin equation (in the Itô interpretation):

$$\frac{dn(t)}{dt} = F(n) + \sqrt{G(n)}\,\xi(t), \qquad (4.115)$$

with $\xi(t)$ the usual zero-mean, delta-correlated white noise.
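As a final sketch, one can integrate (4.115) with the Euler-Maruyama scheme. For the radioactive decay example, (4.113)-(4.114) give $F(n) = -\omega n$ and $G(n) = \omega n$; the values below are illustrative, and a single trajectory fluctuates around the mean $Ne^{-\omega t}$.

```python
# Euler-Maruyama integration (Ito) of the Langevin equation (4.115) with
# the radioactive-decay coefficients F(n) = -omega*n, G(n) = omega*n;
# omega, n(0) and dt are illustrative.
import numpy as np

rng = np.random.default_rng(4)
omega, n, dt, T = 0.5, 1000.0, 1e-3, 4.0

for _ in range(int(T / dt)):
    F, G = -omega * n, omega * n                   # drift and diffusion
    n += F * dt + np.sqrt(G * dt) * rng.normal()   # dn = F dt + sqrt(G) dW

print(n, 1000.0 * np.exp(-omega * T))   # single run vs. the mean N e^{-omega T}
```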
⁸ It is possible to formalize this idea by introducing $x = n/\mathcal{V}$, with $\mathcal{V}$ a large parameter (not necessarily related to the volume $V$). The expansion then becomes a power series in $\mathcal{V}$.
⁹ An interesting result is Pawula's theorem, which states, basically, that only by truncating at second order is the resulting equation (4.112) ensured to have the property that the probability $p(n;t)$ remains positive at all times.
