
Math 6730 : Asymptotic and Perturbation

Methods

Hyunjoong Kim and Chee-Han Tan

Last modified : January 13, 2018


Contents

Preface 5

1 Introduction to Asymptotic Approximation 7


1.1 Asymptotic Expansion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.1.1 Order symbols . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.1.2 Accuracy vs convergence . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.1.3 Manipulating asymptotic expansions . . . . . . . . . . . . . . . . . . . . 10
1.2 Algebraic and Transcendental Equations . . . . . . . . . . . . . . . . . . . . . . 11
1.2.1 Singular quadratic equation . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.2.2 Exponential equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.2.3 Trigonometric equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.3 Differential Equations: Regular Perturbation Theory . . . . . . . . . . . . . . . 16
1.3.1 Projectile motion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.3.2 Nonlinear potential problem . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.3.3 Fredholm alternative . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
1.4 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20

2 Matched Asymptotic Expansions 31


2.1 Introductory example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
2.1.1 Outer solution by regular perturbation . . . . . . . . . . . . . . . . . . . 31
2.1.2 Boundary layer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
2.1.3 Matching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
2.1.4 Composite expression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
2.2 Extensions: multiple boundary layers, etc. . . . . . . . . . . . . . . . . . . . . . 34
2.2.1 Multiple boundary layers . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
2.2.2 Interior layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
2.3 Partial differential equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
2.4 Strongly localized perturbation theory . . . . . . . . . . . . . . . . . . . . . . . 38
2.4.1 Eigenvalue asymptotics in 3D . . . . . . . . . . . . . . . . . . . . . . . . 38
2.4.2 Eigenvalue asymptotics in 2D . . . . . . . . . . . . . . . . . . . . . . . . 40
2.4.3 Summing all logarithmic terms . . . . . . . . . . . . . . . . . . . . . . . 41
2.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42

3 Method of Multiple Scales 55


3.1 Introductory Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
3.1.1 Regular expansion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
3.1.2 Multiple-scale expansion . . . . . . . . . . . . . . . . . . . . . . . . . . . 56


3.1.3 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
3.2 Forced Motion Near Resonance . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
3.3 Periodically Forced Nonlinear Oscillators . . . . . . . . . . . . . . . . . . . . . . 62
3.3.1 Isochrones . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
3.3.2 Phase equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
3.3.3 Phase resetting curves . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
3.3.4 Averaging theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
3.3.5 Phase-locking and synchronisation . . . . . . . . . . . . . . . . . . . . . . 67
3.3.6 Phase reduction for networks of coupled oscillators . . . . . . . . . . . . 68
3.4 Partial Differential Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
3.4.1 Elastic string with weak damping . . . . . . . . . . . . . . . . . . . . . . 70
3.4.2 Nonlinear wave equation . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
3.5 Pattern Formation and Amplitude Equations . . . . . . . . . . . . . . . . . . . . 73
3.5.1 Neural field equations on a ring . . . . . . . . . . . . . . . . . . . . . . . 73
3.5.2 Derivation of amplitude equation using the Fredholm alternative . . . . . 75
3.6 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78

4 The Wentzel-Kramers-Brillouin (WKB) Method 93


4.1 Introductory Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
4.2 Turning Points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
4.2.1 Transition layer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
4.2.2 Matching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
4.2.3 Matching for x > xt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
4.2.4 Matching for x < xt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
4.2.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
4.2.6 The opposite case: qt0 < 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
4.3 Wave Propagation and Energy Methods . . . . . . . . . . . . . . . . . . . . . . 103
4.3.1 Connection to energy methods . . . . . . . . . . . . . . . . . . . . . . . . 104
4.4 Higher-Dimensional Waves - Ray Methods . . . . . . . . . . . . . . . . . . . . . 106
4.4.1 WKB expansion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
4.4.2 Surfaces and wave fronts . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
4.4.3 Solution of the eikonal equation . . . . . . . . . . . . . . . . . . . . . . . 109
4.4.4 Solution of the transport equation . . . . . . . . . . . . . . . . . . . . . . 109
4.4.5 Ray equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
4.4.6 Summary for λ = 1/µ . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
4.4.7 Breakdown of the WKB solution . . . . . . . . . . . . . . . . . . . . . . 113
4.5 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113

5 Method of Homogenization 119


5.1 Introductory Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
5.2 Multi-dimensional Problem: Periodic Substructure . . . . . . . . . . . . . . . . . 122
5.2.1 Periodicity of D(x, y) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
5.2.2 Homogenization procedure . . . . . . . . . . . . . . . . . . . . . . . . . . 124
5.3 Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
Preface

These notes are largely based on the Math 6730: Asymptotic and Perturbation Methods course, taught by Paul Bressloff in Fall 2017 at the University of Utah. The main textbook is [Hol12], but additional examples, remarks, and results from other sources are added as we see fit, mainly to facilitate our understanding. These notes are by no means guaranteed to be accurate, and any mistakes here are of course our own. Please report any typographical errors or mathematical fallacies to us by email at hkim@math.utah.edu or tan@math.utah.edu.

Chapter 1

Introduction to Asymptotic Approximation

Our main goal is to construct approximate solutions of differential equations in order to gain insight into the problem, since such problems are in general nearly impossible to solve analytically due to their nonlinear nature. Among the most important tools for approximating a function in a small neighbourhood is Taylor's theorem: given f ∈ C^{N+1}(B_δ(x_0)), for any x ∈ B_δ(x_0) we can write f(x) as

f(x) = \sum_{k=0}^{N} \frac{f^{(k)}(x_0)}{k!} (x − x_0)^k + R_{N+1}(x),

where R_{N+1}(x) is the remainder term

R_{N+1}(x) = \frac{f^{(N+1)}(ξ)}{(N+1)!} (x − x_0)^{N+1}

and ξ is a point between x and x0 . Taylor’s theorem can be used to solve the following problem:

Given a fixed distance ε = |x − x_0| > 0 and a prescribed tolerance, how many terms should we include in the Taylor polynomial to achieve that accuracy?

Asymptotic approximation concerns a slightly different problem:

Given a fixed number of terms N, how accurate is the asymptotic approximation as ε −→ 0?

We want to avoid having to include more and more terms as ε −→ 0 and, in contrast to Taylor's theorem, we do not care about convergence of the asymptotic approximation. In fact, most asymptotic approximations diverge as N −→ ∞ for fixed ε.

Remark 1.0.1. If the given function is sufficiently differentiable, then Taylor’s theorem offers
a reasonable approximation and we can easily analyse the error as well.


1.1 Asymptotic Expansion


We begin the section with a motivating example. Suppose we want to evaluate the integral

f(ε) = \int_0^∞ \frac{e^{−t}}{1 + εt}\,dt, ε > 0.

We can develop an approximation of f(ε) for sufficiently small ε > 0 by repeatedly integrating by parts. Indeed,

f(ε) = 1 − ε \int_0^∞ \frac{e^{−t}}{(1 + εt)^2}\,dt
     = 1 − ε + 2ε² − 6ε³ + ⋯ + (−1)^N N! ε^N + R_N(ε)
     = \sum_{k=0}^{N} a_k ε^k + R_N(ε),

where

R_N(ε) = (−1)^{N+1} (N + 1)! ε^{N+1} \int_0^∞ \frac{e^{−t}}{(1 + εt)^{N+2}}\,dt.

Since

\int_0^∞ \frac{e^{−t}}{(1 + εt)^{N+2}}\,dt ≤ \int_0^∞ e^{−t}\,dt = 1,

it follows that |R_N(ε)| ≤ (N + 1)! ε^{N+1}. Thus, for fixed N > 0 we have that

\lim_{ε→0} \frac{f(ε) − \sum_{k=0}^{N} a_k ε^k}{ε^N} = 0,

or

f(ε) = \sum_{k=0}^{N} a_k ε^k + o(ε^N) = \sum_{k=0}^{N} a_k ε^k + O(ε^{N+1}).

The formal series \sum_{k=0}^{N} a_k ε^k is said to be an asymptotic expansion of f(ε): for fixed N, it provides a good approximation to f(ε) as ε −→ 0. However, this expansion is not convergent for any fixed ε > 0, since the general term

(−1)^N N! ε^N −→ ∞ as N −→ ∞,

i.e. the correction term actually blows up!
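This optimal-truncation behaviour is easy to see numerically. The following minimal sketch (not part of the notes) compares the partial sums \sum_{k=0}^{N}(−1)^k k! ε^k with the exact integral computed by quadrature:

import numpy as np
from math import factorial
from scipy.integrate import quad

eps = 0.1
# "Exact" value of f(eps) by numerical quadrature.
exact, _ = quad(lambda t: np.exp(-t) / (1.0 + eps * t), 0.0, np.inf)

# Partial sums of the asymptotic series sum_k (-1)^k k! eps^k.
for N in range(16):
    partial = sum((-1) ** k * factorial(k) * eps ** k for k in range(N + 1))
    print(N, abs(partial - exact))
# The error first decreases, reaches a minimum near N ~ 1/eps, then grows:
# the series diverges, yet optimally truncated it is very accurate.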

Remark 1.1.1. Observe that for sufficiently small ε > 0,

|R_N(ε)| ≪ |(−1)^N N! ε^N|,

which means that the remainder R_N(ε) is dominated by the last retained term of the approximation, i.e. the error is of higher order than the terms kept. This property is something that we would want to impose on an asymptotic expansion, and the idea can be made precise using the Landau order symbols.

1.1.1 Order symbols


Definition 1.1.2.

1. f(ε) = O(g(ε)) as ε −→ 0 means that there exists a finite M for which

|f(ε)| ≤ M|g(ε)| as ε −→ 0.

2. f(ε) = o(g(ε)) as ε −→ 0 means that

\lim_{ε→0} \frac{f(ε)}{g(ε)} = 0.

3. The ordered sequence of functions {φ_k(ε)}_{k=0}^{∞} is called an asymptotic sequence as ε −→ 0 if and only if

φ_{k+1}(ε) = o(φ_k(ε)) as ε −→ 0 for each k.

4. Let f(ε) be a continuous function of ε and {φ_k(ε)}_{k=0}^{∞} an asymptotic sequence. The formal series expansion

\sum_{k=0}^{N} a_k φ_k(ε)

is called an asymptotic expansion valid to order φ_N(ε) if for any N ≥ 0,

\lim_{ε→0} \frac{f(ε) − \sum_{k=0}^{N} a_k φ_k(ε)}{φ_N(ε)} = 0.

We typically write f(ε) ∼ \sum_{k=0}^{N} a_k φ_k(ε) as ε −→ 0.

Remark 1.1.3. Intuitively, an asymptotic expansion of a given function f is a finite sum which might diverge, yet it still provides an increasingly accurate description of the asymptotic behaviour of f as ε −→ 0. There is a caveat here: for a divergent asymptotic expansion and a given ε, there exists an optimal N_0 = N_0(ε) that gives the best approximation to f, i.e. adding more terms actually gives worse accuracy. However, for values of ε sufficiently close to the limiting value 0, the optimal number of terms required increases, i.e. for every ε_1 > 0 there exist a δ > 0 and an optimal N_0 = N_0(δ) such that

\left|f(ε) − \sum_{k=0}^{N_0} a_k φ_k(ε)\right| < ε_1 for every 0 < ε < δ.

Sometimes, in approximating general solutions of ODEs, we will need to consider time-dependent asymptotic expansions. Suppose ẋ = f(x, ε), x ∈ ℝⁿ. We seek a solution of the form

x(t, ε) ∼ \sum_{k=0}^{N} a_k(t) φ_k(ε) as ε −→ 0,

which will tend to be valid over some range of times t. It is often useful to characterise the time interval over which the asymptotic expansion exists. We say that this estimate is valid on a time-scale 1/δ̂(ε) if

\lim_{ε→0} \frac{x(t, ε) − \sum_{k=0}^{N} a_k(t) φ_k(ε)}{φ_N(ε)} = 0 for 0 ≤ δ̂(ε)t ≤ C,

for some C independent of ε.

1.1.2 Accuracy vs convergence


In the case of a Taylor series expansion, one can increase the accuracy (for fixed ε) by including
more terms in the approximation, assuming we are expanding within the radius of convergence.
This is not usually the case for an asymptotic expansion because the asymptotic expansion
concerns the limit as ε −→ 0 whereas increasing the number of terms concerns N −→ ∞ for
fixed ε.

1.1.3 Manipulating asymptotic expansions


Two asymptotic expansions can be added together term by term, assuming both involve the
same basis functions {φk (ε)}. Multiplication can also be carried out provided the asymptotic
sequence {φk (ε)} can be ordered in a particular way. What about differentiation? Suppose

f (x, ε) ∼ φ1 (x, ε) + φ2 (x, ε) as ε −→ 0.

It is not necessarily the case that

\frac{d}{dx} f(x, ε) ∼ \frac{d}{dx} φ_1(x, ε) + \frac{d}{dx} φ_2(x, ε) as ε −→ 0.

There are two possible scenarios:

Example 1.1.4. Consider f(x, ε) = e^{−x/ε} \sin(e^{x/ε}). Observe that for x > 0 we have

\lim_{ε→0} \frac{f(x, ε)}{ε^n} = 0 for all finite n,

which means that

f(x, ε) ∼ 0 + 0·ε + 0·ε² + ⋯ as ε −→ 0.

However,

\frac{d}{dx} f(x, ε) = −\frac{1}{ε} e^{−x/ε} \sin(e^{x/ε}) + \frac{1}{ε} \cos(e^{x/ε}),

which is unbounded as ε −→ 0, i.e. the derivative cannot be expanded using the asymptotic sequence {1, ε, ε², …}.

Example 1.1.5. Even if {φ_k} is an ordered asymptotic sequence, the sequence of derivatives {φ_k'} need not be. Consider φ_1(x) = 1 + x, φ_2(x) = ε \sin(x/ε) for x ∈ (0, 1). Then φ_2 = o(φ_1), but

φ_1'(x) = 1, φ_2'(x) = \cos(x/ε),

which are not ordered!

On the bright side, if

f(x, ε) ∼ a_1(x)φ_1(ε) + a_2(x)φ_2(ε) as ε −→ 0, (1.1.1)

and if

\frac{d}{dx} f(x, ε) ∼ b_1(x)φ_1(ε) + b_2(x)φ_2(ε) as ε −→ 0, (1.1.2)

then b_k = \frac{da_k}{dx}, i.e. the asymptotic expansion for df/dx can be obtained by term-by-term differentiation of (1.1.1). Throughout this course, we will assume that (1.1.2) holds whenever we are given (1.1.1), which is almost always the case in practice. Integration, on the other hand, is less problematic. If

f(x, ε) ∼ a_1(x)φ_1(ε) + a_2(x)φ_2(ε) as ε −→ 0 for x ∈ [a, b],

and all the functions are integrable, then

\int_a^b f(x, ε)\,dx ∼ \left(\int_a^b a_1(x)\,dx\right) φ_1(ε) + \left(\int_a^b a_2(x)\,dx\right) φ_2(ε) as ε −→ 0.

1.2 Algebraic and Transcendental Equations


We study three examples where approximate solutions are found using asymptotic expansions, but each uses a different method. They serve to illustrate the important point that, instead of performing a routine procedure with the standard asymptotic sequence, we should tailor our asymptotic expansion to extract the relevant physical property or behaviour of the problem.

1.2.1 Singular quadratic equation


Consider the quadratic equation

εx² + 2x − 1 = 0. (1.2.1)
This is known as a singular problem since the order of the polynomial (and thus the nature
of the equation) changes when ε = 0; in this case the unique solution is x = 1/2. It is evident
from Figure 1.1 that there are two real roots for sufficiently small ε; one is located slightly to
the left of x = 1/2 and one far left on the x-axis. This means that the asymptotic expansion
should not start out as
x(ε) ∼ εx0 + . . . ,
Therefore, we try the asymptotic expansion

x(ε) ∼ x_0 + ε^α x_1 + ⋯ as ε −→ 0, (1.2.2)

for some α > 0. Substituting (1.2.2) into (1.2.1) leads to

ε[x_0² + 2ε^α x_0 x_1 + ⋯] + 2[x_0 + ε^α x_1 + ⋯] − 1 = 0, (1.2.3)

where we label the first bracketed group of terms ① and the second ②.

Figure 1.1: Graphs of y = 1 − 2x and y = εx².

Requiring (1.2.3) to hold as ε −→ 0 results in the O(1) equation

2x_0 − 1 = 0 ⟹ x_0 = 1/2.

Since the right-hand side is zero, the O(ε) term in ① must be balanced by the O(ε^α) term in ②. This means that we must choose α = 1, and the O(ε) equation is

x_0² + 2x_1 = 0 ⟹ x_1 = −1/8.

Consequently, a two-term expansion of one of the roots is

x^{(1)}(ε) ∼ 1/2 − ε/8 + ⋯ as ε −→ 0.

The chosen ansatz (1.2.2) produces an approximation for the root near x = 1/2; we missed the other root because it approaches negative infinity as ε −→ 0. One possible method to generate the other root is to consider solving

ε(x − x_1)(x − x_2) = 0,

but a more systematic method, which is also applicable to ODEs, is to avoid the O(1) solution. Take

x ∼ ε^γ (x_0 + ε^α x_1 + ⋯) as ε −→ 0, (1.2.4)
for some α > 0. Substituting (1.2.4) into (1.2.1) gives

ε^{1+2γ}[x_0² + 2ε^α x_0 x_1 + ⋯] + 2ε^γ[x_0 + ε^α x_1 + ⋯] − 1 = 0, (1.2.5)

with the three groups of terms labelled ①, ② and ③. The terms on the LHS must balance to produce zero, and we need to determine the order of the problem that comes from this balancing. There are three possibilities at leading order:

1. Set γ = 0: we recover the root x^{(1)}(ε) on balancing ② and ③.

2. Balance ① and ③, with ② of higher order. The condition ① ∼ ③ requires

1 + 2γ = 0 ⟹ γ = −1/2,

so that the leading-order terms in ① and ③ are O(1), whilst ② = O(ε^{−1/2}), which is lower order (i.e. larger) than ① and ③. This is not possible.

3. Balance ① and ②, with ③ of higher order. The condition ① ∼ ② requires

1 + 2γ = γ ⟹ γ = −1,

so that the leading-order terms in ① and ② are O(ε^{−1}) and ③ = O(1). This is consistent with the assumption!

Setting γ = −1 in (1.2.5) and multiplying by ε results in

[x_0² + 2ε^α x_0 x_1 + ⋯] + 2[x_0 + ε^α x_1 + ⋯] − ε = 0. (1.2.6)

The O(1) equation is

x_0² + 2x_0 = 0 ⟹ x_0 = 0 or x_0 = −2.

The solution x_0 = 0 gives rise to the root x^{(1)}(ε) on choosing α = 1, so the new root is obtained by taking x_0 = −2. Balancing the equation as before means we must choose α = 1, and the O(ε) equation is

2x_0 x_1 + 2x_1 − 1 = 0 ⟹ x_1 = −1/2.

Hence, a two-term expansion of the second root of (1.2.1) is

x^{(2)}(ε) ∼ \frac{1}{ε}\left(−2 − \frac{ε}{2}\right) = −\frac{2}{ε} − \frac{1}{2} as ε −→ 0.

Remark 1.2.1. We may choose x_0 = 1/2 in (1.2.2) since one of the roots should stay close to x = 1/2 as we “switch on” ε in the term εx².
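As a sanity check (not part of the original notes; a minimal numerical sketch), one can compare the two asymptotic roots with the exact roots of εx² + 2x − 1 = 0 computed by numpy:

import numpy as np

for eps in [0.1, 0.01, 0.001]:
    exact = np.sort(np.roots([eps, 2.0, -1.0]))   # exact roots [negative, positive]
    x1 = 0.5 - eps / 8.0                          # regular root  ~ 1/2 - eps/8
    x2 = -2.0 / eps - 0.5                         # singular root ~ (1/eps)(-2 - eps/2)
    print(eps, exact[1] - x1, exact[0] - x2)
# The two errors shrink like O(eps^2) and O(eps) respectively as eps decreases.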
1.2.2 Exponential equation

Unlike algebraic equations, in most cases it is harder to determine the number of solutions of transcendental equations, and we must resort to graphical methods. Consider the equation

x² + e^{εx} = 5. (1.2.7)

From Figure 1.2, we see that there are two real solutions, near x = ±2. We assume an asymptotic expansion of the form

x(ε) ∼ x_0 + ε^α x_1 + ⋯ as ε −→ 0, (1.2.8)

for some α > 0. Substituting (1.2.8) into (1.2.7) and expanding the exponential term e^{εx} for small ε, we obtain

[x_0² + 2ε^α x_0 x_1 + ⋯] + [1 + εx_0 + ⋯] = 5, (1.2.9)

with the bracketed groups labelled ① and ② and the right-hand side ③. The O(1) equation is

x_0² + 1 = 5 ⟹ x_0 = ±2.

Balancing the O(ε^α) term in ① and the O(ε) term in ② gives α = 1, and the O(ε) equation is

2x_0 x_1 + x_0 = 0 ⟹ x_1 = −1/2.

Hence, a two-term asymptotic expansion of each solution is

x(ε) ∼ ±2 − ε/2 as ε −→ 0.

Figure 1.2: Graphs of y = 5 − x² and y = e^{εx}.


1.2.3 Trigonometric equation

Consider the equation

x + 1 + ε sech(x/ε) = 0. (1.2.10)

It appears from Figure 1.3 that there exists a real solution and that it approaches x = −1 as ε −→ 0. If we naively try

x ∼ x_0 + ε^α x_1 + ⋯ as ε −→ 0,

we obtain

[x_0 + ε^α x_1 + ⋯] + 1 + ε sech((x_0 + ε^α x_1 + ⋯)/ε) = 0,

and it follows that x_0 = −1, since sech(x) ∈ (0, 1] for any x ∈ ℝ. However, we cannot balance subsequent leading-order terms, since it is not possible to find a suitable α due to the nature of sech(x). Guided by the definition of asymptotic sequences, we instead assume an asymptotic expansion of the form

x(ε) ∼ x_0 + µ(ε) as ε −→ 0, (1.2.11)

where we impose the condition µ(ε) ≪ 1 when ε ≪ 1. Substituting (1.2.11) into (1.2.10) we obtain

[x_0 + µ(ε)] + 1 + ε sech(x_0/ε + µ(ε)/ε) = 0. (1.2.12)

The O(1) equation remains x_0 = −1, and (1.2.12) reduces to

µ(ε) + ε sech(x_0/ε + µ(ε)/ε) = 0.

Since

sech(x_0/ε + µ(ε)/ε) ∼ sech(−1/ε) = \frac{2}{e^{1/ε} + e^{−1/ε}} ∼ 2e^{−1/ε},

we require

µ(ε) = −2ε e^{−1/ε} = o(1) as ε −→ 0.

To construct the third term in the expansion, we would extend (1.2.11) to

x ∼ −1 − 2ε e^{−1/ε} + ν(ε),

where we impose the condition ν(ε) ≪ ε e^{−1/ε}.

Figure 1.3: Graphs of y = −x − 1 and y = ε sech(x/ε).
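The exponentially small correction can be checked numerically. The sketch below is not from the notes; it simply solves (1.2.10) with a standard root finder and compares with the two-term expansion:

import numpy as np
from scipy.optimize import brentq

for eps in [0.2, 0.1, 0.05]:
    f = lambda x: x + 1.0 + eps / np.cosh(x / eps)   # sech = 1/cosh
    root = brentq(f, -2.0, 0.0)                      # the root near x = -1
    approx = -1.0 - 2.0 * eps * np.exp(-1.0 / eps)   # x ~ -1 - 2*eps*exp(-1/eps)
    print(eps, root - approx)
# The remaining error is smaller than the already tiny correction 2*eps*exp(-1/eps).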


1.3 Differential Equations: Regular Perturbation Theory

Roughly speaking, regular perturbation theory is a variant of Taylor's theorem, in the sense that we seek a power series solution in ε. More precisely, we assume that the solution takes the form

x ∼ x_0 + εx_1 + ε²x_2 + ⋯ as ε −→ 0,

where x_0 is the zeroth-order solution, i.e. the solution for the case ε = 0.

1.3.1 Projectile motion

Consider the motion of a gerbil projected radially upward from the surface of the Earth. Let x(t) be the height of the gerbil above the surface of the Earth. Newton's law of motion asserts that

\frac{d²x}{dt²} = −\frac{gR²}{(x + R)²}, (1.3.1)

where R is the radius of the Earth and g is the gravitational acceleration at the surface. If x ≪ R, then to a first approximation we obtain the initial value problem

\frac{d²x}{dt²} ≈ −\frac{gR²}{R²} = −g, x(0) = 0, x'(0) = v_0, (1.3.2)

where v_0 is some initial velocity. The solution is

x(t) = −\frac{gt²}{2} + v_0 t. (1.3.3)

Figure 1.4: Graph of x(t) versus t for the first-approximation problem (1.3.2); the parabola peaks at (v_0/g, v_0²/(2g)) and returns to zero at (2v_0/g, 0).

Unfortunately, this simplification does not determine a correction to the approximate solution (1.3.3). To this end, we nondimensionalise (1.3.1) with the dimensionless variables

τ = t/t_c, y = x/x_c,

where t_c = v_0/g and x_c = v_0²/g are the chosen characteristic time and length scales respectively. This results in the dimensionless initial-value problem

\frac{d²y}{dτ²} = −\frac{1}{(1 + εy)²}, y(0) = 0, y'(0) = 1. (1.3.4)

Observe that the dimensionless parameter ε = x_c/R = v_0²/(gR) measures how high the projectile gets in comparison to the radius of the Earth. Consider an asymptotic expansion

y(τ) ∼ y_0(τ) + ε^α y_1(τ) + ⋯ as ε −→ 0, (1.3.5)

where the exponent α > 0 is included since a priori there is no reason to assume α = 1. Assuming we can differentiate (1.3.5) term by term, we obtain, using the generalised binomial theorem,

y_0'' + ε^α y_1'' + ⋯ = −\frac{1}{[1 + εy_0 + ⋯]²} ∼ −1 + 2εy_0 + ⋯,

with

y_0(0) + ε^α y_1(0) + ⋯ = 0, y_0'(0) + ε^α y_1'(0) + ⋯ = 1.

The O(1) problem is

y_0'' = −1, y_0(0) = 0, y_0'(0) = 1 ⟹ y_0(τ) = −τ²/2 + τ,

and we must choose α = 1 to balance the term 2εy_0. Consequently, the O(ε) problem is

y_1'' = 2y_0, y_1(0) = 0, y_1'(0) = 0 ⟹ y_1(τ) = τ³/3 − τ⁴/12.

Hence, a two-term asymptotic expansion of the solution of (1.3.4) is

y(τ) ∼ τ\left(1 − \frac{τ}{2}\right) + \frac{ετ³}{3}\left(1 − \frac{τ}{4}\right).

Note that the O(1) term is the (scaled) solution (1.3.3) of the uniform-gravity problem (1.3.2), and the O(ε) term (the first-order correction) captures the nonlinear effect of the problem.
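As a quick check (not from the notes; a minimal sketch using scipy's ODE solver), one can integrate (1.3.4) numerically and compare with the two-term expansion:

import numpy as np
from scipy.integrate import solve_ivp

eps = 0.1
# y'' = -1/(1 + eps*y)^2, y(0) = 0, y'(0) = 1, as a first-order system.
rhs = lambda t, z: [z[1], -1.0 / (1.0 + eps * z[0]) ** 2]
tau = np.linspace(0.0, 1.5, 301)
sol = solve_ivp(rhs, (0.0, 1.5), [0.0, 1.0], t_eval=tau, rtol=1e-10, atol=1e-12)

asym = tau * (1 - tau / 2) + eps * tau**3 / 3 * (1 - tau / 4)   # two-term expansion
print(np.max(np.abs(sol.y[0] - asym)))   # maximum error is O(eps^2)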

1.3.2 Nonlinear potential problem

An interesting physical problem is the modelling of the diffusion of ions through a solution containing charged molecules. Assuming the solution occupies a domain Ω, the electrostatic potential φ(x) in the solution satisfies the Poisson–Boltzmann equation

∇²φ = −\sum_{i=1}^{k} α_i z_i e^{−z_i φ}, x ∈ Ω, (1.3.6)

where the α_i are positive constants and z_i is the valence of the ith ionic species. The whole system must be neutral, and this gives the electroneutrality condition

\sum_{i=1}^{k} α_i z_i = 0. (1.3.7)

We impose the Neumann boundary condition, in which we assume the charge is uniform on the boundary:

∇φ · n = ∂_n φ = ε on ∂Ω, (1.3.8)

where n is the unit outward normal to ∂Ω.

This nonlinear problem has no known closed-form solution. To deal with this, we invoke the classical Debye–Hückel theory of electrochemistry, which assumes that the potential is small enough that the Poisson–Boltzmann equation can be linearised. Because of the boundary condition (1.3.8), we may assume the zeroth-order solution is 0 and guess an asymptotic expansion of the form

φ ∼ ε(φ_0(x) + εφ_1(x) + ⋯) as ε −→ 0, (1.3.9)

where a small potential means ε is small. Substituting (1.3.9) into (1.3.6) and expanding the exponential function about 0 yields

ε[∇²φ_0 + ε∇²φ_1 + ⋯] = −\sum_{i=1}^{k} α_i z_i e^{−εz_i(φ_0 + εφ_1 + ⋯)}
 = −\sum_{i=1}^{k} α_i z_i \left[1 − εz_i(φ_0 + εφ_1 + ⋯) + \frac{1}{2}ε²z_i²(φ_0 + εφ_1 + ⋯)² + ⋯\right]
 = −\sum_{i=1}^{k} α_i z_i \left[1 − εz_i φ_0 + ε²\left(−z_i φ_1 + \frac{1}{2}z_i²φ_0²\right) + ⋯\right]
 ∼ ε\left(\sum_{i=1}^{k} α_i z_i²\right)φ_0 + ε²\sum_{i=1}^{k} α_i\left(z_i²φ_1 − \frac{1}{2}z_i³φ_0²\right),

where the O(1) term vanishes by the electroneutrality condition (1.3.7).
2

k
X
Setting κ2 = αi zi2 , the O(ε) equation is
i=1

∇2 φ0 = κ2 φ0 in Ω, (1.3.10a)
∂n φ0 = 1 on ∂Ω. (1.3.10b)

k
1X
Setting λ = αi zi3 , the O(ε2 ) equation is
2 i=1

∇2 − κ2 φ1 = −λφ20

in Ω, (1.3.11a)
∂n φ1 = 0 on ∂Ω. (1.3.11b)

Take Ω to be the region outside the unit sphere, which is radially symmetric. Writing the Laplacian ∇² in spherical coordinates, the solution must be independent of the angular variables, since the boundary condition is independent of the angular variables. With φ_0 = φ_0(r), the O(ε) equation takes the form

\frac{1}{r²}\frac{d}{dr}\left(r²\frac{dφ_0}{dr}\right) − κ²φ_0 = 0 for 1 < r < ∞, (1.3.12a)
φ_0'(1) = −1, (1.3.12b)

where the negative sign is due to n = −r̂. The bounded solution of (1.3.12) is

φ_0(r) = \frac{1}{(1 + κ)r} e^{κ(1−r)},

where the exponential factor is the screening term. With φ_1 = φ_1(r), the O(ε²) equation takes the form

\frac{1}{r²}\frac{d}{dr}\left(r²\frac{dφ_1}{dr}\right) − κ²φ_1 = −\frac{λ}{(1 + κ)²r²} e^{2κ(1−r)} for 1 < r < ∞, (1.3.13a)
φ_1'(1) = 0. (1.3.13b)

Using the method of variation of parameters, the solution of (1.3.13) is

φ_1(r) = \frac{α}{r}e^{−κr} + \frac{γ}{κr}\left[e^{κr}E_1(3κr) − e^{−κr}E_1(κr)\right],
γ = \frac{λe^{2κ}}{2κ(1 + κ)²},
α = \frac{γ}{κ(1 + κ)}\left[(κ − 1)e^{2κ}E_1(3κ) + (κ + 1)E_1(κ)\right],
E_1(z) = \int_z^∞ \frac{e^{−t}}{t}\,dt.
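A quick numerical consistency check (not in the notes; a minimal sketch with an arbitrary κ) confirms that φ_0(r) = e^{κ(1−r)}/((1+κ)r) satisfies the radial equation (1.3.12) and its boundary condition:

import numpy as np

kappa = 1.3
r = np.linspace(1.0, 6.0, 5001)
h = r[1] - r[0]
phi0 = np.exp(kappa * (1 - r)) / ((1 + kappa) * r)

# Radial Laplacian (1/r^2) d/dr (r^2 dphi/dr) by centred finite differences.
dphi = np.gradient(phi0, h)
lap = np.gradient(r**2 * dphi, h) / r**2

residual = lap - kappa**2 * phi0
print(np.max(np.abs(residual[5:-5])))   # ~ O(h^2), essentially zero
print(dphi[0])                          # ~ -1, matching phi0'(1) = -1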

1.3.3 Fredholm alternative

Let L_0 and L_1 be linear differential or integral operators on the Hilbert space L²(ℝ) with the standard inner product

⟨f, g⟩ = \int_{−∞}^{∞} f(x)g(x)\,dx.

Consider the perturbed eigenvalue problem

(L_0 + εL_1)φ = λφ. (1.3.14)

Spectral problems are widely studied in the context of time-dependent PDEs, for instance when time-harmonic solutions are sought, and we are interested in the behaviour of the spectrum of L_0 as we perturb L_0. Suppose further that for ε = 0, the unperturbed equation has a unique solution (λ_0, φ_0) with λ_0 non-degenerate. For simplicity, take L_0 to be self-adjoint, that is,

⟨f, L_0 g⟩ = ⟨L_0 f, g⟩.

Since L_0, L_1 are linear, we introduce asymptotic expansions for both the eigenfunction φ and the eigenvalue λ with asymptotic sequence {1, ε, ε², …}:

φ ∼ φ_0 + εφ_1 + ε²φ_2 + ⋯,
λ ∼ λ_0 + ελ_1 + ε²λ_2 + ⋯.

We obtain

(L_0 + εL_1)(φ_0 + εφ_1 + ε²φ_2 + ⋯) = (λ_0 + ελ_1 + ε²λ_2 + ⋯)(φ_0 + εφ_1 + ε²φ_2 + ⋯).

The O(1) equation is L_0φ_0 = λ_0φ_0 and the O(ε) equation is

L_0φ_1 + L_1φ_0 = λ_0φ_1 + λ_1φ_0, i.e. (L_0 − λ_0 I)φ_1 = λ_1φ_0 − L_1φ_0.

It follows from the Fredholm alternative that a necessary condition for the existence of φ_1 ∈ L²(ℝ) is that

(λ_1φ_0 − L_1φ_0) ∈ ker((L_0 − λ_0 I)^*)^⊥ = ker(L_0 − λ_0 I)^⊥,

and this in turn provides the solvability condition for λ_1. Since ker(L_0 − λ_0 I) = span(φ_0) and L_0 is self-adjoint,

0 = ⟨φ_0, (L_0 − λ_0 I)φ_1⟩ = λ_1⟨φ_0, φ_0⟩ − ⟨φ_0, L_1φ_0⟩ ⟹ λ_1 = \frac{⟨φ_0, L_1φ_0⟩}{⟨φ_0, φ_0⟩}.

This expression for λ_1 is the first-order correction to the eigenvalue of the operator L_0 + εL_1. The O(εⁿ) equation can be analysed in a similar manner, where λ_n is found from the solvability condition provided by the Fredholm alternative, assuming {λ_0, λ_1, …, λ_{n−1}} are non-degenerate.

1.4 Problems

1. Consider the transcendental equation

1 + \sqrt{x² + ε} = e^x. (1.4.1)

Explain why there is only one small root for small ε. Find the three-term expansion of the root

x ∼ x_0 + x_1ε^α + x_2ε^β, β > α > 0.

Solution: Consider the two graphs f(x) = \sqrt{x² + ε} and g(x) = e^x − 1. If x < 0, then f(x) > 0 > g(x), so there is no solution in the negative region. If x > 0, then f(x) → x as x → ∞, starting from f(0) = \sqrt{ε}; sketching f(x) and g(x) for x > 0 shows that there is only one solution, and it is small for small ε.

To obtain the first term of the expansion, set ε = 0. Then we get

1 + x = e^x ⟹ x = 0.

Since there is only one small solution for all ε > 0, we can set x_0 = 0 and expand x further around this point. To do so, rewrite equation (1.4.1) as

x² + ε = (e^x − 1)²,

and, since x ≪ 1 when ε ≪ 1, it is reasonable to expand the RHS as a Taylor series. One obtains

x² + ε = \left(x + \frac{1}{2!}x² + \frac{1}{3!}x³ + ⋯\right)² = x² + x³ + \frac{7}{12}x⁴ + ⋯. (1.4.2)

Before balancing, consider the leading order of each side. The leading order of the RHS is clearly ε^{2α}, with coefficient x_1². For the LHS = x² + ε there are three cases:

(a) If 2α > 1, the leading order of the LHS is ε, with coefficient 1; balancing it against the RHS would force 2α = 1, a contradiction. (×)

(b) If 2α = 1, the leading-order balance reads x_1² + 1 = x_1², which does not make sense. (×)

(c) Thus the only possibility is 2α < 1 (the x² terms on the two sides balance each other).

One can then cancel x² and rewrite equation (1.4.2) as

ε = x³ + \frac{7}{12}x⁴ + ⋯ = (x_1³ε^{3α} + 3x_1²x_2ε^{2α+β} + ⋯) + \left(\frac{7}{12}x_1⁴ε^{4α} + ⋯\right).

Since the leading order of the RHS is ε^{3α}, it follows that 1 = 3α and 1 = x_1³, so α = 1/3 and x_1 = 1. The next leading term is of order ε^{4α}. Since there is no remaining term on the LHS, balance the RHS terms:

2α + β = 4α and 0 = 3x_1²x_2 + \frac{7}{12}x_1⁴,

which yields β = 2α = 2/3 and x_2 = −7/36. Therefore, the three-term expansion of the root is

x ∼ 0 + ε^{1/3} − \frac{7}{36}ε^{2/3}. (1.4.3)
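A numerical check of (1.4.3) (not from the notes; it assumes the reconstructed equation 1 + √(x² + ε) = e^x):

import numpy as np
from scipy.optimize import brentq

for eps in [1e-2, 1e-3, 1e-4]:
    f = lambda x: 1.0 + np.sqrt(x * x + eps) - np.exp(x)
    root = brentq(f, 1e-12, 1.0)                       # the small positive root
    approx = eps ** (1 / 3) - (7 / 36) * eps ** (2 / 3)
    print(eps, root, root - approx)
# The difference decays faster than eps^(2/3), consistent with a three-term expansion.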

2. A classical eigenvalue problem is the transcendental equation

λ = tan(λ).

(a) After sketching the two functions in the equation, establish that there is an infinite number of solutions, and that for sufficiently large λ a solution takes the form

λ = πn + π/2 − x_n,

with x_n small.

Solution: The tangent is a π-periodic function with vertical asymptotes at λ_n = πn + π/2, and tan(λ) → +∞ as λ → λ_n⁻. Since the line f(λ) = λ crosses every branch of the tangent, and for large λ the crossing occurs close to the asymptote, for sufficiently large n the solution takes the form

λ = λ_n − x_n,

where x_n is a small number that tends to zero as n → ∞.

(b) Find an asymptotic expansion of the large solutions of the form

λ ∼ ε^{−α}λ_0 + ε^β λ_1,

and determine ε, α, β, λ_0, λ_1.

Solution: Set ε = 1/λ_n, where λ_n = πn + π/2, and examine the asymptotic behaviour of x_n; from this one can read off the expansion of λ. For convenience, write x_n = x and λ = 1/ε − x. Since tan(λ_n − x) = cot(x), the equation λ = tan(λ) becomes

\frac{1}{ε} − x = \cot(x).

Multiplying both sides by ε tan(x) gives

\tan(x) − εx\tan(x) = ε.

Since we know that x → 0 as ε → 0, take x ∼ x_0ε^θ, θ > 0. It follows that

(x_0ε^θ + ⋯) − ε(x_0ε^θ + ⋯)(x_0ε^θ + ⋯) = ε.

Since θ > 0, the leading order of the LHS is ε^θ. To balance with the O(ε) right-hand side, set θ = 1, which gives x_0 = 1. Therefore, an asymptotic expansion of λ is

λ = \frac{1}{ε} − x ∼ \frac{1}{ε} − ε = ε^{−1}(1) + ε(−1).

It follows that α = β = 1, λ_0 = 1 and λ_1 = −1.
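A quick numerical check of this two-term result (not in the notes; a standard root-finder sketch):

import numpy as np
from scipy.optimize import brentq

for n in [2, 5, 20]:
    lam_n = np.pi * n + np.pi / 2
    # Root of lambda = tan(lambda) just below the asymptote at lam_n.
    root = brentq(lambda l: l - np.tan(l), np.pi * n + 1e-6, lam_n - 1e-9)
    approx = lam_n - 1.0 / lam_n          # lambda ~ 1/eps - eps with eps = 1/lam_n
    print(n, root, root - approx)
# The discrepancy shrinks like O(1/lam_n^3) as n grows.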

3. In the study of porous media one is interested in determining the permeability k(s) = F'(c(s)), where

\int_0^1 F^{−1}(c − εr)\,dr = s,
F^{−1}(c) − F^{−1}(c − ε) = β,

and β is a given positive constant. The functions F(c) and c both depend on ε, whereas s and β are independent of ε. Find the first term in the expansion of the permeability for small ε. Hint: consider an asymptotic expansion of c and use the fact that s is independent of ε.

Solution: Take c ∼ c_0 + c_1ε + ⋯. Substituting this into the given integral equation and expanding F^{−1} as a Taylor series centred at c = c_0 yields

\int_0^1 \left[F^{−1}(c_0) + ε(c_1 − r)\frac{dF^{−1}}{dc}(c_0) + O(ε²)\right]dr
 = F^{−1}(c_0) + ε\left(c_1 − \frac{1}{2}\right)\frac{dF^{−1}}{dc}(c_0) + O(ε²) = s.

Since s is independent of ε, this gives

F^{−1}(c_0) = s and c_1 − \frac{1}{2} = 0 ⟹ c_0 = F(s) and c_1 = \frac{1}{2}.

From the second condition, expanding F^{−1} as a Taylor series gives

F^{−1}(c_0) + εc_1\frac{dF^{−1}}{dc}(c_0) − F^{−1}(c_0) − ε(c_1 − 1)\frac{dF^{−1}}{dc}(c_0) + O(ε²) = β.

It follows that

\frac{ε}{F'(s)} + O(ε²) = β ⟹ k(s) = F'(s) ∼ \frac{ε}{β}.

4. Let A and D be real n × n matrices.

(a) Suppose A is symmetric and has n distinct eigenvalues. Find a two-term expansion of the eigenvalues of the perturbed matrix A + εD, where D is positive definite.

Solution: We assume asymptotic expansions of the eigenpairs (λ, x):

λ ∼ λ_0 + ελ_1 + ε²λ_2 + ⋯,
x ∼ x_0 + εx_1 + ε²x_2 + ⋯.

Substituting these into the eigenvalue equation (A + εD)x = λx yields

(A + εD)(x_0 + εx_1 + ⋯) = (λ_0 + ελ_1 + ⋯)(x_0 + εx_1 + ⋯).

The O(1) equation is Ax_0 = λ_0x_0, which means that (λ_0, x_0) is an eigenpair of the matrix A. The O(ε) equation is

Ax_1 + Dx_0 = λ_0x_1 + λ_1x_0,

or

Lx_1 = (A − λ_0 I)x_1 = λ_1x_0 − Dx_0.

It follows from the Fredholm alternative that the solvability condition is

λ_1x_0 − Dx_0 ∈ ker(L^T)^⊥ = ker(L)^⊥, where ker(L) = span(x_0).

Consequently,

0 = x_0^T Lx_1 = x_0^T(λ_1x_0 − Dx_0) ⟹ λ_1 = \frac{x_0^T Dx_0}{x_0^T x_0}.
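The formula is easy to test numerically (a minimal sketch, not part of the notes, with a random symmetric A and a random positive definite D):

import numpy as np

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n)); A = (A + A.T) / 2          # symmetric A
M = rng.standard_normal((n, n)); D = M @ M.T + np.eye(n)    # positive definite D

lam0, X = np.linalg.eigh(A)
eps = 1e-4
lam_pert = np.linalg.eigvalsh(A + eps * D)

# First-order corrections lambda_1 = x0^T D x0 / (x0^T x0) for each eigenpair.
lam1 = np.array([(X[:, j] @ D @ X[:, j]) / (X[:, j] @ X[:, j]) for j in range(n)])
print(np.max(np.abs(lam_pert - (lam0 + eps * lam1))))        # ~ O(eps^2)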

(b) Consider the matrices

A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, D = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}.

Use this example to show that an O(ε) perturbation of a matrix need not result in an O(ε) perturbation of the eigenvalues, nor need the perturbation be smooth (at ε = 0).

Solution: The perturbed matrix

A + εD = \begin{pmatrix} 0 & 1 \\ ε & 0 \end{pmatrix}

has eigenvalues λ = ±\sqrt{ε}, which are not of O(ε) and are not differentiable at ε = 0.

5. The eigenvalue problem for the vertical displacement y(x) of an elastic string with variable density is

y'' + λ²ρ(x, ε)y = 0, 0 < x < 1,

where y(0) = y(1) = 0. For small ε, assume ρ ∼ 1 + εµ(x), where µ(x) is positive and continuous. Consider the asymptotic expansions

y ∼ y_0(x) + εy_1(x), λ ∼ λ_0 + ελ_1.

(a) Find y_0, λ_0 and λ_1. (The latter will involve an integral expression.)

Solution: Substituting the given asymptotic expansions together with the approximation ρ ∼ 1 + εµ(x) gives

[y_0'' + εy_1'' + ⋯] + [λ_0 + ελ_1 + ⋯]²[1 + εµ(x)][y_0 + εy_1 + ⋯] = 0.

The O(1) equation is

y_0'' + λ_0²y_0 = 0, y_0(0) = y_0(1) = 0,

and this boundary value problem has solutions

y_{0,n}(x) = A\sin(λ_{0,n}x) = A\sin(nπx), n = 1, 2, 3, ….

The O(ε) equation is

y_1'' + λ_0²y_1 + λ_0²µ(x)y_0 + 2λ_0λ_1y_0 = 0, y_1(0) = y_1(1) = 0.

Using integration by parts, one can show that the linear operator L = \frac{d²}{dx²} + λ_0² with domain

D(L) = {f ∈ C²[0, 1] : f(0) = f(1) = 0}

is self-adjoint with respect to the L² inner product over [0, 1]. Moreover, for fixed λ_0 it has a one-dimensional kernel, ker(L) = span(sin(λ_0x)). We can now determine λ_1 using the Fredholm alternative; this results in

0 = ⟨y_0, λ_0²µ(x)y_0⟩ + ⟨y_0, 2λ_0λ_1y_0⟩,
λ_1 = −\frac{λ_0²⟨y_0, µ(x)y_0⟩}{2λ_0⟨y_0, y_0⟩} = −λ_0\int_0^1 µ(x)\sin²(nπx)\,dx,

since

⟨y_0, y_0⟩ = A²\int_0^1 \sin²(nπx)\,dx = A²\int_0^1 \frac{1 − \cos(2nπx)}{2}\,dx = \frac{A²}{2}.

(b) Using the equation for y_1, explain why the asymptotic expansion can break down when λ_0 is large.

Solution: From the previous results, the equation for y_1 can be written as

y_1''(x) + λ_0²y_1(x) = λ_0²\left[−µ(x) + 2\int_0^1 µ(s)\sin²(nπs)\,ds\right]y_0(x).

Notice that the RHS is proportional to λ_0², so the particular solution for y_1 is also proportional to λ_0². Hence εy_1 is no longer a small correction to y_0 once ελ_0² = O(1), and the expansion breaks down for large λ_0.
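A finite-difference check of part (a) (not from the notes; a minimal sketch with the specific choice µ(x) = 1 + x, for which ∫_0^1 (1+x)sin²(πx)dx = 3/4):

import numpy as np

eps, N = 0.01, 1000
h = 1.0 / N
xi = np.linspace(h, 1.0 - h, N - 1)             # interior grid points
rho = 1.0 + eps * (1.0 + xi)                    # rho = 1 + eps*mu, mu = 1 + x

# Discretize y'' + lam^2 rho y = 0, y(0)=y(1)=0:  D2 y = lam^2 diag(rho) y.
D2 = (np.diag(2.0 * np.ones(N - 1)) - np.diag(np.ones(N - 2), 1)
      - np.diag(np.ones(N - 2), -1)) / h**2
w = 1.0 / np.sqrt(rho)
lam2 = np.linalg.eigvalsh(w[:, None] * D2 * w[None, :])   # symmetrized pencil
lam_num = np.sqrt(lam2[0])                                 # fundamental mode, n = 1

lam0 = np.pi
lam1 = -lam0 * 0.75            # -lam0 * int_0^1 (1+x) sin^2(pi x) dx = -3*pi/4
print(lam_num, lam0 + eps * lam1)   # agreement to O(eps^2) plus small grid error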

6. Consider the following eigenvalue problem:

\int_0^a K(x, s)y(s)\,ds = λy(x), 0 < x < a.

This is a Fredholm integral equation, where the kernel K(x, s) is known and is assumed to be smooth and positive. The eigenfunction y(x) is taken to be positive and normalised so that

\int_0^a y²(s)\,ds = a.

Both y(x) and λ depend on the parameter a, which is assumed to be small. (Below, K_x and K_d denote the derivatives of K with respect to its first and second arguments respectively.)

(a) Find the first two terms in the expansion of λ and y(x) for small a.

Solution: Since the LHS of the Fredholm integral equation is proportional to a (the integration range has length a), the leading order of the eigenvalue λ is O(a). So take

λ ∼ λ_0 a + λ_1 a² and y(x) ∼ y_0(x) + y_1(x)a.

Expand K(x, s) and y(s) as Taylor series in s centred at s = 0, since 0 < s < a is also small. Then we get

\int_0^a (K(x, 0) + K_d(x, 0)s + ⋯)(y(0) + y'(0)s + ⋯)\,ds = λy(x),

and it follows that

aK(x, 0)y(0) + \frac{a²}{2}\big(K(x, 0)y'(0) + K_d(x, 0)y(0)\big) + ⋯ = λy(x).

Balancing O(a) terms, one obtains

K(x, 0)y_0(0) = λ_0y_0(x). (1.4.4)

Balancing O(a²) terms, one finds

K(x, 0)y_1(0) + \frac{1}{2}\big(K(x, 0)y_0'(0) + K_d(x, 0)y_0(0)\big) = λ_1y_0(x) + λ_0y_1(x). (1.4.5)

In the same fashion, obtain one more asymptotic equation from the given normalisation condition,

\int_0^a (y(0) + y'(0)s + ⋯)²\,ds = a,

which gives

a(y(0))² + \frac{a²}{2} · 2y(0)y'(0) + ⋯ = a.

Balancing O(a) terms, one obtains

(y_0(0))² = 1. (1.4.6)

Balancing O(a²) terms, one finds

2y_0(0)y_1(0) + y_0(0)y_0'(0) = 0. (1.4.7)

From equations (1.4.4) and (1.4.6), one finds

y_0(0) = 1 and λ_0 = K(0, 0),

which implies

y_0(x) = \frac{K(x, 0)}{K(0, 0)}.

Evaluating (1.4.5) at x = 0 then gives

λ_1 = \frac{1}{2}\big(K(0, 0)y_0'(0) + K_d(0, 0)\big) = \frac{1}{2}\big(K_x(0, 0) + K_d(0, 0)\big),

and, using y_1(0) = −\frac{1}{2}y_0'(0) from (1.4.7), equation (1.4.5) reduces to

y_1(x) = \frac{1}{λ_0}\left[\frac{1}{2}K_d(x, 0) − λ_1y_0(x)\right].

(b) By changing variables, transform the integral equation into

\int_0^1 K(aξ, ar)φ(r)\,dr = \frac{λ}{a}φ(ξ), 0 < ξ < 1.

Write down the normalisation condition for φ.

Solution: Substituting x = aξ and s = ar into the Fredholm integral equation yields

\int_0^1 K(aξ, ar)y(ar)\,a\,dr = λy(aξ);

setting φ(r) = y(ar), it follows that

\int_0^1 K(aξ, ar)φ(r)\,dr = \frac{λ}{a}φ(ξ).

In the same fashion, the normalisation condition becomes

\int_0^a y²(s)\,ds = a ⟹ \int_0^1 φ²(r)\,a\,dr = a ⟹ \int_0^1 φ²(r)\,dr = 1.

(c) From part (b), find the two-term expansion of λ and φ(ξ) for small a.

Solution: Take λ ∼ aλ_0 + a²λ_1 and φ ∼ φ_0 + aφ_1. In the same fashion as part (a), expand K inside the integral about (0, 0):

\int_0^1 (K(0, 0) + aξK_x(0, 0) + arK_d(0, 0) + ⋯)φ(r)\,dr
 = K(0, 0)\int_0^1 φ(r)\,dr + aξK_x(0, 0)\int_0^1 φ(r)\,dr + aK_d(0, 0)\int_0^1 rφ(r)\,dr + ⋯.

Balancing the O(1) terms of the equations gives

K(0, 0)\int_0^1 φ_0(r)\,dr = λ_0φ_0(ξ), \int_0^1 (φ_0(r))²\,dr = 1.

This implies that φ_0 is constant, and it yields

φ_0(ξ) = 1 and λ_0 = K(0, 0). (1.4.8)

Similarly, balancing the O(a) terms of the equations, one obtains

K(0, 0)\int_0^1 φ_1(r)\,dr + ξK_x(0, 0)\int_0^1 φ_0(r)\,dr + K_d(0, 0)\int_0^1 rφ_0(r)\,dr = λ_0φ_1(ξ) + λ_1φ_0(ξ),
\int_0^1 φ_0(r)φ_1(r)\,dr = 0.

It follows that \int_0^1 φ_1(r)\,dr = 0, and hence

ξK_x(0, 0) + \frac{1}{2}K_d(0, 0) = λ_0φ_1(ξ) + λ_1.

Integrating both sides with respect to ξ over [0, 1] gives

λ_1 = \frac{1}{2}K_x(0, 0) + \frac{1}{2}K_d(0, 0). (1.4.9)

This eigenvalue then yields

φ_1(ξ) = \frac{K_x(0, 0)}{λ_0}\left(ξ − \frac{1}{2}\right). (1.4.10)

(d) Explain why the expansions in parts (a) and (c) are the same for λ but not for the eigenfunction.

Solution: The eigenvalue is coordinate-invariant, so it is not affected by the change of variables; the eigenfunctions, however, are.
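A Nyström-type discretization (not in the notes; a minimal sketch with the example kernel K(x, s) = 1 + x + s) confirms the two-term eigenvalue expansion λ ∼ aK(0,0) + a²(K_x(0,0) + K_d(0,0))/2:

import numpy as np

K = lambda x, s: 1.0 + x + s            # example kernel: K(0,0)=1, Kx=Kd=1
for a in [0.1, 0.05, 0.01]:
    n = 400
    s = (np.arange(n) + 0.5) * a / n    # midpoint quadrature nodes on (0, a)
    w = a / n
    M = K(s[:, None], s[None, :]) * w   # Nystrom matrix for the integral operator
    lam = np.max(np.linalg.eigvals(M).real)   # principal (positive) eigenvalue
    approx = a * 1.0 + a**2 * (1.0 + 1.0) / 2.0
    print(a, lam, lam - approx)
# The discrepancy shrinks like O(a^3), consistent with the two-term expansion.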

7. In quantum mechanics, the perturbation theory for bound states involves the time-independent Schrödinger equation

ψ'' − [V_0(x) + εV_1(x)]ψ = −Eψ, −∞ < x < ∞,

where ψ(−∞) = ψ(∞) = 0. In this problem, the eigenvalue E represents energy and V_1 is a perturbing potential. Assume that the unperturbed (ε = 0) eigenvalue is nonzero and nondegenerate.

(a) Assuming

ψ(x) ∼ ψ_0(x) + εψ_1(x) + ε²ψ_2(x), E ∼ E_0 + εE_1 + ε²E_2,

write down the equation for ψ_0(x) and E_0. We will assume in the following that

\int_{−∞}^{∞} ψ_0²(x)\,dx = 1, \int_{−∞}^{∞} |V_1(x)|\,dx < ∞.

Solution: Substituting the expansions of ψ and E into the Schrödinger equation and balancing the O(1) terms gives

ψ_0''(x) − V_0(x)ψ_0(x) = −E_0ψ_0(x).

(b) Substituting ψ(x) = e^{φ(x)} into the Schrödinger equation, derive the equation for φ(x).

Solution: Differentiating ψ twice yields

ψ'(x) = φ'(x)e^{φ(x)} and ψ''(x) = (φ''(x) + (φ'(x))²)e^{φ(x)}.

Plugging these into the Schrödinger equation and dropping the common factor e^{φ(x)}, it follows that

φ''(x) + (φ'(x))² − (V_0(x) + εV_1(x)) = −E.

(c) By expanding φ(x) for small ε, determine E_1 and E_2 in terms of ψ_0 and V_1.

Solution: Assume that φ(x) ∼ φ_0(x) + εφ_1(x) + ε²φ_2(x). Substituting this into the new Schrödinger equation and balancing the O(ε) terms, one obtains

φ_1'' + 2φ_0'φ_1' = V_1 − E_1.

Define the differential operator L = d²/dx² + 2φ_0' · d/dx. Notice that, for sufficiently smooth f,

⟨ψ_0², Lf⟩ = \int_{−∞}^{∞} e^{2φ_0(x)}(f''(x) + 2φ_0'(x)f'(x))\,dx.

Performing integration by parts on the second term,

⟨ψ_0², Lf⟩ = \int_{−∞}^{∞} e^{2φ_0(x)}f''(x)\,dx + \left[e^{2φ_0(x)}f'(x)\right]_{−∞}^{∞} − \int_{−∞}^{∞} e^{2φ_0(x)}f''(x)\,dx,

and it follows that ⟨ψ_0², Lf⟩ = 0, since ψ_0² = e^{2φ_0} vanishes at ±∞. From this observation, take the inner product of the O(ε) balance with ψ_0²:

⟨ψ_0², Lφ_1⟩ = 0 = ⟨ψ_0², V_1⟩ − E_1⟨ψ_0², 1⟩ = ⟨ψ_0², V_1⟩ − E_1.

Therefore,

E_1 = ⟨ψ_0², V_1⟩ = \int_{−∞}^{∞} V_1(x)[ψ_0(x)]²\,dx. (1.4.11)

To find φ_1, solve the first-order inhomogeneous ODE for φ_1' by an integrating factor*, or observe that

\int_{−∞}^{x} ψ_0² Lφ_1\,dy = \int_{−∞}^{x} \frac{d}{dy}\left(ψ_0²\frac{dφ_1}{dy}\right)dy = ψ_0²(x)φ_1'(x) = \int_{−∞}^{x} ψ_0²(V_1 − E_1)\,dy,

which yields

φ_1'(x) = \frac{1}{ψ_0²(x)}\int_{−∞}^{x} ψ_0²(V_1 − E_1)\,dy.

Similarly, the O(ε²) balance is

φ_2'' + 2φ_0'φ_2' + (φ_1')² = −E_2 ⟹ Lφ_2 = −(φ_1')² − E_2.

Therefore, E_2 = −⟨ψ_0², (φ_1')²⟩.

* In a hierarchical system one can alternatively use a Green's function, an ansatz, or various other ODE methods.
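A numerical illustration of (1.4.11) (not from the notes; a sketch assuming the harmonic-oscillator ground state V_0(x) = x², ψ_0 = π^{−1/4}e^{−x²/2}, E_0 = 1, and the perturbation V_1(x) = x⁴, for which E_1 = ⟨ψ_0², x⁴⟩ = 3/4):

import numpy as np

# Finite-difference Hamiltonian H = -d^2/dx^2 + V0 + eps*V1 on a large box.
N, L = 1200, 10.0
x = np.linspace(-L, L, N)
h = x[1] - x[0]
T = (np.diag(2.0 * np.ones(N)) - np.diag(np.ones(N - 1), 1)
     - np.diag(np.ones(N - 1), -1)) / h**2

def ground_energy(eps):
    H = T + np.diag(x**2 + eps * x**4)
    return np.linalg.eigvalsh(H)[0]

eps = 1e-3
E1_numeric = (ground_energy(eps) - ground_energy(0.0)) / eps
print(E1_numeric)   # ~ 0.75 = <psi0^2, x^4> for the Gaussian ground state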
Chapter 2

Matched Asymptotic Expansions

For most singular perturbation problems for differential equations, the solution exhibits abrupt changes, because the singular problem converges to a differential equation of different order or type as ε → 0. If we apply a regular asymptotic expansion, it fails to capture such drastic changes and cannot satisfy all of the boundary conditions. To resolve this, we introduce the method of matched asymptotic expansions, which approximates the exact solution by zooming in on the regions of rapid change, such as interior or boundary layers, and combining these with the regular expansion in the outer region.

2.1 Introductory example

Consider the singular problem

εy'' + 2y' + 2y = 0, 0 < x < 1,
y(0) = 0, y(1) = 1. (2.1.1)

If ε = 0, we have a first-order ODE, which needs only one boundary condition. This produces drastic dynamics in a boundary layer. Note that a boundary layer can also occur in the interior, not only near the boundary of the domain.

2.1.1 Outer solution by regular perturbation

Set y(x) ∼ y_0(x) + εy_1(x) + ⋯. Substituting into equation (2.1.1), we have

ε(y_0''(x) + εy_1''(x) + ⋯) + 2(y_0'(x) + εy_1'(x) + ⋯) + 2(y_0(x) + εy_1(x) + ⋯) = 0.

Balancing O(1) terms gives

y_0' + y_0 = 0 ⟹ y_0(x) = ae^{−x}.

This leads to a dilemma: the solution has only one arbitrary constant but we have two boundary conditions; the problem is over-determined. Moreover, the outer solution cannot describe the solution over the whole domain [0, 1]. The next question is: at which boundary is the layer?

Figure 2.1: Two choices of boundary layer, (a) at x = 1 and (b) at x = 0. The correct choice can be identified by investigating the sign of y'' (the concavity of the solution) near the proposed layer.

2.1.2 Boundary layer

Assume that the boundary layer is at x = 0. Introduce the stretched coordinate x̃ = x/ε^α, α > 0, and treat x̃ as fixed as ε is reduced. Setting Y(x̃) = y(x) yields

ε^{1−2α}\frac{d²Y}{dx̃²} + 2ε^{−α}\frac{dY}{dx̃} + 2Y = 0, Y(0) = 0. (2.1.2)

Try a solution of the form

Y(x̃) ∼ Y_0(x̃) + ε^γ Y_1(x̃) + ⋯, γ > 0.

Substitution into the inner equation gives

ε^{1−2α}\frac{d²}{dx̃²}(Y_0 + ε^γY_1 + ⋯) + 2ε^{−α}\frac{d}{dx̃}(Y_0 + ε^γY_1 + ⋯) + 2(Y_0 + ε^γY_1 + ⋯) = 0,

with the three groups of terms labelled (1), (2) and (3). One needs to determine the correct balance:

• Balance (1) and (3), taking (2) to be of higher order. This requires α = 1/2; then (1), (3) = O(1), but (2) = O(ε^{−1/2}), which is lower order (larger). (×)

• Balancing (2) and (3) just reproduces the outer equation. (×)

• Balance (1) and (2), taking (3) to be of higher order. This requires α = 1; then (1), (2) = O(ε^{−1}) and (3) = O(1). (Yay!)

Choosing the last balance, one obtains from the O(ε^{−1}) terms the equation

Y_0'' + 2Y_0' = 0, 0 < x̃ < ∞,

with Y_0(0) = 0, so the inner solution is Y_0(x̃) = A(1 − e^{−2x̃}), where A is an unknown constant.

2.1.3 Matching

It remains to determine the constant A. The inner and outer solutions are both approximations of the same function, hence they should agree in the transition zone between the inner and outer regions. Applying the boundary condition at x = 1 (where there is no layer) to the outer solution gives a = e, i.e. y_0(x) = e^{1−x}. The matching condition

\lim_{x̃→∞} Y_0(x̃) = \lim_{x→0⁺} y_0(x) (2.1.3)

then yields A = e. Therefore, Y_0 = e(1 − e^{−2x/ε}) = e − e^{1−2x/ε}.

2.1.4 Composite expression

So far we have the solution in two pieces, neither of which is uniformly valid on x ∈ [0, 1]. We would like to construct a composite solution that holds everywhere. One way is to add the two pieces and subtract the constant they share in the matching region:

y(x) ∼ y_0(x) + Y_0(x/ε) − y_0(0). (2.1.4)

Near x = 0, y_0(x) is cancelled by the constant, and vice versa away from the layer.
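A direct comparison with a numerical solution of (2.1.1) illustrates how well the leading-order composite e^{1−x} − e^{1−2x/ε} does; this sketch (not from the notes) uses scipy's BVP solver:

import numpy as np
from scipy.integrate import solve_bvp

eps = 0.02
# eps*y'' + 2*y' + 2*y = 0, y(0) = 0, y(1) = 1, as a first-order system.
fun = lambda x, Y: np.vstack([Y[1], -(2.0 * Y[1] + 2.0 * Y[0]) / eps])
bc = lambda Ya, Yb: np.array([Ya[0], Yb[0] - 1.0])

x = np.linspace(0.0, 1.0, 2001)
Y0 = np.vstack([x, np.ones_like(x)])          # crude initial guess
sol = solve_bvp(fun, bc, x, Y0, max_nodes=100000, tol=1e-8)

composite = np.exp(1 - x) - np.exp(1 - 2 * x / eps)
print(sol.status, np.max(np.abs(sol.sol(x)[0] - composite)))   # error is O(eps)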
The matching condition Y_0(+∞) = y_0(0⁺) may not work in general. First, the limits might not exist. Second, complications may arise when constructing second-order terms. A more general approach is to explicitly introduce an intermediate region between the inner and outer domains. Introduce an intermediate variable x_η = x/η(ε) with ε ≪ η ≪ 1. The inner and outer solutions should give the same result when expressed in terms of x_η. Then:

1. Change from x to x_η in the outer expansion to obtain y_outer(x_η). Assume there is an η_1(ε) such that y_outer is valid for η_1(ε) ≪ η(ε) ≤ 1.

2. Change from x̃ to x_η in the inner expansion to obtain y_inner(x_η). Assume there is an η_2(ε) such that the inner expansion is valid for ε ≪ η(ε) ≪ η_2(ε).

3. If η_1 ≪ η_2, then the domains of validity overlap (the inner expansion is valid for x ≲ η_2 and the outer expansion for x ≳ η_1), and we require y_outer ∼ y_inner in the overlap region.

Return to our particular example. Let x_η = x/ε^β with 0 < β < 1. Then

y_inner ∼ A(1 − e^{−2x_η/ε^{1−β}}) ∼ A up to exponentially small terms,

and

y_outer ∼ e^{1−x_ηε^β} ∼ e + O(ε^β).

These are hard to match at this order, so we consider the higher-order terms. The O(ε) terms of the outer expansion give the balance equation

y_1' + y_1 = −\frac{1}{2}y_0'', y_1(1) = 0 ⟹ y_1(x) = \frac{1}{2}(1 − x)e^{1−x},

and the O(1) terms of the inner expansion give

Y_1'' + 2Y_1' = −2Y_0, Y_1(0) = 0 ⟹ Y_1(x̃) = B(1 − e^{−2x̃}) − x̃e(1 + e^{−2x̃}).

Determine B by matching in the intermediate zone:

y_outer ∼ e^{1−x_ηε^β} + \frac{ε}{2}(1 − x_ηε^β)e^{1−x_ηε^β}
       ∼ e − e·x_ηε^β + \frac{e}{2}x_η²ε^{2β} + \frac{ε}{2}e + ⋯,

y_inner ∼ e(1 − e^{ξ}) + ε\left[B(1 − e^{ξ}) − x_ηε^{β−1}e(1 + e^{ξ})\right], ξ = −2x_η/ε^{1−β},
       ∼ e − ε^βx_η·e + ε·B + ⋯ (up to exponentially small terms),

and matching the O(ε) constants yields B = e/2. Therefore, the composite solution is

y(x) ∼ y_0(x) + εy_1(x) + Y_0(x/ε) + εY_1(x/ε) − \left(e − ex + \frac{εe}{2}\right), (2.1.5)

where we used x_ηε^β = x in the subtracted overlap part.
Remark 2.1.1. Things to look out for in more general problems on [0, 1]:

1. The boundary layer could be at x = 1, or there could be boundary layers at both ends. At x = 1, the stretched coordinate is x̃ = (x − 1)/ε^α.

2. There could be an interior layer at some x_0(ε), with

x̃ = \frac{x − x_0}{ε}.

3. The ε-dependence could be funky, e.g. ν = −1/\log ε.

4. The solution does not have a layered structure at all.

2.2 Extensions: multiple boundary layers, etc.

2.2.1 Multiple boundary layers

Consider the boundary value problem

ε²y'' + εxy' − y = −e^x, with y(0) = 2 and y(1) = 1, (2.2.1)

which is singular (the small parameter multiplies the highest derivative). Note that in the case ε = 0 we get y = e^x, which does not match either boundary condition. This solution is the first term in the outer expansion, i.e. y_0(x) = e^x.

Start by finding the inner solution at x = 0. Set x̃ = x/ε^α and Y(x̃) = y(x). Then we have

ε^{2−2α}\frac{d²Y}{dx̃²} + εx̃\frac{dY}{dx̃} − Y = −e^{ε^αx̃} = −(1 + ε^αx̃ + ⋯),

with the four groups of terms labelled (1)–(4). In order to balance (1), (3) and (4), we require α = 1. Then taking Y ∼ Y_0 + ⋯ yields the following O(1) balance equation:

Y_0'' − Y_0 = −1, Y_0(0) = 2.

Its general solution is

Y_0(x̃) = 1 + Ae^{−x̃} + (1 − A)e^{x̃}, 0 < x̃ < ∞.

To determine A, matching Y_0(+∞) = y_0(0) = 1 implies A = 1. At x = 1, setting x̃ = (x − 1)/ε^β and Ȳ(x̃) = y(x) gives

ε^{2−2β}\frac{d²Ȳ}{dx̃²} + (1 + ε^βx̃)ε^{1−β}\frac{dȲ}{dx̃} − Ȳ = −e^{1+ε^βx̃}.

Balance is achieved for β = 1, and we obtain

Ȳ_0'' + Ȳ_0' − Ȳ_0 = −e, −∞ < x̃ < 0, with Ȳ_0(0) = 1.

Its general solution is

Ȳ_0(x̃) = e + Be^{r_+x̃} + (1 − e − B)e^{r_−x̃},

where r_± = (−1 ± \sqrt{5})/2. Matching Ȳ_0(−∞) = y_0(1) provides B = 1 − e. Therefore, the composite solution is

y ∼ y_0(x) + Y_0(x/ε) − Y_0(+∞) + Ȳ_0((x − 1)/ε) − Ȳ_0(−∞)
  ∼ e^x + e^{−x/ε} + (1 − e)e^{−r_+(1−x)/ε}.

2.2.2 Interior layers

It is also possible for a layer to occur in the interior of the domain rather than at a physical boundary – the matching then also has to determine the location of the interior layer. Consider the boundary value problem

εy'' = y(y' − 1), 0 < x < 1, (2.2.2)

with y(0) = 1 and y(1) = −1. For the outer equation, setting y ∼ y_0 + ⋯ yields

y_0(y_0' − 1) = 0 ⟹ y_0 = 0 or y_0(x) = x + a

for some constant a. Since the outer solution cannot satisfy both boundary conditions at once, we need a layer somewhere.

Suppose the boundary layer is at x = 0. In the layer, y'' > 0 and y' − 1 < 0; since y is positive near x = 0, the two sides of (2.2.2) have opposite signs, so the signs cannot match everywhere. If the boundary layer is at x = 1, then y'' < 0 and y' − 1 < 0; since y is negative there, the signs again cannot match everywhere in the layer. What if there is an interior layer at x = x_0? For x − x_0 = 0⁻, we have y > 0 and y' − 1 < 0, so y'' < 0; for x − x_0 = 0⁺, we have y < 0 and y' − 1 < 0, so y'' > 0. Thus an interior layer can match the signs.

Following the interior-layer argument, find the inner solution by setting x̃ = (x − x_0)/ε^α, 0 < x_0 < 1. We then have two outer regions, 0 ≤ x < x_0 and x_0 < x ≤ 1. The inner equation is

ε^{1−2α}Y'' = ε^{−α}YY' − Y,

and one can balance if α = 1. Setting Y(x̃) ∼ Y_0(x̃) + ⋯ gives

Y_0'' = Y_0Y_0' ⟹ Y_0' = \frac{1}{2}Y_0² + A.

It has three families of solutions depending on the sign of A:

1. Y_0 = B\left[\frac{1 − De^{Bx̃}}{1 + De^{Bx̃}}\right] if A < 0 (with A = −B²/2),

2. Y_0 = B\tan\left(C + \frac{Bx̃}{2}\right) if A > 0 (with A = B²/2),

3. Y_0 = \frac{2}{C − x̃} if A = 0.

The three forms (rather than a single general solution) reflect the non-linearity. Next, match the inner solution with the outer solutions

y_0(x) = x + 1 for x < x_0, y_0(x) = x − 2 for x_0 < x.

Only inner solution 1 can match these outer solutions. Without loss of generality, assume B > 0; then

−B = Y_0(+∞) = y_0(x_0⁺) = x_0 − 2 and B = Y_0(−∞) = y_0(x_0⁻) = x_0 + 1.

This yields x_0 = 1/2 and B = 3/2. What about D? Recall that y(x_0) = 0 (the layer is centred where y vanishes). This implies that

Y_0(0) = 0 = \frac{3}{2} · \frac{1 − D}{1 + D} ⟹ D = 1.

Therefore,

Y_0(x̃) = \frac{3}{2} · \frac{1 − e^{3x̃/2}}{1 + e^{3x̃/2}}.

Finally, the composite solution can be constructed in the two subdomains [0, x_0) and (x_0, 1]:

y(x) ∼ x + 1 + \frac{3}{2} · \frac{1 − e^{3(2x−1)/(4ε)}}{1 + e^{3(2x−1)/(4ε)}} − \frac{3}{2}, 0 ≤ x < x_0,

y(x) ∼ x − 2 + \frac{3}{2} · \frac{1 − e^{3(2x−1)/(4ε)}}{1 + e^{3(2x−1)/(4ε)}} + \frac{3}{2}, x_0 < x ≤ 1.

2.3 Partial differential equations

Consider Burgers' equation

u_t + u·u_x = εu_{xx}, −∞ < x < ∞, t > 0, (2.3.1)
u(x, 0) = φ(x). (2.3.2)

Notice that this perturbation problem is singular because the type of the equation changes from parabolic to hyperbolic as ε > 0 is set to ε = 0. Assume that φ(x) is smooth and bounded except for a jump discontinuity at x = 0 with φ(0⁻) > φ(0⁺), and φ' ≥ 0. For concreteness, set

u(x, 0) = 1 for x < 0, u(x, 0) = 0 for x > 0.

This is an example of a Riemann problem – it evolves into a travelling front that sharpens as ε → 0. We can handle it in a similar way to a boundary-layer problem. For the outer solution, expanding u(x, t) ∼ u_0(x, t) + ⋯ gives the O(1) balance equation

∂_tu_0 + u_0·∂_xu_0 = 0.

Solve it using the method of characteristics:

\frac{dt}{dτ} = 1, \frac{dx}{dτ} = u_0, \frac{du_0}{dτ} = 0,

which yields straight characteristic lines

x = x_0 + φ(x_0)t.

Characteristics intersect at the shock x = s(t), whose motion is determined by the Rankine–Hugoniot condition

ṡ = \frac{1}{2} · \frac{[φ(x_0^+)]² − [φ(x_0^-)]²}{φ(x_0^+) − φ(x_0^-)} = \frac{1}{2}[φ(x_0^+) + φ(x_0^-)]. (2.3.3)

We will derive this equation for s(t) using matched asymptotics. Introduce a moving inner layer around s(t):

x̃ = \frac{x − s(t)}{ε^α}.

The inner PDE for U(x̃, t) = u(x, t) is

∂_tU − ε^{−α}s'(t)∂_x̃U + ε^{−α}U·∂_x̃U = ε^{1−2α}∂_x̃²U.

In order to balance terms, require α = 1; with U ∼ U_0 + ⋯,

−s'(t)∂_x̃U_0 + U_0·∂_x̃U_0 = ∂_x̃²U_0.

Integrating with respect to x̃ gives

∂_x̃U_0 = \frac{1}{2}U_0² − s'(t)U_0 + A(t).

The matching conditions are

\lim_{x̃→−∞} U_0 = u_0^- and \lim_{x̃→+∞} U_0 = u_0^+,

where u_0^± = \lim_{x→s(t)^±} u_0(x, t). Since U_0(x̃, t) tends to a constant as x̃ → ±∞, we have ∂_x̃U_0 → 0 as x̃ → ±∞. Then

0 = \frac{1}{2}[u_0^-]² − s'(t)u_0^- + A(t),
0 = \frac{1}{2}[u_0^+]² − s'(t)u_0^+ + A(t).

Subtracting the pair of equations yields

s'(t) = \frac{1}{2} · \frac{[u_0^+]² − [u_0^-]²}{u_0^+ − u_0^-} = \frac{1}{2}[u_0^+ + u_0^-],

which is precisely the Rankine–Hugoniot condition (2.3.3).

Hence A(t) = \frac{1}{2}u_0^+u_0^-. We now note that the inner equation can be rewritten as

∂_x̃U_0 = \frac{1}{2}(U_0 − u_0^+)(U_0 − u_0^-),

with u_0^± = u_0^±(t). Then one obtains

\int dU_0\left(\frac{1}{U_0 − u_0^+} − \frac{1}{U_0 − u_0^-}\right) = \frac{1}{2}(u_0^+ − u_0^-)\int dx̃

⟹ \log\left|\frac{U_0 − u_0^+}{U_0 − u_0^-}\right| = \frac{1}{2}(u_0^+ − u_0^-)x̃ + C(t)

⟹ \frac{U_0 − u_0^+}{u_0^- − U_0} = b(x̃, t) = B(t)e^{(u_0^+ − u_0^-)x̃/2},

where B(t) = e^{C(t)}. Therefore,

U_0(x̃, t) = \frac{u_0^+ + b(x̃, t)u_0^-}{1 + b(x̃, t)}.

In order to determine B(t), one has to go to the next order [see Holmes for more details]. One finds, in the end,

B(t) = \sqrt{\frac{1 + tφ'(x_0^+)}{1 + tφ'(x_0^-)}}.

2.4 Strongly localized perturbation theory

This line of work is mainly due to Michael J. Ward; see [SWF07; BEW08; CSW09; Pil+10; Kur+15] for more details. Consider a diffusion equation in a domain with small holes. Before applying perturbation theory to the problem, recall the Green's function in two- and three-dimensional space. The Green's function is the solution driven by a single point source; in the case of the Laplace operator,

∆u = δ(x − x_0) in ℝⁿ, n = 2, 3.

Then u ∼ −1/(4π|x − x_0|) as x → x_0 in 3D, and u ∼ \log|x − x_0|/(2π) as x → x_0 in 2D. In the 3D case, the Laplace operator in spherical coordinates with angular symmetry is

∆u = u_{rr} + \frac{2}{r}u_r for |x − x_0| > 0,

and its solution is u(r) = B/r for some constant B. Integrating over Ω_ε, the ball centred at x_0 with radius ε, we get

\int_{Ω_ε} ∆u\,dx = \int_{∂Ω_ε} ∇u·n\,ds = 4πr²u_r\big|_{r=ε} = −4πB = \int_{Ω_ε} δ(x − x_0)\,dx = 1.

It follows that u(r) = −1/(4πr), which is the free-space Green's function in 3D.

2.4.1 Eigenvalue asymptotics in 3D

Let Ω be a 3D bounded domain with a hole of “radius” O(ε), denoted by Ω_ε, removed from Ω. Consider the eigenvalue problem in Ω∖Ω_ε:

∆u + λu = 0 in Ω∖Ω_ε,
u = 0 on ∂Ω,
u = 0 on ∂Ω_ε,
\int_{Ω∖Ω_ε} u²\,dx = 1. (2.4.1)

We assume that Ω_ε shrinks to a point x_0 as ε → 0. For example, we could take Ω_ε to be the sphere |x − x_0| ≤ ε. The unperturbed problem is

∆φ + µφ = 0 in Ω,
φ = 0 on ∂Ω,
\int_Ω φ²\,dx = 1. (2.4.2)

Assume this has eigenpairs (φ_j(x), µ_j) for j = 0, 1, ⋯ with \int_Ω φ_jφ_k\,dx = 0 if j ≠ k, and φ_0(x) > 0 for x ∈ Ω. We look for the perturbed eigenpair near (φ_0(x), µ_0). Expand λ ∼ µ_0 + ν(ε)λ_1 + ⋯, where ν(ε) → 0 as ε → 0. In the outer region away from the hole, we take u ∼ φ_0(x) + ν(ε)u_1(x) + ⋯. Since Ω_ε → {x_0} as ε → 0, we have

∆u_1 + µ_0u_1 = −λ_1φ_0 in Ω∖{x_0},
u_1 = 0 on ∂Ω,
\int_Ω u_1φ_0\,dx = 0. (2.4.3)
Construct the inner solution near the hole. Let y = (x − x_0)/ε and set V(y; ε) = u(x_0 + εy). Then V satisfies

∆_yV + λε²V = 0 outside Ω_0 = Ω_ε/ε.

Take V ∼ V_0 + ν(ε)V_1 + ⋯ and get

∆_yV_0 = 0 outside Ω_0,
V_0 = 0 on ∂Ω_0,
V_0 → φ_0(x_0) as |y| → ∞.

Try a solution of the form V_0 = φ_0(x_0)(1 − V_c(y)). Then V_c satisfies

∆_yV_c = 0 outside Ω_0,
V_c = 1 on ∂Ω_0,
V_c → 0 as |y| → ∞.

A classical result from PDE theory is that V_c ∼ C/|y| as |y| → ∞, where C is the electrostatic capacitance of Ω_0, determined by the shape and size of Ω_0. In the outer variable we now have

V_0(x) ∼ φ_0(x_0)\left(1 − \frac{εC}{|x − x_0|}\right).

This has to match

φ_0(x_0) + ν(ε)u_1

as x → x_0. This yields ν(ε) = ε and u_1(x) → −φ_0(x_0)C/|x − x_0| as x → x_0. To evaluate the perturbed eigenvalue λ_1, return to equation (2.4.3). Since u_1 → 4πCφ_0(x_0)·(−1/(4π|x − x_0|)) as x → x_0, we have the modified problem

Lu_1 ≡ ∆u_1 + µ_0u_1 = −λ_1φ_0 + 4πCφ_0(x_0)δ(x − x_0) in Ω,
u_1 = 0 on ∂Ω. (2.4.4)
40 2.4. Strongly localized perturbation theory

Use Green’s identity


Z Z
φ0 Lu1 − u1 Lφ0 dx = φ0 ∂n u1 − u1 ∂n φ0 ds.
Ω ∂Ω

Since φ0 = u1 = 0 on ∂Ω and Lφ0 = 0, we have


Z Z
0= φ0 Lu1 dx = φ0 [−λ1 φ0 + 4πCφ0 (x0 )δ(x − x0 )]dx,
Ω Ω

and it yields
4πCφ20 (x0 )
λ1 = R 2 .
φ dx
Ω 0
Therefore, λ ∼ µ0 + λ1 .
Remark 2.4.1. 1. Let us assume that u = 0 on ∂Ω is replaced by the no-flux condition on
∂Ω. Then  = 0 problem becomes

4φ + µφ = 0,

∂n φ = 0, ∂Ω .

R 2

φ dx = 1

The principal eigenvalues µ0 = 0 and φ0 (x) = 1/|ω|1/2 . In this case, λ1 ∼ 4πC/|Ω| (to
leading order it is independent of location x0 .)
holes Ωj for j R= 1, · · · n and well-separated, its eigenvalue expansion is
2. For multiple P
λ ∼ µ0 + 4π j cj [φ0 (xj )]2 / Ω φ20 dx.

2.4.2 Eigenvalue asymptotics in 2D


In the same fashion with 3D case, we want to find an asymptotic expansion of the same
problem (2.4.1), but 2D. Let µ0 and φ0 be principal eigenpair of unperturbed problem (2.4.2).
Set λ ∼ µ0 + ν()λ1 + · · · for eigenvalue and u ∼ φ0 + ν()u1 + · · · in outer region, where
ν() → 0 as  → 0. Then the equation for second term of outer expansion is (2.4.3). In
the inner region, set y = (x − x0 )/ and take u(x) = ν()V0 (y) where 4y V0 = 0. We want
V0 (y) ∼ A0 log |y| as |y| → ∞. To do so, setting V0 = A0 Vc where
(
4y Vc = 0, y ∈ Ω0
(2.4.5)
Vc = 0, y on ∂Ω0

gives that
Vc ∼ log |y| − log d + O(1/|y|), as |y| → ∞
where d is logarithmic capacitance determined by shape of Ω0 . It is interesting enough to notice
the logarithmic capacitance of simple objects in the table. Then write inner solution in outer
variable
|y|
u(x) ∼ ν()A0 log ∼ ν()A0 [− log(d) + log |x − x0 |] .
d
Matching solution yields that

φ0 (x0 ) + ν()u1 (x) ∼ − log(d)A0 ν() + A0 ν() log |x − x0 |,


Matched Asymptotic Expansions 41

Ω0 Geometric info Capacitance d


Circle radius a a
Ellipse radius a, b (a
√ + b)/2 3
Triangle side h 3[Γ(1/3)] h/(8π 2 )
Rectangle side h [Γ(1/4)]2 h/(4π 3/2 )

Table 2.1: The logarithmic capacitance in 2D for simple geometric figures.

as x → x0 . In order to match the conditions, set ν() = −1/ log(d). Then unknown constant
A0 = φ0 (x0 ). Thus,
u1 (x) ∼ φ0 (x0 ) log |x − x0 |,
as x → x0 . Hence, by the same procedure in 3D case by using Green’s identity, one can find
eigenvalue expansion
[φ0 (x0 )]2
λ ∼ µ0 + 2π · ν() R 2 .
φ dx
Ω 0

Remark 2.4.2. Further terms in expansion yields

λ ∼ µ0 + A1 ν + A2 ν 2 + A3 ν 3 + · · · .

Its potential problem is that the log decreases very slowly as  decreases. Then the remaining
term is quite large and break the asymptotic expansions. By summing the log series, one can
solve the problem.

2.4.3 Summing all logarithmic terms


Consider Poisson’s equation in a domain with one small hole given

4w = −B, in Ω\Ω

w = 0, on ∂Ω . (2.4.6)

w = 0, on ∂Ω

In the outer region, set w(x; ) = w0 (x; ν()) + σ()w1 (x; ν()) + · · · where ν() = −1/ log(d)
and σ  ν k for any k > 0. It gives the outer equation

4w0 = −B,
 in Ω\{x0 }
w = 0, on ∂Ω . (2.4.7)

w is singular, as x → x0

In the inner region, set y = (x−x0 )/ and V (y; ) = w(x0 +y; ). Expand V (y; ) = V0 (y; ν())+
µ0 ()V1 (y; ν()) + · · · where µ0  ν k for all k > 0. Then V0 satisfies
(
4y V0 = 0, outside Ω0
. (2.4.8)
V0 = 0, on ∂Ω0
42 2.5. Exercises

The leading order matching condition is

lim w0 (x; ν) ∼ lim V0 (y; ν).


x→x0 |y|→∞

Introduce an unknown function γ = γ(ν) with γ(0) = 1 and let V0 (y; ν) = νγVc (y). Then it
follows that 
4y Vc = 0,
 outside Ω0
Vc = 0, on ∂Ω0 .

Vc ∼ log |y|, as |y| → ∞

Thus, Vc (y) ∼ log |y| − log d + O(1/|y|) for 1  |y|. In original coordinate,

V0 (y; ν) ∼ γ + νγ log |x − x0 |.

Matching condition gives w0 ∼ νγ log |x − x0 | + γ as x → x0 . So outer problem is



4w0 = −B,
 in Ω\{x0 }
w = 0, on ∂Ω . (2.4.9)

w ∼ γ + νγ log |x − x0 | as x → x0

Introduce wOH (x) and G(x; x0 with


( (
4wOH = −B, in Ω 4G = δ(x − x0 ), in Ω
, .
wOH = 0, on ∂Ω G = 0, on ∂Ω
1
One can find G(x; x0 ) = 2π log |x−x0 |+R0 (x; x0 ) where R0 is the regular prt of Green’s function
which converges as x → x0 . Then we can write down the solution

w0 (x; ν) = wOH (x) + 2πγνG(x; x0 ).

As x → x0 , we obtain the asymptotic condition


 
1
wOH (x0 ) + 2πγν log |x − x0 | + R(x; x0 ) = γ + γν log |x − x0 |,

and it yields
wOH (x0 )
γ(ν) = .
1 − 2πνR0 (x0 ; x0 )
Therefore, the final expansion of the Poisson equation is
ν()
w(x) ∼ wOH (x) + · 2πwOH (x0 )G(x; x0 ).
1 − 2πR0 (x0 ; x0 )ν()

2.5 Exercises
1. Find a composite expansion of the solution to the following problems on x ∈ [0, 1] with
a boundary layer at the end x = 0:

(a) y 00 + 2y 0 + y 3 = 0, y(0) = 0, y(1) = 1/2.


Matched Asymptotic Expansions 43

Solution: To find outer expansion y, set y ∼ y0 + · · · and balance O(1) terms

0 + 2y00 + y03 = 0.

It yields a general solution


y0−2 = x + D
for some constant D. Since the boundary layer is at x = 0, boundary condition
at x = 1 determines integrating constant D and yields
1
y0 (x) = √ .
x+3

Now, construct inner expansion near x = 0. Setting x̃ = x/α and Y (x̃) = y(x)
yields ODE for inner solution

1−2α Y 00 + 2−α Y 0 + Y 3 = 0.

In order to balance terms, require α = 1 and Y ∼ Y0 + · · · . Then we achieve

Y000 + 2Y00 = 0.

A general solution of Y0 is

Y0 (x̃) = C(1 − e−2x̃ )

with boundary condition Y0 (0) = 0. Matching condition

lim Y0 (x̃) = lim+ y0 (x)


x̃→∞ x→0

yields C = 1/ 3. Therefore, the composite expansion of the solution is
x 1 1 1
y(x) ∼ y0 (x) + Y −√ =√ − √ e−2x/ .
 3 x+3 3

(b) y 00 + (1 + 2x)y 0 − 2y = 0, y(0) = , y(1) = sin().

Solution: To find outer expansion y, set y ∼ y0 + y1 + · · · and balance O(1)


terms
0 + (1 + 2x)y00 − 2y 0 = 0, y0 (0) = y0 (1) = 0,
and its general solution is y0 (x) = C(2x + 1). Since we know that it has a
boundary layer at x = 0, match boundary condition and get C = 0. Thus
y0 (x) = 0. Balancing O() terms yields that

y000 + (1 + 2x)y10 − 2y10 , y1 (0) = y1 (1) = 1,

and its solution with boundary condition at x = 1 is y1 (x) = (2x + 1)/3. It


44 2.5. Exercises

follows that the outer expansion is



y(x) ∼ (2x + 1) + · · · .
3
Now, consider inner expansion near x = 0. Setting x̃ = x/α and Y (x̃) = y(x)
yields ODE for inner solution

1−2α Y 0 + −α Y 0 + 2x̃Y 0 − 2Y = 0.

To balance the equation, it requires that α = 1 by setting Y ∼ Y0 . Then we


have
Y000 + Y00 = 0, Y0 (0) = 0 =⇒ Y0 (x̃) = D(1 − e−x̃ ).
Matching condition gives

lim Y (x̃) = D = lim y(x) = .
x̃→∞ x→0 3
Therefore, the composite expansion of the solution is

y(x) ∼ y1 (x) + (Y (x/) − /3) = (2x + 1 − e−x/ ).
3
2. Consider the integral equation
Z x
y(x) = −q(x) [y(s) − f (s)]sds, 0 ≤ x ≤ 1,
0

where f (x) is positive and smooth.


(a) Taking q(x) = 1 find a composite expansion of the solution y(x). [Hint: convert to
an ODE.]

Solution: Observe that

y(0) = 0 =⇒ y(0) = 0.

Taking derivative on given integral equation gives

y 0 (x) + xy(x) = xf (x).

One can get the outer expansion by setting y(x) ∼ y0 (x) + · · ·

xy0 (x) = xf (x) =⇒ y0 (x) = f (x), 0 < x ≤ 1.

Since f is positive function limx→0 y0 (x) = f (0) > 0, which does not match
boundary condition. It implies that the expansion has boundary layer at x = 0.
Scale near x = 0 by taking a new coordinate x̃ = x/α and Y (x̃) = y(x). In this
coordinate, the smooth function f (x) can be count as a constant f (x) ∼ f (0).
It follows the ODE for inner expansion

1−α Y 0 + α x̃Y = α x̃f (0).


Matched Asymptotic Expansions 45

To balance the equation, it requires 1 − α = α, i.e. α = 1/2 and its general


solution with boundary condition Y (0) = 0 yields the first term inner expansion
 2

Y (x̃) = f (0) 1 − e−x̃ /2 ,

and it matches with outer solution

lim Y (x̃) = f (0) = lim f (x) = lim y0 (x).


x̃→∞ x→0 x→0

Therefore, the composite expansion of the integral equation is


2 /2
y(x) ∼ f (x) − f (0)e−x .

(b) Generalize to the case that q(x) is a positive smooth function.

Solution: It is also true that y(0) = 0 because f is positive function. Taking


derivative and substituting integral term gives

q0
y 0 =  y − q(y − f )x,
q
and one can rewrite it as
 0  
y y
 + xq = xf =⇒ z 0 + xqz 0 = xf
q q

by setting z = y/q. In the same fashion in part 1., obtain outer expansion by
balancing O(1)

f (x)
z(x) ∼ z0 (x) = =⇒ y(x) ∼ y0 (x) = f (x).
q(x)

Since f, q are positive, then z has boundary layer at x = 0. With the same
argument in part 1., one can get the ODE for inner expansion Z(x̃)

f (0)  2

Z 0 + xq(0)Z = xf (0) =⇒ Z(x̃) = 1 − e−q(0)x̃ /2 ,
q(0)
 
−q(0)x̃2 /2
that is Y (x̃) ∼ f (0) 1 − e . Therefore, its composite expansion is

2 /2
y(x) ∼ f (x) − f (0)e−q(x)x .

(c) Show that solution of part 2. still holds if q(x) is continuous but not differentiable
everywhere on [0, 1].

Solution: The basic idea showing the claim is to derive all the expansions from
46 2.5. Exercises

integral equation. For the outer expansion, setting y ∼ y0 gives


Z x Z x
0 = −q(x) (y0 (s) − f (s))sds =⇒ (y0 (s) − f (s))sds = 0
0 0

because q(x) is positive. Without worrying about differentiability of q, take


derivative on the equation and get same outer expansion y0 (x) = f (x). In
the similar way, to find the inner expansion, set the new coordinate x̃ = x/α
and Y (x̃) = y(x). By approximating continuous function q(x) = q(0) and
f (x) = f (0) in the boundary layer, it follows that
Z α x̃
1−α
 Y = −q(0) (Y (s/α ) − f (0))sds.
0

Now, one can take derivative and get the same differential equation for inner
expansion. Therefore, one can achieve the same composite expansion.

3. (Boundary layer at both ends) Find a composite expansion of the following problem on
[0, 1] and sketch the solution:

y 00 + (x + 1)2 y 0 − y = x − 1, y(0) = 0, y(1) = −1.

Solution: To find outer expansion y, set y ∼ y0 + · · · and balance O(1) terms

0 = y0 + x − 1 =⇒ y0 (x) = 1 − x.

It does not satisfy neither boundary conditions. Hence there are two boundary layer
at x = 0 and x = 1. First, consider boundary layer at x = 0 by setting x̃ = x/α for
α > 0 and U (x̃) = y(x). It follows ODE for U

1−2α U 00 + (1+α x̃2 +  · 2x̃ + 1−α )U 0 = U − 1 + α x̃.

Since α > 0, the smallest order of LHS is O(1 − 2α) and RHS is O(1). To balance
them, require α = 1/2 and setting U ∼ U0 provides

U 00 = U − 1 =⇒ U (x̃) = Aex̃ + Be−x̃ + 1.

By boundary condition at x = 0, y(0) = 0, we achieve A + B + 1 = 0. Matching


condition yields

lim U (x̃) = lim+ y(x) = 1 =⇒ A = 0,


x̃→∞ x→0

and U (x̃) = 1 − e−x̃ . Similarly, to find inner expansion at x = 1, set ξ = (x − 1)/β


and V (ξ) = y(x). It provides ODE for V

1−2β V 00 + (1+β x̃2 +  · 4x̃ + 1−β · 4)V 0 = V β x̃.


Matched Asymptotic Expansions 47

Since β > 0, the smallest order of LHS is O(1 − 2α) and RHS is O(1). To balance
them, require β = 1/2 and setting U ∼ U0 provides

V 00 = V =⇒ V (ξ) = Ceξ + De−ξ .

By boundary condition at x = 1, that is y(1) = −1, we obtain C +D = −1. Matching


outer and inner layer near x = 1 gives that

lim V (ξ) = lim− y(x) = 0 =⇒ D = 0,


ξ→−∞ x→1

and V (ξ) = −eξ . Therefore, the composite expansion of the solution is


h  x  i  x − 1 
y(x) ∼ y0 (x) + U 1/2 − 1 + U −0 ,
 1/2

and it follows that


1/2 1/2
y(x) ∼ 1 − x − e−x/ − ex−1/ .

4. (Matched asymptotics can also be used in the time domain) The Michaelis-Menten reac-
tion scheme for an enzyme catalyzed reaction is
ds
= −s + (µ + s)c,
dt
dc
 = s − (κ + s)c,
dt
where s(0) = 1, c(0) = 0. Here s(t) is the concentration of substrate, c(t) is the concen-
tration of the catalyzed chemical product, and µ, κ are positive constants with µ < κ.
Find the first term in the expansions in the outer layer, the initial layer around t = 0,
and the composite expansion.
Solution: Find the expansions in the outer layer by setting s ∼ s0 + · · · and c ∼
c0 + · · · and balancing O(1) terms

ds0
= −s0 + (µ + s0 )c0 ,
dt
0 = s0 − (κ + s0 )c0 .

It yields that

s0 (t)
s0 (t) − 1 + κ log s0 (t) = (µ − κ)t, c0 (t) = .
s0 (t) + κ

Notice that s0 is implicitly determined. One can observe that c has a layer near t = 0.
Setting t̃ = t/α , S(t̃) = s(t) and C(t̃) = c(t) gives the system of ODE

dS
−α = −S + (µ + S)C,
dt̃
dC
1−α = S − (κ + S)C
dt̃
48 2.5. Exercises

It requires that α = 1 to balance equation for C not same with outer expansion. By
setting S ∼ S0 and C ∼ C0 + · · · , it follows that
dS0
= 0,
dt̃
dC0
= S0 − (κ + S0 )C0 .
dt̃
First equation with initial condition s(0) = 1 gives that S0 (t̃) = 1. Hence we write
ODE for C0 as
dC0 1  
= 1 − (κ + 1)C0 =⇒ C0 (t̃) = 1 − e−(κ+1)t̃ .
dt̃ κ+1
Fortunately, this solution satisfies matching condition

1 s0 (0)
lim C0 (t̃) = = = lim c0 (t).
t̃→∞ κ+1 s0 (0) + κ t→0

Therefore, the composite solution of perturbation equation is


1 −(κ+1)t/
c(t) ∼ c0 (t) − e
κ+1
where c0 is implicitly determined by s0 .

5. (Implicit inner solution) A classical model in gas lubrication theory is the Reynolds
equation
d d
H 3 yy 0 =

 (Hy), 0 < x < 1,
dx dx
where y(0) = y(1) = 1. Here H(x) is a known, smooth, positive function with H(0) 6=
H(1).

(a) Suppose that there is a boundary layer at x = 1. Construct the first terms of the
outer and inner solutions. Note that the leading order term Y0 of the inner solution
is defined implicitly according to (x − 1)/ = F (Y0 ). Calculate the function F .

Solution: Setting y ∼ y0 and balancing O(1) terms yields outer solution equa-
tion
d C
0= (Hy0 ) =⇒ y0 (x) =
dx H(x)
where C is constant. Since we have boundary layer at x = 1, then applying
boundary condition at x = 0 to outer solution gives y0 (x) = H(0)/H(x). In the
inner layer, setting x̃ = (x − 1)/α and Y (x̃) = y(x) provides the ODE for inner
solution
d d
1−2α H 3 Y Y 0 = −α (HY ).

dx̃ dx̃
Since the inner layer near x = 1, then continuous function H can be approxi-
Matched Asymptotic Expansions 49

mated as H(x) ∼ H(1). It follows that

d d
1−2α H 3 (1) (Y0 Y00 ) = −α H(1) Y0 .
dx̃ dx̃
for first expansion term Y0 of Y . To balance the equation, it requires α = 1 and
now get
d 1 d
(Y0 Y00 ) = 2 Y0 .
dx̃ H (1) dx̃
The general solution of ODE is given by

Y x̃
Y0 (x̃) − 1 − C log 1 + = 2

C H (1)

with boundary condition Y0 (0) = 1. Matching condition gives

H(0)
lim Y0 (x̃) = lim y0 (x) = .
x̃→∞ x→1 H(1)

It determines C = −H(0)/H(1). Thus we have



2
H(1)Y0
F (Y0 ) = H (1)(Y0 − 1) + H(0)H(1) log 1 −
= x̃.
H(0)

(b) Use matching to construct the composite solution.

Solution: By the result from part 1., one can write the composite solution as
   
H(0) −1 x−1 H(0)
y(x) ∼ + F − .
H(x)  H(1)

(c) Show that if the boundary layer was assumed to be at x = 0, then the inner and
outer solutions would not match.
Solution: It follows the same procedure in part 2., but achieve different F

2
H(0)Y0
F (Y0 ) = H (0)(Y0 − 1) + H(0)H(1) log 1 −
= x̃.
H(1)

However, as x̃ → ∞, the RHS tends to negative infinity. It does not match the
conditions.

6. (Boundary layer at both ends) In a one-dimensional bounded domain, the potential φ(x)
of an ionized gas satisfies

d2 φ
− + h(φ/) = α, 0 < x < 1,
dx2
with boundary conditions
φ0 (0) = −γ, φ0 (1) = γ.
50 2.5. Exercises

Charge conservation requires Z 1


h(φ(x)/)dx = β.
0

The function h(s) is smooth and strictly increasing with h(0) = 0. The positive constants
α and β are known (and independent of ), and the constant γ is determined from the
conservation equation.

(a) Calculate γ in terms of α and β.

Solution: Integration on given differential equation over [0, 1] gives


Z 1
0 0
−[φ (1) − φ (0)] + h(φ(x)/)dx = α · 1 =⇒ −2γ + β = α,
0

and it yields γ = (β − α)/2.

(b) Find the exact solution for the potential when h(s) = s. Sketch the solution for
γ < 0 and small , and describe the boundary layers that are present.

Solution: With h(s) = s, we have

d2 φ φ
− + = α,
dx2 
and its general solution is
   
x x
φ(x) = A sinh √ + B cosh √ + α,
 

where A, B are constants. Then one can obtain it derivative


    
0 1 x x
φ (x) = √ A cosh √ + B sinh √
  

To determine A and B, imposing boundary conditions to the general solution


we have
A
φ0 (0) = √ = −γ,

and     
0 1 1 1
φ (1) = √ A cosh √ + B sinh √ =γ
  
Solving them for A, B yields

√ √
1 + cosh(1/ )
A = −γ , B =γ · √ .
sinh(1/ )
Matched Asymptotic Expansions 51

Then it follows that


    
0 γ x−1 x
φ (x) = √ sinh √ + sinh √ .
sinh(1/ )  

For x 6= 0, 1, then φ0 (x) decays to zero as  → 0. Since φ0 (0) and φ0 (0) are
nonzero, then it implies that φ has boundary layers at x = 0, 1.

(c) Suppose that h(s) = s2k+1 , where k is a positive integer, and assume β < α. Find
the first term in the inner and outer expansions of the solution.

Solution: With h(s) = s2k+1 , we have

d2 φ
− 2k+1 + φ2k+1 = 2k+1 α, (2.5.1)
dx2
with same boundary conditions. For  = 0, φ has a trivial solution. Thus we
expand φ as
φ ∼ p (φ0 + q φ1 + · · · ),
and its derivatives are

φ0 ∼ p (φ00 + q φ01 + · · · ), φ00 ∼ p (φ000 + q φ001 + · · · ).

First, consider the boundary layer at x = 0. Rescale as x̃ = x/r and set


Φ(x̃) = φ(x). Then we have

d d dx̃ d
→ = −r .
dx dx dx dx̃
It allows the governing equation in the boundary layer at x = 0 to be

− 2k+1−2r Φ00 + Φ2k+1 = 2k+1 α, (2.5.2)

with the boundary condition

−r Φ0 (0) = −γ, (2.5.3)

and it requires r = p and gives Φ0 (0) = −γ. Then (2.5.2) turns out to be

− 2k+1−p (Φ000 + q Φ001 + · · · )+


(2k+1)p (Φ0 + Φ1 + · · · )2k+1 = 2k+1 α.

To construct a boundary layer at x = 0, the only remaining case is to balancing


O(2k+1−p ) and O((2k+1)p ) and it requires p = (2k + 1)/(2k + 2). Then we have
a differential equation for boundary layer at x = 0

− Φ000 + Φ2k+1
0 = 0. (2.5.4)
52 2.5. Exercises

Multiplying Φ00 and perform integration gives

1 0 2 Φ2k+2
− (Φ0 ) + 0 = C,
2 2k + 2
for some constant C. As x̃ → ∞, Φ0 matches with the outer solution φ(x) = 0
for 0 < x < 1. It implies that C = 0. Then we have its general solutions
 −1/k
k
Φ0 (x̃) = √ (±x̃ − D) ,
k+1
for some constant D. Its derivative becomes
 −(k+1)/k  
0 1 k k
Φ0 (x̃) = − √ (±x̃ − D) · ±√ . (2.5.5)
k k+1 k+1
Imposing boundary condition at x = 0 gives
 −(k+1)/k  
1 kD k
− −√ · ±√ = −γ.
k k+1 k+1
Since γ = (β − α)/2 < 0, then we choose the negative sign and determine D
such that
kD √
−√ = (−γ k + 1)−k/(k+1) := λ.
k+1

Setting κ = k/ k + 1 gives

Φ0 (x̃) = (λ − κx̃)−1/k . (2.5.6)

Since the boundary layer at x = 1 satisfies the same differential equation (2.5.4),
then one can derive the lowest order boundary layer Ψ0 with rescaling x̂ =
(1 − x)/r
Ψ0 (x̂) = (λ + κx̂)−1/k . (2.5.7)
Therefore, the match asymptotic expansion of the differential equation is
 −1/k
h κx i−1/k κ(1 − x)
y(x) ∼ λ − r + λ+ , (2.5.8)
 r

where r = (2k + 1)/(2k + 2).

(d) Can one construct a composite solution using the first terms?
Solution: Not exactly :)

7. (Internal boundary layer) Consider the problem


y 00 + y(1 − y)y 0 − xy = 0, 0 < x < 1,
with y(0) = 2 and y(1) = −2. A numerical solution for small  shows that there is a
boundary later at x = 1 and an internal layer at some x0 , where y ∼ 0.
Matched Asymptotic Expansions 53

(a) Find the first term in the expansion of the outer solution. Assume that this function
satisfies the boundary condition at x = 0.

Solution: Setting y ∼ y0 +· · · and balancing O(1) yields the following equation

y0 (1 − y0 )y00 − xy0 = 0.

Since we assume that it satisfies y(0) = 2, then y0 (x) 6= 0. Then it follows that

(1 − y0 )y00 = x =⇒ y0 (x) = 1 + 1 − x2 .

(b) Explain why there cannot be a boundary layer at x = 1, which links the boundary
condition at x = 1 with the outer solution of part 1. evaluated at x = 1.

Solution: If it has a boundary layer at near x = 1, then the solution connects


limt→1 y0 (x) = 1 and boundary condition y(1) = −2. Then there is x in the
boundary layer such that 0 < y < 1. Since y 00 , y 0 < 0 in the layer, then one can
conclude
y 00 + y(1 − y)y 0 − xy < 0,
that is such expansion cannot satisfy IVP.

(c) Assume that there is an interior layer at some point x0 , which links the outer solution
calculated in (a) for 0 ≤ x < x0 with
√ the outer solution y ∼ 0 for x0 < x < 1. From
the matching show that x0 = 3/2. Note that there will be an undetermined
constant.

Solution: In the interior layer, scale the coordinate as x̃ = (x − x0 )/α and set
Y (x̃) = y(x). Then one can achieve equation for the interior solution

1−2α Y 00 + −α Y (1 − Y )Y 0 − (α x̃ + x0 )Y = 0.

Expanding the interior solution as Y ∼ Y0 + · · · and setting α = 1 yields the


balance equation for O(−1 ) terms

Y 00 + Y (1 − Y )Y 0 = 0.

Taking integration on both sides,


1 1
Y 0 + Y 2 − Y 3 = C,
2 3
where C is constant. From the right matching condition, limx̃→∞ Y (x̃) =
limx̃→∞ Y 0 (x̃) = 0. Hence, C = 0. Invoking partial fraction to the separable
ODE gives the general solution
1 2 1 2
x̃ + D = − log Y − + log(3 − 2Y ).
6 9 3Y 9
54 2.5. Exercises

The left matching condition yields that


3
q
lim Y (x̃) = = lim− y(x) = 1 + 1 − x20 ,
x̃→−∞ 2 x→x0

and it implies that x0 = 3/2. With the undetermined constant D, the interior
expansion Y (x̃) = G−1 (x̃) where
 
2 1 2
G(Y ) = 6 − log Y − + log(3 − 2Y ) − D .
9 3Y 9

(d) Given the interior layer at x0 , construct the first term in the expansion of the inner
solution at x = 1.

Solution: In the similar fashion, setting ξ = (x − 1)/β and V (ξ) = y(x). Then
we get the same ODE with the left matching condition

V 00 + V (1 − V )V 0 = 0.

Then it follows the general solution


1 2 1 2
ξ + E = − log(−V ) − + log(3 − 2V ),
6 9 3V 9
where E is a constant. Since V (0) = −2, then we have
 
2 1 2 1 2 7
E = − log 2 + + log 7 = + log .
9 6 9 6 9 2

Therefore, the expansion in the inner layer at x = 1 is V (ξ) = H −1 (ξ) where


     
2 −V 1 2 3 − 2V 1
H(V ) = 6 − log − + log − .
9 2 3V 9 7 6
Chapter 3

Method of Multiple Scales

3.1 Introductory Example


As in the previous chapter, we will introduce the ideas underlying the method by a simple
example. Consider the initial value problem

y 00 + εy 0 + y = 0 for t > 0 (3.1.1a)


y(0) = 0, y 0 (0) = 1 (3.1.1b)

which models a linear oscillator with weak damping. This reduces to the linear oscillator model
when ε = 0.

3.1.1 Regular expansion


We do not expect boundary layers since (3.1.1) is not a singular problem. This suggests that
the solution might have a regular asymptotic expansion, i.e. we try a regular expansion

y(t) ∼ y0 (t) + εy1 (t) + . . . as ε −→ 0 (3.1.2)

Substituting (3.1.2) into (3.1.1) and collecting terms in equal powers of ε yields

y000 + y0 = 0
yn00 + yn = −yn−1
0
for n ≥ 1,

with initial conditions

y0 (0) = 0, y00 (0) = 1, yn (0) = yn0 (0) = 0, n ≥ 1.

Solving the O(1) and O(ε) equations we obtain


1
y(t) ∼ sin(t) − εt sin(t), (3.1.3)
2
but this is problematic since the correction term y1 (t) contains a secular term t sin(t) which
blows up as t −→ ∞. Consequently, the asymptotic expansion is valid for only small values of
t, since εy1 (t) ∼ y0 (t) when εt ∼ 1. The problem is that regular perturbation theory does not

55
56 3.1. Introductory Example

y(t)

t
10 20 30 40 50 60 70

−1

−2 sin(t) − 0.05t sin(t)


Exact solution

Figure 3.1: Comparison between the regular asymptotic approximation (3.1.4) and the exact
solution (3.1.4) for ε = 0.1.

capture the correct behaviour of the exact solution. Indeed, (3.1.1) is a constant-coefficient
linear ODE and it can be solved exactly:
1 p 
y(t) = p e−εt/2 sin t 1 − ε2 /4 (3.1.4)
1 − ε2 /4
It is clear that the exact solution decays but the first term in our regular asymptotic approx-
imation (3.1.3) does not. Also, we will pick up the secular terms if we naively expand the
exponential function around t = 0, since
εt ε2 t2
 
y(t) ≈ 1 − + + . . . sin(t).
2 8

3.1.2 Multiple-scale expansion


In fact, there are two time-scales in the exact solution:
1. The slowly decaying exponential component which varies on a time-scale of O (1/ε);

2. The fast oscillating component which varies on a time-scale of O(1).


To identify or separate these time-scales, we introduce the variables

t1 = t, t2 = εα t, α > 0,

where t2 is called the slow time-scale because it does not affect the asymptotic expansion until
εα t ∼ 1. We treat these two time-scales as independent variables and consequently the original
time derivative becomes
d dt1 ∂ dt2 ∂ ∂ ∂
−→ + = + εα . (3.1.5)
dt dt ∂t1 dt ∂t2 ∂t1 ∂t2
Method of Multiple Scales 57

Substituting (3.1.5) into (3.1.1) yields the transformed problem


 2
∂t1 + 2εα ∂t1 ∂t2 + ε2α ∂t22 y + ε (∂t1 + εα ∂t2 ) y + y = 0,

(3.1.6a)

α

y(t1 , t2 ) = 0, (∂t1 + ε ∂t2 ) y(t1 , t2 ) = 1. (3.1.6b)
t1 =t2 =0 t1 =t2 =0

Unlike the original problem, additional constraints are needed for (3.1.6) to have a unique
solution, and it is precisely this degree of freedom that allows us to eliminate the secular terms!
We now introduce an asymptotic expansion

y ∼ y0 (t1 , t2 ) + εy1 (t1 , t2 ) + . . . . (3.1.7)

Substituting (3.1.7) into (3.1.6) yields


 2
∂t1 + 2εα ∂t1 ∂t2 + ε2α ∂t22 [y0 + εy1 + . . . ]


+ ε (∂t1 + εα ∂t2 ) (y0 + . . . ) + (y0 + εy1 + . . . ) = 0.

The O(1) problem is

∂t21 + 1 y0 = 0,


y0 (0, 0) = 0, ∂t1 y0 (0, 0) = 1,

and its general solution is

y0 (t1 , t2 ) = a0 (t2 ) sin(t1 ) + b0 (t2 ) cos(t1 ),

where a0 (0) = 1, b0 (0) = 0. Note that y0 (t1 , t2 ) consists of purely harmonic components with
slowly varying amplitude. We now need to determine α in the slow time-scale t2 . Observe that
for α > 1 the O(ε) equation is
∂t21 + 1 y1 = −∂t1 y0 ,


and the inhomogeneous term ∂t1 y0 will generate secular terms, since it belongs to the kernel
of homogeneous linear operator ∂t21 + 1 . More importantly, there is no way to generate non-
trivial solution that will cancel the secular term. This can be prevented by choosing α = 1.
The O(ε) equation is

∂t21 + 1 y1 = −2∂t1 ∂t2 y0 − ∂t1 y0 ,




y1 (0, 0) = 0, ∂t1 y1 (0, 0) + ∂t2 y0 (0, 0) = 0.

Substituting y0 gives

∂t21 + 1 y1 = −2 (a00 cos(t1 ) − b00 sin(t1 )) − (a0 cos(t1 ) − b0 sin(t1 ))




= (2b00 + b0 ) sin(t1 ) − (2a00 + a0 ) cos(t1 ).

The general solution of the O(ε) problem is

y1 (t1 , t2 ) = a1 (t2 ) sin(t1 ) + b1 (t2 ) cos(t1 )


1 1
− (2b00 + b0 ) t1 cos(t1 ) − (2a00 + a0 ) t1 sin(t1 ),
2 | {z } 2 | {z }
secular secular
58 3.1. Introductory Example

with a1 (0) = b00 (0), b1 (0) = 0. We can choose the functions a0 , b0 to remove the secular terms,
which results in

2b00 + b0 = 0 =⇒ b0 (t2 ) = β0 e−t2 /2 = 0, since b0 (0) = 0,

and
2a00 + a0 = 0 =⇒ a0 (t2 ) = α0 e−t2 /2 = e−t2 /2 , since a0 (0) = 0.
Hence, a first term approximation of the solution y(t) of (3.1.1) is

y ∼ e−εt/2 sin(t).

One can prove that this asymptotic expansion is uniformly valid for 0 ≤ t ≤ O (1/ε).

3.1.3 Discussion
1. Many problems have the O(1) equation as

y000 + ω 2 y0 = 0.

and the general solution is

y0 (t) = a cos(ωt) + b sin(ωt).

If the original problem is nonlinear and the O(1) equation is as above, then it is usually
more convenient to use a complex representation of y0 , i.e.

y(t) = Aeiωt + Āe−iωt = B cos (ωt + θ) .

These complex representations make identify the secular terms much easier.

2. Often, higher-order equations have the form

yn00 + ω 2 yn = f (t).

A secular term arises if f (t) contains a solution of the O(1) problem, e.g. cos(ωt) or
sin(ωt). We can avoid secular terms by requiring the t2 -dependent coefficients of cos(ωt1 )
and sin(ωt1 ) to vanish. For example, there are no secular terms if

f (t) = sin(ωt) cos(ωt) = sin(2ωt)/2,

but there is a secular term if


1
f (t) = cos3 (ωt) = (3 cos(ωt) + cos(3ωt)) .
4

3. The time scales should be modified depending on the problem. Some possibilities include:

(a) Several time-scales: e.g. t1 = t/ε, t2 = t, t3 = εt, . . . .


Method of Multiple Scales 59

(b) More complex ε-dependency:

1 + ω1 ε + ω2 ε2 + . . . t, t2 = εt.

t1 =
| {z }
expansion of the effective frequency

This is called the Lindstedt’s method or the method of strained coordinates.


(c) Correct scaling may not be obvious, so we might start off with

t1 = εα t, t2 = εβ t, α < β.

(d) Nonlinear time-dependence:

t1 = f (t, ε), t2 = εt.

3.2 Forced Motion Near Resonance


In this section, we consider an extension of the introductory example: a dampled nonlinear
oscillator that is forced at a frequency near resonance. As an example, we will study the
damped Duffing equation
 
y 00 + ελy 0 + y + εκy 3 = ε cos (1 + εω)t for t > 0 (3.2.1a)
y(0) = 0, y 0 (0) = 0. (3.2.1b)
 
The damping term ελy 0 , nonlinear correction term εκy 3 and forcing term ε cos (1 + εω)t are
small. Also, ω, λ, κ are constants with λ and κ nonnegative. We expect the solution to be
small due to the small forcing and zero initial conditions.
Consider the simpler equation

y 00 + y = ε cos(Ωt), Ω 6= ±1, y(0) = y 0 (0) = 0. (3.2.2)

The unique solution is


ε h i
y(t) = cos(Ωt) − cos(t) (3.2.3)
1 − Ω2
and the solution blows up as expected when the driving frequency Ω ≈ 1. To understand the
situation, suppose Ω = 1 + εω. The particular solution of (3.2.2) is given by
 1  
−
 cos (1 + εω)t if ω 6= 0, −2/ε,
ω(2 + εω)
yp (t) = (3.2.4)
 1 εt sin(t)

otherwise.
2
In both cases a relatively small, order O(ε), forcing results in at least an O(1) solution. More-
over, the behaviour of the solution depends on ω, which is typical of a forcing system.
We take t1 = t and t2 = εt, although we should take t2 = εα t, α > 0 in general to allow for
some flexibility. The forced Duffing equation becomes
h i h i
∂t21 + 2ε∂t1 ∂t2 + ε2 ∂t22 y + ελ ∂t1 + ε∂t2 y + y + εκy 3 = ε cos (t1 + εωt1 ) . (3.2.5)
60 3.2. Forced Motion Near Resonance

Although we expect the leading-order term in the expansion to be O(ε), the solution can
become larger near a resonant frequency. Because it is not clear what amplitude the solution
actually reaches, we guess a general asymptotic expansion of the form

y ∼ εβ y0 (t1 , t2 ) + εγ y1 (t1 , t2 ) + . . . , β < γ. (3.2.6)

We also assume that β < 1 due to the resonance effect. Substituting (3.2.6) into (3.2.5) gives
h i h i
εβ ∂t21 y0 + 2ε 1+β
∂t1 ∂t2 y0 + εγ ∂t21 y1 1+β
+ . . . + ε λ∂t1 y0 + . . .
| {z }
| {z } | {z }
4 1 2
h i h i
+ εβ y0 + εγ y1 + . . . + ε1+3β κy03 + . . . = ε cos (t1 + εwt1 ) .
|{z} | {z } | {z }
1 2 3

The O(εβ ) problem is

∂t21 + 1 y0 = 0,


y0 (0, 0) = ∂t1 y0 (0, 0) = 0,

and its general solution is


y0 = A(t2 ) cos(t1 + θ(t2 )),
with A(0) = 0.
We need to determine β and γ before proceed any further. The terms 2 concern with the
preceeding solution y0 and the term 3 is the forcing term. For the most complete approxima-
tion, the problem for the second term y1 in the expansion (3.2.6), which comes from the terms
1 , must deal with both 2 and 3 . This is possible if we choose γ = 1 and β = 0. The O(ε)
equation in

∂t21 + 1 y1 = −2∂t1 ∂t2 y0 − λ∂t1 y0 − κy03 + cos(t1 + ωt2 )



h i
= 2A0 + λA sin(t1 + θ) + 2θ0 A cos(t1 + θ)
κ h  i
− A3 3 cos(t1 + θ) + cos 3(t1 + θ) + cos(t1 + ωt2 ).
4
Note that

cos(t1 + ωt2 ) = cos(t1 + θ − θ + ωt2 )


= cos(t1 + θ) cos(θ − ωt2 ) + sin(t1 + θ) sin(θ − ωt2 ).

Thus, we can remove the secular terms sin(t1 + θ) and cos(t1 + θ) by requiring

2A0 + λA = − sin(θ − ωt2 ) (3.2.7a)



2θ0 A − A3 = − cos(θ − ωt2 ). (3.2.7b)
4
From A(0) = 0 and assuming A0 (0) > 0, it follows that θ(0) = −π/2.
Method of Multiple Scales 61

Figure 3.2: Nullcline for φτ . (a) F (r, β) as a function of r with varying β. (b) Nullcline for φτ
with varying β. Parameter are given by γ = 0.75 and β = −βc , βc /2, βc , 1.5βc , respectively.

It remains to solve (3.2.7) with initial conditions A(0) = 0, θ(0) = −π/2 to find the ampli-
tude function
√ A(t2 ) and phase function θ(t2 ). For the analytic simplicity, changing variables
as r = κA/2 and φ = θ − wt2 gives
(
2r0 = −λr − γ2 sin φ,
γ (3.2.8)
2φ0 = β + 3r2 − 2r cos φ.

where γ = κ and β = −2ω. We now analyze the rewritten amplitude equation (3.2.8). The
nullcline for rτ is r = −γ sin θ/2λ. Similarly, nullcline for φτ is given by cos θ = 2r(β +3r2 )/γ ≡
F (r, β), see Fig. 3.2:

• If β > 0, there is unique r for each θ, see the blue line.

• If 0 > β > βc where minr F (r, βc ) = −1 (and it turns out that βc3 = −81γ 2 /16), then
there are two values of r for each cos θ in some interval (−z, 0) for some z ∈ [0, 1]. See
the red line.

• If β < βc , then two values of r exist for all cos θ between −1 and 0. See the purple line.

For 0 > β > βc , then the non-trivial fixed point (FB) stability of (3.2.8) with varying the
nullcline rτ for λ ≥ 0 is the following, see Fig. 3.3:

• For small λ, only one stable fixed point, see the curve A intersecting with the red line.

B,C If λ = λ1C , there is a SN bifurcation, that is, saddle and a stable FP. See the curve B
and B intersecting with the red line.

• At λ = λ2C , there is a second SN bifurcation in which saddle and other stable FP (from
A) annihilate leaning the stable FP (from B). See the curve D and E intersecting with
the red line.
62 3.3. Periodically Forced Nonlinear Oscillators

Figure 3.3: Non-trivial fixed points as a function of λ and its bifurcation diagram. (a) Inter-
sections of nullcline φτ and rτ with varying λ. (b) Bifurcation diagram of fixed radius rF P as
a function of λ.

3.3 Periodically Forced Nonlinear Oscillators


This section is taken from [Bre14, Chapter 1.2] and [PRK03, Chapter 7.1]. Consider a general
model of a nonlinear oscillator
du
= f (u), u = (u1 , . . . , uM ) , with M ≥ 2. (3.3.1)
dt
For example, u1 might represents the membrane potential of the neuron (treated as a point
processor) and u2 , . . . , uM represent various ionic channel gating variables. Suppose there exists
a stable periodic solution U (t) = U (t + ∆0 ), where ω0 = 2π/∆0 is the natural frequency of the
oscillator. In phase space, the solution is an isolated attractive trajectory called a limit cycle.
The dynamics on the limit cycle can be described by a uniformly rotating phase, i.e.

= ω0 and U (t) = g(φ(t)), (3.3.2)
dt
with g a 2π-periodic function. The phase φ should be viewed as a coordinate along the limit
cycle, such that it grows monotonically in the direction of the motion and gains 2π during each
rotation. Note that the phase is neutrally stable with respect to perturbations along the limit
cycle - this reflects the time-shift invariance of an autonomous dynamical system. On the limit
cycle, the time shift ∆t is equivalent to the phase shift ∆φ = ω0 ∆t. Now, suppose that a small
external periodic input is applied to the oscillator such that
du
= f (u) + εP (u, t), (3.3.3)
dt
where P (u, t) = P (u, t + ∆) with ω = 2π/∆ the forcing frequency. If the amplitude ε is
sufficiently small and the cycle is stable, then the resulting deviations transverse to the limit
cycle are small so that the main effect of the perturbation is a phase-shift along the limit cycle.
This suggests a description of the perturbed dynamics with the phase variable only. Therefore,
we need to extend the definition of phase to a neighbourhood of the limit cycle.
Method of Multiple Scales 63

3.3.1 Isochrones
Roughly speaking, the idea is to define the phase variable in such a way that it rotates uniformly
on the limit cycle as well as its neighbourhood. Suppose that we observe the unperturbed
system stroboscopically at time intervals of length ∆0 . This leads to a Poincaré mapping
u(t) −→ u(t + ∆0 ) ≡ G(u(t)).
The map G has all points on the limit cycle as fixed points. Choose a point U ∗ on the limit
cycle and consider all points in a neighbourhood of U ∗ in RM that are attracted to it under the
action of Φ. They form an (M − 1)-dimensional hypersurface I, called an isochrone, crossing
the limit cycle at U ∗ . A unique isochrone can be drawn through each point on the limit cycle
so we can parameterise the isochrones by the phase φ, i.e. I = I(φ). Finally, we extend the
definition of phase to the vicinity of the limit cycle by taking all points u ∈ I(φ) to have the
same phase, Φ(u) = φ, which then rotates at the natural frequency ω0 (in the unperturbed
case).

Example 3.3.1. Consider the following complex amplitude equation that arises for a limit
cycle oscillator close to a Hopf bifurcation:
dA
= (1 + iη)A − (1 + iα)|A|2 , A ∈ C.
dt
In polar coordinates A = Reiθ , we have
dR
= R(1 − R2 )
dt

= η − αR2 .
dt
Observe that the origin is unstable and the unit circle is a stable limit cycle. The solution for
arbitrary initial data R(0) = R0 , θ(0) = θ0 is
−1/2
1 − R02 −2t
  
R(t) = 1 + e
R0
α 
θ(t) = θ0 + ω0 t − ln R02 + (1 − R02 )e−2t ,

2
where ω0 = η − α is the natural frequency of the stable limit cycle at R = 1. Strobing the
solution at time t = n∆0 , we see that
lim θ(n∆0 ) = θ0 − α ln R0 .
n→∞

Hence, we can define a phase on the whole plane as


Φ(R, θ) = θ − α ln R
and the isochrones are the lines of constant phase Φ, which are logarithmic spirals on the (R, θ)
plane. We verify that this phase rotates uniformly:
dΦ dθ α dR
= − = η − αR2 − α(1 − R2 ) = η − α = ω0 .
dt dt R dt
It seems like the angle variable θ can be taken to be the phase variable Φ since it rotates
with a constant angular velocity ω0 . However, if the initial amplitude deviates from unity, an
additional phase shift occurs due to the term proportional to α in the θ̇-equation. It can be
seen from θ(t) and R(t) that the additional phase shift is −α ln R0 .
64 3.3. Periodically Forced Nonlinear Oscillators

3.3.2 Phase equation


For an unperturbed oscillator in the vicinity of the limit cycle, we have from (3.3.1) and (3.3.2)
M M
dΦ(u) X ∂Φ duk X ∂Φ
ω0 = = = fk (u).
dt k=1
∂uk dt k=1
∂uk

Now consider the perturbed system (3.3.3) but with the “’unperturbed” definition of the phase:
M M
dΦ(u) X ∂Φ   X ∂Φ
= fk (u) + εPk (u, t) = ω0 + ε Pk (u, t).
dt k=1
∂u k
k=1
∂u k

Because the sum is O(ε) and the deviations of u from the limit cycle U are small, to a
first approximation, we can neglect these deviations and calculate the sum on the limit cycle.
Consequently,
M
dΦ(u) X ∂Φ(U )
= ω0 + ε Pk (U , t).
dt k=1
∂u k

Finally, since points on the limit cycle are in one-to-one correspondence with the phase θ, we
obtain the closed phase equation


= ω0 + εQ(φ, t), (3.3.4)
dt
where
M
X ∂Φ(U (φ))
Q(φ, t) = Pk (U (φ), t) (3.3.5)
k=1
∂uk

is a 2π-periodic function of φ and a ∆-periodic function of t. The phase equation (3.3.4) de-
scribes the dynamics of the phase of a periodic oscillator in the presence of a small periodic
external force and Q(φ, t) contains all the information of the dynamical system. This is known
as the phase reduction method.

Example 3.3.2. Returning to Example 3.3.1, the system in Cartesian coordinate is

dx
= x − ηy − x2 + y 2 (x − ηy) + ε cos(ωt)

dt
dy
= y + ηy − x2 + y 2 (y + αx)

dt
where we periodically force the nonlinear oscillator in the x-direction. The isochrone is given
by y α
− ln x2 + y 2 ,

Φ = arctan
x 2
and differentiating with respect to x yields

∂Φ y αx
=− 2 − .
∂x x + y 2 x2 + y 2
Method of Multiple Scales 65

On the limit cycle u0 = (x0 , y0 ) = (cos φ, sin φ), we have


∂Φ
(u0 (φ)) = − sin φ − α cos φ.
∂x
It follows that the corresponding phase equation is

= ω0 − ε (α cos φ + sin φ) cos(ωt).
dt

3.3.3 Phase resetting curves


In neuroscience, the function Q(φ, t) can be related to an easily measurable property of a neural
oscillator, namely its phase resetting curves (PRC). Let us denote this by a 2π-periodic
function R(φ). For a neural oscillator, the PRC is found experimentally by perturbing the
oscillator with an impulse at different times in its cycle and measuring the resulting phase shift
from the unperturbed oscillator. Suppose we perturb u1 , it follows from (3.3.4) that
 
dφ ∂Φ(U (φ))
= ω0 + ε δ(t − t0 ).
dt ∂u1
Integrating over a small interval around t0 , we see that the impulse induces a phase shift
∆φ = εR(φ0 ), where
∂Φ(U (φ))
R(φ) = and φ0 = φ(t0 ).
∂u1
Given the phase resetting curve R(φ), a general time-dependent voltage perturbation εP (t) is
determined by the phase equation

= ω0 + εR(φ)P (t) = ω0 + εQ(φ, t).
dt
We can also express the PRC in terms of the firing times of a neuron. Let T n be the nth
firing time of the neuron. Consider the phase φ = 0. In the absence of perturbation, we have
φ(t) = 2πt/∆0 so the firing times are T n = n∆0 . On the other hand, a small perturbation
applied at the point φ on the limit cycle at time t ∈ (T n , T n+1 ), induces a phase shift that
changes the next firing time. Depending on the type of neurons, the impulse either advance or
delay the onset of the next spike. Oscillators with a strictly positive PRC R(φ) are called type
I, whereas those for which the PRC has a negative regime are called type II.

3.3.4 Averaging theory


In the zero-order approximation, i.e. ε = 0, the phase equation (3.3.4) gives rise to φ(t) =
φ0 + ω0 t. Since Q(φ, t) is 2π-periodic in φ and ∆-periodic in t, we expand Q(φ, t) as a double
Fourier series
X
Q(φ, t) = al,k eikφ+ilωt
l,k
X
= al,k eikφ0 ei(kω0 +lω)t ,
l,k
66 3.3. Periodically Forced Nonlinear Oscillators

where ω = 2π/∆. Thus Q contains fast oscillating terms (compared to the time scale 1/ε)
together with slowly varying terms, the latter satisfy the resonance condition

kω0 + lω ≈ 0.

Substituting this double Fourier series into the phase equation (3.3.4), we see that the fast
oscillating terms lead to O(ε) phase deviations, while the resonant terms can lead to large
variations of the phase and are mostly important for the dynamics. Thus we have to average
the forcing term Q keeping only the resonant terms. We now identify the resonant terms using
the resonance condition above:

1. The simplest case is ω ≈ ω0 for which the resonant terms satisfy l = −k. This results in
an averaged forcing X
Q(φ, t) ≈ a−k,k eik(φ−ωt) = q(φ − ωt)
k

and the phase equation becomes



= ω0 + εq(φ − ωt).
dt
Introducing the phase difference ψ = φ − ωt between the oscillator and external input,
we obtain

= −∆ω + εq(ψ),
dt
where ∆ω = ω − ω0 is the degree of frequency detuning.

2. The other case is ω ≈ mω0 /n, where m and n are coprime. The forcing term becomes
X
Q(φ, t) ≈ a−nk,mk eik(mφ−nωt) = qb(mφ − nωt)
k

and the phase equation has the form



q (mφ − nωt).
= ω0 + εb
dt
Introducing the phase difference ψ = mφ − nωt, we obtain

= mω0 − nω + εmb
q (ψ),
dt
where the frequency detuning is ∆ω = nω − nω0 instead.

The above analysis is an application of the averaging theorem. Assuming ∆ω = ω − ω0 =


O(ε) and setting ψ = φ − ωt, we have


= −∆ω + εQ(ψ + ωt, t) = O(ε).
dt
Define Z T
1
q(ψ) = lim Q(ψ + ωt, t) dt,
T −→∞ T 0
Method of Multiple Scales 67

and consider the averaged equation


dψ̄
= −∆ω + εq(ψ̄),
dt
where q only contains the resonant terms of Q as above. The averaging theorem guarantees
that there exists a change of variable ψ = ψ̄ + εw(ϕ, t) that maps solutions of the full equation
to those of the averaged equation to leading order in ε. In general, one can only establish that
a solution of the full equation is ε-close to a corresponding solution of the average equation for
times of O(1/ε), i.e.
sup ψ(t) − ψ̄(t) ≤ Cε.
t∈I

3.3.5 Phase-locking and synchronisation


We now discuss the solutions of the averaged phase equation

= −∆ω + εq(ψ). (3.3.6)
dt
Suppose that the 2π-periodic function q(ψ) has a unique maximum qmax and a unique minimum
qmin . The fixed points ψ ∗ of (3.3.6) satisfy εq(ψ ∗ ) = ∆ω.
1. Synchronisation regime
If the degree of detuning is sufficiently small, in the sense that
εqmin < ∆ω < εqmax ,
then there exists at least one pair of stable/unstable fixed points (ψs , ψu ). (This follows
from the fact that q(ψ) is 2π-periodic and continuous so it has to cross any horizontal
line an even number of times.) The system then evolves to the solution φ(t) = ωt + ψs
and this is the phase-locked synchronise state. The oscillator is also said to be frequency
entrained, meaning that the frequency of the oscillator coincides with that of the external
force.
2. Drift regime
Increasing |∆ω| means that ψs , ψu coalesce at a saddle point, beyond which there are
no fixed points. This results in a saddle-node bifurcation and phase-locking disappears.
If |∆ω is large, then ψ̇ never changes sign and the oscillation frequency differs from the
forcing frequency. The phase ψ(t) rotates through 2π with period
Z 2π

Tψ = .
0 εq(ψ) − ∆ω
The mean frequency of rotation is thus Ω = ω + Ωψ , where Ωψ = 2π/Tψ is the beat
frequency.
For a fixed ε, suppose that ∆ω is close to one of the bifurcation point ∆ωmax := εqmax .
The integral in Tψ is dominated by a small region around ψmax and expanding q(ψ)
around ψmax yields
Z ∞ −1

Ωψ ≈ 2π 00 2

−∞ εq (ψmax )ψ − (∆ω − ∆ωmax )

p √
= ε|q 00 (ψmax )|(∆ω − ∆ωmax ) = O( ε)
68 3.3. Periodically Forced Nonlinear Oscillators

3.3.6 Phase reduction for networks of coupled oscillators


We extend the analysis to a network of N coupled oscillators. Let ui ∈ RM , i = 1, . . . , N
denote the state of the ith oscillator. The general model can be written as

N
dui X
= f (ui ) + ε aij H(uj ), i = 1, . . . , N, (3.3.7)
dt j=1

where the first term represents the local autonomous dynamics and the second term describes
the interaction between oscillators. In a similar fashion to a single periodically forced oscillator,
we can write down the phase equation:
  N
!
dφi (ui ) ∂φi X
= ω0 + ε · aij H(uj ) , i = 1, . . . , N. (3.3.8)
dt ∂ui j=1

Since the limit cycle is uniquely defined by phase,

N
dφi X
= ω0 + ε aij Qi (φi , φj ), i = 1, . . . , N, (3.3.9)
dt j=1

where
∂φi
Qi (φi , φj ) = (U (φi )) · H(U (φj )). (3.3.10)
∂ui
The final step is to use the method of averaging to obtain the phase-difference equation.
Introducing ψi = φi − ω0 t, we obtain

N
dψi X
=ε aij Qi (ψi + ω0 t, ψj + ω0 t) .
dt j=1

Upon averaging over one period, we obtain

N
dψi X
=ε aij h(ψj − ψi ), (3.3.11)
dt j=1

where
N
!
Z ∆0
1 X
h(ψj − ψi ) = R(ψi + ω0 t) · H(U (ψj + ω0 t)) dt
∆0 0 j=1
N
!
Z 2π
1 X
= R(φ + ψi − ψj ) · H(U (φ)) dφ,
2π 0 j=1

with φ = ψj + ω0 t. Here, R is the phase resetting curve.


Method of Multiple Scales 69

Phase-locked solutions
We define a one-to-one phase-locked solutions to be
ψi (t) = t∆w + ψ i , (3.3.12)
where ψ i is constant. Taking time derivative on (3.3.12) and imposing (3.3.11) yields
N
X 
∆ω = ε aij h ψ j − ψ i , i = 1, . . . , N. (3.3.13)
j=1

Since we have N equations in N unknowns ∆ω and N − 1 phases ψ j − ψ 1 , then one can find
the phase-locked solutions (We only care about phase difference.)

Stability
In order to determine local stability, we set
ψi (t) = ψ i + t∆ω + ∆ψi (t). (3.3.14)
To linearize it, taking time derivative on (3.3.14) and imposing the phase-locked solutions
(3.3.13) gives
N
d∆ψi X
=ε H
b ij (Φ) ∆ψj , (3.3.15)
dt j=1

where Φ = ψ̄1 , . . . ψ̄N and
 X 
Hb ij (Φ) = aij h ψ j − ψ i − δij aik h ψ k − ψ i . (3.3.16)
k

Pair of identical oscillators


For example, we assume that N = 2 and symmetric coupling, that is a12 = a21 and a11 = a22 = 0
(no self-interaction). Let ψ = ψ2 − ψ1 . Then (3.3.11) turns out to be

= εH − (ψ),
dt
where H −1 (ψ) = h(−ψ) − h(ψ). Imposing the assumption on (3.3.13) implies that the phase-
locked states are given by zeros of the odd function, H − (ψ) = 0. Furthermore, it is stable
if
dH − (ψ)
ε < 0.

By symmetry and periodicity, the in-phase solution ψ = 0 and anti-phase solution ψ = π are
guaranteed to exist.

3.4 Partial Differential Equations


In this section, we apply the method of multiple scales to the linear wave equation and the
nonlinear Klein-Gordon equation.
70 3.4. Partial Differential Equations

3.4.1 Elastic string with weak damping


Consider the one-dimensional wave equation with weak damping:

∂x2 u = ∂t2 u + ε∂t u, 0 < x < 1, t > 0, (3.4.1a)


u = 0 at x = 0 and x = 1, (3.4.1b)
u(x, 0) = g(x), ∂t u(x, 0) = 0. (3.4.1c)

Similar to the weakly damped oscillator, we introduce two separate time scales t1 = t, t2 = εt.
In this case, (3.4.1) becomes
h i h i
∂x2 u = ∂t21 + 2ε∂t1 ∂t2 + ε2 ∂t22 u + ε ∂t1 + ε∂t2 u, (3.4.2a)
u = 0 at x = 0 and x = 1, (3.4.2b)
h i
u(x, 0) = g(x), ∂t1 + ε∂t2 u = 0. (3.4.2c)
t1 =t2 =0

As before, the solution of (3.4.2) is not unique and we will use this degree of freedom to
eliminate the secular terms.
We try a regular asymptotic expansion of the form

u ∼ u0 (x, t1 , t2 ) + εu1 (x, t1 , t2 ) + . . . as ε −→ 0. (3.4.3)

The O(1) problem is

∂x2 u0 = ∂t21 u0 , (3.4.4a)


u0 (x, 0, 0) = g(x), ∂t1 u0 (x, 0, 0) = 0. (3.4.4b)

Separation of variables yields the general solution



X
u0 (x, t1 , t2 ) = [an (t2 ) sin(λn t1 ) + bn (t2 ) cos(λn t1 )] sin(λn x), λn = nπ. (3.4.5)
n=1

The initial conditions will be imposed once we determine an (t2 ) and bn (t2 ). The O(ε) equation
is

∂x2 u1 = ∂t21 u1 + 2∂t1 ∂t2 u0 + ∂t1 u0


X∞
2
= ∂t1 u1 + An (t1 , t2 ) sin(λn x), (3.4.6)
n=1

where
An = (2a0n + an ) λn cos(λn t1 ) − (2b0n + bn ) λn sin(λn t1 ).
Given the zero boundary conditions in (3.4.1), it is appropriate to introduce the Fourier ex-
pansion

X
u1 = Vn (t1 , t2 ) sin(λn x).
n=1

Substituting this into (3.4.7) together with the expression of An , we obtain

∂t21 Vn + λ2n Vn = − (2a0n + an ) λn cos(λn t1 ) + (2b0n + bn ) λn sin(λn t1 ).


Method of Multiple Scales 71

The secular terms are eliminated provided

2a0n + an = 0, 2b0n + bn = 0,

and these have general solutions of the form

an (t2 ) = an (0)e−t2 /2 , bn (t2 ) = bn (0)e−t2 /2 .

Finally, a first term approximation of the solution of (3.4.1) is



X
an (0)e−εt/2 sin(λn t) + bn (0)e−εt/2 cos(λn t) sin(λn x), λn = nπ.
 
u(x, t) ∼ (3.4.7)
n=1

Applying the initial condition in (3.4.4), we find that an (0) = 0 and


Z 1
bn (0) = 2 g(x) sin(λn x) dx.
0

3.4.2 Nonlinear wave equation


Consider the nonlinear Klein-Gordon equation

∂x2 u = ∂t2 u + u + εu3 , −∞ < x < ∞, t > 0, (3.4.8a)


u(x, 0) = F (x), ∂t u(x, 0) = G(x). (3.4.8b)

It describes the motion of a string on an elastic foundation as well as the waves in a cold
electron plasma.
As usual, let us consider (3.4.8) with ε = 0:

∂x2 u = ∂t2 u + u, −∞ < x < ∞, t > 0, (3.4.9a)


u(x, 0) = F (x), ∂t u(x, 0) = G(x). (3.4.9b)
 
We guess a solution of the form exp i(kx − ωt) . This yields the dispersion relation

−k 2 = −ω 2 + 1 =⇒ ω = ± 1 + k 2 = ±ω(k).

We may solve (3.4.9) using the spatial Fourier transform


Z ∞
u
b(k, t) = u(x, t)e−ikx dx.
−∞

This produces an ODE for u


b(k, t):

−k 2 u
b = ∂tt u
b+ub, (3.4.10a)
u
b(k, 0) = Fb(k), ∂t u
b(k, 0) = G(k).
b (3.4.10b)

Solving (3.4.10) and applying the inverse Fourier transform we obtain the general solution of
(3.4.9): Z ∞ Z ∞
i(kx−ω(k)t)
u(x, t) = A(k)e dk + B(k)ei(kx+ω(k)t dk, (3.4.11)
−∞ −∞
72 3.4. Partial Differential Equations

where A(k) and B(k) are determined from the initial conditions in (3.4.10). This shows that
the solution
 of (3.4.9)
 can be written as the superposition of the plane wave solutions u± (x, t) =
exp i(kx + ω(k)t) . We would like to investigate how the nonlinearity affects a right-moving

plane wave u(x, t) = cos(kx − ωt), where k > 0 and ω = 1 + k 2 .
A regular asymptotic expansion of the form

u(x, t) ∼ w0 (kx − ωt) + εw1 (x, t) + . . .

will lead to secular terms, and thus we use multiple scales to find an asymptotic approximation
of the solution of (3.4.8). We take three independent variables

θ = kx − ωt, x2 = εx, t2 = εt.

The spatial and time derivatives become


∂ ∂ ∂ ∂ ∂ ∂
−→ k +ε , −→ −ω +ε .
∂x ∂θ ∂x2 ∂t ∂θ ∂t2
Consequently, the nonlinear Klein-Gordon equation becomes
h i2 h i2
k∂θ + ε∂x2 u = − ∂θ + ε∂t2 u + u + εu3
h i
k 2 − ω 2 ∂θ2 u + 2εk∂x2 ∂θ u + 2εω∂t2 ∂θ u = u + εu3 + O(ε2 )u
h   i
2
∂θ − 2ε k∂x2 + ω∂t2 ∂θ + O(ε ) u + u + εu3 = 0,
2
(3.4.12)

where we use the dispersion relation −k 2 = −ω 2 +1. We assume a regular asymptotic expansion
of the form
u(x, t) ∼ u0 (θ, x2 , t2 ) + εu1 (θ, x2 , t2 ) + . . . .
The O(1) equation is
∂θ2 + 1 u0 = 0,


and the general solution of this problem is


 
u0 = A(x2 , t2 ) cos θ + φ(x2 , t2 ) .

The O(ε) equation is

∂θ2 + 1 u1 = 2 (k∂x2 + ω∂t2 ) ∂θ u0 − u30



h i 1 3  
= −2 (k∂x2 + ω∂t2 ) A sin(θ + φ) − A cos 3(θ + φ)
4
h 3 2i
− 2 (k∂x2 + ω∂t2 ) φ + A A cos(θ + φ).
8
The secular terms are eliminated provided

(k∂x2 + ω∂t2 ) A = 0 (3.4.13a)


3
(k∂x2 + ω∂t2 ) φ + A2 = 0. (3.4.13b)
8
Method of Multiple Scales 73

These constitute the amplitude-phase equations and can be solved using characteristic
coordinates. Specifically, let

r = ωx2 + kt2 and s = ωx2 − kt2 .

With this (3.4.13) simplifies to

∂r A = 0
3
∂r φ = − A2
16ωk
and solving this yields
3
A = A(s) and φ = − A2 r + φ0 (s).
16ωk
Hence, a first term approximation of the solution of (3.4.8) is
 
3 2
u ∼ A(wx2 − kt2 ) cos (kx − ωt) − (ωx2 + kt2 )A + φ0 (ωx2 − kt2 ) . (3.4.14)
16ωk
We can now attempt to answer our main question: how does the nonlinearity affects the
plane wave solution? Consider the plane wave initial conditions

u(x, 0) = α cos(kx) and ∂t u(x, 0) = αω sin(kx).

In multiple scale expansion, these translates to

u0 (θ, x2 , 0) = α cos(θ) and ∂θ u0 (θ, x2 , 0) = −α sin(θ).

Imposing these initial conditions on (3.4.14) we obtain


3 2
A(ωx2 ) = α and φ0 (ωx2 ) = A x2 .
16k
Thus, a first term approximation of the solution of (3.4.8) in this case is
3εα2
   
u(x, t) ∼ α cos kx − 1 + ωt
16ω 2
∼ α cos(kx − ω
b t).

We see that the nonlinearity increases the phase velocity since it increases from c = ω/k to
c=ωb /k.

3.5 Pattern Formation and Amplitude Equations


3.5.1 Neural field equations on a ring
Consider a population of neurons distributed on the circle S 1 = [0, π]:
1 π
Z
∂a
= −a(θ, t) + w(θ − θ0 )f (a(θ0 , t)) dθ0 (3.5.1a)
∂t π 0
74 3.5. Pattern Formation and Amplitude Equations

1
f (a) = , (3.5.1b)
1 + exp(−η(a − k))
where a(θ, t) denotes the activity at time t of a local population of cells at position θ ∈ [0, π),
w(θ−θ0 ) is the strength of synaptic weights between cells at θ0 and θ and the firing rate function
f is a sigmoid function. Assuming w is an even π-periodic function, it can be expanded as a
Fourier series: X
w(θ) = W0 + 2 Wn cos(2πθ), Wn ∈ R. (3.5.2)
n≥1

Suppose there exists a uniform equilibrium solution ā of (3.5.1), satisfying


Z π
w(θ − θ0 ) 0
ā = f (ā) dθ = f (ā)W0 . (3.5.3)
0 π
The stability of the equilibrium solution is determined by setting a(θ, t) = ā + a(θ)eλt and
linearising (3.5.1) about ā. Expanding f around ā yields

f (ā + a(θ)eλt ) ≈ f (ā) + f 0 (ā)a(θ)eλt ,

and we obtain the eigenvalue equation


π
f 0 (ā)
Z
λa(θ) = −a(θ) + w(θ − θ0 )a(θ0 ) dθ0 = La(θ). (3.5.4)
π 0

Since the linear operator L is compact on L2 (S 1 ), it has a discrete spectrum with eigenvalues

λn = −1 + f 0 (ā)Wn , n ∈ Z,

and corresponding eigenfunctions

an (θ) = zn e2inθ + zn∗ e−2inθ .

These are obtained by integrating the eigenvalue equation against cos(2nθ) over [0, π]: CHT:
Check this again, unsure about this
π π
f 0 (ā)
Z Z 
0 0 0
λn an = −an + w(θ − θ )a(θ ) dθ cos(2nθ) dθ
π 0 0
!
π π
f 0 (ā)
Z Z X
0 0
= −an + a(θ ) Wm cos(2m(θ − θ )) cos(2nθ) dθ dθ0
π 0 0 m∈Z
0 XZ π Z π
f (ā)
= −an + Wm a(θ0 ) cos(2nθ0 ) dθ0 cos(2mθ) cos(2nθ) + sin(2mθ) cos(2nθ) dθ
π m∈Z 0 0

f 0 (ā) X hπ i
= −an + Wm am δ±m,n
π m∈Z 2
f 0 (ā) h i
= −an + Wn an + W−n a−n
2
0
= −an + f (ā)Wn an ,

where Z π
an = a(θ) cos(2nθ) dθ = a−n .
0
Method of Multiple Scales 75

The eigenvalue expression reveals the bifurcation parameter µ = f 0 (ā). For sufficiently
small µ, corresponding to a low activity state, λn < 0 for all n and the fixed point is stable.
As µ increases beyond a critical value µc , the fixed point becomes unstable due to excitation
of the eigenfunctions associated with the largest Fourier component of w(θ). Suppose that
W1 = maxm Wm . Then λn > 0 for all n if and only if
1
1 < µWn ≤ µW1 =⇒ µ > = µc .
W1
Consequently, for µ > µc , the excited modes will be

a(θ) = ze2iθ + z̄e−2iθ = 2|z| cos(2(θ − θ0 )),

where z = |z|e−2iθ0 . We expect this mode to grow and stop at a maximum amplitude as µ
approaches µc , mainly because of the saturation of f .

3.5.2 Derivation of amplitude equation using the Fredholm alterna-


tive
Unfortunately, the linear stability analysis breaks down for large amplitude of the activity
profile. Suppose the system is just above the bifurcation point, i.e.

µ − µc = ε∆µ, 0 < ε  1 (3.5.5)

If ∆µ = O(1), then µ − µc = O(ε) and we can carry out a perturbation expansion in powers
of ε. We first Taylor expand the nonlinear function f around a = ā:

f (a) − f (ā) = µ(a − ā) + g2 (a − ā)2 + g3 (a − ā)3 + O(a − ā)4 . (3.5.6)

Assume a perturbation expansion of the form



a = ā + εa1 + εa2 + ε3/2 a3 + . . . . (3.5.7)

The dominant temporal behaviour just beyond bifurcation is the slow growth of the excited
mode eε∆µt and this motivates the introduction of a slow time scale τ = εt. Substituting
(3.5.5), (3.5.6) and (3.5.7) into (3.5.1) yields
\[
\begin{aligned}
\big[\partial_t + \varepsilon\partial_\tau\big]\big[\bar a + \sqrt{\varepsilon}a_1 + \varepsilon a_2 + \varepsilon^{3/2}a_3 + \dots\big]
&= -\big[\bar a + \sqrt{\varepsilon}a_1 + \varepsilon a_2 + \varepsilon^{3/2}a_3 + \dots\big]
+ \frac{1}{\pi}\int_0^\pi w(\theta-\theta')f(\bar a)\,d\theta'\\
&\quad + \frac{1}{\pi}\int_0^\pi w(\theta-\theta')\big(\mu_c + \varepsilon\Delta\mu\big)\big[\sqrt{\varepsilon}a_1 + \varepsilon a_2 + \varepsilon^{3/2}a_3 + \dots\big]\,d\theta'\\
&\quad + \frac{1}{\pi}\int_0^\pi w(\theta-\theta')\,g_2\big[\sqrt{\varepsilon}a_1 + \varepsilon a_2 + \dots\big]^2\,d\theta'\\
&\quad + \frac{1}{\pi}\int_0^\pi w(\theta-\theta')\,g_3\big[\sqrt{\varepsilon}a_1 + \varepsilon a_2 + \dots\big]^3\,d\theta'.
\end{aligned}
\]

Define the linear operator \(\hat L\):
\[
\hat L a(\theta) = -a(\theta) + \frac{\mu_c}{\pi}\int_0^\pi w(\theta-\theta')\,a(\theta')\,d\theta' = -a(\theta) + \mu_c\, w*a(\theta).
\]

Collecting terms with equal powers of ε then leads to a hierarchy of equations of the form:
\[
\begin{aligned}
\bar a &= W_0 f(\bar a),\\
\hat L a_1 &= 0,\\
\hat L a_2 &= V_2 := -g_2\, w*a_1^2,\\
\hat L a_3 &= V_3 := \frac{\partial a_1}{\partial\tau} - \Delta\mu\, w*a_1 - 2g_2\, w*(a_1a_2) - g_3\, w*a_1^3.
\end{aligned}
\]

The O(1) equation determines the fixed point ā. The O(√ε) equation has solutions of the form
\[
a_1 = z(\tau)e^{2i\theta} + z^*(\tau)e^{-2i\theta}.
\]
A dynamical equation for z(τ) can be obtained by deriving solvability conditions for the higher-order equations using the Fredholm alternative. These equations have the general form
\[
\hat L a_n = V_n(\bar a, a_1, \dots, a_{n-1}), \qquad n \ge 2.
\]

For any two π-periodic functions U, V, define the inner product
\[
\langle U, V\rangle = \frac{1}{\pi}\int_0^\pi U^*(\theta)\,V(\theta)\,d\theta.
\]

Since w is even, the operator \(\hat L\) is self-adjoint with respect to this particular inner product, and since \(\hat L\tilde a = 0\) for \(\tilde a = e^{\pm 2i\theta}\), we have
\[
\langle \tilde a, \hat L a_n\rangle = \langle \hat L\tilde a, a_n\rangle = 0.
\]

Since La
b n = Vn , it follows from the Fredholm alternative that the set of solvability conditions
are
hã, Vn i = 0 for n ≥ 2.
The O(ε) solvability condition hã, V2 i = 0 is automatically satisfied. The O(ε3/2 ) solvability
condition can be expanded into

hã, ∂τ a1 − ∆µ w ∗ a1 i = g3 hã, w ∗ a31 i + 2g2 hã, w ∗ (a1 a2 )i. (3.5.8)

Taking ã = e^{2iθ} then generates a cubic amplitude equation for z. First, we have
\[
\langle e^{2i\theta}, \partial_\tau a_1\rangle = \frac{1}{\pi}\int_0^\pi e^{-2i\theta}\left(\frac{dz}{d\tau}e^{2i\theta} + \frac{dz^*}{d\tau}e^{-2i\theta}\right)d\theta = \frac{dz}{d\tau}. \tag{3.5.9}
\]

To deal with the convolution terms, observe that since w is even, for any function b(θ) we have
\[
\begin{aligned}
\langle e^{2i\theta}, w*b\rangle &= \langle w*e^{2i\theta}, b\rangle\\
&= \frac{1}{\pi}\int_0^\pi\left(\frac{1}{\pi}\int_0^\pi w(\theta-\theta')\,e^{-2i\theta'}\,d\theta'\right)b(\theta)\,d\theta\\
&= \frac{1}{\pi}\int_0^\pi\left(\frac{1}{\pi}\int_0^\pi\Big[W_0 + \sum_{n\ge 1}W_n\big(e^{2in(\theta-\theta')} + e^{-2in(\theta-\theta')}\big)\Big]e^{-2i\theta'}\,d\theta'\right)b(\theta)\,d\theta\\
&= \frac{1}{\pi}\int_0^\pi W_1 e^{-2i\theta}\,b(\theta)\,d\theta\\
&= W_1\langle e^{2i\theta}, b\rangle.
\end{aligned}
\]
Since W₁ = µc⁻¹, the identity above then gives

\[
\langle e^{2i\theta}, \Delta\mu\, w*a_1\rangle = \Delta\mu\,W_1\langle e^{2i\theta}, a_1\rangle
= \frac{\Delta\mu}{\mu_c\pi}\int_0^\pi e^{-2i\theta}\big(ze^{2i\theta} + z^*e^{-2i\theta}\big)\,d\theta
= \mu_c^{-1}\Delta\mu\,z \tag{3.5.10}
\]

and
\[
\begin{aligned}
\langle e^{2i\theta}, w*a_1^3\rangle &= W_1\langle e^{2i\theta}, a_1^3\rangle\\
&= \frac{1}{\mu_c\pi}\int_0^\pi e^{-2i\theta}\big(ze^{2i\theta} + z^*e^{-2i\theta}\big)^3\,d\theta\\
&= \frac{1}{\mu_c\pi}\int_0^\pi e^{-2i\theta}\big(z^3e^{6i\theta} + 3z^2z^*e^{2i\theta} + 3z(z^*)^2e^{-2i\theta} + (z^*)^3e^{-6i\theta}\big)\,d\theta\\
&= 3\mu_c^{-1}z^2z^* = 3\mu_c^{-1}z|z|^2. \tag{3.5.11}
\end{aligned}
\]

The next step is to determine a2. From the O(ε) equation we have
\[
\begin{aligned}
-\hat L a_2 &= a_2 - \frac{\mu_c}{\pi}\int_0^\pi w(\theta-\theta')\,a_2(\theta')\,d\theta'\\
&= \frac{g_2}{\pi}\int_0^\pi w(\theta-\theta')\,a_1^2(\theta')\,d\theta'\\
&= \frac{g_2}{\pi}\int_0^\pi\Big[W_0 + \sum_{n\ge 1}W_n\big(e^{2in(\theta-\theta')} + e^{-2in(\theta-\theta')}\big)\Big]\big[z^2e^{4i\theta'} + 2|z|^2 + (z^*)^2e^{-4i\theta'}\big]\,d\theta'\\
&= g_2\big[2|z|^2W_0 + z^2W_2e^{4i\theta} + (z^*)^2W_2e^{-4i\theta}\big]. \tag{3.5.12}
\end{aligned}
\]

Let
a2 (θ) = A+ e4iθ + A− e−4iθ + A0 + ζa1 (θ). (3.5.13)
The constant ζ remains undetermined at this order of perturbation but does not appear in the
amplitude equation for z(τ ). Substituting (3.5.13) into (3.5.12) yields

\[
A_+ = \frac{g_2 z^2 W_2}{1-\mu_c W_2}, \qquad A_- = \frac{g_2 (z^*)^2 W_2}{1-\mu_c W_2}, \qquad A_0 = \frac{2g_2|z|^2 W_0}{1-\mu_c W_0}. \tag{3.5.14}
\]

Consequently,
\[
\begin{aligned}
\langle e^{2i\theta}, w*(a_1a_2)\rangle &= W_1\langle e^{2i\theta}, a_1a_2\rangle\\
&= \frac{1}{\mu_c\pi}\int_0^\pi e^{-2i\theta}\big(ze^{2i\theta}+z^*e^{-2i\theta}\big)\big(A_+e^{4i\theta}+A_-e^{-4i\theta}+A_0+\zeta a_1(\theta)\big)\,d\theta\\
&= \mu_c^{-1}\big[z^*A_+ + zA_0\big]\\
&= \mu_c^{-1}\left[\frac{g_2 z|z|^2 W_2}{1-\mu_c W_2} + \frac{2g_2 z|z|^2 W_0}{1-\mu_c W_0}\right] \tag{3.5.15}\\
&= z|z|^2 g_2\mu_c^{-1}\left[\frac{W_2}{1-\mu_c W_2} + \frac{2W_0}{1-\mu_c W_0}\right]. \tag{3.5.16}
\end{aligned}
\]

Finally, substituting (3.5.9), (3.5.10), (3.5.11) and (3.5.16) into the O(ε3/2 ) solvability condition
(3.5.8), we obtain the Stuart-Landau equation

\[
\frac{dz}{d\tau} = z\big(\Delta\mu - \Lambda|z|^2\big), \tag{3.5.17}
\]
where
\[
\Lambda = -3g_3 - 2g_2^2\left[\frac{W_2}{1-\mu_c W_2} + \frac{2W_0}{1-\mu_c W_0}\right]. \tag{3.5.18}
\]
Note that we also absorbed a factor of µc into τ .
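As an illustration, the sketch below integrates the Stuart–Landau equation (3.5.17) for hypothetical values of ∆µ and Λ (chosen with Λ > 0, the supercritical case); the amplitude should saturate at |z| = √(∆µ/Λ).

```python
import numpy as np

# Hypothetical parameters for the amplitude equation (3.5.17), supercritical case Lambda > 0.
dmu, Lam = 1.0, 2.0
dt, T = 1e-3, 20.0

z = 0.01 + 0.0j                      # small initial amplitude of the excited mode
ts = np.arange(0.0, T, dt)
amps = np.empty_like(ts)
for i, t in enumerate(ts):
    amps[i] = abs(z)
    z += dt * z * (dmu - Lam * abs(z) ** 2)   # forward-Euler step of dz/dtau = z(dmu - Lam|z|^2)

# For Lambda > 0 the amplitude saturates at sqrt(dmu/Lambda).
print("final |z| =", amps[-1], " predicted sqrt(dmu/Lam) =", np.sqrt(dmu / Lam))
```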

3.6 Problems
1. Find a first-term expansion of the solution of the following problems using two time
scales.

(a) y 00 + ε(y 0 )3 + y = 0, y(0) = 0, y 0 (0) = 1.

Solution: We introduce a slow scale τ = εt and an asymptotic expansion

y ∼ y0 (t, τ ) + εy1 (t, τ ) + . . . .

The original problem becomes
\[
\big[\partial_t^2 + 2\varepsilon\partial_t\partial_\tau + \varepsilon^2\partial_\tau^2\big]\big(y_0 + \varepsilon y_1 + \dots\big) + \varepsilon\Big(\big[\partial_t + \varepsilon\partial_\tau\big]\big(y_0 + \varepsilon y_1 + \dots\big)\Big)^3 + \big(y_0 + \varepsilon y_1 + \dots\big) = 0,
\]

with boundary conditions
\[
\big(y_0 + \varepsilon y_1 + \dots\big)(0,0) = 0, \qquad \big[\partial_t + \varepsilon\partial_\tau\big]\big(y_0 + \varepsilon y_1 + \dots\big)(0,0) = 1.
\]

The O(1) problem is
\[
\big(\partial_t^2 + 1\big)y_0 = 0, \qquad y_0(0,0) = 0, \quad \partial_t y_0(0,0) = 1, \tag{3.6.1}
\]
and its general solution is
\[
y_0(t,\tau) = A(\tau)e^{it} + A^*(\tau)e^{-it}, \tag{3.6.2}
\]

where A(τ) is a complex function of τ. The O(ε) equation is
\[
\begin{aligned}
\big(\partial_t^2+1\big)y_1 &= -2\partial_t\partial_\tau y_0 - \big(\partial_t y_0\big)^3\\
&= -2i\big[A_\tau e^{it} - A_\tau^*e^{-it}\big] - i^3\big[Ae^{it} - A^*e^{-it}\big]^3\\
&= -2i\big[A_\tau e^{it} - A_\tau^*e^{-it}\big] + i\big[A^3e^{3it} - 3A^2A^*e^{it} + 3A(A^*)^2e^{-it} - (A^*)^3e^{-3it}\big]\\
&= -i\big[2A_\tau + 3A|A|^2\big]e^{it} + i\big[2A_\tau^* + 3A^*|A|^2\big]e^{-it} + i\big[A^3e^{3it} - (A^*)^3e^{-3it}\big]\\
&= F(\tau)e^{it} + F^*(\tau)e^{-it} + i\big[A^3e^{3it} - (A^*)^3e^{-3it}\big].
\end{aligned}
\]

The secular terms are eliminated provided F(τ) = 0. Writing A(τ) = R(τ)e^{iθ(τ)}, the condition F(τ) = 0 becomes
\[
2\big(R_\tau e^{i\theta} + iR\theta_\tau e^{i\theta}\big) + 3R^3e^{i\theta} = 0,
\]
or
\[
2\big(R_\tau + iR\theta_\tau\big) + 3R^3 = 0.
\]
Consequently, we have
θτ = 0 =⇒ θ(τ ) = θ0
and
2Rτ 1 1
2Rτ + 3R3 = 0 =⇒ = −3 =⇒ = 3τ + C =⇒ R(τ ) = √ .
R3 R2 3τ + C
Therefore, (3.6.2) becomes

y0 (t, τ ) = R(τ )ei(t+θ0 ) + R(τ )e−i(t+θ0 )


= 2R(τ ) cos(t + θ0 ).

We now impose the initial conditions from (3.6.1):

y0 (0, 0) = 0 =⇒ 2R(0) cos(θ0 ) = 0


∂t y0 (0, 0) = 1 =⇒ −2R(0) sin(θ0 ) = 1,
which means
\[
R(0)e^{i\theta_0} = -\frac{i}{2} = \frac{1}{\sqrt{C}}e^{i\theta_0} \implies C = 4 \ \text{ and }\ \theta_0 = \frac{3\pi}{2}.
\]

Hence, a first-term approximation of the solution of the original problem is
\[
y \sim \frac{2\cos(t + 3\pi/2)}{\sqrt{3\varepsilon t + 4}} = \frac{2\sin t}{\sqrt{3\varepsilon t + 4}}.
\]

(b) εy 00 + εκy 0 + y + εy 3 = 0, y(0) = 0, y 0 (0) = 1, κ > 0.

Solution: The equation appears to have a boundary layer, but it does not in this case, because ε multiplies y'', y' and y³ simultaneously. Let T = t/√ε and Y(T) = y(t) = y(√εT); then
\[
\frac{d}{dt} = \frac{1}{\sqrt{\varepsilon}}\frac{d}{dT}
\]
and the original problem becomes
\[
\partial_T^2 Y + \sqrt{\varepsilon}\,\kappa\,\partial_T Y + Y + \varepsilon Y^3 = 0, \tag{3.6.3a}
\]
\[
Y(0) = 0, \qquad \partial_T Y(0) = \sqrt{\varepsilon}. \tag{3.6.3b}
\]
Since one of the boundary conditions is of O(√ε), we take the slow scale to be τ = √ε T = t and the fast scale to be T = t/√ε. Assume an asymptotic expansion of the form
\[
Y \sim Y_0(T,\tau) + \sqrt{\varepsilon}\,Y_1(T,\tau) + \varepsilon Y_2(T,\tau) + \dots. \tag{3.6.4}
\]

Substituting (3.6.4) into (3.6.3) we obtain
\[
\big[\partial_T^2 + 2\sqrt{\varepsilon}\partial_T\partial_\tau + \varepsilon\partial_\tau^2\big]\big(Y_0 + \sqrt{\varepsilon}Y_1 + \dots\big) + \sqrt{\varepsilon}\,\kappa\big[\partial_T + \sqrt{\varepsilon}\partial_\tau\big]\big(Y_0 + \sqrt{\varepsilon}Y_1 + \dots\big) + \big(Y_0 + \sqrt{\varepsilon}Y_1 + \dots\big) + \varepsilon\big(Y_0 + \sqrt{\varepsilon}Y_1 + \dots\big)^3 = 0,
\]
with boundary conditions
\[
\big(Y_0 + \sqrt{\varepsilon}Y_1 + \dots\big)(0,0) = 0, \qquad \big[\partial_T + \sqrt{\varepsilon}\partial_\tau\big]\big(Y_0 + \sqrt{\varepsilon}Y_1 + \dots\big)(0,0) = \sqrt{\varepsilon}.
\]

The O(1) problem is


 
∂T2 + 1 Y0 = 0, Y0 (0, 0) = ∂T Y0 (0, 0) = 0,

and its general solution is

Y0 (T, τ ) = A(τ )eiT + A∗ (τ )e−iT .


The O(√ε) equation is
\[
\begin{aligned}
\big(\partial_T^2 + 1\big)Y_1 &= -2\partial_T\partial_\tau Y_0 - \kappa\,\partial_T Y_0\\
&= -2i\big[A_\tau e^{iT} - A_\tau^*e^{-iT}\big] - \kappa i\big[Ae^{iT} - A^*e^{-iT}\big]\\
&= -i\big[2A_\tau + \kappa A\big]e^{iT} + i\big[2A_\tau^* + \kappa A^*\big]e^{-iT}\\
&= F(\tau)e^{iT} + F^*(\tau)e^{-iT}.
\end{aligned}
\]

The secular terms are eliminated provided F (τ ) = 0, i.e.

2Aτ + κA = 0 =⇒ A(τ ) = A(0)e−κτ /2 .

It can be easily seen from the initial conditions of the O(1) problem that A(0) =
0, and so Y0 ≡ 0. Before we proceed any further, note that

Y1 (T, τ ) = B(τ )eiT + B ∗ (τ )e−iT .

The O(ε) equation is


   
∂T2 + 1 Y2 = −2∂T ∂τ Y1 − ∂τ2 Y0 − κ ∂T Y1 + ∂τ Y0 − Y03
= −2∂T ∂τ Y1 − κ∂T Y1 .

This has the same structure as the O(√ε) equation, and it should then be clear that the secular terms are eliminated provided

2Bτ + κB = 0 =⇒ B(τ ) = B(0)e−κτ /2 .

Imposing the initial condition Y1 (0, 0) = 0 and (∂T Y1 +∂τ Y0 )(0, 0) = ∂T Y1 (0, 0) =
1, we obtain

B(0) + B ∗ (0) = 0
h i
i B(0) − B ∗ (0) = 1,

which gives B(0) = −i/2. Hence, the O(√ε) solution is
\[
\begin{aligned}
Y_1(T,\tau) &= B(0)e^{-\kappa\tau/2}e^{iT} + B^*(0)e^{-\kappa\tau/2}e^{-iT}\\
&= e^{-\kappa\tau/2}\Big(-\frac{i}{2}e^{iT} + \frac{i}{2}e^{-iT}\Big)\\
&= e^{-\kappa\tau/2}\sin(T),
\end{aligned}
\]
and a first-term approximation of the solution of the original problem is
\[
y(t) = Y(T) \sim \sqrt{\varepsilon}\,e^{-\kappa t/2}\sin\Big(\frac{t}{\sqrt{\varepsilon}}\Big).
\]

2. In the study of Josephson junctions, the following problem appears

φ00 + ε (1 + γ cos φ) φ0 + sin φ = εα, φ(0) = 0, φ0 (0) = 0, γ > 0. (3.6.5)

Use the method of multiple scales to find a first-term approximation of φ(t).

Solution: With the slow scale τ = εt, (3.6.5) becomes


h i h i
 ∂t2 + 2ε∂t ∂τ + ε2 ∂τ2 φ + ε (1 + γ cos φ) ∂t + ε∂τ φ + sin φ = εα,
h i (3.6.6)
 φ(0, 0) = 0, ∂t + ε∂τ φ(0, 0) = 0, γ > 0.

Assume an asymptotic expansion of the form

φ ∼ φ0 (t, τ ) + εφ1 (t, τ ) + ε2 φ2 (t, τ ) + . . . . (3.6.7)

Substituting (3.6.7) into (3.6.6) and expanding both sin(φ) and cos(φ) around φ = φ0
we obtain:
h i
∂t2 + 2ε∂t ∂τ + ε2 ∂τ2 φ0 + εφ1 + ε2 φ2 + . . .

  h i
2
∂t + ε∂τ φ0 + εφ1 + ε2 φ2 + . . .
 
+ ε 1 + γ cos φ0 − sin φ0 εφ1 + ε φ2 + . . .
+ sin φ0 + cos φ0 εφ1 + ε2 φ2 + . . . = εα,
 

with boundary conditions

φ0 + εφ1 + ε2 φ2 + . . . (0, 0) = 0

h i
∂t + ε∂τ φ0 + εφ1 + ε2 φ2 + . . . (0, 0) = 0.


The O(1) problem is

∂t2 φ0 + sin φ0 = 0, φ0 (0, 0) = 0, ∂t φ0 (0, 0) = 0.

To solve this nonlinear problem, we approximate sin φ₀ ≈ φ₀, and the general solution of the problem is then approximately

φ0 (t, τ ) ≈ A(τ ) cos(t) + B(τ ) sin(t). (3.6.8)

The initial conditions gives A(0) = 0 = B(0).

The O(ε) equation is

∂t2 φ1 + 2∂t ∂τ φ0 + (1 + γ cos φ0 ) ∂t φ0 + (cos φ0 ) φ1 = α,

with boundary conditions

φ1 (0, 0) = 0, ∂t φ1 (0, 0) = −∂τ φ0 (0, 0).



We approximate cos φ0 ≈ 1 and substitute the expression (3.6.8) for φ0 :

∂t2 φ1 + φ1 = −2∂t ∂τ φ0 − (1 + γ) ∂t φ0 + α
= −2 (−A0 sin(t) + B 0 cos(t)) − (1 + γ) (−A sin t + B cos(t)) + α
= [2A0 + A(1 + γ)] sin(t) − [2B 0 + B(1 + γ)] cos(t) + α.

The secular terms are eliminated provided the coefficients of cos(t) and sin(t) vanish.
This yields two initial value problems
(
2A0 + A(1 + γ) = 0, A(0) = 0
0
2B + B(1 + γ) = 0, B(0) = 0

which has solutions A(τ ) = B(τ ) ≡ 0. It follows from (3.6.8) that φ0 ≡ 0 and we
need to investigate the O(ε2 ) problem. The general solution of the O(ε) problem is

φ1 (t, τ ) ≈ C(τ ) cos(t) + D(τ ) sin(t) + α, (3.6.9)

and the initial conditions gives C(0) = −α and D(0) = 0.

The O(ε2 ) equation is

∂t2 φ2 + 2∂t ∂τ φ1 + ∂τ2 φ0 + (1 + γ cos φ0 ) (∂t φ1 + ∂τ φ0 )


− γ (sin φ0 ) φ1 ∂t φ0 + (cos φ0 ) φ2 = 0.

Simplifying using φ₀ ≡ 0 we obtain
\[
\begin{aligned}
\partial_t^2\phi_2 + \phi_2 &= -2\partial_t\partial_\tau\phi_1 - (1+\gamma)\partial_t\phi_1\\
&= -2\big(-C'\sin(t) + D'\cos(t)\big) - (1+\gamma)\big(-C\sin(t) + D\cos(t)\big)\\
&= \big[2C' + C(1+\gamma)\big]\sin(t) - \big[2D' + D(1+\gamma)\big]\cos(t).
\end{aligned}
\]

The secular terms are eliminated provided the coefficients of cos(t) and sin(t) vanish.
This yields two initial value problems
(
2C 0 + C(1 + γ) = 0, C(0) = −α
0
2D + D(1 + γ) = 0, D(0) = 0

and the general solutions are D(τ ) ≡ 0 and


   
1+γ
C(τ ) = −α exp − τ .
2

Hence, a first-term approximation of the solution of the original problem (3.6.5) is
\[
\phi \sim \varepsilon\big[\alpha - \alpha e^{-(1+\gamma)\varepsilon t/2}\cos(t)\big] = \varepsilon\alpha\big[1 - e^{-(1+\gamma)\varepsilon t/2}\cos(t)\big].
\]


 

3. Consider the equation


ẍ + ẋ = −ε(x2 − x), 0 < ε  1. (3.6.10)

Use the method of multiple scales to show that


x0 (t, τ ) = A(τ ) + B(τ )e−t ,
with τ = εt, and identify any resonant terms at O(ε). Show that the non-resonance
condition is ∂τ A = A − A2 and describe the asymptotic behaviour of solutions.
Solution: With the slow scale τ = εt and assuming an asymptotic expansion of the
form
x(t, τ ) ∼ x0 (t, τ ) + εx1 (t, τ ) + . . . ,
the differential equation (3.6.10) becomes
h i  h i 
∂t2 + 2ε∂t ∂τ + ε2 ∂τ2 x0 + εx1 + . . . + ∂t + ε∂τ x0 + εx1 + . . .
h 2  i
= −ε x0 + εx1 + . . . − x0 + εx1 + . . .
h i
= −ε x20 − x0 + O(ε2 ).

The O(1) equation is


∂t2 x0 + ∂t x0 = 0,
and its general solution is

x0 (t, τ ) = A(τ ) + B(τ )e−t .

The O(ε) equation is

∂t2 x1 + ∂t x1 = −2∂t ∂τ x0 − ∂τ x0 − (x20 − x0 )


h i h i h i
−t −t −t 2 −t
= −2 − Bτ e − Aτ + Bτ e − (A + Be ) − (A + Be )
h i h i
= − A2 − A + Aτ − e−t Bτ − 2Bτ + 2AB − B − B 2 e−2t
= F (τ ) + G(τ )e−t + H(τ )e−2t .

Since the first two terms belongs to the kernel of the homogeneous operator, the
corresponding particular solution has the form F (τ ) and G(τ )te−t and only the first
one blows up as t −→ ∞, since

G(τ )te−t −→ 0 as t −→ ∞.

Hence, the non-resonance condition is F (τ ) = 0, or

∂τ A = A − A2 . (3.6.11)

A phase-plane analysis shows that the system (3.6.11) has an unstable fixed point
at A = 0 and a stable fixed point at A = 1. Thus, we conclude that A(τ ) −→ 1 as
τ −→ ∞, provided A(0) > 0.
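A quick numerical check of this prediction is given below; the values ε = 0.05 and the initial data x(0) = 0.2, ẋ(0) = 0 are hypothetical choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

eps = 0.05
def rhs(t, u):
    x, v = u
    return [v, -v - eps * (x**2 - x)]

T = 10.0 / eps                                   # about ten units of the slow time tau = eps*t
sol = solve_ivp(rhs, [0, T], [0.2, 0.0], rtol=1e-8, atol=1e-10)

# Non-resonance condition: dA/dtau = A - A^2 (logistic). With xdot(0)=0 we have B(0)=0,
# so A(0) = x(0) = 0.2 and A(tau) = 1/(1 + (1/0.2 - 1) e^(-tau)) -> 1 as tau -> infinity.
tau = eps * T
A_pred = 1.0 / (1.0 + (1.0 / 0.2 - 1.0) * np.exp(-tau))
print("x(T) =", sol.y[0, -1], " logistic prediction A(tau) =", A_pred)
```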

4. Consider the differential equation


ẍ + x = −εf (x, ẋ), with |ε|  1.

Let y = ẋ.

(a) Show that if E(t) = E(x(t), y(t)) = (x(t)2 + y(t)2 )/2, then

Ė = −εf (x, y)y.

Hence, show that E(t) is approximately 2π-periodic with x = A0 cos(t) + O(ε)


provided Z 2π
f (A0 cos τ, −A0 sin τ ) sin τ dτ = 0.
0

Solution: With y = ẋ, we have

ẏ = ẍ = −x − εf (x, ẋ) = −x − εf (x, y).

Therefore

Ė(x, y) = xẋ + y ẏ
= xy + y (−x − εf (x, y))
= −εf (x, y)y.

This means that to O(1), Ė = 0, which implies an unperturbed solution of the form x₀(t) = A₀ cos(t + θ₀) = A₀ cos(t). WLOG we may take θ₀ = 0, since we can shift time to eliminate any phase shift because we are dealing with an autonomous system. Assume asymptotic expansions for both E(x, y) and x(t):

x ∼ x0 + εx1 + . . .
E ∼ E0 + εE1 (t) + . . . .

From the expression of Ė(x, y), the O(ε) equation is

dE1  
= −f (x, y)y = −f (x0 + εx1 + . . . , x˙0 + εx˙1 + . . . ) x˙0 + εx˙1 + . . .
dt
= −f (x0 , x˙0 )x˙0 + O(ε).

Therefore, to O(1),
Z t
E1 (t) = E1 (0) − f (x0 (τ ), x˙0 (τ ))x˙0 (τ ) dτ
0
Z t
= E1 (0) + A0 f (A0 cos(τ ), −A0 sin(τ )) sin(τ ) dτ.
0

If t is a multiple of 2π, say t = 2πn, then
\[
E_1(2\pi n) = E_1(0) + nA_0\int_0^{2\pi} f\big(A_0\cos\tau, -A_0\sin\tau\big)\sin\tau\,d\tau,
\]

and we deduce that E1 is approximately 2π-periodic if and only if


Z 2π
f (A0 cos(τ ), −A0 sin(τ )) sin(τ ) dτ = 0.
0

(b) Suppose that the periodicity condition on part (a) does not hold. Let En =
E(x(2πn), y(2πn)). Show that to lowest order En satisfies a difference equation
of the form
En+1 = En + εF (En ),
with
\[
F(E_n) = \sqrt{2E_n}\int_0^{2\pi} f\big(\sqrt{2E_n}\cos\tau, -\sqrt{2E_n}\sin\tau\big)\sin\tau\,d\tau.
\]
Hint: Take x ∼ A cos t with A = √(2E) slowly varying over a single period of length 2π.

Solution: Since
\[
E(t) \sim E_0 + \varepsilon E_1(t) \sim \frac{A_0^2}{2},
\]
we have A₀(t) ≈ √(2E(t)) + O(ε). From part (a), taking A₀ = √(2Eₙ) to be effectively constant over the period [2πn, 2π(n+1)], we then have
\[
E_{n+1} = E\big(2\pi(n+1)\big) \sim E_n + \varepsilon\sqrt{2E_n}\int_0^{2\pi} f\big(\sqrt{2E_n}\cos\tau, -\sqrt{2E_n}\sin\tau\big)\sin\tau\,d\tau = E_n + \varepsilon F(E_n).
\]


(c) Hence, deduce that a periodic orbit with approximate amplitude A* = √(2E*) exists if F(E*) = 0, and that this orbit is stable if
\[
\varepsilon\frac{dF}{dE}(E^*) < 0.
\]
Hint: Spiralling orbits close to the periodic orbit x = A* cos(t) + O(ε) can be approximated by a solution of the form x = A cos(t) + O(ε).

Solution: From part (b), we have the one-dimensional map Eₙ₊₁ = Eₙ + εF(Eₙ). A fixed point E* of this map (so F(E*) = 0) has constant energy, and hence constant amplitude A* = √(2E*), over each period; to leading order this is a periodic orbit. Writing Eₙ = E* + δₙ and linearising gives δₙ₊₁ ≈ [1 + εF′(E*)]δₙ, so nearby orbits are attracted to the periodic orbit provided |1 + εF′(E*)| < 1, which for small ε holds precisely when εF′(E*) < 0.

(d) Using the above result, find the approximate amplitude of the periodic orbit of the
Van der Pol equation
ẍ + x + ε(x2 − 1)ẋ = 0
and verify that it is stable.

Solution: In this case we have f(x, y) = (x² − 1)y and so
\[
\begin{aligned}
F(E_n) &= \sqrt{2E_n}\int_0^{2\pi}\big[2E_n\cos^2\tau - 1\big]\big[-\sqrt{2E_n}\sin\tau\big]\sin\tau\,d\tau\\
&= \int_0^{2\pi}\big[1 - 2E_n\cos^2\tau\big]\,2E_n\sin^2\tau\,d\tau\\
&= \int_0^{2\pi}\big[2E_n\sin^2\tau - 4E_n^2\sin^2\tau\cos^2\tau\big]\,d\tau\\
&= \int_0^{2\pi}\Big[E_n\big(1-\cos 2\tau\big) - \frac{E_n^2}{2}\big(1-\cos 4\tau\big)\Big]\,d\tau\\
&= 2\pi\Big[E_n - \frac{E_n^2}{2}\Big]\\
&= \pi E_n\big[2 - E_n\big].
\end{aligned}
\]

Thus the zeros of F are E* = 0, 2, and the approximate amplitude of the periodic orbit of the Van der Pol equation is A* = √(2E*) = 2. This orbit is stable since
\[
F'(E_n) = 2\pi(1 - E_n) \implies F'(2) = -2\pi < 0.
\]

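This prediction is easy to test numerically. The sketch below uses a hypothetical value ε = 0.05 and initial data (0.5, 0), integrates the Van der Pol equation, and measures the amplitude after transients have died out.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Numerical check that the Van der Pol limit cycle has amplitude ~ 2 for small eps.
eps = 0.05
def vdp(t, u):
    x, v = u
    return [v, -x - eps * (x**2 - 1) * v]

sol = solve_ivp(vdp, [0, 400], [0.5, 0.0], rtol=1e-9, atol=1e-11, dense_output=True)

# Discard the transient and measure the amplitude on the attractor.
t_late = np.linspace(300, 400, 20001)
x_late = sol.sol(t_late)[0]
print("observed amplitude:", x_late.max(), " (prediction: A* = 2)")
```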
5. Consider the Van der Pol equation

ẍ + x + ε(x2 − 1)ẋ = Γ cos(ωt), 0 < ε  1,

with Γ = O(1) and ω 6= 1/3, 1, 3. Use the method of multiple scales to show that the
solution is attracted to  
Γ
x(t) = cos(ωt) + O(ε)
1 − ω2
when Γ2 ≥ 2(1 − ω 2 )2 and
1/2
Γ2
  
Γ
x(t) = 2 1 − cos t + cos(ωt) + O(ε)
2(1 − ω 2 )2 1 − ω2

when Γ2 < 2(1 − ω 2 )2 . Explain why this result breaks down when ω = 1/3, 1, 3.

Solution: Introducing the slow scale τ = εt and substituting the asymptotic expan-
sion
x ∼ x0 (t, τ ) + εx1 (t, τ ) + . . .
into the Van der Pol equation we obtain
h i   
2 2
∂t +2ε∂t ∂τ + ε ∂τ x0 + εx1 + . . . + x0 + εx1 + . . .
h 2 ih i 
+ ε x0 + εx1 + . . . − 1 ∂t + ε∂τ x0 + εx1 + . . . = Γ cos(ωt).

The O(1) equation is  


∂t2 + 1 x0 = Γ cos(ωt)
and its complementary solution is

xc0 (τ, t) = A(τ ) cos(t + θ(τ )) = A(τ ) cos(Ω(t, τ )).



Suppose ω 6= 1 so that we prevent secular terms of the O(1) equation. Assuming a


particular solution of the form

xp0 (t, τ ) = B(τ ) cos(ωt).

Substituting this into the O(1) equation yields

Γ
−ω 2 B + B = Γ =⇒ B = = δ.
1 − ω2
Thus the general solution of the O(1) equation is

x0 (t, τ ) = xc0 (t, τ ) + xp0 (t, τ ) = A(τ ) cos(Ω(t, τ )) + δ cos(ωt).

The O(ε) equation is


 
∂t2 + 1 x1 = −2∂t ∂τ x0 − (x20 − 1)∂t x0
h i
= 2 Aτ sin(Ω) + Cθτ cos(Ω) − x20 ∂t x0 + ∂t x0
h i h i
= 2 Aτ sin(Ω) + Cθτ cos(Ω) − A sin(Ω) + δω sin(ωt) − x20 ∂t x0
h i
= 2Aθτ cos(Ω) + 2Aτ − A sin(Ω) − δω sin(ωt) − x20 ∂t x0 .

We expand the term x20 ∂t x0 as follows:


h ih i
2 2 2 2 2
−x0 ∂t x0 = A cos (Ω) + δ cos (ωt) + 2Aδ cos(Ω) cos(ωt) A sin(Ω) + δω sin(ωt)
A3
= sin(2Ω) cos(Ω) + Aδ 2 sin(Ω) cos2 (ωt) + A2 δ sin(2Ω) cos(ωt)
2
δ3ω
+ A2 δω cos2 (Ω sin(ωt) + sin(2ωt) cos(ωt) + Aδ 2 ω cos(Ω) sin(2ωt).
2
We carefully apply double-angle formula and product-to-sum identity

2 cos2 (X) = 1 + cos(2X)


2 sin(X) cos(Y ) = sin(X + Y ) + sin(X − Y )

onto each term of x20 ∂t x0 :

A3 A3 h i
sin(2Ω) cos(Ω) = sin(3Ω) + sin(Ω)
2 4
Aδ 2 h i
Aδ 2 sin(Ω) cos2 (ωt) = sin(Ω) 1 + cos(2ωt)
2
Aδ 2 h i
= sin(Ω) + sin(Ω) cos(2ωt)
2
Aδ 2 h i
= 2 sin(Ω) + sin(Ω + 2ωt) + sin(Ω − 2ωt)
4

A2 δ h i
A2 δ sin(2Ω) cos(ωt) = sin(2Ω + ωt) + sin(2Ω − ωt)
2
A2 δω h i
A2 δω cos2 (Ω) sin(ωt) = sin(ωt) 1 + cos(2Ω)
2
2
A δω h i
= sin(ωt) + sin(ωt) cos(2Ω)
2
A2 δω h i
= 2 sin(ωt) + sin(ωt + 2Ω) + sin(ωt − 2Ω)
4
2
A δω h i
= 2 sin(ωt) + sin(2Ω + ωt) − sin(2Ω − ωt)
4
δ3ω δ3ω h i
sin(2ωt) cos(ωt) = sin(3ωt) + sin(ωt)
w 4
Aδ 2 ω h i
Aδ 2 ω cos(Ω) sin(2ωt) = sin(2ωt + Ω) + sin(2ωt − Ω)
2
Aδ 2 ω h i
= sin(Ω + 2ωt) − sin(Ω − 2ωt) .
2
Combining everything, the O(ε) equation takes the form

A3 Aδ 2
   3

2
 h i A
∂t + 1 x1 = 2Aθτ cos(Ω) + 2Aτ − A + + sin(Ω) + sin(3Ω)
4 2 4
A2 δω δ 3 ω
   3 
δ ω
+ −δω + + sin(ωt) + sin(3ωt)
2 4 4
 2
Aδ 2 ω
 2
Aδ 2 ω
 
Aδ Aδ
+ + sin(Ω + 2ωt) + − sin(Ω − 2ωt)
4 2 4 2
 2
A δ A2 δω
 2
A δ A2 δω
 
+ + sin(2Ω + ωt) + − sin(2Ω − ωt).
2 4 2 4

Terms of the form sin(ωt) and sin(3ωt) will be resonant if ω = 1, 1/3. Terms of the
form sin(Ω ± 2ωt) will be resonant if

t ± 2ωt = (1 ± 2ω)t = ±t ⇐⇒ ω = 0, 1.

Terms of the form sin(2Ω ± ωt) will be resonant if

2t ± ωt = (2 ± ω)t = ±t ⇐⇒ ω = 1, 3.

Therefore, if we assume that ω 6= 1/3, 1, 3 then the only resonant terms on the right-
hand side of the O(ε) equation are those involving cos(Ω) and sin(Ω) and we require
their coefficients to vanish, i.e.

A3 Aδ 2
2Aθτ = 0 and 2Aτ = A − −
4 2 
2
δ2

A
=A 1− −
4 2

A2 δ2
 
= −A − C(δ) , where C(δ) = 1 − .
4 2

We now perform a case analysis:

(a) When C(δ) < 0, that is, δ 2 > 2, Aτ has only one fixed point at A = 0 and this
is asymptotically stable, i.e. A(τ ) −→ 0 as τ −→ ∞ for any initial conditions
A(0). Therefore, the solution is attracted to
 
Γ
x(t) = δ cos(ωt) + O(ε) = cos(ωt) + O(ε).
1 − ω2
p
(b) When C(δ) > 0, that is, δ 2 < 2, Aτ has three fixed points A0 = 0, ±2 C(δ) and
a phase-plane analysis shows that the fixed point A0 = 0 becomes unstable and
the other two are stable. Therefore, as τ −→ ∞ we have
( p
2 C(δ) if A(0) > 0,
A(τ ) −→ p
−2 C(δ) if A(0) < 0.

In the case of positive A(0), the solution is then attracted to


p
x(t) = 2 C(δ) cos(t + θ0 ) + δ cos(ωt) + O(ε)
1/2 
Γ2
 
Γ
=2 1− + cos(ωt) + O(ε).
2(1 − ω 2 )2 1 − ω2

6. Multiple scales with nonlinear wave equations. The Korteweg–de Vries (KdV) equation is
\[
u_t + u_x + \alpha uu_x + \beta u_{xxx} = 0, \qquad x \in \mathbb{R},\ t > 0,
\]
where α, β are positive real constants and u(x, 0) = εf(x) for 0 < ε ≪ 1.

(a) Let θ = kx − ωt and seek traveling wave solutions using an expansion of the form

u(x, t) ∼ ε[u0 (θ) + εu1 (θ) + . . . ],

where ω = k − βk 3 and k > 0 is a constant. Show that this can lead to secular
terms.

Solution:

(b) Use multiple scales (variables θ, εx, εt) to eliminate the secular terms in part (a) and find a first-term expansion. In the process, show that f(x) must have the form
\[
f(x) = A\cos(kx + \phi)
\]
for constants A and φ in order to generate a travelling wave. Hint: Use the fact that f(x) is independent of ε.

Solution:
Chapter 4

The Wentzel-Kramers-Brillouin
(WKB) Method

The WKB method, named after Wentzel, Kramers and Brillouin, is a method for finding approximate solutions to linear differential equations with spatially varying coefficients. The origins of WKB theory date back to the 1920s, when it was developed by Wentzel, Kramers and Brillouin to study the time-independent Schrödinger equation. This often leads to problems of the form
\[
\frac{d^2y}{dx^2} - q(\varepsilon x)\,y = 0,
\]
with a slowly varying potential. To handle such problems, the WKB method introduces an ansatz in which the solution is written as the product of a slowly varying amplitude and an exponentially rapidly varying factor.

4.1 Introductory Example


Consider the differential equation

ε2 y 00 − q(x)y = 0 on x ∈ [0, 1], (4.1.1)

where q is a smooth function. For constant q, the general solution of (4.1.1) is


√ √
y(x) = a0 e−x q/ε
+ b0 e x q/ε

and the solution either blows up (q > 0) or oscillates (q < 0) rapidly on a scale of O(ε). The
hypothesis of the WKB method is that this exponential solution can be generalised to obtain
an approximate solution of the full problem (4.1.1).
We start with the following general WKB ansatz:
α
y(x) ∼ eθ(x)/ε [y0 (x) + εα y1 (x) + . . . ] as ε −→ 0 (4.1.2)

for some α > 0. Here, we assume that the solution varies exponentially with respect to the
fast variation. From (4.1.2) we obtain:
\[
y' \sim \big[\varepsilon^{-\alpha}\theta_x y_0 + \big(y_0' + \theta_x y_1\big) + \dots\big]e^{\theta/\varepsilon^\alpha}, \tag{4.1.3a}
\]
\[
y'' \sim \big[\varepsilon^{-2\alpha}\theta_x^2 y_0 + \varepsilon^{-\alpha}\big(\theta_{xx}y_0 + 2\theta_x y_0' + \theta_x^2 y_1\big) + \dots\big]e^{\theta/\varepsilon^\alpha}, \tag{4.1.3b}
\]


Figure 4.1: An example of turning points: quantum tunneling. Depending on effective potential
energy, the solutions have different behavior and need to be matched (Taken from Wikipedia
Commons).

α
y 000 ∼ ε−3α θx3 y0 + ε−2α θx 3θx y00 + 3θxx y0 + θx2 y1 + . . . eθ/ε
 
(4.1.3c)
α
y 0000 ∼ ε−4α θx4 y0 + ε−3α θx2 6θxx y0 + 4θx y00 + θx2 y1 + . . . eθ/ε
 
(4.1.3d)

Substituting both (4.1.2) and (4.1.3) into (4.1.1) and cancelling the exponential term yields
\[
\varepsilon^2\left[\frac{\theta_x^2 y_0}{\varepsilon^{2\alpha}} + \frac{1}{\varepsilon^\alpha}\big(\theta_{xx}y_0 + 2\theta_x y_0' + \theta_x^2 y_1\big) + \dots\right] - q(x)\big[y_0 + \varepsilon^\alpha y_1 + \dots\big] = 0. \tag{4.1.4}
\]
Such cancellation is possible due to the linearity of the equation!


Balancing leading-order terms in (4.1.4) we see that α = 1. The O(1) equation is the
well-known eikonal equation:
θx2 = q(x), (4.1.5)
and its solutions (in one-dimensional) are
Z x p
θ(x) = ± q(s) ds. (4.1.6)

To determine y0 (x), we need to solve the O(ε) equation which is the transport equation:

θxx y0 + 2θx y00 + θx2 y1 = q(x)y1 . (4.1.7)

The y1 terms cancel out due to the eikonal equation (4.1.5) and (4.1.7) reduces to

θxx y0 + 2θx y00 = 0. (4.1.8)

This can be easily solved since it is separable:
\[
\frac{y_0'}{y_0} = -\frac{\theta_{xx}}{2\theta_x}, \qquad
\ln|y_0| = -\frac{1}{2}\ln|\theta_x| + C, \qquad
y_0(x) = \frac{C}{\sqrt{\theta_x}} = Cq(x)^{-1/4},
\]
where C is an arbitrary nonzero constant and the last line follows from (4.1.6). Hence, a
first-term asymptotic approximation of the general solution of (4.1.1) is
\[
y(x) \sim q(x)^{-1/4}\left[a_0\exp\Big(-\frac{1}{\varepsilon}\int^x\!\sqrt{q(s)}\,ds\Big) + b_0\exp\Big(\frac{1}{\varepsilon}\int^x\!\sqrt{q(s)}\,ds\Big)\right], \tag{4.1.9}
\]
where a0 , b0 are arbitrary constants, possibly complex. It is evident that (4.1.9) is valid if
q(x) 6= 0 on [0, 1]. The x-values where q(x) = 0 are called turning points and this nontrivial
issue will be addressed in Section 4.2.

Example 4.1.1. Choose q(x) = −e^{2x}. Then the WKB approximation (4.1.9) is
\[
y(x) \sim e^{-x/2}\big[a_0 e^{-ie^x/\varepsilon} + b_0 e^{ie^x/\varepsilon}\big] = e^{-x/2}\big[\alpha_0\cos(\lambda e^x) + \beta_0\sin(\lambda e^x)\big],
\]
where λ = 1/ε. With boundary conditions y(0) = a, y(1) = b, we obtain
\[
y(x) \sim e^{-x/2}\,\frac{b\sqrt{e}\,\sin\big(\lambda(e^x - 1)\big) - a\sin\big(\lambda(e^x - e)\big)}{\sin\big(\lambda(e - 1)\big)}.
\]
The exact solution of (4.1.1) with q(x) = −e2x can be solved as follows. Performing a change
of variable x̃ = ex /ε = λex , we obtain
dx 1
x = ln(ε) + ln(x̃) =⇒ = .
dx̃ x̃
Setting Y (x̃) = y(x), it follows from the chain rule that
dY dy dx y0
= =
dx̃ dx dx̃ x̃
0 00
2
dY y y 1 dY y 00
= − + = − + .
dx̃2 x̃2 x̃2 x̃ dx̃ x̃2
Consequently, the equation of Y (x̃) is the zeroth-order Bessel’s differential equation
d2 Y dY
x̃2 2
+ x̃ + x̃2 Y = 0,
dx̃ dx̃
and the solution of this is
Y (x̃) = c0 J0 (x̃) + d0 Y0 (x̃) = c0 J0 (λex ) + d0 Y0 (λex ) = y(x),
where J0 (·) and Y0 (·) are the zeroth-order Bessel functions of the first and second kinds respec-
tively. Finally, solving for c0 and d0 using the boundary conditions yields
1
c0 = [bY0 (λ) − aY0 (λe)]
D
1
d0 = [aJ0 (λe) − bJ0 (λ)]
D
D = J0 (λe)Y0 (λ) − Y0 (λe)J0 (λ).
One can plot the exact solution and the WKB approximation and see that their difference is
almost zero!
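The comparison is easy to carry out numerically; in the sketch below the boundary values a = 1, b = 2 and ε = 0.05 are hypothetical choices.

```python
import numpy as np
from scipy.special import j0, y0 as bessel_y0

# WKB approximation vs. exact Bessel-function solution for eps^2 y'' = -e^(2x) y on [0,1].
eps, a, b = 0.05, 1.0, 2.0
lam = 1.0 / eps
x = np.linspace(0.0, 1.0, 2001)

# WKB approximation from Example 4.1.1
ywkb = np.exp(-x / 2) * (b * np.sqrt(np.e) * np.sin(lam * (np.exp(x) - 1))
                         - a * np.sin(lam * (np.exp(x) - np.e))) / np.sin(lam * (np.e - 1))

# Exact solution in terms of zeroth-order Bessel functions
D = j0(lam * np.e) * bessel_y0(lam) - bessel_y0(lam * np.e) * j0(lam)
c0 = (b * bessel_y0(lam) - a * bessel_y0(lam * np.e)) / D
d0 = (a * j0(lam * np.e) - b * j0(lam)) / D
yexact = c0 * j0(lam * np.exp(x)) + d0 * bessel_y0(lam * np.exp(x))

print("max |WKB - exact| =", np.abs(ywkb - yexact).max())
```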

To measure the error of the WKB approximation (4.1.9), we look at the O(ε2 ) equation
which has the form

θxx y1 + 2θx y10 + θx2 y2 + y000 = q(x)y2 . (4.1.10)

The y2 terms vanish due to the eikonal equation (4.1.5) and so (4.1.10) reduces to

θxx y1 + 2θx y10 + y000 = 0. (4.1.11)

Because the first two terms of (4.1.11) are similar to the transport equation (4.1.8), we make
an ansatz y1 (x) = y0 (x)w(x). (4.1.11) reduces to

2θx y0 w0 + y000 = 0. (4.1.12)

Suppose q(x) > 0 so that θx is a real-valued function. Rearranging (4.1.12) in terms of w0 and
integrating by parts with respect to x we obtain

2θx y0 w0 = −y000
2Cθx w0 d2
   
C d Cθxx
√ =− 2 √ =
θx dx θx dx 2θx3/2
  
0 1 d θxx 1
w = √
4 dx θx3/2 θx
Z x    
1 d θxx 1
w(x) = √ ds
4 dx θx3/2 θx
1 x θxx
  Z    
1 θxx d 1
=d+ − 3/2
√ ds
4 θx2 4 θx dx θx
1 x θxx
  Z  2 
1 θxx
=d+ + ds,
4 θx2 8 θx3

where d is an arbitrary constant. On the other hand, θx is a complex-valued function if q(x) < 0,

i.e. θx = ±i −q. We then have

 
i −qx iqx
θxx =± √ =∓ √
2 −q 2 −q
θxx iqx iqx
2
=∓ √ =±
θx 2q −q 2(−q)3/2
−qx2 q2
2
θxx = = x
4(−q) 4q
√ 3
θx3 = (±i)3 −q = ∓i (−q)3/2
2
θxx qx2 iqx2
= =∓ .
θx3 ∓4iq(−q)3/2 4(−q)5/2

Consequently,
 Z x 2 
1 qx 1 qx p
d+ + ds if θx (x) = q(x),


 3/2



 8q 32 q 5/2
Z x 2 
1 qx 1 qx

 p
d − − if θx (x) = −


 3/2
ds q(x),
8q 32 q 5/2
w(x) = Z x
iqx2

 1 iqx 1 p

 d+ 3/2
− 5/2
ds if θx (x) = i −q(x),
8 (−q) 32 (−q)




 Z x 2


 1 iq x 1 iq x
p
d − + ds if θx (x) = −i −q(x).


8 (−q)3/2 32 (−q)5/2
Finally, for small ε the WKB ansatz (4.1.2) is well-ordered provided
|εy1 (x)|  |y0 (x)|, or |εw(x)|  1.
In terms of the function q(x) and its first derivatives, for x ∈ [x0 , x1 ] we will have an accurate
approximation if   Z x1 
1 qx qx
ε |d| + 3/2
4+ dx  1,
32 q

x0
q
where | · | := k · k∞ over the interval [x0 , x1 ]. We stress that this condition holds if the interval
[x0 , x1 ] does not contain a turning point.

Remark 4.1.2. The constants a0 , b0 in (4.1.9) and d in w(x) are determined from boundary
conditions. However, it is very possible that these constants depend on ε. It is therefore
necessary to make sure this dependence does not interfere with the ordering assumed in the
WKB ansatz (4.1.2).

4.2 Turning Points


This section is devoted to the analysis of turning points of q(x). Assume q(x) is smooth and has
a simple zero at xt ∈ [0, 1], i.e. q(xt ) = 0 and q 0 (xt ) 6= 0. For concreteness, we take q 0 (xt ) > 0
and so we expect solutions of (4.1.1) to be oscillatory for x < xt and exponential for x > xt .
We can apply the WKB method on the regions {x < xt } and {x > xt }. More precisely, from
(4.1.9) we have (
yL (x, xt ) if x < xt ,
y∼ (4.2.1)
yR (x, xt ) if x > xt ,
where
1 xt p
  Z   Z xt 
1 1 p
yL (x, xt ) = aL exp − q(s) ds + bL exp q(s) ds (4.2.2a)
q(x)1/4 ε x ε x
1 xp
  Z   Z x 
1 1 p
yR (x, xt ) = aR exp − q(s) ds + bR exp q(s) ds . (4.2.2b)
q(x)1/4 ε xt ε xt
An important realization is that these coefficients aL , bL , aR , bR are not all independent. In
addition to the two boundary conditions at x = 0 and x = 1, we also have matching conditions
in a transition layer centered at x = xt .

4.2.1 Transition layer


Following the boundary layer analysis, we introduce the boundary layer coordinate
x − xt
x̃ = or x = xt + εβ x̃.
εβ
We can reduce (4.1.1) by expanding the function q(x) around the turning point xt

q(x) = q(xt + εβ x̃) = q(xt ) + q 0 (xt )εβ x̃ + . . .


≈ εβ x̃q 0 (xt ).

Denote the inner solution by Y (x̃). Transforming (4.1.1) using

d 1 d
= β
dx ε dx̃
gives the inner equation
ε2−2β Y 00 − εβ x̃qt0 + . . . Y = 0,

(4.2.3)
where qt0 := q 0 (xt ). Balancing leading-order terms in (4.2.3) means we require

2
2 − 2β = β =⇒ β = .
3
Since it is not clear what the asymptotic sequence should be, we take the asymptotic
expansion to be
Y ∼ εγ Y0 (x̃) + . . . . (4.2.4)
The O(ε2/3 ) equation is
Y000 − x̃qt0 Y0 = 0, −∞ < x̃ < ∞. (4.2.5)
Performing a coordinate transformation s = (qt0 )1/3 x̃, (4.2.5) becomes the Airy’s equation:

d2 Y0
− sY0 = 0, −∞ < s < ∞, (4.2.6)
ds2
and this can be solved either using power series expansion or Laplace transform. The general
solution of (4.2.6) is
Y0 (s) = aAi(s) + bBi(s), (4.2.7)
where Ai(·) and Bi(·) are Airy functions of the first and the second kinds respectively. It is
well-known that
∞    
1 X 1 k+1 2π k
Ai(x) = 2/3 Γ sin (k + 1) 31/3 x
3 π k=0 k! 3 3
   
1 3 0 1 4
= Ai(0) 1 + x + . . . + Ai (0) x + x + . . .
6 12
iπ/6 2πi/3 −iπ/6 −2πi/3
 
Bi(x) = e Ai xe +e Ai xe
   
1 3 0 1 4
= Bi(0) 1 + x + . . . + Bi (0) x + x + . . . ,
6 12

Figure 4.2: Plot of the two Airy functions (Taken from Wikipedia Commons).

where Γ(·) is the gamma function. Setting ξ = (2/3)|x|^{3/2}, we also have that
\[
\mathrm{Ai}(x) \sim \begin{cases}
\dfrac{1}{\sqrt{\pi}|x|^{1/4}}\Big[\cos\big(\xi - \tfrac{\pi}{4}\big) + \dfrac{5}{72\xi}\sin\big(\xi - \tfrac{\pi}{4}\big)\Big] & \text{if } x \to -\infty,\\[2ex]
\dfrac{1}{2\sqrt{\pi}|x|^{1/4}}\,e^{-\xi}\Big[1 - \dfrac{5}{72\xi}\Big] & \text{if } x \to +\infty,
\end{cases} \tag{4.2.8a}
\]
\[
\mathrm{Bi}(x) \sim \begin{cases}
\dfrac{1}{\sqrt{\pi}|x|^{1/4}}\Big[\cos\big(\xi + \tfrac{\pi}{4}\big) + \dfrac{5}{72\xi}\sin\big(\xi + \tfrac{\pi}{4}\big)\Big] & \text{if } x \to -\infty,\\[2ex]
\dfrac{1}{\sqrt{\pi}|x|^{1/4}}\,e^{\xi}\Big[1 + \dfrac{5}{72\xi}\Big] & \text{if } x \to +\infty.
\end{cases} \tag{4.2.8b}
\]
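These expansions can be checked directly against a numerical evaluation of the Airy functions; in the small sketch below the test points x = ±6 are arbitrary.

```python
import numpy as np
from scipy.special import airy

# Quick check of the large-|x| asymptotics (4.2.8) against scipy's Airy functions.
for x in (6.0, -6.0):
    Ai, _, Bi, _ = airy(x)
    xi = 2.0 / 3.0 * abs(x) ** 1.5
    pre = 1.0 / (np.sqrt(np.pi) * abs(x) ** 0.25)
    if x > 0:
        Ai_asym = 0.5 * pre * np.exp(-xi) * (1 - 5.0 / (72.0 * xi))
        Bi_asym = pre * np.exp(xi) * (1 + 5.0 / (72.0 * xi))
    else:
        Ai_asym = pre * (np.cos(xi - np.pi / 4) + 5.0 / (72.0 * xi) * np.sin(xi - np.pi / 4))
        Bi_asym = pre * (np.cos(xi + np.pi / 4) + 5.0 / (72.0 * xi) * np.sin(xi + np.pi / 4))
    print(f"x={x:+.1f}: Ai rel. err {abs(Ai - Ai_asym)/abs(Ai):.2e}, "
          f"Bi rel. err {abs(Bi - Bi_asym)/abs(Bi):.2e}")
```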

4.2.2 Matching
From (4.2.7), the general solution of (4.1.1) in the transition layer is
h i h i
0 1/3 0 1/3
Y0 (x̃) = aAi (qt ) x̃ + bBi (qt ) x̃ . (4.2.9)

We now have 6 undetermined constants from (4.2.2) and (4.2.9), but these are all connected
since the inner solution (4.2.9) must match the outer solutions (4.2.2). These will results in
two arbitrary constants in the general solution (4.2.1). Since the inner solution is unbounded,
we introduce an intermediate variable
x − xt 2
xη = η
, 0<η< ,
ε 3
where the interval for η comes from the requirement that the scaling for the intermediate vari-
able must lie between the outer scale, O(1) and the inner scale, O(ε2/3 ).

4.2.3 Matching for x > xt


We first change the stretched variable x̃ to the intermediate variable xη :
x − xt x − xt
x̃ = β
= η β−η = εη−β xη = εη−2/3 xη .
ε ε ε
Note that xη > 0 since x > xt . From (4.2.4) and (4.2.9), the inner solution Y (x̃) now becomes

Y ∼ εγ Y0 εη−2/3 xη + . . .

h    i
1/3 1/3
∼ εγ aAi (qt0 ) εη−2/3 xη + bBi (qt0 ) εη−2/3 xη + . . .
∼ εγ [aAi(r) + bBi(r)] + . . .
    
γ a 2 3/2 b 2 3/2
∼ε √ exp − r + √ 1/4 exp r , (4.2.10)
2 πr1/4 3 πr 3

where r = q 0 (xt )1/3 εη−2/3 xη > 0 and the last line follows from (4.2.8). On the other hand, since
Z x p Z xt +εη xη p
q(s) ds ∼ (s − xt )qt0 ds
xt xt
  xt +εη xη
0 2
p 3/2

= qt (s − xt )
3
xt
2
qt0 (εη xη )3/2
p
=
3
2
= εr3/2
3
and
−1/4 −1/4 −1/6
q(x)−1/4 ∼ [q(xt ) + (x − xt )qt0 ] = [εη xη qt0 ] = ε−1/6 (qt0 ) r−1/4 ,

the right outer solution yR becomes

ε−1/6
    
2 3/2 2 3/2
yR ∼ aR exp − r + bR exp r . (4.2.11)
(qt0 )1/6 r1/4 3 3

Consequently, matching (4.2.10) the right outer solution yR with (4.2.11) the inner solution Y
yields the following:

1 a 1/6 b 1/6
γ = − , aR = √ (qt0 ) , bR = √ (qt0 ) . (4.2.12)
6 2 π π

4.2.4 Matching for x < xt


Because x < xt , we have xη < 0 which introduces complex numbers into the outer solution
yL . Using the asymptotic properties of Airy functions as r −→ −∞ (see (4.2.8)), the inner
solution becomes

Y ∼ εγ [aAi(r) + bBi(r)] + . . .
    
γ a 2 3/2 π b 2 3/2 π
∼ε √ cos |r| − +√ cos |r| + .
π|r|1/4 3 4 π|r|1/4 3 4

Using the identity cos θ = (eiθ + e−iθ )/2, a more useful form of the inner expansion Y as
r −→ −∞ is

εγ h
−iπ/4 iπ/4
 iζ iπ/4 −iπ/4
 −iζ i
Y ∼ √ ae + be e + ae + be e , (4.2.13)
2 π|r|1/4

2
where ζ = |r|3/2 . On the other hand, since
3
Z xt p Z xt p
q(s) ds ∼ (s − xt )qt0 ds
x η
xt +ε xη
  xt
0 2
p 3/2

= qt (s − xt )
3
xt +εη xη
2
qt0 (εη xη )3/2
p
=−
3
2
= − ε|r|3/2 (−1)3/2
3
2
= iε|r|3/2 ,
3

and
−1/4 −1/6
q(x)−1/4 ∼ [εη xη qt0 ] = ε−1/6 (qt0 ) |r|−1/4 (−1)−1/4
−1/6
= ε−1/6 (qt0 ) |r|−1/4 e−iπ/4 ,

the left outer solution yL becomes

ε−1/6 e−iπ/4 
aL e−iζ + bL eiζ .

yL ∼ (4.2.14)
(qt0 )1/6 |r|1/4

Consequently, matching (4.2.14) the left outer solution yL with (4.2.13) the inner solution Y
yields the following:

(qt0 )1/6 (qt0 )1/6


aL = √ (ia + b) , bL = √ (a + ib) = iāL . (4.2.15)
2 π 2 π

From (4.2.12), it follows that

bR i
aL = iaR + , b L = aR + b R (4.2.16)
2 2

or in matrix form     
aL i 1/2 aR
= . (4.2.17)
bL 1 i/2 bR

4.2.5 Conclusion
Because we assume q(x) < 0 for x < xt, complex quantities appear in yL:
\[
q(x)^{-1/4} = e^{-i\pi/4}|q(x)|^{-1/4}, \qquad \int_x^{x_t}\!\sqrt{q(s)}\,ds = i\int_x^{x_t}\!\sqrt{|q(s)|}\,ds.
\]

In conclusion, we have
\[
y(x) = \begin{cases} y_L(x, x_t) & \text{if } x < x_t,\\ y_R(x, x_t) & \text{if } x > x_t,\end{cases}
\]
where
\[
\begin{aligned}
y_L(x, x_t) &= \frac{1}{|q(x)|^{1/4}}\left\{\Big[ia_R + \frac{b_R}{2}\Big]e^{-i\theta(x)/\varepsilon}e^{-i\pi/4} + \Big[a_R + \frac{ib_R}{2}\Big]e^{i\theta(x)/\varepsilon}e^{-i\pi/4}\right\}\\
&= \frac{1}{|q(x)|^{1/4}}\left\{a_R\Big[e^{-i\theta(x)/\varepsilon}e^{i\pi/4} + e^{i\theta(x)/\varepsilon}e^{-i\pi/4}\Big] + \frac{b_R}{2}\Big[e^{-i\theta(x)/\varepsilon}e^{-i\pi/4} + e^{i\theta(x)/\varepsilon}e^{i\pi/4}\Big]\right\}\\
&= \frac{1}{|q(x)|^{1/4}}\left\{2a_R\cos\Big(\frac{\theta(x)}{\varepsilon} - \frac{\pi}{4}\Big) + b_R\cos\Big(\frac{\theta(x)}{\varepsilon} + \frac{\pi}{4}\Big)\right\},\\
y_R(x, x_t) &= \frac{1}{q(x)^{1/4}}\Big[a_R e^{-\kappa(x)/\varepsilon} + b_R e^{\kappa(x)/\varepsilon}\Big],\\
\theta(x) &= \int_x^{x_t}\!\sqrt{|q(s)|}\,ds, \qquad \kappa(x) = \int_{x_t}^{x}\!\sqrt{|q(s)|}\,ds.
\end{aligned}
\]

Example 4.2.1. Consider q(x) = x(2 − x), where −1 < x < 1. The simple turning point is at xt = 0, with q'(0) = 2 > 0. One can compute and show that
\[
\theta(x) = \frac{1}{2}(1-x)\sqrt{x(x-2)} - \frac{1}{2}\ln\Big[1 - x + \sqrt{x(x-2)}\Big], \qquad x < 0,
\]
\[
\kappa(x) = \frac{1}{2}(x-1)\sqrt{x(2-x)} + \frac{1}{2}\arcsin(x-1) + \frac{\pi}{4}, \qquad x > 0.
\]
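The closed forms above (including the sign of the arcsin term) can be verified by comparing with direct numerical quadrature of √|q|; the sample points x = −0.7 and x = 0.6 in the sketch below are arbitrary.

```python
import numpy as np
from scipy.integrate import quad

# Check the closed forms in Example 4.2.1 against direct quadrature of |q|^(1/2).
q = lambda s: s * (2.0 - s)

def theta_exact(x):   # valid for x < 0
    return 0.5 * (1 - x) * np.sqrt(x * (x - 2)) - 0.5 * np.log(1 - x + np.sqrt(x * (x - 2)))

def kappa_exact(x):   # valid for 0 < x < 1
    return 0.5 * (x - 1) * np.sqrt(x * (2 - x)) + 0.5 * np.arcsin(x - 1) + np.pi / 4

x1, x2 = -0.7, 0.6
theta_num, _ = quad(lambda s: np.sqrt(abs(q(s))), x1, 0.0)
kappa_num, _ = quad(lambda s: np.sqrt(abs(q(s))), 0.0, x2)
print("theta:", theta_exact(x1), "vs quadrature", theta_num)
print("kappa:", kappa_exact(x2), "vs quadrature", kappa_num)
```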

4.2.6 The opposite case: qt0 < 0


The approximation derived for q 0 (xt ) > 0 can be used when q 0 (xt ) < 0 by simply making the
change of variables z = xt − x. This results in
    
aL i/2 1 aR
= .
bL 1/2 i bR
Consequently,
1  θ(x)/ε
+ bL e−θ(x)/ε

yL (x) = 1/4
aL e
q(x)
    
1 1 π 1 π
yR (x) = 2bL cos κ(x) − + aL cos κ(x) + .
|q(x)|1/4 ε 4 ε 4

4.3 Wave Propagation and Energy Methods


In this section, we study how to obtain an asymptotic approximation of a travelling-wave
solution of the following PDE which models the string displacement
uxx = µ2 (x)utt + α(x)ut + β(x)u, 0 < x < ∞, t > 0 (4.3.1a)
u(0, t) = cos(ωt) (4.3.1b)
The terms α(x)ut and βu correspond to damping and elastic support respectively. From the
initial condition, we see that the string is periodically forced at the left end and so the solution
will develop into a wave that propagates to the right.
Observe that there is no obvious small parameter ε, but we will extract one from the
following observation. In the special case where α = β = 0 and µ equals some constant, (4.3.1)
reduces to the classical wave equation and we obtain the right-moving plane waves
u(x, t) = ei(wt−kx) , where the wavenumber k satisfies k = ±ωµ.


For higher temporal frequencies ω  1, these waves have short wavelength, i.e. λ =  1.
k
Motivated by this, we choose ε = 1/ω and construct an asymptotic approximation of the
travelling-wave solution of (4.3.1) in the case of a high frequency. The WKB ansatz is assumed
to be  
    
 
γ
 1 
u(x, t) ∼ exp i wt − w θ(x)
   u0 (x) + γ u1 (x) + . . . . (4.3.2)
| {z } 
 w {z } 

fast oscillation  | 
slowly-varying

Substituting (4.3.2) into (4.3.1) we obtain


d
−ω 2γ θx2 u0 + w−γ u1 + . . . + iwγ θx (∂x u0 + . . . ) + (iω γ θx u0 + . . . )

dx
= −µ2 ω 2 u0 + ω −γ u1 + . . . − iωα (u0 + . . . ) + β (u0 + . . . ) .


Balancing the first terms on each side of this equation gives γ = 1. The O(ω 2 ) = O(1/ε2 )
equation is the eikonal equation:
θx2 = µ2 (x), (4.3.3)
and its solutions are Z x
θ(x) = ± µ(s) ds. (4.3.4)
0
We choose the positive solution as we are considering the right-moving waves. The O(ω) =
O(1/ε) equation is the transport equation:
− θx2 u1 + iθx ∂x u0 + i (θx ∂x u0 + θxx u0 ) = −µ2 u1 − iαu0 . (4.3.5)
The u1 terms cancel out due to the eikonal equation (4.3.3), so (4.3.5) reduces to
\[
\theta_{xx}u_0 + 2\theta_x\partial_x u_0 = -\alpha u_0. \tag{4.3.6}
\]
With θx = µ(x), we can rearrange (4.3.6) and obtain a first-order ODE for u0:
\[
\partial_x u_0 + \left(\frac{\mu_x + \alpha}{2\mu}\right)u_0 = 0, \tag{4.3.7}
\]


which can be solved using the method of integrating factors. The integrating factor is given by
\[
I(x) = \exp\left(\int_0^x \frac{\mu_s(s) + \alpha(s)}{2\mu(s)}\,ds\right) = \sqrt{\mu(x)}\,\exp\left(\frac{1}{2}\int_0^x\frac{\alpha(s)}{\mu(s)}\,ds\right),
\]
and so (4.3.7) can be written as
\[
\frac{d}{dx}\big(I(x)u_0\big) = 0, \qquad u_0 = \frac{a_0}{I(x)} = \frac{a_0}{\sqrt{\mu(x)}}\exp\left(-\frac{1}{2}\int_0^x\frac{\alpha(s)}{\mu(s)}\,ds\right). \tag{4.3.8}
\]
Finally, imposing the boundary condition at x = 0 we obtain a first-term asymptotic expansion of the travelling-wave solution of (4.3.1):
\[
u(x,t) \sim \sqrt{\frac{\mu(0)}{\mu(x)}}\,\exp\left(-\frac{1}{2}\int_0^x\frac{\alpha(s)}{\mu(s)}\,ds\right)\cos\left(\omega t - \omega\int_0^x\mu(s)\,ds\right). \tag{4.3.9}
\]

Observe that in (4.3.9) the amplitude and phase of the travelling wave depend on the spatial
position x. Interestingly, (4.3.9) is independent of β(x).

4.3.1 Connection to energy methods


Energy methods are extremely powerful in the study of wave-related problems. To determine
the energy equation in this case, we multiply (4.3.1) by ut :
ut uxx = µ2 (x)ut utt + α(x)u2t + β(x)uut
1  1
∂x (ut ux ) − ∂t u2x = µ2 (x)∂t u2t + α(x)u2t + β(x)∂t u2
 
  2 2
1 1 1 2
∂t µ2 (x) u2t + β(x)u2 + ux − ∂x (ut ux ) = −α(x)u2t
 
2 2 2
∂t E(x, t) + ∂x S(x, t) = −Φ(x, t),
where
1 1 1
E(x, t) = energy density := µ2 (x) (∂t u)2 + (∂x u)2 + β(x)u2
2 2 2
S(x, t) = energy flux := −∂t u∂x u
Φ(x, t) = dissipation function := α(x) (∂t u)2 .
We are interested in the energy over some spatial interval of the form [x1 (t), x2 (t)]. It
follows from Leibniz’s rule,
d x2 (t)
Z Z x2 (t)
E(x, t) dx = E(x2 (t), t)x˙2 − E(x1 (t), t)x˙1 + ∂t E(x, t) dx (4.3.10a)
dt x1 (t) x1 (t)
Z x2 (t)
= E(x2 (t), t)x˙2 − E(x1 (t), t)x˙1 − S(x2 (t), t) + S(x1 (t), t) − Φ(x, t) dx.
x1 (t)
(4.3.10b)
The term E(xj (t), t)x˙j is the change of energy due to the motion ofR the endpoint, S(xj (t), t)
is the flux of energy across the endpoint due to wave motion and − [x1 (t),x2 (t)] Φ(x, t) dx is the
energy loss over the interval due to dissipation.

The WKB solution can be written in the more general form:


 

u(x, t) ∼ A(x) cos wt − ϕ(x)  , ϕ(x) = ωθ(x). (4.3.11)


| {z } |{z}
slowly changing amplitude rapidly changing phase

It follows that
1
E(x, t) ∼ A2 µ2 ω 2 + ϕ2x sin2 [ωt − ϕ(x)]

(4.3.12a)
2
S(x, t) ∼ ωϕx A2 sin2 [ωt − ϕ(x)] (4.3.12b)
Φ(x, t) ∼ αω 2 A2 sin2 [ωt − ϕ(x)] . (4.3.12c)

Note that we neglect A0 since A is slowly changing. Suppose we choose xi (t) satisfying
ω
ẋi = = phase velocity.
ϕx (xi )

Such curves in the x − t plane are called phase lines. Then

1 ωA2  2 2
µ ω + ϕ2x sin2 [ωt − ϕ(x)] − ωϕx A2 sin2 [ωt − ϕ(x)]

E ẋ − S ∼
2 ϕx
1 ωA2  2 2
µ ω − ϕ2x sin2 [ωt − ϕ(x)] = 0,

=
2 ϕx

since θ(x) = ϕ(x)/ω satisfies the eikonal equation (4.3.3). Hence, if x2 − x1 = O(1/ω) then it
dE
follows from (4.3.10) that ≈ 0, i.e. the total energy remains constant (to the first term)
dt
between any two phase lines x1 (t), x2 (t) that are O(1/ω) apart.
Recall the energy equation that

∂t E + ∂x S = −Φ.

Averaging the energy equation over one period in time results in


Z 2π/ω ! Z 2π/ω
∂x S(x, t) dt = − Φ(x, t) dt,
0 0

where the average of ∂t E over one period vanishes using (4.3.12) for E. Substituting (4.3.12)
for S and Φ, we obtain

∂x ϕx A2 = −αωA2


∂x θx A2 = −αA2


θxx A2 + 2θx AAx = −αA2


θxx A + 2θx Ax = −αA,

which implies that A = u0 since the last equation is precisely the transport equation (4.3.6).
Physically, this means that the transport equation corresponds to the balance of energy over
one period in time.

Figure 4.3: Instructive case of the multi-dimensional wave equation. In R2 , the wave propagates
from the circle with radius a.

4.4 Higher-Dimensional Waves - Ray Methods


The extension of the WKB method to higher dimensions is relatively straightforward, but the
equations could be difficult to solve explicitly. Consider the n-dimensional wave equation
∇2 u = µ2 (x)∂t2 u, x ∈ Rn , n = 2, 3. (4.4.1)
We look for time-harmonic solutions u(x, t) = e−iωt V (x) and (4.4.1) reduces to the Helmholtz
equation
∇2 V + ω 2 µ2 (x)V = 0. (4.4.2)
It is more instructive to have some understanding of what properties the solution has and
how the WKB approximation takes advantage of them. Suppose µ is constant and we want to
solve (4.4.2) in the region exterior to the circle kxk = a in R2 . Exploiting the geometry leads
to the choice of polar coordinates
x = ρ cos(ϕ), y = ρ sin(ϕ).
We impose the Dirichlet boundary condition V = f (ϕ) at ρ = a and the Sommerfeld radia-
tion condition which ensures that waves only propagate outward from the circle:

ρ [∂ρ V − iωµV ] = 0 for ρ −→ ∞.
Using separation of variables, the general solution of (4.4.2) is given by

!
(1)
X Hn (ωµρ)
V (ρ, ϕ) = αn (1)
e−inϕ , (4.4.3)
n=−∞ Hn (ωµa)
(1)
where Hn is the Hankel function of first kind and the αn are determined from the boundary
condition at ρ = a. It is known that for large values of z
r
(1) 2   nπ π 
Hn (z) ∼ exp i z − − .
πz 2 4

Consequently, in the regime of higher frequency ω  1 (4.4.3) reduces to


r
a iωµ(ρ−a)
V (ρ, ϕ) ∼ f (ϕ) e . (4.4.4)
ρ

Thus we have a WKB-like solution for constant µ. Radial lines in this example correspond
to rays and from (4.4.4) we see that along a ray (i.e. ϕ is fixed), the solution has
p a highly
oscillatory component that is multiplied by a slowly varying amplitude V0 = f (ϕ) a/ρ that
decays as ρ increases.

4.4.1 WKB expansion


We first specify the domain and boundary conditions. The Helmholtz equation (4.4.2) is to be
solved in a region exterior to a smooth surface S, where S encloses a bounded convex domain.
This means that there is a well-defined unit outward normal at every point on the surface. We
impose the Dirichlet boundary condition

V (x0 ) = f (x0 ) for x0 ∈ S

and focus only on outward propagating waves.


For higher frequency waves, we take a WKB ansatz of the form
 
iωθ(x) 1
V (x) ∼ e V0 (x) + V1 (x) + . . . . (4.4.5)
ω

Then
\[
\nabla V \sim \big\{i\omega\nabla\theta\,V_0 + i\nabla\theta\,V_1 + \nabla V_0 + \dots\big\}e^{i\omega\theta}, \tag{4.4.6a}
\]
\[
\nabla^2 V \sim \big\{-\omega^2\nabla\theta\cdot\nabla\theta\,V_0 + \omega\big[-\nabla\theta\cdot\nabla\theta\,V_1 + 2i\nabla\theta\cdot\nabla V_0 + i\nabla^2\theta\,V_0\big] + \dots\big\}e^{i\omega\theta}. \tag{4.4.6b}
\]

Substituting (4.4.6) into (4.4.2) and rearranging we find that
\[
\omega^2\big[-\nabla\theta\cdot\nabla\theta + \mu^2\big]V_0 + \omega\big[-\nabla\theta\cdot\nabla\theta\,V_1 + 2i\nabla\theta\cdot\nabla V_0 + i\nabla^2\theta\,V_0 + \mu^2 V_1\big] + O(1) = 0,
\]
or
\[
\big(\nabla\theta\cdot\nabla\theta - \mu^2\big)V_0 + \frac{1}{\omega}\Big[\big(\nabla\theta\cdot\nabla\theta - \mu^2\big)V_1 - i\nabla^2\theta\,V_0 - 2i\nabla\theta\cdot\nabla V_0\Big] + O\Big(\frac{1}{\omega^2}\Big) = 0.
\]

The O(1) equation is the eikonal equation which is now nontrivial to solve:

∇θ · ∇θ = µ2 . (4.4.7)

After cancelling the V1 term using the eikonal equation (4.4.7), the O(1/ω) equation is the
transport equation
2∇θ · ∇V0 + ∇2 θ V0 = 0.

(4.4.8)

Both ±θ are solutions to the eikonal equation and we choose the positive solution +θ since
this corresponds to the outward propagating waves.

Figure 4.4: Schematic figure of wave fronts in R3 and the path followed by one of the points
in the wave front (Taken from [Hol12, page 267]).

4.4.2 Surfaces and wave fronts


The usual method method for solving the nonlinear eikonal equation (4.4.7) is to introduce
characteristic coordinates. More precisely, we use curves that are orthogonal to the level
surfaces of θ(x) which are also known as wave fronts or phase fronts.
First, note that the WKB approximation of (4.4.1) has the form

u(x, t) ∼ ei(ωθ(x)−ωt) V0 (x).

We introduce the phase function

Θ(x, t) = ωθ(x) − ωt.

Suppose we start at t = 0 with the surface Sc = {θ(x) = c}, so that

Θ(x, 0) = ωc.

As t increases, the points where Θ = ωc change, and therefore points forming Sc move and
form a new surface Sc+t = {θ(x) = c + t}. We still have

Θ(x, t) = ωc.

The path each point takes to get from Sc to Sc+t is obtained from the solution of the eikonal
equation and in the WKB method these paths are called rays.
The evolution of the wave front generates a natural coordinate system (s, α, β) where α, β
comes from parameterising the wave front and s from parameterising the rays. Note that
these coordinates are not unique as there are no unique parameterisation for the surfaces and
rays. It turns out that determining these coordinates is crucial in the derivation of the WKB
approximation.

Example 4.4.1. Suppose we know a-priori that θ(x) = x · x. In this case, the surface Sc+t is
described by the equation |x|2 = c + t, which is just the sphere with radius c + t. The rays are
now radial lines and so the points forming Sc move along radial lines to form the surface Sc+t .
To this end, we use a modified version of spherical coordinates:

(x, y, z) = ρ(s) (sin α cos β, sin α sin β, cos α) ,

with
0 ≤ α < π, 0 ≤ β ≤ 2π, 0 ≤ s.
The function ρ(s) is required to be smooth and strictly increasing. Examples are ρ = s,
ρ = es − 1 or ρ = ln(1 + s).

An important property of the preceding modified spherical coordinates is that (s, α, β) forms an orthogonal coordinate system. That is, under the change of variables x = X(s, α, β), the vector ∂sX tangent to the ray is orthogonal to the wave front Sc+t. In practice we are in the opposite situation: we need to find θ(x) given conditions on the map X(s, α, β). Observe that there is freedom in how we specify X.

4.4.3 Solution of the eikonal equation


In what follows, we will assume that (s, α, β) forms an orthogonal coordinate system. This
means that a ray’s tangent vector ∂s X points in the same direction as ∇θ when x = X(s, α, β),
or equivalently
∂X
= λ∇θ, (4.4.9)
∂s
where λ is a smooth positive function, to be specified later. WLOG, we assume that the rays
are parameterised so that s ≥ 0. One should not confuse s with the arclength parameterisation.
Along a ray,
∂s θ(X) = ∇θ · ∂s X = λ∇θ · ∇θ.
Therefore we can rewrite the eikonal equation as

∂s θ = λµ2 (4.4.10)

which can be integrated directly to yield


Z s
θ(s, α, β) = θ(0, α, β) + λµ2 dσ, (4.4.11)
0

assuming we can find such a coordinate system (s, α, β). This amounts to solving (4.4.9) which
is generally nonlinear and requires the assistance of numerical method. Nonetheless, we still
have the freedom of choosing the function λ.

4.4.4 Solution of the transport equation


It remains to find the first term V0 of the WKB approximation (4.4.5). Using (4.4.9) we have

∂s V0 = ∇V0 · ∂s X = λ∇V0 · ∇θ.



Consequently we can also rewrite the transport equation (4.4.8) as
\[
2\partial_s V_0 + \lambda\big(\nabla^2\theta\big)V_0 = 0. \tag{4.4.12}
\]
Using the identity
\[
\partial_s\Big(\frac{J}{\lambda}\Big) = J\nabla^2\theta, \tag{4.4.13}
\]
where J = ∂(x, y, z)/∂(s, α, β) is the Jacobian of the transformation x = X(s, α, β), we can rewrite (4.4.12) as
\[
2J\partial_s V_0 + \lambda V_0\,\partial_s\Big(\frac{J}{\lambda}\Big) = 0
\;\Longrightarrow\;
\frac{J}{\lambda}\partial_s\big(V_0^2\big) + V_0^2\,\partial_s\Big(\frac{J}{\lambda}\Big) = 0
\;\Longrightarrow\;
\partial_s\Big(\frac{J}{\lambda}V_0^2\Big) = 0,
\]
and its general solution is
\[
V_0(x) = a_0\sqrt{\frac{\lambda(x)}{J(x)}}. \tag{4.4.14}
\]
Imposing the boundary condition V0(x0) = f(x0), we obtain
\[
V_0(x) = f(x_0)\sqrt{\frac{\lambda(x)J(x_0)}{\lambda(x_0)J(x)}}. \tag{4.4.15}
\]

This is true provided θ(0, α, β) = 0 in (4.4.11) since otherwise we will get an additional expo-
nential term from the WKB ansatz (4.4.5)

eiωθ(x0 ) = eiωθ(0,α,β) .

We now prove the identity (4.4.13) in R2 but this easily extends to R3 . The transformation
in R2 is x = X(s, α) and its Jacobian is

∂(x, y)
J = = ∂s x∂α y − ∂α x∂s y.
∂(s, α)

Using chain rule and the ray equation (4.4.9) we obtain

∂s J = ∂s (∂s x) ∂α y + ∂s x∂s (∂α y) − ∂s (∂s y) ∂α x − ∂s y∂s (∂α x)


= ∂s (∂s x) ∂α y − ∂α (∂s x) ∂s y + ∂α (∂s y) ∂s x − ∂s (∂s y) ∂α x
h i h i
= ∂α y ∂s x∂s + ∂s y∂y (∂s x) − ∂s y ∂α x∂x + ∂α y∂y (∂s x)
h i h i
+ ∂s x ∂α x∂x + ∂α y∂y (∂s y) − ∂α x ∂s x∂x + ∂s y∂y (∂s y)
h i h i
= ∂α y∂s x − ∂s y∂α x ∂x (∂s x) + ∂α y∂s y∂y (∂s x) − ∂s y∂α y∂y (∂s x)
h i h i
+ ∂s x∂α y − ∂α x∂s y∂y ∂y (∂s y) + ∂s x∂α x∂x (∂s y) − ∂α x∂s x∂x (∂s y)
= J∂x (∂s x) + J∂y (∂s y)
= J∇ · (∂s x)
= J∇ · (λ∇θ) .

For any smooth function q(x),

∂s (qJ) = q∂s J + J∂s q


= qJ∇ · (λ∇θ) + J∇q · ∂s x
h i h i
= J q∇ · (λ∇θ) + J ∇q · (λ∇θ)
= J∇ · (qλ∇θ) .

The identity (4.4.13) follows by choosing q = 1/λ.

4.4.5 Ray equation


We may now focus on solving the ray equation (4.4.9). To remove the θ dependence, let
X = (X1 , X2 , X3 ). Dividing (4.4.9) by λ and differentiating the resulting equation component-
wise yields
    X 3  
∂ 1 ∂Xi ∂ ∂θ(x) ∂xi ∂ ∂θ(x)
= =
∂s λ ∂s ∂s ∂xi j=1
∂s ∂xj ∂xi
   
∂ ∂X
= ∇θ ·
∂xi ∂s
= (∂xi ∇θ) · (λ∇θ)
1
= λ∂xi (∇θ · ∇θ)
2
1
= λ∂xi µ2 .
2
In vector form, this equals  
∂ 1 ∂
X = λµ∇µ. (4.4.16)
∂s λ ∂s
We require two boundary conditions as (4.4.16) is a second-order equation in s. Recall that
each ray starts on the initial surface S. Given any point x0 ∈ S, its ray satisfies

X|s=0 = x0 . (4.4.17)

The second boundary condition is typically



∂X
= λ0 µ0 n0 , (4.4.18)
∂s s=0

where n0 is the unit outward normal at x0 , λ0 = λ(0, α, β) and µ0 = µ(0, α, β).



We can also rewrite the ray equation (4.4.9) by taking the dot product of (4.4.9) against
∂s X:
∂X ∂X
· = λ2 ∇θ · ∇θ = λ2 µ2 .
∂s ∂s
If ` be the arc length along a ray, then
Z s Z s
`= k∂s Xk ds = λµ ds.
0 0
Hence, s equals the arc length along a ray if we choose λµ = 1. Another common choice is
λ = 1.

4.4.6 Summary for λ = 1/µ


From (4.4.16), choosing λµ = 1 amounts to solving
 
∂ ∂
µ X = ∇µ(X) (4.4.19a)
∂s ∂s
X|s=0 = x0 ∈ S, ∂s X|s=0 = n0 . (4.4.19b)
Once this is solved, the phase function becomes
Z s
θ(X) = µ(X) dσ (4.4.20)
0
and the amplitude is s
µ(x0 )J(x0 )
V0 (x) = f (x0 ) . (4.4.21)
µ(x)J(x)
Finally, the WKB approximation for the outward propagating wave is
s   Z s 
µ(x0 )J(x0 )
u(x, t) ∼ f (x0 ) exp iω −t + µ(X(σ)) dσ , (4.4.22)
µ(x)J(x) 0

where s is the value for which the solution of (4.4.19) satisfies X(s) = x.

Example 4.4.2. For constant µ, the ray equation (4.4.19) becomes


∂ 2X
= 0 =⇒ X(s) = x0 + sn0 .
∂s2
The phase function is Z s
θ = µ0 dσ = µ0 s.
0
Thus, given a point x on the ray, s = n0 · (x − x0 ) the WKB approximation is
s
J(x0 )
u(x, t) ∼ f (x0 ) exp [i (k · (x − x0 ) − ωt)] ,
J(x)
where k = µ0 ωn0 is the wave vector for the ray. In R2 , when the boundary surface is the circle
of radius a, n0 is simply the position vector x − x0 and s is then the distance from the circle.
In polar coordinates (ρ, ϕ), the Jacobian is just ρ and
\[
u(x,t) \sim f(x_0)\sqrt{\frac{a}{\rho}}\;e^{i\omega(\mu_0(\rho - a) - t)}.
\]
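To make the construction concrete, the sketch below traces a single ray of (4.4.19) numerically in R², for a hypothetical smooth index µ(x) and a starting point on the unit circle; none of these particular choices come from the notes.

```python
import numpy as np
from scipy.integrate import solve_ivp

def mu(x):
    # hypothetical smooth, radially symmetric index of refraction
    return 1.0 + 0.3 * np.exp(-np.dot(x, x))

def grad_mu(x, h=1e-6):
    g = np.zeros(2)
    for i in range(2):
        e = np.zeros(2); e[i] = h
        g[i] = (mu(x + e) - mu(x - e)) / (2 * h)
    return g

def ray_rhs(s, y):
    # y = (X, p) with p = mu dX/ds, so (4.4.19a) becomes dX/ds = p/mu, dp/ds = grad mu
    X, p = y[:2], y[2:]
    return np.concatenate([p / mu(X), grad_mu(X)])

x0 = np.array([1.0, 0.0])                 # launch point on the unit circle
n0 = x0 / np.linalg.norm(x0)              # outward normal
y0 = np.concatenate([x0, mu(x0) * n0])    # initial conditions (4.4.19b): dX/ds|_{s=0} = n0
sol = solve_ivp(ray_rhs, [0, 5], y0, max_step=0.01, dense_output=True)

s = np.linspace(0, 5, 501)
Xs = sol.sol(s)[:2]
mus = np.array([mu(Xs[:, i]) for i in range(s.size)])
theta = float(np.sum(0.5 * (mus[1:] + mus[:-1]) * np.diff(s)))   # phase (4.4.20) along the ray
print("ray endpoint:", Xs[:, -1], " accumulated phase theta:", theta)
```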

4.4.7 Breakdown of the WKB solution


It is important to consider circumstances in which the solution (4.4.22) can go wrong:
1. It does not hold at turning points x of µ, i.e. µ(x) = 0. Nonetheless, this can be handled
analogously to the one-dimensional case in Section 4.2 using boundary layer method.
2. A more likely complication arises when J = 0. Points where this occurs are called caustics
and these arise when two or more rays intersect, which results in the breakdown of the
characteristic coordinates (s, α, β). If a ray passes through a caustic, one picks up an
additional factor in the WKB solution (4.4.22) of the form eimπ/2 , where the integer m
depends on the rank of the Jacobian matrix at the caustic.
3. A less obvious breakdown occurs when X(s) = x has no solution. This happens with
shadow regions and it is resolved by introducing the idea of ray splitting.

4.5 Problems
1. Use the WKB method to find an approximation of the following problem on x ∈ [0, 1]:

εy 00 + 2y 0 + 2y = 0, y(0) = 0, y(1) = 1.

Solution: We make a WKB ansatz of the form


α
y(x) ∼ eθ(x)/ε (y0 (x) + εα y1 (x) + . . . ) . (4.5.1)

Substituting (4.5.1) into the given differential equation yields

ε ε−2α θx2 y0 + ε−α θxx y0 + 2θx y00 + θx2 y1 + . . .


  

+ 2 ε−α θx y0 + y00 + θx y1 + . . . + 2 [y0 + εα y1 + . . . ] = 0.


 

Balancing leading order terms of the first two terms we obtain α = 1 and the O(1/ε)
equation is the eikonal equation

θx2 + 2θx = 0 = θx (θx + 2)

which has two general solutions:

θ(x) ≡ c1 or θ(x) = −2x + c2 ,

where c1 , c2 are arbitrary constants. The O(1) equation, after simplifying using the
eikonal equation, is the following:

θxx y0 + 2θx y00 + 2y00 + 2y0 = 0. (4.5.2)

Suppose θx = 0, then (4.5.2) reduces to 2y00 + 2y0 = 0 and its general solution is

y0 (x) = a0 e−x .

Suppose θx = −2, then (4.5.2) reduces to −2y00 + 2y0 = 0 and its general solution is

y0 (x) = b0 ex .

Thus a first-term approximation of the general solution of the original problem is

y ∼ a0 e−x + b0 ex e−2x/ε ∼ a0 e−x + b0 ex−2x/ε ,

where we absorb the constants c₁, c₂ into a₀, b₀ respectively. Imposing the boundary conditions y₀(0) = 0 and y₀(1) = 1 results in two linear equations for a₀ and b₀:
\[
a_0 + b_0 = 0, \qquad a_0 e^{-1} + b_0 e^{1-2/\varepsilon} = 1 \;\Longrightarrow\; a_0 = -b_0, \quad b_0 = \frac{e}{e^{2-2/\varepsilon} - 1}.
\]
Hence, a first-term WKB approximation is
\[
y \sim b_0\big(-e^{-x} + e^{x-2x/\varepsilon}\big) = \frac{e}{1 - e^{2-2/\varepsilon}}\big(e^{-x} - e^{x-2x/\varepsilon}\big) = \frac{1}{1 - e^{2-2/\varepsilon}}\big(e^{1-x} - e^{x+1-2x/\varepsilon}\big).
\]

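As a check, the sketch below compares this WKB approximation with the exact solution of the constant-coefficient problem; the value ε = 0.02 is a hypothetical choice.

```python
import numpy as np

# WKB approximation vs exact solution of eps*y'' + 2y' + 2y = 0, y(0)=0, y(1)=1.
# The coefficients are constant, so the exact solution is available from the roots of
# eps*m^2 + 2m + 2 = 0.
eps = 0.02
m1 = (-1 + np.sqrt(1 - 2 * eps)) / eps      # slow root, ~ -1 - eps/2
m2 = (-1 - np.sqrt(1 - 2 * eps)) / eps      # fast root, ~ -2/eps + 1
x = np.linspace(0, 1, 2001)

A = 1.0 / (np.exp(m1) - np.exp(m2))         # from y(0) = 0, y(1) = 1
y_exact = A * (np.exp(m1 * x) - np.exp(m2 * x))

y_wkb = (np.exp(1 - x) - np.exp(x + 1 - 2 * x / eps)) / (1 - np.exp(2 - 2 / eps))
print("max |WKB - exact| =", np.abs(y_wkb - y_exact).max())
```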
2. Consider seismic waves propagating through the upper mantle of the Earth from a source
on the Earth’s surface. We want to use a WKB approximation in R3 to solve the equation

∇2 v + ω 2 µ2 (r)v = 0,

where µ has spherical symmetry. Take λ = 1/µ.

Figure 4.5: Rays representing waves propagating inside the earth from a source on the surface
of the earth.

(a) Use the ray equation to show that the vector p = r × (µ∂s r) is independent of s.
Hence, show that krkµ sin(χ) is constant along a ray, where χ is the angle between
r and ∂s r.

Solution: With the choice λ = 1/µ, the ray equation (4.4.16) reduces to
 
∂ ∂
µ X = ∇µ(X), with x = r = X(s, α, β).
∂s ∂s

Using the product rule for differentiating cross product we obtain


 
∂ ∂
∂s p = r×µ r
∂s ∂s
 
∂ ∂ ∂ ∂
= r×µ r+r× µ r
∂s ∂s ∂s ∂s
 
∂ ∂
=µ r × r + r × ∇µ(r).
∂s ∂s

The first term vanishes because the cross product of any vector with itself is
zero and the second term vanishes since r and ∇µ(r) are parallel. Therefore
∂s p = 0 and so p is independent of s.

An immediate consequence of the previous result is that the vector p is constant


along a ray, i.e. p has constant magnitude κ > 0 along a ray. First, the
geometrical interpretation of the cross product gives the following

krkk∂s rk sin(χ) = kr × ∂s rk,

where χ is the angle between the vectors r and ∂s r. Multiplying each side by
the positive scalar function µ we obtain

krkµk∂s rk sin(χ) = kr × µ∂s rk = kpk = κ.

Using the ray equation and the eikonal equation,

k∂s rk2 = ∂s r∂s r = (λ∇θ) · (λ∇θ) = λ2 µ2 = 1,

since we take λ = 1/µ. Hence,

κ = krkµk∂s rk sin(χ) = krkµ sin(χ) along a ray. (4.5.3)

(b) Part (a) implies that each ray lies in a plane containing the origin of the sphere. Let
(ρ, ϕ) be polar coordinates of this plane. It follows that for a polar curve ρ = ρ(ϕ),
the angle χ satisfies
ρ
sin(χ) = q . (4.5.4)
ρ2 + (∂ϕ ρ)2

Assuming ∂ϕ ρ 6= 0, show that


Z ρ
dr
ϕ = ϕ0 + κ p ,
ρ0 r µ r2 − κ2
2

where ρ0 , ϕ0 , κ are constants.

Solution: Given a ray, let (ρ, ϕ) be polar coodinates of the plane containing
such ray. Since this plane contains the origin of the sphere, we can identify ρ as
the magnitude of the radial (position) vector r and from (4.5.3) we know that
κ κ
sin(χ) = = . (4.5.5)
krkµ µρ

Substituting (4.5.5) into (4.5.4) and rearranging we obtain


κ ρ
=q
ρµ
ρ2 + (∂ϕ ρ)2
ρ2 µ
q
ρ2 + (∂ϕ ρ)2 =
κ
4 2
ρ µ
ρ2 + (∂ϕ ρ)2 = 2
κ
ρ 4 µ2
(∂ϕ ρ)2 = 2 − ρ2
κ
2
ρ
(∂ϕ ρ)2 = 2 ρ2 µ2 − κ2

κ
ρp 2 2
∂ϕ ρ = ± ρ µ − κ2 .
κ
Assuming ∂ϕ ρ 6= 0, we can invert this to obtain ∂ρ ϕ. Therefore,
κ
∂ρ ϕ = ± p
ρ ρ2 µ2 − κ2
Z ρ
dr
ϕ = ϕ0 ± κ p ,
ρ0 r µ2 r2 − κ2

where (ϕ0 , ρ0 ) satisfies κ = ±ρ0 µ(ρ0 ) sin(ϕ0 ).

(c) Use the definition of arc length, show that for a polar curve
q
µds = ρ2 + (∂ϕ ρ)2 dϕ. (4.5.6)

Combining this result with part (b), show that the solution of the eikonal equation
is given by
1 ϕ 2 2
Z
θ= µ ρ dϕ.
κ ϕ0

Solution: First of all, we must distinguish the ray parameter s with the ar-
clength parameter ` of a ray. For a polar curve (ϕ, ρ(ϕ)), we have x = ρ(ϕ) cos ϕ
and y = ρ(ϕ) sin ϕ and so
s 
2  2
dx dy
d` = + dϕ
dϕ dϕ
q
= (−ρ sin ϕ + ∂ϕ ρ cos ϕ)2 + (ρ cos ϕ + ∂ϕ ρ sin ϕ)2 dϕ
q
= ρ2 sin2 ϕ + cos2 ϕ + (∂ϕ ρ)2 cos2 ϕ + sin2 ϕ dϕ
 
q
= ρ2 + (∂ϕ ρ)2 dϕ.

Recall that the arclength ` along a ray satisfies


Z s
`= λµ ds.
0

It follows from the choice of λ = 1/µ that


q
d` = ds = ρ2 + (∂ϕ ρ)2 .

Since we take λ = 1/µ, the solution of the eikonal equation is


Z s Z s
2
θ= λµ ds = µ ds
0 0
Z ϕ q
= µ ρ2 + (∂ϕ ρ)2 dϕ
Zϕϕ0
µρ h i
= dϕ From (4.5.4).
sin(χ)
Zϕϕ0  µρ  h i
= µρ dϕ From (4.5.5).
ϕ0 κ
Z ϕ
1
= µ2 ρ2 dϕ,
κ ϕ0

as desired. Note: a dimensional analysis of the expression (4.5.6) as stated in the
problem would force µ to be dimensionless, which is clearly false; the derivation
above uses dℓ = ds = √(ρ² + (∂ϕρ)²) dϕ instead.
Chapter 5

Method of Homogenization

5.1 Introductory Example


Consider the boundary value problem
 
\[ \frac{d}{dx}\left( D\,\frac{du}{dx} \right) = f(x), \qquad 0 < x < 1, \tag{5.1.1} \]

with u(0) = a and u(1) = b. In many physical problems D is known as the conductivity
tensor, and we are interested in D = D(x, x/ε), which includes a slow variation in x as well
as a fast variation over a length scale that is O(ε). A physical realisation of this is a material
having micro- and macrostructure with spatial variation. For example, we might have
\[ D(x, y) = \frac{1}{1 + \alpha x + \beta g(x)\cos y}, \tag{5.1.2} \]

with
\[ \alpha = 0.1, \qquad \beta = 0.1, \qquad \varepsilon = 0.01, \qquad g(x) = e^{4x(x-1)}. \]
Our main goal is to try to replace, if possible, D(x, x/ε) = D(x, y) with some effective
(averaged) D that is independent of ε. A naive guess would be to simply average over the fast
variation, i.e.
\[ \langle D\rangle_\infty = \lim_{y\to\infty} \frac{1}{y}\int_0^y D(x, r)\,dr. \tag{5.1.3} \]

For the given example (5.1.2), we have that
\[ \langle D\rangle_\infty = \left[ (1 + \alpha x)^2 - (\beta g(x))^2 \right]^{-1/2}. \tag{5.1.4} \]
It turns out that this naive average is not the right effective coefficient: the solution of (5.1.1)
with D replaced by ⟨D⟩∞ can be a poor approximation of the solution of the original problem (5.1.1).
Because of the two different length scales in (5.1.1), it is natural to invoke the method of
multiple scales, but with an important distinction. Here, we want to eliminate the fast length
scale y = x/ε, as opposed to the standard multiple scales where we keep both the slow and
normal scales. For the existence of a solution of (5.1.1), we assume D(x, y) is smooth and satisfies

0 < Dm (x) ≤ D(x, y) ≤ DM (x) (5.1.5)


Figure 5.1: Rapidly varying coefficient D and its average. The red line depicts the rapidly varying
D(x, x/ε) in (5.1.2) and the blue dotted line shows its effective mean D̄(x) = 1/(1 + αx).

for all x ∈ [0, 1] and y > 0, where Dm (x) and DM (x) are both continuous. With the fast scale
y = x/ε and the slow scale x, the derivative becomes

\[ \frac{d}{dx} \longrightarrow \frac{1}{\varepsilon}\partial_y + \partial_x, \]
and, after multiplying through by ε², (5.1.1) becomes
\[ (\partial_y + \varepsilon\partial_x)\big[ D(x, y)(\partial_y + \varepsilon\partial_x)u \big] = \varepsilon^2 f(x). \tag{5.1.6} \]
We assume a regular perturbation expansion of the form

\[ u \sim u_0(x, y) + \varepsilon u_1(x, y) + \varepsilon^2 u_2(x, y) + \cdots, \]

with u0 , u1 , u2 , . . . smooth, bounded functions of y. The O(1) equation is

∂y [D(x, y)∂y u0 ] = 0,

and its general solution is


\[ u_0(x, y) = c_1(x) + c_0(x)\int_{y_0}^{y} \frac{ds}{D(x, s)}, \tag{5.1.7} \]

where y_0 is some fixed but arbitrary number. In order for u_0 to be bounded, we require c_0 = 0,
since the associated integral in (5.1.7) is unbounded. Indeed, from the assumption (5.1.5), if
y > y_0, then
\[ \int_{y_0}^{y} \frac{ds}{D_M(x)} \le \int_{y_0}^{y} \frac{ds}{D(x, s)}, \]

and it follows that
\[ \frac{y - y_0}{D_M(x)} \le \int_{y_0}^{y} \frac{ds}{D(x, s)}. \]
Since the left-hand side becomes infinite as y → ∞, so does the right-hand side. Therefore,
u_0 = u_0(x) = c_1(x). At this point, it is worth noting that
\[ \frac{y - y_0}{D_M(x)} \le \int_{y_0}^{y} \frac{ds}{D(x, s)} \le \frac{y - y_0}{D_m(x)}, \tag{5.1.8} \]

i.e. the integral is unbounded but its growth is confined by linear functions in y as y −→ ∞.
The O(ε) equation is
∂y [D(x, y)∂y u1 ] = −∂x u0 · ∂y D. (5.1.9)
Integrating this with respect to y twice and using the fact that u_0 = u_0(x) yields
\begin{align*}
D(x, y)\partial_y u_1 &= b_0(x) - \partial_x u_0\, D(x, y) \\
\partial_y u_1 &= \frac{b_0(x)}{D(x, y)} - \partial_x u_0 \\
u_1(x, y) &= \underbrace{b_1(x)}_{1} + \underbrace{b_0(x)\int_{y_0}^{y} \frac{ds}{D(x, s)}}_{2} - \underbrace{y\,\partial_x u_0}_{3}. \tag{5.1.10}
\end{align*}

Observe that terms 2 and 3 increase linearly with y for large y, and, in analogy with removing
secular terms in multiple scales, we require that these two terms cancel each other so that u_1
is bounded. This means that we must impose
\[ \lim_{y\to\infty} \frac{1}{y}\left[ b_0(x)\int_{y_0}^{y} \frac{ds}{D(x, s)} - y\,\partial_x u_0 \right] = 0. \]

This can be rewritten as


\[ \partial_x u_0(x) = \langle D^{-1}\rangle_\infty\, b_0(x), \qquad \text{where } \langle D^{-1}\rangle_\infty = \lim_{y\to\infty} \frac{1}{y}\int_{y_0}^{y} \frac{ds}{D(x, s)}. \tag{5.1.11} \]

In a general multiple-scales problem, it is enough to use the O(ε) terms to
obtain a first-term approximation. However, for homogenization problems, we need to proceed
to the O(ε²) equation to determine u_0(x). The O(ε²) equation is
\[ \partial_y\big[ D(x, y)\partial_y u_2 \big] = f(x) - b_0'(x) - \partial_y\big[ D(x, y)\partial_x u_1 \big], \]

and integrating twice with respect to y gives the general solution


\[ u_2(x, y) = d_1(x) + d_0(x)\int_{y_0}^{y} \frac{ds}{D(x, s)} - \int_{y_0}^{y} \partial_x u_1(x, s)\,ds + \big( f - b_0' \big)\int_{y_0}^{y} \frac{s\,ds}{D(x, s)}. \tag{5.1.12} \]

The last integral is O(y²) for large y and cannot be cancelled by other terms in (5.1.12).
Therefore, we require b_0'(x) = f(x). Finally, rearranging (5.1.11) and differentiating with
respect to x we obtain
\[ \partial_x\big[ \overline{D}(x)\,\partial_x u_0(x) \big] = b_0'(x) = f(x), \tag{5.1.13} \]

where D̄(x) is the harmonic mean of D, defined as
\[ \overline{D}(x) = \langle D^{-1}\rangle_\infty^{-1} = \lim_{y\to\infty} \frac{y}{\displaystyle\int_{y_0}^{y} \frac{ds}{D(x, s)}}. \tag{5.1.14} \]
We call (5.1.13) the homogenized differential equation, with the homogenized, or
effective, coefficient D̄.
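As a numerical illustration (my own sketch, not part of the text), one can compare the naive average ⟨D⟩∞ from (5.1.4) with the harmonic mean D̄ from (5.1.14) for the coefficient (5.1.2). Since D is 2π-periodic in y, both infinite averages reduce to averages over one period.

import numpy as np
from scipy.integrate import quad

# Coefficient (5.1.2) with the parameter values quoted in the text.
alpha, beta = 0.1, 0.1
g = lambda x: np.exp(4 * x * (x - 1))
D = lambda x, y: 1.0 / (1.0 + alpha * x + beta * g(x) * np.cos(y))

x = 0.5
period = 2 * np.pi
naive = quad(lambda y: D(x, y), 0, period)[0] / period           # <D>_inf, eq. (5.1.3)
harmonic = period / quad(lambda y: 1.0 / D(x, y), 0, period)[0]  # D-bar, eq. (5.1.14)

print("naive mean     :", naive)
print("  closed form  :", ((1 + alpha * x) ** 2 - (beta * g(x)) ** 2) ** -0.5)
print("harmonic mean  :", harmonic)
print("  closed form  :", 1.0 / (1 + alpha * x))

The two means differ whenever βg(x) ≠ 0, which is precisely why the naive average (5.1.3) is the wrong effective coefficient.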

Figure 5.2: Exact and averaged solution. The red line depicts the exact solution and the blue
line shows its homogenized solution.

Example 5.1.1. Given D(x, y) in (5.1.2), it follows that
\[ \langle D^{-1}\rangle_\infty = \lim_{y\to\infty} \frac{1}{y}\int_{y_0}^{y} \big( 1 + \alpha x + \beta g(x)\cos(s) \big)\,ds = 1 + \alpha x. \tag{5.1.15} \]

In particular, D̄ = 1 for α = 0. For f(x) = 0, α = 0 and β ≠ 0, the solution of (5.1.13) with
u_0(0) = 0 and u_0(1) = 1 is u_0(x) = x. In Figure 5.2 we compare u_0(x) with the exact solution
of (5.1.1) (taking g ≡ 1 so that the quadrature is elementary),
\[ u(x) = \frac{x + \varepsilon\beta\sin(x/\varepsilon)}{1 + \varepsilon\beta\sin(1/\varepsilon)}. \]
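A short numerical comparison along the lines of Figure 5.2 (a sketch of mine, not reproduced from the notes), for α = 0, β = 0.1 and g ≡ 1:

import numpy as np

# Compare the exact solution of (5.1.1) with the homogenized solution u0(x) = x
# for D = 1/(1 + beta*cos(x/eps)), f = 0, u(0) = 0, u(1) = 1.
eps, beta = 0.01, 0.1
x = np.linspace(0.0, 1.0, 2001)

u_exact = (x + eps * beta * np.sin(x / eps)) / (1 + eps * beta * np.sin(1 / eps))
u_hom = x                      # solution of the homogenized problem (5.1.13)

print("max |u_exact - u0| =", np.abs(u_exact - u_hom).max())   # expected to be O(eps)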

5.2 Multi-dimensional Problem: Periodic Substructure


Given an open, connected, smooth region Ω ⊂ Rn , consider the inhomogeneous Dirichlet
problem

∇ · (D∇u) = f (x), x ∈ Ω, (5.2.1a)



Figure 5.3: Fundamental domain with periodic substructure. The function takes the same set of
values on every copy of the fundamental domain.

u = g(x), x ∈ ∂Ω. (5.2.1b)


The coefficient D = D(x, x/ε) is assumed to be positive and smooth, and because (5.2.1) is
harder to solve compared to (5.1.1), we also assume that D is periodic in the fast scale y = x/ε.
In other words, there is a period vector y p with positive entries such that
D(x, y + y p ) = D(x, y) for all x, y. (5.2.2)

5.2.1 Periodicity of D(x, y)


Suppose
D = D(y) = y + cos(2y1 − 3y2 ).
One finds that y p = (π, 2π/3) and this means that we can determine D anywhere in R2 if we
know its values in the rectangle (y1 , y2 ) ∈ [α0 , α0 + π] × [β0 , β0 + 2π/3] for arbitrary α0 , β0 ∈ R.
This structure motivates the definition of a cell (or fundamental domain), Ωp . Mathematically,
given y p = (p1 , p2 ), Ωp is the rectangle
Ωp = [α0 , α0 + p1 ] × [β0 , β0 + p2 ],
where α0 , β0 are given arbitrary constants that must be consistent with Ω. It is possible for
the period vector y p to depend on the slow variable x. For example, consider
\[ D(x, y) = 6 + \cos\big( y_1 e^{x_2} + 4y_2 \big). \]
One finds that y_p = (2πe^{-x_2}, π/2).
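A quick numerical check of this x-dependent period vector (my own sketch, not from the text):

import numpy as np

# Verify D(x, y + y_p) = D(x, y) for the x2-dependent period vector.
D = lambda x2, y1, y2: 6 + np.cos(y1 * np.exp(x2) + 4 * y2)

x2 = 0.7
y_p = np.array([2 * np.pi * np.exp(-x2), np.pi / 2])

rng = np.random.default_rng(0)
y = rng.uniform(-10, 10, size=(5, 2))                 # a few random sample points
err = np.abs(D(x2, y[:, 0] + y_p[0], y[:, 1] + y_p[1]) - D(x2, y[:, 0], y[:, 1]))
print("max periodicity error:", err.max())            # should be at machine precision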
An important consequence of periodicity is that the values of a function on the boundary of the
fundamental domain repeat periodically. Suppose y_L and y_R are corresponding points on the
left-hand and right-hand boundaries of the fundamental domain, respectively, so that
y_R = y_L + (p_1, 0). For any C² periodic function w, we have
\[
\begin{cases}
w(y_L) = w(y_R), \\
\nabla_y w(y_L) = \nabla_y w(y_R), \\
\partial_{y_i}\partial_{y_j} w(y_L) = \partial_{y_i}\partial_{y_j} w(y_R).
\end{cases} \tag{5.2.3}
\]
These conditions must hold on the upper and lower boundaries as well.



5.2.2 Homogenization procedure


Setting y = x/ε, the derivative becomes
\[ \nabla \longrightarrow \nabla_x + \frac{1}{\varepsilon}\nabla_y. \]
Substituting this into (5.2.1) and multiplying each side by ε² yields
\[ (\nabla_y + \varepsilon\nabla_x)\cdot\big[ D(x, y)(\nabla_y + \varepsilon\nabla_x)u(x, y) \big] = \varepsilon^2 f(x). \tag{5.2.4} \]

We introduce an asymptotic expansion of the form

u ∼ u0 (x, y) + εu1 (x, y) + ε2 u2 (x, y) + . . .

and we assume that u0 , u1 , u2 , . . . are periodic in y with period y p due to the periodicity
assumption on D.
The O(1) equation is
∇y · (D∇y u0 ) = 0,
and the general solution of this, which is bounded, is u0 = u0 (x). If D were constant, then
it follows from Liouville’s theorem that bounded solutions of Laplace’s equation over R2 are
constants. One can argue similarly in the case where D is not constant. The O(ε) equation is

∇y · (D∇y u1 ) = −(∇y D) · (∇x u0 ). (5.2.5)

Because u1 is periodic in y, it suffices to solve (5.2.5) in a cell Ωp and then simply extend the
solution using periodicity. Observe that (5.2.5) is linear and that its right-hand side is a combination
of the terms ∂yi D with coefficients ∂xi u0 that are independent of y. Thus the general solution of
(5.2.5) follows from the superposition principle

u1 (x, y) = a · ∇x u0 + c(x), (5.2.6)

with a = a(x, y) periodic in y, satisfying

∇y · (D∇y ai ) = −∂yi D for y ∈ Ωp . (5.2.7)

The O(ε²) equation is

∇y · [D(∇y u2 + ∇x u1 )] + ∇x · [D(∇y u1 + ∇x u0 )] = f (x). (5.2.8)

To derive the homogenized equation for u_0, we introduce the cell average of a function v(x, y)
over Ω_p:
\[ \langle v\rangle_p(x) = \frac{1}{|\Omega_p|}\int_{\Omega_p} v(x, y)\,dV_y. \]
Averaging the first term of (5.2.8) and applying the divergence theorem gives
\begin{align*}
\Big\langle \nabla_y\cdot\big[ D(\nabla_y u_2 + \nabla_x u_1) \big] \Big\rangle_p
&= \frac{1}{|\Omega_p|}\int_{\Omega_p} \nabla_y\cdot\big[ D(\nabla_y u_2 + \nabla_x u_1) \big]\,dV_y \\
&= \frac{1}{|\Omega_p|}\int_{\partial\Omega_p} D\,n\cdot(\nabla_y u_2 + \nabla_x u_1)\,dS_y \\
&= 0,
\end{align*}

since u_1, u_2 and D are periodic over the cell Ω_p, so the flux contributions from opposite faces
of ∂Ω_p cancel. Next, using (5.2.6) we have
\begin{align*}
\langle D\,\partial_{y_i} u_1\rangle_p &= \langle D\,\partial_{y_i}(a\cdot\nabla_x u_0)\rangle_p \\
&= \langle D\,\partial_{y_i} a\rangle_p \cdot \nabla_x u_0.
\end{align*}
Similarly,
\[ \langle D\,\partial_{x_i} u_0\rangle_p = \langle D\rangle_p\,\partial_{x_i} u_0
\;\Longrightarrow\; \big\langle \nabla_x\cdot(D\nabla_x u_0) \big\rangle_p = \nabla_x\cdot\big( \langle D\rangle_p \nabla_x u_0 \big). \]
Combining everything, the average of (5.2.8) is
\[ \nabla_x\cdot\Big[ \langle D\nabla_y a\rangle_p \cdot \nabla_x u_0 \Big] + \nabla_x\cdot\Big[ \langle D\rangle_p \nabla_x u_0 \Big] = f(x). \]

We can rewrite the homogenized problem in a more compact fashion:


\begin{align}
\nabla_x\cdot\big[ \mathbf{D}\,\nabla_x u_0 \big] &= f(x) &&\text{for } x \in \Omega, \tag{5.2.9a} \\
u_0 &= g(x) &&\text{for } x \in \partial\Omega, \tag{5.2.9b} \\
\mathbf{D} &= \langle D\nabla_y a\rangle_p + \langle D\rangle_p I. \tag{5.2.9c}
\end{align}

In R², the homogenized coefficients are
\[ \mathbf{D} = \begin{pmatrix} \langle D\rangle_p + \langle D\,\partial_{y_1}a_1\rangle_p & \langle D\,\partial_{y_1}a_2\rangle_p \\[2pt] \langle D\,\partial_{y_2}a_1\rangle_p & \langle D\rangle_p + \langle D\,\partial_{y_2}a_2\rangle_p \end{pmatrix}, \tag{5.2.10} \]

and the functions ai are smooth periodic solutions of the cell problem

∇y · (D∇y ai ) = −∂yi D for y ∈ Ωp . (5.2.11)

Example 5.2.1. Consider the cell Ω_p = [0, a] × [0, b] in R². To determine the homogenized
coefficients in (5.2.10), it is necessary to solve the cell problem (5.2.11). Consider a “separable”
coefficient function D:
\[ D(x, y) = D_0(x_1, x_2)\,e^{\alpha(y_1)}\,e^{\beta(y_2)}, \]
where α(y1 ) and β(y2 ) are periodic with period a and b respectively. The cell equations for
a1 , a2 are

∂y1 (D∂y1 a1 ) + ∂y2 (D∂y2 a1 ) = −∂y1 D


∂y1 (D∂y1 a2 ) + ∂y2 (D∂y2 a2 ) = −∂y2 D.

Taking a1 = a1 (y1 ) and a2 = a2 (y2 ), it follows that

\begin{align*}
e^{\alpha(y_1)}\,\partial_{y_1} a_1 &= \kappa_1 - e^{\alpha(y_1)}, \\
e^{\beta(y_2)}\,\partial_{y_2} a_2 &= \kappa_2 - e^{\beta(y_2)},
\end{align*}

and
\begin{align*}
a_1(y_1) &= -y_1 + \kappa_1\int_0^{y_1} e^{-\alpha(s)}\,ds, \\
a_2(y_2) &= -y_2 + \kappa_2\int_0^{y_2} e^{-\beta(s)}\,ds.
\end{align*}

From the periodicity of a1 and a2 , i.e.

a1 (0) = a1 (a), a2 (0) = a2 (b),

one finds that
\[ \kappa_1 = a\left( \int_0^a e^{-\alpha(s)}\,ds \right)^{-1}, \qquad \kappa_2 = b\left( \int_0^b e^{-\beta(s)}\,ds \right)^{-1}. \]

Now, since ∂y2 a1 = ∂y1 a2 = 0, it follows from (5.2.10) that D12 = D21 = 0. Moreover,

\begin{align*}
\langle D\,\partial_{y_1} a_1\rangle_p
&= \frac{1}{ab}\int_0^a\!\!\int_0^b D_0(x)\,e^{\alpha(y_1)+\beta(y_2)}\big( -1 + \kappa_1 e^{-\alpha(y_1)} \big)\,dy_1\,dy_2 \\
&= -\frac{1}{ab}\int_0^a\!\!\int_0^b D_0(x)\,e^{\alpha(y_1)+\beta(y_2)}\,dy_1\,dy_2
   + \frac{1}{ab}\int_0^a\!\!\int_0^b D_0(x)\,\kappa_1\,e^{\beta(y_2)}\,dy_1\,dy_2 \\
&= -\langle D\rangle_p + D_0(x)\,\kappa_1\left( \frac{1}{b}\int_0^b e^{\beta(s)}\,ds \right),
\end{align*}
and similarly
\begin{align*}
\langle D\,\partial_{y_2} a_2\rangle_p
&= \frac{1}{ab}\int_0^a\!\!\int_0^b D_0(x)\,e^{\alpha(y_1)+\beta(y_2)}\big( -1 + \kappa_2 e^{-\beta(y_2)} \big)\,dy_1\,dy_2 \\
&= -\langle D\rangle_p + D_0(x)\,\kappa_2\left( \frac{1}{a}\int_0^a e^{\alpha(s)}\,ds \right).
\end{align*}
Consequently, the homogenized differential equation (5.2.9) for u_0 is
\[ \partial_{x_1}\big( D_1\,\partial_{x_1} u_0 \big) + \partial_{x_2}\big( D_2\,\partial_{x_2} u_0 \big) = f(x), \]
where D_i(x) = λ_i D_0(x), with
\[ \lambda_1 = \kappa_1\left( \frac{1}{b}\int_0^b e^{\beta(s)}\,ds \right), \qquad \lambda_2 = \kappa_2\left( \frac{1}{a}\int_0^a e^{\alpha(s)}\,ds \right). \]
Interestingly, since κ1 is the harmonic mean of e^{α(y1)} over [0, a] and κ2 that of e^{β(y2)} over [0, b],
for D1 we get the harmonic mean of e^{α(y1)} multiplied by the arithmetic mean of e^{β(y2)}, and vice
versa for D2.
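A small numerical sketch of these formulas (my own, with arbitrary smooth periodic profiles α, β assumed below) evaluates κ1, κ2 and the effective factors λ1, λ2, and contrasts them with the naive cell average:

import numpy as np
from scipy.integrate import quad

# Illustrative periodic profiles on the cell [0, a] x [0, b] (assumed, not from the text).
a, b = 1.0, 2.0
alpha = lambda y1: 0.5 * np.sin(2 * np.pi * y1 / a)
beta = lambda y2: 0.3 * np.cos(2 * np.pi * y2 / b)

kappa1 = a / quad(lambda s: np.exp(-alpha(s)), 0, a)[0]   # harmonic mean of e^alpha
kappa2 = b / quad(lambda s: np.exp(-beta(s)), 0, b)[0]    # harmonic mean of e^beta
mean_ea = quad(lambda s: np.exp(alpha(s)), 0, a)[0] / a   # arithmetic mean of e^alpha
mean_eb = quad(lambda s: np.exp(beta(s)), 0, b)[0] / b    # arithmetic mean of e^beta

lam1 = kappa1 * mean_eb        # D1 / D0: harmonic (in y1) times arithmetic (in y2)
lam2 = kappa2 * mean_ea        # D2 / D0: arithmetic (in y1) times harmonic (in y2)
print("lambda_1 =", lam1, "  lambda_2 =", lam2)
print("naive cell average <e^alpha><e^beta> =", mean_ea * mean_eb)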

5.3 Problem
1. Consider the equation

∂x (D∂x u) + g(u) = f (x, x/ε), 0 < x < 1, (5.3.1)



with u = 0 when x = 0, 1. Assume D = D(x, x/ε). Use the method of multiple-scales to
show that the leading order homogenised equation is
\[ \partial_x\big( \overline{D}\,\partial_x u_0 \big) + g(u_0) = \langle f\rangle_\infty, \]
where D̄ is the harmonic mean of D and
\[ \langle f\rangle_\infty = \lim_{y\to\infty} \frac{1}{y}\int_{y_0}^{y} f(x, s)\,ds. \]

We assume the coefficient D(x, y) is smooth and satisfies

0 < Dm (x) ≤ D(x, y) ≤ DM (x),

for some continuous functions Dm , DM in [0, 1]. We introduce y = x/ε and designate
the slow scale simply as x. The derivative transforms into
\[ \frac{d}{dx} \longrightarrow \frac{\partial}{\partial x} + \frac{1}{\varepsilon}\frac{\partial}{\partial y} = \partial_x + \frac{1}{\varepsilon}\partial_y, \]

and (5.3.1) becomes


\[ (\partial_y + \varepsilon\partial_x)\big[ D(x, y)(\partial_y + \varepsilon\partial_x)u \big] + \varepsilon^2 g(u) = \varepsilon^2 f(x, y). \tag{5.3.2} \]

We take a regular asymptotic expansion

u ∼ u0 (x, y) + εu1 (x, y) + ε2 u2 (x, y) + . . . , (5.3.3)

where we assume that un , n = 0, 1, . . . are bounded functions of y. We now substitute


(5.3.3) into (5.3.2) and collect terms of the same order.

The O(1) equation is
\[ \partial_y\big[ D(x, y)\partial_y u_0 \big] = 0, \]
and its general solution is
\[ u_0(x, y) = c_1(x) + c_0(x)\int_{y_0}^{y} \frac{ds}{D(x, s)}, \]

with y0 fixed. We deduce from the lecture that c0 (x) must be zero and consequently
u0 is a function of x only, i.e. u0 (x, y) = u0 (x).

The O(ε) equation is
\[ \partial_y\big[ D(x, y)\partial_y u_1 \big] = -\partial_x u_0\,\partial_y D, \]
and its general solution is
\[ u_1(x, y) = b_1(x) + b_0(x)\int_{y_0}^{y} \frac{ds}{D(x, s)} - y\,\partial_x u_0. \]

We deduce from the lecture that the following equation must be true to prevent u1
from blowing up:
\[ \partial_x u_0 = \langle D^{-1}\rangle_\infty\, b_0(x), \tag{5.3.4} \]
where ⟨D⁻¹⟩∞ = (D̄)⁻¹.

The O(ε²) equation is
\[ \partial_y\big[ D(x, y)\partial_y u_2 \big] = f(x, y) - \partial_x b_0 - g(u_0) - \partial_y\big[ D\,\partial_x u_1 \big], \]
and solving this yields
\begin{align*}
D(x, y)\partial_y u_2 &= a_0(x) + \int_{y_0}^{y} f(x, s)\,ds - \big[ \partial_x b_0 + g(u_0) \big]\,y - D\,\partial_x u_1 \\
\partial_y u_2 &= \frac{a_0(x)}{D(x, y)} - \partial_x u_1 - \frac{\big[ \partial_x b_0 + g(u_0) \big]\,y}{D(x, y)} + \frac{1}{D(x, y)}\int_{y_0}^{y} f(x, s)\,ds \\
u_2(x, y) &= d_1(x) + d_0(x)\int_{y_0}^{y} \frac{ds}{D(x, s)} - \int_{y_0}^{y} \partial_x u_1(x, s)\,ds \\
&\qquad + \int_{y_0}^{y} \frac{1}{D(x, \tau)}\left[ -\big( \partial_x b_0 + g(u_0) \big)\,\tau + \int_{y_0}^{\tau} f(x, s)\,ds \right] d\tau.
\end{align*}

Since the last integral is O(y²) for large y and there are no other terms in the
expression for u_2(x, y) that can cancel this growth, it is necessary to impose
\[ \lim_{y\to\infty} \frac{1}{y^2}\int_{y_0}^{y} \frac{1}{D(x, \tau)}\left[ -\big( \partial_x b_0 + g(u_0) \big)\,\tau + \int_{y_0}^{\tau} f(x, s)\,ds \right] d\tau = 0. \]

A sufficient condition for this is
\[ \lim_{\tau\to\infty} \frac{1}{\tau}\left[ \int_{y_0}^{\tau} f(x, s)\,ds - \big( \partial_x b_0 + g(u_0) \big)\,\tau \right] = 0, \]
or equivalently
\[ \partial_x b_0 + g(u_0) = \lim_{\tau\to\infty} \frac{1}{\tau}\int_{y_0}^{\tau} f(x, s)\,ds = \langle f\rangle_\infty. \tag{5.3.5} \]

Differentiating (5.3.4) and using the relation (5.3.5), it follows that
\[ \overline{D}\,\partial_x u_0 = b_0 \;\Longrightarrow\; \partial_x\big( \overline{D}\,\partial_x u_0 \big) = \partial_x b_0 = \langle f\rangle_\infty - g(u_0), \]
and the leading order homogenised equation follows.
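As a final numerical aside (my own sketch; the forcing below is an assumed example, not part of the problem), the fast average ⟨f⟩∞ appearing in (5.3.5) can be seen to converge by computing running averages over increasingly long y-intervals:

import numpy as np

# Assumed fast-oscillating forcing: f(x, y) = (1 + x)(2 + cos y), so <f>_inf = 2(1 + x).
f = lambda x, y: (1 + x) * (2 + np.cos(y))

x = 0.3
for Y in [10.0, 100.0, 1000.0]:
    y = np.linspace(0.0, Y, int(200 * Y), endpoint=False)   # fine grid resolving cos y
    avg = f(x, y).mean()                                     # ~ (1/Y) * int_0^Y f(x, s) ds
    print(f"Y = {Y:6.0f}: running average = {avg:.6f}   (limit = {2 * (1 + x):.6f})")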


