CompSci Vercauteren

The lecture notes cover the fundamentals of computational sciences, focusing on mathematical modeling, numerical methods, and error analysis in scientific computing. Key topics include dimensional analysis, flow modeling, time integration methods, and numerical integration of partial differential equations. The document outlines the steps involved in scientific computing, emphasizing the importance of model formulation, analysis, and validation.

Computational Sciences – Lecture notes

M.Sc. Computational Sciences


Freie Universität Berlin
Winter semester 2019/2020

Nikki Vercauteren
January 2, 2021

Contents

1 Introduction
  1.1 Steps in scientific computing
  1.2 Numerical methods and scientific goals
  1.3 Error sources

2 Arguments from dimension
  2.1 Dimensional analysis
  2.2 Dimensionless representation of a harmonic oscillator

3 Flow modelling: macroscopic dynamics of fluid flows
  3.1 From particle dynamics to molecular diffusion
  3.2 Diffusion equation
    3.2.1 Similarity solution to the one-dimensional diffusion equation
  3.3 Advection-diffusion equation
    3.3.1 Dimensionless form of the advection-diffusion equation
    3.3.2 Fourier series representation of the solution

4 Time integration methods
  4.1 Formulation of the continuous problem
  4.2 One-step approximation methods
  4.3 A priori error analysis
  4.4 Long-term behaviour of example problems
    4.4.1 Dissipative systems
    4.4.2 Hamiltonian systems

5 Numerical integration of partial differential equations
  5.1 Advection-diffusion equation
  5.2 Discretisation by the finite differences method
    5.2.1 A priori error analysis
  5.3 Stability analysis
    5.3.1 Direct approach
    5.3.2 Frequentist approach: von Neumann stability analysis
    5.3.3 Diffusion equation
    5.3.4 Advection equation
    5.3.5 Advection-diffusion equation

6 Transport equation: solution by the method of characteristics
  6.1 Method of characteristics
  6.2 Characteristics of Burgers’ equation

References

1 Introduction
Mathematical tools & concepts: mathematical modelling, numerical mathematics.
Suggested references: [Ben00, QSS+07, Sto14]

1.1 Steps in scientific computing


Computational science is a multidisciplinary field that uses mathematical modelling and computing capabilities to understand and enable numerical simulations of phenomena arising from physics, chemistry, biology and the applied sciences in general.

Modelling the system  A scientific computing task starts by modelling the system. By “mathematical model” we mean anything that can be expressed in terms of mathematical formulae and that is amenable to mathematical analysis or numerical simulation. Typical mathematical models involve (list non-exhaustive, non-disjoint):

• deterministic ODE or PDE models (e.g. population growth, chemical reactions, mechanical systems and analogues, climate and weather, . . . )

• stochastic models (e.g. growth of small populations, chemical reactions in cells, asset prices, . . . )

• optimality principles (e.g. principles of utility, properties of materials, minimisation of energy consumption, trading strategies, . . . )

• discrete or continuous flow models (e.g. queuing problems, traffic, logistics, load balancing in parallel computers, . . . )

• statistical models (e.g. distribution of votes, change in the precipitation rate, wage justice, criminal statistics, Google PageRank, . . . )

Of course, you can combine any of the aforementioned modelling approaches to obtain what is called a hybrid model. From the modelling viewpoint, the world is divided into things whose effects are neglected (e.g. molecular diffusion in a weather forecasting model), things whose behaviour the model is designed to study, so-called observables or states (e.g. the wind velocity and temperature in an atmospheric flow model), and things that affect the model, but that are not within the scope of the model, called boundary conditions (e.g. sea surface temperature in a model of atmospheric flow).
The standard way to build a mathematical model then involves the following four steps:

1. Formulate the modelling problem: What is the question to be answered? What type of model is appropriate to answer the question?

2. Outline the model: Which effects should be included in the model, and which are negligible? Write down relations between the states.

3. Practicability check: Is the model “solvable”, either by analytical methods or numerical simulation? Do I have access to all the parameters in the model? Can the model be used to make predictions?

4. Reality check: Make predictions of known phenomena and compare with available data (qualitatively or quantitatively).

Note that there is a trade-off between the simplicity of a model (easier to analyse and interpret) and the “realism” or accuracy of the model (potentially more complicated, analysis may be possible through computer simulations only, and simulations may be more computationally intensive).

Analyse the model and its properties With a model at hand comes the
theoretical analysis of the model, along with the analysis of its properties. Does
a solution exist? Is it unique? This step can involve results from analysis,
spectral theory, probability theory, etc.

Select an appropriate numerical method  Depending on the theoretical characteristics of the model, one chooses an adapted numerical method. If some invariants are preserved, for example, one may want to select a method that allows the preservation of those invariants. The chosen method can then be analysed in order to determine its convergence rate and its stability, and to estimate numerical errors.

Implementation and validation  After all these steps comes the implementation of the method, and its validation on academic tests. The validation is an important step, as it ensures that the method performs appropriately in well-known situations.

1.2 Numerical methods and scientific goals


Depending on the problem to be solved, the objective can be different and call for different numerical treatment.
• The main goal might be to ensure the convergence of the numerical
method: in this case, the error on the final result can become arbitrarily
small with corresponding numerical resolution.
• Precision might be prioritised: in this case, one ensures that the errors
are small in comparison to a set tolerance.
• The main objective might be reliability: in this case, one wants to guar-
antee that the global error remains below a certain threshold. This typi-
cally calls for validation of the method on test cases.
• One may want to achieve the best efficiency, and ensure that the computational cost of the method is kept to a minimum. This becomes essential when computational time is limited, as in the case of weather forecasting.

The choice of a numerical method adapted to the problem under consideration will usually entail a compromise between these different aspects.

1.3 Error sources
Error sources are numerous in scientific computing. Analysing the errors of numerical simulations requires acknowledging the different sources of error. These can include:

• Model error: approximation of the physics of the problem in its mathematical modelling. For example, one may choose to neglect variations of air density (except in the buoyancy term) in the atmosphere (the Boussinesq approximation). The evaluation of model errors is best discussed with scientific experts of the application concerned.

• Parameter errors: model parameter values or initial and boundary conditions may be uncertain. The impact of parameter uncertainty on the simulation outputs can be studied through methods of uncertainty quantification.

• Algorithmic and numerical errors: tools exist to estimate discretisation errors, and they will be surveyed in this course. Such errors can come from machine roundoff; from approximation errors due to the numerical method, which can be studied through numerical analysis; but also from human mistakes, which can be avoided by validating numerical results.
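The machine-roundoff point can be made concrete with a tiny experiment (added here as an illustration, not part of the original notes): in IEEE double precision, decimal fractions such as 0.1 are not exactly representable, so even trivial sums carry a small error.

```python
# Illustration of machine roundoff in IEEE double precision.
import sys

# 0.1 + 0.2 is not exactly 0.3 in binary floating point.
residual = abs((0.1 + 0.2) - 0.3)
print(residual > 0)            # True: a small but nonzero roundoff error

# Machine epsilon: the gap between 1.0 and the next representable double.
print(sys.float_info.epsilon)  # about 2.22e-16

# Summing 0.1 ten times does not give exactly 1.0 either.
s = sum(0.1 for _ in range(10))
print(s == 1.0)                # False
```

Such errors are individually tiny, but they can accumulate or be amplified by an unstable algorithm, which is one motivation for the stability analysis later in the course.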

2 Arguments from dimension
Mathematical tools & concepts: basic ODE, linear algebra
Suggested references: [Buc14, Ben00, IBM+ 05]

Physical dimensions are part of almost any model, and dimensional analysis is one of the starting points of modelling. Based on dimensional arguments, it is sometimes possible to derive relationships between the physical quantities involved in a phenomenon that one wishes to understand and model. Additionally, the number of parameters in a dimensional equation can be reduced, or parameters eliminated altogether, through non-dimensionalisation. This is achieved through dimensional analysis and a subsequent reduction of the number of independent variables. It gives insight into the fundamental properties of the system and reduces the number of (numerical) experiments necessary to analyse it. The framework enables quantification of the relative importance or characteristic scales of different processes interacting in the model, and is thus very helpful to identify small parameters in a model.

Motivating example.  Let us start with some motivation and look at the classical pendulum (see Fig. 2.1). The governing equation of motion for the angle θ as a function of t is

    L θ̈(t) = −g sin θ(t)    (2.1)

with g the acceleration due to gravity and L the length of the pendulum. When θ is small, sin θ ≈ θ, and we may replace the last equation by

    θ̈(t) = −ω² θ(t)    (2.2)

with ω = √(g/L). The solution of (2.2) is

    θ(t) = A sin(ωt) + B cos(ωt)    (2.3)

with A, B depending on the initial conditions θ(0) and θ̇(0). Since sine and cosine have a period of 2π, we find that the pendulum has period

    T = 2π/ω = 2π √(L/g),    (2.4)

which is independent of the mass m of the pendulum and which does not depend on the initial position θ(0) = θ_0.

Derivation from dimensional arguments.  Let us now derive the essential dependence of T on L and g without using any differential equations. To this end we conjecture that there exists a function f such that

    T = f(θ, L, g, m).    (2.5)

We denote the physical units (a.k.a. dimensions) of the variables θ, L, g, m by square brackets. Specifically,

    [T] = s,  [θ] = dimensionless,  [L] = m,  [g] = m s^{−2},  [m] = kg.
Figure 2.1: The classical pendulum. The radial position at time t is given by the arclength s(t) = Lθ(t), hence the radial force on the mass is mLθ̈(t).

In an equation the physical units must match, so the idea is to combine θ, L, g, m in such a way that the physical units of the formula have the unit of time T. This excludes transcendental functions, such as log or tan, for the variables carrying physical units. Using the ansatz

    f(θ, L, g, m) ∝ L^{α_1} g^{α_2} m^{α_3}    (2.6)

with unknowns α_1, α_2, α_3 ∈ R that must be chosen such that

    s = m^{α_1 + α_2} s^{−2α_2} kg^{α_3}.

Note that we have ignored θ as it does not carry any physical units. By comparison of coefficients we then find

    α_1 = 1/2,  α_2 = −1/2,  α_3 = 0,

which yields

    T ∝ √(L/g)    (2.7)

and which is consistent with (2.4). Note, however, that we cannot say anything about a possible dependence of T on the dimensionless angle variable θ. (The unknown dependence on the angle is the constant prefactor 2π.)
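The comparison of coefficients above is a small linear system that can be solved mechanically. As a sketch we add for illustration (not part of the notes; numpy assumed), encode the dimensions of L, g and m as exponent vectors over the basis (m, s, kg) and solve for α_1, α_2, α_3:

```python
import numpy as np

# Columns: dimension exponents of L, g, m over the unit basis (m, s, kg).
# [L] = m^1, [g] = m^1 s^-2, [m] = kg^1
A = np.array([[1.0,  1.0, 0.0],   # metre exponents
              [0.0, -2.0, 0.0],   # second exponents
              [0.0,  0.0, 1.0]])  # kilogram exponents

# Target dimension: [T] = s^1.
b = np.array([0.0, 1.0, 0.0])

alpha = np.linalg.solve(A, b)
print(alpha)  # (1/2, -1/2, 0)  ->  T ∝ L^{1/2} g^{-1/2} = sqrt(L/g)
```

The same recipe works for any target quantity, as long as the chosen variables span the fundamental units, which is exactly the redundancy question addressed in Step 1 below.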

2.1 Dimensional analysis


Let y, x_1, . . . , x_n be physical (measurable) scalar quantities, out of which we want to build a model. The quantities y, x_i come with fundamental physical units L_1, . . . , L_m with m ≤ n. Now the model consists in assuming that there is an a priori unknown function f such that

    y = f(x_1, . . . , x_n).    (2.8)

In the SI system there are exactly seven fundamental physical units: mass (L_1 = kg), length (L_2 = m), time (L_3 = s), electric current (L_4 = A), temperature (L_5 = K), amount of substance (L_6 = mol) and luminous intensity (L_7 = cd), and we postulate that the physical dimension of any measurable scalar quantity can be expressed as a product of powers of the L_1, . . . , L_7.

Example 2.1. It holds that

    [Energy] = kg m²/s² = L_1 L_2² L_3^{−2}.

Here, the number of fundamental physical units is m = 3.

Step 1: Remove redundancies from the model  If the unknown function f is a function of n variables x_1, . . . , x_n with m ≤ n fundamental physical units, using the strategy from the pendulum example may lead to an underdetermined system of equations (there we had 4 variables with only 3 fundamental units).
To remove such redundancies, it is helpful to translate the problem into the language of linear algebra: let L_1, . . . , L_m be our fundamental physical units and identify L_i with the i-th canonical basis vector

    e_i = (0, . . . , 0, 1, 0, . . . , 0)^T

of R^m, with the entry 1 in position i. Now pick a subset {p_1, . . . , p_m} of {x_1, . . . , x_n} so that p_1, . . . , p_m are linearly independent, in the sense that no [p_i] can be expressed in terms of the [p_1], . . . , [p_{i−1}], [p_{i+1}], . . . , [p_m]. With this correspondence, each [p_i] is a linear combination of the canonical basis vectors e_1, . . . , e_m, and so, by construction, there exist α_{i,1}, . . . , α_{i,m} ∈ R with

    [p_i] = L_1^{α_{i,1}} L_2^{α_{i,2}} · · · L_m^{α_{i,m}},    (2.9)

such that the vectors

    v_i = (α_{i,1}, . . . , α_{i,m}) ∈ R^m,  i = 1, . . . , m,

are linearly independent and therefore form a basis of R^m.


Example 2.2 (Cont’d). The dimensional unit of energy has the canonical basis representation

    (1, 2, −2)^T = e_1 + 2e_2 − 2e_3

if considered as a vector in R³.
We call the set {p_1, . . . , p_m} the set of primary variables or primary quantities.¹ The secondary variables are then defined as the set

    {s_1, . . . , s_{n−m}} = {x_1, . . . , x_n} \ {p_1, . . . , p_m}.    (2.10)

By construction, the secondary variables are expressible as linear combinations of the primary variables. In terms of primary and secondary variables, our postulated model (2.8) reads (with an abuse of notation)

    y = f(p_1, . . . , p_m, s_1, . . . , s_{n−m}).    (2.11)

¹ We assume that the set of primary variables exists; otherwise we have to rethink our postulated model f. (It is important that you understand this reasoning.)
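Whether a candidate set {p_1, . . . , p_m} is admissible reduces to a linear independence check of its exponent vectors, which a matrix rank computation settles at once. A minimal sketch (our own illustration, not part of the notes; numpy assumed), using the pendulum quantities over the basis (kg, m, s):

```python
import numpy as np

# Exponent vectors over the unit basis (kg, m, s):
# [L] = m, [g] = m s^-2, [m] = kg.
dims = {
    "L": np.array([0.0, 1.0,  0.0]),
    "g": np.array([0.0, 1.0, -2.0]),
    "m": np.array([1.0, 0.0,  0.0]),
}

# Candidate primary set: stack the vectors as rows and check the rank.
V = np.array([dims["L"], dims["g"], dims["m"]])
print(np.linalg.matrix_rank(V))  # 3 -> linearly independent: a valid primary set

# A bad choice, e.g. repeating L, drops the rank below m.
W = np.array([dims["L"], dims["g"], dims["L"]])
print(np.linalg.matrix_rank(W))  # 2 -> not a valid primary set
```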

Step 2: Construct dimensionless quantities  Having refined our model according to Step 1 above, we construct a quantity z with [z] = [y] such that

    z = p_1^{α_1} · · · p_m^{α_m},    (2.12)

with uniquely defined coefficients α_1, . . . , α_m ∈ R. Now call Π the dimensionless quantity given by Π = y/z, in other words

    Π = f(p_1, . . . , p_m, s_1, . . . , s_{n−m}) / (p_1^{α_1} · · · p_m^{α_m}).    (2.13)

We want to express Π solely as a function of the primary variables. To this end note that we can write

    [s_j] = [p_1]^{α_{j,1}} · · · [p_m]^{α_{j,m}}

for suitable coefficients α_{j,1}, . . . , α_{j,m}; this can be done for all the s_j. Along the lines of the previous considerations we introduce z_j with [z_j] = [s_j] by

    z_j = p_1^{α_{j,1}} · · · p_m^{α_{j,m}}

and define the dimensionless quantity Π_j = s_j / z_j. Note that, by the rank-nullity theorem, there are exactly n − m such quantities, where n − m is the dimension of the nullspace of the dimension matrix spanned by the x_1, . . . , x_n. Replacing all the s_j by z_j Π_j, we can recast (2.13) as

    Π = F(p_1, . . . , p_m, Π_1, . . . , Π_{n−m}),    (2.14)

with the shorthand

    F(p_1, . . . , p_m, Π_1, . . . , Π_{n−m}) := f(p_1, . . . , p_m, z_1 Π_1, . . . , z_{n−m} Π_{n−m}) / (p_1^{α_1} · · · p_m^{α_m}).    (2.15)

This suggests that we regard F as a function F : P → R of the primary variables p_1, . . . , p_m, where P = span{p_1, . . . , p_m} ⊂ R^m. Note, however, that Π and the Π_j are dimensionless (and so is F). Hence F is even independent of the primary variables, for otherwise we could rescale, say, p_1, by which none of p_2, . . . , p_m or any of the dimensionless quantities Π_j change (they are dimensionless); as a consequence F is a homogeneous function of degree 0 in p_1, and the same is true for any of the other p_j. Therefore F is independent of p_1, . . . , p_m.

Step 3: Find y up to a multiplicative constant  The last statement can be rephrased by saying that y can be expressed in terms of a relation between dimensionless parameters. The surprising implication is that the unknown quantity y that we want to model has the functional form of z in (2.12), namely,

    y = Π p_1^{α_1} · · · p_m^{α_m}.    (2.16)

No trigonometric functions, no logarithms or anything like this appear here. We summarise our findings by the famous Buckingham Π theorem (see [Buc14]).

Theorem 2.3 (Buckingham, 1914). Any complete physical relation of the form y = f(x_1, . . . , x_n) can be reduced to a relation between the associated dimensionless quantities, where the number of independent dimensionless quantities is equal to n − m, the difference between the number of physical quantities x_1, . . . , x_n and the number of fundamental dimensional units. That is, there exists a function Φ : R^{n−m} → R such that

    y = z Φ(Π_1, . . . , Π_{n−m})    (2.17)

or, in other words,

    y = z Φ(s_1/z_1, . . . , s_{n−m}/z_{n−m}).    (2.18)

2.2 Dimensionless representation of a harmonic oscillator

In this example we will show how to identify dimensionless quantities and use them to non-dimensionalise an equation. We consider a bead-spring system in one dimension under the influence of friction and of a driving force. The position of the bead of mass m is denoted by its displacement u from its equilibrium position. Newton’s second law states that the inertial force (i.e. mass times acceleration) equals the sum of the forces exerted on the bead. These forces are the driving force F_D, taken as a harmonic force with angular frequency ω and amplitude F_0, the spring force F_S, which is linearly proportional to the displacement and directed against it, and the frictional force F_f, assumed to be linearly proportional to the velocity and directed against it. This leads to the balance of forces

    m ü = F_f + F_S + F_D = −c u̇ − k u + F_0 sin ωt,

and thus to the equation of motion

    m ü + c u̇ + k u = F_0 sin ωt.    (2.19)

The model (2.19) has two variables u and t and five parameters m, c, k, F_0 and ω, leading to N = 7 physical quantities with the three fundamental physical units L_1 (mass), L_2 (length) and L_3 (time). Every term in equation (2.19) has the dimension of force, so L_1 L_2 L_3^{−2}, from which we can deduce the dimensions of c and k. The physical units of the system are

    [u] = L_2,  [t] = L_3,  [m] = L_1,  [c] = L_1 L_3^{−1},  [k] = L_1 L_3^{−2},  [F_0] = L_1 L_2 L_3^{−2},  [ω] = L_3^{−1}.

We need to select three primary variables to span the three fundamental units and, without loss of generality, we pick k, F_0, ω. Then

    [k] = (1, 0, −2)^T,  [F_0] = (1, 1, −2)^T,  [ω] = (0, 0, −1)^T.    (2.20)

To obtain four dimensionless quantities, we need to solve a linear system of equations for each secondary variable. As an example, we obtain for u

    A x = b    (2.21)

with unknown x = (α_1, α_2, α_3)^T and coefficients

    A = [  1   1   0
           0   1   0
          −2  −2  −1 ],    b = (0, 1, 0)^T.    (2.22)

The unique solution x = (−1, 1, 0)^T leads to [u] = [F_0]/[k], and we can thus form the dimensionless quantity u* = uk/F_0. Similarly, we obtain the four dimensionless quantities

    u* = uk/F_0,  t* = ωt,  m* = mω²/k,  c* = cω/k.

The dimensionless spring equation then reads

    m* ü* + c* u̇* + u* = sin t*,    (2.23)

where the time derivative is taken with respect to t*.
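The linear systems (2.21)-(2.22) for all four secondary variables can be solved in one go. A sketch of this computation (our own illustration, not part of the notes; numpy assumed):

```python
import numpy as np

# Columns of A: dimension exponents of the primary variables k, F0, omega
# over the basis (L1, L2, L3) = (mass, length, time), as in (2.20)-(2.22).
A = np.array([[ 1.0,  1.0,  0.0],
              [ 0.0,  1.0,  0.0],
              [-2.0, -2.0, -1.0]])

# Dimension vectors of the secondary variables u, t, m, c.
secondary = {
    "u": np.array([0.0, 1.0,  0.0]),  # [u] = L2
    "t": np.array([0.0, 0.0,  1.0]),  # [t] = L3
    "m": np.array([1.0, 0.0,  0.0]),  # [m] = L1
    "c": np.array([1.0, 0.0, -1.0]),  # [c] = L1 L3^-1
}

# Solve A x = b for each secondary variable; x gives the exponents of
# k, F0, omega in the scale z_j, so that s_j / z_j is dimensionless.
exponents = {}
for name, b in secondary.items():
    exponents[name] = np.linalg.solve(A, b)
    print(name, exponents[name])
# u: (-1, 1, 0)  ->  u* = u k / F0
# t: ( 0, 0,-1)  ->  t* = omega t
# m: ( 1, 0,-2)  ->  m* = m omega^2 / k
# c: ( 1, 0,-1)  ->  c* = c omega / k
```

Each exponent vector describes the scale z_j built from the primary variables; dividing the secondary variable by z_j reproduces the four dimensionless groups listed above.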

3 Flow modelling: macroscopic dynamics of fluid
flows
Mathematical tools & concepts: Brownian motion, Scalar conservation laws
Suggested references: [SJ05]

Many models of the applied sciences are posed in the form of partial differential equations (PDEs). For example, in flow modelling, a mean-field approach is taken to analyse fluxes and densities of particles, and the evolution of these fluxes and densities is described through PDEs. Such modelling plays a central role in atmospheric models, among others. Before analysing numerical methods to simulate PDEs, we will study an example of how PDEs can arise from conservation laws governing the evolution of a continuous mass rather than discrete particles.

3.1 From particle dynamics to molecular diffusion

A fundamental transport process in fluids is diffusion. Diffusion is random in nature: from the microscopic point of view, it describes the process by which molecules move about randomly in response to the temperature, following Brownian motion. From the macroscopic point of view, the characteristic transport by diffusion is from regions of high concentration to regions of low concentration. This property is described by Fick’s law of diffusion.

Fick’s law of diffusion  To describe a diffusive flux equation in one dimension, let us consider a collection of particles undergoing a random walk in one dimension. Let N(x, t) be the number of particles at position x at time t. After some time Δt, an average of half of the particles will have moved left and half will have moved right on the x axis. Accordingly, the net movement of particles to the right is

    −(1/2) [N(x + Δx, t) − N(x, t)].

The flux q_x is the net movement of particles across the section of length Δx during a certain time interval Δt,

    q_x = −(1/(2Δt)) [N(x + Δx, t) − N(x, t)],    (3.1)

which can be rearranged as

    q_x = −((Δx)²/(2Δt)) · [N(x + Δx, t) − N(x, t)] / (Δx)².    (3.2)

Now we define the concentration of particles as

    C(x, t) = N(x, t)/Δx    (3.3)

and define the diffusion constant D = (Δx)²/(2Δt). We then obtain

    q_x = −D [C(x + Δx, t) − C(x, t)] / Δx    (3.4)

and in the infinitesimal limit

    q_x = −D ∂C/∂x,    (3.5)

which is Fick’s first law of diffusion. Generalising to three dimensions yields

    q = −D ∇C.    (3.6)
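The relation D = (Δx)²/(2Δt) can be checked by a direct Monte Carlo simulation of the random walk (a sketch we add for illustration; the parameter values are our own and numpy is assumed): after n steps, the variance of the particle positions should approach 2Dt.

```python
import numpy as np

rng = np.random.default_rng(0)

dx, dt = 0.1, 0.01           # step length and time step of the walk
D = dx**2 / (2 * dt)         # diffusion constant implied by the walk
n_particles, n_steps = 5000, 400

# Each particle moves +dx or -dx with probability 1/2 at every step.
steps = rng.choice([-dx, dx], size=(n_particles, n_steps))
positions = steps.sum(axis=1)

t = n_steps * dt
var_empirical = positions.var()
var_theory = 2 * D * t       # spread predicted by the diffusion picture

print(var_empirical, var_theory)  # agree to within sampling error
```

The empirical variance fluctuates around 2Dt with a relative sampling error of order 1/√(n_particles), which is the microscopic-to-macroscopic link behind the diffusion equation derived next.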

3.2 Diffusion equation


Fick’s law of diffusion gives us an expression for the flux of mass due to the process of diffusion; however, we still need an equation to predict the change in density or concentration over time. Such an equation can be derived as a scalar conservation law. We consider the one-dimensional case again. Since the total number of particles in [x_1, x_2] during a time interval [t_1, t_2] is conserved (no particles disappear or appear), we have

    ∫_{x_1}^{x_2} C(x, t_2) dx − ∫_{x_1}^{x_2} C(x, t_1) dx = ∫_{t_1}^{t_2} q_x(x_1, t) dt − ∫_{t_1}^{t_2} q_x(x_2, t) dt.    (3.7)

Supposing that C and q_x are continuously differentiable with respect to x and t, the left-hand side of (3.7) can be expressed as

    ∫_{x_1}^{x_2} [C(x, t_2) − C(x, t_1)] dx = ∫_{x_1}^{x_2} ∫_{t_1}^{t_2} ∂C(x, t)/∂t dt dx.    (3.8)

The right-hand side can be rewritten similarly, leading to

    ∫_{x_1}^{x_2} ∫_{t_1}^{t_2} ∂C(x, t)/∂t dt dx = −∫_{t_1}^{t_2} ∫_{x_1}^{x_2} ∂q_x(x, t)/∂x dx dt.

The latter is equivalent to

    ∫_{x_1}^{x_2} ∫_{t_1}^{t_2} [ ∂C(x, t)/∂t + ∂q_x(x, t)/∂x ] dt dx = 0,    (3.9)

which is true for any choice of rectangle [x_1, x_2] × [t_1, t_2]. By the Fundamental Theorem of the Calculus of Variations (see the following Lemma for a simple version) this implies that

    ∂C(x, t)/∂t + ∂q_x(x, t)/∂x = 0,    (3.10)

which is a first-order conservation law. Using Fick’s law of diffusion (3.5) and assuming a constant diffusion coefficient D, we obtain the partial differential equation (PDE)

    ∂C/∂t = D ∂²C/∂x²,    (3.11)

which is the one-dimensional diffusion equation. Generalising to more dimensions yields the PDE

    ∂C/∂t = D ∇²C.    (3.12)

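Since (3.11) is derived from a conservation law, a faithful discretisation should preserve the total mass ∫C dx. As a minimal sketch (our own illustration, not the notes’ scheme; finite differences are treated later in the course, and numpy is assumed), an explicit step with periodic boundaries conserves the discrete mass exactly:

```python
import numpy as np

# Explicit (forward-in-time, centred-in-space) step for dC/dt = D d2C/dx2
# with periodic boundaries.
nx, L, D = 100, 1.0, 0.01
dx = L / nx
dt = 0.25 * dx**2 / D        # respects the explicit stability limit D*dt/dx^2 <= 1/2

x = np.linspace(0.0, L, nx, endpoint=False)
C = np.exp(-100 * (x - 0.5) ** 2)   # a smooth initial bump

mass0 = C.sum() * dx
for _ in range(200):
    lap = (np.roll(C, -1) - 2 * C + np.roll(C, 1)) / dx**2
    C = C + dt * D * lap

mass = C.sum() * dx
print(abs(mass - mass0))  # ~ machine precision: the periodic stencil sums to zero
```

Mass is conserved here because the discrete Laplacian sums to zero over a periodic grid, mirroring the cancellation of boundary fluxes in (3.7).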

Lemma 3.1. If f(x, y) is a continuous function defined on R² such that

    ∫∫_R f(x, y) dx dy = 0

for each rectangle R ⊆ R², then f(x, y) ≡ 0 for all (x, y).

Proof. Suppose that there exists a pair of coordinates (x_0, y_0) such that f(x_0, y_0) ≠ 0. Without loss of generality assume that f(x_0, y_0) > 0. Since f is continuous, there is a δ > 0 such that f(x, y) > f(x_0, y_0)/2 whenever |x − x_0| < δ and |y − y_0| < δ. Therefore if we let

    R_δ := {(x, y) : |x − x_0| < δ and |y − y_0| < δ},

then

    ∫∫_{R_δ} f(x, y) dx dy ≥ (1/2) ∫∫_{R_δ} f(x_0, y_0) dx dy = 2δ² f(x_0, y_0).

By assumption the left-hand side is zero; consequently we obtain

    0 ≥ 2δ² f(x_0, y_0),

which is a contradiction. Thus f(x, y) ≡ 0.

3.2.1 Similarity solution to the one-dimensional diffusion equation

We will present an analytical solution to the one-dimensional diffusion equation (3.11), or equivalently the heat equation, based on dimensional analysis or the so-called similarity method. We consider the one-dimensional problem of a narrow, infinite pipe of radius a. A mass of tracer M is injected uniformly across the cross-section of area A = πa² at the point x = 0 at time t = 0, and we seek a solution for the concentration C of the tracer over time, subject to diffusion alone. We set the boundary conditions

    C(±∞, t) = 0,    (3.13)

and the initial condition (representing the injection uniformly over a cross-section of infinitesimally small width in the x-direction)

    C(x, 0) = (M/A) δ(x),    (3.14)

where δ(x) is the Dirac delta function (zero everywhere except at x = 0, with integral from −∞ to +∞ equal to 1).
We denote the three fundamental physical units as M (mass), L (length) and T (time). The parameters that control the solution and their corresponding dimensions are the concentration C [M/L³], the injected concentration M/A [M/L²], the diffusion constant D [L²/T], x [L] and t [T]. There are n = 5 parameters and m = 3 fundamental dimensions, thus we can form the following two dimensionless groups:

    π_1 = C / ( (M/A) / √(Dt) ),    (3.15)

    π_2 = x / √(Dt).    (3.16)
Using the two dimensionless groups, Buckingham’s theorem tells us that

    C = (M / (A√(Dt))) Φ( x / √(Dt) ),    (3.17)

where Φ is a yet unknown function of π_2. This solution is called a similarity solution because C has the same shape in x at all t. Now we will assume the form (3.17) and use it as a solution to the diffusion equation (3.11) to solve for the function Φ. Essentially, we are doing a change of variable, and denote our similarity variable η = π_2 = x/√(Dt). Substituting (3.17) into (3.11) and using the chain rule, we obtain

    ∂C/∂t = ∂/∂t [ (M/(A√(Dt))) Φ(η) ]    (3.18)
          = ∂/∂t [ M/(A√(Dt)) ] Φ(η) + (M/(A√(Dt))) (∂Φ/∂η)(∂η/∂t)    (3.19)
          = −(M / (2At√(Dt))) [ Φ(η) + η ∂Φ/∂η ],    (3.20)

and similarly

    ∂²C/∂x² = (M / (ADt√(Dt))) ∂²Φ/∂η².    (3.21)

Substituting those results into the diffusion equation, we obtain the ordinary differential equation in η

    d²Φ/dη² + (1/2) [ Φ(η) + η dΦ/dη ] = 0.    (3.22)

We have thus reduced the PDE to an ODE, which is the goal of any analytical solution method for PDEs. Converting the initial (3.14) and boundary conditions (3.13) into the new variable η gives

    Φ(±∞) = 0.    (3.23)

A solution can be obtained as

    Φ(η) = (1/(2√π)) exp(−η²/4).    (3.24)

Replacing by the original variables, we obtain the solution

    C(x, t) = (M / (A√(4πDt))) exp(−x²/(4Dt)).    (3.25)
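The solution (3.25) can be sanity-checked numerically (our own sketch, not part of the notes; numpy and the parameter values are assumed): for any t > 0 the profile should integrate to the injected concentration M/A, and its variance should grow linearly as 2Dt.

```python
import numpy as np

M_over_A, D = 1.0, 0.5

def C(x, t):
    # Similarity solution (3.25) of the one-dimensional diffusion equation.
    return M_over_A / np.sqrt(4 * np.pi * D * t) * np.exp(-x**2 / (4 * D * t))

x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]

results = []
for t in (0.1, 1.0, 4.0):
    total = C(x, t).sum() * dx                  # conserved mass, ~ M/A for all t
    var = (x**2 * C(x, t)).sum() * dx / total   # variance of the profile
    results.append((t, total, var))
    print(t, total, var, 2 * D * t)             # variance matches 2 D t
```

The conserved integral reflects the scalar conservation law behind (3.11), while the linear growth of the variance is the macroscopic fingerprint of the random walk of Section 3.1.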

3.3 Advection-diffusion equation

We further consider the evolution of the concentration of a solute in a flow, subject to diffusion and, additionally, to transport by the flow (called advection). As we will see, non-dimensionalisation will inform us on the relative contributions of advection and diffusion to the overall evolution of concentrations. The advection-diffusion equation can be derived through mass conservation, similarly to the diffusion equation presented above, but considering additionally the transport by the flow. The general form of the model PDE is an advection-diffusion equation for the unknown concentration c : R_+ × Ω → R,

    ∂c(t, x)/∂t + u(x) · ∇c(t, x) − D Δc(t, x) = 0,    (3.26)

where u(x) is the velocity of the flow (which is assumed to be divergence-free in this form), D is a diffusion coefficient which is assumed constant, and there is no source term. Initial conditions (t = 0) and boundary conditions (x at the boundaries of the spatial domain) have to be specified to complete the model. For example, one could consider periodic boundary conditions. For a one-dimensional spatial domain Ω = [0, L] and t ∈ [0, T], initial and (periodic) boundary conditions are in this case

    c(0, x) = c_0(x),  and  c(t, 0) = c(t, L)  ∀t ∈ [0, T].

3.3.1 Dimensionless form of the advection-diffusion equation

To determine what processes are most important, we compare the magnitudes of each term. To do this we must recast each variable into a dimensionless form. Variables in this problem are c, x, t, u, and parameters are D, c_0, L, where c_0 is the initial concentration and L is a length scale associated with the problem’s geometry (for example, the domain length). The relative importance of advection and diffusion should be a function of t, D and u, so we use those three variables to form the dimensionless Péclet number

    Pe = u²t/D,    (3.27)

or, using the length scale and L = ut,

    Pe = uL/D.    (3.28)

Further using the dimensionless quantities t* = tu/L, x* = x/L and c* = c/c_0, the dimensionless equation reads

    ∂c*/∂t* + ∇c* − (1/Pe) Δc* = 0.    (3.29)

The equation (3.29) now depends only on the single dimensionless parameter Pe, which quantifies the relative importance of advection and diffusion. Flow with small Péclet number is diffusion-dominated, whereas flow with large Péclet number is advection-dominated.

Some properties of the solution  Analysing some properties of the solution will become important in order to choose an appropriate numerical treatment of the problem. Since the derivation of the advection-diffusion equation relies on the conservation of mass, we readily know that the total concentration, i.e. the concentration integrated over the entire domain, is preserved. Furthermore, the solution is dissipative. Indeed, the energy of the solution

    E(t) = ∫ c²(t, x) dx

is a decreasing function of t. This can be seen by calculating dE(t)/dt.
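The dissipativity can be illustrated numerically with the exact single-mode solution c(t, x) = e^{−Dk²t} sin(k(x − ut)) of the one-dimensional version of (3.26) (a sketch we add for illustration; the parameter values are our own and numpy is assumed): its energy over one period decays monotonically.

```python
import numpy as np

D, u, k = 0.1, 1.0, 2      # diffusion coefficient, advection speed, wavenumber
x = np.linspace(0.0, 2 * np.pi, 2000, endpoint=False)
dx = x[1] - x[0]

def c(t):
    # Exact single-mode solution of c_t + u c_x - D c_xx = 0.
    return np.exp(-D * k**2 * t) * np.sin(k * (x - u * t))

energies = [np.sum(c(t) ** 2) * dx for t in (0.0, 0.5, 1.0, 2.0)]
print(energies)  # strictly decreasing: advection only shifts the phase,
                 # while diffusion drains energy at rate 2 D k^2
```

The advective shift leaves the integral over a full period unchanged, so all of the decay comes from the diffusive factor e^{−2Dk²t}, consistent with the sign of dE/dt.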

3.3.2 Fourier series representation of the solution
Decomposing the solution of the advection-diffusion equation into Fourier modes
is very helpful to understand some properties of the solution. Furthermore,
we will use such Fourier decompositions in order to analyse the properties of
numerical schemes that we will use for scientific computing. For that, we make
the assumption that the solution is periodic in space, so that we can expand it in
a Fourier series: for a given time t,

c(x, t) = Σ_{k=−∞}^{+∞} b_k(t) e^{ikx},    (3.30)

where k are the wave numbers and b_k(t) is the amplitude of the wave mode k at
time t. Since the solution is simply a superposition of Fourier modes and due to
the linearity of the sum, we can substitute an arbitrary Fourier mode b_k(t)e^{ikx} in
(3.26) and analyse its behaviour (note that we work with the dimensional form of
the equation here for ease of interpretation of the results). For a one-dimensional
case with a constant advection velocity u, we obtain for one individual Fourier
mode corresponding to the wavenumber k

∂(b_k(t)e^{ikx})/∂t + u ∂(b_k(t)e^{ikx})/∂x − D ∂²(b_k(t)e^{ikx})/∂x² = 0,    (3.31)

and thus

(db_k(t)/dt) e^{ikx} + u (ik) b_k(t) e^{ikx} = D (ik)² b_k(t) e^{ikx},    (3.32)

which leads to the following ODE for b_k(t)

db_k/dt = −(Dk² + iuk) b_k,    (3.33)

or, writing λ = −Dk² − iuk,

db_k/dt = λ b_k.    (3.34)

The real part of λ is negative, since Re(λ) = −Dk² < 0, and the imaginary part
is Im(λ) = −uk. The solution of (3.34) is given by

b_k(t) = b_k(0) exp(−Dk²t) cos(ukt + φ),

where φ is an initial phase angle. The amplitude of the solution decays in time
due to Re(λ) = −Dk² < 0, which is solely related to the diffusion component.
Again, we see that the solution is hence dissipative. The advection term pro-
duces phase changes through Im(λ) = −uk. Later in the course, the model
problem (3.34) will serve as a prototypical example problem for dissipative sys-
tems when considering ODE integration methods. We will use it as an example
for which to define numerical methods which are specifically appropriate for the
numerical integration of dissipative systems.
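As a quick numerical illustration (with arbitrary illustrative values of D, u and k, not taken from the notes), one can check that the modulus of b_k(t) = b_k(0)e^{λt} decays exactly at the diffusive rate e^{−Dk²t}, independently of u:

```python
import cmath

# One Fourier mode of the advection-diffusion equation obeys db/dt = lam*b
# with lam = -D*k**2 - 1j*u*k, hence b(t) = b(0)*exp(lam*t).
D, u, k = 0.1, 1.0, 2.0      # illustrative values
lam = -D * k**2 - 1j * u * k

b0, t = 1.0, 3.0
bt = b0 * cmath.exp(lam * t)

# The advection part only rotates the phase; the amplitude decay is purely diffusive.
amplitude = abs(bt)
diffusive_decay = b0 * cmath.exp(-D * k**2 * t).real
```

The advection velocity u enters only the phase of bt, not its modulus, which is the statement above in computable form.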

4 Time integration methods
Mathematical tools & concepts: ODE, numerical discretisation, stability
analysis.
Suggested references: [QSS+ 07, Du11, Sto14, HW05]

When considering time integration or numerical integration of ODEs, we some-


times need a very accurate solution for a finite time. For example, when calcu-
lating the trajectory of a satellite. In this case, we will prioritise the accuracy
of a numerical scheme. In other cases, we may be interested in the long time
behaviour of the system, as when analysing a system exhibiting a periodic or
limit cycle behaviour. In such a context, this geometrical characteristic will be
the main aspect directing the choice of a numerical scheme. Some problems also
mix characteristic timescales of the dynamics. An example comes with chemical
reactions in models of air pollution, where some reactions are very fast while
others are very slow. A good numerical method in this context should be able
to treat all timescales simultaneously. Some of the numerical methods of inte-
gration of PDEs, representing the evolution of space-time variables, treat the
discretisation in time and space separately. This is the case for methods based
on finite differences. Time integration methods for PDEs are thus an essential
component of numerical integration of PDEs, and are essentially methods of
numerical integration of ODEs. In this chapter we review different methods of
time integration and introduce different notions to analyse the quality of numer-
ical solutions. We then analyse typical example systems with specific structural
properties, and show how to choose numerical schemes appropriately depending
on the underlying properties of the systems.

4.1 Formulation of the continuous problem

Consider the initial value problem

dy/dt = f(t, y(t)),   y(0) = y0,    (4.1)

where y : R+ → R^d is a vector-valued function of time t ≥ 0 and f : R+ × R^d → R^d is
a vector field. The problem can be written in integral form

y(t) = y0 + ∫_0^t f(s, y(s)) ds,    (4.2)

for which theoretical analysis requires fewer regularity assumptions. The inte-
gral form is also useful to suggest numerical schemes. Both formulations are
equivalent if f is continuous.

Existence and uniqueness of a solution  If the vector field f is locally
Lipschitz, the Picard-Lindelöf existence and uniqueness theorem for initial value
problems [Tes12] implies that (4.1) has a unique local solution. The local Lip-
schitz condition is that there exists an open interval J ⊆ R+ centred on t0 and of
radius rJ, an open ball B centred on y0 and of radius rB, and a constant L > 0
such that

|f(t, y1) − f(t, y2)| ≤ L |y1 − y2|   ∀t ∈ J, ∀y1, y2 ∈ B.    (4.3)

If, in addition, the Lipschitz condition is satisfied for J = R+ and B = R^d (f
globally Lipschitz), then the initial value problem (4.1) admits a unique global
solution.

Stability of the solution  We now assume that (4.1) admits a unique solution
on an interval [0, t_max). A question of interest for stability analysis is: if we
slightly perturb (4.1), what happens to the solution? The sensitivity of the
solution to perturbations may be defined in the sense of Lyapunov. We consider
the following perturbed problem on an interval I = [t0, t1] ⊂ [0, t_max)

dz/dt = f(t, z(t)) + δ(t),   z(t0) = y(t0) + δ0,    (4.4)

where δ0 and δ(t) are small perturbations. The idea of Lyapunov stability is
that the solution of the perturbed system stays close to the reference solution
if the perturbation is not too large.
Definition 4.1 (Lyapunov stability). Let

|δ0| ≤ ε,   |δ(t)| ≤ ε   ∀t ∈ I.    (4.5)

The problem (4.1) is stable in the sense of Lyapunov over I if there exists C > 0
such that the solution of (4.4) satisfies

∀t ∈ I,   |y(t) − z(t)| ≤ Cε.    (4.6)

If I has no upper bound, we say that (4.1) is asymptotically stable if it is Lya-
punov stable on any bounded interval I and if, in addition,

|y(t) − z(t)| → 0 as t → +∞,    (4.7)

provided that lim_{t→+∞} |δ(t)| = 0.

The constant C depends in general on the problem data t0, y0, f. Lyapunov
stability ensures that perturbations which are bounded by ε can only lead to
modifications of the solution of order C(T)ε, where the dependence on T is
typically exponential. Asymptotic stability ensures that if perturbations tend
to 0 in long time, then the solution of the perturbed system stays uniformly
close to the solution of the reference system. An example of asymptotically
stable systems is the family of linear dissipative systems (see Section 4.4.1).
This notion is generally relevant for dissipative systems.

4.2 One-step approximation methods

We now want to formulate numerical methods to approximate the solution of
(4.1) over a finite time interval [0, T]. For that, we introduce the discrete nodes
t0 = 0 < t1 < · · · < tN = T and denote the numerical approximation of the
exact solution y(tn) as y^n. The time increments are denoted as Δtn = t_{n+1} − tn.
To construct one-step approximation methods, we discretise (4.2) over the
time interval [tn, t_{n+1}] using a quadrature rule. The abstract algebraic relation

y^{n+1} = y^n + Δtn Φ_{Δtn}(tn, y^n),    (4.8)

where Φ_{Δtn}(tn, y^n) is an approximation of

(1/(t_{n+1} − tn)) ∫_{tn}^{t_{n+1}} f(s, y(s)) ds,    (4.9)

provides an iterative rule to calculate the numerical trajectory of the solution.
Such a rule gives a relationship where y^{n+1} depends only on y^n, and the associated
numerical methods are called one-step methods. Numerical methods obtained
in this way are called Runge-Kutta methods. Those can be organised in two
categories, namely explicit methods, where y^{n+1} can be obtained explicitly from
y^n, and implicit methods, where a nonlinear equation needs to be solved to obtain
y^{n+1} from y^n. A few examples follow.

1. Explicit methods
(a) Explicit Euler: y^{n+1} = y^n + Δtn f(tn, y^n);
(b) Heun's method:
y^{n+1} = y^n + (Δtn/2) (f(tn, y^n) + f(t_{n+1}, y^n + Δtn f(tn, y^n)));
(c) Fourth-order Runge-Kutta scheme: multiple stages are included

F1 = f(tn, y^n)
F2 = f(tn + Δtn/2, y^n + (Δtn/2) F1)
F3 = f(tn + Δtn/2, y^n + (Δtn/2) F2)
F4 = f(tn + Δtn, y^n + Δtn F3),

and finally
y^{n+1} = y^n + Δtn (F1 + 2F2 + 2F3 + F4)/6.

2. Implicit methods
(a) Implicit Euler: y^{n+1} = y^n + Δtn f(t_{n+1}, y^{n+1});
(b) Crank-Nicolson: y^{n+1} = y^n + (Δtn/2) (f(tn, y^n) + f(t_{n+1}, y^{n+1}));
(c) Midpoint rule: y^{n+1} = y^n + Δtn f((tn + t_{n+1})/2, (y^n + y^{n+1})/2).
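The explicit schemes above can be sketched as one-step update functions; the comparison on ẏ = −y below is our own illustration (test problem and step size are not from the notes):

```python
import math

def euler_step(f, t, y, dt):
    # Explicit Euler: y_{n+1} = y_n + dt * f(t_n, y_n)
    return y + dt * f(t, y)

def heun_step(f, t, y, dt):
    # Heun: average of the slope at t_n and the predicted slope at t_{n+1}
    k1 = f(t, y)
    k2 = f(t + dt, y + dt * k1)
    return y + 0.5 * dt * (k1 + k2)

def rk4_step(f, t, y, dt):
    # Classical fourth-order Runge-Kutta
    F1 = f(t, y)
    F2 = f(t + dt / 2, y + dt / 2 * F1)
    F3 = f(t + dt / 2, y + dt / 2 * F2)
    F4 = f(t + dt, y + dt * F3)
    return y + dt * (F1 + 2 * F2 + 2 * F3 + F4) / 6

# Integrate y' = -y, y(0) = 1 up to T = 1 (exact solution: e^{-1})
f = lambda t, y: -y
dt, N = 0.01, 100
y_e = y_h = y_r = 1.0
for n in range(N):
    t = n * dt
    y_e = euler_step(f, t, y_e, dt)
    y_h = heun_step(f, t, y_h, dt)
    y_r = rk4_step(f, t, y_r, dt)

exact = math.exp(-1.0)
# At the same step size, the higher-order schemes give much smaller errors
err_e, err_h, err_r = abs(y_e - exact), abs(y_h - exact), abs(y_r - exact)
```

At this step size the Euler error is of order 10⁻³, Heun's of order 10⁻⁶, and RK4's far smaller still, reflecting their orders 1, 2 and 4.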

Implementation of implicit schemes  At each step of an implicit scheme,
a nonlinear problem has to be solved for y^{n+1}. There is an associated extra
computational burden, which may however be compensated by better stability,
and therefore the possibility to use larger time steps. The nonlinear problem can
be solved by iterations of a fixed-point method at each time step. Existence
and uniqueness of a solution follow from conditions given by the implicit function
theorem.
Example 4.2 (Predictor-corrector implementation of the implicit Euler scheme).
We start with a predicted state obtained through the explicit Euler method

z^{n+1,0} = y^n + Δtn f(tn, y^n),

which we then correct using fixed-point iterations following

z^{n+1,k+1} = y^n + Δtn f(t_{n+1}, z^{n+1,k}).

If the time step is chosen appropriately, one can show that z^{n+1,k} → y^{n+1} as k → +∞.
In practice, one can set a certain tolerance ε > 0 and run a finite number of
iterations such that

|z^{n+1,k+1} − z^{n+1,k}| ≤ ε.

The convergence is often very fast and even one iteration can suffice to get a
good approximation.
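A minimal sketch of this predictor-corrector loop for a scalar problem (the function name and the test problem ẏ = −5y are our own illustration):

```python
def implicit_euler_step(f, t_next, y, dt, tol=1e-12, max_iter=50):
    # Predictor: explicit Euler guess z^{n+1,0} = y^n + dt*f(t_n, y^n)
    z = y + dt * f(t_next - dt, y)
    # Corrector: fixed-point iterations z <- y^n + dt*f(t_{n+1}, z)
    for _ in range(max_iter):
        z_new = y + dt * f(t_next, z)
        if abs(z_new - z) <= tol:
            return z_new
        z = z_new
    return z

# Dissipative test problem y' = -5y; the fixed-point map is a contraction
# here since dt*|df/dy| = 0.5 < 1.
y1 = implicit_euler_step(lambda t, y: -5.0 * y, 0.1, 1.0, 0.1)
exact_update = 1.0 / (1.0 + 5.0 * 0.1)   # implicit Euler solves y1 = y0/(1 - lam*dt)
```

For this linear problem the fixed point can be computed in closed form, which makes it easy to check the iteration.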

4.3 A priori error analysis

With a priori error analysis, the objective is to estimate the numerical error
made by the numerical scheme depending on the problem's parameters (e.g.
integration time, time step, force field). Local numerical errors occur at each
iteration of an integration procedure and accumulate over time. In order to
control this error accumulation, a notion of stability is introduced. The local
errors are controlled by a notion of consistency of the numerical scheme. One can
show that a numerical method which is stable and consistent is also convergent.

Truncation error  The local truncation error is the residual error obtained
if one applies the numerical scheme to the exact solution. For the nth iter-
ation, it consists of the difference between y(t_{n+1}) and its approximation
y(tn) + Δtn Φ_{Δtn}(tn, y(tn)). From that we obtain the definition of the local trun-
cation error

η^{n+1} := (y(t_{n+1}) − y(tn) − Δtn Φ_{Δtn}(tn, y(tn))) / Δtn.    (4.10)

Definition 4.3 (Consistency). Let Δt = max_{0≤n≤N−1} Δtn be the maximum
time step. A numerical method is consistent if

lim_{Δt→0} ( max_{1≤n≤N} |η^n| ) = 0,    (4.11)

and it is consistent of order p if there exists a constant C such that, for all
0 ≤ n ≤ N − 1,

|η^{n+1}| ≤ C (Δtn)^p.    (4.12)

Proofs of consistency generally rely on Taylor expansions of the exact solu-
tion and thus require regularity of the vector field.
Example 4.4 (Consistency of Euler's scheme). Euler's explicit scheme is con-
sistent of order 1. The truncation error is

η^{n+1} = (y(tn + Δtn) − (y(tn) + Δtn f(tn, y(tn)))) / Δtn.

Using Taylor's expansion around y(tn), we see that there exists a θn ∈ [0, 1]
such that

y(t_{n+1}) = y(tn) + Δtn f(tn, y(tn)) + (Δtn²/2) y''(tn + θn Δtn),

y(tn + Δtn) − (y(tn) + Δtn f(tn, y(tn))) = (Δtn²/2) y''(tn + θn Δtn).

Moreover, the second derivative y''(t) can be expressed in terms of the derivatives
of f, by differentiating the expression ẏ = f(t, y(t)) with respect to time

y''(τ) = ∂t f(τ, y(τ)) + ∂y f(τ, y(τ)) · f(τ, y(τ)).

In the case where f and its derivatives are continuous, y'' is uniformly bounded
on any interval [0, T] with T < +∞ (because a continuous function on a closed
and bounded region has a maximum and a minimum value). Therefore

|η^{n+1}| ≤ C Δtn,

where C = (1/2) sup_{t∈[0,T]} |y''(t)| depends on the integration time and on the initial
condition.
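The first-order behaviour can also be checked empirically: halving the time step should roughly halve the global error (the test problem and step counts below are our own choice):

```python
import math

def euler_global_error(N):
    # Integrate y' = -y, y(0) = 1 up to T = 1 with N explicit Euler steps
    dt = 1.0 / N
    y = 1.0
    for _ in range(N):
        y += dt * (-y)
    return abs(y - math.exp(-1.0))

e_coarse = euler_global_error(100)
e_fine = euler_global_error(200)
ratio = e_coarse / e_fine   # close to 2 for a first-order method
```

A second-order method would give a ratio close to 4 under the same refinement, which is a standard way to verify the implemented order of a scheme.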

Stability  The notion of stability of a numerical scheme quantifies the robust-
ness of the approximation with regard to perturbations. This is the numerical
analogue of the notion of Lyapunov stability for the solutions of differential
equations. We state the definition for a fixed integration time interval [0, T]
and N iterations with a constant time step Δt > 0, for simplicity. The extension to
a variable time step is straightforward.
Definition 4.5 (Stability). A numerical method (4.8) is stable if there exists a
constant S > 0 (depending on T but not on N or Δt) such that, for all sequences
z = (z^n)_{1≤n≤N} starting with the same initial condition z^0 = y^0 and satisfying

y^{n+1} = y^n + Δt Φ_{Δt}(tn, y^n),
z^{n+1} = z^n + Δt Φ_{Δt}(tn, z^n) + Δt δ^{n+1},    (4.13)

the following inequality holds

max_{1≤n≤N} |y^n − z^n| ≤ S Δt Σ_{n=1}^{N} |δ^n|.    (4.14)

Convergence  Convergence states that the numerical solution converges to
the true solution.
Definition 4.6 (Convergence). A numerical method is convergent if the global
error satisfies

max_{1≤n≤N} |y^n − y(tn)| → 0    (4.15)

when y^0 = y(t0) and Δt = max_{0≤n≤N−1} |t_{n+1} − tn| → 0.
Theorem 4.7. A stable and consistent method is convergent.
Proof. We replace z^n in (4.13) by the exact solution y(tn), which consists of
choosing δ^{n+1} = η^{n+1}, i.e. the local truncation error defined in (4.10). Keeping
the same initial condition, we obtain by stability

max_{1≤n≤N} |y^n − y(tn)| ≤ S Δt Σ_{n=1}^{N} |η^n| ≤ S T max_{1≤n≤N} |η^n|.    (4.16)

From the definition of consistency, the right-hand side tends to zero with Δt.
Moreover, if the method is consistent of order p, then the global error is of order
Δt^p since

max_{1≤n≤N} |y^n − y(tn)| ≤ S T C Δt^p.    (4.17)

4.4 Long-term behaviour of example problems

4.4.1 Dissipative systems
In some cases, we may need to integrate a system over very long times (virtually
infinite), and the convergence of the method may not be the appropriate condi-
tion to be satisfied. Indeed, we will see that some methods may be sufficiently
stable to ensure convergence to the true solution when Δt → 0, but that they
may nevertheless generate a solution that blows up in an unphysical manner
when the computations are performed for a finite value of Δt. Many physical
systems are characterised by true solutions which are bounded or even decay
with time, and appropriate numerical schemes for such systems should be able
to reproduce this behaviour.
As a generic example, we consider the special class of one-dimensional linear
dissipative systems of the form, for a given λ ∈ C,

ẏ(t) = λ y(t),   Re(λ) < 0.    (4.18)

Linear dissipative systems can be seen as a linearised version of more interesting
systems, and serve as a basis to understand the need for additional notions of
stability which can be applied to nonlinear dissipative systems. Examples of
dissipative systems are numerous and very frequent in fluid-dynamical problems:
e.g. concentration of a scalar subject to transport and diffusion, turbulent
flows, hurricanes, oscillating dynamics with friction, . . . We have seen in Section
3.3.2 that expressing the diffusion or heat equation in Fourier modes leads to a
problem of the form (4.18) for an individual Fourier mode.
We are interested in numerical schemes which make it possible to approach a solution of
systems of this type for arbitrarily long time intervals, even with a relatively
large value of Δt. The relevant notion of stability is that of absolute stability or
A-stability, which states that the numerical scheme reproduces the qualitative
behaviour of the solution in long times: y(t) → 0 as t → +∞, and thus we want
y^n → 0 as n → +∞.
Example 4.8 (Euler's scheme). Although the explicit Euler method is suffi-
ciently stable to yield solutions that converge in the limit Δt → 0, it may gen-
erate a sequence that blows up in an unphysical manner when the computations
are done with finite values of Δt. Consider the model problem

ẏ = λ y(t),   y(0) = 1 and t ∈ (0, ∞),    (4.19)

with λ < 0. The exact solution is y(t) = e^{λt}, which tends to 0 as t → ∞.
Applying Euler's explicit scheme to (4.19) leads to

y^0 = 1,   y^{n+1} = y^n (1 + λΔt) = (1 + λΔt)^{n+1},   n ≥ 0.    (4.20)

Consequently, lim_{n→∞} y^n = 0 if and only if

−1 < 1 + λΔt < 1,   i.e.   Δt < 2/|λ|.    (4.21)

If condition (4.21) is satisfied, then for a fixed value of Δt, the numerical so-
lution reproduces the exact behaviour of the true solution when tn → ∞. Oth-
erwise, the numerical solution blows up asymptotically. Therefore, (4.21) is a
stability condition.
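A sketch of the condition in action, with λ = −10 so that the stability bound is Δt < 0.2 (the parameter values are chosen purely for illustration):

```python
lam = -10.0

def euler_final_amplitude(dt, n_steps):
    # |y^N| after N explicit Euler steps for y' = lam*y, y(0) = 1
    y = 1.0
    for _ in range(n_steps):
        y *= 1.0 + lam * dt
    return abs(y)

stable = euler_final_amplitude(0.15, 200)    # dt < 2/|lam|: amplitude decays
unstable = euler_final_amplitude(0.25, 200)  # dt > 2/|lam|: amplitude blows up
```

In the stable case each step multiplies the solution by |1 + λΔt| = 0.5; in the unstable case by 1.5, so the sequence grows geometrically even though the exact solution decays.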
Since the problem (4.18) is linear, we can write one iteration of a one-step
method in the form

y^{n+1} = R(λΔt) y^n,    (4.22)

where R depends on the chosen scheme and can be interpreted as an amplifica-
tion factor describing the amplification of the solution between two consecutive
time steps. For example, for the explicit Euler scheme, R(z) = 1 + z, or for
Crank-Nicolson,

R(z) = (1 + z/2) / (1 − z/2).

Absolute stability  The region of absolute stability of a numerical method is
defined as the set

A = {z ∈ C, |R(z)| < 1},    (4.23)

where R(z) is the amplification factor defined in (4.22). A scheme is said to
be A-stable if {z ∈ C, Re(z) < 0} ⊂ A, i.e. it is absolutely stable for all Δt.
Otherwise it is conditionally absolutely stable. For example, the explicit Euler
scheme is conditionally stable under the condition |1 + z| < 1, whereas the
implicit Euler scheme is unconditionally A-stable.
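The amplification factors can be compared directly. Evaluating them on the negative real axis (an illustrative check of our own) shows the conditional stability of explicit Euler, and also that Crank-Nicolson damps very fast modes only weakly (|R(z)| → 1 as z → −∞) while implicit Euler damps them completely (|R(z)| → 0):

```python
def R_explicit_euler(z):
    return 1.0 + z

def R_crank_nicolson(z):
    return (1.0 + z / 2.0) / (1.0 - z / 2.0)

def R_implicit_euler(z):
    return 1.0 / (1.0 - z)

# At z = -3, outside explicit Euler's stability interval (-2, 0):
a_ee = abs(R_explicit_euler(-3.0))    # > 1: unstable
a_cn = abs(R_crank_nicolson(-3.0))    # < 1: stable
a_ie = abs(R_implicit_euler(-3.0))    # < 1: stable

# Behaviour for very fast modes, z -> -infinity:
far = -1.0e6
cn_far = abs(R_crank_nicolson(far))   # tends to 1: weak damping
ie_far = abs(R_implicit_euler(far))   # tends to 0: strong damping
```

This difference in the z → −∞ limit is exactly what distinguishes the two A-stable schemes on stiff problems, discussed next.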

Stiff problems  Some practical applications involve dissipative systems rep-
resented by systems of equations in which individual components decay at very
different rates. This is frequent when modelling chemical reactions with differ-
ent decay rates, for example. We may not be interested in accurately resolving
the precise behaviour of the most rapidly decaying variable, but mainly in re-
solving the slowly decaying variable. Yet the time step should be chosen such
as to keep stable dynamics for the fastest decaying variable, and that requires
a small time step. Examples where the time step required to maintain stability
in a numerical integration is far smaller than that which might seem sufficient
to accurately resolve part of the evolving variables are called stiff problems.
As an example we consider the evolution of the concentration of chemical
species represented by the vector y and subject to different decay rates (e.g.
radioactive decay of different isotopes). For two species, we consider that the
system evolves according to the following system of ODEs, with a given param-
eter μ ≫ 1

ẏ = M y,   M = diag(−1, −μ),   y(0) = (y1^0, y2^0)^T.    (4.24)

The solution to this system is y(t) = (y1^0 e^{−t}, y2^0 e^{−μt}). There are two very
different time scales in the solution, namely one species decays on a timescale
of order 1, while the other decays on a timescale of order 1/μ. We suppose that we are
interested in resolving the dynamics of the first species while preserving stability
for the second. If we apply the explicit Euler scheme, we obtain

y^{n+1} = (Id + ΔtM) y^n.    (4.25)

The condition of stability requires that |1 − μΔt| < 1, or Δt < 2/μ, which
is a severe limitation on the time step. We are thus forced to use a very small
time step to track the solution of a fast decaying variable, which will virtually
stay constant for large values of t (due to the exponentially fast decay). It is
clear that conditionally stable methods are not appropriate for approximating
stiff problems. A solution is to use A-stable methods. For example, implicit
methods are more expensive to use but are A-stable.
However, A-stability may not be enough. A large time step will not produce
unstable amplification of the fastest decaying variables, but sufficiently large
time steps prevent them from decaying properly. This motivates the concept of
L-stability.
Definition 4.9 (L-stability). A method is L-stable if it is A-stable and R(z) → 0
as z → ∞, where R(z) is the amplification factor of the method introduced in
(4.22).

L-stable methods are in general very good at integrating stiff equations since
the fastest modes will decay the most rapidly. The implicit Euler method is an
example of an L-stable method.
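The contrast can be sketched on the two-species system with μ = 1000 (the parameter values and step size are our own illustration): a step size that resolves the slow species comfortably violates the explicit stability bound Δt < 2/μ, while implicit Euler damps both components:

```python
import math

mu = 1000.0
dt, n_steps = 0.01, 100   # resolves the slow species, but dt > 2/mu = 0.002

y1_ex = y2_ex = 1.0   # explicit Euler
y1_im = y2_im = 1.0   # implicit (L-stable) Euler
for _ in range(n_steps):
    y1_ex, y2_ex = y1_ex * (1.0 - dt), y2_ex * (1.0 - mu * dt)
    y1_im, y2_im = y1_im / (1.0 + dt), y2_im / (1.0 + mu * dt)

fast_explicit = abs(y2_ex)                   # blows up: |1 - mu*dt| = 9 > 1
fast_implicit = abs(y2_im)                   # decays to ~0, as it should
slow_error = abs(y1_im - math.exp(-1.0))     # slow species still accurate
```

The implicit scheme tracks the slow species to a few percent while annihilating the fast one, which is precisely the behaviour L-stability guarantees.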

4.4.2 Hamiltonian systems

A system of differential equations of the form

ṗi = −∂H/∂qi,    (4.26)
q̇i = ∂H/∂pi,   i = 1, · · · , n,    (4.27)

is known as a Hamiltonian system. The function H of the 2n variables pi, qi
is the Hamiltonian or energy integral. A simple calculation shows that H is
constant along trajectories, i.e. H is a first integral:

dH(p(t), q(t))/dt = ∂p H(p(t), q(t)) · ṗ(t) + ∂q H(p(t), q(t)) · q̇(t) = 0,

where (p, q) ∈ R^{2n}. A relevant notion of stability for a Hamiltonian system is
therefore the long-term preservation of the Hamiltonian, or total energy.
The system can also be written as

ẏ = J∇H(y),    (4.28)

where y = (qi, pi) and J is the skew-symmetric matrix defined by

J = (   0    Id_n
      −Id_n   0  ).    (4.29)
Example 4.10 (Small particle in a stratified atmosphere). The motion of a
small particle in a stratified atmosphere is governed by the second-order ODE:

z̈ = g (ρ(z) − ρp)/ρp,    (4.30)

where g is the gravity, ρ(z) is the density of the ambient fluid and the particle is
at equilibrium at a height z = C, where ρ(C) = ρp. The ODE can equivalently
be written as a first-order system

ż = v,    (4.31)
v̇ = g (ρ(z) − ρp)/ρp.    (4.32)

The function

H = E = (1/2) v² − (g/ρp) ∫ [ρ(z) − ρp] dz,    (4.33)

is the energy of the system (the sum of kinetic energy and potential energy) and
is constant along trajectories. A simple calculation shows that (4.31)-(4.32) can be
written in the form

ż = ∂H/∂v,    (4.34)
v̇ = −∂H/∂z,    (4.35)

and the system is a Hamiltonian system.

Phase volume preservation  Hamiltonian systems possess the property of
phase-volume preservation, as well as the more important property of preserv-
ing symplectic structures (symplecticity), which we will define in the following.
Defining y = (p, q) ∈ R^{2n}, the system (4.26)-(4.27) can be written as the ODE

ẏ = F(y),    (4.36)

where F(y) is the vector field. Consider a domain with finite volume D0 ⊂
R^d, d = 2n. The transformation Φt : y(0) ↦ Φt(y(0)) = y(t; y(0)) maps D0 into
the domain Dt (t ≥ 0), according to the solution of (4.36) that satisfies the
given initial conditions y(0). The volume Vt of the domain Dt is equal to

Vt = ∫_{Dt} dy    (4.37)
   = ∫_{D0} |det (∂Φt/∂y)| dy    (4.38)
   = ∫_{D0} |det M| dy,    (4.39)

where M is the Jacobian matrix of the flow map, which is composed of the ele-
ments ∂Φi/∂yj. Therefore, the volume-preserving condition is the following equality:

|det M| = 1.    (4.40)

Volume-preserving numerical integration  In the following example we
want to explore numerical methods for the integration of a system characterised
by Hamiltonian dynamics. Convergence results obtained by a priori analysis
are given for certain schemes; however, the constants which appear in the error
estimates, such as the stability constant, typically depend exponentially on
time. Since Hamiltonian systems preserve energy over time, and preserve volume
in phase space, the question is whether those quantities are indeed preserved over
long times for a finite time step, which is not a priori related to the convergence
of the solution in finite times. This is important because convergence results
are valid in the limit Δt → 0.
In order for the numerical method to preserve volume in phase space, the
requirement is that the volume-preserving condition (4.40) is satisfied for the
numerical flow map Φ_{Δtn}:

|det ∇Φ_{Δtn}| = 1.    (4.41)
Example 4.11 (Numerical integration of a harmonic oscillator). We consider
the example of the dust particle (4.30) in a linearly stratified atmosphere ρ(z) =
αz + ρp, α < 0. The corresponding Hamiltonian (4.33), which is the total energy
of the system (preserved over time), simplifies to

H = E = (1/2) v² − (1/2)(gα/ρp) z²,    (4.42)

and the system is

ż = v,    (4.43)
v̇ = (gα/ρp) z.    (4.44)

Denoting y = (z, v), the system is written as

ẏ = Ay,   A = (   0     1
               gα/ρp    0 ).    (4.45)

The explicit Euler scheme leads to the relation

y^{n+1} = y^n + ΔtAy^n.    (4.46)

The Jacobian matrix of the numerical method is then

∇Φ_{Δtn} = Id + ΔtA,    (4.47)

and the determinant is det(Id + ΔtA) = 1 − (gα/ρp)Δt², which is not one but strictly
larger than one (recall α < 0). Thus the scheme does not preserve phase-space volume but
rather increases phase-space volume with time.
The implicit Euler scheme

y^{n+1} = (Id − ΔtA)^{−1} y^n    (4.48)

has the following Jacobian

∇Φ_{Δtn} = 1/(1 − (gα/ρp)Δt²) (     1        Δt
                                (gα/ρp)Δt     1  ),    (4.49)

and the determinant is

det ∇Φ_{Δtn} = (1 − (gα/ρp)Δt²) / (1 − (gα/ρp)Δt²)² = 1/(1 − (gα/ρp)Δt²),    (4.50)

which is not one either, but smaller than one. Thus the scheme does not preserve
phase-space volume but rather decreases phase-space volume with time.
However, symplectic numerical methods preserve phase volume, and we in-
troduce them next.

Symplectic flow  Hamiltonian flows possess the even deeper property of pre-
serving symplectic structures. A flow map g : R^{2n} → R^{2n} is said to be sym-
plectic if its Jacobian g'(y) satisfies the following property

g'(y)^T · J · g'(y) = J.    (4.51)

One can show that this property has the geometric interpretation of preserv-
ing oriented areas along the flow, and thus also implies preservation of volume in phase
space. The fundamental property of Hamiltonian systems is that the flow map
Φt : y(0) ↦ Φt(y(0)) = y(t; y(0)) is symplectic:

(Φ't)^T · J · Φ't = J.    (4.52)

To see this, let Φt be the flow of (4.26)-(4.27) and notice that

d/dt (∂Φt(y)/∂y) = ∂/∂y (d Φt(y)/dt) = ∂/∂y (J · ∇H(Φt(y))) = J · ∇²H(Φt(y)) · ∂Φt(y)/∂y.

Denoting Ψ : t ↦ ∂Φt(y)/∂y for simplification, we obtain

d/dt (Ψ(t)^T · J · Ψ(t)) = Ψ(t)^T · ∇²H(Φt(y)) · J^T J · Ψ(t) + Ψ(t)^T · J² · ∇²H(Φt(y)) · Ψ(t) = 0,

where we use J^T = −J and J² = −Id. This shows that Ψ(t)^T · J · Ψ(t) = Ψ(0)^T · J · Ψ(0) = J,
since Ψ(0) = ∂Φ0(y)/∂y = Id, and thus the flow is symplectic.

Symplectic integrators  In order for the long-term integration of Hamilto-
nian dynamics to be stable, in the sense that energy is preserved, numerical
schemes should preserve geometric properties of the flow. The relevant criterion
here is that the numerical scheme should be symplectic. Intuitively speaking,
symplectic integrators preserve exactly an approximate energy, which implies
that the energy of the system is approximately preserved over long time inte-
gration.
Definition 4.12 (Symplectic integrator). A numerical method is symplectic if
the application

Φ_{Δt} : y^0 ↦ y^1 = Φ_{Δt}(y^0)

is symplectic when the method is applied to a Hamiltonian system.

Symplecticity is linked to the conservation of the volume occupied in phase space,
and thus for a symplectic numerical method, trajectories cannot converge to a
given trajectory (otherwise the volume occupied in phase space would shrink
with integration time, like we saw for the implicit Euler scheme in Example
4.11). Trajectories cannot diverge either, as that would imply an increase of phase
volume (see Example 4.11 when using the explicit Euler scheme). One can show
that symplectic methods preserve an approximate Hamiltonian for exponentially
long times of integration.
In particular, the implicit midpoint rule is symplectic. A fruitful means of
constructing symplectic integrators is to use a splitting strategy.

Symplectic splitting methods  Suppose that a system ẏ = F(y), y ∈ R^n
can have its vector field "split" as

ẏ = F¹(y) + F²(y).    (4.53)

If, by chance, the exact flows Φ_{t,1} and Φ_{t,2} of the systems ẏ = F¹(y) and
ẏ = F²(y) can be calculated explicitly, one can, from a given initial value y0,
first solve the first system to obtain a value y_{1/2}, and from this value integrate
the second system to obtain y1. For a Hamiltonian system with

ẏ = J∇H(y),   H(y) = H1(y) + H2(y),    (4.54)

we obtain the split flows

ẏ = J∇H1(y),   ẏ = J∇H2(y),    (4.55)

which we suppose can be integrated. Since the flow maps Φ_{t,1}(y) and Φ_{t,2}(y)
are solutions of a Hamiltonian system we have

Φ'_{t,1}(y)^T · J · Φ'_{t,1}(y) = J,   Φ'_{t,2}(y)^T · J · Φ'_{t,2}(y) = J.    (4.56)

The composition of the two exact flows Φ := Φ_{t,2} ∘ Φ_{t,1} is also symplectic since,
with y* = Φ_{t,1}(y),

Φ'(y)^T · J · Φ'(y) = (Φ'_{t,2}(y*) Φ'_{t,1}(y))^T · J · (Φ'_{t,2}(y*) Φ'_{t,1}(y))    (4.57)
 = Φ'_{t,1}(y)^T Φ'_{t,2}(y*)^T · J · Φ'_{t,2}(y*) Φ'_{t,1}(y)    (4.58)
 = Φ'_{t,1}(y)^T · J · Φ'_{t,1}(y) = J.    (4.59)

Example 4.13 (Separable Hamiltonian system). We consider a separable Hamil-
tonian system for which H(p, q) = K(q) + P(p) (notice that Example 4.11
satisfies this condition). For this case, we obtain the two following Hamiltonian
systems

ṗ = −K'(q),    ṗ = 0,    (4.60)
q̇ = 0,        q̇ = P'(p).    (4.61)

The flow maps Φ_{t,1}(p, q) and Φ_{t,2}(p, q) are respectively

Φ_{t,1}(p, q) = (p − t K'(q), q),   Φ_{t,2}(p, q) = (p, q + t P'(p)),    (4.62)

and they are symplectic since they are flow maps of Hamiltonian systems. Their
composition is thus also symplectic. The splitting method based on splitting the
Hamiltonian into kinetic and potential energy terms is given by

p^{n+1} = p^n − Δt K'(q^n),    (4.63)
q^{n+1} = q^n + Δt P'(p^{n+1}).    (4.64)

This scheme is referred to as the symplectic Euler scheme. We can verify that the
determinant of the Jacobian of this scheme applied to the system of Example 4.11
is indeed one. For this example, the symplectic Euler scheme (updating the
position first) leads to

z^{n+1} = z^n + Δt v^n,
v^{n+1} = v^n + Δt (gα/ρp) z^{n+1} = v^n + Δt (gα/ρp)(z^n + Δt v^n).

We obtain in matrix form

y^{n+1} = Ay^n,   A = (    1           Δt
                      (gα/ρp)Δt   1 + (gα/ρp)Δt² ),

and we indeed see that |det(A)| = 1, such that volume in phase space is preserved
through the integration, and the total energy is approximately preserved.
A second-order, symmetric variant is called the Störmer-Verlet scheme:

p^{n+1/2} = p^n − (Δt/2) K'(q^n),    (4.65)
q^{n+1} = q^n + Δt P'(p^{n+1/2}),    (4.66)
p^{n+1} = p^{n+1/2} − (Δt/2) K'(q^{n+1}).    (4.67)

The implicit midpoint rule

y^{n+1} = y^n + Δt J∇H((y^{n+1} + y^n)/2)    (4.68)

is also a symplectic method, of order 2.
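The long-time behaviour can be sketched for the harmonic oscillator H = (q² + p²)/2 (q̇ = p, ṗ = −q, a normalised instance of the oscillator in Example 4.11; step size and duration are illustrative choices): the energy under explicit Euler grows steadily, while the symplectic Euler scheme keeps it close to its initial value:

```python
dt, n_steps = 0.05, 2000   # roughly 16 oscillation periods

def energy(q, p):
    return 0.5 * (q * q + p * p)

q_ex, p_ex = 1.0, 0.0   # explicit Euler
q_sy, p_sy = 1.0, 0.0   # symplectic Euler (q updated first, then p with new q)
for _ in range(n_steps):
    q_ex, p_ex = q_ex + dt * p_ex, p_ex - dt * q_ex
    q_sy = q_sy + dt * p_sy
    p_sy = p_sy - dt * q_sy

E0 = energy(1.0, 0.0)
drift_explicit = abs(energy(q_ex, p_ex) - E0)     # grows like (1 + dt^2)^N
drift_symplectic = abs(energy(q_sy, p_sy) - E0)   # stays bounded, O(dt)
```

The only difference between the two loops is that the symplectic update uses the freshly computed position in the momentum update, yet this is exactly what makes the phase-space map area-preserving.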

Figure 4.1: Area preservation of numerical methods for a harmonic oscillator
system. Figure from [Ha10].

5 Numerical integration of partial differential equations
Mathematical tools & concepts: PDE, finite differences, stability analysis.
Suggested references: [Du11, HW05, QSS+ 07]

In the previous section, the basic strategy to represent the evolution of con-
tinuous functions that are solutions of ordinary differential equations was to
approximate the set of values taken by the function at a finite number of grid
points. From the grid-point values, the derivatives of the function were approx-
imated using finite differences. The goal of the present section is to examine the
behaviour of numerical schemes in which finite differences replace both time and
space derivatives in time-dependent partial differential equations (PDEs). The
finite-difference approximations of the time and space derivatives will be based
on discrete values taken by the solution function at regularly-spaced grid points
of a space-time grid. The analysis of such numerical integration approaches has
to consider simultaneously the space and time discretisation errors. Based on
the example problem of the advection-diffusion equation, we will analyse how to
approximate the solution numerically using finite-difference schemes, and how
to ensure the quality of the numerical solution.

5.1 Advection-diffusion equation
Our model PDE is an advection-diffusion equation for the unknown concentra-
tion c : R+ × Ω → R

∂c(t, x)/∂t + u(x) · ∇c(t, x) − D Δc(t, x) = 0,    (5.1)

where u(x) is the velocity of the flow (which is assumed to be divergence-free
in this form), D is a diffusion coefficient which is assumed constant, and there
is no source term. Initial conditions (t = 0) and boundary conditions (x at
the boundaries of the spatial domain) have to be specified to complete the
model. For example, one could consider periodic boundary conditions. For a
one-dimensional spatial domain Ω = [0, L] and t ∈ [0, T], initial and (periodic)
boundary conditions are in this case

c(0, x) = c0(x), and c(t, 0) = c(t, L) ∀t ∈ [0, T].

We recall the dimensionless form of the equation obtained earlier

∂c*/∂t* + ∇c* − (1/Pe) Δc* = 0,    (5.2)

or, dropping the stars for convenience,

∂c/∂t + ∇c − (1/Pe) Δc = 0.    (5.3)

We have already analysed some properties of the solution in an earlier section
and will use this model problem to introduce finite-difference methods to nu-
merically simulate such a model problem.

5.2 Discretisation by the finite-difference method

The method known as the finite-difference method uses approximations to the
partial derivatives in equations to reduce PDEs to a set of algebraic equations.
To simplify notations, we will consider a one-dimensional spatial domain in the
following. We consider a time discretisation with regular intervals Δt and a
regular space discretisation of spacing Δx, with a maximal integration time
T = NΔt and an integration domain of size L = JΔx, N and J being integers.
The unknowns to consider for a PDE describing the evolution of f(t, x) are the
values f_j^n for 1 ≤ n ≤ N and 1 ≤ j ≤ J, which represent the numerical approx-
imations of the exact solution f(tn, xj) over the points of the mesh determined
by Δt and Δx.

Semi-discretisation and the method of lines The method of lines most often refers to the construction or analysis of numerical methods for partial differential equations that proceeds by first discretising the spatial derivatives only, leaving the time variable continuous. This leads to a system of ordinary differential equations to which a numerical method for initial value ordinary differential equations can be applied. It is sometimes called semi-discretisation.
Similarly to the time discretisation considered earlier, various approximations are possible for the spatial derivative, based on Taylor expansions in which
we neglect high-order terms. Possible semi-discretisations include a right-sided approximation,

    ∂ₓf(t, x) ≈ (f(t, x + Δx) − f(t, x)) / Δx,        (5.4)

or left-sided,

    ∂ₓf(t, x) ≈ (f(t, x) − f(t, x − Δx)) / Δx,        (5.5)

or centred,

    ∂ₓf(t, x) ≈ (f(t, x + Δx) − f(t, x − Δx)) / (2Δx).        (5.6)

The second-order space derivative may be approximated by

    ∂ₓ²f(t, x) ≈ (f(t, x + Δx) − 2f(t, x) + f(t, x − Δx)) / Δx².        (5.7)

For example, the method of lines approximation of (5.3) using central differences is

    dc_j/dt = −(c_{j+1} − c_{j−1})/(2Δx) + (1/Pe)(c_{j+1} − 2c_j + c_{j−1})/Δx²,  1 ≤ j ≤ J,        (5.8)

where c_j(t) = c(t, x_j). The resulting system of equations is now an ODE system, since there is only one independent variable t. The PDE is thus replaced by a system of J ODEs, whose solutions are the J functions c₁(t), c₂(t), …, c_J(t). A complete specification of the ODE system still requires initial conditions. Including the initial and periodic boundary conditions discussed above results in

    c_j(0) = c₀(x_j),  and  c₁(t) = c_J(t),  t ≥ 0.

The system can now be integrated numerically using ODE integration methods.
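As a minimal sketch of this semi-discretisation in practice (the function names, grid size, and Péclet number below are illustrative choices, not taken from the text), one can assemble the right-hand side of (5.8) with periodic boundaries and step it with the classical fourth-order Runge-Kutta method:

```python
import numpy as np

def mol_rhs(c, dx, Pe):
    # Right-hand side of (5.8): central differences with periodic
    # boundaries (np.roll wraps the indices around the domain).
    adv = (np.roll(c, -1) - np.roll(c, 1)) / (2 * dx)
    diff = (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2
    return -adv + diff / Pe

def rk4_step(c, dt, dx, Pe):
    # Classical RK4 step for the ODE system dc/dt = mol_rhs(c).
    k1 = mol_rhs(c, dx, Pe)
    k2 = mol_rhs(c + 0.5 * dt * k1, dx, Pe)
    k3 = mol_rhs(c + 0.5 * dt * k2, dx, Pe)
    k4 = mol_rhs(c + dt * k3, dx, Pe)
    return c + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

J, L, Pe = 128, 1.0, 100.0
dx = L / J
x = np.arange(J) * dx
c0 = np.exp(-100 * (x - 0.5)**2)   # smooth initial bump
c = c0.copy()
dt = 0.25 * dx                      # small enough for a stable integration
for _ in range(200):
    c = rk4_step(c, dt, dx, Pe)
```

Because the discrete advection and diffusion stencils both sum to zero over a periodic grid, the total mass Σⱼ c_j Δx is conserved up to rounding, which provides a convenient sanity check on the implementation.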

Full finite-differences discretisation To obtain a full discretisation, the time derivative needs to be discretised as well. The time discretisation, i.e. the approximation of ∂ₜf(t, x), can be done according to the schemes presented in Section 4. For example, equation (5.3) can be approximated as follows, based on an explicit Euler scheme for the time derivative and central differences in space:

    (c_j^{n+1} − c_j^n)/Δt = −(c_{j+1}^n − c_{j−1}^n)/(2Δx) + (1/Pe)(c_{j+1}^n − 2c_j^n + c_{j−1}^n)/Δx²,        (5.9)

where c_j^n ≈ c(t_n, x_j). If we rather use the implicit Euler scheme for the time discretisation, we obtain

    (c_j^{n+1} − c_j^n)/Δt = −(c_{j+1}^{n+1} − c_{j−1}^{n+1})/(2Δx) + (1/Pe)(c_{j+1}^{n+1} − 2c_j^{n+1} + c_{j−1}^{n+1})/Δx².        (5.10)

And if we use a Crank-Nicolson scheme we obtain

    (c_j^{n+1} − c_j^n)/Δt = −(1/2) [ (c_{j+1}^n − c_{j−1}^n)/(2Δx) + (c_{j+1}^{n+1} − c_{j−1}^{n+1})/(2Δx) ]        (5.11)
        + (1/(2Pe)) [ (c_{j+1}^n − 2c_j^n + c_{j−1}^n)/Δx² + (c_{j+1}^{n+1} − 2c_j^{n+1} + c_{j−1}^{n+1})/Δx² ].        (5.12)

Implementation as a matrix system The full finite-differences discretisation has reduced the PDE to a set of algebraic equations. The numerical solution of the PDE is recovered by solving the algebraic equations. The unknown at step n is the vector [c₁^n, c₂^n, …, c_J^n]. For the example using an explicit Euler scheme for the time derivative and central differences in space to solve (5.3),

    (c_j^{n+1} − c_j^n)/Δt = −(c_{j+1}^n − c_{j−1}^n)/(2Δx) + (1/Pe)(c_{j+1}^n − 2c_j^n + c_{j−1}^n)/Δx²,

one needs to solve the system of equations

    c_j^{n+1} = c_j^n + Δt ( −(c_{j+1}^n − c_{j−1}^n)/(2Δx) + (1/Pe)(c_{j+1}^n − 2c_j^n + c_{j−1}^n)/Δx² ),  1 ≤ j ≤ J.

The system can be written in matrix-vector form. Denoting the vector c^n = [c₁^n, c₂^n, …, c_J^n], the system reads

    c^{n+1} = ( Id − Δt ( (1/(2Δx)) A + (1/(Pe Δx²)) B ) ) c^n,

with Id being the identity matrix and with the following tridiagonal matrices A and B, here written to account for periodic boundary conditions:

    A = (  0   1              −1 )        B = (  2  −1              −1 )
        ( −1   0   1             )            ( −1   2  −1             )
        (      ⋱   ⋱   ⋱        )            (      ⋱   ⋱   ⋱        )
        (          −1   0   1   )            (          −1   2  −1    )
        (  1          −1   0    )            ( −1          −1   2     )

Finally, denoting the matrix M := Id − Δt ( (1/(2Δx)) A + (1/(Pe Δx²)) B ), the system of algebraic equations is written as the matrix system

    c^{n+1} = M c^n.

All of the above-mentioned finite-differences discretisation schemes can be written in this matrix form with a matrix M ∈ ℝ^{J×J} depending on the scheme. Note that although the matrix M is high dimensional, it is sparse, with a limited number of non-zero entries per row (determined by the number of grid values used to approximate each derivative), and is therefore amenable to efficient storage and inversion techniques.
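A sketch of this matrix implementation with sparse storage might look as follows (the helper name and parameter values are illustrative; `scipy.sparse` is one possible choice of storage format):

```python
import numpy as np
import scipy.sparse as sp

def explicit_matrix(J, dx, dt, Pe):
    # M = Id - dt*(A/(2 dx) + B/(Pe dx^2)), with the periodic corner
    # entries of A and B set explicitly, so that c^{n+1} = M c^n
    # reproduces the explicit scheme (5.9).
    A = sp.diags([-1.0, 1.0], [-1, 1], shape=(J, J)).tolil()
    A[0, -1], A[-1, 0] = -1.0, 1.0
    B = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(J, J)).tolil()
    B[0, -1] = B[-1, 0] = -1.0
    M = sp.identity(J) - dt * (A / (2 * dx) + B / (Pe * dx**2))
    return M.tocsr()

J, Pe = 64, 50.0
dx, dt = 1.0 / J, 1e-4
M = explicit_matrix(J, dx, dt, Pe)
x = np.arange(J) * dx
c = np.sin(2 * np.pi * x)
for _ in range(100):
    c = M @ c   # one explicit Euler step per matrix-vector product
```

Since each row of A and B sums to zero, each row of M sums to one, so a spatially constant state is reproduced exactly; this is an easy structural check on the assembled matrix.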

5.2.1 A priori error analysis

Similarly to the analysis of convergence for time discretisation, we can analyse the consistency and the stability of the numerical schemes, which together lead to convergence. Here the order of convergence of a scheme is defined by the powers of Δt and Δx entering the error bounds on the difference between the exact solution and its approximation in a certain norm. Similarly to the ODE case,

    Consistency + Stability = Convergence.

Consistency The local truncation error of a numerical scheme is the residual that is generated when the exact solution is inserted into the numerical scheme. Truncation errors arise from the numerical scheme both in time and in space. Thus the local truncation error η_j^n depends on the spatial node j and on the time step n. Denoting the exact solution of (5.3) by c, for the scheme (5.9) using explicit Euler in time and central differences in space, the local truncation error at node (t_n, x_j) is

    η_j^n = (c_j^{n+1} − c_j^n)/Δt + (c_{j+1}^n − c_{j−1}^n)/(2Δx) − (1/Pe)(c_{j+1}^n − 2c_j^n + c_{j−1}^n)/Δx²,        (5.13)

where here c_j^n = c(t_n, x_j) denotes the exact solution evaluated at the grid points.

Definition 5.1 (Consistency). The global truncation error is

    η(Δt, Δx) = max_{n,j} |η_j^n|.        (5.14)

A numerical scheme is consistent if η(Δt, Δx) goes to 0 as Δt and Δx tend to 0 independently. It is consistent of order p in time and q in space if there exists a constant C such that

    |η(Δt, Δx)| ≤ C (Δt^p + Δx^q).        (5.15)
Proofs of consistency generally rely on Taylor expansions of the exact solution and thus require regularity of the solution.

Example 5.2 (Consistency of the scheme (5.9)). To estimate the truncation error of the scheme (5.9), we use Taylor expansions at an element (t, x) ∈ [0, T] × Ω:

    c(t + Δt, x) = c(t, x) + Δt ∂c/∂t (t, x) + (Δt²/2) ∂²c/∂t² (t, x) + O(Δt³),

    c(t, x + Δx) = c(t, x) + Δx ∂c/∂x (t, x) + (Δx²/2) ∂²c/∂x² (t, x) + (Δx³/6) ∂³c/∂x³ (t, x) + (Δx⁴/24) ∂⁴c/∂x⁴ (t, x) + O(Δx⁵),

    c(t, x − Δx) = c(t, x) − Δx ∂c/∂x (t, x) + (Δx²/2) ∂²c/∂x² (t, x) − (Δx³/6) ∂³c/∂x³ (t, x) + (Δx⁴/24) ∂⁴c/∂x⁴ (t, x) + O(Δx⁵).

For the time derivative approximation by an explicit Euler scheme, we obtain

    (c(t + Δt, x) − c(t, x))/Δt = ∂c/∂t (t, x) + O(Δt).

For the spatial derivative approximations by central differences, we obtain

    (c(t, x + Δx) − c(t, x − Δx))/(2Δx) = ∂c/∂x (t, x) + O(Δx²),

    (c(t, x + Δx) − 2c(t, x) + c(t, x − Δx))/Δx² = ∂²c/∂x² (t, x) + O(Δx²).

Finally, since the exact solution satisfies the PDE, the first three terms cancel and the truncation error is

    η_j^n = ∂c/∂t (t, x) + ∂c/∂x (t, x) − (1/Pe) ∂²c/∂x² (t, x) + O(Δt) + O(Δx²) ≤ C (Δt + Δx²),

and the scheme is consistent of order 1 in time and order 2 in space.
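The predicted orders can be verified empirically by applying the difference quotients to a smooth function with known derivatives and halving the step; the following quick self-contained check (test function and evaluation point chosen freely) recovers order 1 for the one-sided quotient and order 2 for the central second-derivative quotient:

```python
import numpy as np

f, fp = np.sin, np.cos            # test function and exact first derivative
fpp = lambda x: -np.sin(x)        # exact second derivative

x0 = 1.3
hs = [0.1 / 2**i for i in range(6)]
errs1, errs2 = [], []
for h in hs:
    forward = (f(x0 + h) - f(x0)) / h                       # expected O(h)
    central2 = (f(x0 + h) - 2 * f(x0) + f(x0 - h)) / h**2   # expected O(h^2)
    errs1.append(abs(forward - fp(x0)))
    errs2.append(abs(central2 - fpp(x0)))

# observed order = average slope of log2(error) over successive halvings
order1 = np.log2(errs1[0] / errs1[-1]) / (len(hs) - 1)
order2 = np.log2(errs2[0] / errs2[-1]) / (len(hs) - 1)
```

Here `order1` comes out close to 1 and `order2` close to 2, matching the Taylor-expansion analysis above.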

Stability Similarly to the analysis of numerical schemes for ODEs, one needs a criterion to ensure that local truncation errors do not accumulate too quickly during a simulation.
In order to understand the evolution of errors during a simulation, we recall that the finite-differences discretisation of the advection-diffusion equation leads to a matrix system

    c^{n+1} = M c^n,  c^n = [c₁^n, c₂^n, …, c_J^n].

This linear system defines a recurrence operation in time, or a time stepping algorithm. In practice, errors are introduced at each time step. If we denote by e^n the numerical error at time step n, we can write the error-prone numerical solution as z^n = c^n + e^n. Applying the numerical scheme to it results in

    z^{n+1} = M z^n
    c^{n+1} + e^{n+1} = M (c^n + e^n)

and thus

    e^{n+1} = M e^n,

or recursively e^n = M e^{n−1} = … = M^n e⁰. In other words, numerical errors evolve in the same way as the solution, and the norm of the matrix M controls the growth of errors. A sufficient stability condition is thus

    ‖M‖ ≤ 1

with a suitable matrix norm ‖·‖. In the case where ‖M‖ ≤ 1 only when Δt and Δx satisfy a condition of the type Δt ≤ S Δx^α (S being some constant), we refer to conditional stability. If the exponent α is small, e.g. α = 1, the restriction on the time step Δt is not too severe, but higher exponents lead to strong restrictions on the time step for stable simulations.

Convergence Stability and consistency imply convergence.

Definition 5.3 (Convergence). A finite differences scheme is convergent in a discrete norm ‖·‖_{Δ,p} if the error satisfies

    lim_{Δx, Δt → 0} ‖e^n‖_{Δ,p} = 0,

where the error vector e^n has entries (e^n)_j = c_j^n − c(t_n, x_j) for all 1 ≤ j ≤ J.

5.3 Stability analysis

In fact, the critical condition for a finite-difference scheme to appropriately approximate the true solution of a PDE is stability. As long as the scheme is stable for a given problem, consistency of the scheme ensures convergence, with the order of convergence given by the order of consistency. For a given matrix norm and a numerical scheme written in the matrix form c^{n+1} = M c^n, a sufficient condition for the scheme to be stable is to have ‖M‖ ≤ 1. We will study two approaches to check this criterion. One is a direct approach consisting of evaluating the matrix norm. The other approach,
known as von Neumann stability analysis, is based on an analysis in the frequency domain, using Fourier series to represent the solution. As the diffusion part of the PDE and the advection part of the PDE lead to different difficulties, we will first analyse the pure diffusion case and the pure advection case separately, before combining the results to investigate the general case of an advection-diffusion model.

5.3.1 Direct approach

The finite-differences schemes introduced above lead to the matrix form of the problem c^{n+1} = M c^n. The direct approach consists of evaluating the norm of the matrix M.

5.3.2 Frequency-domain approach: von Neumann stability analysis

The basic idea of the von Neumann method is to represent the discretised solution at some particular time step by a finite Fourier series of the form

    c_j^n = Σ_{k=−N}^{N} ĉ_k^n e^{ikjΔx}        (5.16)

and to examine the stability of the individual Fourier components. The total solution will be stable if and only if every Fourier component is stable. The use of Fourier series is strictly appropriate only if the spatial domain is periodic. For more general boundary conditions, a rigorous stability analysis is more difficult, but the von Neumann method still provides a way to identify obviously unstable numerical schemes and avoid them for the scientific computing task.
A key property of Fourier series is that individual Fourier modes satisfy

    d/dx e^{ikx} = ik e^{ikx}.

Similarly, for finite Fourier series, if one starts with the initial condition c_j^n = e^{ikjΔx}, after one iteration of the finite-differences scheme one will have

    c_j^{n+1} = A_k e^{ikjΔx},

where the coefficient A_k is called an amplification factor and is determined by the form of the finite differences formula. Restricting the analysis to linear schemes with constant coefficients, the amplification factor is the same from time step to time step, and after n iterations of the scheme one obtains for the amplitude of the k-th Fourier mode ĉ_k^n

    ĉ_k^n = A_k ĉ_k^{n−1} = (A_k)^n ĉ_k^0.

It follows that the value of the amplification factor determines whether the amplitude of this particular Fourier mode will grow or decay with the iterations of the numerical integration scheme. Hence, the stability of each Fourier component is determined by the modulus of its amplification factor. The von Neumann stability condition is thus

    |A_k| ≤ 1  ∀k ∈ ℤ.        (5.17)
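For a linear scheme with constant coefficients, the amplification factor can also be measured numerically by feeding the scheme a single Fourier mode and reading off the complex ratio after one step. The sketch below does exactly that (names and the sample scheme are illustrative; it anticipates the explicit diffusion scheme analysed in the next subsection):

```python
import numpy as np

def amplification_factor(step, J, k):
    # Apply one step of a linear periodic scheme to the Fourier mode
    # exp(i k j dx); for a linear constant-coefficient scheme the output
    # equals A_k times the input at every node.
    dx = 2 * np.pi / J
    c = np.exp(1j * k * np.arange(J) * dx)
    return (step(c, dx) / c)[0]

# sample scheme: explicit Euler + central differences for diffusion, D = 1
def explicit_diffusion_step(c, dx, dt=1e-3):
    return c + dt * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2

J, k = 64, 1
A1 = amplification_factor(explicit_diffusion_step, J, k)
dx = 2 * np.pi / J
A1_theory = 1 - 4 * 1e-3 / dx**2 * np.sin(k * dx / 2)**2
```

The measured factor matches the closed-form expression derived by substituting the mode into the scheme by hand, which is a useful way to debug a von Neumann calculation.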

5.3.3 Diffusion equation
We will see that for a diffusion process

    ∂c/∂t = D ∂²c/∂x²,        (5.18)

using an implicit scheme in time ensures unconditional stability, whereas using an explicit scheme in time leads to conditional stability, where the time step is limited by the square of the spatial step.
The discretisation using an implicit Euler scheme for the approximation of the time derivative and central differences in space is

    (c_j^{n+1} − c_j^n)/Δt = D (c_{j+1}^{n+1} − 2c_j^{n+1} + c_{j−1}^{n+1})/Δx²,        (5.19)

where c_j^n = c(t_n, x_j). Using the explicit Euler scheme for the time discretisation, we obtain

    (c_j^{n+1} − c_j^n)/Δt = D (c_{j+1}^n − 2c_j^n + c_{j−1}^n)/Δx².        (5.20)

Direct approach The finite-differences schemes (5.19) and (5.20) respectively lead to the matrix forms c^{n+1} = M_I c^n and c^{n+1} = M_E c^n, where

    M_I = ( Id + D (Δt/Δx²) B )^{−1}  and  M_E = Id − D (Δt/Δx²) B,

and B is the tridiagonal matrix B = tridiag(−1, 2, −1). The direct approach consists of evaluating the norms of those matrices.
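As a concrete sketch of the direct approach for M_E (with illustrative grid values; since M_E is symmetric, its 2-norm equals its spectral radius), one can evaluate ‖M_E‖ on either side of the threshold Δt = Δx²/(2D):

```python
import numpy as np

def explicit_diffusion_matrix(J, dx, dt, D=1.0):
    # M_E = Id - D dt/dx^2 B, with B = tridiag(-1, 2, -1) and
    # periodic corner entries.
    B = 2 * np.eye(J) - np.eye(J, k=1) - np.eye(J, k=-1)
    B[0, -1] = B[-1, 0] = -1.0
    return np.eye(J) - D * dt / dx**2 * B

J, dx = 32, 1.0 / 32
norm_at_limit = np.linalg.norm(
    explicit_diffusion_matrix(J, dx, dt=0.5 * dx**2), 2)   # dt = dx^2/(2D)
norm_beyond = np.linalg.norm(
    explicit_diffusion_matrix(J, dx, dt=0.6 * dx**2), 2)   # 20% beyond the limit
```

The norm equals exactly 1 at the limit and exceeds 1 beyond it, matching the stability condition obtained by the von Neumann analysis below.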

von Neumann stability analysis We represent the discretised solution at some particular time step by a finite Fourier series of the form

    c_j^n = Σ_{k=−N}^{N} ĉ_k^n e^{ikjΔx}.        (5.21)

Substitution of an arbitrary Fourier component into the implicit Euler scheme (5.19) leads to

    (ĉ_k^{n+1} e^{ikjΔx} − ĉ_k^n e^{ikjΔx})/Δt = D (ĉ_k^{n+1} e^{ik(j+1)Δx} − 2ĉ_k^{n+1} e^{ikjΔx} + ĉ_k^{n+1} e^{ik(j−1)Δx})/Δx²,

and thus

    ĉ_k^{n+1} ( 1 + D (Δt/Δx²) ( −e^{ikΔx} + 2 − e^{−ikΔx} ) ) = ĉ_k^n.

Using the fact that −e^{ikΔx} + 2 − e^{−ikΔx} = 4 sin²(kΔx/2), we obtain

    ĉ_k^{n+1} = A_k ĉ_k^n,

with the amplification factor for any Fourier mode k

    A_k = 1 / ( 1 + 4D (Δt/Δx²) sin²(kΔx/2) ).

Therefore we have

    |A_k| ≤ 1  ∀k ∈ ℤ,

and the implicit Euler scheme is unconditionally stable.
If we instead substitute an arbitrary Fourier mode into the explicit Euler scheme (5.20), we obtain

    ĉ_k^{n+1} = A_k ĉ_k^n,

with the amplification factor for any Fourier mode k

    A_k = 1 − 4D (Δt/Δx²) sin²(kΔx/2).

The stability condition is only satisfied if

    |A_k| = | 1 − 4D (Δt/Δx²) sin²(kΔx/2) | ≤ 1  ∀k ∈ ℤ,

which gives the following condition on the time step:

    Δt ≤ Δx²/(2D).        (5.22)

In conclusion, for a pure diffusion model, an implicit treatment of the time integration leads to unconditional stability, while an explicit treatment leads to conditional stability, with a condition on Δt and Δx given by (5.22).
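A short experiment makes the threshold (5.22) tangible (grid size and step count below are arbitrary choices): stepping the explicit scheme with Δt slightly below the limit keeps the solution bounded, while a Δt slightly above it makes the sawtooth 2Δx mode explode.

```python
import numpy as np

def run_explicit_diffusion(J, n_steps, fraction, D=1.0):
    # Step scheme (5.20) with dt = fraction * dx^2 / (2 D); the initial
    # condition mixes a smooth wave with the worst-case 2*dx sawtooth mode.
    dx = 1.0 / J
    dt = fraction * dx**2 / (2 * D)
    x = np.arange(J) * dx
    c = np.sin(2 * np.pi * x) + (-1.0) ** np.arange(J)
    for _ in range(n_steps):
        c = c + D * dt / dx**2 * (np.roll(c, -1) - 2 * c + np.roll(c, 1))
    return np.abs(c).max()

amp_stable = run_explicit_diffusion(J=64, n_steps=400, fraction=0.9)
amp_unstable = run_explicit_diffusion(J=64, n_steps=400, fraction=1.1)
```

With fraction 0.9 every mode decays; with fraction 1.1 the sawtooth mode has |A_k| = 1.2 and is amplified at every step, so the amplitude grows by many orders of magnitude.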

The CFL condition A condition which restrictively couples the time step and the spatial grid size of an explicit time integration scheme, such as exemplified in (5.22), is called a Courant-Friedrichs-Lewy or CFL condition. It is restrictive, since for a given spatial grid size one cannot take too large a time step. For example, doubling the spatial resolution will require a substantially smaller time step (in the case of the CFL condition (5.22), 4 times smaller). This is a simple yet fundamental observation for numerical integration of PDEs. As already mentioned above, this effect can be circumvented by employing schemes with better stability properties, in particular implicit schemes. The trade-off therefore becomes weighing the computational cost of solving a linear system every iteration against stepping an explicit method a large number of times.

5.3.4 Advection equation

We will see that for an advection process

    ∂c/∂t + u ∂c/∂x = 0,        (5.23)

the spatial discretisation is the important aspect to consider, as the stability will mainly be affected by the choice of the spatial discretisation.
The discretisation using central differences in space and explicit Euler in time is

    (c_j^{n+1} − c_j^n)/Δt + u (c_{j+1}^n − c_{j−1}^n)/(2Δx) = 0.        (5.24)

Using instead left-sided differences in space and explicit Euler in time leads to

    (c_j^{n+1} − c_j^n)/Δt + u (c_j^n − c_{j−1}^n)/Δx = 0.        (5.25)

Direct approach The finite differences schemes (5.24) and (5.25) respectively lead to the matrix forms c^{n+1} = M_C c^n and c^{n+1} = M_L c^n, where

    M_C = Id − u (Δt/(2Δx)) A  and  M_L = Id − u (Δt/Δx) A_L,

where A is the tridiagonal matrix A = tridiag(−1, 0, 1) and A_L is the tridiagonal matrix A_L = tridiag(−1, 1, 0). The direct approach consists of evaluating the norms of those matrices.

von Neumann stability analysis Again we represent the discretised solution at some particular time step by a finite Fourier series of the form

    c_j^n = Σ_{k=−N}^{N} ĉ_k^n e^{ikjΔx}.        (5.26)

Substitution of an arbitrary Fourier component into the central differences scheme (5.24) and direct calculation leads to

    ĉ_k^{n+1} = A_k ĉ_k^n,

with the amplification factor for any Fourier mode k

    A_k = 1 − u (Δt/Δx) i sin(kΔx).

Since |A_k|² = 1 + (u Δt/Δx)² sin²(kΔx) ≥ 1, the central differences scheme is unconditionally unstable.
If we instead substitute an arbitrary Fourier mode into the left-sided differences scheme (5.25), we obtain

    ĉ_k^{n+1} = A_k ĉ_k^n,

with the amplification factor for any Fourier mode k

    A_k = 1 − u (Δt/Δx) ( 1 − e^{−ikΔx} ).

Using direct trigonometric calculations, one can show that

    |A_k|² = 1 − 4u (Δt/Δx) sin²(kΔx/2) · ( 1 − u (Δt/Δx) ).

Under the assumption that u > 0, we will only have |A_k| ≤ 1 under the condition 0 < u Δt/Δx ≤ 1, or

    Δt ≤ Δx/u.        (5.27)

For the case where u < 0, one needs to use right-sided differences to obtain a similar conditional stability condition.

The CFL condition Discretisation of the advection equation using a left-sided approximation of the spatial derivative results in the CFL condition (5.27), or equivalently

    0 ≤ u Δt/Δx ≤ 1.        (5.28)

In the case u < 0, this requirement cannot be fulfilled and the solution is thus unstable. Instead, one needs to consider a right-sided approximation of the derivative in order to obtain a CFL condition that can be fulfilled. More generally, the CFL condition says that the choice of the discretisation cannot be made independently of the data that determine the PDE to be solved. The scheme should be an upwind scheme, i.e. one should use backward differences with respect to the advection velocity.
The quantity u Δt/Δx is called the Courant number. In more general problems, the solution may consist of a family of waves travelling at different speeds, in which case the Courant number should be defined such that u is the speed of the most rapidly travelling wave.
In summary, for the advection equation, using central differences in space results in unconditional instability, while choosing an upwind scheme leads to conditional stability, with a CFL condition that has to be fulfilled.
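The contrast can be demonstrated in a few lines (an illustrative sketch, with u > 0 absorbed into the Courant number): the upwind scheme transports a bump stably at Courant number 0.8, while central differences blow up from the same initial condition.

```python
import numpy as np

def advect(c0, courant, n_steps, scheme):
    # Explicit Euler in time; `courant` is u*dt/dx with u > 0.
    c = c0.copy()
    for _ in range(n_steps):
        if scheme == "upwind":   # left-sided differences, stable if courant <= 1
            c = c - courant * (c - np.roll(c, 1))
        else:                    # central differences, unconditionally unstable
            c = c - courant * (np.roll(c, -1) - np.roll(c, 1)) / 2
    return c

J = 100
x = np.arange(J) / J
c0 = np.exp(-200 * (x - 0.5)**2)
c_up = advect(c0, courant=0.8, n_steps=500, scheme="upwind")
c_ce = advect(c0, courant=0.8, n_steps=500, scheme="central")
```

The upwind result stays within the bounds of the initial data, since for 0 ≤ uΔt/Δx ≤ 1 each update is a convex combination of neighbouring values; it is, however, strongly damped, as discussed next.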

Dissipation and dispersion The von Neumann stability analysis also sheds light on the dissipation and dispersion of a numerical scheme. To see this, we first notice that an exact solution to the advection equation (5.23), given some initial condition c₀(x), is given by

    c(x, t) = c₀(x − ut).

In particular, for any discrete time step t_n, c(x, t_n) = c₀(x − u n Δt). A Fourier representation of the initial condition is given by

    c₀(x) = Σ_{k=−∞}^{∞} ĉ_k^0 e^{ikx},

from which we obtain for the solution c(x, t)

    c(x, t_n) = Σ_{k=−∞}^{∞} ĉ_k^0 e^{ik(x − u n Δt)},

and, setting g_k = e^{−iukΔt}, we obtain for the exact solution evaluated at a given discrete node

    c(x_j, t_n) = Σ_{k=−∞}^{∞} ĉ_k^0 e^{ikjΔx} (g_k)^n.

From the Fourier representation of the numerical solution (5.26) and the recursive rule ĉ_k^{n+1} = A_k ĉ_k^n, we have

    c_j^n = Σ_{k=−N}^{N} ĉ_k^0 e^{ikjΔx} (A_k)^n,

and thus A_k is the numerical counterpart of g_k, generated by the numerical method at hand. Notice that |g_k| = 1, whereas |A_k| ≤ 1 to ensure stability. Thus, A_k is a dissipation coefficient: the smaller |A_k|, the higher the reduction of the amplitude of the wave mode ĉ_k^0, and, as a consequence, the higher the numerical dissipation. The ratio

    ε_a(k) = |A_k| / |g_k|

is called the amplification error of the k-th harmonic associated with the numerical scheme (in this case, it corresponds to the amplification factor).
On the other hand, we have g_k = e^{−iukΔt}, and writing

    A_k = |A_k| e^{−iωΔt} = |A_k| e^{−i(ω/k) kΔt},

we notice that the velocity of propagation of the true solution is u, while the numerical velocity of propagation relative to the k-th harmonic is ω/k. The ratio between the two velocities,

    ε_d(k) = ω/(uk),

quantifies the dispersion error relative to the k-th harmonic.

Example 5.4 (Upwind scheme). Discretising the advection equation using the upwind scheme (5.25) led to the amplification coefficient A_k = 1 − u (Δt/Δx)(1 − e^{−ikΔx}). If we consider as an example a wave of period 2Δx, the norm of the amplification coefficient of this harmonic is

    |A_k|² = 1 − 4u (Δt/Δx) · ( 1 − u (Δt/Δx) ).

Hence, in the case u Δt/Δx = 1, we have |A_k| = 1 and there is no dissipation error. In the case u Δt/Δx = 0.5, we have |A_k|² = 0 and the 2Δx wave is damped in a single time step. From this observation we conclude that the upwind scheme is strongly damping, hence solutions get smoothed during the numerical integration.
The phase change per time step associated with the upwind scheme is (see calculations in the von Neumann stability analysis earlier)

    φ_d = arctan( ℑ(A_k) / ℜ(A_k) ) = arctan( −α sin(kΔx) / ( 1 − α (1 − cos(kΔx)) ) ),

where α = u Δt/Δx is the Courant number. Taking the ratio with the analytical phase change, the dispersion error can be calculated. Doing the calculation highlights that if u Δt/Δx < 0.5, waves are slowed down, whereas if 0.5 < u Δt/Δx < 1, waves are accelerated. The dispersion error is a function of the CFL number and of the wavenumber k. The phase error is larger for shorter waves (larger wavenumber k).
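These dissipation and dispersion measures are easy to evaluate numerically for the upwind scheme (a small helper sketch; the function name and the sample angles are free choices):

```python
import numpy as np

def upwind_errors(courant, k_dx):
    # A_k = 1 - courant*(1 - exp(-i k dx)); since |g_k| = 1, the
    # amplification error is |A_k|, and the dispersion error is the
    # ratio of numerical to exact phase change per step.
    A = 1 - courant * (1 - np.exp(-1j * k_dx))
    return np.abs(A), np.angle(A) / (-courant * k_dx)

eps_a_2dx, _ = upwind_errors(0.5, np.pi)    # 2*dx wave at Courant number 0.5
_, eps_d_slow = upwind_errors(0.3, 0.2)     # well-resolved wave, courant < 0.5
_, eps_d_fast = upwind_errors(0.8, 0.2)     # well-resolved wave, courant > 0.5
```

In line with the discussion above, `eps_a_2dx` comes out as 0 (the 2Δx wave is removed in one step), `eps_d_slow` is slightly below 1 (the wave lags the exact solution), and `eps_d_fast` is slightly above 1 (the wave runs ahead of it).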

5.3.5 Advection-Diffusion equation

The stability analyses of the diffusion equation and of the advection equation taught us that it is wise to use an upwind scheme for the advection part of the advection-diffusion equation, and that an implicit time stepper avoids a strong constraint on the time step arising from the diffusion term.
Hence, assuming that the advection velocity u > 0, an appropriate finite-differences scheme could be:

    (c_j^{n+1} − c_j^n)/Δt = −(c_j^{n+1} − c_{j−1}^{n+1})/Δx + (1/Pe)(c_{j+1}^{n+1} − 2c_j^{n+1} + c_{j−1}^{n+1})/Δx².        (5.29)

The matrix to consider for the stability analysis is

    M = ( Id + (Δt/Δx) A + (1/Pe)(Δt/Δx²) B )^{−1},

with A = tridiag(−1, 1, 0) and B = tridiag(−1, 2, −1). The von Neumann stability analysis of this scheme results in the amplification factor

    A_k = 1 / ( 1 + (4/Pe)(Δt/Δx²) sin²(kΔx/2) + 2 (Δt/Δx) sin²(kΔx/2) + i (Δt/Δx) sin(kΔx) ),

for which one can show that |A_k| ≤ 1. Hence, the scheme is unconditionally stable.
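A sketch of scheme (5.29) in code (illustrative parameter values; the system matrix is factorised once with a sparse LU decomposition and reused at every step) might read:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def implicit_factor(J, dx, dt, Pe):
    # (Id + dt/dx A + dt/(Pe dx^2) B) c^{n+1} = c^n, with upwind
    # A = tridiag(-1, 1, 0) and B = tridiag(-1, 2, -1), periodic corners.
    A = sp.diags([-1.0, 1.0], [-1, 0], shape=(J, J)).tolil()
    A[0, -1] = -1.0
    B = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(J, J)).tolil()
    B[0, -1] = B[-1, 0] = -1.0
    K = sp.identity(J) + dt / dx * A + dt / (Pe * dx**2) * B
    return spla.splu(K.tocsc())

J, Pe = 128, 10.0
dx, dt = 1.0 / J, 0.05          # dt far beyond any explicit diffusion limit
lu = implicit_factor(J, dx, dt, Pe)
x = np.arange(J) * dx
c = np.exp(-100 * (x - 0.5)**2)
c0_sum = c.sum()
for _ in range(50):
    c = lu.solve(c)             # one implicit step per sparse solve
```

Despite the large time step, the solution remains bounded, and the total mass is conserved exactly because the columns of A and B sum to zero over the periodic grid.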

6 Transport equation: solution by the method of characteristics

6.1 Method of characteristics
We will now learn how to solve initial value problems for first-order PDEs using the method of characteristic curves. The idea of the method is to discover curves (the characteristic curves) along which the PDE becomes an ODE. Once the ODE is found, it can be solved along the characteristic curves and transformed into a solution for the PDE. Consider a function f(x, t) satisfying a first-order linear PDE of the form

    ∂f(x, t)/∂t + v(x, t) ∂f(x, t)/∂x = 0.        (6.1)

We will view this equation as saying that f is not changing along a curve x = x(t), which means

    d/dt f(x(t), t) = 0.        (6.2)

Using the chain rule we get

    0 = ∂f/∂t + (dx/dt) ∂f/∂x.        (6.3)

By virtue of (6.1) and (6.3) we must have

    dx/dt = v(x, t),        (6.4)

which is an ODE for x(t). A function φ(x, t) that is constant along these curves satisfies

    d/dt φ(x(t), t) = ( ∂/∂t + x′(t) ∂/∂x ) φ = ( ∂/∂t + v(x, t) ∂/∂x ) φ = 0,        (6.5)

which implies that φ(x(t), t) = φ₀(x₀). Each value of x₀ determines a unique characteristic base curve if v is such that the initial value problems for the ODE (6.4) are uniquely solvable (we assume v smooth enough for that). On any of the integral curves φ(x, t) = φ₀(x₀), f will also be constant (see (6.2) and (6.5)). Since the curves of constant φ and constant f coincide, f has to be a function of φ alone:

    f(x(t), t) = F(φ(x, t)).        (6.6)

We will consider for example an initial condition f(x, 0) = f₀(x) such that f(x, 0) = F(φ(x, 0)). This equation can be solved for x, which then leads to f(x, t) = f₀(x(φ(x, t))).
Example 6.1 (Linear waves). We consider first the simple case of a constant advection velocity,

    ∂f(x, t)/∂t + v₀ ∂f(x, t)/∂x = 0.        (6.7)

By substitution, one can easily see that f = ρ(x − v₀t) is a solution for any differentiable ρ(x). Note that f = ρ(x − v₀t) describes the propagation of values given by an initial condition and moving with velocity v₀. For v₀ > 0, the propagation occurs to the right, the opposite sign propagating to the left. If ρ(x) = sin x, then f = sin(x − v₀t); the point (x, t) such that x − v₀t = π/2 is at the crest of a wave, and it moves in the x-t plane along the straight line x = v₀t + π/2. Thus the solutions of (6.7) represent linear waves that travel with velocity v₀. This velocity is relative to the x axis.
Example 6.2. Consider the initial value problem

    ∂f/∂t + x sin(t) ∂f/∂x = 0,  f₀(x) = 1 + 1/(1 + x²).

Here we have v(x, t) = x sin t. Characteristic base curves for this problem are solutions of

    dx/dt = x sin t,  x(0) = x₀.

By separation of variables we get

    ∫ (1/x) dx = ∫ sin t dt.

Hence

    ln x = −cos t + c,

and using the initial condition,

    x(t) = x₀ e^{1 − cos t}.

The function f is preserved along the characteristic base curves:

    f(x(t), t) = f₀(x₀),  x₀ = x(t) e^{−1 + cos t}.

Since we know that

    f₀(x₀) = 1 + 1/(1 + x₀²),

we find that

    f(x, t) = 1 + 1/( 1 + x² e^{−2 + 2 cos t} ).

The characteristic base curves and the solution f(x, t) are illustrated in Figure 6.1.

Figure 6.1: The left panel shows the characteristic base curves for Example 6.2; the right panel shows the corresponding solution f(x, t).
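The closed-form solution of Example 6.2 can be cross-checked by integrating the characteristic ODE numerically (a small verification sketch using `scipy.integrate.solve_ivp`; the chosen starting point and final time are arbitrary):

```python
import numpy as np
from scipy.integrate import solve_ivp

def f_exact(x, t):
    # closed-form solution of Example 6.2
    return 1 + 1 / (1 + x**2 * np.exp(-2 + 2 * np.cos(t)))

x0, T = 2.0, 3.0
# characteristic base curve through x0: dx/dt = x sin(t)
sol = solve_ivp(lambda t, x: x * np.sin(t), (0.0, T), [x0],
                rtol=1e-10, atol=1e-12)
x_T = sol.y[0, -1]

f_start = f_exact(x0, 0.0)   # value carried by the characteristic at t = 0
f_end = f_exact(x_T, T)      # value of the solution at the curve's endpoint
```

The endpoint x_T agrees with the analytical curve x₀ e^{1 − cos T}, and f_end equals f_start: the solution is indeed constant along the characteristic.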

6.2 Characteristics of the Burgers' equation
