CompSci Vercauteren
Nikki Vercauteren
January 2, 2021
Contents

1 Introduction
  1.1 Steps in scientific computing
  1.2 Numerical methods and scientific goals
  1.3 Error sources

  5.3.2 Frequency approach: von Neumann stability analysis
  5.3.3 Diffusion equation
  5.3.4 Advection equation
  5.3.5 Advection-Diffusion equation

References
1 Introduction
Mathematical tools & concepts: Mathematical modelling, numerical mathematics.
Suggested references: [Ben00, QSS+07, Sto14]
3. Practicability check: Is the model “solvable”, either by analytical meth-
ods or numerical simulation? Do I have access to all the parameters in
the model? Can the model be used to make predictions?
4. Reality check: Make predictions of known phenomena and compare with
available data (qualitatively or quantitatively).
Analyse the model and its properties With a model at hand comes the
theoretical analysis of the model, along with the analysis of its properties. Does
a solution exist? Is it unique? This step can involve results from analysis,
spectral theory, probability theory, etc.
Implementation and validation After all these steps comes the implementation
of the method, and its validation on academic tests. The validation is an
important step, as it ensures that the method performs appropriately in
well-known situations.
1.3 Error sources
Error sources are numerous in scientific computing. Analysis of the errors of
numerical simulations requires acknowledging the different sources of errors.
These can include
• Model error: approximation of the physics of the problem in its mathe-
matical modelling. For example, one may choose to neglect variations of
air density - except in the buoyancy term - in the atmosphere (Boussinesq
approximation). The evaluation of model errors is best discussed with
scientific experts of the application concerned.
• Parameter errors: model parameter values or initial and boundary con-
ditions may be uncertain. The impact of parameter uncertainty on the
simulation outputs can be studied through methods of uncertainty quan-
tification.
• Algorithmic and numerical errors: tools exist to estimate discretisation
errors, and will be surveyed in this course. Such errors can come from
machine roundoff; from approximation errors due to the numerical method,
which can be studied through numerical analysis; but also from human
mistakes, which can be avoided by validating numerical results.
2 Arguments from dimension
Mathematical tools & concepts: basic ODE, linear algebra
Suggested references: [Buc14, Ben00, IBM+ 05]
Physical dimensions are part of almost any model, and dimensional analysis is
one of the starting points of modelling. Based on dimensional arguments, it is
sometimes possible to derive relationships between physical quantities involved
in a phenomenon that one wishes to understand and model. Additionally, a
dimensional equation can have the number of parameters reduced or eliminated
through non-dimensionalisation. This process is achieved through dimensional
analysis and subsequent reduction of the number of independent variables. This
gives insight into the fundamental properties of the system and reduces the
number of (numerical) experiments necessary to analyse it. The framework
enables quantification of the relative importance or characteristic scales of dif-
ferent processes interacting in the model, and is thus very helpful to identify
small parameters in a model.
Motivating example. Let us start with some motivation and look at the
classical pendulum (see Fig. 2.1). The governing equation of motion for the
angle θ as a function of t is

    L θ̈(t) = −g sin θ(t)     (2.1)

with g the acceleration due to gravity and L the length of the pendulum. When θ
is small, θ ≈ sin θ and we may replace the last equation by

    θ̈(t) = −ω² θ(t)     (2.2)

with ω = √(g/L). The solution of (2.2) is a harmonic oscillation with period

    T = 2π/ω = 2π √(L/g) ,     (2.4)

which is independent of the mass m of the pendulum and which does not depend
on the initial position θ(0) = θ₀. Suppose now that we did not know this solution
and only postulated that the period depends on the quantities of the problem,

    T = f(θ, L, g, m) .     (2.5)
Figure 2.1: The classical pendulum. The radial position at time t is given by
the arclength s(t) = Lθ(t), hence the radial force on the mass is mL θ̈(t).
Note that we have ignored θ as it does not carry any physical units. By com-
parison of coefficients we then find

    α₁ = 1/2 ,   α₂ = −1/2 ,   α₃ = 0 ,

which yields

    T ∝ √(L/g)     (2.7)

and which is consistent with (2.4). Note, however, that we cannot say anything
about a possible dependence of T on the dimensionless angle variable θ. (The
unknown dependence on the angle is the constant prefactor 2π.)
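A quick way to reproduce the comparison of coefficients above is to solve the
small linear system for the exponents numerically. The following sketch is for
illustration only; the variable names and the ordering of the SI base units
(kg, m, s) are choices made here, not part of the text.

    import numpy as np

    # Columns: dimension vectors of L, g, m in the basis (kg, m, s).
    # [L] = m, [g] = m s^-2, [m] = kg.
    D = np.array([[0.0, 0.0, 1.0],    # kg exponents
                  [1.0, 1.0, 0.0],    # m exponents
                  [0.0, -2.0, 0.0]])  # s exponents

    # Dimension vector of the period T = s^1.
    b = np.array([0.0, 0.0, 1.0])

    # Solve D alpha = b for the exponents (alpha1, alpha2, alpha3)
    # in T ~ L^alpha1 g^alpha2 m^alpha3.
    alpha = np.linalg.solve(D, b)
    print(alpha)   # expected: [ 0.5 -0.5  0. ]

The same null-space computation generalises to the construction of the
dimensionless groups Π_j used below.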
y = f (x1 , . . . , xn ) (2.8)
In the SI system there are exactly seven fundamental physical units: mass (L1 =
kg), length (L2 = m), time (L3 = s), electric current (L4 = A), temperature
(L5 = K), amount of substance (L6 = mol) and luminous intensity (L7 = cd),
and we postulate that the physical dimension of any measurable scalar quantity
can be expressed as a product of powers of the L1, ..., L7.

    e_i = (0, ..., 0, 1, 0, ..., 0)^T ,

    v_i = (α_{i,1}, ..., α_{i,m}) ∈ R^m ,   i = 1, ..., m ,

    y = f(p_1, ..., p_m, s_1, ..., s_{n−m}) .     (2.11)
1 We assume that the set of primary variables exists, otherwise we have to rethink our
postulated model f . (It is important that you understand this reasoning.)
Step 2: Construct dimensionless quantities Having refined our model
according to Step 1 above, we construct a quantity z with [z] = [y] such that

    z = p_1^{α_1} · · · p_m^{α_m} ,     (2.12)

and define the dimensionless quantity

    Π = f(p_1, ..., p_m, s_1, ..., s_{n−m}) / (p_1^{α_1} · · · p_m^{α_m}) .     (2.13)

Along the lines of the previous considerations we introduce z_j with [z_j] = [s_j] by

    z_j = p_1^{α_{j,1}} · · · p_m^{α_{j,m}}

for suitable coefficients α_{j,1}, ..., α_{j,m}; this can be done for all the s_j. We then
define the dimensionless quantity Π_j = s_j / z_j. Note that, by the rank-nullity
theorem, there are exactly n − m such quantities, where n − m is the dimension
of the nullspace of the matrix spanned by the dimension vectors of x_1, ..., x_n.
Replacing all the s_j by z_j Π_j, we can recast (2.13) as

    Π = F(p_1, ..., p_m, Π_1, ..., Π_{n−m}) ,     (2.14)

    y = Π p_1^{α_1} · · · p_m^{α_m} .     (2.16)

is equal to n − m, the difference between the number of physical quantities x_1, ..., x_n
and the number of fundamental dimensional units. That is, there exists a func-
tion Φ : R^{n−m} → R such that

    y = z Φ(Π_1, ..., Π_{n−m})     (2.17)

or, in other words,

    y = z Φ( s_1/z_1, ..., s_{n−m}/z_{n−m} ) .     (2.18)
The unique solution x = (1, −1, 0) leads to [u] = [F_0]/[k] and we can thus form the
dimensionless quantity u* = uk/F_0. Similarly, we obtain the four dimensionless
quantities

    u* = uk/F_0 ,   t* = ωt ,   m* = mω²/k ,   c* = cω/k .

The dimensionless spring equation then reads as
3 Flow modelling: macroscopic dynamics of fluid
flows
Mathematical tools & concepts: Brownian motion, Scalar conservation laws
Suggested references: [SJ05]
Many models of applied sciences are posed in the form of partial differential
equations (PDEs). For example, in flow modelling a mean-field approach is taken:
the system is analysed at the level of the fluxes and densities of particles, and the
evolution of these fluxes and densities is described through PDEs. Such modelling
plays a central role in atmospheric models among others. Before analysing numerical
methods to simulate PDEs, we will study an example of how PDEs can arise
from conservation laws governing the evolution of a continuous mass rather than
discrete particles.
and in the infinitesimal limit

    q_x = −D ∂C/∂x ,     (3.5)

which is Fick's first law of diffusion. Generalising to three dimensions yields

    q = −D ∇C .     (3.6)
which is true for any choice of rectangle [x_1, x_2] × [t_1, t_2]. By the Fundamental
Lemma of the Calculus of Variations (see the following Lemma for a simple
version) this implies that

    ∂C/∂t (x, t) + ∂q_x/∂x (x, t) = 0 ,     (3.10)

which is a first-order conservation law. Using Fick's law of diffusion (3.5) and
assuming a constant diffusion coefficient D we obtain the partial differential
equation (PDE)

    ∂C/∂t = D ∂²C/∂x² ,     (3.11)

which is the one-dimensional diffusion equation. Generalising to more dimen-
sions yields the PDE

    ∂C/∂t = D ∇²C .     (3.12)
then

    ∫∫_R f(x, y) dx dy ≥ (1/2) ∫∫_R f(x₀, y₀) dx dy .
    C(±∞, t) = 0 ,     (3.13)

and the initial condition (representing the injection uniformly over a cross-
section of infinitesimally small width in the x-direction)
Using the two dimensionless groups, Buckingham's theorem tells us that

    C = M/(A √(Dt)) φ( x/√(Dt) ) ,     (3.17)

where φ is a yet unknown function of π₂. This solution is called a similarity
solution because C has the same shape in x at all t. Now we will assume the
form (3.17) and use it as a solution to the diffusion equation (3.11) to solve for
the function φ. Essentially, we are doing a change of variable and denote our
similarity variable η = π₂ = x/√(Dt). Substituting (3.17) into (3.11) and using the
chain rule, we obtain

    ∂C/∂t = ∂/∂t [ M/(A√(Dt)) φ(η) ]     (3.18)
          = ∂/∂t [ M/(A√(Dt)) ] φ(η) + M/(A√(Dt)) ∂φ/∂η ∂η/∂t     (3.19)
          = −M/(2At√(Dt)) ( φ(η) + η ∂φ/∂η ) ,     (3.20)

and similarly

    ∂²C/∂x² = M/(ADt√(Dt)) ∂²φ/∂η² .     (3.21)

Substituting those results into the diffusion equation, we obtain the ordinary
differential equation in η

    d²φ/dη² + (1/2) ( φ(η) + η dφ/dη ) = 0 .     (3.22)

We have thus reduced the PDE to an ODE, which is the goal of any analyt-
ical solution method for PDEs. Converting the initial (3.14) and boundary
conditions (3.13) in the new variable η gives

    φ(±∞) = 0 .     (3.23)
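As a sanity check of the similarity reduction, one can verify numerically that the
classical point-source profile φ(η) = e^{−η²/4}/√(4π) (assumed here; it is the
standard result satisfying (3.22), (3.23) and the normalisation imposed by mass
conservation) indeed solves the ODE (3.22). A minimal sketch:

    import numpy as np

    # Candidate similarity profile (assumption: classical point-source solution).
    def phi(eta):
        return np.exp(-eta**2 / 4.0) / np.sqrt(4.0 * np.pi)

    eta = np.linspace(-5.0, 5.0, 2001)
    h = eta[1] - eta[0]

    # Centred finite differences for phi' and phi''.
    dphi = np.gradient(phi(eta), h)
    d2phi = np.gradient(dphi, h)

    # Residual of the ODE (3.22): phi'' + (phi + eta * phi') / 2 = 0.
    residual = d2phi + 0.5 * (phi(eta) + eta * dphi)
    print(np.max(np.abs(residual)))   # small; only finite-difference error remains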
to the diffusion equation presented above but considering additionally transport
by the flow. The general form of the model PDE is an advection-diffusion equa-
tion for the unknown concentration c : R⁺ × Ω → R,

    ∂c(t, x)/∂t + u(x) · ∇c(t, x) − D Δc(t, x) = 0 ,     (3.26)

where u(x) is the velocity of the flow (which is assumed to be divergence free
in this form), D is a diffusion coefficient which is assumed constant, and there
is no source term. Initial conditions (t = 0) and boundary conditions (x at
the boundaries of the spatial domain) have to be specified to complete the
model. For example, one could consider periodic boundary conditions. For a
one-dimensional spatial domain Ω = [0, L] and t ∈ [0, T], initial and (periodic)
boundary conditions are in this case

    c(0, x) = c₀(x) ,   and   c(t, 0) = c(t, L)   ∀t ∈ [0, T].
3.3.2 Fourier series representation of the solution
Decomposing the solution of the advection-diffusion equation into Fourier modes
is very helpful to understand some properties of the solution. Furthermore,
we will use such Fourier decompositions in order to analyse the properties of
numerical schemes that we will use for scientific computing. For that, we make
the assumption that the solution is periodic in space so that we can expand in
a Fourier series such as, for a given time t
1
X
c(x, t) = bk (t)eikx , (3.30)
k= 1
where k are the wave numbers and bk (t) is the amplitude of the wave mode k at
time t. Since the solution is simply a superposition of Fourier modes and due to
the linearity of the sum, we can substitute an arbitrary Fourier mode bk (t)eikx in
3.26 and analyse its behaviour (note that we work with the dimensional form of
the equation here for ease of interpretation of the results). For a one-dimensional
case with a constant advection velocity u, we obtain for one individual Fourier
mode corresponding to the wavenumber k
dbk
= Dk 2 + iuk bk , (3.33)
dt
or writing = Dk 2 + iuk,
dbk
= bk . (3.34)
dt
The real part of is negative, since R( ) = Dk 2 < 0 and the imaginary part
is =( ) = uk. The solution of 3.34 is given by
where is an initial phase angle. The amplitude of the solution decays in time
due to <( ) = Dk 2 < 0, which is solely related to the di↵usion component.
Again, we see that the solution is hence dissipative. The advection term pro-
duces phase changes through =( ) = uk. Later in the course, the model
problem 3.34 will serve as a prototypical example problem for dissipative sys-
tems when considering ODE integration methods. We will use it as an example
for which to define numerical methods which are specifically appropriate for the
numerical integration of dissipative systems.
17
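The behaviour of a single Fourier mode can be made concrete in a few lines of
code. The sketch below evolves b_k(t) = b_k(0) e^{λt} for illustrative values of D,
u and k chosen here.

    import numpy as np

    D, u, k = 0.1, 1.0, 2.0           # illustrative diffusivity, velocity, wavenumber
    lam = -(D * k**2) - 1j * u * k    # lambda = -(D k^2 + i u k), as in (3.33)-(3.34)

    t = np.linspace(0.0, 5.0, 6)
    bk0 = 1.0 + 0.0j
    bk = bk0 * np.exp(lam * t)        # exact solution of db_k/dt = lam * b_k

    # Amplitude decays through Re(lam) = -D k^2; phase rotates through Im(lam) = -u k.
    print(np.abs(bk))     # monotonically decreasing amplitudes
    print(np.angle(bk))   # advancing phase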
4 Time integration methods
Mathematical tools & concepts: ODE, numerical discretisation, stability
analysis.
Suggested references: [QSS+ 07, Du11, Sto14, HW05]
for which theoretical analysis requires fewer regularity assumptions. The inte-
gral form is also useful to suggest numerical schemes. Both formulations are
equivalent if f is continuous.
If in addition, the Lipschitz condition is satisfied for J = R+ and B = Rd (f
globally Lipschitz) then the initial value problem (4.1) admits a unique global
solution.
Stability of the solution We now assume that (4.1) admits a unique solution
on an interval [0, tmax [. A question of interest for stability analysis is, if we
slightly perturb (4.1), what happens to the solution? The sensitivity of the
solution to perturbations may be defined in the sense of Lyapunov. We consider
the following perturbed problem on an interval I = [t₀, t₁] ⊂ [0, t_max[

    dz/dt = f(t, z(t)) + δ(t) ,   z(t₀) = y(t₀) + δ₀ ,     (4.4)

where δ₀ and δ(t) are small perturbations. The idea of Lyapunov stability is
that the solution of the perturbed system stays close to the reference solution
if the perturbation is not too large.
Definition 4.1 (Lyapunov stability). Let

    |δ₀| ≤ ε ,   |δ(t)| ≤ ε   ∀t ∈ I.     (4.5)

The problem (4.1) is stable in the sense of Lyapunov over I if there exists C > 0
such that the solution of (4.4) satisfies
where Φ_{Δt_n}(t_n, y^n) is an approximation of

    1/(t_{n+1} − t_n) ∫_{t_n}^{t_{n+1}} f(s, y(s)) ds ,     (4.9)

1. Explicit methods
   (a) Explicit Euler: y^{n+1} = y^n + Δt_n f(t_n, y^n);
   (b) Heun's method:
       y^{n+1} = y^n + (Δt_n/2) ( f(t_n, y^n) + f(t_{n+1}, y^n + Δt_n f(t_n, y^n)) );
   (c) Fourth order Runge-Kutta scheme: multiple stages are included

       F₁ = f(t_n, y^n)
       F₂ = f(t_n + Δt_n/2, y^n + (Δt_n/2) F₁)
       F₃ = f(t_n + Δt_n/2, y^n + (Δt_n/2) F₂)
       F₄ = f(t_n + Δt_n, y^n + Δt_n F₃) ,

       and finally

       y^{n+1} = y^n + Δt_n (F₁ + 2F₂ + 2F₃ + F₄)/6

2. Implicit methods
   (a) Implicit Euler: y^{n+1} = y^n + Δt_n f(t_{n+1}, y^{n+1});
   (b) Crank-Nicolson: y^{n+1} = y^n + (Δt_n/2) ( f(t_n, y^n) + f(t_{n+1}, y^{n+1}) );
   (c) Midpoint rule: y^{n+1} = y^n + Δt_n f( (t_n + t_{n+1})/2 , (y^n + y^{n+1})/2 );

    z^{n+1,0} = y^n + Δt_n f(t_n, y^n)
which we then correct using fixed point iterations following

    z^{n+1,k+1} = y^n + Δt_n f(t_{n+1}, z^{n+1,k}) .

If the time step is chosen appropriately, one can show that z^{n+1,k} → y^{n+1} as k → +∞.
In practice, one can set a certain tolerance ε > 0 and run a finite number of
iterations such that

    |z^{n+1,k+1} − z^{n+1,k}| ≤ ε .

The convergence is often very fast and even one iteration can suffice to get a
good approximation.
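A compact way to compare these schemes is to apply them to a simple test
problem. The sketch below implements explicit Euler, the fourth order Runge-
Kutta scheme and implicit Euler (the latter solved with the fixed point iterations
just described) for dy/dt = −y; the test problem, tolerance and step size are
illustrative choices made here.

    import numpy as np

    def f(t, y):
        return -y    # test problem dy/dt = -y, exact solution exp(-t)

    def explicit_euler(y, t, dt):
        return y + dt * f(t, y)

    def rk4(y, t, dt):
        F1 = f(t, y)
        F2 = f(t + dt / 2, y + dt / 2 * F1)
        F3 = f(t + dt / 2, y + dt / 2 * F2)
        F4 = f(t + dt, y + dt * F3)
        return y + dt * (F1 + 2 * F2 + 2 * F3 + F4) / 6

    def implicit_euler(y, t, dt, tol=1e-12, maxit=50):
        z = y + dt * f(t, y)               # predictor: explicit Euler
        for _ in range(maxit):
            z_new = y + dt * f(t + dt, z)  # fixed point correction
            if abs(z_new - z) <= tol:
                break
            z = z_new
        return z_new

    dt, T = 0.1, 2.0
    y_ee = y_rk = y_ie = 1.0
    for n in range(int(T / dt)):
        t = n * dt
        y_ee = explicit_euler(y_ee, t, dt)
        y_rk = rk4(y_rk, t, dt)
        y_ie = implicit_euler(y_ie, t, dt)

    print(y_ee, y_rk, y_ie, np.exp(-T))   # RK4 is closest to the exact value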
Truncation error The local truncation error is the residual error obtained
if one applies the numerical scheme to the exact solution. For the nth iter-
ation, it consists of the difference between y(t_{n+1}) and its approximation
y(t_n) + Δt_n Φ_{Δt_n}(t_n, y(t_n)). From that we obtain the definition of the local trun-
cation error

    η^{n+1} := ( y(t_{n+1}) − y(t_n) − Δt_n Φ_{Δt_n}(t_n, y(t_n)) ) / Δt_n     (4.10)

Definition 4.3 (Consistency). Let Δt = max_{0≤n≤N−1} Δt_n be the maximum
time step. A numerical method is consistent if

    lim_{Δt→0} max_{1≤n≤N} |η^n| = 0 ,     (4.11)

and it is consistent of order p if there exists a constant C such that, for all
0 ≤ n ≤ N − 1,

    |η^{n+1}| ≤ C (Δt_n)^p .     (4.12)

Proofs of consistency generally rely on Taylor expansions of the exact solu-
tion and thus require regularity of the vector field.
Example 4.4 (Consistency of Euler's scheme). Euler's explicit scheme is con-
sistent of order 1. The truncation error is

    η^{n+1} = ( y(t_n + Δt_n) − (y(t_n) + Δt_n f(t_n, y(t_n))) ) / Δt_n .

Using Taylor's expansion around y(t_n), we see that there exists a θ_n ∈ [0, 1]
such that

    y(t_{n+1}) = y(t_n) + Δt_n f(t_n, y(t_n)) + (Δt_n²/2) y''(t_n + θ_n Δt_n) ,

so that

    y(t_n + Δt_n) − (y(t_n) + Δt_n f(t_n, y(t_n))) = (Δt_n²/2) y''(t_n + θ_n Δt_n) .
Moreover, the second derivative y''(t) can be expressed in terms of the derivatives
of f, by differentiating the expression ẏ = f(t, y(t)) with respect to time.
In the case where f and its derivatives are continuous, y'' is uniformly bounded
on any interval [0, T] with T < +∞ (because a continuous function on a closed
and bounded region has a maximum and a minimum value). Therefore

    |η^{n+1}| ≤ C Δt_n ,

where C = (1/2) sup_{t∈[0,T]} |y''(t)| depends on the integration time and on the initial
condition.
From the definition of consistency, the right hand side tends to zero with Δt.
Moreover, if the method is consistent of order p, then the global error is of order
Δt^p since

    max_{1≤n≤N} |y^n − y(t_n)| ≤ S T C Δt^p .     (4.17)
Consequently, lim_{n→∞} y^n = 0 if and only if
If condition (4.21) is satisfied, then for a fixed value of Δt, the numerical so-
lution reproduces the exact behaviour of the true solution when t_n → ∞. Oth-
erwise, the numerical solution blows up asymptotically. Therefore, (4.21) is a
stability condition.
Since the problem (4.18) is linear, we can write one iteration of a one-step
method in the form

    y^{n+1} = R(λΔt) y^n ,     (4.22)

where R depends on the chosen scheme and can be interpreted as an amplifica-
tion factor describing the amplification of the solution between two consecutive
time steps. For example, for an explicit Euler scheme, R(z) = 1 + z, or for
Crank-Nicolson,

    R(z) = (1 + z/2) / (1 − z/2) .
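The amplification factors can be compared directly. The sketch below evaluates
|R(z)| for explicit Euler and Crank-Nicolson on the test problem dy/dt = λy with
λ < 0; the decay rate and the range of time steps are illustrative choices made
here.

    def R_explicit_euler(z):
        return 1.0 + z

    def R_crank_nicolson(z):
        return (1.0 + z / 2.0) / (1.0 - z / 2.0)

    lam = -10.0                           # illustrative decay rate
    for dt in (0.05, 0.1, 0.2, 0.3):
        z = lam * dt
        print(dt,
              abs(R_explicit_euler(z)),   # exceeds 1 once dt > 2/|lam|: unstable
              abs(R_crank_nicolson(z)))   # stays <= 1 for all dt: unconditionally stable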
The solution to this system is y(t) = (y_{1,0} e^{−t}, y_{2,0} e^{−µt}). There are two very
different time scales in the solution, namely one species decays on a timescale
of order 1, while the other decays with order 1/µ. We suppose that we are
interested in solving the dynamics of the first species while preserving stability
for the second. If we apply the explicit Euler scheme, we obtain

L-stable methods are in general very good at integrating stiff equations since
the fastest modes will decay the most rapidly. The implicit Euler method is an
example of an L-stable method.
    dH(p(t), q(t))/dt = ∂_p H(p(t), q(t)) · ṗ(t) + ∂_q H(p(t), q(t)) · q̇(t) = 0

where (p, q) ∈ R^{2dn}. A relevant notion of stability for a Hamiltonian system is
therefore the long-term preservation of the Hamiltonian, or total energy.
The system can also be written as

    ẏ = J ∇H(y) ,     (4.28)
Example 4.10 (Small particle in a stratified atmosphere). The motion of a
small particle in a stratified atmosphere is governed by the second-order ODE:

    z̈ = g (ρ(z) − ρ_p) / ρ_p ,     (4.30)

where g is the gravity, ρ(z) is the density of the ambient fluid and the particle is
at equilibrium at a height z = C, where ρ(C) = ρ_p. The ODE can equivalently
be written as a first order system

    ż = v ,     (4.31)
    v̇ = g (ρ(z) − ρ_p) / ρ_p .     (4.32)

The function

    H = E = (1/2) v² − (g/ρ_p) ∫ [ρ(z) − ρ_p] dz ,     (4.33)

is the energy of the system (the sum of kinetic energy and potential energy) and
is constant along trajectories. A simple calculation shows that (4.31) can be
written in the form

    ż = ∂H/∂v ,     (4.34)
    v̇ = −∂H/∂z ,     (4.35)

and the system is a Hamiltonian system.
    ẏ = F(y) ,     (4.36)

where F(y) is the vector field. Consider a domain with finite volume D₀ ⊂
R^d, d = 2n. The transformation φ_t : y(0) ↦ φ_t(y(0)) = y(t; y(0)) maps D₀ into
the domain D_t (t ≥ 0), according to the solution of (4.36) that satisfies the
given initial conditions y(0). The volume V_t of the domain D_t is equal to

    V_t = ∫_{D_t} dy     (4.37)
        = ∫_{D_0} det( ∂φ_t/∂y ) dy     (4.38)
        = ∫_{D_0} | det M | dy ,     (4.39)

where M is the Jacobian matrix of the flow map, which is composed of the ele-
ments ∂(φ_t)_i/∂y_j. Therefore, the volume preserving condition is the following equality:

    | det M | = 1 .     (4.40)
Volume preserving numerical integration In the following example we
want to explore numerical methods for the integration of a system characterised
by Hamiltonian dynamics. Convergence results obtained by a priori analysis
are given for certain schemes, however the constants which appear in error
estimations, such as the stability constant, typically depend exponentially on
time. Since Hamiltonian systems preserve energy over time, and preserve volume
in phase space, the question is whether those quantities are indeed preserved over
long time for a finite time step which is not a priori related to the convergence
of the solution in finite times. This is important because convergence results
are valid in the limit t ! 0.
In order for the numerical method to preserve volume in phase space, the
requirement is that the volume preserving condition (4.40) is satisfied for the
numerical flow map φ_{Δt_n}:

    det ∇φ_{Δt_n} = 1 .     (4.41)
Example 4.11 (Numerical integration of a harmonic oscillator). We consider
the example of the dust particle (4.30) in a linearly stratified atmosphere ρ(z) =
αz + ρ_p, α < 0. The corresponding Hamiltonian (4.33), which is the total energy
of the system (preserved over time), simplifies to

    H = E = (1/2) v² − (1/2) (g/ρ_p) α z² ,     (4.42)

    ż = v ,     (4.43)
    v̇ = (g/ρ_p) α z .     (4.44)

Writing the system as ẏ = A y with y = (z, v), the explicit Euler scheme reads

    y^{n+1} = y^n + A Δt y^n .     (4.46)

    ∇φ_{Δt_n} = Id + A Δt ,     (4.47)

and the determinant is |Id + A Δt| = (1 − (g/ρ_p) α Δt²), which is not one but strictly
larger than one. Thus the scheme does not preserve phase space volume but
rather increases phase space volume with time.
The implicit Euler scheme

    y^{n+1} = (Id − A Δt)^{−1} y^n     (4.48)

and the determinant is

    det ∇φ_{Δt_n} = (1 − (g/ρ_p) α Δt²) / (1 − (g/ρ_p) α Δt²)² ,     (4.50)

which is not one either but smaller than one. Thus the scheme does not preserve
phase space volume but rather decreases phase space volume with time.
However, symplectic numerical methods preserve phase volume and we in-
troduce them next.
Symplectic flow Hamiltonian flows possess the even deeper property of pre-
serving symplectic structures. A flow map g : R^{2d} → R^{2d} is said to be sym-
plectic if its Jacobian g'(y) satisfies the property

    g'(y)^T · J · g'(y) = J .     (4.51)

One can show that this property has a geometric interpretation of preserv-
ing oriented areas along the flow, and thus also to preserve volume in phase
space. The fundamental property of Hamiltonian systems is that the flow map
φ_t : y(0) ↦ φ_t(y(0)) = y(t; y(0)) is symplectic:

    φ_t'(y)^T · J · φ_t'(y) = J .     (4.52)

Writing Ψ(t) = φ_t'(y) = ∂φ_t(y)/∂y, one computes

    d/dt [ Ψ(t)^T · J · Ψ(t) ] = Ψ(t)^T · ∇²H(φ_t(y)) · J^T J · Ψ(t) + Ψ(t)^T · ∇²H(φ_t(y)) J² Ψ(t) = 0 ,

where we use J^T = −J. This shows that Ψ(t)^T · J · Ψ(t) = Ψ(0)^T · J · Ψ(0) = J,
since Ψ(0) = ∂φ_0(y)/∂y = Id, and thus the flow is symplectic.
Symplecticity is linked to the conservation of volume occupied in phase space,
and thus for a symplectic numerical method, trajectories cannot converge to a
given trajectory (otherwise the volume occupied in phase space would shrink
with integration time, like we saw for the implicit Euler scheme in Example
4.11). Trajectories cannot diverge either, as it would imply an increase of phase
volume (see Example 4.11 when using the explicit Euler scheme). One can show
that symplectic methods preserve an approximate Hamiltonian for exponentially
long times of integration.
In particular, the implicit midpoint rule is symplectic. A fruitful means of
constructing symplectic integrators is to use a splitting strategy.
If, by chance, the exact flows φ_{t,1} and φ_{t,2} of the systems ẏ = F₁(y) and
ẏ = F₂(y) can be calculated explicitly, one can, from a given initial value y₀,
first solve the first system to obtain a value y_{1/2}, and from this value integrate
the second system to obtain y₁. For a Hamiltonian system with H(p, q) = K(q) + P(p),
each term defines a Hamiltonian subsystem,
which we suppose can be integrated. Since the flow maps φ_{t,1}(y) and φ_{t,2}(y)
are solutions of a Hamiltonian system we have

    φ_{t,1}'(y)^T · J · φ_{t,1}'(y) = J ,   φ_{t,2}'(y)^T · J · φ_{t,2}'(y) = J .     (4.56)

The composition ψ_t := φ_{t,2} ∘ φ_{t,1} of the two exact flows is also symplectic since,
with y* = φ_{t,1}(y),

    ψ_t'(y)^T · J · ψ_t'(y) = ( φ_{t,2}'(y*) φ_{t,1}'(y) )^T · J · ( φ_{t,2}'(y*) φ_{t,1}'(y) )     (4.57)
                            = φ_{t,1}'(y)^T φ_{t,2}'(y*)^T · J · φ_{t,2}'(y*) φ_{t,1}'(y)     (4.58)
                            = φ_{t,1}'(y)^T · J · φ_{t,1}'(y) = J .     (4.59)

    ṗ = −K'(q) ,   ṗ = 0 ,     (4.60)
    q̇ = 0 ,   q̇ = P'(p) .     (4.61)

The flow maps φ_{t,1}(p, q) and φ_{t,2}(p, q) are respectively
and they are symplectic since they are flow maps of Hamiltonian systems. Their
composition is thus also symplectic. The splitting method based on splitting the
Hamiltonian into kinetic and potential energy terms is given by

    p^{n+1} = p^n − Δt K'(q^n) ,     (4.63)
    q^{n+1} = q^n + Δt P'(p^{n+1}) .     (4.64)

This scheme is referred to as the symplectic Euler scheme. We can verify that the
determinant of the Jacobian of this scheme applied to the example system of
Example 4.11 is indeed one. For this example, the symplectic Euler scheme leads to

    z^{n+1} = z^n + Δt v^n ,
    v^{n+1} = v^n + Δt (g/ρ_p) α z^{n+1} = v^n + Δt (g/ρ_p) α (z^n + Δt v^n) ,

and we indeed see that |det(A)| = 1, such that volume in phase space is preserved
through integration, and the total energy is approximately preserved.
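The difference in long-time behaviour between the three schemes of Example 4.11
can be seen numerically. The sketch below integrates the linearly stratified case
with illustrative values of g, ρ_p and α chosen here, and tracks the energy (4.42)
for explicit Euler, implicit Euler and symplectic Euler.

    import numpy as np

    g, rho_p, alpha = 9.81, 1.2, -0.01   # illustrative parameter values
    w2 = -g * alpha / rho_p              # squared oscillation frequency (> 0 since alpha < 0)

    def energy(z, v):
        # Total energy (4.42): H = v^2/2 - (1/2)(g/rho_p) alpha z^2
        return 0.5 * v**2 - 0.5 * (g / rho_p) * alpha * z**2

    dt, steps = 0.05, 20000
    z_ee = z_ie = z_se = 1.0
    v_ee = v_ie = v_se = 0.0

    for _ in range(steps):
        # Explicit Euler
        z_ee, v_ee = z_ee + dt * v_ee, v_ee - dt * w2 * z_ee
        # Implicit Euler: solve the 2x2 linear system (Id - dt A) y^{n+1} = y^n
        den = 1.0 + dt**2 * w2
        z_ie, v_ie = (z_ie + dt * v_ie) / den, (v_ie - dt * w2 * z_ie) / den
        # Symplectic Euler: update z first, then v with the new z
        z_se = z_se + dt * v_se
        v_se = v_se - dt * w2 * z_se

    print(energy(z_ee, v_ee))   # grows steadily: phase space volume increases
    print(energy(z_ie, v_ie))   # decays towards zero: phase space volume shrinks
    print(energy(z_se, v_se))   # stays close to the initial energy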
A second order, symmetric variant is called the Störmer-Verlet scheme:

    p^{n+1/2} = p^n − (Δt/2) K'(q^n) ,     (4.65)
    q^{n+1} = q^n + Δt P'(p^{n+1/2}) ,     (4.66)
    p^{n+1} = p^{n+1/2} − (Δt/2) K'(q^{n+1}) .     (4.67)
The implicit midpoint rule is another example of a symplectic scheme.
Figure 4.1: Area preservation of numerical methods for a harmonic oscillator
system. Figure from [Ha10].
In the previous section, the basic strategy to represent the evolution of con-
tinuous functions that are solutions of ordinary differential equations was to
approximate the set of values taken by the function at a finite number of grid
points. From the grid-point values, the derivatives of the function were approx-
imated using finite differences. The goal of the present section is to examine the
behaviour of numerical schemes in which finite differences replace both time and
space derivatives in time-dependent partial differential equations (PDEs). The
finite-difference approximations of the time and space derivatives will be based
on discrete values taken by the solution function at regularly-spaced grid points
of a space-time grid. The analysis of such numerical integration approaches has
to consider simultaneously the space and time discretisation errors. Based on
the example problem of the advection-diffusion equation, we will analyse how to
approximate the solution numerically using finite-differences schemes, and how
to ensure the quality of the numerical solution.
5.1 Advection-diffusion equation
Our model PDE is an advection-diffusion equation for the unknown concentra-
tion c : R⁺ × Ω → R,

    ∂c(t, x)/∂t + u(x) · ∇c(t, x) − D Δc(t, x) = 0 ,     (5.1)

where u(x) is the velocity of the flow (which is assumed to be divergence free
in this form), D is a diffusion coefficient which is assumed constant, and there
is no source term. Initial conditions (t = 0) and boundary conditions (x at
the boundaries of the spatial domain) have to be specified to complete the
model. For example, one could consider periodic boundary conditions. For a
one-dimensional spatial domain Ω = [0, L] and t ∈ [0, T], initial and (periodic)
boundary conditions are in this case
we neglect high-order terms. Possible semi-discretisations include a right-sided
approximation:

    ∂_x f(t, x) ≈ ( f(t, x + Δx) − f(t, x) ) / Δx ,     (5.4)

or left-sided

    ∂_x f(t, x) ≈ ( f(t, x) − f(t, x − Δx) ) / Δx ,     (5.5)

or centred

    ∂_x f(t, x) ≈ ( f(t, x + Δx) − f(t, x − Δx) ) / (2Δx) .     (5.6)

The second space derivative may be approximated to second order by

    ∂²_x f(t, x) ≈ ( f(t, x + Δx) − 2f(t, x) + f(t, x − Δx) ) / Δx² .     (5.7)

For example, the method of lines approximation of (5.3) using central dif-
ferences is

    dc_j/dt = −( c_{j+1} − c_{j−1} ) / (2Δx) + (1/Pe) ( c_{j+1} − 2c_j + c_{j−1} ) / Δx² ,   1 ≤ j ≤ J,     (5.8)

where c_j = c(t, x_j). The resulting system of equations is now an ODE system
since there is only one independent variable t. The PDE is thus replaced by a
system of J ODEs, whose solutions will be the J functions c_1(t), c_2(t), · · · c_J(t).
A complete specification of the ODE system still requires initial conditions.
Including the initial and periodic boundary conditions discussed above results
in

    c(x_j, t = 0) = c₀(x_j) ,   and   c(x_1, t) = c(x_J, t) ,   t ≥ 0.

The system can now be integrated numerically using ODE integration methods.
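The semi-discretisation (5.8) can be written as a small right-hand-side function
and handed to any ODE integrator. The sketch below uses scipy's solve_ivp
purely for illustration; the grid size, Péclet number and initial profile are choices
made here.

    import numpy as np
    from scipy.integrate import solve_ivp

    J, L, Pe = 100, 1.0, 50.0            # illustrative grid size, domain length, Peclet number
    dx = L / J
    x = np.arange(J) * dx

    def rhs(t, c):
        # Periodic neighbours via np.roll: c_{j+1} and c_{j-1}.
        cp = np.roll(c, -1)
        cm = np.roll(c, 1)
        advection = -(cp - cm) / (2 * dx)
        diffusion = (cp - 2 * c + cm) / (Pe * dx**2)
        return advection + diffusion     # right-hand side of (5.8)

    c0 = np.exp(-100 * (x - 0.5)**2)     # smooth initial bump (illustrative)
    sol = solve_ivp(rhs, (0.0, 0.5), c0, method='RK45', max_step=1e-3)
    print(sol.y[:, -1])                  # concentration profile at the final time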
Implementation as a matrix system The full finite-differences discreti-
sation has reduced the PDE to a set of algebraic equations. The numerical
solution of the PDE is recovered by solving the algebraic equations. The un-
known at step n is the vector [c^n_1, c^n_2, · · · c^n_J]. For the example using an explicit
Euler scheme for the time derivative and central differences in space to solve
5.3,

    ( c^{n+1}_j − c^n_j ) / Δt = −( c^n_{j+1} − c^n_{j−1} ) / (2Δx) + (1/Pe) ( c^n_{j+1} − 2c^n_j + c^n_{j−1} ) / Δx² ,

one needs to solve the system of equations

    c^{n+1}_j = c^n_j + Δt ( −( c^n_{j+1} − c^n_{j−1} ) / (2Δx) + (1/Pe) ( c^n_{j+1} − 2c^n_j + c^n_{j−1} ) / Δx² ) ,   1 ≤ j ≤ J.

The system can be written in matrix-vector form. Denoting the vector c^n =
[c^n_1, c^n_2, · · · c^n_J], the system reads

    c^{n+1} = ( Id − (Δt/(2Δx)) A + (Δt/(Pe Δx²)) B ) c^n ,

with Id being the identity matrix and with the following tridiagonal matrices
A and B, here written to consider periodic boundary conditions: A = tridiag(−1, 0, 1)
and B = tridiag(1, −2, 1), with the additional corner entries A_{1,J} = −1, A_{J,1} = 1
and B_{1,J} = B_{J,1} = 1 closing the periodic boundary.
Finally, denoting the matrix M := Id − (Δt/(2Δx)) A + (Δt/(Pe Δx²)) B, the system
of algebraic equations is written as the matrix system

    c^{n+1} = M c^n .
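The matrices A, B and the update matrix M can be assembled in a few lines. The
following is a minimal sketch with illustrative values of Δt, Δx and Pe chosen here.

    import numpy as np

    J, dx, dt, Pe = 100, 0.01, 1e-4, 50.0    # illustrative values

    # Periodic tridiagonal matrices: A = tridiag(-1, 0, 1), B = tridiag(1, -2, 1),
    # with corner entries closing the periodic boundary.
    A = np.diag(np.ones(J - 1), 1) - np.diag(np.ones(J - 1), -1)
    A[0, -1], A[-1, 0] = -1.0, 1.0
    B = np.diag(np.ones(J - 1), 1) + np.diag(np.ones(J - 1), -1) - 2.0 * np.eye(J)
    B[0, -1], B[-1, 0] = 1.0, 1.0

    # Update matrix of the explicit Euler / central differences scheme.
    M = np.eye(J) - dt / (2 * dx) * A + dt / (Pe * dx**2) * B

    x = np.arange(J) * dx
    c = np.exp(-100 * (x - 0.5)**2)          # illustrative initial condition
    for _ in range(1000):
        c = M @ c                            # c^{n+1} = M c^n
    print(np.max(c))                         # peak of the profile after 1000 steps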
Consistency The local truncation error of a numerical scheme is the resid-
ual that is generated when the exact solution is inserted into the numerical
scheme. Truncation errors will arise due to the numerical scheme both
in time and space. Thus the local truncation error η^n_j depends on the spatial
node j and on the time step n. Denoting the exact solution of 5.3 at the nodes by
c^n_j = c(t_n, x_j), for the scheme 5.9 using Euler explicit in time and central
differences in space, the local truncation error at node (t_n, x_j) is

    η^n_j = ( c^{n+1}_j − c^n_j ) / Δt + ( c^n_{j+1} − c^n_{j−1} ) / (2Δx) − (1/Pe) ( c^n_{j+1} − 2c^n_j + c^n_{j−1} ) / Δx²     (5.13)

Definition 5.1 (Consistency). The global truncation error is

    η(Δt, Δx) = max_{n,j} |η^n_j|     (5.14)
Stability Similarly to the analysis of numerical schemes for ODEs, one needs
a criterion to ensure that local truncation errors do not accumulate too quickly
during a simulation.
In order to understand the evolution of errors during a simulation, we recall
that the finite-differences discretisation of the advection-diffusion equation leads
to a matrix system

    z^{n+1} = M z^n

    c^{n+1} + e^{n+1} = M (c^n + e^n)

and thus

    e^{n+1} = M e^n ,

or recursively e^n = M e^{n−1} = · · · = M^n e^0. In other words, numerical errors
evolve in the same way as the solution and the norm of the matrix M controls
the growth of error. A sufficient stability condition is thus

    ‖M‖ ≤ 1

with a suitable matrix norm ‖ · ‖. In the case where ‖M‖ ≤ 1 only when Δt and
Δx satisfy a condition of the type Δt ≤ S Δx^α (S being some constant), we refer
to conditional stability. If the exponent α is small, e.g. α = 1, the restriction
on the time step Δt is mild, but higher exponents lead to strong
restrictions on the time step for stable simulations.

    lim_{Δx, Δt → 0} ‖e^n‖_{Δx,p} = 0 ,

where the error vector e^n has entries (e^n)_j = u^n_j − u(t_n, x_j) for all 1 ≤ j ≤ J.
known as von Neumann stability analysis, is based on an analysis in the fre-
quency domain, using Fourier series to represent the solution. As the diffusion
part of the PDE and the advection part of the PDE lead to different difficul-
ties, we will analyse first the pure diffusion case and the pure advection case
separately, before combining the results to investigate the general case of an
advection-diffusion model.
and to examine the stability of the individual Fourier components. The total
solution will be stable if and only if every Fourier component is stable. The use
of Fourier series is strictly appropriate only if the spatial domain is periodic. For
more general boundary conditions, a rigorous stability analysis is more difficult,
but the von Neumann method still provides a way to characterise obviously
unstable numerical schemes and avoid them for the scientific computing task.
A key property of Fourier series is that individual Fourier modes satisfy
    (d/dx) e^{ikx} = ik e^{ikx} .
Similarly, for finite Fourier series, if one starts with some initial conditions
c^n_j = e^{ikjΔx}, after one iteration of the finite-differences scheme, one will have

    c^{n+1}_j = A_k e^{ikjΔx} .

It follows that the value of the amplification factor determines if the amplitude
of this particular Fourier mode will grow or decay with the iterations of the
numerical integration scheme. Hence, the stability of each Fourier component
is determined by the modulus of its amplification factor. The von Neumann
stability condition is thus

    |A_k| ≤ 1   ∀k ∈ Z.     (5.17)
5.3.3 Diffusion equation
We will see that for a diffusion process

    ∂c/∂t = D ∂²c/∂x² ,     (5.18)

using an implicit scheme in time ensures unconditional stability, whereas using
an explicit scheme in time leads to a conditional stability, where the time step
is limited by the square of the spatial step.
The discretisation using an Euler implicit scheme for the approximation of
the time derivative and central differences in space is

    ( c^{n+1}_j − c^n_j ) / Δt = D ( c^{n+1}_{j+1} − 2c^{n+1}_j + c^{n+1}_{j−1} ) / Δx² ,     (5.19)

where c^n_j = c(t_n, x_j). Using the Euler explicit scheme for time discretisation,
we obtain

    ( c^{n+1}_j − c^n_j ) / Δt = D ( c^n_{j+1} − 2c^n_j + c^n_{j−1} ) / Δx² ,     (5.20)

    ĉ^{n+1}_k = A_k ĉ^n_k ,
Therefore we have

    |A_k| ≤ 1   ∀k ∈ Z

and the Euler implicit scheme is unconditionally stable.
If we instead substitute an arbitrary Fourier mode in the Euler explicit
scheme 5.20, we obtain

    ĉ^{n+1}_k = A_k ĉ^n_k ,

with the amplification factor for any Fourier mode k

    A_k = 1 − 4D (Δt/Δx²) sin²( kΔx/2 ) .

The stability condition is only satisfied if

    |A_k| = | 1 − 4D (Δt/Δx²) sin²( kΔx/2 ) | ≤ 1   ∀k ∈ Z,

which gives the following condition on the time step

    Δt ≤ Δx² / (2D) .     (5.22)

In conclusion, for a pure diffusion model, an implicit treatment of the time
integration leads to unconditional stability, while an explicit treatment leads to
conditional stability with a condition on Δt and Δx given by 5.22.
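The threshold (5.22) is easy to confirm experimentally. The sketch below runs the
explicit scheme (5.20) with a time step slightly below and slightly above Δx²/(2D);
the grid, diffusivity and initial data are illustrative choices made here.

    import numpy as np

    def explicit_diffusion(D, dx, dt, steps, J=64):
        x = np.arange(J) * dx
        c = np.sin(2 * np.pi * x / (J * dx))      # smooth periodic initial data
        r = D * dt / dx**2
        for _ in range(steps):
            c = c + r * (np.roll(c, -1) - 2 * c + np.roll(c, 1))   # scheme (5.20)
        return np.max(np.abs(c))

    D, dx = 1.0, 0.1
    dt_limit = dx**2 / (2 * D)                    # threshold (5.22)
    print(explicit_diffusion(D, dx, 0.95 * dt_limit, 2000))   # decays: stable
    print(explicit_diffusion(D, dx, 1.05 * dt_limit, 2000))   # blows up: unstable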
The CFL condition A condition which restrictively couples the time step
and the spatial grid size of an explicit time integrator scheme, such as exem-
plified in 5.22, is called a Courant-Friedrichs-Lewy or CFL condition. It is
restrictive, since for a given spatial grid size, one cannot consider too large a
time step. For example, doubling the spatial resolution will require a substan-
tially smaller time step (in the case of the CFL condition 5.22, 4 times smaller).
This is a simple yet fundamental observation for numerical integration of PDEs.
As already mentioned above, this effect can be circumvented when employing
schemes with better stability properties, in particular implicit schemes. The
trade-off therefore becomes weighing the computational cost of solving a linear
system every iteration against stepping an explicit method a large number of
times.
Direct approach The finite differences schemes 5.24 and 5.25 respectively
lead to the matrix forms c^{n+1} = M_C c^n and c^{n+1} = M_L c^n, where

    M_C = Id − u (Δt/(2Δx)) A   and   M_L = Id − u (Δt/Δx) A_L

and A is the tridiagonal matrix A = tridiag(−1, 0, 1) and A_L is the tridiagonal
matrix A_L = tridiag(−1, 1, 0). The direct approach consists of evaluating the
norms of those matrices.

    ĉ^{n+1}_k = A_k ĉ^n_k .

Under the assumption that u > 0, we will only have |A_k| ≤ 1 under the condition
0 < u Δt/Δx ≤ 1, or

    Δt ≤ Δx / u .     (5.27)

For the case where u < 0, one needs to use right-sided differences to obtain
a similar conditional stability condition.
The CFL condition Discretisation of the advection equation using a left-
sided approximation of the spatial derivative results in the CFL condition 5.27,
or equivalently

    0 ≤ u Δt/Δx ≤ 1 .     (5.28)

In the case u < 0, this requirement cannot be fulfilled and the solution is
thus unstable. Instead, one needs to consider a right-sided approximation of
the derivative in order to obtain a CFL condition that can be fulfilled. More
generally, the CFL condition says that the choice of the discretisation cannot
be made independently of the data that determine the PDE to be solved. The
scheme should be an upwind scheme, i.e. one should use backward differences
with respect to the advection velocity.
The quantity u Δt/Δx is called the Courant number. In more general problems,
the solution may consist of a family of waves travelling at different speeds, in
which case the Courant number should be defined such that u is the speed of
the most rapidly travelling wave.
In summary, for the advection equation, using central differences in space
results in unconditional instability, while choosing an upwind scheme leads to
conditional stability with a CFL condition that has to be fulfilled.
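The same experiment can be run for the advection equation with the upwind
scheme. The sketch below compares a Courant number below one with one slightly
above one; the velocity, grid and initial condition are illustrative choices made
here.

    import numpy as np

    def upwind_advection(u, dx, dt, steps, J=200):
        x = np.arange(J) * dx
        c = np.exp(-100 * (x - J * dx / 2)**2)    # illustrative initial bump
        nu = u * dt / dx                          # Courant number
        for _ in range(steps):
            c = c - nu * (c - np.roll(c, 1))      # upwind (backward) differences, u > 0
        return np.max(np.abs(c))

    u, dx = 1.0, 0.01
    print(upwind_advection(u, dx, 0.8 * dx / u, 1000))   # Courant 0.8: stays bounded
    print(upwind_advection(u, dx, 1.2 * dx / u, 1000))   # Courant 1.2: blows up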
Dissipation and dispersion The von Neumann stability analysis also sheds
light on the dissipation and dispersion of a numerical scheme. To see this, we
first notice that an exact solution to the advection equation 5.23, given some
initial condition c₀(x), is given by

    c(x, t) = c₀(x − ut) ,

and setting g_k = e^{−iukΔt} we obtain for the exact solution evaluated at a given
discrete node

    c(x_j, t_n) = Σ_{k=−∞}^{∞} ĉ⁰_k e^{ikjΔx} (g_k)^n .

From the Fourier representation of the numerical solution 5.26 and the recursive
rule ĉ^{n+1}_k = A_k ĉ^n_k, we have

    c^n_j = Σ_{k=−N}^{N} ĉ⁰_k e^{ikjΔx} (A_k)^n ,
stability. Thus, A_k is a dissipation coefficient. The smaller |A_k|, the higher is
the reduction of the amplitude of the wave mode ĉ⁰_k, and, as a consequence, the
higher the numerical dissipation. The ratio

    ε_a(k) = |A_k| / |g_k|

is called the amplification error of the kth harmonic associated with the numer-
ical scheme (in this case, it corresponds to the amplification factor).
On the other hand, we have g_k = e^{−iukΔt}, and writing

    A_k = |A_k| e^{−iω_k Δt} = |A_k| e^{−i (ω_k/k) k Δt} ,

we notice that the velocity of propagation of the true solution is u, while the
numerical velocity of propagation relative to the kth harmonic is ω_k/k. The ratio
between the two velocities

    ε_d(k) = ω_k / (uk)

quantifies the dispersion error relative to the kth harmonic.
Example 5.4 (Upwind scheme). Discretising the advection equation using the
upwind scheme 5.25 led to the amplification coefficient A_k = 1 − u (Δt/Δx) (1 − e^{−ikΔx}).
If we consider as an example a wave of period 2Δx, the norm of the amplification
coefficient of this harmonic is

    |A_k|² = 1 − 4u (Δt/Δx) · ( 1 − u (Δt/Δx) ) .

Hence, in the case u Δt/Δx = 1, then |A_k| = 1 and there is no dissipation error. In
the case u Δt/Δx = 0.5, we have |A_k| = 0 and the 2Δx wave is damped in a single
time step. From this observation we conclude that the upwind scheme is strongly
damping, hence solutions get smoothed during the numerical integration.
The phase change per time step associated with the upwind scheme will be
(see calculations in the von Neumann stability analysis earlier)

    θ_d = arctan( Im(A_k) / Re(A_k) ) = arctan( −α sin(kΔx) / ( 1 − α (1 − cos(kΔx)) ) ) ,

where α = u Δt/Δx is the Courant number. Taking the ratio with the analytical
phase speed, the dispersion error can be calculated. Doing the calculation high-
lights that if u Δt/Δx < 0.5, then waves are slowed down, whereas if 0.5 < u Δt/Δx < 1,
waves are accelerated. The dispersion error is a function of the CFL number
and of the wave number k. The phase error is larger for shorter waves (larger
wavenumber k).
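These error measures can be tabulated directly from the amplification factor. The
sketch below evaluates |A_k| and the dispersion error ε_d(k) = ω_k/(uk) for the
upwind scheme at a Courant number chosen here for illustration.

    import numpy as np

    u, dx, alpha = 1.0, 0.01, 0.4       # illustrative velocity, grid spacing, Courant number
    dt = alpha * dx / u

    for k in (np.pi / (8 * dx), np.pi / (4 * dx), np.pi / (2 * dx)):  # 16dx, 8dx, 4dx waves
        Ak = 1.0 - alpha * (1.0 - np.exp(-1j * k * dx))   # upwind amplification factor
        amp_error = np.abs(Ak)                            # |g_k| = 1 for pure advection
        omega_k = -np.angle(Ak) / dt                      # numerical frequency of mode k
        disp_error = omega_k / (u * k)                    # ratio of numerical to exact phase speed
        print(k, amp_error, disp_error)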
Hence, assuming that the advection velocity u > 0, an appropriate finite-
differences scheme could be:

    ( c^{n+1}_j − c^n_j ) / Δt = −( c^{n+1}_j − c^{n+1}_{j−1} ) / Δx + (1/Pe) ( c^{n+1}_{j+1} − 2c^{n+1}_j + c^{n+1}_{j−1} ) / Δx² .     (5.29)

The matrix to consider for the stability analysis is

    M = ( Id + (Δt/Δx) A_L − (Δt/(Pe Δx²)) B )^{−1} ,

for which one can show that |A_k| ≤ 1. Hence, the scheme is unconditionally
stable.
    dx/dt = v(x, t) ,     (6.4)

which is an ODE for x(t). A solution ψ(x(t), t) satisfies

    d/dt ψ(x(t), t) = ∂ψ/∂t + x'(t) ∂ψ/∂x = ∂ψ/∂t + v(x, t) ∂ψ/∂x = 0 ,     (6.5)
which implies that ψ(x(t), t) = ψ₀(x₀). Each value of x₀ determines a unique
characteristic base curve if v is such that the initial value problems for the ODE
(6.4) are uniquely solvable (we assume v smooth enough for that). On any of
the integral curves ψ(x(t), t) = ψ₀(x₀), f will also be constant (see (6.2) and
(6.5)). Since the curves of constant ψ and constant f coincide, f has to be a
function of ψ alone:

    f(x(t), t) = F(ψ(x, t)) .     (6.6)

We will consider for example an initial condition f(x, 0) = f₀(x) such that
f(x, 0) = F(ψ(x, 0)). This equation can be solved for x, which then leads to
f(x, t) = f₀(x(ψ(x, t))).
Example 6.1 (Linear waves). We consider first the simple case of a constant
advection velocity

    ∂f/∂t (x, t) + v₀ ∂f/∂x (x, t) = 0 .     (6.7)

Now by substitution, one can easily see that f = ρ(x − v₀t) is a solution for
any differentiable ρ(x). Note that f = ρ(x − v₀t) describes the propagation of
values given by an initial condition moving with velocity v₀. For v₀ > 0,
the propagation occurs to the right, while for v₀ < 0 it occurs to the left. If
ρ(x) = sin x, then f = sin(x − v₀t); the point (x, t) such that x − v₀t = π/2 is
at the crest of a wave and it moves in the x-t plane along the straight line
x = v₀t + π/2. Thus the solutions of (6.7) represent linear waves that travel
with velocity v₀. This velocity is relative to the x axis.
Example 6.2. Consider the initial value problem

    ∂f/∂t + x sin(t) ∂f/∂x = 0 ,   f₀(x) = 1 + 1/(1 + x²) .

Here we have v(x, t) = x sin t. Characteristic base curves for this problem
are solutions of

    dx/dt = x sin t ,   x(0) = x₀ .

By separation of variables we get

    ∫ (1/x) dx = ∫ sin t dt .

Hence

    ln x = −cos t + c ,

and using the initial condition

    x(t) = x₀ e^{1 − cos t} .

The function f is preserved along the characteristic base curves

    f(x(t), t) = f₀(x₀) ,   x₀ = x(t) e^{−1 + cos t} .

Since we know that

    f₀(x₀) = 1 + 1/(1 + x₀²) ,

we find that

    f(x, t) = 1 + 1/( 1 + x² e^{−2 + 2 cos t} ) .

The characteristic base curves and the solution f(x, t) are illustrated in Figure
6.1.
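The closed-form solution of Example 6.2 can be checked by evaluating f along the
characteristic base curves derived above and verifying that it stays constant. A
minimal sketch (the sample curves and times are chosen here for illustration):

    import numpy as np

    def f(x, t):
        # Closed-form solution of Example 6.2.
        return 1.0 + 1.0 / (1.0 + x**2 * np.exp(2.0 * np.cos(t) - 2.0))

    def f0(x):
        return 1.0 + 1.0 / (1.0 + x**2)

    t = np.linspace(0.0, 3.0, 50)
    for x0 in (-2.0, 0.5, 1.0, 3.0):
        xc = x0 * np.exp(1.0 - np.cos(t))            # characteristic base curve x(t)
        values = f(xc, t)                            # f evaluated along the characteristic
        print(x0, np.max(np.abs(values - f0(x0))))   # ~0: f is constant on characteristics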
Figure 6.1: The left panel shows the characteristic base curves for example 6.2,
the right panel shows the corresponding solution f(x, t).
References
[Ben00] E.A. Bender. An Introduction to Mathematical Modeling. Dover, Mineola, 2000.
[Buc14] E. Buckingham. On Physically Similar Systems; Illustrations of the Use of Di-
mensional Equations. Phys. Rev. 4, 345–376, 1914.
[Du11] D.R. Durran. Numerical Methods for Fluid Dynamics. Texts in Applied Math-
ematics, Springer, 2011.
[Ha10] E. Hairer. Geometric Numerical Integration. Lecture Notes, TU Muenchen,
2010.
[HW05] G. Hornberger and P. Wiberg. Numerical Methods in the Hydrological Sciences.
Special Publication Series 57, American Geophysical Union, 2005.
[IBM+ 05] R. Illner, C.S. Bohun, S. McCollum, and T. van Roode. Mathematical Modelling:
A Case Studies Approach. AMS, Providence, 2005.
[QSS+ 07] A. Quarteroni, R. Sacco and F. Saleri. Numerical Mathematics. Texts in Applied
Mathematics, Springer, 2007.
[SJ05] S. Socolofsky and G. Jirka. Special Topics in Mixing and Transport Processes
in the Environment. Lecture notes, Texas A&M University and University of
Karlsruhe, 2005.
[Sto14] G. Stolz. Introduction au Calcul Scientifique. Lecture notes, Ecole des Mines,
2014.
[Tes12] G. Teschl. Ordinary Differential Equations and Dynamical Systems. AMS,
Providence, 2012.