
Lecture Notes for MATH 6193

Donna M. G. Dyer

January 2022
Contents

1 Introduction
  1.1 Introductory Concepts
  1.2 Partial Differential Equations
  1.3 Solution Methods
  1.4 Finite Difference Approximations
  1.5 Approximation of a Derivative by Finite Differences
  1.6 Hyperbolic Problems
  1.7 Parabolic Problems

2 Well-posedness
  2.1 Concept
  2.2 Worked Examples

3 Spatial Derivatives
  3.1 Finite Difference Grids
  3.2 Central Difference Approximation for ux
    3.2.1 Fourier Analysis
    3.2.2 Points Per Wavelength
  3.3 Higher Order Central Difference Approximation for ux
    3.3.1 Fourier Analysis
    3.3.2 Points Per Wavelength
    3.3.3 Aliasing
    3.3.4 Compact Central Differences: Beyond 4th Order
  3.4 One-sided Difference Approximations
  3.5 Temporal Errors
    3.5.1 Analysis of Dispersive Errors
    3.5.2 Analysis of Dissipative Errors

4 Mostly Explicit Schemes
  4.1 Method 1: D+ in time, D0 in space
    4.1.1 Truncation Error and Order of Method
    4.1.2 Von Neumann Stability Analysis
    4.1.3 Implementation
  4.2 Method 2: D− in time, D0 in space
    4.2.1 Von Neumann Stability Analysis
    4.2.2 Implementation
  4.3 Method 3: Lax-Friedrichs
    4.3.1 Von Neumann Stability Analysis
  4.4 Method 4: Leap Frog
    4.4.1 Leap-frog (2,2)
    4.4.2 Leap-frog (2,4)
    4.4.3 Artificial Dissipation
  4.5 Method 5: Lax-Wendroff
    4.5.1 Von Neumann Stability Analysis
  4.6 Method 6: MacCormack's Method
  4.7 Method 7: Runge-Kutta Time Stepping Schemes

5 Solving Systems of Equations
  5.1 Numerical Solution

6 Implicit Schemes
  6.1 Crank-Nicolson Scheme
    6.1.1 Von Neumann Stability Analysis
  6.2 Compact Implicit Schemes
  6.3 Semi-Implicit Schemes
  6.4 Implicit Schemes for Systems

7 Parabolic Problems
  7.1 Method 1: D+ in time, D+D− in Space
    7.1.1 Von Neumann Stability Analysis
  7.2 Method 2: D− in time, D+D− in Space
    7.2.1 Von Neumann Stability Analysis
    7.2.2 Suitability of Method
  7.3 Method 3: Leap Frog - the Wrong Choice
    7.3.1 Von Neumann Stability Analysis
  7.4 Method 4: Dufort-Frankel Method
  7.5 Method 5: Crank-Nicolson
    7.5.1 Von Neumann Stability Analysis

8 Boundary Conditions
  8.1 Numerical Treatment of Boundary Conditions
  8.2 Extrapolating Boundary Conditions
  8.3 Boundary Conditions for Parabolic Problems
    8.3.1 Example 1
    8.3.2 Example 2
  8.4 Boundary Conditions for Hyperbolic Problems
  8.5 Solved Problem: The 1-D Euler Equations
    8.5.1 A Particular Case

9 Two Dimensional Problems
  9.1 Two Dimensional Scalar Problems
  9.2 Two Dimensional Systems
  9.3 Operator Splitting
    9.3.1 Implementation
  9.4 Alternating Directions Implicit (ADI) Method
  9.5 Solved Problem: The 2-D Euler Equations
    9.5.1 A Particular Case
Chapter 1

Introduction

1.1 Introductory Concepts

When educating a child in the field of mathematics, it is common to proceed in the following order: first the integers, then fractions, then decimals, then rationals, then irrationals, and finally we arrive at the concept of a real number. In other words, we start from the discrete concept and move toward the continuous concept. In numerical analysis, however, the approach is exactly the opposite. We take a continuous differential equation that describes a physical process and replace it with an appropriate discrete analogue. We then solve this analogue and accept our result as a representation of the solution of the original (and continuous) equation. The challenge that we face is in developing an analogue that can indeed approximate a given equation, and to do so with an acceptable degree of accuracy.
To illustrate this concept, consider the cubic function x^3 − x^2 + 1. We can calculate values of this function on the domain −10 ≤ x ≤ 10 with increments of 0.5, and then represent the result graphically as shown in Figure 1.1. The representative curve is obtained by connecting these discrete points by straight lines. Naturally, in order to obtain a smoother curve, we may take smaller increments along the x axis before we connect the points to create the representative curve, as illustrated in Figure 1.2. In both cases, however, we have approximated the continuous function x^3 − x^2 + 1 by a set of discrete points. This is what we do when we obtain the numerical solution of a continuous equation.
Figure 1.1: Pointwise representation of the curve x^3 − x^2 + 1 on the domain −10 ≤ x ≤ 10 with increments of 0.5 along the x axis.

Figure 1.2: Pointwise representation of the curve x^3 − x^2 + 1 on the domain −10 ≤ x ≤ 10 with increments of 0.1 along the x axis.

The major challenge to be faced is the development of algorithms that are both efficient and easy to implement. The availability of computing power is also a concern, and for that reason, it is important to be able to estimate the computational cost of any potential numerical method before it is selected. Other important considerations include the domain of the solution, in terms of the number of space dimensions required,
the time period in question, and the boundary and initial conditions for the problem.

1.2 Partial Differential Equations

Partial differential equations (PDEs) are often used to describe physical processes in science and engineering. Some problems may be represented by a single PDE of a given order, or by a system of first order PDEs. The most common PDEs associated with engineering systems are second order. The classification of a single second order PDE in two independent variables x and t, illustrated by equation (1.2.1)

Auxx + 2Buxt + Cutt + Dut + Eux + F = 0,   (1.2.1)

is as follows:

If B^2 − AC > 0, then the PDE (1.2.1) is hyperbolic, and has two real unique characteristics. An example of this is the one dimensional wave equation utt = uxx.

If B^2 − AC = 0, then the PDE (1.2.1) is parabolic, and has one real unique characteristic. An example of this is the one dimensional heat equation ut = uxx.

If B^2 − AC < 0, then the PDE (1.2.1) is elliptic, with two unique complex conjugate characteristics. An example of this is the two dimensional Laplace equation uxx + uyy = 0.

Hyperbolic and parabolic PDEs are time-dependent, and these are the major focus of our attention in this text. Elliptic equations may be solved by a variety of methods, and a brief introduction to the solution of these equations is provided in the last chapter.

1.3 Solution Methods


There are three major classes of solution methods:

1. Finite Differences: Here, we divide the x axis into discrete points and approximate the solution at these points. For time-dependent problems, the solution at each point is marched forward in time to obtain the solution over the time interval of interest.

2. Finite Elements: Here, we approximate the solution via sum functions, each of which is non-zero over a small region. The PDE is expressed in its variational form, and the region of interest is divided into sub-regions called elements. With two spatial dimensions, these elements are usually in the form of triangles or quadrilaterals. The advantage of the finite element method is that any given region can be subdivided into such elements very accurately, once the size of the elements is adapted to suit the geometry of the region.

3. Spectral Methods: Here, the general idea is to write the solution of the differential equation as a sum of global basis functions (like Fourier series) and then to choose the coefficients in the sum in order to satisfy the differential equation as well as possible. These methods are widely used for problems with periodic solutions.

The focus of our attention in this course is the solution of PDEs via the finite difference method. Each method has its advantages and disadvantages, but the method of finite differences is the simplest way to solve a PDE numerically.

1.4 Finite Difference Approximations

Our goal is to approximate a given PDE by a suitable algebraic equation or set of equations. In so doing, we replace a continuous differential equation, with its corresponding infinite-dimensional solution space, by a finite set of algebraic equations with a finite-dimensional solution space. In order to achieve this, the following steps are taken:

1. First we identify a finite number of discrete points within the given domain. These points are called nodes. This step is referred to as discretization.

2. We then replace the derivatives in the PDE with a discrete difference approximation. This allows us to create a set of algebraic equations that can be utilized to evaluate discrete nodal values. This is called the approximation step.

3. Finally, we solve the set of algebraic equations to obtain a discrete approximation to the solution of the original PDE. This is known as the solution step.

There are a couple of pertinent questions that need to be answered to assess the quality of the approximate solution. The first question that arises is how we can be certain that our result is comparable with the actual solution to the PDE, as we often will not have access to an exact solution to compare it with. If we did possess an actual solution, there would be no need to find a numerical approximation. Numerical methods are used precisely to obtain solutions to problems that cannot be solved analytically. A second question that needs to be addressed is how we can tell whether one given numerical approximation is better than another obtained by a different method.

We illustrate this point as follows. Let u be the true solution to a given PDE, and let U be the approximate solution we obtain via a given finite difference method. If ||u − U|| < ε, where || · || is a valid norm (the norm of a mathematical object is a quantity that in some sense describes the length, size or extent of that object) and ε is sufficiently small, it can be said that U is a good approximation, and the method we have used is a good method. The question we wish to answer is how we can determine this without actually knowing u. We also need to determine how "good" is the "goodness" of our numerical method. The answer to this lies in the analysis of each of the three steps: discretization, approximation and solution.

1.5 Approximation of a Derivative by Finite Differences
What makes a PDE a differential equation is the presence of derivatives of the dependent variable with respect to the independent variables. A given derivative does not have a unique finite difference approximation, and we need to pick the one that best suits the differential equation we are trying to solve. We can illustrate this point by looking at the simple first derivative. A first derivative can be conceptualized as a simple gradient function.

Let us now focus on three equally spaced consecutive points (xi−1, ui−1), (xi, ui) and (xi+1, ui+1) for the curve shown in Figure 1.3. We will refer to the space between points or nodes on the x axis as Δx. We can approximate the gradient du/dx at the point (xi, ui) in several ways, but we will focus on the three simplest ways. First, we may utilize

Figure 1.3: Approximation of the derivative at the point (xi, ui) using finite differences

what we will refer to henceforth as the backward difference approximation to the first

derivative, which corresponds to the slope of line a in Figure 1.3:

du/dx (xi) ≈ (ui − ui−1) / (xi − xi−1).   (1.5.1)

Next, we have the so-called forward difference approximation to the first derivative, which corresponds to the slope of line b in Figure 1.3:

du/dx (xi) ≈ (ui+1 − ui) / (xi+1 − xi).   (1.5.2)

Thirdly, we have the central difference approximation to the first derivative, corresponding to the slope of line c shown in Figure 1.3:

du/dx (xi) ≈ (ui+1 − ui−1) / (xi+1 − xi−1).   (1.5.3)

Clearly for the given example, the central difference approximation is better than the forward and backward difference approximations for the calculation of the gradient at the point (xi, ui). The approximation would also be much better if Δx were smaller, i.e., if the nodes on the x axis were closer together. Finite difference techniques are approximations because the derivative at any given point is estimated by a difference quotient over a small interval. In other words, we represent the derivative ∂u/∂x by Δu/Δx for small Δx.
We will now show that the theory behind the finite difference approximation of derivatives of u(x) is essentially that of the Taylor series expansion of u(x). For a function u(x) that has derivatives that are single-valued, continuous and finite, by Taylor's theorem

u(x + h) = u(x) + h u'(x) + (h^2/2!) u''(x) + (h^3/3!) u'''(x) + ...   (1.5.4)

for small h. Taking the first two terms, we have

u(x + h) = u(x) + h u'(x) + O(h^2),

where O(h^2) represents terms containing second and higher powers of h. Given that h is small, these O(h^2) terms may be ignored to give

u'(x) ≈ (u(x + h) − u(x)) / h.   (1.5.5)

Now consider Figure 1.4, which is clearly analogous to Figure 1.3. Clearly the approximation (1.5.5) is the same as the forward difference approximation to the first derivative (1.5.2).

Figure 1.4: Approximation of the derivative at a given point (x, u). The distance between adjacent points on the x grid is h.

In a similar way,

u(x − h) = u(x) − h u'(x) + (h^2/2!) u''(x) − (h^3/3!) u'''(x) + ...   (1.5.6)

for small h. We may therefore say that

u(x − h) = u(x) − h u'(x) + O(h^2).

Given that h is small, we may again ignore the terms O(h^2) to get

u'(x) ≈ (u(x) − u(x − h)) / h,   (1.5.7)

which is the same as the backward difference approximation for the first derivative (1.5.1). Finally, if we add (1.5.5) to (1.5.7), we get

2u'(x) ≈ (u(x + h) − u(x)) / h + (u(x) − u(x − h)) / h = (u(x + h) − u(x − h)) / h,

which gives

u'(x) ≈ (u(x + h) − u(x − h)) / (2h).

This is the same as the central difference approximation for the first derivative (1.5.3).

As we have established the analogy between the Taylor expansion and the approximation of a derivative at a point, we can use this to approximate higher derivatives as well. Adding (1.5.4) and (1.5.6), we obtain

u(x + h) + u(x − h) = 2u(x) + h^2 u''(x) + O(h^4),

where O(h^4) represents terms containing fourth and higher powers of h. For small h, O(h^4) is negligible, and we may write

u''(x) ≈ (u(x + h) − 2u(x) + u(x − h)) / h^2.   (1.5.8)

This is referred to as the central difference approximation to the second derivative.

The accuracy of these derivative approximations stems from the fact that we have ignored terms that are multiplied by higher powers of h. Clearly if terms of O(h^2) are ignored, then the approximation we get is worse than if terms of O(h^4) are ignored. This allows us to make a statement on the accuracy of the finite difference approximation chosen for a given problem.
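As a quick illustration of the three first derivative formulas, the following minimal Matlab sketch (the test function u(x) = sin(x), the point x = 1 and the step h = 0.1 are illustrative choices, not part of the discussion above) compares them against the exact derivative cos(x):

% Forward, backward and central difference approximations to u'(x)
u  = @(x) sin(x);          % test function with known derivative cos(x)
x  = 1;  h = 0.1;
exact    = cos(x);
forward  = (u(x+h) - u(x)) / h;        % (1.5.2), first order
backward = (u(x) - u(x-h)) / h;        % (1.5.1), first order
central  = (u(x+h) - u(x-h)) / (2*h);  % (1.5.3), second order
fprintf('forward error:  %.2e\n', abs(forward  - exact));
fprintf('backward error: %.2e\n', abs(backward - exact));
fprintf('central error:  %.2e\n', abs(central  - exact));

The central difference error is smaller by roughly a factor of h, in line with the O(h^2) versus O(h) behaviour discussed above.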

1.6 Hyperbolic Problems

Hyperbolic PDEs govern propagation problems in which the solution u(x, t) in the domain of interest (x, t) is marched forward in time from its initial state u(x, 0) = u0(x). This propagation is carried out in accordance with the imposed boundary conditions. The simplest hyperbolic equation is the so-called one-way advection equation, also known as the "baby-wave" equation

∂u/∂t + a ∂u/∂x = ut + a ux = 0.   (1.6.1)

This is a hyperbolic equation that only has one set of characteristics, hence the terminology "one-way". Along each characteristic, the value of u is constant with respect to time, meaning that du/dt = 0. Since we know that

du/dt = ∂u/∂t + (∂u/∂x)(∂x/∂t) = 0,

the characteristic of the one-way advection equation corresponds to the solution of

dx/dt = a(x, t).

Let us illustrate this with a suitable example. Consider the baby-wave equation

ut = ux   (1.6.2)

with initial condition u(x, 0) = f(x). We can show that the solution of this problem is

u(x, t) = f(x + t),

meaning that the wave is travelling along the x axis to the left with a finite speed of propagation. Suppose further that f(x) is non-zero only in a small neighbourhood of x = 0. This means that for any large x = a where a > 0, u(x, t) is zero until t ≈ a. Hence there is a finite time for the disturbance at x = 0 to reach x = a, and the propagation speed of the disturbance is 1. Consider t = −x + x0 where x0 is a constant. Then

dt/dx = −1, so that dt = −dx.

Now

du = (∂u/∂t) dt + (∂u/∂x) dx = (∂u/∂t)(−dx) + (∂u/∂x) dx = (−∂u/∂t + ∂u/∂x) dx.

Since we know that ut = ux, it follows that

du = (−∂u/∂x + ∂u/∂x) dx = 0,

meaning that u does not change along this line. Hence the baby-wave equation simply propagates the initial condition f(x0) along the line

t = −x + x0,   (1.6.3)

and (1.6.3) is referred to as the characteristic curve of the equation (1.6.2).


To illustrate this, let us take the solution to the baby-wave equation to be

u(x, t) = e^{ikx} e^{λt}.

By substituting this solution into (1.6.2), we get

λ e^{ikx} e^{λt} = ik e^{ikx} e^{λt}, so that λ = ik.

It follows that

u(x, t) = e^{ikx} e^{ikt} = e^{ik(x+t)}.   (1.6.4)

This is a travelling wave solution with an amplitude that does not decay with time. Here, k represents the wavenumber (the spatial frequency of the wave, with units of length^−1), and the corresponding wavelength is 2π/k. Note that the wavelength is inversely proportional to the wavenumber.

1.7 Parabolic Problems

Parabolic problems are characterized by dissipative behaviour due to loss of energy to friction. Consider the simplest parabolic equation: the one-dimensional heat equation

ut = uxx,   u(x, 0) = e^{ikx}.   (1.7.1)

We can solve this equation explicitly by taking

u(x, t) = e^{ikx} e^{λt}.

Substituting this into (1.7.1), we get

λ e^{ikx} e^{λt} = −k^2 e^{ikx} e^{λt}, so that λ = −k^2.

The solution is therefore

u(x, t) = e^{ikx} e^{−k^2 t}.   (1.7.2)

It follows that as t → ∞, the solution u(x, t) → 0 at a rate proportional to k^2. This is indicative of the dissipative nature of the equation, as the initial data "fizzles out" with increasing time. We note that the dissipation increases with wavenumber k, meaning that short wavelengths are strongly damped, while long wavelengths are weakly damped. It follows that diffusion is efficient in smoothing out small scale disturbances (with large k values), but not large scale ones (with small k).

We can make a comparison between (1.6.4) and (1.7.2). It can be proven that all odd spatial derivatives (such as ux, uxxx, etc.) are non-dissipative advective terms characterizing the propagation of disturbances, while all even spatial derivatives (such as uxx, uxxxx, etc.) are dissipative terms. We should note however that the sign in front of the even spatial derivative is very important: if the wrong sign is used, we have an ill-posed problem with unrealistic explosive growth instabilities (which is not physically possible). This will be explained in greater detail in the chapter on well-posedness.

Chapter 2

Well-posedness

2.1 Concept

A problem is well-posed (Hadamard, 1923) if it has a unique solution that depends continuously on the boundary and/or initial data. If a problem is not well-posed, it is said to be ill-posed. We can only solve a problem numerically if it is well-posed. To illustrate this concept, let us consider the following hyperbolic problem for a vector function u(x, t) with −∞ < x < ∞, t ≥ 0:

ut = A ux,   u(x, 0) = g(x).

This problem is well-posed if for a given norm || · || there exist constants K and α such that for all t > 0

||u(x, t)|| ≤ K exp(αt) ||u(x, 0)|| = K exp(αt) ||g(x)||,   (2.1.1)

where K and α are both independent of the initial data. It is clear from (2.1.1) that the rate of growth of the solution of a well-posed problem should be controlled and independent of the initial data g(x).
The more commonly used functional norms are:

The 2-norm, defined as ||g(x)||_2 = ( ∫ from −∞ to ∞ of |g(x)|^2 dx )^{1/2}.

The 1-norm, defined as ||g(x)||_1 = ∫ from −∞ to ∞ of |g(x)| dx.

The ∞-norm, defined as ||g(x)||_∞ = max over −∞ < x < ∞ of |g(x)|.
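On a computational grid these norms are replaced by discrete analogues. A minimal Matlab sketch (the grid spacing, the truncated domain and the test function g(x) = exp(−x^2) are illustrative assumptions, not part of the definitions above):

% Discrete analogues of the 2-norm, 1-norm and infinity-norm
h = 0.01;                        % grid spacing
x = -10:h:10;                    % truncated domain standing in for (-inf, inf)
g = exp(-x.^2);                  % rapidly decaying sample function
norm2   = sqrt(sum(g.^2) * h);   % approximates (integral of |g|^2 dx)^(1/2)
norm1   = sum(abs(g)) * h;       % approximates integral of |g| dx
normInf = max(abs(g));           % approximates the maximum of |g|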

2.2 Worked Examples
We will illustrate the concept of a well-posed problem with the following examples:

Example 2.2.1 Consider the problem

ut = A ux,   u(x, 0) = g(x),   (2.2.1)

where

A = ( 0   1
     −1   0 ).

The eigenvalues of matrix A are λ = ±i, so the system is not hyperbolic (hyperbolic systems have real eigenvalues). Now if the initial data is

u(x, 0) = g(x) = e^{ikx} z

for a constant vector z, then the solutions are of the form

u(x, t) = e^{λt} e^{ikx} z.   (2.2.2)

Substituting (2.2.2) into (2.2.1) gives

λ e^{λt} e^{ikx} z = A ik e^{λt} e^{ikx} z,

which means that

λ z = ik A z.   (2.2.3)

Now (i, 1)^T is the eigenvector for the eigenvalue −i. If we take z = (i, 1)^T, we get

A z = −i z.   (2.2.4)

Substituting (2.2.4) into (2.2.3), we get

λ z = ik (−i z) = k z,

which means that λ = k, and

u(x, t) = e^{kt} e^{ikx} z.   (2.2.5)

From the solution (2.2.5), we see that the solution grows with time like e^{kt} for any k, however large it is. Hence the problem (2.2.1) is ill-posed.

Remark 2.2.1 Note that in the above example, if the eigenvector corresponding to the eigenvalue i were used, we would have obtained

A z = i z

and

λ z = ik (i z) = −k z,

meaning that λ = −k. This would have given the solution

u(x, t) = e^{−kt} e^{ikx} z.

This solution decays with time depending on the size of k, and is not of concern. However, since the other eigenvalue gives uncontrolled growth, the overall problem is still ill-posed.

Example 2.2.2 Consider the 2-way wave equation

ut = A ux,   u(x, 0) = g(x),   (2.2.6)

where

A = ( 0   1
      1   0 ).

The eigenvalues of matrix A are λ+ = 1, λ− = −1, so it is a hyperbolic system (as the eigenvalues are both real). The eigenvector corresponding to λ+ = 1 is z+ = (1, 1)^T and the eigenvector for λ− = −1 is z− = (1, −1)^T. Now if the initial data is

u(x, 0) = g(x) = e^{ikx} z,

then the solutions are of the form

u(x, t) = e^{λt} e^{ikx} z.   (2.2.7)

Substituting (2.2.7) into (2.2.6) gives

λ e^{λt} e^{ikx} z = A ik e^{λt} e^{ikx} z,

which means that

λ z = ik A z.   (2.2.8)

Now taking z+ = (1, 1)^T and λ+ = 1, we get

A z+ = z+.   (2.2.9)

Substituting (2.2.9) into (2.2.8), we get

λ+ z+ = ik (z+),

which means that λ+ = ik, and the solution is

u(x, t) = e^{ikt} e^{ikx} z+ = e^{ik(x+t)} z+.   (2.2.10)

In a similar way, if we take z− = (1, −1)^T and λ− = −1, we can show that the solution is

u(x, t) = e^{−ikt} e^{ikx} z− = e^{ik(x−t)} z−.   (2.2.11)

Both solutions (2.2.10) and (2.2.11) are bounded in time, so the problem is well-posed.
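The hyperbolicity test used in these two examples is easy to reproduce numerically. A minimal Matlab check (the matrices are those of Examples 2.2.1 and 2.2.2):

% Real eigenvalues => hyperbolic system
A1 = [0 1; -1 0];   % Example 2.2.1
A2 = [0 1;  1 0];   % Example 2.2.2
eig(A1)             % returns +i and -i : not hyperbolic
eig(A2)             % returns +1 and -1 : hyperbolic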

Example 2.2.3 Consider the one dimensional heat equation for 0 ≤ x ≤ L, t > 0

ut = uxx,   u(x, 0) = e^{ikx}.   (2.2.12)

Again we seek a solution of the form

u(x, t) = e^{λt} e^{ikx}.   (2.2.13)

Substituting (2.2.13) into (2.2.12), we have

λ e^{λt} e^{ikx} = −k^2 e^{λt} e^{ikx},

meaning that λ = −k^2. The solution is therefore

u(x, t) = e^{−k^2 t} e^{ikx}.

There is no exponential growth here, and the problem (2.2.12) is well-posed.

Example 2.2.4 Consider the one dimensional "backwards" heat equation for 0 ≤ x ≤ L, t > 0

ut = −uxx,   u(x, 0) = e^{ikx}.   (2.2.14)

Again we seek a solution of the form

u(x, t) = e^{λt} e^{ikx}.   (2.2.15)

Substituting (2.2.15) into (2.2.14), we have

λ e^{λt} e^{ikx} = k^2 e^{λt} e^{ikx},

meaning that λ = k^2. The solution is therefore

u(x, t) = e^{k^2 t} e^{ikx}.

There is unbounded growth here as the solution blows up as t → ∞, so this problem (2.2.14) is ill-posed.

Remark 2.2.2 The backward heat equation is ill-posed because it is non-physical. It can be interpreted to mean that heat travels from the colder regions to the hotter regions, which we know never happens.

It can be proven that all constant coefficient hyperbolic problems are well-posed. However, variable coefficient hyperbolic problems are well-posed only locally in time. Also, there are no general results for non-linear problems. The method used in the previous examples is commonly referred to as Fourier or Von Neumann analysis. When the spatial derivative is of even order, the sign in front of it is very important. For example, the problems ut = uxx and ut = −uxxxx are both well-posed, but the problems ut = −uxx and ut = uxxxx are both ill-posed. However, for spatial derivatives of odd order, the sign in front does not matter. For example, both ut = ux and ut = −ux are well-posed. We can explain this physically because the sign in front of the odd ordered derivative in space only determines the direction of the travelling wave (to the left or to the right).

When there is more than one spatial derivative on the right hand side of the equation, well-posedness depends on the highest order spatial derivative present. We also classify the equation as hyperbolic or parabolic depending on the highest spatial derivative. For example, ut = uxx + ux is considered to be parabolic, as the highest derivative is uxx. We can perform Fourier analysis on this example as follows:

Example 2.2.5 Consider

ut = uxx + ux.   (2.2.16)

Take the solution to be of the form

u(x, t) = e^{λt} e^{ikx}.   (2.2.17)

Substituting (2.2.17) into (2.2.16), we have

λ e^{λt} e^{ikx} = −k^2 e^{λt} e^{ikx} + ik e^{λt} e^{ikx},

meaning that λ = −k^2 + ik. The solution is therefore

u(x, t) = e^{(−k^2 + ik)t} e^{ikx}.

For large k, the behaviour of the solution is dominated by the term −k^2. Hence the problem (2.2.16) is well-posed.

Remark 2.2.3 This problem is well-posed even if the highest spatial derivative has a very small coefficient. If we have ut = ε uxx + ux where 0 < ε ≪ 1, and we repeat the Fourier analysis, we would obtain the solution

u(x, t) = e^{(−εk^2 + ik)t} e^{ikx}.

Even though ε is very small, for large enough values of k we will still have damped growth, and it follows that the highest spatial derivative still dominates the behaviour (unless we are only interested in large wavelength disturbances where k is small).

Chapter 3

Spatial Derivatives

3.1 Finite Difference Grids

In order to solve a PDE, the time derivative is computed by evaluating the discrete spatial derivatives. This solution is then marched forward in time. Consider the general Cauchy problem (also referred to as the one-way advective equation or baby-wave equation)

ut = ux,   u(x, 0) = g(x),   −∞ < x < ∞.

(A Cauchy problem is a pure initial value problem with no boundaries, i.e. an infinite domain. In one dimensional space x, this means that −∞ < x < ∞. In order to solve such a problem numerically, it is necessary to assume periodicity over a representative finite domain.)

The simplest algorithm for its numerical solution via the finite difference technique is as follows:

1. Discretize the x axis by placing N equally spaced points along it, with a distance between each point of Δx. Each point or node on the spatial grid is represented as xi where i = 1 : N.

2. Calculate a discrete approximation to ux at each point xi at the initial time t = t1, starting from the initial data u(xi, t1) = g(xi).

3. Now we divide the time period we are interested in into m time steps (t1, t2, t3, ..., tm) with a time step-size Δt. We now use the data at time t1, u(xi, t1) = g(xi), to compute an estimate for u(xi, t2), which is the solution for the problem at the next time level, and this is what we call "marching forward in time". This is achieved via a single or multistep ordinary differential equation algorithm - such methods are covered in preliminary courses you may have taken in numerical analysis. (*Note: If this is your first course in numerical analysis, don't worry, we will revise the basic theory behind these methods in this course.)

4. Repeat this process until we have an estimate for u at each time step along all the nodal points.
The division of the spatial domain into nodes must be done for each axis in our spatial domain. For a one dimensional domain, say in the x direction, we only need to divide the x axis into grid points xi for i = 1 : N (see Figure 3.1).

Figure 3.1: One dimensional grid along the x axis. Nodes here are equally spaced for computational simplicity with the finite difference method.

For a two dimensional domain, the solution domain is covered by a grid of lines. The intersections of these gridlines are considered to be the nodal points at which the finite difference solution to the PDE is to be obtained (see Figure 3.2).

Figure 3.2: Two-dimensional solution domain D(x, y) is divided into a finite difference grid formed by intersecting gridlines on the x and y axes. Node Dij is created from the ith gridline on the x axis intersecting with the jth gridline on the y axis in D(x, y).

This idea can be extended to three dimensions when necessary. The value of the spatial derivative ux must be approximated at each node in the domain of interest at each time level. Clearly we need to have a method to estimate the spatial derivative ux. We turn our attention to this in the next section.

3.2 Central Difference Approximation for ux

Let us introduce a regular spatial grid along the x axis with N equally-spaced grid points. The space-step size (distance between any two adjacent grid points) is denoted as h. We shall adopt the following notation for simplicity:

u(xj) = uj.   (3.2.1)

Our goal is to find the solution for u at all points xj.

Consider the Taylor series expansions

u(x + h) = u + h ux + (h^2/2!) uxx + (h^3/3!) uxxx + (h^4/4!) uxxxx + O(h^5),

u(x − h) = u − h ux + (h^2/2!) uxx − (h^3/3!) uxxx + (h^4/4!) uxxxx + O(h^5),

where O(h^5) can be interpreted from the fact that |O(h^5)| ≤ C h^5 for some constant C. As shown previously, by subtracting these two expansions, we get

u(x + h) − u(x − h) = 2h ux + (h^3/3) uxxx + O(h^5),

from which we may obtain the central difference approximation for the first derivative

ux = (u(x + h) − u(x − h)) / (2h) − (h^2/6) uxxx + O(h^4).

Expressing this in terms of our general notation (3.2.1), we have

ux(xj) = (uj+1 − uj−1) / (2h) − (h^2/6) uxxx(xj) + O(h^4).

A numerical approximation for the first derivative at the point xj is therefore

ux(xj) ≈ (uj+1 − uj−1) / (2h) = D0 uj.   (3.2.2)

This is referred to as the central difference approximation to the first derivative. The leading order error term for D0 uj is −(h^2/6) uxxx, since if h is small, h^2 is much bigger than the subsequent terms of O(h^4). It follows that D0 uj is referred to as a second order approximation for ux, since the leading order error term is proportional to h^2. Note that the sign before the leading order error term is of no consequence. The term −(h^2/6) uxxx is referred to as the truncation error.
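The claimed second order accuracy is easy to check numerically. A minimal Matlab sketch (the test function u(x) = exp(sin x) and the evaluation point are illustrative choices): each halving of h should reduce the error of D0 by a factor of about four.

% Convergence test for D0 u = (u(x+h) - u(x-h)) / (2h)
u  = @(x) exp(sin(x));
up = @(x) cos(x) .* exp(sin(x));   % exact derivative by the chain rule
x  = 0.5;
for h = [0.1, 0.05, 0.025]
    D0 = (u(x+h) - u(x-h)) / (2*h);
    fprintf('h = %.3f   error = %.3e\n', h, abs(D0 - up(x)));
end
% The error drops by ~4 per halving of h, confirming O(h^2) accuracy.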

3.2.1 Fourier Analysis

When we wish to obtain a more qualitative way to analyze the error for a given method, we carry out a Fourier analysis. We will therefore use Fourier analysis to analyze the error of D0 uj. If u(x) = e^{ikx}, then

ux = ik e^{ikx} = ik u.   (3.2.3)

It follows that

(u(x + h) − u(x − h)) / (2h) = (e^{ik(x+h)} − e^{ik(x−h)}) / (2h) = e^{ikx} (e^{ikh} − e^{−ikh}) / (2h) = (i e^{ikx} / h) (e^{ikh} − e^{−ikh}) / (2i) = (iu/h) sin(kh),

and therefore

(u(x + h) − u(x − h)) / (2h) = (sin(kh) / (kh)) (ik u).   (3.2.4)

Now recall that

lim as z → 0 of (sin z)/z = 1.

It follows that for fixed k, if h → 0, then sin(kh)/(kh) → 1. Using this in the above, we see that ux ≈ ik u, and we recover (3.2.3) from (3.2.4). Clearly though, if kh is not small, we have a problem as our approximation breaks down. For this reason, we use fine grids (i.e. small h between nodal points) whenever using finite difference methods, not just to produce nicer graphical results, but also to ensure that no matter what k is, kh remains relatively small.

3.2.2 Points Per Wavelength

The error in Fourier space Er(k, kh) for the previous method is calculated from (3.2.3) and (3.2.4) as

Er(k, kh) = ik − ik (sin(kh)/(kh)) = ik (1 − sin(kh)/(kh)) = ik [e2(kh)],

where

e2(kh) = 1 − sin(kh)/(kh)

is the relative error term representing the error per wavenumber k, and ik is referred to as the accumulative error term. As the name suggests, the accumulative error is the error that accumulates as the wavenumber k increases. We note the following:

For fixed k, e2(kh) → 0 as h → 0, which means that the method is consistent. A consistent numerical method is one that generates a numerical solution that converges to the exact solution as the step-size h → 0.

For z → 0, we can say that

(sin z)/z = 1 + O(z^2).

It follows that

e2(z) = 1 − (1 + O(z^2)),

and therefore e2(kh) = O(z^2). (Note that the sign in front of O(z^2) is not relevant, as we are just interested in the magnitude of the error.) Hence for small kh, we may say that

e2(kh) = O((kh)^2).

It follows that the central difference approximation D0 uj for the first derivative ux is second order accurate.

kh is a dimensionless number, as k is a wavenumber with units (length)^−1, and h has units of length. It follows that e2(kh) is also dimensionless. Given that the wavelength of a Fourier mode is 2π/|k| (we take the modulus of the wavenumber k so that the wavelength is always positive), if N is the number of grid points per wavelength, then for step-size h > 0

N = (2π/|k|) / h = 2π / |kh|,

and

|kh| = 2π / N.

It follows that the relative error for a given wavenumber k is inversely proportional to the number of points per wavelength N. Also, the relative error depends only on N.

Since

(sin z)/z = 1 − z^2/6 + O(z^4),

then

e2(kh) = 1 − (1 − (kh)^2/6 + O((kh)^4)).

The relative error for the first derivative central difference approximation D0 uj is therefore

e2(kh) ≈ (kh)^2/6 = (1/6)(2π/N)^2 = 2π^2/(3N^2), so that N ≈ sqrt( 2π^2 / (3 e2(kh)) ).   (3.2.5)

For example, if we want the relative error of our method to be approximately 0.001, then

N ≈ sqrt( 2π^2 / (3 × 0.001) ) ≈ 81.1155735.

Therefore, we need to have at least N = 82 points per wavelength to have a relative error of 0.001, which corresponds to a very fine mesh. This method may not be ideal for solving for ux if a high level of accuracy is required, unless we have sufficient computing power at our disposal. It should be noted that in practice, it is not easy to estimate the characteristic wavelength for a given problem. In such cases, it becomes necessary to use one's own intuition about the problem to create a grid that is fine enough to achieve a sufficiently small relative error.

3.3 Higher Order Central Difference Approximation for ux

The previous approximation D0 uj was shown to be a second order accurate approximation for the first derivative ux:

ux ≈ D0 uj = (uj+1 − uj−1) / (2h).

As it is the only central difference method that involves the two nearest neighbours, it is referred to as a compact second order method. We can utilize Taylor series expansions in order to obtain higher order central difference methods as follows.

Recall that

ux(xj) = (uj+1 − uj−1) / (2h) − (h^2/6) uxxx(xj) + O(h^4).   (3.3.1)

Now if we utilize the second closest adjacent points to xj, i.e. xj±2, we will have a spacing of 4h between uj+2 and uj−2. Therefore

ux(xj) = (uj+2 − uj−2) / (4h) − ((2h)^2/6) uxxx(xj) + O(h^4).   (3.3.2)

Multiplying equation (3.3.1) by 4/3 and subtracting from it one third of equation (3.3.2), we obtain

ux(xj) = (4/3) (uj+1 − uj−1) / (2h) − (1/3) (uj+2 − uj−2) / (4h) + O(h^4),

which may be simplified to

ux(xj) = ( −(uj+2 − uj−2) + 8(uj+1 − uj−1) ) / (12h) + O(h^4).

Hence

ux(xj) ≈ ( −(uj+2 − uj−2) + 8(uj+1 − uj−1) ) / (12h)   (3.3.3)

is a compact fourth order central difference approximation for ux. The leading order error term (i.e. the truncation error) can be found to be (h^4/30) uxxxxx, which is why the method is fourth order. Although this method is much more accurate, it utilizes a 5 point stencil. We shall see that larger stencils require much more computational storage space, which limits the speed of implementation of the code. Also, we will see that larger stencils require more "start-up" information at the boundaries, which also compromises the accuracy of the overall result. In practice, the best methods are those that have the desired accuracy levels as well as the smallest possible stencils.
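The same numerical convergence test used for D0 can be applied to (3.3.3). A minimal Matlab sketch (again with an illustrative test function); each halving of h should now reduce the error by a factor of about sixteen:

% Convergence test for the fourth order formula (3.3.3)
u  = @(x) exp(sin(x));
up = @(x) cos(x) .* exp(sin(x));   % exact derivative
x  = 0.5;
for h = [0.1, 0.05, 0.025]
    D4 = (-(u(x+2*h) - u(x-2*h)) + 8*(u(x+h) - u(x-h))) / (12*h);
    fprintf('h = %.3f   error = %.3e\n', h, abs(D4 - up(x)));
end
% The error drops by ~16 per halving of h, confirming O(h^4) accuracy.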

3.3.1 Fourier Analysis

Let us carry out a standard Fourier analysis on the above method. Let us take u = e^{ikx}. It follows that ux = ik u. Using this in the approximation (3.3.3), we get

ux ≈ (4/3) (uj+1 − uj−1) / (2h) − (1/3) (uj+2 − uj−2) / (4h)
   = (4/3) (e^{ik(x+h)} − e^{ik(x−h)}) / (2h) − (1/3) (e^{ik(x+2h)} − e^{ik(x−2h)}) / (4h).

This can be expressed as

ux ≈ (4/3) ik e^{ikx} (e^{ikh} − e^{−ikh}) / (2i(kh)) − (1/3) ik e^{ikx} (e^{i2kh} − e^{−i2kh}) / (2i(2kh)),

which is

ux ≈ ik [ (4/3) sin(kh)/(kh) − (1/3) sin(2kh)/(2kh) ] u.   (3.3.4)

Now let us consider

f4(z) = (4/3) (sin z)/z − (1/3) (sin 2z)/(2z) = (8 sin z − sin 2z) / (6z).

Taking Taylor series expansions, we get

f4(z) = [ 8(z − z^3/6 + z^5/120 + O(z^7)) − (2z − 4z^3/3 + 4z^5/15 + O(z^7)) ] / (6z),

which can be simplified to

f4(z) = 1 − (1/30) z^4 + O(z^6).   (3.3.5)

Comparing (3.3.5) with (3.3.4), we may conclude that

ux ≈ ik [ 1 − (1/30)(kh)^4 + O((kh)^6) ] u.

The error in Fourier space Er is therefore

Er = ik [e4(kh)],

where

e4(kh) = (kh)^4/30 + O((kh)^6)

is the relative error term representing the error per wavenumber k, and the accumulative error term is ik. Clearly, the method is fourth order accurate. We also see that the method is consistent, as the relative error e4(kh) → 0 as kh → 0, so the approximate solution obtained converges to the exact solution for small grid sizes.

3.3.2 Points Per Wavelength

Since we know that e4(kh) ≈ (kh)^4/30, if we wish to have a relative error of 0.001, then

(kh)^4 = 30 × 0.001 = 0.03.

Using the formula obtained previously for the number of points per wavelength, N = 2π/|kh|, we have

N^4 = (2π)^4 / |kh|^4 = (2π)^4 / 0.03, so that N ≈ 15.0973093.

Recall that for a relative error of 0.001 previously with the second order compact central difference method, we required N = 82. Clearly we have greatly reduced the resolution requirements by using a more accurate method.

In general, if you know what the wavenumber k is, you can determine what the grid step-size h should be for the N of your choice. However, often we do not know what the wavelength is (and therefore we do not know k). Often what is done is an estimation of the oscillation of the solution (based on a rough numerical approximation performed on a randomly selected fine grid). From that oscillation, a characteristic wavelength may be obtained. An analysis may then be performed to estimate the size of h that is needed to limit the error of the chosen method.
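The two resolution estimates above can be packaged into a few lines of Matlab (a sketch based on the leading-order error formulas e2 ≈ (kh)^2/6 and e4 ≈ (kh)^4/30 derived in this chapter):

% Points per wavelength N needed for a target relative error
err = 0.001;
N2  = 2*pi / sqrt(6*err);        % second order method: ~81.1
N4  = 2*pi / (30*err)^(1/4);     % fourth order method: ~15.1
fprintf('2nd order: N >= %d\n', ceil(N2));   % prints 82
fprintf('4th order: N >= %d\n', ceil(N4));   % prints 16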

3.3.3 Aliasing

Not every value of kh can be chosen. Since we are looking at waves of the form e^{ikx}, the limits of kh are (for all possible wavenumbers k)

−π < kh ≤ π.

Now since we are looking at trigonometric functions on a grid with space-step h, we have xj = jh, so we are considering waves of the form e^{ikjh}. Also, as the trigonometric functions cos and sin are 2π-periodic, for any two given wavenumbers k1 and k2 on this grid with

k1 h = k2 h + 2π,

it follows that

e^{ik2 hj} = e^{ik1 hj}.

It is therefore possible that a high frequency wave (with large k) can be confused with a low frequency wave (with small k) on the grid. For example, if j = 1, k1 h = 2π and k2 h = 0, then

e^{2πi} = e^{i(0)} = 1.

This phenomenon where high frequency waves may look like low frequency waves is called aliasing.

This phenomenon may also occur when utilizing a grid that is too coarse (i.e. h is too large). In Figure 3.3, we see that with a fine grid, the numerical solution is seen to have one period, while with a coarser grid (larger step-size h), a completely different period may be deduced from the numerical values obtained.

Figure 3.3: Periodic graph obtained with a fine grid (nodal values marked with shaded circles) has a different period than that traced out by choosing a coarser grid (with nodal values marked with stars)
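Aliasing is easy to demonstrate on a computer. In the following minimal Matlab sketch (the grid and the two wavenumbers are illustrative choices), the waves sin(x) and sin(9x) take identical values on a grid with h = 2π/8, because the two wavenumbers differ by exactly 2π/h = 8:

% Aliasing: sin(x) and sin(9x) agree at every node x_j = j*h, h = 2*pi/8
h  = 2*pi/8;
xj = (0:7) * h;                  % one period of the coarse grid
max(abs(sin(xj) - sin(9*xj)))    % ~1e-15: identical up to rounding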

3.3.4 Compact Central Differences: Beyond 4th Order

A central difference approximation exists for any even order 2p by utilizing p neighbours on either side of the point xj (that is, xj±1, ..., xj±p). The precise weights may be derived by writing the Taylor series for uj±1, ..., uj±p, and taking linear combinations to eliminate all odd derivatives of order 2L + 1 for L = 1, ..., p − 1. The leading order error term is proportional to h^{2p} times the derivative of u of order 2p + 1. The difference approximation can also be obtained in Fourier space.

Exercise 3.3.1 Find the 6th order compact central difference approximation for ux. Utilize Fourier analysis to verify the order of the method, and determine the number of points per wavelength N that will be necessary to obtain a relative error of 0.001.

3.4 One-sided Difference Approximations

We have previously discussed the forward difference (also known as forward Euler) method (1.5.2)

ux(xj) ≈ (uj+1 − uj) / h = D+ uj

and the backward difference (also known as backward Euler) method (1.5.1)

ux(xj) ≈ (uj − uj−1) / h = D− uj.

Both methods are first order accurate. The forward Euler method is derived from Taylor series expansions as follows:

u(x + h) − u(x) = (u + h ux + (h^2/2!) uxx + O(h^3)) − u = h ux + (h^2/2!) uxx + O(h^3),

which gives

ux = (u(x + h) − u(x)) / h − (h/2) uxx + O(h^2).

This is

ux(xj) = (uj+1 − uj) / h − (h/2) uxx + O(h^2).

The method is

D+ uj = (uj+1 − uj) / h ≈ ux,

where the truncation error is −(h/2) uxx. This indicates that the method is first order.

In a similar way, the backward Euler method is derived from Taylor series expansions as follows:

u(x) − u(x − h) = u − (u − h ux + (h^2/2!) uxx + O(h^3)) = h ux − (h^2/2!) uxx + O(h^3),

which gives

ux = (u(x) − u(x − h)) / h + (h/2) uxx + O(h^2).

This is

ux(xj) = (uj − uj−1) / h + (h/2) uxx + O(h^2).

The method is

D− uj = (uj − uj−1) / h ≈ ux,

where the truncation error is (h/2) uxx, which indicates that the method is first order.
Let us now turn our attention to the approximation of the second derivative uxx. Since

u(x + h) = u + h ux + (h^2/2!) uxx + (h^3/3!) uxxx + (h^4/4!) uxxxx + O(h^5)

and

u(x − h) = u − h ux + (h^2/2!) uxx − (h^3/3!) uxxx + (h^4/4!) uxxxx + O(h^5),

then

u(x + h) + u(x − h) − 2u(x) = 2u + h^2 uxx + (h^4/12) uxxxx + O(h^6) − 2u.

This gives

uxx = (u(x + h) + u(x − h) − 2u(x)) / h^2 − (h^2/12) uxxxx + O(h^4),

which is

uxx(xj) = (uj+1 − 2uj + uj−1) / h^2 − (h^2/12) uxxxx + O(h^4).

The second derivative central difference approximation for uxx is therefore

uxx ≈ (uj+1 − 2uj + uj−1) / h^2,   (3.4.1)

where the truncation error is −(h^2/12) uxxxx, which means that this is a second order method. Note that

D+ D− uj = D+ ( (uj − uj−1)/h ) = ( (uj+1 − uj)/h − (uj − uj−1)/h ) / h = (uj+1 − 2uj + uj−1) / h^2

and

D− D+ uj = D− ( (uj+1 − uj)/h ) = ( (uj+1 − uj)/h − (uj − uj−1)/h ) / h = (uj+1 − 2uj + uj−1) / h^2.

Therefore the approximation (3.4.1) for uxx is often referred to as D+ D− uj or D− D+ uj.

Applying Fourier analysis to this, we take u = e^{ikx}. Therefore

uxx = −k^2 e^{ikx}.

Now for the approximation (3.4.1) for uxx, we have

D+ D− uj = (e^{ik(x+h)} − 2e^{ikx} + e^{ik(x−h)}) / h^2 = e^{ikx} (e^{ikh} + e^{−ikh} − 2) / h^2.

This is

D+ D− uj = e^{ikx} (2 cos(kh) − 2) / h^2 = −(2/(k^2 h^2)) (1 − cos(kh)) k^2 e^{ikx}.

Hence the error in Fourier space Er(k, kh) is

Er(k, kh) = −k^2 e^{ikx} + (2/(k^2 h^2)) (1 − cos(kh)) k^2 e^{ikx} = −k^2 e^{ikx} [ 1 − (2/(k^2 h^2)) (1 − cos(kh)) ] = −k^2 e^{ikx} [e2(kh)],

where the relative error (representing the error per wavenumber k) e2(kh) is

e2(kh) = 1 − (2/(k^2 h^2)) (1 − cos(kh)).

Now expanding cos(kh) for small kh using Taylor series expansions, we get

e2(kh) = 1 − (2/(kh)^2) [ 1 − ( 1 − (kh)^2/2! + (kh)^4/4! + O((kh)^6) ) ]
       = 1 − 2 ( 1/2 − (kh)^2/24 + O((kh)^4) ) = (kh)^2/12 + O((kh)^4).

For fixed k, e2(kh) → 0 as h → 0, which means that the method is consistent (recall that a consistent numerical method is one that generates a numerical solution that converges to the exact solution as the step-size h → 0).

The method D+ D− uj is referred to as the second derivative central difference approximation. Note that the relative error for the method D+ D− uj is e2(kh) ≈ (kh)^2/12, and this is smaller than the relative error for the first derivative central difference approximation - recall from (3.2.5) that e2(kh) ≈ (kh)^2/6 for D0 uj - although they both utilize the two nearest neighbours uj+1 and uj−1. This is expected, since D+ D− uj is more compact: it uses uj+1 and uj−1 (together with uj) to approximate the second derivative rather than the first derivative.
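As with the first derivative operators, the order of D+ D− is easy to verify numerically. A minimal Matlab sketch (with an illustrative test function whose second derivative is known exactly):

% Convergence test for D+D- u = (u(x+h) - 2u(x) + u(x-h)) / h^2
u   = @(x) exp(sin(x));
upp = @(x) (cos(x).^2 - sin(x)) .* exp(sin(x));   % exact second derivative
x   = 0.5;
for h = [0.1, 0.05, 0.025]
    DpDm = (u(x+h) - 2*u(x) + u(x-h)) / h^2;
    fprintf('h = %.3f   error = %.3e\n', h, abs(DpDm - upp(x)));
end
% The error drops by ~4 per halving of h, confirming O(h^2) accuracy.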

3.5 Temporal Errors

Now that we have discussed the approximation of spatial derivatives, we may switch our attention to the effect of marching the solution forward in time by approximating the temporal derivative. The same methods that we have developed for spatial derivatives can be used to discretize the temporal derivative. However, there are two serious considerations that must take precedence before we select a method for the numerical solution of any given PDE:

1. The use of any given spatial derivative approximation coupled with any given temporal derivative approximation does not necessarily result in a stable numerical method. An unstable numerical method yields a solution that grows exponentially in time, similar to what we would expect if we attempted to solve an ill-posed PDE (we get the "NaN" error in Matlab). This is a numerical artifact if the PDE that we are solving is well-posed.

2. The random choice of a numerical method may also result in a numerical solution that is nicely bounded (i.e. the method is stable) but partially or completely inaccurate. In other words, the solution generated is not an accurate depiction of the true solution. This is a much worse problem, as the error is not immediately obvious. There are two sources of inaccuracy for a given numerical method: dispersion and dissipation. Dispersion is the case where the computed solution has different phase velocities for different wavenumbers. Dissipation is the case where the computed solution loses energy and fizzles out (is damped) over time.

3.5.1 Analysis of Dispersive Errors

Before we focus on dispersive errors that may be generated by the numerical method chosen, let us first describe what is meant by a dispersive system. Consider a PDE with solution

u(x, t) = e^{iω(k)t} e^{ikx} z,

where k is the wavenumber and ω is the frequency (which is dependent on the wavenumber k). The group velocity dω/dk is the rate at which energy propagates in a wave-packet. In the space-time plane (x, t), the phase velocity of the wave with wavenumber k is ω/k. The system is said to be non-dispersive if dω/dk is constant. Otherwise, the system is said to be dispersive. Obviously, if a given PDE system is inherently non-dispersive, any dispersive behaviour that we find in the numerical solution of the system must have been introduced by the numerical method selected.

Example 3.5.1 Consider the baby-wave equation ut = ux. Assuming the solution is of the form u(x, t) = e^{iωt} e^{ikx}, and substituting into the baby-wave equation, we get

iω e^{iωt} e^{ikx} = ik e^{iωt} e^{ikx},

which means that ω = k. Hence dω/dk = 1 = constant, and we may conclude that the baby-wave equation is non-dispersive. Therefore, if we observe any dispersive behaviour at all in the numerical solution of this equation, we know that the error comes from the numerical method that was selected. For the system

ut = ux,   u(x, 0) = sin x,   −∞ < x < ∞,   t ≥ 0,

we would expect the numerical solution to indicate a sine wave that is translated to the left over time without any change of shape. If we solve this system numerically and begin to see spurious oscillations in the translated solution, this would indicate that the numerical method chosen is dispersive.

Example 3.5.2 Consider the equation

ut = ux + uxxx.

Assuming the solution is of the form u(x, t) = e^{iωt} e^{ikx}, and substituting into the equation, we get

iω e^{iωt} e^{ikx} = ik e^{iωt} e^{ikx} + (ik)^3 e^{iωt} e^{ikx},

which means that ω = k − k^3. Hence dω/dk = 1 − 3k^2 ≠ constant, and we may conclude that the equation is inherently dispersive. Therefore, we expect to see dispersive behaviour in the numerical solution of this equation, and we cannot make any conclusions about the dispersive nature of any numerical method utilized.

To continue our discussion on numerical dispersive error, let us consider the baby-wave equation

ut = ux,   u(x, 0) = e^{ikx},   −∞ < x < ∞,   t ≥ 0.   (3.5.1)

We wish to obtain a numerical approximation for the exact solution u(x, t). Let us utilize a semi-discrete approximation

vt = D0 v,   v(x, 0) = e^{ikx},   (3.5.2)

where v(x, t) is the numerical approximate solution we seek (obtained by using the central difference operator D0 to discretize the spatial derivative). The exact solution can be shown to be of the form

u(x, t) = a(t) e^{ikx}.   (3.5.3)

Similarly, we can show that the approximate solution is of the form

v(x, t) = b(t) e^{ikx}.   (3.5.4)

Substituting (3.5.3) into (3.5.1), we have (after dropping the exponentials):

da/dt = ik a,   a(0) = 1.

Solving this, we obtain

a(t) = e^{ikt} = e^{iωu t},

where ωu = k. The exact solution u(x, t) is non-dispersive since

dωu/dk = 1 = constant.

Next, substituting (3.5.4) into (3.5.2), we get

db/dt = ik (sin(kh)/(kh)) b,   b(0) = 1,

which gives

b(t) = e^{ik (sin(kh)/(kh)) t} = e^{iωv t},

where ωv = k sin(kh)/(kh) = sin(kh)/h. The approximate solution v(x, t) is dispersive since

dωv/dk ≠ constant.

Note that this is not a surprising result, since

vt = D0 v = (v(x + h) − v(x − h)) / (2h) = vx + (h^2/6) vxxx + O(h^4).

Therefore, we are really solving the equation

vt = vx + (h^2/6) vxxx + O(h^4).

This equation is dispersive because of the presence of the odd spatial derivative vxxx on the right-hand side of the equation (we explained before that odd spatial derivatives are dispersive). Here, the equation

ut = ux + (h^2/6) uxxx

is known as the modified equation.

Let us compare the exact solution

u(x, t) = e^{ikt} e^{ikx} = exp(ik(x + t))

with the approximate solution

v(x, t) = e^{ik (sin(kh)/(kh)) t} e^{ikx} = exp( ik( (sin(kh)/(kh)) t + x ) ).

The phase Pu for the exact solution is

Pu = k(x + t),

while the phase Pv for the approximate solution is

Pv = k( (sin(kh)/(kh)) t + x ).

It follows that the only errors are in phase, due to dispersion. The phase error e(kt, kh) is defined as

e(kt, kh) = |Pu − Pv| = kt | 1 − sin(kh)/(kh) | ≈ kt (kh)^2 / 6.

The phase error grows at a rate proportional to the size of t. This means that for any given kh, the dispersive error in the numerical approximation v(x, t) gets worse as time progresses.
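The growth of the phase error in time can be tabulated directly from the formula above. A minimal Matlab sketch (the wavenumber, time and step-sizes are illustrative choices):

% Phase error kt*|1 - sin(kh)/(kh)| of the semi-discrete scheme v_t = D0 v
k = 4;  t = 10;
for h = [0.2, 0.1, 0.05]
    e = k*t * abs(1 - sin(k*h)/(k*h));
    fprintf('h = %.2f   phase error = %.3e\n', h, e);
end
% For fixed k and t the error shrinks like h^2, but for fixed h it grows
% linearly in t, as derived above.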

36
3.5.2 Analysis of Dissipative Errors
Let us consider the baby-wave equation

ut = ux ; u (x; 0) = g (x) ; 1 < x < 1; t 0: (3.5.5)

The exact solution of the baby-wave equation (3:5:5) is

u (x; t) = g (x + t) :

This is a simple translation (i.e. "wave") towards the left of the initial condition
u (x; 0) = g (x).
We wish to obtain a numerical approximation v (x; t) for the exact solution u (x; t) :
Let us utilize a semi-discrete …rst order accurate approximation

v (x + h) v (x)
vt = D+ v = ; (3.5.6)
h
where v (x; t) is the numerical approximate solution we seek (obtained by using the
forward Euler operator D+ to discretize the spatial derivative). Using Fourier analysis,
let us take
v (x; t) = c (t) eikx : (3.5.7)
Substituting (3:5:7) into (3:5:6) ; we get

dc eikx eikh eikx


eikx = c;
dt h

which simpli…es to

dc eikh 1 cos kh + i sin kh 1


= c= c:
dt h h

This can be expressed as

dc sin kh 1 cos kh
= ik c: (3.5.8)
dt kh h

Now if c (t) = exp (i!t) where ! = ! (k) ; we may substitute this into (3:5:8) to get

sin kh 1 cos kh
i! = ik : (3.5.9)
kh h

37
Since (3:5:9) is complex with a negative real part, it follows that c (t) = exp (i!t)
oscillates with decaying amplitude as time increases. This is referred to as numerical
dissipation. We also should point out that since the real part of i! is
(kh)2
1 cos kh 1 1 2 k2h
' = ;
h h 2
the rate of decay of the solution is of O (h) : We should note that this dissipation is a
direct consequence of the numerical method utilized to solve the equation, as there is no
decay inherent in the baby-wave equation.
Suppose that instead of the forward di¤erence operator, we had chosen to use the
backward di¤erence operator to approximate the spatial derivative. In this case, we
consider instead the semi-discrete …rst order accurate approximation
v (x) v (x h)
vt = D v = : (3.5.10)
h
Using Fourier analysis, let us take

v (x; t) = c (t) eikx : (3.5.11)

Substituting (3:5:11) into (3:5:10) ; we get

dc eikx eikx e ikh


eikx = c;
dt h
which simpli…es to
ikh
dc 1 e 1 cos kh + i sin kh
= c= c:
dt h h
This can be expressed as
dc sin kh 1 cos kh
= ik + c: (3.5.12)
dt kh h
Now if c (t) = exp (i!t) where ! = ! (k) ; we may substitute into (3:5:12) to get

sin kh 1 cos kh
i! = ik + : (3.5.13)
kh h
Since (3:5:13) is complex with a positive real part, it follows that c (t) = exp (i!t)
oscillates with increasing amplitude as time increases. The numerical approximation
therefore grows exponentially with time, and we say that the method is unstable. This

38
is a direct consequence of the numerical method utilized to solve the equation, as there
is no growth factor inherent in the solution to the baby-wave equation.
A natural question to ask is why it was acceptable to use D+ but not D− to discretize
the spatial derivative in this problem. We stated previously that the solution of the
baby-wave equation

    u_t = u_x,   u(x,0) = g(x),   −∞ < x < ∞,   t ≥ 0,

is

    u(x,t) = g(x + t).   (3.5.14)

This solution (3.5.14) is constant along the family of curves (characteristic curves)

    x + t = constant.   (3.5.15)

Recall that this solution signifies the simple translation (or wave) towards the left of the
initial condition u(x,0) = g(x). Therefore, at any given time, u(x,t) depends on the
values of u(x,t) at earlier times t as well as on the x values to the right of the current
position, but not on the x values to the left of the current position. It is therefore
natural to use a forward difference method to discretize the spatial derivative, but it is
not appropriate to use the backward difference method.
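The sign of the real part of iω in (3.5.9) and (3.5.13) can be confirmed with a short Matlab check (a sketch; the grid spacing h below is an illustrative value):

    % Real part of i*omega for the semi-discrete D+ and D- discretizations
    h  = 0.1;
    kh = linspace(-pi, pi, 201);
    reDplus  = -(1 - cos(kh))/h;   % D+ : non-positive, so modes decay (dissipation)
    reDminus =  (1 - cos(kh))/h;   % D- : non-negative, so modes grow (instability)
    plot(kh, reDplus, kh, reDminus), xlabel('kh'), legend('D+', 'D-')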

Remark 3.5.1 The use of D+ to discretize the spatial derivative here is sometimes referred
to as upwind differencing, and the use of D− is known as downwind differencing.

Remark 3.5.2 A common cause of numerical instability is the poor choice of a numerical
method, i.e., one that is not a good fit for the physical principles underlying the equation
being solved.

Chapter 4

Mostly Explicit Schemes

In order to carry out a general discussion of the most commonly utilized explicit schemes,
we will demonstrate the numerical solution of the baby-wave equation, as it is one of
the most basic one-dimensional initial value problems. We wish to solve

    u_t = u_x,   u(x,0) = g(x),   −∞ < x < ∞,   t ≥ 0.   (4.0.1)

We will create regular grids (i.e. equally spaced points along the grid) for space and
time. The space step-size will be denoted by h, and the time step-size will be denoted
by Δt (we also write k for Δt). Therefore, the total time taken to reach time point t_n is
nΔt, and the total distance from the origin to the point x_j is jh. As we have done
previously, we refer to the approximate solution to u(x,t) as v(x,t). We therefore utilize
the following notation:

    v_j^n = v(x_j, t_n) ≈ u(x_j, t_n) = u_j^n

denotes the value of the approximate solution at space point x_j and time point t_n.
Note that the superscript n in v_j^n refers to the time point t_n, and the subscript j in v_j^n
refers to the spatial point x_j. Hence, whenever we are referring to a discretization of
the spatial derivative, we will use subscript notation, and whenever we are referring to
a temporal discretization, we utilize superscripts. For example, to discretize the first
spatial derivative u_x using the forward Euler method, we use

    D+ v_j^n = (v_{j+1}^n − v_j^n)/h.

We refer to this as D+ in space. To discretize the first temporal derivative u_t using the
forward Euler method, we have

    D+ v_j^n = (v_j^{n+1} − v_j^n)/Δt,

and we call this D+ in time.
In order to determine the suitability of any given numerical scheme for a given
problem, we must study the stability and consistency of the method. In the scheduled
computer labs for this course, we will discuss the numerical implementation of the first
method discussed in the following sections (using Matlab as a teaching tool). Additional
handouts will be prepared for each lab. The implementation of the subsequent methods
will be the subject of the computer lab assignments given throughout the course.

Remark 4.0.3 Consistency + Stability = Convergence.

4.1 Method 1: D+ in time, D0 in space


To solve the baby-wave equation (4.0.1) using D+ (forward Euler) in time and D0 (central
difference) in space, we consider

    D+ v_j^n = (v_j^{n+1} − v_j^n)/Δt = (v_{j+1}^n − v_{j−1}^n)/(2h) = D0 v_j^n.

This can be written as

    v_j^{n+1} = v_j^n + (Δt/2h)(v_{j+1}^n − v_{j−1}^n).   (4.1.1)
Here (4.1.1) is the update scheme for the main loop of our code, as it gives us the updated
value (at time step n + 1) of the solution based on the information we had previously
(at time step n). This is called a one-step scheme since the update v_j^{n+1} depends only
on the data at the previous time level n. We implement the initial condition by taking

    v_j^1 = g(x_j).

Note that (4.1.1) is an explicit scheme, since v_j^{n+1} can be obtained from the data at
the previous level n straight away (i.e. there is no other information at time n + 1 on the
right-hand side of the equation to make the scheme implicit). Also, the system
(4.0.1) that we are solving has an infinite domain −∞ < x < ∞. As it is impossible to
create an infinitely long spatial grid, we are forced to limit our space interval to some
representative domain, and to utilize periodic boundary conditions to solve the problem
over the specified time interval.
Before deciding to utilize this method to solve our system (4.0.1), there are a few
considerations we must take into account. First of all, we must know the order of our
method, and use this to determine whether our solution is converging to a plausible
solution. Secondly, we must perform a stability analysis to determine whether this
method is suitable for our problem.

Remark 4.1.1 It should be noted that in several programming languages the indexing
for loops starts from 0, but with Matlab - our choice for this course - any loop must be
indexed from one onwards.

4.1.1 Truncation Error and Order of Method


Recall that

    v_t(x_j, t_n) = D+ v_j^n − (Δt/2) u_tt + O((Δt)²),   where D+ v_j^n = (v_j^{n+1} − v_j^n)/Δt,

and

    v_x(x_j, t_n) = D0 v_j^n − (h²/6) u_xxx + O(h⁴),   where D0 v_j^n = (v_{j+1}^n − v_{j−1}^n)/(2h).

Therefore, to solve the baby-wave equation, we are actually using

    (v_j^{n+1} − v_j^n)/Δt − (Δt/2) u_tt + O((Δt)²) = (v_{j+1}^n − v_{j−1}^n)/(2h) − (h²/6) u_xxx + O(h⁴).

This may be written as

    (v_j^{n+1} − v_j^n)/Δt = (v_{j+1}^n − v_{j−1}^n)/(2h) + (Δt/2) u_tt − (h²/6) u_xxx + O((Δt)²) + O(h⁴).

The truncation error T is

    T = (Δt/2) u_tt − (h²/6) u_xxx,

and it is representative of the amount by which the approximation scheme fails to satisfy
the exact solution. Since T ∝ Δt and T ∝ h², we refer to this as a (1,2) scheme.

Definition 1 A scheme with truncation error T that is proportional to A(Δt)^p + Bh^q
is referred to as a (p,q) scheme.

4.1.2 Von Neumann Stability Analysis


We wish to perform Fourier analysis on this finite difference scheme. We can safely
assume that the x dependence is of the form exp(ikx_j), but it is not clear what to do
for the time dependence, since a solution can have constant modulus with time, can
decay with time, or can blow up with time (with or without oscillating). We should
therefore account for all possibilities. We therefore take v_j^n = z^n exp(ikx_j), where z is
complex, as this exhibits all three possibilities:

If |z| = 1, the solution has constant modulus with time.

If |z| < 1, the solution decays in time.

If |z| > 1, the solution blows up (grows exponentially) as time increases.

For simplification, let us take ξ = exp(ikh). Therefore |ξ| = 1, and exp(ikx_j) =
exp(ikjh) = ξ^j, and we may write

    v_j^n = z^n ξ^j.

Substituting this into (4.1.1) and taking λ = Δt/h, also for simplicity, we have

    z^{n+1} ξ^j = z^n ξ^j + (λ/2)(z^n ξ^{j+1} − z^n ξ^{j−1}).

Dividing through by z^n ξ^j yields

    z = 1 + (λ/2)(ξ − ξ^{−1}) = 1 + (λ/2){exp(ikh) − exp(−ikh)},

which gives

    z = 1 + iλ (exp(ikh) − exp(−ikh))/(2i) = 1 + i{λ sin kh}.   (4.1.2)

Let us turn our attention to the exact solution. Recall the basic Fourier analysis
of the baby-wave equation u_t = u_x as follows. Assuming the solution is of the form
u(x,t) = exp(iωt) exp(ikx), and substituting into the baby-wave equation, we get

    iω exp(iωt) exp(ikx) = ik exp(iωt) exp(ikx),

which means that ω = k. Hence

    u_j^n = exp(iωt_n) exp(ikx_j) = exp(iω[Δt]n) exp(ikjh) = exp(ik[Δt]n) exp(ikjh) = z^n exp(ikx_j).

It follows that z = exp(ik[Δt]) and so

    |z| = |exp(ik[Δt])| = 1.

Therefore for the exact solution |z| = 1.

Taking the modulus of (4.1.2), we get

    |z| = |1 + i{λ sin kh}|.

If kh = 0, we recover |z| = 1, which is the same as for the exact solution. However, when
kh = 0, since h ≠ 0 (our grid must have a non-zero step-size), we are only considering
the case k = 0. This corresponds to the constant solution

    u(x,t) = constant,

which trivially satisfies the baby-wave equation, but is of no interest to us.
For any wavenumber k ≠ 0, as h → 0, kh becomes very small and the approximation
(4.1.2) becomes

    z ≈ 1 + iλ(kh) = 1 + i(Δt/h)kh = 1 + ik(Δt),

since sin x ≈ x for small x. Therefore

    z^n ≈ (1 + ikΔt)^n ≈ (exp(ikΔt))^n = exp(ik[Δt]n) = exp(ikt_n),

which gives |z| = 1. Hence in this limit, we recover the exact solution. This is what is
referred to as consistency: as h → 0, our numerical solution approaches the value of the
exact solution.
There is a drawback however, as we must deal with any wavenumber k. Since we
are looking at waves of the form e^{ikx}, the limits of kh are (for all possible wavenumbers
k)

    −π ≤ kh ≤ π.

Suppose for example we are looking at the case where kh = π/2. Then

    z = 1 + iλ sin(π/2) = 1 + iλ,

and

    |z| = |1 + iλ| = √(1 + λ²) > 1.

This is not good news, because for |z| > 1 the solution blows up (grows exponentially)
as time increases. Therefore, this method (D+ in time, D0 in space) is highly unstable
and not a good choice for solving the equation (4.0.1).

4.1.3 Implementation
Although this is not the ideal method for solving the baby-wave equation, as it is the
first numerical method that we have encountered, we will discuss its implementation in
detail. Let us consider the problem

    u_t = u_x,   u(x,0) = sin 2πx,   −∞ < x < ∞,   t ≥ 0.

As the spatial axis is infinite, we shall utilize periodic boundary conditions and consider
only a section of the x axis. Let us take 0 ≤ x ≤ L, 0 ≤ t ≤ Q (where L, Q must
be specified in the code) and solve

    u_t = u_x,   u(x,0) = sin 2πx,   u(0,t) = u(L,t).   (4.1.3)

We wish to solve (4.1.3) via the numerical scheme (4.1.1)

    v_j^{n+1} = v_j^n + (Δt/2h)(v_{j+1}^n − v_{j−1}^n).

This may be written as

    v_j^{n+1} = −(λ/2) v_{j−1}^n + v_j^n + (λ/2) v_{j+1}^n,   (4.1.4)

where λ = Δt/h.
The solution of the problem is found by writing a code to implement the update
scheme (4.1.4). There are two approaches to solving this problem. A nested loop may
be created in the time and space variables to update the solution v on the spatial grid at
every time step. The alternative is to solve the system by writing it in matrix form.
As our program of choice in this course is Matlab, which was developed specifically
for the solution of matrix systems, we choose the matrix method to solve the
problem. (Note that this is the preference of the author; nested for loops are a perfectly
acceptable way to solve the problem.)

Let us write (4.1.4) in matrix form

    v⃗^{n+1} = B v⃗^n.   (4.1.5)
Here v⃗^{n+1} and v⃗^n are N × 1 column vectors at time steps n + 1 and n respectively.
The form of the vector v⃗ at a given time level n is

    v⃗^n = (v_1, v_2, v_3, ..., v_{N−1}, v_N)^T, with all entries evaluated at time level n.   (4.1.6)

Our first vector v⃗^1 is calculated from the initial condition u(x,0) = sin 2πx as:

    v_j^1 = sin 2πx_j,   j = 1 : N.   (4.1.7)

As there are N space points on the x axis, B is a square matrix of size N × N. As the
boundary conditions signify what occurs at the first and last space points respectively,
we will write all rows of B except for the first row and the last (Nth) row. According
to (4.1.4), the matrix B can be written - with the exception of the first and last rows,
which are determined below from the boundary conditions - as follows:

    B = [   ·      ·      ·     ···     ·      ·   ]
        [ −λ/2     1     λ/2     0     ···     0   ]
        [   0    −λ/2     1     λ/2    ···     0   ]
        [   ⋮            ⋱      ⋱      ⋱       ⋮   ]
        [   0     ···     0    −λ/2     1     λ/2  ]
        [   ·      ·      ·     ···     ·      ·   ]   (4.1.8)

Notice that B is a sparse tri-diagonal matrix with zeros everywhere except for the main
diagonal and the first super- and sub-diagonals. On the main diagonal we have 1, i.e.
the coefficient of v_j^n from equation (4.1.4), and on the first sub-diagonal and the first
super-diagonal we have −λ/2 and λ/2, which correspond to the coefficients of v_{j−1}^n and v_{j+1}^n
respectively. From (4.1.5), (4.1.6) and (4.1.8), each row of the matrix-vector product
B v⃗^n produces one entry of the updated vector v⃗^{n+1}.
To illustrate this, we calculate the update v_2^{n+1} from the previous data at time level n
by multiplying the second row of B into v⃗^n:

    v_2^{n+1} = (−λ/2, 1, λ/2, 0, ..., 0) (v_1, v_2, v_3, ..., v_N)^{n,T}.

This is equivalent to

    v_2^{n+1} = −(λ/2) v_1^n + v_2^n + (λ/2) v_3^n,

which is (4.1.4) evaluated at space step j = 2. This matrix multiplication is done for all
values j = 2 : N − 1 to produce an updated vector v⃗^{n+1} from the previous information

at time level n provided by v⃗^n. The question now is what we do at the boundaries
when j = 1 and j = N. This is directly related to the first and last rows of B,
which must be configured according to the periodic boundary conditions. What is done
is commonly referred to as a "wrap around" of the first and last rows.
Recall that

    v_j^{n+1} = −(λ/2) v_{j−1}^n + v_j^n + (λ/2) v_{j+1}^n.

Now when j = 1, (4.1.4) becomes

    v_1^{n+1} = −(λ/2) v_0^n + v_1^n + (λ/2) v_2^n,   (4.1.9)

which is a problem, as we do not have a space point at j = 0. However, we are given
periodic boundary conditions. Instead of worrying about what happens when we reach
the edge of the boundary, we "wrap around" the nonexistent point v_0^n in (4.1.9),
replacing it with v_N^n to get

    v_1^{n+1} = −(λ/2) v_N^n + v_1^n + (λ/2) v_2^n.   (4.1.10)

What we are actually saying is that when we run out of space points, we loop back
around to the starting point. This is essentially the concept of a periodic boundary
condition.
In a similar way, when j = N, we have

    v_N^{n+1} = −(λ/2) v_{N−1}^n + v_N^n + (λ/2) v_{N+1}^n.   (4.1.11)

This time, we have a nonexistent value of v at the space point j = N + 1. Again, we
solve the problem with the periodic boundary condition that tells us to wrap around to
the first point j = 1, hence replacing v_{N+1}^n with v_1^n to get

    v_N^{n+1} = −(λ/2) v_{N−1}^n + v_N^n + (λ/2) v_1^n.   (4.1.12)
Our final matrix B, inclusive of the first and last rows, is therefore

    B = [   1     λ/2     0     ···     0    −λ/2 ]
        [ −λ/2     1     λ/2     0     ···     0  ]
        [   0    −λ/2     1     λ/2    ···     0  ]
        [   ⋮            ⋱      ⋱      ⋱       ⋮  ]
        [   0     ···     0    −λ/2     1    λ/2  ]
        [  λ/2     0     ···     0    −λ/2     1  ]   (4.1.13)

and the update v⃗^{n+1} is calculated from

    v⃗^{n+1} = B v⃗^n.   (4.1.14)

For our pseudocode, we will use the general Matlab notation: X(i,k) is the element
in the ith row and kth column of the matrix X; X(:,k) signifies all elements in the
kth column of X; X(i,:) signifies all elements in the ith row of X; X' signifies the
transpose of the matrix X. We must now create the initial N × 1 column vector v⃗^1
according to the initial condition u(x,0) = sin 2πx, and the square matrix B defined in
(4.1.13). The initial vector v⃗^1 will be used to calculate the update v⃗^2,

    v⃗^2 = B v⃗^1,

and v⃗^2 will be used to generate the subsequent solution vector v⃗^3,

    v⃗^3 = B v⃗^2.

This process will be repeated using

    v⃗^{n+1} = B v⃗^n

until a solution v⃗^{n+1} has been successfully created for all time steps n = 1 : M − 1.
We will now proceed to outline the steps of the code. Matlab notation will be
used throughout, but this general outline can be utilized to create a code in any given
computer language of choice.

Step 1: Enter the numerical values for L, Q, M, N by creating a function
DplusDzero.m in Matlab:

    function [U, X, T] = DplusDzero(L, Q, M, N)

Step 2: Calculate the step sizes h and Δt = k from the values provided
for L, Q, M, N:

    h = L/(N-1);
    k = Q/(M-1);

Step 3: Generate the x grid with N points and step-size h, and the time grid with
M points and step-size Δt:

    x = linspace(0, L, N);
    t = linspace(0, Q, M);

Step 4: Create a zero square matrix B of dimensions N × N, and calculate λ:

    B = zeros(N);
    lambda = k/h;

Modify the matrix B by placing the element 1 along the main diagonal, −λ/2 on
the first sub-diagonal, and λ/2 on the first super-diagonal:

    B = diag(ones(1,N)) - (lambda/2)*diag(ones(1,N-1),-1) + ...
        (lambda/2)*diag(ones(1,N-1),1);

Add in the wrap-around periodicity conditions:

    B(1,N) = -lambda/2;
    B(N,1) = lambda/2;
Step 5: Create a zero matrix U of dimensions M × N. We will utilize the rows of
U to store each solution vector v⃗ for time steps 1 : M:

    U = zeros(M,N);

Step 6: Generate the initial condition vector, and store it in the first row of the
solution matrix U:

    for i = 1:N
        U(1,i) = sin(2*pi*x(i));
    end

Step 7: Perform the matrix multiplication for each time step i = 2 : M. The ith
row of U must be calculated by multiplying the matrix B with the transposed
previous row of U (the transpose is needed as the matrix cannot be multiplied with a row
vector). In order to store the resulting column vector in the next available row of
the solution matrix U, we transpose it back to a row vector:

    for i = 2:M
        U(i,:) = (B*U(i-1,:)')';
    end

Step 8: Plot the result:

    figure(1)
    [X,T] = meshgrid(x,t);
    mesh(X,T,U)
    xlabel('x')
    ylabel('t')
    zlabel('z')
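To run the scheme, the function can then be called with suitable arguments; for example (the values below are illustrative choices, not prescribed ones):

    [U, X, T] = DplusDzero(1, 1, 501, 101);   % L = 1, Q = 1, M = 501, N = 101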
When the problem is solved with the method D+ v_j^n = D0 v_j^n utilizing the Matlab code
DplusDzero.m outlined above, we obtain the plot shown in Figure 4.1. Note that the
solution becomes slightly distorted over time. This is a result of the inherent numerical
errors previously explained, since the exact solution of the baby-wave equation is a
simple translation of the original sine wave with no deformation. The periodic boundary
conditions are apparent in the plot, as the wave is seen to travel to the left and reappear
accordingly at the right edge of the x axis.

Remark 4.1.2 This code runs well initially, but after some time the solution is seen to
break down, as seen in Figure 4.1; we have previously explained that the method is unstable,
and therefore ill-suited to the solution of the baby-wave equation.

4.2 Method 2: D− in time, D0 in space

To solve the baby-wave equation (4.0.1) using D− (backward Euler) in time and D0
(central difference) in space, we consider

    D− v_j^n = (v_j^n − v_j^{n−1})/Δt = (v_{j+1}^n − v_{j−1}^n)/(2h) = D0 v_j^n.

This is

    v_j^n = v_j^{n−1} + (Δt/2h)(v_{j+1}^n − v_{j−1}^n).

We can write this as (increasing or decreasing the space or time counters can be done
so long as it is done consistently for all terms in the expression)

    v_j^{n+1} = v_j^n + (Δt/2h)(v_{j+1}^{n+1} − v_{j−1}^{n+1}).

Figure 4.1: Method 1 (D+ in time, D0 in space) is clearly unstable as time progresses.

Taking λ = Δt/h, we may express this as

    (λ/2) v_{j−1}^{n+1} + v_j^{n+1} − (λ/2) v_{j+1}^{n+1} = v_j^n.   (4.2.1)

We implement the initial condition by taking

    v_j^1 = g(x_j).

Recall that the problem we are solving has an infinite domain −∞ < x < ∞. As it is impossible to
create an infinitely long spatial grid, we again limit our space interval to a representative
spatial domain, and utilize periodic boundary conditions to solve the problem over the
specified time interval.
Now (4.2.1) is still a one-step scheme as it involves only time levels n and n + 1.
However, this is no longer an explicit method, since the values at time level n + 1 involve
three different space points j − 1, j and j + 1. For this reason, we say that this is an
implicit scheme. In order to advance the solution forward one time-step, we must solve
a system of linear equations.

4.2.1 Von Neumann Stability Analysis


Once again, we take ξ = exp(ikh). Therefore |ξ| = 1, and exp(ikx_j) = exp(ikjh) = ξ^j.
We consider

    v_j^n = z^n ξ^j.

Substituting this into (4.2.1), we have

    z^{n+1} ξ^j − (λ/2)(z^{n+1} ξ^{j+1} − z^{n+1} ξ^{j−1}) = z^n ξ^j.

Dividing through by z^n ξ^j, we get

    z[1 − (λ/2)(ξ − ξ^{−1})] = 1
    z[1 − iλ (exp(ikh) − exp(−ikh))/(2i)] = 1
    z[1 − iλ sin kh] = 1
    z = 1/(1 − iλ sin kh).

Therefore

    |z| = |1/(1 − iλ sin kh)| = 1/√(1 + λ² sin² kh) ≤ 1 for all values of kh.

This scheme is therefore said to be unconditionally stable.
Now since, as kh → 0, |z| → 1, the method is consistent. Recall that −π ≤ kh ≤ π.
When kh = 0, −π or π (so that sin kh = 0), we obtain |z| = 1. However, for all other values of kh, we get
|z| < 1. This means that the method is dissipative, as the solution decays with time.
Although the scheme is unconditionally stable, this is not an ideal method for solving the
baby-wave equation because it is highly dissipative. By selecting a very small step-size h,
we can limit the rate of dissipation for a while, but we cannot prevent it from happening
over time. The danger of using an unconditionally stable method is that no matter what
step-sizes are chosen for h and Δt, a solution will be found. However, this solution may
not be accurate because of the dissipative error inherent in the numerical method itself.

Remark 4.2.1 The use of highly dissipative schemes is not recommended, unless we
are interested in suppressing instabilities that we cannot analyze in the actual problem
that we are trying to solve.

4.2.2 Implementation

As this is the first implicit scheme we have encountered, some details of the numerical
solution will be provided. Let us consider once again the problem

    u_t = u_x,   u(x,0) = g(x),   −∞ < x < ∞,   t ≥ 0.

As the spatial axis is infinite, we shall utilize periodic boundary conditions and consider
only a section of the x axis. Let us take 0 ≤ x ≤ L, 0 ≤ t ≤ Q (where L, Q must
be specified in the code) and solve the specific problem

    u_t = u_x,   u(0,t) = u(L,t),   (4.2.2)

with the initial condition

    u(x,0) = 0        for 0 ≤ x < 1/4,
             4x − 1   for 1/4 ≤ x < 1/2,
             3 − 4x   for 1/2 ≤ x < 3/4,
             0        for 3/4 ≤ x ≤ 1.   (4.2.3)
We wish to solve the problem via the numerical scheme previously given:

    (λ/2) v_{j−1}^{n+1} + v_j^{n+1} − (λ/2) v_{j+1}^{n+1} = v_j^n.   (4.2.4)

Again, we will illustrate the solution of the system (4.2.4) by writing it in matrix
form:

    B v⃗^{n+1} = v⃗^n.   (4.2.5)

To solve this, we must therefore calculate the inverse of the matrix B and compute the
updated solution vector v⃗^{n+1} from the previous solution v⃗^n via

    v⃗^{n+1} = B^{−1} v⃗^n.   (4.2.6)

Although some of the steps are quite similar to what was done for the solution of the
problem via D+ v_j^n = D0 v_j^n, we will outline the solution with all the details.

Both v⃗^{n+1} and v⃗^n are N × 1 vectors at time steps n + 1 and n respectively. The
form of the vector v⃗ at any given time level n is

    v⃗^n = (v_1, v_2, v_3, ..., v_{N−1}, v_N)^T, with all entries evaluated at time level n,

and the first vector v⃗^1 is calculated from the initial condition provided in (4.2.3).
As there are N space points on the x axis, B is a square matrix of size N × N. As the
boundary conditions signify what occurs at the first and last space points j = 1 and
j = N, we can use (4.2.4) to write all rows of B except for the first and last rows (which
are determined below) as follows:

    B = [  ·     ·      ·     ···     ·     ·   ]
        [ λ/2    1    −λ/2     0     ···    0   ]
        [  0    λ/2     1    −λ/2    ···    0   ]
        [  ⋮           ⋱      ⋱      ⋱      ⋮   ]
        [  0    ···     0     λ/2     1   −λ/2  ]
        [  ·     ·      ·     ···     ·     ·   ]   (4.2.7)

Notice that B is a sparse matrix with zeros everywhere except for the main diagonal and
the first super- and sub-diagonals. On the main diagonal we have 1, i.e. the coefficient
of v_j^{n+1} from equation (4.2.4), and on the first sub-diagonal and the first super-diagonal,
we have λ/2 and −λ/2, which correspond to the coefficients of v_{j−1}^{n+1} and v_{j+1}^{n+1}
respectively.
Let us now concentrate on the first and the last rows of the matrix B. The first row
of B corresponds to the numerical scheme (4.2.4) when j = 1, which is

    (λ/2) v_0^{n+1} + v_1^{n+1} − (λ/2) v_2^{n+1} = v_1^n.

Since there is no existing space point j = 0, we use the "wrap-around" periodic condition
u(0,t) = u(L,t) to say that

    v_0^{n+1} = v_N^{n+1},

resulting in

    (λ/2) v_N^{n+1} + v_1^{n+1} − (λ/2) v_2^{n+1} = v_1^n.   (4.2.8)

Next, we consider (4.2.4) when j = N:

    (λ/2) v_{N−1}^{n+1} + v_N^{n+1} − (λ/2) v_{N+1}^{n+1} = v_N^n.

Again, there is no existing space point j = N + 1, so we use the "wrap-around" periodic
condition u(0,t) = u(L,t) to write

    v_{N+1}^{n+1} = v_1^{n+1},

which gives us

    (λ/2) v_{N−1}^{n+1} + v_N^{n+1} − (λ/2) v_1^{n+1} = v_N^n.   (4.2.9)

Using (4.2.8) and (4.2.9) in (4.2.7), we get the final form of the matrix B:

    B = [   1   −λ/2     0     ···     0     λ/2 ]
        [  λ/2    1    −λ/2     0     ···     0  ]
        [   0    λ/2     1    −λ/2    ···     0  ]
        [   ⋮           ⋱      ⋱      ⋱       ⋮  ]
        [   0    ···     0     λ/2     1    −λ/2 ]
        [ −λ/2    0     ···     0     λ/2     1  ]   (4.2.10)
We now must take the inverse of the matrix B. Matlab allows us to do this very easily with
the "inv" command. We will call the inverted matrix C, which is written in Matlab as

    C = inv(B);

Our solution is therefore calculated from

    v⃗^{n+1} = C v⃗^n.

We will use the general Matlab notation: X(i,k) is the element in the ith row and
kth column of the matrix X; X(:,k) signifies all elements in the kth column of X;
X(i,:) signifies all elements in the ith row of X; X' signifies the transpose of the
matrix X. We must now create the initial N × 1 column vector v⃗^1 according to the initial
condition provided, and the square matrix B defined in (4.2.10). This matrix B must be
inverted to create a new matrix C. The initial vector v⃗^1 will be used to calculate the
update v⃗^2,

    v⃗^2 = C v⃗^1.

This process will be repeated using

    v⃗^{n+1} = C v⃗^n
until a solution v⃗^{n+1} has been successfully created for all the time steps.
We will now outline the steps of the code. Matlab notation will be used throughout,
but this general outline can be utilized to create a code in any given computer language
of choice.

Step 1: Enter the numerical values for L, Q, M, N by creating a function
DminusDzero.m in Matlab:

    function [U, X, T] = DminusDzero(L, Q, M, N)

Step 2: Calculate the step sizes h and Δt = k from the values provided
for L, Q, M, N:

    h = L/(N-1);
    k = Q/(M-1);

Step 3: Generate the x grid with N points and step-size h, and the time grid with
M points and step-size Δt:

    x = linspace(0, L, N);
    t = linspace(0, Q, M);

Step 4: Create a zero square matrix B of dimensions N × N, and calculate λ:

    B = zeros(N);
    lambda = k/h;

Modify the matrix B by placing the element 1 along the main diagonal, λ/2 on
the first sub-diagonal, and −λ/2 on the first super-diagonal:

    B = diag(ones(1,N)) + (lambda/2)*diag(ones(1,N-1),-1) - ...
        (lambda/2)*diag(ones(1,N-1),1);

Add in the wrap-around periodicity conditions:

    B(1,N) = lambda/2;
    B(N,1) = -lambda/2;

Step 5: Calculate the matrix C, which is the inverse of the matrix B:

    C = inv(B);

Step 6: Create a zero matrix U of dimensions M × N. We will utilize the rows of
U to store each solution vector v⃗ for time steps 1 : M:

    U = zeros(M,N);

Step 7: Generate the initial condition vector, and store it in the first row of the
solution matrix U:

    for i = 1:N
        if (x(i) >= 0.25 && x(i) < 0.5)
            U(1,i) = 4*x(i) - 1;
        elseif (x(i) >= 0.5 && x(i) < 0.75)
            U(1,i) = 3 - 4*x(i);
        else
            U(1,i) = 0;
        end
    end

Step 8: Perform the matrix multiplication for each time step i = 2 : M. The ith
row of U must be calculated by multiplying the matrix C with the transposed
previous row of U (the transpose is needed as the matrix cannot be multiplied with a row
vector). In order to store the resulting column vector in the next available row of
the solution matrix U, we transpose it back to a row vector:

    for i = 2:M
        U(i,:) = (C*U(i-1,:)')';
    end

Step 9: Plot the result:

    figure(1)
    [X,T] = meshgrid(x,t);
    mesh(X,T,U)
    xlabel('x')
    ylabel('t')
    zlabel('z')
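Assuming the function signature above, the run shown in Figure 4.2 corresponds to a call of the form (using the grid sizes quoted in the figure caption):

    [U, X, T] = DminusDzero(1, 1, 5001, 101);   % L = 1, Q = 1, M = 5001, N = 101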

When the problem is solved with the method D− v_j^n = D0 v_j^n utilizing the Matlab
code DminusDzero.m outlined above, we obtain the plot shown in Figure 4.2. The
initial condition prescribes an initial triangular wave. The baby-wave equation u_t = u_x
describes the simple translation of this triangular wave to the left at constant speed
with no loss of energy or shape. We note that our numerical solution is increasingly
damped over time. This is a clear reflection of our previous analysis, which predicted
dissipative errors and the damping of the solution as time progresses. The periodic
boundary conditions are apparent in the plot, as the "hat" wave is seen to travel to the
left and reappear accordingly at the right edge of the x axis.

Figure 4.2: Snapshots of the numerical solution of the baby-wave equation solved using
the method D− in time, D0 in space. Here we took Δt = 0.002, h = 0.01, N = 101,
M = 5001, Q = 1, L = 1.

4.3 Method 3: Lax-Friedrichs
The Lax-Friedrichs method is named after Peter Lax¹ and Kurt O. Friedrichs². This
method is essentially an attempt to correct the shortcomings of the original forward
Euler in time, central difference in space scheme (Method 1). We recall the forward
Euler in time, central difference in space scheme for the baby-wave equation (4.0.1):

    v_j^{n+1} = v_j^n + (Δt/2h)(v_{j+1}^n − v_{j−1}^n).

As this method did not perform very well, we replace the term v_j^n with an average over
the two adjacent space points at time level n as follows:

    v_j^{n+1} = (v_{j+1}^n + v_{j−1}^n)/2 + (Δt/2h)(v_{j+1}^n − v_{j−1}^n).   (4.3.1)

This is a (1,2) explicit scheme. Before we can use it, we must first study its consistency
and stability, in order to determine whether the method can be utilized to produce a
convergent solution for the problem.
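As a sketch of how (4.3.1) might be implemented in Matlab (reusing the conventions of the earlier codes: v is a row vector of N spatial values, lambda = Δt/h, and the periodic wrap-around is applied by shifting the vector; the variable names are illustrative):

    % One Lax-Friedrichs update with periodic wrap-around
    vp = [v(2:end), v(1)];       % v_{j+1}, with v_{N+1} wrapped to v_1
    vm = [v(end), v(1:end-1)];   % v_{j-1}, with v_0 wrapped to v_N
    v  = (vp + vm)/2 + (lambda/2)*(vp - vm);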

4.3.1 Von Neumann Stability Analysis


Let us take again

    v_j^n = z^n ξ^j = z^n exp(ikhj).

Substituting into (4.3.1) gives (taking λ = Δt/h)

    z^{n+1} exp(ikhj) = z^n ( (1/2)[exp(ikh[j+1]) + exp(ikh[j−1])] + (λ/2)(exp(ikh[j+1]) − exp(ikh[j−1])) ).

Dividing through by z^n exp(ikhj), we get

    z = (exp(ikh) + exp(−ikh))/2 + iλ (exp(ikh) − exp(−ikh))/(2i)
    z = cos kh + iλ sin kh.
¹Peter David Lax (born in 1926 in Budapest, Hungary) is an American mathematician who has made
significant contributions to the field of numerical analysis. He graduated with his PhD from New York
University in 1949 under the supervision of the late Professor Kurt O. Friedrichs.
²Kurt Otto Friedrichs (1901-1982) was one of the most famous German-American mathematicians
of the twentieth century. His greatest contribution to the field of applied mathematics was in the theory
and solution of PDEs.

It follows that

    |z| = √(cos² kh + λ² sin² kh) = √(1 + (λ² − 1) sin² kh).

Consider the term (λ² − 1) sin² kh. Since we know that 0 ≤ sin² kh ≤ 1, then if λ² ≤ 1,
we will obtain |z| ≤ 1. It follows that

    λ ≤ 1 ⟹ |z| ≤ 1.

We can summarize by saying:

The condition for stability of this method is λ ≤ 1, where λ = Δt/h. This is commonly
referred to as the Courant-Friedrichs-Lewy (CFL) condition³.

When kh ≠ 0 and λ < 1, we have |z| < 1. This means that the
Lax-Friedrichs method is dissipative but stable when λ < 1.

Remark 4.3.1 The above conclusions are specific to the baby-wave equation

    u_t = u_x,   u(x,0) = g(x).

If however the system we were solving were

    u_t = a u_x,   u(x,0) = g(x),

where a is a given constant, we can show that the corresponding CFL condition for the
Lax-Friedrichs method would be

    |a| Δt/h ≤ 1.

Remark 4.3.2 Lax-Friedrichs is not usually very accurate, but owing to its dissipative
nature it can be useful for complex nonlinear problems that are prone to instabilities.
³In mathematics, the Courant-Friedrichs-Lewy condition (CFL condition) is a necessary condition
for convergence when solving certain partial differential equations (usually hyperbolic PDEs) numerically
by the method of finite differences. It arises when explicit time-marching schemes are used for
the numerical solution. As a consequence, the time step must be less than a certain value in many
explicit time-marching computer simulations, otherwise the simulation will produce incorrect results.
The condition is named after Richard Courant, Kurt Friedrichs, and Hans Lewy, who described it in
their 1928 paper.

4.4 Method 4: Leap-frog
Leap-frog is essentially a central difference scheme in space and time. There are several
compact central difference schemes that can be utilized (of varying accuracy), but we
must bear in mind that the more accurate central difference schemes call for larger
stencils, and hence greater storage space.

4.4.1 Leap-frog (2,2)

Leap-frog (2,2) is a second order accurate central difference scheme in space and in
time. We discretize the baby-wave equation (4.0.1) with leap-frog (2,2) by taking
D0 v_j^n = D0 v_j^n (D0 in time equal to D0 in space) as follows:

    (v_j^{n+1} − v_j^{n−1})/(2Δt) = (v_{j+1}^n − v_{j−1}^n)/(2h).

This simplifies to

    v_j^{n+1} = v_j^{n−1} + λ(v_{j+1}^n − v_{j−1}^n),   (4.4.1)

where λ = Δt/h as usual. This is a two-step scheme: in order to reach the time level
n + 1, you need to "leap over" time level n. This means that the computation of the
solution at each time level requires information from the two previous time levels. This
poses a problem computationally, as all we have initially is the information at time level
1 (provided by our initial condition v_j^1 for all j). In order to obtain our solution at
time level 2 (i.e. to obtain v_j^2), we must use a "start-up" one-step scheme (such as the
three methods we covered previously) once. Then we will be able to use our two-step
leap-frog method to reach time level 3 (i.e. to find v_j^3) by utilizing the information
from time levels 1 and 2 (i.e. the information v_j^1 and v_j^2). Henceforth, the two-step method
can be utilized to leap forward in time to all subsequent time levels v_j^n for n ≥ 3.
Questions must be asked about the accuracy of the method, since we may be compromising
the order of accuracy of the method by utilizing a different scheme once before
starting to use leap-frog (2,2). By theorem, the overall second order accuracy of this
method is still maintained even if the start-up procedure is first order, provided the time
interval it uses to calculate the first step is very small. (Note: the proof of this is beyond
the scope of this course.) In order to minimize the error introduced by utilizing a less
accurate one-step method to start up, bootstrapping from a very small first one-step
interval is sometimes done. For example, we can use a one-step method to go from time 0
to time Δt/16, then switch to the two-step method using the information from time 0
and time Δt/16 to get to time Δt/8. Then we can use the two-step method with information
from time 0 and time Δt/8 to get to time Δt/4. Then we can use the two-step
method with information from time 0 and time Δt/4 to get to time Δt/2. Then we can
use the two-step method with information from time 0 and time Δt/2 to get to time Δt.
We have now obtained the information for the time step Δt that we needed to proceed
with a two-step method starting from time 0. From then onwards, we can use the two-step
method using a normal time-step of Δt; a minimal sketch of the resulting loop is given below.
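The sketch assumes, as in the earlier codes, that the rows of U hold the solution at successive time levels and that lambda = Δt/h; the start-up step shown here is a single plain forward Euler step rather than the bootstrapped version described above:

    % Periodic shift operators for j+1 and j-1
    shiftp = @(w) [w(2:end), w(1)];
    shiftm = @(w) [w(end), w(1:end-1)];
    % Start-up: one D+ in time, D0 in space step to obtain level 2
    U(2,:) = U(1,:) + (lambda/2)*(shiftp(U(1,:)) - shiftm(U(1,:)));
    % Leap-frog (2,2) for all subsequent time levels
    for n = 2:M-1
        U(n+1,:) = U(n-1,:) + lambda*(shiftp(U(n,:)) - shiftm(U(n,:)));
    end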

Von Neumann Stability Analysis


Let us take again

    v_j^n = z^n ξ^j = z^n exp(ikhj).

Substituting into (4.4.1) gives (taking λ = Δt/h)

    z^{n+1} ξ^j = z^{n−1} ξ^j + λ(z^n ξ^{j+1} − z^n ξ^{j−1}).

Dividing through by z^n ξ^j, we have

    z = z^{−1} + λ(ξ − ξ^{−1}) = z^{−1} + λ (exp(ikh) − exp(−ikh)) = z^{−1} + 2iλ sin kh,

which gives

    z² − i(2λ sin kh) z − 1 = 0.

This is a quadratic equation which may be solved to get

    z = [2iλ sin kh ± √((2iλ sin kh)² − 4(−1))]/2 = iλ sin kh ± √(1 − λ² sin² kh).   (4.4.2)

When 1 − λ² sin² kh ≥ 0, z is a complex number with real and imaginary parts, leading
to

    |z| = √(λ² sin² kh + 1 − λ² sin² kh) = 1.

This is true when 1 − λ² sin² kh ≥ 0, i.e. when

    λ² sin² kh ≤ 1.

Now since 0 ≤ sin² kh ≤ 1, this holds for all modes provided λ² ≤ 1, which means λ ≤ 1.
The CFL condition for this method is therefore

    Δt/h ≤ 1.

If instead 1 − λ² sin² kh < 0 for some mode (which requires λ² > 1, i.e. λ > 1), then (4.4.2) is
totally imaginary. One of the two roots then has |z| > 1, which gives unbounded growth.

Remark 4.4.1 If the equation we were solving were u_t = a u_x, where a is a constant, the
CFL condition for the leap-frog (2,2) method would be

    |a| Δt/h ≤ 1.

Remark 4.4.2 Since we get |z| = 1 when we satisfy the CFL condition, this means that
there is no dissipation at all in this scheme. All numerical errors are therefore purely
dispersive when leap-frog (2,2) is utilized to solve the baby-wave equation.

Remark 4.4.3 The method also has no inherent dissipation when used to solve more
complicated hyperbolic systems. This is not ideal if we wish to smooth out unwanted
disturbances in the solution, or if we wish to minimize the numerical errors induced into
the solution when solving more complex nonlinear problems.

Remark 4.4.4 Leap-frog methods involve the "leaping over" of all even or all odd time
levels. This introduces so-called "decoupling" errors into the numerical solution (|z| = 1
means that it is possible that z = −1, which is inconsistent with the exact solution). Such
errors may not be significant when solving simple equations (like the baby-wave equation
in this instance), but they become more significant with the increased complexity of the
system being solved. In order to reduce the decoupling error, it is common practice to
take two half steps (time increment Δt/2) to switch from jumping over the even time
levels to jumping over the odd ones, or vice versa. This switching step is repeated at regular
intervals during the execution of the numerical code. Alternatively, we can recouple the
system by averaging the solution periodically with

    v_j^{n+1/2} = (v_j^{n+1} + v_j^n)/2,   v_j^{n−1/2} = (v_j^n + v_j^{n−1})/2,

and then leap-frog on half-steps. (*Note: care must be taken with indexing here.)

Remark 4.4.5 In practice, when using leap-frog methods to solve more complicated
equations, artificial dissipation can be introduced into the equation being solved to damp
out numerical errors. This may sometimes be necessary since leap-frog methods are
typically non-dissipative by nature.

Remark 4.4.6 The truncation error T for leap-frog (2,2) when used to solve the baby-wave
equation is

    T = ((Δt)² u_ttt − h² u_xxx)/6,

so when Δt = h we can decrease the truncation error (for this equation u_ttt = u_xxx, so
the two terms cancel). This means that we wish to have λ = Δt/h → 1. The scheme
should ideally be run slightly below the CFL limit to minimize errors. This is especially
important because this method is non-dissipative, and errors will accumulate with time.

4.4.2 Leap-frog (2,4)
As the name suggests, this is another central difference in time and space scheme, with
a fourth order accurate spatial central difference operator. Even though this method is
more accurate than the leap-frog (2,2) method, the extra accuracy costs us in computational
power as it requires a bigger stencil. The reason that the accuracy in time is
not increased is that this is already a two-step method in time; any increase in the
temporal stencil would increase the number of previous steps required to leap forward in
time. The simplified update scheme for solving the baby-wave equation (4.0.1) is

    v_j^{n+1} = v_j^{n−1} + (Δt/h)[(4/3)(v_{j+1}^n − v_{j−1}^n) − (1/6)(v_{j+2}^n − v_{j−2}^n)].   (4.4.3)

Von Neumann Stability Analysis


Let us take again

    v_j^n = z^n ξ^j = z^n exp(ikhj).

Substituting into (4.4.3) gives (taking λ = Δt/h)

    z^{n+1} ξ^j = z^{n−1} ξ^j + λ z^n [(4/3)(ξ^{j+1} − ξ^{j−1}) − (1/6)(ξ^{j+2} − ξ^{j−2})].

Dividing through by z^n ξ^j, we have

    z = z^{−1} + λ [(4/3)(ξ − ξ^{−1}) − (1/6)(ξ² − ξ^{−2})] = z^{−1} + iλ [(8/3) sin kh − (1/3) sin 2kh].

This simplifies to

    z² − 2iλ [(4/3) sin kh − (1/6) sin 2kh] z − 1 = 0,

which is a quadratic equation. Writing S = (4/3) sin kh − (1/6) sin 2kh for brevity and
solving for z, we have

    z = [2iλS ± √((2iλS)² + 4)]/2 = iλS ± √(1 − λ²S²).

If we have

    1 − λ²S² ≥ 0,

then z is a complex number, and we can take its modulus as follows:

    |z| = √(λ²S² + 1 − λ²S²) = √1 = 1.

Therefore, we have stability if

    λ² [(4/3) sin kh − (1/6) sin 2kh]² ≤ 1 for all kh,

i.e. if

    λ ≤ 1 / max |(4/3) sin kh − (1/6) sin 2kh| ≈ 0.728.

Once this condition is violated, the scheme is unstable. The CFL condition for applying
the leap-frog (2,4) scheme to the baby-wave equation is λ < 0.728.
We may make the following general observations about the leap-frog (2,4) scheme:

Von Neumann stability analysis for the baby-wave equation (4.0.1) solved with
leap-frog (2,4) predicts a CFL condition of

    λ < 0.728

for λ = Δt/h. Therefore, there is a stricter CFL condition as compared to the leap-frog
(2,2) scheme. This is one of the prices we pay for increased spatial accuracy.
However, because of the higher spatial accuracy, we can use a larger space step-size h than with
leap-frog (2,2), which translates to fewer spatial grid points with leap-frog (2,4). This
may not make much of a difference for a simple one-dimensional problem, but for
two or three dimensional problems, this makes a significant difference in terms of the
computing power that is needed.

Again we can show that |z| = 1 provided the CFL condition is respected, meaning
that the leap-frog (2,4) method is a non-dissipative scheme. Pure dispersive error is
therefore to be expected.

As for the leap-frog (2,2) method, we will observe odd/even time level decoupling.
As previously discussed, |z| = 1 means z = −1 is possible, and z = −1 is inconsistent
(the exact solution has z → 1).

The truncation error for the scheme is of the form T = O((Δt)²) + O(h⁴). It
follows that if Δt is of the same order as h, then T = O(h²) + O(h⁴) = O(h²)
overall, and we lose the advantage we hoped to gain of fourth order accuracy in
space. For this reason, the method works best if we use smaller time step sizes
Δt (i.e. not of the same order as h), while taking care not to violate the CFL
condition.

Since this is a two-step method, a lower order accurate one-step start-up method
is required (as with the leap-frog (2,2) scheme). Bootstrapping is a good idea, as
this method has purely dispersive error, and any numerical errors introduced will
never be lost over time.

Recoupling of odd and even time steps is also desirable. (Note: see the
discussion about the leap-frog (2,2) method for details.)

4.4.3 Artificial Dissipation

For methods like leap-frog where the error is purely dispersive, it is sometimes beneficial
to incorporate an artificial dissipation term into the problem being solved numerically.
This is done to damp out false spatial oscillations in the numerical solution. Since even
spatial derivatives (recall one must be careful with the sign in front of these terms) are
inherently dissipative, the addition of such terms into the equation will have the desired
effect - that of damping out unwanted spurious oscillations due to the shortcomings of
the numerical method selected. For example, it would be expected that the addition of
a term such as −εu, εu_xx, −εu_xxxx etc. (with ε > 0) would introduce some measure of dissipation
into the problem. Here ε is referred to as a tuning parameter, since its magnitude
will affect the impact of the incorporated dissipative term. The ideal tuning parameter
is normally determined after extensive numerical experimentation. The amount of
artificial dissipation introduced is problem dependent. One must be careful to prevent
over-damping, which could adversely affect the integrity of the numerical solution.
It is not true, however, that any even derivative of u (with the appropriate sign in front of the
term) introduced into the equation will have the desired effect with every scheme. To illustrate this point,
let us consider the baby-wave equation u_t = u_x. We may add artificial dissipation by
modifying the equation to be

    u_t = u_x − εu,

where ε > 0 is the tuning parameter. In order to analyze the effect of the term −εu, we
may ignore the term u_x as follows.
Consider

    u_t = −εu,

where ε is small and positive (i.e. 0 < ε ≪ 1). Applying leap-frog (using D0 for the time
discretization of u_t),

    (v^{n+1} − v^{n−1})/(2Δt) = −εv^n.

Let us take v^n = z^n v_0; we have

    z^{n+1} v_0 − z^{n−1} v_0 = −2ε(Δt) z^n v_0.

Dividing through by z^n v_0, we get

    z − z^{−1} = −2ε(Δt),

which is the quadratic

    z² + 2ε(Δt) z − 1 = 0.

Solving this, we have

    z = [−2ε(Δt) ± √((2ε(Δt))² − 4(−1))]/2 = −ε(Δt) ± √(ε²(Δt)² + 1).

In the case where ε(Δt) is small, we obtain

    z ≈ −ε(Δt) ± 1.

The root z ≈ 1 − ε(Δt) has modulus less than one and results in exponential decay of the
solution, but the root z ≈ −1 − ε(Δt) has modulus 1 + ε(Δt) > 1 (as the term ε(Δt) is small
but positive), which leads to exponential growth of the solution, and hence instability. The
addition of the term −εu in this naive way is therefore not recommended, although we would
have expected it to work, since even derivatives of u in an equation are in essence dissipative.
We can however make this work by lagging the dissipative term in time as follows.
If we considered instead

    (v^{n+1} − v^{n−1})/(2Δt) = −εv^{n−1},

we would have (after taking v^n = z^n v_0)

    z^{n+1} v_0 − z^{n−1} v_0 = −2ε(Δt) z^{n−1} v_0.

Dividing through by z^n v_0, we get

    z − z^{−1} = −2ε(Δt) z^{−1},

which is

    z² + [2ε(Δt) − 1] = 0.

Solving this, we have

    z = ±√(1 − 2ε(Δt)).

In the case where ε(Δt) ≪ 1, we obtain two real roots within the unit circle. It follows
then that |z| < 1, which implies stability.
Another method would be to take an average of the dissipation term over time as
follows:

    (v^{n+1} − v^{n−1})/(2Δt) = −ε (v^{n+1} + v^{n−1})/2.

This gives

    v^{n+1} − v^{n−1} = −ε(Δt)(v^{n+1} + v^{n−1}).

Taking v^n = z^n v_0, we have

    (z^{n+1} − z^{n−1}) v_0 = −ε(Δt)(z^{n+1} + z^{n−1}) v_0.

Dividing through by z^n v_0 gives

    z − z^{−1} = −ε(Δt)(z + z^{−1}).

This is

    z² − 1 = −ε(Δt)(z² + 1)
    z²(1 + ε(Δt)) = 1 − ε(Δt)
    z² = (1 − ε(Δt))/(1 + ε(Δt)).

Given that 0 < ε ≪ 1, for both roots |z²| < 1 ⟹ |z| < 1, which implies stability.

Remark 4.4.7 When adding dissipative terms to the leap-frog scheme, these terms must
not be on the same time level n, or they will cause exponential growth of the numerical
solution over time.

Remark 4.4.8 The ideal artificial dissipation term for leap-frog (2,2) is −εh⁴ v_xxxx^{n−1} (i.e.
of order h⁴, to limit the error introduced into the equation by the addition of the term).

Remark 4.4.9 The ideal artificial dissipation term for leap-frog (2,4) is εh⁶ v_xxxxxx^{n−1}
(i.e. of order h⁶, to limit the error introduced into the equation by the addition of the
term).

4.5 Method 5: Lax-Wendroff

Recall that we are trying to solve the baby-wave equation numerically. Using the Taylor
series expansion in t for the exact solution u, we have

    u_j^{n+1} = u_j^n + Δt (u_t)_j^n + ((Δt)²/2!) (u_tt)_j^n + O((Δt)³).   (4.5.1)

Now

    u_t = u_x ⟹ u_tt = u_xt,   u_tx = u_xx.

Provided u and its derivatives are continuous over the domain of interest, we can safely
conclude that u_xt = u_tx, and therefore u_tt = u_xx. We may therefore substitute this into
(4.5.1) to get

    u_j^{n+1} = u_j^n + Δt (u_x)_j^n + ((Δt)²/2!) (u_xx)_j^n + O((Δt)³).   (4.5.2)

Now using central differences in (4.5.2) to discretize the spatial derivatives, we have
(replacing u(x,t) by the numerical approximation v(x,t) as usual)

    v_j^{n+1} = v_j^n + Δt (v_{j+1}^n − v_{j−1}^n)/(2h) + ((Δt)²/2!) (v_{j+1}^n − 2v_j^n + v_{j−1}^n)/h² + O((Δt)³).

This may be simplified (taking λ = Δt/h) to get

    v_j^{n+1} = v_j^n + (λ/2)(v_{j+1}^n − v_{j−1}^n) + (λ²/2)(v_{j+1}^n − 2v_j^n + v_{j−1}^n) + O((Δt)³).   (4.5.3)

The Lax-Wendroff (2,2) scheme is obtained by dropping the O((Δt)³) term in (4.5.3):

    v_j^{n+1} = v_j^n + (λ/2)(v_{j+1}^n − v_{j−1}^n) + (λ²/2)(v_{j+1}^n − 2v_j^n + v_{j−1}^n).   (4.5.4)
Exercise 4.5.1 Prove that Lax-Wendroff is a (2,2) scheme.
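A vectorized Matlab sketch of one Lax-Wendroff update (same conventions as before: v is a row vector of N values, lambda = Δt/h, periodic wrap-around via shifts; the variable names are illustrative):

    % One Lax-Wendroff (2,2) update with periodic wrap-around
    vp = [v(2:end), v(1)];       % v_{j+1}
    vm = [v(end), v(1:end-1)];   % v_{j-1}
    v  = v + (lambda/2)*(vp - vm) + (lambda^2/2)*(vp - 2*v + vm);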

4.5.1 Von Neumann Stability Analysis


Let us take

    v_j^n = z^n ξ^j = z^n exp(ikhj).

Substituting into (4.5.4) gives (taking λ = Δt/h)

    z^{n+1} ξ^j = z^n ξ^j + (λ/2)(z^n ξ^{j+1} − z^n ξ^{j−1}) + (λ²/2)(z^n ξ^{j+1} − 2z^n ξ^j + z^n ξ^{j−1}).

Dividing throughout by z^n ξ^j yields

    z = 1 + (λ/2)(ξ − ξ^{−1}) + (λ²/2)(ξ − 2 + ξ^{−1})
    z = 1 + iλ (exp(ikh) − exp(−ikh))/(2i) + λ² ((exp(ikh) + exp(−ikh))/2 − 1)
    z = 1 + λ²(cos kh − 1) + iλ sin kh.

Therefore

    |z| = √((1 + λ²(cos kh − 1))² + λ² sin² kh).

Since cos kh − 1 = −2 sin²(kh/2), we have

    |z| = √((1 − 2λ² sin²(kh/2))² + λ² sin² kh)
        = √(1 + 4λ⁴ sin⁴(kh/2) − 4λ² sin²(kh/2) + λ² sin² kh).   (4.5.5)

Now since

    sin kh = 2 sin(kh/2) cos(kh/2),

it follows that

    sin² kh = 4 sin²(kh/2) cos²(kh/2) = 4 sin²(kh/2)(1 − sin²(kh/2)) = 4 sin²(kh/2) − 4 sin⁴(kh/2).

Therefore, (4.5.5) becomes

    |z| = √(1 + 4λ⁴ sin⁴(kh/2) − 4λ² sin²(kh/2) + λ²(4 sin²(kh/2) − 4 sin⁴(kh/2)))
        = √(1 + 4λ²(λ² − 1) sin⁴(kh/2)).

As λ² > 0 and 0 ≤ sin⁴(kh/2) ≤ 1, it follows that |z| ≤ 1 provided λ² ≤ 1, i.e. for λ ≤ 1. Note
that for every mode kh except for kh = 0 (taking λ < 1), |z| < 1. Therefore Lax-Wendroff
is a dissipative scheme with a CFL condition λ ≤ 1.

4.6 Method 6: MacCormack’s Method


This method is a variation of the Lax-Wendroff scheme, essentially formed by dividing
the Lax-Wendroff scheme into two half steps called the "predictor" and the "corrector".
For the solution of the baby-wave equation (4.0.1), for each update in time we use the
forward predictor half step

    v̂_j = v_j^n + (Δt/h)(v_{j+1}^n − v_j^n) = v_j^n + Δt (D+ v_j^n),   (4.6.1)

immediately followed by the backward corrector half step

    v_j^{n+1} = (1/2)[v̂_j + v_j^n + (Δt/h)(v̂_j − v̂_{j−1})] = (1/2)[v̂_j + v_j^n + Δt (D− v̂_j)].   (4.6.2)

Overall, this is referred to as the forward-backward (FB) step. We may alternate this
with a backward predictor - forward corrector (BF) step by interchanging the D+ and D−
operators in (4.6.1) and (4.6.2). In practice, FB and BF steps are coded as subroutines
and used alternately for each update in time in the main program:

    FB BF FB BF FB BF FB ...   (4.6.3)

We should note that the scheme (4.6.3) is equivalent to the Lax-Wendroff method
only when applied to linear differential equations in u(x,t). It is therefore not surprising
that the above scheme is dissipative and is a (2,2) method. The advantage is that (4.6.3)
is more easily applied to complicated nonlinear equations. For example, consider the
nonlinear hyperbolic conservation law

    u_t = [f(u)]_x.

We can apply (4.6.1) and (4.6.2) to this as follows:

    v̂_j = v_j^n + Δt (D+ f(v_j^n)),
    v_j^{n+1} = (1/2)[v̂_j + v_j^n + Δt (D− f(v̂_j))],

where the stability bound is

    (Δt/h) max |f'(v_j^n)| ≤ 1.
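A sketch of one forward-backward (FB) MacCormack step for the conservation law u_t = [f(u)]_x (here f is assumed to be supplied as a Matlab function handle acting elementwise on the row vector v, with lambda = Δt/h and periodic wrap-around as before; the variable names are illustrative):

    % Predictor: forward difference on f(v); corrector: backward difference on f(vhat)
    fv   = f(v);
    vhat = v + lambda*([fv(2:end), fv(1)] - fv);                    % Dt*(D+ f(v))
    fh   = f(vhat);
    v    = 0.5*(vhat + v + lambda*(fh - [fh(end), fh(1:end-1)]));   % corrector half step

The BF step is obtained by swapping the two difference directions; alternating FB and BF as in (4.6.3) is then a matter of calling the two subroutines in turn.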
Remark 4.6.1 Higher order extensions of MacCormack's method are possible by utilizing
higher order spatial discretizations. The most common variants are the MacCormack
(2,4) and (2,6) methods. It should be noted that such schemes must be run with much
smaller time steps in order to benefit from the greater accuracy in space (as observed for
the leap-frog (2,4) method).

4.7 Method 7: Runge-Kutta Time Stepping Schemes


Runge-Kutta time differencing schemes⁴ are popular implicit and explicit iterative methods
for the approximation of time derivatives. The most popular of these schemes is
the four stage fourth-order scheme commonly referred to as RK4. Consider the ordinary
differential equation

    u_t = f(u, t).

The RK4 method is comprised of the four stages described below:

    k₁ = Δt f(v^n, t_n),           v̂₁ = v^n + k₁/2,
    k₂ = Δt f(v̂₁, t_n + Δt/2),    v̂₂ = v^n + k₂/2,
    k₃ = Δt f(v̂₂, t_n + Δt/2),    v̂₃ = v^n + k₃,
    k₄ = Δt f(v̂₃, t_n + Δt),      v^{n+1} = v^n + (k₁ + 2k₂ + 2k₃ + k₄)/6.

This is a fourth order accurate scheme, and is therefore storage intensive. It reduces to
Simpson's rule for integration when f = f(t). When RK4 is applied to solve

    u_t = au,   u(0) = u₀

numerically, it can be shown that the method is stable when Δt ≤ 2.8/|a|.

⁴These schemes were developed by the German mathematicians C. Runge and M. W. Kutta in the
early 20th century.
The low-storage version of RK4 for time independent problems of the form

    u_t = f(u)

is as follows:

    k₁ = Δt f(v^n),    v̂₁ = v^n + k₁/4,
    k₂ = Δt f(v̂₁),    v̂₂ = v^n + k₂/3,
    k₃ = Δt f(v̂₂),    v̂₃ = v^n + k₃/2,
    k₄ = Δt f(v̂₃),    v^{n+1} = v^n + k₄.

This fourth order method is low-storage since the individual computations for each stage
need not be saved until the end.
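A minimal Matlab sketch of the low-storage update for u_t = f(u) (f is assumed to be a function handle returning the discretized right-hand side, and dt = Δt; only the level-n solution and one work vector are kept):

    % Low-storage RK4: each stage overwrites the same work vector v
    vn = v;                   % store the level-n solution
    v  = vn + (dt/4)*f(v);    % stage 1: v^n + k1/4
    v  = vn + (dt/3)*f(v);    % stage 2: v^n + k2/3
    v  = vn + (dt/2)*f(v);    % stage 3: v^n + k3/2
    v  = vn +  dt*f(v);       % stage 4: v is now the level n+1 solution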
In order to solve the system (4.0.1) with RK4, we can utilize a fourth order compact
difference operator D₄ to discretize the spatial derivative u_x. The resulting method is a
fourth order accurate scheme in time and space. It should be noted that although RK4
is fourth order accurate in time and therefore highly accurate, it is storage intensive,
and is therefore not a good idea for three dimensional problems.

Chapter 5

Solving Systems of Equations

All the numerical methods we have previously discussed may also be applied to systems
of partial differential equations. The only real difference is in the actual programming
needed to implement the schemes. For the purposes of our discussion, we will consider a
hyperbolic system of travelling waves across the real axis −∞ < x < ∞ for t ≥ 0:

    u⃗_t = A u⃗_x,   u⃗(x,0) = u⃗₀(x),   (5.0.1)

where u⃗ and u⃗₀ are m-vectors and A is an m × m matrix. As the system is hyperbolic,
A is a constant matrix with m real distinct eigenvalues μ₁, μ₂, ..., μ_m (this is a linear
algebra result). The eigenvalues are the characteristic speeds of the system.

5.1 Numerical Solution

Utilizing a leap-frog (2,2) method to solve the system numerically, we have

    (v⃗_j^{n+1} − v⃗_j^{n−1})/(2Δt) = A (v⃗_{j+1}^n − v⃗_{j−1}^n)/(2h),

which is

    v⃗_j^{n+1} = v⃗_j^{n−1} + λA(v⃗_{j+1}^n − v⃗_{j−1}^n).   (5.1.1)

Again we have taken λ = Δt/h.

Von Neumann Stability Analysis


Let us take

    v⃗_j^n = z^n ξ^j v⃗₀ = z^n exp(ikhj) v⃗₀.

Substituting into (5.1.1) gives

    z^{n+1} ξ^j v⃗₀ = z^{n−1} ξ^j v⃗₀ + λA(z^n ξ^{j+1} v⃗₀ − z^n ξ^{j−1} v⃗₀).

Dividing through by z^{n−1} ξ^j, we have

    z² v⃗₀ = v⃗₀ + λAz(ξ − ξ^{−1}) v⃗₀,

which is

    [z² I − 2i ((exp(ikh) − exp(−ikh))/(2i)) λAz − I] v⃗₀ = 0.

This is the same as

    [z² I − (2iλA sin kh) z − I] v⃗₀ = 0,

where I is the m × m identity matrix. Let us denote by M the matrix

    M = z² I − (2iλA sin kh) z − I.

We therefore have M v⃗₀ = 0. If v⃗₀ ≠ 0, the solvability condition states that the
determinant of M must vanish, i.e.

    det[z² I − (2iλA sin kh) z − I] = 0.

As the system we are solving is hyperbolic, there exists a matrix P such that

    PAP^{−1} = Λ,

where Λ is an m × m diagonal matrix with the characteristic speeds μ_j for j = 1 : m (i.e. the
eigenvalues of the matrix A) along its diagonal. Using similarity transforms (as determinants
are invariant under such transforms), we may conclude that

    det[z² I − (2iλΛ sin kh) z − I] = 0.

Since z² I − (2iλΛ sin kh) z − I is a diagonal matrix, at least one of its diagonal entries
must vanish. Hence, for any kh, there are 2m values of z, where for each j = 1, 2, ..., m
the two values z_j are the roots of the quadratic

    z² − (2iλμ_j sin kh) z − 1 = 0.

We have already studied leap-frog (2,2) for the scalar baby-wave equation. The same
analysis applies, leading to the result that

    |z_j| = 1 ⟺ |λμ_j| ≤ 1.

For stability, we must therefore ensure that this holds for every j, and therefore our CFL
condition is

    λ ≤ 1/|μ_max|,

where μ_max is the characteristic speed with the largest magnitude. Note that this corresponds
to the CFL condition for the scalar baby-wave equation u_t = μ_max u_x, as expected.
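In practice, the characteristic speeds and the resulting time-step restriction can be computed directly in Matlab (a sketch; A and h are assumed to be already defined):

    % CFL bound from the eigenvalues (characteristic speeds) of A
    mu     = eig(A);             % characteristic speeds mu_1, ..., mu_m
    dt_max = h/max(abs(mu));     % stability requires lambda <= 1/|mu_max|
    dt     = 0.9*dt_max;         % run slightly below the CFL limit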

Decoupling of the System
Since we are dealing with a hyperbolic system, there exists a matrix P such that

    PAP^{−1} = Λ.

We are solving the coupled system

    u⃗_t = A u⃗_x.   (5.1.2)

If we take w⃗ = P u⃗, then it follows that

    u⃗ = P^{−1} w⃗   ⟹   P^{−1} w⃗_t = A P^{−1} w⃗_x.

Now pre-multiplying both sides by P, we get

    PP^{−1} w⃗_t = PAP^{−1} w⃗_x,

which is equivalent to

    w⃗_t = Λ w⃗_x.   (5.1.3)

This process has decoupled the dependent variables, reducing the coupled system (5.1.2)
into a system of independent scalar equations (5.1.3). Here the scalar variable w_j is the
jth component of w⃗, and it satisfies

    ∂w_j/∂t = μ_j ∂w_j/∂x,   (5.1.4)

and (5.1.4) represents a wave travelling with speed μ_j.

Remark 5.1.1 Note that the decoupled system (5.1.3) is still coupled by the boundary
conditions - we will discuss this in further detail in the chapter dealing with the implementation
of boundary conditions.

Chapter 6

Implicit Schemes

In this section, we will discuss the most commonly utilized implicit schemes. Again,
we will demonstrate the numerical solution of the one dimensional baby-wave equation
travelling along the x axis, −∞ < x < ∞, t ≥ 0,

    u_t = u_x,   u(x,0) = g(x).   (6.0.1)

We continue to utilize regular grids for space and time with space step-size h and time
step-size Δt. Again, the total time taken to reach time point t_n is nΔt (with k = Δt), and the total
distance from the origin to the point x_j is jh. As we have done previously, we refer to
the approximate solution to u(x,t) as v(x,t), and utilize the following notation:

    v_j^n = v(x_j, t_n) ≈ u(x_j, t_n) = u_j^n.

In Chapter 4, we encountered one implicit scheme for solving the baby-wave
equation, namely the backward Euler (1,2) method

    D− v_j^n = D0 v_j^n.

However, the low accuracy and highly dissipative nature of the backward Euler scheme
limit its applicability. It is a common trait for implicit schemes to be unconditionally
stable, but there are other methods that are more accurate, easy to implement, and
may not be as dissipative. Most implicit schemes are ideal for the solution of parabolic
equations, as we will see in a later chapter.

6.1 Crank-Nicolson Scheme
The Crank-Nicolson scheme¹ is second order in time and space (i.e. a (2,2) scheme).
For the baby-wave equation (6.0.1), the scheme is

    D+ v_j^n = (1/2)[D0 v_j^{n+1} + D0 v_j^n].   (6.1.1)

This is

    (v_j^{n+1} − v_j^n)/Δt = (1/2)[(v_{j+1}^{n+1} − v_{j−1}^{n+1})/(2h) + (v_{j+1}^n − v_{j−1}^n)/(2h)],

which simplifies to

    (Δt/4h) v_{j−1}^{n+1} + v_j^{n+1} − (Δt/4h) v_{j+1}^{n+1} = −(Δt/4h) v_{j−1}^n + v_j^n + (Δt/4h) v_{j+1}^n.

Taking λ = Δt/h, we have

    (λ/4) v_{j−1}^{n+1} + v_j^{n+1} − (λ/4) v_{j+1}^{n+1} = −(λ/4) v_{j−1}^n + v_j^n + (λ/4) v_{j+1}^n.   (6.1.2)

We solve this numerically by taking N space and M time points on the regular spatial
and time grids respectively, and writing (6.1.2) in the form

    A v⃗^{n+1} = B v⃗^n,

where A and B are matrices of dimension N × N and v⃗ is an N × 1 column vector.
The updated solution vector is then obtained by inverting the matrix A and calculating

    v⃗^{n+1} = A^{−1}(B v⃗^n).

Further details will be provided in the computer lab sessions for this course.
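As a preview, a sketch of the set-up in Matlab is given below (periodic wrap-around as in the earlier codes; note that solving A v⃗^{n+1} = B v⃗^n with the backslash operator is generally preferable to forming inv(A) explicitly):

    % Tri-diagonal Crank-Nicolson matrices with periodic wrap-around
    A = diag(ones(1,N)) + (lambda/4)*diag(ones(1,N-1),-1) - (lambda/4)*diag(ones(1,N-1),1);
    B = diag(ones(1,N)) - (lambda/4)*diag(ones(1,N-1),-1) + (lambda/4)*diag(ones(1,N-1),1);
    A(1,N) =  lambda/4;  A(N,1) = -lambda/4;   % wrap-around entries for A
    B(1,N) = -lambda/4;  B(N,1) =  lambda/4;   % wrap-around entries for B
    for n = 1:M-1
        U(n+1,:) = (A \ (B*U(n,:)'))';         % solve A v^{n+1} = B v^n
    end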

Remark 6.1.1 The truncation error T for the Crank-Nicolson method when applied to
the baby-wave equation (6.0.1) is

    T = (h²/6 + 5(Δt)²/24) u_xxx,

which implies that the method is (2,2). The proof of this statement is left as an exercise.

¹This numerical scheme was developed by John Crank and Phyllis Nicolson in the mid 20th century.

6.1.1 Von Neumann Stability Analysis
Let us take

    v_j^n = z^n ξ^j = z^n exp(ikhj).

Substituting into (6.1.2) gives

    (λ/4) z^{n+1} ξ^{j−1} + z^{n+1} ξ^j − (λ/4) z^{n+1} ξ^{j+1} = −(λ/4) z^n ξ^{j−1} + z^n ξ^j + (λ/4) z^n ξ^{j+1}.

Dividing through by z^n ξ^j results in

    z[(λ/4) ξ^{−1} + 1 − (λ/4) ξ] = −(λ/4) ξ^{−1} + 1 + (λ/4) ξ.

Hence, using ξ − ξ^{−1} = exp(ikh) − exp(−ikh) = 2i sin kh,

    z[1 − (iλ/2) sin kh] = 1 + (iλ/2) sin kh.

Therefore

    z = (1 + (iλ/2) sin kh)/(1 − (iλ/2) sin kh),

and taking the modulus, we have

    |z| = √(1 + (λ²/4) sin² kh) / √(1 + (λ²/4) sin² kh) = 1.

It follows that the Crank-Nicolson scheme is unconditionally stable when it is utilized
to solve the baby-wave equation (6.0.1). Also, since |z| = 1, the Crank-Nicolson method
has purely dispersive error for this problem (similar to the leap-frog methods).
One way to modify the Crank-Nicolson method to introduce some dissipation (which
serves to damp unwanted oscillations from dispersive errors) is to make a slight change
as follows:
$$D_+ v_j^n = \theta\, D_0 v_j^{n+1} + (1 - \theta)\, D_0 v_j^n. \tag{6.1.3}$$
Setting $\theta = 1$, this scheme becomes
$$D_+ v_j^n = D_0 v_j^{n+1},$$
that is,
$$\frac{v_j^{n+1} - v_j^n}{\Delta t} = \frac{v_{j+1}^{n+1} - v_{j-1}^{n+1}}{2h},$$
$$\frac{\Delta t}{2h}\, v_{j-1}^{n+1} + v_j^{n+1} - \frac{\Delta t}{2h}\, v_{j+1}^{n+1} = v_j^n. \tag{6.1.4}$$
When $\lambda = \Delta t / h$, we see that $(6.1.4)$ is identical to the unconditionally stable and
dissipative Backward Euler in time, Central Difference in space scheme $(4.2.1)$. Setting
$\theta = \frac{1}{2}$ instead, we get
$$D_+ v_j^n = \frac{1}{2}\left(D_0 v_j^{n+1} + D_0 v_j^n\right),$$
which is the unconditionally stable and dispersive Crank-Nicolson scheme $(6.1.1)$. The
Backward Euler in time, Central Difference in space scheme is a $(1,2)$ method, and the
Crank-Nicolson method is $(2,2)$. In practice, we wish to get the best accuracy and not
too much dissipation, so the code is most effective when $\theta$ is close to but greater than
$\frac{1}{2}$, i.e.
$$\theta = \frac{1}{2} + \theta_1\, \Delta t, \qquad \theta_1 > 0.$$
In practice, the code is written with $\theta$ as a variable so that it may be adjusted accordingly
for best results.
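As a hypothetical illustration of this, only the matrix assembly of the earlier sketch changes when $\theta$ is introduced (all sizes and the choice of $\theta_1$ below are illustrative):

```python
import numpy as np

# Sketch: theta-weighted matrices for (6.1.3) on a periodic grid.
# theta = 1/2 recovers Crank-Nicolson (6.1.1); theta = 1 recovers the
# backward Euler in time, central difference in space scheme (6.1.4).
N, h = 200, 0.01
dt = 0.5 * h
lam = dt / h
theta1 = 1.0                                    # illustrative theta_1 > 0
theta = 0.5 + theta1 * dt                       # theta = 1/2 + theta_1*dt
A = np.eye(N)
B = np.eye(N)
for j in range(N):
    A[j, (j - 1) % N] += theta * lam / 2        # implicit portion
    A[j, (j + 1) % N] -= theta * lam / 2
    B[j, (j - 1) % N] -= (1 - theta) * lam / 2  # explicit portion
    B[j, (j + 1) % N] += (1 - theta) * lam / 2
# Time stepping then proceeds exactly as before: v <- solve(A, B @ v).
```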

6.2 Compact Implicit Schemes


In order to construct higher order implicit schemes, care must be taken to restrict the
size of the stencil of the resulting method. This is important as we will need to take
the inverse of a very large matrix, and sparse matrices require less computing power to
invert. Larger stencils result in a greater number of non-zero diagonals in the defining
matrix. The idea is therefore to construct compact higher order accurate methods with
the smallest possible stencil - preferably a 3-point stencil so that the resulting matrix is
tri-diagonal.
Let us consider the following (taken from the Taylor series expansions of $u(x,t)$ as
previously described):
$$u_x = D_0 u - \frac{h^2}{6}\, u_{xxx} + O(h^4). \tag{6.2.1}$$
Now we know that
$$u_{xx} = \frac{u_{j-1} - 2u_j + u_{j+1}}{h^2} + O(h^2) = D_+ D_- u + O(h^2). \tag{6.2.2}$$
We can use $(6.2.2)$ in $(6.2.1)$ to write
$$D_0 u = u_x + \frac{h^2}{6}\, D_+ D_-\, u_x + O(h^4),$$
which is
$$D_0 u = \left(I + \frac{h^2}{6}\, D_+ D_-\right) u_x + O(h^4),$$
where $I$ is the identity matrix. We may therefore say that
$$u_x = \left(I + \frac{h^2}{6}\, D_+ D_-\right)^{-1} D_0 u + O(h^4). \tag{6.2.3}$$
We now have a fourth order accurate approximation to $u_x$ using a 3-point stencil. We
can combine this fourth order approximation $(6.2.3)$ with RK4 to obtain a $(4,4)$ scheme
to solve the baby-wave equation $(6.0.1)$. Note however that because we are using RK4,
the overall method is no longer unconditionally stable although it is implicit.
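For illustration, here is a minimal sketch of evaluating $(6.2.3)$ numerically: we solve the tridiagonal system $\left(I + \frac{h^2}{6}D_+D_-\right)w = D_0 u$ for $w \simeq u_x$. The periodic end points and the test function are assumptions made purely to keep the sketch self-contained:

```python
import numpy as np

# Sketch: fourth order compact approximation w ~= u_x from (6.2.3),
# obtained by solving (I + h^2/6 D+D-) w = D0 u on a periodic grid.
N = 128
h = 2 * np.pi / N
x = np.arange(N) * h
u = np.sin(x)                       # test function with known derivative

# Right-hand side D0 u (second order central difference).
rhs = (np.roll(u, -1) - np.roll(u, 1)) / (2 * h)

# Left-hand side matrix I + h^2/6 D+D-; the h^2 factors cancel, leaving
# 1/6 on the off-diagonals and 2/3 on the main diagonal.
C = np.zeros((N, N))
for j in range(N):
    C[j, j] = 2.0 / 3.0
    C[j, (j - 1) % N] = 1.0 / 6.0
    C[j, (j + 1) % N] = 1.0 / 6.0

w = np.linalg.solve(C, rhs)         # fourth order accurate u_x
print(np.max(np.abs(w - np.cos(x))))
```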

6.3 Semi-Implicit Schemes


Semi-implicit schemes are, as the name suggests, methods that are implicit on some
terms and explicit on others. Such methods are commonly used for solving equations
that contain a non-linear term. Consider for example the nonlinear advection-reaction
equation
$$u_t = u_x + R(u) \tag{6.3.1}$$
with advective term $u_x$ and nonlinear reaction term $R(u)$. This can be solved by combining
the implicit $(2,2)$ Crank-Nicolson scheme - applied to the terms $u_t = u_x$ - with
the explicit second order Adams-Bashforth scheme - acting on $R(u)$ - as follows:
$$D_+ v_j^n = \frac{1}{2}\left(D_0 v_j^{n+1} + D_0 v_j^n\right) + \frac{3}{2} R\left(v_j^n\right) - \frac{1}{2} R\left(v_j^{n-1}\right).$$
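A minimal sketch of one step of this semi-implicit method is given below, reusing the Crank-Nicolson matrices $A$ and $B$ of $(6.1.2)$; the logistic reaction function chosen is purely illustrative:

```python
import numpy as np

# Sketch: one step of Crank-Nicolson (advection) + Adams-Bashforth 2
# (reaction) for u_t = u_x + R(u). A and B are the Crank-Nicolson
# matrices of (6.1.2); the reaction R below is an illustrative choice.
def R(v):
    return v * (1.0 - v)

def semi_implicit_step(v_now, v_prev, A, B, dt):
    # Implicit CN treatment of u_x, explicit AB2 treatment of R(u).
    rhs = B @ v_now + dt * (1.5 * R(v_now) - 0.5 * R(v_prev))
    return np.linalg.solve(A, rhs)
```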

6.4 Implicit Schemes for Systems


In general any explicit or implicit scheme can be utilized to solve a system
$$\vec{u}_t = A\, \vec{u}_x, \qquad \vec{u}(x,0) = \vec{u}_0(x). \tag{6.4.1}$$
We have seen that for the scalar case $u_t = a u_x$, the simplest implicit method requires
the inversion of at least a tri-diagonal matrix. This is relatively easy to accomplish.

However, if we wish to do the same for the system $(6.4.1)$, we will need to create a block
tri-diagonal system of equations, which must be solved block by block by inversion.
Needless to say, this is highly inefficient in terms of computing power. For this reason,
it is much more sensible to employ an appropriate explicit method for solving any given
system of equations.

Chapter 7

Parabolic Problems

For the purposes of illustration, we will consider one of the most basic parabolic problems
- the one dimensional heat equation

$$u_t = a\, u_{xx}, \qquad a > 0. \tag{7.0.1}$$

We may take into consideration all the previously discussed schemes. However in each
case, a careful study of the stability of the scheme must be carried out before attempting
to utilize the method. It will also be important to determine what sort of error to expect,
the truncation error, and the order of accuracy for each scheme. For our purposes, we
will again create two regular grids (i.e. equally spaced points along the grid) for space
and time. The space step-size will be denoted by $h$, and the time step-size will be referred
to as $\Delta t$. As we have done previously, we refer to the approximate solution to $u(x,t)$
as $v(x,t)$, and we utilize the notation
$$v_j^n = v(x_j, t_n) \simeq u(x_j, t_n) = u_j^n$$
to depict the value of the approximate solution at space point $x_j$ and time point $t_n$.

7.1 Method 1: $D_+$ in time, $D_+D_-$ in Space
The simplest possible explicit scheme for solving $(7.0.1)$ is the $(1,2)$ forward Euler in
time, central difference in space method
$$D_+ v_j^n = a\, D_+ D_- v_j^n,$$
which is
$$\frac{v_j^{n+1} - v_j^n}{\Delta t} = a\, \frac{v_{j-1}^n - 2 v_j^n + v_{j+1}^n}{h^2}.$$
This simplifies to
$$v_j^{n+1} = \frac{a\,\Delta t}{h^2}\, v_{j-1}^n + \left(1 - 2\,\frac{a\,\Delta t}{h^2}\right) v_j^n + \frac{a\,\Delta t}{h^2}\, v_{j+1}^n. \tag{7.1.1}$$

7.1.1 Von Neumann Stability Analysis
Let us take
$$v_j^n = z^n \kappa^j = z^n \exp(ikhj).$$
Substituting into $(7.1.1)$ gives
$$z^{n+1}\kappa^j = \frac{a\,\Delta t}{h^2}\, z^n \kappa^{j-1} + \left(1 - 2\,\frac{a\,\Delta t}{h^2}\right) z^n \kappa^j + \frac{a\,\Delta t}{h^2}\, z^n \kappa^{j+1}.$$
Dividing through by $z^n \kappa^j$, we have
$$z = 1 - 2\,\frac{a\,\Delta t}{h^2} + \frac{a\,\Delta t}{h^2}\left(\kappa^{-1} + \kappa\right) = 1 - 2\,\frac{a\,\Delta t}{h^2} + 2\,\frac{a\,\Delta t}{h^2}\,\frac{\exp(ikh) + \exp(-ikh)}{2}$$
$$= 1 - 2\,\frac{a\,\Delta t}{h^2} + 2\,\frac{a\,\Delta t}{h^2}\cos kh = 1 - 2\,\frac{a\,\Delta t}{h^2}\left(1 - \cos kh\right).$$
Since $a > 0$ and $1 - \cos kh \geq 0$, it follows that $z \leq 1$. In order to have stability (i.e.
$|z| \leq 1$), we need
$$\frac{a\,\Delta t}{h^2}\left(1 - \cos kh\right) \leq 1 \iff \frac{a\,\Delta t}{h^2} \leq \frac{1}{1 - \cos kh}$$
for all values $kh$. Recall that we are restricted by
$$-\pi \leq kh \leq \pi.$$
The largest possible value of $1 - \cos kh = 1 - (-1) = 2$ occurs when $kh = \pm\pi$. It follows
that the CFL condition for this scheme is
$$\frac{a\,\Delta t}{h^2} \leq \frac{1}{2} \iff \Delta t \leq \frac{h^2}{2a}.$$
This method has a very strict CFL condition, and is therefore not the method of choice
for solving the heat equation (high "computing cost" as the time step-size is very restricted).
Note that the method is also dissipative, since for most modes $|z| < 1$.
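For completeness, a short sketch of the scheme $(7.1.1)$ with the time step chosen at the CFL bound; the end values are held fixed as illustrative Dirichlet data (boundary conditions are treated properly in Chapter 8):

```python
import numpy as np

# Sketch: forward Euler / central difference for u_t = a u_xx, (7.1.1),
# with dt chosen at the CFL limit dt = h^2 / (2a).
a, L, N = 1.0, 1.0, 101
h = L / (N - 1)
dt = h**2 / (2 * a)                 # strict CFL condition of this scheme
mu = a * dt / h**2                  # equals 1/2 here

x = np.linspace(0, L, N)
v = np.sin(np.pi * x)               # sample initial condition
for n in range(1000):
    v[1:-1] = mu * v[:-2] + (1 - 2 * mu) * v[1:-1] + mu * v[2:]
    # v[0] and v[-1] stay fixed (illustrative Dirichlet data)
```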

7.2 Method 2: $D_-$ in time, $D_+D_-$ in Space
As previously discussed, it is impractical to utilize a scheme with an extreme CFL condition.
Since implicit schemes are typically unconditionally stable, we turn our attention
to the simplest possible implicit scheme; the $(1,2)$ backward Euler in time, central difference
in space method for solving $(7.0.1)$ as follows:
$$D_- v_j^n = a\, D_+ D_- v_j^n.$$
This is
$$\frac{v_j^{n} - v_j^{n-1}}{\Delta t} = a\, \frac{v_{j-1}^n - 2 v_j^n + v_{j+1}^n}{h^2},$$
which simplifies to
$$-\frac{a\,\Delta t}{h^2}\, v_{j-1}^n + \left(1 + 2\,\frac{a\,\Delta t}{h^2}\right) v_j^n - \frac{a\,\Delta t}{h^2}\, v_{j+1}^n = v_j^{n-1},$$
and may be rewritten as
$$-\frac{a\,\Delta t}{h^2}\, v_{j-1}^{n+1} + \left(1 + 2\,\frac{a\,\Delta t}{h^2}\right) v_j^{n+1} - \frac{a\,\Delta t}{h^2}\, v_{j+1}^{n+1} = v_j^{n}. \tag{7.2.1}$$
We discretize the x axis into $N$ regularly spaced points with step-size $h$, and the t axis
into $M$ regularly spaced time points with step-size $\Delta t$. This allows us to write $(7.2.1)$ in
the form
$$A\, \vec{v}^{\,n+1} = \vec{v}^{\,n},$$
where $A$ is an $N \times N$ matrix, and $\vec{v}^{\,n}$, $\vec{v}^{\,n+1}$ are $N \times 1$ column vectors. The update is
calculated from $\vec{v}^{\,n+1} = A^{-1}\, \vec{v}^{\,n}$. Note that the boundary conditions must be taken into
account as usual, or the problem we are attempting to solve will be ill-posed.
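A minimal sketch of this backward Euler solve, using a sparse tridiagonal matrix (interior rows only; the first and last rows would be overwritten by the boundary treatment of Chapter 8, and the sizes are illustrative):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Sketch: backward Euler / central difference for u_t = a u_xx, (7.2.1).
a, N, h, dt = 1.0, 100, 0.01, 0.01
mu = a * dt / h**2
A = sp.diags([-mu, 1 + 2 * mu, -mu], [-1, 0, 1],
             shape=(N, N), format="csc")

v = np.ones(N)                      # illustrative initial data
for n in range(100):
    v = spla.spsolve(A, v)          # solve A v^{n+1} = v^n each step
```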

7.2.1 Von Neumann Stability Analysis
Let us take
$$v_j^n = z^n \kappa^j = z^n \exp(ikhj).$$
Substituting into $(7.2.1)$ gives
$$-\frac{a\,\Delta t}{h^2}\, z^{n+1}\kappa^{j-1} + \left(1 + 2\,\frac{a\,\Delta t}{h^2}\right) z^{n+1}\kappa^{j} - \frac{a\,\Delta t}{h^2}\, z^{n+1}\kappa^{j+1} = z^n \kappa^j.$$
Dividing through by $z^n \kappa^j$, we have
$$z\left(1 + 2\,\frac{a\,\Delta t}{h^2} - \frac{a\,\Delta t}{h^2}\left(\kappa + \kappa^{-1}\right)\right) = 1.$$
This can be written as
$$z\left(1 + 2\,\frac{a\,\Delta t}{h^2} - 2\,\frac{a\,\Delta t}{h^2}\,\frac{\exp(ikh) + \exp(-ikh)}{2}\right) = 1,$$
$$z\left(1 + 2\,\frac{a\,\Delta t}{h^2}\left(1 - \cos kh\right)\right) = 1,$$
$$z = \frac{1}{1 + 2\,\dfrac{a\,\Delta t}{h^2}\left(1 - \cos kh\right)}.$$
Since $a > 0$ and $0 \leq 1 - \cos kh \leq 2$, it follows that $|z| \leq 1$ for all values of $kh$. Therefore,
the method is unconditionally stable. As $|z| < 1$ for most modes, the method is also
dissipative.

7.2.2 Suitability of Method
It can be shown that the truncation error $T$ for this $(1,2)$ method is
$$T = -\frac{\Delta t}{2}\, u_{tt} - \frac{a h^2}{12}\, u_{xxxx}.$$
(*The proof of this is left as an exercise.) Therefore, if we can make $\Delta t$ of order $h^2$, we
reduce the numerical error as $T \to 0$. Unless this is done, our numerical solution will
be strongly diffused over time, which is not at all desirable. However, by doing this, we
are imposing the same time-step restriction of the previous explicit method $D_+ v_j^n = a\, D_+ D_- v_j^n$ that
we were trying to escape from, and we are losing the advantage we had of removing the
restriction on the time step by choosing an unconditionally stable method. Clearly, the
method $D_- v_j^n = a\, D_+ D_- v_j^n$ is not the ideal choice for solving the heat equation $(7.0.1)$.

7.3 Method 3: Leap Frog - the Wrong Choice
As neither of the previous methods is suitable for solving the heat equation $(7.0.1)$,
the next natural choice is leap frog $(2,2)$, i.e., a central difference in time and space:
$$D_0 v_j^n = a\, D_+ D_- v_j^n.$$
This is
$$\frac{v_j^{n+1} - v_j^{n-1}}{2\,\Delta t} = a\, \frac{v_{j-1}^n - 2 v_j^n + v_{j+1}^n}{h^2}.$$
Simplifying, we have the two-step method
$$v_j^{n+1} = \frac{2a\,\Delta t}{h^2}\, v_{j-1}^n - \frac{4a\,\Delta t}{h^2}\, v_j^n + \frac{2a\,\Delta t}{h^2}\, v_{j+1}^n + v_j^{n-1}. \tag{7.3.1}$$
(*Recall that this is a two-step method as we have three time levels.)

7.3.1 Von Neumann Stability Analysis
Let us take
$$v_j^n = z^n \kappa^j = z^n \exp(ikhj).$$
Substituting into $(7.3.1)$ gives
$$z^{n+1}\kappa^j = \frac{2a\,\Delta t}{h^2}\, z^n \kappa^{j-1} - \frac{4a\,\Delta t}{h^2}\, z^n \kappa^j + \frac{2a\,\Delta t}{h^2}\, z^n \kappa^{j+1} + z^{n-1}\kappa^j.$$
Dividing through by $z^{n-1}\kappa^j$, we get
$$z^2 = \left(\frac{2a\,\Delta t}{h^2}\left(\kappa + \kappa^{-1}\right) - \frac{4a\,\Delta t}{h^2}\right) z + 1 = \frac{4a\,\Delta t}{h^2}\left(\cos kh - 1\right) z + 1.$$
This quadratic equation can be expressed in the form
$$z^2 - 2\mu\left(\cos kh - 1\right) z - 1 = 0,$$
where $\mu = \dfrac{2a\,\Delta t}{h^2}$. This can be solved to get
$$z = \frac{2\mu\left(\cos kh - 1\right) \pm \sqrt{4\mu^2\left(1 - \cos kh\right)^2 - 4\,(-1)}}{2}
= \mu\left(\cos kh - 1\right) \pm \sqrt{\mu^2\left(1 - \cos kh\right)^2 + 1}.$$
Now since $\cos kh - 1 \leq 0$, $\mu > 0$ and $\mu^2\left(1 - \cos kh\right)^2 + 1 \geq 1$, it follows that one of
the two roots will have a modulus greater than one. This means that the method is
unconditionally unstable. It is therefore not at all possible to utilize this method to
solve the heat equation $(7.0.1)$.

7.4 Method 4: Dufort-Frankel Method
We have just seen that any attempt to utilize leap frog will fail for the heat equation.
Naturally, the question that may be asked is if the leap frog method may be adapted
somehow to make it suitable for use. The Dufort-Frankel scheme is an attempt to do
this. The original leap frog $(2,2)$ method was
$$v_j^{n+1} - v_j^{n-1} = \frac{2a\,\Delta t}{h^2}\left(v_{j-1}^n + v_{j+1}^n\right) - \frac{4a\,\Delta t}{h^2}\, v_j^n.$$
We now replace the last term $v_j^n$ by an average over the two adjacent time steps,
$\frac{v_j^{n+1} + v_j^{n-1}}{2}$, to obtain
$$v_j^{n+1} - v_j^{n-1} = \frac{2a\,\Delta t}{h^2}\left(v_{j-1}^n + v_{j+1}^n\right) - \frac{4a\,\Delta t}{h^2}\left(\frac{v_j^{n+1} + v_j^{n-1}}{2}\right), \tag{7.4.1}$$
which is the Dufort-Frankel scheme
$$v_j^{n+1}\left(1 + \frac{2a\,\Delta t}{h^2}\right) = \left(1 - \frac{2a\,\Delta t}{h^2}\right) v_j^{n-1} + \frac{2a\,\Delta t}{h^2}\left(v_{j+1}^n + v_{j-1}^n\right). \tag{7.4.2}$$
It is easy to show that this method is unconditionally stable (*the proof of this is left
as an exercise). However, this does not mean that the scheme is the perfect choice for
solving the heat equation $(7.0.1)$. In order to demonstrate why this is so, let us rewrite
the scheme $(7.4.2)$ by adding and subtracting the term $\frac{2a\,\Delta t}{h^2}\, 2v_j^n$ in $(7.4.1)$ as follows:
$$v_j^{n+1} - v_j^{n-1} = \frac{2a\,\Delta t}{h^2}\left(v_{j-1}^n + v_{j+1}^n - 2v_j^n\right) - \frac{2a\,\Delta t}{h^2}\left(v_j^{n+1} + v_j^{n-1} - 2v_j^n\right).$$
This can be written as
$$\frac{v_j^{n+1} - v_j^{n-1}}{2\,\Delta t} = a\, \frac{v_{j-1}^n + v_{j+1}^n - 2v_j^n}{h^2} - a\left(\frac{\Delta t}{h}\right)^2 \frac{v_j^{n+1} + v_j^{n-1} - 2v_j^n}{(\Delta t)^2},$$
that is,
$$D_0 v_j^n = a\, D_+ D_- v_j^n - a\,\lambda^2\, D_+ D_- v_j^n\big|_{\text{in time}},$$
where $\lambda = \Delta t / h$ and the final second difference acts in time. This is essentially a numerical
approximation for the equation
$$u_t = a\, u_{xx} - a\,\lambda^2\, u_{tt}.$$
Unless we choose $\Delta t$ and $h$ wisely to make $\lambda \to 0$ as $\Delta t \to 0$ and $h \to 0$, our numerical
scheme will not be consistent. This is a serious limitation, and it implies that the Dufort-
Frankel method is not the ideal method for solving the heat equation even though it is
unconditionally stable.
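For reference, a minimal sketch of the Dufort-Frankel update $(7.4.2)$; the trivial start-up level and the choice $\Delta t \propto h^2$ (which keeps $\lambda = \Delta t/h$ small, as consistency requires) are illustrative assumptions:

```python
import numpy as np

# Sketch: Dufort-Frankel scheme (7.4.2) for u_t = a u_xx.
# Consistency demands dt/h -> 0, e.g. dt proportional to h^2.
a, N, h = 1.0, 101, 0.01
dt = 0.1 * h**2                     # keeps lambda = dt/h small
mu = 2 * a * dt / h**2

v_old = np.ones(N)                  # level n-1 (illustrative data)
v_now = v_old.copy()                # level n, via a trivial start-up
for n in range(1000):
    v_new = v_now.copy()
    v_new[1:-1] = ((1 - mu) * v_old[1:-1]
                   + mu * (v_now[2:] + v_now[:-2])) / (1 + mu)
    v_old, v_now = v_now, v_new
```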

7.5 Method 5: Crank-Nicolson


Out of all the methods we have considered so far in this course, the $(2,2)$ Crank-Nicolson
scheme is the stand-out choice for solving the heat equation $(7.0.1)$. The scheme when
applied to the heat equation $(7.0.1)$ is of the form
$$D_+ v_j^n = \frac{a}{2}\left(D_+ D_- v_j^{n+1} + D_+ D_- v_j^n\right).$$
This is equivalent to
$$\frac{v_j^{n+1} - v_j^n}{\Delta t} = \frac{a}{2}\left(\frac{v_{j-1}^{n+1} + v_{j+1}^{n+1} - 2v_j^{n+1}}{h^2} + \frac{v_{j-1}^{n} + v_{j+1}^{n} - 2v_j^{n}}{h^2}\right),$$
and it simplifies to
$$-\frac{a\,\Delta t}{2h^2}\, v_{j-1}^{n+1} + \left(1 + \frac{a\,\Delta t}{h^2}\right) v_j^{n+1} - \frac{a\,\Delta t}{2h^2}\, v_{j+1}^{n+1}
= \frac{a\,\Delta t}{2h^2}\, v_{j-1}^{n} + \left(1 - \frac{a\,\Delta t}{h^2}\right) v_j^{n} + \frac{a\,\Delta t}{2h^2}\, v_{j+1}^{n}. \tag{7.5.1}$$
If we discretize the x axis into $N$ regularly spaced points with step-size $h$, and the t axis
into $M$ regularly spaced time points with step-size $\Delta t$, we may write $(7.5.1)$ in the form
$$A\, \vec{v}^{\,n+1} = B\, \vec{v}^{\,n},$$
where $A$ and $B$ are $N \times N$ matrices, and $\vec{v}^{\,n}$, $\vec{v}^{\,n+1}$ are $N \times 1$ column vectors. Each
update is then calculated from $\vec{v}^{\,n+1} = A^{-1} B\, \vec{v}^{\,n}$. Note that this is an implicit method,
and that the boundary conditions must be taken into account as usual.
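A minimal sketch of this Crank-Nicolson solve for the interior rows of $(7.5.1)$, factoring the implicit matrix once and reusing it at every step (sizes illustrative; boundary rows omitted):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Sketch: Crank-Nicolson for u_t = a u_xx, interior rows of (7.5.1).
a, N, h, dt = 1.0, 100, 0.01, 0.01
mu = a * dt / (2 * h**2)
A = sp.diags([-mu, 1 + 2 * mu, -mu], [-1, 0, 1], shape=(N, N), format="csc")
B = sp.diags([mu, 1 - 2 * mu, mu], [-1, 0, 1], shape=(N, N), format="csr")

v = np.ones(N)                      # illustrative initial data
lu = spla.splu(A)                   # factor once, reuse every step
for n in range(100):
    v = lu.solve(B @ v)
```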

7.5.1 Von Neumann Stability Analysis
Let us take
$$v_j^n = z^n \kappa^j = z^n \exp(ikhj).$$
Substituting into $(7.5.1)$ and simplifying, we obtain (taking $\mu = \frac{a\,\Delta t}{2h^2}$)
$$-\mu\, z^{n+1}\kappa^{j-1} + \left(1 + 2\mu\right) z^{n+1}\kappa^{j} - \mu\, z^{n+1}\kappa^{j+1}
= \mu\, z^{n}\kappa^{j-1} + \left(1 - 2\mu\right) z^{n}\kappa^{j} + \mu\, z^{n}\kappa^{j+1}.$$
Dividing through by $z^n \kappa^j$, we have
$$\left(-2\mu\,\frac{\kappa + \kappa^{-1}}{2} + 1 + 2\mu\right) z = 2\mu\,\frac{\kappa + \kappa^{-1}}{2} + 1 - 2\mu.$$
We know that
$$\frac{\kappa + \kappa^{-1}}{2} = \frac{\exp(ikh) + \exp(-ikh)}{2} = \cos kh,$$
so we may write
$$z = \frac{2\mu\cos kh + 1 - 2\mu}{-2\mu\cos kh + 1 + 2\mu} = \frac{1 - 2\mu\left(1 - \cos kh\right)}{2\mu\left(1 - \cos kh\right) + 1}.$$
Since we know that $\mu > 0$ always, and $0 \leq 1 - \cos kh \leq 2$, it follows that $|z| \leq 1$
always. Therefore the Crank-Nicolson method is unconditionally stable when used to
solve the heat equation $(7.0.1)$. It is also dissipative since for most modes $|z| < 1$. The
Crank-Nicolson scheme is the best method we have seen so far for solving the heat equation.

Chapter 8

Boundary Conditions

8.1 Numerical Treatment of Boundary Conditions


In order to use finite difference schemes for the approximation of initial boundary value
problems, some consideration needs to be given to what happens on the boundary. The
initial conditions can easily be defined as the initial point from which the solution is
marched forward in time. As we shall see, approximations are needed at the boundary
for the scheme to be properly implemented.
Consider for example the problem
$$u_t = u_x, \qquad u(x,0) = g(x), \qquad 0 < x < L, \quad t \geq 0.$$
This represents a baby-wave travelling to the left from the point $x = L$. As such, a
boundary condition is required at $x = L$, but not at $x = 0$. Suppose we wish to solve
this problem using a $(1,2)$ forward Euler in time, central difference in space method
$$\frac{v_j^{n+1} - v_j^n}{\Delta t} = \frac{v_{j+1}^n - v_{j-1}^n}{2h},$$
which is
$$v_j^{n+1} = -\frac{\Delta t}{2h}\, v_{j-1}^n + v_j^n + \frac{\Delta t}{2h}\, v_{j+1}^n,$$
where $j = 1 : N$ as before. Although there is no boundary condition imposed at the
point $x = 0$, there is a numerical problem as we cannot evaluate the scheme at the first
node position $j = 1$, or we would obtain
$$v_1^{n+1} = -\frac{\Delta t}{2h}\, v_0^n + v_1^n + \frac{\Delta t}{2h}\, v_2^n,$$

and $v_0^n$ does not exist (i.e. there is no grid point to the left of the node at $x = 0$). This is
referred to as a ghost point. This forces us to perform some sort of numerical boundary
approximation at the point $x = 0$.
The method we have chosen above has a three-point stencil. Clearly, if we were to
use a larger stencil such as leap-frog $(2,4)$,
$$v_j^{n+1} = v_j^{n-1} + \frac{\Delta t}{h}\left[\frac{4}{3}\left(v_{j+1}^n - v_{j-1}^n\right) - \frac{1}{6}\left(v_{j+2}^n - v_{j-2}^n\right)\right], \tag{8.1.1}$$
then at $j = 1$, we obtain two ghost points $v_0^n$ and $v_{-1}^n$ to the left of the boundary $x = 0$.
Larger stencils clearly require more complicated numerical boundary treatments.
The problems described above have come into play since we utilized central difference
approximations for the spatial derivative. If we had chosen instead to use a purely upwind
method like $D_+ v_j^n$ to approximate $u_x$, we would have had
$$\frac{v_j^{n+1} - v_j^n}{\Delta t} = \frac{v_{j+1}^n - v_j^n}{h}.$$
This gives
$$v_j^{n+1} = \left(1 - \frac{\Delta t}{h}\right) v_j^n + \frac{\Delta t}{h}\, v_{j+1}^n,$$
and at the point $j = 1$, we would have obtained
$$v_1^{n+1} = \left(1 - \frac{\Delta t}{h}\right) v_1^n + \frac{\Delta t}{h}\, v_2^n.$$
Clearly, this is no problem to implement. We note that an upwind scheme follows the
direction of propagation of information, which precludes any need for special boundary
treatment at $x = 0$. The natural conclusion is to always use upwind schemes.
This is a good conclusion if we are dealing with a scalar equation like the one
we considered above; however, such schemes pose a problem when we are solving more
complicated hyperbolic systems where waves are propagating in different directions. An
alternative in such a situation would be to decouple the system and use upwind
differences for each characteristic direction as needed, but this is both computationally
expensive and tedious. In such cases, a central difference approach may be easier to
implement, but boundary treatment will be necessary to utilize the resulting scheme.
An important question that we must ask ourselves is what happens to the stability
and consistency of the overall method when we are forced to make approximations on the
boundary. An important theorem (which we shall utilize but not prove in this course)
states that if we lose only one order of accuracy on the boundary, we still maintain the
overall accuracy and stability of the method, and therefore achieve convergence.

Theorem 2 For any given numerical scheme of order $(p, q)$, it is sufficient for the
boundary treatment to be of order $(p-1, q-1)$ for the overall accuracy of the scheme
to still be of order $(p, q)$.

8.2 Extrapolating Boundary Conditions


In order to implement a finite difference scheme, it is often necessary to provide a
numerical approximation for the solution at a so-called "ghost point". In the absence
of a boundary condition, as in the example of the baby-wave equation in the previous
section which had no boundary condition at the point $x = 0$, it is necessary to find a
way around this problem. One such method is extrapolation of the solution curve to
estimate the solution value at the ghost point(s).
Zero order extrapolation (see Figure 8.1) refers to the process of equating the
solution at the ghost point to the value of the solution at the boundary. This simple
extrapolation is the result of a straight line connecting the ghost point to the last nodal
point at the boundary. We refer to the previous example

$$u_t = u_x, \qquad u(x,0) = g(x), \qquad 0 < x < L, \quad t \geq 0, \tag{8.2.1}$$
which has no boundary condition at the left end $x = 0$. When using the scheme
$$v_j^{n+1} = -\frac{\Delta t}{2h}\, v_{j-1}^n + v_j^n + \frac{\Delta t}{2h}\, v_{j+1}^n, \tag{8.2.2}$$
we saw that there is a ghost point $v_0^n$ at all points in time when we set $j = 1$. At each
time level, we will use zero order extrapolation by setting the value of the solution at
the ghost point "$j = 0$" to the value of the solution at $j = 1$, as shown in Figure 8.1.
(Note that in reality, $j \neq 0$ as we have $j = 1 : N$.) As a horizontal line has zero slope,
we are really doing the following:
$$u_x\big|_{x=0} = 0 \;\Rightarrow\; D_- v_1^n = \frac{v_1^n - v_0^n}{h} = 0.$$
This gives
$$v_1^n = v_0^n \tag{8.2.3}$$
at each point in time $n$. Therefore at $j = 1$, the scheme $(8.2.2)$ becomes
$$v_1^{n+1} = -\frac{\Delta t}{2h}\, v_0^n + v_1^n + \frac{\Delta t}{2h}\, v_2^n = \left(1 - \frac{\Delta t}{2h}\right) v_1^n + \frac{\Delta t}{2h}\, v_2^n,$$
and this solves our problem at the left boundary.

Figure 8.1: Zero order extrapolation of solution at left boundary $j = 1$ to estimate the
ghost point at "$j = 0$".

Higher order accuracy is possible by a higher order extrapolation. First order extrapolation
is the process of estimating the solution by extending the solution along a line
with the same slope as the solution at the boundary point. First order extrapolation for
the estimation of the ghost point (at "$j = 0$") is depicted in Figure 8.2.

Figure 8.2: First order extrapolation of solution at left boundary $j = 1$ to estimate ghost
point at $j = 0$.

As the slope is constant, this is the same as saying that
$$u_{xx}\big|_{x=0} = 0 \;\Rightarrow\; D_+ D_- v_1^n = \frac{v_0^n - 2v_1^n + v_2^n}{h^2} = 0.$$
This results in
$$v_0^n = 2v_1^n - v_2^n,$$
which solves the problem of a ghost point off the left boundary at each time level.

In general, $n$th order extrapolation is done by taking
$$\frac{\partial^{n+1} v}{\partial x^{n+1}} = 0$$
and then approximating the derivative numerically with
$$D_+ D_- D_+ \cdots D_-\, v_j^n = 0$$
at the corresponding boundary value $j$. Note that this extrapolation technique may be
utilized on either end of the boundary as needed. The order of extrapolation chosen
depends on the accuracy of the method being utilized. For example, if a method is
second order accurate in space, it is sufficient to utilize first order extrapolation at the
boundary.
Remark 8.2.1 Extrapolation at the boundary in order to estimate the solution at a
ghost point is sometimes referred to as taking a one-sided difference at the boundary.
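A small illustrative helper for this extrapolation (the names are hypothetical; the array v here stores $v_1, v_2, \ldots, v_N$, so v[0] holds the boundary value $v_1$):

```python
# Sketch: estimating a left-boundary ghost value v_0 by extrapolation.
def ghost_left(v, order=0):
    if order == 0:
        return v[0]                 # zero order: v_0 = v_1, from (8.2.3)
    elif order == 1:
        return 2 * v[0] - v[1]      # first order: v_0 = 2 v_1 - v_2
    else:
        raise NotImplementedError("higher orders: set D+D-...D- v = 0")
```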

8.3 Boundary Conditions for Parabolic Problems


Consider the one dimensional heat equation on a bounded interval
$$u_t = a\, u_{xx}, \qquad a > 0, \qquad 0 \leq x \leq L, \quad t \geq 0. \tag{8.3.1}$$
For this problem to be well posed, we require two boundary conditions, one on either end
of the x interval, as well as an initial condition. The most general form of boundary
conditions is
$$\alpha_1\, u(0,t) + \beta_1\, u_x(0,t) = f_1(t), \tag{8.3.2}$$
$$\alpha_2\, u(L,t) + \beta_2\, u_x(L,t) = f_2(t). \tag{8.3.3}$$
When $\beta_1 = 0 = \beta_2$, we say that there are Dirichlet$^{1}$ conditions at both ends. When
$\alpha_1 = 0 = \alpha_2$, we say that there are Neumann$^{2}$ conditions at both ends. When neither
$\alpha_1$, $\alpha_2$, $\beta_1$ nor $\beta_2$ are zero, we have Robin$^{3}$ conditions at both ends. Of course, we can
have various combinations of these. A Dirichlet condition is essentially a case where the
temperature is prescribed at the given boundary, while a Neumann condition refers to
the case where the temperature flux (i.e. the derivative of the solution) is prescribed at the
given boundary. In order to demonstrate the use of the Dirichlet boundary condition,
we need to look at concrete examples.
$^{1}$Named after Johann Peter Gustav Lejeune Dirichlet (13 February 1805 - 5 May 1859), a prominent
German mathematician.
$^{2}$Named after Karl Gottfried Neumann (May 7, 1832 - March 27, 1925), a famous German mathematician.
$^{3}$Named after Victor Gustave Robin (1855-1897), a notable French mathematical analyst and applied
mathematician.

8.3.1 Example 1

Consider the numerical solution of the heat equation $(8.3.1)$ given the following boundary
and initial conditions:
$$u_x(0,t) = 0, \qquad u(L,t) = 0, \qquad u(x,0) = 1.$$
This can be interpreted as the solution of a heat equation in a rod of length $L$ with
initial temperature 1, where the right end $x = L$ is held at temperature 0, and the left
end $x = 0$ is insulated (i.e. there is zero heat flux at the left). We begin by creating two
regular grids (i.e. equally spaced points along the grid) for space and time. The space
step-size will be denoted by $h$, and the time step-size will be referred to as $\Delta t$. As we
have done previously, we refer to the approximate solution to $u(x,t)$ as $v(x,t)$, and we
utilize the notation
$$v_j^n = v(x_j, t_n) \simeq u(x_j, t_n) = u_j^n$$
to depict the value of the approximate solution at space point $x_j$ and time point $t_n$.
Suppose we wish to solve this problem with the first method we considered for the
solution of a heat equation,
$$D_+ v_j^n = a\, D_+ D_- v_j^n,$$
which is given in $(7.1.1)$ as
$$v_j^{n+1} = \frac{a\,\Delta t}{h^2}\, v_{j-1}^n + \left(1 - 2\,\frac{a\,\Delta t}{h^2}\right) v_j^n + \frac{a\,\Delta t}{h^2}\, v_{j+1}^n. \tag{8.3.4}$$
We may solve $(8.3.4)$ in two ways: by writing a nested for loop for time and space,
or by using matrix notation. We will select the matrix notation method of solution.
Note however that this method may also be implemented using nested for loops without
making use of the available linear algebra libraries in Matlab or in your programming
language of choice.
Recall that we may express $(8.3.4)$ in matrix form as
$$\vec{v}^{\,n+1} = B\, \vec{v}^{\,n}. \tag{8.3.5}$$
Here $\vec{v}^{\,n+1}$ and $\vec{v}^{\,n}$ are $N \times 1$ vectors at time steps $n+1$ and $n$ respectively. The form
of the vector $\vec{v}$ at time level $n$ is
$$\begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_{N-1} \\ v_N \end{pmatrix}^{\!n}. \tag{8.3.6}$$

At first, one may think that our first vector $\vec{v}^{\,1}$ would be represented by a column vector
of ones, as prescribed by the initial condition $u(x,0) = 1$ (i.e. at all space points, the
initial solution $v$ has a value of 1). However, we are told that at the right-most point
$x = L$, the value of the solution is always 0. For this reason, we must take $v_N^1 = 0$,
which means that the last entry of the vector $\vec{v}$ is 0, while all other values are set to 1
as follows:
$$\begin{pmatrix} 1 \\ 1 \\ \vdots \\ 1 \\ 0 \end{pmatrix}^{\!1}. \tag{8.3.7}$$
As there are $N$ space points on the x axis, $B$ is a square matrix of size $N \times N$. As the
boundary conditions signify what occurs at the first and last space points respectively,
we will write all rows of $B$ except for the first row and the last ($N$th) row. Using
$(8.3.4)$, the matrix $B$ everywhere (except for the first and last rows) is as follows:
$$B = \begin{pmatrix}
\frac{a\,\Delta t}{h^2} & 1 - 2\frac{a\,\Delta t}{h^2} & \frac{a\,\Delta t}{h^2} & & \\
& \frac{a\,\Delta t}{h^2} & 1 - 2\frac{a\,\Delta t}{h^2} & \frac{a\,\Delta t}{h^2} & \\
& & \ddots & \ddots & \ddots \\
& & \frac{a\,\Delta t}{h^2} & 1 - 2\frac{a\,\Delta t}{h^2} & \frac{a\,\Delta t}{h^2}
\end{pmatrix}. \tag{8.3.8}$$
Notice that $B$ is a sparse matrix with zeros everywhere except for the main diagonal
and the first super- and sub-diagonals. On the main diagonal we have $1 - 2\frac{a\,\Delta t}{h^2}$, i.e. the
coefficient of $v_j^n$ from equation $(8.3.4)$; on the first super-diagonal and the first sub-diagonal
we have $\frac{a\,\Delta t}{h^2}$, which corresponds to the coefficients of $v_{j+1}^n$ and $v_{j-1}^n$ respectively.
From $(8.3.5)$, $(8.3.6)$ and $(8.3.8)$ we have
$$\begin{pmatrix} v_1 \\ v_2 \\ v_3 \\ \vdots \\ v_{N-1} \\ v_N \end{pmatrix}^{\!n+1}
= \begin{pmatrix}
\frac{a\,\Delta t}{h^2} & 1 - 2\frac{a\,\Delta t}{h^2} & \frac{a\,\Delta t}{h^2} & & \\
& \frac{a\,\Delta t}{h^2} & 1 - 2\frac{a\,\Delta t}{h^2} & \frac{a\,\Delta t}{h^2} & \\
& & \ddots & \ddots & \ddots \\
& & \frac{a\,\Delta t}{h^2} & 1 - 2\frac{a\,\Delta t}{h^2} & \frac{a\,\Delta t}{h^2}
\end{pmatrix}
\begin{pmatrix} v_1 \\ v_2 \\ v_3 \\ \vdots \\ v_{N-1} \\ v_N \end{pmatrix}^{\!n}.$$
To illustrate this, we calculate the update $v_2^{n+1}$ from the previous data at time level $n$
as follows:
$$v_2^{n+1} = \begin{pmatrix} \frac{a\,\Delta t}{h^2} & 1 - 2\frac{a\,\Delta t}{h^2} & \frac{a\,\Delta t}{h^2} & 0 & \cdots & 0 \end{pmatrix}
\begin{pmatrix} v_1 \\ v_2 \\ v_3 \\ \vdots \\ v_{N-1} \\ v_N \end{pmatrix}^{\!n}.$$
This is equivalent to
$$v_2^{n+1} = \frac{a\,\Delta t}{h^2}\, v_1^n + \left(1 - 2\,\frac{a\,\Delta t}{h^2}\right) v_2^n + \frac{a\,\Delta t}{h^2}\, v_3^n,$$
which is $(8.3.4)$ evaluated at space step $j = 2$. This is done for all values $j = 2 : N-1$.
Let us now turn our attention to the first row of $B$, which must take into account
the left Neumann boundary condition $u_x(0,t) = 0$. Recall that
$$v_j^{n+1} = \frac{a\,\Delta t}{h^2}\, v_{j-1}^n + \left(1 - 2\,\frac{a\,\Delta t}{h^2}\right) v_j^n + \frac{a\,\Delta t}{h^2}\, v_{j+1}^n.$$
Now when $j = 1$, we have
$$v_1^{n+1} = \frac{a\,\Delta t}{h^2}\, v_0^n + \left(1 - 2\,\frac{a\,\Delta t}{h^2}\right) v_1^n + \frac{a\,\Delta t}{h^2}\, v_2^n. \tag{8.3.9}$$
This is a problem, as we do not have a space point $j = 0$. We say that the $v_0^n$ are "ghost
points" for all time levels $n$. However, it is necessary to provide a value for them in order for
our code to run. This is done by approximating the left boundary condition $u_x(0,t) = 0$
numerically as $D_- v_j^n$ when $j = 1$ as follows:
$$D_- v_1^n = 0 = \frac{v_1^n - v_0^n}{h}.$$
This gives us
$$v_1^n = v_0^n. \tag{8.3.10}$$
Using $(8.3.10)$ in $(8.3.9)$, we get
$$v_1^{n+1} = \frac{a\,\Delta t}{h^2}\, v_1^n + \left(1 - 2\,\frac{a\,\Delta t}{h^2}\right) v_1^n + \frac{a\,\Delta t}{h^2}\, v_2^n,$$
which simplifies to
$$v_1^{n+1} = \left(1 - \frac{a\,\Delta t}{h^2}\right) v_1^n + \frac{a\,\Delta t}{h^2}\, v_2^n. \tag{8.3.11}$$

This tells us that the first element in the first row of $B$ is $1 - \frac{a\,\Delta t}{h^2}$, and the second
element in the first row of $B$ is $\frac{a\,\Delta t}{h^2}$, while all other entries of the first row of matrix $B$
are zero.
Next, let us look at the last row of $B$, which must incorporate the right Dirichlet
condition $u(L,t) = 0$. Consider again the update scheme
$$v_j^{n+1} = \frac{a\,\Delta t}{h^2}\, v_{j-1}^n + \left(1 - 2\,\frac{a\,\Delta t}{h^2}\right) v_j^n + \frac{a\,\Delta t}{h^2}\, v_{j+1}^n.$$
When $j = N$, this evaluates to
$$v_N^{n+1} = \frac{a\,\Delta t}{h^2}\, v_{N-1}^n + \left(1 - 2\,\frac{a\,\Delta t}{h^2}\right) v_N^n + \frac{a\,\Delta t}{h^2}\, v_{N+1}^n.$$
This is a problem for us as we do not have a space point at $j = N+1$. Again, the $v_{N+1}^n$ are
ghost points for all time levels $n$. Our right boundary condition states that the last space
position is never changed from 0. We can force the last position to never be updated
over time by setting the last row of matrix $B$ to be 0 at all positions except for the last
element of that last row, which we set to 1. Our final proposed matrix $B$, inclusive of the
first and last rows, is therefore
$$B = \begin{pmatrix}
1 - \frac{a\,\Delta t}{h^2} & \frac{a\,\Delta t}{h^2} & 0 & \cdots & 0 & 0 \\
\frac{a\,\Delta t}{h^2} & 1 - 2\frac{a\,\Delta t}{h^2} & \frac{a\,\Delta t}{h^2} & & & \\
& \frac{a\,\Delta t}{h^2} & 1 - 2\frac{a\,\Delta t}{h^2} & \frac{a\,\Delta t}{h^2} & & \\
& & \ddots & \ddots & \ddots & \\
& & & \frac{a\,\Delta t}{h^2} & 1 - 2\frac{a\,\Delta t}{h^2} & \frac{a\,\Delta t}{h^2} \\
0 & 0 & \cdots & 0 & 0 & 1
\end{pmatrix}. \tag{8.3.12}$$
Our method is therefore
$$\begin{pmatrix} v_1 \\ v_2 \\ v_3 \\ \vdots \\ v_{N-1} \\ v_N \end{pmatrix}^{\!n+1}
= \begin{pmatrix}
1 - \frac{a\,\Delta t}{h^2} & \frac{a\,\Delta t}{h^2} & 0 & \cdots & 0 & 0 \\
\frac{a\,\Delta t}{h^2} & 1 - 2\frac{a\,\Delta t}{h^2} & \frac{a\,\Delta t}{h^2} & & & \\
& & \ddots & \ddots & \ddots & \\
& & & \frac{a\,\Delta t}{h^2} & 1 - 2\frac{a\,\Delta t}{h^2} & \frac{a\,\Delta t}{h^2} \\
0 & 0 & \cdots & 0 & 0 & 1
\end{pmatrix}
\begin{pmatrix} v_1 \\ v_2 \\ v_3 \\ \vdots \\ v_{N-1} \\ v_N \end{pmatrix}^{\!n}.$$
8.3.2 Example 2
Let us consider a slight variation of the previous example: a one-dimensional heat equation
$$u_t = u_{xx}, \qquad 0 \leq x \leq L, \quad t \geq 0,$$
with Robin conditions on the left and right boundaries
$$u(0,t) + u_x(0,t) = 0, \qquad u(L,t) + u_x(L,t) = 0, \tag{8.3.13}$$
and the initial condition
$$u(x,0) = f(x).$$
We may solve this problem numerically with the $(1,2)$ explicit method $D_+ v_j^n = D_+ D_- v_j^n$.
This translates to the update scheme
$$v_j^{n+1} = \frac{\Delta t}{h^2}\, v_{j-1}^n + \left(1 - 2\,\frac{\Delta t}{h^2}\right) v_j^n + \frac{\Delta t}{h^2}\, v_{j+1}^n. \tag{8.3.14}$$
This may be expressed in matrix form as
$$\vec{v}^{\,n+1} = B\, \vec{v}^{\,n}, \tag{8.3.15}$$
where $\vec{v}^{\,n+1}$ and $\vec{v}^{\,n}$ are $N \times 1$ vectors at time steps $n+1$ and $n$ respectively, and $B$
is an $N \times N$ matrix.
Let us determine the form of the matrix $B$. The update scheme $(8.3.14)$ suggests a
tri-diagonal matrix with $\frac{\Delta t}{h^2}$ along the first sub-diagonal and first super-diagonal, and
$1 - 2\frac{\Delta t}{h^2}$ along the main diagonal. However, the first and last rows of $B$ must be determined
by the Robin boundary conditions $(8.3.13)$. These can be found by considering the update
scheme when $j = 1$ and $j = N$ respectively, and determining the approximations that
can be substituted for the corresponding ghost points at both ends.
When $j = 1$, the scheme $(8.3.14)$ becomes
$$v_1^{n+1} = \frac{\Delta t}{h^2}\, v_0^n + \left(1 - 2\,\frac{\Delta t}{h^2}\right) v_1^n + \frac{\Delta t}{h^2}\, v_2^n. \tag{8.3.16}$$
Here $v_0^n$ is a ghost point at each time level $n$. Now the boundary condition at the left
end can be approximated numerically as follows:
$$v_1^n + D_- v_1^n = 0 \iff v_1^n + \frac{v_1^n - v_0^n}{h} = 0.$$
Making the ghost point the subject of the formula, we get
$$v_0^n = v_1^n\,(h + 1). \tag{8.3.17}$$
Substituting $(8.3.17)$ into $(8.3.16)$ gives
$$v_1^{n+1} = \frac{\Delta t}{h^2}\, v_1^n\,(h + 1) + \left(1 - 2\,\frac{\Delta t}{h^2}\right) v_1^n + \frac{\Delta t}{h^2}\, v_2^n,$$
which simplifies to
$$v_1^{n+1} = v_1^n\left(1 + \frac{\Delta t}{h^2}\,(h - 1)\right) + \frac{\Delta t}{h^2}\, v_2^n. \tag{8.3.18}$$
For the last row, we consider the scheme $(8.3.14)$ when $j = N$:
$$v_N^{n+1} = \frac{\Delta t}{h^2}\, v_{N-1}^n + \left(1 - 2\,\frac{\Delta t}{h^2}\right) v_N^n + \frac{\Delta t}{h^2}\, v_{N+1}^n. \tag{8.3.19}$$
Clearly, $v_{N+1}^n$ is a ghost point since there is no space point at $j = N+1$. The boundary
condition on the right end can be approximated numerically as
$$v_N^n + D_+ v_N^n = 0 \iff v_N^n + \frac{v_{N+1}^n - v_N^n}{h} = 0.$$
Making the ghost point the subject of the formula, we have
$$v_{N+1}^n = v_N^n\,(1 - h). \tag{8.3.20}$$
Substituting $(8.3.20)$ into $(8.3.19)$ gives
$$v_N^{n+1} = \frac{\Delta t}{h^2}\, v_{N-1}^n + \left(1 - 2\,\frac{\Delta t}{h^2}\right) v_N^n + \frac{\Delta t}{h^2}\, v_N^n\,(1 - h),$$
which is equivalent to
$$v_N^{n+1} = \frac{\Delta t}{h^2}\, v_{N-1}^n + \left(1 - \frac{\Delta t}{h^2}\,(1 + h)\right) v_N^n. \tag{8.3.21}$$
The matrix $B$ for $(8.3.15)$ can now be written using $(8.3.14)$, $(8.3.18)$ and $(8.3.21)$
as
$$B = \begin{pmatrix}
1 + \frac{\Delta t}{h^2}(h - 1) & \frac{\Delta t}{h^2} & 0 & \cdots & 0 & 0 \\
\frac{\Delta t}{h^2} & 1 - 2\frac{\Delta t}{h^2} & \frac{\Delta t}{h^2} & & & \\
& \frac{\Delta t}{h^2} & 1 - 2\frac{\Delta t}{h^2} & \frac{\Delta t}{h^2} & & \\
& & \ddots & \ddots & \ddots & \\
& & & \frac{\Delta t}{h^2} & 1 - 2\frac{\Delta t}{h^2} & \frac{\Delta t}{h^2} \\
0 & 0 & \cdots & 0 & \frac{\Delta t}{h^2} & 1 - \frac{\Delta t}{h^2}(1 + h)
\end{pmatrix}.$$

8.4 Boundary Conditions for Hyperbolic Problems
Consider the wave equation
$$u_{tt} = u_{xx}, \qquad 0 \leq x \leq L, \quad t \geq 0, \tag{8.4.1}$$
subject to specified boundary and initial conditions.
Let us define the vector $\vec{v}$ to be
$$\vec{v} = \begin{pmatrix} u_t \\ u_x \end{pmatrix}.$$
We may write the problem $(8.4.1)$ in the form
$$\vec{v}_t = \begin{pmatrix} u_{tt} \\ u_{xt} \end{pmatrix} = \begin{pmatrix} u_{xx} \\ u_{tx} \end{pmatrix}$$
iff $u_{xt} = u_{tx}$. This may be stated as
$$\vec{v}_t = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \vec{v}_x = A\, \vec{v}_x. \tag{8.4.2}$$
In order to determine the characteristics, we diagonalize the matrix $A$ as follows. Now
$$\det\begin{pmatrix} -\lambda & 1 \\ 1 & -\lambda \end{pmatrix} = \lambda^2 - 1 = 0,$$
so the eigenvalues are $\lambda = \pm 1$. We can show that the eigenvector for $\lambda = 1$ is $\begin{pmatrix} 1 \\ 1 \end{pmatrix}$ and
the eigenvector for $\lambda = -1$ is $\begin{pmatrix} 1 \\ -1 \end{pmatrix}$. This allows us to say that
$$A = T \Lambda T^{-1} = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \begin{pmatrix} \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} \end{pmatrix}. \tag{8.4.3}$$
Using $(8.4.3)$ in $(8.4.2)$, we have
$$\vec{v}_t = T \Lambda T^{-1}\, \vec{v}_x.$$
Pre-multiplying each side by the matrix $T^{-1}$ gives
$$T^{-1}\vec{v}_t = T^{-1} T \Lambda T^{-1}\, \vec{v}_x = \Lambda\, T^{-1}\vec{v}_x.$$
Hence
$$\left(T^{-1}\vec{v}\right)_t = \Lambda \left(T^{-1}\vec{v}\right)_x.$$
Defining $\vec{w} = T^{-1}\vec{v}$, we get
$$\vec{w}_t = \Lambda\, \vec{w}_x,$$
which is
$$\begin{pmatrix} w_1 \\ w_2 \end{pmatrix}_t = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \begin{pmatrix} w_1 \\ w_2 \end{pmatrix}_x.$$
Therefore, the decoupled system corresponding to $(8.4.2)$ is
$$(w_1)_t = (w_1)_x, \qquad (w_2)_t = -(w_2)_x.$$
An incoming characteristic for a given boundary is one that enters the domain at
that boundary, while an outgoing characteristic is one that leaves the domain at that
boundary. For any given hyperbolic problem to be well-posed, the number of boundary
conditions must be equal to the number of incoming characteristics at the specified
boundaries. We must specify the boundary conditions for the two sets of characteristics
$w_1$ and $w_2$ according to the direction of the characteristic, as portrayed in Figure 8.3.

Figure 8.3: Characteristics $w_1$ and $w_2$ for the system are in opposing directions.

Note that the general rule is that at the boundary, the outgoing characteristics must be
a linear combination of all the incoming characteristics. According to Figure 8.3, at the
left boundary, $w_2$ is the outgoing characteristic and $w_1$ is the incoming characteristic.
Therefore, the left boundary condition must be expressed as
$$w_2(0,t) = \alpha\, w_1(0,t) + g(t), \tag{8.4.4}$$
while on the right, we have
$$w_1(L,t) = \beta\, w_2(L,t) + h(t), \tag{8.4.5}$$
where $\alpha$ and $\beta$ are to be determined, while $g(t)$ and $h(t)$ are arbitrarily specified functions.
Recall that in our case
$$T^{-1} = \frac{1}{2}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, \qquad \vec{v} = \begin{pmatrix} u_t \\ u_x \end{pmatrix}, \qquad \vec{w} = T^{-1}\vec{v}.$$
Therefore at the left end $x = 0$, we have
$$\vec{w}\,\big|_{x=0} = \begin{pmatrix} w_1(0,t) \\ w_2(0,t) \end{pmatrix} = T^{-1}\begin{pmatrix} u_t(0,t) \\ u_x(0,t) \end{pmatrix} = \frac{1}{2}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}\begin{pmatrix} u_t(0,t) \\ u_x(0,t) \end{pmatrix}.$$

This is equivalent to
$$\begin{pmatrix} w_1(0,t) \\ w_2(0,t) \end{pmatrix} = \begin{pmatrix} \frac{1}{2}u_t(0,t) + \frac{1}{2}u_x(0,t) \\ \frac{1}{2}u_t(0,t) - \frac{1}{2}u_x(0,t) \end{pmatrix}. \tag{8.4.6}$$
Using $(8.4.6)$ in $(8.4.4)$, we get
$$\frac{1}{2}u_t(0,t) - \frac{1}{2}u_x(0,t) = \alpha\left(\frac{1}{2}u_t(0,t) + \frac{1}{2}u_x(0,t)\right) + g(t),$$
which is the same as
$$u_t(0,t) - u_x(0,t) = \alpha\left(u_t(0,t) + u_x(0,t)\right) + G(t).$$
The boundary and initial conditions provided for the problem would allow us to determine
$\alpha$ and $G(t)$.
Now at the right end $x = L$, we have
$$\vec{w}\,\big|_{x=L} = \begin{pmatrix} w_1(L,t) \\ w_2(L,t) \end{pmatrix} = T^{-1}\begin{pmatrix} u_t(L,t) \\ u_x(L,t) \end{pmatrix} = \frac{1}{2}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}\begin{pmatrix} u_t(L,t) \\ u_x(L,t) \end{pmatrix}.$$
This is equivalent to
$$\begin{pmatrix} w_1(L,t) \\ w_2(L,t) \end{pmatrix} = \begin{pmatrix} \frac{1}{2}u_t(L,t) + \frac{1}{2}u_x(L,t) \\ \frac{1}{2}u_t(L,t) - \frac{1}{2}u_x(L,t) \end{pmatrix}. \tag{8.4.7}$$
Recall that the roles of the incoming and outgoing waves are switched at this end, and
from $(8.4.5)$,
$$w_1(L,t) = \beta\, w_2(L,t) + h(t). \tag{8.4.8}$$
Using $(8.4.7)$ in $(8.4.8)$ gives
$$\frac{1}{2}u_t(L,t) + \frac{1}{2}u_x(L,t) = \beta\left(\frac{1}{2}u_t(L,t) - \frac{1}{2}u_x(L,t)\right) + h(t),$$
which is the same as
$$u_t(L,t) + u_x(L,t) = \beta\left(u_t(L,t) - u_x(L,t)\right) + H(t).$$
Again, the boundary and initial conditions provided for the problem would allow us to
determine $\beta$ and $H(t)$.

Example 8.4.1 Suppose for example the original wave equation arose as part of a vibrating
string simulation. Suppose we are given the boundary condition $u(0,t) = 0$,
where the left end is tied down to the fixed height 0. How does this translate into the
boundary condition formulation above?
Since $u(0,t) = 0$, then $\frac{\partial}{\partial t}\, u(0,t) = 0 = u_t(0,t)$. Consider now the left boundary,
where we had that
$$u_t(0,t) - u_x(0,t) = \alpha\left(u_t(0,t) + u_x(0,t)\right) + G(t).$$
We substitute $u_t(0,t) = 0$ into this equation to obtain
$$-u_x(0,t) = \alpha\, u_x(0,t) + G(t).$$
It follows that we must take $\alpha = -1$ and $G(t) = 0$.

Remark 8.4.1 Note that the slope of the characteristic is the inverse of the speed of the
wave. Slower characteristics therefore have greater slopes.

8.5 Solved Problem: The 1-D Euler Equations
Let us consider the one dimensional Euler equations
$$\rho_t + \frac{\partial}{\partial x}\left(u\,\rho\right) = 0, \tag{8.5.1}$$
$$u_t + u\, u_x + \frac{1}{\rho}\, P_x = 0, \tag{8.5.2}$$
$$P_t + u\, P_x + \gamma P\, u_x = 0, \tag{8.5.3}$$
where $u$ is the velocity in the x direction, $\rho$ is the density, $P$ is the pressure, and $\gamma$ is the gas
constant. Consider a small disturbance that upsets the steady state $(\rho_0, u_0, P_0)$:
$$\rho = \rho_0 + \hat{\rho}, \qquad u = u_0 + \hat{u}, \qquad P = P_0 + \hat{P}. \tag{8.5.4}$$
We substitute $(8.5.4)$ into the equations $(8.5.1)$, $(8.5.2)$ and $(8.5.3)$. Dropping the nonlinear
terms and the hat notation, we obtain the linearized set of equations
$$\rho_t + \rho_0\, u_x + u_0\, \rho_x = 0, \tag{8.5.5}$$
$$u_t + u_0\, u_x + \frac{P_x}{\rho_0} = 0, \tag{8.5.6}$$
$$P_t + u_0\, P_x + \gamma P_0\, u_x = 0. \tag{8.5.7}$$

This may be expressed as a system of equations
$$\begin{pmatrix} \rho \\ u \\ P \end{pmatrix}_t = \begin{pmatrix} -u_0 & -\rho_0 & 0 \\ 0 & -u_0 & -\frac{1}{\rho_0} \\ 0 & -\gamma P_0 & -u_0 \end{pmatrix} \begin{pmatrix} \rho \\ u \\ P \end{pmatrix}_x. \tag{8.5.8}$$
We find the characteristic speeds by obtaining the eigenvalues of the matrix as follows:
$$\begin{vmatrix} -u_0 - \lambda & -\rho_0 & 0 \\ 0 & -u_0 - \lambda & -\frac{1}{\rho_0} \\ 0 & -\gamma P_0 & -u_0 - \lambda \end{vmatrix} = 0,$$
which gives
$$-(u_0 + \lambda)\left(\left(u_0 + \lambda\right)^2 - \frac{\gamma P_0}{\rho_0}\right) = 0.$$
Solving for $\lambda$, we get
$$\lambda_1 = -u_0, \qquad \lambda_2 = -(u_0 + c_0), \qquad \lambda_3 = -u_0 + c_0,$$
where $c_0 = \sqrt{\dfrac{\gamma P_0}{\rho_0}}$ is the speed of sound. For $\lambda < 0$, the wave moves towards the right,
whereas when $\lambda > 0$, the wave moves towards the left. When a wave is moving towards
a given boundary, we say that it is incoming at that boundary. Naturally in such a case,
it is outgoing from the opposite boundary. A wave with zero eigenvalue is taken to be
incoming at both boundaries. For the given problem, there are four possible cases.
Case 1: When $0 < u_0 < c_0$, then $\lambda_1 < 0$, $\lambda_2 < 0$ and $\lambda_3 > 0$. Therefore $\lambda_3$ moves
to the left, while $\lambda_1$ and $\lambda_2$ move to the right (see Figure 8.4).

Figure 8.4: Case 1: Outgoing and incoming characteristics

Case 2: When $-c_0 < u_0 < 0$, then $\lambda_1 > 0$, $\lambda_3 > 0$ and $\lambda_2 < 0$. Therefore $\lambda_1$ and
$\lambda_3$ move to the left, while $\lambda_2$ moves to the right (see Figure 8.5).
Case 3: When $u_0 > c_0 > 0$ (i.e. when the ambient flow is faster than the speed of
sound), all eigenvalues $\lambda_1, \lambda_2, \lambda_3 < 0$, so all three waves move to the right (see Figure
8.6).

Figure 8.5: Case 2: Outgoing and incoming characteristics

Figure 8.6: Case 3: All characteristics move to the right.

Case 4: When $u_0 < -c_0 < 0$, all eigenvalues $\lambda_1, \lambda_2, \lambda_3 > 0$, so all three waves move to
the left (see Figure 8.7).

Figure 8.7: Case 4: All characteristics move to the left.

Now let us turn our attention back to the system we are solving, $(8.5.8)$, which is of
the form
$$\vec{v}_t = A\, \vec{v}_x, \tag{8.5.9}$$
where
$$\vec{v} = \begin{pmatrix} \rho \\ u \\ P \end{pmatrix}, \qquad A = \begin{pmatrix} -u_0 & -\rho_0 & 0 \\ 0 & -u_0 & -\frac{1}{\rho_0} \\ 0 & -\gamma P_0 & -u_0 \end{pmatrix}.$$
We can write the matrix $A$ as
$$A = T \Lambda T^{-1}, \tag{8.5.10}$$
where $T$ is the matrix of eigenvectors,
$$T = \begin{pmatrix} 1 & \frac{1}{c_0} & \frac{1}{c_0} \\ 0 & \frac{1}{\rho_0} & -\frac{1}{\rho_0} \\ 0 & c_0 & c_0 \end{pmatrix},$$
with inverse
$$T^{-1} = \begin{pmatrix} 1 & 0 & -\frac{1}{c_0^2} \\ 0 & \frac{\rho_0}{2} & \frac{1}{2c_0} \\ 0 & -\frac{\rho_0}{2} & \frac{1}{2c_0} \end{pmatrix},$$
and $\Lambda$ is in the form of a $3 \times 3$ identity matrix with the corresponding eigenvalues on
the diagonal instead of ones:
$$\Lambda = \begin{pmatrix} -u_0 & 0 & 0 \\ 0 & -(u_0 + c_0) & 0 \\ 0 & 0 & -u_0 + c_0 \end{pmatrix}.$$
Now using $(8.5.10)$ in $(8.5.9)$ and pre-multiplying by $T^{-1}$, we have
$$T^{-1}\vec{v}_t = T^{-1} T \Lambda T^{-1}\, \vec{v}_x = \Lambda\, T^{-1}\vec{v}_x,$$
which is
$$\left(T^{-1}\vec{v}\right)_t = \Lambda \left(T^{-1}\vec{v}\right)_x.$$
Let us take $T^{-1}\vec{v} = \vec{w}$ to get the decoupled system
$$\vec{w}_t = \Lambda\, \vec{w}_x, \tag{8.5.11}$$
where the characteristic variables for the problem are
$$T^{-1}\vec{v} = \begin{pmatrix} 1 & 0 & -\frac{1}{c_0^2} \\ 0 & \frac{\rho_0}{2} & \frac{1}{2c_0} \\ 0 & -\frac{\rho_0}{2} & \frac{1}{2c_0} \end{pmatrix}\begin{pmatrix} \rho \\ u \\ P \end{pmatrix} = \begin{pmatrix} \rho - \frac{P}{c_0^2} \\[4pt] \frac{\rho_0 u}{2} + \frac{P}{2c_0} \\[4pt] -\frac{\rho_0 u}{2} + \frac{P}{2c_0} \end{pmatrix}. \tag{8.5.12}$$

8.5.1 A Particular Case


Let us consider the case of the one-dimensional linearized Euler equations on the domain
$-1 \leq x \leq 1$, with boundary conditions
$$P_x(-1,t) = 0 = P_x(1,t)$$
and initial conditions
$$u_0 = 0, \qquad P_0 = 1, \qquad \rho_0 = 1, \qquad \gamma = 1.4.$$
Let us discretize the spatial and temporal axes into regular grids with $N$ and $M$ nodes
respectively, with space step-size $h$ and time step-size $\Delta t$. Before we can do anything
else, we must consider the boundary conditions for the problem. We will do this by
taking into account the direction of the characteristics of the system on either boundary.
Looking from the $x = -1$ end, we have

Characteristic | Eigenvalue | Conclusion
$\rho - \frac{P}{c_0^2}$ | $-u_0 = 0 = \lambda_1$ | $\lambda_1$ incoming at $x = -1$
$\frac{\rho_0 u}{2} + \frac{P}{2c_0}$ | $-u_0 - c_0 = -c_0 = \lambda_2 < 0$ | $\lambda_2$ outgoing at $x = -1$
$-\frac{\rho_0 u}{2} + \frac{P}{2c_0}$ | $-u_0 + c_0 = c_0 = \lambda_3 > 0$ | $\lambda_3$ incoming at $x = -1$

We therefore have two incoming characteristics and one outgoing characteristic at the
left boundary (see Figure 8.8). Let us look at the eigenvalue 1 = u0 (incoming on the

Figure 8.8: Direction of characteristics

P
left) with characteristic variable : From (8:5:11) we have
(c0 )2
P P
= u0 :
(c0 )2 t (c0 )2 x

As we know that u0 = 0; at x = 1 we get


Pt ( 1; t)
t ( 1; t) = 0:
(c0 )2
Discretizing, we have
D+ P1n
D+ n
1 = 0;
(c0 )2
n+1
1
n
1 1 P1n+1 P1n
= 0;
t (c0 )2 t

108
which simpli…es to
n+1 1 n+1 n 1 n
1 2 P1 = 1 2 P1 : (8.5.13)
(c0 ) (c0 )
Consider next the eigenvalue $\lambda_2$, which has an outgoing characteristic variable
$\frac{\rho_0 u}{2} + \frac{P}{2c_0}$. The boundary condition for this characteristic must be provided at the end
$x = +1$. It is therefore not surprising that there is only one boundary condition at the
left end $x = -1$, which is
$$P_x(-1,t) = 0.$$
This will correspond to the $\lambda_2$ characteristic. Discretizing at the left end using forward
Euler in space, we get
$$D_+ P_1^{n+1} = \frac{P_2^{n+1} - P_1^{n+1}}{h} = 0,$$
which is
$$P_1^{n+1} = P_2^{n+1}. \tag{8.5.14}$$
Note that we have chosen time level $n+1$. It is okay to discretize at time level $n$, but
the selection of the $n+1$ time level makes the $n$ time level matrix less cluttered. We
will explain this in more detail later on when we build the matrix system.
The question we should ask ourselves is what is done when there is more than one
outgoing characteristic at a given boundary. In a case like that, it is not as simple as
a process of elimination, saying that the single boundary condition must correspond to
the single outgoing wave. We need to find a more general method to analyze such cases.
Although our method will seem redundant with this simple example, we will outline the
general method for the purposes of illustration. Note that our analysis will lead to the
same result $(8.5.14)$.
First, we say that any outgoing characteristic must be a linear combination of the
incoming characteristics at the given boundary. Hence, the characteristic variable
$\frac{\rho_0 u}{2} + \frac{P}{2c_0}$ corresponding to $\lambda_2$ must satisfy
$$\frac{\rho_0 u}{2} + \frac{P}{2c_0} = \alpha\left(\rho - \frac{P}{c_0^2}\right) + \beta\left(-\frac{\rho_0 u}{2} + \frac{P}{2c_0}\right), \tag{8.5.15}$$
where $\alpha$ and $\beta$ are constants to be determined. We know that $P_x(-1,t) = 0$, so we can
make use of this by differentiating $(8.5.15)$ with respect to $x$ to get
$$\frac{\rho_0 u_x(-1,t)}{2} + \frac{P_x(-1,t)}{2c_0} = \alpha\left(\rho_x(-1,t) - \frac{P_x(-1,t)}{c_0^2}\right) + \beta\left(-\frac{\rho_0 u_x(-1,t)}{2} + \frac{P_x(-1,t)}{2c_0}\right). \tag{8.5.16}$$
Using $P_x(-1,t) = 0$ in $(8.5.16)$, we have
$$\frac{\rho_0 u_x(-1,t)}{2} = \alpha\,\rho_x(-1,t) - \beta\,\frac{\rho_0 u_x(-1,t)}{2},$$
meaning that
$$\alpha = 0, \qquad \beta = -1. \tag{8.5.17}$$
Utilizing $(8.5.17)$ in $(8.5.16)$, we have
$$\frac{\rho_0 u_x(-1,t)}{2} + \frac{P_x(-1,t)}{2c_0} = \frac{\rho_0 u_x(-1,t)}{2} - \frac{P_x(-1,t)}{2c_0},$$
which simplifies to
$$\frac{P_x(-1,t)}{c_0} = 0 \;\Rightarrow\; P_x(-1,t) = 0.$$
Discretizing, we get
$$D_+ P_1^{n+1} = \frac{P_2^{n+1} - P_1^{n+1}}{h} = 0 \;\Rightarrow\; P_2^{n+1} = P_1^{n+1},$$
the same result we got previously. Note that this is a general method that can be applied
to any outgoing characteristic.
Thirdly, we consider the eigenvalue $\lambda_3 = -u_0 + c_0$ (incoming on the left) with characteristic
variable $-\frac{\rho_0 u}{2} + \frac{P}{2c_0}$. From $(8.5.11)$ we have
$$\left(-\frac{\rho_0 u}{2} + \frac{P}{2c_0}\right)_t = \left(-u_0 + c_0\right)\left(-\frac{\rho_0 u}{2} + \frac{P}{2c_0}\right)_x.$$
As we know that $u_0 = 0$, at $x = -1$ we have
$$-\frac{\rho_0\, u_t(-1,t)}{2} + \frac{P_t(-1,t)}{2c_0} = c_0\left(-\frac{\rho_0\, u_x(-1,t)}{2} + \frac{P_x(-1,t)}{2c_0}\right).$$
Since we have that $P_x(-1,t) = 0$, this reduces to
$$-\frac{\rho_0\, u_t(-1,t)}{2} + \frac{P_t(-1,t)}{2c_0} = -\frac{c_0\,\rho_0}{2}\, u_x(-1,t).$$
Discretizing, we have
$$-\frac{\rho_0}{2}\,\frac{u_1^{n+1} - u_1^n}{\Delta t} + \frac{1}{2c_0}\,\frac{P_1^{n+1} - P_1^n}{\Delta t} = -\frac{c_0\,\rho_0}{2}\,\frac{u_2^n - u_1^n}{h},$$
which is
$$-\frac{\rho_0}{2\,\Delta t}\, u_1^{n+1} + \frac{1}{2\,\Delta t\, c_0}\, P_1^{n+1} = \left(-\frac{\rho_0}{2\,\Delta t} + \frac{c_0\,\rho_0}{2h}\right) u_1^n - \frac{c_0\,\rho_0}{2h}\, u_2^n + \frac{1}{2\,\Delta t\, c_0}\, P_1^n. \tag{8.5.18}$$
Looking now from the right boundary $x = +1$, we have

Characteristic | Eigenvalue | Conclusion
$\rho - \frac{P}{c_0^2}$ | $-u_0 = 0 = \lambda_1$ | $\lambda_1$ incoming at $x = +1$
$\frac{\rho_0 u}{2} + \frac{P}{2c_0}$ | $-u_0 - c_0 = -c_0 = \lambda_2 < 0$ | $\lambda_2$ incoming at $x = +1$
$-\frac{\rho_0 u}{2} + \frac{P}{2c_0}$ | $-u_0 + c_0 = c_0 = \lambda_3 > 0$ | $\lambda_3$ outgoing at $x = +1$

We therefore have two incoming characteristics and one outgoing characteristic at the
right boundary (see Figure 8.8). We must work in the same order as we did previously
to circumvent errors in the formation of our matrix system when we write our code.
We will therefore start by analyzing the incoming characteristic $\lambda_1$, next the incoming
characteristic $\lambda_2$, and finally the outgoing characteristic $\lambda_3$.
First, $\lambda_1 = -u_0 = 0$ corresponds to the incoming characteristic $\rho - \frac{P}{c_0^2}$. From
$(8.5.11)$ we have
$$\left(\rho - \frac{P}{c_0^2}\right)_t = -u_0\left(\rho - \frac{P}{c_0^2}\right)_x.$$
Since $u_0 = 0$, we get
$$\rho_t(1,t) - \frac{P_t(1,t)}{c_0^2} = 0.$$
Discretizing, we have
$$D_+ \rho_N^n - \frac{D_+ P_N^n}{c_0^2} = 0,$$
$$\frac{\rho_N^{n+1} - \rho_N^n}{\Delta t} - \frac{1}{c_0^2}\,\frac{P_N^{n+1} - P_N^n}{\Delta t} = 0,$$
which simplifies to
$$\rho_N^{n+1} - \frac{P_N^{n+1}}{c_0^2} = \rho_N^n - \frac{P_N^n}{c_0^2}. \tag{8.5.19}$$
Next, $\lambda_2 = -u_0 - c_0 = -c_0$ corresponds to the incoming characteristic $\frac{\rho_0 u}{2} + \frac{P}{2c_0}$. From
$(8.5.11)$ we have
$$\left(\frac{\rho_0 u}{2} + \frac{P}{2c_0}\right)_t = \left(-u_0 - c_0\right)\left(\frac{\rho_0 u}{2} + \frac{P}{2c_0}\right)_x.$$

Since $u_0 = 0$ and $P_x(1,t) = 0$, we get
$$\frac{\rho_0\, u_t(1,t)}{2} + \frac{P_t(1,t)}{2c_0} = -\frac{c_0\,\rho_0}{2}\, u_x(1,t).$$
Discretizing (making sure we use $D_-$ in space as we have no point to the right of $x_N$),
we have
$$\frac{\rho_0\, D_+ u_N^n}{2} + \frac{D_+ P_N^n}{2c_0} = -\frac{c_0\,\rho_0}{2}\, D_- u_N^n,$$
$$\frac{\rho_0}{2}\,\frac{u_N^{n+1} - u_N^n}{\Delta t} + \frac{1}{2c_0}\,\frac{P_N^{n+1} - P_N^n}{\Delta t} = -\frac{c_0\,\rho_0}{2}\,\frac{u_N^n - u_{N-1}^n}{h},$$
which simplifies to
$$\frac{\rho_0}{2\,\Delta t}\, u_N^{n+1} + \frac{1}{2\,\Delta t\, c_0}\, P_N^{n+1} = \left(\frac{\rho_0}{2\,\Delta t} - \frac{c_0\,\rho_0}{2h}\right) u_N^n + \frac{c_0\,\rho_0}{2h}\, u_{N-1}^n + \frac{1}{2\,\Delta t\, c_0}\, P_N^n. \tag{8.5.20}$$

P 0 u
Finally, we look at the outgoing characteristic corresponding to the
+
2 2c0
eigenvalue 3 : This boundary condition for this characteristic must be provided at the
right end x = 1: It is therefore not surprising that there is only one boundary condition
at the left end x = 1; which is
Px (1; t) = 0
so this must correspond to 3 : We note that in order to discretize this, we must take
into account the direction of the wave, and use the operator D instead of D+ in order
to get
PNn+1 PNn+11
=0
h
which gives us
PNn+1 = PNn+11 (8.5.21)
Note again that we have chosen to discretize at time level n + 1 instead of at time level
n (although again we stress that there is nothing wrong with choosing time level n) .
Once more, we should ask ourselves the question of what we would have needed to
do if there was more than one outgoing characteristic at this boundary, i.e., when we
cannot just say that the one boundary condition at the right must correspond to the
one outgoing characteristic on the right. We will again outline the general method for
the purposes of illustration. Note that our analysis will lead to the same result (8:5:21) :

The outgoing characteristic must be a linear combination of the incoming characteristics
at the given boundary. Hence, the characteristic variable $-\frac{\rho_0 u}{2} + \frac{P}{2c_0}$ corresponding
to $\lambda_3$ must satisfy
$$-\frac{\rho_0 u}{2} + \frac{P}{2c_0} = \alpha\left(\rho - \frac{P}{c_0^2}\right) + \beta\left(\frac{\rho_0 u}{2} + \frac{P}{2c_0}\right), \tag{8.5.22}$$
where $\alpha$ and $\beta$ are constants to be determined. We know that $P_x(1,t) = 0$, so we can
make use of this by differentiating $(8.5.22)$ with respect to $x$ to get
$$-\frac{\rho_0 u_x(1,t)}{2} + \frac{P_x(1,t)}{2c_0} = \alpha\left(\rho_x(1,t) - \frac{P_x(1,t)}{c_0^2}\right) + \beta\left(\frac{\rho_0 u_x(1,t)}{2} + \frac{P_x(1,t)}{2c_0}\right). \tag{8.5.23}$$
Using $P_x(1,t) = 0$ in $(8.5.23)$, we have
$$-\frac{\rho_0 u_x(1,t)}{2} = \alpha\,\rho_x(1,t) + \beta\,\frac{\rho_0 u_x(1,t)}{2},$$
meaning that
$$\alpha = 0, \qquad \beta = -1. \tag{8.5.24}$$
Utilizing $(8.5.24)$ in $(8.5.23)$, we have
$$-\frac{\rho_0 u_x(1,t)}{2} + \frac{P_x(1,t)}{2c_0} = -\frac{\rho_0 u_x(1,t)}{2} - \frac{P_x(1,t)}{2c_0},$$
which simplifies to
$$\frac{P_x(1,t)}{c_0} = 0 \;\Rightarrow\; P_x(1,t) = 0.$$
Discretizing, we get
$$D_- P_N^{n+1} = \frac{P_N^{n+1} - P_{N-1}^{n+1}}{h} = 0 \;\Rightarrow\; P_N^{n+1} = P_{N-1}^{n+1},$$
which is the same result $(8.5.21)$ that we obtained previously. Note again that this
general method can be applied to any outgoing characteristic, although it is redundant
in this particular case.
We are now ready to place our boundary conditions. Recall that we are solving the
system
$$\vec{v}_t = A\, \vec{v}_x, \tag{8.5.25}$$
where
$$\vec{v} = \begin{pmatrix} \rho \\ u \\ P \end{pmatrix}, \qquad A = \begin{pmatrix} -u_0 & -\rho_0 & 0 \\ 0 & -u_0 & -\frac{1}{\rho_0} \\ 0 & -\gamma P_0 & -u_0 \end{pmatrix}.$$
Let us choose to use the Crank-Nicolson method to discretize the system:
$$\frac{\vec{v}^{\,n+1} - \vec{v}^{\,n}}{\Delta t} = \frac{A}{2}\left(D_0 \vec{v}^{\,n+1} + D_0 \vec{v}^{\,n}\right).$$
Recall that we have a spatial axis that has $N$ nodal points and step-size $h$. This means
that we will obtain
$$\left(eye - \frac{\Delta t}{2}\, A\, D_0\right) \vec{v}^{\,n+1} = \left(eye + \frac{\Delta t}{2}\, A\, D_0\right) \vec{v}^{\,n},$$
where $eye$ is a $3N \times 3N$ identity matrix, and $A\, D_0$ is a block $3N \times 3N$ matrix of the form
$$A\, D_0 = \begin{pmatrix} -u_0 I & -\rho_0 I & zero \\ zero & -u_0 I & -\frac{1}{\rho_0} I \\ zero & -\gamma P_0 I & -u_0 I \end{pmatrix}\begin{pmatrix} Dzero & zero & zero \\ zero & Dzero & zero \\ zero & zero & Dzero \end{pmatrix}.$$
Here $I$ is the $N \times N$ identity matrix, $zero$ is the $N \times N$ zero matrix, and $Dzero$ is a
sparse tri-diagonal $N \times N$ matrix of the form
$$Dzero = \begin{pmatrix}
0 & \frac{1}{2h} & & & \\
-\frac{1}{2h} & 0 & \frac{1}{2h} & & \\
& -\frac{1}{2h} & 0 & \frac{1}{2h} & \\
& & \ddots & \ddots & \ddots \\
& & & -\frac{1}{2h} & 0
\end{pmatrix}.$$
Note that this comes from the definition of the second order central difference operator
$D_0$; for example
$$D_0 u_j^n = \frac{u_{j+1}^n - u_{j-1}^n}{2h},$$
so the main diagonal of $Dzero$ is the coefficient of $u_j^n$, which is zero; the first super-diagonal
of $Dzero$ is the coefficient of $u_{j+1}^n$, which is $\frac{1}{2h}$; and the first sub-diagonal of
$Dzero$ is the coefficient of $u_{j-1}^n$, which is $-\frac{1}{2h}$.
We are now in a position to create the $3N \times 3N$ block matrices $Mat1 = eye - \frac{\Delta t}{2}\, A\, D_0$
and $Mat2 = eye + \frac{\Delta t}{2}\, A\, D_0$ for
$$(Mat1)\, \vec{v}^{\,n+1} = (Mat2)\, \vec{v}^{\,n}. \tag{8.5.26}$$
Here, $Mat1$ is the $n+1$ matrix, while $Mat2$ is the $n$ matrix. We place the boundary
conditions $(8.5.13)$, $(8.5.14)$, $(8.5.18)$, $(8.5.19)$, $(8.5.20)$ and $(8.5.21)$ into $Mat1$ and
$Mat2$ in the positions shown in Figure 8.9. Note that $\vec{v}^{\,n+1}$ refers to the $3N \times 1$ column
vector consisting of the $N$ density points $\rho_1$ to $\rho_N$, the $N$ velocity points $u_1$ to $u_N$,
and the $N$ pressure points $P_1$ to $P_N$, all at the "$n+1$" time level, and $\vec{v}^{\,n}$ is the $3N \times 1$
column vector consisting of the $N$ density points $\rho_1$ to $\rho_N$, the $N$ velocity points $u_1$ to
$u_N$, and the $N$ pressure points $P_1$ to $P_N$, each at the "$n$" time level.

Figure 8.9: BCs for $x = \pm 1$ are placed into the $3N \times 3N$ block matrices $Mat1$ and $Mat2$
in the illustrated positions. Note that $Mat1$ corresponds to the $n+1$ time level matrix,
and $Mat2$ corresponds to the $n$ time level matrix.
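A minimal sketch of the block-matrix assembly using Kronecker products is given below; the grid sizes are illustrative, and the boundary-row placement of Figure 8.9 is indicated only by a comment:

```python
import numpy as np
import scipy.sparse as sp

# Sketch: assembling Mat1 and Mat2 of (8.5.26) with Kronecker products.
# u0 = 0, rho0 = 1, P0 = 1, gamma = 1.4 as in the worked problem.
N, h, dt = 201, 0.01, 0.001
u0, rho0, P0, gam = 0.0, 1.0, 1.0, 1.4

# N x N central difference matrix "Dzero" (interior rows).
Dzero = sp.diags([-1 / (2 * h), 0.0, 1 / (2 * h)], [-1, 0, 1], shape=(N, N))

# 3 x 3 coefficient matrix of the linearized system (8.5.8).
Acoef = -np.array([[u0, rho0, 0.0],
                   [0.0, u0, 1.0 / rho0],
                   [0.0, gam * P0, u0]])

AD0 = sp.kron(Acoef, Dzero)             # block 3N x 3N matrix A*D0
eye = sp.identity(3 * N)
Mat1 = (eye - dt / 2 * AD0).tolil()     # n+1 level matrix
Mat2 = (eye + dt / 2 * AD0).tolil()     # n level matrix
# The boundary rows (indices 0, N-1, N, 2N-1, 2N, 3N-1) would now be
# overwritten with (8.5.13), (8.5.14) and (8.5.18)-(8.5.21) before
# the time stepping loop solves Mat1 v^{n+1} = Mat2 v^n.
```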
The solution of the problem is shown in Figures 8.10, 8.11 and 8.12.

Figure 8.10: Pressure plot for $0 \leq t \leq 5$, $-1 \leq x \leq 1$ with space step-size 0.01 and time
step-size 0.001.

Figure 8.11: Velocity plot for $0 \leq t \leq 5$, $-1 \leq x \leq 1$ with space step-size 0.01 and time
step-size 0.001.

Figure 8.12: Density plot for $0 \leq t \leq 5$, $-1 \leq x \leq 1$ with space step-size 0.01 and time
step-size 0.001.

Chapter 9

Two Dimensional Problems

Consider the hyperbolic system
$$\vec{u}_t = A\, \vec{u}_x + B\, \vec{u}_y,$$
where $\vec{u}$ is an $m \times 1$ column vector, and $A$ and $B$ are $m \times m$ matrices. This system is
hyperbolic iff for all real $\alpha, \beta$ (not both zero) $\exists\, P$ such that
$$P^{-1}\left(\alpha A + \beta B\right) P = D,$$
where $D$ is a real diagonal matrix. The corresponding one-dimensional sub-problems
are the two hyperbolic systems
$$\vec{u}_t = A\, \vec{u}_x, \qquad \vec{u}_t = B\, \vec{u}_y.$$
We seek solutions of the form
$$\vec{u}(t,x,y) = e^{i\omega t}\, e^{i\alpha x}\, e^{i\beta y}\, \vec{u}_0,$$
where $\vec{u}_0$ is an eigenvector of $\alpha A + \beta B$, and $\omega$ is the corresponding real eigenvalue.
Now if we are given that $A$ and $B$ are symmetric, it follows that $\alpha A + \beta B$ is also
symmetric. In general however $A$ and $B$ are not simultaneously symmetric, which is a
major complication. Clearly we cannot just assume that the same one-dimensional type
analysis will apply to two-dimensional systems, as often $A$ and $B$ are not simultaneously
diagonalizable. It is natural to think that we can skirt the problem by making the
matrix $A$ diagonal via suitable similarity transforms. However, this is seldom a solution
to the problem, as matrix $B$ is not usually diagonalized under those same transforms.
We must therefore conclude that we cannot solve two-dimensional systems by simply
applying the methods previously developed for two-dimensional scalar problems. Note
this is not the case for one-dimensional problems, as we can apply the same methods
used for one-dimensional scalar problems to solve one-dimensional systems.

9.1 Two Dimensional Scalar Problems
Let us consider the problem
$$u_t = u_x + u_y. \tag{9.1.1}$$
Applying $(2,2)$ leap frog, we have
$$\frac{v_{j,k}^{n+1} - v_{j,k}^{n-1}}{2\,\Delta t} = \frac{v_{j+1,k}^n - v_{j-1,k}^n}{2\,\Delta x} + \frac{v_{j,k+1}^n - v_{j,k-1}^n}{2\,\Delta y},$$
where $j$ is the index for the nodes on a regular x spatial grid with step-size $\Delta x$, $k$ is
the index for the nodes on a regular y spatial grid with step-size $\Delta y$, and $\Delta t$ is the time
step-size for the regular temporal grid indicated by the index $n$. In general, $\Delta x \neq \Delta y$,
but for simplicity, we will take $\Delta x = \Delta y = h$ to get
$$v_{j,k}^{n+1} = v_{j,k}^{n-1} + \frac{\Delta t}{h}\left(v_{j+1,k}^n - v_{j-1,k}^n + v_{j,k+1}^n - v_{j,k-1}^n\right). \tag{9.1.2}$$
Let us perform a Von Neumann stability analysis on $(9.1.2)$. Assuming the solution
$$v_{j,k}^n = z^n \exp(i\alpha hj)\exp(i\beta hk) \tag{9.1.3}$$
and substituting $(9.1.3)$ into $(9.1.2)$, we get (eventually)
$$z^2 - 1 = 2i\lambda z\left(\sin \alpha h + \sin \beta h\right), \tag{9.1.4}$$
where $\lambda = \frac{\Delta t}{h}$. For simplicity, let us use the notation
$$\xi = \alpha h, \qquad \eta = \beta h.$$
We therefore have
$$z^2 - 1 = 2i\lambda z\left(\sin \xi + \sin \eta\right).$$
This is a quadratic equation which can be solved to get
$$z = i\lambda\left(\sin \xi + \sin \eta\right) \pm \sqrt{1 - \lambda^2\left(\sin \xi + \sin \eta\right)^2}. \tag{9.1.5}$$
When $1 - \lambda^2\left(\sin \xi + \sin \eta\right)^2 \geq 0$, i.e. if
$$\lambda^2\left(\sin \xi + \sin \eta\right)^2 \leq 1,$$
we get complex roots $z$ from $(9.1.5)$ and it follows that
$$|z|^2 = \lambda^2\left(\sin \xi + \sin \eta\right)^2 + 1 - \lambda^2\left(\sin \xi + \sin \eta\right)^2 = 1,$$

so $|z| = 1$. Also, since
$$\max\left(\sin \xi + \sin \eta\right) = 2,$$
we can say that $|z| = 1$ if
$$\lambda^2\,(2)^2 \leq 1 \;\Rightarrow\; \lambda \leq \frac{1}{2}.$$
The CFL condition for the leap frog $(2,2)$ method when $\Delta x = \Delta y = h$ is therefore
$\frac{\Delta t}{h} \leq 0.5$. Recall that when we solved the one-dimensional baby-wave equation with $(2,2)$
leap frog, the CFL condition was found to be $\frac{\Delta t}{h} \leq 1$. We therefore have a much more
stringent stability condition when we use the same method to solve the two-dimensional
counterpart. This is bad news for the efficiency of the method, as we need smaller time-steps,
and therefore more iterations in order to get a solution. This is also bad news
for the accuracy of the method, since the analysis for leap-frog $(2,2)$ implied that for
reduced truncation error, we need to take $\frac{\Delta t}{h}$ as close as possible to one. As we are now
forced to take $\frac{\Delta t}{h} \leq 0.5$, we cannot benefit from this.
In general, any method used for solving two-dimensional scalar problems must have
greatly reduced time steps as compared to the same method applied to its one-dimensional
counterpart. As you can imagine, this is an even greater problem when dealing with
three-dimensional problems. One way to overcome this shortcoming is by using operator
splitting methods, which we shall discuss later on in this chapter.
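A minimal sketch of the scheme $(9.1.2)$ at the two-dimensional CFL limit $\frac{\Delta t}{h} = 0.5$ (the periodic wrap-around and the trivial start-up level are illustrative assumptions):

```python
import numpy as np

# Sketch: leap frog (2,2) for u_t = u_x + u_y on a periodic grid,
# with lambda = dt/h = 0.5 at the two-dimensional CFL limit.
N = 100
h = 2.0 / N
dt = 0.5 * h
lam = dt / h

x = np.linspace(-1, 1, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
v_old = np.exp(-20 * (X**2 + Y**2))     # level n-1 (sample data)
v_now = v_old.copy()                    # level n (trivial start-up)

for n in range(200):
    v_new = v_old + lam * (np.roll(v_now, -1, axis=0) - np.roll(v_now, 1, axis=0)
                           + np.roll(v_now, -1, axis=1) - np.roll(v_now, 1, axis=1))
    v_old, v_now = v_now, v_new
```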

9.2 Two Dimensional Systems
Let us illustrate a point that we made previously: that the methods for one-dimensional
systems cannot simply be applied to solve two-dimensional systems. Consider the system
of one-dimensional baby-wave equations
$$\vec{u}_t = A\, \vec{u}_x.$$
We saw previously that the stability boundary for using the leap-frog $(2,2)$ method on
this system is controlled by the eigenvalue of largest magnitude $\lambda_{\max}$ of the matrix $A$,
i.e.
$$\frac{\Delta t}{h} \leq \frac{1}{|\lambda_{\max}|}.$$
Let us turn our attention to the related two-dimensional system
$$\vec{u}_t = A\, \vec{u}_x + B\, \vec{u}_y. \tag{9.2.1}$$
For simplicity, we consider the simplest case with regular spatial and temporal grids
where
$$\Delta x = \Delta y = h.$$
Applying leap-frog $(2,2)$ to the system $(9.2.1)$, we obtain
$$\frac{\vec{v}_{j,k}^{\,n+1} - \vec{v}_{j,k}^{\,n-1}}{2\,\Delta t} = A\, \frac{\vec{v}_{j+1,k}^{\,n} - \vec{v}_{j-1,k}^{\,n}}{2\,\Delta x} + B\, \frac{\vec{v}_{j,k+1}^{\,n} - \vec{v}_{j,k-1}^{\,n}}{2\,\Delta y}. \tag{9.2.2}$$
We may apply the Von Neumann stability analysis to $(9.2.2)$ by assuming a solution of
the form
$$\vec{v}_{j,k}^{\,n} = z^n \exp(i\alpha hj)\exp(i\beta hk)\, \vec{v}_0.$$
Substituting this into $(9.2.2)$ leads (eventually) to
$$\left[\left(z^2 - 1\right) I - 2i\lambda z\left(A \sin \xi + B \sin \eta\right)\right] \vec{v}_0 = 0,$$
where again we have taken $\xi = \alpha h$, $\eta = \beta h$. Taking the matrix $G$ to be
$$G = \left(z^2 - 1\right) I - 2i\lambda z\left(A \sin \xi + B \sin \eta\right),$$
we have
$$G\, \vec{v}_0 = 0.$$
We therefore need to find a stability condition to ensure that $|z| \leq 1$ for every possible
$\xi$ and $\eta$ such that $\det G = 0$ (the condition for a non-trivial $\vec{v}_0$).
It is not even easy to locate the roots of $\det G = 0$, as we cannot reduce the system to a set of
scalar equations unless $A$ and $B$ are simultaneously diagonalizable, which is rarely the
case. In the special case when $A = B$, then $A$ and $B$ are simultaneously diagonalizable
and we can show that the CFL condition is
$$\frac{\Delta t}{h} \leq \frac{1}{2\,|\lambda_{\max}|},$$
where $\lambda_{\max}$ is the eigenvalue of largest magnitude of the matrix $A = B$. When $A \neq B$,
which is usually the case, we may at best make rough estimates for the CFL condition
by using the fact that $A \sin \xi + B \sin \eta$ is similar to a real diagonal matrix. Since such
estimates are unreliable, we will end our discussion at this point.

9.3 Operator Splitting
Operator splitting is a method that reduces multi-dimensional problems to a sequence
of one-dimensional problems which can then be solved by applying one-dimensional
schemes. This is clearly beneficial, as in previous sections we have encountered several
difficulties in our attempts to solve simple two-dimensional problems. We shall
demonstrate the technique for the problem
$$u_t = u_x + u_y.$$
Let us apply the forward Euler in time, central difference in space scheme, bearing in
mind that this is merely for the purpose of demonstration, since we have already seen
that this method is unstable for the one-dimensional baby-wave equation. We obtain
$$\frac{v_{j,k}^{n+1} - v_{j,k}^{n}}{\Delta t} = \frac{v_{j+1,k}^n - v_{j-1,k}^n}{2\,\Delta x} + \frac{v_{j,k+1}^n - v_{j,k-1}^n}{2\,\Delta y}.$$
For simplicity, we take $\Delta x = \Delta y = h$ and $\lambda = \frac{\Delta t}{h}$ to get
$$v_{j,k}^{n+1} = v_{j,k}^{n} + \frac{\lambda}{2}\left(v_{j+1,k}^n - v_{j-1,k}^n + v_{j,k+1}^n - v_{j,k-1}^n\right). \tag{9.3.1}$$
We will not bother to consider a Von Neumann stability analysis as we know that
this scheme will be unstable, but let us look anyway for solutions of the form
$$v_{j,k}^n = a^n \exp(i\alpha hj)\exp(i\beta hk).$$
Taking
$$\xi = \alpha h, \qquad \eta = \beta h,$$
we have
$$v_{j,k}^n = a^n \exp(i\xi j)\exp(i\eta k). \tag{9.3.2}$$
Substituting $(9.3.2)$ into $(9.3.1)$, we obtain (eventually)
$$a^{n+1} = a^n\left(1 + i\lambda\left[\sin \xi + \sin \eta\right]\right). \tag{9.3.3}$$
Now we know that for $0 < \epsilon \ll 1$, $\exp(\epsilon) \simeq 1 + \epsilon$. Therefore when $\xi$ and $\eta$ are small (i.e.
when the step-size $h$ is small enough), then
$$1 + i\lambda\left(\sin \xi + \sin \eta\right) \simeq \exp\left(i\lambda\left(\sin \xi + \sin \eta\right)\right)
= \exp\left(i\lambda \sin \xi\right)\exp\left(i\lambda \sin \eta\right) \simeq \left(1 + i\lambda \sin \xi\right)\left(1 + i\lambda \sin \eta\right). \tag{9.3.4}$$
Using this in $(9.3.3)$, we have
$$a \simeq \left(1 + i\lambda \sin \xi\right)\left(1 + i\lambda \sin \eta\right).$$

Recall that when we performed the stability analysis for the one dimensional baby-wave
equation $u_t = u_x$ via the method $D_+ v_j^n = D_0 v_j^n$, we obtained $z = 1 + i\lambda \sin kh$,
which corresponds to
$$a = 1 + i\lambda \sin \xi$$
using our current notation ($a = z$, $\xi = kh$). In the same way, by applying $D_+ v_j^n = D_0 v_j^n$
to the one dimensional baby-wave equation in the y direction, $u_t = u_y$, we would obtain
$$a = 1 + i\lambda \sin \eta.$$
(Note: here we have used $a = z$, $\eta = kh$.) This implies that we have the equivalent of a
split scheme
$$\tilde{v}_{j,k} = v_{j,k}^{n} + \frac{\lambda}{2}\left(v_{j+1,k}^n - v_{j-1,k}^n\right),$$
$$v_{j,k}^{n+1} = \tilde{v}_{j,k} + \frac{\lambda}{2}\left(\tilde{v}_{j,k+1} - \tilde{v}_{j,k-1}\right).$$
Essentially, we are solving $u_t = u_x$ first, and then using the result as intermediate initial
conditions to solve $u_t = u_y$. In so doing, we have effectively split the two-dimensional
scheme $(9.3.1)$ into a 'product' of two one-dimensional schemes. The reader should note
once more that the result is two unstable schemes split from one unstable scheme, so
this example is purely theoretical and is utilized for teaching purposes only. We
can however apply the same approach to stable schemes to get a more useful result, i.e.
a split scheme that will work to solve the problem.
We can also utilize operator notation to split this scheme as follows. Consider again
our problem
$$u_t = u_x + u_y.$$
Applying our theoretical method (as it is an unstable one that will not actually solve
our problem)
$$D_+ v_{j,k}^n = D_{0,x} v_{j,k}^n + D_{0,y} v_{j,k}^n,$$
which is
$$\frac{v_{j,k}^{n+1} - v_{j,k}^{n}}{\Delta t} = D_{0,x} v_{j,k}^n + D_{0,y} v_{j,k}^n,$$
$$v_{j,k}^{n+1} = v_{j,k}^{n} + \Delta t\left(D_{0,x} v_{j,k}^n + D_{0,y} v_{j,k}^n\right),$$
$$v_{j,k}^{n+1} = \left[I + \Delta t\, D_{0,x} + \Delta t\, D_{0,y}\right] v_{j,k}^{n}. \tag{9.3.5}$$
Now let us add a second order error in space and in time to $(9.3.5)$ as follows:
$$v_{j,k}^{n+1} = \left[I + \Delta t\, D_{0,x} + \Delta t\, D_{0,y} + (\Delta t)^2 D_{0,x} D_{0,y}\right] v_{j,k}^{n}.$$
We may then 'factorize' the operators to get
$$v_{j,k}^{n+1} = \left(I + \Delta t\, D_{0,x}\right)\left(I + \Delta t\, D_{0,y}\right) v_{j,k}^{n}. \tag{9.3.6}$$
Now we may split $(9.3.6)$ as
$$\tilde{v}_{j,k} = \left(I + \Delta t\, D_{0,x}\right) v_{j,k}^{n}, \tag{9.3.7}$$
$$v_{j,k}^{n+1} = \left(I + \Delta t\, D_{0,y}\right) \tilde{v}_{j,k}, \tag{9.3.8}$$
which is equivalent to what we obtained previously. Note that there is a second order
error in space and time involved in the splitting process, which corresponds to the
approximation errors taken into account in $(9.3.4)$.

9.3.1 Implementation
Essentially we are trying to solve (in matrix notation)
$$\vec{u}\,' = (A + B)\, \vec{u}.$$
The solution of this is
$$\vec{u}(t) = e^{(A+B)t}\, \vec{u}(0).$$
Now we may use a Taylor expansion to say that
$$e^{(A+B)t} = I + t(A+B) + \frac{t^2}{2}(A+B)^2 + \dots,$$
taking care to note that terms such as $(A+B)^2$ must be calculated using standard
matrix multiplication, and $AB \neq BA$ in general. We therefore have
$$e^{(A+B)t} = I + t(A+B) + \frac{t^2}{2}\left(A^2 + AB + BA + B^2\right) + \dots$$
Factorizing this, we get
$$e^{(A+B)t} \simeq \left(I + tA + \frac{t^2}{2}A^2\right)\left(I + tB + \frac{t^2}{2}B^2\right) + \frac{t^2}{2}\left(BA - AB\right) + \dots$$
Now
$$I + tA + \frac{t^2}{2}A^2 \simeq e^{At}, \qquad I + tB + \frac{t^2}{2}B^2 \simeq e^{Bt}.$$
Using this, we have
$$e^{(A+B)t} \simeq e^{At}\, e^{Bt} + \frac{t^2}{2}\left(BA - AB\right) + \dots \tag{9.3.9}$$

Consider instead
$$e^{(A+B)2t} = e^{(A+B)t}\, e^{(A+B)t} \simeq \left[e^{At} e^{Bt} + \frac{t^2}{2}(BA - AB) + O(t^3)\right]\left[e^{Bt} e^{At} + \frac{t^2}{2}(AB - BA) + O(t^3)\right].$$
We can therefore show that
$$e^{(A+B)2t} \simeq e^{At} e^{Bt} e^{Bt} e^{At} + \left(I + tA + \frac{t^2}{2}A^2\right)\left(I + tB + \frac{t^2}{2}B^2\right)\frac{t^2}{2}(AB - BA) + \left(I + tB + \frac{t^2}{2}B^2\right)\left(I + tA + \frac{t^2}{2}A^2\right)\frac{t^2}{2}(BA - AB) + O(t^3).$$
That is,
$$e^{(A+B)2t} \simeq e^{At} e^{Bt} e^{Bt} e^{At} + \frac{t^2}{2}\, e^{At} e^{Bt}(AB - BA) + \frac{t^2}{2}\, e^{Bt} e^{At}(BA - AB) + O(t^3),$$
which simplifies to
$$e^{(A+B)2t} \simeq e^{At} e^{Bt} e^{Bt} e^{At} + O(t^3).$$
Ignoring the terms $O(t^3)$, we have the approximation
$$e^{(A+B)2t} = e^{(A+B)t}\, e^{(A+B)t} \simeq e^{At} e^{Bt}\; e^{Bt} e^{At}. \qquad (9.3.10)$$
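The cancellation in $(9.3.10)$ is easy to verify numerically. Below is a minimal sketch using NumPy/SciPy (the random $4\times 4$ matrices are purely illustrative): halving $t$ should shrink the error of the symmetrized $ABBA$ product by roughly a factor of eight, while the naive $e^{At}e^{Bt}e^{At}e^{Bt}$ product only improves by a factor of four.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

for t in [0.1, 0.05, 0.025]:
    exact = expm((A + B) * 2 * t)                                  # e^{(A+B)2t}
    abab = expm(A * t) @ expm(B * t) @ expm(A * t) @ expm(B * t)   # naive product
    abba = expm(A * t) @ expm(B * t) @ expm(B * t) @ expm(A * t)   # (9.3.10)
    print(f"t = {t:5.3f}:  ABAB error = {np.linalg.norm(abab - exact):.2e},"
          f"  ABBA error = {np.linalg.norm(abba - exact):.2e}")
```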
Consider then our split scheme $(9.3.7)$ and $(9.3.8)$. If we let
$$A = I + \Delta t\, D_{0,x}, \qquad B = I + \Delta t\, D_{0,y},$$
we have
$$\tilde{v}_{j,k} = A\, v_{j,k}^{n}, \qquad (9.3.11)$$
$$v_{j,k}^{n+1} = B\, \tilde{v}_{j,k}. \qquad (9.3.12)$$
In the main loop of the code we first implement
$$\tilde{v} = A\, v^{n}, \qquad v^{n+1} = B\, \tilde{v}, \qquad (9.3.13)$$
and follow this immediately by
$$\tilde{v} = B\, v^{n+1}, \qquad v^{n+2} = A\, \tilde{v}. \qquad (9.3.14)$$
In this way $(9.3.13)$, $(9.3.14)$ are repeated, resulting in what we may refer to as "Alternating Direction Loops"
$$(ABBA)\,(ABBA)\,(ABBA)\dots$$
to get the resulting solution. When an implicit scheme is utilized in this fashion, the resulting scheme is referred to as an ADI (Alternating Directions Implicit) method.
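To make the loop structure concrete, here is a minimal Python sketch of $(9.3.13)$–$(9.3.14)$. All names are illustrative, periodic wrap-around is assumed purely for brevity, and of course the scheme itself is still unstable, so only a few steps are taken.

```python
import numpy as np

def apply_A(v, lam):
    """(I + dt*D0x) v: explicit centred sweep in x (periodic for brevity)."""
    return v + 0.5 * lam * (np.roll(v, -1, axis=0) - np.roll(v, 1, axis=0))

def apply_B(v, lam):
    """(I + dt*D0y) v: explicit centred sweep in y (periodic for brevity)."""
    return v + 0.5 * lam * (np.roll(v, -1, axis=1) - np.roll(v, 1, axis=1))

N, M, lam = 50, 10, 0.5                  # grid points, time steps, dt/h
x = np.linspace(0.0, 1.0, N)
X, Y = np.meshgrid(x, x, indexing="ij")
v = np.exp(-40.0 * ((X - 0.5) ** 2 + (Y - 0.5) ** 2))   # initial pulse

for n in range(0, M, 2):                 # the (ABBA) pattern, two steps at a time
    v = apply_B(apply_A(v, lam), lam)    # v^n     -> v^{n+1}  (A then B)
    v = apply_A(apply_B(v, lam), lam)    # v^{n+1} -> v^{n+2}  (B then A)
```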

9.4 Alternating Directions Implicit (ADI) Method
Consider the two-dimensional heat equation
$$u_t = u_{xx} + u_{yy}.$$

One of the best methods we have encountered so far for solving the one-dimensional heat equation is the Crank-Nicolson method, so this is a natural choice. Applying Crank-Nicolson and using operator notation, we have
$$\frac{v_{j,k}^{n+1} - v_{j,k}^{n}}{\Delta t} = \frac{1}{2}\left[D_{+,x}D_{-,x} v_{j,k}^{n+1} + D_{+,x}D_{-,x} v_{j,k}^{n} + D_{+,y}D_{-,y} v_{j,k}^{n+1} + D_{+,y}D_{-,y} v_{j,k}^{n}\right].$$
This is equivalent to
$$v_{j,k}^{n+1} = v_{j,k}^{n} + \frac{\Delta t}{2}\left[D_{+,x}D_{-,x} v_{j,k}^{n+1} + D_{+,x}D_{-,x} v_{j,k}^{n} + D_{+,y}D_{-,y} v_{j,k}^{n+1} + D_{+,y}D_{-,y} v_{j,k}^{n}\right].$$
This may be written in the form
$$\left[I - \frac{\Delta t}{2} D_{+,x}D_{-,x} - \frac{\Delta t}{2} D_{+,y}D_{-,y}\right] v_{j,k}^{n+1} = \left[I + \frac{\Delta t}{2} D_{+,x}D_{-,x} + \frac{\Delta t}{2} D_{+,y}D_{-,y}\right] v_{j,k}^{n}.$$

Using the method of operator splitting, we may approximate the above (in the same way as we did in the previous section) to get
$$\left(I - \frac{\Delta t}{2} D_{+,y}D_{-,y}\right)\left(I - \frac{\Delta t}{2} D_{+,x}D_{-,x}\right) v_{j,k}^{n+1} = \left(I + \frac{\Delta t}{2} D_{+,x}D_{-,x}\right)\left(I + \frac{\Delta t}{2} D_{+,y}D_{-,y}\right) v_{j,k}^{n}.$$

Breaking this into two steps, we have the ADI method
$$\left(I - \frac{\Delta t}{2} D_{+,x}D_{-,x}\right) v_{j,k}^{n+1} = \left(I + \frac{\Delta t}{2} D_{+,y}D_{-,y}\right) v_{j,k}^{n},$$
$$\left(I - \frac{\Delta t}{2} D_{+,y}D_{-,y}\right) v_{j,k}^{n+2} = \left(I + \frac{\Delta t}{2} D_{+,x}D_{-,x}\right) v_{j,k}^{n+1}.$$

It can be shown that the above method is unconditionally stable, which should not be
surprising, as the Crank-Nicolson method is unconditionally stable when applied to the
one-dimensional heat equation.
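A minimal sketch of this ADI loop in Python follows (illustrative only: homogeneous Dirichlet boundaries are assumed, the helper names are our own, and $\Delta x = \Delta y = h$). The point to notice is that each half of the double step only requires tridiagonal solves along grid lines, and a single factorization can be reused for every line and every step.

```python
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import splu

N = 64                                   # interior points per direction
h = 1.0 / (N + 1)
dt = 1.0e-3
r = dt / (2.0 * h * h)

L = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(N, N), format="csc")  # h^2 D+D-
I = identity(N, format="csc")
lu = splu(I - r * L)                     # factorize (I - dt/2 D+D-) once
Ep = (I + r * L).tocsr()                 # explicit operator (I + dt/2 D+D-)

xs = np.linspace(h, 1.0 - h, N)
X, Y = np.meshgrid(xs, xs, indexing="ij")
v = np.exp(-50.0 * ((X - 0.5) ** 2 + (Y - 0.5) ** 2))   # hot spot in the centre

for n in range(0, 200, 2):
    # (I - r Lx) v^{n+1} = (I + r Ly) v^n : implicit in x, explicit in y
    v = lu.solve(np.ascontiguousarray((Ep @ v.T).T))
    # (I - r Ly) v^{n+2} = (I + r Lx) v^{n+1} : implicit in y, explicit in x
    v = lu.solve(np.ascontiguousarray((Ep @ v).T)).T
```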

9.5 Solved Problem: The 2-D Euler Equations
Consider the general linearized two-dimensional Euler equations
$$\rho_t + u_0\rho_x + v_0\rho_y + \rho_0 u_x + \rho_0 v_y = 0,$$
$$u_t + u_0 u_x + v_0 u_y + \frac{1}{\rho_0} P_x = 0,$$
$$v_t + u_0 v_x + v_0 v_y + \frac{1}{\rho_0} P_y = 0,$$
$$P_t + u_0 P_x + v_0 P_y + \gamma P_0 u_x + \gamma P_0 v_y = 0.$$
When we take $\gamma = 1.4$, this system of equations gives a reasonable approximation to two-dimensional acoustic disturbances in a steady gas flow. This may be written in matrix notation as
$$\vec{w}_t = A\,\vec{w}_x + B\,\vec{w}_y \qquad (9.5.1)$$
where
$$\vec{w} = \begin{pmatrix} \rho \\ u \\ v \\ P \end{pmatrix}, \qquad A = \begin{pmatrix} u_0 & \rho_0 & 0 & 0 \\ 0 & u_0 & 0 & \frac{1}{\rho_0} \\ 0 & 0 & u_0 & 0 \\ 0 & \gamma P_0 & 0 & u_0 \end{pmatrix},$$
and
$$B = \begin{pmatrix} v_0 & 0 & \rho_0 & 0 \\ 0 & v_0 & 0 & 0 \\ 0 & 0 & v_0 & \frac{1}{\rho_0} \\ 0 & 0 & \gamma P_0 & v_0 \end{pmatrix}.$$
Let us apply Crank-Nicolson to $(9.5.1)$ as follows:
$$D_+ w_{i,j}^{n} = \frac{A}{2}\left(D_{0,x} w_{i,j}^{n+1} + D_{0,x} w_{i,j}^{n}\right) + \frac{B}{2}\left(D_{0,y} w_{i,j}^{n+1} + D_{0,y} w_{i,j}^{n}\right),$$
$$\frac{w_{i,j}^{n+1} - w_{i,j}^{n}}{\Delta t} = \left[\frac{A}{2} D_{0,x} + \frac{B}{2} D_{0,y}\right] w_{i,j}^{n+1} + \left[\frac{A}{2} D_{0,x} + \frac{B}{2} D_{0,y}\right] w_{i,j}^{n},$$
$$w_{i,j}^{n+1} - \frac{\Delta t}{2}\left[A\, D_{0,x} + B\, D_{0,y}\right] w_{i,j}^{n+1} = w_{i,j}^{n} + \frac{\Delta t}{2}\left[A\, D_{0,x} + B\, D_{0,y}\right] w_{i,j}^{n},$$
$$\left(I - \frac{\Delta t}{2}\left[A\, D_{0,x} + B\, D_{0,y}\right]\right) w_{i,j}^{n+1} = \left(I + \frac{\Delta t}{2}\left[A\, D_{0,x} + B\, D_{0,y}\right]\right) w_{i,j}^{n}.$$
Now using an operator splitting approximation, we have
$$\left(I - \frac{\Delta t}{2} A\, D_{0,x}\right)\left(I - \frac{\Delta t}{2} B\, D_{0,y}\right) w_{i,j}^{n+1} = \left(I + \frac{\Delta t}{2} A\, D_{0,x}\right)\left(I + \frac{\Delta t}{2} B\, D_{0,y}\right) w_{i,j}^{n}.$$

The ADI loop is therefore
$$\left(I - \frac{\Delta t}{2} A\, D_{0,x}\right) w_{i,j}^{n+1} = \left(I + \frac{\Delta t}{2} B\, D_{0,y}\right) w_{i,j}^{n},$$
$$\left(I - \frac{\Delta t}{2} B\, D_{0,y}\right) w_{i,j}^{n+2} = \left(I + \frac{\Delta t}{2} A\, D_{0,x}\right) w_{i,j}^{n+1}.$$
Notice that the main loop is in the form
$$(ABBA)\,(ABBA)\,(ABBA)\dots$$
The alternative would be to use
$$\left(I - \frac{\Delta t}{2} B\, D_{0,y}\right) w_{i,j}^{n+1} = \left(I + \frac{\Delta t}{2} A\, D_{0,x}\right) w_{i,j}^{n},$$
$$\left(I - \frac{\Delta t}{2} A\, D_{0,x}\right) w_{i,j}^{n+2} = \left(I + \frac{\Delta t}{2} B\, D_{0,y}\right) w_{i,j}^{n+1},$$
with a repeated pattern of
$$(BAAB)\,(BAAB)\,(BAAB)\dots$$

9.5.1 A Particular Case


Let us solve the Euler equations inside a two-dimensional unit box $0 \le x \le 1$, $0 \le y \le 1$, $t \ge 0$, given that
$$\rho_0 = 1, \quad P_0 = 1, \quad u_0 = 0, \quad v_0 = 0, \quad \gamma = 1.4,$$
with Dirichlet boundary conditions
$$u(0,y,t) = 0, \quad u(1,y,t) = 0, \quad v(x,0,t) = 0, \quad v(x,1,t) = 0,$$
and initial conditions
$$\rho(x,y,0) = 0, \quad u(x,y,0) = 0, \quad v(x,y,0) = 0,$$
$$P(x,y,0) = \frac{1}{100}\exp\left(-10\left\{\left(x - \frac{1}{2}\right)^2 + \left(y - \frac{1}{2}\right)^2\right\}\right).$$
Essentially what we are dealing with is a still gas where the pressure is perturbed in
the centre of the square domain. We will utilize the ADI method with Crank-Nicolson

outlined previously to solve this problem. In order to do this, however, we must discuss
the placement of the boundary conditions.
As we will be using the ADI method, we will consider the $y$ direction and the $x$ direction separately. Let us begin with the $y$ equation corresponding to $(9.5.1)$,
$$\vec{w}_t = B\,\vec{w}_y. \qquad (9.5.2)$$

First we must find the eigenvalues of $B$ as follows:
$$\begin{vmatrix} v_0 - \lambda & 0 & \rho_0 & 0 \\ 0 & v_0 - \lambda & 0 & 0 \\ 0 & 0 & v_0 - \lambda & \frac{1}{\rho_0} \\ 0 & 0 & \gamma P_0 & v_0 - \lambda \end{vmatrix} = 0,$$
$$\Rightarrow \left(\lambda - v_0\right)^2\left[\left(\lambda - v_0\right)^2 - \frac{\gamma P_0}{\rho_0}\right] = 0.$$
This yields four eigenvalues
$$\lambda_1 = v_0, \quad \lambda_2 = v_0, \quad \lambda_3 = v_0 + \sqrt{\frac{\gamma P_0}{\rho_0}}, \quad \lambda_4 = v_0 - \sqrt{\frac{\gamma P_0}{\rho_0}}.$$
Using the provided steady-state values, we get
$$\lambda_1 = 0, \quad \lambda_2 = 0, \quad \lambda_3 = \sqrt{\gamma}, \quad \lambda_4 = -\sqrt{\gamma}.$$

After finding the corresponding eigenvectors (left as an exercise), we obtain
$$B = T\,\Lambda\, T^{-1},$$
where
$$T = \begin{pmatrix} 1 & 0 & \frac{1}{\gamma} & \frac{1}{\gamma} \\ 0 & 1 & 0 & 0 \\ 0 & 0 & \frac{1}{\sqrt{\gamma}} & -\frac{1}{\sqrt{\gamma}} \\ 0 & 0 & 1 & 1 \end{pmatrix}, \qquad \Lambda = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & \sqrt{\gamma} & 0 \\ 0 & 0 & 0 & -\sqrt{\gamma} \end{pmatrix},$$
$$T^{-1} = \begin{pmatrix} 1 & 0 & 0 & -\frac{1}{\gamma} \\ 0 & 1 & 0 & 0 \\ 0 & 0 & \frac{\sqrt{\gamma}}{2} & \frac{1}{2} \\ 0 & 0 & -\frac{\sqrt{\gamma}}{2} & \frac{1}{2} \end{pmatrix}.$$

We may therefore write $(9.5.2)$ in the form
$$\vec{w}_t = T\,\Lambda\, T^{-1}\,\vec{w}_y.$$
Multiplying across by $T^{-1}$, we obtain
$$T^{-1}\vec{w}_t = T^{-1} T\,\Lambda\, T^{-1}\vec{w}_y \quad\Rightarrow\quad \left(T^{-1}\vec{w}\right)_t = \Lambda\left(T^{-1}\vec{w}\right)_y.$$

The characteristic variables for the $y$ direction are therefore
$$T^{-1}\vec{w} = \begin{pmatrix} 1 & 0 & 0 & -\frac{1}{\gamma} \\ 0 & 1 & 0 & 0 \\ 0 & 0 & \frac{\sqrt{\gamma}}{2} & \frac{1}{2} \\ 0 & 0 & -\frac{\sqrt{\gamma}}{2} & \frac{1}{2} \end{pmatrix}\begin{pmatrix} \rho \\ u \\ v \\ P \end{pmatrix} = \begin{pmatrix} \rho - \frac{1}{\gamma}P \\ u \\ \frac{\sqrt{\gamma}}{2}v + \frac{1}{2}P \\ -\frac{\sqrt{\gamma}}{2}v + \frac{1}{2}P \end{pmatrix}.$$
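The diagonalization above is easy to sanity-check numerically. The following throwaway sketch hard-codes the matrices for the particular values $\rho_0 = P_0 = 1$, $u_0 = v_0 = 0$ and confirms that $B = T\Lambda T^{-1}$:

```python
import numpy as np

g = 1.4                                       # gamma
sg = np.sqrt(g)
B = np.array([[0.0, 0.0, 1.0, 0.0],           # rho0 = P0 = 1, v0 = 0
              [0.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [0.0, 0.0, g,   0.0]])
T = np.array([[1.0, 0.0, 1.0 / g,  1.0 / g],
              [0.0, 1.0, 0.0,      0.0],
              [0.0, 0.0, 1.0 / sg, -1.0 / sg],
              [0.0, 0.0, 1.0,      1.0]])
Lam = np.diag([0.0, 0.0, sg, -sg])

print(np.allclose(T @ Lam @ np.linalg.inv(T), B))   # True
print(np.round(np.linalg.inv(T), 3))                # rows = characteristic variables
```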

We are now in a position to consider boundary conditions at the bottom (left) and top (right) boundaries of the square. Let us create regular grids in the $x$ and $y$ spatial directions by dividing each axis into $N$ equally spaced nodal points, with a step-size of $\Delta x$ on the $x$ axis and $\Delta y$ on the $y$ axis. The temporal axis is divided into $M$ nodal points with time step-size $\Delta t$.
At $y = 0$, i.e. at the bottom (left), we have:
$$\begin{array}{lll}
\textbf{Eigenvalue} & \textbf{Characteristic} & \textbf{Conclusion} \\[4pt]
\lambda_1 = 0 & \rho - \frac{1}{\gamma}P & 1 \text{ is incoming at } y = 0 \\[4pt]
\lambda_2 = 0 & u & 2 \text{ is incoming at } y = 0 \\[4pt]
\lambda_3 = \sqrt{\gamma} > 0 & \frac{\sqrt{\gamma}}{2}v + \frac{1}{2}P & 3 \text{ is incoming at } y = 0 \\[4pt]
\lambda_4 = -\sqrt{\gamma} < 0 & -\frac{\sqrt{\gamma}}{2}v + \frac{1}{2}P & 4 \text{ is outgoing at } y = 0
\end{array}$$

The characteristic corresponding to the first eigenvalue $\lambda_1$ is incoming at $y = 0$. Hence
$$\left(\rho - \frac{1}{\gamma}P\right)_t = \lambda_1\left(\rho - \frac{1}{\gamma}P\right)_y = 0,$$
which is
$$\rho_t - \frac{1}{\gamma}P_t = 0.$$
Discretizing, we have
$$D_+\rho_1^n - \frac{1}{\gamma} D_+ P_1^n = 0,$$
$$\frac{\rho_1^{n+1} - \rho_1^n}{\Delta t} - \frac{1}{\gamma}\,\frac{P_1^{n+1} - P_1^n}{\Delta t} = 0,$$
leading to the placement boundary condition for $\lambda_1$ as
$$\rho_1^{n+1} - \frac{1}{\gamma}P_1^{n+1} = \rho_1^n - \frac{1}{\gamma}P_1^n. \qquad (9.5.3)$$

The characteristic corresponding to the second eigenvalue $\lambda_2$ is also incoming at $y = 0$. Hence
$$(u)_t = \lambda_2\,(u)_y,$$
which reduces to
$$u_t = 0.$$
Discretizing, we have
$$D_+ u_1^n = 0 \quad\Rightarrow\quad \frac{u_1^{n+1} - u_1^n}{\Delta t} = 0,$$
leading to the placement boundary condition for $\lambda_2$ of
$$u_1^{n+1} = u_1^n. \qquad (9.5.4)$$

The characteristic corresponding to the third eigenvalue $\lambda_3$ is also incoming at $y = 0$. Hence
$$\left(\frac{\sqrt{\gamma}}{2}v + \frac{1}{2}P\right)_t = \lambda_3\left(\frac{\sqrt{\gamma}}{2}v + \frac{1}{2}P\right)_y,$$
which reduces to
$$\sqrt{\gamma}\,v_t + P_t = \sqrt{\gamma}\left(\sqrt{\gamma}\,v_y + P_y\right).$$
Discretizing, we have
$$\sqrt{\gamma}\,D_+ v_1^n + D_+ P_1^n = \gamma\,D_{+,y} v_1^n + \sqrt{\gamma}\,D_{+,y} P_1^n,$$
$$\sqrt{\gamma}\,\frac{v_1^{n+1} - v_1^n}{\Delta t} + \frac{P_1^{n+1} - P_1^n}{\Delta t} = \gamma\,\frac{v_2^n - v_1^n}{\Delta y} + \sqrt{\gamma}\,\frac{P_2^n - P_1^n}{\Delta y},$$
leading to the placement boundary condition for $\lambda_3$ of
$$\frac{\sqrt{\gamma}}{\Delta t}\, v_1^{n+1} + \frac{1}{\Delta t}\, P_1^{n+1} = \left(\frac{\sqrt{\gamma}}{\Delta t} - \frac{\gamma}{\Delta y}\right) v_1^n + \frac{\gamma}{\Delta y}\, v_2^n + \left(\frac{1}{\Delta t} - \frac{\sqrt{\gamma}}{\Delta y}\right) P_1^n + \frac{\sqrt{\gamma}}{\Delta y}\, P_2^n. \qquad (9.5.5)$$
Finally, at $y = 0$ the characteristic corresponding to the fourth eigenvalue $\lambda_4$ is outgoing. We know that the outgoing characteristic must be a linear combination of the incoming characteristics, so we have
$$-\frac{\sqrt{\gamma}}{2}v + \frac{1}{2}P = \alpha\left(\rho - \frac{1}{\gamma}P\right) + \beta\,(u) + \delta\left(\frac{\sqrt{\gamma}}{2}v + \frac{1}{2}P\right),$$
where $\alpha$, $\beta$, $\delta$ are constants to be determined. A solution to this is
$$\alpha = 0, \quad \beta = 0, \quad \delta = 1,$$
which gives
$$-\frac{\sqrt{\gamma}}{2}v + \frac{1}{2}P = \frac{\sqrt{\gamma}}{2}v + \frac{1}{2}P.$$
This means that at $y = 0$ we need to have
$$v = 0,$$
which is not surprising, as $v(x,0,t) = 0$ is the boundary condition provided at $y = 0$. This is placed on the boundary by taking
$$v_1^{n+1} = 0. \qquad (9.5.6)$$

Next we consider the top (right) boundary of the box, $y = 1$. Here we have:
$$\begin{array}{lll}
\textbf{Eigenvalue} & \textbf{Characteristic} & \textbf{Conclusion} \\[4pt]
\lambda_1 = 0 & \rho - \frac{1}{\gamma}P & 1 \text{ is incoming at } y = 1 \\[4pt]
\lambda_2 = 0 & u & 2 \text{ is incoming at } y = 1 \\[4pt]
\lambda_3 = \sqrt{\gamma} > 0 & \frac{\sqrt{\gamma}}{2}v + \frac{1}{2}P & 3 \text{ is outgoing at } y = 1 \\[4pt]
\lambda_4 = -\sqrt{\gamma} < 0 & -\frac{\sqrt{\gamma}}{2}v + \frac{1}{2}P & 4 \text{ is incoming at } y = 1
\end{array}$$

The characteristic corresponding to the first eigenvalue $\lambda_1$ is incoming at $y = 1$. Hence
$$\left(\rho - \frac{1}{\gamma}P\right)_t = \lambda_1\left(\rho - \frac{1}{\gamma}P\right)_y = 0,$$
$$\rho_t - \frac{1}{\gamma}P_t = 0.$$
Discretizing, we have
$$\frac{\rho_N^{n+1} - \rho_N^n}{\Delta t} - \frac{1}{\gamma}\,\frac{P_N^{n+1} - P_N^n}{\Delta t} = 0,$$
which gives us our first boundary condition placed at $y = 1$:
$$\rho_N^{n+1} - \frac{1}{\gamma}P_N^{n+1} = \rho_N^n - \frac{1}{\gamma}P_N^n. \qquad (9.5.7)$$

The characteristic corresponding to the second eigenvalue $\lambda_2$ is incoming at $y = 1$. Hence
$$(u)_t = \lambda_2\,(u)_y = 0,$$
$$u_t = 0.$$
Discretizing, we have
$$D_+ u_N^n = 0 \quad\Rightarrow\quad \frac{u_N^{n+1} - u_N^n}{\Delta t} = 0,$$
which gives us our next condition placed at $y = 1$:
$$u_N^{n+1} = u_N^n. \qquad (9.5.8)$$

The characteristic for $\lambda_3$ is outgoing at $y = 1$. As outgoing characteristics are a linear combination of the incoming ones, we have
$$\frac{\sqrt{\gamma}}{2}v + \frac{1}{2}P = \alpha\left(\rho - \frac{1}{\gamma}P\right) + \beta\,(u) + \delta\left(-\frac{\sqrt{\gamma}}{2}v + \frac{1}{2}P\right).$$
A solution to this is
$$\alpha = 0, \quad \beta = 0, \quad \delta = 1,$$
which leads to
$$\frac{\sqrt{\gamma}}{2}v + \frac{1}{2}P = -\frac{\sqrt{\gamma}}{2}v + \frac{1}{2}P.$$
This simplifies to give $v = 0$, which is what we would expect, since the boundary condition that we were given was $v(x,1,t) = 0$. This is placed on the boundary $y = 1$ by taking
$$v_N^{n+1} = 0. \qquad (9.5.9)$$

Finally, we look at the incoming characteristic corresponding to $\lambda_4$ at $y = 1$, which implies that
$$\left(-\frac{\sqrt{\gamma}}{2}v + \frac{1}{2}P\right)_t = \lambda_4\left(-\frac{\sqrt{\gamma}}{2}v + \frac{1}{2}P\right)_y.$$
Using $\lambda_4 = -\sqrt{\gamma}$, we get
$$-\sqrt{\gamma}\,v_t + P_t = \gamma\,v_y - \sqrt{\gamma}\,P_y.$$
Discretizing, making sure that we use $D_{-,y}$ in the $y$ direction, we have
$$-\sqrt{\gamma}\,D_+ v_N^n + D_+ P_N^n = \gamma\,D_{-,y} v_N^n - \sqrt{\gamma}\,D_{-,y} P_N^n,$$
$$-\sqrt{\gamma}\,\frac{v_N^{n+1} - v_N^n}{\Delta t} + \frac{P_N^{n+1} - P_N^n}{\Delta t} = \gamma\,\frac{v_N^n - v_{N-1}^n}{\Delta y} - \sqrt{\gamma}\,\frac{P_N^n - P_{N-1}^n}{\Delta y}.$$
Therefore, the last condition at the boundary $y = 1$ is
$$-\frac{\sqrt{\gamma}}{\Delta t}\, v_N^{n+1} + \frac{1}{\Delta t}\, P_N^{n+1} = -\frac{\gamma}{\Delta y}\, v_{N-1}^n + \left(\frac{\gamma}{\Delta y} - \frac{\sqrt{\gamma}}{\Delta t}\right) v_N^n + \frac{\sqrt{\gamma}}{\Delta y}\, P_{N-1}^n + \left(\frac{1}{\Delta t} - \frac{\sqrt{\gamma}}{\Delta y}\right) P_N^n. \qquad (9.5.10)$$
We now switch our attention to the $x$ direction. We must find the eigenvalues of the matrix $A$, the corresponding eigenvectors, and the characteristic variables, as we did for the $y$ direction. The details of these steps are left for the reader. For brevity, we will simply state the characteristics and conclusions for the $x = 0, 1$ boundaries.

At $x = 0$ (i.e. on the left), we obtain:
$$\begin{array}{lll}
\textbf{Eigenvalue} & \textbf{Characteristic} & \textbf{Conclusion} \\[4pt]
\lambda_1 = 0 & \rho - \frac{1}{\gamma}P & 1 \text{ is incoming at } x = 0 \\[4pt]
\lambda_2 = 0 & v & 2 \text{ is incoming at } x = 0 \\[4pt]
\lambda_3 = \sqrt{\gamma} > 0 & \frac{\sqrt{\gamma}}{2}u + \frac{1}{2}P & 3 \text{ is incoming at } x = 0 \\[4pt]
\lambda_4 = -\sqrt{\gamma} < 0 & -\frac{\sqrt{\gamma}}{2}u + \frac{1}{2}P & 4 \text{ is outgoing at } x = 0
\end{array}$$

Starting with the first characteristic, for $\lambda_1$, which is incoming at $x = 0$, we have
$$\left(\rho - \frac{1}{\gamma}P\right)_t = \lambda_1\left(\rho - \frac{1}{\gamma}P\right)_x = 0,$$
since $\lambda_1 = 0$. Discretizing, we get
$$D_+\rho_1^n - \frac{1}{\gamma} D_+ P_1^n = 0,$$
$$\frac{\rho_1^{n+1} - \rho_1^n}{\Delta t} - \frac{1}{\gamma}\,\frac{P_1^{n+1} - P_1^n}{\Delta t} = 0.$$
The first condition to be implemented at $x = 0$ is
$$\rho_1^{n+1} - \frac{1}{\gamma}P_1^{n+1} = \rho_1^n - \frac{1}{\gamma}P_1^n. \qquad (9.5.11)$$

Next, we have the incoming characteristic for $\lambda_2$, which gives
$$(v)_t = \lambda_2\,(v)_x = 0,$$
since $\lambda_2 = 0$. Discretizing, we have
$$D_+ v_1^n = \frac{v_1^{n+1} - v_1^n}{\Delta t} = 0.$$

This leads to
$$v_1^{n+1} = v_1^n. \qquad (9.5.12)$$
The next incoming characteristic, for $\lambda_3$, gives
$$\left(\frac{\sqrt{\gamma}}{2}u + \frac{1}{2}P\right)_t = \lambda_3\left(\frac{\sqrt{\gamma}}{2}u + \frac{1}{2}P\right)_x.$$
Since $\lambda_3 = \sqrt{\gamma}$, we have
$$\sqrt{\gamma}\,u_t + P_t = \gamma\,u_x + \sqrt{\gamma}\,P_x.$$
Discretizing, we get
$$\sqrt{\gamma}\,D_+ u_1^n + D_+ P_1^n = \gamma\,D_{+,x} u_1^n + \sqrt{\gamma}\,D_{+,x} P_1^n,$$
$$\sqrt{\gamma}\,\frac{u_1^{n+1} - u_1^n}{\Delta t} + \frac{P_1^{n+1} - P_1^n}{\Delta t} = \gamma\,\frac{u_2^n - u_1^n}{\Delta x} + \sqrt{\gamma}\,\frac{P_2^n - P_1^n}{\Delta x},$$
which can be written as
$$\frac{\sqrt{\gamma}}{\Delta t}\, u_1^{n+1} + \frac{1}{\Delta t}\, P_1^{n+1} = \left(\frac{\sqrt{\gamma}}{\Delta t} - \frac{\gamma}{\Delta x}\right) u_1^n + \frac{\gamma}{\Delta x}\, u_2^n + \left(\frac{1}{\Delta t} - \frac{\sqrt{\gamma}}{\Delta x}\right) P_1^n + \frac{\sqrt{\gamma}}{\Delta x}\, P_2^n. \qquad (9.5.13)$$

Finally, we have the outgoing characteristic for $\lambda_4$. As any outgoing characteristic is a linear combination of the incoming ones, we have
$$-\frac{\sqrt{\gamma}}{2}u + \frac{1}{2}P = \alpha\left(\rho - \frac{1}{\gamma}P\right) + \beta\,(v) + \delta\left(\frac{\sqrt{\gamma}}{2}u + \frac{1}{2}P\right).$$
A solution for this is
$$\alpha = 0, \quad \beta = 0, \quad \delta = 1,$$
which gives
$$-\frac{\sqrt{\gamma}}{2}u + \frac{1}{2}P = \frac{\sqrt{\gamma}}{2}u + \frac{1}{2}P \quad\Rightarrow\quad u = 0.$$
This is expected, since we are given that $u(0,y,t) = 0$. Hence we place this on the boundary as
$$u_1^{n+1} = 0. \qquad (9.5.14)$$

Looking next to the $x = 1$ boundary, we have:
$$\begin{array}{lll}
\textbf{Eigenvalue} & \textbf{Characteristic} & \textbf{Conclusion} \\[4pt]
\lambda_1 = 0 & \rho - \frac{1}{\gamma}P & 1 \text{ is incoming at } x = 1 \\[4pt]
\lambda_2 = 0 & v & 2 \text{ is incoming at } x = 1 \\[4pt]
\lambda_3 = \sqrt{\gamma} > 0 & \frac{\sqrt{\gamma}}{2}u + \frac{1}{2}P & 3 \text{ is outgoing at } x = 1 \\[4pt]
\lambda_4 = -\sqrt{\gamma} < 0 & -\frac{\sqrt{\gamma}}{2}u + \frac{1}{2}P & 4 \text{ is incoming at } x = 1
\end{array}$$

Starting with the incoming characteristic for $\lambda_1$, we have
$$\left(\rho - \frac{1}{\gamma}P\right)_t = \lambda_1\left(\rho - \frac{1}{\gamma}P\right)_x = 0,$$
since $\lambda_1 = 0$. This is
$$\rho_t - \frac{1}{\gamma}P_t = 0.$$
Discretizing, we have
$$\frac{\rho_N^{n+1} - \rho_N^n}{\Delta t} - \frac{1}{\gamma}\,\frac{P_N^{n+1} - P_N^n}{\Delta t} = 0.$$
Therefore, on the $x = 1$ boundary we have the condition
$$\rho_N^{n+1} - \frac{1}{\gamma}P_N^{n+1} = \rho_N^n - \frac{1}{\gamma}P_N^n. \qquad (9.5.15)$$

Next, the incoming characteristic for $\lambda_2$ gives
$$(v)_t = \lambda_2\,(v)_x = 0,$$
since $\lambda_2 = 0$. Discretizing, we have
$$D_+ v_N^n = \frac{v_N^{n+1} - v_N^n}{\Delta t} = 0,$$
which is
$$v_N^{n+1} = v_N^n. \qquad (9.5.16)$$
Thirdly, we have the outgoing characteristic for $\lambda_3$, which is a linear combination of the incoming characteristics:
$$\frac{\sqrt{\gamma}}{2}u + \frac{1}{2}P = \alpha\left(\rho - \frac{1}{\gamma}P\right) + \beta\,(v) + \delta\left(-\frac{\sqrt{\gamma}}{2}u + \frac{1}{2}P\right).$$
A solution to this is
$$\alpha = 0, \quad \beta = 0, \quad \delta = 1,$$
which gives
$$\frac{\sqrt{\gamma}}{2}u + \frac{1}{2}P = -\frac{\sqrt{\gamma}}{2}u + \frac{1}{2}P.$$
Hence we get $u = 0$, which is expected, since we have been given the boundary condition $u(1,y,t) = 0$. This is placed on the boundary as
$$u_N^{n+1} = 0. \qquad (9.5.17)$$
Finally, we have the incoming characteristic corresponding to the eigenvalue $\lambda_4$, which implies
$$\left(-\frac{\sqrt{\gamma}}{2}u + \frac{1}{2}P\right)_t = \lambda_4\left(-\frac{\sqrt{\gamma}}{2}u + \frac{1}{2}P\right)_x.$$
As $\lambda_4 = -\sqrt{\gamma}$, this simplifies to
$$-\sqrt{\gamma}\,u_t + P_t = \gamma\,u_x - \sqrt{\gamma}\,P_x.$$
Discretizing, we have
$$-\sqrt{\gamma}\,D_+ u_N^n + D_+ P_N^n = \gamma\,D_{-,x} u_N^n - \sqrt{\gamma}\,D_{-,x} P_N^n,$$
$$-\sqrt{\gamma}\,\frac{u_N^{n+1} - u_N^n}{\Delta t} + \frac{P_N^{n+1} - P_N^n}{\Delta t} = \gamma\,\frac{u_N^n - u_{N-1}^n}{\Delta x} - \sqrt{\gamma}\,\frac{P_N^n - P_{N-1}^n}{\Delta x},$$
which is
$$-\frac{\sqrt{\gamma}}{\Delta t}\, u_N^{n+1} + \frac{1}{\Delta t}\, P_N^{n+1} = -\frac{\gamma}{\Delta x}\, u_{N-1}^n + \left(\frac{\gamma}{\Delta x} - \frac{\sqrt{\gamma}}{\Delta t}\right) u_N^n + \frac{\sqrt{\gamma}}{\Delta x}\, P_{N-1}^n + \left(\frac{1}{\Delta t} - \frac{\sqrt{\gamma}}{\Delta x}\right) P_N^n. \qquad (9.5.18)$$

Implementation

We are now in a position to create the four $4N \times 4N$ block matrices for the ADI method,
$$A_1 w_{i,j}^{n+1} = B_2 w_{i,j}^{n}, \qquad B_1 w_{i,j}^{n+2} = A_2 w_{i,j}^{n+1}.$$
First we create the block $4N \times 4N$ matrices $ADzero$ and $BDzero$ of the form
$$ADzero = \begin{pmatrix} u_0\,Dzero & \rho_0\,Dzero & \mathrm{zeros}(N) & \mathrm{zeros}(N) \\ \mathrm{zeros}(N) & u_0\,Dzero & \mathrm{zeros}(N) & \frac{1}{\rho_0}\,Dzero \\ \mathrm{zeros}(N) & \mathrm{zeros}(N) & u_0\,Dzero & \mathrm{zeros}(N) \\ \mathrm{zeros}(N) & \gamma P_0\,Dzero & \mathrm{zeros}(N) & u_0\,Dzero \end{pmatrix},$$
$$BDzero = \begin{pmatrix} v_0\,DzeroY & \mathrm{zeros}(N) & \rho_0\,DzeroY & \mathrm{zeros}(N) \\ \mathrm{zeros}(N) & v_0\,DzeroY & \mathrm{zeros}(N) & \mathrm{zeros}(N) \\ \mathrm{zeros}(N) & \mathrm{zeros}(N) & v_0\,DzeroY & \frac{1}{\rho_0}\,DzeroY \\ \mathrm{zeros}(N) & \mathrm{zeros}(N) & \gamma P_0\,DzeroY & v_0\,DzeroY \end{pmatrix}.$$
Here $\mathrm{zeros}(N)$ is the $N \times N$ zero matrix and $Dzero$ is a sparse tri-diagonal $N \times N$ matrix of the form
$$Dzero = \begin{pmatrix} 0 & \frac{1}{2h} & & & & \\ -\frac{1}{2h} & 0 & \frac{1}{2h} & & & \\ & -\frac{1}{2h} & 0 & \frac{1}{2h} & & \\ & & \ddots & \ddots & \ddots & \\ & & & -\frac{1}{2h} & 0 & \frac{1}{2h} \\ & & & & -\frac{1}{2h} & 0 \end{pmatrix},$$
with $DzeroY$ the analogous central-difference matrix for the $y$ direction. Note that $Dzero$ comes from the definition of the second-order central difference operator $D_0$; for example,
$$D_0 u_j^n = \frac{u_{j+1}^n - u_{j-1}^n}{2h},$$
so the main diagonal of $Dzero$ is the coefficient of $u_j^n$, which is zero; the first super-diagonal of $Dzero$ is the coefficient of $u_{j+1}^n$, which is $\frac{1}{2h}$; and the first sub-diagonal of $Dzero$ is the coefficient of $u_{j-1}^n$, which is $-\frac{1}{2h}$.
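In SciPy, for instance, $Dzero$ and the block matrix $ADzero$ might be assembled along the following lines (a sketch with illustrative names, using the steady-state values of the particular case and writing $h$ for the spatial step):

```python
import numpy as np
import scipy.sparse as sp

N = 50
h = 1.0 / (N - 1)
g, rho0, P0, u0 = 1.4, 1.0, 1.0, 0.0          # gamma and steady-state values

# Dzero: second-order central difference D0 on one grid line.  The first and
# last rows will later be overwritten by the characteristic boundary conditions.
e = np.ones(N - 1) / (2.0 * h)
Dzero = sp.diags([-e, e], [-1, 1], format="csr")

Z = sp.csr_matrix((N, N))                     # zeros(N)
ADzero = sp.bmat([[u0 * Dzero, rho0 * Dzero,   Z,          Z                   ],
                  [Z,          u0 * Dzero,     Z,          (1.0 / rho0) * Dzero],
                  [Z,          Z,              u0 * Dzero, Z                   ],
                  [Z,          g * P0 * Dzero, Z,          u0 * Dzero          ]],
                 format="csr")
```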
Next, we create the matrices $A_1$, $A_2$, $B_1$, $B_2$, where
$$A_1 = I - \frac{\Delta t}{2}\,ADzero, \qquad A_2 = I + \frac{\Delta t}{2}\,ADzero,$$
$$B_1 = I - \frac{\Delta t}{2}\,BDzero, \qquad B_2 = I + \frac{\Delta t}{2}\,BDzero,$$
with $I$ here denoting the $4N \times 4N$ identity matrix. Now we wish to place the boundary conditions. We do this in the $x$ direction first. The $x$-direction conditions connect the "$n+1$" time level to the "$n$" time level, so their left-hand sides are placed into $A_1$ (the "$n+1$" matrix) and their right-hand sides into $A_2$ (the "$n$" matrix): the boundary conditions $(9.5.11)$, $(9.5.12)$, $(9.5.13)$, $(9.5.14)$, $(9.5.15)$, $(9.5.16)$, $(9.5.17)$ and $(9.5.18)$ are placed into these matrices in the positions indicated in Figure 9.1. Note that $\vec{w}^{\,n+1}$ refers to the $4N \times 1$ column vector consisting of the $N$ density points $\rho_1$ to $\rho_N$, the $N$ velocity points $u_1$ to $u_N$, the $N$ velocity points $v_1$ to $v_N$, and the $N$ pressure points $P_1$ to $P_N$, all at the "$n+1$" time level; $\vec{w}^{\,n}$ is the corresponding $4N \times 1$ column vector at the "$n$" time level.
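Continuing the sketch above, the boundary rows can be overwritten directly in the sparse matrices. The assignment of conditions to rows shown here (condition $(9.5.11)$ into the $\rho_1$ row, $(9.5.12)$ into the $v_1$ row, $(9.5.13)$ into the $P_1$ row and $(9.5.14)$ into the $u_1$ row) is our own illustrative choice, standing in for Figure 9.1:

```python
dt = 1.0e-3
sg = np.sqrt(g)
A1 = (sp.identity(4 * N) - 0.5 * dt * ADzero).tolil()
A2 = (sp.identity(4 * N) + 0.5 * dt * ADzero).tolil()
i_rho, i_u, i_v, i_P = 0, N, 2 * N, 3 * N      # first row of each variable block

for mat in (A1, A2):
    for row in (i_rho, i_u, i_v, i_P):         # clear the x = 0 boundary rows
        mat[row, :] = 0.0

# (9.5.11): rho_1^{n+1} - P_1^{n+1}/gamma = rho_1^n - P_1^n/gamma
A1[i_rho, i_rho], A1[i_rho, i_P] = 1.0, -1.0 / g
A2[i_rho, i_rho], A2[i_rho, i_P] = 1.0, -1.0 / g
# (9.5.12): v_1^{n+1} = v_1^n
A1[i_v, i_v] = 1.0
A2[i_v, i_v] = 1.0
# (9.5.13): incoming-characteristic condition, placed in the P_1 row
A1[i_P, i_u], A1[i_P, i_P] = sg / dt, 1.0 / dt
A2[i_P, i_u], A2[i_P, i_u + 1] = sg / dt - g / h, g / h
A2[i_P, i_P], A2[i_P, i_P + 1] = 1.0 / dt - sg / h, sg / h
# (9.5.14): u_1^{n+1} = 0 (the corresponding A2 row stays identically zero)
A1[i_u, i_u] = 1.0
```

The rows at the $x = 1$ end (indices $N-1$, $2N-1$, $3N-1$ and $4N-1$) are treated in exactly the same way using $(9.5.15)$–$(9.5.18)$.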

Figure 9.1: BCs for $x = 0$ and $x = 1$ are placed into the $4N \times 4N$ block matrices $A_1$ and $A_2$ in the illustrated positions. Note that $A_1$ corresponds to the $n+1$ time-level matrix, and $A_2$ corresponds to the $n$ time-level matrix.

For the $y$ direction, we proceed in the same way with
$$B_1\,\vec{w}^{\,n+1} = B_2\,\vec{w}^{\,n},$$
where $B_1$ is the "$n+1$" matrix and $B_2$ the "$n$" matrix. The boundary conditions $(9.5.3)$, $(9.5.4)$, $(9.5.5)$, $(9.5.6)$, $(9.5.7)$, $(9.5.8)$, $(9.5.9)$ and $(9.5.10)$ are placed into these two matrices in the same positions previously indicated in Figure 9.1 for the $x$-direction matrices $A_1$ and $A_2$. The numerical results for the varying pressure over time are shown in Figure 9.2.
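Putting the pieces together, the main loop might be organized as follows. This is only a schematic sketch: $B_1$ and $B_2$ are assumed to have been built from $BDzero$ and the $y$-boundary rows exactly as $A_1$ and $A_2$ were built above, and each full step performs an $x$-direction sweep (pairing $A_1$ with $A_2$) followed by a $y$-direction sweep (pairing $B_1$ with $B_2$); the alternating $(ABBA)$ ordering of Section 9.3.1 could equally be used by swapping the order of the sweeps on alternate steps.

```python
import scipy.sparse.linalg as spla

lu_x = spla.splu(A1.tocsc())              # factorize the implicit matrices once
lu_y = spla.splu(B1.tocsc())
A2c, B2c = A2.tocsr(), B2.tocsr()

F = np.zeros((4, N, N))                   # fields (rho, u, v, P): F[c, j, k]
xs = np.linspace(0.0, 1.0, N)
X, Y = np.meshgrid(xs, xs, indexing="ij")
F[3] = 0.01 * np.exp(-10.0 * ((X - 0.5) ** 2 + (Y - 0.5) ** 2))   # initial P

for n in range(200):
    # x sweep: column k of the 4N x N array holds (rho, u, v, P) on line y = y_k
    W = F.reshape(4 * N, N)
    F = lu_x.solve(A2c @ W).reshape(4, N, N)
    # y sweep: column j holds the four fields on the line x = x_j
    W = F.swapaxes(1, 2).reshape(4 * N, N)
    F = lu_y.solve(B2c @ W).reshape(4, N, N).swapaxes(1, 2)
```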

Figure 9.2: Time snapshots showing the pressure variation over time.

