
Professur für

Thermofluiddynamik

Notes on

Computational
Thermo-Fluid Dynamics

Camilo F. Silva, Ph. D.

Kilian Förner, Ph. D.

Grégoire Varillon, Ph. D.

Prof. Wolfgang Polifke, Ph. D.

Winter 2023/24

www.epc.ed.tum.de/tfd
An expert is a man who has made all the mistakes which can be made
in a very narrow field.

Niels Bohr (1885-1962).

A computation is a temptation that should be resisted as long as possible.

John P. Boyd paraphrasing Thomas Stearns Eliot (1888-1965).


Contents

1 Introduction 10

1.1 Partial Differential Equations (PDEs) . . . . . . . . . . . . . . . . . . . . . . . . . . 11

1.2 Generalities on partial differential equations (PDEs) . . . . . . . . . . . . . . . . . 13

1.3 Parabolic PDEs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15

1.4 Hyperbolic PDEs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16

1.5 Elliptic PDEs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17

1.6 Boundary conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18

1.6.1 Dirichlet boundary condition . . . . . . . . . . . . . . . . . . . . . . . . . . 19

1.6.2 Neumann boundary condition . . . . . . . . . . . . . . . . . . . . . . . . . 19

1.6.3 Robin boundary condition . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19

1.7 Overview of the course . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20

1.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22

1.8.1 1D convection equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22

1.8.2 1D diffusion equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24

1.8.3 1D convection diffusion equation . . . . . . . . . . . . . . . . . . . . . . . . 24

1.8.4 More videos. Now in 2D (Optional) . . . . . . . . . . . . . . . . . . . . . . 25

1.8.5 Useful MATLAB commands . . . . . . . . . . . . . . . . . . . . . . . . . . 25

2 Finite Differences 26

2.1 Computational grid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27

2.2 Deriving FD numerical schemes of arbitrary order . . . . . . . . . . . . . . . . . . 28


2.2.1 Taylor series and truncation error . . . . . . . . . . . . . . . . . . . . . . . 29

2.2.2 Forward Euler scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29

2.2.3 Centered scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31

2.2.4 Backward Euler scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32

2.2.5 Second order derivatives . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33

2.3 2D steady heat equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34

2.3.1 Discretizing the 2D steady heat equation by finite differences . . . . . . . 35

2.3.2 Boundary conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37

2.3.3 Assembling the linear system . . . . . . . . . . . . . . . . . . . . . . . . . . 38

2.4 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39

2.4.1 Useful MATLAB commands . . . . . . . . . . . . . . . . . . . . . . . . . . 40

2.4.2 Tips by Juan Pablo Garcia (ex-student) . . . . . . . . . . . . . . . . . . 41

3 Finite Volumes 42

3.1 Derivation of algebraic equations from PDE . . . . . . . . . . . . . . . . . . . . . . 43

3.1.1 Applying divergence theorem . . . . . . . . . . . . . . . . . . . . . . . . . 43

3.1.2 Defining cell normals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45

3.1.3 Applying an integral rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46

3.1.4 Applying Green’s theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . 47

3.2 Exercises Part 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50

3.3 Exercises Part 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51

3.3.1 Useful MATLAB commands . . . . . . . . . . . . . . . . . . . . . . . . . . 53

3.3.2 Tips by Juan Pablo Garcia (ex-student) . . . . . . . . . . . . . . . . . . 53

3.3.3 Flowchart . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53

4 Unsteady Problems 54

4.1 Explicit time discretization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56

4.1.1 Von Neumann stability analysis of FE scheme . . . . . . . . . . . . . . . . 57

4.2 Implicit time discretization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63

4.2.1 Von Neumann analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64

4.3 The weighted average or θ-method . . . . . . . . . . . . . . . . . . . . . . . . . . . 64

4.3.1 Von Neumann analysis . . . . . . . . . . . . . . . . . . . . . . . . . . 65

4.4 Predictor-corrector methods (Runge-Kutta) . . . . . . . . . . . . . . . . . . . . . . 66

4.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71

5 Sparse Matrices and Linear Solvers 72

5.1 Sparse matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73

5.2 Iterative solvers and preconditioning . . . . . . . . . . . . . . . . . . . . . . . . . . 75

5.3 Preconditioned Richardson iteration . . . . . . . . . . . . . . . . . . . . . . . . . . 76

5.4 Projection methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78

5.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80

5.5.1 Useful MATLAB commands . . . . . . . . . . . . . . . . . . . . . . . . . . 82

5.5.2 Flowchart . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82

6 Green’s functions 83

6.1 Green’s function solution equation for the steady heat equation . . . . . . . . . . 84

6.2 Treatment of boundary conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . 86

6.3 Derivation of the Green’s function for a simple problem . . . . . . . . . . . . . . . 88

6.4 What to integrate? (Warning) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90

6.5 Green’s functions for a rectangular domain . . . . . . . . . . . . . . . . . . . . . . 91

6.6 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93

6.7 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93

6.7.1 Useful MATLAB commands . . . . . . . . . . . . . . . . . . . . . . . . . . 95



A Addendum to Finite Volumes 96

A.1 Uniform rectangular grid and Boundary Conditions . . . . . . . . . . . . . . . . . 96

A.2 Boundary conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99

A.2.1 South . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99

B Addendum to Sparse Matrices and Linear Solvers: condition number 102

B.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102

B.2 Sensitivity of solutions to linear systems . . . . . . . . . . . . . . . . . . . . . . 103

B.3 Convergence of iterative solvers . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103

C Addendum to unsteady problems 105

C.1 Classical Runge-Kutta methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105

C.1.1 Implicit Runge-Kutta methods . . . . . . . . . . . . . . . . . . . . . . . . . 106

C.2 Low-storage Runge-Kutta methods . . . . . . . . . . . . . . . . . . . . . . . . . 107

D Convective problems 109

D.1 Convective Partial Differential Equation . . . . . . . . . . . . . . . . . . . . . . . . 110

D.1.1 Analytical solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110

D.1.2 Boundary conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110

D.2 Stable schemes for convection problems . . . . . . . . . . . . . . . . . . . . . . . . 111

D.2.1 Space-centered schemes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112

D.2.2 Upwind scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113

D.3 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114


1 Introduction
References
[1] Kuzmin, D. A guide to numerical methods for transport equations. Friedrich-Alexander-Universität, 2010.
[2] Polifke, W., and Kopitz, J. Wärmeübertragung. Grundlagen, analytische und numerische Methoden. Pearson Studium, 2005.

Objectives
• Getting introduced to PDEs.
• Coding analytical solutions of simple convection, diffusion and convection-diffusion equations in Matlab.


Contents
1.1 Partial Differential Equations (PDEs) . . . . . . . . . . . . . . . . . . . . . . . . 11
1.2 Generalities on partial differential equations (PDEs) . . . . . . . . . . . . . . . 13
1.3 Parabolic PDEs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.4 Hyperbolic PDEs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.5 Elliptic PDEs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.6 Boundary conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
1.6.1 Dirichlet boundary condition . . . . . . . . . . . . . . . . . . . . . . . . . 19
1.6.2 Neumann boundary condition . . . . . . . . . . . . . . . . . . . . . . . . 19
1.6.3 Robin boundary condition . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
1.7 Overview of the course . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
1.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
1.8.1 1D convection equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
1.8.2 1D diffusion equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
1.8.3 1D convection diffusion equation . . . . . . . . . . . . . . . . . . . . . . . 24
1.8.4 More videos. Now in 2D (Optional) . . . . . . . . . . . . . . . . . . . . . 25
1.8.5 Useful MATLAB commands . . . . . . . . . . . . . . . . . . . . . . . . . 25

1.1 Partial Differential Equations (PDEs)

How can we model pollutant dispersal in a river, or the evolving distribution of a harmful gas in
a city? How can we describe heat conduction in solids or the propagation of sound through the
atmosphere? How can we understand the flow of air in the vicinity of an airfoil? A general answer
to these apparently different questions is given by Partial Differential Equations (PDEs). Generally,
PDEs are derived from first principles, for example laws of conservation. We will now introduce
a general but simple framework to formulate laws of conservation and derive PDEs from them.

Let φ be the concentration per unit mass of a conserved scalar. Examples of this scalar φ are
intrinsic properties of a given substance such as specific internal energy, specific enthalpy or
specific entropy, among others. A given component of the momentum vector per unit mass can
also be seen as an example of a conserved scalar φ. Now, if we take ρ to be the density of the
carrier flow, we define w = ρφ, which refers to the concentration of that scalar per unit volume.
The whole amount of that scalar within a defined control volume V is given by

W = ∫_V w(x, t) dV = ∫_V ρ(x, t) φ(x, t) dV,   (1.1)

Figure 1.1: An arbitrary control volume V with boundary ∂V, boundary normal vector n and
flux f.

where x = (x, y, z) is a vector describing space and t denotes time. Any variation of the scalar
w(x, t) within the control volume will depend on the rate at which φ enters or leaves the domain.
This is generally expressed as a flux f normal to the surface of the control volume, as depicted
in Fig. 1.1. This can be written mathematically as

∂/∂t ∫_V ρ(x, t) φ(x, t) dV + ∫_{∂V} f · n dS = 0.   (1.2)

It is also possible that the scalar w( x, t) is produced or extinguished within the control vol-
ume. We define then a source (sink) s( x, t) as a mechanism that produces (annihilates) a given
amount of w( x, t) per unit volume and per unit time. The conservation equation reads

∂/∂t ∫_V ρ(x, t) φ(x, t) dV + ∫_{∂V} f · n dS = ∫_V s(x, t) dV.   (1.3)

Sources (sinks) are usually placed on the right-hand side (RHS) of the conservation equation.
It is then clear that the amount of w( x, t) in a defined control volume at a given time depends
not only on how much of this quantity is entering and leaving the domain, but also on how
much of this quantity is being produced or consumed inside the domain. Let us now apply the
divergence theorem

∫_{∂V} f · n dS = ∫_V ∇ · f dV,   (1.4)

where

 
∇ = ( ∂/∂x , ∂/∂y , ∂/∂z ).   (1.5)

Accordingly, Eq. (1.3) becomes



∂/∂t ∫_V ρ(x, t) φ(x, t) dV + ∫_V ∇ · f dV = ∫_V s(x, t) dV,   (1.6)

and reordering

∫_V [ ∂/∂t (ρ(x, t) φ(x, t)) + ∇ · f − s(x, t) ] dV = 0.   (1.7)

Since this relation holds for any control volume (the choice of the control volume is arbitrary),
the control volume can be chosen as small as desired (V → dV), so that the evolution of w(x, t)
is described by a PDE:

∂/∂t (ρ(x, t) φ(x, t)) + ∇ · f = s(x, t).   (1.8)

The only idea missing in the analysis is the evaluation of the flux f of w(x, t). A given flux f
is composed of a convective part f_con and a diffusive part f_dif. On the one hand, the convective
contribution of the flux is defined as

f_con = v(x, t) w(x, t),   (1.9)

i. e. as a function of the convection velocity v (the velocity of the carrier fluid) in the direction
normal to the surface of the control volume. An illustration of convective flux is given in
Fig. 1.2(a). On the other hand, the diffusive contribution is written as

f_dif = −D(x, t) ρ(x, t) ∇φ(x, t),   (1.10)

which is the simplest but also the most classical form: the diffusive flux is modelled as a linear
function of the gradient of the concentration of φ through a diffusion coefficient D(x, t). This
expression is known as Fick's law when referring to mass diffusion and as Fourier's law when
talking about heat conduction. An illustration of diffusive flux is given in Fig. 1.2(b). Note that
the diffusion coefficient D(x, t) is accompanied by a negative sign: with this model, diffusion
occurs from regions of high concentration to regions of low concentration of w.
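To make the two flux models concrete, the following sketch (in Python rather than Matlab, with an arbitrary Gaussian profile, grid and coefficients chosen purely for illustration) evaluates f_con and f_dif on a 1D grid:

```python
import numpy as np

# 1D grid and a sample concentration profile w = rho * phi
n = 101
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]

rho = 1.0                                  # constant carrier density (assumed)
phi = np.exp(-((x - 0.5) ** 2) / 0.01)     # a Gaussian blob of the scalar
w = rho * phi

v = 2.0                                    # convection velocity (illustrative)
D = 0.05                                   # diffusion coefficient (illustrative)

f_con = v * w                              # convective flux, Eq. (1.9)
f_dif = -D * rho * np.gradient(phi, dx)    # diffusive flux, Eq. (1.10)
f = f_con + f_dif                          # total flux entering Eq. (1.8)

# The minus sign in Eq. (1.10) makes diffusion act from high to low
# concentration: on the left flank of the blob the diffusive flux points
# towards smaller x, on the right flank towards larger x.
print(f_dif[n // 4] < 0.0, f_dif[3 * n // 4] > 0.0)
```

Note the sign check at the end: it is exactly the "high to low concentration" behaviour discussed above.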

1.2 Generalities on partial differential equations (PDEs)

After having introduced the generic form of a transport equation (Eq. (1.8)), it is worthwhile
to perform the standard classification of the PDEs that are often used in thermo-fluid dynamics.

Figure 1.2: Illustration of two types of fluxes: (a) convective flux f_con, (b) diffusive flux f_dif.

Let us now write a standard linear PDE

A ∂²φ/∂t² + B ∂²φ/∂t∂x + C ∂²φ/∂x² + D ∂φ/∂t + E ∂φ/∂x + F φ = G,   (1.11)

with φ = φ(x, t),

where the coefficients A, B, C, D, E, F, G may either depend on time and/or space or be
constant. Defining now

α = B² − 4AC,   (1.12)

it can be shown that, in analogy with conic sections, which are defined by Ax² + Bxy + Cy² + ··· = 0,
a second order PDE is classified as:

• Elliptic: if α < 0

• Parabolic: if α = 0

• Hyperbolic: if α > 0
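The discriminant test is easy to automate. The sketch below (in Python purely for illustration; the course itself uses Matlab) classifies a few familiar equations:

```python
def classify_pde(A, B, C):
    """Classify a linear 2nd-order PDE A u_tt + B u_tx + C u_xx + ... = G
    via the discriminant alpha = B^2 - 4AC (Eq. 1.12)."""
    if A == 0 and B == 0 and C == 0:
        return "hyperbolic (first order)"
    alpha = B ** 2 - 4 * A * C
    if alpha < 0:
        return "elliptic"
    elif alpha == 0:
        return "parabolic"
    return "hyperbolic"

# Heat equation  T_t - D T_xx = 0:   A=0, B=0, C=-D  ->  alpha = 0
print(classify_pde(0, 0, -1.0))        # parabolic
# Wave equation  u_tt - c^2 u_xx = 0: A=1, B=0, C=-c^2 -> alpha > 0
print(classify_pde(1.0, 0, -1.0))      # hyperbolic
# Laplace equation  u_xx + u_yy = 0:  A=1, B=0, C=1   ->  alpha < 0
print(classify_pde(1.0, 0, 1.0))       # elliptic
```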

A first order PDE (A = B = C = 0) is always classified as hyperbolic. The behaviour of analytical
and numerical solutions of these equations depends on how the coefficients A, ..., F interact
with each other. A ‘best’ numerical method for solving these three types of PDEs does not
exist: a given numerical scheme may be appropriate for one PDE but fail on another. In
addition, it is important to keep in mind that the way these coefficients are related to each
other, i. e. how a PDE is classified, determines which boundary and/or initial conditions must
be applied. Assuming that a given physical problem contains both convective and diffusive
fluxes, Eq. (1.8) becomes


∂/∂t (ρφ) + ∇ · (v ρφ) − ∇ · (D ρ∇φ) = s,   (1.13)

where the dependency of all variables on space and time is implicitly considered. Equation (1.13)
is a general transport equation, and we will refer to it in this course as the convection-
diffusion-reaction (CDR) equation. We point out that the equations of conservation of mass and
energy in fluid mechanics may be written as

• Mass conservation: φ = 1. Both diffusive and source terms are equal to zero.

• Energy conservation: φ = h_t, where h_t refers to the total enthalpy of the fluid. s may
contain several sources of energy, such as the heat release coming from combustion.

The momentum equation, in contrast, is a well recognized non-linear PDE and therefore cannot
be classified as previously described. This transport equation is written as

• Momentum conservation: [φ₁, φ₂, φ₃] = v, s = −∇p, where v and p are the velocity field
and the pressure field, respectively. Note that ρD∇v is the viscous stress tensor and that
D is associated, in Newtonian fluids, with two scalars known as kinematic viscosity and
bulk viscosity.

1.3 Parabolic PDEs

The general transport equation (Eq. (1.13)) is recognized as a parabolic PDE. Neglecting the
convective flux, Eq. (1.13) becomes


∂/∂t (ρφ) − ∇ · (D ρ∇φ) = s.   (1.14)

Replacing φ = cT, D ρc = λ, s = ω̇ and assuming constant density and constant c, Eq. (1.14)
results in

ρc ∂T/∂t − ∇ · (λ∇T) = ω̇,   (1.15)

where c, λ, and ω̇ stand for the heat capacity of the medium, the thermal conductivity of the
medium, and a heat release rate source term, respectively. In this case, the diffusion coefficient
D is called thermal diffusivity. Equation (1.15) is known as the heat equation, a fundamental PDE
in this course.

Parabolic equations always need, in addition to boundary conditions, an initial condition,
e. g. an initial distribution of temperature. Moreover, if a physical process is intended
to reach a steady state after a given transient, then the time derivative term is expected to
vanish and, consequently, the solution given by the corresponding elliptic equation (steady
state version) is obtained.
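As an illustration of this behaviour, the following sketch (Python rather than Matlab; grid, diffusivity and time step are arbitrary choices, not values from the text) integrates the 1D heat equation with s = 0 and homogeneous Dirichlet boundary conditions, and shows the transient decaying towards the steady, here trivial, solution:

```python
import numpy as np

# Explicit (forward Euler) integration of the 1D heat equation
# dT/dt = D d2T/dx2 with homogeneous Dirichlet BC; all numbers are
# illustrative choices.
L, n = 1.0, 51
x = np.linspace(0.0, L, n)
dx = x[1] - x[0]
D = 1.0
dt = 0.4 * dx ** 2 / D          # below the explicit stability limit of 0.5

T = np.sin(np.pi * x)           # initial condition, zero at both ends
for _ in range(6000):
    # central second difference in space, forward Euler in time
    T[1:-1] += dt * D * (T[2:] - 2 * T[1:-1] + T[:-2]) / dx ** 2

# With s = 0 the transient decays towards the steady (elliptic) solution T = 0
print(np.max(np.abs(T)) < 1e-3)
```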

1.4 Hyperbolic PDEs

Equation (1.13) is a general case of a parabolic PDE. In this equation, there are two terms in
competition: the convective flux and the diffusive flux of a given scalar. An interesting case
can be taken from the energy equation where φ = cT, D ρc = λ, s = ω̇. If λ and the convective
velocity field v are constant in time and space, we can write

∂T/∂t + v · ∇T − D∇²T = ω̇/(ρc).   (1.16)

When the convective flux, or simply convection (sometimes called advection), is several orders
of magnitude larger than the diffusive flux, Eq. (1.16) starts behaving more like a hyperbolic
equation than like a classical parabolic equation. The ratio of convective to diffusive terms is
quantified by the Peclet number:

Pe = v₀ L₀ / D₀,   (1.17)

where v₀, L₀, and D₀ stand for a reference velocity, a reference length, and a reference diffusion
coefficient, respectively. For sufficiently high Peclet numbers (Pe ≫ 1), Eq. (1.16) can be
approximated by

∂T/∂t + v · ∇T = ω̇/(ρc),   (1.18)

which is a PDE of hyperbolic type. An example of a physical mechanism described by Eq. (1.18)
is found in a high speed combustion chamber, in which a hot spot released by the turbulent
flame is convected downstream at high velocity. This type of equation, although it may appear
simpler than Eq. (1.16), is in fact more difficult to solve. The reason is that hyperbolic
conservation laws admit discontinuous solutions! Numerical schemes that aim to solve hyperbolic
PDEs sometimes introduce a so-called artificial viscosity, i. e. a fictitious diffusion coefficient,
in order to smooth solutions and thereby avoid excessively fine meshes in regions where high
gradients, or even discontinuities, of the quantities involved are expected.

In a first order hyperbolic PDE such as Eq. (1.18), information travels in the direction of the
flow, i. e. ‘downstream’, as time evolves. As a result, this PDE needs boundary conditions only
in the upstream region, i. e. at the inlet of the system. Imposing boundary conditions downstream
at the domain outlet would lead to an ill-posed problem. An example of a second order hyperbolic
PDE is the so-called wave equation (not shown here). In fluid dynamics, this equation is obtained
by combining the mass and momentum equations. This hyperbolic PDE, in contrast with the first
order one, needs boundary conditions both at the inlet and the outlet of the domain, since
information, i. e. waves, propagates not only forwards but also backwards as time evolves. In
addition to boundary conditions, hyperbolic PDEs, as is the case for parabolic PDEs, also require
an initial condition.
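A minimal sketch of these ideas (Python for illustration; the first-order upwind scheme and the illustrative hot spot are choices of this example, not prescriptions from the text) advects a profile downstream using a boundary condition only at the inlet:

```python
import numpy as np

# First-order upwind discretization of dT/dt + v dT/dx = 0 (Eq. 1.18 with
# zero source). Information travels downstream, so only the inlet (left)
# boundary carries a condition; all values are illustrative.
L, n = 10.0, 201
x = np.linspace(0.0, L, n)
dx = x[1] - x[0]
v = 1.0
dt = 0.5 * dx / v               # CFL number 0.5 < 1

T = np.where(np.abs(x - 2.0) < 0.5, 1.0, 0.0)   # a "hot spot" around x = 2
for _ in range(200):
    T[1:] -= v * dt / dx * (T[1:] - T[:-1])     # upwind (backward) difference
    T[0] = 0.0                  # inlet (Dirichlet) condition only

# After 200 steps the spot has travelled v * 200 * dt = 5 to the right;
# upwind diffusion smears it, but the centroid is preserved.
centroid = np.sum(x * T) / np.sum(T)
print(round(centroid, 1))
```

The smearing of the initially sharp edges is exactly the artificial (numerical) viscosity mentioned above.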

1.5 Elliptic PDEs

A physical process, after a given transient, may reach a steady state. In such a case, the first
term of Eq. (1.13) vanishes resulting in

∇ · (v ρφ) − ∇ · (D ρ∇φ) = s. (1.19)

This equation is classified as elliptic as long as D is strictly positive. Furthermore, for a
flow at rest (v = 0), we write

− ∇ · (D ρ∇φ) = s. (1.20)

Assuming now a constant density as well as a constant diffusion coefficient and reordering,
Eq. (1.20) becomes

−∇²φ = s/(Dρ).   (1.21)

Equation (1.21) is known as the Poisson equation and is very useful in incompressible flows. The
Poisson equation describes the steady state of heat transfer in a uniform medium, as will be
shown in more detail in later chapters. When no source is present, i. e. s = 0, the Poisson
equation turns into the Laplace equation,

∇2 φ = 0. (1.22)

Equations (1.19) to (1.22) are elliptic. All of them model a system in which any perturbation
somewhere within the domain (a sudden change in the position of the source s, a perturbation
on a boundary, etc.) is felt immediately over the whole domain. Accordingly, these equations
require boundary conditions at every border, i. e. restrictions on all frontiers of the system.
Table 1.1 summarizes the classification of the PDEs described in this chapter.

Elliptic:     ∇ · (v ρφ) − ∇ · (D ρ∇φ) = s                Steady Convection-Diffusion-Reaction
              −∇ · (D ρ∇φ) = s                            Steady Diffusion-Reaction
Parabolic:    ∂/∂t (ρφ) + ∇ · (v ρφ) − ∇ · (D ρ∇φ) = s    Unsteady Convection-Diffusion-Reaction
              ∂/∂t (ρφ) − ∇ · (D ρ∇φ) = s                 Unsteady Diffusion-Reaction
Hyperbolic:   ∂/∂t (ρφ) + ∇ · (v ρφ) = s                  Unsteady Convection-Reaction
              ∇ · (v ρφ) = s                              Steady Convection-Reaction

Table 1.1: Summary of PDE classification. Reaction is present in each equation as long as s ≠ 0.
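For a 1D Poisson problem the resulting discrete system can be solved directly. The sketch below (Python for illustration; the manufactured source term, chosen so the exact solution is known, is an assumption of this example) imposes boundary conditions on both borders, as required for elliptic PDEs:

```python
import numpy as np

# Direct solution of the 1D Poisson problem -phi'' = f with phi(0)=phi(1)=0.
# The source f = pi^2 sin(pi x) is chosen so the exact solution is
# phi = sin(pi x) (an illustrative manufactured solution).
n = 101
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.pi ** 2 * np.sin(np.pi * x)

# Tridiagonal second-difference matrix acting on interior nodes
A = (np.diag(2.0 * np.ones(n - 2))
     - np.diag(np.ones(n - 3), 1)
     - np.diag(np.ones(n - 3), -1)) / h ** 2
phi = np.zeros(n)                # boundary values phi(0) = phi(1) = 0 built in
phi[1:-1] = np.linalg.solve(A, f[1:-1])

err = np.max(np.abs(phi - np.sin(np.pi * x)))
print(err < 1e-3)
```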

1.6 Boundary conditions

In order to complete the statement of the physical problem, which is modelled through a given
CDR equation, we need to establish boundary conditions (BC) if elliptic equations are used, or
both initial and boundary conditions if parabolic or unsteady hyperbolic equations are considered.
Let us consider Ω as a bounded domain with boundary Γ. This boundary can be decomposed as

Γ = Γ− + Γ+ + Γ0,   (1.23)

where Γ−, Γ+, and Γ0 represent inlet, outlet, and walls, respectively. They are defined as

Inlet:   Γ− = { x ∈ Γ | f · n < 0 }   (1.24)
Outlet:  Γ+ = { x ∈ Γ | f · n > 0 }   (1.25)
Wall:    Γ0 = { x ∈ Γ | f · n = 0 }   (1.26)

where n is the unit normal vector pointing outwards of the domain Ω at the point x ∈ Γ, as
shown in Fig. 1.3. Boundary conditions need to be applied on all boundaries if parabolic or
elliptic PDEs are considered; only Γ− and Γ0 need to be accounted for if a first order hyperbolic
PDE is used. Boundary conditions are normally divided into three types, as follows.

Figure 1.3: An arbitrary domain Ω with boundaries identified as inlets, outlets and walls.

1.6.1 Dirichlet boundary condition

Usually, the scalar w = ρφ is known (or can be imposed relatively easily) at the inlet and walls
of a given system. Modelling then requires that w be fixed at a given value. In such situations

w(x, t) = w_D(x, t)   ∀ x ∈ Γ_D,   (1.27)

where Γ_D is the subset of Γ on which Dirichlet BC are applied.

1.6.2 Neumann boundary condition

Neumann BC are generally imposed at the outlet, where the flux of w is known. It follows that

f(x, t) · n = g(x, t)   ∀ x ∈ Γ_N,   (1.28)

where Γ_N is the subset of Γ on which Neumann BC are applied.

1.6.3 Robin boundary condition

Dirichlet and Neumann BC can be combined into a third type of boundary condition known as Robin BC.
They are defined as

w(x, t) + f(x, t) · n = 0   ∀ x ∈ Γ_R,   (1.29)

where Γ_R is the subset of Γ on which Robin BC are applied.
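A small sketch (Python for illustration; the problem data and the one-sided treatment of the Neumann row are assumptions of this example, not taken from the text) shows how Dirichlet and Neumann conditions enter the discrete system of a 1D steady diffusion problem:

```python
import numpy as np

# Boundary conditions on the 1D steady diffusion problem -D phi'' = 0:
# Dirichlet at x=0, Neumann at x=1. With zero source the exact solution is
# linear, so the sketch can be checked directly. All values are illustrative.
n = 51
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
D = 1.0
phi_left = 2.0                  # Dirichlet: phi(0) = 2
g = 3.0                         # Neumann: f.n = -D phi'(1) = g => phi'(1) = -g/D

A = np.zeros((n, n))
b = np.zeros(n)
for i in range(1, n - 1):       # interior second differences
    A[i, i - 1:i + 2] = [-D / h**2, 2 * D / h**2, -D / h**2]
A[0, 0] = 1.0                   # Dirichlet row: phi_0 = phi_left
b[0] = phi_left
A[-1, -2:] = [-1.0 / h, 1.0 / h]   # one-sided derivative for the Neumann row
b[-1] = -g / D

phi = np.linalg.solve(A, b)
print(np.allclose(phi, phi_left - (g / D) * x))   # exact linear profile
```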



1.7 Overview of the course

After visualizing the general structure of a transport equation (Eq. 1.13), it is natural to ask
in which ways such equations can be solved. The first possibility that comes to mind involves
analytical methods. Some of them rely on separation of variables, Fourier series, eigenfunction
expansions and Fourier transform techniques, among others. If a given PDE can be solved
analytically, there is probably no reason to look for a numerical method. Analytical methods are
sometimes preferable since they are exact, crystalline (you can usually look through them) and
compact. There are situations, though, where implementing an analytical method can be
computationally expensive and not even exact. A clear example is given by solutions based on
series expansions: in theory such a solution is exact only if the sum includes an infinite number
of terms. In addition, analytical solutions are available only for relatively simple PDEs. This
simplicity usually relates to the topology of the associated domain: lines, rectangles, circles,
cuboids, spheres or geometries with some kind of regularity. The definition of ‘simple’ also
concerns the structure of the PDE in most cases: a linear PDE with variable coefficients is
already too complex to be solved analytically, and for non-linear PDEs analytical solutions are
very difficult to obtain.

Due to these restrictions of analytical methods, numerical strategies are introduced. Numerical
methods, in the framework of PDEs, can be divided into three big groups: Finite Differences
(FD), Finite Volumes (FV) and Finite Elements (FEM):

• Finite Differences (Chap. 2) is a method derived from the definition of both the derivative
and the Taylor series. It consists in replacing the differential terms of a PDE by the
corresponding finite difference expressions. Although very powerful, since high order (very
accurate) numerical schemes can easily be derived, it is suitable only for relatively simple
geometries. Problems with complex geometries need mapping strategies (mathematical
transforms) to convert the geometry from the physical space to a computational space with
suitable coordinates, so that the problem becomes tractable with FD methods. Moreover, this
method sometimes leads to numerical schemes that are not conservative.

• Finite Volumes (Chap. 3) is a numerical method widespread in models describing fluids,
i. e. in Computational Fluid Dynamics (CFD). This is due to the corresponding numerical
scheme being based on the conservation law, a fundamental principle in nature. Another
advantage is the ease with which the numerical scheme can be adapted to complex topologies.
The main drawback of FV is the relative difficulty of increasing the order of the associated
numerical schemes.

• Finite Elements is the most widespread numerical method in structural mechanics. The main
reason for this success is the ability of such schemes to adapt to complex geometries in
an easy way and, sometimes under particular approaches, to account for problems with
discontinuous solutions. The principal disadvantage of FEM is its complexity in both
scheme derivation and numerical implementation.

Finite differences, finite volumes and finite elements are spatial discretization methods. Ac-
cordingly, they are used either alone when solving elliptic PDEs, or together with temporal
schemes when solving Unsteady problems (Chap. 4).

When a PDE is linear, the corresponding discretization (by any of the three methods mentioned
above) is a linear algebraic system, i. e. a system of the form Ax = b. Here A, x and b stand
for the system matrix, the vector of unknowns and the source vector, respectively. Solving this
linear system means solving the associated PDE. There is no such thing as a ‘best’ algorithm to
solve a linear system: the performance of a numerical algorithm when solving Ax = b depends
principally on the structure (how the non-zero elements are distributed within the matrix) and
on the size of the matrix A. Consequently, it is fundamental to pay some attention to Sparse
Matrices and Linear Solvers (Chap. 5).

As mentioned before, analytical approaches, if available, are sometimes preferable to numerical
methods. At this point it is very useful to introduce Green's functions (Chap. 6). Green's
functions build an analytical framework in which a general solution of a given PDE (for general
boundary conditions and source terms) is explicitly constructed as a sum of integral terms.
Green's functions are another powerful option to solve PDEs, although they are restricted to
topologies that can be easily described by Cartesian, polar or spherical coordinates.

1.8 Exercises

In this chapter we introduced partial differential equations (PDEs) in the context of transport
equations. In particular, we identified three types into which linear PDEs are classified:
parabolic, hyperbolic and elliptic. We now propose to use Matlab as the numerical tool to write
and visualize some analytical solutions that correspond to specific cases of these equations.

1.8.1 1D convection equation

The 1D convection equation reads:

∂φ/∂t + v_x ∂φ/∂x = 0.   (1.30)

Any function φ with argument x − v_x t is a solution of this equation, i. e. φ(x, t) = φ(x − v_x t).

Stage 1: Use Matlab symbolic toolbox

We consider the function φ( x ) defined as:



φ(x) = { x        if 0 < x ≤ 1
       { 2 − x    if 1 < x ≤ 2        (1.31)
       { 0        if 2 < x < L

This function can describe, for example, the distribution of temperature in a fluid at a given
moment. This distribution could be used as an initial condition when solving the convection-
diffusion equation. In order to introduce the information given by this function, it is necessary
to express it as a Fourier sine expansion:

φ(x) ≈ φ̃(x) = Σ_{m=1}^{N} a_m sin(mπx / L),   (1.32)

where N is a number large enough for φ̃(x) to be a good approximation (N > 100 is recommended)¹.
The period is established as L = 50. The Fourier coefficients a_m are found by solving

¹ Note that Eqs. (1.32), (1.35) and (1.37) are approximations of the exact solution. The exact
solution is obtained when N = ∞. Here we prefer to stress that such a case is impossible to
achieve in practice and, accordingly, we do not use equalities.
1.8 Exercises 23

Figure 1.4: The function φ̃(x) with L = 20 and N = 100.

frame = getframe(gcf)
[imind,cm] = rgb2ind(im,8)
drawnow
if t==0
    imwrite(imind,cm,filename,'gif','Loopcount',inf)
else
    imwrite(imind,cm,filename,'gif','WriteMode','append','DelayTime',0)
end
im = frame2im(frame)

Figure 1.5: Commands to generate a video in gif format. Note that these commands are in disorder!

$$a_m = \frac{2}{L}\int_0^L \phi(x)\,\sin\frac{m\pi x}{L}\,dx. \qquad (1.33)$$

Use Matlab to compute Eq. (1.33) analytically (mainly using the commands int and subs).

Stage 2: Plotting φ̃( x )

• Code the expressions for a_m according to Eq. (1.33). Subsequently, code and plot the function φ̃(x) resulting from Eq. (1.32). Try several values of N and observe their influence on the approximation of φ(x). The function φ̃(x) should look as shown in Fig. 1.4.
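As a cross-check of the symbolic result, the coefficients of Eq. (1.33) can also be obtained by numerical quadrature. The following plain-Python sketch (standard library only; it is not the Matlab code asked for in the exercise, L = 20 is chosen here as in Fig. 1.4, and the helper names are ours) reconstructs the hat profile from the truncated series:

```python
import math

L, N = 20.0, 200                      # period and number of Fourier modes

def phi0(x):
    """Hat profile of Eq. (1.31)."""
    if 0.0 < x <= 1.0:
        return x
    if 1.0 < x <= 2.0:
        return 2.0 - x
    return 0.0

def fourier_coeff(m, n_quad=4000):
    """a_m = (2/L) * integral_0^L phi(x) sin(m*pi*x/L) dx  (trapezoidal rule)."""
    h = L / n_quad
    s = 0.0
    for i in range(n_quad + 1):
        w = 0.5 if i in (0, n_quad) else 1.0
        s += w * phi0(i * h) * math.sin(m * math.pi * i * h / L)
    return 2.0 / L * h * s

a = [fourier_coeff(m) for m in range(1, N + 1)]

def phi_tilde(x):
    """Truncated sine series of Eq. (1.32)."""
    return sum(a[m - 1] * math.sin(m * math.pi * x / L) for m in range(1, N + 1))

# the truncated series should reproduce the hat profile closely
assert abs(phi_tilde(1.0) - 1.0) < 0.05 and abs(phi_tilde(3.0)) < 0.05
```

Increasing N tightens the reconstruction everywhere except right at the kinks of the hat, which is exactly the behaviour the exercise asks you to observe.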

Stage 3: Making a movie of convection

• Replace φ̃( x ) by φ̃( x − v x t) (note that coefficients am remain the same) for one value of v x
(v x = 1 m/s for example) and several times (ti+1 = ti + ∆t) for small ∆t.

• For each time t plot the function φ̃( x − v x t). In order to create the movie (here in gif
format) use the functions shown in Fig. 1.5.

1.8.2 1D diffusion equation

The diffusion equation in a one dimensional domain reads

$$\frac{\partial \phi}{\partial t} - D\,\frac{\partial^2 \phi}{\partial x^2} = 0. \qquad (1.34)$$

The solution of this equation, for homogeneous Dirichlet boundary conditions, is written as

$$\phi(x, t) \approx \tilde{\phi}(x, t) = \sum_{m=1}^{N} a_m\, e^{-D(m\pi/L)^2 t}\, \sin\frac{m\pi x}{L}. \qquad (1.35)$$

This solution, which is based on a Fourier series expansion, is a classical way of solving the diffusion equation and is mentioned in numerous books. As an example we can refer to chapter 14 in [2].

Stage 4: Making a movie of diffusion

• For each time t plot the function φ̃( x, t). In order to create the movie (here in gif format)
use the functions shown in Fig. 1.5.
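Before assembling the full movie, the structure of Eq. (1.35) can be verified mode by mode; a small Python sketch (the values of D, L and m are arbitrary illustration choices, not from the exercise):

```python
import math

D, L, m, a_m = 0.1, 20.0, 3, 1.0      # illustrative values

def phi(x, t):
    """Single mode of the series solution, Eq. (1.35)."""
    return a_m * math.exp(-D * (m * math.pi / L)**2 * t) * math.sin(m * math.pi * x / L)

t = 4.0
decay = math.exp(-D * (m * math.pi / L)**2 * t)
# the spatial shape is preserved; only the amplitude decays exponentially
assert abs(phi(2.0, t) - decay * phi(2.0, 0.0)) < 1e-12
# the homogeneous Dirichlet boundary conditions hold at all times
assert abs(phi(0.0, t)) < 1e-12 and abs(phi(L, t)) < 1e-12
```

Because each mode decays at its own rate D(mπ/L)², high modes (sharp features) vanish first, which is what the animated plots should show.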

1.8.3 1D convection diffusion equation

The convection-diffusion equation for 1D reads:

$$\frac{\partial \phi}{\partial t} + v_x\,\frac{\partial \phi}{\partial x} - D\,\frac{\partial^2 \phi}{\partial x^2} = 0. \qquad (1.36)$$

Analytical solutions for this equation exist and are very different depending on the bound-
ary conditions used. For a very simple case, in which periodic boundary conditions are used
(φ(0, t) = φ( L, t)), the solution can be written as:

$$\phi(x, t) \approx \tilde{\phi}(x, t) = \sum_{m=1}^{N} a_m\, e^{-D(m\pi/L)^2 t}\, \sin\frac{m\pi (x - v_x t)}{L} \qquad (1.37)$$

Stage 5: Making a movie of convection-diffusion

• For each time t plot the function φ̃( x, t). In order to create the movie (here in gif format)
use the functions shown in Fig. 1.5.

Figure 1.6: Solution of the 1D convection-diffusion equation with v_x = 3 m/s and D = 0.1 m²/s (see Eq. (1.37)). Left: snapshot after t = 1 s. Right: snapshot after t = 5 s.

1.8.4 More videos. Now in 2D (Optional)

Solutions of the previous equations can be also ploted as animated surfaces. In order to create
such plots, it is first required that the vector x and φ̃( x, t) (for a given value of t) is mapped to
a rectangular grid. Subsequently use the functions shown in Fig. 1.5. Two snapshots, which
correspond to a solution of the convection-diffusion equation, are shown in Fig. 1.6. Note that
the direction y is chosen arbitrarily.

1.8.5 Useful MATLAB commands

syms
    Defines the following symbols to be treated as symbols by the symbolic toolbox of MATLAB (e.g. syms x defines x to be handled as a symbol for symbolic calculations).

    Example:

    >> syms x
    >> int(x)

    ans =

    x^2/2
References

[1] Morton, K. W., and Mayers, D. F. Numerical Solution of Partial Differential Equations. Cambridge University Press, 2005.

[2] Polifke, W., and Kopitz, J. Wärmeübertragung. Grundlagen, analytische und numerische Methoden. Pearson Studium, 2005.

2 Finite Differences

Objectives
• Learn to derive a finite difference numerical scheme for different orders of accuracy.


Contents
2.1 Computational grid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.2 Deriving FD numerical schemes of arbitrary order . . . . . . . . . . . . . . . . 28
2.2.1 Taylor series and truncation error . . . . . . . . . . . . . . . . . . . . . . 29
2.2.2 Forward Euler scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.2.3 Centered scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
2.2.4 Backward Euler scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
2.2.5 Second order derivatives . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
2.3 2D steady heat equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
2.3.1 Discretizing the 2D steady heat equation by finite differences . . . . . . 35
2.3.2 Boundary conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
2.3.3 Assembling the linear system . . . . . . . . . . . . . . . . . . . . . . . . . 38
2.4 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
2.4.1 Useful MATLAB commands . . . . . . . . . . . . . . . . . . . . . . . . . 40
2.4.2 Tips by Juan Pablo Garcia (ex-student) . . . . . . . . . . . . . . . . . . . . 41

2.1 Computational grid

A computational grid is a discretization of the computational domain, made of elements, faces and nodes. Finite differences is a very efficient method for relatively simple topologies, and therefore it is mainly applied on Cartesian grids such as the ones shown in Fig. 2.1.

In Cartesian grids, the faces of the elements are aligned with the axes of a Cartesian frame of reference. The nodes, defined as the intersections of the faces, can be placed either uniformly

Figure 2.1: Cartesian grids in 1D, 2D and 3D



WW W w P e E EE

∆x
Figure 2.2: Labeling of 1D computational grid.

(equidistant) or non-uniformly. Accordingly, the elements (or cells) can be either lines (1D),
rectangles (2D) or cuboids (3D).

2.2 Deriving FD numerical schemes of arbitrary order

As already mentioned at the end of the last chapter, the finite differences method consists in replacing derivative expressions of a given quantity at a specific point by a linear combination of values of that quantity at neighbouring points. The derivative with respect to x of a quantity φ at P is thus approximated by a linear combination:

$$\left.\frac{d\phi}{dx}\right|_{x_P} \approx \cdots + a_{WW}\,\phi_{WW} + \cdots + a_P\,\phi_P + a_E\,\phi_E + \cdots, \qquad (2.1)$$

where the subindices indicate the position at which φ is evaluated. In practice, three schemes are widely used. Their definition refers to a stream that flows from left (upstream or West) to right (downstream or East), as illustrated in Fig. 2.2. These schemes are:

• Forward Euler (FE): Linear combination of values of φ in the downstream region:

$$\left.\frac{d\phi}{dx}\right|_{x_P} \approx a_P\,\phi_P + a_E\,\phi_E + a_{EE}\,\phi_{EE} + \cdots \qquad (2.2)$$

• Centered: Linear combination of values of φ in both upstream and downstream regions:

$$\left.\frac{d\phi}{dx}\right|_{x_P} \approx \cdots + a_W\,\phi_W + a_P\,\phi_P + a_E\,\phi_E + \cdots \qquad (2.3)$$

• Backward Euler (BE): Linear combination of values of φ in the upstream region:

$$\left.\frac{d\phi}{dx}\right|_{x_P} \approx \cdots + a_{WW}\,\phi_{WW} + a_W\,\phi_W + a_P\,\phi_P \qquad (2.4)$$

The question now is how to establish appropriate values for the coefficients a so that an accurate approximation is obtained. We will see this in the following.

2.2.1 Taylor series and truncation error

The derivation of numerical schemes in the framework of finite differences is strongly related to the Taylor series. This series expresses a function f evaluated at any point x as an infinite sum of terms that depend on the function's derivatives at a reference point x_P. It is written as


$$f(x) = \sum_{k=0}^{\infty} \frac{(x - x_P)^k}{k!}\left.\frac{d^k f}{dx^k}\right|_{x_P} = f(x_P) + (x - x_P)\left.\frac{df}{dx}\right|_{x_P} + \frac{(x - x_P)^2}{2!}\left.\frac{d^2 f}{dx^2}\right|_{x_P} + \frac{(x - x_P)^3}{3!}\left.\frac{d^3 f}{dx^3}\right|_{x_P} + \cdots + \frac{(x - x_P)^k}{k!}\left.\frac{d^k f}{dx^k}\right|_{x_P} + \cdots \qquad (2.5)$$

2.2.2 Forward Euler scheme

Let us start by considering only two coefficients of the linear combination (Eq. (2.2)). The approximation of $\left.\frac{df}{dx}\right|_{x_P}$ is then given by

$$\left.\frac{df}{dx}\right|_{x_P} \approx a_1 f(x_P) + a_2 f(x_E). \qquad (2.6)$$

Considering now Eq. (2.5), fixing x = x_E and rearranging,

$$\left.\frac{df}{dx}\right|_{x_P} = \frac{f(x_E) - f(x_P)}{\Delta x} - \frac{\Delta x}{2!}\left.\frac{d^2 f}{dx^2}\right|_{x_P} - \frac{(\Delta x)^2}{3!}\left.\frac{d^3 f}{dx^3}\right|_{x_P} - \cdots - \frac{(\Delta x)^{k-1}}{k!}\left.\frac{d^k f}{dx^k}\right|_{x_P} - \cdots, \qquad (2.7)$$

where the notation ∆x = xE − xP has been introduced. In practice, it is not possible to carry
out an infinite summation of terms. Instead a truncation of the series is performed in which
only the significant terms are considered. Since the derivative terms in the RHS of Eq. (2.7) are
usually unknown they are dropped. The expression (2.7) is then written as

$$\left.\frac{df}{dx}\right|_{x_P} + \mathcal{O}(\Delta x) = \frac{f(x_E) - f(x_P)}{\Delta x} \qquad\text{and therefore}\qquad \left.\frac{df}{dx}\right|_{x_P} \approx \overbrace{\frac{1}{\Delta x}}^{a_2} f(x_E) + \overbrace{\frac{-1}{\Delta x}}^{a_1} f(x_P). \qquad (2.8)$$

The term O(∆x ) represents the truncation error introduced in the approximation. In this case
this term (O(∆x)) is of first order, i. e. of the same order as ∆x. We say then that the scheme is first order accurate. We introduce the definition of order of accuracy:

Order of accuracy: The accuracy order of a FD discretisation is the power of (∆x) to which the truncation error is approximately proportional.

This means that if the distance between nodes (∆x = x_E − x_P) is reduced by half, for instance, the error in the discretisation of $\left.\frac{df}{dx}\right|_{x_P}$ will also be reduced by approximately half. In the limit ∆x → 0, we recover the definition of the derivative:

$$\left.\frac{df}{dx}\right|_{x_P} = \lim_{\Delta x \to 0} \frac{f(x_E) - f(x_P)}{\Delta x} \qquad (2.9)$$

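The first order behaviour is easy to observe numerically; a short Python sketch (the test function f = sin and the point x_P = 1 are arbitrary choices, and Python stands in for the course's Matlab):

```python
import math

f, dfdx = math.sin, math.cos(1.0)     # test function and its exact derivative at x_P = 1

def fwd_error(dx):
    """Error of the first order forward difference, Eq. (2.8)."""
    return abs((f(1.0 + dx) - f(1.0)) / dx - dfdx)

# halving dx roughly halves the error: the scheme is first order accurate
ratio = fwd_error(0.01) / fwd_error(0.005)
assert 1.8 < ratio < 2.2
```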
The truncation error O(∆x) will be very small for very small ∆x, but that would require a considerably large number of grid points, i.e. large meshes! Let us instead be smart and add another term to the linear combination. This means that we will consider the information given by a third grid point. This leads to

$$\left.\frac{df}{dx}\right|_{x_P} \approx a_1 f(x_P) + a_2 f(x_E) + a_3 f(x_{EE}). \qquad (2.10)$$

We now evaluate f ( xE ) and f ( xEE ) using the Taylor series definition (Eq. (2.5)):

$$f(x_E) = f(x_P) + \frac{\Delta x}{1!}\left.\frac{df}{dx}\right|_{x_P} + \frac{(\Delta x)^2}{2!}\left.\frac{d^2 f}{dx^2}\right|_{x_P} + \frac{(\Delta x)^3}{3!}\left.\frac{d^3 f}{dx^3}\right|_{x_P} + \cdots + \frac{(\Delta x)^k}{k!}\left.\frac{d^k f}{dx^k}\right|_{x_P} + \cdots \qquad (2.11)$$

$$f(x_{EE}) = f(x_P) + \frac{2\Delta x}{1!}\left.\frac{df}{dx}\right|_{x_P} + \frac{(2\Delta x)^2}{2!}\left.\frac{d^2 f}{dx^2}\right|_{x_P} + \frac{(2\Delta x)^3}{3!}\left.\frac{d^3 f}{dx^3}\right|_{x_P} + \cdots + \frac{(2\Delta x)^k}{k!}\left.\frac{d^k f}{dx^k}\right|_{x_P} + \cdots, \qquad (2.12)$$

where a uniform grid is considered and therefore 2∆x = xEE − xP . By combining Eqs. (2.11)
and (2.12) with Eq. (2.10), we realize that three conditions between the coefficients arise. These
conditions are:

$$a_1 + a_2 + a_3 = 0 \quad\text{since we want}\quad (a_1 + a_2 + a_3)\,f(x_P) = 0 \qquad (2.13)$$

$$a_2\,\Delta x + 2a_3\,\Delta x = 1 \quad\text{since we want}\quad (a_2\,\Delta x + 2a_3\,\Delta x)\left.\frac{df}{dx}\right|_{x_P} = \left.\frac{df}{dx}\right|_{x_P} \qquad (2.14)$$

$$a_2/2 + 2a_3 = 0 \quad\text{since we want}\quad \left(a_2\,(\Delta x)^2/2! + a_3\,(2\Delta x)^2/2!\right)\left.\frac{d^2 f}{dx^2}\right|_{x_P} = 0 \qquad (2.15)$$

which leads to a1 = −3/(2∆x ), a2 = 2/∆x, a3 = −1/(2∆x ). Plugging them into Eq. (2.10)
yields

$$\left.\frac{df}{dx}\right|_{x_P} + \mathcal{O}(\Delta x)^2 = \frac{-3 f(x_P) + 4 f(x_E) - f(x_{EE})}{2\Delta x}. \qquad (2.16)$$

The approximation of Eq. (2.16) is of order O(∆x)², i. e. second order accurate. The terms of second order O(∆x)² of the Taylor series expression were ‘designed’ to be zero (Eq. (2.15)). Note, nevertheless, that the remaining terms of order O(∆x)³ in the Taylor series are divided by ∆x in Eq. (2.16), and that is why this approximation is second order accurate and not third order accurate.

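Both the coefficient conditions (2.13)-(2.15) and the second order convergence of Eq. (2.16) can be checked with a few lines of Python (a language-neutral sketch; the test function f = exp is an arbitrary choice):

```python
import math

dx = 0.1
a1, a2, a3 = -3/(2*dx), 2/dx, -1/(2*dx)     # coefficients derived above
assert abs(a1 + a2 + a3) < 1e-12            # condition (2.13)
assert abs(a2*dx + 2*a3*dx - 1.0) < 1e-12   # condition (2.14)
assert abs(a2/2 + 2*a3) < 1e-12             # condition (2.15)

f, dfdx = math.exp, math.exp(1.0)           # test function and exact derivative at x_P = 1

def fwd2_error(h):
    """Error of the second order forward scheme, Eq. (2.16)."""
    return abs((-3*f(1.0) + 4*f(1.0 + h) - f(1.0 + 2*h)) / (2*h) - dfdx)

# halving dx divides the error by roughly four: second order accuracy
ratio = fwd2_error(0.01) / fwd2_error(0.005)
assert 3.6 < ratio < 4.4
```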
One advantage of a second order scheme over a first order one is that results based on a second order scheme approach the exact value of $\left.\frac{df}{dx}\right|_{x_P}$ faster as ∆x → 0. This can be reasoned as follows: if the distance between nodes ∆x is reduced by a factor of two (doubling the number of elements in the mesh), the truncation error in the approximation of $\left.\frac{df}{dx}\right|_{x_P}$ becomes approximately four times smaller than before (when the cells were two times larger). For a first order scheme this error would only become approximately two times smaller when halving ∆x. Theoretically, we could increase the order of the approximation by adding more terms to the linear combination (Eq. (2.10)). Accordingly, four terms would lead to a third order approximation, since the terms O(∆x) and O(∆x)² are forced to vanish; five terms would produce a fourth order approximation, and so on. In practice, the length of the stencil, i. e. the number of coefficients involved in the approximation, should not be too large, since several difficulties can arise. For instance, if a large-stencil scheme discretizes a linear PDE, the resulting linear system is not as easy to solve as with a short stencil: the resulting matrix is less sparse and possibly more irregular. Another problem is computational memory, since too many coefficients would eventually have to be stored at each node.

2.2.3 Centered scheme

Let us write the most classical centered scheme as a linear combination of two terms:

$$\left.\frac{df}{dx}\right|_{x_P} \approx a_1 f(x_W) + a_2 f(x_E) \qquad (2.17)$$

We proceed now as done previously. Using Eq. (2.5), the quantities f(x_W) and f(x_E) are described as

$$f(x_W) = f(x_P) - \frac{\Delta x}{1!}\left.\frac{df}{dx}\right|_{x_P} + \frac{(\Delta x)^2}{2!}\left.\frac{d^2 f}{dx^2}\right|_{x_P} - \frac{(\Delta x)^3}{3!}\left.\frac{d^3 f}{dx^3}\right|_{x_P} + \cdots + \frac{(-\Delta x)^k}{k!}\left.\frac{d^k f}{dx^k}\right|_{x_P} + \cdots \qquad (2.18)$$

$$f(x_E) = f(x_P) + \frac{\Delta x}{1!}\left.\frac{df}{dx}\right|_{x_P} + \frac{(\Delta x)^2}{2!}\left.\frac{d^2 f}{dx^2}\right|_{x_P} + \frac{(\Delta x)^3}{3!}\left.\frac{d^3 f}{dx^3}\right|_{x_P} + \cdots + \frac{(\Delta x)^k}{k!}\left.\frac{d^k f}{dx^k}\right|_{x_P} + \cdots \qquad (2.19)$$

The equations to solve are:

$$a_1 + a_2 = 0 \qquad (2.20)$$

$$-a_1\,\Delta x + a_2\,\Delta x = 1 \qquad (2.21)$$

which give the solutions a1 = −1/(2∆x ) and a2 = 1/(2∆x ):

$$\left.\frac{df}{dx}\right|_{x_P} \approx \frac{f(x_E) - f(x_W)}{2\Delta x}, \qquad (2.23)$$

which is actually second order accurate, although information from only two grid points is used! The reason becomes apparent by replacing Eq. (2.18) and Eq. (2.19) into Eq. (2.23): the term containing (∆x)² is eliminated by the subtraction! As with the Forward Euler scheme, the centered scheme can also include as many terms as desired, so that accuracy is increased.
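The second order behaviour of the two-point centered scheme can be confirmed with the same numerical experiment as before; a Python sketch (f = sin at x_P = 1 is an arbitrary choice):

```python
import math

f, dfdx = math.sin, math.cos(1.0)      # test function and exact derivative at x_P = 1

def centered_error(h):
    """Error of the centered scheme, Eq. (2.23)."""
    return abs((f(1.0 + h) - f(1.0 - h)) / (2*h) - dfdx)

# halving dx divides the error by roughly four, although only two points are used
ratio = centered_error(0.02) / centered_error(0.01)
assert 3.6 < ratio < 4.4
```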

2.2.4 Backward Euler scheme

Considering two terms, the Backward Euler (BE) scheme is written:

$$\left.\frac{df}{dx}\right|_{x_P} \approx a_1 f(x_P) + a_2 f(x_W) \qquad (2.24)$$

Repeating the procedure used previously, we obtain the approximation

$$\left.\frac{df}{dx}\right|_{x_P} \approx \frac{f(x_P) - f(x_W)}{\Delta x}, \qquad (2.25)$$

which is an approximation of order O(∆x ). Higher accuracy can be obtained, as stated before,
by adding more terms to the linear combination of Eq. (2.24).

2.2.5 Second order derivatives

We have already seen how FD schemes are designed for a first order derivative using three approaches: FE, centered and BE. Now we will study the technique to compute a second order derivative for a given order of accuracy. We will concentrate on the centered scheme. As before, we start by expressing the second order derivative as a linear combination of three terms

$$\left.\frac{d^2 f}{dx^2}\right|_{x_P} \approx a_1 f(x_W) + a_2 f(x_P) + a_3 f(x_E). \qquad (2.26)$$

Subsequently, we express f ( xW ) and f ( xE ) considering Eq. (2.18) and Eq. (2.19). As a result,
three equations are stated:

$$a_1 + a_2 + a_3 = 0 \quad\text{since we want}\quad (a_1 + a_2 + a_3)\,f(x_P) = 0 \qquad (2.27)$$

$$-a_1\,\Delta x + a_3\,\Delta x = 0 \quad\text{since we want}\quad (-a_1\,\Delta x + a_3\,\Delta x)\left.\frac{df}{dx}\right|_{x_P} = 0 \qquad (2.28)$$

$$a_1\,(\Delta x)^2/2 + a_3\,(\Delta x)^2/2 = 1 \quad\text{since we want}\quad \left(a_1\,(\Delta x)^2/2! + a_3\,(\Delta x)^2/2!\right)\left.\frac{d^2 f}{dx^2}\right|_{x_P} = \left.\frac{d^2 f}{dx^2}\right|_{x_P} \qquad (2.29)$$

which yield a1 = 1/(∆x)², a2 = −2/(∆x)² and a3 = 1/(∆x)². Replacing now these values in Eq. (2.26) yields

$$\left.\frac{d^2 f}{dx^2}\right|_{x_P} + \mathcal{O}(\Delta x)^2 = \frac{f(x_W) - 2 f(x_P) + f(x_E)}{(\Delta x)^2}. \qquad (2.30)$$

Naturally, we would expect this approximation to be of first order. Nevertheless, replacing Eq. (2.18) and Eq. (2.19) in Eq. (2.30) shows that, in addition to the first and second terms on the RHS, the fourth term (of order (∆x)³) also cancels out. Consequently, the approximation given by Eq. (2.30) is second order accurate. It should be mentioned that this relation can also be obtained by combining the definition of the first order derivative as follows:

$$\left.\frac{d^2 f}{dx^2}\right|_{x_P} \approx \frac{\left.\frac{df}{dx}\right|_{x_e} - \left.\frac{df}{dx}\right|_{x_w}}{\Delta x} = \frac{\left[f(x_E) - f(x_P)\right]/\Delta x - \left[f(x_P) - f(x_W)\right]/\Delta x}{\Delta x}, \qquad (2.31)$$

in which eventually the same expression given by Eq. (2.30) is recovered. As mentioned for first order derivatives, the order of accuracy of second order derivative approximations is also strongly linked to the length of the stencil, i. e. the number of coefficients involved in the linear combination: the higher the number of coefficients, the higher the order of accuracy.
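The second order accuracy of Eq. (2.30) can be verified numerically as well; a short Python sketch (f = sin at x_P = 1 is an arbitrary choice):

```python
import math

f, d2fdx2 = math.sin, -math.sin(1.0)   # test function and exact second derivative at x_P = 1

def central2_error(h):
    """Error of the central second-derivative stencil, Eq. (2.30)."""
    return abs((f(1.0 - h) - 2*f(1.0) + f(1.0 + h)) / h**2 - d2fdx2)

ratio = central2_error(0.02) / central2_error(0.01)
assert 3.6 < ratio < 4.4               # halving dx quarters the error
```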

Finally, a new concept will be introduced:

Consistency: A numerical scheme is consistent as long as the truncation error tends to zero when ∆x tends to zero.

As observed in all previous FD schemes described, decreasing the value of ∆x results in a reduc-
tion of the truncation error. Therefore, we can conclude that all these schemes are consistent.
How large this reduction is, depends on the order of accuracy of the scheme.

2.3 2D steady heat equation

Now, that we have observed how first and second order derivatives are discretized, let us put
that in practice. Recall the general form of a diffusion-reaction transport equation

$$\frac{\partial}{\partial t}(\rho\phi) - \nabla\cdot(\mathcal{D}\,\rho\nabla\phi) = s, \qquad (2.32)$$

where φ is the concentration of a conserved scalar per unit mass, ρ is the density of the carrier
flow and D is the diffusivity of φ. In an incompressible medium (constant density), the specific
internal energy u is given by u = cT, where c is the heat capacity. Considering then u as the
conserved scalar we are interested in, Eq. (2.32) becomes

$$\rho c\,\frac{\partial T}{\partial t} - \rho c\,\nabla\cdot(\mathcal{D}\nabla T) = s. \qquad (2.33)$$

If now we consider the thermal diffusivity D = λ/(ρc), where λ is the thermal conductivity of
the medium, Eq. (2.33) results in

$$\rho c\,\frac{\partial T}{\partial t} - \nabla\cdot(\lambda\nabla T) = \dot{\omega}, \qquad (2.34)$$

where ω̇ represents a given heat release source term. For two dimensions, Eq. (2.34) is written
as

$$\rho c\,\frac{\partial T}{\partial t} - \frac{\partial}{\partial x}\!\left(\lambda\frac{\partial T}{\partial x}\right) - \frac{\partial}{\partial y}\!\left(\lambda\frac{\partial T}{\partial y}\right) = \dot{\omega}. \qquad (2.35)$$

Note that we are considering a thermal conductivity λ that depends on space and, as a result, cannot be taken outside the spatial differential operators. In the following we will consider the steady version of the heat equation and, in addition, no source inside the domain is accounted for. The temperature distribution in the domain will then depend only on the boundary conditions and on the thermal conductivity of the medium.

2.3.1 Discretizing the 2D steady heat equation by finite differences

In the steady state, the problem of interest reduces to

$$\frac{\partial}{\partial x}\!\left(\lambda\frac{\partial T}{\partial x}\right) + \frac{\partial}{\partial y}\!\left(\lambda\frac{\partial T}{\partial y}\right) = -\dot{\omega} \qquad (2.36)$$

Applying now the approximation suggested by Eq. (2.30) leads to
$$\left.\frac{\partial}{\partial x}\!\left(\lambda\frac{\partial T}{\partial x}\right)\right|_P \approx \frac{\lambda_e\left.\frac{\partial T}{\partial x}\right|_e - \lambda_w\left.\frac{\partial T}{\partial x}\right|_w}{\Delta x} \approx \frac{\lambda_e\,\frac{T_E - T_P}{\Delta x} - \lambda_w\,\frac{T_P - T_W}{\Delta x}}{\Delta x} = \frac{\lambda_e T_E - T_P(\lambda_e + \lambda_w) + \lambda_w T_W}{(\Delta x)^2} \qquad (2.37)$$

and

$$\left.\frac{\partial}{\partial y}\!\left(\lambda\frac{\partial T}{\partial y}\right)\right|_P \approx \frac{\lambda_n\left.\frac{\partial T}{\partial y}\right|_n - \lambda_s\left.\frac{\partial T}{\partial y}\right|_s}{\Delta y} \approx \frac{\lambda_n\,\frac{T_N - T_P}{\Delta y} - \lambda_s\,\frac{T_P - T_S}{\Delta y}}{\Delta y} = \frac{\lambda_n T_N - T_P(\lambda_n + \lambda_s) + \lambda_s T_S}{(\Delta y)^2}. \qquad (2.38)$$

where the notation (labeling of nodes) is shown in Fig. 2.3. The second order accurate approximation of the steady heat equation on a Cartesian grid is then given by
n
W w P e E
s ∆y
S

∆x
Figure 2.3: Labeling of a 2D computational grid
$$\left.\frac{\partial}{\partial x}\!\left(\lambda\frac{\partial T}{\partial x}\right)\right|_P + \left.\frac{\partial}{\partial y}\!\left(\lambda\frac{\partial T}{\partial y}\right)\right|_P \approx \frac{\lambda_e T_E + \lambda_w T_W}{(\Delta x)^2} + \frac{\lambda_n T_N + \lambda_s T_S}{(\Delta y)^2} - T_P\left(\frac{\lambda_e + \lambda_w}{(\Delta x)^2} + \frac{\lambda_n + \lambda_s}{(\Delta y)^2}\right) = -\dot{\omega} \qquad (2.39)$$

The values of the thermal conductivity between nodes are computed by interpolation. The
discretization above can be rewritten as

$$\frac{\lambda_e}{(\Delta x)^2}\,T_E + \frac{\lambda_w}{(\Delta x)^2}\,T_W + \frac{\lambda_n}{(\Delta y)^2}\,T_N + \frac{\lambda_s}{(\Delta y)^2}\,T_S + \left(-\frac{\lambda_e}{(\Delta x)^2} - \frac{\lambda_w}{(\Delta x)^2} - \frac{\lambda_n}{(\Delta y)^2} - \frac{\lambda_s}{(\Delta y)^2}\right)T_P \qquad (2.40)$$

$$= c_E T_E + c_W T_W + c_N T_N + c_S T_S + c_P T_P = -\dot{\omega}. \qquad (2.41)$$

We want to point out that, when ω̇ = 0, the coefficient for the central node c_P can be computed as the negative of the sum of all other coefficients:

$$c_P = -(c_E + c_W + c_N + c_S), \qquad (2.42)$$

or, as will become more evident in chapter 5, that the resulting matrix is diagonally dominant.

Figure 2.4: Boundary conditions: isothermal at the West boundary, imposed heat flux q̇_y at the South boundary, and conjugate heat transfer at the North and East boundaries.

2.3.2 Boundary conditions

In the introductory chapter, three types of Boundary Conditions (BC) were introduced: Dirich-
let, Neumann and Robin. In this section, they are stated in the framework of the heat equation
and finite differences. Let us assume that we desire to impose a temperature profile TD (y) at
the West boundary (see Fig. 2.4). The equations at nodes where x = 0 and 0 ≤ y ≤ L are given
by

TP = TD (y) ∀ P ∈ ΓD Dirichlet BC (or first kind) (2.43)

where Γ D is the set of nodes that belong to the West boundary. Let us assume now that we
want to apply a given heat flux q̇ · n = g( x) at the South boundary. We want then that

$$\left.\dot{q}_y\right|_P = -\lambda_P\left.\frac{\partial T}{\partial y}\right|_P = g(x) \quad \forall\, P \in \Gamma_N \qquad \text{Neumann BC (or second kind)}, \qquad (2.44)$$

where Γ N is the set of nodes that belong to the South boundary. The first idea that naturally
comes to our minds is to discretize this BC as

$$-\lambda_P\left.\frac{\partial T}{\partial y}\right|_P \approx -\lambda_P\,\frac{T_N - T_P}{\Delta y}. \qquad (2.45)$$

Nevertheless, this BC is only first order accurate, as seen previously. In an elliptic equation, information (a given error, for instance) propagates everywhere in the domain. Accordingly, even if we use a second order centered scheme to discretize the steady heat equation at the interior of the domain, the solution would no longer be second order accurate, but only first order accurate, due to the low order of accuracy of the downwind scheme used for discretizing the BC. Instead, we can use the second order downwind scheme (Eq. 2.16) so that

$$-\lambda_P\left.\frac{\partial T}{\partial y}\right|_P \approx -\lambda_P\,\frac{-3T_P + 4T_N - T_{NN}}{2\Delta y} = g(x). \qquad (2.46)$$

This is the equation that should be applied to the nodes belonging to Γ N if we want to conserve
the order of accuracy of the solution. Finally, we want to apply a Robin BC at the North and
East boundaries of the domain. For the North boundary, the BC reads:

$$\alpha(T_P - T_\infty) + \lambda_P\left.\frac{\partial T}{\partial y}\right|_P = 0 \quad \forall\, P \in \Gamma_R \qquad \text{Robin BC (or third kind)} \qquad (2.47)$$

Equation (2.47) is a classical BC for conjugate heat transfer, i. e. heat transfer at the interface of
a solid and a fluid (see [2]). This boundary condition is widely used when evaluating cooling
(or heating) of a solid caused by the thermal interaction with the fluid in which it is immersed.
In this case, T∞ stands for the temperature of the surrounding fluid and α is the convective heat
transfer coefficient. Using a second order accurate scheme, Eq. (2.47) is discretized as:

$$\alpha(T_P - T_\infty) + \lambda_P\,\frac{3T_P - 4T_S + T_{SS}}{2\Delta y} = 0. \qquad (2.48)$$

2.3.3 Assembling the linear system

Let us express Eq. (2.39) as a linear system of equations R T_I = b_I for the inner nodes, and Eqs. (2.43), (2.44) and (2.47) as a linear system of equations B T_b = b_b for the boundary nodes. A final system A T = b can be assembled, where A is a matrix composed of the submatrices R and B, and b is a vector composed of b_I and b_b. We consequently have a linear system

$$A\,T = b \qquad (2.49)$$

to be solved, where A is an N × N matrix and N is the number of nodes of the complete system after discretization. Correspondingly, T and b are vectors of size N × 1.
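As an illustration of the assembled system (2.49), the following Python sketch (plain standard library; Gauss-Seidel iteration stands in for Matlab's backslash operator, and the Dirichlet boundary data T = x + y are our choice, not from the exercise) solves the 5-point discretization with constant λ on a small uniform grid:

```python
# Pure-Python sketch of the 5-point stencil of Eq. (2.39) with constant lambda:
# Dirichlet values are taken from the harmonic function T = x + y, and the
# interior is solved by Gauss-Seidel sweeps instead of a direct solver.
n, dx = 6, 0.2                     # 6x6 nodes, uniform spacing (dx = dy)
exact = lambda i, j: i*dx + j*dx   # T = x + y satisfies the steady heat equation

T = [[0.0]*n for _ in range(n)]
for i in range(n):                 # set the Dirichlet boundary nodes
    for j in range(n):
        if i in (0, n-1) or j in (0, n-1):
            T[i][j] = exact(i, j)

for _ in range(500):               # Gauss-Seidel sweeps on the interior nodes
    for i in range(1, n-1):
        for j in range(1, n-1):
            T[i][j] = 0.25*(T[i-1][j] + T[i+1][j] + T[i][j-1] + T[i][j+1])

# the 5-point stencil is exact for linear fields, so the interior must match
assert all(abs(T[i][j] - exact(i, j)) < 1e-10
           for i in range(1, n-1) for j in range(1, n-1))
```

The same structure (boundary rows trivial, interior rows carrying the stencil coefficients) is what the matrix A of Eq. (2.49) encodes when solved with x=A\b in Matlab.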

Figure 2.5: Suggested labeling of a 2D computational grid (row index i, column index j, spacings ∆x and ∆y).

2.4 Exercises

Discretizing the 2D steady heat equation

Stage 1: Understanding (pen and paper)

The 2D steady heat equation reads:

$$\frac{\partial}{\partial x}\!\left(\lambda\frac{\partial T}{\partial x}\right) + \frac{\partial}{\partial y}\!\left(\lambda\frac{\partial T}{\partial y}\right) = 0. \qquad (2.50)$$

• Derive a FD scheme of second order accuracy with constant λ and Dirichlet boundary conditions. The suggested labeling of the grid is illustrated in Fig. 2.5.

Stage 2: Coding in Matlab: Dirichlet BC and constant λ

A file ‘finite_diff.m’ is given as a starting point. Only the heading of the code is explicitly given.
It is suggested to start with very small systems (25 nodes at most).

Stage 3: Coding in Matlab: Different BC.

In addition to Dirichlet BC at the West, a Neumann BC will be imposed at the South whereas
Robin BC will be imposed at North and East. This is illustrated in Fig. 2.6.

Stage 4: Coding in Matlab: Considering variable thermal conductivity (optional).

Eq. (2.50) should now be discretized considering a thermal conductivity λ that depends on space.

Stage 5: Coding in Matlab: Considering a pointwise source (optional).

Consider a non-zero source in Eq. (2.50). An illustration of such a field is given in Fig. 2.7

Figure 2.6: Boundary conditions at stage 3: isothermal at the West boundary, imposed heat flux q̇_y at the South boundary, and conjugate heat transfer at the North and East boundaries.

Figure 2.7: Example of temperature field obtained by finite differences.

2.4.1 Useful MATLAB commands

index = @(ii,jj) ii+(jj-1)∗ n This is a useful definition to address an element of a


vector by two coordinates. If a property that is phys-
ically in 2D space with dimensions n × m has to be
saved in a 1D vector (dimension nm × 1), all colums
of the 2D object are concatenated to form only one
column. This functions lets you address an element
as if it were still in 2D (e.g. X(index(2,2)).

x=A\b Solve the linear system Ax = b for x. The backslash


operator (\) selects the best way to to this within
matlab.

2.4.2 Tips by Juan Pablo Garcia (ex-student)

• It is suggested to start with a case in which ∆x = ∆y and L_x = L_y = 1 in order to avoid a very complex problem from the beginning. In addition, by generating a small mesh and simple values for L_x, L_y, ∆x and ∆y, the problem will be easier to debug. To do that, it is good to start with an 11×11 mesh and ∆x = ∆y = 0.1.

• Solving the problem will be done by solving one system of equations. It is suggested then
to refresh the knowledge on solving equation systems (with matrices).

• Some of the tools learned in the Teaser.m exercises are necessary. For example, generating a vector with all the values of a matrix (if the matrix is 10 × 10, the vector will be 100 × 1).

• Give a number (index) to each node corresponding to the position in the matrix. In order
to relate the position in the matrix to the position in the coordinates x, y two things should
be considered:

– As the column number increases, the value of the x coordinate also increases. In contrast, as the row number increases, the value of the y coordinate decreases (due to the notation illustrated in Fig. 2.5).
– The position in the matrix is given by (row, column), which corresponds to (y, x) instead of the (x, y) that we usually use to name points in the Cartesian coordinate system.
3 Finite Volumes
Objectives
• Learn to derive a finite volume numerical scheme for non-Cartesian domains.

Contents
3.1 Derivation of algebraic equations from PDE . . . . . . . . . . . . . . . . . . . . 43
3.1.1 Applying divergence theorem . . . . . . . . . . . . . . . . . . . . . . . . 43
3.1.2 Defining cell normals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
3.1.3 Applying an integral rule . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
3.1.4 Applying Green’s theorem . . . . . . . . . . . . . . . . . . . . . . . . . . 47
3.2 Exercises Part 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
3.3 Exercises Part 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
3.3.1 Useful MATLAB commands . . . . . . . . . . . . . . . . . . . . . . . . . 53
3.3.2 Tips by Juan Pablo Garcia (ex-student) . . . . . . . . . . . . . . . . . . . . 53
3.3.3 Flowchart . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53

In Chapter 2, we have seen how the 2D steady heat equation is discretized by the Finite Differences (FD) technique. We solved the 2D steady heat equation considering a rectangular domain and a thermal conductivity λ dependent on space. In the present chapter we will add an additional difficulty: the evaluation of heat conduction on a non-Cartesian grid. The fact that

Figure 3.1: Illustration of domain and grid of interest. Nodes in blue are shown with the
respective notation in Fig. 3.2

the faces of the cells are no longer aligned with the Cartesian frame of reference adds enormous complexity to the problem if FD is the chosen approach. On the contrary, the Finite Volumes (FV) method, a completely different approach, ‘defeats’ this complexity. A great advantage of FV is that, under purely physical reasoning, a given PDE may be discretized even in complex geometries. Moreover, it respects the conservation laws under which transport equations are constructed, and it is not computationally expensive compared with techniques such as finite elements.

In order not to add superfluous complexity to the derivation of FV, the heat conductivity λ will be considered constant over the whole domain. We are then interested in solving

$$\lambda\nabla^2 T = 0, \qquad (3.1)$$

in a domain discretized with a non-Cartesian grid such as the one illustrated in Fig. 3.1.

3.1 Derivation of algebraic equations from PDE

Note that the derivation procedure that will be introduced holds for any geometry in 2D (and
eventually 3D). The procedure will be described as a sequence of steps.

3.1.1 Applying divergence theorem

We integrate Eq. (3.1) over a control surface1 SP centered at the node P. This control surface SP
has vertices sw, se, ne and nw, as shown in Fig. 3.2(b), where s, e, n and w stand for ‘south’, ‘east’,
‘north’ and ‘west’ with respect to the node P. Note from Fig 3.2 that lowercase letters represent
1 Note that we are considering here a two-dimensional case. That is why we treat a computational cell as a control
surface and not as a control volume.

Figure 3.2: Domain of interest. Six different illustrations where nodes and middle points between nodes are labeled by upper case and lower case letters, respectively. Control surfaces are labelled with respect to the center node (or point) of the corresponding surface.

points between nodes, whereas capital letters stand for the actual nodes of the grid (Fig. 3.2(a)).
After applying the divergence theorem2 , this surface integral becomes a line integral. It reads:

$$\int_{S^P} \lambda\nabla^2 T\,dS = \oint_{\partial S^P} \lambda\nabla T\cdot\mathbf{n}\,dl = 0, \qquad (3.2)$$

where n is the unit vector normal to the line element dl, defined positive if pointing outwards with respect to S^P. The value of λ∇²T at the node P is then given by

$$\left.\lambda\nabla^2 T\right|_P \approx \frac{1}{S^P}\oint_{\partial S^P} \lambda\nabla T\cdot\mathbf{n}\,dl. \qquad (3.3)$$

Subsequently, we decompose the line integral by the corresponding contributions of the four
faces:

$$\oint_{\partial S^P} \lambda\nabla T\cdot\mathbf{n}\,dl = \int_{l_{sw}^{se}} \lambda\nabla T\cdot\mathbf{n}\,dl + \int_{l_{se}^{ne}} \lambda\nabla T\cdot\mathbf{n}\,dl + \int_{l_{ne}^{nw}} \lambda\nabla T\cdot\mathbf{n}\,dl + \int_{l_{nw}^{sw}} \lambda\nabla T\cdot\mathbf{n}\,dl. \qquad (3.4)$$

²In two-dimensional space, if $\mathbf{F}$ is a differentiable vector field on a closed surface S which has a piece-wise smooth boundary ∂S, then $\iint_S \nabla\cdot\mathbf{F}\,dS = \oint_{\partial S} \mathbf{F}\cdot\mathbf{n}\,d\Gamma$, where n is the outward pointing unit normal vector of the boundary.

Figure 3.3: Normal vectors belonging to the control surface S^P.

3.1.2 Defining cell normals

The unit normal vectors n of the four faces belonging to the control surface S^P are defined as:

$$\mathbf{n}_s = \frac{(\Delta y_{sw}^{se},\,-\Delta x_{sw}^{se})}{\|l_{sw}^{se}\|} \quad\text{where}\quad \|l_{sw}^{se}\| = \sqrt{(\Delta x_{sw}^{se})^2 + (\Delta y_{sw}^{se})^2}, \qquad (3.5)$$

$$\mathbf{n}_e = \frac{(\Delta y_{se}^{ne},\,-\Delta x_{se}^{ne})}{\|l_{se}^{ne}\|} \quad\text{where}\quad \|l_{se}^{ne}\| = \sqrt{(\Delta x_{se}^{ne})^2 + (\Delta y_{se}^{ne})^2}, \qquad (3.6)$$

$$\mathbf{n}_n = \frac{(\Delta y_{ne}^{nw},\,-\Delta x_{ne}^{nw})}{\|l_{ne}^{nw}\|} \quad\text{where}\quad \|l_{ne}^{nw}\| = \sqrt{(\Delta x_{ne}^{nw})^2 + (\Delta y_{ne}^{nw})^2}, \qquad (3.7)$$

$$\mathbf{n}_w = \frac{(\Delta y_{nw}^{sw},\,-\Delta x_{nw}^{sw})}{\|l_{nw}^{sw}\|} \quad\text{where}\quad \|l_{nw}^{sw}\| = \sqrt{(\Delta x_{nw}^{sw})^2 + (\Delta y_{nw}^{sw})^2}, \qquad (3.8)$$

where $\Delta y_a^b = y_b - y_a$ and $\Delta x_a^b = x_b - x_a$. The normal vectors are illustrated in Fig. 3.3. By recalling that $\nabla = (\frac{\partial}{\partial x}, \frac{\partial}{\partial y})$, the first term on the RHS of Eq. (3.4) becomes

$$\int_{l_{sw}^{se}} \lambda\nabla T\cdot\mathbf{n}\,dl = \frac{\lambda}{\|l_{sw}^{se}\|}\int_{l_{sw}^{se}} \left(\frac{\partial T}{\partial x},\,\frac{\partial T}{\partial y}\right)\cdot(\Delta y_{sw}^{se},\,-\Delta x_{sw}^{se})\,dl \qquad (3.9)$$

Performing now the inner product operation yields

$$\int_{l_{sw}^{se}} \lambda\nabla T\cdot\mathbf{n}\,dl = \frac{\lambda\,\Delta y_{sw}^{se}}{\|l_{sw}^{se}\|}\int_{l_{sw}^{se}} \frac{\partial T}{\partial x}\,dl - \frac{\lambda\,\Delta x_{sw}^{se}}{\|l_{sw}^{se}\|}\int_{l_{sw}^{se}} \frac{\partial T}{\partial y}\,dl \qquad (3.10)$$

3.1.3 Applying an integral rule

Now it is necessary to perform the integration on the resulting terms. Let us define a variable
♥ that depends on l and assume we want to solve

$$\int_{l_{sw}^{se}} \heartsuit(l)\,dl. \qquad (3.11)$$

There are several techniques available for numerical integration. We show here three generally used ones:

• Mid-point rule:
    ∫_{l_sw^se} ♥(l) dl ≈ ♥|_s ‖l_sw^se‖        (3.12)

• Trapezoidal rule:
    ∫_{l_sw^se} ♥(l) dl ≈ (♥|_sw + ♥|_se)/2 · ‖l_sw^se‖        (3.13)

• Simpson's rule:
    ∫_{l_sw^se} ♥(l) dl ≈ (♥|_sw + 4♥|_s + ♥|_se)/6 · ‖l_sw^se‖        (3.14)
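The three rules can be compared on a simple integrand. The following sketch is illustrative only (Python rather than the MATLAB used in the exercises; the function names are ours): it integrates f(x) = x² over [0, 1], whose exact value is 1/3.

```python
def midpoint(f, a, b):
    # Mid-point rule: segment length times the value at its center, cf. Eq. (3.12)
    return (b - a) * f(0.5 * (a + b))

def trapezoidal(f, a, b):
    # Trapezoidal rule: segment length times the average of the endpoints, cf. Eq. (3.13)
    return (b - a) * 0.5 * (f(a) + f(b))

def simpson(f, a, b):
    # Simpson's rule: weight 4 on the mid-point, weight 1 on the endpoints, cf. Eq. (3.14)
    return (b - a) * (f(a) + 4.0 * f(0.5 * (a + b)) + f(b)) / 6.0

f = lambda x: x * x
print(midpoint(f, 0.0, 1.0))     # 0.25
print(trapezoidal(f, 0.0, 1.0))  # 0.5
print(simpson(f, 0.0, 1.0))      # 1/3 exactly: Simpson's rule is exact up to cubics
```

Note that for the quadratic integrand only Simpson's rule reproduces the exact value; the mid-point and trapezoidal rules bracket it from below and above.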

Applying now the mid-point rule to Eq. (3.10) leads to

    ∫_{l_sw^se} λ∇T · n dl = (λΔy_sw^se/‖l_sw^se‖) ∂T/∂x|_s ‖l_sw^se‖ − (λΔx_sw^se/‖l_sw^se‖) ∂T/∂y|_s ‖l_sw^se‖
                           = λΔy_sw^se ∂T/∂x|_s − λΔx_sw^se ∂T/∂y|_s        (3.15)

A similar procedure is performed for the other three terms on the RHS of Eq. (3.4). The resulting expression reads

    λ∇²T|_P ≈ (λ/S_P) ∮_{∂S_P} ∇T · n dl ≈
        (λ/S_P) [ Δy_sw^se ∂T/∂x|_s − Δx_sw^se ∂T/∂y|_s + Δy_se^ne ∂T/∂x|_e − Δx_se^ne ∂T/∂y|_e        (3.16)
                + Δy_ne^nw ∂T/∂x|_n − Δx_ne^nw ∂T/∂y|_n + Δy_nw^sw ∂T/∂x|_w − Δx_nw^sw ∂T/∂y|_w ]

Note that it has been assumed that s, e, n and w are the midpoints between sw–se, se–ne, ne–nw and nw–sw, respectively. Whereas this assumption is valid for meshes that are not strongly deformed, it should be reexamined otherwise. For example, considering e (which is defined as the midpoint between the nodes P and E) to be also the midpoint between se and ne might be a strong assumption for the grid illustrated in Fig. 3.2. Instead, a true midpoint e* = (x_e*, y_e*) should be accounted for by setting x_e* = (x_se + x_ne)/2 and y_e* = (y_se + y_ne)/2. This is not done here in order not to add unnecessary complexity to the notation.

3.1.4 Applying Green’s theorem

Green's theorem³, in its general form, is defined for a line integral so that, for the point s for instance, we have

    ∂T/∂x|_s ≈ (1/S_s) ∮_{∂S_s} T dy   and   ∂T/∂y|_s ≈ (1/S_s) ∮_{∂S_s} −T dx,        (3.17)

where S_s is the control surface with respect to the point s, as illustrated in Fig. 3.2(c). We now decompose the first line integral of Eq. (3.17) into the contributions of all four faces of the control surface S_s. It yields

    ∂T/∂x|_s ≈ (1/S_s) ∫_{Sw}^{Se} T dy + (1/S_s) ∫_{Se}^{e} T dy + (1/S_s) ∫_{e}^{w} T dy + (1/S_s) ∫_{w}^{Sw} T dy        (3.18)

Applying now the mid-point rule of integration, Eq. (3.18) becomes

    ∂T/∂x|_s ≈ (1/S_s) [ Δy_Sw^Se T_S + Δy_Se^e T_se + Δy_e^w T_P + Δy_w^Sw T_sw ].        (3.19)

Carrying out the same procedure for the second line integral of Eq. (3.17) yields:

    ∂T/∂y|_s ≈ (1/S_s) ∮_{∂S_s} −T dx ≈ (−1/S_s) [ Δx_Sw^Se T_S + Δx_Se^e T_se + Δx_e^w T_P + Δx_w^Sw T_sw ].        (3.20)

In the same way, we can retrieve expressions for the other spatial derivatives of Eq. (3.16):

³ Let ∂S be a positively oriented ('right-hand-rule'), smooth curve enclosing a surface S. If P and Q are differentiable functions on S, then ∮_{∂S} P dx + Q dy = ∫∫_S (∂_x Q − ∂_y P) dS.

    ∂T/∂x|_e ≈ (1/S_e) [ Δy_s^sE T_se + Δy_sE^nE T_E + Δy_nE^n T_ne + Δy_n^s T_P ],        (3.21)
    ∂T/∂y|_e ≈ (−1/S_e) [ Δx_s^sE T_se + Δx_sE^nE T_E + Δx_nE^n T_ne + Δx_n^s T_P ],        (3.22)
    ∂T/∂x|_n ≈ (1/S_n) [ Δy_w^e T_P + Δy_e^Ne T_ne + Δy_Ne^Nw T_N + Δy_Nw^w T_nw ],        (3.23)
    ∂T/∂y|_n ≈ (−1/S_n) [ Δx_w^e T_P + Δx_e^Ne T_ne + Δx_Ne^Nw T_N + Δx_Nw^w T_nw ],        (3.24)
    ∂T/∂x|_w ≈ (1/S_w) [ Δy_sW^s T_sw + Δy_s^n T_P + Δy_n^nW T_nw + Δy_nW^sW T_W ],        (3.25)
    ∂T/∂y|_w ≈ (−1/S_w) [ Δx_sW^s T_sw + Δx_s^n T_P + Δx_n^nW T_nw + Δx_nW^sW T_W ].        (3.26)

Gathering all the terms leads to the final expression for ∇²T at the node P for the geometry of Fig. 3.2. After replacing them in Eq. (3.16), we can write

    λ∇²T|_P ≈
      (λΔy_sw^se)/(S_P S_s) [ Δy_Sw^Se T_S + Δy_Se^e T_se + Δy_e^w T_P + Δy_w^Sw T_sw ]
    + (λΔx_sw^se)/(S_P S_s) [ Δx_Sw^Se T_S + Δx_Se^e T_se + Δx_e^w T_P + Δx_w^Sw T_sw ]
    + (λΔy_se^ne)/(S_P S_e) [ Δy_s^sE T_se + Δy_sE^nE T_E + Δy_nE^n T_ne + Δy_n^s T_P ]
    + (λΔx_se^ne)/(S_P S_e) [ Δx_s^sE T_se + Δx_sE^nE T_E + Δx_nE^n T_ne + Δx_n^s T_P ]        (3.27)
    + (λΔy_ne^nw)/(S_P S_n) [ Δy_w^e T_P + Δy_e^Ne T_ne + Δy_Ne^Nw T_N + Δy_Nw^w T_nw ]
    + (λΔx_ne^nw)/(S_P S_n) [ Δx_w^e T_P + Δx_e^Ne T_ne + Δx_Ne^Nw T_N + Δx_Nw^w T_nw ]
    + (λΔy_nw^sw)/(S_P S_w) [ Δy_sW^s T_sw + Δy_s^n T_P + Δy_n^nW T_nw + Δy_nW^sW T_W ]
    + (λΔx_nw^sw)/(S_P S_w) [ Δx_sW^s T_sw + Δx_s^n T_P + Δx_n^nW T_nw + Δx_nW^sW T_W ]

The areas of the control surfaces can be computed by applying the Gaussian trapezoidal formula as follows:

    S_P = ½ |(x_ne y_se − x_se y_ne) + (x_se y_sw − x_sw y_se) + (x_sw y_nw − x_nw y_sw) + (x_nw y_ne − x_ne y_nw)|        (3.28)
    S_s = ½ |(x_e y_Se − x_Se y_e) + (x_Se y_Sw − x_Sw y_Se) + (x_Sw y_w − x_w y_Sw) + (x_w y_e − x_e y_w)|        (3.29)
    S_e = ½ |(x_nE y_sE − x_sE y_nE) + (x_sE y_s − x_s y_sE) + (x_s y_n − x_n y_s) + (x_n y_nE − x_nE y_n)|        (3.30)
    S_n = ½ |(x_Ne y_e − x_e y_Ne) + (x_e y_w − x_w y_e) + (x_w y_Nw − x_Nw y_w) + (x_Nw y_Ne − x_Ne y_Nw)|        (3.31)
    S_w = ½ |(x_n y_s − x_s y_n) + (x_s y_sW − x_sW y_s) + (x_sW y_nW − x_nW y_sW) + (x_nW y_n − x_n y_nW)|        (3.32)
whereas the values at the corner points are obtained by interpolation from the surrounding nodes:

    T_sw = (T_SW + T_S + T_P + T_W)/4,        (3.33)
    T_se = (T_S + T_SE + T_E + T_P)/4,        (3.34)
    T_ne = (T_P + T_E + T_NE + T_N)/4,        (3.35)
    T_nw = (T_W + T_P + T_N + T_NW)/4.        (3.36)
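The area formulas (3.28)–(3.32) are all instances of the shoelace (Gaussian trapezoidal) formula applied to a quadrilateral, and the corner values are plain four-point averages. A minimal Python sketch (illustrative; the function names are ours, not from the course code base):

```python
def quad_area(p1, p2, p3, p4):
    """Area of a quadrilateral with vertices given in cyclic order,
    via the shoelace / Gaussian trapezoidal formula, cf. Eqs. (3.28)-(3.32)."""
    pts = [p1, p2, p3, p4]
    s = 0.0
    # Sum x_a*y_b - x_b*y_a over consecutive vertex pairs (wrapping around)
    for (xa, ya), (xb, yb) in zip(pts, pts[1:] + pts[:1]):
        s += xa * yb - xb * ya
    # The absolute value makes the result independent of the orientation
    return 0.5 * abs(s)

def corner_value(T_a, T_b, T_c, T_d):
    """Corner value as the average of the four surrounding node values,
    cf. Eqs. (3.33)-(3.36)."""
    return 0.25 * (T_a + T_b + T_c + T_d)

print(quad_area((0, 0), (1, 0), (1, 1), (0, 1)))  # unit square -> 1.0
print(corner_value(100.0, 100.0, 90.0, 90.0))     # 95.0
```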

Using this procedure to derive a FV scheme over a uniform rectangular domain allows several simplifications, explained in detail in appendix A. The resulting algebraic expression is identical to the one derived from a FD scheme. The treatment of nodes at boundaries follows the same derivation explained in this chapter; the corresponding derivation is also shown in appendix A.
Figure 3.4: Illustration of the domain of interest, with heights h₁/2 and h₂/2 and upper contour f(x) (note that df/dx < 0).

3.2 Exercises Part 1

Stage 1: Pen and paper

• Understand the derivation of Eq. (3.27).

Stage 2: Building the Grid

As a starting point, five MATLAB files are given:

1. FVM_main.m: This is the main file of the program. It calls the file InitFVM and the routines setUpMesh and solveFVM.

2. InitFVM: Initial parameters are given. Nothing needs to be done.

3. solveFVM: subroutine to set up the matrix A and the vector B. In particular, the vector B needs to be filled.

4. stamp: routine to fill the elements of matrix A. This routine is given practically empty.

5. generate_stencil_innernode.m: routine to build the stencil for the inner nodes.

The first task consists of:

• setUpMesh: This routine must be written. It should take into account the formfunction defined in InitFVM.

Stage 3: Just do it

This stage is anything but complete. Almost everything must be done:

• complete the routines solveFVM and stamp.



Figure 3.5: Sketch of a cooling fin (hot temperature T_h on the west surface; ambient temperature T_∞ and heat-transfer coefficient α on the cooled surfaces; heights h₁ and h₂, length L).

3.3 Exercises Part 2

In the exercises, we examine the stationary temperature distribution in a cooling fin. The fin is
‘fat’, so the well-known quasi-1D approximation is not applicable. The configuration of the fin
is sketched in Fig. 3.5.

On the west surface, we assume a constant hot temperature T_h. The other surfaces⁴ are cooled by convection with an ambient temperature T_∞. We then apply the conjugated heat transfer formula

    q̇|_W = α(T_W − T_∞),        (3.37)

with the heat-transfer coefficient α. Since the fin is symmetric, we can simulate only half of the fin to reduce the computational effort. Consequently, we set zero heat flux at the symmetry axis, i.e.

    q̇|_S = 0.        (3.38)

In Fig. 3.6, the computed temperature distribution is plotted (with h₁ = 10, h₂ = 3, l = 10, T_h = 100, λ = 1, α = 5, and T_∞ = 90).

Stage 1: Pen and paper

• Based on the knowledge acquired when discretizing the heat equation for the inner nodes,
the students are asked to derive the numerical scheme corresponding to the boundaries
and the two corners.

Stage 2: Choosing a code

⁴ Denoted by the subscript W for "wall".



Figure 3.6: Temperature distribution in a cooling fin as surface plot on the left-hand side and as contour plot on the right.

• Each group is asked to discuss the codes written in Session 03. Pick one of the codes.

Stage 3: Just do it

• Let's code the boundary conditions! A text file 'stamp.m' is provided with some lines already written. Also, the files generate_stencil_east.m, generate_stencil_north.m and generate_stencil_south.m are given (but not complete).

• And what about the edge boundary conditions?



3.3.1 Useful MATLAB commands

switch x
case A
case B
end

Use this way of controlling the flow in the code to make decisions like assigning boundary conditions to the different boundaries.

Example:

switch northernBoundary
    case 'Dirichlet'
        ...
    case 'Neumann'
        ...
    case 'Robin'
end

3.3.2 Tips
by Juan Pablo Garcia (ex-student)

• The way of working is very similar to that of the finite-differences session. It is therefore useful to review the tips provided in the previous chapter.

3.3.3 Flowchart

The flowchart can be summarized as follows. FVM_main.m calls the routines in this order, passing the indicated data:

• InitFVM.m — provides the geometric and physical parameters (l, h, λ, ...) and the formfunction;
• setUpMesh.m — builds the mesh coordinates X, Y from the formfunction;
• solveFVM.m — assembles and solves the linear system for T, calling stamp.m to obtain the stencil and right-hand-side entry b of each node;
• post.m — post-processes the solution T on the mesh X, Y.
4 Unsteady Problems

References

[1] MORTON, K. W., AND MAYERS, D. F. Numerical solution of partial differential equations. Cambridge University Press, 2005.

Objectives
• Learn how to discretize the temporal operator of a transport equation.
• Learn how to assess the stability of temporal schemes by means of the Von Neumann stability analysis.

Contents
4.1 Explicit time discretization . . . 56
    4.1.1 Von Neumann stability analysis of FE scheme . . . 57
4.2 Implicit time discretization . . . 63
    4.2.1 Von Neumann analysis . . . 64
4.3 The weighted average or θ-method . . . 64
    4.3.1 Von Neumann analysis . . . 65
4.4 Predictor-corrector methods (Runge-Kutta) . . . 66
4.5 Exercises . . . 71

We have already seen how elliptic PDEs may be discretized using either Finite Differences (FD) or Finite Volumes (FV). In this chapter, we will focus on PDEs that describe unsteady processes. Let us write a convection-diffusion equation for the specific internal energy u = cT of an incompressible flow (constant density ρ). Deriving this expression from Eq. (1.13) (where φ = u) results in

    ∂T/∂t + ∇ · (vT) − ∇ · (D∇T) = s/ρ = s_I,        (4.1)

where the thermal diffusivity is defined as D = λ/(ρc). Combining now the PDE associated with mass conservation, ∇ · v = 0, with Eq. (4.1) results in

    ∂T/∂t + v · ∇T − D∇²T = s_I,        (4.2)

where D is considered constant in space. For a 2D flow, Eq. (4.2) becomes:

    ∂T/∂t + v_x ∂T/∂x + v_y ∂T/∂y − D (∂²T/∂x² + ∂²T/∂y²) = s_I.        (4.3)

The second-order terms (diffusive terms) of Eq. (4.3) have already been discretized by finite differences and finite volumes, as studied in previous chapters. We can express them as

    D (∂²T/∂x² + ∂²T/∂y²) = R(T),        (4.4)

where R represents a linear operator. This operator becomes a matrix R if discretized by a given numerical method (by FD or FV, for example). The convective term of Eq. (4.3) has not yet been discretized in our course. Its discretization by FD or FV is indeed simpler than for the second-order terms. Let us denote the corresponding linear operator as L, so that Eq. (4.3) is now written as

    ∂T/∂t + L(T) − R(T) = s_I,        (4.5)

or as a semi-discretized equation:

    ∂T/∂t + L T − R T = −b_I,        (4.6)

where the matrices L and R are related to the linear operators L and R, respectively. In addition to Eq. (4.6), we have to consider the equations for the boundary conditions, which can be of Dirichlet, Neumann or Robin type. In any case, after discretization we obtain a linear system of equations that relates the boundary nodes to the adjacent nodes. It is written as

    B T = b_b.        (4.7)

Equations (4.6) and (4.7) can be expressed together in a general semi-discretized equation

    ∂T/∂t − A T = −b,        (4.8)

where A is an N × N matrix that contains the sub-matrices R − L and B. In the same way, the vector b is a global vector of size N × 1 that is composed of both b_I and b_b.

4.1 Explicit time discretization

The simplest numerical scheme for time discretization, known as the Forward Euler (FE) scheme, is now applied to Eq. (4.8):

    (T^{n+1} − T^n)/Δt − A T^n = −b,        (4.9)

where T^n and T^{n+1} denote the temperature T at the time instants t = nΔt and t = (n+1)Δt, respectively. It should be noted that T^{n+1} is entirely determined by the corresponding past values. Reordering Eq. (4.9) yields

    T^{n+1} = T^n + Δt (A T^n − b).        (4.10)
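The update (4.10) takes only a few lines of code. The sketch below is illustrative (Python rather than the MATLAB used in the exercises; the matrix A and vector b are assumed to be already assembled, and the toy problem dT/dt = −T with A = −I, b = 0 is chosen so the exact solution is known):

```python
import numpy as np

def forward_euler(A, b, T0, dt, n_steps):
    """March Eq. (4.10): T^{n+1} = T^n + dt (A T^n - b).
    Each step is just a matrix-vector product; no linear system is solved."""
    T = T0.copy()
    for _ in range(n_steps):
        T = T + dt * (A @ T - b)
    return T

# Toy problem dT/dt = -T: the exact solution decays like exp(-t).
A = -np.eye(2)
b = np.zeros(2)
T = forward_euler(A, b, np.array([1.0, 2.0]), dt=0.01, n_steps=1000)
print(T)  # close to exp(-10) * [1, 2]
```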

Let us emphasize that, by discretizing Eq. (4.8) explicitly in time, the temperature T^{n+1} of Eq. (4.10) is obtained without any need to solve a linear system. As a consequence, all the solutions at each time step of the convection-diffusion equation (T^n|_{n=1}, T^n|_{n=2}, ..., T^n|_n) can be calculated without much computational effort. Moreover, if a steady solution exists after a given elapsed time, it is obtained simply by advancing the initial solution n times.

Every strategy has pros and cons. The great advantage of the Forward Euler (FE) scheme is that, as mentioned before, very simple expressions result after time discretization, where there is not even the need to solve a linear system. Consequently, solutions at every time step are obtained in a very simple way. The big disadvantage of the FE scheme lies in its stability. Let us define stability:

Stability: If the exact solution is bounded, the numerical solution remains bounded. It means that, for stability to hold, it is necessary that numerical errors, which are generated during the solution of the discretized equations, are not magnified.

In the worst case, explicit schemes are unconditionally unstable: no matter what is done, solutions will explode after a certain elapsed time. In a better, although not the best, situation, explicit schemes can be conditionally stable. As a result, there are some parameters to control so that solutions remain bounded as time grows. Most of the time, the principal parameter to control is the time step Δt as a function of the cell size (Δx, say).

A formal analysis of stability remains cumbersome for complex spatial discretization schemes such as finite volumes (when the grid is not Cartesian) and finite elements. That is the reason why, most of the time, formal analysis is carried out for simplified spatial schemes such as finite differences. Although generality cannot be claimed for such an analysis, this approach is useful in order to define non-dimensional numbers that can be used as a measure when studying stability in complex numerical problems. In the following, the Von Neumann stability analysis will be introduced.

4.1.1 Von Neumann stability analysis of FE scheme

Von Neumann analysis, also known as Fourier analysis, is a technique based on the Fourier decomposition of numerical errors. It applies to the FD spatial discretization of a linear PDE with constant coefficients where the boundaries are assumed to be periodic. In order to introduce the Von Neumann analysis, let us consider a homogeneous (s = 0), linear PDE with constant coefficients (which are equal to one) for one function T that depends on two variables, x (space) and t (time). This PDE reads

    ∂T/∂t + ∂T/∂x − ∂²T/∂x² = 0.        (4.11)

This equation has particular solutions of the form T = e^{βt} e^{ıkx}, where β and k are real constants and ı denotes the imaginary unit. The general solution reads

    T(x, t) = Σ_{m=1}^{∞} a_m e^{β_m t} e^{ık_m x},        (4.12)

which is nothing but a linear combination of the particular solutions. Here, a_m are recognized as the Fourier coefficients, k_m is the wave number defined as k_m = πm/L, where L is the length of the domain, and β_m is a real constant that depends on m. It can also be shown that the general solution of the discretized equation associated with (4.11) at a given time t_n = nΔt and at a given discretized point x_i = iΔx reads¹:

    T_i^n = Σ_{m=1}^{M} a_m e^{β_m nΔt} e^{ık_m iΔx},        (4.13)

where M = L/Δx. The error in the approximation, e_i^n, is defined by

    e_i^n = T_i^n − T(x_i, t_n)        (4.14)

and is strongly associated with the truncation error made when discretizing. Since both the exact solution T(x_i, t_n) and the discretized solution T_i^n satisfy the discretized equation, the error e_i^n also satisfies the discretized equation. Accordingly, the error can also be expressed as a linear combination of the form of Eq. (4.13), or in particular as

    e_i^n = e^{β_m nΔt} e^{ık_m iΔx}.        (4.15)

The error e_i^{n+1} at the time t_{n+1} = (n+1)Δt is expressed as

    e_i^{n+1} = e^{β_m (n+1)Δt} e^{ık_m iΔx} = e^{β_m Δt} e_i^n.        (4.16)

It is evident that whether the error grows, stays constant or decreases depends on whether |e^{β_m Δt}| is bigger than, equal to or smaller than unity, respectively. In order to simplify the notation, let us define the amplification factor e^{β_m Δt} as

    G ≡ e_i^{n+1}/e_i^n = e^{β_m Δt}.        (4.17)

¹ Be aware that here ı denotes the imaginary unit, whereas i and j denote the nodes of a given computational grid.

Stability analysis of the 1D convection-diffusion equation

Now that we have expressed the error in Eq. (4.15), we proceed to analyze the stability of schemes discretizing Eq. (4.11). Note that no attention is given to boundary conditions (they are considered periodic) and, therefore, their contribution to stability is neglected. In addition, since the Von Neumann analysis is aimed at FD schemes, we define the operators describing the convective and diffusive mechanisms as

    L(T) ≈ v_x (T_{i+1} − T_{i−1})/(2Δx),        (4.18)
    R(T) ≈ D (T_{i−1} − 2T_i + T_{i+1})/(Δx)²,        (4.19)

which are obtained using a second-order accurate centered 1D-FD scheme. Replacing these operators in Eq. (4.10) and reordering yields

    T_i^{n+1} = T_i^n − (ν/2)(T_{i+1}^n − T_{i−1}^n) + δ(T_{i−1}^n − 2T_i^n + T_{i+1}^n),        (4.20)

where

    ν = v_x Δt/Δx   and   δ = D Δt/(Δx)².        (4.21)

The non-dimensional numbers ν and δ are of extreme importance in the stability analysis and are known as the Courant number and the diffusion number, respectively. As mentioned before, the error in the approximation e_i^n also satisfies Eq. (4.20) and therefore we can write

    e_i^{n+1} = e_i^n − (ν/2)(e_{i+1}^n − e_{i−1}^n) + δ(e_{i−1}^n − 2e_i^n + e_{i+1}^n).        (4.22)

Let us now divide all terms by e_i^n and use the definition of the error (Eq. (4.15)). This results in

    G = 1 − (ν/2)(e^{ık_m Δx} − e^{−ık_m Δx}) + δ(e^{−ık_m Δx} − 2 + e^{ık_m Δx}).        (4.23)

Pure convective equation

In order to simplify the analysis, let us first assume a pure convection problem (δ = 0) and develop the exponential terms using Euler's formula. It leads to

    G = 1 − (ν/2) [cos(k_m Δx) + ı sin(k_m Δx) − cos(k_m Δx) + ı sin(k_m Δx)]        (4.24)

    G = 1 − ı ν sin(k_m Δx)   ⇒   |G|² = 1 + ν² sin²(k_m Δx),        (4.25)

where π/N ≤ k_m Δx ≤ π and N is the number of nodes of the discretized domain. Equation (4.25) means that, even for very large values of N, the second-order centered FD scheme always presents |G| > 1, and is therefore unconditionally unstable. Let us redefine the operator L(T) as

    L(T) ≈ v_x (T_i − T_{i−1})/Δx,        (4.26)

i.e. as a first-order upwind 1D-FD scheme. The equation of the error becomes

    e_i^{n+1} = e_i^n − ν(e_i^n − e_{i−1}^n),        (4.27)

which leads to

    G = 1 − ν [1 − cos(k_m Δx) + ı sin(k_m Δx)].        (4.28)

It can be shown that in this case |G| ≤ 1 as long as the CFL condition is satisfied, i.e. as long as 0 ≤ ν ≤ 1. The upwind scheme is thus recognized to be conditionally stable for the pure convection equation. There is a more general way to interpret the CFL condition. In the convection equation, a hyperbolic PDE of first order, the information always propagates from the upstream to the downstream region of the domain. The velocity of this propagation is the velocity of the carrier flow. In a one-dimensional domain with constant mean flow, the pure convection equation is written as Eq. (4.11) neglecting the diffusive term. The solution of this equation is actually any function f with argument (x − v_x t). It means that a perturbation f is constant over a characteristic line or, in other words, that a perturbation f is convected at a velocity v_x without being disturbed. If f is a Gaussian pulse (a distribution of temperature, say), this pulse is convected along the domain without losing its Gaussian shape (see also the exercises of Chapter 1). This example is illustrated in Fig. 4.1(a).
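The claim that the upwind scheme of Eq. (4.28) satisfies |G| ≤ 1 for 0 ≤ ν ≤ 1 can be checked numerically by sampling the amplification factor over the resolvable wave numbers. The snippet below is an illustrative Python check (the function name is ours):

```python
import math

def upwind_gain(nu, phi):
    """|G| for the first-order upwind + Forward Euler scheme, Eq. (4.28),
    with phi = k_m * dx."""
    re = 1.0 - nu * (1.0 - math.cos(phi))
    im = -nu * math.sin(phi)
    return math.hypot(re, im)

# Sample the resolvable wave numbers 0 < phi <= pi:
phis = [i * math.pi / 200.0 for i in range(1, 201)]
print(max(upwind_gain(0.8, p) for p in phis))  # <= 1: CFL satisfied, stable
print(max(upwind_gain(1.2, p) for p in phis))  # > 1: CFL violated, unstable
```

A short calculation confirms this: |G|² = 1 + 2ν(ν − 1)(1 − cos φ), which exceeds 1 for some φ exactly when ν > 1.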

Figure 4.1(a) shows that the solution T(x_i, t_n) of the PDE depends only on the information (on the values of T) at the points belonging to the characteristic line. Let us now spatially discretize the convection equation with the operator L of Eq. (4.26), and use the Forward Euler scheme with two different time steps defined as Δt = νΔx/v_x with ν < 1 and ν > 1, respectively. Figure 4.1(b) illustrates the domain of dependence of T_i^n when these two different Δt are used. The domain of dependence of the solution T_i^n is the data used to compute this solution. It

Figure 4.1: Illustration of the CFL condition: (a) characteristic line x − v_x t = 0 along which the solution T is convected at velocity v_x; (b) domain of dependence of T_i^n (nodes T_{i−1}^{n−1}, T_i^{n−1}) for time steps with ν < 1 and ν > 1.

is interesting to notice that for ν > 1 the domain of dependence of the discretized equation does not contain the domain of dependence of the PDE, i.e. the characteristic line, and the solution actually diverges after a certain time. In contrast, convergence is achieved when ν < 1, since the characteristic line lies within the domain of dependence of the discretized PDE. It should be stressed, though, that the CFL condition is a necessary, but not sufficient, condition for stability.

Pure diffusion equation

The same procedure can be carried out to perform the stability analysis of a pure diffusion equation. Let us consider Eq. (4.23) with ν = 0:

    G = 1 + δ(e^{−ık_m Δx} − 2 + e^{ık_m Δx}),        (4.29)

and develop it by means of Euler's formula:

    G = 1 + δ [cos(k_m Δx) − ı sin(k_m Δx) − 2 + cos(k_m Δx) + ı sin(k_m Δx)]        (4.30)

    G = 1 − 2δ [1 − cos(k_m Δx)]   ⇒   G = 1 − 4δ sin²(k_m Δx/2).        (4.31)

The worst case takes place when k_m Δx = π. Stability is then conditioned by

    |1 − 4δ| ≤ 1   ⇒   δ ≤ 1/2.        (4.32)
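The bound δ ≤ 1/2 is easy to confirm in practice. The illustrative Python sketch below (the function names are ours) advances the 1D pure-diffusion FE scheme on a periodic grid, starting from the sawtooth mode k_m Δx = π, i.e. the worst case of Eq. (4.31):

```python
def fe_diffusion_step(T, delta):
    """One Forward Euler step of the 1D pure-diffusion scheme on a periodic
    grid: T_i^{n+1} = T_i^n + delta * (T_{i-1}^n - 2 T_i^n + T_{i+1}^n)."""
    n = len(T)
    return [T[i] + delta * (T[i - 1] - 2.0 * T[i] + T[(i + 1) % n])
            for i in range(n)]

def max_amplitude(delta, n_steps=200):
    # Sawtooth initial condition excites exactly the mode k_m dx = pi,
    # for which the amplification factor is G = 1 - 4 delta.
    T = [float((-1) ** i) for i in range(16)]
    for _ in range(n_steps):
        T = fe_diffusion_step(T, delta)
    return max(abs(v) for v in T)

print(max_amplitude(0.4))   # decays towards zero (|G| = 0.6)
print(max_amplitude(0.55))  # grows without bound (|G| = 1.2)
```

For the sawtooth mode, each step multiplies the solution by exactly 1 − 4δ, so δ = 0.4 damps it while δ = 0.55 amplifies it, in line with Eq. (4.32).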

Stability analysis of the 2D diffusion equation

During this course, we focus on the study of the heat equation mainly in two-dimensional domains. It is therefore worth analysing the stability conditions in such cases. The solution of a PDE with constant coefficients for a function T that depends on three variables (x, y and t) is generally obtained by applying the technique of separation of variables. Accordingly, the error is now expressed as

    e_{i,j}^n = e^{β_m nΔt} e^{ık_{x,m} iΔx} e^{ık_{y,m} jΔy},        (4.33)

where k_{x,m} and k_{y,m} represent the wave numbers in the x and y directions, respectively. Let us now define the operator R(T) according to a second-order centered 2D-FD scheme:

    R(T) ≈ D [ (T_{i−1,j} − 2T_{i,j} + T_{i+1,j})/(Δx)² + (T_{i,j−1} − 2T_{i,j} + T_{i,j+1})/(Δy)² ].        (4.34)

This definition is now inserted in Eq. (4.10). After neglecting the convective terms, it yields

    T_{i,j}^{n+1} = T_{i,j}^n + (DΔt/(Δx)²)(T_{i−1,j}^n − 2T_{i,j}^n + T_{i+1,j}^n) + (DΔt/(Δy)²)(T_{i,j−1}^n − 2T_{i,j}^n + T_{i,j+1}^n).        (4.35)

As mentioned before, the error in the approximation as defined in Eq. (4.33) also satisfies Eq. (4.35). Therefore we can write

    e_{i,j}^{n+1} = e_{i,j}^n + (DΔt/(Δx)²)(e_{i−1,j}^n − 2e_{i,j}^n + e_{i+1,j}^n) + (DΔt/(Δy)²)(e_{i,j−1}^n − 2e_{i,j}^n + e_{i,j+1}^n).        (4.36)

Dividing now all terms by e_{i,j}^n, knowing that G = e_{i,j}^{n+1}/e_{i,j}^n and using the definition of Eq. (4.33), Eq. (4.36) becomes

    G = 1 + (DΔt/(Δx)²)(e^{−ık_{x,m}Δx} − 2 + e^{ık_{x,m}Δx}) + (DΔt/(Δy)²)(e^{−ık_{y,m}Δy} − 2 + e^{ık_{y,m}Δy}).        (4.37)

Applying now Euler's formula² leads to

    G = 1 − 4 (DΔt/(Δx)²) sin²(k_{x,m}Δx/2) − 4 (DΔt/(Δy)²) sin²(k_{y,m}Δy/2).        (4.38)

The expression of Eq. (4.38) holds for a rectangular domain where Δx is not necessarily equal to Δy. The worst case arises when k_{x,m}Δx = k_{y,m}Δy = π. The stability condition is then

    |1 − 4 DΔt/(Δx)² − 4 DΔt/(Δy)²| ≤ 1   ⇒   Δt ≤ (1/(2D)) · (Δx)²(Δy)²/((Δx)² + (Δy)²).        (4.39)

In the particular case Δx = Δy, the stability condition reduces to

    Δt ≤ (Δx)²/(4D).        (4.40)
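The bound (4.39) is a one-liner in code. A minimal sketch (Python; the function name is ours), showing that for a square grid it reduces to Eq. (4.40):

```python
def max_dt_2d(D, dx, dy):
    """Largest stable time step of the explicit 2D diffusion scheme, Eq. (4.39)."""
    return (dx ** 2 * dy ** 2) / (2.0 * D * (dx ** 2 + dy ** 2))

# For a square grid this equals dx**2 / (4 D), cf. Eq. (4.40):
print(max_dt_2d(D=1.0, dx=0.1, dy=0.1))  # approximately 0.0025
```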

Figures 4.2, 4.3 and 4.4 show an example of the temperature evolution through time in a trapezoidal fin. Two different values of Δt have been considered in order to visualize the stability of the FE scheme (explicit scheme). We have observed so far that the stability of an explicit scheme is not easy to obtain. Once a given numerical scheme is known to be consistent and, in addition, is proved to be stable, we can claim that such a numerical scheme is convergent.

Convergence: This property is achieved if the numerical solution approaches the exact solution of the PDE and converges to it as the mesh size tends to zero. In other words, a scheme is convergent if it is both consistent and stable.

4.2 Implicit time discretization

We have by now studied the stability issues of an explicit scheme. Although such schemes do not require solving complex linear systems at each time step Δt, they do generally require small values of Δt to ensure stability. The idea is now to evaluate what happens to stability if we go to the other extreme, i.e. if we consider fully implicit schemes. In a fully implicit scheme, the time evolution of a quantity is computed based on future values of that quantity. Therefore, instead of applying the spatial operator A to T at t_n = nΔt, it is applied to T at t_{n+1} = (n+1)Δt. Eq. (4.8) becomes

    (T^{n+1} − T^n)/Δt − A T^{n+1} = −b.        (4.41)

Eq. (4.41) can be reorganized as

    (I − Δt A) T^{n+1} = T^n − Δt b,        (4.42)

where we identify A* = I − Δt A and b* = T^n − Δt b, to make clear that a linear system A* T^{n+1} = b* needs to be solved to obtain the field T^{n+1}.

² e^{ıα} − 2 + e^{−ıα} = (e^{ıα/2} − e^{−ıα/2})² = (2ı sin(α/2))² = −4 sin²(α/2).
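One implicit step therefore amounts to assembling A* and b* and calling a linear solver. An illustrative Python sketch (not the course's MATLAB code; a dense direct solve is used here for simplicity, whereas a sparse solver would be preferred in practice):

```python
import numpy as np

def backward_euler_step(A, b, T, dt):
    """One implicit step, Eq. (4.42): solve (I - dt A) T^{n+1} = T^n - dt b."""
    A_star = np.eye(len(T)) - dt * A
    b_star = T - dt * b
    return np.linalg.solve(A_star, b_star)

# Toy problem dT/dt = -T: even a huge time step remains bounded.
A = -np.eye(2)
b = np.zeros(2)
T = backward_euler_step(A, b, np.array([1.0, 2.0]), dt=100.0)
print(T)  # [1/101, 2/101]: strongly damped, never exploding
```

With dt = 100 the Forward Euler update T + dt·(A T) would jump to −99·T and diverge on further steps, while the implicit step simply divides by 1 + dt, illustrating the unconditional stability discussed next.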

4.2.1 Von Neumann analysis

Let us now define the operators L and R as before (Eqs. (4.18) and (4.19)). Subsequently, we replace them in Eq. (4.42), where A is defined as in Eq. (4.8), and we neglect the influence of the boundary conditions on the stability analysis, as previously done. We obtain

    T_i^{n+1} + (ν/2)(T_{i+1}^{n+1} − T_{i−1}^{n+1}) − δ(T_{i−1}^{n+1} − 2T_i^{n+1} + T_{i+1}^{n+1}) = T_i^n,        (4.43)

where it becomes evident that obtaining the value of T^{n+1} in an implicit scheme is not as easy as it was for an explicit scheme: we now need to solve a linear system at each time step Δt. Nevertheless, despite this disadvantage, there is an enormous improvement in the stability of such schemes. Applying the Von Neumann analysis to the corresponding pure diffusion problem (ν = 0), it can be shown [1] that the amplification factor is given by

    G = 1/(1 + 4δ sin²(kΔx/2)).        (4.44)

The fully implicit scheme is thus unconditionally stable, the best situation for stability, since there is no value of δ, defined as strictly positive, for which |G| > 1.

Figures 4.2, 4.3 and 4.4 show an example of the temperature evolution through time in a trapezoidal fin. One value of Δt has been considered in order to visualize the strong stability of the BE scheme (implicit scheme).

4.3 The weighted average or θ-method

Equation (4.42) represents a fully implicit scheme, also known as the Backward Euler (BE) method. A generalization can be performed for cases that lie between fully explicit (Forward Euler) and fully implicit (Backward Euler). It is done by adding a weight coefficient θ which 'tunes' the numerical scheme towards either FE or BE. The weighted expression reads:

    (T^{n+1} − T^n)/Δt − θ A T^{n+1} − (1 − θ) A T^n = −b.        (4.45)

Note that a value of θ = 0 tunes the scheme to fully explicit, while a value of θ = 1 makes it fully implicit.

4.3.1 Von Neumann analysis

1D convection-diffusion

The amplification factor of the θ-scheme for the pure diffusion problem is given by [1]

    G = (1 − 4(1 − θ)δ sin²(kΔx/2)) / (1 + 4θδ sin²(kΔx/2)).        (4.46)

The worst case takes place when kΔx = π. In that situation, stability is conditioned by

    |(1 − 4(1 − θ)δ) / (1 + 4θδ)| ≤ 1   ⇒   δ(1 − 2θ) ≤ 1/2.        (4.47)

Equation (4.47) tells us, on the one hand, that for values 0.5 ≤ θ ≤ 1 the derived numerical scheme is unconditionally stable, since the left-hand side is then never positive. On the other hand, values of θ in the range 0 ≤ θ < 0.5 give conditionally stable schemes. Values of θ very close to 0.5 (but still smaller) become unstable only for very large values of Δt.

A special case of the θ-method is the so-called Crank-Nicolson scheme, in which θ = 0.5. Following the analysis of the order of accuracy seen in previous chapters (by applying the definition of Taylor series), it can be demonstrated that this value makes the scheme second-order accurate in time, O(Δt)². For the FE and BE schemes the approximations in time are only of first order, O(Δt).

2D diffusion

Analogously to Eq. (4.36), the error e_{i,j}^{n+1} at (i, j) for the time step n+1 using the weighted average method can be expressed as:

    e_{i,j}^{n+1} = e_{i,j}^n + Δt [ θ D ( (e_{i−1,j}^{n+1} − 2e_{i,j}^{n+1} + e_{i+1,j}^{n+1})/(Δx)² + (e_{i,j−1}^{n+1} − 2e_{i,j}^{n+1} + e_{i,j+1}^{n+1})/(Δy)² )
                  + (1 − θ) D ( (e_{i−1,j}^n − 2e_{i,j}^n + e_{i+1,j}^n)/(Δx)² + (e_{i,j−1}^n − 2e_{i,j}^n + e_{i,j+1}^n)/(Δy)² ) ].        (4.48)

Inserting the Fourier expression of the error and dividing by e_{i,j}^n, this can be rewritten as:

    G = 1 + Δt D [ e^{β_m Δt} θ + (1 − θ) ] ( (e^{ık_{x,m}Δx} − 2 + e^{−ık_{x,m}Δx})/(Δx)² + (e^{ık_{y,m}Δy} − 2 + e^{−ık_{y,m}Δy})/(Δy)² ).        (4.49)

By definition, G = e^{β_m Δt}. Furthermore, we know from the previous discussion that e^{ıα} − 2 + e^{−ıα} = −4 sin²(α/2), and so we can solve for G:

    G = (1 − 4Δt D(1 − θ) [ sin²(k_{x,m}Δx/2)/(Δx)² + sin²(k_{y,m}Δy/2)/(Δy)² ])
      / (1 + 4Δt D θ [ sin²(k_{x,m}Δx/2)/(Δx)² + sin²(k_{y,m}Δy/2)/(Δy)² ]).        (4.50)

We recall that a scheme is stable if |G| ≤ 1. For θ ∈ [0.5, 1], this condition is satisfied for every Δt; for other values of θ, a worst-case estimation (k_{x,m}Δx = k_{y,m}Δy = π) gives:

    Δt ≤ (1/(2D(1 − 2θ))) · (Δx)²(Δy)²/((Δx)² + (Δy)²).        (4.51)

4.4 Predictor-corrector methods (Runge-Kutta)

As seen previously, it seems that fully explicit methods are ‘at most’ conditionally stable and
only first order accurate. Is there any strategy to build explicit schemes with higher levels of
accuracy and a better stability conditioning? The answer is yes. There are techniques, known
4.4 Predictor-corrector methods (Runge-Kutta) 67

as predictor-corrector methods, in which information of the prediction made by a simple ex-


plicit scheme is not considered as the final result, but instead is used to construct more robust
algorithms. Suppose then we start assuming that

T n+1 = T n + ∆t A T n − b∆t, (4.52)

Equation (4.52) represents the FE scheme studied previously. Now, instead of assuming the
output of this equation as the final result, we take it as a preliminary prediction and we note it
withe:

en+1 = T n + ∆tA T n − b∆t


T ⇐= Predictor (4.53)

The next step is to establish a correction formulation to correct it. A suitable one could be the
Crank-Nicolson method. It yields

1 h i
T n+1 = T n + ∆t A T n + A T
en+1 − b∆t, ⇐= Corrector (4.54)
2

where the final estimate of T n+1 is obtained. This two-level method is a good example to
introduce predictor-corrector methods. They are not the best to be used in practice, though,
since the associated stability is not ideal. We can, nevertheless, go further. Let us now introduce
multi-stage methods. These methods aim to use the information given by ( A T ) at times t that
lay between n∆t < t < (n + 1)∆t. Runge-Kutta formulations belong to this category. Let us
consider the most classical, called explicit Runge-Kutta of order 4 (some times also referred as
RK4). Under this method, the final estimate of T n+1 is now given by:

T^{n+1} = T^n + (∆t/6) [ A T^n + 2 A Ṫ^{n+1/2} + 2 A T̈^{n+1/2} + A T⃛^{n+1} ] − b∆t,   (4.55)

where the bracketed term is the correction for t = (n + 1)∆t and

Ṫ^{n+1/2} = T^n + (1/2) ∆t A T^n − b∆t/2   ⇐ Prediction for t = (n + 1/2)∆t   (4.56)

T̈^{n+1/2} = T^n + (1/2) ∆t A Ṫ^{n+1/2} − b∆t/2   ⇐ Correction for t = (n + 1/2)∆t   (4.57)

T⃛^{n+1} = T^n + ∆t A T̈^{n+1/2} − b∆t   ⇐ Prediction for t = (n + 1)∆t.   (4.58)

Note in Eq. (4.55) that the averaging technique used is Simpson's rule. It means that the
correction gives more importance (more weight when averaging) to the estimates performed
at times t = (n + 1/2)∆t, i.e. between the time levels, than to the estimates at times t = n∆t
and t = (n + 1)∆t, i.e. at the time levels themselves. It is also worth noting that Eqs. (4.56)
and (4.57) are based on the FE and BE methods, respectively, whereas Eq. (4.58) is based on the
midpoint rule. The RK4 method is fourth-order accurate in time, O(∆t⁴). More details about
Runge-Kutta methods are found in Appendix C.
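The four stages (4.55) to (4.58) map directly onto code. Below is a minimal Python sketch of one RK4 step for dT/dt = A T − b (the course codes are written in MATLAB; the plain list-of-lists matrix and the scalar test problem here are only illustrations):

```python
import math

def rk4_step(T, A, b, dt):
    """One classical RK4 step for dT/dt = A T - b, following Eqs. (4.55)-(4.58)."""
    def f(v):  # right-hand side A v - b
        return [sum(a * vj for a, vj in zip(row, v)) - bi for row, bi in zip(A, b)]
    k1 = f(T)                                            # slope at t = n*dt
    k2 = f([Ti + 0.5 * dt * k for Ti, k in zip(T, k1)])  # prediction at (n+1/2)*dt
    k3 = f([Ti + 0.5 * dt * k for Ti, k in zip(T, k2)])  # correction at (n+1/2)*dt
    k4 = f([Ti + dt * k for Ti, k in zip(T, k3)])        # prediction at (n+1)*dt
    return [Ti + dt / 6.0 * (a + 2 * p + 2 * q + r)      # Simpson-weighted average
            for Ti, a, p, q, r in zip(T, k1, k2, k3, k4)]

# Scalar test: dT/dt = -T with T(0) = 1, exact solution exp(-t)
T, dt = [1.0], 0.1
for _ in range(10):
    T = rk4_step(T, [[-1.0]], [0.0], dt)
print(abs(T[0] - math.exp(-1.0)))  # error on the order of 1e-7, i.e. fourth order
```

Halving dt in this test reduces the error by roughly a factor of 16, confirming the O(∆t⁴) accuracy stated above.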
4.4 Predictor-corrector methods (Runge-Kutta) 69

[Figure 4.2: Temporal evolution of the temperature field in contour plots (time in seconds), comparing three schemes: explicit with ∆t = 1.3 ms, explicit with ∆t = 1 ms, and implicit with ∆t = 100 ms, at t = 0, 0.50, 1.00, 1.22 and 1.26 s.]

[Figure 4.3: Temporal evolution of the temperature field in contour plots (time in seconds) at t = 1.30, 2.00 and 4.00 s. The explicit scheme with ∆t = 1.3 ms diverges (no iteration count), the explicit scheme with ∆t = 1 ms requires 4000 iterations, and the implicit scheme with ∆t = 100 ms requires 40 iterations.]

[Figure 4.4: Temporal evolution of the temperature field in surface plots (time in seconds): the explicit scheme with ∆t = 1.3 ms at t = 1.26 and 1.30 s (diverging), and the explicit scheme with ∆t = 1 ms and the implicit scheme with ∆t = 100 ms at t = 2.00 and 4.00 s.]

4.5 Exercises

Stage 1: Merging codes

• Each group is asked to discuss and choose one of the codes written in Session 04.

Stage 2: Coding temporal explicit scheme

• Add a case called ’unsteady’ in solveFVM. In that way, the code will be able to perform
both ’steady’ and ’unsteady’ computations.

• Discretize the time with an explicit scheme. Choose Dirichlet boundary conditions.

• Which stability criterion should be used? Does it change if non-rectangular geometries
are considered?

• Impose a Neumann BC at the south and Robin BCs at the north and east. Is the stability
criterion similar to before?

Stage 3: Coding temporal θ scheme

• Code the θ temporal scheme

Stage 4: Coding RK4 scheme (Optional)

• Code the RK4 scheme.

• How large is the difference with respect to the explicit scheme?


5  Sparse Matrices and Linear Solvers
References
[XX1] Morton, K. W. and Mayers, D. F. Numerical Solution of Partial Differential Equations. 2nd edition,
Chp. 7. Cambridge University Press, 2005.
[XX2] Saad, Y. Iterative Methods for Sparse Linear Systems. 2nd edition. Society for Industrial and Applied
Mathematics (SIAM), 2003.

Objectives
• Get to know what sparse matrices are and why they require less storage and less computational
effort.
• Get an overview of sparse linear solvers.
• Understand preconditioning techniques.


Contents
5.1 Sparse matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
5.2 Iterative solvers and preconditioning . . . . . . . . . . . . . . . . . . . . . . . . 75
5.3 Preconditioned Richardson iteration . . . . . . . . . . . . . . . . . . . . . . . . 76
5.4 Projection methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
5.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
5.5.1 Useful MATLAB commands . . . . . . . . . . . . . . . . . . . . . . . . . 82
5.5.2 Flowchart . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82

As we have seen in the previous chapters, numerical problems derived from linear PDEs
reduce to a system of linear equations. As the system dimension grows, so do the computational
time and the required storage. In this chapter, we discuss how the so-called sparse
matrices, obtained after discretization, can be stored optimally so that computational efficiency
increases. We will also observe how, for high-dimensional problems, iterative methods perform
better than direct methods, provided that a suitable preconditioning is implemented. In this
chapter we offer an overview of basic iterative methods based on the Richardson iteration.
Projection methods, which are more complex approaches, will also be briefly described.

5.1 Sparse matrix

A linear system is usually described by

Ax = b, (5.1)

where x, b ∈ Rⁿ and A = (aij) ∈ Rⁿˣⁿ. As we can see from Figs. 5.1 and 5.2, a system matrix
derived from a PDE discretization generally contains mainly zero entries. Such matrices, populated
primarily with zeros, are called sparse matrices. As we can observe in these figures, the
ratio of non-zero elements to the possible number of entries n², called the matrix density, decreases
further with a growing number of nodes. This is an outcome of the discretization stencil
which is used. For example, the FD stencil in 2D (see Eqs. (2.39) and (2.42)) relates a node to its
four neighbors, such that in the linear system a maximum of five entries per row may occur. It
should be clear then that, for high dimensions mainly, the storage of sparse matrices can be reduced
significantly by storing only the nonzero entries. This means that only the value and the
matrix position of each non-zero entry must be kept.

[Figure 5.1: Non-zero pattern of a matrix derived from FV discretization with 100 nodes (density 0.0735). Figure 5.2: Non-zero pattern of a matrix derived from FV discretization with 10 000 nodes (density 8.8305e-04).]

By doing so, the storage scales with the number of nonzero elements, which in turn scales with
the number of nodes n. When using the sparse storage format, the storage thus scales linearly,
O(n). This is a much better situation than the full format, whose storage scales quadratically,
O(n²). In Fig. 5.3 the required storage in MATLAB is plotted versus the number of unknowns n,
where the dashed blue and solid red lines correspond to the sparse and dense matrix, respectively.
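To see this linear scaling concretely, the 5-point stencil matrix can be assembled in a minimal dictionary-of-keys format. The Python sketch below uses only standard-library tools and illustrative boundary handling; real codes would use MATLAB's sparse or a dedicated sparse library:

```python
def laplacian_5pt(N):
    """Assemble the 2D 5-point Laplacian on an N x N grid as a
    dictionary-of-keys sparse matrix {(row, col): value}."""
    A = {}
    k = lambda i, j: i * N + j  # lexicographic node numbering
    for i in range(N):
        for j in range(N):
            A[k(i, j), k(i, j)] = -4.0            # diagonal entry
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                if 0 <= i + di < N and 0 <= j + dj < N:
                    A[k(i, j), k(i + di, j + dj)] = 1.0  # neighbor entry

    return A

for N in (10, 100):
    n, nnz = N * N, len(laplacian_5pt(N))
    print(f"n = {n:6d}  nnz = {nnz:6d}  density = {nnz / n**2:.1e}")
```

The number of stored entries is 5N² − 4N, so the density drops roughly as 5/n and the storage grows like O(n) rather than O(n²), consistent with Figs. 5.1 to 5.3.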

Let us now take a look at what happens to the computational effort when either the full or the
sparse format is considered and a direct approach is used to solve the linear system. The
computational effort for solving a dense linear system scales with O(n³) when we use, say,
Gaussian elimination. Operations involving sparse matrices include many operations on zeros,
which can be left out of the overall computation. As a result, if a matrix can be treated in a
sparse representation, the computational effort can be reduced significantly. Figure 5.4 shows
the computational time on an Intel i7 CPU 3.4 GHz 4K/8T with 16 GB RAM to solve a linear
system with n unknowns. Despite the more efficient computation, the numerical effort still
scales in most cases super-linearly (O(n^c) with 1 < c < 3) when direct methods are used to
solve a linear system.

[Figure 5.3: Storage in MATLAB for an n-dimensional sparse (dashed blue) and dense (solid red) system matrix from the FVM discretization. Figure 5.4: Computational time in MATLAB to solve an n-dimensional sparse (dashed blue) and dense (solid red) system from the FVM discretization (on Intel i7 CPU 3.4 GHz 4K/8T, 16 GB RAM).]

5.2 Iterative solvers and preconditioning

In many applications, the memory usage of direct sparse solvers makes their use simply unrealistic.
Therefore, iterative solvers have been introduced to replace the matrix inversion by a
series of matrix-vector products. The solution of the linear system is approached step by step
until a certain convergence criterion is achieved. In many applications, iterative solvers scale
better than direct sparse solvers: they can achieve a performance up to quasi-linear behavior,
O(n log n).

The key to a good performance of such iterative solvers is a suitable preconditioning, a technique
that we discuss further down in this section. An iterative solver approximates the solution
of the linear system step by step until a certain convergence criterion is satisfied. An
indicator for fast convergence is a small condition number κ₂(A) = ‖A‖₂ ‖A⁻¹‖₂ (see App. B
for further details). This number can be seen as a measure of how much the solution x can
change (how sensitive it is) for a small change in the right-hand side b. The condition number
is by definition greater than or equal to 1, and equal to 1 for the identity matrix.

In practice, it is often not recommended to solve the original system (5.1) directly, since the
matrix A might be associated with a large value of κ₂. Instead, a modified, so-called preconditioned,
system is solved in order to improve the conditioning of the problem. For left preconditioning,
this is done by multiplying the system with a full-rank matrix C⁻¹ ∈ Rⁿˣⁿ:

C⁻¹ A x = C⁻¹ b.   (5.2)

The matrix C is called the preconditioner. The above system has the same solution as the original
system but, with an appropriate choice of the preconditioner, the iteration converges much
faster and the solver is hence more efficient. Usually, the matrix C⁻¹ is not built explicitly;
instead, C⁻¹ is applied at every iteration step in an implicit way. Choosing C = A would be
ideal, since κ₂(A⁻¹A) = 1. However, we would then have to apply A⁻¹, which is practically the
solution of the original problem. We therefore choose a preconditioner which is

• close to the original system matrix A,

• cheap to apply.

5.3 Preconditioned Richardson iteration

A basic iterative solver is the Richardson iteration. Here, at every iteration step m, the residual
b − Ax is added to the current iterate x^(m). Using preconditioning, the preconditioned
residual w^(m) = C⁻¹(b − Ax^(m)) is added instead. Many iterative solvers are in fact such a
solver with a particular preconditioner. This method is shown in Algorithm 5.1.

Algorithm 5.1 Preconditioned Richardson iteration

Guess initial value x^(0)
while not converged do
    w^(m) = C⁻¹(b − Ax^(m))
    x^(m+1) = x^(m) + w^(m)
end while

We want to discuss when this iteration converges, and therefore we rewrite it as a fixed-point
iteration. Let x* be the unique solution of Eq. (5.1) and e^(m) = x^(m) − x* the error at the m-th
iteration step. Furthermore, we define the iteration matrix T = I − C⁻¹A, where I denotes the
n-dimensional identity matrix. Then we get

e^(m+1) = x^(m) + C⁻¹(b − Ax^(m)) − x*
        = e^(m) + C⁻¹b − C⁻¹Ax^(m)     (using C⁻¹b = C⁻¹Ax*)
        = (I − C⁻¹A) e^(m)
        = T e^(m)
        = T^(m+1) e^(0).   (5.3)

Thus, it follows that the iteration converges for every initial value x^(0) and every right-hand
side b, iff¹ lim_{m→∞} T^m = 0. This is the case iff the absolute values of all eigenvalues λⱼ of T are
less than 1. In terms of the spectral radius ρ(T) ≡ maxⱼ |λⱼ(T)|, this criterion can be written as

ρ(T) < 1.   (5.4)
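The error recursion (5.3) can be observed numerically. The Python sketch below (the 2×2 matrix is an illustrative example, not taken from the lecture) builds the Jacobi iteration matrix T = I − D⁻¹A and measures the contraction of the error norm, which matches ρ(T):

```python
# Jacobi iteration matrix T = I - D^{-1} A for a small test matrix (illustrative)
A = [[4.0, 1.0], [1.0, 3.0]]
T = [[0.0, -A[0][1] / A[0][0]],
     [-A[1][0] / A[1][1], 0.0]]

e = [1.0, 1.0]                             # arbitrary initial error e^(0)
norms = []
for _ in range(30):
    e = [T[0][0] * e[0] + T[0][1] * e[1],  # e^(m+1) = T e^(m), Eq. (5.3)
         T[1][0] * e[0] + T[1][1] * e[1]]
    norms.append((e[0] ** 2 + e[1] ** 2) ** 0.5)

rate = (norms[-1] / norms[-3]) ** 0.5      # contraction factor over two steps
print(rate, (1.0 / 12.0) ** 0.5)           # both equal rho(T) = 1/sqrt(12) ≈ 0.2887
```

For this matrix T² = (1/12) I, so the error shrinks by exactly ρ(T)² every two steps; since ρ(T) < 1, the iteration converges for any e^(0).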

This means that the preconditioned Richardson iteration can also diverge! Only with a suitable
preconditioning does it converge. The concrete procedure depends on the choice of the preconditioner
C. Some classical solvers are listed in the following. Hereby, we split the matrix A into
its diagonal D, its strict lower part E and its strict upper part F, so A = D + E + F.

• Jacobi iteration: Taking the diagonal D as preconditioner leads to the Jacobi iteration.
Since D is diagonal, its diagonal values aᵢᵢ act only on the corresponding row i of the
system. Accordingly, at the m-th iteration step, we have to solve for the i-th row:

x_i^(m+1) = x_i^(m) + (1/aᵢᵢ) (bᵢ − Σ_{j=1}^{n} aᵢⱼ x_j^(m))
          = (1/aᵢᵢ) (bᵢ − Σ_{j≠i} aᵢⱼ x_j^(m))   (5.5)

So when does this procedure converge? We already know the answer, namely if
ρ(I − D⁻¹A) < 1. Using the Gershgorin circle theorem, it can be shown that this criterion
is satisfied for diagonally dominant matrices, i.e. matrices for which

|aᵢᵢ| ≥ Σ_{j≠i} |aᵢⱼ|   ∀i   (5.6)

holds. Most matrices arising from discretization schemes satisfy this condition; e.g. for the FD
scheme (see Eq. (2.42)), every diagonal entry is formed as the sum over the other entries
in the same row. Other examples of methods which can be formulated as a preconditioned
Richardson iteration are:

• Gauss-Seidel: Here, not only the diagonal but also the lower part is used for preconditioning;
thus, the preconditioner is C = (D + E). In comparison to the Jacobi iteration, this method
usually converges faster and in more cases, but on the other hand it is a bit more laborious.

• SOR: The so-called successive over-relaxation uses C = (1/ω)(D + ωE) with the relaxation
factor ω ∈ (0, 2). For large values of ω, the relaxation factor can speed up the convergence
in comparison to the Gauss-Seidel method, but it can also lead to a more unstable iteration.
Small relaxation factors (between zero and one) can lead to a stable iteration where Gauss-
Seidel is unstable.
1 “iff” is a common abbreviation for “if and only if”.
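The Jacobi update (5.5) fits in a few lines. Here is a Python sketch (dense lists are used for clarity; a practical implementation, e.g. in the course's MATLAB codes, would exploit sparsity and vectorize the inner sum):

```python
def jacobi(A, b, x0, tol=1e-10, max_iter=500):
    """Jacobi iteration, i.e. Richardson preconditioned with C = D, Eq. (5.5)."""
    n = len(b)
    x = list(x0)
    for _ in range(max_iter):
        # every component is updated from the values of the previous iterate only
        x_new = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
                 for i in range(n)]
        if max(abs(u - v) for u, v in zip(x_new, x)) < tol:
            return x_new
        x = x_new
    return x

# Diagonally dominant 2x2 system: convergence follows from the Gershgorin argument
x = jacobi([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0], [0.0, 0.0])
print(x)  # close to the exact solution [1/11, 7/11]
```

Using the already-updated entries x_new[j] for j < i inside the sum turns this into the Gauss-Seidel method; scaling the update with a factor ω gives SOR.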

[Figure 5.5: Sketch of the procedure of projection methods: (a) choosing a start vector x^(0); (b) solving the minimization problem on K₁, giving x₁; (c) solving the minimization problem on K₂, giving x₂. The exact solution is x*.]

As we have seen previously, these methods do not converge for all linear systems. But similar
to criterion (5.4), convergence can be guaranteed for many practical cases.

5.4 Projection methods

Powerful tools are the so-called projection methods. Here, a brief description of two approaches is
given: the CG method (conjugate gradient method) and GMRES (generalized minimal residual
method). For details we refer to [XX2]. Both CG and GMRES approximate the solution on a
specific subspace, the so-called Krylov space

K_m = x^(0) + span{r^(0), A r^(0), ..., A^(m−1) r^(0)} ⊂ Rⁿ,   (5.7)

where r^(0) = b − Ax^(0) ∈ Rⁿ denotes the initial residual vector. Now, at the m-th iteration step
a minimization problem is solved on this subspace. For CG,

x^(m) = argmin_{x ∈ K_m} ( (1/2) xᵀ C⁻¹A x − xᵀ b )   (5.8)

and for GMRES

x^(m) = argmin_{x ∈ K_m} ‖C⁻¹(b − Ax)‖₂.   (5.9)

If the residual is small enough, the procedure is stopped; otherwise K_m is extended to K_(m+1)
and the minimization problem is solved on the larger space. This is repeated until the residual
falls below a certain threshold. For both problems, there exist very efficient implementations.

This iteration is illustrated in Fig. 5.5, where the exact solution is denoted as x*. In the first step,
an initial value x^(0) is set (Fig. 5.5(a)) and a one-dimensional space K₁ is constructed (Fig. 5.5(b)).
Subsequently, the problem is projected on that subspace, i.e. the minimization problem is
solved over the one-dimensional region. If the residual r₁ = b − Ax₁ is too large, we extend
the space by one dimension, which leads to the 2D space K₂ shown in Fig. 5.5(c).
The minimization problem is then solved on that space and we obtain x₂. We continue the
procedure if the residual r₂ is still too large; otherwise we can stop the iteration. It can be
shown that by further increasing the space dimension, the residual decreases or stays constant
but never grows (monotonic decay of the residual).

The CG method can only be used for symmetric positive-definite matrices A (in particular,
Aᵀ = A), whereas GMRES can be used for all types of systems. With exact arithmetic², these
projection methods converge for every linear system. Since a single iteration step is more
expensive than for the Richardson iteration, the performance of projection methods depends
strongly on a suitable preconditioner. Moreover, as in the case of methods based on the
Richardson iteration, these methods only show good performance if A is sparse!

2 Note that the case of exact arithmetic is an ideal case. When computers are used, machine errors are introduced
and, although these errors are very small (of the order of 2⁻⁵² for a machine with double precision), exact arithmetic
cannot be assumed.
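To make the idea concrete, here is an unpreconditioned CG sketch in Python (illustrative only; in practice library routines such as MATLAB's pcg or gmres should be preferred). Note that the matrix enters only through matrix-vector products, which is exactly why these methods pair so well with sparse storage:

```python
def cg(A_mul, b, x0, tol=1e-12, max_iter=100):
    """Plain conjugate gradients for a symmetric positive-definite system.
    A_mul(v) returns the product A v, so only matrix-vector products are needed."""
    x = list(x0)
    r = [bi - yi for bi, yi in zip(b, A_mul(x))]   # initial residual r^(0)
    p = list(r)                                    # first search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = A_mul(p)
        alpha = rs / sum(pi * qi for pi, qi in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * qi for ri, qi in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new ** 0.5 < tol:                    # residual small enough: stop
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# SPD 2x2 example: in exact arithmetic CG terminates in at most n = 2 steps
x = cg(lambda v: [4 * v[0] + v[1], v[0] + 3 * v[1]], [1.0, 2.0], [0.0, 0.0])
print(x)
```

Each extension of the Krylov space corresponds to one pass through the loop; the residual norm never grows, in line with the monotonic decay discussed above.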

5.5 Exercises

Stage 1: Unifying a code

• Each group is asked either to merge the codes or to choose one code (those written in Session
04). The important point here is to spend some time sharing ideas about the way of
coding.

Stage 2: Sparse Matrices (obtained from steady problems)

• Use the command spy to visualize the structure of your matrix.

• As observed in the lecture, storing and computing matrices in sparse form leads to
considerable improvements in computational efficiency. The first thing to do is then to ’re-
write’ the code considering sparse matrices (in essence, this amounts to expressing the matrix
A in the sparse format).

• Compare the computational time invested and the storage of A when using either the
sparse or the full format. Use the commands tic and toc for time and whos for storage.
Generate two figures similar to Figs. 5.3 and 5.4 (three different cases to generate those
plots should be enough).

Stage 3: Iterative solvers 1

The backslash operator is a very efficient MATLAB routine to solve linear systems. This is due
mainly to two reasons: 1) it uses different approaches depending on the structure of the matrix
(square, diagonal, tridiagonal, symmetric, full, etc.); 2) the selected approach is actually a
set of Basic Linear Algebra Subprograms (BLAS). These subprograms or routines are already
compiled (binary format) and therefore directly accessible to the computer. Sometimes, BLAS
routines are also optimized for the way memory (RAM, cache) works on a given computer.

Therefore, there is no point in comparing the backslash operator with ’self-made’ MATLAB
routines to solve linear systems. A comparison would only be fair if pre-compiled versions of
the self-made routines were used, so that our computers do not have to ‘waste’ time compiling,
i.e. converting the very high-level language (MATLAB) to machine language (binary).

What is possible is to compare self-made routines with self-made routines and, by doing so, to
compare basic iterative methods for solving linear systems. At this point it is important to see
the influence of the preconditioning approach on the convergence of the solution.

• Derive and code one algorithm for Jacobi, Gauss-Seidel and SOR, respectively.

• Create a random diagonally dominant matrix and a random vector b. Use Jacobi, Gauss-
Seidel and SOR to solve the corresponding linear system.

• Create a random tridiagonal, diagonally dominant matrix and a random vector b. Use Jacobi,
Gauss-Seidel and SOR to solve the corresponding linear system.

• Create a full random matrix and a random vector b. Use Jacobi, Gauss-Seidel and SOR to
solve the corresponding linear system.

• Plot the residual vs. the number of iterations (one figure for each case) for a 100 × 100 matrix.

Note: use the command rand to create random arrays.

Stage 4: Iterative solvers 2

• Use Jacobi, Gauss-Seidel and SOR to solve the linear system given by the discretization
of the 2D heat equation (finite volumes).

• Plot the residual vs. the number of iterations.

Stage 5: The wall of fame (or shame)

• Each group is asked to compute the time invested to solve a given problem (given in class)
considering the best algorithm among Jacobi, Gauss-Seidel and SOR (the time is taken just
before and after solving the linear system). Afterwards the code should be handed in so that
either Armin or I can test it. We will put the final classification of names and times on a big
poster: the wall of fame.

• The stopping criterion should be defined as

e ≥ ‖b − Ax‖₂ / ‖b‖₂   (5.10)

where ‖·‖₂ is the Euclidean norm and e = 0.01. Stop if more than 2000
iterations are needed.
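In a code, the criterion (5.10) can be checked as follows (Python sketch; the 2×2 system and the function names are only an illustration of the test, not part of the exercise codes):

```python
def converged(A_mul, x, b, eps=0.01):
    """Relative-residual stopping criterion of Eq. (5.10): ||b - Ax||_2 <= eps * ||b||_2."""
    norm = lambda v: sum(vi * vi for vi in v) ** 0.5
    r = [bi - yi for bi, yi in zip(b, A_mul(x))]
    return norm(r) <= eps * norm(b)

# Illustrative 2x2 system: the exact solution passes the check, a bad guess does not
A_mul = lambda v: [4 * v[0] + v[1], v[0] + 3 * v[1]]
print(converged(A_mul, [1 / 11, 7 / 11], [1.0, 2.0]))   # True
print(converged(A_mul, [0.0, 0.0], [1.0, 2.0]))         # False
```

Normalizing by ‖b‖₂ makes the criterion independent of the overall scale of the right-hand side, which keeps timings comparable between groups.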

Stage 6: Very large matrices (optional)

• Use the best method among Jacobi, Gauss-Seidel and SOR and compare it with gmres
(already implemented in MATLAB). Use different sizes (medium, large, very large) of matrices.
Which one performs better?

5.5.1 Useful MATLAB commands

max(abs(eig(A)))  Gives the largest absolute eigenvalue of matrix A, which is equal to the
spectral radius ρ of the matrix.

spy(A)  Plots the structure of the non-zero elements of matrix A. Non-zero elements
are depicted as dots.

tic / toc  The command tic starts a timer. The value of the timer is returned by
calling toc.

whos A  Gives information about the memory used by the matrix (or variable) A.

rand(n)  Creates a dense n × n matrix filled with decimal values in the range
[0, 1].

5.5.2 Flowchart

[Flowchart of the code: FVM_main.m calls InitFVM.m (physical and numerical parameters l, h, λ, ...), setUpMesh.m (takes the formfunction and returns the mesh X, Y), stamp.m (takes X, Y and returns the stencil and b) and solveFVM.m, which assembles T, A, b and calls the iterative Solver.m; finally, post.m post-processes T, X, Y.]
6  Green’s functions
References
[1] Kevin D. Cole, James V. Beck, A. Haji-Sheikh, and Bahman Litkouhi. Heat conduction using Green’s
functions. CRC Press, 2011.

Objectives
• Understand the great utility of Green’s functions together with their corresponding advantages and
disadvantages when solving partial differential equations.


Contents
6.1 Green’s function solution equation for the steady heat equation . . . . . . . . 84
6.2 Treatment of boundary conditions . . . . . . . . . . . . . . . . . . . . . . . . . . 86
6.3 Derivation of the Green’s function for a simple problem . . . . . . . . . . . . 88
6.4 What to integrate? (Warning) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
6.5 Green’s functions for a rectangular domain . . . . . . . . . . . . . . . . . . . . 91
6.6 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
6.7 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
6.7.1 Useful MATLAB commands . . . . . . . . . . . . . . . . . . . . . . . . . 95

So far, we have studied how to discretize the 2D heat equation in space and time. Space discretization
was carried out by finite differences and finite volumes, whereas time discretization
was performed using both explicit and implicit schemes. Finite differences were applied on a
rectangular geometry, whereas finite volumes were applied on a non-Cartesian domain. Analytical
solutions exist for the unsteady/steady heat equation on a rectangular topology and, as
stated before, an analytical solution should be preferred over a numerical approach whenever
it is both available and feasible. Generally, an analytical solution is derived for one specific
partial differential equation with defined boundary conditions and is valid only for that case.
If the partial differential equation remains the same and only small changes are applied to the
BCs or to a given source, the previous analytical solution does not hold anymore and another
analytical derivation must be performed. It would be very helpful if the starting point of such
derivations were the same for several different boundary and initial conditions. Indeed, such a
procedure exists in the framework of Green’s functions.

6.1 Green’s function solution equation for the steady heat equation

The general solution of the heat equation in terms of Green’s Function (GF) also comprises the
unsteady version. Nevertheless, we will not consider variations in time in the solution. First of
all, let us recall the steady heat equation:

∂²T/∂x² + ∂²T/∂y² + (1/λ) ω̇(x, y) = 0,   (6.1)

where ω̇ denotes a given source of heat placed at (x, y). Let us assume now that there is an
auxiliary problem defined by

∂²G/∂x² + ∂²G/∂y² + δ(x − xs) = 0,   (6.2)

where G ( x| xs ) represents a GF. This function should be read as ‘the measurement of G at the
observation point x = ( x, y) due to a pulse δ produced at position xs = ( xs , ys )’. The function
δ represents the Dirac delta function which is mainly defined by its filter property:

∫_{−∞}^{∞} f(x) δ(x − xs) dx = f(xs)   (6.3)

for a given function f ( x). An important property of Green’s functions is called ‘reciprocity’
and is expressed as

G ( x | x s ) = G ( x s | x ). (6.4)

Based on the reciprocity property, we can then express Eqs. (6.1) and (6.2) in terms of xs as

∂²T/∂xs² + ∂²T/∂ys² + (1/λ) ω̇(xs, ys) = 0   and   ∂²G/∂xs² + ∂²G/∂ys² + δ(x − xs) = 0.   (6.5)

Subsequently, we multiply the first expression by G and the second by T, and subtract the
latter from the former. The general equation results after integrating over the control
area S containing the source:

∫_S G (∂²T/∂xs² + ∂²T/∂ys²) dxs + (1/λ) ∫_S G ω̇ dxs − ∫_S T (∂²G/∂xs² + ∂²G/∂ys²) dxs − ∫_S T δ(x − xs) dxs = 0.   (6.6)
Reordering Eq. (6.6) and knowing that ∫_S T(xs) δ(x − xs) dxs = T(x) results in

T(x) = (1/λ) ∫_S G ω̇ dxs + ∫_S [ G (∂²T/∂xs² + ∂²T/∂ys²) − T (∂²G/∂xs² + ∂²G/∂ys²) ] dxs.   (6.7)

Leaving the first term on the RHS of Eq. (6.7) unchanged and applying Green's second identity
to the second leads to

T(x) = (1/λ) ∫_S G ω̇ dxs + ∮_{∂S} [ G (∂T/∂xs, ∂T/∂ys) − T (∂G/∂xs, ∂G/∂ys) ] · n dls.   (6.8)

Equation (6.8) is the solution of the 2D steady heat equation. The first term on the RHS represents
the contribution of the source term ω̇, whereas the second term accounts for the influence
of the boundary conditions on the final temperature distribution T(x). For the 1D case, Eq. (6.8)
reduces to
T(x) = (1/λ) ∫ G ω̇ dxs + [ G dT/dxs − T dG/dxs ]_W^E   (6.9)

6.2 Treatment of boundary conditions

Equation (6.8) holds for any 2D domain. Nevertheless, Green’s functions are not simple to obtain
(or do not exist) for complex geometries. We will consider, consequently, a 2D rectangular
domain as shown in Fig. 6.1. For that case, the second term of Eq. (6.8) can be developed as
follows:
[Figure 6.1: 2D rectangular domain of size Lx × Ly with a source point (xs, ys) in its interior.]

∮_{∂S} [ G (∂T/∂xs, ∂T/∂ys) − T (∂G/∂xs, ∂G/∂ys) ] · n dls
  = ∫_{lE} ( G ∂T/∂xs − T ∂G/∂xs ) · n dys + ∫_{lN} ( G ∂T/∂ys − T ∂G/∂ys ) · n dxs
  + ∫_{lW} ( G ∂T/∂xs − T ∂G/∂xs ) · n dys + ∫_{lS} ( G ∂T/∂ys − T ∂G/∂ys ) · n dxs,   (6.10)

where l E , l N , l W and l S stand for the contour lines at east, north, west, and south, respec-
tively. The boundary conditions for the auxiliary problem (Eq. (6.2)) are always homogeneous.
Equations (6.11) to (6.13) list the definition of BC for both the actual problem and the auxiliary
problem taking as an example the West or East boundary.

• Dirichlet
  T(Lx, ys) = f(ys)   and   G(Lx, ys) = 0   (6.11)

• Neumann
  ∂T(Lx, ys)/∂xs · n = q̇(Lx, ys)/λ   and   ∂G(Lx, ys)/∂xs · n = 0   (6.12)

• Robin
  λ ∂T(Lx, ys)/∂xs · n + α [T(Lx, ys) − T∞(Lx, ys)] = 0   and   λ ∂G(Lx, ys)/∂xs · n + α G(Lx, ys) = 0   (6.13)

Following the procedure introduced before, we multiply the BC of the actual problem by G and
the BC of the auxiliary problem by T, and we subtract the latter from the former. It yields

• Dirichlet
  TG − GT = G f   (6.14)

• Neumann
  ( G ∂T/∂xs − T ∂G/∂xs ) · n = G q̇ / λ,   (6.15)

• Robin
  ( G ∂T/∂xs − T ∂G/∂xs ) · n = α G T∞ / λ   (6.16)

where the dependence on Lx and ys has been implicitly taken into account. Equation (6.14) is
of no utility, since it does not give any additional information with respect to what we already
knew, namely that G = 0 at this boundary. On the contrary, Eqs. (6.15) and (6.16) provide the
value of the expression inside the integrals on the RHS of Eq. (6.10). Let
us now assume a problem in which:

• East → Robin BC is applied

• North → Robin BC is applied

• West → Dirichlet BC is applied

• South → Neumann BC is applied.

The solution of this particular problem for the 2D heat equation, after replacing Eqs. (6.15) and
(6.16) into Eq. (6.8), reads:

T(x, y) = (1/λ) ∫_S G ω̇ dys dxs + (1/λ) ∫_{lE} α G T∞ dys + (1/λ) ∫_{lN} α G T∞ dxs + ∫_{lW} T ∂G/∂xs dys + (1/λ) ∫_{lS} G q̇ dxs.   (6.17)

It is important to realize that all terms corresponding to Neumann or Robin BCs are always
positive for any boundary. On the contrary, the term corresponding to a Dirichlet BC is positive
for the South and West boundaries, whereas it is negative for the East and North boundaries.

6.3 Derivation of the Green’s function for a simple problem

Until now we have mentioned ‘Green’s functions’ several times, but they still remain somewhat
abstract. The idea of this section is to derive the GF for a simple configuration so that we get an
idea of what they look like. At the end of this section, some Green’s functions are listed that
correspond to the solution of the 1D heat equation for several types of boundary conditions.
Let us then start by defining the 1D heat equation corresponding to the auxiliary problem. It
reads

d²G(x|xs)/dx² + δ(x − xs) = 0   for 0 < x < L,   (6.18)

where xs is associated with the position of the source and is placed somewhere between 0 and
L. Let us now define the boundary conditions as follows:

G(0) = 0 at the inlet,   dG(L)/dx = 0 at the outlet   (6.19)

In order to remove the singularity of the differential equation induced by the δ-source, the
strategy is now to decompose the domain into two sub-domains:

d²G₁(x|xs)/dx² = 0   for 0 < x < xs   (6.20)

d²G₂(x|xs)/dx² = 0   for xs < x < L.   (6.21)

Now, we have two problems to solve. The solutions are easily obtained by integration, so that

G₁(x|xs) = C₁x + C₂   for 0 < x < xs   (6.22)

G₂(x|xs) = C₃x + C₄   for xs < x < L,   (6.23)

where C₁, C₂, C₃ and C₄ are constants to be determined from four different equations. Two of
these equations are the boundary conditions defined in Eq. (6.19). The third equation comes
from the continuity condition G₁(x|xs) = G₂(x|xs) at x = xs, so that

C₁xs + C₂ = C₃xs + C₄.   (6.24)

The last equation arises by integrating the ‘missing’ part of the problem, i.e. by integrating
Eq. (6.18) over a region xs − e < x < xs + e, where e is a very small number. This condition reads

∫_{xs−e}^{xs+e} d²G(x|xs)/dx² dx = − ∫_{xs−e}^{xs+e} δ(x − xs) dx.   (6.25)

For e → 0, this leads to

[ dG(x|xs)/dx ]_{xs−}^{xs+} = −1,   i.e.   dG₂(x|xs)/dx − dG₁(x|xs)/dx = −1.

Therefore, the fourth condition is expressed as

C₃ − C₁ = −1.   (6.26)

Solving Eqs. (6.19), (6.24) and (6.26) yields C1 = 1, C2 = 0, C3 = 0 and C4 = xs. The Green’s
function for the problem defined by Eqs. (6.18) and (6.19) reads as

G(x|xs) = { x    for 0 < x < xs
          { xs   for xs < x < L.    (6.27)

In Table 6.1, we list some Green’s functions of our interest. Note that the suffixes in the column
‘Case’ stand for (1) Dirichlet BC, (2) Neumann BC and (3) Robin BC.

Case X11:  Inlet G(0|xs) = 0;  Outlet G(L|xs) = 0;
           G = x(1 − xs/L) for 0 < x < xs;   G = xs(1 − x/L) for xs < x < L.

Case X12:  Inlet G(0|xs) = 0;  Outlet dG(L|xs)/dx = 0;
           G = x for 0 < x < xs;   G = xs for xs < x < L.

Case X13:  Inlet G(0|xs) = 0;  Outlet λ dG(L|xs)/dx + α2 G(L|xs) = 0;
           G = x [1 − B2 (xs/L)/(1 + B2)] for 0 < x < xs;   G = xs [1 − B2 (x/L)/(1 + B2)] for xs < x < L.

Case X23:  Inlet dG(0|xs)/dx = 0;  Outlet λ dG(L|xs)/dx + α2 G(L|xs) = 0;
           G = L(1 + 1/B2 − xs/L) for 0 < x < xs;   G = L(1 + 1/B2 − x/L) for xs < x < L.

Case X33:  Inlet λ dG(0|xs)/dx + α1 G(0|xs) = 0;  Outlet λ dG(L|xs)/dx + α2 G(L|xs) = 0;
           G = (B1 B2 x + B1 x − B1 B2 x xs/L − B2 xs + B2 L + L)/C for 0 < x < xs;
           G = (B1 B2 xs + B1 xs − B1 B2 x xs/L − B2 x + B2 L + L)/C for xs < x < L;

where B1 = α1 L/λ, B2 = α2 L/λ and C = B1 B2 + B1 + B2.

Table 6.1: Some Green’s functions of interest that satisfy the 1D steady heat equation. Taken from [1].

6.4 What to integrate? (Warning)

Even for a simple case, such as that of Eq. (6.27), some confusion may arise when integrating. Let us assume we want to integrate the 1D Green’s function of Eq. (6.27) within the domain 0 < x < L:

∫₀^L G(x|xs) dxs,   where   G(x|xs) = { Ga = x    for 0 < x < xs
                                       { Gb = xs   for xs < x < L.    (6.28)

Should we do

∫_{xs=0}^{xs=x} Ga dxs + ∫_{xs=x}^{xs=L} Gb dxs   (Option 1)    or    ∫_{xs=0}^{xs=x} Gb dxs + ∫_{xs=x}^{xs=L} Ga dxs   (Option 2)?    (6.29)

Option 1 is incorrect; option 2 is the one we must choose. Indeed, the first integral corresponds to the region 0 < xs < x, whereas the second integral lies between x < xs < L (note that the integration is performed with respect to xs and not with respect to x). Accordingly, Gb corresponds to the first integral, since xs < x, and Ga to the second integral, since x < xs.
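The integration split above can be verified numerically. The following is a minimal Python/NumPy sketch (the course exercises use MATLAB; a small `trapz` helper mirrors MATLAB's trapz): for a uniform unit source f = 1, the temperature T(x) = ∫₀^L G(x|xs) f(xs) dxs obtained from the Green's function of Eq. (6.27) should satisfy −T'' = 1 with T(0) = 0 and T'(L) = 0, i.e. T(x) = Lx − x²/2.

```python
import numpy as np

def trapz(y, x):
    # Trapezoidal rule, mirroring MATLAB's trapz(X, Y)
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0

def G(x, xs):
    # Piecewise Green's function of Eq. (6.27): Dirichlet at x = 0, Neumann at x = L
    return np.where(xs < x, xs, x)   # Gb = xs where xs < x, Ga = x where xs > x

L = 1.0
xs = np.linspace(0.0, L, 2001)       # integration (source) points
x_obs = np.linspace(0.0, L, 51)      # observer points

# Temperature due to a uniform unit source: T(x) = int G(x|xs) f(xs) dxs with f = 1
T = np.array([trapz(G(x, xs), xs) for x in x_obs])

# Analytic solution of -T'' = 1 with T(0) = 0 and T'(L) = 0
T_exact = L * x_obs - x_obs**2 / 2
print(np.max(np.abs(T - T_exact)))   # tiny discretization error
```

Note that `np.where(xs < x, xs, x)` performs exactly the Option 2 split: it picks Gb = xs on the sub-interval 0 < xs < x and Ga = x on x < xs < L.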

Case X11:  eigenfunction Xm(x) = sin(βm x);  eigenvalue βm = mπ/L;  norm Nx = L/2.

Case X12:  Xm(x) = sin(βm x);  βm = (2m − 1)π/(2L);  Nx = L/2.

Case X13:  Xm(x) = sin(βm x);  βm solves βm L cot(βm L) + B2 = 0;  Nx = L/(2φ2m).

Case X23:  Xm(x) = cos(βm x);  βm solves βm L tan(βm L) − B2 = 0;  Nx = L/(2φ2m).

Case X33:  Xm(x) = βm L cos(βm x) + B1 sin(βm x);
           βm solves tan(βm L) + [βm (α1 + α2)/λ]/(βm² − α1 α2 λ⁻²) = 0;  Nx = L/(2Φm).

where m = 1, 2, ..., Bi = αi L/λ, φim = (βm² L² + Bi²)/(βm² L² + Bi² + Bi) and Φm = φ2m/(βm² L² + B1² + B1 φ2m).

Table 6.2: Some Green’s functions of interest that satisfy the 1D steady heat equation (series version).

6.5 Green’s functions for a rectangular domain

The Green’s functions listed in Table 6.1 are very useful in the study of steady heat conduction in a one-dimensional domain. However, they are not unique: another family of Green’s functions exists for 1D steady heat conduction. This family is based on a series expansion and is expressed as

G(x|xs) = Σ_{m=1}^{∞} (1/βm²) Xm(x) Xm(xs)/Nx    for 0 < x < Lx,    (6.30)

where Xm(x) and βm denote the m-th eigenfunction and the m-th eigenvalue of the system, respectively. Nx represents the norm of the m-th eigenfunction¹. The eigenfunctions Xm(x) and eigenvalues βm for several boundary conditions are listed in Table 6.2.

In practice, Green’s functions expressed as series have an important disadvantage with respect
to Green’s functions expressed as polynomials (see Table 6.1): the solution based on the former
is accurate only if a sufficient number of terms is included in the sum. On the one hand,
when considering homogeneous Dirichlet BC (T = 0), this is actually not a problem, since no
more than six terms are usually needed to obtain a sufficiently accurate solution. On
the other hand, when non-homogeneous Dirichlet BC are part of the problem, a convergence
problem arises. In such a case, hundreds of terms are most of the time necessary to reach a ‘fair’
solution. Figure 6.2 shows an example of the difficulty that series-based Green’s functions have
in obtaining correct values of T close to the boundaries when a non-zero boundary temperature
is imposed. This difficulty is highly visible at points very close to
Dirichlet boundaries and is the reason why a more refined discretization may perform worse
¹ Xm(x) and βm are called eigenfunction and eigenvalue, respectively, because they satisfy d²Xm(x)/dx² + βm² Xm(x) = 0.


Figure 6.2: Illustration of Gibbs phenomenon with Green’s function series of 80 terms. (a)
∆x, ∆y = 0.025 (b) ∆x, ∆y = 0.0125.

than a coarse one. Indeed, this kind of problem is recognized as the Gibbs phenomenon.

Although GFs based on series do not behave as well as GFs based on polynomials, they are
of relevance, since Green’s function formulations for 2D and 3D Cartesian geometries are most
of the time based on them. A Green’s function associated with the steady heat equation in a
rectangular domain must satisfy

∂²G2D(x, y|xs, ys)/∂x² + ∂²G2D(x, y|xs, ys)/∂y² + δ(x − xs) δ(y − ys) = 0    for 0 < x < Lx, 0 < y < Ly.    (6.31)

By combining two 1D Green’s functions (Eq. (6.30) for the x and y directions), a Green’s function
for a rectangular domain is constructed as

G2D(x, y|xs, ys) = Σ_{m=1}^{∞} Σ_{n=1}^{∞} [1/(βm² + θn²)] · Xm(x) Xm(xs)/Nx · Yn(y) Yn(ys)/Ny    for 0 < x < Lx, 0 < y < Ly,    (6.32)

where Yn(y) and θn denote the n-th eigenfunction and the n-th eigenvalue, respectively. Ny represents
the norm of the n-th eigenfunction Yn(y). Values for Yn(y), θn and Ny are found in Table 6.2 by
replacing x by y and m by n.
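As a concrete sanity check, the series (6.32) can be evaluated for case X11 (homogeneous Dirichlet BC everywhere) on the unit square with a uniform unit source. The following is a hedged Python sketch (the course uses MATLAB; variable names are illustrative): the resulting centre temperature should approach the classical reference value ≈ 0.0737 for −∇²T = 1 with T = 0 on the boundary.

```python
import numpy as np

Lx = Ly = 1.0
M = N = 49                    # number of series terms (only odd modes carry a uniform source)

m = np.arange(1, M + 1)
n = np.arange(1, N + 1)
beta = m * np.pi / Lx         # eigenvalues of case X11 (Dirichlet at both ends)
theta = n * np.pi / Ly
Nx, Ny = Lx / 2.0, Ly / 2.0   # norms from Table 6.2

# Integrals of the eigenfunctions against a uniform unit source f(xs, ys) = 1
Ix = (1.0 - np.cos(m * np.pi)) / beta    # = int_0^Lx sin(beta_m xs) dxs
Iy = (1.0 - np.cos(n * np.pi)) / theta

def T(x, y):
    # Source contribution with the Green's function (6.32):
    # sum over m, n of Xm(x) Ix / Nx * Yn(y) Iy / Ny / (beta_m^2 + theta_n^2)
    Xm = np.sin(beta * x) * Ix / Nx
    Yn = np.sin(theta * y) * Iy / Ny
    denom = beta[:, None] ** 2 + theta[None, :] ** 2
    return np.sum(np.outer(Xm, Yn) / denom)

print(T(0.5, 0.5))   # ~0.0737, the classical centre value for -lap(T) = 1 on the unit square
```

Because the source integral converges quickly (the summand decays like 1/(mn(m² + n²))), fifty modes per direction are already plenty here; it is the non-homogeneous Dirichlet boundary terms, not the source term, that converge slowly.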

6.6 Discussion

It might be disappointing that an analytical solution, such as the one expressed in Eq. (6.32),
remains difficult to compute, especially for non-homogeneous Dirichlet boundary conditions.
One may then question the utility of such an approach and, instead, prefer to solve the 2D heat
equation by finite differences, for instance. Nevertheless, it is important to highlight the insight
that the analytical solution based on Green’s functions (see Eqs. (6.8) and (6.17)) gives us. The
first remarkable feature of a solution of the heat equation is that the distribution of temperature
in the domain under study can be expressed as a superposition of solutions that correspond to
both a given source and the boundary conditions of the system. A second remarkable feature
is that, in contrast to the integrals associated with BC (see (6.17)), the integral associated with a
source is very easily computed using a GF, such as the one of Eq. (6.32), since it converges very fast.
Moreover, since the integral is only performed over the domain in which the source is located,
a very accurate computation may result from refining only the source region. This is
much simpler than applying finite differences with a non-uniform mesh refined only in the
region of the source. Since the final objective is to solve
the heat equation (in 2D for our particular case) as accurately and as cheaply as
possible, a good idea is to combine numerical methods (finite differences, finite volumes or
finite elements) with Green’s functions: the former are used to compute the temperature
distribution due to boundary conditions, while the latter are used to study the effect of
heat sources within the domain. It is interesting to note that a complete study of the influence
of heat sources on a given thermal system can be performed very efficiently by computing only
once the temperature field due to BC, and then evaluating the GF integral as many times as desired
for the heat sources of interest.

6.7 Exercises

Stage 1: Numerical integration

Before implementing Green’s functions, it is useful to first refresh our minds about numerical
integration with the MATLAB function trapz.

• Perform

∫₀² x dx    (6.33)

Note: The result should be a scalar.

• Perform and plot the corresponding result



∫₀² ∫₀² cos(xs) sin(ys) dxs dys    (6.34)

Note: the result should be a scalar
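For reference, the two Stage 1 integrals can be sketched as follows. This is a Python/NumPy stand-in for the MATLAB trapz workflow (a small `trapz` helper mimics MATLAB's trapz(X, Y)); the second, double integral is computed by applying the trapezoidal rule twice, once per direction.

```python
import numpy as np

def trapz(y, x):
    # Trapezoidal rule along the last axis, mirroring MATLAB's trapz(X, Y)
    return np.sum((y[..., 1:] + y[..., :-1]) * np.diff(x), axis=-1) / 2.0

# First integral: int_0^2 x dx = 2 (the trapezoidal rule is exact for linear integrands)
x = np.linspace(0.0, 2.0, 201)
print(trapz(x, x))

# Second integral: int_0^2 int_0^2 cos(xs) sin(ys) dxs dys = sin(2) (1 - cos(2))
xs = np.linspace(0.0, 2.0, 201)
ys = np.linspace(0.0, 2.0, 201)
XS, YS = np.meshgrid(xs, ys, indexing="ij")
F = np.cos(XS) * np.sin(YS)
inner = trapz(F, ys)     # integrate over ys for each xs
I = trapz(inner, xs)     # then integrate over xs
print(I)                 # ~1.2877
```

Both results are scalars, as the exercise requires; the nested call pattern is exactly what the 2D Green's-function integrals of Stage 2 need.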

Stage 2: 2D Green’s functions

For this stage, the GFs of Table 6.2 are considered. Compute the solution of the 2D steady heat
equation based on Eq. (6.8) with the Green’s function given by Eq. (6.32).

Define three points (x, y) for the observer and

• Consider a source distributed uniformly along the domain. Take into account

  – Homogeneous Dirichlet BC (everywhere)

  – Homogeneous Neumann BC (both North and East) and homogeneous Dirichlet BC (West and South)

• Consider a source distributed uniformly but only in a small region of the domain. Here
it is possible to create a mesh for the source enclosing only the region of the source. Use

  – Homogeneous Dirichlet BC (everywhere)

  – Homogeneous Neumann BC (both North and East) and homogeneous Dirichlet BC (West and South)

Define a whole field (x, y) for the observer and

• Consider a source distributed uniformly along the domain. Take into account

  – Homogeneous Dirichlet BC (everywhere)

  – Homogeneous Neumann BC (both North and East) and homogeneous Dirichlet BC (West and South)

• Consider a source distributed uniformly but only in a small region of the domain. Here
it is possible to create a mesh for the source enclosing only the region of the source. Use

  – Homogeneous Dirichlet BC (everywhere)

  – Homogeneous Neumann BC (both North and East) and homogeneous Dirichlet BC (West and South)

Stage 3: 2D Green’s functions and FD (or FV)

Green’s functions expressed as series can perform poorly when non-homogeneous Dirichlet BC
are applied. Remember that one of the most important ‘take away’ ideas of solutions based on
GF is that these can be explained as a superposition (contribution) of source and boundaries.
Therefore

• Compute a given problem with finite differences for a given set of boundary conditions.

• Compute one setup of that domain for several distributions of sources using Green’s functions.

• Compute the final solution by adding the contribution of the BC computed by FD to the
contribution of the source given by the GF.

Stage 4: 2D Green’s functions: Robin BC (optional)

From Table 6.2 it is observed that the eigenvalues βm for Robin BC are not given explicitly.
Indeed, the values of βm depend on the values of α and λ and should be found by solving the
equation shown in Table 6.2 (column 2) for each m. Consequently, applying Robin BC with the
Green’s function approach might become really expensive.

• Use Robin BC for both North and East and Dirichlet BC on West and South.

6.7.1 Useful MATLAB commands

trapz(X,Y)   Computes the integral of Y with respect to X using trapezoidal integration
A Addendum to Finite Volumes
A.1 Uniform rectangular grid and Boundary Conditions

If the geometry under study is Cartesian, the ∆y-increments taken between points located at the same height,

• ∆y^{Se}_{Sw} = ∆y^{se}_{sw} = ∆y^{e}_{w} = ∆y^{ne}_{nw} = ∆y^{Ne}_{Nw} = ∆y^{s}_{sW} = ∆y^{n}_{nW} = ∆y^{s}_{sE} = ∆y^{n}_{nE} = 0,

vanish, and the spatial derivatives of Eq. (3.16) become


∂T/∂x|_s ≈ (1/S^s) [ ∆y^{e}_{Se} T_se + ∆y^{Sw}_{w} T_sw ],    (A.1)
∂T/∂y|_s ≈ (−1/S^s) [ ∆x^{Se}_{Sw} T_S + ∆x^{w}_{e} T_P ],    (A.2)
∂T/∂x|_e ≈ (1/S^e) [ ∆y^{nE}_{sE} T_E + ∆y^{s}_{n} T_P ],    (A.3)
∂T/∂y|_e ≈ (−1/S^e) [ ∆x^{sE}_{s} T_se + ∆x^{n}_{nE} T_ne ],    (A.4)
∂T/∂x|_n ≈ (1/S^n) [ ∆y^{Ne}_{e} T_ne + ∆y^{w}_{Nw} T_nw ],    (A.5)
∂T/∂y|_n ≈ (−1/S^n) [ ∆x^{e}_{w} T_P + ∆x^{Nw}_{Ne} T_N ],    (A.6)
∂T/∂x|_w ≈ (1/S^w) [ ∆y^{n}_{s} T_P + ∆y^{sW}_{nW} T_W ],    (A.7)
∂T/∂y|_w ≈ (−1/S^w) [ ∆x^{s}_{sW} T_sw + ∆x^{nW}_{n} T_nw ].    (A.8)

The final expression then simplifies to

∇²T|_P ≈ (λ/S^P) [ −∆x^{se}_{sw} ∂T/∂y|_s + ∆y^{ne}_{se} ∂T/∂x|_e − ∆x^{nw}_{ne} ∂T/∂y|_n + ∆y^{sw}_{nw} ∂T/∂x|_w ]
       = (λ∆x^{se}_{sw}/(S^P S^s)) [ ∆x^{Se}_{Sw} T_S + ∆x^{w}_{e} T_P ]
       + (λ∆y^{ne}_{se}/(S^P S^e)) [ ∆y^{nE}_{sE} T_E + ∆y^{s}_{n} T_P ]
       + (λ∆x^{nw}_{ne}/(S^P S^n)) [ ∆x^{e}_{w} T_P + ∆x^{Nw}_{Ne} T_N ]
       + (λ∆y^{nw}_{sw}/(S^P S^w)) [ ∆y^{n}_{s} T_P + ∆y^{sW}_{nW} T_W ].    (A.10)

Considering rectangular cells of the same size, where ∆x = ∆x^{se}_{sw} = ∆x^{Se}_{Sw} = −∆x^{w}_{e} = −∆x^{nw}_{ne} = −∆x^{Nw}_{Ne} and ∆y = ∆y^{ne}_{se} = ∆y^{nE}_{sE} = −∆y^{s}_{n} = −∆y^{nw}_{sw} = −∆y^{sW}_{nW}, Eq. (A.10) breaks down into

∇²T|_P ≈ −(∆x/(∆x∆y)²) (−∆x T_S + ∆x T_P) + (∆y/(∆x∆y)²) (−∆y T_P + ∆y T_E)
        + (∆x/(∆x∆y)²) (−∆x T_P + ∆x T_N) − (∆y/(∆x∆y)²) (−∆y T_W + ∆y T_P).    (A.11)

After some algebra, Eq. (A.11) becomes

∇²T|_P ≈ (1/S^P) ∮_{∂S^P} ∇T · n dl ≈ (T_W − 2T_P + T_E)/(∆x)² + (T_S − 2T_P + T_N)/(∆y)²,    (A.12)

which is in fact the formulation given by the finite difference approach for a 2D uniform rect-
angular grid.
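A quick numerical check of Eq. (A.12): the resulting five-point stencil is exact for quadratic fields, so applying it to T = x² + y² must return the exact Laplacian, 4, at every interior node. The sketch below uses Python/NumPy (the course exercises use MATLAB).

```python
import numpy as np

x = np.linspace(0.0, 1.0, 11)
y = np.linspace(0.0, 1.0, 11)
dx, dy = x[1] - x[0], y[1] - y[0]
X, Y = np.meshgrid(x, y, indexing="ij")
T = X**2 + Y**2                      # exact Laplacian is 4 everywhere

# Five-point stencil of Eq. (A.12) evaluated on the interior nodes
lap = ((T[:-2, 1:-1] - 2.0 * T[1:-1, 1:-1] + T[2:, 1:-1]) / dx**2
       + (T[1:-1, :-2] - 2.0 * T[1:-1, 1:-1] + T[1:-1, 2:]) / dy**2)

print(np.max(np.abs(lap - 4.0)))     # ~0: the stencil is exact for quadratics
```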

A.2 Boundary conditions

In this section we give an example of how to discretize the Laplacian operator ∇² on a
node which lies on the ‘south’ boundary. The discretization of ∇² on a node placed at the ‘east’,
‘north’ or ‘west’ boundary can be derived in a similar manner.

A.2.1 South


Figure A.1: Labeling of nodes and points between nodes in the region close to a South boundary.

Figure A.1 illustrates the cells and nodes adjacent to a given region at the south of the domain.
First, the divergence theorem is applied to the region encircling P (Fig. A.1(a)):

∇²T|_P ≈ (1/S^η) ∮_{∂S^η} ∇T · n dl =
(1/S^η) [ ∫_{l^e_w} (∇T) · n dl + ∫_{l^{ne}_e} (∇T) · n dl + ∫_{l^{nw}_{ne}} (∇T) · n dl + ∫_{l^{w}_{nw}} (∇T) · n dl ].    (A.13)

Subsequently, the mid-point rule of integration is considered. The approximation of the Laplacian operator at node P is then written as

∇²T|_P ≈ (1/S^η) ∮_{∂S^η} ∇T · n dS ≈ (1/S^η) [ ∇T|_P · n ∆l^e_w
+ ∂T/∂x|_{ηe} ∆y^{ne}_{e} − ∂T/∂y|_{ηe} ∆x^{ne}_{e}
+ ∂T/∂x|_n ∆y^{nw}_{ne} − ∂T/∂y|_n ∆x^{nw}_{ne}
+ ∂T/∂x|_{ηw} ∆y^{w}_{nw} − ∂T/∂y|_{ηw} ∆x^{w}_{nw} ].    (A.14)

Note that the first term on the right-hand side of Eq. (A.14) is the term associated with the boundary
contour. It was not decomposed into x and y directions. Applying now Green’s theorem to
the remaining terms of Eq. (A.14) leads to

∂T/∂x|_{ηe} ≈ (1/S^{ηe}) ∮_{∂S^{ηe}} T dy = (1/S^{ηe}) [ ∫_P^E T dy + ∫_E^{nE} T dy + ∫_{nE}^{n} T dy + ∫_{n}^{P} T dy ]
            ≈ (1/S^{ηe}) [ ∆y^{E}_{P} T|_e + ∆y^{nE}_{E} T|_{ηE} + ∆y^{n}_{nE} T|_{ne} + ∆y^{P}_{n} T|_η ],    (A.15)

∂T/∂y|_{ηe} ≈ (−1/S^{ηe}) ∮_{∂S^{ηe}} T dx ≈ (−1/S^{ηe}) [ ∆x^{E}_{P} T|_e + ∆x^{nE}_{E} T|_{ηE} + ∆x^{n}_{nE} T|_{ne} + ∆x^{P}_{n} T|_η ],    (A.16)

∂T/∂x|_n ≈ (1/S^n) ∫_{∂S^n} T dy ≈ (1/S^n) [ ∆y^{e}_{w} T|_P + ∆y^{Ne}_{e} T|_{ne} + ∆y^{Nw}_{Ne} T|_N + ∆y^{w}_{Nw} T|_{nw} ],    (A.17)

∂T/∂y|_n ≈ (−1/S^n) ∫_{∂S^n} T dx ≈ (−1/S^n) [ ∆x^{e}_{w} T|_P + ∆x^{Ne}_{e} T|_{ne} + ∆x^{Nw}_{Ne} T|_N + ∆x^{w}_{Nw} T|_{nw} ],    (A.18)

∂T/∂x|_{ηw} ≈ (1/S^{ηw}) ∮_{∂S^{ηw}} T dy ≈ (1/S^{ηw}) [ ∆y^{P}_{W} T|_w + ∆y^{n}_{P} T|_η + ∆y^{nW}_{n} T|_{nw} + ∆y^{W}_{nW} T|_{ηW} ],    (A.19)

∂T/∂y|_{ηw} ≈ (−1/S^{ηw}) ∮_{∂S^{ηw}} T dx ≈ (−1/S^{ηw}) [ ∆x^{P}_{W} T|_w + ∆x^{n}_{P} T|_η + ∆x^{nW}_{n} T|_{nw} + ∆x^{W}_{nW} T|_{ηW} ].    (A.20)

The first term on the right hand side of Eq. (A.14) is now replaced by the desired boundary
condition as follows:

Neumann BC

− ∇ T |P · n = q̇ (A.21)

Robin BC

∇T|_P · n = −(α/λ)(T_P − T∞)    (A.22)

Evidently, for Dirichlet BC there is no need for such a procedure, since the values at the
corresponding nodes are directly given.
B Addendum to Sparse Matrices and Linear Solvers: condition number

B.1 Definition

The norm of a matrix A is defined as

M = ‖A‖₂ = max_{x≠0} ‖Ax‖₂ / ‖x‖₂,    (B.1)

where x is a non-zero vector. The number M also corresponds to the largest singular value of
the matrix A. Let’s define the reciprocal number

m = min_{x≠0} ‖Ax‖₂ / ‖x‖₂,    (B.2)

which is also the smallest singular value of the matrix A. If A is not singular, then

m = min_{y≠0} ‖y‖₂ / ‖A⁻¹y‖₂ = 1/‖A⁻¹‖₂.    (B.3)

The condition number is defined as the ratio of the largest to the smallest singular value

κ₂(A) = M/m,    (B.4)


which corresponds to the ratio of the largest to the smallest amplification given by matrix A.
From (B.1) and (B.3) it follows that

κ₂(A) = ‖A‖ ‖A⁻¹‖.    (B.5)

B.2 Sensitivity of solutions to linear systems

We now consider x the solution to the linear system

Ax = f. (B.6)

The solution to (B.6) is perturbed by δx due to a perturbation of the right hand side δf, namely

A(x + δx) = f + δf. (B.7)

From (B.1) and (B.3), one gets

1/‖f‖ ≥ 1/(M‖x‖)   and   ‖δf‖ ≥ m‖δx‖,    (B.8)

and finally

‖δx‖/‖x‖ ≤ κ₂(A) ‖δf‖/‖f‖.    (B.9)
If we think of δf as an error due to the numerical approximation of the right-hand side
f, then (B.9) implies that the amplification of this error is bounded by κ₂(A): the higher the
condition number, the higher the error on the solution x. For f ∼ O(1) and ‖δf‖ ∼ 10⁻¹⁵, if the
linear system is ill-conditioned such that κ₂(A) ∼ 10¹², then the resulting relative error on the
solution might be as large as 10⁻³.
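The bound (B.9) can be illustrated numerically. The following Python/NumPy sketch (the concrete matrix and perturbation are illustrative choices, not from the notes) builds a 2x2 system with nearly parallel rows, perturbs the right-hand side, and checks that the relative error in the solution stays below κ₂(A) times the relative error in the data.

```python
import numpy as np

# A mildly ill-conditioned 2x2 system: the two rows are nearly parallel
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
kappa = np.linalg.cond(A)          # ratio of largest to smallest singular value

f = np.array([2.0, 2.0001])
x = np.linalg.solve(A, f)          # exact solution is x = (1, 1)

df = np.array([1e-8, 0.0])         # tiny perturbation of the right-hand side
dx = np.linalg.solve(A, f + df) - x

lhs = np.linalg.norm(dx) / np.linalg.norm(x)
rhs = kappa * np.linalg.norm(df) / np.linalg.norm(f)
print(kappa, lhs, rhs)             # the bound (B.9): lhs <= rhs
```

Even though the data perturbation is of order 10⁻⁸, the solution error is amplified by several orders of magnitude, exactly as κ₂(A) ≈ 4·10⁴ predicts.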

B.3 Convergence of iterative solvers

To illustrate the importance of the condition number for iterative solvers, let’s start from Algorithm 5.1 in Chapter 5. This algorithm is slightly modified by introducing a scalar parameter
α at the second step to obtain a relaxation algorithm.

Algorithm B.1 Relaxation

Guess initial value x⁽⁰⁾
while not converged do
    w⁽ᵐ⁾ = C⁻¹(b − Ax⁽ᵐ⁾)
    x⁽ᵐ⁺¹⁾ = x⁽ᵐ⁾ + αw⁽ᵐ⁾
end while

The idea is to choose α so as to obtain the highest possible convergence rate. Following (5.3), the
error e⁽ᵐ⁾ is propagated such that

e⁽ᵐ⁺¹⁾ = (I − αC⁻¹A) e⁽ᵐ⁾.    (B.10)

Therefore

‖e⁽ᵐ⁺¹⁾‖ ≤ ‖I − αC⁻¹A‖ ‖e⁽ᵐ⁾‖.    (B.11)

If A and C are symmetric¹, then²

‖e⁽ᵐ⁺¹⁾‖ ≤ |1 − αλmax| ‖e⁽ᵐ⁾‖,    (B.12)

where λmax is the largest eigenvalue of the preconditioned operator C⁻¹A. This sequence
converges if

0 < α < 2/λmax,    (B.13)

and converges faster the closer |1 − αλmax| is to 0. The best choice of α is

αopt = 2/(λmax + λmin).    (B.14)

In that case the speed of convergence is

|1 − αopt λmax| = (λmax − λmin)/(λmax + λmin) = (κ₂ − 1)/(κ₂ + 1).    (B.15)

The role of the condition number is clearly illustrated in the last equation: with an inefficient
preconditioning, κ₂(C⁻¹A) remains large and (κ₂ − 1)/(κ₂ + 1) is close to unity, but with an efficient
preconditioning κ₂(C⁻¹A) is close to unity and (κ₂ − 1)/(κ₂ + 1) is close to zero, increasing the convergence
rate. The number of iterations necessary to reach a given accuracy scales as κ₂.

¹ For symmetric operators the condition number is the ratio of the largest to the smallest eigenvalue. For non-symmetric operators the reasoning is similar, since the ratio of the largest to the smallest eigenvalue decreases with
the condition number.
2 see Sec. 5.2.1 in C. Canuto, M. Hussaini, A. Quarteroni, T. Zang, Spectral Methods in Fluid Dynamics, Springer-

Verlag, Berlin, 1988.


C Addendum to unsteady problems
Runge-Kutta methods (RK) are classically used as time-stepping schemes to approximate the
solution of differential equations of the kind

dy/dt = f(y, t),   with y(t₀) = y₀.    (C.1)

Given the solution at the n-th time iteration (yn, tn), the idea is to approximate the solution at
tn+1 = tn + h by evaluating the slope of the function, i.e. dy/dt = f(y, t), at different times
tni within the interval [tn, tn+1]. These slopes are then combined to build the final approximation (yn+1, tn+1).

C.1 Classical Runge-Kutta methods

Let’s begin with an example: the four-step explicit scheme (RK4). First the different slopes are
computed successively:

k1 = f(yn, tn),    (C.2a)
k2 = f(yn + h k1/2, tn + h/2),    (C.2b)
k3 = f(yn + h k2/2, tn + h/2),    (C.2c)
k4 = f(yn + h k3, tn + h),    (C.2d)


secondly a weighted average of these slopes is built to approximate the slope between tn and
t n +1

k = (1/6)(k1 + 2k2 + 2k3 + k4),    (C.2e)
6

and finally the solution is propagated in time from tn to tn+1

yn+1 = yn + hk. (C.2f)
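The RK4 steps (C.2a)-(C.2f) translate directly into code. Below is a minimal Python sketch (the course exercises use MATLAB); it integrates the standard test problem dy/dt = −y and compares against the exact solution exp(−t).

```python
import math

def rk4_step(f, y, t, h):
    # One step of the classical RK4 scheme, Eqs. (C.2a)-(C.2f)
    k1 = f(y, t)
    k2 = f(y + h * k1 / 2, t + h / 2)
    k3 = f(y + h * k2 / 2, t + h / 2)
    k4 = f(y + h * k3, t + h)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

# Test problem dy/dt = -y with y(0) = 1; exact solution y(t) = exp(-t)
f = lambda y, t: -y
h, y, t = 0.1, 1.0, 0.0
while t < 1.0 - 1e-12:
    y = rk4_step(f, y, t, h)
    t += h
print(abs(y - math.exp(-1.0)))   # global error well below 1e-6 for h = 0.1
```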

As illustrated in Fig. C.1, the slopes ki get closer and closer to the actual mean slope between
tn and tn+1 as i increases. A more general algorithm for the I-step method (RK-I) reads

k1 = f(yn, tn),    (C.3a)
ki = f(yn + h Σ_{j=1}^{i−1} βij kj, tni),   tni = tn + αi h   for 2 ≤ i ≤ I,    (C.3b)

and

k = Σ_{i=1}^{I} γi ki,   yn+1 = yn + hk.    (C.3c)

The RK scheme is consistent if and only if Σ_{i=1}^{I} γi = 1. General requirements on the αi’s, βij’s and
γi’s so that the RK-I scheme has a given order p are not straightforward. However, a necessary
condition is that I ≥ p. The coefficients βij then need to be chosen accordingly thanks to Taylor
series expansions. The RK4 scheme presented in example (C.2) is of order 4. Other ‘ready-to-use’
RK schemes are derived and presented in (Gear, 1971)¹ or (Press, 2007)².

C.1.1 Implicit Runge-Kutta methods

In the general algorithm (C.3) all slopes are determined from information coming from the
previous slopes; in other words, each slope ki, determined at (yn + h Σ_{j=1}^{i−1} βij kj, tni), depends only
on information coming from times tnj < tni, for 1 ≤ j ≤ i − 1. These schemes are explicit and share
the drawbacks (and advantages!) of forward Euler methods discussed in Chap. 4, e.g. stability
limitations when handling diffusion.

One way to remedy these shortcomings is to build an implicit RK scheme where each slope k i

1 Gear,
C. (1971). Numerical initial value problems in ordinary differential equations. Prentice-Hall.
2 Press,
William H.; Teukolsky, Saul A.; Vetterling, William T.; Flannery, Brian P. (2007). Numerical Recipes: The
Art of Scientific Computing (3rd ed.), Cambridge University Press, ISBN 978-0-521-88068-8

Figure C.1: Runge-Kutta slopes. The blue line is the exact solution, the red arrows are the
successive slopes evaluated at the successive locations (red dots) and the green star is the final
approximation (Credit: Hilber Traum CC BY-SA 4.0).

depends on information coming from both tnj ≤ tni and tnj > tni. The general algorithm now reads

k1 = f(yn, tn),    (C.4a)
ki = f(yn + h Σ_{j=1}^{I} βij kj, tni),   tni = tn + αi h   for 2 ≤ i ≤ I,    (C.4b)

and

k = Σ_{i=1}^{I} γi ki,   yn+1 = yn + hk.    (C.4c)

So the ki’s now depend on some kj’s such that j ≥ i.

C.2 Low-storage Runge-Kutta methods

As can be observed from algorithms (C.3) and (C.4), Runge-Kutta methods require, for each
variable y, storing as many pieces of information as there are steps in the scheme: for the RK4 scheme,
at each time step, 4 slopes are stored in addition to yn. In order to improve memory efficiency,
some low-storage schemes have been devised, see e.g. (Williamson, 1980)³:

k0 = 0,   yn⁰ = yn,    (C.5a)
ki = ai ki−1 + h f(yni−1, ti−1),    (C.5b)
yni = yni−1 + bi ki,    (C.5c)

for 1 ≤ i ≤ I, and yn+1 = ynI. In this scheme, the new slope and location overwrite the old values.
Therefore, for each variable y only 2 storage locations are required, whatever the number of
steps of the RK-I scheme. Note that the term f(yni−1, ti−1), which is here evaluated explicitly,
can also be treated implicitly (i.e. f(yni, ti)) or semi-implicitly with the Θ-method.
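The 2N-storage idea (C.5) can be sketched with the three-stage, third-order coefficient set from Williamson (1980), which is widely used in CFD codes. This is a hedged Python illustration (the course exercises use MATLAB); stage times are omitted since the test problem is autonomous.

```python
import math

# Williamson's three-stage, third-order 2N-storage coefficients (Williamson, 1980)
A = (0.0, -5.0 / 9.0, -153.0 / 128.0)
B = (1.0 / 3.0, 15.0 / 16.0, 8.0 / 15.0)

def ls_rk3_step(f, y, h):
    # Only two storage locations per variable: the state y and the slope k
    k = 0.0
    for a, b in zip(A, B):
        k = a * k + h * f(y)   # the new slope overwrites the old one (Eq. C.5b)
        y = y + b * k          # ... and the new state overwrites the old one (Eq. C.5c)
    return y

# Test problem dy/dt = -y with y(0) = 1 (autonomous, so no stage times needed)
f = lambda y: -y
h, y = 0.01, 1.0
for _ in range(100):
    y = ls_rk3_step(f, y, h)
print(abs(y - math.exp(-1.0)))   # third-order accurate
```

Whatever the number of stages, only `y` and `k` are kept between stages, which is the whole point of the low-storage formulation.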

3 Williamson, J. H. (1980). Low-storage Runge-Kutta schemes. J. Comput. Phys., 35(1):48–56.


D Convective problems
References
[1] C. H IRSCH Numerical Computation of Internal and External Flows - vol. 2 Computational Methods for
Inviscid and Viscous Flows. John Wiley & Sons, 1990.

Objectives
• Understanding the stability peculiarities of convective problems and derive a scheme accordingly,
• Coding in Matlab an algorithm to numerically solve a convection equation.


Contents
D.1 Convective Partial Differential Equation . . . . . . . . . . . . . . . . . . . . . . 110
D.1.1 Analytical solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
D.1.2 Boundary conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
D.2 Stable schemes for convection problems . . . . . . . . . . . . . . . . . . . . . . 111
D.2.1 Space-centered schemes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
D.2.2 Upwind scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
D.3 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114

D.1 Convective Partial Differential Equation

In Chapters 2 to 6, the focus was on the simplification of Eq. (1.16) for small Peclet number,
i.e. in the case of dominant heat diffusion. As a result, the motion of the fluid is neglected and the
resulting equation is a parabolic PDE. In the case of high Peclet number, heat diffusion is neglected
against advection of enthalpy by the fluid motion. The resulting equation is a hyperbolic PDE,

∂tT + v ∂xT = 0,    (D.1)

with v the advection velocity of the temperature in m/s.

D.1.1 Analytical solution

Equation (D.1) possesses a family of analytical solutions. Let T0(x) be an initial condition.
Provided suitable boundary conditions are enforced,

T(x, t) = T0(x − vt),    (D.2)

is a solution. Here we recover the solution of Eq. (1.30) from the exercises of Chap. 1. This
means that any temperature variation in the flow will be advected in a preferred direction.

D.1.2 Boundary conditions

To choose suitable boundary conditions, we use the argument that the norm of the solution
should not grow in time (i.e. its growth rate should be non-positive) in the absence of a source term and
of gradients in the flow velocity, and with homogeneous boundary conditions. Let’s define the

norm of the solution over the domain D = [0, L] by

‖T‖² = (1/2) ∫_D T² dx.    (D.3)

Then its growth rate is given by

d_t‖T‖² = ∫_D T ∂tT dx = −∫_D T v∂xT dx = −(1/2)∫_D T v∂xT dx − (1/2)∫_D T v∂xT dx.    (D.4)

The second integral of the last term is integrated by parts,

d_t‖T‖² = −(1/2)∫_D T v∂xT dx + (1/2)∫_D ∂xT vT dx − (1/2)[T vT]_∂D,    (D.5)
d_t‖T‖² = (1/2)(vT²(0, t) − vT²(L, t)).    (D.6)

If v is positive, the second term is always negative. In order to have a non-positive growth rate
of the norm, a homogeneous boundary condition is only needed at x = 0: T(0, t) = 0. For a
negative advection velocity v, the boundary condition would be at x = L.

This condition complies with the phenomenological understanding of the system: information
travels from abscissa 0 toward L, therefore some information needs to be prescribed at the
‘input’, i.e. x = 0.

D.2 Stable schemes for convection problems

In Chapter 4, we have demonstrated that the explicit centred scheme

(T_i^{n+1} − T_i^n)/∆t = −v (T_{i+1}^n − T_{i−1}^n)/(2∆x),    (D.7)

is unconditionally unstable for pure advection problems, but that the explicit upwind scheme

(T_i^{n+1} − T_i^n)/∆t = −v (T_i^n − T_{i−1}^n)/∆x,    (D.8)

is conditionally stable for v > 0, as long as the CFL condition is fulfilled: v∆t/∆x < 1. However,
in non-idealised situations, the advection velocity v is non-uniform and its sign can change.
Two ways of deriving stable schemes regardless of the sign of v are presented below.

D.2.1 Space-centered schemes

Equation (D.7) is reformulated as

T_i^{n+1} = T_i^n − (σ/2)(T_{i+1}^n − T_{i−1}^n),    (D.9)

with σ = v∆t/∆x. The term T_i^n on the r.h.s. will be substituted in two different ways to stabilize
the centred scheme.

The Lax-Friedrichs scheme

Substituting T_i^n → (T_{i+1}^n + T_{i−1}^n)/2, Eq. (D.9) becomes

T_i^{n+1} = (T_{i+1}^n + T_{i−1}^n)/2 − (σ/2)(T_{i+1}^n − T_{i−1}^n).    (D.10)

A von Neumann analysis (same as in Chap. 4) demonstrates the stability of the scheme Eq. (D.10)
for σ < 1. To understand the stabilization process, Eq. (D.10) is rewritten in the form

T_i^{n+1} = T_i^n − (v∆t/(2∆x))(T_{i+1}^n − T_{i−1}^n) + (1/2)(T_{i+1}^n − 2T_i^n + T_{i−1}^n),    (D.11)

i.e. it corresponds to (D.9) plus a diffusion term (last term) corresponding to the discretization
of

(∆x²/(2∆t)) ∂x²T.    (D.12)

The Lax-Friedrichs scheme can be seen as a centred scheme with some added numerical diffusion
proportional to ∆x²/(2∆t). This scheme is first-order accurate in time and second-order accurate in
space.

In two dimensions The Lax-Friedrichs scheme extends to 2D as

T_{i,j}^{n+1} = (1/4)(T_{i+1,j}^n + T_{i−1,j}^n + T_{i,j+1}^n + T_{i,j−1}^n) − (σx/2)(T_{i+1,j}^n − T_{i−1,j}^n) − (σy/2)(T_{i,j+1}^n − T_{i,j−1}^n).    (D.13)

The Lax-Wendroff scheme

A scheme that is second-order accurate both in space and time can also be derived; this is the
Lax-Wendroff scheme:

T_i^{n+1} = T_i^n − (v∆t/(2∆x))(T_{i+1}^n − T_{i−1}^n) + (σ²/2)(T_{i+1}^n − 2T_i^n + T_{i−1}^n).    (D.14)

It is also stable for σ ≤ 1. The reader is referred to Sec. 17.2.4 of [1] for the two-dimensional
version of the Lax-Wendroff scheme.
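The two space-centred schemes are easy to compare numerically. The sketch below (Python/NumPy as a stand-in for the course's MATLAB; the Gaussian initial condition and periodic boundaries are our choices) advects a pulse at Courant number σ = 0.5: both schemes remain stable, and the added numerical diffusion of Lax-Friedrichs is clearly visible in the damped peak.

```python
import numpy as np

def lax_friedrichs(T, sigma):
    Tp, Tm = np.roll(T, -1), np.roll(T, 1)   # periodic neighbours T_{i+1}, T_{i-1}
    return (Tp + Tm) / 2 - sigma / 2 * (Tp - Tm)                          # Eq. (D.10)

def lax_wendroff(T, sigma):
    Tp, Tm = np.roll(T, -1), np.roll(T, 1)
    return T - sigma / 2 * (Tp - Tm) + sigma**2 / 2 * (Tp - 2 * T + Tm)   # Eq. (D.14)

# Advect a Gaussian pulse on a periodic domain at Courant number sigma = 0.5
nx, v, sigma = 200, 1.0, 0.5
x = np.linspace(0.0, 1.0, nx, endpoint=False)
dx = x[1] - x[0]
dt = sigma * dx / v
T0 = np.exp(-200.0 * (x - 0.25) ** 2)
T_lf, T_lw = T0.copy(), T0.copy()

for _ in range(100):            # the pulse travels a distance v * 100 * dt = 0.25
    T_lf = lax_friedrichs(T_lf, sigma)
    T_lw = lax_wendroff(T_lw, sigma)

# Both runs stay bounded; Lax-Wendroff preserves the peak far better
print(T_lf.max(), T_lw.max(), x[np.argmax(T_lw)])
```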

D.2.2 Upwind scheme

A second option for deriving a stable scheme regardless of the direction of the advection velocity is
to split v inside

T_i^{n+1} = T_i^n − (v∆t/∆x)(T_i^n − T_{i−1}^n),    (D.15)

into its positive and negative parts,

v⁺ = (v + |v|)/2   and   v⁻ = (v − |v|)/2.    (D.16)

The resulting scheme is the superposition of upwind and downwind schemes for v⁺ and v⁻,
respectively:

T_i^{n+1} = T_i^n − (∆t/∆x) [ v⁺ (T_i^n − T_{i−1}^n) + v⁻ (T_{i+1}^n − T_i^n) ].    (D.17)

This scheme can equivalently be written

T_i^{n+1} = T_i^n − (σ/2)(T_{i+1}^n − T_{i−1}^n) + (|v|∆t/(2∆x))(T_{i+1}^n − 2T_i^n + T_{i−1}^n),    (D.18)

i.e. a centred scheme plus an added diffusion (last term) proportional to

|v|∆x/2,    (D.19)

that vanishes when v tends to 0.
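The sign-splitting of Eq. (D.17) can be sketched directly for a spatially varying velocity. In this hedged Python/NumPy illustration (the sinusoidal velocity field and periodic boundaries are our choices), the scheme stays stable even though v changes sign across the domain: under the CFL condition it is a convex combination of neighbouring values, so no new extrema appear.

```python
import numpy as np

def upwind_step(T, v, dt, dx):
    """Sign-split first-order upwind scheme of Eq. (D.17); v may vary in space."""
    vp = np.maximum(v, 0.0)                 # v+ = (v + |v|)/2
    vm = np.minimum(v, 0.0)                 # v- = (v - |v|)/2
    Tp, Tm = np.roll(T, -1), np.roll(T, 1)  # periodic neighbours
    return T - dt / dx * (vp * (T - Tm) + vm * (Tp - T))

nx = 200
x = np.linspace(0.0, 1.0, nx, endpoint=False)
dx = x[1] - x[0]
v = np.sin(2.0 * np.pi * x)                 # advection velocity that changes sign
dt = 0.5 * dx / np.abs(v).max()             # CFL condition based on max |v|
T = np.exp(-200.0 * (x - 0.25) ** 2)

for _ in range(200):
    T = upwind_step(T, v, dt, dx)

print(T.min(), T.max())   # no new extrema: the scheme is stable for either sign of v
```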

D.3 Exercises
Index

Amplification Factor, 57
Auxiliary Problem, 83
Characteristic Line, 59
condition number, 74
Conditionally Stable, 59, 65
Conjugate Heat Transfer, 37
Convection-Diffusion Reaction, 14
convective heat transfer coefficient, 37
Courant Number, 58

Diffusion Coefficient, 12
Diffusion Number, 58
Dirichlet Boundary Condition, 18
Domain of Dependence, 59

Fick’s Law, 12
First Order, 29
First Order Accurate, 29
Fourier’s law, 12

Laplace Equation, 16

Neumann Boundary Conditions, 18

Partial Differential Equations, 10
  Elliptic, 16
  Hyperbolic, 15
  Parabolic, 14
Poisson Equation, 16
preconditioning, 74

Robin Boundary Condition, 18

sparse matrix, 72

Thermal Diffusivity, 14
Trapz Function, 92

Unconditionally Stable, 63
Unconditionally Unstable, 56

Wave Number, 57
