Laplace PDE
Introduction
The Laplace equation is a second-order partial differential equation (PDE) that appears in many
areas of science and engineering, such as electricity, fluid flow, and steady heat conduction.
Solving this equation in a domain requires the specification of certain conditions that the
unknown function must satisfy at the boundary of the domain. When the function itself is
specified on a part of the boundary, we call that part the Dirichlet boundary; when the normal
derivative of the function is specified on a part of the boundary, we call that part the Neumann
boundary. In a problem, the entire boundary can be Dirichlet, or a part of the boundary can be
Dirichlet and the rest Neumann. A problem with the Neumann condition specified on the entire
boundary does not have a unique solution. In some problems, a linear combination of the
function and its normal derivative is specified; such a part is called a Robin boundary. We
will not deal with the Robin problem, but it is fairly straightforward to extend the method
described here to such problems. A typical Laplace problem is shown schematically in
Figure-1. In domain D,
\[
\nabla^{2}\phi \;=\; \frac{\partial^{2}\phi}{\partial x^{2}} + \frac{\partial^{2}\phi}{\partial y^{2}} \;=\; 0
\]
and on the boundary
\[
\phi = f \ \text{on } S_D \qquad \text{and} \qquad \frac{\partial \phi}{\partial n} = g \ \text{on } S_N
\]
where n is the normal to the boundary, S_D is the Dirichlet boundary, and S_N is the Neumann
boundary.
In this paper, the finite-difference method (FDM) for the solution of the Laplace equation
is discussed. In this method, the PDE is converted into a set of linear, simultaneous equations.
When the simultaneous equations are written in matrix notation, the majority of the elements
of the matrix are zero. Such matrices are called "sparse matrices". However, for any meaningful
problem, the number of simultaneous equations becomes very large, say of the order of a few
thousand. There are special-purpose routines that deal with very large, sparse matrices.
Furthermore, one needs skillful ways of storing such large matrices; otherwise, several
gigabytes of memory will be used up just for the storage.
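To make the storage issue concrete, the short sketch below estimates the memory needed for the coefficient matrix of a moderately fine grid. The grid size is an assumption chosen only for illustration, and the five nonzeros per row correspond to the five-point averaging formula used in Eqn.(11) below.

```python
# Rough storage estimate for the linear system arising from a 2-D Laplace
# problem.  The 256 x 256 interior grid is an assumed size for illustration.
n_unknowns = 256 * 256                 # one equation per interior point
dense_entries = n_unknowns ** 2        # entries of the full coefficient matrix
nonzero_entries = 5 * n_unknowns       # at most 5 nonzeros per row (5-point formula)

bytes_per_value = 8                    # double precision
print(f"dense storage : {dense_entries * bytes_per_value / 1e9:8.1f} GB")
print(f"sparse storage: {nonzero_entries * bytes_per_value / 1e6:8.1f} MB")
```

Even at this modest resolution the full matrix would occupy tens of gigabytes, while the nonzero entries fit in a few megabytes.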
An alternative way of solving a very large system of simultaneous equations is iteration. The
advantage of an iterative solution is that storing a large matrix is unnecessary. In this paper,
we will demonstrate the use of one such iterative technique. We will also explore one way of
accelerating the convergence of the iterative method.
Figure-1: Laplace problem.
Finite Difference (FD)
Consider three points on the x-axis separated by a distance h, as shown in Figure-2a. For
convenience, we have labeled these points as i − 1, i, and i + 1. Let the value of a function
φ(x, y) at these three points be φ_{i−1}, φ_i, and φ_{i+1}. Now we can write two Taylor expansions, for
φ_{i−1} and φ_{i+1}, as follows.
\[
\phi_{i-1} = \phi_i - \frac{\partial \phi}{\partial x}\Big|_i h + \frac{\partial^2 \phi}{\partial x^2}\Big|_i \frac{h^2}{2!} - \frac{\partial^3 \phi}{\partial x^3}\Big|_i \frac{h^3}{3!} + \frac{\partial^4 \phi}{\partial x^4}\Big|_i \frac{h^4}{4!} + O(h^5) \tag{1}
\]
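As a quick numerical check of the truncation behaviour implied by Eqn.(1), the sketch below evaluates the central-difference approximation of the second derivative, which is the standard result of adding the two Taylor expansions. The test function sin(x) and the sample point are assumptions made only for this illustration.

```python
import numpy as np

# Central-difference approximation of the second derivative, obtained by
# adding the Taylor expansions of phi_{i-1} and phi_{i+1}:
#     d2phi/dx2  ~  (phi_{i-1} - 2*phi_i + phi_{i+1}) / h**2  +  O(h**2)
phi = np.sin                   # assumed test function
x0 = 1.0                       # assumed sample point
d2_exact = -np.sin(x0)         # exact second derivative of sin at x0

for h in (0.1, 0.05, 0.025):
    d2_fd = (phi(x0 - h) - 2.0 * phi(x0) + phi(x0 + h)) / h**2
    print(f"h = {h:5.3f}   error = {abs(d2_fd - d2_exact):.3e}")
# Halving h reduces the error by roughly a factor of four, consistent with O(h^2).
```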
Dirichlet Problem
Consider the simple problem of Figure-4, posed in a box with only four interior points. φ is
given on the East, West, North, and South walls. Thus, φ(1, 2), φ(1, 3), φ(2, 4), φ(3, 4),
φ(4, 3), φ(4, 2), φ(3, 1), and φ(2, 1) are known. We have to calculate the values of φ(2, 2),
φ(3, 2), φ(2, 3), and φ(3, 3). We begin the iterative process by assuming
\[
\phi^{0}(2,2) = \phi^{0}(3,2) = \phi^{0}(2,3) = \phi^{0}(3,3) = 0 \tag{10}
\]
Figure-4: Dirichlet problem on a 4 by 4 box.
The superscript 0 is an iteration counter. Starting at bottom left, we write
\[
\begin{aligned}
\phi^{1}(2,2) &= \tfrac{1}{4}\left[\phi(1,2) + \phi^{0}(3,2) + \phi(2,1) + \phi^{0}(2,3)\right] \\
\phi^{1}(3,2) &= \tfrac{1}{4}\left[\phi^{1}(2,2) + \phi(4,2) + \phi(3,1) + \phi^{0}(3,3)\right] \\
\phi^{1}(2,3) &= \tfrac{1}{4}\left[\phi(1,3) + \phi^{0}(3,3) + \phi^{1}(2,2) + \phi(2,4)\right] \\
\phi^{1}(3,3) &= \tfrac{1}{4}\left[\phi^{1}(2,3) + \phi(4,3) + \phi^{1}(3,2) + \phi(3,4)\right]
\end{aligned} \tag{11}
\]
In the first line of Eqn.(11), we compute the first iterated value φ¹(2, 2). Note that, as soon
as this first iterate becomes available, it is used in the calculation of the first iterate
φ¹(3, 2) in the second line of Eqn.(11). This is very easily accomplished by storing φ⁰(2, 2)
and φ¹(2, 2) in the same memory location, thereby overwriting φ⁰(2, 2) with φ¹(2, 2).
Therefore, for any location (i, j), φ^(iteration+1)(i, j) is overwritten on φ^(iteration)(i, j).
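The sketch below carries out the single sweep of Eqn.(11) on the four interior points, storing each new iterate in the same array. The boundary values are hypothetical placeholders, chosen only to show that the later lines of the sweep pick up the freshly computed neighbours.

```python
import numpy as np

# One Gauss-Seidel sweep of Eqn.(11) on the 4 x 4 box of Figure-4.
# Indices are 0-based, so phi[i, j] corresponds to phi(i+1, j+1) in the text.
phi = np.zeros((4, 4))
phi[0, :] = 100.0                      # hypothetical boundary value on the i = 1 wall
# the other three walls stay at 0 (also hypothetical)

# Sweep in the order of Eqn.(11): (2,2), (3,2), (2,3), (3,3).
for i, j in [(1, 1), (2, 1), (1, 2), (2, 2)]:
    # Overwriting phi[i, j] in place means the later lines of Eqn.(11)
    # automatically use the iterates computed earlier in this same sweep.
    phi[i, j] = 0.25 * (phi[i - 1, j] + phi[i + 1, j]
                        + phi[i, j - 1] + phi[i, j + 1])

print(phi[1:3, 1:3])   # first iterates at the four interior points
```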
However, there is one little calculation that you need to do before you overwrite. First,
store φ^(iteration+1)(i, j) in temp. Then compute the relative error at (i, j) as
\[
\varepsilon(i,j) = \frac{\text{temp} - \phi^{\text{iteration}}(i,j)}{\text{temp}} \tag{12}
\]
Then overwrite φ^(iteration)(i, j) by temp. In this manner, we will keep track of the relative errors
at all the (i, j) locations. Then, at the end of the iteration loop (iteration + 1), we determine a
representative relative error as
\[
\varepsilon_{\max} = \max_{(i,j)} \, \varepsilon(i,j) \tag{13}
\]
At this point, you need to think about something critical. Do you really need to store all the
ε(i, j) in order to calculate ε_max? Or can you keep updating the value of ε_max as you travel from
point to point within an iteration loop? For a large problem, storing all the ε(i, j) can become a
very bad idea from the point of view of memory usage.
When ε_max becomes available, compare it with the user-specified tolerance 10^(−n) for n-digit
accuracy. If ε_max is larger than the specified tolerance, you need to go through another
iteration loop. Otherwise, the iteration is stopped and the "converged" solution for φ is
output.
It is a good idea to keep track of the iteration count, because it is a measure of the cost.
Secondly, you must specify a "maximum number of iterations allowed"; otherwise, the program may get
caught in a never-ending iteration loop due to some error in the specification of the data.
Now, let us consider the large problem of Figure-5 and fix our ideas regarding various
do-loops that we will use in the program. The grid for this problem is defined by 1 ≤ i ≤ N and
1 ≤ j ≤ M. The east, west, north, and south walls are Dirichlet boundaries. The boundary
conditions φ(1, j), j = 2, …, M − 1; φ(N, j), j = 2, …, M − 1; φ(i, 1), i = 2, …, N − 1; and
φ(i, M), i = 2, …, N − 1 are inserted as data.
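The loop structure just described can be sketched as follows. The function name, the argument defaults, and the way the boundary data arrive already stored in the array are my own assumptions; the sketch only illustrates the Gauss-Seidel sweep, the in-place overwrite, the running ε_max of Eqn.(13), the tolerance test, and the maximum-iteration safeguard.

```python
import numpy as np

def gauss_seidel(phi, tol=1.0e-2, max_iter=1000):
    """Gauss-Seidel iteration for a Dirichlet problem on an N x M grid.

    `phi` arrives with its boundary rows/columns already filled with the
    Dirichlet data and its interior set to the initial guess (Eqn.(10)).
    Names and defaults are illustrative, not taken from the original program.
    """
    N, M = phi.shape
    for iteration in range(1, max_iter + 1):
        eps_max = 0.0                                      # running maximum, Eqn.(13)
        for i in range(1, N - 1):                          # interior points only
            for j in range(1, M - 1):
                temp = 0.25 * (phi[i - 1, j] + phi[i + 1, j]
                               + phi[i, j - 1] + phi[i, j + 1])
                if temp != 0.0:
                    eps = abs((temp - phi[i, j]) / temp)   # relative error, Eqn.(12)
                    eps_max = max(eps_max, eps)            # no need to store every eps(i, j)
                phi[i, j] = temp                           # overwrite in place
        if eps_max < tol:                                  # converged to the requested tolerance
            return phi, iteration
    raise RuntimeError("maximum number of iterations reached without convergence")
```

Calling it with tol = 10**(-n) corresponds to the n-digit tolerance discussed above, and the returned iteration count is the cost measure worth reporting.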
Exercise-2
Solve the Neumann problem of Exercise-1 by taking p(x) = 20, q(y) = 200, f(x) = 0, and
g(y) = 4 in a 1 × 1 box with (i) N = 5 and M = 5, (ii) N = 17 and M = 17, and (iii) N = 65
and M = 65.
Sample Calculation for Exercise-2
On a 5 × 5 grid, the iteration converges after twenty-one cycles when the tolerance is set to
0.01. The values of φ at the center of the domain, i.e. at location (3, 3), at the end of each
iteration are given below for debugging purposes: 20.63, 37.81, 50.44, 60.21, 68.11, 74.68,
80.23, 84.96, 89.02, 92.52, 95.54, 98.14, 100.4, 102.3, 104.0, 105.5, 106.7, 107.8, 108.8,
109.6, 110.3. (These numbers were obtained with a Compaq compiler.)
The values of φ on the 5 × 5 grid at the end of the first cycle of iteration are given below.
100.0 57.97 25.31 9.980 7.848
200.0 65.94 21.64 7.305 5.715
200.0 63.75 20.63 7.578 6.250
200.0 55.00 18.75 9.688 7.844
110.0 20.00 20.00 20.00 10.00
Accelerating the Convergence
Here we describe a method for accelerating the convergence of an iterative scheme. This
method is known as "successive over/under relaxation" (SOR). We apply it to the Gauss-Seidel
iterative scheme described above; the optimal relaxation factor ω_opt for this scheme is
\[
\omega_{\text{opt}} = \frac{4}{2 + \sqrt{4 - \left[\cos\dfrac{\pi}{N-1} + \cos\dfrac{\pi}{M-1}\right]^{2}}} \tag{24}
\]
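A minimal sketch of Eqn.(24) and of an over-relaxed sweep is given below. The update written here is the standard SOR blend of the old value with the Gauss-Seidel value, which is assumed to be the form intended by the paper, and the function names are mine.

```python
import math

def omega_opt(N, M):
    """Optimal relaxation factor of Eqn.(24)."""
    c = math.cos(math.pi / (N - 1)) + math.cos(math.pi / (M - 1))
    return 4.0 / (2.0 + math.sqrt(4.0 - c * c))

print(round(omega_opt(5, 5), 4))        # 1.1716, the value quoted in the Example below

def sor_sweep(phi, omega):
    """One over-relaxed sweep: blend the old value with the Gauss-Seidel value."""
    N, M = phi.shape
    for i in range(1, N - 1):
        for j in range(1, M - 1):
            gs = 0.25 * (phi[i - 1, j] + phi[i + 1, j]
                         + phi[i, j - 1] + phi[i, j + 1])
            phi[i, j] += omega * (gs - phi[i, j])   # omega = 1 recovers Gauss-Seidel
```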
Exercise-3
Through numerical experiments, compute the optimal ω for the Neumann problem of
Exercise-2, Case (iii).
Example
East wall: φ = 50
South wall: φ = 0
West wall: φ = 75
When the problem is solved on a 5 × 5 grid using the Gauss-Seidel iterative scheme, nine
iterations are necessary to reach a solution with 99% accuracy. The progress of the iterative
scheme is shown in the table below.
In an optimally over-relaxed iterative scheme with ω = 1.1716, calculated from Eqn.(24), the
99% accurate solution is obtained after six iterations. The progress of the over-relaxed scheme
is shown below.