Fourier Series and PDEs

The document consists of lecture notes on Partial Differential Equations (PDEs) from the University of Zimbabwe, focusing on their significance in various scientific fields such as physics and engineering. It defines key concepts, terminology, and classifications related to PDEs, including linearity, order, and homogeneity, while providing examples and discussing the geometrical structure of solutions. The course emphasizes the study of second-order linear PDEs and their applications in modeling real-world phenomena.


University of Zimbabwe

Department of Mathematics

HMTH232: Fourier Analysis and Partial Differential Equations

Lecture Notes

Dr Gift Muchatibaya, giftmuchatibaya@gmail.com, Office M219

Mr T. M. Mazikana 0782121927 tawandamazikana@gmail.com


Chapter 1

Partial Differential
Equations

1.0.1 Why study pdes?


Partial Differential Equations (PDEs) are often used to construct models of the
most basic theories underlying physics and engineering. Fields of science that are
highly dependent on the study of partial differential equations are, for example:
acoustics, aerodynamics, elasticity, electrodynamics, fluid dynamics, geophysics
(seismic wave propagation), heat transfer, meteorology, oceanography, optics,
petroleum engineering, plasma physics (ionized liquids and gases), quantum
mechanics and finance. For example,

1. ut + cux = 0. Transport equation

2. utt − c2 uxx = 0. Wave Equation (vibrating string)

3. ut = kuxx . Parabolic equation (heat and diffusion)

4. uxx + uyy = 0. Elliptic equation (stationary wave and diffusion)

5. −iut = ∆u. Schroedinger equation (Hydrogen atom)

In particular, in the field of fluid dynamics, the famous Navier–Stokes equation

∂u/∂t + (u · ∇)u − ν∇²u = −∇p + g (1.0.1)

is at the centre of most investigations, and dynamics on the financial markets are modelled by the so-called Black–Scholes equation, given by

∂V/∂t + ½σ²S² ∂²V/∂S² + rS ∂V/∂S − rV = 0 (1.0.2)

Let u be a function of several variables, say, (x1 , x2 , · · · , xn , t) ∈ Rn+1 ,


such that u : Ω ⊂ Rn+1 → R. The variables x1 , x2 , · · · , xn , t ∈ R, are called


the independent variables and u is called the dependent variable. The partial
derivatives of u with respect to the variables x1 , x2 , · · · , xn , t are given by
ux1(x1, x2, · · · , xn, t) = lim_{h→0} [u(x1 + h, x2, · · · , xn, t) − u(x1, x2, · · · , xn, t)]/h = ∂u/∂x1

ux2(x1, x2, · · · , xn, t) = lim_{h→0} [u(x1, x2 + h, · · · , xn, t) − u(x1, x2, · · · , xn, t)]/h = ∂u/∂x2

⋮

ut(x1, x2, · · · , xn, t) = lim_{h→0} [u(x1, x2, · · · , xn, t + h) − u(x1, x2, · · · , xn, t)]/h = ∂u/∂t

⋮

ux1x1(x1, x2, · · · , xn, t) = lim_{h→0} [ux1(x1 + h, x2, · · · , xn, t) − ux1(x1, x2, · · · , xn, t)]/h = ∂²u/∂x1²

ux1x2(x1, x2, · · · , xn, t) = lim_{h→0} [ux1(x1, x2 + h, · · · , xn, t) − ux1(x1, x2, · · · , xn, t)]/h = ∂²u/∂x1∂x2

Higher order derivatives are defined in a similar way.
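The limit quotients above can be approximated numerically by taking a small but finite h. The following short Python sketch (an illustration, not part of the original notes; the sample function u(x, t) = sin(x)e^{−t} is our own choice) does exactly this:

```python
import math

def u(x, t):
    # sample smooth function u(x, t) = sin(x) e^{-t}, chosen for illustration
    return math.sin(x) * math.exp(-t)

def partial_x(f, x, t, h=1e-6):
    # difference quotient from the definition of u_x, with finite h
    return (f(x + h, t) - f(x, t)) / h

def partial_t(f, x, t, h=1e-6):
    # difference quotient from the definition of u_t
    return (f(x, t + h) - f(x, t)) / h

# exact derivatives: u_x = cos(x) e^{-t}, u_t = -sin(x) e^{-t}
x0, t0 = 0.7, 0.3
print(abs(partial_x(u, x0, t0) - math.cos(x0) * math.exp(-t0)))  # small (order h)
print(abs(partial_t(u, x0, t0) + math.sin(x0) * math.exp(-t0)))  # small (order h)
```

Shrinking h drives both errors toward zero, mirroring the limits in the definitions.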


The variables x1 , x2 , · · · , xn ∈ R are called the space variables and normally
t ∈ R+ is the time variable. The number of space variables is used to denote
the dimension of the space and the problem to be solved.
In most cases we will work with
• u = u(x, t) a function of two variables, the related problem to be solved
is referred to as a one dimensional problem. The partial derivatives are
denoted by ux , ut , uxx , uxt , utt , etc
• u = u(x, y, t) a function of three variables, the related problem to be solved
is referred to as a two dimensional problem. The partial derivatives are
denoted by ux , uy , ut , uxx , uyy , utt , uxy , uxt , uyt , etc
• u = u(x, y, z, t) a function of four variables, the related problem to be
solved is referred to as a three dimensional problem. The partial deriva-
tives are defined as above.

1.1 Terminology
We now define the terms that we will work with, focusing mostly on u = u(x, y, t).
Definition 1 Let u = u(x, y, t); then a partial differential equation (pde) for
u : Ω ⊂ R2+1 → R is a relation

F (x, y, t, u, ux , uy , ut , uxx , uyy , utt , · · · ) = 0 (1.1.1)

where F is a given function of the variables x, y, t, the unknown u and a finite


number of its partial derivatives.

Example 1 • ut + cux = 0

• utt − c2 uxx = 0

• uxx + uyy = 0

Thus, a pde is an equation involving one or more partial derivatives of an unknown function of several variables.

Definition 2 The order of the pde is the order of the highest derivative ap-
pearing in the equation.

Example 2 • ut + cux = 0, first order pde

• utt − c2 uxx = 0, second order pde

• uxx + uyy + ux + uy = 0 second order pde

• uxxx + uyy + ux + uy = 0 third order pde

Definition 3 A pde is linear if it is linear in the unknown function u and all
its derivatives, with the coefficients depending only on the independent variables.
If a pde is not linear, it is said to be nonlinear.

For example, xyuxx + 2uxy + 3x²uyy = 4eˣ is a linear pde, whereas uxuxx + (uy)² = 0
is a nonlinear pde. The most general form of a second-order linear pde of a function
of two variables u(x, y) is given by

a(x, y)uxx + 2b(x, y)uxy + c(x, y)uyy + d(x, y)ux + e(x, y)uy + h(x, y)u = f (x, y)
(1.1.2)
For first-order pdes, we have

a(x, y)ux + b(x, y)uy + c(x, y)u = f (x, y) (1.1.3)

Definition 5 A pde is said to be quasilinear if it is linear in the highest derivatives.

In general, a first order quasilinear equation is given by

a(x, y, u)ux + b(x, y, u)uy = f (x, y, u), (1.1.4)



whereas a second order quasilinear equation is given by

a(x, y, u, ux , uy )uxx + 2b(x, y, u, ux , uy )uxy + c(x, y, u, ux , uy )uyy = f (x, y, u, ux , uy )
(1.1.5)
The equation ux uxx + (uy )² = 0 is quasilinear. All linear equations are quasi-
linear. The equations uxx + (uyy )² = 0 and (ux)² + uy = x² are both nonlinear and
non-quasilinear!
Remark 1 The focus of this course will be on the study of second-order linear
partial differential equations.

Definition 6 A pde is called homogeneous if the equation does not contain a
term independent of the unknown function and its derivatives. For example, in
(1.1.2), if f (x, y) ≡ 0, the equation is homogeneous. Otherwise, the pde is called
inhomogeneous.

The following equations are all homogeneous:


• ut + cux = 0,
• utt − c2 uxx = 0,
• uxx + uyy = 0,
• ux + 2xyuy + u = 0
The following equations are nonhomogeneous
• ut + cux = 1,
• utt − c2 uxx = sin(x + t),
• uxx + uyy = tanh(x + y) + 1,
• ux + 2xyuy + u = x2 + y 2
The larger part of this course will be devoted to the study of equations of this type.
A very important property of linear pdes can be explained by expressing a
pde as an operator L acting on u;

Lu = f (x, y) (1.1.6)

for the equation 1.1.2. In this case a pde is linear if

L(c1 u1 + c2 u2 ) = c1 Lu1 + c2 Lu2 , c1 , c2 ∈ R (1.1.7)

Most importantly, if u1 , u2 , · · · , un are solutions of the homogeneous equation
Lu = 0, then, using the above definition, one can trivially show that u = Σⁿᵢ₌₁ ci ui
is also a solution of the equation Lu = 0. This is the superposition
principle for linear homogeneous pdes; we will need this property when
solving linear equations using the method of separation of variables and the
application of Fourier Series.
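The superposition principle can be checked symbolically. The following sketch (an illustration using the sympy library, not part of the original notes; the two harmonic functions are our own choice) verifies it for the Laplace operator L = ∂²/∂x² + ∂²/∂y²:

```python
import sympy as sp

x, y = sp.symbols('x y')
c1, c2 = sp.symbols('c1 c2')

# two solutions of the homogeneous equation u_xx + u_yy = 0
u1 = x**2 - y**2
u2 = sp.exp(x) * sp.sin(y)

# the linear operator L acting on a function v
L = lambda v: sp.diff(v, x, 2) + sp.diff(v, y, 2)

assert sp.simplify(L(u1)) == 0
assert sp.simplify(L(u2)) == 0
# superposition: every linear combination also solves Lu = 0
assert sp.simplify(L(c1*u1 + c2*u2)) == 0
```

The same check works for any linear operator built from partial derivatives with coefficient functions.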
Definition 7 A classical solution of the k th -order PDE (1.1.1) is a k−times
continuously differentiable function u(x, y, t) satisfying (1.1.1).

That is, by a solution of a pde, we mean either

• an explicit function u = f (x, y, t), which, when substituted into the pde,
reduces it to an identity;

• a relation φ(u, x, y, t) = 0 which determines u implicitly as a function of


the independent variables and satisfies the pde.

Using the above definition, one can easily verify that each given function is a
solution of the corresponding pde.

• 4ux + 3uy + u = 0, u(x, y) = e^{−x/4} f (3x − 4y), where f is an arbitrary differentiable function.

• uxy = 0, u(x, y) = f (x) + g(y), where f, g are arbitrary differentiable functions.

• uxx + uyy + uzz = 0, u(x, y, z) = 1/√(x² + y² + z²), ∀(x, y, z) ≠ (0, 0, 0).

• utt − c²uxx = 0, u(x, t) = f (x + ct) + g(x − ct), where f, g are arbitrary twice differentiable functions.

• Show that u(x, t) = ½[f (x + ct) + f (x − ct)] + (1/2c) ∫_{x−ct}^{x+ct} g(s) ds is a solution of
utt = c²uxx , subject to the conditions u(x, 0) = f (x) and ut (x, 0) = g(x),
where f is a twice differentiable function and g is a differentiable function.
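Such verifications can be automated. The following sketch (an illustration with the sympy library, not part of the original notes) checks that u = f(x + ct) + g(x − ct) satisfies the wave equation for completely arbitrary twice differentiable f and g:

```python
import sympy as sp

x, t, c = sp.symbols('x t c')
f, g = sp.Function('f'), sp.Function('g')

# candidate solution with arbitrary functions f and g
u = f(x + c*t) + g(x - c*t)

# substitute into u_tt - c^2 u_xx; the residual must vanish identically
residual = sp.diff(u, t, 2) - c**2 * sp.diff(u, x, 2)
assert sp.simplify(residual) == 0
```

Because f and g are left as undefined symbolic functions, the check covers the whole family of solutions at once.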

1.1.1 Some simple and solvable pdes


Some pdes can be solved easily by direct integration or by treating them as
"odes"; here are some examples.

• uxy = 0. Integrating w.r.t x, we obtain uy = f (y); integrating again
w.r.t y, we obtain u(x, y) = ∫ f (y)dy + G(x) = F (y) + G(x), where
G and F are arbitrary functions. One can easily verify that the solution
indeed satisfies the given pde.

• uxy + uy = 0. Integrating w.r.t y yields ux + u = f (x); since no partial
derivatives with respect to y remain in this equation, we can treat
ux + u = f (x) as an ode of the form u′ + u = f (x), which is linear and
can be solved easily to obtain

u(x, y) = e−x (F (x) + g(y))

• Solve (i) uxy = 2xey (ii) uxy = sin x cos y, subject to the conditions
ux (x, π/2) = 2x, u(π, y) = 2 sin y.
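For exercise (i), integrating uxy = 2xeʸ once in each variable gives u = x²eʸ plus arbitrary functions of each variable separately. The following sketch (an illustration with the sympy library, not part of the original notes) confirms this candidate general solution:

```python
import sympy as sp

x, y = sp.symbols('x y')
F, G = sp.Function('F'), sp.Function('G')

# candidate general solution of u_xy = 2x e^y, obtained by direct integration:
# integrate w.r.t. y, then w.r.t. x, collecting the arbitrary functions
u = x**2 * sp.exp(y) + F(x) + G(y)

# the mixed derivative must reproduce the right-hand side exactly
assert sp.simplify(sp.diff(u, x, y) - 2*x*sp.exp(y)) == 0
```

The terms F(x) and G(y) are annihilated by the mixed derivative, which is why they appear as the arbitrary functions of the general solution.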

Remark 2 Notice that where the solution of an ode contains arbitrary con-
stants, the solution of a pde contains arbitrary functions. In the same spirit,
while an ode of order m has m linearly independent solutions, a pde has
infinitely many (there are arbitrary functions in the solution!). These are con-
sequences of the fact that a function of two variables contains immensely more
information (a whole dimension's worth) than a function of only one variable.

1.1.2 Geometrical Structure of Solutions of Pdes


Recall that, for odes, solutions are curves in space. The general solution con-
sists of a family of curves containing arbitrary constants; applying ini-
tial/boundary conditions enables one to determine a unique solution of the
ode by choosing, out of infinitely many solutions, a curve that passes through
some given point. For pdes, the general solution contains arbitrary functions,
and solutions are surfaces in space, not curves. In general, an nth-order pde has
n arbitrary functions in its general solution.

1.1.3 Classification
We have so far classified pdes as linear, nonlinear, homogeneous, nonhomo-
geneous, etc. Now we look at another very important and fundamental form of
classification of pdes, one that we will often refer to when discussing the method
of solution. We do this for second order linear pdes. Recall that the second
order linear pde is given by

a(x, y)uxx + 2b(x, y)uxy + c(x, y)uyy + d(x, y)ux + e(x, y)uy + f (x, y)u = g(x, y)
(1.1.8)
The part a(x, y)uxx + 2b(x, y)uxy + c(x, y)uyy that contains the second-order
derivatives is called the principal part of the pde.
Definition 8 The function ∆(x, y) = b2 − ac, is called the discriminant of the
pde.
Definition 9 The pde (1.1.8) defined in Ω ⊂ R² is said to be
1. Hyperbolic in Ω if ∆(x, y) = b2 − ac > 0, ∀(x, y) ∈ Ω
2. Parabolic in Ω if ∆(x, y) = b2 − ac = 0, ∀(x, y) ∈ Ω
3. Elliptic in Ω if ∆(x, y) = b2 − ac < 0, ∀(x, y) ∈ Ω
Remark 3 Note that the classification is solely determined by the principal
part. Since ∆ is a function of the independent variables, the type might vary as
the x, y vary in Ω.
Example 3 1. The pde 3uxx + 2uxy + 5uyy + xuy = 0 is elliptic, since
∆ = b² − ac = 1² − (3)(5) = −14 < 0.
2. The Tricomi equation for transonic flow, uxx + yuyy = 0, has ∆ = 0² −
(1)(y) = −y. Thus, the equation is elliptic if y > 0, parabolic if y = 0 and
hyperbolic if y < 0.
3. The pde yuxx − 2uxy − xuyy − ux + cos(y)uy − 4 = 0 has ∆ = b² − ac = 1 + xy.
The pde is hyperbolic for all (x, y) ∈ Ω such that xy > −1, elliptic when
(x, y) ∈ Ω such that xy < −1, and parabolic when (x, y) ∈ Ω such that
xy = −1.
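The discriminant test is mechanical, so it is easy to encode. The following sketch (our own illustration; the function name classify is not from the notes) evaluates the type at a point from the coefficients a, b, c of the principal part, where b is the coefficient in the term 2b uxy:

```python
def classify(a, b, c):
    """Classify a u_xx + 2b u_xy + c u_yy + (lower order) = g at a point
    via the discriminant Delta = b^2 - ac."""
    d = b*b - a*c
    if d > 0:
        return 'hyperbolic'
    if d == 0:
        return 'parabolic'
    return 'elliptic'

# 3u_xx + 2u_xy + 5u_yy + x u_y = 0 has a = 3, b = 1, c = 5
print(classify(3, 1, 5))      # elliptic
# Tricomi equation u_xx + y u_yy = 0: the type depends on the sign of y
print(classify(1, 0, 2))      # elliptic   (a point with y = 2)
print(classify(1, 0, -2))     # hyperbolic (a point with y = -2)
```

Note that the type is a pointwise property: for variable coefficients the same equation may fall into different classes in different parts of Ω, exactly as in the Tricomi example.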
Remark 4 The above classification is crucial in the study of pdes in that the
type of equation determines what properties might be expected of solutions, what
kinds of information must be supplied with the equation to have a unique so-
lution, and influences the types of numerical techniques that can be used to
approximate solutions.

1.1.4 Canonical Transformation of 2nd −order linear pdes


We explore the idea of transforming equation 1.1.8 into a form more suitable
for obtaining solutions, or at least giving us information about solutions. We
now show that it is possible to transform equation 1.1.8 to a relatively simple
form called its canonical form, which varies according to whether the equation
is hyperbolic, parabolic, or elliptic.
Consider the general coordinate transformation

ξ = ξ(x, y), η = η(x, y) (1.1.9)

We assume that this is one-to-one, so that the Jacobian does not vanish in Ω:

J = ξx ηy − ξy ηx ≠ 0, ∀(x, y) ∈ Ω (1.1.10)

This then implies that the transformation is invertible, with
x = x(ξ, η), y = y(ξ, η). Let w(ξ, η) = u(x(ξ, η), y(ξ, η)); using the chain rule, we
compute

ux = wξ ξx + wη ηx , uy = wξ ξy + wη ηy ,

uxx = wξξ ξx² + 2wξη ξx ηx + wηη ηx² + wξ ξxx + wη ηxx ,
uyy = wξξ ξy² + 2wξη ξy ηy + wηη ηy² + wξ ξyy + wη ηyy ,

and

uxy = wξξ ξx ξy + wξη (ξx ηy + ξy ηx ) + wηη ηx ηy + wξ ξxy + wη ηxy .

One can easily show that equation (1.1.8) is transformed to

Awξξ + 2Bwξη + Cwηη + Dwξ + Ewη + F w = G (1.1.11)

where A = aξx² + 2bξx ξy + cξy² , B = aξx ηx + b(ξx ηy + ηx ξy ) + cξy ηy ,
C = aηx² + 2bηx ηy + cηy² .

Remark 5 We can easily show that

B² − AC = (b² − ac)J² , i.e. ∆(ξ, η) = ∆(x, y)J² ;

thus the sign of the discriminant does not change under the coordinate transfor-
mation, which implies that the type is invariant under any coordinate transfor-
mation!
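The identity B² − AC = (b² − ac)J² can be verified by brute-force symbolic expansion. In the following sketch (an illustration with the sympy library, not part of the notes) the symbols p, q, r, s stand for the partial derivatives ξx, ξy, ηx, ηy at a point:

```python
import sympy as sp

a, b, c = sp.symbols('a b c')       # coefficients of the principal part
p, q, r, s = sp.symbols('p q r s')  # stand-ins for xi_x, xi_y, eta_x, eta_y

# transformed coefficients from (1.1.11)
A = a*p**2 + 2*b*p*q + c*q**2
B = a*p*r + b*(p*s + r*q) + c*q*s
C = a*r**2 + 2*b*r*s + c*s**2
J = p*s - q*r                        # the Jacobian

# the invariance identity: B^2 - AC = (b^2 - ac) J^2
assert sp.expand(B**2 - A*C - (b**2 - a*c)*J**2) == 0
```

Since the identity holds for arbitrary symbol values, it holds pointwise for any smooth change of coordinates, which is exactly the invariance of type claimed in the remark.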

1.1.5 Hyperbolic Canonical Form


Suppose b² − ac > 0 in some region of interest, with a(x, y) ≠ 0. In this case we choose
ξ, η such that

A(ξ, η) = C(ξ, η) = 0;

then the coordinate transformation is given by

ξ = ξ(x, y), η = η(x, y),

where the level curves ξ(x, y) = k and η(x, y) = c, with J ≠ 0, are the characteristic
curves of the corresponding pde; they are solutions of the odes

dy/dx = (b + √(b² − ac))/a (1.1.12)

dy/dx = (b − √(b² − ac))/a (1.1.13)

respectively.
Equations (1.1.12)–(1.1.13) are called the characteristic equations for the hy-
perbolic pde. The curves ξ(x, y) = k and η(x, y) = c will transform the
pde (1.1.8) into the form

wξη + D̃(ξ, η)wξ + Ẽ(ξ, η)wη + F̃ (ξ, η)w = G̃(ξ, η)

or in short, we have
wξη = H(ξ, η, w, wξ , wη ) (1.1.14)
called its canonical form. This form corresponds only to hyperbolic pdes!!

Remark 6 Note that if equation (1.1.8) contains neither lower order derivative
terms nor a nonhomogeneous term, then the canonical form is simply

wξη = 0,

which is solvable! The solution of the transformed problem is given by w(ξ, η) =
F (ξ) + G(η). To obtain the solution of the original problem, we replace w by u
and express ξ = φ(x, y) and η = ψ(x, y); hence, we have

u = F (φ(x, y)) + G(ψ(x, y)).

Example: Classify the equation uxx + 2uxy − 3uyy = 0 and hence transform
it into its canonical form and find the general solution of the pde.

Solution: Here a = 1, b = 1 and c = −3, therefore b² − ac = 1 − (1)(−3) =
4 > 0; hence the problem is hyperbolic, with characteristics obtained from:

dy/dx = (1 + 2)/1 = 3 (1.1.15)

dy/dx = (1 − 2)/1 = −1 (1.1.16)

Solving these equations, we obtain y = 3x + k and y = −x + c; therefore we
define ξ = y − 3x and η = y + x. We note that

J = ξx ηy − ξy ηx = (−3)(1) − (1)(1) = −4 ≠ 0. (1.1.17)

We now transform the pde with u(x, y) = w(ξ, η). Now

ux = wξ ξx + wη ηx = −3wξ + wη , uy = wξ ξy + wη ηy = wξ + wη ,

uxx = ∂/∂ξ(−3wξ + wη )ξx + ∂/∂η(−3wξ + wη )ηx = −3(−3wξξ + wηξ ) + (−3wξη + wηη ) = 9wξξ − 6wξη + wηη ,

uyy = ∂/∂ξ(wξ + wη )ξy + ∂/∂η(wξ + wη )ηy = (wξξ + wηξ ) + (wξη + wηη ) = wξξ + 2wξη + wηη ,

uxy = ∂/∂ξ(−3wξ + wη )ξy + ∂/∂η(−3wξ + wη )ηy = (−3wξξ + wηξ ) + (−3wξη + wηη ) = −3wξξ − 2wξη + wηη .

Substituting these into the equation, we obtain:

uxx + 2uxy − 3uyy = (9wξξ − 6wξη + wηη ) + 2(−3wξξ − 2wξη + wηη ) − 3(wξξ + 2wξη + wηη ) = −16wξη = 0.

This reduces to

wξη = 0,

which is the canonical form of hyperbolic equations.
The solution of this equation is w(ξ, η) = F (ξ) + G(η), thus, in terms of the
x, y coordinates, we have

u(x, y) = F (y − 3x) + G(y + x)
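The general solution found above can be checked directly against the original equation. The following sketch (an illustration with the sympy library, not part of the notes) substitutes u = F(y − 3x) + G(y + x) with arbitrary F, G back into the pde:

```python
import sympy as sp

x, y = sp.symbols('x y')
F, G = sp.Function('F'), sp.Function('G')

# general solution obtained from the canonical form w_xi_eta = 0
u = F(y - 3*x) + G(y + x)

# residual of u_xx + 2 u_xy - 3 u_yy = 0 must vanish for arbitrary F, G
residual = sp.diff(u, x, 2) + 2*sp.diff(u, x, y) - 3*sp.diff(u, y, 2)
assert sp.simplify(residual) == 0
```

The arguments y − 3x and y + x are precisely the characteristic variables ξ and η, so each term is constant along one family of characteristics.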


Example: We consider the problem

4uxx + 5uxy + uyy + ux + uy = 2.

Here a = 4, b = 5/2, c = 1; clearly ∆(x, y) = b² − ac = 25/4 − 4 = 9/4 > 0.
Thus, the equation is a hyperbolic equation.
To obtain the ξ, η coordinates of the transformation, we solve the characteristic equa-
tions:

dy/dx = (b + √(b² − ac))/a = (5/2 + 3/2)/4 = 1

dy/dx = (b − √(b² − ac))/a = (5/2 − 3/2)/4 = 1/4

Solving, we obtain:

ξ = y − x, η = 4y − x

as the characteristic curves. Note that J ≠ 0 and that

ξx = −1, ξy = 1; ηx = −1, ηy = 4.

Transforming u and its derivatives, we obtain:

u(x, y) = u(x(ξ, η), y(ξ, η)) = w(ξ, η)
ux = wξ ξx + wη ηx = −wξ − wη
uy = wξ ξy + wη ηy = wξ + 4wη
uxx = ∂ξ (ux )ξx + ∂η (ux )ηx = ∂ξ (−wξ − wη )(−1) + ∂η (−wξ − wη )(−1) = wξξ + 2wξη + wηη
uyy = ∂ξ (uy )ξy + ∂η (uy )ηy = ∂ξ (wξ + 4wη )(1) + ∂η (wξ + 4wη )(4) = wξξ + 8wξη + 16wηη
uxy = ∂ξ (ux )ξy + ∂η (ux )ηy = ∂ξ (−wξ − wη )(1) + ∂η (−wξ − wη )(4) = −wξξ − 5wξη − 4wηη

Thus, substituting u and its derivatives into the given pde, we obtain:

4(wξξ + 2wξη + wηη ) + 5(−wξξ − 5wξη − 4wηη ) + (wξξ + 8wξη + 16wηη ) − wξ − wη + wξ + 4wη = 2.

Thus, we have

−9wξη + 3wη = 2,

or

wξη − (1/3)wη = −2/9,

the canonical form of the hyperbolic equation!
Integrating this once with respect to η, we obtain

wξ − (1/3)w = −(2/9)η + G(ξ).

Since only derivatives in ξ appear, the above equation can be treated as an
ordinary differential equation, that is

dw/dξ − (1/3)w = −(2/9)η + G(ξ).

This is a linear first order equation with integrating factor e^{−ξ/3}. Thus, we
have:

d/dξ [ w e^{−ξ/3} ] = −(2/9)η e^{−ξ/3} + G(ξ)e^{−ξ/3}.

Integrating both sides and dividing by e^{−ξ/3}, we obtain:

w = (2/3)η + G̃(ξ) + H(η)e^{ξ/3}.

This is the solution of the transformed equation. To obtain the solution of the
original equation, we express ξ = ξ(x, y) = y − x and η = η(x, y) = 4y − x.
Substituting into the above solution with u = w(ξ, η), we obtain

u = (2/3)(4y − x) + G̃(y − x) + H(4y − x)e^{(y−x)/3}.
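As a check on the computation above, the final solution can be substituted back into the original equation symbolically. The following sketch (an illustration with the sympy library, not part of the notes) keeps G̃ and H arbitrary:

```python
import sympy as sp

x, y = sp.symbols('x y')
G, H = sp.Function('G'), sp.Function('H')

# solution obtained via the canonical form, with arbitrary functions G and H
u = sp.Rational(2, 3)*(4*y - x) + G(y - x) + H(4*y - x)*sp.exp((y - x)/3)

# left-hand side of 4u_xx + 5u_xy + u_yy + u_x + u_y = 2
lhs = (4*sp.diff(u, x, 2) + 5*sp.diff(u, x, y) + sp.diff(u, y, 2)
       + sp.diff(u, x) + sp.diff(u, y))
assert sp.simplify(lhs - 2) == 0
```

The particular term (2/3)(4y − x) accounts for the nonhomogeneous right-hand side 2, while the two arbitrary-function terms solve the homogeneous equation.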
Example: The Wave Equation Consider the one-dimensional wave equa-
tion
utt − c2 uxx = 0
subject to the initial condition

u(x, 0) = f (x)

and
ut (x, 0) = g(x).

Obtain the solution of this initial-value problem using the method of character-
istics.
Here a = 1, b = 0, c = −c², hence the discriminant ∆ = b² − ac = c² >
0, so the equation is hyperbolic. The characteristics are given by

dx/dt = (0 ± √(c²))/1 = ±c.

The characteristic variables are:

ξ = x + ct, η = x − ct;

using this, the wave equation is reduced to the canonical form

wξη = 0,

whose solution is

w(ξ, η) = F (ξ) + G(η).

In terms of x and t, we have

u(x, t) = F (x + ct) + G(x − ct). (1.1.18)

Differentiating this solution with respect to t, we have

ut (x, t) = cF ′(x + ct) − cG′(x − ct). (1.1.19)

Setting t = 0 in the above equations, we obtain:

u(x, 0) = F (x) + G(x) = f (x) (1.1.20)

ut (x, 0) = cF ′(x) − cG′(x) = g(x) (1.1.21)
Integrating (1.1.21), we obtain

c ∫_{x0}^{x} F ′(s)ds − c ∫_{x0}^{x} G′(s)ds = ∫_{x0}^{x} g(s)ds (1.1.22)

where x0 is any initial point of integration, so that

F (x) − G(x) = F (x0 ) − G(x0 ) + (1/c) ∫_{x0}^{x} g(s)ds (1.1.23)

Using equations (1.1.20) and (1.1.23), we obtain

F (x) = ½[F (x0 ) − G(x0 )] + ½ f (x) + (1/2c) ∫_{x0}^{x} g(s)ds (1.1.24)

G(x) = ½[G(x0 ) − F (x0 )] + ½ f (x) − (1/2c) ∫_{x0}^{x} g(s)ds (1.1.25)

Thus,

F (x + ct) = ½[F (x0 ) − G(x0 )] + ½ f (x + ct) + (1/2c) ∫_{x0}^{x+ct} g(s)ds (1.1.27)

G(x − ct) = ½[G(x0 ) − F (x0 )] + ½ f (x − ct) − (1/2c) ∫_{x0}^{x−ct} g(s)ds (1.1.28)

Adding, we obtain

u(x, t) = ½[f (x + ct) + f (x − ct)] + (1/2c) ∫_{x−ct}^{x+ct} g(s)ds (1.1.30)

This solution is known as d'Alembert's solution of the initial value prob-
lem for the wave equation.
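Formula (1.1.30) can be evaluated numerically for given data f and g. The following sketch (our own illustration, not part of the notes; the trapezoidal rule is an assumed choice of quadrature) implements it directly:

```python
import math

def dalembert(f, g, c, x, t, n=2000):
    """d'Alembert's solution u(x,t) = [f(x+ct) + f(x-ct)]/2
    + (1/2c) * integral_{x-ct}^{x+ct} g(s) ds,
    with the integral approximated by the trapezoidal rule on n panels."""
    a, b = x - c*t, x + c*t
    h = (b - a) / n
    integral = 0.5 * (g(a) + g(b)) * h + sum(g(a + i*h) for i in range(1, n)) * h
    return 0.5 * (f(a) + f(b)) + integral / (2*c)

# data f(x) = sin x, g = 0: the exact solution is u(x,t) = sin(x) cos(ct)
u = dalembert(math.sin, lambda s: 0.0, c=1.0, x=0.5, t=0.25)
print(abs(u - math.sin(0.5) * math.cos(0.25)))  # ~ 0 (g = 0 makes the formula exact here)
```

With g ≡ 0 only the averaged initial displacement contributes, which is why the value matches sin(x)cos(ct) to rounding error.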

1.1.6 The Parabolic Canonical Form


We consider again, for x, y ∈ Ω, the equation

a(x, y)uxx + 2b(x, y)uxy + c(x, y)uyy + d(x, y)ux + e(x, y)uy + h(x, y)u = f (x, y)
(1.1.31)
Suppose that b² − ac = 0 in the region Ω; that is, the equation is parabolic in
Ω. Now a and c cannot both be zero, since b² = ac = 0 would then force b = 0
and the given equation would be a first order pde! We assume that a ≠ 0
in Ω. In this case we have only one characteristic equation:

dy/dx = b/a (1.1.32)
This is the characteristic equation for the pde in the parabolic case. The solution
to this equation gives only one characteristic curve

ξ(x, y) = k.

The other characteristic curve η(x, y) = c is chosen arbitrarily provided that


the Jacobian of transformation J does not vanish anywhere in Ω.
Using these transformations, we obtain a transformed pde of one of the forms

wηη + Dwξ + Ewη + Hw = F ; (1.1.33)

wηη = G(ξ, η, w, wξ , wη ); (1.1.34)

or

wξξ + Dwξ + Ewη + Hw = F ; (1.1.35)

wξξ = G(ξ, η, w, wξ , wη ). (1.1.36)

This is the canonical form of the parabolic equations.
Example:
Show that the pde uxx +4uxy +4uyy = 0 is a parabolic equation, hence transform
it to its canonical form and determine its solution.
Solution
Here a = 1, b = 2 and c = 4, so that b2 − ac = 0 and the equation is therefore
parabolic. The characteristic equation of this pde is given by

dy/dx = b/a = 2/1 = 2,

hence the corresponding characteristic curve is ξ(x, y) = y − 2x. We choose η
such that J ≠ 0, say η = x. Here, we note that:

ξx = −2, ξy = 1; ηx = 1, ηy = 0.

Transforming the derivatives in the given equation, we have that:

uxx = 4wξξ − 4wξη + wηη , uyy = wξξ


and
uxy = −2wξξ + wξη
Substituting these into the pde, we obtain

wηη = 0

which is the expected canonical form of a parabolic equation.


Integrating once with respect to η, we obtain:

wη = F (ξ),

doing this once more, we obtain

w(ξ, η) = ηF (ξ) + G(ξ)

which is the general solution of the transformed equation, the solution of the
original problem is

u(x, y) = xF (y − 2x) + G(y − 2x)
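As with the hyperbolic example, this general solution can be verified symbolically. The following sketch (an illustration with the sympy library, not part of the notes) keeps F and G arbitrary:

```python
import sympy as sp

x, y = sp.symbols('x y')
F, G = sp.Function('F'), sp.Function('G')

# general solution of the parabolic equation obtained from w_eta_eta = 0
u = x*F(y - 2*x) + G(y - 2*x)

# residual of u_xx + 4 u_xy + 4 u_yy = 0 must vanish for arbitrary F, G
residual = sp.diff(u, x, 2) + 4*sp.diff(u, x, y) + 4*sp.diff(u, y, 2)
assert sp.simplify(residual) == 0
```

The factor x in front of F is the parabolic analogue of the repeated-root term in constant-coefficient odes: both arguments involve the single characteristic variable y − 2x.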

Example: The equation 4uxx + 12uxy + 9uyy − 2ux + u = 0 is parabolic since,
setting a = 4, b = 6 and c = 9, we have b² − ac = 0. One of the characteristic
curves, η(x, y), is obtained by solving the equation dy/dx = 3/2, and the second one
(an arbitrary choice) is given by ξ = y; we therefore have:

η(x, y) = 3x − 2y, ξ = y.

This transforms the given equation to the form

wξξ − (2/3)wη + (1/9)w = 0.
Example: Find the general solution of the equation:

x2 uxx + 2xyuxy + y 2 uyy = 0

Note that b² − ac = x²y² − x²y² = 0; the equation is therefore parabolic.
We have one characteristic curve, which is a solution of

dy/dx = b/a = y/x,

given by ξ = y/x; the other is chosen as η = y. The equation is transformed to

wηη = 0,

whose solution is

w(ξ, η) = ηF (ξ) + G(ξ),

or

u(x, y) = yF (y/x) + G(y/x).

1.1.7 The Elliptic Canonical Form


If b² − ac < 0, then the characteristic equations

dy/dx = (b ± √(b² − ac))/a (1.1.37)

give complex-valued curves ξ, η. Thus, an elliptic equation has no real
characteristic curves. The corresponding canonical form will be given by

wξξ + wηη + E(ξ, η, w, wξ , wη ) = 0


Chapter 2

Initial Boundary Value


Problems(IBVPs) on finite
domains

We now consider initial-boundary value problems for u(x, t) defined on a do-
main Ω, with prescribed values of u(x, t) on the boundary x ∈ ∂Ω together with the
initial value u(x, 0) = f (x), ∀x ∈ Ω. This class of problems is called initial
boundary value problems (IBVPs).

2.0.1 Important pdes in applications:


There are three classical partial differential equations of physical interest, linear
and of order two, which appear quite often in applications and which
dominate the theory of partial differential equations. They represent the three
main classes: parabolic, hyperbolic and elliptic equations. This course will fo-
cus on the study of the heat (parabolic), wave (hyperbolic) and Laplace (elliptic)
equations; these equations apply to a wide variety of physical systems.
The first two equations, which are time dependent, are called evolution
equations. The last equation, which is time independent, is called a steady
state equation.
• The heat/Diffusion Equation:

ut = κuxx ,

this describes the temperature u in a region containing no heat sources or
sinks; it also applies to the diffusion of a chemical with concentration u.
The constant κ is called the diffusivity. The equation is second order in
space and first order in time.
• The Wave Equation:
utt − c2 uxx = 0,
this describes the displacement u of a vibrating string (1D), membrane
(2D), vibrating solid (3D), or gas or liquid. The quantity c is the speed


of propagation of the waves. The equation is second order in both time


and space.
• The Laplace Equation:
uxx + uyy = 0
This describes the steady state temperature distribution in a two dimen-
sional space, also describes the gravitational or electrostatic potential in
a region. The equation is second order in both space variables, x and y.

2.1 Boundary and Initial Conditions


In general, any given partial differential equation will have infinitely many solu-
tions (compare with odes). To determine a unique solution, we consider various
initial and boundary conditions. We illustrate this by considering first the case
of the one-dimensional heat equation.

2.1.1 The Heat Equation


We consider a one-dimensional rod of length L that is capable of conducting
heat, we denote the temperature at a position x and time t by u(x, t), then using
Fourier’s Law and the conservation of energy, the following governing equation
can be derived
ut = κuxx , 0 < x < L, t > 0 (2.1.1)
here κ = K/cρ is called the thermal diffusivity constant, which represents the
conductivity properties of the rod. If the heat energy is initially concentrated in
one place, the above equation describes how the heat energy will spread out, a
physical process called diffusion. Hence the equation is also called the diffusion
equation.
Equation (2.1.1) is often referred to as the heat equation

2.1.2 Boundary Conditions


If the temperature inside the rod evolves according to (2.1.1), we need to know
what happens at the boundaries x = 0, x = L to be able to predict the
future. The rod is assumed to be insulated except at the end points. Two
conditions are needed, corresponding to the second partial derivative in x in the
governing equation (2.1.1), one at each end point (here the rod is assumed to
be very thin, so that there is no heat transfer in the vertical direction). We
now specify three possible types of conditions placed on the boundaries x = 0
and x = L of the rod:
• Fixed temperature T0 and T1 at the two end points:

u(0, t) = T0 , u(L, t) = T1

These are called the Dirichlet Boundary conditions, in this case, there are
no derivatives involved in the conditions. If

u(0, t) = u(L, t) = 0,

then the boundary conditions are said to be homogeneous.



• Flux boundary conditions: If there is no heat flow at the end points, we


then have
ux (0, t) = 0, ux (L, t) = 0
In practice, this is achieved by means of insulation. The boundary condi-
tions are called the Neumann boundary conditions, in this case we have
conditions expressed in terms of the derivatives of u.

• Cooling conditions: The Newton’s law of cooling boundary conditions is


given by

−ux (0, t) = h(Ts − u(0, t)), ux (L, t) = h(Ts − u(L, t)).

These are often referred to as the Robin boundary conditions.

2.1.3 Initial Conditions


The governing equation (2.1.1) has one derivative in time, so we must have one
condition in time. It is possible that the initial temperature distribution is not
constant but a function of position x; thus, we have

u(x, 0) = f (x).

This signifies that the temperature is known at time t = 0 and is given by the
function f (x) for all 0 < x < L. The temperature then evolves from this initial
state, called the initial condition.
The specification of the pde and the accompanying boundary and initial con-
ditions gives what is called an Initial-Boundary-Value Problem (IBVP). Thus,
a problem of the form:

ut = κuxx , 0 < x < L, t > 0 (2.1.2)


u(0, t) = T1 , u(L, t) = T2 , t > 0 (2.1.3)
u(x, 0) = f (x), 0 < x < L (2.1.4)

is called an IBVP.
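Before turning to the analytic method below, it is worth noting that an IBVP of this form can also be approximated numerically. The following sketch (our own illustration, not part of the notes; the explicit finite-difference scheme and all names are assumptions) solves (2.1.2)–(2.1.4) with homogeneous Dirichlet conditions:

```python
import math

def heat_explicit(f, kappa, L, T, nx=50, nt=2000):
    """Explicit finite differences for u_t = kappa u_xx on 0 < x < L with
    u(0,t) = u(L,t) = 0 and u(x,0) = f(x).  Stable when
    r = kappa*dt/dx^2 <= 1/2."""
    dx, dt = L / nx, T / nt
    r = kappa * dt / dx**2
    assert r <= 0.5, "time step too large for stability"
    u = [f(i * dx) for i in range(nx + 1)]
    u[0] = u[nx] = 0.0
    for _ in range(nt):
        # one forward-Euler step of the second-difference approximation
        u = [0.0] + [u[i] + r * (u[i+1] - 2*u[i] + u[i-1])
                     for i in range(1, nx)] + [0.0]
    return u

# f(x) = sin(pi x) on (0, 1): the exact solution is sin(pi x) e^{-pi^2 kappa t}
u = heat_explicit(lambda x: math.sin(math.pi * x), kappa=1.0, L=1.0, T=0.1)
exact = math.sin(math.pi * 0.5) * math.exp(-math.pi**2 * 0.1)
print(abs(u[25] - exact))  # small discretization error
```

The test data sin(πx) is exactly the kind of eigenfunction produced by the separation-of-variables method developed next, which is what makes the comparison with the exact solution possible.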

2.1.4 Method of Solution: Separation of Variables


We now attempt to solve the above equation (2.1.2). We use the Method
Of Separation of variables. This is a very powerful technique for solving
linear homogeneous pdes with homogeneous boundary conditions. The method
reduces a given pde into a system of a simple solvable ordinary differential
equations.
Thus, in this case we set the boundary conditions

u(0, t) = 0, u(L, t) = 0,

that is, T1 = T2 = 0 in (2.1.3) above.

Remark 7 In the case of nonhomogeneous boundary conditions, a linear trans-
formation can be made to transform them into homogeneous conditions. We now
focus on the homogeneous case!

We assume that the solution is given in the form:

u(x, t) = h(x)g(t), (2.1.5)

where h(x) is a function of x only and g(t) is a function of t only. This is the
separated form of the solution, hence the name separation of variables.
Since this is supposed to be a solution, it must satisfy the given pde, so we
compute ut and uxx and substitute them into the pde:

ut = ∂/∂t [h(x)g(t)] = h(x) dg(t)/dt = h(x)ġ(t) (2.1.6)

uxx = ∂²/∂x² [h(x)g(t)] = g(t) d²h(x)/dx² = g(t)h″(x) (2.1.7)

where (˙) and (″) denote the ordinary derivatives in t and x respectively. Note the
transformation from partial derivatives to ordinary derivatives.
Now, substituting (2.1.6) and (2.1.7) into the (2.1.1), we obtain

h(x)ġ(t) = κg(t)h00 (x) (2.1.8)

and dividing by κh(x)g(t) since this is assumed to be nonzero in the case on


nontrivial solutions, we obtain
ġ(t) h00 (x)
= (2.1.9)
κg(t) h(x)
Since the left-hand side is a function of t only and the right-hand side is a
function of x only, the two can only be equal if each side is equal to the same
constant.

ġ(t)/(κg(t)) = h′′(x)/h(x) = λ   (2.1.10)
λ is called the separation constant; it must be determined as well. Note
that there are three possible cases:

λ = 0, λ > 0, λ < 0

From the above, we have the system of simple linear ordinary differential equations

ġ(t) = λκg(t)   (2.1.11)
h′′(x) = λh(x)   (2.1.12)

From the boundary conditions, u(0, t) = 0, u(L, t) = 0, we have

u(0, t) = h(0)g(t) = 0,
so either h(0) = 0 or g(t) = 0; but if g(t) = 0, then u(x, t) ≡ 0, a trivial solution,
thus, h(0) = 0. Similarly,

u(L, t) = h(L)g(t) = 0,

implies that h(L) = 0. Thus, for h(x), we have to solve the boundary value problem

h′′(x) = λh(x),  h(0) = 0, h(L) = 0.   (2.1.13)
2.1. BOUNDARY AND INITIAL CONDITIONS

• Suppose λ = 0. Then from (2.1.13), we have that

h′′(x) = 0,  h(0) = 0, h(L) = 0,

leading to
h(x) = ax + b
applying the boundary conditions, we then have h(0) = a · 0 + b = 0, so b = 0, and h(L) = aL = 0; since L ≠ 0, it follows that a = 0, thus
h(x) = 0, hence u(x, t) ≡ 0, a trivial solution. Thus, when λ = 0, we
obtain a trivial solution. We discard this case.
• Suppose λ > 0, that is λ = ω², ω ∈ R. Then we have the system of equations

ġ(t) = ω²κg(t)   (2.1.14)
h′′(x) = ω²h(x)   (2.1.15)

Solving first h′′(x) = ω²h(x), h(0) = 0, h(L) = 0, we have the solution

h(x) = ae^(ωx) + be^(−ωx) ;

in this case h(0) = 0 implies a + b = 0, so b = −a, and h(L) = ae^(ωL) + be^(−ωL) = 0; thus we have

a[e^(ωL) − e^(−ωL)] = 0 → a = 0.

If a = 0 then b = 0, thus u(x, t) ≡ 0, a trivial solution! We discard this case as well.
• We now consider the case λ < 0, that is λ = −ω²:

ġ(t) = −ω²κg(t)   (2.1.16)
h′′(x) = −ω²h(x)   (2.1.17)

Solving these, we obtain

g(t) = Ce^(−ω²κt)   (2.1.18)
h(x) = A cos ωx + B sin ωx   (2.1.19)

Applying the boundary conditions to the second solution, we have:
h(0) = 0 = A + 0 · B, so A = 0, thus h(x) = B sin ωx. Now h(L) = B sin ωL = 0, so either B = 0 or sin ωL = 0. If B = 0, we would have a trivial solution, so B ≠ 0; then sin ωL = 0, giving ωL = nπ, n = 1, 2, 3, · · · , thus we have

ω = nπ/L,  n = 1, 2, 3, · · · .

Now, for each value of n, we have hn(x) = Bn sin(nπx/L) together with gn(t) = Cn e^(−n²π²κt/L²), thus

un = hn(x)gn(t) = bn sin(nπx/L) e^(−n²π²κt/L²),  bn = Bn Cn

Using the superposition principle, we have that

u(x, t) = Σ_{n=1}^∞ bn sin(nπx/L) e^(−n²π²κt/L²)

We have so far not used the initial condition u(x, 0) = f (x); setting t = 0 in the above solution, we have

u(x, 0) = f (x) = Σ_{n=1}^∞ bn sin(nπx/L),

thus

f (x) = Σ_{n=1}^∞ bn sin(nπx/L),

which is the half-range Fourier sine series for f (x); from the theory of Fourier series, the bn are the Fourier coefficients, which are given by

bn = (2/L) ∫₀^L f (x) sin(nπx/L) dx

Thus, the solution to the heat equation that satisfies the given initial and boundary conditions is given by

u(x, t) = (2/L) Σ_{n=1}^∞ ( ∫₀^L f (x) sin(nπx/L) dx ) sin(nπx/L) e^(−n²π²κt/L²).

We can easily check that u(0, t) = u(L, t) = 0 as expected and also that,
since the boundaries are kept at zero temperature and there is no heat
source inside the bar, as t → ∞, u(x, t) → 0.
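The truncated series above is easy to evaluate numerically. Below is a minimal sketch of ours (not part of the notes): the coefficients bn are obtained by trapezoid-rule quadrature, and the function name `heat_dirichlet` and its parameters are our own choices. With the sample initial profile f(x) = x, the boundary value stays at zero and the t = 0 value approximates f.

```python
import numpy as np

def heat_dirichlet(x, t, f, L=np.pi, kappa=1.0, N=200):
    """Truncated series solution of u_t = kappa*u_xx, u(0,t)=u(L,t)=0, u(x,0)=f(x)."""
    xs = np.linspace(0.0, L, 20_001)
    dx = xs[1] - xs[0]
    u = np.zeros_like(np.asarray(x, dtype=float))
    for n in range(1, N + 1):
        y = f(xs) * np.sin(n * np.pi * xs / L)
        bn = (2.0 / L) * ((y[0] + y[-1]) / 2 + y[1:-1].sum()) * dx  # trapezoid rule
        u = u + bn * np.sin(n * np.pi * np.asarray(x) / L) * np.exp(-kappa * (n * np.pi / L) ** 2 * t)
    return u

# sample initial profile f(x) = x
u_boundary = heat_dirichlet(0.0, 0.3, lambda s: s)    # boundary should stay at 0
u_mid0 = heat_dirichlet(np.pi / 2, 0.0, lambda s: s)  # should approximate f(pi/2)
```

Increasing N improves the t = 0 approximation, while for t > 0 the exponential factors make even small N accurate.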

Remark 8 • In the case of Dirichlet conditions, λ < 0 is the only case that leads to bounded nontrivial solutions; when solving problems, we go straight to this case. You do not have to consider the other cases λ ≥ 0!
• If the homogeneous boundary conditions are of Dirichlet type, the solution leads to a Fourier sine series.

Example 4 Solve the IBVP:

ut = κuxx ,  0 < x < π, t > 0
u(0, t) = 0, u(π, t) = 0,  t > 0
u(x, 0) = x,  0 < x < π

Following the above example with λ = −ω², we obtain the solution

u(x, t) = Σ_{n=1}^∞ bn sin nx e^(−n²κt).

Applying the initial condition u(x, 0) = x, we obtain

x = Σ_{n=1}^∞ bn sin nx;

from the theory of Fourier series, we have

bn = (2/π) ∫₀^π x sin nx dx = 2 (−1)^(n+1)/n,

thus

u(x, t) = 2 Σ_{n=1}^∞ [(−1)^(n+1)/n] sin nx e^(−n²κt).
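As a quick check of the coefficient formula in Example 4, one can compute bn = (2/π) ∫₀^π x sin nx dx by numerical quadrature and compare it with 2(−1)^(n+1)/n (a sketch of ours):

```python
import numpy as np

# Numerical check of b_n = (2/pi) * int_0^pi x sin(nx) dx = 2(-1)^(n+1)/n.
x = np.linspace(0.0, np.pi, 200_001)
dx = x[1] - x[0]

def bn_numeric(n):
    y = x * np.sin(n * x)
    return (2.0 / np.pi) * ((y[0] + y[-1]) / 2 + y[1:-1].sum()) * dx  # trapezoid rule

coeffs = [bn_numeric(n) for n in range(1, 6)]
```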

2.1.5 IBVP with Neumann boundary conditions


We now consider the IBVP with insulated boundaries, that is

ux(0, t) = 0,  ux(L, t) = 0;

that is, we attempt to solve the problem

ut = κuxx ,  0 < x < L, t > 0   (2.1.20)
ux(0, t) = 0, ux(L, t) = 0,  t > 0   (2.1.21)
u(x, 0) = f (x),  0 < x < L   (2.1.22)

As before, we use the method of separation of variables, setting

u(x, t) = h(x)g(t),

we obtain the system of equations

ġ(t) = λκg(t)   (2.1.23)
h′′(x) = λh(x)   (2.1.24)
• Considering first the case λ = 0, we obtain from the second equation, as before,

h(x) = ax + b;
both h′(0) = 0 and h′(L) = 0 imply that a = 0; thus, in this case

u(x, t) = b,

b arbitrary.
• The case λ > 0, leads to a trivial solution. (Verify this!)
• We now consider the case λ < 0, that is λ = −ω². Thus, we solve the equations

ġ(t) = −ω²κg(t)   (2.1.25)
h′′(x) = −ω²h(x)   (2.1.26)

Applying the boundary conditions, we obtain that h′(0) = 0, h′(L) = 0 are the conditions to be satisfied by the function h(x). The solutions of the above ordinary differential equations are given by

g(t) = Ce^(−ω²κt)
h(x) = A cos ωx + B sin ωx;

now applying the boundary conditions to the second solution, we have h′(0) = Bω = 0, thus B = 0, and we must then have h(x) = A cos ωx. Using h′(L) = 0, it follows that −Aω sin ωL = 0; since A ≠ 0, we then have ωL = nπ, n = 1, 2, 3, · · · . Thus, in this case we have the solutions

gn(t) = Cn e^(−n²π²κt/L²)
hn(x) = An cos(nπx/L).

Thus, un(x, t) = an cos(nπx/L) e^(−n²π²κt/L²), n ≥ 1. Combining this with the solution corresponding to the case λ = 0, and using the superposition principle, we have that

u(x, t) = (1/2)a0 + Σ_{n=1}^∞ an cos(nπx/L) e^(−n²π²κt/L²)   (2.1.27)

with b = (1/2)a0.
Applying the initial condition, we have

u(x, 0) = f (x) = (1/2)a0 + Σ_{n=1}^∞ an cos(nπx/L),

or simply

f (x) = (1/2)a0 + Σ_{n=1}^∞ an cos(nπx/L),

which is the Fourier cosine series for f (x). The coefficients an are then given by

an = (2/L) ∫₀^L f (x) cos(nπx/L) dx
We note from (2.1.27) that as t → ∞ the solution u(x, t) → (1/2)a0, the equilibrium temperature. This is expected, since no heat is allowed to escape via the boundaries.
The complete solution is given by

u(x, t) = (1/L) ∫₀^L f (x) dx + (2/L) Σ_{n=1}^∞ ( ∫₀^L f (x) cos(nπx/L) dx ) cos(nπx/L) e^(−n²π²κt/L²)

Example 5 Solve the IBVP (here L = π):

ut = κuxx ,  0 < x < π, t > 0   (2.1.28)
ux(0, t) = 0, ux(π, t) = 0,  t > 0   (2.1.29)
u(x, 0) = { 0, 0 < x < π/2;  1, π/2 < x < π }   (2.1.30)

Here, computing the coefficients, we obtain

a0 = (2/π) ∫₀^π f (x) dx = (2/π) ∫_{π/2}^π dx = 1

an = (2/π) ∫₀^π f (x) cos nx dx = (2/π) ∫_{π/2}^π cos nx dx = −(2/(nπ)) sin(nπ/2)

Thus, we have the solution

u(x, t) = 1/2 − (2/π) Σ_{n=1}^∞ (1/n) sin(nπ/2) cos nx e^(−n²κt)
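The cosine coefficients of Example 5 can be verified numerically against the closed forms a0 = 1 and an = −(2/(nπ)) sin(nπ/2) (a sketch of ours, using trapezoid-rule quadrature for the step profile):

```python
import numpy as np

# Cosine coefficients of the step profile u(x,0) = 0 on (0,pi/2), 1 on (pi/2,pi).
x = np.linspace(0.0, np.pi, 200_001)
dx = x[1] - x[0]
f = (x > np.pi / 2).astype(float)

def an_numeric(n):
    y = f * np.cos(n * x)
    return (2.0 / np.pi) * ((y[0] + y[-1]) / 2 + y[1:-1].sum()) * dx

a0 = an_numeric(0)
a = [an_numeric(n) for n in range(1, 6)]
```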

2.2 Heat Equation with nonhomogeneous boundary conditions
We have so far solved the heat equation with end points held at fixed zero temperature. We now consider the case where u(0, t) = T1 and u(L, t) = T2, with T1 and T2 not both zero. Thus, we consider the initial-boundary value problem of the type:

ut = κuxx ,  0 < x < L, t > 0   (2.2.1)
u(0, t) = T1 , u(L, t) = T2 ,  t > 0   (2.2.2)
u(x, 0) = f (x),  0 < x < L   (2.2.3)

If we attempt to use separation of variables, u(x, t) = X(x)T (t), then applying the boundary conditions we obtain

u(0, t) = X(0)T (t) = T1 → T (t) = T1/X(0),

so T (t) is a constant; thus u(x, t) = X(x) T1/X(0) is independent of t, which is unrealistic. Similarly, if we apply the boundary condition at x = L, we have

u(L, t) = X(L)T (t) = T2 → T (t) = T2/X(L),

leading to u(x, t) = X(x) T2/X(L), which is again unrealistic.
To overcome this difficulty, we consider a transformation

u(x, t) = U (x, t) + φ(x),



where φ(x) and U (x, t) are functions to be determined. We now consider equa-
tions to be satisfied by φ(x) and U (x, t). Substituting the above transformation
into the governing pde, we obtain

Ut(x, t) = κ(Uxx + φ′′(x)),

to avoid solving a nonhomogeneous equation for U (x, t), we set

φ′′(x) = 0.

From the boundary conditions, we have

u(0, t) = U (0, t) + φ(0) = T1 ,


u(L, t) = U (L, t) + φ(L) = T2 ,

to have homogeneous boundary conditions for U, we set φ(0) = T1, φ(L) = T2. Thus, solving φ′′(x) = 0, we have

φ(x) = (T2 − T1)x/L + T1
and U(x, t) is then determined from

Ut = κUxx ,  0 < x < L, t > 0   (2.2.4)
U(0, t) = 0, U(L, t) = 0,  t > 0   (2.2.5)
U(x, 0) = f (x) − φ(x),  0 < x < L   (2.2.6)

whose solution is

U(x, t) = Σ_{n=1}^∞ bn sin(nπx/L) e^(−κn²π²t/L²),

where

bn = (2/L) ∫₀^L (f (x) − φ(x)) sin(nπx/L) dx.

Thus,

u(x, t) = Σ_{n=1}^∞ bn sin(nπx/L) e^(−κn²π²t/L²) + (T2 − T1)x/L + T1.

We note that the term (T2 − T1)x/L + T1 represents the steady-state temperature of the rod. That is, as t → ∞, the temperature settles at

u(x, t) = (T2 − T1)x/L + T1.

Remark 9 The same transformation can be used to deal with nonhomogeneous equations having a source term that depends on position only, or is constant.

Example 6 Solve the IBVP:

ut = 7uxx ,  0 < x < 5, t > 0
u(0, t) = 1, u(5, t) = 4,  t > 0
u(x, 0) = f (x) = { 3 − x, 0 < x < 3;  10(x − 3), 3 < x < 5 }

If we let u(x, t) = U(x, t) + φ(x), here with T1 = 1, T2 = 4, L = 5, then φ(x) = (T2 − T1)x/L + T1 = (3/5)x + 1. U(x, t) solves the homogeneous equation with homogeneous boundary conditions:

Ut = 7Uxx ,  0 < x < 5, t > 0
U(0, t) = 0, U(5, t) = 0,  t > 0
U(x, 0) = f (x) − (3/5)x − 1 = { 2 − (8/5)x, 0 < x < 3;  −31 + (47/5)x, 3 ≤ x < 5 }

whose solution is

U(x, t) = Σ_{n=1}^∞ bn sin(nπx/5) e^(−7n²π²t/25)

where

bn = (2/5) ∫₀^5 U(x, 0) sin(nπx/5) dx
   = (2/5) [ ∫₀^3 (2 − (8/5)x) sin(nπx/5) dx + ∫₃^5 (−31 + (47/5)x) sin(nπx/5) dx ]
   = −(110/(n²π²)) sin(3nπ/5) + 4/(nπ) − (32/(nπ))(−1)^n.

Thus,

u(x, t) = Σ_{n=1}^∞ [ −(110/(n²π²)) sin(3nπ/5) + 4/(nπ) − (32/(nπ))(−1)^n ] sin(nπx/5) e^(−7n²π²t/25) + (3/5)x + 1.

Note that as t → ∞, u(x, t) → (3/5)x + 1, which is the equilibrium temperature.
5

2.3 Nonhomogeneous Heat Equation with homogeneous boundary conditions
We now consider the nonhomogeneous equation with homogeneous boundary conditions of the form:

ut = κuxx + ψ(x),  0 < x < L, t > 0   (2.3.1)
u(0, t) = 0, u(L, t) = 0,  t > 0   (2.3.2)
u(x, 0) = f (x),  0 < x < L   (2.3.3)

It is clear that the substitution u = h(x)g(t) will not separate the equation
ut = κuxx + ψ(x) into a function of x only and a function of t only. That is,
the result will be
hġ = κh′′g + ψ(x).
To overcome this difficulty, we again assume that u(x, t) can be expressed in
the form
u(x, t) = U (x, t) + φ(x).
Substituting into the pde, we obtain

Ut(x, t) = κUxx + κφ′′ + ψ.

We then set κφ′′ + ψ = 0. Solving for φ, we obtain

φ(x) = −(1/κ) ∬ ψ(x) dx dx + ax + b = H(x) + ax + b
From the boundary conditions, we have

u(0, t) = U (0, t) + φ(0) = 0,


u(L, t) = U (L, t) + φ(L) = 0,

to have homogeneous boundary conditions for U, we set φ(0) = 0, φ(L) = 0. We use these conditions to determine a and b, obtaining

φ(x) = H(x) − (H(L)/L) x
U is then obtained from

Ut = κUxx ,  0 < x < L, t > 0
U(0, t) = 0, U(L, t) = 0,  t > 0
U(x, 0) = f (x) − φ(x),  0 < x < L

Example 7 Solve the IBVP:

ut = uxx + 1,  0 < x < π, t > 0
u(0, t) = 0, u(π, t) = 0,  t > 0
u(x, 0) = x,  0 < x < π.
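Example 7 is not worked out in the notes; as a hedged sketch of ours, following the recipe above with κ = 1 and ψ(x) = 1 gives H(x) = −x²/2, and the conditions φ(0) = φ(π) = 0 lead to φ(x) = x(π − x)/2. A quick numerical verification that this φ satisfies κφ′′ + ψ = 0 with the required boundary values:

```python
import numpy as np

# Our candidate steady part for Example 7: phi(x) = x(pi - x)/2.
phi = lambda s: s * (np.pi - s) / 2

# Verify kappa*phi'' + psi = 0 (kappa = 1, psi = 1) by a central finite difference.
h = 1e-4
xs = np.linspace(0.1, np.pi - 0.1, 50)
second = (phi(xs + h) - 2 * phi(xs) + phi(xs - h)) / h ** 2
residual = float(np.max(np.abs(second + 1.0)))
endpoints = (phi(0.0), phi(np.pi))
```

U(x, 0) = x − x(π − x)/2 then feeds the homogeneous problem, whose sine-series solution follows Section 2.1.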

2.4 The Wave Equation


Consider a perfectly flexible elastic string stretched between two points at x = 0 and x = L with tension T. If the string is displaced slightly from its initial position of rest, with the end points remaining fixed, then the string will vibrate. The position of any point P on the string will then depend on its distance from the end points and on the instant in time. Its displacement u(x, t) is governed by the so-called wave equation

utt = c²uxx ,  c = √(T /ρ).

We now consider the one-dimensional wave equation given by

PDE : ∂²u/∂t² − c² ∂²u/∂x² = 0,  0 < x < L, t > 0   (2.4.1)
BCs : u(0, t) = 0, u(L, t) = 0,  t > 0   (2.4.2)
ICs : u(x, 0) = f (x), ut(x, 0) = g(x),  0 < x < L.   (2.4.3)

This problem models the motion of a taut string with given initial position u(x, 0) = f (x) and initial velocity ut(x, 0) = g(x); these are called the initial conditions (ICs) of the problem. The end points x = 0, L are fixed; these are the homogeneous boundary conditions (BCs) of Dirichlet type.
We solve the above IBVP using the Method of Separation of Variables.
We let u(x, t) = X(x)T (t); substituting into the wave equation, we obtain

T ′′(t)X(x) = c²X′′(x)T (t),

and dividing by c²X(x)T (t), we obtain

T ′′/(c²T ) = X′′/X = −λ²,

leading to the system of ordinary differential equations

T ′′ + c²λ²T = 0
X′′ + λ²X = 0

with solutions

T (t) = C cos cλt + D sin cλt   (2.4.4)
X(x) = A cos λx + B sin λx   (2.4.5)

Just as before, the boundary conditions lead to X(0) = 0, X(L) = 0; thus we have

X(0) = A = 0,  X = B sin λx,

but

X(L) = B sin λL = 0;

for nontrivial solutions B ≠ 0, so sin λL = 0, thus

λL = nπ,  n = 1, 2, 3, · · · ,

or

λ = nπ/L,  n = 1, 2, 3, · · · .

Thus, for each value of n we have solutions of the form

Xn(x) = Bn sin(nπx/L).

The corresponding solution T (t) is given by

Tn(t) = Cn cos(nπct/L) + Dn sin(nπct/L),  n ≥ 1   (2.4.6)

Thus, we have that

un(x, t) = Xn(x)Tn(t) = Bn sin(nπx/L) [ Cn cos(cnπt/L) + Dn sin(cnπt/L) ]
         = [ an cos(nπct/L) + bn sin(nπct/L) ] sin(nπx/L),  n ≥ 1.

Using the superposition principle, we have

u(x, t) = Σ_{n=1}^∞ [ an cos(nπct/L) + bn sin(nπct/L) ] sin(nπx/L).   (2.4.7)

Our solution must satisfy the initial conditions u(x, 0) = f (x) and ut(x, 0) = g(x); we thus have

u(x, 0) = f (x) = Σ_{n=1}^∞ an sin(nπx/L)   (2.4.8)
ut(x, 0) = g(x) = Σ_{n=1}^∞ bn (nπc/L) sin(nπx/L)   (2.4.9)

or

f (x) = Σ_{n=1}^∞ an sin(nπx/L)   (2.4.10)
g(x) = Σ_{n=1}^∞ bn (nπc/L) sin(nπx/L)   (2.4.11)

which are both Fourier sine series, with Fourier coefficients

an = (2/L) ∫₀^L f (x) sin(nπx/L) dx
bn = (2/(nπc)) ∫₀^L g(x) sin(nπx/L) dx

Example 8 Solve the IBVP

∂²u/∂t² − 36 ∂²u/∂x² = 0,  0 < x < 1, t > 0
u(0, t) = 0, u(1, t) = 0,  t > 0
u(x, 0) = 0, ut(x, 0) = 3(1 − x),  0 < x < 1.

Here L = 1, c = 6; with f (x) = 0, it then follows that

an = 2 ∫₀^1 0 · sin nπx dx = 0

and with g(x) = 3(1 − x), we have

bn = (2/(6nπ)) ∫₀^1 3(1 − x) sin nπx dx
   = (1/(nπ)) ∫₀^1 sin(nπx) dx − (1/(nπ)) ∫₀^1 x sin(nπx) dx
   = (1/(n²π²)) [1 − cos nπ] + (1/(n²π²)) cos nπ − (1/(n³π³)) [sin(nπx)]₀^1
   = 1/(n²π²)

Thus, the solution is then given by

u(x, t) = Σ_{n=1}^∞ (1/(n²π²)) sin(nπx) sin(6nπt)
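The integration by parts in Example 8 can be checked numerically: bn = (2/(6nπ)) ∫₀^1 3(1 − x) sin(nπx) dx should equal 1/(n²π²) (a sketch of ours):

```python
import numpy as np

# Numerical check of b_n = 1/(n^2 pi^2) from Example 8.
x = np.linspace(0.0, 1.0, 200_001)
dx = x[1] - x[0]

def bn_numeric(n):
    y = 3 * (1 - x) * np.sin(n * np.pi * x)
    return (2.0 / (6 * n * np.pi)) * ((y[0] + y[-1]) / 2 + y[1:-1].sum()) * dx

vals = [bn_numeric(n) for n in range(1, 6)]
```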

Example 9 Solve the IBVP

∂²u/∂t² − 9 ∂²u/∂x² = 0,  0 < x < π, t > 0
u(0, t) = 0, u(π, t) = 0,  t > 0
u(x, 0) = { x, 0 < x < π/2;  π − x, π/2 < x < π }
ut(x, 0) = 0,  0 < x < π.

Here L = π, c = 3, with f (x) = { x, 0 < x < π/2;  π − x, π/2 < x < π }. It then follows that

an = (2/π) ∫₀^π f (x) sin nx dx = (2/π) ∫₀^{π/2} x sin nx dx + (2/π) ∫_{π/2}^π (π − x) sin nx dx
   = (4/(n²π)) sin(nπ/2)  (after using integration by parts)
   = { 0, n even;  (4/(n²π)) (−1)^((n−1)/2), n odd }

Since an ≠ 0 only for odd values of n, relabelling the odd integers as 2n − 1 we have

a_{2n−1} = (4/((2n − 1)²π)) (−1)^(n+1),  n = 1, 2, 3, · · ·
and with g(x) = 0, we have

bn = (2/(3nπ)) ∫₀^π 0 · sin nx dx = 0.

Thus, the solution is then given by

u(x, t) = Σ_{n=1}^∞ [ 4(−1)^(n+1)/((2n − 1)²π) ] sin((2n − 1)x) cos(3(2n − 1)t)
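The coefficients of Example 9 can be confirmed numerically: an = (4/(n²π)) sin(nπ/2) for the triangular initial profile, vanishing for even n (a sketch of ours):

```python
import numpy as np

# Sine coefficients of the triangle profile f = min(x, pi - x) on [0, pi].
x = np.linspace(0.0, np.pi, 200_001)
dx = x[1] - x[0]
f = np.minimum(x, np.pi - x)

def an_numeric(n):
    y = f * np.sin(n * x)
    return (2.0 / np.pi) * ((y[0] + y[-1]) / 2 + y[1:-1].sum()) * dx

vals = [an_numeric(n) for n in range(1, 7)]
```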
Chapter 3

Orthogonal Functions and Fourier Series

In 1800, Joseph Fourier conjectured that any periodic function f (x) of period 2π can be written, at almost every point, as the sum of a trigonometric series

f (x) = a0/2 + Σ_{n=1}^∞ (an cos nx + bn sin nx),   (3.0.1)

called the Fourier series. The constants an, bn, n ≥ 0, are real numbers defined by

an = (1/π) ∫_{−π}^π f (x) cos nx dx,  n = 0, 1, 2, · · ·   (3.0.2)
bn = (1/π) ∫_{−π}^π f (x) sin nx dx,  n = 1, 2, · · ·   (3.0.3)

These are called the Fourier coefficients. Joseph Fourier developed these series while working on initial-boundary value problems (IBVPs) modeling heat flow under a variety of conditions. This is the focus of the first part of this course. We will then study partial differential equations later in the course. Fourier series and the related Fourier transforms will be used to develop solutions of IBVPs.
It is useful to think about the general context in which one finds oneself when
discussing Fourier series and transforms. In this chapter we provide a glimpse
into more general notions for generalized Fourier series and the convergence
of Fourier series. We can view the sine and cosine functions in the Fourier
trigonometric series representations as basis vectors in an infinite dimensional
function space. A given function in that space may then be represented as a
linear combination over this infinite basis. Before we start the discussion of
Fourier series we will review some basic results on vector spaces, inner-product
spaces and orthogonal projections.


3.1 Vector Spaces and Inner-Product Spaces


We will start with a space of vectors and the operations of addition and scalar
multiplication. We will need a set of scalars, which generally come from some
field. However, in our applications the field will either be the set of real numbers
or the set of complex numbers.

3.1.1 Finite Dimensional Vector Spaces


Much of the discussion and terminology that we will use comes from the theory
of vector spaces. Until now you may only have dealt with finite dimensional
vector spaces. Even then, you might only be comfortable with two and three
dimensions, e.g v = ai + bj ∈ R2 , v = ai + bj + ck ∈ R3 . We will review a little
of what we know about finite dimensional spaces so that we can introduce more
general function spaces later.

Definition 10 A vector space V over a field F is a set that is closed under


addition and scalar multiplication and satisfies the following conditions:
For u, v, w ∈ V, and α, β ∈ F(Here F = C or F = R)

1. u + v = v + u

2. αu + βu = u(α + β)

3. There exists a 0 such that 0 + v = v.

4. There exists an additive inverse, -v, such that v + (-v) = 0.

Recall that for i, j, k ∈ R3 , a1 , a2 , a3 ∈ R, then any v ∈ R3 , can be expressed


as:
v = a1 i + a2 j + a3 k.
The vectors i, j, k are said to span R3 and are called a basis for R3 , (linearly
independent and orthogonal vectors.)

Remark 10 Clearly {i, j, k} form a linearly independent set.

We can generalize these ideas. In an n-dimensional vector space, any vector in the space can be represented as a sum over n linearly independent vectors (the equivalent of non-coplanar vectors). Such a linearly independent set of vectors {vi}_{i=1}^n satisfies the condition

Σ_{i=1}^n ci vi = 0 ⇐⇒ ci = 0 for all i

Now we can define a basis for an n-dimensional vector space. We begin with
the standard basis in an n-dimensional vector space. It is a generalization of
the standard basis i, j, k in three dimensions. We define the standard basis with
the notation
e1 = (1, · · · , 0), e2 = (0, 1, · · · , 0), etc, in general, we have

ek = (0, · · · , 0, 1, 0 · · · , 0), k = 1, 2, · · · , n.

Note that these vectors form a linearly independent set. Thus, we can expand any v ∈ R^n as

v = (v1, · · · , vn) = Σ_{i=1}^n vi ei ;

here the vi are called the components of v. To determine the components vi, we need to define the inner product (or dot product) of vectors:

Definition 11 If u = (u1, · · · , un) and v = (v1, · · · , vn) belong to R^n, their dot product is the number

u · v = < u, v > = u1 v1 + · · · + un vn = Σ_{i=1}^n ui vi

The dot product has the following properties:


• < u, v > = < v, u >
• < αu, v >= α < u, v >=< u, αv >, α ∈ R
• < u + v, w > = < u, w > + < v, w >
• < u, u > ≥ 0 and < u, u > = 0 ⇐⇒ u = 0

Orthogonal Operations
• The norm of a vector v: ||v|| = < v, v >^(1/2) = √(v1² + v2² + · · · + vn²)
• Orthogonality of two vectors: u⊥v ⇐⇒ < u, v >= 0.
• Orthogonality of a collection of vectors: {u1, · · · , um} is an orthogonal collection of vectors ⇐⇒ < ui, uj > = 0 if i ≠ j.

Remark 11 Clearly the set {ek }nk=1 is a linearly independent set of or-
thogonal vectors.

• Orthogonal basis: If m = n, the dimension of the space, then an orthogonal collection {u1, · · · , un} with ui ≠ 0 for all i forms an orthogonal basis. In that case, any vector v ∈ R^n can be expanded in terms of the orthogonal basis via the formula v = Σ_{i=1}^n vi ui. How then can we determine the components vi? We can use the scalar product of v with each basis element uj. Using the properties of the scalar product, we have for j = 1, · · · , n

< v, uj > = < Σ_{i=1}^n vi ui , uj > = Σ_{i=1}^n vi < ui, uj > = vj < uj, uj > ,

giving

vj = < v, uj > / < uj, uj > = < v, uj > / ||uj||².

Thus

v = Σ_{i=1}^n [ < v, ui > / ||ui||² ] ui

• Orthonormal basis: an orthogonal basis {u1, · · · , un} with ||ui|| = 1 for all i. That is,

< ui, uj > = { 0, i ≠ j;  1, i = j }

Remark 12 Clearly the set {ek}_{k=1}^n is a linearly independent set of orthonormal vectors.

Thus, in this case we have

v = Σ_{i=1}^n < v, ui > ui = Σ_{i=1}^n αi ui ;  αi = < v, ui >
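The orthogonal-basis expansion formula vj = ⟨v, uj⟩/||uj||² can be checked concretely in R³; the basis below is our own example (orthogonal but not orthonormal):

```python
import numpy as np

# Expand v over an orthogonal basis of R^3 and reconstruct it.
u1 = np.array([1.0, 1.0, 0.0])
u2 = np.array([1.0, -1.0, 0.0])
u3 = np.array([0.0, 0.0, 2.0])
basis = [u1, u2, u3]
v = np.array([3.0, -1.0, 4.0])

coeffs = [np.dot(v, u) / np.dot(u, u) for u in basis]   # v_j = <v,u_j>/||u_j||^2
reconstructed = sum(c * u for c, u in zip(coeffs, basis))
```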

3.1.2 Function Spaces


Earlier we studied finite dimensional vector spaces. Given a set of basis vectors {ei}_{i=1}^n in a vector space V, we showed that we can expand any vector v ∈ V in terms of this basis, v = Σ_{i=1}^n vi ei. We then discussed the simple case of extracting the components vi of the vector. The keys to doing this simply were to have

• a scalar product and,

• an orthogonal basis set.

These are also the key ingredients that we will need in the infinite dimensional case. In fact, we need these when we study Fourier series.
We consider a space of functions f on [a, b]. We think of a function f as an infinite-dimensional vector whose components are the values f (x) as x varies in [a, b].
We will consider various infinite dimensional function spaces. Functions in
these spaces would differ by their properties. For example, we could consider

• the space of continuous functions on [a, b], that is f ∈ C[a, b],

• the space of piecewise-continuous functions P C[a, b],

• the space of differentiably continuous functions C k [a, b],

• the set of square integrable functions from a to b, L2[a, b]. This is defined as

L2[a, b] := { f : ∫_a^b |f (x)|² dx < ∞ }

As you will see, there are many types of function spaces. In order to view these spaces as vector spaces, we must be able to add functions and multiply them by scalars in such a way that they satisfy the definition of a vector space.
We will also need a scalar product defined on this space of functions. There
are several types of scalar products, or inner products, that we can define.

Definition 12 An inner product <, >, on a real vector space V is a mapping


from V × V into R such that for u, v, w ∈ V and α ∈ R, one has

• < v, v >≥ 0 and < v, v >= 0, ⇐⇒ v = 0.

• < w, v >=< v, w >

• < αw, v >= α < w, v >

• < u + v, w >=< u, w > + < v, w > .

Definition 13 A real inner product space is a real vector space equipped with the above inner product.

For complex inner product spaces, the above properties hold with the symmetry property replaced by < v, w > = the complex conjugate of < w, v >.
For the time being, we will only deal with real valued functions and, thus we
will need an inner product appropriate for such spaces. We have the following
definitions.

• Let f (x) and g(x) be real-valued functions defined on [a, b]. Then we define their inner product, if the integral exists, as

< f, g > = ∫_a^b f (x)g(x) dx

Spaces in which < f, f > < ∞ under this inner product are called spaces of square integrable functions on (a, b) and are denoted L2(a, b).

Remark 13 If f (x), g(x) are complex-valued functions defined on [a, b], then

< f, g > = ∫_a^b f (x) g(x)* dx,

where * denotes complex conjugation.

• Two functions f (x) and g(x) are said to be orthogonal on the interval [a, b] if < f, g > = 0.

Example 10 The functions f (x) = sin 3x, g(x) = cos(3x) are orthogonal
on [−π, π].

• Orthogonal collections: A collection of functions φ1(x), φ2(x), · · · , φn(x), · · · defined on [a, b] is called orthogonal on [a, b] if

< φi, φj > = ∫_a^b φi(x)φj(x) dx = 0, when i ≠ j.

Example 11 Each of the sets {cos kx, k ≥ 0}, {sin kx, k ≥ 1} and
{sin kx, cos kx, k ≥ 0} is orthogonal on [−π, π].

Example 12 The set {e^(inx)}_{n=−∞}^∞ is an orthogonal set on [−π, π].

• If f (x) is a function defined on [a, b], we define the norm of f to be

||f || = < f, f >^(1/2) = ( ∫_a^b f (x)² dx )^(1/2)

• A collection of functions {φ1(x), φ2(x), · · · , φm(x), · · · } defined on [a, b] is called orthonormal on [a, b] if

< φi, φj > = ∫_a^b φi(x)φj(x) dx = δij = { 0, i ≠ j;  1, i = j }

• Note that if the collection {φ1(x), φ2(x), · · · , φm(x), · · · } is orthogonal on [a, b] and ||φi(x)|| ≠ 0, then the collection

{ φ1(x)/||φ1(x)||, φ2(x)/||φ2(x)||, · · · , φm(x)/||φm(x)||, · · · }

is orthonormal on [a, b].
Now that we have function spaces equipped with an inner product, we seek a basis for the space. For an n-dimensional space we need n basis vectors. For an infinite dimensional space, how many will we need? How do we know when we have enough? We will provide some answers to these questions later. Let's assume that we have a basis of functions {φn(x)}_{n=1}^∞. Given a function f (x), how can we go about finding the components of f in this basis?
In other words, let

f (x) = Σ_{n=1}^∞ cn φn(x).

How do we find the cn's?


Formally, we take the inner product of f (x) with each φj(x) and use the properties of the inner product to find

< φj, f > = < φj, Σ_{n=1}^∞ cn φn > = < φj, c1 φ1 + c2 φ2 + · · · + cj φj + · · · >
          = c1 < φj, φ1 > + c2 < φj, φ2 > + · · · + cj < φj, φj > + · · ·
          = Σ_{n=1}^∞ cn < φj, φn > .

If the basis is an orthogonal basis, we then have

< φj, f > = cj < φj, φj > .

Thus,

cj = < φj, f > / < φj, φj > = < φj, f > / ||φj||².
Thus, in summary, if a function f (x) is represented by an expansion over a basis of orthogonal functions {φn(x)}_{n=1}^∞,

f (x) = Σ_{n=1}^∞ cn φn(x),

then the expansion coefficients cn are formally determined as

cn = < φn, f > / ||φn||².

Definition 14 If {φn(x)}_{n=1}^∞ is an orthonormal basis of L2[a, b] and f ∈ L2[a, b], then the numbers < f, φn > are called the generalised Fourier coefficients of f with respect to {φn(x)}_{n=1}^∞. The series

f ∼ Σ_{n=1}^∞ < f, φn > φn(x)

is called the generalised Fourier series.

Remark 14 Often it is more convenient not to require the elements of the basis to be unit vectors. Suppose {φn(x)}_{n=1}^∞ is an orthogonal set; then we have

f ∼ Σ_{n=1}^∞ [ < f, φn > / ||φn(x)||² ] φn(x) = Σ_{n=1}^∞ cn φn(x).

This will be referred to as the general Fourier series expansion, and the cn's are called the Fourier coefficients. Technically, equality only holds when the infinite series converges to the given function on the interval of interest.

Example 13 Show that the set {sin(nx)}_{n=1}^∞ is orthogonal on the interval [−π, π]. Find the coefficients for the expansion given by

f (x) = Σ_{n=1}^∞ bn sin(nx).

We established that the set of functions φn(x) = sin nx, n = 1, 2, · · · , is orthogonal on the interval [−π, π]. Recall that using trigonometric identities, we have for n, m ≥ 1,

< φn, φm > = ∫_{−π}^π sin nx sin mx dx = π δnm .

We determine the expansion coefficients using

bn = < f, φn > / < φn, φn > = (1/π) ∫_{−π}^π f (x) sin nx dx

Definition 15 An orthogonal system {φn(x)}_{n=0}^∞ on [a, b] is complete if, whenever a function f (x) on [a, b] satisfies < f, φn > = 0 for all n ≥ 0, it follows that f ≡ 0 on [a, b], or, more precisely, that

||f ||² = ∫_a^b f ²(x) dx = 0.

• If {φn(x)}_{n=0}^∞ is a complete orthogonal system on [a, b], then every (piecewise continuous) function f (x) on [a, b] has the expansion

f (x) = Σ_{n=0}^∞ [ < f, φn > / ||φn||² ] φn(x)

• If {φn(x)}_{n=0}^∞ is a complete orthogonal system on [a, b], the expansion formula above holds for every (pwc) function f (x) on [a, b] in the L²-sense, but not necessarily pointwise; i.e. for a fixed x ∈ [a, b] the series on the RHS of the above equation might not converge and, even if it does, it might not converge to f (x).

• The system {1, cos(x), cos(2x), cos(3x), · · · } = {cos(kx), k ≥ 0} is orthogonal on [−π, π], but it is not complete on [−π, π].

• Indeed, if f (x) is odd on [−π, π] with ||f || ≠ 0, such as f (x) = x or f (x) = sin x, we have

∫_{−π}^π f (x) cos kx dx = 0,  k ≥ 0,

since f (x) cos kx is odd on the symmetric interval [−π, π].

Theorem 1 The system

T = {1, cos x, sin x, cos 2x, sin 2x, · · · }

is a complete orthogonal system on [−π, π].

• To show the orthogonality of this system, one needs to show that

∫_{−π}^π cos mx cos nx dx = 0,  ∀m, n ≥ 0, m ≠ n   (3.1.1)
∫_{−π}^π sin mx sin nx dx = 0,  ∀m, n ≥ 1, m ≠ n   (3.1.2)
∫_{−π}^π cos mx sin nx dx = 0,  ∀m ≥ 0, n ≥ 1.   (3.1.3)

• Also note that

||1||² = 2π,  || cos nx ||² = || sin nx ||² = π

• Note that the completeness of the above set is difficult to prove.

• Using the previous theorem, it follows that every (pwc) function f (x) on [−π, π] admits the expansion

f (x) ∼ a0/2 + Σ_{n=1}^∞ (an cos nx + bn sin nx),   (3.1.4)

where

a0/2 = < f, 1 > / ||1||² = (1/(2π)) ∫_{−π}^π f (x) dx   (3.1.5)
an = < f, cos nx > / || cos nx ||² = (1/π) ∫_{−π}^π f (x) cos nx dx   (3.1.6)
bn = < f, sin nx > / || sin nx ||² = (1/π) ∫_{−π}^π f (x) sin nx dx   (3.1.7)

• The series expansion (3.1.4) in terms of the trigonometric system T is


called the Fourier series expansion of f (x) on [−π, π].

• More generally, if L > 0 and f (x) is pwc on [−L, L], then it will have a full-range Fourier series expansion on [−L, L] given by

f (x) ∼ a0/2 + Σ_{n=1}^∞ [ an cos(nπx/L) + bn sin(nπx/L) ],   (3.1.8)

where an, bn are defined by

an = (1/L) ∫_{−L}^L f (x) cos(nπx/L) dx,  n = 0, 1, 2, · · ·   (3.1.9)
bn = (1/L) ∫_{−L}^L f (x) sin(nπx/L) dx,  n = 1, 2, · · ·   (3.1.10)

These an, bn are called the Fourier coefficients. Again we note that the term a0/2 is due to the constant function cos(0) = 1, the factor 1/2 being included just for convenience, so that the formula for a0 and that for an, n ≥ 1, agree.
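Formulas (3.1.9) and (3.1.10) can be exercised on a concrete function; for f(x) = x² on [−π, π] the classical values a0 = 2π²/3, an = 4(−1)^n/n², bn = 0 serve as a reference (a numerical sketch of ours):

```python
import numpy as np

# Full-range Fourier coefficients of f(x) = x^2 on [-pi, pi].
Lv = np.pi
x = np.linspace(-Lv, Lv, 200_001)
dx = x[1] - x[0]

def coeff(trig, n):
    y = x ** 2 * trig(n * np.pi * x / Lv)
    return (1.0 / Lv) * ((y[0] + y[-1]) / 2 + y[1:-1].sum()) * dx

a0 = coeff(np.cos, 0)
a = [coeff(np.cos, n) for n in range(1, 5)]
b = [coeff(np.sin, n) for n in range(1, 5)]
```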

We make the following remarks:

Remark 15 Computing the Fourier series (3.1.8) of any given function reduces to the task of evaluating the integrals in (3.1.9) and (3.1.10). A firm grasp of integration by parts is required to complete these calculations successfully.

Remark 16 • In general the Fourier series of f (x) does not converge to f (x) for all values of x, so the equality '=' in (3.1.8) is not always valid; we then use the notation ∼

f (x) ∼ (1/2)a0 + Σ_{n=1}^∞ [ an cos(nπx/L) + bn sin(nπx/L) ]   (3.1.11)

just to indicate that the right-hand side is the Fourier series of f (x). We will discuss this aspect under the convergence of Fourier series.

3.1.3 Periodic Functions


A function f is said to be 2L-periodic if

f (x + 2L) = f (x),  ∀x ∈ R.

Remark 17 • The functions in the set {1, cos nx, sin nx, n ≥ 1} are all 2π-periodic.

• The functions in the set {1, cos(nπx/L), sin(nπx/L), n ≥ 1} are all 2L-periodic.

We thus use the trigonometric Fourier series to represent 2L-periodic functions.

We note that our function f (x) is defined only for x ∈ [−L, L], whereas its
Fourier series is defined for all x ∈ R. We extend the definition of f (x) to
(−∞, ∞) by extending f (x) periodically from [−L, L] to (−∞, ∞).

Definition 16 Suppose that f (x) is defined for x ∈ [−L, L]. The periodic extension of f (x) to (−∞, ∞), denoted by fp(x), is defined by

fp(x) = f (x), x ∈ [−L, L],  and  fp(x + 2L) = fp(x), ∀x ∈ R.

To sketch the periodic extension of f (x), simply sketch f (x) for −L ≤ x ≤ L


and then continually repeat the same pattern with period 2L by translating the
original sketch for −L ≤ x ≤ L.

Example 14 Below we have graphs of f (x) = x, x ∈ [−π, π] and its periodic


extension for x ∈ [−3π, 3π].

Figure 3.1: f (x) = x, x ∈ [−π, π].

3.1.4 Even and Odd functions


Definition 17 A function f(x) is said to be odd if f(−x) = −f(x) for all x. Examples are x, x³, sin x, etc.

Definition 18 A function f(x) is said to be even if f(−x) = f(x) for all x. Examples are x², x⁴, cos x, constant functions f(x) = 1, etc.


3.1. VECTOR SPACES AND INNER-PRODUCT SPACES 43


Figure 3.2: Periodic extension of f (x) = x, x ∈ [−3π, 3π].

Remark 18 • Some functions are neither odd nor even, for example, e^x.

• If f(x) and g(x) are both even, then h(x) = f(x)g(x) is also even, e.g. h(x) = x² cos x is even, since h(−x) = (−x)² cos(−x) = x² cos x = h(x).

• If f(x) and g(x) are both odd, then h(x) = f(x)g(x) is even, e.g. h(x) = x sin x is even, since h(−x) = (−x) sin(−x) = (−x)(− sin x) = x sin x = h(x).

• If f(x) is even and g(x) is odd, then h(x) = f(x)g(x) is odd, e.g. h(x) = x² sin x is odd, since h(−x) = (−x)² sin(−x) = x²(− sin x) = −h(x).
• For an even function f(x), we have f(−x) = f(x), hence

∫_{−L}^{L} f(x) dx = 2 ∫_{0}^{L} f(x) dx.

• For an odd function f(x), we have f(−x) = −f(x), hence

∫_{−L}^{L} f(x) dx = −∫_{0}^{L} f(x) dx + ∫_{0}^{L} f(x) dx = 0.

Now since cos(nπx/L) is an even function and sin(nπx/L) is an odd function, we get:

• For f even, an = (2/L) ∫_{0}^{L} f(x) cos(nπx/L) dx, ∀n ≥ 0, and bn = 0, ∀n ≥ 1.

• For f odd, bn = (2/L) ∫_{0}^{L} f(x) sin(nπx/L) dx, ∀n ≥ 1, and an = 0, ∀n ≥ 0.
Remark 19 When computing the FS of any given function, it is important to check whether the function is even or odd, since this can reduce the computations considerably. We therefore strongly recommend sketching the graph of the function before doing any computations.
In applications, we consider partial sums defined as follows:

Definition 19 The N-th partial sum SN(x) of the Fourier Series of f(x) is defined by

SN(x) = (1/2)a0 + Σ_{n=1}^{N} [an cos(nπx/L) + bn sin(nπx/L)].    (3.1.12)

We sketch SN (x) with f (x) and show that as N → ∞ convergence takes place.
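The coefficient integrals (3.1.9)–(3.1.10) and the partial sums SN(x) are easy to approximate numerically. The following Python sketch uses the midpoint rule; the function names and the grid size m are our own choices, not part of the notes:

```python
import math

def fourier_coeffs(f, L, N, m=4000):
    # Approximate a_n = (1/L) ∫_{-L}^{L} f(x) cos(nπx/L) dx and
    # b_n = (1/L) ∫_{-L}^{L} f(x) sin(nπx/L) dx by the midpoint rule.
    h = 2 * L / m
    xs = [-L + (k + 0.5) * h for k in range(m)]
    a = [(h / L) * sum(f(x) * math.cos(n * math.pi * x / L) for x in xs)
         for n in range(N + 1)]
    b = [(h / L) * sum(f(x) * math.sin(n * math.pi * x / L) for x in xs)
         for n in range(1, N + 1)]
    return a, b

def partial_sum(a, b, L, x):
    # S_N(x) = a_0/2 + Σ_{n=1}^{N} [a_n cos(nπx/L) + b_n sin(nπx/L)]
    s = a[0] / 2
    for n in range(1, len(a)):
        s += (a[n] * math.cos(n * math.pi * x / L)
              + b[n - 1] * math.sin(n * math.pi * x / L))
    return s
```

For f(x) = |x| on [−π, π] this gives a0 ≈ π, bn ≈ 0 (the function is even), and S50(1) ≈ 1, in line with Example 16 below.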

3.1.5 Convergence of Fourier Series


We need to investigate when equality holds between f (x) and its Fourier Series.
This is the question of convergence. We recall first the following definitions:

Definition 20 1. Suppose that the right hand limit of f(x) at x0, denoted by lim_{x→x0+} f(x) = f(x0+), and the left hand limit lim_{x→x0−} f(x) = f(x0−) both exist, that is, they are finite. Then the function f is said to be continuous at x = x0 if

f(x0+) = f(x0−) = f(x0).

2. Suppose that f(x0+) ≠ f(x0−) with 0 < |f(x0+) − f(x0−)| < ∞; then f(x) is said to have a finite jump, or a jump discontinuity, at x0.

Thus, in an interval x ∈ [−L, L] we consider functions, together with their periodic extensions to x ∈ R, that are either continuous or piece-wise continuous with finite jumps.

Example 15 • f(x) = π − x for x ∈ (0, π] and f(x) = 0 for x ∈ [−π, 0]. This is a piece-wise continuous function with a jump discontinuity at x = 0.

• f(x) = tan x, x ∈ [−π, π], is not a piece-wise continuous function, since its jumps at x = ±π/2 are not finite.

We now state the conditions under which a Fourier Series converges to f(x) as a theorem:

Theorem 2 If f(x) and f′(x) are piece-wise continuous on the interval −L ≤ x ≤ L, and f is periodic with period 2L, then the Fourier series of f(x) converges

1. to f(x) where f(x) is continuous, that is

f(x) = (1/2)a0 + Σ_{n=1}^{∞} [an cos(nπx/L) + bn sin(nπx/L)],

if f is continuous at x;

2. to the average of the two one-sided limits, (1/2)[f(x+) + f(x−)], where f(x) has a jump discontinuity at x, that is

(1/2)[f(x+) + f(x−)] = (1/2)a0 + Σ_{n=1}^{∞} [an cos(nπx/L) + bn sin(nπx/L)],

if f has a jump discontinuity at x.

Remark 20 Thus, the Fourier series actually converges to f (x) at points be-
tween −L and +L, where f (x) is continuous. At the end points, x = L or
x = −L, the infinite series converges to the average of the two values of the
periodic extension.

The convergence of the Fourier Series depends on the behaviour of the partial sums SN(x) defined in (3.1.12); we expect that

lim_{N→∞} ||f(x) − SN(x)|| = 0.

Thus, as the number of terms increases, we expect the partial sums to get closer and closer to the function f(x) pointwise.
Example 16 Given the function f(x) = |x|, x ∈ [−π, π]. Sketch f(x). Express f(x) in Fourier Series.

The function is even, hence bn = 0, ∀n ≥ 1, and

an = (2/π) ∫_{0}^{π} f(x) cos(nx) dx.

Since f(x) = |x| = x on [0, π], we have for n = 0,

a0 = (2/π) ∫_{0}^{π} x dx = (2/π) [x²/2]_{0}^{π} = π,

and for n ≥ 1, using integration by parts,

an = (2/π) ∫_{0}^{π} x cos(nx) dx = (2/π){ [x sin(nx)/n]_{0}^{π} − (1/n) ∫_{0}^{π} sin(nx) dx }
   = (2/π)(1/n)[cos(nx)/n]_{0}^{π} = (2/(n²π))[(−1)^n − 1]
   = −4/(π(2k−1)²) for n odd, n = 2k − 1, and 0 for n even, n = 2k.

Thus, the Fourier Series expansion formula

f(x) = (1/2)a0 + Σ_{n=1}^{∞} [an cos nx + bn sin nx]

yields

|x| = π/2 − (4/π) Σ_{n=1}^{∞} cos((2n−1)x)/(2n−1)².

We note that

S1(x) = π/2 − (4/π) cos x,
S2(x) = π/2 − (4/π) cos x − (4/(9π)) cos 3x,
S3(x) = π/2 − (4/π) cos x − (4/(9π)) cos 3x − (4/(25π)) cos 5x, etc.
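The series just obtained can be spot-checked numerically; the truncation level N below is our own choice:

```python
import math

def s_abs(x, N):
    # Partial sum of |x| = π/2 - (4/π) Σ cos((2n-1)x)/(2n-1)², from the text.
    return math.pi / 2 - (4 / math.pi) * sum(
        math.cos((2 * n - 1) * x) / (2 * n - 1) ** 2 for n in range(1, N + 1))
```

With N = 2000 the partial sum agrees with |x| to about three decimal places across [−π, π]; the 1/(2n−1)² decay of the coefficients makes the convergence fast and uniform.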


Figure 3.3: Fourier series of f(x) = |x| together with the partial sums S2(x) and S50(x).


Figure 3.4: Fourier series of f(x) = |x| together with its periodic extension and the partial sum S3(x).

Example 17 Given the function f(x) = x, x ∈ [−π, π]. Sketch f(x). Express f(x) in Fourier Series.

In this case f(x) = x is an odd function; thus we have an = 0, ∀n ≥ 0, and

bn = (2/π) ∫_{0}^{π} x sin(nx) dx
   = (2/π){ [−x cos(nx)/n]_{0}^{π} + (1/n) ∫_{0}^{π} cos(nx) dx }
   = (2/π)(−π/n)(−1)^n = 2(−1)^{n+1}/n.

Hence the Fourier Series is given by

x = 2 Σ_{n=1}^{∞} ((−1)^{n+1}/n) sin nx.

We list a few partial sums:

S1(x) = 2 sin x,
S2(x) = 2 sin x − sin 2x,
S3(x) = 2 sin x − sin 2x + (2/3) sin 3x, etc.
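This series illustrates Theorem 2 nicely: at x = π the periodic extension jumps from π to −π, and every partial sum equals 0 there, the average of the two limits. A quick numerical sketch (truncation levels are our choice):

```python
import math

def s_x(x, N):
    # Partial sum of x = 2 Σ (-1)^{n+1} sin(nx)/n, from the text.
    return 2 * sum((-1) ** (n + 1) * math.sin(n * x) / n
                   for n in range(1, N + 1))
```

Away from the jump the convergence is only O(1/N), much slower than for the even function |x| above, because here the coefficients decay like 1/n rather than 1/n².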


Figure 3.5: Fourier series of f(x) = x together with its partial sums for N = 10, 50.

Figure 3.6: Fourier series of f(x) = x together with its periodic extension and its partial sum corresponding to N = 10.

Example 18 Let f(x) = e^x, x ∈ [−π, π]. Here again L = π; we also note that f is neither even nor odd, so we have to compute all the a0, an and bn coefficients.

Now, a0 = (1/π) ∫_{−π}^{π} e^x dx = (1/π)(e^π − e^{−π}) = (2/π) sinh π. For n ≥ 1, we evaluate using integration by parts, to obtain

an = (1/π) ∫_{−π}^{π} e^x cos(nx) dx
   = (1/π){ [e^x sin(nx)/n]_{−π}^{π} − (1/n) ∫_{−π}^{π} e^x sin(nx) dx }
   = 0 − (1/(nπ)) ∫_{−π}^{π} e^x sin(nx) dx    ← integrating this by parts again
   = −(1/(nπ)){ [−e^x cos(nx)/n]_{−π}^{π} + (1/n) ∫_{−π}^{π} e^x cos(nx) dx }
   = (1/(n²π))(e^π − e^{−π})(−1)^n − (1/n²) an;

thus (1 + 1/n²) an = (2(−1)^n/(n²π)) sinh π, and we have

an = 2(−1)^n sinh π / (π(n² + 1)).

Similarly, bn = (1/π) ∫_{−π}^{π} e^x sin(nx) dx; again using integration by parts repeatedly as above, we obtain

bn = −2(−1)^n n sinh π / (π(n² + 1)).

Thus, the F.S of e^x is given by

e^x ∼ (1/π) sinh π + (2 sinh π/π) Σ_{n=1}^{∞} ((−1)^n/(n² + 1)) (cos nx − n sin nx).

3.1.6 Half-Range Fourier Series


In real applications, functions are often defined only on a positive domain, that is for x ∈ [0, L]. To define the Fourier Series of such functions, we extend the definition of the given function from x ∈ [0, L], commonly referred to as the half-range, to the full-range domain x ∈ [−L, L]. There are only two natural ways of doing that: the even extension and the odd extension of f.

Definition 22 Let f(x), x ∈ [0, L]; then the even extension fe of f(x) to [−L, L] is defined by

fe(x) = f(x), x ∈ [0, L];    fe(x) = f(−x), x ∈ [−L, 0].

Now, since fe(x) is an even function, that is fe(−x) = fe(x), it follows that bn = 0, n ≥ 1, and

an = (1/L) ∫_{−L}^{L} fe(x) cos(nπx/L) dx = (2/L) ∫_{0}^{L} f(x) cos(nπx/L) dx.

The corresponding Fourier series is given by

f(x) = (1/2)a0 + Σ_{n=1}^{∞} an cos(nπx/L),    (3.1.14)

where

an = (2/L) ∫_{0}^{L} f(x) cos(nπx/L) dx, n ≥ 0,

called the Half-Range Cosine series.
Definition 23 Let f(x), x ∈ [0, L]; then the odd extension fo of f(x) to [−L, L] is defined by

fo(x) = f(x), x ∈ [0, L];    fo(x) = −f(−x), x ∈ [−L, 0].

Now, since fo(x) is an odd function, that is fo(−x) = −fo(x), it follows that an = 0, n ≥ 0, and

bn = (1/L) ∫_{−L}^{L} fo(x) sin(nπx/L) dx = (2/L) ∫_{0}^{L} f(x) sin(nπx/L) dx.

The corresponding Fourier series is given by

f(x) = Σ_{n=1}^{∞} bn sin(nπx/L),    (3.1.15)

with

bn = (2/L) ∫_{0}^{L} f(x) sin(nπx/L) dx, n ≥ 1,

called the Half-Range Sine series.
Example 19 Expand f(x) = x, 0 < x < 2, in a half-range (a) Sine series, (b) Cosine series.

• Sine Series: Here L = 2, and the series is f(x) = Σ_{n=1}^{∞} bn sin(nπx/2) with

bn = (2/2) ∫_{0}^{2} f(x) sin(nπx/2) dx = ∫_{0}^{2} x sin(nπx/2) dx
   = [−(2x/(nπ)) cos(nπx/2)]_{0}^{2} + (2/(nπ)) ∫_{0}^{2} cos(nπx/2) dx
   = −(4/(nπ))(−1)^n + (2/(nπ))[(2/(nπ)) sin(nπx/2)]_{0}^{2} = −(4/(nπ))(−1)^n.

Thus, f(x) = (4/π) Σ_{n=1}^{∞} ((−1)^{n+1}/n) sin(nπx/2).

• Cosine Series: Here L = 2, and the series is f(x) = (1/2)a0 + Σ_{n=1}^{∞} an cos(nπx/2) with

an = (2/2) ∫_{0}^{2} f(x) cos(nπx/2) dx = ∫_{0}^{2} x cos(nπx/2) dx, n ≥ 0.

a0 = ∫_{0}^{2} x dx = [x²/2]_{0}^{2} = 2, and for n ≥ 1,

an = ∫_{0}^{2} x cos(nπx/2) dx = [(2x/(nπ)) sin(nπx/2)]_{0}^{2} − (2/(nπ)) ∫_{0}^{2} sin(nπx/2) dx
   = (2/(nπ))[(2/(nπ)) cos(nπx/2)]_{0}^{2} = (4/(n²π²))(cos nπ − 1) = (4/(n²π²))[(−1)^n − 1].

Thus, f(x) = 1 + (4/π²) Σ_{n=1}^{∞} (((−1)^n − 1)/n²) cos(nπx/2) = 1 − (8/π²) Σ_{n=1}^{∞} cos((2n−1)πx/2)/(2n−1)².
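Both half-range series can be spot-checked at interior points; the truncation levels below are our own choices:

```python
import math

def sine_series(x, N):
    # Half-range sine series of f(x) = x on (0, 2), from the text:
    # (4/π) Σ (-1)^{n+1} sin(nπx/2)/n
    return (4 / math.pi) * sum(
        (-1) ** (n + 1) * math.sin(n * math.pi * x / 2) / n
        for n in range(1, N + 1))

def cosine_series(x, N):
    # Half-range cosine series of f(x) = x on (0, 2), from the text:
    # 1 - (8/π²) Σ cos((2n-1)πx/2)/(2n-1)²
    return 1 - (8 / math.pi ** 2) * sum(
        math.cos((2 * n - 1) * math.pi * x / 2) / (2 * n - 1) ** 2
        for n in range(1, N + 1))
```

The cosine series (even extension, continuous) converges much faster than the sine series (odd extension, which has jumps at x = ±2 in its periodic extension); this is the usual trade-off when choosing between the two extensions.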

3.1.7 Bessel’s Inequality


For a finite Fourier Series involving N terms we derive the so-called Bessel
Inequality, in which N can be taken to infinity provided the function f is square
integrable.
Let S be an orthonormal set in an inner-product space, say X. Let {φ1 , φ2 , · · · , φn }
be a finite subset of S and f ∈ X such that ||f ||2 < ∞. Then
n
X
| < f, φi > |2 ≤ ||f ||2 .
i=1
This is called the Bessel’s inequality. Recall that ci =< f, φi >, thus, we have,
n
X
|ci |2 ≤ ||f ||2 .
i=1

Proof 1 We compute

< f − Σ_{i=1}^{n} < f, φi > φi , f − Σ_{i=1}^{n} < f, φi > φi >.

We first observe that, by orthonormality,

< Σ_{i=1}^{n} < f, φi > φi , Σ_{j=1}^{n} < f, φj > φj > = Σ_{i,j=1}^{n} < f, φi > < f, φj > < φi, φj > = Σ_{i=1}^{n} |< f, φi >|².

We then have:

0 ≤ < f − Σ_{i=1}^{n} < f, φi > φi , f − Σ_{i=1}^{n} < f, φi > φi >
  = ||f||² − 2 Σ_{i=1}^{n} |< f, φi >|² + Σ_{i=1}^{n} |< f, φi >|²
  = ||f||² − Σ_{i=1}^{n} |< f, φi >|².

Thus,

Σ_{i=1}^{n} |< f, φi >|² ≤ ||f||²,

which can also be expressed in the form

Σ_{i=1}^{n} |ci|² ≤ ||f||².

For the trigonometric Fourier series, we have: if f(x) is square integrable on the interval [−L, L] with coefficients a0, an, bn, then Bessel's Inequality takes the form

(1/L) ∫_{−L}^{L} [f(x)]² dx ≥ (1/2)a0² + Σ_{n=1}^{∞} [an² + bn²].    (3.1.16)

Thus the sequences formed by the coefficients are bounded; intuitively this means that an, bn → 0 as n → ∞.
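For f(x) = x on [−π, π], with bn = 2(−1)^{n+1}/n and ||f||² = (1/π)∫ x² dx = 2π²/3, the partial sums of Σ bn² indeed stay below the bound; a small sketch (the truncation 1000 is our own choice):

```python
import math

norm_sq = 2 * math.pi ** 2 / 3          # (1/π) ∫_{-π}^{π} x² dx
partial = 0.0
for n in range(1, 1001):
    partial += (2 * (-1) ** (n + 1) / n) ** 2   # b_n² = 4/n²
    assert partial <= norm_sq                   # Bessel: Σ b_n² ≤ ||f||²
```

The gap norm_sq − partial shrinks toward 0 as more terms are added, anticipating Parseval's equality below.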

3.1.8 Parseval’s equality


Let S be an orthonormal set in an inner-product space, say X. Let {φ1 , φ2 , · · · , φn }
be a finite subset of S that is complete and f ∈ X. Then
n
X
| < f, φi > |2 = ||f ||2 ,
i=1
or
n
X
|cn |2 = ||f ||2 ,
i=1
3.1. VECTOR SPACES AND INNER-PRODUCT SPACES 51

this is then called Parseval’s Identity. If the Trigonometric Fourier Series


of f (x) converges to f (x) for all x ∈ [−L, L], then Parseval’s Identity takes the
form
Z L ∞
1 2 1 2 X 2
[f (x)] dx = a0 + [a0 + b2n ]. (3.1.17)
L −L 2 n=1

Proof 2 Exercise!

We use the Parseval’s Identity to prove convergence of some infinity series. We


have the following examples:
Example 20 Recall that we obtained the FS of f(x) = x, x ∈ [−π, π], to be x = 2 Σ_{n=1}^{∞} ((−1)^{n+1}/n) sin nx; here an = 0, ∀n ≥ 0, and bn = 2(−1)^{n+1}/n. Thus, applying Parseval's Identity, we have

(1/π) ∫_{−π}^{π} x² dx = Σ_{n=1}^{∞} bn² = 4 Σ_{n=1}^{∞} 1/n².

Since (1/π)[x³/3]_{−π}^{π} = 2π²/3, this gives

Σ_{n=1}^{∞} 1/n² = π²/6.

Example 21 For the example f(x) = |x|, x ∈ [−π, π], with |x| = π/2 − (4/π) Σ_{n=1}^{∞} cos((2n−1)x)/(2n−1)². Here a0 = π, a_{2n−1} = −4/(π(2n−1)²), all other an = 0, and bn = 0. We have

(1/π) ∫_{−π}^{π} |x|² dx = (1/2)a0² + Σ an²,

that is,

2π²/3 = π²/2 + (16/π²) Σ_{n=1}^{∞} 1/(2n−1)⁴,

hence

Σ_{n=1}^{∞} 1/(2n−1)⁴ = π⁴/96.

Example 22 For the example f(x) = x², x ∈ [−π, π], with x² = π²/3 + 4 Σ_{n=1}^{∞} ((−1)^n/n²) cos nx. Here a0 = 2π²/3, an = 4(−1)^n/n², and bn = 0. Then in this case, one can show that Parseval's Identity reduces to

Σ_{n=1}^{∞} 1/n⁴ = π⁴/90.
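The three sums obtained in Examples 20–22 can be confirmed numerically; the truncation levels below are our own choices:

```python
import math

s2 = sum(1.0 / n ** 2 for n in range(1, 200001))              # approaches π²/6
s4_odd = sum(1.0 / (2 * n - 1) ** 4 for n in range(1, 2001))  # approaches π⁴/96
s4 = sum(1.0 / n ** 4 for n in range(1, 2001))                # approaches π⁴/90
```

The 1/n⁴ series need far fewer terms than the 1/n² series for the same accuracy, since their tails decay like 1/N³ rather than 1/N.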

3.1.9 Complex Fourier Series

The Fourier Series of f(x), x ∈ [−L, L], can be written in a complex form which is more compact and at times simpler to work with. Recall that

f(x) = a0/2 + Σ_{n=1}^{∞} [an cos(nπx/L) + bn sin(nπx/L)].    (3.1.18)

Using

cos(nπx/L) = (1/2)(e^{inπx/L} + e^{−inπx/L}),
sin(nπx/L) = (1/(2i))(e^{inπx/L} − e^{−inπx/L}),

we have

an cos(nπx/L) + bn sin(nπx/L) = (1/2)(an − ibn) e^{inπx/L} + (1/2)(an + ibn) e^{−inπx/L}.    (3.1.19)

Inserting into (3.1.18), we obtain

f(x) = α0 + Σ_{n=1}^{∞} [αn e^{inπx/L} + βn e^{−inπx/L}],    (3.1.20)

where α0 = (1/2)a0, αn = (1/2)(an − ibn), βn = (1/2)(an + ibn). Computing the coefficients,
we have

αn = (1/2)[(1/L) ∫_{−L}^{L} f(x) cos(nπx/L) dx − (i/L) ∫_{−L}^{L} f(x) sin(nπx/L) dx]
   = (1/(2L)) ∫_{−L}^{L} f(x) e^{−inπx/L} dx,

and

βn = (1/2)[(1/L) ∫_{−L}^{L} f(x) cos(nπx/L) dx + (i/L) ∫_{−L}^{L} f(x) sin(nπx/L) dx]
   = (1/(2L)) ∫_{−L}^{L} f(x) e^{inπx/L} dx.

It follows that βn = α_{−n}; thus we have

f(x) = α0 + Σ_{n=1}^{∞} αn e^{inπx/L} + Σ_{n=1}^{∞} α_{−n} e^{−inπx/L}    (3.1.21)
     = α0 + Σ_{n=1}^{∞} αn e^{inπx/L} + Σ_{n=−∞}^{−1} αn e^{inπx/L}    (3.1.22)
     = Σ_{n=−∞}^{∞} αn e^{inπx/L}.    (3.1.23)

Thus, we have the pair

f(x) = Σ_{n=−∞}^{∞} αn e^{inπx/L},    (3.1.24)

αn = (1/(2L)) ∫_{−L}^{L} f(x) e^{−inπx/L} dx,    (3.1.25)

called the Complex Fourier Series.


Example 23 Compute the complex Fourier series of f(x) = e^x, x ∈ [−π, π].

By definition, we have f(x) = Σ_{n=−∞}^{∞} αn e^{inx}, where

αn = (1/(2π)) ∫_{−π}^{π} e^x e^{−inx} dx = (1/(2π)) ∫_{−π}^{π} e^{(1−in)x} dx
   = (1/(2π(1−in))) [e^{(1−in)x}]_{−π}^{π} = (1/(2π(1−in))) [e^{π(1−in)} − e^{−π(1−in)}]
   = ((−1)^n/(2π(1−in))) (e^π − e^{−π}) = (−1)^n sinh π / (π(1−in)) = ((−1)^n (1+in)/(π(1+n²))) sinh π.

Thus, we have

e^x = (sinh π/π) Σ_{n=−∞}^{∞} ((−1)^n (1+in)/(1+n²)) e^{inx}.
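The closed form for αn can be checked against direct numerical integration using complex arithmetic; the midpoint rule and the grid size m are our own choices:

```python
import cmath
import math

def alpha(n, m=20000):
    # α_n = (1/2π) ∫_{-π}^{π} e^x e^{-inx} dx, midpoint rule.
    h = 2 * math.pi / m
    total = 0j
    for k in range(m):
        x = -math.pi + (k + 0.5) * h
        total += cmath.exp(x - 1j * n * x)
    return total * h / (2 * math.pi)

def alpha_formula(n):
    # α_n = (-1)^n (1 + in) sinh(π) / (π (1 + n²)), as in Example 23.
    return (-1) ** n * (1 + 1j * n) * math.sinh(math.pi) / (math.pi * (1 + n * n))
```

Note that α_{−n} is the complex conjugate of αn, as must happen for a real-valued f; this is visible in the formula since replacing n by −n conjugates the factor (1 + in).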
Chapter 4

Fourier Transforms

The Fourier series developed above are primarily useful when dealing with func-
tions defined on a finite interval, say, [−L, L] or [0, L]. The functions consid-
ered are supposed to be periodic. Fourier Series are used when solving par-
tial differential equations on finite domains defined above, in particular for ini-
tial/boundary value problems defined on [0, L]. If the functions are non-periodic
and defined on (−∞, ∞) or [0, ∞), Fourier Transforms or Fourier Integral Trans-
forms are developed.
The Fourier Transforms are used when dealing with partial differential equations over spatial domains that are infinite or semi-infinite. In this case, the Fourier transform technique converts the pde into a simpler ode in terms of the new variable, the transform variable, which can then be solved. Taking the inverse transform of the solution of the ode then yields the solution of the corresponding pde.
We start by defining what the Fourier Transform is and computing some
Fourier transforms of basic functions.

Definition 24 The Fourier Transform of a function f (x), x ∈ (−∞, ∞) de-


noted by F(f (x)) or F (µ), is an integral transform defined by
Z ∞
F(f (x)) = f (x)e−iµx dx = F (µ). (4.0.1)
−∞

This is a full-range Fourier Transform of f (x).

Definition 25 The inverse Fourier Transform of F (µ) denoted by F −1 (F (µ))


is an integral transform defined by
Z ∞
−1 1
F (F (µ)) = F (µ)eiµx dµ = f (x) (4.0.2)
2π −∞

Remark 21 In general f can be a piece-wise continuous function; at a point of jump discontinuity the inverse transform returns the average of the one-sided limits:

F^{−1}(F(µ)) = (1/(2π)) ∫_{−∞}^{∞} F(µ) e^{iµx} dµ = (1/2)[f(x+) + f(x−)].    (4.0.3)

56 CHAPTER 4. FOURIER TRANSFORMS

 0, −∞ < x < −a
Example 24 Consider the square wave f (x) = 1, −a < x < a , where
0, a < x < ∞.

a > 0. Find its Fourier Transform

By definition
Z −a Z a Z ∞
−iµx −iµx
F(f (x)) = 0·e dx + 1·e dx + 0 · e−iµx dx
−∞ −a a
1 aiµ sin µa
= (e − e−aiµ ) = 2 = F (µ)
iµ µ
Using the inverse transform, we have that:


 0, −∞ < x < −a
1
Z ∞
sin µa iµx

 1, −a < x < a
e dµ = f (x) = 1
π −∞ µ  , x = ±a

 2
0, a < x < ∞.
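The transform of the square wave can be checked numerically; the value a = 1.5 and the grid size here are our own illustrative choices:

```python
import cmath
import math

def ft_square(mu, a=1.5, m=20000):
    # ∫_{-a}^{a} e^{-iµx} dx by the midpoint rule; should equal 2 sin(µa)/µ.
    h = 2 * a / m
    return h * sum(cmath.exp(-1j * mu * (-a + (k + 0.5) * h))
                   for k in range(m))
```

Since the integrand's imaginary part is odd over the symmetric interval, the numerical result is essentially real, as the closed form 2 sin(µa)/µ predicts.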

Example 25 Let f(x) = x^n e^{−ax} for x > 0 and f(x) = 0 for x < 0, where a > 0, n > −1. Find its Fourier Transform.

Solution: Exercise!
Example 26 Let f(x) = e^{−a|x|} where a > 0. Find its Fourier Transform.

By definition,

F(f(x)) = ∫_{−∞}^{∞} e^{−a|x|} e^{−iµx} dx
        = ∫_{−∞}^{0} e^{(a−iµ)x} dx + ∫_{0}^{∞} e^{−(a+iµ)x} dx = 1/(a−iµ) + 1/(a+iµ)
        = 2a/(a² + µ²) = F(µ).

Using the inverse transform, we have that

(a/π) ∫_{−∞}^{∞} e^{iµx}/(a² + µ²) dµ = f(x) = e^{−a|x|},

or simply

∫_{−∞}^{∞} e^{iµx}/(a² + µ²) dµ = (π/a) e^{−a|x|}.

4.0.1 Properties of The Fourier Transform


1. The Fourier Transform operator F is a linear operator. That is, if a, b are any constants, with F{f(x)} = F(µ) and F{g(x)} = G(µ), then

F{af(x) + bg(x)} = aF{f(x)} + bF{g(x)} = aF(µ) + bG(µ).



2. Let n be a positive integer. Suppose f^{(n)}(x) is a piecewise continuous function on the real line with

lim_{x→∞} f^{(k)}(x) = lim_{x→−∞} f^{(k)}(x) = 0,

for k = 0, 1, 2, 3, · · · , n − 1; then

F{f^{(n)}(x)} = (iµ)^n F(µ).

Thus, F{f′(x)} = (iµ)¹ F(µ) = iµ F(µ) and F{f″(x)} = (iµ)² F(µ) = −µ² F(µ), etc.

Remark 22 • Note how derivatives are replaced by multiplication by (iµ)^n.

• When applied to an equation, an ordinary differential equation is transformed into an algebraic equation.

• Since we will be dealing with pdes, for a function of two variables, say u(x, t), we have F{ux(x, t)} = iµ U(µ, t) and F{uxx(x, t)} = −µ² U(µ, t), and also F{ut(x, t)} = dU/dt (µ, t) as well as F{utt(x, t)} = d²U/dt² (µ, t).

Thus, for an equation of the form ut + ux = u we have dU/dt + iµU = U, and for utt + uxx = u, when transformed, we have d²U/dt² − µ²U = U. Thus a pde is transformed into an ode by the Fourier Transform, since all the derivatives with respect to the space variable x are removed and replaced by algebraic factors, which leaves the equation depending on derivatives with respect to t only, hence an ode.

4.0.2 The Convolution product and theorem


In this section we introduce the convolution of two functions f (x) and g(x) which
we denote by (f ∗ g)(x). The convolution product together with the Convolution
Theorem is important as it enables us to determine the inverse Fourier transform
of a product of two transformed functions:

F −1 (F (µ)G(µ)) = (f ∗ g)(x)

The Convolution
Let f (x) and g(x) be two functions of x, x ∈ (−∞, ∞). The convolution of f (x)
and g(x) is also a function of x, denoted by (f ∗ g)(x) and is defined by the
relation
Z ∞
(f ∗ g)(x) = f (x − τ )g(τ )dτ (4.0.4)
−∞

Example 27 Show that (f ∗ g)(x) = (g ∗ f )(x).

By definition, we have that

(f ∗ g)(x) = ∫_{−∞}^{∞} f(x − τ) g(τ) dτ.

If we let x − τ = v, so that τ = x − v and dτ = −dv, we have

(f ∗ g)(x) = −∫_{∞}^{−∞} f(v) g(x − v) dv = ∫_{−∞}^{∞} f(v) g(x − v) dv = (g ∗ f)(x).

Now we can state the convolution theorem:

Theorem 3 Let F (µ) and G(µ) be the Fourier transforms for f (x) and g(x)
respectively. Then the inverse Fourier transform of F (µ)G(µ) is given by

F −1 (F (µ)G(µ)) = (f ∗ g)(x)

To prove the above result, we start from the definition:

F{(f ∗ g)(x)} = ∫_{−∞}^{∞} [∫_{−∞}^{∞} f(x − τ) g(τ) dτ] e^{−iµx} dx
             = ∫_{−∞}^{∞} g(τ) [∫_{−∞}^{∞} f(x − τ) e^{−iµx} dx] dτ.

If we let x − τ = v in the inner integral (so x = v + τ), we have

F{(f ∗ g)(x)} = ∫_{−∞}^{∞} g(τ) [∫_{−∞}^{∞} f(v) e^{−iµ(v+τ)} dv] dτ
             = ∫_{−∞}^{∞} g(τ) e^{−iµτ} dτ · ∫_{−∞}^{∞} f(v) e^{−iµv} dv
             = G(µ) F(µ);

thus

(f ∗ g)(x) = F^{−1}{G(µ) F(µ)}.

We will use this result mostly when we solve partial differential equations.
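The theorem can be illustrated concretely: for f = g = the indicator of [−1, 1], the convolution is the triangle (f ∗ g)(x) = max(0, 2 − |x|), so its transform should equal F(µ)G(µ) = (2 sin µ/µ)². A numerical sketch (the grid size is our own choice):

```python
import cmath
import math

def ft_triangle(mu, m=40000):
    # FT of (f*g)(x) = max(0, 2-|x|) over its support [-2, 2], midpoint rule.
    h = 4.0 / m
    total = 0j
    for k in range(m):
        x = -2.0 + (k + 0.5) * h
        total += max(0.0, 2.0 - abs(x)) * cmath.exp(-1j * mu * x)
    return total * h
```

The agreement with (2 sin µ/µ)² at several values of µ is a direct check that convolution in x becomes multiplication in µ.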

4.0.3 Half-Range Fourier Transforms: The Fourier Cosine and Sine Transforms

When the function f(x) is defined only in the domain x > 0, that is on the half-range, we obtain either the Fourier Sine or the Fourier Cosine Transform.

To determine the Fourier Transform of a function f(x), x > 0, we extend the definition of f(x) to the entire domain −∞ < x < ∞ using either an even extension or an odd extension. Suppose that f(−x) = f(x) for negative values of x; then by definition we have

F{f(x)} = ∫_{−∞}^{∞} f(x) e^{−iµx} dx
        = ∫_{−∞}^{0} f(−x) e^{−iµx} dx + ∫_{0}^{∞} f(x) e^{−iµx} dx
        = ∫_{0}^{∞} f(x) e^{iµx} dx + ∫_{0}^{∞} f(x) e^{−iµx} dx
        = ∫_{0}^{∞} f(x) (e^{iµx} + e^{−iµx}) dx
        = 2 ∫_{0}^{∞} f(x) cos(µx) dx;

thus we define

Fc(µ) = (1/2) F{f(x)} = ∫_{0}^{∞} f(x) cos(µx) dx

to be the Fourier cosine transform of f(x), x ∈ [0, ∞). A similar derivation can be made for the Fourier sine transform. This leads to the following definitions:

Definition 26 Let f(x), x > 0; then the Fourier Cosine Transform of f(x) is defined by

Fc(µ) = ∫_{0}^{∞} f(x) cos(µx) dx,

and its inverse is given by

f(x) = (2/π) ∫_{0}^{∞} Fc(µ) cos(µx) dµ.

Definition 27 Let f(x), x > 0; then the Fourier Sine Transform of f(x) is defined by

Fs(µ) = ∫_{0}^{∞} f(x) sin(µx) dx,

and its inverse is given by

f(x) = (2/π) ∫_{0}^{∞} Fs(µ) sin(µx) dµ.

Example 28 Find the Fourier Sine and Cosine Transforms of the function f(x) = e^{−ax}, a > 0.

We note that

∫_{0}^{∞} e^{−ax} e^{iµx} dx = ∫_{0}^{∞} e^{−ax} cos(µx) dx + i ∫_{0}^{∞} e^{−ax} sin(µx) dx = Fc(µ) + i Fs(µ),

but

∫_{0}^{∞} e^{−ax} e^{iµx} dx = ∫_{0}^{∞} e^{−x(a−iµ)} dx = [−(1/(a−iµ)) e^{−x(a−iµ)}]_{0}^{∞}
 = 1/(a−iµ) = (a+iµ)/(a²+µ²) = Fc(µ) + i Fs(µ).

Thus, equating the real and imaginary parts, we have

Fc(µ) = a/(a²+µ²),    Fs(µ) = µ/(a²+µ²).

Using the inverse transforms, we have

e^{−ax} = (2a/π) ∫_{0}^{∞} cos(µx)/(a²+µ²) dµ,

and

e^{−ax} = (2/π) ∫_{0}^{∞} µ sin(µx)/(a²+µ²) dµ.
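Both transforms can be spot-checked numerically for a = 1; the truncation point X (valid because e^{−aX} is negligible) and the grid size are our own choices:

```python
import math

def fc_fs(a, mu, X=40.0, m=40000):
    # Truncate ∫_0^∞ e^{-ax} cos(µx) dx and ∫_0^∞ e^{-ax} sin(µx) dx at X,
    # then apply the midpoint rule.
    h = X / m
    fc = fs = 0.0
    for k in range(m):
        x = (k + 0.5) * h
        w = math.exp(-a * x)
        fc += w * math.cos(mu * x)
        fs += w * math.sin(mu * x)
    return fc * h, fs * h
```

With a = 1 and µ = 2 the results match the closed forms Fc = a/(a²+µ²) = 1/5 and Fs = µ/(a²+µ²) = 2/5.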
The operational rules for solving partial differential equations are given below:

Theorem 4 Let f(x) and f′(x) be continuous on [0, ∞), with f(x) → 0 and f′(x) → 0 as x → ∞. Suppose that f″(x) is piecewise continuous on [0, ∞). Then we have

Fs{f″(x)} = µ f(0) − µ² Fs(µ),
Fc{f″(x)} = −f′(0) − µ² Fc(µ).

The above results can easily be proved by integration by parts.

We note that, for a function of two variables, say u(x, t), the above results reduce to

Fs{uxx(x, t)} = µ u(0, t) − µ² Us(µ, t),
Fc{uxx(x, t)} = −ux(0, t) − µ² Uc(µ, t).

Thus, when solving half-range problems using the Fourier Sine and Cosine Transform method, the nature of the boundary condition at x = 0 will help us decide whether to use the Fourier Sine or Cosine transform:

• if the condition u(0, t) is given, then we use the Fourier Sine Transform;
• if instead we have a condition of the form ux(0, t), we make use of the Fourier Cosine Transform.
Chapter 5

Initial Boundary Value Problems (IBVPs) on infinite and semi-infinite domains: The Fourier Transform Method

We have so far solved IBVPs on a finite domain [0, L] by the method of separation of variables, which reduced the pde to a system of ordinary differential equations whose solutions, together with the superposition principle and Fourier Series, lead to the solution of the pde in terms of infinite series. We now consider problems defined on infinite and semi-infinite domains, to which we apply the method of Fourier transforms. The pde is transformed into a simple ordinary differential equation in terms of the transform variable, and the associated initial conditions are transformed likewise. The solution of the pde is then obtained as the inverse Fourier transform of the solution of the ode. This will be illustrated by examples; we focus mostly on the heat, wave and Laplace equations.

5.0.1 Heat Equation


We now consider how the full-range Fourier Transform can be used to solve
initial-boundary-value problems. Let us consider the heat equation on an infinite
domain:

ut = kuxx , − ∞ < x < ∞, t > 0


u(x, 0) = f (x), − ∞ < x < ∞

we also assume that

u(x, t) → 0, ux (x, t) → 0 as x → ±∞.

This gives the boundedness conditions of the problem at ±∞.


To solve the above problem, since it has two independent variables x and t, we need to decide which variable to use for the transformation. In this case we transform in x, since x ranges over (−∞, ∞), which matches the definition of the Fourier Transform. Now, let F{u(x, t)} = U(µ, t) be the Fourier Transform of u(x, t) with respect to x. Transforming the pde, we have
F{ut(x, t) − kuxx} = F{0}
F{ut(x, t)} − k F{uxx(x, t)} = 0
∂U(µ, t)/∂t − k(iµ)² U(µ, t) = 0
∂U(µ, t)/∂t + kµ² U(µ, t) = 0,

which is an ordinary differential equation in t, with µ as a parameter.


Transforming the initial conditions, we have

F{u(x, 0)} = F{f (x)} → U (µ, 0) = F (µ).

Thus, we now have to solve the initial-value problem

∂U(µ, t)/∂t + kµ² U(µ, t) = 0,    (5.0.1)
U(µ, 0) = F(µ).    (5.0.2)

Solving (5.0.1), we obtain

U(µ, t) = C(µ) e^{−kµ²t};

applying the initial condition, we have U(µ, 0) = F(µ) = C(µ), thus

U(µ, t) = F(µ) e^{−kµ²t}.

To recover the solution of the original problem, we take the inverse transform:

u(x, t) = F^{−1}{U(µ, t)} = (1/(2π)) ∫_{−∞}^{∞} F(µ) e^{−kµ²t} e^{iµx} dµ;

this completes the solution of the problem.

Example 29 Solve the initial-value problem

ut = kuxx, −∞ < x < ∞, t > 0,
u(x, 0) = e^{−|x|}, −∞ < x < ∞.

Transforming the pde, we obtain

∂U(µ, t)/∂t + kµ² U(µ, t) = 0,

and the initial condition yields

F{u(x, 0)} = F{e^{−|x|}} → U(µ, 0) = ∫_{−∞}^{∞} e^{−|x|} e^{−iµx} dx = 2/(1 + µ²).

Thus, solving the above ode and using the above initial condition, we obtain

U(µ, t) = (2/(1 + µ²)) e^{−µ²kt}.

The solution of the pde, given by u(x, t) = F^{−1}{U(µ, t)}, is therefore

u(x, t) = (1/(2π)) ∫_{−∞}^{∞} (2/(1+µ²)) e^{−kµ²t} e^{iµx} dµ = (1/π) ∫_{−∞}^{∞} e^{−kµ²t+iµx}/(1+µ²) dµ.

We note that the right-hand side of the solution looks complex, yet the left-hand side is real. We can extract the real solution by verifying that the imaginary part always vanishes. Note:

(1/π) ∫_{−∞}^{∞} e^{−kµ²t+iµx}/(1+µ²) dµ
 = (1/π) ∫_{−∞}^{∞} e^{−kµ²t} cos(µx)/(1+µ²) dµ + (i/π) ∫_{−∞}^{∞} e^{−kµ²t} sin(µx)/(1+µ²) dµ
 = (2/π) ∫_{0}^{∞} e^{−kµ²t} cos(µx)/(1+µ²) dµ,

since the integrand of the second integral is odd, hence that integral is 0.

Example 30 Solve the initial-value problem

ut = uxx, −∞ < x < ∞, t > 0,
u(x, 0) = 1 for 0 ≤ x ≤ 1; −1 for −1 ≤ x < 0; 0 for |x| > 1.

Show that the solution is given by

u(x, t) = (2/π) ∫_{0}^{∞} ((1 − cos µ)/µ) e^{−µ²t} sin(µx) dµ.

It is standard to leave the answer in integral form, just as solutions obtained by the Fourier Series method were left in series form. Some of the integrals are not easy to handle analytically; they may actually require numerical evaluation.

5.0.2 Wave Equation

We now consider the vibrations of an infinite string modeled by the IVP:

utt = c²uxx, −∞ < x < ∞, t > 0,
u(x, 0) = f(x), −∞ < x < ∞,
ut(x, 0) = g(x), −∞ < x < ∞.

Transforming the pde w.r.t. x, we obtain

∂²U/∂t² (µ, t) = −µ²c² U(µ, t), t > 0,
U(µ, 0) = F(µ),
∂U/∂t (µ, 0) = G(µ).

Solving this ode in t, we obtain

U(µ, t) = A(µ) cos(µct) + B(µ) sin(µct).

Applying the accompanying conditions, we obtain

U(µ, 0) = A(µ) = F(µ),    Ut(µ, 0) = B(µ)µc = G(µ);

thus

U(µ, t) = F(µ) cos(µct) + (G(µ)/(µc)) sin(µct),

and hence the solution

u(x, t) = F^{−1}{U(µ, t)} = (1/(2π)) ∫_{−∞}^{∞} [F(µ) cos(µct) + (G(µ)/(µc)) sin(µct)] e^{iµx} dµ.

Example 31 Solve the initial-value problem

utt = uxx, −∞ < x < ∞, t > 0,
u(x, 0) = e^{−|x|}, −∞ < x < ∞,
ut(x, 0) = 1 for |x| ≤ 1; 0 for |x| > 1.

Here F(µ) = 2/(1+µ²) and G(µ) = 2 sin µ/µ. Thus, with c = 1, the solution is given by

u(x, t) = F^{−1}{U(µ, t)} = (1/(2π)) ∫_{−∞}^{∞} [(2/(1+µ²)) cos(µt) + (2 sin µ/µ²) sin(µt)] e^{iµx} dµ
        = (1/π) ∫_{0}^{∞} [(2/(1+µ²)) cos(µt) + (2 sin µ/µ²) sin(µt)] cos(µx) dµ.

5.0.3 Half-Range Problems: The Fourier Sine and Cosine Transform Method

For problems posed on the semi-infinite domain x > 0 we use the Fourier Sine and Cosine Transforms of Section 4.0.3, whose operational rules for the spatial derivative are

Fs{uxx(x, t)} = µ u(0, t) − µ² Us(µ, t),
Fc{uxx(x, t)} = −ux(0, t) − µ² Uc(µ, t).

Thus the nature of the boundary condition at x = 0 decides which transform to use:

• if the condition u(0, t) is given, then we use the Fourier Sine Transform;
• if instead we have a condition of the form ux(0, t), we make use of the Fourier Cosine Transform.

Let us consider some examples

Example 33 Solve the heat equation

ut = uxx, 0 < x < ∞, t > 0,
ux(0, t) = 0,
u(x, 0) = e^{−x}.

Since in this case we have the boundary condition ux(0, t) = 0, we make use of the Fourier Cosine Transform. Thus, we let

Fc{u(x, t)} = Uc(µ, t);

then, transforming the pde and the initial condition, we have

Fc{ut(x, t)} = Fc{uxx(x, t)},
∂Uc/∂t (µ, t) = −ux(0, t) − µ² Uc(µ, t) = −µ² Uc(µ, t),

and

Fc{u(x, 0)} = Uc(µ, 0) = Fc{e^{−x}} = 1/(1 + µ²).

Thus, we need to solve the IVP

∂Uc/∂t (µ, t) = −µ² Uc(µ, t),    (5.0.3)
Uc(µ, 0) = 1/(1 + µ²).    (5.0.4)

Solving (5.0.3), we obtain Uc(µ, t) = C(µ) e^{−µ²t}. Applying the initial condition (5.0.4), we obtain C(µ) = 1/(1 + µ²); thus

Uc(µ, t) = (1/(1 + µ²)) e^{−µ²t},

and the solution of the pde is then obtained by taking the inverse Fourier Cosine transform:

u(x, t) = Fc^{−1}{Uc(µ, t)} = (2/π) ∫_{0}^{∞} Uc(µ, t) cos(µx) dµ = (2/π) ∫_{0}^{∞} (e^{−µ²t}/(1 + µ²)) cos(µx) dµ.

Example 34 Solve the heat equation

ut = uxx + u, 0 < x < ∞


u(0, t) = 0
u(x, 0) = e−x

Since in this case we have the boundary condition u(0, t), we make use of the
Fourier Sine Transform.
Thus, we let
Fs{u(x, t)} = Us(µ, t),

then, transforming the pde and the initial condition, we have

Fs{ut(x, t)} = Fs{uxx(x, t) + u}

∂Us/∂t (µ, t) = µ u(0, t) − µ²Us(µ, t) + Us(µ, t) = (1 − µ²)Us(µ, t)

and

Fs{u(x, 0)} = Us(µ, 0) = Fs{e^{−x}} = µ/(1 + µ²).

Thus, we need to solve the IVP

∂Us/∂t (µ, t) = (1 − µ²)Us(µ, t)               (5.0.5)

Us(µ, 0) = µ/(1 + µ²)                          (5.0.6)

Solving (5.0.5), we obtain Us(µ, t) = C(µ)e^{(1−µ²)t}. Applying the initial
condition (5.0.6), we obtain C(µ) = µ/(1 + µ²); thus, we have

Us(µ, t) = (µ/(1 + µ²)) e^{(1−µ²)t},

the solution of the pde is then obtained by taking the inverse Fourier Sine
transform:

u(x, t) = Fs⁻¹{Us(µ, t)} = (2/π) ∫₀^∞ Us(µ, t) sin µx dµ
        = (2/π) ∫₀^∞ (µ/(1 + µ²)) e^{(1−µ²)t} sin µx dµ.
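The sine-inversion integral can be checked the same way (an added sketch, assuming SciPy); at t = 0 it must reproduce the initial condition e^{−x}, since ∫₀^∞ µ sin(µx)/(1 + µ²) dµ = (π/2)e^{−x} for x > 0:

```python
import numpy as np
from scipy.integrate import quad

def u(x, t):
    # u(x,t) = (2/pi) * integral_0^inf mu e^{(1-mu^2)t} sin(mu x) / (1+mu^2) dmu,
    # with the oscillatory sin factor handled by quad's QAWF weight
    val, _ = quad(lambda mu: mu * np.exp((1 - mu**2) * t) / (1 + mu**2),
                  0, np.inf, weight="sin", wvar=x)
    return 2.0 / np.pi * val

print(u(1.0, 0.0))   # ≈ e^{-1} ≈ 0.3679
```

Note the slow 1/µ decay of the integrand at t = 0; the QAWF weight routine is designed for exactly such slowly convergent Fourier integrals.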

5.1 The Laplace Equation


We have so far considered the wave equation, which is hyperbolic, and the heat
equation, which is parabolic, and we solved them both on finite, infinite and
semi-infinite domains. We now consider the third important class of pdes, the
Laplace Equation, which is elliptic.
In a two-dimensional space, the Laplace equation is given by

∇²u = uxx + uyy = 0.                             (5.1.1)

Here ∇²u is called the Laplacian of u.


Note that here ∆(x, y) = b² − ac = 0 − (1)(1) = −1 < 0. Hence the equation is
elliptic.
Solutions of the Laplace equation (5.1.1) are called harmonic functions.
The Laplacian appears in both the heat and the wave equation:

ut = κ∇²u
utt = c²∇²u

Laplace's equation represents the steady state of a field that depends on two
or more independent variables, which are typically spatial. The steady state
of both equations above is obtained by setting, respectively, ut = 0 and
utt = 0, leading, as before, in two dimensions to:

∇²u = uxx + uyy = 0,

or in three dimensions:

∇²u = uxx + uyy + uzz = 0.
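The Laplacian, and the notion of a harmonic function introduced above, can be illustrated numerically with the standard 5-point finite-difference stencil (a quick added sketch, not part of the notes):

```python
import numpy as np

def laplacian(u, x, y, h=1e-3):
    # Discrete 5-point Laplacian: (u(x±h,y) + u(x,y±h) - 4u(x,y)) / h^2
    return (u(x + h, y) + u(x - h, y) + u(x, y + h) + u(x, y - h)
            - 4 * u(x, y)) / h**2

u = lambda x, y: x**2 - y**2        # a classical harmonic function
print(laplacian(u, 0.3, 0.7))       # ≈ 0, since uxx + uyy = 2 - 2 = 0
```

By contrast, the non-harmonic function x² + y² gives a discrete Laplacian of 4 at every point, matching uxx + uyy = 2 + 2 = 4.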

We demonstrate for the problem in a finite domain, say:

uxx + uyy = 0,  (x, y) ∈ Ω,

where
Ω = {(x, y) | 0 < x < L1, 0 < y < L2}

and the boundary conditions are given by:

u(x, 0) = f1(x), u(x, L2) = f2(x), u(0, y) = g1(y), u(L1, y) = g2(y).

The above problem can be decomposed into a sequence of four boundary value
problems, each having only one boundary segment with inhomogeneous boundary
conditions while the remainder of the boundary is subject to homogeneous
boundary conditions. These latter problems can then be solved by separation
of variables.

5.1.1 The Laplace Equation in a half-plane


We now consider the boundary value problem corresponding to the Laplace
equation. We consider first the case of the Laplace equation defined on a
domain infinite in x and semi-infinite in y, that is, Laplace's equation in
the upper half-plane:

∇²u = uxx + uyy = 0,  −∞ < x < ∞, y > 0,

with the corresponding boundary conditions:

u(x, 0) = f(x),  −∞ < x < ∞,

lim_{x→±∞} u(x, y) = 0,

lim_{y→∞} u(x, y) = 0.

To solve this problem we take the Fourier transform in x:

U(µ, y) = ∫_{−∞}^{∞} u(x, y)e^{−iµx} dx

u(x, y) = (1/2π) ∫_{−∞}^{∞} U(µ, y)e^{iµx} dµ

Taking the Fourier transform in x of the Laplace equation, we obtain the ODE

∂²U/∂y² − µ²U = 0                                (5.1.2)

whose general solution is given by

U(µ, y) = a(µ)e^{µy} + b(µ)e^{−µy}.              (5.1.3)

Next we use the boundary conditions to determine a(µ) and b(µ). Since

lim_{y→∞} u(x, y) = 0  →  lim_{y→∞} U(µ, y) = 0,

we need a(µ) = 0 if µ > 0 and b(µ) = 0 if µ < 0. Therefore, (5.1.3) becomes

U(µ, y) = { a(µ)e^{µy},   µ < 0,
          { b(µ)e^{−µy},  µ > 0,                 (5.1.4)

which may be written in the compact form

U(µ, y) = c(µ)e^{−|µ|y}                          (5.1.5)

where c(µ) = a(µ) for µ < 0 and c(µ) = b(µ) for µ > 0.


From the boundary condition u(x, 0) = f (x) we have
Z ∞
U (µ, 0) = F{f (x)} = f (x)e−iµx dx = F (µ). (5.1.6)
−∞

Using this together with (5.1.5), we obtain

c(µ) = F(µ).

Thus,

u(x, y) = (1/2π) ∫_{−∞}^{∞} F(µ)e^{−|µ|y}e^{iµx} dµ
        = (1/2π) ∫_{−∞}^{∞} F(µ)G(µ)e^{iµx} dµ             (5.1.7)

where G(µ) = e^{−|µ|y}.

Note that, in this case, u(x, y) = F⁻¹{F(µ)G(µ)}. By the convolution theorem,

u(x, y) = F⁻¹{F(µ)G(µ)} = (f ∗ g)(x),

where f(x) = F⁻¹{F(µ)} and g(x) = F⁻¹{e^{−|µ|y}} = y/(π(x² + y²)).
Thus,

u(x, y) = (f ∗ g)(x) = (y/π) ∫_{−∞}^{∞} f(ξ)/((x − ξ)² + y²) dξ,

which is the Poisson integral formula for the half-plane.
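The half-plane formula can be spot-checked numerically (an added illustration, assuming SciPy). With the hypothetical choice f(ξ) = 1/(1 + ξ²), the semigroup property of the Poisson kernel gives the exact harmonic extension u(x, y) = (y + 1)/(x² + (y + 1)²):

```python
import numpy as np
from scipy.integrate import quad

def poisson_half_plane(f, x, y):
    # u(x,y) = (y/pi) * integral_{-inf}^{inf} f(xi) / ((x-xi)^2 + y^2) dxi
    val, _ = quad(lambda xi: f(xi) / ((x - xi)**2 + y**2), -np.inf, np.inf)
    return y / np.pi * val

f = lambda xi: 1.0 / (1.0 + xi**2)   # boundary data on y = 0
x, y = 0.5, 0.7
print(poisson_half_plane(f, x, y), (y + 1) / (x**2 + (y + 1)**2))
```

The two printed values agree to quadrature precision, confirming the kernel normalization (y/π) derived above.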

5.1.2 Laplace Equation in a quarter plane


To solve Laplace's equation in the quarter-plane x > 0, y > 0,

∇²u = uxx + uyy = 0,  x > 0, y > 0,

with boundary conditions

u(0, y) = g(y),  y > 0,

uy(x, 0) = f(x),  x > 0,

lim_{x→∞} u(x, y) = lim_{y→∞} u(x, y) = 0,

we decompose the problem into two subproblems that are easier to solve:

u(x, y) = u1(x, y) + u2(x, y)

where u1 and u2 satisfy:

∇²u1 = 0,                        ∇²u2 = 0,
u1(0, y) = g(y),                 u2(0, y) = 0,
∂u1/∂y (x, 0) = 0,               ∂u2/∂y (x, 0) = f(x),
lim_{x→∞} u1(x, y) = 0,          lim_{x→∞} u2(x, y) = 0,
lim_{y→∞} u1(x, y) = 0,          lim_{y→∞} u2(x, y) = 0.

When solving for u1(x, y), we use the Fourier cosine transform in y, since we
have the homogeneous boundary condition ∂u1/∂y (x, 0) = 0:

u1(x, y) = (2/π) ∫₀^∞ U1(x, µ) cos µy dµ

U1(x, µ) = ∫₀^∞ u1(x, y) cos µy dy

Taking the Fourier cosine transform in y of the Laplace equation for u1 and
using the homogeneous boundary condition ∂u1/∂y (x, 0) = 0, we obtain the
ordinary differential equation

∂²U1/∂x² − µ²U1 = 0                              (5.1.8)
whose general solution is

U1(x, µ) = a(µ)e^{−µx} + b(µ)e^{µx},  x > 0, µ > 0.     (5.1.9)

To determine a(µ) and b(µ) we use the boundary conditions:

lim_{x→∞} u1(x, y) = 0  →  lim_{x→∞} U1(x, µ) = 0  →  b(µ) = 0.

Therefore
U1(x, µ) = a(µ)e^{−µx}.

Now, since U1(0, µ) = Fc{u1(0, y)} = Fc{g(y)}, we have

a(µ) = ∫₀^∞ g(y) cos µy dy = G(µ).

The solution u1(x, y) is obtained from U1(x, µ) by taking the inverse:

u1(x, y) = (2/π) ∫₀^∞ G(µ)e^{−µx} cos µy dµ.
When solving for u2(x, y), we use the Fourier sine transform in x, since we
have the homogeneous boundary condition u2(0, y) = 0:

u2(x, y) = (2/π) ∫₀^∞ U2(µ, y) sin µx dµ

U2(µ, y) = ∫₀^∞ u2(x, y) sin µx dx

Taking the Fourier sine transform in x of the Laplace equation for u2 and
using the homogeneous boundary condition u2(0, y) = 0, we obtain the ordinary
differential equation

∂²U2/∂y² − µ²U2 = 0                              (5.1.10)

whose general solution is

U2(µ, y) = a(µ)e^{−µy} + b(µ)e^{µy},  y > 0, µ > 0.     (5.1.11)


72CHAPTER 5. INITIAL BOUNDARY VALUE PROBLEMS(IBVPS) ON INFINITE AND SEMI-INF

To determine a(µ) and b(µ) we use the boundary conditions:

lim_{y→∞} u2(x, y) = 0  →  lim_{y→∞} U2(µ, y) = 0  →  b(µ) = 0.

Therefore
U2(µ, y) = a(µ)e^{−µy}.

Now, since ∂U2/∂y (µ, 0) = Fs{∂u2/∂y (x, 0)} = Fs{f(x)} = F(µ), and
differentiating gives ∂U2/∂y (µ, 0) = −µ a(µ), we obtain a(µ) = −F(µ)/µ.
The solution u2(x, y) is obtained from U2(µ, y) by taking the inverse:

u2(x, y) = −(2/π) ∫₀^∞ (F(µ)/µ) e^{−µy} sin µx dµ.

Thus,

u(x, y) = u1(x, y) + u2(x, y)
        = (2/π) ∫₀^∞ G(µ)e^{−µx} cos µy dµ − (2/π) ∫₀^∞ (F(µ)/µ) e^{−µy} sin µx dµ
        = (2/π) ∫₀^∞ [ G(µ)e^{−µx} cos µy − (F(µ)/µ) e^{−µy} sin µx ] dµ.
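The u1 part of the quarter-plane solution can be verified numerically (an added sketch, assuming SciPy). Taking the hypothetical boundary data g(y) = e^{−y}, so that G(µ) = 1/(1 + µ²), we evaluate u1 by quadrature, confirm it matches g on the edge x = 0, and check that its discrete Laplacian vanishes in the interior:

```python
import numpy as np
from scipy.integrate import quad

def u1(x, y):
    # u1(x,y) = (2/pi) * integral_0^inf G(mu) e^{-mu x} cos(mu y) dmu,
    # with g(y) = e^{-y}, hence G(mu) = 1/(1 + mu^2)
    val, _ = quad(lambda mu: np.exp(-mu * x) / (1 + mu**2),
                  0, np.inf, weight="cos", wvar=y)
    return 2.0 / np.pi * val

# 5-point finite-difference Laplacian at an interior point: should be ~ 0
h = 0.05
x, y = 1.0, 1.0
lap = (u1(x + h, y) + u1(x - h, y) + u1(x, y + h) + u1(x, y - h)
       - 4 * u1(x, y)) / h**2
print(lap, u1(0.0, 1.0))   # lap ≈ 0; u1(0,1) ≈ e^{-1}, recovering g(1)
```

The same kind of check applies to u2 once f, and hence F(µ), is specified.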
