Fourier Series and PDEs
University of Zimbabwe
Department of Mathematics
Lecture Notes
tawandamazikana@gmail.com
Chapter 1
Partial Differential Equations
∂u/∂t + (u · ∇)u − ν∇²u = −∇u + g (1.0.1)
and dynamics on the financial markets are modelled by the so-called Black–Scholes equation, given by
∂V/∂t + (1/2)σ²S² ∂²V/∂S² + rS ∂V/∂S − rV = 0 (1.0.2)
the independent variables and u is called the dependent variable. The partial
derivatives of u with respect to the variables x1 , x2 , · · · , xn , t are given by
ux1(x1, x2, · · · , xn, t) = lim_{h→0} [u(x1 + h, x2, · · · , xn, t) − u(x1, x2, · · · , xn, t)]/h = ∂u/∂x1
ux2(x1, x2, · · · , xn, t) = lim_{h→0} [u(x1, x2 + h, · · · , xn, t) − u(x1, x2, · · · , xn, t)]/h = ∂u/∂x2
⋮
ut(x1, x2, · · · , xn, t) = lim_{h→0} [u(x1, x2, · · · , xn, t + h) − u(x1, x2, · · · , xn, t)]/h = ∂u/∂t
⋮
ux1x1(x1, x2, · · · , xn, t) = lim_{h→0} [ux1(x1 + h, x2, · · · , xn, t) − ux1(x1, x2, · · · , xn, t)]/h = ∂²u/∂x1²
ux1x2(x1, x2, · · · , xn, t) = lim_{h→0} [ux1(x1, x2 + h, · · · , xn, t) − ux1(x1, x2, · · · , xn, t)]/h = ∂²u/∂x1∂x2
1.1 Terminology
We now define the terms that we will work with, focusing mostly on u = u(x, y, t).
Definition 1 Let u = u(x, y, t); then a partial differential equation (pde) for u : Ω ⊂ R^{2+1} → R is a relation involving u and its partial derivatives, F(x, y, t, u, ux, uy, ut, uxx, uxy, . . .) = 0.
Example 1 • ut + cux = 0
• utt − c2 uxx = 0
• uxx + uyy = 0
Definition 2 The order of the pde is the order of the highest derivative ap-
pearing in the equation.
• The most general form of a first order pde for u = u(x, y) is F(x, y, u, ux, uy) = 0.
• The most general form of a linear second order pde for u = u(x, y) is given by
a(x, y)uxx +2b(x, y)uxy +c(x, y)uyy +d(x, y)ux +e(x, y)uy +f (x, y)u = g(x, y).
Definition 4 A pde is linear if it is linear in the unknown function and all its
derivatives with coefficients depending only on the independent variables. If a
pde is not linear it is called nonlinear.
For example, xyuxx + 2uxy + 3x²uyy = 4e^x is a linear pde whereas uxuxx + (uy)² = 0 is a nonlinear pde. The general form of a second-order linear pde of a function
of two variables u(x, y) is given by
a(x, y)uxx + 2b(x, y)uxy + c(x, y)uyy + d(x, y)ux + e(x, y)uy + h(x, y)u = f (x, y)
(1.1.2)
For first-order, we have
Lu = f (x, y) (1.1.6)
• an explicit function u = f(x, y, t) which, when substituted into the pde, reduces it to an identity.
Using the above definition, one can easily verify that each function given below is a solution of the corresponding pde.
• 4ux + 3uy + u = 0, u(x, y) = e^{−x/4} f(3x − 4y), f an arbitrary twice differentiable function
• uxx + uyy + uzz = 0, u(x, y, z) = 1/√(x² + y² + z²), ∀(x, y, z) ≠ (0, 0, 0)
• utt − c²uxx = 0, u(x, t) = f(x + ct) + g(x − ct), f, g arbitrary twice differentiable functions
• Show that u(x, t) = (1/2)[f(x + ct) + f(x − ct)] + (1/(2c)) ∫_{x−ct}^{x+ct} g(s) ds is a solution of utt = c²uxx, subject to the conditions u(x, 0) = f(x) and ut(x, 0) = g(x), where f is a twice differentiable function and g is a differentiable function.
• Solve (i) uxy = 2xe^y (ii) uxy = sin x cos y, subject to the conditions ux(x, π/2) = 2x, u(π, y) = 2 sin y.
Remark 2 Notice that where the solution of an ODE contains arbitrary constants, the solution of a pde contains arbitrary functions. In the same spirit, while an ODE of order m has m linearly independent solutions, a pde has infinitely many (there are arbitrary functions in the solution!). These are consequences of the fact that a function of two variables contains immensely more information (a whole dimension's worth) than a function of only one variable.
1.1.3 Classification
We have so far classified pdes as linear, nonlinear, homogeneous, nonhomogeneous, etc. We now look at another very important and fundamental classification of pdes that we will often refer to when discussing the method of solution. We do this for second order linear pdes. Recall that the second order linear pde is given by
a(x, y)uxx + 2b(x, y)uxy + c(x, y)uyy + d(x, y)ux + e(x, y)uy + f(x, y)u = g(x, y) (1.1.8)
The term a(x, y)uxx + 2b(x, y)uxy + c(x, y)uyy that contains the second-order derivatives is called the principal part of the pde.
Definition 8 The function ∆(x, y) = b² − ac is called the discriminant of the pde.
Definition 9 The pde (1.1.8) defined in Ω ⊂ R² is said to be
1. Hyperbolic in Ω if ∆(x, y) = b² − ac > 0, ∀(x, y) ∈ Ω
2. Parabolic in Ω if ∆(x, y) = b² − ac = 0, ∀(x, y) ∈ Ω
3. Elliptic in Ω if ∆(x, y) = b² − ac < 0, ∀(x, y) ∈ Ω
Remark 3 Note that the classification is solely determined by the principal
part. Since ∆ is a function of the independent variables, the type might vary as
the x, y vary in Ω.
Example 3 1. The pde 3uxx + 2uxy + 5uyy + xuy = 0 is elliptic, since ∆ = b² − ac = 1² − (3)(5) = −14 < 0.
2. The Tricomi equation for transonic flow, uxx + yuyy = 0, has ∆ = 0² − (1)(y) = −y. Thus, the equation is elliptic if y > 0, parabolic if y = 0 and hyperbolic if y < 0.
3. The pde yuxx − 2uxy − xuyy − ux + cos(y)uy − 4 = 0 has ∆ = b² − ac = 1 + xy. The pde is hyperbolic for all (x, y) ∈ Ω such that xy > −1, elliptic when (x, y) ∈ Ω such that xy < −1, and parabolic when (x, y) ∈ Ω such that xy = −1.
Remark 4 The above classification is crucial in the study of pdes in that the
type of equation determines what properties might be expected of solutions, what
kinds of information must be supplied with the equation to have a unique so-
lution, and influences the types of numerical techniques that can be used to
approximate solutions.
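For readers who like to experiment, the discriminant test is easy to evaluate pointwise. The following short Python sketch is not part of the original notes (the function name and sample values are our own); it simply applies Definition 9.

# Classify a second-order linear pde a*u_xx + 2b*u_xy + c*u_yy + ... = g
# by the sign of the discriminant Delta = b^2 - a*c at a point.
def classify(a, b, c):
    """Return the type implied by Delta = b^2 - a*c."""
    delta = b**2 - a*c
    if delta > 0:
        return "hyperbolic"
    if delta == 0:
        return "parabolic"
    return "elliptic"

# Tricomi equation u_xx + y*u_yy = 0: a = 1, b = 0, c = y.
for y in (-1.0, 0.0, 1.0):
    print(y, classify(1.0, 0.0, y))
# prints: -1.0 hyperbolic / 0.0 parabolic / 1.0 elliptic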
J = |ξx ξy; ηx ηy| = ξxηy − ξyηx ≠ 0, ∀(x, y) ∈ Ω (1.1.10)
This then implies that the transformation is invertible, with x = x(ξ, η), y = y(ξ, η). Let w(ξ, η) = u(x(ξ, η), y(ξ, η)); using the chain rule, we compute
ux = wξ ξx + wη ηx , uy = wξ ξy + wη ηy
and
uxx = wξξ ξx² + 2wξη ξx ηx + wηη ηx² + wξ ξxx + wη ηxx,
uyy = wξξ ξy² + 2wξη ξy ηy + wηη ηy² + wξ ξyy + wη ηyy,
and
uxy = wξξ ξx ξy + wξη (ξx ηy + ξy ηx) + wηη ηx ηy + wξ ξxy + wη ηxy.
It can be shown that the discriminant of the transformed equation satisfies B² − AC = J²(b² − ac); thus, the sign of the discriminant does not change under the coordinate transformation, which implies that the type is invariant under any coordinate transformation.
For a hyperbolic pde, the characteristic coordinates are chosen so that A(ξ, η) = C(ξ, η) = 0, so that, in short, we have
wξη = H(ξ, η, w, wξ, wη) (1.1.14)
called its canonical form. This form corresponds only to hyperbolic pdes.
Remark 6 Note that if equation (1.1.8) contains neither lower order derivative terms nor a nonhomogeneous term, then the canonical form is simply
wξη = 0.
Example: Classify the equation uxx + 2uxy − 3uyy = 0 and hence transform
it into its canonical form and find the general solution of the pde.
dy/dx = (1 + 2)/1 = 3 (1.1.15)
dy/dx = (1 − 2)/1 = −1 (1.1.16)
Solving these equations, we obtain y = 3x + k and y = −x + c; therefore we define ξ = y − 3x and η = y + x. We note that
J = (−3)(1) − (1)(1) = −4 ≠ 0, (1.1.17)
Thus, substituting u and its derivatives into the given pde, we obtain
−9wξη + 3wη = 2,
or
wξη − (1/3)wη = −2/9,
the canonical form of the hyperbolic equation.
Integrating this once with respect to η, we obtain
wξ − (1/3)w = −(2/9)η + G(ξ).
Since only derivatives in ξ appear, the above equation can be treated as an ordinary differential equation, that is,
dw/dξ − (1/3)w = −(2/9)η + G(ξ).
This is a linear first order equation with integrating factor e^{−ξ/3}. Thus, we have:
d/dξ ( w e^{−ξ/3} ) = −(2/9)η e^{−ξ/3} + G(ξ) e^{−ξ/3}.
Integrating both sides and dividing by e^{−ξ/3}, we obtain:
w = (2/3)η + G̃(ξ) + H(η)e^{ξ/3}.
This is the solution of the transformed equation. To obtain the solution of the
original equation, we express ξ = ξ(x, y) = y − x and η = η(x, y) = 4y − x.
Substituting into the above solution with u = w(ξ, η), we obtain
u = (2/3)(4y − x) + G̃(y − x) + H(4y − x)e^{(y−x)/3}.
Example: The Wave Equation Consider the one-dimensional wave equa-
tion
utt − c2 uxx = 0
subject to the initial condition
u(x, 0) = f (x)
and
ut (x, 0) = g(x).
Obtain the solution of this initial-value problem using the method of character-
istics.
Here B = 0, A = 1, C = −c2 , hence the discriminant ∆ = B 2 − AC = c2 >
0, hence the equation is hyperbolic. The characteristics are given by
dx/dt = (0 ± √(c²))/1 = ±c
The characteristic equations are:
ξ = x + ct, η = x − ct,
using this, the wave equation is reduced to the canonical form
uξη = 0,
whose solution is
u(ξ, η) = F (ξ) + G(η)
In terms of x and t, we have
u(x, t) = F (x + ct) + G(x − ct). (1.1.18)
Differentiating this solution with respect to t, we have
ut(x, t) = cF′(x + ct) − cG′(x − ct). (1.1.19)
Setting t = 0 in the above equations, we obtain:
u(x, 0) = F(x) + G(x) = f(x) (1.1.20)
ut(x, 0) = cF′(x) − cG′(x) = g(x) (1.1.21)
Integrating (1.1.21), we obtain
c ∫_{x0}^{x} F′(s) ds − c ∫_{x0}^{x} G′(s) ds = ∫_{x0}^{x} g(s) ds (1.1.22)
Adding, we obtain
u(x, t) = (1/2)[f(x + ct) + f(x − ct)] + (1/(2c)) ∫_{x−ct}^{x+ct} g(s) ds (1.1.30)
This solution is known as d'Alembert's solution of the initial value problem for the wave equation.
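The formula is easy to check numerically. Below is a minimal Python sketch (not from the notes; the choices of f, g and step sizes are ours) that evaluates d'Alembert's solution and confirms, by finite differences, that utt − c²uxx ≈ 0 and that the initial conditions hold.

import numpy as np
from scipy.integrate import quad

c = 2.0
f = lambda x: np.exp(-x**2)   # initial displacement (our choice)
g = lambda x: np.sin(x)       # initial velocity (our choice)

def u(x, t):
    """d'Alembert: u = 1/2[f(x+ct)+f(x-ct)] + 1/(2c) * int_{x-ct}^{x+ct} g(s) ds."""
    integral, _ = quad(g, x - c*t, x + c*t)
    return 0.5*(f(x + c*t) + f(x - c*t)) + integral/(2*c)

x0, t0, h = 0.3, 0.7, 1e-3
u_tt = (u(x0, t0 + h) - 2*u(x0, t0) + u(x0, t0 - h)) / h**2
u_xx = (u(x0 + h, t0) - 2*u(x0, t0) + u(x0 - h, t0)) / h**2
print("pde residual:", u_tt - c**2 * u_xx)                       # ~ 0
print("u(x,0) - f(x):", u(x0, 0.0) - f(x0))                      # ~ 0
print("u_t(x,0) - g(x):", (u(x0, h) - u(x0, -h))/(2*h) - g(x0))  # ~ 0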
a(x, y)uxx + 2b(x, y)uxy + c(x, y)uyy + d(x, y)ux + e(x, y)uy + h(x, y)u = f (x, y)
(1.1.31)
Suppose that b² − ac = 0 in the region Ω; that is, the equation is parabolic in Ω. Now a and c cannot both be equal to zero, since this would also imply that b = 0 and the given equation would be a first order pde! We also assume that a ≠ 0 in Ω. In this case we have only one characteristic equation:
dy/dx = b/a (1.1.32)
This is the characteristic equation for the pde in the parabolic case. The solution
to this equation gives only one characteristic curve
ξ(x, y) = k.
ξx = −2, ξy = 1; ηx = 1, ηy = 0.
wηη = 0
wη = F (ξ),
which is the general solution of the transformed equation; the solution of the original problem is
η(x, y) = 3x − 2y, ξ = y
uηη = 0,
whose solution is
u(ξ, η) = ηF (ξ) + G(ξ)
or
u(x, y) = yF(y/x) + G(y/x)
ut = κuxx ,
u(0, t) = T0, u(L, t) = T1
These are called Dirichlet boundary conditions; in this case, there are no derivatives involved in the conditions. If
u(0, t) = u(L, t) = 0,
the boundary conditions are, in addition, homogeneous.
u(x, 0) = f (x)
This signifies that the temperature is known at time t = 0 and is given by the function f(x) for all 0 < x < L. The temperature then evolves from this initial state; this specification is called the initial condition.
The specification of the pde and the accompanying boundary and initial conditions gives what is called the Initial-Boundary-Value Problem (IBVP). Thus, a problem of the form:
is called an IBVP.
u(0, t) = 0, u(L, t) = 0,
λ = 0, λ > 0, λ < 0
From above, we have the system of simple linear ordinary differential equations
u(0, t) = h(0)g(t) = 0,
either h(0) = 0 or g(t) = 0, but if g(t) = 0, then u(x, t) ≡ 0, a trivial solution,
thus, h(0) = 0. Similarly,
u(L, t) = h(L)g(t) = 0,
implies that h(L) = 0. Thus, for h(x), we have to solve the boundary value
problem
h′′(x) = λh(x), h(0) = 0, h(L) = 0. (2.1.13)
leading to
h(x) = ax + b
applying the boundary conditions, we then have h(0) = a · 0 + b = 0, so b = 0, and h(L) = aL = 0; since L ≠ 0, it follows that a = 0, thus h(x) = 0, hence u(x, t) ≡ 0, a trivial solution. Thus, when λ = 0, we obtain a trivial solution. We discard this case.
• Suppose λ > 0, that is λ = ω², ω ∈ R; then we have the system of equations.
Solving first h′′(x) = ω²h(x), h(0) = 0, h(L) = 0, we have the solution
u(x, t) = Σ_{n=1}^{∞} bn sin(nπx/L) e^{−n²π²κt/L²}
We have so far not used the initial condition u(x, 0) = f(x). Setting t = 0 in the above solution, we have
f(x) = u(x, 0) = Σ_{n=1}^{∞} bn sin(nπx/L),
which is the half-range Fourier sine series for f(x). From the theory of Fourier series, the bn are the Fourier coefficients, which are then given by
bn = (2/L) ∫_0^L f(x) sin(nπx/L) dx
Thus, the solution to the heat equation that satisfies the given initial and
boundary conditions, is given by
u(x, t) = (2/L) Σ_{n=1}^{∞} ( ∫_0^L f(x) sin(nπx/L) dx ) sin(nπx/L) e^{−n²π²κt/L²}.
We can easily check that u(0, t) = u(L, t) = 0 as expected and also that,
since the boundaries are kept at zero temperature and there is no heat
source inside the bar, as t → ∞, u(x, t) → 0.
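A minimal numerical sketch of this solution may help fix ideas. It is not part of the notes: the data f(x) = x(L − x), the values of L, κ and the truncation level are all our own choices; the coefficients bn are obtained by numerical integration.

import numpy as np
from scipy.integrate import quad

L, kappa, N = 1.0, 0.5, 50          # sample data (our choice)
f = lambda x: x*(L - x)             # initial temperature (our choice)

# b_n = (2/L) * int_0^L f(x) sin(n pi x / L) dx
b = [2.0/L * quad(lambda x, n=n: f(x)*np.sin(n*np.pi*x/L), 0, L)[0]
     for n in range(1, N + 1)]

def u(x, t):
    """Truncated series solution of u_t = kappa u_xx, u(0,t)=u(L,t)=0, u(x,0)=f(x)."""
    return sum(b[n-1]*np.sin(n*np.pi*x/L)*np.exp(-kappa*(n*np.pi/L)**2*t)
               for n in range(1, N + 1))

print(u(0.5, 0.0))   # close to f(0.5) = 0.25
print(u(0.5, 1.0))   # decays toward 0 as t grows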
Remark 8 • In the case of Dirichlet conditions, the case λ < 0 is the only case that leads to bounded nontrivial solutions; when solving problems, we go straight to this case. You do not have to consider the other cases λ ≥ 0!
• If the homogeneous boundary conditions are of Dirichlet type, the solution leads to a Fourier sine series.
ux (0, t) = 0, ux (L, t) = 0,
u(x, t) = h(x)g(t),
u(x, t) = b,
b arbitrary.
• The case λ > 0, leads to a trivial solution. (Verify this!)
• We now consider the case λ < 0, that is λ = −ω 2 . Thus, we solve the
equations
Thus, un(x, t) = an cos(nπx/L) e^{−n²π²κt/L²}, n ≥ 1. Combining this with the solution corresponding to the case λ = 0, and using the superposition principle, we have that
u(x, t) = (1/2)a0 + Σ_{n=1}^{∞} an cos(nπx/L) e^{−n²π²κt/L²} (2.1.27)
with b = (1/2)a0.
Applying the initial condition, we have
f(x) = u(x, 0) = (1/2)a0 + Σ_{n=1}^{∞} an cos(nπx/L),
which is the Fourier Cosine series for f (x). The coefficients an , n ≥ 1 are
then given by
an = (2/L) ∫_0^L f(x) cos(nπx/L) dx
We note from (2.1.27) that, as t → ∞, the solution u(x, t) → (1/2)a0, the
equilibrium temperature. This is expected since no heat is allowed to
escape via the boundaries.
The complete solution is given by
u(x, t) = (1/L) ∫_0^L f(x) dx + (2/L) Σ_{n=1}^{∞} ( ∫_0^L f(x) cos(nπx/L) dx ) cos(nπx/L) e^{−n²π²κt/L²}
Example 5
where φ(x) and U (x, t) are functions to be determined. We now consider equa-
tions to be satisfied by φ(x) and U (x, t). Substituting the above transformation
into the governing pde, we obtain
φ′′(x) = 0,
whose solution (satisfying φ(0) = T1, φ(L) = T2) is
φ(x) = (T2 − T1)x/L + T1,
and U(x, t) is then determined from the resulting homogeneous problem, whose solution is
U(x, t) = Σ_{n=1}^{∞} bn sin(nπx/L) e^{−κn²π²t/L²},
where
bn = (2/L) ∫_0^L (f(x) − φ(x)) sin(nπx/L) dx
Thus,
u(x, t) = Σ_{n=1}^{∞} bn sin(nπx/L) e^{−κn²π²t/L²} + (T2 − T1)x/L + T1.
We note that the term (T2 − T1)x/L + T1 represents the steady state temperature of the rod. That is, as t → ∞, the temperature settles at
u(x, t) = (T2 − T1)x/L + T1.
where
bn = (2/5) ∫_0^5 U(x, 0) sin(nπx/5) dx
= (2/5) ∫_0^3 (2 − (8/5)x) sin(nπx/5) dx + (2/5) ∫_3^5 (−31 + (47/5)x) sin(nπx/5) dx
= −(110/(n²π²)) sin(3nπ/5) + 4/(nπ) − (32/(nπ))(−1)ⁿ.
Thus,
u(x, t) = Σ_{n=1}^{∞} [ −(110/(n²π²)) sin(3nπ/5) + 4/(nπ) − (32/(nπ))(−1)ⁿ ] sin(nπx/5) e^{−7n²π²t/25} + (3/5)x + 1.
Note that as t → ∞, u(x, t) → (3/5)x + 1, which is the equilibrium temperature.
It is clear that the substitution u = h(x)g(t) will not separate the equation
ut = κuxx + ψ(x) into a function of x only and a function of t only. That is,
the result will be
hġ = κh′′g + ψ(x).
To overcome this difficulty, we again assume that u(x, t) can be expressed in
the form
u(x, t) = U (x, t) + φ(x).
Substituting into the pde, we obtain
pde : ∂²u/∂t² − c² ∂²u/∂x² = 0, 0 < x < L, t > 0 (2.4.1)
BCs : u(0, t) = 0, u(L, t) = 0, t > 0 (2.4.2)
ICs : u(x, 0) = f(x), ut(x, 0) = g(x), 0 < x < L. (2.4.3)
This problem models the motion of a taut string with given initial position u(x, 0) = f(x) and initial velocity ut(x, 0) = g(x); these are called the initial conditions (ICs) of the problem. The end points x = 0, L are fixed; these are the homogeneous boundary conditions (BCs) of Dirichlet type.
We solve the above IBVP using the Method of Separation of Variables:
We let u(x, t) = X(x)T(t); substituting into the wave equation, we obtain
T′′(t)X(x) = c²X′′(x)T(t),
T′′/(c²T) = X′′/X = −λ²,
leading to a system of ordinary differential equations
T′′ + c²λ²T = 0
X′′ + λ²X = 0
with solutions X(x) = A cos λx + B sin λx. Applying the boundary conditions X(0) = 0, X(L) = 0 forces A = 0 and
λL = nπ, n = 1, 2, 3, · · · ,
or
λ = nπ/L, n = 1, 2, 3, · · · .
Thus, for each value of n we have solutions of the form
Xn(x) = Bn sin(nπx/L).
The corresponding solution T (t) is given by
Tn(t) = Cn cos(nπct/L) + Dn sin(nπct/L), n ≥ 1 (2.4.6)
u(x, t) = Σ_{n=1}^{∞} [ an cos(nπct/L) + bn sin(nπct/L) ] sin(nπx/L). (2.4.7)
Our solution must satisfy the initial conditions u(x, 0) = f(x) and ut(x, 0) = g(x); we thus have
f(x) = u(x, 0) = Σ_{n=1}^{∞} an sin(nπx/L) (2.4.8)
g(x) = ut(x, 0) = Σ_{n=1}^{∞} bn (nπc/L) sin(nπx/L) (2.4.9)
These are half-range Fourier sine series, so the coefficients are given by
an = (2/L) ∫_0^L f(x) sin(nπx/L) dx
bn = (2/(nπc)) ∫_0^L g(x) sin(nπx/L) dx
∂²u/∂t² − 36 ∂²u/∂x² = 0, 0 < x < 1, t > 0
u(0, t) = 0, u(1, t) = 0, t > 0
u(x, 0) = 0, ut(x, 0) = 3(1 − x), 0 < x < 1.
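A short numerical sketch of the series solution for this example (c = 6, L = 1, f = 0, g(x) = 3(1 − x)) is given below. It is not part of the notes; the truncation level and the use of numerical integration are our own choices, and the coefficients follow the formulas above.

import numpy as np
from scipy.integrate import quad

L, c, N = 1.0, 6.0, 100
g = lambda x: 3.0*(1.0 - x)

# a_n = 0 since f(x) = 0; b_n = 2/(n pi c) * int_0^L g(x) sin(n pi x / L) dx
b = [2.0/(n*np.pi*c) * quad(lambda x, n=n: g(x)*np.sin(n*np.pi*x/L), 0, L)[0]
     for n in range(1, N + 1)]

def u(x, t):
    """Truncated solution of u_tt = 36 u_xx, u(0,t)=u(1,t)=0, u(x,0)=0, u_t(x,0)=3(1-x)."""
    return sum(b[n-1]*np.sin(n*np.pi*c*t/L)*np.sin(n*np.pi*x/L)
               for n in range(1, N + 1))

print(u(0.25, 0.0))                          # ~ 0, matches u(x,0) = 0
dt = 1e-4
print((u(0.25, dt) - u(0.25, -dt))/(2*dt))   # ~ g(0.25) = 2.25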
an = (2/π) ∫_0^π f(x) sin nx dx = (2/π) ∫_0^{π/2} x sin nx dx + (2/π) ∫_{π/2}^{π} (π − x) sin nx dx
= (4/(n²π)) sin(nπ/2) (after using integration by parts)
= 0 for n even, and (4/(n²π))(−1)^{(n−1)/2} for n odd.
Since an ≠ 0 only for odd values of n, re-indexing the odd terms we then have
an = (4/((2n − 1)²π)) (−1)^{n+1}, n = 1, 2, 3, · · ·
and with g(x) = 0, we have
bn = (2/(3nπ)) ∫_0^π 0 · sin nx dx = 0
In 1800, Joseph Fourier conjectured that any periodic function f(x) of period 2π can be written, at almost every point, as the sum of a trigonometric series
f(x) = a0/2 + Σ_{n=1}^{∞} (an cos nx + bn sin nx), (3.0.1)
called the Fourier Series. The constants an , bn , n ≥ 0 are real numbers defined
by
an = (1/π) ∫_{−π}^{π} f(x) cos nx dx, n = 0, 1, 2, · · · (3.0.2)
bn = (1/π) ∫_{−π}^{π} f(x) sin nx dx, n = 1, 2, · · · (3.0.3)
These are called the Fourier Coefficients. Joseph Fourier developed these series
when he was working on initial-boundary value problems (IBVP) modeling heat
flow under a variety of conditions. This is the focus of the first part of this course.
We will then study partial differential equations later in the course. Fourier
Series and the related Fourier Transforms will be used to develop solutions of
IBVPs.
It is useful to think about the general context in which one finds oneself when
discussing Fourier series and transforms. In this chapter we provide a glimpse
into more general notions for generalized Fourier series and the convergence
of Fourier series. We can view the sine and cosine functions in the Fourier
trigonometric series representations as basis vectors in an infinite dimensional
function space. A given function in that space may then be represented as a
linear combination over this infinite basis. Before we start the discussion of
Fourier series we will review some basic results on vector spaces, inner-product
spaces and orthogonal projections.
1. u + v = v + u
2. αu + βu = (α + β)u
Now we can define a basis for an n-dimensional vector space. We begin with
the standard basis in an n-dimensional vector space. It is a generalization of
the standard basis i, j, k in three dimensions. We define the standard basis with
the notation
e1 = (1, · · · , 0), e2 = (0, 1, · · · , 0), etc, in general, we have
ek = (0, · · · , 0, 1, 0 · · · , 0), k = 1, 2, · · · , n.
Note that these vectors form a linearly independent set. Thus, we can expand
any v ∈ Rn as
v = (v1, · · · , vn) = Σ_{i=1}^{n} vi ei
here the vi are called the components of v. To determine the components vi, we need to define the inner product (or dot product) of vectors:
Definition 11 If u = (u1 , · · · , un ) and v = (v1 , · · · , vn ) belong to Rn , their
dot product is the number
u · v = <u, v> = u1v1 + · · · + unvn = Σ_{i=1}^{n} ui vi
Orthogonal Operations
• The norm of a vector v: ||v|| = <v, v>^{1/2} = √(v1² + v2² + · · · + vn²)
• Orthogonality of two vectors: u ⊥ v ⇐⇒ <u, v> = 0.
• Orthogonality of a collection of vectors: {u1, · · · , um} is an orthogonal collection of vectors ⇐⇒ <ui, uj> = 0 if i ≠ j.
Remark 11 Clearly the set {ek}_{k=1}^{n} is a linearly independent set of orthogonal vectors.
Remark 12 Clearly the set {ek}_{k=1}^{n} is a linearly independent set of orthonormal vectors.
These are also the key ingredients that we will need in the infinite dimen-
sional case. In fact, we need these when we study Fourier series.
We consider a space of functions f on [a, b]. We think of functions f as infinite-dimensional vectors whose components are the values f(x) as x varies in [a, b].
We will consider various infinite dimensional function spaces. Functions in
these spaces would differ by their properties. For example, we could consider
• the set of square integrable functions from a to b, L²[a, b]. This is defined as
L²[a, b] := { f : ∫_a^b |f(x)|² dx < ∞ }
As you will see, there are many types of function spaces. In order to view these
spaces as vector spaces, we must be able to add functions and multiply them by
scalars in such a way that they satisfy the definition of a vector space.
We will also need a scalar product defined on this space of functions. There
are several types of scalar products, or inner products, that we can define.
Definition 13 A real inner product space is a real vector space equipped with the above inner product.
For complex inner product spaces, the above properties hold, with the third property replaced by <v, w> equal to the complex conjugate of <w, v>.
For the time being, we will only deal with real valued functions and, thus we
will need an inner product appropriate for such spaces. We have the following
definitions.
• Let f (x) and g(x) be real-valued functions defined on [a, b]. Then, we
define the inner product, if the integral exists, as
<f, g> = ∫_a^b f(x) g(x) dx
Spaces in which < f, f >< ∞ under this inner product are called the
space of square integrable functions on (a, b) and are denoted as L2 (a, b).
• Two functions f(x) and g(x) are said to be orthogonal on the interval [a, b] if <f, g> = 0.
Example 10 The functions f (x) = sin 3x, g(x) = cos(3x) are orthogonal
on [−π, π].
Example 11 Each of the sets {cos kx, k ≥ 0}, {sin kx, k ≥ 1} and
{sin kx, cos kx, k ≥ 0} is orthogonal on [−π, π].
||f|| = <f, f>^{1/2} = ( ∫_a^b f(x)² dx )^{1/2}
<φj, f> = <φj, Σ_{n=1}^{∞} cn φn> = <φj, c1φ1 + c2φ2 + · · · + cjφj + · · ·>
= c1<φj, φ1> + c2<φj, φ2> + · · · + cj<φj, φj> + · · ·
= Σ_{n=1}^{∞} cn <φj, φn> = cj <φj, φj>,
by orthogonality. Thus,
cj = <φj, f>/<φj, φj> = <φj, f>/||φj||².
Thus, in summary, a function f(x) can be represented by an expansion over a basis of orthogonal functions {φn(x)}_{n=1}^{∞},
f(x) = Σ_{n=1}^{∞} cn φn(x),
Definition 14 If {φn(x)}_{n=1}^{∞} is an orthonormal basis of L²[a, b] and f ∈ L²[a, b], then the numbers <f, φn> are called the generalised Fourier coefficients of f with respect to {φn(x)}_{n=1}^{∞}. The series
f ∼ Σ_{n=1}^{∞} <f, φn> φn(x)
is the corresponding generalised Fourier series.
Remark 14 Often it is more convenient not to require the elements of the basis to be unit vectors. Suppose {φn(x)}_{n=1}^{∞} is an orthogonal set; then we have
f ∼ Σ_{n=1}^{∞} <f, φn> φn(x)/||φn(x)||² = Σ_{n=1}^{∞} cn φn(x)
This will be referred to as the general Fourier series expansion and the cj ’s
are called the Fourier coefficients. Technically, equality only holds when the
infinite series converges to the given function on the interval of interest.
Example 13 Show that the set {sin(nx)}_{n=1}^{∞} is orthogonal on the interval [−π, π]. Find the coefficients for the expansion given by
f(x) = Σ_{n=1}^{∞} bn sin(nx).
bn = <f, φn>/<φn, φn> = (1/π) ∫_{−π}^{π} f(x) sin nx dx
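A quick numerical check of this orthogonality and of the coefficient formula can be done as follows (a sketch, not from the notes; the sample f is our own choice).

import numpy as np
from scipy.integrate import quad

# <sin(mx), sin(nx)> on [-pi, pi] is 0 for m != n and pi for m = n.
inner = lambda m, n: quad(lambda x: np.sin(m*x)*np.sin(n*x), -np.pi, np.pi)[0]
print(inner(2, 5))   # ~ 0
print(inner(3, 3))   # ~ pi

# b_n = <f, sin(nx)> / <sin(nx), sin(nx)> = (1/pi) * int_{-pi}^{pi} f(x) sin(nx) dx
f = lambda x: x      # sample odd function (our choice)
b = lambda n: quad(lambda x: f(x)*np.sin(n*x), -np.pi, np.pi)[0] / np.pi
print([round(b(n), 4) for n in range(1, 5)])   # 2, -1, 2/3, -1/2, i.e. 2*(-1)^{n+1}/n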
• If {φn(x)}_{n=0}^{∞} is a complete orthogonal system on [a, b], then every (piecewise continuous) function f(x) on [a, b] has the expansion
f(x) = Σ_{n=0}^{∞} ( <f, φn>/||φn||² ) φn(x)
• If {φn(x)}_{n=0}^{∞} is a complete orthogonal system on [a, b], the
expansion formula above holds for every (pwc) function f (x) on [a, b] in
the L2 −sense, but not necessarily pointwise, i.e. for a fixed x ∈ [a, b] the
series on the RHS of the above equation might not necessarily converge
and, even if it does, it might not converge to f (x).
• Using the previous theorem, it follows that every (pwc) function f(x) on [−π, π] admits the expansion
f(x) ∼ a0/2 + Σ_{n=1}^{∞} (an cos nx + bn sin nx), (3.1.4)
where
a0/2 = <f, 1>/||1||² = (1/(2π)) ∫_{−π}^{π} f(x) dx (3.1.5)
an = <f, cos nx>/||cos nx||² = (1/π) ∫_{−π}^{π} f(x) cos nx dx (3.1.6)
bn = <f, sin nx>/||sin nx||² = (1/π) ∫_{−π}^{π} f(x) sin nx dx (3.1.7)
• More generally, if L > 0 and f (x) is pwc on [−L, L], then it will have a
Full-Range Fourier series expansion on [−L, L] given by
f(x) ∼ a0/2 + Σ_{n=1}^{∞} { an cos(nπx/L) + bn sin(nπx/L) }, (3.1.8)
The coefficients an, bn are called the Fourier coefficients. Again we note that the term a0/2 is due to the constant function cos(0) = 1, the factor 1/2 being included just for convenience, so that the formula for a0 and that of an, n ≥ 1 agree.
Remark 15 Computing the Fourier Series (3.1.8) of any given function re-
duces to the task of evaluating the integrals in (3.1.9) and (3.1.10). A firm
grasp of integration by parts is required to complete these calculations success-
fully.
The symbol ∼ is used just to indicate that the rhs is the Fourier Series of f(x). We will discuss this aspect under the convergence of Fourier Series.
f (x + 2L) = f (x), ∀x ∈ R.
Remark 17 • For the set {1, cos nx, sin nx, n ≥ 1} the functions are all 2π-periodic functions.
• For the set {1, cos(nπx/L), sin(nπx/L), n ≥ 1} the functions are all 2L-periodic functions.
We thus use the trigonometric Fourier series to represent 2L-periodic functions.
We note that our function f (x) is defined only for x ∈ [−L, L], whereas its
Fourier series is defined for all x ∈ R. We extend the definition of f (x) to
(−∞, ∞) by extending f (x) periodically from [−L, L] to (−∞, ∞).
Definition 16 Suppose that f(x) is defined for x ∈ [−L, L]. The periodic extension of f(x) to (−∞, ∞), denoted by fp(x), is defined by fp(x) = f(x) for x ∈ [−L, L] and fp(x + 2L) = fp(x) for all x ∈ R.
[Figure: f(x) for x ∈ [−π, π], together with its periodic extension.]
Remark 18 • Some functions are neither odd nor even, for example, ex .
• If f (x) and g(x) are both even, then h(x) = f (x)g(x) is also even, e.g
h(x) = (x2 )(cos(x)) is even, since
h(−x) = ((−x)2 ) cos(−x) = x2 cos(x) = h(x)
• If f(x) and g(x) are both odd, then h(x) = f(x)g(x) is even, e.g. h(x) = (x)(sin(x)) is even, since
h(−x) = (−x) sin(−x) = (−x)(− sin(x)) = x sin(x) = h(x)
• If f (x) is even and g(x) is odd , then h(x) = f (x)g(x) is odd, e.g h(x) =
(x2 )(sin(x)) is odd, since
h(−x) = (−x)2 sin(−x) = x2 (− sin(x)) = −h(x)
• For an even function f (x), we have f (−x) = f (x), hence
∫_{−L}^{L} f(x) dx = 2 ∫_0^{L} f(x) dx
We sketch SN (x) with f (x) and show that as N → ∞ convergence takes place.
f(x0+) = f(x0−) = f(x0).
2. Suppose that f(x0+) ≠ f(x0−) with 0 < |f(x0+) − f(x0−)| < ∞; then f(x) is said to have a finite jump, or a jump discontinuity, at x0.
We now state the conditions for convergence for a Fourier Series to f (x) as
a theorem:
1. to f(x), if f is continuous at x;
2. to the average of the two one-sided limits, (1/2)[f(x+) + f(x−)], where f(x) has a jump discontinuity at x, that is
(1/2)[f(x+) + f(x−)] = (1/2)a0 + Σ_{n=1}^{∞} [ an cos(nπx/L) + bn sin(nπx/L) ],
Remark 20 Thus, the Fourier series actually converges to f (x) at points be-
tween −L and +L, where f (x) is continuous. At the end points, x = L or
x = −L, the infinite series converges to the average of the two values of the
periodic extension.
Thus as the number of terms increase, we expect the FS to get closer and closer
to the function f (x) pointwise.
Example 16 Given the function f (x) = |x|, x ∈ [−π, π]. Sketch f (x). Express
f (x) in Fourier Series.
The function is even, hence bn = 0, ∀n ≥ 1 and
an = (2/π) ∫_0^π f(x) cos(nx) dx
Since f(x) = |x| = x on [0, π], we have for n = 0,
a0 = (2/π) ∫_0^π x cos(0) dx = (2/π) ∫_0^π x dx = (2/π) [x²/2]_0^π = π
and for an, n ≥ 1, we have, using integration by parts, that
an = (2/π) ∫_0^π |x| cos(nx) dx = (2/π) ∫_0^π x cos(nx) dx
= (2/π) [ x (1/n) sin nx ]_0^π − (2/π) ∫_0^π (1/n) sin nx dx = (2/(nπ)) [ cos nx / n ]_0^π
= (2/(n²π)) [ (−1)ⁿ − 1 ] = −4/(π(2k − 1)²) for n odd, n = 2k − 1, and 0 for n even, n = 2k.
Thus, the Fourier Series expansion formula
f(x) = (1/2)a0 + Σ_{n=1}^{∞} [an cos nx + bn sin nx]
yields
|x| = π/2 − (4/π) Σ_{n=1}^{∞} (1/(2n − 1)²) cos(2n − 1)x
We note that
S1(x) = π/2 − (4/π) Σ_{n=1}^{1} (1/(2n − 1)²) cos(2n − 1)x = π/2 − (4/π) cos x
S2(x) = π/2 − (4/π) Σ_{n=1}^{2} (1/(2n − 1)²) cos(2n − 1)x = π/2 − (4/π) cos x − (4/(9π)) cos 3x
S3(x) = π/2 − (4/π) Σ_{n=1}^{3} (1/(2n − 1)²) cos(2n − 1)x = π/2 − (4/π) cos x − (4/(9π)) cos 3x − (4/(25π)) cos 5x
etc.
[Figure 3.3: Fourier series of f(x) = |x| together with the partial sums S2(x) and S50(x).]
[Figure 3.4: Fourier series of f(x) = |x| together with its periodic extension and the partial sum S3(x).]
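Plots of this kind are easy to reproduce. The sketch below (our own plotting choices, not part of the notes) evaluates the partial sums SN(x) of the series for |x| and compares them with the function.

import numpy as np
import matplotlib.pyplot as plt

def S(x, N):
    """Partial sum of the Fourier series of |x| on [-pi, pi]."""
    return np.pi/2 - (4/np.pi)*sum(np.cos((2*k - 1)*x)/(2*k - 1)**2
                                   for k in range(1, N + 1))

x = np.linspace(-np.pi, np.pi, 400)
plt.plot(x, np.abs(x), 'k', label='|x|')
for N in (2, 50):
    plt.plot(x, S(x, N), '--', label=f'S_{N}(x)')
plt.legend()
plt.show()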
Example 17 Given the function f (x) = x, x ∈ [−π, π]. Sketch f (x). Express
f (x) in Fourier Series.
In this case f(x) = x is an odd function, thus we have an = 0, ∀n ≥ 0, and
bn = (2/π) ∫_0^π x sin(nx) dx
= (2/π) [ −(x/n) cos nx ]_0^π − (2/π) ∫_0^π (−1/n) cos nx dx
= (2/π) · π · (−1/n)(−1)ⁿ = 2 (−1)^{n+1}/n
Hence the Fourier Series is given by
x = 2 Σ_{n=1}^{∞} ((−1)^{n+1}/n) sin nx
[Figure 3.5: Fourier series of f(x) = x together with its partial sums for N = 10, 50.]
[Figure 3.6: Fourier series of f(x) = x together with its periodic extension and the partial sum corresponding to N = 10.]
Example 18 Let f (x) = ex , x ∈ [−π, π]. Here again L = π, we also note that
since f is neither even nor odd, so we have to compute all the a0 , an and bn
coefficients.
Now, a0 = (1/π) ∫_{−π}^{π} e^x dx = (1/π)(e^π − e^{−π}) = (2/π) sinh π; for n ≥ 1, we evaluate using integration by parts, to obtain
an = (1/π) ∫_{−π}^{π} e^x cos(nx) dx = (1/π) [ (e^x/n) sin(nx) ]_{−π}^{π} − (1/(nπ)) ∫_{−π}^{π} e^x sin(nx) dx
= 0 − (1/(nπ)) ∫_{−π}^{π} e^x sin(nx) dx ← integrating this by parts again
= −(1/(nπ)) [ −(e^x/n) cos(nx) |_{−π}^{π} + (1/n) ∫_{−π}^{π} e^x cos(nx) dx ]
= (1/(n²π)) [e^π − e^{−π}] (−1)ⁿ − (1/n²) an
thus, we have
an = ((−1)ⁿ/(π(n² + 1))) [e^π − e^{−π}] = (2(−1)ⁿ/(π(n² + 1))) sinh π
and bn = (1/π) ∫_{−π}^{π} e^x sin(nx) dx; again using integration by parts repeatedly as above, we obtain
bn = −((−1)ⁿ n/(π(n² + 1))) [e^π − e^{−π}] = −(2(−1)ⁿ n/(π(n² + 1))) sinh π
thus, the F.S. of e^x is given by
e^x ∼ (1/π) sinh π + (2 sinh π/π) Σ_{n=1}^{∞} ((−1)ⁿ/(n² + 1)) (cos nx − n sin nx)
Definition 22 Let f(x), x ∈ [0, L]; then the even extension fe of f(x) to [−L, L] is defined by
fe(x) = f(x) for x ∈ [0, L], and fe(x) = f(−x) for x ∈ [−L, 0].
Now, since fe(x) is an even function, that is fe(−x) = fe(x), it then follows that bn = 0, n ≥ 1, and
an = (1/L) ∫_{−L}^{L} fe(x) cos(nπx/L) dx = (2/L) ∫_0^L f(x) cos(nπx/L) dx
The resulting expansion f(x) = a0/2 + Σ_{n=1}^{∞} an cos(nπx/L), where
an = (2/L) ∫_0^L f(x) cos(nπx/L) dx, n ≥ 0,
is called the Half-Range Cosine series.
Definition 23 Let f(x), x ∈ [0, L]; then the odd extension fo of f(x) to [−L, L] is defined by
fo(x) = f(x) for x ∈ [0, L], and fo(x) = −f(−x) for x ∈ [−L, 0].
Now, since fo(x) is an odd function, that is fo(−x) = −fo(x), it then follows that an = 0, n ≥ 0, and
bn = (1/L) ∫_{−L}^{L} fo(x) sin(nπx/L) dx = (2/L) ∫_0^L f(x) sin(nπx/L) dx
The resulting expansion f(x) = Σ_{n=1}^{∞} bn sin(nπx/L), with
bn = (2/L) ∫_0^L f(x) sin(nπx/L) dx, n ≥ 1,
is called the Half-Range Sine series.
Example 19 Expand f (x) = x, 0 < x < 2 in a half-range (a) Sine series, (b)
Cosine series.
• Sine Series: Here L = 2, and the series is given by f(x) = Σ_{n=1}^{∞} bn sin(nπx/2), with
bn = (2/2) ∫_0^2 f(x) sin(nπx/2) dx, n ≥ 1
= ∫_0^2 x sin(nπx/2) dx
= [ −x (2/(nπ)) cos(nπx/2) ]_0^2 + (2/(nπ)) ∫_0^2 cos(nπx/2) dx
= −(4/(nπ)) cos nπ + (2/(nπ)) [ (2/(nπ)) sin(nπx/2) ]_0^2 = −(4/(nπ)) (−1)ⁿ
Thus, f(x) = (4/π) Σ_{n=1}^{∞} ((−1)^{n+1}/n) sin(nπx/2)
• Cosine Series: Here L = 2, and the series is given by f(x) = a0/2 + Σ_{n=1}^{∞} an cos(nπx/2), with
an = (2/2) ∫_0^2 f(x) cos(nπx/2) dx = ∫_0^2 f(x) cos(nπx/2) dx, n ≥ 0
a0 = ∫_0^2 x dx = [x²/2]_0^2 = 2, and for n ≥ 1
an = ∫_0^2 x cos(nπx/2) dx = [ x (2/(nπ)) sin(nπx/2) ]_0^2 − (2/(nπ)) ∫_0^2 sin(nπx/2) dx
= [ (2/(nπ))² cos(nπx/2) ]_0^2 = (4/(n²π²)) (cos nπ − 1) = (4/(n²π²)) [(−1)ⁿ − 1]
Thus, f(x) = 1 + (4/π²) Σ_{n=1}^{∞} (((−1)ⁿ − 1)/n²) cos(nπx/2) = 1 − (8/π²) Σ_{n=1}^{∞} (1/(2n − 1)²) cos((2n − 1)πx/2)
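Both half-range expansions can be cross-checked numerically; the sketch below (not part of the notes, truncation level our own) evaluates the two series at a few interior points, where they should both approach f(x) = x.

import numpy as np

def sine_series(x, N=200):
    # f(x) = (4/pi) * sum (-1)^{n+1}/n * sin(n pi x / 2)
    n = np.arange(1, N + 1)
    return (4/np.pi)*np.sum((-1.0)**(n + 1)/n * np.sin(n*np.pi*x/2))

def cosine_series(x, N=200):
    # f(x) = 1 - (8/pi^2) * sum 1/(2n-1)^2 * cos((2n-1) pi x / 2)
    n = np.arange(1, N + 1)
    return 1 - (8/np.pi**2)*np.sum(np.cos((2*n - 1)*np.pi*x/2)/(2*n - 1)**2)

for x in (0.5, 1.0, 1.5):
    print(x, sine_series(x), cosine_series(x))   # both approach f(x) = x on (0, 2)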
Proof 1 We compute
< f − Σ_{i=1}^{n} <f, φi> φi , f − Σ_{i=1}^{n} <f, φi> φi >.
Note that
< Σ_{i=1}^{n} <f, φi> φi , Σ_{j=1}^{n} <f, φj> φj > = Σ_{i,j=1}^{n} <f, φi> <f, φj> <φi, φj> = Σ_{i=1}^{n} |<f, φi>|²
We then have:
0 ≤ < f − Σ_{i=1}^{n} <f, φi> φi , f − Σ_{i=1}^{n} <f, φi> φi >
= ||f||² − 2 Σ_{i=1}^{n} |<f, φi>|² + Σ_{i=1}^{n} |<f, φi>|²
= ||f||² − Σ_{i=1}^{n} |<f, φi>|²
Thus,
Σ_{i=1}^{n} |<f, φi>|² ≤ ||f||²,
For the trigonometric Fourier series, we have: if f(x) is square integrable on the interval [−L, L] with coefficients a0, an, bn, then Bessel's Inequality takes the form
(1/L) ∫_{−L}^{L} [f(x)]² dx ≥ (1/2)a0² + Σ_{n=1}^{∞} [an² + bn²] (3.1.16)
Thus, the series of squared coefficients converges; intuitively, this means that an, bn → 0 as n → ∞!
Proof 2 Exercise!
Example 21 For the example f(x) = |x|, x ∈ [−π, π], with |x| = π/2 − (4/π) Σ_{n=1}^{∞} (1/(2n − 1)²) cos(2n − 1)x. Here a0 = π, an = −4/(π(2n − 1)²), and bn = 0, and we have
(1/π) ∫_{−π}^{π} |x|² dx = a0²/2 + Σ_{n=1}^{∞} an²
(1/π) [x³/3]_{−π}^{π} = π²/2 + (16/π²) Σ_{n=1}^{∞} 1/(2n − 1)⁴
2π²/3 = π²/2 + (16/π²) Σ_{n=1}^{∞} 1/(2n − 1)⁴
so that
π⁴/96 = Σ_{n=1}^{∞} 1/(2n − 1)⁴
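This identity is easy to confirm numerically (a quick sketch, not from the notes):

import math

partial = sum(1.0/(2*n - 1)**4 for n in range(1, 100001))
print(partial)            # ~ 1.014678...
print(math.pi**4/96)      # 1.014678... as well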
Example 22 For the example f(x) = x², x ∈ [−π, π], with x² = π²/3 + 4 Σ_{n=1}^{∞} ((−1)ⁿ/n²) cos nx. Here a0 = 2π²/3, an = 4(−1)ⁿ/n², and bn = 0. Then in this case, one can show
Using
cos(nπx/L) = (1/2)(e^{inπx/L} + e^{−inπx/L})
sin(nπx/L) = (1/(2i))(e^{inπx/L} − e^{−inπx/L})
we have
an cos(nπx/L) + bn sin(nπx/L) = (1/2)(an − ibn)e^{inπx/L} + (1/2)(an + ibn)e^{−inπx/L} (3.1.19)
Inserting into (3.1.18), we obtain
f(x) = α0 + Σ_{n=1}^{∞} [ αn e^{inπx/L} + βn e^{−inπx/L} ] (3.1.20)
where α0 = (1/2)a0, αn = (1/2)(an − ibn), βn = (1/2)(an + ibn). Computing the coefficients, we have
αn = (1/2) [ (1/L) ∫_{−L}^{L} f(x) cos(nπx/L) dx − (i/L) ∫_{−L}^{L} f(x) sin(nπx/L) dx ]
= (1/(2L)) ∫_{−L}^{L} f(x) e^{−inπx/L} dx
and
βn = (1/2) [ (1/L) ∫_{−L}^{L} f(x) cos(nπx/L) dx + (i/L) ∫_{−L}^{L} f(x) sin(nπx/L) dx ]
= (1/(2L)) ∫_{−L}^{L} f(x) e^{inπx/L} dx
f(x) = α0 + Σ_{n=1}^{∞} αn e^{inπx/L} + Σ_{n=1}^{∞} α_{−n} e^{−inπx/L} (3.1.21)
= α0 + Σ_{n=1}^{∞} αn e^{inπx/L} + Σ_{n=−∞}^{−1} αn e^{inπx/L} (3.1.22)
= Σ_{n=−∞}^{∞} αn e^{inπx/L} (3.1.23)
That is,
f(x) = Σ_{n=−∞}^{∞} αn e^{inπx/L} (3.1.24)
αn = (1/(2L)) ∫_{−L}^{L} f(x) e^{−inπx/L} dx (3.1.25)
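The complex coefficients (3.1.25) can be computed numerically and the truncated sum (3.1.24) compared with the function. The sketch below is our own illustration (sample f, truncation level and helper names are not from the notes).

import numpy as np
from scipy.integrate import quad

L = 1.0
f = lambda x: np.exp(x)   # sample function on [-L, L] (our choice)

def alpha(n):
    """alpha_n = 1/(2L) * int_{-L}^{L} f(x) e^{-i n pi x / L} dx."""
    re = quad(lambda x: f(x)*np.cos(n*np.pi*x/L), -L, L)[0]
    im = quad(lambda x: -f(x)*np.sin(n*np.pi*x/L), -L, L)[0]
    return (re + 1j*im)/(2*L)

def series(x, N=50):
    return sum(alpha(n)*np.exp(1j*n*np.pi*x/L) for n in range(-N, N + 1))

print(series(0.3).real, np.exp(0.3))   # the two values should be close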
Fourier Transforms
The Fourier series developed above are primarily useful when dealing with func-
tions defined on a finite interval, say, [−L, L] or [0, L]. The functions consid-
ered are supposed to be periodic. Fourier Series are used when solving par-
tial differential equations on finite domains defined above, in particular for ini-
tial/boundary value problems defined on [0, L]. If the functions are non-periodic
and defined on (−∞, ∞) or [0, ∞), Fourier Transforms or Fourier Integral Trans-
forms are developed.
The Fourier Transforms are used when dealing with partial differential equa-
tions over spatial domains that are infinite or semi-infinite. In this case, the
Fourier transform technique transforms the pde into a simpler ”ode” that is
in terms of the new variable, the transform variable, this can then be solved.
Taking the inverse transform of the solution converts the solution of the ode to
that of the corresponding pde.
We start by defining what the Fourier Transform is and computing some
Fourier transforms of basic functions.
Example 24 Consider the square wave f(x) = { 0, −∞ < x < −a; 1, −a < x < a; 0, a < x < ∞ }, where a > 0. Find its Fourier Transform.
By definition
F{f(x)} = ∫_{−∞}^{−a} 0 · e^{−iµx} dx + ∫_{−a}^{a} 1 · e^{−iµx} dx + ∫_a^{∞} 0 · e^{−iµx} dx
= (1/(iµ)) (e^{aiµ} − e^{−aiµ}) = 2 sin(µa)/µ = F(µ)
Using the inverse transform, we have that:
(1/π) ∫_{−∞}^{∞} (sin(µa)/µ) e^{iµx} dµ = f(x) = { 0, −∞ < x < −a; 1, −a < x < a; 1/2, x = ±a; 0, a < x < ∞ }.
Example 25 Let f(x) = { xⁿ e^{−ax}, x > 0; 0, x < 0 }, where a > 0, n > −1. Find its Fourier Transform.
Solution: Exercise!
Example 26 Let f(x) = e^{−|x|a}, where a > 0. Find its Fourier Transform.
By definition
F{f(x)} = ∫_{−∞}^{∞} e^{−|x|a} e^{−iµx} dx
= ∫_{−∞}^{0} e^{(a−iµ)x} dx + ∫_0^{∞} e^{−(a+iµ)x} dx = 1/(a − iµ) + 1/(a + iµ)
= 2a/(a² + µ²) = F(µ)
Using the inverse transform, we have that:
(a/π) ∫_{−∞}^{∞} (1/(a² + µ²)) e^{iµx} dµ = f(x) = e^{−|x|a},
or simply
∫_{−∞}^{∞} (1/(a² + µ²)) e^{iµx} dµ = (π/a) e^{−|x|a}.
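This transform pair can be verified numerically. The following sketch (sample values of a and µ are our own) evaluates the defining integral directly; the imaginary part vanishes by symmetry, so only the cosine part is computed.

import numpy as np
from scipy.integrate import quad

a, mu = 1.5, 2.0

# F(mu) = int_{-inf}^{inf} e^{-a|x|} e^{-i mu x} dx
real_part = quad(lambda x: np.exp(-a*abs(x))*np.cos(mu*x), -np.inf, np.inf)[0]
print(real_part)                 # numerical value of F(mu)
print(2*a/(a**2 + mu**2))        # closed form 2a/(a^2 + mu^2)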
for k = 0, 1, 2, 3, · · · , n − 1; then F{f^{(n)}(x)} = (iµ)ⁿ F(µ).
Thus, F{f′(x)} = (iµ)¹ F(µ) = iµF(µ) and F{f′′(x)} = (iµ)² F(µ) = −µ² F(µ), etc.
The Convolution
Let f (x) and g(x) be two functions of x, x ∈ (−∞, ∞). The convolution of f (x)
and g(x) is also a function of x, denoted by (f ∗ g)(x) and is defined by the
relation
(f ∗ g)(x) = ∫_{−∞}^{∞} f(x − τ) g(τ) dτ (4.0.4)
Theorem 3 Let F (µ) and G(µ) be the Fourier transforms for f (x) and g(x)
respectively. Then the inverse Fourier transform of F (µ)G(µ) is given by
F −1 (F (µ)G(µ)) = (f ∗ g)(x)
We will use this result mostly when we solve partial differential equations.
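The convolution theorem has a discrete analogue that is easy to test. The sketch below (ours, not from the notes) uses the FFT, i.e. a discrete, periodic stand-in for the continuous transform, so it only illustrates the idea that transforming turns convolution into multiplication.

import numpy as np

n = 256
f = np.random.rand(n)
g = np.random.rand(n)

# Circular convolution computed directly ...
direct = np.array([sum(f[(k - j) % n]*g[j] for j in range(n)) for k in range(n)])
# ... and via the discrete convolution theorem: F^{-1}(F(f) F(g)) = f * g
via_fft = np.real(np.fft.ifft(np.fft.fft(f)*np.fft.fft(g)))

print(np.max(np.abs(direct - via_fft)))   # ~ 1e-12, the two agree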
= ∫_0^{∞} f(x) (e^{iµx} + e^{−iµx}) dx
= 2 ∫_0^{∞} f(x) cos µx dx;
thus, we define
Fc(µ) = (1/2) F{f(x)} = ∫_0^{∞} f(x) cos µx dx
to be the Fourier cosine transform of f(x), x ∈ [0, ∞). A similar derivation can be made for the Fourier sine transform. This leads to the following definitions:
Definition 26 Let f(x), x > 0; then the Fourier Cosine Transform of f(x) is defined by
Fc(µ) = ∫_0^{∞} f(x) cos µx dx,
its inverse is given by
f(x) = (2/π) ∫_0^{∞} Fc(µ) cos µx dµ
Definition 27 Let f(x), x > 0; then the Fourier Sine Transform of f(x) is defined by
Fs(µ) = ∫_0^{∞} f(x) sin µx dx,
its inverse is given by
f(x) = (2/π) ∫_0^{∞} Fs(µ) sin µx dµ
Example 28 Find the Fourier Sine and Cosine Transforms of the function f(x) = e^{−ax}, a > 0.
We note that
∫_0^{∞} e^{−ax} e^{iµx} dx = ∫_0^{∞} e^{−ax} cos µx dx + i ∫_0^{∞} e^{−ax} sin µx dx = Fc(µ) + iFs(µ)
but,
∫_0^{∞} e^{−ax} e^{iµx} dx = ∫_0^{∞} e^{−x(a−iµ)} dx = [ −(1/(a − iµ)) e^{−x(a−iµ)} ]_0^{∞}
= 1/(a − iµ) = (a + iµ)/(a² + µ²) = Fc(µ) + iFs(µ),
so that Fc(µ) = a/(a² + µ²) and Fs(µ) = µ/(a² + µ²).
Thus, when solving half-range problems using the Fourier Sine and Cosine Transform method, the nature of the boundary condition at x = 0 will help us decide whether to use the Fourier Sine or Cosine transform:
• if the condition u(0, t) is given, then we use the Fourier Sine Transform;
• if instead, we have a condition of the form ux(0, t), we make use of the Fourier Cosine Transform.
Chapter 5
Initial Boundary Value Problems (IBVPs) on Infinite and Semi-Infinite Domains
We have so far solved IBVPs on a finite domain [0, L] by using the method of separation of variables, which reduced the pde to a system of ordinary differential equations whose solutions, together with the application of the superposition principle and the use of Fourier Series, lead to the solution of the pde in terms of an infinite series. We now consider problems defined on infinite and semi-infinite domains, to which we apply the method of Fourier transforms. The pde is transformed to a simple ordinary differential equation in terms of the transformed variable, and the associated initial conditions are transformed likewise. The solution of the pde is then obtained as an inverse Fourier transform of the solution of the ode. This will be illustrated by use of examples; we focus mostly on the heat, wave and Laplace equations.
To solve the above problem, since the problem has two independent variables x and t, we need to decide which variable to use for the transformation. In this case, we have to use the x variable, since it ranges over (−∞, ∞), which corresponds to the definition of the Fourier Transform.
Now, let F{u(x, t)} = U(µ, t) be the Fourier Transform of u(x, t) with respect to x. Transforming the pde, we have
Thus, solving the resulting ode and using the transformed initial condition, we obtain
U(µ, t) = (2/(1 + µ²)) e^{−kµ²t}.
We note that the right-hand side of the solution looks complex yet the left-hand side is real. We can extract the real solution by verifying that the complex part always vanishes.
Note:
(1/π) ∫_{−∞}^{∞} (e^{−kµ²t+iµx}/(1 + µ²)) dµ = (1/π) ∫_{−∞}^{∞} (e^{−kµ²t} cos µx/(1 + µ²)) dµ + (i/π) ∫_{−∞}^{∞} (e^{−kµ²t} sin µx/(1 + µ²)) dµ
= (2/π) ∫_0^{∞} (e^{−kµ²t} cos µx/(1 + µ²)) dµ,
since the integrand of (i/π) ∫_{−∞}^{∞} (e^{−kµ²t} sin µx/(1 + µ²)) dµ is odd, hence that integral is 0.
Show that the solution is given by u(x, t) = (2/π) ∫_0^{∞} ((1 − cos µ)/µ) e^{−µ²t} sin µx dµ.
It is standard to leave the answer in the integral form, just like what we
did with solutions obtained by making use of the Fourier Series application, the
answer was left in series form. Some of the integrals are not easy to handle
analytically, they may actually require numerical evaluation.
The operational rules for solving partial differential equations are given below. Let f(x) and f′(x) be continuous on [0, ∞), with f(x) → 0 and f′(x) → 0 as x → ∞, and suppose that f′′(x) is continuous on [0, ∞). Then we have
Fs{f′′(x)} = −µ² Fs(µ) + µ f(0),
Fc{f′′(x)} = −µ² Fc(µ) − f′(0).
Since in this case we have the boundary condition ux(0, t), we make use of the Fourier Cosine Transform. Thus, we let
Fc{u(x, t)} = Uc(µ, t);
then, transforming the pde and the initial condition, we have
and
Fc{u(x, 0)} = Uc(µ, 0) = Fc{e^{−x}} = 1/(1 + µ²).
Thus, we need to solve the IVP
∂Uc/∂t (µ, t) = −µ² Uc(µ, t) (5.0.3)
Uc(µ, 0) = 1/(1 + µ²) (5.0.4)
Solving (5.0.3), we obtain Uc(µ, t) = C(µ) e^{−µ²t}. Applying the initial condition (5.0.4), we obtain C(µ) = 1/(1 + µ²); thus, we have
Uc(µ, t) = (1/(1 + µ²)) e^{−µ²t},
the solution of the pde is then obtained by taking the inverse Fourier Cosine transform:
u(x, t) = Fc^{−1}{Uc(µ, t)} = (2/π) ∫_0^{∞} Uc(µ, t) cos µx dµ = (2/π) ∫_0^{∞} (e^{−µ²t}/(1 + µ²)) cos µx dµ.
Since in this case we have the boundary condition u(0, t), we make use of the Fourier Sine Transform. Thus, we let
Fs{u(x, t)} = Us(µ, t),
and
Fs{u(x, 0)} = Us(µ, 0) = Fs{e^{−x}} = µ/(1 + µ²),
∂Us/∂t (µ, t) = (1 − µ²) Us(µ, t) (5.0.5)
Us(µ, 0) = µ/(1 + µ²) (5.0.6)
Solving (5.0.5), we obtain Us(µ, t) = C(µ) e^{(1−µ²)t}. Applying the initial condition (5.0.6), we obtain C(µ) = µ/(1 + µ²); thus, we have
Us(µ, t) = (µ/(1 + µ²)) e^{(1−µ²)t},
the solution of the pde is then obtained by taking the inverse Fourier Sine transform:
u(x, t) = Fs^{−1}{Us(µ, t)} = (2/π) ∫_0^{∞} Us(µ, t) sin µx dµ = (2/π) ∫_0^{∞} (µ e^{(1−µ²)t}/(1 + µ²)) sin µx dµ.
Here ∇²u denotes the Laplacian of u, and the heat and wave equations read
ut = κ∇²u,
utt = c²∇²u.
The Laplace equation represents the steady state of a field that depends on two or more independent variables, which are typically spatial. The steady state of both equations is obtained by setting, respectively, ut = 0 and utt = 0, leading, as before, in two dimensions to
∇²u = uxx + uyy = 0,
or, in three dimensions, to
∇²u = uxx + uyy + uzz = 0,
where
Ω = {(x, y) | 0 < x < L1, 0 < y < L2}.
The above problem can be decomposed into a sequence of four boundary value problems, each having only one boundary segment with inhomogeneous boundary conditions while the remainder of the boundary is subject to homogeneous boundary conditions. These latter problems can then be solved by separation of variables.
lim_{y→±∞} u(x, y) = 0
Taking the Fourier transform in x of the Laplace equation, we obtain the ODE
∂²U/∂y² − µ²U = 0 (5.1.2)
whose bounded solution is
U(µ, y) = { a(µ)e^{µy}, µ < 0; b(µ)e^{−µy}, µ > 0 } (5.1.4)
which may be written in the compact form U(µ, y) = c(µ) e^{−|µ|y}. Applying the transformed boundary condition U(µ, 0) = F(µ) gives
c(µ) = F(µ).
Thus,
Z ∞ Z ∞
1 1
u(x, y) = F (µ)e−|µ|y eiµx dx = F (µ)G(µ)eiµx dx (5.1.7)
2π −∞ 2π −∞
we decompose the problem into two subproblems that are easier to solve, where:
∇²u1 = 0,                              ∇²u2 = 0,
u1(0, y) = g(y),                       u2(0, y) = 0,
∂u1/∂y (x, 0) = 0,                     ∂u2/∂y (x, 0) = f(x),
lim_{x→∞} u1(x, y) = 0,                lim_{x→∞} u2(x, y) = 0,
lim_{y→∞} u1(x, y) = 0,                lim_{y→∞} u2(x, y) = 0.
When solving for u1(x, y), we consider the Fourier cosine transform in y, since we have the homogeneous boundary condition ∂u1/∂y (x, 0) = 0:
u1(x, y) = (2/π) ∫_0^{∞} U1(x, µ) cos µy dµ
U1(x, µ) = ∫_0^{∞} u1(x, y) cos µy dy
Taking the Fourier cosine transform in y of the Laplace equation for u1 and using the homogeneous boundary condition ∂u1/∂y (x, 0) = 0, we obtain the ordinary differential equation
∂²U1/∂x² − µ²U1 = 0 (5.1.8)
whose general solution is U1(x, µ) = a(µ)e^{−µx} + b(µ)e^{µx}. Since u1 → 0 as x → ∞, we must have b(µ) = 0; therefore
U1(x, µ) = a(µ) e^{−µx}
Now, since U1(0, µ) = Fc{u1(0, y)} = Fc{g(y)}, we get a(µ) = ∫_0^{∞} g(y) cos µy dy = G(µ). The solution u1(x, y) is obtained from U1(x, µ) by taking the inverse:
u1(x, y) = (2/π) ∫_0^{∞} G(µ) e^{−µx} cos µy dµ
When solving for u2 (x, y), we consider the Fourier sine transform in x since
we have a homogeneous boundary condition u2 (0, y) = 0.
u2(x, y) = (2/π) ∫_0^{∞} U2(µ, y) sin µx dµ
U2(µ, y) = ∫_0^{∞} u2(x, y) sin µx dx
Taking the Fourier sine transform in x of the Laplace equation for u2 and using the homogeneous boundary condition u2(0, y) = 0, we obtain the ordinary differential equation
∂²U2/∂y² − µ²U2 = 0 (5.1.10)
whose general solution is U2(µ, y) = a(µ)e^{−µy} + b(µ)e^{µy}. Since u2 → 0 as y → ∞, we must have b(µ) = 0; therefore
U2(µ, y) = a(µ) e^{−µy}
Now, since ∂U2/∂y (µ, 0) = Fs{∂u2/∂y (x, 0)} = Fs{f(x)} = F(µ), we get a(µ) = F(µ)/µ. The solution u2(x, y) is obtained from U2(µ, y) by taking the inverse:
u2(x, y) = (2/π) ∫_0^{∞} (F(µ)/µ) e^{−µy} sin µx dµ
Thus, the solution of the original problem is u(x, y) = u1(x, y) + u2(x, y).