PDE Textbook (101 150)

This document provides a proof of the derivation of the heat equation's self-similar solution in one dimension. It shows that the self-similar solution takes the form of a Gaussian curve whose standard deviation increases over time. It also discusses how the initial condition can be represented as a Dirac delta function, which is not an ordinary function but a distribution. Finally, it presents a general solution method for the one-dimensional heat equation using convolution with the Gaussian kernel.

Uploaded by

ancelmomtmtc

Chapter 3. Heat equation in 1D

Proof. The proof is just a calculation. Note that $\beta = \alpha^2$ because one derivative with respect to $t$ is "worth" two derivatives with respect to $x$.

We impose another assumption:

Condition 3.1.1. The total heat energy
$$I(t) := \int_{-\infty}^{\infty} u(x,t)\,dx \tag{3.1.3}$$
is finite and does not depend on $t$.


The second part follows from the first. Indeed (not rigorously): integrating
$$u_t = k u_{xx} \tag{3.1.1}$$
with respect to $x$ from $-\infty$ to $+\infty$, and assuming that $u_x(\pm\infty) = 0$, we see that $\partial_t I(t) = 0$.

Note that $\int_{-\infty}^{\infty} u_{\alpha,\beta,\gamma}\,dx = \gamma|\alpha|^{-1}\int_{-\infty}^{\infty} u\,dx$, and to have them equal we should take $\gamma = |\alpha|$ (actually we restrict ourselves to $\alpha > 0$). So (3.1.2), which is $u_{\alpha,\beta,\gamma}(x,t) = \gamma u(\alpha x, \beta t)$, becomes
$$u_\alpha(x,t) = \alpha u(\alpha x, \alpha^2 t). \tag{3.1.4}$$

This is a similarity transformation. Now we look for a self-similar solution of (3.1.1), i.e. a solution such that $u_\alpha(x,t) = u(x,t)$ for all $\alpha > 0$, $x$, $t > 0$. So we want
$$u(x,t) = \alpha u(\alpha x, \alpha^2 t) \qquad \forall \alpha > 0,\ t > 0,\ x. \tag{3.1.5}$$
We want to get rid of one of the variables; taking $\alpha = t^{-1/2}$ we get
$$u(x,t) = t^{-1/2} u(t^{-1/2}x, 1) = t^{-1/2}\phi(t^{-1/2}x) \tag{3.1.6}$$
with $\phi(\xi) := u(\xi, 1)$. Equality (3.1.6) is equivalent to (3.1.5).


Now we need to plug $u(x,t) = t^{-1/2}\phi(t^{-1/2}x)$ into equation (3.1.1). Note that
$$u_t = -\tfrac{1}{2}t^{-3/2}\phi(t^{-1/2}x) + t^{-1/2}\phi'(t^{-1/2}x)\times\bigl(-\tfrac{1}{2}t^{-3/2}x\bigr) = -\tfrac{1}{2}t^{-3/2}\bigl(\phi(t^{-1/2}x) + t^{-1/2}x\,\phi'(t^{-1/2}x)\bigr)$$
and
$$u_x = t^{-1}\phi'(t^{-1/2}x), \qquad u_{xx} = t^{-3/2}\phi''(t^{-1/2}x).$$
Plugging into $u_t = k u_{xx}$, multiplying by $t^{3/2}$, and setting $t^{-1/2}x = \xi$, we arrive at
$$-\tfrac{1}{2}\bigl(\phi(\xi) + \xi\phi'(\xi)\bigr) = k\phi''(\xi). \tag{3.1.7}$$
Good news: it is an ODE. Really good news: $\phi(\xi) + \xi\phi'(\xi) = \bigl(\xi\phi(\xi)\bigr)'$. Then, integrating, we get
$$-\tfrac{1}{2}\xi\phi(\xi) = k\phi'(\xi). \tag{3.1.8}$$
Remark 3.1.1. Sure, there should be a constant $+C$, but we are looking for a solution fast decaying, together with its derivatives, at $\infty$, and this implies $C = 0$.

Separating variables in (3.1.8) and integrating, we get
$$\frac{d\phi}{\phi} = -\frac{1}{2k}\xi\,d\xi \implies \log\phi = -\frac{1}{4k}\xi^2 + \log c \implies \phi(\xi) = c\,e^{-\xi^2/4k},$$
and plugging into (3.1.6) (which is $u(x,t) = t^{-1/2}\phi(t^{-1/2}x)$) we arrive at
$$u(x,t) = \frac{1}{2\sqrt{\pi k t}}\,e^{-x^2/4kt}. \tag{3.1.9}$$
Remark 3.1.2. We took $c = \frac{1}{2\sqrt{\pi k}}$ to satisfy $I(t) = 1$.
Really,
$$I(t) = c\,t^{-1/2}\int_{-\infty}^{+\infty} e^{-x^2/4kt}\,dx = c\sqrt{4k}\int_{-\infty}^{+\infty} e^{-z^2}\,dz = 2c\sqrt{k\pi},$$
where we changed the variable $x = 2\sqrt{kt}\,z$ and used the equality
$$J = \int_{-\infty}^{+\infty} e^{-x^2}\,dx = \sqrt{\pi}. \tag{3.1.10}$$
To prove (3.1.10), just note that
$$J^2 = \int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty} e^{-x^2}e^{-y^2}\,dx\,dy = \int_0^{2\pi} d\theta\int_0^\infty e^{-r^2}r\,dr = \pi,$$
where we used polar coordinates; since $J > 0$ we get (3.1.10).
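The self-similar solution (3.1.9) is easy to sanity-check numerically. The following short Python sketch (not part of the original text; names are ours) verifies by finite differences that (3.1.9) satisfies $u_t = k u_{xx}$ at a sample point, and by the trapezoid rule that $I(t) = 1$, as Remark 3.1.2 claims:

```python
import math

def u(x, t, k=1.0):
    """Self-similar solution (3.1.9): u = exp(-x^2/4kt) / (2 sqrt(pi k t))."""
    return math.exp(-x * x / (4 * k * t)) / (2 * math.sqrt(math.pi * k * t))

k, x, t, h = 1.0, 0.7, 0.5, 1e-4
# Central finite differences: u_t should equal k * u_xx.
ut = (u(x, t + h, k) - u(x, t - h, k)) / (2 * h)
uxx = (u(x + h, t, k) - 2 * u(x, t, k) + u(x - h, t, k)) / (h * h)
print(abs(ut - k * uxx) < 1e-5)      # True

# Trapezoid rule: total heat energy I(t) = 1 for every t.
xs = [-20 + 0.01 * i for i in range(4001)]
I = sum(0.01 * (u(a, t, k) + u(b, t, k)) / 2 for a, b in zip(xs, xs[1:]))
print(abs(I - 1.0) < 1e-6)           # True
```

Repeating the check for other values of $t$ illustrates the conservation of $I(t)$.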



Remark 3.1.3. The solution we obtained is a very important one. However, we have a problem understanding what $u|_{t=0^+}$ is: $u(x,t) \to 0$ as $t \to 0^+$ for $x \neq 0$, but $u(x,t) \to \infty$ as $t \to 0^+$ for $x = 0$, while $\int_{-\infty}^{\infty} u(x,t)\,dx = 1$.

In fact, $u|_{t=0^+} = \delta(x)$, the Dirac $\delta$-function, which is actually not an ordinary function but a distribution (see Section 11.1). Distributions play an important role in modern analysis and its applications, in particular to physics.

To work around this problem we consider
$$U(x,t) = \int_{-\infty}^{x} u(x',t)\,dx'. \tag{3.1.11}$$

We claim that:

Proposition 3.1.2. (i) $U(x,t)$ also satisfies equation (3.1.1).

(ii) $U(x,0^+) = \theta(x) = \begin{cases} 0 & x < 0,\\ 1 & x > 0,\end{cases}$ the Heaviside step function.

Proof. Plugging $u = U_x$ into (3.1.1) we see that $(U_t - kU_{xx})_x = 0$, and then $U_t - kU_{xx} = \Phi(t)$. However, one can easily see that as $x \to -\infty$, $U$ decays fast together with all its derivatives, and therefore $\Phi(t) = 0$; Statement (i) is proven.

Note that
$$U(x,t) = \frac{1}{\sqrt{\pi}}\int_{-\infty}^{x/\sqrt{4kt}} e^{-z^2}\,dz = \frac{1}{2} + \frac{1}{2}\operatorname{erf}\Bigl(\frac{x}{\sqrt{4kt}}\Bigr) \tag{3.1.12}$$
with
$$\operatorname{erf}(z) := \frac{2}{\sqrt{\pi}}\int_0^z e^{-w^2}\,dw \tag{erf}$$
and that the upper limit of the integral tends to $\pm\infty$ as $t \to 0^+$ for $x \gtrless 0$. Then, since the integrand decays very fast at $\mp\infty$, using (3.1.10) we arrive at Statement (ii).
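Formula (3.1.12) can be checked numerically (an illustrative Python sketch, not from the text; Python's `math.erf` plays the role of erf):

```python
import math

def kernel(x, t, k=1.0):             # the self-similar solution (3.1.9)
    return math.exp(-x * x / (4 * k * t)) / (2 * math.sqrt(math.pi * k * t))

def U_exact(x, t, k=1.0):            # formula (3.1.12)
    return 0.5 + 0.5 * math.erf(x / math.sqrt(4 * k * t))

k, t, x, h = 1.0, 0.25, 0.6, 0.001
grid = [-30 + h * i for i in range(30601)]     # spans [-30, 0.6], i.e. up to x
U_num = sum(h * (kernel(a, t, k) + kernel(b, t, k)) / 2
            for a, b in zip(grid, grid[1:]))
print(abs(U_num - U_exact(x, t, k)) < 1e-6)    # True

# As t -> 0+, U approaches the Heaviside function theta(x).
print(U_exact(-0.5, 1e-4) < 1e-9, abs(U_exact(0.5, 1e-4) - 1.0) < 1e-9)
```

The second line illustrates Statement (ii): for small $t$ the values at $x = \mp 0.5$ are already indistinguishable from $0$ and $1$.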
Remark 3.1.4. (a) One can construct $U(x,t)$ as a self-similar solution, albeit with $\gamma = 1$.

(b) We can avoid the analysis of $U(x,t)$ completely just by noting that $u(x,t)$ is a $\delta$-sequence as $t \to 0^+$: $u(x,t) \to 0$ for all $x \neq 0$, but $\int_{-\infty}^{\infty} u(x,t)\,dx = 1$.

Consider now a smooth function $g(x)$ with $g(-\infty) = 0$ and note that
$$g(x) = \int_{-\infty}^{\infty} \theta(x-y)\,g'(y)\,dy. \tag{3.1.13}$$
Really, the r.h.e. is $\int_{-\infty}^{x} g'(y)\,dy = g(x) - g(-\infty)$.

Also note that $U(x-y,t)$ solves the IVP with initial condition $U(x-y,0^+) = \theta(x-y)$. Therefore
$$u(x,t) = \int_{-\infty}^{\infty} U(x-y,t)\,g'(y)\,dy$$
solves the IVP with initial condition $u(x,0^+) = g(x)$.

Integrating by parts with respect to $y$, we arrive at
$$u(x,t) = \int_{-\infty}^{\infty} U_x(x-y,t)\,g(y)\,dy$$
and finally at
$$u(x,t) = \int_{-\infty}^{\infty} G(x-y,t)\,g(y)\,dy \tag{3.1.14}$$
with
$$G(x-y,t) := \frac{1}{2\sqrt{k\pi t}}\,e^{-(x-y)^2/4kt}. \tag{3.1.15}$$
So we have proven:

Theorem 3.1.3. Formulae (3.1.14)–(3.1.15) give us a solution of
$$u_t = k u_{xx}, \qquad -\infty < x < \infty,\ t > 0, \tag{3.1.16}$$
$$u|_{t=0} = g(x). \tag{3.1.17}$$

Remark 3.1.5. We will recreate the same formulae in Section 5.3 using
Fourier transform.
Example 3.1.1. Find the solution $u(x,t)$ to
$$u_t = 4u_{xx}, \qquad -\infty < x < \infty,\ t > 0,$$
$$u|_{t=0} = \begin{cases} 1 - |x| & |x| < 1,\\ 0 & |x| \ge 1,\end{cases} \qquad \max|u| < \infty.$$

Solution. Here $k = 4$, so by (3.1.14)–(3.1.15)
$$u(x,t) = \frac{1}{\sqrt{16\pi t}}\int_{-\infty}^{\infty} e^{-(x-y)^2/16t}\,g(y)\,dy = \frac{1}{\sqrt{16\pi t}}\Bigl[\int_{-1}^{0} e^{-(x-y)^2/16t}(1+y)\,dy + \int_{0}^{1} e^{-(x-y)^2/16t}(1-y)\,dy\Bigr].$$
Plugging $y = x + 4z\sqrt{t}$ and changing the limits,
$$u(x,t) = \frac{1}{\sqrt{\pi}}\Bigl[\int_{-(x+1)/4\sqrt{t}}^{-x/4\sqrt{t}} e^{-z^2}\bigl(1 + x + 4z\sqrt{t}\bigr)\,dz + \int_{-x/4\sqrt{t}}^{-(x-1)/4\sqrt{t}} e^{-z^2}\bigl(1 - x - 4z\sqrt{t}\bigr)\,dz\Bigr];$$
we evaluate the integrals of the terms without the factor $z$ as erf functions, and the terms with the factor $z$ we simply integrate, resulting in
$$u(x,t) = \frac{1}{2}(1+x)\Bigl[\operatorname{erf}\Bigl(\frac{x+1}{4\sqrt{t}}\Bigr) - \operatorname{erf}\Bigl(\frac{x}{4\sqrt{t}}\Bigr)\Bigr] + \frac{1}{2}(1-x)\Bigl[\operatorname{erf}\Bigl(\frac{x}{4\sqrt{t}}\Bigr) - \operatorname{erf}\Bigl(\frac{x-1}{4\sqrt{t}}\Bigr)\Bigr] + \frac{2\sqrt{t}}{\sqrt{\pi}}\Bigl[e^{-z^2}\Big|_{z=x/4\sqrt{t}}^{z=(x+1)/4\sqrt{t}} + e^{-z^2}\Big|_{z=x/4\sqrt{t}}^{z=(x-1)/4\sqrt{t}}\Bigr].$$
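The closed form above can be cross-checked against a direct numerical evaluation of the convolution (3.1.14) with $k = 4$ (an illustrative Python sketch, not from the text; function names are ours):

```python
import math

def g(y):                            # initial data: the triangular hat
    return max(0.0, 1.0 - abs(y))

def u_conv(x, t):
    """Direct convolution (3.1.14) with k = 4, by the trapezoid rule."""
    h, c = 0.001, 1.0 / math.sqrt(16 * math.pi * t)
    ys = [-1 + h * i for i in range(2001)]           # g vanishes outside [-1, 1]
    f = [c * math.exp(-(x - y) ** 2 / (16 * t)) * g(y) for y in ys]
    return sum(h * (a + b) / 2 for a, b in zip(f, f[1:]))

def u_exact(x, t):
    """Closed form from Example 3.1.1."""
    r, e = math.sqrt(t), math.erf
    s = 0.5 * (1 + x) * (e((x + 1) / (4 * r)) - e(x / (4 * r)))
    s += 0.5 * (1 - x) * (e(x / (4 * r)) - e((x - 1) / (4 * r)))
    s += (2 * r / math.sqrt(math.pi)) * (
        math.exp(-((x + 1) / (4 * r)) ** 2) - math.exp(-(x / (4 * r)) ** 2)
        + math.exp(-((x - 1) / (4 * r)) ** 2) - math.exp(-(x / (4 * r)) ** 2))
    return s

ok = all(abs(u_conv(x, 0.1) - u_exact(x, 0.1)) < 1e-5
         for x in (-1.5, -0.3, 0.0, 0.8, 2.0))
print(ok)   # True
```

Agreement at several sample points gives confidence in the bookkeeping of the erf arguments and signs.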

3.1.3 Inhomogeneous right-hand expression

Consider instead the problem with a right-hand expression:
$$u_t = k u_{xx} + f(x,t), \qquad -\infty < x < \infty,\ t > 0, \tag{3.1.18}$$
$$u|_{t=0} = g(x). \tag{3.1.17}$$
By the Duhamel principle, the contribution of this right-hand expression is
$$\int_0^t\!\!\int G(x-y, t-\tau)\,f(y,\tau)\,dy\,d\tau. \tag{3.1.19}$$

Indeed, the Duhamel integral (2.5.8)
$$u(x,t) = \int_0^t U(x,t;\tau)\,d\tau \tag{2.5.8}$$
remains, except now the auxiliary function $U(x,t;\tau)$ satisfies the heat equation (3.1.1) with initial condition $U|_{t=\tau} = f(x,\tau)$.

One can prove it exactly as in Subsection 2.5.2.

Exercise 3.1.1. Do it by yourself!

Therefore we arrive at:

Theorem 3.1.4. The solution of problem (3.1.18)–(3.1.17) is given by
$$u(x,t) = \int_0^t\!\!\int_{-\infty}^{\infty} G(x-y,t-\tau)\,f(y,\tau)\,dy\,d\tau + \int_{-\infty}^{\infty} G(x-y,t)\,g(y)\,dy. \tag{3.1.20}$$
Here the first term solves the IVP with right-hand expression $f(x,t)$ and initial condition $u(x,0) = 0$, and the second term solves the IVP with right-hand expression $0$ and initial condition $u(x,0) = g(x)$.
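One can sanity-check (3.1.20) in a special case where the answer is obvious: for $f \equiv 1$ and $g \equiv 0$ the solution is $u = t$, because the inner integral of the kernel over $y$ equals $1$ for every $\tau < t$. A Python sketch (illustrative, not from the text):

```python
import math

def G(z, s, k=1.0):                  # kernel (3.1.15)
    return math.exp(-z * z / (4 * k * s)) / (2 * math.sqrt(math.pi * k * s))

# With f == 1 and g == 0, formula (3.1.20) should give u(x, t) = t.
t, x = 0.2, 0.3
M, hy = 40, 0.005
dtau = t / M
total = 0.0
for j in range(M):
    tau = (j + 0.5) * dtau           # midpoint rule keeps t - tau away from 0
    ys = [x - 4 + hy * i for i in range(1601)]          # y in [x - 4, x + 4]
    inner = sum(hy * (G(x - a, t - tau) + G(x - b, t - tau)) / 2
                for a, b in zip(ys, ys[1:]))
    total += inner * dtau
print(abs(total - t) < 1e-6)         # True
```

The midpoint rule in $\tau$ is a deliberate choice: it avoids evaluating the kernel at $t - \tau = 0$, where it degenerates to a $\delta$-function.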

Theorem 3.1.5. Let $u$ satisfy (3.1.18) in a domain $\Omega \subset \mathbb{R}_t \times \mathbb{R}_x$ with $f \in C^\infty(\Omega)$. Then $u \in C^\infty(\Omega)$.

We will prove it later.


Remark 3.1.6. (a) So far we have not discussed the uniqueness of the solution. In fact, as formulated, the solution is not unique. But do not worry: the "extra solutions" are so irregular (fast growing and oscillating) at infinity that they have no "physical sense". We discuss this later.

(b) The IVP in the direction of negative time is ill-posed. Indeed, if $f \in C^\infty$, it follows from Theorem 3.1.5 that a solution cannot exist for $t \in (-\epsilon, 0]$ unless $g \in C^\infty$ (which is only necessary, but not sufficient).

(c) The domain of dependence for $(x,t)$ is $\{(x',t')\colon -\infty < x' < \infty,\ 0 < t' < t\}$, and the propagation speed is infinite! No surprise: the heat equation is not relativistic!

Figure: (a) erf(x); (b) erf′(x).

3.1.4 References
Plots:

(a) Visual Example (animation)

(b) erf function on Wolfram alpha

(c) erf derivative on Wolfram alpha

3.2 Heat equation (miscellaneous)

3.2.1 Heat equation on half-line

In the previous lecture we considered the heat equation
$$u_t = k u_{xx} \tag{3.2.1}$$
with $x \in \mathbb{R}$ and $t > 0$, and derived the formula
$$u(x,t) = \int_{-\infty}^{\infty} G(x,y,t)\,g(y)\,dy \tag{3.2.2}$$
with
$$G(x,y,t) = G_0(x-y,t) := \frac{1}{2\sqrt{k\pi t}}\,e^{-(x-y)^2/4kt} \tag{3.2.3}$$
for the solution of the IVP $u|_{t=0} = g(x)$.

Recall that $G(x,y,t)$ quickly decays as $|x-y| \to \infty$ and tends to $0$ as $t \to 0^+$ for $x \neq y$, but $\int_{-\infty}^{\infty} G(x,y,t)\,dy = 1$.

Consider the same equation (3.2.1) on the half-line with the homogeneous Dirichlet or Neumann boundary condition at $x = 0$:
$$u_D|_{x=0} = 0, \tag{D}$$
$$u_{N\,x}|_{x=0} = 0. \tag{N}$$

The method of continuation (see Subsection 2.6.3) works. Indeed, the coefficients do not depend on $x$ and the equation contains only even-order derivatives with respect to $x$. Recall that the continuation is even under the Neumann condition and odd under the Dirichlet condition.

Then we arrive at solutions in a form similar to (3.2.2), but with a different function $G(x,y,t)$ and domain of integration $[0,\infty)$:
$$G = G_D(x,y,t) = G_0(x-y,t) - G_0(x+y,t), \tag{3.2.4}$$
$$G = G_N(x,y,t) = G_0(x-y,t) + G_0(x+y,t) \tag{3.2.5}$$
for (D) and (N) respectively.


Both these functions satisfy equation (3.2.1) with respect to (x, t), and
corresponding boundary condition

GD |x=0 = 0, (3.2.6)
GN x |x=0 = 0. (3.2.7)

Both GD (x, y, t) and GN (x, y, t) tend to 0 as t → 0+ , x 6= y and


Z ∞
GD (x, y, t) dy → 1 as t → 0+ , (3.2.8)
0
Z ∞
GN (x, y, t) dx = 1. (3.2.9)
0

Further,
G(x, y, t) = G(y, x, t) (3.2.10)
Exercise 3.2.1. (a) Prove (3.2.8) and (3.2.9). Explain the difference.

(b) Prove (3.2.10).
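The listed properties of $G_D$ and $G_N$ are easy to confirm numerically (an illustrative Python sketch, not from the text; it does not replace the proofs requested in Exercise 3.2.1):

```python
import math

def G0(x, t, k=1.0):
    return math.exp(-x * x / (4 * k * t)) / (2 * math.sqrt(math.pi * k * t))

def GD(x, y, t):                     # odd continuation, (3.2.4)
    return G0(x - y, t) - G0(x + y, t)

def GN(x, y, t):                     # even continuation, (3.2.5)
    return G0(x - y, t) + G0(x + y, t)

t, y = 0.3, 1.2
print(abs(GD(0.0, y, t)) < 1e-15)                         # (3.2.6)
h = 1e-5
print(abs(GN(h, y, t) - GN(-h, y, t)) / (2 * h) < 1e-6)   # (3.2.7)
print(abs(GD(0.7, 1.2, t) - GD(1.2, 0.7, t)) < 1e-15)     # symmetry (3.2.10)

# (3.2.9): the Neumann kernel integrates to exactly 1 over (0, infinity).
h = 0.001
ys = [h * i for i in range(30001)]                        # covers [0, 30]
I = sum(h * (GN(0.7, a, t) + GN(0.7, b, t)) / 2 for a, b in zip(ys, ys[1:]))
print(abs(I - 1.0) < 1e-6)
```

All four checks print True; by symmetry (3.2.10), integrating $G_N$ in $y$ is the same as integrating it in $x$.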



3.2.2 Inhomogeneous boundary conditions

Consider now inhomogeneous boundary conditions (one of them):
$$u_D|_{x=0} = p(t), \tag{3.2.11}$$
$$u_{N\,x}|_{x=0} = q(t). \tag{3.2.12}$$
Consider
$$0 = \iint_{\Pi_\varepsilon} G(x,y,t-\tau)\bigl(-u_\tau(y,\tau) + k u_{yy}(y,\tau)\bigr)\,d\tau\,dy$$
with $\Pi_\varepsilon := \{y > 0,\ 0 < \tau < t-\varepsilon\}$. Integrating by parts with respect to $\tau$ in the first term, and twice with respect to $y$ in the second one, we get
$$\begin{aligned}
0 = {}& \iint_{\Pi_\varepsilon}\bigl(-G_t(x,y,t-\tau) + kG_{yy}(x,y,t-\tau)\bigr)u(y,\tau)\,d\tau\,dy\\
&- \int_0^\infty G(x,y,\varepsilon)\,u(y,t-\varepsilon)\,dy + \int_0^\infty G(x,y,t)\,u(y,0)\,dy\\
&+ k\int_0^{t-\varepsilon}\Bigl[-G(x,y,t-\tau)\,u_y(y,\tau) + G_y(x,y,t-\tau)\,u(y,\tau)\Bigr]\Big|_{y=0}\,d\tau.
\end{aligned}$$

Note that, since $G(x,y,t)$ satisfies (3.2.1) not only with respect to $(x,t)$ but also with respect to $(y,t)$ (due to the symmetry $G(x,y,t) = G(y,x,t)$, see (3.2.10)), the first line is $0$.

In the second line, the first term
$$-\int_0^\infty G(x,y,\varepsilon)\,u(y,t-\varepsilon)\,dy$$
tends to $-u(x,t)$ as $\varepsilon \to 0^+$ because of the properties of $G(x,y,t)$. Indeed, $G(x,y,\tau)$ tends to $0$ everywhere except at $x = y$ as $\tau \to 0^+$, and its integral from $0$ to $\infty$ tends to $1$.

So we get
$$u(x,t) = \int_0^\infty G(x,y,t)\underbrace{u(y,0)}_{=g(y)}\,dy + k\int_0^t\Bigl[-G(x,y,t-\tau)\,u_y(y,\tau) + G_y(x,y,t-\tau)\,u(y,\tau)\Bigr]\Big|_{y=0}\,d\tau. \tag{3.2.13}$$

The first line in the r.h.e. gives us the solution of the IBVP with $0$ boundary condition. Let us consider the second line.

In the case of the Dirichlet boundary condition, $G(x,y,t) = 0$ at $y = 0$, and therefore we get here
$$k\int_0^t G_y(x,0,t-\tau)\underbrace{u(0,\tau)}_{=p(\tau)}\,d\tau.$$
In the case of the Neumann boundary condition, $G_y(x,y,t) = 0$ at $y = 0$, and therefore we get here
$$-k\int_0^t G(x,0,t-\tau)\underbrace{u_y(0,\tau)}_{=q(\tau)}\,d\tau.$$
So (3.2.13) becomes
$$u_D(x,t) = \int_0^\infty G_D(x,y,t)\,g(y)\,dy + k\int_0^t G_{D\,y}(x,0,t-\tau)\,p(\tau)\,d\tau \tag{3.2.14}$$
and
$$u_N(x,t) = \int_0^\infty G_N(x,y,t)\,g(y)\,dy - k\int_0^t G_N(x,0,t-\tau)\,q(\tau)\,d\tau. \tag{3.2.15}$$

Remark 3.2.1. (a) If we consider a half-line $(-\infty, 0)$ rather than $(0, \infty)$, then the same terms appear at the right end ($x = 0$), albeit with the opposite sign.

(b) If we consider a finite interval $(a, b)$, then there will be contributions from both ends.

(c) If we consider the Robin boundary condition $(u_x - \alpha u)|_{x=0} = q(t)$, then formula (3.2.15) would work, but $G$ should satisfy the same Robin condition, and we cannot construct such a $G$ by the method of continuation.

(d) This proof (which also works for the Cauchy problem) shows that the integral formulae give us the unique solution satisfying the condition
$$|u(x,t)| \le C_\varepsilon e^{\varepsilon|x|^2} \qquad \forall t > 0,\ x,\ \forall \varepsilon > 0,$$
provided $g(x)$ satisfies the same condition. This is a much weaker assumption than $\max|u| < \infty$.

Visual examples (animation)


Figure 3.1: Plots of $G_D(x,y,t)$ and $G_N(x,y,t)$ as $y = 0$ (for some values of $t$).

3.2.3 Inhomogeneous right-hand expression

Consider the equation
$$u_t - k u_{xx} = f(x,t). \tag{3.2.16}$$
Either by the Duhamel principle applied to the problem (3.2.2)–(3.2.3), or by the method of continuation applied to (3.1.20), we arrive at:

Theorem 3.2.1. The solution of (3.2.16) with the initial function $g(x)$ and the homogeneous Dirichlet or Neumann boundary condition at $x = 0$ is given by
$$u(x,t) = \int_0^t\!\!\int_0^\infty G(x,y,t-\tau)\,f(y,\tau)\,dy\,d\tau + \int_0^\infty G(x,y,t)\,g(y)\,dy \tag{3.2.17}$$
with $G(x,y,t)$ given by (3.2.4) or (3.2.5) correspondingly.

Remark 3.2.2. Inhomogeneous boundary conditions could also be incorporated, as in (3.2.13).

3.2.4 Multidimensional heat equation

Now we claim that for the 2D and 3D heat equations
$$u_t = k\bigl(u_{xx} + u_{yy}\bigr), \tag{3.2.18}$$
$$u_t = k\bigl(u_{xx} + u_{yy} + u_{zz}\bigr), \tag{3.2.19}$$
similar formulae hold:
$$u = \iint G_2(x,y;x',y';t)\,g(x',y')\,dx'\,dy', \tag{3.2.20}$$
$$u = \iiint G_3(x,y,z;x',y',z';t)\,g(x',y',z')\,dx'\,dy'\,dz' \tag{3.2.21}$$
with
$$G_2(x,y;x',y';t) = G_1(x,x',t)\,G_1(y,y',t), \tag{3.2.22}$$
$$G_3(x,y,z;x',y',z';t) = G_1(x,x',t)\,G_1(y,y',t)\,G_1(z,z',t); \tag{3.2.23}$$
in particular, for the whole $\mathbb{R}^n$
$$G_n(x,x';t) = (4\pi k t)^{-n/2}\,e^{-|x-x'|^2/4kt}. \tag{3.2.24}$$
To justify our claim we note that:

(a) $G_n$ satisfies the $n$-dimensional heat equation. Really, consider e.g. $G_2$:
$$\begin{aligned}
G_{2\,t}(x,y;x',y';t) &= G_{1\,t}(x,x',t)\,G_1(y,y',t) + G_1(x,x',t)\,G_{1\,t}(y,y',t)\\
&= kG_{1\,xx}(x,x',t)\,G_1(y,y',t) + kG_1(x,x',t)\,G_{1\,yy}(y,y',t)\\
&= k\Delta\bigl(G_1(x,x',t)\,G_1(y,y',t)\bigr) = k\Delta G_2(x,y;x',y';t).
\end{aligned}$$

(b) $G_n(x,x';t)$ quickly decays as $|x-x'| \to \infty$ and tends to $0$ as $t \to 0^+$ for $x \neq x'$.

(c) $\int G_n(x,x';t)\,dx' \to 1$ as $t \to 0^+$.

(d) $G_n(x,x';t) = G_n(x',x;t)$.

(e) $G_n(x,x';t)$ has a rotational (spherical) symmetry with center at $x'$.

No surprise: we considered the heat equation in an isotropic medium.

Properties (b)–(d) are due to the similar properties of G1 and imply integral
representation (3.2.20) (or its n-dimensional variant).
Visual examples (animation)
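The product trick (3.2.22) is easy to verify numerically: a finite-difference check (an illustrative Python sketch, not from the text) confirms that $G_2$ solves the 2D heat equation (3.2.18):

```python
import math

def G1(x, t, k=1.0):                 # 1D kernel (3.2.3) with y = 0
    return math.exp(-x * x / (4 * k * t)) / (2 * math.sqrt(math.pi * k * t))

def G2(x, y, t, k=1.0):              # product formula (3.2.22), with x' = y' = 0
    return G1(x, t, k) * G1(y, t, k)

k, x, y, t, h = 1.0, 0.4, -0.3, 0.5, 1e-4
Gt = (G2(x, y, t + h) - G2(x, y, t - h)) / (2 * h)
Gxx = (G2(x + h, y, t) - 2 * G2(x, y, t) + G2(x - h, y, t)) / h**2
Gyy = (G2(x, y + h, t) - 2 * G2(x, y, t) + G2(x, y - h, t)) / h**2
print(abs(Gt - k * (Gxx + Gyy)) < 1e-5)   # True: G2 solves (3.2.18)
```

The same check works verbatim for $G_3$, with one more second-difference term.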
Similarly to Theorem 3.1.5 and the remark following it, we now have:

Theorem 3.2.2. Let $u$ satisfy equation (3.2.16) in a domain $\Omega \subset \mathbb{R}_t \times \mathbb{R}^n_x$ with $f \in C^\infty(\Omega)$. Then $u \in C^\infty(\Omega)$.
Remark 3.2.3. (a) This "product trick" works for the heat equation or the Schrödinger equation because both of them are equations (not systems) of the form $u_t = Lu$ with $L$ not containing differentiation by $t$.

(b) So far we have not discussed the uniqueness of the solution. In fact, as formulated, the solution is not unique. But do not worry: the "extra solutions" are so irregular at infinity that they have no "physical sense". We discuss this later.

(c) The IVP in the direction of negative time is ill-posed. Indeed, if $f \in C^\infty$, it follows from Theorem 3.2.2 that a solution cannot exist for $t \in (-\epsilon, 0]$ unless $g \in C^\infty$ (which is only necessary, but not sufficient).

3.2.5 Maximum principle

Consider the heat equation in a domain $\Omega$ (say, a rectangle in the $(x,t)$-plane), and let $\Gamma$ denote the part of its boundary consisting of the bottom and the lateral sides, but not the top lid.

We claim that:

Theorem 3.2.3 (Maximum Principle). Let $u$ satisfy the heat equation in $\Omega$. Then
$$\max_{\Omega} u = \max_{\Gamma} u. \tag{3.2.25}$$

Almost correct proof. Let (3.2.25) be wrong. Then $\max_\Omega u > \max_\Gamma u$, and there exists a point $P = (\bar{x},\bar{t}) \in \Omega \setminus \Gamma$ s.t. $u$ reaches its maximum at $P$. Without any loss of generality we can assume that $P$ belongs to the upper lid of $\Omega$. Then
$$u_t(P) \ge 0. \tag{3.2.26}$$
Indeed, $u(\bar{x},\bar{t}) \ge u(\bar{x},t)$ for all $t\colon \bar{t} > t > \bar{t} - \epsilon$, and then $\bigl(u(\bar{x},\bar{t}) - u(\bar{x},t)\bigr)/(\bar{t}-t) \ge 0$; as $t \nearrow \bar{t}$ we get (3.2.26).

Also
$$u_{xx}(P) \le 0. \tag{3.2.27}$$
Indeed, $u(x,\bar{t})$ reaches its maximum at $x = \bar{x}$. This inequality combined with (3.2.26) almost contradicts the heat equation $u_t = k u_{xx}$ ("almost" because there could be equalities).

Correct proof. Note first that the above argument proves (3.2.25) if $u$ satisfies the inequality $u_t - k u_{xx} < 0$, because then there would be a genuine contradiction. Further, note that $v = u - \epsilon t$ satisfies $v_t - k v_{xx} < 0$ for any $\epsilon > 0$, and therefore
$$\max_\Omega (u - \epsilon t) = \max_\Gamma (u - \epsilon t).$$
Taking the limit as $\epsilon \to 0^+$ we get (3.2.25).
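The maximum principle can be observed on a discrete solution. The sketch below (illustrative Python, not from the text) solves the heat equation by the explicit finite-difference scheme and checks that the maximum over the whole grid is attained on $\Gamma$, i.e. on the bottom and lateral sides:

```python
import math
# Explicit scheme for u_t = k u_xx on 0 <= x <= 1 with u = 0 at both ends;
# tau <= h^2 / (2k) keeps the scheme stable.
k, h, tau = 1.0, 0.05, 0.001
r = k * tau / h**2                   # = 0.4 <= 1/2
N, M = 21, 200                       # space points and time steps
u = [[0.0] * N for _ in range(M + 1)]
u[0] = [math.sin(math.pi * n * h) for n in range(N)]   # initial data; ends ~ 0
for m in range(M):
    for n in range(1, N - 1):
        u[m + 1][n] = u[m][n] + r * (u[m][n + 1] - 2 * u[m][n] + u[m][n - 1])
# Gamma = bottom (t = 0) plus the lateral sides (where u = 0).
gamma_max = max(max(u[0]), 0.0)
interior_max = max(u[m][n] for m in range(1, M + 1) for n in range(1, N - 1))
print(interior_max <= gamma_max)     # True: the maximum is attained on Gamma
```

For $r \le 1/2$ each new value is a convex combination of its neighbors, which is exactly a discrete version of the maximum principle.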
Remark 3.2.4. (a) Sure, the same proof works for the multidimensional heat equation.

(b) In fact, there is a strict maximum principle: either $u$ is strictly less than $\max_\Gamma u$ in $\Omega \setminus \Gamma$, or $u = \text{const}$. The proof is a bit more sophisticated.

Corollary 3.2.4 (Minimum Principle).
$$\min_\Omega u = \min_\Gamma u. \tag{3.2.28}$$
Proof. Really, $-u$ also satisfies the heat equation.

Corollary 3.2.5. (i) If $u = 0$ everywhere on $\Gamma$, then $u = 0$ everywhere on $\Omega$.

(ii) Let $u$, $v$ both satisfy the heat equation. Then $u = v$ everywhere on $\Gamma$ implies $u = v$ everywhere on $\Omega$.

Proof. (i) Really, then $\max_\Omega u = \min_\Omega u = 0$.

(ii) Really, then $(u - v)$ satisfies the heat equation; apply (i).

3.A Project: Walk problem

3.A.1 Intro into project: Random walks

Consider a 1D grid with a step $h \ll 1$, and also consider a grid in time with a step $\tau \ll 1$. So $x_n = nh$ and $t_m = m\tau$.

Assume that the probabilities to move to the left and to the right (to the next point) in one time tick are $q_L$ and $q_R$ respectively.

Then, denoting by $p^m_n$ the probability to be at time $t_m$ at point $x_n$, we get the equation
$$p^m_n = p^{m-1}_{n-1} q_R + p^{m-1}_n (1 - q_L - q_R) + p^{m-1}_{n+1} q_L. \tag{3.A.1}$$
One can rewrite it as
$$p^m_n - p^{m-1}_n = K\bigl(p^{m-1}_{n+1} - 2p^{m-1}_n + p^{m-1}_{n-1}\bigr) - L\bigl(p^{m-1}_{n+1} - p^{m-1}_{n-1}\bigr), \tag{3.A.2}$$
where we used the notations $K = \tfrac{1}{2}(q_L + q_R)$ and $L = \tfrac{1}{2}(q_R - q_L)$.


Task 3.A.1. Using the Taylor formula and assuming that $p(x,t)$ is a smooth function, prove that
$$\Lambda p := \frac{1}{h^2}\bigl(p_{n+1} - 2p_n + p_{n-1}\bigr) = \frac{\partial^2 p}{\partial x^2} + O(h^2), \tag{3.A.3}$$
$$Dp := \frac{1}{2h}\bigl(p_{n+1} - p_{n-1}\bigr) = \frac{\partial p}{\partial x} + O(h^2), \tag{3.A.4}$$
$$\frac{1}{\tau}\bigl(p^m - p^{m-1}\bigr) = \frac{\partial p}{\partial t} + O(\tau). \tag{3.A.5}$$
Then, after we neglect small terms, (3.A.2) becomes
$$\frac{\partial p}{\partial t} = \lambda\frac{\partial^2 p}{\partial x^2} - \mu\frac{\partial p}{\partial x}, \tag{3.A.6}$$
where $K = \lambda\tau/h^2$ and $L = \mu\tau/2h$.

Remark 3.A.1. This is the correct scaling, or we will not get any PDE.

Remark 3.A.2. Here $p = p(x,t)$ is not a probability but a probability density: the probability to be at moment $t$ in the interval $(x, x+dx)$ is $\mathbf{P}(x < \xi(t) < x+dx) = p(x,t)\,dx$. Since $\sum_{-\infty < n < \infty} p^m_n = 1$, we have
$$\int_{-\infty}^{\infty} p(x,t)\,dx = 1. \tag{3.A.7}$$
Remark 3.A.3. The first term on the right of (3.A.6) is a diffusion term; in the case of the symmetric walk $q_L = q_R$, only it is present:
$$\frac{\partial p}{\partial t} = \lambda\frac{\partial^2 p}{\partial x^2}. \tag{3.A.8}$$
The second term on the right of (3.A.6) is a convection term; moving it to the left and making the change of coordinates $t_{\mathrm{new}} = t$, $x_{\mathrm{new}} = x - \mu t$, we get equation (3.A.8) in these new coordinates. So this term is responsible for a shift with constant speed $\mu$ (on top of the diffusion).

Remark 3.A.4. (3.A.2) is a finite difference equation, which is a finite difference approximation to the PDE (3.A.6). However, this approximation is stable only if $\tau \le \frac{h^2}{2\lambda}$. This is a fact from numerical analysis.
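The diffusion limit can be observed by evolving (3.A.1) directly. The Python sketch below (illustrative, with parameter values of our choosing; $K = 0.4$ respects the stability condition $\tau \le h^2/2\lambda$) checks that after many steps the walk's mean is approximately $\mu t$ and its variance approximately $2\lambda t$, as the drift-diffusion equation (3.A.6) predicts:

```python
import math
# Evolve the probabilities p^m_n by (3.A.1); check mean ~ mu*t, var ~ 2*lambda*t,
# where K = lambda*tau/h^2 and L = mu*tau/(2h).
h, tau = 0.01, 0.00004
lam, mu = 1.0, 0.5
K = lam * tau / h**2                 # = 0.4, so q_L + q_R = 2K = 0.8 < 1
L = mu * tau / (2 * h)
qR, qL = K + L, K - L
N, steps = 400, 2000                 # grid n = -N..N; final time t = steps*tau
p = [0.0] * (2 * N + 1)
p[N] = 1.0                           # the walk starts at the origin
for _ in range(steps):
    q = [0.0] * len(p)
    for n in range(1, len(p) - 1):
        q[n] = p[n - 1] * qR + p[n] * (1 - qL - qR) + p[n + 1] * qL
    p = q
t = steps * tau
xs = [(n - N) * h for n in range(len(p))]
mean = sum(x * w for x, w in zip(xs, p))
var = sum((x - mean) ** 2 * w for x, w in zip(xs, p))
print(abs(mean - mu * t) < 1e-3, abs(var - 2 * lam * t) < 1e-3)   # True True
```

The grid is wide enough ($Nh = 4$, about ten standard deviations) that essentially no probability reaches the boundary.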
Task 3.A.2 (Main task). Multidimensional case. Solution (in due time when
we study). BVP. More generalization (later).

3.A.2 Absorption problem

Consider a 1D walk (with the same rules) on a segment $[0, l]$ with both ends absorbing. Let $p_n$ be the probability that our walk ends up at $l$ if started from $x_n$. Then
$$p_n = p_{n-1} q_L + p_{n+1} q_R + p_n(1 - q_L - q_R). \tag{3.A.9}$$
Task 3.A.3. Prove the limiting equation
$$0 = \lambda\frac{\partial^2 p}{\partial x^2} - \mu\frac{\partial p}{\partial x}. \tag{3.A.10}$$
Solve it under the boundary conditions $p(0) = 0$, $p(l) = 1$. Explain these boundary conditions.

Remark 3.A.5. Here $p = p(x)$ is a probability, and (3.A.7) does not hold.

Task 3.A.4 (Main task). Multidimensional case: in a domain with boundary. Boundary conditions (there is a part $\Gamma$ of the boundary, and we are interested in the probability to end up there if started from a given point). Maybe: generalization: part of the boundary is reflecting.

Problems to Chapter 3

Crucial in many problems is formula (3.1.14), rewritten as
$$u(x,t) = \int_{-\infty}^{\infty} G(x,y,t)\,g(y)\,dy \tag{1}$$
with
$$G(x,y,t) = \frac{1}{2\sqrt{k\pi t}}\,e^{-(x-y)^2/4kt}. \tag{2}$$
This formula solves the IVP for the heat equation
$$u_t = k u_{xx} \tag{3}$$
with the initial function $g(x)$.

In many problems below, for a modified standard problem, you need to derive a similar formula, albeit with a modified $G(x,y,t)$. Consider
$$\operatorname{erf}(z) = \frac{2}{\sqrt{\pi}}\int_0^z e^{-w^2}\,dw \tag{Erf}$$
as a standard function.
Problem 1. Solve the Cauchy problem for the heat equation
$$u_t = k u_{xx}, \quad t > 0,\ -\infty < x < \infty; \qquad u|_{t=0} = g(x) \tag{4}$$
with
$$g(x) = \begin{cases} 1 & |x| < 1,\\ 0 & |x| \ge 1;\end{cases} \qquad g(x) = \begin{cases} 1 - |x| & |x| < 1,\\ 0 & |x| \ge 1;\end{cases}$$
$$g(x) = e^{-a|x|}; \qquad g(x) = x e^{-a|x|}; \qquad g(x) = |x| e^{-a|x|}; \qquad g(x) = x^2 e^{-a|x|};$$
$$g(x) = e^{-ax^2}; \qquad g(x) = x e^{-ax^2}; \qquad g(x) = x^2 e^{-ax^2}.$$

Problem 2. Using the method of continuation, obtain a formula similar to (1)–(2) for the solution of the IBVP for the heat equation on $\{x > 0,\ t > 0\}$ with the initial function $g(x)$ and with

(a) Dirichlet boundary condition $u|_{x=0} = 0$;

(b) Neumann boundary condition $u_x|_{x=0} = 0$.

Problem 3. Solve the IBVP for the heat equation
$$u_t = k u_{xx}, \quad t > 0,\ 0 < x < \infty; \qquad u|_{t=0} = g(x), \quad u|_{x=0} = h(t) \tag{5}$$
with
$$g(x) = 0,\ h(t) = \begin{cases} 1 & t < 1,\\ 0 & t \ge 1;\end{cases} \qquad g(x) = \begin{cases} 1 & x < 1,\\ 0 & x \ge 1,\end{cases}\ h(t) = 0;$$
$$g(x) = \begin{cases} 1 - x & x < 1,\\ 0 & x \ge 1,\end{cases}\ h(t) = 0; \qquad g(x) = \begin{cases} 1 - x^2 & x < 1,\\ 0 & x \ge 1,\end{cases}\ h(t) = 0;$$
$$g(x) = e^{-ax},\ h(t) = 0; \qquad g(x) = x e^{-ax},\ h(t) = 0;$$
$$g(x) = e^{-ax^2},\ h(t) = 0; \qquad g(x) = x e^{-ax^2},\ h(t) = 0;$$
$$g(x) = x^2 e^{-ax^2},\ h(t) = 0; \qquad g(x) = 0,\ h(t) = 1; \qquad g(x) = 1,\ h(t) = 0.$$

Problem 4. Solve the IBVP for the heat equation
$$u_t = k u_{xx}, \quad t > 0,\ 0 < x < \infty; \qquad u|_{t=0} = g(x), \quad u_x|_{x=0} = h(t) \tag{6}$$
with
$$g(x) = 0,\ h(t) = \begin{cases} 1 & t < 1,\\ 0 & t \ge 1;\end{cases} \qquad g(x) = \begin{cases} 1 & x < 1,\\ 0 & x \ge 1,\end{cases}\ h(t) = 0;$$
$$g(x) = \begin{cases} 1 - x & x < 1,\\ 0 & x \ge 1,\end{cases}\ h(t) = 0; \qquad g(x) = e^{-ax},\ h(t) = 0;$$
$$g(x) = x e^{-ax},\ h(t) = 0; \qquad g(x) = e^{-ax^2},\ h(t) = 0;$$
$$g(x) = x e^{-ax^2},\ h(t) = 0; \qquad g(x) = x^2 e^{-ax^2},\ h(t) = 0;$$
$$g(x) = 0,\ h(t) = 1; \qquad g(x) = 1,\ h(t) = 0.$$

Problem 5. Using the method of continuation, obtain a formula similar to (1)–(2) for the solution of the IBVP for the heat equation on $\{0 < x < L,\ t > 0\}$ with the initial function $g(x)$ and with

(a) Dirichlet boundary condition on both ends: $u|_{x=0} = u|_{x=L} = 0$;

(b) Neumann boundary condition on both ends: $u_x|_{x=0} = u_x|_{x=L} = 0$;

(c) Dirichlet boundary condition on one end and Neumann boundary condition on the other: $u|_{x=0} = u_x|_{x=L} = 0$.

Problem 6. Consider the heat equation with a convection term
$$u_t + \underbrace{c u_x}_{\text{convection term}} = k u_{xx}. \tag{7}$$

(a) Prove that it is obtained from the ordinary heat equation with respect to $U$ by the change of variables $U(x,t) = u(x + ct, t)$. Interpret (7) as the equation describing heat propagation in a medium moving to the right with speed $c$.

(b) Using the change of variables $u(x,t) = U(x - vt, t)$, reduce it to the ordinary heat equation, and using (1)–(2) for the latter, write a formula for the solution $u(x,t)$.

(c) Can we use the method of continuation directly to solve the IBVP with Dirichlet or Neumann boundary condition at $x = 0$ for (7) on $\{x > 0,\ t > 0\}$? Justify your answer.

(d) Plugging $u(x,t) = v(x,t)e^{\alpha x + \beta t}$ with appropriate constants $\alpha$, $\beta$, reduce (7) to the ordinary heat equation.

(e) Using (d), write a formula for the solution of such an equation on the half-line or an interval in the case of Dirichlet boundary condition(s). Can we use this method in the case of Neumann boundary conditions? Justify your answer.
Problem 7. Using either formula (1)–(2) or its modification (if needed):

(a) Solve the IVP for the heat equation (3) with $g(x) = e^{-\epsilon|x|}$; what happens as $\epsilon \to +0$?

(b) Solve the IVP for the heat equation with convection (7) with $g(x) = e^{-\epsilon|x|}$; what happens as $\epsilon \to +0$?

(c) Solve the IBVP with the Dirichlet boundary condition for the heat equation (7) with $g(x) = e^{-\epsilon|x|}$; what happens as $\epsilon \to +0$?

(d) Solve the IBVP with the Neumann boundary condition for the heat equation (3) with $g(x) = e^{-\epsilon|x|}$; what happens as $\epsilon \to +0$?
Problem 8. Consider a solution of the diffusion equation $u_t = u_{xx}$ in $\{0 \le x \le L,\ 0 \le t < \infty\}$. Let
$$M(T) = \max_{0 \le x \le L,\ 0 \le t \le T} u(x,t), \qquad m(T) = \min_{0 \le x \le L,\ 0 \le t \le T} u(x,t).$$

(a) Does M (T ) increase or decrease as a function of T ?

(b) Does m(T ) increase or decrease as a function of T ?


Problem 9. The purpose of this exercise is to show that the maximum principle is not true for the equation $u_t = x u_{xx}$, which has a coefficient that changes sign.

(a) Verify that $u = -2xt - x^2$ is a solution.

(b) Find the location of its maximum in the closed rectangle $\{-2 \le x \le 2,\ 0 \le t \le 1\}$.

(c) Where precisely does our proof of the maximum principle break down
for this equation?

Problem 10. (a) Consider the heat equation on $J = (-\infty, \infty)$ and prove that the "energy"
$$E(t) = \int_J u^2(x,t)\,dx \tag{8}$$
does not increase; further, show that it really decreases unless $u(x,t) = \text{const}$.

(b) Consider the heat equation on $J = (0, l)$ with the Dirichlet or Neumann boundary conditions and prove that $E(t)$ does not increase; further, show that it really decreases unless $u(x,t) = \text{const}$.

(c) Consider the heat equation on $J = (0, L)$ with the Robin boundary conditions
$$u_x(0,t) - a_0 u(0,t) = 0, \tag{9}$$
$$u_x(L,t) + a_L u(L,t) = 0. \tag{10}$$
If $a_0 > 0$ and $a_L > 0$, show that the endpoints contribute to the decrease of $E(t) = \int_0^L u^2(x,t)\,dx$. This is interpreted to mean that part of the energy is lost at the boundary, so we call the boundary conditions radiating or dissipative.

Hint. To prove the decrease of $E(t)$, consider its derivative with respect to $t$, replace $u_t$ by $k u_{xx}$, and integrate by parts.

Remark 3.P.1. In the case of the heat (or diffusion) equation, the energy given by (8) is rather a mathematical artefact.
Problem 11. Find a self-similar solution $u$ of
$$u_t = (u^m u_x)_x, \qquad -\infty < x < \infty,\ t > 0,$$
with finite $\int_{-\infty}^{\infty} u\,dx$.

Problem 12. Find a self-similar solution $u$ of
$$u_t = i k u_{xx}, \qquad -\infty < x < \infty,\ t > 0,$$
with constant $\int_{-\infty}^{\infty} u\,dx = 1$.

Hint. Some complex variables required.

Problem 13. Find a self-similar solution $u(x,t)$, smooth for $t > 0$, to
$$u_t = (x u_x)_x, \qquad x > 0,\ t > 0,$$
$$u|_{x=0} = t^{-1}, \qquad t > 0.$$

Problem 14. Find a self-similar solution $u(x,t)$, smooth for $t > 0$, to
$$u_t + u u_x = 0, \qquad -\infty < x < \infty,\ t > 0,$$
$$u|_{t=0} = \operatorname{sign}(x) := \frac{x}{|x|}.$$

Problem 15. Find a self-similar solution $u(x,t)$, smooth for $t > 0$, to
$$u_t + (u u_x)_x = 0, \qquad -\infty < x < \infty,\ t > 0,$$
$$u \ge 0, \qquad \int_{-\infty}^{\infty} u(x,t)\,dx < \infty.$$

Problem 16. Find a self-similar solution $u(x,t)$, smooth for $t > 0$, to
$$u_t + u_{xx} = 0, \qquad -\infty < x < \infty,\ t > 0,$$
$$u|_{t=0} = \operatorname{sign}(x) := \frac{x}{|x|}.$$
You are not allowed to use the standard formula for the solution of the IVP for the heat equation.
Problem 17. Consider the 1D "radioactive cloud" problem:
$$u_t + v u_x - u_{xx} + \beta u = 0,$$
$$u|_{t=0} = \delta(x),$$
where $v$ is the wind velocity and $\beta$ shows the speed of "dropping on the ground".

(a) Hint. Reduce to the standard heat equation by $u = w e^{-\beta t}$ and $x = y + vt$, use the standard formula for $w$:
$$w = (4\pi t)^{-1/2}\exp(-y^2/4t),$$
and then write down $u(x,t)$.

(b) Find the "contamination level" at $x$:
$$D(x) = \beta\int_0^\infty u(x,t)\,dt.$$
Hint. By the change of variables $t = y^2$ (with an appropriate $y$), reduce to the calculation of
$$\int \exp(-a y^2 - b y^{-2})\,dy$$
and calculate it using e.g. https://www.wolframalpha.com with input

int_0^infty exp (-ay^2-b/y^2)dy

(you may need to do it a few times).

(c) Try to generalize to the 2-dimensional case:
$$u_t + v u_x - (u_{xx} + u_{yy}) + \beta u = 0,$$
$$u|_{t=0} = \delta(x)\delta(y).$$
Chapter 4. Separation of Variables and Fourier Series

In this chapter we consider the simplest separation of variables problems, the arising simplest eigenvalue problems, and the corresponding Fourier series.

4.1 Separation of variables (the first blood)

Consider the IBVP for the homogeneous 1D wave equation on the finite interval $(0, l)$:
$$u_{tt} - c^2 u_{xx} = 0, \qquad 0 < x < l, \tag{4.1.1}$$
$$u|_{x=0} = u|_{x=l} = 0, \tag{4.1.2}$$
$$u|_{t=0} = g(x), \qquad u_t|_{t=0} = h(x). \tag{4.1.3}$$
Note that the boundary conditions are also homogeneous, so only the initial conditions are inhomogeneous.

4.1.1 Separation of variables

Let us temporarily skip the initial conditions (4.1.3), consider only (4.1.1)–(4.1.2), and look for a solution in the special form
$$u(x,t) = X(x)\,T(t) \tag{4.1.4}$$
with unknown functions $X(x)$ on $(0,l)$ and $T(t)$ on $(-\infty,\infty)$.

Remark 4.1.1. We are looking for a non-trivial solution $u(x,t)$, which means that $u(x,t)$ is not identically $0$. Therefore neither $X(x)$ nor $T(t)$ can be identically $0$ either.

Plugging (4.1.4) (which is $u(x,t) = X(x)T(t)$) into (4.1.1)–(4.1.2) we get
$$X(x)\,T''(t) = c^2 X''(x)\,T(t),$$
$$X(0)\,T(t) = X(l)\,T(t) = 0,$$
which after division by $X(x)T(t)$ and $T(t)$ respectively become
$$\frac{T''(t)}{T(t)} = c^2\frac{X''(x)}{X(x)}, \tag{4.1.5}$$
$$X(0) = X(l) = 0. \tag{4.1.6}$$
Recall that neither $X(x)$ nor $T(t)$ is identically $0$.

In equation (4.1.5) the l.h.e. does not depend on $x$ and the r.h.e. does not depend on $t$, and since we have an identity we conclude that both expressions do not depend on $x$ or $t$, and therefore they are constant. This is the crucial conclusion of the separation of variables method. We rewrite (4.1.5) as two equalities
$$\frac{T''(t)}{T(t)} = -c^2\lambda, \qquad \frac{X''(x)}{X(x)} = -\lambda,$$
with an unknown constant $\lambda$, which in turn we rewrite as (4.1.7) and (4.1.8):
$$X'' + \lambda X = 0, \tag{4.1.7}$$
$$X(0) = X(l) = 0, \tag{4.1.6}$$
$$T'' + c^2\lambda T = 0. \tag{4.1.8}$$

4.1.2 Eigenvalue problem

Consider the BVP (for an ODE) (4.1.7)–(4.1.6); it is usually called a Sturm–Liouville problem. We need to find its solutions $X(x)$ which are not identically $0$.

Definition 4.1.1. Such solutions are called eigenfunctions, and the corresponding numbers $\lambda$ eigenvalues (compare with eigenvectors and eigenvalues).

Proposition 4.1.1. Problem (4.1.7)–(4.1.6) has eigenvalues and eigenfunctions
$$\lambda_n = \frac{\pi^2 n^2}{l^2}, \qquad n = 1, 2, \ldots, \tag{4.1.9}$$
$$X_n(x) = \sin\Bigl(\frac{\pi n x}{l}\Bigr). \tag{4.1.10}$$
Proof. Note that (4.1.7) is a second-order linear ODE with constant coefficients, and to solve it one considers the characteristic equation
$$k^2 + \lambda = 0 \tag{4.1.11}$$
and therefore $k_{1,2} = \pm\sqrt{-\lambda}$ and $X(x) = Ae^{\sqrt{-\lambda}x} + Be^{-\sqrt{-\lambda}x}$ (provided $\lambda \neq 0$). So far $\lambda \in \mathbb{C}$.

Plugging into $X(0) = 0$ and $X(l) = 0$, we get
$$A + B = 0,$$
$$Ae^{\sqrt{-\lambda}l} + Be^{-\sqrt{-\lambda}l} = 0,$$
and this system has a non-trivial solution $(A,B) \neq 0$ if and only if its determinant is $0$:
$$\begin{vmatrix} 1 & 1\\ e^{\sqrt{-\lambda}l} & e^{-\sqrt{-\lambda}l}\end{vmatrix} = e^{-\sqrt{-\lambda}l} - e^{\sqrt{-\lambda}l} = 0 \iff e^{2\sqrt{-\lambda}l} = 1 \iff 2\sqrt{-\lambda}\,l = 2\pi n i, \quad n = 1, 2, \ldots.$$
Here we excluded $n = 0$ since $\lambda \neq 0$, and also $n = -1, -2, \ldots$ since both $n$ and $-n$ lead to the same $\lambda$ and $X$. The last equation is equivalent to (4.1.9): $\lambda_n = \frac{\pi^2 n^2}{l^2}$. Then $k_{1,2} = \pm\frac{\pi n i}{l}$.

Meanwhile $B = -A$ anyway, and we get $X = 2Ai\sin(\frac{\pi n x}{l})$, i.e.
$$X_n(x) = \sin\Bigl(\frac{\pi n x}{l}\Bigr), \tag{4.1.10}$$
since the constant factor does not matter.

So far we have not covered $\lambda = 0$. But then $k_{1,2} = 0$ and $X = A + Bx$, and plugging into (4.1.6) we get $A = A + Bl = 0 \implies A = B = 0$, so $\lambda = 0$ is not an eigenvalue.
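Proposition 4.1.1 can be confirmed numerically (an illustrative Python sketch, not from the text): each $X_n$ satisfies the boundary conditions (4.1.6), and $X_n'' + \lambda_n X_n = 0$ holds up to finite-difference error:

```python
import math

l = 2.0
results = []
for n in (1, 2, 3):
    lam = (math.pi * n / l) ** 2              # eigenvalue (4.1.9)
    def X(x, n=n):                            # eigenfunction (4.1.10)
        return math.sin(math.pi * n * x / l)
    h, x0 = 1e-4, 0.37
    Xpp = (X(x0 + h) - 2 * X(x0) + X(x0 - h)) / h**2   # finite-difference X''
    results.append((abs(X(0.0)), abs(X(l)), abs(Xpp + lam * X(x0))))
ok = all(b0 < 1e-9 and bl < 1e-9 and res < 1e-4 for b0, bl, res in results)
print(ok)   # True
```

The check of $X_n(l)$ is only approximate in floating point, since $\sin(\pi n)$ evaluates to a tiny nonzero number.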

4.1.3 Simple solutions


After the eigenvalue problem has been solved, we plug λ = λn into (4.1.8) (which is T'' + c²λT = 0):

T'' + (cπn/l)²T = 0, (4.1.12)

which is also a 2nd-order linear ODE with constant coefficients. Its characteristic equation k² + (cπn/l)² = 0 has solutions k_{1,2} = ±cπni/l, and therefore

Tn(t) = An cos(cπnt/l) + Bn sin(cπnt/l), (4.1.13)

and finally we get a simple solution

un(x, t) = (An cos(cπnt/l) + Bn sin(cπnt/l)) · sin(πnx/l) = Tn(t)Xn(x), n = 1, 2, .... (4.1.14)

This simple solution (4.1.14) can be rewritten as

un(x, t) = Cn cos(cπnt/l + φn) sin(πnx/l)

and represents a standing wave:

Standing Wave

which one can decompose into a sum of running waves

Standing Wave Decomposition

and the general discussion of standing waves can be found in Standing Wave Discussion.

Points which do not move are called nodes, and points with maximal amplitude are called antinodes.
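The simple solution lends itself to a direct numerical check (a sketch; all parameter values are illustrative): with An = 1, Bn = 0 it satisfies the wave equation, and its node at x = l/n indeed does not move.

```python
import numpy as np

# u_n(x,t) = cos(c pi n t / l) sin(pi n x / l) should satisfy u_tt = c^2 u_xx;
# verified here by central differences (illustrative c, l, n).
c, l, n = 2.0, 3.0, 2
u = lambda x, t: np.cos(c * np.pi * n * t / l) * np.sin(np.pi * n * x / l)

h = 1e-4
x = np.linspace(0.2, l - 0.2, 40)
t = 0.3
utt = (u(x, t + h) - 2 * u(x, t) + u(x, t - h)) / h ** 2
uxx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h ** 2
assert np.max(np.abs(utt - c ** 2 * uxx)) < 1e-3   # wave equation holds
for tt in np.linspace(0.0, 5.0, 11):
    assert abs(u(l / n, tt)) < 1e-12               # the node x = l/n stays fixed
print("standing wave checks passed")
```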

4.1.4 General solutions


The sum of solutions of (4.1.1)–(4.1.2) is also a solution:

u(x, t) = Σ_{n=1}^∞ (An cos(cπnt/l) + Bn sin(cπnt/l)) · sin(πnx/l). (4.1.15)

We have an important question to answer:


Have we covered all solutions of (4.1.1)–(4.1.2)? – Yes, we did, but we
need to justify it.
Plugging (4.1.15) into the initial conditions (4.1.3)₁,₂ (that is, u|t=0 = g(x) and ut|t=0 = h(x)) we get respectively

Σ_{n=1}^∞ An sin(πnx/l) = g(x), (4.1.16)
Σ_{n=1}^∞ (cπn/l) Bn sin(πnx/l) = h(x). (4.1.17)

How to find An and Bn ? Do they exist? Are they unique?


What we got are Fourier series (actually sin-Fourier series). We consider their theory in the next several sections (and many lectures).

Visual Examples (Animation).

More coming! Including multidimensional:

Visual Examples (Animation: Disk).

4.2 Eigenvalue problems


4.2.1 Problems with explicit solutions
Example 4.2.1 (Dirichlet–Dirichlet). Consider the eigenvalue problem from Section 4.1:

X'' + λX = 0, 0 < x < l, (4.2.1)
X(0) = X(l) = 0; (4.2.2)

it has eigenvalues and corresponding eigenfunctions

λn = (πn/l)², n = 1, 2, ..., (4.2.3)
Xn = sin(πnx/l). (4.2.4)
Visual Examples (animation)
Example 4.2.2 (Neumann–Neumann). Consider the eigenvalue problem

X'' + λX = 0, 0 < x < l, (4.2.5)
X'(0) = X'(l) = 0; (4.2.6)

it has eigenvalues and corresponding eigenfunctions

λn = (πn/l)², n = 0, 1, 2, ..., (4.2.7)
Xn = cos(πnx/l). (4.2.8)
Visual Examples (animation)

Indeed, plugging X = Ae^{kx} + Be^{−kx}, k = √(−λ) ≠ 0, into (4.2.6) X'(0) = X'(l) = 0 we get

A − B = 0,
Ae^{√(−λ)l} − Be^{−√(−λ)l} = 0,

where we divided both equations by √(−λ); this leads to the same condition 2√(−λ)l = 2πni, n = 1, 2, ..., as before, but with eigenfunctions (4.2.8): cos(πnx/l) rather than sin(πnx/l) as in Example 4.2.1.

But now, plugging λ = 0 and X = A + Bx, we get B = 0, and X(x) = 1 is also an eigenfunction, so we should add n = 0.
Example 4.2.3 (Dirichlet–Neumann). Consider the eigenvalue problem

X'' + λX = 0, 0 < x < l, (4.2.9)
X(0) = X'(l) = 0; (4.2.10)

it has eigenvalues and corresponding eigenfunctions

λn = (π(2n + 1)/(2l))², n = 0, 1, 2, ..., (4.2.11)
Xn = sin(π(2n + 1)x/(2l)). (4.2.12)

Visual Examples (animation)



Indeed, plugging X = Ae^{kx} + Be^{−kx}, k = √(−λ) ≠ 0, into (4.2.10) X(0) = X'(l) = 0 we get

A + B = 0,
Ae^{√(−λ)l} − Be^{−√(−λ)l} = 0,

where we divided the last equation by √(−λ); this leads to the condition 2√(−λ)l = (2n + 1)πi, n = 0, 1, 2, ..., with eigenfunctions (4.2.12): Xn = sin(π(2n + 1)x/(2l)).

Plugging λ = 0 and X = A + Bx, we get A = 0 and B = 0, so λ = 0 is not an eigenvalue. The same problem albeit with the ends reversed (i.e. X'(0) = X(l) = 0) has the same eigenvalues and eigenfunctions cos(π(2n + 1)x/(2l)).
Example 4.2.4 (periodic). Consider the eigenvalue problem

X'' + λX = 0, 0 < x < l, (4.2.13)
X(0) = X(l), X'(0) = X'(l); (4.2.14)

it has eigenvalues and corresponding eigenfunctions

λ0 = 0, (4.2.15)
X0 = 1, (4.2.16)
λ_{2n−1} = λ_{2n} = (2πn/l)², n = 1, 2, ..., (4.2.17)
X_{2n−1} = cos(2πnx/l), X_{2n} = sin(2πnx/l). (4.2.18)
Visual Examples (animation)

Alternatively, since all eigenvalues but 0 have multiplicity 2, one can select

λn = (2πn/l)², n = ..., −2, −1, 0, 1, 2, ..., (4.2.19)
Xn = exp(2πinx/l). (4.2.20)

Indeed, now we get

A(1 − e^{√(−λ)l}) + B(1 − e^{−√(−λ)l}) = 0,
A(1 − e^{√(−λ)l}) − B(1 − e^{−√(−λ)l}) = 0,

which means that e^{√(−λ)l} = 1 ⟺ √(−λ) = 2πni/l, and in this case we get two linearly independent eigenfunctions. For λ = 0 we get just one eigenfunction, X0 = 1.
Example 4.2.5 (quasiperiodic). Consider the eigenvalue problem

X'' + λX = 0, 0 < x < l, (4.2.21)
X(0) = e^{−ikl}X(l), X'(0) = e^{−ikl}X'(l) (4.2.22)

with 0 < k < 2π/l; it has eigenvalues and corresponding eigenfunctions

λn = (πn/l + k)², n = 0, 2, 4, ..., (4.2.23)
Xn = exp((πn/l + k)ix), (4.2.24)
λn = (π(n + 1)/l − k)², n = 1, 3, 5, ..., (4.2.25)
Xn = exp(−(π(n + 1)/l − k)ix). (4.2.26)

Here k is called the quasimomentum.

(a) For k = 0, 2π/l we get periodic solutions, considered in the previous Example 4.2.4;

(b) for k = π/l we get antiperiodic solutions

Visual Examples (animation)

where all eigenvalues have multiplicity 2 (both l-periodic and l-antiperiodic functions are 2l-periodic, and each 2l-periodic function is the sum of an l-periodic and an l-antiperiodic function).

(c) For all other k eigenvalues are real and simple, but eigenfunctions are not real-valued.

Indeed, we now get

A(e^{ikl} − e^{√(−λ)l}) + B(e^{ikl} − e^{−√(−λ)l}) = 0,
A(e^{ikl} − e^{√(−λ)l}) − B(e^{ikl} − e^{−√(−λ)l}) = 0,

and we get either √(−λ) = ik + 2πmi/l or √(−λ) = −ik + 2πmi/l with m ∈ Z.
Remark 4.2.1. This is the simplest example of problems appearing in the description of free electrons in crystals; a much more complicated and realistic example would be the Schrödinger equation

X'' + (λ − V(x))X = 0

or its 3D-analog.

4.2.2 Problems with “almost” explicit solutions


Example 4.2.6 (Robin boundary conditions). Consider the eigenvalue problem

X'' + λX = 0, 0 < x < l, (4.2.27)
X'(0) = αX(0), X'(l) = −βX(l) (4.2.28)

with α ≥ 0, β ≥ 0 (α + β > 0). Then

λ ∫_0^l X² dx = −∫_0^l X''X dx = ∫_0^l X'² dx − X'(l)X(l) + X'(0)X(0)
= ∫_0^l X'² dx + βX(l)² + αX(0)², (4.2.29)

and λn = ωn² where ωn > 0 are roots of

tan(ωl) = (α + β)ω / (ω² − αβ); (4.2.30)
Xn = ωn cos(ωn x) + α sin(ωn x) (4.2.31)

(n = 1, 2, ...). Indeed, looking for X = A cos(ωx) + B sin(ωx) with ω = √λ (we are smarter now!), we find from the first equation of (4.2.28) that ωB = αA, and we can take A = ω, B = α, X = ω cos(ωx) + α sin(ωx).

Then plugging into the second equation we get

−ω² sin(ωl) + αω cos(ωl) = −βω cos(ωl) − αβ sin(ωl),

which leads us to (4.2.30)–(4.2.31).

Observe that

(a) α, β → 0⁺ ⟹ ωn → π(n − 1)/l;

(b) α, β → +∞ ⟹ ωn → πn/l;

(c) α → 0⁺, β → +∞ ⟹ ωn → π(n − 1/2)/l.

Example 4.2.7 (Robin boundary conditions (negative eigenvalues)). However, if α and/or β are negative, one or two negative eigenvalues λ = −γ² can also appear, where

tanh(γl) = −(α + β)γ / (γ² + αβ), (4.2.32)
X(x) = γ cosh(γx) + α sinh(γx). (4.2.33)

Indeed, looking for X = A cosh(γx) + B sinh(γx) with γ = √(−λ), we find from the first equation of (4.2.28) that γB = αA, and we can take A = γ, B = α, X = γ cosh(γx) + α sinh(γx).

Then plugging into the second equation we get

γ² sinh(γl) + αγ cosh(γl) = −βγ cosh(γl) − αβ sinh(γl),

which leads us to (4.2.32)–(4.2.33).


To investigate when this happens, consider the threshold case of eigenvalue λ = 0: then X = cx + d, and plugging into the boundary conditions we have c = αd and c = −β(d + lc); this system has a non-trivial solution (c, d) ≠ 0 iff α + β + αβl = 0. This hyperbola divides the (α, β)-plane into three zones:

[Figure: the two branches of the hyperbola α + β + αβl = 0 (drawn for l = 1, with asymptotes α = −1 and β = −1) divide the (α, β)-plane into three zones: no negative eigenvalues, one negative eigenvalue, and two negative eigenvalues.]

Visual Examples (animation)


Visual Examples (animation)

To calculate the number of negative eigenvalues one can either apply


the general variational principle or analyze the case of α = β; for both
approaches see Appendix 4.5.6.
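The zone count itself is easy to confirm numerically. A sketch (l = 1 and the diagonal sample points α = β are illustrative choices, one per zone): count the positive roots γ of (4.2.32) by detecting sign changes of an equivalent continuous function.

```python
import numpy as np

# Negative eigenvalues are lambda = -gamma^2 with
# tanh(gamma l) = -(alpha + beta) gamma / (gamma^2 + alpha beta);
# for alpha*beta > 0 this is equivalent to h(gamma) = 0 below.
l = 1.0

def count_negative_eigenvalues(alpha, beta):
    h = lambda g: np.tanh(g * l) * (g * g + alpha * beta) + (alpha + beta) * g
    g = np.linspace(1e-3, 30.0, 30000)
    v = h(g)
    return int(np.sum(v[:-1] * v[1:] < 0))   # sign changes = number of roots

assert count_negative_eigenvalues(1.0, 1.0) == 0    # alpha, beta > 0: none
assert count_negative_eigenvalues(-1.0, -1.0) == 1  # between the branches: one
assert count_negative_eigenvalues(-3.0, -3.0) == 2  # beyond both branches: two
print("counts per zone: 0, 1, 2")
```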
Example 4.2.8 (Oscillations of the beam). Small oscillations of the beam are
described by
utt + Kuxxxx = 0 (4.2.34)
with the boundary conditions

u|x=0 = ux |x=0 = 0, (4.2.35)


u|x=l = ux |x=l = 0 (4.2.36)

(both ends are clamped) or

u|x=0 = ux |x=0 = 0, (4.2.37)


uxx |x=l = uxxx |x=l = 0 (4.2.38)

(left end is clamped and the right one is free); one can also consider different
and more general boundary conditions.
Example 4.2.8 (continued). Separating variables we get

X'''' − λX = 0, (4.2.39)
T'' + ω²T = 0, ω = √(Kλ), (4.2.40)

with the boundary conditions correspondingly

X(0) = X'(0) = 0, (4.2.41)
X(l) = X'(l) = 0 (4.2.42)

or

X(0) = X'(0) = 0, (4.2.41)
X''(l) = X'''(l) = 0. (4.2.43)

Eigenvalues of both problems are positive (we skip the proof). Then λ = k⁴ and

X(x) = A cosh(kx) + B sinh(kx) + C cos(kx) + D sin(kx), (4.2.44)

and (4.2.41) implies that C = −A, D = −B and

X(x) = A(cosh(kx) − cos(kx)) + B(sinh(kx) − sin(kx)). (4.2.45)

Plugging it into (4.2.42) we get

A(cosh(kl) − cos(kl)) + B(sinh(kl) − sin(kl)) = 0,
A(sinh(kl) + sin(kl)) + B(cosh(kl) − cos(kl)) = 0.

Then the determinant must be 0:

(cosh(kl) − cos(kl))² − (sinh²(kl) − sin²(kl)) = 0,



which is equivalent to

cosh(kl) · cos(kl) = 1. (4.2.46)

On the other hand, plugging (4.2.45) into (4.2.43) X''(l) = X'''(l) = 0 leads us to

A(cosh(kl) + cos(kl)) + B(sinh(kl) + sin(kl)) = 0,
A(sinh(kl) − sin(kl)) + B(cosh(kl) + cos(kl)) = 0.

Then the determinant must be 0:

(cosh(kl) + cos(kl))² − (sinh²(kl) − sin²(kl)) = 0,

which is equivalent to

cosh(kl) · cos(kl) = −1. (4.2.47)
We solve (4.2.46) and (4.2.47) graphically, plotting cos(kl), −cos(kl) and 1/cosh(kl): the intersections of 1/cosh(kl) with cos(kl) give the roots of (4.2.46), and the intersections with −cos(kl) give the roots of (4.2.47).

The case of both ends free results in the same eigenvalues λn = kn⁴ as when both ends are clamped, but with eigenfunctions

X(x) = A(cosh(kx) + cos(kx)) + B(sinh(kx) + sin(kx)),

and also in the double eigenvalue λ = 0 with eigenfunctions 1 and x.

Visual Examples (animation)

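The graphical solution has a numerical counterpart (a sketch): since cosh overflows for large arguments, look for zeros of g(z) = cos(z) − 1/cosh(z) with z = kl, which has the same positive roots as (4.2.46); the first three are the classical values ≈ 4.7300, 7.8532, 10.9956.

```python
import numpy as np

# Zeros of cosh(z) cos(z) = 1, computed as zeros of cos(z) - 1/cosh(z)
# (same positive roots, no overflow). z = 0 is excluded by starting at 1.
def g(z):
    return np.cos(z) - 1.0 / np.cosh(z)

def bisect(a, b, tol=1e-12):
    ga = g(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if ga * g(m) <= 0:
            b = m
        else:
            a, ga = m, g(m)
    return 0.5 * (a + b)

grid = np.linspace(1.0, 12.0, 2000)
vals = g(grid)
roots = [bisect(grid[i], grid[i + 1])
         for i in range(len(grid) - 1) if vals[i] * vals[i + 1] < 0]
print([round(z, 4) for z in roots])   # approximately 4.7300, 7.8532, 10.9956
```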


Problems to Sections 4.1 and 4.2


“Solve equation graphically” means that you plot a corresponding function; the points (zn, 0) where it intersects the x-axis give all the frequencies ωn = ω(zn).

“Simple solution” means u(x, t) = X(x)T(t).
Hint. You may assume that all eigenvalues are real (which is the case). In
Problems 3–8 you may assume that all eigenvalues are real and non-negative
(which is also the case).
Problem 1. Justify Example 4.2.6 and Example 4.2.7. Consider the eigenvalue problem with Robin boundary conditions

X'' + λX = 0, 0 < x < l,
X'(0) = αX(0),
X'(l) = −βX(l),

with α, β ∈ R.

(a) Prove that positive eigenvalues are λn = ωn² and the corresponding eigenfunctions are Xn, where ωn > 0 are roots of

tan(ωl) = (α + β)ω / (ω² − αβ); Xn = ωn cos(ωn x) + α sin(ωn x),

with n = 1, 2, .... Solve this equation graphically.

(b) Prove that negative eigenvalues, if there are any, are λn = −γn² and the corresponding eigenfunctions are Yn, where γn > 0 are roots of

tanh(γl) = −(α + β)γ / (γ² + αβ),
Yn(x) = γn cosh(γn x) + α sinh(γn x).

Solve this equation graphically.

(c) To investigate how many negative eigenvalues there are, consider the threshold case of eigenvalue λ = 0: then X = cx + d, and plugging into the b.c. we have c = αd and c = −β(d + lc); this system has a non-trivial solution (c, d) ≠ 0 iff α + β + αβl = 0. This hyperbola divides the (α, β)-plane into three zones.
(d) Prove that eigenfunctions corresponding to different eigenvalues are orthogonal:

∫_0^l Xn(x)Xm(x) dx = 0 if λn ≠ λm,

where we now consider all eigenfunctions (no matter whether they correspond to positive or negative eigenvalues).
(e) Bonus Prove that eigenvalues are simple, i.e. all eigenfunctions corre-
sponding to the same eigenvalue are proportional.
Problem 2. Analyse the same problem albeit with Dirichlet condition on the
left end: X(0) = 0.
Problem 3. Oscillations of the beam are described by equation
utt + Kuxxxx = 0, 0 < x < l.
with K > 0.
If both ends are clamped (that means having fixed positions and directions) then the boundary conditions are

u(0, t) = ux(0, t) = 0,
u(l, t) = ux(l, t) = 0.
(a) Find equation, describing frequencies, and find corresponding eigen-
functions (You may assume that all eigenvalues are real and positive).
(b) Solve equation, describing frequencies, graphically.
(c) Prove that eigenfunctions corresponding to different eigenvalues are
orthogonal.
(d) Bonus. Prove that eigenvalues are simple, i.e. all eigenfunctions
corresponding to the same eigenvalue are proportional.
(e) Bonus. Sketch on the computer a couple first eigenfunctions.
Hint. Change coordinate system so that interval becomes [−L, L] with
L = l/2; consider separately even and odd eigenfunctions.
Problem 4. Consider oscillations of the beam with both ends simply supported:

u(0, t) = uxx(0, t) = 0,
u(l, t) = uxx(l, t) = 0.
Follow the previous problem (all parts) but also consider eigenvalue 0.

Problem 5. Consider oscillations of the beam with both ends free:

uxx(0, t) = uxxx(0, t) = 0,
uxx(l, t) = uxxx(l, t) = 0.

Follow previous problem but also consider eigenvalue 0.


Problem 6. Consider oscillations of the beam with the clamped left end and
the simply supported right end. Then boundary conditions are different on
the left and right ends.
Note. In this case, due to the lack of symmetry, you cannot consider separately even and odd eigenfunctions. The same is true for some other problems below.
Problem 7. Consider oscillations of the beam with the clamped left end and
the free right end. Then boundary conditions are different on the left and
right ends.
Problem 8. Consider oscillations of the beam with the simply supported left
end and the free right end. Then boundary conditions are different on the
left and right ends.
Problem 9. Consider wave equation with the Neumann boundary condition
on the left and “weird” b.c. on the right:

utt − c²uxx = 0, 0 < x < l,
ux(0, t) = 0,
(ux + iαut)(l, t) = 0

with α ∈ R.

(a) Separate variables;

(b) Find “weird” eigenvalue problem for ODE;

(c) Solve this problem;

(d) Find simple solution u(x, t) = X(x)T (t).

Hint. You may assume that all eigenvalues are real (which is the case).

Problem 10. Consider energy levels of the particle in the “rectangular well”

−uxx + V u = λu, with V(x) = −H for |x| ≤ L and V(x) = 0 for |x| > L.

Hint. Solve the equation for |x| < L and for |x| > L; the solution must be continuous (together with its first derivative) at |x| = L: u(L − 0) = u(L + 0), ux(L − 0) = ux(L + 0), and the same at −L.

Hint. All eigenvalues belong to the interval (−H, 0).

Hint. Consider separately even and odd eigenfunctions.

4.3 Orthogonal systems


4.3.1 Examples
All systems we considered in the previous section were orthogonal, i.e.

(Xn, Xm) = 0 ∀m ≠ n (4.3.1)

with

(X, Y) := ∫_0^l X(x)Ȳ(x) dx, ‖X‖² := (X, X), (4.3.2)

where Ȳ means the complex conjugate of Y.
Exercise 4.3.1. Prove it by direct calculation.
However, instead we show that this nice property (and the fact that
eigenvalues are real) is due to the fact that those problems are self-adjoint.
Consider X, Y satisfying Robin boundary conditions

X'(0) − αX(0) = 0, (4.3.3)
X'(l) + βX(l) = 0 (4.3.4)

with α, β ∈ R (and Y satisfying the same conditions). Note that

(X'', Y) = ∫ X''(x)Ȳ(x) dx = −∫ X'(x)Ȳ'(x) dx + X'(l)Ȳ(l) − X'(0)Ȳ(0)
= −(X', Y') − βX(l)Ȳ(l) − αX(0)Ȳ(0). (4.3.5)



Therefore, if we plug Y = X ≠ 0, an eigenfunction (X'' + λX = 0) satisfying conditions (4.3.3) and (4.3.4), we get −λ‖X‖² in the left-hand expression (with obviously real ‖X‖² ≠ 0), and also the right-hand expression is real (since α, β ∈ R); so λ must be real: all eigenvalues are real.

Further, for (X, Y'') we obtain the same equality albeit with α, β replaced by their complex conjugates ᾱ, β̄:

(X, Y'') = ∫ X(x)Ȳ''(x) dx = −(X', Y') − β̄X(l)Ȳ(l) − ᾱX(0)Ȳ(0),

and therefore, due to the assumption α, β ∈ R,

(X'', Y) = (X, Y''). (4.3.6)

But then if X, Y are eigenfunctions corresponding to different eigenvalues λ and μ, we get from (4.3.6) that −λ(X, Y) = −μ(X, Y) and (X, Y) = 0 due to λ ≠ μ.
Remark 4.3.1. For periodic boundary conditions we cannot apply these
arguments to prove that cos(2πnx/l) and sin(2πnx/l) are orthogonal since
they correspond to the same eigenvalue; we need to prove it directly.
Remark 4.3.2. (a) So, we observed that the operator A : X → −X'' with the domain D(A) = {X ∈ L²([0, l]) : AX ∈ L²([0, l]), X(0) = X(l) = 0}, where Lᵖ(J) denotes the space of functions with finite norm ‖X‖_{Lᵖ(J)} = (∫_J |X|ᵖ dx)^{1/p}, has the property (AX, Y) = (X, AY). But this is not self-adjointness, only symmetry.

(b) Self-adjointness means that A* = A, and this means that the operators A* and A have the same domain, i.e. that the domain of A* is not wider than the domain of A. And one can see easily that in order for (AX, Y) = (X, A*Y) to be valid for all X with X(0) = X(l) = 0, exactly the same conditions should be imposed on Y: Y(0) = Y(l) = 0.

(c) On the other hand, the same operator A : X → −X'' with the domain D(A) = {X ∈ L²([0, l]) : AX ∈ L²([0, l]), X(0) = X'(0) = X(l) = 0} is only symmetric but not self-adjoint, because the equality (AX, Y) = (X, A*Y) holds for all such X and for all Y such that Y(l) = 0, which means that

D(A*) = {Y ∈ L²([0, l]) : AY ∈ L²([0, l]), Y(l) = 0},

i.e. the domain of A* is larger. And one can check that for such an operator A there are no eigenvalues and eigenfunctions at all!
So, we arrive at the following terminology:

(a) An operator A is called symmetric if A* ⊃ A, i.e. A* has the same or a larger domain, but on the domain of A it coincides with A.

(b) An operator A is called self-adjoint if A* = A, i.e. if both A and A* have the same domain.

We do not formulate this as a definition since we do not discuss the exact definition of the adjoint operator. We need to discuss domains and the difference between symmetric and self-adjoint operators because the operator X ↦ −X'' is unbounded.

4.3.2 Abstract orthogonal systems: definition


Consider a linear space H, real or complex. Recall the standard definition from the linear algebra course:

1. u + v = v + u ∀u, v ∈ H;

2. (u + v) + w = u + (v + w) ∀u, v, w ∈ H;

3. ∃0 ∈ H : 0 + u = u ∀u ∈ H;

4. ∀u ∈ H ∃(−u) : u + (−u) = 0;

5. α(u + v) = αu + αv ∀u, v ∈ H ∀α ∈ R;

6. (α + β)u = αu + βu ∀u ∈ H ∀α, β ∈ R;

7. α(βu) = (αβ)u ∀u ∈ H ∀α, β ∈ R;

8. 1u = u ∀u ∈ H.

For a complex linear space replace R by C. Assume that an inner product is defined on H:

1. (u + v, w) = (u, w) + (v, w) ∀u, v, w ∈ H;

2. (αu, v) = α(u, v) ∀u, v ∈ H ∀α ∈ C;



3. (u, v) = (v, u)‾ ∀u, v ∈ H (the overline denotes complex conjugation; in a real space this is plain symmetry);

4. ‖u‖² := (u, u) ≥ 0 ∀u ∈ H (which implies that it is real, if we consider complex spaces) and ‖u‖ = 0 ⟺ u = 0.

Definition 4.3.1. (a) Finite dimensional real linear space with an inner
product is called Euclidean space.

(b) Finite dimensional complex linear space with an inner product is called
Hermitian space.

(c) Infinite dimensional linear space (real or complex) with an inner


product is called pre-Hilbert space.

For Hilbert space we will need another property (completeness) which


we add later.

Definition 4.3.2. (a) A system {un}, 0 ≠ un ∈ H (finite or infinite), is orthogonal if (um, un) = 0 ∀m ≠ n;

(b) An orthogonal system is orthonormal if ‖un‖ = 1 ∀n, i.e. (um, un) = δmn (the Kronecker symbol).

4.3.3 Orthogonal systems: approximation


Consider a finite orthogonal system {un}. Let K be its linear hull: the set of linear combinations Σn αn un. Obviously K is a linear subspace of H. Let v ∈ H; we try to find the best approximation of v by elements of K, i.e. we are looking for w ∈ K s.t. ‖v − w‖ is minimal.

Theorem 4.3.1. (i) There exists a unique minimizer;

(ii) This minimizer is the orthogonal projection of v to K, i.e. w ∈ K s.t. (v − w) is orthogonal to all elements of K;

(iii) Such an orthogonal projection is unique and w = Σn αn un with

αn = (v, un)/‖un‖². (4.3.7)

(iv) ‖v‖² = ‖w‖² + ‖v − w‖².



(v) v = w ⟺ ‖v‖² = ‖w‖².


Proof. (ii)–(iii) Obviously (v − w) is orthogonal to un iff (4.3.7) holds. If (4.3.7) holds for all n, then (v − w) is orthogonal to all un and therefore to all their linear combinations.

(iv)–(v) In particular, (v − w) is orthogonal to w, and then

‖v‖² = ‖(v − w) + w‖² = ‖v − w‖² + 2 Re(v − w, w) + ‖w‖² = ‖v − w‖² + ‖w‖²,

since (v − w, w) = 0.

(i) Consider w' ∈ K. Then ‖v − w'‖² = ‖v − w‖² + ‖w − w'‖², because (w − w') ∈ K and therefore it is orthogonal to (v − w).
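Theorem 4.3.1 can be illustrated concretely (a sketch; the orthogonal pair u1, u2 in R⁴ and the vector v are arbitrary illustrative choices): compute w by (4.3.7) and check statements (ii), (iv) and minimality.

```python
import numpy as np

# Orthogonal projection onto K = span{u1, u2} via alpha_n = (v, u_n)/||u_n||^2.
u1 = np.array([1.0, 1.0, 0.0, 0.0])
u2 = np.array([1.0, -1.0, 2.0, 0.0])
assert abs(u1 @ u2) < 1e-12                  # {u1, u2} is orthogonal

v = np.array([3.0, 1.0, 2.0, 5.0])
alphas = [(v @ u) / (u @ u) for u in (u1, u2)]
w = alphas[0] * u1 + alphas[1] * u2

# (ii): v - w is orthogonal to K
assert abs((v - w) @ u1) < 1e-12 and abs((v - w) @ u2) < 1e-12
# (iv): ||v||^2 = ||w||^2 + ||v - w||^2
assert abs(v @ v - (w @ w + (v - w) @ (v - w))) < 1e-10
# minimality (i): other elements of K are never closer to v than w
rng = np.random.default_rng(0)
for _ in range(100):
    w2 = w + rng.normal(size=2) @ np.array([u1, u2])
    assert np.linalg.norm(v - w2) >= np.linalg.norm(v - w) - 1e-12
print("projection checks passed")
```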

4.3.4 Orthogonal systems: approximation. II


Now let {un}n=1,2,... be an infinite orthogonal system. Consider its finite subsystem with n = 1, 2, ..., N, introduce KN for it, and consider the orthogonal projection wN of v on KN. Then

wN = Σ_{n=1}^N αn un

where αn are defined by (4.3.7): αn = (v, un)/‖un‖². Then according to Statement (iv) of Theorem 4.3.1

‖v‖² = ‖v − wN‖² + ‖wN‖² ≥ ‖wN‖² = Σ_{n=1}^N |αn|²‖un‖².

Therefore the series on the right-hand side below converges:

‖v‖² ≥ Σ_{n=1}^∞ |αn|²‖un‖². (4.3.8)

Really, recall that a non-negative series either converges or diverges to ∞.

Then wN is a Cauchy sequence. Indeed, for M > N

‖wN − wM‖² = Σ_{n=N+1}^M |αn|²‖un‖² ≤ εN

with εN → 0 as N → ∞ because series in (4.3.8) converges.


Now we want to conclude that wN converges and to do this we must
assume that every Cauchy sequence converges.

Definition 4.3.3. (a) H is complete if every Cauchy sequence converges


in H.

(b) Complete pre-Hilbert space is called Hilbert space.

Remark 4.3.3. Every pre-Hilbert space could be completed i.e. extended to


a complete space. From now on H is a Hilbert space.
Then we can introduce K, the closed linear hull of {un}n=1,2,..., i.e. the space of sums

Σ_{n=1}^∞ αn un (4.3.9)

with αn satisfying

Σ_{n=1}^∞ |αn|²‖un‖² < ∞. (4.3.10)

(The linear hull would be the space of finite linear combinations.)


Let v ∈ H. We want to find the best approximation of v by elements of
K. But then we get immediately

Theorem 4.3.2. If H is a Hilbert space then Theorem 4.3.1 holds for infinite
systems as well.

4.3.5 Orthogonal systems: completeness


Definition 4.3.4. An orthogonal system is complete if the equivalent conditions below are satisfied:

(a) Its closed linear hull coincides with H.

(b) If v ∈ H is orthogonal to all un then v = 0.

Remark 4.3.4. (a) Don’t confuse completeness of spaces and completeness


of orthogonal systems.

(b) Complete orthogonal system in H = orthogonal basis in H.



(c) In a finite-dimensional space an orthogonal system is complete iff the number of vectors equals the dimension. Not so in an infinite-dimensional space.

Our next goal is to establish completeness of some orthogonal systems and therefore to give a positive answer (in the corresponding frameworks) to the question at the end of Section 4.3.1: can we decompose any function into eigenfunctions? Alternatively: is the general solution a combination of simple solutions?


4.4 Orthogonal systems and Fourier series


4.4.1 Formulae
Consider the trigonometric system of functions

{1/2, cos(πnx/l), sin(πnx/l)}, n = 1, 2, ..., (4.4.1)

on the interval J := [x0, x1] with x1 − x0 = 2l.

These are eigenfunctions of X'' + λX = 0 with periodic boundary conditions X(x0) = X(x1), X'(x0) = X'(x1).

Let us first establish that this is an orthogonal system.

Proposition 4.4.1.

∫_J cos(πmx/l) cos(πnx/l) dx = l δmn,
∫_J sin(πmx/l) sin(πnx/l) dx = l δmn,
∫_J cos(πmx/l) sin(πnx/l) dx = 0,

and

∫_J cos(πmx/l) dx = 0, ∫_J sin(πmx/l) dx = 0, ∫_J dx = 2l

for all m, n = 1, 2, ....

Proof. Easy; use formulae

2 cos(α) cos(β) = cos(α − β) + cos(α + β),


2 sin(α) sin(β) = cos(α − β) − cos(α + β),
2 sin(α) cos(β) = sin(α − β) + sin(α + β).
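Proposition 4.4.1 is also easy to confirm numerically (a sketch; l = 1.5 is illustrative): a uniform Riemann sum over J = [0, 2l] is essentially exact for these periodic integrands.

```python
import numpy as np

# Orthogonality relations of the trigonometric system on J = [0, 2l].
l = 1.5
N = 20000
x = np.linspace(0.0, 2 * l, N, endpoint=False)
dx = 2 * l / N

def ip(f, g):                    # approximate integral over J of f*g
    return np.sum(f * g) * dx

for m in range(1, 4):
    for n in range(1, 4):
        cm, sm = np.cos(np.pi * m * x / l), np.sin(np.pi * m * x / l)
        cn, sn = np.cos(np.pi * n * x / l), np.sin(np.pi * n * x / l)
        assert abs(ip(cm, sn)) < 1e-10
        assert abs(ip(cm, cn) - (l if m == n else 0.0)) < 1e-8
        assert abs(ip(sm, sn) - (l if m == n else 0.0)) < 1e-8
print("orthogonality relations confirmed")
```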

Therefore, according to the previous Section 4.3, we arrive at the decomposition

f(x) = a0/2 + Σ_{n=1}^∞ (an cos(πnx/l) + bn sin(πnx/l)) (4.4.2)

with coefficients calculated according to (4.3.7)

an = (1/l) ∫_J f(x) cos(πnx/l) dx, n = 0, 1, 2, ..., (4.4.3)
bn = (1/l) ∫_J f(x) sin(πnx/l) dx, n = 1, 2, ..., (4.4.4)

and satisfying Parseval's equality

(l/2)|a0|² + Σ_{n=1}^∞ l(|an|² + |bn|²) = ∫_J |f(x)|² dx. (4.4.5)

Exercise 4.4.1. (a) Prove formulae (4.4.2)–(4.4.5) based on (4.3.7) and Proposition 4.4.1.

(b) Also prove them based only on Proposition 4.4.1, that means without norms and inner products (just do all calculations from scratch).

So far this is a conditional result: provided we can decompose the function f(x).

4.4.2 Completeness of the trigonometric system


Now our goal is to prove that any function f(x) on J can be decomposed into the Fourier series

f(x) = a0/2 + Σ_{n=1}^∞ (an cos(πnx/l) + bn sin(πnx/l)). (4.4.2)

First we need

Definition 4.4.1. (a) f(x) is a piecewise-continuous function on J = [a, b] if for some a = x0 < x1 < ... < xK = b the function f(x) is continuous on each (xk, xk+1) and the limits lim_{x→xk⁺} f(x) and lim_{x→x(k+1)⁻} f(x) exist, k = 0, 1, ..., K − 1.

(b) Further, f(x) is a piecewise-continuously differentiable function if it is also continuously differentiable on each (xk, xk+1) and the limits lim_{x→xk⁺} f'(x) and lim_{x→x(k+1)⁻} f'(x) exist, k = 0, 1, ..., K − 1.

Lemma 4.4.2. Let f(x) be a piecewise-continuous function on J. Then

∫_J f(x) cos(ωx) dx → 0 as ω → ∞, (4.4.6)

and the same is true with cos(ωx) replaced by sin(ωx).

Proof. (a) Assume first that f(x) is continuously differentiable on J. Then, integrating by parts,

∫_J f(x) cos(ωx) dx = ω⁻¹ f(x) sin(ωx)|_{x=x0}^{x=x1} − ω⁻¹ ∫_J f'(x) sin(ωx) dx = O(ω⁻¹).

(b) Assume now only that f(x) is continuous on J. Then it can be uniformly approximated by continuously differentiable functions (the proof is not difficult, but we skip it anyway):

∀ε > 0 ∃fε ∈ C¹(J) : ∀x ∈ J |f(x) − fε(x)| ≤ ε.

Then obviously the difference between the integrals (4.4.6) for f and for fε does not exceed 2lε; so choosing ε = ε(δ) = δ/(4l) we make it < δ/2. After ε is chosen and fε fixed, we can choose ωε s.t. for ω > ωε the integral (4.4.6) for fε does not exceed δ/2 by virtue of (a). Then the absolute value of the integral (4.4.6) for f does not exceed δ.

(c) The integral (4.4.6) over the interval J equals the sum of the integrals over the intervals where f is continuous.
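A concrete instance of the lemma for a genuinely discontinuous f is instructive (a sketch; the step function is an illustrative choice): integrating each continuity piece exactly exhibits the O(1/ω) decay obtained in part (a).

```python
import numpy as np

# f = 1 on [0, 1], f = -2 on (1, 2]; the integral of f(x) cos(omega x)
# over [0, 2] is computed exactly piece by piece.
def integral(omega):
    return np.sin(omega) / omega - 2 * (np.sin(2 * omega) - np.sin(omega)) / omega

for omega in (10.0, 100.0, 1000.0):
    # crude bound: each antiderivative term contributes at most 2/omega
    assert abs(integral(omega)) <= 6.0 / omega
print([round(integral(w), 6) for w in (10.0, 100.0, 1000.0)])
```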

Now calculate the coefficients according to (4.4.3)–(4.4.4) (albeit plugging y instead of x)

an = (1/l) ∫_J f(y) cos(πny/l) dy, n = 0, 1, 2, ..., (4.4.3)
bn = (1/l) ∫_J f(y) sin(πny/l) dy, n = 1, 2, ..., (4.4.4)

and plug them into the partial sum:

SN(x) = a0/2 + Σ_{n=1}^N (an cos(πnx/l) + bn sin(πnx/l)) = (1/l) ∫_J KN(x, y) f(y) dy, (4.4.7)

KN(x, y) = 1/2 + Σ_{n=1}^N (cos(πny/l) cos(πnx/l) + sin(πny/l) sin(πnx/l))
= 1/2 + Σ_{n=1}^N cos(πn(y − x)/l). (4.4.8)
