Applied Maths 3
AKSUM UNIVERSITY
APPLIED MATHEMATICS III (Math 331)
(Math 2062)
CREDIT HOURS: 4
June 15, 2014
APPLIED MATHEMATICS III
(MATH 2062)
Written By:
Badri Ahmed (MSc.)
Moges Birhanu (MSc.)
Tekleberhan Berhe (MSc.)
Edited By:
Course Description
Topics discussed in the course include methods of solving first-order differential equations, second-order linear equations, Fourier series and integrals, Fourier and Laplace transforms, vector calculus, divergence, curl, line integrals, Green's and Stokes' theorems, functions of complex variables, and the Cauchy integral theorem and formula.
Course Objective
MODULE INTRODUCTION
The basic mathematical skills of Applied Mathematics come from a variety of sources, which
depend on the field of interest: the theory of ordinary and partial differential equations, statistical
sciences, probability and decision theory, operational analysis, optimization theory, the
mechanics of solid materials and of fluids flows, numerical analysis, scientific computation and
the science of modern computer-based modeling.
This module is prepared as a supporting course, especially for students studying engineering and other fields.
The courses in Applied Mathematics are designed for students with a wide range of goals and are
not limited to the needs of students following an applied mathematics concentration.
The module covers all the basic ideas that are used to study Applied Mathematics III.
This module builds upon the presumption that students studying the subject have already gained some knowledge in Applied Mathematics I and II. In this module, you will learn the concepts of ordinary differential equations of order one and two, Fourier series and integrals, and vector calculus. You will also learn complex analytic functions and complex integrals. You will find many solved examples, exercises and miscellaneous exercises at the end of each chapter. We have tried to balance the examples and exercises so that students can understand them easily.
CONTENTS
Chapter One: ORDINARY DIFFERENTIAL EQUATIONS OF THE FIRST ORDER
Chapter Five: Vector Calculus
Chapter One
Introduction:
Unit Objectives:
Upon successful completion of this chapter, the student will be able to:
State the basic definitions and terminology of differential equations, and discuss the central issues and objectives of the course.
Classify Ordinary Differential Equations (ODEs) and distinguish ODEs from Partial
Differential Equations (PDEs).
Explain what is meant by an integrating factor, find the integrating factor for a linear
first-order equation with constant coefficients and use it to solve the equation.
Solve “Exact” and “Homogeneous” equations.
Decide which (if any) of the above methods can be used to solve a given first-order
differential equation.
Use a given change of variable to transform a first-order differential equation into one
that can more easily be solved.
Section Objectives:
Definition 1.1 A Differential Equation (DE) is an equation involving a dependent variable and its derivatives with respect to one or more independent variables.
Definition 1.2 A differential equation that contains only ordinary derivatives of one or more unknown functions with respect to a single independent variable is called an Ordinary Differential Equation (ODE).
A Partial Differential Equation (PDE) is an equation that involves partial derivatives of an unknown function of two or more variables; that is, derivatives of functions of more than one variable.
Definition 1.3 The order of the equation is the order of the highest derivative that appears.
Definition 1.4 The degree of a DE is the highest exponent of the highest-order derivative that appears in the DE.
An ODE is said to be of order n if the nth derivative of the unknown function y is the highest derivative of y in the equation.
Examples 1.1
Definition 1.5 The most general nth order linear differential equation can be written
a₀(x)y⁽ⁿ⁾ + a₁(x)y⁽ⁿ⁻¹⁾ + ⋯ + aₙ₋₁(x)y′ + aₙ(x)y = f(x)   (*)
where a₀(x), a₁(x), …, aₙ(x) are called the coefficients of the equation.
The known function f(x) is called the nonhomogeneous term; equation (*) is called homogeneous if f(x) = 0.
If the coefficients in (*) are constant, (*) becomes a₀y⁽ⁿ⁾ + a₁y⁽ⁿ⁻¹⁾ + ⋯ + aₙ₋₁y′ + aₙy = f(x), which is called a constant coefficient linear equation. We require a₀ ≠ 0; otherwise the equation would not be of nth order. An ordinary differential equation that is not linear is said to be non-linear.
Example 1.4
ODE                                                             Property
1. dy/dx + 2xy = sin x                                          linear, variable coefficient, first-order equation
3. d²y/dx² + a(dy/dx) + by = sin φx, with φ a constant          linear, constant coefficient
Definition 1.6 A functional relation between two variables (dependent and independent) that satisfies the given DE is called a solution (or integral curve) of the ODE. A solution of an nth order equation that contains n arbitrary constants is called the general solution of the equation. If the arbitrary constants in the general solution are assigned specific values, the result is called a particular solution of the equation.
Note: It is possible to have more than one solution of a differential equation. For instance, y = 2x³ + A and y = 2x³ are solutions of the differential equation y′ = 6x².
Definition 1.7 An ODE together with an initial condition is called an initial value problem.
Thus, if the ordinary differential equation is explicit
𝑦 ′ = 𝑓(𝑥, 𝑦)
the initial value problem is of the form
𝑦 ′ = 𝑓(𝑥, 𝑦) , 𝑦(𝑥0 ) = 𝑦0
Existence: Does the differential equation dy/dx = f(x, y) possess solutions? Do any of the solution curves pass through the point (x₀, y₀)?
Uniqueness : When can we be certain that there is precisely one solution curve passing
through the point (𝑥0 , 𝑦0 )?
Section Objectives:
dy/dx = f(x) ⟹ dy = f(x)dx
a) Solve y′ = cos x.
Solution: We have
dy/dx = cos x
This implies that
dy = cos x dx … (*)
Integrating both sides of (*),
∫dy = ∫cos x dx
Hence,
y = sin x + c
b) Solve 𝑦 ′ = ln 𝑥 (exercise)
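Direct integrations like these can be checked with a computer algebra system; the sketch below uses SymPy (using SymPy is an assumption for illustration, not part of the module) to solve y′ = cos x.

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# y' = cos(x)  =>  y = sin(x) + C
sol = sp.dsolve(sp.Eq(y(x).diff(x), sp.cos(x)), y(x))
print(sol)  # Eq(y(x), C1 + sin(x))
```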
We begin our study of how to solve differential equations with the simplest of all differential
equations: first-order equations with separable variables. Because the method in this section and many techniques for solving differential equations involve integration, you are urged to refresh your memory on important formulas (such as ∫ du/u) and techniques (such as integration by parts) by consulting a calculus text.
𝑦 ′ = 𝐹(𝑥)𝐺(𝑦) (*)
⟹ 𝑀1 (𝑦) = 𝑀2 (𝑥) + 𝑐,
where
∫ (1/G(y)) dy = M₁(y) and ∫ F(x) dx = M₂(x) + c
= |1 + x| · e^(c₁)
⟹ y = ±e^(c₁)(1 + x)
Replacing c = ±e^(c₁) then gives
y = c(1 + x)
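The worked steps above are consistent with the separable equation y′ = y/(1 + x) (an inference from the surviving lines, since the problem statement is cut off). A SymPy sketch confirms the family y = c(1 + x):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# hypothetical separable equation matching the worked steps above
ode = sp.Eq(y(x).diff(x), y(x)/(1 + x))
sol = sp.dsolve(ode, y(x))
print(sol)  # a constant multiple of (1 + x)
```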
Example 1.8: (An initial value problem)
Solve (e^(2y) − y) cos x · dy/dx = e^y sin 2x,  y(0) = 0
Before integrating we use term wise division on the left-hand side and the trigonometric identity
𝑠𝑖𝑛2𝑥 = 2𝑠𝑖𝑛𝑥𝑐𝑜𝑠𝑥
on the right-hand side. Then
Exercises
(b) y′ = x²(1 + y)   (d) dx + e^(3x) dy = 0   (f) y′ + (y − 1)/(1 − x) = 0
Sometimes, the best way of solving a DE is to use a change of variables that will put the DE into
a form whose solution method we know. We now consider a class of DEs that are not directly
solvable by separation of variables, but, through a change of variables, can be solved by that
method.
Definition 1.9 A function f(x, y) is said to be algebraically homogeneous of degree n, or simply homogeneous of degree n, if
f(kx, ky) = kⁿ f(x, y),
for some real number n and all k > 0, for (x, y) ≠ (0, 0).
Example 1.9.
a) f(x, y) = x² + 3xy + 4y² is homogeneous (of degree 2)
b) Show that f(x, y) = ln|y| − ln|x| for (x, y) ≠ (0, 0) is homogeneous of degree 0.
Example 1.10
a) f(x, y) = (x² + y²)/(x³ + y³) is homogeneous
∫ dx/x = ∫ du/(F(u) − u)
Hence the solution of the given differential equation becomes
ln|x| = ∫ du/(F(u) − u) + c
a) 2x²y′ = x² + y²
Solution: If we divide both sides by 2x², we obtain
y′ = 1/2 + (1/2)(y/x)²
which is homogeneous. Now, setting
u = y/x ⟹ y = ux
yields the equation
xu′ = (1/2)u² − u + 1/2
After rearranging,
2u′/(u − 1)² = 1/x
Then integrating yields
∫ 2du/(u − 1)² = ∫ (1/x) dx
−2/(u − 1) = ln(x) + c
Solving for u gives
u = 1 − 2/(ln(x) + c)
Hence,
y = x − 2x/(ln(x) + c)
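A quick symbolic check (SymPy assumed for illustration) that the candidate solution satisfies 2x²y′ = x² + y²:

```python
import sympy as sp

x, c = sp.symbols('x c', positive=True)

# candidate solution from the worked example above
y = x - 2*x/(sp.log(x) + c)

# residual of 2x^2 y' - (x^2 + y^2); it should simplify to zero
residual = sp.simplify(2*x**2*sp.diff(y, x) - (x**2 + y**2))
print(residual)  # 0
```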
b) y′ = (x² + y²)/(xy)
Solution: If we divide the numerator and denominator of the fraction by x², we obtain
y′ = (1 + (y/x)²)/(y/x)
which is homogeneous. Now setting u = y/x, or y = ux, yields
xu′ = (1 + u²)/u − u = 1/u
∫ u du = ∫ (1/x) dx
⟹ u²/2 = ln(x) + c
and then, since y = ux,
y = x√(2 ln(x) + c)
Exercises
Solve a) (𝑦 2 + 2𝑥𝑦)𝑑𝑥 − 𝑥 2 𝑑𝑦 = 0
𝑏) (𝑥 2 + 𝑦 2 )𝑑𝑥 = 2𝑥𝑦 𝑑𝑦
c) 𝑥 2 𝑦𝑑𝑥 − (𝑥 3 + 𝑦 3 )𝑑𝑦 = 0
To determine k(y), differentiate f from (*) with respect to y, set ∂f/∂y equal to N (use (b)) to obtain dk/dy, and then integrate dk/dy to get k.
𝑓(𝑥, 𝑦) = 𝑥 2 𝑦 + 𝑘(𝑦)
Taking the partial derivative of the last expression with respect to y and setting the result equal to
𝑁(𝑥, 𝑦), gives
𝜕𝑓
= 𝑥 2 + 𝑘 ′ (𝑦) = 𝑥 2 − 1
𝜕𝑦
It follows that 𝑘 ′ (𝑦) = −1 and 𝑘(𝑦) = −𝑦.
Hence 𝑓(𝑥, 𝑦) = 𝑥 2 𝑦 − 𝑦
Note: in the above example, the equation could be solved by separation of variables.
Example 1.14: Solve (𝑒 2𝑦 − 𝑦𝑐𝑜𝑠𝑥𝑦)𝑑𝑥 + (2𝑥𝑒 2𝑦 − 𝑥𝑐𝑜𝑠𝑥𝑦 + 2𝑦)𝑑𝑦 = 0
Solution: The equation is exact because
∂M/∂y = 2e^(2y) + xy sin xy − cos xy = ∂N/∂x
Hence a function 𝑓(𝑥, 𝑦) exists for which
M(x, y) = ∂f/∂x and N(x, y) = ∂f/∂y
Now, for variety, we shall start with the assumption that ∂f/∂y = N(x, y); that is,
∂f/∂y = 2xe^(2y) − x cos xy + 2y
It follows that
f(x, y) = xe^(2y) − sin xy + y² + h(x)
∂f/∂x = e^(2y) − y cos xy + h′(x) = e^(2y) − y cos xy
and so
ℎ′ (𝑥) = 0 or ℎ(𝑥) = 𝑐.
Hence a family of solutions is
𝑥𝑒 2𝑦 − 𝑠𝑖𝑛𝑥𝑦 + 𝑦 2 + 𝑐 = 0.
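The exactness test and the recovered potential f can both be verified symbolically; a SymPy sketch (SymPy itself is an illustrative assumption):

```python
import sympy as sp

x, y = sp.symbols('x y')

M = sp.exp(2*y) - y*sp.cos(x*y)
N = 2*x*sp.exp(2*y) - x*sp.cos(x*y) + 2*y

# exactness test: dM/dy must equal dN/dx
exact = sp.simplify(sp.diff(M, y) - sp.diff(N, x)) == 0

# the potential found above, f = x e^{2y} - sin(xy) + y^2, must recover M and N
f = x*sp.exp(2*y) - sp.sin(x*y) + y**2
print(exact, sp.simplify(sp.diff(f, x) - M), sp.simplify(sp.diff(f, y) - N))
# True 0 0
```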
Exercises
Definition 1.12: An integrating factor is a factor by which we multiply a given non-exact equation to make it exact.
I(x) = e^(∫P(x)dx)
Case 2: We also look to see whether there is an integrating factor that depends only on y and not on x. We can do the same calculation, this time with ∂I/∂x equal to zero, to see that such an integrating factor exists if the ratio (Nx − My)/M is a function Q(y) of y only (and not of x); then
I(y) = e^(∫Q(y)dy)
Example 1.15: Solve for 𝑦(𝑥), if (2𝑥𝑦 2 − 4𝑦) + (3𝑥 2 𝑦 − 8𝑥)𝑦 ′ = 0
Solution: In differential form the equation is
(2𝑥𝑦 2 − 4𝑦)𝑑𝑥 + (3𝑥 2 𝑦 − 8𝑥)𝑑𝑦 = 0
Therefore,
𝑀 = 2𝑥𝑦 2 − 4𝑦 and 𝑁 = 3𝑥 2 𝑦 − 8𝑥
𝑀𝑦 = 4𝑥𝑦 − 4 and 𝑁𝑥 = 6𝑥𝑦 − 8
These are not equal (i.e. 𝑀𝑦 ≠ 𝑁𝑥 ), so the equation is not exact. We look for integrating factors
using the two criteria we know.
First, we have (My − Nx)/N = (−2xy + 4)/(3x²y − 8x), which is not a function of x only.
Second, we have (Nx − My)/M = (2xy − 4)/(2xy² − 4y) = 1/y, which is a function of y only.
Hence I(y) = e^(∫(1/y)dy) = y is an integrating factor; multiplying the equation by y and integrating the new M = 2xy³ − 4y² with respect to x gives f(x, y) = x²y³ − 4xy² + k(y). But from the new N we have
f_y(x, y) = 3x²y² − 8xy
Then
3x²y² − 8xy = 3x²y² − 8xy + k′(y)
It follows that,
𝑘 ′ (𝑦) = 0
Integrating both sides ,
𝑘(𝑦) = 𝑐.
Therefore our solutions are given implicitly by 𝑥 2 𝑦 3 − 4𝑥𝑦 2 = 𝑐
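The integrating-factor criterion and the resulting exactness can be checked symbolically (SymPy assumed for illustration):

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)

M = 2*x*y**2 - 4*y
N = 3*x**2*y - 8*x

# (Nx - My)/M should reduce to a function of y alone; here it is 1/y
Q = sp.simplify((sp.diff(N, x) - sp.diff(M, y)) / M)

# integrating factor I(y) = exp(integral of Q dy) = y; it makes the ODE exact
I = sp.exp(sp.integrate(Q, y))
now_exact = sp.simplify(sp.diff(I*M, y) - sp.diff(I*N, x)) == 0
print(Q, I, now_exact)
```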
Example 1.16: Solve 𝑥𝑦 𝑑𝑥 + (2𝑥 2 + 3𝑦 2 − 20)𝑑𝑦 = 0
Solution: We have 𝑀 = 𝑥𝑦 and 𝑁 = 2𝑥 2 + 3𝑦 2 − 20.
We find the partial derivatives 𝑀𝑦 = 𝑥 and 𝑁𝑥 = 4𝑥. Since 𝑀𝑦 ≠ 𝑁𝑥 ,the nonlinear first-
order differential equation is not exact. We have
(My − Nx)/N = (x − 4x)/(2x² + 3y² − 20) = −3x/(2x² + 3y² − 20)
which is not a function of x alone. However, (Nx − My)/M = 3x/(xy) = 3/y is a function of y only, so I(y) = e^(∫(3/y)dy) = y³ is an integrating factor.
Exercises
1. Determine whether the given differential equation is exact. If it is exact, solve it.
f) (5𝑦 − 2𝑥)𝑦 ′ − 2𝑦 = 0
g) (𝑡𝑎𝑛𝑥 − 𝑠𝑖𝑛𝑥𝑠𝑖𝑛𝑦)𝑑𝑥 + 𝑐𝑜𝑠𝑥𝑐𝑜𝑠𝑦𝑑𝑦 = 0
h) x dy/dx = 2xe^x − y + 6x²
a) y dx − x dy + 3x²y²e^(x³) dx = 0          d) cos x dx + (1 + 2/y) sin x dy = 0
b) y′ = −2/y − 3y/(2x)                        e) y(x + y + 1)dx + (x + 2y)dy = 0
Linear ODEs, and ODEs that can be transformed to linear form, are models of various phenomena, for instance in physics, biology, population dynamics, and ecology.
If the first term is f(x)y′ (instead of y′), divide the equation by f(x) to get the "standard form" (*) with y′ as the first term.
For instance,
𝑦’𝑐𝑜𝑠𝑥 + 𝑦𝑠𝑖𝑛 𝑥 = 𝑥
is a linear ODE and its standard form is
𝑦 ′ + 𝑦 tan 𝑥 = 𝑥 sec 𝑥
To find the general solution of (a) we use an "integrating factor": we multiply the equation by a function I(x) to obtain
I(x) dy/dx + I(x)P(x)y = I(x)r(x)   (b)
We choose I(x) so that the left-hand side of (b) is the derivative of the product I(x)y; this requires
I′(x) = I(x)P(x)
This is now a (very easy) separable equation for the function I(x), and the solution is
I(x) = e^(∫P(x)dx).
If 𝑟(𝑥) = 0, then the ODE (a) becomes
𝑦 ′ + 𝑃(𝑥)𝑦 = 0 (c)
and is called homogenous. By separating variables and integrating we then obtain
dy/y = −P(x) dx
Thus,
ln|y| = −∫P(x)dx + c*
Taking exponents on both sides, we obtain the general solution of the homogeneous ODE (c):
y(x) = ce^(−∫P(x)dx), where c = ±e^(c*)
Example 1.17: Solve y′ − 3y = 0.
Solution: This linear equation can be solved by separation of variables. Alternatively, since the differential equation is already in standard form, we identify P(x) = −3, and so the integrating factor is e^(∫(−3)dx) = e^(−3x). We then multiply the given equation by this factor and recognize that
e^(−3x) dy/dx − 3e^(−3x)y = e^(−3x) · 0
is the same as
d/dx [e^(−3x)y] = 0
Integrating the last equation,
∫ d/dx [e^(−3x)y] dx = ∫ 0 dx
then yields
e^(−3x)y = c, or y = ce^(3x), −∞ < x < ∞.
Example 1.18: Solve xy′ = x⁴ − 4y
Solution: We have y′ + (4/x)y = x³, so
P(x) = 4/x and r(x) = x³
Multiplying both sides by
e^(∫P(x)dx) = e^(4 ln x) = x⁴
to get
x⁴y′ + 4x³y = x⁷
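SymPy can finish the integration of Example 1.18 directly, since (x⁴y)′ = x⁷ integrates to x⁴y = x⁸/8 + c (SymPy usage is an illustrative assumption):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
y = sp.Function('y')

# Example 1.18: x y' = x^4 - 4y
ode = sp.Eq(x*y(x).diff(x), x**4 - 4*y(x))
sol = sp.dsolve(ode, y(x))
print(sol)  # y = x^4/8 + C/x^4, up to the name of the constant
```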
Hence, 𝑐 = 0.
Solving for y gives y = −cos(2x)/(2 cos x)
Example 1.20 Solve the IVP cos 𝑥 𝑦 ′ + 𝑦 = sin 𝑥 , 𝑦(0) = 2 ( Similar, do it)
Exercises
Unit Summary:
The order of the equation is the order of the highest derivative that appears.
The degree of a DE is the highest exponent of the highest-order derivative that appears in the DE.
The nth order linear differential equation is written as
a₀(x)y⁽ⁿ⁾ + a₁(x)y⁽ⁿ⁻¹⁾ + ⋯ + aₙ₋₁(x)y′ + aₙ(x)y = f(x)   … (*)
where a₀(x), a₁(x), …, aₙ(x) are called the coefficients of the equation. The known function f(x) is called the nonhomogeneous term; equation (*) is called homogeneous if f(x) = 0.
If the coefficients in (*) are constant, (*) becomes a₀y⁽ⁿ⁾ + a₁y⁽ⁿ⁻¹⁾ + ⋯ + aₙ₋₁y′ + aₙy = f(x), which is called a constant coefficient linear equation. We require a₀ ≠ 0; otherwise the equation would not be of nth order. An ordinary differential equation that is not linear is said to be non-linear.
A functional relation between two variables (dependent and independent) that satisfies the given DE is called a solution (or integral curve) of the ODE. A solution of an nth order equation that contains n arbitrary constants is called the general solution of the equation. If the arbitrary constants in the general solution are assigned specific values, the result is called a particular solution of the equation.
If the differential equation has the form 𝑦 ′ = 𝑓(𝑥) , then 𝑦 = ∫ 𝑓(𝑥)𝑑𝑥 + 𝑐 which is a
general solution.
An equation y′ = f(x, y) is called separable if it can be written in the form
y′ = F(x)G(y)   (*)
for some function F(x) depending only on x and G(y) depending only on y. Equation (*) itself is said to be of variable separable type.
A function f(x, y) is said to be algebraically homogeneous of degree n, or simply homogeneous of degree n, if f(kx, ky) = kⁿ f(x, y), for some real number n and all k > 0, for (x, y) ≠ (0, 0).
The first order differential equation M(x, y)dx + N(x, y)dy = 0 is said to be exact if a function F(x, y) exists such that the total differential d[F(x, y)] = M(x, y)dx + N(x, y)dy.
The differential equation M(x, y)dx + N(x, y)dy = 0 is exact if and only if ∂M/∂y = ∂N/∂x.
An integrating factor is a factor by which we multiply a given non-exact equation to make it exact. To find an integrating factor I(x, y), solve the partial differential equation
𝐼𝑦 𝑀 − 𝐼𝑥 𝑁 + 𝐼(𝑀𝑦 − 𝑁𝑥 ) = 0
This is just as tricky to solve as the original equation. Only in a few special cases are there
methods for computing the integrating factor 𝐼(𝑥, 𝑦).
Case 1: Suppose we want to see whether there exists an integrating factor that depends only on x (and not on y). Then ∂I/∂y would be zero, since I does not depend on y, and so I(x) needs to satisfy I′/I = (My − Nx)/N. This can only happen if the ratio (My − Nx)/N is a function P(x) of x only (and not of y); then I(x) = e^(∫P(x)dx).
Case 2: Similarly, an integrating factor depending only on y exists if (Nx − My)/M is a function Q(y) of y only; then
I(y) = e^(∫Q(y)dy)
A first order ODE is said to be linear if it can be written as y′ + P(x)y = r(x) (standard form) (*), where P and r are functions of x. If the first term is f(x)y′ (instead of y′), divide the equation by f(x) to get the "standard form" (*) with y′ as the first term.
Miscellaneous Exercises
I. Find the general solution; indicate which method in this chapter you are using. Show the
details of your work.
a. (y² + 1)dx = y sec²x dy                          i. dy/dx = (−4y² + 6xy)/(3y² + 2x)
b. y(ln x − ln y)dx = (x ln x − x ln y − y)dy       j. t dQ/dt + Q = t⁴ ln t
c. (6x + 1)y² dy/dx + 3x² + 2y³ = 0                 k. yy′ + xy² = y
d. xy′ = x³ + y                                     l. (x² + 4)dy = (2x − 8xy)dx
II. Solve the following initial value problem. Indicate the method used. Show the details of
your work.
a. yy ′ + x = 0 , y(3) = 4
b. sin x dy/dx + (cos x)y = 0,  y(7π/6) = −2
c. dy/dx + 2(x + 1)y² = 0,  y(0) = −1/8
f. y ′ + πy = 2bcosπx , y(0) = 0
g. y ′ − 3y = −12y 2 , y(0) = 0
Chapter Two
Ordinary Linear Differential Equation of The 2nd order
Introduction
In chapter one we saw that we could solve a few first-order differential equations by recognizing them as separable, linear, exact or homogeneous equations. We turn now to the solution of ordinary differential equations of order two or higher. These equations have important applications.
Definition 2.1. A function y = h(x) is called a solution of a (linear or nonlinear) second order ODE on some open interval I if h is defined and twice differentiable throughout the interval, and is such that the ODE becomes an identity if we replace the unknown y by h and its successive derivatives.
The initial condition is used to determine the arbitrary constant in the general solution of the
ODE.
This results in a unique solution, called a particular solution.
For a second order homogenous linear ODE
y″ + py′ + qy = 0   (1)
these conditions help to determine the constants c₁ and c₂ in the general solution.
Generally: we defined an initial-value problem for a general nth-order differential equation. For
a linear differential equation an
Example 2.6: Solve the following IVP
𝑦 ′′ − 9𝑦 = 0, 𝑦(0) = 2, 𝑦 ′ (0) = −1
Solution: The two functions y₁(t) = e^(3t) and y₂(t) = e^(−3t) are enough to form the general solution to the differential equation, which is
y(t) = c₁e^(−3t) + c₂e^(3t)
Now all we need to do is apply the initial conditions
𝑦 ′ (𝑡) = −3𝑐1 𝑒 −3𝑡 + 3𝑐2 𝑒 3𝑡
Plugging in the initial conditions
𝑦(0) = 2 = 𝑐1 + 𝑐2
𝑦 ′ (0) = −1 = −3𝑐1 + 3𝑐2
This gives us a system of two equations and two unknowns that can be solved. Doing this yields
c₁ = 7/6,  c₂ = 5/6
The solution to the IVP is then
y(t) = (7/6)e^(−3t) + (5/6)e^(3t)
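SymPy's dsolve can solve the same IVP with initial conditions attached (SymPy usage is an illustrative assumption, not part of the module):

```python
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')

# Example 2.6: y'' - 9y = 0, y(0) = 2, y'(0) = -1
ode = sp.Eq(y(t).diff(t, 2) - 9*y(t), 0)
ics = {y(0): 2, y(t).diff(t).subs(t, 0): -1}
sol = sp.dsolve(ode, y(t), ics=ics)
print(sol)  # matches (7/6)e^(-3t) + (5/6)e^(3t)
```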
Exercises
b. y″ + 9y = 0,  y(0) = 2, y′(0) = −1
Note: The number of linearly independent solutions is equal to the order of the differential equation, so a 2nd order differential equation has two linearly independent solutions.
Definition: The Wronskian of n functions y₁(x), y₂(x), y₃(x), …, yₙ(x), each (n − 1) times differentiable, is the determinant
W(y) = | y₁        y₂        ⋯  yₙ        |
       | y₁′       y₂′       ⋯  yₙ′       |
       | y₁″       y₂″       ⋯  yₙ″       |
       | ⋮         ⋮             ⋮         |
       | y₁⁽ⁿ⁻¹⁾   y₂⁽ⁿ⁻¹⁾   ⋯  yₙ⁽ⁿ⁻¹⁾   | ,  where | | denotes the determinant.
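For instance, the Wronskian of e^x and e^(−x) is the nonzero constant −2, so the pair is linearly independent; SymPy has a built-in wronskian helper (SymPy assumed for illustration):

```python
import sympy as sp

x = sp.symbols('x')

# Wronskian of e^x and e^{-x}: nonzero, so the pair is linearly independent
W = sp.simplify(sp.wronskian([sp.exp(x), sp.exp(-x)], x))
print(W)  # -2
```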
Example 2.8. Show that y₁ = e^x and y₂ = e^(−x) are solutions of the ODE y″ − y = 0, then solve the initial value problem.
Find a Basis If One Solution is Known, Reduction of Order
Finding solutions to non-constant coefficient, second order differential equations can be much more difficult than finding solutions to constant coefficient differential equations. However, if we already know one solution to the differential equation, we can use the method that we used for first order differential equations. This method is called reduction of order.
Thus
U′ + (2y₁′/y₁ + p)U = 0
This is the desired first order ODE, the reduced ordinary differential equation. Separation of
variables and integrations gives
dU/U = −(2y₁′/y₁ + p)dx, and
U = (1/y₁²) e^(−∫p dx)
Here, 𝑈 = 𝑢′ so that 𝑢 = ∫ 𝑈𝑑𝑥
Hence, the desired solution is
y₂ = y₁u = y₁ ∫U dx = y₁ ∫ (1/y₁²) e^(−∫p dx) dx
Example 2.9: Find a basis of solutions of the ODE
(𝑥 2 − 𝑥)𝑦 ′′ − 𝑥𝑦 ′ + 𝑦 = 0
Given that 𝑦1 (𝑥) = 𝑥 is a solution.
Solution: y₁ = x is a solution (by inspection). To find the second solution, substitute
y₂ = y = uy₁ = ux
y′ = u′x + u ,  y″ = u″x + u′ + u′ = u″x + 2u′
into the ODE. This gives
(𝑥 2 − 𝑥)(𝑢′′ 𝑥 + 2𝑢′ ) − 𝑥(𝑢′ 𝑥 + 𝑢) + 𝑢𝑥 = 0
ux and −ux cancel, and we are left with the following ODE, which we then divide by x, reorder and simplify.
(𝑥 2 − 𝑥)(𝑢′′ 𝑥 + 2𝑢′ ) − 𝑥 2 𝑢′ = 0
After simplification, we get
(𝑥 2 − 𝑥)𝑢′′ + (𝑥 − 2)𝑢′ = 0
This ODE is of first order in 𝑈 = 𝑢′ , namely (𝑥 2 − 𝑥)𝑈 ′ + (𝑥 − 2)𝑈 = 0
Separation of variables and integration gives
dU/U = −(x − 2)/(x² − x) dx = (1/(x − 1) − 2/x) dx
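Carrying the integration through: exponentiating gives U = (x − 1)/x² = 1/x − 1/x², so u = ∫U dx = ln x + 1/x and y₂ = ux = x ln x + 1. A quick symbolic check of this second basis solution (SymPy assumed for illustration):

```python
import sympy as sp

x = sp.symbols('x', positive=True)

# second solution obtained by reduction of order: y2 = x*ln(x) + 1;
# check that it solves (x^2 - x) y'' - x y' + y = 0
y2 = x*sp.log(x) + 1
residual = sp.simplify((x**2 - x)*y2.diff(x, 2) - x*y2.diff(x) + y2)
print(residual)  # 0
```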
Exercises
𝑦 ′′ + 𝑎𝑦 ′ + 𝑏𝑦 = 0 (1)
𝜆2 + 𝑎𝜆 + 𝑏 = 0 (2)
λ₁ = (1/2)(−a + √(a² − 4b))
λ₂ = (1/2)(−a − √(a² − 4b))
Depending on the sign of the discriminant 𝑎2 − 4𝑏 the quadratic equation (2) may have three
kinds of roots.
𝑦1 = 𝑒 𝜆1 𝑥 and 𝑦2 = 𝑒 𝜆2 𝑥
In this case, y₁ and y₂ are defined (and real) for all x, and their quotient is not constant, so they form a basis. The corresponding general solution is
y = c₁e^(λ₁x) + c₂e^(λ₂x)
𝜆2 + 11𝜆 + 24 = 0
(𝜆 + 8)(𝜆 + 3) = 0.
Its roots are λ₁ = −8 and λ₂ = −3, and the general solution and its derivative are
y(x) = c₁e^(−8x) + c₂e^(−3x)
y′(x) = −8c₁e^(−8x) − 3c₂e^(−3x)
Now, plug in the initial conditions to get the following system of equations:
y(0) = 0 = c₁ + c₂
c₁ = 7/5 and c₂ = −7/5
y(x) = (7/5)e^(−8x) − (7/5)e^(−3x)
𝜆2 + 3𝜆 − 10 = 0
(𝜆 + 5)(𝜆 − 2) = 0
y(x) = c₁e^(−5x) + c₂e^(2x)
Now, plug in the initial conditions to get the following system of equations.
𝑦(0) = 4 = 𝑐1 + 𝑐2
c₁ = 10/7 and c₂ = 18/7
y(x) = (10/7)e^(−5x) + (18/7)e^(2x)
Case II: Real double root λ = −a/2
From the reduction of order formula,
y₂ = y₁ ∫ (e^(−∫p(x)dx) / y₁²) dx, but here p(x) = a is a constant.
Then it becomes
y₂ = y₁ ∫ (e^(−∫a dx) / y₁²) dx = e^(−(a/2)x) ∫ (e^(−ax) / e^(−ax)) dx = xe^(−(a/2)x)
Hence, in the case of a double root of (2), a basis of solutions of (1) on any interval is e^(−(a/2)x), xe^(−(a/2)x), and the corresponding general solution is
y = e^(−(a/2)x)(c₁ + c₂x)
i.e. if the roots of the characteristic equation are 𝜆 = 𝜆1 = 𝜆2 , then the general solution is then
𝑦(𝑥) = 𝑐1 𝑒 𝜆𝑥 + 𝑐2 𝑥𝑒 𝜆𝑥
a. y″ − 4y′ + 4y = 0,  y(0) = 12, y′(0) = −3
Solution: The characteristic equation and its roots are
𝜆2 − 4𝜆 + 4 = (𝜆 − 2)2 = 0
i.e.
𝜆1 = 𝜆2 = 2
y(x) = c₁e^(2x) + c₂xe^(2x)
12 = y(0) = c₁
−3 = y′(0) = 2c₁ + c₂
so c₂ = −27 and y(x) = (12 − 27x)e^(2x).
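Solving the system gives c₂ = −27, i.e. y = (12 − 27x)e^(2x); a SymPy check (SymPy usage is an illustrative assumption):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# double-root IVP: y'' - 4y' + 4y = 0, y(0) = 12, y'(0) = -3
ode = sp.Eq(y(x).diff(x, 2) - 4*y(x).diff(x) + 4*y(x), 0)
ics = {y(0): 12, y(x).diff(x).subs(x, 0): -3}
sol = sp.dsolve(ode, y(x), ics=ics)
print(sol)  # y = (12 - 27x) e^(2x)
```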
b. 16y″ − 40y′ + 25y = 0,  y(0) = 3, y′(0) = −9/4
16λ² − 40λ + 25 = (4λ − 5)² = 0 ,  λ₁,₂ = 5/4
y(x) = c₁e^(5x/4) + c₂xe^(5x/4)
y′(x) = (5/4)c₁e^(5x/4) + c₂e^(5x/4) + (5/4)c₂xe^(5x/4)
3 = y(0) = c₁
−9/4 = y′(0) = (5/4)c₁ + c₂
y(x) = 3e^(5x/4) − 6xe^(5x/4)
Case III: Complex roots λ₁ = −(1/2)a + iω and λ₂ = −(1/2)a − iω.
This case occurs if the discriminant a² − 4b of the characteristic equation is negative; then the characteristic equation has no real roots, but it has complex roots.
That is,
λ₁, λ₂ = (1/2)(−a ± √(a² − 4b))
        = (1/2)(−a ± i√(4b − a²))
Put α = −a/2 and β = (1/2)√(4b − a²); then
λ₁, λ₂ = α ± iβ with α, β ∈ ℝ
Thus ,
𝑦1 = 𝑒 𝜆1 𝑥 = 𝑒 (𝛼+𝑖𝛽)𝑥
and
𝑦2 = 𝑒 𝜆2 𝑥 = 𝑒 (𝛼−𝑖𝛽)𝑥
are a basis of complex solutions of the ODE. A corresponding real basis, from which the general solution is built, is
y₁ = e^(αx) cos βx ,  y₂ = e^(αx) sin βx
y″ − 8y′ + 17y = 0 ,  y(0) = −4, y′(0) = −1
Solution: The characteristic equation is :
𝜆2 − 8𝜆 + 17 = 0
The roots are λ₁,₂ = 4 ± i.
The general solution and its derivative are built from
y(x) = e^(4x)(c₁ cos x + c₂ sin x)
−4 = y(0) = c₁
−1 = y′(0) = 4c₁ + c₂
so c₂ = 15 and y(x) = e^(4x)(−4 cos x + 15 sin x).
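A SymPy check of this complex-roots IVP (SymPy usage is an illustrative assumption):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# y'' - 8y' + 17y = 0, y(0) = -4, y'(0) = -1; roots 4 ± i
ode = sp.Eq(y(x).diff(x, 2) - 8*y(x).diff(x) + 17*y(x), 0)
ics = {y(0): -4, y(x).diff(x).subs(x, 0): -1}
sol = sp.dsolve(ode, y(x), ics=ics)
print(sol)  # y = e^(4x)(-4 cos x + 15 sin x)
```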
Case  Roots of (2)                     Basis of (1)                   General solution of (1)
I     Distinct real λ₁, λ₂             e^(λ₁x), e^(λ₂x)               y = c₁e^(λ₁x) + c₂e^(λ₂x)
II    Real double root λ = −a/2        e^(−ax/2), xe^(−ax/2)          y = e^(−ax/2)(c₁ + c₂x)
III   Complex conjugate λ₁ = α + iβ,   e^(αx)cos βx, e^(αx)sin βx     y = e^(αx)(A cos βx + B sin βx)
      λ₂ = α − iβ
Exercises
Definition 2.3 (General solution, Particular solution)
A general solution of the non-homogeneous ODE (1) on an open interval I is a solution of the form
y(x) = yₕ(x) + yₚ(x)   (3)
A particular solution of (1) on I is a solution obtained from (3) by assigning specific values to the arbitrary constants c₁ and c₂ in yₕ.
The method of undetermined coefficients is suitable for linear ODEs with constant coefficients a
and b
𝑦 ′′ + 𝑎𝑦 ′ + 𝑏𝑦 = 𝑟(𝑥) (4)
when 𝑟(𝑥) is an exponential function, a power of x, a cosine or sine or sum or product of such
function.
a. Basic rule: if r(x) in (4) is one of the functions in the first column of Table 2.1, choose yₚ in the same line and determine its undetermined coefficients by substituting yₚ and its derivatives into (4).
b. Modification rule: if a term in your choice for yₚ happens to be a solution of the homogeneous ODE corresponding to (4), multiply this term by x (or by x² if this solution corresponds to a double root of the characteristic equation of the homogeneous ODE).
c. Sum rule: if r(x) is a sum of functions in the first column, choose for yₚ the sum of the functions in the corresponding lines of the second column.
The characteristic equation for this differential equation and its roots are :
𝜆2 + 1 = 0
⟹ λ = ±i
𝑦ℎ = 𝐴𝑐𝑜𝑠𝑥 + 𝐵𝑠𝑖𝑛𝑥
𝑦𝑝 = 𝐾2 𝑥 2 + 𝐾1 𝑥 + 𝐾0
Then,
yₚ″ + yₚ = 2K₂ + K₂x² + K₁x + K₀ = 0.001x²
so K₂ = 0.001, K₁ = 0, 2K₂ + K₀ = 0.
Hence,
K₀ = −2K₂ = −0.002
This gives
y(0) = A + K₀ = A − 0.002 = 0, or A = 0.002
𝑦 ′ (0) = 𝐵 = 1.5
𝑦 ′′ − 4𝑦 ′ − 12𝑦 = 3𝑒 5𝑥
𝑦𝑝 (𝑥) = 𝐴𝑒 5𝑥
Plugging into the differential equation gives,
25𝐴𝑒 5𝑥 − 20𝐴𝑒 5𝑥 − 12(𝐴𝑒 5𝑥 ) = 3𝑒 5𝑥
−7𝐴𝑒 5𝑥 = 3𝑒 5𝑥
A = −3/7
A particular solution to the differential equation is
yₚ(x) = (−3/7)e^(5x)
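A one-line symbolic check that this particular solution satisfies y″ − 4y′ − 12y = 3e^(5x) (SymPy assumed for illustration):

```python
import sympy as sp

x = sp.symbols('x')

# particular solution found above by undetermined coefficients
yp = -sp.Rational(3, 7)*sp.exp(5*x)
residual = sp.simplify(yp.diff(x, 2) - 4*yp.diff(x) - 12*yp - 3*sp.exp(5*x))
print(residual)  # 0
```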
Exercises
In the previous discussion, we have seen that a general solution of (1) is the sum of a general
solution 𝑦ℎ of the corresponding homogeneous ODE and any particular solution 𝑦𝑝 of (1). To
obtain 𝑦𝑝 when 𝑟(𝑥) is not too complicated we can often use the method of undetermined
coefficients. However, since this method is restricted to functions r(x) whose derivatives are of a form similar to r(x) itself (powers, exponential functions, etc.), it is desirable to have a method valid for more general ODEs (1), which we shall now develop. It is called the method of variation of parameters and is credited to Lagrange. Here p, q, r in (1) may be variable (given functions of x), but we assume that they are continuous on some open interval I.
yₚ = −y₁ ∫ (y₂r/W) dx + y₂ ∫ (y₁r/W) dx   … (2)
𝑦 ′′ + 𝑝(𝑥)𝑦 ′ + 𝑞(𝑥)𝑦 = 0
𝑊 = 𝑦1 𝑦 ′ 2 − 𝑦2 𝑦 ′1
CAUTION! The solution formula (2) is obtained under the assumption that the ODE is written
in standard form, with 𝑦 ′′ as the first term as shown in (1). If it starts with 𝑓(𝑥)𝑦 ′′ , divide first
by 𝑓(𝑥).
y″ + y = sec x = 1/cos x
𝑦1 = 𝑐𝑜𝑠𝑥 , 𝑦2 = 𝑠𝑖𝑛𝑥
From (2) choosing zero constants of integration, we get the particular solution of the given
ODE 𝑦𝑝 = −𝑐𝑜𝑠𝑥 ∫ 𝑠𝑖𝑛𝑥𝑠𝑒𝑐𝑥 𝑑𝑥 + 𝑠𝑖𝑛𝑥 ∫ 𝑐𝑜𝑠𝑥𝑠𝑒𝑐𝑥 𝑑𝑥
= 𝑐𝑜𝑠𝑥𝑙𝑛|𝑐𝑜𝑠𝑥| + 𝑥𝑠𝑖𝑛𝑥
𝑦ℎ = 𝑐1 𝑦1 + 𝑐2 𝑦2
𝑦 = 𝑦ℎ + 𝑦𝑝
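A symbolic check that the particular solution obtained by variation of parameters, yₚ = cos x ln|cos x| + x sin x, satisfies y″ + y = sec x (SymPy assumed for illustration):

```python
import sympy as sp

x = sp.symbols('x')

# particular solution from variation of parameters for y'' + y = sec x
yp = sp.cos(x)*sp.log(sp.cos(x)) + x*sp.sin(x)
residual = sp.simplify(yp.diff(x, 2) + yp - 1/sp.cos(x))
print(residual)  # 0
```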
Exercises
a. 𝑦 ′′ + 2𝑦 ′ + 𝑦 = 𝑥𝑒 −𝑥 e. 𝑦 ′′ − 2𝑦 ′ + 𝑦 = 𝑒 𝑥 𝑠𝑖𝑛𝑥
b. 𝑦 ′′ + 𝑦 = 𝑠𝑒𝑐 𝑥 f. 𝑦 ′′ + 𝑦 = 𝑡𝑎𝑛 𝑥
c. 𝑥 2 𝑦 ′′ − 2𝑥 2 𝑦 ′ + 𝑥 2 𝑦 = 𝑒 𝑥 g. 𝑦 ′′ + 𝑦 = 𝑐𝑜𝑠𝑥 + 𝑠𝑒𝑐𝑥
d. 𝑥 2 𝑦 ′′ + 𝑥𝑦 ′ − 𝑦 = 𝑥 2 𝑙𝑛 𝑥 f. 𝑥𝑦 ′′ − 2𝑥𝑦 ′ + 2𝑦 = 𝑥 3 𝑐𝑜𝑠𝑥
A general system of n first order linear variable coefficient DEs involving the n dependent variables x₁(t), x₂(t), …, xₙ(t) that are functions of the independent variable t (in applications t is often the time), the variable coefficients aᵢⱼ(t), and the nonhomogeneous terms f₁(t), f₂(t), …, fₙ(t) has the form
𝑥′1 (𝑡) = 𝑎11 (𝑡)𝑥1 (𝑡) + 𝑎12 (𝑡)𝑥2 (𝑡) + 𝑎13 (𝑡)𝑥3 (𝑡) + ⋯ + 𝑎1𝑛 (𝑡)𝑥𝑛 (𝑡) + 𝑓1 (𝑡)
(1) 𝑥′2 (𝑡) = 𝑎21 (𝑡)𝑥1 (𝑡) + 𝑎22 (𝑡)𝑥2 (𝑡) + 𝑎23 (𝑡)𝑥3 (𝑡) + ⋯ + 𝑎2𝑛 (𝑡)𝑥𝑛 (𝑡) + 𝑓2 (𝑡)
𝑥′3 (𝑡) = 𝑎31 (𝑡)𝑥1 (𝑡) + 𝑎32 (𝑡)𝑥2 (𝑡) + 𝑎33 (𝑡)𝑥3 (𝑡) + ⋯ + 𝑎3𝑛 (𝑡)𝑥𝑛 (𝑡) + 𝑓3 (𝑡)
⋮
𝑥′𝑛 (𝑡) = 𝑎𝑛1 (𝑡)𝑥1 (𝑡) + 𝑎𝑛2 (𝑡)𝑥2 (𝑡) + 𝑎𝑛3 (𝑡)𝑥3 (𝑡) + ⋯ + 𝑎𝑛𝑛 (𝑡)𝑥𝑛 (𝑡) + 𝑓𝑛 (𝑡)
System (1) is said to be:
Homogenous: when all the functions 𝑓𝑖 (𝑡) are zero.
Non Homogenous: when at least one of them is non zero.
Variable Coefficient system: whenever at least one of the Coefficients 𝑎𝑖𝑗 (𝑡) is a function
of t: otherwise, it is a constant coefficient system.
The system (1) is linear system b/c it is linear in the function 𝑥1 (𝑡), 𝑥2 (𝑡), … 𝑥𝑛 (𝑡) and
their derivatives.
An initial value problem for system (1) involves seeking a solution of (1) such that at 𝑡 =
𝑡0 the variables 𝑥1 (𝑡), 𝑥2 (𝑡), … 𝑥𝑛 (𝑡) satisfies the initial conditions.
𝑥1 (𝑡0 ) = 𝑘1 , 𝑥2 (𝑡0 ) = 𝑘2 , … , 𝑥𝑛 (𝑡0 ) = 𝑘𝑛 (2)
where ∀𝑘𝑖 , 𝑖 = 1,2, . . , 𝑛 are given constant.
The matrix notation of (1) is
𝑥 ′ (𝑡) = 𝐴(𝑡)𝑥(𝑡) + 𝑏(𝑡) (3)
or, more simply, as x′ = Ax + b, where the n × 1 vector x(t) is called the solution vector, the n × n matrix A(t) is called the coefficient matrix, and the n × 1 vector b(t) is called the nonhomogeneous term of the system. The system (3) becomes an initial value problem for the solution x(t) when at t = t₀ the vector x(t) is required to satisfy the initial condition
x(t₀) = [k₁, k₂, …, kₙ]ᵀ   (5)
𝑥′1 = 2𝑥1 − 𝑥2 + 4 − 𝑡 2
𝑥′2 = −𝑥1 + 2𝑥2 + 1 with 𝑥1 (0) = 1, 𝑥2 (0)=0
Solution: In matrix form this is x′(t) = A(t)x(t) + b(t), where
A = [2 −1; −1 2],  x(t) = [x₁; x₂],  b(t) = [4 − t²; 1],  with initial vector x(0) = [1; 0].
We now restrict our discussion to homogeneous first-order systems with constant coefficients:
those of the form:
x′₁(t) = a₁₁x₁(t) + a₁₂x₂(t) + a₁₃x₃(t) + ⋯ + a₁ₙxₙ(t)
x′₂(t) = a₂₁x₁(t) + a₂₂x₂(t) + a₂₃x₃(t) + ⋯ + a₂ₙxₙ(t)
x′₃(t) = a₃₁x₁(t) + a₃₂x₂(t) + a₃₃x₃(t) + ⋯ + a₃ₙxₙ(t)
⋮
x′ₙ(t) = aₙ₁x₁(t) + aₙ₂x₂(t) + aₙ₃x₃(t) + ⋯ + aₙₙxₙ(t)
Let 𝐴 = [𝑎𝑗𝑘 ] be an 𝑛 × 𝑛 matrix and consider the equation 𝐴 𝑥= 𝜆𝑥 where 𝜆 is a scalar (real or
complex). A scalar 𝜆 which satisfies
𝐴𝑥 = 𝜆x … (*)
for some x ≠ 0 is called an eigenvalue of A and this non zero vector is called an
eigenvector of A corresponding to this 𝜆 .(*) can be written as
Ax − λx = 0   … (**)
(A − λI)x = 0   … (***)
For these equations to have a solution x≠ 0 the determinant of the coefficient matrix
𝐴 − 𝜆𝐼 must be zero. Then (***) be comes
[a₁₁ − λ    a₁₂    ] [x₁]   [0]
[a₂₁        a₂₂ − λ] [x₂] = [0]
(𝑎11 − 𝜆)𝑥1 + 𝑎12 𝑥2 = 0
𝑎21 𝑥1 + (𝑎22 − 𝜆)𝑥2 = 0 … (𝑖)
Solving
|a₁₁ − λ    a₁₂    |
|a₂₁        a₂₂ − λ| = 0
for λ and substituting in (i), we get X₁ and X₂ for λ₁, and X₃, X₄ for λ₂. Thus, the eigenvector corresponding to λ₁ will be
x⃗₁ = (X₁; X₂)
and the eigenvector corresponding to λ₂ will be
x⃗₂ = (X₃; X₄)
Theorem: Given x⃗′ = Ax⃗, if λ₁ and λ₂ are the eigenvalues of the matrix A and y⃗₁ and y⃗₂ are corresponding eigenvectors, then the general solution of the system is
y⃗ = C₁e^(λ₁t)y⃗₁ + C₂e^(λ₂t)y⃗₂
Example 2.16: Solve x⃗′ = [1 12; 3 1] x⃗
Solution: We have A = [1 12; 3 1]
and A − λI = [1 − λ  12; 3  1 − λ].
det(A − λI) = (1 − λ)² − 36 = 0 ⟹ 1 − λ = ±6
This implies λ₁ = 7, λ₂ = −5.
Now
[1 − λ  12; 3  1 − λ][x₁; x₂] = 0
For λ₁ = 7, we have
[−6  12; 3  −6][x₁; x₂] = 0
x′₂ = x₁ + 5x₂
Answer: The general solution is
[x₁; x₂] = C₁[−3; 1]e^(2t) + C₂[−1; 1]e^(4t)
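The eigen-data of Example 2.16's matrix can be confirmed symbolically; a SymPy sketch (SymPy itself is an illustrative assumption, not part of the module):

```python
import sympy as sp

A = sp.Matrix([[1, 12], [3, 1]])

# eigenvalues 7 and -5, each with multiplicity 1, as in Example 2.16
evals = A.eigenvals()

# eigenvects() returns (eigenvalue, multiplicity, [basis vectors])
vects = {ev: basis[0] for ev, mult, basis in A.eigenvects()}
print(evals, vects[7].T)
```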
Exercises
c. 𝑥′1 = 𝑥1 + 𝑥2
𝑥′2 = 4𝑥1 + 𝑥2
But some of the generalized eigenvectors obtained from these chains may yield linearly
dependent solution functions.
If we consider the 1 × 1 system 𝑥′ = 𝑘𝑥 with the initial condition x(0) = 𝐶 we know that the
general solution is
𝑥(𝑡) = 𝑒 𝑘𝑡 𝐶.
We would like to find some way to extend this result to 𝑛 × 𝑛 systems.
This leads us to define the exponential of a matrix 𝑒 𝐴 .
Note: The definition is motivated by the Taylor series for the exponential of a real or complex number z; namely, e^z = Σₙ₌₀^∞ zⁿ/n!.
Theorem: The infinite sum e^A = Σₙ₌₀^∞ Aⁿ/n! converges for every matrix A.
Theorem: If A is an n × n matrix, then the unique solution to the initial value problem y⃗′ = A·y⃗ with y⃗(0) = y⃗₀ is given by y⃗(x) = e^(Ax)·y⃗₀.
Proposition: For any invertible matrix P, e^(P⁻¹AP) = P⁻¹[e^A]P.
Proof:
e^(P⁻¹AP) = Σₙ₌₀^∞ (P⁻¹AP)ⁿ/n! = P⁻¹(Σₙ₌₀^∞ Aⁿ/n!)P = P⁻¹[e^A]P,
where the middle step uses the fact that (P⁻¹AP)ⁿ = P⁻¹(Aⁿ)P.
Example 2.18: Find e^{Ax}, if A = [0  −2; 3  5].
Solution: First we attempt to diagonalize the matrix A. We calculate
det(tI − A) = t(t − 5) + 6 = (t − 2)(t − 3),
so the eigenvalues are t = 2 and t = 3, with corresponding eigenvectors (−1, 1)ᵀ and (−2, 3)ᵀ. Hence

P = [−1  −2],   P⁻¹ = [−3  −2],   D = [2  0]
    [ 1   3]          [ 1   1]        [0  3].

Finally we have

e^{Ax} = P e^{Dx} P⁻¹ = [−1  −2] [e^{2x}    0  ] [−3  −2]   [ 3e^{2x} − 2e^{3x}    2e^{2x} − 2e^{3x}]
                        [ 1   3] [  0    e^{3x}] [ 1   1] = [−3e^{2x} + 3e^{3x}   −2e^{2x} + 3e^{3x}].
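The closed form for e^{Ax} can be checked directly against the defining series e^{A} = Σ Aⁿ/n!. A minimal numerical sketch (at x = 1, assuming NumPy is available):

```python
import numpy as np

def expm_series(A, terms=60):
    """Matrix exponential from the defining series  e^A = sum_{n>=0} A^n / n!."""
    E = np.zeros_like(A)
    term = np.eye(A.shape[0])       # A^0 / 0!
    for n in range(terms):
        E = E + term
        term = term @ A / (n + 1)   # next term: A^{n+1} / (n+1)!
    return E

A = np.array([[0.0, -2.0],
              [3.0,  5.0]])
e2, e3 = np.exp(2.0), np.exp(3.0)
closed_form = np.array([[ 3*e2 - 2*e3,  2*e2 - 2*e3],
                        [-3*e2 + 3*e3, -2*e2 + 3*e3]])
print(np.allclose(expm_series(A), closed_form))  # True
```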
Unit Summary:
Linear second order differential equations with constant coefficients are the simplest of the
higher order differential equation and they have many applications.
The most general linear second order differential equation is of the form
y″ + p(x)y′ + q(x)y = r(x).
A function y = h(x) is called a solution of a (linear or nonlinear) second order ODE on
some open interval I if h is defined and twice differentiable throughout the interval and
the ODE becomes an identity when the unknown y is replaced by h and its successive
derivatives.
Two functions y1(x) and y2(x) are said to be linearly independent (LI) over an interval I if
the equation
c1 y1(x) + c2 y2(x) = 0 for all x in I
holds only when c1 = c2 = 0.
The Wronskian of n functions y1(x), y2(x), y3(x), …, yn(x), each (n − 1) times differentiable, is the determinant

        | y1        y2        ⋯   yn        |
        | y1′       y2′       ⋯   yn′       |
W(y) =  | y1″       y2″       ⋯   yn″       |
        | ⋮         ⋮              ⋮         |
        | y1⁽ⁿ⁻¹⁾   y2⁽ⁿ⁻¹⁾   ⋯   yn⁽ⁿ⁻¹⁾   |
A general solution of an ODE
y″ + p(x)y′ + q(x)y = 0        (*)
on an open interval I is a solution of the form
y = c1 y1 + c2 y2
in which y1 and y2 are solutions of (*) on I that are not proportional (LI) and c1 and c2 are
arbitrary constants. These y1 and y2 are called a basis (or a fundamental system of solutions)
of (*) on I.
A basis of solutions of (*) on an open interval I is a pair of LI solutions of (*) on I.
The equation
𝜆2 + 𝑎𝜆 + 𝑏 = 0 (2)
is called the characteristic equation of the given differential equation.
Depending on the sign of the discriminant 𝑎2 − 4𝑏 the quadratic equation (2) may have
three kinds of roots.
A general solution of the nonhomogeneous ODE y″ + p(x)y′ + q(x)y = r(x) on an open interval I is a solution of the
form y(x) = yh(x) + yp(x), where yh is a general solution of the corresponding homogeneous ODE and yp is any particular solution.
A general system of n first order linear variable coefficient DEs involving the n dependent
variables x1(t), x2(t), …, xn(t) that are functions of the independent variable t (in applications
t is often the time), the variable coefficients aij(t) and the nonhomogeneous terms
f1(t), f2(t), …, fn(t) has the form
x′1(t) = a11(t)x1(t) + a12(t)x2(t) + ⋯ + a1n(t)xn(t) + f1(t)
x′2(t) = a21(t)x1(t) + a22(t)x2(t) + ⋯ + a2n(t)xn(t) + f2(t)
⋮
x′n(t) = an1(t)x1(t) + an2(t)x2(t) + ⋯ + ann(t)xn(t) + fn(t)
Miscellaneous Exercises
Chapter-Three
Fourier series And Integrals
Introduction
The central starting point of Fourier analysis is Fourier series. They arise naturally in the
analysis of many physical phenomena such as electrical oscillations, vibrating mechanical systems,
longitudinal oscillations in crystals, etc. They are infinite series designed to represent general
periodic functions in terms of simple ones, namely, cosines and sines.
Fourier series are very important to the engineer and physicist because they allow the solution of
ODEs in connection with forced oscillations and the approximation of periodic functions.
Fourier integrals, on the other hand, extend the concept of Fourier series to non-periodic functions
defined for all x, since in many practical problems we come across functions defined on
−∞ < x < ∞. Moreover, periodic functions can be represented in complex Fourier series and
non-periodic functions can be represented in complex Fourier integral form.
In this chapter we will explore representations of periodic functions in Fourier series and in
complex Fourier series, and representations of non-periodic functions in Fourier integral and in
complex Fourier integral form. Before introducing Fourier series we will see some preliminary
concepts: periodic functions, even and odd functions, orthogonal functions and trigonometric series.
Unit Objectives:
understand the definition of periodic, even, odd and orthogonal functions;
understand and find the Fourier series representation of periodic functions;
find the Fourier integral representation of non-periodic functions;
understand and find the complex Fourier series representation of periodic functions;
identify and understand the idea of complex Fourier integral representation.
Overview:
In this section, we are going to consider the definition of periodic, even, odd, orthogonal
functions and trigonometric series with examples.
Section Objectives:
Definition (Periodic function): A function f(x) is called periodic if it is defined for all real x and if there is some positive number p, called a period of f, such that f(x + p) = f(x) for all x.
The smallest positive period is often called the fundamental period. The graph of a periodic
function has the characteristic that it can be obtained by periodic repetition of its graph in any
interval of length p, as shown in figure 1 below.
Familiar periodic functions are the cosine, sine, tangent, and cotangent, with fundamental periods
p = 2π, 2π, π and π respectively. Examples of functions that are not periodic are x, x², x³, eˣ,
cosh x and ln x.
If f(x) has period p, it also has the period 2p, because by the above definition of a periodic
function
f(x + 2p) = f((x + p) + p) = f(x + p) = f(x).
Thus, for any integer n = 1, 2, 3, …,
f(x + np) = f(x) for all x.
Example 3.1:
State the period of each of the following functions.
(a) f(x) = cos 4x: since
cos 4x = cos(4x + 2π) = cos(4(x + π/2)) = f(x + π/2),
we have f(x + π/2) = f(x), so P = π/2 is the period of f(x).
(b) Similarly, f(x + 2π/3) = f(x), so P = 2π/3 is the period of f(x).
Definition (Even and odd functions): A function f(x) is called an Even function if
f(−x) = f(x) for all x in the domain of f; it is called an Odd function if
f(−x) = −f(x) for all x in the domain of f.
If f and g are both odd functions, or both even functions, then the product fg is even; if
one is even and the other is odd, then the product is odd. Moreover, even functions are symmetric
about the y-axis, whereas odd functions are symmetric about the origin.
Note: 1. For any even function f(x), ∫_{−l}^{l} f(x) dx = 2 ∫_{0}^{l} f(x) dx.
2. For any odd function f(x), ∫_{−l}^{l} f(x) dx = 0.
Example 3.2: Show that f(x) = sinh x is an odd function.
Solution: To show that a function f is odd, we evaluate it at −x and show that this is the
same as the value of −f at x; that is,
f(−x) = sinh(−x) = (e^{−x} − e^{x})/2 = −((e^{x} − e^{−x})/2) = −sinh(x) = −f(x)
⟹ f(−x) = −f(x).
Hence f is an odd function.
Definition (Orthogonal functions): Two functions f(x) and g(x) are said to be Orthogonal on
the interval [−l, l] if
∫_{−l}^{l} f(x) g(x) dx = 0.
Example 3.3: Show that f(x) = sin x and g(x) = cos 2x are orthogonal on [−π, π].
Solution: Using the identity sin A cos B = ½[sin(A + B) + sin(A − B)],
∫_{−π}^{π} sin x cos 2x dx = ∫_{−π}^{π} ½(sin 3x + sin(−x)) dx
= ½ ∫_{−π}^{π} (sin 3x − sin x) dx
= ½ [−(cos 3x)/3 + cos x]_{−π}^{π}
= ½ [(−(cos 3π)/3 + cos π) − (−(cos(−3π))/3 + cos(−π))]
= ½ [(1/3 − 1) − (1/3 − 1)] = 0.
(Alternatively, sin x cos 2x is the product of an odd and an even function, hence odd, so its
integral over [−π, π] vanishes.)
Hence f and g are orthogonal functions.
Theorem (Orthogonality relations): For positive integers n and m,
a) ∫_{−π}^{π} cos nx cos mx dx = π if n = m, and 0 if n ≠ m;
b) ∫_{−π}^{π} sin nx sin mx dx = π if n = m, and 0 if n ≠ m;
c) ∫_{−π}^{π} sin nx cos mx dx = 0 for all n, m;
d) ∫_{−π}^{π} cos nx dx = 0;
e) ∫_{−π}^{π} sin nx dx = 0.
Proof
a) We use the trigonometric identities to prove this orthogonality. For n ≠ m,
∫_{−π}^{π} cos nx cos mx dx = ∫_{−π}^{π} ½[cos(n − m)x + cos(n + m)x] dx
= ½ ∫_{−π}^{π} cos(n − m)x dx + ½ ∫_{−π}^{π} cos(n + m)x dx
= (1/(2(n − m))) [sin(n − m)x]_{−π}^{π} + (1/(2(n + m))) [sin(n + m)x]_{−π}^{π}
= (1/(2(n − m))) [sin(n − m)π − sin(n − m)(−π)] + (1/(2(n + m))) [sin(n + m)π − sin(n + m)(−π)]
= (1/(2(n − m)))(0 − 0) + (1/(2(n + m)))(0 − 0) = 0.
For n = m,
∫_{−π}^{π} cos nx cos mx dx = ∫_{−π}^{π} cos² nx dx = ∫_{−π}^{π} ½(1 + cos 2nx) dx
= ½ [x + (sin 2nx)/(2n)]_{−π}^{π} = ½ [(π + 0) − (−π − 0)] = ½(2π) = π.
Similarly, b) and c) can be proved in the same fashion.
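These orthogonality relations are easy to confirm numerically. The following sketch (assuming NumPy is available) approximates the integrals with the trapezoidal rule on a fine grid:

```python
import numpy as np

x = np.linspace(-np.pi, np.pi, 200_001)

def integral(vals):
    # composite trapezoidal rule on the fixed grid x
    return float(np.sum((vals[1:] + vals[:-1]) * np.diff(x)) / 2)

# cos nx vs cos mx: pi when n = m, 0 when n != m; sin nx vs cos mx: always 0
assert abs(integral(np.cos(2*x) * np.cos(2*x)) - np.pi) < 1e-8
assert abs(integral(np.cos(2*x) * np.cos(5*x))) < 1e-8
assert abs(integral(np.sin(3*x) * np.cos(4*x))) < 1e-8
print("orthogonality relations confirmed")
```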
Definition (Trigonometric series): Any series that can be expressed in the form
a0 + Σ_{n=1}^{∞} (an cos nx + bn sin nx),
where a0, a1, b1, a2, b2, … are constants called the coefficients of the series and each term has
the period 2π, is called a trigonometric series.
Note: If the coefficients of a trigonometric series are such that the series converges, its sum will
be a function of period 2π.
Exercises 3.1
2. If 𝑓(𝑥) 𝑎𝑛𝑑 𝑔(𝑥) have period 𝑝, show that ℎ(𝑥) = 𝑎𝑓(𝑥) + 𝑏𝑔(𝑥) (𝑎, 𝑏, constants) has period 𝑝.
3. Show that 𝑓 = constant is periodic with any period but has no fundamental period.
(a)𝑓(𝑥) = 𝑐𝑜𝑠ℎ(𝑥) (𝑏) 𝑔(𝑥) = 𝑡𝑎𝑛(𝑥) (𝑐) 𝑘(𝑥) = 𝑥 2 𝑐𝑜𝑠(𝑥) (𝑑) ℎ(𝑥) = 𝑥|𝑥|
5) Show that if 𝑓 is an even function and 𝑔 is odd, then the product 𝑓. 𝑔 is an odd function
Overview:
In this section, we are going to consider the definition of Fourier series together with some
examples
Section Objectives:
Definition (Fourier series): If f is a periodic function of period 2π and integrable on the interval
(−π, π), then the Fourier series of f is defined as
f(x) = a0 + Σ_{n=1}^{∞} (an cos nx + bn sin nx)        (1)
where a0, an, and bn are called the Fourier coefficients of f and are given by the Euler formulas
a) a0 = (1/2π) ∫_{−π}^{π} f(x) dx
b) an = (1/π) ∫_{−π}^{π} f(x) cos nx dx,   n = 1, 2, …
c) bn = (1/π) ∫_{−π}^{π} f(x) sin nx dx,   n = 1, 2, …
Proof (for a0): Integrating (1) term by term from −π to π,
∫_{−π}^{π} f(x) dx = a0[π − (−π)] + Σ_{n=1}^{∞} [ (an/n)[sin nx]_{−π}^{π} − (bn/n)[cos nx]_{−π}^{π} ]
= 2πa0 + Σ_{n=1}^{∞} [ (an/n)(sin nπ − sin(−nπ)) − (bn/n)(cos nπ − cos(−nπ)) ]
= 2πa0 + Σ_{n=1}^{∞} [ (an/n)(0) − (bn/n)(0) ]
= 2πa0.
⇒ ∫_{−π}^{π} f(x) dx = 2πa0
Hence a0 = (1/2π) ∫_{−π}^{π} f(x) dx.
Proof (for an): Multiplying both sides of (1) by cos mx for any fixed positive integer m and
integrating from −π to π, we get
∫_{−π}^{π} f(x) cos mx dx = ∫_{−π}^{π} [a0 + Σ_{n=1}^{∞}(an cos nx + bn sin nx)] cos mx dx
= a0 ∫_{−π}^{π} cos mx dx + Σ_{n=1}^{∞} ( an ∫_{−π}^{π} cos nx cos mx dx + bn ∫_{−π}^{π} sin nx cos mx dx )
= a0(0) + am·π + 0        (by the orthogonality theorem above; only the term n = m survives)
= am π.
⇒ ∫_{−π}^{π} f(x) cos mx dx = am π
Hence, replacing m by n,
an = (1/π) ∫_{−π}^{π} f(x) cos nx dx, as required.
Proof (for bn): Multiplying both sides of (1) by sin mx for any fixed positive integer m and
integrating from −π to π, we get
∫_{−π}^{π} f(x) sin mx dx = ∫_{−π}^{π} [a0 + Σ_{n=1}^{∞}(an cos nx + bn sin nx)] sin mx dx
= a0 ∫_{−π}^{π} sin mx dx + Σ_{n=1}^{∞} ( an ∫_{−π}^{π} cos nx sin mx dx + bn ∫_{−π}^{π} sin nx sin mx dx )
= a0(0) + bm·π        (by the orthogonality theorem above; only the term n = m survives)
Hence bn = (1/π) ∫_{−π}^{π} f(x) sin nx dx.
Example 3.4: find the Fourier series representation of the periodic function
f(x) = { −k if −π < x < 0
       {  k if 0 < x < π,        and f(x + 2π) = f(x)
Solution: to find the Fourier series representation of 𝑓 we first find the Fourier coefficients
𝑎𝑜 , 𝑎𝑛 𝑎𝑛𝑑 𝑏𝑛 of 𝑓 𝑢𝑠𝑖𝑛𝑔 𝐸𝑢𝑙𝑒𝑟 ′ 𝑠 𝑓𝑜𝑟𝑚𝑢𝑙𝑎𝑠.
a0 = (1/2π) ∫_{−π}^{π} f(x) dx = (1/2π) [ ∫_{−π}^{0} f(x) dx + ∫_{0}^{π} f(x) dx ]
= (1/2π) [ ∫_{−π}^{0} (−k) dx + ∫_{0}^{π} k dx ]
= (−k/2π)[0 − (−π)] + (k/2π)[π − 0]
= −kπ/2π + kπ/2π = 0.
Hence a0 = 0.
Similarly, an = (1/π) ∫_{−π}^{π} f(x) cos nx dx
= (1/π) [ ∫_{−π}^{0} (−k) cos nx dx + ∫_{0}^{π} k cos nx dx ]
= (−k/nπ)[sin nx]_{−π}^{0} + (k/nπ)[sin nx]_{0}^{π}
= (−k/nπ)[sin 0 − sin(−nπ)] + (k/nπ)(sin nπ − sin 0)
= (−k/nπ)(0) + (k/nπ)(0) = 0.
Hence an = 0.
Moreover, bn = (1/π) ∫_{−π}^{π} f(x) sin nx dx
= (1/π) [ ∫_{−π}^{0} (−k) sin nx dx + ∫_{0}^{π} k sin nx dx ]
= (k/nπ)[cos nx]_{−π}^{0} − (k/nπ)[cos nx]_{0}^{π}
= (k/nπ)[cos 0 − cos(−nπ)] − (k/nπ)(cos nπ − cos 0)
= (k/nπ)(1 − cos nπ) + (k/nπ)(1 − cos nπ)
= (2k/nπ)(1 − cos nπ) = (2k/nπ)(1 − (−1)ⁿ)        because cos nπ = (−1)ⁿ.
Here cos nπ = (−1)ⁿ equals −1 for odd n and 1 for even n, so
bn = (2k/nπ)(1 − (−1)ⁿ) = { 4k/nπ for odd n
                          { 0      for even n.
Hence bn = 4k/nπ for odd n and bn = 0 for even n.
From these, the Fourier series of f is
f(x) = a0 + Σ_{n=1}^{∞}(an cos nx + bn sin nx) = Σ_{odd n} (4k/nπ) sin nx
= (4k/π) sin x + (4k/3π) sin 3x + (4k/5π) sin 5x + ⋯
Hence, f(x) = (4k/π)(sin x + (1/3) sin 3x + (1/5) sin 5x + ⋯).
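As a sanity check, the partial sums of this series should approach k at points where the square wave equals k. A brief numerical sketch (assuming NumPy is available), evaluating at x = π/2 with k = 1:

```python
import numpy as np

k = 1.0
n = np.arange(1, 20001, 2)                       # odd harmonics only
x = np.pi / 2
partial_sum = (4*k/np.pi) * np.sum(np.sin(n*x) / n)
print(partial_sum)   # approximately 1.0 (= k)
```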
Example 3.5: Find the Fourier series representation of
f(x) = { 0 if −π < x ≤ 0
       { 1 if 0 < x ≤ π,
so that f is periodic with period 2π and f(x + 2π) = f(x).
Solution: to find a Fourier series representation of this periodic function we first find the Fourier
coefficients 𝑎𝑜 , 𝑎𝑛 𝑎𝑛𝑑 𝑏𝑛 .
That is, by Euler's formulas,
a0 = (1/2π) ∫_{−π}^{π} f(x) dx = (1/2π) [ ∫_{−π}^{0} 0 dx + ∫_{0}^{π} 1 dx ] = (1/2π) ∫_{0}^{π} 1 dx
= (1/2π)[x]_{0}^{π} = (1/2π)(π − 0) = π/2π = 1/2.
Hence a0 = 1/2.
Similarly, an = (1/π) ∫_{−π}^{π} f(x) cos nx dx = (1/π) [ ∫_{−π}^{0} 0·cos nx dx + ∫_{0}^{π} 1·cos nx dx ]
= (1/π) ∫_{0}^{π} cos nx dx = (1/nπ)[sin nx]_{0}^{π} = (1/nπ)(sin nπ − sin 0) = 0.
Hence an = 0.
Now, bn = (1/π) ∫_{−π}^{π} f(x) sin nx dx = (1/π) ∫_{0}^{π} sin nx dx = (−1/nπ)[cos nx]_{0}^{π}
= (−1/nπ)(cos nπ − cos 0) = (1/nπ)(1 − (−1)ⁿ) = { 2/nπ for odd n
                                                { 0     for even n.
Hence the Fourier series of f is
f(x) = 1/2 + (2/π)(sin x + (1/3) sin 3x + (1/5) sin 5x + ⋯).
Example 3.6: Obtain the Fourier series of f(x) = x² over the interval (−π, π), where f(x + 2π) = f(x).
Solution: a0 = (1/2π) ∫_{−π}^{π} x² dx = (1/π) ∫_{0}^{π} x² dx   (because f is an even function and ∫_{−π}^{π} f(x) dx = 2 ∫_{0}^{π} f(x) dx)
= (1/π)[x³/3]_{0}^{π} = (1/3π)(π³ − 0³) = π²/3.
Hence a0 = π²/3.
Similarly, an = (1/π) ∫_{−π}^{π} x² cos nx dx = (2/π) ∫_{0}^{π} x² cos nx dx   (since f is even).
Now using integration by parts twice we have an = 4(−1)ⁿ/n².
Now, bn = (1/π) ∫_{−π}^{π} x² sin nx dx = 0   (since x² sin nx is an odd function).
Hence bn = 0, and the Fourier series is
x² = π²/3 + Σ_{n=1}^{∞} (4(−1)ⁿ/n²) cos nx.
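The coefficient formula an = 4(−1)ⁿ/n² can be verified numerically. A sketch (assuming NumPy is available) compares a trapezoidal approximation of (1/π) ∫ x² cos nx dx with the closed form:

```python
import numpy as np

x = np.linspace(-np.pi, np.pi, 1_000_001)
dx = np.diff(x)
for n in range(1, 6):
    vals = x**2 * np.cos(n*x)
    a_n = float(np.sum((vals[1:] + vals[:-1]) * dx) / 2) / np.pi
    # a_n should match 4(-1)^n / n^2
    print(n, a_n, 4*(-1)**n / n**2)
```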
So far we have considered the Fourier series expansion of functions with period 2π. In many
applications, we need to find the Fourier series expansion of periodic functions with arbitrary
period, say 2l. The transition from period P = 2l to period P = 2π is quite simple and involves
only a proportional change of scale.
Consider a periodic function f(x) with period 2l on (−l, l). To change the problem to period 2π,
set v = πx/l, which gives x = lv/π. Thus x = ±l corresponds to v = ±π, and the function f(x) of
period 2l on (−l, l) may be regarded as a function g(v) of period 2π on (−π, π).
Hence,
g(v) = a0 + Σ_{n=1}^{∞}(an cos nv + bn sin nv)        (1)
where a0, an and bn are the Fourier coefficients of g, given by the Euler formulas
a) a0 = (1/2π) ∫_{−π}^{π} g(v) dv
b) an = (1/π) ∫_{−π}^{π} g(v) cos nv dv,   n = 1, 2, …
c) bn = (1/π) ∫_{−π}^{π} g(v) sin nv dv,   n = 1, 2, …
Making the inverse substitution v = πx/l and g(v) = f(x) in the above, we obtain the Fourier
series expansion
f(x) = a0 + Σ_{n=1}^{∞}(an cos(nπx/l) + bn sin(nπx/l))        (2)
with coefficients
a) a0 = (1/2l) ∫_{−l}^{l} f(x) dx
b) an = (1/l) ∫_{−l}^{l} f(x) cos(nπx/l) dx,   n = 1, 2, …
c) bn = (1/l) ∫_{−l}^{l} f(x) sin(nπx/l) dx,   n = 1, 2, …
Note: we may replace the interval of integration by any interval of length 𝑃 = 2𝑙, say by the
interval (0,2𝑙)
Example 3.7: Find the Fourier series for the function
f(x) = { x if −1 < x ≤ 0
       { x + 2 if 0 < x < 1,
where f(x + 2) = f(x).
a0 = (1/2l) ∫_{−l}^{l} f(x) dx = (1/2) ∫_{−1}^{1} f(x) dx
= (1/2) [ ∫_{−1}^{0} x dx + ∫_{0}^{1} (x + 2) dx ]
= (1/2)[x²/2]_{−1}^{0} + (1/2)[x²/2 + 2x]_{0}^{1}
= (1/2)(0 − 1/2) + (1/2)(1/2 + 2) = −1/4 + 1/4 + 1 = 1.
Hence a0 = 1.
(Figure 3.3: graph of f.)
an = ∫_{−1}^{1} f(x) cos nπx dx = ∫_{−1}^{0} x cos nπx dx + ∫_{0}^{1} (x + 2) cos nπx dx
= ∫_{−1}^{1} x cos nπx dx + 2 ∫_{0}^{1} cos nπx dx
= 0 + (2/nπ)[sin nπx]_{0}^{1} = 0.
Here ∫_{−1}^{1} x cos nπx dx = 0 because x cos nπx is an odd function, and from section 1 the
integral of an (odd)(even) product over a symmetric interval is 0; clearly by integration
2 ∫_{0}^{1} cos nπx dx = 0 as well.
Hence an = 0.
Similarly, bn = ∫_{−1}^{1} f(x) sin nπx dx = ∫_{−1}^{0} x sin nπx dx + ∫_{0}^{1} (x + 2) sin nπx dx
= ∫_{−1}^{1} x sin nπx dx + 2 ∫_{0}^{1} sin nπx dx
= −2(−1)ⁿ/(nπ) + 2(1 − (−1)ⁿ)/(nπ).
Thus bn = { 6/nπ   for odd n
          { −2/nπ  for even n.
The Fourier series is therefore
f(x) = 1 + Σ_{n=1}^{∞} bn sin nπx.
Further, for x = 1/2, f(x) = x + 2 = 1/2 + 2 = 5/2, and sin(nπ/2) = 1, 0, −1, 0, 1, …, so
5/2 = 1 + (6/π) sin(π/2) + (6/3π) sin(3π/2) + (6/5π) sin(5π/2) + ⋯
= 1 + (6/π)(1 − 1/3 + 1/5 − 1/7 + ⋯)
⟹ 3/2 = (6/π)(1 − 1/3 + 1/5 − 1/7 + ⋯).
This gives π/4 = 1 − 1/3 + 1/5 − 1/7 + ⋯.
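The Leibniz series obtained here converges (slowly) to π/4; a quick numerical sketch (assuming NumPy is available):

```python
import numpy as np

m = np.arange(0, 200_000)
leibniz = np.sum((-1.0)**m / (2*m + 1))   # 1 - 1/3 + 1/5 - 1/7 + ...
print(4 * leibniz)   # approximately pi
```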
Example 3.8: Obtain the Fourier series for the periodic function f(x) = e^{−x} on (−l, l), with f(x + 2l) = f(x).
Solution:
a0 = (1/2l) ∫_{−l}^{l} e^{−x} dx = (1/2l)[−e^{−x}]_{−l}^{l} = (1/2l)(e^{l} − e^{−l}) = (1/l)·(e^{l} − e^{−l})/2 = (sinh l)/l
⟹ a0 = (sinh l)/l.
an = (1/l) ∫_{−l}^{l} f(x) cos(nπx/l) dx = (1/l) ∫_{−l}^{l} e^{−x} cos(nπx/l) dx. Using integration by parts (twice),
∫ e^{−x} cos(nπx/l) dx = (l²/(l² + n²π²)) e^{−x} (−cos(nπx/l) + (nπ/l) sin(nπx/l)),
and evaluating from −l to l (where sin(±nπ) = 0 and cos(±nπ) = (−1)ⁿ) gives
an = (1/l) · (l²(−1)ⁿ/(l² + n²π²)) (e^{l} − e^{−l})
⟹ an = (2l(−1)ⁿ/(l² + n²π²)) sinh l.
Similarly, bn = (1/l) ∫_{−l}^{l} f(x) sin(nπx/l) dx = (1/l) ∫_{−l}^{l} e^{−x} sin(nπx/l) dx. Integration by parts gives
∫ e^{−x} sin(nπx/l) dx = (l²/(l² + n²π²)) e^{−x} (−sin(nπx/l) − (nπ/l) cos(nπx/l)),
so, evaluating from −l to l,
bn = (1/l) · (nπ l (−1)ⁿ/(l² + n²π²)) (e^{l} − e^{−l})
⟹ bn = (2(−1)ⁿ nπ/(l² + n²π²)) sinh l.
⟹ e^{−x} = (sinh l)/l + Σ_{n=1}^{∞} [ (2l(−1)ⁿ sinh l/(l² + n²π²)) cos(nπx/l) + (2nπ(−1)ⁿ sinh l/(l² + n²π²)) sin(nπx/l) ]
= (sinh l)/l + 2 sinh l Σ_{n=1}^{∞} ((−1)ⁿ/(l² + n²π²)) (l cos(nπx/l) + nπ sin(nπx/l))
= sinh l [ 1/l − 2l( (1/(l² + π²)) cos(πx/l) − (1/(l² + 2²π²)) cos(2πx/l) + (1/(l² + 3²π²)) cos(3πx/l) − ⋯ )
− 2π( (1/(l² + π²)) sin(πx/l) − (2/(l² + 2²π²)) sin(2πx/l) + (3/(l² + 3²π²)) sin(3πx/l) − ⋯ ) ].
Fourier integrals
Fourier integrals extend the concept of Fourier series to non-periodic functions defined for all x.
A non-periodic function, which cannot be represented as a Fourier series over the entire real line,
may be represented in an integral form. In many practical problems we come across functions
defined on −∞ < x < ∞ that are non-periodic, e.g. f(x) = e^{−x²}.
We cannot expand such functions in Fourier series since they are not periodic; however, we can
consider such a function to be periodic, but with infinite period. The resulting representation
f(x) = ∫_{0}^{∞} [A(ω) cos ωx + B(ω) sin ωx] dω,
where A(ω) = (1/π) ∫_{−∞}^{∞} f(u) cos ωu du and B(ω) = (1/π) ∫_{−∞}^{∞} f(u) sin ωu du
are the Fourier integral coefficients, is called the Fourier integral representation of f(x).
Sufficient conditions under which the Fourier integral representation is valid are given by the following theorem.
Theorem (Fourier integral theorem): If f(x) is piecewise smooth on every finite interval and
absolutely integrable on the real line, then the Fourier integral of f converges to f(x) at every
point x at which f is continuous, and to the mean value [f(x+) + f(x−)]/2 at every point x at
which f is discontinuous, where f(x+) and f(x−) are the right and left hand limits respectively.
Example 3.9: Find the Fourier integral representation of
f(x) = { 1 for −1 ≤ x ≤ 1
       { 0 for |x| > 1
and hence prove that ∫_{0}^{∞} (sin ω)/ω dω = π/2.
Solution: f(x) is piecewise smooth and absolutely integrable over (−∞, ∞). Thus f(x) has a
Fourier integral representation.
A(ω) = (1/π) ∫_{−∞}^{∞} f(u) cos ωu du = (1/π) ∫_{−1}^{1} cos ωu du = (1/πω)[sin ωu]_{−1}^{1} = 2 sin ω/(πω)
and B(ω) = (1/π) ∫_{−∞}^{∞} f(u) sin ωu du = (1/π) ∫_{−1}^{1} sin ωu du = 0   (because sin ωu is an odd function of u).
Thus
(2/π) ∫_{0}^{∞} (cos ωx sin ω)/ω dω = f(x), that is,
∫_{0}^{∞} (cos ωx sin ω)/ω dω = { π/2 for −1 < x < 1
                                { π/4 for x = ±1
                                { 0   for |x| > 1,
and setting x = 0, we have ∫_{0}^{∞} (sin ω)/ω dω = π/2.
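The value ∫₀^∞ (sin ω)/ω dω = π/2 can also be checked numerically. A sketch (assuming NumPy is available) truncates the integral at a large upper limit L; the neglected tail is O(1/L), so only a few digits of agreement are expected:

```python
import numpy as np

L = 1000 * np.pi
w = np.linspace(0.0, L, 2_000_001)
vals = np.sinc(w / np.pi)      # numpy's sinc(t) = sin(pi t)/(pi t), so this is sin(w)/w
approx = float(np.sum((vals[1:] + vals[:-1]) * np.diff(w)) / 2)
print(approx, np.pi / 2)       # agree to roughly 3 decimal places
```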
Example 3.10: Find the Fourier integral representation of
f(x) = { e^{−x} for x > 0
       { 0      for x ≤ 0
and find the value of the resulting integral when (a) x < 0, (b) x = 0, (c) x > 0; also derive that
∫_{0}^{∞} dω/(1 + ω²) = π/2.
Solution: The given function f(x) is piecewise smooth and absolutely integrable over (−∞, ∞), since, by improper integrals,
∫_{−∞}^{∞} |f(x)| dx = ∫_{0}^{∞} e^{−x} dx = lim_{b→∞} [−e^{−x}]_{0}^{b} = lim_{b→∞} (1 − e^{−b}) = 1.
A(ω) = (1/π) ∫_{−∞}^{∞} f(u) cos ωu du = (1/π) ∫_{0}^{∞} e^{−u} cos ωu du   (since f(x) = e^{−x} for x > 0 and f(x) = 0 for x ≤ 0).
Now using integration by parts: let t = e^{−u}, dt = −e^{−u} du, and dv = cos ωu du, v = (sin ωu)/ω. Then
∫ e^{−u} cos ωu du = (e^{−u} sin ωu)/ω + (1/ω) ∫ e^{−u} sin ωu du.        (1)
Integrating by parts again,
∫ e^{−u} sin ωu du = −(e^{−u} cos ωu)/ω − (1/ω) ∫ e^{−u} cos ωu du.       (2)
Substituting (2) into (1),
∫ e^{−u} cos ωu du = (e^{−u} sin ωu)/ω − (e^{−u} cos ωu)/ω² − (1/ω²) ∫ e^{−u} cos ωu du
⟹ (1 + 1/ω²) ∫ e^{−u} cos ωu du = (e^{−u} sin ωu)/ω − (e^{−u} cos ωu)/ω²
⟹ ∫ e^{−u} cos ωu du = (ω²/(1 + ω²)) [ (e^{−u} sin ωu)/ω − (e^{−u} cos ωu)/ω² ].
As u tends to infinity the bracketed expression tends to 0, and at u = 0 it equals −1/ω², so
∫_{0}^{∞} e^{−u} cos ωu du = (ω²/(1 + ω²)) (0 − (−1/ω²)) = 1/(1 + ω²).
So A(ω) = (1/π) ∫_{0}^{∞} e^{−u} cos ωu du = 1/(π(1 + ω²)).
Similarly, B(ω) = (1/π) ∫_{0}^{∞} e^{−u} sin ωu du = ω/(π(1 + ω²))   (use integration by parts as above).
Hence
f(x) = ∫_{0}^{∞} [A(ω) cos ωx + B(ω) sin ωx] dω = (1/π) ∫_{0}^{∞} (cos ωx + ω sin ωx)/(1 + ω²) dω.
(a) For x < 0, f(x) = 0, so (1/π) ∫_{0}^{∞} (cos ωx + ω sin ωx)/(1 + ω²) dω = 0.
(b) For x = 0, the integral converges to the mean value [f(0+) + f(0−)]/2 = 1/2, so
(1/π) ∫_{0}^{∞} dω/(1 + ω²) = 1/2, which gives ∫_{0}^{∞} dω/(1 + ω²) = π/2.
(c) For x > 0, (1/π) ∫_{0}^{∞} (cos ωx + ω sin ωx)/(1 + ω²) dω = e^{−x}.
Thus,
∫_{0}^{∞} (cos ωx + ω sin ωx)/(1 + ω²) dω = { 0        for x < 0
                                            { π/2      for x = 0
                                            { π e^{−x} for x > 0.
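The key intermediate result ∫₀^∞ e^{−u} cos ωu du = 1/(1 + ω²) is easy to confirm numerically; a sketch (assuming NumPy is available) truncates at u = 40, where e^{−u} is negligible:

```python
import numpy as np

u = np.linspace(0.0, 40.0, 400_001)
du = np.diff(u)
for w in (0.5, 1.0, 2.0):
    vals = np.exp(-u) * np.cos(w*u)
    I = float(np.sum((vals[1:] + vals[:-1]) * du) / 2)
    print(w, I, 1/(1 + w*w))   # the last two values should agree
```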
Exercises 3.2
(c) f(x) = { x for 0 ≤ x ≤ π
           { 2π − x for π ≤ x ≤ 2π,   and deduce that π²/8 = 1 + 1/3² + 1/5² + ⋯
(d) f(x) = 1 − x² over the interval (−1, 1)
(e) f(x) = { 0 for −2 < x ≤ −1
           { k for −1 < x < 1
           { 0 for 1 ≤ x < 2,   with f(x + 4) = f(x)
2. In each of the following, derive the Fourier integral representation. At which points, if any, does the
Fourier integral fail to converge to f(x)? To what value does the integral converge at those points?
(a) f(x) = { 100 for 0 ≤ x ≤ π/2
           { 0 otherwise
(b) f(x) = { (π/2) cos x for |x| ≤ π/2
           { 0 for |x| > π/2
In this section, we are going to introduce the complex Fourier series and the complex Fourier
integral representation of real functions together with some examples.
Section Objectives:
Definition (Complex Fourier series): Let f(x) be a real periodic function of period 2l over the
interval (−l, l). Then the complex Fourier series representation of f is defined as
f(x) = lim_{k→∞} Σ_{n=−k}^{k} cn e^{inπx/l}   for −l < x < l,
where cn = (1/2l) ∫_{−l}^{l} f(x) e^{−inπx/l} dx,  n = 0, ±1, ±2, … are the complex Fourier coefficients.
Note: At points of continuity of f(x) the complex Fourier series converges to f(x), while at points
of discontinuity it converges to the midpoint [f(x+) + f(x−)]/2.
Example 3.11: Find the complex Fourier series representation of the function
f(x) = { 0 for 0 < x ≤ 1
       { 1 for 1 < x < 4,
where f(x) = f(x + 4).
Solution: The function f(x) is periodic with period 4, defined on the interval (0, 4), with 2l = 4,
l = 2. Thus the complex Fourier coefficient cn is given by
cn = (1/4) ∫_{0}^{4} f(x) e^{−inπx/2} dx = (1/4) ∫_{1}^{4} e^{−inπx/2} dx.
For n = 0 we get c0 = (1/4) ∫_{1}^{4} dx = 3/4, and for n ≠ 0,
cn = (1/4) [e^{−inπx/2}/(−inπ/2)]_{1}^{4} = (i/(2nπ))(1 − e^{−inπ/2})   (since e^{−2inπ} = 1).
Hence
f(x) = 3/4 + lim_{k→∞} Σ_{n=−k, n≠0}^{k} (i/(2nπ))(1 − e^{−inπ/2}) e^{inπx/2}.
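The coefficients of Example 3.11 can be spot-checked numerically; a sketch (assuming NumPy is available) compares the integral defining cn with the closed form (i/(2nπ))(1 − e^{−inπ/2}):

```python
import numpy as np

x = np.linspace(1.0, 4.0, 200_001)    # f equals 1 only on (1, 4)
dx = np.diff(x)
for n in (1, 2, 3):
    vals = np.exp(-1j * n * np.pi * x / 2)
    cn_num = np.sum((vals[1:] + vals[:-1]) * dx) / 2 / 4
    cn_formula = 1j / (2 * np.pi * n) * (1 - np.exp(-1j * n * np.pi / 2))
    print(n, cn_num, cn_formula)      # each pair should agree
```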
Example 3.12: Find the complex Fourier series representation of the function f(x) = e^{−x} on (−π, π), with f(x + 2π) = f(x).
Solution: The function f(x) is periodic with period 2π, defined on the interval (−π, π). Here l = π.
Thus the complex Fourier coefficients are
cn = (1/2π) ∫_{−π}^{π} e^{−x} e^{−inx} dx = (1/2π) ∫_{−π}^{π} e^{−(1+in)x} dx = (−1/(2π(1 + in))) [e^{−(1+in)x}]_{−π}^{π}
= (−1/(2π(1 + in))) (e^{−(1+in)π} − e^{(1+in)π}) = (1/(2π(1 + in))) (e^{π} e^{inπ} − e^{−π} e^{−inπ})
= (1/(2π(1 + in))) (e^{π} − e^{−π}) cos nπ   (since e^{±inπ} = cos nπ ± i sin nπ and sin nπ = 0)
= ((1 − in)/(2π(1 + n²))) (e^{π} − e^{−π}) cos nπ = (−1)ⁿ (1 − in) sinh π/(π(1 + n²)).
Definition (Complex Fourier integral): The representation
f(x) = ∫_{−∞}^{∞} c(ω) e^{iωx} dω,
where c(ω) = (1/2π) ∫_{−∞}^{∞} f(u) e^{−iωu} du
is the complex Fourier integral coefficient, is called the complex Fourier integral representation of f
on the real line.
Example 3.13: If f(x) = e^{−a|x|} for all real x, with a > 0 a positive constant, find the
complex Fourier integral representation of f.
Obviously, f(x) is piecewise smooth and absolutely integrable over the interval (−∞, ∞).
c(ω) = (1/2π) ∫_{−∞}^{∞} e^{−a|u|} e^{−iωu} du = (1/2π) [ ∫_{−∞}^{0} e^{(a−iω)u} du + ∫_{0}^{∞} e^{−(a+iω)u} du ]
= (1/2π) [e^{(a−iω)u}/(a − iω)]_{−∞}^{0} + (1/2π) [e^{−(a+iω)u}/(−(a + iω))]_{0}^{∞}.
From this, as u → −∞, e^{(a−iω)u}/(a − iω) → 0, and at u = 0 it equals 1/(a − iω); similarly,
as u → ∞, e^{−(a+iω)u}/(−(a + iω)) → 0, and at u = 0 it equals −1/(a + iω). Hence
c(ω) = (1/2π) [1/(a − iω) + 1/(a + iω)] = (1/2π) · 2a/(a² + ω²)
⟹ c(ω) = a/(π(a² + ω²)).
Therefore
f(x) = e^{−a|x|} = ∫_{−∞}^{∞} c(ω) e^{iωx} dω = (a/π) ∫_{−∞}^{∞} e^{iωx}/(a² + ω²) dω.
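A numerical spot check of c(ω) = a/(π(a² + ω²)) (a sketch, assuming NumPy is available): since e^{−a|u|} is even, the imaginary part of the defining integral vanishes and c(ω) = (1/π) ∫₀^∞ e^{−au} cos ωu du.

```python
import numpy as np

a = 2.0
u = np.linspace(0.0, 30.0, 300_001)   # e^{-2*30} is negligible, so truncate there
du = np.diff(u)
for w in (0.0, 1.0, 3.0):
    vals = np.exp(-a*u) * np.cos(w*u)
    c_num = float(np.sum((vals[1:] + vals[:-1]) * du) / 2) / np.pi
    print(w, c_num, a / (np.pi * (a*a + w*w)))   # the last two values should agree
```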
Example 3.14: Find the complex Fourier integral representation of
f(x) = { sin πx for |x| ≤ 5
       { 0      for |x| > 5.
Solution: Clearly f is piecewise continuous and absolutely integrable on (−∞, ∞) (i.e. over the
real line).
So, to find the complex Fourier integral, we first find the complex Fourier integral coefficient
c(ω); that is,
c(ω) = (1/2π) ∫_{−∞}^{∞} f(u) e^{−iωu} du = (1/2π) ∫_{−5}^{5} e^{−iωu} sin πu du.
Now using integration by parts on ∫_{−5}^{5} e^{−iωu} sin πu du, we let t = e^{−iωu}, dt = −iω e^{−iωu} du,
dv = sin πu du, v = −(cos πu)/π:
∫_{−5}^{5} e^{−iωu} sin πu du = [−(e^{−iωu} cos πu)/π]_{−5}^{5} − (iω/π) ∫_{−5}^{5} e^{−iωu} cos πu du.        (1)
Again using integration by parts on ∫_{−5}^{5} e^{−iωu} cos πu du, let t = e^{−iωu}, dt = −iω e^{−iωu} du,
dv = cos πu du, v = (sin πu)/π:
∫_{−5}^{5} e^{−iωu} cos πu du = [(e^{−iωu} sin πu)/π]_{−5}^{5} + (iω/π) ∫_{−5}^{5} e^{−iωu} sin πu du.        (2)
Substituting (2) into (1) and solving for the integral,
∫_{−5}^{5} e^{−iωu} sin πu du = (−1/(ω² − π²)) [−π e^{−iωu} cos πu − iω e^{−iωu} sin πu]_{−5}^{5}
= (−1/(ω² − π²)) [−π e^{−5iω} cos 5π − iω e^{−5iω} sin 5π + π e^{5iω} cos(−5π) + iω e^{5iω} sin(−5π)]
= (−π/(ω² − π²)) [e^{−5iω} − e^{5iω}]        (using cos 5π = cos(−5π) = −1 and sin(±5π) = 0)
= (−π/(ω² − π²)) (−2i sin 5ω) = 2πi sin 5ω/(ω² − π²).
From this, c(ω) = (1/2π) · 2πi sin 5ω/(ω² − π²) = i sin 5ω/(ω² − π²).
Hence
f(x) = ∫_{−∞}^{∞} c(ω) e^{iωx} dω = i ∫_{−∞}^{∞} (sin 5ω/(ω² − π²)) e^{iωx} dω.
Exercises 3.3
1. In each of the following find the complex Fourier series representation of 𝑓(𝑥) on the given interval.
2. In each of the following problems, find the complex Fourier integral of the function and determine what this
integral converges to.
(b) f(x) = { cos πx for |x| ≤ 2
           { 0      for |x| > 2
Unit Summary:
Fourier series are infinite series designed to represent general periodic functions in terms
of simple ones, namely, 𝑐𝑜𝑠𝑖𝑛𝑒𝑠 and 𝑠𝑖𝑛𝑒𝑠.
A function f(x) is called a periodic function of period p if f(x) is defined for all x and
f(x + p) = f(x); the smallest positive period p is called the fundamental period.
If f(x) has period p, it also has the period 2p, and in general for any integer n ≥ 1,
f(x + np) = f(x) for all x.
In Fourier series representations, even and odd functions are very important in finding the
Fourier coefficients 𝑎0 ,𝑎𝑛 and 𝑏𝑛
A function 𝑓(𝑥) is called an Even function if 𝑓(−𝑥) = 𝑓(𝑥) for all x in the domain of 𝑓
A function 𝑓(𝑥) is called an odd function if 𝑓(−𝑥) = −𝑓(𝑥) for all x in the domain of 𝑓
For any even function f(x), ∫_{−l}^{l} f(x) dx = 2 ∫_{0}^{l} f(x) dx; for any odd function f(x),
∫_{−l}^{l} f(x) dx = 0.
If f(x) is a periodic function of period P = 2l and integrable over the interval (−l, l),
then the Fourier series expansion of f is
f(x) = a0 + Σ_{n=1}^{∞}(an cos(nπx/l) + bn sin(nπx/l))
with coefficients
a) a0 = (1/2l) ∫_{−l}^{l} f(x) dx
b) an = (1/l) ∫_{−l}^{l} f(x) cos(nπx/l) dx,   n = 1, 2, …
c) bn = (1/l) ∫_{−l}^{l} f(x) sin(nπx/l) dx,   n = 1, 2, …
Fourier series are powerful tools for problems involving functions that are periodic or are
of interest on a finite interval only.
Fourier integrals extend the concept of Fourier series to non-periodic functions defined
for all x.
The Fourier integral representation of f(x) can be defined as
f(x) = ∫_{0}^{∞} [A(ω) cos ωx + B(ω) sin ωx] dω,
where A(ω) = (1/π) ∫_{−∞}^{∞} f(u) cos ωu du and B(ω) = (1/π) ∫_{−∞}^{∞} f(u) sin ωu du.
It is valid, for example, when f is piecewise smooth on every finite interval and absolutely
integrable over the real line.
If f(x) is a real periodic function of period 2l over the interval (−l, l), then the complex
Fourier series representation of f is defined as
f(x) = lim_{k→∞} Σ_{n=−k}^{k} cn e^{inπx/l}   for −l < x < l,
where cn = (1/2l) ∫_{−l}^{l} f(x) e^{−inπx/l} dx,  n = 0, ±1, ±2, … are the complex Fourier coefficients.
The complex Fourier integral representation of f is f(x) = ∫_{−∞}^{∞} c(ω) e^{iωx} dω, where
c(ω) = (1/2π) ∫_{−∞}^{∞} f(u) e^{−iωu} du is the complex Fourier integral coefficient.
Miscellaneous Exercises
(c) Sums and products of even functions (d) Sums and products of odd functions
(e) Absolute values of odd functions (f) product of an odd and an even functions
(d) f(x) = x − x², −π < x < π, and deduce that 1 − 1/2² + 1/3² − 1/4² + ⋯ = π²/12
(e) f(x) = { 1  for −π < x < 0
           { −1 for 0 < x < π,   f(x + 2π) = f(x)
5. Let 𝑓 be aperiodic function of period 2𝜋 such that 𝑓(𝑥) = 𝜋 2 − 𝑥 2 for 𝑥 ∈ (−𝜋, 𝜋), then
show that
π² − x² = 2π²/3 − Σ_{n=1}^{∞} (4(−1)ⁿ/n²) cos nx
(a) 3x = Σ_{n=1}^{∞} (−6/n)(−1)ⁿ sin nx        (b) x³ = Σ_{n=1}^{∞} 2(−1)ⁿ (6/n³ − π²/n) sin nx
Unit-four
Fourier and Laplace Transformation
Introduction
An integral transform is a transformation that produces from a given function a new function
that depends on a different variable and appears in the form of an integral. These transformations
are mainly employed as a tool to solve certain initial and boundary value problems in ordinary
and partial differential equations arising in many areas of science and engineering. Fourier
transforms are integral transforms which are of vital importance from the applications view point
in solving initial and boundary value problems.
In this chapter we will discuss three transforms: the Fourier cosine transform, the Fourier sine
transform, and the Fourier transform; the first two being real and the last one complex. These
transforms are obtained from the corresponding Fourier integral. We will also see Laplace
transforms, the inverse Laplace transform, differentiation and integration of Laplace transforms,
convolution and integral equations.
Unit Objectives:
Section Objectives:
The Fourier cosine and sine transforms can be considered as special cases of the Fourier
transform of f(x) when f(x) is an even or odd function over the real axis.
Definition: If f(x) is piecewise continuous on each finite interval [0, l] and absolutely integrable
over the positive real axis, so that its Fourier transform F(w) exists, then the Fourier cosine and
Fourier sine transforms of f(x), denoted by Fc(w) (or fc^) and Fs(w) (or fs^) respectively, are defined
as
Fc(w) = √(2/π) ∫_{0}^{∞} f(x) cos wx dx,
Fs(w) = √(2/π) ∫_{0}^{∞} f(x) sin wx dx.
Example 4.1: Find the Fourier cosine and sine transforms of f(x) = { 1 for 0 ≤ x ≤ a; 0 for x > a }.
Solution: By definition,
F_c(w) = √(2/π) ∫₀^∞ f(x) cos(wx) dx = √(2/π) ∫₀^a cos(wx) dx = √(2/π) [sin(wx)/w]₀^a = √(2/π) · sin(aw)/w
Hence, F_c(w) = √(2/π) · sin(aw)/w.
Similarly,
F_s(w) = √(2/π) ∫₀^∞ f(x) sin(wx) dx = √(2/π) ∫₀^a sin(wx) dx = √(2/π) [−cos(wx)/w]₀^a = √(2/π) · (1 − cos(aw))/w
Hence, F_s(w) = √(2/π) · (1 − cos(aw))/w.
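The closed forms of Example 4.1 can be checked numerically. The sketch below (Python, standard library only; the helper names trapezoid, F_c, F_s and the sample width a = 1.5 are ours, not the text's) approximates the defining integrals and compares them with √(2/π)·sin(aw)/w and √(2/π)·(1 − cos(aw))/w:

```python
import math

def trapezoid(g, lo, hi, n=20000):
    # plain composite trapezoid rule (standard library only)
    h = (hi - lo) / n
    s = 0.5 * (g(lo) + g(hi))
    for k in range(1, n):
        s += g(lo + k * h)
    return s * h

a = 1.5  # pulse width; any a > 0 works

def F_c(w):
    # sqrt(2/pi) * integral_0^a cos(w x) dx  (f vanishes past x = a)
    return math.sqrt(2 / math.pi) * trapezoid(lambda x: math.cos(w * x), 0.0, a)

def F_s(w):
    return math.sqrt(2 / math.pi) * trapezoid(lambda x: math.sin(w * x), 0.0, a)

for w in (0.7, 2.0, 5.3):
    assert abs(F_c(w) - math.sqrt(2 / math.pi) * math.sin(a * w) / w) < 1e-6
    assert abs(F_s(w) - math.sqrt(2 / math.pi) * (1 - math.cos(a * w)) / w) < 1e-6
```

The integration stops at x = a because f(x) = 0 beyond the pulse, so the improper integral reduces to a proper one.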
Example 4.2: Find the Fourier cosine and sine transforms of f(x) = { cos x for 0 ≤ x ≤ a; 0 for x > a }.
Solution: By definition,
F_c(w) = √(2/π) ∫₀^∞ f(x) cos(wx) dx = √(2/π) ∫₀^a cos x cos(wx) dx
= √(2/π) ∫₀^a ½[cos((1 − w)x) + cos((1 + w)x)] dx   (since cos x cos y = ½[cos(x − y) + cos(x + y)])
= ½ √(2/π) ∫₀^a cos((1 − w)x) dx + ½ √(2/π) ∫₀^a cos((1 + w)x) dx
= (1/√(2π)) [sin((1 − w)a)/(1 − w) + sin((1 + w)a)/(1 + w)]   (for w ≠ 1)
Similarly,
F_s(w) = √(2/π) ∫₀^∞ f(x) sin(wx) dx = √(2/π) ∫₀^a cos x sin(wx) dx
= √(2/π) ∫₀^a ½[sin((1 + w)x) − sin((1 − w)x)] dx   (since sin(wx) cos x = ½[sin((w + 1)x) + sin((w − 1)x)] and sin((w − 1)x) = −sin((1 − w)x))
= ½ √(2/π) [ (1 − cos((1 + w)a))/(1 + w) − (1 − cos((1 − w)a))/(1 − w) ]
= √(2/π) · w/(w² − 1) + (1/√(2π)) [cos((1 − w)a)/(1 − w) − cos((1 + w)a)/(1 + w)]
Hence, F_s(w) = √(2/π) · w/(w² − 1) + (1/√(2π)) [cos((1 − w)a)/(1 − w) − cos((1 + w)a)/(1 + w)]   (for w ≠ 1).
Like Fourier transform the Fourier cosine and sine transforms also satisfy certain properties
which are useful from application point of view.
Property 1 (Linearity): For any two functions f(x) and g(x) whose Fourier cosine and sine transforms exist and for any constants a and b,
f̂_c[a f(x) + b g(x)] = a f̂_c[f(x)] + b f̂_c[g(x)] and f̂_s[a f(x) + b g(x)] = a f̂_s[f(x)] + b f̂_s[g(x)].
Proof:
(a) By definition, f̂_c[a f(x) + b g(x)] = √(2/π) ∫₀^∞ (a f(x) + b g(x)) cos(wx) dx
= a √(2/π) ∫₀^∞ f(x) cos(wx) dx + b √(2/π) ∫₀^∞ g(x) cos(wx) dx = a f̂_c[f(x)] + b f̂_c[g(x)]
(b) Similarly, by definition, f̂_s[a f(x) + b g(x)] = √(2/π) ∫₀^∞ (a f(x) + b g(x)) sin(wx) dx
= a √(2/π) ∫₀^∞ f(x) sin(wx) dx + b √(2/π) ∫₀^∞ g(x) sin(wx) dx = a f̂_s[f(x)] + b f̂_s[g(x)]
If F_c(w) and F_s(w) are the Fourier cosine and sine transforms of f(x), then
a) f̂_c[cos(w₀x) f(x)] = ½[F_c(w + w₀) + F_c(w − w₀)]
b) f̂_c[sin(w₀x) f(x)] = ½[F_s(w + w₀) − F_s(w − w₀)]
c) f̂_s[cos(w₀x) f(x)] = ½[F_s(w + w₀) + F_s(w − w₀)]
d) f̂_s[sin(w₀x) f(x)] = ½[F_c(w − w₀) − F_c(w + w₀)]
e) f̂_c[f(ax)] = (1/a) F_c(w/a), a > 0
f) f̂_s[f(ax)] = (1/a) F_s(w/a), a > 0
These results follow directly from the definitions of the Fourier cosine and sine transforms.
Proof of (b): By definition,
f̂_c[sin(w₀x) f(x)] = √(2/π) ∫₀^∞ sin(w₀x) cos(wx) f(x) dx
= ½ [√(2/π) ∫₀^∞ sin((w₀ + w)x) f(x) dx + √(2/π) ∫₀^∞ sin((w₀ − w)x) f(x) dx]   (since sin A cos B = ½[sin(A + B) + sin(A − B)])
= ½ [F_s(w + w₀) − F_s(w − w₀)]   (since sin((w₀ − w)x) = −sin((w − w₀)x))
Hence proved.
Proof of (e): By definition, f̂_c[f(ax)] = √(2/π) ∫₀^∞ f(ax) cos(wx) dx = (1/a) √(2/π) ∫₀^∞ f(u) cos((w/a)u) du = (1/a) F_c(w/a), substituting u = ax.
Fourier cosine and sine transforms of derivatives: Let f(x) and f′(x) be continuous and absolutely integrable on the interval [0, ∞), and let f″(x) be piecewise continuous on every subinterval [0, l). Then
a) f̂_c[f′(x)] = w F_s(w) − √(2/π) f(0)
b) f̂_s[f′(x)] = −w F_c(w)
c) f̂_c[f″(x)] = −w² F_c(w) − √(2/π) f′(0)
d) f̂_s[f″(x)] = −w² F_s(w) + w √(2/π) f(0)
Proof:
(a) By definition, f̂_c[f′(x)] = √(2/π) ∫₀^∞ f′(x) cos(wx) dx = √(2/π) [f(x) cos(wx)]₀^∞ + w √(2/π) ∫₀^∞ f(x) sin(wx) dx
= w F_s(w) − √(2/π) f(0), assuming that f(x) → 0 as x → ∞.
Hence f̂_c[f′(x)] = w F_s(w) − √(2/π) f(0).
Proof of (c):
By definition, f̂_c[f″(x)] = √(2/π) ∫₀^∞ f″(x) cos(wx) dx
= √(2/π) [f′(x) cos(wx) + w f(x) sin(wx)]₀^∞ − w² √(2/π) ∫₀^∞ f(x) cos(wx) dx
= −w² F_c(w) − √(2/π) f′(0), assuming that f(x), f′(x) → 0 as x → ∞.
Example 4.3: Find the Fourier cosine and sine transforms of f(x) = e^(−ax), x ≥ 0, a > 0, by using the Fourier cosine and sine transforms of derivatives.
Solution: Here f(x) = e^(−ax), which gives f′(x) = −a e^(−ax) and f″(x) = a² e^(−ax) = a² f(x), with f(0) = 1 and f′(0) = −a.
Applying the cosine transform to both sides of f″(x) = a² f(x):
f̂_c[f″(x)] = −w² F_c(w) − √(2/π) f′(0) = −w² F_c(w) + a √(2/π), since f′(0) = −a,
while f̂_c[f″(x)] = a² f̂_c[f(x)] = a² F_c(w). Equating the two,
a² F_c(w) = −w² F_c(w) + a √(2/π), or F_c(w) = √(2/π) · a/(w² + a²)
Hence f̂_c[f(x)] = f̂_c[e^(−ax)] = √(2/π) · a/(w² + a²).
Similarly, f̂_s[f″(x)] = −w² F_s(w) + w √(2/π) f(0) = −w² F_s(w) + w √(2/π) = a² F_s(w),
so F_s(w) = √(2/π) · w/(w² + a²).
Hence f̂_s[f(x)] = f̂_s[e^(−ax)] = √(2/π) · w/(w² + a²).
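Assuming SymPy is available, the two closed forms of Example 4.3 can be confirmed by evaluating the defining integrals symbolically (this is a cross-check sketch, not part of the text's derivation; the symbol names are ours):

```python
import sympy as sp

# positive symbols so the improper integrals converge unconditionally
x, w, a = sp.symbols('x w a', positive=True)

Fc = sp.sqrt(2 / sp.pi) * sp.integrate(sp.exp(-a * x) * sp.cos(w * x), (x, 0, sp.oo))
Fs = sp.sqrt(2 / sp.pi) * sp.integrate(sp.exp(-a * x) * sp.sin(w * x), (x, 0, sp.oo))

# compare with the results obtained via transforms of derivatives
assert sp.simplify(Fc - sp.sqrt(2 / sp.pi) * a / (w**2 + a**2)) == 0
assert sp.simplify(Fs - sp.sqrt(2 / sp.pi) * w / (w**2 + a**2)) == 0
```

Computing the integrals directly and arriving at the same answers shows the derivative route is consistent with the definitions.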
Exercises 4.1
1. Find the Fourier cosine and sine transforms of each of the following:
a) f(x) = e^(−x), x > 0   b) f(x) = { cos x for 0 ≤ x ≤ a; 0 for x > a }
2. Find the Fourier cosine and sine transform of each of the following functions
3. Explain why the following functions have neither Fourier cosine transform nor Fourier sine
transform
a) 𝑓(𝑥) = 1 b) 𝑓(𝑥) = 𝑒 𝑥
Section Objectives:
The Fourier transform of a function f(x) can be derived from the complex Fourier integral representation of f(x) on the real line.
Recall the complex Fourier integral representation of f(x) on the real line:
f(x) = ∫₋∞^∞ c(w) e^(iwx) dw = (1/2π) ∫₋∞^∞ ∫₋∞^∞ f(u) e^(−iw(u−x)) du dw,
where c(w) = (1/2π) ∫₋∞^∞ f(u) e^(−iwu) du.
⟹ f(x) = (1/√(2π)) ∫₋∞^∞ [ (1/√(2π)) ∫₋∞^∞ f(u) e^(−iwu) du ] e^(iwx) dw   (1)
Here, the expression in the bracket, a function of w denoted by F(w), is called the Fourier transform of f, and since u is a dummy variable we replace u by x and have
F(w) = (1/√(2π)) ∫₋∞^∞ f(x) e^(−iwx) dx, so that (1) becomes
f(x) = (1/√(2π)) ∫₋∞^∞ F(w) e^(iwx) dw,
which is called the inverse Fourier transform of F(w).
Other common notations used for the Fourier transform of f(x) are f̂(w) or ℱ(f(x)).
Definition (Fourier transform): The Fourier transform, denoted by F(w) or ℱ(f(x)), of a function f(x) is defined as
F(w) = (1/√(2π)) ∫₋∞^∞ f(x) e^(−iwx) dx
Sufficient conditions for the existence of the Fourier transform of f(x) are that f(x) be piecewise continuous on every finite interval and absolutely integrable on the real line.
Example 4.4: Find the Fourier transform of f(x) = { k for 0 < x < a; 0 otherwise }.
Solution: By definition,
ℱ(f(x)) = (1/√(2π)) ∫₋∞^∞ f(x) e^(−iwx) dx = (1/√(2π)) ∫₀^a k e^(−iwx) dx
= (k/√(2π)) [e^(−iwx)/(−iw)]₀^a = (k/(iw√(2π))) (1 − e^(−iwa))
Hence, F(w) = (k/(iw√(2π))) (1 − e^(−iwa)).
Example 4.5: Find the Fourier transform of f(x) = { 1 for |x| ≤ a; 0 for |x| > a }.
Solution: By definition,
ℱ(f(x)) = (1/√(2π)) ∫₋∞^∞ f(x) e^(−iwx) dx = (1/√(2π)) ∫₋ₐ^a e^(−iwx) dx
= (1/√(2π)) [e^(−iwx)/(−iw)]₋ₐ^a = (1/(w√(2π))) · [cos(wa) + i sin(wa) − cos(wa) + i sin(wa)]/i
= (2/√(2π)) · sin(wa)/w = √(2/π) · sin(wa)/w
Hence: F(w) = √(2/π) · sin(wa)/w.
Example 4.6: Find the Fourier transform of f(x) = e^(−|x|).
Solution: The function can be written as f(x) = e^(−|x|) = { e^x for −∞ < x ≤ 0; e^(−x) for 0 < x < ∞ } (by the definition of absolute value).
Now, by the definition of the Fourier transform,
F(w) = (1/√(2π)) ∫₋∞^∞ f(x) e^(−iwx) dx
⇒ F(w) = (1/√(2π)) [∫₋∞^0 e^x e^(−iwx) dx + ∫₀^∞ e^(−x) e^(−iwx) dx]
= (1/√(2π)) [∫₋∞^0 e^((1−iw)x) dx + ∫₀^∞ e^(−(1+iw)x) dx]
= (1/√(2π)) [e^((1−iw)x)/(1−iw)]₋∞^0 − (1/√(2π)) [e^(−(1+iw)x)/(1+iw)]₀^∞
Here, as x → 0, e^((1−iw)x)/(1−iw) → 1/(1−iw), and as x → −∞, e^((1−iw)x)/(1−iw) → 0;
similarly, as x → 0, e^(−(1+iw)x)/(1+iw) → 1/(1+iw), and as x → ∞, e^(−(1+iw)x)/(1+iw) → 0.
Therefore
F(w) = (1/√(2π)) [1/(1−iw) + 1/(1+iw)] = (1/√(2π)) · (1+iw+1−iw)/(1+w²) = (2/√(2π)) · 1/(1+w²) = √(2/π) · 1/(1+w²)
Hence, F(w) = √(2/π) · 1/(1+w²).
Example 4.7: Find the Fourier transform of f(x) = e^(−ax²), a > 0.
Solution: By definition,
ℱ(f(x)) = (1/√(2π)) ∫₋∞^∞ e^(−ax²) e^(−iwx) dx = (1/√(2π)) ∫₋∞^∞ e^(−(ax²+iwx)) dx
Completing the square, ax² + iwx = (√a·x + iw/(2√a))² + w²/(4a), so
ℱ(f(x)) = (1/√(2π)) e^(−w²/(4a)) ∫₋∞^∞ e^(−(√a·x + iw/(2√a))²) dx = (1/√(2π)) e^(−w²/(4a)) · (1/√a) ∫₋∞^∞ e^(−t²) dt,
substituting t = √a·x + iw/(2√a).
Since ∫₋∞^∞ e^(−t²) dt = 2 ∫₀^∞ e^(−t²) dt = √π, we get
ℱ(e^(−ax²)) = (1/√(2π)) e^(−w²/(4a)) · √(π/a) = (1/√(2a)) e^(−w²/(4a))
Hence ℱ(f(x)) = ℱ(e^(−ax²)) = (1/√(2a)) e^(−w²/(4a)).
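The Gaussian result of Example 4.7 lends itself to a quick numerical sanity check (a sketch using only the Python standard library; the function name gaussian_ft and the sample values of a and w are ours). The sine part of e^(−iwx) integrates to zero by symmetry, so only the cosine part is computed:

```python
import math

def gaussian_ft(a, w, L=12.0, n=60000):
    # (1/sqrt(2*pi)) * integral_{-L}^{L} e^{-a x^2} cos(w x) dx,
    # truncated at +/-L where the Gaussian is negligibly small
    h = 2 * L / n
    s = math.exp(-a * L * L) * math.cos(w * L)  # averaged endpoints (equal by symmetry)
    for k in range(1, n):
        xk = -L + k * h
        s += math.exp(-a * xk * xk) * math.cos(w * xk)
    return s * h / math.sqrt(2 * math.pi)

for a, w in ((0.5, 1.0), (2.0, 3.0)):
    exact = math.exp(-w * w / (4 * a)) / math.sqrt(2 * a)
    assert abs(gaussian_ft(a, w) - exact) < 1e-7
```

The trapezoid rule is extremely accurate here because the integrand decays like a Gaussian at both ends of the interval.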
Theorem (Linearity Theorem): For any functions f(x) and g(x) whose Fourier transforms exist and for any constants a, b,
ℱ[a f(x) + b g(x)] = a ℱ(f(x)) + b ℱ(g(x)).
Proof:
By definition, ℱ[a f(x) + b g(x)] = (1/√(2π)) ∫₋∞^∞ (a f(x) + b g(x)) e^(−iwx) dx
= a (1/√(2π)) ∫₋∞^∞ f(x) e^(−iwx) dx + b (1/√(2π)) ∫₋∞^∞ g(x) e^(−iwx) dx
= a ℱ(f(x)) + b ℱ(g(x))
Theorem (Fourier transform of derivatives): Let f(x) be continuous with f(x) → 0 as |x| → ∞, and let f′(x) be absolutely integrable on the real axis. Then
a) ℱ(f′(x)) = iw ℱ[f(x)]
b) ℱ(f^(n)(x)) = (iw)^n ℱ[f(x)],
and (b) holds for all n such that the derivatives f^(r)(x), r = 1, 2, …, n, satisfy the sufficient conditions for the existence of the Fourier transforms.
Proof:
(a) By definition, ℱ(f′(x)) = (1/√(2π)) ∫₋∞^∞ f′(x) e^(−iwx) dx;
integrating by parts, we obtain
ℱ(f′(x)) = (1/√(2π)) [ (f(x) e^(−iwx))₋∞^∞ − (−iw) ∫₋∞^∞ f(x) e^(−iwx) dx ]
= (1/√(2π)) [ 0 + iw ∫₋∞^∞ f(x) e^(−iwx) dx ]   (since f(x) → 0 as |x| → ∞)
⟹ ℱ(f′(x)) = iw ℱ[f(x)]
(b) The repeated application of result (a) gives result (b) provided that the desired conditions are
satisfied at each step.
Example 4.8: Find the Fourier transform of f(x) = x e^(−ax²), a > 0.
Solution: Observe that x e^(−ax²) = −(1/(2a)) (d/dx) e^(−ax²), so
ℱ(f(x)) = −(1/(2a)) ℱ[(e^(−ax²))′] = −(1/(2a)) (iw) ℱ[e^(−ax²)]   (by the theorem on transforms of derivatives)
= (−iw/(2a)) · (1/√(2a)) e^(−w²/(4a))   (by Example 4.7)
Hence ℱ(f(x)) = (−iw/(2a√(2a))) e^(−w²/(4a)).
Example 4.9: Show that (a) ℱ[x^n f(x)] = i^n (d^n/dw^n) F(w)
(b) ℱ[x^m f^(n)(x)] = i^(m+n) (d^m/dw^m) [w^n F(w)]
Solution: (a) By definition, F(w) = (1/√(2π)) ∫₋∞^∞ f(x) e^(−iwx) dx.
Differentiating with respect to w and using the Leibniz rule to differentiate under the integral sign, we have
(d/dw) F(w) = (1/√(2π)) ∫₋∞^∞ (−ix) f(x) e^(−iwx) dx = −i ℱ(x f(x)),
or ℱ(x f(x)) = i (d/dw) F(w).
Repeated application of the differentiation with respect to w leads to the desired result
ℱ(x^n f(x)) = i^n (d^n/dw^n) F(w).
(b) Consider ℱ[x^m f^(n)(x)] = i^m (d^m/dw^m) ℱ[f^(n)(x)]   (this is by using part (a))
= i^m (d^m/dw^m) [(iw)^n F(w)] = i^m · i^n (d^m/dw^m) [w^n F(w)]
⟹ ℱ[x^m f^(n)(x)] = i^(m+n) (d^m/dw^m) [w^n F(w)]
Example 4.10: Using the property of the Fourier transform of derivatives, find the Fourier transform of f(x) = e^(−ax²), a > 0.
Solution: Clearly f(x) satisfies the requisite conditions of continuity and absolute integrability over the real axis for the existence of the Fourier transform.
It is easy to see that f(x) satisfies the differential equation f′(x) + 2ax f(x) = 0.
Transforming both sides (using ℱ(f′) = iw F(w) and ℱ(x f) = i F′(w) from the examples above) gives
iw F(w) + 2a · i F′(w) = 0, i.e. w F(w) + 2a F′(w) = 0
F′(w)/F(w) = −w/(2a)
Integrating with respect to w gives ln F(w) = −w²/(4a) + c, i.e. F(w) = A e^(−w²/(4a)).
To determine A, note that
F(0) = (1/√(2π)) ∫₋∞^∞ e^(−ax²) dx = (1/√(2π)) · √(π/a) = 1/√(2a), from which A = 1/√(2a).
Thus, F(w) = (1/√(2a)) exp[−w²/(4a)] = (1/√(2a)) e^(−w²/(4a)), a > 0.
Theorem (Shifting): Let ℱ[f(x)] = F(w). Then, for any real x₀ and w₀,
(a) ℱ[f(x − x₀)] = e^(−iwx₀) F(w)   (b) ℱ[e^(iw₀x) f(x)] = F(w − w₀)
Proof: The results follow immediately from the definition of the Fourier transform (you try!)
Example 4.11: Find the Fourier transform of f(x) = e^(−a(x−5)²), a > 0.
Solution: By the shifting property in the above theorem, part (a), with x₀ = 5, we have
ℱ[e^(−a(x−5)²)] = e^(−i5w) ℱ[e^(−ax²)] = e^(−i5w) · (1/√(2a)) e^(−w²/(4a))   (by Example 4.7)
= (1/√(2a)) e^(−(w²/(4a) + i5w))
Example 4.12: Find the Fourier transform of f(x) = 4e^(−|x|) − 5e^(−3|x+2|).
Solution: By linearity,
ℱ[f(x)] = ℱ[4e^(−|x|) − 5e^(−3|x+2|)] = 4ℱ[e^(−|x|)] − 5ℱ[e^(−3|x+2|)]
= 4 · (1/√(2π)) · 2/(1+w²) − (5/3) e^(2iw) · (1/√(2π)) · 2/(1+(w/3)²)   (by Example 4.6, the scaling property, and the shifting property with x₀ = −2)
= (1/√(2π)) [8/(1+w²) − 30 e^(2iw)/(9+w²)]
Exercises 4.2
(c) f(x) = u(x + 1) − u(x − 1), where u(x) is the unit-step function
(d) f(x) = sin(ax)/x, a > 0   (e) f(x) = { 1 for |x| ≤ a; 0 for |x| > a }
Section Objectives:
The Laplace transform, which transforms a function f of one variable (t) into a function F of another variable (s), is named in honor of the French mathematician and astronomer Pierre-Simon, Marquis de Laplace (1749–1827).
Integral transform: If f(x, y) is a function of two variables, then a definite integral of f with respect to one of the variables leads to a function of the other variable. For example, by holding y constant, we see that ∫₁² 2xy² dx = 3y². Similarly, a definite integral such as ∫ₐ^b K(s, t) f(t) dt transforms a function f of the variable t into a function F of the variable s.
We are particularly interested in integral transforms where the interval of integration is the unbounded interval [0, ∞). If f(t) is defined for t ≥ 0, then the improper integral is defined as a limit:
∫₀^∞ K(s, t) f(t) dt = lim_(b→∞) ∫₀^b K(s, t) f(t) dt   (1)
If the limit in (1) exists, then we say that the integral exists or is convergent; if the limit does not exist, the integral does not exist and is divergent. The limit in (1) will, in general, exist only for certain values of the variable s.
Definition: The function K(s, t) in (1) is called the kernel of the transform. The choice
𝐾(𝑠, 𝑡) = 𝑒 −𝑠𝑡 as the kernel gives us an especially important integral Transform.
Definition (Laplace Transform): let f be a function defined for t ≥ 0. Then the integral
ℒ{f(t)} = F(s) = ∫₀^∞ e^(−st) f(t) dt   (2)
is said to be the Laplace transform of f, provided that the integral converges
∞ 𝑏 −𝑒 −𝑠𝑡 𝑏 −𝑒 −𝑠𝑏 +1
Solution: by definition ℒ{1} = ∫0 𝑒 −𝑠𝑡 (1)𝑑𝑡 lim ∫0 𝑒 −𝑠𝑡 𝑑𝑡 = lim [ ] = lim ( )
𝑏→∞ 𝑏→∞ 𝑠 0 𝑏→∞ 𝑠
−𝑒 −𝑠𝑏 1 1 1
= lim + lim = 0 + 𝑠 = 𝑠 , whenever 𝑠 > 0
𝑏→∞ 𝑠 𝑏→∞ 𝑠
∞ 1 1 1 1
Hence, ℒ{𝑡} = ∫0 𝑒 −𝑠𝑡 𝑡𝑑𝑡 = ℒ{1} = 𝑠 . 𝑠 = 𝑠2 when ever 𝑠 > 0
𝑠
Example: Evaluate ℒ{e^(−3t)}.
Solution: ℒ{e^(−3t)} = ∫₀^∞ e^(−st) e^(−3t) dt = ∫₀^∞ e^(−(s+3)t) dt = [−e^(−(s+3)t)/(s+3)]₀^∞ = 1/(s+3)
From this, observe that as t → ∞, −e^(−(s+3)t)/(s+3) → 0 (for s > −3), and as t → 0, −e^(−(s+3)t)/(s+3) → −1/(s+3).
Hence, ℒ{e^(−3t)} = 1/(s+3), whenever s > −3.
Example: Evaluate ℒ{sin 2t}.
Solution: Integrating by parts, let u = sin 2t, du = 2 cos 2t dt and dv = e^(−st) dt, v = −e^(−st)/s:
⟹ ℒ{sin 2t} = ∫₀^∞ e^(−st) sin 2t dt = (2/s) ∫₀^∞ e^(−st) cos 2t dt   (1)
Now integrating ∫₀^∞ e^(−st) cos 2t dt by parts, let u = cos 2t, du = −2 sin 2t dt and dv = e^(−st) dt, v = −e^(−st)/s:
∫₀^∞ e^(−st) cos 2t dt = 1/s − (2/s) ∫₀^∞ e^(−st) sin 2t dt, s > 0   (2)
Substituting (2) into (1),
ℒ{sin 2t} = (2/s)[1/s − (2/s) ∫₀^∞ e^(−st) sin 2t dt] = 2/s² − (4/s²) ℒ{sin 2t}
⟹ (1 + 4/s²) ℒ{sin 2t} = 2/s² ⟹ ((s² + 4)/s²) ℒ{sin 2t} = 2/s²
⟹ ℒ{sin 2t} = (s²/(s² + 4)) · (2/s²) = 2/(s² + 4)
Hence ℒ{sin 2t} = 2/(s² + 4), s > 0.
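Assuming SymPy is available, the transforms evaluated so far can be cross-checked in one pass with its built-in laplace_transform (a verification sketch; the variable names are ours):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# (function of t, expected transform) pairs from the worked examples above
pairs = [
    (sp.S(1),       1 / s),
    (t,             1 / s**2),
    (sp.exp(-3*t),  1 / (s + 3)),
    (sp.sin(2*t),   2 / (s**2 + 4)),
]
for func, expected in pairs:
    F = sp.laplace_transform(func, t, s, noconds=True)
    assert sp.simplify(F - expected) == 0
```

The noconds=True flag discards the convergence conditions (such as s > 0) and returns only the transform itself.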
ℒ is a linear transform: ℒ{α f(t) + β g(t)} = α ℒ{f(t)} + β ℒ{g(t)} whenever both integrals converge for s > c. Hence it follows that
a) For s > 0,
ℒ{1 + 5t} = ℒ{1} + ℒ{5t} = ℒ{1} + 5ℒ{t} = 1/s + 5/s²   (by the linearity of ℒ and the above examples)
b) 𝑓𝑜𝑟 𝑠 > 5
We state the generalization of some of the preceding examples by means of the next theorem.
From this point on we shall also refrain from stating any restrictions on s; it is understood that s
is sufficiently restricted to guarantee the convergence of the appropriate Laplace transform.
(a) ℒ{1} = 1/s
(b) ℒ{t^n} = n!/s^(n+1), n = 1, 2, 3, …
(c) ℒ{e^(at)} = 1/(s − a)
(d) ℒ{e^(−at)} = 1/(s + a)
(e) ℒ{sin kt} = k/(s² + k²)
(f) ℒ{cos kt} = s/(s² + k²)
(g) ℒ{sinh kt} = k/(s² − k²)
(h) ℒ{cosh kt} = s/(s² − k²)
These can be proved by direct application of the definition of the Laplace transform.
Sufficient conditions for existence of ℒ{f(t)}: The integral that defines the Laplace transform does not have to converge (i.e., if ℒ{f(t)} = F(s) = ∫₀^∞ e^(−st) f(t) dt does not converge, then the Laplace transform of f does not exist). For example, neither ℒ{1/t} nor ℒ{e^(t²)} exists. Sufficient conditions guaranteeing the existence of ℒ{f(t)} are that f be piecewise continuous on [0, ∞) and that f be of exponential order for t ≥ T. Recall that a function f is piecewise continuous on [0, ∞) if, in any interval 0 ≤ a ≤ t ≤ b, there are at most a finite number of points t_k, k = 1, 2, …, n (t_(k−1) < t_k) at which f has finite discontinuities and is continuous on each open interval (t_(k−1), t_k). See Figure 4.1 below. The concept of exponential order is defined in the following manner.
Definition: A function f is said to be of exponential order c if there exist constants c, M > 0 and T > 0 such that |f(t)| ≤ M e^(ct) for all t > T.
Example 4.18: The functions f(t) = t, f(t) = e^(−t) and f(t) = 2 cos t are all of exponential order c = 1 for all t > T.
Example 4.19: Evaluate ℒ{f(t)}, where f(t) = { 0 for 0 ≤ t < 3; 2 for t ≥ 3 }.
Solution: The function 𝑓 is piecewise continuous and of exponential order for 𝑡 > 0
Now, ℒ{f(t)} = ∫₀^∞ e^(−st) f(t) dt = ∫₀³ e^(−st)(0) dt + ∫₃^∞ e^(−st)(2) dt = 0 + [−2e^(−st)/s]₃^∞ = 2e^(−3s)/s, s > 0
Hence, ℒ{f(t)} = 2e^(−3s)/s, s > 0.
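Example 4.19 can also be checked symbolically: in SymPy the step at t = 3 is modeled with the Heaviside unit-step function (a verification sketch under the assumption that SymPy is available):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# f(t) = 0 for t < 3 and 2 for t >= 3, written via the unit step
f = 2 * sp.Heaviside(t - 3)
F = sp.laplace_transform(f, t, s, noconds=True)

assert sp.simplify(F - 2 * sp.exp(-3*s) / s) == 0
```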
If 𝐹(𝑠) represents the Laplace transform of a function 𝑓(𝑡),that is ℒ{𝑓(𝑡)} = 𝐹(𝑠) we then say
𝑓(𝑡) is the inverse Laplace transform of 𝐹(𝑠) and write
𝑓(𝑡) = ℒ −1 {𝐹(𝑠)}.
Example 4.20: Evaluate the inverse Laplace transform of each of the following
a) F(s) = 1/s   b) F(s) = 1/s²   c) F(s) = 1/(s+3)
Solution: a) f(t) = ℒ⁻¹{F(s)} = ℒ⁻¹{1/s} = 1
b) f(t) = ℒ⁻¹{F(s)} = ℒ⁻¹{1/s²} = t
c) f(t) = ℒ⁻¹{F(s)} = ℒ⁻¹{1/(s+3)} = e^(−3t)
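SymPy can also invert transforms directly; the sketch below checks part (c) of Example 4.20 numerically at a few sample times (names are ours; SymPy returns the inverse multiplied by Heaviside(t), which equals 1 for t > 0):

```python
import math
import sympy as sp

t, s = sp.symbols('t s', positive=True)

f = sp.inverse_laplace_transform(1 / (s + 3), s, t)  # expect e^{-3t} for t > 0

for tv in (0.5, 1.0, 2.0):
    assert abs(float(f.subs(t, tv)) - math.exp(-3 * tv)) < 1e-9
```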
Some inverse transforms:
(a) 1 = ℒ⁻¹{1/s}   (b) t^n = ℒ⁻¹{n!/s^(n+1)}, n = 1, 2, 3, …
(c) e^(at) = ℒ⁻¹{1/(s−a)}   (d) sin kt = ℒ⁻¹{k/(s²+k²)}
(e) cos kt = ℒ⁻¹{s/(s²+k²)}   (f) sinh kt = ℒ⁻¹{k/(s²−k²)}
(g) cosh kt = ℒ⁻¹{s/(s²−k²)}
Example 4.21: Evaluate a) ℒ⁻¹{1/s⁵}   b) ℒ⁻¹{1/(s²+7)}
Solution: (a) By the above theorem, identifying n + 1 = 5, or n = 4, and then multiplying and dividing by 4!, we have
ℒ⁻¹{1/s⁵} = ℒ⁻¹{1/s^(4+1)} = (1/4!) ℒ⁻¹{4!/s^(4+1)} = (1/4!) t⁴ = t⁴/24
b) ℒ⁻¹{1/(s²+7)} = (1/√7) ℒ⁻¹{√7/(s² + (√7)²)} = (1/√7) sin √7 t
Here, we have fixed up the expression 1/(s²+7) by multiplying and dividing by √7.
ℒ⁻¹ is a linear transform: The inverse Laplace transform is also a linear transform; that is, for constants α and β and for functions F and G that are the transforms of f and g respectively,
ℒ⁻¹{α F(s) + β G(s)} = α ℒ⁻¹{F(s)} + β ℒ⁻¹{G(s)} = α f(t) + β g(t).
Example 4.22: Evaluate ℒ⁻¹{(−2s+6)/(s²+4)}
Solution: We first rewrite the given function of s as two expressions by means of term-wise division and then use the linearity of ℒ⁻¹:
ℒ⁻¹{(−2s+6)/(s²+4)} = ℒ⁻¹{−2s/(s²+4)} + ℒ⁻¹{6/(s²+4)} = −2 ℒ⁻¹{s/(s²+4)} + (6/2) ℒ⁻¹{2/(s²+4)}   (by linearity and fixing up the second expression)
= −2 cos 2t + 3 sin 2t
Example 4.23: Evaluate ℒ⁻¹{(s+3)/(s²−7s+12)}
Solution: We decompose (s+3)/(s²−7s+12) = (s+3)/((s−4)(s−3)) into a sum of partial fractions:
(s+3)/((s−4)(s−3)) = A/(s−4) + B/(s−3) = [A(s−3) + B(s−4)]/((s−4)(s−3))
⟹ A(s−3) + B(s−4) = s + 3
⟹ (A+B)s + (−3A−4B) = s + 3 ⟹ { A + B = 1 and −3A − 4B = 3 } ⟹ A = 7 and B = −6
From this, (s+3)/(s²−7s+12) = (s+3)/((s−4)(s−3)) = A/(s−4) + B/(s−3) = 7/(s−4) − 6/(s−3)
Now ℒ⁻¹{(s+3)/(s²−7s+12)} = ℒ⁻¹{7/(s−4) − 6/(s−3)} = 7 ℒ⁻¹{1/(s−4)} − 6 ℒ⁻¹{1/(s−3)} = 7e^(4t) − 6e^(3t)
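The partial-fraction step in Example 4.23 is exactly what SymPy's apart performs; the sketch below confirms the decomposition (assuming SymPy is available):

```python
import sympy as sp

s = sp.symbols('s')

expr = (s + 3) / (s**2 - 7*s + 12)
parts = sp.apart(expr, s)  # partial-fraction decomposition over the rationals

assert sp.simplify(parts - (7 / (s - 4) - 6 / (s - 3))) == 0
```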
Transforms of derivative
As was pointed out in the introduction to this chapter, the Laplace transform is used to solve differential equations. To that end we need to evaluate quantities such as ℒ{dy/dt} and ℒ{d²y/dt²}.
Integration by parts gives
ℒ{f′(t)} = ∫₀^∞ e^(−st) f′(t) dt = [e^(−st) f(t)]₀^∞ + s ∫₀^∞ e^(−st) f(t) dt = −f(0) + s ℒ{f(t)} = sF(s) − f(0);   (1)
here we have assumed that e^(−st) f(t) → 0 as t → ∞. Similarly, with the aid of (1),
ℒ{f″(t)} = ∫₀^∞ e^(−st) f″(t) dt = [e^(−st) f′(t)]₀^∞ + s ∫₀^∞ e^(−st) f′(t) dt = −f′(0) + s ℒ{f′(t)} = s²F(s) − s f(0) − f′(0)   (2)
These results can be generalized in the following theorem.
If f, f′, …, f^(n−1) are continuous on [0, ∞) and are of exponential order, and if f^(n)(t) is piecewise continuous on [0, ∞), then
ℒ{f^(n)(t)} = s^n F(s) − s^(n−1) f(0) − s^(n−2) f′(0) − ⋯ − f^(n−1)(0).
In solving ODEs, it is apparent from the general result given in the above theorem (transform of a derivative) that ℒ{d^n y/dt^n} depends on Y(s) = ℒ{y(t)} and the n − 1 derivatives of y(t) evaluated at t = 0. This property makes the Laplace transform ideally suited for solving linear initial-value problems in which the differential equation has constant coefficients. Such a differential equation is simply a linear combination of the terms y, y′, y″, …, y^(n):
a_n d^n y/dt^n + a_(n−1) d^(n−1) y/dt^(n−1) + ⋯ + a₀ y = g(t), y(0) = y₀, y′(0) = y₁, …, y^(n−1)(0) = y_(n−1)   (4)
By the linearity property, the Laplace transform of this linear combination is a linear combination of Laplace transforms:
a_n ℒ{d^n y/dt^n} + a_(n−1) ℒ{d^(n−1) y/dt^(n−1)} + ⋯ + a₀ ℒ{y} = ℒ{g(t)},
which, by the theorem above, becomes
a_n [s^n Y(s) − s^(n−1) y(0) − ⋯ − y^(n−1)(0)] + a_(n−1) [s^(n−1) Y(s) − s^(n−2) y(0) − ⋯ − y^(n−2)(0)] + ⋯ + a₀ Y(s) = G(s).   (5)
In other words,
the Laplace transform of a linear differential equation with constant coefficients becomes an algebraic equation in Y(s).
If we solve the general transformed equation (5) for the symbol Y(s), we first obtain
Y(s) = Q(s)/P(s) + G(s)/P(s),   (6)
where P(s) = a_n s^n + a_(n−1) s^(n−1) + ⋯ + a₀, Q(s) is a polynomial in s of degree less than or equal to n − 1 consisting of the various products of the coefficients a_i, i = 0, 1, …, n, and the prescribed initial conditions y₀, y₁, …, y_(n−1), and G(s) is the Laplace transform of g(t). Typically, we put the two terms in (6) over the least common denominator and then decompose the expression into two or more partial fractions. Finally, the solution y(t) of the original initial-value problem is y(t) = ℒ⁻¹{Y(s)}, where the inverse transform is done term by term. Let us summarize the procedure in the following diagram.
Example 4.24: Use the Laplace transform to solve the initial-value problem
dy/dt + 3y = 13 sin 2t, y(0) = 6
Solution: We first take the transform of each member of the differential equation:
ℒ{dy/dt} + 3ℒ{y} = 13ℒ{sin 2t}   (7)
From (1), ℒ{dy/dt} = sY(s) − y(0) = sY(s) − 6, and we know that ℒ{sin 2t} = 2/(s²+4).
So (7) becomes sY(s) − 6 + 3Y(s) = 26/(s²+4),
or (s+3)Y(s) = 6 + 26/(s²+4). From this we get
Y(s) = 6/(s+3) + 26/((s+3)(s²+4)) = (6s²+50)/((s+3)(s²+4))
Since the quadratic polynomial s²+4 does not factor using real numbers, its assumed numerator in the partial fraction decomposition is a linear polynomial in s:
(6s²+50)/((s+3)(s²+4)) = A/(s+3) + (Bs+C)/(s²+4)
Putting the right-hand side of the equality over a common denominator and equating numerators gives 6s²+50 = A(s²+4) + (Bs+C)(s+3). Setting s = −3 gives 13A = 104, so A = 8. Since the denominator has no more real zeros, we equate the coefficients of s² and s: 6 = A + B and 0 = 3B + C. Using the value of A in the first equation gives B = −2, and then using B in the second equation gives C = 6. Thus
Y(s) = 8/(s+3) + (−2s+6)/(s²+4) = 8/(s+3) − 2s/(s²+4) + 6/(s²+4)   (8)
Taking the inverse transform term by term,
y(t) = 8ℒ⁻¹{1/(s+3)} − 2ℒ⁻¹{s/(s²+4)} + 3ℒ⁻¹{2/(s²+4)}
Hence, the solution of the initial-value problem is y(t) = 8e^(−3t) − 2 cos 2t + 3 sin 2t.
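The answer of Example 4.24 can be confirmed by letting SymPy solve the same initial-value problem directly with dsolve (a cross-check sketch, assuming SymPy is available):

```python
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')

# dy/dt + 3y = 13 sin 2t,  y(0) = 6
ode = sp.Eq(y(t).diff(t) + 3*y(t), 13*sp.sin(2*t))
sol = sp.dsolve(ode, y(t), ics={y(0): 6})

expected = 8*sp.exp(-3*t) - 2*sp.cos(2*t) + 3*sp.sin(2*t)
assert sp.simplify(sol.rhs - expected) == 0
```

Getting the identical expression by a completely different method (variation of parameters inside dsolve versus the transform route above) is strong evidence that the partial-fraction work is correct.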
Example 4.25: Use the Laplace transform to solve the initial-value problem
y″ − 3y′ + 2y = e^(−4t), y(0) = 1, y′(0) = 5
Solution: Proceeding as in the example above, we transform the DE. We take the sum of the transforms of each term, use the given initial conditions, and then solve for Y(s):
ℒ{d²y/dt²} − 3ℒ{dy/dt} + 2ℒ{y} = ℒ{e^(−4t)}
s²Y(s) − s y(0) − y′(0) − 3[sY(s) − y(0)] + 2Y(s) = 1/(s+4)
(s² − 3s + 2)Y(s) = s + 2 + 1/(s+4)
Y(s) = (s+2)/(s²−3s+2) + 1/((s²−3s+2)(s+4)) = (s²+6s+9)/((s−1)(s−2)(s+4))
Decomposing into partial fractions:
(s²+6s+9)/((s−1)(s−2)(s+4)) = A/(s−1) + B/(s−2) + C/(s+4) = [A(s−2)(s+4) + B(s−1)(s+4) + C(s−1)(s−2)]/((s−1)(s−2)(s+4))
Setting s = 1 gives −5A = 16, so A = −16/5; setting s = 2 gives 6B = 25, or B = 25/6; setting s = −4 gives 30C = 1, so C = 1/30.
Thus, Y(s) = −(16/5) · 1/(s−1) + (25/6) · 1/(s−2) + (1/30) · 1/(s+4)
y(t) = −(16/5)ℒ⁻¹{1/(s−1)} + (25/6)ℒ⁻¹{1/(s−2)} + (1/30)ℒ⁻¹{1/(s+4)}
Or y(t) = ℒ⁻¹{Y(s)} = −(16/5)e^t + (25/6)e^(2t) + (1/30)e^(−4t).
Evaluating transforms such as ℒ{e^(5t) t³} and ℒ{e^(−2t) cos 4t} is straightforward provided that we know (and we do know) ℒ{t³} and ℒ{cos 4t}. In general, if we know the Laplace transform of a function f, ℒ{f(t)} = F(s), it is possible to compute the Laplace transform of an exponential multiple of f, that is ℒ{e^(at) f(t)}, with no additional effort other than translating, or shifting, the transform F(s) to F(s − a). This result is known as the first translation theorem or first shifting theorem.
Theorem (First Translation Theorem): If ℒ{f(t)} = F(s) and a is any real number, then ℒ{e^(at) f(t)} = F(s − a).
Proof: By definition
ℒ{e^(at) f(t)} = ∫₀^∞ e^(−st) e^(at) f(t) dt = ∫₀^∞ e^(−(s−a)t) f(t) dt = F(s − a).
Hence the proof.
If we consider s as a real variable, then the graph of F(s − a) is the graph of F(s) shifted on the s-axis by the amount |a|. If a > 0, the graph of F(s) is shifted a units to the right, whereas if a < 0, it is shifted |a| units to the left, as shown in Fig 4.3.
Here s → s − a means that in the Laplace transform F(s) of f(t) we replace the symbol s, wherever it appears, by s − a.
Inverse form of the first translation theorem: To compute the inverse of F(s − a), we must recognize F(s), find f(t) by taking the inverse Laplace transform of F(s), and then multiply f(t) by the exponential function e^(at). This procedure can be summarized symbolically in the following manner:
ℒ⁻¹{F(s − a)} = e^(at) f(t).
Example: Suppose that transforming an initial-value problem has led to
Y(s) = 2/(s−3) + 11/(s−3)² + 2/(s−3)⁵
Thus, taking the inverse Laplace transform on both sides gives
y(t) = 2ℒ⁻¹{1/(s−3)} + 11ℒ⁻¹{1/(s−3)²} + (2/4!)ℒ⁻¹{4!/(s−3)⁵}
= 2e^(3t) + 11t e^(3t) + (1/12)t⁴e^(3t)   (by the inverse form of the first translation theorem, with s → s − 3)
Hence, y(t) = 2e^(3t) + 11t e^(3t) + (1/12)t⁴e^(3t) is the solution to the IVP.
Example 4.27: Solve the Initial-Value Problem
𝑦 ′′ + 4𝑦 ′ + 6𝑦 = 1 + 𝑒 −𝑡 , 𝑦(0) = 0, 𝑦 ′ (0) = 0.
Solution: ℒ{y″} + 4ℒ{y′} + 6ℒ{y} = ℒ{1} + ℒ{e^(−t)}
s²Y(s) − s y(0) − y′(0) + 4[sY(s) − y(0)] + 6Y(s) = 1/s + 1/(s+1)
(s² + 4s + 6)Y(s) = (2s+1)/(s(s+1))
Y(s) = (2s+1)/(s(s+1)(s²+4s+6))
Since the quadratic term in the denominator does not factor into real linear factors, the partial fraction decomposition for Y(s) is found to be
Y(s) = (1/6)/s + (1/3)/(s+1) − (s/2 + 5/3)/(s²+4s+6)
Taking the inverse Laplace transform of each side and completing the square on s² + 4s + 6 = (s+2)² + 2,
y(t) = ℒ⁻¹{Y(s)} = (1/6)ℒ⁻¹{1/s} + (1/3)ℒ⁻¹{1/(s+1)} − (1/2)ℒ⁻¹{(s+2)/((s+2)²+2)} − (2/(3√2))ℒ⁻¹{√2/((s+2)²+2)}
= 1/6 + (1/3)e^(−t) − (1/2)e^(−2t) cos √2 t − (√2/3)e^(−2t) sin √2 t
Exercises 4.3
1. Find ℒ{f(t)} of each of the following:
(a) f(t) = t² + 6t − 3   (b) f(t) = −4t² + 16t + 5   (c) f(t) = 1 + e^(4t)   (d) f(t) = 1 + e^(4t)
(f) f(t) = 4t² − 5 sin 3t   (g) f(t) = e^t sinh t   (h) f(t) = (2t − 1)³   (i) f(t) = sin(4t + 5)
Derivatives of transforms: If ℒ{f(t)} = F(s) and n = 1, 2, 3, …, then ℒ{t^n f(t)} = (−1)^n (d^n/ds^n) F(s).
That is, ℒ{t f(t)} = −(d/ds) F(s) = −(d/ds) ℒ{f(t)}
Similarly, by the above result, ℒ{t² f(t)} = ℒ{t · t f(t)} = −(d/ds) ℒ{t f(t)} = −(d/ds)(−(d/ds) ℒ{f(t)}) = (d²/ds²) ℒ{f(t)}
Example: Evaluate ℒ{t sin kt}.
ℒ{t sin kt} = (−1)¹ (d/ds) ℒ{sin kt} = −(d/ds)(k/(s²+k²)) = −(−2ks/(s²+k²)²) = 2ks/(s²+k²)²
Hence ℒ{t sin kt} = 2ks/(s²+k²)².
Example: Evaluate ℒ{t e^(3t)}.
ℒ{t e^(3t)} = (−1)¹ (d/ds) ℒ{e^(3t)} = −(d/ds)(1/(s−3)) = −(−1/(s−3)²) = 1/(s−3)²
Integral of a transform: Let f(t)/t be piecewise continuous, defined for t ≥ 0, and such that |f(t)/t| ≤ M e^(kt) for t ≥ 0. If ℒ{f(t)/t} = G(s) for s > k and ℒ{f(t)} = F(s), then
ℒ{f(t)/t} = ∫ₛ^∞ F(u) du, and conversely ℒ⁻¹{G(s)} = −(1/t) ℒ⁻¹{G′(s)}.
Proof: We have G(s) = ℒ{f(t)/t} = ∫₀^∞ e^(−st) (f(t)/t) dt for s > k.
Differentiating under the integral sign,
G′(s) = ∫₀^∞ (−t) e^(−st) (f(t)/t) dt = −∫₀^∞ e^(−st) f(t) dt = −F(s)
To proceed further we now make use of the fact that the growth condition on f(t)/t implies that lim_(s→∞) G(s) = 0; integrating G′(u) = −F(u) from u = s to ∞ then shows that G(s) = ℒ{f(t)/t} = ∫ₛ^∞ F(u) du, for s > k.
The converse result follows by taking the inverse Laplace transform and using the fact that
f(t)/t = ℒ⁻¹{G(s)}, together with ℒ{f(t)} = F(s) = −G′(s), which gives f(t) = −ℒ⁻¹{G′(s)}, i.e. ℒ⁻¹{G(s)} = −(1/t) ℒ⁻¹{G′(s)}.
𝑡
Example 4.30: Evaluate ℒ{(e^(−2t) − e^(−3t))/t}
Solution: The function (e^(−2t) − e^(−3t))/t is defined and finite for all t > 0, where f(t) = e^(−2t) − e^(−3t)
ℒ{f(t)} = ℒ{e^(−2t) − e^(−3t)} = F(s) = 1/(s+2) − 1/(s+3)
G(s) = ℒ{f(t)/t} = ℒ{(e^(−2t) − e^(−3t))/t} = ∫ₛ^∞ F(u) du = ∫ₛ^∞ (1/(u+2) − 1/(u+3)) du
= [ln(u+2) − ln(u+3)]ₛ^∞ = [ln((u+2)/(u+3))]ₛ^∞
= lim_(b→∞) ln((b+2)/(b+3)) − ln((s+2)/(s+3)) = 0 + ln((s+3)/(s+2))   (note that lim_(s→∞) G(s) = 0, as required)
⟹ G(s) = ln((s+3)/(s+2))
Example 4.31: Evaluate ℒ⁻¹{ln((s+3)/(s+2))}
Solution: Let G(s) = ln((s+3)/(s+2)). Then
G′(s) = (d/ds) ln((s+3)/(s+2)) = (d/ds)(ln(s+3) − ln(s+2)) = 1/(s+3) − 1/(s+2)
⟹ G′(s) = 1/(s+3) − 1/(s+2)
Now ℒ⁻¹{G′(s)} = ℒ⁻¹{1/(s+3) − 1/(s+2)} = ℒ⁻¹{1/(s+3)} − ℒ⁻¹{1/(s+2)} = e^(−3t) − e^(−2t)
By the converse part of the theorem above, ℒ⁻¹{G(s)} = −(1/t) ℒ⁻¹{G′(s)}.
Hence ℒ⁻¹{ln((s+3)/(s+2))} = (e^(−2t) − e^(−3t))/t
Exercises 4.4
1. Use the derivative of a transform to find the Laplace transform of each of the following:
(a) t cos wt   (b) t² sin 3t   (c) t² cosh 2t   (d) t e^(−kt) sin t   (e) t^n e^(kt)
2. Use the integral of a transform to find the Laplace transform of each of the following
4.5.1 Convolution
Definition: let the functions 𝑓(𝑡) and 𝑔(𝑡) be defined for 𝑡 ≥ 0. Then the convolution of the
functions 𝑓 and 𝑔 denoted by (𝑓 ∗ 𝑔)(𝑡), and in abbreviated form by 𝑓 ∗ 𝑔 is a function of t
defined as the integral;
(f ∗ g)(t) = ∫₀^t f(τ) g(t − τ) dτ
Note: From the definition it follows almost immediately that convolution has the properties
f ∗ g = g ∗ f   (commutative law)
f ∗ (g + h) = f ∗ g + f ∗ h   (distributive law)
(f ∗ g) ∗ h = f ∗ (g ∗ h)   (associative law)
f ∗ 0 = 0 ∗ f = 0
similar to those of multiplication of numbers. However, there are differences of which you should be aware (for instance, 1 ∗ f ≠ f in general).
Example 4.32: Find the convolution of f(t) = e^t and g(t) = sin t.
Solution: (f ∗ g)(t) = ∫₀^t e^τ sin(t − τ) dτ. Evaluating the integral (by parts, twice),
= ½(e^t − cos t) + ½(−sin t)
= ½(e^t − cos t − sin t)
Hence f ∗ g = (g ∗ f)(t) = ∫₀^t e^τ sin(t − τ) dτ = ½(e^t − cos t − sin t).
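The convolution value ½(e^t − cos t − sin t) can be checked against a direct numerical evaluation of the integral (standard library only; the helper name conv_et_sint and the sample times are ours):

```python
import math

def conv_et_sint(t, n=20000):
    # trapezoid approximation of integral_0^t e^tau sin(t - tau) d tau
    h = t / n
    g = lambda tau: math.exp(tau) * math.sin(t - tau)
    s = 0.5 * (g(0.0) + g(t))
    for k in range(1, n):
        s += g(k * h)
    return s * h

for tv in (0.5, 1.0, 2.5):
    exact = 0.5 * (math.exp(tv) - math.cos(tv) - math.sin(tv))
    assert abs(conv_et_sint(tv) - exact) < 1e-6
```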
Theorem (Convolution Theorem): If f(t) and g(t) are piecewise continuous on [0, ∞) and of exponential order, then
ℒ{f ∗ g} = ℒ{f(t)} ℒ{g(t)} = F(s)G(s),
or equivalently
ℒ⁻¹{F(s)G(s)} = f ∗ g = ∫₀^t f(τ) g(t − τ) dτ
Proof
Let F(s) = ∫₀^∞ e^(−sτ) f(τ) dτ and G(s) = ∫₀^∞ e^(−sp) g(p) dp.
τ in F and p in G vary independently, hence we can insert the G-integral into the F-integral. Setting t = p + τ, so that p = t − τ with t as the new variable,
here we integrate for fixed τ over t from τ to ∞; this is the shaded region in Fig 4.4. Under the assumptions on f and g the order of integration can be reversed. We then integrate first over τ, from 0 to t, and then over t, which yields F(s)G(s) = ℒ{f ∗ g}.
Example 4.33: if 𝑓(𝑡) = 𝑒 𝑡 and 𝑔(𝑡) = 𝑠𝑖𝑛𝑡 , then the convolution theorem states that the
Laplace transform of the convolution of 𝑓 and 𝑔 is the product of their Laplace transforms.
That is, ℒ{f ∗ g} = ℒ{∫₀^t e^τ sin(t − τ) dτ} = ℒ{f(t)} · ℒ{g(t)} = ℒ{e^t} · ℒ{sin t}
= (1/(s−1)) · (1/(s²+1)) = 1/((s−1)(s²+1))
Example 4.34: Evaluate a) ℒ{t² ∗ cos t}   b) ℒ⁻¹{s/(s²+a²)²}
Solution: a) Here f(t) = t², ℒ{f(t)} = ℒ{t²} = 2/s³, and g(t) = cos t, ℒ{g(t)} = ℒ{cos t} = s/(s²+1); then, by the convolution theorem,
ℒ{t² ∗ cos t} = ℒ{t²} ℒ{cos t} = (2/s³)(s/(s²+1)) = 2/(s²(s²+1))
b) Writing s/(s²+a²)² = (1/(s²+a²)) · (s/(s²+a²)) and letting F(s) = 1/(s²+a²) and G(s) = s/(s²+a²), we have
ℒ⁻¹{F(s)} = ℒ⁻¹{1/(s²+a²)} = (1/a) ℒ⁻¹{a/(s²+a²)} = (1/a) sin at
Similarly, ℒ⁻¹{G(s)} = ℒ⁻¹{s/(s²+a²)} = cos at; then it follows from the convolution theorem that
ℒ⁻¹{s/(s²+a²)²} = ℒ⁻¹{F(s)G(s)} = (1/a)(sin at ∗ cos at) = (1/(2a)) t sin at
Note: When evaluating convolution integrals such as ∫₀^t sin aτ cos a(t − τ) dτ, instead of expanding a term such as cos a(t − τ) or sin a(t − τ) and using integration by parts, it is often quicker to replace sin at and cos a(t − τ) by
sin at = (e^(iat) − e^(−iat))/(2i), cos a(t − τ) = (e^(ia(t−τ)) + e^(−ia(t−τ)))/2
before performing the integrations, and again use these identities to interpret the result in terms of trigonometric functions.
Convolution helps in solving certain integral equations, that is, equations in which the unknown function y(t) appears in an integral (and perhaps also outside of it). This concerns equations with an integral of the form of a convolution. An equation of the form
f(t) = g(t) + ∫₀^t f(τ) h(t − τ) dτ,
where g(t) and h(t) are known functions, is called a Volterra integral equation for f(t).
Note: The Volterra integral equation has the convolution form, with the symbol h playing the part of g in the convolution.
Example 4.35: Solve f(t) = 3t² − e^(−t) − ∫₀^t f(τ) e^(t−τ) dτ for f(t).
Solution: Taking the Laplace transform of both sides and using the convolution theorem with h(t) = e^t:
F(s) = 3 · (2/s³) − 1/(s+1) − ℒ{f(t)} ℒ{e^t} = 6/s³ − 1/(s+1) − F(s) · 1/(s−1)
⟹ (1 + 1/(s−1)) F(s) = 6/s³ − 1/(s+1) ⟹ (s/(s−1)) F(s) = 6/s³ − 1/(s+1)
⟹ F(s) = ((s−1)/s)(6/s³ − 1/(s+1)) = 6/s³ − 6/s⁴ − (s−1)/(s(s+1))
Decomposing (s−1)/(s(s+1)) into partial fractions, (s−1)/(s(s+1)) = −1/s + 2/(s+1), so
F(s) = 6/s³ − 6/s⁴ + 1/s − 2/(s+1)
Taking the inverse Laplace transform of each term, we have
f(t) = 3ℒ⁻¹{2!/s³} − ℒ⁻¹{3!/s⁴} + ℒ⁻¹{1/s} − 2ℒ⁻¹{1/(s+1)} = 3t² − t³ + 1 − 2e^(−t)
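The candidate solution of Example 4.35 can be substituted back into the integral equation and checked numerically (standard library only; the helper names are ours):

```python
import math

def f(t):
    # candidate solution obtained by the transform method above
    return 3*t**2 - t**3 + 1 - 2*math.exp(-t)

def rhs(t, n=20000):
    # 3t^2 - e^{-t} - integral_0^t f(tau) e^{t - tau} d tau  (trapezoid rule)
    h = t / n
    g = lambda tau: f(tau) * math.exp(t - tau)
    s = 0.5 * (g(0.0) + g(t))
    for k in range(1, n):
        s += g(k * h)
    return 3*t**2 - math.exp(-t) - s * h

# the equation f(t) = rhs(t) should hold at every t > 0
for tv in (0.3, 1.0, 2.0):
    assert abs(f(tv) - rhs(tv)) < 1e-6
```

Substituting back is a useful habit with integral equations, since a single sign slip in the partial fractions would show up immediately here.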
Integro-differential equations
We now consider a differential equation of an unusual type. These equations occur in many applications of mathematics, one of which arises in the continuum mechanics of polymers, where the dynamical response y(t) of certain types of material at time t depends on a derivative of y(t) and the time-weighted cumulative effect of what has happened to the material prior to time t. For obvious reasons, materials of this type are called materials with memory.
Definition: Differential equations in which the function y(t) occurs not only as the dependent variable in the differential equation, but also inside a convolution integral that forms the nonhomogeneous term, are called integro-differential equations.
In other words, equations that involve both the integral of an unknown function and its
derivatives are called Integro-differential equations.
Example: Solve y″(t) + y(t) = ∫₀^t sin τ · y(t − τ) dτ, y(0) = 1, y′(0) = 0.
Solution: Transforming the equation, the last term is the Laplace transform of a convolution integral, so from the convolution theorem it follows that
ℒ{∫₀^t sin τ · y(t − τ) dτ} = ℒ{sin t} ℒ{y(t)} = Y(s)/(s²+1)
Using this result in the transformed equation, solving for Y(s), and expanding the result using partial fractions gives
s²Y(s) − s + Y(s) = Y(s)/(s²+1), or (s²+1)Y(s) − Y(s)/(s²+1) = s
i.e. Y(s) = s(s²+1)/((s²+1)² − 1) = s(s²+1)/(s²(s²+2)) = (s²+1)/(s(s²+2)) = (1/2)(1/s) + (1/2) s/(s²+2)
Hence y(t) = ℒ⁻¹{Y(s)} = 1/2 + (1/2) cos √2 t.
Exercises 4.5
2. If 𝑓(𝑡) = 𝑡 2 and 𝑔(𝑡) = 𝑐𝑜𝑠𝑡 , then show that (𝑓 ∗ 𝑔)(𝑡) = 2(𝑡 − 𝑠𝑖𝑛𝑡) = (𝑔 ∗ 𝑓)(𝑡)
3. Use the convolution theorem to evaluate each of the following:
(a) ℒ{1 ∗ t³}   (b) ℒ{t² ∗ t e^t}   (c) ℒ{e^(−t) ∗ e^t cos t}   (d) ℒ{e^(2t) ∗ sin t}
4. In each of the following use the Laplace transform to solve the given integral equation or Integro-
differential equation.
(a) f(t) + ∫₀^t (t − τ) f(τ) dτ = t   (b) f(t) = 2t − 4 ∫₀^t sin τ f(t − τ) dτ
(c) y′(t) = 1 − sin t − ∫₀^t y(τ) dτ, y(0) = 0
(d) dy/dt + 6y(t) + 9 ∫₀^t y(τ) dτ = 1, y(0) = 0
Unit Summary:
The Fourier cosine and sine transforms can be considered as special cases of the Fourier transform of f(x) when f(x) is an even or odd function over the real axis.
The Fourier cosine and Fourier sine transforms of f(x), denoted by F_c(w) or f̂_c and F_s(w) or f̂_s respectively, are defined as
F_c(w) = √(2/π) ∫₀^∞ f(x) cos wx dx,  F_s(w) = √(2/π) ∫₀^∞ f(x) sin wx dx
The Fourier cosine and sine transforms are linear transforms, i.e. for any two functions f(x) and g(x) whose Fourier cosine and sine transforms exist and for any constants a and b,
F_c(af + bg) = aF_c(f) + bF_c(g) and F_s(af + bg) = aF_s(f) + bF_s(g)
Let f(x) and f′(x) be continuous and absolutely integrable on the interval [0, ∞) and f″(x) be piecewise continuous on every subinterval [0, l). Then the Fourier cosine and sine transforms of derivatives are
a) F_c[f′(x)] = wF_s(w) − √(2/π) f(0)
b) F_s[f′(x)] = −wF_c(w)
c) F_c[f″(x)] = −w²F_c(w) − √(2/π) f′(0)
d) F_s[f″(x)] = −w²F_s(w) + w√(2/π) f(0)
Fourier transforms of a function f (𝑥) can be derived from the complex Fourier integral
representation of 𝑓(𝑥) on the real line.
The Fourier transform denoted by 𝐹(𝑤) 𝑜𝑟 ℱ(𝑓(𝑥))of a function 𝑓(𝑥) is defined as
F(w) = (1/√(2π)) ∫_{−∞}^{∞} f(x) e^{−iwx} dx
The Fourier transform is linear, that is, for any functions f(x) and g(x) whose Fourier transforms exist and for any constants a, b,
ℱ(af + bg) = aℱ(f) + bℱ(g)
Laplace transform is a linear transform i.e. for a linear combination of functions we can
write
∫₀^∞ e^{−st}[αf(t) + βg(t)] dt = α∫₀^∞ e^{−st} f(t) dt + β∫₀^∞ e^{−st} g(t) dt
If f is piecewise continuous on [0, ∞) and of exponential order c, then ℒ{f(t)} exists for s > c.
If ℒ{𝑓(𝑡)} = 𝐹(𝑠) we then say 𝑓(𝑡) is the inverse Laplace transform of 𝐹(𝑠) and write
𝑓(𝑡) = ℒ −1 {𝐹(𝑠)}.
The inverse Laplace transform is also a linear transform, that is, for constants α and β and for functions F and G that are transforms of f and g respectively,
ℒ⁻¹{αF(s) + βG(s)} = αf(t) + βg(t)
An equation of the form f(t) = g(t) + ∫₀ᵗ f(τ)h(t − τ) dτ, where g(t) and h(t) are known functions, is called a Volterra integral equation for f(t).
Equations that involve both the integral of an unknown function and its derivatives are
called Integro-differential equations.
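The transform formulas in this summary can be verified numerically on a concrete function. This is a minimal sketch using f(x) = e⁻ˣ (our choice of test function, not from the text), whose Fourier cosine transform is the standard result √(2/π)·1/(1 + w²):

```python
import numpy as np
from scipy.integrate import quad

# Numerically evaluate F_c(w) = sqrt(2/pi) * integral_0^inf f(x) cos(wx) dx
# for f(x) = e^{-x}, and compare with the closed form sqrt(2/pi)/(1 + w^2).

def F_c_numeric(w):
    integrand = lambda x: np.exp(-x) * np.cos(w * x)
    val, _ = quad(integrand, 0, np.inf)
    return np.sqrt(2 / np.pi) * val

for w in (0.0, 1.0, 2.5):
    exact = np.sqrt(2 / np.pi) / (1 + w**2)
    assert abs(F_c_numeric(w) - exact) < 1e-6
```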
Miscellaneous Exercises
1. Find the Fourier cosine and Fourier sine transforms of each of the stated functions.
(a) f(x) = { sin x, 0 ≤ x ≤ π; 0, otherwise }
(b) f(x) = { cos x, 0 ≤ x ≤ π; 0, otherwise }
(c) f(x) = { x, 0 ≤ x ≤ 1; 2 − x, 1 ≤ x ≤ 2; 0, otherwise }
(d) f(x) = { 1 − x², 0 ≤ x < 1; 0, otherwise }
2. Find the Fourier sine transform of f(x) = e^{−ax}, a > 0, and prove that
∫₀^∞ (x sin αx)/(a² + x²) dx = (π/2) e^{−aα}, α > 0
3. Find the Fourier cosine and Fourier sine transforms of each of the following.
(a) f(x) = x^{α−1}, 0 < α < 1 (b) f(x) = xe^{−ax} (c) f(x) = e^{−x} cos x, x > 0
(g) f(x) = { x^a e^{−x}, x > 0; 0, x ≤ 0 }
5. Find the Fourier transform of f(x) = { 1 − x², |x| < 1; 0, |x| > 1 } and hence show that
∫₀^∞ ((x cos x − sin x)/x³) cos(x/2) dx = −3π/16
(d) f(t) = (eᵗ − e⁻ᵗ)² (e) f(t) = eᵗ sin 5t (f) f(t) = e²ᵗ(t − 1)² (g) f(t) = t¹⁰e⁻⁷ᵗ
(h) f(t) = (1 − eᵗ + 3e⁻⁴ᵗ) cos 5t (i) f(t) = e³ᵗ(9 − 4t + 10 sin(t/2))
(a) ℒ⁻¹{1/((s² + 1)(s² + 4))} (b) ℒ⁻¹{1/(s⁴ − 9)} (c) ℒ⁻¹{(s − 3)/((s − √3)(s + √3))} (d) ℒ⁻¹{(6s + 3)/(s⁴ + 5s² + 4)}
(e) ℒ⁻¹{1/(s + 2)³} (f) ℒ⁻¹{1/(s² + 2s + 5)} (g) ℒ⁻¹{(2s − 1)/(s²(s + 1)³)} (h) ℒ⁻¹{(s + 1)²/(s + 2)⁴}
8. Use the Laplace transform to solve the given initial-value and boundary value problems.
9. In each of the following use the convolution theorem to find the Laplace transform
(a) ℒ{∫₀ᵗ e^τ dτ} (b) ℒ{∫₀ᵗ e^{−τ} cos τ dτ} (c) ℒ{∫₀ᵗ τ sin τ dτ} (d) ℒ{∫₀ᵗ τe^{t−τ} dτ}
10. In each of the following use the Laplace transform to solve the given integral equation or Integro-
differential equation.
(a) f(t) = teᵗ + ∫₀ᵗ τf(t − τ) dτ (b) f(t) + 2∫₀ᵗ f(τ) cos(t − τ) dτ = 4e⁻ᵗ + sin t
(c) f(t) + ∫₀ᵗ f(τ) dτ = 1 (d) f(t) = cos t + ∫₀ᵗ e^{−τ} f(t − τ) dτ
(e) t − 2f(t) = ∫₀ᵗ (e^τ − e^{−τ})f(t − τ) dτ
(f) y′ + 4y = 4∫₀ᵗ sin τ y(t − τ) dτ, with y(0) = 1 (g) y′ + y = 4∫₀ᵗ e^{−2τ} y(t − τ) dτ, with y(0) = 3
(h) y″ − y = ∫₀ᵗ sinh τ y(t − τ) dτ, with y(0) = 4 (k) y″ − 4y = 2∫₀ᵗ sinh 2τ y(t − τ) dτ, with y(0) = 1
References
UNIT FIVE
VECTOR CALCULUS
Introduction
Vector calculus deals with the application of calculus operations to vectors (vector fields). We will often need to evaluate integrals, derivatives, and other operations that use integrals and derivatives. The rules needed for these evaluations constitute vector calculus. In particular, line, volume, and surface integration are important, as are directional derivatives. The relations defined here are very useful in the context of electromagnetics but, even without reference to electromagnetics, we will show that the definitions given here are simple extensions of familiar concepts and that they simplify a number of important aspects of calculation.
We will discuss in particular the ideas of line, surface, and volume integration, and the general ideas of gradient, divergence, and curl, as well as the divergence and Stokes' theorems. These notions are of fundamental importance for the understanding of electromagnetic fields.
Moreover, vector fields have many important applications, as they can be used to represent many physical quantities: the vector at a point may represent the strength of some force (gravity, electricity, magnetism) or a velocity (wind speed or the velocity of some other fluid).
Unit Objectives:
In this section, we are going to deal with the definition of scalar fields and vector fields by
considering various examples.
Section Objectives:
A two-dimensional vector field is a function f that maps each point (x, y) in R² to a two-dimensional vector ⟨u, v⟩, and similarly a three-dimensional vector field maps (x, y, z) to ⟨u, v, w⟩. Since a vector has no position, we typically indicate a vector field in graphical form by placing the vector f(x, y) with its tail at (x, y). For such a graph to be readable, the vectors must be fairly short, which is accomplished by using a different scale for the vectors than for the axes. Such graphs are thus useful for understanding the sizes of the vectors relative to each other but not their absolute size.
Definition: If to each point 𝑃 of a set 𝐷 ⊆ 𝑅 3 (𝑜𝑟𝑅 2 ) is assigned a scalar𝑓(𝑃), then a scalar
field is said to be defined in 𝐷 and the function 𝑓: 𝐷 ⟶ 𝑅 is called a scalar function (or a
scalar field). Likewise, if to each point 𝑃 in 𝐷 is assigned a vector 𝑭(𝑃) ∈ 𝑅 3 (𝑜𝑟𝑅 2 ) then a
vector field is said to be defined in 𝐷 and the vector-valued function 𝑭: 𝐷 ⟶ 𝑅 3 (𝑜𝑟𝑅 2 ) is called
a vector function (or a vector field).
If we introduce Cartesian coordinates x, y, z, then instead of f(P) we can write f(x, y, z), and instead of F(P) we can write F(x, y, z).
Note that a scalar field or a vector field arising from geometric or physical considerations must depend only on the points P where it is defined and not on the particular choice of Cartesian coordinates.
Example 5.1: The scalar function of position F(x, y, z) = xyz² for (x, y, z) inside the unit sphere x² + y² + z² = 1 defines a scalar field throughout the unit sphere.
Example 5.2 (Euclidean distance): Let D = R³ and f(P) = ‖PP₀‖, the distance of the point P from a fixed point P₀ in space. Then f(P) defines a scalar field in space. If we introduce a Cartesian coordinate system in which P₀ = (x₀, y₀, z₀), then
f(P) = √((x − x₀)² + (y − y₀)² + (z − z₀)²)
Note that the value of f(P) does not depend on the particular choice of Cartesian coordinate system.
The best way to picture a vector field is to draw the arrow representing the vector F(x, y) starting at the point (x, y). Of course, it is impossible to do this for all points (x, y), but we can gain a reasonable impression of F by doing it for a few representative points in D, as shown in the figure below. Since F(x, y) is a two-dimensional vector, we can write it in terms of its component functions.
Example 5.3: A vector field on 𝑅 2 is defined by 𝑭(𝑥, 𝑦) = −𝑦𝑖 + 𝑥𝑗. Describe 𝑭 by sketching
some of the vectors 𝑭(𝑥, 𝑦).
Solution: Since F(1, 0) = j, we draw the vector j = ⟨0, 1⟩ starting at the point (1, 0). Since F(0, 1) = −i, we draw the vector ⟨−1, 0⟩ with starting point (0, 1). Continuing in this way, we calculate several other representative values of F(x, y) in the table and draw the corresponding vectors.
It appears from Figure 5 that each arrow is tangent to a circle with center the origin. To confirm this, we take the dot product of the position vector x = xi + yj with the vector F(x) = F(x, y):
x · F(x) = x(−y) + y(x) = −xy + yx = 0
This shows that F(x, y) is perpendicular to the position vector ⟨x, y⟩ and is therefore tangent to the circle with center the origin and radius |x| = √(x² + y²). Notice also that
|F(x, y)| = √((−y)² + x²) = √(x² + y²) = |x|
so the magnitude of the vector F(x, y) is equal to the radius of the circle.
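The two properties just derived, perpendicularity and equal magnitude, can be checked numerically at a few sample points (the sample points are our own choice):

```python
import math

# Check that F(x, y) = (-y, x) is perpendicular to the position vector
# (x, y) and has the same magnitude as (x, y).

def F(x, y):
    return (-y, x)

for (x, y) in [(1, 0), (0, 1), (2, -3), (0.5, 0.25)]:
    fx, fy = F(x, y)
    assert x * fx + y * fy == 0                           # dot product is zero
    assert math.isclose(math.hypot(fx, fy), math.hypot(x, y))
```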
Some computer algebra systems are capable of plotting vector fields in two or three dimensions.
They give a better impression of the vector field than is possible by hand because the computer
can plot a large number of representative vectors. Figure 5.4 shows a computer plot of the vector
field in Example 1; Figures 5.5 and 5.6 show two other vector fields. Notice that the computer
scales the lengths of the vectors so they are not too long and yet are proportional to their true
lengths.
Solution: The sketch is shown in the figure below. Notice that all vectors are vertical: upward above the xy-plane and downward below it. The magnitude increases with the distance from the xy-plane.
We were able to draw the vector field in Example 2 by hand because of its particularly simple formula. Most three-dimensional vector fields, however, are virtually impossible to sketch by hand, and so we need to resort to a computer algebra system. Examples are shown in Figures 5.8 and 5.9. If the vector field in Figure 5.9 represents a velocity field, then a particle would be swept upward and would spiral around the z-axis in the clockwise direction as viewed from above.
Fig. 5.8: F(x, y, z) = yi + zj + xk  Fig. 5.9: F(x, y, z) = (y/z)i + (x/z)j + (z/4)k
Example 5.5: Newton’s Law of Gravitation states that the magnitude of the gravitational force
between two objects with masses 𝑚 and 𝑀 is
|F| = mMG/r²
where r is the distance between the objects and G is the gravitational constant. (This is an example of an inverse square law.) Let us assume that the object with mass M is located at the origin in R³. (For instance, M could be the mass of the earth and the origin would be at its center.) Let the position vector of the object with mass m be x = ⟨x, y, z⟩. Then r = |x|, so r² = |x|². The gravitational force exerted on this second object acts toward the origin, and the unit vector in this direction is
−x/|x|
Therefore the gravitational force acting on the object at x is
F(x) = −(mMG/|x|³) x  (∗)
[Physicists often use the notation 𝒓 instead of 𝒙 for the position vector, so you may see formula
(∗) written in the form 𝑭 = − (𝑚𝑀𝐺 ⁄𝒓3 )𝒓.] The function given by equation ∗ is an example of
a vector field, called the gravitational field, because it associates a vector [the force 𝑭(𝒙)] with
every point 𝒙 in the space.
Formula (∗) is a compact way of writing the gravitational field, but we can also write it in terms of its component functions by using the fact that x = xi + yj + zk and |x| = √(x² + y² + z²):
F(x, y, z) = −mMGx/(x² + y² + z²)^{3/2} i − mMGy/(x² + y² + z²)^{3/2} j − mMGz/(x² + y² + z²)^{3/2} k
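The component form above can be evaluated directly. In the sketch below we set m = M = G = 1 purely for illustration (these values are our own choice) and confirm the two defining properties: the magnitude is mMG/r² and the field points toward the origin.

```python
import math

# Evaluate the gravitational field F(x) = -(mMG/|x|^3) x componentwise
# and confirm |F| = mMG/r^2.  m = M = G = 1 here for illustration only.

m = M = G = 1.0

def grav(x, y, z):
    r2 = x * x + y * y + z * z
    c = -m * M * G / r2**1.5
    return (c * x, c * y, c * z)

Fx, Fy, Fz = grav(1.0, 2.0, 2.0)                 # here r = 3
mag = math.sqrt(Fx**2 + Fy**2 + Fz**2)
assert math.isclose(mag, 1.0 / 9.0)              # |F| = mMG / r^2
assert math.isclose(Fx, -1.0 / 27.0)             # F = -(1/27)(x, y, z): toward origin
```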
Exercise 5.1
Section Objectives:
In general, a function is a rule that assigns to each element in the domain an element in the range.
A vector-valued function, or vector function, is simply a function whose domain is a set of real
numbers and whose range is a set of vectors. We are most interested in vector functions r whose values are three-dimensional vectors. This means that for every number t in the domain of r there is a unique vector in V₃ denoted by r(t). If f(t), g(t) and h(t) are the components of the vector r(t), then f, g and h are real-valued functions called the component functions of r, and we can write
r(t) = ⟨f(t), g(t), h(t)⟩ = f(t)i + g(t)j + h(t)k
The limit of a vector function 𝒓 is defined by taking the limits of its component functions as
follows.
Definition: If r(t) = ⟨f(t), g(t), h(t)⟩, then
lim_{t→a} r(t) = ⟨lim_{t→a} f(t), lim_{t→a} g(t), lim_{t→a} h(t)⟩
Example 5.6: Find lim_{t→0} r(t), where r(t) = (1 + t³)i + te⁻ᵗ j + ((sin t)/t)k.
Solution According to the above definition, the limit of 𝒓 is the vector whose components are
the limits of the component functions of 𝒓:
lim_{t→0} r(t) = [lim_{t→0}(1 + t³)]i + [lim_{t→0} te⁻ᵗ]j + [lim_{t→0} (sin t)/t]k = i + k
Note: limits of vector functions obey the same rules as limits of real-valued functions.
In view of the above definition of limit, we see that 𝒓 is continuous at 𝑎 if and only if its
component function 𝑓, 𝑔 and ℎ are continuous at 𝑎.
Definition: The derivative r′ of a vector function r is defined in much the same way as for real-valued functions:
dr/dt = r′(t) = lim_{h→0} [r(t + h) − r(t)]/h
The following theorem gives us a convenient method for computing the derivative of a vector
function 𝒓: just differentiate each component of 𝒓
Theorem: If r(t) = ⟨f(t), g(t), h(t)⟩ = f(t)i + g(t)j + h(t)k, where f, g and h are differentiable functions, then
r′(t) = f′(t)i + g′(t)j + h′(t)k
Solution
According to the above theorem, we differentiate each component of r:
Differentiation Rules
The next theorem shows the differentiation formulas for real-valued functions have their
counterparts for vector-valued functions.
Theorem: suppose 𝒖 and 𝒗 are differentiable vector functions, 𝑐 is a scalar, and 𝑓 is a real-
valued function, then
1. d/dt [u(t) + v(t)] = u′(t) + v′(t)
2. d/dt [cu(t)] = cu′(t)
3. d/dt [f(t)u(t)] = f′(t)u(t) + f(t)u′(t)
4. d/dt [u(t) · v(t)] = u′(t) · v(t) + u(t) · v′(t)
5. d/dt [u(t) × v(t)] = u′(t) × v(t) + u(t) × v′(t)
6. d/dt [u(f(t))] = f′(t)u′(f(t))
Proof (Exercise)
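Rules like these are easy to test numerically before proving them. The sketch below checks rule 4 (the dot-product rule) with a central finite difference; the curves u(t) and v(t) are our own sample choices, not from the text.

```python
import math

# Numerical spot-check of rule 4:  d/dt [u . v] = u' . v + u . v'
# for sample curves u(t) = (t, t^2, 1) and v(t) = (sin t, cos t, t).

def u(t): return (t, t * t, 1.0)
def v(t): return (math.sin(t), math.cos(t), t)
def u_prime(t): return (1.0, 2 * t, 0.0)
def v_prime(t): return (math.cos(t), -math.sin(t), 1.0)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

t, h = 0.7, 1e-6
numeric = (dot(u(t + h), v(t + h)) - dot(u(t - h), v(t - h))) / (2 * h)
exact = dot(u_prime(t), v(t)) + dot(u(t), v_prime(t))
assert abs(numeric - exact) < 1e-6
```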
Integrals
The definite integral of a continuous vector function 𝒓(𝑡) can be defined in much the same way
as for real-valued functions except that the integral is a vector. But we can express the integral
of 𝒓 in terms of the integrals of its component functions 𝑓, 𝑔 and ℎ as follows.
∫ₐᵇ r(t) dt = lim_{n→∞} Σ_{i=1}^{n} r(tᵢ*) Δt
and so
∫ₐᵇ r(t) dt = (∫ₐᵇ f(t) dt)i + (∫ₐᵇ g(t) dt)j + (∫ₐᵇ h(t) dt)k
This means that we can evaluate integral of a vector function by integrating each component
function.
Note: 1. We can extend the fundamental theorem of calculus to continuous vector functions as
follows:
∫ₐᵇ r(t) dt = R(t)]ₐᵇ = R(b) − R(a)
where R is an antiderivative of r, that is, R′(t) = r(t).
2. We use the notation ∫ r(t) dt for indefinite integrals (antiderivatives).
Example 5.8: If r(t) = 2 cos t i + sin t j + 2t k, then
∫ r(t) dt = (∫ 2 cos t dt)i + (∫ sin t dt)j + (∫ 2t dt)k = 2 sin t i − cos t j + t²k + C
and
∫₀^{π/2} r(t) dt = [2 sin t i − cos t j + t²k]₀^{π/2} = 2i + j + (π²/4)k
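Since a vector integral is computed component by component, Example 5.8 can be checked by integrating each component numerically and comparing with 2i + j + (π²/4)k:

```python
import math
from scipy.integrate import quad

# Integrate each component of r(t) = (2 cos t, sin t, 2t) over [0, pi/2]
# and compare with the closed-form answer 2i + j + (pi^2/4)k.

comps = [lambda t: 2 * math.cos(t), math.sin, lambda t: 2 * t]
result = [quad(f, 0, math.pi / 2)[0] for f in comps]

expected = [2.0, 1.0, math.pi**2 / 4]
assert all(abs(a - b) < 1e-10 for a, b in zip(result, expected))
```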
Exercises
Overview:
In this section, we are going to introduce Curves, Arc length and Tangent by considering various
examples.
Section Objectives:
Curves
Vector calculus has important applications to curves and surfaces in physics and geometry. The
application of vector calculus to geometry is a field known as differential geometry.
Bodies that move in space form paths that may be represented by curves C. This and other applications show the need for parametric representations of C with parameter t, which may denote time or something else. A typical parametric representation is given by
r(t) = [x(t), y(t), z(t)] = x(t)i + y(t)j + z(t)k
Here 𝑡 is a parameter and 𝑥, 𝑦, 𝑧 are Cartesian coordinates that is the usual rectangular
coordinates. To each value 𝑡 = 𝑡0 there corresponds a point of 𝐶 with position vector 𝒓(𝑡0 )
whose coordinates are 𝑥(𝑡0 ), 𝑦(𝑡0 ), 𝑧(𝑡0 ).
Example 5.9: The line is the simplest curve in the plane, as its coordinate functions are linear. Explicitly, the curve
r(t) = (x₀ + ut, y₀ + vt)
is a straight line through the reference point p = r(0) = (x₀, y₀) in the direction v = (u, v). Here, t is the signed distance from the point r(t) on the line to p, as scaled by ‖v‖.
As shown in the figure above, the vector from p to a point (x, y) on the line must be either in the direction of (u, v) or in the opposite direction. Hence, the cross product of the two vectors must be zero, that is,
(𝑥 − 𝑥0 , 𝑦 − 𝑦0 ) × (𝑢, 𝑣) = 0
Expansion of the above cross product yields an implicit equation of the line that relates the 𝑥 and
𝑦 coordinates of every incident point:
𝑣𝑥 − 𝑢𝑦 − 𝑣𝑥0 + 𝑢𝑦0 = 0
Example 5.10: sketch and identify the curve defined by the parametric equations
𝑥 = 𝑡 2 − 2𝑡 𝑦 =𝑡+1
Solution: Here r(t) = [x(t), y(t)] = [t² − 2t, t + 1]. Each value of t gives a point on the curve, as shown in the table. For instance, if t = 0, then x = 0, y = 1, and so the corresponding point is (0, 1). In Fig. 5.13 we plot the points (x, y) determined by several values of the parameter and join them to produce a curve.
𝒕 𝒙 𝒚
-2 8 -1
-1 3 0
0 0 1
1 -1 2
2 0 3
3 3 4
4 8 5
Fig5.14 The graph of the curve 𝑥 = 𝑡 2 − 2𝑡 𝑦 =𝑡+1
Fig5.13 Tabular values of the curve 𝑥 = 𝑡 2 − 2𝑡 𝑦 = 𝑡 + 1
A particle whose position is given by the parametric equations moves along the curve in the direction of the arrows as t increases. Notice that the consecutive points marked on the curve appear at equal time intervals but not at equal distances. That is because the particle slows down and then speeds up as t increases.
It appears from Fig5.14 that the curve traced out by the particle may be a parabola. This can be
confirmed by eliminating the parameter 𝑡 as follows. We obtain 𝑡 = 𝑦 − 1 from the second
equation and substitute into the first equation. This gives
𝑥 = 𝑡 2 − 2𝑡 = (𝑦 − 1)2 − 2(𝑦 − 1) = 𝑦 2 − 4𝑦 + 3
And so the curve represented by the given parametric equation is the parabola
𝑥 = 𝑦 2 − 4𝑦 + 3.
Note: In the example above no restriction was placed on the parameter t, so we assumed that t could be any real number. But sometimes we restrict t to lie in a finite interval. For instance, the parametric curve
x = t² − 2t  y = t + 1  0 ≤ t ≤ 4
Shown in Fig 5.15 is the part of the parabola in the above example that starts at the point (0,1)
and ends at the point (8,5). The arrowhead indicates the direction in which the curve is traced
as 𝑡 increases from 0 to 4.
In general, the curve with parametric equations x = f(t), y = g(t), a ≤ t ≤ b, has initial point (f(a), g(a)) and terminal point (f(b), g(b)).
Example 5.11: What curve is represented by the parametric equations x = cos t, y = sin t, 0 ≤ t ≤ 2π?
Solution: if we plot points, it appears the curve is a circle. We can confirm this impression by
eliminating𝑡. Observe that
𝑥 2 + 𝑦 2 = 𝑐𝑜𝑠 2 𝑡 + 𝑠𝑖𝑛2 𝑡 = 1
Thus the point (x, y) moves on the unit circle x² + y² = 1. Notice that in this example the parameter t can be interpreted as the angle (in radians) shown in the figure. As t increases from 0 to 2π, the point (x, y) = (cos t, sin t) moves once around the circle in the counterclockwise direction starting from the point (1, 0).
Similarly, the parametric representation x = a cos t, y = b sin t, 0 ≤ t ≤ 2π, represents an ellipse in the xy-plane with center at the origin and principal axes in the direction of the x- and y-axes. In fact, since cos²t + sin²t = 1, we obtain from the above equations
x²/a² + y²/b² = 1,  z = 0
If b = a, then the ellipse becomes a circle of radius a.
Fig. 5.17 circle of the above example Fig5.18 ellipse of the above example
The curve r(t) = [a cos t, a sin t, ct] is called a circular helix; it lies on the cylinder x² + y² = a². If c > 0, the helix is shaped like a right-handed screw (fig.); if c < 0, it looks like a left-handed screw (fig.); if c = 0, it is a circle.
Fig 5.19 right-handed circular Helix Fig 5.20 Left-handed circular helix
A simple curve is a curve without multiple points, that is, without points at which the curve
intersects or touches itself. Circle and helix are simple curves. Fig 5.20 shows curves that are not
simple. An example is[𝑠𝑖𝑛2𝑡, 𝑐𝑜𝑠2𝑡, 0].Can you sketch it?
An arc of a curve is the portion between any two points of the curve. For simplicity, we say
“curve” for curves as well as for arcs.
Arc length
Recall from the application of integration that the length 𝐿 of a curve 𝐶 given in the form 𝑦 =
𝐹(𝑥), 𝑎 ≤ 𝑥 ≤ 𝑏, 𝐹 being continuous is given by
L = ∫ₐᵇ √(1 + (dy/dx)²) dx
Suppose that C can also be described by the parametric equations x = f(t) and y = g(t), α ≤ t ≤ β, where dx/dt = f′(t) > 0. This means that C is traversed once, from left to right, as t increases from α to β, and f(α) = a, f(β) = b. Putting dy/dx = (dy/dt)/(dx/dt) into the above formula and using the substitution rule, we obtain
L = ∫ₐᵇ √(1 + (dy/dx)²) dx = ∫_α^β √(1 + ((dy/dt)/(dx/dt))²) (dx/dt) dt
so that
L = ∫_α^β √((dx/dt)² + (dy/dt)²) dt
If the curve is in space, that is, if r(t) = (f(t), g(t), h(t)) where x = f(t), y = g(t) and z = h(t), then the arc length is
L = ∫ₐᵇ √((dx/dt)² + (dy/dt)² + (dz/dt)²) dt
Thus, using Leibniz notation, we have the following result, which has the same form as the last two formulas:
L = ∫ₐᵇ √((dx/dt)² + (dy/dt)²) dt or L = ∫ₐᵇ √((dx/dt)² + (dy/dt)² + (dz/dt)²) dt
Example 5.13: Find the arc length of the curve traced out by the end point of the vector function r(t) = 2t i + ln t j + t² k, 1 ≤ t ≤ e.
Solution: Here x = f(t) = 2t, y = g(t) = ln t and z = h(t) = t². Then, by the above formula, with dx/dt = 2, dy/dt = 1/t and dz/dt = 2t,
L = ∫₁^e √(2² + (1/t)² + (2t)²) dt = ∫₁^e √(4 + 1/t² + 4t²) dt = ∫₁^e √((4t⁴ + 4t² + 1)/t²) dt
= ∫₁^e √(((2t² + 1)/t)²) dt = ∫₁^e (2t² + 1)/t dt = ∫₁^e (2t + 1/t) dt = [t²]₁^e + [ln t]₁^e
⟹ L = (e² − 1) + ln e − ln 1 = e² − 1 + 1 − 0 = e²
Hence L = e².
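Arc lengths are easy to sanity-check by numerical quadrature. Using the parameterization read off from the solution (x = 2t, y = ln t, z = t² over [1, e]), the integral of the speed should come out to e²:

```python
import math
from scipy.integrate import quad

# Numerical check: arc length of r(t) = (2t, ln t, t^2), 1 <= t <= e,
# should equal e^2.

speed = lambda t: math.sqrt(4 + 1 / t**2 + 4 * t**2)
L, _ = quad(speed, 1, math.e)
assert abs(L - math.e**2) < 1e-6
```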
Example 5.13: Find the length of one arch of the cycloid x = r(θ − sin θ), y = r(1 − cos θ).
Solution: One arch of the cycloid is described by the parameter interval 0 ≤ θ ≤ 2π.
Since dx/dθ = r(1 − cos θ) and dy/dθ = r sin θ, we have
L = ∫₀^{2π} √((dx/dθ)² + (dy/dθ)²) dθ = ∫₀^{2π} √((r(1 − cos θ))² + (r sin θ)²) dθ
= ∫₀^{2π} √(r²(1 − cos θ)² + r² sin²θ) dθ = ∫₀^{2π} √(r²(1 − 2 cos θ + cos²θ) + r² sin²θ) dθ
= ∫₀^{2π} √(r²(1 − 2 cos θ + cos²θ + sin²θ)) dθ = ∫₀^{2π} √(r²(2 − 2 cos θ)) dθ
= r ∫₀^{2π} √(2(1 − cos θ)) dθ
To evaluate this integral we use the identity sin²x = ½(1 − cos 2x) with θ = 2x, which gives 1 − cos θ = 2 sin²(θ/2). Since 0 ≤ θ ≤ 2π, we have 0 ≤ θ/2 ≤ π and so sin(θ/2) ≥ 0. Therefore
L = r ∫₀^{2π} √(4 sin²(θ/2)) dθ = 2r ∫₀^{2π} sin(θ/2) dθ = 2r[−2 cos(θ/2)]₀^{2π} = 2r(2 + 2) = 8r
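The classical value 8r for one arch of the cycloid can be confirmed by direct numerical integration of the speed (here with r = 1 for illustration):

```python
import math
from scipy.integrate import quad

# Numerical check that one arch of the cycloid
# x = r(theta - sin theta), y = r(1 - cos theta) has length 8r (r = 1 here).

r = 1.0
speed = lambda th: math.sqrt((r * (1 - math.cos(th)))**2 + (r * math.sin(th))**2)
L, _ = quad(speed, 0, 2 * math.pi)
assert abs(L - 8 * r) < 1e-6
```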
Tangents
In the preceding section we saw that some curves defined by parametric equations x = f(t) and y = g(t) can also be expressed, by eliminating the parameter, in the form y = F(x). That is, if f′ is continuous and f′(t) ≠ 0 for a ≤ t ≤ b, then the parametric curve x = f(t), y = g(t), a ≤ t ≤ b, can be put in the form y = F(x). If we substitute x = f(t) and y = g(t) in the equation y = F(x), we get
g(t) = F(f(t))
Differentiating both sides with the chain rule gives g′(t) = F′(f(t))f′(t), so that, when f′(t) ≠ 0,
F′(x) = g′(t)/f′(t) ……………………………….(1)
Since the slope of the tangent to the curve 𝑦 = 𝐹(𝑥) at (𝑥, 𝐹(𝑥)) is 𝐹 ′ (𝑥), equation 1 enables us
to find tangents to parametric curves without having to eliminate the parameter. Using Leibniz
notation, we can, we can rewrite equation 1 in an easily remembered form.
dy/dx = (dy/dt)/(dx/dt)  if dx/dt ≠ 0 …………………………..(2)
It can be seen from Equation 2 that the curve has a horizontal tangent when dy/dt = 0 (provided that dx/dt ≠ 0) and a vertical tangent when dx/dt = 0 (provided that dy/dt ≠ 0). This information is useful for sketching parametric curves.
It is also useful to consider d²y/dx². This can be found by replacing y by dy/dx in Equation 2:
d²y/dx² = d/dx (dy/dx) = [d/dt (dy/dx)] / (dx/dt)
Example 5.14: A curve C is defined by the parametric equations x = t², y = t³ − 3t.
(a) Show that C has two tangents at the point (3, 0) and find their equations.
(b) Find the points on C where the tangent is horizontal or vertical.
Solution
(a) Notice that y = t³ − 3t = t(t² − 3) = 0 when t = 0 or t = ±√3. Therefore the point (3, 0) on C arises from two values of the parameter, t = √3 and t = −√3. This indicates that C crosses itself at (3, 0). Since
dy/dx = (dy/dt)/(dx/dt) = (3t² − 3)/(2t) = (3/2)(t − 1/t)
the slope of the tangent when t = ±√3 is dy/dx = ±6/(2√3) = ±√3, so the equations of the tangents at (3, 0) are
y = √3(x − 3) and y = −√3(x − 3)
(b) C has a horizontal tangent when dy/dx = 0, that is, when dy/dt = 0 and dx/dt ≠ 0. Since dy/dt = 3t² − 3, this happens when t² = 1, that is, t = ±1. The corresponding points on C are (1, −2) and (1, 2). C has a vertical tangent when dx/dt = 2t = 0, that is, t = 0. (Note that dy/dt ≠ 0 there.) The corresponding point on C is (0, 0).
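From the derivatives used in the solution (dx/dt = 2t, dy/dt = 3t² − 3), the curve is x = t², y = t³ − 3t, and the slope computations can be spot-checked numerically:

```python
import math

# Spot-check the tangent computations for x = t^2, y = t^3 - 3t:
# slope dy/dx = (3t^2 - 3)/(2t); slope sqrt(3) at t = sqrt(3),
# horizontal tangents at t = +/-1.

def point(t): return (t * t, t**3 - 3 * t)
def slope(t): return (3 * t * t - 3) / (2 * t)

t = math.sqrt(3)
x, y = point(t)
assert math.isclose(x, 3.0) and abs(y) < 1e-12      # the crossing point (3, 0)
assert math.isclose(slope(t), math.sqrt(3))          # tangent slope sqrt(3)
assert slope(1.0) == 0.0 and slope(-1.0) == 0.0      # horizontal tangents
```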
Example 5.15
(a) Find the tangent to the cycloid x = r(θ − sin θ), y = r(1 − cos θ) at the point where θ = π/3.
(b) At what points is the tangent horizontal? When it is vertical?
Solution
(a) The slope of the tangent line is
dy/dx = (dy/dθ)/(dx/dθ) = (r sin θ)/(r(1 − cos θ)) = sin θ/(1 − cos θ)
When θ = π/3, we have
x = r(π/3 − sin π/3) = r(π/3 − √3/2),  y = r(1 − cos π/3) = r/2
and dy/dx = sin(π/3)/(1 − cos(π/3)) = (√3/2)/(1 − 1/2) = √3
Therefore the slope of the tangent is √3 and its equation is
y − r/2 = √3(x − rπ/3 + r√3/2) or √3x − y = r(π/√3 − 2)
(b) The tangent is horizontal when 𝑑𝑦⁄𝑑𝑥 = 0, which occurs when sin 𝜃 = 0 and 1 − cos 𝜃 ≠
0, that is, 𝜃 = (2𝑛 − 1)𝜋, 𝑛 an integer. The corresponding point on the cycloid is
((2𝑛 − 1)𝜋𝑟, 2𝑟).
When 𝜃 = 2𝑛𝜋, both 𝑑𝑥⁄𝑑 𝜃 and 𝑑𝑦⁄𝑑 𝜃 are 0. It appears from the graph that there are
vertical tangents at those points. We can verify this by using L’Hospital’s rule as follows:
lim_{θ→2nπ⁺} dy/dx = lim_{θ→2nπ⁺} sin θ/(1 − cos θ) = lim_{θ→2nπ⁺} cos θ/sin θ = ∞
A similar computation shows that dy/dx → −∞ as θ → 2nπ⁻, so indeed there are vertical tangents when θ = 2nπ, that is, when x = 2nπr.
Exercise 5.2
1. Find parametric equations for the circle with center (ℎ, 𝑘) and radius 𝑟
𝑥 = 𝑠𝑖𝑛2𝑡 𝑦 = 𝑐𝑜𝑠2𝑡 0 ≤ 𝑡 ≤ 2𝜋
i) 𝑥 = 3𝑡 − 5, 𝑦 = 2𝑡 + 1 ii) 𝑥 = 𝑡 2 − 2, 𝑦 = 5 − 2𝑡, −3 ≤ 𝑡 ≤ 4
4. In each of the following find an equation of the tangent to the curve at the point corresponding to the
given value of the parameter.
(a) 𝑥 = 𝑡 4 + 1, 𝑦 = 𝑡 3 + 𝑡; 𝑡 = −1
(b) 𝑥 = 𝑡 − 𝑡 −1 , 𝑦 = 1 + 𝑡 2 ; 𝑡=1
(c) x = e^{√t}, y = t − ln t²; t = 1
Section Objectives:
Upon successful completion of this chapter, the student will be able to:
The gradient is one of the simplest and most important types of vector field. We may have noticed that performing a partial derivative is very much like taking the derivative in a particular direction; i.e., the partial derivative ∂f/∂x measures the rate of increase, or the slope, of the function f in the x direction. Since there are only three coordinate directions in three-dimensional space, there is a neat and elegant way of summarizing all the information about how the function is increasing: we simply put all the partial derivatives of the function into a vector. This vector is known as the gradient of the function.
Definition: Let 𝑓: 𝑅 3 → 𝑅 be a scalar field, that is a function of three variables. The gradient
of 𝑓, denoted ∇𝑓, is the vector field given by
∇f = ⟨∂f/∂x, ∂f/∂y, ∂f/∂z⟩ = (∂f/∂x)i + (∂f/∂y)j + (∂f/∂z)k
where the operator ∇ is
∇ = (∂/∂x)i + (∂/∂y)j + (∂/∂z)k
∇𝑓 is a vector field
∇𝑓 measures the rate of increase of the scalar function 𝑓 in each of the three coordinate
directions.
∇f points in the direction in which f increases the most.
= 2𝑥𝒊 + 2𝑦𝒋 + 3z
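A gradient computed by hand can always be checked against central finite differences. The scalar field below, f(x, y, z) = x²y + sin z, is our own sample choice for illustration:

```python
import math

# Finite-difference check of the gradient for the sample field
# f(x, y, z) = x^2 y + sin z, whose gradient is (2xy, x^2, cos z).

def f(x, y, z): return x * x * y + math.sin(z)
def grad_f(x, y, z): return (2 * x * y, x * x, math.cos(z))

p, h = (1.0, 2.0, 0.5), 1e-6
numeric = []
for i in range(3):
    step = [h if j == i else 0.0 for j in range(3)]
    plus = f(*(c + d for c, d in zip(p, step)))
    minus = f(*(c - d for c, d in zip(p, step)))
    numeric.append((plus - minus) / (2 * h))     # central difference in axis i

for n, e in zip(numeric, grad_f(*p)):
    assert abs(n - e) < 1e-6
```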
Divergence and curl are two measurements of vector fields that are very useful in a variety of
applications. Both are most easily understood by thinking of the vector field as representing a
flow of a liquid or gas; that is, each vector in the vector field should be interpreted as a velocity
vector. Roughly speaking, divergence measures the tendency of the fluid to collect or disperse at a point, and curl measures the tendency of the fluid to swirl around the point. Divergence is a scalar, that is, a single number, while curl is itself a vector. The magnitude of the curl measures how much the fluid is swirling; the direction indicates the axis around which it tends to swirl.
These are best interpreted in terms of the velocity field of a fluid flow. The divergence is the rate
of expansion of the fluid at a point. The curl is a vector describing the rotation of the fluid near
the point (the direction of the curl is the axis of rotation and the magnitude is a measure of the
rate of rotation). The flow is called incompressible if its divergence is zero, and irrotational if
its curl is zero.
∂f ∂f ∂f
∇f = 〈 , , 〉
∂x ∂y ∂z
A useful mnemonic for the divergence and Curl is , let
∂ ∂ ∂
∇= 〈 , , 〉,
∂x ∂y ∂z
That is, we pretend that ∇ is a vector with rather odd-looking entries. We can then think of the gradient as
∇f = ⟨∂/∂x, ∂/∂y, ∂/∂z⟩ f = ⟨∂f/∂x, ∂f/∂y, ∂f/∂z⟩
and, for a vector field F = ⟨f, g, h⟩, of the divergence as
∇ · F = ⟨∂/∂x, ∂/∂y, ∂/∂z⟩ · ⟨f, g, h⟩ = ∂f/∂x + ∂g/∂y + ∂h/∂z.
The curl of 𝑭 is
        | i     j     k    |
∇ × F = | ∂/∂x  ∂/∂y  ∂/∂z | = ⟨∂h/∂y − ∂g/∂z, ∂f/∂z − ∂h/∂x, ∂g/∂x − ∂f/∂y⟩
        | f     g     h    |
Here are two simple but useful facts about divergence and curl.
i. ∇ · (∇ × F) = 0.
In words, this says that the divergence of the curl is zero.
ii. ∇ × (∇f) = 0.
That is, the curl of a gradient is the zero vector. Recalling that gradients are
conservative vector fields, this says that the curl of a conservative vector field is the
zero vector. Under suitable conditions, it is also true that if the curl of 𝐹 is 0 then 𝐹 is
conservative.
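Both identities can be verified symbolically for concrete fields. The field F and the potential φ below are our own sample choices; the identities hold for any sufficiently smooth choice:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

def curl(F):
    f, g, h = F
    return (sp.diff(h, y) - sp.diff(g, z),
            sp.diff(f, z) - sp.diff(h, x),
            sp.diff(g, x) - sp.diff(f, y))

def div(F):
    f, g, h = F
    return sp.diff(f, x) + sp.diff(g, y) + sp.diff(h, z)

F = (x * y**2, sp.sin(z), x + y * z)             # sample vector field
phi = x**2 * y + sp.exp(z) * y                   # sample potential

assert sp.simplify(div(curl(F))) == 0            # div(curl F) = 0
grad_phi = (sp.diff(phi, x), sp.diff(phi, y), sp.diff(phi, z))
assert all(sp.simplify(c) == 0 for c in curl(grad_phi))   # curl(grad phi) = 0
```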
Example: Find the divergence at the point (2, 1, −1) of the vector field F(x, y, z) = x³y²z i + x²z j + x²y k.
Solution: The divergence of 𝐹 is
div F(x, y, z) = ∂/∂x [x³y²z] + ∂/∂y [x²z] + ∂/∂z [x²y] = 3x²y²z
At the point (2, 1, −1), the divergence is
div F(2, 1, −1) = 3(2²)(1²)(−1) = −12
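The solution differentiates x³y²z in the first component, so div F = 3x²y²z and div F(2, 1, −1) = 3(4)(1)(−1) = −12. A finite-difference sketch of that computation:

```python
# Finite-difference check of the divergence of
# F = (x^3 y^2 z, x^2 z, x^2 y), whose divergence is 3 x^2 y^2 z.

def F(x, y, z):
    return (x**3 * y**2 * z, x**2 * z, x**2 * y)

def div_numeric(x, y, z, h=1e-6):
    # central differences of each component along its own axis
    return ((F(x + h, y, z)[0] - F(x - h, y, z)[0]) +
            (F(x, y + h, z)[1] - F(x, y - h, z)[1]) +
            (F(x, y, z + h)[2] - F(x, y, z - h)[2])) / (2 * h)

assert abs(div_numeric(2.0, 1.0, -1.0) - (-12.0)) < 1e-5
```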
Piecewise Smooth Curves
A classic property of gravitational fields is that, subject to certain physical constraints, the work
done by gravity on an object moving between two points in the field is independent of the path taken by the object. One of the constraints is that the path must be a piecewise smooth curve. Recall that a plane curve C given by
𝐫(t) = x(t)𝐢 + y(t)𝐣, a≤t≤b
is smooth if
dx/dt and dy/dt
are continuous on [a, b] and not simultaneously 0 on (a, b). Similarly, a space curve C given by
r(t) = x(t)i + y(t)j + z(t)k,  a ≤ t ≤ b
is smooth if
dx/dt, dy/dt and dz/dt
are continuous on [a, b] and not simultaneously 0 on (a, b). A curve C is piecewise smooth if the interval [a, b] can be partitioned into a finite number of subintervals, on each of which C is smooth.
Example 5.19: Find a piecewise smooth parameterizations of the graph of 𝐶 shown in figure
below
Solution: Since 𝐶 consists of three line segments 𝐶1 , 𝐶2 𝑎𝑛𝑑 𝐶3 , you can construct a smooth
parameterization for each segment and piece them together by making the last 𝑡-value in 𝐶𝑖
Correspond to the first t-value in 𝐶𝑖+1 , as follows.
𝐶1 : 𝑥(𝑡) = 0, 𝑦(𝑡) = 2𝑡, 𝑧(𝑡) = 0 , 0≤𝑡≤1
𝐶2 ∶ 𝑥(𝑡) = 𝑡 − 1, 𝑦(𝑡) = 2 , 𝑧(𝑡) = 0, 1≤𝑡≤2
𝐶3 : 𝑥(𝑡) = 1, 𝑦(𝑡) = 2 , 𝑧(𝑡) = 𝑡 − 2, 2≤𝑡 ≤3
Section Objectives:
Line Integrals
Introduction: In this section, we consider a new kind of integral, the line integral. This new kind of integral will be defined as a limit of sums in the same general way that single integrals are defined. An ordinary single integral
𝑏
∫ 𝑓(𝑥) 𝑑𝑥
𝑎
is an integral of a function which is defined along a line segment (an interval of a co –ordinate
axis). There is a corresponding kind of integral for a function which is defined along a curve.
Such an integral might well be called a curvilinear integral; the usual name is line integral, where
line means, in general, a curved line.
Definition: If f is defined in a region containing a smooth curve C of finite length, then the line integral of f along C is given by
∫_C f(x, y) ds = lim_{‖Δ‖→0} Σ_{i=1}^{n} f(xᵢ, yᵢ) Δsᵢ  (plane)
If C is given by r(t) = x(t)i + y(t)j, a ≤ t ≤ b, then
∫_C f(x, y) ds = ∫ₐᵇ f(x(t), y(t)) √([x′(t)]² + [y′(t)]²) dt.
If C is given in space by r(t) = x(t)i + y(t)j + z(t)k, a ≤ t ≤ b, then
∫_C f(x, y, z) ds = ∫ₐᵇ f(x(t), y(t), z(t)) √([x′(t)]² + [y′(t)]² + [z′(t)]²) dt.
Example: Evaluate
∫_C (x² − y + 3z) ds
where C is the line segment shown in Figure 5.1.
Figure 5.1
𝑥 = 𝑡, 𝑦 = 2𝑡 , and 𝑧 = 𝑡 , 0≤𝑡≤1
Therefore,
x′(t) = 1, y′(t) = 2 and z′(t) = 1
This implies that
∫_C (x² − y + 3z) ds = ∫₀¹ (t² − 2t + 3t)√(1² + 2² + 1²) dt = √6 ∫₀¹ (t² + t) dt = √6 [t³/3 + t²/2]₀¹ = 5√6/6
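The solution reduces the integrand to t² + t with ds = √6 dt, so the result is easy to confirm by numerical quadrature:

```python
import math
from scipy.integrate import quad

# Numerical check of the line integral: with x = t, y = 2t, z = t on [0, 1],
# the integrand x^2 - y + 3z becomes t^2 + t and ds = sqrt(6) dt.

integrand = lambda t: (t**2 - 2 * t + 3 * t) * math.sqrt(1 + 4 + 1)
val, _ = quad(integrand, 0, 1)
assert abs(val - 5 * math.sqrt(6) / 6) < 1e-10
```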
Figure 5.2
Solution: Begin by integrating up the line y = x, using the following parameterization:
C₁: x = t, y = t, 0 ≤ t ≤ 1
= [−(1/8)(2/3)(1 + 4(1 − t)²)^{3/2}]₀¹
= (1/12)(5^{3/2} − 1)
Consequently,
∫_C x ds = ∫_{C₁} x ds + ∫_{C₂} x ds = √2/2 + (1/12)(5^{3/2} − 1) ≅ 1.56
that is acting in the same direction as that in which the object is moving (or the opposite
direction). This means that at each point on 𝐶, you can consider the projection 𝑭. 𝑻 of the
force vector 𝑭 onto the unit tangent vector 𝑻. On a small subarc of length ∆𝑠𝑖 , the increment
of work is
ΔWᵢ = (force)(distance) ≅ [F(xᵢ, yᵢ, zᵢ) · T(xᵢ, yᵢ, zᵢ)] Δsᵢ
where (xᵢ, yᵢ, zᵢ) is a point in the i-th subarc. Consequently, the total work done is given by the following integral:
W = ∫_C F(x, y, z) · T(x, y, z) ds
This line integral appears in other contexts and is the basis of the following definition of the
line integral of a vector field . Note in this definition
F · T ds = F · (r′(t)/‖r′(t)‖) ‖r′(t)‖ dt = F · r′(t) dt = F · dr
Definition: Let F be a continuous vector field defined on a smooth curve C given by r(t), a ≤ t ≤ b. The line integral of F on C is given by
∫_C F · dr = ∫_C F · T ds = ∫ₐᵇ F(x(t), y(t), z(t)) · r′(t) dt
Figure 5.4
Solution: since
𝒓(𝑡) = 𝑥(𝑡)𝒊 + 𝑦(𝑡)𝒋 + 𝑧(𝑡)𝒌 = 𝒓(𝑡) = 𝑐𝑜𝑠𝑡𝒊 + 𝑠𝑖𝑛𝑡𝒋 + 𝑡𝒌
It follows that
𝑥(𝑡) = 𝑐𝑜𝑠𝑡, 𝑦(𝑡) = 𝑠𝑖𝑛𝑡, and 𝑧(𝑡) = 𝑡.
So, the force field can be written as
F(x(t), y(t), z(t)) = −(1/2) cos t i − (1/2) sin t j + (1/4) k
To find the work done by the force field in moving a particle along the curve 𝐶, use the fact that
𝒓′(𝑡) = −𝑠𝑖𝑛𝑡𝒊 + 𝑐𝑜𝑠𝑡𝒋 + 𝒌
and write the following.
W = ∫_C 𝑭 · d𝒓.
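The rest of this computation is not shown above, but the dot product 𝑭 · r′(t) can be evaluated numerically. In the sketch below we assume the parameter range 0 ≤ t ≤ 3π (an assumption on our part; the range is not stated in the excerpt), under which F · r′(t) simplifies to the constant 1/4 and W = 3π/4.

```python
import math

def work(n=100_000):
    # Assumed parameter range 0 <= t <= 3*pi for the helix r(t) = (cos t, sin t, t)
    a, b = 0.0, 3 * math.pi
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        t = a + (i + 0.5) * h
        F = (-0.5 * math.cos(t), -0.5 * math.sin(t), 0.25)  # force field on C
        dr = (-math.sin(t), math.cos(t), 1.0)               # r'(t)
        total += sum(f * d for f, d in zip(F, dr)) * h
    return total

print(work(), 3 * math.pi / 4)  # F . r'(t) = 1/4, so W = 3*pi/4 under our assumed range
```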
Proof: A proof is provided only for a smooth curve. For piecewise smooth curves, the procedure
is carried out separately on each smooth portion. Because
𝑭(x, y) = ∇f(x, y) = f_x(x, y) 𝒊 + f_y(x, y) 𝒋,
It follows that
∫_C F · dr = ∫_a^b F · (dr/dt) dt = ∫_a^b [f_x(x, y) (dx/dt) + f_y(x, y) (dy/dt)] dt.
Using Chain rule, we have
∫_C F · dr = ∫_a^b (d/dt)[f(x(t), y(t))] dt = f(x(b), y(b)) − f(x(a), y(a)).
The last step is an application of the Fundamental Theorem of Calculus, that is,
∫_a^b f′(x) dx = f(b) − f(a). █
In space, the Fundamental Theorem of Line Integrals takes the following form. Let 𝑐 be a
piecewise smooth curve lying in an open region 𝑄 and given by
𝒓(𝑡) = 𝑥(𝑡)𝒊 + 𝑦(𝑡)𝒋 + 𝑧(𝑡)𝒌, 𝑎 ≤ 𝑡 ≤ 𝑏.
Figure 5.4
Solution: 𝑭 is the gradient of 𝑓, where
f(x, y) = x²y − y²/2 + K.
Consequently, 𝑭 is conservative and by the Fundamental Theorem of Line Integrals, it follows
that
∫_C 𝑭 · d𝒓 = f(1, 2) − f(−1, 4)
= [1²(2) − 2²/2] − [(−1)²(4) − 4²/2]
= 4.
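Because 𝑭 is conservative, the value 4 must come out along any path joining (−1, 4) to (1, 2). The sketch below is our own check: it takes 𝑭 = ∇f = (2xy, x² − y) (the gradient of the potential above) and integrates it numerically along a straight-line path, comparing against f(1, 2) − f(−1, 4).

```python
def f(x, y):
    return x * x * y - y * y / 2  # potential from the example (constant K dropped)

def line_integral_F(n=200_000):
    # F = grad f = (2xy, x^2 - y); straight-line path from (-1, 4) to (1, 2)
    ax, ay, bx, by = -1.0, 4.0, 1.0, 2.0
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        x, y = ax + (bx - ax) * t, ay + (by - ay) * t
        Fx, Fy = 2 * x * y, x * x - y
        total += (Fx * (bx - ax) + Fy * (by - ay)) * h
    return total

print(line_integral_F(), f(1, 2) - f(-1, 4))  # both equal 4
```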
Answer, ∫𝐶 𝑭. 𝑑𝒓 = 17
Green’s Theorem
We now come to the first of three important theorems that extend the Fundamental Theorem of
Calculus to higher dimensions. (The Fundamental Theorem of Line Integrals has already done
this in one way, but in that case we were still dealing with an essentially one-dimensional
integral.) They all share with the Fundamental Theorem the following rather vague description:
To compute a certain sort of integral over a region, we may do a computation on the boundary
of the region that involves one fewer integration. Note that this does indeed describe the
Fundamental Theorem of Calculus and the Fundamental Theorem of Line Integrals: to compute
a single integral over an interval, we do a computation on the boundary (the endpoints) that
involves one fewer integration, namely, no integrations at all.
In this section, we will study Green’s Theorem, named after the English mathematician George
Green (1793-1841). This theorem states that the value of a double integral over a simply
connected plane region 𝑅 is determined by the value of a line integral around the boundary of
𝑅. A curve 𝐶 given by 𝑟(𝑡) = 𝑥(𝑡)𝒊 + 𝑦(𝑡)𝒋, where 𝑎 ≤ 𝑡 ≤ 𝑏, is simple if it does not
cross itself, that is , 𝑟(𝑐) ≠ 𝑟(𝑑) for all 𝑐 and 𝑑 in the open interval (𝑎, 𝑏). A plane region
𝑅 is simply connected if every simple closed curve in 𝑅 encloses only points that are in 𝑅.
∫_C M dx + N dy = ∬_R (∂N/∂x − ∂M/∂y) dA. █
To indicate that an integral ∫_C is being done over a closed curve in the counterclockwise direction, we usually write ∮_C. We also use the notation ∂D to mean the boundary of D oriented in the counterclockwise direction, with ∮_C = ∫_{∂D}.
Proof: A proof is given only for a region that is both vertically simple and horizontally simple,
as shown in figure below.
Figure
∫_C M dx = ∫_{C₁} M dx + ∫_{C₂} M dx = ∫_a^b M(x, f₁(x)) dx + ∫_b^a M(x, f₂(x)) dx
= ∫_a^b [M(x, f₁(x)) − M(x, f₂(x))] dx.
On the other hand,
∬_R (∂M/∂y) dA = ∫_a^b ∫_{f₁(x)}^{f₂(x)} (∂M/∂y) dy dx
= ∫_a^b M(x, y) |_{f₁(x)}^{f₂(x)} dx
= ∫_a^b [M(x, f₂(x)) − M(x, f₁(x))] dx.
Consequently, ∫_C M dx = −∬_R (∂M/∂y) dA.
Similarly, you can use g₁(y) and g₂(y) to show that ∫_C N dy = ∬_R (∂N/∂x) dA. By adding the integrals ∫_C M dx and ∫_C N dy, you obtain the conclusion stated in the theorem.
▄
Example 5.26:
Use Green’s Theorem to evaluate the line integral
∫_C y³ dx + (x³ + 3xy²) dy,
Where 𝐶 is the path from (0,0) to (1,1) along the graph of 𝑦 = 𝑥 3 and from (1,1) to (0,0)
along the graph of 𝑦 = 𝑥 as shown in figure below
𝑪 is simple and closed, and the region 𝑅 always lies to the left of 𝐶
Figure 5.5
Solution:
Since M = y³ and N = x³ + 3xy², it follows that
∂N/∂x = 3x² + 3y²  and  ∂M/∂y = 3y².
Applying Green's Theorem, you then have
∫_C y³ dx + (x³ + 3xy²) dy = ∬_R (3x² + 3y² − 3y²) dA = ∬_R 3x² dA
= ∫₀¹ ∫_{x³}^{x} 3x² dy dx = ∫₀¹ (3x³ − 3x⁵) dx = [3x⁴/4 − x⁶/2]₀¹ = 1/4.
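Both sides of Green's Theorem can be verified numerically for this example. The sketch below (our own helper) integrates M dx + N dy along the closed path, up the cubic and back down the line, and compares it with the double integral of ∂N/∂x − ∂M/∂y = 3x² over the enclosed region x³ ≤ y ≤ x.

```python
def seg(x, y, n=40_000):
    """Integrate M dx + N dy along t -> (x(t), y(t)) with M = y^3, N = x^3 + 3xy^2."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        xt, yt = x(t), y(t)
        dx = (x(t + 1e-6) - x(t - 1e-6)) / 2e-6
        dy = (y(t + 1e-6) - y(t - 1e-6)) / 2e-6
        total += (yt ** 3 * dx + (xt ** 3 + 3 * xt * yt ** 2) * dy) * h
    return total

# C: up along y = x^3 from (0,0) to (1,1), back along y = x to (0,0)
line = seg(lambda t: t, lambda t: t ** 3) + seg(lambda t: 1 - t, lambda t: 1 - t)

# Green's Theorem side: integral of 3x^2 over the strip x^3 <= y <= x, 0 <= x <= 1
double = sum(3 * x * x * (x - x ** 3)
             for x in ((i + 0.5) / 4000 for i in range(4000))) / 4000
print(line, double)  # both close to 1/4
```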
𝑭(x, y) = y³ 𝒊 + (x³ + 3xy²) 𝒋, where 𝐶 is the circle x² + y² = 9 shown in Figure 5.6.
Figure 5.6
Solution: From the previous example (using Green's Theorem), we have
∫_C y³ dx + (x³ + 3xy²) dy = ∬_R 3x² dA.
In polar coordinates, with x = r cos θ and dA = r dr dθ, the work done is
W = ∬_R 3x² dA = ∫₀^{2π} ∫₀³ 3(r cos θ)² r dr dθ
= 3 ∫₀^{2π} ∫₀³ r³ cos²θ dr dθ
= 3 ∫₀^{2π} (r⁴/4)|₀³ cos²θ dθ
= 3 ∫₀^{2π} (81/4) cos²θ dθ
= (243/8) ∫₀^{2π} (1 + cos 2θ) dθ
= (243/8) [θ + (sin 2θ)/2]₀^{2π} = 243π/4.
0
Note: When evaluating line integrals over closed curves, remember that for conservative vector fields (those for which ∂N/∂x = ∂M/∂y), the value of the line integral is 0. This is easily seen from Green's Theorem:
∫_C M dx + N dy = ∬_R (∂N/∂x − ∂M/∂y) dA = ∬_R 0 dA = 0.
Example 5.28: Evaluate the line integral
∫_C y³ dx + 3xy² dy, where 𝐶 is a closed path.
Solution: From this line integral, M = y³ and N = 3xy². So, ∂N/∂x = 3y² and ∂M/∂y = 3y².
This implies that the vector field 𝑭 = 𝑀𝒊 + 𝑁𝒋, is conservative, and because 𝐶 is closed, you
can conclude that
∫_C y³ dx + 3xy² dy = 0.
Example 5.29: Using a line integral, find the area of the ellipse
x²/a² + y²/b² = 1.
Solution: We can induce a counterclockwise orientation to the elliptical path by letting
x = a cos t and y = b sin t, 0 ≤ t ≤ 2π.
So the area is
A = (1/2) ∫_C x dy − y dx = (1/2) ∫₀^{2π} [(a cos t)(b cos t) dt − (b sin t)(−a sin t) dt]
= (ab/2) ∫₀^{2π} (cos²t + sin²t) dt = (ab/2) [t]₀^{2π} = πab.
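The area formula A = (1/2)∮ x dy − y dx can be exercised directly in code. The sketch below (our own helper) evaluates the parametric line integral for an ellipse with a = 3, b = 2 and compares it with πab = 6π.

```python
import math

def parametric_area(a, b, n=100_000):
    """A = (1/2) * closed integral of (x dy - y dx) for x = a cos t, y = b sin t."""
    h = 2 * math.pi / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        x, y = a * math.cos(t), b * math.sin(t)
        dx, dy = -a * math.sin(t), b * math.cos(t)  # x'(t), y'(t)
        total += 0.5 * (x * dy - y * dx) * h
    return total

print(parametric_area(3, 2), math.pi * 3 * 2)  # both = 6*pi
```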
curl 𝑭 = ∇ × 𝑭 = | 𝒊  𝒋  𝒌 ;  ∂/∂x  ∂/∂y  ∂/∂z ;  M  N  0 |
= −(∂N/∂z) 𝒊 + (∂M/∂z) 𝒋 + (∂N/∂x − ∂M/∂y) 𝒌.
Consequently,
(curl 𝑭) · 𝒌 = [−(∂N/∂z) 𝒊 + (∂M/∂z) 𝒋 + (∂N/∂x − ∂M/∂y) 𝒌] · 𝒌 = ∂N/∂x − ∂M/∂y.
With appropriate conditions on 𝑭, 𝐶, and 𝑅, we can write Green's Theorem in the vector form
∫_C 𝑭 · d𝒓 = ∬_R (∂N/∂x − ∂M/∂y) dA = ∬_R (curl 𝑭) · 𝒌 dA.    First alternative form
figure
𝑻 = cos θ 𝒊 + sin θ 𝒋
𝒏 = cos(θ + π/2) 𝒊 + sin(θ + π/2) 𝒋 = −sin θ 𝒊 + cos θ 𝒋
𝑵 = sin θ 𝒊 − cos θ 𝒋
We can see that the outward unit normal vector 𝑵 can then be written as
𝑵 = y′(s) 𝒊 − x′(s) 𝒋.
Consequently, for 𝑭(x, y) = M𝒊 + N𝒋, we can apply Green's Theorem to obtain
∫_C 𝑭 · 𝑵 ds = ∫_a^b (M𝒊 + N𝒋) · (y′(s)𝒊 − x′(s)𝒋) ds = ∫_C M dy − N dx = ∬_R (∂M/∂x + ∂N/∂y) dA = ∬_R div 𝑭 dA.
Therefore,
∫𝐶 𝑭 ∙ 𝑵 𝑑𝑠 = ∬𝑅 𝑑𝑖𝑣 𝑭 𝑑𝐴. Second alternative form
The extension of this form to three dimensions is called the Divergence Theorem.
Exercises 3.3
Section Objectives:
Definition: Let S be a surface given by 𝑧 = 𝑔(𝑥, 𝑦) and let 𝑅 be its projection on to the 𝑥𝑦 −
plane. Suppose that 𝑔, 𝑔𝑥 , 𝑎𝑛𝑑 𝑔𝑦 are continuous at all points in R and that 𝑓 is defined on S.
Evaluate 𝑓 at a point (x_i, y_i, z_i) in each patch of 𝑆 and form the sum
Σ_{i=1}^{n} f(x_i, y_i, z_i) ΔS_i,
where ΔS_i ≈ √(1 + [g_x(x_i, y_i)]² + [g_y(x_i, y_i)]²) ΔA_i. Provided the limit of this sum as ‖Δ‖ approaches 0 exists, it is called the surface integral of 𝑓 over 𝑆.
Fig. 5.6.1.1
Let 𝑆 be a surface with equation z = g(x, y) and let 𝑅 be its projection onto the 𝑥𝑦-plane. If 𝑔, g_x, and g_y are continuous at all points in 𝑅 and 𝑓 is defined on 𝑆, then
∬_S f(x, y, z) dS = ∬_R f(x, y, g(x, y)) √(1 + [g_x(x, y)]² + [g_y(x, y)]²) dA.
Remark 1: If 𝑆 is the graph of y = g(x, z) and 𝑅 is its projection onto the 𝑥𝑧-plane, then
∬_S f(x, y, z) dS = ∬_R f(x, g(x, z), z) √(1 + [g_x(x, z)]² + [g_z(x, z)]²) dA.
Remark 2: If 𝑆 is the graph of x = g(y, z) and 𝑅 is its projection onto the 𝑦𝑧-plane, then
∬_S f(x, y, z) dS = ∬_R f(g(y, z), y, z) √(1 + [g_y(y, z)]² + [g_z(y, z)]²) dA.
Evaluate the surface integral
∬_S (y² + 2yz) dS,
where 𝑆 is the first-octant portion of the plane 2x + y + 2z = 6.
Solution: Begin by writing 𝑆 as z = (1/2)(6 − 2x − y), so g(x, y) = (1/2)(6 − 2x − y).
Using the partial derivatives g_x(x, y) = −1 and g_y(x, y) = −1/2,
you can write √(1 + [g_x(x, y)]² + [g_y(x, y)]²) = √(1 + 1 + 1/4) = 3/2.
So,
∬_S (y² + 2yz) dS = ∬_R f(x, y, g(x, y)) √(1 + [g_x(x, y)]² + [g_y(x, y)]²) dA
= ∬_R [y² + 2y · (1/2)(6 − 2x − y)] (3/2) dA
= 3 ∫₀³ ∫₀^{2(3−x)} y(3 − x) dy dx = 6 ∫₀³ (3 − x)³ dx = [−(3/2)(3 − x)⁴]₀³ = 243/2.
Taking z = g(x, y) = √(9 − y²), we have g_x(x, y) = 0 and g_y(x, y) = −y/√(9 − y²), and obtain
√(1 + [g_x(x, y)]² + [g_y(x, y)]²) = √(1 + y²/(9 − y²)) = 3/√(9 − y²).    figure
Now Theorem 5.6.1 does not apply directly, because g_y is not continuous when y = 3.
However, you can apply Theorem 5.6.1 for 0 ≤ b < 3 and then take the limit as b approaches 3, as follows.
∬_S (x + z) dS = lim_{b→3⁻} ∫₀^b ∫₀⁴ (x + √(9 − y²)) (3/√(9 − y²)) dx dy
= lim_{b→3⁻} 3 ∫₀^b ∫₀⁴ (x/√(9 − y²) + 1) dx dy
= lim_{b→3⁻} 3 ∫₀^b [x²/(2√(9 − y²)) + x]₀⁴ dy
= lim_{b→3⁻} 3 ∫₀^b (8/√(9 − y²) + 4) dy
= lim_{b→3⁻} 3 [4y + 8 arcsin(y/3)]₀^b
= lim_{b→3⁻} 3 (4b + 8 arcsin(b/3)) = 36 + 24(π/2) = 36 + 12π.
If r(u, v) = x(u, v)𝒊 + y(u, v)𝒋 + z(u, v)𝒌 is a parametric surface defined over a region 𝐷 in the 𝑢𝑣-plane, you can show that the surface integral of f(x, y, z) over 𝑆 is given by
∬_S f(x, y, z) dS = ∬_D f(x(u, v), y(u, v), z(u, v)) ‖r_u(u, v) × r_v(u, v)‖ dA.
Note: ds and dS can be written as ds = ‖r′(t)‖ dt and dS = ‖r_u(u, v) × r_v(u, v)‖ dA.
∬_S (x + z) dS
Fig.5.5.1.4
where 0 ≤ x ≤ 4 and 0 ≤ θ ≤ π/2. To evaluate the surface integral in parametric form, we have
r_x × r_θ = | 𝒊 𝒋 𝒌 ; 1 0 0 ; 0 −3 sin θ 3 cos θ | = −3 cos θ 𝒋 − 3 sin θ 𝒌,  so ‖r_x × r_θ‖ = 3.
∬_S (x + z) dS = ∫₀⁴ ∫₀^{π/2} (x + 3 sin θ) · 3 dθ dx = ∫₀⁴ ∫₀^{π/2} (3x + 9 sin θ) dθ dx
= ∫₀⁴ [3xθ − 9 cos θ]₀^{π/2} dx
= ∫₀⁴ ((3π/2)x + 9) dx
= [(3π/4)x² + 9x]₀⁴ = 12π + 36.
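The parametric evaluation just shown can be reproduced numerically: sum the integrand times ‖r_x × r_θ‖ = 3 over a grid in the (x, θ) parameter rectangle. The sketch below (our own helper) uses the same parameterization r(x, θ) = (x, 3 cos θ, 3 sin θ).

```python
import math

def surface_integral(n=800):
    """Approximate the surface integral of (x + z) over the quarter cylinder."""
    hx, ht = 4 / n, (math.pi / 2) / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * hx
        for j in range(n):
            th = (j + 0.5) * ht
            z = 3 * math.sin(th)
            total += (x + z) * 3 * hx * ht  # ||r_x x r_theta|| = 3
    return total

print(surface_integral(), 12 * math.pi + 36)  # both close to 73.699
```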
Orientation of a Surface
Unit normal vectors are used to induce an orientation to a surface 𝑆 in space. A surface is called orientable if a unit normal vector 𝑵 can be defined at every nonboundary point of 𝑆 in such a way that the normal vectors vary continuously over the surface 𝑆. If this is possible, 𝑆 is called an oriented surface.
Most common surfaces, such as spheres, paraboloids, ellipses, and planes, are orientable.
Moreover, for an orientable surface, the gradient vector provides a convenient way to find a unit normal vector. That is, for an orientable surface 𝑆 given by z = g(x, y),
let G(x, y, z) = z − g(x, y). Then 𝑆 can be oriented by either the unit normal vector
𝑵 = ∇G/‖∇G‖ = (−g_x(x, y)𝒊 − g_y(x, y)𝒋 + 𝒌)/√(1 + [g_x(x, y)]² + [g_y(x, y)]²)   (upward)
or
𝑵 = −∇G/‖∇G‖ = (g_x(x, y)𝒊 + g_y(x, y)𝒋 − 𝒌)/√(1 + [g_x(x, y)]² + [g_y(x, y)]²)   (downward).
Fig.5.6.1.6
If the smooth orientable surface 𝑆 is given in parametric form by 𝑟(𝑢, 𝑣) = 𝑥(𝑢, 𝑣)𝑖 +
𝑦(𝑢, 𝑣)𝑗 + 𝑧(𝑢, 𝑣)𝑘
the unit normal vectors are given by 𝑵 = (r_u × r_v)/‖r_u × r_v‖.
Note: Suppose that the orientable surface is given by y = g(x, z) or x = g(y, z). Then, for G(x, y, z) = y − g(x, z), you can use the gradient vector ∇G(x, y, z) = −g_x(x, z)𝒊 + 𝒋 − g_z(x, z)𝒌.
Flux Integrals
On a small patch of the surface, the velocity field 𝑭 of the fluid is nearly constant, and the amount of fluid crossing the patch per unit of time is approximately 𝑭 · 𝑵 ΔS. Consequently, the volume of fluid crossing the surface 𝑆 per unit of time (called the flux of 𝑭 across 𝑆) is given by the surface integral in the following definition.
Definition: Let F(x, y, z) = M𝒊 + N𝒋 + P𝒌, where M, N, and P have continuous first partial derivatives on the surface 𝑆 oriented by a unit normal vector 𝑵. The flux integral of 𝑭 across 𝑆 is given by
∬_S 𝑭 · 𝑵 dS.
Geometrically, a flux integral is the surface integral over 𝑆 of the normal component of 𝑭. If ρ(x, y, z) is the density of the fluid at (x, y, z), the flux integral
∬_S ρ 𝑭 · 𝑵 dS
represents the mass of the fluid flowing across 𝑆 per unit of time. To evaluate a flux integral for a
surface given by 𝑧 = 𝑔(𝑥, 𝑦), let 𝐺(𝑥, 𝑦, 𝑧) = 𝑧 − 𝑔(𝑥, 𝑦) then, 𝑁𝑑𝑆 can be written as follows.
N dS = (∇G(x, y, z)/‖∇G(x, y, z)‖) dS
     = (∇G(x, y, z)/√(1 + [g_x(x, y)]² + [g_y(x, y)]²)) √(1 + [g_x(x, y)]² + [g_y(x, y)]²) dA
     = ∇G(x, y, z) dA.
Let 𝑆 be an oriented surface given by z = g(x, y) and let 𝑅 be its projection onto the 𝑥𝑦-plane. Then
∬_S 𝑭 · 𝑵 dS = ∬_R 𝑭 · (−g_x(x, y)𝒊 − g_y(x, y)𝒋 + 𝒌) dA
and
∬_S 𝑭 · 𝑵 dS = ∬_R 𝑭 · (g_x(x, y)𝒊 + g_y(x, y)𝒋 − 𝒌) dA.
For the first integral, the surface is oriented upward, and for the second integral, the surface is oriented downward.
Example 5.33: Using a flux integral to find the rate of mass flow.
Solution: For the velocity field 𝑭 = x𝒊 + y𝒋 + z𝒌 and the surface z = g(x, y) = 4 − x² − y² over the disk x² + y² ≤ 4, oriented upward, the rate of mass flow is
ρ ∬_S 𝑭 · 𝑵 dS = ρ ∬_R [2x² + 2y² + (4 − x² − y²)] dA
= ρ ∬_R (4 + x² + y²) dA
= ρ ∫₀^{2π} ∫₀² (4 + r²) r dr dθ    (polar coordinates)
= ρ ∫₀^{2π} 12 dθ = 24πρ.
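The polar double integral above can be confirmed numerically (taking ρ = 1 for the check). The sketch below (our own helper) sums (4 + r²) r over a grid in (r, θ) and compares the result against 24π.

```python
import math

def mass_flow(n=600):
    """Approximate the flux integral of (4 + x^2 + y^2) over x^2 + y^2 <= 4, rho = 1."""
    hr, ht = 2 / n, 2 * math.pi / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * hr
        for j in range(n):
            total += (4 + r * r) * r * hr * ht  # polar area element r dr dtheta
    return total

print(mass_flow(), 24 * math.pi)  # both close to 75.398
```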
parabolic cylinder 𝑆: 𝑦 = 𝑥 2 , 0 ≤ 𝑥 ≤ 2, 0 ≤ 𝑧 ≤ 3
Hence a representation of S is
r(u, v) = [u, u², v]    (0 ≤ u ≤ 2, 0 ≤ v ≤ 3),
with normal vector N = r_u × r_v = [1, 2u, 0] × [0, 0, 1] = [2u, −1, 0].
On S, writing simply F(S) for F[r(u, v)], we have F(S) = (3v², 6, 6uv), so F(S) · N = 6uv² − 6. Therefore
∬_S F · N dA = ∫₀³ ∫₀² (6uv² − 6) du dv = ∫₀³ (12v² − 12) dv = [4v³ − 12v]₀³ = 108 − 36 = 72.
Example 5.35: Find the flux integral ∬_S F · N dA when F = (x², 0, 3y²) and S is the portion of the plane x + y + z = 1 in the first octant (Fig. 5.6.1.10).
Fig.5.6.1.10
We obtain the first-octant portion S of this plane by restricting x = u and y = v to the projection R of S in the 𝑥𝑦-plane. R is the triangle bounded by the two coordinate axes and the straight line x + y = 1, obtained from x + y + z = 1 by setting z = 0. Thus 0 ≤ x ≤ 1 − y, 0 ≤ y ≤ 1.
With r(u, v) = [u, v, 1 − u − v], the normal vector is N = r_u × r_v = [1, 0, −1] × [0, 1, −1] = [1, 1, 1], and F(S) · N = u² + 3v². Hence
∬_S F · N dA = ∫₀¹ ∫₀^{1−v} (u² + 3v²) du dv = ∫₀¹ [(1 − v)³/3 + 3v²(1 − v)] dv = 1/12 + 1/4 = 1/3.
Exercises 3.3
e. 𝐹 = [y², x², z²], 𝑆: z = 4√(x² + y²), 0 ≤ z ≤ 8, y ≥ 0
2. Find the flux of F through S,∬𝑆 𝐹⦁𝑁 𝑑𝑆,where N is the upward unit normal vector to S
In this section we discuss another “big” integral theorem, the divergence theorem, which
transforms surface integrals into triple integrals. So let us begin with a review of the latter.
A triple integral is an integral of a function 𝑓 (𝑥, 𝑦, 𝑧) taken over a closed bounded, three-
dimensional region T in space. (Note that “closed” and “bounded” are defined in the same way
with “sphere” substituted for “circle”). Triple integrals can be evaluated by three successive
integrations. This is similar to the evaluation of double integrals by two successive integrations.
Triple integrals can be transformed into surface integrals over the boundary surface of a region in
space and conversely. Such a transformation is of practical interest because one of the two kinds
of integral is often simpler than the other. It also helps in establishing fundamental equations in
fluid flow, heat conduction, etc., as we shall see. The transformation is done by the divergence
theorem, which involves the divergence of a vector function
𝐹 = [𝐹1 , 𝐹2 , 𝐹3 ] = 𝐹1 𝑖 + 𝐹2 𝑗 + 𝐹3 𝑘
Let T be a closed bounded region in space whose boundary is a piecewise smooth orientable
surface S. Let 𝐹 (𝑥, 𝑦, 𝑧) be a vector function that is continuous and has continuous first partial
derivatives in some domain containing T. Then
∭_T div F dV = ∬_S F · n dA.
In components of F = [F₁, F₂, F₃] and of the outer unit normal vector n = [cos α, cos β, cos γ] of 𝑆, this becomes
∭_T (∂F₁/∂x + ∂F₂/∂y + ∂F₃/∂z) dx dy dz = ∬_S (F₁ cos α + F₂ cos β + F₃ cos γ) dA.
Fig.5.6.2.1
The form of the surface suggests that we introduce polar coordinates r, θ defined by x = r cos θ, y = r sin θ. Then
∭_T div F dV = 5 ∫_{z=0}^{b} ∫_{θ=0}^{2π} ∫_{r=0}^{a} (r² cos²θ) r dr dθ dz
= 5 ∫_{z=0}^{b} ∫_{θ=0}^{2π} (a⁴/4) cos²θ dθ dz = 5 ∫_{z=0}^{b} (πa⁴/4) dz = (5π/4) a⁴ b.
Now on 𝑆 we have x = 2 cos v cos u, z = 2 sin v, so that F = [7x, 0, −z] becomes on 𝑆
F(S) = [14 cos v cos u, 0, −2 sin v],
and F(S) · N = (14 cos v cos u)(4 cos²v cos u) + (−2 sin v)(4 cos v sin v) = 56 cos³v cos²u − 8 cos v sin²v.
The integral of cos v sin²v equals (sin³v)/3, and that of cos³v = cos v (1 − sin²v) equals sin v − (sin³v)/3. On 𝑆 we have −π/2 ≤ v ≤ π/2, so that by substituting these limits we get
56π(2 − 2/3) − 16π(2/3) = 64π.
Exercises 3.3
The divergence theorem has many important applications: In fluid flow, it helps characterize
sources and sinks of fluids. In heat flow, it leads to the heat equation. In potential theory, it gives
properties of the solutions of Laplace’s equation. In this section, we assume that the region T and
its boundary surface S are such that the divergence theorem applies.
Overview
In this subtopic we are going to learn that we can transform surface integrals into line integrals and, conversely, line integrals into surface integrals, using Stokes's Theorem, and we will see examples.
Section Objectives:
A second higher-dimension analog of Green's Theorem is called Stokes's Theorem, after the English mathematical physicist George Gabriel Stokes. In addition to making contributions to physics, Stokes worked with infinite series and differential equations, as well as with the integration result presented in this section.
Stokes's Theorem gives the relationship between a surface integral over an oriented surface 𝑆 and a line integral along a closed space curve 𝐶 forming the boundary of 𝑆, as shown in Fig. 5.7.1.1. The positive direction along 𝐶 is counterclockwise relative to the normal vector 𝑵. That is, if you imagine grasping the normal vector 𝑵 with your right hand, with your thumb pointing in the direction of 𝑵, your fingers will point in the positive direction along 𝐶, as shown in Fig. 5.7.1.2.
Let S be a piecewise smooth oriented surface with unit normal vector N, and let the boundary of
S be a piecewise smooth simple closed curve C. Let 𝐹(𝑥, 𝑦, 𝑧) be a continuous vector function
that has continuous first partial derivatives in a domain in space containing S. Then
∫_C F · dr = ∬_S (curl F) · N dS,
where curl F = | i j k ; ∂/∂x ∂/∂y ∂/∂z ; F₁ F₂ F₃ |.
OR
Theorem 5.7.1: Stokes’s Theorem (Transformation between Surface and Line Integrals)
Let S be a piecewise smooth oriented surface in space and let the boundary of S be a piecewise smooth
simple closed curve C. Let 𝐹(𝑥, 𝑦, 𝑧) be a continuous vector function that has continuous first partial
derivatives in a domain in space containing S. Then
Here n is a unit normal vector of S and, depending on 𝒏, the integration around C is taken in the sense
shown in Fig. 5.6.4. Furthermore, 𝑟 ′ = 𝑑𝑟⁄𝑑𝑠 is the unit tangent vector and 𝑠 the arc length of C. In
components, formula (*) becomes
Here, 𝐹 = [𝐹1 , 𝐹2 , 𝐹3 ], 𝑁 = [𝑁1 , 𝑁2 , 𝑁3 ], 𝑛𝑑𝑎 = 𝑁𝑑𝑢𝑑𝑣 , 𝑟 ′ 𝑑𝑠 = [𝑑𝑥, 𝑑𝑦, 𝑑𝑧], and R is the region
with boundary curve 𝐶̅ in the 𝑢𝑣-plane corresponding to S represented by 𝑟 (𝑢, 𝑣)
curl F = | i j k ; ∂/∂x ∂/∂y ∂/∂z ; F₁ F₂ F₃ |    (see Fig. 5.7.1.3)
curl F = | i j k ; ∂/∂x ∂/∂y ∂/∂z ; −y²  z  x | = −i − j + 2y k.
Considering z = 6 − 2x − 2y = g(x, y), you can use Theorem 5.6.2 for an upward normal vector, N dA = (−g_x i − g_y j + k) dA = (2i + 2j + k) dA, to obtain
∫ 𝐹⦁ 𝑑𝑟 = ∬(𝑐𝑢𝑟𝑙 𝐹)⦁𝑁𝑑𝑠
𝐶 𝑆
= ∫₀³ ∫₀^{3−y} (2y − 4) dx dy
= ∫₀³ (−2y² + 10y − 12) dy
= [−(2y³)/3 + 5y² − 12y]₀³ = −9.
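Stokes's Theorem says the same value must come from the line integral of F = (−y², z, x) around the boundary of the surface. Assuming the boundary is the triangle where the plane 2x + 2y + z = 6 meets the first octant, traversed counterclockwise as seen from above (our reading of the setup, which is not fully shown in the excerpt), the check below sums F · dr over the three edges.

```python
def segment(p, q, n=50_000):
    """Integrate F . dr along the straight segment from p to q, F = (-y^2, z, x)."""
    total = 0.0
    h = 1.0 / n
    dr = (q[0] - p[0], q[1] - p[1], q[2] - p[2])
    for i in range(n):
        t = (i + 0.5) * h
        x = p[0] + dr[0] * t
        y = p[1] + dr[1] * t
        z = p[2] + dr[2] * t
        F = (-y * y, z, x)
        total += sum(f * d for f, d in zip(F, dr)) * h
    return total

# Vertices where 2x + 2y + z = 6 meets the coordinate axes (assumed boundary)
A, B, C = (3, 0, 0), (0, 3, 0), (0, 0, 6)
W = segment(A, B) + segment(B, C) + segment(C, A)
print(W)  # close to -9, matching the surface integral
```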
Solution: The curl of F is given by
curl F = | i j k ; ∂/∂x ∂/∂y ∂/∂z ; −y√(x² + y²)  x√(x² + y²)  0 | = 3√(x² + y²) k.
is the circle r(s) = [cos s, sin s, 0]. Its unit tangent vector is r′(s) = [−sin s, cos s, 0]. The function F = [y, z, x] on C is F(r(s)) = [sin s, 0, cos s], so
∫_C F · dr = ∫₀^{2π} (−sin²s) ds = −π.
The surface-integral side reduces to
∫_{θ=0}^{2π} (−(2/3)(cos θ + sin θ) − 1/2) dθ = 0 + 0 − (1/2)(2π) = −π,
in agreement with Stokes's Theorem.
Solution: As a surface S bounded by C we can take the plane circular disk x² + y² ≤ 4 in the plane z = −3.
Hence (curl F) · n is simply the component of curl F in the positive 𝑧-direction. Since F with z = −3 has the components F₁ = y, F₂ = −27x, F₃ = 3y³, we thus obtain
(curl F) · n = ∂F₂/∂x − ∂F₁/∂y = −27 − 1 = −28.
This constant is what appears in
∬_S (∂F₂/∂x − ∂F₁/∂y) dA = ∮_C F₁ dx + F₂ dy,
Green's Theorem in the plane as a special case of Stokes's Theorem.
Note: If curl F = 0 throughout a region R, the rotation of F about each unit normal N is 0.
Path dependence of line integrals is practically and theoretically so important that we formulate it
as a theorem.
for every pair of endpoints A, B in the domain D, (1) has the same value for all paths in D from A to B, if and only if
F = grad f,  thus  F₁ = ∂f/∂x,  F₂ = ∂f/∂y,  F₃ = ∂f/∂z.    (2)
Proof: We assume that (2) holds for some function 𝑓 in D and show that this implies path independence. Let C be any path in D from any point A to any point B in D, given by r(t) = [x(t), y(t), z(t)], where a ≤ t ≤ b. Then from (2) and the chain rule,
∫_C (F₁ dx + F₂ dy + F₃ dz) = ∫_C ((∂f/∂x) dx + (∂f/∂y) dy + (∂f/∂z) dz)
= ∫_a^b ((∂f/∂x)(dx/dt) + (∂f/∂y)(dy/dt) + (∂f/∂z)(dz/dt)) dt
= ∫_a^b (df/dt) dt = f[x(t), y(t), z(t)] |_{t=a}^{t=b} = f(B) − f(A).
Example 5.42: Show that the integral ∫𝐶 𝐹(𝑟)⦁𝑑𝑟 = ∫𝐶(2𝑥𝑑𝑥 + 2𝑦𝑑𝑦 + 4𝑧𝑑𝑧) is path
independent in any domain in space and find its value in the integration from A: (0, 0, 0) to B:
(2, 2, 2).
Solution: With f(x, y, z) = x² + y² + 2z² we have
∂f/∂x = 2x = F₁,  ∂f/∂y = 2y = F₂,  ∂f/∂z = 4z = F₃.
Hence the integral is independent of path according to the theorem above, and its value is f(2, 2, 2) − f(0, 0, 0) = 4 + 4 + 8 = 16.
If you want to check this, use the most convenient path C: r(t) = [t, t, t], 0 ≤ t ≤ 2, on which F(r(t)) = [2t, 2t, 4t], so that F(r(t)) · r′(t) = 2t + 2t + 4t = 8t, and integration from 0 to 2 gives
∫₀² 8t dt = [4t²]₀² = 16.
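Path independence can also be demonstrated by integrating along two genuinely different paths from A to B and seeing the same value. The sketch below (our own helper) evaluates ∫_C (2x dx + 2y dy + 4z dz) along the straight line and along a curved path, both from (0, 0, 0) to (2, 2, 2); the curved path is our own choice for illustration.

```python
def path_integral(path, n=100_000):
    """Integrate (2x dx + 2y dy + 4z dz) along path: t -> (x, y, z), t in [0, 1]."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        x, y, z = path(t)
        xp, yp, zp = path(t + 1e-6)
        xm, ym, zm = path(t - 1e-6)
        dx, dy, dz = (xp - xm) / 2e-6, (yp - ym) / 2e-6, (zp - zm) / 2e-6
        total += (2 * x * dx + 2 * y * dy + 4 * z * dz) * h
    return total

straight = path_integral(lambda t: (2 * t, 2 * t, 2 * t))
curved = path_integral(lambda t: (2 * t, 2 * t * t, 2 * t ** 3))  # another path A -> B
print(straight, curved)  # both equal 16
```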
To find a potential f for F = [3x², 2yz, y²], we must have
f_x = F₁ = 3x²,  f_y = F₂ = 2yz,  f_z = F₃ = y².
Integrating f_x with respect to x gives f = x³ + g(y, z). Then
f_y = g_y = 2yz, so g = y²z + h(z), and f_z = y² + h′(z) = y², so h′ = 0 and we may take h = 0. Hence f = x³ + y²z.
1. ∫_{(π/2, π)}^{(π, 1)} ((1/2) cos(x/2) cos 2y dx − 2 sin(x/2) sin 2y dy)
2. ∫_{(4,0)}^{(6,1)} e^{4y} (2x dx + 4x² dy)
3. ∫_{(0,0,π)}^{(2, 1/2, π/2)} e^{xy} (y sin z dx + x sin z dy + cos z dz)
4. ∫_{(0,0,0)}^{(1,1,0)} e^{x²+y²+z²} (x dx + y dy + z dz)
5. ∫_{(0,2,3)}^{(1,1,1)} (yz sinh xz dx + cosh xz dy + xy sinh xz dz)
Unit Summary
Vector calculus deals with the application of calculus operations on vectors (vector fields).
If 𝐷 is a subset of 𝑅ⁿ, then a scalar field in 𝐷 is a function f: D ⟶ 𝑅 and a vector field in 𝐷 is a function 𝑭: D ⟶ 𝑅ⁿ.
A curve in 𝑅² (or 𝑅³) is a differentiable function 𝒓: [a, b] ⟶ 𝑅² (or 𝑅³). The initial point is 𝒓(a) and the final point is 𝒓(b). The domain of the curve is the interval [a, b].
If a curve C is described by the parametric equation 𝑥 = 𝑓 (𝑡), 𝑦 = 𝑔(𝑡) and z = ℎ(𝑡),
𝑎 ≤ 𝑡 ≤ 𝑏, where 𝑓 ′ , 𝑔′ and ℎ′ are continuous on [𝑎, 𝑏] and C is traversed exactly once as
𝑡 increases from 𝑎 to 𝑏, then the length 𝐿 of C is.
L = ∫_a^b √((dx/dt)² + (dy/dt)²) dt   or   L = ∫_a^b √((dx/dt)² + (dy/dt)² + (dz/dt)²) dt
∇ = (∂/∂x) 𝒊 + (∂/∂y) 𝒋 + (∂/∂z) 𝒌
Let S be a surface given by z = g(x, y) and let 𝑅 be its projection onto the 𝑥𝑦-plane. Suppose that g, g_x, and g_y are continuous at all points in R and that f is defined on S. Then
∬_S f(x, y, z) dS = ∬_R f(x, y, g(x, y)) √(1 + [g_x]² + [g_y]²) dA,
and the flux integral of F across S is
∬_S F · N dS.
Let S be a piecewise smooth oriented surface with unit normal vector N, and let the
boundary of S be a piecewise smooth simple closed curve C. Let 𝐹(𝑥, 𝑦, 𝑧) be a
continuous vector function that has continuous first partial derivatives in a domain in
space containing S. Then
∫_C F · dr = ∬_S (curl F) · N dS,
where curl F = | i j k ; ∂/∂x ∂/∂y ∂/∂z ; F₁ F₂ F₃ |.
Miscellaneous Exercises
12. Evaluate line integral, with 𝑭 and 𝐶 as given , by the method that seems most suitable.
Recall that if 𝑭 is a force, the integral gives the work done in a displacement along 𝐶.
a. 𝑭 = [𝑥 2 , 𝑦 2 , 𝑧 2 ]
𝐶the straight line segment from (4,1,8) to (0,2,3)
b. 𝑭 = [𝑦𝑧, 2𝑧𝑥, 𝑥𝑦],
𝐶The circle 𝑥 2 + 𝑦 2 = 9, 𝑧 = 1, counterclockwise
c. 𝑭 = [𝑠𝑖𝑛𝜋𝑦, 𝑐𝑜𝑠𝜋𝑥, 𝑠𝑖𝑛𝜋𝑥],
𝐶the boundary of 0 ≤ 𝑥 ≤ 1⁄2 , 0 ≤ 𝑦 ≤ 2, 𝑧 = 2𝑥
d. 𝑭 = [𝑥 − 𝑦, 0, 𝑒 𝑥 ]
𝐶: 𝑦 = 3𝑥 2 , 𝑧 = 2𝑥for𝑥 from 0 to 2
13. Using Green's Theorem evaluate the line integral
a. ∫_C (x² − y²) dx + 2xy dy,  C: x² + y² = 16
b. ∫_C e^x cos 2y dx − 2e^x sin 2y dy,  C: x² + y² = a²
c. ∫_C (x − 3y) dx + (x + y) dy,
   C: boundary of the region lying between the graphs of x² + y² = 1 and x² + y² = 9
14. Evaluate the integral ∬_S (curl 𝑭) ∙ 𝒏 dA directly for the given 𝑭 and S.
a. 𝑭 = [4𝑧 2 , 16𝑥, 0], 𝑆: 𝑧 = 𝑦 (0 ≤ 𝑥 ≤ 1, 0 ≤ 𝑦 ≤ 1)
b. 𝑭 = [0, 0, 5x cos z], 𝑆: x² + y² = 4, y ≥ 0, 0 ≤ z ≤ π/2
c. 𝑭 = [−𝑒 𝑦 , 𝑒 𝑧 , 𝑒 𝑥 ], 𝑆: 𝑧 = 𝑥 + 𝑦 (0 ≤ 𝑥 ≤ 1, 0 ≤ 𝑦 ≤ 1)
d. 𝑭 = [3𝑐𝑜𝑠𝑦, 𝑐𝑜𝑠ℎ𝑧, 𝑥], 𝑆 the square 0 ≤ 𝑥 ≤ 2, 0 ≤ 𝑦 ≤ 2, 𝑧 = 4
15. Calculate this integral by Stokes’s theorem, clockwise as seen by a person standing at the
origin, for the following 𝐹and 𝐶. Assume the Cartesian coordinates to be right handed.
a. 𝑭 = [−3𝑦, 3𝑥, 𝑧], 𝐶 the circle 𝑥 2 + 𝑦 2 = 4, 𝑧 = 1
b. 𝑭 = [4𝑧, −2𝑥, 2𝑥], 𝐶 the intersection of 𝑥 2 + 𝑦 2 = 1 and 𝑧 = 𝑦 + 1
c. 𝑭 = [𝑦 2 , 𝑥 2 , −𝑥 + 𝑧], around the triangle with vertices(0,0,1), (1,0,1), (1,1,1)
d. 𝑭 = [𝑦, 𝑥𝑦 3 , −𝑧𝑦 3 ] , 𝐶 the circle 𝑥 2 + 𝑦 2 = 𝑎2 , 𝑧 = 𝑏 (> 0)
16. Evaluate the surface integral directly or, if possible, by the divergence theorem.
a. 𝑭 = [2𝑥 2 , 4𝑦, 0], 𝑆 = 𝑥 + 𝑦 + 𝑧 = 1, 𝑥 ≥ 0, 𝑦 ≥ 0, 𝑧 ≥ 0
b. 𝑭 = [𝑦, −𝑥, 0], 𝑆 = 3𝑥 + 2𝑦 + 𝑧 = 6, 𝑥 ≥ 0, 𝑦 ≥ 0, 𝑧 ≥ 0
c. 𝑭 = [𝑥 − 𝑦, 𝑦 − 𝑧, 𝑧 − 𝑥], 𝑆 the sphere of radius 5 and center 0
d. 𝑭 = [𝑦 2 , 𝑥 2 , 𝑧 2 ], 𝑆the surface of 𝑥 2 + 𝑦 2 ≤ 4, 0 ≤ 𝑧 ≤ 5
e. 𝑭 = [𝑦 3 , 𝑥 3 , 3𝑧 2 ], 𝑆 the portion of the paraboloid 𝑧 = 𝑥 2 + 𝑦 2 , 𝑧 ≤ 4
References
3. A. Ganesh et al., Engineering Mathematics II, New Age International Press, 2009.
5. Salas, Hille, Etgen, Calculus – One and Several Variables, 10th ed., Wiley.
8. Kaplan, W., Advanced Calculus, 5th ed., Addison-Wesley Higher Mathematics, Boston, 2003.
9. Knopp, K., Theory of Functions, 2 parts, New York: Dover, reprinted 1996.
10. Krantz, S. G., Complex Analysis: The Geometric Viewpoint, Washington, DC: The Mathematical Association of America, 1990.
11. Lang, S., Complex Analysis, 4th ed., New York: Springer, 1999.
12. Narasimhan, R., Compact Riemann Surfaces, New York: Springer, 1996.
13. Nehari, Z., Conformal Mapping, Mineola, NY: Dover, 1975.
14. Springer, G., Introduction to Riemann Surfaces, Providence, RI: American Mathematical Society, 2001.
CHAPTER-6
Introduction
The transition from “real calculus” to “complex calculus” starts with a discussion of complex
numbers and their geometric representation in the complex plane. We desire functions to be
analytic because these are the “useful functions” in the sense that they are differentiable in some
domain and operations of complex analysis can be applied to them. The most important
equations are the Cauchy–Riemann equations because they allow a test of analyticity of such
functions. Moreover, we show how the Cauchy–Riemann equations are related to the important
Laplace equation.
The remaining sections of the chapter are devoted to elementary complex functions (exponential,
trigonometric, hyperbolic, and logarithmic functions). These generalize the familiar real
functions of calculus.
Unit Objectives:
Overview:
In this section, we are going to deal with the definition and notation of the complex numbers by
considering various examples.
Section Objectives:
Definition: A complex number is an ordered pair (x, y) of real numbers x and y, written z = (x, y).
x is called the real part and y the imaginary part of z, written x = Re z and y = Im z.
The ordered pairs (x, 0) = x and (0, 1) = i by definition. Two complex numbers are equal if and only if their real parts are equal and their imaginary parts are equal.
Notation 𝑧 = 𝑥 + 𝑖𝑦
Multiplication is defined by (x₁, y₁)(x₂, y₂) = (x₁x₂ − y₁y₂, x₁y₂ + x₂y₁). In particular,
i² = −1, since i² = ii = (0, 1)(0, 1) = (−1, 0) = −1.
From this we see that continued multiplication by positive powers of 𝑖 leads to the following pattern:
𝑖 = 𝑖, 𝑖 2 = −1, 𝑖 3 = −𝑖, 𝑖 4 = 1, 𝑖 5 = 𝑖, …
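This cyclic pattern can be seen directly in code. Python has a built-in complex type, with 1j playing the role of i; the snippet below generates the cycle by repeated multiplication.

```python
# Powers of i via repeated multiplication, using Python's complex literal 1j
z = 1 + 0j
for n in range(1, 9):
    z *= 1j
    print(f"i^{n} = {z}")  # cycle: i, -1, -i, 1, i, -1, -i, 1
```

After eight multiplications the value returns to 1, reflecting i⁴ = 1.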
Example 6.1: - Find Real part, Imaginary part, Sum and Product of Complex Numbers
Subtraction and division are defined as the inverse operations of addition and multiplication, respectively. Thus the difference z = z₁ − z₂ is the complex number z for which z₁ = z + z₂.
The quotient z = z₁/z₂ (z₂ ≠ 0) is the complex number z for which z₁ = z z₂.
If we equate the real and the imaginary parts on both sides of this equation, setting 𝑧 = 𝑥 + 𝑖𝑦
We obtain
x₁ = x₂x − y₂y,  y₁ = y₂x + x₂y.
The solution is
z = z₁/z₂ = X + iY,  X = (x₁x₂ + y₁y₂)/(x₂² + y₂²),  Y = (x₂y₁ − x₁y₂)/(x₂² + y₂²).
The practical rule used to get this is to multiply the numerator and denominator of z₁/z₂ by x₂ − iy₂ and simplify.
Example 6.2: - Let 𝑧1 = 8 + 3𝑖 and 𝑧2 = 9 − 2𝑖. Find the difference and quotient.
Complex Plane
So far we discussed the algebraic manipulation of complex numbers. Consider the geometric
representation of complex numbers, which is of great practical importance. We choose two
perpendicular coordinate axes, the horizontal 𝑥-axis, called the real axis, and the vertical 𝑦-axis,
called the imaginary axis. On both axes we choose the same unit of length (Fig. 6.1.1.1). This is
called a Cartesian coordinate system.
Definition: We now plot a given complex number 𝑧 = (𝑥, 𝑦) = 𝑥 + 𝑖𝑦 as the point P with
coordinates 𝑥, 𝑦. The 𝑥𝑦 plane in which the complex numbers are represented in this way is
called the complex plane.
Addition and subtraction can now be visualized as illustrated in Figs. 6.1.1.3 and 6.1.1.4
The complex conjugate is important because it permits us to switch from complex to real.
Indeed, by multiplication, 𝑧𝑧̅ = 𝑥 2 + 𝑦 2 (verify!). By addition and subtraction,
𝑧 + 𝑧̅ = 2𝑥, 𝑧 − 𝑧̅ = 2𝑖𝑦. We thus obtain for the real part 𝑥 and the imaginary part 𝑦 (not 𝑖𝑦!) of
𝑧 = 𝑥 + 𝑖𝑦 . The important formulas are
Re z = x = (1/2)(z + z̄),   Im z = y = (1/(2i))(z − z̄).
If 𝑧 is real, 𝑧 = 𝑥 then 𝑧̅ = 𝑧 by the definition of 𝑧̅ and conversely. Working with Conjugates is
easy, since we have
conj(z₁ + z₂) = z̄₁ + z̄₂,   conj(z₁ − z₂) = z̄₁ − z̄₂,
conj(z₁z₂) = z̄₁ z̄₂,   conj(z₁/z₂) = z̄₁/z̄₂.
Example 6.3: Let z₁ = 4 + 3i and z₂ = 2 + 5i.
Then Im z₁ = (1/(2i))[(4 + 3i) − (4 − 3i)] = (6i)/(2i) = 3.
Polar Form of Complex Numbers: We gain further insight into the arithmetic operations of complex numbers if, in addition to the 𝑥𝑦-coordinates in the complex plane, we also employ the usual polar coordinates r, θ defined by x = r cos θ, y = r sin θ. Then z = x + iy takes the polar form
z = r(cos θ + i sin θ).
𝑟 is called the absolute value or modulus of 𝑧 and is denoted by |𝑧|. Hence
|𝑧| = 𝑟 = √𝑥 2 + 𝑦 2 = √𝑧𝑧̅
Geometrically, |𝑧| is the distance of the point 𝑧 from the origin .Similarly, |𝑧1 − 𝑧2 |is the
distance between 𝑧1 𝑎𝑛𝑑 𝑧2 .
𝜃 is called the argument of 𝑧 and is denoted by 𝜃 = 𝑎𝑟𝑔 𝑧.
tan θ = y/x    (z ≠ 0).
Geometrically, 𝜃 is the directed angle from the positive 𝑥-axis to 𝑂𝑃 in Fig.6.01. Here, as in
calculus, all angles are measured in radians and positive in the counterclockwise sense.
For 𝑧 = 0 this angle 𝜃 is undefined. (Why?) For a given 𝑧 ≠ 0 it is determined only up to integer
multiples of 2𝜋 since cosine and sine are periodic with period 2𝜋 . But one often wants to
specify a unique value of 𝑎𝑟𝑔𝑧 of a given 𝑧 ≠ 0 . For this reason one defines the principal
value Arg z (with capital A!) of arg z by the double inequality −π < Arg z ≤ π.
Then we have Arg z = 0 for positive real z = x, which is practical, and Arg z = π (not −π) for negative real z. Obviously, for a given z ≠ 0, the other values of arg z are
arg z = Arg z ± 2nπ    (n = ±1, ±2, ⋯).
Fig. 6.1.1.6.Complex plane, Fig. 6.1.1.7 Distance between two points in the complex plane
Polar form of a complex number
Hence we obtain
arg z = −π/4 ± 2nπ    (n = 0, 1, ⋯),
and Arg z = −π/4 (the principal value). Hence
z = 3 − 3i = 3√2 (cos(π/4) − i sin(π/4)).
Similarly,
z = 3 + 3√3 i = 6 (cos(π/3) + i sin(π/3)),   |z| = 6, and Arg z = π/3.
6.1.2. Triangle Inequality
Inequalities such as x₁ < x₂ make sense for real numbers, but not for complex numbers, because there is no natural way of ordering complex numbers. However, inequalities between absolute values (which are real!), such as |z₁| < |z₂| (meaning that z₁ is closer to the origin than z₂), are of great importance. The daily bread of the complex analyst is the triangle inequality.
Theorem6.1.2: |𝑧1 + 𝑧2 | ≤ |𝑧1 | + |𝑧2 |
Proof: This inequality follows by noting that the three points are the vertices of a triangle with
sides |𝑧1 | , |𝑧2 | and |𝑧1 + 𝑧2 | and one side cannot exceed the sum of the other two sides.
Now
|z₁ + z₂|² = (z₁ + z₂)(z̄₁ + z̄₂) = z₁z̄₁ + (z₁z̄₂ + conj(z₁z̄₂)) + z₂z̄₂    (∗)
But
z₁z̄₂ + conj(z₁z̄₂) = 2 Re(z₁z̄₂) ≤ 2|z₁||z₂|.
Substituting this into (∗), we have
|z₁ + z₂|² ≤ |z₁|² + 2|z₁||z₂| + |z₂|² = (|z₁| + |z₂|)².
Therefore
|𝑧1 + 𝑧2 | ≤ |𝑧1 | + |𝑧2 |
The argument of a product equals the sum of the argument of the factors,
arg (𝑧1 𝑧2 ) = arg 𝑧1 + arg 𝑧2
Similarly, arg 𝑧1 = arg[(𝑧1 /𝑧2 )𝑧2 ] = arg(𝑧1 /𝑧2 ) + arg 𝑧2 and by subtraction of arg 𝑧2
𝑧1
arg = arg 𝑧1 − arg 𝑧2
𝑧2
Euler's formula: e^{iθ} = cos θ + i sin θ.
We can now express the polar representation of a complex number in the form z = re^{iθ}.
We know that |e^{iθ}| = 1.
Powers and Roots of Complex numbers
The polar representation of 𝑧 = 𝑟𝑒 𝑖𝜃 is particularly useful in finding powers and roots of various
complex number.
Example 6.6: z² = r²e^{2iθ}, and repeatedly multiplying by z = re^{iθ} we obtain
zⁿ = rⁿ(e^{iθ})ⁿ = rⁿe^{inθ},   n = 1, 2, ⋯,
that is, zⁿ = rⁿ(cos nθ + i sin nθ),
which for |z| = r = 1 becomes the famous De Moivre's formula
(cos θ + i sin θ)ⁿ = cos nθ + i sin nθ,   n = 1, 2, ⋯.
One of the principal uses of De Moivre's formula is finding fractional powers of complex numbers. For example, suppose we wish to find the solutions of the equation zⁿ = z₀.
Formally, we represent the solution as z = z₀^{1/n}, but we don't know yet how to find the n-th root of a complex number. To do so, we first write z₀ = r₀e^{iθ₀} and z = re^{iθ}, so that zⁿ = z₀ becomes
rⁿe^{inθ} = r₀e^{iθ₀}.
From this relation it now follows that rⁿ = r₀ and nθ = θ₀ + 2kπ, from which we deduce
r = ⁿ√r₀,   θ = (θ₀ + 2kπ)/n,   k = 0, ±1, ±2, ⋯
Thus
z = ⁿ√r₀ e^{i(θ₀ + 2kπ)/n},   k = 0, 1, 2, ⋯, n − 1.
The n-th root of any complex number z can be expressed as
z^{1/n} = ω_{k+1} = ⁿ√r e^{i(θ₀ + 2kπ)/n},   k = 0, 1, ⋯, n − 1, where |z| = r and θ₀ = Arg(z), or
z^{m/n} = (ⁿ√|z|)^m e^{im(θ₀ + 2kπ)/n},   m = 1, 2, ⋯ and k = 0, 1, 2, ⋯, n − 1.
Fig.6.1.2.1
Example 6.8: Find all values of (−1 + i√3)^{3/2}.
Solution: z = −1 + i√3, |z| = 2, θ₀ = Arg z = 2π/3.
ω_{k+1} = (√2)³ e^{i·3(2π/3 + 2kπ)/2} = 2√2 e^{i(π + 3kπ)},   k = 0, 1
k = 0:  ω₁ = 2√2 e^{iπ} = −2√2
k = 1:  ω₂ = 2√2 e^{i4π} = 2√2
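Example 6.8 can be reproduced numerically with `cmath`: compute r and θ₀, then evaluate the formula (ⁿ√|z|)^m e^{im(θ₀ + 2kπ)/n} with m = 3, n = 2 for k = 0, 1.

```python
import cmath
import math

z = -1 + 1j * math.sqrt(3)
r, theta0 = abs(z), cmath.phase(z)          # r = 2, theta0 = 2*pi/3
roots = [r ** 1.5 * cmath.exp(1j * 3 * (theta0 + 2 * k * math.pi) / 2) for k in (0, 1)]
print(roots)  # approximately [-2*sqrt(2), +2*sqrt(2)]
```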
Exercise: Find all roots in the complex plane:
1) ³√(1 + i)    2) ⁸√1    3) ⁴√i    4) ⁵√(−1)    5) ¹²√1    6) ⁷√(3 + 4i)
In this section, we are going to deal with the definition and notation of the limit, derivative,
analytic function by considering various examples.
Section Objectives:
Also
𝑓(1 + 3𝑖) = (1 + 3𝑖)2 + 3(1 + 3𝑖)
= 1 − 9 + 6𝑖 + 3 + 9𝑖 = −5 + 15𝑖
So
𝑢(1,3) = −5 and 𝑣(1,3) = 15
Example 6.10: Let w = f(z) = z². Find u(x, y) and v(x, y), and the value of f at z = 5i.
Solution: f(z) = z² = (x + iy)² = x² − y² + i(2xy).
Now
u(x, y) = Re f(z) = x² − y²  and  v(x, y) = Im f(z) = 2xy.
Also
f(5i) = (5i)² = −25.
Exercise: Let w = f(z) = 2iz + 6z̄. Find u(x, y) and v(x, y), and the value of f at z = 1/2 + 4i.
Definition 1: A sequence of complex numbers {𝑧𝑛 }1∞ is said to have the limit 𝑧0 or to converges
to 𝑧0 , and we write lim 𝑧𝑛 = 𝑧0
𝑛→∞
Or equivalently, z_n → z₀ as n → ∞ if for any ε > 0 there exists an integer N such that |z_n − z₀| < ε for all n > N.
Definition 2: Let f be a function defined in some neighborhood of z₀, except possibly at z₀ itself. We say that the limit of f(z) as z approaches z₀ is the number w₀, and write lim_(z→z₀) f(z) = w₀, or equivalently f(z) → w₀ as z → z₀, if for any ε > 0 there exists a positive number δ such that |f(z) − w₀| < ε whenever 0 < |z − z₀| < δ.
Example: Show that lim_(z→i) z² = −1.
Solution: We must show that for any given ε > 0 there is a positive number δ such that
|z² − (−1)| < ε whenever 0 < |z − i| < δ.
So we express |z² − (−1)| in terms of |z − i|:
z² − (−1) = z² + 1 = (z − i)(z + i) = (z − i)(z − i + 2i)
Now, if |z − i| < δ, then
|z² − (−1)| = |z − i||z − i + 2i| ≤ |z − i|(|z − i| + |2i|) = |z − i|(|z − i| + 2) < δ(δ + 2)
So, to ensure that this is less than ε, we choose δ smaller than each of the numbers ε/3 and 1; then
|z − i|(|z − i| + 2) < (ε/3)(1 + 2) = ε
Exercise: Prove that lim_(z→1) f(z) = i/2, where f(z) = iz/2, in the open disk |z| < 1.
In other words, for 𝑓 to be continuous at 𝑧0 , it must have a limiting value at 𝑧0 and this limiting
value must be 𝑓(𝑧0 ). A function 𝑓 is said to be continuous on a set 𝑆 if it is continuous at each
point of 𝑆.
By this definition, a function f(z) is continuous in a domain if it is continuous at each point of that domain.
Fig.6.2.2.1 Limit
Example: Evaluate lim_(z→i) (z − i)/(z² + 1).
Solution: lim_(z→i) (z − i)/(z² + 1) = lim_(z→i) (z − i)/((z + i)(z − i)) = lim_(z→i) 1/(z + i) = 1/2i
Definition: The derivative of a complex function w = f(z) at a fixed point z₀ is written f′(z₀) and is defined by
f′(z₀) = lim_(z→z₀) (f(z) − f(z₀))/(z − z₀)
Theorem 6.2.2.2: If f(z) and g(z) are differentiable functions at a given point z, then
i. d/dz (f(z) ± g(z)) = f′(z) ± g′(z)
ii. d/dz (f(z)g(z)) = f′(z)g(z) + f(z)g′(z)
iii. d/dz (c f(z)) = c f′(z), where c is a constant
iv. d/dz (f(z)/g(z)) = (f′(z)g(z) − f(z)g′(z))/(g(z))², if g(z) ≠ 0.
Example: Show that f(z) = z̄ is not differentiable at any point in the complex plane.
Solution: f′(z) = lim_(Δz→0) [(z + Δz)‾ − z̄]/Δz = lim_(Δz→0) Δz̄/Δz
If we allow Δy → 0 first,
lim_(Δz→0) Δz̄/Δz = lim_(Δx→0) Δx/Δx = 1
When Δx → 0 first,
lim_(Δz→0) Δz̄/Δz = lim_(Δy→0) (−iΔy)/(iΔy) = −1
Therefore the limit is not unique (does not exist), so f(z) = z̄ has no derivative at any point.
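The two directional limits can also be seen numerically. This sketch (diff_quotient is our name, not from the text) evaluates the difference quotient of f(z) = z̄ with Δz shrinking along the real and then the imaginary axis:

```python
def diff_quotient(f, z, dz):
    """Difference quotient (f(z + dz) - f(z)) / dz."""
    return (f(z + dz) - f(z)) / dz

f = lambda z: z.conjugate()
z0 = 1 + 1j
for h in (1e-3, 1e-6, 1e-9):
    along_x = diff_quotient(f, z0, h)        # Δz real: quotient -> +1
    along_y = diff_quotient(f, z0, 1j * h)   # Δz imaginary: quotient -> -1
    print(h, along_x, along_y)
```

The quotient settles on two different values depending on the direction of approach, so no single limit exists.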
6.2.4. Analytic Function of Complex Variable
Example 6.15: The non-negative integer powers 1, z, z², … are analytic in the entire complex plane.
In this section, we deal with the definition and notation of the Cauchy-Riemann equations and Laplace's equation, illustrated by various examples.
Section Objectives:
where we are using the notation Δf = f(z + Δz) − f(z) and f(z) = u(x, y) + iv(x, y).
If we first let Δy → 0, then
f′(z) = lim_(Δx→0) (Δu/Δx + i Δv/Δx)
f′(z) = ∂u/∂x + i ∂v/∂x ……………………………………... (1)
And if we let Δx → 0, then
f′(z) = lim_(Δy→0) (1/i)(Δu/Δy + i Δv/Δy)
f′(z) = ∂v/∂y − i ∂u/∂y ……………………………………….. (2)
Equating the real and imaginary parts of (1) and (2) gives
∂u/∂x = ∂v/∂y, ∂u/∂y = −∂v/∂x
These conditions are called the Cauchy-Riemann equations.
Example 6.17: Using the Cauchy-Riemann equations, show that f(z) = z³ is analytic everywhere.
Solution: We have f(z) = z³ = (x + iy)³ = (x³ − 3xy²) + i(3x²y − y³)
Here
u(x, y) = x³ − 3xy² and v(x, y) = 3x²y − y³
Thus
uₓ = 3x² − 3y², vₓ = 6xy
u_y = −6xy, v_y = 3x² − 3y²
We observe that uₓ = v_y and u_y = −vₓ at all points.
Hence
f(z) = z³ is analytic for every z, and further
f′(z) = uₓ + ivₓ = 3x² − 3y² + i6xy = 3[x² + (iy)² + 2ixy] = 3(x + iy)² = 3z².
Example 6.18: Using the Cauchy-Riemann equations, show that f(z) = |z|² is analytic nowhere.
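Both examples can be checked numerically by estimating the partial derivatives with central differences; where the Cauchy-Riemann equations hold, the residuals uₓ − v_y and u_y + vₓ vanish. (A sketch; cr_residuals and the step size h are our choices, not the text's.)

```python
def cr_residuals(f, x, y, h=1e-5):
    """Central-difference estimates of u_x - v_y and u_y + v_x at z = x + iy.
    Both are ~0 wherever the Cauchy-Riemann equations hold."""
    u = lambda a, b: f(complex(a, b)).real
    v = lambda a, b: f(complex(a, b)).imag
    ux = (u(x + h, y) - u(x - h, y)) / (2 * h)
    uy = (u(x, y + h) - u(x, y - h)) / (2 * h)
    vx = (v(x + h, y) - v(x - h, y)) / (2 * h)
    vy = (v(x, y + h) - v(x, y - h)) / (2 * h)
    return ux - vy, uy + vx

print(cr_residuals(lambda z: z ** 3, 1.0, 2.0))        # ~ (0, 0): analytic
print(cr_residuals(lambda z: abs(z) ** 2, 1.0, 2.0))   # nonzero: C-R fails
```

For |z|² = x² + y² we have u = x² + y², v = 0, so the residuals are 2x and 2y, which vanish only at the origin.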
In this section, we are going to deal with the definition and notation of the elementary
functions, exponential functions and trigonometric functions by considering various examples.
Section Objectives:
If to each complex number z there is but one value w = f(z), we say that f(z) is single-valued; functions that are not single-valued are called multiple-valued.
Properties of e^z are:
I. e^(z₁)e^(z₂) = e^(z₁+z₂), and e^z ≠ 0 for all z
II. |e^z| = e^x
III. |e^(iy)| = 1 for y real, and (e^z)‾ = e^(z̄)
Example 6.20: Show that |e^(−3iz+5i)| = e^(3y).
Solution: Since −3iz + 5i = 3y + i(5 − 3x), we have |e^(−3iz+5i)| = |e^(3y+i(5−3x))| = e^(3y)|e^(i(5−3x))| = e^(3y).
Note: e^x ≠ 0 for all finite x; this implies that e^z is nonzero for all finite z. Also
arg e^z = y ± 2nπ, n = 0, 1, 2, ⋯
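Property II and Example 6.20 are easy to confirm with Python's cmath (a numeric spot check at one point, not a proof):

```python
import cmath, math

z = 1.5 - 0.7j
print(abs(cmath.exp(z)), math.exp(z.real))   # both equal e^x: |e^z| = e^x

# Example 6.20: |e^{-3iz+5i}| = e^{3y}
w = 0.4 + 1.2j
print(abs(cmath.exp(-3j * w + 5j)), math.exp(3 * w.imag))
```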
6.4.3. Trigonometric Function
If we add and subtract the Euler formulas
𝑒 𝑖𝑦 = cos 𝑦 + 𝑖 sin 𝑦
𝑒 −𝑖𝑦 = cos 𝑦 − 𝑖 sin 𝑦
we are led to the real trigonometric functions
cos y = (1/2)(e^(iy) + e^(−iy)), sin y = (1/2i)(e^(iy) − e^(−iy))
This suggests that we define the complex trigonometric functions
cos z = (1/2)(e^(iz) + e^(−iz)), sin z = (1/2i)(e^(iz) − e^(−iz)) ………………………(*)
Formulas for the derivatives follow readily: d/dz cos z = −sin z, d/dz sin z = cos z.
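The exponential definitions (*) agree with the library implementations, which is a convenient sanity check (cos_z and sin_z are our names for the right-hand sides):

```python
import cmath

def cos_z(z):  # cos z = (e^{iz} + e^{-iz}) / 2
    return (cmath.exp(1j * z) + cmath.exp(-1j * z)) / 2

def sin_z(z):  # sin z = (e^{iz} - e^{-iz}) / (2i)
    return (cmath.exp(1j * z) - cmath.exp(-1j * z)) / 2j

z = 0.8 + 0.3j
print(cos_z(z), cmath.cos(z))           # agree with the library values
print(sin_z(z), cmath.sin(z))
print(cos_z(z) ** 2 + sin_z(z) ** 2)    # ~ 1: the Pythagorean identity survives
```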
In this section, we are going to deal with the definition and notation of the hyperbolic,
logarithm function and general power by considering various examples.
Section Objectives:
This is suggested by the familiar definitions for a real variable. These functions are entire, with derivatives
(cosh z)′ = sinh z
(sinh z)′ = cosh z
formalism and a deeper understanding of special functions. This is one of the main reasons for
the importance of complex analysis to the engineer and physicist.
Example 6.22: Real and imaginary parts and absolute value. Show that:
1) cosh z = cosh x cos y + i sinh x sin y
2) sinh z = sinh x cos y + i cosh x sin y
3) cosh(z₁ + z₂) = cosh z₁ cosh z₂ + sinh z₁ sinh z₂
4) sinh(z₁ + z₂) = sinh z₁ cosh z₂ + cosh z₁ sinh z₂
5) cosh² z − sinh² z = 1
6) |cosh z|² = cosh² x − sin² y
7) |sinh z|² = sinh² x + sin² y
Solution: 1) Since z = x + iy,
cosh z = (e^z + e^(−z))/2 = (e^(x+iy) + e^(−(x+iy)))/2
= (1/2)e^x(cos y + i sin y) + (1/2)e^(−x)(cos y − i sin y)
= (1/2)(e^x + e^(−x)) cos y + (1/2)i(e^x − e^(−x)) sin y.
We know from calculus that cosh x = (1/2)(e^x + e^(−x)) and sinh x = (1/2)(e^x − e^(−x)).
Therefore
cosh z = cosh x cos y + i sinh x sin y
2) Since z = x + iy,
sinh z = (1/2)(e^z − e^(−z)) = (1/2)(e^(x+iy) − e^(−(x+iy)))
= (1/2)e^x(cos y + i sin y) − (1/2)e^(−x)(cos y − i sin y)
= (1/2)(e^x − e^(−x)) cos y + (1/2)i(e^x + e^(−x)) sin y.
We know from calculus that sinh x = (1/2)(e^x − e^(−x)) and cosh x = (1/2)(e^x + e^(−x)).
Therefore
sinh z = sinh x cos y + i cosh x sin y
The other properties follow similarly; verify them as an exercise.
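Identities 1), 5), and 7) can be spot-checked numerically with cmath and math (a quick check at one point, not a proof):

```python
import cmath, math

z = 0.9 + 1.7j
x, y = z.real, z.imag

# 1) cosh z = cosh x cos y + i sinh x sin y
print(cmath.cosh(z), complex(math.cosh(x) * math.cos(y),
                             math.sinh(x) * math.sin(y)))

# 5) cosh^2 z - sinh^2 z = 1
print(cmath.cosh(z) ** 2 - cmath.sinh(z) ** 2)

# 7) |sinh z|^2 = sinh^2 x + sin^2 y
print(abs(cmath.sinh(z)) ** 2, math.sinh(x) ** 2 + math.sin(y) ** 2)
```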
Now comes an important point (without analog in real calculus). Since the argument of z is
determined only up to integer multiples of 2𝜋. the complex natural logarithm ln 𝑧 (𝑧 ≠ 0) is
infinitely many-valued.
The value of ln z corresponding to the principal value Arg z is denoted by Ln z (Ln with capital L) and is called the principal value of ln z. Thus
Ln z = ln|z| + i Arg z, z ≠ 0,
where Arg z is restricted by −π < Arg z ≤ π.
The uniqueness of Arg z for given z (z ≠ 0) implies that Ln z is single-valued, that is, a function in the usual sense. Since the other values of arg z differ by integer multiples of 2π, the other values of ln z are given by
ln z = Ln z ± 2nπi, (n = 1, 2, ⋯)
They all have the same real part, and their imaginary parts differ by integer multiples of 2𝜋.
If z is positive real, then Arg z = 0, and Ln z becomes identical with the real natural logarithm we know from calculus. If z is negative real (so that the natural logarithm of calculus is not defined!), then Arg z = π and Ln z = ln|z| + πi (z negative real).
Since e^(ln r) = r for positive real r, we obtain e^(ln z) = z as expected. But since arg(e^z) = y ± 2nπ is multivalued, so is ln(e^z) = z ± 2nπi, n = 0, 1, ⋯
6. A sequence of complex numbers {zₙ}₁^∞ is said to have the limit z₀, or to converge to z₀, and we write lim_(n→∞) zₙ = z₀, or equivalently zₙ → z₀ as n → ∞, if for any ε > 0 there exists an integer N such that |zₙ − z₀| < ε for all n > N.
7. Let f be a function defined in some neighborhood of z₀, except possibly at z₀ itself. We say that the limit of f(z) as z approaches z₀ is the number w₀, and write lim_(z→z₀) f(z) = w₀.
8. For f to be continuous at z₀, it must have a limiting value at z₀ and this limiting value must be f(z₀). A function f is said to be continuous on a set S if it is continuous at each point of S.
9. The derivative of a complex function w = f(z) at a fixed point z₀ is written f′(z₀) and is defined by
f′(z₀) = lim_(Δz→0) (f(z₀ + Δz) − f(z₀))/Δz
16. The complex hyperbolic cosine and sine are defined by the formulas
cosh z = (1/2)(e^z + e^(−z)) = cos iz, sinh z = (1/2)(e^z − e^(−z)) = −i sin iz
Miscellaneous Exercises
a) 𝑢 = 𝑥𝑦
b) 𝑣 = −𝑒 −2𝑥 sin 2𝑦
c) 𝑣 = 𝑦/(𝑥 2 + 𝑦 2 )
d) 𝑢 = cos 3𝑥 cosh 3𝑦
9. Find all values of 𝑧 such that
I. 𝑒 𝑧 = −2
II. 𝑒 𝑧 = 1 + √3𝑖
III. exp(2𝑧 − 1) = 1
10. Find the value of:
a. cos(3 − 𝑖)
b. 𝑡𝑎𝑛 𝑖
c. sinh(1 + 𝜋𝑖)
d. cosh(𝜋 + 𝜋𝑖)
e. ln(0.6 + 0.8𝑖)
11. Show that
a. exp(2 ± 3𝜋𝑖) = −𝑒 2
b. exp(𝑧 + 𝜋𝑖) = − exp 𝑧
c. exp(1/2 + πi/4) = √(e/2) (1 + i)
CHAPTER 7
COMPLEX INTEGRALS
Introduction
Chapter 6 laid the groundwork for the study of complex analysis: it covered complex numbers in the complex plane, limits, and differentiation, and introduced the most important concept, analyticity. A complex function is analytic in some domain if it is differentiable in that domain. Complex analysis deals with such functions and their applications. Analytic functions satisfy the Cauchy–Riemann equations and also Laplace's equation. Furthermore, the Cauchy integral formula shows the surprising result that analytic functions have derivatives of all orders.
Hence, in this respect, complex analytic functions behave much more simply than real-valued functions of real variables, which may have derivatives only up to a certain order. Complex
integration is attractive for several reasons. Some basic properties of analytic functions are
difficult to prove by other methods. This includes the existence of derivatives of all orders just
discussed. A main practical reason for the importance of integration in the complex plane is that
such integration can evaluate certain real integrals that appear in applications and that are not
accessible by real integral calculus.
Unit Objectives:
In this section, we are going to deal with the definition and notation of the complex integrals
Section Objectives:
∫ f(z)dz.
C
Here the integrand 𝑓(𝑧) is integrated over a given curve 𝐶 or a portion of it (an 𝑎𝑟𝑐, but we
shall say “curve” in either case, for simplicity). This curve 𝐶 in the complex plane is called the
path of integration. We may represent 𝐶 by a parametric representation
z(t) = x(t) + iy(t) (a ≤ t ≤ b) … … … … … … … … … … … … … … … … … … … … … … . . (1)
The sense of increasing 𝑡 is called the positive sense on 𝐶, and we say that 𝐶 is oriented
by (1).
We assume C to be a smooth curve, that is, C has a continuous and nonzero derivative
ż(t) = dz/dt = ẋ(t) + iẏ(t)
at each point. Geometrically this means that C has everywhere a continuously turning tangent, as follows directly from the definition
ż(t) = lim_(Δt→0) (z(t + Δt) − z(t))/Δt
Here we use a dot since a prime ′ denotes the derivative with respect to z.
Definition of the Complex Line Integral
This is similar to the method in calculus. Let 𝐶 be a smooth curve in the complex plane given by
(1), and let 𝑓(𝑧) be a continuous function given (at least) at each point of 𝐶. We now subdivide
(we “partition”) the interval 𝑎 ≤ 𝑡 ≤ 𝑏 in (1) by points
t 0 (= a), t1 , … , t n−1 , t n (= b)
where t₀ < t₁ < t₂ < ⋯ < tₙ. To this subdivision there corresponds a subdivision of C by points
z₀, z₁, z₂, ⋯, zₙ₋₁, zₙ (= Z)
Fig. 7.1. Tangent vector ż(t) of a curve C in the complex plane given by z(t); the arrowhead on the curve indicates the positive sense (sense of increasing t). Fig. 7.2. Complex line integral.
where zⱼ = z(tⱼ). On each portion of the subdivision of C we choose an arbitrary point, say a point ζ₁ between z₀ and z₁ (that is, ζ₁ = z(t) where t satisfies t₀ ≤ t ≤ t₁), a point ζ₂ between z₁ and z₂, etc. Then we form the sum
sₙ = Σ_(m=1)ⁿ f(ζₘ)Δzₘ, where Δzₘ = zₘ − zₘ₋₁ …………(2)
We do this for each 𝑛 = 2,3, ⋯ in a completely independent manner, but so that the greatest
|∆𝑡𝑚 | = |t m − t m−1 | approaches zero as 𝑛 → ∞. This implies that the greatest |∆𝑧𝑚 |also
approaches zero. Indeed, it cannot exceed the length of the arc of 𝐶 from zm−1 to zm and the
latter goes to zero since the arc length of the smooth curve C is a continuous function of 𝑡. The
limit of the sequence of complex numbers s2 , s3 … thus obtained is called the line integral (or
simply the integral) of 𝑓(𝑧) over the path of integration C with the orientation given by (1).
This line integral is denoted by ∫_C f(z) dz.
If C is a closed path (one whose terminal point Z coincides with its initial point z₀, as for a circle or for a curve shaped like an 8), the integral is written ∮_C f(z) dz.
General Assumption: - All paths of integration for complex line integrals are assumed to
be piecewise smooth, that is, they consist of finitely many smooth curves joined end to end.
2. Sense reversal in integrating over the same path, from z₀ to Z and from Z to z₀, introduces a minus sign:
∫ from z₀ to Z of f(z) dz = −∫ from Z to z₀ of f(z) dz
These sums are real. Since 𝑓 is continuous, 𝑢 and 𝑣 are continuous. Hence, if we let n approach
infinity in the aforementioned way, then the greatest ∆𝑥𝑚 and ∆𝑦𝑚 will approach zero and each
sum on the right becomes a real line integral:
lim_(n→∞) sₙ = ∫_C f(z) dz
This shows that under our assumptions on 𝑓 and 𝐶 the line integral (3) exists and its value is
independent of the choice of subdivisions and intermediate points 𝜁𝑚 .
Theorem 1 (Indefinite Integration of Analytic Functions): Let f(z) be analytic in a simply connected domain D. Then there exists an indefinite integral of f(z) in D, that is, an analytic function F(z) such that F′(z) = f(z) in D, and for all paths in D joining two points z₀ and z₁ in D we have
∫ from z₀ to z₁ of f(z) dz = F(z₁) − F(z₀)
(Note that we can write z₀ and z₁ instead of C, since we get the same value for all those C from z₀ to z₁.)
PROOF: The left side of (10) is given by (8) in terms of real line integrals, and we show that the right side of (10) also equals (8). We have z = x + iy, hence z′ = x′ + iy′. We simply write u for u[x(t), y(t)] and v for v[x(t), y(t)]. We also have dx = x′ dt and dy = y′ dt.
Dependence on path: Now comes a very important fact. If we integrate a given function
𝑓(𝑧)from a point z0 to a point z1 along different paths, the integrals will in general have
different values. In other words, a complex line integral depends not only on the endpoints of
the path but in general also on the path itself. The next example gives a first impression
of this, and a systematic discussion follows in the next section.
Example 7.2: Integral of a Nonanalytic Function; Dependence on Path.
Integrate f(z) = Re z = x from 0 to 1 + 2i
(a) along C* in Fig. 7.4,
(b) along C consisting of C₁ and C₂.
Solution: (a) C* can be represented by z(t) = t + 2it (0 ≤ t ≤ 1). Hence z′(t) = 1 + 2i and f[z(t)] = x(t) = t on C*. We now calculate
∫_(C*) Re z dz = ∫₀¹ t(1 + 2i) dt = (1/2)(1 + 2i) = 1/2 + i
Fig. 7.4
(b) We now have
C₁: z(t) = t, z′(t) = 1, f(z(t)) = x(t) = t (0 ≤ t ≤ 1)
C₂: z(t) = 1 + it, z′(t) = i, f(z(t)) = x(t) = 1 (0 ≤ t ≤ 2).
Using (6) we calculate
∫_C Re z dz = ∫_(C₁) Re z dz + ∫_(C₂) Re z dz = ∫₀¹ t dt + ∫₀² 1·i dt = 1/2 + 2i
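The path dependence in Example 7.2 shows up clearly in a numeric quadrature. This sketch (line_integral is our helper; the midpoint rule and the node count are arbitrary choices) approximates ∫_C f(z) dz for a parametrized path z(t):

```python
def line_integral(f, z_of_t, a, b, n=20000):
    """Midpoint-rule approximation of the line integral of f over z(t), a <= t <= b."""
    h = (b - a) / n
    total = 0.0
    for k in range(n):
        t = a + (k + 0.5) * h
        dz = z_of_t(t + h / 2) - z_of_t(t - h / 2)   # ≈ z'(t)*h
        total += f(z_of_t(t)) * dz
    return total

f = lambda z: z.real   # f(z) = Re z is not analytic

# (a) straight line C*: z(t) = t + 2it, 0 <= t <= 1  ->  1/2 + i
print(line_integral(f, lambda t: t + 2j * t, 0, 1))

# (b) C1 then C2 along the axes  ->  1/2 + 2i: a different value
print(line_integral(f, lambda t: t, 0, 1) +
      line_integral(f, lambda t: 1 + 1j * t, 0, 2))
```

The two approximations converge to different numbers, matching the hand computation: the integral of a nonanalytic function genuinely depends on the path.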
∫_C z² dz
Fig. 7.5
Example 7.4: Evaluate ∫_C z² dz, where C is the straight line joining the origin O to the point P(2, 1) in the complex plane.
Solution: Along C we have x = 2y, so z = (2 + i)y and dz = (2 + i) dy for 0 ≤ y ≤ 1. Hence
∫_C z² dz = (2 + i)³ ∫₀¹ y² dy = (2 + 11i) ∫₀¹ y² dy = (1/3)(2 + 11i)
Example 7.5: Evaluate the integral ∫ from 0 to 1+i of (x − y + ix²) dz
a) along the straight line from z = 0 to z = 1 + i;
b) along the real axis from z = 0 to z = 1 and then along a line parallel to the imaginary axis from z = 1 to z = 1 + i.
Fig. 7.6: the line y = x from O to P(1, 1), and the path from O through M(1, 0) to P(1, 1).
Solution: a) The equation of the straight line OP (see Fig. 7.6) is y = x; thus along OP, z = x + iy = x + ix = (1 + i)x, which gives dz = (1 + i) dx, 0 ≤ x ≤ 1.
And hence
∫ from 0 to 1+i of (x − y + ix²) dz = ∫₀¹ ix²(1 + i) dx = (1/3)i(1 + i) = −(1/3)(1 − i)
b) Along the path OM, we have y = 0 and thus z = x + iy = x, and hence dz = dx, 0 ≤ x ≤ 1. Also, along the path MP, we have x = 1 and thus z = x + iy = 1 + iy, and hence dz = i dy, 0 ≤ y ≤ 1.
Therefore, the line integral
∫ from 0 to 1+i of (x − y + ix²) dz = ∫₀¹ (x + ix²) dx + ∫₀¹ (1 − y + i) i dy
= [x²/2 + ix³/3]₀¹ + [(i − 1)y − iy²/2]₀¹
= 1/2 + i/3 + (i − 1) − i/2 = −1/2 + (5/6)i
Exercise: Evaluate ∮𝐶 |𝑧 2 |2 𝑑𝑧 around the square with vertices at (0, 0), (1, 0), (1, 1), (0, 1)
Fig.: the unit square with vertices O(0, 0), A(1, 0), B(1, 1), C(0, 1).
In this section, we state and prove Cauchy's integral theorem and Cauchy's integral formula, illustrated by examples.
Section Objectives:
2. A simply connected domain D in the complex plane is a domain such that every simple
closed path in D encloses only points of D. Examples: The interior of a circle (“open disk”),
ellipse, or any simple closed curve. A domain that is not simply connected is called multiply
connected. Examples: An annulus, a disk without the center, for example, 0 < |𝑧| < 1. See also
Fig. 7.7.
More precisely, a bounded domain D (that is, a domain that lies entirely in some circle about
the origin) is called p-fold connected if its boundary consists of p closed
∮_C f(z) dz = 0 … … … … … … … … … … … … … … … … … … … . (∗)
are also continuous in D, and hence in the region enclosed by C. Thus Green's theorem gives
∮_C f(z) dz = −∬_E (∂v/∂x + ∂u/∂y) dx dy + i ∬_E (∂u/∂x − ∂v/∂y) dx dy … … … … … … (∗∗)
where E is the region bounded by the closed curve C. Since f(z) is analytic, u and v satisfy the Cauchy-Riemann equations,
and thus the integrands of the two double integrals on the right side of (∗∗) are identically zero; hence we obtain
∮_C f(z) dz = 0
Note: Analyticity of f(z) is a sufficient but not necessary condition for ∮_C f(z) dz = 0.
Example 7.6: Evaluate the following integrals by applying Cauchy's integral theorem, where applicable; C is the unit circle |z| = 1:
a) ∮_C cos z dz
b) ∮_C sec z dz
c) ∮_C dz/(z² − 5z + 6)
all these points lie outside the unit circle |z| = 1; hence f(z) is analytic and f′(z) is continuous in and on C. Thus, by Cauchy's theorem, ∮_C sec z dz = 0.
c. The integrand f(z) = 1/(z² − 5z + 6) = 1/((z − 2)(z − 3)) is analytic everywhere except at z = 2 and z = 3, points which lie outside the unit circle |z| = 1; hence f(z) is analytic and f′(z) is continuous in and on C. Thus, by Cauchy's theorem, ∮_C dz/(z² − 5z + 6) = 0.
d. The integrand f(z) = z̄ is not analytic, and hence Cauchy's theorem is not applicable.
In fact, on C: |z| = 1 we have z = e^(it), so
∮_C z̄ dz = ∫ from 0 to 2π of e^(−it) i e^(it) dt = i ∫ from 0 to 2π of dt = 2πi ≠ 0.
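Part (d) can be confirmed numerically: a simple quadrature over the unit circle gives 2πi for z̄ and essentially 0 for an analytic integrand such as cos z. (circle_integral is our helper; the node count is an arbitrary choice.)

```python
import cmath

def circle_integral(f, n=4096):
    """Approximate ∮ f(z) dz over the unit circle z = e^{it}, 0 <= t <= 2*pi."""
    h = 2 * cmath.pi / n
    total = 0.0
    for k in range(n):
        z = cmath.exp(1j * (k + 0.5) * h)
        total += f(z) * 1j * z * h   # dz = i e^{it} dt
    return total

print(circle_integral(lambda z: z.conjugate()))   # ≈ 2*pi*i, not zero
print(circle_integral(cmath.cos))                 # ≈ 0, by Cauchy's theorem
```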
Independence of path
∫ 𝑓(𝑧)𝑑𝑧 = ∫ 𝑓(𝑧)𝑑𝑧
𝐶1 𝐶2
Theorem: If f(z) is analytic in the region between a closed curve C₁ and non-intersecting closed curves C₂, C₃, C₄, etc. lying inside it, then
∮_(C₁) f(z) dz = ∮_(C₂) f(z) dz + ∮_(C₃) f(z) dz + ∮_(C₄) f(z) dz + ⋯
Example 7.7: Evaluate I = ∫_C sin z dz, where C is composed of the circular arc C₁ and the straight line segment C₂ that connect the points z = 0 and z = iπ.
Solution: The integrand f(z) = sin z is an entire function. Hence its integral is independent of path, so we may write
I = ∫ from 0 to iπ of sin z dz = [−cos z] from 0 to iπ = 1 − cos iπ = 1 − cosh π
(Fig.: the arc C₁ and the segment C₂ joining z = 0 and z = iπ.)
Exercise: 1) ∫ from 0 to 1 of z² e^(z³) dz  2) ∫ from 0 to 2i of sinh z dz
Let f(z) be analytic in a simply connected domain D. Then for any point z₀ in D and any simple closed path C in D that encloses z₀,
f(z₀) = (1/2πi) ∮_C f(z)/(z − z₀) dz
Example 7.8: Evaluate the integral ∮_C (z² + 1)/(z² − 1) dz, C: |z − 1| = 1.
Solution: Writing the integrand as (z² + 1)/(z² − 1) = [(z² + 1)/(z + 1)]/(z − 1), we observe that f(z) = (z² + 1)/(z + 1) is analytic inside and on C, while the point z₀ = 1 lies inside C (the circle of radius 1 about (1, 0)). Hence, by Cauchy's integral formula,
∮_C (z² + 1)/(z² − 1) dz = 2πi f(1) = 2πi
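Cauchy's integral formula also lends itself to a numeric check: integrating f(z)/(z − z₀) over the circle and dividing by 2πi should recover f(z₀). (A sketch; cauchy_value and its parameters are our choices.)

```python
import cmath

def cauchy_value(f, z0, center, radius, n=4096):
    """Estimate f(z0) as (1/(2*pi*i)) * ∮_C f(z)/(z - z0) dz, C a circle."""
    h = 2 * cmath.pi / n
    total = 0.0
    for k in range(n):
        z = center + radius * cmath.exp(1j * (k + 0.5) * h)
        dz = 1j * (z - center) * h   # dz = i*r*e^{it} dt
        total += f(z) / (z - z0) * dz
    return total / (2j * cmath.pi)

# Example 7.8: f(z) = (z^2 + 1)/(z + 1) on C: |z - 1| = 1, z0 = 1
f = lambda z: (z ** 2 + 1) / (z + 1)
print(cauchy_value(f, 1, 1, 1))   # ≈ f(1) = 1, so the integral equals 2*pi*i
```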
Example 7.9: Evaluate the integral ∮_C (z² + 1)/(z(2z − 1)) dz, C: |z| = 1.
Solution: Let I = ∮_C (z² + 1)/(z(2z − 1)) dz.
The integrand (z² + 1)/(z(2z − 1)) is not analytic at the points z = 0 and z = 1/2, both of which lie inside C. Writing it as
(z² + 1)/(z(2z − 1)) = (z² + 1)[1/(z − 1/2) − 1/z]
we get
I = ∮_C (z² + 1)/(z − 1/2) dz − ∮_C (z² + 1)/z dz
= 2πi[z² + 1] at z = 1/2 − 2πi[z² + 1] at z = 0 = 5πi/2 − 2πi = πi/2
Overview:
In this section, we state and prove the formula for the derivatives of an analytic function (the generalized Cauchy integral formula), illustrated by examples.
Section Objectives:
As mentioned, a surprising fact is that complex analytic functions have derivatives of all orders.
This differs completely from real calculus. Even if a real function is once differentiable we
cannot conclude that it is twice differentiable nor that any of its higher derivatives exist. This
makes the behavior of complex analytic functions simpler than real functions in this aspect. To
prove the surprising fact we use Cauchy’s integral formula.
If f(z) is analytic in a domain D, then it has derivatives of all orders in D, which are then also analytic in D, and the values of these derivatives at a point z₀ in D are given by the formulas
f⁽ⁿ⁾(z₀) = (n!/2πi) ∮_C f(z)/(z − z₀)ⁿ⁺¹ dz, n = 1, 2, 3, …
In particular,
f(z₀) = (1/2πi) ∮_C f(z)/(z − z₀) dz
f′(z₀) = (1!/2πi) ∮_C f(z)/(z − z₀)² dz
Similarly,
f″(z₀) = (2!/2πi) ∮_C f(z)/(z − z₀)³ dz
And in general
f⁽ⁿ⁾(z₀) = (n!/2πi) ∮_C f(z)/(z − z₀)ⁿ⁺¹ dz
Example: Evaluate I = ∮_C e^z/z³ dz, where C is the unit circle |z| = 1.
Solution: Let I = ∮_C e^z/z³ dz. Here f(z) = e^z is analytic in the region bounded by the simple closed curve |z| = 1, and the singular point z = 0 of 1/z³ lies inside |z| = 1. Hence, applying the generalized Cauchy integral formula,
I = ∮_C e^z/z³ dz = (2πi/2!) [d²/dz² e^z] at z = 0 = πi
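The same quadrature idea verifies the generalized formula: with n = 2 the contour integral of e^z/z³ returns the second derivative of e^z at 0. (derivative_by_cauchy is our helper; the node count and radius are arbitrary choices.)

```python
import cmath, math

def derivative_by_cauchy(f, z0, n, radius=1.0, m=4096):
    """Estimate f^{(n)}(z0) as (n!/(2*pi*i)) * ∮ f(z)/(z - z0)^{n+1} dz."""
    h = 2 * cmath.pi / m
    total = 0.0
    for k in range(m):
        z = z0 + radius * cmath.exp(1j * (k + 0.5) * h)
        dz = 1j * (z - z0) * h
        total += f(z) / (z - z0) ** (n + 1) * dz
    return math.factorial(n) * total / (2j * cmath.pi)

d2 = derivative_by_cauchy(cmath.exp, 0, 2)       # second derivative of e^z at 0
print(d2)                                        # ≈ 1
print(2j * cmath.pi * d2 / math.factorial(2))    # ≈ pi*i, the value of ∮ e^z/z^3 dz
```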
The points z = 2 and z = 4 lie inside C. Consider two non-intersecting closed contours C₁ and C₂, as shown in the figure, lying completely within C, about the points z = 2 and z = 4 respectively. Then
I = ∮_C (z + 1)/(z(z − 2)(z − 4)³) dz
= ∮_(C₁) [(z + 1)/(z(z − 4)³)] dz/(z − 2) + ∮_(C₂) [(z + 1)/(z(z − 2))] dz/(z − 4)³
= I₁ + I₂, say.
Now, using Cauchy's integral formula,
I₁ = ∮_(C₁) [(z + 1)/(z(z − 4)³)] dz/(z − 2) = 2πi [(z + 1)/(z(z − 4)³)] at z = 2 = −3πi/8
and, using the generalized Cauchy integral formula with n = 2,
I₂ = (2πi/2!) [d²/dz² (z + 1)/(z(z − 2))] at z = 4 = 23πi/64
Therefore, I = I₁ + I₂ = −3πi/8 + 23πi/64 = −πi/64
Unit Summary:
- A general method of integration, not restricted to analytic functions, uses the equation z = z(t) of C, where a ≤ t ≤ b:
∫_C f(z) dz = ∫ from a to b of f(z(t)) z′(t) dt (z′ = dz/dt)
- Cauchy’s integral theorem is the most important theorem in this chapter. It states that if
𝑓(𝑧)is analytic in a simply connected domain D, then for every closed path C in D,
∮𝐶 𝑓(𝑧)𝑑𝑧 = 0
Under the same assumptions, for any z₀ in D and any closed path C in D containing z₀ in its interior, we also have Cauchy's integral formula
f(z₀) = (1/2πi) ∮_C f(z)/(z − z₀) dz.
- Generalized Cauchy integral formula: under these assumptions, f(z) has derivatives of all orders in D, which are themselves analytic functions in D:
f⁽ⁿ⁾(z₀) = (n!/2πi) ∮_C f(z)/(z − z₀)ⁿ⁺¹ dz (n = 1, 2, …)
This implies Morera’s theorem (the converse of Cauchy’s integral theorem) and
Cauchy’s inequality which in turn implies Liouville’s theorem that an entire function that
is bounded in the whole complex plane must be constant.
Miscellaneous Exercises
1. If F(a) = ∮_C (4z² + z + 5)/(z − a) dz, where C: (x/2)² + (y/3)² = 1,
taken in the counterclockwise sense, then find F(3.5), F(i), F′(−1), F″(−i).
2. ∫_C (sin z)/z⁴ dz, integrated clockwise around the unit circle.
3. ∫_C e^z/zⁿ dz, integrated clockwise around the unit circle.
4. ∫_C z⁶/(2z − 1)⁶ dz, integrated clockwise around the unit circle.
5. ∫_C dz/((z − 2i)²(z − i/2)²), integrated clockwise around the unit circle.
6. ∮_C ((1 + z) sin z)/(2z − 1)² dz, C: |z − i| = 2, counterclockwise.
7. ∮_C exp(z²)/(z(z − 2i)²) dz, C: |z − 3i| = 2, clockwise.
8. ∮_C ln(z + 3)/((z − 2)(z + 1)²) dz, C the boundary of the square with vertices ±1.5, ±1.5i, counterclockwise.