Applied Mathematics III
Applied Mathematics III
Table of Contents 1
Math 331: Applied Mathematics III Lecture Notes I Ordinary Differential Equations 6
2.7 *The Power Series Solution Method . . . . . . . . . . . . . . . . . . . . . . . . 61 5.3.1 Green’s Theorem for Multiply Connected Regions . . . . . . . . . . . . . 125
2.8 Systems of ODE of the First Order . . . . . . . . . . . . . . . . . . . . . . . . 63 5.4 Surface Integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
2.8.1 Eigenvalue Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64 5.4.1 Normal Vector and Tangent plane to a Surface . . . . . . . . . . . . . . 128
2.8.2 The Method of Elimination: . . . . . . . . . . . . . . . . . . . . . . . . 69 5.4.2 Applications of Surface Integrals . . . . . . . . . . . . . . . . . . . . . . 132
2.8.3 Reduction of higher order ODEs to systems of ODE of the first order . . 72 5.5 Divergence and Stock’s Theorems . . . . . . . . . . . . . . . . . . . . . . . . . 136
2.9 Numerical Methods to Solve ODEs . . . . . . . . . . . . . . . . . . . . . . . . 74 5.6 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
2.9.1 Euler’s Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
2.9.2 Runge-Kutta Method . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
III Complex Analysis 144
2.10 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
6 COMPLEX ANALYTIC FUNCTIONS 146
3 *Nonlinear ODEs and Qualitative Analysis 76
6.1 Complex Numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
3.1 Critical Points and Stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
6.2 Complex Functions, Differential Calculus and Analyticity . . . . . . . . . . . . . 151
3.1.1 Stability for linear systems . . . . . . . . . . . . . . . . . . . . . . . . . 78
6.2.1 Limit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
3.1.2 Stability for nonlinear systems . . . . . . . . . . . . . . . . . . . . . . . 78
6.2.2 Derivatives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
3.2 Stability by Lyapunav’s Method . . . . . . . . . . . . . . . . . . . . . . . . . . 81
6.3 The Cauchy - Riemann Equation . . . . . . . . . . . . . . . . . . . . . . . . . 155
3.3 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
6.3.1 Test for Analyticity . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
6.4 Elementary Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
II Vector Analysis 82 6.4.1 Exponential Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
6.4.2 Trigonometric and Hyperbolic Functions . . . . . . . . . . . . . . . . . 160
4 Vector Differential Calculus 84
6.4.3 Polar form and Multi-Valuedness. . . . . . . . . . . . . . . . . . . . . . 162
4.1 Vector Calculus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
6.4.4 The Logarithmic Functions . . . . . . . . . . . . . . . . . . . . . . . . 162
4.1.1 Vector Functions of One Variable in Space . . . . . . . . . . . . . . . . 84
6.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
4.1.2 Limit of A Vector Valued Function . . . . . . . . . . . . . . . . . . . . 85
4.1.3 Derivative of a Vector Function . . . . . . . . . . . . . . . . . . . . . . 87 7 COMPLEX INTEGRAL CALCULUS 166
4.1.4 Vector and Scalar Fields . . . . . . . . . . . . . . . . . . . . . . . . . . 88 7.1 Complex Integration: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
4.2 The Gradient Field . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89 7.2 Cauchy’s Integral Theorem. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
4.2.1 Level Surfaces, Tangent Planes and Normal Lines . . . . . . . . . . . . 91 7.3 Cauchy’s Integral Formula and The Derivative of Analytic Functions. . . . . . . 173
4.3 Curves and Arc length . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93 7.4 Cauchy’s Theorem for Multiply Connected Domains . . . . . . . . . . . . . . . 176
4.4 Tangent, Curvature and Torsion . . . . . . . . . . . . . . . . . . . . . . . . . . 97 7.5 Fundamental Theorem of Complex Integral Calculus . . . . . . . . . . . . . . . 178
4.5 Divergence and Curl . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102 7.6 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
4.5.1 Potential . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
4.6 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108 8 TAYLOR AND LAURENT SERIES 182
8.1 Sequence and Series of Complex Numbers . . . . . . . . . . . . . . . . . . . . 182
5 Line and Surface Integrals 110 8.2 Complex Taylor Series. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
5.1 Line Integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110 8.3 Laurent Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
5.2 Line Integrals Independent of Path . . . . . . . . . . . . . . . . . . . . . . . . 112 8.4 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
5.3 Green’s Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
CONTENTS 4 CONTENTS 5
9 INTEGRATION BY THE METHOD OF RESIDUE. 194 This page is left blank intensionally.
9.1 Zeros and Classification of Singularities. . . . . . . . . . . . . . . . . . . . . . . 194
9.2 The Residue Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
9.3 Evaluation of Real Integrals. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
9.3.1 Improper Integrals: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
Chapter 1
Ordinary Differential Equations a single variable and such equations are called ordinary differential equations, which can be used
to model a phenomena of interest in the sciences, engineering, economics, ecological studies, and
other areas.
In the first section we will see the basic concepts and ideas and in the remaining sections we will
consider equations which involve the first derivative of a given independent variable with respect
to an independent variable, which are called Ordinary Differential Equations of the First Order.
The derivative 𝑑𝑦/𝑑𝑥 of a function 𝑦 = 𝑓 (𝑥) is itself another function 𝑓 ′ (𝑥) found by an appro-
2
priate rule of differentiation. For example, the function 𝑦 = 𝑒𝑥 is differentiable on the interval
2
(−∞, ∞) and by the Chain Rule its derivative is 𝑑𝑦/𝑑𝑥 = 2𝑥𝑒𝑥 . If we replace the right-hand
expression of the last equation by the symbol y, the equation becomes
𝑑𝑦
= 2𝑥𝑦. (1.1)
𝑑𝑥
In differentiation, the problem was ”Given a function 𝑦 = 𝑓 (𝑥), find its derivative.”
1.1 Basic Concepts and Ideas 8 1.1 Basic Concepts and Ideas 9
Now, the problem we face here is ”If we are given an equation such as (1.1), is there some way Classification by Order
or method by which we can find the unknown function 𝑦 = 𝑓 (𝑥) that satisfy the given equation,
The order of a differential equation (either ODE or PDE) is the order of the highest derivative
without prior knowledge how it was constructed?” These kind of problems are the ones we are
that appear in the equation. For example,
going to focus on in this part of the course.
𝑑𝑦 𝑑2 𝑦 𝑑𝑦
Definition 1.1.1. An equation involving derivatives of one or more dependent variables with 4𝑥 +𝑦 =𝑥 + 4 − 6𝑦 = 𝑒𝑥
𝑑𝑥 𝑑𝑥2 𝑑𝑥
respect to one or more independent variables is called a Differential Equation ( DE).
are first and second-order ordinary differential equations respectively.
Example 1.1.1. The general 𝑛th −order ordinary differential equation in one dependent variable is given by the
𝑑𝑦 𝑑3 𝑥 𝑑2 𝑥 ∂𝑣 ∂𝑣 general form
+ 𝑦 = 𝑥, + 5 2 + 3𝑥 = sin 𝑡, + + 5𝑣 = 2. (1.2)
𝑑𝑥 𝑑𝑡 3 𝑑𝑡 ∂𝑠 ∂𝑡 𝐹 (𝑥, 𝑦, 𝑦 ′ , 𝑦 ′′ , ..., 𝑦 (𝑛) ) = 0, (1.3)
are all Differential Equations. where F is a real-valued function of 𝑛 + 2 variables 𝑥, 𝑦, 𝑦 ′ , 𝑦 ′′ , ..., 𝑦 (𝑛) .
Differential equations can be classified by their type, order, and in term of linearity. We will Remark 1.1.2. For both practical and theoretical reasons we shall also make the assumption
see these classifications before going to the solution concept. hereafter that it is possible to explicitly solve the differential equation of the form (1.3) uniquely
for the highest derivative 𝑦 (𝑛) in terms of the remaining 𝑛 + 1 variables 𝑥, 𝑦, 𝑦 ′ , 𝑦 ′′ , ..., 𝑦 (𝑛−1) .
Classification by Type Then the differential equation (1.3) becomes
𝑑𝑛 𝑦
∙ If an equation contains only ordinary derivatives of one or more dependent variables with = 𝑓 (𝑥, 𝑦, 𝑦 ′ , ..., 𝑦 (𝑛−1) ), (1.4)
𝑑𝑥𝑛
respect to a single independent variable, then it is said to be an ordinary differential
equation (ODE). where 𝑓 is a real-valued continuous function and this is referred to as the normal form of (1.3).
For example, Example 1.1.2. The normal form of the first-order equation 4𝑥𝑦 ′ + 𝑦 = 𝑥 is
𝑑𝑦 𝑑2 𝑦 𝑑𝑦 𝑑3 𝑥 𝑑2 𝑥 𝑥−𝑦
+ 𝑦 = 𝑥, + 𝑥𝑦( )2 = 0 and + 5 2 + 3𝑥 = sin 𝑡 𝑦′ =
𝑑𝑥 𝑑𝑥 2 𝑑𝑥 𝑑𝑡 3 𝑑𝑡 4𝑥
and the normal form of the second-order equation 𝑦 ′′ − 𝑦 + 6𝑦 = 0 is
are all ordinary differential equations.
∙ If a function is defined in terms of two or more independent variables, the corresponding 𝑦 ′′ = 𝑦 ′ − 6𝑦.
derivative will be a partial derivative with respect to each independent variable. An equation
The first order ordinary differential equation is generally expressed as:
involving partial derivatives of one or more dependent variables of two or more independent
variables is called a partial differential equation (PDE). 𝐹 (𝑥, 𝑦, 𝑦 ′ ) = 0 or 𝑦 ′ = 𝑓 (𝑥, 𝑦).
For example,
∂ 2𝑢 ∂ 2𝑢 ∂𝑢 ∂𝑣 ∂𝑣 For example, the differential equation 𝑦 ′ + 𝑦 = 𝑥 is equivalent to 𝑦 ′ + 𝑦 − 𝑥 = 0. If 𝐹 (𝑥, 𝑦, 𝑦 ′ ) =
= 2 −2 and + + 5𝑣 = 2.
∂𝑥2 ∂𝑡 ∂𝑡 ∂𝑠 ∂𝑡 𝑦 ′ + 𝑦 − 𝑥, then the given differential equation becomes of the form 𝐹 (𝑥, 𝑦, 𝑦 ′ ) = 0.
are both partial differential equations.
In this part we will only consider the case of ordinary differential equations.
1.1 Basic Concepts and Ideas 10 1.2 Separable Differential Equations 11
1. 𝐹 (𝑥, ℎ(𝑥), ℎ′ (𝑥), ℎ′′ (𝑥), . . . , ℎ(𝑛) (𝑥)) is defined for all 𝑥 ∈ (𝑎, 𝑏) and where G(x) is an antiderivative (indefinite integral) of 𝑓 (𝑥).
2. 𝐹 (𝑥, ℎ(𝑥), ℎ′ (𝑥), ℎ′′ (𝑥), . . . , ℎ(𝑛) (𝑥)) = 0, for all 𝑥 ∈ (𝑎, 𝑏), Example 1.2.1.
∫𝑥
then 𝑦 = ℎ(𝑥) is called a ( an Explicit) solution of the ODE on [𝑎, 𝑏]. 1. If 𝑦 ′ = 𝑥, then 𝑦(𝑥) = 0
𝑡𝑑𝑡 + 𝐶 = 12 𝑥2 + 𝐶
∫𝑥
Sometimes a solution of a differential equation may appear as an implicit function, i.e. the solution 2. If 𝑦 ′ = 𝑠𝑖𝑛(1 + 𝑥2 ), then 𝑦(𝑥) = 0
𝑠𝑖𝑛(1 + 𝑡2 )𝑑𝑡 + 𝐶. However, it is difficult to find an
can be expressed implicitly in the form: ℎ(𝑥, 𝑦) = 0, where ℎ is some continuous function of 𝑥 explicit solution formula for this problem. (In such cases one may use numerical methods
and 𝑦, and such solution is called an Implicit Solution of the DE. to get approximate solutions.)
1.2 Separable Differential Equations 12 1.2 Separable Differential Equations 13
Many first-order ODEs can be reduced or transformed to the form which implies
−1 𝑒2𝑥
′ = + 𝑐,
𝑔(𝑦)𝑦 = 𝑓 (𝑥), 𝑦 2
where 𝑐 is a constant of integration. Then solve for 𝑦 to get
where 𝑔 and 𝑓 are continuous functions. Then, from elementary calculus we have:
−2
𝑦(𝑥) = ,
(𝑒2𝑥 + 𝑐)
𝑔(𝑦)𝑑𝑦 = 𝑓 (𝑥)𝑑𝑥. which is an explicit solution of the given first order differential equation.
Such type of equations are called separable equations. Integrating both sides we get:
� �
𝑔(𝑦)𝑑𝑦 = 𝑓 (𝑥)𝑑𝑥 + 𝑐 Remark 1.2.1. It is recommended to write an explicit solution to the differential equation when
ever possible. However, sometimes solving for the dependent variable (in our case 𝑦) may not
is the general solution of the given equation. be possible. In those cases one can represent the final solution by an implicit solution of the
′
Example 1.2.2. Solve the DE 6𝑦𝑦 + 4𝑥 = 0. differential equation.
The equation 6𝑦𝑦 ′ + 4𝑥 = 0 is equivalent to There are some differential equations which are not separable, but they can be transformed to
𝑑𝑦 a separable form by simple change of variables. We will see some of the possible substitutions
6𝑦 = −4𝑥
𝑑𝑥 hereunder.
and then 6𝑦𝑑𝑦 = −4𝑥𝑑𝑥. Integrating both sides,
� � A. Linear Substitution
6𝑦𝑑𝑦 = (−4𝑥)𝑑𝑥,
Suppose we have a differential equation that can be written in the form:
gives
3𝑦 2 + 2𝑥2 = 𝐶, 𝑦 ′ = 𝑔(𝑎𝑥 + 𝑏𝑦 + 𝑐) (1.7)
which is an implicit solution of the given first order differential equation. Such an equation is not in general separable. However, if we set 𝑢 = 𝑎𝑥 + 𝑏𝑦 + 𝑐, we get
Example 1.2.3. Solve the DE 𝑦 ′ = 𝑦 2 𝑒−𝑥 . 𝑑𝑢 𝑑𝑦
=𝑎+𝑏 .
𝑑𝑥 𝑑𝑥
First rewrite the equation as Or
𝑑𝑦 𝑑𝑦 1 𝑑𝑢 𝑎
= 𝑦 2 𝑒2𝑥 . = − .
𝑑𝑥 𝑑𝑥 𝑏 𝑑𝑥 𝑏
If 𝑦 ∕= 0, this has the differential form Thus (1.7) will be transformed into
1
d𝑦 = 𝑒2𝑥 d𝑥, 1 𝑑𝑢 𝑎
𝑦2 − = 𝑔(𝑢),
𝑏 𝑑𝑥 𝑏
where the variables have been separated. Integrating both sides we have where 𝑢 and 𝑥 can be separated.
� �
1
𝑑𝑦 = 𝑒2𝑥 𝑑𝑥, Example 1.2.4. Solve the differential equation 𝑦 ′ = (𝑥 + 𝑦)2 .
𝑦2
1.2 Separable Differential Equations 14 1.2 Separable Differential Equations 15
Let 𝑢 = 𝑥 + 𝑦. Then 𝑢′ = 1 + 𝑦 ′ which implies 𝑦 ′ = 𝑢′ − 1. With this substitution the equation Suppose we have an equation that can be written in the form
′ 2
𝑦 = (𝑥 + 𝑦) is equivalent to 𝑦
𝑦 ′ = 𝑔( ).
𝑥
𝑑𝑢
𝑢′ − 1 = 𝑢2 ⇐⇒ = 𝑢2 + 1. Let us substitute
𝑑𝑥
𝑦
𝑢= .
Then 𝑥
𝑑𝑢
= 𝑑𝑥 Then
𝑢2 + 1 𝑑𝑢 𝑥𝑦 ′ − 𝑦 1 𝑦
= = 𝑦′ − 2 .
and integrate both sides, � � 𝑑𝑥 𝑥2 𝑥 𝑥
𝑑𝑢
= 𝑑𝑥 This implies,
𝑢2 + 1 𝑦
𝑦 ′ = 𝑥𝑢′ + = 𝑥𝑢′ + 𝑢.
to get arctan 𝑢 = 𝑥 + 𝑐 for an arbitrary constant 𝑐. Substituting back 𝑢 = 𝑥 + 𝑦 in the last 𝑥
equation gives us the general solution of the given DE to be arctan(𝑥 + 𝑦) = 𝑥 + 𝑐. Thus, the differential equation
𝑦
𝑦 ′ = 𝑔( )
Example 1.2.5. Solve the differential equation (2𝑥 − 4𝑦 + 5)𝑦 ′ + 𝑥 − 2𝑦 + 3 = 0. 𝑥
is reduced to the equation 𝑥𝑢′ = 𝑔(𝑢) − 𝑢 which is equivalent to the differential equation
Solution 𝑑𝑥 𝑑𝑢
= .
𝑥 𝑔(𝑢) − 𝑢
Let 𝑢 = 𝑥 − 2𝑦. Then, 𝑢′ = 1 − 2𝑦 ′ which implies 𝑦 ′ = 12 (1 − 𝑢′ ). Therefore, the equation
Then by integrating we obtain a general solution.
(2𝑥 − 4𝑦 + 5)𝑦 ′ + 𝑥 − 2𝑦 + 3 = 0 becomes (2𝑢 + 5) 12 (1 − 𝑢′ ) + 𝑢 + 3 = 0. Simplifying this we
get (2𝑢 + 5) − (2𝑢 + 5)𝑢′ + 2𝑢 + 6 = 0 which implies Example 1.2.6. Solve 𝑥2 𝑦 ′ = 𝑥2 + 𝑥𝑦 + 𝑦 2 .
� �
2𝑢 + 5 𝑑𝑢
(2𝑢 + 5)𝑢′ = 4𝑢 + 11 ⇐⇒ = 1. Solution
4𝑢 + 11 𝑑𝑥
Example 1.2.7. Solve the DE: 2𝑥𝑦𝑦 ′ = 𝑦 2 − 𝑥2 . Remark 1.2.3. The number of initial conditions necessary to determine a unique solution equals
the order of the differential equation.
Solution:
Example 1.2.8. Solve the IVP 𝑦 ′′ + 𝑦 = 0, 𝑦(0) = 3 and 𝑦 ′ (0) = −4.
Divide both sides by 𝑥2 , for 𝑥 ∕= 0, to get
�𝑦� � 𝑦 �2 Solution:
2 𝑦′ = − 1.
𝑥 𝑥
First find the general solution with two unknown constants. Given
Let 𝑢 = 𝑥𝑦 . Then 𝑔(𝑢) = 12 (𝑢 − 𝑢1 ) and we get
Now we substitute 𝑢 = 𝑦
to get Example 1.2.9. Suppose 𝑦 ′′ + 𝑦 = 0, 𝑦(0) = 3 and 𝑦( 𝜋2 ) = 5.
𝑥
𝑥2 + 𝑦 2 = 𝐴𝑥3 . This is a boundary value problem with 𝑦(0) = 𝑐1 which implies 𝑐1 = 3 and 𝑦( 𝜋2 ) = 𝑐2 , which
implies 𝑐2 = 5.
Notice that the solution of each of the previous examples contains arbitrary constants. To
determine the constants in these solutions we need to impose some additional conditions. For Hence, the particular solution of this BVP is
example, for the DE equation 6𝑦𝑦 ′ + 4𝑥 = 0, the equation 3𝑦 2 + 2𝑥2 = 𝐶 represents an implicit 𝑦(𝑥) = 3 cos 𝑥 + 5 sin 𝑥.
solution for an arbitrary constant 𝐶. But if 𝑦(0) = 3 is given in addition, then 𝐶 = 27 and
3𝑥2 + 2𝑥 = 27 will be a specific solution of the given DE. Two fundamental questions arise in considering an initial-value problem and these are:
Theorem 1.2.5 (Existence and uniqueness of a solution). If 𝑓 (𝑥, 𝑦) is continuous function on is called an exact differential equation in some domain 𝐷 (an open connected set of points)
some rectangular region 𝑅 in the 𝑥𝑦− plane containing the point (𝑎, 𝑏) in its interior , then the if there is a function 𝐹 (𝑥, 𝑦) such that
problem ∂𝐹 ∂𝐹
𝑦 ′ = 𝑓 (𝑥, 𝑦), with 𝑦(𝑎) = 𝑏 (1.8) = 𝑀 (𝑥, 𝑦) and = 𝑁 (𝑥, 𝑦),
∂𝑥 ∂𝑦
has at least one solution defined on some open interval of 𝑥 containing 𝑥 = 𝑎. for all (𝑥, 𝑦) ∈ 𝐷.
If, in addition, the function
∂𝑓 If we can find a function 𝐹 (𝑥, 𝑦) such that
∂𝑦
∂𝐹 ∂𝐹
is continuous on R, then the solution to the above equation (1.8) is unique on some open interval = 𝑀 (𝑥, 𝑦) and = 𝑁 (𝑥, 𝑦),
∂𝑥 ∂𝑦
containing 𝑥 = 𝑎.
then the differential equation 𝑀 (𝑥, 𝑦)𝑑𝑥 + 𝑁 (𝑥, 𝑦)𝑑𝑦 = 0 is just 𝑀 (𝑥, 𝑦)𝑑𝑥 + 𝑁 (𝑥, 𝑦)𝑑𝑦 =
Remark 1.2.6. The above condition for uniqueness can be eased by using a condition piecewise 𝑑𝐹 = 0. But recall that, if 𝑑𝐹 = 0, then 𝐹 (𝑥, 𝑦) = constant. The equation 𝐹 (𝑥, 𝑦) = 𝑐,
continuous instead of the condition that “ ∂𝑓
∂𝑦
is continuous”. where 𝑐 is an arbitrary constant, implicitly defines the general solution of the deferential equation
𝑀 (𝑥, 𝑦)𝑑𝑥 + 𝑁 (𝑥, 𝑦)𝑑𝑦 = 0.
After knowing the exactness of a differential equation, the next question is ”How can we solve 𝑥 sin 𝑦 − 𝑦 2 = 𝐶
the given equation?” The method for this is described here below.
determines 𝑦(𝑥) implicitly.
Suppose a differential equation 𝑀 (𝑥, 𝑦)𝑑𝑥+𝑁 (𝑥, 𝑦)𝑑𝑦 = 0 is exact. Then, there exists a function
Example 1.3.4. Solve the differential equation
𝐹 (𝑥, 𝑦) such that
∂𝐹 ∂𝐹
𝑀= and 𝑁 = . (𝑥3 + 3𝑥𝑦 2 )𝑑𝑥 + (3𝑥2 𝑦 + 𝑦 3 )𝑑𝑦 = 0.
∂𝑥 ∂𝑦
∂𝐹
From 𝑀 = ∂𝑥
, we have (by integrating with respect to 𝑥)
� Solution
𝐹 (𝑥, 𝑦) = 𝑀 𝑑𝑥 + 𝐴(𝑦), (1.9)
Step 1: Checking Exactness
where 𝐴(𝑦) is only a function of 𝑦 but constant with respect to 𝑥.
Now to determine 𝐴(𝑦) (the constant of integration), differentiate equation (1.9) with respect Let 𝑀 (𝑥, 𝑦) = 𝑥3 + 3𝑥𝑦 2 and 𝑁 (𝑥, 𝑦) = 3𝑥2 𝑦 + 𝑦 3 . Then
to 𝑦 to get ∂𝑀 ∂𝑁
= 6𝑥𝑦 = .
∂𝑦 ∂𝑥
�
∂𝐹 ∂ Therefore the given equation is exact.
= 𝑀 𝑑𝑥 + 𝐴′ (𝑦)
∂𝑦 ∂𝑦
which implies
� � Step 2: Finding Implicit Solution
∂𝑀 ∂𝑀
𝑁 (𝑥, 𝑦) = 𝑑𝑥 + 𝐴′ (𝑦) and hence 𝐴′ (𝑦) = 𝑁 (𝑥, 𝑦) − 𝑑𝑥
∂𝑦 ∂𝑦
Then to find 𝐹 (𝑥, 𝑦) we use
by exactness. Therefore, � �
� � � � 1 3
∂𝑀 𝐹 (𝑥, 𝑦) = 𝑀 𝑑𝑥 + 𝐴(𝑦) = (𝑥3 + 3𝑥𝑦 2 )𝑑𝑥 + 𝐴(𝑦) = 𝑥4 + 𝑥2 𝑦 2 + 𝐴(𝑦)
𝐴(𝑦) = 𝑁 (𝑥, 𝑦) − 𝑑𝑥 𝑑𝑦. 4 2
∂𝑦
where 𝐴(𝑦) is a function of 𝑦 only. To find A(y);
Example 1.3.3. Solve the differential equation
∂𝐹
sin 𝑦𝑑𝑥 + (𝑥 cos 𝑦 − 2𝑦)𝑑𝑦 = 0. = 3𝑥2 𝑦 + 𝐴′ (𝑦) = 𝑁 = 3𝑥2 𝑦 + 𝑦 3 ,
∂𝑦
1.3 Exact Differential Equations 22 1.3 Exact Differential Equations 23
∫
which implies that 𝐴′ (𝑦) = 𝑦 3 and then 𝐴(𝑦) = 𝑦 3 𝑑𝑦 = 14 𝑦 4 + 𝐶. (Of course 𝜇(𝑥, 𝑦) ∕= 0 so that the two equations are equivalent.)
Therefore,
Example 1.3.5. Consider the differential equation
1 4 3 2 2 1 4
𝐹 (𝑥, 𝑦) = 𝑥 + 𝑥 𝑦 + 𝑦 +𝐶
4 2 4 (3𝑦 + 4𝑥𝑦 2 )𝑑𝑥 + (2𝑥 + 3𝑥2 𝑦)𝑑𝑦 = 0.
1 4
= (𝑥 + 6𝑥2 𝑦 2 + 𝑦 4 ) + 𝐶 (1.10)
4 Let 𝑀 (𝑥, 𝑦) = 3𝑦 + 4𝑥𝑦 2 and 𝑁 (𝑥, 𝑦) = 2𝑥 + 3𝑥2 𝑦. Then
Step 3: Checking ∂𝑀 ∂𝑁
= 3 + 8𝑥𝑦 and = 2 + 6𝑥𝑦,
∂𝑦 ∂𝑥
Differentiate implicitly to check for 𝑦 ′ : which implies
∂𝑀 ∂𝑁
∕= .
1 ∂𝑦 ∂𝑥
(4𝑥3 + 12𝑥𝑦 2 + 12𝑥2 𝑦𝑦 ′ + 4𝑦 3 𝑦 ′ ) = 0
4 Hence the DE is not exact.
which implies
But if 𝜇(𝑥, 𝑦) = 𝑥2 𝑦 then 𝜇(𝑥, 𝑦)𝑀 𝑑𝑥 + 𝜇(𝑥, 𝑦)𝑁 𝑑𝑦 = 0 is exact, since
3 2 2 3 ′
𝑥 + 3𝑥𝑦 + (3𝑥 𝑦 + 𝑦 )𝑦 = 0
∂(𝜇(𝑥, 𝑦)𝑀 ) ∂(𝜇(𝑥, 𝑦)𝑁 )
and then = 6𝑥2 𝑦 + 12𝑥3 𝑦 2 = .
∂𝑦 ∂𝑥
(𝑥3 + 3𝑥𝑦 2 )𝑑𝑥 + (3𝑥2 𝑦 + 𝑦 3 )𝑑𝑦 = 0.
Suppose we have a differential equation which is not exact but it can be made exact by an
Exercise 1.3.3. Solve each of the following differential equations. integrating factor. Then we can ask the following fundamental questions.
1. (𝑦 + 𝑒𝑦 )𝑑𝑥 + 𝑥(1 + 𝑒𝑦 )𝑑𝑦 = 0; 𝑦 = 0 when 𝑥 = 1. 1. How can we find the integrating factor 𝜇 ?
𝑑𝑦 2𝑥 + 1
2. = ; 𝑦(0) = 0. 2. Given 𝜇, how can we solve the problem?
𝑑𝑥 2𝑦 + 1
3. sin ℎ𝑥 cos 𝑦𝑑𝑥 = cos ℎ𝑥 sin 𝑦𝑑𝑦. The method is described below.
Clearly 𝜇(𝑥, 𝑦) is any (non-zero) solution of the equation
Integrating Factors
∂ ∂
(𝜇𝑁 ) = (𝜇𝑁 ) (1.11)
The differential equation 𝑦𝑑𝑥 + 2𝑥𝑑𝑦 = 0 is not exact. But if we multiply this equation by y, the ∂𝑦 ∂𝑥
equation is changed to exact equation. That is, which is equivalent to the equation
is exact, since
∂𝑦 2 ∂(2𝑥𝑦) This is a first-order partial differential equation in 𝜇. However the integrating factor 𝜇 can be
= 2𝑦 = .
∂𝑦 ∂𝑥 found to be a function of 𝑥 alone 𝜇(𝑥) (or a function of 𝑦 alone 𝜇(𝑦)).
Definition 1.3.4. If the differential equation 𝑀 (𝑥, 𝑦)𝑑𝑥 + 𝑁 (𝑥, 𝑦)𝑑𝑦 = 0 is not exact but the Then in this case equation (1.11) will be reduced to
differential equation 𝑑𝜇 𝑑𝜇 𝑀 𝑦 − 𝑁𝑥
𝜇(𝑥, 𝑦)𝑀 (𝑥, )𝑑𝑥 + 𝜇(𝑥, 𝑦)𝑁 (𝑥, 𝑦)𝑑𝑦 = 0 𝜇𝑀𝑦 = 𝑁 + 𝜇𝑁𝑥 or equivalently = 𝜇( ) (1.12)
𝑑𝑥 𝑑𝑥 𝑁
is exact, then the multiplicative function 𝜇(𝑥, 𝑦) is called an integrating factor of the DE. which is a separable differential equation.
1.3 Exact Differential Equations 24 1.4 Linear First Order Differential Equations 25
� � ∂𝐹
𝑑𝜇 𝑀 𝑦 − 𝑁𝑥 (3𝑥 − 𝑒−2𝑦 )𝑒3𝑦 = = 3𝑥𝑒3𝑦 + 𝐴′ (𝑦).
= −𝜇 , which is a separable differential equation. ∂𝑦
𝑑𝑦 𝑀
Then 3𝑥𝑒3𝑦 − 𝑒 = 3𝑥𝑒3𝑦 + 𝐴′ (𝑦) which implies that
If the fraction �
𝑀𝑦 − 𝑁𝑥
𝑀 𝐴′ (𝑦) = −𝑒𝑦 and hence 𝐴(𝑦) = − 𝑒𝑦 𝑑𝑦 = −𝑒𝑦 + 𝐵.
is a function of 𝑦 alone, then
∫ Therefore, 𝐹 (𝑥, 𝑦) = 𝑥𝑒3𝑦 − 𝑒𝑦 + 𝐵 = 𝑐𝑜𝑛𝑠𝑡𝑎𝑛𝑡. That means 𝑥𝑒3𝑦 − 𝑒𝑦 = 𝐶, where 𝐶 is an
𝜇(𝑦) = 𝑒− 𝑞(𝑦)𝑑𝑦
.
arbitrary constant, defines the implicit solution of the Differential Equation in (1.13).
Example 1.3.6. Consider the equation
𝑑𝑥 + (3𝑥 − 𝑒−2𝑦 )𝑑𝑦 = 0. (1.13) 1.4 Linear First Order Differential Equations
∂𝑀 ∂𝑁 ∂𝑀 ∂𝑁
Let 𝑀 = 1 and 𝑁 = 3𝑥 − 𝑒−2𝑦 . Then ∂𝑦
= 0 and ∂𝑥
= 3 and hence ∂𝑦
∕= ∂𝑥
which implies Consider the general first-order linear differential equation
that the given differential equation is not exact.
Assume that the given equation has an integrating factor. But 𝑎1 (𝑥)𝑦 ′ + 𝑎0 (𝑥)𝑦 = 𝑓 (𝑥), 𝑎1 (𝑥) ∕= 0 (1.14)
𝑀 𝑦 − 𝑁𝑥 0−3 −3𝑒2𝑦 By dividing both sides by 𝑎1 (𝑥) ∕= 0, we get 𝑦 ′ + 𝑝(𝑥)𝑦 = 𝑞(𝑥), where
= =
𝑁 3𝑥 − 𝑒−2𝑦 3𝑥𝑒2𝑦
𝑎0 (𝑥) 𝑓 (𝑥)
which is not a function of 𝑥 alone. Hence obtaining 𝜇(𝑥) is not possible. 𝑝(𝑥) = and 𝑞(𝑥) = .
𝑎1 (𝑥) 𝑎1 (𝑥)
1.4 Linear First Order Differential Equations 26 1.4 Linear First Order Differential Equations 27
Here we assume that 𝑝(𝑥) and 𝑞(𝑥) are continuous. 2. If (𝑥 + 2)𝑦 ′ − 𝑥𝑦 = 0, then
𝑦′ 𝑥
There is a general approach to solve linear equations. To solve for 𝑦(𝑥) from the given equation = .
𝑦 𝑥+2
we start with the simplest case, when 𝑞(𝑥) = 0. That is, (1.14) becomes
We integrate � �
𝑑𝑦 𝑥
′ = 𝑑𝑥
𝑦 + 𝑝(𝑥)𝑦 = 0. (1.15) 𝑦 𝑥+2
𝐴
This problem is called a homogeneous version of (1.14). Now to solve (1.15) first we get 𝑦 ′ = to get 𝑦(𝑥) = 𝐴𝑒(𝑥−2 ln(𝑥+2) ), or 𝑦(𝑥) = 𝑒𝑥 which is the general solution of the
(𝑥 + 2)2
−𝑝(𝑥)𝑦 and we divide both sides by 𝑦 and get given equation.
𝑦′
= −𝑝(𝑥). Now we want to solve the general first - order linear ordinary differential equation
𝑦
Then by integrating � � 𝑦 ′ + 𝑝(𝑥)𝑦 = 𝑞(𝑥) (1.16)
𝑑𝑦
=− 𝑝(𝑥)𝑑𝑥
𝑦 This can be done in two steps.
we get �
ln ∣𝑦∣ = − 𝑝(𝑥)𝑑𝑥 + 𝐶,
Step 1.
which implies ∫
∫ ∫
∣𝑦∣ = 𝑒𝑐− 𝑝(𝑥)𝑑𝑥
= 𝐵𝑒− 𝑝(𝑥)𝑑𝑥
, for 𝐵 > 0. Consider the homogeneous version of (1.16) and find the solution to be 𝑦ℎ (𝑥) = 𝐴𝑒− 𝑝(𝑥)𝑑𝑥
,
where ℎ indicate the general solution for the homogeneous part of the equation
Therefore,
∫
𝑦(𝑥) = 𝐴𝑒− 𝑝(𝑥)𝑑𝑥
, where 𝐴 is an arbitrary constant,
Step 2.
is a general solution of (1.15).
To get the solution for the non-homogeneous part of the equation we vary the constant 𝐴 with
Example 1.4.1. Solve the following differential equations. different value of 𝑥.
Hence we assume that
1. 𝑦 ′ + 2𝑥𝑦 = 0 ∫
𝑦(𝑥) = 𝐴(𝑥)𝑒− 𝑝(𝑥)𝑑𝑥
(1.17)
2. (𝑥 + 2)𝑦 ′ − 𝑥𝑦 = 0
is a solution for (1.16). Then (1.17) must satisfy (1.16). i.e.
Solution: ( ∫ )
𝑝(𝑥)𝑑𝑥 ′
( ∫ )
𝐴(𝑥)𝑒− + 𝑝(𝑥) 𝐴(𝑥)𝑒− 𝑝(𝑥)𝑑𝑥 = 𝑞(𝑥),
𝑦′
1. If 𝑦 ′ + 2𝑥𝑦 = 0, then 𝑦
= −2𝑥. We integrate which implies
� � ∫ ∫ ∫
𝑑𝑦
= −2 𝑥𝑑𝑥 𝐴′ (𝑥)𝑒− 𝑝(𝑥)𝑑𝑥
+ 𝐴(𝑥)(−𝑝(𝑥))𝑒− 𝑝(𝑥)𝑑𝑥
+ 𝑝(𝑥)𝐴(𝑥)𝑒− 𝑝(𝑥)𝑑𝑥
= 𝑞(𝑥).
𝑦
2
to get ln ∣𝑦∣ = 𝑥2 + 𝐶 and hence 𝑦(𝑥) = 𝐴𝑒−𝑥 is the general solution. Simplifying this gives us,
∫
𝐴′ (𝑥) = 𝑞(𝑥)𝑒 𝑝(𝑥)𝑑𝑥
.
1.4 Linear First Order Differential Equations 28 1.5 *Nonlinear Differential Equations of the First Order 29
∫ ∫ ∫
Now integrate both sides 𝐴′ (𝑥)𝑑𝑥 = 𝑞(𝑥)𝑒 𝑝(𝑥)𝑑𝑥 𝑑𝑥 to get Step 4. We integrate � �
� 𝑑(𝑦𝑒3𝑥 )
∫ 𝑑𝑥 = 6𝑒3𝑥
𝐴(𝑥) = 𝑞(𝑥)𝑒 𝑝(𝑥)𝑑𝑥 𝑑𝑥 + 𝐶. 𝑑𝑥
and get 𝑦(𝑥)𝑒3𝑥 = 2𝑒3𝑥 + 𝐶. Then solve for 𝑦(𝑥) to get the general solution 𝑦(𝑥) =
Hence the general solution of the non-homogeneous ODE (1.16) is given by: 𝐶𝑒−3𝑥 + 2 for an arbitrary constant 𝐶.
∫ ∫
�� ∫
�
𝑦(𝑥) = 𝐴(𝑥)𝑒− 𝑝(𝑥)𝑑𝑥 = 𝑒− 𝑝(𝑥)𝑑𝑥 𝑞(𝑥)𝑒 𝑝(𝑥)𝑑𝑥 𝑑𝑥 + 𝐶
∫ ∫
� ∫ 1.5 *Nonlinear Differential Equations of the First Order
= 𝐶𝑒− 𝑝(𝑥)𝑑𝑥 + 𝑒− 𝑝(𝑥)𝑑𝑥 𝑞(𝑥)𝑒 𝑝(𝑥)𝑑𝑥 𝑑𝑥
= 𝑦ℎ (𝑥) + 𝑦𝑝 (𝑥) Some nonlinear differential equations can be reduced to linear form. In this section we will
consider three famous nonlinear equations: Bernoulli Equation, Riccati Equation and Clairuat
Remark 1.4.1. It may not be necessary to memorize this long formula for 𝑦(𝑥). Instead, we can
Equation
carry out the following procedure.
Step 1. If the differential equation is linear, 𝑦 ′ + 𝑝(𝑥)𝑦 = 𝑞(𝑥), then first compute 1.5.1 The Bernoulli Equation
∫
𝑝(𝑥)𝑑𝑥
𝑒 . A differential equation of the form
This is called an integrating factor for the linear equation.
𝑦 ′ + 𝑝(𝑥)𝑦 = 𝑞(𝑥)𝑦 𝛼 ,
Step 2. Multiply both sides of the differential equation by the integrating factor.
where 𝛼 is a constant, is called Bernoulli Equation.
Step 3. Write the left side of the resulting equation as the derivative of the product of 𝑦 and the If 𝛼 = 0, then the equation is linear and if 𝛼 = 1, then the equation is separable. We have seen
integrating factor. The integrating factor is designed to make this possible. The right side these two cases in the previous section.
is a function of just 𝑥. For 𝛼 ∕= 1, use the change of variable 𝑢 = 𝑦 1−𝛼 . Then by differentiating with respect to 𝑥, we
Step 4. Integrate both sides of this equation with respect to 𝑥 and solve the resulting equation for get 𝑢′ = (1 − 𝛼)𝑦 −𝛼 𝑦 ′ . But, from 𝑦 ′ + 𝑝(𝑥)𝑦 = 𝑞(𝑥)𝑦 𝛼 , we get 𝑦 ′ = 𝑞(𝑥)𝑦 𝛼 − 𝑝(𝑥)𝑦. Then
Since 𝛼 = 2, let 𝑢 = 𝑦 1−2 = 𝑦 −1 . Then 𝑢′ = −𝑦 −2 𝑦 ′ and substituting 𝐴𝑦 − 𝐵𝑦 2 for 𝑦 ′ we get A nonlinear differential equation of the form
′ −2 2 −1 ′
𝑢 = −𝑦 (𝐴𝑦 −𝐵𝑦 ) = 𝐵 −𝐴𝑦 = 𝐵 −𝐴𝑢. Then we get the differential equation 𝑢 +𝐴𝑢 = 𝐵
𝑦 = 𝑥𝑦 ′ + 𝑔(𝑦 ′ ),
which is equivalent to the equation 𝑑𝑢 − (𝐵 − 𝐴𝑢)𝑑𝑥 = 0. This implies
Therefore, the general solution of the original differential equation is Solving 𝑦 ′′ = 0 gives us the general solution 𝑦 = 𝑎𝑥 + 𝑏 and solving 𝑥 + 𝑔 ′ (𝑦 ′ ) = 0 gives us a
singular solution (include definition of singular solution in the basic section ).
1 1
𝑦= = 𝐵
.
𝑢 𝐴
+ 𝐶𝑒−𝐴𝑥 Example 1.5.3. Solve the Clairuat equation
1
1.5.2 The Riccati Equation. 𝑦 = 𝑥𝑦 ′ + .
𝑦′
A differential equation of the form
Solution
′ 2
𝑦 = 𝑝(𝑥)𝑦 + 𝑞(𝑥)𝑦 + 𝑟(𝑥)
Differentiating both sides with respect to 𝑥 to get
is called Riccati equation. If 𝑝(𝑥) ≡ 0, then the equation is linear.
𝑦 ′′
𝑦 ′ = 𝑦 ′ + 𝑥′′ − .
(𝑦 ′ )2
If we can obtain one particular solution 𝑠(𝑥) of the Riccati equation, then the change of variables
This implies � �
1 1
𝑦 = 𝑠(𝑥) + 𝑦 ′′ 𝑥 − =0
𝑧 (𝑦 ′ )2
transforms the Riccati equation in to a linear equation in 𝑥 and 𝑧. Then we find the general and then solving 𝑦 ′′ = 0 gives us a general solution 𝑦 = 𝑎𝑥 + 𝑏 and solving
solution of this linear equation and we use it to write the general solution of the original Riccati 1
𝑥− =0
equation. 1 − (𝑦 ′ )2
1 1 √
gives us a singular solution. Then (𝑦 ′ )2 = which implies 𝑦 ′ = √ . Hence 𝑦 = 2 𝑥 + 𝑐 is a
Example 1.5.2. Solve the Riccati equation 𝑥 ± 𝑥
singular solution.
′ 1 1
𝑦 = 2 𝑦 2 − 𝑦 + 1.
𝑥 𝑥 Remark 1.5.1. The general solution of the Clairuat equation is 𝑦 = 𝑎𝑥 + 𝑏. Therefore, our
(Hint: 𝑦 = 𝑥 is one solution.) main focus for such equation is the singular solution.
1.6 Exercises 32
1.6 Exercises
1. 𝑥𝑦 ′ + 𝑦 = 6𝑥2
In this section, we will focus on the general theory of linear ordinary differential equations before
we start to discuss about solving such problems.
Definition 2.1.1. A linear ordinary differential equation of order 𝑛 in the dependent variable 𝑦
and independent variable 𝑥 is an equation which can be expressed in the form:
where 𝑎𝑛 (𝑥) ∕≡ 0 and the functions 𝑎0 , . . . , 𝑎𝑛 are continuous real- valued functions of 𝑥 ∈ [𝑎, 𝑏].
The function 𝑓 (𝑥) is called the non-homogeneous term and all the points 𝑥𝜖 ∈ [𝑎, 𝑏] in which
𝑎𝑛 (𝑥𝜖 ) = 0 are called singular points of the DE (2.1).
2.2 General Solution of Homogeneous Linear ODEs 34 2.2 General Solution of Homogeneous Linear ODEs 35
If 𝑓 (𝑥) ≡ 0, then (2.1) is reduced to: Theorem 2.2.1 (Linear Combination of Solutions). If 𝑦1 , 𝑦2 , . . . , 𝑦𝑘 are solutions of the homo-
geneous linear ODE (2.1) and if 𝑐1 , 𝑐2 , . . . 𝑐𝑘 are arbitrary constants, then the linear combination
𝑎𝑛 (𝑥)𝑦 (𝑛) + 𝑎𝑛−1 (𝑥)𝑦 (𝑛−1) + ⋅ ⋅ ⋅ + 𝑎1 (𝑥)𝑦 ′ + 𝑎0 (𝑥) = 0 (2.2)
𝑘
�
This equation is known as homogeneous Linear ODE of order 𝑛. 𝑦 = 𝑐1 𝑦1 + 𝑐2 𝑦2 + . . . + 𝑐𝑘 𝑦𝑘 = 𝑐𝑖 𝑦𝑖
𝑖=1
Example 2.1.1. The equation 𝑦 ′′ + 3𝑥𝑦 ′ + 𝑥3 𝑦 = 𝑒𝑥 is a non homogeneous linear ordinary is also a solution of (2.1). That is, any linear combination of solutions of a linear homogeneous
differential equation of the 2nd order, whereas 𝑦 ′′ + 3𝑥𝑦 ′ + 𝑥3 𝑦 = 0 is a homogeneous linear differential equation is also a solution.
ordinary differential equation of the 2nd order.
Definition 2.2.2 (Linearly Dependent and Linearly Independent Functions).
Theorem 2.1.2 (Basic Existence Theorem for IVP). Consider the linear ODE given in (2.1),
where 𝑎0 , 𝑎1 , . . . , 𝑎𝑛−1 , 𝑎𝑛 and 𝑓 are continuous functions on the interval [𝑎, 𝑏] and 𝑎𝑛 (𝑥) ∕= 1. The functions 𝑓1 , 𝑓2 , . . . , 𝑓𝑛 are said to be Linearly Dependent (LD) on some interval [𝑎, 𝑏]
0, ∀𝑥 ∈ [𝑎, 𝑏]. Furthermore, let 𝑥0 be any point in [𝑎, 𝑏] and let 𝑐0 , 𝑐1 . . . 𝑐𝑛−1 be arbitrary real if there are constants 𝑐1 , 𝑐2 , . . . , 𝑐𝑛 , not all zero, such that
constants. Then there exists a unique solution function 𝑔(𝑥) of (2.1) on [𝑎, 𝑏] satisfying the initial
𝑐1 𝑓1 (𝑥) + 𝑐2 𝑓2 (𝑥) + . . . + 𝑐𝑛 𝑓𝑛 (𝑥) = 0 (2.4)
conditions,
𝑔(𝑥0 ) = 𝑐0 , 𝑔 ′ (𝑥0 ) = 𝑐1 , . . . , 𝑔 𝑛−1 (𝑥0 ) = 𝑐𝑛−1 . for all 𝑥 ∈ [𝑎, 𝑏].
𝑦 ′′ + 𝑦 = 0. (2.3) 1. The functions 𝑓1 (𝑥) = 𝑒𝑥 and 𝑓2 (𝑥) = 4𝑒𝑥 are Linearly Dependent on R since
Then, 𝑦1 = cos 𝑥 and 𝑦2 = sin 𝑥 are solutions of the differential equation (2.3). Let 𝑐1 and 𝑐2 be −4𝑓1 (𝑥) + 𝑓2 (𝑥) = −4𝑒𝑥 + 4𝑒𝑥 = 0, for all 𝑥 ∈ R.
arbitrary constants. Then
2. The functions
𝑦 = 𝑐1 𝑦1 + 𝑐2 𝑦2 = 𝑐1 cos 𝑥 + 𝑐2 sin 𝑥 𝑓1 (𝑥) = 𝑒𝑥 , 𝑓2 (𝑥) = 𝑒−𝑥 , 𝑓3 (𝑥) = sinh 𝑥
is also a solution of (2.3). Indeed, 𝑦 ′ = −𝑐1 sin 𝑥 + 𝑐2 cos 𝑥, and 𝑦 ′′ = −𝑐1 cos 𝑥 − 𝑐2 sin 𝑥 which are linearly dependent on R since
implies that 𝑒𝑥 − 𝑒−𝑥
′′
𝑓3 (𝑥) = sinh 𝑥 =
𝑦 + 𝑦 = (−𝑐1 cos 𝑥 − 𝑐2 sin 𝑥) + (𝑐1 cos 𝑥 + 𝑐2 sin 𝑥) = 0 2
for all 𝑥. Therefore, any linear combination of the functions 𝑦1 = cos 𝑥 and 𝑦2 = cos 𝑥 is a and (1)𝑓1 (𝑥) + (−1)𝑓2 (𝑥) + (−2)𝑓3 (𝑥) = 0, ∀𝑥 ∈ R.
solution for the given differential equation. 3. The two functions 𝑓1 (𝑥) = 𝑥 and 𝑓2 (𝑥) = 𝑥3 are Linearly Independent on R, since for
This condition can be generalized for any homogenous linear differential equation in the following 𝑐1 , 𝑐2 ∈ R,
theorem. 𝑐1 𝑓1 (𝑥) + 𝑐2 𝑓2 (𝑥) = 𝑐1 𝑥 + 𝑐2 𝑥3 = 0, ∀𝑥 ∈ R∖{0}
implies 𝑐1 = 0 and 𝑐2 = 0.
2.2 General Solution of Homogeneous Linear ODEs 36 2.2 General Solution of Homogeneous Linear ODEs 37
The following theorem guarantees that any 𝑛th order Linear Homogenous Ordinary Differential There is a simple test to determine whether a given set of functions is linearly independent or
Equation has 𝑛 linearly independent solutions. dependent on an open interval 𝐼 = (𝑎, 𝑏), for some real numbers 𝑎, 𝑏, by using the idea of
determinant of a matrix.
Theorem 2.2.3 (Existence of Linearly Independent Solutions for a LHODE). The Linear Ho-
mogenous Differential Equation (LHODE) (2.2) always has 𝑛 Linearly Independent (LI) solutions. Definition 2.2.5. Let 𝑓1 (𝑥), 𝑓2 (𝑥), . . . , 𝑓𝑛 (𝑥) be 𝑛 real valued functions each of which has an
Furthermore, if 𝑓1 (𝑥), 𝑓2 (𝑥), . . . , 𝑓𝑛 (𝑥) are 𝑛 LI solutions of (2.2), then every solution of (2.2) (𝑛 − 1)𝑡ℎ derivative on the interval [𝑎, 𝑏]. The determinant:
� �
can be expressed as a linear combination of these solution functions. i.e. If y is a solution for � �
� 𝑓1 (𝑥) 𝑓2 (𝑥) ... 𝑓𝑛 (𝑥) �
(2.2), then � �
� 𝑓 ′ (𝑥) 𝑓 ′
(𝑥) . . . 𝑓𝑛′ (𝑥) �
𝑛
� � 1 2 �
W[f1 , f2 . . . , fn ] = � . . . � = W(𝑥)
𝑦(𝑥) = 𝑐𝑖 𝑓𝑖 (𝑥) � .. .. .. �
� �
𝑖=1 � (𝑛−1) (𝑛−1) (𝑛−1) �
� 𝑓1 (𝑥) 𝑓2 (𝑥) . . . 𝑓𝑛 (𝑥) �
for some 𝑐1 , ..., 𝑐𝑛 ∈ R..
is called the Wronskian of these 𝑛 functions.
Example 2.2.2. Consider the second order linear homogenous DE
Example 2.2.4. The function 𝑦1 (𝑥) = 𝑒2𝑥 and 𝑦2 (𝑥) = 𝑥𝑒4𝑥 are solutions of the second order
′′
𝑦 + 𝑦 = 0. linear homogenous differential equation 𝑦 ′′ − 4𝑦 ′ + 4𝑦 = 0. Then the Wronskian, W(x) of 𝑦1 and
𝑦2 is � �
Then 𝑓1 (𝑥) = sin 𝑥, 𝑓2 (𝑥) = cos 𝑥 are LI solutions of the given equation. Then {sin 𝑥, cos 𝑥} � 𝑒2𝑥 𝑥𝑒4𝑥 �
� �
is the fundamental set of solutions of the given DE and hence the general solution of the DE is W(x) = � 2𝑥 4𝑥 � = 𝑒4𝑥 + 2𝑥𝑒4𝑥 − 2𝑥𝑒4𝑥 = 𝑒4𝑥
� 2𝑒 𝑒 + 4𝑥𝑒4𝑥 �
given by 𝑦(𝑥) = 𝑐1 sin 𝑥 + 𝑐2 cos 𝑥, for constants 𝑐1 , 𝑐2 ∈ R.
Definition 2.2.4. If 𝑓1 (𝑥), 𝑓2 (𝑥), . . . , 𝑓𝑛 (𝑥) are 𝑛 linearly independent solutions of (2.2) on [𝑎, 𝑏], Question: Are the two functions 𝑦1 (𝑥) = 𝑒2𝑥 and 𝑦2 (𝑥) = 𝑥𝑒4𝑥 linearly independent?
then the set {𝑓1 (𝑥), 𝑓2 (𝑥), . . . 𝑓𝑛 (𝑥)} is called the Fundamental Set of Solutions of (2.2) and
The above question can be easily answered using the following theorem.
the function
𝑓 (𝑥) = 𝑐1 𝑓1 (𝑥) + 𝑐2 𝑓2 (𝑥) + ⋅ ⋅ ⋅ + 𝑐𝑛 𝑓𝑛 (𝑥), 𝑥 ∈ [𝑎, 𝑏], Theorem 2.2.6 (Wronskian Test for Linearly Independence). The 𝑛 functions 𝑓1 , 𝑓2 , . . . , 𝑓𝑛 are
Linearly Independent on an interval [𝑎, 𝑏] if and only if the Wronskian of 𝑓1 , 𝑓2 , . . . , 𝑓𝑛 is different
where 𝑐1 , 𝑐2 , . . . , 𝑐𝑛 are arbitrary constants is called a General Solution of (2.2) on [𝑎, 𝑏]. and from zero for some 𝑥 ∈ [𝑎, 𝑏]. That is, 𝑓1 , 𝑓2 , . . . , 𝑓𝑛 are LI if and only if there exists 𝑥 ∈ [𝑎, 𝑏]
each 𝑓1 , 𝑓2 , . . . , 𝑓𝑛 are called particular solutions. such that W(𝑥) ∕= 0.
Example 2.2.3. Consider the third order linear homogenous DE Example 2.2.5. 1. Show that 𝑥 and 𝑥2 are Linearly Independent.
′′′
𝑦 − 2𝑦 ′′ − 𝑦 ′ + 2𝑦 = 0.
Solution
𝑥 −𝑥 2𝑥
a) The functions 𝑒 , 𝑒 , 𝑒 are (particular) solutions (check!)
Consider the Wronskian of 𝑥 and 𝑥2 ,
� �
b) 𝑒𝑥 , 𝑒−𝑥 and 𝑒2𝑥 are LI (check!) � 𝑥 𝑥2 �
2 � �
W(x, x ) = � � = 2𝑥2 − 𝑥2 = 𝑥2
� 1 2𝑥 �
c) Therefore, the general solution of the given equation is given by:
This implies W(𝑥, 𝑥2 ) = 𝑥2 ∕= 0, ∀𝑥 ∕= 0 and hence 𝑥 and 𝑥2 are LI.
𝑦(𝑥) = 𝑐1 𝑒𝑥 + 𝑐2 𝑒−𝑥 + 𝑐3 𝑒2𝑥 .
2. Show that 𝑒𝑥 , 𝑒−𝑥 , 𝑒2𝑥 are Linearly Independent.
2.2 General Solution of Homogeneous Linear ODEs 38 2.2 General Solution of Homogeneous Linear ODEs 39
Consider the Wronskian of 𝑒𝑥 , 𝑒−𝑥 and 𝑒2𝑥 In the preceding section we saw that the general solution of a homogeneous linear second-order
� � differential equation
� �
� 𝑒𝑥 𝑒−𝑥 𝑒2𝑥 �
� � 𝑦 ′′ + 𝑝(𝑥)𝑦 + 𝑞(𝑥)𝑦 = 0 (2.5)
W(x) = �� 𝑒𝑥 −𝑒−𝑥 2𝑒2𝑥 �,
�
� � is a linear combination 𝑦(𝑥) = 𝑐1 𝑦1 (𝑥) + 𝑐2 𝑦2 (𝑥), where 𝑦1 and 𝑦2 are linearly independent solu-
� 𝑒𝑥 𝑒−𝑥 4𝑒2𝑥 �
tions on some interval I.
which is equal to
𝑒𝑥 (−4𝑒𝑥 + 2𝑒𝑥 ) − 𝑒−𝑥 (4𝑒3𝑥 − 2𝑒3𝑥 ) + 𝑒2𝑥 (1 + 1) In this method we can construct a second solution 𝑦2 of a homogeneous equation (2.5) (even
when the coefficients in (2.5) are variable) provided that we know a nontrivial solution 𝑦1 of the
= −2𝑒2𝑥 − 2𝑒2𝑥 + 2𝑒2𝑥 DE. The basic idea described in this section is that equation (2.5) can be reduced to a linear
= −2𝑒 2𝑥
∕= 0, ∀𝑥 ∈ R. first-order DE by means of a substitution involving the known solution 𝑦1 . A second solution 𝑦2
of (2.5) is apparent after this first-order differential equation is solved.
Hence, 𝑒𝑥 , 𝑒−𝑥 and 𝑒2𝑥 are Linearly Independent.
𝑒𝑥 − 𝑒−𝑥
𝑓3 (𝑥) = sinh 𝑥 = . The quotient 𝑦2 /𝑦1 is nonconstant on I, that is,
2
𝑦2 (𝑥)
Solution = 𝑢(𝑥)
𝑦1 (𝑥)
Consider the Wronskian of 𝑒𝑥 , 𝑒−𝑥 and sinh 𝑥 or 𝑦2 (𝑥) = 𝑢(𝑥)𝑦1 (𝑥). The function 𝑢(𝑥) can be found by substituting
� �
� 𝑥 𝑥 −𝑥 �
� 𝑒 𝑒−𝑥 𝑒 −𝑒 2 � 𝑦2 (𝑥) = 𝑢(𝑥)𝑦1 (𝑥)
� �
W(x) = �� 𝑒𝑥 −𝑒−𝑥 𝑒 +𝑒 � = 0, ∀𝑥 ∈ R.
𝑥 −𝑥
2 �
� 𝑥 𝑥 −𝑥 � into the given differential equation.
� 𝑒 𝑒−𝑥 𝑒 −𝑒 2
�
Consider the derivatives 𝑦2′ = 𝑢′ 𝑦1 + 𝑢𝑦1′ and 𝑦2′′ = 𝑢′′ 𝑦1 + 2𝑢′ 𝑦1′ + 𝑢𝑦1′′ and substituting these in
𝑥 −𝑥
Hence 𝑒 , 𝑒 and sinh 𝑥 are linearly dependent.
(2.5) we get
Remark 2.2.7. The Wronskian of 𝑛 solutions 𝑓1 , 𝑓2 , . . . , 𝑓𝑛 of the DE (2.2) is either identically
zero on [𝑎, 𝑏] or else is never zero on [𝑎, 𝑏]. That is, if 𝑓1 , 𝑓2 , . . . , 𝑓𝑛 are solutions of the DE (2.2), (𝑢′′ 𝑦1 + 2𝑢′ 𝑦1′ + 𝑢𝑦1′′ ) + 𝑝(𝑥)(𝑢′ 𝑦1 + 𝑢𝑦1′ ) + 𝑞(𝑥)𝑢𝑦1 = 0
then 𝑊 (𝑥) = 0, ∀𝑥 ∈ [𝑎, 𝑏] if 𝑓1 , 𝑓2 , . . . , 𝑓𝑛 are LD. or 𝑊 (𝑥) ∕= 0, ∀𝑥 ∈ [𝑎, 𝑏] if 𝑓1 , 𝑓2 , . . . , 𝑓𝑛 are
and simplifying this gives us
LI.
𝑢′′ 𝑦1 + 𝑢′ (2𝑦1′ + 𝑝(𝑥)𝑦1 ) + 𝑢(𝑦1′′ + 𝑝(𝑥)𝑦1′ + 𝑞(𝑥)𝑦1 ) = 0.
2.2 General Solution of Homogeneous Linear ODEs 40 2.3 Homogeneous LODE with Constant Coefficients 41
But 𝑦1 , by assumption, is a solution of (2.5) and hence .... Other possible reduction methods, such as Bernoli or Ricati (one of the two) should be
mentioned here and one should be indicated in the exercises .......
𝑢(𝑦1′′ + 𝑝(𝑥)𝑦1′ + 𝑞(𝑥)𝑦1 ) = 0
which implies
𝑢′′ 𝑦1 + 𝑢′ (2𝑦1′ + 𝑝(𝑥)𝑦1 ) = 0. 2.3 Homogeneous LODE with Constant Coefficients
This is a second order DE in 𝑢. Let 𝑢′ = 𝑧. Then 𝑢′′ = 𝑧 ′ . Using separation of variables we get
Definition 2.3.1. A Differential Equation
𝑧′ −2𝑦1′
= −𝑝
𝑧 𝑦1 𝑏𝑛 𝑦 (𝑛) + 𝑏𝑛−1 𝑦 (𝑛−1) + ⋅ ⋅ ⋅ + 𝑏1 𝑦 ′ + 𝑏0 𝑦 = 0, (2.6)
which is a first order DE and hence the name reduction of order and integrating and taking
∫ where 𝑏0 , 𝑏1 , . . . , 𝑏𝑛 are all real constants, is called a Homogenous Linear Differential Equation of
the constant zero gives us ln 𝑧 = −2 ln 𝑦1 − 𝑝𝑑𝑥. This implies
constant coefficients.
1 ∫
𝑧 = 2 𝑒− 𝑝𝑑𝑥 .
𝑦1 Let 𝑓 (𝑥) be any solution of (2.6) in [𝑎, 𝑏]. Then
But 𝑧 = 𝑢′ . Then
1 − ∫ 𝑝𝑑𝑥
𝑢′ = 𝑒 𝑏𝑛 𝑓 (𝑛) (𝑥) + 𝑏𝑛−1 𝑓 (𝑛−1) (𝑥) + ⋅ ⋅ ⋅ + 𝑏1 𝑓 ′ (𝑥) + 𝑏0 𝑓 (𝑥) = 0 for all 𝑥 ∈ [𝑎, 𝑏].
𝑦12
and then � � � Hence the derivatives of 𝑓 are linearly dependent since at least one of the coefficients 𝑏0 , 𝑏1 , . . . , 𝑏𝑛
𝑦2 1 − ∫ 𝑝𝑑𝑥
=𝑢= 𝑒 𝑑𝑥. is different from zero.
𝑦1 𝑦12
Therefore, the second solution for the given equation is The simplest case with this property is a function 𝑓 such that
� � �
1 − ∫ 𝑝𝑑𝑥 𝑓 (𝑘) (𝑥) = 𝑐𝑓 (𝑥), ∀𝑥 ∈ [𝑎, 𝑏]
𝑦2 = 𝑦 1 𝑒 𝑑𝑥.
𝑦12
for some constant 𝑐.
Example 2.2.6. The function 𝑦1 (𝑥) = 𝑥 is a solution of the homogenous DE
Let 𝑓 (𝑥) = 𝑒𝜆𝑥 . Then 𝑓 𝑘 (𝑥) = 𝜆𝑘 𝑓 (𝑥) = 𝜆𝑘 𝑒𝜆𝑥 which implies 𝑐 = 𝜆𝑘 .
(𝑥2 − 1)𝑦 ′′ − 2𝑥𝑦 ′ + 2𝑦 = 0.
Thus we will look for the solution of (2.6) in the form 𝑦 = 𝑒𝜆𝑥 where the constant 𝜆 will be
Solve the given DE. chosen so that 𝑦 = 𝑒𝜆𝑥 does satisfy the equation (2.6).
Now insert 𝑦 = 𝑒𝜆𝑥 into (2.6) to get;
Solution
(𝑏𝑛 𝜆𝑛 + 𝑏𝑛−1 𝜆𝑛−1 + ⋅ ⋅ ⋅ + 𝑏1 𝜆 + 𝑏0 )𝑒𝜆𝑥 = 0.
The given equation is equivalent to 𝑦 ′′ + 𝑝(𝑥)𝑦 ′ + 𝑞(𝑥)𝑦 = 0, where
Hence, if 𝑒𝜆𝑥 is a solution of the equation in (2.6), then 𝜆 should satisfy:
−2𝑥 2
𝑝(𝑥) = and 𝑞(𝑥) = 2 .
𝑥2 − 1 𝑥 −1
If 𝑦2 is a second solution of the given equation then 𝑦2 (𝑥) = 𝑢(𝑥)𝑦1 (𝑥), where 𝑏0 𝜆𝑛 + 𝑏1 𝜆𝑛−1 + ⋅ ⋅ ⋅ + 𝑏𝑛−1 𝜆 + 𝑏𝑛 = 0, (2.7)
� � ( ) � � � � �
1 − ∫ 𝑥−2𝑥
2 −1 𝑑𝑥
1 ln ∣𝑥2 −1∣ 1 1 since 𝑒𝜆𝑥 ∕= 0 for all 𝑥 ∈ R.
𝑢(𝑥) = 2
𝑒 𝑑𝑥 = 2
𝑒 𝑑𝑥 = (1 − 2 )𝑑𝑥 = 𝑥 + .
𝑥 𝑥 𝑥 𝑥
Definition 2.3.2. The algebraic equation (2.7) is called an Auxiliary equation or character-
Therefore, 𝑦1 (𝑥) = 𝑥 and 𝑦2 (𝑥) = 𝑢(𝑥)𝑦1 (𝑥) = 𝑥2 + 1 are two linearly independent solutions of
istic equation of the given differential equation in (2.6).
the given equation and hence the general solution of the given equation is 𝑦(𝑥) = 𝑐1 𝑥+𝑐2 (𝑥2 +1),
where 𝑐1 and 𝑐2 are constants. There are 3 different cases of the roots of (2.7).
2.3 Homogeneous LODE with Constant Coefficients 42 2.3 Homogeneous LODE with Constant Coefficients 43
Case 1. Distinct Real Roots 2. if the characteristic equation has triple root 𝜆, then the corresponding linearly independent
solutions are 𝑒𝜆𝑥 , 𝑥𝑒𝜆𝑥 and 𝑥2 𝑒𝜆𝑥 .
Suppose that (2.7) has 𝑛 distinct roots, 𝜆1 , 𝜆2 , . . . 𝜆𝑛 where 𝜆𝑖 ∕= 𝜆𝑗 , for 𝑖 ∕= 𝑗. Then, the
solutions 𝑒𝜆1 𝑥 , 𝑒𝜆2 𝑥 , . . . , 𝑒𝜆𝑛 𝑥 are linearly independent. (Use the Wronskian to prove this.) Let us proof the first part of the above remark for a second order linear homogenous differential
equation .
If 𝜆1 , 𝜆2 , . . . , 𝜆𝑛 are the 𝑛 distinct real roots of (2.7), then the general solution of (2.6) is:
𝑛
If the given DE is 𝑎𝑦 ′′ + 𝑏𝑦 ′ + 𝑐𝑦 = 0, then its characteristic equation is 𝑎𝜆2 + 𝑏𝜆 + 𝑐 = 0 and then
� −𝑏
𝑦(𝑥) = 𝑐1 𝑒𝜆1 𝑥 + 𝑐2 𝑒𝜆2 𝑥 + ⋅ ⋅ ⋅ + 𝑐𝑛 𝑒𝜆𝑛 𝑥 = 𝑐𝑖 𝑒𝜆𝑖 𝑥 , 𝜆 = 𝜆 1 = 𝜆2 = 2𝑎
. One of the solution of the given DE is 𝑦1 = 𝑒𝜆𝑥 . Then we can use the method
𝑖=1 of reduction of order to find a second solution 𝑦2 so that 𝑦1 and 𝑦2 are linearly independent.
where 𝑐1 , 𝑐2 , . . . , 𝑐𝑛 are arbitrary constants. The given equation is equivalent to
𝑏 𝑐
Example 2.3.1. 1. For the differential equation 𝑦 ′′ −3𝑦 ′ +2𝑦 = 0, the characteristic equation 𝑦 ′′ + 𝑦 ′ + 𝑦 = 0
𝑎 𝑎
is: 𝜆2 − 3𝜆 + 2 = 0, and 𝜆1 = 2 and 𝜆2 = 1 are the two distinct real roots of this
and 𝑦2 = 𝑢𝑦1 , where
characteristic equation. Hence, the general solution of the given differential equation is � � ∫ 𝑏 � � −𝑏 𝑥 �
2𝑥 𝑥 𝑒− 𝑎 𝑑𝑥 𝑒𝑎
𝑦(𝑥) = 𝑐1 𝑒 + 𝑐2 𝑒 . 𝑢= ( )2 𝑑𝑥 = = 1𝑑𝑥 = 𝑥,
𝑒 𝜆𝑥 𝑒2𝜆𝑥
2. For the differential equation 𝑦 ′′′ − 4𝑦 ′′ + 𝑦 ′ + 6𝑦 = 0, the corresponding characteristic −𝑏
since 2𝜆 = 𝑎
and hence 𝑦2 = 𝑥𝑒𝜆𝑥 .
equation is: 𝜆3 − 4𝜆2 + 𝜆 + 6 = 0 with distinct real roots 𝜆1 = 2, 𝜆2 = 3 and 𝜆3 = −1.
The following theorem is a generalization for the above remark.
Therefore, the general solution of the give equation is 𝑦(𝑥) = 𝑐1 𝑒2𝑥 + 𝑐2 𝑒3𝑥 + 𝑐3 𝑒−𝑥 .
Theorem 2.3.4.
Case 2. Repeated Real Roots 1. If the characteristic equation (2.7) has the real root 𝜆 occurring k times (𝑖.𝑒.𝜆1 = 𝜆2 =
⋅ ⋅ ⋅ = 𝜆𝑘 ) where 𝑘 ≤ 𝑛, then the part of the general solution for (2.6) corresponding to
To understand the situation let us consider the following example.
this k fold repeated root is
Example 2.3.2. Consider the DE 𝑦 ′′ − 6𝑦 ′ + 9𝑦 = 0. Then, its characteristic equation is 𝜆2 −
(𝑐1 + 𝑐2 𝑥 + 𝑐3 𝑥2 + ⋅ ⋅ ⋅ + 𝑐𝑘 𝑥𝑘−1 )𝑒𝜆𝑥
6𝜆 + 9 = 0, which implies (𝜆 − 3)2 = 0. Therefore, 𝜆1 = 𝜆2 = 3, which is a repeated real root.
One of the solutions of the given linear differential equation is 𝑒3𝑥 . 2. If further, the remaining roots are the distinct real roots 𝜆𝑘+1 , 𝜆𝑘+2 , . . . , 𝜆𝑛 , the general
solution of (2.6) will be:
Let 𝑦1 (𝑥) = 𝑒3𝑥 . The given equation will have two linearly independent solutions and the second
solution can be found by using the method of reduction of order. Let 𝑦2 be another solution 𝑦(𝑥) = 𝑐1 𝑒𝜆𝑥 + 𝑐2 𝑥𝑒𝜆𝑥 + 𝑐3 𝑥2 𝑒𝜆𝑥 + ⋅ ⋅ ⋅ + 𝑐𝑘 𝑥𝑘−1 𝑒𝜆𝑥 + 𝑐𝑘+1 𝑒𝜆𝑘+1 𝑥 + ⋅ ⋅ ⋅ + 𝑐𝑛 𝑒𝜆𝑛 𝑥 .
so that 𝑦1 and 𝑦2 are linearly independent. Then 𝑦2 = 𝑢𝑦1 , where Example 2.3.3.
� � − ∫ −6𝑑𝑥 � � � 6𝑥 � �
𝑒 𝑒
𝑢(𝑥) = ( )2 𝑑𝑥 = 𝑑𝑥 = 1𝑑𝑥 = 𝑥. 1. Consider the Differential Equation
𝑒3𝑥 𝑒6𝑥
𝑦 (4) − 5𝑦 ′′′ + 6𝑦 ′′ + 4𝑦 ′ − 8𝑦 = 0.
Therefore 𝑦2 (𝑥) = 𝑥𝑒3𝑥 and 𝑦(𝑥) = 𝑐1 𝑒3𝑥 + 𝑐2 𝑥𝑒3𝑥 is a general solution for constants 𝑐1 and 𝑐2 .
The corresponding characteristic equation is 𝜆4 − 5𝜆3 + 6𝜆2 − 8 = 0 and the roots are
Remark 2.3.3. Given a differential equation:
𝜆1 = 𝜆2 = 𝜆3 = 2, 𝜆4 = −1.
1. if the characteristic equation has double real root 𝜆, then 𝑒𝜆𝑥 and 𝑥𝑒𝜆𝑥 are two linearly Therefore, the general solution is given by 𝑦(𝑥) = 𝑐1 𝑒2𝑥 + 𝑐2 𝑥𝑒2𝑥 + 𝑐3 𝑥2 𝑒2𝑥 + 𝑐4 𝑒−𝑥 , where
independent solutions and; 𝑐1 , 𝑐2 , 𝑐3 and 𝑐4 are constants.
2.3 Homogeneous LODE with Constant Coefficients 44 2.4 Nonhomogeneous Equations with Constant Coefficients 45
′′′
2. Consider the Differential Equation 𝑦 − 4𝑦 ′′ − 3𝑦 ′ + 18𝑦 = 0 The roots of the characteristic 2.3.1 Exercises
equation are, 𝜆1 = 𝜆2 = 3 and 𝜆3 = −2 and hence the general solution of the equation is:
Exercise 2.3.5. Solve each of the following Differential Equation.
𝑦(𝑥) = 𝑐1 𝑒3𝑥 + 𝑐2 𝑥𝑒3𝑥 + 𝑐3 𝑒−2𝑥 , where 𝑐1 , 𝑐2 and 𝑐3 are constants.
1. 𝑦 ′′ + 𝑦 = 0.
Case 3. Conjugate Complex Roots
2. 𝑦 ′′ − 6𝑦 ′ + 25𝑦 = 0.
Suppose the equation (2.7) has a complex root 𝜆 = 𝑎 + 𝑖𝑏, 𝑎, 𝑏 ∈ R. Then (we know from the
3. 𝑦 (4) − 4𝑦 ′′′ + 14𝑦 ′′ − 20𝑦 ′ + 25𝑦 = 0, where 𝜆1 = 𝜆2 = 1 + 2𝑖 and 𝜆3 = 𝜆4 = 1 − 2𝑖.
¯ = 𝑎 − 𝑖𝑏 is also a root of (2.7) and the
theory of algebraic equations that) the conjugate 𝜆
corresponding part of the general solution of (2.6) will be:
2.4 Nonhomogeneous Equations with Constant Coefficients
𝑘1 𝑒(𝑎+𝑖𝑏)𝑥 + 𝑘2 𝑒(𝑎−𝑖𝑏)𝑥 .
where 𝑏𝑛 , ..., 𝑏0 are constants is called a nonhomogeneous differential equation with constant
Solution
coefficients. The following theorem is very important in such cases.
The characteristic equation of the given equation is 𝜆2 − 2𝜆 + 10 = 0 with roots 𝜆1 = 1 + 3𝑖
Theorem 2.4.1 (Homogeneous-Nonhomogeneous Solution Relation). Consider the nonhomoge-
and 𝜆2 = 1 − 3𝑖. Then 𝑦1 = 𝑒1+3𝑖 and 𝑦2 = 𝑒1−3𝑖 are two independent solutions of the given
neous differential equation
equation. Therefore, 𝑦 = 𝑐1 𝑦1 + 𝑐2 𝑦2 , where 𝑐1 and 𝑐2 are constants, is a general solution of the
given equation. That means 𝑏𝑛 (𝑥)𝑦 (𝑛) + ⋅ ⋅ ⋅ + 𝑏1 (𝑥)𝑦 ′ + 𝑏0 (𝑥)𝑦 = 𝑓 (𝑥), where 𝑓 (𝑥) ∕≡ 0.
𝑦(𝑥) = 𝑒𝑥 (𝑐1 cos 3𝑥 + 𝑐2 sin 3𝑥). If 𝑓 (𝑥) ≡ 0, then the equation becomes a homogeneous equation.
2. If 𝑦1 is a solution of the nonhomogeneous equation and 𝑦2 is a solution of the homogeneous 2. Let f be an UC function. The set S of functions consisting of 𝑓 and all the derivatives of
equation in an interval I, then 𝑦1 + 𝑦2 is a solution of the nonhomogeneous equation in the 𝑓 which are mutually LI UC functions is said to be the UC set of function f, if S is a finite
interval I. set and we shall denote it by S.
The following remark follows directly from the theorem given above. Example 2.4.1.
Remark 2.4.2. Suppose 𝑦ℎ (𝑥) denote the general solution of the homogeneous part and 𝑦𝑝 (𝑥) 1. Let 𝑓 (𝑥) = 𝑥3 . Then f is UC function.
denote the particular solution of the DE:Then the general solution of (2.10) is given by 𝑦(𝑥) =
𝑦ℎ (𝑥) + 𝑦𝑝 (𝑥). 𝑓 ′ (𝑥) = 3𝑥2 , 𝑥2 is UC function.
Theorem 2.4.3 (Superposition Principle). If 𝑦ℎ (𝑥) is a general solution of the homogeneous 𝑓 ′′ (𝑥) = 6𝑥, 𝑥 is UC function.
part of (2.10) on an interval [𝑎, 𝑏] and 𝑦𝑝1 (𝑥), 𝑦𝑝2 (𝑥), . . . , 𝑦𝑝𝑘 (𝑥) are particular solutions of (2.10)
𝑓 ′′′ (𝑥) = 6, 1 is UC function.
corresponding to 𝑓1 (𝑥), 𝑓2 (𝑥), . . . , 𝑓𝑘 (𝑥) respectively on the right hand side, then the general
solution of (2.10) where, 𝑓 (𝑥) = 𝑓1 (𝑥) + ⋅ ⋅ ⋅ + 𝑓𝑘 (𝑥) on [𝑎, 𝑏], is Therefore, 𝑆 = {1, 𝑥, 𝑥2 , 𝑥3 }.
𝑦(𝑥) = 𝑦ℎ (𝑥) + 𝑦𝑝1 (𝑥) + 𝑦𝑝2 (𝑥) + ⋅ ⋅ ⋅ + 𝑦𝑝𝑘 (𝑥). 2. Let 𝑓 (𝑥) = sin(2𝑥). Then 𝑓 is an UC function.
The above result is called a Superposition Principle. It tells us that the response 𝑦𝑝 to a 𝑓 ′ (𝑥) = 2 cos(2𝑥), cos(2𝑥) is UC function.
iv) cos(𝑏𝑥 + 𝑐), where 𝑏, 𝑐 are constants, such that 𝑏 ∕= 0. Example 2.4.2. Consider the differential equation
or
𝑦 (4) − 𝑦 ′′ = 3𝑥2 − sin 2𝑥 (2.11)
b) a function which is defined as a finite product of two or more functions of the above
4 types.
2.4 Nonhomogeneous Equations with Constant Coefficients 48 2.4 Nonhomogeneous Equations with Constant Coefficients 49
(4)
1) First find the general solution to the homogeneous part ∙ 𝑦𝑝1 − 𝑦𝑝′′1 (𝑥) = 3𝑥2 which implies 24𝐴 − 12𝐴𝑥2 − 6𝐵𝑥 − 2𝐶 = 3𝑥2 .
Equating the coefficients of like terms we get:
𝑦 (4) − 𝑦 ′′ = 0.
⎧
The characteristic equation of the given equation is 𝜆4 − 𝜆2 = 0. Then 𝜆2 (𝜆2 − 1) = 0 and ⎨ −12𝐴 = 3
−6𝐵 = 0
hence 𝜆1 = 𝜆2 = 0 and 𝜆3 = 1, 𝜆4 = −1. Therefore, the general solution is:
⎩ 24𝐴 − 2𝐶 = 0
𝑦ℎ (𝑥) = 𝑐1 + 𝑐2 𝑥 + 𝑐3 𝑒𝑥 + 𝑐4 𝑒−𝑥 . −1
This implies, 𝐴 = 4
, 𝐵 = 0 and 𝐶 = −3.
2) The forcing function (non- homogeneous term) is a combination of 𝑥2 and sin 2𝑥. Therefore,
1
𝑦𝑝1 (𝑥) = − 𝑥4 − 3𝑥2 .
4
Next find the set of UC functions corresponding to the component functions
Next, we need to find 𝑦𝑝2 (𝑥) which corresponds to 𝑓2 (𝑥) = − sin 2𝑥. We seek 𝑦𝑝2 (𝑥) to be
2
𝑓1 (𝑥) = 3𝑥 a linear combination of the elements of 𝑆2 , that is,
where A,B and C are called the undetermined constants. (24 𝐷 sin 2𝑥 + 24 𝐸 cos 2𝑥) − (−22 𝐷 sin 2𝑥 − 22 cos 2𝑥) = − sin 2𝑥.
∙ Check each term in 𝑦𝑝1 (𝑥) for duplication with terms in 𝑦ℎ (𝑥). Here the 𝐵𝑥 and C terms Therefore, 20𝐷 sin 2𝑥 + 20𝐸 cos 2𝑥 = − sin 2𝑥 and then
are constant multiples of 𝑐2 𝑥 and 𝑐1 respectively. 1
20𝐷 = −1 ⇐⇒ 𝐷 = −
20
∙ If there is any duplicate, then successively multiply each member of 𝑆𝑖 by the lowest positive
and 20𝐸 = 0. Therefore,
integral power of 𝑥, until (so that) the resulting revised set contains no duplicate of the 1
𝑦𝑝2 (𝑥) = − sin 2𝑥.
terms in the homogeneous (and previously found particular 𝑦𝑝𝑖 ’s) solutions. 20
– 𝑦𝑝1 (𝑥) = 𝑥(𝐴𝑥2 + 𝐵𝑥 + 𝐶) = 𝐴𝑥3 + 𝐵𝑥2 + 𝐶𝑥 still a duplicate is there, Hence the general solution of (2.11) is:
Exercise 2.4.5. Solve each of the following DEs. This will simplify the equation as
2. 𝑦 ′′ − 2𝑦 ′ + 2𝑦 = 2𝑥2 + 𝑒𝑥 + 2𝑥𝑒𝑥 + 4𝑒3𝑥 and after simplification, the equation (2.12) becomes
Since 𝑦1 and 𝑦2 are linearly independent solutions for the homogeneous part of equation (2.12) the general solution of the homogeneous equation is 𝑦ℎ (𝑥) = 𝑐1 𝑒2𝑥 + 𝑐2 𝑒−2𝑥 and
we have the following system of equations: � � � �
� 𝑦 � � 2𝑥 𝑒−2𝑥 �
� 1 𝑦2 � � 𝑒 �
W(x) = � ′ � = � � = −2 − 2 = −4,
� � ‘𝑦1 𝑦2′ � � 2𝑒2𝑥 −2𝑒−2𝑥 �
𝑐′1 𝑦1′ + 𝑐′2 𝑦2′ = 𝑓
(2.13) � � � �
𝑐′1 𝑦1 + 𝑐′2 𝑦2 = 0, � 0 𝑦 � � 0 𝑒2𝑥 �
� 2 � � �
W1 (x) = � �=� � = −8𝑥𝑒−2𝑥
which is a system of two algebraic equations in 𝑐′1 and 𝑐′2 . Then (2.13) has a unique solution if � 8𝑥 𝑦2 � � 8𝑥 −2𝑒
′ −2𝑥 �
the determinant of the coefficient matrix is non-zero, that is, Therefore,
� � �
� � 𝑊1 (𝑥) −8𝑥𝑒−2𝑥
� 𝑦 (𝑥) 𝑦 (𝑥) � 𝑐1 (𝑥) = 𝑑𝑥 = 𝑑𝑥 = 2 𝑥𝑒−2𝑥 𝑑𝑥 = −𝑥𝑒−2𝑥 + 𝑒−2𝑥
� 1 2 � 𝑊 (𝑥) −4
� ′ � ∕= 0.
� 𝑦1 (𝑥) 𝑦2′ (𝑥) �
and similarly
However, the above determinant is the Wronskian of the functions 𝑦1 and 𝑦2 . Since 𝑦1 and 𝑦2 � � �
𝑊2 (𝑥) 8𝑥𝑒2𝑥
are LI functions, then 𝑐2 (𝑥) = 𝑑𝑥 = 𝑑𝑥 = −2 𝑥𝑒2𝑥 𝑑𝑥 = 𝑥𝑒−2𝑥 − 𝑒−2𝑥 .
𝑊 (𝑥) −4
W[y1 ,y2 ] (𝑥) ∕= 0.
Therefore, 𝑦𝑝 (𝑥) = 𝑐1 (𝑥)𝑒2𝑥 + 𝑐2 (𝑥)𝑒−2𝑥 a particular solution and the general solution for the
Hence by Cramer’s rule we have: problem is 𝑦(𝑥) = 𝑦ℎ (𝑥) + 𝑦𝑝 (𝑥).
� �
� 0 𝑦 � Remark 2.4.6. This method looks easier when the integrads (or the quotients of the Wronskian)
� 2 �
� �
� 𝑓 𝑦2′ � 𝑊1 (𝑥) are simple. However, it could be very difficult to get the particular solution when the integrand
c′1 (x) = =
𝑊 (𝑥) 𝑊 (𝑥) is complicated.
and � �
� 𝑦 0 �
� 1 �
� �
� 𝑦1 𝑓 � 𝑊2 (𝑥)
c′2 (x) = = 2.5 The Laplace Transform Method to Solve ODEs
𝑊 (𝑥) 𝑊 (𝑥)
By integrating both sides we will get: In the previous sections we have discussed how to solve differential equations of the form:
�� � �� �
W1 (𝑥) W2 (𝑥)
𝑦𝑝 (𝑥) = 𝑑𝑥 𝑦1 (𝑥) + 𝑑𝑥 𝑦2 (𝑥). 𝑎𝑛 𝑦 (𝑛) + 𝑎𝑛−1 𝑦 (𝑛−1) + ⋅ ⋅ ⋅ + 𝑎1 𝑦 ′ + 𝑎0 𝑦 = 𝑓 (𝑥) (2.14)
W(𝑥) W(𝑥)
Example 2.4.4. Solve the differential equation by finding the general solutions and then evaluate the arbitrary constants in accordance with the
given initial conditions. However, the solution methods mainly dependent on the structure of the
𝑦 ′′ − 4𝑦 = 8𝑥. forcing function 𝑓 (𝑥). Moreover, all the coefficients are assumed to be constants. To address
problems with more general forcing function and some form of variable coefficients, we discuss
Solution the use of Laplace transform as possible alternative.
First solve the homogeneous equation 𝑦 ′′ − 4𝑦 = 0. Definition 2.5.1 (Laplace Transform). The Laplace Transform of a function 𝑓 (𝑡), if it exists, is
Then the characteristic equation is 𝜆2 − 4 = 0, which implies 𝜆 = ±2. If 𝑦1 and 𝑦2 are two denoted by ℒ{𝑓 (𝑡)} is given by,
� ∞
linearly independent solutions of the equation 𝑦 ′′ − 4𝑦 = 0, then 𝑦1 = 𝑒2𝑥 , 𝑦2 = 𝑒−2𝑥 . Therefore,
ℒ{𝑓 (𝑡)} = 𝑒−𝑠𝑡 𝑓 (𝑡)𝑑𝑡,
0
2.5 The Laplace Transform Method to Solve ODEs 54 2.5 The Laplace Transform Method to Solve ODEs 55
where 𝑠 is a real number called a parameter of the transform. For short we may write, From the table above we have
� ∞ � ∞
1
ℱ(𝑠) to denote 𝑒−𝑠𝑡 𝑓 (𝑡)𝑑𝑡. i.e., ℱ(𝑠) = 𝑒−𝑠𝑡 𝑓 (𝑡)𝑑𝑡. ℒ{𝑒𝑎𝑡 } = for 𝑠 > 𝑎.
0 0 𝑠−𝑎
Example 2.5.1. Find the Laplace Transform of the constant function 𝑓 (𝑡) = 1. Thus the inverse operator applied on 1
𝑠−𝑎
will give us back the function 𝑒𝑎𝑡
� ∞ 1
ℒ{1} = 𝑒−𝑠𝑡 × 1𝑑𝑡. i.e., ℒ−1 { } = 𝑒𝑎𝑡 for 𝑠 > 𝑎.
𝑠−𝑎
0
In general, ℒ−1 , the inverse Laplace Operator, is given by
Solution � 𝛾+𝑖∞
1
ℒ−1 {𝐹 (𝑠)} = 𝐹 (𝑠)𝑒𝑠𝑡 𝑑𝑠,
2𝜋𝑖 𝛾−𝑖∞
� ∞
(where 𝛾 is a positive real number), which is a complex improper integral.
ℒ{1} = 𝑒−𝑠𝑡 × 1𝑑𝑡
0
� 𝑇
= lim 𝑒−𝑠𝑡 𝑑𝑡 Properties
𝑇 →∞ 0
�𝑇
𝑒−𝑠𝑡 �� Here below we state some important properties of the transform in a serious of theorems without
= lim
𝑇 →∞ −𝑠 �
� 0
� proof.
−1 −𝑠𝑇 1
= lim 𝑒 +
𝑇 →∞ 𝑠 𝑠 Theorem 2.5.2 (Linearity).
�
1
𝑠
; if 𝑠 > 0
= (a) If 𝑢(𝑡), 𝑣(𝑡) are functions and 𝛼, 𝛽 are any constants, then
∞; otherwise
Solutions Theorem 2.5.3 (Transform of the derivative). Let 𝑓 (𝑡) be continuous and 𝑓 ′ (𝑡) be piecewise
continuous on some interval [0, 𝑡𝑜 ] for every finite 𝑡𝑜 , and let ∣𝑓 (𝑡)∣ < 𝐾𝑒𝑐𝑡 for some constants
1. ℒ{3𝑡 + 5𝑒−2𝑡 }; Applying the liniarity property we get,
𝐾, 𝑇 , and 𝑐 and for all 𝑡 > 𝑇 . Then the transform ℒ{𝑓 ′ (𝑡)} exists for all 𝑠 > 𝑐 and
ℒ{3𝑡 + 5𝑒−2𝑡 } = 3ℒ{𝑡} + 5ℒ{𝑒−2𝑡 } ℒ{𝑓 ′ (𝑡)} = 𝑠ℒ{𝑓 (𝑡)} − 𝑓 (0).
� � � �
1 1
= 3 2 +5
𝑠 𝑠+2 Example 2.5.3. Use the Laplace transform method to solve the initial-value problem.
3 5 5𝑠2 + 3𝑠 + 6
= 2+ =
𝑠 𝑠+2 𝑠2 (𝑠 + 2) 𝑦 ′ + 2𝑦 = 0 with 𝑦(0) = 1.
⇒ 𝐴 = 1, 𝐵 = −2, 𝐶 = 1. The following property of the transform, which is the continuouation of the above theorem, is
required.
Hence we can rewrite the inverse transform and apply linearity to get
{ } { } Theorem 2.5.4. Let 𝑓 (𝑡) be continuous and 𝑓 (𝑛) (𝑡) be piecewise continuous on some interval
𝑠2 1 2 1
ℒ−1 = ℒ −1
− + [0, 𝑡𝑜 ] for every finite 𝑡𝑜 , and let ∣𝑓 (𝑡)∣ < 𝐾𝑒𝑐𝑡 for some constants 𝐾, 𝑇 , and 𝑐 and for all 𝑡 > 𝑇 .
(𝑠 + 1)3 𝑠 + 1 (𝑠 + 1)2 (𝑠 + 1)3
{ } { } { } Then we have
1 −2 1
= ℒ−1 + ℒ−1 + ℒ −1
𝑠+1 (𝑠 + 1)2 (𝑠 + 1)3 ℒ{𝑓 (𝑛) (𝑡)} = 𝑠𝑛 ℒ{𝑓 (𝑡)} − 𝑠𝑛−1 𝑓 (0) − 𝑠𝑛−2 𝑓 ′ (0) − ⋅ ⋅ ⋅ − 𝑓 (𝑛−1) (0).
1
= 𝑒−𝑡 − 2𝑡𝑒−𝑡 + 𝑡2 𝑒−𝑡
2 Theorem 2.5.5 (First shifting theorem). If ℒ{𝑓 (𝑡)} = ℱ(𝑠) for 𝑅𝑒(𝑠) > 𝑏, then ℒ{𝑒𝑎𝑡 𝑓 (𝑡)} =
1 2 −𝑡
= (1 − 2𝑡 + 𝑡 )𝑒 . ℱ(𝑠 − 𝑎) for 𝑅𝑒(𝑠) > 𝑎 + 𝑏.
2
The other important property that leads us to use the Laplace transform in solving ordinary The proof of this theorem is easy to see using the definition.
differential equation is how the transform performs on the derivative.
Example 2.5.4. Find the Laplace transform for the function 𝑓 (𝑡) = 𝑒3𝑡 cos 4𝑡.
2.5 The Laplace Transform Method to Solve ODEs 58 2.6 The Cauchy-Euler Equation 59
Solution Theorem 2.5.6 (Derivative of the transform). For a piecewise continuous function 𝑓 (𝑡) and for
𝑠 any positive integer 𝑛, it holds that
Recall that ℒ{cos 4𝑡} = .
+ 42𝑠2
Then using the first shifting theorem we get ℒ{(−1)𝑛 𝑡𝑛 𝑓 (𝑡)} = ℱ (𝑛) (𝑠).
𝑠−3
ℒ{𝑒3𝑡 cos 4𝑡} = . The formula in this theorem can be used to find transforms of functions of the form 𝑥𝑛 𝑓 (𝑥) when
(𝑠 − 3)2 + 42
the Laplace transform of 𝑓 (𝑡) is known.
𝑠
Example 2.5.5. Find the inverse Laplace transform for the function ℱ(𝑠) = .
𝑠2 + 𝑠 + 1 Exercise 2.5.7. Use the Laplace transform method to solve
First let us rewrite the function ℱ(𝑠) as Remark 2.5.8. The main idea in using Laplace transform in solving ODEs is that, it transforms
𝑠 𝑠 𝑠 + 12 1 the differential equation into an algebraic equation. Once the transformation is completed, we
2
ℱ(𝑠) = = = − seek for a solution to ℒ{𝑦(𝑡)} algebraically. Then the final step will be to get back the value of
𝑠2 + 𝑠 + 1 (𝑠 + 12 )2 + 3
4
(𝑠 + 1 2
2
) + 34 (𝑠 + 1 2
2
) + 3
4
𝑦(𝑡) using the inverse Laplace transform.
and hence,
{ } { } � √ �
−1 𝑠 −1 𝑠 + 12 −1 1 2
3
ℒ =ℒ −ℒ √ . 2.6 The Cauchy-Euler Equation
𝑠2 + 𝑠 + 1 (𝑠 + 12 )2 + 3
4
1 2
3 (𝑠 + 2 ) + 3
4
Then using the first shifting theorem, we have, In this section we are going to consider linear differential equations where the coefficients are
{ } √ √ variables with some special forms.
−1 𝑠 −𝑡/2 3𝑡 1 −𝑡/2 3𝑡
ℒ =𝑒 cos −√ 𝑒 sin .
𝑠2 + 𝑠 + 1 2 3 2 Definition 2.6.1. The linear differential equation with variable coefficient of the form:
Consider the general Laplace transform formula where 𝑎0 , 𝑎1 , . . . , 𝑎𝑛 are constants is called the Cauchy-Euler Equation.
� ∞ Example 2.6.1. The linear differential equation 3𝑥2 𝑦 ′′ − 11𝑥𝑦 ′ + 2𝑦 = sin 𝑥 is a Cauchy- Euler
ℱ(𝑠) = 𝑒−𝑠𝑡 𝑓 (𝑡)𝑑𝑡.
0 equation.
Taking the derivative with respect to 𝑠 on both sides we get,
� ∞ To solve Cauchy-Euler DEs first we reduce the given DE into a linear differential equation with
ℱ ′ (𝑠) = (−𝑡)𝑒−𝑠𝑡 𝑓 (𝑡)𝑑𝑡 = ℒ{−𝑡𝑓 (𝑡)}. constant coefficients and solve the given equation with the methods derived in the previous
0
sections.
By further differentiating the above equation with respect to 𝑠, we get
Theorem 2.6.2. The transformation 𝑥 = 𝑒𝑡 , 𝑡 ∈ R reduces the Cauchy-Euler DE to a linear DE
ℱ ′′ (𝑠) = ℒ{𝑡2 𝑓 (𝑡)}. with constant coefficients.
In general we have
2.6 The Cauchy-Euler Equation 60 2.7 *The Power Series Solution Method 61
Let us consider the case when n = 2. In this case the equation is:

a₂ x² y″ + a₁ x y′ + a₀ y = F(x)    (2.16)

Let x = eᵗ. Then by solving for t we get t = ln x for x > 0 (or x = −eᵗ if x < 0), and

dy/dx = (dy/dt)(dt/dx) = (1/x)(dy/dt)

and

d²y/dx² = (d/dx)[(1/x)(dy/dt)] = (1/x)(d²y/dt²)(dt/dx) − (1/x²)(dy/dt) = (1/x²)(d²y/dt² − dy/dt).

Substituting into (2.16) we get:

a₂ x² (1/x²)(d²y/dt² − dy/dt) + a₁ x (1/x)(dy/dt) + a₀ y = F(eᵗ).

This implies

a₂ d²y/dt² + (a₁ − a₂) dy/dt + a₀ y = F(eᵗ).

Then

A₂ d²y/dt² + A₁ dy/dt + A₀ y = G(t),    (2.17)

where A₂ = a₂, A₁ = a₁ − a₂, A₀ = a₀ and G(t) = F(eᵗ), which is a second order linear differential equation with constant coefficients.

Example 2.6.2. Solve each of the following DEs.

1. x²y″ − 2xy′ + 2y = 0.

2. 3x²y″ − 11xy′ + 2y = sin x.

3. x²y″ − 2xy′ + 2y = x³.

Solution

1. Let x = eᵗ. Since a₂ = 1, a₁ = −2 and a₀ = 2, we have A₂ = 1, A₁ = a₁ − a₂ = −3 and A₀ = 2, so the equation reduces to y″ − 3y′ + 2y = 0, whose characteristic equation r² − 3r + 2 = 0 has roots r = 1 and r = 2. Therefore, the differential equation y″ − 3y′ + 2y = 0 has the general solution y(t) = c₁eᵗ + c₂e^(2t) and, since x = eᵗ, the DE x²y″ − 2xy′ + 2y = 0 has the general solution

y(x) = c₁x + c₂x²,

where c₁ and c₂ are arbitrary constants.

2. Let x = eᵗ. Since a₂ = 3, a₁ = −11, a₀ = 2, we have A₂ = a₂ = 3, A₁ = a₁ − a₂ = −11 − 3 = −14, and A₀ = a₀ = 2, which reduce the given equation to 3y″ − 14y′ + 2y = sin(eᵗ), a DE with constant coefficients.

3. The given equation is transformed into y″ − 3y′ + 2y = e^(3t) by the substitution x = eᵗ, which is a DE with constant coefficients.

Example 2.6.3 (Application). Consider a mechanical oscillator. Let F_G = mg, where m is the mass of the object on the spring and g is the gravitational acceleration, and let |F_R| = kx (Hooke's law), where k is the spring stiffness constant and x is the distance moved by the mass m.

If F_D is a damping force for small velocity of the mass, then |F_D| = C |dx/dt|, where C > 0 is called a damping constant.

Therefore, the final form of our governing equation of motion is:

m x″ + C x′ + k x = mg = F(t).

If F is a variable force, then the problem becomes a second order nonhomogeneous differential equation, and can be solved using one of the previously discussed methods.
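As a quick sanity check of Example 2.6.2(1), one can ask a computer algebra system for the general solution directly. This is only a sketch and assumes SymPy (and its Euler-equation solver) is available.

```python
# SymPy check of Example 2.6.2(1): x^2 y'' - 2x y' + 2y = 0
# should have general solution y = c1*x + c2*x^2, as found above via x = e^t.
import sympy as sp

x = sp.Symbol('x', positive=True)
y = sp.Function('y')

ode = sp.Eq(x**2*y(x).diff(x, 2) - 2*x*y(x).diff(x) + 2*y(x), 0)
print(sp.dsolve(ode, y(x)))    # Eq(y(x), C1*x + C2*x**2)
```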
2.7 *The Power Series Solution Method

Theorem 2.7.1 (Power Series Solution). If the functions p and q are analytic at a point c₀, then every solution of the DE

y″ + p(x)y′ + q(x)y = 0    (2.18)

is also analytic at c₀, and can be found in the form

y(x) = Σₙ₌₀^∞ aₙ (x − c₀)ⁿ.

Moreover, the radius of convergence of every solution is at least as large as the smaller of the radii of convergence of the Taylor series of p and q about c₀.

Example 2.7.1. Solve the DE

(x − 1)y″ + y′ + 2(x − 1)y = 0    (2.19)

on the interval [4, ∞) with initial conditions y(4) = 5 and y′(4) = 0.

Solution Procedure:

∙ Convert the problem to the form of equation (2.18) in the above theorem.

∙ Check analyticity of the coefficient functions p(x) and q(x) at the point x₀ (the given initial point).

∙ Substitute into equation (2.19) the general solution

y(x) = Σₙ₌₀^∞ aₙ (x − x₀)ⁿ

and determine the values of the coefficients aₙ, for each n = 0, 1, 2, . . ..

Frobenius Method

Consider

h(x)y″ + p(x)y′ + q(x)y = 0.    (2.20)

Where h(x) ≠ 0 we can equivalently write

y″ + (p(x)/h(x)) y′ + (q(x)/h(x)) y = 0.

If h(x) ≠ 0 for all x, we can simply apply the power series solution method. But if h(x) = 0 for some x, the resulting equation differs from the original one at those points x where h(x) = 0.

Definition 2.7.2.

1. A point x₀ is said to be an ordinary point of equation (2.20) if h(x₀) ≠ 0 and p(x)/h(x), q(x)/h(x) are analytic at x₀. Otherwise, it is called a singular point of equation (2.20).

2. A singular point x₀ is said to be a regular singular point of equation (2.20) if the functions

(x − x₀) p(x)/h(x) and (x − x₀)² q(x)/h(x)

are analytic at x₀. A singular point that is not regular is called an irregular singular point of equation (2.20).

If equation (2.20) has a regular singular point at x₀, then use the power series

y(x) = Σₙ₌₀^∞ aₙ (x − x₀)^(n+r)

and determine the values of r and aₙ, n = 0, 1, 2, . . .

This last series is called a Frobenius series solution.

Example 2.7.2. Use the Frobenius method to solve

x²y″ + 5xy′ + (x + 4)y = 0.

Ans.: y(x) = a₀ Σₙ₌₀^∞ (−1)ⁿ (1/(n!)²) x^(n−2).

The general solution (and the number of distinct solutions) obtained by the Frobenius method depends on the roots of the indicial equation

r(r − 1) + b₀ r + c₀ = 0,

which forces the coefficient of x^r to be zero.
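The indicial step of Example 2.7.2 can be checked numerically. Substituting the Frobenius series into x²y″ + 5xy′ + (x + 4)y = 0 gives the double indicial root r = −2 and the recurrence aₙ = −aₙ₋₁/n²; this recurrence is not written out in the notes and is the assumption behind the following sketch.

```python
# Numerical sketch for Example 2.7.2 (assumes the recurrence a_n = -a_{n-1}/n**2,
# which is what one obtains after substituting the Frobenius series with r = -2).
from math import factorial

a = [1.0]                                  # take a_0 = 1
for n in range(1, 8):
    a.append(-a[n - 1] / n**2)             # recurrence from the indicial step

closed_form = [(-1)**n / factorial(n)**2 for n in range(8)]
print(all(abs(u - v) < 1e-12 for u, v in zip(a, closed_form)))   # True
```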
2.8 Systems of ODE of the First Order

A system of n linear first-order equations in the n unknowns x₁(t), x₂(t), . . . , xₙ(t) is a system that can be written in the form:

x₁′ = a₁₁(t)x₁ + a₁₂(t)x₂ + ⋯ + a₁ₙ(t)xₙ + f₁(t)
x₂′ = a₂₁(t)x₁ + a₂₂(t)x₂ + ⋯ + a₂ₙ(t)xₙ + f₂(t)
⋮                                                        (2.21)
xₙ′ = aₙ₁(t)x₁ + aₙ₂(t)x₂ + ⋯ + aₙₙ(t)xₙ + fₙ(t),
which is called the normal form. In vector form this system becomes:

X′ = AX + F(t),

where X = (x₁, . . . , xₙ)ᵀ, A = (aᵢⱼ(t)) and F(t) = (f₁(t), . . . , fₙ(t))ᵀ.

Example 2.8.1. Solve each of the following systems of linear differential equations.

1. y₁′ = −3y₁ + y₂
   y₂′ = y₁ − 3y₂

2. y₁′ = 2y₁ + y₂ + y₃
   y₂′ = y₁ + 2y₂ + y₃
   y₃′ = y₁ + y₂ + 2y₃

Solution

1. The coefficient matrix has eigenvalues λ₁ = −2 and λ₂ = −4, and the vector (1, 1)ᵀ is an eigenvector corresponding to λ₁ = −2.

b) Next, let us find an eigenvector corresponding to λ₂ = −4. We solve

(−3 − (−4)) x₁ + x₂ = 0
x₁ + (−3 − (−4)) x₂ = 0,   that is,   x₁ + x₂ = 0.

This implies x₂ = −x₁, and hence (x₁, x₂)ᵀ = x₁(1, −1)ᵀ. Therefore, the vector (1, −1)ᵀ is an eigenvector corresponding to the eigenvalue λ₂ = −4.

Hence, the general solution of the given system is

y(t) = c₁ e^(−2t) (1, 1)ᵀ + c₂ e^(−4t) (1, −1)ᵀ,

which is equivalent to

y₁(t) = c₁ e^(−2t) + c₂ e^(−4t)
y₂(t) = c₁ e^(−2t) − c₂ e^(−4t).

2. Let y(t) = x e^(λt), y, x ∈ ℝ³. Then the corresponding eigenvalue problem is Ax = λx for the coefficient matrix A, whose eigenvalues are λ₁ = 1 (repeated) and λ₂ = 4. An eigenvector corresponding to the eigenvalue λ₁ = 1 can be found as follows. We solve

(2 − 1) x₁ + x₂ + x₃ = 0
x₁ + (2 − 1) x₂ + x₃ = 0
x₁ + x₂ + (2 − 1) x₃ = 0,

which implies x₁ + x₂ + x₃ = 0, i.e. (x₁, x₂, x₃)ᵀ = (−x₂ − x₃, x₂, x₃)ᵀ = x₂(−1, 1, 0)ᵀ + x₃(−1, 0, 1)ᵀ.

Similarly, an eigenvector corresponding to λ₂ = 4 is obtained to be (1, 1, 1)ᵀ.

Thus, the general solution will be

(y₁(t), y₂(t), y₃(t))ᵀ = c₁ (−1, 1, 0)ᵀ eᵗ + c₂ (−1, 0, 1)ᵀ eᵗ + c₃ (1, 1, 1)ᵀ e^(4t).
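The eigenvalue computation in Example 2.8.1(2) can be cross-checked numerically. This is a minimal sketch assuming NumPy is available.

```python
# Numerical cross-check of Example 2.8.1(2): the eigenvalues/eigenvectors of the
# coefficient matrix give the building blocks v_i * exp(lambda_i * t) of the solution.
import numpy as np

A = np.array([[2.0, 1.0, 1.0],
              [1.0, 2.0, 1.0],
              [1.0, 1.0, 2.0]])

eigvals, eigvecs = np.linalg.eig(A)
print(np.round(np.sort(eigvals), 6))    # [1. 1. 4.]  -> lambda = 1 (double) and 4
# each column of eigvecs is an eigenvector; e.g. (1,1,1)/sqrt(3) pairs with lambda = 4
```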
Next consider the nonhomogeneous case, where the coefficient matrix and the vector coefficients of F(t) are constants. We will illustrate the method by the following example: find the general solution of y′ = Ay + F(t) with A = (1 0; 6 −1), for a) F(t) = (2, 1)ᵀ and b) F(t) = (2, 1)ᵀ e^(2t).

a) Seeking a constant particular solution y_p = (p₁, p₂)ᵀ, we require Ay_p + F = 0. This implies that p₁ + 2 = 0 and 6p₁ − p₂ + 1 = 0, and solving these gives us p₁ = −2 and p₂ = −11. Therefore, the particular solution is y_p = (−2, −11)ᵀ = −(2, 11)ᵀ, and the general solution is

y(t) = C₁ eᵗ (1, 3)ᵀ + C₂ e^(−t) (0, 1)ᵀ − (2, 11)ᵀ.

b) Here we have

F(t) = p e^(2t) = (p₁, p₂)ᵀ e^(2t).

This implies

2p e^(2t) = Ap e^(2t) + (2, 1)ᵀ e^(2t)  ⟺  (2p₁, 2p₂)ᵀ = (p₁ + 2, 6p₁ − p₂ + 1)ᵀ.

Since e^(2t) ≠ 0, simplifying gives 2p₁ = p₁ + 2 and 2p₂ = 6p₁ − p₂ + 1, and solving for p₁ and p₂ gives p₁ = 2 and p₂ = 13/3.

2.8.2 The Method of Elimination

2. D²(2x³ − 2x² + 3) = 12x − 4.

A linear combination of differential operators of the form

aₙ Dⁿ + aₙ₋₁ D^(n−1) + ⋯ + a₁ D + a₀,

where a₀, a₁, . . . , aₙ are constants, is called an nth order polynomial operator and is denoted P(D), and

P(D)y = (aₙ Dⁿ + ⋯ + a₁ D + a₀) y = aₙ dⁿy/dxⁿ + ⋯ + a₁ dy/dx + a₀ y.

Example 2.8.4.

1. y″ + 3y′ − y = 0 implies (D² + 3D − 1)y = 0.

2. y‴ − 4y′ = cos x implies (D³ − 4D)y = cos x.
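Applying a polynomial operator P(D) is just repeated differentiation, which a computer algebra system can verify. The sketch below assumes SymPy is available; the helper name apply_PD is ours, not standard notation.

```python
# SymPy sketch of applying a polynomial operator P(D); it checks the displayed
# computation D^2(2x^3 - 2x^2 + 3) = 12x - 4.
import sympy as sp

x = sp.Symbol('x')

def apply_PD(coeffs, expr):
    """Apply P(D) = a_n D^n + ... + a_1 D + a_0 to expr; coeffs = [a_0, a_1, ..., a_n]."""
    return sum(a * sp.diff(expr, x, k) for k, a in enumerate(coeffs))

print(apply_PD([0, 0, 1], 2*x**3 - 2*x**2 + 3))   # 12*x - 4
```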
Definition 2.8.2.

1. Two polynomial operators P₁(D) and P₂(D) are equal if and only if P₁(D)y = P₂(D)y for all functions y.

2. The sum P₁(D) + P₂(D) is obtained by first expressing P₁ and P₂ as linear combinations of the operator D and adding the coefficients of like powers of D.

3. The product P₁(D)P₂(D) is obtained by applying the operator P₂(D) first and then P₁(D), i.e. P₁(D)P₂(D)y = P₁(D)[P₂(D)y].

Example. Use the method of elimination to solve the following systems.

1. Dy₁ − y₂ = x²,   Dy₂ + 4y₁ = x.

2. (D − 2)y₁ + 2D y₂ = 2 − 4e^(2x),   (2D − 3)y₁ + (3D − 1)y₂ = 0.

Solution

1. The system is equivalent to

{ Dy₁ − y₂ = x²          { Dy₁ − y₂ = x²
{ Dy₂ + 4y₁ = x    ⟺     { 4y₁ + Dy₂ = x

To eliminate y₂, apply D to the first equation, obtaining D²y₁ − Dy₂ = 2x, and add the second equation. This gives (D² + 4)y₁ = 3x, a second order linear DE in y₁ alone, which can be solved by the earlier methods.

2. The system can be written in matrix form as

( D − 2    2D   ) (y₁)   ( 2 − 4e^(2x) )
( 2D − 3  3D − 1) (y₂) = (      0      )

Then by Cramer's rule

y₁ = det( 2 − 4e^(2x), 2D ; 0, 3D − 1 ) / det( D − 2, 2D ; 2D − 3, 3D − 1 )
   = (3D − 1)(2 − 4e^(2x)) / [(D − 2)(3D − 1) − (2D − 3)2D].

This implies [⟨3D² − 7D + 2⟩ − ⟨4D² − 6D⟩] y₁ = (3D − 1)(2 − 4e^(2x)). Simplifying this gives us (−D² − D + 2) y₁ = −24e^(2x) − 2 + 4e^(2x), and then

−y₁″ − y₁′ + 2y₁ = −20e^(2x) − 2,

which is a second order linear DE in y₁ (solve this equation). Similarly,

y₂ = det( D − 2, 2 − 4e^(2x) ; 2D − 3, 0 ) / (−D² − D + 2) = −(2D − 3)(2 − 4e^(2x)) / (−D² − D + 2).

This implies

(−D² − D + 2) y₂ = 16e^(2x) + 6 − 12e^(2x),

and then −y₂″ − y₂′ + 2y₂ = 4e^(2x) + 6, which is a second order linear DE in y₂ (solve this equation).

Therefore, the solution is:

y₁ = C₁ e^(−2x) + C₂ eˣ + 5e^(2x) − 1
y₂ = −C₁ e^(−2x) + (1/2) C₂ eˣ − e^(2x) + 3.
2.8.3 Reduction of higher order ODEs to systems of ODE of the first order

In the previous sections we used the characteristic equation to solve higher order ODEs. The characteristic equations are polynomials of degree n, where n is the order of the ODE. However, solving polynomials is a challenging task when the degree gets large. Because of the techniques developed in linear algebra for reducing matrices, it is often preferable to solve an eigenvalue problem when the order of the ODE is high.

There is an equivalence between an nth order linear ODE and a system of n ODEs of first order. In subsection 2.8.2 we have seen how to transform a system of n first order ODEs into an nth order linear ODE. It is also possible to convert a higher order equation into a system of first order equations using new variable definitions. To see this, consider a homogeneous nth order linear ODE:

aₙ y⁽ⁿ⁾ + aₙ₋₁ y⁽ⁿ⁻¹⁾ + ⋯ + a₁ y′ + a₀ y = 0,  where t is the independent variable.

Then redefine the variables as follows:

x₁(t) = y(t)
x₂(t) = y′(t)
x₃(t) = y″(t)
⋮
xₙ(t) = y⁽ⁿ⁻¹⁾(t)

Then the equation will be equivalent to the system

x₁′(t) = x₂(t)
x₂′(t) = x₃(t)
x₃′(t) = x₄(t)
⋮
xₙ₋₁′(t) = xₙ(t)
xₙ′(t) = −(aₙ₋₁/aₙ) xₙ − (aₙ₋₂/aₙ) xₙ₋₁ − ⋯ − (a₁/aₙ) x₂ − (a₀/aₙ) x₁.

Or, in matrix notation,

X′ = AX,

where the coefficient matrix is

      (   0        1        0       0    ⋯      0          0     )
      (   0        0        1       0    ⋯      0          0     )
      (   0        0        0       1    ⋯      0          0     )
A =   (   ⋮        ⋮        ⋮       ⋮    ⋱      ⋮          ⋮     )
      (   0        0        0       0    ⋯      0          1     )
      ( −a₀/aₙ   −a₁/aₙ   −a₂/aₙ    ⋯    ⋯   −aₙ₋₂/aₙ   −aₙ₋₁/aₙ )

which is the so-called companion matrix of the nth degree characteristic equation of the differential equation. Such matrices have special features in matrix theory, and the eigenvalue problem can be solved by employing the Jordan form of the matrix.
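The reduction above can be automated: build the companion matrix and let an eigenvalue routine find the characteristic roots. The sketch below assumes NumPy is available, and the test equation y‴ − 6y″ + 11y′ − 6y = 0 is an illustrative choice, not taken from the notes.

```python
# NumPy sketch of the reduction: companion matrix of a_n y^(n) + ... + a_0 y = 0,
# whose eigenvalues are the characteristic roots.
import numpy as np

def companion(coeffs):
    """coeffs = [a_0, a_1, ..., a_n]; returns the n x n companion matrix A."""
    a = np.asarray(coeffs, dtype=float)
    n = len(a) - 1
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)          # superdiagonal of ones
    A[-1, :] = -a[:-1] / a[-1]          # last row: -a_k / a_n
    return A

A = companion([-6.0, 11.0, -6.0, 1.0])  # y''' - 6y'' + 11y' - 6y = 0
print(np.round(np.sort(np.linalg.eig(A)[0]), 6))   # [1. 2. 3.]
```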
2.9 Numerical Methods to Solve ODEs

It may be impossible to solve many practical problems analytically, but an approximate solution can be obtained using numerical methods.

2.9.1 Euler's Method

Consider a first order initial-value problem:

y′ = f(x, y);   y(a) = b.

Since y′(x) = lim_(Δx→0) Δy/Δx, we can approximate y′ by the ratio Δy/Δx. Hence we have Δy ≃ f(x, y) Δx.

Let us denote the y values at different points by y₀, y₁, y₂, . . ., where y₀ = y(x₀) is the initial value, and let Δx = h denote the increment in x, called the step size. Then we have:

yₙ₊₁ = yₙ + h f(xₙ, yₙ),   xₙ₊₁ = xₙ + h.

This iterative method is known as Euler's method. Since Euler's method is based on a first order approximation, it may work only for a very small step size h, which makes the iterative scheme very slow.

2.9.2 Runge-Kutta Method

To improve on this drawback of Euler's method, it is better to take the average of the increments Δyₙ and Δyₙ₊₁ as the increment for y instead of Δyₙ alone. Hence we have

yₙ₊₁ = yₙ + (1/2)(k₁ + k₂),   where k₁ = h f(xₙ, yₙ), k₂ = h f(xₙ₊₁, yₙ + k₁),

which is called the Runge-Kutta method of second order.

The Runge-Kutta methods work by using a weighted average of slopes in the basic Euler formula to estimate y(x₀ + h) [or, in general, y(xₖ + h)].

The fourth-order Runge-Kutta method is given by

yₙ₊₁ = yₙ + (h/6)(m₁ + 2m₂ + 2m₃ + m₄),

where

m₁ = f(xₙ, yₙ)
m₂ = f(xₙ + h/2, yₙ + (h/2) m₁)
m₃ = f(xₙ + h/2, yₙ + (h/2) m₂)
m₄ = f(xₙ + h, yₙ + h m₃).

This method is surprisingly accurate for values of h < 1.
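Both update formulas are easy to implement. The following sketch assumes nothing beyond the Python standard library; the test problem y′ = y, y(0) = 1 (exact solution eˣ) is illustrative only.

```python
# Euler vs. fourth-order Runge-Kutta on y' = y, y(0) = 1 (illustrative test problem).
import math

def euler_step(f, x, y, h):
    return y + h * f(x, y)

def rk4_step(f, x, y, h):
    m1 = f(x, y)
    m2 = f(x + h/2, y + h/2 * m1)
    m3 = f(x + h/2, y + h/2 * m2)
    m4 = f(x + h,   y + h * m3)
    return y + h/6 * (m1 + 2*m2 + 2*m3 + m4)

f = lambda x, y: y
h, n = 0.1, 10
ye, yr = 1.0, 1.0
for k in range(n):
    ye = euler_step(f, k*h, ye, h)
    yr = rk4_step(f, k*h, yr, h)

print(ye, yr, math.e)   # Euler ~2.5937, RK4 ~2.718282, exact e = 2.7182818...
```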
Chapter 3

*Nonlinear ODEs and Qualitative Analysis

∙ Many nonlinear equations cannot be solved in closed form.

∙ Qualitative properties tell us how all the trajectories (solution curves) behave near certain points.

3.1 Critical Points and Stability

Consider the autonomous system

dx/dt = P(x, y),   dy/dt = Q(x, y).    (3.1)

∙ Any point (x₀, y₀) at which both P and Q vanish is called a critical (or singular, or equilibrium) point of the system (3.1).

∙ An equilibrium point X₀ = (x₀, y₀) of system (3.1) is said to be stable if motions (or trajectories) that start sufficiently close to X₀ remain close to X₀. Mathematically:

Definition 3.1.1. Let d(P₁, P₂) denote the distance between any two points P₁ = (x₁, y₁) and P₂ = (x₂, y₂), and let P(t) = (x(t), y(t)) denote the representative point in the phase plane corresponding to system (3.1). Then a singular (or equilibrium) point X₀ = (x₀, y₀) is stable if for any given ε > 0 there is a δ > 0 such that

d(P(0), X₀) < δ  ⇒  d(P(t), X₀) < ε,  ∀ t > 0.

∙ A singular point X₀ is called asymptotically stable if motions (trajectories) that start sufficiently close to X₀ not only stay close to X₀ but actually approach X₀ as t → ∞.
4) a Saddle, if all trajectories (paths) approach X₀ in one direction and move away from it in the other direction. A saddle is always unstable.

 – The two straight-line trajectories through the saddle (along which the flow is attracted and repelled) are called the stable and unstable manifolds, respectively.

∙ In many practical problems we will be interested in the stability of equilibrium points. That is, if we take an initial point near an equilibrium point X₀ = (x₀, y₀), does the point (x(t), y(t)) on the solution curve (trajectory) remain near X₀?

∙ To study this, we approximate the nonlinear system (3.1) by its linear terms in the Taylor series expansion in the neighborhood of each singular point.

3.1.1 Stability for linear systems

3.1.2 Stability for nonlinear systems

∙ Consider again the system:

dx/dt = P(x, y)
dy/dt = Q(x, y)

∙ From the Taylor series we have

P(x, y) ≈ Pₓ(x₀, y₀)(x − x₀) + P_y(x₀, y₀)(y − y₀)
Q(x, y) ≈ Qₓ(x₀, y₀)(x − x₀) + Q_y(x₀, y₀)(y − y₀)

∙ Now letting a = Pₓ(x₀, y₀), b = P_y(x₀, y₀), c = Qₓ(x₀, y₀) and d = Q_y(x₀, y₀), we have

Ẋ = aX + bY
Ẏ = cX + dY,    (3.2)

where X = x − x₀ and Y = y − y₀.

∙ The above process is called a linearization process.

∙ System (3.2) can be rewritten as

(Ẋ, Ẏ)ᵀ = ( a  b ; c  d ) (X, Y)ᵀ.

∙ Clearly (0, 0) is a critical point for the linear system (3.2) [and hence the point (x₀, y₀) is a critical point for the system (3.1)].

∙ Let λ₁ and λ₂ be the two eigenvalues of the coefficient matrix ( a  b ; c  d ).

∙ Then the nature of the critical point (0, 0) of the system (3.2) depends upon the nature of the eigenvalues λ₁ and λ₂.

1. If λ₁ and λ₂ are real, unequal and of the same sign, then the critical point (0, 0) of the linear system (3.2) is a node.
 – If, in addition, both λ₁ and λ₂ are positive, then the critical point is an unstable node.
 – If both λ₁ and λ₂ are negative, then the critical point is a stable node.

2. If λ₁ and λ₂ are real and of opposite sign, then the critical point (0, 0) of the linear system (3.2) is a saddle point.

3. If λ₁ and λ₂ are real and equal, then the critical point (0, 0) of the linear system (3.2) is a node.
 – If, in addition, λ₁ = λ₂ < 0, then it is a stable node, and if λ₁ = λ₂ > 0, then it is an unstable node.
 – If a = d ≠ 0 and b = c = 0, then it is a proper node; otherwise it is an improper node.

4. If λ₁ and λ₂ are complex conjugates with nonzero real part, then the equilibrium point (0, 0) of the linear system (3.2) is a focus (spiral).
 – If, in addition, the real part is negative, then the critical point is a stable focus.
 – If the real part is positive, then it is an unstable focus.

5. If λ₁ and λ₂ are pure imaginary, then the equilibrium point (0, 0) of the linear system (3.2) is a center.
A center is always stable even though it is not asymptotically stable.
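The linearization recipe above can be carried out symbolically: compute the Jacobian at the critical point and inspect its eigenvalues. The sketch below assumes SymPy is available; the system x′ = y, y′ = −x − y/2 − x³ is an illustrative example, not one analyzed in these notes.

```python
# Classify the critical point (0, 0) of an illustrative nonlinear system by linearization.
import sympy as sp

x, y = sp.symbols('x y')
P = y
Q = -x - y/2 - x**3          # illustrative system, not from the notes

# Jacobian (the matrix (a b; c d) above) evaluated at the critical point (0, 0).
J = sp.Matrix([[P.diff(x), P.diff(y)],
               [Q.diff(x), Q.diff(y)]]).subs({x: 0, y: 0})

eigs = list(J.eigenvals())
print(eigs)                  # complex conjugate pair with real part -1/4
if all(sp.re(lam) < 0 for lam in eigs) and any(sp.im(lam) != 0 for lam in eigs):
    print("(0, 0) is a stable focus of the linearized system")
```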
2. The constant terms in the linearized system are missing because P(x₀, y₀) = Q(x₀, y₀) = 0.

3. The nature of the equilibrium points of the nonlinear system (3.1) can be determined from that of the linearized system (3.2), as in the following theorems.

Theorem 3.1.3 (Poincaré's Result). The classification of all singular points of the nonlinear system (3.1) corresponds in both type and stability with the results obtained by considering the linearized system (3.2), except for a center and a proper node. In these exceptional cases

(i) a center of the linearized system could be either a focus or a center for the nonlinear system;

(ii) a proper node could also be either a spiral or a node for the nonlinear system (3.1).

To determine these exceptional cases one needs to study the original nonlinear system further.

The above procedure (the linearization process) can also be used to analyze a second order nonlinear ODE. This is done by the substitution of variables y = ẋ, which implies ẏ = ẍ, resulting in a nonlinear system of two first order equations. However, if such an equation has no term in ẋ, we need the following theorem.

Theorem 3.1.4. If the nonlinear equation ẍ + f(x) = 0 has a singular point in the xẋ plane (phase plane) at which the linearized system indicates a center or a proper node, then the nonlinear equation also has the same property there.

2. The equation ẍ + εẋ³ + x = 0 models a harmonic oscillator with cubic damping, that is, with a damping term proportional to the velocity cubed. Find the critical point(s) and determine their nature.

Ans.: the point (0, 0) is the only critical point and it is a center for the nonlinear equation.

3.2 Stability by Lyapunov's Method

3.3 Exercises
Part II
Vector Analysis
denoted by r(𝑡) . If 𝑓 (𝑡), 𝑔(𝑡) and ℎ(𝑡) are the components of the vector r(𝑡), then 𝑓, 𝑔 and ℎ
are real-valued functions called the component functions of r and we can write
Example 4.1.1. The function r(𝑡) = 𝑡3 𝑖 + 𝑒−𝑡 𝑗 + sin 𝑡𝑘 is a vector valued function and the
Chapter 4 component functions of r are 𝑡3 , 𝑒−𝑡 and sin 𝑡.
Remark 4.1.2. The domain of a vector valued function r consists of all values of 𝑡 for which the
Vector Differential Calculus expression r(𝑡) is defined, that is the values of 𝑡 for which all the component functions are defined.
For example, if
√
r(𝑡) = 𝑡𝑖 + ln(𝑡 − 2)𝑗 + 3𝑡,
√
4.1 Vector Calculus then the domain of r(𝑡) is the set of points in R, where 𝑡, ln(𝑡 − 2) and 3𝑡 are defined. That
is, 𝑡 ≥ 0 and 𝑡 − 2 > 0 and hence the domain of r is (𝑡, ∞).
In the previous Applied Mathematics courses, specifically in the linear algebra part, we have been
discussing about constant vectors, but the most interesting applications of vectors involve also For each 𝑡, where r is defined, draw r(𝑡) as as a vector from the origin to the point (𝑓 (𝑡), 𝑔(𝑡), ℎ(𝑡)).
vector functions. The end points of these vectors traces out a curve C as t varies.
The simplest example is a position vector that depends on time. We can differentiate such a
Example 4.1.2. The function r(𝑡) = (1 + 𝑡)𝑖 + 𝑡𝑗 + (3 − 𝑡)𝑘 is a vector valued function of one
function with respect to time and the first derivative of such function is the velocity and its
variable. The curve that is traced out by the heads of the position vectors this vector valued
second derivative is the acceleration of the particle whose position is given by the position vector.
function is a line that passes through the point (1, 0, 3) and with directional vector (1, 1, −1).
In this case, the coordinates of the tip of the position vector are functions of time.
Therefore, it is worth to talk about such functions and in this course, specially in this chapter we 4.1.2 Limit of A Vector Valued Function
are going to address the calculus of vector fields (vector valued functions).
Definition 4.1.3. A vector valued function 𝑉 (𝑡) is said to have the limit 𝑙 as t approaches 𝑡0 , if
𝑣(𝑡) is defined in some neighborhood of 𝑡0 (possibly except at 𝑡0 ) and
4.1.1 Vector Functions of One Variable in Space
lim ∥𝑉 (𝑡) − 𝑙∥ = 0.
𝑡→𝑡0
First recall the definition of a function, that is, a function is a rule that assigns to each element
in the domain an element in the range. Then we write
lim 𝑉 (𝑡) = 𝑙.
Definition 4.1.1. A vector-valued function, or vector function, is a function whose domain is a 𝑡→𝑡0
set of real numbers and whose range is a set of vectors. A vector function 𝑣(𝑡) is said to be continuous at 𝑡 = 𝑡0 if it is defined in some neighborhood of
𝑡0 and
In this course, we are most interested in vector functions whose values are three-dimensional
lim 𝑉 (𝑡) = 𝑣(𝑡0 ).
vectors. This means that for every number 𝑡 in the domain of there is a unique vector in R3 𝑡→𝑡0
Z where
𝑥 = 𝑓 (𝑡), 𝑦 = 𝑔(𝑡) and 𝑧 = ℎ(𝑡) (4.1)
r(t)
The equations in 4.1 are called parametric equations of the curve 𝐶 and the variable 𝑡 is called
the parameter.
Y Example 4.1.4. The components of the vector valued function r(𝑡) = (2 cos 𝑡, 2 sin 𝑡, 0) are
parametric equations of a circle with center at the origin and radius 2 in space.
X
4.1.3 Derivative of a Vector Function
Figure 4.1: Graph of the line in Example 4.1.2
Recall that, if 𝑓 is a real-valued function of one variable, then the derivative of 𝑓 at any point 𝑡
in the domain of 𝑓 is
The following theorem is used as an alternative definition of limit of a vector valued function. 𝑓 (𝑡 + ℎ) − 𝑓 (𝑡)
𝑓 ′ (𝑡) = lim ,
ℎ→0 ℎ
Theorem 4.1.4. If r(𝑡) = (𝑓 (𝑡), 𝑔(𝑡), ℎ(𝑡)), then
provided that the limit exists. Now, let us define the derivative of a vector valued function of one
lim r(𝑡) = 𝑙 variable.
𝑡→𝑡0
if and only if Definition 4.1.6. A vector function 𝑉 (𝑡) is said to be differentiable at a point 𝑡 in the domain
( )
lim 𝑓 (𝑡), lim 𝑔(𝑡), lim ℎ(𝑡) = 𝑙. of V if the limit
𝑡→𝑡0 𝑡→𝑡0 𝑡→𝑡0 𝑉 (𝑡 + ℎ) − 𝑉 (𝑡)
lim
Example 4.1.3. Find lim𝑡→0 r(𝑡), if ℎ→0 ℎ
exists and if the limit exists then it is denoted by 𝑉 ′ (𝑡). That is,
( sin 𝑡 )
r(𝑡) = 𝑡3 𝑖 + 𝑒−𝑡 𝑗 + 𝑘.
𝑡 𝑉 (𝑡 + ℎ) − 𝑉 (𝑡)
𝑉 ′ (𝑡) = lim
ℎ→0 ℎ
Solution
Remark 4.1.7. If the function 𝑉 (𝑡) = (𝑉1 (𝑡), 𝑉2 (𝑡), 𝑉3 (𝑡)) is a vector field, then 𝑉 ′ (𝑡) =
By Theorem 4.1.4 we have , (𝑣1′ (𝑡), 𝑉2′ (𝑡), 𝑉 ′ 3(𝑡))
Vector valued functions and curves in space have close connection. Suppose that 𝑓, 𝑔 and ℎ are
continuous real-valued functions on an interval I. Then the set 𝐶 of all points (𝑥, 𝑦, 𝑧) in space
Differentiation Rules Definition 4.1.10. Let 𝑉 : R𝑛 → R3 , 𝑉 = (𝑉𝑖 , 𝑉2 , 𝑉3 ) where each 𝑉𝑖 is a function of 𝑛 variables,
∂𝑉
𝑡1 , 𝑡2 , . . . , 𝑡𝑛 . Then the partial derivative of 𝑉 with respect to 𝑡𝑖 is denoted by ∂𝑡𝑖
and is defined
Let 𝑈 (𝑡) and 𝑉 (𝑡) be a vector valued functions in space and c be any constant. Then
as the vector function
∂𝑉 ∂𝑉1 ∂𝑉2 ∂𝑉3
′ ′
=( , , )
a) (𝑐𝑉 ) = 𝑐𝑉 ∂𝑡𝑖 ∂𝑡𝑖 ∂𝑡𝑖 ∂𝑡𝑖
( )
Example 4.1.7. If 𝑓 (𝑥, 𝑦) = (𝑥2 + 𝑦), ln(𝑥2 + 𝑦 2 ), sin(𝑥 + 3𝑦) , then
b) (𝑈 + 𝑉 )′ = 𝑈 ′ + 𝑉 ′
� � � �
∂𝑓 2𝑥 ∂𝑓 2𝑦
c) (𝑈.𝑉 )′ = 𝑈 ′ .𝑉 + 𝑈.𝑉 ′ = 2𝑥, 2 , cos(𝑥 + 3𝑦) and = 𝑦, , 3 cos(𝑥 + 3𝑦) .
∂𝑥 𝑥 + 𝑦2 ∂𝑦 𝑥2 + 𝑦 2
d) (𝑈 × 𝑉 ) = 𝑈 ′ × 𝑉 + 𝑈 × 𝑉 ′
4.2 The Gradient Field
Remark 4.1.8. Let V(t) be a vector function of constant norm. i.e. ∥𝑉 (𝑡)∥ = 𝑐 for a constant 𝑐
or 𝑉.𝑉 = 𝑐2 Then (𝑉.𝑉 )′ = (𝑐2 )′ = 0 which implies 2𝑉 ′ .𝑉 = 0. Then, either 𝑉 ′ = 0 or 𝑉 ′ ⊥ 𝑉. Let 𝐹 (𝑥, 𝑦, 𝑧) be a real valued functions of three variables (i.e. F is a scalar field defined from
Therefore, a nonzero vector field with constant norm is perpendicular to its derivative. 𝑋 ⊂ R3 into R.) The gradient of 𝐹 , denoted by ∇𝐹, is a vector field defined by
∂𝐹 ∂𝐹 ∂𝐹 ∂𝐹 ∂𝐹 ∂𝐹
∇𝐹 = ( , , )= 𝑖+ 𝑗+ 𝑘
4.1.4 Vector and Scalar Fields ∂𝑥 ∂𝑦 ∂𝑧 ∂𝑥 ∂𝑦 ∂𝑧
Now, let us consider vector valued functions, called vector fields, of several variables. Vector and if P is a point in the domain of F, the gradient of F evaluated at P is denoted by ∇𝐹 (𝑃 )
valued functions of one variable are also called vector fields. and also if 𝑓 is a function of two variables, then the the gradient of 𝑓 , denoted by ∇𝑓, is defined
by
Definition 4.1.9. A function 𝑓 whose value is a scalar (or a real number), say 𝑓 : 𝑋 → R, ∂𝑓 ∂𝑓
∇𝑓 = + .
∂𝑥 ∂𝑦
𝑋 ⊂ R𝑛 , is called a scalar field.
But in this section we will focus on the gradient of functions of three variables.
A function 𝑣 whose value is a vector, say 𝑣 : 𝑋 → R𝑚 , 𝑋 ⊂ R𝑛 , is called a vector field. That
is a vector field is a vector valued function. Example 4.2.1. If 𝐹 (𝑥, 𝑦, 𝑧) = 2𝑥 + 𝑥𝑦 − 𝑦𝑧 2 , then F is a scalar field and
where 𝑋 = {(𝑥, 𝑦)∣0 ≤ 𝑥 ≤ 1, 0 ≤ 𝑦 ≤ 1}, which is a Temperature Field of a square plate Example 4.2.2. The gradient of 𝑓 (𝑥, 𝑦) = 𝑥𝑦 + 2𝑥3 is
is a scalar field. ∂𝑓 ∂𝑓
∇𝑓 = 𝑖+ 𝑗 = (𝑦 + 6𝑥2 )𝑖 + 𝑥𝑗.
∂𝑥 ∂𝑥
2. The function 𝑓 : 𝑋 → R given by 𝑓 (𝑥, 𝑦) = (𝑥 + 𝑦, ln(𝑥 + 𝑦 ), sin(𝑥 + 3𝑦)), where
3 2 2 2
𝑋 = R2 ∖{(0, 0)} is a vector field. Remark 4.2.1. Let F and G be scalar fields of three variables and c be a constant. Then
Let 𝑃 (𝑥0 , 𝑦0 , 𝑧0 ) be a point and 𝑢 = 𝑎𝑖 + 𝑏𝑗 + 𝑐𝑘 be a unit vector, i.e. 𝑎2 + 𝑏2 + 𝑐2 = 1. Then the Example 4.2.4. Let 𝐹 (𝑥, 𝑦, 𝑧) = 2𝑥𝑧 + 𝑦𝑧 2 and 𝑃 (1, 1, 2). The gradient of F is
directional derivative of a scalar field F at the point P in the direction of u, denoted by 𝐷𝑢 𝐹 (𝑃 ),
∇𝐹 (𝑥, 𝑦, 𝑧) = 2𝑧𝑖 + 𝑧 2 𝑗 + (2𝑥 + 2𝑦𝑧)𝑘
is defined by
∂𝐹 ∂𝐹 ∂𝐹 and then
𝐷𝑢 𝐹 (𝑃 ) = 𝑎 (𝑥0 , 𝑦0 , 𝑧0 ) + 𝑏 (𝑥0 , 𝑦0 , 𝑧0 ) + 𝑐 (𝑥0 , 𝑦0 , 𝑧0 ) = ∇𝐹 (𝑥0 , 𝑦0 , 𝑧0 ).𝑢, ∇𝐹 (2, 1, 1) = 2𝑖 + 𝑗 + 6𝑘.
∂𝑥 ∂𝑦 ∂𝑧
the scalar product of the vectors ∇𝐹 (𝑥0 , 𝑦0 , 𝑧0 ) and u. The maximum rate of change of F at (2, 1, 1) is in the direction of 2𝑖 + 𝑗 + 6𝑘 and this maximum
√ √
rate of change is 22 + 12 + 62 = 41.
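The gradient computation in Example 4.2.4 (evaluated at the point (2, 1, 1) used in the displayed gradient) can be checked with a computer algebra system; the sketch assumes SymPy is available.

```python
# SymPy check of Example 4.2.4: F = 2xz + y z^2, gradient and maximum rate of change at (2, 1, 1).
import sympy as sp

x, y, z = sp.symbols('x y z')
F = 2*x*z + y*z**2

grad = sp.Matrix([F.diff(x), F.diff(y), F.diff(z)])
g = grad.subs({x: 2, y: 1, z: 1})
print(g.T)                 # Matrix([[2, 1, 6]])
print(sp.sqrt(g.dot(g)))   # sqrt(41), the maximum rate of change at (2, 1, 1)
```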
Example 4.2.3. Given 𝐹 (𝑥, 𝑦, 𝑧) = 2𝑥 + 𝑥𝑦 − 𝑦𝑧 2 , the directional derivative of F at the point
(1, 2, 2) in the direction of the unit vector 𝑢 = ( 23 , 13 , 23 ) is
� � � � � � 4.2.1 Level Surfaces, Tangent Planes and Normal Lines
1 ∂𝐹 2 ∂𝐹 1 ∂𝐹 −7
𝐷𝑢 𝐹 (1, 2, 2) = (1, 2, 2) + (1, 2, 2) + (1, 2, 2) = .
3 ∂𝑥 3 ∂𝑦 3 ∂𝑧 3 The gradient of a scalar field can be use to find equations of tangent planes and equations of
Remark 4.2.2. If F is a scalar field of three variables and 𝑣 is any nonzero vector then the normal lines of a level surface defined by the scalar field at a given point.
1
directional derivative of F at a point P in the direction of v is given by 𝐷𝑢 𝐹 (𝑃 ), where 𝑢 = ∥𝑣∥
𝑣.
Let F be a function of three variables and c be a number. The set of points (𝑥, 𝑦, 𝑧) such that
Let F be a scalar field and F and its partial derivatives be continuous in some sphere about a 𝐹 (𝑥, 𝑦, 𝑧) = 𝑐 is called a level surface of F.
point P and u be a unit vector. Then
Example 4.2.5. Let 𝐹 (𝑥, 𝑦, 𝑧) = 𝑥2 + 𝑦 2 + 𝑧 2 . If 𝑐 > 0, then the level surface 𝐹 (𝑥, 𝑦, 𝑧) = 𝑐 is
√
𝐷𝑢 𝐹 (𝑃 ) = ∇𝐹 (𝑃 ).𝑢 = ∥∇𝐹 (𝑃 )∥∥𝑢∥ cos 𝜃 = ∥∇𝐹 (𝑃 )∥ cos 𝜃, a sphere with radius 𝑐; if 𝑐 = 0, then the level surface 𝐹 (𝑥, 𝑦, 𝑧) = 𝑐 is just the point (0, 0, 0)
and if 𝑐 < 0, then the level surface 𝐹 (𝑥, 𝑦, 𝑧) = 𝑐 is empty set.
where 𝜃 is the angle between u and ∇𝐹 (𝑃 ).
Theorem 4.2.3. Let F be a scalar field and F and its partial derivatives be continuous in some The plane containing these tangent vectors is called the tangent plane to the surface S at 𝑃0
sphere about a point P and suppose that ∇𝐹 (𝑃 ) ∕= 0. Then and a vector orthogonal to this tangent plane at 𝑃0 is called a normal vector, or normal, to
the surface S at 𝑃0 . The line through 𝑃0 in the direction of the normal vector is called a normal
1. At P, F has its maximum rate of change in the direction of ∇𝐹 (𝑃 ) and this maximum rate
line to the surface S at the point 𝑃0 .
of change is ∥∇𝐹 (𝑃 )∥.
2. At P, F has its minimum rate of change in the direction of −∇𝐹 (𝑃 ) and this minimum Therefore, to determine equation of the tangent plane and normal line to a surface S at a given
rate of change is −∥∇𝐹 (𝑃 )∥. point P, we need to have a normal vector to the tangent plane and for this purpose we have the
following theorem.
for 𝑎 ≤ 𝑡 ≤ 𝑏 are said to constitute a curve C joining the endpoints r(𝑎) and r(𝑏) and (4.2) is
called a parametrization of the curve. We call the functions 𝑥, 𝑦 and 𝑧, coordinate functions.
X
Z
Figure 4.2: Normal vector to a surface.
r(b)
Theorem 4.2.4. Let F be a function of three variables and suppose that F and its first partial
derivatives are continuous at a point P on the level surface S given by 𝐹 (𝑥, 𝑦, 𝑧) = 𝑐. Suppose
that ∇𝐹 (𝑃 ) ∕= 0. Then ∇𝐹 (𝑃 ) is normal to the level surface S at the point P.
r(t)
Example 4.2.6. Find the equation of the tangent plane and normal line to the surface
r(a) Y
4 4 4
3𝑥 + 3𝑦 + 6𝑧 = 12
Figure 4.3: A curve with initial point r(a) and terminal point r(b).
Solution
We call a curve C that is parameterized by r(𝑡) = (𝑥(𝑡), 𝑦(𝑡), 𝑧(𝑡)), for 𝑎 ≤ 𝑡 ≤ 𝑏:
Let 𝐹 (𝑥, 𝑦, 𝑧) = 3𝑥4 + 3𝑦 4 + 6𝑧 4 . Then
∂𝐹 ∂𝐹 ∂𝐹 ∙ continuous if each coordinate function is continuous;
(𝑥, 𝑦, 𝑧) = 12𝑥3 , (𝑥, 𝑦, 𝑧) = 12𝑦 3 and (𝑥, 𝑦, 𝑧) = 24𝑧 3
∂𝑥 ∂𝑥 ∂𝑥
∙ differentiable if each coordinate function is differentiable;
which are continuous at the point (1, 1, 1) and hence ∇𝐹 (1, 1, 1) = 12𝑖 + 12𝑗 + 24𝑘.
∙ closed if the initial and terminal points coincide, that is,
Since ∇𝐹 (1, 1, 1) is normal to the plane, equation of the plane is given by (𝑥(𝑎), 𝑦(𝑎), 𝑧(𝑎)) = (𝑥(𝑏), 𝑦(𝑏), 𝑧(𝑏))
∙ simple if 𝑎 < 𝑡1 < 𝑡2 < 𝑏 implies that (𝑥(𝑡1 ), 𝑦)𝑡1 ), 𝑧(𝑡1 )) ∕= (𝑥(𝑡2 ), 𝑦(𝑡2 ), 𝑧(𝑡2 )), in other 1. Straight line:
words, if it does not intersect itself; A straight line L through a point P with position vector in the direction of a constant vector
A can be represented as
∙ smooth if the coordinate functions have continuous derivatives which are never all zero for
the same value of 𝑡, that is, it possesses a tangent vector that varies continuously along 𝑟(𝑡) = 𝑃 + 𝑡𝐴 = (𝑣1 + 𝑡𝑎1 , 𝑣2 + 𝑡𝑎2 , 𝑣3 + 𝑡𝑎3 ), for all 𝑡 ∈ R,
the length of C.
where 𝑃 = (𝑣1 , 𝑣2 , 𝑣3 ), 𝐴 = (𝑎1 , 𝑎2 , 𝑎3 ).
∙ piecewise smooth if it has continuous tangent at all but finitely many points. Such a curve
Z
is a curve with a finite number of corner at which there is no tangent.
If C is a curve which is divided into smooth curves 𝐶1 , 𝐶2 , ..., 𝐶𝑛 such C begins with 𝐶1 ,
𝐶2 begins where 𝐶1 ends and so on, but at the point where 𝐶𝑖 and 𝐶𝑖+1 join, there may P
A
be no tangent in the resulting curve , then C is piecewise smooth curve and we write such L
a curve as
Y
𝐶 = 𝐶1 ⊕ 𝐶2 ⊕ ... ⊕ 𝐶𝑛 .
Z X
Figure 4.5: A line passing through a point and parallel to a given vector.
C
C2 C
3
2. Ellipse, circle:
The vector function:
C1
Y 𝑟(𝑡) = (𝑎 cos 𝑡, 𝑏 sin 𝑡, 0) (4.4)
r(𝑡0 + ℎ) − r(𝑡0 )
Example 4.3.1. The following are examples of curves. r′ (t0 ) = lim
ℎ→0 ℎ
which is equal to
Z moves into a tangent vector to C at the point (x(𝑡0 ), y(𝑡0 ), z(𝑡0 )).
Hence the derivative 𝑟′ (𝑡) (if it exists) of the curve is called the tangent vector to the curve at
the point 𝑟(𝑡) and the equation of the tangent line to the curve C at point P is
If a particle is moving along the curve C with a position vector F(𝑡) = 𝑥(𝑡)𝑖 + 𝑦(𝑡)𝑗 + 𝑧(𝑡)𝑘, then Consider the relation
𝑑𝑇 𝑑𝑇 𝑑𝑡 𝑑𝑇 /𝑑𝑡
the the velocity v(𝑡) of the particle at time t is: = . = .
𝑑𝑆 𝑑𝑡 𝑑𝑆 𝑑𝑆/𝑑𝑡
v(𝑡) = F′ (𝑡) But 𝑑𝑆/𝑑𝑡 = ∥F′ (𝑡)∥ and hence we get
1
and the speed 𝑣(𝑡) of of the particle is the norm of the velocity, i.e 𝐾(𝑡) = ∥T′ (𝑡)∥
∥F′ (𝑡)∥
which is the rate of change of the distance covered by the particle along the curve with respect Example 4.4.1. Curvature of a line at any point is zero.
to the time and the acceleration a(𝑡) of the moving particle is the rate of change of the velocity To see this, let 𝑙 be a line that passes through a point 𝑃 (𝑥0 , 𝑦0 , 𝑧0 ) with directional vector
with respect to time, i.e. 𝐴 = (𝑎, 𝑏, 𝑐). Then the parametric equation of 𝑙 is given by
a(𝑡) = v′ (𝑡) = F′′ (𝑡).
F(𝑡) = (𝑥0 + 𝑡𝑎)𝑖 + (𝑦0 + 𝑡𝑏)𝑗 + (𝑧0 + 𝑡𝑐)𝑘, 𝑡 ∈ R.
If F′ (𝑡) ∕= 0, then the vector F′ (𝑡) is tangent to the curve C. Let T(𝑡) be a unit vector in the
Then F′ (𝑡) = 𝑎𝑖 + 𝑏𝑗 + 𝑐𝑘 and
direction of F′ (𝑡), i.e.
1 1 1
T(𝑡) = F′ (𝑡) T(𝑡) = F′ (𝑡) = √ (𝑎𝑖 + 𝑏𝑗 + 𝑐𝑘).
∥F′ (𝑡)∥ ∥F′ (𝑡)∥ 𝑎2 + 𝑏2 + 𝑐2
Z This implies that, T′ (𝑡) = 0 for all t and hence 𝐾(𝑡) = 0 for all t. This is clear from the fact
that a particle moving on a straight line does not change its direction.
r(t)
Example 4.4.2 (Curvatures of ellipses and circles). Recall that the vector function
T
represents an ellipse and it represents a circle if 𝑎 = 𝑏 in space. Then r′ (𝑡) = (−𝑎 sin 𝑡, 𝑏 cos 𝑡, 0)
X √
and ∥r′ (𝑡)∥ = 𝑎2 sin2 𝑡 + 𝑏2 cos2 𝑡. Therefore
where 𝑎, 𝑏 ≥ 0 and 𝑎2 + 𝑏2 ∕= 0. N Z
T
First r′ (𝑡) = (−𝑎 sin 𝑡)𝑖 + (𝑎 cos 𝑡)𝑗 + 𝑏𝑘 and r′′ (𝑡) = (−𝑎 cos 𝑡)𝑖 − (𝑎 sin 𝑡)𝑗. Then
√ √
∥r′ (𝑡)∥ = 𝑎2 cos2 𝑡 + 𝑎2 sin2 𝑡 + 𝑏2 = 𝑎2 + 𝑏2 r(t)
and � � Y
� �
� 𝑖 𝑗 𝑘�
� �
r′ (𝑡) × r′′ (𝑡) = �� −𝑎 sin 𝑡 𝑎 cos 𝑡 𝑏 �� = 𝑎𝑏 sin 𝑡𝑖 + 𝑎𝑏 cos 𝑡𝑗 + 𝑎2 𝑘,
� �
�−𝑎 cos 𝑡 −𝑎 sin 𝑡 0� X
√
which implies ∥r′ (𝑡) × r′′ (𝑡)∥ = 𝑎 𝑎2 + 𝑏2 . Therefore the curvature 𝐾(𝑡) of the helix is
Figure 4.9: Unit tangent vector and principal unit normal vector to a curve.
∥∥r′ (𝑡) × r′′ (𝑡)∥ 𝑎
𝐾(𝑡) = = 2 .
∥r′ (𝑡)∥3 𝑎 + 𝑏2 Definition 4.4.3. The unit vector B = 𝑇 × 𝑁 is called the binomial vector of the curve 𝐶
In the case of plane curves, that is, graph of functions of the form 𝑦 = 𝑓 (𝑥) can be considered trace out by the vector field r(𝑡).
as curves traced out by a vector field r(𝑡) = 𝑡𝑖 + 𝑓 (𝑡)𝑗. Here the 𝑘 𝑡ℎ component is considered
Now, by using the rule of differentiation we have
to be zero. Therefore r′ (𝑡) = 𝑖 + 𝑓 ′ (𝑡)𝑗 and r′′ (𝑡) = 𝑓 ′′ (𝑡)𝑗 and then the curvature of this curve � � � �
𝑑B 𝑑 𝑑T 𝑑N
is given by = (T × N) = ×N + T× .
∣𝑓 ′′ (𝑡)∣ 𝑑𝑆 𝑑𝑆 𝑑𝑆 𝑑𝑆
𝐾(𝑡) = 3 ,
(1 + (𝑓 ′ (𝑡))2 ) 2 But
𝑑T
since r′ (𝑡) × r′′ (𝑡) = 𝑓 ′′ (𝑡)𝑘. × N = 0,
𝑑𝑆
Example 4.4.4. Find the curvature of the parabola 𝑦 = 𝑎𝑥2 + 𝑏𝑥 + 𝑐, where 𝑎 ∕= 0. since they are of the same direction. This implies
The vector field that traces out the parabola is given by r(𝑡) = 𝑡𝑖+𝑓 (𝑡)𝑗, where 𝑓 (𝑥) = 𝑥2 +𝑏𝑥+𝑐. 𝑑B 𝑑N
=T× ,
Then 𝑓 ′ (𝑡) = 2𝑎𝑡 + 𝑏 and 𝑓 ′′ (𝑡) = 2𝑎. Therefore 𝑑𝑆 𝑑𝑆
𝑑B 𝑑B
∣𝑎∣ which implies 𝑑𝑆
⊥ 𝑇 and since B is vector of constant norm we get that 𝑑𝑆
⊥ B.
𝐾(𝑡) = 3 .
(1 + (2𝑎𝑡 + 𝑏)2 ) 2
𝑑B 𝑑B
Given a curve C which is parameterized by the the position vector r(𝑡), we have a unit tangent Hence, the vector 𝑑𝑆
is parallel to N. This implies that 𝑑𝑆
= −𝜏 𝑁 for some constant 𝜏 (the
vector T at a point where the coordinate functions are differentiable. Now we are looking to get negative sign is traditional). Here the scalar 𝜏 is called the torsion along the curve and from
a unit normal vector to the curve at a point where the coordinate functions are differentiable. N. 𝑑B
𝑑𝑆
= −𝜏 N.N = −𝜏.1 we have
𝑑B
𝜏 =−
.N.
𝑑𝑆
Since T has constant length, 𝑑T
𝑑𝑠
is orthogonal to T. At a point where 𝐾(𝑆) ∕= 0, the vector Unlike 𝐾, which is always positive,𝜏 can be positive, negative or zero.
1 𝑑𝑇 𝑇 ′ (𝑡) 𝑟′′ (𝑆) Since B, T and N are mutually orthogonal, they are linearly independent. Hence any vector in
𝑁= . = =
𝐾 𝑑𝑆 ∥𝑇 ′ (𝑡)∥ ∥𝑟(𝑆)∥ R3 can be represented as a linear combination of these vectors.
is a unit vector parallel to 𝑇 ′ (𝑡) and hence normal to the curve and it is called principal unit
normal vector for a curve 𝐶. If B′ , T′ and N′ exist, then we get the following:
4.5 Divergence and Curl 2. The product of ∇ and a vector field F in the given order is the divergence of F , that is, if
𝐹 = 𝑓 𝑖 + 𝑔𝑗 + ℎ𝑘, then
Recall that, the gradient operator produces a vector field from a scalar field. We will discuss two � �
∂ ∂ ∂ ∂𝑓 ∂𝑔 ∂ℎ
∇.𝐹 = 𝑖+ 𝑗 + 𝑘 .(𝑓 𝑖 + 𝑔𝑗 + ℎ𝑘) = + + = 𝑑𝑖𝑣𝐹.
other important vector operations. One produces a scalar field from a vector field and the other ∂𝑥 ∂𝑦 ∂𝑧 ∂𝑥 ∂𝑦 ∂𝑧
produces a vector field from a vector field.
Here even though ∇.F = 𝑑𝑖𝑣F this notation is not directly equivalent to the scalar (dot)
Definition 4.5.1. Let 𝐹 : R → R be a differentiable vector field given by
3 3 product. This is because
∂𝑓 ∂𝑔 ∂ℎ
∇.F = + +
∂𝑥 ∂𝑦 ∂𝑧
𝐹 (𝑥, 𝑦, 𝑧) = 𝑓 (𝑥, 𝑦, 𝑧)𝑖 + 𝑔(𝑥, 𝑦, 𝑧)𝑗 + ℎ(𝑥, 𝑦, 𝑧)𝑘.
which is a real number depending on values of 𝑓, 𝑔 and ℎ, whereas
1. The divergence of F, denoted by 𝑑𝑖𝑣𝐹 , is the scalar field defined by ∂ ∂ ∂
F.∇ = 𝑓 +𝑔 +ℎ
∂𝑓 ∂𝑔 ∂ℎ ∂𝑥 ∂𝑦 ∂𝑧
𝑑𝑖𝑣𝐹 = + + .
∂𝑥 ∂𝑦 ∂𝑧 which is an operator.
2. The curl of F, denoted by 𝑐𝑢𝑟𝑙𝐹 , is the vector field defined by 3. The cross product of ∇ and a vector field F is the curl of F, that is, if 𝐹 = 𝑓 𝑖 + 𝑔𝑗 + ℎ𝑘,
� � � � � �
∂ℎ ∂𝑔 ∂𝑓 ∂ℎ ∂𝑔 ∂𝑓 then
𝑐𝑢𝑟𝑙𝐹 = − 𝑖+ − 𝑗+ − 𝑘. � �
∂𝑦 ∂𝑧 ∂𝑧 ∂𝑥 ∂𝑥 ∂𝑦 � �
�𝑖 𝑗 𝑘� � � � � � �
� � ∂ℎ ∂𝑔 ∂𝑓 ∂ℎ ∂𝑔 ∂𝑓
Example 4.5.1. Let 𝐹 (𝑥, 𝑦, 𝑧) = 3𝑥𝑦𝑖 − 2𝑦𝑧𝑗 + 𝑥2 𝑘. Then 𝑓 = 3𝑥𝑦, 𝑔 = −2𝑦𝑧, ℎ = 𝑧 and �∂ ∂ ∂ � = − 𝑖 + − 𝑗 + − 𝑘 = 𝑐𝑟𝑢𝑙𝐹.
� ∂𝑥 ∂𝑦 ∂𝑧 � ∂𝑦 ∂𝑧 ∂𝑧 ∂𝑥 ∂𝑥 ∂𝑦
∂𝑓 ∂𝑓 ∂𝑓 ∂𝑔 ∂𝑔 ∂𝑔 ∂ℎ ∂ℎ ∂ℎ � �
hence ∂𝑥
= 3𝑦, ∂𝑦
= 3𝑥, ∂𝑧
= 0, ∂𝑥
= 0, ∂𝑦
= −2𝑧, ∂𝑧
= −2𝑦, ∂𝑥
= 2𝑥, ∂𝑦
= 0 and ∂𝑧
= 0. �𝑓 𝑔 ℎ �
Therefore

1. div F = 3y − 2z, which is a scalar field on ℝ³.

4. The physical significance of div F at a point P is that it describes the outflow of F per unit volume at P.
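The divergence and curl of the field in Example 4.5.1 can be recomputed symbolically; the curl is not displayed in the surviving text, so the value in the comment is our own computation. The sketch assumes SymPy is available.

```python
# SymPy sketch for Example 4.5.1: F = 3xy i - 2yz j + x^2 k.
import sympy as sp

x, y, z = sp.symbols('x y z')
f, g, h = 3*x*y, -2*y*z, x**2

div_F = f.diff(x) + g.diff(y) + h.diff(z)
curl_F = (h.diff(y) - g.diff(z),
          f.diff(z) - h.diff(x),
          g.diff(x) - f.diff(y))

print(div_F)    # 3*y - 2*z
print(curl_F)   # (2*y, -2*x, -3*x)
```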
Let 𝐺 be a continuous scalar field with continuous first and second partial derivatives. Then
��
∂𝐹 ∂𝐺 ∂𝐺
���� �� grad𝐺 = 𝑖+ 𝑗+ 𝑘
∂𝑥 ∂𝑦 ∂𝑧
�������
� and � �
� �
� � 𝑖 𝑗 𝑘��
∂𝐹 ∂𝐺 ∂𝐺 � �
∇ × (∇𝐺) = ∇ × 𝑖 𝑗+ 𝑘 = �� ∂𝑥 ∂𝑦 ∂𝑧 ��
∂ ∂ ∂
∂𝑥 ∂𝑦 ∂𝑧 � ∂𝐺 ∂𝐺 ∂𝐺 �
� ∂𝑥 ∂𝑦 ∂𝑧 �
� 2 2
� � 2 2
� � 2 �
∂ 𝐺 ∂ 𝐺 ∂ 𝐺 ∂ 𝐺 ∂ 𝐺 ∂ 2𝐺
= − 𝑖+ − 𝑗+ − 𝑘.
∂𝑦∂𝑧 ∂𝑧∂𝑦 ∂𝑧∂𝑥 ∂𝑥∂𝑧 ∂𝑥∂𝑦 ∂𝑦∂𝑥
Figure 4.10: Geometrical Interpretation of curl.
But by assumption the function G is continuous with continuous first and second partial derivatives
Consider a particle moving around a circle. The rate of change the angular position of the particle and hence the mixed partial derivatives are equal, that is,
4.5.1 Potential 2. Let 𝑉 : R𝑛 → R3 , 𝑛 = 2, 3 be given by 𝑉 (𝑝) = (𝑉1 (𝑝), 𝑉2 (𝑝), 𝑉3 (𝑝)). Then V has a
potential function if and only if 𝑐𝑢𝑟𝑉 = ∇ × 𝑉 = 0 = (0, 0, 0).
Recall that, if a scalar field f is differentiable at every point D of its domain, then 𝑉 (𝑃 ) = ∇𝑓 (𝑃 )
defines a vector field V on D. Example 4.5.4. 1. Let 𝑉 (𝑥, 𝑦, 𝑧) = (2 + 𝑦, 𝑥 − 𝑧 2 , −2𝑦𝑧). Then
� �
� �
Example 4.5.2. If 𝑓 (𝑥, 𝑦) = 3𝑥2 + 𝑥𝑦 + 𝑦 3 , then ∇𝑓 (𝑥, 𝑦) = 𝑉 (𝑥, 𝑦) = (6𝑥 + 𝑦, 𝑥 + 3𝑦 2 ) is a � 𝑖 𝑖 𝑘 � � � � �
� � ∂𝑉3 ∂𝑉2 ∂𝑉1 ∂𝑉3 ∂𝑉2 ∂𝑉1
∇ × 𝑉 = �� ∂𝑥
∂ ∂ ∂ � =
∂𝑧 �
− 𝑖 + − 𝑗+( − )𝑘
vector field. Here the function f is called a potential of the vector V. �
∂𝑦
� ∂𝑦 ∂𝑧 ∂𝑧 ∂𝑥 ∂𝑥 ∂𝑦
� 𝑉 1 𝑉 2 𝑉3 �
However, not every vector field has a potential, that is, not every vector field ia a gradient of
= (−2𝑧 − (−2𝑧))𝑖 + (0 − 0)𝑗 + (1 − 1)𝑘 = (0, 0, 0).
some scalar field.
Therefore, V has a potential.
Example 4.5.3. The velocity field
2. Let 𝑉 (𝑥, 𝑦) = (−𝑐𝑦, 𝑐𝑥), 𝑐 ∈ R∖0. Then since
𝑉 (𝑥, 𝑦) = (−𝑐𝑦, 𝑐𝑥), 𝑐 ∈ R∖{0} ∂𝑉1 ∂𝑉2
(𝑥, 𝑦) = −𝑐 and (𝑥, 𝑦) = 𝑐,
∂𝑦 ∂𝑥
is not the gradient of any function 𝑓 .
but −𝑐 ∕= 𝑐, ∀𝑐 ∕= 0 and hence 𝑉 (𝑥, 𝑦) has no potential.
Suppose the contrary, i.e. suppose that there exists a function For the second question we illustrate the procedure by the considering the following two examples.
𝑓 : R2 → R such that ∇𝑓 = 𝑉. 1. Let 𝑉 (𝑥, 𝑦) = (6𝑥 + 𝑦, 𝑥 + 3𝑦 2 ) be given. Then if there is a potential function f for V it
must satisfy
That is, (𝑓𝑥 , 𝑓𝑦 ) = 𝑉. But 𝑓𝑥 (𝑥, 𝑦) = −𝑐𝑦 and 𝑓𝑦 (𝑥, 𝑦) = 𝑐𝑥. This implies 𝑓 (𝑥, 𝑦) = −𝑐𝑥𝑦+𝐴(𝑦), 𝑓𝑥 (𝑥, 𝑦) = 6𝑥 + 𝑦 and 𝑓𝑦 (𝑥, 𝑦) = 𝑥 + 3𝑦 2 .
where A is a function of 𝑦 only and 𝑓𝑦 (𝑥, 𝑦) = −𝑐𝑥+𝐴′ (𝑦) = 𝑐𝑥. This gives us −2𝑐𝑥+𝐴′ (𝑦) = 0
This implies �
and hence 𝐴′ (𝑦) = 2𝑐𝑥. But this is a contradiction since A is a function of 𝑦 only.
𝑓 (𝑥, 𝑦) = (6𝑥 + 𝑦)𝑑𝑥 = 3𝑥2 + 𝑥𝑦 + 𝐴(𝑦),
Let V be a vector field. Let us ask the following two questions. were 𝐴(𝑦) is constant with respect to x(or, it is a function of y only).
Then from 𝑓𝑦 (𝑥, 𝑦) = 𝑥 + 3𝑦 2 , we get 𝑓𝑦 (𝑥, 𝑦) = 𝑥 + 𝐴′ (𝑦) = 𝑥 + 3𝑦 2 , which implies that
1. How do we check whether V has a potential or not?
𝐴′ (𝑦) = 3𝑦 2 and hence
�
2. How do we determine the potential f, given V?
𝐴(𝑦) = 3𝑦 2 𝑑𝑦 = 𝑦 3 + 𝐶, where C is a constant.
Now let us answer the first question. How do we check whether a vector field has a potential or
Therefore the scalar field 𝑓 (𝑥, 𝑦) = 3𝑥2 + 𝑥𝑦 + 𝑦 3 + 𝐶 is the potential of the vector field
not? The following proposition will answer this question.
𝑉 (𝑥, 𝑦) = (6𝑥 + 𝑦, 𝑥 + 3𝑦 2 ).
Proposition 4.5.4. (Test for Existence of a Potential)
2. Let V be a vector field given by 𝑉 (𝑥, 𝑦, 𝑧) = (2 + 𝑦, 𝑥 − 𝑧 2 , −2𝑦𝑧).
1. Let 𝑉 : R2 → R2 be a vector field given by 𝑉 (𝑝) = (𝑉1 (𝑝), 𝑉2 (𝑝)). Then V has a potential
Then if f is a potential, we must have 𝑓𝑥 (𝑥, 𝑦, 𝑧) = 2 + 𝑦, 𝑓𝑦 (𝑥, 𝑦, 𝑧) = 𝑥 − 𝑧 2 and
function if and only if
𝑓𝑧 (𝑥, 𝑦, 𝑧) = −2𝑦𝑧. We integrate 𝑓𝑥 (𝑥, 𝑦, 𝑧) = 2 + 𝑦 with respect to 𝑥 and get
∂𝑉1 ∂𝑉2 �
(𝑝) = (𝑝) for all p in the Domain of V.
∂𝑦 ∂𝑥 𝑓 (𝑥, 𝑦, 𝑧) = (2 + 𝑦)𝑑𝑥 = 2𝑥 + 𝑦𝑥 + 𝐴(𝑦, 𝑧).
∂𝐴
𝑓𝑦 (𝑥, 𝑦, 𝑧) = 𝑥 + (𝑦, 𝑧)
∂𝑦
which implies that
∂𝐴
𝑥+ (𝑦, 𝑧) = 𝑥 − 𝑧 2
∂𝑦
and hence
∂𝐴(𝑦, 𝑧)
= −𝑧 2 .
∂𝑦
We integrate this with respect to 𝑦 to get 𝐴(𝑦, 𝑧) = −𝑧 2 𝑦 + 𝐵(𝑧), where 𝐵 is a function
of 𝑧 only, and hence 𝑓(𝑥, 𝑦, 𝑧) = 2𝑥 + 𝑦𝑥 − 𝑧²𝑦 + 𝐵(𝑧).
Since 𝑓𝑧 (𝑥, 𝑦, 𝑧) = −2𝑦𝑧, we get 𝑓𝑧 (𝑥, 𝑦, 𝑧) = −2𝑧𝑦 + 𝐵 ′ (𝑧) = −2𝑦𝑧 which implies that
𝐵 ′ (𝑧) = 0, which means 𝐵(𝑧) = 𝐶, where C is a constant.
Therefore, 𝑓 (𝑥, 𝑦, 𝑧) = 2𝑥 + 𝑦𝑥 − 𝑧 2 𝑦 + 𝐶.
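The construction of the potential above can be mirrored in a computer algebra system: test curl V = 0, then integrate component by component. The sketch below assumes SymPy is available.

```python
# SymPy sketch re-deriving the potential of V = (2 + y, x - z^2, -2yz) found above.
import sympy as sp

x, y, z = sp.symbols('x y z')
V1, V2, V3 = 2 + y, x - z**2, -2*y*z

# curl V = 0 is the test for the existence of a potential (Proposition 4.5.4).
curl = (V3.diff(y) - V2.diff(z), V1.diff(z) - V3.diff(x), V2.diff(x) - V1.diff(y))
print(curl)                                        # (0, 0, 0)

f = sp.integrate(V1, x)                            # 2x + xy, up to a "constant" A(y, z)
A = sp.integrate(sp.simplify(V2 - f.diff(y)), y)   # -> -y*z**2
f = f + A
print(f, sp.simplify(f.diff(z) - V3) == 0)         # 2*x + x*y - y*z**2, True
```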
4.6 Exercises
2. If the integrand function F is a scalar valued function, the line integral will take the following Question: Which line integrals are independent of paths?
form.
� � 𝑏 � 𝑏 The following theorem has an answer for this question.
√ √
𝑓 (𝑥, 𝑦, 𝑧, )𝑑𝑆 = ′ 2
𝑓 (𝑥(𝑡), 𝑦(𝑡), 𝑧(𝑡)) (𝑟 (𝑡)) .𝑑𝑡 = 𝑓 (𝑥(𝑡), 𝑦(𝑡), 𝑧(𝑡)) 𝑟′ (𝑡).𝑟′ (𝑡)𝑑𝑡
𝐶 𝑎 𝑎 Theorem 5.2.1. Let 𝐹1 , 𝐹2 and 𝐹3 be continuous functions in a set D and let 𝐹 = (𝐹1 , 𝐹2 , 𝐹3 ).
A line integral (5.1) is independent of path in D if and only if 𝐹 = (𝐹1 , 𝐹2 , 𝐹3 ) is the gradient of
3. If the path of integration C is a closed curve, that is, if 𝑟(𝑎) = 𝑟(𝑏), then the line integral
some potential function f in D, i.e., if there exists a function f in D such that 𝐹 = ∇𝑓, which is
will be denoted by � �
equivalent to saying that
instead of .
𝐶 𝐶 ∂𝑓 ∂𝑓 ∂𝑓
𝐹1 = , 𝐹2 = and 𝐹3 = .
Example 5.1.3 (Mass of a Helical wire). Determine the mass M of a wire that is in the shape ∂𝑥 ∂𝑦 ∂𝑧
of a helical curve 𝐶 : 𝑟(𝑡) = (𝑎 cos 𝑡, 𝑎 sin 𝑡, 𝑏𝑡) 0 ≤ 𝑡 ≤ 2𝑛𝜋, 𝑛 ∈ N and that has a mass density
Let F be a conservative vector valued function with potential function 𝑓 and let C be a curve
𝜎 = 𝑐𝑡 that varies along C.
with coordinate function 𝑥 = 𝑥(𝑡), 𝑦 = 𝑦(𝑡), 𝑧 = 𝑧(𝑡) for 𝑎 ≤ 𝑡 ≤ 𝑏. Then
� � � �
∂𝑓 ∂𝑓 ∂𝑓
Solution 𝐹.𝑑𝑟 = 𝑑𝑥 + 𝑑𝑦 + 𝑑𝑧
𝐶 𝐶 ∂𝑥 ∂𝑦 ∂𝑧
Recall that the mass M of the wire is given by � 𝑏� �
∂𝑓 𝑑𝑥 ∂𝑓 𝑑𝑦 ∂𝑓 𝑑𝑧
� = + + 𝑑𝑡
√ 𝑎 ∂𝑥 𝑑𝑡 ∂𝑦 𝑑𝑡 ∂𝑧 𝑑𝑡
𝑀= 𝜎𝑑𝑆, where 𝑑𝑆 = 𝑟′ (𝑡).𝑟′ (𝑡)𝑑𝑡. � 𝑏 � �
𝐶 𝑑
= 𝑓 (𝑥(𝑡), 𝑦(𝑡), 𝑧(𝑡) 𝑑𝑡
But 𝑟′ (𝑡) = (−𝑎 sin 𝑡, 𝑎 cos 𝑡, 𝑏) and 𝑟′ (𝑡).𝑟′ (𝑡) = 𝑎2 sin2 𝑡 + 𝑎2 cos2 𝑡 + 𝑏2 = 𝑎2 + 𝑏2 . Therefore, 𝑎 𝑑𝑡
where C is a curve with initial point 𝐴 = (0, 1, 0) and terminal point 𝐵 = (−2, 1, 0).
2. The integral �
(𝑒𝑧 𝑑𝑥 + 2𝑦𝑑𝑦 + 𝑥𝑒𝑧 𝑑𝑧), But But
𝐶 � �
� �
if C is a curve with with initial point 𝐴 = (0, 1, 0) and terminal point 𝐵 = (−2, 1, 0). � 𝑖 𝑖 𝑘 � � � � �
� �
∇ × 𝐹 = �� ∂𝑥
∂ ∂ ∂ � = ∂𝐹3 − ∂𝐹2 𝑖 + ∂𝐹1 − ∂𝐹3 𝑗 + ( ∂𝐹2 − ∂𝐹1 )𝑘 = 0.
∂𝑦 ∂𝑧 � ∂𝑦 ∂𝑧 ∂𝑧 ∂𝑥 ∂𝑥 ∂𝑦
� �
� 𝐹1 𝐹 2 𝐹3 �
Solution
1. Let 𝐹1 = 2𝑥, 𝐹2 = 2𝑦 and 𝐹3 = 4𝑧. Then 𝐹 = (𝐹1 , 𝐹2 , 𝐹3 ) and Then there exists a function 𝑓 such that 𝐹 = ∇𝑓 which implies
� � � ∂𝑓 ∂𝑓 ∂𝑓
𝐹1 = , 𝐹2 = and 𝐹3 =
𝐹 (𝑟).𝑑𝑟 = (𝐹1 𝑑𝑥 + 𝐹2 𝑑𝑦 + 𝐹3 𝑑𝑧) = (2𝑥𝑑𝑥 + 2𝑦𝑑𝑦 + 4𝑧𝑑𝑧), ∂𝑥 ∂𝑦 ∂𝑧
𝐶 𝐶 𝐶
and hence by Fundamental Theorem of Line Integrals, we have
where C is a curve with initial point 𝐴 = (0, 0, 0) and terminal point 𝐵 = (2, 2, 2).
� �
But (2𝑥𝑑𝑥 + 2𝑦𝑑𝑦 + 4𝑧𝑑𝑧) = (𝐹1 𝑑𝑥 + 𝐹2 𝑑𝑦 + 𝐹3 𝑑𝑧) = 𝑓 (𝐵) − 𝑓 (𝐴).
� � 𝐶 𝐶
� �
� 𝑖 𝑖 𝑘 � � � � �
� � Now, by using the same procedure as in Section ?? of the previous chapter, we can get
∇ × 𝐹 = �� ∂𝑥
∂ ∂ ∂ � = ∂𝐹3 − ∂𝐹2 𝑖 + ∂𝐹1 − ∂𝐹3 𝑗 + ( ∂𝐹2 − ∂𝐹1 )𝑘 = 0.
∂𝑦 ∂𝑧 � ∂𝑦 ∂𝑧 ∂𝑧 ∂𝑥 ∂𝑥 ∂𝑦
� � 𝑓 (𝑥, 𝑦, 𝑧) = 𝑥𝑒𝑧 + 𝑦 2 + 𝑘, where 𝑘 is a constant.
� 𝐹1 𝐹 2 𝐹3 �
2 2 2
𝑓 (𝑥, 𝑦, 𝑧) = 𝑥 + 𝑦 + 2𝑧 + 𝑘, where 𝑘 is a constant.
Therefore,

∫_C (2x dx + 2y dy + 4z dz) = f(2, 2, 2) − f(0, 0, 0) = 16.

[Figure: two paths C₁ and C₂ joining the points A and B, illustrating independence of path.]
2. � 𝐵 � �
𝐹1 𝑑𝑥 + 𝐹2 𝑑𝑦 + 𝐹3 𝑑𝑧
2𝑥𝑦𝑧 2 𝑑𝑥 + (𝑥2 𝑥2 + 𝑧 cos 𝑦𝑧)𝑑𝑦 + (2𝑥2 𝑦𝑧 + 𝑦 cos 𝑦𝑧)𝑑𝑧 ,
𝐴
is said to be exact in a domain D if there is a differentiable function f such that where 𝐴 = (0, 0, 1, ) and 𝐵 = (1, 𝜋4 , 2).
∂𝑓 ∂𝑓 ∂𝑓
𝐹1 = , 𝐹2 = and 𝐹3 = in D.
∂𝑥 ∂𝑦 ∂𝑧 Solution
Definition 5.2.4. A domain D is said to be simply connected if every closed curve in D can be
shrunk to any point in D. 1. Let 𝐹1 = 𝑒𝑥 cos 𝑦 and 𝐹2 = −𝑒𝑥 sin 𝑦. Then
∂𝐹1 ∂𝐹2
= −𝑒𝑥 sin 𝑦 =
∂𝑦 ∂𝑥
and hence the differential in the integral is exact.
Then let us find the function f.
�
MULTIPLY CONNECTED SIMPLY CONNECTED
𝑓 (𝑥, 𝑦) = 𝐹2 𝑑𝑦 = 𝑒𝑥 cos 𝑦 + 𝐴(𝑥)
Figure 5.3: Multiply and Simply connected regions. and 𝑓𝑥 = 𝑒𝑥 cos 𝑦 + 𝐴𝑥 = 𝑒𝑥 cos 𝑦 = 𝐹1 , which implies that 𝐴 = 𝐶, a constant.
Therefore, the potential function is 𝑓 (𝑥, 𝑦) = 𝑒𝑥 cos 𝑦 + 𝐶 and hence
� (3, 𝜋 )
Theorem 5.2.5. Suppose 𝐹1 , 𝐹2 and 𝐹3 are continuous and having continuous first order partial 2 𝜋
𝑒𝑥 (cos 𝑦𝑑𝑥 − sin 𝑦𝑑𝑦) = 𝑓 (3, ) − 𝑓 (0, 𝜋) = 0 + 𝜋 = 𝜋.
(0,𝜋) 2
derivatives in a domain D and consider the line integral
� �
𝐹 (𝑟).𝑑𝑟 = (𝐹1 𝑑𝑥 + 𝐹2 𝑑𝑦 + 𝐹3 𝑑𝑧) (5.2) 2. Let 𝐹1 = 2𝑥𝑦𝑧 2 , 𝐹2 = 𝑥2 𝑧 2 + 𝑧 cos 𝑦𝑧 and 𝐹3 = 2𝑥2 𝑦𝑧 + 𝑦 cos 𝑦𝑧. Then since we have
𝑐 𝑐 (𝐹3 )𝑦 = 2𝑥2 𝑧 + cos 𝑦𝑧 sin 𝑦𝑧 = (𝐹2 )𝑧 , (𝐹1 )𝑧 = 4𝑥𝑦𝑧 = (𝐹3 )𝑥 and (𝐹2 )𝑥 = 2𝑥𝑧 2 = (𝐹1 )𝑦 ,
1. If the line integral (5.2) is independent of path in D, then 𝐶𝑢𝑟𝑙𝐹 = 0. i.e. the differential in the integral is exact.
Then let us find the function f.
∂𝐹3 ∂𝐹2 ∂𝐹1 ∂𝐹3 ∂𝐹2 ∂𝐹1 � �
= , = and = .
∂𝑦 ∂𝑧 ∂𝑧 ∂𝑥 ∂𝑥 ∂𝑦 𝑓 (𝑥, 𝑦, 𝑧) = 𝐹2 𝑑𝑦 = (𝑥2 𝑧 2 + 𝑧 cos 𝑦𝑧)𝑑𝑦 = 𝑥2 𝑧 2 𝑦 + 𝑧 sin 𝑦𝑧 + 𝐴(𝑥, 𝑧)
2. If 𝐶𝑢𝑟𝑙𝐹 = 0 in D and if D is simply connected then the line integral (5.2) is independent
and 𝑓𝑥 = 2𝑥𝑧 2 𝑦 + 𝐴𝑥 (𝑥, 𝑧) = 2𝑥𝑦𝑧 2 = 𝐹1 , which implies 𝐴𝑥 = 0 and hence 𝐴 = 𝐵(𝑧).
of path in D.
Therefore 𝑓 (𝑥, 𝑦, 𝑧) = 𝑥2 𝑧 2 𝑦 +sin 𝑦𝑧 +𝐵(𝑥) which means 𝑓𝑧 = 2𝑥2 𝑦𝑧 +𝑦 cos 𝑦𝑧 +𝐵 ′ (𝑧) =
2𝑥2 𝑦𝑧 + 𝑦 cos 𝑦𝑧 that implies 𝐵 ′ (𝑧) = 0 and hence 𝐵 = 𝐶, a constant. Write 𝐶 = 𝐶1 ⊕ 𝐶2 , where 𝐶1 is the portion of the parabola and 𝐶2 is the line segment.
2 2
Therefore the potential function is 𝑓 (𝑥, 𝑦, 𝑧) = 𝑥 𝑧 𝑦 + sin 𝑦 + 𝐶 and hence Parameterize 𝐶1 by 𝑥 = 𝑡, 𝑦 = 𝑡2 for 0 ≤ 𝑡 ≤ 2 and on 𝐶1 , 𝑑𝑥 = 𝑑𝑡, 𝑑𝑦 = 2𝑡𝑑𝑡. Therefore,
� � � 2 �2
𝐵
𝑡5 𝑡4 � 32 72
(2𝑥𝑦𝑧 2 𝑑𝑥 + (𝑥2 𝑧 2 + 𝑧 cos 𝑦𝑧)𝑑𝑦 + (2𝑥2 𝑦𝑧 + 𝑦 cos 𝑦𝑧)𝑑𝑧) (𝑦 2 𝑑𝑥 + 𝑥2 𝑑𝑦) = (𝑡4 + 2𝑡3 )𝑑𝑡 = ( + )�� = +8= .
𝐴 𝐶1 0 5 2 0 5 5
𝜋 𝜋 Parameterize 𝐶2 by 𝑥 = 𝑡, 𝑦 = 2 for 2 ≤ 𝑡 ≤ 4 and on 𝐶2 , 𝑑𝑥 = 𝑑𝑡, 𝑑𝑦 = 0. Therefore,
= 𝑓 (𝐵) − 𝑓 (𝐴) = 1. .22 + (sin( × 2) − 0 + sin 0
4 4 �4
� � 4
= 𝜋 + 1 − 0 = 1 + 𝜋. �
(𝑦 2 𝑑𝑥 + 𝑥2 𝑑𝑦) = (4 + 0)𝑑𝑡 = 4𝑡�� = 8.
𝐶2 2 2
Example 5.2.3. Let C be a curve consisting of portion of a parabola 𝑦 = 𝑥2 in the Example 5.3.1. 1. Use Green’s Theorem to evaluate
𝑥𝑦−plane from (0, 0) to (2, 4) and a horizontal line from (2, 4) to (4, 4). Evaluate �
� (𝑥2 𝑦𝑑𝑥 + 𝑥𝑑𝑦)
𝐶
(𝑦 2 𝑑𝑥 + 𝑥2 𝑑𝑦).
𝐶 over the triangular path in the figure.
Solution
Y
But since F is not conservative (verify) W need not be zero,though C is a simple closed
curve. If we parameterize the circle by 𝑥 = cos 𝑡, 𝑦 = sin 𝑡 on 0 ≤ 𝑡 ≤ 2𝜋, the integral will
X
be complicated and difficult to solve.
However if we use Green’s Theorem we have:
Figure 5.5: Two boundaries of a closed region �
𝑊 = (𝑒𝑥 − 𝑦 3 )𝑑𝑥 + (cos 𝑦 + 𝑥3 )𝑑𝑦
Y 𝐶 ��
2 2
(1, 2) = [ (cos 𝑦 + 𝑥3 ) − (𝑒𝑥 − 𝑦 3 )]𝑑𝐴.
𝑅 2𝑥 2𝑦
C3
C2
�� ��
= (3𝑥2 + 3𝑦 2 )𝑑𝐴 = 3 (𝑥2 + 𝑦 2 )𝑑𝐴.
C1 1 X
𝑅 𝑅
� 2𝜋 � 1
=3 𝑟2 𝑟𝑑𝑟𝑑𝜃 (using polar coordinates)
0 0
Figure 5.6: Triangular path �
3 2𝜋 3𝜋
= 𝑑𝜃 = .
4 0 2
Solution
Here R is region bounded by a unit circle.
The curves are parameterized as: 𝐶1 : 𝑟(𝑡) = (𝑡, 0) for 0 ≤ 𝑡 ≤ 1, 𝐶2 : 𝑟(𝑡) = (1, 2𝑡) for
Remark 5.3.2. Green’s Theorem can be used to find areas of a plane region.
0 ≤ 𝑡 ≤ 1 and 𝐶3 : 𝑟(𝑡) = (1 − 𝑡, 2 − 2𝑡) for 0 ≤ 𝑡 ≤ 𝑡.
Let R be a plane region with boundary C.
Since 𝐹1 = 𝑥2 𝑦 and 𝐹2 = 𝑥, we have from Green’s Theorem that:
� �� � � � 1 � 2𝑥
∂ ∂ 2 1. Area of the region R in cartesian coordinates.
(𝑥2 𝑦𝑑𝑥 + 𝑥𝑑𝑦) = (𝑥) − (𝑥 𝑦) 𝑑𝐴 = (1 − 𝑥2 )𝑑𝑦𝑑𝑥
𝐶 𝑅 ∂𝑥 ∂𝑦 0 0 First choose 𝐹1 = 0 and 𝐹2 = 𝑥. Then, as in 5.3.1 above,
� � ��1 �� �
1
𝑥4 �� 1 𝑑𝑥𝑑𝑦 = 𝑥𝑑𝑦
= (1 − 𝑥2 )(2𝑥)𝑑𝑥 = 𝑥2 − = .
0 2 �0 2 𝑅 𝐶
2. Find the work done by the force field 𝐹 (𝑥, 𝑦) = (𝑒𝑥 − 𝑦 3 )𝑖 + (cos 𝑦 + 𝑥3 )𝑗 on a particle and then choose 𝐹1 = −𝑦 and 𝐹2 = 0 to get
2 2
� � �
that travels once around the unit circle 𝑥 + 𝑦 = 1 in the counterclockwise direction.
𝑑𝑥𝑑𝑦 = − 𝑦𝑥𝑑𝑦
𝑅 𝐶
Therefore, the area A(R) of the region bounded by the curve C is given by:

A(R) = ∬_R dx dy = (1/2) ∮_C (x dy − y dx).    (5.4)

For example, to find the area of the region bounded by the ellipse

x²/a² + y²/b² = 1,

we write x = a cos t, y = b sin t, 0 ≤ t ≤ 2π. Then x′ = −a sin t, y′ = b cos t, and the area of the region bounded by the ellipse is:

A(R) = (1/2) ∮ (x dy − y dx) = (1/2) ∫₀^(2π) (x y′ − y x′) dt
     = (1/2) ∫₀^(2π) [(a cos t)(b cos t) − (b sin t)(−a sin t)] dt
     = (1/2) ∫₀^(2π) (ab cos²t + ab sin²t) dt
     = (1/2) ∫₀^(2π) ab dt = (ab/2) · 2π = πab.

Therefore, the area A(R) of the region bounded by the curve C is given in polar form by:

A(R) = (1/2) ∮_C r² dθ.

For example, to find the area of the region bounded by the cardioid r = a(1 − cos θ), where 0 ≤ θ ≤ 2π and a is a positive constant:

[Figure 5.7: A cardioid r = a(1 − cos θ), where 0 ≤ θ ≤ 2π and a is a positive constant.]

A(R) = (1/2) ∫₀^(2π) r² dθ = (a²/2) ∫₀^(2π) (1 − cos θ)² dθ = (a²/2) ∫₀^(2π) (1 − 2 cos θ + cos²θ) dθ
     = (a²/2) ∫₀^(2π) dθ − a² ∫₀^(2π) cos θ dθ + (a²/4) ∫₀^(2π) (cos 2θ + 1) dθ
     = a²π + 0 + (a²/2)π = 3a²π/2.

Therefore A(R) = 3a²π/2.

[Figure 5.8: Circles centered on the x-axis.]
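Both area formulas can be checked symbolically; the sketch below assumes SymPy is available and simply evaluates the parameter integrals for the ellipse and the cardioid.

```python
# SymPy check of the two area formulas above: pi*a*b for the ellipse, 3*pi*a^2/2 for the cardioid.
import sympy as sp

t, a, b = sp.symbols('t a b', positive=True)

# Ellipse x = a cos t, y = b sin t, using A = (1/2) * closed integral of (x dy - y dx).
x, y = a*sp.cos(t), b*sp.sin(t)
A_ellipse = sp.Rational(1, 2)*sp.integrate(x*y.diff(t) - y*x.diff(t), (t, 0, 2*sp.pi))
print(sp.simplify(A_ellipse))     # pi*a*b

# Cardioid r = a(1 - cos t), using A = (1/2) * closed integral of r^2 dtheta.
r = a*(1 - sp.cos(t))
A_cardioid = sp.Rational(1, 2)*sp.integrate(r**2, (t, 0, 2*sp.pi))
print(sp.simplify(A_cardioid))    # 3*pi*a**2/2
```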
The equation of a circle that is centered on the 𝑦 − 𝑎𝑥𝑖𝑠 and passes through the origin has
an equation of the form 𝑟 = 2𝑎 sin 𝜃 or 𝑟 = −2𝑎 sin 𝜃.
[Figure 5.9: Circles centered on the y-axis.]

[Figure 5.10: Cardioid and limaçons.]

[Figure 5.12: Dividing a region into simply connected regions.]
3. Rose Curves. Equations of the form First divide R into simply connected regions 𝑅1 and 𝑅2 . Then
�� � � �� � � �� � �
𝑟 = 𝑎 sin 𝑛𝜃 and 𝑟 = 𝑎 cos 𝑛𝜃 ∂𝐹2 ∂𝐹1 ∂𝐹2 ∂𝐹1 ∂𝐹2 ∂𝐹1
− 𝑑𝐴 = − 𝑑𝐴 + − 𝑑𝐴.
𝑅 ∂𝑥 ∂𝑦 𝑅1 ∂𝑥 ∂𝑦 𝑅2 ∂𝑥 ∂𝑦
represents a flower shaped curves called roses. � �
When we graph 𝑟 verses 𝜃 in the cartesian (𝑟, 𝜃) plane, we ignore the points where 𝑟 is = (𝐹1 𝑑𝑥 + 𝐹2 𝑑𝑦) + (𝐹1 𝑑𝑥 + 𝐹2 𝑑𝑦),
𝐶1 𝐶2
imaginary but plot positive and negative parts from the points where 𝑟2 is positive.
where the curves 𝐶1 and 𝐶2 are the boundaries of the regions 𝑅1 and 𝑅2 respectively. The and hence � � � � � �
−𝑦𝑑𝑥 + 𝑥𝑑𝑦 −𝑦𝑑𝑥 + 𝑥𝑑𝑦
orientation of the curves should be in such a way that when traveling along the curves the region = .
𝐶 𝑥2 + 𝑦 2 𝐶𝑎 𝑥2 + 𝑦 2
should be to the left.
Now let 𝑥 = 𝑎 cos 𝑡 and 𝑦 = 𝑎 sin 𝑡 for 0 ≤ 𝑡 ≤ 2𝜋 on 𝐶𝑎 implies 𝑑𝑥 = −𝑎 sin 𝑡𝑑𝑡 and
Example 5.3.2. Evaluate the integral 𝑑𝑦 = 𝑎 cos 𝑡𝑑𝑡.
� � � � � 2𝜋
−𝑦𝑑𝑥 + 𝑥𝑑𝑦 −𝑦𝑑𝑥 + 𝑥𝑑𝑦 (−𝑎 sin 𝑡)(−𝑎 sin 𝑡)𝑑𝑡 + (𝑎 cos 𝑡)(𝑎 cos 𝑡)𝑑𝑡
=
𝐶 𝑥2 + 𝑦 2 𝐶 𝑥2 + 𝑦 2 0 (𝑎 cos 𝑡)2 + (𝑎 sin 𝑡)2
� 2𝜋 2 2 � 2𝜋
(𝑎 sin 𝑡 + 𝑎2 cos2 𝑡)𝑑𝑡
if C is a piecewise smooth simply closed curve oriented counterclockwise such that C incloses the = 𝑑𝑡 = 2𝜋
0 (𝑎2 cos2 𝑡 + 𝑎2 sin2 𝑡) 0
origin. Consider Figure 5.3.2.
for any small radius 𝑎 and hence
� � � �
−𝑦𝑑𝑥 + 𝑥𝑑𝑦
= 2𝜋.
� 𝐶 𝑥2 + 𝑦 2
This implies � � � � � �
−𝑦𝑑𝑥 + 𝑥𝑑𝑦 −𝑦𝑑𝑥 + 𝑥𝑑𝑦 2. The parametric representation of a sphere
+ =0
𝐶 𝑥2 + 𝑦 2 −𝐶𝑎 𝑥2 + 𝑦 2
𝑥2 + 𝑦 2 + 𝑧 2 = 𝑎 2 is 𝑟(𝑢, 𝑣) = 𝑎 cos 𝑣 cos 𝑢𝑖 + 𝑎 cos 𝑣 sin 𝑢𝑗 + 𝑎 sin 𝑣𝑘
3. The parametric representation of a cone is a tangent vector to the curve Ω𝑣 with coordinate functions
√
𝑧= 𝑥2 + 𝑦 2 , 0 ≤ 𝑧 ≤ 𝑇 is 𝑟(𝑢, 𝑣) = 𝑢 cos 𝑣𝑖 + 𝑢 sin 𝑣𝑗 + 𝑢𝑘 𝑥(𝑢0 , 𝑣), 𝑦(𝑢0 , 𝑣), 𝑧(𝑢0 , 𝑣).
where 0 ≤ 𝑢 ≤ 𝑇 and 0 ≤ 𝑣 ≤ 2𝜋. Assume that these two vectors are not zero. Then these two vectors lie in a plane tangent to the
surface Ω at the point 𝑃0 and hence the vectors
For a surface we write a position vector as
𝑁 (𝑃0 ) = 𝑇𝑢0 × 𝑇𝑣0
𝑟(𝑢, 𝑣) = 𝑥(𝑢, 𝑣)𝑖 + 𝑦(𝑢, 𝑣)𝑗 + 𝑧(𝑢, 𝑣)𝑘
is a normal vector to the tangent plane and hence the surface to at the point 𝑃0 . But
and 𝑟(𝑢, 𝑣) can be considered as a vector in R3 with initial point the origin and terminal point � �
( ) � �
𝑥(𝑢, 𝑣), 𝑦(𝑢, 𝑣), 𝑧(𝑢, 𝑣) which is on the surface. � 𝑖 𝑗 𝑘 �
� �
𝑇𝑢0 × 𝑇𝑣0 = �� ∂𝑢
∂𝑥 ∂𝑦
(𝑢0 , 𝑣0 ) ∂𝑢 ∂𝑧
(𝑢0 , 𝑣0 ) ∂𝑢 (𝑢0 , 𝑣0 )��
� ∂𝑥 �
A surface with parametrization r is simple if it does not fold over and intersect itself. This means � ∂𝑣 (𝑢0 , 𝑣0 ) ∂𝑦
∂𝑣
∂𝑧
(𝑢0 , 𝑣0 ) ∂𝑣 (𝑢0 , 𝑣0 )�
𝑟(𝑢1 , 𝑣1 ) = 𝑟(𝑢2 , 𝑣2 ) can occur only when 𝑢1 = 𝑢2 and 𝑣1 = 𝑣2 . � � � � � �
∂𝑦 ∂𝑧 ∂𝑧 ∂𝑦 ∂𝑧 ∂𝑥 ∂𝑥 ∂𝑧 ∂𝑥 ∂𝑦 ∂𝑦 ∂𝑥
= − 𝑖+ − 𝑗+ − 𝑘,
∂𝑢 ∂𝑣 ∂𝑢 ∂𝑣 ∂𝑢 ∂𝑣 ∂𝑢 ∂𝑣 ∂𝑢 ∂𝑣 ∂𝑢 ∂𝑣
5.4.1 Normal Vector and Tangent plane to a Surface in which all the partial derivatives are evaluated at (𝑢0 , 𝑣0 ).
From the previous courses, recall that, the Jacobian of two functions 𝑓 and 𝑔 is defined to be
Recall that: if C is a curve with coordinate functions 𝑥(𝑡), 𝑦(𝑡), 𝑧(𝑡), then � �
∂(𝑓, 𝑔) �� ∂𝑓 ∂𝑢
∂𝑓 �
∂𝑣 � ∂𝑓 ∂𝑔 ∂𝑔 ∂𝑓
=� �= − .
𝑇 = 𝑥′ (𝑡0 ) + 𝑦 ′ (𝑡0 )𝑗 + 𝑧 ′ (𝑡0 )𝑘 ∂(𝑢, 𝑣) � ∂𝑔 ∂𝑔 � ∂𝑢 ∂𝑣 ∂𝑢 ∂𝑣
∂𝑢 ∂𝑣
( )
is a vector that is tangent to the curve at a point 𝑃0 = 𝑥(𝑡0 ), 𝑦(𝑡0 ), 𝑧(𝑡0 ) . Then the normal vector
∂𝑦 ∂𝑧 ∂𝑧 ∂𝑥 ∂𝑥 ∂𝑦
𝑁 (𝑃0 ) = 𝑗+ 𝑗+ 𝑘,
Let Ω be a surface in R3 with coordinate functions 𝑥(𝑢, 𝑣), 𝑦(𝑢, 𝑣), 𝑧(𝑢, 𝑣) and let 𝑃0 be the ∂𝑢 ∂𝑣 ∂𝑢 ∂𝑣 ∂𝑢 ∂𝑣
( )
point 𝑥(𝑢0 , 𝑣0 ), 𝑦(𝑢0 , 𝑣0 ), 𝑧(𝑢0 , 𝑣0 ) on the surface Ω. We want to find a normal vector N to where the partial derivatives are evaluated at (𝑢0 , 𝑣0 ).
the surface at 𝑃0 .
For an arbitrary point (𝑢, 𝑣) on the surface, the normal line to the tangent plane is given by
Let Ω𝑢 be the curve with coordinate functions 𝑁 = 𝑟𝑢 × 𝑟𝑣 and we denote the corresponding unit vector in the direction of N by n and it is
Solution 2. A cube is a piecewise smooth since all the six faces are smooth, but the eight sides do not
have tangents.
Here 𝑥(𝑢, 𝑣) = 𝑢, 𝑦(𝑢, 𝑣) = 𝑢 + 𝑣 and 𝑧(𝑢, 𝑣) = 𝑢 + 𝑣 2 . First let us find the normal vector
𝑁 (2, 4, 6) to the plane tangent to the surface at the given point which is Now we are in a position to define the surface integral of a vector field over a piecewise smooth
∂𝑦 ∂𝑧 ∂𝑧 ∂𝑥 ∂𝑥 ∂𝑦 surface.
𝑁 (2, 4, 6) = 𝑖+ 𝑗+ 𝑘 = 8𝑖 + 𝑘.
∂𝑢 ∂𝑣 ∂𝑢 ∂𝑣 ∂𝑢 ∂𝑣
Definition 5.4.3. Suppose S is a smooth surface parameterized by r(𝑢, 𝑣) with normal vector
Therefore, equation of the plane Π tangent to the given surface at the point (2, 4, 6) is
𝑁 (𝑢, 𝑣) = 𝑟𝑢 × 𝑟𝑣 . Let F be a continuous function on S. Then the surface integral of F over S
Π : 8𝑥 + 𝑧 = 22. is denoted by ��
𝐹 (𝑥, 𝑦, 𝑧)𝑑𝜎
Remark 5.4.1. If the surface Ω is a surface represented by the equation 𝑔(𝑥, 𝑦, 𝑧) = 0, then the 𝑆
Suppose Ω represents a surface in R3 with equation 𝑧 = 𝑔(𝑥, 𝑦) and let R be its projection on where S is the portion of the cylinder 𝑥2 + 𝑦 2 = 3 between the planes 𝑧 = 0 and 𝑧 = 6.
the 𝑥𝑦−plane. If g has continuous first partial derivatives on R, then the surface area of Ω is
��� � � �2 � Solution
�� 2
∂𝑔 ∂𝑧
Area of Ω = + + 1 𝑑𝐴.
𝑅 ∂𝑥 ∂𝑦 The parametrization of the cylinder is
But the integrant is the norm of the normal vector 𝑁 (𝑥, 𝑦) to the surface, that is, √ √
𝑟(𝑢, 𝑣) = 3 cos 𝑢𝑖 + 3 sin 𝑢𝑗 + 𝑣𝑘,
��
Area of Ω = ∣∣𝑁 (𝑥, 𝑦)∣∣𝑑𝐴.
𝑅 for 0 ≤ 𝑢 ≤ 2𝜋 and 0 ≤ 𝑣 ≤ 6.
√ √
Definition 5.4.2. A surface S is called a smooth surface if the unit normal vector n is continuous Then 𝑟𝑢 = − 3 sin 𝑢𝑖 + 3 cos 𝑢𝑗 and 𝑟𝑣 = 𝑘 and
� �
on S and surface S is called piecewise smooth if it consists of finitely many smooth portions. � �
� 𝑖 𝑗 𝑘�
� √ √ � √ √
Example 5.4.4. Examples of smooth and piecewise smooth surfaces. 𝑟𝑢 × 𝑟𝑣 = ��− 3 sin 𝑢 3 cos 𝑢 0�� = 3 cos 𝑢𝑖 + 3 sin 𝑢𝑗
� �
� 0 0 1�
1. A sphere is a smooth surface, since at any point on the sphere, there is a continuous tangent
normal.
√
which implies ∥𝑟𝑢 × 𝑟𝑣 ∥ = 3. We denote the corresponding unit vector in the direction of N by n,
Therefore, 1 1 ( )
�� �� √ 𝑛= 𝑁= 𝑟𝑢 × 𝑟𝑣 .
∥𝑁 ∥ ∥𝑟𝑢 × 𝑟𝑣 ∥
(𝑥 + 𝑦)𝑑𝜎 = ( 3(cos 𝑢 + sin 𝑢))∥𝑟𝑢 × 𝑟𝑣 ∥𝑑𝐴
𝑆 If a surface S is represented by a the equation 𝑔(𝑥, 𝑦, 𝑧) = 0, then
�𝑅2𝜋 � 6
1
= 3 (cos 𝑢 + sin 𝑢)𝑑𝑣𝑑𝑢 𝑛= ∇𝑔.
0 0 ∥∇𝑔∥
� 2𝜋
= 18 (cos 𝑢 + sin 𝑢)𝑑𝑢 Example 5.4.6. Let S be the portion of the surface 𝑧 = 1 − 𝑥2 − 𝑦 2 that lie above the 𝑥𝑦-plane,
0
� �2𝜋 and suppose that S is oriented upward(i.e. n is in the upward direction at all points of S).
= 18 sin 𝑢 − cos 𝑢 = 0. Find the flux Φ of the flow field 𝐹 (𝑥, 𝑦, 𝑧) = (𝑥, 𝑦, 𝑧) across S.
0
∫∫
Hence, (𝑥 + 𝑦)𝑑𝜎 = 0.
𝑆 Solution
Similarly, for surface integrals we parameterize the surface S. Since surfaces are two-dimensional,
S can be represented as

        Υ(u, v) = x(u, v) i + y(u, v) j + z(u, v) k,    (u, v) ∈ R,

where R is some region in the uv-plane. A normal vector N of a surface S whose parametric form is

        r(u, v) = x(u, v) i + y(u, v) j + z(u, v) k

at the point P is N = r_u × r_v.

Surface Area

If Ω is a piecewise smooth surface, then the area of the surface Ω is given by

        Area of Ω = ∬_Ω dA.

But ‖N‖ = ‖r_u × r_v‖ represents the area of a parallelogram with adjacent side vectors r_u and
r_v. Therefore, we can write dA as dA = ‖r_u × r_v‖ du dv. Hence

        Area of Ω = ∬_Ω dA = ∬_R ‖r_u × r_v‖ du dv.

Example 5.4.7. Find the area of the surface of the torus shown in Figure 5.14.

Figure 5.14: A torus

Here γ(u, v) = (a + b cos v) cos u i + (a + b cos v) sin u j + b sin v k.
Thus r_u = −(a + b cos v) sin u i + (a + b cos v) cos u j + 0 k,
r_v = −b sin v cos u i − b sin v sin u j + b cos v k, and hence

        r_u × r_v = b(a + b cos v)(cos u cos v i + sin u cos v j + sin v k),

which implies ‖r_u × r_v‖ = b(a + b cos v). Hence

        A(S) = ∬_R ‖r_u × r_v‖ du dv = ∫₀^{2π} ∫₀^{2π} b(a + b cos v) du dv
             = ab ∫₀^{2π} ∫₀^{2π} du dv + b² ∫₀^{2π} ∫₀^{2π} cos v du dv
             = 4π² ab + 2π b² ∫₀^{2π} cos v dv
             = 4π² ab.

Mass and Center of Mass of a Shell

Consider a shell of negligible thickness in the shape of a piecewise smooth surface Ω. Let δ(x, y, z)
be the density of the material of the shell at the point (x, y, z).
Let x(u, v), y(u, v) and z(u, v) be the coordinate functions of Ω for (u, v) ∈ R, where R is the
projection of the surface in the xy-plane. Then the mass of Ω is given by

        Mass of Ω = ∬_Ω δ(x, y, z) dσ

and the center of mass of the shell is (x̄, ȳ, z̄), where

        x̄ = (1/m) ∬_Ω x δ(x, y, z) dσ,   ȳ = (1/m) ∬_Ω y δ(x, y, z) dσ   and   z̄ = (1/m) ∬_Ω z δ(x, y, z) dσ,

where m is the mass of the shell. If the surface is given by z = f(x, y) for (x, y) ∈ R, then the mass
is given by

        m = ∬_R δ(x, y, z) √( 1 + (∂f/∂x)² + (∂f/∂y)² ) dy dx.

Example 5.4.8. Find the center of mass of the sphere Ω, x² + y² + z² = a², in the first octant,
if it has constant density μ₀.
Then

        Φ = ∬_S F · n dA = ∬_R (2x² + 2y² + z) dA.

But since z = 1 − x² − y², we have

        Φ = ∬_R (x² + y² + 1) dA = ∫₀^{2π} ∫₀^{1} (γ² + 1) γ dγ dθ = ∫₀^{2π} (3/4) dθ = 3π/2.

Figure 5.15: A sphere of radius a in the first octant.

Solution

The mass m of the sphere is

        m = ∬_Ω μ₀ dσ.
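Before continuing with the center-of-mass computation, the flux Φ = 3π/2 found in Example 5.4.6 can be sanity-checked numerically. The following Python sketch (not part of the original notes; the grid resolution is an arbitrary choice) evaluates the polar double integral of (r² + 1)·r over the unit disk with a midpoint rule.

```python
import numpy as np

# Numerical check of Example 5.4.6: the flux reduces to the polar integral
# of (r^2 + 1) * r over 0 <= r <= 1, 0 <= theta <= 2*pi.
n = 2000
r = (np.arange(n) + 0.5) / n                  # radial midpoints in [0, 1]
theta = 2.0 * np.pi * (np.arange(n) + 0.5) / n
dr, dtheta = 1.0 / n, 2.0 * np.pi / n
R, T = np.meshgrid(r, theta)

flux = np.sum((R**2 + 1.0) * R) * dr * dtheta
print(flux, 3 * np.pi / 2)                    # both ~ 4.7124
```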
In spherical coordinates we know that the equation of a sphere of radius a is given by ρ = a.
Now we change the cartesian coordinates into spherical coordinates to get

        x = a cos θ sin φ,   y = a sin θ sin φ   and   z = a cos φ

for 0 ≤ θ ≤ π/2 and 0 ≤ φ ≤ π/2. Therefore, the parametrization of the sphere in the first
octant is r(θ, φ) = a cos θ sin φ i + a sin θ sin φ j + a cos φ k. This implies

        r_θ × r_φ = −a² sin²φ cos θ i − a² sin²φ sin θ j − a² sin φ cos φ k

and ‖r_θ × r_φ‖ = a² sin φ. Therefore

        m = ∬_Ω μ₀ dσ = ∫₀^{π/2} ∫₀^{π/2} μ₀ a² sin φ dθ dφ = μ₀ a² π / 2.

Then let us find the coordinates of the center of mass, which are given by

        x̄ = (1/m) ∬_Ω x μ₀ dσ,   ȳ = (1/m) ∬_Ω y μ₀ dσ   and   z̄ = (1/m) ∬_Ω z μ₀ dσ.

Hence

        x̄ = (1/m) ∫₀^{π/2} ∫₀^{π/2} μ₀ a³ cos θ sin²φ dθ dφ = (2a/π) ∫₀^{π/2} ∫₀^{π/2} cos θ sin²φ dθ dφ = a/2,

and in a similar fashion we can find ȳ = a/2 and z̄ = a/2. Therefore, the center of mass of the
portion of the sphere is

        (x̄, ȳ, z̄) = (a/2, a/2, a/2).


5.5 Divergence and Stokes' Theorems

If a surface S is smooth and P is any point in S, we can choose a unit normal vector n of S at P.
Then we can take the direction of n as the positive normal direction of S at P (two possibilities).
A smooth surface is said to be orientable if the positive normal direction, given at an arbitrary
point P₀ of S, can be continued in a unique and continuous way to the entire surface.
A smooth surface is said to be piecewise orientable if we can orient each smooth piece of the
surface S in such a manner that along each curve C* which is a common boundary of two pieces
S₁ and S₂ the positive direction of C* relative to S₁ is opposite to the positive direction of C*
relative to S₂.

Figure 5.16: Smooth orientable and piecewise orientable surfaces.

There are also non-orientable surfaces, such as the Möbius strip (it has no inward and no outward
direction; once in, once out).

Consider a boundary surface of a solid region D in 3-space. Such surfaces are called closed.
If a closed surface is orientable or piecewise orientable, then there are only two possible orienta-
tions: inward (toward the solid) and outward (away from the solid).

Let F(x, y, z) = F₁(x, y, z) i + F₂(x, y, z) j + F₃(x, y, z) k be a vector field defined on a solid D.
Then

        div F = ∂F₁/∂x + ∂F₂/∂y + ∂F₃/∂z.

Theorem 5.5.1 (Divergence Theorem of Gauss). Let D be a solid in R³ with surface S oriented
outward. If F = F₁ i + F₂ j + F₃ k, where F₁, F₂ and F₃ have continuous first and second partial
derivatives on some open set containing D, then

        ∬_S F · n dA = ∭_D div F dV,

that is,

        ∭_D ( ∂F₁/∂x + ∂F₂/∂y + ∂F₃/∂z ) dx dy dz = ∬_S ( F₁ dy dz + F₂ dz dx + F₃ dx dy ).

Example 5.5.1. Let S be the sphere x² + y² + z² = a² oriented outward. Find the flux of the
vector function F(x, y, z) = z k across S.
If D is the spherical solid enclosed by S, then by the Divergence Theorem the flux Φ across S is

        Φ = ∬_S F · n dA = ∭_D dV = volume of D = 4πa³/3.

Example 5.5.2. Let S be the surface of the solid enclosed by the circular cylinder x² + y² = 9
and the planes z = 0 and z = 2, oriented outward. Use the Divergence Theorem to find the flux
Φ of the vector field

        F(x, y, z) = x³ i + y³ j + z² k

across S.

Solution

We have div F = 3x² + 3y² + 2z. Thus if D is the cylindrical solid enclosed by S, we have

        Φ = ∬_S F · n dA = ∭_D div F dV = ∭_D (3x² + 3y² + 2z) dV.

Let x = γ cos θ for 0 ≤ θ ≤ 2π, y = γ sin θ for 0 ≤ γ ≤ 3, and z = z for 0 ≤ z ≤ 2. Then

        Φ = ∫₀^{2π} ∫₀^{3} ∫₀^{2} (3γ² + 2z) γ dz dγ dθ
          = ∫₀^{2π} ∫₀^{3} (6γ³ + 4γ) dγ dθ = 279π.

If the volume shrinks to a point P₀ (i.e. if V(D) → 0), then the approximate value becomes exact.
Hence

        div F(P₀) = lim_{V(D)→0} Φ(D)/V(D),   that is,   div F(P₀) = lim_{V(D)→0} (1/V(D)) ∬_S F · n dA.

This limit is called the flux density of F at the point P₀, and it is sometimes taken as the
definition of the divergence. In an incompressible fluid:

∙ Points P₀ at which div F(P₀) > 0 are called sources (because Φ(D) > 0, outflow).

∙ Points P₀ at which div F(P₀) < 0 are called sinks (because Φ(D) < 0, inflow).

∙ Fluid enters the flow at a source and drains out at a sink.

If an incompressible fluid is without sources or sinks, we must have div F(P) = 0 for all points P;
in hydrodynamics this is called the continuity equation for incompressible fluids.

Up to this point we have looked at the application of div F in 3-space. We now turn to
curl F in 3-space, which helps us generalize Green's Theorem to a 3-dimensional object.
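Before moving on, the value 279π from Example 5.5.2 can be checked numerically. The sketch below (not part of the original notes; the grid size is an arbitrary choice) integrates div F = 3x² + 3y² + 2z over the cylinder in cylindrical coordinates with a midpoint rule.

```python
import numpy as np

# Numerical check of Example 5.5.2: integrate div F = 3x^2 + 3y^2 + 2z over the
# cylinder r <= 3, 0 <= z <= 2 (volume element r dz dr dtheta).
n = 120
r = 3.0 * (np.arange(n) + 0.5) / n
z = 2.0 * (np.arange(n) + 0.5) / n
theta = 2.0 * np.pi * (np.arange(n) + 0.5) / n
dr, dz, dth = 3.0 / n, 2.0 / n, 2.0 * np.pi / n

R, Z, TH = np.meshgrid(r, z, theta, indexing="ij")
total = np.sum((3.0 * R**2 + 2.0 * Z) * R) * dr * dz * dth
print(total, 279 * np.pi)     # both ~ 876.4
```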
Remark 5.5.3. 1. If S₁ and S₂ have the same boundary C which is oriented positively, then
for any vector function F that satisfies the hypotheses of Stokes' Theorem, we have

        ∬_{S₁} (curl F) · n dA = ∬_{S₂} (curl F) · n dA.

Figure 5.18: The paraboloid z = 4 − x² − y² for z ≥ 0.

Figure 5.19: Surfaces with the same boundary.
5.6 Exercises


Part III

Complex Analysis


Chapter 6

COMPLEX ANALYTIC FUNCTIONS

6.1 Complex Numbers

In this section we are going to revise the set of complex numbers, on which we are going to work
in the coming chapters.

A complex number z is a symbol of the form x + yi or x + iy, where x and y are real numbers and
i² = −1. Let a + bi and c + di be two complex numbers. The four basic arithmetic operations
are defined as follows.

1. Equality: a + bi = c + di if and only if a = c and b = d.

2. Addition: (a + bi) + (c + di) = (a + c) + (b + d)i.

3. Multiplication: (a + bi)(c + di) = (ac − bd) + (ad + bc)i.

4. Division: Let z = a + bi and w = c + di be complex numbers and z ≠ 0. Then

        w/z = (c + di)/(a + bi) = (ac + bd)/(a² + b²) + i (ad − bc)/(a² + b²).

For a complex number z = a + bi, the number a is called the real part of z and denoted by Re(z),
and b is called the imaginary part of z and denoted by Im(z).

1. The real and imaginary parts of any complex number are real numbers.

2. Any real number a can be considered as a complex number a + 0i. Therefore, the set of
   complex numbers is an extension of the set of real numbers.

3. The set of complex numbers is denoted by C.

4. If x, y and z are complex numbers, then:

   4.1. x + y = y + x (Addition is commutative.)

   4.2. xy = yx (Multiplication is commutative.)

   4.3. x + (y + z) = (x + y) + z (Associative law for addition.)

   4.4. x(yz) = (xy)z (Associative law for multiplication.)

   4.5. x(y + z) = xy + xz (Distributive law.)

   4.6. x + 0 = 0 + x = x (0 is the identity element for addition.)
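These rules can be checked quickly with Python's built-in complex type; the sketch below is illustrative only (the sample values a, b, c, d are arbitrary choices, not from the notes).

```python
# Quick check of the arithmetic rules above using Python's complex type.
a, b, c, d = 3.0, -2.0, 1.0, 5.0
z = complex(a, b)          # a + bi
w = complex(c, d)          # c + di

print(z + w == complex(a + c, b + d))                     # addition rule
print(z * w == complex(a*c - b*d, a*d + b*c))             # multiplication rule
quotient = complex((a*c + b*d) / (a*a + b*b), (a*d - b*c) / (a*a + b*b))
print(abs(w / z - quotient) < 1e-12)                      # division rule
```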
z = √2 e^{iπ/4},

and using Euler's formula we can write r(cos θ + i sin θ) = re^{iθ}. The expression z = re^{iθ} is
called the polar form of z.

Figure 6.3: A complex number and its conjugate in the complex plane.
1. Then |z − z₀| = r if and only if

        √((x − x₀)² + (y − y₀)²) = r,

   which is a circle with center (x₀, y₀) and radius r.

2. The set {z ∈ C : |z − z₀| < r} is an open disk of radius r about z₀; it contains all
   points enclosed by the circle but does not contain the boundary.

3. The set {z ∈ C : |z − z₀| ≤ r} is a closed disk about z₀; it contains all points enclosed
   by the circle and the boundary points on the circle.

4. Let S be a set of complex numbers and w be a complex number.

   4.1 w is an interior point of S if there is some open disk about w which is contained in S.

   4.2 w is a boundary point of S if every open disk about w contains at least one point of
       S and at least one point not in S.

   4.3 The set S is an open set if every point of S is an interior point of S.

   4.4 The set S is a closed set if S contains all of its boundary points.

Figure 6.4: Interior and boundary points of a set in the complex plane.

6.2 Complex Functions, Differential Calculus and Analyticity

In the subsequent sections we are going to consider functions from a subset of the set of
complex numbers to the set of complex numbers.

Definition 6.2.1. A function w of a complex variable z is a rule that assigns a unique value w(z)
to each point z in some set D in the complex plane.

Figure 6.5: A complex function.

If w is a complex function and z = x + iy, then we can always write

        w(z) = u(x, y) + i v(x, y),

where u and v are real-valued functions of x and y, with u(x, y) = Re(w(z)) and v(x, y) = Im(w(z)).
That is, the real and imaginary parts of w(z) are functions of x and y.

Example 6.2.1. Let w be a complex function defined by w(z) = z² = (x + iy)² = (x² − y²) + i 2xy.
Hence Re(w(z)) = u(x, y) = x² − y² and Im(w(z)) = v(x, y) = 2xy.

Example 6.2.2. Let

        f(z) = 1/z = 1/(x + iy) = x/(x² + y²) − i y/(x² + y²) = u(x, y) + i v(x, y).

Then Re(f(z)) = u(x, y) = x/(x² + y²) and Im(f(z)) = v(x, y) = −y/(x² + y²).
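The two decompositions above are easy to verify numerically. The Python sketch below (not part of the notes; the test point is an arbitrary choice) compares the real and imaginary parts of z² and 1/z with the formulas for u and v.

```python
# Check the decompositions w(z) = z^2 and f(z) = 1/z at an arbitrary test point.
x, y = 1.3, -0.7
z = complex(x, y)

w = z**2
print(abs(w.real - (x**2 - y**2)) < 1e-12, abs(w.imag - 2*x*y) < 1e-12)

f = 1 / z
print(abs(f.real - x/(x**2 + y**2)) < 1e-12, abs(f.imag + y/(x**2 + y**2)) < 1e-12)
```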
Let f : C → C be a complex-valued function. Then clearly f maps R² into R², and hence all the
concepts of limits and derivatives that are defined for vector functions of two variables also apply
here, with the notation modified in terms of complex numbers.
6.2.1 Limit

Let z₀ be an interior point in the domain of definition of a function f : C → C. We say that the
limit of f(z) as z approaches z₀ is L and write

        lim_{z→z₀} f(z) = L.

Figure 6.6: Limit of a complex function at a given point.

In the above definition, z = x + iy and f(z) = u(x, y) + iv(x, y). Moreover, |z − z₀| means the
modulus of the complex number z − z₀, and |z − z₀| < δ represents an open disk centered at z₀.

Recall that, in the calculus of real variables, the limit of a sum (product, quotient) of two functions
is the sum (product, quotient) of the limits whenever the limits are defined and the limit of the
denominator is nonzero. The same is true for complex functions, which is summarized below.

4.  lim_{z→z₀} (cf)(z) = cL.

Definition 6.2.3. For a complex function f, if …

6.2.2 Derivatives

Let f be a complex function. The derivative of f at the point z₀, denoted by f′(z₀), is defined as

        f′(z₀) = lim_{z→z₀} ( f(z) − f(z₀) ) / ( z − z₀ ) = lim_{Δz→0} ( f(z₀ + Δz) − f(z₀) ) / Δz,

if the limit exists and is a complex number. Here as well the limit value should be unique and
independent of the way z approaches z₀.

Example 6.2.3. Find the derivative of each of the following functions if it exists.

1. If f(z) = z², then

        f′(z) = lim_{Δz→0} ( f(z + Δz) − f(z) ) / Δz = lim_{Δz→0} ( (z + Δz)² − z² ) / Δz
              = lim_{Δz→0} ( z² + 2zΔz + (Δz)² − z² ) / Δz = lim_{Δz→0} (2z + Δz) = 2z.
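The requirement that the limit be independent of the direction of approach can be seen numerically. The sketch below (not part of the notes; the base point and directions are arbitrary choices) approximates the difference quotient of f(z) = z² along several directions; all approximations agree with 2z₀.

```python
import cmath

# The derivative of f(z) = z^2 at z0 should not depend on the direction of Delta z.
f = lambda z: z**2
z0 = 1.5 - 0.8j
for direction in (1, 1j, 1 + 1j, cmath.exp(1j * 2.3)):
    h = 1e-6 * direction / abs(direction)
    print((f(z0 + h) - f(z0)) / h)        # all close to 2*z0 = 3.0 - 1.6j
print(2 * z0)
```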
4. Quotient Rule:

        (f/g)′(z) = ( f′(z) g(z) − f(z) g′(z) ) / ( g(z) )².

5. The complex version of the Chain Rule: (f∘g)′(z) = f′(g(z)) g′(z).

Example 6.2.4. Let f(z) = 1/z. Then

        f′(z) = ( (1)′ z − 1 · (z)′ ) / z² = −1/z².

Therefore, f is differentiable everywhere in C except at z = 0.

Suppose a complex function f is differentiable at z₀. Consider the equation

        f(z) − f(z₀) = (z − z₀) · ( f(z) − f(z₀) ) / ( z − z₀ ).

Then

        lim_{z→z₀} ( f(z) − f(z₀) ) = lim_{z→z₀} (z − z₀) · ( f(z) − f(z₀) ) / ( z − z₀ ).

But

        lim_{z→z₀} (z − z₀) · ( f(z) − f(z₀) ) / ( z − z₀ ) = lim_{z→z₀} (z − z₀) · lim_{z→z₀} ( f(z) − f(z₀) ) / ( z − z₀ ) = 0 · f′(z₀) = 0.

Therefore, lim_{z→z₀} f(z) = f(z₀) and hence we have the following theorem.

Theorem 6.2.5. If f is a differentiable complex function at z₀, then f is continuous at z₀.

Recall that, if f is a complex function and z = x + iy, then we can always write

        f(z) = u(x, y) + i v(x, y),

where u and v are real-valued functions of x and y.

Consider a complex function f(z) = u(x, y) + i v(x, y) with z = x + iy. If f is analytic in some
domain D (and hence differentiable in D), then the partial derivatives exist, and for z₀ = x₀ + iy₀
and z = x₀ + iy we have

        lim_{z→z₀} ( f(z) − f(z₀) ) / ( z − z₀ )
            = lim_{z→z₀} [ ( u(x₀, y) + i v(x₀, y) ) − ( u(x₀, y₀) + i v(x₀, y₀) ) ] / [ (x₀ + iy) − (x₀ + iy₀) ]
            = lim_{y→y₀} [ ( u(x₀, y) + i v(x₀, y) ) − ( u(x₀, y₀) + i v(x₀, y₀) ) ] / [ i(y − y₀) ]
            = (1/i) lim_{y→y₀} ( u(x₀, y) − u(x₀, y₀) ) / ( y − y₀ ) + (i/i) lim_{y→y₀} ( v(x₀, y) − v(x₀, y₀) ) / ( y − y₀ )
            = (1/i) ∂u/∂y + ∂v/∂y = v_y − i u_y = ∂v/∂y − i ∂u/∂y.
Similarly, if we set Δy = 0 and Δx → 0, that is, if z = x + iy₀ and z₀ = x₀ + iy₀, then we have

        lim_{z→z₀} ( f(z) − f(z₀) ) / ( z − z₀ )
            = lim_{z→z₀} [ ( u(x, y₀) + i v(x, y₀) ) − ( u(x₀, y₀) + i v(x₀, y₀) ) ] / [ (x + iy₀) − (x₀ + iy₀) ]
            = lim_{x→x₀} ( u(x, y₀) − u(x₀, y₀) ) / ( x − x₀ ) + i lim_{x→x₀} ( v(x, y₀) − v(x₀, y₀) ) / ( x − x₀ )
            = ∂u/∂x + i ∂v/∂x.

Since f is differentiable at z₀, the two limits must be equal. That is, we must have
∂u/∂x = ∂v/∂y and ∂v/∂x = −∂u/∂y (the Cauchy–Riemann equations).

Example 6.3.1. Consider f(z) = |z|² = z z̄. Then f(z) = (x² + y²) + 0i, and hence we have
u(x, y) = x² + y² and v = 0. Since u_x = 2x, u_y = 2y, v_x = 0 and v_y = 0, all of u, v, u_x, u_y, v_x, v_y
are continuous in R², and hence u and v are continuously differentiable everywhere in R².
But u_x = v_y only if x = 0, that is, on the y-axis, and v_x = −u_y only if y = 0, that is, on the
x-axis. Thus the Cauchy–Riemann equations hold only at the origin, and hence f(z) = |z|² is
differentiable only at z = 0 and is analytic nowhere.

Example 6.3.2. Let f(z) = z² − 8z + 3. If z = x + iy, then …
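The behaviour in Example 6.3.1 can be checked by finite differences. The sketch below (not part of the notes; step size and test points are arbitrary choices) measures how badly the Cauchy–Riemann equations fail for u = x² + y², v = 0 at the origin and away from it.

```python
# Finite-difference check of the Cauchy-Riemann equations for f(z) = |z|^2,
# i.e. u = x^2 + y^2, v = 0 (Example 6.3.1).
def cr_residual(x, y, h=1e-6):
    u = lambda x, y: x**2 + y**2
    v = lambda x, y: 0.0
    u_x = (u(x + h, y) - u(x - h, y)) / (2 * h)
    u_y = (u(x, y + h) - u(x, y - h)) / (2 * h)
    v_x = (v(x + h, y) - v(x - h, y)) / (2 * h)
    v_y = (v(x, y + h) - v(x, y - h)) / (2 * h)
    return abs(u_x - v_y) + abs(u_y + v_x)

print(cr_residual(0.0, 0.0))   # ~ 0: CR equations hold at the origin
print(cr_residual(1.0, 2.0))   # ~ 6: CR equations fail away from the origin
```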
Definition 6.3.4. A real-valued function u(x, y) of two variables that satisfies Laplace's equation,
that is,

        ∇²u = u_xx + u_yy = 0,

and all of whose first and second order partial derivatives with respect to x and y are continuous
(in this case these functions are called C² functions), is called a harmonic function.

Then we have proved the following theorem.

Theorem 6.3.5 (Harmonic Functions). If f(z) = u(x, y) + iv(x, y) is analytic in a domain D,
then u and v are harmonic in D. That is, they are C² functions and satisfy Laplace's equation:

        ∇²u = u_xx + u_yy = 0,
        ∇²v = v_xx + v_yy = 0.

Since f is analytic, u and v are related by the Cauchy–Riemann equations, and to refer to this
relationship such functions are called conjugate harmonic functions.

Example 6.3.3. Show that u = x² − y² − y is harmonic in C and find a conjugate harmonic
function v of u.

Using the Cauchy–Riemann equations one finds v = 2xy + x + c, so that

        f(z) = u(x, y) + iv(x, y) = (x² − y² − y) + i(2xy + x + c)
             = (x² − y² + i2xy) + (−y + ix + ic) = z² + i(z + c) = z² + iz + c₁,

where c₁ = ic.

6.4 Elementary Functions

6.4.1 Exponential Functions

For a complex number z = x + iy, the complex exponential function e^z is defined by

        e^z = e^{x+iy} = e^x · e^{iy} = e^x (cos y + i sin y),

where e^{iy} = cos y + i sin y is Euler's formula.

We also have |e^{iy}| = |cos y + i sin y| = √(cos²y + sin²y) = 1, and hence |e^z| = |e^x e^{iy}| = e^x |e^{iy}| = e^x
for all z = x + iy.

Example 6.4.1. |e^{−2+4i}| = e^{−2} and |e^{3−5i}| = e^{3}.

Example 6.4.2. If e^z = 2i, then e^x cos y + i e^x sin y = 0 + 2i. This implies e^x cos y = 0 and
e^x sin y = 2. Squaring these equations and adding the results gives

        e^{2x}(cos²y + sin²y) = 4,

which implies e^{2x} = 4 and then x = ln 2, and

        (e^x cos y)/(e^x sin y) = cot y = 0.

The real and imaginary parts of e^z satisfy the Cauchy–Riemann equations. Therefore, the complex
exponential function f(z) = e^z is differentiable for all z and

        f′(z) = (e^z)′ = u_x + i v_x = e^z.
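Example 6.3.3 can be verified numerically. The sketch below (not part of the notes; the step size and test point are arbitrary choices) checks by finite differences that u = x² − y² − y satisfies Laplace's equation and that v = 2xy + x satisfies the Cauchy–Riemann equations with u.

```python
# Check that u = x^2 - y^2 - y is harmonic and v = 2xy + x is its conjugate (Example 6.3.3).
u = lambda x, y: x**2 - y**2 - y
v = lambda x, y: 2*x*y + x
h = 1e-4
x, y = 0.6, -1.1   # arbitrary test point

lap_u = (u(x+h, y) + u(x-h, y) + u(x, y+h) + u(x, y-h) - 4*u(x, y)) / h**2
u_x = (u(x+h, y) - u(x-h, y)) / (2*h)
u_y = (u(x, y+h) - u(x, y-h)) / (2*h)
v_x = (v(x+h, y) - v(x-h, y)) / (2*h)
v_y = (v(x, y+h) - v(x, y-h)) / (2*h)

print(abs(lap_u) < 1e-5)                              # Laplace's equation
print(abs(u_x - v_y) < 1e-6, abs(u_y + v_x) < 1e-6)   # Cauchy-Riemann equations
```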
6.4.2 Trigonometric and Hyperbolic Functions

From Euler's formula we have that e^{iθ} = cos θ + i sin θ and e^{−iθ} = cos θ − i sin θ. By adding
these two equations we get e^{iθ} + e^{−iθ} = 2 cos θ, which implies

        cos θ = ( e^{iθ} + e^{−iθ} ) / 2,

and by subtracting the second from the first we get e^{iθ} − e^{−iθ} = 2i sin θ, which implies

        sin θ = ( e^{iθ} − e^{−iθ} ) / (2i).

Using these two formulae we can now define the complex trigonometric functions as follows.

Another basic relation of the complex trigonometric functions is derived as follows:

        sin²z + cos²z = ( (e^{iz} − e^{−iz})/(2i) )² + ( (e^{iz} + e^{−iz})/2 )²
                      = (e^{iz} − e^{−iz})²/(−4) + (e^{iz} + e^{−iz})²/4
                      = (1/4)[ −e^{2iz} + 2 − e^{−2iz} + e^{2iz} + 2 + e^{−2iz} ] = (1/4)(2 + 2) = 1.

Therefore sin²z + cos²z = 1.

For a complex number z, show that

i) sin(−z) = − sin z
6.4.3 Polar Form and Multi-Valuedness

The polar form of a complex number z is

        z = re^{iθ},

where r = |z| and θ = arctan(y/x) = arg z. However, the angle θ = arg z (for z ≠ 0) can be
determined only to within an arbitrary integer multiple of 2π. The angle θ with −π < θ < π is
called the principal argument of z and denoted by arg z.

The expression z^k is single valued only if the exponent k is an integer. If k = m/n is a rational
number (in its reduced form), then the map

        f(z) = z^k

is n-valued (since there are exactly n nth roots of a complex number z).

Example 6.4.5. Let z = 1 + i. Then r = √(1 + 1) = √2 and arg z = tan⁻¹(1/1) = π/4.
Therefore

        z^{1/3} = (re^{iθ})^{1/3} = 2^{1/6} e^{iπ/12} F_k

for k = 0, 1, 2, where F_k = e^{ik(2π/3)}, which implies F₀ = e⁰ = 1, F₁ = e^{i2π/3} and F₂ = e^{i4π/3};
these correspond to the three points on the unit circle.

6.4.4 The Logarithmic Functions

From the equation re^{iθ} = e^{a+bi} = e^a e^{ib} we get e^{i(b−θ)} = 1 = e^{2kπi} for k ∈ Z, which implies that
i(b − θ) = 2kπi and hence b = θ + 2kπ for k ∈ Z. Therefore, for z ≠ 0 there are infinitely many
numbers

        w = ln r + i(θ + 2kπ),   k ∈ Z,

such that z = e^w. Now we are in a position to define the logarithm of a nonzero complex number z as

        log(z) = ln|z| + i(arg(z) + 2kπ),   k ∈ Z.

Example 6.4.6. Compute log(1 + i).

Let z = 1 + i. Then r = √(1 + 1) = √2 and arg(z) = arctan(1/1) = π/4. Therefore

        log(1 + i) = ln √2 + i(π/4 + 2kπ),   k ∈ Z.

Let f be a complex function which is differentiable at z. If f is expressed in polar form as
f(z) = u(r, θ) + iv(r, θ), the Cauchy–Riemann equations can be calculated (using the definition along
constant θ and along constant r, or using the change of variables x = r cos θ and y = r sin θ).
Using the change of variables x = r cos θ and y = r sin θ and the chain rule we have

        ∂u/∂x = (∂u/∂r)(∂r/∂x) + … ,

so that

        ∂u/∂r = (1/r) ∂v/∂θ

and, from (6.5) and (6.6),

        ∂v/∂r = −(1/r) ∂u/∂θ,

and thus

        f′(z) = e^{−iθ}(u_r + i v_r) = (e^{−iθ}/r)(v_θ − i u_θ) = e^{−iθ}(u_r − (i/r) u_θ) = e^{−iθ}((1/r) v_θ + i v_r).

Example 6.4.7. Let f(z) = log z = ln r + iθ, with θ = arg(z) the principal argument
(i.e. 0 < r < ∞ and −π < θ < π). Then find f′(z) in terms of z.

Here u(r, θ) = ln r, v(r, θ) = θ, and u_r = 1/r, u_θ = 0, v_r = 0, v_θ = 1.
Since u, v, u_r, v_r, u_θ, v_θ are all continuous in the plane where log z is defined, and since the Cauchy–
Riemann equations are satisfied, log z is analytic everywhere in the domain of log z.
Hence

        f′(z) = (log z)′ = e^{−iθ}(u_r + i v_r) = e^{−iθ}(1/r + i · 0) = 1/(re^{iθ}) = 1/z,

as in the real case.
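Examples 6.4.6 and 6.4.7 can both be checked with cmath. The sketch below (not part of the notes; the finite-difference step is an arbitrary choice) compares the principal logarithm of 1 + i with ln√2 + iπ/4 and approximates (log z)′ numerically.

```python
import cmath, math

# Check Example 6.4.6 (principal value of log(1 + i)) and Example 6.4.7 ((log z)' = 1/z).
z = 1 + 1j
print(cmath.log(z), complex(math.log(math.sqrt(2)), math.pi / 4))

h = 1e-7
deriv = (cmath.log(z + h) - cmath.log(z - h)) / (2 * h)
print(deriv, 1 / z)          # both close to 0.5 - 0.5j
```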
6.5 Exercises
Chapter 7

COMPLEX INTEGRAL CALCULUS

7.1 Complex Integration:

Recall that there is a one-to-one correspondence between the set of complex numbers C and
the set of points in the Euclidean real plane R × R. Hence the natural generalization of the
Riemann integral

        ∫_a^b f(x) dx

of a real-valued function f on the real x-axis is the line integral in R² or in R³. Following this fact,
we define the integral of a complex-valued function of a complex variable as a line integral of the
function along a given oriented curve C in the complex plane.
That is, the complex integral of a complex function f on a curve C is given by

        I = ∫_C f(z) dz.

Here we assume that C is an oriented curve in the complex plane which is piecewise smooth and
simple. The direction in which t is increasing is called the positive sense of C. We can now state:

Definition 7.1.1. Let C be a smooth curve, represented by z = z(t), where a ≤ t ≤ b. Let f(z)
be a continuous complex function on C. Then the integral of f along C is defined by

        ∫_C f(z) dz = ∫_a^b f(z(t)) ż(t) dt,   where ż(t) = dz/dt.

Now the question is how we can evaluate this integral. One possible way to evaluate the complex
integral is to write the line integral as one or more real line integrals. To see this, let
f(z) = u + iv and z = x + iy. Then dz = dx + i dy. Hence

        ∫_C f(z) dz = ∫_C (u + iv)(dx + i dy) = ∫_C (u dx − v dy) + i ∫_C (v dx + u dy).

Example 7.1.1. Evaluate

        ∫_C dz/z,

where C is the unit circle around the origin.

Solution

Here C is parameterized by z(t) = cos t + i sin t = e^{it}, 0 ≤ t ≤ 2π. Then

        ∫_C dz/z = ∫₀^{2π} e^{−it} · i e^{it} dt = i ∫₀^{2π} dt = 2πi.

Example 7.1.2. Evaluate I = ∫_C z² dz, where C is the parabolic arc given in the figure below.

Figure 7.1: The parabola x = 4 − y².
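Example 7.1.1 is a good candidate for a numerical check. The sketch below (not part of the notes; the number of sample points is an arbitrary choice) evaluates ∮ dz/z by the parameterization z(t) = e^{it}.

```python
import numpy as np

# Numerical check of Example 7.1.1: integrate dz/z over the unit circle,
# using z(t) = exp(i t), dz = i exp(i t) dt.
n = 100000
t = 2 * np.pi * (np.arange(n) + 0.5) / n
z = np.exp(1j * t)
dz = 1j * z * (2 * np.pi / n)

print(np.sum(dz / z))        # ~ 2*pi*i
print(2j * np.pi)
```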
Solution

First, z² = (x + iy)² = (x² − y²) + i(2xy), which implies u(x, y) = x² − y² and v = 2xy. Then

        I = ∫_C z² dz = ∫_C ( (x² − y²) dx − 2xy dy ) + i ∫_C ( 2xy dx + (x² − y²) dy ).

In the previous examples we have been integrating over smooth curves. Let C be a piecewise
smooth curve. That is, C is a curve made up of smooth curves C₁, C₂, ..., Cₙ such that the
terminal point of Cᵢ is the initial point of Cᵢ₊₁, and in this case we write C = C₁ ⊕ ... ⊕ Cₙ.

Definition 7.1.2. Let C be a piecewise smooth curve such that C = C₁ ⊕ ... ⊕ Cₙ and f(z) be
a continuous complex function on C. Then we define

        ∫_C f(z) dz = Σ_{i=1}^{n} ∫_{Cᵢ} f(z) dz.

3. If C′ has an opposite orientation to that of C, then

        ∫_C f(z) dz = − ∫_{C′} f(z) dz.
7.2 Cauchy's Integral Theorem.

Let C be a piecewise-smooth simple closed curve in the complex plane (and hence in R²). Then
C encloses some simply connected region R. Let f(z) = u(x, y) + iv(x, y) be continuous in a
simply connected domain D containing the curve C. Then

        ∫_C f(z) dz = ∫_C (u dx − v dy) + i ∫_C (v dx + u dy).                     (7.1)

Now assume that f is analytic and that f′(z) is continuous in D, so that u and v are continuously
differentiable. Then, by Green's Theorem applied to u and v, we can write (7.1) as

        ∫_C f(z) dz = ∬_R ( ∂(−v)/∂x − ∂u/∂y ) dA + i ∬_R ( ∂u/∂x − ∂v/∂y ) dA
                    = ∬_R ( −∂v/∂x − ∂u/∂y ) dA + i ∬_R ( ∂u/∂x − ∂v/∂y ) dA.

But since f is analytic in D, by the Cauchy–Riemann equations we have that

        ∂u/∂x = ∂v/∂y   and   ∂u/∂y = −∂v/∂x.

This implies

        ∂u/∂x − ∂v/∂y = 0   and   −∂v/∂x − ∂u/∂y = 0   in D.

Therefore

        ∫_C f(z) dz = ∬_R 0 dA + i ∬_R 0 dA = 0,

and hence we have proved the following theorem (called Cauchy's Theorem).

Theorem 7.2.1 (Cauchy's Theorem). If f(z) is analytic in a simply connected domain D, then

        ∫_C f(z) dz = 0

for every piecewise smooth simple closed curve C in D.

Remark 7.2.2. In Cauchy's Theorem above, the continuity of f′(z) is omitted. This is done
intentionally because, as we can show later, if f is analytic at a point z₀, then the derivatives of
all orders of f at z₀ exist, and hence f′(z) is continuous.

Example 7.2.1. Consider the integral

        ∫_C dz/(z² − 5z + 6) = ∫_C dz/((z − 2)(z − 3)),

where C is the unit circle centered at the origin and traversed counterclockwise. Then

        f(z) = 1/((z − 2)(z − 3))

is analytic everywhere except at z = 2 and z = 3. But z = 2 and z = 3 are outside the region
enclosed by C. Hence f is analytic in the region enclosed by C. Then by Cauchy's Theorem

        ∫_C dz/(z² − 5z + 6) = 0.

Let C₁ and C₂ be closed paths in the complex plane with C₂ in the interior of C₁. Suppose
that a complex function f is analytic in an open set containing both paths and all points between
them. Now let L be the line segment as shown in Figure 7.4. Then the region D is a simply
connected region bounded by the curve C, where C = C₁ ⊕ C₂′ ⊕ L ⊕ L′, where L′ is the line
segment which is oriented opposite to that of L and C₂′ is the curve C₂ but in opposite orientation.

Figure 7.2: Multiply and Simply Connected Regions.

Then, since f is analytic in D, by Cauchy's Theorem we have

        ∫_C f(z) dz = 0.

But since C is a piecewise smooth curve, we have

        ∫_C f(z) dz = ∫_{C₁} f(z) dz + ∫_{C₂′} f(z) dz + ∫_L f(z) dz + ∫_{L′} f(z) dz,

and

        ∫_{L′} f(z) dz = −∫_L f(z) dz   and   ∫_{C₂′} f(z) dz = −∫_{C₂} f(z) dz.

Therefore, we get

        ∫_C f(z) dz = ∫_{C₁} f(z) dz − ∫_{C₂} f(z) dz = 0,

and hence

        ∫_{C₁} f(z) dz = ∫_{C₂} f(z) dz.

Therefore, we have proved the following theorem.
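Example 7.2.1 can also be confirmed numerically by integrating over the unit circle; the sketch below is illustrative only (the number of sample points is an arbitrary choice).

```python
import numpy as np

# Numerical check of Example 7.2.1: z = 2 and z = 3 lie outside the unit circle,
# so the integral of 1/(z^2 - 5z + 6) over |z| = 1 should be 0.
n = 200000
t = 2 * np.pi * (np.arange(n) + 0.5) / n
z = np.exp(1j * t)
dz = 1j * z * (2 * np.pi / n)

print(np.sum(dz / (z**2 - 5*z + 6)))   # ~ 0
```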
Theorem 7.2.3 (The Deformation Theorem). Let 𝐶1 and 𝐶2 be closed paths in the complex path with radius 𝑟 and centered at 𝑎.
plane with 𝐶2 is in the interior of 𝐶1 . Suppose that a complex function 𝑓 is analytic in an open Then � �
𝑑𝑧 𝑑𝑧
set containing both paths and all points between them. Then = .
𝑐 𝑧−𝑎 𝐶1 𝑧−𝑎
� �
𝑓 (𝑧)𝑑𝑧 = 𝑓 (𝑧)𝑑𝑧. Set 𝑧 − 𝑎 = 𝑟𝑒𝑖𝜃 . Then 𝑑𝑧 = 𝑟𝑖𝑒𝑖𝜃 𝑑𝜃 and hence
𝐶1 𝐶2 � � � 2𝜋
∫ 𝑟𝑖𝑒𝑖𝜃
𝑑𝜃 = 𝑖 𝑑𝜃 = 𝑖 𝑑𝜃 = 2𝜋𝑖 ∕= 0.
Remark 7.2.4. If f is analytic in a simply connected domain D, then the integral 𝑐
𝑓 (𝑧)𝑑𝑧is 𝐶1 𝑟𝑒
𝑖𝜃
𝐶1 0
independent of path in D. That is, if 𝐶1 and 𝐶2 are open curves with the same initial and terminal
points, then � � 7.3 Cauchy’s Integral Formula and The Derivative of An-
𝑓 (𝑧)𝑑𝑧 = 𝑓 (𝑧)𝑑𝑧.
𝐶1 𝐶2
alytic Functions.
Hence we can deform 𝐶1 into 𝐶2 without changing the value of the integral.
However, if f is not analytic in D, then Cauchy’s Theorem does not hold true in general. In the last example of the previous section we have seen that
�
Example 7.2.2. Consider the integral 𝑑𝑧
� = 2𝜋𝑖,
𝑑𝑧 𝐶 𝑧 −𝑎
𝑐 𝑧−𝑎
where C is any piecewise smooth, simple closed curve oriented counterclockwise and containing
where C is any piecewise smooth simple closed curve, oriented counterclockwise and containing 𝑎 in the interior. During the evaluation of the integral we used the idea of path deformation and
𝑎 inside. Since a circle 𝐶1 with center 𝑎 and radius r.
�
Now let 𝑓 (𝑧) be analytic in a simply- connected domain D containing C inside. Then
�
� �
𝑓 (𝑧) 𝑓 (𝑧)
𝐼= 𝑑𝑧 = 𝑑𝑧,
𝐶 𝑧−𝑎 𝐶1 𝑧−𝑎
�
��
� where 𝐶1 is a sufficiently small circle with radius r and centered at 𝑎.
Since this last integral is independent of 𝑟, provided 𝐶1 stays inside we will let 𝑟 → 0. Hence
�
� � � � � � � �
𝑓 (𝑧) 𝑓 (𝑎) 𝑓 (𝑧) − 𝑓 (𝑎) 𝑓 (𝑧) − 𝑓 (𝑎)
𝐼= 𝑑𝑧 = 𝑧𝑎 + 𝑑𝑧 = 𝑓 (𝑎)2𝜋𝑖 + 𝑑𝑧.
𝐶1 𝑧 − 𝑎 𝐶1 𝑧 − 𝑎 𝐶1 𝑧−𝑎 𝐶1 𝑧−𝑎
At this final step letting 𝑟 → 0 we have ∣𝑓 (𝑧) − 𝑓 (𝑎)∣ → 0.(The deviation of integrand goes to
Figure 7.3: A curve C containing 𝑎 inside.
zero).
1 Thus �
𝑓 (𝑧) = 𝑓 (𝑧)
𝑧−𝑎 𝐼= 𝑑𝑧 = 𝑓 (𝑎)2𝜋𝑖.
is analytic in the region bounded by C except in some neighborhood of 𝑧 = 𝑎, we can conclude 𝐶 𝑧−𝑎
that 𝑓 is analytic in every domain not containing 𝑎 inside. Definition 7.3.1. A Complex function 𝑔 is said to be singular at a point, say 𝑧 = 𝑧0 , if it is not
Thus, because of path deformation, we can assume without loss of generality that 𝐶1 is a circular analytic at that point.
Theorem 7.3.2 (Cauchy Integral Formula). Let 𝑓 (𝑧) be analytic in a simply - connected domain A very striking result in complex analysis is that, if f is analytic in a domain D (once it is
D and let C be a piecewise smooth simple closed curve in D oriented counterclockwise. Then differentiable at a point of D), then it has derivatives of all orders in D. We have the following
� theorem that can be used to find higher order derivatives of an analytic at a given point and
𝑓 (𝑧)
𝑑𝑧 = 2𝜋𝑖𝑓 (𝑎) evaluate integrals.
𝐶 𝑧 −𝑎
for all a in D. This implies � Theorem 7.3.3 (Cauchy Integral Formula for Higher Derivatives). Let 𝑓 (𝑧) be analytic in a
1 𝑓 (𝑧)
𝑓 (𝑎) = 𝑑𝑧. simply - connected domain D and let C be a piecewise smooth simple closed curve in D oriented
2𝜋𝑖 𝐶 𝑧−𝑎
counterclockwise. Then for all 𝑎 in D
Example 7.3.1. Evaluate � � � �
𝑧3 − 6 𝑛! 𝑓 (𝑧)
𝑑𝑧, 𝑓 (𝑛) (𝑎) = 𝑑𝑧
2𝑧 − 𝑖 2𝜋𝑖 𝑐 (𝑧 − 𝑎)𝑛+1
𝐶
Here � �� �
𝑧2 + 1 𝑧2 + 1 𝑧2 + 1 1 𝑓 (𝑧)
= = . = ,
𝑧2 − 1 (𝑧 − 1)(𝑧 + 1) 𝑧+1 𝑧−1 𝑧−1 �
𝑧 2 +1
where 𝑓 (𝑧) = 𝑧+1
. Therefore,
� � � �
� � 2
� � 2 �
𝑧 +1 𝑧 +1 2
𝑑𝑧 = 2𝜋𝑖𝑓 (1) = 2𝜋𝑖 = 2𝜋𝑖 × = 2𝜋𝑖.
𝐶 𝑧2 − 1 𝑧 + 1 𝑧=1 2
1 𝐴 𝐵 𝐶 𝐷 �� �� �
= + 2+ + � �
𝑧 2 (𝑧 − 2)(𝑧 − 4) 𝑧 𝑧 𝑧−2 𝑧−4 � �
which implies 𝐴 = 3
,𝐵 = 18 , 𝐶 = − 18 , 𝐷 = 32
1
. Therefore � �
32 �� ��
� � � � �
𝑑𝑧 3 𝑑𝑧 1 𝑑𝑧 1 𝑑𝑧 1 𝑑𝑧 � �
2
= + 2
− + .
𝐶 𝑧 (𝑧 − 2)(𝑧 − 4) 32 𝐶 𝑧 8 𝐶 𝑧 8 𝐶 𝑧 − 2 32 𝐶 𝑧 −4
∮ ∮
But 18 𝐶 𝑑𝑧
𝑧2
= 0 since the exponent is 2 and 32 1
𝐶 𝑧−4
𝑑𝑧
since 𝑓 𝑟𝑎𝑐1𝑧 − 2 is analytic in D. Hence Figure 7.6: Multiply and Simply Connected Regions.
� � � � � � � � �
𝑑𝑧 3 1 1 1 −𝜋
2
= × 2𝜋𝑖 + × 0 + × 2𝜋𝑖 + × 0 = 𝑖. and
𝐶 𝑧 (𝑧 − 2)(𝑧 − 4) 32 8 8 32 16 � �
𝑓 (𝑧) 𝑓 (𝑧)
𝑑𝑧 = − 𝑑𝑧.
𝐿′ 𝑧−𝑎 𝐿 𝑧−𝑎
7.4 Cauchy’s Theorem for Multiply Connected Domains Therefore, we get � � �
𝑓 (𝑧) 𝑓 (𝑧) 𝑓 (𝑧)
𝑑𝑧 = 𝑑𝑧 + 𝑑𝑧
Now let us extend the the Cauchy’s theorem for multiply connected regions. 𝐶 𝑧−𝑎 𝐶1 𝑧−𝑎 𝐶2 𝑧−𝑎
Suppose f is analytic on 𝐶1 and 𝐶2 and in the annulus domain D bounded by 𝐶1 and 𝐶2 and hence � �� � �
1 𝑓 (𝑧) 1 𝑓 (𝑧) 𝑓 (𝑧)
counterclockwise and clockwise respectively, and 𝑎 is in the interior of the domain as shown in 𝑓 (𝑎) = 𝑑𝑧 = 𝑑𝑧 + 𝑑𝑧 .
2𝜋𝑖 𝐶 𝑧−𝑎 2𝜋𝑖 𝐶1 𝑧−𝑎 𝐶2 𝑧−𝑎
Figure 7.4.
In general if D is bound by 𝐶1 , 𝐶2 , 𝐶3 , . . . 𝐶𝑚 , where 𝐶1 is oriented counterclockwise and all the
� others are oriented clockwise and each of the 𝐶𝑖′ 𝑠 are closed, simple paths, 𝑎 is in the interior of
D, then � � �
�� 𝑓 (𝑧) 𝑓 (𝑧) 𝑓 (𝑧)
� 𝑧−𝑎
𝑑𝑧 +
𝑧−𝑎
𝑑𝑧 + . . . +
𝑧−𝑎
𝑑𝑧 = 2𝜋𝑖𝑓 (𝑎).
𝐶1 𝐶2 𝐶𝑚
�
�� Theorem 7.4.1. Let C be a closed path and 𝐶1 , ..., 𝐶𝑛 be closed paths enclosed by C. Assume
� that any two of 𝐶, 𝐶1 , ..., 𝐶𝑛 intersect and no interior point to any 𝐶𝑖 is interior to any other
𝐶𝑘 . Let 𝑓 be analytic on an open set containing C and each 𝐶𝑖 and all the points that are both
Figure 7.5: Annulus. interior to C an exterior to each 𝐶𝑖 . Then
� 𝑛 �
�
Now let 𝐿 be the line segment as shown in the Figure 7.4. Then the region D is a simple bounded 𝑓 (𝑧)𝑑𝑧 = 𝑓 (𝑧)𝑑𝑧.
𝐶 𝑖=1 𝐶𝑖
by the curve 𝐶, where 𝐶 = 𝐶1 ⊕ 𝐶2 ⊕ 𝐿 ⊕ 𝐿′ , where 𝐿′ is the line segment which is oriented
opposite to that of L. Example 7.4.1. Evaluate �
𝑑𝑧
,
Then. since 𝑓 is analytic in D and 𝑎 is D, by Cauchy Integral Formula, we have 𝐶 𝑧(𝑧 − 1)
�
1 𝑓 (𝑧) where C is the circle ∣𝑧∣ = 3 counterclockwise.
𝑓 (𝑎) = 𝑑𝑧.
2𝜋𝑖 𝐶 𝑧 − 𝑎
But 𝐶 is a piecewise smooth curve, we have Solution
� � � � �
𝑓 (𝑧) 𝑓 (𝑧) 𝑓 (𝑧) 𝑓 (𝑧) 𝑓 (𝑧)
𝑑𝑧 = 𝑑𝑧 + 𝑑𝑧 + 𝑑𝑧 + 𝑑𝑧 Let 𝐶1 and 𝐶2 be the circles as in Figure 7.4.1.
𝐶 𝑧 −𝑎 𝐶1 𝑧 − 𝑎 𝐶2 𝑧 − 𝑎 𝐿 𝑧 −𝑎 𝐿′ 𝑧 − 𝑎
�
If 𝑧0 is any fixed point in D, then the integral
�
�� �� � � 𝑧
� �
𝑓 (𝜁)𝑑𝜁 denoted by 𝑓 (𝜁)𝑑𝜁
� � 𝐿 𝑧0
where 𝐿 is the line segment with initial point 𝑧0 and terminal point 𝑧 is path independent and
hence defines a single- valued function of 𝑧, provided that f is analytic in the domain D. Thus we
Figure 7.7: The curves in Example 7.4.1. can define � 𝑧
𝐺(𝑧) = 𝑓 (𝜁)𝑑𝜁
𝑧0
Therefore,
� � � and thus it follows that 𝐺′ (𝑧) = 𝑓 (𝑧).
𝑑𝑧 𝑑𝑧 𝑑𝑧 ( ) ( )
= + = 2𝜋𝑖 × 𝑓1 (1) + 2𝜋𝑖 × 𝑓2 (0) ,
𝐶 𝑧(𝑧 − 1) 𝐶 𝑧(𝑧 − 1) 𝐶 𝑧(𝑧 − 1)
1 1 If 𝐹 (𝑧) is any particular primitive of 𝑓 (𝑧), then
where 𝑓1 (𝑧) = 𝑧
and 𝑓2 (𝑧) = 𝑧−1
. Therefore
� 𝑧
�
𝑑𝑧 ( ) ( ) 𝐺(𝑧) = 𝑓 (𝜁)𝑑𝜁 = 𝐹 (𝑧) + 𝑘
= 2𝜋𝑖 × 1 + 2𝜋𝑖 × (−1) = 0. 𝑧0
𝐶 𝑧(𝑧 − 1)
Example 7.4.2. Evaluate where 𝑘 is arbitrary constant. Suppose the line segment is parameterized by 𝑧(𝑡) for 𝑎 ≤ 𝑡 ≤ 𝑏.
�
𝑧+1 First write 𝐹 (𝑧) = 𝑈 (𝑥, 𝑦) + 𝑖𝑉 (𝑥, 𝑦). Then we get
𝑑𝑧,
𝑐 𝑧(𝑧 − 2)(𝑧 − 4)3 � � 𝑏
where C is the counterclockwise circle ∣𝑧 − 3∣ = 2. 𝑓 (𝑧)𝑑𝑧 = 𝑓 (𝑧(𝑡))𝑧 ′ (𝑡)𝑑𝑡
𝐿 𝑎
� 𝑏
𝑑
Solution = 𝐹 (𝑧(𝑡))𝑑𝑡
𝑎 𝑑𝑡
� 𝑏 � 𝑏
𝑑 𝑑
Here the integrand has singularities at 𝑧 = 0, 2 and 4. of which 2 and 4 lie inside C. It is easier to = 𝑈 (𝑥(𝑡), 𝑦(𝑡))𝑑𝑡 + 𝑖 𝑉 (𝑥(𝑡), 𝑦(𝑡))𝑑𝑡
𝑎 𝑑𝑡 𝑎 𝑑𝑡
deform C into two closed curves and to evaluate each of the integrand using generalized Cauchy’s = 𝑈 (𝑥(𝑏), 𝑦(𝑏)) + 𝑖𝑉 (𝑥(𝑏), 𝑦(𝑏)) − 𝑈 (𝑥(𝑎), 𝑦(𝑎)) − 𝑖𝑉 (𝑥(𝑎), 𝑦(𝑎))
formula, � 𝑏
� � � � � � � = 𝐹 ′ (𝑧(𝑡))𝑧 ′ (𝑡)𝑑𝑡
(𝑧 + 1) 𝑧+1 𝑑𝑧 𝑧+1 𝑑𝑧 𝑎
𝑑𝑧 = + = 𝐹 (𝑧) − 𝐹 (𝑧0 ).
𝐶 𝑧(𝑧 − 2)(𝑧 − 4)3 𝐶1 𝑧(𝑧 − 4) 3 𝑧 −2
𝐶2 𝑧(𝑧 − 2) (𝑧 − 4)3
� � 2
� �
𝑧+1 2𝜋𝑖 𝑑 𝑧+1
= 2𝜋𝑖 + Hence we have the following theorem.
𝑧(𝑧 − 4)3 𝑧=2 2! 𝑑𝑧 2 𝑧(𝑧 − 2) 𝑧=4
3𝜋𝑖 23𝜋𝑖 𝜋𝑖
=− + = . Theorem 7.5.1 (Fundamental Theorem of the complex Integral Calculus). Let 𝑓 (𝑧) be analytic
8 64 64
in a simply - connected domain D and let 𝑧0 be any fixed point in D. Then
the property that 𝐹 ′ (𝑧) = 𝑓 (𝑧) for all 𝑧 ∈ 𝐷. Then any function 𝐹 (𝑧) satisfying 𝐹 ′ (𝑧) = 𝑓 (𝑧) is analytic in D and 𝐺′ (𝑧) = 𝑓 (𝑧)
is called an anti-derivative or a primitive of 𝑓 (𝑧).
(ii) if F(z) is any primitive of f(z), then
� 𝑧
𝑓 (𝜁)𝑑𝜁 = 𝐹 (𝑧) − 𝐹 (𝑧0 ).
𝑧0
1. the integral ∫_{2i}^{3} sin z dz;

2. the integral ∫_{1+i}^{−i} dz/z.

Solution

1. Since sin z is analytic on the segment joining the points 2i and 3, we have

        ∫_{2i}^{3} sin z dz = [ −cos z ]_{2i}^{3} = −cos 3 + cos 2i = cosh 2 − cos 3.

2. Since 1/z is analytic everywhere except at z = 0, it is analytic on the line segment joining
−i and 1 + i. Hence

        ∫_{1+i}^{−i} dz/z = [ log z ]_{1+i}^{−i} = [ ln r + iθ ]_{r=√2, θ=π/4}^{r=1, θ=3π/2}
                         = (ln 1 + i·3π/2) − (ln √2 + i·π/4)
                         = −(ln 2)/2 + i·5π/4 = −(ln 2)/2 − i·3π/4.

Therefore we have

        ∫_{1+i}^{−i} dz/z = −(ln 2)/2 − i·3π/4.
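Both values above can be checked directly with the antiderivatives −cos z and the principal logarithm; the sketch below is illustrative only.

```python
import cmath, math

# Check the two exercise values using the antiderivatives -cos z and log z.
val1 = -cmath.cos(3) + cmath.cos(2j)
print(val1, cmath.cosh(2) - cmath.cos(3))          # cosh 2 - cos 3

val2 = cmath.log(-1j) - cmath.log(1 + 1j)          # principal branch
print(val2, complex(-math.log(2) / 2, -3 * math.pi / 4))
```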
7.6 Exercises
However, since this state is difficult to apply, we use, in practice, array of standard convergence
theorems (comparison, integral test, ratio test and others) which are easier to apply.
∑∞ ∑𝑛
Theorem 8.1.2. Consider a complex series 𝑘=1 𝑐𝑘 = 𝑘=1 (𝑎𝑘 + 𝑖𝑏𝑘 ).
∑∞ ∑∞ ∑∞
1. The series 𝑘=1 𝑐𝑘 , converges to 𝑢 + 𝑖𝑣 if and only 𝑎𝑘 = 𝑢 and 𝑘=1 𝑏𝑘 = 𝑣.
Chapter 8 ∑ ∑∞
𝑘=1
2. We say that the series ∞ 𝑘=1 𝑐𝑘 converges absolutely if the series 𝑘=1 ∣𝑐𝑘 ∣ converges and
∑
if the series ∞ 𝑐
𝑘=1 𝑘 converges absolutely, then it is convergent.
TAYLOR AND LAURENT SERIES All absolute convergence tests that apply for real series(i.e. comparison test, ratio test and
root test) hold also for complex series with the necessary notational adjustments.
where the terms 𝑐′𝑛 𝑠 are complex numbers is called a Complex series. the series 𝑛=1 (−1)𝑛 +𝑖 is divergent.
∑
As in the case of the real series, a complex series ∞𝑛=1 𝑐𝑛 is convergent, if and only if its partial
∑𝑛
sum 𝑠𝑛 = 𝑘=1 𝑐𝑘 is a cauchy sequence, that is, if and only if to each 𝜖 > 0 there corresponds
an integer 𝑁 (𝜖) such that ∣𝑠𝑚 − 𝑠𝑛 ∣ < 𝜖 for all integers m and n greater than 𝑁 (𝜖).
∑∞
8.2 Complex Taylor Series. 3. 1 𝑛
𝑛=0 𝑛! 𝑧 , the radius of convergence is 𝑅 = ∞.
∑∞
A series of the form 4. 𝑛=0 𝑛𝑛 𝑧 𝑛 , the radius of convergence is 𝑅 = 0.
∞
� In (8.1) if 𝑎 = 0 and the radius of convergence is 𝑅 > 0, the function:
𝑐𝑛 (𝑧 − 𝑎)𝑛 = 𝑐0 + 𝑐1 (𝑧 − 𝑎) + 𝑐2 (𝑧 − 𝑎)2 + . . . , (8.1)
𝑛=0 ∞
�
𝑓 (𝑧) = 𝑐𝑛 𝑧 𝑛 , ∣𝑧∣ < 𝑅
where the terms are complex numbers, is known as a power series in powers of 𝑧 − 𝑎. In the
𝑛=0
power series
∞
� is a power series representation of 𝑓 .
𝑐𝑛 (𝑧 − 𝑎)𝑛 = 𝑐0 + 𝑐1 (𝑧 − 𝑎) + 𝑐2 (𝑧 − 𝑎)2 + . . . ,
𝑛=0 Remark 8.2.3. If the power series representations
∙ 𝑐′𝑛 𝑠 are the coefficients, (real or complex constants) ∞
� ∞
�
𝑎𝑛 𝑧 𝑛 and 𝑏𝑘 𝑧 𝑘
∙ 𝑧 is the variable (complex variable) 𝑛=0 𝑘=0
∙ 𝑎 is the center (real or complex constant) of the series. both converge for ∣𝑧∣ < 𝑅 to the same value for all 𝑧 such that ∣𝑧∣ < 𝑅, then the two series are
identical. That is, 𝑎𝑛 = 𝑏𝑛 for all 𝑛 = 0, 1, . . . . Thus if a complex function 𝑓 has a power series
Remark 8.2.1. The given power series (8.1) converges at 𝑧 = 𝑧0 . If the series (8.1) converges representation with any center 𝑎, then the representation
at 𝑧1 ∕= 𝑧0 , then the series converges for all 𝑧, ∣𝑧 − 𝑧0 ∣ ≤ ∣𝑧1 − 𝑧0 ∣.
∞
�
� � � � � � 𝑓 (𝑧) = 𝑐𝑛 (𝑧 − 𝑎)𝑛
� � � 𝑐𝑛+1 (𝑧−𝑎) � � 𝑐𝑛+1 �
(𝑧−𝑎)𝑛+1 �
If we apply the ratio test on (8.1) we get: �� 𝑐𝑛+1
𝑐𝑛 (𝑧−𝑎) 𝑛
� �=�
� � 𝑐𝑛 � � 𝑐𝑛 �∣𝑧 − 𝑎∣.
= � 𝑛=0
� 𝑐𝑛+1 �
If lim𝑛→∞ � 𝑐𝑛 � = 𝐿, the power series (8.1) converges in the disk ∣𝑧 − 𝑎∣ < 𝐿1 and diverges in is unique.
1
the set ∣𝑧 − 𝑎∣ > 𝐿
. If 𝐿 = ∞, then the series converges only at 𝑧 = 𝑎, and if 𝐿 = 0, then it
Theorem 8.2.4. If a power series function
converges for all z.
∞
�
𝑓 (𝑧) = 𝑐𝑛 (𝑧 − 𝑎)𝑛
We have proved the following theorem. 𝑛=0
The number 𝑅 is called the radius of convergence. 2. If 𝐶 is any path in 𝐷 = {𝑧 ∈ C : ∣𝑧 − 𝑎∣ < 𝑅}, then by termwise integration, we have
� ∞ �
�
Example 8.2.1. For the series: 𝑓 (𝑧)𝑑𝑧 = (𝑧 − 𝑎)𝑛 𝑑𝑧
𝐶 𝑛=1 𝐶
∑∞
1. 𝑛=0 𝑧 𝑛 , the radius of convergence is 𝑅 = 1. ∫
and 𝐶
𝑓 (𝑧)𝑑𝑧 has the same radius of convergence as 𝑓 (𝑧).
∑∞ 1 𝑛
2. 𝑛=0 𝑛 𝑧 , the radius of convergence is 𝑅 = 1.
1
∑∞
Recall that, unlike to real valued functions, for a complex function 𝑓 , if 𝑓 is analytic in some On the other hand, if we replace 𝑧 by −𝑧 in 1−𝑧
= 𝑛=0 𝑧 𝑛 we obtain
domain D, then it admits derivatives of all orders in D. If 𝑎 is in the interior of D, the series � ∞
1
∞
� (𝑛) = (−1)𝑛 𝑧 𝑛 for ∣𝑧∣ < 1
𝑓 (𝑎) 1+𝑧
(𝑧 − 𝑎)𝑛 𝑛=0
𝑛=0
𝑛!
and differentiating this yields
is well defined and is known as the Taylor series (or Expansion ) of 𝑓 about the point 𝑎 and if ∞
−1 �
𝑎 = 0, then the Taylor series is also known as the Maclaurin Series = 𝑛(−1)𝑛 𝑧 𝑛−1 for ∣𝑧∣ < 1.
(1 + 𝑧)2 𝑛=1
Differentiating this gives us Example 8.2.6 (Undetermined coefficients). Find the Maclaurin’s series for
1 � ∞ 𝑒𝑧
= 𝑛𝑧 𝑛−1 == 1 + 𝑧 + 𝑧 2 + . . . for ∣𝑧∣ < 1. 𝑓 (𝑧) = .
(1 − 𝑧)2 cos 𝑧
𝑛=1
Clearly 𝑓 (𝑧) is analytic in the neighborhood of 0. Hence 𝑓 has the Taylor’s series representation
Differentiating this gives us
at 𝑧 = 0, say ;
� ∞ ∞
�
2 𝜋
3
= 𝑛(𝑛 − 1)𝑧 𝑛−2 for ∣𝑧∣ < 1. 𝑓 (𝑧) = 𝑎𝑛 𝑧 𝑛 𝑓 𝑜𝑟 ∣𝑧∣ < .
(1 − 𝑧) 𝑛=2 𝑛=0
2
But the successive derivatives get tedious to find the coefficients 𝑎𝑛 . Then Here � �� �
1 1 1
𝑒𝑧 =
= 𝑎0 + 𝑎1 𝑧 + 𝑎2 𝑧 2 + . . . (1 − 𝑧)(1 + 2𝑧) 1−𝑧 1 + 2𝑧
cos 𝑧
and ∞
1 �
implies 𝑒𝑧 = (𝑎0 + 𝑎1 𝑧) cos 𝑧. Thus . = 𝑧𝑛 = 1 + 𝑧 + 𝑧2 + . . .
1−𝑧 𝑛=0
𝑧2 𝑧3 𝑧2 𝑧4
1+𝑧+ + + . . . = (𝑎0 + 𝑎1 𝑧 + 𝑎2 𝑧 2 + . . .)(1 − + + . . .) in ∣𝑧∣ < 1.
2! 3! 2 24
� ∞ � ∞
which implies 1
. = (−2𝑧)𝑛 = (−1)𝑛 2𝑛 𝑧 𝑛 = 1 − 2𝑧 1 + 4𝑧 2 − 8𝑧 3 + . . .
1 + 2𝑧
𝑧2 𝑧3 1 1 𝑛=0 𝑛=0
1+𝑧+ + + . . . = 𝑎0 + 𝑎1 𝑧 + (𝑎2 − 𝑎0 )𝑧 2 + (𝑎3 − 𝑎, )𝑧 3
2 6 2 2 in ∣𝑧∣ < 12 .
Equating coefficients we get: 𝑎0 = 11 𝑎1 = 1, 𝑎2 − 12 𝑥1 = 1
2
⇒ 𝑎2 = 1, 𝑎3 = 1
6
+ 1
2
= 23 , . . . Then the product
Hence � �� � ��∞ �� �∞ �
𝑒𝑧 2 1 1
= 1 + 𝑧 + 𝑧2 + 𝑧3 + . . . = 𝑧𝑛 (−1𝑛 (2𝑧)𝑛 )
cos 𝑧 3 1−𝑧 1 + 2𝑧 𝑛=0 𝑛=0
∞
�
for ∣𝑧∣ < 𝜋2 .
= (1.(−1)𝑛 2𝑛 0 + 1.(−1)𝑛−1 2𝑛−1 + . . . + 1)𝑧 𝑛
𝑛=0
In the above example we used the product of two power series term by term. This kind of
= 1 − 𝑧 + 3𝑧 2 + . . . .
manipulation can be generalized in the following theorem.
in ∣𝑧∣ < 12 .
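The first few product coefficients computed above can be reproduced as a Cauchy product of the two geometric series; the sketch below is illustrative only (the truncation length is an arbitrary choice).

```python
import numpy as np

# Coefficients of 1/((1 - z)(1 + 2z)) as the Cauchy product of the two series.
N = 6
a = np.ones(N)                         # coefficients of 1/(1 - z):  1, 1, 1, ...
b = (-2.0) ** np.arange(N)             # coefficients of 1/(1 + 2z): 1, -2, 4, ...
c = np.convolve(a, b)[:N]              # coefficients of the product series
print(c)                               # 1, -1, 3, -5, 11, -21  (matching 1 - z + 3z^2 + ...)
```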
Theorem 8.2.6 (Termwise Product of Power series). If
∞ Remark 8.2.7. From the Fundamental Theorem of Complex Integration, we know that, if 𝑓 is
�
𝑎𝑛 (𝑧 − 𝑎)𝑛 analytic in an open disk D about 𝑎, then there exists a function 𝐹 such that 𝐹 ′ (𝑧) = 𝑓 (𝑧) for
𝑛=0
all 𝑧 in D. Then we can construct an antiderivative F of 𝑓 from the power series expansion of 𝑓
∑ ∑∞ 1
converges to 𝑓 (𝑧) in ∣𝑧 − 𝑎∣ < 𝑅1 and about 𝑎 in D. If 𝑓 (𝑧) = ∞ 𝑛
𝑛=1 𝑐𝑛 (𝑧 −𝑎) , then 𝐹 (𝑧) = 𝑛=0 𝑛+1 𝑐𝑛 (𝑧 −𝑎)
𝑛+1
is an antiderivative
∞
� of 𝑓.
𝑏𝑛 (𝑧 − 𝑎)𝑛
𝑛=0
(a) Since f is analytic every where except at 𝑧 = −𝑖, the Taylor series expansion 𝑎𝑡𝑧 = 0 is
�∞
� 1 1 1
= = −𝑖 = −𝑖 (𝑖𝑧)𝑛 ; ∣𝑧∣ < 1
�� 𝑧+𝑖 𝑖(1 + 𝑧𝑖 ) 1 − 𝑖𝑧
� 𝑛=0
= −𝑖[1 + 𝑖𝑧 + (𝑖𝑧)2 + . . .] = −𝑖[1 + 𝑖𝑧 − 𝑧 2 + . . .]
= −𝑖 + 𝑧 + 𝑖𝑧 2 − . . .
��
for ∣𝑧∣ < 1.
Figure 8.1: Annulus.
(b) However, in the annulus 1 < ∣𝑧∣ < ∞, we can expand it using Laurent series. This time
If f is analytic in D, then it admits a Laurent series representation given by we write � �� �
1 1 1 1
= =
∞ 𝑧+𝑖 𝑧(1 + 𝑧𝑖 ) 𝑧 1 + 𝑖/𝑧
�
𝑓 (𝑧) = 𝑏𝑛 (𝑧 − 𝑎)𝑛 to extract out the singularity point of 𝑓 . Since we are expanding in the annulus 1 < ∣𝑧∣ < ∞
𝑛=−∞
about 𝑧 = 0, 𝑧1 is already in the required form.
in D, with the coefficients 𝑏′𝑛 𝑠 are calculated from:
Now put 𝑡 = 𝑧𝑖 . Then ∣𝑡∣ = ∣ 𝑧𝑖 ∣ = 1
∣𝑧∣
< 1, since ∣𝑧∣ > 1. Thus we can use Taylor series
� expansion on :
1 𝑓 (𝑤)
𝑏𝑛 = 𝑑𝑤, (8.2) � ∞
2𝜋𝑖 (𝑤 − 𝑎)𝑛+1 1 1
𝑐
𝑖 = = (−1)𝑛 𝑡𝑛 𝑓 𝑜𝑟∣𝑡∣ < 1.
where C is a piecewise smooth simple closed counterclockwise curve in D. 1+ 𝑧
1 + 𝑡 𝑛=0
�∞ � �𝑛
𝑖
Remark 8.3.2. Note that: = (−1)𝑛
𝑛=0
𝑧
𝑖 1
1. If f is analytic on or in the interior of 𝐶1 then 1
(𝑤−𝑎)𝑛+1
is also analytic for all 𝑛 < 0, 𝑛 ∈ Z. = 1− − + ...
𝑧 𝑧2
Hence 𝑏𝑛 = 0∀𝑛𝜖𝑧𝑎𝑛𝑑𝑛 < 0𝑖𝑛(∗). Therefore we have the Taylor series expansion.
in 1 < ∣𝑧∣ < ∞. Therefore
� � � �
2. The Laurent series representation depends on the choice of the annulus D with the same 1 1 1 1 𝑖 1
𝑓 (𝑧) = = = 1 − − 2 + ...
center 𝑎. 𝑧+𝑖 𝑧 1 + 𝑖/𝑧 𝑧 𝑧 𝑧
1 𝑖 1
Therefore, a Laurent series is not in general a unique representation (unlike the Taylor series = − − ...
𝑧 𝑧2 𝑧3
of analytic functions). �−1
= (𝑖)(𝑛+1) 𝑧 𝑛
But do we really need to evaluate (9.1) to get the coefficients 𝑏′𝑛 𝑠? No, practically (9.1) will not 𝑛=−∞
be calculated. This can be seen in the next examples Example 8.3.2. Derive the Laurent expansion of
1
𝑓 (𝑧) =
Example 8.3.1. Expand sin 𝑧
1
𝑓 (𝑧) = about 𝑧 = 𝜋, in the annulus 0 < ∣𝑧 − 𝜋∣ < 𝜋.
𝑧+𝑖
about 𝑧 = 0 Since 𝑓 (𝑧) has singularity at 𝑧 = 𝜋, it does not admit Taylor expansion. Now let 𝑡 = 𝑧 − 𝜋. Then
𝑧 = 𝑡 + 𝜋. and we have:
1 1 1 1 𝑡
= =− =− .
sin 𝑧 sin(𝑡 + 𝜋) sin 𝑡 𝑡 sin 𝑡
Here since Remark 8.3.3. In the Laurent series expansion of a function 𝑓 about 𝑎 we have two parts:
𝑡3 𝑡5
sin 𝑡 = 𝑡 − + − . . .
� 3! 2 5! 4 � ∞
�
𝑡 𝑡 𝑓 (𝑧) = 𝑐𝑛 (𝑧 − 𝑎)𝑛
= 𝑡 1 − + − ... ,
3! 5! 𝑛=−∞
−1
� ∞
�
1 1 = 𝑐𝑛 (𝑧 − 𝑎)𝑛 + 𝑐𝑛 (𝑧 − 𝑎)𝑛
= 𝑡2 𝑡4
sin 𝑡 𝑡(1 − 3!
+ 5!
− . . .) 𝑛=−∞ 𝑛=0
�∞ �∞
1 1
has singularity at 𝑡 = 0. So, the factor 𝑡
contributes to the non-singularity of sin 𝑡
, hence we = 𝑏𝑚 (𝑧 − 𝑎)−𝑚 + 𝑐𝑛 (𝑧 − 𝑎)𝑛
factored it out. But 𝑚=1 𝑛=0
𝑡 1 �∞
𝑏𝑚 � ∞
= 𝑡2 4 = + 𝑐𝑛 (𝑧 − 𝑎)𝑛
sin 𝑡 1− 3!
+ 𝑡5! − . . . (𝑧 − 𝑎) 𝑚
𝑚=1 𝑛=0
𝑡
is not singular at 𝑡 = 0. That is, sin 𝑡
is analytic at 𝑡 = 0, hence admits a Taylor series expansion
where 𝑏𝑚 = 𝑐−𝑛 and 𝑚 = −𝑛. In this last expression the sum
in some neighborhood of 𝑡 = 0. Therefore, for 0 ≤ ∣𝑡∣ < 𝜋, the Taylor series will be of the form,
∞
�
say;
∞
� 𝑐𝑛 (𝑧 − 𝑎)𝑛
𝑡
= 𝑎0 + 𝑎1 𝑡 + 𝑎2 𝑡2 + . . . = 𝑎𝑛 𝑡𝑛 𝑛=0
sin 𝑡 𝑛=0
is part of the Taylor series if the function is analytic and the sum
(Though − 1𝑡 is singular at 𝑡 = 0, it is already a one - term Laurent series about 𝑡 = 0. Thus ∞
� 𝑏𝑚
𝑡 (𝑧 − 𝑎)𝑚
𝑚=1
𝑠𝑖𝑛𝑡
is known as the principal part of the Laurent Series. of 𝑓 (𝑧)
has been ” desingularized ” at 𝑡 = 0 by t at the numerator.)
Now to find the coefficients 𝑎0 , 𝑎1 , ..., we use the ”undetermined coefficients - method ”: Exercise 8.3.4. Expand 𝑓 (𝑧) = 𝑒1/𝑧 about 𝑧 = 0.
𝑡
𝑡 = . sin 𝑡
sin 𝑡
𝑡 3 𝑡5 8.4 Exercises
= (𝑎0 + 𝑎1 𝑡 + 𝑎2 𝑡2 + 𝑎3 𝑡3 + 𝑎4 𝑡4 . . .)(𝑡 − + − . . .)
3! 5!
𝑎0 𝑎1
= 0 + 𝑎0 𝑡 + 𝑎1 𝑡2 + (𝑎2 − )𝑡3 + (𝑎3 − )𝑡4 + . . .
3! 3!
This implies 𝑎0 = 1, 𝑎1 = 0, 𝑎2 = 16 , 𝑎3 = 0 and hence 𝑡
sin 𝑡
= 1 + 16 𝑡2 + 7 4
360
𝑡 + ...
Thus
1 1 1 7
= −( )(1 + (𝑧 − 𝜋)2 + (𝑧 − 𝜋)4 + . . .)
sin 𝑧 𝑧−𝜋 6 360
1 1 7
= − − (𝑧 − 𝜋) − (𝑧 − 4)3 − . . . .
𝑧−𝜋 6 360
is the desired Laurent series in the annulus 0 < ∣𝑧 − 𝜋∣ < 𝜋.
is singular at 𝑧 = 1
𝑘𝜋
,𝑘 = ±, 1, ±2, . . . and at 𝑧 = 0. The singularity points 𝑧 = 1
𝑘𝜋
, 𝑘 ∈ Z∖{0}
are isolated (as we can find a 𝜌 > 0,) while the point 𝑧 = 0 is not isolated because every annulus
0 < ∣𝑧∣ < 𝜌 inevitably contains at least one singular point (in fact, infinitely many of them) no
1
matter how small we choose 𝜌 > 0. (Since 𝑘𝜋
→ 0 as 𝑘 → ∞, 0 is the limit point or accumulation
point of non-singularities.)
Chapter 9
Assume that 𝑓 has an isolated singularity at 𝑧 = 𝑎 That is, there exists 𝜌 > 0 such that
0 < ∣𝑧 − 𝑎∣ < 𝜌 in which 𝑓 has a Laurent series of the form:
9.1 Zeros and Classification of Singularities. is the principal part of the Laurent series for 𝑓.
annulus 0 < ∣𝑧 − 𝜋∣ < 𝜋. clearly f is analytic on the annulus. To see this: with 𝑡 = 𝑧 − 𝜋, we 3. If the principal part has infinitely many terms, then 𝑓 is said to have an isolated essential
have: singularity at 𝑧 = 𝑎.
1 1 1 1 𝑡
= =− =− Example 9.1.3. Let
sin 𝑧 sin(𝑡 + 𝜋) sin 𝑡 𝑡 sin 𝑡
cos 𝑧
for 𝑧 ∕= 𝜋 and hence 𝑡 ∕= 0, 𝑓 (𝑧) =
.
𝑧2
𝑡
Then 𝑓 is differentiable at all 𝑧 ∕= 0 and not defined at 𝑧 = 0. Using the Maclaurin series of
sin 𝑡
is analytic in the annulus. cos 𝑧, we can get the Laurent expansion of 𝑓 (𝑧) around zero is
� ∞ �
cos 𝑧 1 � (−1)𝑛 2𝑛 1 1
Example 9.1.2. The function 𝑓 (𝑧) = 2 = 2 𝑧 = 2 − 1 + 𝑧 2 − +...
1 𝑧 𝑧 𝑛=0 (2𝑛)! 𝑧 26
𝑔(𝑧) =
sin( 𝑧1 ) 1
Then the highest power of 𝑧
in the expansion is 2, so 𝑓 has a pole of order 2 at 0.
𝑓 (𝑎) = 0. A zero 𝑎 of 𝑓 is said to have order 𝑚 if 𝑓 (𝑎) = 𝑓 ′ (𝑎) = 𝑓 ′′ (𝑎) = . . . = 𝑓 (𝑚−1) (𝑎) = 0
(𝑧 − 𝑎)2 𝑓 (𝑧) = 𝑏2 + 𝑏1 (𝑧 − 𝑎) + (𝑧 − 𝑎)2 𝑔(𝑧).
and 𝑓 (𝑚) (𝑎) ∕= 0.
′′′
Since 𝑔(𝑧) is analytic at 𝑎, it is continuous at 𝑎. Hence
Example 9.1.4. Let 𝑓 (𝑧) = 𝑧 3 . Then 𝑓 ′ (𝑧) = 3𝑧 2 , 𝑓 ′′ (𝑧) = 6𝑧 and 𝑓 (𝑧) = 6.
Here 𝑓 (0) = 𝑓 ′ (0) = 𝑓 ′′ (0) = 0 and 𝑓 ′′′ (0) = 6 ∕= 0 and hence 𝑓 has aa zero, 𝑧 = 0, of order 2.
lim (𝑧 − 𝑎)2 𝑓 (𝑧) = 𝑏2 + (𝑏1 × 0) + (0 × 𝑔(𝑎)) = 𝑏2 .
𝑧→𝑎
Remark 9.1.2. If 𝑓 is analytic at 𝑎 and 𝑓 (𝑎) ∕= 0, we say for convenience that f has a zero, This implies ∣(𝑧 − 𝑎)2 𝑓 (𝑧)∣ = ∣𝑧 − 𝑎∣2 ∣𝑓 (𝑧)∣ → ∣𝑏2 ∣ ∕= 0 which implies
𝑧 = 𝑎, of order 0, or a zeroth-order of zero at 𝑧 = 𝑎.
∣𝑏2 ∣
lim ∣𝑓 (𝑧)∣ = lim = ∞.
Now we state a theorem which help us to determine the different kind of singularities of a function. 𝑧→𝑎 𝑧→𝑎 ∣𝑧 − 𝑎∣2
Theorem 9.1.3. Let 𝑝 and 𝑞 be analytic functions at 𝑧 = 𝑎, and have zero of order P and Q Hence we have the following theorem for the general case.
respectively at 𝑎. Then Theorem 9.1.4 (Behavior of a function near its pole). If 𝑓 (𝑧) has a pole at 𝑧 = 𝑎, then
1
1. 𝑓 (𝑧) = 𝑝(𝑧)
has a pole of order P at 𝑧 = 𝑎. lim ∣𝑓 (𝑧)∣ = ∞.
𝑛→𝑧
𝑝(𝑧)
2. 𝑓 (𝑧) = 𝑞(𝑧)
has a pole of order 𝑁 = 𝑄 − 𝑃 at 𝑧 = 𝑎, if 𝑄 − 𝑃 > 0, and 𝑓 is analytic at 𝑎
However, if f has an essential singularity at 𝑧 = 𝑎, the above theorem does not hold in general.
if 𝑄 ≤ 𝑃.
That is, the integral I is equal to 2𝜋𝑖 times the sum of the residues of 𝑓 in C.
Example 9.2.1. Calculate the residue and evaluate

        I = ∮_C z³ cos(1/z) dz,

where C is a circle |z| = 1 oriented counterclockwise.
𝑧𝑒𝜋𝑧
𝑗 𝑡ℎ derivative of 𝑔(𝑧) at 𝑧 = 𝑎 divided by 𝑗!. That is, Example 9.2.3. Let 𝑓 (𝑧) = (𝑧−2)2 (𝑧 2 +4)
. Evaluate the integral I of 𝑓 over the ellipse 9𝑥2 +𝑦 2 = 9
counterclockwise.
1 𝑑𝑗
𝑐−𝑁 +𝑗 = 𝑔(𝑎).
𝑗! 𝑑𝑧 𝑗
Solution
When 𝑖 = 𝑁 − 1, −𝑁 + 𝑗 = −1 and 𝑐−𝑁 +𝑗 becomes 𝑐−1 , the residue of 𝑓 at 𝑧 = 𝑎.
Therefore, Since the denominator (𝑧 − 2)2 (𝑧 2 + 4) has zeros at 𝑧 = 2 of order 2 and 𝑧 = ±2𝑖 each of order
1
𝑐−1 = lim 𝑔 (𝑁 −1) (𝑧), 1, 𝑓 has poles at 𝑧 = 2 of order 2 and at 𝑧 = 2𝑖 and at 𝑧 = −2𝑖 of order 1 (as the numerator
(𝑁 − 1)! 𝑧→𝑎
𝑧𝑒𝜋𝑧 has no zeros). But since 𝑧 = 2 is not inside the ellipse C, it has no relevance for integration.
where 𝑔(𝑧) = (𝑧 − 𝑎)𝑁 𝑓 (𝑧). This holds true only if the singularity at 𝑧 = 𝑎 is not essential.
Hence we consider only 𝑧 = −2𝑖 and 𝑧 = 2𝑖. Their respective residues are:
Theorem 9.2.2 (Residue at a Pole of Order 𝑚.). Let 𝑓 be a function having a pole of order 𝑚
at 𝑧 = 𝑎. Then � �
1 𝑑𝑚−1 𝑧𝑒𝜋𝑧 𝑧𝑒𝜋𝑧
𝑅𝑒𝑠(𝑓, 𝑎) = lim 𝑚−1 (𝑧 − 𝑎)𝑚 𝑓 (𝑧) . 𝑅𝑒𝑠𝑎𝑡𝑧=−2𝑖 = lim × (𝑧 + 2𝑖)
(𝑚 − 1)! 𝑧→𝑎 𝑑𝑧 (𝑧 − 2)2 (𝑧 2 + 4) 𝑧→−2𝑖 (𝑧 − 2)2 (𝑧
− 2𝑖)(𝑧 + 2𝑖)
−2𝜋𝑖
(−2𝑖)𝑒 1 1 −𝑖
Example 9.2.2. Evaluate all residues of 𝑓 (𝑧) = 1 = = = =
(𝑧+2)(𝑧−1)3 (−2 − 2𝑖)2 (−4𝑖) 2(−2 − 2𝑖)2 16𝑖 16
and
Solution 𝑧𝑒𝜋𝑧 𝑧𝑒𝜋𝑧
𝑅𝑒𝑠𝑎𝑡𝑧=2𝑖 = lim (𝑧 − 2𝑖)
(𝑧 − 2)2 (𝑧 2 + 4) 𝑧→+2𝑖 (𝑧 − 2)2 (𝑧− 2𝑖)(𝑧 + 2𝑖)
The denominator of 𝑓 has first - order zero at 𝑧 = −2 and 3𝑟𝑑 order zero at 𝑧 = 1. Since the
(2𝑖)𝑒2𝜋𝑖 2𝑖
numerator 1 has no zeros, 𝑓 has first - order pole at 𝑧 = −2 and a third - order pole at 𝑧 = 1 = =
(2𝑖 − 2)2 (2𝑖 + 2𝑖) 4(−1 + 𝑖)2 (42 𝑖)
(N = 3). Thus 1 𝑖
= =
� � 8 × (−2𝑖) 16
1 1 1 1 1
𝑅𝑒𝑠𝑓 = lim (𝑧 + 2). = lim =− 3 =− Therefore,
0! 𝑧→−2 (𝑧 + 2)(𝑧 − 1) 3 𝑧→−2 (𝑧 − 1) 3 3 27 � � �
−𝑖 𝑖
𝑓 (𝑧)𝑑𝑧 = 2𝜋𝑖 + = 0.
and � �(𝑁 −1)=2 𝐶 16 16
1 1 1 1 ′′
𝑅𝑒𝑠𝑓 = lim (𝑧 − 1)3 . = lim ( )
(3 − 1)! 𝑧→1 (𝑧 + 2)(𝑧 − 1)3 2! 𝑧→1 𝑧 + 2
9.3 Evaluation of Real Integrals.
1 (−1).(−2) 1 1 1
= lim = lim = 3 = .
2 𝑧→1 (𝑧 + 2)3 𝑧→1 (𝑧 + 2)3 3 27
Consider the class of real integrals of the general form
Thus � � � 2𝜋
𝑑𝑧 1 1 𝐼= 𝐹 (cos 𝜃, sin 𝜃)𝑑𝜃
𝑓 (𝑧)𝑑𝑧 = = 2𝜋𝑖(− + ) = 0,
𝐶 𝑐 (𝑧 + 2)(𝑧 − 1)3 27 27 0
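The cancellation of the two residues in Example 9.2.2 can be confirmed numerically; the sketch below is illustrative only and assumes a contour enclosing both poles (the circle |z| = 3 is an arbitrary choice, not specified in the notes).

```python
import numpy as np

# Numerical check of Example 9.2.2: the residues at z = -2 and z = 1 cancel,
# so the integral over |z| = 3 (which encloses both points) should vanish.
n = 50000
t = 2 * np.pi * (np.arange(n) + 0.5) / n
z = 3 * np.exp(1j * t)
dz = 3j * np.exp(1j * t) * (2 * np.pi / n)

print(np.sum(dz / ((z + 2) * (z - 1) ** 3)))   # ~ 0
```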
Example 9.3.1. The functions since 𝑧1 lies outside of C, its contribution to the integral is zero, Thus we’ve:
�
2 − cos2 𝜃 sin 𝜃 −2𝑑𝑧 −2
𝐹1 (cos 𝜃, sin 𝜃) = 𝐼= = 2𝜋𝑖𝑥𝑅𝑒𝑠𝑎𝑡𝑧=𝑧2 [ ]
1 + cos 𝜃 𝑐 (𝑧 − 𝑧1 )(𝑧 − 𝑧2 ) (𝑧 − 𝑧1 )(𝑧 − 𝑧2 )
−2
and = 2𝜋𝑖𝑥 lim [(𝑧 − 𝑧2 ). ] (9.2)
sin 𝑎𝜃 𝑧→𝑧2 (𝑧 − 𝑧1 )(𝑧 − 𝑧2 )
𝐹2 (cos 𝜃, sin 𝜃) = , −2
(1 + cos 𝑏𝜃)2 = 2𝜋𝑖𝑥 lim ( ) (9.3)
𝑧→𝑧2 𝑧 − 𝑧1
for 𝑎, 𝑏 ∈ R, rational function of cos 𝜃 and sin 𝜃. −2 1
= 2𝜋𝑖𝑥 = 2𝜋𝑖𝑥 − √ (9.4)
𝑧2 − 𝑧 1 5𝑖
To evaluate integrals of the above form, use the change of variables 𝑧 = 𝑒 . This change of 𝑖𝜃 √
2 2 5
variables will transform the real integral into a closed path complex integral. If 𝜃1 = 0 then = √ 𝜋𝑜𝑟 𝜋. (9.5)
5 5
2𝜋𝑖
𝑧1 = 1 and if 𝜃2 = 2𝜋 then 𝑧2 = 𝑒 = 1. (9.6)
𝑖𝜃 𝑑𝑧
Here 𝑑𝑧 = 𝑖𝑒 𝑑𝜃, which implies that 𝑑𝜃 = 𝑖𝑧
and with this change of variable we get: cos 𝜃 =
𝑒𝑖𝜃 +𝑒−𝑖𝜃 𝑧+𝑧 −1 𝑧 2 +1 𝑒𝑖𝜃 −𝑒−𝑖𝜃 𝑧−𝑧 −1 𝑧 2 −1
2
= 2
= 2𝑧
and sin 𝜃 = 2𝑖
= 2𝑖
= 2𝑧𝑖
. ∫ 2𝜋
Hence ∴ 0
𝑑𝜃
2−𝑠𝑖𝑛𝜃
= √2 𝜋
� 2𝜋 � � � 5
𝑧 2 + 1 𝑧 2 − 1 𝑑𝑧
𝐼= 𝐹 (𝑐𝑜𝑠𝜃, 𝑠𝑖𝑛𝜃)𝑑𝜃 = 𝐹 , .
0 𝐶 2𝑧 2𝑖𝑧 𝑖𝑧 Exercise. Evaluate using Residue Theorem:
∫ 𝜋 𝑐𝑜𝑠𝜃 ∫ 2𝜋 𝑐𝑜𝑠𝜃
Then we can use the Residue Theorem to evaluate the final integral. 𝐼 = 0 17−8𝑐𝑜𝑠𝜃 𝑑𝜃 = 12 0 17−8𝑐𝑜𝑠𝜃 𝑑𝜃.
𝜋
Example 9.3.2. Evaluate: Ans. 60
� 2𝜋
𝑑𝜃
𝐼= .
0 2 − sin 𝜃
9.3.1 Improper Integrals:
Solution
. Consider real Integrals of type:
𝑖𝜃 𝑑𝑧 ∫∞
Let 𝑧 = 𝑒 . Then 𝑑𝜃 = 𝑖𝑧
. As 𝜃 goes from 0 to 2𝜋, 𝑧 traverses through a complete circular −∞
𝑓 (𝑥)𝑑𝑥
revolution with radius 𝑟 = 1. Thus Clearly the improper integral can be written as:
� 2𝜋 � ∫∞ ∫0 ∫𝐵
𝑑𝜃 1 𝑑𝑧 𝑓 (𝑥)𝑑𝑥 = lim𝐴→∞ −𝐴 𝑓 (𝑥)𝑑𝑥 + lim𝐵→∞ 0 𝑓 (𝑥)𝑑𝑥.
𝐼= = .
𝑧 2 −1 𝑖𝑧
−∞
0 2 − 𝑠𝑖𝑛𝜃 𝑐 2 − 2𝑖𝑧
� If both limits exists, then the improper integral is said to be convergent and can be expressed in
𝑑𝑧
= −2 2 − 4𝑖𝑧 − 1 the form:
𝑧
�𝑐 ∫∞ ∫𝑅
𝑑𝑧 𝑓 (𝑥)𝑑𝑥 = lim𝑅→∞ 𝑓 (𝑥)𝑑𝑥. . . . . . . (∗)
= −2 , −∞ −𝑅
𝑐 (𝑧 − 𝑧 1 )(𝑧 − 𝑧2 ) 𝑝(𝑥)
Now assume that 𝑓 (𝑥) = 𝑞(𝑥)
𝑠.𝑡.𝑞(𝑥) ∕= 0∀𝑥𝜖ℜ and deg. 𝑞(𝑥) − 𝑑𝑒𝑔.𝑝(𝑥) ≥ 2. Then clearly (*)
√ √
where 𝑧1 = (2 + 5)𝑖 and 𝑧2 = (2 − 5)𝑖 is convergent and we can use the expression in (**) without any further remark.
Consider the upper semicircle C and define a complex 𝑓 𝑛𝑓 (𝑧)𝑤𝑖𝑡ℎ𝑟𝑒𝑎𝑙𝑝𝑎𝑟𝑡𝑓 (𝑥).𝑇 ℎ𝑒𝑛.
∮ ∫𝑅 ∫ ∑
𝑐
𝑓 (𝑧)𝑑𝑧 = −𝑅 𝑓 (𝑥)𝑑𝑥 + 𝑐𝑅 𝑓 (𝑧)𝑑𝑧 = 2𝜋𝑖 𝑅𝑒𝑠𝑓 (𝑧)
∫𝑅 ∑ ∫
⇒ −𝑅 𝑓 (𝑥)𝑑𝑥 = 2𝜋𝑖 𝑅𝑒𝑠𝑓 (2) − 𝑐𝑅 𝑓 (𝑧)𝑑𝑧.
∫ ∫∞ ∫ ∞−∞ 𝑐𝑜𝑠𝑎𝑥 1 −𝑎 𝜋 −𝑎
Now as 𝑅 → ∞, we consider the integral 𝑐𝑅
𝑓 (𝑧)𝑑𝑧. using the substitution 𝑧 = 𝑅𝑒𝑖𝜃 . then 𝐶𝑅 ⇒ 𝐼 = 0 𝑐𝑜𝑠𝑎𝑥
𝑥2 +1
𝑑𝑥 12 𝑥2 +1
𝑑𝑥 2 𝜋𝑒 = 2 𝑒
can be represented parametrically that ∣𝑧∣ = 𝑅 ⇒ and hence 𝑅 = const. will be the 𝑒𝑞𝑛𝑜𝑓 𝐶𝑅 .𝑎𝑠𝑧 Exercise! Evaluate
ranges along 𝐶𝑅 , 𝜃 varies from0𝑡𝑜𝜋. ∫ ∞ 𝑥1/3
𝐷 (𝑥+1)2
𝑑𝑥
Thus ∣𝑓 (𝑧)∣ < ∣𝑧∣𝑘2 for ∣𝑧∣ = 𝑅 > 𝑅0 sufficiently large k. constant
∫ ∫ using Residue Theorem.
⇒ ∣ 𝑐𝑅 𝑓 (𝑥)𝑑𝑧∣ ≤ 𝑐𝑅 ∣𝑓 (𝑧)∣𝑑𝑧 < 𝑅𝑘2 𝜋𝑅 = 𝑘𝜋 𝑅
𝑓 𝑜𝑟𝑅 > 𝑅0
� ∞ � 𝑅 � �
∴ 𝑓 (𝑥)𝑑𝑥 = lim 𝑓 (𝑥)𝑑𝑥 = lim [2𝜋𝑖 𝑅𝑒𝑠𝑓 (𝑧) − 𝑓 (𝑧)𝑑𝑧]
−∞ 𝑅→∞ −𝑅 𝑅→∞ 𝑐𝑅
� �
= 2𝜋𝑖 𝑅𝑒𝑠𝑓 (𝑧) − lim 𝑓 (𝑧)𝑑𝑧 (9.7)
𝑅→∞ 𝑐
𝑅
�
2𝜋𝑖 𝑅𝑒𝑠𝑓 (𝑧) − 0 (9.8)
(9.9)
∫∞
∴ −𝑅
𝑓 (𝑥)𝑑𝑥 = 2𝜋𝑖 ∼ 𝑅𝑒𝑠𝑓 (𝑧)
where sum is over all the residues of f over the upper half plane.
Examples. 1. Evaluate:
∫ ∞ 𝑐𝑜𝑠𝑎𝑥
0 𝑥2 +1
𝑑𝑥 𝑎 > 𝑜.
soln.
∫∞ 𝑐𝑜𝑠𝑎𝑥 1
∫∞
𝐼= 0 𝑥2 +1
𝑑𝑥 = 2
−∞ 𝑐𝑜𝑠𝑎𝑥
𝑥2 +1
𝑑𝑥
𝑒𝑖𝑎𝑧
Now consider the function 𝑓 (𝑧) = 𝑧 2 +1
.Clearly f is analytic everywhere except at 𝑧 = ±𝑖. At these two points, f has simple poles.
∮ 𝑒𝑖𝑎𝑧
Thus 𝑐 𝑧2 1
𝑑𝑧 = 2𝜋𝑖𝑅𝑒𝑠𝑎𝑡𝑧=𝑖 .𝑓 (𝑧)(𝑎𝑡𝑡ℎ𝑒𝑢𝑝𝑝𝑒𝑟ℎ𝑎𝑙𝑓 𝑝𝑙𝑎𝑛𝑒).
𝑖𝑎𝑥𝑖
= 2𝜋𝑖 𝑒 2𝑖 = 𝜋𝑒−𝑎
� � 𝑅 𝑖𝑎𝑥 �
𝑒𝑖𝑎𝑧 𝑒 𝑒𝑖𝑎𝑧
𝑖.𝑒.𝜋𝑒−𝑎 = 2
𝑑𝑧 = lim 𝑑𝑥 + lim 𝑑𝑧
𝑐 𝑧 +1 𝑅→∞ −𝑅 𝑥2 + 1 𝑅→∞ 𝑐 𝑧 2 + 1
𝑅
� 𝑅 �
𝑐𝑜𝑠𝑎𝑥 + 𝑖𝑠𝑖𝑛𝑎𝑥 𝑒𝑖𝑎𝑧
= lim 2
𝑑𝑥 + lim 2
𝑑𝑧 (9.10)
𝑅→∞ −𝑅 𝑥 +1 𝑅→∞ 𝑐 𝑧 + 1
𝑅
� ∞ � ∞
𝑐𝑜𝑠𝑎𝑥 𝑠𝑖𝑛𝑎𝑥 𝑒𝑖𝑎𝑧 1 1
= 2
𝑑𝑥 + 𝑖 2
𝑑𝑥∣ 2 ∣≤∣ 2 ∣≤ √ (9.11)
−∞ 𝑥 + 1 −∞ 𝑥 + 1 𝑧 + 1 𝑧 + 1 (𝑅 − 1) 𝑅2 + 1
∫∞ ∫∞
∴ 𝑐𝑜𝑠𝑎𝑥
−∞ 𝑥2 +1
𝑑𝑥 = 𝜋𝑒−𝑎 𝑎𝑛𝑑 𝑠𝑖𝑛𝑎𝑥
−∞ 𝑥2 +1
𝑑𝑥 =0
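The value πe^{−a} obtained above can be spot-checked by a straightforward numerical quadrature over a large finite interval; the sketch below is illustrative only (the choice a = 1, the interval and the grid size are arbitrary).

```python
import numpy as np

# Numerical check: integral of cos(a x)/(x^2 + 1) over the real line = pi * exp(-a), a > 0.
a = 1.0
x = np.linspace(-200.0, 200.0, 2_000_001)
y = np.cos(a * x) / (x**2 + 1.0)
approx = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))   # trapezoidal rule by hand

print(approx, np.pi * np.exp(-a))   # both ~ 1.1557
```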