M.Sc. (Mathematics)
IV - Semester
311 43
NUMERICAL ANALYSIS
Authors:
Dr. N. Dutta, Professor of Mathematics, Head - Department of Basic Sciences & Humanities, Heritage Institute of Technology,
Kolkata
Units (2, 4, 6-8, 10-13)
Vikas® Publishing House: Units (1, 3, 5, 9, 14)
All rights reserved. No part of this publication which is material protected by this copyright notice
may be reproduced or transmitted or utilized or stored in any form or by any means now known or
hereinafter invented, electronic, digital or mechanical, including photocopying, scanning, recording
or by any information storage or retrieval system, without prior written permission from the Alagappa
University, Karaikudi, Tamil Nadu.
Information contained in this book has been published by VIKAS® Publishing House Pvt. Ltd. and has
been obtained by its Authors from sources believed to be reliable and are correct to the best of their
knowledge. However, the Alagappa University, Publisher and its Authors shall in no event be liable for
any errors, omissions or damages arising out of the use of this information and specifically disclaim any
implied warranties of merchantability or fitness for any particular use.
Work Order No. AU/DDE/DE 12-02/Preparation and Printing of Course Materials/2020 Dated 30.01.2020 Copies - 1000
SYLLABI-BOOK MAPPING TABLE
Numerical Analysis
Syllabi Mapping in Book
1.0 INTRODUCTION
1.1 OBJECTIVES
u(x) = f(x) + λ ∫_{α(x)}^{β(x)} K(x, t) u(t) dt   …(1.1)

where K(x, t) is called the kernel of the integral Equation (1.1), and α(x)
and β(x) are the limits of integration. It can be easily observed that the unknown
function u(x) appears under the integral sign. It is to be noted here that both the
kernel K(x, t) and the function f(x) in Equation (1.1) are given functions, and λ is
a constant parameter. We have to determine the unknown function u(x) that will
satisfy Equation (1.1).
An integral equation can be classified as a linear or nonlinear integral equation.
The most frequently used integral equations fall under two major classes, namely
Volterra and Fredholm integral equations. In this unit we will distinguish the following
integral equations:
Volterra integral equations
Fredholm integral equations
Self-Instructional
2 Material
Transcendental and Polynomial Equations

Volterra Integral Equations
The most standard form of Volterra linear integral equations is of the form,

φ(x) u(x) = f(x) + λ ∫_0^x K(x, t) u(t) dt

where the limits of integration are functions of x and the unknown function
u(x) appears linearly under the integral sign. If the function φ(x) = 1, then the equation becomes

u(x) = f(x) + λ ∫_0^x K(x, t) u(t) dt

and this equation is known as the Volterra integral equation of the second
kind; whereas if φ(x) = 0, then the equation becomes the Volterra integral equation of the first kind,

f(x) + λ ∫_0^x K(x, t) u(t) dt = 0

Fredholm Integral Equations

The most standard form of Fredholm linear integral equations is of the form,

φ(x) u(x) = f(x) + λ ∫_a^b K(x, t) u(t) dt   …(1.2)
where the limits of integration a and b are constants and the unknown function
u(x) appears linearly under the integral sign. If the function φ(x) = 1, then Equation
(1.2) becomes,

u(x) = f(x) + λ ∫_a^b K(x, t) u(t) dt

and this equation is called the Fredholm integral equation of the second kind; whereas
if φ(x) = 0, then Equation (1.2) gives,

f(x) + λ ∫_a^b K(x, t) u(t) dt = 0

which is the Fredholm integral equation of the first kind.
The Laplace transform of f(t) is defined as,

L{f(t)} = F(s) = ∫_0^∞ e^{−st} f(t) dt

Using this definition, the above integral equation can be transformed into an algebraic equation in the transform domain, since by the convolution theorem,

L{∫_0^x K(x − t) u(t) dt} = L{K(x)} · L{u(x)}
Method of Successive Approximation to Solve Volterra Integral Equations of Second Kind
Volterra integral equation of the second kind is of the form,

u(x) = f(x) + λ ∫_0^x K(x, t) u(t) dt

where K(x, t) is the kernel of the integral equation, f (x) a continuous function
of x and λ a parameter. Here, f (x) and K(x, t) are the given functions but u(x) is
an unknown function that needs to be determined. The limits of integral for the
Volterra integral equations are functions of x.
In this method of approximation, we replace the unknown function u(x)
under the integral sign of the Volterra equation by any selected real valued continuous
function u0(x), called the zeroth approximation. This substitution gives the first
approximation u1(x) by

u1(x) = f(x) + λ ∫_0^x K(x, t) u0(t) dt
It is obvious that u1(x) is continuous if f (x), K(x, t) and u0(x) are continuous.
The second approximation u2(x) can be obtained similarly by replacing u0(x) in
the above equation by u1(x) obtained above. And we find,

u2(x) = f(x) + λ ∫_0^x K(x, t) u1(t) dt

Continuing in this manner, we obtain the sequence u0(x), u1(x), u2(x), …, un(x), … and take the solution as u(x) = lim_{n→∞} un(x),
so that the resulting solution u(x) is independent of the choice of the zeroth
approximation u0(x). This process of approximation is extremely simple. However,
if we follow Picard’s successive approximation method, we need to set u0(x)
= f (x), and determine u1(x) and the other successive approximations as follows:

u1(x) = f(x) + λ ∫_0^x K(x, t) f(t) dt

un(x) = f(x) + λ ∫_0^x K(x, t) u_{n−1}(t) dt
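The scheme above can be sketched numerically in Python. The uniform grid, the trapezoidal quadrature, and the particular choice K(x, t) = 1 with f(x) = 1 (for which the exact solution is u(x) = e^x) are illustrative assumptions, not taken from the text:

```python
import numpy as np

def volterra_picard(f, K, x, lam=1.0, iterations=30):
    """Successive approximation for u(x) = f(x) + lam * int_0^x K(x, t) u(t) dt
    on the uniform grid x, starting from the zeroth approximation u0(x) = f(x)."""
    h = x[1] - x[0]
    u = f(x)                                  # u0(x) = f(x), Picard's choice
    for _ in range(iterations):
        integral = np.zeros_like(u)
        for i in range(1, len(x)):
            g = K(x[i], x[:i + 1]) * u[:i + 1]        # integrand K(x_i, t) u(t)
            # trapezoidal rule over [x_0, x_i]
            integral[i] = h * (0.5 * g[0] + g[1:-1].sum() + 0.5 * g[-1])
        u = f(x) + lam * integral
    return u

# Illustrative example: u(x) = 1 + int_0^x u(t) dt, exact solution u(x) = e^x
x = np.linspace(0.0, 1.0, 201)
u = volterra_picard(lambda s: np.ones_like(s), lambda xi, t: np.ones_like(t), x)
print(abs(u[-1] - np.e))  # small discretization error
```

Each pass applies the integral operator once; for this kernel the iterates settle to quadrature precision well within the stated number of passes.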
The repeated integrals may be considered as a double integral
over the triangular region; thus, interchanging the order of integration, we obtain
Similarly, the higher approximations can be obtained, and hence the solution of the Volterra equation is given by u(x) = lim_{n→∞} un(x).
A Volterra integral equation of the form,

u(x) = f(x) + λ ∫_0^x K(x − t) u(t) dt

where the kernel K(x − t) is of convolution type, can very easily be solved
using the Laplace transform method. To begin the solution process, we first define
the Laplace transform of u(x),

L{u(x)} = U(s) = ∫_0^∞ e^{−sx} u(x) dx
Example 1: Solve the following Volterra integral equation of the second kind of
the convolution type using (a) the Laplace transform method and (b) the successive
approximation method.
In the double integration the order of integration is changed to obtain the
final result. In a similar manner, the fourth approximation u4(x) can be at once
written down. Now, as n → ∞, the sequence un(x) converges to the solution u(x).
is the remainder after n terms. Now,
Accordingly, the general series for u(x) can be written as

u(x) = f(x) + λ ∫_0^x H(x, t; λ) f(t) dt

where H(x, t; λ) is the resolvent kernel; this is the solution of the Volterra integral equation of the second kind. When both K(x, t) and f(x) are continuous, the resolvent kernel can
be constructed in terms of the Neumann series,

H(x, t; λ) = Σ_{m=0}^∞ λ^m K_{m+1}(x, t)
where K_{m+1}(x, t) is the iterated kernel, which is evaluated as,

K_{m+1}(x, t) = ∫_t^x K(x, s) K_m(s, t) ds,  m = 1, 2, …

and K_1(x, t) = K(x, t).
To show this, assume the following infinite series form for the solution u(x),

u(x) = Σ_{m=0}^∞ λ^m ψ_m(x)

Substituting this in the Volterra integral equation of the second kind and
assuming convergence good enough to allow the exchange of summation with the
integration operation, we get
And
Similarly,
Therefore,
Solution of a Volterra Integral Equation of the First Kind

The first kind Volterra equation is usually written as,

∫_0^x K(x, t) u(t) dt = f(x)
Differentiating both sides with respect to x by Leibniz's rule gives

K(x, x) u(x) + ∫_0^x ∂K(x, t)/∂x · u(t) dt = f′(x)

If K(x, x) ≠ 0, then dividing by K(x, x) reduces this to a Volterra equation of the second kind; this first way requires f(x) to be differentiable.
The second way to obtain the second kind Volterra integral equation from
the first kind is by using integration by parts, if we set

φ(x) = ∫_0^x u(t) dt

which, after integrating by parts, reduces the first kind equation to a second kind equation in φ(x), giving
φ(0) = 0, and dividing out by K(x, x) we have
In this method the function f(x) is not required to be differentiable. But u(x)
must finally be calculated by differentiating the function φ(x) given by the formula
with boundary
conditions
Therefore,
Thus,
Example 3: Consider the boundary value problem,
Solution: Integrating the equation with respect to x from 0 to x two times yields
And
Therefore,
If we specialize our problem to the simple linear BVP y″(x) = −y(x), 0 < x
< 1 with the boundary conditions y(0) = y0, y(1) = y1, then y(x) reduces to the
second kind Fredholm integral equation,
Method of Successive Approximation to Solve Fredholm Equations of Second Kind

The successive approximation method, which was successfully applied to Volterra
integral equations of the second kind, can also be applied to the basic Fredholm
integral equation of the second kind:

u(x) = f(x) + λ ∫_a^b K(x, t) u(t) dt
We set u0(x) = f (x). Note that the zeroth approximation can be any selected
real valued function u0(x), a ≤ x ≤ b.
Accordingly, the first approximation u1(x) of the solution of u(x) is defined
by

u1(x) = f(x) + λ ∫_a^b K(x, t) u0(t) dt
This process can be continued in the same manner to obtain the nth
approximation. In other words, the various approximations can be put in a recursive
scheme given by

un(x) = f(x) + λ ∫_a^b K(x, t) u_{n−1}(t) dt,  n ≥ 1
Even though we can select any real valued function for the zeroth
approximation u0(x), the most commonly selected functions for u0(x) are u0(x) =
0, 1 or x. With the selection of u0(x) = 0, the first approximation u1(x) =
f (x). The final solution u(x) is obtained by

u(x) = lim_{n→∞} un(x)

so that the resulting solution u(x) is independent of the choice of u0(x). This
is known as Picard’s method. The Neumann series is obtained if we set u0(x) = f (x) such that
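The recursive scheme can be sketched in Python. The trapezoidal quadrature and the test kernel K(x, t) = xt with f(x) = x (for which the exact solution is u(x) = 3x/2) are illustrative choices, not from the text:

```python
import numpy as np

def fredholm_picard(f, K, a, b, lam=1.0, n=201, iterations=60):
    """Successive approximation for u(x) = f(x) + lam * int_a^b K(x, t) u(t) dt,
    starting from u0(x) = f(x) (the Neumann-series choice)."""
    x = np.linspace(a, b, n)
    h = x[1] - x[0]
    w = np.full(n, h)
    w[0] = w[-1] = h / 2                      # trapezoidal quadrature weights
    Kmat = K(x[:, None], x[None, :])          # matrix of kernel values K(x_i, t_j)
    u = f(x)
    for _ in range(iterations):
        u = f(x) + lam * Kmat @ (w * u)       # u_{m+1} = f + lam * integral of K u_m
    return x, u

# Illustrative example: u(x) = x + int_0^1 (x t) u(t) dt, exact solution u(x) = 3x/2
x, u = fredholm_picard(lambda s: s, lambda s, t: s * t, 0.0, 1.0)
print(np.max(np.abs(u - 1.5 * x)))  # small discretization error
```

The iteration converges here because the integral operator is a contraction for this kernel; for a general kernel convergence requires |λ| times the size of the kernel to be small enough.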
Where,
The second approximation u2(x) can be obtained as,
Where,
The final solution u(x), known as the Neumann series, can be obtained as,
Where,
Thus,
And
is the solution.
Substituting the value of u(t) in this equation, we get
or
Hence,
Iterated Kernels and Neumann Series for Fredholm Equations

The Liouville-Neumann series is defined as,

u(x) = f(x) + Σ_{n=1}^∞ λ^n ∫_a^b K_n(x, t) f(t) dt
Then,
Therefore,
To find uj define,
where I is the identity matrix. Let D(c) be the determinant of the matrix M; then
Where,
And the equation
now becomes
So that,
For
Or
where, and
1.4 SUMMARY
where K(x, t) is called the kernel of the integral equation, and α(x) and
β(x) are the limits of integration.
The most frequently used integral equations fall under two major classes,
namely Volterra and Fredholm integral equations.
In Volterra equation one of the limits of integration is variable while in
Fredholm equation both the limits are constant.
1.5 KEY WORDS
Short-Answer Questions
1. Write the two kinds of Volterra integral equations.
2. What is the basic difference between Volterra and Fredholm equations?
3. List the methods used to solve Fredholm and Volterra integral equations of
the second kind.
4. How can you find the solution of the Volterra integral equation of the first
kind?
5. Define iterated kernel for Fredholm and Volterra integral equations.
Long-Answer Questions
1. Reduce the following initial value problem to an equivalent Volterra equation:
3. Find the solution of the following Volterra integral equations of the first kind:
(a)
(b)
(c)
(d)
(a)
(b)
(c)
(d)
(e)
UNIT 2 METHODS FOR FINDING COMPLEX ROOTS AND POLYNOMIAL EQUATIONS
Structure
2.0 Introduction
2.1 Objectives
2.2 Methods for Finding Complex Roots
2.3 Polynomial Equations
2.4 Answers to Check Your Progress Questions
2.5 Summary
2.6 Key Words
2.7 Self Assessment Questions and Exercises
2.8 Further Readings
2.0 INTRODUCTION
2.1 OBJECTIVES
[Fig. 2.3 Graph of y = 1/x and y = log10 x]
The point of intersection of the curves has its x-coordinate value 2.5
approximately. Thus, the location of the root is 2.5.
Tabulation Method: In the tabulation method, a table of values of f (x) is made
for values of x in a particular range. Then, we look for the change in sign in the
values of f (x) for two consecutive values of x. We conclude that a real root lies
between these values of x. This is true if we make use of the following theorem on
continuous functions.
Theorem 1: If f (x) is continuous in an interval (a, b) and f (a) and f(b) are of
opposite signs, then there exists at least one real root of f (x) = 0, between a
and b.
Consider for example, the equation f (x) = x3 – 8x + 5 = 0
Constructing the following table of x and f (x):

x :      −4   −3   −2   −1    0    1    2    3
f (x) : −27    2   13   12    5   −2   −3    8
We observe that there is a change in sign of f (x) in each of the sub-intervals (–4,
–3), (0, 1) and (2, 3). Thus we can take the crude approximations for the three real
roots as – 3.2, 0.2 and 2.2.
Algorithm: Computation of a root of f (x) = 0 by bisection method.
Step 0: Define f (x)
Step 1: Read epsilon, x0, x1 [epsilon is the desired accuracy]
Step 2: Compute y0 = f (x0) and y1 = f (x1)
Step 3: If y0 y1 > 0, go to Step 1 [the interval (x0, x1) does not bracket a root]
Step 4: Compute x2 = (x0 + x1)/2 and y2 = f (x2)
Step 5: If y0 y2 > 0, set x0 = x2; else set x1 = x2
Step 6: If |(x1 – x0)/x0| > epsilon, go to Step 4
Step 7: Print ‘root’ = x2
Step 8: End
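The algorithm can be sketched in Python; the stopping test on the interval width is a minor simplification of the relative test above:

```python
def bisect(f, x0, x1, eps=1e-6, maxit=100):
    """Bisection: repeatedly halve the interval, keeping the half with the sign change."""
    y0, y1 = f(x0), f(x1)
    if y0 * y1 > 0:
        raise ValueError("f(x0) and f(x1) must have opposite signs")
    for _ in range(maxit):
        x2 = (x0 + x1) / 2
        y2 = f(x2)
        if y0 * y2 > 0:       # root lies in (x2, x1)
            x0, y0 = x2, y2
        else:                 # root lies in (x0, x2)
            x1 = x2
        if abs(x1 - x0) < eps:
            break
    return (x0 + x1) / 2

# Smallest positive root of x^3 - 9x + 1 = 0, the equation of Example 2
root = bisect(lambda x: x**3 - 9 * x + 1, 0.0, 1.0)
print(round(root, 2))  # 0.11, agreeing with the bisection table
```

Each step halves the bracketing interval, so roughly 20 steps gain six decimal digits regardless of the function.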
Example 2: Find the location of the smallest positive root of the equation x3 – 9x + 1 = 0 and compute it by bisection method, correct to two decimal places.
Solution: To find the location of the smallest positive root we tabulate the function
f (x) = x3 – 9x + 1 below:
x :     0    1    2    3
f (x) : 1   −7   −9    1
We observe that the smallest positive root lies in the interval [0, 1]. The
computed values for the successive steps of the bisection method are given in the
Table.
n   x0         x1        x2          f (x2)
1   0          1         0.5         −3.37
2   0          0.5       0.25        −1.23
3   0          0.25      0.125       −0.123
4   0          0.125     0.0625       0.437
5   0.0625     0.125     0.09375      0.155
6   0.09375    0.125     0.109375     0.016933
7   0.109375   0.125     0.11718     −0.053
From the above results, we conclude that the smallest root correct to two
decimal places is 0.11.
Simple Iteration Method: A root of an equation f (x) = 0, is determined using
the method of simple iteration by successively computing better and better
approximation of the root, by first rewriting the equation in the form,
x = g(x) (2.4)
Then, we form the sequence {xn} starting from the guess value x0 of the root
and computing successively,
x1 = g(x0), x2 = g(x1), …, xn = g(x_{n−1})
|ξ − x_{n+1}| < l^{n+1} |ξ − x0|   (2.12)
Evidently, since l < 1, l^{n+1} → 0 as n → ∞, so the right hand side tends to zero and
thus it follows that the sequence {xn} converges to the root ξ if |g′(ξ)| < 1. This
completes the proof.
Order of Convergence: The order of convergence of an iterative process is
determined in terms of the errors en and en+1 in successive iterations. An iterative
process is said to have kth order convergence if lim_{n→∞} |e_{n+1}|/|e_n|^k ≤ M, where M is a finite
number.
Roughly speaking, the error in any iteration is proportional to the kth power of
the error in the previous iteration.
Evidently, the simple iteration discussed in this section has its order of
convergence 1.
The above iteration is also termed as fixed point iteration since it determines
the root as the fixed point of the mapping defined by x = g(x).
Algorithm: Computation of a root of f (x) = 0 by linear iteration.
Step 0: Define g(x), where f (x) = 0 is rewritten as x = g(x)
Step 1: Input x0, epsilon, maxit, where x0 is the initial guess of root, epsilon is the accuracy desired, and maxit is the maximum number of iterations allowed.
Step 2: Set i = 0
Step 3: Set x1 = g (x0)
Step 4: Set i = i + 1
Step 5: Check, if |(x1 – x0)/ x1| < epsilon, then print ‘root is’, x1
else go to Step 6
Step 6: Check, if i < maxit, then set x0 = x1 and go to Step 3
Step 7: Write ‘No convergence after’, maxit, ‘iterations’
Step 8: End
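A Python sketch of this linear iteration follows; the iterative function used below is the convergent form x = 1/√(1 + x) for x³ + x² − 1 = 0 (treated in Example 4):

```python
import math

def fixed_point(g, x0, eps=1e-6, maxit=100):
    """Simple (linear) iteration x_{n+1} = g(x_n); converges when |g'(x)| < 1 near the root."""
    for _ in range(maxit):
        x1 = g(x0)
        if x1 != 0 and abs((x1 - x0) / x1) < eps:
            return x1
        x0 = x1
    raise RuntimeError("no convergence after {} iterations".format(maxit))

# Root of x^3 + x^2 - 1 = 0 via the convergent form x = 1/sqrt(1 + x)
root = fixed_point(lambda x: 1.0 / math.sqrt(1.0 + x), 1.0)
print(round(root, 5))  # 0.75488, matching the iterations listed in Example 4
```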
Example 3: In order to compute a real root of the equation x3 – x – 1 = 0, near
x = 1, by iteration, determine which of the following iterative functions can be
used to give a convergent sequence.
(i) x = x³ − 1   (ii) x = (x + 1)/x²   (iii) x = √((x + 1)/x)
Solution:
(i) For the form x = x³ − 1, g(x) = x³ − 1 and g′(x) = 3x². Hence, |g′(x)| > 1
for x near 1. So, this form would not give a convergent sequence of
iterations.
(ii) For the form x = (x + 1)/x², g(x) = (x + 1)/x². Thus, g′(x) = −1/x² − 2/x³ and |g′(1)| = 3 > 1.
Hence, this form also would not give a convergent sequence of iterations.
(iii) For the form x = √((x + 1)/x), g(x) = ((x + 1)/x)^{1/2} and g′(x) = (1/2)((x + 1)/x)^{−1/2} · (−1/x²).
∴ |g′(1)| = 1/(2√2) < 1. Hence, the form x = √((x + 1)/x) would give a convergent
sequence of iterations.
Example 4: Compute the real root of the equation x3 + x2 – 1 = 0, correct to five
significant digits, by iteration method.
Solution: The equation has a real root between 0 and 1 since f (x) = x3 + x2 – 1
has opposite signs at 0 and 1. For using iteration, we first rewrite the equation in
the following different forms.
(i) x = 1/x² − 1   (ii) x = √(1/x − x)   (iii) x = 1/√(x + 1)
For the form (i), g(x) = −1 + 1/x², g′(x) = −2/x³ and for x in (0, 1), |g′(x)| > 1.
So, this form is not suitable. For the form (ii), g′(x) = (1/2)(1/x − x)^{−1/2} · (−1/x² − 1), and
|g′(x)| > 1 for all x in (0, 1). Finally, for the form (iii),
g′(x) = −(1/2) · 1/(x + 1)^{3/2} and |g′(x)| < 1 for x in (0, 1). Thus this form can be used to form
a convergent sequence for finding the root.
We start the iteration x = 1/√(1 + x) with x0 = 1. The results of successive iterations
are,
x1 = 0.70711 x2 = 0.76537 x3 = 0.75236 x4 = 0.75541
x5 = 0.75476 x6 = 0.75490 x7 = 0.75488 x8 = 0.75488
For the equation x³ − 9x + 1 = 0, which has a root between 2 and 4, the equation may similarly be rewritten in the forms,
(i) x = (x³ + 1)/9   (ii) x = 9/x − 1/x²   (iii) x = √(9 − 1/x)
In case of (i), g′(x) = x²/3 and for x in [2, 4], |g′(x)| > 1. Hence it will not give
rise to a convergent sequence.
In case of form (ii), g′(x) = −9/x² + 2/x³ and for x in [2, 4], |g′(x)| > 1.
In case of form (iii), g′(x) = (1/2)(9 − 1/x)^{−1/2} · (1/x²) and |g′(x)| < 1.
Thus, the form (iii) would give a convergent sequence for finding the
root in [2, 3].
We start the iterations taking x0 = 2 in the iteration scheme (iii). The results for
successive iterations are,
x0 = 2.0, x1 = 2.91548, x2 = 2.94228, x3 = 2.94281, x4 = 2.94282
Thus, the root can be taken as 2.9428, correct to four decimal places.
Newton-Raphson Method
Newton-Raphson method is a widely used numerical method for finding a root of
an equation f (x) = 0, to the desired accuracy. It is an iterative method which has
a faster rate of convergence and is very useful when the expression for the derivative
f ′(x) is not complicated. To derive the formula for this method, we consider a
Taylor’s series expansion of f (x0 + h), x0 being an initial guess of a root of
f (x) = 0 and h a small correction to the root.
f (x0 + h) = f (x0) + h f ′(x0) + (h²/2!) f ″(x0) + …
Assuming h to be small, we equate f (x0 + h) to 0 by neglecting square and
higher powers of h.
f (x0) + h f ′(x0) = 0
Or, h = − f (x0)/f ′(x0)
Thus, we can write an improved value of the root as,
x1 = x0 + h
i.e., x1 = x0 − f (x0)/f ′(x0)
Similarly, the successive approximations are given by,
x2 = x1 − f (x1)/f ′(x1)
x3 = x2 − f (x2)/f ′(x2)
… … …
xn+1 = xn − f (xn)/f ′(xn)   (2.13)
If the sequence {xn} converges, we get the root.
Algorithm: Computation of a root of f (x) = 0 by Newton-Raphson method.
Step 0: Define f (x), f ′(x)
Step 1: Input x0, epsilon, maxit
[x0 is the initial guess of root, epsilon is the desired accuracy of the
root and maxit is the maximum number of iterations allowed]
Step 2: Set i = 0
Step 3: Set f0 = f (x0)
Step 4: Compute df0 = f ′ (x0)
Step 5: Set x1 = x0 – f0/df0
Step 6: Set i = i + 1
Step 7: Check if |(x1 – x0)/x1| < epsilon, then print ‘root is’, x1 and stop
else if i < maxit, then set x0 = x1 and go to Step 3
Step 8: Write ‘Iterations do not converge’
Step 9: End
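A Python sketch of this algorithm, applied to the equation of Example 7 below (the tolerance is an illustrative choice):

```python
def newton(f, df, x0, eps=1e-8, maxit=50):
    """Newton-Raphson iteration x_{n+1} = x_n - f(x_n)/f'(x_n), Equation (2.13)."""
    for _ in range(maxit):
        x1 = x0 - f(x0) / df(x0)
        if abs(x1 - x0) < eps * max(1.0, abs(x1)):
            return x1
        x0 = x1
    raise RuntimeError("iterations do not converge")

# Positive root of x^3 - 8x - 4 = 0 (Example 7), starting from x0 = 3
root = newton(lambda x: x**3 - 8 * x - 4, lambda x: 3 * x**2 - 8, 3.0)
print(round(root, 4))  # 3.0514, correct to five significant digits
```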
Example 7: Use Newton-Raphson method to compute the positive root of the
equation x3 – 8x – 4 = 0, correct to five significant digits.
Solution: Newton-Raphson iterative scheme is given by,
xn+1 = xn − f (xn)/f ′(xn), for n = 0, 1, 2, …
x :     0    1    2    3    4
f (x) : −4  −11  −12   −1   28

The positive root lies between 3 and 4. Taking x0 = 3, we get,
x1 = 3 − (27 − 24 − 4)/(27 − 8) = 3 + 1/19 = 3.0526
Similarly, x2 = 3.05138, and x3 = 3.05138.
Thus, the positive root is 3.0514, correct to five significant digits.
Example 9: For evaluating √a, deduce the iterative formula xn+1 = (1/2)(xn + a/xn).
Solution: √a is the positive root of the equation f (x) = x² − a = 0, so the Newton-Raphson scheme gives,
xn+1 = xn − (xn² − a)/(2xn)
i.e., xn+1 = (1/2)(xn + a/xn), for n = 0, 1, 2, …
Now, for computing √2, we assume x0 = 1.4. The successive iterations give,
x1 = (1/2)(1.4 + 2/1.4) = 3.96/2.8 = 1.414
x2 = (1/2)(1.414 + 2/1.414) = 1.41421
Similarly, for evaluating the kth root of a, we can deduce the iterative formula
xn+1 = (1/k)[(k − 1)xn + a/xn^{k−1}], and hence evaluate ³√2, correct to five significant digits.
Applying the Newton-Raphson scheme to f (x) = x^k − a = 0,
xn+1 = xn − (xn^k − a)/(k xn^{k−1})
or, xn+1 = (1/k)[(k − 1)xn + a/xn^{k−1}], for n = 0, 1, 2, …
Now, for evaluating ³√2, we take x0 = 1.25 and use the iterative formula,
xn+1 = (1/3)(2xn + 2/xn²)
We have, x1 = (1/3)(2 × 1.25 + 2/(1.25)²) = 1.26
x2 = 1.259921, x3 = 1.259921
Thus, ³√2 = 1.2599, correct to five significant digits.
The results for successive iterations are,
x1 = 0.667, x2 = 0.6075, x3 = 0.6071
4+4−6
x1 = 2 − 2
= 1.72238
4 × (log e 2 x + 1) + 2
x2 = 1.72321
x3 = 1.72308
Let us assume that the sequence of iterations {xn} converges to the root ξ .
Then, expanding by Taylor’s series about xn, the relation f ( ξ ) = 0, gives
f (xn) + (ξ − xn) f ′(xn) + (1/2)(ξ − xn)² f ″(xn) + … = 0
∴ −f (xn)/f ′(xn) = (ξ − xn) + (1/2)(ξ − xn)² · f ″(xn)/f ′(xn) + …
∴ xn+1 − ξ ≈ (1/2)(ξ − xn)² · f ″(xn)/f ′(xn)
Taking εn as the error in the nth iteration and writing εn = xn − ξ, we have,
εn+1 ≈ (1/2) εn² f ″(ξ)/f ′(ξ)   (2.14)
Thus, the Newton-Raphson method has second order convergence; the iterations converge provided
|f (x) f ″(x)| / [f ′(x)]² < 1 in the interval near the root.
Secant Method
Secant method can be considered as a discretized form of Newton-Raphson
method. The iterative formula for this method is obtained from formula of Newton-
Raphson method on replacing the derivative f ′( x0 ) by the gradient of the chord
joining two neighbouring points x0 and x1 on the curve y = f (x).
Thus, we have
f ′(x0) ≈ [f (x1) − f (x0)] / (x1 − x0)
The iterative formula is equivalent to the one for Regula–Falsi method. The
distinction between secant method and Regula–Falsi method lies in the fact that
unlike in Regula–Falsi method, the two initial guess values do not bracket a root
and the bracketing of the root is not checked during successive iterations, in secant
method. Thus, secant method may not always give rise to a convergent sequence
to find the root. The geometrical interpretation of the method is shown in Figure
2.4.
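The secant iteration can be sketched in Python; the test equation x³ − 8x + 5 = 0 (from the tabulation example) and the unbracketed starting guesses are illustrative:

```python
def secant(f, x0, x1, eps=1e-8, maxit=50):
    """Secant iteration: Newton's formula with f'(x) replaced by the chord slope."""
    f0, f1 = f(x0), f(x1)
    for _ in range(maxit):
        if f1 == f0:
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)   # chord through (x0,f0) and (x1,f1)
        x0, f0, x1, f1 = x1, f1, x2, f(x2)
        if abs(x1 - x0) < eps * max(1.0, abs(x1)):
            return x1
    raise RuntimeError("iterations do not converge")

# Root of x^3 - 8x + 5 = 0 in (0, 1); the guesses need not bracket the root
root = secant(lambda x: x**3 - 8 * x + 5, 0.5, 1.0)
print(round(root, 4))  # 0.6611
```

Because the bracketing of the root is never checked, the sequence can diverge for a poor pair of starting guesses; that is the price of avoiding the derivative.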
Regula-Falsi Method
Regula-Falsi method is also a bracketing method. As in bisection method, we
start the computation by first finding an interval (a, b) within which a real root lies.
Writing a = x0 and b = x1, we compute f (x0) and f (x1) and check if f (x0) and f
(x1) are of opposite signs. For determining the approximate root x2, we find the
point of intersection of the chord joining the points (x0, f (x0)) and (x1, f (x1)) with
the x-axis, i.e., the curve y = f (x) is replaced by the chord given by,
y − f (x0) = [f (x1) − f (x0)]/(x1 − x0) · (x − x0)   (2.16)
Setting y = 0 gives the approximate root x2 = [x0 f (x1) − x1 f (x0)] / [f (x1) − f (x0)].
Next, we compute f (x2) and determine the interval in which the root lies in the
following manner. If (i) f (x2) and f (x1) are of opposite signs, then the root lies in
(x2, x1). Otherwise if (ii) f (x0) and f (x2) are of opposite signs, then the root lies
in (x0, x2). The next approximate root is determined by changing x0 by x2 in the
first case and x1 by x2 in the second case.
The aforesaid process is repeated until the root is computed to the desired
accuracy ε, i.e., until the condition |(xk+1 − xk)/xk| < ε is satisfied.
Regula-Falsi method can be geometrically interpreted by the following Figure
2.5.
[Fig. 2.5 The chord through (x0, f (x0)) and (x1, f (x1)) cuts the x-axis at x2]
Step 8: Compute x2 = (x0 f1 – x1 f0) / (f1 – f0)
Step 9: Compute f2 = f (x2)
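The complete procedure can be sketched in Python; the bracket update follows the rule described above, and the test equation x³ − 4x − 1 = 0 (from Long-Answer Question 9(i)) is an illustrative choice:

```python
def regula_falsi(f, x0, x1, eps=1e-8, maxit=200):
    """Regula-Falsi: the new point x2 is where the chord joining
    (x0, f(x0)) and (x1, f(x1)) cuts the x-axis; the root stays bracketed."""
    f0, f1 = f(x0), f(x1)
    if f0 * f1 > 0:
        raise ValueError("f(x0) and f(x1) must have opposite signs")
    x2 = x0
    for _ in range(maxit):
        x_prev = x2
        x2 = (x0 * f1 - x1 * f0) / (f1 - f0)   # chord intersection with the x-axis
        f2 = f(x2)
        if f2 * f1 < 0:                        # root lies in (x2, x1)
            x0, f0 = x2, f2
        else:                                  # root lies in (x0, x2)
            x1, f1 = x2, f2
        if abs((x2 - x_prev) / x2) < eps:
            break
    return x2

# Root of x^3 - 4x - 1 = 0 near x = 2
root = regula_falsi(lambda x: x**3 - 4 * x - 1, 2.0, 2.5)
print(round(root, 4))  # 2.1149
```

Unlike the secant method, one endpoint of the bracket is always retained, so the iterates can never escape the interval containing the root.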
Check Your Progress
1. How will you compute the roots of the form f (x) = 0?
2. Define tabulation method.
3. Explain bisection method.
4. How is order of convergence determined?
5. Explain Newton-Raphson method.
6. Define secant method.
7. Explain Regula-Falsi method.
Descartes’ Rule
The number of positive real roots of a polynomial equation pn(x) = 0 is equal to the number
of changes of sign in pn(x), written with descending powers of x, or less than that by an
even number.
Consider for example, the polynomial equation,
3x⁵ + 2x⁴ + x³ − 2x² + x − 2 = 0
Clearly there are three changes of sign and hence the number of positive real
roots is three or one. Thus, it must have a real root. In fact, every polynomial
equation of odd degree has a real root.
We can also use Descartes’ rule to determine the number of negative roots by
finding the number of changes of signs in pn(–x). For the above equation,
pn(−x) = −3x⁵ + 2x⁴ − x³ − 2x² − x − 2 = 0; and it has two changes of sign. Thus, it
has either two negative real roots or none.
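Counting sign changes can be sketched in Python; the polynomial is the one considered above:

```python
def sign_changes(coeffs):
    """Count sign changes in a list of coefficients (descending powers, zeros skipped)."""
    signs = [c > 0 for c in coeffs if c != 0]
    return sum(s != t for s, t in zip(signs, signs[1:]))

# p(x) = 3x^5 + 2x^4 + x^3 - 2x^2 + x - 2: three changes -> three or one positive roots
p = [3, 2, 1, -2, 1, -2]
print(sign_changes(p))  # 3

# p(-x): substitute x -> -x, i.e. flip the sign of every odd-power coefficient
n = len(p) - 1
p_neg = [c * (-1) ** (n - k) for k, c in enumerate(p)]
print(sign_changes(p_neg))  # 2 -> two or zero negative real roots
```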
2.5 SUMMARY
A root of an equation is usually computed in two stages. First, we find the
location of a root in the form of a crude approximation of the root. Next
we use an iterative technique for computing a better value of the root to a
desired accuracy in successive approximations/computations.
Tabulation Method: In the tabulation method, a table of values of f (x) is
made for values of x in a particular range.
The bisection method involves successive reduction of the interval in which
an isolated root of an equation lies.
If a function f (x) is continuous in the closed interval [a, b] and f (a) and f
(b) are of opposite signs, i.e., f (a) f (b) < 0, then there exists at least one
real root of f (x) = 0 between a and b.
The bisection method is also termed as a bracketing method, since the
method successively reduces the gap between the two ends of an interval
surrounding the real root, i.e., brackets the real root.
If the function g(x) is continuous in the interval [a, b] which contains a root
ξ of the equation f (x) = 0, rewritten as x = g(x), and |g′(x)| ≤ l < 1 in
this interval, then for any choice of x0 ∈ [a, b], the sequence {xn} determined
by the iterations,
xk+1 = g(xk), for k = 0, 1, 2, …
converges to the root of f (x) = 0.
Order of Convergence: The order of convergence of an iterative process is
determined in terms of the errors en and en+1 in successive iterations. An
iterative process is said to have kth order convergence if lim_{n→∞} |e_{n+1}|/|e_n|^k ≤ M,
where M is a finite number.
Newton-Raphson method is a widely used numerical method for finding a
root of an equation f (x) = 0, to the desired accuracy.
Secant method can be considered as a discretized form of Newton-Raphson
method. The iterative formula for this method is obtained from formula of
Newton-Raphson method on replacing the derivative f ′( x0 ) by the gradient
of the chord joining two neighbouring points x0 and x1 on the curve y = f
(x).
Descartes’ rule of signs can be used to determine the number of possible
real roots (positive or negative).
If x1, x2,..., xn are all real roots of the polynomial equation, then we can
express pn(x) uniquely as,
pn(x) = an(x − x1)(x − x2) … (x − xn)
We can also use Descartes’ rule to determine the number of negative
roots by finding the number of changes of signs in pn(–x).
Short-Answer Questions
1. What is tabulation method?
2. What is bisection method?
3. Define Newton-Raphson method.
4. What is meant by secant method?
5. Explain Regula-Falsi method.
Long-Answer Questions
1. Use graphical method to find the location of a real root of the equation x3 +
10x – 15 = 0.
2. Draw the graph of the function f (x) = cos x – x, in the range [0, π/2) and
find the location of the root of the equation f (x) = 0.
3. Compute the root of the equation x3 – 9x + 1 = 0 which lies between 2 and 3
correct upto three significant digits using bisection method.
4. Compute the root of the equation x3 + x2 – 1 = 0, near 1, by the iterative
method correct upto two significant digits.
5. Use iterative method to find the root near x = 3.8 of the equation 2x – log10x
= 7 correct upto four significant digits.
6. Compute using Newton-Raphson method the root of the equation ex = 4x,
near 2, correct upto four significant digits.
7. Use an iterative formula to compute ⁷√125 correct upto four significant digits.
8. Find the real root of x log10x – 1.2 = 0 correct upto four decimal places using
Regula-Falsi method.
9. Use Regula-Falsi method to find the root of the following equations correct
upto four significant figures:
(i) x3 – 4x – 1 = 0, the root near x = 2
(ii) x6 – x4 – x3 – 1 = 0, the root between 1.4 and 1.5
10. Compute the positive root of the given equation correct upto four places of
decimals using Newton-Raphson method:
x + loge x = 2
2.8 FURTHER READINGS

Conte, Samuel D. and Carl de Boor. 1980. Elementary Numerical Analysis: An Algorithmic Approach. New York: McGraw-Hill.
Skeel, Robert D. and Jerry B. Keiper. 1993. Elementary Numerical Computing with Mathematica. New York: McGraw-Hill.
Balaguruswamy, E. 1999. Numerical Methods. New Delhi: Tata McGraw-Hill.
Datta, N. 2007. Computer Oriented Numerical Methods. New Delhi: Vikas
Publishing House Pvt. Ltd.
UNIT 3 BIRGE – VIETA, BAIRSTOW’S AND GRAEFFE’S ROOT SQUARING METHODS
Structure
3.0 Introduction
3.1 Objectives
3.2 Birge – Vieta Method
3.3 Bairstow’s Method
3.4 Graeffe’s Root Squaring Method
3.5 Answers to Check Your Progress Questions
3.6 Summary
3.7 Key Words
3.8 Self-Assessment Questions and Exercises
3.9 Further Readings
3.0 INTRODUCTION
Birge-Vieta method is used for finding the real roots of a polynomial equation.
This method is based on an original method developed by the two
mathematicians Birge and Vieta. Finding and approximating all the
roots of a polynomial equation is very significant. In the field of science and
engineering, there are numerous applications which require the solution for all
roots of a polynomial equation for a particular problem.
Newton-Raphson method is fundamentally used for finding the root of
algebraic and transcendental equations. Since the rate of convergence of this method
is quadratic, the Newton-Raphson method can be used to find a root of a
polynomial equation, a polynomial equation being an algebraic equation. Birge-Vieta
method is based on the Newton-Raphson method; indeed, it is a modified
form of the Newton-Raphson method.
Consider the given polynomial equation of degree n, which has the form,
Pn(x) = anxⁿ + … + a1x + a0 = 0
Let x0 be an initial approximation to the root. The Newton-Raphson
iterative formula for improving this approximation is,
xi+1 = xi − Pn(xi)/P′n(xi)
To apply this formula, first evaluate both Pn(xi) and P′n(xi) at any xi. The
most natural method is to evaluate,
However, this is the most inefficient method of evaluating a polynomial, because of the amount of computation involved and also because of the possible growth of round-off errors. Thus there must be some proficient and effective method for evaluating Pn(x) and Pn′(x).
Vieta’s formulas relate the coefficients of a polynomial to the sums and products of its roots, taken in groups. In other words, Vieta’s formulas define the association between the roots of a polynomial and its coefficients. The following example will make clear how to find a polynomial with given roots.

Here we discuss real-valued polynomials, i.e., polynomials whose coefficients are real numbers.
Consider a quadratic polynomial. If the two real roots r1 and r2 are given, we are to find a polynomial with these roots.

Let the polynomial be a2 x^2 + a1 x + a0. When the roots are given, the polynomial can also be written in the form k (x − r1)(x − r2). Since both expressions denote the same polynomial, we equate them:
    a2 x^2 + a1 x + a0 = k (x − r1)(x − r2)        (3.1)

On simplifying Equation (3.1), we have,

    a2 x^2 + a1 x + a0 = k x^2 − k (r1 + r2) x + k (r1 r2)

Comparing the coefficients on both sides of the above equation, we have,

    For x^2, a2 = k
    For x, a1 = −k (r1 + r2)
    For the constant term, a0 = k r1 r2

Since a2 = k, therefore,

    r1 + r2 = −a1/a2        (3.2)
    r1 r2 = a0/a2        (3.3)

Equations (3.2) and (3.3) are termed as Vieta’s formulas for a second degree polynomial.

As a general rule, for an nth degree polynomial an x^n + . . . + a1 x + a0 with roots r1, r2, . . ., rn, there are n different Vieta’s formulas, which can be written in a condensed form as,

    (sum of all products of m distinct roots) = (−1)^m a(n−m)/an,  for m = 1, 2, . . ., n
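The two quadratic relations can be checked mechanically. A small sketch; the helper name is an assumption for illustration:

```python
# Vieta's formulas for a quadratic: expanding k(x - r1)(x - r2) and
# comparing with a2*x^2 + a1*x + a0 gives
#     r1 + r2 = -a1/a2   and   r1*r2 = a0/a2.
def quadratic_from_roots(r1, r2, k=1.0):
    # coefficients (a2, a1, a0) of k(x - r1)(x - r2)
    return k, -k * (r1 + r2), k * r1 * r2

a2, a1, a0 = quadratic_from_roots(2.0, 3.0, k=4.0)
assert (a2, a1, a0) == (4.0, -20.0, 24.0)
assert -a1 / a2 == 2.0 + 3.0      # sum of the roots
assert a0 / a2 == 2.0 * 3.0       # product of the roots
```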
Example 1: Find all the roots of the given polynomial equation P3(x) = x^3 + x − 3 = 0 rounded off to three decimal places. Stop the iteration whenever |x(i+1) − xi| < 0.0001.

Solution: The equation P3(x) = 0 has three roots. Since there is only one change in the sign of the coefficients, the equation can have at most one positive real root. The equation has no negative real root since P3(−x) = 0 has no change of sign of coefficients. Since P3(x) = 0 is of odd degree it has at least one real root. Hence the given equation x^3 + x − 3 = 0 has one positive real root and a complex pair. Since P(1) = −1 and P(2) = 7, by the intermediate value theorem the equation has a real root lying in the interval ]1, 2[.

Now we will find the real root using the Birge-Vieta method. Let the initial approximation be 1.1.
First Iteration
The iterations converge to the real root 1.213; the remaining quadratic factor then gives the complex pair x = −0.6065 ± 1.4505i.

Hence the three roots of the equation rounded off to three decimal places are 1.213, −0.6065 + 1.4505i and −0.6065 − 1.4505i.
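The computation of the real root in this example can be sketched as a short program that simply combines the Newton-Raphson formula with synthetic-division evaluation of P3 and P3′; the function name and iteration cap are assumptions for illustration:

```python
def birge_vieta(coeffs, x0, tol=1e-4, maxit=50):
    # Newton-Raphson on a polynomial, with P and P' from synthetic division
    x = x0
    for _ in range(maxit):
        p = dp = 0.0
        for a in coeffs:
            dp = dp * x + p
            p = p * x + a
        x_new = x - p / dp
        if abs(x_new - x) < tol:     # stop when |x_{i+1} - x_i| < tol
            return x_new
        x = x_new
    return x

root = birge_vieta([1.0, 0.0, 1.0, -3.0], 1.1)   # x^3 + x - 3 = 0
# root lies in ]1, 2[, approximately 1.213
```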
3.3 BAIRSTOW’S METHOD

Bairstow’s method finds the roots of a polynomial in complex conjugate pairs by extracting quadratic factors, using only real arithmetic. Dividing the polynomial by x^2 + ux + v yields a quotient,
and a remainder gx + h with,
The variables c, d, g, h and the {bi}, {fi} are functions of u and v. They can
be found recursively as follows,
This continues until convergence occurs. This method to find the zeroes of
polynomials can thus be easily implemented with a programming language or even
a spreadsheet.
Example 2: The task is to determine a pair of roots of the polynomial,

    f(x) = 6x^5 + 11x^4 − 33x^3 − 33x^2 + 11x + 6
Solution: As the first quadratic polynomial we can use the normalized polynomial formed from the three leading coefficients of f(x),

    x^2 + (11/6)x − 33/6
After eight iterations the method produced a quadratic factor that contains the roots −1/3 and −3 within the represented precision. The step length from the fourth iteration on demonstrates the superlinear speed of convergence.
3.4 GRAEFFE’S ROOT SQUARING METHOD
Dandelin–Graeffe Iteration
Let p(x) be a polynomial of degree n,
Then,
Then the coefficients are related by,
Graeffe observed that if one separates p(x) into its odd and even parts,
then
This expression involves the squaring of two polynomials of only half the
degree, and is therefore used in most implementations of the method.
Iterating this procedure several times separates the roots with respect to their magnitudes. Repeating it k times gives a polynomial of degree n whose roots are the 2^k-th powers of the roots of p(x). If the magnitudes of the roots of the original polynomial were separated by some factor ρ > 1, that is, |x_i| ≥ ρ |x_(i+1)|, then the roots of the k-th iterate are separated by a fast growing factor,

    ρ^(2^k)
Next the Vieta relations are used, as in the classical Graeffe method shown below:
The roots are separated by the factor ρ^(2^k), which quickly becomes very large. The coefficients of the iterated polynomial can then be approximated by their leading terms,
Implying,
Finally, logarithms are used in order to find the absolute values of the roots
of the original polynomial. These magnitudes alone are already useful to generate
meaningful starting points for other root-finding methods.
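The squaring step and the magnitude estimates can be sketched as follows. This is a hypothetical, minimal implementation that assumes a monic polynomial with well-separated real roots; the coefficient layout and function names are assumptions:

```python
def graeffe_step(a):
    """One root-squaring step.  a holds the coefficients of a monic
    polynomial p, lowest degree first; the result q (same layout, monic)
    satisfies q(x^2) = (-1)^n p(x) p(-x), so its roots are the squares
    of the roots of p."""
    n = len(a) - 1
    prod = [0.0] * (2 * n + 1)
    for i, ai in enumerate(a):
        for j, aj in enumerate(a):
            prod[i + j] += ai * aj * (-1) ** j   # convolution p(x) * p(-x)
    return [(-1) ** n * prod[2 * m] for m in range(n + 1)]

def root_magnitudes(a, k=6):
    # After k squarings the coefficients are dominated by products of the
    # largest roots, so successive coefficient ratios give |r_i|^(2^k).
    for _ in range(k):
        a = graeffe_step(a)
    c = a[::-1]                      # highest degree first, c[0] = 1
    return [abs(c[i] / c[i - 1]) ** (0.5 ** k) for i in range(1, len(c))]

# (x - 1)(x - 2)(x - 4) = x^3 - 7x^2 + 14x - 8
mags = root_magnitudes([-8.0, 14.0, -7.0, 1.0])
# magnitudes come out largest first: approximately [4.0, 2.0, 1.0]
```

In a production implementation the coefficients would be kept as logarithms to avoid overflow, exactly as the text suggests.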
3.6 SUMMARY
• The Birge-Vieta method is used for finding the real roots of a polynomial equation. It is based on original methods developed by the mathematicians Birge and Vieta.

• Finding, or approximating, all the roots of a polynomial equation is a very significant problem. In the fields of science and engineering there are numerous applications which require all the roots of a polynomial equation for a particular problem.

• The Newton-Raphson method is fundamentally used for finding a root of algebraic and transcendental equations.

• Since the rate of convergence of this method is quadratic, the Newton-Raphson method can be used to find a root of a polynomial equation, a polynomial equation being an algebraic equation.

• The Birge-Vieta method is based on, and is a modified form of, the Newton-Raphson method.

• Evaluating a polynomial term by term is the most inefficient method, because of the amount of computation involved and also because of the possible growth of round-off errors. Thus there must be some proficient and effective method for evaluating Pn(x) and Pn′(x).

• Vieta’s formulas relate the coefficients of a polynomial to the sums and products of its roots, taken in groups; they define the association between the roots of a polynomial and its coefficients.

• As a general rule, for an nth degree polynomial there are n different Vieta’s formulas, which can be written in a condensed form.
3.7 KEY WORDS

Birge-Vieta method: This method is used for finding the real roots of a polynomial equation.
Bairstow’s method: This is an efficient algorithm for finding the roots of a
real polynomial of arbitrary degree. The algorithm was formulated by Leonard
Bairstow for finding the roots in complex conjugate pairs using only real
arithmetic.
Graeffe’s method or Dandelin–Lobachevsky–Graeffe method: It is an algorithm typically used for finding all of the roots of a polynomial.
3.8 SELF-ASSESSMENT QUESTIONS AND EXERCISES

Short-Answer Questions
1. Why is Birge-Vieta method used?
2. Define Bairstow’s method.
3. What is the significance of Graeffe’s root squaring method?
Long-Answer Questions
1. Briefly explain the Birge-Vieta method giving appropriate examples.
2. Find the root of x^4 − 3x^3 + 3x^2 − 3x + 2 = 0 using the Birge-Vieta method.
3. Explain Bairstow’s method with the help of examples.
4. Using Bairstow’s method find all the roots of a given polynomial,
5. Discuss the Graeffe’s root squaring method giving appropriate examples.
UNIT 4 SOLUTION OF SIMULTANEOUS LINEAR EQUATIONS
Structure
4.0 Introduction
4.1 Objectives
4.2 System of Linear Equations
4.2.1 Classical Methods
4.2.2 Elimination Methods
4.2.3 Iterative Methods
4.2.4 Computation of the Inverse of a Matrix by using Gaussian Elimination
Method
4.3 Answers to Check Your Progress Questions
4.4 Summary
4.5 Key Words
4.6 Self Assessment Questions and Exercises
4.7 Further Readings
4.0 INTRODUCTION

Many engineering and scientific problems require the solution of a system of linear equations. The system of equations is termed homogeneous if all the elements in the column vector b are zero; otherwise the system is termed non-homogeneous. You will learn the method of computation to find the solution of a system of n linear equations in n unknowns. Two types of efficient numerical methods are used for computing the solution of systems of equations: some are direct methods and others are iterative in nature. Among the direct methods, the Gaussian elimination method is used, while among the iterative methods, the Gauss-Seidel iteration method is commonly used. You will learn the two forms of iteration methods termed the Jacobi iteration method and the Gauss-Seidel iteration method.

In this unit, you will study these direct and iterative methods for systems of simultaneous linear equations and the rate of convergence of the iterative methods.
4.1 OBJECTIVES
Example 1: Solve the following system of equations by Cramer’s rule:

    | 2  −3   1 | | x1 |   | 1 |
    | 3   1  −1 | | x2 | = | 2 |
    | 1  −1  −1 | | x3 |   | 1 |

Solution: The determinant of the coefficient matrix is,

        | 2  −3   1 |
    D = | 3   1  −1 | = 2(−1 − 1) − 3(−1 + 3) + (−3 − 1) = −14
        | 1  −1  −1 |

Replacing each column of D in turn by the right-hand side vector,

         | 1  −3   1 |
    D1 = | 2   1  −1 | = (−1 − 1) − 3(−1 + 2) + (−2 − 1) = −8
         | 1  −1  −1 |

         | 2   1   1 |
    D2 = | 3   2  −1 | = 2(−2 + 1) + (−1 + 3) + (3 − 2) = 1
         | 1   1  −1 |

         | 2  −3   1 |
    D3 = | 3   1   2 | = 2(1 + 2) − 3(2 − 3) + (−3 − 1) = 5
         | 1  −1   1 |

Hence by Cramer’s rule, we get

    x1 = D1/D = −8/−14 = 4/7,  x2 = D2/D = −1/14,  x3 = D3/D = −5/14
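The rule translates directly into code. A minimal 3 × 3 sketch in Python; the function names are illustrative assumptions:

```python
def det3(m):
    # 3 x 3 determinant by cofactor expansion along the first row
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def cramer3(A, b):
    # x_i = D_i / D, where D_i has the i-th column of A replaced by b
    D = det3(A)
    xs = []
    for i in range(3):
        Ai = [row[:] for row in A]
        for r in range(3):
            Ai[r][i] = b[r]
        xs.append(det3(Ai) / D)
    return xs

x = cramer3([[2, -3, 1], [3, 1, -1], [1, -1, -1]], [1, 2, 1])
# x is approximately (4/7, -1/14, -5/14), as computed above
```

Cramer’s rule requires n + 1 determinants of order n, so it is practical only for very small systems; the elimination methods that follow scale far better.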
4.2.2 Elimination Methods
Matrix Inversion Method: Let A^(-1) be the inverse of the matrix A defined by,

    A^(-1) = Adj A / |A|        (4.5)

where Adj A is the adjoint matrix obtained by transposing the matrix of the cofactors of the elements aij of the determinant of the coefficient matrix A. Thus,

            | A11  A21  .....  An1 |
    Adj A = | A12  A22  .....  An2 |        (4.6)
            | ....  .....  ..... .. |
            | A1n  A2n  .....  Ann |

Aij being the cofactor of aij. Then the solution of the system is given by,

    x = A^(-1) b        (4.7)
Note: If the rank of the coefficient matrix of a system of linear equations in n
unknowns is less than n, then there are more unknowns than the number of
independent equations. In such a case, the system has an infinite set of solutions.
Example 2: Solve the given system of equations by matrix inversion method:

    | 1   1   1 | | x1 |   | 4 |
    | 2  −1   3 | | x2 | = | 1 |
    | 3   2  −1 | | x3 |   | 1 |
Solution: For solving the system of equations by matrix inversion method we first compute the determinant of the coefficient matrix,

          | 1   1   1 |
    |A| = | 2  −1   3 | = 13
          | 3   2  −1 |

Since |A| ≠ 0, the matrix A is non-singular and A^(-1) exists. We now compute the adjoint matrix,
            | −5   3   4 |                           | −5   3   4 |
    Adj A = | 11  −4  −1 | .  Thus,  A^(-1) = (1/13) | 11  −4  −1 |
            |  7   1  −3 |                           |  7   1  −3 |

Hence, the solution by matrix inversion method gives,

        | x1 |                     | −5   3   4 | | 4 |          | −13 |   | −1 |
    x = | x2 | = A^(-1) b = (1/13) | 11  −4  −1 | | 1 | = (1/13) |  39 | = |  3 |
        | x3 |                     |  7   1  −3 | | 1 |          |  26 |   |  2 |
Gaussian Elimination Method: This method consists in systematic elimination of the unknowns so as to reduce the coefficient matrix into an upper triangular system, which is then solved by the procedure of back-substitution. To understand the procedure for a system of three equations in three unknowns, consider the following system of equations:

    a11 x1 + a12 x2 + a13 x3 = b1        (4.8(a))
    a21 x1 + a22 x2 + a23 x3 = b2        (4.8(b))
    a31 x1 + a32 x2 + a33 x3 = b3        (4.8(c))
We have to first eliminate x1 from the last two equations and then eliminate x2 from the last equation.

In order to eliminate x1 from the second equation we multiply Equation (4.8(a)) by −a21/a11 = m2, and add it to the second equation. Similarly, for elimination of x1 from the third Equation (4.8(c)) we have to multiply the first Equation (4.8(a)) by −a31/a11 = m3, and add it to the last Equation (4.8(c)). We would then have the following two equations:
    a22^(1) x2 + a23^(1) x3 = b2^(1)        (4.9(a))
    a32^(1) x2 + a33^(1) x3 = b3^(1)        (4.9(b))

where a22^(1) = a22 + m2 a12,  a23^(1) = a23 + m2 a13,  b2^(1) = b2 + m2 b1
      a32^(1) = a32 + m3 a12,  a33^(1) = a33 + m3 a13,  b3^(1) = b3 + m3 b1

Again, for eliminating x2 from the last of the above two equations, we multiply the first Equation (4.9(a)) by m4 = −a32^(1)/a22^(1), and add it to the second Equation (4.9(b)), which gives the equation,

    a33^(2) x3 = b3^(2)        (4.10)

where a33^(2) = a33^(1) + m4 a23^(1),  b3^(2) = b3^(1) + m4 b2^(1)
(ii) Then we write the transformed 2nd and 3rd rows after the elimination of x1 by
row operations [(m2 × 1st row + 2nd row) and (m3 × 1st row + 3rd row)] as
new 2nd and 3rd rows along with the multiplier on the left.
(iii) Finally, we get the upper triangular transformed augmented matrix as given
below.
Notes:
1. The above procedure can be easily extended to a system of n unknowns, in
which case, we have to perform a total of (n–1) steps for the systematic
elimination to get the final upper triangular matrix.
2. The condition to be satisfied for using this elimination is that the leading diagonal element at each step must not be zero. These diagonal elements [a11, a22^(1), a33^(2), etc.] are called pivots. If a pivot is zero at any stage, the method fails. However, we can rearrange the rows so that none of the pivots is zero at any stage.
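The elimination and back-substitution steps, together with the row rearrangement mentioned in Note 2, can be sketched as follows. Partial pivoting — always choosing the largest available pivot — is used here as a simple way of guaranteeing non-zero pivots; the names are illustrative assumptions:

```python
def gauss_solve(A, b):
    """Gaussian elimination with row interchange (partial pivoting),
    followed by back-substitution."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]   # augmented matrix
    for k in range(n - 1):
        # choose the largest pivot in column k to avoid a zero pivot
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            m = -M[i][k] / M[k][k]          # multiplier, as in the text
            for j in range(k, n + 1):
                M[i][j] += m * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):          # back-substitution
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

x = gauss_solve([[1.0, 2.0, 1.0], [2.0, 2.0, 3.0], [-1.0, -3.0, 0.0]],
                [0.0, 3.0, 2.0])
# the system of Example 3 below; direct substitution confirms (1, -1, 1)
```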
Example 3: Solve the following system by Gauss elimination method:

    x1 + 2x2 + x3 = 0
    2x1 + 2x2 + 3x3 = 3
    −x1 − 3x2 = 2

Solution: Step 1: For elimination of x1 we use the multipliers m2 = −2 and m3 = 1, shown on the left of the transformed augmented matrix:

          | 1   2   1  :  0 |
    −2    | 0  −2   1  :  3 |
     1    | 0  −1   1  :  2 |

Step 2: For elimination of x2 from the third equation we multiply the second equation by −1/2 and add it to the third equation. The result is shown in the augmented matrix below.

           | 1   2    1   :   0  |
           | 0  −2    1   :   3  |
    −1/2   | 0   0   1/2  :  1/2 |        (4.13)

Back-substitution now gives x3 = 1, x2 = (3 − x3)/(−2) = −1 and x1 = −2x2 − x3 = 1.
In the Gauss-Jordan elimination method the augmented matrix is transformed by row operations so that the coefficient matrix reduces to the identity matrix; the solution is then read off directly from the transformed right-hand column. Suppose that after the first stage of elimination the augmented matrix is,

    | 1  a12′  a13′  :  b1′ |
    | 0  a22′  a23′  :  b2′ |
    | 0  a32′  a33′  :  b3′ |

Now considering a22′ as the non-zero pivot, we first divide the second row by a22′, then multiply the reduced second row by a12′ and subtract it from the first row, and also multiply the reduced second row by a32′ and subtract it from the third row. The operations are shown below in matrix notation.

    | 1  a12′  a13′ : b1′ |  R2/a22′   | 1  a12′  a13′ : b1′ |  R1 − a12′R2   | 1  0  a13″ : b1″ |
    | 0  a22′  a23′ : b2′ | ────────→  | 0   1    a23″ : b2″ | ─────────────→ | 0  1  a23″ : b2″ |
    | 0  a32′  a33′ : b3′ |            | 0  a32′  a33′ : b3′ |  R3 − a32′R2   | 0  0  a33″ : b3″ |

where
    a23″ = a23′/a22′,  b2″ = b2′/a22′
    a13″ = a13′ − a12′ a23″,  b1″ = b1′ − a12′ b2″
    a33″ = a33′ − a32′ a23″,  b3″ = b3′ − a32′ b2″

Finally, the third row elements are divided by a33″; then the reduced third row is multiplied by a13″ and subtracted from the first row, and also multiplied by a23″ and subtracted from the second row:

    | 1  0  a13″ : b1″ |  R3/a33″   | 1  0  a13″ : b1″ |  R1 − a13″R3   | 1  0  0 : b1‴ |
    | 0  1  a23″ : b2″ | ─────────→ | 0  1  a23″ : b2″ | ─────────────→ | 0  1  0 : b2‴ |
    | 0  0  a33″ : b3″ |            | 0  0   1   : b3‴ |  R2 − a23″R3   | 0  0  1 : b3‴ |

Finally, the solution of the system is given by the reduced augmented column, i.e., x1 = b1‴, x2 = b2‴ and x3 = b3‴.
We illustrate the elimination procedure with an example using the augmented matrix,

    | 2  2  4  :  18 |
    | 1  3  2  :  13 |
    | 3  1  3  :  14 |
First, we divide the first row by 2, then subtract the reduced first row from the 2nd row, and also multiply the reduced first row by 3 and subtract it from the third row. The results are shown below:

    | 2  2  4 : 18 |         | 1  1  2 :  9 |  R2 − R1    | 1   1   2 :   9 |
    | 1  3  2 : 13 |  R1/2   | 1  3  2 : 13 | ──────────→ | 0   2   0 :   4 |
    | 3  1  3 : 14 | ──────→ | 3  1  3 : 14 |  R3 − 3R1   | 0  −2  −3 : −13 |

Next, considering the 2nd row, we reduce the second column to [0, 1, 0]T by the row operations shown below:

    | 1   1   2 :   9 |         | 1   1   2 :   9 |  R1 − R2    | 1  0   2 :  7 |
    | 0   2   0 :   4 |  R2/2   | 0   1   0 :   2 | ──────────→ | 0  1   0 :  2 |
    | 0  −2  −3 : −13 | ──────→ | 0  −2  −3 : −13 |  R3 + 2R2   | 0  0  −3 : −9 |

Finally, dividing the third row by −3 and then subtracting from the first row the elements of the third row multiplied by 2, the result is shown below:

    | 1  0   2 :  7 |            | 1  0  2 : 7 |  R1 − 2R3   | 1  0  0 : 1 |
    | 0  1   0 :  2 |  R3/(−3)   | 0  1  0 : 2 | ──────────→ | 0  1  0 : 2 |
    | 0  0  −3 : −9 | ─────────→ | 0  0  1 : 3 |             | 0  0  1 : 3 |

Thus the solution is x1 = 1, x2 = 2, x3 = 3.
Example 4: Solve the following system of equations by Gauss-Jordan elimination method:

    | 3  18  9 | | x1 |   |  18 |
    | 2   3  3 | | x2 | = | 117 |
    | 4   1  2 | | x3 |   | 283 |

Solution: We consider the augmented matrix and solve the system by Gauss-Jordan elimination method. The computations are shown in compact matrix notation as given below. The augmented matrix is,
    | 3  18  9 :  18 |
    | 2   3  3 : 117 |
    | 4   1  2 : 283 |

Step 1: The pivot is 3 in the first column. The first column is transformed into [1, 0, 0]T by the row operations shown below:

    | 3  18  9 :  18 |         | 1   6  3 :   6 |  R2 − 2R1   | 1    6    3 :   6 |
    | 2   3  3 : 117 |  R1/3   | 2   3  3 : 117 | ──────────→ | 0   −9   −3 : 105 |
    | 4   1  2 : 283 | ──────→ | 4   1  2 : 283 |  R3 − 4R1   | 0  −23  −10 : 259 |

Step 2: The second column is transformed into [0, 1, 0]T by the row operations shown below:

    | 1    6    3 :   6 |          | 1    6    3  :    6  |  R1 − 6R2    | 1  0   1   :   76  |
    | 0   −9   −3 : 105 |  −R2/9   | 0    1   1/3 : −35/3 | ───────────→ | 0  1  1/3  : −35/3 |
    | 0  −23  −10 : 259 | ───────→ | 0  −23  −10  :  259  |  R3 + 23R2   | 0  0  −7/3 : −28/3 |

Step 3: The third column is transformed into [0, 0, 1]T by the row operations shown below:

    | 1  0   1   :   76  |              | 1  0   1  :   76  |  R1 − R3     | 1  0  0 :  72 |
    | 0  1  1/3  : −35/3 |  R3/(−7/3)   | 0  1  1/3 : −35/3 | ───────────→ | 0  1  0 : −13 |
    | 0  0  −7/3 : −28/3 | ──────────→  | 0  0   1  :    4  |  R2 − R3/3   | 0  0  1 :   4 |

Hence the solution is x1 = 72, x2 = −13, x3 = 4.
A sufficient condition for the convergence of these iterative methods is that the coefficient matrix is diagonally dominant, i.e., either (row dominance)

    sum over j = 1 to n, j ≠ i, of |aij| < |aii|, for i = 1, 2, ..., n        (4.15)

or (column dominance)

    sum over i = 1 to n, i ≠ j, of |aij| < |ajj|, for j = 1, 2, ..., n        (4.16)
There are two forms of iteration methods termed as Jacobi iteration method
and Gauss-Seidel iteration method.
Jacobi Iteration Method: Consider a system of n linear equations,
The diagonal elements aii, i = 1, 2, ..., n are non-zero and satisfy the set of
sufficient conditions stated earlier. When the system of equations do not satisfy
these conditions, we may rearrange the system in such a way that the conditions
hold.
In order to apply the iteration we rewrite the equations in the following form:
    x1 = (b1 − a12 x2 − a13 x3 − ... − a1n xn) / a11
    x2 = (b2 − a21 x1 − a23 x3 − ... − a2n xn) / a22
    x3 = (b3 − a31 x1 − a32 x2 − ... − a3n xn) / a33
    ............................................................
    xn = (bn − an1 x1 − an2 x2 − ... − an,n-1 x(n-1)) / ann
    x1^(k+1) = (b1 − a12 x2^(k) − a13 x3^(k) − ... − a1n xn^(k)) / a11
    x2^(k+1) = (b2 − a21 x1^(k) − a23 x3^(k) − ... − a2n xn^(k)) / a22
    x3^(k+1) = (b3 − a31 x1^(k) − a32 x2^(k) − ... − a3n xn^(k)) / a33        (4.17)
    .................................................................
    xn^(k+1) = (bn − an1 x1^(k) − an2 x2^(k) − ... − an,n-1 x(n-1)^(k)) / ann

where k = 0, 1, 2, ...
The iterations are continued till the desired accuracy is achieved. This is checked by the relations,

    |xi^(k+1) − xi^(k)| < ε, for i = 1, 2, ..., n        (4.18)
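A minimal sketch of the Jacobi sweep with the stopping test (4.18), tried here on the diagonally dominant system of Example 5 below; the function name and tolerances are assumptions:

```python
def jacobi(A, b, x0, tol=1e-6, maxit=100):
    # one sweep uses only values from the previous iterate x^(k)
    n = len(A)
    x = x0[:]
    for _ in range(maxit):
        x_new = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i))
                 / A[i][i] for i in range(n)]
        if max(abs(xn - xo) for xn, xo in zip(x_new, x)) < tol:  # test (4.18)
            return x_new
        x = x_new
    return x

# the diagonally dominant system of Example 5; its exact solution is (1, 2, 3, 0)
A = [[10.0, -2.0, -1.0, -1.0],
     [-2.0, 10.0, -1.0, -1.0],
     [-1.0, -1.0, 10.0, -2.0],
     [-1.0, -1.0, -2.0, 10.0]]
b = [3.0, 15.0, 27.0, -9.0]
x = jacobi(A, b, [0.0] * 4)
```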
In the Gauss-Seidel iteration each improved component is used as soon as it becomes available. Thus, for computing x2^(k+1), the improved value x1^(k+1) is used instead of x1^(k); and for computing x3^(k+1), the improved values x1^(k+1) and x2^(k+1) are used. Finally, for computing xn^(k+1), the improved values of all the components x1^(k+1), x2^(k+1), ..., x(n-1)^(k+1) are used. Further, as in the Jacobi iteration, the iterations are continued till the desired accuracy is achieved.
Example 5: Solve the following system by Gauss-Seidel iterative method correct
upto four significant digits.
10 x1 − 2 x2 − x3 − x4 = 3
− 2 x1 + 10 x2 − x3 − x4 = 15
− x1 − x2 + 10 x3 − 2 x4 = 27
− x1 − x2 − 2 x3 + 10 x4 = −9
Solution: The given system clearly has a diagonally dominant coefficient matrix, i.e.,

    |aii| ≥ sum over j = 1 to n, j ≠ i, of |aij|, for i = 1, 2, ..., n
The results of successive iterations are given in the table below.
k x1 x2 x3 x4
1 0.72 1.824 2.774 –0.0196
2 0.9403 1.9635 2.9864 –0.0125
3 0.9901 1.9954 2.9960 –0.0023
4 0.9984 1.9990 2.9993 –0.0004
5 0.9997 1.9998 2.9998 –0.0003
6 0.9998 1.9998 2.9998 –0.0003
7 1.0000 2.0000 3.0000 0.0000
The second iteration gives,

    x1^(2) = (1/20)(30 − 2 × 2.14 − 2.98) = 1.137
    x2^(2) = (1/40)(75 + 1.137 + 3 × 2.98) = 2.127
    x3^(2) = (1/10)(30 − 2 × 1.137 + 2.127) = 2.986

The third iteration gives,

    x1^(3) = (1/20)(30 − 2 × 2.127 − 2.986) = 1.138
    x2^(3) = (1/40)(75 + 1.138 + 3 × 2.986) = 2.127
    x3^(3) = (1/10)(30 − 2 × 1.138 + 2.127) = 2.985
Thus the solution correct to three significant digits can be written as x1 = 1.14,
x2 = 2.13, x3 = 2.98.
Example 7: Solve the following system correct to three significant digits, using Jacobi iteration method.

    10x1 + 8x2 − 3x3 + x4 = 16
    3x1 − 4x2 + 10x3 + x4 = 10
    2x1 + 10x2 + x3 − 4x4 = 9
    2x1 + 2x2 − 3x3 + 10x4 = 11
Solution: The system is first rearranged (by interchanging the second and third equations) so that the coefficient matrix is diagonally dominant. The equations are rewritten for starting the Jacobi iteration as,

    x1^(k+1) = 1.6 − 0.8 x2^(k) + 0.3 x3^(k) − 0.1 x4^(k)
    x2^(k+1) = 0.9 − 0.2 x1^(k) − 0.1 x3^(k) + 0.4 x4^(k)
    x3^(k+1) = 1.0 − 0.3 x1^(k) + 0.4 x2^(k) − 0.1 x4^(k)
    x4^(k+1) = 1.1 − 0.2 x1^(k) − 0.2 x2^(k) + 0.3 x3^(k), where k = 0, 1, 2, ...

The initial guess of the solution is taken as,

    x1^(0) = 1.6, x2^(0) = 0.9, x3^(0) = 1.0, x4^(0) = 1.1
The results of successive iterations computed by Jacobi iterations are given in
the following table:
k x1 x2 x3 x4
1 1.07 0.92 0.77 0.90
2 1.050 0.969 0.957 0.933
3 1.0186 0.9765 0.9928 0.9923
4 1.0174 0.9939 0.9858 0.9989
5 0.9997 0.9975 0.9925 0.9974
6 1.0001 0.9997 0.9994 0.9984
7 1.0002 0.9998 1.0001 0.9999
Thus the solution correct to three significant digits is x1 = 1.000, x2 = 1.000, x3 = 1.000, x4 = 1.000.
Algorithm: Solution of a system of equations by Gauss-Seidel iteration method.
Step 1: Input elements aij of augmented matrix for i = 1 to n, j = 1 to n + 1.
Step 2: Input epsilon, maxit [epsilon is desired accuracy, maxit is maximum
number of iterations]
Step 3: Set xi = 0, for i = 1 to n
Step 4: Set big = 0, sum = 0, j = 1, k = 1, iter = 0
Step 5: Check if k ≠ j, set sum = sum + ajk xk
Step 6: Check if k < n, set k = k + 1, go to Step 5 else go to next step
Step 7: Compute temp = (aj,n+1 – sum) / ajj
Step 8: Compute relerr = abs (xj – temp) / temp
Step 9: Check if big < relerr then big = relerr
Step 10: Set xj = temp
Step 11: Set j = j + 1, k = 1
Step 12: Check if j ≤ n to Step 5 else go to next step
Step 13: Check if relerr < epsilon then {write iterations converge, and write
xj for j = 1 to n go to Step 15} else if iter < maxit iter = iter + 1 go
to Step 5
Step 14: Write ‘iterations do not converge in’, maxit ‘iteration’
Step 15: Write xj for j = 1 to n
Step 16: End
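A hedged rendering of the algorithm’s Gauss-Seidel sweep with its relative-error test; a guard is added for components whose value converges to zero, a case the relative test cannot handle, and the names are illustrative:

```python
def gauss_seidel(A, b, tol=1e-6, maxit=100):
    # updated components are used immediately within a sweep (Steps 5-10)
    n = len(A)
    x = [0.0] * n                            # Step 3
    for _ in range(maxit):
        big = 0.0                            # Step 4
        for j in range(n):
            s = sum(A[j][k] * x[k] for k in range(n) if k != j)  # Step 5
            temp = (b[j] - s) / A[j][j]      # Step 7
            if temp != 0.0:                  # guard before the relative error
                big = max(big, abs(x[j] - temp) / abs(temp))     # Steps 8-9
            x[j] = temp                      # Step 10
        if big < tol:                        # Step 13
            return x
    return x

# the system of Example 5 again; the exact solution is (1, 2, 3, 0)
A = [[10.0, -2.0, -1.0, -1.0],
     [-2.0, 10.0, -1.0, -1.0],
     [-1.0, -1.0, 10.0, -2.0],
     [-1.0, -1.0, -2.0, 10.0]]
b = [3.0, 15.0, 27.0, -9.0]
x = gauss_seidel(A, b)
```

Because the exact component x4 is 0, the relative-error test may never report convergence for it; the sweep simply runs to maxit, which still yields an accurate answer.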
Applying Gaussian elimination to this augmented matrix, we get the following at the end of the first step:

    R2 − 2R1    | 2   3  −1 :  1  0  0 |
    ─────────→  | 0  −2  −1 : −2  1  0 |
    R3 − R1     | 0  −6   2 : −1  0  1 |
Similarly, at the end of the 2nd step we get,

                | 2   3  −1 :  1   0  0 |
    ─────────→  | 0  −2  −1 : −2   1  0 |
    R3 − 3R2    | 0   0   5 :  5  −3  1 |

Thus, we get the three columns of the inverse matrix by solving the following three systems:

    | 2   3  −1 :  1 |     | 2   3  −1 :  0 |     | 2   3  −1 : 0 |
    | 0  −2  −1 : −2 |     | 0  −2  −1 :  1 |     | 0  −2  −1 : 0 |
    | 0   0   5 :  5 |     | 0   0   5 : −3 |     | 0   0   5 : 1 |

The solutions of the three are easily derived by back-substitution, which give the three columns of the inverse matrix given below:

    | 1/4    0     1/4  |
    | 1/2  −1/5  −1/10  |
    |  1   −3/5   1/5   |
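The same computation can be sketched as a small routine that carries the full identity block through Gauss-Jordan reduction of [A : I]; it assumes non-zero pivots in their natural order, as holds in this example, and the names are illustrative:

```python
def inverse(A):
    """Gauss-Jordan reduction of the augmented matrix [A : I]."""
    n = len(A)
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for k in range(n):
        piv = M[k][k]
        M[k] = [v / piv for v in M[k]]            # scale the pivot row
        for i in range(n):
            if i != k:
                f = M[i][k]                       # clear column k elsewhere
                M[i] = [v - f * w for v, w in zip(M[i], M[k])]
    return [row[n:] for row in M]                 # right block is A^(-1)

Ainv = inverse([[2.0, 3.0, -1.0], [4.0, 4.0, -3.0], [2.0, -3.0, 1.0]])
# matches the inverse found above:
# [[1/4, 0, 1/4], [1/2, -1/5, -1/10], [1, -3/5, 1/5]]
```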
We can also employ Gauss-Jordan elimination to compute the inverse matrix.
This is illustrated by the following example:
Example 9: Compute the inverse of the following matrix by Gauss-Jordan
elimination.
        | 2   3  −1 |
    A = | 4   4  −3 |
        | 2  −3   1 |
Solution: We consider the augmented matrix [A : I],

              | 2   3  −1 : 1 0 0 |          | 1  3/2  −1/2 : 1/2  0  0 |
    [A : I] = | 4   4  −3 : 0 1 0 | →R1/2→   | 4   4    −3  :  0   1  0 |
              | 2  −3   1 : 0 0 1 |          | 2  −3     1  :  0   0  1 |

    R2 − 4R1   | 1  3/2  −1/2 : 1/2  0  0 |             | 1  3/2  −1/2 : 1/2   0    0 |
    ────────→  | 0  −2    −1  : −2   1  0 | →R2/(−2)→   | 0   1    1/2 :  1  −1/2   0 |
    R3 − 2R1   | 0  −6     2  : −1   0  1 |             | 0  −6     2  : −1    0    1 |

    R1 − 3R2/2  | 1  0  −5/4 : −1    3/4   0 |           | 1  0  −5/4 : −1    3/4    0  |
    ─────────→  | 0  1   1/2 :  1   −1/2   0 | →R3/5→    | 0  1   1/2 :  1   −1/2    0  |
    R3 + 6R2    | 0  0    5  :  5   −3     1 |           | 0  0    1  :  1   −3/5  1/5  |

    R1 + 5R3/4  | 1  0  0 : 1/4    0    1/4  |
    ─────────→  | 0  1  0 : 1/2  −1/5  −1/10 |
    R2 − R3/2   | 0  0  1 :  1   −3/5   1/5  |

which gives

             | 1/4    0     1/4  |
    A^(-1) = | 1/2  −1/5  −1/10  |
             |  1   −3/5   1/5   |
Check Your Progress
1. When is the system of equation homogenous and when non-homogenous?
2. Explain Gauss elimination method.
3. Explain Gauss-Jordan elimination method.
4. Why are iterative methods used?
5. Explain Gauss-Seidel iteration method.
4.4 SUMMARY
• Many engineering and scientific problems require the solution of a system of linear equations.

• The system of equations is termed a homogeneous one if all the elements in the column vector b of the equation Ax = b are zero.

• Cramer’s rule and the matrix inversion method are two classical methods to solve a system of equations.

• If D = |A| is the determinant of the coefficient matrix A and Di is the determinant obtained by replacing the ith column of D by the column vector b, then Cramer’s rule gives the solution vector x by the equations xi = Di/D, for i = 1, 2, …, n.

• Gaussian elimination method consists in systematic elimination of the unknowns so as to reduce the coefficient matrix into an upper triangular system, which is then solved by the procedure of back-substitution.

• In Gauss-Jordan elimination, the augmented matrix is transformed by row operations such that the coefficient matrix reduces to the identity matrix.

• We can use iteration methods to solve a system of linear equations when the coefficient matrix is diagonally dominant.

• There are two forms of iteration methods termed as Jacobi iteration method and Gauss-Seidel iteration method.

• Gaussian elimination can be used to compute the inverse of a matrix.
4.6 SELF ASSESSMENT QUESTIONS AND EXERCISES

Short-Answer Questions
1. Define the system of linear equations.
2. How many determinants do we have to compute in Cramer’s rule?
3. What is the basic difference between Gaussian elimination and Gauss-Jordan
elimination method?
4. What are iterative methods?
5. State an application of Gaussian elimination method.
Long-Answer Questions
1. Use Cramer’s rule to solve the following systems of equations:
(i) x1 – x2 – x3 = 1 (ii) x1 + x2 + x3 = 6
2x1 – 3x2 + x3 = 1 x1 + 2x2 + 3x3 = 14
3x1 + x2 – x3 = 2 x1 – 2x2 + x3 = 2
2. Use the matrix inversion method to solve the following systems of equations:
(i) 4x1 – x2 + 2x3 = 15 (ii) x1 + 4x2 + 9x3 = 16
x1 – 2x2 – 3x3 = –5 2x1 + x2 + x3 = 10
5x1 – 7x2 + 9x3 = 8 3x1 + 2x2 + 3x3 = 18
5.0 INTRODUCTION
5.1 OBJECTIVES
5.2 FINDING EIGEN VALUES AND EIGEN VECTORS
Let A = [aij] be a square matrix of order n. If there exists a non-zero (non-null) column vector X and a scalar λ such that,

    AX = λX

then λ is called an eigenvalue of the matrix A and X is called an eigenvector corresponding to the eigenvalue λ.

The problem of finding the values of the parameter λ for which the homogeneous system,

    AX = λX ...(5.1)

possesses non-trivial solutions is known as the characteristic value problem or eigenvalue problem.
Thus, the system of Equation (5.1) possesses a non-trivial solution if and only if,

    |A – λI| = 0 ...(5.2)

This equation is known as the characteristic equation of the matrix A. The roots of Equation (5.2) are called latent roots or characteristic values or eigenvalues of the matrix A. The corresponding non-trivial solutions are called eigenvectors or characteristic vectors of A.

If A is an n × n matrix, then its characteristic equation is an nth degree polynomial equation in λ. Therefore, an n × n matrix has n eigenvalues (real or complex).
Suppose λi (i = 1, 2, 3, . . ., n) are the eigenvalues of A; then for each λi there exists a non-null vector Xi such that

    AXi = λi Xi (i = 1, 2, 3, ........, n)

Multiplying both sides by a non-zero scalar k, we get

    A(kXi) = λi (kXi)

This implies that an eigenvector is determined only up to a multiplicative scalar; in other words, the eigenvector is not unique. But corresponding to an eigenvector of the matrix A, there can be one and only one eigenvalue of the matrix A.
It can be shown that for a matrix A of order n, the characteristic Equation (5.2) can be written as,

    λ^n – β1 λ^(n–1) + β2 λ^(n–2) – ...... + (–1)^n βn = 0
where βr is the sum of all the determinants formed from square submatrices of order r whose principal diagonals lie along the principal diagonal of A.
Notes:
1. An eigenvector of a matrix cannot correspond to two different eigenvalues.
2. An eigenvalue of a matrix can, and in general will, correspond to many different eigenvectors.
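The coefficients βr can be generated mechanically as sums of principal minors. A minimal sketch, with illustrative names, using the matrix of Example 3 below:

```python
from itertools import combinations

def det(M):
    # determinant by cofactor expansion along the first row
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def beta(A, r):
    # sum of all r x r minors whose principal diagonal lies on that of A
    return sum(det([[A[i][j] for j in idx] for i in idx])
               for idx in combinations(range(len(A)), r))

A = [[2, 2, 0], [2, 1, 1], [-7, 2, -3]]
# beta_1 = 0, beta_2 = -13, beta_3 = -12, so the characteristic
# equation is  l^3 - 0*l^2 + (-13)*l - (-12) = l^3 - 13*l + 12 = 0
```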
Example 1: Find the eigenvalues and eigenvectors of

    A = | 3   4 |
        | 4  −3 |

Solution: The characteristic equation is

    | 3 − λ    4    |
    |   4    −3 − λ | = 0
i.e., λ^2 − 25 = 0, so that λ = ±5.

The eigenvectors are given by AX = λX, i.e., (A − λI)X = 0:

    | 3 − λ    4    | | x1 |
    |   4    −3 − λ | | x2 | = 0

i.e., (3 − λ)x1 + 4x2 = 0
      4x1 − (3 + λ)x2 = 0

If λ = 5, we get

    −2x1 + 4x2 = 0
    4x1 − 8x2 = 0

or x1/2 = x2/1. The eigenvector is (2, 1)T.

If λ = −5, we get

    8x1 + 4x2 = 0
    4x1 + 2x2 = 0

or x1/(−1) = x2/2. The eigenvector is (−1, 2)T.
The eigenvalues are 5 and −5, with the corresponding eigenvectors (2, 1)T and (−1, 2)T respectively.
Example 2: Find the eigenvalues and eigenvectors of the matrix

    A = | 3  −4  4 |
        | 1  −2  4 |
        | 1  −1  3 |

Solution: The characteristic equation is,

    | 3 − λ   −4      4   |
    |   1   −2 − λ    4   | = 0
    |   1     −1    3 − λ |

which gives λ^3 − 4λ^2 + λ + 6 = 0, i.e., (λ + 1)(λ − 2)(λ − 3) = 0, so the eigenvalues are −1, 2 and 3. The eigenvectors are given by,

    | 3 − λ   −4      4   | | x1 |
    |   1   −2 − λ    4   | | x2 | = 0
    |   1     −1    3 − λ | | x3 |
Case 1: λ = −1 gives the eigenvector X1 = (−12, −12, 0)T, or equivalently (1, 1, 0)T.
Case 2: λ = 2 gives:
x1 – 4x2 + 4x3 = 0, x1 – 4x2 + 4x3 = 0, x1 – x2 + x3 = 0
Solving the first and third equations we get the eigenvector X2 = (0, 1, 1)T.
Case 3: λ = 3 gives:
−4x2 + 4x3 = 0, x1 – 5x2 + 4x3 = 0, x1 – x2 = 0
Solving any two of these equations we get the eigenvector X3 = (1, 1, 1)T.
Example 3: Find the eigenvalues and eigenvectors of the matrix

    |  2  2   0 |
    |  2  1   1 |
    | −7  2  −3 |
Solution: The characteristic equation is,

    λ^3 – β1λ^2 + β2λ – β3 = 0

where β1 = 2 + 1 – 3 = 0, β2 = –5 – 6 – 2 = –13, β3 = –12.

The characteristic equation is,

    λ^3 – 13λ + 12 = 0
    (λ – 1)(λ + 4)(λ – 3) = 0

The eigenvalues are 1, –4, 3. When

    λ = 1,  A – λI = |  1  2   0 |
                     |  2  0   1 |
                     | –7  2  –4 |

The cofactors of the elements of the first row are –2, 1, 4, so the eigenvector is (–2, 1, 4)T.
When λ = 3,

    A – λI = | −1  2   0 |
             |  2 −2   1 |
             | −7  2  −6 |

The cofactors of the elements of the first row are 10, 5, –10, which are proportional to 2, 1, –2 respectively. The eigenvector is (2, 1, –2)T.

When λ = –4,

    A – λI = |  6  2  0 |
             |  2  5  1 |
             | −7  2  1 |

The cofactors of the elements of the first row are 3, –9, 39, which are proportional to 1, –3, 13 respectively. The eigenvector is (1, –3, 13)T.
Example 4: Find the eigenvalues and eigenvectors of

    A = | 1  6  1 |
        | 1  2  0 |
        | 0  0  3 |

Solution: The characteristic equation is λ^3 – β1λ^2 + β2λ – β3 = 0, where

    β1 = 1 + 2 + 3 = 6

    β2 = | 2  0 |   | 1  1 |   | 1  6 |
         | 0  3 | + | 0  3 | + | 1  2 | = 6 + 3 + (2 – 6) = 5

    β3 = |A| = 1(6) – 6(3) + 1(0) = –12
The characteristic equation is,

    λ^3 – 6λ^2 + 5λ + 12 = 0

The roots of this equation, –1, 3 and 4, are the eigenvalues of A. When

    λ = –1,  A – λI = | 2  6  1 |
                      | 1  3  0 |
                      | 0  0  4 |
The cofactors of the elements of the first row give the eigenvector as X1 = (12, –4, 0)T or (3, –1, 0)T.
When λ = 3,

    A – λI = | −2  6  1 |
             |  1 −1  0 |
             |  0  0  0 |

Since the cofactors of the elements of the 1st and 2nd rows vanish completely, we consider the cofactors of the elements of the 3rd row to get the eigenvector as X2 = (1, 1, –4)T.
When λ = 4,

    A – λI = | −3  6   1 |
             |  1 −2   0 |
             |  0  0  −1 |

Considering the cofactors of the elements of the 1st row we get the eigenvector as X3 = (2, 1, 0)T.
Properties of Eigenvalues and Eigenvectors
1. If all the eigenvalues of a matrix are distinct, then the corresponding eigenvectors
are linearly independent.
2. If two or more eigenvalues of a matrix are equal then the corresponding
eigenvectors may be linearly independent or linearly dependent.
3. The eigenvalues of a matrix and its transpose are the same. The characteristic equations of A and AT (the transpose of A) are,

    |A – λI| = 0 ...(5.3)
    |AT – λI| = 0 ...(5.4)
LHS of Equation (5.4) is the determinant obtained by interchanging rows into
columns of |A – I|. Since, the value of a determinant is unaltered by the
interchanging of rows and columns, Equations (5.3) and (5.4) are identical.
Therefore, the eigenvalues of a matrix and its transpose are the same.
4. The sum of the eigenvalues of a matrix A, is equal to the sum of the diagonal
elements of A. The sum of the diagonal elements is called the Trace of the
matrix A. The characteristic equation of A is,
λⁿ – β1λⁿ⁻¹ + β2λⁿ⁻² – .... + (–1)ⁿβn = 0 ...(5.5)
Where, β1 = sum of the diagonal elements of A. ...(5.6)
Let, λ1, λ2, ....., λn be the roots of Equation (5.5)
Then, λ1 + λ2 + .... + λn = –(–β1)/1 = β1 ...(5.7)
From Equations (5.6) and (5.7) we find that the sum of the eigenvalues is equal to
the sum of the diagonal elements.
5. The product of the eigenvalues of a matrix A is |A|.
The characteristic equation of A is,
λⁿ – β1λⁿ⁻¹ + β2λⁿ⁻² – .... + (–1)ⁿβn = 0 ...(5.8)
Where, βn = Determinant of A.
Then, λ1λ2 ... λn = (–1)ⁿ(–1)ⁿ βn/1 = βn ...(5.9)
From Equations (5.8) and (5.9) we find that the product of the eigenvalues is
equal to the value of the determinant of A.
6. The eigenvalues of a triangular matrix are the diagonal elements of it.
NOTES Notes:
1. The sum of the eigenvalues of a matrix A, is equal to the sum of the
diagonal elements of A, which is called the Trace of the matrix A.
2. If one of the eigenvalues is zero, then the matrix is singular and conversely,
when the matrix is singular then at least one of the eigenvalues ought to
be zero.
3. The eigenvalues of a diagonal matrix are the diagonal elements of it.
7. If λi (i = 1, 2, 3, ..., n) are the eigenvalues of A, then:
(i) kλi (i = 1, 2, 3, ..., n) are the eigenvalues of the matrix kA, k being a
non-zero scalar.
(ii) 1/λi (i = 1, 2, 3, ..., n) are the eigenvalues of the inverse matrix A⁻¹, provided
λi ≠ 0.
(i) Let Xi (i = 1, 2, 3, ..., n) be the eigenvectors of the matrix A corresponding
to the eigenvalues λi (i = 1, 2, 3, ..., n). Then,
AXi = λiXi (i = 1, 2, 3, ..., n) ...(5.10)
Multiplying by k (a non-zero scalar):
kAXi = kλiXi
This implies that kλi (i = 1, 2, 3, ..., n) are the eigenvalues of kA.
(ii) Premultiplying Equation (5.10) by A⁻¹:
A⁻¹AXi = λiA⁻¹Xi
IXi = λiA⁻¹Xi or A⁻¹Xi = λi⁻¹Xi
This implies that λi⁻¹ (i = 1, 2, 3, ..., n) are the eigenvalues of A⁻¹.
In general, if λi (i = 1, 2, 3, ..., n) are the eigenvalues of A, then λiᵐ (i =
1, 2, 3, ..., n), where m is an integer, are the eigenvalues of Aᵐ.
Note: A and Aᵐ (m being an integer) have the same eigenvectors even though the
eigenvalues are different.
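Properties 7(i), 7(ii) and the Aᵐ generalization can be checked numerically. The 2×2 matrix below is an illustrative choice of ours, not from the text; a sketch assuming NumPy is available:

```python
import numpy as np

# Illustrative symmetric matrix (not from the text) with eigenvalues 1 and 3
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam = np.sort(np.linalg.eigvals(A).real)

k = 5.0
# (i) eigenvalues of kA are k * lambda_i
assert np.allclose(np.sort(np.linalg.eigvals(k * A).real), k * lam)
# (ii) eigenvalues of A^-1 are 1 / lambda_i
assert np.allclose(np.sort(np.linalg.eigvals(np.linalg.inv(A)).real),
                   np.sort(1.0 / lam))
# In general, eigenvalues of A^m are lambda_i^m (here m = 3)
assert np.allclose(np.sort(np.linalg.eigvals(A @ A @ A).real), lam ** 3)
```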
Example 5: Find the sum of the squares of the eigenvalues of
[ 3   0   0 ]
[ 8   4   0 ]
[ 6   2   5 ]
Solution: Since the matrix is triangular, the eigenvalues are 3, 4 and 5. Hence, the
sum of the squares of the eigenvalues = 9 + 16 + 25 = 50.
Example 6: Two eigenvalues of the matrix
[ 6  −2   2 ]
[−2   3  −1 ]
[ 2  −1   3 ]
are 2 and 8. Find the third eigenvalue.
Solution: Sum of the eigenvalues = Sum of the diagonal elements = 6 + 3 + 3 = 12.
Since the sum of the two given eigenvalues is 10 (2 + 8), the third eigenvalue is
12 – 10 = 2.
Example 7: If 3 and 15 are two of the eigenvalues of
[ 8  −6   2 ]
[−6   7  −4 ]
[ 2  −4   3 ]
find the value of the determinant.
Solution: Let λ1, λ2, λ3 be the eigenvalues.
Then, λ1 + λ2 + λ3 = 8 + 7 + 3 = 18, so 3 + 15 + λ3 = 18, giving λ3 = 0.
The value of the determinant = Product of the eigenvalues = 0.
Example 8: If one of the eigenvalues of a matrix is zero, then what is the type of
matrix?
Solution: The matrix is singular.
Example 9: Find the eigenvalues and eigenvectors of
[ 0   1   1 ]
[ 1   0   1 ]
[ 1   1   0 ]
Solution: The characteristic equation is λ³ – 3λ – 2 = 0. Solving this equation we
get the eigenvalues as –1, –1 and 2.
The eigenvectors are given by
[−λ   1    1 ] [x1]
[ 1  −λ    1 ] [x2] = 0
[ 1   1   −λ ] [x3]
i.e., –λx1 + x2 + x3 = 0, x1 – λx2 + x3 = 0, x1 + x2 – λx3 = 0
Case 1: = 2 gives:
– 2x1 + x2 + x3 = 0, x1 – 2x2 + x3 = 0, x1 + x2 – 2x3 = 0
Solving any two of these equations we get the eigenvector X1 = (1, 1, 1)ᵀ
Case 2: = –1 gives:
x1 + x2 + x3 = 0, x1 + x2 + x3 = 0, x1 + x2 + x3 = 0
Solving any two of these equations we get x1 = 0, x2 = 0, x3 = 0, and the vector X2
becomes a null vector, which cannot be an eigenvector. This is because of the fact
that all the three equations are one and the same. The rank of coefficient matrix is
1. Therefore, the system will have (n – r) = (3 – 1) = 2 linearly independent
solutions. This indicates that, corresponding to = –1, there will be two linearly
independent eigenvectors.
To get the solutions, we assign arbitrary values to two of the three variables as shown
below. Considering the equation x1 + x2 + x3 = 0 and assigning x3 = 0, x2 = 1, we
get x1 = –1.
The eigenvector is X2 = (−1, 1, 0)ᵀ.
Similarly, assigning x1 = 0, x2 = 1, we get x3 = –1, so that
X3 = (0, 1, −1)ᵀ
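The rank argument used above (two independent eigenvectors for the repeated eigenvalue) can be checked numerically; a sketch assuming NumPy is available:

```python
import numpy as np

A = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])

# For the repeated eigenvalue lambda = -1, the coefficient matrix A - lambda*I
# has rank r = 1, so there are n - r = 3 - 1 = 2 independent eigenvectors.
assert np.linalg.matrix_rank(A + np.eye(3)) == 1
# For the simple eigenvalue lambda = 2, the rank is 2, giving one eigenvector.
assert np.linalg.matrix_rank(A - 2 * np.eye(3)) == 2
```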
Example 10: Show that the eigenvalues of A =
[ 1   2 ]
[ 2   1 ]
are –1, 3 and verify that the eigenvalues are 1 and 9 for A².
Solution: The characteristic equation is λ² – 2λ – 3 = 0. The eigenvalues are –1,
3 and the corresponding eigenvectors are (−1, 1)ᵀ and (1, 1)ᵀ. By the property
stated above, the eigenvalues of A² are 1, 9 and the corresponding eigenvectors
are again (–1, 1)ᵀ and (1, 1)ᵀ.
Verification: A² =
[ 5   4 ]
[ 4   5 ]
has the characteristic equation λ² – 10λ + 9 = 0.
This equation gives the eigenvalues 1, 9 and eigenvectors (−1, 1)ᵀ and (1, 1)ᵀ.
The inner product <X, X> is known as the square of the length of the vector X and it is
denoted by |X|², where |X| is read as norm X. If |X| = 1 then X is called a unit
vector.
If the inner product between two vectors vanishes, then we say that the two vectors
are orthogonal to each other.
Example 11: Show that X = (1, –1, 2)ᵀ and Y = (3, –1, –2)ᵀ are orthogonal.
Solution: <X, Y> = XᵀY = (1)(3) + (–1)(–1) + (2)(–2) = 3 + 1 – 4 = 0.
Hence X and Y are orthogonal.
Let A be a real symmetric matrix with eigenvalue λ and eigenvector X, so that
AX = λX. Premultiplying by X̄′ (the transpose of the conjugate of X),
X̄′AX = λX̄′X ...(5.11)
Taking the complex conjugate on both sides,
X′ĀX̄ = λ̄X′X̄
Since A is real, Ā = A, so
X′AX̄ = λ̄X′X̄
Taking the transpose on both sides, and using A′ = A, we get
X̄′AX = λ̄X̄′X
Subtracting this from Equation (5.11),
(λ − λ̄)X̄′X = 0
Since X̄′X ≠ 0, we get λ = λ̄, i.e., the eigenvalues of a real symmetric matrix are real.
Example 12: Find the eigenvalues of the matrix
[ 10  −2  −5 ]
[ −2   2   3 ]
[ −5   3   5 ]
and verify that the eigenvectors are mutually orthogonal.
Solution: Let A be the given matrix.
β1 = 17; β2 = 42; β3 = 0
The characteristic equation is,
λ³ – 17λ² + 42λ = 0
λ(λ² – 17λ + 42) = 0
λ(λ – 3)(λ – 14) = 0
The eigenvalues are 0, 3 and 14. To find the eigenvectors we consider,
[ 10 – λ    –2      –5   ] [x1]
[  –2      2 – λ     3   ] [x2] = 0
[  –5       3      5 – λ ] [x3]
i.e.,
(10 – λ)x1 – 2x2 – 5x3 = 0
–2x1 + (2 – λ)x2 + 3x3 = 0
–5x1 + 3x2 + (5 – λ)x3 = 0
Case 1: λ = 0 gives:
10x1 – 2x2 – 5x3 = 0, –2x1 + 2x2 + 3x3 = 0
Solving, we get the eigenvector X1 = (1, –5, 4)ᵀ
Case 2: λ = 3 gives:
7x1 – 2x2 – 5x3 = 0, –2x1 – x2 + 3x3 = 0
Solving, we get the eigenvector X2 = (1, 1, 1)ᵀ
Case 3: λ = 14 gives:
–4x1 – 2x2 – 5x3 = 0, –2x1 – 12x2 + 3x3 = 0
Solving, we get the eigenvector X3 = (–3, 1, 2)ᵀ
The eigenvectors are therefore (1, –5, 4)ᵀ, (1, 1, 1)ᵀ and (–3, 1, 2)ᵀ, and
X1 · X2 = 0, X2 · X3 = 0 and X3 · X1 = 0
Hence, the three eigenvectors are mutually orthogonal.
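The eigenpairs and the mutual orthogonality found in Example 12 can be verified numerically; a sketch assuming NumPy is available:

```python
import numpy as np

A = np.array([[10.0, -2.0, -5.0],
              [-2.0,  2.0,  3.0],
              [-5.0,  3.0,  5.0]])

pairs = [(0.0,  np.array([1.0, -5.0, 4.0])),
         (3.0,  np.array([1.0,  1.0, 1.0])),
         (14.0, np.array([-3.0, 1.0, 2.0]))]

for lam, X in pairs:
    assert np.allclose(A @ X, lam * X)    # each pair satisfies A X = lambda X

# Eigenvectors of a symmetric matrix (distinct eigenvalues) are orthogonal
assert pairs[0][1] @ pairs[1][1] == 0
assert pairs[1][1] @ pairs[2][1] == 0
assert pairs[2][1] @ pairs[0][1] == 0
```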
Note: It may be seen that any matrix whose elements are polynomials can be
expressed as a polynomial whose coefficients are matrices, and vice versa. The
following two examples illustrate this concept.
Example 13: Express the following matrices as polynomials with matrix
coefficients.
(i)
[ λ + 2λ²    λ³ – 3 ]
[ 1 + 3λ      –λ²   ]
(ii)
[ 1 + λ + λ²      λ + λ² – λ³     λ³ – 3λ² + 5λ + 1 ]
[ λ³ – 3λ² – 1    λ + λ²          1 – 3λ² + 4λ³     ]
[ λ + λ³ – 1      0               λ³ + λ² + λ + 1   ]
Solution: (i)
[ λ + 2λ²    λ³ – 3 ]
[ 1 + 3λ      –λ²   ]
= λ³ [ 0  1 ] + λ² [ 2   0 ] + λ [ 1  0 ] + [ 0  −3 ]
     [ 0  0 ]      [ 0  −1 ]     [ 3  0 ]   [ 1   0 ]
= Aλ³ + Bλ² + Cλ + D, where A, B, C and D are matrices.
(ii)
[ 1 + λ + λ²      λ + λ² – λ³     λ³ – 3λ² + 5λ + 1 ]
[ λ³ – 3λ² – 1    λ + λ²          1 – 3λ² + 4λ³     ]
[ λ + λ³ – 1      0               λ³ + λ² + λ + 1   ]
= Aλ³ + Bλ² + Cλ + D
Where,
A = [ 0  −1   1 ]    B = [  1   1  –3 ]
    [ 1   0   4 ]        [ –3   1  –3 ]
    [ 1   0   1 ]        [  0   0   1 ]
C = [ 1   1   5 ]    D = [  1   0   1 ]
    [ 0   1   0 ]        [ –1   0   1 ]
    [ 1   0   1 ]        [ –1   0   1 ]
The solution is then obtained iteratively,
where x(k) is referred to as the kth approximation or iterate of x and x(k+1) is
the next, (k + 1)th, iterate of x. In the element-based form, each component of
x(k+1) is computed from the components of x(k).
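The element-based Jacobi formula itself did not survive extraction here; the sketch below states the standard update and applies it to a small diagonally dominant system (the 2×2 system is an illustrative assumption of ours, not from the text):

```python
import numpy as np

def jacobi(A, b, x0, iters=50):
    """Element-based Jacobi update:
    x_i^(k+1) = (b_i - sum_{j != i} a_ij * x_j^(k)) / a_ii
    """
    x = np.asarray(x0, dtype=float).copy()
    n = len(b)
    for _ in range(iters):
        x_new = np.empty(n)
        for i in range(n):
            s = sum(A[i, j] * x[j] for j in range(n) if j != i)
            x_new[i] = (b[i] - s) / A[i, i]
        x = x_new
    return x

# Illustrative diagonally dominant system (not from the text)
A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([9.0, 12.0])
x = jacobi(A, b, np.zeros(2))
assert np.allclose(A @ x, b)   # the iterates converge to the solution
```

Diagonal dominance of A guarantees convergence of the Jacobi iterates.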
5.5 SUMMARY
Short-Answer Questions
1. State any three properties of eigenvalues and eigenvectors.
2. What is an inner product?
Long-Answer Questions
1. Find the eigenvalues and eigenvectors of A =
[ 6  −2   2 ]
[−2   3  −1 ]
[ 2  −1   3 ]
2. If X1 and X2 are eigenvectors corresponding to distinct eigenvalues λ1
and λ2 of A, then show that X1 and X2 are linearly independent.
3. Find the eigenvalues of the following matrices:
(i)
[ 5   2 ]
[ 2   3 ]
(ii)
[ 1       1 + i ]
[ 1 − i     2   ]
(iii)
[ 2   2   1 ]
[ 1   3   1 ]
[ 1   2   2 ]
(iv)
[  3   10    5 ]
[ −2   −3   −4 ]
[  3    5    7 ]
4. Let k1, k2, k3 > 0 and A = [aij], where aij = 1 for i = j and aij = ki/kj for
i ≠ j (i, j = 1, 2, 3).
7. Find the sum and product of the eigenvalues of
[  2   1  −1   0 ]
[  1   3   4   2 ]
[ −1   4   1   2 ]
[  0   2   2   1 ]
8. The eigenvalues of a matrix A are 1, –1 and 2. Find the value of Trace (A) and
the determinant of A.
9. A =
[ 7    4   −4 ]
[ 4   −8   −1 ]
[ 4   −1   −8 ]
If one of the eigenvalues of A is –9, find the other two eigenvalues.
Interpolation and
APPROXIMATION
NOTES
Structure
6.0 Introduction
6.1 Objectives
6.2 Interpolation and Approximation
6.3 Answers to Check Your Progress Questions
6.4 Summary
6.5 Key Words
6.6 Self Assessment Questions and Exercises
6.7 Further Readings
6.0 INTRODUCTION
6.1 OBJECTIVES
Theorem 6.2: For a real-valued function f(x) defined at (n + 1) distinct points
x0, x1, ..., xn, there exists exactly one polynomial of degree ≤ n which interpolates
f(x) at x0, x1, ..., xn.
We know that a polynomial P(x) which has (n + 1) distinct roots x0, x1, ...,
xn can be written as,
P(x) = (x – x0)(x – x1) ..... (x – xn) q(x)
where q(x) is a polynomial whose degree is (n + 1) less than the degree of P(x).
Suppose that two polynomials φ(x) and ψ(x) are of degree ≤ n and that
both interpolate f(x), i.e., φ(x) = ψ(x) = f(x) at x = x0, x1, ..., xn. Consider
P(x) = φ(x) − ψ(x), a polynomial of degree ≤ n. Then P(x) vanishes at the
(n + 1) points x0, x1, ..., xn. Thus P(x) ≡ 0 and φ(x) ≡ ψ(x).
Iterative Linear Interpolation
In this method, we successively generate interpolating polynomials, of any degree,
by iteratively using linear interpolating functions.
Let p01(x) denote the linear interpolating polynomial for the tabulated values at
x0 and x1. Thus, we can write,
p01(x) = [(x1 − x) f0 − (x0 − x) f1] / (x1 − x0)
In determinant form,
p0j(x) = 1/(xj − x0) | f0   x0 − x |,  for j = 1, 2, ..., n   (6.4)
                     | fj   xj − x |
Now, consider the polynomial denoted by p01j(x) and defined by,
p01j(x) = 1/(xj − x1) | p01(x)   x1 − x |,  for j = 2, 3, ..., n   (6.5)
                      | p0j(x)   xj − x |
The polynomial p01j(x) interpolates f(x) at the points x0, x1, xj (j > 1) and is a
polynomial of degree 2. It can be easily verified that,
p01j(x0) = f0, p01j(x1) = f1 and p01j(xj) = fj, because p01(x0) = f0 = p0j(x0), etc.
xk     fk     p0j     p01j    ...    xj − x
x0     f0                            x0 − x
x1     f1     p01                    x1 − x
x2     f2     p02     p012           x2 − x
x3     f3     p03     p013           x3 − x
...    ...    ...     ...     ...    ...
xj     fj     p0j     p01j           xj − x
...    ...    ...     ...     ...    ...
xn     fn     p0n     p01n           xn − x
Solution: Here, x = 2.12. The following table gives the successive iterative linear
interpolation results. The details of the calculations are shown below the table.

xj     s(xj)    p0j       p01j      p012j     xj − x
2.0    0.7909                                 −0.12
2.1    0.7875   0.78682                       −0.02
2.2    0.7796   0.78412   0.78628             0.08
2.3    0.7673   0.78146   0.78628   0.78628   0.18
p01 = 1/(2.1 − 2.0) | 0.7909   −0.12 | = 0.78682
                    | 0.7875   −0.02 |
p02 = 1/(2.2 − 2.0) | 0.7909   −0.12 | = 0.78412
                    | 0.7796    0.08 |
p03 = 1/(2.3 − 2.0) | 0.7909   −0.12 | = 0.78146
                    | 0.7673    0.18 |
p012 = 1/(2.2 − 2.1) | 0.78682   −0.02 | = 0.78628
                     | 0.78412    0.08 |
p013 = 1/(2.3 − 2.1) | 0.78682   −0.02 | = 0.78628
                     | 0.78146    0.18 |
p0123 = 1/(2.3 − 2.2) | 0.78628   0.08 | = 0.78628
                      | 0.78628   0.18 |
The boldfaced results in the table give the value of the interpolation at x =
2.12. The result 0.78682 is the value obtained by linear interpolation. The result
0.78628 is obtained by quadratic as well as by cubic interpolation. We conclude
that there is no improvement in the third degree polynomial over that of the second
degree.
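This successive scheme (Aitken's method of iterative linear interpolation) can be sketched in a few lines of code; the function name is ours:

```python
def iterative_linear(xs, fs, x):
    """Iterative linear interpolation (Aitken's scheme).

    After stage k, p[j] (for j >= k) holds the value at x of the polynomial
    interpolating the data at x0, ..., x_{k-1}, xj.
    """
    p = list(fs)
    for k in range(1, len(xs)):
        for j in range(len(xs) - 1, k - 1, -1):
            p[j] = ((xs[j] - x) * p[k - 1] - (xs[k - 1] - x) * p[j]) \
                   / (xs[j] - xs[k - 1])
    return p[-1]

# Data from the worked example above
xs = [2.0, 2.1, 2.2, 2.3]
fs = [0.7909, 0.7875, 0.7796, 0.7673]
print(round(iterative_linear(xs, fs, 2.12), 5))   # 0.78628
```

Each stage reuses the previous column of the table, exactly as in Equations (6.4) and (6.5).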
Notes 1. Unlike Lagrange's method, it is not necessary to fix in advance the degree of the
interpolating polynomial to be used.
2. The approximation by a higher degree interpolating polynomial may
not always lead to a better result. In fact it may be even worse in some
cases.
Consider the function f(x) = 4ˣ.
We form the finite difference table with values for x = 0 to 4.
x    f(x)    ∆f(x)    ∆²f(x)    ∆³f(x)    ∆⁴f(x)
0    1
             3
1    4                9
             12                 27
2    16               36                  81
             48                 108
3    64               144
             192
4    256

With u = (x − x0)/h = x, the interpolating polynomial is
φ(x) = 1 + 3x + (9/2) x(x − 1) + (27/6) x(x − 1)(x − 2) + (81/24) x(x − 1)(x − 2)(x − 3)
Now, consider values of φ(x) at x = 0.5 by taking successively higher and
higher degree polynomials.
Thus,
φ1(0.5) = 1 + 0.5 × 3 = 2.5, by linear interpolation
φ2(0.5) = 2.5 + [(0.5)(−0.5)/2] × 9 = 1.375, by quadratic interpolation
φ3(0.5) = 1.375 + [(0.5)(−0.5)(−1.5)/6] × 27 = 3.0625, by cubic interpolation
φ4(0.5) = 3.0625 + [(0.5)(−0.5)(−1.5)(−2.5)/24] × 81 = −0.10156, by quartic interpolation
We note that the actual value 4^0.5 = 2 is not obtainable by interpolation. The
results for higher degree interpolating polynomials become worse.
Note: Lagrange’s interpolation formula and iterative linear interpolation can easily
be implemented for computations by a digital computer.
Example 2: Determine the interpolating polynomial for the following table of data:
x 1 2 3 4
y −1 −1 1 5
Solution: The data is equally spaced. We thus form the finite difference table.
x y ∆y ∆2 y
1 −1
0
2 −1 2
2
3 1 2
4
4 5
Since the differences of second order are constant, the interpolating polynomial
is of degree two. Using Newton’s forward difference interpolation, we get
y = y0 + u ∆y0 + [u(u − 1)/2!] ∆²y0,
Here, x0 = 1, h = 1, so u = x − 1.
Thus, y = −1 + (x − 1) × 0 + [(x − 1)(x − 2)/2] × 2 = x² − 3x + 1.
2
Example 3: Compute the value of f(7.5) by using suitable interpolation on the
following table of data.
x      3    4    5     6     7     8
f(x)   28   65   126   217   344   513
Solution: The data is equally spaced. Thus for computing f(7.5), we use Newton’s
backward difference interpolation. For this, we first form the finite difference table
as shown below.
x f ( x) ∆f ( x) ∆2 f ( x) ∆3 f ( x)
3 28
37
4 65 24
61 6
5 126 30
91 6
6 217 36
127 6
7 344 42
169
8 513
The differences of order three are constant and hence we use Newton’s
backward difference interpolating polynomial of degree three.
f(x) = yn + v ∇yn + [v(v + 1)/2!] ∇²yn + [v(v + 1)(v + 2)/3!] ∇³yn,
where v = (x − xn)/h. For x = 7.5, xn = 8,
v = (7.5 − 8)/1 = −0.5
( −0.5) ( −0.5 + 1) −0.5 × 0.5 × 1.5
f (7.5) = 513 − 0.5 × 169 + × 42 + ×6
2 6
=513 − 84.5 − 5.25 − 0.375
= 422.875
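Example 3's backward difference computation can be sketched programmatically; the helper names below are ours:

```python
def last_backward_diffs(ys):
    """Return [y_n, ∇y_n, ∇²y_n, ...], taken from the bottom diagonal
    of the difference table."""
    row, out = list(ys), [ys[-1]]
    while len(row) > 1:
        row = [row[i + 1] - row[i] for i in range(len(row) - 1)]
        out.append(row[-1])
    return out

def newton_backward(xs, ys, x):
    """Newton's backward difference interpolation (equally spaced xs)."""
    h = xs[1] - xs[0]
    v = (x - xs[-1]) / h
    d = last_backward_diffs(ys)
    result, term = float(d[0]), 1.0
    for k in range(1, len(d)):
        term *= (v + k - 1) / k        # builds v(v+1)...(v+k-1)/k!
        result += term * d[k]
    return result

xs = [3, 4, 5, 6, 7, 8]
ys = [28, 65, 126, 217, 344, 513]
print(newton_backward(xs, ys, 7.5))   # 422.875
```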
Example 4: Determine the interpolating polynomial for the following data:
x 2 4 6 8 10
f ( x) 5 10 17 29 50
Solution: The data is equally spaced. We construct Newton's forward
difference interpolating polynomial. The finite difference table is,
x    f(x)    ∆f(x)    ∆²f(x)    ∆³f(x)    ∆⁴f(x)
2    5
             5
4    10               2
             7                  3
6    17               5                   1
             12                 4
8    29               9
             21
10   50
x     y        ∆y      ∆²y     ∆³y
0.0   1.0000
               −25
0.1   0.9975           −50
               −75             25
0.2   0.9900           −25
               −100
0.3   0.9800
(differences in units of 10⁻⁴)
Here, h = 0.1. Choosing x0 = 0.0, we have s = x/0.1 = 10x. Newton's forward
difference interpolation formula is,
y = y0 + s ∆y0 + [s(s − 1)/2!] ∆²y0 + [s(s − 1)(s − 2)/3!] ∆³y0
  = 1 + 10x(−0.0025) + [10x(10x − 1)/2!] (−0.0050) + [10x(10x − 1)(10x − 2)/6] × 0.0025
  = 1.0 + 0.0083x − 0.375x² + 0.4167x³
∴ y(0.05) ≈ 0.9995
Example 6: Compute f(0.23) and f(0.29) by using suitable interpolation formula
with the table of data given below.
x 0.20 0.22 0.24 0.26 0.28 0.30
f ( x) 1.6596 1.6698 1.6804 1.6912 1.7024 1.7139
Solution: The data being equally spaced, we use Newton’s forward difference
interpolation for computing f(0.23), and for computing f(0.29), we use Newton’s
backward difference interpolation. We first form the finite difference table,
x f ( x) ∆f ( x) ∆2 f ( x)
0.20 1.6596
102
0.22 1.6698 4
106
0.24 1.6804 2
108
0.26 1.6912 4
112
0.28 1.7024 3
115
0.30 1.7139
We observe that differences of order higher than two would be irregular. Hence,
we use second degree interpolating polynomial. For computing f(0.23), we take
x0 = 0.22, so that u = (x − x0)/h = (0.23 − 0.22)/0.02 = 0.5.
h 0.02
Using Newton’s forward difference interpolation, we compute
(0.5)(0.5 − 1.0)
f (0.23)
= 1.6698 + 0.5 × 0.0106 + × 0.0002
2
= 1.6698 + 0.0053 − 0.000025
= 1.675075
≈ 1.6751
Again, for computing f(0.29), we take xn = 0.30,
so that v = (x − xn)/h = (0.29 − 0.30)/0.02 = −0.5
Using Newton's backward difference interpolation we evaluate,
( −0.5)(−0.5 + 1.0)
f (0.29)
= 1.7139 − 0.5 × .0115 + × 0.0003
2
=1.7139 − 0.00575 − 0.00004
= 1.70811
≈ 1.7081
Example 7: Compute values of ex at x = 0.02 and at x = 0.38 using suitable
interpolation formula on the table of data given below.
x 0.0 0.1 0.2 0.3 0.4
e x 1.0000 1.1052 1.2214 1.3499 1.4918
Solution: The data is equally spaced. We have to use Newton’s forward difference
interpolation formula for computing ex at x = 0.02, and for computing ex at
x = 0.38, we have to use Newton’s backward difference interpolation formula.
We first form the finite difference table.
x y = ex ∆y ∆2 y ∆3 y ∆4 y
0.0 1.0000
1052
0.1 1.1052 110
1162 13
0.2 1.2214 123 −2
1285 11
0.3 1.3499 134
1419
0.4 1.4918
∴ u = (x − x0)/h = (0.02 − 0.0)/0.1 = 0.2
By Newton's forward difference interpolation formula, we have
For computing e^0.38 we take xn = 0.4. Thus, v = (0.38 − 0.4)/0.1 = −0.2
By Newton’s backward difference interpolation formula, we have
e^0.38 = 1.4918 + (−0.2) × 0.1419 + [(−0.2)(−0.2 + 1)/2] × 0.0134
         + [(−0.2)(−0.2 + 1)(−0.2 + 2)/6] × 0.0011
         + [(−0.2)(−0.2 + 1)(−0.2 + 2)(−0.2 + 3)/24] × (−0.0002)
       = 1.4918 − 0.02838 − 0.00107 − 0.00005 + 0.00001
       = 1.46231 ≈ 1.4623
Lagrange’s Interpolation
Lagrange’s interpolation is useful for unequally spaced tabulated values. Let y = f
(x) be a real valued function defined in an interval (a, b) and let y0, y1,..., yn be the
(n + 1) known values of y at x0, x1,...,xn, respectively. The polynomial (x),
which interpolates f (x), is of degree less than or equal to n. Thus,
( xi ) yi , for i 0,1, 2, ..., n
(6.7)
The polynomial (x) is assumed to be of the form,
n
( x) li ( x) yi
i 0
(6.8)
where each li(x) is a polynomial of degree ≤ n in x and is called Lagrangian
function.
Now, φ(x) satisfies Equation (6.7) if each li(x) satisfies,
li(xj) = 0 when i ≠ j
       = 1 when i = j   (6.9)
Equation (6.9) suggests that li(x) vanishes at the (n+1) points x0, x1, ... xi–1,
xi+1,..., xn. Thus, we can write,
li(x) = ci (x – x0) (x – x1) ... (x – xi–1) (x – xi+1)...(x – xn)
where ci is a constant given by li(xi) = 1,
i.e., ci (xi − x0)(xi − x1)...(xi − xi−1)(xi − xi+1)...(xi − xn) = 1
Thus, li(x) = [(x − x0)(x − x1)...(x − xi−1)(x − xi+1)...(x − xn)]
            / [(xi − x0)(xi − x1)...(xi − xi−1)(xi − xi+1)...(xi − xn)],
for i = 0, 1, 2, ..., n   (6.10)
Equations (6.8) and (6.10) together give Lagrange's interpolating polynomial.
Algorithm: To compute f (x) by Lagrange’s interpolation.
Step 1: Read n [n being the number of values]
Step 2: Read values of xi, fi for i = 1, 2, ..., n.
Step 3: Set sum = 0, i = 1
Step 4: Read x [x being the interpolating point]
Step 5: Set j = 1, product = 1
Step 6: Check if j ≠ i, then set product = product × (x – xj)/(xi – xj), else go to
Step 7
Step 7: Set j = j + 1
Step 8: Check if j > n, then go to Step 9 else go to Step 6
Step 9: Compute sum = sum + product × fi
Step 10: Set i = i + 1
Step 11: Check if i > n, then go to Step 12
else go to Step 5
Step 12: Write x, sum
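The algorithm above translates almost directly into code; a minimal sketch (function name ours), applied to the data of Example 8 below:

```python
def lagrange(xs, fs, x):
    """Lagrange interpolation following the algorithm above."""
    total = 0.0
    for i in range(len(xs)):
        product = 1.0
        for j in range(len(xs)):
            if j != i:
                product *= (x - xs[j]) / (xs[i] - xs[j])
        total += product * fs[i]
    return total

# Data of Example 8
print(round(lagrange([0.3, 0.5, 0.6], [0.61, 0.69, 0.72], 0.4), 4))   # 0.6533
```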
Example 8: Compute f (0.4) for the table below by Lagrange’s interpolation.
x      0.3    0.5    0.6
f(x)   0.61   0.69   0.72
where
l0(x) = (x − 0)(x − 1)(x − 2) / [(−1 − 0)(−1 − 1)(−1 − 2)] = −(1/6) x(x − 1)(x − 2)
l1(x) = (x + 1)(x − 1)(x − 2) / [(0 + 1)(0 − 1)(0 − 2)] = (1/2)(x + 1)(x − 1)(x − 2)
l2(x) = (x + 1)(x − 0)(x − 2) / [(1 + 1)(1 − 0)(1 − 2)] = −(1/2)(x + 1)x(x − 2)
l3(x) = (x + 1)(x − 0)(x − 1) / [(2 + 1)(2 − 0)(2 − 1)] = (1/6)(x + 1)x(x − 1)
f(x) = −(1/6) x(x − 1)(x − 2) × 1 + (1/2)(x + 1)(x − 1)(x − 2) × 1
       − (1/2)(x + 1)x(x − 2) × 1 + (1/6)(x + 1)x(x − 1) × (−3)
     = −(1/6)(4x³ − 4x − 6)
     = −(1/3)(2x³ − 2x − 3)
Example 11: Evaluate the values of f(2) and f(6.3) using Lagrange's interpolation
formula for the table of values given below.
Example 11: Evaluate the values of f (2) and f (6.3) using Lagrange’s interpolation
formula for the table of values given below.
∴ f(2) = 0.275 × 6.84 + 0.821 × 14.25 – 0.095 × 27 = 11.015 ≈ 11.02
For evaluation of f (6.3), we consider the values of f (x) at x0 = 5.1, x1 = 6.0, x2
= 6.5.
Thus, f(6.3) = l0(6.3) × 39.21 + l1(6.3) × 51 + l2(6.3) × 58.25
where
(6.3 − 6.0)(6.3 − 6.5)
l0 (6.3) = = −0.048
(5.1 − 6.0)(5.1 − 6.5)
(6.3 − 5.1)(6.3 − 6.5)
l1 (6.3) = = 0.533
(6 − 5.1)(6.0 − 6.5)
(6.3 − 5.1)(6.3 − 6.0)
l 2 (6.3) = = 0.514
(6.5 − 5.1)(6.5 − 6.0)
∴ f(6.3) = −0.048 × 39.21 + 0.533 × 51 + 0.514 × 58.25 = 55.24
Since the computed result cannot be more accurate than the data, the final
result is rounded off to the same number of decimals as the data. In some cases,
a higher degree interpolating polynomial may not lead to better results.
Interpolation for Equally Spaced Tabular Values
For interpolation of an unknown function when the tabular values of the argument
x are equally spaced, we have two important interpolation formulae, viz.,
(i) Newton’s forward difference interpolation formula
(ii) Newton’s backward difference interpolation formula
We will first discuss the finite differences which are used in evaluating the
above two formulae.
Finite Differences
Let us assume that values of a function y = f (x) are known for a set of equally
spaced values of x given by {x0, x1,..., xn}, such that the spacing between any
two consecutive values is equal. Thus, x1 = x0 + h, x2 = x1 + h,..., xn = xn–1 + h,
so that xi = x0 + ih for i = 1, 2, ...,n. We consider two types of differences known
as forward differences and backward differences of various orders. These
differences can be tabulated in a finite difference table as explained in the subsequent
sections.
Forward Differences
Let y0, y1,..., yn be the values of a function y = f (x) at the equally spaced values of
x = x0, x1, ..., xn. The differences between two consecutive y given by y1 – y0, y2
– y1,..., yn – yn–1 are called the first order forward differences of the function y = f
(x) at the points x0, x1,..., xn–1. These differences are denoted by,
∆y0 = y1 − y0, ∆y1 = y2 − y1, ..., ∆yn−1 = yn − yn−1   (6.11)
where ∆ is termed the forward difference operator, defined by,
∆f ( x) = f ( x + h) − f ( x) NOTES
(6.12)
Thus, ∆ yi = yi+1 – yi, for i = 0, 1, 2, ..., n – 1, are the first order forward
differences at xi.
The differences of these first order forward differences are called the second
order forward differences.
Thus, ∆²yi = ∆(∆yi) = ∆yi+1 − ∆yi, for i = 0, 1, 2, ..., n − 2   (6.13)
Evidently,
∆2 y0 = ∆y1 − ∆y0 = y 2 − y1 − ( y1 − y0 ) = y2 − 2 y1 + y0
And, ∆2 yi = yi + 2 − yi +1 − ( yi +1 − yi )
i.e., ∆ 2 yi = yi + 2 − 2 yi +1 + yi , for i = 0, 1, 2, ..., n − 2
(6.14)
Similarly, the third order forward differences are given by,
∆ 3 yi = ∆ 2 yi +1 − ∆ 2 yi , for i = 0, 1, 2, ..., n − 3
i.e., ∆ 3 y i = y i + 3 − 3 y i + 2 + 3 y i +1 − y i
(6.15)
Finally, we can define the nth order forward difference by,
∆ⁿy0 = yn − nyn−1 + [n(n − 1)/2!] yn−2 − ... + (−1)ⁿ y0   (6.16)
The coefficients in the above equation are those of the binomial expansion of
(1 – x)ⁿ.
The forward differences of various orders for a table of values of a function
y = f (x), are usually computed and represented in a diagonal difference table. A
diagonal difference table for a table of values of y = f (x), for six points x0, x1, x2,
x3, x4, x5 is shown here.
Diagonal Difference Table for y = f(x):
i    xi    yi    ∆yi     ∆²yi     ∆³yi     ∆⁴yi     ∆⁵yi
0    x0    y0
                 ∆y0
1    x1    y1            ∆²y0
                 ∆y1              ∆³y0
2    x2    y2            ∆²y1              ∆⁴y0
                 ∆y2              ∆³y1              ∆⁵y0
3    x3    y3            ∆²y2              ∆⁴y1
                 ∆y3              ∆³y2
4    x4    y4            ∆²y3
                 ∆y4
5    x5    y5
The entries in any column of the differences are computed as the differences
of the entries of the previous column and one placed in between them. The upper
data in a column is subtracted from the lower data to compute the forward
differences. We notice that the forward differences of various orders with respect
to yi are along the forward diagonal through it. Thus ∆y0, ∆²y0, ∆³y0, ∆⁴y0 and
∆⁵y0 lie along the top forward diagonal through y0. Consider the following example.
Example 12: Given the table of values of y = f (x),
x 1 3 5 7 9
y 8 12 21 36 62
form the diagonal difference table and find the values of ∆f (5), ∆2 f (3), ∆3 f (1) .
Solution: The diagonal difference table is,
i xi yi ∆yi ∆2 yi ∆3 yi ∆4 yi
0 1 8
4
1 3 12 5
9 1
2 5 21 6 4
15 5
3 7 36 11
26
4 9 62
From the table, we find that ∆f (5) = 15, the entry along the diagonal through
the entry 21 of f (5).
Similarly, ∆²f(3) = 6, the entry along the diagonal through f(3). Finally,
∆³f(1) = 1.
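The diagonal difference table can be generated programmatically; a sketch (function name ours) using the data of Example 12:

```python
def difference_table(ys):
    """Successive rows of forward differences: table[k][i] = ∆^k y_i."""
    table = [list(ys)]
    while len(table[-1]) > 1:
        prev = table[-1]
        table.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return table

# Data of Example 12 (x = 1, 3, 5, 7, 9)
t = difference_table([8, 12, 21, 36, 62])
assert t[1][2] == 15   # ∆f(5)
assert t[2][1] == 6    # ∆²f(3)
assert t[3][0] == 1    # ∆³f(1)
```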
Backward Differences
The backward differences of various orders for a table of values of a function y =
f (x) are defined in a manner similar to the forward differences. The backward
difference operator ∇ (inverted triangle) is defined by ∇f(x) = f(x) − f(x − h).
Thus, ∇yk = yk − yk −1 , for k = 1, 2, ..., n
i.e., ∇y1 = y1 − y0 , ∇y 2 = y 2 − y1 ,..., ∇y n = y n − y n −1
(6.17)
The backward differences of second order are defined by,
∇ 2 yk = ∇yk − ∇yk −1 = yk − 2 yk −1 + yk − 2
Hence,
∇ 2 y2 = y2 − 2 y1 + y0 , and ∇ 2 yn = yn − 2 yn −1 + yn −2
(6.18)
Higher order backward differences can be defined in a similar manner.
Thus, ∇ 3 yn = yn − 3 yn −1 + 3 yn −2 − yn −3 , etc.
(6.19)
Finally,
∇ⁿyn = yn − nyn−1 + [n(n − 1)/2!] yn−2 − ... + (−1)ⁿ y0   (6.20)
The backward differences of various orders can be computed and placed in a
diagonal difference table. The backward differences at a point are then found
along the backward diagonal through the point. The following table shows the
backward differences entries.
Diagonal difference Table of backward differences:
i    xi    yi    ∇yi     ∇²yi     ∇³yi     ∇⁴yi     ∇⁵yi
0    x0    y0
                 ∇y1
1    x1    y1            ∇²y2
                 ∇y2              ∇³y3
2    x2    y2            ∇²y3              ∇⁴y4
                 ∇y3              ∇³y4              ∇⁵y5
3    x3    y3            ∇²y4              ∇⁴y5
                 ∇y4              ∇³y5
4    x4    y4            ∇²y5
                 ∇y5
5    x5    y5
Self-Instructional
Material 123
The entries along a column in the table are computed (as discussed earlier)
as the differences of the entries in the previous column and are placed in
between. We notice that the backward differences of various orders with respect
to yi are along the backward diagonal through it. Thus, ∇y5, ∇²y5, ∇³y5, ∇⁴y5 and
∇⁵y5 are along the lowest backward diagonal through y5.
We may note that the data entries of the backward difference table in any
column are the same as those of the forward difference table, but the differences
are for different reference points.
Specifically, if we compare the columns of first order differences we can see
that,
∆y0 = ∇y1 , ∆y1 = ∇y 2 , ..., ∆y n −1 = ∇y n
Similarly, ∆2 y 0 = ∇ 2 y 2 , ∆2 y1 = ∇ 2 y3 ,..., ∆2 y n − 2 = ∇ 2 y n
Thus, ∆²yi = ∇²yi+2, for i = 0, 1, 2, ..., n − 2
In general, ∆ᵏyi = ∇ᵏyi+k.
Conversely, ∇ᵏyi = ∆ᵏyi−k.
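The identity ∆ᵏyi = ∇ᵏyi+k can be verified directly from the recursive definitions of the two operators; a sketch using the table data of Example 13:

```python
def delta(ys, k, i):
    """k-th order forward difference ∆^k y_i."""
    if k == 0:
        return ys[i]
    return delta(ys, k - 1, i + 1) - delta(ys, k - 1, i)

def nabla(ys, k, i):
    """k-th order backward difference ∇^k y_i."""
    if k == 0:
        return ys[i]
    return nabla(ys, k - 1, i) - nabla(ys, k - 1, i - 1)

ys = [8, 12, 21, 36, 62]
for k in range(1, 4):
    for i in range(len(ys) - k):
        assert delta(ys, k, i) == nabla(ys, k, i + k)   # ∆^k y_i = ∇^k y_{i+k}
```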
Example 13: Given the following table of values of y = f (x):
x 1 3 5 7 9
y 8 12 21 36 62
xi yi ∇y i ∇ 2 yi ∇ 3 yi ∇ 4 yi
1 8
4
3 12 5
9 1
5 21 6 4
15 5
7 36 11
26
9 62
From the table, we can easily find ∇y(7) = 15, ∇²y(9) = 11, ∇³y(9) = 5.
Symbolic Operators
We consider the finite differences of an equally spaced tabular data for developing
numerical methods. Let a function y = f(x) have a set of values y0, y1, y2, ...,
corresponding to points x0, x1, x2, ..., where x1 = x0 + h, x2 = x0 + 2h, ..., are
equally spaced with spacing h. We define different types of finite differences such
as forward differences, backward differences and central differences, and express
them in terms of operators.
The forward difference of a function f (x) is defined by the operator ∆, called
the forward difference operator given by,
∆f(x) = f(x + h) − f(x)   (6.21)
At a tabulated point xi, we have
∆f(xi) = f(xi + h) − f(xi)   (6.22)
We also denote ∆f(xi) by ∆yi, given by
∆yi = yi+1 − yi, for i = 0, 1, 2, ...   (6.23)
We also define an operator E, called the shift operator, which is given by,
E f(x) = f(x + h)   (6.24)
Then ∆f(x) = Ef(x) − f(x), so that
∆ ≡ E − 1 is an operator relation.   (6.25)
While Equation (6.21) defines the first order forward difference, we can define
the second order forward difference by,
∆²yi = ∆(∆yi) = ∆(yi+1 − yi)
     = ∆yi+1 − ∆yi   (6.26)
Shift Operator
The shift operator is denoted by E and is defined by E f (x) = f (x + h). Thus,
Eyk = yk+1
Higher order shift operators can be defined by, E2f (x) = Ef (x + h) = f (x +
2h).
E2yk = E(Eyk) = E(yk + 1) = yk + 2
In general, Emf (x) = f (x + mh)
Emyk = yk+m
Relation between Forward Difference Operator and Shift Operator
From the definition of the forward difference operator, we have
∆y(x) = y(x + h) − y(x)
      = Ey(x) − y(x)
      = (E − 1) y(x)
∆²y(x) = ∆y(x + h) − ∆y(x)
       = y(x + 2h) − 2y(x + h) + y(x)
       = E²y(x) − 2Ey(x) + y(x)
       = (E² − 2E + 1) y(x)
       = (E − 1)² y(x)
Finally, we have ∆ᵐ ≡ (E − 1)ᵐ, for m = 1, 2, ...   (6.28)
Relation between the Backward Difference Operator with Shift Operator
From the definition of backward difference operator, we have
∇ f ( x ) = f ( x ) − f ( x − h)
= f ( x) − E −1 f ( x) = (1 − E −1 ) f ( x)
∇ 2 f ( x ) = ∇f ( x ) − ∇f ( x − h )
= f ( x ) − f ( x − h) − f ( x − h) + f ( x − 2h)
= f ( x ) − 2 f ( x − h) + f ( x − 2h)
= f ( x) − E −1 f ( x) + E − 2 f ( x)
= (1 − E −1 + E − 2 ) f ( x)
= (1 − E −1 ) 2 f ( x)
∇ m ≡ (1 − E −1 ) m (6.30)
Relations between the Operators E, D and ∆
We have by Taylor's theorem,
f(x + h) = f(x) + hf′(x) + (h²/2!) f″(x) + ...
Thus, Ef(x) = f(x) + hDf(x) + (h²D²/2!) f(x) + ..., where D ≡ d/dx
Or, (1 + ∆) f(x) = [1 + hD + (h²D²/2!) + ...] f(x)
               = e^(hD) f(x)
Thus, e^(hD) ≡ 1 + ∆ ≡ E   (6.31)
Also, hD ≡ log(1 + ∆)
Or, hD ≡ ∆ − ∆²/2 + ∆³/3 − ∆⁴/4 + ...
D ≡ (1/h) [∆ − ∆²/2 + ∆³/3 − ∆⁴/4 + ...]
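The operator series D ≡ (1/h)(∆ − ∆²/2 + ∆³/3 − ∆⁴/4 + ...) can be used to approximate a derivative from tabulated values alone; a sketch (function name ours), tested on f(x) = eˣ:

```python
import math

def derivative_from_forward_diffs(f, x, h, terms=4):
    """Approximate D f(x) via hD = ∆ - ∆²/2 + ∆³/3 - ∆⁴/4 + ...,
    truncated after `terms` terms."""
    row = [f(x + i * h) for i in range(terms + 1)]
    total = 0.0
    for k in range(1, terms + 1):
        row = [row[j + 1] - row[j] for j in range(len(row) - 1)]
        total += (-1) ** (k + 1) * row[0] / k   # row[0] is ∆^k f(x)
    return total / h

# The derivative of e^x at x = 0 is 1; four terms come very close.
approx = derivative_from_forward_diffs(math.exp, 0.0, 0.1)
assert abs(approx - 1.0) < 1e-3
```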
Further, the central difference operator δ is defined by,
δyn = yn+1/2 − yn−1/2 = (E^(1/2) − E^(−1/2)) yn
so that δ ≡ E^(1/2) − E^(−1/2).
Then,
δ²yn = δ(δyn) = δyn+1/2 − δyn−1/2
     = (yn+1 − yn) − (yn − yn−1)
     = yn+1 − 2yn + yn−1 = (E^(1/2) − E^(−1/2))² yn = (E − 2 + E^(−1)) yn
Thus, δ² ≡ E + E^(−1) − 2   (6.32)
Even though the central difference operator uses fractional arguments, it is still
widely used. It is related to the averaging operator μ, which is defined by,
μ ≡ (1/2)(E^(1/2) + E^(−1/2))   (6.33)
Squaring,
μ² ≡ (1/4)(E^(1/2) + E^(−1/2))² ≡ (1/4)(E + E^(−1) + 2) ≡ (1/4)(δ² + 2 + 2)
i.e., μ² ≡ 1 + (1/4)δ²   (6.34)
It may be noted that, δy1/2 = y1 − y0 = ∆y0
Also, E^(1/2)δy1 = δy3/2 = y2 − y1 = ∆y1
Thus, δE^(1/2) ≡ E − 1 ≡ ∆   (6.35)
Further,
δ³yn = δ(δ²yn) = δ²yn+1/2 − δ²yn−1/2 = δ(yn+1 − 2yn + yn−1)
Thus, ∇ ≡ (E − 1)/E, or ∇E ≡ E − 1 ≡ ∆.
Hence proved.
(ii) From Equation (1), we have E ≡ ∆ + 1 (3)
and from Equation (2) we get E −1 ≡ 1 − ∇ (4)
Combining Equations (3) and (4), we get (1 + ∆ )(1 − ∇) ≡ 1.
Example 15: If fi is the value of f(x) at xi, where xi = x0 + ih, for i = 1, 2, ..., prove
that,
fi = Eⁱ f0 = Σ (j = 0 to i) C(i, j) ∆ʲ f0
where C(i, j) is the binomial coefficient.
Solution: We can write Ef(x) = f(x + h)
Using Taylor series expansion, we have
Ef(x) = f(x) + hf′(x) + (h²/2!) f″(x) + ...
      = f(x) + hDf(x) + (h²D²/2!) f(x) + ..., where D ≡ d/dx
      = [1 + hD + (h²D²/2!) + ...] f(x) = e^(hD) f(x)
Since E ≡ 1 + ∆, this gives
1 + ∆ = e^(hD)
Now, fi = f(xi) = f(x0 + ih) = Eⁱ f(x0)
     fi = (1 + ∆)ⁱ f(x0), since E ≡ 1 + ∆
     fi = Σ (j = 0 to i) C(i, j) ∆ʲ f0, using the binomial expansion.
Hence proved.
Example 16: Compute the following differences:
(i) ∆n e x (ii) ∆n x n
Solution:
(i) We have, ∆ e x = e x + h − e x = e x (e h − 1)
Thus by induction, ∆n e x = (e h − 1) n e x .
(ii) We have,
∆(xⁿ) = (x + h)ⁿ − xⁿ
      = nhx^{n−1} + (n(n − 1)/2!) h²x^{n−2} + ... + hⁿ
Proceeding n times, we get
∆ⁿ(xⁿ) = n(n − 1) ... 1 · hⁿ = n! hⁿ
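Both results are easy to check numerically. A small Python sketch (ours, not from the text) uses the expansion ∆ⁿf(x) = Σₖ (−1)^{n−k} C(n, k) f(x + kh):

```python
import math

def nth_forward_difference(f, x, h, n):
    """delta^n f(x) = sum_k (-1)^(n-k) * C(n, k) * f(x + k*h)."""
    return sum((-1) ** (n - k) * math.comb(n, k) * f(x + k * h)
               for k in range(n + 1))

x, h, n = 0.5, 0.1, 3

# (i) delta^n e^x equals (e^h - 1)^n * e^x
lhs = nth_forward_difference(math.exp, x, h, n)
print(abs(lhs - (math.exp(h) - 1) ** n * math.exp(x)) < 1e-12)   # True

# (ii) delta^n x^n equals n! h^n, independent of x
lhs2 = nth_forward_difference(lambda t: t ** n, x, h, n)
print(abs(lhs2 - math.factorial(n) * h ** n) < 1e-12)            # True
```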
Example 17: Show that,
(i) ∆(f(x)/g(x)) = (g(x)∆f(x) − f(x)∆g(x)) / (g(x)g(x + h))
(ii) ∆{log f(x)} = log(1 + ∆f(x)/f(x))
Solution:
(i) We have,
∆(f(x)/g(x)) = f(x + h)/g(x + h) − f(x)/g(x)
   = (f(x + h)g(x) − f(x)g(x + h)) / (g(x + h)g(x))
   = (f(x + h)g(x) − f(x)g(x) + f(x)g(x) − f(x)g(x + h)) / (g(x + h)g(x))
   = (g(x){f(x + h) − f(x)} − f(x){g(x + h) − g(x)}) / (g(x)g(x + h))
   = (g(x)∆f(x) − f(x)∆g(x)) / (g(x)g(x + h))
(ii) We have,
∆{log f(x)} = log{f(x + h)} − log{f(x)}
   = log(f(x + h)/f(x)) = log((f(x + h) − f(x) + f(x))/f(x))
   = log(1 + ∆f(x)/f(x))
Differences of a Polynomial
We now look at the differences of various orders of a polynomial of degree n,
given by
y = f(x) = aₙxⁿ + aₙ₋₁x^{n−1} + aₙ₋₂x^{n−2} + ... + a₁x + a₀
The first order forward difference is defined by ∆f(x) = f(x + h) − f(x), and is given by,
∆y = aₙ{(x + h)ⁿ − xⁿ} + aₙ₋₁{(x + h)^{n−1} − x^{n−1}} + ... + a₁(x + h − x)
   = aₙ{nhx^{n−1} + (n(n − 1)/2!) h²x^{n−2} + ...} + aₙ₋₁{(n − 1)hx^{n−2} + ...} + ...
   = bₙ₋₁x^{n−1} + bₙ₋₂x^{n−2} + ... + b₁x + b₀
where the coefficients of the various powers of x are collected separately.
Thus, the first order difference of a polynomial of degree n is a polynomial of degree n − 1, with bₙ₋₁ = aₙnh.
Proceeding as above, we can state that the second order forward difference of a polynomial of degree n is a polynomial of degree n − 2, with coefficient of x^{n−2} equal to n(n − 1)h²aₙ.
Continuing successively, we finally get ∆ⁿy = aₙ n! hⁿ, a constant.
We can conclude that for a polynomial of degree n, all differences of order higher than n are zero.
It may be noted that the converse of the above result is partially true. It suggests that if the tabulated values of a function are found to be such that the differences of the kth order are approximately constant, then the highest degree of the interpolating polynomial that should be used is k. Since the tabulated data may have round-off errors, the actual function may not be a polynomial.
Example 18: Compute the horizontal difference table for the following data and
hence, write down the values of ∇f (4), ∇ 2 f (3) and ∇ 3 f (5).
x 1 2 3 4 5
f ( x) 3 18 83 258 627
Solution: The horizontal difference table for the given data is as follows:
x f ( x ) ∇f ( x ) ∇ 2 f ( x ) ∇ 3 f ( x ) ∇ 4 f ( x )
1 3 − − − −
2 18 15 − − −
3 83 65 50 − −
4 258 175 110 60 −
5 627 369 194 84 24
From the table we read the required values and get the following result:
∇f (4) = 175, ∇ 2 f (3) = 50, ∇ 3 f (5) = 84
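The horizontal table above can also be built programmatically. A Python sketch (ours) constructs the columns of backward differences and reads off the same entries:

```python
def backward_difference_columns(ys):
    """cols[k] lists the kth backward differences; the entry for x_i
    sits at index i - k, since del^k f(x_i) needs f(x_{i-k}), ..., f(x_i)."""
    cols = [list(ys)]
    while len(cols[-1]) > 1:
        prev = cols[-1]
        cols.append([prev[i] - prev[i - 1] for i in range(1, len(prev))])
    return cols

# Example 18's data: x = 1..5, f(x) = 3, 18, 83, 258, 627
cols = backward_difference_columns([3, 18, 83, 258, 627])
# del f(4): k = 1, x-index 3  ->  cols[1][2]
# del^2 f(3): k = 2, x-index 2  ->  cols[2][0]
# del^3 f(5): k = 3, x-index 4  ->  cols[3][1]
print(cols[1][2], cols[2][0], cols[3][1])   # 175 50 84
```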
Example 19: Form the difference table of f (x) on the basis of the following table
and show that the third differences are constant. Hence, conclude about the degree
of the interpolating polynomial.
x 0 1 2 3 4
f ( x) 5 6 13 32 69
Solution: The difference table is given below:

x    f(x)   ∆f(x)   ∆²f(x)   ∆³f(x)
0      5
              1
1      6               6
              7                  6
2     13              12
             19                  6
3     32              18
             37
4     69
It is clear from the above table that the third differences are constant and
hence, the degree of the interpolating polynomial is three.
We seek a polynomial φ(x) of degree n which interpolates f(x) at equally spaced points, i.e., which satisfies
φ(xᵢ) = f(xᵢ) = yᵢ, where xᵢ = x₀ + ih, for i = 0, 1, 2, ..., n          (6.36)
We write φ(x) in the form,
φ(x) = a₀ + a₁(x − x₀) + a₂(x − x₀)(x − x₁) + a₃(x − x₀)(x − x₁)(x − x₂)
       + ... + aₙ(x − x₀)(x − x₁) ... (x − xₙ₋₁)                         (6.37)
The coefficients ai’s in Equation (6.37) are determined by satisfying the
conditions in Equation (6.36) successively for i = 0, 1, 2,...,n.
Thus, we get
y₀ = φ(x₀) = a₀, which gives a₀ = y₀
y₁ = φ(x₁) = a₀ + a₁(x₁ − x₀), which gives
a₁ = (y₁ − y₀)/h = ∆y₀/h
y₂ = φ(x₂) = a₀ + a₁(x₂ − x₀) + a₂(x₂ − x₀)(x₂ − x₁)
or, y₂ = y₀ + (∆y₀/h)(2h) + a₂(2h)(h)
∴ a₂ = (y₂ − 2y₁ + y₀)/(2h²) = ∆²y₀/(2! h²)
Proceeding similarly, we obtain
φ(x) = y₀ + ((x − x₀)/h) ∆y₀ + ((x − x₀)(x − x₁)/(2! h²)) ∆²y₀
       + ((x − x₀)(x − x₁)(x − x₂)/(3! h³)) ∆³y₀ + ...
       + ((x − x₀)(x − x₁) ... (x − xₙ₋₁)/(n! hⁿ)) ∆ⁿy₀
This formula can be expressed in a more convenient form by taking u = (x − x₀)/h, as shown here.
We have,
(x − x₁)/h = (x − (x₀ + h))/h = (x − x₀)/h − 1 = u − 1
(x − x₂)/h = (x − (x₀ + 2h))/h = (x − x₀)/h − 2 = u − 2
...
(x − xₙ₋₁)/h = (x − {x₀ + (n − 1)h})/h = (x − x₀)/h − (n − 1) = u − n + 1
Thus,
φ(x) = y₀ + u∆y₀ + (u(u − 1)/2!) ∆²y₀ + ...
       + (u(u − 1)(u − 2) ... (u − n + 1)/n!) ∆ⁿy₀                       (6.38)
This formula is generally used for interpolating near the beginning of the table. For better results, for a given x we should choose the tabulated point x₀ so that
|u| = |x − x₀|/h ≤ 0.5
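Equation (6.38) translates directly into code. The following Python sketch (ours, not from the text) accumulates the factors (u − k + 1)/k term by term:

```python
def newton_forward(xs, ys, x):
    """Newton's forward difference interpolation (Equation 6.38),
    assuming the xs are equally spaced; a sketch for illustration."""
    h = xs[1] - xs[0]
    u = (x - xs[0]) / h
    diffs = [list(ys)]                      # forward difference table
    while len(diffs[-1]) > 1:
        prev = diffs[-1]
        diffs.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    result, coeff = ys[0], 1.0
    for k in range(1, len(diffs)):
        coeff *= (u - (k - 1)) / k          # u(u-1)...(u-k+1)/k!
        result += coeff * diffs[k][0]
    return result

# The table of Example 19 (its values come from y = x^3 + 5):
print(newton_forward([0, 1, 2, 3, 4], [5, 6, 13, 32, 69], 1.5))   # 8.375
```

Since the underlying data are exactly cubic, the result 8.375 equals 1.5³ + 5, and interpolating at any tabulated point reproduces the table entry.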
Or, b₁ = (yₙ − yₙ₋₁)/h = ∇yₙ/h                                          (6.42)
Again, φ(xₙ₋₂) = yₙ₋₂ gives
yₙ₋₂ = b₀ + b₁(xₙ₋₂ − xₙ) + b₂(xₙ₋₂ − xₙ)(xₙ₋₂ − xₙ₋₁)
Or, yₙ₋₂ = yₙ + ((yₙ − yₙ₋₁)/h)(−2h) + b₂(−2h)(−h)
∴ b₂ = (yₙ₋₂ − 2yₙ₋₁ + yₙ)/(2h²) = ∇²yₙ/(2! h²)                         (6.43)
By induction, or by proceeding as mentioned earlier, we have
b₃ = ∇³yₙ/(3! h³), b₄ = ∇⁴yₙ/(4! h⁴), ..., bₙ = ∇ⁿyₙ/(n! hⁿ)            (6.44)
Substituting the expressions for bᵢ in Equation (6.39), we get
φ(x) = yₙ + (x − xₙ)(∇yₙ/h) + (x − xₙ)(x − xₙ₋₁)(∇²yₙ/(2! h²)) + ...
       + (x − xₙ)(x − xₙ₋₁) ... (x − x₁)(∇ⁿyₙ/(n! hⁿ))                   (6.45)
This formula is known as Newton’s backward difference interpolation formula. It uses the backward differences along the backward diagonal in the difference table.
Introducing a new variable v = (x − xₙ)/h,
we have, (x − xₙ₋₁)/h = (x − (xₙ − h))/h = v + 1.
Similarly, (x − xₙ₋₂)/h = v + 2, ..., (x − x₁)/h = v + n − 1.
Thus, the interpolating polynomial in Equation (6.45) may be rewritten as,
φ(x) = yₙ + v∇yₙ + (v(v + 1)/2!) ∇²yₙ + ...
       + (v(v + 1) ... (v + n − 1)/n!) ∇ⁿyₙ                              (6.46)
This formula is generally used for interpolation at a point near the end of a
table.
The error in the given interpolation formula may be written as,
E(x) = f(x) − φ(x)
     = (x − xₙ)(x − xₙ₋₁) ... (x − x₁)(x − x₀) f^{(n+1)}(ξ)/(n + 1)!, where x₀ < ξ < xₙ
     = v(v + 1)(v + 2) ... (v + n) h^{n+1} y^{(n+1)}(ξ)/(n + 1)!
Extrapolation
The interpolating polynomials are usually used for finding values of the tabulated
function y = f(x) for a value of x within the table. But, they can also be used in
some cases for finding values of f(x) for values of x near to the end points x0 or xn
outside the interval [x0, xn]. This process of finding values of f(x) at points beyond
the interval is termed as extrapolation. We can use Newton’s forward difference
interpolation for points near the beginning value x0. Similarly, for points near the
end value xn, we use Newton’s backward difference interpolation formula.
Example 20: With the help of appropriate interpolation formulae, find from the following data the weight of a baby at the age of two years and of ten years:
Age = x 3 5 7 9
Weight = y (kg ) 5 8 12 17
Solution: Since the values of x are equidistant, we form the finite difference table
for using Newton’s forward difference interpolation formula to compute weight of
the baby at the age of required years.
x     y    ∆y   ∆²y
3     5
           3
5     8          1
           4
7    12          1
           5
9    17
Taking x = 2, u = (x − x₀)/h = (2 − 3)/2 = −0.5.
Newton’s forward difference interpolation gives,
y(2) = 5 + (−0.5) × 3 + ((−0.5)(−1.5)/2) × 1
     = 5 − 1.5 + 0.38 = 3.88 ≈ 3.9 kg.
Similarly, for computing the weight of the baby at the age of ten years, we use Newton’s backward difference interpolation with,
v = (x − xₙ)/h = (10 − 9)/2 = 0.5
y(10) = 17 + 0.5 × 5 + ((0.5 × 1.5)/2) × 1
      = 17 + 2.5 + 0.38 = 19.88 ≈ 19.9 kg.
Inverse Interpolation
The problem of inverse interpolation in a table of values of y = f(x) is to find the value of x for a given y. We know that the inverse function x = g(y) exists and is unique, if y = f(x) is a single valued function of x and dy/dx exists and does not vanish in the neighbourhood of the point where inverse interpolation is desired.
When the values of x are unequally spaced, we can apply Lagrange’s
interpolation or iterative linear interpolation simply by interchanging the roles of x
and y. Thus Lagrange’s formula for inverse interpolation can be written as,
x = Σ_{i=0}^{n} lᵢ(y) xᵢ
where lᵢ(y) = Π_{j=0, j≠i}^{n} [(y − yⱼ)/(yᵢ − yⱼ)]
When x values are equally spaced, we can apply the method of successive
approximation as described below.
Consider Newton’s formula for forward difference interpolation given by,
y = y₀ + u∆y₀ + (u(u − 1)/2!) ∆²y₀ + (u(u − 1)(u − 2)/3!) ∆³y₀ + ...
Retaining only two terms on the RHS, we can write the first approximation,
u⁽¹⁾ = (y − y₀)/∆y₀
Retaining three terms, the second approximation is,
u⁽²⁾ = (1/∆y₀) [(y − y₀) − (u⁽¹⁾(u⁽¹⁾ − 1)/2) ∆²y₀]
Similarly,
u⁽³⁾ = (1/∆y₀) [(y − y₀) − (u⁽²⁾(u⁽²⁾ − 1)/2) ∆²y₀ − (u⁽²⁾(u⁽²⁾ − 1)(u⁽²⁾ − 2)/6) ∆³y₀]
The iterations are continued until two successive approximations of u agree to the desired accuracy.
x 1 3 4
y 3 12 19
Example 22: Find x for which cosh x = 1.285, using the following table of values.
Solution: Since finding x from an equally spaced table of cosh x is a problem of inverse interpolation, we employ the method of successive approximation using Newton’s formula of inverse interpolation. We first form the finite difference table.
x        f(x) = cosh x   ∆f           ∆²f          ∆³f
0.738    1.2849085
                         0.0008074
0.739    1.2857159                    0.0000014
                         0.0008088                 −0.0000001
0.740    1.2865247                    0.0000013
                         0.0008101                 −0.0000001
0.741    1.2873348                    0.0000012
                         0.0008113
0.742    1.2881461
Using Newton’s forward difference interpolation formula with u = (x − x₀)/h, the first approximation is,
u⁽¹⁾ = (y − y₀)/∆f(x₀)
For y = 1.285, we take x₀ = 0.739.
u⁽¹⁾ = (1.285 − 1.2857159)/0.0008088 = −0.8851384,
then x = x₀ + u⁽¹⁾h = 0.739 − 0.0008851 = 0.7381149
For a second approximation,
u⁽²⁾ = u⁽¹⁾ − (u⁽¹⁾(u⁽¹⁾ − 1)/2) (∆²f(x₀)/∆f(x₀))
     = −0.8851384 − (1/(0.0008088 × 2)) × (−0.8851384) × (−1.8851384) × 0.0000013
     = −0.8851384 − 0.0013409 = −0.8864793 ⇒ x = 0.739 − 0.0008865 = 0.7381135
Similarly,
u⁽³⁾ = u⁽¹⁾ − (u⁽²⁾(u⁽²⁾ − 1)/2) (∆²f₀/∆f₀) − (u⁽²⁾(u⁽²⁾ − 1)(u⁽²⁾ − 2)/6) (∆³f₀/∆f₀)
     = −0.8851384 − 0.0013430 − 0.0000736
     = −0.8865550 ⇒ x = 0.739 − 0.0008866 = 0.7381134
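The successive substitution above is mechanical and easily automated. A Python sketch (ours; the helper name is an assumption) for the same table:

```python
# Solve y = y0 + u*d1 + u(u-1)/2! * d2 + u(u-1)(u-2)/3! * d3 for u
# by successive substitution, then recover x = x0 + u*h.

def inverse_interpolate(y, y0, d1, d2, d3, iterations=6):
    u = (y - y0) / d1                        # first approximation
    for _ in range(iterations):
        u = ((y - y0)
             - u * (u - 1) / 2 * d2
             - u * (u - 1) * (u - 2) / 6 * d3) / d1
    return u

# Differences at x0 = 0.739 taken from the table above (h = 0.001)
u = inverse_interpolate(1.285, 1.2857159, 0.0008088, 0.0000013, -0.0000001)
x = 0.739 + u * 0.001
print(round(x, 7))
```

The iteration converges to x ≈ 0.738113, and indeed cosh x at that point lies between the tabulated values at 0.738 and 0.739, bracketing y = 1.285.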
Example 23: Find the divided difference interpolation for the following table of
values:
x 4 7 9
f ( x) − 43 83 327
Solution: We first form the Divided Difference (DD) table as given below.

x    f(x)   1st DD   2nd DD
4    −43
             42
7     83              16
            122
9    327
f(x) = f(x₀) + (x − x₀) f[x₀, x₁] + (x − x₀)(x − x₁) f[x₀, x₁, x₂]
f(x) = −43 + (x − 4) × 42 + (x − 4)(x − 7) × 16
     = 16x² − 134x + 237
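A Python sketch (ours) of the divided difference construction reproduces both the table entries and the value of the interpolating polynomial:

```python
def divided_difference_coeffs(xs, ys):
    """Top entries f[x0], f[x0,x1], f[x0,...,xk] of the DD table."""
    col = list(ys)
    coeffs = [col[0]]
    for order in range(1, len(xs)):
        col = [(col[i + 1] - col[i]) / (xs[i + order] - xs[i])
               for i in range(len(col) - 1)]
        coeffs.append(col[0])
    return coeffs

def newton_dd_eval(xs, coeffs, x):
    """Evaluate f[x0] + (x-x0) f[x0,x1] + (x-x0)(x-x1) f[x0,x1,x2] + ..."""
    total, prod = 0.0, 1.0
    for c, xi in zip(coeffs, xs):
        total += c * prod
        prod *= (x - xi)
    return total

coeffs = divided_difference_coeffs([4, 7, 9], [-43, 83, 327])
print(coeffs)                                  # [-43, 42.0, 16.0]
print(newton_dd_eval([4, 7, 9], coeffs, 5))    # -33.0 = 16*25 - 134*5 + 237
```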
Example 24: Given the following table of values of the function y = log₁₀ x, construct Newton’s forward difference interpolating polynomial. Comment on the degree of the polynomial and find log₁₀ 1001.
x y ∆y ∆2 y
1000 3.00000
432
1010 3.00432 −4
428
1020 3.00860 −4
424
1030 3.01284 −5
419
1040 3.01703
We observe that the differences of second order are nearly constant. Thus, the degree of the interpolating polynomial is 2 and it is given by,
y = y₀ + u∆y₀ + (u(u − 1)/2!) ∆²y₀, where u = (x − x₀)/h
For x = 1001, we take x0 = 1000.
∴ u = (1001 − 1000)/10 = 0.1
log₁₀ 1001 = 3.00000 + 0.1 × 0.00432 + ((0.1 × (−0.9))/2) × (−0.00004)
           = 3.000434 ≈ 3.00043
Example 25: Determine the interpolating polynomial for the following data table
using both forward and backward difference interpolating formulae. Comment on
the result.
x 0 1 2 3 4
f ( x) 1.0 8.5 36.0 95.5 199.0
Solution: Since the data points are equally spaced, we construct the Newton’s
forward difference interpolating polynomial for which we first form the finite
difference table as given below:
x f ( x) ∆f ( x) ∆2 f ( x) ∆3 f ( x)
0 1 .0
7.5
1.0 8 .5 20.0
27.5 12.0
2 .0 36.0 32.0
59.5 12.0
3 .0 95.5 44.0
103.5
4.0 199.0
Since the differences of order 3 are constant, we construct the third degree Newton’s forward difference interpolating polynomial. Since x₀ = 0 and h = 1.0, we have u = (x − x₀)/h = x, so the polynomial is,
f(x) ≅ 1.0 + x × 7.5 + (x(x − 1)/2) × 20 + (x(x − 1)(x − 2)/6) × 12
     = 1.0 + 1.5x + 4x² + 2x³, on simplification.
Similarly, Newton’s backward difference interpolating polynomial is,
f(x) = 199 + (x − 4) × 103.5 + ((x − 4)(x − 3)/2) × 44 + ((x − 4)(x − 3)(x − 2)/6) × 12
     = 1.0 + 1.5x + 4x² + 2x³, on simplification.
Both formulae thus yield the same polynomial, as expected, since the interpolating polynomial through a given set of points is unique.
Example 26: Using Newton’s divided difference interpolation formula, find f(8) from the following table:
x      4    5    7    10    11    13
f(x)  48  100  294   900  1210  2028
Solution: Since the 3rd order divided differences are all equal, the higher order divided differences vanish. We have Newton’s divided difference interpolation given by,
f(x) = f₀ + (x − x₀) f[x₀, x₁] + (x − x₀)(x − x₁) f[x₀, x₁, x₂]
       + (x − x₀)(x − x₁)(x − x₂) f[x₀, x₁, x₂, x₃]
For x = 8, we take x0 = 4,
f (8) = 48 + (8 − 4)52 + (8 − 4)(8 − 5) × 15 + (8 − 4)(8 − 5)(8 − 7) × 1
= 48 + 208 + 180 + 12 = 448
Example 27: Using inverse interpolation, find the zero of f(x) given by the following tabular values.
Thus, the zero of f(x) is 0.475, which is approximately 0.48; the attainable accuracy is limited by the number of significant digits in the given data.
Hermite Interpolation
Hermite Interpolation: Hermite interpolation, named after Charles Hermite, is a method of interpolating data points as a polynomial function. The generated Hermite interpolating polynomial is closely related to the Newton polynomial, in that both are derived from the calculation of divided differences. However, the Hermite interpolating polynomial may also be computed without using divided differences.
Unlike Newton interpolation, Hermite interpolation matches an unknown function both in observed value and in the observed values of its first m derivatives. This means that n(m + 1) values — the function value and the first m derivative values at each of the n points — must be known, rather than just the n function values required for Newton interpolation. The resulting polynomial may have degree at most n(m + 1) − 1, whereas the Newton polynomial has maximum degree n − 1. (In the general case, there is no need for m to be a fixed value; that is, some points may have more known derivatives than others. In this case the resulting polynomial may have degree N − 1, with N the number of data points.)
end points x₀ or xₙ outside the interval [x₀, xₙ]. This process of finding values of f(x) at points beyond the interval is termed as extrapolation.
8. The problem of inverse interpolation in a table of values of y = f (x) is to
find the value of x for a given y.
6.4 SUMMARY
Short-Answer Questions
1. What is the significance of polynomial interpolation?
2. Define the symbolic operators E and ∆.
3. What is the degree of the first order forward difference of a polynomial of
degree n?
4. What is the degree of the nth order forward difference of a polynomial of
degree n?
5. Write Newton’s forward and backward difference formulae.
6. State an application of iterative linear interpolation.
7. What is the advantage of extrapolation?
8. State Lagrange’s formula for inverse interpolation.
Long-Answer Questions
1. Use Lagrange’s interpolation formula to find the polynomials of least degree
which attain the following tabular values:
x −2 1 2
(a) y 25 −8 −15
x 0 1 2 5
(b) y 2 3 12 147
x 1 2 3 4
(c) y −1 −1 1 5
2. Form the finite difference table for the given tabular values and find the
values of:
(a) ∆f(2)
(b) ∆²f(1)
(c) ∆³f(0)
(d) ∆⁴f(1)
(e) ∇f(5)
(f) ∇f(3)
x      0   1    2    3    4     5     6
f(x)   3   4   13   36   79   148   249
3. How are the forward and backward differences in a table related? Prove the following:
(a) ∆yᵢ = ∇yᵢ₊₁
(b) ∆²yᵢ = ∇²yᵢ₊₂
(c) ∆ⁿyᵢ = ∇ⁿyᵢ₊ₙ
4. Describe Newton’s forward and backward difference formulae using
illustrations.
5. Explain iterative linear interpolation with the help of examples.
6. Illustrate inverse interpolation procedure.
UNIT 7 APPROXIMATION
Structure
7.0 Introduction
7.1 Objectives
7.2 Approximation
7.3 Least Square Approximation
7.4 Answers to Check Your Progress Questions
7.5 Summary
7.6 Key Words
7.7 Self Assessment Questions and Exercises
7.8 Further Readings
7.0 INTRODUCTION
Numerical error is the combined effect of two kinds of error in a calculation. The
first is caused by the finite precision of computations involving floating point or
integer values. The second usually called truncation error is the difference between
the exact mathematical solution and the approximate solution obtained when
simplifications are made to the mathematical equations to make them more
amenable to calculation. The number of significant figures in a measurement, such
as 2.531, is equal to the number of digits that are known with some degree of
confidence (2, 5 and 3) plus the last digit (1), which is an estimate or approximation.
Zeroes within a number are always significant. Zeroes that do nothing but set the
decimal point are not significant. Trailing zeroes that are not needed to hold the
decimal point are significant. A round-off error, also called rounding error, is the
difference between the calculated approximation of a number and its exact
mathematical value. Numerical analysis specifically tries to estimate this error when
using approximation equations and/or algorithms, especially when using finitely
many digits to represent real numbers.
In this unit, you will study approximation and least square approximation.
7.1 OBJECTIVES
Numerical methods are used for solving problems through numerical calculations, providing a table of numbers and/or graphical representations or figures. Numerical methods emphasize how the algorithms are implemented. Thus, the objective of numerical methods is to provide systematic methods for solving problems in a numerical form. Often the numerical data and the methods used are
approximate ones. Hence, the error in a computed result may be caused by the
errors in the data or the errors in the method or both. Generally, the numbers are
represented in decimal (base 10) form, while in computers the numbers are
represented using the binary (base 2) and also the hexadecimal (base 16) forms.
To perform a numerical calculation, the numbers are first approximated by a representation involving a finite number of significant digits. If the numbers to be represented
are very large or very small, then they are written in floating point notation. The
Institute of Electrical and Electronics Engineers (IEEE) has published a standard
for binary floating point arithmetic. This standard, known as the IEEE Standard
754, has been widely adopted. The standard specifies formats for single precision
and double precision numbers. The simplest way of reducing the number of
significant digits in the representation of a number is simply to ignore the unwanted
digits known as chopping. All these topics are discussed in the following section.
Significant Figures
In approximate representation of numbers, the number is represented with a finite
number of digits. All the digits in the usual decimal representation may not be
significant while considering the accuracy of the number. Consider the following
numbers:
1514, 15.14, 1.324, 1524
Each of them has four significant digits and all the digits in them are significant.
Now consider the following numbers,
0.00215, 0.0215, 0.000215, 0.0000125
The leading zeroes after the decimal point in each of the above numbers are
not significant. Each number has only three significant digits, even though they
have different number of digits after the decimal point.
Floating Point Computation
Every real number is usually represented by a finite or infinite sequence of decimal digits. This is called decimal system representation. For example, we can represent 1/4 as 0.25, but 1/3 as 0.333... Thus 1/4 is represented by two significant digits only, while 1/3 is represented by an infinite number of digits. Most computers have two
forms of storing numbers for performing computations. They are fixed-point and Approximation
floating point. In a fixed-point system, all numbers are given with a fixed number of
decimal places. For example, 35.123, 0.014, 2.001. However, fixed-point
representation is not of practical importance in scientific computation, since it cannot
deal with very large or very small numbers.
In a floating-point representation, a number is represented with a finite
number of significant digits having a floating decimal point. We can express the
floating decimal number as follows:
623.8 as 0.6238 × 10³, 0.0001714 as 0.1714 × 10⁻³
A very large number can also be represented with floating-point representation, keeping the first few significant digits, such as 0.14263218 × 10³⁹. Similarly, a very small number can be written with only the significant digits, leaving out the leading zeros, such as 0.32192516 × 10⁻¹⁹.
In the decimal system, very large and very small numbers are expressed in scientific notation as follows: 4.69 × 10²³ and 1.601 × 10⁻¹⁹. Binary numbers can
also be expressed by the floating point representation. The floating point
representation of a number consists of two parts: the first part represents a signed,
fixed point number called the mantissa (m); the second part designates the position
of the decimal (or binary) point and is called the exponent (e). The fixed point
mantissa may be a fraction or an integer. The number of bits required to express
the exponent and mantissa is determined by the accuracy desired from the computing
system as well as its capability to handle such numbers. For example, the decimal
number + 6132.789 is represented in floating point as follows:
sign                sign
 0   6132789         0   04
    mantissa            exponent
The mantissa has a 0 in the leftmost position to denote a plus. Here, the
mantissa is considered to be a fixed point fraction. This representation is equivalent
to the number expressed as a fraction 10 times by an exponent, that is
0.6132789 × 10+04. Because of this analogy, the mantissa is sometimes called the
fraction part.
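Python exposes exactly this mantissa/exponent split for binary floating point through math.frexp; a quick sketch (ours, for illustration):

```python
import math

# Binary analogue of writing +6132.789 as 0.6132789 x 10^4:
# frexp returns (m, e) with value == m * 2**e and 0.5 <= |m| < 1.
for value in (6132.789, 0.0001714):
    m, e = math.frexp(value)
    print(f"{value} = {m} * 2**{e}")
    assert value == m * 2 ** e   # the split is exact
```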
Consider, for example, a computer that assumes integer representation for
the mantissa and radix 8 for the numbers. The octal number + 36.754 = 36754 ×
8–3 in its floating point representation will look like this:
sign                sign
 0   36754           1   03
    mantissa            exponent
To cover a decimal range of about 10⁸⁶, the required binary exponent n satisfies,
2ⁿ = 10⁸⁶
n log 2 = 86
n = 86/log 2 = 86/0.3010 = 285.7
Therefore, 10⁸⁶ ≈ 2²⁸⁵·⁷
The exponent ±285 can be represented by a 10-bit binary word. It has a range of exponents (+511 to −512).
Errors in Numerical Solution
The errors in a numerical solution are basically of two types. They are truncation
error and computational error. The error which is inherent in the numerical method
employed for finding numerical solution is called the truncation error. The
computational error arises while doing arithmetic computation due to representation
of numbers with a finite number of decimal digits.
The truncation error arises due to the replacement of an infinite process
such as summation or integration by a finite one. For example, in computation of a
transcendental function we use Taylor series/Maclaurin series expansion but retain
only a finite number of terms. Similarly, a definite integral is numerically evaluated
using a finite sum with a few function values of the integral. Thus, we express the
error in the solution obtained by numerical method.
Inherent errors are errors in the data which are obtained by physical
measurement and are due to limitations of the measuring instrument. The analysis
of errors in the computed result due to the inherent errors in data is similar to that
of round-off errors.
Generation and Propagation of Round-Off Error
During numerical computation on a computer, a round-off error is generated by
taking an infinite decimal representation of a real, rational number such
as 1/3, 4/7, etc., by a finite size decimal form. In each arithmetic operation with
such approximate rounded-off numbers there arises a round-off error. Also round-
off errors present in the data will propagate in the result. Consider two approximate
floating point numbers rounded-off to four significant digits.
x = 0.2234 × 103 and y = 0.1112 × 102
The sum x + y = 0.23452 × 10³ is rounded-off to 0.2345 × 10³ with an absolute error, 2 × 10⁻². This is the new round-off error generated in the result.
Besides this error, the result will have an error propagated from the round-off
errors in x and y.
Round-Off Errors in Arithmetic Operations
To get an insight into the propagation of round-off errors, let us consider them for
the four basic operations of addition, subtraction, multiplication and division. Let x_T and y_T be two real numbers whose round-off errors in their approximate representations x and y are ε₁ and ε₂ respectively, so that
x_T = x + ε₁ and y_T = y + ε₂
Then x_T + y_T = (x + y) + (ε₁ + ε₂); thus the propagated round-off error in the sum of two approximate numbers is equal to the sum of the round-off errors in the individual numbers.
The multiplication of two approximate numbers has the propagated relative round-off error given by,
(x_T y_T − xy)/(xy) ≈ ε₁/x + ε₂/y
This is equal to the sum of the relative errors in the numbers x and y.
Similarly, for division we get the relative propagated error as,
(x_T/y_T − x/y)/(x/y) ≈ ε₁/x − ε₂/y
Thus, the relative error in division is equal to the difference of the relative errors in the numbers.
Errors in Evaluation of Functions
The propagated error in the evaluation of a function f (x) of a single variable x
having a round-off error is given by,
f (x ) f ( x) f '( x )
In the evaluation of a function of several variables x₁, x₂, ..., xₙ, the propagated round-off error is given by Σ_{i=1}^{n} εᵢ (∂f/∂xᵢ), where ε₁, ε₂, ..., εₙ are the round-off errors in x₁, x₂, ..., xₙ, respectively.
Significance Errors
During arithmetic computations of approximate numbers having fixed precision,
there may be loss of significant digits in some cases. The error due to loss of
significant digits is termed as significance error. Significance error is more serious
than round-off errors, since it affects the accuracy of the result.
There are two situations when loss of significant digits occur. These are,
(i) Subtraction of two nearly equal numbers.
(ii) Division by a very small divisor compared to the dividend.
For example, consider the subtraction of the nearly equal numbers
x = 0.12454657 and y = 0.12452413, each having eight significant digits. The
result x – y = 0.22440000 × 10–4, is correct to four significant figures only. This
result when used in further computations leads to serious error in the result.
Consider the problem of computing the roots of the quadratic equation,
ax2 + bx + c = 0
The roots of this equation are,
x = (−b + √(b² − 4ac))/(2a)  and  x = (−b − √(b² − 4ac))/(2a)
When b² is large compared with 4|ac| and b > 0, the computation of the root
(−b + √(b² − 4ac))/(2a)
involves the subtraction of two nearly equal numbers. It can be written as,
(−b + √(b² − 4ac))/(2a) = −2c/(b + √(b² − 4ac))
which avoids the loss of significant digits.
Table 7.1 shows that the error in the computed value becomes more serious
for smaller value of x. It may be noted that the correct values of f (x) can be
computed by avoiding the divisions by small number by rewriting f (x) as given
below.
f(x) = ((1 − cos x)/x²) × ((1 + cos x)/(1 + cos x))
i.e., f(x) = sin²x/(x²(1 + cos x))
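The effect is easy to demonstrate in double precision arithmetic (a Python sketch, ours; the sample point is chosen small enough that cos x rounds to exactly 1):

```python
import math

def f_naive(x):
    return (1 - math.cos(x)) / x ** 2      # subtracts nearly equal numbers

def f_stable(x):
    return math.sin(x) ** 2 / (x ** 2 * (1 + math.cos(x)))

x = 1e-9
print(f_naive(x))    # 0.0 -- all significant digits lost
print(f_stable(x))   # close to 0.5, the correct limit of (1 - cos x)/x^2
```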
Example 4: Find the relative error when 22/7 is approximated by 3.14.
Solution: Relative error = (22/7 − 3.14)/(22/7) = 0.00090.
Example 5: Determine the number of correct digits in x = 0.2217, if it has a relative error εᵣ = 0.2 × 10⁻¹.
Solution: Absolute error = 0.2 × 10⁻¹ × 0.2217 = 0.004434
Hence, x has only one correct digit: x ≈ 0.2.
Example 6: Round-off the number 4.5126 to four significant figures and find the relative percentage error.
Solution: The number 4.5126 rounded-off to four significant figures is 4.513.
Relative error = (−0.0004/4.5126) × 100 = −0.0088 per cent
Example 7: Given f(x, y, z) = 5xy²/z², find the maximum relative error in the evaluation of f(x, y, z) at x = y = z = 1, if x, y, z have absolute errors δx = δy = δz = 0.1.
Solution: The value of f(x, y, z) at x = y = z = 1 is 5. The maximum absolute error is
(δf)max = |f_x|δx + |f_y|δy + |f_z|δz = (5 + 10 + 10) × 0.1 = 2.5
Hence, the maximum relative error is (E_R)max = 2.5/5 = 0.5.
Example 8: Find the relative propagated error in the evaluation of x + y, where x = 13.24 and y = 14.32 have round-off errors ε₁ = 0.004 and ε₂ = 0.002 respectively.
Solution: Here, x + y = 27.56 and ε₁ + ε₂ = 0.006.
Thus, the required relative error = 0.006/27.56 = 0.0002177.
Example 9: Find the relative percentage error in the evaluation of u = xy with
x = 5.43, y = 3.82 having round-off errors 0.01 in both x and y.
Solution: Now, xy = 5.43 × 3.82 ≈ 20.74
The relative error in x is 0.01/5.43 = 0.0018.
The relative error in y is 0.01/3.82 = 0.0026.
Thus, the relative propagated error in xy = 0.0018 + 0.0026 = 0.0044.
The percentage relative error = 0.44 per cent.
Example 10: Given u = xy + yz + zx, find the estimate of relative percentage
error in the evaluation of u for x = 2.104, y = 1.935 and z = 0.845. What are the
approximate values correct to the last digit?
Solution: Here, u = x(y + z) + yz = 2.104 (1.935 + 0.845) + 1.935 × 0.845
= 5.849 + 1.635 = 7.484
Error, δu = (y + z)δx + (z + x)δy + (x + y)δz
≤ 2(x + y + z) × 0.0005, since δx = δy = δz = 0.0005
= 2 × 4.884 × 0.0005 = 0.0049
Hence, the relative percentage error = (0.0049/7.484) × 100 = 0.065 per cent.
Example 11: The diameter of a circle measured to within 1 mm is d = 0.842 m.
Compute the area of the circle and give the estimated relative error in the computed
result.
Solution: The area of the circle A is given by the formula, A = πd²/4.
Thus, A = (3.1416/4) × (0.842)² m² = 0.5568 m².
Here the value of π is taken up to the 4th decimal place since the data of d has accuracy up to the 3rd decimal place. Now the relative percentage error in the above computation is,
E_p = (2δd/d) × 100 = (2 × 0.001/0.842) × 100 = 0.24 per cent
Example 12: The length a and the width b of a plate is measured accurate up to
1cm as a = 5.43 m and b = 3.82 m. Compute the area of the plate and indicate its
error.
Solution: The area of the plate is given by,
A = ab = 3.82 × 5.43 sq. m. = 20.74 m²
The estimate of error in the computed value of A is given by,
δA = δa · b + δb · a = 0.01 × 3.82 + 0.01 × 5.43, since δa = δb = 0.01
   = 0.0925 ≈ 0.1 m²
Computational Algorithms
For solving problems with the help of a computer, one should first analyse the
mathematical formulation of the problem and consider a suitable numerical method
for solving it. The next step is to write an algorithm for implementing the method.
An algorithm is defined as a finite sequence of unambiguous steps to be followed
for solving a given problem. Finally, one has to write a computer program in a
suitable programming language. A computer program is a sequence of computer
instructions for solving a problem.
It is possible to write more than one algorithm to solve a specific problem.
But one should analyse them before writing a computer program. The analysis
involves checking their correctness, robustness, efficiency and other characteristics.
The analysis is helpful for solving the problem on a computer. The analysis of
correctness of an algorithm ensures that the algorithm gives a correct solution of
the problem. The analysis of robustness is required to ascertain if the algorithm is
capable of tackling the problem for possible cases or for all possible variations of
the parameters of the problem. The efficiency is concerned with the computational
complexities and the total time required to solve the problem.
Computer oriented numerical methods must deal with algorithms for
implementation of numerical methods on a computer. The following algorithms of
some simple problems will make the concept clear.
Consider the problem of solving a pair of linear equations in two unknowns given by,
a₁x + b₁y = c₁
a₂x + b₂y = c₂
where a₁, b₁, c₁, a₂, b₂, c₂ are real constants. The solution of the equations is given by cross multiplication as,
x = (b₂c₁ − b₁c₂)/(a₁b₂ − a₂b₁), y = (c₂a₁ − c₁a₂)/(a₁b₂ − a₂b₁)
It may be noted that if a1 b2 – a2 b1 = 0, then the solution does not exist. This
aspect has to be kept in mind while writing the algorithm as given below.
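The algorithm itself is not reproduced in this extract; a minimal Python sketch (ours) with the determinant guard might read:

```python
def solve_2x2(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1 and a2*x + b2*y = c2 by cross multiplication.
    Returns None when the determinant a1*b2 - a2*b1 vanishes."""
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None                      # no unique solution
    x = (b2 * c1 - b1 * c2) / det
    y = (c2 * a1 - c1 * a2) / det
    return x, y

print(solve_2x2(2, 3, 8, 1, -1, -1))     # (1.0, 2.0)
print(solve_2x2(1, 2, 3, 2, 4, 6))       # None: determinant is zero
```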
Further, if b² ≥ 4ac, the roots are real; otherwise they are complex conjugates. This aspect is to be considered while writing an algorithm.
Algorithm: Computation of roots of a quadratic equation.
Step 1: Read a, b, c
Step 2: Compute d = b2 – 4ac
Step 3: Check if d ≥ 0, go to Step 4 else go to Step 8
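A Python sketch of the full algorithm, including the complex-conjugate branch that the remaining (elided) steps would cover; the function name is illustrative:

```python
import math

def quadratic_roots(a, b, c):
    """Roots of a*x^2 + b*x + c = 0, branching on the discriminant d."""
    d = b * b - 4 * a * c          # Step 2
    if d >= 0:                     # Step 3: real roots
        r = math.sqrt(d)
        return (-b + r) / (2 * a), (-b - r) / (2 * a)
    else:                          # complex conjugate roots
        re, im = -b / (2 * a), math.sqrt(-d) / (2 * a)
        return complex(re, im), complex(re, -im)

print(quadratic_roots(1, -3, 2))  # (2.0, 1.0)
```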
…, n are observed data and not exact. In this case, if we use polynomial interpolation, then it would reproduce all the errors of observation. In such situations one may take a large number of observed data, so that statistical laws in effect cancel the errors introduced by inaccuracies in the measuring equipment. The approximating function is then derived such that the sum of the squared deviations between the observed values and the estimated values is made as small as possible.
Mathematically, the problem of curve fitting or function approximation may be
stated as follows:
To find a functional relationship y = g(x), that relates the set of observed data
values Pi(xi, yi), i = 1, 2,..., n as closely as possible, so that the graph of y = g(x)
goes near the data points Pi’s though not necessarily through all of them.
The first task in curve fitting is to select a proper form of an approximating
function g(x), containing some parameters, which are then determined by minimizing
the total squared deviation.
For example, g(x) may be a polynomial of some degree or an exponential or logarithmic function. Thus g(x) may be any of the following:
(i) g(x) = α + βx   (ii) g(x) = α + βx + γx²
(iii) g(x) = αe^x   (iv) g(x) = αe^(βx)
(v) g(x) = α log(βx)
Here α, β, γ are parameters which are to be evaluated so that the curve y = g(x) fits the data well. A measure of how well the curve fits is called the goodness of fit.
In the case of least square fit, the parameters are evaluated by solving a system
of normal equations, derived from the conditions to be satisfied so that the sum of
the squared deviations of the estimated values from the observed values, is minimum.
i.e., S = Σ_{i=1}^{n} {f_i − g(x_i)}²     (7.1)
The function g(x) may have some parameters α, β, γ. In order to determine these parameters we have to form the necessary conditions for S to be minimum, which are:
Self-Instructional
Material 159
∂S/∂α = 0, ∂S/∂β = 0, ∂S/∂γ = 0     (7.2)
These equations are called normal equations, solving which we get the parameters for the best approximate function g(x).
And, ∂S/∂β = 0, i.e., Σ_{i=1}^{n} x_i (α + βx_i − y_i) = 0     (7.5)
Solving,
β = (nS_11 − S_1 S_01)/(nS_2 − S_1²).  Also, α = (S_2 S_01 − S_1 S_11)/(nS_2 − S_1²).
Algorithm: Fitting a straight line y = a + bx.
Step 1: Read n [n being the number of data points]
Step 2: Initialize : sum x = 0, sum x2 = 0, sum y = 0, sum xy = 0
Step 3: For j = 1 to n compute
Begin
Read data xj, yj
Compute sum x = sum x + xj
Compute sum x2 = sum x2 + xj × xj
Compute sum y = sum y + yj
Compute sum xy = sum xy + xj × yj
End
Step 4: Compute b = (n × sum xy − sum x × sum y)/(n × sum x2 − (sum x)²)     (7.6)
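The algorithm can be sketched in Python as below. Step 5 is not shown in the text above, so the formula used for a here is the standard one, a = (Σy − bΣx)/n; the helper name fit_line is ours:

```python
def fit_line(xs, ys):
    """Least-squares fit y = a + b*x using the sums from the algorithm above."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # Step 4
    a = (sy - b * sx) / n                           # standard closing step (assumed)
    return a, b

# Data from the worked example below (x = 4, 6, ..., 12)
a, b = fit_line([4, 6, 8, 10, 12], [13.72, 12.90, 12.01, 11.14, 10.31])
print(round(a, 3), round(b, 3))  # 15.448 -0.429
```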
Thus the normal equations, ∂S/∂a = 0, ∂S/∂b = 0, ∂S/∂c = 0, are as follows:
Σ_{i=1}^{n} (a + bx_i + cx_i² − y_i) = 0     (7.7)
Σ_{i=1}^{n} x_i (a + bx_i + cx_i² − y_i) = 0
Σ_{i=1}^{n} x_i² (a + bx_i + cx_i² − y_i) = 0     (7.8)
These reduce to,
na + s_1 b + s_2 c − s_01 = 0
s_1 a + s_2 b + s_3 c − s_11 = 0
s_2 a + s_3 b + s_4 c − s_21 = 0     (7.9)
The sums are computed as in the following table:

i     x_i   y_i   x_i²   x_i³   x_i⁴   x_i y_i   x_i² y_i
1     x_1   y_1   x_1²   x_1³   x_1⁴   x_1 y_1   x_1² y_1
2     x_2   y_2   x_2²   x_2³   x_2⁴   x_2 y_2   x_2² y_2
...   ...   ...   ...    ...    ...    ...       ...
n     x_n   y_n   x_n²   x_n³   x_n⁴   x_n y_n   x_n² y_n
Sum   s_1   s_01  s_2    s_3    s_4    s_11      s_21
Example: Fit a straight line to the following data:
x_i :  4      6      8      10     12
y_i :  13.72  12.90  12.01  11.14  10.31
Solution: Let y = a + bx be the straight line which fits the data. We have the normal equations ∂S/∂a = 0, ∂S/∂b = 0 for determining a and b, where
S = Σ_{i=1}^{5} (y_i − a − bx_i)².
Thus, Σ_{i=1}^{5} y_i − na − b Σ_{i=1}^{5} x_i = 0
and, Σ_{i=1}^{5} x_i y_i − a Σ_{i=1}^{5} x_i − b Σ_{i=1}^{5} x_i² = 0
x_i     y_i      x_i²    x_i y_i
4 13.72 16 54.88
6 12.90 36 77.40
8 12.01 64 96.08
10 11.14 100 111.40
12 10.31 144 123.72
Sum 40 60.08 360 463.48
Example: Fit a straight line to the following data:
x_i :  60  61  62  63  64
y_i :  40  42  48  52  55
Solution: Let the straight line fitting the data be y = a + bx. The data values being
large, we can use a change in variable by substituting u = x – 62 and v = y – 48.
Let v = A + B u, be a straight line fitting the transformed data, where the
normal equations for A and B are,
Σ_{i=1}^{5} v_i = 5A + B Σ_{i=1}^{5} u_i
Σ_{i=1}^{5} u_i v_i = A Σ_{i=1}^{5} u_i + B Σ_{i=1}^{5} u_i²
The computation of the various sums is given in the table below,
x_i   y_i   u_i   v_i   u_i v_i   u_i²
60 40 –2 –8 16 4
61 42 −1 −6 6 1
62 48 0 0 0 0
63 52 1 4 4 1
64 55 2 7 14 4
Sum 0 −3 40 10
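The remaining arithmetic of this example can be sketched in Python; the back-transformation y = (48 + A − 62B) + Bx follows from u = x − 62, v = y − 48:

```python
us = [-2, -1, 0, 1, 2]
vs = [-8, -6, 0, 4, 7]
n = len(us)
su, sv = sum(us), sum(vs)                   # 0, -3
suv = sum(u * v for u, v in zip(us, vs))    # 40
suu = sum(u * u for u in us)                # 10

# Normal equations: sum v = n*A + B*sum u ; sum uv = A*sum u + B*sum u^2
B = (n * suv - su * sv) / (n * suu - su * su)   # = 4.0, since sum u = 0
A = (sv - B * su) / n                           # = -0.6

# Transform back: v = A + B*u with u = x - 62, v = y - 48
# => y = (48 + A - 62*B) + B*x
a = 48 + A - 62 * B
b = B
print(a, b)  # -200.6 4.0
```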
And, z̄ = α + p x̄, where x̄ = (Σ_{i=1}^{n} x_i)/n, z̄ = (Σ_{i=1}^{n} log y_i)/n     (7.16)
a_11 = n,    a_21 = sx,   a_31 = sx2
a_12 = sx,   a_22 = sx2,  a_32 = sx3
a_13 = sx2,  a_23 = sx3,  a_33 = sx4
a12 = a12 / a11 , a13 = a13 / a11 , b1 = b1 / a11
a 22 = a 22 − a 21a12 , a23 = a 23 − a21a13
b2 = b2 − b1a21
a32 = a32 − a31 a12
a33 = a33 − a31a13
b3 = b3 − b1a31
a 23 = a23 / a 22
b2 = b2 / a 22
a33 = a33 − a23 a32
b3 = b3 − a32 b2
c = b3 / a33
b = b2 − c a23
a = b1 − b a12 − c a13
Step 8: Print values of a, b, c (the coefficients of the parabola)
Step 9: Print the table of values of x_k, y_k and y_pk, where y_pk = a + bx_k + cx_k²,
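A compact Python sketch of the whole procedure (the normal equations plus the elimination steps listed above); the helper name fit_parabola is ours:

```python
def fit_parabola(xs, ys):
    """Least-squares y = a + b*x + c*x^2 via the normal equations,
    solved by Gaussian elimination as in the steps above."""
    n = len(xs)
    sx  = sum(xs);               sx2 = sum(x**2 for x in xs)
    sx3 = sum(x**3 for x in xs); sx4 = sum(x**4 for x in xs)
    s01 = sum(ys)
    s11 = sum(x * y for x, y in zip(xs, ys))
    s21 = sum(x * x * y for x, y in zip(xs, ys))
    A = [[n, sx, sx2], [sx, sx2, sx3], [sx2, sx3, sx4]]
    r = [s01, s11, s21]
    # forward elimination
    for k in range(3):
        for i in range(k + 1, 3):
            m = A[i][k] / A[k][k]
            for j in range(k, 3):
                A[i][j] -= m * A[k][j]
            r[i] -= m * r[k]
    # back substitution
    c = r[2] / A[2][2]
    b = (r[1] - A[1][2] * c) / A[1][1]
    a = (r[0] - A[0][1] * b - A[0][2] * c) / A[0][0]
    return a, b, c

# Data lying exactly on y = 1 + x^2, so the fit recovers a=1, b=0, c=1
print(fit_parabola([0, 1, 2, 3], [1, 2, 5, 10]))
```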
round-off errors) equal to the sum of the round-off errors in the individual
numbers.
5. During arithmetic computations of approximate numbers having fixed
precision, there may be loss of significant digits in some cases. The error
due to loss of significant digits is termed as significance error.
6. There are two situations when loss of significant digits occur. These are,
(i) Subtraction of two nearly equal numbers
(ii) Division by a very small divisor compared to the dividend
7. To get a numerical solution on a computer, one has to write an algorithm.
8. For solving problems with the help of a computer, one should first analyse
the mathematical formulation of the problem and consider a suitable numerical
method for solving it. The next step is to write an algorithm for implementing
the method.
9. Let (x1, f1), (x2, f2),..., (xn, fn) be a set of observed values and g(x) be the approximating function. We form the sum of the squares of the deviations of the observed values f_i from the estimated values g(x_i),
i.e., S = Σ_{i=1}^{n} {f_i − g(x_i)}²
The conditions for S to be minimum with respect to the parameters of g(x) give a system of equations. These equations are called normal equations, solving which we get the parameters for the best approximate function g(x).
10. Let g(x) = α + βx be the straight line which fits a set of observed data points (x_i, y_i), i = 1, 2, ..., n.
Let S be the sum of the squares of the deviations g(x_i) − y_i, i = 1, 2,..., n, given by,
S = Σ_{i=1}^{n} (α + βx_i − y_i)²
7.5 SUMMARY
Numerical methods are methods used for solving problems through numerical calculations, providing a table of numbers and/or graphical representations or figures. Numerical methods emphasize how the algorithms are implemented.
To perform a numerical calculation, the numbers are first approximated by a representation involving a finite number of significant digits. If the numbers to be represented are very large or very small, then they are written in floating point notation.
The Institute of Electrical and Electronics Engineers (IEEE) has published a
standard for binary floating point arithmetic.
In approximate representation of numbers, the number is represented with
a finite number of digits. All the digits in the usual decimal representation
may not be significant while considering the accuracy of the number.
In a floating point representation, a number is represented with a finite number of
significant digits having a floating decimal point.
Floating point representation of a number consists of mantissa and exponent.
The errors in a numerical solution are basically of two types termed as
truncation error and computational error.
The error which is inherent in the numerical method employed for finding
numerical solution is called the truncation error.
The truncation error arises due to the replacement of an infinite process
such as summation or integration by a finite one.
Inherent errors are errors in the data which are obtained by physical
measurement and are due to limitations of the measuring instrument.
The analysis of errors in the computed result due to the inherent errors in
data is similar to that of round-off errors.
Significance error is more serious than round-off errors.
Iteration is the numerical method applied repeatedly to get better results till
the solution is obtained up to a desired accuracy.
An algorithm is a sequence of unambiguous steps used to solve a given
problem.
It is possible to write more than one algorithm to solve a specific problem.
The algorithm analysis involves checking their correctness, robustness,
efficiency and other characteristics.
Single precision floating point format is a computer number format that
occupies 4 bytes (32-bits) in computer memory and denotes wide range of
values using a floating point.
Double precision refers to a specific floating point number that has more
precision, i.e., more digits to the right of the decimal point than a single
precision number.
7.6 KEY WORDS
Short-Answer Questions
1. What are floating point numbers?
2. Find the percentage error in approximating 5/6 by 0.8333, correct up to four significant figures.
3. Write the characteristics of numerical computation.
4. Find the relative error in the computation of x − y for x = 12.05 and y = 8.02, having absolute errors Δx = 0.005 and Δy = 0.001.
Long-Answer Questions
1. Round-off the following numbers to three decimal places:
(i) 0.230582 (ii) 0.00221118 (iii) 2.3645 (iv) 1.3455
2. Round-off the following numbers to four significant figures:
(i) 49.3628 (ii) 0.80022 (iii) 8.9325 (iv) 0.032588
(v) 0.0029417 (vi) 0.00010211 (vii) 410.99
3. Round-off each of the following numbers to three significant figures and
indicate the absolute error in each.
(i) 49.3628 (ii) 0.9002 (iii) 8.325 (iv) 0.0039417
4. Find the sum of the following approximate numbers, correct to the last
digits.
0.348, 0.1834, 345.4, 235.2, 11.75, 0.0849, 0.0214, 0.0002435
5. Find the number of correct significant digits in the approximate number
11.2461. Given is its absolute error = 0.25 × 10–2.
6. Given are the following approximate numbers with their relative errors.
Determine the absolute errors.
(i) x_A = 12165, relative error 0.1%   (ii) x_A = 3.23, relative error 0.6%
12. In the formula R = r²/(2h) + h/2 for computing R, find the absolute error in computing R for r = 48 mm and h = 56 mm, due to errors of 1 mm in r and 0.2 mm in h.
13. Find the smaller root of 0.001x2 + 100.1x + 10000 = 0, with the help of
the usual formula and round-off to six significant digits. Compare with the
correct answer x = –100.0.
14. Find the roots of the quadratic equation x2 – 100x – 0.1 = 0, with the help
of the usual formulae and show the significance error in the result.
UNIT 8 NUMERICAL INTEGRATION AND NUMERICAL DIFFERENTIATION
Structure
8.0 Introduction
8.1 Objectives
8.2 Numerical Integration
8.3 Numerical Differentiation
8.4 Optimum Choice of Step Length
8.5 Extrapolation Method
8.6 Answers to Check Your Progress Questions
8.7 Summary
8.8 Key Words
8.9 Self Assessment Questions and Exercises
8.10 Further Readings
8.0 INTRODUCTION
∫_a^b f(x) dx     (8.1)
∫_{x_0}^{x_n} f(x) dx = h Σ_{k=0}^{n} c_k f(x_k)     (8.5)
[Figure: The area under y = f(x) between x_0 and x_1, approximated by the trapezium through the points (x_0, f_0) and (x_1, f_1).]
Trapezoidal Rule
For evaluating the integral ∫_{x_0}^{x_n} f(x) dx, we have to sum the integrals for each of the sub-intervals:
∫_{x_0}^{x_n} f(x) dx = (h/2)[f_0 + 2(f_1 + f_2 + ... + f_{n−1}) + f_n]     (8.9)
where x_0 < ξ_1 < x_1, x_1 < ξ_2 < x_2, ..., x_{n−1} < ξ_n < x_n
Thus, we can write
E_Tn = −(h³/12)[n f̄″(ξ)], f̄″(ξ) being the mean of f″(ξ_1), f″(ξ_2),..., f″(ξ_n)
     = −(h²/12) nh f̄″(ξ)
i.e., E_Tn = −((b − a)h²/12) f″(ξ), since nh = b − a, where x_0 < ξ < x_n     (8.10)
Algorithm: Evaluation of ∫_a^b f(x) dx by trapezoidal rule.
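A minimal Python sketch of the composite trapezoidal rule of Equation (8.9); the function name is illustrative:

```python
def trapezoidal(f, a, b, n):
    """Composite trapezoidal rule with n sub-intervals (Equation (8.9))."""
    h = (b - a) / n
    s = f(a) + f(b) + 2 * sum(f(a + i * h) for i in range(1, n))
    return h * s / 2

# ∫_0^1 x^2 dx = 1/3; the O(h^2) error shrinks by ~4 when n doubles
print(trapezoidal(lambda x: x * x, 0, 1, 100))
```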
Assuming F′(x) = f(x), we obtain:
E_S = F(x_2) − F(x_0) − (h/3)(f_0 + 4f_1 + f_2)
Expanding F(x_2) = F(x_0 + 2h), f_1 = f(x_0 + h) and f_2 = f(x_0 + 2h) in powers of h, we have:
E_S = [2h f_0 + 2h² f_0′ + (4/3)h³ f_0″ + (2/3)h⁴ f_0‴ + (4/15)h⁵ f_0⁽ⁱᵛ⁾ + ...]
    − (h/3)[6f_0 + 6h f_0′ + 4h² f_0″ + 2h³ f_0‴ + (5/6)h⁴ f_0⁽ⁱᵛ⁾ + ...]
i.e., E_S = −(h⁵/90) f⁽ⁱᵛ⁾(ξ), on simplification, where x_0 < ξ < x_2     (8.12)
Geometrical interpretation of Simpson’s one-third formula is that the integral
represented by the area under the curve is approximated by the area under the
parabola through the points (x0, f0), (x1, f1) and (x2, f2) shown in Figure 8.2.
[Figure 8.2: The area under y = f(x) approximated by the area under the parabola through (x_0, f_0), (x_1, f_1) and (x_2, f_2).]
∫_a^b f(x) dx = (h/3)[(f_0 + 4f_1 + f_2) + (f_2 + 4f_3 + f_4) + (f_4 + 4f_5 + f_6) + ... + (f_{2m−2} + 4f_{2m−1} + f_{2m})]
i.e., ∫_a^b f(x) dx = (h/3)[f_0 + 4(f_1 + f_3 + f_5 + ... + f_{2m−1}) + 2(f_2 + f_4 + f_6 + ... + f_{2m−2}) + f_{2m}]     (8.13)
This is known as Simpson's one-third rule of numerical integration.
The error in this formula is given by the sum of the errors in each pair of intervals as,
E_S^{2m} = −(h⁵/90)[f⁽ⁱᵛ⁾(ξ_1) + f⁽ⁱᵛ⁾(ξ_2) + ... + f⁽ⁱᵛ⁾(ξ_m)]
which can be rewritten as,
E_S^{2m} = −(h⁵/90) m f̄⁽ⁱᵛ⁾(ξ), f̄⁽ⁱᵛ⁾(ξ) being the mean of f⁽ⁱᵛ⁾(ξ_1), f⁽ⁱᵛ⁾(ξ_2),..., f⁽ⁱᵛ⁾(ξ_m)
Since 2mh = b − a, we have
E_S^{2m} = −((b − a)h⁴/180) f⁽ⁱᵛ⁾(ξ), where a < ξ < b.     (8.14)
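Simpson's one-third rule of Equation (8.13) can be sketched in Python as follows (illustrative names):

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson's one-third rule; n must be even (n = 2m)."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))  # odd ordinates
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))  # even ordinates
    return h * s / 3

print(simpson(math.exp, 0, 1, 10))  # close to e - 1
```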
Simpson's Three-Eighth Formula
Taking n = 3, Newton-Cotes formula can be written as,
∫_{x_0}^{x_3} f(x) dx = h ∫_0^3 [f_0 + u Δf_0 + (u(u−1)/2!) Δ²f_0 + (u(u−1)(u−2)/3!) Δ³f_0] du
= h [u f_0 + (u²/2) Δf_0 + (1/2)(u³/3 − u²/2) Δ²f_0 + (1/6)(u⁴/4 − u³ + u²) Δ³f_0]_0^3
= h [3y_0 + (9/2) Δy_0 + (9/4) Δ²y_0 + (3/8) Δ³y_0]
= h [3y_0 + (9/2)(y_1 − y_0) + (9/4)(y_2 − 2y_1 + y_0) + (3/8)(y_3 − 3y_2 + 3y_1 − y_0)]
i.e., ∫_{x_0}^{x_3} f(x) dx = (3h/8)(y_0 + 3y_1 + 3y_2 + y_3)     (8.15)
The truncation error in this formula is −(3h⁵/80) f⁽ⁱᵛ⁾(ξ), x_0 < ξ < x_3.
This formula is known as Simpson’s three-eighth formula of numerical
integration.
As in the case of Simpson's one-third rule, we can write Simpson's three-eighth rule of numerical integration as,
∫_a^b f(x) dx = (3h/8)[y_0 + 3y_1 + 3y_2 + 2y_3 + 3y_4 + 3y_5 + 2y_6 + ... + 2y_{3m−3} + 3y_{3m−2} + 3y_{3m−1} + y_{3m}]     (8.16)
where h = (b − a)/(3m); for m = 1, 2,...
i.e., the interval (b − a) is divided into 3m number of sub-intervals.
The rule in Equation (8.16) can be rewritten as,
∫_a^b f(x) dx = (3h/8)[y_0 + y_{3m} + 3(y_1 + y_2 + y_4 + y_5 + ... + y_{3m−2} + y_{3m−1}) + 2(y_3 + y_6 + ... + y_{3m−3})]     (8.17)
The truncation error in Simpson's three-eighth rule is −(3h⁴/240)(b − a) f⁽ⁱᵛ⁾(ξ), x_0 < ξ < x_{3m}.
Weddle's Formula
In the Newton-Cotes formula with n = 6, some minor modifications give Weddle's formula. The Newton-Cotes formula with n = 6 gives,
∫_{x_0}^{x_6} y dx = h[6y_0 + 18Δy_0 + 27Δ²y_0 + 24Δ³y_0 + (123/10)Δ⁴y_0 + (33/10)Δ⁵y_0 + (41/140)Δ⁶y_0]
This formula takes a very simple form if the last term (41/140)Δ⁶y_0 is replaced by (42/140)Δ⁶y_0 = (3/10)Δ⁶y_0. Then the error in the formula will have an additional term −(1/140)Δ⁶y_0. The above formula then becomes,
∫_{x_0}^{x_6} y dx = h[6y_0 + 18Δy_0 + 27Δ²y_0 + 24Δ³y_0 + (123/10)Δ⁴y_0 + (33/10)Δ⁵y_0 + (3/10)Δ⁶y_0]
i.e., ∫_{x_0}^{x_6} y dx = (3h/10)[y_0 + 5y_1 + y_2 + 6y_3 + y_4 + 5y_5 + y_6]     (8.18)
On replacing the differences in terms of yi’s, this formula is known as Weddle’s
formula.
The error in Weddle's formula is −(1/140) h⁷ y⁽ᵛⁱ⁾(ξ)     (8.19)
Weddle's rule is the composite form of Weddle's formula, used when the number of sub-intervals is a multiple of 6. One can use Weddle's rule of numerical integration by sub-dividing the interval (b − a) into 6m sub-intervals, m being a positive integer. Weddle's rule is,
∫_a^b f(x) dx = (3h/10)[y_0 + 5y_1 + y_2 + 6y_3 + y_4 + 5y_5 + 2y_6 + 5y_7 + y_8 + 6y_9 + y_10 + 5y_11 + ... + 2y_{6m−6} + 5y_{6m−5} + y_{6m−4} + 6y_{6m−3} + y_{6m−2} + 5y_{6m−1} + y_{6m}]     (8.20)
where b − a = 6mh
i.e., ∫_a^b f(x) dx = (3h/10)[y_0 + y_{6m} + 5(y_1 + y_5 + y_7 + y_11 + ... + y_{6m−5} + y_{6m−1}) + (y_2 + y_4 + y_8 + y_10 + ... + y_{6m−4} + y_{6m−2}) + 6(y_3 + y_9 + ... + y_{6m−3}) + 2(y_6 + y_12 + ... + y_{6m−6})]
The error in Weddle's rule is given by −(1/840) h⁶ (b − a) y⁽ᵛⁱ⁾(ξ)     (8.21)
Example 1: Compute the approximate value of ∫_0^2 x⁴ dx by taking four sub-intervals, using (i) trapezoidal rule and (ii) Simpson's one-third rule.
Exact value = 2⁵/5 = 32/5 = 6.4
Error in the result by trapezoidal rule = 6.4 − 7.0672 = −0.6672
Error in the result by Simpson's one-third rule = 6.4 − 6.4230 = −0.0230
Example 2: Evaluate the following integral:
1
∫ (4 x − 3x
2
)dx by taking n = 10 and using the following rules:
0
(i) Trapezoidal rule and (ii) Simpson’s one-third rule. Also compare them
with the exact value and find the error in each case.
Solution: We tabulate f (x) = 4x–3x2, for x = 0, 0.1, 0.2, ..., 1.0.
x 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0
f ( x) 0.0 0.37 0.68 0.93 1.12 1.25 1.32 1.33 1.28 1.17 1.0
(i) Using trapezoidal rule, we have
∫_0^1 (4x − 3x²) dx = (0.1/2)[0 + 2(0.37 + 0.68 + 0.93 + 1.12 + 1.25 + 1.32 + 1.33 + 1.28 + 1.17) + 1.0]
= (0.1/2) × (18.90 + 1.0) = 0.995
(ii) Using Simpson's one-third rule, we have
∫_0^1 (4x − 3x²) dx = (0.1/3)[0 + 4(0.37 + 0.93 + 1.25 + 1.33 + 1.17) + 2(0.68 + 1.12 + 1.32 + 1.28) + 1.0]
= (0.1/3)[4 × 5.05 + 2 × 4.40 + 1.0]
= (0.1/3) × [30.0] = 1.00
The exact value is ∫_0^1 (4x − 3x²) dx = [2x² − x³]_0^1 = 1, so the error is 0.005 for the trapezoidal rule and 0 for Simpson's one-third rule.
Example 3: Evaluate ∫_0^1 e^(−x²) dx by Simpson's one-third rule and by the trapezoidal rule, taking h = 0.1.
Solution: We tabulate f(x) = e^(−x²):

x      e^(−x²)
0.0    1.000000
0.1    0.990050
0.2    0.960789
0.3    0.913931
0.4    0.852144
0.5    0.778801
0.6    0.697676
0.7    0.612626
0.8    0.527292
0.9    0.444854
1.0    0.367879

The required sums are f_0 + f_10 = 1.367879, f_1 + f_3 + f_5 + f_7 + f_9 = 3.740262 and f_2 + f_4 + f_6 + f_8 = 3.037901.
Hence, by Simpson's one-third rule we have,
∫_0^1 e^(−x²) dx = (h/3)[f_0 + f_10 + 4(f_1 + f_3 + f_5 + f_7 + f_9) + 2(f_2 + f_4 + f_6 + f_8)]
= (0.1/3)[1.367879 + 4 × 3.740262 + 2 × 3.037901]
= (0.1/3)[1.367879 + 14.961048 + 6.075802]
= 2.2404729/3 = 0.7468243 ≈ 0.746824
Using trapezoidal rule, we get
∫_0^1 e^(−x²) dx = (h/2)[f_0 + f_10 + 2(f_1 + f_2 + ... + f_9)]
= (0.1/2)[1.367879 + 2 × 6.778163]
= 0.7462103
Example 4: Compute the integral I = ∫_0^4 (x³ − 2x² + 1) dx, using Simpson's one-third rule taking h = 1, and show that the computed value agrees with the exact value. Give reasons for this.
Solution: The values of f (x) = x3–2x2+1 are tabulated for x = 0, 1, 2, 3, 4 as
x 0 1 2 3 4
f ( x) 1 0 1 10 33
I = (1/3)[1 + 4 × 0 + 2 × 1 + 4 × 10 + 33] = 76/3 = 25 1/3
The exact value = 4⁴/4 − 2 × 4³/3 + 4 = 64 − 128/3 + 4 = 25 1/3
Thus, the computed value by Simpson’s one-third rule is equal to the exact
value. This is because the error in Simpson’s one-third rule contains the fourth
order derivative and so this rule gives the exact result when the integrand is a
polynomial of degree less than or equal to three.
Example 5: Evaluate ∫_{0.1}^{0.5} e^x dx by Simpson's one-third rule and compare the result with the exact value, by taking h = 0.1.
Solution: We tabulate the values of f(x) = e^x for x = 0.1 to 0.5 with spacing h = 0.1.
I_S = (0.1/3)[1.1052 + 4(1.2214 + 1.4918) + 2 × 1.3498 + 1.6487]
= (0.1/3)[2.7539 + 4 × 2.7132 + 2.6996] = (0.1/3)[16.3063] = 0.5435
The exact value is e^0.5 − e^0.1 = 1.6487 − 1.1052 = 0.5435, in agreement to four decimal places.
Example 6: Evaluate ∫_0^1 dx/(1 + x) by (i) trapezoidal rule and (ii) Simpson's one-third rule taking 10 sub-intervals. Hence, find log_e 2 and compare it with the exact value.
Solution: (i) Using the trapezoidal rule,
∫_0^1 dx/(1 + x) = (0.1/2)[1.500000 + 2 × (3.4595391 + 2.7281745)]
= (0.1/2)[1.500000 + 12.3754272] = 0.6937714
(ii) Using Simpson's one-third rule, we get
∫_0^1 dx/(1 + x) = (h/3)[f_0 + f_10 + 4(f_1 + f_3 + ... + f_9) + 2(f_2 + f_4 + ... + f_8)]
= (0.1/3)[1.500000 + 4 × 3.4595391 + 2 × 2.7281745]
= (0.1/3)[1.5 + 13.838156 + 5.456349] = (0.1/3) × 20.794505 = 0.6931501
(iii) Exact value:
∫_0^1 dx/(1 + x) = log_e 2 = 0.6931472
The trapezoidal rule gives the value of the integral having an error 0.6931472 − 0.6937714 = −0.0006242, while the error in the value by Simpson's one-third rule is −0.0000029.
Example 7: Compute ∫_0^{π/2} √(cos θ) dθ by (i) Simpson's one-third rule and (ii) Weddle's formula.
Solution: We tabulate f(θ) = √(cos θ) at intervals of 15°:

θ          0°   15°      30°      45°      60°      75°      90°
√(cos θ)   1    0.98281  0.93061  0.84089  0.70711  0.50874  0
(i) The value of the integral by Simpson's one-third rule, with h = π/12 = 0.26179, is given by,
I_S = (0.26179/3)[1 + 4 × (0.98281 + 0.84089 + 0.50874) + 2 × (0.93061 + 0.70711) + 0]
= (0.26179/3)[1 + 4 × 2.33244 + 2 × 1.63772]
= (0.26179/3) × 13.6052 = 1.18723
(ii) The value of the integral by Weddle's formula is,
I_W = (3/10) × 0.26179 × [1 + 5 × 0.98281 + 0.93061 + 6 × 0.84089 + 0.70711 + 5 × 0.50874 + 0]
= 0.078537 × 15.14081 = 1.18911
Example 8: Evaluate the integral ∫_0^{π/2} √(1 − 0.162 sin²θ) dθ by Weddle's formula.
Solution: On dividing the interval into six sub-intervals, the length of each sub-interval will be h = (1/6)(π/2) = 0.26179, i.e., 15°. For computing the integral by Weddle's formula, we tabulate f(θ) = √(1 − 0.162 sin²θ):

θ      0°    15°      30°      45°      60°      75°      90°
f(θ)   1.0   0.99455  0.97954  0.95864  0.93728  0.92133  0.91542
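Since the final value of this example is not shown here, a Python sketch of Weddle's formula (Equation (8.18)) can be used to finish the computation; the function name is illustrative:

```python
import math

def weddle(f, a, b):
    """Weddle's formula over six equal sub-intervals (Equation (8.18))."""
    h = (b - a) / 6
    y = [f(a + i * h) for i in range(7)]
    return 3 * h / 10 * (y[0] + 5 * y[1] + y[2] + 6 * y[3]
                         + y[4] + 5 * y[5] + y[6])

f = lambda t: math.sqrt(1 - 0.162 * math.sin(t) ** 2)
print(weddle(f, 0, math.pi / 2))  # the value of Example 8's integral
```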
As an illustration, consider the evaluation of ∫_1^2 dx/x using Simpson's one-third rule with an error less than 0.5 × 10⁻³, the error bound being (b − a)h⁴ max|f⁽ⁱᵛ⁾(x)|/180.
For the given problem, f(x) = 1/x, thus f⁽ⁱᵛ⁾(x) = (2 × 3 × 4)/x⁵. Hence,
max |f⁽ⁱᵛ⁾(x)| over [1, 2] = 24
Thus, (h⁴ × 1 × 24)/180 < 0.5 × 10⁻³, or h < 0.102
But h has to be chosen such that the interval [1, 2] is divided into an even number of sub-intervals. Hence we may take h = 0.1 < 0.102, for which n = 10, i.e., there will be 10 sub-intervals.
The value of the integral is,
∫_1^2 dx/x = (0.1/3)[1 + 1/2 + 4(1/1.1 + 1/1.3 + 1/1.5 + 1/1.7 + 1/1.9) + 2(1/1.2 + 1/1.4 + 1/1.6 + 1/1.8)]
= (0.1/3)[1.5 + 4 × 3.4595 + 2 × 2.7282]
= (0.1/3) × 20.7946 = 0.69315, which agrees with the exact value of log_e 2.
Interval Halving Technique
When the estimation of the truncation error is cumbersome, the method of interval
halving is used to compute an integral to the desired accuracy.
In the interval halving technique, an integral is first computed for some moderate value of h. Then, it is evaluated again for spacing h/2, i.e., with double the number of subdivisions. This requires the evaluation of the integrand at the new points of subdivision only; the previous function values with spacing h are also used.
Now the difference between the integrals I_h and I_{h/2} is used to check the accuracy. If the desired accuracy is not attained, the evaluation of the integral is made again with spacing h/4 and the accuracy condition is tested again. The evaluation of I_{h/4} will require the evaluation of the integrand at the new points of sub-division only.
Notes:
1. The initial choice of h is sometimes taken as (b − a)/m, where m = 2 for the trapezoidal rule and m = 4 for Simpson's one-third rule.
2. The method of interval halving is widely used for computer evaluation since it enables a general choice of h together with a check on the computations.
3. The truncation error R can be estimated by using Runge's principle given by,
R ≈ (1/3)|I_h − I_{h/2}| for the trapezoidal rule and R ≈ (1/15)|I_h − I_{h/2}| for Simpson's one-third rule.
Algorithm: Evaluation of an integral by Simpson’s one-third rule with interval
halving.
Step 1: Set/initialize a, b, ε
[a, b are limits of integration, ε is the error tolerance]
Step 2: Set h = (b − a)/2
Step 3: Compute S1 = f (a) + f (b)
Step 4: Compute S4 = f (a + h)
Step 5: Set S2 = 0, I1 = 0
Step 6: Compute I_2 = (S_1 + 4S_4 + S_2) × h/3
Step 8: Set h = h/2, I_1 = I_2
Step 9: Compute S2 = S2+ S4
Step 10: Set S4 = 0
Step 11: Set x = a + h
Step 12: Compute S4 = S4+ f (x)
Step 13: Set x = x + h
Step 14: If x < b, go to Step 12 else go to the next step Numerical Integration and
Numerical Differentiation
Step 15: Compute I_2 = (S_1 + 2S_2 + 4S_4) × h/3
Step 8: Set h = h/2
Step 9: Set x = a + h
Step 10: Set I2 = I2+ h × f (x)
Step 11: If x < b, go to Step 9 else go to next step
Step 12: Go to Step 7
Step 13: Write I 2 , h,
Step 14: End
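The interval-halving idea with Runge's error estimate can be sketched in Python as below; for brevity this version re-evaluates all ordinates at each halving instead of only the new ones:

```python
import math

def simpson(f, a, b, n):
    h = (b - a) / n
    return h / 3 * (f(a) + f(b)
                    + 4 * sum(f(a + i * h) for i in range(1, n, 2))
                    + 2 * sum(f(a + i * h) for i in range(2, n, 2)))

def simpson_halving(f, a, b, eps=1e-8, n=4):
    """Halve the spacing until Runge's estimate |I_h - I_{h/2}|/15 < eps."""
    prev = simpson(f, a, b, n)
    while True:
        n *= 2
        cur = simpson(f, a, b, n)
        if abs(cur - prev) / 15 < eps:
            return cur
        prev = cur

print(simpson_halving(lambda x: 1 / (1 + x), 0, 1))  # close to ln 2
```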
I = ∫_a^b F(x) dx     (8.25)
Now for numerical integration, we can divide the interval [a, b] into n sub-
intervals with spacing h and then use a suitable rule of numerical integration.
Trapezoidal Rule for Double Integral
By trapezoidal rule, we can write the integral Equation (8.25) as,
∫_a^b F(x) dx = (h/2)[F_0 + F_n + 2(F_1 + F_2 + F_3 + ... + F_{n−1})]     (8.26)
where x_0 = a, x_n = b, h = (b − a)/n and
F_i = F(x_i) = ∫_c^d f(x_i, y) dy, x_i = a + ih     (8.27)
for i = 0, 1, 2,..., n.
Each F_i can be evaluated by trapezoidal rule. For this, the interval [c, d] may be divided into m sub-intervals, each of length k = (d − c)/m. Thus we can write,
F_i = (k/2)[f(x_i, y_0) + f(x_i, y_m) + 2{f(x_i, y_1) + f(x_i, y_2) + ... + f(x_i, y_{m−1})}]     (8.28)
where y_0 = c, y_m = d, y_j = c + jk; j = 0, 1,..., m.
This Equation (8.28) can be written in a compact form,
F_i = (k/2)[f_{i0} + f_{im} + 2(f_{i1} + f_{i2} + ... + f_{i,m−1})]     (8.29)
The relation Equations (8.26) and (8.29) together form the trapezoidal rule
for evaluation of double integrals.
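A Python sketch of the double-integral trapezoidal rule of Equations (8.26)-(8.29); names are illustrative, and the check value comes from Example 11 further below:

```python
def trap2d(f, a, b, c, d, n, m):
    """Trapezoidal rule for the repeated integral of f(x, y) over [a,b]x[c,d]."""
    h, k = (b - a) / n, (d - c) / m

    def F(x):  # inner integral over y, by the trapezoidal rule (8.28)
        ys = [f(x, c + j * k) for j in range(m + 1)]
        return k / 2 * (ys[0] + ys[m] + 2 * sum(ys[1:m]))

    Fs = [F(a + i * h) for i in range(n + 1)]          # outer rule (8.26)
    return h / 2 * (Fs[0] + Fs[n] + 2 * sum(Fs[1:n]))

# Example 11's integral: ∫_1^2 ∫_1^2 dx dy/(x + y) with h = k = 0.5
print(round(trap2d(lambda x, y: 1 / (x + y), 1, 2, 1, 2, 2, 2), 6))  # 0.343304
```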
where h = (b − a)/n, n is even, and
F_i = F(x_i) = ∫_c^d f(x_i, y) dy, x_i = a + ih, for i = 0, 1, 2,..., n     (8.31)
and, x_0 = a and x_n = b.
For evaluating I, we have to evaluate each of the (n + 1) integrals given in Equation (8.31). For evaluation of F_i, we can use Simpson's one-third rule by dividing [c, d] into m sub-intervals. F_i can be written as,
F_i = (k/3)[f(x_i, y_0) + f(x_i, y_m) + 2{f(x_i, y_2) + f(x_i, y_4) + ... + f(x_i, y_{m−2})} + 4{f(x_i, y_1) + f(x_i, y_3) + ... + f(x_i, y_{m−1})}]     (8.32)
Equation (8.32) can be written in a compact notation as,
F_i = (k/3)[f_{i0} + f_{im} + 2(f_{i2} + f_{i4} + ... + f_{i,m−2}) + 4(f_{i1} + f_{i3} + ... + f_{i,m−1})]
where f_{ij} = f(x_i, y_j), j = 0, 1, 2,..., m.
Example 9: Evaluate the double integral ∬_R (x² + y²) dx dy, where R is the rectangular region 1 ≤ x ≤ 3, 1 ≤ y ≤ 2, by Simpson's one-third rule taking h = k = 0.5.
Solution: We write the integral in the form of a repeated integral,
I = ∫_1^3 dx ∫_1^2 (x² + y²) dy
Taking n = 4 sub-intervals along x, so that h = 2/4 = 0.5,
∴ I = ∫_1^3 F(x) dx = (0.5/3)[F_0 + F_4 + 2F_2 + 4(F_1 + F_3)]
where F(x) = ∫_1^2 (x² + y²) dy
F_i = F(x_i) = ∫_1^2 (x_i² + y²) dy; x_i = 1 + 0.5i, where i = 0, 1, 2, 3, 4.
For evaluating the F_i's, we take k = 1/2 = 0.5 and get,
F_0 = ∫_1^2 (1 + y²) dy = (0.5/3)[1 + 1² + 4{1 + (1.5)²} + 1 + 2²] = (0.5/3) × 20
F_1 = ∫_1^2 ((1.5)² + y²) dy = (0.5/3)[(1.5)² + 1² + 4{(1.5)² + (1.5)²} + (1.5)² + 2²] = (0.5/3) × 27.50
F_2 = ∫_1^2 (2² + y²) dy = (0.5/3)[2² + 1² + 4{2² + (1.5)²} + 2² + 2²] = (0.5/3) × 38
F_3 = ∫_1^2 ((2.5)² + y²) dy = (0.5/3)[(2.5)² + 1² + 4{(2.5)² + (1.5)²} + (2.5)² + 2²] = (0.5/3) × 51.50
F_4 = ∫_1^2 (3² + y²) dy = (0.5/3)[3² + 1² + 4{3² + (1.5)²} + 3² + 2²] = (0.5/3) × 68
∴ I = (0.25/9)[20 + 68 + 2 × 38 + 4 × (27.50 + 51.50)]
= (0.25/9) × 480 = 13.333
Example 10: Evaluate the double integral of Example 9 by the trapezoidal rule, taking h = k = 0.5.
Solution: I_T = ∫_1^3 F(x) dx = (0.5/2)[F_0 + F_4 + 2(F_1 + F_2 + F_3)]
where F_i = F(x_i) = ∫_1^2 (x_i² + y²) dy, x_i = 1 + 0.5i, i = 0, 1, 2, 3, 4.
Thus, F_0 = ∫_1^2 (1 + y²) dy = (0.5/2)[1 + 1 + 2{1² + (1.5)²} + 1² + 2²] = (0.5/2) × 13.50 = 3.375
F_1 = ∫_1^2 [(1.5)² + y²] dy = (0.5/2)[(1.5)² + 1² + 2{(1.5)² + (1.5)²} + (1.5)² + 2²] = (0.5/2) × 18.50 = 4.625
F_2 = ∫_1^2 [2² + y²] dy = (0.5/2)[2² + 1² + 2{2² + (1.5)²} + 2² + 2²] = (0.5/2) × 25.50 = 6.375
F_3 = ∫_1^2 [(2.5)² + y²] dy = (0.5/2)[(2.5)² + 1² + 2{(2.5)² + (1.5)²} + (2.5)² + 2²] = (0.5/2) × 34.50 = 8.625
F_4 = ∫_1^2 [3² + y²] dy = (0.5/2)[3² + 1² + 2{3² + (1.5)²} + 3² + 2²] = (0.5/2) × 45.50 = 11.375
∴ I_T = (0.5/2) × [3.375 + 11.375 + 2(4.625 + 6.375 + 8.625)]
= (1/4)[14.750 + 2 × 19.625]
= (1/4)[14.750 + 39.250] = (1/4) × 54 = 13.5
Example 11: Evaluate the following double integral using trapezoidal rule with length of sub-intervals h = k = 0.5:
∫_1^2 ∫_1^2 dx dy/(x + y)
Solution: Let f(x, y) = 1/(x + y). The nodes are x = 1, 1.5, 2 and y = 1, 1.5, 2.
By trapezoidal rule with h = 0.5, the integral
I = ∫_1^2 ∫_1^2 f(x, y) dx dy is computed as,
I = ((0.5 × 0.5)/4)[f(1, 1) + f(2, 1) + f(1, 2) + f(2, 2) + 2{f(1.5, 1) + f(1, 1.5) + f(2, 1.5) + f(1.5, 2)} + 4 f(1.5, 1.5)]
= (1/16)[1/2 + 1/3 + 1/3 + 1/4 + 2(2/5 + 2/5 + 2/7 + 2/7) + 4 × 1/3]
= (1/16)[0.666667 + 0.75 + 2 × (4/5 + 4/7) + 4/3]
= (1/16)[5.492857]
= 0.343304
Example 12: Evaluate ∫_1^2 ∫_1^2 dx dy/(x + y) by Simpson's one-third rule. Take sub-intervals of length h = k = 0.5.
Solution: The value of the integral I = ∫_1^2 ∫_1^2 f(x, y) dx dy by Simpson's one-third rule is,
I = ((0.5 × 0.5)/9)[f(1, 1) + f(2, 1) + f(1, 2) + f(2, 2) + 4{f(1, 1.5) + f(1.5, 1) + f(2, 1.5) + f(1.5, 2)} + 16 f(1.5, 1.5)]
= (1/36)[1/2 + 1/3 + 1/3 + 1/4 + 4(2/5 + 2/5 + 2/7 + 2/7) + 16 × 1/3]
= (1/36)[0.666667 + 0.75 + 4 × (4/5 + 4/7) + 16/3]
= (1/36)[12.235714] = 0.339880
Gaussian Quadrature
We have seen that Newton-Cotes formula of numerical integration is of the form,
∫_a^b f(x) dx ≈ Σ_{i=0}^{n} c_i f(x_i)     (8.33)
where x_i = a + ih, i = 0, 1, 2, ..., n; h = (b − a)/n
This formula uses function values at equally spaced points and gives the exact result for f(x) being a polynomial of degree less than or equal to n. The Gaussian quadrature formula is similar to Equation (8.33), given by,
∫_{−1}^{1} F(u) du ≈ Σ_{i=1}^{n} w_i F(u_i)     (8.34)
where the w_i's and u_i's, called weights and abscissae respectively, are derived such that the above Equation (8.34) gives the exact result for F(u) being a polynomial of degree less than or equal to 2n − 1.
In the Newton-Cotes Equation (8.33), the coefficients c_i and the abscissae x_i are rational numbers, but the weights w_i and the abscissae u_i are usually irrational numbers. Even though the Gaussian quadrature formula gives the integration of F(u) between the limits −1 to +1, we can use it to find the integral of f(x) from a to b by a simple transformation given by,
x = ((b − a)/2) u + (a + b)/2     (8.35)
Evidently, the limits for u become −1 to 1 corresponding to x = a to b, and writing,
f(x) = f(((b − a)/2) u + (a + b)/2) = F(u)
we have, ∫_a^b f(x) dx = ((b − a)/2) ∫_{−1}^{1} F(u) du     (8.36)
It can be shown that the ui are the zeros of the Legendre polynomial Pn(u) of
degree n. These roots are real but irrational and the weights are also irrational.
Given below is a simple formulation of the relevant equations to determine ui
and wi. Let F(u) be a polynomial of the form,
2 n −1
F (u ) = ∑a u
k =0
k
k
(8.37)
1
2 2 2
Or, ∫ F (u )du = 2a
−1
0
+ a + a + ... +
3 2 5 4
a
2n − 2 2 n − 2
(8.39)
Equation (8.34) gives,
∫_{−1}^{1} F(u) du = Σ_{i=1}^{n} w_i Σ_{k=0}^{2n−1} a_k u_i^k
= Σ_{i=1}^{n} w_i (a_0 + a_1 u_i + a_2 u_i² + ... + a_{2n−1} u_i^{2n−1})     (8.40)
The Equations (8.39) and (8.40) are assumed to be identical for all polynomials
of degree less than or equal to 2n–1 and hence equating the coefficients of ak on
either side we obtain the following 2n equations for the 2n unknowns w1, w2,...,wn
and u1, u2,...,un.
Σ_{i=1}^{n} w_i = 2, Σ_{i=1}^{n} w_i u_i = 0, Σ_{i=1}^{n} w_i u_i² = 2/3, ..., Σ_{i=1}^{n} w_i u_i^{2n−1} = 0     (8.41)
For n = 2, the first equation gives w_1 + w_2 = 2 and, taking u_2 = −u_1 by symmetry, w_1 = w_2 = 1. The third equation then gives 2u_1² = 2/3, i.e., u_1 = 1/√3, u_2 = −1/√3.
Hence, the two-point Gauss-Legendre quadrature formula is,
∫_{−1}^{1} F(u) du = F(1/√3) + F(−1/√3)
The Table 8.1 gives the abscissae and weights of the Gauss-Legendre quadra-
ture for values of n from 2 to 6.
Table 8.1 Values of Weights and Abscissae for Gauss-Legendre Quadrature
n Weights Abscissae
2 1.0 ± 0.57735027
3 0.88888889 0.0
0.55555556 ± 0.77459667
4 0.65214515 ± 0.33998104
0.34785485 ± 0.86113631
5 0.56888889 0.0
0.47862867 ± 0.53846931
0.23692689 ± 0.90617985
6 0.46791393 ± 0.23861919
0.36076157 ± 0.66120939
0.17132449 ± 0.93246951
It is seen that the abscissae are symmetrical with respect to the origin, and symmetrically placed abscissae have equal weights.
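Using the n = 3 row of Table 8.1 together with the transformation of Equation (8.35), a three-point rule can be sketched in Python (illustrative names):

```python
# Weights and abscissae from Table 8.1 (n = 3)
GAUSS3 = [(0.88888889, 0.0), (0.55555556, 0.77459667), (0.55555556, -0.77459667)]

def gauss3(f, a, b):
    """Three-point Gauss-Legendre rule: map [a, b] to [-1, 1] via (8.35)."""
    half, mid = (b - a) / 2, (a + b) / 2
    return half * sum(w * f(half * u + mid) for w, u in GAUSS3)

print(gauss3(lambda x: 1 / (1 + x), 0, 1))  # ~0.693122 (cf. Example 15 below)
```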
Example 13: Compute ∫_0^2 (1 + x) dx by the Gauss two-point quadrature formula.
Solution: Substituting x = u + 1, the given integral reduces to
I = ∫_{−1}^{1} (u + 2) du.
Using the two-point Gauss quadrature formula, we have
I = (0.57735027 + 2) + (−0.57735027 + 2) = 4.0
As expected, the result is equal to the exact value of the integral.
Example 14: Show that the composite Gauss two-point quadrature formula for evaluating ∫_a^b f(x) dx over N sub-intervals is

∫_a^b f(x) dx = (h/2) Σ_{i=0}^{N−1} [f(r_i) + f(s_i)]

where r_i = x_i + ph, s_i = x_i + (1 − p)h, p = (3 − √3)/6.

Solution: We subdivide the interval [a, b] into N sub-intervals, each of length h, given by h = (b − a)/N. On the sub-interval [x_i, x_{i+1}], the substitution x = x_i + h/2 + (h/2)u maps the interval onto [−1, 1], and the two-point formula gives

I_i = (h/2)[f(x_i + h/2 − h/(2√3)) + f(x_i + h/2 + h/(2√3))]
    = (h/2)[f(r_i) + f(s_i)]

where r_i = x_i + ph, s_i = x_i + (1 − p)h, p = (3 − √3)/6.
Hence, ∫_a^b f(x) dx = Σ_{i=0}^{N−1} I_i = (h/2) Σ_{i=0}^{N−1} [f(r_i) + f(s_i)]
Note: Instead of considering the Gauss integration formula for more and more points for better accuracy, one can use the two-point composite formula over a larger number of sub-intervals.
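A minimal sketch of such a composite two-point formula in Python, using p = (3 − √3)/6 from Example 14 (the integrand and interval below are illustrative):

```python
import math

def gauss2_composite(f, a, b, N):
    """Composite two-point Gauss-Legendre rule over N equal sub-intervals.

    Each sub-interval [x_i, x_i + h] contributes (h/2)(f(r_i) + f(s_i)),
    with r_i = x_i + p*h, s_i = x_i + (1 - p)*h and p = (3 - sqrt(3))/6,
    exactly as in Example 14.
    """
    h = (b - a) / N
    p = (3 - math.sqrt(3)) / 6
    total = 0.0
    for i in range(N):
        x = a + i * h
        total += f(x + p * h) + f(x + (1 - p) * h)
    return 0.5 * h * total

# Illustrative use: the integral of 1/x over [1, 2] is ln 2 = 0.6931...
approx = gauss2_composite(lambda x: 1.0 / x, 1.0, 2.0, 8)
```

Only two function evaluations per sub-interval are needed, yet the rule is exact for cubics on each piece.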
Example 15: Evaluate the following integral by the Gauss three-point quadrature formula:

I = ∫_0^1 dx/(1 + x)

Solution: We first transform the interval [0, 1] to the interval (–1, 1) by substituting t = 2x – 1, so that

∫_0^1 dx/(1 + x) = ∫_{−1}^{1} dt/(t + 3)

Now by the Gauss three-point quadrature we have,

I = (1/9)[8F(0) + 5F(0.77459667) + 5F(−0.77459667)] with F(t) = 1/(t + 3)

∴ I ≈ 0.693122

The exact value of ∫_0^1 dx/(1 + x) = ln 2 = 0.693147

Error = 0.000025
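The computation of Example 15 can be reproduced with a few lines of Python; a sketch:

```python
import math

def gauss3(F):
    # Three-point Gauss-Legendre rule on [-1, 1]: weight 8/9 at t = 0 and
    # weights 5/9 at t = +/- sqrt(3/5) = +/- 0.77459667 (see Table 8.1).
    t = math.sqrt(3.0 / 5.0)
    return (8.0 * F(0.0) + 5.0 * F(t) + 5.0 * F(-t)) / 9.0

# Example 15: substituting t = 2x - 1 turns the integral of 1/(1+x)
# over [0, 1] into the integral of 1/(t+3) over [-1, 1].
I = gauss3(lambda t: 1.0 / (t + 3.0))
```

Three evaluations of the integrand already reproduce ln 2 to about five decimal places.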
Romberg’s Procedure
This procedure is used to find a better estimate of an integral using the evaluation
of the integral for two values of the width of the sub-intervals.
Let I₁ and I₂ be the values of an integral I = ∫_a^b f(x) dx computed with two different numbers of sub-intervals, of widths h₁ and h₂ respectively, using the trapezoidal rule. Let E₁ and E₂ be the corresponding truncation errors. Since the error in the trapezoidal rule is of order h², we can write,
I = I₁ + Kh₁² and I = I₂ + Kh₂², where K is approximately the same in both.

∴ I₁ + Kh₁² = I₂ + Kh₂²

so that K ≈ (I₁ − I₂)/(h₂² − h₁²)

Thus, I ≈ I₁ + ((I₁ − I₂)/(h₂² − h₁²))·h₁² = (I₁h₂² − I₂h₁²)/(h₂² − h₁²)

In the Romberg procedure, we take h₂ = h₁/2 and we then have,

I = (I₁h₁²/4 − I₂h₁²)/(h₁²/4 − h₁²) = (4I₂ − I₁)/3

Or, I = I₂ + (I₂ − I₁)/3
This is known as Romberg’s formula for trapezoidal integration.
The use of Romberg procedure gives a better estimate of the integral without
any more function evaluation. Further, the evaluation of I2 with h/2 uses the func-
tion values required in evaluation of I1.
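The procedure is easy to express in code; a minimal sketch (the integrand below is illustrative):

```python
def trapezoid(f, a, b, n):
    # Composite trapezoidal rule with n sub-intervals of width h.
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

def romberg_step(I1, I2):
    # One Romberg extrapolation: I2 was computed with half the width of I1.
    return I2 + (I2 - I1) / 3.0

f = lambda x: 1.0 / (1.0 + x * x)   # illustrative integrand
I1 = trapezoid(f, 0.0, 1.0, 2)      # h1 = 0.5
I2 = trapezoid(f, 0.0, 1.0, 4)      # h2 = h1/2 = 0.25
I = romberg_step(I1, I2)
```

The extrapolated value I is correct to several more digits than either trapezoidal estimate, with no new function evaluations beyond those used for I2.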
Example 16: Evaluate I = ∫_0^1 dx/(1 + x²) by the trapezoidal rule with h₁ = 0.5 and h₂ = 0.25, and then use the Romberg procedure for a better estimate of I. Compare the result with the exact value.

Solution: We tabulate the values of x and y = 1/(1 + x²) with h = 0.25.
x 0 0.25 0.5 0.75 1.0
y 1 0.9412 0.80 0.64 0.5
The trapezoidal rule gives,

I₁ = (0.5/2)[1 + 0.5 + 2 × 0.8] = 0.7750
I₂ = (0.25/2)[1 + 0.5 + 2 × (0.9412 + 0.8 + 0.64)] = 0.7828

The evaluation of I₂ reuses the function values needed for I₁.

By the Romberg formula,

I ≈ I₂ + (1/3)(I₂ − I₁)
  = 0.7828 + (0.7828 − 0.7750)/3
  = 0.7828 + 0.0026
  = 0.7854

The exact integral is [tan⁻¹x]₀¹ = π/4 = 0.7854.

Thus we can take the result correct to four decimal places.
Example 17: Evaluate I = ∫_1^2 dx/x by the trapezoidal rule with two and four sub-intervals, and then apply the Romberg procedure.

Solution:

I₁ = (0.5/2)[1 + 0.5 + 2 × 0.6667] = 0.7084
I₂ = (0.25/2)[1 + 0.5 + 2 × (0.8 + 0.6667 + 0.5714)] = 0.6970

By the Romberg procedure,

I = I₂ + (I₂ − I₁)/3 ≈ 0.6970 + (1/3)(−0.0114)
  = 0.6970 − 0.0038 = 0.6932
Example 18: Compute the value of ∫_0^1 dx/(1 + x), (i) by the Gauss two-point and (ii) by the Gauss three-point quadrature formula.

Solution: Substituting x = (1 + t)/2, we get

∫_0^1 dx/(1 + x) = (1/2) ∫_{−1}^{1} dt/(1 + (1 + t)/2) = ∫_{−1}^{1} dt/(3 + t)
(i) By the Gauss two-point quadrature, ∫_{−1}^{1} F(t) dt = F(1/√3) + F(−1/√3), we get,

∫_{−1}^{1} dt/(3 + t) = 1/(3 + 1/√3) + 1/(3 − 1/√3) = 0.6923

(ii) By the Gauss three-point quadrature,

∫_{−1}^{1} dt/(3 + t) = (1/9)[8 × (1/3) + 5 × 1/(3 + 0.77459667) + 5 × 1/(3 − 0.77459667)]
= 0.6931

in agreement with Example 15.
Example 19: Compute ∫_1^2 eˣ dx by the Gauss three-point quadrature.

Solution: We first transform the integral by substituting x = ((b − a)/2)t + (b + a)/2 = t/2 + 3/2:

∫_1^2 eˣ dx = (1/2) ∫_{−1}^{1} e^{t/2 + 3/2} dt = (1/2) e^{3/2} ∫_{−1}^{1} e^{t/2} dt

= (1/2) e^{3/2} [0.88888889 × e⁰ + 0.55555556 × (e^{0.77459667/2} + e^{−0.77459667/2})]

= 4.67077
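Example 19's transformation generalizes to any interval [a, b]; a sketch in Python:

```python
import math

def gauss3_ab(f, a, b):
    """Three-point Gauss-Legendre quadrature on [a, b].

    Uses the substitution x = (b - a)/2 * t + (b + a)/2 of Example 19, so
    that the integral over [a, b] equals (b - a)/2 times the integral of
    f(x(t)) over [-1, 1].
    """
    t = math.sqrt(3.0 / 5.0)
    half, mid = 0.5 * (b - a), 0.5 * (b + a)
    F = lambda u: f(half * u + mid)
    return half * (8.0 * F(0.0) + 5.0 * F(t) + 5.0 * F(-t)) / 9.0

val = gauss3_ab(math.exp, 1.0, 2.0)   # exact value: e**2 - e = 4.670774...
```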
For computing the derivatives at a point near the beginning of an equally spaced table, Newton's forward difference interpolation formula is used, whereas Newton's backward difference interpolation formula is used for computing the derivatives at a point near the end of the table. Again, for computing the derivatives at a point near the middle of the table, the derivatives of the central difference interpolation formula are used. If, however, the arguments of the table are unequally spaced, the derivatives of the Lagrange interpolating polynomial are used for computing the derivatives of the function.
Differentiation Using Newton’s Backward Difference Interpolation Formula Numerical Integration and
Numerical Differentiation
For an equally spaced table of a function, Newton’s backward difference
interpolation formula is,
(v ) yn v yn
v(v 1) 2
yn
v (v 1)(v 2) 3
yn
v(v 1)(v 2)(v 3) 4
yn ... NOTES
2 ! 3 ! 4 !
v(v 1)...(v n 1) n
yn
n !
x xn
where v
h
The derivatives dy/dx and d²y/dx², obtained by differentiating the above formula, are given by,

dy/dx = (1/h)[∇y_n + ((2v + 1)/2)∇²y_n + ((3v² + 6v + 2)/6)∇³y_n + ((2v³ + 9v² + 11v + 3)/12)∇⁴y_n + ...]     (8.46)

d²y/dx² = (1/h²)[∇²y_n + (v + 1)∇³y_n + ((6v² + 18v + 11)/12)∇⁴y_n + ...]     (8.47)

For a given x near the end of the table, the values of dy/dx and d²y/dx² are computed by first computing v = (x − x_n)/h and using the above formulae. At the tabulated point x_n, the derivatives are given by,

y′(x_n) = (1/h)[∇y_n + (1/2)∇²y_n + (1/3)∇³y_n + (1/4)∇⁴y_n + ...]     (8.48)

y″(x_n) = (1/h²)[∇²y_n + ∇³y_n + (11/12)∇⁴y_n + (5/6)∇⁵y_n + ...]     (8.49)
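Equations (8.48) and (8.49) can be sketched in Python, truncating each series at the differences the table actually provides:

```python
def backward_diffs(y):
    # Backward differences at the last tabulated point x_n:
    # returns [y_n, del y_n, del^2 y_n, ...].
    diffs, col = [y[-1]], list(y)
    while len(col) > 1:
        col = [col[i + 1] - col[i] for i in range(len(col) - 1)]
        diffs.append(col[-1])
    return diffs

def derivatives_at_xn(y, h):
    # Equations (8.48) and (8.49), truncated to the available differences.
    d = backward_diffs(y)
    y1 = sum(d[k] / k for k in range(1, len(d))) / h
    coeffs = [0.0, 0.0, 1.0, 1.0, 11.0 / 12.0, 5.0 / 6.0]
    y2 = sum(c * dk for c, dk in zip(coeffs, d)) / h ** 2
    return y1, y2

# Check on y = x**2 tabulated at x = 0, 1, 2, 3: y'(3) = 6, y''(3) = 2.
yp, ypp = derivatives_at_xn([0.0, 1.0, 4.0, 9.0], 1.0)
```

For a quadratic the second differences are constant, so the truncated series are exact here.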
Example 20: Compute the values of f ′(2.1), f ′′(2.1), f ′(2.0) and f ′′(2.0) when f
(x) is not known explicitly, but the following table of values is given:
x f(x)
2.0 0.69315
2.2 0.78846
2.4 0.87547
Solution: Since the points are equally spaced, we form the finite difference table.

x      f(x)       Δf(x)      Δ²f(x)
2.0    0.69315
                  0.09531
2.2    0.78846               −0.00830
                  0.08701
2.4    0.87547

At the tabulated point x₀ = 2.0,

f′(2.0) = (1/0.2)[Δf₀ − (1/2)Δ²f₀]
        = (1/0.2)[0.09531 + (1/2) × 0.00830]
        = 0.09946/0.2 = 0.4973

f″(2.0) = (1/(0.2)²) × (−0.00830)
        = −0.2075 ≈ −0.21

For f′(2.1) and f″(2.1), we use the formulae based on Newton's forward difference interpolation with u = (2.1 − 2.0)/0.2 = 0.5:

f′(2.1) = (1/h)[Δf₀ + ((2u − 1)/2)Δ²f₀] = (1/0.2)[0.09531 + 0 × (−0.00830)] = 0.4766

f″(2.1) = (1/h²)Δ²f₀ = −0.00830/0.04 = −0.2075
Example 21: For the function f(x) whose values are given in the table below
compute values of f ′(1), f ′′(1), f ′(5.0), f ′′(5.0).
x 1 2 3 4 5 6
f ( x) 7.4036 7.7815 8.1291 8.4510 8.7506 9.0309
Solution: Since f(x) is known at equally spaced points, we form the finite difference table to be used in the differentiation formulae based on Newton's interpolating polynomial.

x    f(x)      Δf(x)     Δ²f(x)     Δ³f(x)     Δ⁴f(x)     Δ⁵f(x)
1    7.4036
               0.3779
2    7.7815              −0.0303
               0.3476                0.0046
3    8.1291              −0.0257                −0.0012
               0.3219                0.0034                 0.0008
4    8.4510              −0.0223                −0.0004
               0.2996                0.0030
5    8.7506              −0.0193
               0.2803
6    9.0309
To calculate f′(1) and f″(1), we use the derivative formulae based on Newton's forward difference interpolation at the tabulated point, given by,

f′(x₀) = (1/h)[Δf₀ − (1/2)Δ²f₀ + (1/3)Δ³f₀ − (1/4)Δ⁴f₀ + (1/5)Δ⁵f₀]

f″(x₀) = (1/h²)[Δ²f₀ − Δ³f₀ + (11/12)Δ⁴f₀ − (5/6)Δ⁵f₀]

∴ f′(1) = (1/1)[0.3779 − (1/2)(−0.0303) + (1/3)(0.0046) − (1/4)(−0.0012) + (1/5)(0.0008)]
        = 0.39504

f″(1) = −0.0303 − 0.0046 + (11/12)(−0.0012) − (5/6)(0.0008)
      = −0.0367
Similarly, for evaluating f′(5.0) and f″(5.0), we use the formulae based on the backward differences at x = 5:

f′(x_n) = (1/h)[∇f_n + (1/2)∇²f_n + (1/3)∇³f_n + (1/4)∇⁴f_n + (1/5)∇⁵f_n]

f″(x_n) = (1/h²)[∇²f_n + ∇³f_n + (11/12)∇⁴f_n + (5/6)∇⁵f_n]

∴ f′(5) = 0.2996 + (1/2)(−0.0223) + (1/3)(0.0034) + (1/4)(−0.0012)
        = 0.2893

f″(5) = −0.0223 + 0.0034 + (11/12)(−0.0012)
      = −0.0200
Example 22: Compute the values of y ′(0), y ′′(0.0), y ′(0.02) and y ′′(0.02) for the
function y = f(x) given by the following tabular values:
Solution: Since the values of x for which the derivatives are to be computed lie
near the beginning of the equally spaced table, we use the differentiation formulae
based on Newton’s forward difference interpolation formula. We first form the
finite difference table.
x      y         Δy        Δ²y       Δ³y       Δ⁴y
0.00   0.00000
                 0.10017
0.05   0.10017             0.00100
                 0.10117             0.00101
0.10   0.20134             0.00201             0.00003
                 0.10318             0.00104
0.15   0.30452             0.00305             0.00003
                 0.10623             0.00107
0.20   0.41075             0.00412
                 0.11035
0.25   0.52110
The derivative formulae based on Newton's forward difference interpolation are,

y′(x) = (1/h)[Δy₀ + ((2u − 1)/2)Δ²y₀ + ((3u² − 6u + 2)/6)Δ³y₀ + ((2u³ − 9u² + 11u − 3)/12)Δ⁴y₀]

y″(x) = (1/h²)[Δ²y₀ + (u − 1)Δ³y₀ + ((6u² − 18u + 11)/12)Δ⁴y₀]

For x = 0.02, u = (0.02 − 0.0)/0.05 = 0.4, so

y′(0.02) = (1/0.05)[0.10017 + ((2 × 0.4 − 1)/2) × 0.00100 + ((3 × 0.4² − 6 × 0.4 + 2)/6) × 0.00101 + ((2 × 0.4³ − 9 × 0.4² + 11 × 0.4 − 3)/12) × 0.00003]
= 2.0017

y″(0.02) = (1/(0.05)²)[0.00100 + (0.4 − 1) × 0.00101 + ((6 × 0.4² − 18 × 0.4 + 11)/12) × 0.00003]
= 0.1624

At the tabulated point x₀ = 0 (u = 0),

y′(0) = (1/0.05)[0.10017 − (1/2) × 0.00100 + (1/3) × 0.00101 − (1/4) × 0.00003] = 2.0000

y″(0) = (1/(0.05)²)[0.00100 − 0.00101 + (11/12) × 0.00003] ≈ 0.007
f′(x₀) = (1/h)[Δf₀ − (1/2)Δ²f₀ + (1/3)Δ³f₀]

∴ f′(6.0) = (1/0.1)[−0.0248 − (1/2) × 0.0023 + (1/3) × 0.0003]
= 10 × [−0.0248 − 0.00115 + 0.0001]
= −0.2585
For evaluating f″(6.3), we use the formula obtained by differentiating Newton's backward difference interpolation formula. It is given by,

f″(x_n) = (1/h²)[∇²f_n + ∇³f_n]

f″(6.3) = (1/(0.1)²)[0.0026 + 0.0003] = 0.29
Example 24: Compute the values of y ′(1.00) and y ′′(1.00) using suitable numeri-
cal differentiation formulae on the following table of values of x and y:
Solution: For computing the derivatives, we use the formulae derived by differentiating Newton's forward difference interpolation formula, given by,

f′(x₀) = (1/h)[Δy₀ − (1/2)Δ²y₀ + (1/3)Δ³y₀ − (1/4)Δ⁴y₀ + ...]

f″(x₀) = (1/h²)[Δ²y₀ − Δ³y₀ + (11/12)Δ⁴y₀ + ...]
Now, we form the finite difference table.

x      y         Δy        Δ²y        Δ³y        Δ⁴y
1.00   1.00000
                 0.02470
1.05   1.02470             −0.00059
                 0.02411               0.00005
1.10   1.04881             −0.00054              −0.00002
                 0.02357               0.00003
1.15   1.07238             −0.00051
                 0.02306
1.20   1.09544

Hence,

y′(1.00) = (1/0.05)[0.02470 − (1/2)(−0.00059) + (1/3)(0.00005) − (1/4)(−0.00002)] = 0.5003

y″(1.00) = (1/(0.05)²)[−0.00059 − 0.00005 + (11/12)(−0.00002)] ≈ −0.263
x      0   1   2    3
f(x)   1   3   15   40
Solution: Since the values of x are equally spaced, we use Newton's forward difference interpolating polynomial for finding f′(x) and f′(0.5). We first form the finite difference table as given below:

x    f(x)    Δf(x)    Δ²f(x)    Δ³f(x)
0    1
             2
1    3                10
             12                  3
2    15               13
             25
3    40
Taking x₀ = 0, we have u = (x − x₀)/h = x. Thus Newton's forward difference interpolation gives,

f = f₀ + uΔf₀ + (u(u − 1)/2!)Δ²f₀ + (u(u − 1)(u − 2)/3!)Δ³f₀

i.e., f(x) ≈ 1 + 2x + (x(x − 1)/2) × 10 + (x(x − 1)(x − 2)/6) × 3

or, f(x) = 1 − 2x + (7/2)x² + (1/2)x³

∴ f′(x) = −2 + 7x + (3/2)x²

and, f′(0.5) = −2 + 7 × 0.5 + (3/2) × (0.5)² = 1.875
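A cross-check of this interpolation (assuming NumPy is available): a cubic through four points is unique, so an exact degree-3 fit recovers the Newton forward difference interpolating polynomial and its derivative.

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 15.0, 40.0])

# A degree-3 polynomial through four points is unique, so this fit equals
# the Newton forward difference interpolating polynomial for the table.
coeffs = np.polyfit(x, y, 3)          # highest power first
deriv = np.polyder(coeffs)
fp_half = np.polyval(deriv, 0.5)      # value of f'(0.5)
```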
Example 26: The population of a city is given in the following table. Find the rate
of growth in population in the year 2001 and in 1995.
Solution: Since the rate of growth of the population is dy/dx, we have to compute dy/dx at x = 2001 and at x = 1995. For this we consider the formula for the derivative obtained on approximating y by Newton's backward difference interpolation, given by,

dy/dx = (1/h)[∇y_n + ((2u + 1)/2)∇²y_n + ((3u² + 6u + 2)/6)∇³y_n + ((2u³ + 9u² + 11u + 3)/12)∇⁴y_n + ...]

where u = (x − x_n)/h.
For this we construct the finite difference table as given below:
x      y        ∇y       ∇²y      ∇³y      ∇⁴y
1961   40.62
                20.18
1971   60.80             −1.03
                19.15              5.49
1981   79.95              4.46             −4.47
                23.61              1.02
1991   103.56             5.48
                29.09
2001   132.65
For x = 2001, u = (x − x_n)/h = 0, and

(dy/dx)₂₀₀₁ = (1/10)[29.09 + (1/2) × 5.48 + (1/3) × 1.02 + (1/4) × (−4.47)]
= 3.105
For x = 1995, u = (1995 − 1991)/10 = 0.4
For the numerical derivative formula evaluated at x and x + h, a choice for h that is small without producing a large rounding error is √ε · x (though not when x = 0), where the machine epsilon ε is typically of the order of 2.2 × 10⁻¹⁶. A formula for h that balances the rounding error against the secant error for optimum accuracy is,

h = 2√(ε |f(x)/f″(x)|)

though not when f(x) = 0, and employing it requires knowledge of the function.
For single precision the problems are exacerbated because, although x may
be a representable floating-point number, x + h almost certainly will not be. This
means that x + h will be changed (by rounding or truncation) to a nearby machine-
representable number, with the consequence that (x + h) – x will not equal h;
the two function evaluations will not be exactly h apart. Consequently, since most
decimal fractions are recurring sequences in binary (just as 1/3 is in decimal) a
seemingly round step, such as h = 0.1 will not be a round number in binary; it is
0.000110011001100...
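The standard remedy is to replace h by the exactly representable difference (x + h) − x; a sketch in Python:

```python
import sys

eps = sys.float_info.epsilon          # machine epsilon, about 2.2e-16
x = 1.7
h = eps ** 0.5 * x                    # rule-of-thumb step sqrt(eps) * x

# x + h is rounded to the nearest representable number, so the computed
# difference (x + h) - x generally differs slightly from h.  Reassigning
# h to that rounded difference makes the two sample points exactly h apart.
temp = x + h
h = temp - x
```

After the reassignment, f(x + h) and f(x) really are evaluated at points exactly h apart, which removes one source of rounding error from the difference quotient.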
Age = x 3 5 7 9
Weight = y (kg ) 5 8 12 17
Solution: Since the values of x are equidistant, we form the finite difference table for using Newton's forward difference interpolation formula to compute the weight of the baby at the required ages.

x    y     Δy    Δ²y
3    5
           3
5    8           1
           4
7    12          1
           5
9    17

Taking x = 2, u = (x − x₀)/h = (2 − 3)/2 = −0.5.

Newton's forward difference interpolation gives the weight at age two,

y(2) = 5 + (−0.5) × 3 + ((−0.5) × (−1.5)/2) × 1
     = 5 − 1.5 + 0.38 = 3.88 ≈ 3.9 kg

Similarly, for computing the weight of the baby at the age of ten years, we use Newton's backward difference interpolation with

v = (x − x_n)/h = (10 − 9)/2 = 0.5

y(10) = 17 + 0.5 × 5 + ((0.5 × 1.5)/2) × 1
      = 17 + 2.5 + 0.38 ≈ 19.88 kg
1. The evaluation of a definite integral cannot be carried out when the integrand
f(x) is not integrable, as well as when the function is not explicitly known
but only the function values are known at a finite number of values of x.
There are two types of numerical methods for evaluating a definite integral
based on the following formula:
∫_a^b f(x) dx

2. The formula is, ∫_{x₀}^{x₁} f(x) dx = (h/2)[f₀ + f₁].
3. The formula is, ∫_{x₀}^{x₂} f(x) dx = (h/3)[f₀ + 4f₁ + f₂].

4. Simpson's three-eighth rule of numerical integration is,

∫_a^b f(x) dx = (3h/8)[y₀ + 3y₁ + 3y₂ + 2y₃ + 3y₄ + 3y₅ + 2y₆ + ... + 2y_{3m−3} + 3y_{3m−2} + 3y_{3m−1} + y_{3m}]

where h = (b − a)/(3m), for m = 1, 2, ...

5. Weddle's rule is,

∫_a^b f(x) dx = (3h/10)[y₀ + 5y₁ + y₂ + 6y₃ + y₄ + 5y₅ + 2y₆ + 5y₇ + y₈ + 6y₉ + y₁₀ + 5y₁₁ + ... + 2y_{6m−6} + 5y_{6m−5} + y_{6m−4} + 6y_{6m−3} + y_{6m−2} + 5y_{6m−1} + y_{6m}]

where b − a = 6mh.
6. This procedure is used to find a better estimate of an integral using the
evaluation of the integral for two values of the width of the sub-intervals.
7. Numerical differentiation is the process of computing the derivatives of a
function f(x) when the function is not explicitly known, but the values of the
function are known for a given set of arguments x = x0, x1, x2, ..., xn. To
find the derivatives, we use a suitable interpolating polynomial and then its
derivatives are used as the formulae for the derivatives of the function.
8. Newton’s forward difference interpolation formula is,
where u = x − x0
h
9. Newton’s backward difference interpolation formula is,
8.7 SUMMARY
x − x0
where u = .
h
Self-Instructional
214 Material
At the tabulated point x₀, the value of u is zero and the formulae for the derivatives are given by,

y′(x₀) = (1/h)[Δy₀ − (1/2)Δ²y₀ + (1/3)Δ³y₀ − (1/4)Δ⁴y₀ + (1/5)Δ⁵y₀ − ...]

y″(x₀) = (1/h²)[Δ²y₀ − Δ³y₀ + (11/12)Δ⁴y₀ − (5/6)Δ⁵y₀ + ...]
For a given x near the end of the table, the values of dy/dx and d²y/dx² are computed by first computing v = (x − x_n)/h and using the above formulae. At the tabulated point x_n, the derivatives are given by,

y′(x_n) = (1/h)[∇y_n + (1/2)∇²y_n + (1/3)∇³y_n + (1/4)∇⁴y_n + ...]

y″(x_n) = (1/h²)[∇²y_n + ∇³y_n + (11/12)∇⁴y_n + (5/6)∇⁵y_n + ...]
For computing the derivatives at a point near the middle of the table, the derivatives of the central difference interpolation formula are used.

If the arguments of the table are unequally spaced, then the derivatives of the Lagrange interpolating polynomial are used for computing the derivatives of the function.
Numerical differentiation: It is the process of computing the derivatives of a function f(x) when the function is not explicitly known, but the values of the function are known for a given set of arguments x = x₀, x₁, x₂, ..., x_n.
Newton’s forward difference interpolation formula: The Newton’s
NOTES
forward difference interpolation formula is used for computing the derivatives
at a point near the beginning of an equally spaced table.
Newton’s backward difference interpolation formula: Newton’s
backward difference interpolation formula is used for computing the
derivatives at a point near the end of the table.
Central difference interpolation formula: For computing the derivatives at a point near the middle of the table, the derivatives of the central difference interpolation formula are used.
Short-Answer Questions
1. State Newton-Cotes formula.
2. State the trapezoidal rule.
3. What is the difference between Simpson’s one-third formula and one-third
rule?
4. What is the error in Weddle’s rule?
5. Give the truncation error in Simpson’s one-third rule.
6. Where is interval halving technique used?
7. Name the methods used for numerical evaluation of double integrals.
8. State the Gauss quadrature formula.
9. State an application of Romberg’s procedure.
10. Define the term numerical differentiation.
11. How can the derivative dy/dx be evaluated?
12. Give the formulae for the derivatives at the tabulated point x0 where the
value of u is zero.
13. Give the differentiation formula for Newton’s backward difference
interpolation.
14. Give the Newton’s backward difference interpolation formula for an equally
spaced table of a function.
Long-Answer Questions

1. Use suitable formulae to compute y′(1.4) and y″(1.4) for the function y = f(x), given by the following tabular values:

x    1.4      1.8      2.2      2.6      3.0
y    0.9854   0.9738   0.8085   0.5155   0.1411
2. Compute dy/dx and d²y/dx² for x = 1, where the function y = f(x) is given by the following table:

x    1   2    3    4    5     6
y    1   8    27   64   125   216
3. Compute ∫_0^20 f(x) dx by Simpson's one-third rule, where:
x 0 5 10 15 20
f ( x) 1.0 1.6 3.8 8.2 15.4
4. Compute ∫_0^4 x³ dx by Simpson's one-third formula and comment on the result:
x 0 2 4
x3 0 8 64
6. Compute ∫_0^2 eˣ dx by Simpson's one-third formula and compare with the exact value, where e⁰ = 1, e¹ = 2.72, e² = 7.39.
7. Compute an approximate value of π by integrating ∫_0^1 dx/(1 + x²) by Simpson's one-third formula.
8. A rod is rotating in a plane about one of its ends. The following table gives the angle θ (in radians) through which the rod has turned for different values of time t seconds. Find its angular velocity dθ/dt and angular acceleration d²θ/dt² at t = 1.0.
t secs      0.0   0.2    0.4    0.6    0.8    1.0
θ radians   0.0   0.12   0.48   1.10   2.00   3.20
9. Find dy/dx and d²y/dx² at x = 1 and at x = 3 for the function y = f(x), whose values are given in the following table:

x    1        2        3        4        5        6
y    2.7183   3.3210   4.0552   4.9530   6.0496   7.3891
10. Find dy/dx and d²y/dx² at x = 0.96 and at x = 1.04 for the function y = f(x) given in the following table:

x    0.96     0.98     1.0      1.02     1.04
y    0.7825   0.7739   0.7651   0.7563   0.7473
13. Evaluate ∫_0 cos x dx, correct to three significant figures, taking five equal sub-intervals.

14. Compute the value of the integral ∫_0^1 x dx/(1 + x) correct to three significant figures.
x 0 1 2 3
f ( x) 1.6 3.8 8.2 15.4
18. Use suitable formulae to compute y′(1.4) and y″(1.4) for the function y = f(x), given by the following tabular values:

x    1.4      1.8      2.2      2.6      3.0
y    0.9854   0.9738   0.8085   0.5155   0.1411
19. Compute dy/dx and d²y/dx² for x = 1, where the function y = f(x) is given by the following table:

x    1   2    3    4    5     6
y    1   8    27   64   125   216
20. A rod is rotating in a plane about one of its ends. The following table gives the angle θ (in radians) through which the rod has turned for different values of time t seconds. Find its angular velocity dθ/dt and angular acceleration d²θ/dt² at t = 1.0.
21. Find dy/dx and d²y/dx² at x = 1 and at x = 3 for the function y = f(x), whose values in [1, 6] are given in the following table:

x    1        2        3        4        5        6
y    2.7183   3.3210   4.0552   4.9530   6.0496   7.3891
22. Find dy/dx and d²y/dx² at x = 0.96 and at x = 1.04 for the function y = f(x) given in the following table:

x    0.96     0.98     1.0      1.02     1.04
y    0.7825   0.7739   0.7651   0.7563   0.7473
Jain, M. K. 1983. Numerical Solution of Differential Equations. New Delhi: New Age International (P) Limited.
Conte, Samuel D. and Carl de Boor. 1980. Elementary Numerical Analysis:
An Algorithmic Approach. New York: McGraw-Hill.
Skeel, Robert. D and Jerry B. Keiper. 1993. Elementary Numerical Computing
with Mathematica. New York: McGraw-Hill.
Balaguruswamy, E. 1999. Numerical Methods. New Delhi: Tata McGraw-Hill.
Datta, N. 2007. Computer Oriented Numerical Methods. New Delhi: Vikas
Publishing House Pvt. Ltd.
BLOCK - III

UNIT 9 PARTIAL DIFFERENTIAL EQUATIONS
Structure
9.0 Introduction
9.1 Objectives
9.2 Partial Differential Equation of the First Order Lagrange’s Solution
9.3 Solution of Some Special Types of Equations
9.4 Charpit’s General Method of Solution and Its Special Cases
9.5 Partial Differential Equations of Second and Higher Orders
9.5.1 Classification of Linear Partial Differential Equations of Second Order
9.6 Homogeneous and Non-Homogeneous Equations with
Constant Coefficients
9.7 Partial Differential Equations Reducible to Equations with
Constant Coefficients
9.8 Answers to Check Your Progress Questions
9.9 Summary
9.10 Key Words
9.11 Self Assessment Questions and Exercises
9.12 Further Readings
9.0 INTRODUCTION
In this unit, you will learn about partial differential equations. Partial differential
equations are used to formulate, and thus aid the solution of, problems involving
functions of several variables. Partial differential equations often model
multidimensional systems.
You will learn various methods to solve partial differential equations of first,
second and higher orders.
9.1 OBJECTIVES
Lagrange’s Equation
The partial differential equation Pp + Qq = R, where P, Q, R are functions of x, y,
z, is called Lagrange’s linear differential equation.
Form the auxiliary equations dx/P = dy/Q = dz/R and find two independent solutions of the auxiliary equations, say u(x, y, z) = C₁ and v(x, y, z) = C₂, where C₁ and C₂ are constants. Then the solution of the given equation is F(u, v) = 0 or u = F(v).
For example, solve x²p + y²q = (x + y)z.

The auxiliary equations are

dx/x² = dy/y² = dz/((x + y)z)     (9.1)

i.e., (dx − dy)/(x² − y²) = dz/((x + y)z)

i.e., (dx − dy)/(x − y) = dz/z

Integrating, log(x − y) = log z + constant, so that (x − y)/z = C₁.

Also, dx/x² = dy/y²

Hence −1/x = −1/y + constant, i.e.,

1/y − 1/x = C₂

Hence the solution is, F(1/y − 1/x, (x − y)/z) = 0
Example 2: Solve (x² – yz)p + (y² – zx)q = z² – xy

Solution:

The subsidiary equations are:

dx/(x² − yz) = dy/(y² − zx) = dz/(z² − xy)

Each ratio equals

(dx − dy)/((x² − yz) − (y² − zx)) = d(x − y)/((x − y)(x + y + z))

and similarly equals d(y − z)/((y − z)(x + y + z)).

Hence d(x − y)/(x − y) = d(y − z)/(y − z), which on integration gives (x − y)/(y − z) = C₁.

Each ratio is also equal to

(x dx + y dy + z dz)/(x³ + y³ + z³ − 3xyz) and (dx + dy + dz)/(x² + y² + z² − yz − zx − xy)

Since x³ + y³ + z³ − 3xyz = (x + y + z)(x² + y² + z² − yz − zx − xy), equating these two ratios gives

x dx + y dy + z dz = (x + y + z)(dx + dy + dz)

Integrating, (x² + y² + z²)/2 = (x + y + z)²/2 + constant, i.e., xy + yz + zx = C₂.

The general solution is F((x − y)/(y − z), xy + yz + zx) = 0, where F is arbitrary.
Example 3: Solve (a – x)p + (b – y)q = c – z

Solution:

The subsidiary equations are:

dx/(a − x) = dy/(b − y) = dz/(c − z)     (1)

From Equation (1),

dy/(b − y) = dz/(c − z)

i.e., dy/(y − b) = dz/(z − c)

log(y − b) = log(z − c) + log C₁

(y − b)/(z − c) = C₁

Also,

dx/(a − x) = dy/(b − y)

i.e., dx/(x − a) = dy/(y − b)

log(x − a) = log(y − b) + log C₂

(x − a)/(y − b) = C₂

The general solution is

F((y − b)/(z − c), (x − a)/(y − b)) = 0
Example 4: Solve (y – z)p + (z – x)q = x – y

Solution:

The auxiliary equations are:

dx/(y − z) = dy/(z − x) = dz/(x − y) = (dx + dy + dz)/0

∴ dx + dy + dz = 0

Integrating, we get x + y + z = C₁     (1)

Also, each ratio equals

(x dx + y dy + z dz)/(x(y − z) + y(z − x) + z(x − y)) = (x dx + y dy + z dz)/0

∴ x dx + y dy + z dz = 0

Integrating, we get x² + y² + z² = C₂     (2)

The general solution is F(x + y + z, x² + y² + z²) = 0.
dx/(x(y − z)) = dy/(y(z − x)) = dz/(z(x − y)) = (dx/x + dy/y + dz/z)/0

∴ dx/x + dy/y + dz/z = 0

Integrating, log x + log y + log z = constant, i.e., xyz = constant.

Again, for auxiliary equations of the form dx/x² = dy/y² = dz/z², integrating in pairs:

From dx/x² = dy/y²,

−1/x = −1/y + C₁, i.e., 1/y − 1/x = C₁

Also, from dy/y² = dz/z²,

−1/y = −1/z + C₂, i.e., 1/z − 1/y = C₂

The general solution is

F(1/y − 1/x, 1/z − 1/y) = 0
Example 8: Solve (y + z)p + (z + x)q = x + y

Solution:

The auxiliary equations are

dx/(y + z) = dy/(z + x) = dz/(x + y)

i.e., d(x − y)/(y − x) = d(y − z)/(z − y) = d(z − x)/(x − z)

and each ratio also equals (dx + dy + dz)/(2(x + y + z)).

From the first two ratios, d(x − y)/(x − y) = d(y − z)/(y − z), giving (x − y)/(y − z) = C₁.

Equating d(x − y)/(−(x − y)) with (dx + dy + dz)/(2(x + y + z)) and integrating,

−log(x − y) = (1/2) log(x + y + z) + constant

i.e., log[(x − y)²(x + y + z)] = log C₂

The general solution is

F((x − y)/(y − z), (x − y)²(x + y + z)) = 0
Wave Equation
For deriving the equation governing small transverse vibrations of an elastic string,
we position the string along the x-axis, extend it to its length L and fix it at its ends
x = 0 and x = L. Distort the string and at some instant, say t = 0, release it to
vibrate. Now the problem is to find the deflection u(x, t) of the string at point x
and at any time t > 0.
To obtain u(x, t) as the result of a partial differential equation we have to
make simplifying assumptions as follows:
1. The string is homogeneous. The mass of the string per unit length is constant.
The string is perfectly elastic and hence does not offer any resistance to
bending.
2. The tension in the string is constant throughout.
3. The vibrations in the string are small so the slope at each point remains
small.
For modeling the differential equation, consider the forces working on a
small portion of the string. Let the tension be T1 and T2 at the endpoints P and Q
of the chosen portion. The horizontal components of the tension are constant
because the points on the string move vertically according to our assumption.
Hence we have,
T1 cos α = T2 cos β = T = const (9.2)
The two forces in the vertical direction are −T₁ sin α and T₂ sin β. The negative sign shows that the component is directed downward. If ρ is the mass of the undeflected string per unit length and Δx is the length of that portion of the string that is undeflected, then by Newton's second law the resultant of these two forces is equal to the mass ρΔx of the portion times the acceleration ∂²u/∂t²:
T₂ sin β − T₁ sin α = ρΔx ∂²u/∂t²     (9.3)

By using Equation (9.2), we can divide the above equation by T₂ cos β = T₁ cos α = T, to get

(T₂ sin β)/(T₂ cos β) − (T₁ sin α)/(T₁ cos α) = tan β − tan α = (ρΔx/T) ∂²u/∂t²

where

tan α = (∂u/∂x)|ₓ and tan β = (∂u/∂x)|ₓ₊Δₓ.

By dividing Equation (9.3) by Δx and substituting the values of tan α and tan β, we have

(1/Δx)[(∂u/∂x)|ₓ₊Δₓ − (∂u/∂x)|ₓ] = (ρ/T) ∂²u/∂t²

As Δx approaches zero, the equation becomes the linear partial differential equation

∂²u/∂t² = c² ∂²u/∂x², c² = T/ρ     (9.4)
which is the one-dimensional wave equation governing the vibrations of an
elastic string
∂²u/∂t² = c² ∂²u/∂x²     (9.5)
To determine the solution we use the boundary conditions, x = 0 and
x = L,
u (0, t ) = 0, u (L, t ) = 0 for all t (9.6)
The initial velocity and initial deflection of the string determine the form of
motion. If f(x) is the original deflection and g(x) is the initial velocity, then our initial
conditions are,
u ( x,0) = f ( x ) (9.7)
and
(∂u/∂t)|ₜ₌₀ = g(x)     (9.8)
I. Now the problem is to get the solution of Equation (9.5) satisfying the conditions (9.6)–(9.8).
By using the method of separation of variables, we seek solutions of the wave Equation (9.5) of the form

u(x, t) = F(x)G(t)     (9.9)

which are a product of two functions, F(x) and G(t). Note here that each of these functions is dependent on one variable only, i.e., either x or t. By differentiating Equation (9.9) twice, both with respect to x and t, we obtain

∂²u/∂t² = FG̈ and ∂²u/∂x² = F″G

where dots denote derivatives with respect to t and primes derivatives with respect to x. By substituting these values in the wave equation we get,

FG̈ = c²F″G.

Dividing this equation by c²FG, we get

G̈/(c²G) = F″/F.

The expressions on either side depend on different variables. Hence changing x cannot change the left side and changing t cannot change the right side, so each side must equal a constant. Thus,

G̈/(c²G) = F″/F = k

or

F″ − kF = 0     (9.10)

and

G̈ − c²kG = 0.     (9.11)

The constant k is arbitrary.
Now we will find the solutions of Equations (9.10) and (9.11) so that the equation u = FG fulfils the boundary conditions (9.6), that is,

u(0, t) = F(0)G(t) = 0, u(L, t) = F(L)G(t) = 0 for all t.

If G ≡ 0, then u ≡ 0, which is of no interest. Hence G ≢ 0, and

(a) F(0) = 0, (b) F(L) = 0     (9.12)

For k = 0 the general solution of Equation (9.10) is F = ax + b, and from Equation (9.12) we obtain a = b = 0 and hence F ≡ 0, which gives u ≡ 0. For a positive value of k, say k = μ², the general solution of Equation (9.10) is

F = Ae^{μx} + Be^{−μx},

and from Equation (9.12) we again get F ≡ 0. Hence we choose k < 0, say k = −p². Then Equation (9.10) becomes,

F″ + p²F = 0

The general solution of the above equation is,

F(x) = A cos px + B sin px.

Using the conditions of Equation (9.12), we have

F(0) = A = 0 and F(L) = B sin pL = 0

B = 0 would imply F ≡ 0. Thus we take sin pL = 0, giving

pL = nπ, so that p = nπ/L, where n is an integer     (9.13)
For B = 1, we get infinitely many solutions F(x) = F_n(x), where

F_n(x) = sin(nπx/L)  (n = 1, 2, ...)     (9.14)

These solutions satisfy Equation (9.12). The value of the constant k is now limited to the values k = −p² = −(nπ/L)², resulting from Equation (9.13), so Equation (9.11) becomes

G̈_n + λ_n²G_n = 0, where λ_n = cnπ/L.     (9.15)

A general solution is

G_n(t) = B_n cos λ_n t + B_n* sin λ_n t.

We select the coefficients B_n so that u(x, 0) becomes the Fourier sine series of f(x). Thus,

B_n = (2/L) ∫₀ᴸ f(x) sin(nπx/L) dx, n = 1, 2, ...     (9.19)
Differentiating the series solution term by term with respect to t and setting t = 0,

(∂u/∂t)|ₜ₌₀ = Σ_{n=1}^{∞} (−B_n λ_n sin λ_n t + B_n* λ_n cos λ_n t) sin(nπx/L) |ₜ₌₀ = Σ_{n=1}^{∞} B_n* λ_n sin(nπx/L) = g(x)
u(x, t) = (1/2) Σ_{n=1}^{∞} B_n sin((nπ/L)(x − ct)) + (1/2) Σ_{n=1}^{∞} B_n sin((nπ/L)(x + ct))
The above two series are generated by substituting x – ct and x + ct,
respectively, for the variable x in the Fourier sine series given in Equation (9.18)
for f(x). Thus
u(x, t) = (1/2)[f*(x − ct) + f*(x + ct)]     (9.22)
where f* is the odd periodic extension of f with the period 2L. By differentiating Equation (9.22) we see that u(x, t) is a solution of Equation (9.5), given that f(x) is twice differentiable on the interval 0 < x < L and has one-sided second derivatives at x = 0 and x = L which are zero. Thus u(x, t) is obtained as a solution satisfying Equations (9.6)–(9.8).
If f ′( x ) and f ′′( x ) are merely piecewise continuous or if the one-sided
derivatives are not zero, then for each t there will be finitely many values of x at
which the second derivatives of u appearing in Equation (9.5) do not exist. Except
at these points the wave equation will still be satisfied. We can then regard u(x, t)
as a generalized solution.
Example 9: Determine the solution of the wave Equation (9.5) corresponding to
the following triangular initial deflection,
2k L
x if 0< x<
f (x ) = L 2
2k L
(L − x ) if <x<L
L 2
and zero initial velocity.
Solution: Since g(x) ≡ 0, we have Bₙ* = 0 in Equation (9.17).
The Bₙ are given by Equation (9.19), and thus Equation (9.17) takes the form
        u(x, t) = (8k/π²)[(1/1²) sin(πx/L) cos(πct/L) − (1/3²) sin(3πx/L) cos(3πct/L) + − …].
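The Fourier sine coefficients of the triangular deflection can be checked symbolically. A minimal sympy sketch (the helper `B` is ours, not from the text):

```python
import sympy as sp

x, L, k = sp.symbols('x L k', positive=True)

# Triangular initial deflection from Example 9
f1 = 2*k/L*x            # on (0, L/2)
f2 = 2*k/L*(L - x)      # on (L/2, L)

def B(n):
    """Fourier sine coefficient B_n = (2/L) * int_0^L f(x) sin(n*pi*x/L) dx."""
    s = sp.sin(n*sp.pi*x/L)
    return sp.simplify(2/L*(sp.integrate(f1*s, (x, 0, L/2))
                            + sp.integrate(f2*s, (x, L/2, L))))

print(B(1))   # 8*k/pi**2, the leading coefficient of the series above
```

The even coefficients vanish and B₃ = −8k/(9π²), matching the alternating series in the solution.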
Self-Instructional
Material 233
Partial Differential Equations

9.4 CHARPIT'S GENERAL METHOD OF SOLUTION AND ITS SPECIAL CASES

Charpit's method is used to find the solution of the most general partial differential
equation of order one, given by
        F(x, y, z, p, q) = 0                                    (9.23)
The primary idea in this method is the introduction of a second partial
differential equation of order one,
        f(x, y, z, p, q, a) = 0                                 (9.24)
containing an arbitrary constant 'a' and satisfying the following conditions:
1. Equations (9.23) and (9.24) can be solved to give
p = p(x, y, z, a ) and q = q( x, y, z, a )
2. The equation
dz = p( x, y, z, a )dx + q( x, y, z, a )dy (9.25)
is integrable.
When a function ‘f’ satisfying the conditions 1 and 2 has been found, the
solution of Equation (9.25) containing two arbitrary constants (including ‘a’) will
be a solution of Equation (9.23). The condition 1 will hold if
        J = ∂(F, f)/∂(p, q) = | ∂F/∂p   ∂f/∂p |
                              | ∂F/∂q   ∂f/∂q | ≠ 0             (9.26)
and the condition 2 will hold if
        p ∂q/∂z + ∂q/∂x − q ∂p/∂z − ∂p/∂y = 0
i.e.,   p ∂q/∂z + ∂q/∂x = q ∂p/∂z + ∂p/∂y                       (9.27)
Substituting the values of p and q as functions of x, y and z in Equations
(9.23) and (9.24) and differentiating with respect to x,
        ∂F/∂x + (∂F/∂p)(∂p/∂x) + (∂F/∂q)(∂q/∂x) = 0
and     ∂f/∂x + (∂f/∂p)(∂p/∂x) + (∂f/∂q)(∂q/∂x) = 0
Therefore,
        (∂F/∂p·∂f/∂q − ∂F/∂q·∂f/∂p) ∂q/∂x = ∂F/∂x·∂f/∂p − ∂F/∂p·∂f/∂x
or      ∂q/∂x = (1/J)(∂F/∂x·∂f/∂p − ∂F/∂p·∂f/∂x)
Similarly,
        ∂p/∂y = −(1/J)(∂F/∂y·∂f/∂q − ∂F/∂q·∂f/∂y)
        ∂p/∂z = −(1/J)(∂F/∂z·∂f/∂q − ∂F/∂q·∂f/∂z)
and     ∂q/∂z = (1/J)(∂F/∂z·∂f/∂p − ∂F/∂p·∂f/∂z)               (9.28)
Substituting the values from Equation (9.28) in Equation (9.27) and simplifying,
we get
        −(∂F/∂p)(∂f/∂x) − (∂F/∂q)(∂f/∂y) − (p ∂F/∂p + q ∂F/∂q)(∂f/∂z)
          + (∂F/∂x + p ∂F/∂z)(∂f/∂p) + (∂F/∂y + q ∂F/∂z)(∂f/∂q) = 0     (9.29)
Equation (9.29), being linear in the variables x, y, z, p, q and f, has the
following subsidiary equations:
        dx/(−∂F/∂p) = dy/(−∂F/∂q) = dz/(−p ∂F/∂p − q ∂F/∂q)
          = dp/(∂F/∂x + p ∂F/∂z) = dq/(∂F/∂y + q ∂F/∂z)                 (9.30)
If any of the integrals of Equations (9.30) involve p or q then it is of the form
of Equation (9.24).
Then we solve Equations (9.23) and (9.24) for p and q and integrate Equation
(9.25).
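The denominators of Charpit's subsidiary equations can be generated mechanically from F. A short sympy sketch (the helper `charpit_ratios` is ours, following the sign convention of Equation (9.30)):

```python
import sympy as sp

x, y, z, p, q = sp.symbols('x y z p q')

def charpit_ratios(F):
    """Denominators of dx : dy : dz : dp : dq in Charpit's subsidiary
    equations for F(x, y, z, p, q) = 0."""
    Fx, Fy, Fz, Fp, Fq = [sp.diff(F, v) for v in (x, y, z, p, q)]
    return {'dx': -Fp,
            'dy': -Fq,
            'dz': -p*Fp - q*Fq,
            'dp': Fx + p*Fz,
            'dq': Fy + q*Fz}

# Example 10's equation: F = p^2 + q^2 - 2px - 2qy + 2xy
F = p**2 + q**2 - 2*p*x - 2*q*y + 2*x*y
ratios = charpit_ratios(F)
print(ratios['dp'])   # -2*p + 2*y, i.e. dp / 2(y - p) as in Example 10
```

The computed denominators reproduce Equation (2) of Example 10 below.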
Example 10: Find the complete integral of the equation,
        p² + q² − 2px − 2qy + 2xy = 0                           (1)
Solution: The subsidiary equations are
        dp/(2(y − p)) = dq/(2(x − q)) = dx/(−2(p − x)) = dy/(−2(q − y))   (2)
Each ratio equals both (dp + dq)/(2y + 2x − 2p − 2q) and (dx + dy)/(2x + 2y − 2p − 2q), so
        dp + dq = dx + dy
Integrating, we get
        p + q = x + y + a,  where a is a constant,
or      (p − x) + (q − y) = a                                   (3)
Equation (1) can also be written as
        (p − x)² + (q − y)² = (x − y)²
Now     {(p − x) − (q − y)}² = 2{(p − x)² + (q − y)²} − {(p − x) + (q − y)}²
so      (p − x) − (q − y) = √(2(x − y)² − a²)                   (4)
Adding Equations (3) and (4),
        p − x = a/2 + (1/2)√(2(x − y)² − a²)
or      p = x + a/2 + (1/2)√(2(x − y)² − a²)
Similarly, subtracting Equation (4) from Equation (3),
        q = y + a/2 − (1/2)√(2(x − y)² − a²)
Now     dz = p dx + q dy
or      dz = {x + a/2 + (1/2)√(2(x − y)² − a²)} dx + {y + a/2 − (1/2)√(2(x − y)² − a²)} dy
           = d((x² + y²)/2) + (a/2) d(x + y) + (1/2)√(2(x − y)² − a²) d(x − y)
On integrating, with U = x − y,
        z + b = (x² + y²)/2 + (a/2)(x + y) + (1/2)∫√(2U² − a²) dU
              = (x² + y²)/2 + (a/2)(x + y) + (x − y)√(2(x − y)² − a²)/4
                − (a²/(4√2)) log{(x − y) + √((x − y)² − a²/2)}
where b is an arbitrary constant.
Example 11: Determine the complete integral of the equation
        p² + q² − 2px − 2qy + 1 = 0                             (1)
Solution: The subsidiary equations are
        dx/(−(2p − 2x)) = dy/(−(2q − 2y)) = dp/(−2p) = dq/(−2q)    (2)
From the last two ratios,
        dp/p = dq/q
On integrating, we get
        p = aq                                                  (3)
where 'a' is an arbitrary constant.
Substituting the value of p from Equation (3) in Equation (1),
        q²(1 + a²) − 2q(ax + y) + 1 = 0
so      q = [(ax + y) + √((ax + y)² − (1 + a²))]/(1 + a²)
Now     dz = p dx + q dy
which gives
        dz = q(a dx + dy)
           = [1/(1 + a²)]{(ax + y) + √((ax + y)² − (1 + a²))} d(ax + y)
Integrating,
        z + b = [1/(1 + a²)][(ax + y)²/2 + (ax + y)√((ax + y)² − (1 + a²))/2
                − ((1 + a²)/2) log{(ax + y) + √((ax + y)² − (1 + a²))}]
where b is an arbitrary constant.
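The complete integral of Example 11 can be verified by substitution. A sympy sketch, assuming the 1/(1 + a²) factor from the quadratic in q is written out explicitly (as above):

```python
import sympy as sp

x, y, a, b = sp.symbols('x y a b', positive=True)
t = a*x + y
r = sp.sqrt(t**2 - (1 + a**2))

# Complete integral of p^2 + q^2 - 2px - 2qy + 1 = 0 (Example 11)
z = (t**2/2 + t*r/2 - (1 + a**2)/2*sp.log(t + r))/(1 + a**2) - b

p = sp.diff(z, x)
q = sp.diff(z, y)
residual = sp.simplify(p**2 + q**2 - 2*p*x - 2*q*y + 1)
print(residual)
```

The residual reduces to zero, since p = aq = a(t + r)/(1 + a²) satisfies the quadratic in q exactly.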
Example 12: Find the Complete Integral of the following equation,
        2(pq + py + qx) + x² + y² = 0                           (1)
Solution: The subsidiary equations of Equation (1) are
        dx/(−(2q + 2y)) = dy/(−(2p + 2x)) = dp/(2q + 2x) = dq/(2p + 2y)   (2)
These give
        dp + dq + dx + dy = 0
Integrating,
        p + q + x + y = constant = a (say)
or      (p + x) + (q + y) = a                                   (3)
Equation (1) can be written as
        2(p + x)(q + y) + (x − y)² = 0
or      (p + x)(q + y) = −(1/2)(x − y)²
Now     (p + x) − (q + y) = √[{(p + x) + (q + y)}² − 4(p + x)(q + y)]
                          = √(a² + 2(x − y)²)                   (4)
Adding Equations (3) and (4),
        2(p + x) = a + √(a² + 2(x − y)²)
or      p = −x + a/2 + (1/2)√(a² + 2(x − y)²)
Subtracting Equation (4) from Equation (3),
        q = −y + a/2 − (1/2)√(a² + 2(x − y)²)
Now     dz = p dx + q dy
giving
        dz = −(x dx + y dy) + (a/2)(dx + dy) + (1/2)√(a² + 2(x − y)²) d(x − y)
           = −(1/2) d(x² + y²) + (a/2) d(x + y) + (1/2)√(a² + 2(x − y)²) d(x − y)
Integrating the above equation, we get
        2z + b = −(x² + y²) + a(x + y) + √2 ∫ √(a²/2 + (x − y)²) d(x − y)
               = −(x² + y²) + a(x + y) + (x − y)√(a² + 2(x − y)²)/2
                 + (a²/(2√2)) log{(x − y) + √(a²/2 + (x − y)²)}.
        dq = −4pq sech² 2y + 4 sech² 2y tanh 2y
and
        dp = 0
or      p = constant = a (say)
Therefore
        q² − 2a tanh 2y · q + a² − sech² 2y = 0
so      q = a tanh 2y + √(1 − a²) sech 2y
Now     dz = p dx + q dy
gives
        dz = a dx + (a tanh 2y + √(1 − a²) sech 2y) dy
           = d(ax + (a/2) log cosh 2y) + √(1 − a²) sech 2y dy
Integrating,
        z + b = ax + (a/2) log cosh 2y + √(1 − a²) ∫ 2 dy/(e^{2y} + e^{−2y})
              = ax + (a/2) log cosh 2y + √(1 − a²) ∫ 2e^{2y} dy/(1 + e^{4y})
              = ax + (a/2) log cosh 2y + √(1 − a²) tan⁻¹(e^{2y}).
Example 14: Find the Complete Integral of
        px + 3yq = 2(z − x²q²)                                  (1)
Solution: The subsidiary equations are
        dx/(−x) = dy/(−(3y + 4x²q)) = dp/(4xq² − p) = dq/(3q − 2q)
From the first and last ratios,
        dq/q = dx/(−x)
so      qx = constant = a
or      q = a/x
Substituting in Equation (1), we get
        p = 2(z − a²)/x − 3ya/x²
Now     dz = p dx + q dy
gives
        dz = {2(z − a²)/x − 3ya/x²} dx + (a/x) dy
Multiplying by x²,
        x² dz = 2x(z − a²) dx − 3ya dx + ax dy
i.e.,   x⁴ d((z − a²)/x²) = −3ay dx + ax dy
i.e.,   d((z − a²)/x²) = (a/x³) dy − (3ay/x⁴) dx = d(ay/x³)
On integrating, we get
        (z − a²)/x² = ay/x³ + b
or      z = a(a + y/x) + bx², where a and b are arbitrary constants.
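As a check, the complete integral found in Example 14 can be substituted back into the equation px + 3yq = 2(z − x²q²). A quick sympy sketch:

```python
import sympy as sp

x, y, a, b = sp.symbols('x y a b')
z = a*(a + y/x) + b*x**2          # complete integral from Example 14
p = sp.diff(z, x)                 # p = dz/dx
q = sp.diff(z, y)                 # q = dz/dy
residual = sp.simplify(p*x + 3*y*q - 2*(z - x**2*q**2))
print(residual)   # 0
```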
The operator 1/F(D) is defined as that operator which, when operated on
f(x), gives a function φ(x) such that F(D)φ(x) = f(x),
i.e.,   (1/F(D)){f(x)} = φ(x)  (= P.I.)
so that F(D)·[(1/F(D)){f(x)}] = f(x).
Obviously F(D) and 1/F(D) are inverse operators.
Case I: Let F(D) = D; then (1/D) f(x) = ∫ f(x) dx.
Proof: Let y = (1/D){f(x)}; operating by D, we get Dy = D·(1/D){f(x)}, or Dy = f(x), or
        dy/dx = f(x),  or  dy = f(x) dx
Integrating both sides with respect to x, we get
        y = ∫ f(x) dx, since a particular integral does not contain any arbitrary constant.
Case II: Let F(D) = D − m, where m is a constant; then
        (1/(D − m)){f(x)} = e^{mx} ∫ e^{−mx} f(x) dx.
Proof: Let (1/(D − m)){f(x)} = y; then operating by D − m, we get
        (D − m)·(1/(D − m)){f(x)} = (D − m)y
or      f(x) = dy/dx − my
or      dy/dx − my = f(x), which is a first order linear differential equation with
        I.F. = e^{∫−m dx} = e^{−mx}.
Multiplying the above equation by e^{−mx} and integrating with respect to x, we get
        y e^{−mx} = ∫ f(x)e^{−mx} dx, since a particular integral does not contain any
arbitrary constant,
or      y = e^{mx} ∫ f(x)e^{−mx} dx.
Note: If 1/F(D) = a₁/(D − m₁) + a₂/(D − m₂) + … + aₙ/(D − mₙ), where aᵢ and mᵢ
(i = 1, 2, …, n) are constants, then
        (1/F(D)){f(x)} = a₁e^{m₁x} ∫ f(x)e^{−m₁x} dx + a₂e^{m₂x} ∫ f(x)e^{−m₂x} dx
                         + … + aₙe^{mₙx} ∫ f(x)e^{−mₙx} dx
                       = Σᵢ₌₁ⁿ aᵢe^{mᵢx} ∫ f(x)e^{−mᵢx} dx
We now discuss methods of finding particular integrals for certain specific types
of right-hand functions.
Type I: F(D)y = e^{mx}, where m is a constant.
Then    P.I. = (1/F(D)){e^{mx}} = e^{mx}/F(m)  if F(m) ≠ 0.
If F(m) = 0, then we replace D by D + m in F(D):
        P.I. = (1/F(D)){e^{mx}} = e^{mx}·(1/F(D + m)){1}.
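The Type I rule can be sanity-checked by applying F(D) back to the claimed particular integral. A sympy sketch; the operator F(D) = D³ − 2D² − 5D + 6 is the one from the worked example that follows, and m = 4 is our choice:

```python
import sympy as sp

x = sp.symbols('x')
m = 4   # F(m) != 0 for this choice

# F(D) = D^3 - 2D^2 - 5D + 6 = (D - 1)(D - 3)(D + 2)
def F_of_D(y):
    return sp.diff(y, x, 3) - 2*sp.diff(y, x, 2) - 5*sp.diff(y, x) + 6*y

Fm = m**3 - 2*m**2 - 5*m + 6      # F(4) = 18
PI = sp.exp(m*x)/Fm               # Type I rule: P.I. = e^{mx}/F(m)
print(sp.simplify(F_of_D(PI)))    # exp(4*x), i.e. F(D) P.I. recovers e^{mx}
```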
        = e^{4x} + 6e^{2x} + 9e^{0·x} + e^{4x}/2 + e^{2x}/2
        = (3/2)e^{4x} + (13/2)e^{2x} + 9e^{0·x}
The particular integral is
        y = 1/(D³ − 2D² − 5D + 6){(3/2)e^{4x} + (13/2)e^{2x} + 9e^{0·x}}
          = 1/((D − 1)(D − 3)(D + 2)){(3/2)e^{4x} + (13/2)e^{2x} + 9e^{0·x}}
          = (3/2)·1/((D − 1)(D − 3)(D + 2)){e^{4x}} + (13/2)·1/((D − 1)(D − 3)(D + 2)){e^{2x}}
            + 9·1/((D − 1)(D − 3)(D + 2)){e^{0·x}}
          = (3/2)·e^{4x}/((4 − 1)(4 − 3)(4 + 2)) + (13/2)·e^{2x}/((2 − 1)(2 − 3)(2 + 2))
            + 9·e^{0·x}/((0 − 1)(0 − 3)(0 + 2))
          = (3/2)·e^{4x}/(3·1·6) + (13/2)·e^{2x}/(1·(−1)·4) + 9·1/((−1)(−3)·2)
          = e^{4x}/12 − (13/8)e^{2x} + 3/2
Hence the general solution is
        y = C.F. + P.I.
          = c₁e^{x} + c₂e^{3x} + c₃e^{−2x} + e^{4x}/12 − (13/8)e^{2x} + 3/2.
Notes: 1. When F(m) = 0 and F′(m) ≠ 0,
        P.I. = (1/F(D)){e^{mx}} = x·(1/F′(D)){e^{mx}} = xe^{mx}/F′(m)
2. When F(m) = 0, F′(m) = 0 and F″(m) ≠ 0, then
        P.I. = (1/F(D)){e^{mx}} = x²·(1/F″(D)){e^{mx}} = x²e^{mx}/F″(m)
and so on.
Type II: f(x) = e^{mx}V, where V is any function of x.
Here the particular integral (P.I.) of F(D)y = f(x) is
        P.I. = (1/F(D)){e^{mx}V} = e^{mx}·(1/F(D + m)){V}.
        = e^{3x}·1/(D² + 6D + 9 − 5D − 15 + 6){x²} = e^{3x}·1/(D² + D){x²}
        = e^{3x}·1/(D(1 + D)){x²} = e^{3x}·(1/D)(1 + D)⁻¹{x²}
        = (e^{3x}/D)(1 − D + D² − D³ + D⁴ − …){x²}
        = (e^{3x}/D){x² − 2x + 2} = e^{3x}(x³/3 − x² + 2x)
Hence the general solution is
        y = C.F. + P.I.
          = c₁e^{2x} + c₂e^{3x} + e^{3x}(x³/3 − x² + 2x).
Recall: (i) (1 + x)⁻¹ = 1 − x + x² − x³ + x⁴ − x⁵ + …
        (ii) (1 − x)⁻¹ = 1 + x + x² + x³ + x⁴ + x⁵ + …
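The particular integral just obtained can be verified directly; a sympy sketch, assuming the left side of this worked example was (D² − 5D + 6)y (as the shift D → D + 3 above suggests):

```python
import sympy as sp

x = sp.symbols('x')
# P.I. produced by the exponential-shift rule of Type II
y = sp.exp(3*x)*(x**3/3 - x**2 + 2*x)
lhs = sp.diff(y, x, 2) - 5*sp.diff(y, x) + 6*y   # assumed operator D^2 - 5D + 6
print(sp.simplify(lhs))   # x**2*exp(3*x)
```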
Type III: (a) F(D)y = sin ax or cos ax, where F(D) = φ(D²).
Here    P.I. = (1/F(D)){sin ax} = (1/φ(−a²)) sin ax   (if φ(−a²) ≠ 0)
or      P.I. = (1/F(D)){cos ax} = (1/φ(−a²)) cos ax   (if φ(−a²) ≠ 0)
[Note: D² has been replaced by −a², but D has not been replaced by −a.]
(b) F(D)y = sin ax or cos ax, and F(D) = φ(D², D).
Here    P.I. = (1/F(D)){sin ax} = (1/φ(D², D)){sin ax} = (1/φ(−a², D)){sin ax}
        if φ(−a², D) ≠ 0
or      y = (1/F(D)){cos ax} = (1/φ(D², D)){cos ax} = (1/φ(−a², D)){cos ax}
        if φ(−a², D) ≠ 0
(c) F(D)y = sin ax or cos ax, and 1/F(D) can be written in the form ψ(D)/φ(D²).
Here    P.I. = (1/F(D)){sin ax} = (ψ(D)/φ(D²)){sin ax} = (ψ(D)/φ(−a²)){sin ax}  if φ(−a²) ≠ 0
or      y = (1/F(D)){cos ax} = (ψ(D)/φ(D²)){cos ax} = (ψ(D)/φ(−a²)){cos ax}  if φ(−a²) ≠ 0
Alternatively, sin ax and cos ax can be written in the forms
        sin ax = (e^{iax} − e^{−iax})/(2i)  and  cos ax = (e^{iax} + e^{−iax})/2,
and the P.I. then found by the method of Type I.
Example 17: Solve (D⁴ + 2D² + 1)y = cos x.
Solution: The reduced equation is (D⁴ + 2D² + 1)y = 0
Let y = Ae^{mx} be a trial solution. Then the auxiliary equation is
        m⁴ + 2m² + 1 = 0  or  (m² + 1)² = 0  or  m = ±i, ±i
        C.F. = (c₁ + c₂x) cos x + (c₃ + c₄x) sin x, where c₁, c₂, c₃ and c₄ are
arbitrary constants.
        P.I. = 1/(D⁴ + 2D² + 1){cos x}
             = x·1/(4D³ + 4D){cos x}
[Since F(D) = D⁴ + 2D² + 1 gives, on putting D² = −1², F = 1 − 2 + 1 = 0, we use
(1/F(D)){f(x)} = x·(1/F′(D)){f(x)}.]
             = (x/4)·1/(D³ + D){cos x} = (x/4)·x·1/(3D² + 1){cos x}
             = (x²/4)·cos x/(3(−1) + 1) = −(x²/8) cos x
Hence the general solution is
        y = C.F. + P.I.
          = (c₁ + c₂x) cos x + (c₃ + c₄x) sin x − (x²/8) cos x.
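The resonant particular integral of Example 17 can be verified by applying D⁴ + 2D² + 1 to it. A sympy sketch:

```python
import sympy as sp

x = sp.symbols('x')
y = -x**2*sp.cos(x)/8                             # P.I. from Example 17
lhs = sp.diff(y, x, 4) + 2*sp.diff(y, x, 2) + y   # (D^4 + 2D^2 + 1) y
print(sp.simplify(lhs))   # cos(x)
```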
Example 18: Solve (D² − 4)y = sin 2x.
Solution: The reduced equation is
        (D² − 4)y = 0
Let y = Ae^{mx} be a trial solution; then the auxiliary equation is
        m² − 4 = 0,  m = ±2
The complementary function is
        y = c₁e^{2x} + c₂e^{−2x}, where c₁, c₂ are arbitrary constants.
The particular integral is
        y = 1/(D² − 4){sin 2x} = sin 2x/(−2² − 4)   [Replace D² by −2²]
          = −(1/8) sin 2x
The general solution is y = C.F. + P.I. = c₁e^{2x} + c₂e^{−2x} − (1/8) sin 2x.
Example 19: Solve (3D² + 2D − 8)y = 5 cos x.
Solution: The reduced equation is
        (3D² + 2D − 8)y = 0
Let y = Ae^{mx} be a trial solution; then the auxiliary equation is
        3m² + 2m − 8 = 0  or  3m² + 6m − 4m − 8 = 0
or      3m(m + 2) − 4(m + 2) = 0  or  (m + 2)(3m − 4) = 0
or      m = −2, m = 4/3
The complementary function is
        y = c₁e^{−2x} + c₂e^{4x/3}, where c₁ and c₂ are arbitrary constants.
The particular integral is
        y = 1/(3D² + 2D − 8){5 cos x} = 5·1/((3D − 4)(D + 2)){cos x}
[D² is replaced by −1² in the denominator, after writing the operator in the
ψ(D)/φ(D²) form]
        = 5·(3D² − 2D − 8)/((3D² − 8)² − 4D²){cos x} = (1/25)(3D² − 2D − 8){cos x}
        = (1/25)[3 (d²/dx²)(cos x) − 2 (d/dx)(cos x) − 8 cos x]
        = (1/25)[−3 cos x + 2 sin x − 8 cos x] = (1/25)(2 sin x − 11 cos x)
The general solution is
        y = C.F. + P.I.
          = c₁e^{−2x} + c₂e^{4x/3} + (1/25)(2 sin x − 11 cos x).
Type IV: F(D)y = xⁿ, n a positive integer.
Here    P.I. = (1/F(D)){xⁿ} = [F(D)]⁻¹{xⁿ}
In this case, [F(D)]⁻¹ is expanded in a binomial series in ascending powers of
D up to Dⁿ, and each term of the expansion then operates on xⁿ. The terms in
the expansion beyond Dⁿ need not be considered, since the result of their operation
on xⁿ will be zero.
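The truncated binomial expansion of [F(D)]⁻¹ can be generated with a series expansion. A sympy sketch, using F(D) = 1 + D + D² and n = 2 (the inner factor of the example that follows):

```python
import sympy as sp

x, D = sp.symbols('x D')

FD = 1 + D + D**2
inv = sp.series(1/FD, D, 0, 3).removeO()   # 1 - D (the D^2 coefficient vanishes)
f = x**2
# let each power D^k act on x^n as the k-th derivative
applied = sum(inv.coeff(D, k)*sp.diff(f, x, k) for k in range(3))
print(sp.expand(applied))   # x**2 - 2*x
```

Two further integrations (the 1/D² factor) then give the particular integral, exactly as in Example 20 below.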
Example 20: Solve D²(D² + D + 1)y = x².
Solution: The reduced equation is
        D²(D² + D + 1)y = 0                                     (1)
Let y = Ae^{mx} be a trial solution of Equation (1); then the auxiliary equation is
        m²(m² + m + 1) = 0
        m = 0, 0  and  m = (−1 ± √(1 − 4))/2 = (−1 ± √(−3))/2 = (−1 ± √3 i)/2
The complementary function is
        y = (c₁ + c₂x)e^{0·x} + e^{−x/2}(c₃ cos(√3x/2) + c₄ sin(√3x/2))
          = c₁ + c₂x + e^{−x/2}(c₃ cos(√3x/2) + c₄ sin(√3x/2))
where c₁, c₂, c₃, c₄ are arbitrary constants.
The particular integral is
        y = 1/(D²(D² + D + 1)){x²} = (1/D²)(1 + D + D²)⁻¹{x²}
          = (1/D²){1 − (D + D²) + (D + D²)² − (D + D²)³ + …}{x²}
          = (1/D²){1 − (D + D²) + (D² + 2D³ + D⁴) − (D + D²)³ + …}{x²}
          = (1/D²){x² − (2x + 2) + 2 + 0}
          = (1/D²){x² − 2x} = (1/D){x³/3 − x²} = x⁴/12 − x³/3
        = 1/(D² + 4){x/2} − 1/(D² + 4){(x/4)(e^{2ix} + e^{−2ix})}
        = (1/4)(1 + D²/4)⁻¹{x/2} − (1/4)[e^{2ix}·1/((D + 2i)² + 4){x} + e^{−2ix}·1/((D − 2i)² + 4){x}]
        = x/8 − (1/4)[e^{2ix}·1/(D² + 4iD){x} + e^{−2ix}·1/(D² − 4iD){x}]
        = x/8 − (1/4)[(e^{2ix}/(4iD))(1 + D/(4i))⁻¹{x} − (e^{−2ix}/(4iD))(1 − D/(4i))⁻¹{x}]
        = x/8 − (1/4)[(e^{2ix}/(4iD)){x − 1/(4i)} − (e^{−2ix}/(4iD)){x + 1/(4i)}]
        = x/8 − (1/4)[(e^{2ix}/(4i))(x²/2 − x/(4i)) − (e^{−2ix}/(4i))(x²/2 + x/(4i))]
        = x/8 − (1/4)[(x²/2)·(e^{2ix} − e^{−2ix})/(4i) − (x/(16i²))(e^{2ix} + e^{−2ix})]
        = x/8 − (1/4)[(x²/4) sin 2x + (x/8) cos 2x]
        = x/8 − (x²/16) sin 2x − (x/32) cos 2x
Hence the general solution is y = C.F. + P.I.
        = c₁ cos 2x + c₂ sin 2x + x/8 − (x²/16) sin 2x − (x/32) cos 2x.
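The particular integral just computed can be verified directly; a sympy sketch, assuming the right-hand side of this example was x/2 − (x/2)cos 2x, as read off the first line of the computation:

```python
import sympy as sp

x = sp.symbols('x')
y = x/8 - x**2*sp.sin(2*x)/16 - x*sp.cos(2*x)/32   # P.I. obtained above
lhs = sp.diff(y, x, 2) + 4*y                        # (D^2 + 4) y
rhs = x/2 - x*sp.cos(2*x)/2                         # assumed right-hand side
print(sp.simplify(lhs - rhs))   # 0
```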
Example 22: Solve (D⁴ + D³ − 3D² − 5D − 2)y = 3xe⁻ˣ.
Solution: The reduced equation is
        (D⁴ + D³ − 3D² − 5D − 2)y = 0                           (1)
The trial solution y = Ae^{mx} gives the auxiliary equation as
        m⁴ + m³ − 3m² − 5m − 2 = 0
or      m⁴ + m³ − 3m² − 3m − 2m − 2 = 0
or      m³(m + 1) − 3m(m + 1) − 2(m + 1) = 0
or      (m + 1)(m³ − 3m − 2) = 0  or  (m + 1){m³ + m² − m² − m − 2m − 2} = 0
or      (m + 1){m²(m + 1) − m(m + 1) − 2(m + 1)} = 0
or      (m + 1)(m + 1)(m² − m − 2) = 0
or      (m + 1)²(m² − 2m + m − 2) = 0
or      (m + 1)²(m + 1)(m − 2) = 0
        m = −1, −1, −1, 2
The complementary function is y = (c₁ + c₂x + c₃x²)e⁻ˣ + c₄e^{2x}.
The particular integral is
        y = 1/((D + 1)³(D − 2)){3e⁻ˣx}
          = 3e⁻ˣ·1/(((D − 1) + 1)³((D − 1) − 2)){x} = 3e⁻ˣ·1/(D³(D − 3)){x}
          = 3e⁻ˣ·1/(D³·(−3)(1 − D/3)){x}
          = −(e⁻ˣ/D³)(1 + D/3 + D²/9 + …){x}
          = −(e⁻ˣ/D³){x + 1/3} = −e⁻ˣ(1/D²){x²/2 + x/3}
          = −e⁻ˣ(1/D){x³/6 + x²/6} = −e⁻ˣ(x⁴/24 + x³/18)
        = [x − F′(D)/F(D)]·(1/F(D)){sin x}
        = [x − 2D/(D² + 9)]·(1/(D² + 9)){sin x}
        = [x − 2D/(D² + 9)]·{sin x/(−1² + 9)} = [x − 2D/(D² + 9)]·{(1/8) sin x}
        = (x sin x)/8 − (1/4)·(1/(−1² + 9))·D{sin x} = (x sin x)/8 − (1/32) cos x
Hence the general solution is
        y = C.F. + P.I. = c₁ cos 3x + c₂ sin 3x + (x sin x)/8 − (1/32) cos x
        = [x − 2D/(D² − 1)]·[x − 2D/(D² − 1)]·{sin x/(−1² − 1)}
        = [x − 2D/(D² − 1)]·[x − 2D/(D² − 1)]{−(1/2) sin x}
        = [x − 2D/(D² − 1)]·{−(x/2) sin x + (1/(D² − 1)){cos x}}
        = [x − 2D/(D² − 1)]·{−(x/2) sin x − (1/2) cos x}
        = −(x²/2) sin x − (x/2) cos x + (1/(D² − 1)){D(x sin x + cos x)}
        = −(x²/2) sin x − (x/2) cos x + (1/(D² − 1)){sin x + x cos x − sin x}
        = −(x²/2) sin x − (x/2) cos x + (1/(D² − 1)){x cos x}
Again,  (1/(D² − 1)){x cos x} = [x − 2D/(D² − 1)]·(1/(D² − 1)){cos x}
        = [x − 2D/(D² − 1)]·{cos x/(−1² − 1)}
        = −(x/2) cos x + (1/(D² − 1)){−sin x}
        = −(x/2) cos x + (1/2) sin x
Therefore
        P.I. = −(x²/2) sin x − (x/2) cos x − (x/2) cos x + (1/2) sin x
             = −(1/2)x² sin x − x cos x + (1/2) sin x
Hence the general solution is
        y = C.F. + P.I. = c₁eˣ + c₂e⁻ˣ − (1/2)x² sin x − x cos x + (1/2) sin x.
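This particular integral can also be confirmed symbolically; a sympy sketch, assuming the equation solved was (D² − 1)y = x² sin x, as the complementary function and the operators indicate:

```python
import sympy as sp

x = sp.symbols('x')
y = -x**2*sp.sin(x)/2 - x*sp.cos(x) + sp.sin(x)/2   # P.I. obtained above
lhs = sp.diff(y, x, 2) - y                           # (D^2 - 1) y
print(sp.simplify(lhs))   # x**2*sin(x)
```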
9.5.1 Classification of Linear Partial Differential Equations of Second Order
Consider the following linear partial differential equation of the second order in
two independent variables,
        A ∂²u/∂x² + B ∂²u/∂x∂y + C ∂²u/∂y² + D ∂u/∂x + E ∂u/∂y + Fu = G
Where A, B, C, D, E, F and G are functions of x and y.
This equation when converted to quasi-linear partial differential equation
takes the form,
        A ∂²u/∂x² + B ∂²u/∂x∂y + C ∂²u/∂y² + f(x, y, u, ∂u/∂x, ∂u/∂y) = 0
These equations are said to be of:
1. Elliptic type if B2 – 4AC < 0
2. Parabolic type if B2 – 4AC = 0
3. Hyperbolic type if B2 – 4AC > 0
Let us consider some examples to understand this:
(i)     ∂²u/∂x² − 2x ∂²u/∂x∂y + x² ∂²u/∂y² − 2 ∂u/∂y = 0,
i.e.,   u_xx − 2xu_xy + x²u_yy − 2u_y = 0
Comparing it with the general equation we find that,
A = 1, B = –2x, C = x2
Therefore
        B² − 4AC = (−2x)² − 4x² = 0, ∀ x and y
So the equation is parabolic at all points.
(ii) y2uxx + x2uyy = 0
Comparing it with the general equation we get,
        A = y², B = 0, C = x²
Therefore
        B² − 4AC = 0 − 4x²y² < 0, ∀ x ≠ 0 and y ≠ 0
So the equation is elliptic at all points.
(iii) x2uxx – y2uyy = 0
Comparing it with the general equation we find that,
        A = x², B = 0, C = −y²
Therefore
        B² − 4AC = 0 + 4x²y² > 0, ∀ x ≠ 0 and y ≠ 0
So the equation is hyperbolic at all points.
Following three are the most commonly used partial differential equations
of the second order:
1. Laplace equation
        ∂²u/∂x² + ∂²u/∂y² = 0
This equation is of elliptic type.
2. One-dimensional heat flow equation
        ∂u/∂t = c² ∂²u/∂x²
This equation is of parabolic type.
3. One-dimensional wave equation
        ∂²u/∂t² = c² ∂²u/∂x²
This is a hyperbolic equation.
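The discriminant test above is easy to mechanize. A sympy sketch (the helper `classify` is ours), applied at a sample point to each of the three examples:

```python
import sympy as sp

x, y = sp.symbols('x y')

def classify(A, B, C, point):
    """Classify A u_xx + B u_xy + C u_yy + ... = 0 at (x0, y0) via B^2 - 4AC."""
    disc = (B**2 - 4*A*C).subs({x: point[0], y: point[1]})
    if disc < 0:
        return 'elliptic'
    if disc == 0:
        return 'parabolic'
    return 'hyperbolic'

print(classify(1, -2*x, x**2, (3, 5)))     # parabolic
print(classify(y**2, 0, x**2, (1, 2)))     # elliptic
print(classify(x**2, 0, -y**2, (1, 2)))    # hyperbolic
```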
Complementary Function
Consider the equation,
        (A₀Dⁿ + A₁Dⁿ⁻¹D′ + A₂Dⁿ⁻²D′² + … + AₙD′ⁿ)z = 0          (9.42)
Let
        z = φ(y + mx)                                           (9.43)
be a solution of Equation (9.42).
Now     Dʳz = mʳ φ⁽ʳ⁾(y + mx)
        D′ˢz = φ⁽ˢ⁾(y + mx)
and     DʳD′ˢz = mʳ φ⁽ʳ⁺ˢ⁾(y + mx)
Therefore, on substituting Equation (9.43) in Equation (9.42), we get
        (A₀mⁿ + A₁mⁿ⁻¹ + A₂mⁿ⁻² + … + Aₙ) φ⁽ⁿ⁾(y + mx) = 0
which will be satisfied if
        A₀mⁿ + A₁mⁿ⁻¹ + A₂mⁿ⁻² + … + Aₙ = 0                     (9.44)
Equation (9.44) is known as the Auxiliary Equation.
Let m₁, m₂, …, mₙ be the roots of the Equation (9.44).
Then the following three cases arise:
Case I: Roots m₁, m₂, …, mₙ are distinct.
Part of C.F. corresponding to m = m₁ is
        z = φ₁(y + m₁x)
where φ₁ is an arbitrary function.
Part of C.F. corresponding to m = m₂ is
        z = φ₂(y + m₂x)
where φ₂ is any arbitrary function.
Now, since the equation is linear, the sum of solutions is also a solution.
Therefore, the complementary function becomes
        C.F. = φ₁(y + m₁x) + φ₂(y + m₂x) + … + φₙ(y + mₙx)
Case II: Roots are imaginary.
Let the pair of complex roots of the Equation (9.44) be
        u ± iv
Then the corresponding part of the complementary function is
        z = φ₁(y + ux + ivx) + φ₂(y + ux − ivx)                 (9.45)
Let y + ux = P and vx = Q.
Then    z = φ₁(P + iQ) + φ₂(P − iQ)
If we take
        φ₁ = (1/2)(ξ₁ + iξ₂)  and  φ₂ = (1/2)(ξ₁ − iξ₂)
and substitute these values in Equation (9.45), we get
        z = (1/2)ξ₁(P + iQ) + (1/2)iξ₂(P + iQ) + (1/2)ξ₁(P − iQ) − (1/2)iξ₂(P − iQ)
or
        z = (1/2){ξ₁(P + iQ) + ξ₁(P − iQ)} + (1/2)i{ξ₂(P + iQ) − ξ₂(P − iQ)}
Case III: Roots are repeated.
Let m be the repeated root of Equation (9.44).
Then we have
        (D − mD′)(D − mD′)z = 0
Putting  (D − mD′)z = U,                                        (9.46)
we get
        (D − mD′)U = 0                                          (9.47)
Since the equation is linear, it has the following subsidiary equations:
        dx/1 = dy/(−m) = dU/0                                   (9.48)
Two independent integrals of Equation (9.48) are
        y + mx = constant  and  U = constant
Therefore
        U = φ(y + mx)
is a solution of Equation (9.47), where φ is an arbitrary function.
Substituting in Equation (9.46),
        ∂z/∂x − m ∂z/∂y = φ(y + mx)                             (9.49)
which has the following subsidiary equations,
        dx/1 = dy/(−m) = dz/φ(y + mx)
Two independent integrals of these are
        y + mx = constant
or
        z = e^{−(cᵢ/bᵢ)y} ψᵢ(bᵢx − aᵢy)  if bᵢ ≠ 0
is the general solution of Equation (9.53). Here φᵢ and ψᵢ are arbitrary functions.
Example 26: Solve the differential equation
        (D² − D′² − 3D + 3D′)z = 0.
Solution: The equation can also be written as
        (D − D′)(D + D′ − 3)z = 0
        C.F. = φ₁(y + x) + e^{3x} φ₂(x − y)
Or
             = ψ₁(y + x) + e^{3y} ψ₂(x − y)
When the Factors are Repeated
Let the factor be repeated two times and be given by
        (aD + bD′ + c)
Consider the equation
        (aD + bD′ + c)(aD + bD′ + c)z = 0                       (9.55)
Or
        U = e^{−(c/b)y} ψ(bx − ay)  if b ≠ 0                    (9.59)
        dx/a = dy/b = dz/(e^{−(c/a)x} φ(bx − ay) − cz)          (9.61)
and     dz/dx + (c/a)z = (1/a) e^{−(c/a)x} φ(bx − ay) = (1/a) e^{−(c/a)x} φ(λ)   (9.63)
The Equation (9.63), being an ordinary linear equation, has the following solution:
        z e^{(c/a)x} = (x/a) φ(λ) + constant
or      z e^{(c/a)x} = (x/a) φ(bx − ay) + constant
Therefore, the general solution of Equation (9.60) is
        z = (x/a) e^{−(c/a)x} φ(bx − ay) + φ₁(bx − ay) e^{−(c/a)x}
          = e^{−(c/a)x} {xφ₂(bx − ay) + φ₁(bx − ay)}            (9.64)
where φ₁ and φ₂ are arbitrary functions.
Similarly, from Equations (9.59) and (9.56), we get
        z = e^{−(c/b)y} {yψ₂(bx − ay) + ψ₁(bx − ay)}
where ψ₁ and ψ₂ are arbitrary functions.
In general, for an r times repeated factor (aD + bD′ + c),
        z = e^{−(c/a)x} Σᵢ₌₁ʳ x^{i−1} φᵢ(bx − ay)  if a ≠ 0
Or
        z = e^{−(c/b)y} Σᵢ₌₁ʳ y^{i−1} ψᵢ(bx − ay)  if b ≠ 0
        e^{4y} φ(x + 2y)
C.F. corresponding to the factor (D + 2D′ + 1)² is
        e^{−x} {xφ₂(2x − y) + φ₁(2x − y)}
        = e^{−kx} Σᵢ₌₁^∞ cᵢ e^{bᵢ(y − hx)}                      (9.71)
        = Σᵢ₌₁^∞ cᵢ e^{aᵢx + bᵢy}
with    2aᵢ² + bᵢ = 0
or      bᵢ = −2aᵢ²
Therefore, the part of the C.F. corresponding to the other factor is
        Σᵢ₌₁^∞ dᵢ e^{eᵢ(x − eᵢy)}
so that
        C.F. = Σᵢ₌₁^∞ cᵢ e^{aᵢ(x − 2aᵢy)} + Σᵢ₌₁^∞ dᵢ e^{eᵢ(x − eᵢy)}
Particular Integral
In the equation
        f(D, D′)z = V(x, y)                                     (9.72)
f(D, D′) is a non-homogeneous function of D and D′.
        P.I. = (1/f(D, D′)) V(x, y)                             (9.73)
Here, if V(x, y) is of the form e^{ax+by}, where 'a' and 'b' are constants, then
we use the following theorem to evaluate the particular integral:
Theorem 9.1: If f(a, b) ≠ 0, then
        (1/f(D, D′)) e^{ax+by} = (1/f(a, b)) e^{ax+by}
Proof: By differentiation,
        Dʳ e^{ax+by} = aʳ e^{ax+by}
and     DʳD′ˢ e^{ax+by} = aʳbˢ e^{ax+by}
so that
        f(D, D′) e^{ax+by} = f(a, b) e^{ax+by}
Dividing the above equation by f(a, b),
        e^{ax+by} = f(D, D′) {(1/f(a, b)) e^{ax+by}}
or      (1/f(D, D′)) e^{ax+by} = (1/f(a, b)) e^{ax+by}
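The identity f(D, D′)e^{ax+by} = f(a, b)e^{ax+by} behind Theorem 9.1 can be checked for a sample operator; here f(D, D′) = D² − D′² − 3D + 3D′ (the operator of Example 26) is our choice:

```python
import sympy as sp

x, y, a, b = sp.symbols('x y a b')
u = sp.exp(a*x + b*y)

# Apply f(D, D') = D^2 - D'^2 - 3D + 3D' to e^{ax+by}
fDDp = (sp.diff(u, x, 2) - sp.diff(u, y, 2)
        - 3*sp.diff(u, x) + 3*sp.diff(u, y))
fab = a**2 - b**2 - 3*a + 3*b                   # f(a, b)
print(sp.simplify(fDDp - fab*u))                # 0
```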
Theorem 9.2: If φ(x, y) is any function, then
        (1/f(D, D′)) {e^{ax+by} φ(x, y)} = e^{ax+by} (1/f(D + a, D′ + b)) φ(x, y)
Proof: From Leibnitz's theorem for successive differentiation, we have
        Dʳ{e^{ax+by} φ(x, y)} = e^{ax+by} {Dʳφ(x, y) + ʳC₁ a Dʳ⁻¹φ(x, y)
                + ʳC₂ a² Dʳ⁻²φ(x, y) + … + ʳCᵣ aʳ φ(x, y)}
            = e^{ax+by} {Dʳ + ʳC₁ a Dʳ⁻¹ + ʳC₂ a² Dʳ⁻² + … + ʳCᵣ aʳ} φ(x, y)
            = e^{ax+by} (D + a)ʳ φ(x, y).
Similarly
        D′ˢ{e^{ax+by} φ(x, y)} = e^{ax+by} (D′ + b)ˢ φ(x, y)
and     DʳD′ˢ{e^{ax+by} φ(x, y)} = Dʳ[e^{ax+by} (D′ + b)ˢ φ(x, y)]
            = e^{ax+by} (D + a)ʳ (D′ + b)ˢ φ(x, y)
So      f(D, D′) {e^{ax+by} φ(x, y)} = e^{ax+by} f(D + a, D′ + b) φ(x, y)    (9.74)
Let     (1/f(D + a, D′ + b)) φ(x, y) = ψ(x, y)
Substituting in Equation (9.74), we get
        f(D, D′) {e^{ax+by} ψ(x, y)} = e^{ax+by} f(D + a, D′ + b) ψ(x, y) = e^{ax+by} φ(x, y)
Operating on the equation by 1/f(D, D′),
        e^{ax+by} (1/f(D + a, D′ + b)) φ(x, y) = (1/f(D, D′)) {e^{ax+by} φ(x, y)}
Hence
        (1/f(D, D′)) {e^{ax+by} φ(x, y)} = e^{ax+by} (1/f(D + a, D′ + b)) φ(x, y)
Example 30: Solve (D² − D′² − 3D + 3D′)z = xy + e^{x+2y}.
Solution:
P.I. = 1/((D − D′)(D + D′ − 3)){xy} + 1/((D − D′)(D + D′ − 3)){e^{x+2y}}
     = −(1/(3D))(1 − D′/D)⁻¹(1 − (D + D′)/3)⁻¹{xy}
       + e^{x+2y}·1/((D + 1 − D′ − 2)(D + 1 + D′ + 2 − 3)){1}
     = −(1/(3D))[1 + D′/D + D′²/D² + …][1 + (D + D′)/3 + (D² + 2DD′ + D′²)/9 + …]{xy}
       + e^{x+2y}·1/((D − D′ − 1)(D + D′)){1}
     = −(1/(3D)){xy + x²/2 + (2/3)x + (1/3)y + 2/9} − xe^{x+2y}
     = −(1/3)[x²y/2 + x³/6 + x²/3 + xy/3 + 2x/9] − xe^{x+2y}.
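The particular integral of Example 30 can be substituted back into the equation. A sympy sketch:

```python
import sympy as sp

x, y = sp.symbols('x y')
zp = (-(x**2*y/2 + x**3/6 + x**2/3 + x*y/3 + sp.Rational(2, 9)*x)/3
      - x*sp.exp(x + 2*y))                       # P.I. from Example 30
L = (sp.diff(zp, x, 2) - sp.diff(zp, y, 2)
     - 3*sp.diff(zp, x) + 3*sp.diff(zp, y))       # (D^2 - D'^2 - 3D + 3D') z
print(sp.simplify(L))   # x*y + exp(x + 2*y)
```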
Example 31: Solve (D² − DD′ + D′ − 1)z = cos(x + 2y) + e^y + xy + 1.
Solution: The equation is equivalent to
        (D − 1)(D − D′ + 1)z = cos(x + 2y) + e^y + xy + 1
Complementary Function = eˣ φ₁(y) + e^y φ₂(x + y).
Particular integral corresponding to cos(x + 2y) is
        1/(D² − DD′ + D′ − 1) cos(x + 2y)
        = 1/((−1) − (−2) + D′ − 1) cos(x + 2y)
        = (1/D′) cos(x + 2y)
        = (1/2) sin(x + 2y)
Corresponding to e^y, the particular integral is
        = 1/(D² − DD′ + D′ − 1){e^y}
        = e^y·1/(D² − D(D′ + 1) + (D′ + 1) − 1){1}
        = e^y·(1/D′){1}   [on a function of y alone the shifted operator reduces to D′]
        = ye^y.
Corresponding to xy + 1, the particular integral is
        = −{1 + D + D² + …}{1 − (D − D′) + (D − D′)² − …}(xy + 1)
        = −{1 + D + D² + …}{(xy + 1) − (y − x) − 2}
        = −{1 + D + D² + …}(xy − y + x − 1)
        = −{(xy − y + x − 1) + (y + 1)}
        = −(xy + x)
        = −x(y + 1)
Hence
        z = eˣ φ₁(y) + e^y φ₂(x + y) + (1/2) sin(x + 2y) + ye^y − x(y + 1)
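The three partial particular integrals of Example 31 can be checked together by one substitution. A sympy sketch:

```python
import sympy as sp

x, y = sp.symbols('x y')
zp = sp.sin(x + 2*y)/2 + y*sp.exp(y) - x*(y + 1)   # combined P.I. from Example 31
L = (sp.diff(zp, x, 2) - sp.diff(zp, x, y)
     + sp.diff(zp, y) - zp)                         # (D^2 - DD' + D' - 1) z
print(sp.simplify(L))   # cos(x + 2*y) + exp(y) + x*y + 1
```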
9.7 PARTIAL DIFFERENTIAL EQUATIONS
        x²D² = x² (−(1/x²) ∂/∂u + (1/x²) ∂²/∂u²)
             = ∂²/∂u² − ∂/∂u
             = d(d − 1)
Therefore,
        xʳDʳ = d(d − 1)(d − 2)…(d − r + 1)
and     yˢD′ˢ = d′(d′ − 1)(d′ − 2)…(d′ − s + 1)
Hence   f(xD, yD′) = Σ c_rs d(d − 1)…(d − r + 1) d′(d′ − 1)…(d′ − s + 1)
                   = g(d, d′)
Here the coefficients in g(d, d′) are constants.
Thus, by this substitution, Equation (9.75) is reduced to
        g(d, d′)z = V(e^u, e^v)
Or      g(d, d′)z = U(u, v)                                     (9.77)
Equation (9.77) can be solved by the methods that have been described for
solving partial differential equations with constant coefficients.
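The operator identity x²D² = d(d − 1) behind this reduction can be checked on a sample function. A sympy sketch, using the test function u³ + 2u (our choice):

```python
import sympy as sp

X, u = sp.symbols('X u', positive=True)
f_u = u**3 + 2*u                  # arbitrary test function of u
f_x = f_u.subs(u, sp.log(X))      # the same function in x, with u = log x

lhs = sp.expand(X**2*sp.diff(f_x, X, 2))                         # x^2 D^2 f
rhs = (sp.diff(f_u, u, 2) - sp.diff(f_u, u)).subs(u, sp.log(X))  # d(d - 1) f
print(sp.simplify(lhs - rhs))   # 0
```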
Example 32: Solve the differential equation,
        (x²D² − 4xyDD′ + 4y²D′² + 6yD′)z = x³y⁴
Solution: Put u = log x, v = log y.
The given equation can be reduced to
        {d(d − 1) − 4dd′ + 4d′(d′ − 1) + 6d′}z = e^{3u+4v}
or      (d − 2d′)(d − 2d′ − 1)z = e^{3u+4v}
        C.F. = φ₁(log x²y) + xφ₂(log x²y)
             = ψ₁(x²y) + xψ₂(x²y)
And the particular integral is
        P.I. = 1/((d − 2d′)(d − 2d′ − 1)) e^{3u+4v}
             = e^{3u+4v}/((3 − 8)(3 − 8 − 1)) = (1/30) e^{3u+4v}
             = (1/30) x³y⁴
        z = ψ₁(x²y) + xψ₂(x²y) + (1/30) x³y⁴.
Example 33: Find the solution of (x²D² − y²D′² − yD′ + xD)z = 0.
Solution: Put u = log x, v = log y.
The given equation can be reduced to
        {d(d − 1) − d′(d′ − 1) − d′ + d}z = 0
or      (d² − d′²)z = 0
The A.E. is
        m² − 1 = 0,  m = 1, −1
        z = φ₁(v + u) + φ₂(v − u)
          = φ₁(log xy) + φ₂(log(y/x))
          = Ψ₁(xy) + Ψ₂(y/x).
Example 34: Determine the solution of the following equation:
        (x²D² + 2xyDD′ + y²D′²)z + nz = n(xD + yD′)z + x² + y² + x³
Solution: Put u = log x, v = log y.
The equation reduces to
        {d(d − 1) + 2dd′ + d′(d′ − 1)}z − n(d + d′)z + nz = e^{2u} + e^{2v} + e^{3u}
or      {(d + d′)² − (n + 1)(d + d′) + n}z = e^{2u} + e^{2v} + e^{3u}
or      (d + d′ − n)(d + d′ − 1)z = e^{2u} + e^{2v} + e^{3u}
        C.F. = e^{nu} φ₁(u − v) + e^{u} φ₂(u − v)
             = xⁿ ψ₁(x/y) + x ψ₂(x/y)
        P.I. = 1/((d + d′ − n)(d + d′ − 1)) {e^{2u} + e^{2v} + e^{3u}}
             = e^{2u}/((2 − n)(2 − 1)) + e^{2v}/((2 − n)(2 − 1)) + e^{3u}/((3 − n)(3 − 1))
             = −(x² + y²)/(n − 2) − (1/2)·x³/(n − 3)
        z = xⁿ ψ₁(x/y) + x ψ₂(x/y) − (x² + y²)/(n − 2) − x³/(2(n − 3))
Example 35: Solve (x²D² − xyDD′ − 2y²D′² + xD − 2yD′)z = log(y/x) − 1/2.
Solution: Put u = log x, v = log y.
The equation reduces to
        {d(d − 1) − dd′ − 2d′(d′ − 1) + d − 2d′}z = v − u − 1/2
or      (d − 2d′)(d + d′)z = v − u − 1/2
        C.F. = φ₁(2u + v) + φ₂(u − v)
             = ψ₁(x²y) + ψ₂(x/y)
        P.I. = 1/((d − 2d′)(d + d′)) {v − u − 1/2}
             = (1/(d − 2d′))·(1/d)(1 − d′/d + …){v − u − 1/2}
             = (1/(d − 2d′))·(1/d){v − u − 1/2 − u}
             = (1/(d − 2d′)){uv − u² − u/2}
             = (1/d)[1 + 2d′/d + 4d′²/d² + …]{uv − u² − u/2}
             = (1/d){uv − u² − u/2 + u²}
             = (1/d){uv − u/2} = u²v/2 − u²/4
             = (1/2)(log x)² log y − (1/4)(log x)²
        z = ψ₁(x²y) + ψ₂(x/y) + (1/2)(log x)² log y − (1/4)(log x)².
Example 36: Solve the differential equation,
        (x²D² + 2xyDD′ + y²D′²)z = (x² + y²)^{n/2}
Solution: Put u = log x, v = log y.
The equation reduces to
        {d(d − 1) + 2dd′ + d′(d′ − 1)}z = (e^{2u} + e^{2v})^{n/2}
or      {(d + d′)² − (d + d′)}z = (e^{2u} + e^{2v})^{n/2}
or      (d + d′)(d + d′ − 1)z = (e^{2u} + e^{2v})^{n/2}
        C.F. = φ₁(u − v) + e^{u} φ₂(u − v)
             = φ₁(log(x/y)) + xφ₂(log(x/y))
             = Ψ₁(x/y) + xΨ₂(x/y)
The particular integral is
        P.I. = 1/((d + d′)(d + d′ − 1)) (e^{2u} + e^{2v})^{n/2}
Substituting Z = 1/(d + d′ − 1) (e^{2u} + e^{2v})^{n/2}, i.e.,
        ∂Z/∂u + ∂Z/∂v = Z + (e^{2u} + e^{2v})^{n/2}
The subsidiary equations are
        du/1 = dv/1 = dZ/(Z + (e^{2u} + e^{2v})^{n/2})
Along u − v = a (a constant), e^{2u} + e^{2v} = e^{2v}(e^{2a} + 1), so that
        dZ/dv − Z = e^{nv}(e^{2a} + 1)^{n/2}
        Ze^{−v} = e^{(n−1)v}(e^{2a} + 1)^{n/2}/(n − 1)
        Z = e^{nv}(e^{2a} + 1)^{n/2}/(n − 1) = (e^{2u} + e^{2v})^{n/2}/(n − 1)
Then
        P.I. = 1/(d + d′) {(e^{2u} + e^{2v})^{n/2}/(n − 1)}
             = (1/(n − 1)) ∫ (e^{2u} + e^{2a+2u})^{n/2} du |_{a = v − u}
             = (1/(n − 1)) ∫ (e^{2a} + 1)^{n/2} e^{nu} du |_{a = v − u}
             = (1/(n(n − 1))) e^{nu}(e^{2a} + 1)^{n/2} |_{a = v − u}
             = (e^{2u} + e^{2v})^{n/2}/(n(n − 1))
             = (x² + y²)^{n/2}/(n(n − 1))
Hence
        z = Ψ₁(x/y) + xΨ₂(x/y) + (x² + y²)^{n/2}/(n(n − 1)).
Example 37: Solve (x²D² − 2xyDD′ + y²D′² − xD + 3yD′)z = 8y/x.
Solution: Put u = log x, v = log y.
The equation reduces to
        {d(d − 1) − 2dd′ + d′(d′ − 1) − d + 3d′}z = 8e^{v−u}
or      (d − d′)(d − d′ − 2)z = 8e^{v−u}
        C.F. = φ₁(u + v) + e^{2u} φ₂(u + v)
             = ψ₁(xy) + x²ψ₂(xy)
        P.I. = 8·1/((d − d′)(d − d′ − 2)) e^{v−u}
             = 8·e^{v−u}/((−1 − 1)(−1 − 1 − 2)) = e^{v−u}
             = y/x
        z = ψ₁(xy) + x²ψ₂(xy) + y/x.
Example 38: Solve (x²D² + 2xyDD′ + y²D′²)z = x^m y^n.
Solution: Put u = log x, v = log y.
The equation reduces to
        {d(d − 1) + 2dd′ + d′(d′ − 1)}z = e^{mu+nv}
or      {(d + d′)² − (d + d′)}z = e^{mu+nv}
or      (d + d′)(d + d′ − 1)z = e^{mu+nv}
        C.F. = φ₁(u − v) + e^{u} φ₂(u − v)
             = ψ₁(x/y) + xψ₂(x/y)
        P.I. = 1/((d + d′)(d + d′ − 1)) e^{mu+nv}
             = e^{mu+nv}/((m + n)(m + n − 1))
             = x^m y^n/((m + n)(m + n − 1))
        z = ψ₁(x/y) + xψ₂(x/y) + x^m y^n/((m + n)(m + n − 1)).
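The particular integral of Example 38 can be confirmed with symbolic m and n. A sympy sketch:

```python
import sympy as sp

x, y, m, n = sp.symbols('x y m n')
zp = x**m*y**n/((m + n)*(m + n - 1))   # P.I. from Example 38
L = (x**2*sp.diff(zp, x, 2) + 2*x*y*sp.diff(zp, x, y)
     + y**2*sp.diff(zp, y, 2))          # (x^2 D^2 + 2xy DD' + y^2 D'^2) z
print(sp.cancel(L/(x**m*y**n)))         # 1, i.e. L equals x^m y^n
```

The check works because x^m y^n is homogeneous of degree m + n, so the Euler operator multiplies it by (m + n)(m + n − 1).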
Check Your Progress
5. Write the general linear differential equation with constant coefficients.
6. What are the three types of second order partial differential equations?
7. What is the complementary function of the equation
   (A₀Dⁿ + A₁Dⁿ⁻¹D′ + A₂Dⁿ⁻²D′² + … + AₙD′ⁿ)z = 0 if the roots are
   distinct?
8. When is a non-homogeneous equation said to be reducible?
9. Which mathematical function is used to reduce partial differential
equations to equations with constant coefficients?
9.9 SUMMARY
Short-Answer Questions
1. Define partial differential equations with suitable examples.
2. How will you identify the order of a partial differential equation?
3. Which equations are termed as singular integral?
4. How will you determine the degree of the partial differential equation?
5. What is a spectrum?
6. Define Wronskian of functions.
7. Give examples of parabolic, elliptic and hyperbolic type equations.
8. What is the difference between homogeneous and non-homogeneous
   differential equations?
Long-Answer Questions
1. Solve the following differential equations:
a. (3 z − 4 y ) p + (4 x − 2 z )q = 2 y − 3 x
b. x( z 2 − y 2 ) p + y ( x 2 − z 2 )q = z ( y 2 − x 2 )
2. How does the frequency of the fundamental mode of the vibrating string
depend on the (a) Length of the string (b) On the mass per unit length (c)
On the tension? What happens to that frequency if we double the tension?
3. Find u(x, t) of the string of length L = π when c² = 1, the initial velocity is zero, and the initial deflection is
a. 0.01 sin 3x.
b. k(sin x − ½ sin 2x).
c. 0.1x(π − x).
d. 0.1x(π² − x²).
4. Find the deflection u(x, t) of the string of length L = π and c² = 1 for zero initial displacement and 'triangular' initial velocity u_t(x, 0) = 0.01x if 0 < x < π/2, u_t(x, 0) = 0.01(π − x) if π/2 < x < π. (Initial conditions with u_t(x, 0) ≠ 0 are hard to realize experimentally.)
5. Find solutions u(x, y) of the following equations by separating variables.
a. u_x + u_y = 0.
b. u_x − u_y = 0.
c. y²u_x − x²u_y = 0.
d. u_x + u_y = (x + y)u.
e. u_xx + u_yy = 0.
f. u_xy − u = 0.
g. u_xx − u_yy = 0.
h. xu_xy + 2yu = 0.
6. Show that
a. The substitution of u(x, t) = Σ_{n=1}^{∞} G_n(t) sin(nπx/L) (L = length of the string) into the wave equation governing free vibrations leads to
G̈_n + λ_n²G_n = 0, λ_n = cnπ/L.
b. Forced vibrations of the string under an external force P(x, t) per unit length acting normal to the string are governed by the equation
u_tt = c²u_xx + P/ρ.
7. Find Complete Integrals of the following equations:
a. p² + px + q = z.
b. p²x + q²y = z.
c. px + qy = z√(1 + pq).
d. p(1 + q²) = q(z − a).
e. pq + x(2y + 1)p + (y² + y)q − (2y + 1)z = 0.
f. (pq)(px + qy) = 1.
g. pxy + pq + qy = yz.
h. (p² + q²)x = pz.
i. 2(y + zq) = q(xp + yq).
8. Solve the equations:
a. (D² + DD′ − D′³)z = 0.
b. (D³ + 3D²D′ − 4D′³)z = 0.
9. Solve the equations:
a. (D² + 2DD′ + D′²)z = 12xy.
b. (D² − 2DD′ − 15D′²)z = 12xy.
c. (D² − 6DD′ − 9D′²)z = 12x² + 16xy.
d. (D³ − 7DD′² − 6D′³)z = x² + xy² + y³.
e. (D²D′ − 2DD′² + D′³)z = 1/x².
b. (D² − 3DD′ + 2D′²)z = x + y.
f. (D³ − 3DD′² + 2D′³)z = x − 2y.
g. (D³ − 4D²D′ + 5DD′² − 2D′³)z = e^(y + 2x) + y + x.
b. (D² + 5DD′ + 5D′²)z = x sin(3x − 2y).
12. Solve the equations:
a. (D² − DD′ − 2D′²)z = (y − 1)e^x.
b. (D³ − 3DD′² − 2D′³)z = cos(x + 2y) − e^y(3 + 2x).
13. Solve the equations:
a. (DD′ + D′² − 3D′)z = 0.
b. (D² + DD′ + D + D′ + 1)z = 0.
b. (D² − D′)z = A cos(lx + my), where A, l, m are constants.
17. Solve the equations:
a. (D + D′ − 1)(D + 2D′ − 3)z = 4 + 3x + 6y.
b. (D³ − DD′² − D² + DD′)z = (x + 2)/x³.
c. (D² − D′)z = 2y − x².
18. Solve the equations:
a. (D − D′²)z = cos(x − 3y).
b. (x²D² + 2xyDD′ + y²D′²)z = x²y².
c. z + r = x cos(x + y).
25. Solve the differential equation, r − 2yp + y²z = (y − 2)e^(2x + 3y).
UNIT 10 ORDINARY DIFFERENTIAL
EQUATIONS
Structure
10.0 Introduction
10.1 Objectives
10.2 Ordinary Differential Equations
10.3 Answers to Check Your Progress Questions
10.4 Summary
10.5 Key Words
10.6 Self Assessment Questions and Exercises
10.7 Further Readings
10.0 INTRODUCTION
10.1 OBJECTIVES
Even though there are many methods to find an analytical solution of ordinary
differential equations, for many differential equations solutions in closed form cannot
be obtained. There are many methods available for finding a numerical solution for
differential equations. We consider the solution of an initial value problem associated with a first order differential equation given by,
dy/dx = f(x, y) (10.1)
with y(x0) = y0 (10.2)
In general, the solution of the differential equation may not always exist. For the existence of a unique solution of the differential Equation (10.1), the following conditions, known as Lipschitz conditions, must be satisfied:
(i) The function f(x, y) is defined and continuous in the strip R : x0 ≤ x ≤ b, −∞ < y < ∞
(ii) There exists a constant L such that for any x in (x0, b) and any two numbers y and y1,
|f(x, y) − f(x, y1)| ≤ L|y − y1| (10.3)
The numerical solution of initial value problems consists of finding the approximate numerical solution of y at successive steps x1, x2,..., xn of x. A number of good methods are available for computing the numerical solution of differential equations.
The integral contains the unknown function y (x) and it is not possible to
integrate it directly. In Picard’s method, the first approximate solution y (1) ( x) is
obtained by replacing y (x) by y0.
Thus, y^(1)(x) = y0 + ∫_{x0}^{x} f(x, y0)dx (10.5)
y^(2)(x) = y0 + ∫_{x0}^{x} f(x, y^(1)(x))dx (10.6)
The process can be continued, so that we have the general approximate solution given by,
y^(n)(x) = y0 + ∫_{x0}^{x} f(x, y^(n−1)(x))dx, for n = 2, 3,... (10.7)
Solution: We have dy/dx = x²/(1 + y²), y(0) = 0
The first approximation is y^(1)(x) = 0 + ∫_{0}^{x} x²dx = x³/3
For x = 0.25, y^(1)(0.25) = (0.25)³/3 = 0.0052
The second approximation is y^(2)(x) = ∫_{0}^{x} x²/(1 + x⁶/9)dx = tan⁻¹(x³/3), so that
y^(2)(0.25) = tan⁻¹(0.0052) = 0.0052
∴ y(0.25) = 0.005, correct to three decimal places.
Again, for x = 0.5, y^(1)(0.5) = (0.5)³/3 = 0.0417
y^(2)(0.5) = tan⁻¹((0.5)³/3) = 0.0416
Thus, correct to three decimal places, y(0.5) = 0.042.
Note: For this problem we observe that the integral for getting the third and higher approximate solutions is either difficult or impossible to evaluate, since
y^(3)(x) = ∫_{0}^{x} x²/(1 + (tan⁻¹(x³/3))²)dx is not integrable in closed form.
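The closed-form iterates above are easy to verify numerically. The short sketch below (Python, illustrative only) evaluates the second Picard iterate y^(2)(x) = tan⁻¹(x³/3) at the two points used above:

```python
import math

# Second Picard iterate for y' = x^2/(1 + y^2), y(0) = 0,
# obtained in closed form above as y^(2)(x) = arctan(x^3 / 3).
def picard2(x):
    return math.atan(x ** 3 / 3)

for x in (0.25, 0.5):
    print(f"y2({x}) = {picard2(x):.4f}")
```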
Example 4: Use Picard’s method to find two successive approximate solutions
of the initial value problem,
dy/dx = (y − x)/(y + x), y(0) = 1
We observe that, it is not possible to obtain the integral for getting y(2)(x). Thus
Picard’s method is not applicable to get successive approximate solutions.
Multistep Methods
We have seen that for finding the solution at each step, the Taylor series method and Runge-Kutta methods require evaluation of several derivatives. We shall now develop the multistep methods, which require only one derivative evaluation per step; but unlike the self-starting Taylor series or Runge-Kutta methods, the multistep methods make use of the solution at more than one previous step point.
Let the values of y and y′ already have been evaluated by self-starting methods at a number of equally spaced points x0, x1,..., xn. We now integrate the differential equation,
dy/dx = f(x, y), from xn to xn+1
i.e., ∫_{xn}^{xn+1} dy = ∫_{xn}^{xn+1} f(x, y)dx
yn+1 = yn + ∫_{xn}^{xn+1} f(x, y(x))dx
To evaluate the integral on the right hand side, we consider f(x, y) as a function of x and replace it by an interpolating polynomial, i.e., a Newton's backward difference interpolation using the (m + 1) points xn, xn−1, xn−2,..., xn−m,
pm(x) = Σ_{k=0}^{m} (−1)^k C(−s, k) ∇^k fn, where s = (x − xn)/h
and C(−s, k) = (−s)(−s − 1)(−s − 2)...(−s − k + 1)/k!
Substituting this for f(x, y) and integrating (s runs from 0 to 1),
yn+1 = yn + h Σ_{k=0}^{m} (−1)^k [∫_{0}^{1} C(−s, k)ds] ∇^k fn
= yn + h[γ0 fn + γ1∇fn + γ2∇²fn + ... + γm∇^m fn]
where γk = (−1)^k ∫_{0}^{1} C(−s, k)ds
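Truncating the series after the first difference gives the familiar two-step Adams-Bashforth formula yn+1 = yn + (h/2)(3fn − fn−1). The sketch below (an illustration, with the test problem y′ = x + y, y(0) = 1 chosen here; its exact solution is 2e^x − x − 1) starts the method with one Euler step:

```python
import math

def f(x, y):
    return x + y  # illustrative test problem y' = x + y, y(0) = 1

h, x, y = 0.05, 0.0, 1.0
# One Euler step supplies the second starting value for the two-step method.
f_prev = f(x, y)
y = y + h * f_prev
x = x + h
while x < 1.0 - 1e-12:
    f_curr = f(x, y)
    # Two-step Adams-Bashforth: y_{n+1} = y_n + h/2 (3 f_n - f_{n-1})
    y = y + h * (3 * f_curr - f_prev) / 2
    x += h
    f_prev = f_curr

exact = 2 * math.exp(1.0) - 1.0 - 1.0   # exact solution 2e^x - x - 1 at x = 1
print(abs(y - exact))
```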
Predictor-Corrector Methods
These methods use a pair of multistep numerical integration formulas. The first is the Predictor formula, an open-type explicit formula derived by using, in the integral, an interpolation formula which interpolates at the points xn, xn−1,..., xn−m. The second is the Corrector formula, which is obtained by using an interpolation formula that interpolates at the points xn+1, xn,..., xn−p in the integral.
yn+1^(c) = yn + (h/2)[f(xn, yn) + f(xn+1, yn+1^(p))] (10.12)
In order to determine the solution of the problem upto a desired accuracy, the
corrector formula can be employed in an iterative manner as shown below:
Step 1: Compute yn+1^(0) using Equation (10.11),
i.e., yn+1^(0) = yn + h f(xn, yn)
Step 2: Compute yn+1^(k) using Equation (10.12),
i.e., yn+1^(k) = yn + (h/2)[f(xn, yn) + f(xn+1, yn+1^(k−1))], for k = 1, 2, 3,...
The computation is continued till the condition given below is satisfied,
|yn+1^(k) − yn+1^(k−1)| / |yn+1^(k)| < ε (10.13)
where ε is the prescribed accuracy.
It may be noted that the accuracy achieved will depend on the step size h and on the local error. The local errors in the predictor and corrector formulae are (h²/2)y″(ξ1) and −(h³/12)y‴(ξ2), respectively.
The Predictor formula gives, y4 = y(0.4) = y0 + (4h/3)(2y1′ − y2′ + 2y3′).
∴ y4^(0) = 1 + (4 × 0.1/3)(2 × 1.11053 − 1.24458 + 2 × 1.40658) = 1.50528
∴ y4′ = 1 + 0.4 × 1.50528 = 1.602112
The Corrector formula gives, y4^(1) = y2 + (h/3)(y2′ + 4y3′ + y4′).
∴ y(0.4) = 1.22288 + (0.1/3)(1.24458 + 4 × 1.40658 + 1.60211)
= 1.22288 + 0.28243 = 1.50531
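The computation above can be scripted directly. The sketch below assumes, as the step y4′ = 1 + 0.4 × y4 implies, that the underlying equation is y′ = 1 + xy, and uses the tabulated values y0 = 1, y2 = 1.22288 and the slopes y1′, y2′, y3′:

```python
def f(x, y):
    # Derivative relation inferred from the step y4' = 1 + 0.4 * y4 above.
    return 1 + x * y

h = 0.1
y0, y2 = 1.0, 1.22288                     # previously computed solution values
d1, d2, d3 = 1.11053, 1.24458, 1.40658    # slopes y1', y2', y3'

# Milne predictor: y4 = y0 + 4h/3 (2 y1' - y2' + 2 y3')
y4_pred = y0 + 4 * h / 3 * (2 * d1 - d2 + 2 * d3)
d4 = f(0.4, y4_pred)
# Milne corrector: y4 = y2 + h/3 (y2' + 4 y3' + y4')
y4_corr = y2 + h / 3 * (d2 + 4 * d3 + d4)
print(round(y4_pred, 5), round(y4_corr, 5))
```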
y″(xn) ≈ (yn+1 − 2yn + yn−1)/h² (10.27)
Substituting these in the differential equation, we have
2(yn+1 − 2yn + yn−1) + pnh(yn+1 − yn−1) + 2h²qnyn = 2rnh²,
where pn = p(xn), qn = q(xn), rn = r(xn) (10.28)
Rewriting the equation by regrouping we get,
(2 − hpn)yn−1 + (−4 + 2h²qn)yn + (2 + hpn)yn+1 = 2rnh² (10.29)
This equation is to be considered at each of the interior points, i.e., it is true for n = 1, 2,..., N − 1.
The boundary conditions of the problem are given by,
y0 = α, yN = β (10.30)
Introducing these conditions in the relevant equations and arranging them, we have the following system of linear equations in (N − 1) unknowns y1, y2,..., y_{N−1}.
      | B1   C1   0    0   ...  0        0        0      |
      | A2   B2   C2   0   ...  0        0        0      |
A =   | 0    A3   B3   C3  ...  0        0        0      |      (10.33)
      | ...  ...  ...  ...      ...      ...      ...    |
      | 0    0    0    0   ...  A_{N−2}  B_{N−2}  C_{N−2}|
      | 0    0    0    0   ...  0        A_{N−1}  B_{N−1}|
where Bi = −4 + 2h²qi, i = 1, 2,..., N − 1
Ci = 2 + hpi, i = 1, 2,..., N − 2 (10.34)
Ai = 2 − hpi, i = 2, 3,..., N − 1
and the right hand side entries are
b1 = 2r1h² − α(2 − hp1)
bi = 2rih², for i = 2, 3,..., N − 2 (10.35)
b_{N−1} = 2r_{N−1}h² − β(2 + hp_{N−1})
The system of linear equations can be directly solved using suitable methods.
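One suitable method is the Thomas algorithm (forward elimination and back substitution for tridiagonal systems). The sketch below applies the scheme (10.29) to the test problem y″ + y = 0 on [0, π/2] with y(0) = 0, y(π/2) = 1 (chosen here for illustration; exact solution y = sin x), i.e., p = 0, q = 1, r = 0:

```python
import math

# y'' + p(x) y' + q(x) y = r(x),  y(a) = alpha, y(b) = beta
p = lambda x: 0.0
q = lambda x: 1.0
r = lambda x: 0.0
a, b, alpha, beta, N = 0.0, math.pi / 2, 0.0, 1.0, 50

h = (b - a) / N
xs = [a + i * h for i in range(N + 1)]
# (2 - h p_n) y_{n-1} + (-4 + 2 h^2 q_n) y_n + (2 + h p_n) y_{n+1} = 2 r_n h^2
A = [2 - h * p(xs[i]) for i in range(1, N)]
B = [-4 + 2 * h * h * q(xs[i]) for i in range(1, N)]
C = [2 + h * p(xs[i]) for i in range(1, N)]
d = [2 * r(xs[i]) * h * h for i in range(1, N)]
d[0] -= alpha * A[0]        # boundary value y_0 moved to the right hand side
d[-1] -= beta * C[-1]       # boundary value y_N moved to the right hand side

# Thomas algorithm: forward elimination, then back substitution
for i in range(1, N - 1):
    m = A[i] / B[i - 1]
    B[i] -= m * C[i - 1]
    d[i] -= m * d[i - 1]
y = [0.0] * (N - 1)
y[-1] = d[-1] / B[-1]
for i in range(N - 3, -1, -1):
    y[i] = (d[i] - C[i] * y[i + 1]) / B[i]

err = max(abs(y[i] - math.sin(xs[i + 1])) for i in range(N - 1))
print("max error:", err)
```

The maximum error decreases like h², as expected of the central-difference discretization.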
Example 6: Compute values of y(1.1) and y(1.2) on solving the following initial value problem, using Runge-Kutta method of order 4:
y″ + y′/x + y = 0, with y(1) = 0.77, y′(1) = −0.44
Solution: We first rewrite the initial value problem in the form of a pair of first order equations,
y′ = z, z′ = −z/x − y
with y(1) = 0.77 and z(1) = −0.44.
We now employ Runge-Kutta method of order 4 with h = 0.1,
y(1.1) = y(1) + (1/6)(k1 + 2k2 + 2k3 + k4)
y′(1.1) = z(1.1) = z(1) + (1/6)(l1 + 2l2 + 2l3 + l4)
k1 = 0.1 × (−0.44) = −0.044
l1 = 0.1 × (0.44/1 − 0.77) = −0.033
k2 = 0.1 × (−0.44 − 0.033/2) = −0.04565
l2 = 0.1 × (0.4565/1.05 − 0.748) = −0.031324
k3 = 0.1 × (−0.44 − 0.031324/2) = −0.045566
l3 = 0.1 × (0.455662/1.05 − 0.747175) = −0.031321
k4 = 0.1 × (−0.44 − 0.031321) = −0.047132
l4 = 0.1 × (0.471321/1.1 − 0.724434) = −0.029596
∴ y(1.1) = 0.77 + (1/6)[−0.044 + 2 × (−0.04565) + 2 × (−0.045566) − 0.047132] = 0.724406
y′(1.1) = −0.44 + (1/6)[−0.033 + 2 × (−0.031324) + 2 × (−0.031321) − 0.029596]
= −0.44 − 0.031314 = −0.471314
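Hand arithmetic of this length is error-prone; a direct implementation of the fourth order Runge-Kutta step for the pair y′ = z, z′ = −z/x − y reproduces the step from x = 1:

```python
def f(x, y, z):
    return z                  # y' = z

def g(x, y, z):
    return -z / x - y         # z' = -z/x - y

def rk4_step(x, y, z, h):
    k1, l1 = h * f(x, y, z), h * g(x, y, z)
    k2 = h * f(x + h/2, y + k1/2, z + l1/2)
    l2 = h * g(x + h/2, y + k1/2, z + l1/2)
    k3 = h * f(x + h/2, y + k2/2, z + l2/2)
    l3 = h * g(x + h/2, y + k2/2, z + l2/2)
    k4 = h * f(x + h, y + k3, z + l3)
    l4 = h * g(x + h, y + k3, z + l3)
    return (y + (k1 + 2*k2 + 2*k3 + k4) / 6,
            z + (l1 + 2*l2 + 2*l3 + l4) / 6)

y1, z1 = rk4_step(1.0, 0.77, -0.44, 0.1)
print(round(y1, 6), round(z1, 6))    # y(1.1) and y'(1.1)
```

One more call, rk4_step(1.1, y1, z1, 0.1), gives y(1.2).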
Example 7: Compute the solution of the following initial value problem for x = 0.2, using Taylor series solution method of order 4:
d²y/dx² = y + x(dy/dx), y(0) = 1, y′(0) = 0
Solution: Given y″ = y + xy′, we put z = y′, so that
z′ = y + xz, y′ = z and y(0) = 1, z(0) = 0.
We solve for y and z by Taylor series method of order 4. For this we first compute y″(0), y‴(0), y^(iv)(0),...
We have, y″(0) = z′(0) = y(0) + 0 × z(0) = 1
y‴(0) = z″(0) = y′(0) + z(0) + 0 × z′(0) = 0
y^(iv)(0) = z‴(0) = y″(0) + 2z′(0) + 0 × z″(0) = 3
z^(iv)(0) = 4z″(0) + 0 × z‴(0) = 0
By Taylor series of order 4, we have
y(0 + x) = y(0) + xy′(0) + (x²/2!)y″(0) + (x³/3!)y‴(0) + (x⁴/4!)y^(iv)(0)
or, y(x) = 1 + x²/2! + (x⁴/4!) × 3 = 1 + x²/2 + x⁴/8
∴ y(0.2) = 1 + (0.2)²/2 + (0.2)⁴/8 = 1.0202
Similarly, y′(0.2) = z(0.2) = 0.2 + 3 × (0.2)³/3! = 0.204
Example 8: Compute the solution of the following initial value problem for x = 0.2 by fourth order Runge-Kutta method:
d²y/dx² = xy, y(0) = 1, y′(0) = 1
Solution: Given y″ = xy, we put y′ = z to get the simultaneous first order problem,
y′ = z = f(x, y, z), say; z′ = xy = g(x, y, z), say; with y(0) = 1 and z(0) = 1
We use Runge-Kutta 4th order formulae, with h = 0.2, to compute y(0.2) and y′(0.2), as given below.
k1 = h f(x0, y0, z0) = 0.2 × 1 = 0.2
l1 = h g(x0, y0, z0) = 0.2 × 0 = 0
k2 = h f(x0 + h/2, y0 + k1/2, z0 + l1/2) = 0.2 × (1 + 0) = 0.2
l2 = h g(x0 + h/2, y0 + k1/2, z0 + l1/2) = 0.2 × 0.1 × 1.1 = 0.022
k3 = h f(x0 + h/2, y0 + k2/2, z0 + l2/2) = 0.2 × 1.011 = 0.2022
l3 = h g(x0 + h/2, y0 + k2/2, z0 + l2/2) = 0.2 × 0.1 × 1.1 = 0.022
k4 = h f(x0 + h, y0 + k3, z0 + l3) = 0.2 × 1.022 = 0.2044
l4 = h g(x0 + h, y0 + k3, z0 + l3) = 0.2 × 0.2 × 1.2022 = 0.048088
y(0.2) = 1 + (1/6)(0.2 + 2 × (0.2 + 0.2022) + 0.2044) = 1.2015
y′(0.2) = 1 + (1/6)(0 + 2 × (0.022 + 0.022) + 0.048088) = 1.02268
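A single Runge-Kutta step for this system (f = z, g = xy, h = 0.2) can likewise be verified in code:

```python
def rk4_step(x, y, z, h):
    # y' = z, z' = x*y  (the system of Example 8)
    f = lambda x, y, z: z
    g = lambda x, y, z: x * y
    k1, l1 = h * f(x, y, z), h * g(x, y, z)
    k2 = h * f(x + h/2, y + k1/2, z + l1/2)
    l2 = h * g(x + h/2, y + k1/2, z + l1/2)
    k3 = h * f(x + h/2, y + k2/2, z + l2/2)
    l3 = h * g(x + h/2, y + k2/2, z + l2/2)
    k4 = h * f(x + h, y + k3, z + l3)
    l4 = h * g(x + h, y + k3, z + l3)
    return (y + (k1 + 2*k2 + 2*k3 + k4) / 6,
            z + (l1 + 2*l2 + 2*l3 + l4) / 6)

y02, z02 = rk4_step(0.0, 1.0, 1.0, 0.2)
print(round(y02, 4), round(z02, 5))    # y(0.2) and y'(0.2)
```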
Local Truncation Error Ordinary Differential
Equations
Local Truncation error in a numerical method is error that is caused by using
simple approximations to represent exact mathematical formulas. The only way to
completely avoid truncation error is to use exact calculations. However, truncation NOTES
error can be reduced by applying the same approximation to a larger number of
smaller intervals or by switching to a better approximation. Analysis of truncation
error is the single most important source of information about the theoretical
characteristics that distinguish better methods from poorer ones. With a
combination of theoretical analysis and numerical experiments, it is possible to
estimate truncation error accurately.
y^(2)(x) = y0 + ∫_{x0}^{x} f(x, y^(1)(x))dx.
3. The local errors in these formulae are (14/45)h⁵y^(v)(ξ1) and −(1/90)h⁵y^(v)(ξ2).
4. This method is applicable to linear differential equations only.
10.4 SUMMARY
Short-Answer Questions
1. What are ordinary differential equations?
2. Name the methods for computing the numerical solution of differential
equations.
3. When is the multistep method used?
4. Name the predictor-corrector methods.
5. How will you find the numerical solution of boundary value problems?
Long-Answer Questions
1. Use Picard's method to compute values of y(0.1), y(0.2) and y(0.3) correct to four decimal places, for the problem, y′ = x + y, y(0) = 1.
2. Given dy/dx = (1/2)(1 + x²)y², and y(0) = 1, y(0.1) = 1.06, y(0.2) = 1.12, y(0.3) = 1.21. Compute y(0.4) by Milne's predictor-corrector method.
Euler’s Method
11.0 INTRODUCTION
The Euler method is a first-order method, which means that the local error (error
per step) is proportional to the square of the step size, and the global error (error
at a given time) is proportional to the step size. The Euler method often serves as
the basis to construct more complex methods.
In this unit, you will study about the Euler’s method and modified Euler’s
method.
11.1 OBJECTIVES
Euler’s is a crude but simple method of solving a first order initial value problem:
dy/dx = f(x, y), y(x0) = y0
This is derived by integrating f(x0, y0) instead of f(x, y) over a small interval,
∫_{x0}^{x0+h} dy = ∫_{x0}^{x0+h} f(x0, y0)dx
y(x0 + h) − y(x0) = h f(x0, y0)
ek = y(xk + h) − {yk + h f(xk, yk)}
= yk + hy′(xk) + (h²/2)y″(xk + θh) − yk − hy′(xk), 0 < θ < 1
i.e., ek = (h²/2)y″(xk + θh), 0 < θ < 1
Note: The Euler’s method finds a sequence of values {yk} of y for the sequence of
values {xk}of x, step by step. But to get the solution up to a desired accuracy, we
have to take the step size h to be very small. Again, the method should not be used
for a larger range of x about x0, since the propagated error grows as integration
proceeds.
Example 1: Solve the following differential equation by Euler's method for x = 0.1, 0.2, 0.3; taking h = 0.1; dy/dx = x² − y, y(0) = 1. Compare the results with the exact solution.
Solution: Given dy/dx = x² − y, with y(0) = 1.
In Euler's method one computes in successive steps, values of y1, y2, y3,... at x1 = x0 + h, x2 = x0 + 2h, x3 = x0 + 3h, using the formula,
yn+1 = yn + h f(xn, yn), for n = 0, 1, 2,...
i.e., yn+1 = yn + h(xn² − yn)

n    xn     yn       f(xn, yn) = xn² − yn    yn+1 = yn + h f(xn, yn)
0    0.0    1.0000   −1.0000                 0.9000
1    0.1    0.9000   −0.8900                 0.8110
2    0.2    0.8110   −0.7710                 0.7339
3    0.3    0.7339   −0.6439                 0.6695
The analytical solution of the differential equation, written as dy/dx + y = x², is
ye^x = ∫x²e^x dx + c
or, ye^x = x²e^x − 2xe^x + 2e^x + c.
The condition y(0) = 1 gives c = −1, so that y = x² − 2x + 2 − e^(−x).
The following table compares the exact solution with the approximate solution
by Euler’s method.
n xn Approximate Solution Exact Solution % Error
1 0.1 0.9000 0.9052 0.57
2 0.2 0.8110 0.8213 1.25
3 0.3 0.7339 0.7492 2.04
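The two tables above can be reproduced, together with the exact solution y = x² − 2x + 2 − e^(−x), by a few lines of code:

```python
import math

def f(x, y):
    return x * x - y          # y' = x^2 - y, y(0) = 1

h, x, y = 0.1, 0.0, 1.0
approx = []
for _ in range(3):            # Euler steps to x = 0.1, 0.2, 0.3
    y = y + h * f(x, y)
    x = x + h
    approx.append(y)

exact = [xx * xx - 2 * xx + 2 - math.exp(-xx) for xx in (0.1, 0.2, 0.3)]
for a, e in zip(approx, exact):
    print(f"{a:.4f}  {e:.4f}  {100 * (e - a) / e:.2f}%")
```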
Example 2: Compute the solution of the following initial value problem by Euler’s
method, for x = 0.1 correct to four decimal places, taking h = 0.02,
dy/dx = (y − x)/(y + x), y(0) = 1.
y(0.02) = y1 = y0 + h f(x0, y0) = 1 + 0.02 × (1 − 0)/(1 + 0) = 1.0200
y(0.04) = y2 = y1 + h f(x1, y1) = 1.0200 + 0.02 × (1.0200 − 0.02)/(1.0200 + 0.02) = 1.0392
y(0.06) = y3 = y2 + h f(x2, y2) = 1.0392 + 0.02 × (1.0392 − 0.04)/(1.0392 + 0.04) = 1.0577
y(0.08) = y4 = y3 + h f(x3, y3) = 1.0577 + 0.02 × (1.0577 − 0.06)/(1.0577 + 0.06) = 1.0756
y(0.1) = y5 = y4 + h f(x4, y4) = 1.0756 + 0.02 × (1.0756 − 0.08)/(1.0756 + 0.08) = 1.0928
Hence, y(0.1) = 1.0928.
Compute yn+1^(0) = yn + h f(xn, yn)
Compute yn+1^(k+1) = yn + (h/2)[f(xn, yn) + f(xn+1, yn+1^(k))], for k = 0, 1, 2,... (11.5)
The iterations are continued until two successive approximations yn+1^(k) and yn+1^(k+1) coincide to the desired accuracy. As a rule, the iterations converge rapidly for a sufficiently small h. If, however, after three or four iterations they still do not give the necessary accuracy in the solution, the spacing h is decreased and the iterations are performed again.
Example 3: Use modified Euler's method to compute y(0.02) for the initial value problem, dy/dx = x² + y, with y(0) = 1, taking h = 0.01. Compare the result with the exact solution.
Solution: Modified Euler's method consists of obtaining the solution at successive points, x1 = x0 + h, x2 = x0 + 2h,..., xn = x0 + nh, by the two-stage computations given by,
yn+1^(0) = yn + h f(xn, yn)
yn+1^(1) = yn + (h/2)[f(xn, yn) + f(xn+1, yn+1^(0))].
For the given problem, f(x, y) = x² + y and h = 0.01
y1^(0) = y0 + h[x0² + y0] = 1 + 0.01 × 1 = 1.01
y1^(1) = 1 + (0.01/2)[1.0 + (0.01)² + 1.01] = 1.01005
i.e., y1 = y(0.01) = 1.01005
Next, y2^(0) = y1 + h[x1² + y1]
= 1.01005 + 0.01[(0.01)² + 1.01005]
= 1.01005 + 0.010102 = 1.02015
y2^(1) = 1.01005 + (0.01/2)[(0.01)² + 1.01005 + (0.02)² + 1.02015]
= 1.01005 + (0.01/2) × (2.03070)
= 1.01005 + 0.01015
= 1.02020
∴ y2 = y(0.02) = 1.02020
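The two-stage computation is easily automated. The sketch below repeats it for y′ = x² + y, y(0) = 1 with h = 0.01 and compares the result with the exact solution y = 3e^x − x² − 2x − 2 of this linear equation:

```python
import math

def f(x, y):
    return x * x + y          # y' = x^2 + y, y(0) = 1

h, x, y = 0.01, 0.0, 1.0
for _ in range(2):            # two steps: x = 0.01, 0.02
    y_pred = y + h * f(x, y)                        # predictor (Euler)
    y = y + h / 2 * (f(x, y) + f(x + h, y_pred))    # one corrector pass
    x = x + h

exact = 3 * math.exp(x) - x * x - 2 * x - 2
print(round(y, 5), round(exact, 5))
```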
d²y/dx² = g(x, y, dy/dx), with y(x0) = y0, y′(x0) = y0′
We write dy/dx = z, so that dz/dx = g(x, y, z) with y(x0) = y0 and z(x0) = y0′.
Example 4: Compute y(1.1) and y(1.2) by solving the initial value problem,
y″ + y′/x + y = 0, with y(1) = 0.77, y′(1) = −0.44
Solution: We can rewrite the problem as y′ = z, z′ = −z/x − y; with y(1) = 0.77 and z(1) = −0.44.
Taking h = 0.1, we use Euler's method for the problem in the form,
yi+1 = yi + hzi
zi+1 = zi + h(−zi/xi − yi), i = 0, 1, 2,...
Thus, y1 = y(1.1) and z1 = z(1.1) are given by,
y1 = y0 + hz0 = 0.77 + 0.1 × (−0.44) = 0.726
z1 = z0 + h(−z0/x0 − y0) = −0.44 + 0.1 × (0.44 − 0.77) = −0.44 − 0.033 = −0.473
Similarly,
y2 = y(1.2) = y1 + hz1 = 0.726 + 0.1 × (−0.473) = 0.6787
z2 = z(1.2) = z1 + h(−z1/x1 − y1) = −0.473 + 0.1 × (0.473/1.1 − 0.726)
= −0.473 − 0.0296 = −0.503
Thus, y(1.1) = 0.726 and y(1.2) = 0.679.
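The two Euler steps above can be checked with a short script:

```python
def g(x, y, z):
    return -z / x - y       # z' = -z/x - y  (from y'' + y'/x + y = 0)

h, x, y, z = 0.1, 1.0, 0.77, -0.44
for _ in range(2):          # two Euler steps: x = 1.1, 1.2
    # tuple assignment updates y and z simultaneously from old values
    y, z = y + h * z, z + h * g(x, y, z)
    x = x + h
print(round(y, 4), round(z, 4))
```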
Example 5: Using Euler's method, compute y(0.1) and y(0.2) for the initial value problem,
y″ + xy′ + y = 0, y(0) = 0, y′(0) = 1
Solution: From y″ + xy′ + y = 0, we get y″(0) = 0
y‴(x) = −xy″ − 2y′, ∴ y‴(0) = −2
and in general, y^(2n)(0) = 0, y^(2n+1)(0) = −2n y^(2n−1)(0) = (−1)ⁿ2ⁿn!
Thus, y(x) = x − x³/3 + x⁵/15 − ... + (−1)ⁿ2ⁿn! x^(2n+1)/(2n + 1)! + ...
This is an alternating series whose terms decrease. Using this, we form the solution for y up to 0.2 as given below:

x       0      0.05     0.10     0.15     0.20
y(x)    0      0.0500   0.0997   0.1489   0.1973
11.4 SUMMARY
Euler’s is a crude but simple method of solving a first order initial value
problem:
dy/dx = f(x, y), y(x0) = y0
The local error at any xk , i.e., the truncation error of the Euler’s method is
given by,
ek = y(xk+1) – yk+1
Modified Euler’s method can be used to compute the solution up to a desired
accuracy by applying it in an iterative scheme as stated below.
Compute yn+1^(0) = yn + h f(xn, yn)
Compute yn+1^(k+1) = yn + (h/2)[f(xn, yn) + f(xn+1, yn+1^(k))], for k = 0, 1, 2,...
Euler’s method can be extended to compute approximate values yi and zi of
y (xi) and z (xi) respectively given by,
yi+1 = yi+h f (xi, yi, zi)
zi+1 = zi+h g (xi, yi, zi)
Short-Answer Questions
1. Define Euler’s method.
2. Explain the Euler’s method for a pair of differential equations.
Long-Answer Questions
1. Compute values of y at x = 0.02, by Euler's method taking h = 0.01, given y is the solution of the following initial value problem: dy/dx = x³ + y, y(0) = 1.
2. Evaluate y(0.02) by modified Euler's method, given y′ = x² + y, y(0) = 1, correct to four decimal places.
12.0 INTRODUCTION
In mathematics, the Taylor series of a function is an infinite sum of terms that are
expressed in terms of the function’s derivatives at a single point. For most common
functions, the function and the sum of its Taylor series are equal near this point.
Taylor’s series are named after Brook Taylor who introduced them in 1715.
In this unit, you will study about the Taylor’s method.
12.1 OBJECTIVES
The derivatives in the above expansion can be determined as follows,
y′(x0) = f(x0, y0)
y″(x0) = f_x(x0, y0) + f_y(x0, y0)y′(x0)
For example, for the problem y′ = 1 + xy, y(0) = 1, successive differentiation gives
y″(x) = xy′ + y, ∴ y″(0) = 1
y‴(x) = xy″ + 2y′, ∴ y‴(0) = 2
y^(iv)(x) = xy‴ + 3y″, ∴ y^(iv)(0) = 3
y^(v)(x) = xy^(iv) + 4y‴, ∴ y^(v)(0) = 8
y(x) ≈ y(0) + xy′(0) + (x²/2!)y″(0) + (x³/3!)y‴(0) + (x⁴/4!)y^(iv)(0) + (x⁵/5!)y^(v)(0)
≈ 1 + x + x²/2 + x³/3 + x⁴/8 + x⁵/15
∴ y(0.1) ≈ 1 + 0.1 + 0.01/2 + 0.001/3 + 0.0001/8 + 0.00001/15 ≈ 1.1053
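At x = 0 the derivative chain above (which corresponds to the problem y′ = 1 + xy, y(0) = 1) reduces to the recurrence y^(n+1)(0) = n·y^(n−1)(0), so the partial sum can be generated mechanically:

```python
import math

# Derivatives at 0 for y' = 1 + x*y, y(0) = 1:
# repeated differentiation gives y^(n+1) = x*y^(n) + n*y^(n-1),
# so at x = 0:  y^(n+1)(0) = n * y^(n-1)(0).
d = [1.0, 1.0]                       # y(0), y'(0)
for n in range(1, 5):
    d.append(n * d[n - 1])           # y^(n+1)(0)

def taylor(x, terms=6):
    return sum(d[k] * x**k / math.factorial(k) for k in range(terms))

print(round(taylor(0.1), 4))
```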
Solution: We have, y′ = x² + y², y(0) = 0
Differentiating successively we have,
y″ = 2x + 2yy′, ∴ y″(0) = 0
y‴ = 2 + 2[yy″ + (y′)²], ∴ y‴(0) = 2
y^(iv) = 2(yy‴ + 3y′y″), ∴ y^(iv)(0) = 0
y^(v) = 2[yy^(iv) + 4y′y‴ + 3(y″)²], ∴ y^(v)(0) = 0
y^(vi) = 2[yy^(v) + 5y′y^(iv) + 10y″y‴], ∴ y^(vi)(0) = 0
y^(vii) = 2[yy^(vi) + 6y′y^(v) + 15y″y^(iv) + 10(y‴)²], ∴ y^(vii)(0) = 80
The Taylor series up to two terms is y(x) = (x³/6) × 2 + (x⁷/7!) × 80 = x³/3 + x⁷/63
Example 3: Given xy′ = x − y², y(2) = 1, evaluate y(2.1), y(2.2) and y(2.3) correct to four decimal places using Taylor series method.
Solution: Given xy′ = x − y², i.e., y′ = 1 − y²/x, and y = 1 for x = 2. To compute y(2.1) by Taylor series method, we first find the derivatives of y at x = 2.
y′ = 1 − y²/x, ∴ y′(2) = 1 − 1/2 = 0.5
Differentiating, xy″ + y′ = 1 − 2yy′
∴ 2y″(2) + 1/2 = 1 − 2 × 1 × 1/2 = 0, so that y″(2) = −0.25
Differentiating again, xy‴ + 2y″ = −2(y′)² − 2yy″
∴ 2y‴(2) + 2 × (−1/4) = −2 × 1/4 − 2 × (−1/4) = 0
or, 2y‴(2) = 1/2, ∴ y‴(2) = 0.25
Once more, xy^(iv) + 3y‴ = −4y′y″ − 2y′y″ − 2yy‴ = −6y′y″ − 2yy‴
∴ 2y^(iv)(2) + 3 × 1/4 = 6 × (1/2) × (1/4) − 2 × 1/4 = 1/4
or, 2y^(iv)(2) = 1/4 − 3/4 = −1/2, ∴ y^(iv)(2) = −0.25
Hence,
y(2.1) = y(2) + 0.1y′(2) + ((0.1)²/2)y″(2) + ((0.1)³/3!)y‴(2) + ((0.1)⁴/4!)y^(iv)(2)
= 1 + 0.05 − 0.00125 + 0.00004 − 0.000001 = 1.0488
y(2.2) = 1 + 0.2 × 0.5 + ((0.2)²/2) × (−0.25) + ((0.2)³/6) × 0.25 + ((0.2)⁴/24) × (−0.25)
= 1 + 0.1 − 0.005 + 0.00033 − 0.00002 = 1.0953
y(2.3) = 1 + 0.3 × 0.5 + ((0.3)²/2) × (−0.25) + ((0.3)³/6) × 0.25 + ((0.3)⁴/24) × (−0.25)
= 1 + 0.15 − 0.01125 + 0.00113 − 0.00008 = 1.1398
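The three values follow from evaluating the fourth order Taylor polynomial about x = 2 with the derivatives found above:

```python
import math

# Derivatives of y at x = 2 for x*y' = x - y^2, y(2) = 1:
derivs = [1.0, 0.5, -0.25, 0.25, -0.25]   # y, y', y'', y''', y''''

def taylor4(h):
    # Fourth order Taylor polynomial about x = 2, evaluated at x = 2 + h
    return sum(derivs[k] * h**k / math.factorial(k) for k in range(5))

for h in (0.1, 0.2, 0.3):
    print(round(taylor4(h), 4))
```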
y′(x0) = f(x0, y0)
y″(x0) = f_x(x0, y0) + f_y(x0, y0)y′(x0)
y‴(x0) = f_xx(x0, y0) + 2f_xy(x0, y0)y′(x0) + f_yy(x0, y0){y′(x0)}² + f_y(x0, y0)y″(x0)
12.4 SUMMARY
The solution y(x) of the problem can be expanded about the point x0 by a Taylor series in the form,
y(x0 + h) = y(x0) + hy′(x0) + (h²/2!)y″(x0) + ... + (h^k/k!)y^(k)(x0) + (h^(k+1)/(k + 1)!)y^(k+1)(ξ)
Because of difficulties in obtaining higher order derivatives, commonly a fourth order method is used.
The solution at x2 = x1+h, can be found by evaluating the derivatives at
(x1, y1) and using the expansion; otherwise, writing x2 = x0+2h, we can use
the same expansion.
12.5 KEY WORDS
Taylor's series method: If we take k = 1 in the Taylor series expansion, we get the Euler's method, y1 = y0 + h f(x0, y0). Thus, Euler's method is a particular case of the Taylor series method.
Short-Answer Questions
1. What is Taylor’s series?
2. Give one example of Taylor’s method.
Long-Answer Questions
1. Discuss about the Taylor’s method.
2. Compute the derivatives of Taylor’s expansion.
Runge Kutta Method
13.0 INTRODUCTION
In numerical analysis, the Runge–Kutta methods are a family of implicit and explicit
iterative methods, which include the well-known routine called the Euler Method,
used in temporal discretization for the approximate solutions of ordinary differential
equations.
In this unit, you will study about the Runge-Kutta methods and Runge-
Kutta methods for a pair of equations.
13.1 OBJECTIVES
where k1 = h f(xn, yn) and
k2 = h f(xn + αh, yn + βk1), for n = 0, 1, 2,... (13.2)
The unknown parameters a, b, α and β are determined by expanding in Taylor series and forming equations by equating coefficients of like powers of h. We have,
yn+1 = y(xn + h) = yn + hy′(xn) + (h²/2)y″(xn) + (h³/6)y‴(xn) + O(h⁴)
= yn + h f(xn, yn) + (h²/2)[f_x + ff_y]n + (h³/6)[f_xx + 2ff_xy + f²f_yy + f_xf_y + f_y²f]n + O(h⁴) (13.3)
The subscript n indicates that the functions within brackets are to be evaluated at (xn, yn).
Again, expanding k2 by Taylor series with two variables, we have
k2 = h[fn + αh(f_x)n + βk1(f_y)n + (α²h²/2)(f_xx)n + αβhk1(f_xy)n + (β²k1²/2)(f_yy)n + O(h³)] (13.4)
Thus on substituting the expansion of k2, we get from Equation (13.4)
yn+1 = yn + (a + b)h fn + bh²(αf_x + βff_y)n + bh³[(α²/2)f_xx + αβff_xy + (β²/2)f²f_yy]n + O(h⁴)
On comparing with the expansion of yn+1 and equating coefficients of h and h², we get the relations,
a + b = 1, αb = βb = 1/2
There are three equations for the determination of four unknown parameters. Thus, there are many solutions. However, usually a symmetric solution is taken by setting a = b = 1/2, and then α = β = 1.
Thus we can write a Runge-Kutta method of order 2 in the form,
yn+1 = yn + (h/2)[f(xn, yn) + f(xn + h, yn + h f(xn, yn))], for n = 0, 1, 2,... (13.5)
Proceeding as in the second order method, the Runge-Kutta method of order 4 can be formulated. Omitting the derivation, we give below the commonly used Runge-Kutta method of order 4.
yn+1 = yn + (1/6)(k1 + 2k2 + 2k3 + k4) + O(h⁵)
k1 = h f(xn, yn)
k2 = h f(xn + h/2, yn + k1/2)
k3 = h f(xn + h/2, yn + k2/2)
k4 = h f(xn + h, yn + k3) (13.6)
Runge-Kutta method of order 4 requires the evaluation of the first order derivative f(x, y) at four points. The method is self-starting. The error estimate with this method can be roughly given by,
|y(xn) − yn| ≈ (yn* − yn)/15 (13.7)
where yn* and yn are the approximate values computed with h/2 and h respectively as step size and y(xn) is the exact solution.
Note: In particular, for the special form of differential equation y′ = F(x), a function of x alone, the Runge-Kutta method reduces to the Simpson's one-third formula of numerical integration from xn to xn+1. Then,
yn+1 = yn + (h/6)[F(xn) + 4F(xn + h/2) + F(xn + h)]
Runge-Kutta methods are widely used particularly for finding starting values at steps x1, x2, x3,..., since they do not require evaluation of higher order derivatives. It is also easy to implement the method in a computer program.
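Scheme (13.6) indeed takes only a few lines to implement. The sketch below applies it to y′ = x + y, y(0) = 1 (whose exact solution is y = 2e^x − x − 1), taking a single step h = 0.1:

```python
import math

def rk4(f, x0, y0, h, n):
    """Advance y' = f(x, y) from (x0, y0) by n steps of size h."""
    x, y = x0, y0
    for _ in range(n):
        k1 = h * f(x, y)
        k2 = h * f(x + h/2, y + k1/2)
        k3 = h * f(x + h/2, y + k2/2)
        k4 = h * f(x + h, y + k3)
        y += (k1 + 2*k2 + 2*k3 + k4) / 6
        x += h
    return y

y01 = rk4(lambda x, y: x + y, 0.0, 1.0, 0.1, 1)
exact = 2 * math.exp(0.1) - 0.1 - 1        # exact solution 2e^x - x - 1
print(round(y01, 5), round(exact, 5))
```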
Example 1: Compute values of y (0.1) and y (0.2) by 4th order Runge-Kutta method,
correct to five significant figures for the initial value problem,
dy/dx = x + y, y(0) = 1
Solution: We have dy/dx = x + y, y(0) = 1
∴ f(x, y) = x + y, h = 0.1, x0 = 0, y0 = 1
By Runge-Kutta method,
y(0.1) = y(0) + (1/6)(k1 + 2k2 + 2k3 + k4)
where, k1 = h f(x0, y0) = 0.1 × (0 + 1) = 0.1
k2 = h f(x0 + h/2, y0 + k1/2) = 0.1 × (0.05 + 1.05) = 0.11
k3 = h f(x0 + h/2, y0 + k2/2) = 0.1 × (0.05 + 1.055) = 0.1105
k4 = h f(x0 + h, y0 + k3) = 0.1 × (0.1 + 1.1105) = 0.12105
∴ y(0.1) = 1 + (1/6)[0.1 + 2 × (0.11 + 0.1105) + 0.12105] = 1.11034
Thus, x1 = 0.1, y1 = 1.11034
For y(0.2):
y(0.2) = y(0.1) + (1/6)(k1 + 2k2 + 2k3 + k4)
k1 = h f(x1, y1) = 0.1 × (0.1 + 1.11034) = 0.121034
k2 = h f(x1 + h/2, y1 + k1/2) = 0.1 × (0.15 + 1.17086) = 0.132086
k3 = h f(x1 + h/2, y1 + k2/2) = 0.1 × (0.15 + 1.17638) = 0.132638
k4 = h f(x1 + h, y1 + k3) = 0.1 × (0.2 + 1.24298) = 0.144298
y2 = y(0.2) = 1.11034 + (1/6)[0.121034 + 2 × (0.132086 + 0.132638) + 0.144298] = 1.2428
Example 2: Use Runge-Kutta method of order 4 to evaluate y (1.1) and y (1.2), by
taking step length h = 0.1 for the initial value problem,
dy/dx = x² + y², y(1) = 0
Solution: For the initial value problem,
dy/dx = f(x, y), y(x0) = y0, the Runge-Kutta method of order 4 is given as,
yn+1 = yn + (1/6)(k1 + 2k2 + 2k3 + k4)
where k1 = h f(xn, yn)
k2 = h f(xn + h/2, yn + k1/2)
k3 = h f(xn + h/2, yn + k2/2)
k4 = h f(xn + h, yn + k3), for n = 0, 1, 2,...
For y(1.2), the computation proceeds in the same manner.
Step 6: Compute y = y0 + k1/2
Step 7: Compute k2 = h f(x, y)
Step 8: Compute y = y0 + k2/2
Step 9: Compute k3 = h f(x, y)
Step 10: Compute x1 = x0 + h
Step 11: Compute y = y0 + k3
Step 12: Compute k4 = h f(x1, y)
Step 13: Compute y1 = y0 + (k1 + 2(k2 + k3) + k4)/6
Step 14: Write x1, y1
Step 15: Set x0 = x1
Step 16: Set y0 = y1
Step 17: Stop
Consider a pair of first order equations,
dy/dx = f(x, y, z), dz/dx = g(x, y, z)
with y(x0) = y0 and z(x0) = z0.
The Runge-Kutta method of order 4 extends directly:
yi+1 = yi + (1/6)(k1 + 2k2 + 2k3 + k4)
zi+1 = zi + (1/6)(l1 + 2l2 + 2l3 + l4), for i = 0, 1, 2,... (13.8)
where k1 = h f(xi, yi, zi), l1 = h g(xi, yi, zi)
k2 = h f(xi + h/2, yi + k1/2, zi + l1/2), l2 = h g(xi + h/2, yi + k1/2, zi + l1/2)
k3 = h f(xi + h/2, yi + k2/2, zi + l2/2), l3 = h g(xi + h/2, yi + k2/2, zi + l2/2)
k4 = h f(xi + h, yi + k3, zi + l3), l4 = h g(xi + h, yi + k3, zi + l3)
yi = y(xi), zi = z(xi), i = 0, 1, 2,...
The solutions for y(x) and z(x) are determined at successive step points x1 = x0 + h, x2 = x1 + h = x0 + 2h,..., xN = x0 + Nh.
A second order initial value problem y″ = g(x, y, y′), with y(x0) = y0 and y′(x0) = z0, becomes, on writing z = y′, an initial value problem associated with a system of two first order differential equations. Thus we can write the Runge-Kutta method for a second order differential equation as,
yi+1 = yi + (1/6)(k1 + 2k2 + 2k3 + k4)
zi+1 = y′i+1 = zi + (1/6)(l1 + 2l2 + 2l3 + l4), for i = 0, 1, 2,... (13.9)
where k1 = h zi, l1 = h g(xi, yi, zi)
k2 = h(zi + l1/2), l2 = h g(xi + h/2, yi + k1/2, zi + l1/2)
k3 = h(zi + l2/2), l3 = h g(xi + h/2, yi + k2/2, zi + l2/2)
k4 = h(zi + l3), l4 = h g(xi + h, yi + k3, zi + l3)
1. Runge-Kutta methods are very useful when the Taylor series method is not easy to apply because of the complexity of finding higher order derivatives.
2. Runge-Kutta methods are widely used, particularly for finding starting values at the steps x1, x2, x3, ..., since they do not require evaluation of higher order derivatives. The method is also easy to implement in a computer program.
13.4 SUMMARY
Runge-Kutta methods attempt to get better accuracy and at the same time
obviate the need for computing higher order derivatives.
The solution at the (n + 1)th step is assumed in the form,
yn+1 = yn+ ak1+ bk2
Where k1 = h f (xn, yn) and k2 = h f(xn+ h, yn+ k1), for n = 0, 1, 2,...
The Runge-Kutta method of order 4 requires the evaluation of the first order derivative f(x, y) at four points. The method is self-starting.
In particular, for the special form of differential equation y ′ = F (x), a function
of x alone, the Runge-Kutta method reduces to the Simpson’s one-third formula
of numerical integration from xn to xn+1.
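This reduction is easy to verify numerically: when f does not depend on y, the slopes collapse to k1 = hF(xn), k2 = k3 = hF(xn + h/2), k4 = hF(xn + h), so one RK4 step adds (h/6)[F(xn) + 4F(xn + h/2) + F(xn + h)]. The sketch below checks this with the illustrative choice F(x) = x², for which Simpson's rule is exact.

```python
F = lambda x: x ** 2   # a right-hand side depending on x alone

def rk4_step(x, y, h):
    k1 = h * F(x)
    k2 = h * F(x + h / 2)
    k3 = h * F(x + h / 2)   # equals k2, since F ignores y
    k4 = h * F(x + h)
    return y + (k1 + 2 * k2 + 2 * k3 + k4) / 6

def simpson(a, b):
    # Simpson's one-third rule for the integral of F over [a, b]
    return (b - a) / 6 * (F(a) + 4 * F((a + b) / 2) + F(b))

y1 = rk4_step(0.0, 0.0, 0.5)
# The RK4 step, Simpson's rule, and the exact integral x^3/3 all agree:
print(abs(y1 - simpson(0.0, 0.5)) < 1e-12, abs(y1 - 0.5**3 / 3) < 1e-12)
```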
The Runge-Kutta method of order 4 can be easily extended in the following
form,
yi+1 = yi + (k1 + 2k2 + 2k3 + k4)/6
zi+1 = zi + (l1 + 2l2 + 2l3 + l4)/6 for i = 0, 1, 2, ...
Short-Answer Questions
1. What is the significance of Runge-Kutta methods of different orders?
2. Explain the Runge-Kutta method for a pair of equations.
Long-Answer Questions
1. Using Runge-Kutta method of order 4, compute y(0.1) for each of the
following problems:
(a) dy/dx = x + y, y(0) = 1
(b) dy/dx = x + y², y(0) = 1
2. Compute the solution of the following initial value problem by the Runge-Kutta method of order 4, taking h = 0.2 up to x = 1: y′ = x – y, y(0) = 1.5.
13.7 FURTHER READINGS
Stability Analysis
14.0 INTRODUCTION
14.1 OBJECTIVES
Stability means that the trajectories do not change too much under small perturbations. The opposite situation, instability, is when a nearby orbit is repelled from the given orbit. In general, perturbing the initial state in some directions results in the trajectory asymptotically approaching the given one, and in other directions in the trajectory getting away from it. There may also be directions for which the behaviour of the perturbed orbit is more complicated (neither converging nor escaping completely), and then stability theory does not give sufficient information about the dynamics.
In stability theory, the qualitative behaviour of an orbit under perturbations can be analysed using the linearization of the system near the orbit. In particular, at each equilibrium of a smooth dynamical system with an n-dimensional phase space, there is a certain n×n matrix A whose eigenvalues characterize the behaviour of the nearby points (Hartman–Grobman theorem). More precisely, if all eigenvalues are negative real numbers or complex numbers with negative real parts, then the point is a stable attracting fixed point and the nearby points converge to it at an exponential rate (Lyapunov stability and exponential stability). If none of the eigenvalues is purely imaginary (or zero), then the attracting and repelling directions are related to the eigenspaces of the matrix A with eigenvalues whose real part is negative and positive, respectively. Analogous statements are known for perturbations of more complicated orbits.
Stability of Fixed Points
The simplest kind of an orbit is a fixed point, or an equilibrium. If a mechanical
system is in a stable equilibrium state then a small push will result in a localized
motion, for example, small oscillations as in the case of a pendulum. In a system
with damping, a stable equilibrium state is moreover asymptotically stable. On the
other hand, for an unstable equilibrium, such as a ball resting on top of a hill,
certain small pushes will result in a motion with a large amplitude that may or may
not converge to the original state. Stability of a nonlinear system can be deduced
from the stability of its linearization.
Maps: Let f: R → R be a continuously differentiable function with a fixed
point a, f(a) = a. Consider the dynamical system obtained by iterating the function f:
xn+1 = f(xn), n = 0, 1, 2, ...
The fixed point a is stable if the absolute value of the derivative of f at a
is strictly less than 1, and unstable if it is strictly greater than 1. This is because near
the point a, the function f has a linear approximation with slope f′(a):
f(x) ≈ f(a) + f′(a)(x – a).
Thus
xn+1 – xn = f(xn) – xn ≈ f(a) + f′(a)(xn – a) – xn = (f′(a) – 1)(xn – a),
so that
(xn+1 – xn)/(xn – a) ≈ f′(a) – 1,
which means that the derivative measures the rate at which the successive
iterates approach the fixed point a or diverge from it. If the derivative at a is
exactly 1 or –1, then more information is needed in order to decide stability.
There is an analogous criterion for a continuously differentiable
map f: Rn → Rn with a fixed point a, expressed in terms of its Jacobian
matrix at a, Ja(f). If all eigenvalues of J are real or complex numbers with absolute
value strictly less than 1 then a is a stable fixed point; if at least one of them has
absolute value strictly greater than 1 then a is unstable. Just as for n=1, the case of
the largest absolute value being 1 needs to be investigated further — the Jacobian
matrix test is inconclusive. The same criterion holds more generally
for diffeomorphisms of a smooth manifold.
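The derivative test for maps can be checked numerically. The sketch below uses the logistic map f(x) = rx(1 − x) as an illustrative choice (not from the text): for r = 2.5 the non-zero fixed point is a = 1 − 1/r = 0.6, where f′(a) = −0.5, so |f′(a)| < 1 and iterates from a perturbed state converge back to a.

```python
r = 2.5
f = lambda x: r * x * (1 - x)        # the map being iterated
fprime = lambda x: r * (1 - 2 * x)   # its derivative

a = 1 - 1 / r                        # non-zero fixed point: f(a) = a
x = 0.3                              # a perturbed initial state
for _ in range(100):
    x = f(x)                         # iterate x_{n+1} = f(x_n)

# |f'(a)| = 0.5 < 1, so a is stable and the orbit has converged to it.
print(abs(fprime(a)) < 1, abs(x - a) < 1e-10)
```

Consistent with the derivation above, the distance to the fixed point shrinks by a factor of roughly |f′(a)| = 0.5 per iteration.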
Linear Autonomous Systems
The stability of fixed points of a system of constant coefficient linear differential
equations of first order can be analysed using the eigenvalues of the corresponding
matrix.
An autonomous system
x′ = Ax,
where x(t) ∈ Rn and A is an n×n matrix with real entries, has a constant
solution
x(t) = 0.
(In a different language, the origin 0 ∈ Rn is an equilibrium point of the
corresponding dynamical system.) This solution is asymptotically stable as t → ∞
(“in the future”) iff for all eigenvalues λ of A, Re(λ) < 0. Similarly, it is asymptotically
stable as t → –∞ (“in the past”) iff for all eigenvalues λ of A, Re(λ) > 0. If there
exists an eigenvalue λ of A with Re(λ) > 0, then the solution is unstable for t → ∞.
The stability of the origin for a linear system can be determined by the Routh–
Hurwitz stability criterion. The eigenvalues of a matrix are the roots of
its characteristic polynomial. A polynomial in one variable with real coefficients is
called a Hurwitz polynomial if the real parts of all roots are strictly negative.
The Routh–Hurwitz theorem implies a characterization of Hurwitz polynomials by
means of an algorithm that avoids computing the roots.
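For a 2×2 system the Routh–Hurwitz criterion is especially simple: both roots of the characteristic polynomial s² − trace(A)s + det(A) have negative real parts iff trace(A) < 0 and det(A) > 0, with no root-finding needed. The sketch below applies this to an illustrative matrix of our own choosing.

```python
def hurwitz_2x2(a11, a12, a21, a22):
    # Routh-Hurwitz test for x' = Ax with A a 2x2 real matrix:
    # all eigenvalues have negative real part iff trace < 0 and det > 0.
    trace = a11 + a22
    det = a11 * a22 - a12 * a21
    return trace < 0 and det > 0

# Characteristic polynomial s^2 + 3s + 2 = (s + 1)(s + 2): roots -1, -2.
print(hurwitz_2x2(0.0, 1.0, -2.0, -3.0))  # True: origin asymptotically stable
# Characteristic polynomial s^2 - s - 2 = (s - 2)(s + 1): a positive root.
print(hurwitz_2x2(0.0, 1.0, 2.0, 1.0))    # False: origin unstable
```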
Non-Linear Autonomous Systems
Asymptotic stability of fixed points of a non-linear system can be demonstrated
using the Hartman–Grobman theorem.
Suppose that v is a C1-vector field in Rn which vanishes at a point p,
v(p) = 0. Then the corresponding autonomous system
x′ = v(x)
has a constant solution
x(t) = p.
Let Jp(v) be the n×n Jacobian matrix of the vector field v at the point p. If
all eigenvalues of J have strictly negative real part then the solution is asymptotically
stable. This condition can be tested using the Routh–Hurwitz criterion.
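This linearization procedure can be sketched numerically. The example below, an illustrative choice not taken from the text, is the damped pendulum field v(x1, x2) = (x2, −sin x1 − x2) with equilibrium p = (0, 0): its Jacobian there is J = [[0, 1], [−1, −1]], whose eigenvalues have real part −1/2, and for a 2×2 matrix the Routh–Hurwitz condition reduces to trace(J) < 0, det(J) > 0.

```python
import math

def jacobian(v, p, eps=1e-6):
    """Forward-difference Jacobian of a vector field v: R^2 -> R^2 at p."""
    J = [[0.0, 0.0], [0.0, 0.0]]
    for j in range(2):
        q = list(p)
        q[j] += eps
        for i in range(2):
            J[i][j] = (v(q)[i] - v(p)[i]) / eps
    return J

# Damped pendulum: x1' = x2, x2' = -sin(x1) - x2, equilibrium at the origin.
v = lambda x: (x[1], -math.sin(x[0]) - x[1])
J = jacobian(v, [0.0, 0.0])
trace = J[0][0] + J[1][1]
det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
print(trace < 0 and det > 0)  # True: the origin is asymptotically stable
```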
14.4 SUMMARY
If all eigenvalues are negative real numbers or complex numbers with negative real parts, then the point is a stable attracting fixed point, and the nearby points converge to it at an exponential rate (Lyapunov stability and exponential stability).
The simplest kind of an orbit is a fixed point, or an equilibrium.
Stability of a nonlinear system can be deduced from the stability of
its linearization.
The fixed point a is stable if the absolute value of the derivative of f at a is strictly less than 1, and unstable if it is strictly greater than 1.
There is an analogous criterion for a continuously differentiable
map f: Rn → Rn with a fixed point a, expressed in terms of its Jacobian
matrix at a, Ja (f).
The stability of the origin for a linear system can be determined by the Routh–
Hurwitz stability criterion.
The Routh–Hurwitz theorem implies a characterization of Hurwitz
polynomials by means of an algorithm that avoids computing the roots.
If all eigenvalues of J have strictly negative real part then the solution is
asymptotically stable.
Short-Answer Questions
1. Define stability.
2. Elaborate on non-linear autonomous systems.
Long-Answer Questions
1. Explain the stability theory.
2. Give details on linear autonomous systems.
14.7 FURTHER READINGS