The document is a course material for an M.Sc. in Mathematics focusing on Numerical Analysis, published by Alagappa University and Vikas Publishing House. It includes various units covering topics such as polynomial equations, eigenvalues, interpolation, differentiation, and numerical methods. The content is structured to facilitate self-instruction with objectives, summaries, and exercises for each unit.


ALAGAPPA UNIVERSITY

[Accredited with ‘A+’ Grade by NAAC (CGPA:3.64) in the Third Cycle
and Graded as Category–I University by MHRD-UGC]
(A State University Established by the Government of Tamil Nadu)
KARAIKUDI – 630 003

Directorate of Distance Education

M.Sc. (Mathematics)
IV - Semester
311 43

NUMERICAL ANALYSIS
Authors:
Dr. N. Dutta, Professor of Mathematics, Head - Department of Basic Sciences & Humanities, Heritage Institute of Technology,
Kolkata
Units (2, 4, 6-8, 10-13)
Vikas® Publishing House: Units (1, 3, 5, 9, 14)

"The copyright shall be vested with Alagappa University"

All rights reserved. No part of this publication which is material protected by this copyright notice
may be reproduced or transmitted or utilized or stored in any form or by any means now known or
hereinafter invented, electronic, digital or mechanical, including photocopying, scanning, recording
or by any information storage or retrieval system, without prior written permission from the Alagappa
University, Karaikudi, Tamil Nadu.

Information contained in this book has been published by VIKAS® Publishing House Pvt. Ltd. and has
been obtained by its Authors from sources believed to be reliable and is correct to the best of their
knowledge. However, the Alagappa University, Publisher and its Authors shall in no event be liable for
any errors, omissions or damages arising out of use of this information and specifically disclaim any
implied warranties of merchantability or fitness for any particular use.

Vikas® is the registered trademark of Vikas® Publishing House Pvt. Ltd.


VIKAS® PUBLISHING HOUSE PVT. LTD.
E-28, Sector-8, Noida - 201301 (UP)
Phone: 0120-4078900  Fax: 0120-4078999
Regd. Office: A-27, 2nd Floor, Mohan Co-operative Industrial Estate, New Delhi - 110 044
 Website: www.vikaspublishing.com  Email: helpline@vikaspublishing.com

Work Order No. AU/DDE/DE 12-02/Preparation and Printing of Course Materials/2020 Dated 30.01.2020 Copies - 1000
SYLLABI-BOOK MAPPING TABLE
Numerical Analysis

BLOCK - I : POLYNOMIAL EQUATIONS AND EIGEN VALUE PROBLEMS
UNIT - 1: Transcendental and Polynomial Equations: Rate of Convergence of Iterative Methods.
    Unit 1: Transcendental and Polynomial Equations (Pages 3-29)
UNIT - 2: Methods for Finding Complex Roots - Polynomial Equations.
    Unit 2: Methods for Finding Complex Roots and Polynomial Equations (Pages 30-53)
UNIT - 3: Birge - Vieta Method, Bairstow’s Method, Graeffe’s Root Squaring Method.
    Unit 3: Birge – Vieta, Bairstow’s and Graeffe’s Root Squaring Methods (Pages 54-65)
UNIT - 4: System of Linear Algebraic Equations and Eigen Value Problems: Error Analysis of Direct and Iteration Methods.
    Unit 4: Solution of Simultaneous Linear Equation (Pages 66-85)

BLOCK - II : EIGEN VECTORS, INTERPOLATION, APPROXIMATION, DIFFERENTIATION AND INTEGRATION
UNIT - 5: Finding Eigen Values and Eigen Vectors - Jacobi and Power Methods.
    Unit 5: Eigen Values and Eigen Vectors (Pages 86-106)
UNIT - 6: Interpolation and Approximation: Hermite Interpolations - Piecewise and Spline Interpolation - Bivariate Interpolation.
    Unit 6: Interpolation and Approximation (Pages 107-146)
UNIT - 7: Approximation - Least Square Approximation and Best Approximations.
    Unit 7: Approximation (Pages 147-171)
UNIT - 8: Differentiation and Integration: Numerical Differentiation - Optimum Choice of Step-Length - Extrapolation Methods.
    Unit 8: Numerical Integration and Numerical Differentiation (Pages 172-220)

BLOCK - III : PDE, ODE AND EULER METHODS
UNIT - 9: Partial Differentiation - Methods Based on Undetermined Coefficient - Gauss Methods.
    Unit 9: Partial Differential Equations (Pages 221-283)
UNIT - 10: Ordinary Differential Equations: Local Truncation Error - Problems.
    Unit 10: Ordinary Differential Equations (Pages 284-299)
UNIT - 11: Euler, Backward Euler, Midpoint - Problems.
    Unit 11: Euler’s Method (Pages 300-307)

BLOCK - IV : TAYLOR’S METHOD, R.K METHOD AND STABILITY ANALYSIS
UNIT - 12: Taylor’s Method - Related Problems.
    Unit 12: Taylor’s Method (Pages 308-312)
UNIT - 13: Second Order Runge Kutta Method - Stability Analysis.
    Unit 13: Runge Kutta Method (Pages 313-321)
UNIT - 14: Stability Analysis.
    Unit 14: Stability Analysis (Pages 322-328)
CONTENTS
BLOCK I: POLYNOMIAL EQUATIONS AND EIGEN VALUE PROBLEMS
UNIT 1 TRANSCENDENTAL AND POLYNOMIAL EQUATIONS 1-29
1.0 Introduction
1.1 Objectives
1.2 Transcendental and Polynomial Equations
1.3 Answers to Check Your Progress Questions
1.4 Summary
1.5 Key Words
1.6 Self Assessment Questions and Exercises
1.7 Further Readings
UNIT 2 METHODS FOR FINDING COMPLEX ROOTS
AND POLYNOMIAL EQUATIONS 30-53
2.0 Introduction
2.1 Objectives
2.2 Methods for Finding Complex Roots
2.3 Polynomial Equations
2.4 Answers to Check Your Progress Questions
2.5 Summary
2.6 Key Words
2.7 Self Assessment Questions and Exercises
2.8 Further Readings
UNIT 3 BIRGE – VIETA, BAIRSTOW’S AND GRAEFFE’S
ROOT SQUARING METHODS 54-65
3.0 Introduction
3.1 Objectives
3.2 Birge – Vieta Method
3.3 Bairstow’s Method
3.4 Graeffe’s Root Squaring Method
3.5 Answers to Check Your Progress Questions
3.6 Summary
3.7 Key Words
3.8 Self-Assessment Questions and Exercises
3.9 Further Readings
UNIT 4 SOLUTION OF SIMULTANEOUS LINEAR EQUATION 66-85
4.0 Introduction
4.1 Objectives
4.2 System of Linear Equations
4.2.1 Classical Methods
4.2.2 Elimination Methods
4.2.3 Iterative Methods
4.2.4 Computation of the Inverse of a Matrix by using Gaussian Elimination Method
4.3 Answers to Check Your Progress Questions
4.4 Summary
4.5 Key Words
4.6 Self Assessment Questions and Exercises
4.7 Further Readings

BLOCK II: EIGEN VECTORS, INTERPOLATION, APPROXIMATION,
DIFFERENTIATION AND INTEGRATION
UNIT 5 EIGEN VALUES AND EIGEN VECTORS 86-106
5.0 Introduction
5.1 Objectives
5.2 Finding Eigen Values and Eigen Vectors
5.3 Jacobi and Power Methods
5.4 Answers to Check Your Progress Questions
5.5 Summary
5.6 Key Words
5.7 Self Assessment Questions and Exercises
5.8 Further Readings
UNIT 6 INTERPOLATION AND APPROXIMATION 107-146
6.0 Introduction
6.1 Objectives
6.2 Interpolation and Approximation
6.3 Answers to Check Your Progress Questions
6.4 Summary
6.5 Key Words
6.6 Self Assessment Questions and Exercises
6.7 Further Readings
UNIT 7 APPROXIMATION 147-171
7.0 Introduction
7.1 Objectives
7.2 Approximation
7.3 Least Square Approximation
7.4 Answers to Check Your Progress Questions
7.5 Summary
7.6 Key Words
7.7 Self Assessment Questions and Exercises
7.8 Further Readings
UNIT 8 NUMERICAL INTEGRATION AND NUMERICAL
DIFFERENTIATION 172-220
8.0 Introduction
8.1 Objectives
8.2 Numerical Integration
8.3 Numerical Differentiation
8.4 Optimum Choice of Step Length
8.5 Extrapolation Method
8.6 Answers to Check Your Progress Questions
8.7 Summary
8.8 Key Words
8.9 Self Assessment Questions and Exercises
8.10 Further Readings
BLOCK III: PDE, ODE AND EULER METHODS
UNIT 9 PARTIAL DIFFERENTIAL EQUATIONS 221-283
9.0 Introduction
9.1 Objectives
9.2 Partial Differential Equation of the First Order Lagrange’s Solution
9.3 Solution of Some Special Types of Equations
9.4 Charpit’s General Method of Solution and Its Special Cases
9.5 Partial Differential Equations of Second and Higher Orders
9.5.1 Classification of Linear Partial Differential Equations of Second Order
9.6 Homogeneous and Non-Homogeneous Equations with
Constant Coefficients
9.7 Partial Differential Equations Reducible to Equations with
Constant Coefficients
9.8 Answers to Check Your Progress Questions
9.9 Summary
9.10 Key Words
9.11 Self Assessment Questions and Exercises
9.12 Further Readings
UNIT 10 ORDINARY DIFFERENTIAL EQUATIONS 284-299
10.0 Introduction
10.1 Objectives
10.2 Ordinary Differential Equations
10.3 Answers to Check Your Progress Questions
10.4 Summary
10.5 Key Words
10.6 Self Assessment Questions and Exercises
10.7 Further Readings
UNIT 11 EULER’S METHOD 300-307
11.0 Introduction
11.1 Objectives
11.2 Euler Method
11.3 Answers to Check Your Progress Questions
11.4 Summary
11.5 Key Words
11.6 Self Assessment Questions and Exercises
11.7 Further Readings
BLOCK IV: TAYLOR’S METHOD, R.K METHOD AND STABILITY ANALYSIS
UNIT 12 TAYLOR’S METHOD 308-312
12.0 Introduction
12.1 Objectives
12.2 Taylor’s Method
12.3 Answers to Check Your Progress Questions
12.4 Summary
12.5 Key Words
12.6 Self Assessment Questions and Exercises
12.7 Further Readings
UNIT 13 RUNGE KUTTA METHOD 313-321
13.0 Introduction
13.1 Objectives
13.2 Runge Kutta Method
13.3 Answers to Check Your Progress Questions
13.4 Summary
13.5 Key Words
13.6 Self Assessment Questions and Exercises
13.7 Further Readings
UNIT 14 STABILITY ANALYSIS 322-328
14.0 Introduction
14.1 Objectives
14.2 Stability Analysis
14.3 Answers to Check Your Progress Questions
14.4 Summary
14.5 Key Words
14.6 Self Assessment Questions and Exercises
14.7 Further Readings
INTRODUCTION

Numerical analysis is the study of algorithms to find solutions for problems of
continuous mathematics. It helps in obtaining approximate solutions while maintaining
reasonable bounds on errors. Although numerical analysis has long had applications in all
fields of engineering and the physical sciences, in the 21st century the life sciences
and even the arts have adopted elements of scientific computation. Ordinary
differential equations are used for calculating the movement of heavenly bodies,
i.e., planets, stars and galaxies. Numerical methods also underpin optimization in
portfolio management and the solution of stochastic differential equations in
problems related to medicine and biology. Airlines use sophisticated optimization
algorithms to finalize ticket prices, airplane and crew assignments and fuel needs.
Insurance companies too use numerical programs for actuarial analysis. The basic
aim of numerical analysis is to design and analyse techniques for computing approximate
but accurate solutions to hard problems.
In numerical analysis, two methods are involved, namely direct and iterative
methods. Direct methods compute the solution to a problem in a finite number of
steps whereas iterative methods start from an initial guess to form successive
approximations that converge to the exact solution only in the limit. Iterative methods
are more common than direct methods in numerical analysis. The study of errors is
an important part of numerical analysis. There are different methods to detect and
fix errors that occur in the solution of any problem. Round-off errors occur because
it is not possible to represent all real numbers exactly on a machine with finite
memory. Truncation errors arise when an iterative method is terminated or
a mathematical procedure is approximated and the approximate solution differs
from the exact solution.
This book, Numerical Analysis, is divided into four blocks that are further
divided into fourteen units which will help you understand how to solve
transcendental and polynomial equations, rate of convergence of iterative methods,
methods for finding complex roots – polynomial equations, Birge-Vieta method,
Bairstow’s method, Graeffe’s root squaring method, system of linear algebraic
equations and eigenvalue problems, error analysis of direct and iteration methods,
finding eigenvalues and eigenvectors – Jacobi and power methods, interpolation
and approximation, Hermite interpolations, piecewise and spline interpolation,
approximation, least square approximation and best approximations, differentiation
and integration, numerical differentiation, partial differentiation, ordinary differential
equations, Euler, backward Euler, Taylor’s method, second order Runge Kutta
methods, and stability analysis.
The book follows the Self-Instruction Mode or the SIM format wherein
each unit begins with an ‘Introduction’ to the topic followed by an outline of the
‘Objectives’. The content is presented in a simple, organized and comprehensive
form interspersed with ‘Check Your Progress’ questions and answers for better
understanding of the topics covered. A list of ‘Key Words’ along with a ‘Summary’
and a set of ‘Self Assessment Questions and Exercises’ is provided at the end of
each unit for effective recapitulation. Logically arranged topics, relevant solved
examples and illustrations have been included for better understanding of the topics.
BLOCK - I
POLYNOMIAL EQUATIONS AND EIGEN VALUE PROBLEMS

UNIT 1 TRANSCENDENTAL AND POLYNOMIAL EQUATIONS
Structure
1.0 Introduction
1.1 Objectives
1.2 Transcendental and Polynomial Equations
1.3 Answers to Check Your Progress Questions
1.4 Summary
1.5 Key Words
1.6 Self Assessment Questions and Exercises
1.7 Further Readings

1.0 INTRODUCTION

In mathematics, a polynomial is an expression consisting of variables (also
called indeterminates) and coefficients that involves only the operations
of addition, subtraction, multiplication, and non-negative integer exponents of
variables. An example of a polynomial in a single indeterminate, x, is x² – 4x + 7.
An example in three variables is x³ + 2xyz² – yz + 1.
Polynomials appear in many areas of mathematics and science. For example,
they are used to form polynomial equations, which encode a wide range of problems,
from elementary word problems to complicated scientific problems; they are used
to define polynomial functions, which appear in settings ranging from
basic chemistry and physics to economics and social science; they are used
in calculus and numerical analysis to approximate other functions. In advanced
mathematics, polynomials are used to construct polynomial rings and algebraic
varieties, central concepts in algebra and algebraic geometry.
In this unit, you will study about transcendental and polynomial equations,
and rate of convergence of iterative methods.

1.1 OBJECTIVES

After going through this unit, you will be able to:
• Understand linear integral equations and some basic identities
• Reduce initial value problems to Volterra integral equations
• Know the methods of successive approximation and successive substitution
to solve Volterra equations of the second kind, iterated kernels and Neumann
series for Volterra equations
• Express the resolvent kernel as a series in λ
• Know the Laplace transform method for a difference kernel
• Find the solution of a Volterra equation of the first kind
• Reduce boundary value problems to Fredholm integral equations
• Know the method of successive approximation and successive substitution
to solve Fredholm equations of the second kind
• Know iterated kernels and Neumann series for Fredholm equations
• Express the resolvent kernel as a sum of series and the Fredholm resolvent kernel
as a ratio of two series
• Know Fredholm equations with separable kernels, approximation of a kernel
by a separable kernel and the Fredholm alternative

1.2 TRANSCENDENTAL AND POLYNOMIAL EQUATIONS

In mathematics, an integral equation is an equation in which an unknown function
appears under an integral sign.
An integral equation in u(x) is given by,

u(x) = f(x) + λ ∫_{α(x)}^{β(x)} K(x, t) u(t) dt …(1.1)

where K(x, t) is called the kernel of the integral Equation (1.1), and α(x)
and β(x) are the limits of integration. It can be easily observed that the unknown
function u(x) appears under the integral sign. It is to be noted here that both the
kernel K(x, t) and the function f(x) in Equation (1.1) are given functions, and λ is
a constant parameter. We have to determine the unknown function u(x) that will
satisfy Equation (1.1).
An integral equation can be classified as a linear or nonlinear integral equation.
The most frequently used integral equations fall under two major classes, namely
Volterra and Fredholm integral equations. In this unit we will distinguish the following
integral equations:
Volterra integral equations
Fredholm integral equations

Volterra Integral Equations
The most standard form of Volterra linear integral equations is of the form,

φ(x) u(x) = f(x) + λ ∫_a^x K(x, t) u(t) dt

where the limits of integration are functions of x and the unknown function
u(x) appears linearly under the integral sign.
If the function φ(x) = 1, then the equation becomes

u(x) = f(x) + λ ∫_a^x K(x, t) u(t) dt

and this equation is known as the Volterra integral equation of the second
kind; whereas if φ(x) = 0, then the equation becomes

f(x) + λ ∫_a^x K(x, t) u(t) dt = 0

which is known as the Volterra equation of the first kind.
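Although the unit treats these equations analytically, a Volterra equation of the second kind can also be approximated on a grid. The sketch below is an added illustration (not part of the original text): it marches the trapezoidal discretization of u(x) = f(x) + λ ∫_a^x K(x, t) u(t) dt forward in x and checks it on the test problem u(x) = 1 + ∫_0^x u(t) dt, whose exact solution is eˣ.

```python
import math

def solve_volterra_2nd(f, K, lam, a, b, n):
    """March the trapezoidal discretization of
    u(x) = f(x) + lam * integral_a^x K(x, t) u(t) dt
    forward in x: at each grid point all earlier values of u are known."""
    h = (b - a) / n
    x = [a + i * h for i in range(n + 1)]
    u = [f(x[0])]                       # at x = a the integral term vanishes
    for i in range(1, n + 1):
        # trapezoidal sum over the already-computed values u_0 .. u_{i-1};
        # the unknown endpoint term (h/2) K(x_i, x_i) u_i is moved to the left side
        s = 0.5 * K(x[i], x[0]) * u[0]
        for j in range(1, i):
            s += K(x[i], x[j]) * u[j]
        u.append((f(x[i]) + lam * h * s) / (1.0 - lam * h * 0.5 * K(x[i], x[i])))
    return x, u

# test problem: u(x) = 1 + integral_0^x u(t) dt, whose exact solution is e^x
xs, us = solve_volterra_2nd(lambda x: 1.0, lambda x, t: 1.0, 1.0, 0.0, 1.0, 200)
print(abs(us[-1] - math.e))   # small O(h^2) discretization error
```

The forward march works because the Volterra integral at xᵢ only involves values of u at earlier grid points plus the single unknown endpoint value, which can be solved for directly.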


Fredholm Integral Equations
The most standard form of the Fredholm linear integral equations is given by,

φ(x) u(x) = f(x) + λ ∫_a^b K(x, t) u(t) dt …(1.2)

where the limits of integration a and b are constants and the unknown function
u(x) appears linearly under the integral sign. If the function φ(x) = 1, then Equation
(1.2) becomes,

u(x) = f(x) + λ ∫_a^b K(x, t) u(t) dt

and this equation is called the Fredholm integral equation of the second kind; whereas
if φ(x) = 0, then Equation (1.2) gives,

f(x) + λ ∫_a^b K(x, t) u(t) dt = 0

which is called the Fredholm integral equation of the first kind.


Initial Value Problems Reduced to Volterra Integral Equations
Consider the integral equation,

The Laplace transform of f(t) is defined as,

L{f(t)} = ∫_0^∞ e^(−st) f(t) dt

Using this definition the above integral equation can be transformed to

In a similar manner if y( ) = then

This is inverted by convolution theorem to give

If

Then . Using the convolution theorem, we get


the Laplace inverse as

Thus the n-fold integrals can be expressed as a single integral as,

∫_0^x ∫_0^x ⋯ ∫_0^x f(t) (dt)ⁿ = (1/(n−1)!) ∫_0^x (x − t)^(n−1) f(t) dt
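This reduction of an n-fold integral to a single integral (the Cauchy formula for repeated integration) is easy to check numerically. The snippet below is an added illustration that evaluates the right-hand side by the midpoint rule; for f(t) = 1 and n = 2 the result should equal x²/2.

```python
import math

def repeated_integral(f, x, n, steps=2000):
    # right-hand side of the Cauchy formula:
    # (1/(n-1)!) * integral_0^x (x - t)**(n-1) f(t) dt, by the midpoint rule
    h = x / steps
    s = sum((x - (i + 0.5) * h) ** (n - 1) * f((i + 0.5) * h) for i in range(steps))
    return s * h / math.factorial(n - 1)

# integrating f(t) = 1 twice from 0 to x gives x**2/2; at x = 1.5 that is 1.125
print(repeated_integral(lambda t: 1.0, 1.5, 2))   # ≈ 1.125
```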

Method of Successive Approximation to Solve Volterra Integral Equations of the Second Kind
The Volterra integral equation of the second kind is of the form,

u(x) = f(x) + λ ∫_a^x K(x, t) u(t) dt

where K(x, t) is the kernel of the integral equation, f(x) a continuous function
of x, and λ a parameter. Here, f(x) and K(x, t) are the given functions but u(x) is
an unknown function that needs to be determined. The limits of integration for the
Volterra integral equations are functions of x.
In this method of approximation, we replace the unknown function u(x)
under the integral sign of the Volterra equation by any selected real valued continuous
function u0(x), called the zeroth approximation. This substitution gives the first
approximation u1(x) by

u1(x) = f(x) + λ ∫_a^x K(x, t) u0(t) dt

It is obvious that u1(x) is continuous if f(x), K(x, t) and u0(x) are continuous.
The second approximation u2(x) can be obtained similarly by replacing u0(x) in
the above equation by u1(x) obtained above. And we find,

u2(x) = f(x) + λ ∫_a^x K(x, t) u1(t) dt

Proceeding in a similar way, we obtain an infinite sequence of functions
u0(x), u1(x), u2(x), …, un(x), … that satisfies the recurrence relation

un(x) = f(x) + λ ∫_a^x K(x, t) un−1(t) dt

for n = 1, 2, 3, . . ., where u0(x) is any selected real valued
function. The most commonly selected functions for u0(x) are 0, 1 and x. Thus, in
the limit, the solution u(x) of the Volterra equation is obtained as,

u(x) = lim (n → ∞) un(x)

so that the resulting solution u(x) is independent of the choice of the zeroth
approximation u0(x). This process of approximation is extremely simple. However,
if we follow Picard’s successive approximation method, we need to set u0(x)
= f(x), and determine u1(x) and the other successive approximations as follows:


The last equation is the recurrence relation. Consider

Where,

Thus, it can be easily observed that,

if (x)=f (x), and further that


0

where m=1, 2, 3, . . . and hence,

The repeated integrals above may be considered as a double integral
over the triangular region; thus, interchanging the order of integration, we obtain

Where,

Similarly,

where the iterated kernels K1(x, t), K2(x, t), … are
defined by the recurrence formula given by,

Kn(x, t) = ∫_t^x K(x, s) Kn−1(s, t) ds, n = 2, 3, …, with K1(x, t) = K(x, t)

Thus, the solution for un(x) can be written as,

Resolvent Kernel as a Series in λ

It is also possible that we should be led to the solution of the Volterra equation by
means of the sum, if it exists, of the infinite series defined by un(x). Thus, using
the functions defined above, we have

hence it is also possible that the solution of the Volterra equation will be given,
as n → ∞, by

where,

H(x, t; λ) = K1(x, t) + λ K2(x, t) + λ² K3(x, t) + ⋯

is known as the resolvent kernel.


Laplace Transform Method for a Difference Kernel
The Volterra integral equation of convolution type, such as

u(x) = f(x) + λ ∫_0^x K(x − t) u(t) dt

where the kernel K(x − t) is of convolution type, can very easily be solved
using the Laplace transform method. To begin the solution process, we first define
the Laplace transform of u(x),

L{u(x)} = ∫_0^∞ e^(−sx) u(x) dx

Using the Laplace transform of the convolution integral, we have

L{∫_0^x K(x − t) u(t) dt} = L{K(x)} L{u(x)}

Thus, taking the Laplace transform of the Volterra integral equation of
convolution type, we obtain

L{u(x)} = L{f(x)} + λ L{K(x)} L{u(x)}

and the solution for L{u(x)} is given by

L{u(x)} = L{f(x)} / (1 − λ L{K(x)})

By inverting this transform, we obtain u(x).
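The convolution property this method relies on, L{∫_0^x K(x − t) u(t) dt} = L{K} · L{u}, can be verified numerically. The sketch below is an added illustration with arbitrarily chosen K(x) = e⁻ˣ and u(x) = 1 (not taken from the text); it compares both sides of the identity at a sample value of s using a truncated numerical Laplace integral.

```python
import math

def laplace(g, s, T=40.0, steps=20000):
    # truncated numerical Laplace transform: integral_0^T e^{-s x} g(x) dx
    h = T / steps
    return sum(math.exp(-s * (i + 0.5) * h) * g((i + 0.5) * h) for i in range(steps)) * h

K = lambda x: math.exp(-x)             # difference kernel K(x - t), here K(x) = e^{-x}
u = lambda x: 1.0
conv = lambda x: 1.0 - math.exp(-x)    # (K * u)(x) = integral_0^x e^{-(x-t)} dt, by hand

s = 2.0
lhs = laplace(conv, s)                 # L of the convolution
rhs = laplace(K, s) * laplace(u, s)    # product of the transforms
print(lhs, rhs)                        # both ≈ 1/(s(s+1)) = 1/6
```

Here the convolution was computed in closed form first, so the agreement of the two printed values is a genuine check of the identity rather than a restatement of it.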
Example 1: Solve the following Volterra integral equation of the second kind of
the convolution type using (a) the Laplace transform method and (b) the successive
approximation method.

Solution: (a) Taking the Laplace transforms, we obtain

and solving for L{u(x)} yields

The Laplace inverse of the above can be written immediately as

where δ(x) is the Dirac delta.


(b) Solution by successive approximation
Let us assume that the zeroth approximation is,
u0(x) = 0
Then the first approximation can be obtained as
u1(x) = f (x)
Hence, the second approximation is given by

Proceeding in similar manner, the third approximation can be obtained as

In the double integration the order of integration is changed to obtain the
final result. In a similar manner, the fourth approximation u4(x) can be at once
written as

Now, as n → ∞,

Here, the resolvent kernel is H(x, t; λ) = e^((1+λ)(x−t)).
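The successive approximations of part (b) can be generated mechanically. Since Example 1’s right-hand side is not legible in this reproduction, the sketch below (an added illustration) applies the same scheme to the test equation u(x) = 1 + ∫_0^x u(t) dt: each iteration integrates the previous polynomial approximation term by term and adds 1, and the iterates build up the Maclaurin series of the exact solution eˣ.

```python
# Represent each approximation u_n as a list of polynomial coefficients
# [c0, c1, c2, ...] meaning c0 + c1*x + c2*x**2 + ...
def next_approx(coeffs):
    # u_n(x) = 1 + integral_0^x u_{n-1}(t) dt: integrate term by term, then add 1
    integrated = [0.0] + [c / (k + 1) for k, c in enumerate(coeffs)]
    integrated[0] += 1.0
    return integrated

u = [0.0]                 # zeroth approximation u0(x) = 0
for _ in range(6):
    u = next_approx(u)
print(u)   # 1, 1, 1/2, 1/6, 1/24, 1/120, 0 -- partial Maclaurin series of e^x
```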

Method of Successive Substitution to Solve Volterra Integral Equations of the Second Kind
In this method, we substitute successively for u(x) its value as given by the Volterra
integral equation of the second kind. We find that

Where,

is the remainder after n terms.
Now,
Accordingly, the general series for u(x) can be written as

Example 2: Solve the integral

Solution: By the method of successive substitution, we get

Iterated Kernels and Neumann Series for Volterra Equations


The integral

u(x) = f(x) + λ ∫_a^x H(x, t; λ) f(t) dt

where H(x, t; λ) is the resolvent kernel, is the solution of the Volterra integral
equation of the second kind, given by

u(x) = f(x) + λ ∫_a^x K(x, t) u(t) dt
When both K(x, t) and f(x) are continuous, the resolvent kernel can
be constructed in terms of the Neumann series

H(x, t; λ) = K1(x, t) + λ K2(x, t) + λ² K3(x, t) + ⋯

where Kn(x, t) is the nth iterated kernel, which is evaluated as

Kn(x, t) = ∫_t^x K(x, s) Kn−1(s, t) ds, n = 2, 3, …

and K1(x, t) = K(x, t).
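As a concrete check of this recurrence (an added illustration; the kernel e^(x−t) is chosen for convenience and is not from the text), the iterated kernels of K(x, t) = e^(x−t) are Kn(x, t) = e^(x−t) (x − t)^(n−1)/(n−1)!. The snippet below verifies the n = 2 case by quadrature.

```python
import math

K = lambda x, t: math.exp(x - t)   # illustrative kernel, not from the text

def K2(x, t, steps=2000):
    # second iterated kernel K_2(x, t) = integral_t^x K(x, s) K(s, t) ds,
    # evaluated by the midpoint rule
    h = (x - t) / steps
    return sum(K(x, t + (i + 0.5) * h) * K(t + (i + 0.5) * h, t)
               for i in range(steps)) * h

x, t = 1.0, 0.25
print(K2(x, t), (x - t) * math.exp(x - t))   # the two values agree
```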
For showing this, assume the following infinite series form for the solution
u(x),

Substituting this in the Volterra integral equation of the second kind and
assuming good convergence which allows the exchange of summation with the
integration operation, we get

Equating coefficients of like powers of λ on both sides, we have

By successive substitution, we get

And


Similarly,

So we can now write,

as the general term of the iterated kernel, and

Therefore,

Solution of a Volterra Integral Equation of the First Kind
The first kind Volterra equation is usually written as,

f(x) = ∫_a^x K(x, t) u(t) dt

If the derivatives f′(x) and ∂K(x, t)/∂x exist and are
continuous, then the solution of this equation is found by reducing it to its second
kind form and then proceeding with the methods discussed above.
Differentiating the above Volterra equation and applying the Leibnitz rule, we
get

f′(x) = K(x, x) u(x) + ∫_a^x (∂K(x, t)/∂x) u(t) dt

If K(x, x) ≠ 0, then

u(x) = f′(x)/K(x, x) − (1/K(x, x)) ∫_a^x (∂K(x, t)/∂x) u(t) dt

The second way to obtain the second kind Volterra integral equation from
the first kind is by using integration by parts, if we set

φ(x) = ∫_a^x u(t) dt
Or

By integrating by parts, we have

which reduces to

Giving

Noting that φ(0) = 0, and dividing out by K(x, x), we have

In this method the function f(x) is not required to be differentiable. But u(x)
must finally be calculated by differentiating the function φ(x) given by the formula

where H(x, t; 1) is the resolvent kernel corresponding to λ = 1. To do this,
f(x) must be differentiable.
Boundary Value Problems Reduced to Fredholm Integral Equations
A boundary value problem can be converted to an equivalent Fredholm integral
equation. But this method is complicated and so is rarely used. This method is
demonstrated with the help of following illustration:
Consider the differential equation,

with the boundary conditions

y(a) = α,  y(b) = β

where α and β are given constants. Make the transformation,

u(x) = y″(x)

Integrating both sides from a to x, we get

y′(x) = y′(a) + ∫_a^x u(t) dt

Integrating with respect to x from a to x and applying the given boundary
condition at x = a, we get

And using the boundary condition at x = b gives,

And the unknown constant y′(a) is determined as

Hence the solution can be rewritten as,

Therefore,

where and so y(x) can be determined. It is a complicated


procedure to determine the solution of a Boundary Value Problem (BVP) by
equivalent Fredholm equation.
If a = 0 and b = 1, i.e., 0 ≤ x ≤ 1, then
y(x) = α + x y′(0) + ∫_0^x ∫_0^x u(t) dt dt

And hence the unknown constant can be determined as

Thus,

where the kernel is given by,

It can be easily verified that K(x, t) = K(t, x), confirming that the
kernel is symmetric. The resulting Fredholm integral equation determines u(x).

Example 3: Consider the boundary value problem,

Solution: Integrating the equation with respect to x from 0 to x two times yields

To determine the unknown constant we use the condition at x = 1,


i.e., y(1) = y1. Hence,

And

Therefore,

Where the kernel is given by,

If we specialize our problem to the simple linear BVP y″(x) = – y(x), 0 < x
< 1 with the boundary conditions y(0) = y0, y(1) = y1, then y(x) reduces to the
second kind Fredholm integral equation,

where F(x) = y0 + x(y1 – y0). It can be easily verified that K(x, t) =
K(t, x), confirming that the kernel is symmetric.
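Since the displayed kernel of Example 3 is lost in this reproduction, the sketch below (an added illustration) uses one standard Green's-function kernel for y″ = u(x), with the boundary values carried by a linear term, and verifies it against a boundary value problem with a known solution. The particular kernel form is an assumption and may differ in sign convention from the book's.

```python
def K(x, t):
    # one standard Green's-function kernel for y'' = u(x) with the boundary
    # values carried by the linear term below (this particular form is an
    # assumption; the book's displayed kernel is not legible here)
    return t * (x - 1.0) if t <= x else x * (t - 1.0)

def y(x, u, y0, y1, steps=4000):
    # y(x) = y0 + x (y1 - y0) + integral_0^1 K(x, t) u(t) dt  (midpoint rule)
    h = 1.0 / steps
    integral = sum(K(x, (i + 0.5) * h) * u((i + 0.5) * h) for i in range(steps)) * h
    return y0 + x * (y1 - y0) + integral

# check against y'' = 2, y(0) = 0, y(1) = 1, whose exact solution is y = x**2
print(y(0.5, lambda t: 2.0, 0.0, 1.0))   # ≈ 0.25
```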

Method of Successive Approximation to Solve Fredholm Equations of the Second Kind
The successive approximation method, which was successfully applied to Volterra
integral equations of the second kind, can also be applied to the basic Fredholm
integral equation of the second kind:

u(x) = f(x) + λ ∫_a^b K(x, t) u(t) dt

We set u0(x) = f(x). Note that the zeroth approximation can be any selected
real valued function u0(x), a ≤ x ≤ b.
Accordingly, the first approximation u1(x) of the solution of u(x) is defined
by

The second approximation u2(x) of the solution u(x) can be obtained by


replacing u0(x) by the previously obtained u1(x). Hence we find

This process can be continued in the same manner to obtain the nth
approximation. In other words, the various approximations can be put in a recursive
scheme given by

Even though we can select any real valued function for the zeroth
approximation u0(x), the most commonly selected functions for u0(x) are u0(x) =
0, 1 or x. With the selection of u0(x) = 0, the first approximation u1(x) =
f (x). The final solution u(x) is obtained by

so that the resulting solution u(x) is independent of the choice of u0(x). This
is known as Picard’s method. The Neumann series is obtained if we set u0(x) = f
(x) such that

where,
The second approximation u2(x) can be obtained as,

Where,

The final solution u(x) known as Neumann series can be obtained as,

Where,

Example 4: Solve the Fredholm integral equation

by using the successive approximation method.


Solution: Let us consider the zeroth approximation is u0(x) = 1, and then the first
approximation can be computed as


Thus,

And

is the solution.
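Example 4's integral equation is not legible in this reproduction, so as an added illustration the sketch below applies the same fixed-point scheme to a different equation chosen to have a closed-form answer: u(x) = x + (1/2) ∫_0^1 x t u(t) dt, whose exact solution is u(x) = 6x/5. The grid-based iteration is an assumption for illustration, not the book's worked procedure.

```python
def approx_fredholm(f, K, lam, a, b, iters=25, steps=200):
    # fixed-point iteration u_n(x) = f(x) + lam * integral_a^b K(x, t) u_{n-1}(t) dt
    # on a midpoint grid, starting from the common choice u_0 = f
    h = (b - a) / steps
    ts = [a + (i + 0.5) * h for i in range(steps)]
    u = [f(t) for t in ts]
    for _ in range(iters):
        u = [f(x) + lam * h * sum(K(x, t) * ut for t, ut in zip(ts, u))
             for x in ts]
    return ts, u

# u(x) = x + (1/2) * integral_0^1 x t u(t) dt has the exact solution u(x) = 6x/5
ts, u = approx_fredholm(lambda x: x, lambda x, t: x * t, 0.5, 0.0, 1.0)
print(u[0] / ts[0])   # ≈ 1.2
```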

Method of Successive Substitutions to Solve Fredholm Equations of the Second Kind
This method is almost analogous to the successive approximation method, except
that it concerns the solution of the integral equation in a series form through
evaluating single and multiple integrals.

K(x, t) ≠ 0 is real and continuous in the rectangle R, for which a ≤ x ≤ b
and a ≤ t ≤ b; f(x) ≠ 0 is real and continuous in the interval I, for which a ≤ x ≤
b; and λ is a constant parameter.

Substituting the value of u(t) in this equation, we get
or

Hence,

The unknown function u(x) is replaced by the known function f(x).


Example 5: Use the successive substitutions to solve the Fredholm integral equation

Solution: Here, λ = 1/2, f(x) = cos x, and K(x, t) = sin x

Iterated Kernels and Neumann Series for Fredholm Equations
The Liouville–Neumann series is defined as,

It is a unique continuous solution of a Fredholm integral equation of the


second kind, defined as

If the nth iterated kernel is defined as

Kn(x, t) = ∫_a^b K(x, s) Kn−1(s, t) ds,  with K1(x, t) = K(x, t),

then,

and the resolvent kernel is given by

H(x, t; λ) = K1(x, t) + λ K2(x, t) + λ² K3(x, t) + ⋯

Resolvent Kernel as a Sum of Series
Let the Fredholm equation of the second kind be,
Let the Fredholm equation of the second kind be,

Where the range of the separable kernel

which consists of arbitrary linear combinations of the functions fi is given by,

Therefore,

To find uj define,

Consider the algebraic problem u = h + cKu. If we use the kernel
cK in place of the separable K, then the equation becomes,

Mu = h, where M = I − cK

where I is the identity matrix. Let D(c) be the determinant of the matrix M;
then

D(c) = det(I − cK)

If the determinant D(c) is not zero, then the matrix M has an inverse M⁻¹,
and the solution of the algebraic equation becomes u = M⁻¹h.


Now by substituting these values of ui in the Fredholm integral equation and
then expressing hi in terms of h(x), we get

where the kernel appearing in this expression is called the resolvent kernel.

Fredholm Resolvent Kernel as a Ratio of Two Series

If K(x, t) is a continuous kernel, not necessarily real, then the resolvent kernel
can be expressed as the ratio of two infinite series of powers of λ,
such that both of these series converge for all values of λ.
Expressing the resolvent kernel as a ratio,

H(x, t; λ) = D(x, t; λ) / D(λ)
where,

and

Here the coefficients Ci and the corresponding functions can be determined by,

The solution of the equation given by,

now becomes

This method is preferable only when the kernel is separable.
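For a one-term separable kernel K(x, t) = g(x) h(t), the algebraic reduction described above collapses to a single scalar equation. The sketch below is an added illustration (the test equation u(x) = x + (1/2) ∫_0^1 x t u(t) dt, with exact solution 6x/5, is chosen for checking and is not from the text).

```python
def solve_separable(f, g, h_fn, lam, a, b, steps=20000):
    # K(x, t) = g(x) h(t) gives u(x) = f(x) + lam * g(x) * c with
    # c = integral_a^b h(t) u(t) dt, so c solves the scalar equation
    # c * (1 - lam * integral h g) = integral h f   (midpoint quadrature)
    step = (b - a) / steps
    ts = [a + (i + 0.5) * step for i in range(steps)]
    hf = sum(h_fn(t) * f(t) for t in ts) * step
    hg = sum(h_fn(t) * g(t) for t in ts) * step
    c = hf / (1.0 - lam * hg)
    return lambda x: f(x) + lam * g(x) * c

# test equation: u(x) = x + (1/2) * integral_0^1 x t u(t) dt, exact u(x) = 6x/5
u = solve_separable(lambda x: x, lambda x: x, lambda t: t, 0.5, 0.0, 1.0)
print(u(1.0))   # ≈ 1.2
```

With an n-term separable kernel the same reduction produces an n × n linear system rather than a scalar equation, which is exactly the matrix M = I − cK discussed above.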


Fredholm’s Equations with Separable Kernels
This section deals with the study of the homogeneous Fredholm integral equation
with separable kernel given by,

This equation is obtained from the second kind Fredholm equation

Setting f(x) = 0, it is easily seen that u(x) = 0 is a solution which is known as


the trivial solution. The homogeneous Fredholm integral equation with separable
kernel may have nontrivial solutions. We shall use the direct computational method
to obtain the solution in this case. Without loss of generality, we assume that

So that,

For

We note that = 0 gives the trivial solution u(x) = 0. However, to determine


the nontrivial solution, we need to determine the value of the parameter by
considering 0. Therefore,

Or

which gives a numerical value for 0 by evaluating the definite integral.


Check Your Progress
1. What is an integral equation?
2. Write the standard form of the Volterra equation.
3. What is first step in the method of successive approximation to solve
Volterra equations?
4. Write the Volterra integral equation of convolution type.
5. What are the two methods to reduce a Volterra integral equation of the
first kind to a second kind?
6. Express the resolvent kernel as a ratio of two series.

1.3 ANSWERS TO CHECK YOUR PROGRESS


QUESTIONS

1. An integral equation is an equation in which an unknown function appears


under an integral sign.
2. The most standard form of Volterra linear integral equations is of the form

3. In the method of successive approximation, we first replace the unknown function


u(x) under the integral sign of the Volterra equation by any selective real
valued continuous function u0(x).
4. Volterra integral equation of convolution type is

where the kernel K(x – t) is of convolution type.


5. The two methods to reduce a Volterra integral equation of the first kind to a
second kind are differentiating the Volterra equation and applying Leibnitz
rule, and by using integration by parts.
6. Expressing the resolvent kernel as a ratio,

where, and

1.4 SUMMARY

An integral equation in u(x) is given by,

where K(x, t) is called the kernel of the integral equation, and (x) and
(x) are the limits of integration.
The most frequently used integral equations fall under two major classes,
namely Volterra and Fredholm integral equations.
In Volterra equation one of the limits of integration is variable while in
Fredholm equation both the limits are constant.

1.5 KEY WORDS

Integral equation: It is an equation in which an unknown function appears


under an integral sign.

1.6 SELF ASSESSMENT QUESTIONS AND


EXERCISES

Short-Answer Questions
1. Write the two kinds of Volterra integral equations.
2. What is the basic difference between Volterra and Fredholm equations?
3. List the methods used to solve Fredholm and Volterra integral equations of
the second kind.
4. How can you find the solution of the Volterra integral equation of the first
kind?
5. Define iterated kernel for Fredholm and Volterra integral equations.
Long-Answer Questions
1. Reduce the following initial value problem to an equivalent Volterra equation:

2. Solve the following Volterra integral equations using methods of successive


approximation with five approximations with u0(x) = 0:

3. Find the solution of the following Volterra integral equations of the first kind:

(a)

(b)

(c)

(d)

4. Reduce the following initial value problem into an equivalent Fredholm


equation:

5. Solve the following linear Fredholm integral equations:

(a)

(b)

(c)

(d)

(e)

1.7 FURTHER READINGS

Jain, M. K., S. R. K. Iyengar and R. K. Jain. 2007. Numerical Methods for


Scientific and Engineering Computation. New Delhi: New Age
International (P) Limited.
Atkinson, Kendall E. 1989. An Introduction to Numerical Analysis, 2nd Edition.
US: John Wiley & Sons.
Jain, M. K. 1983. Numerical Solution of Differential Equations. New Delhi:
New Age International (P) Limited.
Conte, Samuel D. and Carl de Boor. 1980. Elementary Numerical Analysis:
An Algorithmic Approach. New York: McGraw-Hill.
Skeel, Robert. D and Jerry B. Keiper. 1993. Elementary Numerical Computing
with Mathematica. New York: McGraw-Hill.
Balaguruswamy, E. 1999. Numerical Methods. New Delhi: Tata McGraw-Hill.
Datta, N. 2007. Computer Oriented Numerical Methods. New Delhi: Vikas
Publishing House Pvt. Ltd.

UNIT 2 METHODS FOR FINDING
COMPLEX ROOTS AND
POLYNOMIAL EQUATIONS
Structure
2.0 Introduction
2.1 Objectives
2.2 Methods for Finding Complex Roots
2.3 Polynomial Equations
2.4 Answers to Check Your Progress Questions
2.5 Summary
2.6 Key Words
2.7 Self Assessment Questions and Exercises
2.8 Further Readings

2.0 INTRODUCTION

In mathematics and computing, a root-finding algorithm is an algorithm for


finding zeroes, also called roots, of continuous functions. A zero of a function f,
from the real numbers to real numbers or from the complex numbers to the complex
numbers, is a number x such that f(x) = 0. As, generally, the zeroes of a function
cannot be computed exactly nor expressed in closed form, root-finding algorithms
provide approximations to zeroes, expressed either as floating point numbers or
as small isolating intervals, or disks for complex roots (an interval or disk output
being equivalent to an approximate output together with an error bound).
In this unit, you will study about the methods for finding complex roots and
polynomial equations.

2.1 OBJECTIVES

After going through this unit, you will be able to:


Explain the various methods for finding complex roots
Analyse the polynomial equations

2.2 METHODS FOR FINDING COMPLEX ROOTS


In this section, we consider numerical methods for computing the roots of an
equation of the form,
f (x) = 0 (2.1)
Where f (x) is a reasonably well-behaved function of a real variable x. The function may be in algebraic form or polynomial form given by,

f (x) = an x^n + an–1 x^(n–1) + ... + a1 x + a0 (2.2)


It may also be an expression containing transcendental functions such as cos
x, sin x, ex, etc. First, we would discuss methods to find the isolated real roots of
a single equation. Later, we would discuss methods to find the isolated roots of a
system of equations, particularly of two real variables x and y, given by,
f (x, y) = 0 , g (x, y) = 0 (2.3)
A root of an equation is usually computed in two stages. First, we find the
location of a root in the form of a crude approximation of the root. Next we use
an iterative technique for computing a better value of the root to a desired accuracy
in successive approximations/computations. This is done by using an iterative
function.

Methods for Finding Location of Real Roots


The location or crude approximation of a real root is determined by the use of any
one of the following methods, (i) graphical and (ii) tabulation.
Graphical Method: In the graphical method, we draw the graph of the function
y = f (x), for a certain range of values of x. The abscissae of the points where the
graph intersects the x-axis are crude approximations for the roots of the Equation
(2.1). For example, consider the equation,
f (x) = x2 + 2x – 1 = 0
From the graph of the function y = f (x), shown in Figure 2.1 we find that it
cuts the x-axis between 0 and 1. We may take any point in [0, 1] as the crude
approximation for one root. Thus, we may take 0.5 as the location of a root. The
other root lies between – 2 and – 3. We can take – 2.5 as the crude approximation
of the other root.

Fig. 2.1 Graph of y = x 2 + 2 x − 1


In some cases, where it is complicated to draw the graph of y = f (x), we may
rewrite the equation f (x) = 0, as f1(x) = f2(x), where the graphs of y = f1 (x) and
y = f2(x) are standard curves. Then we find the x-coordinate(s) of the point(s) of intersection of the curves y = f1(x) and y = f2(x), which are the crude approximations of the root(s).
For example, consider the equation,
x3 − 15.2x − 13.2 = 0
This can be rewritten as,
x 3 = 15.2 x + 13.2
Where it is easy to draw the graphs of y = x3 and y = 15.2 x + 13.2. Then, the
abscissa of the point(s) of intersection can be taken as the crude approximation(s)
of the root(s).

Fig. 2.2 Graph of y = x3 and y = 15.2x + 13.2


Example 1: Find the location of the root of the equation x log10 x = 1.

Solution: The equation can be rewritten as log10 x = 1/x.
Now the curves y = log10 x and y = 1/x can be easily drawn and are shown in Figure 2.3.
Fig. 2.3 Graph of y = 1/x and y = log10 x
The point of intersection of the curves has its x-coordinates value 2.5
approximately. Thus, the location of the root is 2.5.
Tabulation Method: In the tabulation method, a table of values of f (x) is made
for values of x in a particular range. Then, we look for the change in sign in the
values of f (x) for two consecutive values of x. We conclude that a real root lies between these values of x. This is true if we make use of the following theorem on continuous functions.
Theorem 1: If f (x) is continuous in an interval (a, b) and f (a) and f(b) are of
opposite signs, then there exists at least one real root of f (x) = 0, between a
and b.
Consider for example, the equation f (x) = x3 – 8x + 5 = 0
Constructing the following table of x and f (x)

x − 4 − 3 − 2 −1 0 1 2 3
f ( x) − 27 2 13 12 5 − 2 − 3 8

We observe that there is a change in sign of f (x) in each of the sub-intervals (–4, –3), (0, 1) and (2, 3). Thus we can take the crude approximations for the three real
roots as – 3.2, 0.2 and 2.2.
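The tabulation scan is easy to mechanize. The following Python sketch (an illustration, not part of the original text) brackets the three real roots of f (x) = x^3 – 8x + 5 found in the table above:

```python
# Tabulation method: scan f over an equally spaced grid and report the
# sub-intervals on which f changes sign; by Theorem 1 each such
# sub-interval contains at least one real root.

def bracket_roots(f, a, b, steps):
    h = (b - a) / steps
    intervals = []
    for i in range(steps):
        x0, x1 = a + i * h, a + (i + 1) * h
        if f(x0) * f(x1) < 0:          # sign change => a root in (x0, x1)
            intervals.append((x0, x1))
    return intervals

f = lambda x: x**3 - 8*x + 5
print(bracket_roots(f, -4.0, 3.0, 7))
# [(-4.0, -3.0), (0.0, 1.0), (2.0, 3.0)]
```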

Methods for Finding the Roots—Bisection and Simple Iteration Methods


Bisection Method: The bisection method involves successive reduction of the
interval in which an isolated root of an equation lies. This method is based upon an
important theorem on continuous functions as stated below.
Theorem 2: If a function f (x) is continuous in the closed interval [a, b] and f (a)
and f (b) are of opposite signs, i.e., f (a) f (b) < 0, then there exists at least one
real root of f (x) = 0 between a and b.
The bisection method starts with two guess values x0 and x1, where f (x0) ⋅ f (x1) < 0. This interval [x0, x1] is bisected by the point x2 = (x0 + x1)/2. We compute f (x2). If f (x2) = 0, then x2 is a root. Otherwise, we check whether f (x0) ⋅ f (x2) < 0 or f (x1) ⋅ f (x2) < 0. If f (x0) ⋅ f (x2) < 0, then the root lies in the interval (x0, x2). Otherwise, if f (x1) ⋅ f (x2) < 0, then the root lies in the interval (x2, x1).
The sub-interval in which the root lies is again bisected and the above process
is repeated until the length of the sub-interval is less than the desired accuracy.
The bisection method is also termed as a bracketing method, since the method
successively reduces the gap between the two ends of an interval surrounding the
real root, i.e., brackets the real root.
The algorithm given below clearly shows the steps to be followed in finding a
real root of an equation, by bisection method to the desired accuracy.
Algorithm: Finding root using bisection method.
Step 0: Define the equation, f (x) = 0
Step 1: Read epsilon, the desired accuracy
Step 2: Read two initial values x0 and x1 which bracket the desired root
Step 3: Compute y0 = f (x0)
Step 4: Compute y1 = f (x1)
Step 5: Check if y0 y1 < 0, then go to Step 6
        else go to Step 2
Step 6: Compute x2 = (x0 + x1)/2
Step 7: Compute y2 = f (x2)
Step 8: Check if y0 y2 > 0, then set x0 = x2
else set x1 = x2
Step 9: Check if | ( x1 − x0 ) / x1 | > epsilon, then go to Step 3
Step 10: Write x2, y2
Step 11: End
Next, we give the flowchart representation of the above algorithm to get a
better understanding of the method. The flowchart also helps in easy implementation
of the method in a computer program.
Flow Chart for Bisection Algorithm
[The flowchart mirrors the algorithm above: Begin; define f (x); read epsilon; read x0, x1; compute y0 = f (x0) and y1 = f (x1); if y0 y1 > 0, read fresh values of x0 and x1; otherwise compute x2 = (x0 + x1)/2 and y2 = f (x2); if y0 y2 > 0 set x0 = x2, else set x1 = x2; repeat while |(x1 – x0)/x1| > epsilon; finally print ‘root’ = x2 and End.]
Example 2: Find the location of the smallest positive root of the equation x3 – 9x + 1 = 0 and compute it by bisection method, correct to two decimal places.
Solution: To find the location of the smallest positive root we tabulate the function
f (x) = x3 – 9x + 1 below:
x 0 1 2 3
f (x) 1 − 7 − 9 1

We observe that the smallest positive root lies in the interval [0, 1]. The
computed values for the successive steps of the bisection method are given in the
Table.

n x0 x1 x2 f (x2 )
1 0 1 0 .5 − 3 . 37
2 0 0 .5 0 . 25 − 1 . 23
3 0 0 . 25 0 . 125 − 0 . 123
4 0 0 . 125 0 . 0625 0 . 437
5 0 . 0625 0 . 125 0 . 09375 0 . 155
6 0 . 09375 0 . 125 0 . 109375 0 . 016933
7 0 . 109375 0 . 125 0 . 11718 − 0 . 053

From the above results, we conclude that the smallest root correct to two
decimal places is 0.11.
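The bisection algorithm above translates directly into Python; this illustrative sketch reproduces the computation of Example 2, using the relative-width test of Step 9 as the stopping rule:

```python
# Bisection: repeatedly halve the bracketing interval [x0, x1] until its
# relative width falls below the desired accuracy epsilon.

def bisect(f, x0, x1, epsilon=1e-6):
    y0, y1 = f(x0), f(x1)
    assert y0 * y1 < 0, "initial guesses must bracket a root"
    while abs((x1 - x0) / x1) > epsilon:
        x2 = (x0 + x1) / 2
        y2 = f(x2)
        if y0 * y2 > 0:            # root lies in (x2, x1)
            x0, y0 = x2, y2
        else:                      # root lies in (x0, x2)
            x1, y1 = x2, y2
    return (x0 + x1) / 2

root = bisect(lambda x: x**3 - 9*x + 1, 0.0, 1.0)
print(round(root, 2))   # 0.11
```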
Simple Iteration Method: A root of an equation f (x) = 0, is determined using
the method of simple iteration by successively computing better and better
approximation of the root, by first rewriting the equation in the form,
x = g(x) (2.4)
Then, we form the sequence {xn} starting from the guess value x0 of the root
and computing successively,
x1 = g(x0), x2 = g(x1), ..., xn = g(xn–1)

In general, the above sequence may converge to the root ξ as n → ∞, or it


may diverge. If the sequence diverges, we shall discard it and consider another
form x = h(x), by rewriting f (x) = 0. It is always possible to get a convergent
sequence since there are different ways of rewriting f (x) = 0, in the form x = g(x).
However, instead of starting computation of the sequence, we shall first test whether
the form of g(x) can give a convergent sequence or not. We give below a theorem
which can be used to test for convergence.
Theorem 3: If the function g(x) is continuous in the interval [a, b] which contains
a root ξ of the equation f (x) = 0, and is rewritten as x = g(x), and | g′(x) | ≤ l < 1 in
this interval, then for any choice of x0 ∈ [a, b] , the sequence {xn} determined by
the iterations,
xk+1 = g(xk), for k = 0, 1, 2, ... (2.5)
This converges to the root of f (x) = 0.
Proof: Since x = ξ is a root of the equation x = g(x), we have
ξ = g (ξ ) (2.6)
The first iteration gives x1 = g(x0) (2.7)
Subtracting Equation (2.7) from Equation (2.6), we get
ξ − x1 = g (ξ ) − g ( x0 )
Applying mean value theorem, we can write
ξ − x1 = (ξ − x0 ) g ′( s0 ), x0 < s0 < ξ (2.8)
Similarly, we can derive
ξ − x2 = (ξ − x1 ) g ′( s1 ), x1 < s1 < ξ (2.9)
....
ξ − xn +1 = (ξ − xn ) g ′( s n ), xn < s n < ξ (2.10)
From all these Equations (2.8), (2.9), and (2.10), we get
ξ − xn +1 = (ξ − x0 ) g ′( s0 ) g ′( s1 )..., g ′( s n ) (2.11)
Since | g′(si) | ≤ l for each si, the above Equation (2.11) becomes,

| ξ – xn+1 | < l^(n+1) | ξ – x0 | (2.12)

Evidently, since l < 1, l^(n+1) → 0 as n → ∞, the right hand side tends to zero and thus it follows that the sequence {xn} converges to the root ξ. This completes the proof.
Order of Convergence: The order of convergence of an iterative process is
determined in terms of the errors en and en+1 in successive iterations. An iterative
process is said to have kth order convergence if lim (n → ∞) en+1/en^k < M, where M is a finite number.
Roughly speaking, the error in any iteration is proportional to the kth power of
the error in the previous iteration.
Evidently, the simple iteration discussed in this section has its order of
convergence 1.
The above iteration is also termed as fixed point iteration since it determines
the root as the fixed point of the mapping defined by x = g(x).
Algorithm: Computation of a root of f (x) = 0 by linear iteration.
Step 0: Define g(x), where f (x) = 0 is rewritten as x = g(x)

Step 1: Input x0, epsilon, maxit, where x0 is the initial guess of root, epsilon is the accuracy desired, maxit is the maximum number of iterations allowed.
Step 2: Set i = 0
Step 3: Set x1 = g (x0)
Step 4: Set i = i + 1
Step 5: Check, if |(x1 – x0)/ x1| < epsilon, then print ‘root is’, x1
else go to Step 6
Step 6: Check, if i < maxit, then set x0 = x1 and go to Step 3
Step 7: Write ‘No convergence after’, maxit, ‘iterations’
Step 8: End
Example 3: In order to compute a real root of the equation x3 – x – 1 = 0, near
x = 1, by iteration, determine which of the following iterative functions can be
used to give a convergent sequence.

(i) x = x^3 – 1   (ii) x = (x + 1)/x^2   (iii) x = √((x + 1)/x)

Solution:
(i) For the form x = x 3 − 1, g(x) = x3 – 1, and g ′( x) = 3x 2 . Hence, | g ′( x) | > 1,
for x near 1. So, this form would not give a convergent sequence of
iterations.
(ii) For the form x = (x + 1)/x^2, g(x) = (x + 1)/x^2. Thus, g′(x) = –1/x^2 – 2/x^3 and | g′(1) | = 3 > 1.
Hence, this form also would not give a convergent sequence of iterations.
(iii) For the form x = √((x + 1)/x), g(x) = ((x + 1)/x)^(1/2), g′(x) = (1/2)((x + 1)/x)^(–1/2) ⋅ (–1/x^2).
∴ | g′(1) | = 1/(2√2) < 1. Hence, the form x = √((x + 1)/x) would give a convergent sequence of iterations.
Example 4: Compute the real root of the equation x3 + x2 – 1 = 0, correct to five
significant digits, by iteration method.
Solution: The equation has a real root between 0 and 1 since f (x) = x3 + x2 – 1
has opposite signs at 0 and 1. For using iteration, we first rewrite the equation in
the following different forms.

(i) x = 1/x^2 – 1   (ii) x = √(1/x – 1)   (iii) x = 1/√(x + 1)

For the form (i), g(x) = –1 + 1/x^2, g′(x) = –2/x^3 and for x in (0, 1), | g′(x) | > 1. So, this form is not suitable. For the form (ii), g′(x) = (1/(2√(1/x – 1))) ⋅ (–1/x^2) and | g′(x) | > 1 for all x in (0, 1). Finally, for the form (iii), g′(x) = –(1/2) ⋅ 1/(x + 1)^(3/2) and | g′(x) | < 1 for x in (0, 1). Thus this form can be used to form a convergent sequence for finding the root.
We start the iteration x = 1/√(1 + x) with x0 = 1. The results of successive iterations are,
x1 = 0.70711 x2 = 0.76537 x3 = 0.75236 x4 = 0.75541
x5 = 0.75476 x6 = 0.75490 x7 = 0.75488 x8 = 0.75488

Thus, the root is 0.75488, correct to five significant digits.
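The simple-iteration scheme used in Example 4 can be sketched in Python with the convergent form g(x) = 1/√(1 + x); the function and parameter names here are illustrative:

```python
import math

# Fixed-point (simple) iteration: x_{n+1} = g(x_n), stopping when the
# relative change between successive iterates is below epsilon.

def fixed_point(g, x0, epsilon=1e-7, maxit=100):
    for _ in range(maxit):
        x1 = g(x0)
        if abs((x1 - x0) / x1) < epsilon:
            return x1
        x0 = x1
    raise RuntimeError("iterations do not converge")

root = fixed_point(lambda x: 1 / math.sqrt(1 + x), 1.0)
print(round(root, 5))   # 0.75488
```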


Example 5: Compute the root of the equation x^2 – x – 0.1 = 0, which lies in (1, 2), correct to three significant figures.
Solution: The equation is rewritten in the following form for computing the root
by iteration,
x = √(x + 0.1). Here, g′(x) = 1/(2√(x + 0.1)), and | g′(x) | < 1, for x in (1, 2).

The results for successive iterations, taking x0 = 1, are


x1 = 1.0488 x2 = 1.0718 x3 = 1.0825
x4 = 1.0874 x5 = 1.0897.
Thus, the root is 1.09, correct to three significant figures.
Example 6: Solve the following equation for the root lying in (2, 4) by using the
method of linear iteration x3 – 9x + 1 = 0. Show that there are various ways of
rewriting the equation in the form, x = g (x) and choose the one which gives a
convergent sequence for the root.
Solution: We can rewrite the equation in the following different forms:

(i) x = (x^3 + 1)/9   (ii) x = 9/x – 1/x^2   (iii) x = √(9 – 1/x)

In case of (i), g′(x) = x^2/3 and for x in [2, 4], | g′(x) | > 1. Hence it will not give rise to a convergent sequence.
In case of form (ii), g′(x) = –9/x^2 + 2/x^3 and, for x near 2 in [2, 4], | g′(x) | > 1.
In case of form (iii), g′(x) = (1/2)(9 – 1/x)^(–1/2) ⋅ (1/x^2) and | g′(x) | < 1.
Thus, only the form (iii) is guaranteed to give a convergent sequence for finding the root in [2, 4].
We start the iterations taking x0 = 2 in the iteration scheme (iii). The results for successive iterations are,
x0 = 2.0, x1 = 2.91548, x2 = 2.94228, x3 = 2.94281, x4 = 2.94282
Thus, the root can be taken as 2.9428, correct to four decimal places.

Newton-Raphson Method
Newton-Raphson method is a widely used numerical method for finding a root of
an equation f (x) = 0, to the desired accuracy. It is an iterative method which has
a faster rate of convergence and is very useful when the expression for the derivative
f ′(x) is not complicated. To derive the formula for this method, we consider a
Taylor’s series expansion of f (x0 + h), x0 being an initial guess of a root of
f (x) = 0 and h is a small correction to the root.
f (x0 + h) = f (x0) + h f ′(x0) + (h^2/2!) f ″(x0) + ...
Assuming h to be small, we equate f (x0 + h) to 0 by neglecting square and
higher powers of h.
f (x0) + h f ′(x0) = 0
Or, h = – f (x0)/f ′(x0)
Thus, we can write an improved value of the root as,
x1 = x0 + h
i.e., x1 = x0 – f (x0)/f ′(x0)

Successive approximations x2, x3, ..., xn+1 can thus be written as,
x2 = x1 – f (x1)/f ′(x1)
x3 = x2 – f (x2)/f ′(x2)
... ... ...
xn+1 = xn – f (xn)/f ′(xn) (2.13)
If the sequence {xn } converges, we get the root.
NOTES
Algorithm: Computation of a root of f (x) = 0 by Newton-Raphson method.
Step 0: Define f (x), f ′(x)
Step 1: Input x0, epsilon, maxit
[x0 is the initial guess of root, epsilon is the desired accuracy of the
root and maxit is the maximum number of iterations allowed]
Step 2: Set i = 0
Step 3: Set f0 = f (x0)
Step 4: Compute df0 = f ′ (x0)
Step 5: Set x1 = x0 – f0/df0
Step 6: Set i = i + 1
Step 7: Check if |(x1 – x0)/x1| < epsilon, then print ‘root is’, x1 and stop
        else if i < maxit, then set x0 = x1 and go to Step 3
Step 8: Write ‘Iterations do not converge’
Step 9: End
Example 7: Use Newton-Raphson method to compute the positive root of the
equation x3 – 8x – 4 = 0, correct to five significant digits.
Solution: Newton-Raphson iterative scheme is given by,
xn+1 = xn – f (xn)/f ′(xn), for n = 0, 1, 2, ...

For the given equation f (x) = x3 – 8x – 4


First we find the location of the root by the method of tabulation. The table for
f (x) is,

x 0 1 2 3 4
f (x) − 4 − 11 − 12 − 1 28

Evidently, the positive root is near x = 3. We take x0 = 3 in Newton-Raphson


iterative scheme.
xn+1 = xn – (xn^3 – 8xn – 4)/(3xn^2 – 8)

We get, x1 = 3 – (27 – 24 – 4)/(27 – 8) = 3.0526

Similarly, x2 = 3.05138, and x3 = 3.05138
Thus, the positive root is 3.0514, correct to five significant digits.
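Example 7 can be reproduced with a short Newton-Raphson routine in Python (an illustrative sketch):

```python
# Newton-Raphson iteration: x_{n+1} = x_n - f(x_n)/f'(x_n),
# applied to f(x) = x^3 - 8x - 4 with the initial guess x0 = 3.

def newton(f, df, x0, epsilon=1e-10, maxit=50):
    for _ in range(maxit):
        x1 = x0 - f(x0) / df(x0)
        if abs((x1 - x0) / x1) < epsilon:
            return x1
        x0 = x1
    raise RuntimeError("iterations do not converge")

root = newton(lambda x: x**3 - 8*x - 4,
              lambda x: 3*x**2 - 8, 3.0)
print(round(root, 4))   # 3.0514
```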

Example 8: Find a real root of the equation x^3 + 7x^2 + 9 = 0, correct to five significant digits.
Solution: First we find the location of the real root by tabulation. We observe that
the real root is negative and since f (–7) = 9 > 0 and f (–8) = – 55 < 0, a root lies
between –7 and – 8.
For computing the root to the desired accuracy, we take x0 = –8 and use
Newton-Raphson iterative formula,
xn+1 = xn – (xn^3 + 7xn^2 + 9)/(3xn^2 + 14xn), for n = 0, 1, 2, ...

The successive iterations give,


x1 = –7.3125
x2 = –7.17966
x3 = –7.17484
x4 = –7.17483
Hence, the desired root is –7.1748, correct to five significant digits.

Example 9: For evaluating √a, deduce the iterative formula xn+1 = (1/2)(xn + a/xn), by using Newton-Raphson scheme of iteration. Hence, evaluate √2 using this, correct to four significant digits.

Solution: We observe that √a is the solution of the equation x^2 – a = 0.
Now, using f (x) = x^2 – a in the Newton-Raphson iterative scheme, we have
xn+1 = xn – (xn^2 – a)/(2xn)
i.e., xn+1 = (1/2)(xn + a/xn), for n = 0, 1, 2, ...

Now, for computing √2, we assume x0 = 1.4. The successive iterations give,
x1 = (1/2)(1.4 + 2/1.4) = 3.96/2.8 = 1.414
x2 = (1/2)(1.414 + 2/1.414) = 1.41421
Hence, the value of √2 is 1.414 correct to four significant digits.


Example 10: Prove that a^(1/k), the kth root of a, can be computed by the iterative scheme xn+1 = (1/k)[(k – 1)xn + a/xn^(k–1)]. Hence evaluate 2^(1/3), correct to five significant digits.

Solution: The value a^(1/k) is the positive root of x^k – a = 0. Thus, the iterative scheme for evaluating a^(1/k) is,
xn+1 = xn – (xn^k – a)/(k xn^(k–1))
or, xn+1 = (1/k)[(k – 1)xn + a/xn^(k–1)], for n = 0, 1, 2, ...
Now, for evaluating 2^(1/3), we take x0 = 1.25 and use the iterative formula,
xn+1 = (1/3)(2xn + 2/xn^2)
We have, x1 = (1/3)(2 × 1.25 + 2/(1.25)^2) = 1.26
x2 = 1.259921, x3 = 1.259921
Hence, 2^(1/3) = 1.2599, correct to five significant digits.
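The scheme of Examples 9 and 10 becomes a short Python function (illustrative sketch):

```python
# k-th root of a by Newton-Raphson:
# x_{n+1} = ((k - 1) x_n + a / x_n^(k-1)) / k.

def kth_root(a, k, x0, epsilon=1e-12, maxit=100):
    for _ in range(maxit):
        x1 = ((k - 1) * x0 + a / x0 ** (k - 1)) / k
        if abs((x1 - x0) / x1) < epsilon:
            return x1
        x0 = x1
    raise RuntimeError("iterations do not converge")

print(round(kth_root(2.0, 2, 1.4), 4))    # 1.4142 (square root of 2)
print(round(kth_root(2.0, 3, 1.25), 5))   # 1.25992 (cube root of 2)
```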


Example 11: Find by Newton-Raphson method, the real root of
3x – cos x – 1 = 0, correct to three significant figures.
Solution: The location of the real root of f (x) = 3x – cos x – 1 = 0, is [0, 1] since
f (0) = – 2 and f (1) > 0.
We choose x0 = 0, and use Newton-Raphson scheme of iteration.
xn+1 = xn – (3xn – cos xn – 1)/(3 + sin xn), n = 0, 1, 2, ...

The results for successive iterations are, Methods for Finding
Complex Roots and
x1 = 0.667, x2 = 0.6075, x3 = 0.6071 Polynomial Equations

Thus, the root is 0.607 correct to three significant figures.


Example 12: Find a real root of the equation x^x + 2x – 6 = 0, correct to four significant digits.
Solution: Taking f (x) = x^x + 2x – 6, we have f (1) = –3 < 0 and f (2) = 2 > 0.
Thus, a root lies in [1, 2]. Choosing x0 = 2, we use Newton-Raphson iterative
scheme given by,
xn+1 = xn – (xn^xn + 2xn – 6)/(xn^xn (loge xn + 1) + 2), for n = 0, 1, 2, ...

The computed results for successive iterations are,
x1 = 2 – (4 + 4 – 6)/(4(loge 2 + 1) + 2) = 1.77202
x2 = 1.72463
x3 = 1.72310

Hence, the root is 1.723 correct to four significant figures.


Order of Convergence: We consider the order of convergence of the Newton-
Raphson method given by the formula,
xn+1 = xn – f (xn)/f ′(xn)

Let us assume that the sequence of iterations {xn} converges to the root ξ .
Then, expanding by Taylor’s series about xn, the relation f ( ξ ) = 0, gives
f (xn) + (ξ – xn) f ′(xn) + (1/2)(ξ – xn)^2 f ″(xn) + ... = 0
∴ – f (xn)/f ′(xn) = ξ – xn + (1/2)(ξ – xn)^2 ⋅ f ″(xn)/f ′(xn) + ...
∴ xn+1 – ξ ≈ (1/2)(ξ – xn)^2 ⋅ f ″(xn)/f ′(xn)
Taking εn as the error in the nth iteration and writing εn = xn – ξ, we have
εn+1 ≈ (1/2) εn^2 ⋅ f ″(ξ)/f ′(ξ) (2.14)
Thus, εn+1 = k εn^2, where k is a constant.


This shows that the order of convergence of Newton-Raphson method is 2.
In other words, the Newton-Raphson method has a quadratic rate of convergence.
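The quadratic rate can be observed numerically. In the illustrative sketch below, Newton-Raphson is applied to f (x) = x^2 – 2 and, in line with Equation (2.14), the ratios εn+1/εn^2 settle near f ″(ξ)/(2 f ′(ξ)) = 1/(2√2) ≈ 0.3536:

```python
import math

# Empirical check of quadratic convergence for f(x) = x^2 - 2.
# The error ratio e_{n+1} / e_n^2 should approach 1/(2*sqrt(2)).

xi = math.sqrt(2.0)               # the exact root
x = 3.0                           # a deliberately poor initial guess
errors = []
for _ in range(5):
    x = x - (x * x - 2) / (2 * x)     # one Newton step
    errors.append(abs(x - xi))

ratios = [errors[i + 1] / errors[i] ** 2 for i in range(len(errors) - 1)]
print(ratios)
```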
The condition for convergence of Newton-Raphson method can easily be derived by rewriting the Newton-Raphson iterative scheme as xn+1 = φ(xn) with
φ(x) = x – f (x)/f ′(x)
Hence, using the condition for convergence of the linear iteration method, we
can write φ′(x) = f (x) f ″(x)/[f ′(x)]^2

Thus, the sufficient condition for the convergence of Newton-Raphson method


is,

| f (x) f ″(x) | / [f ′(x)]^2 < 1, in the interval near the root.

i.e., | f (x) f ″(x) | < | f ′(x) |^2 (2.15)

Secant Method
Secant method can be considered as a discretized form of Newton-Raphson
method. The iterative formula for this method is obtained from formula of Newton-
Raphson method on replacing the derivative f ′( x0 ) by the gradient of the chord
joining two neighbouring points x0 and x1 on the curve y = f (x).
Thus, we have
f ′(x0) ≈ (f (x1) – f (x0))/(x1 – x0)

The iterative formula is given by,


x2 = x0 – f (x0)(x1 – x0)/(f (x1) – f (x0))

This can be rewritten as,


x2 = (x0 f (x1) – x1 f (x0))/(f (x1) – f (x0))

The iterative formula is equivalent to the one for Regula–Falsi method. The
distinction between secant method and Regula–Falsi method lies in the fact that
unlike in Regula–Falsi method, the two initial guess values do not bracket a root
and the bracketing of the root is not checked during successive iterations, in secant
method. Thus, secant method may not always give rise to a convergent sequence
to find the root. The geometrical interpretation of the method is shown in Figure
2.4.

Fig. 2.4 Secant Method
Algorithm: To find a root of f (x) = 0, by Secant method.
Step 1: Define f (x).
Step 2: Input x0, x1, error, maxit. [x0, x1, are initial guess values, error is the
prescribed precision and maxit is the maximum number of iterations
allowed].
Step 3: Set i = 1
Step 4: Compute f0 = f (x0)
Step 5: Compute f1 = f (x1)
Step 6: Compute x2 = (x0 f1 – x1 f0)/(f1 – f0)
Step 7: Set i = i + 1
Step 8: Compute accy = |x2 – x1| / |x1|
Step 9: Check if accy < error, then go to Step 14
Step 10: Check if i ≥ maxit then go to Step 16
Step 11: Set x0 = x1
Step 12: Set x1 = x2
Step 13: Go to step 6
Step 14: Print “Root =”, x2
Step 15: Go to Step 17
Step 16: Print ‘iterations do not converge’
Step 17: Stop
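A Python sketch of the secant iteration (illustrative; the test equation reuses x^3 – 8x – 4 from Example 7, whose root is ≈ 3.0514):

```python
# Secant method: Newton-Raphson with the derivative replaced by the
# gradient of the chord through the two most recent points.

def secant(f, x0, x1, epsilon=1e-10, maxit=50):
    f0, f1 = f(x0), f(x1)
    for _ in range(maxit):
        x2 = (x0 * f1 - x1 * f0) / (f1 - f0)
        if abs((x2 - x1) / x1) < epsilon:
            return x2
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)
    raise RuntimeError("iterations do not converge")

root = secant(lambda x: x**3 - 8*x - 4, 3.0, 4.0)
print(round(root, 4))   # 3.0514
```

As noted above, the two working points need not bracket the root, so convergence is not guaranteed for a poor pair of starting values.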

Regula-Falsi Method
Regula-Falsi method is also a bracketing method. As in bisection method, we
start the computation by first finding an interval (a, b) within which a real root lies.
Writing a = x0 and b = x1, we compute f (x0) and f (x1) and check if f (x0) and f
(x1) are of opposite signs. For determining the approximate root x2, we find the

point of intersection of the chord joining the points (x0, f (x0)) and (x1, f (x1)) with the x-axis, i.e., the curve y = f (x) is replaced by the chord given by,
y – f (x0) = [(f (x1) – f (x0))/(x1 – x0)] (x – x0) (2.16)

Thus, by putting y = 0 and x = x2 in Equation (2.16), we get


x2 = x0 – f (x0)(x1 – x0)/(f (x1) – f (x0)) (2.17)

Next, we compute f (x2) and determine the interval in which the root lies in the
following manner. If (i) f (x2) and f (x1) are of opposite signs, then the root lies in
(x2, x1). Otherwise if (ii) f (x0) and f (x2) are of opposite signs, then the root lies
in (x0, x2). The next approximate root is determined by changing x0 by x2 in the
first case and x1 by x2 in the second case.
The aforesaid process is repeated until the root is computed to the desired
accuracy ε, i.e., the condition | (xk+1 – xk)/xk | < ε should be satisfied.
Regula-Falsi method can be geometrically interpreted by the following Figure
2.5.
Fig. 2.5 Regula-Falsi Method


Algorithm: Computing root of an equation by Regula-Falsi method.
Step 1: Define f (x)
Step 2: Read epsilon, the desired accuracy
Step 3: Read maxit, the maximum no. of iterations
Step 4: Read x0, x1 two initial guess values of root
Step 5: Compute f0 = f (x0)
Step 6: Compute f1 = f (x1)
Step 7: Check if f0 f1 < 0, then go to the next step
else go to Step 4

Step 8: Compute x2 = (x0 f1 – x1 f0) / (f1 – f0)
Step 9: Compute f2 = f (x2)

Step 10: Check if |f2| < epsilon, then go to Step 18


Step 11: Check if f2 f0 < 0 then go to the next Step NOTES
else go to Step 15
Step 12: Set x1 = x2
Step 13: Set f1 = f2
Step 14: Go to Step 7
Step 15: Set x0 = x2
Step 16: Set f0 = f2
Step 17: Go to Step 7
Step 18: Write ‘root =’, x2, f2
Step 19: End
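The algorithm above translates directly into a short program. The sketch below is a minimal illustration (the function name regula_falsi and the default values of epsilon and maxit are our choices); it is exercised on the equation of Example 13 that follows.

```python
def regula_falsi(f, x0, x1, epsilon=1e-4, maxit=50):
    """Steps 1-19 above: root of f(x) = 0 bracketed by [x0, x1]."""
    f0, f1 = f(x0), f(x1)
    if f0 * f1 >= 0:
        raise ValueError("f(x0) and f(x1) must have opposite signs")
    for _ in range(maxit):
        # Step 8: the chord through (x0, f0) and (x1, f1) meets the x-axis
        x2 = (x0 * f1 - x1 * f0) / (f1 - f0)
        f2 = f(x2)
        if abs(f2) < epsilon:          # Step 10
            break
        if f2 * f0 < 0:                # root lies in (x0, x2)
            x1, f1 = x2, f2
        else:                          # root lies in (x2, x1)
            x0, f0 = x2, f2
    return x2

# Positive root of x^3 - 3x - 5 = 0, starting from the interval [2, 3]
root = regula_falsi(lambda x: x**3 - 3*x - 5, 2.0, 3.0)
```

Note how the two bracketing cases in Steps 11–17 keep the root surrounded at every iteration, exactly as in the bisection method.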
Example 13: Use Regula-Falsi method to compute the positive root of x3 – 3x –
5 = 0, correct to four significant figures.
Solution: First we find the interval in which the root lies. We observe that f (2) =
–3 and f (3) = 13. Thus, the root lies in [2, 3]. For using the Regula–Falsi method,
we use the formula,
x2 = x0 − [ f (x0) / ( f (x1) − f (x0))] (x1 − x0)
With x0 = 2, and x1 = 3, we have
x2 = 2 + [3 / (13 + 3)] (3 − 2) = 2.1875
Again, since f (x2) = f (2.1875) = –1.095, we consider the interval
[2.1875, 3]. The next approximation is x3 = 2.2461. Also, f (x3) = – 0.4128. Hence,
the root lies in [2.2461, 3]
Repeating the iterations, we get
x4 = 2.2684, f (x4) = – 0.1328
x5 = 2.2748, f (x5) = – 0.0529
x6 = 2.2773, f (x6) = – 0.0316
x7 = 2.2788, f (x7) = – 0.0028
x8 = 2.2792, f (x8) = – 0.0022
The root correct to four significant figures is 2.279.

Check Your Progress
1. How will you compute the roots of the form f (x) = 0?
2. Define tabulation method.
3. Explain bisection method.
4. How is order of convergence determined?
5. Explain Newton-Raphson method.
6. Define secant method.
7. Explain Regula-Falsi method.

2.3 POLYNOMIAL EQUATIONS


Polynomial equations with real coefficients have some important characteristics
regarding their roots. A polynomial equation of degree n is of the form pn(x) =
an x^n + an–1 x^(n–1) + an–2 x^(n–2) + ... + a2 x^2 + a1 x + a0 = 0.
(i) A polynomial equation of degree n has exactly n roots.
(ii) Complex roots occur in pairs, i.e., if α + i β is a root of pn(x) = 0, then
α − i β is also a root.
(iii) Descartes’ rule of signs can be used to determine the number of possible
real roots (positive or negative).
(iv) If x1, x2,..., xn are all real roots of the polynomial equation, then we can
express pn(x) uniquely as,
pn(x) = an (x − x1)(x − x2) ... (x − xn)
(v) pn(x) has a quadratic factor for each pair of complex conjugate roots.
Let α + iβ and α − iβ be the roots; then {x^2 − 2αx + (α^2 + β^2)} is the
quadratic factor.
(vi) There is a special method, known as Horner’s method of synthetic
substitution, for evaluating the values of a polynomial and its derivatives
for a given x.
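Horner’s synthetic substitution mentioned in (vi) can be sketched as below (a minimal illustration; the function name and argument order are our choices). Coefficients are supplied from the highest power down, and the derivative is accumulated alongside the value:

```python
def horner(coeffs, x):
    """Evaluate p(x) and p'(x) by synthetic substitution.
    coeffs = [an, an-1, ..., a1, a0] in descending powers."""
    p, dp = 0.0, 0.0
    for a in coeffs:
        dp = dp * x + p   # the derivative picks up the previous value of p
        p = p * x + a
    return p, dp

# p(x) = x^3 - 3x - 5 at x = 2: p(2) = -3 and p'(2) = 9
val, der = horner([1, 0, -3, -5], 2.0)
```

Each coefficient is touched once, so the cost is one multiplication and one addition per term for the value, and the same again for the derivative.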

Descartes’ Rule
The number of positive real roots of a polynomial equation is equal to the number
of changes of sign in pn(x), written with descending powers of x, or less by an
even number.
Consider for example, the polynomial equation,
3x^5 + 2x^4 + x^3 − 2x^2 + x − 2 = 0

Clearly there are three changes of sign and hence the number of positive real
roots is three or one. Thus, it must have a real root. In fact, every polynomial
equation of odd degree has a real root.
We can also use Descartes’ rule to determine the number of negative roots by
finding the number of changes of signs in pn(–x). For the above equation,
pn(–x) = –3x^5 + 2x^4 – x^3 – 2x^2 – x – 2 = 0, and it has two changes of sign. Thus, it
has either two negative real roots or none.
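The counting in Descartes’ rule is mechanical and easy to automate; the sketch below (function and variable names are our choices) reproduces the two counts obtained above:

```python
def sign_changes(coeffs):
    """Number of sign changes in a coefficient list written in
    descending powers, ignoring zero coefficients."""
    nz = [c for c in coeffs if c != 0]
    return sum(1 for a, b in zip(nz, nz[1:]) if a * b < 0)

# p(x) = 3x^5 + 2x^4 + x^3 - 2x^2 + x - 2
p = [3, 2, 1, -2, 1, -2]
# p(-x): negate the coefficients of the odd powers
p_neg = [c if (len(p) - 1 - i) % 2 == 0 else -c for i, c in enumerate(p)]
# sign_changes(p) is 3 (positive roots: three or one);
# sign_changes(p_neg) is 2 (negative roots: two or none)
```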

Check Your Progress


8. Define polynomial equations.
9. Give the statement of Descartes’ rule.

2.4 ANSWERS TO CHECK YOUR PROGRESS QUESTIONS
1. We consider numerical methods for computing the roots of an equation of the
form,
f (x) = 0
Where f (x) is a reasonably well-behaved function of a real variable x.
2. In the tabulation method, a table of values of f (x) is made for values of x in
a particular range. Then, we look for the change in sign in the values of f (x)
for two consecutive values of x. We conclude that a real root lies between
these values of x.
3. The bisection method involves successive reduction of the interval in which
an isolated root of an equation lies.
The sub-interval in which the root lies is again bisected and the above
process is repeated until the length of the sub-interval is less than the desired
accuracy.
The bisection method is also termed as a bracketing method, since the
method successively reduces the gap between the two ends of an interval
surrounding the real root, i.e., brackets the real root.
4. The order of convergence of an iterative process is determined in terms of
the errors en and en+1 in successive iterations.
5. Newton-Raphson method is a widely used numerical method for finding a
root of an equation f (x) = 0, to the desired accuracy. It is an iterative method
which has a faster rate of convergence and is very useful when the expression
for the derivative f ′(x) is not complicated. To derive the formula for this
method, we consider a Taylor’s series expansion of f (x0 + h), x0 being an
initial guess of a root of f (x) = 0 and h is a small correction to the root.
Methods for Finding 6. Secant method can be considered as a discretized form of Newton-Raphson
Complex Roots and
Polynomial Equations method. The iterative formula for this method is obtained from formula of
Newton-Raphson method on replacing the derivative f ′( x0 ) by the gradient
of the chord joining two neighbouring points x0 and x1 on the curve y = f (x).
NOTES
7. Regula-Falsi method is also a bracketing method. As in bisection method, we
start the computation by first finding an interval (a, b) within which a real root
lies. Writing a = x0 and b = x1, we compute f (x0) and f (x1) and check if f (x0)
and f (x1) are of opposite signs. For determining the approximate root x2, we
find the point of intersection of the chord joining the points (x0, f (x0)) and (x1,
f (x1)) with the x-axis, i.e., the curve y = f (x0) is replaced by the chord given
by,
y − f (x0) = [( f (x1) − f (x0)) / (x1 − x0)] (x − x0)

8. A polynomial equation of degree n is of the form pn(x) = an x^n + an–1 x^(n–1) +
an–2 x^(n–2) + ... + a2 x^2 + a1 x + a0 = 0.
9. The number of positive real roots of a polynomial equation is equal to the
number of changes of sign in pn(x), written with descending powers of x, or
less by an even number.

2.5 SUMMARY
A root of an equation is usually computed in two stages. First, we find the
location of a root in the form of a crude approximation of the root. Next
we use an iterative technique for computing a better value of the root to a
desired accuracy in successive approximations/computations.
Tabulation Method: In the tabulation method, a table of values of f (x) is
made for values of x in a particular range.
The bisection method involves successive reduction of the interval in which
an isolated root of an equation lies.
If a function f (x) is continuous in the closed interval [a, b] and f (a) and f
(b) are of opposite signs, i.e., f (a) f (b) < 0, then there exists at least one
real root of f (x) = 0 between a and b.
The bisection method is also termed as a bracketing method, since the
method successively reduces the gap between the two ends of an interval
surrounding the real root, i.e., brackets the real root.
If the function g(x) is continuous in the interval [a, b] which contains a root
ξ of the equation f (x) = 0, and is rewritten as x = g(x), and |g′(x)| ≤ l < 1 in
this interval, then for any choice of x0 ∈ [a, b], the sequence {xn} determined
by the iterations,
xk+1 = g(xk),  for k = 0, 1, 2, ...

converges to the root of f (x) = 0.
Order of Convergence: The order of convergence of an iterative process is
determined in terms of the errors en and en+1 in successive iterations. An
iterative process is said to have kth order convergence if lim (n→∞) |en+1| / |en|^k < M,
where M is a finite number.
Newton-Raphson method is a widely used numerical method for finding a
root of an equation f (x) = 0, to the desired accuracy.
Secant method can be considered as a discretized form of Newton-Raphson
method. The iterative formula for this method is obtained from formula of
Newton-Raphson method on replacing the derivative f ′( x0 ) by the gradient
of the chord joining two neighbouring points x0 and x1 on the curve y = f
(x).
Descartes’ rule of signs can be used to determine the number of possible
real roots (positive or negative).
If x1, x2,..., xn are all real roots of the polynomial equation, then we can
express pn(x) uniquely as,
pn(x) = an (x − x1)(x − x2) ... (x − xn)
We can also use Descartes’ rule to determine the number of negative
roots by finding the number of changes of signs in pn(–x).

2.6 KEY WORDS

Graphical Method: In the graphical method, we draw the graph of the
function y = f (x), for a certain range of values of x.
Tabulation Method: In the tabulation method, a table of values of f (x) is
made for values of x in a particular range.
Order of Convergence: The order of convergence of an iterative process
is determined in terms of the errors en and en+1 in successive iterations.

2.7 SELF ASSESSMENT QUESTIONS AND EXERCISES

Short-Answer Questions
1. What is tabulation method?
2. What is bisection method?
3. Define Newton-Raphson method.
4. What is meant by secant method?
5. Explain Regula-Falsi method.
Long-Answer Questions
1. Use graphical method to find the location of a real root of the equation x3 +
10x – 15 = 0.
2. Draw the graph of the function f (x) = cos x – x, in the range [0, π/2) and
find the location of the root of the equation f (x) = 0.
3. Compute the root of the equation x3 – 9x + 1 = 0 which lies between 2 and 3
correct upto three significant digits using bisection method.
4. Compute the root of the equation x3 + x2 – 1 = 0, near 1, by the iterative
method correct upto two significant digits.
5. Use iterative method to find the root near x = 3.8 of the equation 2x – log10x
= 7 correct upto four significant digits.
6. Compute using Newton-Raphson method the root of the equation ex = 4x,
near 2, correct upto four significant digits.

7. Use an iterative formula to compute the seventh root of 125, i.e., 125^(1/7), correct upto four significant digits.
8. Find the real root of x log10x – 1.2 = 0 correct upto four decimal places using
Regula-Falsi method.
9. Use Regula-Falsi method to find the root of the following equations correct
upto four significant figures:
(i) x3 – 4x – 1 = 0, the root near x = 2
(ii) x6 – x4 – x3 – 1 = 0, the root between 1.4 and 1.5
10. Compute the positive root of the given equation correct upto four places of
decimals using Newton-Raphson method:
x + loge x = 2

2.8 FURTHER READINGS

Jain, M. K., S. R. K. Iyengar and R. K. Jain. 2007. Numerical Methods for
Scientific and Engineering Computation. New Delhi: New Age
International (P) Limited.
Atkinson, Kendall E. 1989. An Introduction to Numerical Analysis, 2nd Edition.
US: John Wiley & Sons.
Jain, M. K. 1983. Numerical Solution of Differential Equations. New Delhi:
New Age International (P) Limited.

Conte, Samuel D. and Carl de Boor. 1980. Elementary Numerical Analysis:
An Algorithmic Approach. New York: McGraw-Hill.
Skeel, Robert. D and Jerry B. Keiper. 1993. Elementary Numerical Computing
with Mathematica. New York: McGraw-Hill.
Balaguruswamy, E. 1999. Numerical Methods. New Delhi: Tata McGraw-Hill.
Datta, N. 2007. Computer Oriented Numerical Methods. New Delhi: Vikas
Publishing House Pvt. Ltd.

UNIT 3 BIRGE – VIETA, BAIRSTOW’S AND GRAEFFE’S ROOT SQUARING METHODS
Structure
3.0 Introduction
3.1 Objectives
3.2 Birge – Vieta Method
3.3 Bairstow’s Method
3.4 Graeffe’s Root Squaring Method
3.5 Answers to Check Your Progress Questions
3.6 Summary
3.7 Key Words
3.8 Self-Assessment Questions and Exercises
3.9 Further Readings

3.0 INTRODUCTION

In mathematics, a polynomial is an expression consisting of variables (also called
indeterminates) and coefficients that involves only the operations of addition,
subtraction, multiplication, and non-negative integer exponents of variables. An
example of a polynomial in a single indeterminate x is x^2 – 4x + 7, while one in three
variables is x^3 + 2xyz^2 – yz + 1. A polynomial equation is, therefore, an equation that has
multiple terms made up of numbers and variables. The degree tells us how many
roots can be found in a polynomial equation. For example, if the highest exponent
is 3, then the equation has three roots. The roots of the polynomial equation are
the values of x where y = 0. Principally, the polynomial equation is an equation of
the form f(x) = 0 where f(x) is a polynomial in x.
The sixteenth century French mathematician Francois Vieta was the pioneer
to develop methods for finding approximate roots of polynomial equations. Later,
several other methods were developed for solving polynomial equations. In
numerical analysis, Bairstow’s method is an efficient algorithm for finding the roots
of a real polynomial of arbitrary degree. The algorithm first appeared in the appendix
of the 1920 book ‘Applied Aerodynamics’ by Leonard Bairstow. The algorithm
finds the roots in complex conjugate pairs using only real arithmetic. The Graeffe’s
root squaring method is a direct method to find the roots of any polynomial equation
with real coefficients. Polynomials are used to form polynomial equations, which
encode a wide range of problems, from elementary word problems to complicated
scientific problems.
In this unit, you will study about the Birge-Vieta method, the Bairstow’s
method, and the Graeffe’s root squaring method.

3.1 OBJECTIVES


After going through this unit, you will be able to:
Discuss the Birge-Vieta method
Understand the Bairstow’s method
Elaborate on the Graeffe’s root squaring method

3.2 BIRGE – VIETA METHOD

Birge-Vieta method is used for finding the real roots of a polynomial
equation. This method is based on an original method developed by the
mathematicians Birge and Vieta. Finding, or approximating, all the roots of a
polynomial equation is very significant. In the field of science and engineering,
there are numerous applications which require all the roots of a polynomial
equation for a particular problem.
Newton-Raphson method is fundamentally used for finding the root of
algebraic and transcendental equations. Since the rate of convergence of this method
is quadratic, the Newton-Raphson method can be used to find a root of a
polynomial equation, as a polynomial equation is an algebraic equation. Birge-Vieta
method is based on the Newton-Raphson method, i.e., it is a modified
form of the Newton-Raphson method.
Consider the given polynomial equation of degree n, which has the form,
Pn(x) = an x^n + . . . + a1 x + a0 = 0.
Let x0 be an initial approximation to the root. The Newton-Raphson
iteration formula for improving an approximation xi is,
xi+1 = xi − Pn(xi) / P′n(xi)
To apply this formula, we must evaluate both Pn(xi) and P′n(xi) at each xi. The
most obvious method is to evaluate each power of xi term by term.
However, direct term-by-term evaluation is the most inefficient method of evaluating a
polynomial, because of the amount of computation involved and also because of
the possible growth of round-off errors. Thus there must be some proficient and
effective method for evaluating Pn(x) and P′n(x).
Vieta’s formulas relate the coefficients of a polynomial to the sums and
products of its roots, along with the products of the roots taken in groups. That is,
Vieta’s formulas define the association between the roots of a polynomial and its
coefficients. The following example will make clear how to find a polynomial with
given roots.
Here we will discuss about the real-valued polynomials, i.e., the coefficients
of polynomials are real numbers.
Consider a quadratic polynomial. If the given two real roots are r1 and r2,
then find a polynomial.
Let the polynomial be a2 x^2 + a1 x + a0. When the roots are given, then we
can also write the polynomial in the form k (x – r1) (x – r2).
Since both the equations denotes the same polynomial, therefore equate
both polynomials as,
a2x2 + a1x + a0 = k (x – r1) (x – r2) (3.1)
On simplifying the Equation (3.1), we have the following form of equation,
a2x2 + a1x + a0 = kx2 – k (r1 + r2) x + k (r1r2)
Comparing the coefficients of both the sides of the above equation, we
have,
For x2, a2 = k
For x, a1 = – k (r1 + r2)
For constant term, a0 = k r1r2
Which gives,
a2 = k
Therefore,

r1 + r2 = – a1 / a2        (3.2)

r1 r2 = a0 / a2        (3.3)

Equations (3.2) and (3.3) are termed as Vieta’s formulas for a second degree
polynomial.
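These two relations are easy to verify numerically. In the quick sketch below the roots r1, r2 and the leading coefficient k are sample values of our choosing:

```python
# Build a quadratic from chosen roots and leading coefficient:
# k (x - r1)(x - r2) = k x^2 - k (r1 + r2) x + k r1 r2
k, r1, r2 = 3, 2, 5
a2, a1, a0 = k, -k * (r1 + r2), k * (r1 * r2)   # 3x^2 - 21x + 30
# Vieta's formulas (3.2) and (3.3): r1 + r2 = -a1/a2 and r1*r2 = a0/a2
```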
As a general rule, for an nth degree polynomial an x^n + . . . + a1 x + a0 with
roots r1, r2, . . ., rn, there are n different Vieta’s formulas, which can be written in a
condensed form as,

Σ ri1 ri2 . . . rik = (–1)^k an–k / an,  for k = 1, 2, . . ., n,

where the sum runs over all k-element subsets {i1, i2, . . ., ik} of {1, 2, . . ., n}.
Example 1: Find all the roots of the given polynomial equation P3(x) = x^3 + x – 3
= 0 rounded off to three decimal places. Stop the iteration whenever |xi+1 – xi| <
0.0001.
Solution: The equation P3(x) = 0 has three roots. Since there is only one change
in the sign of the coefficients, the equation can have as a maximum one positive
real root. The equation has no negative real root since P3(–x) = 0 has no change of
sign of coefficients. Since P3(x) = 0 is of odd degree it has at least one real root.
Hence the given equation x3 + x – 3 = 0 has one positive real root and a complex
pair. Since P(1) = –1 and P(2) = 7, as per the intermediate value theorem the
equation has a real root lying in the interval ]1,2[.
Now we will find the real root using Birge-Vieta method. Let the initial
approximation be 1.1.
First Iteration
By synthetic substitution, P3(1.1) = –0.569 and P′3(1.1) = 4.63.
Therefore x1 = 1.1 – (–0.569)/4.63 = 1.22289


Similarly,
x2 = 1.21347
x3 = 1.21341
Since |x3 – x2| < 0.0001, we stop the iteration here. Hence the required
value of the root is 1.213, rounded off to three decimal places.
Next we will find the deflated polynomial of P3(x). To obtain the deflated
polynomial, we first find the quotient polynomial q2(x) by synthetic division,
using the final approximation x3 = 1.213.
Here, |P3(1.213)| = 0.0022, i.e., the magnitude of the error in satisfying
P3(x3) = 0 is 0.0022.
We then find q2(x) = x^2 + 1.213x + 2.4714 = 0
This is a quadratic equation and its roots are given by,

x = [ –1.213 ± √(1.213^2 – 4 × 2.4714) ] / 2
= ( –1.213 ± 2.9007 i ) / 2
= –0.6065 ± 1.4505 i

Hence the three roots of the equation rounded off to three decimal places
are 1.213, –0.6065 + 1.4505 i and –0.6065 – 1.4505 i.
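The whole procedure of Example 1 — Newton iteration with synthetic-division evaluation, followed by deflation — can be sketched as follows. The function names, the tolerance and the iteration cap are our choices, not part of the original method:

```python
def horner_div(coeffs, x):
    """Synthetic division of p (descending coefficients) at x:
    returns p(x), p'(x) and the quotient coefficients."""
    p, dp, q = 0.0, 0.0, []
    for a in coeffs:
        dp = dp * x + p
        p = p * x + a
        q.append(p)
    return p, dp, q[:-1]   # the last entry of q is the remainder p(x)

def birge_vieta(coeffs, x0, eps=1e-4, maxit=50):
    """One real root by Newton's iteration, plus the deflated polynomial."""
    x = x0
    for _ in range(maxit):
        p, dp, _ = horner_div(coeffs, x)
        dx = p / dp
        x -= dx
        if abs(dx) < eps:
            break
    return x, horner_div(coeffs, x)[2]

# Example 1: P3(x) = x^3 + x - 3 with x0 = 1.1
root, q = birge_vieta([1, 0, 1, -3], 1.1)
# q holds the deflated quadratic: x^2 + 1.213...x + 2.471...
```

The complex pair is then read off from the deflated quadratic with the quadratic formula, as in the worked example above.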

3.3 BAIRSTOW’S METHOD

In numerical analysis, Bairstow’s method is an efficient algorithm for finding the
roots of a real polynomial of arbitrary degree. The algorithm was formulated by
Leonard Bairstow and first appeared in the appendix of the book ‘Applied
Aerodynamics’ (1920). The algorithm finds the roots in complex conjugate pairs
using only real arithmetic.
Bairstow’s approach is to use Newton’s method to adjust the coefficients u
and v in the quadratic x2 + ux + v until its roots are also roots of the polynomial
being solved. The roots of the quadratic may then be determined, and the polynomial
may be divided by the quadratic to eliminate those roots. This process is then
iterated until the polynomial becomes quadratic or linear, and all the roots have
been determined.
Long division of the polynomial to be solved,

p(x) = an x^n + an–1 x^(n–1) + . . . + a1 x + a0,

by x^2 + ux + v yields a quotient Q(x) of degree n – 2 and a remainder cx + d such that,

p(x) = (x^2 + ux + v) Q(x) + cx + d
A second division of Q(x) by x^2 + ux + v is performed to yield a quotient R(x)
of degree n – 4 and a remainder gx + h with,

Q(x) = (x^2 + ux + v) R(x) + gx + h
The variables c, d, g, h and the {bi}, {fi} are functions of u and v. They can
be found recursively by carrying out the two synthetic divisions on the coefficients.
The quadratic evenly divides the polynomial when,


c (u, v) = d (u, v) = 0
Values of u and v for which this occurs can be discovered by picking starting
values and iterating Newton’s method in two dimensions on the system
c(u, v) = d(u, v) = 0.
This continues until convergence occurs. This method to find the zeroes of
polynomials can thus be easily implemented with a programming language or even
a spreadsheet.
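A runnable sketch of this scheme is given below. One deliberate simplification: rather than the recursive formulas for the partial derivatives, this illustration estimates the Jacobian of (c, d) with respect to (u, v) by finite differences, which keeps the code short at the cost of a little accuracy; the function names and the step size h are our choices.

```python
def quad_div(a, u, v):
    """Divide polynomial a (descending coefficients) by x^2 + u*x + v.
    Returns (quotient coefficients, c, d) with remainder c*x + d."""
    b, b1, b2 = [], 0.0, 0.0
    for coef in a[:-2]:
        b0 = coef - u * b1 - v * b2
        b.append(b0)
        b2, b1 = b1, b0
    c = a[-2] - u * b1 - v * b2
    d = a[-1] - v * b1
    return b, c, d

def bairstow(a, u, v, eps=1e-8, maxit=100):
    """Adjust (u, v) by 2-D Newton until x^2 + u*x + v divides a."""
    h = 1e-7                     # finite-difference step (our choice)
    for _ in range(maxit):
        _, c, d = quad_div(a, u, v)
        if abs(c) < eps and abs(d) < eps:
            break
        _, cu, du = quad_div(a, u + h, v)
        _, cv, dv = quad_div(a, u, v + h)
        j11, j12 = (cu - c) / h, (cv - c) / h   # dc/du, dc/dv
        j21, j22 = (du - d) / h, (dv - d) / h   # dd/du, dd/dv
        det = j11 * j22 - j12 * j21
        u -= ( j22 * c - j12 * d) / det
        v -= (-j21 * c + j11 * d) / det
    return u, v

# Example 2 below: f(x) = 6x^5 + 11x^4 - 33x^3 - 33x^2 + 11x + 6,
# started from the normalized leading quadratic
u, v = bairstow([6, 11, -33, -33, 11, 6], 11/6, -33/6)
```

Once (u, v) has converged, the two roots of x^2 + ux + v are read off with the quadratic formula, and the quotient returned by quad_div is processed in the same way until it is linear or quadratic.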
Example 2: The task is to determine a pair of roots of the polynomial,
f (x) = 6x^5 + 11x^4 – 33x^3 – 33x^2 + 11x + 6
Solution: As the first quadratic polynomial we can use the normalized polynomial
formed from the leading three coefficients of f (x),
x^2 + (11/6) x – 33/6
The iteration then produces the sequence shown in the following table.


[Table: Iteration Steps of Bairstow’s Method]

After eight iterations the method produced a quadratic factor that contains
the roots –1/3 and –3 within the represented precision. The step length from the
fourth iteration on demonstrates the superlinear speed of convergence.

3.4 GRAEFFE’S ROOT SQUARING METHOD

In mathematics, Graeffe’s method or Dandelin–Lobachevsky–Graeffe method is
an algorithm typically used for finding all of the roots of a polynomial. It was
developed independently by Germinal Pierre Dandelin in 1826 and Lobachevsky
in 1834. In 1837 Karl Heinrich Gräffe also discovered the principal idea of the
method. The method separates the roots of a polynomial by squaring them
repeatedly. This squaring of the roots is done implicitly, that is, only working on
the coefficients of the polynomial. Finally, Viète’s formulas are used in order to
approximate the roots.

Dandelin–Graeffe Iteration
Let p(x) be a monic polynomial of degree n,

p(x) = (x – x1)(x – x2) . . . (x – xn)

Then,

p(–x) = (–1)^n (x + x1)(x + x2) . . . (x + xn)

Let q(x) be the polynomial which has the squares x1^2, x2^2, . . ., xn^2 as its
roots. Then we can write,

q(x^2) = (–1)^n p(x) p(–x)
Next, q(x) can now be computed by algebraic operations on the coefficients
of the polynomial p(x) alone. Let,

p(x) = x^n + a1 x^(n–1) + . . . + an  and  q(y) = y^n + b1 y^(n–1) + . . . + bn
Then the coefficients are related by,

bk = (–1)^k [ ak^2 + 2 Σ (–1)^j ak–j ak+j ],  k = 1, 2, . . ., n,

where the inner sum runs over j = 1, 2, . . ., min(k, n – k), with a0 = 1.
Graeffe observed that if one separates p(x) into its odd and even parts,

p(x) = pe(x^2) + x po(x^2),

we now obtain a simplified algebraic expression for q(x) of the form,

q(x) = (–1)^n [ pe(x)^2 – x po(x)^2 ]
This expression involves the squaring of two polynomials of only half the
degree, and is therefore used in most implementations of the method.
Iterating this procedure several times separates the roots with respect to
their magnitudes. Repeating k times gives a polynomial of degree n whose roots
are the powers

x1^(2^k), x2^(2^k), . . ., xn^(2^k).
If the magnitudes of the roots of the original polynomial were separated by
some factor ρ > 1, that is, |x1| ≥ ρ |x2| ≥ ρ^2 |x3| ≥ . . ., then the roots of the k-th
iterate are separated by the fast growing factor ρ^(2^k).
Next, the Vieta relations are used, as in the classical Graeffe method. If the
roots are sufficiently separated, say by a factor c > 1, then the iterated powers of
the roots are separated by the factor c^(2^k), which quickly becomes very big. The
coefficients of the iterated polynomial can then be approximated by their leading
term, for instance b1 ≈ –x1^(2^k) and b2 ≈ x1^(2^k) x2^(2^k), implying,

|x1|^(2^k) ≈ |b1|,  |x2|^(2^k) ≈ |b2 / b1|,  and in general |xm|^(2^k) ≈ |bm / bm–1|.
Finally, logarithms are used in order to find the absolute values of the roots
of the original polynomial. These magnitudes alone are already useful to generate
meaningful starting points for other root-finding methods.
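The squaring step and the final magnitude estimates can be sketched as follows (a minimal illustration; the function name and the demonstration polynomial, chosen with roots 1 and 2, are ours):

```python
def graeffe_step(a):
    """One Dandelin-Graeffe squaring. a holds the descending
    coefficients of p; the result holds those of the polynomial
    whose roots are the squares of p's roots."""
    n = len(a) - 1
    b = []
    for k in range(n + 1):
        s = a[k] * a[k]
        for j in range(1, min(k, n - k) + 1):
            s += 2 * (-1) ** j * a[k - j] * a[k + j]
        b.append((-1) ** k * s)
    return b

# p(x) = x^2 - 3x + 2 has roots 1 and 2
q = graeffe_step([1, -3, 2])          # y^2 - 5y + 4, roots 1 and 4

# After m squarings, |root_k| is approximately |b_k / b_(k-1)| ** (1/2^m)
a, m = [1, -3, 2], 5
for _ in range(m):
    a = graeffe_step(a)
mags = [abs(a[k] / a[k - 1]) ** (1 / 2**m) for k in range(1, len(a))]
# mags is close to [2.0, 1.0], the root magnitudes in decreasing order
```

Python’s arbitrary-precision integers absorb the rapid coefficient growth here; in fixed-precision arithmetic one would work with logarithms of the coefficients instead, as the text suggests.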

Check Your Progress


1. Why is the Birge-Vieta method used?
2. How the Bairstow’s approach uses Newton’s method for adjusting the
coefficients u and v in the quadratic x2 + ux + v?
3. Explain Graeffe’s method.

3.5 ANSWERS TO CHECK YOUR PROGRESS QUESTIONS

1. Birge-Vieta method is used for finding the real roots of a polynomial
equation.
2. Bairstow’s approach is to use Newton’s method to adjust the coefficients u
and v in the quadratic x2 + ux + v until its roots are also roots of the
polynomial being solved. The roots of the quadratic may then be determined,
and the polynomial may be divided by the quadratic to eliminate those roots.
This process is then iterated until the polynomial becomes quadratic or
linear, and all the roots have been determined.
3. Graeffe’s method or Dandelin–Lobachevsky–Graeffe method is an algorithm
typically used for finding all of the roots of a polynomial. It was developed
independently by Germinal Pierre Dandelin in 1826 and Lobachevsky in
1834. The method separates the roots of a polynomial by squaring them
repeatedly. This squaring of the roots is done implicitly, that is, only working
on the coefficients of the polynomial. Finally, Viète’s formulas are used in
order to approximate the roots.

3.6 SUMMARY

Birge-Vieta method is used for finding the real roots of a polynomial equation.
This method is based on an original method developed by the mathematicians
Birge and Vieta.
Finding, or approximating, all the roots of a polynomial equation is very
significant. In the field of science and engineering, there are numerous
applications which require the solutions of all roots of a polynomial equation
for a particular problem.
Newton-Raphson method is fundamentally used for finding the root of
algebraic and transcendental equations.
Since the rate of convergence of this method is quadratic, hence the Newton-
Raphson method can be used to find a root of a polynomial equation as
polynomial equation is an algebraic equation.
Birge-Vieta method is based on the Newton-Raphson method or this method
is a modified form of Newton-Raphson method.
Direct term-by-term evaluation is the most inefficient method of evaluating a
polynomial, because of the amount of computation involved and also because
of the possible growth of round-off errors. Thus there must be some proficient
and effective method for evaluating Pn(x) and P′n(x).
Vieta’s formulas relate the coefficients of a polynomial to the sums and
products of its roots, along with the products of the roots taken in groups;
that is, they define the association between the roots of a polynomial and its
coefficients.
As a general rule, for an nth degree polynomial, there are n different Vieta’s
formulas, which can be written in the condensed form
Σ ri1 ri2 . . . rik = (–1)^k an–k / an, for k = 1, 2, . . ., n.
In numerical analysis, Bairstow’s method is an efficient algorithm for finding
the roots of a real polynomial of arbitrary degree.
The algorithm was formulated by Leonard Bairstow. The algorithm finds
the roots in complex conjugate pairs using only real arithmetic.
Bairstow’s approach uses Newton’s method to adjust the coefficients u
and v in the quadratic x2 + ux + v until its roots are also roots of the
polynomial being solved.
The roots of the quadratic may then be determined, and the polynomial
may be divided by the quadratic to eliminate those roots. This process is
then iterated until the polynomial becomes quadratic or linear, and all the
roots have been determined.
In mathematics, Graeffe’s method or Dandelin–Lobachevsky–Graeffe
method is an algorithm typically used for finding all of the roots of a
polynomial. It was developed independently by Germinal Pierre Dandelin
in 1826 and Lobachevsky in 1834. In 1837 Karl Heinrich Gräffe also
discovered the principal idea of the method.
The Graeffe’s method separates the roots of a polynomial by squaring them
repeatedly. This squaring of the roots is done implicitly, that is, only working
on the coefficients of the polynomial. Finally, Viète’s formulas are used in
order to approximate the roots.
In Graeffe’s method, the logarithms are used in order to find the absolute
values of the roots of the original polynomial. These magnitudes alone are
already useful to generate meaningful starting points for other root-finding
methods.

3.7 KEY WORDS

Birge-Vieta method: This method is used for finding the real roots of a
polynomial equation.
Bairstow’s method: This is an efficient algorithm for finding the roots of a
real polynomial of arbitrary degree. The algorithm was formulated by Leonard
Bairstow for finding the roots in complex conjugate pairs using only real
arithmetic.
Graeffe’s method or Dandelin–Lobachevsky–Graeffe method: It is
an algorithm typically used for finding all of the roots of a polynomial.

3.8 SELF-ASSESSMENT QUESTIONS AND EXERCISES

Short-Answer Questions
1. Why is Birge-Vieta method used?
2. Define Bairstow’s method.
3. What is the significance of Graeffe’s root squaring method?
Long-Answer Questions
1. Briefly explain the Birge-Vieta method giving appropriate examples.
2. Find the root of x4 - 3x3 + 3x2 - 3x + 2 = 0 using Birge-Vieta method.
3. Explain the Bairstow’s method with the help of examples.
4. Using Bairstow’s method find all the roots of a given polynomial.
5. Discuss the Graeffe’s root squaring method giving appropriate examples.

3.9 FURTHER READINGS

Jain, M. K., S. R. K. Iyengar and R. K. Jain. 2007. Numerical Methods for
Scientific and Engineering Computation. New Delhi: New Age
International (P) Limited.
Atkinson, Kendall E. 1989. An Introduction to Numerical Analysis, 2nd Edition.
US: John Wiley & Sons.
Jain, M. K. 1983. Numerical Solution of Differential Equations. New Delhi:
New Age International (P) Limited.
Conte, Samuel D. and Carl de Boor. 1980. Elementary Numerical Analysis:
An Algorithmic Approach. New York: McGraw-Hill.
Skeel, Robert. D and Jerry B. Keiper. 1993. Elementary Numerical Computing
with Mathematica. New York: McGraw-Hill.
Balaguruswamy, E. 1999. Numerical Methods. New Delhi: Tata McGraw-Hill.
Datta, N. 2007. Computer Oriented Numerical Methods. New Delhi: Vikas
Publishing House Pvt. Ltd.

UNIT 4 SOLUTION OF SIMULTANEOUS LINEAR EQUATION
Structure
4.0 Introduction
4.1 Objectives
4.2 System of Linear Equations
4.2.1 Classical Methods
4.2.2 Elimination Methods
4.2.3 Iterative Methods
4.2.4 Computation of the Inverse of a Matrix by using Gaussian Elimination
Method
4.3 Answers to Check Your Progress Questions
4.4 Summary
4.5 Key Words
4.6 Self Assessment Questions and Exercises
4.7 Further Readings

4.0 INTRODUCTION

Many engineering and scientific problems require the solution of a system of
linear equations. The system of equations is termed as a homogeneous type if all
the elements in the column vector b are zero else the system is termed as a non-
homogeneous type. You will learn the method of computation to find the solution
of a system of n linear equations in n unknowns. Two types of efficient numerical
methods are used for computing solution of systems of equations, of which some
are direct methods and others are iterative in nature. In the direct method, Gaussian
elimination method is used while in the iterative method, Gauss-Seidel iteration
method is commonly used. You will learn the two forms of iteration methods termed
as Jacobi iteration method and Gauss-Seidel iteration method.
In this unit, you will study about the direct and iterative methods for solving
systems of simultaneous linear equations.

4.1 OBJECTIVES

After going through this unit, you will be able to:


Explain the system of linear equations
Understand Cramer’s rule
Explain Gaussian elimination method and Gauss-Jordan elimination method
Self-Instructional
66 Material
Define Jacobi iteration method and Gauss-Seidel iteration method
Compute inverse of a matrix using Gaussian elimination method

4.2 SYSTEM OF LINEAR EQUATIONS


Many engineering and scientific problems require the solution of a system of linear
equations. We consider a system of m linear equations in n unknowns written as,
a11 x1 + a12 x2 + a13 x3 + ... + a1n xn = b1
a21 x1 + a22 x2 + a23 x3 + ... + a2n xn = b2
a31 x1 + a32 x2 + a33 x3 + ... + a3n xn = b3
... ... ...
am1 x1 + am2 x2 + am3 x3 + ... + amn xn = bm        (4.1)
Using matrix notation, we can write the above system of equations in the
form,
Ax = b (4.2)
where A is an m × n matrix and x, b are respectively n- and m-component column
vectors given by,

      [ a11  a12  a13  ...  a1n ]       [ x1 ]       [ b1 ]
      [ a21  a22  a23  ...  a2n ]       [ x2 ]       [ b2 ]
  A = [ a31  a32  a33  ...  a3n ] , x = [ x3 ] , b = [ b3 ]        (4.3)
      [ ...  ...  ...  ...  ... ]       [ ... ]      [ ... ]
      [ am1  am2  am3  ...  amn ]       [ xn ]       [ bm ]

The system of equations is termed as a homogeneous one, if all the elements in
the column vector b are zero. Otherwise, the system is termed as a non-homogeneous one.
The homogeneous system has a non-trivial solution, if A is a square matrix, i.e.,
m = n, and the determinant of the coefficient matrix, i.e., |A| is equal to zero.
The solution of the non-homogeneous system exists, if the rank of the coefficient
matrix A is equal to the rank of the augmented matrix [A : b] given by,
            [ a11  a12  a13  ...  a1n   b1 ]
            [ a21  a22  a23  ...  a2n   b2 ]
  [A : b] = [ a31  a32  a33  ...  a3n   b3 ]
            [ ...  ...  ...  ...  ...  ... ]
            [ am1  am2  am3  ...  amn   bm ]

Further, a unique non-trivial solution of the system given by Equation (4.1)
exists when m = n and the determinant | A | ≠ 0 , i.e., the coefficient matrix is a
square non-singular matrix. The computation of the solution of a system of n linear
equations in n unknowns can be made by any one of the two classical methods
Self-Instructional
Material 67
known as the Cramer's rule and the matrix inversion method. But these two
methods are not suitable for numerical computation, since both the methods require
the evaluation of determinants. There are two types of efficient numerical methods
for computing solution of systems of equations. Some are direct methods and
others are iterative in nature. Among the direct methods, Gaussian elimination
method is most commonly used. Among the iterative methods, Gauss-Seidel
iteration method is very commonly used.

4.2.1 Classical Methods


Cramer’s Rule: Let D = |A| be the determinant of the coefficient matrix A and Di
be the determinant obtained by replacing the ith column of D by the column vector
b. The Cramer’s rule gives the solution vector x by the equations,
        xi = Di / D ,  for i = 1, 2, ..., n                (4.4)
Thus we have to compute (n + 1) determinants of order n.
Example 1: Use Cramer’s rule to solve the following system:

 2 −3 1   x1   1 
 3 1 −1  x  =  
   2   2
1 −1 −1  x3  1 

Solution: The determinant D of the coefficient matrix is,

2 − 3 1 
D = 3 1 − 1 = 2(−1 − 1) − 3(−1 + 3) + (−3 − 1) = −14
1 − 1 − 1

The determinants D1, D2 and D3 are,

       | 1  −3   1 |
  D1 = | 2   1  −1 | = (−1 − 1) − 3(−1 + 2) + (−2 − 1) = −8
       | 1  −1  −1 |

       | 2  1   1 |
  D2 = | 3  2  −1 | = 2(−2 + 1) + (−1 + 3) + (3 − 2) = 1
       | 1  1  −1 |

       | 2  −3  1 |
  D3 = | 3   1  2 | = 2(1 + 2) − 3(2 − 3) + (−3 − 1) = 5
       | 1  −1  1 |
Hence by Cramer’s rule, we get
  x1 = D1/D = −8/(−14) = 4/7 ,  x2 = D2/D = −1/14 ,  x3 = D3/D = −5/14
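The computation of Example 1 can be checked with a short program; a minimal sketch in Python using exact fractions (the helper names det3 and cramer3 are illustrative, not from the text):

```python
from fractions import Fraction

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def cramer3(a, b):
    """Solve a 3x3 system Ax = b by Cramer's rule; A must be non-singular."""
    d = det3(a)
    xs = []
    for i in range(3):
        # D_i: replace the i-th column of A by the column vector b
        ai = [row[:] for row in a]
        for r in range(3):
            ai[r][i] = b[r]
        xs.append(Fraction(det3(ai), d))
    return xs

a = [[2, -3, 1], [3, 1, -1], [1, -1, -1]]
b = [1, 2, 1]
print(cramer3(a, b))   # [Fraction(4, 7), Fraction(-1, 14), Fraction(-5, 14)]
```

Note that Cramer's rule evaluates n + 1 determinants, which is why it is avoided for large systems.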
4.2.2 Elimination Methods
Matrix Inversion Method: Let A–1 be the inverse of the matrix A defined by,
        A–1 = Adj A / |A|                    (4.5)
where Adj A is the adjoint matrix obtained by transposing the matrix of the cofactors
of the elements aij of the determinant of the coefficient matrix A.
Thus,
          [ A11  A21  .....  An1 ]
  Adj A = [ A12  A22  .....  An2 ]            (4.6)
          [ ....  .....  .....  ..... ]
          [ A1n  A2n  .....  Ann ]
Aij being the cofactor of aij.
Then the solution of the system is given by,
        x = A–1 b                    (4.7)
Note: If the rank of the coefficient matrix of a system of linear equations in n
unknowns is less than n, then there are more unknowns than the number of
independent equations. In such a case, the system has an infinite set of solutions.
Example 2: Solve the given system of equations by matrix inversion method:

1 1 1   x1   4 
 2 −1 3  x  =  
   2  1 
 3 2 −1  x3  1 

Solution: For solving the system of equations by matrix inversion method we first
compute the determinant of the coefficient matrix,
        | 1   1   1 |
  |A| = | 2  −1   3 | = 13
        | 3   2  −1 |
Since | A | ≠ 0 , the matrix A is non-singular and A–1 exists. We now compute
the adjoint matrix,
          [ −5   3   4 ]                                    [ −5   3   4 ]
  Adj A = [ 11  −4  −1 ] .  Thus,  A–1 = (Adj A)/|A| = (1/13) [ 11  −4  −1 ]
          [  7   1  −3 ]                                    [  7   1  −3 ]

Hence, the solution by matrix inversion method gives,

      [ x1 ]                  [ −5   3   4 ] [ 4 ]          [ −13 ]   [ −1 ]
  x = [ x2 ] = A–1 b = (1/13) [ 11  −4  −1 ] [ 1 ] = (1/13) [  39 ] = [  3 ]
      [ x3 ]                  [  7   1  −3 ] [ 1 ]          [  26 ]   [  2 ]
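The adjoint-based computation can be checked in code; a minimal sketch in Python for the 3 × 3 case with exact fractions (cofactor and inverse3 are illustrative names, not from the text):

```python
from fractions import Fraction

def cofactor(m, i, j):
    """Signed minor A_ij of entry (i, j) of a 3x3 matrix."""
    rows = [r for r in range(3) if r != i]
    cols = [c for c in range(3) if c != j]
    minor = (m[rows[0]][cols[0]] * m[rows[1]][cols[1]]
           - m[rows[0]][cols[1]] * m[rows[1]][cols[0]])
    return (-1) ** (i + j) * minor

def inverse3(m):
    """Inverse of a non-singular 3x3 matrix via the adjugate formula."""
    d = (m[0][0] * cofactor(m, 0, 0) + m[0][1] * cofactor(m, 0, 1)
       + m[0][2] * cofactor(m, 0, 2))
    # Adj A is the transpose of the cofactor matrix
    return [[Fraction(cofactor(m, j, i), d) for j in range(3)] for i in range(3)]

A = [[1, 1, 1], [2, -1, 3], [3, 2, -1]]
b = [4, 1, 1]
Ainv = inverse3(A)
x = [sum(Ainv[i][j] * b[j] for j in range(3)) for i in range(3)]
print(x)   # [Fraction(-1, 1), Fraction(3, 1), Fraction(2, 1)]
```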
Gaussian Elimination Method: This method consists in systematic elimination
of the unknowns so as to reduce the coefficient matrix into an upper triangular
system, which is then solved by the procedure of back-substitution. To understand
the procedure for a system of three equations in three unknowns, consider the
following system of equations:
a11 x1 + a12 x2 + a13 x3 = b1 (4.8(a))
a 21 x1 + a22 x2 + a23 x3 = b2 (4.8(b))
a31 x1 + a32 x2 + a33 x3 = b3 (4.8(c))
We have to first eliminate x1 from the last two equations and then eliminate x2
from the last equation.
In order to eliminate x1 from the second equation we multiply the Equation
(4.8(a)) by − a 21 / a11 = m2, and add to the second equation. Similarly, for elimination
of x1 from the third Equation (4.8(c)) we have to multiply the first Equation (4.8(a))
by − a31 / a11 = m3 , and add to the last Equation (4.8(c)). We would then have the
following two equations from them:
a22(1) x2 + a23(1) x3 = b2(1)        (4.9(a))
a32(1) x2 + a33(1) x3 = b3(1)        (4.9(b))
where  a22(1) = a22 + m2 a12 ,  a23(1) = a23 + m2 a13 ,  b2(1) = b2 + m2 b1
       a32(1) = a32 + m3 a12 ,  a33(1) = a33 + m3 a13 ,  b3(1) = b3 + m3 b1

Again for eliminating x2 from the last of the above two equations, we multiply
the first Equation (4.9(a)) by m4 = −a32(1)/a22(1), and add to the second Equation
(4.9(b)), which would give the equation,
        a33(2) x3 = b3(2)                    (4.10)
where  a33(2) = a33(1) + m4 a23(1) ,  b3(2) = b3(1) + m4 b2(1)

Thus by systematic elimination we get the triangular system given below,


a11 x1 + a12 x2 + a13 x3 = b1                (4.11(a))
         a22(1) x2 + a23(1) x3 = b2(1)       (4.11(b))
                     a33(2) x3 = b3(2)       (4.11(c))
It is now easy to solve the unknowns by back-substitution as stated below:
We solve for x3 from Equation (4.11(c)), then solve for x2 from Equation
(4.11(b)) and finally solve for x1 from Equation (4.11(a)). This systematic Gaussian
elimination procedure can be written in matrix notation in a compact form, as
shown below.
(i) We write the augmented matrix, [A : b] and the multipliers on the left.

                        [ a11  a12  a13   b1 ]
    m2 = −a21/a11       [ a21  a22  a23   b2 ]    Perform row operations
    m3 = −a31/a11       [ a31  a32  a33   b3 ]    R2 + m2 R1 and R3 + m3 R1

(ii) Then we write the transformed 2nd and 3rd rows after the elimination of x1 by
row operations [(m2 × 1st row + 2nd row) and (m3 × 1st row + 3rd row)] as
new 2nd and 3rd rows along with the multiplier on the left.

                              [ a11  a12     a13      b1    ]
    m4 = −a32(1)/a22(1)       [      a22(1)  a23(1)   b2(1) ]    Perform
                              [      a32(1)  a33(1)   b3(1) ]    R3 + m4 R2
(iii) Finally, we get the upper triangular transformed augmented matrix as given
below.

        [ a11  a12     a13      b1    ]
        [      a22(1)  a23(1)   b2(1) ]        (4.12)
        [              a33(2)   b3(2) ]

Notes:
1. The above procedure can be easily extended to a system of n unknowns, in
which case, we have to perform a total of (n–1) steps for the systematic
elimination to get the final upper triangular matrix.
2. The condition to be satisfied for using this elimination is that the first diagonal
elements at each step must not be zero. These diagonal elements
[a11, a22(1), a33(2), etc.] are called pivots. If the pivot is zero at any stage, the method
fails. However, we can rearrange the rows so that none of the pivots is zero,
at any stage.
Example 3: Solve the following system by Gauss elimination method:

x1 + 2x2 + x3 = 0
2x1 + 2x2 + 3x3 = 3
−x1 − 3x2 = 2

Show the computations by augmented matrix representation.


Solution: The augmented matrix of the system is,
1 2 1 : 0
2 2 3 : 3

− 1 − 3 0 : 2
Step 1: For elimination of x1 from the 2nd and 3rd equations we multiply the first
equation by –2 and 1 successively and add them to the 2nd and 3rd equation. The
result is shown in the augmented matrix below.

NOTES 1 2 1 : 0 
− 2 0 − 2 1 : 3
1 0 − 1 1 : 2

Step 2: For elimination of x2 from the third equation we multiply the second
1
equation by − and add it to the third equation. The result is shown in the augmented
2
matrix below.
1 2 1 : 0 
− 1 / 20 − 2 1 : 3 

0 0 1 / 2 : 1 / 2

Step 3: The upper triangular system is now solved by back-substitution, giving


x1 = 1, x2 = –1, x3 = 1
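The elimination and back-substitution steps can be sketched in Python (gauss_solve is an illustrative name, not from the text); run on the system of Example 3 it reproduces x1 = 1, x2 = –1, x3 = 1:

```python
def gauss_solve(a, b):
    """Solve Ax = b by Gaussian elimination followed by back-substitution.

    A sketch for small well-behaved systems; rows are rearranged only
    when a zero pivot is met, as the Notes above suggest.
    """
    n = len(a)
    m = [row[:] + [bi] for row, bi in zip(a, b)]    # augmented matrix [A : b]
    for k in range(n - 1):
        if m[k][k] == 0:                            # rearrange rows: non-zero pivot
            swap = next(r for r in range(k + 1, n) if m[r][k] != 0)
            m[k], m[swap] = m[swap], m[k]
        for i in range(k + 1, n):
            factor = -m[i][k] / m[k][k]             # the multiplier m_i
            for j in range(k, n + 1):
                m[i][j] += factor * m[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):                  # back-substitution
        s = sum(m[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (m[i][n] - s) / m[i][i]
    return x

print(gauss_solve([[1, 2, 1], [2, 2, 3], [-1, -3, 0]], [0, 3, 2]))
# [1.0, -1.0, 1.0]
```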
Gauss-Jordan Elimination Method: The Gauss-Jordan elimination method is
a variation of the Gaussian elimination method. In this method, the augmented
coefficient matrix is transformed by row operations such that the coefficient matrix
reduces to the identity matrix. The solution of the system is then directly obtained
as the reduced augmented column of the transformed augmented matrix. We explain
the method with a system of three equations given by,
  [ a11  a12  a13 ] [ x1 ]   [ b1 ]
  [ a21  a22  a23 ] [ x2 ] = [ b2 ]        (4.13)
  [ a31  a32  a33 ] [ x3 ]   [ b3 ]
The augmented matrix is,

The augmented matrix is,

  [ a11  a12  a13  :  b1 ]
  [ a21  a22  a23  :  b2 ]        (4.14)
  [ a31  a32  a33  :  b3 ]
We assume that a11 is non-zero. If, however, a11 is zero, we can interchange
rows so that a11 is non-zero in the resulting system.
The first step is to divide the first row by a11 and then eliminating x1 from 2nd
and 3rd equations by row operations of multiplying the reduced first row by a21
and subtracting from the second row and next multiplying the reduced first row by
a31 and subtracting from the third row. This is shown in matrix transformations
given below.
Solution of Simultaneous
 a11 a12 a13 : b1   1 a12 ′ ′ : b1′ 
a13 R2 − R1 a21 1 a12′ ′ : b1′ 
a13 Linear Equation
a  →
 21 a22 a23 : b2  R / a a
 21 a22

a′23 : b2   →  0 a′ a′23 : b2′ 
1 11 R3 − R1 a31  22

 a31 a32 a33 : b3   a31 a32 ′ : b3 


a33 0 a32
′ ′ : b3′ 
a33

where, NOTES

′ = a12 / a11 , a13


a12 ′ = a13 / a11 , b1′ = b1 / a11
a ′22 = a 22 − a 21 a12
′ , a 23
′ = a 23 − a 21 a13
′ , b2′ = b2 − a21b1′
′ = a32 − a31 a12
a32 ′ , a33
′ = a33 − a31 a13
′ , b3′ = b3 − a31b1′

Now considering a22′ as the non-zero pivot, we first divide the second row by
a22′ and then multiply the reduced second row by a12′ and subtract it from the first
row and also multiply the reduced second row by a32′ and subtract it from the
third row. The operations are shown below in matrix notation.

  [ 1  a12′  a13′  :  b1′ ]             [ 1  a12′  a13′  :  b1′ ]  R1 − a12′ R2   [ 1  0  a13′′  :  b1′′ ]
  [ 0  a22′  a23′  :  b2′ ]  R2/a22′ →  [ 0   1    a23′′ :  b2′′ ]      →         [ 0  1  a23′′  :  b2′′ ]
  [ 0  a32′  a33′  :  b3′ ]             [ 0  a32′  a33′  :  b3′ ]  R3 − a32′ R2   [ 0  0  a33′′  :  b3′′ ]

where
a23′′ = a23′/a22′ ,  b2′′ = b2′/a22′
a13′′ = a13′ − a12′ a23′′ ,  b1′′ = b1′ − a12′ b2′′
a33′′ = a33′ − a32′ a23′′ ,  b3′′ = b3′ − a32′ b2′′

Finally, the third row elements are divided by a33′′ and then the reduced third
row is multiplied by a13′′ and subtracted from the first row and also the reduced
third row is multiplied by a23′′ and subtracted from the second row. This is again
shown in matrix notation below.

  [ 1  0  a13′′  :  b1′′ ]              [ 1  0  a13′′  :  b1′′  ]  R1 − a13′′ R3   [ 1  0  0  :  b1′′′ ]
  [ 0  1  a23′′  :  b2′′ ]  R3/a33′′ →  [ 0  1  a23′′  :  b2′′  ]       →          [ 0  1  0  :  b2′′′ ]
  [ 0  0  a33′′  :  b3′′ ]              [ 0  0   1     :  b3′′′ ]  R2 − a23′′ R3   [ 0  0  1  :  b3′′′ ]

where b1′′′ = b1′′ − a13′′ b3′′′ ,  b2′′′ = b2′′ − a23′′ b3′′′ ,  b3′′′ = b3′′/a33′′

Finally, the solution of the system is given by the reduced augmented column,
i.e., x1 = b1′′′, x2 = b2′′′ and x3 = b3′′′.
We illustrate the elimination procedure with an example using augmented matrix,

 2 2 4 : 18 
1 3 2 : 13
 
 3 1 3 : 14 

First, we divide the first row by 2, then subtract the reduced first row from the 2nd
row, and also multiply the reduced first row by 3 and subtract from the third. The
results are shown below:

  [ 2  2  4  :  18 ]           [ 1  1  2  :   9 ]  R2 − R1     [ 1   1   2  :    9 ]
  [ 1  3  2  :  13 ]  R1/2 →   [ 1  3  2  :  13 ]  R3 − 3R1 →  [ 0   2   0  :    4 ]
  [ 3  1  3  :  14 ]           [ 3  1  3  :  14 ]              [ 0  −2  −3  :  −13 ]

Next considering the 2nd row, we reduce the second column to [0, 1, 0]T by row
operations shown below:

  [ 1   1   2  :    9 ]           [ 1   1   2  :    9 ]  R1 − R2     [ 1  0   2  :   7 ]
  [ 0   2   0  :    4 ]  R2/2 →   [ 0   1   0  :    2 ]  R3 + 2R2 →  [ 0  1   0  :   2 ]
  [ 0  −2  −3  :  −13 ]           [ 0  −2  −3  :  −13 ]              [ 0  0  −3  :  −9 ]

Finally, dividing the third row by –3 and then subtracting from the first row the
elements of the third row multiplied by 2, the result is shown below:

1 0 2 : 7  1 0 2 : 7 1 0 0 : 1 
0 1 0 : 2  R 3 /( −3) 
0 1 0 : 2  R1− 2 R3  
     →      → 0 1 0 : 2
0 0 − 3 : − 9 0 0 1 : 3 0 0 1 : 3

Hence the solution of the system is x1 = 1, x2 = 2, x3 = 3.


Example 4: Solve the following system by Gauss-Jordan elimination method:

3 18 9  x1   18 
2 3 3  x  = 117 
   2  
4 1 2  x3  283

Solution: We consider the augmented matrix and solve the system by Gauss-
Jordan elimination method. The computations are shown in compact matrix notation
as given below. The augmented matrix is,

3 18 9 : 18 
2 3 3 : 117 
 
4 1 2 : 283

Step 1: The pivot is 3 in the first column. The first column is transformed into
[1, 0, 0]T by row operations shown below:

  [ 3  18  9  :   18 ]           [ 1  6  3  :    6 ]  R2 − 2R1    [ 1    6    3  :    6 ]
  [ 2   3  3  :  117 ]  R1/3 →   [ 2  3  3  :  117 ]  R3 − 4R1 →  [ 0   −9   −3  :  105 ]
  [ 4   1  2  :  283 ]           [ 4  1  2  :  283 ]              [ 0  −23  −10  :  259 ]

Step 2: The second column is transformed into [0, 1, 0]T by row operations shown
below:

  [ 1    6    3  :    6 ]            [ 1    6     3   :     6  ]  R1 − 6R2      [ 1  0    1   :    76  ]
  [ 0   −9   −3  :  105 ]  −R2/9 →   [ 0    1    1/3  :  −35/3 ]  R3 + 23R2 →   [ 0  1   1/3  :  −35/3 ]
  [ 0  −23  −10  :  259 ]            [ 0  −23   −10   :   259  ]                [ 0  0  −7/3  :  −28/3 ]

Step 3: The third column is transformed into [0, 0, 1]T by row operations shown
below:

  [ 1  0    1   :    76  ]                [ 1  0    1   :    76  ]  R1 − R3       [ 1  0  0  :   72 ]
  [ 0  1   1/3  :  −35/3 ]  R3/(−7/3) →   [ 0  1   1/3  :  −35/3 ]  R2 − R3/3 →   [ 0  1  0  :  −13 ]
  [ 0  0  −7/3  :  −28/3 ]                [ 0  0    1   :     4  ]                [ 0  0  1  :    4 ]
Hence the solution of the system is x1 = 72, x2 = –13, x3 = 4.
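The Gauss-Jordan reduction can be sketched in Python; exact fractions are used to avoid round-off, and the pivot is assumed non-zero at each step (gauss_jordan_solve is an illustrative name, not from the text):

```python
from fractions import Fraction

def gauss_jordan_solve(a, b):
    """Solve Ax = b by Gauss-Jordan elimination: reduce [A : b] by row
    operations until the coefficient part becomes the identity matrix."""
    n = len(a)
    m = [[Fraction(v) for v in row] + [Fraction(bi)] for row, bi in zip(a, b)]
    for k in range(n):
        p = m[k][k]                        # pivot (assumed non-zero here)
        m[k] = [v / p for v in m[k]]       # scale the pivot row so the pivot is 1
        for i in range(n):
            if i != k:                     # clear column k in every other row
                f = m[i][k]
                m[i] = [vi - f * vk for vi, vk in zip(m[i], m[k])]
    return [row[n] for row in m]           # the solution is the augmented column

print(gauss_jordan_solve([[3, 18, 9], [2, 3, 3], [4, 1, 2]], [18, 117, 283]))
# [Fraction(72, 1), Fraction(-13, 1), Fraction(4, 1)]
```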

4.2.3 Iterative Methods


We can use iteration methods to solve a system of linear equations when the
coefficient matrix is diagonally dominant. This is ensured by the set of sufficient
conditions given as follows,
          n
          Σ     | aij | < | aii | ,  for i = 1, 2, ..., n        (4.15)
       j=1, j≠i

An alternative set of sufficient conditions is,

          n
          Σ     | aij | < | ajj | ,  for j = 1, 2, ..., n        (4.16)
       i=1, i≠j
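Condition (4.15) can be checked mechanically before choosing an iterative method; a minimal Python sketch (the function name is_diagonally_dominant is an assumption, not from the text):

```python
def is_diagonally_dominant(a):
    """Row-wise check of condition (4.15): |a_ii| must exceed the sum of the
    absolute values of the other entries in row i, for every row."""
    n = len(a)
    return all(abs(a[i][i]) > sum(abs(a[i][j]) for j in range(n) if j != i)
               for i in range(n))

print(is_diagonally_dominant([[10, -2, -1, -1],
                              [-2, 10, -1, -1],
                              [-1, -1, 10, -2],
                              [-1, -1, -2, 10]]))   # True
```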

There are two forms of iteration methods termed as Jacobi iteration method
and Gauss-Seidel iteration method.
Jacobi Iteration Method: Consider a system of n linear equations,

a11 x1 + a12 x2 + a13 x3 + ........ + a1n xn = b1
a21 x1 + a22 x2 + a23 x3 + ........ + a2n xn = b2
a31 x1 + a32 x2 + a33 x3 + ........ + a3n xn = b3
..................................................................
an1 x1 + an2 x2 + an3 x3 + ........ + ann xn = bn

The diagonal elements aii, i = 1, 2, ..., n are non-zero and satisfy the set of
sufficient conditions stated earlier. When the system of equations do not satisfy
these conditions, we may rearrange the system in such a way that the conditions
hold.
In order to apply the iteration we rewrite the equations in the following form:

x1 = (b1 − a12 x2 − a13 x3 − ... − a1n xn) / a11
x2 = (b2 − a21 x1 − a23 x3 − ... − a2n xn) / a22
x3 = (b3 − a31 x1 − a32 x2 − ... − a3n xn) / a33
............................................................
xn = (bn − an1 x1 − an2 x2 − ... − an,n−1 xn−1) / ann

To start the iteration we make an initial guess of the unknowns as
x1(0), x2(0), x3(0), ..., xn(0) (the initial guess may be taken to be zero).
Successive approximations are computed using the equations,

x1(k+1) = (b1 − a12 x2(k) − a13 x3(k) − ... − a1n xn(k)) / a11
x2(k+1) = (b2 − a21 x1(k) − a23 x3(k) − ... − a2n xn(k)) / a22
x3(k+1) = (b3 − a31 x1(k) − a32 x2(k) − ... − a3n xn(k)) / a33        (4.17)
.................................................................
xn(k+1) = (bn − an1 x1(k) − an2 x2(k) − ... − an,n−1 xn−1(k)) / ann

where k = 0, 1, 2, ...
The iterations are continued till the desired accuracy is achieved. This is checked
by the relations,
        | xi(k+1) − xi(k) | < ε ,  for i = 1, 2, ..., n        (4.18)

Jacobi Iterative Algorithm

Choose an initial guess x(0) to the solution x.
for k = 1, 2, …
    for i = 1, 2, …, n
        xi = 0
        for j = 1, 2, …, i – 1, i + 1, …, n
            xi = xi + ai,j xj(k–1)
        end
        xi = (bi – xi) / ai,i
    end
    x(k) = x
    check convergence; continue if necessary
end
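A Jacobi sweep following the algorithm above can be sketched in Python; the test system is a small diagonally dominant 4 × 4 system whose solution is (1, 2, 3, 0), and the function name jacobi is illustrative, not from the text:

```python
def jacobi(a, b, x0, tol=1e-6, maxit=100):
    """Jacobi iteration: every component of the new iterate is computed
    from the previous iterate only, as in the algorithm above."""
    n = len(a)
    x = list(x0)
    for _ in range(maxit):
        new = [(b[i] - sum(a[i][j] * x[j] for j in range(n) if j != i)) / a[i][i]
               for i in range(n)]
        if max(abs(new[i] - x[i]) for i in range(n)) < tol:   # test (4.18)
            return new
        x = new
    return x

a = [[10, -2, -1, -1], [-2, 10, -1, -1], [-1, -1, 10, -2], [-1, -1, -2, 10]]
b = [3, 15, 27, -9]
x = jacobi(a, b, [0, 0, 0, 0])
print([round(v, 4) for v in x])
```

Because the matrix is strictly diagonally dominant, the sweep contracts the error by at least the factor 0.4 per iteration, so convergence from the zero vector is guaranteed.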
Gauss-Seidel Iteration Method: This is a simple modification of the Jacobi
iteration. In this method, at any stage of iteration of the system, the improved
values of the unknowns are used for computing the components of the unknown
vector. The iteration equations given below are used in this method,
x1(k+1) = (b1 − a12 x2(k) − a13 x3(k) − ... − a1n xn(k)) / a11
x2(k+1) = (b2 − a21 x1(k+1) − a23 x3(k) − ... − a2n xn(k)) / a22
x3(k+1) = (b3 − a31 x1(k+1) − a32 x2(k+1) − ... − a3n xn(k)) / a33        (4.19)
.................................................................
xn(k+1) = (bn − an1 x1(k+1) − an2 x2(k+1) − ... − an,n−1 xn−1(k+1)) / ann

It is clear from above that for computing x2(k+1), the improved value x1(k+1)
is used instead of x1(k); and for computing x3(k+1), the improved values x1(k+1) and
x2(k+1) are used. Finally, for computing xn(k+1), improved values of all the components
x1(k+1), x2(k+1), ..., xn−1(k+1) are used. Further, as in the Jacobi iteration, the iterations are
continued till the desired accuracy is achieved.
Example 5: Solve the following system by Gauss-Seidel iterative method correct
upto four significant digits.
10 x1 − 2 x2 − x3 − x4 = 3
− 2 x1 + 10 x2 − x3 − x4 = 15
− x1 − x2 + 10 x3 − 2 x4 = 27
− x1 − x2 − 2 x3 + 10 x4 = −9

Solution: The given system clearly has a diagonally dominant coefficient matrix,
i.e.,  | aii | ≥ Σ j≠i | aij | ,  i = 1, 2, ..., n

Hence, we can employ Gauss-Seidel iteration method, for which we rewrite
the system as,
x1(k+1) = 0.3 + 0.2 x2(k) + 0.1 x3(k) + 0.1 x4(k)
x2(k+1) = 1.5 + 0.2 x1(k+1) + 0.1 x3(k) + 0.1 x4(k)
x3(k+1) = 2.7 + 0.1 x1(k+1) + 0.1 x2(k+1) + 0.2 x4(k)
x4(k+1) = −0.9 + 0.1 x1(k+1) + 0.1 x2(k+1) + 0.2 x3(k+1)
We start the iteration with,
x1(0) = 0.3, x2(0) = 1.5, x3(0) = 2.7, x4(0) = −0.9

The results of successive iterations are given in the table below.
k      x1       x2       x3       x4
1    0.72     1.824    2.774   –0.0196
2    0.9403   1.9635   2.9864  –0.0125
3    0.9901   1.9954   2.9960  –0.0023
4    0.9984   1.9990   2.9993  –0.0004
5    0.9997   1.9998   2.9998  –0.0003
6    0.9998   1.9998   2.9998  –0.0003
7    1.0000   2.0000   3.0000   0.0000

Hence the solution correct to four significant figures is x1 = 1.000, x2 = 2.000,
x3 = 3.000, x4 = 0.000.
Example 6: Solve the following system by Gauss-Seidel iteration method.
20x1 + 2x2 + x3 = 30
x1 − 40x2 + 3x3 = −75
2x1 − x2 + 10x3 = 30

Give the solution correct upto three significant figures.


Solution: It is evident that the coefficient matrix is diagonally dominant and the
sufficient conditions for convergence of the Gauss-Seidel iterations are satisfied,
since
| a11 | = 20 ≥ | a12 | + | a13 | = 3
| a 22 | = 40 ≥ | a 21 | + | a23 | = 4
| a33 | = 10 ≥ | a31 | + | a32 | = 3
For starting the iterations, we rewrite the equations as,
x1 = (30 − 2x2 − x3) / 20
x2 = (75 + x1 + 3x3) / 40
x3 = (30 − 2x1 + x2) / 10
The initial approximate solution is taken as,
x1(0) = 1.5, x2(0) = 2.0, x3(0) = 3.0

The first iteration gives,


x1(1) = (30 − 2 × 2.0 − 3.0) / 20 = 1.15
x2(1) = (75 + 1.15 + 3 × 3.0) / 40 = 2.14
x3(1) = (30 − 2 × 1.15 + 2.14) / 10 = 2.98

The second iteration gives,
x1(2) = (30 − 2 × 2.14 − 2.98) / 20 = 1.137
x2(2) = (75 + 1.137 + 3 × 2.98) / 40 = 2.127
x3(2) = (30 − 2 × 1.137 + 2.127) / 10 = 2.986
The third iteration gives,
x1(3) = (30 − 2 × 2.127 − 2.986) / 20 = 1.138
x2(3) = (75 + 1.138 + 3 × 2.986) / 40 = 2.127
x3(3) = (30 − 2 × 1.138 + 2.127) / 10 = 2.985
Thus the solution correct to three significant digits can be written as x1 = 1.14,
x2 = 2.13, x3 = 2.98.
Example 7: Solve the following system correct to three significant digits, using
Jacobi iteration method.
10x1 + 8x2 − 3x3 + x4 = 16
3x1 − 4x2 + 10x3 + x4 = 10
2x1 + 10x2 + x3 − 4x4 = 9
2x1 + 2x2 − 3x3 + 10x4 = 11
Solution: The system is first rearranged so that the coefficient matrix is diagonally
dominant. The equations are rewritten for starting Jacobi iteration as,
x1(k+1) = 1.6 − 0.8 x2(k) + 0.3 x3(k) − 0.1 x4(k)
x2(k+1) = 0.9 − 0.2 x1(k) − 0.1 x3(k) + 0.4 x4(k)
x3(k+1) = 1.0 − 0.3 x1(k) + 0.4 x2(k) − 0.1 x4(k)
x4(k+1) = 1.1 − 0.2 x1(k) − 0.2 x2(k) + 0.3 x3(k) ,  where k = 0, 1, 2, ...
The initial guess of solution is taken as,
x1(0) = 1.6, x2(0) = 0.9, x3(0) = 1.0, x4(0) = 1.1
The results of successive iterations computed by Jacobi iterations are given in
the following table:
k x1 x2 x3 x4
1 1.07 0.92 0.77 0.90
2 1.050 0.969 0.957 0.933
3 1.0186 0.9765 0.9928 0.9923
4 1.0174 0.9939 0.9858 0.9989
5 0.9997 0.9975 0.9925 0.9974
6 1.0001 0.9997 0.9994 0.9984
7 1.0002 0.9998 1.0001 0.9999

Thus the solution correct to three significant digits is x1 = 1.000, x2 = 1.000,
x3 = 1.000, x4 = 1.000.
Algorithm: Solution of a system of equations by Gauss-Seidel iteration method.
Step 1: Input elements aij of augmented matrix for i = 1 to n, j = 1 to n + 1.
Step 2: Input epsilon, maxit [epsilon is desired accuracy, maxit is maximum
number of iterations]
Step 3: Set xi = 0, for i = 1 to n
Step 4: Set big = 0, sum = 0, j = 1, k = 1, iter = 0
Step 5: Check if k ≠ j, set sum = sum + ajk xk
Step 6: Check if k < n, set k = k + 1, go to Step 5 else go to next step
Step 7: Compute temp = (aj,n+1 – sum) / ajj
Step 8: Compute relerr = abs (xj – temp) / temp
Step 9: Check if big < relerr then big = relerr
Step 10: Set xj = temp
Step 11: Set j = j + 1, k = 1
Step 12: Check if j ≤ n go to Step 5 else go to next step
Step 13: Check if relerr < epsilon then {write 'iterations converge', go to
         Step 15} else if iter < maxit then {set iter = iter + 1, go to Step 4}
Step 14: Write 'iterations do not converge in', maxit, 'iterations'
Step 15: Write xj for j = 1 to n
Step 16: End
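The stepwise algorithm above can be sketched in Python; gauss_seidel is an illustrative name, and the absolute change big is used as a simplified stand-in for the relative error relerr of Step 8. The test system is the (corrected) system of Example 6:

```python
def gauss_seidel(a, b, x0=None, eps=1e-6, maxit=100):
    """Gauss-Seidel iteration following the steps above: each improved value
    overwrites the old one immediately, within the same sweep."""
    n = len(a)
    x = list(x0) if x0 is not None else [0.0] * n
    for _ in range(maxit):
        big = 0.0
        for i in range(n):
            s = sum(a[i][j] * x[j] for j in range(n) if j != i)
            temp = (b[i] - s) / a[i][i]
            big = max(big, abs(x[i] - temp))   # largest change in this sweep
            x[i] = temp
        if big < eps:                          # iterations converge
            return x
    return x

a = [[20, 2, 1], [1, -40, 3], [2, -1, 10]]   # system of Example 6
b = [30, -75, 30]
x = gauss_seidel(a, b)
print([round(v, 3) for v in x])   # [1.138, 2.127, 2.985]
```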

4.2.4 Computation of the Inverse of a Matrix by using Gaussian
Elimination Method
The inverse matrix B of a given square matrix A satisfies the relation,
        A . B = I
where I is the unit matrix of the same order as that of A. In order to determine the
elements bij of the matrix B, we can employ row operations as in Gaussian
elimination. We explain the method for a 3 × 3 matrix as given below. We can
write the above relation in detail as,
  [ a11  a12  a13 ] [ b11  b12  b13 ]   [ 1  0  0 ]
  [ a21  a22  a23 ] [ b21  b22  b23 ] = [ 0  1  0 ]
  [ a31  a32  a33 ] [ b31  b32  b33 ]   [ 0  0  1 ]
By using the definition of matrix multiplication we can write that the above
relation is equivalent to the following three systems of linear equations:

  [ a11  a12  a13 ][ b11 ]   [ 1 ]   [ a11  a12  a13 ][ b12 ]   [ 0 ]   [ a11  a12  a13 ][ b13 ]   [ 0 ]
  [ a21  a22  a23 ][ b21 ] = [ 0 ] , [ a21  a22  a23 ][ b22 ] = [ 1 ] , [ a21  a22  a23 ][ b23 ] = [ 0 ]
  [ a31  a32  a33 ][ b31 ]   [ 0 ]   [ a31  a32  a33 ][ b32 ]   [ 0 ]   [ a31  a32  a33 ][ b33 ]   [ 1 ]
Thus by solving each of the above systems we shall get the three columns of
the inverse matrix B = A–1. Since, the coefficient matrix is the same for each of the
three systems, we can apply Gauss elimination to all the three systems
simultaneously. We consider for this the following augmented matrix:
  [ a11  a12  a13  :  1  0  0 ]
  [ a21  a22  a23  :  0  1  0 ]
  [ a31  a32  a33  :  0  0  1 ]
We employ Gauss elimination to this augmented matrix. At the end of the 1st
stage we get,

  R2 − (a21/a11) R1    [ a11  a12     a13     :      1        0  0 ]
         →             [  0   a22(1)  a23(1)  :  −a21/a11     1  0 ]
  R3 − (a31/a11) R1    [  0   a32(1)  a33(1)  :  −a31/a11     0  1 ]

where
a22(1) = a22 − (a21/a11) a12 ,  a23(1) = a23 − (a21/a11) a13
a32(1) = a32 − (a31/a11) a12 ,  a33(1) = a33 − (a31/a11) a13
Similarly, at the end of the second stage, we have

                                  [ a11  a12     a13     :   1    0    0 ]
  R3 − (a32(1)/a22(1)) R2   →     [  0   a22(1)  a23(1)  :  c21   1    0 ]
                                  [  0    0      a33(2)  :  c31  c32   1 ]

where
a33(2) = a33(1) − (a32(1)/a22(1)) a23(1) ,  c21 = −(a21/a11)
c31 = (a21 a32(1))/(a11 a22(1)) − (a31/a11) ,  c32 = −(a32(1)/a22(1))
By back-substitution process, we get the elements of the inverse matrix, by
solving the three systems corresponding to the three columns of the reduced
augmented part, i.e.,

  [  1    0   0 ]
  [ c21   1   0 ]
  [ c31  c32  1 ]
We illustrate the method by an example given below.
Example 8: Find the inverse of the following matrix A by Gaussian elimination
method.
 2 3 − 1
A = 4 4 − 3
2 − 3 1 
Solution: We consider the following augmented matrix:
 2 3 − 1 : 1 0 0
[ A.I ] = 4 4 − 3 : 0 1 0
2 − 3 1 : 0 0 1

Using Gaussian elimination to this augmented matrix, we get the following at
the end of first step:

  R2 − 2R1     [ 2   3  −1  :   1  0  0 ]
      →        [ 0  −2  −1  :  −2  1  0 ]
  R3 − R1      [ 0  −6   2  :  −1  0  1 ]

Similarly, at the end of 2nd step we get,

               [ 2   3  −1  :   1   0  0 ]
  R3 − 3R2 →   [ 0  −2  −1  :  −2   1  0 ]
               [ 0   0   5  :   5  −3  1 ]
Thus, we get the three columns of inverse matrix by solving the following three
systems:
 2 3 − 1 : 1  2 3 − 1 : 0  2 3 − 1 : 0
0 − 2 − 1 : − 2 0 − 2 − 1 : 1  0 − 2 − 1 : 0
    
0 0 5 : 5  0 0 5 : − 3 0 0 5 : 1
The solutions of the three systems are easily derived by back-substitution, which
give the three columns of the inverse matrix given below:

  [ 1/4    0     1/4  ]
  [ 1/2  −1/5  −1/10 ]
  [  1   −3/5   1/5  ]
We can also employ Gauss-Jordan elimination to compute the inverse matrix.
This is illustrated by the following example:
Example 9: Compute the inverse of the following matrix by Gauss-Jordan
elimination.
 2 3 − 1
A = 4 4 − 3
2 − 3 1 
Solution: We consider the augmented matrix [A : I],

            [ 2   3  −1  :  1  0  0 ]           [ 1  3/2  −1/2  :  1/2  0  0 ]
  [A : I] = [ 4   4  −3  :  0  1  0 ]  R1/2 →   [ 4   4    −3   :   0   1  0 ]
            [ 2  −3   1  :  0  0  1 ]           [ 2  −3     1   :   0   0  1 ]

  R2 − 4R1    [ 1  3/2  −1/2  :  1/2  0  0 ]             [ 1  3/2  −1/2  :  1/2    0   0 ]
      →       [ 0  −2    −1   :  −2   1  0 ]  R2/(−2) →  [ 0   1    1/2  :   1   −1/2  0 ]
  R3 − 2R1    [ 0  −6     2   :  −1   0  1 ]             [ 0  −6     2   :  −1     0   1 ]

  R1 − 3R2/2    [ 1  0  −5/4  :  −1   3/4  0 ]           [ 1  0  −5/4  :  −1   3/4    0  ]
      →         [ 0  1   1/2  :   1  −1/2  0 ]  R3/5 →   [ 0  1   1/2  :   1  −1/2    0  ]
  R3 + 6R2      [ 0  0    5   :   5  −3    1 ]           [ 0  0    1   :   1  −3/5  1/5 ]

  R1 + 5R3/4    [ 1  0  0  :  1/4    0     1/4  ]
      →         [ 0  1  0  :  1/2  −1/5  −1/10 ]
  R2 − R3/2     [ 0  0  1  :   1   −3/5   1/5  ]

which gives
          [ 1/4    0     1/4  ]
  A–1 =   [ 1/2  −1/5  −1/10 ]
          [  1   −3/5   1/5  ]

Check Your Progress
1. When is a system of equations homogeneous and when non-homogeneous?
2. Explain Gauss elimination method.
3. Explain Gauss-Jordan elimination method.
4. Why are iterative methods used?
5. Explain Gauss-Seidel iteration method.

4.3 ANSWERS TO ‘CHECK YOUR PROGRESS’


1. The system of equations Ax = b is termed as a homogeneous one if all the
elements in the column vector b are zero. Otherwise, the system is termed
as a non-homogeneous one.
2. The Gaussian elimination method consists in systematic elimination of the
unknowns so as to reduce the coefficient matrix into an upper triangular
system, which is then solved by the procedure of back-substitution.
3. The Gauss-Jordan elimination method is a variation of the Gaussian elimination
method. In this method, the augmented coefficient matrix is transformed by
row operations such that the coefficient matrix reduces to the Identity matrix.
The solution of the system is then directly obtained as the reduced augmented
column of the transformed augmented matrix.
4. We can use iteration methods to solve a system of linear equations when
the coefficient matrix is diagonally dominant.
5. The Gauss-Seidel iteration is a simple modification of the Jacobi iteration.
In this method, at any stage of iteration of the system, the improved values
of the unknowns are used for computing the components of the unknown
vector.

4.4 SUMMARY
Many engineering and scientific problems require the solution of a system
of linear equations.
The system of equations is termed as a homogeneous one if all the elements
in the column vector b of the equation Ax = b, are zero.
Cramer’s rule and matrix inversion method are two classical methods to
solve the system of equations.
If D = |A| be the determinant of the coefficient matrix A and Di is the
determinant obtained by replacing the ith column of D by the column vector
b, then the Cramer’s rule gives the solution vector x by the equations,
xi = Di / D for i = 1, 2, …, n.
Gaussian elimination method consists in systematic elimination of the
unknowns so as to reduce the coefficient matrix into an upper triangular
system, which is then solved by the procedure of back-substitution.
In Gauss-Jordan elimination, the augmented matrix is transformed by row
operations such that the coefficient matrix reduces to the identity matrix.
We can use iteration methods to solve a system of linear equations when
the coefficient matrix is diagonally dominant.
There are two forms of iteration methods termed as Jacobi iteration method
and Gauss-Seidel iteration method.
Gaussian elimination can be used to compute the inverse of a matrix.

4.5 KEY WORDS

Homogeneous equation: In this system of equations, all the elements in the
column vector b of the equation Ax = b, are zero.
Gaussian elimination: It is the systematic elimination of the unknowns so
as to reduce the coefficient matrix into an upper triangular system, which is
then solved by the procedure of back-substitution.
Gauss-Seidel iteration: In this method, at any stage of iteration of the
system, the improved values of the unknowns are used for computing the
components of the unknown vector.

4.6 SELF ASSESSMENT QUESTIONS AND EXERCISES

Short-Answer Questions
1. Define the system of linear equations.
2. How many determinants do we have to compute in Cramer's rule?
3. What is the basic difference between Gaussian elimination and Gauss-Jordan
elimination method?
4. What are iterative methods?
5. State an application of Gaussian elimination method.
Long-Answer Questions
1. Use Cramer’s rule to solve the following systems of equations:
(i) x1 – x2 – x3 = 1 (ii) x1 + x2 + x3 = 6
2x1 – 3x2 + x3 = 1 x1 + 2x2 + 3x3 = 14
3x1 + x2 – x3 = 2 x1 – 2x2 + x3 = 2

2. Use the matrix inversion method to solve the following systems of equations:
(i) 4x1 – x2 + 2x3 = 15 (ii) x1 + 4x2 + 9x3 = 16
x1 – 2x2 – 3x3 = –5 2x1 + x2 + x3 = 10
5x1 – 7x2 + 9x3 = 8 3x1 + 2x2 + 3x3 = 18
3. Solve the following systems of equations using Gaussian elimination method:
(i) 2x + 2y + 4z = 18 (ii) x1 + 2x2 + x3 + 4x4 = 13
x + 3y + 2z = 13 x1 + 4x3 + 3x4 = 28
3x + y + 3z = 14 4x1 + 2x2 + 2x3 + x4 = 20
–3x1 + x2 + 3x3 + 2x4 = 6
4. Apply Gauss-Jordan elimination method to solve the following systems:
(i) x1 + 2x2 + 3x3 = 4 (ii) 5x1 + 3x2 + x3 = 2
x1 + x2 + x3 = 3 4x1 + 10x2 + 4x3 = –4
2x1 + 2x2 + x3 = 1 2x1 + 3x2 + 5x3 = 11
5. Compute the solution of the following systems correct to three significant digits using Gauss-Seidel iteration method:
(i) 9x1 – 3x2 + 2x3 = 23 (ii) x1 + 2x2 + 3x3 + 4x4 = 30
6x1 + 3x2 + 14x3 = 38 4x1 + x2 + 2x3 + 3x4 = 24
4x1 + 2x2 – 3x3 = 35 3x1 + 4x2 + x3 + 2x4 = 22
2x1 + 3x2 + 4x3 + x4 = 24
4.7 FURTHER READINGS

Jain, M. K., S. R. K. Iyengar and R. K. Jain. 2007. Numerical Methods for Scientific and Engineering Computation. New Delhi: New Age International (P) Limited.
Atkinson, Kendall E. 1989. An Introduction to Numerical Analysis, 2nd Edition.
US: John Wiley & Sons.
Jain, M. K. 1983. Numerical Solution of Differential Equations. New Delhi:
New Age International (P) Limited.
Conte, Samuel D. and Carl de Boor. 1980. Elementary Numerical Analysis:
An Algorithmic Approach. New York: McGraw-Hill.
Skeel, Robert. D and Jerry B. Keiper. 1993. Elementary Numerical Computing
with Mathematica. New York: McGraw-Hill.
Balaguruswamy, E. 1999. Numerical Methods. New Delhi: Tata McGraw-Hill.
Datta, N. 2007. Computer Oriented Numerical Methods. New Delhi: Vikas Publishing House Pvt. Ltd.
BLOCK - II
EIGEN VECTORS, INTERPOLATION, APPROXIMATION, DIFFERENTIATION AND INTEGRATION

UNIT 5 EIGEN VALUES AND EIGEN VECTORS
Structure
5.0 Introduction
5.1 Objectives
5.2 Finding Eigen Values and Eigen Vectors
5.3 Jacobi and Power Methods
5.4 Answers to Check Your Progress Questions
5.5 Summary
5.6 Key Words
5.7 Self Assessment Questions and Exercises
5.8 Further Readings
5.0 INTRODUCTION
In linear algebra, an eigenvector or characteristic vector of a linear transformation is
a nonzero vector that changes at most by a scalar factor when that linear
transformation is applied to it. The corresponding eigenvalue is the factor by which
the eigenvector is scaled. Geometrically, an eigenvector, corresponding to
a real nonzero eigenvalue, points in a direction in which it is stretched by the
transformation and the eigenvalue is the factor by which it is stretched. If the
eigenvalue is negative, the direction is reversed. Loosely speaking, in a
multidimensional vector space, the eigenvector is not rotated. However, in a one-
dimensional vector space, the concept of rotation is meaningless.
In this unit, you will study about eigenvalues, eigenvectors, and the Jacobi and power methods.

5.1 OBJECTIVES

After going through this unit, you will be able to:


Understand the concept of Eigen values and Eigen vectors
Analyse the Jacobi and power methods

Self-Instructional
86 Material
5.2 FINDING EIGEN VALUES AND EIGEN VECTORS

Let A = [aij] be a square matrix of order n. If there exists a non-zero (non-null) column vector X and a scalar λ such that,
AX = λX
Then λ is called an eigenvalue of the matrix A and X is called an eigenvector corresponding to the eigenvalue λ.
The problem of finding the values of the parameter λ, for which the homogeneous system,
AX = λX ...(5.1)
possesses a non-trivial solution is known as the characteristic value problem or eigenvalue problem.
Thus, the system of Equation (5.1) possesses a non-trivial solution if and only if,
|A – λI| = 0 ...(5.2)
This equation is known as the characteristic equation of the matrix A.
The roots of this Equation (5.2) are called latent roots or characteristic values or eigenvalues of the matrix A. The corresponding non-trivial solutions are called eigenvectors or characteristic vectors of A.
If A is an n × n matrix, then its characteristic equation is an nth degree polynomial equation in λ. Therefore, an n × n matrix has n eigenvalues (real or complex).
Suppose λi (i = 1, 2, 3, ..., n) be the eigenvalues of A; then for each λi there exists a non-null vector Xi such that
AXi = λiXi (i = 1, 2, 3, ..., n)
Multiplying both sides by a non-zero scalar k, we get
A(kXi) = λi(kXi)
This implies that an eigenvector is determined up to a multiplicative scalar. In other words, the eigenvector is not unique. But corresponding to an eigenvector of the matrix A, there can be one and only one eigenvalue of the matrix A.
It can be shown that for a matrix A of order n, the characteristic Equation (5.2) can be written as,
λ^n – β1λ^(n–1) + β2λ^(n–2) – ...... + (–1)^n βn = 0
Where βr is the sum of all the determinants formed from square matrices of order r whose principal diagonals lie along the principal diagonal of A.
Notes:
1. An eigenvector of a matrix cannot correspond to two different eigenvalues.
2. An eigenvalue of a matrix can, and will correspond to different eigenvectors.
Example 1: Find the eigenvalues and eigenvectors of
    A = [ 3   4 ]
        [ 4  −3 ]
Solution: The characteristic equation is
    | 3 − λ      4     |
    |   4     −3 − λ   | = 0
λ² – 25 = 0, so λ = ±5.
The eigenvectors are given by AX = λX, i.e., (A – λI)X = 0:
(3 – λ)x1 + 4x2 = 0
4x1 – (3 + λ)x2 = 0
If λ = 5, we get –2x1 + 4x2 = 0 and 4x1 – 8x2 = 0, or x1/2 = x2/1.
The eigenvector is (2, 1)ᵀ.
If λ = –5, we get 8x1 + 4x2 = 0 and 4x1 + 2x2 = 0, i.e., 2x1 = –x2, or x1/(–1) = x2/2.
The eigenvector is (–1, 2)ᵀ.
The eigenvalues are 5 and –5, with corresponding eigenvectors (2, 1)ᵀ and (–1, 2)ᵀ respectively.
Example 2: Find the eigenvalues and eigenvectors of the matrix
    [ 3  −4   4 ]
    [ 1  −2   4 ]
    [ 1  −1   3 ]
Solution: Let A be the given matrix. The characteristic equation is |A – λI| = 0, i.e.,
    | 3 − λ   −4       4    |
    |   1    −2 − λ    4    | = 0
    |   1     −1     3 − λ  |
(3 – λ)[(–2 – λ)(3 – λ) + 4] + 4(3 – λ – 4) + 4(–1 + 2 + λ) = 0
λ³ – 4λ² + λ + 6 = 0
The eigenvalues are the roots of this equation and they are –1, 2, 3.
The eigenvectors are given by the solution of (A – λI)X = 0, i.e.,
(3 – λ)x1 – 4x2 + 4x3 = 0
x1 – (2 + λ)x2 + 4x3 = 0
x1 – x2 + (3 – λ)x3 = 0
Case 1: λ = –1 gives:
4x1 – 4x2 + 4x3 = 0, x1 – x2 + 4x3 = 0, x1 – x2 + 4x3 = 0
Solving the first and second equations by the method of cross multiplication we get:
x1/(–12) = x2/(–12) = x3/0
The eigenvector is X1 = (–12, –12, 0)ᵀ or (1, 1, 0)ᵀ.
Case 2: λ = 2 gives:
x1 – 4x2 + 4x3 = 0, x1 – 4x2 + 4x3 = 0, x1 – x2 + x3 = 0
Solving the first and third equations we get the eigenvector X2 = (0, 1, 1)ᵀ.
Case 3: λ = 3 gives:
–4x2 + 4x3 = 0, x1 – 5x2 + 4x3 = 0, x1 – x2 = 0
Solving any two of these equations we get the eigenvector X3 = (1, 1, 1)ᵀ.

Note: For a square matrix A = (ai j) of order 3, the characteristic equation |A – I|


= 0, takes the form:
3 2
– 1 + 2 – 3 =0
Where, 1 = a11 + a22 + a33 = Sum of the leading diagonal elements of A

a11 a12 a22 a23 a11 a13


2 = + +
a21 a22 a32 a33 a31 a33

= Sum of the minors of the leading diagonal elements of A


= | A | = Determinant of the matrix A
In the Example 2
1 =3–2+3=4
−2 4 3 4 3 −4
2 = + + = 1, 3 = –6
−1 3 1 3 1 −2
3 2
The characteristic equation is –4 + +6=0
Note: If is an eigenvalue of matrix A of order 3, then the components of the
eigenvector of matrix A corresponding of are proportional to the cofactors of
the elements of any row of |A – I| provided that not all of them vanish.
This method is used for the computation of the eigenvectors in the following
examples.
Example 3: Find the eigenvalues and eigenvectors of the matrix
    [ 2   2   0 ]
    [ 2   1   1 ]
    [ −7  2  −3 ]
Solution: The characteristic equation is λ³ – β1λ² + β2λ – β3 = 0,
Where, β1 = 2 + 1 – 3 = 0, β2 = –5 – 6 – 2 = –13, β3 = –12
The characteristic equation is,
λ³ – 13λ + 12 = 0
(λ – 1)(λ + 4)(λ – 3) = 0
The eigenvalues are 1, –4, 3.
When λ = 1, A – λI =
    [ 1   2   0 ]
    [ 2   0   1 ]
    [ −7  2  −4 ]
Components of the eigenvector are proportional to the cofactors of the elements of the first row, namely –2, 1, 4.
The eigenvector is (–2, 1, 4)ᵀ.
When λ = 3, A – λI =
    [ −1  2   0 ]
    [ 2  −2   1 ]
    [ −7  2  −6 ]
The cofactors of the elements of the first row are 10, 5, –10, which are proportional to 2, 1, –2 respectively.
The eigenvector is (2, 1, –2)ᵀ.
When λ = –4, A – λI =
    [ 6   2   0 ]
    [ 2   5   1 ]
    [ −7  2   1 ]
The cofactors of the elements of the first row are 3, –9, 39, which are proportional to 1, –3, 13 respectively.
The eigenvector is (1, –3, 13)ᵀ.
Example 4: Find the eigenvalues and eigenvectors of
    A = [ 1   6   1 ]
        [ 1   2   0 ]
        [ 0   0   3 ]
Solution: The characteristic equation is λ³ – β1λ² + β2λ – β3 = 0
Where, β1 = 1 + 2 + 3 = 6
β2 = |2 0; 0 3| + |1 1; 0 3| + |1 6; 1 2|
   = 6 + 3 + 2 – 6 = 5
β3 = |A| = 1(6) – 6(3) + 1(0) = –12
The characteristic equation is λ³ – 6λ² + 5λ + 12 = 0
The roots of this equation, –1, 3, 4, are the eigenvalues of A.
When λ = –1, A – λI =
    [ 2   6   1 ]
    [ 1   3   0 ]
    [ 0   0   4 ]
The cofactors of the elements of the first row give the eigenvector as
X1 = (12, –4, 0)ᵀ or (3, –1, 0)ᵀ.
When λ = 3, A – λI =
    [ −2  6   1 ]
    [ 1  −1   0 ]
    [ 0   0   0 ]
Since the cofactors of the elements of the 1st and 2nd rows vanish completely, we consider the cofactors of the elements of the 3rd row to get the eigenvector as
X2 = (1, 1, –4)ᵀ.
When λ = 4, A – λI =
    [ −3  6   1 ]
    [ 1  −2   0 ]
    [ 0   0  −1 ]
Considering the cofactors of the elements of the 1st row we get the eigenvector as
X3 = (2, 1, 0)ᵀ.
Properties of Eigenvalues and Eigenvectors
1. If all the eigenvalues of a matrix are distinct, then the corresponding eigenvectors
are linearly independent.
2. If two or more eigenvalues of a matrix are equal then the corresponding
eigenvectors may be linearly independent or linearly dependent.
3. The eigenvalues of a matrix and its transpose are the same. The characteristic equations of A and AT (the transpose of A) are,
|A – λI| = 0 ...(5.3)
and |AT – λI| = 0 ...(5.4)
The LHS of Equation (5.4) is the determinant obtained by interchanging rows into columns of |A – λI|. Since the value of a determinant is unaltered by the interchange of rows and columns, Equations (5.3) and (5.4) are identical. Therefore, the eigenvalues of a matrix and its transpose are the same.
4. The sum of the eigenvalues of a matrix A is equal to the sum of the diagonal elements of A. The sum of the diagonal elements is called the Trace of the matrix A. The characteristic equation of A is,
λ^n – β1λ^(n–1) + β2λ^(n–2) – .... + (–1)^n βn = 0 ...(5.5)
Where, β1 = sum of the diagonal elements of A. ...(5.6)
Let λ1, λ2, ....., λn be the roots of Equation (5.5).
Then, λ1 + λ2 + .... + λn = –(–β1)/1 = β1 ...(5.7)
From Equations (5.6) and (5.7) we find that the sum of the eigenvalues is equal to the sum of the diagonal elements.
5. The product of the eigenvalues of a matrix A is |A|.
The characteristic equation of A is,
λ^n – β1λ^(n–1) + β2λ^(n–2) – .... + (–1)^n βn = 0 ...(5.8)
Where, βn = Determinant of A.
If λ1, λ2, ...., λn be the roots of Equation (5.8), then
λ1λ2....λn = (–1)^n (–1)^n βn = βn ...(5.9)
From Equations (5.8) and (5.9) we find that the product of the eigenvalues is equal to the value of the determinant of A.
6. The eigenvalues of a triangular matrix are the diagonal elements of it.
NOTES Notes:
1. The sum of the eigenvalues of a matrix A, is equal to the sum of the
diagonal elements of A, which is called the Trace of the matrix A.
2. If one of the eigenvalues is zero, then the matrix is singular and conversely,
when the matrix is singular then at least one of the eigenvalues ought to
be zero.
3. The eigenvalues of a diagonal matrix are the diagonal elements of it.
7. If λi (i = 1, 2, 3, ..., n) are the eigenvalues of A, then:
(i) kλi (i = 1, 2, 3, ...., n) are the eigenvalues of the matrix kA, k being a non-zero scalar.
(ii) 1/λi (i = 1, 2, 3, ...., n) are the eigenvalues of the inverse matrix A–1, provided λi ≠ 0.
Proof of (i): Let Xi (i = 1, 2, 3, ...., n) be the eigenvectors of the matrix A corresponding to the eigenvalues λi (i = 1, 2, 3, ...., n). Then,
AXi = λiXi (i = 1, 2, 3, ...., n) ...(5.10)
Multiplying by k (a non-zero scalar):
kAXi = kλiXi
This implies that kλi (i = 1, 2, 3, ...., n) are the eigenvalues of kA.
Proof of (ii): Premultiply Equation (5.10) by A–1:
A–1AXi = A–1λiXi
IXi = λiA–1Xi, or A–1Xi = λi^(–1)Xi
This implies that λi^(–1) (i = 1, 2, 3, ..., n) are the eigenvalues of A–1.
In general, if λi (i = 1, 2, 3, ..., n) are the eigenvalues of A, then λi^m (i = 1, 2, 3, ..., n), where m is an integer, are the eigenvalues of A^m.
Note: A and A^m (m being an integer) have the same eigenvectors even though the eigenvalues are different.
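Properties 4, 5 and 7 can be verified numerically. The following sketch assumes NumPy; the test matrix is the one used in Example 4 above.

```python
import numpy as np

A = np.array([[1.0, 6.0, 1.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])      # matrix of Example 4 (eigenvalues -1, 3, 4)

vals = np.linalg.eigvals(A)
# property 4: sum of eigenvalues = trace
assert np.isclose(vals.sum(), np.trace(A))
# property 5: product of eigenvalues = determinant
assert np.isclose(vals.prod(), np.linalg.det(A))
# property 7 (generalized): eigenvalues of A^2 are the squares of those of A
assert np.allclose(sorted(np.linalg.eigvals(A @ A).real),
                   sorted((vals ** 2).real))
print(vals.sum(), np.linalg.det(A))  # approximately 6.0 and -12.0
```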
3 0 0
Example 5: Find the sum of the squares of the eigenvalues of 8 4 0  .
6 2 5 

Self-Instructional
94 Material
Solution: The eigenvalues are 3, 4 and 5. Hence, the sum of the squares of eigenvalues Eigen Values and
Eigen Vectors
=50.
 6 −2 2 
Example 6: Two eigenvalues of matrix  −2 3 −1 are 2 and 8. Find the third NOTES
 2 −1 3 
eigenvalue.
Solution: Sum of the eigenvalues = Sum of the diagonal elements = 6 + 3 + 3 =
12.
Since the sum of the 2 given eigenvalues is 10 (2+8), the third eigenvalues is
12 – 10 = 2.
Example 7: If 3 and 15 are two of the eigenvalues of
    [ 8  −6   2 ]
    [ −6  7  −4 ]
    [ 2  −4   3 ]
find the value of the determinant.
Solution: Let λ1, λ2, λ3 be the eigenvalues.
Then λ1 + λ2 + λ3 = 8 + 7 + 3 = 18, so 3 + 15 + λ3 = 18, giving λ3 = 0.
The value of the determinant = Product of the eigenvalues = 0.
Hence the value of the determinant is zero.
Example 8: If one of the eigenvalues of a matrix is zero, then what is the type of
matrix?
Solution: The matrix is singular.
Example 9: Find the eigenvalues and eigenvectors of
    [ 0   1   1 ]
    [ 1   0   1 ]
    [ 1   1   0 ]
Solution: The characteristic equation is λ³ – 3λ – 2 = 0. Solving this equation we get the eigenvalues as –1, –1 and 2.
The eigenvectors are given by
    [ −λ   1    1  ] [ x1 ]
    [  1  −λ    1  ] [ x2 ] = 0
    [  1   1   −λ  ] [ x3 ]
i.e., –λx1 + x2 + x3 = 0, x1 – λx2 + x3 = 0, x1 + x2 – λx3 = 0
Case 1: λ = 2 gives:
–2x1 + x2 + x3 = 0, x1 – 2x2 + x3 = 0, x1 + x2 – 2x3 = 0
Solving any two of these equations we get the eigenvector X1 = (1, 1, 1)ᵀ.
Case 2: λ = –1 gives:
x1 + x2 + x3 = 0, x1 + x2 + x3 = 0, x1 + x2 + x3 = 0
Solving any two of these equations formally gives x1 = 0, x2 = 0, x3 = 0, and the vector X2 becomes a null vector, which cannot be an eigenvector. This is because all three equations are one and the same. The rank of the coefficient matrix is 1. Therefore, the system will have (n – r) = (3 – 1) = 2 linearly independent solutions. This indicates that, corresponding to λ = –1, there will be two linearly independent eigenvectors.
To get the solutions, we assign arbitrary values to two of the three variables as shown below. Considering the equation x1 + x2 + x3 = 0 and assigning x3 = 0, x2 = 1, we get x1 = –1.
The eigenvector is X2 = (–1, 1, 0)ᵀ.
Similarly, assigning the values x1 = 0, x2 = 1 we get x3 = –1, so that the eigenvector is X3 = (0, 1, –1)ᵀ.
1 2 
Example 10: Show that the eigenvalues of A =   are –1, 3 and verify that
2 1 
the eigenvalues are 1 and 9 for A2.
2
Solution: The characteristic equation is – 2 – 3 = 0. The eigenvalues are –1,

3 and the corresponding eigenvectors are  −1 and 1 . By property, the
1 1
eigenvectors of A2 are 1, 9 and the corresponding eigenvectors are (–1, 1)T,
(1, 1)T

 5 4
Verification: A2 =   has the characteristic equation,
2
– 10 + 9 = 0.
 4 5

Self-Instructional
96 Material
Eigen Values and
−1 1
This equation gives the eigenvalues 1, 9 and eigenvectors   and   .
Eigen Vectors

1  1

Inner Product
Inner product or scalar product of two vectors X and Y, denoted by <X, Y>, is defined as the scalar XᵀY,
i.e., <X, Y> = XᵀY = Σ (i = 1 to n) xi yi
The inner product <X, X> is known as the square of the length of the vector X and it is denoted as |X|², where |X| is read as norm X. If |X| = 1 then X is called a unit vector.
If the inner product between two vectors vanishes, then we say that the two vectors are orthogonal to each other.
Example 11: Show that X = (1, –1, 2)ᵀ and Y = (3, –1, –2)ᵀ are orthogonal.
Solution: <X, Y> = XᵀY = (1)(3) + (–1)(–1) + (2)(–2) = 3 + 1 – 4 = 0.
Hence, X and Y are orthogonal.
Eigenvalues of Real Symmetric Matrix
The eigenvalues of a real symmetric matrix are real.
Proof: Let λ be an eigenvalue and X the corresponding eigenvector of the real symmetric matrix A, so that AX = λX.
Premultiplying both sides by X̄′ (the transpose of the complex conjugate of X):
X̄′AX = λ X̄′X ...(5.11)
Taking the complex conjugate of Equation (5.11) and using Ā = A (since A is real):
X′AX̄ = λ̄ X′X̄
Taking the transpose of this and using A′ = A (since A is symmetric):
X̄′AX = λ̄ X̄′X ...(5.12)
From Equations (5.11) and (5.12),
(λ – λ̄) X̄′X = 0
Since X̄′X is non-zero, λ = λ̄. Therefore λ is real.
Theorem 1: If X1 and X2 are two eigenvectors corresponding to two different eigenvalues λ1 and λ2 of a real symmetric matrix A, then X1 and X2 are orthogonal.
Proof: Since X1 and X2 are the eigenvectors of matrix A corresponding to the eigenvalues λ1 and λ2, we have,
AX1 = λ1X1 (i)
AX2 = λ2X2 (ii)
Premultiplying Equation (i) by X2′ we get,
X2′AX1 = λ1X2′X1
Taking the transpose on both sides we get,
(X2′AX1)′ = (λ1X2′X1)′
X1′A′X2 = λ1X1′X2
X1′AX2 = λ1X1′X2, where A′ = A, since A is symmetric
X1′λ2X2 = λ1X1′X2 [Using Equation (ii)]
(λ2 – λ1)X1′X2 = 0
Since λ2 ≠ λ1, X1′X2 = 0
i.e., X1 and X2 are orthogonal.
Example 12: Find the eigenvalues of the matrix
    [ 10  −2  −5 ]
    [ −2   2   3 ]
    [ −5   3   5 ]
and verify that the eigenvectors are mutually orthogonal.
Solution: Let A be the given matrix. Then
β1 = 17; β2 = 42; β3 = 0
The characteristic equation is,
λ³ – 17λ² + 42λ = 0
λ(λ² – 17λ + 42) = 0
λ(λ – 3)(λ – 14) = 0
The eigenvalues are 0, 3 and 14. To find the eigenvectors we consider (A – λI)X = 0:
(10 – λ)x1 – 2x2 – 5x3 = 0
–2x1 + (2 – λ)x2 + 3x3 = 0
–5x1 + 3x2 + (5 – λ)x3 = 0
Case 1: λ = 0 gives:
10x1 – 2x2 – 5x3 = 0, –2x1 + 2x2 + 3x3 = 0
Solving, we get the eigenvector X1 = (1, –5, 4)ᵀ
Case 2: λ = 3 gives:
7x1 – 2x2 – 5x3 = 0, –2x1 – x2 + 3x3 = 0
Solving, we get the eigenvector X2 = (1, 1, 1)ᵀ
Case 3: λ = 14 gives:
–4x1 – 2x2 – 5x3 = 0, –2x1 – 12x2 + 3x3 = 0
Solving, we get the eigenvector X3 = (–3, 1, 2)ᵀ
The eigenvectors are X1 = (1, –5, 4)ᵀ, X2 = (1, 1, 1)ᵀ and X3 = (–3, 1, 2)ᵀ.
X1′X2 = 0, X2′X3 = 0 and X3′X1 = 0
Hence, the three eigenvectors are mutually orthogonal.
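For real symmetric matrices a dedicated library routine exists. The sketch below (assuming NumPy) recomputes the eigenvalues of the matrix of Example 12 and checks the orthogonality of the eigenvectors.

```python
import numpy as np

A = np.array([[10.0, -2.0, -5.0],
              [-2.0,  2.0,  3.0],
              [-5.0,  3.0,  5.0]])   # symmetric matrix of Example 12

# eigh is specialized to real symmetric matrices: it returns real
# eigenvalues in ascending order and orthonormal eigenvectors as columns
vals, vecs = np.linalg.eigh(A)
print(np.round(vals, 6))             # approximately [0, 3, 14]

# mutual orthogonality of the eigenvectors: V'V = I
assert np.allclose(vecs.T @ vecs, np.eye(3))
```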
Note: It may be seen that any matrix whose elements are polynomials can be expressed as a polynomial whose coefficients are matrices, and vice versa. The following two examples illustrate this concept.
Example 13: Express the following matrices as polynomials with matrix coefficients.
(i)  [ λ + 2λ²   λ³ − 3 ]
     [ 1 + 3λ    −λ²    ]
(ii) [ 1 + λ + λ²     λ + λ² − λ³    λ³ − 3λ² + 5λ + 1 ]
     [ λ³ − 3λ² − 1   λ + λ²         1 − 3λ² + 4λ³     ]
     [ λ + λ³ − 1     0              λ³ + λ² + λ + 1   ]
Solution: (i)
[ λ + 2λ²   λ³ − 3 ]  =  λ³ [ 0  1 ]  +  λ² [ 2   0 ]  +  λ [ 1  0 ]  +  [ 0  −3 ]
[ 1 + 3λ    −λ²    ]        [ 0  0 ]        [ 0  −1 ]       [ 3  0 ]     [ 1   0 ]
= Aλ³ + Bλ² + Cλ + D, where A, B, C and D are matrices.
(ii) The given matrix = Aλ³ + Bλ² + Cλ + D
Where,
A = [ 0  −1  1 ]    B = [ 1   1  −3 ]
    [ 1   0  4 ]        [ −3  1  −3 ]
    [ 1   0  1 ]        [ 0   0   1 ]
C = [ 1  1  5 ]     D = [ 1   0  1 ]
    [ 0  1  0 ]         [ −1  0  1 ]
    [ 1  0  1 ]         [ −1  0  1 ]
5.3 JACOBI AND POWER METHODS

In numerical linear algebra, the Jacobi method is an iterative algorithm for determining the solutions of a strictly diagonally dominant system of linear equations.
Each diagonal element is solved for, and an approximate value is plugged in. The process is then iterated until it converges. This algorithm is a stripped-down version of the Jacobi transformation method of matrix diagonalization. The method is named after Carl Gustav Jacob Jacobi.
In mathematics, power iteration, also known as the power method, is an eigenvalue algorithm: given a diagonalizable matrix A, the algorithm will produce a number λ, which is the greatest (in absolute value) eigenvalue of A, and a nonzero vector v, which is a corresponding eigenvector of λ, that is, Av = λv. The algorithm is also known as the Von Mises iteration.
Power iteration is a very simple algorithm, but it may converge slowly. The
most time-consuming operation of the algorithm is the multiplication of matrix A by
a vector, so it is effective for a very large sparse matrix with appropriate
implementation.
Jacobi Method
As per the Jacobi method, let Ax = b be a square system of n linear equations, with A = [aij] and b = (b1, ..., bn)ᵀ. Then A can be decomposed into a diagonal component D, a strictly lower triangular part L and a strictly upper triangular part U:
A = D + L + U
The solution is then obtained iteratively by means of,
x^(k+1) = D^(–1) (b – (L + U) x^(k))
Where x^(k) is referred to as the kth approximation or iteration of x and x^(k+1) is the next, or (k + 1)th, iteration of x. The element-based formula is thus:
xi^(k+1) = (1/aii) (bi – Σ(j ≠ i) aij xj^(k)),  for i = 1, 2, ..., n
The computation of xi^(k+1) requires each element in x^(k) except xi^(k) itself. Unlike the Gauss–Seidel method, we cannot overwrite xi^(k) with xi^(k+1), as that value will be necessary for the rest of the computation. The minimum amount of storage is two vectors of size n.
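The element-based Jacobi formula above translates directly into code. A minimal sketch follows, assuming NumPy; the function name, test system and tolerance are illustrative choices, not from the text.

```python
import numpy as np

def jacobi(A, b, tol=1e-10, max_iter=500):
    """Jacobi iteration for Ax = b (A assumed strictly diagonally dominant)."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    x = np.zeros_like(b)
    D = np.diag(A)                    # diagonal component
    R = A - np.diagflat(D)            # off-diagonal part L + U
    for _ in range(max_iter):
        x_new = (b - R @ x) / D       # every component uses the OLD iterate
        if np.linalg.norm(x_new - x, ord=np.inf) < tol:
            return x_new
        x = x_new
    return x

# a diagonally dominant test system with solution (1, 1, 1)
x = jacobi([[4.0, 1.0, 1.0],
            [1.0, 5.0, 2.0],
            [1.0, 2.0, 6.0]], [6.0, 8.0, 9.0])
print(np.round(x, 6))                 # approximately [1, 1, 1]
```

Note how `x_new` is computed from the whole of the previous iterate, which is exactly the "two vectors of size n" storage requirement mentioned above.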
Power Method
The power iteration algorithm starts with a vector b0, which may be an approximation to the dominant eigenvector or a random vector. The method is described by the recurrence relation,
b(k+1) = A b(k) / ||A b(k)||
Consequently, at every iteration, the vector b(k) is multiplied by the matrix A and normalized.
If we assume that A has an eigenvalue that is strictly greater in magnitude than its other eigenvalues and the starting vector b0 has a nonzero component in the direction of an eigenvector associated with the dominant eigenvalue, then a subsequence (b(k)) converges to an eigenvector associated with the dominant eigenvalue.
Without the two assumptions above, the sequence (b(k)) does not necessarily converge: writing b(k) = e^(iφk)(v1 + r(k)), where v1 is an eigenvector associated with the dominant eigenvalue and r(k) → 0, the presence of the phase term e^(iφk) implies that (b(k)) does not converge unless e^(iφk) = 1. As per the two assumptions listed above, the sequence (μk) is defined by the Rayleigh quotient,
μk = b(k)* A b(k) / (b(k)* b(k))
This converges to the dominant eigenvalue. NOTES
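The recurrence and the Rayleigh-quotient estimate above can be sketched as follows, assuming NumPy; the 2 × 2 test matrix and the fixed random seed are our own illustrative choices.

```python
import numpy as np

def power_method(A, iters=200, seed=0):
    """Power iteration: estimate the dominant eigenvalue and eigenvector."""
    rng = np.random.default_rng(seed)
    b = rng.standard_normal(A.shape[0])   # random starting vector b0
    for _ in range(iters):
        b = A @ b
        b /= np.linalg.norm(b)            # normalize every iterate
    lam = b @ A @ b / (b @ b)             # Rayleigh quotient estimate
    return lam, b

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])                # eigenvalues 1 and 3
lam, v = power_method(A)
print(round(lam, 6))                      # approximately 3.0
```

The random start almost surely has a nonzero component along the dominant eigenvector, which is the second assumption required for convergence.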
Check Your Progress
1. Define the terms eigenvalue and eigenvector.
2. Define any two properties of eigenvalues and eigenvectors.
3. Define inner product.
5.4 ANSWERS TO CHECK YOUR PROGRESS QUESTIONS

1. Let A = [aij] be a square matrix of order n. If there exists a non-zero (non-null) column vector X and a scalar λ such that,
AX = λX
Then λ is called an eigenvalue of the matrix A and X is called an eigenvector corresponding to the eigenvalue λ.
2. (i) If all the eigenvalues of a matrix are distinct, then the corresponding
eigenvectors are linearly independent.
(ii) If two or more eigenvalues of a matrix are equal then the corresponding
eigenvectors may be linearly independent or linearly dependent.
3. Inner product or scalar product of two vectors X and Y, denoted by <X, Y>, is defined as the scalar XᵀY,
i.e., <X, Y> = XᵀY = Σ (i = 1 to n) xi yi
Inner product <X, X> is known as the square of the length of the vector X and it is denoted as |X|², where |X| is read as norm X. If |X| = 1 then X is called a unit vector.
5.5 SUMMARY

Let A = [aij] be a square matrix of order n. If there exists a non-zero (non-null) column vector X and a scalar λ such that,
AX = λX
Then λ is called an eigenvalue of the matrix A and X is called an eigenvector corresponding to the eigenvalue λ.
If A is an n × n matrix, then its characteristic equation is an nth degree polynomial equation in λ. Therefore, an n × n matrix has n eigenvalues (real or complex).
An eigenvector of a matrix cannot correspond to two different eigenvalues.
An eigenvalue of a matrix can, and will correspond to different eigenvectors.
If λ is an eigenvalue of a matrix A of order 3, then the components of the eigenvector of A corresponding to λ are proportional to the cofactors of the elements of any row of |A – λI|, provided that not all of them vanish.
If all the eigenvalues of a matrix are distinct, then the corresponding
eigenvectors are linearly independent.
If two or more eigenvalues of a matrix are equal then the corresponding
eigenvectors may be linearly independent or linearly dependent.
5.6 KEY WORDS

Eigenvalue problem: The problem of finding the values of the parameter λ for which the homogeneous system AX = λX possesses a non-trivial solution is known as the characteristic value problem or eigenvalue problem.
Inner product: Inner product or scalar product of two vectors X and Y, denoted by <X, Y>, is defined as the scalar XᵀY,
i.e., <X, Y> = XᵀY = Σ (i = 1 to n) xi yi
5.7 SELF ASSESSMENT QUESTIONS AND EXERCISES
Short-Answer Questions
1. Define any three properties of eigenvalues and eigenvectors.
2. What is an inner product?
Long-Answer Questions
1. Find the eigenvalues and eigenvectors of
   A = [ 6  −2   2 ]
       [ −2  3  −1 ]
       [ 2  −1   3 ]
2. If X1 and X2 are eigenvectors corresponding to distinct eigenvalues λ1 and λ2 of A, then show that X1 and X2 are linearly independent.
3. Find the eigenvalues of the following matrices:
(i)  [ 5  2 ]        (ii)  [ 1      1 + i ]
     [ 2  3 ]              [ 1 − i  2     ]
(iii) [ 2  2  1 ]    (iv)  [ 3   10   5 ]
      [ 1  3  1 ]          [ −2  −3  −4 ]
      [ 1  2  2 ]          [ 3    5   7 ]
4. Let k1, k2, k3 > 0 and A = [aij], where aij = 1 if i = j and aij = ki/kj if i ≠ j (i, j = 1, 2, 3).
Write the matrix A and find its eigenvalues.
5. Write 3 matrices whose characteristic equation is – .
6. Find the eigenvalues and eigenvectors of a 3 × 3 null matrix.
2 1 −1 0 
1 3 4 2 
7. Find the sum and product of the eigenvalues of  .
 −1 4 1 2
 
 0 2 2 1 

8. Eigenvalues of a matrix are 1, –1 and 2. Find the value of Trace (A) and
determinant A.
9. If one of the eigenvalues of
   A = [ 7   4  −4 ]
       [ 4  −8  −1 ]
       [ 4  −1  −8 ]
is –9, find the other two eigenvalues.
5.8 FURTHER READINGS

Jain, M. K., S. R. K. Iyengar and R. K. Jain. 2007. Numerical Methods for Scientific and Engineering Computation. New Delhi: New Age International (P) Limited.
Atkinson, Kendall E. 1989. An Introduction to Numerical Analysis, 2nd Edition.
US: John Wiley & Sons.
Jain, M. K. 1983. Numerical Solution of Differential Equations. New Delhi:
New Age International (P) Limited.
Conte, Samuel D. and Carl de Boor. 1980. Elementary Numerical Analysis: An Algorithmic Approach. New York: McGraw-Hill.
Skeel, Robert. D and Jerry B. Keiper. 1993. Elementary Numerical Computing
with Mathematica. New York: McGraw-Hill.
Balaguruswamy, E. 1999. Numerical Methods. New Delhi: Tata McGraw-Hill.
Datta, N. 2007. Computer Oriented Numerical Methods. New Delhi: Vikas
Publishing House Pvt. Ltd.
UNIT 6 INTERPOLATION AND APPROXIMATION
Structure
6.0 Introduction
6.1 Objectives
6.2 Interpolation and Approximation
6.3 Answers to Check Your Progress Questions
6.4 Summary
6.5 Key Words
6.6 Self Assessment Questions and Exercises
6.7 Further Readings

6.0 INTRODUCTION

Interpolation is the process of defining a function that takes on specified values at specified points. Polynomial interpolation is the best known one-dimensional
interpolation method. Its advantage lies in its simplicity of realization and the good
quality of interpolants obtained from it. You will learn about the various interpolation
methods, namely Lagrange’s interpolation, Newton’s forward and backward
difference interpolation formulae, iterative linear interpolation and inverse
interpolation.
In this unit, you will study about the interpolation, approximation, Hermite
interpolation, piecewise and spline interpolation and bivariate interpolation.

6.1 OBJECTIVES

After going through this unit, you will be able to:
Understand interpolation and approximation
Analyse the Hermite interpolation
Explain the piecewise, spline interpolation and bivariate interpolation
6.2 INTERPOLATION AND APPROXIMATION
The problem of interpolation is a very fundamental problem in numerical analysis.
The term interpolation literally means reading between the lines. In numerical analysis,
interpolation means computing the value of a function f (x) in between values of x
in a table of values. It can be stated explicitly as ‘given a set of (n + 1) values y0,
y1, y2,..., yn for x = x0, x1, x2, ..., xn respectively. The problem of interpolation is to
compute the value of the function y = f (x) for some non-tabular value of x.’
The computation is often made by finding a polynomial, called the interpolating polynomial, of degree less than or equal to n, such that the value of the polynomial is equal to the value of the function at each of the tabulated points. Thus if,
φ(x) = a0 + a1x + a2x² + ... + anx^n    ...(6.1)
is the interpolating polynomial of degree ≤ n, then
φ(xi) = yi, for i = 0, 1, 2, ..., n    ...(6.2)
It is true that, in general, it is difficult to guess the type of function to
approximate f (x). In case of periodic functions, the approximation can be made
by a finite series of trigonometric functions. Polynomial interpolation is a very
useful method for functional approximation. The interpolating polynomial is also
useful as a basis to develop methods for other problems such as numerical
differentiation, numerical integration and solution of initial and boundary value
problems associated with differential equations.
The following theorem, developed by Weierstrass, gives the justification for approximation of the unknown function by a polynomial.
Theorem 6.1: Every function which is continuous in an interval (a, b) can be represented in that interval by a polynomial to any desired accuracy. In other words, it is possible to determine a polynomial P(x) such that |f(x) – P(x)| < ε for every x in the interval (a, b), where ε is any prescribed small quantity. Geometrically, it may be interpreted that the graph of the polynomial y = P(x) is confined to the region bounded by the curves y = f(x) – ε and y = f(x) + ε for all values of x within (a, b), however small ε may be.
Fig. 6.1 Interpolation
The following theorem is regarding the uniqueness of the interpolating polynomial.
Theorem 6.2: For a real-valued function f (x) defined at (n + 1) distinct points Interpolation and
Approximation
x0, x1, ..., xn, there exists exactly one polynomial of degree ≤ n which interpolates
f (x) at x0, x1, ..., xn.
We know that a polynomial P(x) which has (n + 1) distinct roots x0, x1, ..., xn can be written as,
P(x) = (x − x0)(x − x1) ... (x − xn) q(x)
where q(x) is a polynomial whose degree is (n + 1) less than the degree of P(x).
Suppose that two polynomials φ(x) and ψ(x) are of degree ≤ n and that both
interpolate f(x), i.e., φ(x) = ψ(x) = f(x) at x = x0, x1, ..., xn. Let P(x) = φ(x) − ψ(x).
Then the polynomial P(x), of degree ≤ n, vanishes at the (n + 1) points
x0, x1, ..., xn. Thus P(x) ≡ 0, i.e., φ(x) ≡ ψ(x).
Iterative Linear Interpolation
In this method, we successively generate interpolating polynomials, of any degree,
by iteratively using linear interpolating functions.
Let p01(x) denote the linear interpolating polynomial for the tabulated values at
x0 and x1. Thus, we can write as,
p01(x) = [(x1 − x) f0 − (x0 − x) f1] / (x1 − x0)

This can be written in determinant notation as,


p01(x) = (1/(x1 − x0)) × | f0   x0 − x |
                         | f1   x1 − x |        (6.3)
This form of p01(x) is easy to visualize and is convenient for desk computation.
Thus, the linear interpolating polynomial through the pair of points (x0, f0) and
( x j , f j ) can be easily written as,

p0j(x) = (1/(xj − x0)) × | f0   x0 − x |
                         | fj   xj − x |,  for j = 1, 2, ..., n    (6.4)

Now, consider the polynomial denoted by p01j (x) and defined by,

p01j(x) = (1/(xj − x1)) × | p01(x)   x1 − x |
                          | p0j(x)   xj − x |,  for j = 2, 3, ..., n    (6.5)

The polynomial p01j(x) interpolates f(x) at the points x0, x1, xj (j > 1) and is a
polynomial of degree 2. It can be easily verified that
p01j(x0) = f0, p01j(x1) = f1 and p01j(xj) = fj, because p01(x0) = f0 = p0j(x0), etc.

Similarly, the polynomial p012 j ( x) can be constructed by replacing p01(x) by


p012 (x) and p0j (x) by p01j (x).
Thus,
p012j(x) = (1/(xj − x2)) × | p012(x)   x2 − x |
                           | p01j(x)   xj − x |,  for j = 3, 4, ..., n    (6.6)
Evidently, p012j (x) is a polynomial of degree 3 and it interpolates the function
at x0, x1, x2 and xj.
i.e., p012j(x0) = f0, p012j(x1) = f1, p012j(x2) = f2 and p012j(xj) = fj
This process can be continued to generate higher and higher degree
interpolating polynomials.
The results of the iterated linear interpolation can be conveniently represented
as given in the following table.

xk fk p0 j p01 j ... x j − x
x0 f0 x0 − x
x1 f1 p01 x1 − x
x2 f2 p02 p012 x2 − x
x3 f3 p03 p013 x3 − x
... ... ... ... ... ...
xj fj p0 j p01 j xj − x
... ... ... ... ... ...
xn fn p0n p01n xn − x

The successive columns of interpolation results can be conveniently filled by


computing the values of the determinants written using the previous column and
the corresponding entries in the last column xj – x. Thus, for computing p01j’s for
j = 2, 3, ..., n, we evaluate the determinant whose elements are the boldface
quantities and divide the determinant's value by the difference (xj − x) − (x1 − x) = xj − x1.
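The scheme above translates into a short routine. The following is an illustrative sketch (the function name and variable names are ours, not from the text): each pass combines the already-finalized interpolant with the remaining column entries through the 2×2 determinant divided by xj − x(k−1).

```python
def iterated_linear(xs, fs, x):
    """Iterative (Neville-style) linear interpolation at the point x.

    After pass k, p[j] holds the interpolating polynomial built on
    x0, ..., x_{k-1}, xj, evaluated at x; p[-1] is the final result.
    """
    p = list(fs)
    for k in range(1, len(xs)):
        for j in range(k, len(xs)):
            # 2x2 determinant |p[k-1]  x_{k-1}-x ; p[j]  xj-x| divided
            # by (xj - x) - (x_{k-1} - x) = xj - x_{k-1}.
            p[j] = (p[k - 1] * (xs[j] - x) - p[j] * (xs[k - 1] - x)) / (xs[j] - xs[k - 1])
    return p[-1]
```

Applied to the table of the example that follows, it reproduces the tabulated value 0.78628 at x = 2.12.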
Example 1: Find s(2.12) using the following table by iterative linear interpolation:

x 2.0 2.1 2.2 2.3


s( x) 0.7909 0.7875 0.7796 0.7673

Solution: Here, x = 2.12. The following table gives the successive iterative linear
interpolation results. The details of the calculations are shown below in the table.

xj s( x j ) p0 j p 01 j p012 j xj − x
2.0 0.7909 − 0.12
2.1 0.7875 0.78682 − 0.02
2.2 0.7796 0.78412 0.78628 0.08
2.3 0.7673 0.78146 0.78628 0.78628 0.18

Self-Instructional
110 Material
p01 = [0.7909 × (−0.02) − 0.7875 × (−0.12)] / (2.1 − 2.0) = 0.78682
p02 = [0.7909 × 0.08 − 0.7796 × (−0.12)] / (2.2 − 2.0) = 0.78412
p03 = [0.7909 × 0.18 − 0.7673 × (−0.12)] / (2.3 − 2.0) = 0.78146
p012 = [0.78682 × 0.08 − 0.78412 × (−0.02)] / (2.2 − 2.1) = 0.78628
p013 = [0.78682 × 0.18 − 0.78146 × (−0.02)] / (2.3 − 2.1) = 0.78628
p0123 = [0.78628 × 0.18 − 0.78628 × 0.08] / (2.3 − 2.2) = 0.78628
The boldfaced results in the table give the value of the interpolation at x =
2.12. The result 0.78682 is the value obtained by linear interpolation. The result
0.78628 is obtained by quadratic as well as by cubic interpolation. We conclude
that there is no improvement in the third degree polynomial over that of the second
degree.
Notes 1. Unlike Lagrange's method, it is not necessary to fix in advance the degree of the
interpolating polynomial to be used.
2. The approximation by a higher degree interpolating polynomial may
not always lead to a better result. In fact it may be even worse in some
cases.
Consider the function f(x) = 4^x.
We form the finite difference table with its values for x = 0 to 4.

x f ( x) ∆f ( x) ∆2 f ( x) ∆3 f ( x) ∆4 f ( x)
0 1
3
1 4 9
12 27
2 16 36 81
48 108
3 64 144
192
4 256

Newton’s forward difference interpolating polynomial is given below by taking


x0 = 0,

u = (x − x0)/h = x, and
φ(x) = 1 + 3x + (9/2) x(x − 1) + (27/6) x(x − 1)(x − 2) + (81/24) x(x − 1)(x − 2)(x − 3)
Now, consider the values of φ(x) at x = 0.5 obtained by taking successively higher
and higher degree polynomials.
Thus,
φ1(0.5) = 1 + 0.5 × 3 = 2.5, by linear interpolation
φ2(0.5) = 2.5 + [0.5 × (−0.5)/2] × 9 = 1.375, by quadratic interpolation
φ3(0.5) = 1.375 + [0.5 × (−0.5) × (−1.5)/6] × 27 = 3.0625, by cubic interpolation
φ4(0.5) = 3.0625 + [0.5 × (−0.5) × (−1.5) × (−2.5)/24] × 81 = −0.10156, by quartic interpolation
We note that the actual value 4^0.5 = 2 is not obtainable by interpolation, and the
results for higher degree interpolating polynomials become progressively worse.
Note: Lagrange’s interpolation formula and iterative linear interpolation can easily
be implemented for computations by a digital computer.
Example 2: Determine the interpolating polynomial for the following table of data:

x 1 2 3 4
y −1 −1 1 5

Solution: The data is equally spaced. We thus form the finite difference table.

x y ∆y ∆2 y
1 −1
0
2 −1 2
2
3 1 2
4
4 5

Since the differences of second order are constant, the interpolating polynomial
is of degree two. Using Newton’s forward difference interpolation, we get

y = y0 + u ∆y0 + [u(u − 1)/2!] ∆²y0,  where x0 = 1, h = 1, u = x − 1.
Thus, y = −1 + (x − 1) × 0 + [(x − 1)(x − 2)/2] × 2 = x² − 3x + 1.
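As a quick check, an illustrative snippet (not part of the original text) can confirm that the fitted quadratic reproduces the tabulated data:

```python
# Verify that y = x^2 - 3x + 1 reproduces the tabulated data of this example.
table = {1: -1, 2: -1, 3: 1, 4: 5}
fits = all(x**2 - 3*x + 1 == y for x, y in table.items())
```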

Example 3: Compute the value of f(7.5) by using suitable interpolation on the
following table of data.

x 3 4 5 6 7 8
f ( x) 28 65 126 217 344 513

Solution: The data is equally spaced. Thus for computing f(7.5), we use Newton’s
backward difference interpolation. For this, we first form the finite difference table
as shown below.

x f ( x) ∆f ( x) ∆2 f ( x) ∆3 f ( x)
3 28
37
4 65 24
61 6
5 126 30
91 6
6 217 36
127 6
7 344 42
169
8 513

The differences of order three are constant and hence we use Newton’s
backward difference interpolating polynomial of degree three.
f(x) = yn + v ∇yn + [v(v + 1)/2!] ∇²yn + [v(v + 1)(v + 2)/3!] ∇³yn,
where v = (x − xn)/h. For x = 7.5, xn = 8,
v = (7.5 − 8)/1 = −0.5
f(7.5) = 513 − 0.5 × 169 + [(−0.5)(−0.5 + 1)/2] × 42 + [(−0.5)(−0.5 + 1)(−0.5 + 2)/6] × 6
       = 513 − 84.5 − 5.25 − 0.375
       = 422.875
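The computation above can be organized as a short routine. This is a sketch under our own naming (not the book's notation); it builds the difference columns and accumulates the backward-difference terms:

```python
def newton_backward(xs, ys, x):
    """Newton's backward difference interpolation on equally spaced xs
    (a sketch; assumes xs is sorted with constant spacing h)."""
    n = len(xs)
    h = xs[1] - xs[0]
    # Column k of diffs holds the k-th order differences of ys.
    diffs = [list(ys)]
    for k in range(1, n):
        prev = diffs[-1]
        diffs.append([prev[i] - prev[i - 1] for i in range(1, len(prev))])
    v = (x - xs[-1]) / h
    term, total = 1.0, ys[-1]
    for k in range(1, n):
        term *= (v + k - 1) / k          # accumulates v(v+1)...(v+k-1)/k!
        total += term * diffs[k][-1]     # nabla^k y_n is the last entry of column k
    return total
```

On the table of this example it returns 422.875, matching the hand computation.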
Example 4: Determine the interpolating polynomial for the following data:

x 2 4 6 8 10
f ( x) 5 10 17 29 50

Solution: The data is equally spaced. We construct Newton's forward
difference interpolating polynomial. The finite difference table is,

x f ( x) ∆f ( x) ∆2 f ( x) ∆3 f ( x) ∆4 f ( x)
2 5
5
4 10 2
7 3
6 17 5 1
12 4
8 29 9
21
10 50

Here, x0 = 2, u = (x–x0)/h = (x–2)/2.


The interpolating polynomial is,
f(x) = f(x0) + u ∆f(x0) + [u(u − 1)/2!] ∆²f(x0) + [u(u − 1)(u − 2)/3!] ∆³f(x0)
       + [u(u − 1)(u − 2)(u − 3)/4!] ∆⁴f(x0)
     = 5 + 5u + [u(u − 1)/2] × 2 + [u(u − 1)(u − 2)/6] × 3 + [u(u − 1)(u − 2)(u − 3)/24] × 1,
       with u = (x − 2)/2
     = (1/384)(x⁴ + 4x³ − 52x² + 1040x)
Example 5: Find the interpolating polynomial which takes the following values:
y(0) = 1, y(0.1) = 0.9975, y(0.2) = 0.9900, y(0.3) = 0.9800. Hence compute
y(0.05).
Solution: The data values of x are equally spaced, so we form the finite difference
table,

x y ∆y ∆2 y ∆3 y
0.0 1.0000
− 25
0.1 0.9975 − 50
− 75 25
0.2 0.9900 − 25
− 100
0.3 0.9800

Here, h = 0.1. Choosing x0 = 0.0, we have s = x/0.1 = 10x. Newton's forward
difference interpolation formula is,

y = y0 + s ∆y0 + [s(s − 1)/2!] ∆²y0 + [s(s − 1)(s − 2)/3!] ∆³y0
  = 1 + 10x(−0.0025) + [10x(10x − 1)/2](−0.0050) + [10x(10x − 1)(10x − 2)/6](0.0025)
  = 1 + 0.00833x − 0.375x² + 0.41667x³
∴ y(0.05) = 1 − 0.00125 + 0.000625 + 0.00015625 ≈ 0.9995
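An illustrative re-evaluation of the interpolation formula at s = 0.5, taken directly from the difference table above (the variable names are ours):

```python
# Newton forward evaluation at x = 0.05, where s = 10x = 0.5.
d1, d2, d3 = -0.0025, -0.0050, 0.0025   # delta y0, delta^2 y0, delta^3 y0
s = 0.5
y_005 = 1.0 + s*d1 + s*(s - 1)/2*d2 + s*(s - 1)*(s - 2)/6*d3
```

The sum is 1 − 0.00125 + 0.000625 + 0.00015625 ≈ 0.9995.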
Example 6: Compute f(0.23) and f(0.29) by using suitable interpolation formula
with the table of data given below.
x 0.20 0.22 0.24 0.26 0.28 0.30
f ( x) 1.6596 1.6698 1.6804 1.6912 1.7024 1.7139

Solution: The data being equally spaced, we use Newton’s forward difference
interpolation for computing f(0.23), and for computing f(0.29), we use Newton’s
backward difference interpolation. We first form the finite difference table,

x f ( x) ∆f ( x) ∆2 f ( x)
0.20 1.6596
102
0.22 1.6698 4
106
0.24 1.6804 2
108
0.26 1.6912 4
112
0.28 1.7024 3
115
0.30 1.7139

We observe that differences of order higher than two would be irregular. Hence,
we use second degree interpolating polynomial. For computing f(0.23), we take
x0 = 0.22, so that u = (x − x0)/h = (0.23 − 0.22)/0.02 = 0.5.
Using Newton’s forward difference interpolation, we compute
f(0.23) = 1.6698 + 0.5 × 0.0106 + [(0.5)(0.5 − 1.0)/2] × 0.0002
        = 1.6698 + 0.0053 − 0.000025
        = 1.675075 ≈ 1.6751
Again for computing f (0.29), we take xn = 0.30,

so that v = (x − xn)/h = (0.29 − 0.30)/0.02 = −0.5
Using Newton’s backward difference interpolation we evaluate,
f(0.29) = 1.7139 − 0.5 × 0.0115 + [(−0.5)(−0.5 + 1.0)/2] × 0.0003
        = 1.7139 − 0.00575 − 0.00004
        = 1.70811 ≈ 1.7081
Example 7: Compute values of ex at x = 0.02 and at x = 0.38 using suitable
interpolation formula on the table of data given below.
x 0.0 0.1 0.2 0.3 0.4
e x 1.0000 1.1052 1.2214 1.3499 1.4918

Solution: The data is equally spaced. We have to use Newton’s forward difference
interpolation formula for computing ex at x = 0.02, and for computing ex at
x = 0.38, we have to use Newton’s backward difference interpolation formula.
We first form the finite difference table.

x y = ex ∆y ∆2 y ∆3 y ∆4 y
0.0 1.0000
1052
0.1 1.1052 110
1162 13
0.2 1.2214 123 −2
1285 11
0.3 1.3499 134
1419
0.4 1.4918

For computing e^0.02, we take x0 = 0,
∴ u = (x − x0)/h = (0.02 − 0.0)/0.1 = 0.2
By Newton’s forward difference interpolation formula, we have

e^0.02 = 1.0 + 0.2 × 0.1052 + [0.2(0.2 − 1)/2] × 0.0110 + [0.2(0.2 − 1)(0.2 − 2)/6] × 0.0013
         + [0.2(0.2 − 1)(0.2 − 2)(0.2 − 3)/24] × (−0.0002)
       = 1.0 + 0.02104 − 0.00088 + 0.00006 + 0.00001
       = 1.02023 ≈ 1.0202

For computing e^0.38, we take xn = 0.4. Thus, v = (0.38 − 0.4)/0.1 = −0.2
By Newton’s backward difference interpolation formula, we have

e^0.38 = 1.4918 + (−0.2) × 0.1419 + [(−0.2)(−0.2 + 1)/2] × 0.0134
         + [(−0.2)(−0.2 + 1)(−0.2 + 2)/6] × 0.0011 + [(−0.2)(−0.2 + 1)(−0.2 + 2)(−0.2 + 3)/24] × (−0.0002)
       = 1.4918 − 0.02838 − 0.00107 − 0.00005 + 0.00001
       = 1.46231 ≈ 1.4623
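Both computations of this example can be replayed and compared against the true exponential. The snippet below is an illustrative sketch (function names are ours); it builds the difference columns once and applies the forward formula at u = 0.2 and the backward formula at v = −0.2:

```python
import math

# Forward differences of the tabulated e^x values (h = 0.1, x0 = 0.0).
ys = [1.0000, 1.1052, 1.2214, 1.3499, 1.4918]
cols = [ys]
for k in range(1, 5):
    cols.append([cols[-1][i + 1] - cols[-1][i] for i in range(len(cols[-1]) - 1)])

def forward(u):
    """Newton forward formula using the top entry of each column."""
    term, total = 1.0, cols[0][0]
    for k in range(1, 5):
        term *= (u - (k - 1)) / k
        total += term * cols[k][0]
    return total

def backward(v):
    """Newton backward formula using the bottom entry of each column."""
    term, total = 1.0, cols[0][-1]
    for k in range(1, 5):
        term *= (v + (k - 1)) / k
        total += term * cols[k][-1]
    return total

e002 = forward(0.2)    # approximates e^0.02
e038 = backward(-0.2)  # approximates e^0.38
```

Both results agree with math.exp to about four decimal places, which is the accuracy of the data.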

Lagrange’s Interpolation
Lagrange’s interpolation is useful for unequally spaced tabulated values. Let y = f
(x) be a real valued function defined in an interval (a, b) and let y0, y1,..., yn be the
(n + 1) known values of y at x0, x1, ..., xn, respectively. The polynomial φ(x),
which interpolates f(x), is of degree less than or equal to n. Thus,
φ(xi) = yi, for i = 0, 1, 2, ..., n    (6.7)
The polynomial φ(x) is assumed to be of the form,
φ(x) = Σ (i = 0 to n) li(x) yi    (6.8)
where each li(x) is a polynomial of degree ≤ n in x and is called Lagrangian
function.
Now, φ(x) satisfies Equation (6.7) if each li(x) satisfies,
li(xj) = 0 when i ≠ j, and li(xj) = 1 when i = j    (6.9)
Equation (6.9) suggests that li(x) vanishes at the (n+1) points x0, x1, ... xi–1,
xi+1,..., xn. Thus, we can write,
li(x) = ci (x – x0) (x – x1) ... (x – xi–1) (x – xi+1)...(x – xn)
where ci is a constant given by li(xi) = 1,
i.e., ci (xi − x0)(xi − x1)...(xi − xi−1)(xi − xi+1)...(xi − xn) = 1
Thus, li(x) = [(x − x0)(x − x1)...(x − xi−1)(x − xi+1)...(x − xn)]
            / [(xi − x0)(xi − x1)...(xi − xi−1)(xi − xi+1)...(xi − xn)], for i = 0, 1, 2, ..., n    (6.10)

Interpolation and Equations (6.8) and (6.10) together give Lagrange’s interpolating polynomial.
Approximation
Algorithm: To compute f (x) by Lagrange’s interpolation.
Step 1: Read n [n being the number of values]
Step 2: Read values of xi, fi for i = 1, 2, ..., n.
Step 3: Set sum = 0, i = 1
Step 4: Read x [x being the interpolating point]
Step 5: Set j = 1, product = 1
Step 6: If j ≠ i, set product = product × (x − xj)/(xi − xj)
Step 7: Set j = j + 1
Step 8: Check if j > n, then go to Step 9 else go to Step 6
Step 9: Compute sum = sum + product × fi
Step 10: Set i = i + 1
Step 11: Check if i > n, then go to Step 12
else go to Step 5
Step 12: Write x, sum
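The algorithm above can be written, for instance, in Python. This is an illustrative sketch (the function name is ours):

```python
def lagrange(xs, fs, x):
    """Lagrange interpolation at x, following the step-by-step algorithm:
    for each i, build the Lagrangian factor l_i(x) and accumulate l_i(x)*f_i."""
    total = 0.0
    for i in range(len(xs)):
        product = 1.0
        for j in range(len(xs)):
            if j != i:
                product *= (x - xs[j]) / (xs[i] - xs[j])
        total += product * fs[i]
    return total
```

On the data of the examples that follow, it reproduces the hand-computed values.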
Example 8: Compute f (0.4) for the table below by Lagrange’s interpolation.
x 0.3 0.5 0.6
f ( x) 0.61 0.69 0.72

Solution: The Lagrange’s interpolation formula gives,


(0.4 − 0.5)(0.4 − 0.6) (0.4 − 0.3)(0.4 − 0.6) (0.4 − 0.3)(0.4 − 0.5)
f (0.4) = × 0.61 + × 0.69 + × 0.72
(0.3 − 0.5)(0.3 − 0.6) (0.5 − 0.3)(0.5 − 0.6) (0.6 − 0.3)(0.6 − 0.5)
= 0.203 + 0.69 − 0.24 = 0.653 ≈ 0.65

Thus, f (0.4) = 0.65.


Example 9: Using Lagrange’s formula, find the value of f (0) from the table given
below.
x −1 − 2 2 4
f ( x) − 1 − 9 11 69

Solution: Using Lagrange’s interpolation formula, we find


 (0 + 2)(0 − 2)(0 − 4)   (0 + 1)(0 − 2)(0 − 4) 
f (0) =  × (−1) +  × (−9)
 ( −1 + 2 ) ( − 1 − 2)( −1 − 4)   ( −2 + 1)(− 2 − 2)(− 2 − 4) 
 (0 + 1) (0 + 2)(0 − 4)   (0 + 1)(0 + 2)(0 − 2) 
+ ×11 +  × 69
 ( 2 + 1 )(2 + 2)(2 − 4 )   ( 4 + 1)(4 + 2)(4 − 2 ) 
= −16/15 + 9/3 + 11/3 − 69/15 = 20/3 − 85/15
= 20/3 − 17/3 = 1
Example 10: Determine the interpolating polynomial of degree three for the table
given below.
x −1 0 1 2
f ( x) 1 1 1 − 3
Solution: We have Lagrange’s third degree interpolating polynomial as,
f(x) = Σ (i = 0 to 3) li(x) f(xi)

where
( x − 0)( x − 1)( x − 2) 1
l0 ( x) = = − x( x − 1)( x − 2)
(−1 − 0)(−1 − 1)(−1 − 2) 6
( x + 1)( x − 1)( x − 2) 1
l1 ( x ) = = ( x + 1)( x − 1)( x − 2)
(0 + 1)(0 − 1)(0 − 2) 2
( x + 1)( x − 0)( x − 2) 1
l2 ( x) = = − ( x + 1) x( x − 2)
(1 + 1)(1 − 0)(1 − 2) 2
l3(x) = (x + 1)(x − 0)(x − 1) / [(2 + 1)(2 − 0)(2 − 1)] = (1/6)(x + 1)x(x − 1)
∴ f(x) = −(1/6)x(x − 1)(x − 2) × 1 + (1/2)(x + 1)(x − 1)(x − 2) × 1
         − (1/2)(x + 1)x(x − 2) × 1 + (1/6)(x + 1)x(x − 1) × (−3)
       = −(1/6)(4x³ − 4x − 6)
       = −(1/3)(2x³ − 2x − 3)
Example 11: Evaluate the values of f (2) and f (6.3) using Lagrange’s interpolation
formula for the table of values given below.

x 1.2 2.5 4 5.1 6 6.5


f ( x) 6.84 14.25 27 39.21 51 58.25

Solution: It is not advisable to use a higher degree interpolating polynomial. For


evaluation of f (2) we take a second degree polynomial using the values of f (x) at
the points x0 = 1.2, x1 = 2.5 and x2 = 4.
Thus,
f (2) = l0(2) × 6.84 + l1(2) × 14.25 + l2(2) × 27
where
(2 − 2.5)(2 − 4)
l0 (2) = = 0.275
(1.2 − 2.5)(1.2 − 4)
(2 − 1.2)(2 − 4)
l1 (2) = = 0.821
(2.5 − 1.2)(2.5 − 4)
(2 − 1.2)(2 − 2.5)
l 2 (2) = = −0.095
(4 − 1.2)(4 − 2.5)

∴ f(2) = 0.275 × 6.84 + 0.821 × 14.25 − 0.095 × 27 = 11.015 ≈ 11.02
For evaluation of f(6.3), we consider the values of f(x) at x0 = 5.1, x1 = 6.0, x2 = 6.5.
Thus, f(6.3) = l0(6.3) × 39.21 + l1(6.3) × 51 + l2(6.3) × 58.25
where
(6.3 − 6.0)(6.3 − 6.5)
l0 (6.3) = = −0.048
(5.1 − 6.0)(5.1 − 6.5)
(6.3 − 5.1)(6.3 − 6.5)
l1 (6.3) = = 0.533
(6 − 5.1)(6.0 − 6.5)
(6.3 − 5.1)(6.3 − 6.0)
l 2 (6.3) = = 0.514
(6.5 − 5.1)(6.5 − 6.0)

∴ f (6.3) = − 0.048 × 39.21 + 0.533 × 51 + 0.514 × 58.25


= 55.241 ≅ 55.24

Since the computed result cannot be more accurate than the data, the final
result is rounded-off to the same number of decimals as the data. In some cases,
a higher degree interpolating polynomial may not lead to better results.
Interpolation for Equally Spaced Tabular Values
For interpolation of an unknown function when the tabular values of the argument
x are equally spaced, we have two important interpolation formulae, viz.,
(i) Newton’s forward difference interpolation formula
(ii) Newton’s backward difference interpolation formula
We will first discuss the finite differences which are used in evaluating the
above two formulae.
Finite Differences
Let us assume that values of a function y = f (x) are known for a set of equally
spaced values of x given by {x0, x1,..., xn}, such that the spacing between any
two consecutive values is equal. Thus, x1 = x0 + h, x2 = x1 + h,..., xn = xn–1 + h,
so that xi = x0 + ih for i = 1, 2, ...,n. We consider two types of differences known
as forward differences and backward differences of various orders. These
differences can be tabulated in a finite difference table as explained in the subsequent
sections.
Forward Differences
Let y0, y1,..., yn be the values of a function y = f (x) at the equally spaced values of
x = x0, x1, ..., xn. The differences between consecutive values of y, given by
y1 − y0, y2 − y1, ..., yn − yn−1, are called the first order forward differences of the
function y = f(x) at the points x0, x1, ..., xn−1. These differences are denoted by,

∆y0 = y1 − y0 , ∆y1 = y2 − y1 , ..., ∆yn−1 = yn − yn−1 Interpolation and
Approximation
(6.11)
where ∆ is termed as the forward difference operator, defined by,
∆f(x) = f(x + h) − f(x)    (6.12)
Thus, ∆ yi = yi+1 – yi, for i = 0, 1, 2, ..., n – 1, are the first order forward
differences at xi.
The differences of these first order forward differences are called the second
order forward differences.
Thus, ∆²yi = ∆(∆yi) = ∆yi+1 − ∆yi, for i = 0, 1, 2, ..., n − 2    (6.13)
Evidently,
∆2 y0 = ∆y1 − ∆y0 = y 2 − y1 − ( y1 − y0 ) = y2 − 2 y1 + y0

And, ∆2 yi = yi + 2 − yi +1 − ( yi +1 − yi )
i.e., ∆ 2 yi = yi + 2 − 2 yi +1 + yi , for i = 0, 1, 2, ..., n − 2
(6.14)
Similarly, the third order forward differences are given by,
∆ 3 yi = ∆ 2 yi +1 − ∆ 2 yi , for i = 0, 1, 2, ..., n − 3

i.e., ∆ 3 y i = y i + 3 − 3 y i + 2 + 3 y i +1 − y i
(6.15)
Finally, we can define the nth order forward difference by,
∆ⁿy0 = yn − n yn−1 + [n(n − 1)/2!] yn−2 − ... + (−1)ⁿ y0    (6.16)
The coefficients in the above equation are the coefficients of the binomial
expansion of (1 − x)ⁿ.
The forward differences of various orders for a table of values of a function
y = f (x), are usually computed and represented in a diagonal difference table. A
diagonal difference table for a table of values of y = f (x), for six points x0, x1, x2,
x3, x4, x5 is shown here.

Diagonal difference Table for y = f(x):
i    xi    yi     ∆yi     ∆²yi     ∆³yi     ∆⁴yi     ∆⁵yi
0    x0    y0
                  ∆y0
1    x1    y1             ∆²y0
                  ∆y1              ∆³y0
2    x2    y2             ∆²y1              ∆⁴y0
                  ∆y2              ∆³y1              ∆⁵y0
3    x3    y3             ∆²y2              ∆⁴y1
                  ∆y3              ∆³y2
4    x4    y4             ∆²y3
                  ∆y4
5    x5    y5

The entries in any column of the differences are computed as the differences
of the entries of the previous column and one placed in between them. The upper
data in a column is subtracted from the lower data to compute the forward
differences. We notice that the forward differences of various orders with respect
to yi are along the forward diagonal through it. Thus ∆y0, ∆²y0, ∆³y0, ∆⁴y0 and
∆⁵y0 lie along the top forward diagonal through y0. Consider the following example.
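Building the difference columns is a simple repeated subtraction; the top entry of each column is the corresponding forward difference of y0. A minimal sketch (names are illustrative):

```python
def forward_difference_table(ys):
    """Return the columns of a forward difference table:
    cols[k][i] is the k-th order forward difference starting at index i."""
    cols = [list(ys)]
    while len(cols[-1]) > 1:
        prev = cols[-1]
        cols.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return cols

# Data of the example that follows: the top forward diagonal through y0.
cols = forward_difference_table([8, 12, 21, 36, 62])
top_diagonal = [c[0] for c in cols[1:]]
```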
Example 12: Given the table of values of y = f (x),

x 1 3 5 7 9
y 8 12 21 36 62

form the diagonal difference table and find the values of ∆f(5), ∆²f(3), ∆³f(1).
Solution: The diagonal difference table is,
i xi yi ∆yi ∆2 yi ∆3 yi ∆4 yi
0 1 8
4
1 3 12 5
9 1
2 5 21 6 4
15 5
3 7 36 11
26
4 9 62

From the table, we find that ∆f (5) = 15, the entry along the diagonal through
the entry 21 of f (5).
Similarly, ∆²f(3) = 6, the entry along the diagonal through f(3). Finally,
∆³f(1) = 1.
Backward Differences Interpolation and
Approximation
The backward differences of various orders for a table of values of a function y =
f (x) are defined in a manner similar to the forward differences. The backward
difference operator ∇ (inverted triangle) is defined by ∇f ( x) = f ( x) − f ( x − h). NOTES
Thus, ∇yk = yk − yk −1 , for k = 1, 2, ..., n
i.e., ∇y1 = y1 − y0 , ∇y 2 = y 2 − y1 ,..., ∇y n = y n − y n −1
(6.17)
The backward differences of second order are defined by,

∇ 2 yk = ∇yk − ∇yk −1 = yk − 2 yk −1 + yk − 2
Hence,
∇ 2 y2 = y2 − 2 y1 + y0 , and ∇ 2 yn = yn − 2 yn −1 + yn −2
(6.18)
Higher order backward differences can be defined in a similar manner.
Thus, ∇ 3 yn = yn − 3 yn −1 + 3 yn −2 − yn −3 , etc.
(6.19)
Finally,

∇ⁿyn = yn − n yn−1 + [n(n − 1)/2!] yn−2 − ... + (−1)ⁿ y0    (6.20)
The backward differences of various orders can be computed and placed in a
diagonal difference table. The backward differences at a point are then found
along the backward diagonal through the point. The following table shows the
backward differences entries.
Diagonal difference Table of backward differences:

i    xi    yi     ∇yi     ∇²yi     ∇³yi     ∇⁴yi     ∇⁵yi
0    x0    y0
                  ∇y1
1    x1    y1             ∇²y2
                  ∇y2              ∇³y3
2    x2    y2             ∇²y3              ∇⁴y4
                  ∇y3              ∇³y4              ∇⁵y5
3    x3    y3             ∇²y4              ∇⁴y5
                  ∇y4              ∇³y5
4    x4    y4             ∇²y5
                  ∇y5
5    x5    y5
The entries along a column in the table are computed (as discussed in the previous
example) as the differences of the entries in the previous column and are placed in
between. We notice that the backward differences of various orders with respect
to yi are along the backward diagonal through it. Thus, ∇y5, ∇²y5, ∇³y5, ∇⁴y5 and
∇⁵y5 are along the lowest backward diagonal through y5.

We may note that the data entries of the backward difference table in any
column are the same as those of the forward difference table, but the differences
are for different reference points.
Specifically, if we compare the columns of first order differences we can see
that,
∆y0 = ∇y1 , ∆y1 = ∇y 2 , ..., ∆y n −1 = ∇y n

Hence, ∆yi =∇yi +1 , for i =0, 1, 2, ..., n − 1

Similarly, ∆2 y 0 = ∇ 2 y 2 , ∆2 y1 = ∇ 2 y3 ,..., ∆2 y n − 2 = ∇ 2 y n

Thus, ∆²yi = ∇²yi+2, for i = 0, 1, 2, ..., n − 2

In general, ∆k yi = ∇ k yi + k .

Conversely, ∇ k yi = ∆k yi − k .
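The identity ∆^k yi = ∇^k yi+k can be checked numerically: both are the same entry of the k-th difference column, read with different reference indices. An illustrative check:

```python
ys = [8, 12, 21, 36, 62]

def fwd(ys, k, i):
    """k-th forward difference at index i."""
    col = list(ys)
    for _ in range(k):
        col = [col[j + 1] - col[j] for j in range(len(col) - 1)]
    return col[i]

def bwd(ys, k, i):
    """k-th backward difference at index i of the original sequence."""
    col = list(ys)
    for _ in range(k):
        col = [col[j] - col[j - 1] for j in range(1, len(col))]
    return col[i - k]

# delta^k y_i == nabla^k y_{i+k} for every valid k and i.
checks = all(fwd(ys, k, i) == bwd(ys, k, i + k)
             for k in range(1, 4) for i in range(len(ys) - k))
```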
Example 13: Given the following table of values of y = f (x):

x 1 3 5 7 9
y 8 12 21 36 62

Find the values of ∇y(7), ∇²y(9), ∇³y(9).


Solution: We form the diagonal difference table,

xi yi ∇y i ∇ 2 yi ∇ 3 yi ∇ 4 yi
1 8
4
3 12 5
9 1
5 21 6 4
15 5
7 36 11
26
9 62

From the table, we can easily find ∇y(7) = 15, ∇²y(9) = 11, ∇³y(9) = 5.

Symbolic Operators
We consider the finite differences of an equally spaced tabular data for developing
numerical methods. Let a function y = f(x) have a set of values y0, y1, y2, ...,
corresponding to points x0, x1, x2,..., where x1 = x0 + h, x2 = x0 + 2h,...., are
equally spaced with spacing h. We define different types of finite differences such
as forward differences, backward differences and central differences, and express
them in terms of operators.
The forward difference of a function f (x) is defined by the operator ∆, called
the forward difference operator given by,
∆f(x) = f(x + h) − f(x)    (6.21)
At a tabulated point xi , we have
∆f(xi) = f(xi + h) − f(xi)    (6.22)
We also denote ∆f ( xi ) by ∆yi , given by
∆yi = yi+1 − yi, for i = 0, 1, 2, ...    (6.23)
We also define an operator E, called the shift operator which is given by,
E f(x) = f(x + h)
(6.24)
Then ∆f(x) = Ef(x) − f(x); thus, ∆ ≡ E − 1 is an operator relation.    (6.25)
While Equation (6.21) defines the first order forward difference, we can define
second order forward difference by,
∆²yi = ∆(∆yi) = ∆(yi+1 − yi) = ∆yi+1 − ∆yi    (6.26)

Shift Operator
The shift operator is denoted by E and is defined by E f (x) = f (x + h). Thus,
Eyk = yk+1
Higher order shift operators can be defined by, E2f (x) = Ef (x + h) = f (x +
2h).
E2yk = E(Eyk) = E(yk + 1) = yk + 2
In general, Emf (x) = f (x + mh)
Emyk = yk+m
Relation between Forward Difference Operator and Shift Operator
From the definition of the forward difference operator, we have
∆y(x) = y(x + h) − y(x)
      = Ey(x) − y(x)
      = (E − 1) y(x)
This leads to the operator relation,
∆ ≡ E − 1,  or  E ≡ 1 + ∆    (6.27)
Similarly, for the second order forward difference, we have
∆²y(x) = ∆y(x + h) − ∆y(x)
       = y(x + 2h) − 2y(x + h) + y(x)
       = E²y(x) − 2Ey(x) + y(x)
       = (E² − 2E + 1) y(x)
This gives the operator relation, ∆² ≡ (E − 1)².
Finally, we have ∆^m ≡ (E − 1)^m, for m = 1, 2, ...    (6.28)
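The relation ∆² ≡ (E − 1)² can be verified on sampled values: applying ∆ twice must equal the shifted combination f(x + 2h) − 2f(x + h) + f(x). An illustrative check on a sample function:

```python
# Check delta^2 f(x) = f(x+2h) - 2 f(x+h) + f(x), i.e. (E - 1)^2 f(x).
f = lambda x: x**3            # illustrative sample function
h, x = 0.5, 1.0
d1 = lambda t: f(t + h) - f(t)            # delta f
d2_direct = d1(x + h) - d1(x)             # delta(delta f)
d2_shift = f(x + 2*h) - 2*f(x + h) + f(x) # (E^2 - 2E + 1) f
```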
Relation between the Backward Difference Operator and the Shift Operator
From the definition of backward difference operator, we have
∇ f ( x ) = f ( x ) − f ( x − h)
= f ( x) − E −1 f ( x) = (1 − E −1 ) f ( x)

This leads to the operator relation, ∇ ≡ 1 − E −1 (6.29)


Similarly, the second order backward difference is defined by,

∇²f(x) = ∇f(x) − ∇f(x − h)
       = f(x) − f(x − h) − [f(x − h) − f(x − 2h)]
       = f(x) − 2f(x − h) + f(x − 2h)
       = f(x) − 2E⁻¹f(x) + E⁻²f(x)
       = (1 − 2E⁻¹ + E⁻²) f(x)
       = (1 − E⁻¹)² f(x)

This gives the operator relation, ∇ 2 ≡ (1 − E −1 ) 2 and in general,

∇ m ≡ (1 − E −1 ) m (6.30)
Relations between the Operators E, D and ∆
We have by Taylor's theorem,
f(x + h) = f(x) + h f′(x) + (h²/2!) f″(x) + ...
Thus, Ef(x) = f(x) + hDf(x) + (h²D²/2!) f(x) + ..., where D ≡ d/dx
Or, (1 + ∆) f(x) = [1 + hD + h²D²/2! + ...] f(x) = e^(hD) f(x)
Thus, e^(hD) ≡ 1 + ∆ ≡ E    (6.31)
Also, hD ≡ log(1 + ∆)
Or, hD ≡ ∆ − ∆²/2 + ∆³/3 − ∆⁴/4 + ...
so that D ≡ (1/h)[∆ − ∆²/2 + ∆³/3 − ∆⁴/4 + ...]

Central Difference Operator


The central difference operator, denoted by δ, is defined by,
δy(x) = y(x + h/2) − y(x − h/2)
Thus, δy(x) = (E^(1/2) − E^(−1/2)) y(x)
giving the operator relation, δ ≡ E^(1/2) − E^(−1/2), or δE^(1/2) ≡ E − 1.
Also,
δyn = (E^(1/2) − E^(−1/2)) y(xn) = E^(1/2) yn − E^(−1/2) yn
i.e., δyn = yn+1/2 − yn−1/2
Further,
δ²yn = δ(δyn) = δ(yn+1/2 − yn−1/2)
     = (E^(1/2) − E^(−1/2)) yn+1/2 − (E^(1/2) − E^(−1/2)) yn−1/2
     = (yn+1 − yn) − (yn − yn−1)
     = yn+1 − 2yn + yn−1 = (E^(1/2) − E^(−1/2))² yn = (E + E⁻¹ − 2) yn
∴ δ² ≡ E + E⁻¹ − 2    (6.32)
Even though the central difference operator uses fractional arguments, it is still
widely used. It is related to the averaging operator μ, which is defined by,
μ = (1/2)(E^(1/2) + E^(−1/2))    (6.33)
Squaring, μ² = (1/4)(E + 2 + E⁻¹) = (1/4)(δ² + 2 + 2)
∴ μ² ≡ 1 + (1/4)δ²    (6.34)
4
It may be noted that, δy1/2 = y1 − y0 = ∆y0
Also, δE^(1/2) y1 = δy3/2 = y2 − y1 = ∆y1
∴ δE^(1/2) ≡ E − 1 ≡ ∆    (6.35)
Further,
δ³yn = δ(δ²yn) = δ(yn+1 − 2yn + yn−1)
     = δyn+1 − 2δyn + δyn−1
     = yn+3/2 − 3yn+1/2 + 3yn−1/2 − yn−3/2
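The identity μ² ≡ 1 + (1/4)δ² can be checked on function values, since μ²y(x) = (y(x + h) + 2y(x) + y(x − h))/4. An illustrative check:

```python
# mu^2 y(x) = (y(x+h) + 2y(x) + y(x-h))/4, while
# (1 + delta^2/4) y(x) = y(x) + (y(x+h) - 2y(x) + y(x-h))/4.
y = lambda x: x**2 + 3*x + 2   # illustrative sample function
h, x = 0.25, 1.5
lhs = (y(x + h) + 2*y(x) + y(x - h)) / 4
rhs = y(x) + (y(x + h) - 2*y(x) + y(x - h)) / 4
```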

Example 14: Prove the following operator relations:


(i) ∆ ≡ ∇E (ii) (1 + ∆) (1 − ∇) = 1
Solution:
(i) Since, ∆f (x) = f (x + h) − f (x) = Ef (x) − f (x) , ∆ ≡ E − 1 (1)

and since ∇f ( x) = f ( x) − f ( x − h) = (1 − E −1 ) f ( x) , ∇ ≡ 1 − E −1 (2)

E −1
Thus, ∇ ≡ or ∇E ≡ E − 1 ≡ ∆
E
Hence proved.
(ii) From Equation (1), we have E ≡ ∆ + 1 (3)
and from Equation (2) we get E −1 ≡ 1 − ∇ (4)
Combining Equations (3) and (4), we get (1 + ∆ )(1 − ∇) ≡ 1.
Example 15: If fi is the value of f (x) at xi where xi = x0 + ih, for i = 1,2,..., prove
that,
fi = E^i f0 = Σ (j = 0 to i) C(i, j) ∆^j f0,
where C(i, j) denotes the binomial coefficient.
Solution: We can write Ef (x) = f (x + h) Interpolation and
Approximation
Ef(x) = f(x) + h f′(x) + (h²/2!) f″(x) + ...
      = f(x) + hDf(x) + (h²/2!) D²f(x) + ..., where D ≡ d/dx
      = [1 + hD + h²D²/2! + ...] f(x) = e^(hD) f(x)
Since E ≡ 1 + ∆, this gives 1 + ∆ = e^(hD)
Hence, e^(ihD) = (1 + ∆)^i
Now, fi = f(xi) = f(x0 + ih) = E^i f(x0)
∴ fi = (1 + ∆)^i f(x0), since E ≡ 1 + ∆
∴ fi = Σ (j = 0 to i) C(i, j) ∆^j f0, using binomial expansion.
Hence proved.
Example 16: Compute the following differences:
(i) ∆n e x (ii) ∆n x n
Solution:
(i) We have, ∆ e x = e x + h − e x = e x (e h − 1)

Again, ∆2 e x = ∆(∆e x ) = (e h − 1)∆e x = (e h − 1) 2 e x

Thus by induction, ∆n e x = (e h − 1) n e x .
(ii) We have,

∆ ( x n ) = ( x + h) n − x n
= nhxⁿ⁻¹ + [n(n − 1)/2!] h²xⁿ⁻² + ... + hⁿ

Thus, ∆( x n ) is a polynomial of degree (n – 1)

Also, ∆(hⁿ) = 0. Hence, we can say that ∆²(xⁿ) is a polynomial of degree (n − 2)
with the leading term n(n − 1)h²xⁿ⁻².

Proceeding n times, we get
∆ⁿ(xⁿ) = n(n − 1) ... 1 · hⁿ = n! hⁿ
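Both results of this example can be verified numerically by computing n-th forward differences directly from the definition (an illustrative sketch):

```python
import math

def nth_forward_diff(f, x, h, n):
    """n-th forward difference of f at x with spacing h, by recursion."""
    if n == 0:
        return f(x)
    return nth_forward_diff(f, x + h, h, n - 1) - nth_forward_diff(f, x, h, n - 1)

h = 0.5
# (i)  delta^3 e^x = (e^h - 1)^3 e^x
lhs_exp = nth_forward_diff(math.exp, 1.0, h, 3)
rhs_exp = (math.exp(h) - 1)**3 * math.exp(1.0)
# (ii) delta^3 x^3 = 3! h^3 (a constant, independent of x)
d3_cubic = nth_forward_diff(lambda x: x**3, 2.0, h, 3)
```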

Example 17: Prove that,
(i) ∆[f(x)/g(x)] = [g(x) ∆f(x) − f(x) ∆g(x)] / [g(x) g(x + h)]
(ii) ∆{log f(x)} = log[1 + ∆f(x)/f(x)]

Solution:
(i) We have,
∆[f(x)/g(x)] = f(x + h)/g(x + h) − f(x)/g(x)
= [f(x + h) g(x) − f(x) g(x + h)] / [g(x + h) g(x)]
= [f(x + h) g(x) − f(x) g(x) + f(x) g(x) − f(x) g(x + h)] / [g(x + h) g(x)]
= [g(x){f(x + h) − f(x)} − f(x){g(x + h) − g(x)}] / [g(x) g(x + h)]
= [g(x) ∆f(x) − f(x) ∆g(x)] / [g(x) g(x + h)]
(ii) We have,
∆{log f(x)} = log{f(x + h)} − log{f(x)}
= log[f(x + h)/f(x)] = log[{f(x + h) − f(x) + f(x)}/f(x)]
= log[1 + ∆f(x)/f(x)]

Differences of a Polynomial
We now look at the differences of various orders of a polynomial of degree n,
given by
y = f(x) = an xⁿ + an−1 xⁿ⁻¹ + an−2 xⁿ⁻² + ... + a1x + a0
The first order forward difference is defined by,
∆f(x) = f(x + h) − f(x), and is given by,

∆y = an{(x + h)ⁿ − xⁿ} + an−1{(x + h)ⁿ⁻¹ − xⁿ⁻¹} + ... + a1(x + h − x)
   = an{nhxⁿ⁻¹ + [n(n − 1)/2!] h²xⁿ⁻² + ...} + an−1{(n − 1)hxⁿ⁻² + ...} + ...
   = bn−1 xⁿ⁻¹ + bn−2 xⁿ⁻² + ... + b1x + b0
130 Material
where the coefficients of various powers of x are collected separately.
Thus, the first order difference of a polynomial of degree n is a polynomial of
degree n − 1, with bn−1 = an · nh.
Proceeding as above, we can state that the second order forward difference
of a polynomial of degree n is a polynomial of degree n − 2, with coefficient of
xⁿ⁻² as n(n − 1)h²an.
Continuing successively, we finally get ∆ⁿy = an n! hⁿ, a constant.
We can conclude that, for a polynomial of degree n, all differences of order
higher than n are zero.
It may be noted that the converse of the above result is partially true and
suggests that if the tabulated values of a function are found to be such that the
differences of the kth order are approximately constant, then the highest degree of
the interpolating polynomial that should be used is k. Since the tabulated data may
have round-off errors, the actual function may not be a polynomial.
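This criterion, finding the order at which the differences become (approximately) constant, can be sketched as a small helper (an illustration, not from the text; for noisy data a small positive tolerance would be passed):

```python
def difference_degree(ys, tol=0.0):
    """Suggest a degree for the interpolating polynomial: the order k at
    which the k-th differences are constant to within tol."""
    col, k = list(ys), 0
    while len(col) > 1:
        col = [col[i + 1] - col[i] for i in range(len(col) - 1)]
        k += 1
        if max(col) - min(col) <= tol:
            return k
    return k

# Values of x^3 + 1 at x = 3..8 (the table of Example 3): third
# differences are constant, so a cubic interpolating polynomial suffices.
degree = difference_degree([28, 65, 126, 217, 344, 513])
```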
Example 18: Compute the horizontal difference table for the following data and
hence, write down the values of ∇f (4), ∇ 2 f (3) and ∇ 3 f (5).

x 1 2 3 4 5
f ( x) 3 18 83 258 627

Solution: The horizontal difference table for the given data is as follows:
x f ( x ) ∇f ( x ) ∇ 2 f ( x ) ∇ 3 f ( x ) ∇ 4 f ( x )
1 3 − − − −
2 18 15 − − −
3 83 65 50 − −
4 258 175 110 60 −
5 627 369 194 84 24

From the table we read the required values and get the following result:
∇f (4) = 175, ∇ 2 f (3) = 50, ∇ 3 f (5) = 84

Example 19: Form the difference table of f (x) on the basis of the following table
and show that the third differences are constant. Hence, conclude about the degree
of the interpolating polynomial.

x 0 1 2 3 4
f ( x) 5 6 13 32 69

Solution: The difference table is given below:
x    f(x)    ∆f(x)    ∆²f(x)    ∆³f(x)
0     5
              1
1     6                6
              7                  6
2     13               12
              19                 6
3     32               18
              37
4     69

It is clear from the above table that the third differences are constant and
hence, the degree of the interpolating polynomial is three.

Newton’s Forward Difference Interpolation Formula


Newton’s forward difference interpolation formula is a polynomial of degree less
than or equal to n. This is used to find the value of the tabulated function at a non-
tabular point. Consider a function y = f (x) whose values y0, y1,..., yn at a set of
equidistant points x0 , x1 ,..., xn are known.
Let φ(x) be the interpolating polynomial, such that
φ(xi) = f(xi) = yi,  xi = x0 + ih, for i = 0, 1, 2, ..., n    (6.36)

We assume the polynomial ( x ) to be of the form,

( x) a0 a1 ( x x0 ) a2 ( x x0 )( x x1 ) a3 ( x x0 )( x x1 )( x x2 )
... an ( x x0 )( x x1 )...( x xn 1 )
(6.37)
The coefficients ai’s in Equation (6.37) are determined by satisfying the
conditions in Equation (6.36) successively for i = 0, 1, 2,...,n.
Thus, we get
y₀ = φ(x₀) = a₀, which gives a₀ = y₀
y₁ = φ(x₁) = a₀ + a₁(x₁ − x₀), which gives a₁ = (y₁ − y₀)/h = ∆y₀/h
y₂ = φ(x₂) = a₀ + a₁(x₂ − x₀) + a₂(x₂ − x₀)(x₂ − x₁)
or, y₂ = y₀ + (∆y₀/h)(2h) + a₂(2h)(h)
∴ a₂ = (y₂ − 2y₁ + y₀)/(2h²) = ∆²y₀/(2! h²)

Proceeding further, we get successively,
a₃ = ∆³y₀/(3! h³), ..., aₙ = ∆ⁿy₀/(n! hⁿ)
Using these values of the coefficients, we get Newton's forward difference
interpolation in the form,
φ(x) = y₀ + ((x − x₀)/h) ∆y₀ + ((x − x₀)(x − x₁)/(2! h²)) ∆²y₀
+ ((x − x₀)(x − x₁)(x − x₂)/(3! h³)) ∆³y₀ + ...
+ ((x − x₀)(x − x₁)...(x − xₙ₋₁)/(n! hⁿ)) ∆ⁿy₀

This formula can be expressed in a more convenient form by taking u = (x − x₀)/h, as
shown here.
We have,
(x − x₁)/h = (x − (x₀ + h))/h = (x − x₀)/h − 1 = u − 1
(x − x₂)/h = (x − (x₀ + 2h))/h = (x − x₀)/h − 2 = u − 2
...
(x − xₙ₋₁)/h = (x − {x₀ + (n − 1)h})/h = (x − x₀)/h − (n − 1) = u − n + 1

Thus, the interpolating polynomial reduces to:


φ(u) = y₀ + u ∆y₀ + (u(u − 1)/2!) ∆²y₀ + (u(u − 1)(u − 2)/3!) ∆³y₀
+ ... + (u(u − 1)(u − 2)...(u − n + 1)/n!) ∆ⁿy₀ (6.38)
This formula is generally used for interpolating near the beginning of the table.
For a given x, we choose a tabulated point as x₀ for which the following condition
is satisfied. For better results, we should have
|u| = |x − x₀|/h ≤ 0.5

The degree of the interpolating polynomial to be used is less than or equal to n.
It is determined by the order at which the differences become nearly constant,
since differences of still higher order turn irregular due to the round-off error
propagated from the data.
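Formula (6.38) can be transcribed almost literally into code. The sketch below is our own illustration (the function name and test data are made up): it accumulates the leading differences ∆ᵏy₀ and the running product u(u − 1)...(u − k + 1):

```python
from math import factorial

def newton_forward(xs, ys, x):
    """Newton's forward difference interpolation (6.38); xs equally spaced."""
    h = xs[1] - xs[0]
    u = (x - xs[0]) / h
    diffs, row = [ys[0]], list(ys)      # leading entries Δ^k y0
    while len(row) > 1:
        row = [row[i + 1] - row[i] for i in range(len(row) - 1)]
        diffs.append(row[0])
    total, term = 0.0, 1.0              # term holds u(u-1)...(u-k+1)
    for k, d in enumerate(diffs):
        total += term * d / factorial(k)
        term *= (u - k)
    return total

# y = x² + x + 1 tabulated at x = 0..3; interpolate at x = 1.5
print(newton_forward([0, 1, 2, 3], [1, 3, 7, 13], 1.5))  # 4.75
```

Because the tabulated function is a quadratic, the formula reproduces it exactly at every point.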

Newton’s Backward Difference Interpolation Formula


Newton’s forward difference interpolation formula cannot be used for interpolating
at a point near the end of a table, since we do not have the required forward
differences for interpolating at such points. However, we can use a separate formula
known as Newton’s backward difference interpolation formula. Let a table of
values {xi, yi}, for i = 0, 1, 2, ..., n for equally spaced values of xi be given. Thus,
xi = x0 + ih, yi = f(xi), for i = 0, 1, 2, ..., n are known.
We construct an interpolating polynomial of degree n of the form,
y(x) = b₀ + b₁(x − xₙ) + b₂(x − xₙ)(x − xₙ₋₁) + ... + bₙ(x − xₙ)(x − xₙ₋₁)...(x − x₁) (6.39)
We have to determine the coefficients b₀, b₁, ..., bₙ by satisfying the relations,
φ(xᵢ) = yᵢ, for i = n, n − 1, n − 2, ..., 1, 0 (6.40)
Thus, φ(xₙ) = yₙ gives b₀ = yₙ (6.41)
Similarly, φ(xₙ₋₁) = yₙ₋₁ gives yₙ₋₁ = b₀ + b₁(xₙ₋₁ − xₙ)
Or, b₁ = (yₙ − yₙ₋₁)/h = ∇yₙ/h (6.42)
Again, φ(xₙ₋₂) = yₙ₋₂ gives yₙ₋₂ = b₀ + b₁(xₙ₋₂ − xₙ) + b₂(xₙ₋₂ − xₙ)(xₙ₋₁ − xₙ)
Or, yₙ₋₂ = yₙ + ((yₙ − yₙ₋₁)/h)(−2h) + b₂(−2h)(−h)
∴ b₂ = (yₙ₋₂ − 2yₙ₋₁ + yₙ)/(2h²) = ∇²yₙ/(2! h²) (6.43)
By induction, or by proceeding as mentioned earlier, we have
b₃ = ∇³yₙ/(3! h³), b₄ = ∇⁴yₙ/(4! h⁴), ..., bₙ = ∇ⁿyₙ/(n! hⁿ) (6.44)
Substituting the expressions for bᵢ in Equation (6.39), we get
φ(x) = yₙ + (x − xₙ) (∇yₙ/h) + (x − xₙ)(x − xₙ₋₁) (∇²yₙ/(2! h²)) + ...
+ (x − xₙ)(x − xₙ₋₁)...(x − x₁) (∇ⁿyₙ/(n! hⁿ)) (6.45)
This formula is known as Newton's backward difference interpolation formula.
It uses the backward differences along the backward diagonal in the difference
table.
Introducing a new variable v = (x − xₙ)/h,
we have, (x − xₙ₋₁)/h = (x − (xₙ − h))/h = v + 1.
Similarly, (x − xₙ₋₂)/h = v + 2, ..., (x − x₁)/h = v + n − 1.
Thus, the interpolating polynomial in Equation (6.45) may be rewritten as,
φ(x) = yₙ + v ∇yₙ + (v(v + 1)/2!) ∇²yₙ + (v(v + 1)(v + 2)/3!) ∇³yₙ
+ ... + (v(v + 1)(v + 2)...(v + n − 1)/n!) ∇ⁿyₙ (6.46)
This formula is generally used for interpolation at a point near the end of a
table.
The error in the given interpolation formula may be written as,
E(x) = f(x) − φ(x)
= (x − xₙ)(x − xₙ₋₁)...(x − x₁)(x − x₀) f⁽ⁿ⁺¹⁾(ξ)/(n + 1)!, where x₀ < ξ < xₙ
= v(v + 1)(v + 2)...(v + n) hⁿ⁺¹ y⁽ⁿ⁺¹⁾(ξ)/(n + 1)!
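Formula (6.46) admits the same direct transcription as the forward case; the sketch below (our own illustration) accumulates the trailing differences ∇ᵏyₙ and the running product v(v + 1)...(v + k − 1):

```python
from math import factorial

def newton_backward(xs, ys, x):
    """Newton's backward difference interpolation (6.46); xn is xs[-1]."""
    h = xs[1] - xs[0]
    v = (x - xs[-1]) / h
    diffs, row = [ys[-1]], list(ys)     # trailing entries ∇^k yn
    while len(row) > 1:
        row = [row[i + 1] - row[i] for i in range(len(row) - 1)]
        diffs.append(row[-1])
    total, term = 0.0, 1.0              # term holds v(v+1)...(v+k-1)
    for k, d in enumerate(diffs):
        total += term * d / factorial(k)
        term *= (v + k)
    return total

# y = x² tabulated at x = 0..3; interpolate near the end of the table
print(newton_backward([0, 1, 2, 3], [0, 1, 4, 9], 2.5))  # 6.25
```

Note that v is negative for points inside the table, since x < xₙ there.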

Extrapolation
The interpolating polynomials are usually used for finding values of the tabulated
function y = f(x) for a value of x within the table. But, they can also be used in
some cases for finding values of f(x) for values of x near to the end points x0 or xn
outside the interval [x0, xn]. This process of finding values of f(x) at points beyond
the interval is termed as extrapolation. We can use Newton’s forward difference
interpolation for points near the beginning value x0. Similarly, for points near the
end value xn, we use Newton’s backward difference interpolation formula.
Example 20: With the help of appropriate interpolation formula, find from the
following data the weight of a baby at the age of one year and of ten years:

Age = x 3 5 7 9
Weight = y (kg ) 5 8 12 17

Solution: Since the values of x are equidistant, we form the finite difference table
for using Newton’s forward difference interpolation formula to compute weight of
the baby at the age of required years.

x y ∆y ∆²y
3 5
3
5 8 1
4
7 12 1
5
9 17

Taking x = 1 and x₀ = 3, u = (x − x₀)/h = (1 − 3)/2 = −1.
Newton's forward difference interpolation gives,
y(1) = 5 + (−1) × 3 + ((−1)(−2)/2!) × 1
= 5 − 3 + 1 = 3 kg.
Similarly, for computing the weight of the baby at the age of ten years, we use
Newton's backward difference interpolation, with
v = (x − xₙ)/h = (10 − 9)/2 = 0.5
y(10) = 17 + 0.5 × 5 + ((0.5 × 1.5)/2!) × 1
= 17 + 2.5 + 0.38 = 19.88 kg.

Inverse Interpolation
The problem of inverse interpolation in a table of values of y = f (x) is to find the
value of x for a given y. We know that the inverse function x = g (y) exists and is
dy
unique, if y = f (x) is a single valued function of x and exists and does not
dx
vanish in the neighbourhood of the point where inverse interpolation is desired.
When the values of x are unequally spaced, we can apply Lagrange’s
interpolation or iterative linear interpolation simply by interchanging the roles of x
and y. Thus Lagrange’s formula for inverse interpolation can be written as,
x = Σᵢ₌₀ⁿ lᵢ(y) xᵢ
where lᵢ(y) = Πⱼ₌₀, ⱼ≠ᵢⁿ (y − yⱼ)/(yᵢ − yⱼ)

When x values are equally spaced, we can apply the method of successive
approximation as described below.
Consider Newton’s formula for forward difference interpolation given by,
y = y₀ + u ∆y₀ + (u(u − 1)/2!) ∆²y₀ + (u(u − 1)(u − 2)/3!) ∆³y₀ + ...
Retaining only two terms on the RHS, we can write the first approximation,
u⁽¹⁾ = (y − y₀)/∆y₀
The second approximation can be written as,
u⁽²⁾ = (1/∆y₀)[(y − y₀) − (u⁽¹⁾(u⁽¹⁾ − 1)/2) ∆²y₀]
on replacing u by u⁽¹⁾ in the coefficient of ∆²y₀.
Similarly, the third approximation can be written as,

u⁽³⁾ = (1/∆y₀)[y − y₀ − (u⁽²⁾(u⁽²⁾ − 1)/2) ∆²y₀ − (u⁽²⁾(u⁽²⁾ − 1)(u⁽²⁾ − 2)/6) ∆³y₀]

The process can be continued until two successive approximations agree to a
reasonable accuracy. Then x is obtained by the relation,
x = x0 + uh
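The successive-approximation scheme above can be sketched in code as follows (our own illustration; the table of squares is made up for the test):

```python
from math import factorial

def inverse_newton(xs, ys, y, iterations=5):
    """Inverse interpolation by successive approximation on an equally
    spaced table: repeatedly solve for u from the linear term."""
    h = xs[1] - xs[0]
    diffs, row = [ys[0]], list(ys)      # leading differences Δ^k y0
    while len(row) > 1:
        row = [row[i + 1] - row[i] for i in range(len(row) - 1)]
        diffs.append(row[0])
    u = (y - ys[0]) / diffs[1]          # first approximation u^(1)
    for _ in range(iterations):
        higher, term = 0.0, u * (u - 1)
        for k in range(2, len(diffs)):  # higher-order terms at current u
            higher += term * diffs[k] / factorial(k)
            term *= (u - k)
        u = (y - ys[0] - higher) / diffs[1]
    return xs[0] + u * h

# y = x² tabulated at x = 1..4; solve x² = 2
print(inverse_newton([1, 2, 3, 4], [1, 4, 9, 16], 2.0))
```

Here the iteration converges quickly to x ≈ √2 ≈ 1.41421.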
Example 21: Using inverse interpolation, find the value of x for y = 5, from the
given table.

x 1 3 4
y 3 12 19

Solution: Applying inverse interpolation,
x = Σᵢ₌₀² lᵢ(y) xᵢ

Thus, for y = 5, we have


x = ((5 − 12)(5 − 19))/((3 − 12)(3 − 19)) × 1 + ((5 − 3)(5 − 19))/((12 − 3)(12 − 19)) × 3
+ ((5 − 3)(5 − 12))/((19 − 3)(19 − 12)) × 4
= (7 × 14)/(9 × 16) + (2 × (−14))/(9 × (−7)) × 3 + (2 × (−7))/(16 × 7) × 4
= 0.6806 + 1.3333 − 0.5000
= 1.5139
≈ 1.514, correct up to four significant figures.
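Example 21's calculation generalizes to any table. Below is a small sketch (our own, with a made-up function name) that interchanges the roles of x and y in Lagrange's formula:

```python
def lagrange_inverse(xs, ys, y):
    """Lagrange interpolation with the roles of x and y interchanged."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        li = 1.0
        for j, yj in enumerate(ys):
            if j != i:
                li *= (y - yj) / (yi - yj)  # Lagrange basis l_i(y)
        total += li * xi
    return total

# the data of Example 21: find x for y = 5
print(round(lagrange_inverse([1, 3, 4], [3, 12, 19], 5), 3))  # 1.514
```

As noted in the text, this is valid only when f is single-valued and monotonic over the table.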
Example 22: Given the following tabular values of cosh x, find x for which
cosh x = 1.285.

x 0.738 0.739 0.740 0.741 0.742


cosh x  1.2849085  1.2857159  1.2865247  1.2873348  1.2881461

Solution: Since finding x for an equally spaced table of cosh x is a problem of
inverse interpolation, we employ the method of successive approximation using
Newton's formula of inverse interpolation. We first form the finite difference table.
x        f(x) = cosh x    ∆f(x)    ∆²f(x)    ∆³f(x)
0.738    1.2849085
                           8074
0.739    1.2857159                    14
                           8088                 −1
0.740    1.2865247                    13
                           8101                 −1
0.741    1.2873348                    12
                           8113
0.742    1.2881461
(the differences are written in units of 10⁻⁷)
Using Newton’s forward difference interpolation formula for the first
( x − x0 )
approximation u = we get,
h
1
=u (1) ( y − y0 )
∆f ( x0 )
For, y 1.285, we take x0 0.739.

1
u (1) (1.285 1.2857159) 0.8851384, then x 0.739885
0.0008088
For a second approximation,
u⁽²⁾ = (1/∆f(x₀))[(y − y₀) − (u⁽¹⁾(u⁽¹⁾ − 1)/2) ∆²f(x₀)]
= −0.8851384 − (1/(0.0008088 × 2)) × (−0.8851384) × (−1.8851384) × 0.0000013
= −0.8851384 − 0.0013409 = −0.8864793 ⇒ x = 0.7381135
Similarly,
u⁽³⁾ = u⁽¹⁾ − (u⁽²⁾(u⁽²⁾ − 1)/2)(∆²f₀/∆f₀) − (u⁽²⁾(u⁽²⁾ − 1)(u⁽²⁾ − 2)/6)(∆³f₀/∆f₀)
= −0.8851384 − 0.0013430 − 0.0000736
= −0.8865550 ⇒ x = 0.7381134
Example 23: Find the divided difference interpolation for the following table of
values:
x 4 7 9
f ( x) − 43 83 327

Solution: We first form the Divided Difference (DD) table as given below.
x f ( x) 1st DD 2nd DD
4 − 43
42 NOTES
7 83 16
122
9 327

Newton’s divided difference interpolation formula is,

f ( x) f ( x0 ) ( x x0 ) f ( x0 , x1 ) ( x x0 ) ( x x1 ) f x0 , x1 , x2
f ( x) 43 ( x 4) 42 ( x 4) ( x 7) 16
16 x 2 134 x 237
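The divided difference table of Example 23 can be built with a short routine. The sketch below (our own illustration) computes the coefficients f[x₀], f[x₀, x₁], ... in place and evaluates the Newton form by nested multiplication:

```python
def divided_difference_poly(xs, fs, x):
    """Evaluate Newton's divided difference interpolation at x."""
    coef, n = list(fs), len(xs)
    for order in range(1, n):            # build the divided differences
        for i in range(n - 1, order - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - order])
    result = coef[-1]                    # Horner-like evaluation
    for i in range(n - 2, -1, -1):
        result = result * (x - xs[i]) + coef[i]
    return result

# Example 23's table: the polynomial is 16x² − 134x + 237
print(divided_difference_poly([4, 7, 9], [-43, 83, 327], 5))  # -33.0
```

The same routine works for unequally spaced tables of any size, since divided differences do not require a constant step h.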

Example 24: Given the following table of values of the function y = log e x, construct
the Newton’s forward difference interpolating polynomial. Comment on the degree
of the polynomial and find loge1001.

x 1000 1010 1020 1030 1040


log e x 3.00000 3.00432 3.00860 3.01284 3.01703

Solution: We form the difference table as given below:

x y ∆y ∆2 y
1000 3.00000
432
1010 3.00432 −4
428
1020 3.00860 −4
424
1030 3.01284 −5
419
1040 3.01703

We observe that, the differences of second order are nearly constant. Thus,
the degree of the interpolating polynomial is 2 and is given by,
y = y₀ + u ∆y₀ + (u(u − 1)/2) ∆²y₀, where u = (x − x₀)/h
For x = 1001, we take x0 = 1000.

∴ u = (1001 − 1000)/10 = 0.1
logₑ1001 ≈ 3.00000 + 0.1 × 0.00432 + ((0.1 × (0.1 − 1))/2) × (−0.00004)
= 3.000434 ≈ 3.00043

Example 25: Determine the interpolating polynomial for the following data table
using both forward and backward difference interpolating formulae. Comment on
the result.

x 0 1 2 3 4
f ( x) 1.0 8.5 36.0 95.5 199.0

Solution: Since the data points are equally spaced, we construct the Newton’s
forward difference interpolating polynomial for which we first form the finite
difference table as given below:

x f ( x) ∆f ( x) ∆2 f ( x) ∆3 f ( x)
0 1 .0
7.5
1.0 8 .5 20.0
27.5 12.0
2 .0 36.0 32.0
59.5 12.0
3 .0 95.5 44.0
103.5
4.0 199.0

Since the differences of order 3 are constant, we construct the third degree
Newton's forward difference interpolating polynomial. Since x₀ = 0 and h = 1.0,
u = (x − x₀)/h = x, and the polynomial is
f(x) ≅ 1.0 + x × 7.5 + (x(x − 1)/2) × 20 + (x(x − 1)(x − 2)/6) × 12
i.e., f(x) = 1.0 + 1.5x + 4x² + 2x³, on simplification.

Taking xₙ = 4, we also construct the backward difference interpolating
polynomial given by,
f(x) = 199 + (x − 4) × 103.5 + ((x − 4)(x − 3)/2) × 44 + ((x − 4)(x − 3)(x − 2)/6) × 12
= 1.0 + 1.5x + 4x² + 2x³, on simplification.
This is the same as the forward difference interpolating polynomial, as it must
be: the interpolating polynomial through a given set of data points is unique.
Example 26: Use Newton’s divided difference interpolation to evaluate f(18)
and f (12) for the following data:

x 4 5 7 10 11 13
f ( x) 48 100 294 900 1210 2028

Solution: We first form the divided difference table as given below.

x f ( x) 1st DD 2nd DD 3rd DD


4 48
52
5 100 15
97 1
7 294 21
202 1
10 900 27
310 1
11 1210 33
409
13 2028

Since the 3rd order divided differences are all equal, the higher order divided
differences vanish. Newton's divided difference interpolation is given by,
f(x) = f₀ + (x − x₀) f[x₀, x₁] + (x − x₀)(x − x₁) f[x₀, x₁, x₂]
+ (x − x₀)(x − x₁)(x − x₂) f[x₀, x₁, x₂, x₃]

For x = 8, we take x0 = 4,
f (8) = 48 + (8 − 4)52 + (8 − 4)(8 − 5) × 15 + (8 − 4)(8 − 5)(8 − 7) × 1
= 48 + 208 + 180 + 12 = 448

For x = 12, we take x0 = 13,


f (12) = 2028 + (12 − 13) × 409 + (12 − 13) (12 − 11) × 33
+ (12 − 13) (12 − 11) (12 − 10) × 1
∴ f (12) = 2028 − 409 − 33 − 2 = 1584

Example 27: Using inverse interpolation, find the zero of f(x) given by the
following tabular values.

x 0.3 0.4 0.6 0.7


y = f(x)  0.14  0.06  −0.04  −0.06

Solution: Using Lagrange’s form of inverse interpolation, we calculate the formula


using y = 0. 14, 0.06, – 0.04 and – 0.06, as given below:

P₃(y) = ((y − 0.06)(y + 0.04)(y + 0.06))/((0.14 − 0.06)(0.14 + 0.04)(0.14 + 0.06)) × 0.3
+ ((y − 0.14)(y + 0.04)(y + 0.06))/((0.06 − 0.14)(0.06 + 0.04)(0.06 + 0.06)) × 0.4
+ ((y − 0.14)(y − 0.06)(y + 0.06))/((−0.04 − 0.14)(−0.04 − 0.06)(−0.04 + 0.06)) × 0.6
+ ((y − 0.14)(y − 0.06)(y + 0.04))/((−0.06 − 0.14)(−0.06 − 0.06)(−0.06 + 0.04)) × 0.7

Thus, P₃(0) = −(0.06 × 0.04 × 0.06 × 0.3)/(0.08 × 0.18 × 0.20)
+ (0.14 × 0.04 × 0.06 × 0.4)/(0.08 × 0.1 × 0.12)
+ (0.14 × 0.06 × 0.06 × 0.6)/(0.18 × 0.1 × 0.02)
− (0.14 × 0.06 × 0.04 × 0.7)/(0.2 × 0.12 × 0.02)
= −0.015 + 0.14 + 0.84 − 0.49 = 0.475

Thus, the zero of f (x) is 0.475 which is approximately equal to 0.48, since the
accuracy depends on the accuracy of the data which is the significant digits.

Hermite Interpolation
Hermite Interpolation: Hermite interpolation, named after Charles Hermite, is
a method of interpolating data points as a polynomial function. The generated
Hermite interpolating polynomial is closely related to the Newton polynomial, in
that both are derived from the calculation of divided differences. However, the
Hermite interpolating polynomial may also be computed without using divided
differences.
Unlike Newton interpolation, Hermite interpolation matches an unknown
function both in observed value and in the observed values of its first m derivatives.
This means that the n(m + 1) values
f(x₀), f′(x₀), ..., f⁽ᵐ⁾(x₀), ..., f(xₙ₋₁), f′(xₙ₋₁), ..., f⁽ᵐ⁾(xₙ₋₁)
must be known, rather than just the first n values required for Newton
interpolation. The resulting polynomial may have degree at most n(m + 1) – 1,
whereas the Newton polynomial has maximum degree n – 1. (In the general case,
there is no need for m to be a fixed value; that is, some points may have more
known derivatives than others. In this case the resulting polynomial may have
degree N – 1, with N the number of data points.)

Check Your Progress


1. What do we generate in iterative linear interpolation?
2. Define interpolation.
3. How is Lagrange's interpolation useful?
4. Which interpolation will you use for equally spaced tabular values?
5. Define the shift operator.
6. What is Newton's forward difference interpolation formula?
7. Define extrapolation.
8. Define the problem of inverse interpolation.

6.3 ANSWERS TO CHECK YOUR PROGRESS QUESTIONS

1. In this method, we successively generate interpolating polynomials of any


degree by iteratively using linear interpolating functions.
2. It can be stated explicitly as ‘given a set of (n + 1) values y0, y1, y2,..., yn for
x = x0, x1, x2, ..., xn respectively. The problem of interpolation is to compute
the value of the function y = f (x) for some non-tabular value of x.’
3. Lagrange’s interpolation is useful for unequally spaced tabulated values.
4. For interpolation of an unknown function when the tabular values of the
argument x are equally spaced, we have two important interpolation
formulae, viz.,
(a) Newton’s forward difference interpolation formula
(b) Newton’s backward difference interpolation formula
5. The shift operator is denoted by E and is defined by E f (x) = f (x + h).
6. The Newton’s forward difference interpolation formula is a polynomial of
degree less than or equal to n.
7. The interpolating polynomials are usually used for finding values of the
tabulated function y = f(x) for a value of x within the table. But they can also
be used in some cases for finding values of f(x) for values of x near to the

end points x0 or xn outside the interval [x0, xn]. This process of finding
values of f(x) at points beyond the interval is termed as extrapolation.
8. The problem of inverse interpolation in a table of values of y = f (x) is to
find the value of x for a given y.

6.4 SUMMARY

The problem of interpolation is a very fundamental problem in numerical
analysis.
In numerical analysis, interpolation means computing the value of a function
f (x) in between values of x in a table of values.
Lagrange’s interpolation is useful for unequally spaced tabulated values.
For interpolation of an unknown function when the tabular values of the
argument x are equally spaced, we have two important interpolation
formulae, viz., Newton’s forward difference interpolation formula and
Newton’s backward difference interpolation formula.
The forward difference operator is defined by ∆f(x) = f(x + h) − f(x).
The backward difference operator is defined by ∇f(x) = f(x) − f(x − h).
We define different types of finite differences such as forward differences,
backward differences and central differences, and express them in terms of
operators.
The shift operator is denoted by E and is defined by E f (x) = f (x + h).
The first order difference of a polynomial of degree n is a polynomial of
degree n–1. For polynomial of degree n, all other differences having order
higher than n are zero.
Newton’s forward difference interpolation formula is generally used for
interpolating near the beginning of the table while Newton’s backward
difference formula is used for interpolating at a point near the end of a table.
In iterative linear interpolation, we successively generate interpolating
polynomials, of any degree, by iteratively using linear interpolating functions.
The process of finding values of a function at points beyond the interval is
termed as extrapolation.
The problem of inverse interpolation in a table of values of y = f (x) is to
find the value of x for a given y.

6.5 KEY WORDS

Interpolation: It means computing the value of a function f(x) in between


values of x in a table of values.
Extrapolation: The process of finding values of a function at points beyond
the interval is termed as extrapolation.

6.6 SELF ASSESSMENT QUESTIONS AND EXERCISES

Short-Answer Questions
1. What is the significance of polynomial interpolation?
2. Define the symbolic operators E and ∆.
3. What is the degree of the first order forward difference of a polynomial of
degree n?
4. What is the degree of the nth order forward difference of a polynomial of
degree n?
5. Write Newton’s forward and backward difference formulae.
6. State an application of iterative linear interpolation.
7. What is the advantage of extrapolation?
8. State Lagrange’s formula for inverse interpolation.
Long-Answer Questions
1. Use Lagrange’s interpolation formula to find the polynomials of least degree
which attain the following tabular values:
x −2 1 2
(a) y 25 −8 −15

x 0 1 2 5
(b) y 2 3 12 147

x 1 2 3 4
(c) y −1 −1 1 5

2. Form the finite difference table for the given tabular values and find the
values of:
(a) ∆f(2)
(b) ∆²f(1)
(c) ∆³f(0)
(d) ∆⁴f(1)
(e) ∇f(5)
(f) ∇²f(3)

x     0  1   2   3   4    5    6
f(x)  3  4  13  36  79  148  249

3. How are the forward and backward differences in a table related? Prove
the following:
(a) ∆yᵢ = ∇yᵢ₊₁
(b) ∆²yᵢ = ∇²yᵢ₊₂
(c) ∆ⁿyᵢ = ∇ⁿyᵢ₊ₙ
4. Describe Newton’s forward and backward difference formulae using
illustrations.
5. Explain iterative linear interpolation with the help of examples.
6. Illustrate inverse interpolation procedure.

6.7 FURTHER READINGS

Jain, M. K., S. R. K. Iyengar and R. K. Jain. 2007. Numerical Methods for


Scientific and Engineering Computation. New Delhi: New Age
International (P) Limited.
Atkinson, Kendall E. 1989. An Introduction to Numerical Analysis, 2nd Edition.
US: John Wiley & Sons.
Jain, M. K. 1983. Numerical Solution of Differential Equations. New Delhi:
New Age International (P) Limited.
Conte, Samuel D. and Carl de Boor. 1980. Elementary Numerical Analysis:
An Algorithmic Approach. New York: McGraw-Hill.
Skeel, Robert. D and Jerry B. Keiper. 1993. Elementary Numerical Computing
with Mathematica. New York: McGraw-Hill.
Balaguruswamy, E. 1999. Numerical Methods. New Delhi: Tata McGraw-Hill.
Datta, N. 2007. Computer Oriented Numerical Methods. New Delhi: Vikas
Publishing House Pvt. Ltd.


UNIT 7 APPROXIMATION
Structure
7.0 Introduction
7.1 Objectives
7.2 Approximation
7.3 Least Square Approximation
7.4 Answers to Check Your Progress Questions
7.5 Summary
7.6 Key Words
7.7 Self Assessment Questions and Exercises
7.8 Further Readings

7.0 INTRODUCTION

Numerical error is the combined effect of two kinds of error in a calculation. The
first is caused by the finite precision of computations involving floating point or
integer values. The second, usually called truncation error, is the difference between
the exact mathematical solution and the approximate solution obtained when
simplifications are made to the mathematical equations to make them more
amenable to calculation. The number of significant figures in a measurement, such
as 2.531, is equal to the number of digits that are known with some degree of
confidence (2, 5 and 3) plus the last digit (1), which is an estimate or approximation.
Zeroes within a number are always significant. Zeroes that do nothing but set the
decimal point are not significant. Trailing zeroes that are not needed to hold the
decimal point are significant. A round-off error, also called rounding error, is the
difference between the calculated approximation of a number and its exact
mathematical value. Numerical analysis specifically tries to estimate this error when
using approximation equations and/or algorithms, especially when using finitely
many digits to represent real numbers.
In this unit, you will study about the approximation and least square
approximation.

7.1 OBJECTIVES

After going through this unit, you will be able to:


Explain the various types of approximations
Evaluate errors in functions
Define significance errors
Understand the characteristics of numerical computation
Analyse the least square approximation
7.2 APPROXIMATION

Numerical methods are methods used for solving problems through numerical
calculations providing a table of numbers and/or graphical representations or figures.
Numerical methods emphasize how the algorithms are implemented. Thus,
the objective of numerical methods is to provide systematic methods for solving
problems in a numerical form. Often the numerical data and the methods used are
approximate ones. Hence, the error in a computed result may be caused by the
errors in the data or the errors in the method or both. Generally, the numbers are
represented in decimal (base 10) form, while in computers the numbers are
represented using the binary (base 2) and also the hexadecimal (base 16) forms.
To perform a numerical calculation, numbers are first approximated by a representation
involving a finite number of significant digits. If the numbers to be represented
are very large or very small, then they are written in floating point notation. The
Institute of Electrical and Electronics Engineers (IEEE) has published a standard
for binary floating point arithmetic. This standard, known as the IEEE Standard
754, has been widely adopted. The standard specifies formats for single precision
and double precision numbers. The simplest way of reducing the number of
significant digits in the representation of a number is simply to ignore the unwanted
digits known as chopping. All these topics are discussed in the following section.
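The difference between chopping and rounding can be made concrete. The sketch below is our own illustration in decimal digits (real machines chop or round binary digits, but the idea is the same):

```python
from math import floor, log10

def chop(x, n):
    """Keep n significant decimal digits by discarding the rest."""
    e = floor(log10(abs(x))) + 1        # digits before the decimal point
    scale = 10 ** (n - e)
    m = floor(abs(x) * scale) / scale
    return m if x > 0 else -m

def round_sig(x, n):
    """Keep n significant decimal digits by rounding."""
    e = floor(log10(abs(x))) + 1
    return round(x, n - e)

print(chop(2 / 3, 4))       # 0.6666
print(round_sig(2 / 3, 4))  # 0.6667
```

Chopping always biases the result toward zero, whereas rounding keeps the error within half a unit in the last retained digit.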
Significant Figures
In approximate representation of numbers, the number is represented with a finite
number of digits. All the digits in the usual decimal representation may not be
significant while considering the accuracy of the number. Consider the following
numbers:
1514, 15.14, 1.324, 1524
Each of them has four significant digits and all the digits in them are significant.
Now consider the following numbers,
0.00215, 0.0215, 0.000215, 0.0000125
The leading zeroes after the decimal point in each of the above numbers are
not significant. Each number has only three significant digits, even though they
have different number of digits after the decimal point.
Floating Point Computation
Every real number is usually represented by a finite or infinite sequence of decimal
digits. This is called decimal system representation. For example, we can represent
1/4 as 0.25, but 1/3 as 0.333... Thus 1/4 is represented by two significant digits only,
while 1/3 is represented by an infinite number of digits. Most computers have two
forms of storing numbers for performing computations. They are fixed-point and
floating point. In a fixed-point system, all numbers are given with a fixed number of
decimal places. For example, 35.123, 0.014, 2.001. However, fixed-point
representation is not of practical importance in scientific computation, since it cannot
deal with very large or very small numbers.
In a floating-point representation, a number is represented with a finite
number of significant digits having a floating decimal point. We can express the
floating decimal number as follows:
623.8 as 0.6238 × 103, 0.0001714 as 0.1714 × 10–3
A very large number can also be representated with floating-point
representation, keeping the first few significant digits such as 0.14263218 × 1039.
Similarly, a very small number can be written with only the significant digits, leaving
the leading zeros such as 0.32192516 × 10–19.
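The mantissa-exponent split described above can be sketched in a few lines (our own illustration, using the decimal convention 0.d₁d₂... × 10ᵉ):

```python
from math import floor, log10

def normalize(x, digits=8):
    """Return (mantissa, exponent) with 0.1 <= |mantissa| < 1."""
    if x == 0:
        return 0.0, 0
    e = floor(log10(abs(x))) + 1        # position of the decimal point
    return round(x / 10 ** e, digits), e

print(normalize(623.8))      # (0.6238, 3)
print(normalize(0.0001714))  # (0.1714, -3)
```

These pairs correspond exactly to the examples 0.6238 × 10³ and 0.1714 × 10⁻³ given above.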
In the decimal system, very large and very small numbers are expressed in
scientific notation as follows: 4.69 × 10²³ and 1.601 × 10⁻¹⁹. Binary numbers can
also be expressed by the floating point representation. The floating point
representation of a number consists of two parts: the first part represents a signed,
fixed point number called the mantissa (m); the second part designates the position
of the decimal (or binary) point and is called the exponent (e). The fixed point
mantissa may be a fraction or an integer. The number of bits required to express
the exponent and mantissa is determined by the accuracy desired from the computing
system as well as its capability to handle such numbers. For example, the decimal
number + 6132.789 is represented in floating point as follows:
sign            sign
 0   6132789    0   04
(mantissa)     (exponent)

The mantissa has a 0 in the leftmost position to denote a plus. Here, the
mantissa is considered to be a fixed point fraction. This representation is equivalent
to the number expressed as a fraction multiplied by 10 raised to an exponent, that is
0.6132789 × 10⁺⁰⁴. Because of this analogy, the mantissa is sometimes called the
fraction part.
Consider, for example, a computer that assumes integer representation for
the mantissa and radix 8 for the numbers. The octal number +36.754 = 36754 × 8⁻³
in its floating point representation will look like this:
sign          sign
 0   36754    1   03
(mantissa)   (exponent)

When this number is represented in a register in its binary-coded form, the


actual value of the register becomes 0 011 110 111 101 100 and 1 000 011.
Most computers and all electronic calculators have a built-in capacity to
perform floating-point arithmetic operations.
Example 1: Determine the number of bits required to represent in floating point
notation the exponent for decimal numbers in the range of 10^±86.
Solution: Let n be the required number of bits to represent the number 10^±86.
2ⁿ = 10⁸⁶
n log 2 = 86
∴ n = 86/log 2 = 86/0.3010 ≈ 285.7
Therefore, 10^±86 ≈ 2^±285.7
The exponent ±285 can be represented by a 10-bit binary word. It has a
range of exponents (+511 to −512).
Errors in Numerical Solution
The errors in a numerical solution are basically of two types. They are truncation
error and computational error. The error which is inherent in the numerical method
employed for finding numerical solution is called the truncation error. The
computational error arises while doing arithmetic computation due to representation
of numbers with a finite number of decimal digits.
The truncation error arises due to the replacement of an infinite process
such as summation or integration by a finite one. For example, in computation of a
transcendental function we use Taylor series/Maclaurin series expansion but retain
only a finite number of terms. Similarly, a definite integral is numerically evaluated
using a finite sum with a few function values of the integral. Thus, we express the
error in the solution obtained by numerical method.
Inherent errors are errors in the data which are obtained by physical
measurement and are due to limitations of the measuring instrument. The analysis
of errors in the computed result due to the inherent errors in data is similar to that
of round-off errors.
Generation and Propagation of Round-Off Error
During numerical computation on a computer, a round-off error is generated by
taking an infinite decimal representation of a real, rational number such
as 1/3, 4/7, etc., by a finite size decimal form. In each arithmetic operation with
such approximate rounded-off numbers there arises a round-off error. Also round-
off errors present in the data will propagate in the result. Consider two approximate
floating point numbers rounded-off to four significant digits.
x = 0.2234 × 103 and y = 0.1112 × 102

The sum x + y = 0.23452 × 10³ is rounded-off to 0.2345 × 10³ with an
absolute error, 2 × 10⁻². This is the new round-off error generated in the result.
Besides this error, the result will have an error propagated from the round-off
errors in x and y.
Round-Off Errors in Arithmetic Operations
To get an insight into the propagation of round-off errors, let us consider them for
the four basic operations of addition, subtraction, multiplication and division. Let
xT and yT be two real numbers whose approximate representations x and y carry
round-off errors ε₁ and ε₂ respectively, so that
xT = x + ε₁ and yT = y + ε₂
Their addition gives, (xT + yT) = (x + y) + (ε₁ + ε₂)
Hence, the propagated round-off error is given by,
(xT + yT) − (x + y) = ε₁ + ε₂
Thus the propagated round-off error in the sum of two approximate numbers
is equal to the sum of the round-off errors in the individual numbers.
The multiplication of two approximate numbers has the propagated round-
off error given by,
xT yT − xy = ε₁y + ε₂x + ε₁ε₂
Since the product ε₁ε₂ is a small quantity of higher order than ε₁ or ε₂, we
may take the propagated round-off error as ε₁y + ε₂x, and the relative propagated
error is then
(ε₁y + ε₂x)/(xy) = ε₁/x + ε₂/y
This is equal to the sum of the relative errors in the numbers x and y.
Similarly, for division we get the relative propagated error as,
(xT/yT − x/y)/(x/y) = ε₁/x − ε₂/y
Thus, the relative error in division is equal to the difference of the relative
errors in the numbers.
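These propagation rules are easy to confirm numerically. In the sketch below (our own, with made-up values) the relative errors add for a product and subtract for a quotient, up to second-order terms:

```python
x_true, y_true = 223.4567, 11.12891     # 'exact' values (made up)
eps1, eps2 = 3e-6, -2e-6                # round-off errors in x and y
x, y = x_true - eps1, y_true - eps2     # approximations: x_T = x + eps1

rel1, rel2 = eps1 / x, eps2 / y
prod_rel = (x_true * y_true - x * y) / (x * y)
quot_rel = (x_true / y_true - x / y) / (x / y)

print(prod_rel, rel1 + rel2)            # agree to first order
print(quot_rel, rel1 - rel2)
```

The residual disagreement is of the order ε₁ε₂, the higher-order term neglected above.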
Errors in Evaluation of Functions
The propagated error in the evaluation of a function f (x) of a single variable x
having a round-off error is given by,
f (x ) f ( x) f '( x )

Self-Instructional
Material 151
In the evaluation of a function of several variables x₁, x₂, …, xₙ, the
propagated round-off error is given by Σᵢ₌₁ⁿ εᵢ (∂f/∂xᵢ), where ε₁, ε₂, ..., εₙ are the
round-off errors in x₁, x₂, ..., xₙ, respectively.
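When the partial derivatives are tedious to write out, the bound Σ|∂f/∂xᵢ| Δxᵢ can be estimated with central differences. A minimal sketch (the function name is our own; the check uses f = 5xy²/z² at x = y = z = 1 with Δ = 0.1, for which the bound is 2.5 and the relative error is 0.5):

```python
def max_propagated_error(f, point, errs, h=1e-6):
    """Estimate (df)_max = sum_i |df/dx_i| * dx_i via central differences."""
    total = 0.0
    for i in range(len(point)):
        up, lo = list(point), list(point)
        up[i] += h
        lo[i] -= h
        dfdx = (f(*up) - f(*lo)) / (2 * h)   # numerical partial derivative
        total += abs(dfdx) * errs[i]
    return total

f = lambda x, y, z: 5 * x * y**2 / z**2
bound = max_propagated_error(f, (1.0, 1.0, 1.0), (0.1, 0.1, 0.1))
rel = bound / f(1.0, 1.0, 1.0)               # maximum relative error
```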

Significance Errors
During arithmetic computations of approximate numbers having fixed precision,
there may be loss of significant digits in some cases. The error due to loss of
significant digits is termed as significance error. Significance error is more serious
than round-off errors, since it affects the accuracy of the result.
There are two situations when loss of significant digits occurs. These are,
(i) Subtraction of two nearly equal numbers.
(ii) Division by a very small divisor compared to the dividend.
For example, consider the subtraction of the nearly equal numbers
x = 0.12454657 and y = 0.12452413, each having eight significant digits. The
result x – y = 0.22440000 × 10⁻⁴, is correct to four significant figures only. This
result when used in further computations leads to serious error in the result.
Consider the problem of computing the roots of the quadratic equation,
ax² + bx + c = 0
The roots of this equation are,
x = (−b + √(b² − 4ac))/(2a) and x = (−b − √(b² − 4ac))/(2a)
If b² >> 4ac, then the evaluation of −b + √(b² − 4ac) leads to subtraction of
nearly equal numbers. One can avoid this by rewriting the expression
(−b + √(b² − 4ac))/(2a). Multiplying the numerator and the denominator by
b + √(b² − 4ac), it can be written as,
(−b + √(b² − 4ac))/(2a) = −2c/(b + √(b² − 4ac))

Let the quadratic equation be,


x2 + 100.0001x + 0.01 = 0
Using the first formula, we get the smaller root = 0.10050000 × 10⁻³, whereas the
exact root is 0.10000000 × 10⁻³. But using the last expression we get the smaller
root as 0.10000000 × 10⁻³, which does not have the effect of significance error.
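The two formulas for the smaller root can be compared in code. In IEEE double precision the cancellation in this particular equation is mild, but the rewritten form keeps full accuracy at any working precision (function names are illustrative sketches, not from the text):

```python
import math

def small_root_naive(a, b, c):
    """Smaller-magnitude root via the usual formula (subtractive for b > 0)."""
    return (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)

def small_root_stable(a, b, c):
    """Equivalent form -2c/(b + sqrt(b^2 - 4ac)): no cancellation for b > 0."""
    return -2 * c / (b + math.sqrt(b * b - 4 * a * c))

r1 = small_root_naive(1.0, 100.0001, 0.01)
r2 = small_root_stable(1.0, 100.0001, 0.01)   # -0.0001 to full precision
```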
Consider an example where loss of significant digits occurs due to division
by a small number.
Computation of f (x) = (1 − cos x)/x², for small values of x would have loss of
significant digits.
Table 7.1 shows the computed values of f (x) upto six decimal places
along with the correct values and error.
Table 7.1 Computed Value of f(x) upto Six Decimal Places

x Computed f (x) Correct f (x) Error


0.1 0.499584 0.499583 – 0.000001
0.01 0.500008 0.499996 – 0.000012
0.001 0.506639 0.500000 – 0.006639
0.0001 0.500000 0.745058 0.245058

Table 7.1 shows that the error in the computed value becomes more serious
for smaller values of x. It may be noted that the correct values of f (x) can be
computed by avoiding division by a small number, rewriting f (x) as given
below.
f (x) = (1 − cos x)/x² × (1 + cos x)/(1 + cos x)

i.e., f (x) = sin²x / (x²(1 + cos x))
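A short sketch contrasting the two forms; in double precision the cancellation sets in at smaller x than in the six-digit table above, but the effect is the same:

```python
import math

def f_naive(x):
    """Direct evaluation: suffers cancellation in 1 - cos(x) for small x."""
    return (1 - math.cos(x)) / x**2

def f_stable(x):
    """Rewritten form sin^2(x) / (x^2 (1 + cos x)): no cancellation."""
    return math.sin(x)**2 / (x**2 * (1 + math.cos(x)))
```

At x = 10⁻⁸ the naive form returns 0.0 (every significant digit cancels), while the stable form still returns 0.5 to machine accuracy.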

Characteristics of Numerical Computation


A numerical solution can never be exact but attempts are made to know the accuracy
of the approximate solution. Thus one attempts to get an approximate solution
which differs from the exact solution by less than a specified tolerance limit.
Some numerical methods find the solution by a direct method but many others
are of repetitive nature. The first step in the solution procedure is to take an
approximate solution. Then the numerical method is applied repeatedly to get
better results till the solution is obtained up to a desired accuracy. This process is
known as iteration.
To get a numerical solution on a computer, one has to write an algorithm. An
algorithm is a sequence of unambiguous steps used to solve a given problem. In
the design of such computer programs one considers the input data required to
implement the numerical method and writes the computer program in a suitable
programming language. The output of the program should give the solution with
the desired accuracy.
It may be noted that the iterative method gives rise to a sequence of results.
The convergence of this sequence to get the output upto a desired accuracy is
dependent on the initial data. Hence, one has to suitably choose the input data.
Thus, if for some input data the sequence is not convergent within a certain
pre-assigned number of iterations, then the input data is changed. It is for this reason that one has
to limit the number of iterations to be employed while designing the computer
program.
While computing a solution with the help of an algorithm, one has to check
the correctness of the solution obtained. To do so, one has to have some test data
whose solution is known.
Example 2: The numbers 28.483 and 27.984 are both approximate and are
correct up to the last digits shown. Compute their difference. Indicate how many
significant digits are present in the result and comment.
Solution: We have 28.483 – 27.984 = 00.499. The result has only three significant
digits. This is due to the loss of significant digits during subtraction of nearly equal
numbers.
Example 3: Round the number x = 2.2554 to three significant figures. Find the
absolute error and the relative error.
Solution: The rounded-off number is 2.26.
The absolute error is 0.0046.
The relative error is ≈ 0.0046/2.26 = 0.0020.
The percentage error is 0.20 per cent.
Example 4: If π is taken as 3.14 instead of 22/7, find the relative error.
Solution: Relative error = (22/7 − 3.14)/(22/7) ≈ 0.00091.
Example 5: Determine the number of correct digits in x = 0.2217, if it has a
relative error δR = 0.2 × 10⁻¹.
Solution: Absolute error = 0.2 × 10⁻¹ × 0.2217 = 0.004434
Hence, x has only one correct digit: x ≈ 0.2.
Example 6: Round-off the number 4.5126 to four significant figures and find the
relative percentage error.
Solution: The number 4.5126 rounded-off to four significant figures is 4.513.
Relative error = (−0.0004/4.5126) × 100 = −0.0089 per cent

Example 7: Given f (x, y, z) = 5xy²/z², find the maximum relative error in the
evaluation of f (x, y, z) at x = y = z = 1, if x, y, z have absolute errors
Δx = Δy = Δz = 0.1.

Solution: The value of f (x, y, z) at x = y = z = 1 is 5. The maximum absolute error
in the evaluation of f (x, y, z) is,
(Δf)max = |∂f/∂x| Δx + |∂f/∂y| Δy + |∂f/∂z| Δz
= |5y²/z²| Δx + |10xy/z²| Δy + |−10xy²/z³| Δz
At x = y = z = 1, the maximum relative error is,
(ER)max = (25 × 0.1)/5 = 0.5
Example 8: Find the relative propagated error in the evaluation of x + y where
x = 13.24 and y = 14.32 have round-off errors ε₁ = 0.004 and ε₂ = 0.002 respectively.
Solution: Here, x + y = 27.56 and ε₁ + ε₂ = 0.006.
Thus, the required relative error = 0.006/27.56 = 0.0002177.
Example 9: Find the relative percentage error in the evaluation of u = xy with
x = 5.43, y = 3.82 having round-off errors 0.01 in both x and y.
Solution: Now, xy = 5.43 × 3.82 ≈ 20.74
The relative error in x is 0.01/5.43 ≈ 0.0018.
The relative error in y is 0.01/3.82 ≈ 0.0026.
Thus, the relative propagated error in xy = 0.0018 + 0.0026 = 0.0044.
The percentage relative error = 0.44 per cent.
Example 10: Given u = xy + yz + zx, find the estimate of relative percentage
error in the evaluation of u for x = 2.104, y = 1.935 and z = 0.845, the
approximate values being correct to the last digit.
Solution: Here, u = x (y + z) + yz = 2.104 (1.935 + 0.845) + 1.935 × 0.845
= 5.849 + 1.635 = 7.484
Error, Δu = (y + z) Δx + (z + x) Δy + (x + y) Δz
= 0.0005 × 2(x + y + z), since Δx = Δy = Δz = 0.0005
= 2 × 4.884 × 0.0005 ≈ 0.0049
Hence, the relative percentage error = (0.0049/7.484) × 100 ≈ 0.065 per cent.
Example 11: The diameter of a circle measured to within 1 mm is d = 0.842 m.
Compute the area of the circle and give the estimated relative error in the computed
result.
Solution: The area of the circle A is given by the formula, A = πd²/4.
Thus, A = (3.1416/4) × (0.842)² m² = 0.5568 m².
Here the value of π is taken upto the 4th decimal place since the data of d has
accuracy upto the 3rd decimal place. Now the relative percentage error in the
above computation is,
Ep = [(π × 2d Δd)/4] / [πd²/4] × 100 = (2Δd/d) × 100 = (2 × 0.001/0.842) × 100 ≈ 0.24 per cent
Example 12: The length a and the width b of a plate is measured accurate up to
1cm as a = 5.43 m and b = 3.82 m. Compute the area of the plate and indicate its
error.
Solution: The area of the plate is given by,
A = ab = 3.82 × 5.43 sq. m. = 20.74 m².
The estimate of error in the computed value of A is given by,
ΔA = a Δb + b Δa
= 0.01 × 3.82 + 0.01 × 5.43, since Δa = Δb = 0.01
= 0.0925 ≈ 0.1 m²

Computational Algorithms
For solving problems with the help of a computer, one should first analyse the
mathematical formulation of the problem and consider a suitable numerical method
for solving it. The next step is to write an algorithm for implementing the method.
An algorithm is defined as a finite sequence of unambiguous steps to be followed
for solving a given problem. Finally, one has to write a computer program in a
suitable programming language. A computer program is a sequence of computer
instructions for solving a problem.
It is possible to write more than one algorithm to solve a specific problem.
But one should analyse them before writing a computer program. The analysis
involves checking their correctness, robustness, efficiency and other characteristics.
The analysis is helpful for solving the problem on a computer. The analysis of
correctness of an algorithm ensures that the algorithm gives a correct solution of
the problem. The analysis of robustness is required to ascertain if the algorithm is
capable of tackling the problem for possible cases or for all possible variations of
the parameters of the problem. The efficiency is concerned with the computational
complexities and the total time required to solve the problem.
Computer oriented numerical methods must deal with algorithms for
implementation of numerical methods on a computer. The following algorithms of
some simple problems will make the concept clear.
Consider the problem of solving a pair of linear equations in two unknowns
given by,
a₁x + b₁y = c₁
a₂x + b₂y = c₂
where a₁, b₁, c₁, a₂, b₂, c₂ are real constants. The solution of the equations is
given by cross multiplication as,
x = (b₂c₁ − b₁c₂)/(a₁b₂ − a₂b₁), y = (c₂a₁ − c₁a₂)/(a₁b₂ − a₂b₁)
It may be noted that if a1 b2 – a2 b1 = 0, then the solution does not exist. This
aspect has to be kept in mind while writing the algorithm as given below.

Algorithm: Solution of a pair of equations a₁x + b₁y = c₁, a₂x + b₂y = c₂


Step 1: Read a1, b1, c1, a2, b2, c2
Step 2: Compute d = a1 b2 – a2 b1
Step 3: Check if d = 0, then go to Step 8 else
go to next step
Step 4: Compute x = (b₂c₁ − b₁c₂)/d
Step 5: Compute y = (c₂a₁ − c₁a₂)/d
Step 6: Write ‘x =’, x, ‘y =’, y
Step 7: Go to Step 9
Step 8: Write ‘no solution’
Step 9: Stop
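A direct transcription of this algorithm in Python might look as follows; the exact-zero test on d mirrors Step 3, and the function name is our own:

```python
def solve_pair(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1 and a2*x + b2*y = c2 by cross multiplication."""
    d = a1 * b2 - a2 * b1
    if d == 0:
        return None                      # Step 8: no unique solution
    x = (b2 * c1 - b1 * c2) / d
    y = (c2 * a1 - c1 * a2) / d
    return x, y
```

For instance, x + y = 3, x − y = 1 gives (2.0, 1.0), while a degenerate pair returns None.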
Example 13: Write an algorithm to compute the roots of a quadratic equation,
ax2 + bx + c = 0.
Solution: We know that the roots of the quadratic equation are given by,
x = (−b ± √(b² − 4ac))/(2a)

Further, if b² ≥ 4ac, the roots are real, otherwise they are complex conjugates.
This aspect is to be considered while writing an algorithm.
Algorithm: Computation of roots of a quadratic equation.
Step 1: Read a, b, c
Step 2: Compute d = b2 – 4ac
Step 3: Check if d ≥ 0, go to Step 4 else go to Step 8

Step 4: Compute x1 = (−b + √d)/(2a)
Step 5: Compute x2 = (−b − √d)/(2a)


Step 6: Write ‘Roots are real’, x1, x2
Step 7: Go to Step 11

Step 8: Compute xi = √(−d)/(2a)
Step 9: Compute xr = −b/(2a)
Step 10: Write ‘Roots are complex’, ‘Real part =’ xr ‘Imaginary part =’, xi
Step 11: Stop
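The algorithm can be sketched in Python; the returned tuple distinguishes the two cases of Step 3 (the function name and return layout are our own choices):

```python
import math

def quad_roots(a, b, c):
    """Roots of a*x**2 + b*x + c = 0, real or complex-conjugate pair."""
    d = b * b - 4 * a * c
    if d >= 0:
        r = math.sqrt(d)
        return ('real', (-b + r) / (2 * a), (-b - r) / (2 * a))
    r = math.sqrt(-d)
    return ('complex', -b / (2 * a), r / (2 * a))  # real part, imaginary part
```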

Check Your Progress


1. What are the two parts of floating point representation?
2. Define truncation and computational errors.
3. Define inherent errors.
4. What is propagated round-off error?
5. What are significance errors?
6. Write the situations when loss of significant digits occur.
7. Why do we write an algorithm?
8. Describe the features and purpose of computational algorithms.

7.3 LEAST SQUARE APPROXIMATION


In this section, we consider the problem of approximating an unknown function
whose values, at a set of points, are generally known only empirically and are,
thus subject to inherent errors, which may sometimes be appreciably larger in
many engineering and scientific problems. In these cases, it is required to derive a
functional relationship using certain experimentally observed data. Here the
observed data may have inherent or round-off errors, which are serious, making
polynomial interpolation for approximating the function inappropriate. In polynomial
interpolation the truncation error in the approximation is considered to be important.
But when the data contains round-off errors or inherent errors, interpolation is not
appropriate.
The subject of this section is curve fitting by least square approximation. Here
we consider a technique by which noisy function values are used to generate a
smooth approximation. This smooth approximation can then be used to
approximate the derivative more accurately than with exact polynomial
interpolation.
There are situations where interpolation for approximating a function may not
be an efficacious procedure. Errors will arise when the function values f (xi), i = 1, 2,

…, n are observed data and not exact. In this case, if we use the polynomial

interpolation, then it would reproduce all the errors of observation. In such situations
one may take a large number of observed data, so that statistical laws in effect
cancel the errors introduced by inaccuracies in the measuring equipment. The
approximating function is then derived, such that the sum of the squared deviations
between the observed values and the estimated values is made as small as
possible.
Mathematically, the problem of curve fitting or function approximation may be
stated as follows:
To find a functional relationship y = g(x), that relates the set of observed data
values Pi(xi, yi), i = 1, 2,..., n as closely as possible, so that the graph of y = g(x)
goes near the data points Pi’s though not necessarily through all of them.
The first task in curve fitting is to select a proper form of an approximating
function g(x), containing some parameters, which are then determined by minimizing
the total squared deviation.
For example, g(x) may be a polynomial of some degree or an exponential or
logarithmic function. Thus g(x) may be any of the following:
(i) g(x) = α + βx (ii) g(x) = α + βx + γx²
(iii) g(x) = αe^(βx) (iv) g(x) = α + βe^x
(v) g(x) = α + β log(x)
Here α, β, γ are parameters which are to be evaluated so that the curve y =
g(x), fits the data well. A measure of how well the curve fits is called the goodness
of fit.
In the case of least square fit, the parameters are evaluated by solving a system
of normal equations, derived from the conditions to be satisfied so that the sum of
the squared deviations of the estimated values from the observed values, is minimum.

Method of Least Squares


Let (x1, f1), (x2, f2),..., (xn, fn) be a set of observed values and g(x) be the
approximating function. We form the sums of the squares of the deviations of the
observed values fi from the estimated values g (xi),
i.e., S = Σᵢ₌₁ⁿ {fᵢ − g(xᵢ)}²    (7.1)
The function g(x) may have some parameters α, β, γ. In order to determine
these parameters we have to form the necessary conditions for S to be minimum,
which are:

∂S/∂α = 0, ∂S/∂β = 0, ∂S/∂γ = 0    (7.2)

These equations are called normal equations, solving which we get the
parameters for the best approximate function g(x).

Curve Fitting by a Straight Line: Let g(x) = α + βx, be the straight line which


fits a set of observed data points (xi, yi), i = 1, 2, ..., n.
Let S be the sum of the squares of the deviations g(xi) – yi, i = 1, 2,...,n, given
by,
S = Σᵢ₌₁ⁿ (α + βxᵢ − yᵢ)²    (7.3)

We now employ the method of least squares to determine α and β so that S
will be minimum. The normal equations are,
∂S/∂α = 0, i.e., Σᵢ₌₁ⁿ (α + βxᵢ − yᵢ) = 0    (7.4)
And, ∂S/∂β = 0, i.e., Σᵢ₌₁ⁿ xᵢ(α + βxᵢ − yᵢ) = 0    (7.5)

These conditions give,
nα + βS₁ − S₀₁ = 0
αS₁ + βS₂ − S₁₁ = 0
where S₁ = Σᵢ₌₁ⁿ xᵢ, S₀₁ = Σᵢ₌₁ⁿ yᵢ, S₂ = Σᵢ₌₁ⁿ xᵢ², S₁₁ = Σᵢ₌₁ⁿ xᵢyᵢ
Solving,
β = (nS₁₁ − S₁S₀₁)/(nS₂ − S₁²). Also, α = S₀₁/n − βS₁/n.
Algorithm: Fitting a straight line y = a + bx.
Step 1: Read n [n being the number of data points]
Step 2: Initialize : sum x = 0, sum x2 = 0, sum y = 0, sum xy = 0
Step 3: For j = 1 to n compute
Begin
Read data xj, yj
Compute sum x = sum x + xj
Compute sum x2 = sum x2 + xj × xj
Compute sum y = sum y + yj
Compute sum xy = sum xy + xj × yj
End
Step 4: Compute b = (n × sum xy – sum x × sum y)/(n × sum x2 – (sum x)²)
Step 5: Compute x bar = sum x / n
Step 6: Compute y bar = sum y / n
Step 7: Compute a = y bar – b × x bar
Step 8: Write a, b
Step 9: For j = 1 to n
Begin
Compute y estimate = a + b × xj
write xj, yj, y estimate
End
Step 10: Stop
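The normal equations of this algorithm can be coded compactly; `fit_line` is an illustrative name, not from the text:

```python
def fit_line(xs, ys):
    """Least-squares line y = a + b*x from the normal equations."""
    n = len(xs)
    s1 = sum(xs)                                 # sum of x_i
    s01 = sum(ys)                                # sum of y_i
    s2 = sum(x * x for x in xs)                  # sum of x_i^2
    s11 = sum(x * y for x, y in zip(xs, ys))     # sum of x_i * y_i
    b = (n * s11 - s1 * s01) / (n * s2 - s1 * s1)
    a = (s01 - b * s1) / n
    return a, b
```

On the data of Example 14 this gives a ≈ 15.448 and b ≈ −0.429.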
Curve Fitting by a Quadratic (A Parabola): Let g(x) = a + bx + cx², be the
approximating quadratic to fit a set of data (xi, yi), i = 1, 2, ..., n. Here the
parameters are to be determined by the method of least squares, i.e., by minimizing
the sum of the squares of the deviations given by,
S = Σᵢ₌₁ⁿ (a + bxᵢ + cxᵢ² − yᵢ)²    (7.6)
Thus the normal equations, ∂S/∂a = 0, ∂S/∂b = 0, ∂S/∂c = 0, are as follows:    (7.7)
Σᵢ₌₁ⁿ (a + bxᵢ + cxᵢ² − yᵢ) = 0
Σᵢ₌₁ⁿ xᵢ(a + bxᵢ + cxᵢ² − yᵢ) = 0
Σᵢ₌₁ⁿ xᵢ²(a + bxᵢ + cxᵢ² − yᵢ) = 0    (7.8)

These equations can be rewritten as,
na + s₁b + s₂c − s₀₁ = 0
s₁a + s₂b + s₃c − s₁₁ = 0
s₂a + s₃b + s₄c − s₂₁ = 0    (7.9)
where s₁ = Σxᵢ, s₂ = Σxᵢ², s₃ = Σxᵢ³, s₄ = Σxᵢ⁴,
s₀₁ = Σyᵢ, s₁₁ = Σxᵢyᵢ, s₂₁ = Σxᵢ²yᵢ (all sums taken over i = 1 to n)    (7.10)
It is clear that the normal equations form a system of linear equations in the
unknown parameters a, b, c. The computation of the coefficients of the normal
equations can be made in a tabular form for desk computations as shown below.

i    xᵢ    yᵢ    xᵢ²    xᵢ³    xᵢ⁴    xᵢyᵢ    xᵢ²yᵢ
1    x₁    y₁    x₁²    x₁³    x₁⁴    x₁y₁    x₁²y₁
2    x₂    y₂    x₂²    x₂³    x₂⁴    x₂y₂    x₂²y₂
...  ...   ...   ...    ...    ...    ...     ...
n    xₙ    yₙ    xₙ²    xₙ³    xₙ⁴    xₙyₙ    xₙ²yₙ
Sum  s₁    s₀₁   s₂     s₃     s₄     s₁₁     s₂₁

The system of linear equations can be solved by Gaussian elimination method.


It may be noted that number of normal equations is equal to the number of unknown
parameters.
Example 14: Find the straight line fitting the following data:

xi 4 6 8 10 12
yi 13.72 12.90 12.01 11.14 10.31

Solution: Let y = a + bx, be the straight line which fits the data. We have the
normal equations ∂S/∂a = 0, ∂S/∂b = 0 for determining a and b, where
S = Σᵢ₌₁⁵ (yᵢ − a − bxᵢ)².
Thus, Σᵢ₌₁⁵ yᵢ − na − b Σᵢ₌₁⁵ xᵢ = 0
and, Σᵢ₌₁⁵ xᵢyᵢ − a Σᵢ₌₁⁵ xᵢ − b Σᵢ₌₁⁵ xᵢ² = 0
The coefficients are computed in the table below.

xi yi xi2 xi yi
4 13.72 16 54.88
6 12.90 36 77.40
8 12.01 64 96.08
10 11.14 100 111.40
12 10.31 144 123.72
Sum 40 60.08 360 463.48

Thus the normal equations are,


5a + 40b − 60.08 = 0
40a + 360b − 463.48 = 0
Solving these two equations we obtain,
a = 15.448, b = −0.429
Thus, y = g(x) = 15.448 – 0.429x, is the straight line fitting the data.
Example 15: Use the method of least square approximation to fit a straight line to
the following observed data:

xi 60 61 62 63 64
yi 40 42 48 52 55

Solution: Let the straight line fitting the data be y = a + bx. The data values being
large, we can use a change in variable by substituting u = x – 62 and v = y – 48.
Let v = A + B u, be a straight line fitting the transformed data, where the
normal equations for A and B are,
Σᵢ₌₁⁵ vᵢ = 5A + B Σᵢ₌₁⁵ uᵢ
Σᵢ₌₁⁵ uᵢvᵢ = A Σᵢ₌₁⁵ uᵢ + B Σᵢ₌₁⁵ uᵢ²

The computation of the various sums are given in the table below,

xi yi ui vi uᵢvᵢ uᵢ²
60 40 –2 –8 16 4
61 42 −1 −6 6 1
62 48 0 0 0 0
63 52 1 4 4 1
64 55 2 7 14 4
Sum 0 −3 40 10

Thus the normal equations are,
−3 = 5A and 40 = 10B
giving A = −3/5 and B = 4.
This gives the line, v = −3/5 + 4u
Or, 20u − 5v − 3 = 0.
Transforming we get the line,
20 (x – 62) – 5 (y – 48) – 3 = 0
Or, 20 x – 5y – 1003 = 0
Curve Fitting with an Exponential Curve: We consider a two parameter
exponential curve as,
y = a e^(−bx)    (7.11)
For determining the parameters, we can apply the principle of least squares
by first using the transformation,
z = log y    (7.12)
so that Equation (7.11) is rewritten as,
z = log a − bx    (7.13)
Thus, we have to fit a linear curve of the form z = α + βx in the z – x variables,
and then get the parameters a and b as,
a = e^α, b = −β    (7.14)
Thus proceeding as in linear curve fitting,
β = [n Σᵢ₌₁ⁿ xᵢ log yᵢ − (Σᵢ₌₁ⁿ xᵢ)(Σᵢ₌₁ⁿ log yᵢ)] / [n Σᵢ₌₁ⁿ xᵢ² − (Σᵢ₌₁ⁿ xᵢ)²]    (7.15)
And, α = z̄ − βx̄, where x̄ = (Σᵢ₌₁ⁿ xᵢ)/n, z̄ = (Σᵢ₌₁ⁿ log yᵢ)/n    (7.16)
After computing α and β, we can determine a and b given by Equation (7.14).
Finally, the exponential curve fitting the data set is given by Equation (7.11).
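A sketch of this procedure, assuming natural logarithms (so that α = log a); the function name is our own:

```python
import math

def fit_exp(xs, ys):
    """Fit y = a*exp(-b*x) by least squares on z = log y (natural log)."""
    zs = [math.log(y) for y in ys]
    n = len(xs)
    sx, sz = sum(xs), sum(zs)
    sxx = sum(x * x for x in xs)
    sxz = sum(x * z for x, z in zip(xs, zs))
    beta = (n * sxz - sx * sz) / (n * sxx - sx * sx)  # slope of z against x
    alpha = (sz - beta * sx) / n                      # intercept, equals log a
    return math.exp(alpha), -beta                     # a = e^alpha, b = -beta
```

Fed data generated from an exact exponential, the fit recovers the parameters to machine accuracy.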
Algorithm: To fit a straight line for a given set of data points by least square error
method.
Step 1: Read the number of data points, i.e., n
Step 2: Read values of data-points, i.e., Read (xi, yi) for i = 1, 2,..., n
Step 3: Initialize the sums to be computed for the normal equations,
i.e., sx = 0, sx2 = 0, sy = 0, sxy = 0
Step 4: Compute the sums, i.e., For i = 1 to n do
Begin
sx = sx + xi
sx2 = sx2 + xi × xi
sy = sy + yi
sxy = sxy + xi × yi
End
Step 5: Solve the normal equations, i.e., solve for a and b of the line y = a +
bx
Compute d = n × sx2 − sx × sx
b = (n × sxy − sy × sx)/d
xbar = sx/n
ybar = sy/n
a = ybar − b × xbar
Step 6: Print values of a and b
Step 7: Print a table of values of
xi, yi, ypi = a + bxi for i = 1, 2, ..., n
Step 8: Stop
Algorithm: To fit a parabola y = a + bx + cx², for a given set of data points by
least square error method.
Step 1: Read n, the number of data points
Step 2: Read (xi, yi) for i = 1, 2,..., n, the values of data points
Step 3: Initialize the sum to be computed for the normal equations,
i.e., sx = 0, sx2 = 0, sx3 = 0, sx4 = 0, sy = 0, sxy = 0, sx2y = 0.
Step 4: Compute the sums, i.e., For i = 1 to n do
Begin
sx = sx + xi
x2 = xi × xi
sx2 = sx2 + x2
sx3 = sx3 + xi × x2
sx4 = sx4 + x2 × x2
sy = sy + yi
sxy = sxy + xi × yi
sx2y = sx2y + x2 × yi
End
Step 5: Form the coefficients {aij } matrix of the normal equations, i.e.,

a11 = n, a21 = sx, a31 = sx2
a12 = sx, a22 = sx2, a32 = sx3
a13 = sx2, a23 = sx3, a33 = sx4

Step 6: Form the constant vector of the normal equations.


b1 = sy, b2 = sxy, b3 = sx2y
Step 7: Solve the normal equation by Gauss-Jordan method

a12 = a12 / a11 , a13 = a13 / a11 , b1 = b1 / a11
a 22 = a 22 − a 21a12 , a23 = a 23 − a21a13
b2 = b2 − b1a21
a32 = a32 − a31a12
a33 = a33 − a31a13
b3 = b3 − b1a31
a 23 = a23 / a 22
b2 = b2 / a 22
a33 = a33 − a23 a32
b3 = b3 − a32 b2
c = b3 / a33
b = b2 − c a23
a = b1 − b a12 − c a13
Step 8: Print values of a, b, c (the coefficients of the parabola)
Step 9: Print the table of values of xk, yk and ypk, where ypk = a + b xk + c xk²,
i.e., print xk, yk, ypk for k = 1, 2, ..., n.


Step 10: Stop.
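The parabola fit can be sketched as follows; for brevity this solves the 3 × 3 normal system by Cramer's rule rather than the Gauss-Jordan elimination of Step 7, which is equivalent for a system this small (the function names are our own):

```python
def fit_parabola(xs, ys):
    """Least-squares parabola y = a + b*x + c*x**2 via the normal equations."""
    n = len(xs)
    s1 = sum(xs); s2 = sum(x**2 for x in xs)
    s3 = sum(x**3 for x in xs); s4 = sum(x**4 for x in xs)
    t0 = sum(ys)
    t1 = sum(x * y for x, y in zip(xs, ys))
    t2 = sum(x * x * y for x, y in zip(xs, ys))

    def det3(m):
        # determinant of a 3x3 matrix by cofactor expansion
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    D = det3([[n, s1, s2], [s1, s2, s3], [s2, s3, s4]])
    Da = det3([[t0, s1, s2], [t1, s2, s3], [t2, s3, s4]])
    Db = det3([[n, t0, s2], [s1, t1, s3], [s2, t2, s4]])
    Dc = det3([[n, s1, t0], [s1, s2, t1], [s2, s3, t2]])
    return Da / D, Db / D, Dc / D
```

Fed points lying on an exact parabola, the routine recovers its coefficients.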

Check Your Progress


9. How is approximating function found in the method of least squares?
10. Explain the curve fitting by a straight line.

7.4 ANSWERS TO CHECK YOUR PROGRESS


QUESTIONS

1. The floating point representation of a number consists of mantissa and


exponent.
2. The errors in a numerical solution are basically of two types. They are
truncation error and computational error. The error which is inherent in the
numerical method employed for finding numerical solution is called the
truncation error. The computational error arises while doing arithmetic
computation due to representation of numbers with a finite number of decimal
digits.
3. Inherent errors are errors in the data which are obtained by physical
measurement and are due to limitations of the measuring instrument. The
analysis of errors in the computed result due to the inherent errors in data is
similar to that of round-off errors.
4. The propagated round-off error in the sum of two approximate numbers
(having round-off errors) is equal to the sum of the round-off errors in the
individual numbers.
5. During arithmetic computations of approximate numbers having fixed
precision, there may be loss of significant digits in some cases. The error
due to loss of significant digits is termed as significance error.
6. There are two situations when loss of significant digits occurs. These are,
(i) Subtraction of two nearly equal numbers
(ii) Division by a very small divisor compared to the dividend
7. To get a numerical solution on a computer, one has to write an algorithm.
8. For solving problems with the help of a computer, one should first analyse
the mathematical formulation of the problem and consider a suitable numerical
method for solving it. The next step is to write an algorithm for implementing
the method.
9. Let (x1, f1), (x2, f2),..., (xn, fn) be a set of observed values and g(x) be the
approximating function. We form the sums of the squares of the deviations
of the observed values f i from the estimated values g (x i ),
i.e., S = Σᵢ₌₁ⁿ {fᵢ − g(xᵢ)}²

The function g(x) may have some parameters α, β, γ. In order to determine
these parameters we have to form the necessary conditions for S to be
minimum, which are:
∂S/∂α = 0, ∂S/∂β = 0, ∂S/∂γ = 0

These equations are called normal equations, solving which we get the
parameters for the best approximate function g(x).
10. Let g(x) = α + βx, be the straight line which fits a set of observed data
points (xᵢ, yᵢ), i = 1, 2, ..., n.
Let S be the sum of the squares of the deviations g(xᵢ) – yᵢ, i = 1, 2, ..., n,
given by,
S = Σᵢ₌₁ⁿ (α + βxᵢ − yᵢ)²

7.5 SUMMARY

Numerical methods are methods used for solving problems through numerical
calculations providing a table of numbers and/or graphical representations
or figures. Numerical methods emphasize that how the algorithms are
implemented.
To perform a numerical calculation, the numbers involved are first approximated by a
representation involving a finite number of significant digits. If the numbers
to be represented are very large or very small, then they are written in
floating point notation.
The Institute of Electrical and Electronics Engineers (IEEE) has published a
standard for binary floating point arithmetic.
In approximate representation of numbers, the number is represented with
a finite number of digits. All the digits in the usual decimal representation
may not be significant while considering the accuracy of the number.
In a floating representation, a number is represented with a finite number of
significant digits having a floating decimal point.
Floating point representation of a number consists of mantissa and exponent.
The errors in a numerical solution are basically of two types termed as
truncation error and computational error.
The error which is inherent in the numerical method employed for finding
numerical solution is called the truncation error.
The truncation error arises due to the replacement of an infinite process
such as summation or integration by a finite one.
Inherent errors are errors in the data which are obtained by physical
measurement and are due to limitations of the measuring instrument.
The analysis of errors in the computed result due to the inherent errors in
data is similar to that of round-off errors.
Significance error is more serious than round-off errors.
Iteration is the numerical method applied repeatedly to get better results till
the solution is obtained up to a desired accuracy.
An algorithm is a sequence of unambiguous steps used to solve a given
problem.
It is possible to write more than one algorithm to solve a specific problem.
The algorithm analysis involves checking their correctness, robustness,
efficiency and other characteristics.
Single precision floating point format is a computer number format that
occupies 4 bytes (32-bits) in computer memory and denotes wide range of
values using a floating point.
Double precision refers to a specific floating point number that has more
precision, i.e., more digits to the right of the decimal point than a single
precision number.

7.6 KEY WORDS

Truncation error: This error is inherent in the numerical method employed


for finding numerical solution. It occurs due to the replacement of an infinite
process such as summation or integration by a finite one.
Computational error: This error occurs during arithmetic computation
due to representation of numbers having a finite number of decimal digits.
Inherent error: This error occurs in the data type which is obtained using
physical measurement and also due to limitations of the measuring instruments.
Significance error: This error occurs due to loss of significant digits.
Algorithm: It is a sequence of finite steps used to solve a given problem.

7.7 SELF ASSESSMENT QUESTIONS AND


EXERCISES

Short-Answer Questions
1. What are floating point numbers?

2. Find the percentage error in approximating 5/6 by 0.8333 correct upto four
significant figures.
3. Write the characteristics of numerical computation.
4. Find the relative error in the computation of x – y for x = 12.05 and y =
8.02 having absolute errors Δx = 0.005 and Δy = 0.001.

5. Find the percentage error in computing y = 3x² − 6x at x = 1, if the error in
x is 0.05.
6. Given a = 1.135 and b = 1.075 having absolute errors Δa = 0.011 and Δb
= 0.12. Estimate the relative percentage error in the computation of a – b.
7. Find the percentage error in taking 1.33 as approximation for 4/3.
8. The length a and breadth b of a plate measured accurate to 1 cm as a =
5.43 m and b = 3.82 m. Estimate the area of the plate and estimate its
absolute error.
9. How many significant digits are present in each of the following approximate
numbers:
10.54113, 5.4113, 0.054113, 0.00541

Long-Answer Questions
1. Round-off the following numbers to three decimal places:
(i) 0.230582 (ii) 0.00221118 (iii) 2.3645 (iv) 1.3455
2. Round-off the following numbers to four significant figures:
(i) 49.3628 (ii) 0.80022 (iii) 8.9325 (iv) 0.032588
(v) 0.0029417 (vi) 0.00010211 (vii) 410.99
3. Round-off each of the following numbers to three significant figures and
indicate the absolute error in each.
(i) 49.3628 (ii) 0.9002 (iii) 8.325 (iv) 0.0039417
4. Find the sum of the following approximate numbers, correct to the last
digits.
0.348, 0.1834, 345.4, 235.2, 11.75, 0.0849, 0.0214, 0.0002435
5. Find the number of correct significant digits in the approximate number
11.2461. Given is its absolute error = 0.25 × 10–2.
6. Given are the following approximate numbers with their relative errors.
Determine the absolute errors.
(i) xA = 12165, δR = 0.1% (ii) xA = 3.23, δR = 0.6%
(iii) xA = 0.798, δR = 10% (iv) xA = 67.84, δR = 1%

7. Round-off the following numbers to four significant digits.


(i) 450.92 (ii) 48.3668 (iii) 9.3265 (iv) 8.4155
(v) 0.80012 (vi) 0.042514 (vii) 0.0049125
(viii) 0.00020215
8. Write the following numbers in floating-point form rounded to four significant
digits.
(i) 100000 (ii) – 0.0022136 (iii) – 35.666
9. Determine the number of correct digits in the number x in each of the
following (the relative errors are given).
(i) x = 0.2217, δR = 0.2 × 10⁻¹ (ii) x = 32.541, δR = 0.1
(iii) x = 0.12432, δR = 10% (iv) x = 0.58632, δR = 1%

10. Find the percentage error in computing z = √x for x = 4.44, if x is correct
to its last digit only.
11. Let u = 4x6 + 3x – 9. Find the relative percentage error in computing u at
x = 1.1, if the error in x is 0.05.

12. In the formula R = r²/(2h) + h/2, find the absolute error in
computing R for r = 48 mm and h = 56 mm, due to errors of 1 mm in r and
0.2 mm in h. NOTES
13. Find the smaller root of 0.001x2 + 100.1x + 10000 = 0, with the help of
the usual formula and round-off to six significant digits. Compare with the
correct answer x = –100.0.
14. Find the roots of the quadratic equation x2 – 100x – 0.1 = 0, with the help
of the usual formulae and show the significance error in the result.

7.8 FURTHER READINGS

Jain, M. K., S. R. K. Iyengar and R. K. Jain. 2007. Numerical Methods for


Scientific and Engineering Computation. New Delhi: New Age
International (P) Limited.
Atkinson, Kendall E. 1989. An Introduction to Numerical Analysis, 2nd Edition.
US: John Wiley & Sons.
Jain, M. K. 1983. Numerical Solution of Differential Equations. New Delhi:
New Age International (P) Limited.
Conte, Samuel D. and Carl de Boor. 1980. Elementary Numerical Analysis:
An Algorithmic Approach. New York: McGraw-Hill.
Skeel, Robert. D and Jerry B. Keiper. 1993. Elementary Numerical Computing
with Mathematica. New York: McGraw-Hill.
Balaguruswamy, E. 1999. Numerical Methods. New Delhi: Tata McGraw-Hill.
Datta, N. 2007. Computer Oriented Numerical Methods. New Delhi: Vikas
Publishing House Pvt. Ltd.

UNIT 8 NUMERICAL INTEGRATION AND NUMERICAL DIFFERENTIATION
Structure
8.0 Introduction
8.1 Objectives
8.2 Numerical Integration
8.3 Numerical Differentiation
8.4 Optimum Choice of Step Length
8.5 Extrapolation Method
8.6 Answers to Check Your Progress Questions
8.7 Summary
8.8 Key Words
8.9 Self Assessment Questions and Exercises
8.10 Further Readings

8.0 INTRODUCTION

Numerical integration methods can generally be described as combining evaluations


of the integrand to get an approximation to the integral. The integrand is evaluated
at a finite set of points called integration points and a weighted sum of these values
is used to approximate the integral. The integration points and weights depend on
the specific method used and the accuracy required from the approximation.
Modern numerical integration methods based on information theory have been
developed to simulate information systems such as computer controlled systems,
communication systems, and control systems.
In numerical analysis, numerical differentiation is the process of finding the
numerical value of a derivative of a given function at a given point. It is the process
of computing the derivatives of a function f(x) when the function is not explicitly
known, but the values of the function are known only at a given set of arguments x
= x_0, x_1, x_2, ..., x_n. For finding the derivatives, a suitable interpolating polynomial is
used and then its derivatives are used as the formulae for the derivatives of the
function. Thus, for computing the derivatives at a point near the beginning of an
equally spaced table, Newton’s forward difference interpolation formula is used,
whereas Newton’s backward difference interpolation formula is used for computing
the derivatives at a point near the end of the table.
In this unit, you will study about the numerical integration, numerical
differentiation and extrapolation method.
8.1 OBJECTIVES

After going through this unit, you will be able to:


Understand the various numerical integration methods
Explain numerical differentiation
Analyse the extrapolation method

8.2 NUMERICAL INTEGRATION


The evaluation of a definite integral cannot be carried out when the integrand f (x)
is not integrable, as well as when the function is not explicitly known but only the
function values are known at a finite number of values of x. However, the value of
the integral can be determined numerically by applying numerical methods. There
are two types of numerical methods for evaluating a definite integral based on the
following formula.
\int_a^b f(x)\,dx   (8.1)

They are termed as Newton-Cotes quadrature and Gaussian quadrature. We first confine our attention to Newton-Cotes quadrature, which is based on integrating polynomial interpolation formulae. This quadrature requires a table of values of the integrand at equally spaced values of the independent variable x.

Newton-Cotes General Quadrature


We start with Newton’s forward difference interpolation formula which uses a
table of values of f (x) at equally spaced points in the interval [a, b]. Let the
interval [a, b] be divided into n equal sub-intervals such that,
a = x_0, x_i = x_0 + ih, for i = 1, 2, ..., n – 1, x_n = b (8.2)
so that, nh = b–a
Newton’s forward difference interpolation formula is,
\phi(s) = f_0 + s\,\Delta f_0 + \frac{s(s-1)}{2!}\Delta^2 f_0 + \frac{s(s-1)(s-2)}{3!}\Delta^3 f_0 + \dots + \frac{s(s-1)(s-2)\cdots(s-n+1)}{n!}\Delta^n f_0   (8.3)
Where s = \frac{x - x_0}{h}
Replacing f(x) by \phi(s) in Equation (8.1), we get
\int_{x_0}^{x_n} f(x)\,dx = h \int_0^n \Big[f_0 + s\,\Delta f_0 + \frac{s(s-1)}{2!}\Delta^2 f_0 + \dots\Big]\,ds
since when x = x_0, s = 0; when x = x_n, s = n; and dx = h\,ds.


Self-Instructional
Material 173
Performing the integration on the RHS we have,
\int_{x_0}^{x_n} f(x)\,dx = h\Big[n f_0 + \frac{n^2}{2}\Delta f_0 + \frac{1}{2}\Big(\frac{n^3}{3} - \frac{n^2}{2}\Big)\Delta^2 f_0 + \frac{1}{6}\Big(\frac{n^4}{4} - n^3 + n^2\Big)\Delta^3 f_0 + \frac{1}{24}\Big(\frac{n^5}{5} - \frac{3n^4}{2} + \frac{11n^3}{3} - 3n^2\Big)\Delta^4 f_0 + \dots\Big]
(8.4)
We can derive different integration formulae by taking particular values of
n = 1, 2, 3, .... Again, on replacing the differences, the Newton-Cotes formula can
be expressed in terms of the function values at x0, x1,..., xn, as
\int_{x_0}^{x_n} f(x)\,dx = h \sum_{k=0}^{n} c_k f(x_k)   (8.5)

The error in the Newton-Cotes formula is given by,
E_n = \frac{h^{n+2}}{(n+1)!}\,f^{(n+1)}(\xi) \int_0^n s(s-1)\cdots(s-n)\,ds   (8.6)

Trapezoidal Formula of Numerical Integration


Taking n = 1 in Equation (8.4), we get the trapezoidal formula given by,
\int_{x_0}^{x_1} f(x)\,dx = h\Big[f_0 + \frac{1}{2}\Delta f_0\Big]

since all other differences of higher order are absent.


Replacing \Delta f_0 by f_1 - f_0, we have
\int_{x_0}^{x_1} f(x)\,dx = \frac{h}{2}[f_0 + f_1]   (8.7)

This is termed as trapezoidal formula of numerical integration.


This formula can be geometrically interpreted as follows: the definite integral of the function f(x) between the limits x_0 and x_1 is approximated by the area of the
trapezoidal region bounded by the chord joining the points (x0, f0) and (x1, f1), the
x-axis and the ordinates at x = x0 and at x = x1. This is represented by the shaded
area as shown in the Figure 8.1.

Fig. 8.1 Trapezoidal Region


Thus, the area under the curve y = f (x) is replaced by the area under the
chord joining the points.
The error in the trapezoidal formula is given by,
E_T = \frac{h^3}{2} f''(\xi) \int_0^1 s(s-1)\,ds = -\frac{h^3}{12} f''(\xi), \quad \text{where } x_0 < \xi < x_1   (8.8)

Trapezoidal Rule
For evaluating the integral \int_{x_0}^{x_n} f(x)\,dx, we have to sum the integrals for each of the sub-intervals (x_0, x_1), (x_1, x_2), ..., (x_{n-1}, x_n). Thus,
\int_{x_0}^{x_n} f(x)\,dx = \frac{h}{2}\big[(f_0 + f_1) + (f_1 + f_2) + \dots + (f_{n-1} + f_n)\big]
Or \int_{x_0}^{x_n} f(x)\,dx = \frac{h}{2}\big[f_0 + 2(f_1 + f_2 + \dots + f_{n-1}) + f_n\big]   (8.9)

This is known as trapezoidal rule of numerical integration.


The error in the trapezoidal rule is,
E_T^n = \int_{x_0}^{x_n} f(x)\,dx - \frac{h}{2}\big[f_0 + 2(f_1 + f_2 + \dots + f_{n-1}) + f_n\big]
= -\frac{h^3}{12}\big[f''(\xi_1) + f''(\xi_2) + \dots + f''(\xi_n)\big]
Where x_0 < \xi_1 < x_1, \; x_1 < \xi_2 < x_2, \; \dots, \; x_{n-1} < \xi_n < x_n

Thus, we can write
E_T^n = -\frac{h^3}{12}\,\big[n\,\overline{f''}(\xi)\big], \quad \overline{f''}(\xi) \text{ being the mean of } f''(\xi_1), f''(\xi_2), \dots, f''(\xi_n)
= -nh \cdot \frac{h^2}{12}\,f''(\xi)
E_T^n = -(b-a)\frac{h^2}{12} f''(\xi), \quad \text{since } nh = b - a, \text{ where } x_0 < \xi < x_n
(8.10)
Algorithm: Evaluation of \int_a^b f(x)\,dx by trapezoidal rule.
Step 1: Define function f (x)
Step 2: Initialize a, b, n
Step 3: Compute h = (b–a)/n
Step 4: Set x = a, S = 0
Step 5: Compute x = x + h
Step 6: Compute S = S + f (x)
Step 7: Check if x < b – h, then go to Step 5 else go to the next step
Step 8: Compute I = h (S + (f (a) + f (b))/2)
Step 9: Output I, n
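The steps above translate directly into a short routine (a minimal Python sketch; the function name `trapezoidal` and the test integrand are our own choices):

```python
def trapezoidal(f, a, b, n):
    """Composite trapezoidal rule over [a, b] with n equal sub-intervals."""
    h = (b - a) / n
    # interior ordinates f1 ... f(n-1)
    s = sum(f(a + i * h) for i in range(1, n))
    return h * (s + (f(a) + f(b)) / 2.0)

# Example 2 of this unit: integral of 4x - 3x^2 over [0, 1] with n = 10
print(round(trapezoidal(lambda x: 4*x - 3*x**2, 0.0, 1.0, 10), 3))  # 0.995
```

The loop accumulates only the interior points, mirroring Steps 4–7 of the algorithm.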

Simpson’s One-Third Formula


Taking n = 2 in the Newton-Cotes formula in Equation (8.4), we get Simpson’s
one-third formula of numerical integration given by,
\int_{x_0}^{x_2} f(x)\,dx = h\Big[2 f_0 + \frac{2^2}{2}\Delta f_0 + \frac{1}{12}(2 \cdot 2^3 - 3 \cdot 2^2)\Delta^2 f_0\Big]
= h\Big[2 f_0 + 2(f_1 - f_0) + \frac{1}{3}(f_2 - 2 f_1 + f_0)\Big]   (8.11)
\int_{x_0}^{x_2} f(x)\,dx = \frac{h}{3}[f_0 + 4 f_1 + f_2]
This is known as Simpson’s one-third formula of numerical integration.
The error in Simpson’s one-third formula is defined as,
E_S = \int_{x_0}^{x_2} f(x)\,dx - \frac{h}{3}(f_0 + 4 f_1 + f_2)

Assuming F'(x) = f(x), we obtain:
E_S = F(x_2) - F(x_0) - \frac{h}{3}(f_0 + 4 f_1 + f_2)
Expanding F(x_2) = F(x_0 + 2h), f_1 = f(x_0 + h) and f_2 = f(x_0 + 2h) in powers of h, we have:
E_S = 2h F'(x_0) + \frac{(2h)^2}{2!} F''(x_0) + \frac{(2h)^3}{3!} F'''(x_0) + \frac{(2h)^4}{4!} F^{iv}(x_0) + \frac{(2h)^5}{5!} F^{v}(x_0) + \dots
\quad - \frac{h}{3}\Big[f_0 + 4\Big(f_0 + h f_0' + \frac{h^2}{2!} f_0'' + \dots\Big) + \Big(f_0 + 2h f_0' + \frac{(2h)^2}{2!} f_0'' + \dots\Big)\Big]
E_S = -\frac{h^5}{90} f^{iv}(\xi), \text{ on simplification, where } x_0 < \xi < x_2
(8.12)
Geometrical interpretation of Simpson’s one-third formula is that the integral
represented by the area under the curve is approximated by the area under the
parabola through the points (x0, f0), (x1, f1) and (x2, f2) shown in Figure 8.2.
Fig. 8.2 Simpson's One-Third Integration

Simpson’s One-Third Rule


On dividing the interval [a, b] into 2m sub-intervals by points x0 = a, x1 = a + h,
x2 = a + 2h, ..., x2m = a+2mh, where b = x2m and h = (b–a)/(2m), and using
Simpson’s one-third formula in each pair of consecutive sub-intervals, we have
\int_a^b f(x)\,dx = \int_{x_0}^{x_2} f(x)\,dx + \int_{x_2}^{x_4} f(x)\,dx + \dots + \int_{x_{2m-2}}^{x_{2m}} f(x)\,dx
= \frac{h}{3}\big[(f_0 + 4 f_1 + f_2) + (f_2 + 4 f_3 + f_4) + (f_4 + 4 f_5 + f_6) + \dots + (f_{2m-2} + 4 f_{2m-1} + f_{2m})\big]
\int_a^b f(x)\,dx = \frac{h}{3}\big[f_0 + 4(f_1 + f_3 + f_5 + \dots + f_{2m-1}) + 2(f_2 + f_4 + f_6 + \dots + f_{2m-2}) + f_{2m}\big]

(8.13)
Numerical Integration and This is known as Simpson’s one-third rule of numerical integration.
Numerical Differentiation
The error in this formula is given by the sum of the errors in each pair of
intervals as,
E_S^{2m} = -\frac{h^5}{90}\big[f^{iv}(\xi_1) + f^{iv}(\xi_2) + \dots + f^{iv}(\xi_m)\big]
Which can be rewritten as,
E_S^{2m} = -\frac{h^5}{90}\,m\,\overline{f^{iv}}(\xi), \quad \overline{f^{iv}}(\xi) \text{ being the mean of } f^{iv}(\xi_1), f^{iv}(\xi_2), \dots, f^{iv}(\xi_m)
Since 2mh = b - a, we have
E_S^{2m} = -(b-a)\frac{h^4}{180} f^{iv}(\xi), \quad \text{where } a < \xi < b.
(8.14)
Algorithm: Evaluation of \int_a^b f(x)\,dx by Simpson's one-third rule.

Step 1: Define f (x)
Step 2: Input a, b, n (even)
Step 3: Compute h = (b–a)/n
Step 4: Compute S1 = f (a) + f (b)
Step 5: Set S2 = 0, S4 = 0, x = a
Step 6: Compute x = x + 2h
Step 7: If x > b – h, go to Step 10 else go to the next step
Step 8: Compute S2 = S2 + f (x)
Step 9: Go to Step 6
Step 10: Set x = a + h
Step 11: If x > b, go to Step 14 else go to the next step
Step 12: Compute S4 = S4 + f (x)
Step 13: Compute x = x + 2h and go to Step 11
Step 14: Compute I = (S1 + 4S4 + 2S2)h/3
Step 15: Write I, n
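The odd/even bookkeeping of the algorithm can be compressed in Python (a minimal sketch; the function name `simpson` is our own choice):

```python
def simpson(f, a, b, n):
    """Composite Simpson's one-third rule; n must be even."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    s4 = sum(f(a + i * h) for i in range(1, n, 2))   # odd ordinates (weight 4)
    s2 = sum(f(a + i * h) for i in range(2, n, 2))   # even interior ordinates (weight 2)
    return h / 3.0 * (f(a) + f(b) + 4.0 * s4 + 2.0 * s2)

# Example 4 of this unit: x^3 - 2x^2 + 1 on [0, 4] with h = 1 (n = 4)
print(round(simpson(lambda x: x**3 - 2*x**2 + 1, 0.0, 4.0, 4), 4))  # 25.3333
```

Since the error term contains the fourth derivative, the result equals the exact value 76/3 for this cubic integrand.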

Self-Instructional
178 Material
Simpson’s Three-Eighth Formula Numerical Integration and
Numerical Differentiation
Taking n = 3, Newton-Cotes formula can be written as,
\int_{x_0}^{x_3} f(x)\,dx = h \int_0^3 \Big(f_0 + u\,\Delta f_0 + \frac{u(u-1)}{2!}\Delta^2 f_0 + \frac{u(u-1)(u-2)}{3!}\Delta^3 f_0\Big)\,du
= h\Big[u f_0 + \frac{u^2}{2}\Delta f_0 + \frac{1}{2}\Big(\frac{u^3}{3} - \frac{u^2}{2}\Big)\Delta^2 f_0 + \frac{1}{6}\Big(\frac{u^4}{4} - u^3 + u^2\Big)\Delta^3 f_0\Big]_0^3
= h\Big[3 y_0 + \frac{9}{2}\Delta y_0 + \frac{9}{4}\Delta^2 y_0 + \frac{3}{8}\Delta^3 y_0\Big]
= h\Big[3 y_0 + \frac{9}{2}(y_1 - y_0) + \frac{9}{4}(y_2 - 2 y_1 + y_0) + \frac{3}{8}(y_3 - 3 y_2 + 3 y_1 - y_0)\Big]
\int_{x_0}^{x_3} f(x)\,dx = \frac{3h}{8}(y_0 + 3 y_1 + 3 y_2 + y_3)
(8.15)

The truncation error in this formula is -\frac{3h^5}{80} f^{iv}(\xi), \quad x_0 < \xi < x_3.
This formula is known as Simpson’s three-eighth formula of numerical
integration.
As in the case of Simpson’s one-third rule, we can write Simpson’s three-eighth
rule of numerical integration as,
\int_a^b f(x)\,dx = \frac{3h}{8}[y_0 + 3y_1 + 3y_2 + 2y_3 + 3y_4 + 3y_5 + 2y_6 + \dots + 2y_{3m-3} + 3y_{3m-2} + 3y_{3m-1} + y_{3m}]

(8.16)
where h = (b–a)/(3m); for m = 1, 2,...
i.e., the interval (b–a) is divided into 3m number of sub-intervals.
The rule in Equation (8.16) can be rewritten as,
\int_a^b f(x)\,dx = \frac{3h}{8}[y_0 + y_{3m} + 3(y_1 + y_2 + y_4 + y_5 + \dots + y_{3m-2} + y_{3m-1}) + 2(y_3 + y_6 + \dots + y_{3m-3})]

(8.17)
The truncation error in Simpson’s three-eighth rule is
3h4
(b a) f iv ( ), x0 xg m
240
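The three-eighth weight pattern 1, 3, 3, 2, 3, 3, ..., 1 is easy to generate programmatically (a minimal Python sketch; the function name `simpson38` is our own choice):

```python
def simpson38(f, a, b, n):
    """Composite Simpson's three-eighth rule; n must be a multiple of 3."""
    if n % 3:
        raise ValueError("n must be a multiple of 3")
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        # interior multiples of 3 get weight 2, all other interior points weight 3
        s += (2.0 if i % 3 == 0 else 3.0) * f(a + i * h)
    return 3.0 * h / 8.0 * s

# like Simpson's one-third rule, this is exact for cubics, e.g. x^3 on [0, 1]
print(round(simpson38(lambda x: x**3, 0.0, 1.0, 6), 10))  # 0.25
```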

Numerical Integration and Weddle’s Formula
Numerical Differentiation
In Newton-Cotes formula with n = 6 some minor modifications give the Weddle’s
formula. Newton-Cotes formula with n = 6, gives
NOTES x6
 123 5 41 6 
∫ ydx = h6 y
x0
0 + 18 ∆y0 + 27∆2 y0 + 24 y∆3 y 0 +
10
∆ y0 +
140
∆ y0 

41 6
This formula takes a very simple form if the last term ∆ y0 is replaced by
140
42 6 3
∆ y0 = ∆ 6 y0 . Then the error in the formula will have an additional term
140 10
1 6
∆ y0 . The above formula then becomes,
140
x6
 123 5 3 
∫ ydx
x0
= h 6 y0 + 18∆y0 + 27∆ 2 y0 + 24∆3 y0 +
 10
∆ y0 + ∆ 6 y0 
10 
x6
3h
∫=
ydx
x0 10
[ y0 + 5 y1 + y2 + 6 y3 + y4 + 5 y5 + y6 ]

(8.18)
On replacing the differences in terms of yi’s, this formula is known as Weddle’s
formula.
1 7 ( vi )
The error Weddle’s formula is h y ( ) (8.19)
140
Weddle’s rule is a composite Weddle’s formula, when the number of sub-
intervals is a multiple of 6. One can use a Weddle’s rule of numerical integration by
sub-dividing the interval (b – a) into 6m number of sub-intervals, m being a posi-
tive integer. The Weddle’s rule is,
\int_a^b f(x)\,dx = \frac{3h}{10}[y_0 + 5y_1 + y_2 + 6y_3 + y_4 + 5y_5 + 2y_6 + 5y_7 + y_8 + 6y_9 + y_{10} + 5y_{11} + \dots + 2y_{6m-6} + 5y_{6m-5} + y_{6m-4} + 6y_{6m-3} + y_{6m-2} + 5y_{6m-1} + y_{6m}]   (8.20)
where b–a = 6mh
i.e., \int_a^b f(x)\,dx = \frac{3h}{10}\big[y_0 + y_{6m} + 5(y_1 + y_5 + y_7 + y_{11} + \dots + y_{6m-5} + y_{6m-1}) + (y_2 + y_4 + y_8 + y_{10} + \dots + y_{6m-4} + y_{6m-2}) + 6(y_3 + y_9 + \dots + y_{6m-3}) + 2(y_6 + y_{12} + \dots + y_{6m-6})\big]

The error in Weddle's rule is given by -\frac{1}{840} h^6 (b-a) y^{(vi)}(\xi)
(8.21)
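The periodic weight pattern 1, 5, 1, 6, 1, 5 (with weight 2 at each shared junction) can be coded directly (a minimal Python sketch; the function name `weddle` is our own choice):

```python
import math

def weddle(f, a, b, n):
    """Composite Weddle's rule; n must be a multiple of 6."""
    if n % 6:
        raise ValueError("n must be a multiple of 6")
    h = (b - a) / n
    # interior weights by position modulo 6; junctions (i % 6 == 0) get weight 2
    wt = {0: 2.0, 1: 5.0, 2: 1.0, 3: 6.0, 4: 1.0, 5: 5.0}
    s = f(a) + f(b)                      # endpoints carry weight 1
    for i in range(1, n):
        s += wt[i % 6] * f(a + i * h)
    return 3.0 * h / 10.0 * s

# Example 8 of this unit: sqrt(1 - 0.162 sin^2 t) on [0, pi/2], six sub-intervals
val = weddle(lambda t: math.sqrt(1 - 0.162 * math.sin(t)**2), 0.0, math.pi / 2, 6)
print(round(val, 4))  # ≈ 1.5051 (the text, using a rounded h, reports 1.50504)
```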
Example 1: Compute the approximate value of \int_0^2 x^4\,dx by taking four sub-intervals and compare it with the exact value.
Solution: For four sub-intervals of [0, 2], we have h = \frac{2}{4} = 0.5. We tabulate f(x) = x^4.
x      0      0.5      1.0     1.5      2.0
f(x)   0    0.0625    1.0    5.0625   16.0
By trapezoidal rule, we get
\int_0^2 x^4\,dx \approx \frac{0.5}{2}[0 + 2 \times (0.0625 + 1.0 + 5.0625) + 16.0]
= \frac{1}{4}[12.25 + 16.0] = \frac{28.25}{4} = 7.0625
By Simpson's one-third rule, we get
\int_0^2 x^4\,dx = \frac{0.5}{3}[0 + 4 \times (0.0625 + 5.0625) + 2 \times 1.0 + 16.0]
= \frac{1}{6}[4 \times 5.125 + 18.0] = \frac{38.5}{6} = 6.4167
Exact value = \frac{2^5}{5} = \frac{32}{5} = 6.4
Error in the result by trapezoidal rule = 6.4 – 7.0625 = – 0.6625
Error in the result by Simpson's one-third rule = 6.4 – 6.4167 = – 0.0167
Example 2: Evaluate the integral \int_0^1 (4x - 3x^2)\,dx by taking n = 10 and using the following rules:

(i) Trapezoidal rule and (ii) Simpson’s one-third rule. Also compare them
with the exact value and find the error in each case.
Solution: We tabulate f (x) = 4x–3x2, for x = 0, 0.1, 0.2, ..., 1.0.

x 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0
f ( x) 0.0 0.37 0.68 0.93 1.12 1.25 1.32 1.33 1.28 1.17 1.0

(i) Using trapezoidal rule, we have
\int_0^1 (4x - 3x^2)\,dx = \frac{0.1}{2}[0 + 2(0.37 + 0.68 + 0.93 + 1.12 + 1.25 + 1.32 + 1.33 + 1.28 + 1.17) + 1.0]
= \frac{0.1}{2} \times (18.90 + 1.0) = 0.995

(ii) Using Simpson’s one-third rule, we have

1
0 .1
∫ (4 x − 3x
2
) dx = [0 + 4 (0.37 + 0.93 + 1.25 + 1.33 + 1.17) + 2(0.68 + 1.12 + 1.32 + 1.28) + 1.0]
3
0
0 .1
= [4 × 5.05 + 2 × 4.40 + 1.0]
3
0 .1
= × [30.0] = 1.00
3

(iii) Exact value = 1.0


Error in the result by trapezoidal rule is 0.005 and there is no error in the result
by Simpson’s one-third rule.
Example 3: Evaluate \int_0^1 e^{-x^2}\,dx, using (i) Simpson's one-third rule with 10 sub-intervals and (ii) Trapezoidal rule.
Solution: We tabulate values of e^{-x^2} for the 11 points x = 0, 0.1, 0.2, 0.3, ..., 1.0 as given below.

x        e^{-x^2}
0.0 1.00000
0.1 0.990050
0.2 0.960789
0.3 0.913931
0.4 0.852144
0.5 0.778801
0.6 0.697676
0.7 0.612626
0.8 0.527292
0.9 0.444854
1.0 0.367879
Sums:   1.367879 (f_0 + f_{10})   3.740262 (odd ordinates)   3.037901 (even ordinates)

Hence, by Simpson’s one-third rule we have, Numerical Integration and
Numerical Differentiation
1
h
∫e
− x2
dx
= [ f 0 + f10 + 4 ( f1 + f 3 + f5 + f 7 + f9 ) + 2 ( f 2 + f 4 + f 6 + f8 )]
0 3
0.1
NOTES
= [1.367879 + 4 × 3.740262 + 2 × 3.037901]
3
0.1
= [1.367879 + 14.961048 + 6.075802]
3
2.2404729
= = 0.7468243 ≈ 0.746824
3
Using trapezoidal rule, we get
\int_0^1 e^{-x^2}\,dx = \frac{h}{2}[f_0 + f_{10} + 2(f_1 + f_2 + \dots + f_9)]
= \frac{0.1}{2}[1.367879 + 2 \times 6.778163]
= 0.7462103
Example 4: Compute the integral I = \int_0^4 (x^3 - 2x^2 + 1)\,dx, using Simpson's one-third rule taking h = 1 and show that the computed value agrees with the exact value. Give reasons for this.
Solution: The values of f (x) = x3–2x2+1 are tabulated for x = 0, 1, 2, 3, 4 as

x 0 1 2 3 4
f ( x) 1 0 1 10 33

The value of the integral by Simpson’s one-third rule is,

1 1
I [1 4 0 2 1 4 10 33] 25
3 3

44 43 1
The exact value 2 1 4 25
4 3 3
Thus, the computed value by Simpson’s one-third rule is equal to the exact
value. This is because the error in Simpson’s one-third rule contains the fourth
order derivative and so this rule gives the exact result when the integrand is a
polynomial of degree less than or equal to three.
Example 5: Compute \int_{0.1}^{0.5} e^x\,dx by (i) Trapezoidal rule and (ii) Simpson's one-third rule and compare the results with the exact value, by taking h = 0.1.

Solution: We tabulate the values of f(x) = e^x for x = 0.1 to 0.5 with spacing h = 0.1.
x             0.1      0.2      0.3      0.4      0.5
f(x) = e^x   1.1052   1.2214   1.3498   1.4918   1.6487

The value of the integral by trapezoidal rule is,
I_T = \frac{0.1}{2}[1.1052 + 2(1.2214 + 1.3498 + 1.4918) + 1.6487]
= \frac{0.1}{2}[2.7539 + 2 \times 4.0630] = 0.5439
The value computed by Simpson's one-third rule is,
I_S = \frac{0.1}{3}[1.1052 + 4(1.2214 + 1.4918) + 2 \times 1.3498 + 1.6487]
= \frac{0.1}{3}[2.7539 + 4 \times 2.7132 + 2.6996] = \frac{0.1}{3}[16.3063] = 0.5435

Exact value = e0.5– e0.1 = 1.6487–1.1052 = 0.5435


The trapezoidal rule gives the value of the integral with an error – 0.0004 but Simpson's one-third rule gives the exact value.
Example 6: Compute \int_0^1 \frac{dx}{1+x} using (i) Trapezoidal rule (ii) Simpson's one-third rule taking 10 sub-intervals. Hence, find \log_e 2 and compare it with the exact value up to six decimal places.


Solution: We tabulate the values of f(x) = \frac{1}{1+x} for x = 0, 0.1, 0.2, ..., 1.0 as given below:
x      y        f(x) = 1/(1+x)
0.0 y0 1.000000
0.1 y1 0.9090909
0.2 y2 0.8333333
0.3 y3 0.7692307
0.4 y4 0.7142857
0.5 y5 0.6666667
0.6 y6 0.6250000
0.7 y7 0.5882352
0.8 y8 0.5555556
0.9 y9 0.5263157
1.0 y10 0.500000
Sums:          1.500000   3.4595391   2.7281746
(i) Using trapezoidal rule, we have
\int_0^1 \frac{dx}{1+x} = \frac{h}{2}[f_0 + f_{10} + 2(f_1 + f_2 + f_3 + \dots + f_9)]
= \frac{0.1}{2}[1.500000 + 2 \times (3.4595391 + 2.7281745)]
= \frac{0.1}{2}[1.500000 + 12.3754272] = 0.6937714
(ii) Using Simpson's one-third rule, we get
\int_0^1 \frac{dx}{1+x} = \frac{h}{3}[f_0 + f_{10} + 4(f_1 + f_3 + \dots + f_9) + 2(f_2 + f_4 + \dots + f_8)]
= \frac{0.1}{3}[1.500000 + 4 \times 3.4595391 + 2 \times 2.7281745]
= \frac{0.1}{3}[1.5 + 13.838156 + 5.456349] = \frac{0.1}{3} \times 20.794505 = 0.6931501
(iii) Exact value:
\int_0^1 \frac{dx}{1+x} = \log_e 2 = 0.6931472
The trapezoidal rule gives the value of the integral with an error 0.6931472 – 0.6937714 = – 0.0006242, while the error in the value by Simpson's one-third rule is – 0.0000029.
Example 7: Compute \int_0^{\pi/2} \sqrt{\cos\theta}\,d\theta by (i) Simpson's one-third rule and (ii) Weddle's formula taking six sub-intervals.
Solution: Sub-division of [0, \pi/2] into six sub-intervals gives h = \frac{1}{6} \cdot \frac{\pi}{2} = 15^\circ = 0.26179 radians. For applying the integration rules we tabulate \sqrt{\cos\theta}.
\theta              0°       15°       30°       45°       60°       75°      90°
\sqrt{\cos\theta}    1     0.98281   0.93061   0.84089   0.70711   0.50874    0

Numerical Integration and (i) The value of the integral by Simpson’s one-third rule is given by,
Numerical Differentiation
0.26179
IS = [1 + 4 × (0.98281 + 0.84089 + 0.50874) + 2 × (0.093061 + 0.070711) + 0)]
3
NOTES 0.26179
= [1 + 4 × 2.33244 + 2 × 1.63772]
3
0.26179
= × 13.6052 = 1.18723
3
(ii) The value of the integral by Weddle’s formula is,
3
IW = × 0.26179 [1.05 + 7.45775 + 5.04534 + 0.93061 + 0.070711]
10
3 × 0.026179 [14.554411] =
= 1.143059 ≈ 1.14306

Example 8: Evaluate the integral \int_0^{\pi/2} \sqrt{1 - 0.162\sin^2\theta}\,d\theta by Weddle's formula.
Solution: On dividing the interval into six sub-intervals, the length of each sub-interval will be h = \frac{1}{6} \cdot \frac{\pi}{2} = 0.26179 = 15^\circ. For computing the integral by Weddle's formula, we tabulate f(\theta) = \sqrt{1 - 0.162\sin^2\theta}.

0 15 30 45 60 75 90
f ( ) 1.0 0.99455 0.97954 0.95864 0.93728 0.92133 0.91542

The value of the integral by Weddle’s formula is given by,


3 0.26179
IW [1.0 5 (0.99455 0.92133) 0.97954 6 0.95864 0.93728 0.91542]
10
0.078537 19.16348 1.50504

Computing an Integral to a Desired Accuracy


For evaluating a definite integral correct to a desired accuracy, one has to make a
suitable choice of the value of h, the length of sub-interval to be used in the for-
mula. There are two ways of determining h, by considering the truncation error in
the formula to be used for numerical integration or by successive evaluation of the
integral by the technique of interval halving and comparing the results.
Truncation Error Estimation Method
In the truncation error estimation method, the value of h to be used is determined by considering the truncation error in the formula for numerical integration. Let \varepsilon be the error tolerance for the integral to be evaluated. Then h is chosen by using the condition,
|R| \le \varepsilon/2

As an illustration, consider the evaluation of \int_1^2 \frac{dx}{x} using Simpson's one-third rule accurate up to the third decimal place. We may take \varepsilon = 10^{-3}.
If we wish to use Simpson's one-third rule, then the truncation error is R,
R = -(2-1)\frac{h^4}{180} f^{iv}(\xi); \quad 1 < \xi < 2
Then h is determined by satisfying the condition,
\frac{h^4}{180}|f^{iv}(\xi)| < 0.5 \times 10^{-3}
For the given problem, f(x) = \frac{1}{x}, thus f^{iv}(x) = \frac{2 \times 3 \times 4}{x^5}. Hence,
\max_{[1,2]} |f^{iv}(x)| = 24
Thus, h^4 \times \frac{1 \times 24}{180} < 0.5 \times 10^{-3}, or h < 0.102
But h has to be chosen so that the interval [1, 2] is divided into an even number of sub-intervals. Hence we may take h = 0.1 < 0.102, for which n = 10, i.e., there will be 10 sub-intervals.
The value of the integral is,
\int_1^2 \frac{dx}{x} = \frac{0.1}{3}\Big[1.0 + \frac{1}{2} + 4\Big(\frac{1}{1.1} + \frac{1}{1.3} + \frac{1}{1.5} + \frac{1}{1.7} + \frac{1}{1.9}\Big) + 2\Big(\frac{1}{1.2} + \frac{1}{1.4} + \frac{1}{1.6} + \frac{1}{1.8}\Big)\Big]
= \frac{0.1}{3}[1.5 + 4 \times 3.4595 + 2 \times 2.7282]
= \frac{0.1}{3} \times 20.7944 = 0.69315, which agrees with the exact value of \log_e 2.
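The step-size selection can be checked mechanically (a Python sketch using the composite-rule error bound of Equation (8.14); the function name is our own, and any h at or below the returned value, including the conservative choice h = 0.1 above, meets the tolerance):

```python
def simpson_step_for_tolerance(m4, a, b, eps):
    """Largest step h = (b-a)/n (n even) with (b-a) * h^4 / 180 * m4 <= eps,
    where m4 is a bound on |f''''| over [a, b]."""
    n = 2
    while (b - a) * ((b - a) / n) ** 4 / 180.0 * m4 > eps:
        n += 2
    return (b - a) / n

# f(x) = 1/x on [1, 2]: max |f''''| = 24, tolerance 0.5e-3
print(simpson_step_for_tolerance(24.0, 1.0, 2.0, 0.5e-3))
```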
Interval Halving Technique
When the estimation of the truncation error is cumbersome, the method of interval
halving is used to compute an integral to the desired accuracy.
In the interval halving technique, an integral is first computed for some moderate value of h. Then, it is evaluated again for spacing h/2, i.e., with double the number of subdivisions. This requires the evaluation of the integrand at the new points of subdivision only; the previous function values with spacing h are also used. Now the difference between the integrals I_h and I_{h/2} is used to check the accuracy of the computed integral. If |I_h - I_{h/2}| < \varepsilon, where \varepsilon is the permissible error, then


I_{h/2} is to be taken as the computed value of the integral to the desired accuracy. If the above accuracy is not achieved, i.e., |I_h - I_{h/2}| \ge \varepsilon, then the computation of the integral is made again with spacing h/4 and the accuracy condition is tested again. The evaluation of I_{h/4} will require the evaluation of the integrand at the new points of sub-division only.
Notes:
1. The initial choice of h is sometimes taken as (b – a)/m, where m = 2 for the trapezoidal rule and m = 4 for Simpson's one-third rule.
2. The method of interval halving is widely used for computer evaluation since it enables a general choice of h together with a check on the computations.
3. The truncation error R can be estimated by using Runge's principle given by,
R \approx \frac{1}{3}|I_h - I_{h/2}| for the trapezoidal rule and R \approx \frac{1}{15}|I_h - I_{h/2}| for Simpson's one-third rule.
Algorithm: Evaluation of an integral by Simpson’s one-third rule with interval
halving.
Step 1: Set/initialize a, b, \varepsilon [a, b are limits of integration, \varepsilon is the error tolerance]
b−a
Step 2: Set h =
2
Step 3: Compute S1 = f (a) + f (b)
Step 4: Compute S4 = f (a + h)
Step 5: Set S2 = 0, I1 = 0
( S1 + 4S 4 + S 2 ) × h
Step 6: Compute I 2 =
3

Step 7: If |I2 – I1| < \varepsilon, go to Step 17 else go to the next step

h
Step 8: Set h = , I1 = I2
2
Step 9: Compute S2 = S2+ S4
Step 10: Set S4 = 0
Step 11: Set x = a + h
Step 12: Compute S4 = S4+ f (x)
Step 13: Set x = x + 2h
Self-Instructional
188 Material
Step 14: If x < b, go to Step 12 else go to the next step
Step 15: Compute I2 = (S1 + 2S2 + 4S4) × h/3

Step 16: Go to Step 7

Step 17: Write I2, h, \varepsilon


Step 18: End
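The same halving loop can be written compactly in Python (a minimal sketch; names and the iteration cap are our own choices):

```python
import math

def simpson_halving(f, a, b, eps, max_doublings=20):
    """Simpson's one-third rule with interval halving until |I2 - I1| < eps."""
    n = 2
    h = (b - a) / n
    s1 = f(a) + f(b)
    s4 = f(a + h)                        # odd ordinates
    s2 = 0.0                             # even interior ordinates
    prev = h / 3.0 * (s1 + 4.0 * s4 + 2.0 * s2)
    for _ in range(max_doublings):
        n *= 2
        h /= 2.0
        s2 += s4                         # old odd points become even points
        s4 = sum(f(a + i * h) for i in range(1, n, 2))   # only new points
        cur = h / 3.0 * (s1 + 4.0 * s4 + 2.0 * s2)
        if abs(cur - prev) < eps:
            return cur
        prev = cur
    return prev

# Example 5's integral: e^x on [0.1, 0.5]; exact value is 0.5435 to four places
print(round(simpson_halving(math.exp, 0.1, 0.5, 1e-6), 4))
```

Note how each halving reuses all previously computed ordinates: the old odd points are folded into S2 and only the new midpoints are evaluated.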

Algorithm: Evaluation of an integral by trapezoidal rule with interval halving.
Step 1: Initialize/set a, b, \varepsilon [a, b are limits of integration, \varepsilon is the error tolerance]
Step 2: Set h = b – a
Step 3: Compute S1 = (f (a) + f (b))/2
Step 4: Compute I1 = S1 × h
Step 5: Set h = h/2, S = 0, x = a + h
Step 6: Compute S = S + f (x)
Step 7: Set x = x + 2h
Step 8: If x < b, go to Step 6 else go to the next step
Step 9: Compute I2 = I1/2 + h × S
Step 10: If |I2 – I1| < \varepsilon, go to Step 13 else go to the next step
Step 11: Set I1 = I2
Step 12: Go to Step 5
Step 13: Write I2, h, \varepsilon
Step 14: End
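A Python sketch of this trapezoidal halving scheme (names are our own choices; the update I_{h/2} = I_h/2 + (h/2)·Σ f(midpoints) mirrors Step 9):

```python
import math

def trapezoid_halving(f, a, b, eps, max_doublings=24):
    """Trapezoidal rule with interval halving; reuses previous function values."""
    h = b - a
    cur = h * (f(a) + f(b)) / 2.0
    for _ in range(max_doublings):
        h /= 2.0
        # evaluate only the midpoints of the previous sub-intervals
        count = int(round((b - a) / (2 * h)))
        mids = sum(f(a + (2 * i + 1) * h) for i in range(count))
        new = cur / 2.0 + h * mids
        if abs(new - cur) < eps:
            return new
        cur = new
    return cur

# Example 6's integral: 1/(1+x) on [0, 1], converging to log 2 = 0.693147...
print(round(trapezoid_halving(lambda x: 1 / (1 + x), 0.0, 1.0, 1e-7), 6))
```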

Numerical Evaluation of Double Integrals


We consider the evaluation of a double integral,
I= ∫∫ f ( x, y)dx dy
R
(8.22)
where R is the rectangular region a ≤ x ≤ b, c ≤ y ≤ d. The double integral can
be transformed into a repeated integral in the following form,
b d 
∫ ∫
a
dx  f ( x, y )dy 
c



(8.23)
Writing F(x) = \int_c^d f(x, y)\,dy, considered as a function of x, we have   (8.24)
I = \int_a^b F(x)\,dx   (8.25)

Now for numerical integration, we can divide the interval [a, b] into n sub-
intervals with spacing h and then use a suitable rule of numerical integration.
Trapezoidal Rule for Double Integral
By trapezoidal rule, we can write the integral Equation (8.25) as,
\int_a^b F(x)\,dx = \frac{h}{2}[F_0 + F_n + 2(F_1 + F_2 + F_3 + \dots + F_{n-1})]   (8.26)

b−a
where x0 = a, xn = b, h = and
n
F_i = F(x_i) = \int_c^d f(x_i, y)\,dy, \quad x_i = a + ih   (8.27)

for i = 0, 1, 2,..., n.
Each F_i can be evaluated by the trapezoidal rule. For this, the interval [c, d] may be divided into m sub-intervals each of length k = \frac{d-c}{m}. Thus we can write,
m
F_i = \frac{k}{2}[f(x_i, y_0) + f(x_i, y_m) + 2\{f(x_i, y_1) + f(x_i, y_2) + \dots + f(x_i, y_{m-1})\}]
(8.28)
y0 = c, ym = d, yi = c+ik; i = 0, 1,..., m.
This Equation (8.28) can be written in a compact form,
F_i = \frac{k}{2}[f_{i0} + f_{im} + 2(f_{i1} + f_{i2} + \dots + f_{im-1})]
(8.29)
The relation Equations (8.26) and (8.29) together form the trapezoidal rule
for evaluation of double integrals.
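The combined rule is simply a product of one-dimensional trapezoidal weights, which can be sketched in Python (the function name `trapezoid_2d` is our own choice):

```python
def trapezoid_2d(f, a, b, c, d, n, m):
    """Repeated (product) trapezoidal rule over the rectangle [a, b] x [c, d]."""
    h = (b - a) / n
    k = (d - c) / m

    def wx(i):  # trapezoidal weight along x: 1/2 at ends, 1 inside
        return 0.5 if i in (0, n) else 1.0

    def wy(j):  # trapezoidal weight along y
        return 0.5 if j in (0, m) else 1.0

    s = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            s += wx(i) * wy(j) * f(a + i * h, c + j * k)
    return h * k * s

# Example 10 of this unit: x^2 + y^2 over 1<=x<=3, 1<=y<=2 with h = k = 0.5
print(trapezoid_2d(lambda x, y: x * x + y * y, 1.0, 3.0, 1.0, 2.0, 4, 2))  # 13.5
```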

Simpson’s One-Third Rule for Double Integrals


For the evaluation of double integrals we can similarly write Simpson's one-third rule. Thus we have,
I = \int_a^b F(x)\,dx = \frac{h}{3}[F_0 + F_n + 2(F_2 + F_4 + \dots + F_{n-2}) + 4(F_1 + F_3 + \dots + F_{n-1})]   (8.30)

Where h = \frac{b-a}{n}, n is even, and
F_i = F(x_i) = \int_c^d f(x_i, y)\,dy, \quad x_i = a + ih, for i = 0, 1, 2, ..., n   (8.31)
And, x_0 = a and x_n = b
For evaluating I, we have to evaluate each of the (n + 1) integrals given in
Equation (8.31). For evaluation of Fi, we can use Simpson’s one-third rule by
dividing [c, d] into m sub-intervals. Fi can be written as,
F_i = \frac{k}{3}[f(x_i, y_0) + f(x_i, y_m) + 2\{f(x_i, y_2) + f(x_i, y_4) + \dots + f(x_i, y_{m-2})\} + 4\{f(x_i, y_1) + f(x_i, y_3) + \dots + f(x_i, y_{m-1})\}]
(8.32)
Equation (8.32) can be written in a compact notation as,
F_i = \frac{k}{3}[f_{i0} + f_{im} + 2(f_{i2} + f_{i4} + \dots + f_{im-2}) + 4(f_{i1} + f_{i3} + \dots + f_{im-1})]
Where f_{ij} = f(x_i, y_j), j = 0, 1, 2, ..., m.
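As with the trapezoidal case, this is a product of the one-dimensional Simpson weights 1, 4, 2, 4, ..., 4, 1 (a minimal Python sketch; the function name `simpson_2d` is our own choice):

```python
def simpson_2d(f, a, b, c, d, n, m):
    """Product Simpson's one-third rule over [a, b] x [c, d]; n, m must be even."""
    if n % 2 or m % 2:
        raise ValueError("n and m must be even")
    h = (b - a) / n
    k = (d - c) / m

    def w(i, nn):  # Simpson weight pattern 1, 4, 2, 4, ..., 4, 1
        if i in (0, nn):
            return 1.0
        return 4.0 if i % 2 else 2.0

    s = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            s += w(i, n) * w(j, m) * f(a + i * h, c + j * k)
    return h * k / 9.0 * s

# Example 9 of this unit: x^2 + y^2 over 1<=x<=3, 1<=y<=2 with h = k = 0.5
print(round(simpson_2d(lambda x, y: x * x + y * y, 1.0, 3.0, 1.0, 2.0, 4, 2), 3))  # 13.333
```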

Example 9: Evaluate the double integral \iint_R (x^2 + y^2)\,dx\,dy, where R is the rectangular region 1 \le x \le 3, 1 \le y \le 2, by Simpson's one-third rule taking h = k = 0.5.
Solution: We write the integral in the form of a repeated integral,
I = \int_1^3 dx\Big[\int_1^2 (x^2 + y^2)\,dy\Big]
Taking n = 4 sub-intervals along x, so that h = \frac{2}{4} = 0.5,
\therefore I = \int_1^3 F(x)\,dx = \frac{0.5}{3}[F_0 + F_4 + 2F_2 + 4(F_1 + F_3)]
where F(x) = \int_1^2 (x^2 + y^2)\,dy
1

F_i = F(x_i) = \int_1^2 (x_i^2 + y^2)\,dy; \quad x_i = 1 + 0.5i, where i = 0, 1, 2, 3, 4.
For evaluating the F_i's, we take k = \frac{1}{2} = 0.5 and get,
F_0 = \int_1^2 (1 + y^2)\,dy = \frac{0.5}{3}[1 + 1^2 + 4\{1 + (1.5)^2\} + 1 + 2^2] = \frac{0.5}{3} \times 20
F_1 = \int_1^2 ((1.5)^2 + y^2)\,dy = \frac{0.5}{3}[(1.5)^2 + 1^2 + 4\{(1.5)^2 + (1.5)^2\} + (1.5)^2 + 2^2] = \frac{0.5}{3} \times 27.50
F_2 = \int_1^2 (2^2 + y^2)\,dy = \frac{0.5}{3}[2^2 + 1^2 + 4\{2^2 + (1.5)^2\} + 2^2 + 2^2] = \frac{0.5}{3} \times 38
F_3 = \int_1^2 ((2.5)^2 + y^2)\,dy = \frac{0.5}{3}[(2.5)^2 + 1^2 + 4\{(2.5)^2 + (1.5)^2\} + (2.5)^2 + 2^2] = \frac{0.5}{3} \times 51.50
F_4 = \int_1^2 (3^2 + y^2)\,dy = \frac{0.5}{3}[3^2 + 1^2 + 4\{3^2 + (1.5)^2\} + 3^2 + 2^2] = \frac{0.5}{3} \times 68
\therefore I = \frac{0.25}{9}[20 + 68 + 2 \times 38 + 4(27.50 + 51.50)] = \frac{0.25}{9} \times 480 = 13.333

Example 10: Compute \iint_R (x^2 + y^2)\,dx\,dy by trapezoidal rule with h = 0.5, R being the same rectangular region 1 \le x \le 3, 1 \le y \le 2.
Solution: I_T = \int_1^3 F(x)\,dx = \frac{0.5}{2}[F_0 + F_4 + 2(F_1 + F_2 + F_3)]
where F_i = F(x_i) = \int_1^2 (x_i^2 + y^2)\,dy, \quad x_i = 1 + 0.5i, \; i = 0, 1, 2, 3, 4.

Thus, F_0 = \int_1^2 (1 + y^2)\,dy = \frac{0.5}{2}[1 + 1 + 2\{1^2 + (1.5)^2\} + 1^2 + 2^2] = \frac{0.5}{2} \times 13.50 = 3.375

F_1 = \int_1^2 [(1.5)^2 + y^2]\,dy = \frac{0.5}{2}[(1.5)^2 + 1^2 + 2\{(1.5)^2 + (1.5)^2\} + (1.5)^2 + 2^2] = \frac{0.5}{2} \times 18.50 = 4.625
F_2 = \int_1^2 [2^2 + y^2]\,dy = \frac{0.5}{2}[2^2 + 1^2 + 2\{2^2 + (1.5)^2\} + 2^2 + 2^2] = \frac{0.5}{2} \times 25.50 = 6.375
F_3 = \int_1^2 [(2.5)^2 + y^2]\,dy = \frac{0.5}{2}[(2.5)^2 + 1^2 + 2\{(2.5)^2 + (1.5)^2\} + (2.5)^2 + 2^2] = \frac{0.5}{2} \times 34.50 = 8.625
F_4 = \int_1^2 [3^2 + y^2]\,dy = \frac{0.5}{2}[3^2 + 1^2 + 2\{3^2 + (1.5)^2\} + 3^2 + 2^2] = \frac{0.5}{2} \times 45.50 = 11.375
\therefore I_T = \frac{0.5}{2}[3.375 + 11.375 + 2(4.625 + 6.375 + 8.625)]
= \frac{1}{4}[14.750 + 2 \times 19.625] = \frac{1}{4}[14.750 + 39.250] = \frac{1}{4} \times 54 = 13.5
Example 11: Evaluate the double integral \int_1^2 \int_1^2 \frac{dx\,dy}{x+y} using trapezoidal rule with length of sub-intervals h = k = 0.5.
Solution: Let f(x, y) = \frac{1}{x+y}.

By trapezoidal rule with h = 0.5, the integral I = \int_1^2 \int_1^2 f(x, y)\,dx\,dy is computed as,
I = \frac{0.5 \times 0.5}{4}[f(1,1) + f(2,1) + f(1,2) + f(2,2) + 2\{f(1.5,1) + f(1,1.5) + f(2,1.5) + f(1.5,2)\} + 4 f(1.5,1.5)]
= \frac{1}{16}\Big[\frac{1}{2} + \frac{1}{3} + \frac{1}{3} + \frac{1}{4} + 2\Big(\frac{2}{5} + \frac{2}{5} + \frac{2}{7} + \frac{2}{7}\Big) + 4 \times \frac{1}{3}\Big]
= \frac{1}{16}\Big[0.666667 + 0.75 + 2 \times \frac{48}{35} + \frac{4}{3}\Big]
= \frac{1}{16}[5.492857] = 0.343304
2 2
dxdy
Example 12: Evaluate ∫∫ x + y by Simpson’s one-third rule. Take sub-intervals
1 1

of length h = k = 0.5.
2 2
Solution: The value of the integral I = ∫ ∫ f ( x, y)dx dy by Simpson’s one-third
1 1

rule with h = k = 0.5 is,

0.5 0.5
I [ f (1, 1) f (2, 1) f (1, 2) f (2, 2) 4{ f (1, 1.5) f (1.5, 1)
3 3
f (2, 1.5) f (1.5, 2)} 16 f (1.5, 1.5)]
1 1 1 1 1 2 2 2 2 1
4 16
36 2 3 3 4 5 5 7 7 3
1 4 12 16
0.666667 0.75 4
36 35 3
1
[12.235714] 0.339880
36

Gaussian Quadrature
We have seen that Newton-Cotes formula of numerical integration is of the form,
\int_a^b f(x)\,dx \approx \sum_{i=0}^{n} c_i f(x_i)   (8.33)
where x_i = a + ih, \; i = 0, 1, 2, ..., n; \; h = \frac{b-a}{n}
This formula uses function values at equally spaced points and gives the exact result for f(x) being a polynomial of degree less than or equal to n. The Gaussian quadrature formula is similar to Equation (8.33), given by,
\int_{-1}^{1} F(u)\,du \approx \sum_{i=1}^{n} w_i F(u_i)   (8.34)

where wi’s and ui’s called weights and abscissae, respectively are derived such
that above Equation (8.34) gives the exact result for F(u) being a polynomial of
degree less than or equal to 2n–1.
In Newton-Cotes Equation (8.33), the coefficients ci and the abscissae xi are
rational numbers but the weights wi and the abscissae ui are usually irrational
numbers. Even though Gaussian quadrature formula gives the integration of F(u)
between the limits –1 to +1, we can use it to find the integral of f (x) from a to b
by a simple transformation given by,
x = ((b − a)/2) u + (a + b)/2   (8.35)
Evidently, the limits for u become −1 to 1 corresponding to x = a to b and, writing

f(x) = f(((b − a)/2) u + (a + b)/2) = F(u)

we have,

∫ₐᵇ f(x) dx = ((b − a)/2) ∫₋₁¹ F(u) du   (8.36)
It can be shown that the ui are the zeros of the Legendre polynomial Pn(u) of
degree n. These roots are real but irrational and the weights are also irrational.
Given below is a simple formulation of the relevant equations to determine ui
and wi. Let F(u) be a polynomial of the form,
F(u) = Σₖ₌₀^(2n−1) aₖuᵏ   (8.37)

Then, we can write

∫₋₁¹ F(u) du = ∫₋₁¹ (Σₖ₌₀^(2n−1) aₖuᵏ) du   (8.38)

Or, ∫₋₁¹ F(u) du = 2a₀ + (2/3)a₂ + (2/5)a₄ + ... + (2/(2n − 1)) a₂ₙ₋₂   (8.39)

Equation (8.34) gives,

∫₋₁¹ F(u) du = Σᵢ₌₁ⁿ wᵢ (Σₖ₌₀^(2n−1) aₖuᵢᵏ)
             = Σᵢ₌₁ⁿ wᵢ (a₀ + a₁uᵢ + a₂uᵢ² + ... + a₂ₙ₋₁uᵢ^(2n−1))   (8.40)

The Equations (8.39) and (8.40) are assumed to be identical for all polynomials of degree less than or equal to 2n−1 and hence, equating the coefficients of aₖ on either side, we obtain the following 2n equations for the 2n unknowns w₁, w₂, ..., wₙ and u₁, u₂, ..., uₙ:

Σᵢ₌₁ⁿ wᵢ = 2,  Σᵢ₌₁ⁿ wᵢuᵢ = 0,  Σᵢ₌₁ⁿ wᵢuᵢ² = 2/3,  ...,  Σᵢ₌₁ⁿ wᵢuᵢ^(2n−1) = 0   (8.41)

The solution of Equations (8.41) is quite complicated. However, the use of Legendre polynomials makes the labour unnecessary. It can be shown that the abscissae uᵢ are the zeros of the Legendre polynomial Pₙ(x) of degree n. The weights wᵢ can then be easily determined by solving the first n equations of Equations (8.41). As an illustration, we take n = 2. The four equations for u₁, u₂, w₁ and w₂ are,

w₁ + w₂ = 2
w₁u₁ + w₂u₂ = 0
w₁u₁² + w₂u₂² = 2/3
w₁u₁³ + w₂u₂³ = 0

Eliminating w₁, w₂, we get

w₁/w₂ = −u₂/u₁ = −u₂³/u₁³

Or, u₁³u₂ − u₁u₂³ = 0, i.e., u₁u₂(u₁² − u₂²) = 0

Since u₁ ≠ u₂ and neither is zero, we have u₁ = −u₂. Also, w₁ = w₂ = 1. The third equation then gives 2u₁² = 2/3, so that u₁ = 1/√3, u₂ = −1/√3.

Hence, the two point Gauss-Legendre quadrature formula is,

∫₋₁¹ F(u) du = F(1/√3) + F(−1/√3)
The Table 8.1 gives the abscissae and weights of the Gauss-Legendre quadra-
ture for values of n from 2 to 6.
Table 8.1 Values of Weights and Abscissae for Gauss-Legendre Quadrature

n Weights Abscissae
2 1.0 ± 0.57735027
3 0.88888889 0.0
0.55555556 ± 0.77459667
4 0.65214515 ± 0.33998104
0.34785485 ± 0.86113631
5 0.56888889 0.0
0.47862867 ± 0.53846931
0.23692689 ± 0.90617985
6 0.46791393 ± 0.23861919
0.36076157 ± 0.66120939
0.17132449 ± 0.93246951

It is seen that the abscissae are symmetrical with respect to the origin and the
weights are equal for equidistant points.
Example 13: Compute ∫₀² (1 + x) dx by the Gauss two point quadrature formula.

Solution: Substituting x = u + 1, the given integral ∫₀² (1 + x) dx reduces to

I = ∫₋₁¹ (u + 2) du

Using the two point Gauss quadrature formula, we have

I = (0.57735027 + 2) + (−0.57735027 + 2) = 4.0.
As expected, the result is equal to the exact value of the integral.
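A minimal sketch of this computation (not from the text) uses the two nodes ±1/√3 with unit weights from Table 8.1:

```python
# Two-point Gauss-Legendre rule: nodes +-1/sqrt(3), weights 1 (Table 8.1).
# Applied to Example 13 after the substitution x = u + 1, which turns the
# integrand (1 + x) into F(u) = u + 2 on [-1, 1].
import math

def gauss2(F):
    u = 1.0 / math.sqrt(3.0)
    return F(u) + F(-u)

I = gauss2(lambda u: u + 2.0)
print(I)  # exact for polynomials of degree up to 3, so this gives 4 (up to rounding)
```

Because the rule is exact for cubics, any linear integrand is reproduced exactly, apart from floating-point rounding.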
Example 14: Show that the Gauss two-point quadrature formula for evaluating ∫ₐᵇ f(x) dx can be written in the composite form

∫ₐᵇ f(x) dx = (h/2) Σᵢ₌₀^(N−1) [f(rᵢ) + f(sᵢ)]

where rᵢ = xᵢ + ph, sᵢ = xᵢ + (1 − p)h, p = (1/6)(3 − √3).

Solution: We subdivide the interval [a, b] into N sub-intervals, each of length h, given by h = (b − a)/N.
Consider the integral Iᵢ over the interval (xᵢ, xᵢ₊₁), i.e., Iᵢ = ∫ f(x) dx from xᵢ to xᵢ₊₁.

We transform the integral Iᵢ by putting x = (h/2)u + xᵢ + h/2, so that x = xᵢ gives u = −1 and x = xᵢ₊₁ gives u = 1. Thus,

Iᵢ = (h/2) ∫₋₁¹ f((h/2)u + xᵢ + h/2) du.
The Gauss two point quadrature gives,

Iᵢ = (h/2)[f((h/2)(1/√3) + xᵢ + h/2) + f(−(h/2)(1/√3) + xᵢ + h/2)]
   = (h/2)[f(rᵢ) + f(sᵢ)]

where rᵢ = xᵢ + ph, sᵢ = xᵢ + (1 − p)h, p = (1/6)(3 − √3)

Hence, ∫ₐᵇ f(x) dx = Σᵢ₌₀^(N−1) Iᵢ = (h/2) Σᵢ₌₀^(N−1) [f(rᵢ) + f(sᵢ)]
Note: Instead of considering Gauss integration formula for more and more num-
ber of points for better accuracy, one can use a two point composite formula for
larger number of sub-intervals.
Example 15: Evaluate the following integral by Gauss three point quadrature
formula:
I = ∫₀¹ dx/(1 + x)

Solution: We first transform the interval [0, 1] to the interval (−1, 1) by substituting t = 2x − 1, so that

∫₀¹ dx/(1 + x) = ∫₋₁¹ dt/(t + 3)

Now, by the Gauss three point quadrature with F(t) = 1/(t + 3), we have

I = (1/9)[8F(0) + 5F(0.77459667) + 5F(−0.77459667)]
  = 0.693122

The exact value of ∫₀¹ dx/(1 + x) = ln 2 = 0.693147

Error = 0.000025
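A sketch of the three-point rule combined with the linear map of Equations (8.35)-(8.36) (not from the text) reproduces this value:

```python
# Three-point Gauss-Legendre rule with the map x = ((b-a)/2)u + (a+b)/2.
# Weights 8/9 and 5/9, abscissae 0 and +-sqrt(0.6) = +-0.77459667 (Table 8.1).
import math

def gauss3(f, a, b):
    u = math.sqrt(0.6)
    nodes = [(0.0, 8.0 / 9.0), (u, 5.0 / 9.0), (-u, 5.0 / 9.0)]
    m, c = (b - a) / 2.0, (a + b) / 2.0      # x = m*t + c maps [-1,1] to [a,b]
    return m * sum(w * f(m * t + c) for t, w in nodes)

I = gauss3(lambda x: 1.0 / (1.0 + x), 0.0, 1.0)
print(round(I, 6))  # 0.693122, as in Example 15
```

The three-point rule is exact for polynomials up to degree 5, which is why only three function evaluations already match ln 2 to five decimals here.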

Romberg’s Procedure
This procedure is used to find a better estimate of an integral using the evaluation
of the integral for two values of the width of the sub-intervals.
Let I₁ and I₂ be the values of an integral I = ∫ₐᵇ f(x) dx, with two different numbers of sub-intervals of width h₁ and h₂ respectively, using the trapezoidal rule. Let
E1 and E2 be the corresponding truncation errors. Since the errors in trapezoidal
rule is of order of h2, we can write,
I = I₁ + Kh₁² and I = I₂ + Kh₂², where K is approximately the same in both.

Thus, I₁ + Kh₁² = I₂ + Kh₂², so that

K ≈ (I₁ − I₂)/(h₂² − h₁²)

Thus, I ≈ I₁ + ((I₁ − I₂)/(h₂² − h₁²)) h₁² = (I₁h₂² − I₂h₁²)/(h₂² − h₁²)

In the Romberg procedure, we take h₂ = h₁/2 and we then have,

I = (I₁(h₁/2)² − I₂h₁²)/((h₁/2)² − h₁²) = (4I₂ − I₁)/3

Or, I = I₂ + (I₂ − I₁)/3

This is known as Romberg's formula for trapezoidal integration.
This is known as Romberg’s formula for trapezoidal integration.
The use of Romberg procedure gives a better estimate of the integral without
any more function evaluation. Further, the evaluation of I2 with h/2 uses the func-
tion values required in evaluation of I1.
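A minimal sketch of the procedure (not from the text), applied to the integral of Example 17 below:

```python
# Romberg's improvement I ~ I2 + (I2 - I1)/3 from two trapezoidal
# estimates with widths h and h/2.
def trapezoid(f, a, b, n):
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

f = lambda x: 1.0 / x
I1 = trapezoid(f, 1.0, 2.0, 2)        # width h  = 0.5
I2 = trapezoid(f, 1.0, 2.0, 4)        # width h/2 = 0.25 (reuses the h-points)
I = I2 + (I2 - I1) / 3.0
print(round(I1, 4), round(I2, 4), round(I, 4))
```

The small discrepancies against the hand-worked values in the examples come from the four-decimal rounding used in the printed tables; the extrapolated estimate lands within about 10⁻⁴ of ln 2 = 0.693147.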
Example 16: Evaluate I = ∫₀¹ dx/(1 + x²) by the trapezoidal rule with h₁ = 0.5 and h₂ = 0.25, and then use the Romberg procedure for a better estimate of I. Compare the result with the exact value.

Solution: We tabulate the values of x and y = 1/(1 + x²) with h = 0.25.
x 0 0.25 0.5 0.75 1.0
y 1 0.9412 0.80 0.64 0.5

Thus, using the trapezoidal rule with h₁ = 0.5, we have

I₁ = (0.5/2) × (1 + 0.5 + 2 × 0.8) = 0.7750

Similarly, with h₂ = 0.25,

I₂ = (0.25/2) × [1 + 0.5 + 2 × (0.8 + 0.9412 + 0.64)]
   = 0.7828

The evaluation of I₂ uses the function values required in the evaluation of I₁.

By the Romberg formula,

I ≈ I₂ + (1/3)(I₂ − I₁)
  = 0.7828 + (1/3) × (0.7828 − 0.7750)
  = 0.7828 + 0.0026
  = 0.7854

The exact integral is [tan⁻¹ x]₀¹ = π/4 = 0.7854.

Thus, we can take the result correct to four places of decimals.
Example 17: Evaluate I = ∫₁² dx/x by the trapezoidal rule with two and four sub-intervals and then use the Romberg procedure to get a better estimate of I.

Solution: We form a table of values of y = 1/x with spacing h = 1/4 = 0.25.

x 1 1.25 1.5 1.75 2.0


y 1 0.8 0.6667 0.5714 0.5

0.5
I1
= [1 + 0.5 + 2 × 0.6667]
= 0.7084
2
0.25
I2
= [1 + 0.5 + 2 (0.8 + 0.6667 + 0.5714)]
= 0.6970
2
By the Romberg procedure,
I 2 − I1 1
I = I2 + ≈ 0.6970 + ( −0.0114)
3 3
= 0.6970 − 0.0038 = 0.6932

Example 18: Compute the value of ∫₀¹ dx/(1 + x), (i) by the Gauss two point and (ii) by the Gauss three point formulas.


Solution: We first transform the integral by substituting x = ((b − a)/2)t + (1/2)(b + a) = (t + 1)/2, so that

∫₀¹ dx/(1 + x) = (1/2) ∫₋₁¹ dt/(1 + (1 + t)/2) = ∫₋₁¹ dt/(3 + t)
(i) By the Gauss two point quadrature, ∫₋₁¹ F(t) dt = F(1/√3) + F(−1/√3), we get

∫₋₁¹ dt/(3 + t) = 1/(3 + 1/√3) + 1/(3 − 1/√3) = 0.6923
(ii) By the Gauss three point quadrature, with F(t) = 1/(3 + t),

∫₋₁¹ dt/(3 + t) = (1/9)[8F(0) + 5F(0.77459667) + 5F(−0.77459667)]
               = 0.693122
Example 19: Compute ∫₁² eˣ dx by the Gauss three point quadrature.

Solution: We first transform the integral by substituting x = ((b − a)/2)t + (1/2)(b + a) = (1/2)t + 3/2:

∫₁² eˣ dx = (1/2) ∫₋₁¹ e^((1/2)t + 3/2) dt = (1/2) e^(3/2) ∫₋₁¹ e^(t/2) dt

= (1/2) e^(3/2) [0.88888889 × e⁰ + 0.55555556 × (e^((1/2) × 0.77459667) + e^(−(1/2) × 0.77459667))]

= 4.67077

Check Your Progress


1. How will you evaluate a definite integral?
2. Write the trapezoidal formula for numerical integration.
3. What is Simpson’s one-third formula of numerical integration?
4. Define Simpson’s three-eighth rule of numerical integration.
5. State Weddle’s rule.
6. Why is Romberg’s procedure used?

8.3 NUMERICAL DIFFERENTIATION


Numerical differentiation is the process of computing the derivatives of a function
f(x) when the function is not explicitly known, but the values of the function are
known only at a given set of arguments x = x0, x1, x2,..., xn. For finding the
derivatives, we use a suitable interpolating polynomial and then its derivatives are
used as the formulae for the derivatives of the function. Thus, for computing the

derivatives at a point near the beginning of an equally spaced table, Newton's
forward difference interpolation formula is used, whereas Newton’s backward
difference interpolation formula is used for computing the derivatives at a point
near the end of the table. Again, for computing the derivatives at a point near the
middle of the table, the derivatives of the central difference interpolation formula are
used. If, however, the arguments of the table are unequally spaced, the derivatives
of the Lagrange’s interpolating polynomial are used for computing the derivatives
of the function.

Differentiation Using Newton’s Forward Difference


Interpolation Formula
Let the values of an unknown function y = f(x) be known for a set of equally spaced values x₀, x₁, …, xₙ of x, where xᵣ = x₀ + rh. Newton's forward difference interpolation formula is,

φ(u) = y₀ + uΔy₀ + (u(u − 1)/2!)Δ²y₀ + (u(u − 1)(u − 2)/3!)Δ³y₀ + ... + (u(u − 1)(u − 2)...(u − n + 1)/n!)Δⁿy₀

where u = (x − x₀)/h
The derivative dy/dx can be evaluated as,

dy/dx = (d/dx){φ(u)} = (d/du){φ(u)} · (du/dx) = (1/h)(d/du){φ(u)}

Thus, y′(x) ≈ (1/h)[Δy₀ + ((2u − 1)/2)Δ²y₀ + ((3u² − 6u + 2)/6)Δ³y₀ + ((2u³ − 9u² + 11u − 3)/12)Δ⁴y₀ + ...]   (8.42)

Similarly, y″(x) ≈ (1/h²) φ″(u)

Or, y″(x) ≈ (1/h²)[Δ²y₀ + (u − 1)Δ³y₀ + ((6u² − 18u + 11)/12)Δ⁴y₀ + ...]   (8.43)
For a value of x near the beginning of a table, u = (x – xo)/h is computed first
and then Equation (8.42) and (8.43) can be used to compute f ′( x) and f ′′( x). At
the tabulated point x0, the value of u is zero and the formulae for the derivatives
are given by,
1 1 1 1 1 
y ′( x0 ) = ∆y 0 − ∆2 y 0 + ∆3 y0 − ∆4 y0 + ∆5 y0 − ... (8.44)
h  2 3 4 5 
1  2 11 5 
y ′′( x0 ) = 2 
∆ y0 − ∆3 y0 + ∆4 y 0 − ∆5 y0 + ... (8.45)
h  12 6 

Differentiation Using Newton’s Backward Difference Interpolation Formula Numerical Integration and
Numerical Differentiation
For an equally spaced table of a function, Newton’s backward difference
interpolation formula is,
φ(v) = yₙ + v∇yₙ + (v(v + 1)/2!)∇²yₙ + (v(v + 1)(v + 2)/3!)∇³yₙ + (v(v + 1)(v + 2)(v + 3)/4!)∇⁴yₙ + ... + (v(v + 1)...(v + n − 1)/n!)∇ⁿyₙ

where v = (x − xₙ)/h
dy d2y
The derivatives and , obtained by differentiating the above formula
dx dx 2
are given by,
dy/dx = (1/h)[∇yₙ + ((2v + 1)/2)∇²yₙ + ((3v² + 6v + 2)/6)∇³yₙ + ((2v³ + 9v² + 11v + 3)/12)∇⁴yₙ + ...]   (8.46)

d²y/dx² = (1/h²)[∇²yₙ + (v + 1)∇³yₙ + ((6v² + 18v + 11)/12)∇⁴yₙ + ...]   (8.47)

For a given x near the end of the table, the values of dy/dx and d²y/dx² are computed by first computing v = (x − xₙ)/h and using the above formulae. At the tabulated point xₙ, the derivatives are given by,

y′(xₙ) = (1/h)[∇yₙ + (1/2)∇²yₙ + (1/3)∇³yₙ + (1/4)∇⁴yₙ + ...]   (8.48)

y″(xₙ) = (1/h²)[∇²yₙ + ∇³yₙ + (11/12)∇⁴yₙ + (5/6)∇⁵yₙ + ...]   (8.49)
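Once a difference table is built, formulas such as (8.44) apply mechanically. A sketch (not from the text; the sin x table is an assumed illustrative data set):

```python
# Derivative at the first tabulated point via Equation (8.44), using a
# forward-difference table of sin x with h = 0.1.
import math

h = 0.1
ys = [math.sin(i * h) for i in range(6)]

# Build successive forward-difference columns: cols[k][0] is the k-th
# forward difference of y0.
cols = [ys]
while len(cols[-1]) > 1:
    prev = cols[-1]
    cols.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])

# y'(x0) ~ (1/h)[dy0 - (1/2)d2y0 + (1/3)d3y0 - (1/4)d4y0 + (1/5)d5y0]
dydx = sum((-1) ** (k + 1) * cols[k][0] / k for k in range(1, 6)) / h
print(dydx)  # close to cos(0) = 1
```

The alternating 1/k coefficients are exactly those of Equation (8.44); the same loop with backward differences and all-positive coefficients gives Equation (8.48) at the last point.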
Example 20: Compute the values of f ′(2.1), f ′′(2.1), f ′(2.0) and f ′′(2.0) when f
(x) is not known explicitly, but the following table of values is given:
x f(x)
2.0 0.69315
2.2 0.78846
2.4 0.87547

Self-Instructional
Material 203
Solution: Since the points are equally spaced, we form the finite difference table.

x     f(x)       Δf(x)      Δ²f(x)
2.0   0.69315
                 0.09531
2.2   0.78846               −0.00830
                 0.08701
2.4   0.87547

For computing the derivatives at x = 2.1, we have

f′(x) ≈ (1/h)[Δf₀ + ((2u − 1)/2)Δ²f₀] and f″(x) ≈ (1/h²)Δ²f₀

u = (x − x₀)/h = (2.1 − 2.0)/0.2 = 0.5

f′(2.1) = (1/0.2)[0.09531 + ((2 × 0.5 − 1)/2) × (−0.00830)] = (1/0.2) × 0.09531 = 0.4765

f″(2.1) = (1/(0.2)²) × (−0.00830) = −0.21

The value of f′(2.0) is given by,

f′(2.0) = (1/0.2)[Δf₀ − (1/2)Δ²f₀]
        = (1/0.2)[0.09531 + (1/2) × 0.00830]
        = 0.09946/0.2
        = 0.4973

f″(2.0) = (1/(0.2)²) × (−0.00830)
        = −0.21
Example 21: For the function f(x) whose values are given in the table below
compute values of f ′(1), f ′′(1), f ′(5.0), f ′′(5.0).

x 1 2 3 4 5 6
f ( x) 7.4036 7.7815 8.1291 8.4510 8.7506 9.0309

Solution: Since f(x) is known at equally spaced points, we form the finite difference table to be used in the differentiation formulae based on Newton's interpolating polynomial.

x   f(x)   Δf(x)   Δ²f(x)   Δ³f(x)   Δ⁴f(x)   Δ⁵f(x)
1 7.4036
0.3779
2 7.7815 − 303
0.3476 46
3 8.1291 − 257 − 12
0.3219 34 8
4 8.4510 − 223 −4
0.2996 30
5 8.7506 − 193
0.2803
6 9.0309

To calculate f′(1) and f″(1), we use the derivative formulae based on Newton's forward difference interpolation at the tabulated point, given by,

1 1 1 1 1 
f ′( x0 ) =  ∆f 0 − ∆ 2 f 0 + ∆3 f 0 − ∆ 4 f 0 + ∆ 5 f 0 
h 2 3 4 5 
1
 2 11 4 5 5 
f ′′( x=
0)  ∆ f 0 − ∆ f0 + 12 ∆ f 0 − 6 ∆ f0 
3

h2
1 1 1 1 1 

= f ′(1)  0.3779 − × (−0.0303) + × 0.0046 − × ( −0.0012) + × 0.0008
1 2 3 4 5 
= 0.39507
 11 5 
f ′′(1) 0.0303 − 0.0046 + × (−0.0012) − × 0.0008
=
 12 6 
= −0.0367

Similarly, for evaluating f′(5.0) and f″(5.0), we use the formulae

f′(xₙ) = (1/h)[∇fₙ + (1/2)∇²fₙ + (1/3)∇³fₙ + (1/4)∇⁴fₙ + (1/5)∇⁵fₙ]

f″(xₙ) = (1/h²)[∇²fₙ + ∇³fₙ + (11/12)∇⁴fₙ + (5/6)∇⁵fₙ]

f′(5) = [0.2996 + (1/2) × (−0.0223) + (1/3) × 0.0034 + (1/4) × (−0.0012)]
      = 0.2893

f″(5) = [−0.0223 + 0.0034 + (11/12) × (−0.0012)]
      = −0.0200

Example 22: Compute the values of y ′(0), y ′′(0.0), y ′(0.02) and y ′′(0.02) for the
function y = f(x) given by the following tabular values:

x 0.0 0.05 0.10 0.15 0.20 0.25


NOTES y 0.00000 0.10017 0.20134 0.30452 0.41075 0.52110

Solution: Since the values of x for which the derivatives are to be computed lie
near the beginning of the equally spaced table, we use the differentiation formulae
based on Newton’s forward difference interpolation formula. We first form the
finite difference table.
x y ∆y ∆2 y ∆3 y ∆4 y
0.0 0.00000
0.10017
0.05 0.10017 100
0.10117 101
0.10 0.20134 201 3
0.10318 104
0.15 0.30452 305 3
0.10623 107
0.20 0.41075 412
0.11035
0.25 0.52110

For evaluating y′(0.0), we use the formula


1 1 1 1 
y′( x0 ) =  ∆y0 − ∆ 2 y0 + ∆ 3 y0 − ∆ 4 y0 
h 2 3 4 
1  1 1 1 
y′(0.0)
∴=  0.10017 − × 0.00100 + × 0.00101 − × 0.00003 
0.05  2 3 4 
= 2.00000
For evaluating y″(0.0), we use the formula
1 2 11 4 
y′′( x=
0)
3
 ∆ y0 − ∆ y0 + ∆ y0 
h2  12 
1  11 
=  0.00100 − 0.00101 + × 0.00003 
(0.05)2  12 
= 0.007
For evaluating y′(0.02) and y″(0.02), we use the following formulae, with

u = (0.02 − 0.00)/0.05 = 0.4

y′(0.02) = (1/h)[Δy₀ + ((2u − 1)/2)Δ²y₀ + ((3u² − 6u + 2)/6)Δ³y₀ + ((2u³ − 9u² + 11u − 3)/12)Δ⁴y₀]

y″(0.02) = (1/h²)[Δ²y₀ + (u − 1)Δ³y₀ + ((6u² − 18u + 11)/12)Δ⁴y₀]

∴ y′(0.02) = (1/0.05)[0.10017 + ((2 × 0.4 − 1)/2) × 0.00100 + ((3 × (0.4)² − 6 × 0.4 + 2)/6) × 0.00101 + ((2 × (0.4)³ − 9 × (0.4)² + 11 × 0.4 − 3)/12) × 0.00003]
= (1/0.05)[0.10017 − 0.00010 + 0.0000135 + 0.0000002]
= 2.0017

y″(0.02) = (1/(0.05)²)[0.00100 + (0.4 − 1) × 0.00101 + ((6 × 0.16 − 18 × 0.4 + 11)/12) × 0.00003]
= (1/(0.05)²)[0.00100 − 0.00061 + 0.00001]
= 0.162

Example 23: Compute f ′(6.0) and f ′′(6.3) by numerical differentiation formulae


for the function f(x) given in the following table.

x 6.0 6.1 6.2 6.3 6.4


f ( x) − 0.1750 − 0.1998 − 0.2223 − 0.2422 − 0.2596

Solution: We first form the finite difference table,


x f ( x) ∆f ( x) ∆2 f ( x) ∆3 f ( x)
6.0 − 0.1750
− 248
6.1 − 0.1998 23
− 225 3
6.2 − 0.2223 26
− 199 −1
6.3 − 0.2422 25
− 174
6.4 − 0.2596

For evaluating f′(6.0), we use the formula derived by differentiating Newton's forward difference interpolation formula:

f′(x₀) = (1/h)[Δf₀ − (1/2)Δ²f₀ + (1/3)Δ³f₀]

∴ f′(6.0) = (1/0.1)[−0.0248 − (1/2) × 0.0023 + (1/3) × 0.0003]
= 10 × [−0.0248 − 0.00115 + 0.0001]
= −0.2585

For evaluating f″(6.3), we use the formula obtained by differentiating Newton's backward difference interpolation formula. It is given by,

f″(xₙ) = (1/h²)[∇²fₙ + ∇³fₙ]

f″(6.3) = (1/(0.1)²)[0.0026 + 0.0003] = 0.29
Example 24: Compute the values of y ′(1.00) and y ′′(1.00) using suitable numeri-
cal differentiation formulae on the following table of values of x and y:

x 1.00 1.05 1.10 1.15 1.20


y 1.0000 1.02470 1.04881 1.07238 1.09544

Solution: For computing the derivatives, we use the formulae derived on differentiating Newton's forward difference interpolation formula, given by
1 1 1 1 
f ′( x0 ) =  ∆y0 − ∆2 y0 + ∆3 y0 − ∆4 y0 + ...
h 2 3 4 
1  2 11 
f ′′( x0 ) = 2 
∆ y0 − ∆3 y0 + ∆4 y 0 + ...
h  12 
Now, we form the finite difference table.
x y ∆y ∆2 y ∆3 y ∆4 y
1.00 1.00000
2470
1.05 1.02470 − 59
2411 5
1.10 1.04881 − 54 −2
2357 3
1.15 1.07238 − 51
2306
1.20 1.09544

Thus, with x₀ = 1.00, we have

y′(1.00) = (1/0.05)[0.02470 + (1/2) × 0.00059 + (1/3) × 0.00005 + (1/4) × 0.00002]
= 0.5003

y″(1.00) = (1/(0.05)²)[−0.00059 − 0.00005 − (11/12) × 0.00002]
= −0.26
Example 25: Using the following table of values, find a polynomial representation
of f ′(x) and then compute f ′(0.5).

x     0  1  2   3
f(x)  1  3  15  40
Solution: Since the values of x are equally spaced, we use Newton's forward difference interpolating polynomial for finding f′(x) and f′(0.5). We first form the finite difference table as given below:
x f ( x) ∆f ( x) ∆2 f ( x) ∆3 f ( x) NOTES
0 1
2
1 3 10
12 3
2 15 13
25
3 40

Taking x₀ = 0, we have u = (x − x₀)/h = x. Thus, Newton's forward difference interpolation gives,

f = f₀ + uΔf₀ + (u(u − 1)/2!)Δ²f₀ + (u(u − 1)(u − 2)/3!)Δ³f₀

i.e., f(x) ≈ 1 + 2x + (x(x − 1)/2) × 10 + (x(x − 1)(x − 2)/6) × 3

or, f(x) = 1 − 2x + (7/2)x² + (1/2)x³

f′(x) = −2 + 7x + (3/2)x²

and, f′(0.5) = −2 + 7 × 0.5 + (3/2) × (0.5)² = 1.875
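A quick numeric cross-check (not in the text): expand the Newton forward polynomial through the tabulated points and differentiate it at x = 0.5.

```python
# Newton forward polynomial through (0,1), (1,3), (2,15), (3,40),
# with h = 1 so u = x; differences dy0 = 2, d2y0 = 10, d3y0 = 3.
def p(x):
    return 1 + 2 * x + 10 * x * (x - 1) / 2 + 3 * x * (x - 1) * (x - 2) / 6

assert [p(t) for t in (0, 1, 2, 3)] == [1, 3, 15, 40]  # reproduces the table

# Derivative at 0.5 by a small central difference; the polynomial is a
# cubic, so the O(eps^2) truncation error is negligible here.
eps = 1e-6
dfdx = (p(0.5 + eps) - p(0.5 - eps)) / (2 * eps)
print(round(dfdx, 3))
```

Checking that the polynomial reproduces every tabulated value before differentiating is a cheap guard against sign slips in the expansion.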
Example 26: The population of a city is given in the following table. Find the rate
of growth in population in the year 2001 and in 1995.

Year x 1961 1971 1981 1991 2001


Population y 40.62 60.80 79.95 103.56 132.65

Solution: Since the rate of growth of the population is dy/dx, we have to compute dy/dx at x = 2001 and at x = 1995. For this, we consider the formula for the derivative obtained on approximating y by Newton's backward difference interpolation, given by,

dy/dx = (1/h)[∇yₙ + ((2u + 1)/2)∇²yₙ + ((3u² + 6u + 2)/6)∇³yₙ + ((2u³ + 9u² + 11u + 3)/12)∇⁴yₙ + ...]

where u = (x − xₙ)/h
For this we construct the finite difference table as given below:

x y Δy Δ²y Δ³y Δ⁴y
1961 40.62
20.18
1971 60.80 − 1.03
19.15 5.49
1981 79.95 4.46 − 4.47
23.61 1.02
1991 103.56 5.48
29.09
2001 132.65

For x = 2001, u = (x − xₙ)/h = 0 and,

(dy/dx)₂₀₀₁ = (1/10)[29.09 + (1/2) × 5.48 + (1/3) × 1.02 + (1/4) × (−4.47)]
= 3.105

For x = 1995, taking xₙ = 1991, we have u = (1995 − 1991)/10 = 0.4 and,

(dy/dx)₁₉₉₅ = (1/10)[23.61 + ((2 × 0.4 + 1)/2) × 4.46 + ((3 × 0.16 + 6 × 0.4 + 2)/6) × 5.49]
= 3.21

8.4 OPTIMUM CHOICE OF STEP LENGTH

In numerical analysis, numerical differentiation describes algorithms for estimating


the derivative of a mathematical function or function subroutine using values of the
function and perhaps other knowledge about the function. The simplest method is
to use finite difference approximations.
An important consideration in practice when the function is calculated
using floating-point arithmetic is the choice of step size, h. If chosen too small, the
subtraction will yield a large rounding error. In fact, all the finite-difference formulae
are ill-conditioned and due to cancellation will produce a value of zero if h is small
enough. If too large, the calculation of the slope of the secant line will be more
accurately calculated, but the estimate of the slope of the tangent by using the
secant could be worse.

For the numerical derivative formula evaluated at x and x + h, a choice for h that is small without producing a large rounding error is √ε · x (though not when x = 0), where the machine epsilon ε is typically of the order of 2.2 × 10⁻¹⁶. A formula for h that balances the rounding error against the secant error for optimum accuracy is,

h = 2√(ε |f(x)/f″(x)|)

though not when f(x) = 0, and employing it requires knowledge of the function.
For single precision the problems are exacerbated because, although x may
be a representable floating-point number, x + h almost certainly will not be. This
means that x + h will be changed (by rounding or truncation) to a nearby machine-
representable number, with the consequence that (x + h) – x will not equal h;
the two function evaluations will not be exactly h apart. Consequently, since most
decimal fractions are recurring sequences in binary (just as 1/3 is in decimal) a
seemingly round step, such as h = 0.1 will not be a round number in binary; it is
0.000110011001100...
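The trade-off described above is easy to observe numerically. A sketch (not from the text; double-precision arithmetic and f = exp at x = 1 are assumptions chosen for illustration):

```python
# Error of the forward-difference quotient (f(x+h) - f(x))/h for f = exp
# at x = 1: truncation error ~ h dominates for large h, rounding error
# ~ eps/h dominates for tiny h, so an intermediate h is best.
import math

x, exact = 1.0, math.exp(1.0)   # f'(1) = e for f = exp
err = {}
for k in (2, 5, 8, 11, 14):
    h = 10.0 ** (-k)
    err[k] = abs((math.exp(x + h) - math.exp(x)) / h - exact)

print(err[2], err[8], err[14])
```

Running this shows the error falling from h = 10⁻² down to around h = 10⁻⁸ (close to √ε) and then growing again as cancellation takes over.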

Check Your Progress


7. Define the process of numerical differentiation.
8. Write Newton's forward difference interpolation formula.
9. Write Newton's backward difference interpolation formula.

8.5 EXTRAPOLATION METHOD


The interpolating polynomials are usually used for finding values of the tabulated
function y = f(x) for a value of x within the table. But, they can also be used in
some cases for finding values of f(x) for values of x near to the end points x0 or xn
outside the interval [x0, xn]. This process of finding values of f(x) at points beyond
the interval is termed as extrapolation. We can use Newton’s forward difference
interpolation for points near the beginning value x0. Similarly, for points near the
end value xn, we use Newton’s backward difference interpolation formula.
Example 27: With the help of appropriate interpolation formula, find from the
following data the weight of a baby at the age of one year and of ten years:

Age = x 3 5 7 9
Weight = y (kg ) 5 8 12 17

Solution: Since the values of x are equidistant, we form the finite difference table for using Newton's forward difference interpolation formula to compute the weight of the baby at the required ages.

x y Δy Δ²y
3 5
3
5 8 1
4
7 12 1
5
9 17

Taking x₀ = 3 and h = 2, for x = 1 we have u = (x − x₀)/h = (1 − 3)/2 = −1.

Newton's forward difference interpolation gives,

y(1) = 5 + (−1) × 3 + ((−1)(−2)/2) × 1
     = 5 − 3 + 1 = 3 kg.
Similarly, for computing the weight of the baby at the age of ten years, we use Newton's backward difference interpolation given by,

v = (x − xₙ)/h = (10 − 9)/2 = 0.5

y(10) = 17 + 0.5 × 5 + ((0.5 × 1.5)/2) × 1
      = 17 + 2.5 + 0.38 = 19.88 kg.
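Because the third differences of this table vanish, one quadratic in u = (x − 3)/2 fits all four points, and the same polynomial can be used for extrapolation on both sides. A sketch (not from the text; the function name `weight` is an illustrative assumption):

```python
# Newton forward polynomial for Example 27: u = (age - 3)/2, y0 = 5,
# dy0 = 3, d2y0 = 1, and all third differences are zero.
def weight(age):
    u = (age - 3.0) / 2.0
    return 5.0 + 3.0 * u + u * (u - 1.0) / 2.0 * 1.0

print(weight(1.0), weight(10.0))  # extrapolated weights at ages 1 and 10
```

Since the polynomial fits the table exactly, evaluating it below u = 0 or beyond u = 3 is precisely the extrapolation described in this section.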

8.6 ANSWERS TO CHECK YOUR PROGRESS


QUESTIONS

1. The evaluation of a definite integral cannot be carried out when the integrand
f(x) is not integrable, as well as when the function is not explicitly known
but only the function values are known at a finite number of values of x.
There are two types of numerical methods for evaluating a definite integral
based on the following formula: ∫ₐᵇ f(x) dx.
2. The formula is, ∫ f(x) dx from x₀ to x₁ = (h/2)[f₀ + f₁].

3. The formula is, ∫ f(x) dx from x₀ to x₂ = (h/3)[f₀ + 4f₁ + f₂].

4. Simpson's three-eighth rule of numerical integration is, ∫ₐᵇ f(x) dx = (3h/8)[y₀ + 3y₁ + 3y₂ + 2y₃ + 3y₄ + 3y₅ + 2y₆ + … + 2y₃ₘ₋₃ + 3y₃ₘ₋₂ + 3y₃ₘ₋₁ + y₃ₘ], where h = (b − a)/(3m), for m = 1, 2, ...
5. Weddle's rule is, ∫ₐᵇ f(x) dx = (3h/10)[y₀ + 5y₁ + y₂ + 6y₃ + y₄ + 5y₅ + 2y₆ + 5y₇ + y₈ + 6y₉ + y₁₀ + 5y₁₁ + ... + 2y₆ₘ₋₆ + 5y₆ₘ₋₅ + y₆ₘ₋₄ + 6y₆ₘ₋₃ + y₆ₘ₋₂ + 5y₆ₘ₋₁ + y₆ₘ], where b − a = 6mh.
6. This procedure is used to find a better estimate of an integral using the
evaluation of the integral for two values of the width of the sub-intervals.
7. Numerical differentiation is the process of computing the derivatives of a
function f(x) when the function is not explicitly known, but the values of the
function are known for a given set of arguments x = x0, x1, x2, ..., xn. To
find the derivatives, we use a suitable interpolating polynomial and then its
derivatives are used as the formulae for the derivatives of the function.
8. Newton’s forward difference interpolation formula is,

u (u − 1) 2 u (u − 1)(u − 2) 3 u (u − 1)(u − 2)...(u − n + 1) n


ϕ (u ) = y0 + u ∆ y0 + ∆ y0 + ∆ y0 + ... + ∆ y0
2 ! 3 ! n !

where u = x − x0
h
9. Newton’s backward difference interpolation formula is,

v (v + 1) 2 v(v + 1)(v + 2) 3 v (v + 1)(v + 2)(v + 3) 4


ϕ(v ) = yn + v ∇yn + ∇ yn + ∇ yn + ∇ yn + ...
2 ! 3 ! 4 !
v (v + 1)...(v + n − 1) n
+ ∇ yn
n !
x − xn
Where v =
h

8.7 SUMMARY

Numerical differentiation is the process of computing the derivatives of a


function f(x) when the function is not explicitly known, but the values of the
function are known only at a given set of arguments x = x0, x1, x2, ..., xn.
For computing the derivatives at a point near the beginning of an equally
spaced table, Newton’s forward difference interpolation formula is used,
whereas Newton's backward difference interpolation formula is used for
computing the derivatives at a point near the end of the table.
Numerical methods can be applied to determine the value of the integral
when the integrand is not integrable as well as when the function is not
NOTES
explicitly known but only the function values are known.
The two types of numerical methods for evaluating a definite integral are
Newton-Cotes quadrature and Gaussian quadrature.
Taking n = 2 in the Newton-Cotes formula, we get Simpson’s one-third
formula of numerical integration while taking n = 3, we get Simpson’s three-
eighth formula of numerical integration.
In Newton-Cotes formula with n = 6 some minor modifications give the
Weddle’s formula.
For evaluating a definite integral correct to a desired accuracy, one has to
make a suitable choice of the value of h, the length of sub-interval to be
used in the formula.
There are two ways of determining h, by considering the truncation error in
the formula to be used for numerical integration or by successive evaluation
of the integral by the technique of interval halving and comparing the results.
In the truncation error estimation method, the value of h to be used is
determined by considering the truncation error in the formula for numerical
integration.
When the estimation of the truncation error is cumbersome, the method of
interval halving is used to compute an integral to the desired accuracy.
Numerical evaluation of double integrals is done by applying trapezoidal
rule and Simpson’s one-third rule.
This procedure is used to find a better estimate of an integral using the
evaluation of the integral for two values of the width of the sub-intervals.
For finding the derivatives, we use a suitable interpolating polynomial and
then its derivatives are used as the formulae for the derivatives of the function.
For computing the derivatives at a point near the beginning of an equally
spaced table, Newton’s forward difference interpolation formula is used,
whereas Newton’s backward difference interpolation formula is used for
computing the derivatives at a point near the end of the table.
Let the values of an unknown function y = f(x) be known for a set of equally
spaced values x0, x1, …, xn of x, where xr = x0 + rh. Newton’s forward
difference interpolation formula is,
φ(u) = y₀ + uΔy₀ + (u(u − 1)/2!)Δ²y₀ + (u(u − 1)(u − 2)/3!)Δ³y₀ + ... + (u(u − 1)...(u − n + 1)/n!)Δⁿy₀

where u = (x − x₀)/h.
At the tabulated point x₀, the value of u is zero and the formulae for the derivatives are given by,

1 1 1 1 1 
y ′( x0 ) =  ∆y0 − ∆2 y0 + ∆3 y0 − ∆4 y0 + ∆5 y0 − ...
h 2 3 4 5  NOTES

1  2 11 5 
y′′( x0 ) = 2 
∆ y0 − ∆3 y0 + ∆4 y0 − ∆5 y0 + ...
h  12 6 

For a given x near the end of the table, the values of dy/dx and d²y/dx² are computed by first computing v = (x − xₙ)/h and using the above formulae. At the tabulated point xₙ, the derivatives are given by,

y′(xₙ) = (1/h)[∇yₙ + (1/2)∇²yₙ + (1/3)∇³yₙ + (1/4)∇⁴yₙ + ...]

y″(xₙ) = (1/h²)[∇²yₙ + ∇³yₙ + (11/12)∇⁴yₙ + (5/6)∇⁵yₙ + ...]
For computing the derivatives at a point near the middle of the table, the
derivatives of the central difference interpolation formula are used.
If the arguments of the table are unequally spaced, then the derivatives of
the Lagrange’s interpolating polynomial are used for computing the derivatives
of the function.

8.8 KEY WORDS

Newton-Cotes quadrature: This is based on integrating polynomial


interpolation formulae and requires a table of values of the integrand at
equally spaced values of the independent variable x.
Trapezoidal formula: The trapezoidal formula of numerical integration is
defined using the definite integral of the function f (x) between the limits x0
to x1, as it is approximated by the area of the trapezoidal region bounded
by the chord joining the points (x0, f0) and (x1, f1), the x-axis and the
ordinates at x = x0 and at x = x1.
Romberg’s procedure: This procedure is used to find a better estimate of
an integral using the evaluation of the integral for two values of the width of
the sub-intervals.
Weddle’s rule: It is a composite Weddle’s formula and is used when the
number of sub-intervals is multiple of 6.

Numerical differentiation: It is the process of computing the derivatives
of a function f(x) when the function is not explicitly known, but the values of
the function are known for a given set of arguments x = x0, x1, x2, ..., xn.
Newton’s forward difference interpolation formula: The Newton’s
NOTES
forward difference interpolation formula is used for computing the derivatives
at a point near the beginning of an equally spaced table.
Newton’s backward difference interpolation formula: Newton’s
backward difference interpolation formula is used for computing the
derivatives at a point near the end of the table.
Central difference interpolation formula: For computing the derivatives
at a point near the middle of the table, the derivatives of the central difference
interpolation formula are used.

8.9 SELF ASSESSMENT QUESTIONS AND


EXERCISES

Short-Answer Questions
1. State Newton-Cotes formula.
2. State the trapezoidal rule.
3. What is the difference between Simpson’s one-third formula and one-third
rule?
4. What is the error in Weddle’s rule?
5. Give the truncation error in Simpson’s one-third rule.
6. Where is interval halving technique used?
7. Name the methods used for numerical evaluation of double integrals.
8. State the Gauss quadrature formula.
9. State an application of Romberg’s procedure.
10. Define the term numerical differentiation.
11. How can the derivative dy/dx be evaluated?
12. Give the formulae for the derivatives at the tabulated point x0 where the
value of u is zero.
13. Give the differentiation formula for Newton’s backward difference
interpolation.
14. Give the Newton’s backward difference interpolation formula for an equally
spaced table of a function.

Long-Answer Questions
1. Use suitable formulae to compute y′(1.4) and y″(1.4) for the function y = f(x), given by the following tabular values.
x 1.4 1.8 2.2 2.6 3.0
y 0.9854 0.9738 0.8085 0.5155 0.1411

2. Compute dy/dx and d²y/dx² for x = 1 where the function y = f(x) is given by the following table:
x 1 2 3 4 5 6
y 1 8 27 64 125 216

3. Compute ∫₀²⁰ f(x) dx by Simpson’s one-third rule, where:
x 0 5 10 15 20
f(x) 1.0 1.6 3.8 8.2 15.4

4. Compute ∫₀⁴ x³ dx by Simpson’s one-third formula and comment on the result:
x 0 2 4
x³ 0 8 64

5. Compute ∫₀ x³ dx by Simpson’s one-third formula and comment on the result.
6. Compute ∫₀² eˣ dx by Simpson’s one-third formula and compare with the exact value, where e⁰ = 1, e¹ = 2.72, e² = 7.39.
7. Compute an approximate value of π by integrating ∫₀¹ dx/(1 + x²) by Simpson’s one-third formula.
8. A rod is rotating in a plane about one of its ends. The following table gives the angle θ (in radians) through which the rod has turned for different values of time t seconds. Find its angular velocity dθ/dt and angular acceleration d²θ/dt² at t = 1.0.

t secs 0.0 0.2 0.4 0.6 0.8 1.0
θ radians 0.0 0.12 0.48 1.10 2.00 3.20

9. Find dy/dx and d²y/dx² at x = 1 and at x = 3 for the function y = f(x), whose values are given in the following table:
x 1 2 3 4 5 6
y 2.7183 3.3210 4.0552 4.9530 6.0496 7.3891

10. Find dy/dx and d²y/dx² at x = 0.96 and at x = 1.04 for the function y = f(x) given in the following table:
x 0.96 0.98 1.0 1.02 1.04
y 0.7825 0.7739 0.7651 0.7563 0.7473

11. Compute ∫₀ (x + 1) dx by the trapezoidal rule taking four sub-intervals, and comment on the result by comparing it with the exact value.

12. Compute ∫₁¹·⁴ (x³ + 2) dx by Simpson’s one-third rule taking four sub-intervals, and find the error in the result.

13. Evaluate ∫₀¹ cos x dx correct to three significant figures, taking five equal sub-intervals.
14. Compute the value of the integral ∫₀¹ x dx/(1 + x) correct to three significant figures by Simpson’s one-third rule with six sub-intervals.


15. Compute the integral ∫₀¹ dx/(1 + x) by Simpson’s one-third rule taking four sub-intervals, and use it to compute an approximate value of logₑ2.


16. Discuss numerical differentiation using Newton’s forward difference
interpolation formula and Newton’s backward difference interpolation
formula.
17. Use the following table of values to compute ∫₀³ f(x) dx:
x 0 1 2 3
f(x) 1.6 3.8 8.2 15.4

18. Use suitable formulae to compute y′(1.4) and y″(1.4) for the function y = f(x), given by the following tabular values:
x 1.4 1.8 2.2 2.6 3.0
y 0.9854 0.9738 0.8085 0.5155 0.1411

19. Compute dy/dx and d²y/dx² for x = 1 where the function y = f(x) is given by the following table:
x 1 2 3 4 5 6
y 1 8 27 64 125 216

20. A rod is rotating in a plane about one of its ends. The following table gives the angle θ (in radians) through which the rod has turned for different values of time t seconds. Find its angular velocity dθ/dt and angular acceleration d²θ/dt² at t = 1.0.
t secs 0.0 0.2 0.4 0.6 0.8 1.0
θ radians 0.0 0.12 0.48 1.10 2.00 3.20

21. Find dy/dx and d²y/dx² at x = 1 and at x = 3 for the function y = f(x), whose values in [1, 6] are given in the following table:
x 1 2 3 4 5 6
y 2.7183 3.3210 4.0552 4.9530 6.0496 7.3891

22. Find dy/dx and d²y/dx² at x = 0.96 and at x = 1.04 for the function y = f(x) given in the following table:
x 0.96 0.98 1.0 1.02 1.04
y 0.7825 0.7739 0.7651 0.7563 0.7473

8.10 FURTHER READINGS

Jain, M. K., S. R. K. Iyengar and R. K. Jain. 2007. Numerical Methods for Scientific and Engineering Computation. New Delhi: New Age International (P) Limited.
Atkinson, Kendall E. 1989. An Introduction to Numerical Analysis, 2nd Edition.
US: John Wiley & Sons.

Jain, M. K. 1983. Numerical Solution of Differential Equations. New Delhi: New Age International (P) Limited.
Conte, Samuel D. and Carl de Boor. 1980. Elementary Numerical Analysis:
An Algorithmic Approach. New York: McGraw-Hill.
Skeel, Robert. D and Jerry B. Keiper. 1993. Elementary Numerical Computing
with Mathematica. New York: McGraw-Hill.
Balaguruswamy, E. 1999. Numerical Methods. New Delhi: Tata McGraw-Hill.
Datta, N. 2007. Computer Oriented Numerical Methods. New Delhi: Vikas
Publishing House Pvt. Ltd.

BLOCK - III
PDE, ODE AND EULER METHODS
UNIT 9 PARTIAL DIFFERENTIAL
EQUATIONS
Structure
9.0 Introduction
9.1 Objectives
9.2 Partial Differential Equation of the First Order Lagrange’s Solution
9.3 Solution of Some Special Types of Equations
9.4 Charpit’s General Method of Solution and Its Special Cases
9.5 Partial Differential Equations of Second and Higher Orders
9.5.1 Classification of Linear Partial Differential Equations of Second Order
9.6 Homogeneous and Non-Homogeneous Equations with
Constant Coefficients
9.7 Partial Differential Equations Reducible to Equations with
Constant Coefficients
9.8 Answers to Check Your Progress Questions
9.9 Summary
9.10 Key Words
9.11 Self Assessment Questions and Exercises
9.12 Further Readings

9.0 INTRODUCTION

In this unit, you will learn about partial differential equations. Partial differential
equations are used to formulate, and thus aid the solution of, problems involving
functions of several variables. Partial differential equations often model
multidimensional systems.
You will learn various methods to solve partial differential equations of first,
second and higher orders.

9.1 OBJECTIVES

After going through this unit, you will be able to:
Derive partial differential equations of the first order Lagrange’s solution
Know some special types of equations which can be solved easily by methods other than the general method
Describe Charpit’s general method of solution and its special cases
Solve partial differential equations of second and higher orders
Classify linear partial differential equations of second order
Explain homogeneous and non-homogeneous equations with constant coefficients
Reduce partial differential equations to equations with constant coefficients

9.2 PARTIAL DIFFERENTIAL EQUATION OF THE FIRST ORDER LAGRANGE’S SOLUTION

Lagrange’s Equation
The partial differential equation Pp + Qq = R, where P, Q, R are functions of x, y,
z, is called Lagrange’s linear differential equation.
Form the auxiliary equations dx/P = dy/Q = dz/R and find two independent solutions of the auxiliary equations, say u(x, y, z) = C1 and v(x, y, z) = C2, where C1 and C2 are constants. Then the solution of the given equation is F(u, v) = 0 or u = F(v).
For example, solve (y² + z²)p – xyq + xz = 0.
The auxiliary equations are
dx/(y² + z²) = dy/(–xy) = dz/(–xz)   (9.1)

Taking the last two members, we get
dy/y = dz/z

Integrating, we get log y = log z + constant, i.e.,
y/z = C1
Each of the ratios in Equation (9.1) is equal to
(x dx + y dy + z dz)/(x(y² + z²) – xy² – xz²)
i.e., (x dx + y dy + z dz)/0
i.e., x dx + y dy + z dz = 0
Hence after integration this reduces to
x2 + y2 + z2 = C2
Hence the general solution of the equation is
F(y/z, x² + y² + z²) = 0
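The two constants C1 and C2 are invariants along the characteristic curves dx/dt = y² + z², dy/dt = –xy, dz/dt = –xz of this example, and that can be checked numerically. The sketch below is our own illustration (not from the text): it integrates the characteristic system with a hand-rolled RK4 step and verifies that y/z and x² + y² + z² stay constant.

```python
def rk4_step(f, state, dt):
    """One classical Runge-Kutta step for a first-order ODE system."""
    k1 = f(state)
    k2 = f([s + dt/2 * k for s, k in zip(state, k1)])
    k3 = f([s + dt/2 * k for s, k in zip(state, k2)])
    k4 = f([s + dt * k for s, k in zip(state, k3)])
    return [s + dt/6 * (a + 2*b + 2*c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

def rhs(state):
    x, y, z = state
    return [y*y + z*z, -x*y, -x*z]   # characteristic system of the PDE

state = [1.0, 2.0, 3.0]
u0 = state[1] / state[2]              # y/z
v0 = sum(s*s for s in state)          # x^2 + y^2 + z^2
for _ in range(1000):
    state = rk4_step(rhs, state, 0.001)
u1 = state[1] / state[2]
v1 = sum(s*s for s in state)
```

Both quantities are conserved to roundoff, which is exactly what the general solution F(y/z, x² + y² + z²) = 0 expresses.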
Example 1: Solve x² ∂z/∂x + y² ∂z/∂y = (x + y)z

Solution: The auxiliary equations are


dx/x² = dy/y² = dz/((x + y)z)
i.e., (dx – dy)/(x² – y²) = dz/((x + y)z)
i.e., (dx – dy)/(x – y) = dz/z

i.e. log (x – y) = log z + constant


(x – y)/z = C1

Also dx/x² = dy/y²

Hence –1/x = –1/y + constant
i.e., 1/y – 1/x = C2

Hence the solution is F(1/y – 1/x, (x – y)/z) = 0.
Example 2: Solve (x2 – yz)p + (y2 – zx)q = z2 – xy
Solution:
The subsidiary equations are:
dx/(x² – yz) = dy/(y² – zx) = dz/(z² – xy)

Each ratio also equals
(dx – dy)/(x² – yz – (y² – zx)) = d(x – y)/((x – y)(x + y + z))
and similarly d(y – z)/((y – z)(x + y + z)), so that
d(x – y)/(x – y) = d(y – z)/(y – z)

Integrating, log (x – y) = log (y – z) + log C1
(x – y)/(y – z) = C1   (1)

Using multipliers x, y, z, each of the subsidiary ratios equals
(x dx + y dy + z dz)/(x³ + y³ + z³ – 3xyz) = (x dx + y dy + z dz)/((x + y + z)(x² + y² + z² – xy – yz – zx))

and is also equal to
(dx + dy + dz)/(x² + y² + z² – yz – zx – xy)
Hence (x dx + y dy + z dz)/(x + y + z) = (dx + dy + dz)/1
i.e., x dx + y dy + z dz = (x + y + z) d(x + y + z)
On integrating, we get
x² + y² + z² = (x + y + z)² + constant
i.e., xy + yz + zx = C2   (2)
From Equations (1) and (2), we get the solution,

x− y 
F 0, where F is arbitrary..
, xy + yz + zx  =
 y−z 
Example 3: Solve (a – x)p + (b – y)q = c – z
Solution:
The subsidiary equations are:
dx/(a – x) = dy/(b – y) = dz/(c – z)   (1)
From Equation (1)
dy/(b – y) = dz/(c – z)
i.e., dy/(y – b) = dz/(z – c)
log ( y – b) = log (z – c) + log C1
(y – b)/(z – c) = C1

Also
dx/(a – x) = dy/(b – y)
i.e., dx/(x – a) = dy/(y – b)
log (x – a) = log (y – b) + log C2
(x – a)/(y – b) = C2
The general solution is

F((y – b)/(z – c), (x – a)/(y – b)) = 0
Example 4: Solve (y – z)p + (z – x)q = x – y
Solution:
The auxiliary equations are:
dx/(y – z) = dy/(z – x) = dz/(x – y) = (dx + dy + dz)/0
dx + dy + dz = 0
Integrating we get, x + y + z = C1
Also each ratio equals
(x dx + y dy + z dz)/(x(y – z) + y(z – x) + z(x – y)) = (x dx + y dy + z dz)/0
x dx + y dy + z dz = 0
On integrating, we get,
x2 + y2 + z2 = C2
The general solution is
F(x + y + z, x2 + y2 + z2) = 0
Example 5: Solve (mz – ny)p + (nx – lz)q = ly – mx
Solution:
The auxiliary equations are:
dx/(mz – ny) = dy/(nx – lz) = dz/(ly – mx)
Self-Instructional
Material 225
Using multipliers x, y, z, we get each ratio
= (x dx + y dy + z dz)/(x(mz – ny) + y(nx – lz) + z(ly – mx))
= (x dx + y dy + z dz)/0
x2 + y2 + z2 = C1
Also by using multipliers l, m, n, we get each ratio
= (l dx + m dy + n dz)/0
lx + my + nz = C2
The general solution is
F(x² + y² + z², lx + my + nz) = 0

Example 6: Solve x (y – z)p + y(z – x)q = z(x – y)


Solution:
The auxiliary equations are:
dx/(xy – xz) = dy/(yz – yx) = dz/(zx – zy) = (dx + dy + dz)/0
dx + dy + dz = 0
On integrating, we get, x + y + z = C1 (1)

Also (dx/x)/(y – z) = (dy/y)/(z – x) = (dz/z)/(x – y) = (dx/x + dy/y + dz/z)/0
so that dx/x + dy/y + dz/z = 0

On integrating, log x + log y + log z = log C2
xyz = C2   (2)
From Equations (1) and (2), the general solution is, F(x + y + z, xyz) = 0
Example 7: Solve x2p + y2q = z2
Solution:
The auxiliary equations are:
dx/x² = dy/y² = dz/z²
From dx/x² = dy/y²,
–1/x = –1/y + C1
i.e., 1/y – 1/x = C1

Also
dy/y² = dz/z²
–1/y = –1/z + C2
i.e., 1/z – 1/y = C2

The general solution is

1 1 1 1
F – , – =0
 y x z y
Example 8: Solve ( y + z)p + (z + x)q = x + y
Solution:
The auxiliary equations are
dx/(y + z) = dy/(z + x) = dz/(x + y)
i.e., (dx – dy)/(y – x) = (dy – dz)/(z – y) = (dz – dx)/(x – z)
and each ratio also equals (dx + dy + dz)/(2(x + y + z))

Considering the first two of these and integrating, we get
(x – y)/(y – z) = C1

Considering the first and last of these, d(x – y)/(y – x) = d(x + y + z)/(2(x + y + z)); integrating,
–log (x – y) = ½ log (x + y + z) + constant
i.e., log [(x – y)²(x + y + z)] = log C2
(x – y)²(x + y + z) = C2
The general solution is
F((x – y)/(y – z), (x – y)²(x + y + z)) = 0

9.3 SOLUTION OF SOME SPECIAL TYPES OF EQUATIONS

Wave Equation
For deriving the equation governing small transverse vibrations of an elastic string,
we position the string along the x-axis, extend it to its length L and fix it at its ends
x = 0 and x = L. Distort the string and at some instant, say t = 0, release it to
vibrate. Now the problem is to find the deflection u(x, t) of the string at point x
and at any time t > 0.
To obtain u(x, t) as the result of a partial differential equation we have to
make simplifying assumptions as follows:
1. The string is homogeneous. The mass of the string per unit length is constant.
The string is perfectly elastic and hence does not offer any resistance to
bending.
2. The tension in the string is constant throughout.
3. The vibrations in the string are small so the slope at each point remains
small.
For modeling the differential equation, consider the forces working on a
small portion of the string. Let the tension be T1 and T2 at the endpoints P and Q
of the chosen portion. The horizontal components of the tension are constant
because the points on the string move vertically according to our assumption.
Hence we have,
T1 cos α = T2 cos β = T = const (9.2)

The two forces in the vertical direction are –T1 sin α and T2 sin β, the vertical components of T1 and T2. The negative sign shows that that component is directed downward. If ρ is the mass of the undeflected string per unit length and Δx is the length of the portion of the string that is undeflected, then by Newton’s second law the resultant of these two forces equals the mass ρΔx of the portion times the acceleration ∂²u/∂t²:
T2 sin β – T1 sin α = ρΔx ∂²u/∂t².
Using Equation (9.2), we can divide the above equation by T2 cos β = T1 cos α = T to get

(T2 sin β)/(T2 cos β) – (T1 sin α)/(T1 cos α) = tan β – tan α = (ρΔx/T) ∂²u/∂t²   (9.3)
Since tan α and tan β are the slopes of the string at x and x + Δx, therefore
tan α = (∂u/∂x) evaluated at x and tan β = (∂u/∂x) evaluated at x + Δx.
Dividing Equation (9.3) by Δx and substituting the values of tan α and tan β, we have
(1/Δx)[(∂u/∂x) at x + Δx – (∂u/∂x) at x] = (ρ/T) ∂²u/∂t².
As Δx approaches zero, the equation becomes the linear partial differential equation
∂²u/∂t² = c² ∂²u/∂x²,  c² = T/ρ   (9.4)
which is the one-dimensional wave equation governing the vibrations of an elastic string,
∂²u/∂t² = c² ∂²u/∂x²   (9.5)
To determine the solution we use the boundary conditions, x = 0 and
x = L,
u (0, t ) = 0, u (L, t ) = 0 for all t (9.6)
The initial velocity and initial deflection of the string determine the form of
motion. If f(x) is the original deflection and g(x) is the initial velocity, then our initial
conditions are,
u ( x,0) = f ( x ) (9.7)
and
(∂u/∂t)|t=0 = g(x)   (9.8)

Now the problem is to get the solution of Equation (9.5) satisfying the conditions (9.6)–(9.8).
By using the method of separation of variables, verify solutions of the wave
Equation (9.5) of the form
u(x, t) = F(x)G(t)   (9.9)
which are a product of two functions, F(x) and G(t). Note here that each
of these functions is dependent on one variable, i.e., either x or t. By differentiating
Equation (9.9) two times both with respect to x and t, we obtain

∂²u/∂t² = F G̈ and ∂²u/∂x² = F″G
By substituting these values in the wave equation we get
F G̈ = c²F″G.
Dividing this equation by c²FG, we get
G̈/(c²G) = F″/F.
The equations on either side are dependent on different variables. Hence
changing x will not change G and changing t will not change F and the other side
will remain constant. Thus,

G̈/(c²G) = F″/F = k
or
F″ – kF = 0   (9.10)
and
G̈ – c²kG = 0   (9.11)
The constant k is arbitrary.
Now we will find the solutions of Equations (9.10) and (9.11) so that the
equation u = FG fulfills the boundary conditions (9.6), that is,
u(0, t) = F(0)G(t) = 0, u(L, t) = F(L)G(t) = 0 for all t.
If G ≡ 0, then u ≡ 0; therefore we require G ≢ 0 and
(a) F(0) = 0, (b) F(L) = 0   (9.12)
For k = 0 the general solution of Equation (9.10) is F = ax + b, and from Equation (9.12) we obtain a = b = 0 and hence F ≡ 0, which gives
u ≡ 0. For a positive value of k, say k = μ², the general solution of Equation (9.10) is
F = Ae^(μx) + Be^(–μx),
and from Equation (9.12) we again get F ≡ 0. Hence choose k < 0, say k = –p². Then Equation (9.10) becomes
F″ + p²F = 0
The general solution of the above equation is,
F ( x ) = A cos px + B sin px .
Using conditions of Equation (9.12), we have
F (0) = A = 0 and F (L ) = B sin pL = 0
B = 0 would imply F ≡ 0, so we take sin pL = 0, giving
pL = nπ, so that p = nπ/L, where n is an integer   (9.13)
For B = 1, we get infinitely many solutions F(x) = Fₙ(x), where
Fₙ(x) = sin (nπx/L)  (n = 1, 2, ...)   (9.14)
These solutions satisfy Equation (9.12). The value of the constant k is now limited to the values k = –p² = –(nπ/L)², resulting from Equation (9.13), so Equation (9.11) becomes
G̈ + λₙ²G = 0, where λₙ = cnπ/L   (9.15)
A general solution is
Gₙ(t) = Bₙ cos λₙt + Bₙ* sin λₙt.
Hence solutions of Equation (9.5) satisfying Equation (9.6) are uₙ(x, t) = Fₙ(x)Gₙ(t), written as
uₙ(x, t) = (Bₙ cos λₙt + Bₙ* sin λₙt) sin (nπx/L)  (n = 1, 2, ...)   (9.16)
Functions of this type are called the eigenfunctions, and the values λₙ = cnπ/L are called the eigenvalues of the vibrating string. The set of λₙ is known as the spectrum.
Each uₙ represents a harmonic motion with frequency λₙ/2π = cn/2L cycles per unit time. This motion is known as the nth normal mode of the string.
The first normal mode (n = 1) is referred to as the fundamental mode, while the others are known as overtones.
A single solution uₙ(x, t) will in general not satisfy the initial conditions (9.7) and (9.8). But any sum of the uₙ is again a solution of Equation (9.5), since the equation is linear and homogeneous. To obtain a solution that satisfies Equations (9.7) and (9.8), consider the infinite series
u(x, t) = Σₙ₌₁^∞ uₙ(x, t) = Σₙ₌₁^∞ (Bₙ cos λₙt + Bₙ* sin λₙt) sin (nπx/L), where λₙ = cnπ/L   (9.17)
Therefore,
u(x, 0) = Σₙ₌₁^∞ Bₙ sin (nπx/L) = f(x)   (9.18)

Select the coefficients Bₙ so that u(x, 0) becomes the Fourier sine series of f(x). Thus,
Bₙ = (2/L) ∫₀ᴸ f(x) sin (nπx/L) dx,  n = 1, 2, ...   (9.19)

Similarly, by differentiating Equation (9.17) with respect to t and using Equation (9.8), we get
(∂u/∂t)|t=0 = [Σₙ₌₁^∞ (–Bₙλₙ sin λₙt + Bₙ*λₙ cos λₙt) sin (nπx/L)] at t = 0
= Σₙ₌₁^∞ Bₙ*λₙ sin (nπx/L) = g(x)
The Bₙ* should be selected so that for t = 0 the partial derivative ∂u/∂t becomes the Fourier sine series of g(x). Thus,
Bₙ*λₙ = (2/L) ∫₀ᴸ g(x) sin (nπx/L) dx

Here, since λₙ = cnπ/L,
Bₙ* = (2/(cnπ)) ∫₀ᴸ g(x) sin (nπx/L) dx,  n = 1, 2, ...   (9.20)
Now, let us consider the case when the initial velocity g(x) is zero. Then the Bₙ* are zero and Equation (9.17) becomes
u(x, t) = Σₙ₌₁^∞ Bₙ cos λₙt sin (nπx/L),  λₙ = cnπ/L   (9.21)
We know that
cos (cnπt/L) sin (nπx/L) = ½ [sin (nπ(x – ct)/L) + sin (nπ(x + ct)/L)]
Therefore Equation (9.21) becomes

u(x, t) = ½ Σₙ₌₁^∞ Bₙ sin (nπ(x – ct)/L) + ½ Σₙ₌₁^∞ Bₙ sin (nπ(x + ct)/L)
These two series are obtained by substituting x – ct and x + ct, respectively, for the variable x in the Fourier sine series (9.18) for f(x). Thus
u(x, t) = ½ [f*(x – ct) + f*(x + ct)]   (9.22)
where f* is the odd periodic extension of f with period 2L. By differentiating Equation (9.22) we see that u(x, t) is a solution of Equation (9.5), provided f(x) is twice differentiable on the interval 0 < x < L and has one-sided second derivatives at x = 0 and x = L which are zero. Then u(x, t) is a solution satisfying Equations (9.6)–(9.8).
If f ′( x ) and f ′′( x ) are merely piecewise continuous or if the one-sided
derivatives are not zero, then for each t there will be finitely many values of x at
which the second derivatives of u appearing in Equation (9.5) do not exist. Except
at these points the wave equation will still be satisfied. We can then regard u(x, t)
as a generalized solution.
Example 9: Determine the solution of the wave Equation (9.5) corresponding to
the following triangular initial deflection,

f(x) = (2k/L)x  if 0 < x < L/2
f(x) = (2k/L)(L – x)  if L/2 < x < L
and zero initial velocity.
Solution: Since g(x) ≡ 0, we have Bₙ* = 0 in Equation (9.17). The Bₙ are given by Equation (9.19), and thus Equation (9.17) takes the form
u(x, t) = (8k/π²) [ (1/1²) sin (πx/L) cos (πct/L) – (1/3²) sin (3πx/L) cos (3πct/L) + – ... ]
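The coefficient formula (9.19) can be cross-checked numerically for this triangular deflection. Evaluating the integral for Bₙ analytically gives Bₙ = (8k/(π²n²)) sin(nπ/2); the sketch below (our own illustration, not from the text) approximates the same integral with a composite trapezoidal sum and compares:

```python
import math

def bn_numeric(n, L, k, m=2000):
    """Approximate Bn = (2/L) * integral_0^L f(x) sin(n pi x / L) dx by the
    composite trapezoidal rule, for the triangular pluck f of Example 9."""
    def f(x):
        return 2*k/L * x if x < L/2 else 2*k/L * (L - x)
    h = L / m
    s = 0.5 * (f(0) * math.sin(0) + f(L) * math.sin(n * math.pi))
    s += sum(f(i*h) * math.sin(n * math.pi * i * h / L) for i in range(1, m))
    return 2/L * s * h

L, k = 1.0, 1.0
b1 = bn_numeric(1, L, k)   # analytic value: 8k/pi^2
b2 = bn_numeric(2, L, k)   # analytic value: 0 (even harmonics vanish)
```

The vanishing even harmonics explain why only the terms n = 1, 3, 5, ... appear in the series of Example 9.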
9.4 CHARPIT’S GENERAL METHOD OF SOLUTION AND ITS SPECIAL CASES

Charpit’s method is used to find the solution of the most general partial differential equation of order one, given by
F(x, y, z, p, q) = 0 (9.23)
The primary idea in this method is the introduction of a second partial
differential equation of order one,
f(x, y, z, p, q, a) = 0 (9.24)
containing an arbitrary constant ‘a’ and satisfying the following conditions :
1. Equations (9.23) and (9.24) can be solved to give
p = p(x, y, z, a ) and q = q( x, y, z, a )
2. The equation
dz = p( x, y, z, a )dx + q( x, y, z, a )dy (9.25)
is integrable.
When a function ‘f’ satisfying the conditions 1 and 2 has been found, the
solution of Equation (9.25) containing two arbitrary constants (including ‘a’) will
be a solution of Equation (9.23). The condition 1 will hold if

∂F ∂f
∂ (F , f ) ∂p ∂p
J= = ≠0
∂( p, q ) ∂F ∂f (9.26)
∂q ∂q

Condition 2 will hold when dz = p dx + q dy is integrable, that is, when
p ∂q/∂z + ∂q/∂x = q ∂p/∂z + ∂p/∂y   (9.27)
Substituting the values of p and q as functions of x, y and z in Equations
(9.23) and (9.24) and differentiating with respect to x
∂F/∂x + (∂F/∂p)(∂p/∂x) + (∂F/∂q)(∂q/∂x) = 0
and ∂f/∂x + (∂f/∂p)(∂p/∂x) + (∂f/∂q)(∂q/∂x) = 0
Therefore,
[(∂F/∂p)(∂f/∂q) – (∂F/∂q)(∂f/∂p)] ∂q/∂x = (∂F/∂x)(∂f/∂p) – (∂F/∂p)(∂f/∂x)
or ∂q/∂x = (1/J)[(∂F/∂x)(∂f/∂p) – (∂F/∂p)(∂f/∂x)]
Similarly ∂p/∂y = (1/J)[–(∂F/∂y)(∂f/∂q) + (∂F/∂q)(∂f/∂y)]
∂p/∂z = (1/J)[–(∂F/∂z)(∂f/∂q) + (∂F/∂q)(∂f/∂z)]
and ∂q/∂z = (1/J)[(∂F/∂z)(∂f/∂p) – (∂F/∂p)(∂f/∂z)]   (9.28)
Substituting the values from Equation (9.28) in Equation (9.27) and simplifying, we get
(–∂F/∂p)(∂f/∂x) + (–∂F/∂q)(∂f/∂y) + (–p ∂F/∂p – q ∂F/∂q)(∂f/∂z) + (p ∂F/∂z + ∂F/∂x)(∂f/∂p) + (q ∂F/∂z + ∂F/∂y)(∂f/∂q) = 0   (9.29)
Equation (9.29), being linear in the variables x, y, z, p, q and f, has the following subsidiary equations:
dx/(–∂F/∂p) = dy/(–∂F/∂q) = dz/(–p ∂F/∂p – q ∂F/∂q) = dp/(∂F/∂x + p ∂F/∂z) = dq/(∂F/∂y + q ∂F/∂z)   (9.30)
If any of the integrals of Equations (9.30) involve p or q then it is of the form
of Equation (9.24).
Then we solve Equations (9.23) and (9.24) for p and q and integrate Equation
(9.25).
Example 10: Find the complete integral of the equation
p² + q² – 2px – 2qy + 2xy = 0   (1)
Solution: The subsidiary equations are
dp/(2(y – p)) = dq/(2(x – q)) = dx/(–2(p – x)) = dy/(–2(q – y))   (2)
Hence (dp + dq)/(2y + 2x – 2p – 2q) = (dx + dy)/(2x + 2y – 2p – 2q)
dp + dq = dx + dy
Integrating, we get
p+q=x+ y+a
where a is constant
( p − x ) + (q − y ) = a (3)
Equation (1) can also be written as
(p – x)² + (q – y)² = (x – y)²
Now {(p – x) – (q – y)}² + {(p – x) + (q – y)}² = 2{(p – x)² + (q – y)²}, so that
(p – x) – (q – y) = √(2(x – y)² – a²)   (4)
Adding Equations (3) and (4),
p – x = a/2 + ½ √(2(x – y)² – a²)
or p = x + a/2 + ½ √(2(x – y)² – a²)
Similarly, subtracting Equation (4) from Equation (3),
q = y + a/2 – ½ √(2(x – y)² – a²)
Then dz = p dx + q dy gives
dz = [x + a/2 + ½ √(2(x – y)² – a²)] dx + [y + a/2 – ½ √(2(x – y)² – a²)] dy
= ½ d(x² + y²) + (a/2) d(x + y) + ½ √(2(x – y)² – a²) d(x – y)
On integrating,
z + b = (x² + y²)/2 + (a/2)(x + y) + ½ ∫ √(2U² – a²) dU
where U = x – y and b is an arbitrary constant.
Evaluating the integral,
z + b = (x² + y²)/2 + (a/2)(x + y) + ¼ (x – y) √(2(x – y)² – a²)
– (a²/(4√2)) log [(x – y) + √((x – y)² – a²/2)]

Example 11: Determine the complete integral of the equation
p² + q² – 2px – 2qy + 1 = 0   (1)
Solution: The subsidiary equations are
dx/(–(2p – 2x)) = dy/(–(2q – 2y)) = dp/(–2p) = dq/(–2q)   (2)
From the last two ratios,
dp/p = dq/q
On integrating, we get
p = aq (3)
where ‘a’ is an arbitrary constant.
Substituting the value of p from Equation (3) in Equation (1)
q²(1 + a²) – 2q(ax + y) + 1 = 0
q = [(ax + y) + √((ax + y)² – (1 + a²))]/(1 + a²)
Then dz = p dx + q dy,

which gives
dz = q(a dx + dy) = (1/(1 + a²)) [(ax + y) + √((ax + y)² – (1 + a²))] d(ax + y)
Integrating,
z + b = (1/(1 + a²)) [ ½ (ax + y)² + ½ (ax + y) √((ax + y)² – (1 + a²))
– ((1 + a²)/2) log {(ax + y) + √((ax + y)² – (1 + a²))} ]
Example 12: Find Complete Integral of the following equation
2( pq + py + qx ) + x 2 + y 2 = 0 (1)
Solution: The subsidiary equations of Equation (1) are
dx dy dp dq
= = = (2)
− (2q + 2 y ) − (2 p + 2 x ) (2q + 2 x ) (2 p + 2 y )
dp + dq + dx + dy = 0
Integrating
p + q + x + y = constant = a (say)
or ( p + x ) + (q + y ) = a (3)
Equation (1) can be written as

2( p + x )(q + y )( x − y ) = 0
2

or ( p + x )(q + y ) = − 1 (x − y )2
2

( p + x) − (q + y ) = {( p + x ) + (q + y )}2 − 4( p + x )(q + y )
= a 2 + 2( x − y )
2

Adding Equation (3) and (4),

2( p + x ) = a + a 2 + 2(x − y )
2

a 1
a 2 + 2( x − y )
2
or p = −x + +
2 2
Self-Instructional
238 Material
Subtracting Equation (4) from Equation (3) Partial Differential
Equations
a 1
a 2 + 2( x − y )
2
q = −y + −
2 2
NOTES
dz = pdx + qdy
giving
a
dz = −( xdx + ydy ) + (dx + dy ) + 1 a 2 + 2(x − y )2 d (x − y )
2 2

1 a 1
= − d (x + y ) + d ( x + y ) + a 2 + 2( x − y ) d ( x − y )
3 3 2

2 2 2
Integrating the above equation, we get
2z + b = –(x² + y²) + a(x + y) + ∫ √(a² + 2(x – y)²) d(x – y)
= –(x² + y²) + a(x + y) + ½ (x – y) √(a² + 2(x – y)²)
+ (a²/(2√2)) log [(x – y) + √(a²/2 + (x – y)²)].

Example 13: Find the complete integral of the equation
p² + q² – 2pq tanh 2y = sech² 2y
Solution: The subsidiary equations are,
dx/(–(2p – 2q tanh 2y)) = dy/(–(2q – 2p tanh 2y)) = dp/0
= dq/(–4pq sech² 2y + 4 sech² 2y tanh 2y)

dp = 0
or p = constant = a (say)
Therefore
q² – 2a tanh 2y · q + a² – sech² 2y = 0
q = a tanh 2y + √(a² tanh² 2y – a² + sech² 2y)
= a tanh 2y + √(1 – a²) sech 2y
Then dz = p dx + q dy gives

dz = a dx + (a tanh 2y + √(1 – a²) sech 2y) dy
= d(ax + (a/2) log cosh 2y) + √(1 – a²) sech 2y dy
Integrating,
z + b = ax + (a/2) log cosh 2y + √(1 – a²) ∫ 2 dy/(e^(2y) + e^(–2y))
= ax + (a/2) log cosh 2y + √(1 – a²) ∫ 2e^(2y) dy/(1 + e^(4y))
= ax + (a/2) log cosh 2y + √(1 – a²) tan⁻¹ e^(2y).
Example 14: Find the complete integral of
xp + 3yq = 2(z – x²q²)   (1)
Solution: The subsidiary equations are
dx/(–x) = dy/(–3y – 4x²q) = dp/(–p + 4xq²) = dq/(3q – 2q)
From the first and last ratios, dq/q = dx/(–x)
qx = constant = a
q = a/x
Substituting in Equation (1) we get
p = 2(z – a²)/x – 3ya/x²
Then dz = p dx + q dy gives
dz = [2(z – a²)/x – 3ya/x²] dx + (a/x) dy
Multiplying by x²,
x² dz = 2x(z – a²) dx – 3ya dx + ax dy
i.e., x⁴ d[(z – a²)/x²] = –3ay dx + ax dy
i.e., d[(z – a²)/x²] = (a/x³) dy – (3ay/x⁴) dx = d(ay/x³)

On integrating, we get (z – a²)/x² = ay/x³ + b
or z = a(a + y/x) + bx², where a and b are arbitrary constants.
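The complete integral z = a(a + y/x) + bx² can be spot-checked numerically against the equation xp + 3yq = 2(z – x²q²) of this example: pick values for a and b, estimate p and q by central differences, and confirm that the residual vanishes. A minimal sketch (function names and sample values are our own):

```python
def z(x, y, a=0.7, b=1.3):
    """The complete integral with chosen constants a, b."""
    return a * (a + y / x) + b * x**2

def residual(x, y, a=0.7, b=1.3, h=1e-5):
    p = (z(x + h, y, a, b) - z(x - h, y, a, b)) / (2 * h)  # dz/dx
    q = (z(x, y + h, a, b) - z(x, y - h, a, b)) / (2 * h)  # dz/dy
    # Residual of x*p + 3*y*q - 2*(z - x^2 * q^2); should be ~0.
    return x * p + 3 * y * q - 2 * (z(x, y, a, b) - x**2 * q**2)

r = residual(1.5, 2.0)
```

Since p = –ay/x² + 2bx and q = a/x exactly, the residual is zero up to finite-difference error.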

Check Your Progress


1. Define Lagrange’s linear differential equation.
2. What are the assumptions for solving the wave equation?
3. What is the nth normal mode of the string?
4. Where is Charpit’s method used?

9.5 PARTIAL DIFFERENTIAL EQUATIONS OF SECOND AND HIGHER ORDERS
The general form of a linear differential equation of nth order is
dⁿy/dxⁿ + P₁ dⁿ⁻¹y/dxⁿ⁻¹ + P₂ dⁿ⁻²y/dxⁿ⁻² + ... + Pₙ₋₁ dy/dx + Pₙy = Q
where P₁, P₂, ..., Pₙ and Q are functions of x alone or constants.
A linear differential equation with constant coefficients is of the form
dⁿy/dxⁿ + P₁ dⁿ⁻¹y/dxⁿ⁻¹ + P₂ dⁿ⁻²y/dxⁿ⁻² + ... + Pₙ₋₁ dy/dx + Pₙy = Q   (9.31)
where P₁, P₂, ..., Pₙ are constants and Q is a function of x.
The equation
dⁿy/dxⁿ + P₁ dⁿ⁻¹y/dxⁿ⁻¹ + P₂ dⁿ⁻²y/dxⁿ⁻² + ... + Pₙ₋₁ dy/dx + Pₙy = 0   (9.32)
is then called the Reduced Equation (R.E.) of the Equation (9.31)
If y = y₁(x), y = y₂(x), ..., y = yₙ(x) are n solutions of this reduced equation, then y = c₁y₁ + c₂y₂ + ... + cₙyₙ is also a solution of the reduced equation, where c₁, c₂, ..., cₙ are arbitrary constants.
The solutions y = y₁(x), y = y₂(x), ..., y = yₙ(x) are said to be linearly independent if the Wronskian of the functions is not zero, where the Wronskian of y₁, y₂, ..., yₙ, denoted by W(y₁, y₂, ..., yₙ), is defined as the n × n determinant whose successive rows are the functions y₁, y₂, ..., yₙ; their first derivatives y₁′, y₂′, ..., yₙ′; and so on, down to their (n – 1)th derivatives y₁⁽ⁿ⁻¹⁾, y₂⁽ⁿ⁻¹⁾, ..., yₙ⁽ⁿ⁻¹⁾.
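For instance, for y₁ = eˣ, y₂ = e²ˣ, y₃ = e³ˣ the Wronskian at x = 0 reduces to a Vandermonde determinant with value (2 − 1)(3 − 1)(3 − 2) = 2 ≠ 0, confirming independence. A small Python illustration of the definition (names are ours, not from the text):

```python
import math

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    return (m[0][0] * (m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1] * (m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2] * (m[1][0]*m[2][1] - m[1][1]*m[2][0]))

def wronskian_exp(ms, x):
    """W(e^{m1 x}, e^{m2 x}, e^{m3 x}); row r holds the r-th derivatives,
    which for exponentials are m**r * e^{m x}."""
    rows = [[m**r * math.exp(m * x) for m in ms] for r in range(3)]
    return det3(rows)

w = wronskian_exp([1.0, 2.0, 3.0], 0.0)
```

A nonzero value confirms that the three exponentials are linearly independent.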

Since the general solution of a differential equation of nth order contains n arbitrary constants, u = c₁y₁ + c₂y₂ + ... + cₙyₙ is its complete solution.
Let v be any solution of the differential Equation (9.31), then
dⁿv/dxⁿ + P₁ dⁿ⁻¹v/dxⁿ⁻¹ + P₂ dⁿ⁻²v/dxⁿ⁻² + ... + Pₙ₋₁ dv/dx + Pₙv = Q   (9.33)
Since u is a solution of Equation (9.32), we get
dⁿu/dxⁿ + P₁ dⁿ⁻¹u/dxⁿ⁻¹ + P₂ dⁿ⁻²u/dxⁿ⁻² + ... + Pₙ₋₁ du/dx + Pₙu = 0   (9.34)
Now adding Equation (9.33) and (9.34), we get
dⁿ(u + v)/dxⁿ + P₁ dⁿ⁻¹(u + v)/dxⁿ⁻¹ + P₂ dⁿ⁻²(u + v)/dxⁿ⁻² + ... + Pₙ₋₁ d(u + v)/dx + Pₙ(u + v) = Q
This shows that y = u + v is the complete solution of the Equation (9.31).
Introducing the operators D for d/dx, D² for d²/dx², D³ for d³/dx³, etc., Equation (9.31) can be written in the form
Dⁿy + P₁Dⁿ⁻¹y + P₂Dⁿ⁻²y + ... + Pₙ₋₁Dy + Pₙy = Q
or (Dⁿ + P₁Dⁿ⁻¹ + P₂Dⁿ⁻² + ... + Pₙ₋₁D + Pₙ)y = Q
or F(D)y = Q, where F(D) = Dⁿ + P₁Dⁿ⁻¹ + P₂Dⁿ⁻² + ... + Pₙ₋₁D + Pₙ
From the above discussions it is clear that the general solution of F (D)y = Q
consists of two parts:
(i) The Complementary Function (C.F.) which is the complete primitive of the
Reduced Equation (R.E.) and is of the form
y = c₁y₁ + c₂y₂ + ... + cₙyₙ, containing n arbitrary constants.
(ii) The Particular Integral (P.I.) which is a solution of F(D)y = Q containing no arbitrary constant.
Rules for Finding The Complementary Function
Let us consider the second-order linear differential equation
d²y/dx² + P₁ dy/dx + P₂y = 0   (9.35)
Let y = A emx be a trial solution of the Equation (9.35); then the auxiliary Equation
(A.E.) of Equation (9.35) is given by
m2 + P1m + P2 = 0 (9.36)
The Equation (9.36) has two roots m = m1, m = m2. We discuss the following
cases:
(i) When m1 ≠ m2, then the complementary function will be
y = c1 e^{m1 x} + c2 e^{m2 x}, where c1 and c2 are arbitrary constants.
(ii) When m1 = m2, then the complementary function will be
y = (c1 + c2 x) e^{m1 x}, where c1 and c2 are arbitrary constants.
(iii) When the auxiliary Equation (9.36) has complex roots of the form α + iβ and α − iβ, then the complementary function will be
y = e^{αx}(c1 cos βx + c2 sin βx)
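The three cases can be checked by direct substitution. The following sympy sketch (a verification aid added here; the three sample equations y″ − 3y′ + 2y = 0, y″ − 2y′ + y = 0 and y″ + 2y′ + 5y = 0 are illustrative choices, not taken from the text) confirms that each complementary function annihilates its equation:

```python
import sympy as sp

x, c1, c2 = sp.symbols('x c1 c2')

# Case (i): distinct roots m = 1, 2 of m^2 - 3m + 2 = 0
# Case (ii): repeated root m = 1 of m^2 - 2m + 1 = 0
# Case (iii): complex roots -1 +/- 2i of m^2 + 2m + 5 = 0
cases = [
    (c1*sp.exp(x) + c2*sp.exp(2*x),                 (1, -3, 2)),
    ((c1 + c2*x)*sp.exp(x),                         (1, -2, 1)),
    (sp.exp(-x)*(c1*sp.cos(2*x) + c2*sp.sin(2*x)),  (1, 2, 5)),
]
for y, (a, b, c) in cases:
    residual = a*y.diff(x, 2) + b*y.diff(x) + c*y
    assert sp.simplify(residual) == 0  # each C.F. satisfies its equation
```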
Let us consider the equation of order n:

d^n y/dx^n + P1 d^{n−1}y/dx^{n−1} + P2 d^{n−2}y/dx^{n−2} + ... + P_{n−1} dy/dx + Pn y = 0   (9.37)

Let y = A e^{mx} be a trial solution of Equation (9.37); then the auxiliary equation is

m^n + P1 m^{n−1} + P2 m^{n−2} + ... + P_{n−1} m + Pn = 0   (9.38)
Rule (1): If m1, m2, m3, ..., mn be n distinct real roots of Equation (9.38), then the general solution will be
y = c1 e^{m1 x} + c2 e^{m2 x} + c3 e^{m3 x} + ... + cn e^{mn x}
where c1, c2, c3, ..., cn are arbitrary constants.
Rule (2): If the two roots m1 and m2 of the auxiliary equation are equal, each equal to m, the corresponding part of the general solution will be (c1 + c2 x) e^{mx}; and if the three roots m3, m4, m5 are equal, the corresponding part of the solution is (c3 + c4 x + c5 x²) e^{m3 x}. If the others are distinct, the general solution will be
y = (c1 + c2 x) e^{mx} + (c3 + c4 x + c5 x²) e^{m3 x} + c6 e^{m6 x} + ... + cn e^{mn x}
Rule (3): If a pair of imaginary roots α ± iβ occurs twice, the corresponding part of the general solution will be
e^{αx}[(c1 + c2 x) cos βx + (c3 + c4 x) sin βx]
and the general solution will be
y = e^{αx}[(c1 + c2 x) cos βx + (c3 + c4 x) sin βx] + c5 e^{m5 x} + ... + cn e^{mn x}
where c1, c2, ..., cn are arbitrary constants and m5, m6, ..., mn are distinct real roots of (9.38).
Rule (4): If the two roots (real) be m and −m, the corresponding part of the general solution will be
c1 e^{mx} + c2 e^{−mx} = c1(cosh mx + sinh mx) + c2(cosh mx − sinh mx)
= c̄1 cosh mx + c̄2 sinh mx, where c̄1 = c1 + c2, c̄2 = c1 − c2,
and the general solution will be
y = c̄1 cosh mx + c̄2 sinh mx + c3 e^{m3 x} + c4 e^{m4 x} + ... + cn e^{mn x}
where c̄1, c̄2, c3, ..., cn are arbitrary constants and m3, m4, ..., mn are distinct real roots of Equation (9.38).
Rules for Finding Particular Integrals
Any particular solution of F(D) y = f(x) is known as its Particular Integral (P.I.). The P.I. of F(D) y = f(x) is symbolically written as

P.I. = (1/F(D)){f(x)}, where F(D) is the operator.

The operator 1/F(D) is defined as that operator which, when operated on f(x), gives a function φ(x) such that F(D) φ(x) = f(x),
i.e., (1/F(D)){f(x)} = φ(x) (= P.I.)
∴ F(D)[(1/F(D)){f(x)}] = f(x)
Obviously F(D) and 1/F(D) are inverse operators.

Case I: Let F(D) = D; then (1/D) f(x) = ∫ f(x) dx.

Proof: Let y = (1/D){f(x)}; operating by D, we get Dy = D·(1/D){f(x)} or Dy = f(x) or dy/dx = f(x) or dy = f(x) dx.
Integrating both sides with respect to x, we get
y = ∫ f(x) dx, since a particular integral does not contain any arbitrary constant.
Case II: Let F(D) = D − m, where m is a constant; then

(1/(D − m)){f(x)} = e^{mx} ∫ e^{−mx} f(x) dx.

Proof: Let (1/(D − m)){f(x)} = y; then operating by D − m, we get

(D − m)·(1/(D − m)){f(x)} = (D − m) y
or f(x) = dy/dx − my
or dy/dx − my = f(x), which is a first order linear differential equation, and I.F. = e^{∫ −m dx} = e^{−mx}.

Then multiplying the above equation by e^{−mx} and integrating with respect to x, we get
y e^{−mx} = ∫ f(x) e^{−mx} dx, since a particular integral does not contain any arbitrary constant,
or y = e^{mx} ∫ f(x) e^{−mx} dx.

Note: If 1/F(D) = a1/(D − m1) + a2/(D − m2) + ... + an/(D − mn), where ai and mi (i = 1, 2, ..., n) are constants, then

(1/F(D)){f(x)} = a1 e^{m1 x} ∫ f(x) e^{−m1 x} dx + a2 e^{m2 x} ∫ f(x) e^{−m2 x} dx + ... + an e^{mn x} ∫ f(x) e^{−mn x} dx
= Σ_{i=1}^{n} ai e^{mi x} ∫ f(x) e^{−mi x} dx

We now discuss methods of finding particular integrals for certain specific types of right hand functions.

Type I: F(D) y = e^{mx}, where m is a constant.
Then P.I. = (1/F(D)){e^{mx}} = e^{mx}/F(m) if F(m) ≠ 0.
If F(m) = 0, then we replace D by D + m in F(D):
P.I. = (1/F(D)){e^{mx}} = e^{mx}·(1/F(D + m)){1}

Example 15: Solve (D³ − 2D² − 5D + 6) y = (e^{2x} + 3)² + e^{3x} cosh x.

Solution: The reduced equation is
(D³ − 2D² − 5D + 6) y = 0   ...(1)
Let y = A e^{mx} be a trial solution of (1). Then the auxiliary equation is
m³ − 2m² − 5m + 6 = 0 or m³ − m² − m² + m − 6m + 6 = 0
or m²(m − 1) − m(m − 1) − 6(m − 1) = 0
or (m − 1)(m² − m − 6) = 0 or (m − 1)(m² − 3m + 2m − 6) = 0
or (m − 1)(m − 3)(m + 2) = 0 or m = 1, 3, −2
The complementary function is
y = c1 e^x + c2 e^{3x} + c3 e^{−2x}, where c1, c2, c3 are arbitrary constants.

Again (e^{2x} + 3)² + e^{3x} cosh x = e^{4x} + 6e^{2x} + 9 + e^{3x}·(e^x + e^{−x})/2
= e^{4x} + 6e^{2x} + 9e^{0·x} + e^{4x}/2 + e^{2x}/2
= (3/2)e^{4x} + (13/2)e^{2x} + 9e^{0·x}

The particular integral is
y = [1/(D³ − 2D² − 5D + 6)]{(3/2)e^{4x} + (13/2)e^{2x} + 9e^{0·x}}
= [1/((D − 1)(D − 3)(D + 2))]{(3/2)e^{4x} + (13/2)e^{2x} + 9e^{0·x}}
= (3/2)·e^{4x}/((4 − 1)(4 − 3)(4 + 2)) + (13/2)·e^{2x}/((2 − 1)(2 − 3)(2 + 2)) + 9·e^{0·x}/((0 − 1)(0 − 3)(0 + 2))
= (3/2)·e^{4x}/(3·1·6) + (13/2)·e^{2x}/(1·(−1)·4) + 9·e^{0·x}/((−1)(−3)·2)
= e^{4x}/12 − (13/8)e^{2x} + 3/2.
Hence the general solution is
y = C.F. + P.I.
= c1 e^x + c2 e^{3x} + c3 e^{−2x} + e^{4x}/12 − (13/8)e^{2x} + 3/2.
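The particular integral of Example 15 can be checked by substituting it back into the left hand side; the sympy sketch below (a verification aid only, not part of the original worked solution) confirms it reproduces (e^{2x} + 3)² + e^{3x} cosh x:

```python
import sympy as sp

x = sp.symbols('x')
# particular integral found in Example 15
yp = sp.exp(4*x)/12 - sp.Rational(13, 8)*sp.exp(2*x) + sp.Rational(3, 2)
lhs = yp.diff(x, 3) - 2*yp.diff(x, 2) - 5*yp.diff(x) + 6*yp
rhs = (sp.exp(2*x) + 3)**2 + sp.exp(3*x)*sp.cosh(x)
# rewrite cosh in exponentials before comparing
assert sp.simplify((lhs - rhs).rewrite(sp.exp)) == 0
```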
Notes: 1. When F(m) = 0 and F′(m) ≠ 0,
P.I. = (1/F(D)){e^{mx}} = x·(1/F′(D)){e^{mx}} = x e^{mx}/F′(m)

2. When F(m) = 0, F′(m) = 0 and F″(m) ≠ 0, then
P.I. = (1/F(D)){e^{mx}} = x²·(1/F″(D)){e^{mx}} = x² e^{mx}/F″(m)

and so on.
Type II: f(x) = e^{mx} V, where V is any function of x.
Here the particular integral (P.I.) of F(D) y = f(x) is
P.I. = (1/F(D)){e^{mx} V} = e^{mx}·(1/F(D + m)){V}.

Example 16: Solve (D² − 5D + 6) y = x² e^{3x}

Solution: The reduced equation is
(D² − 5D + 6) y = 0   (1)
Let y = A e^{mx} be a trial solution of Equation (1), and then the auxiliary equation is
m² − 5m + 6 = 0 or m² − 3m − 2m + 6 = 0
or m(m − 3) − 2(m − 3) = 0 or (m − 3)(m − 2) = 0
∴ m = 2, 3
The complementary function is
y = c1 e^{2x} + c2 e^{3x}, where c1 and c2 are arbitrary constants.
The particular integral is
y = [1/(D² − 5D + 6)]{x² e^{3x}} = e^{3x}·[1/((D + 3)² − 5(D + 3) + 6)]{x²}
= e^{3x}·[1/(D² + 6D + 9 − 5D − 15 + 6)]{x²} = e^{3x}·[1/(D² + D)]{x²}
= e^{3x}·(1/D)(1 + D)^{−1}{x²}
= (e^{3x}/D)(1 − D + D² − D³ + D⁴ − ...){x²}
= (e^{3x}/D){x² − 2x + 2} = e^{3x}(x³/3 − x² + 2x)
Hence the general solution is
y = C.F. + P.I.
= c1 e^{2x} + c2 e^{3x} + e^{3x}(x³/3 − x² + 2x).
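A quick sympy substitution (added as a check, not part of the original solution) confirms the particular integral of Example 16:

```python
import sympy as sp

x = sp.symbols('x')
yp = sp.exp(3*x)*(x**3/3 - x**2 + 2*x)  # particular integral from Example 16
residual = yp.diff(x, 2) - 5*yp.diff(x) + 6*yp - x**2*sp.exp(3*x)
assert sp.simplify(residual) == 0
```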
Recall: (i) (1+ x)–1 = 1 – x + x2 – x3 + x4 – x5 + ...
(ii) (1 – x)–1 = 1 + x + x2 + x3 + x4 + x5 + ...
Type III: (a) F(D) y = sin ax or cos ax, where F(D) = φ(D²).
Here P.I. = (1/F(D)){sin ax} = (1/φ(−a²)) sin ax (if φ(−a²) ≠ 0)
or P.I. = (1/F(D)){cos ax} = (1/φ(−a²)) cos ax (if φ(−a²) ≠ 0)
[Note that D² has been replaced by −a², but D has not been replaced by −a.]

(b) F(D) y = sin ax or cos ax and F(D) = φ(D², D)
Here P.I. = (1/F(D)){sin ax} = (1/φ(D², D)){sin ax} = (1/φ(−a², D)){sin ax}, if φ(−a², D) ≠ 0
or y = (1/F(D)){cos ax} = (1/φ(D², D)){cos ax} = (1/φ(−a², D)){cos ax}, if φ(−a², D) ≠ 0

(c) F(D) y = sin ax or cos ax, and 1/F(D) can be put in the form ψ(D)/φ(D²)
Here P.I. = (1/F(D)){sin ax} = (ψ(D)/φ(D²)){sin ax} = (ψ(D)/φ(−a²)){sin ax}, if φ(−a²) ≠ 0
or y = (1/F(D)){cos ax} = (ψ(D)/φ(D²)){cos ax} = (ψ(D)/φ(−a²)){cos ax}, if φ(−a²) ≠ 0

(d) F(D) y = sin ax or cos ax, F(D) = φ(D²) but φ(−a²) = 0.
Here P.I. = (1/F(D)){sin ax or cos ax} = x·(1/F′(D)){sin ax or cos ax}

Alternatively, sin ax and cos ax can be written in the form sin ax = (e^{iax} − e^{−iax})/(2i) and cos ax = (e^{iax} + e^{−iax})/2, and then the P.I. is found by the method of Type I.
Example 17: Solve (D⁴ + 2D² + 1) y = cos x.
Solution: The reduced equation is (D⁴ + 2D² + 1) y = 0
Let y = A e^{mx} be a trial solution. Then the auxiliary equation is
m⁴ + 2m² + 1 = 0 or (m² + 1)² = 0 or m = ±i, ±i
∴ C.F. = (c1 + c2 x) cos x + (c3 + c4 x) sin x, where c1, c2, c3 and c4 are arbitrary constants.

P.I. = [1/(D⁴ + 2D² + 1)]{cos x}
= x·[1/(4D³ + 4D)]{cos x}
[∵ φ(D²) = D⁴ + 2D² + 1 gives φ(−1²) = 1 − 2 + 1 = 0, and then (1/F(D)){f(x)} = x·(1/F′(D)){f(x)}]
= (x/4)·[1/(D³ + D)]{cos x} = (x/4)·x·[1/(3D² + 1)]{cos x}
= (x²/4)·cos x/(3·(−1) + 1) = −(x²/8) cos x
Hence the general solution is
y = C.F. + P.I.
= (c1 + c2 x) cos x + (c3 + c4 x) sin x − (x²/8) cos x.
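Substituting the particular integral of Example 17 back into the operator (a verification sketch added here, not part of the worked solution):

```python
import sympy as sp

x = sp.symbols('x')
yp = -x**2*sp.cos(x)/8  # particular integral from Example 17
residual = yp.diff(x, 4) + 2*yp.diff(x, 2) + yp - sp.cos(x)
assert sp.simplify(residual) == 0
```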
Example 18: Solve (D² − 4) y = sin 2x.
Solution: The reduced equation is
(D² − 4) y = 0
Let y = A e^{mx} be a trial solution, and then the auxiliary equation is
m² − 4 = 0 ∴ m = ±2
The complementary function is
y = c1 e^{2x} + c2 e^{−2x}, where c1, c2 are arbitrary constants.
The particular integral is
y = [1/(D² − 4)]{sin 2x} = sin 2x/(−2² − 4)   [Replace D² by −2²]
= −(1/8) sin 2x
The general solution is y = C.F. + P.I. = c1 e^{2x} + c2 e^{−2x} − (1/8) sin 2x.
Example 19: Solve (3D² + 2D − 8) y = 5 cos x.
Solution: The reduced equation is
(3D² + 2D − 8) y = 0
Let y = A e^{mx} be a trial solution, and then the auxiliary equation is
3m² + 2m − 8 = 0 or 3m² + 6m − 4m − 8 = 0
or 3m(m + 2) − 4(m + 2) = 0 or (m + 2)(3m − 4) = 0
or m = −2, m = 4/3
The complementary function is
y = c1 e^{−2x} + c2 e^{4x/3}, where c1 and c2 are arbitrary constants.
The particular integral is
y = [1/(3D² + 2D − 8)]{5 cos x} = 5·[1/((3D − 4)(D + 2))]{cos x}
= 5·[(3D + 4)(D − 2)/((9D² − 16)(D² − 4))]{cos x} = 5·[(3D + 4)(D − 2)/([9(−1²) − 16][−1² − 4])]{cos x}
[D² is replaced by −1² in the denominator, the ψ(D)/φ(D²) form]
= [5/((−25)(−5))]·(3D² − 6D + 4D − 8){cos x} = (1/25)·(3D² − 2D − 8){cos x}
= (1/25)[3 d²/dx²(cos x) − 2 d/dx(cos x) − 8 cos x]
= (1/25)[−3 cos x + 2 sin x − 8 cos x] = (1/25)(2 sin x − 11 cos x)
The general solution is
y = C.F. + P.I.
= c1 e^{−2x} + c2 e^{4x/3} + (1/25)(2 sin x − 11 cos x).
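A direct substitution check of Example 19's particular integral (added for verification only):

```python
import sympy as sp

x = sp.symbols('x')
yp = (2*sp.sin(x) - 11*sp.cos(x))/25  # particular integral from Example 19
residual = 3*yp.diff(x, 2) + 2*yp.diff(x) - 8*yp - 5*sp.cos(x)
assert sp.simplify(residual) == 0
```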
Type IV: F(D) y = x^n, n is a positive integer.
Here P.I. = (1/F(D)){x^n} = [F(D)]^{−1}{x^n}
In this case, [F(D)]^{−1} is expanded in a binomial series in ascending powers of D up to D^n and then each term of the expansion is operated on x^n. The terms in the expansion beyond D^n need not be considered, since the result of their operation on x^n will be zero.
Example 20: Solve D²(D² + D + 1) y = x².
Solution: The reduced equation is
D²(D² + D + 1) y = 0   (1)
Let y = A e^{mx} be a trial solution of Equation (1), and then the auxiliary equation is
m²(m² + m + 1) = 0
∴ m = 0, 0 and m = (−1 ± √(1 − 4))/2 = (−1 ± √(−3))/2 = (−1 ± √3 i)/2
The complementary function is
y = (c1 + c2 x) e^{0·x} + e^{−x/2}(c3 cos (√3/2)x + c4 sin (√3/2)x)
= c1 + c2 x + e^{−x/2}(c3 cos (√3/2)x + c4 sin (√3/2)x)
where c1, c2, c3, c4 are the arbitrary constants.
The particular integral is
y = [1/(D²(D² + D + 1))]{x²} = (1/D²)(1 + D + D²)^{−1}{x²}
= (1/D²){1 − (D + D²) + (D + D²)² − (D + D²)³ + ...}{x²}
= (1/D²){1 − (D + D²) + (D² + 2D³ + D⁴) − (D + D²)³ + ...}{x²}
= (1/D²){x² − (2x + 2) + 2 + 0}
= (1/D²){x² − 2x} = (1/D){x³/3 − x²} = x⁴/12 − x³/3
The general solution is y = C.F. + P.I.
= c1 + c2 x + e^{−x/2}(c3 cos (√3/2)x + c4 sin (√3/2)x) + x⁴/12 − x³/3.
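The particular integral of Example 20 can be confirmed by applying the operator D²(D² + D + 1) directly (a verification sketch added here):

```python
import sympy as sp

x = sp.symbols('x')
yp = x**4/12 - x**3/3  # particular integral from Example 20
# D^2(D^2 + D + 1) = D^4 + D^3 + D^2 applied to yp
lhs = yp.diff(x, 4) + yp.diff(x, 3) + yp.diff(x, 2)
assert sp.expand(lhs - x**2) == 0
```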

Example 21: Solve (D² + 4) y = x sin²x.

Solution: The reduced equation is
(D² + 4) y = 0
The trial solution y = A e^{mx} gives the auxiliary equation as
m² + 4 = 0, ∴ m = ±2i
The complementary function is y = c1 cos 2x + c2 sin 2x
The particular integral is y = [1/(D² + 4)]{x sin²x}
= [1/(D² + 4)]{x(1 − cos 2x)/2} = [1/(D² + 4)]{x/2} − [1/(D² + 4)]{(x/2) cos 2x}
= [1/(D² + 4)]{x/2} − [1/(D² + 4)]{(x/4)(e^{2ix} + e^{−2ix})}
For the first part,
[1/(D² + 4)]{x/2} = (1/4)(1 + D²/4)^{−1}{x/2} = (1/4)(1 − D²/4 + ...){x/2} = x/8
For the exponential parts, shifting D to D ± 2i,
[1/(D² + 4)]{x e^{2ix}} = e^{2ix}·[1/((D + 2i)² + 4)]{x} = e^{2ix}·[1/(D² + 4iD)]{x}
= e^{2ix}·(1/(4iD))(1 + D/4i)^{−1}{x} = e^{2ix}·(1/(4iD)){x − 1/4i} = e^{2ix}·(1/4i)(x²/2 − x/4i)
and similarly
[1/(D² + 4)]{x e^{−2ix}} = e^{−2ix}·(−1/4i)(x²/2 + x/4i)
Therefore
P.I. = x/8 − (1/4)[(x²/2)·(e^{2ix} − e^{−2ix})/4i − (x/16i²)(e^{2ix} + e^{−2ix})]
= x/8 − (1/4)[(x²/2)·(sin 2x)/2 + (x/16)·2 cos 2x]
= x/8 − (x²/16) sin 2x − (x/32) cos 2x
Hence the general solution is y = C.F. + P.I.
= c1 cos 2x + c2 sin 2x + x/8 − (x²/16) sin 2x − (x/32) cos 2x.
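Because the complex-exponential bookkeeping in Example 21 is easy to get wrong, a substitution check is worthwhile (a verification sketch added here, not part of the worked solution):

```python
import sympy as sp

x = sp.symbols('x')
yp = x/8 - x**2*sp.sin(2*x)/16 - x*sp.cos(2*x)/32  # P.I. from Example 21
residual = yp.diff(x, 2) + 4*yp - x*sp.sin(x)**2
# compare in exponential form so the double-angle identities cancel cleanly
assert sp.simplify(residual.rewrite(sp.exp)) == 0
```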
Example 22: Solve (D⁴ + D³ − 3D² − 5D − 2) y = 3x e^{−x}.
Solution: The reduced equation is
(D⁴ + D³ − 3D² − 5D − 2) y = 0   (1)
The trial solution y = A e^{mx} gives the auxiliary equation as
m⁴ + m³ − 3m² − 5m − 2 = 0
or m⁴ + m³ − 3m² − 3m − 2m − 2 = 0
or m³(m + 1) − 3m(m + 1) − 2(m + 1) = 0
or (m + 1)(m³ − 3m − 2) = 0 or (m + 1)(m³ + m² − m² − m − 2m − 2) = 0
or (m + 1){m²(m + 1) − m(m + 1) − 2(m + 1)} = 0
or (m + 1)(m + 1)(m² − m − 2) = 0
or (m + 1)²(m² − 2m + m − 2) = 0
or (m + 1)²(m + 1)(m − 2) = 0
∴ m = −1, −1, −1, 2
The complementary function is y = (c1 + c2 x + c3 x²) e^{−x} + c4 e^{2x}.
The particular integral is
y = [1/((D + 1)³(D − 2))]{3 e^{−x} x}
= 3 e^{−x}·[1/((D − 1 + 1)³(D − 1 − 2))]{x} = 3 e^{−x}·[1/(D³(D − 3))]{x}
= 3 e^{−x}·(1/D³)·(−1/3)(1 − D/3)^{−1}{x} = −e^{−x}·(1/D³)(1 + D/3 + D²/9 + ...){x}
= −e^{−x}·(1/D³){x + 1/3} = −e^{−x}·(1/D²){x²/2 + x/3} = −e^{−x}·(1/D){x³/6 + x²/6}
= −e^{−x}(x⁴/24 + x³/18)
The general solution is y = C.F. + P.I.
= (c1 + c2 x + c3 x²) e^{−x} + c4 e^{2x} − e^{−x}(x⁴/24 + x³/18).

Type V: (a) F(D) y = xV, where V is a function of x.

Here P.I. = (1/F(D)){xV} = [x − F′(D)·(1/F(D))]·(1/F(D)){V}.
Example 23: Solve (D² + 9) y = x sin x.
Solution: The reduced equation is (D² + 9) y = 0   (1)
The trial solution y = A e^{mx} gives the auxiliary equation as
m² + 9 = 0 or m = ±3i
∴ C.F. = c1 cos 3x + c2 sin 3x, where c1 and c2 are arbitrary constants,
and P.I. = (1/F(D)){x sin x}, where F(D) = D² + 9
= [x − F′(D)·(1/F(D))]·(1/F(D)){sin x}
= [x − 2D/(D² + 9)]·[1/(D² + 9)]{sin x}
= [x − 2D/(D² + 9)]·sin x/(−1² + 9) = [x − 2D/(D² + 9)]·(sin x)/8
= (x sin x)/8 − (1/4)·[1/(−1² + 9)]·D{sin x} = (x sin x)/8 − (1/32) cos x
Hence the general solution is
y = C.F. + P.I. = c1 cos 3x + c2 sin 3x + (x sin x)/8 − (1/32) cos x
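A substitution check of Example 23 (added for verification only):

```python
import sympy as sp

x = sp.symbols('x')
yp = x*sp.sin(x)/8 - sp.cos(x)/32  # particular integral from Example 23
residual = yp.diff(x, 2) + 9*yp - x*sp.sin(x)
assert sp.simplify(residual) == 0
```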

(b) F(D) y = x^n V, where V is any function of x.

Here P.I. = (1/F(D)){x^n V} = [x − F′(D)·(1/F(D))]^n·(1/F(D)){V}

Example 24: Solve (D² − 1) y = x² sin x

Solution: The reduced equation is (D² − 1) y = 0   (1)
Let y = A e^{mx} be a trial solution. Then the auxiliary equation is
m² − 1 = 0 or m = ±1
∴ C.F. = c1 e^x + c2 e^{−x}, where c1 and c2 are arbitrary constants.
P.I. = (1/F(D)){x² sin x}, where F(D) = D² − 1
= [x − F′(D)·(1/F(D))]²·(1/F(D)){sin x} = [x − 2D/(D² − 1)]²·[1/(D² − 1)]{sin x}
= [x − 2D/(D² − 1)]·[x − 2D/(D² − 1)]·sin x/(−1² − 1)
= [x − 2D/(D² − 1)]·[x − 2D/(D² − 1)]{−(1/2) sin x}
= [x − 2D/(D² − 1)]{−(x/2) sin x + [1/(D² − 1)]{cos x}}
= [x − 2D/(D² − 1)]{−(x/2) sin x − (1/2) cos x}
= −(x²/2) sin x − (x/2) cos x + [1/(D² − 1)]{D(x sin x + cos x)}
= −(x²/2) sin x − (x/2) cos x + [1/(D² − 1)]{sin x + x cos x − sin x}
= −(x²/2) sin x − (x/2) cos x + [1/(D² − 1)]{x cos x}
Again [1/(D² − 1)]{x cos x} = [x − 2D/(D² − 1)]·[1/(D² − 1)]{cos x}
= [x − 2D/(D² − 1)]{cos x/(−1 − 1)}
= −(x/2) cos x + [1/(D² − 1)]{−sin x}
= −(x/2) cos x + (1/2) sin x
∴ P.I. = −(x²/2) sin x − (x/2) cos x − (x/2) cos x + (1/2) sin x
= −(x²/2) sin x − x cos x + (1/2) sin x
Hence the general solution is
y = C.F. + P.I. = c1 e^x + c2 e^{−x} − (x²/2) sin x − x cos x + (1/2) sin x.
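The signs in Example 24's repeated operator manipulation are delicate, so a substitution check is useful (added for verification only):

```python
import sympy as sp

x = sp.symbols('x')
yp = -x**2*sp.sin(x)/2 - x*sp.cos(x) + sp.sin(x)/2  # P.I. from Example 24
residual = yp.diff(x, 2) - yp - x**2*sp.sin(x)
assert sp.simplify(residual) == 0
```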
9.5.1 Classification of Linear Partial Differential Equations of Second Order
Consider the following linear partial differential equation of the second order in
two independent variables,

A ∂²u/∂x² + B ∂²u/∂x∂y + C ∂²u/∂y² + D ∂u/∂x + E ∂u/∂y + Fu = G

where A, B, C, D, E, F and G are functions of x and y.
This equation, when converted to a quasi-linear partial differential equation, takes the form

A ∂²u/∂x² + B ∂²u/∂x∂y + C ∂²u/∂y² + f(x, y, u, ∂u/∂x, ∂u/∂y) = 0

These equations are said to be of:
1. Elliptic type if B² − 4AC < 0
2. Parabolic type if B² − 4AC = 0
3. Hyperbolic type if B² − 4AC > 0
Let us consider some examples to understand this:
(i) ∂²u/∂x² − 2x ∂²u/∂x∂y + x² ∂²u/∂y² − 2 ∂u/∂y = 0
i.e., u_xx − 2x u_xy + x² u_yy − 2u_y = 0
Comparing it with the general equation we find that,
A = 1, B = −2x, C = x²
Therefore
B² − 4AC = (−2x)² − 4x² = 0, ∀ x and y
So the equation is parabolic at all points.
(ii) y² u_xx + x² u_yy = 0
Comparing it with the general equation we get,
A = y², B = 0, C = x²
Therefore
B² − 4AC = 0 − 4x²y² < 0, ∀ x ≠ 0 and y ≠ 0
So the equation is elliptic at all points (x ≠ 0, y ≠ 0).
(iii) x² u_xx − y² u_yy = 0
Comparing it with the general equation we find that,
A = x², B = 0, C = −y²
Therefore
B² − 4AC = 0 + 4x²y² > 0, ∀ x ≠ 0 and y ≠ 0
So the equation is hyperbolic at all points (x ≠ 0, y ≠ 0).
The following three are the most commonly used partial differential equations of the second order:
1. Laplace equation
∂²u/∂x² + ∂²u/∂y² = 0
This equation is of elliptic type.
2. One-dimensional heat flow equation
∂u/∂t = c² ∂²u/∂x²
This equation is of parabolic type.
3. One-dimensional wave equation
∂²u/∂t² = c² ∂²u/∂x²
This is a hyperbolic equation.
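The discriminant test above is mechanical enough to automate. The following sympy sketch (an illustrative addition; the helper name `classify` is not from the text) applies it to the worked examples, assuming x and y nonzero so the sign of B² − 4AC is determined:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True, nonzero=True)

def classify(A, B, C):
    """Classify A*u_xx + B*u_xy + C*u_yy + (lower order) = 0 by sign of B^2 - 4AC."""
    disc = sp.simplify(B**2 - 4*A*C)
    if disc.is_zero:
        return 'parabolic'
    if disc.is_negative:
        return 'elliptic'
    if disc.is_positive:
        return 'hyperbolic'
    return 'type varies with the point'

assert classify(1, -2*x, x**2) == 'parabolic'    # example (i)
assert classify(y**2, 0, x**2) == 'elliptic'     # example (ii)
assert classify(x**2, 0, -y**2) == 'hyperbolic'  # example (iii)
assert classify(1, 0, 1) == 'elliptic'           # Laplace equation
```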

9.6 HOMOGENEOUS AND NON-HOMOGENEOUS EQUATIONS WITH CONSTANT COEFFICIENTS

Homogeneous Linear Equations with Constant Coefficients


Let f(D, D′)z = V(x, y)   (9.39)
Then if
f(D, D′) = A0 D^n + A1 D^{n−1} D′ + A2 D^{n−2} D′² + ... + An D′^n   (9.40)
where A0, A1, A2, ..., An are constants, then Equation (9.39) is known as a homogeneous equation and takes the form
(A0 D^n + A1 D^{n−1} D′ + A2 D^{n−2} D′² + ... + An D′^n) z = V(x, y)   (9.41)
Complementary Function
Consider the equation,
(A0 D^n + A1 D^{n−1} D′ + A2 D^{n−2} D′² + ... + An D′^n) z = 0   (9.42)
Let
z = φ(y + mx)   (9.43)
be a solution of Equation (9.42).
Now D^r z = m^r φ^(r)(y + mx),
D′^s z = φ^(s)(y + mx),
and D^r D′^s z = m^r φ^(r+s)(y + mx)
Therefore, on substituting Equation (9.43) in Equation (9.42), we get
(A0 m^n + A1 m^{n−1} + A2 m^{n−2} + ... + An) φ^(n)(y + mx) = 0
which will be satisfied if
A0 m^n + A1 m^{n−1} + A2 m^{n−2} + ... + An = 0   (9.44)
Equation (9.44) is known as the Auxiliary Equation.
Let m1, m2, ..., mn be the roots of the Equation (9.44).
Then the following three cases arise:
Case I: Roots m1, m2, ..., mn are distinct.
Part of C.F. corresponding to m = m1 is
z = φ1(y + m1 x)
where φ1 is an arbitrary function.
Part of C.F. corresponding to m = m2 is
z = φ2(y + m2 x)
where φ2 is any arbitrary function.
Now since our equation is linear, the sum of solutions is also a solution. Therefore, our complementary function becomes,
C.F. = φ1(y + m1 x) + φ2(y + m2 x) + ... + φn(y + mn x)
Case II: Roots are imaginary.
Let the pair of complex roots of the Equation (9.44) be
u ± iv
then the corresponding part of the complementary function is
z = φ1(y + ux + ivx) + φ2(y + ux − ivx)   …(9.45)
Let y + ux = P and vx = Q
Then z = φ1(P + iQ) + φ2(P − iQ)
If we write
φ1 = (1/2)(ξ1 + iξ2)
and
φ2 = (1/2)(ξ1 − iξ2)
then substituting these values in Equation (9.45), we get
z = (1/2) ξ1(P + iQ) + (1/2) iξ2(P + iQ) + (1/2) ξ1(P − iQ) − (1/2) iξ2(P − iQ)
or
z = (1/2){ξ1(P + iQ) + ξ1(P − iQ)} + (1/2) i{ξ2(P + iQ) − ξ2(P − iQ)}
Case III: Roots are repeated.
Let m be the repeated root of Equation (9.44).
Then we have,
(D − mD′)(D − mD′)z = 0   (9.46)
Putting (D − mD′)z = U, we get
(D − mD′)U = 0   (9.47)
Since the equation is linear, it has the following subsidiary equations,
dx/1 = dy/(−m) = dU/0   (9.48)
Two independent integrals of Equation (9.48) are
y + mx = constant
and U = constant
∴ U = φ(y + mx)
is a solution of Equation (9.47), where φ is an arbitrary function.
Substituting in Equation (9.46),
∂z/∂x − m ∂z/∂y = φ(y + mx)   (9.49)
which has the following subsidiary equations,
dx/1 = dy/(−m) = dz/φ(y + mx)
Two independent integrals of Equation (9.49) are
y + mx = constant
and z = xφ(y + mx) + constant
Therefore z = xφ(y + mx) + ψ(y + mx)   (9.50)
is a solution of Equation (9.49), where ψ is an arbitrary function.
Equation (9.50) is the part of C.F. corresponding to a two times repeated root.
In general, if the root m is repeated r times, the corresponding part of C.F. is
z = x^{r−1} φ1(y + mx) + x^{r−2} φ2(y + mx) + ... + φr(y + mx)
where φ1, φ2, ..., φr are arbitrary functions.

Example 25: Solve the equation, (D³ − 3D²D′ + 3DD′² − D′³)z = 0.

Solution: The A.E. of the given equation is
m³ − 3m² + 3m − 1 = 0
or (m − 1)³ = 0
∴ m = 1, 1, 1
∴ C.F. = x²φ1(y + x) + xφ2(y + x) + φ3(y + x).
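With concrete choices for the arbitrary functions (sin, exp and a cubic here — illustrative picks, not from the text), sympy confirms that the C.F. of Example 25 satisfies the equation:

```python
import sympy as sp

x, y = sp.symbols('x y')
# a sample instance of x^2*phi1(y+x) + x*phi2(y+x) + phi3(y+x)
z = x**2*sp.sin(y + x) + x*sp.exp(y + x) + (y + x)**3
residual = (z.diff(x, 3) - 3*z.diff(x, 2, y, 1)
            + 3*z.diff(x, 1, y, 2) - z.diff(y, 3))
assert sp.simplify(residual) == 0
```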
Non-Homogeneous Linear Equations with Constant Coefficients
If all the terms on the left hand side of Equation (9.39) are not of the same degree, then Equation (9.39) is said to be a non-homogeneous equation. The equation is said to be reducible if the symbolic function f(D, D′) can be resolved into factors each of which is of first degree in D and D′, and irreducible otherwise.
For example, the equation
f(D, D′)z = (D² − D′² + 2D + 1)z = (D + D′ + 1)(D − D′ + 1)z = x² + xy
is reducible, while the equation
f(D, D′)z = (DD′ + D′³)z = D′(D + D′²)z = cos(x + 2y)
is irreducible.
Reducible Non-Homogeneous Equations
In the equation,
f(D, D′) = (a1 D + b1 D′ + c1)(a2 D + b2 D′ + c2)...(an D + bn D′ + cn)   …(9.51)
where the a’s, b’s and c’s are constants.
The complementary function takes the form
(a1 D + b1 D′ + c1)(a2 D + b2 D′ + c2)...(an D + bn D′ + cn)z = 0   (9.52)
Any solution of the equation given by
(ai D + bi D′ + ci)z = 0   (9.53)
is a solution of the Equation (9.52).
Forming the Lagrange’s subsidiary equations of Equation (9.53),
dx/ai = dy/bi = dz/(−ci z)   (9.54)
The two independent integrals of Equation (9.54) are
bi x − ai y = constant
and z = constant·e^{−(ci/ai)x}, if ai ≠ 0
or z = constant·e^{−(ci/bi)y}, if bi ≠ 0
Therefore
z = e^{−(ci/ai)x} φi(bi x − ai y), if ai ≠ 0
or
z = e^{−(ci/bi)y} ψi(bi x − ai y), if bi ≠ 0
is the general solution of Equation (9.53). Here φi and ψi are arbitrary functions.
Example 26: Solve the differential equation
(D² − D′² − 3D + 3D′)z = 0.
Solution: The equation can also be written as
(D − D′)(D + D′ − 3)z = 0
∴ C.F. = φ1(y + x) + e^{3x} φ2(x − y)
or
ψ1(y + x) + e^{3y} ψ2(x − y)
When the Factors are Repeated
Let the factor be repeated two times and be given by,
(aD + bD′ + c)
Consider the equation
(aD + bD′ + c)(aD + bD′ + c)z = 0   (9.55)
Put (aD + bD′ + c)z = U   (9.56)
Then the Equation (9.55) reduces to
(aD + bD′ + c)U = 0   (9.57)
The general solution of Equation (9.57) is
U = e^{−(c/a)x} φ(bx − ay), if a ≠ 0   (9.58)
or
U = e^{−(c/b)y} ψ(bx − ay), if b ≠ 0   (9.59)
Substituting Equation (9.58) in Equation (9.56), we obtain
(aD + bD′ + c)z = e^{−(c/a)x} φ(bx − ay)   (9.60)
The subsidiary equations are,
dx/a = dy/b = dz/(e^{−(c/a)x} φ(bx − ay) − cz)   (9.61)
The two independent integrals of Equation (9.61) are given by
bx − ay = constant = λ   (9.62)
and
dz/dx + (c/a)z = (1/a) e^{−(c/a)x} φ(bx − ay) = (1/a) e^{−(c/a)x} φ(λ)   (9.63)
The Equation (9.63), being an ordinary linear equation, has the following solution:
z e^{(c/a)x} = (1/a) x φ(λ) + constant
or z e^{(c/a)x} = (1/a) x φ(bx − ay) + constant
Therefore, the general solution of Equation (9.60) is
z = (x/a) e^{−(c/a)x} φ(bx − ay) + φ1(bx − ay) e^{−(c/a)x}
= e^{−(c/a)x}{x φ2(bx − ay) + φ1(bx − ay)}   …(9.64)
where φ1 and φ2 are arbitrary functions.
Similarly, from Equations (9.59) and (9.56), we get
z = e^{−(c/b)y}{y ψ2(bx − ay) + ψ1(bx − ay)}
where ψ1 and ψ2 are arbitrary functions.
In general, for an r times repeated factor (aD + bD′ + c),
z = e^{−(c/a)x} Σ_{i=1}^{r} x^{i−1} φi(bx − ay), if a ≠ 0
or
z = e^{−(c/b)y} Σ_{i=1}^{r} y^{i−1} ψi(bx − ay), if b ≠ 0
where φ1, φ2, ..., φr and ψ1, ψ2, ..., ψr are arbitrary functions.


Example 27: Solve the differential equation,
(2D − D′ + 4)(D + 2D′ + 1)²z = 0
Solution: The C.F. corresponding to the factor (2D − D′ + 4) is
e^{4y} φ(x + 2y)
The C.F. corresponding to the factor (D + 2D′ + 1)² is
e^{−x}{x φ2(2x − y) + φ1(2x − y)}
Hence C.F. = e^{4y} φ(x + 2y) + e^{−x}{x φ2(2x − y) + φ1(2x − y)}


Irreducible Non-Homogeneous Equations
For solving the equation
f(D, D′)z = 0   (9.65)
substitute z = c e^{ax+by}, where a, b and c are constants   (9.66)
Now D^r z = c a^r e^{ax+by},
D′^s z = c b^s e^{ax+by}
and D^r D′^s z = c a^r b^s e^{ax+by}
Substituting Equation (9.66) in Equation (9.65), we get,
c f(a, b) e^{ax+by} = 0
which will hold if
f(a, b) = 0   (9.67)
For any selected value of a (or b), Equation (9.67) gives one or more values of b (or a). Thus there exist infinitely many pairs of numbers (ai, bi) satisfying Equation (9.67).
Thus
z = Σ_{i=1}^{∞} ci e^{ai x + bi y}   (9.68)
where f(ai, bi) = 0 ∀ i, is a solution of the Equation (9.65).
If
f(D, D′) = (D + hD′ + k) g(D, D′)   (9.69)
then any pair (a, b) such that
a + hb + k = 0   (9.70)
satisfies Equation (9.67). There are an infinite number of such solutions.
From Equation (9.70),
a = −(hb + k)
Thus
z = Σ_{i=1}^{∞} ci e^{−(h bi + k)x + bi y}
= e^{−kx} Σ_{i=1}^{∞} ci e^{bi(y − hx)}   (9.71)
is a part of C.F. corresponding to a linear factor (D + hD′ + k) given in Equation (9.69).
Equation (9.71) is equivalent to
e^{−kx} φ(y − hx)
where φ is an arbitrary function.
Equation (9.68) is the general solution if f(D, D′) has no linear factor; otherwise the general solution will be composed partly of arbitrary functions and partly of arbitrary constants.
Example 28: Solve the differential equation (2D⁴ + 3D²D′ + D′²)z = 0.
Solution: The given equation is equivalent to
(2D² + D′)(D² + D′)z = 0
The part of C.F. corresponding to the first factor is
Σ_{i=1}^{∞} ci e^{ai x + bi y}
where ai and bi are related by
2ai² + bi = 0
or bi = −2ai²,
i.e., Σ_{i=1}^{∞} ci e^{ai(x − 2ai y)}.
Similarly, the part of C.F. corresponding to the second factor is
Σ_{i=1}^{∞} di e^{ei(x − ei y)}
where ei and di are arbitrary constants.
∴ C.F. = Σ_{i=1}^{∞} ci e^{ai(x − 2ai y)} + Σ_{i=1}^{∞} di e^{ei(x − ei y)}

Particular Integral
In the equation,
f(D, D′)z = V(x, y)   …(9.72)
f(D, D′) is a non-homogeneous function of D and D′.
P.I. = [1/f(D, D′)] V(x, y)   …(9.73)
Here, if V(x, y) is of the form e^{ax+by}, where a and b are constants, then we use the following theorem to evaluate the particular integral:
Theorem 9.1: If f(a, b) ≠ 0, then
[1/f(D, D′)] e^{ax+by} = e^{ax+by}/f(a, b)
Proof: By differentiation,
D^r e^{ax+by} = a^r e^{ax+by},
D′^s e^{ax+by} = b^s e^{ax+by}
and D^r D′^s e^{ax+by} = a^r b^s e^{ax+by}
∴ f(D, D′) e^{ax+by} = f(a, b) e^{ax+by}
Operating by 1/f(D, D′),
e^{ax+by} = f(a, b)·[1/f(D, D′)] e^{ax+by}
Dividing the above equation by f(a, b),
(1/f(a, b)) e^{ax+by} = [1/f(D, D′)] e^{ax+by}
or [1/f(D, D′)] e^{ax+by} = e^{ax+by}/f(a, b)

Example 29: Solve the equation (D² − D′² − 3D + 3D′)z = e^{x−2y}

Solution: The given equation is equivalent to
(D − D′)(D + D′ − 3)z = e^{x−2y}
∴ C.F. = φ1(y + x) + e^{3x} φ2(y − x)
P.I. = [1/((D − D′)(D + D′ − 3))] e^{x−2y}
= e^{x−2y}/((1 − (−2))(1 + (−2) − 3)) = −(1/12) e^{x−2y}
Therefore, z = φ1(y + x) + e^{3x} φ2(y − x) − (1/12) e^{x−2y}
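A substitution check of Example 29's particular integral (a verification sketch added here):

```python
import sympy as sp

x, y = sp.symbols('x y')
zp = -sp.exp(x - 2*y)/12  # particular integral from Example 29
residual = (zp.diff(x, 2) - zp.diff(y, 2) - 3*zp.diff(x)
            + 3*zp.diff(y) - sp.exp(x - 2*y))
assert sp.simplify(residual) == 0
```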
But in case V(x, y) is of the form e^{ax+by} φ(x, y), where a and b are constants, then the following theorem is used to evaluate the particular integral:

Theorem 9.2: If φ(x, y) is any function, then
[1/f(D, D′)] e^{ax+by} φ(x, y) = e^{ax+by} [1/f(D + a, D′ + b)] φ(x, y)
Proof: From Leibnitz’s theorem for successive differentiation, we have
D^r{e^{ax+by} φ(x, y)} = e^{ax+by}{D^r φ(x, y) + rC1 a D^{r−1} φ(x, y) + rC2 a² D^{r−2} φ(x, y) + ... + rCr a^r φ(x, y)}
= e^{ax+by}{D^r + rC1 a D^{r−1} + rC2 a² D^{r−2} + ... + rCr a^r} φ(x, y)
= e^{ax+by} (D + a)^r φ(x, y).
Similarly
D′^s{e^{ax+by} φ(x, y)} = e^{ax+by} (D′ + b)^s φ(x, y)
and D^r D′^s{e^{ax+by} φ(x, y)} = D^r[e^{ax+by} (D′ + b)^s φ(x, y)]
= e^{ax+by} (D + a)^r (D′ + b)^s φ(x, y)
So f(D, D′){e^{ax+by} φ(x, y)} = e^{ax+by} f(D + a, D′ + b) φ(x, y)   (9.74)
Put f(D + a, D′ + b) φ(x, y) = ψ(x, y)
∴ φ(x, y) = [1/f(D + a, D′ + b)] ψ(x, y)
Substituting in Equation (9.74), we get
f(D, D′){e^{ax+by} [1/f(D + a, D′ + b)] ψ(x, y)} = e^{ax+by} ψ(x, y)
Operating on the equation by 1/f(D, D′),
e^{ax+by} [1/f(D + a, D′ + b)] ψ(x, y) = [1/f(D, D′)]{e^{ax+by} ψ(x, y)}
Replacing ψ(x, y) by φ(x, y), we have
[1/f(D, D′)]{e^{ax+by} φ(x, y)} = e^{ax+by} [1/f(D + a, D′ + b)] φ(x, y)
Example 30: Solve (D² − D′² − 3D + 3D′)z = xy + e^{x+2y}.

Solution: The given equation is equivalent to,
(D − D′)(D + D′ − 3)z = xy + e^{x+2y}
C.F. = φ1(y + x) + e^{3x} φ2(x − y)
P.I. = [1/((D − D′)(D + D′ − 3))] xy + [1/((D − D′)(D + D′ − 3))] e^{x+2y}
= −(1/3D)(1 − D′/D)^{−1}(1 − (D + D′)/3)^{−1} xy + e^{x+2y} [1/((D + 1 − D′ − 2)(D + 1 + D′ + 2 − 3))]·1
= −(1/3D)[1 + D′/D + D′²/D² + ...][1 + (D + D′)/3 + (D + D′)²/9 + ...] xy + e^{x+2y} [1/((D − D′ − 1)(D + D′))]·1
= −(1/3D)[xy + (x + y)/3 + 2/9 + x²/2 + x/3] − x e^{x+2y}
= −(1/3)[x²y/2 + x²/3 + x³/6 + xy/3 + 2x/9] − x e^{x+2y}.
 

Example 31: Solve (D² − DD′ + D′ − 1)z = cos(x + 2y) + e^y + xy + 1.
Solution: The equation is equivalent to
(D − 1)(D − D′ + 1)z = cos(x + 2y) + e^y + xy + 1
Complementary Function = e^x φ1(y) + e^y φ2(x + y).
The particular integral corresponding to cos(x + 2y) is
[1/(D² − DD′ + D′ − 1)] cos(x + 2y)
= [1/((−1) − (−2) + D′ − 1)] cos(x + 2y)
= (1/D′) cos(x + 2y)
= (1/2) sin(x + 2y)
Corresponding to e^y, the particular integral is
[1/(D² − DD′ + D′ − 1)] e^y
= [1/(D′ − 1)] e^y   [putting D = 0]
= e^y·(1/D′)·1   [shifting D′ to D′ + 1]
= y e^y.
The particular integral corresponding to the part (xy + 1) is
[1/((D − 1)(D − D′ + 1))](xy + 1)
= −(1 − D)^{−1}{1 + (D − D′)}^{−1}(xy + 1)
= −{1 + D + D² + ...}{1 − (D − D′) + (D − D′)² − ...}(xy + 1)
= −{1 + D + D² + ...}{(xy + 1) − (y − x) − 2}
= −{1 + D + D² + ...}(xy − y + x − 1)
= −{(xy − y + x − 1) + (y + 1)}
= −(xy + x)
= −x(y + 1)
∴ z = e^x φ1(y) + e^y φ2(x + y) + (1/2) sin(x + 2y) + y e^y − x(y + 1)

9.7 PARTIAL DIFFERENTIAL EQUATIONS REDUCIBLE TO EQUATIONS WITH CONSTANT COEFFICIENTS

The equation,
f(xD, yD′)z = V(x, y)
where f(xD, yD′) = Σ_{r,s} c_{rs} x^r y^s D^r D′^s, c_{rs} = constant,   (9.75)
is reduced to a linear partial differential equation with constant coefficients by the following substitution:
u = log x, v = log y   (9.76)
By the substitution of Equation (9.76),
xD = x ∂/∂x = ∂/∂u = d (say)
And
x²D² = x² D((1/x) ∂/∂u)
= x²(−(1/x²) ∂/∂u + (1/x²) ∂²/∂u²)
= ∂²/∂u² − ∂/∂u
= d(d − 1)
Therefore,
x^r D^r = d(d − 1)(d − 2).....(d − (r − 1))
and y^s D′^s = d′(d′ − 1)(d′ − 2)...(d′ − (s − 1))
Hence f(xD, yD′) = Σ c_{rs} d(d − 1).....(d − (r − 1)) d′(d′ − 1).....(d′ − (s − 1))
= g(d, d′)
Here the coefficients in g(d, d′) are constants.
Thus by the substitution, Equation (9.75) is reduced to
g(d, d′)z = V(e^u, e^v)
or g(d, d′)z = U(u, v)   (9.77)
Equation (9.77) can be solved by the methods that have been described for solving partial differential equations with constant coefficients.
Example 32: Solve the differential equation,
$$(x^2D^2 - 4xyDD' + 4y^2D'^2 + 6yD')z = x^3y^4$$
Solution: Put u = log x, v = log y. The given equation reduces to
$$\{d(d-1) - 4dd' + 4d'(d'-1) + 6d'\}z = e^{3u+4v}$$
or
$$(d-2d')(d-2d'-1)z = e^{3u+4v}$$
The complementary function is
$$\varphi_1(2u+v) + e^u\varphi_2(2u+v) = \varphi_1(\log x^2y) + x\varphi_2(\log x^2y) = \psi_1(x^2y) + x\psi_2(x^2y)$$
And the particular integral is
$$\frac{1}{(d-2d')(d-2d'-1)}e^{3u+4v} = \frac{1}{30}e^{3u+4v} = \frac{1}{30}x^3y^4$$
Hence
$$z = \psi_1(x^2y) + x\psi_2(x^2y) + \frac{1}{30}x^3y^4.$$
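A solution of this kind can be verified mechanically. The sketch below (our own check) substitutes the general solution of Example 32, with arbitrary functions psi1 and psi2, back into the left-hand side:

```python
import sympy as sp

# Verify that z = psi1(x^2 y) + x*psi2(x^2 y) + x^3 y^4/30 satisfies
# (x^2 D^2 - 4xy D D' + 4y^2 D'^2 + 6y D') z = x^3 y^4  (Example 32).
x, y = sp.symbols('x y', positive=True)
psi1, psi2 = sp.Function('psi1'), sp.Function('psi2')

z = psi1(x**2*y) + x*psi2(x**2*y) + x**3*y**4/sp.Integer(30)
lhs = (x**2*sp.diff(z, x, 2) - 4*x*y*sp.diff(z, x, y)
       + 4*y**2*sp.diff(z, y, 2) + 6*y*sp.diff(z, y))
assert sp.simplify(sp.expand(lhs - x**3*y**4)) == 0
```

The arbitrary-function terms cancel identically, confirming that $(d-2d')(d-2d'-1)$ annihilates both pieces of the complementary function.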
Example 33: Find the solution of $(x^2D^2 - y^2D'^2 - yD' + xD)z = 0$.
Solution: Put u = log x, v = log y. The given equation reduces to
$$\{d(d-1) - d'(d'-1) - d' + d\}z = 0$$
or
$$(d^2 - d'^2)z = 0$$
The auxiliary equation is $m^2 - 1 = 0$, so m = 1, −1. Hence
$$z = \varphi_1(v+u) + \varphi_2(v-u) = \varphi_1(\log xy) + \varphi_2\!\left(\log\frac{y}{x}\right) = \Psi_1(xy) + \Psi_2\!\left(\frac{y}{x}\right).$$
Example 34: Determine the solution of the following equation:
$$(x^2D^2 + 2xyDD' + y^2D'^2)z + nz = n(xD + yD')z + x^2 + y^2 + x^3$$
Solution: Put u = log x, v = log y. The equation reduces to
$$\{d(d-1) + 2dd' + d'(d'-1)\}z - n(d+d')z + nz = e^{2u} + e^{2v} + e^{3u}$$
or
$$\{(d+d')^2 - (n+1)(d+d') + n\}z = e^{2u} + e^{2v} + e^{3u}$$
or
$$(d+d'-n)(d+d'-1)z = e^{2u} + e^{2v} + e^{3u}$$
$$\text{C.F.} = e^{nu}\varphi_1(u-v) + e^u\varphi_2(u-v) = x^n\psi_1\!\left(\frac{x}{y}\right) + x\psi_2\!\left(\frac{x}{y}\right)$$
$$\text{P.I.} = \frac{1}{(d+d'-n)(d+d'-1)}\{e^{2u} + e^{2v} + e^{3u}\} = -\frac{x^2+y^2}{n-2} - \frac{1}{2}\cdot\frac{x^3}{n-3}$$
Hence (assuming n ≠ 2, 3)
$$z = x^n\psi_1\!\left(\frac{x}{y}\right) + x\psi_2\!\left(\frac{x}{y}\right) - \frac{x^2+y^2}{n-2} - \frac{1}{2}\cdot\frac{x^3}{n-3}$$

Example 35: Solve $(x^2D^2 - xyDD' - 2y^2D'^2 + xD - 2yD')z = \log\dfrac{y}{x} - \dfrac{1}{2}$
Solution: Put u = log x, v = log y. The equation reduces to
$$\{d(d-1) - dd' - 2d'(d'-1) + d - 2d'\}z = v - u - \frac{1}{2}$$
or
$$(d^2 - dd' - 2d'^2)z = v - u - \frac{1}{2}$$
or
$$(d-2d')(d+d')z = v - u - \frac{1}{2}$$
$$\text{C.F.} = \varphi_1(2u+v) + \varphi_2(u-v) = \psi_1(x^2y) + \psi_2\!\left(\frac{x}{y}\right)$$
$$\text{P.I.} = \frac{1}{(d-2d')(d+d')}\left(v-u-\frac{1}{2}\right)$$
$$= \frac{1}{d-2d'}\cdot\frac{1}{d}\left(1+\frac{d'}{d}\right)^{-1}\left(v-u-\frac{1}{2}\right)$$
$$= \frac{1}{d-2d'}\cdot\frac{1}{d}\left(v-u-\frac{1}{2}-u\right) \qquad \left[\text{since } d'\left(v-u-\tfrac{1}{2}\right) = 1 \text{ and } \tfrac{1}{d}(1) = u\right]$$
$$= \frac{1}{d-2d'}\left(uv-u^2-\frac{u}{2}\right)$$
$$= \frac{1}{d}\left(1+\frac{2d'}{d}+\frac{4d'^2}{d^2}+\cdots\right)\left(uv-u^2-\frac{u}{2}\right)$$
$$= \frac{1}{d}\left(uv-u^2-\frac{u}{2}+u^2\right) = \frac{1}{d}\left(uv-\frac{u}{2}\right)$$
$$= \frac{1}{2}(\log x)^2\log y - \frac{1}{4}(\log x)^2$$
Hence
$$z = \psi_1(x^2y) + \psi_2\!\left(\frac{x}{y}\right) + \frac{1}{2}(\log x)^2\log y - \frac{1}{4}(\log x)^2.$$
Example 36: Solve the differential equation,
$$(x^2D^2 + 2xyDD' + y^2D'^2)z = (x^2+y^2)^{n/2}$$
Solution: Put u = log x, v = log y. The equation reduces to
$$\{d(d-1) + 2dd' + d'(d'-1)\}z = (e^{2u}+e^{2v})^{n/2}$$
or
$$\{(d+d')^2 - (d+d')\}z = (e^{2u}+e^{2v})^{n/2}$$
or
$$(d+d')(d+d'-1)z = (e^{2u}+e^{2v})^{n/2}$$
$$\text{C.F.} = \varphi_1(u-v) + e^u\varphi_2(u-v) = \varphi_1\!\left(\log\frac{x}{y}\right) + x\varphi_2\!\left(\log\frac{x}{y}\right) = \psi_1\!\left(\frac{x}{y}\right) + x\psi_2\!\left(\frac{x}{y}\right)$$
The particular integral is
$$\text{P.I.} = \frac{1}{(d+d')(d+d'-1)}(e^{2u}+e^{2v})^{n/2}$$
Substituting
$$Z = \frac{1}{d+d'-1}(e^{2u}+e^{2v})^{n/2}$$
i.e.,
$$\frac{\partial Z}{\partial u} + \frac{\partial Z}{\partial v} = Z + (e^{2u}+e^{2v})^{n/2}$$
The subsidiary equations are
$$\frac{du}{1} = \frac{dv}{1} = \frac{dZ}{Z + (e^{2u}+e^{2v})^{n/2}}$$
Two independent integrals are given by u − v = constant = a (say), and
$$\frac{dZ}{dv} - Z = (e^{2u}+e^{2v})^{n/2} = e^{nv}(e^{2a}+1)^{n/2}$$
Since this equation is linear, therefore
$$Ze^{-v} = \frac{e^{(n-1)v}}{n-1}(e^{2a}+1)^{n/2}$$
$$Z = \frac{e^{nv}}{n-1}(e^{2a}+1)^{n/2} = \frac{(e^{2u}+e^{2v})^{n/2}}{n-1}$$
Then
$$\text{P.I.} = \frac{1}{d+d'}\left[\frac{(e^{2u}+e^{2v})^{n/2}}{n-1}\right] = \frac{1}{n-1}\left[\int \{e^{2u}+e^{2a+2u}\}^{n/2}\,du\right]_{a=v-u}$$
$$= \frac{1}{n-1}\left[(e^{2a}+1)^{n/2}\int e^{nu}\,du\right]_{a=v-u} = \frac{1}{n(n-1)}\left[e^{nu}(e^{2a}+1)^{n/2}\right]_{a=v-u}$$
$$= \frac{(e^{2u}+e^{2v})^{n/2}}{n(n-1)} = \frac{(x^2+y^2)^{n/2}}{n(n-1)}$$
Hence
$$z = \psi_1\!\left(\frac{x}{y}\right) + x\psi_2\!\left(\frac{x}{y}\right) + \frac{(x^2+y^2)^{n/2}}{n(n-1)}.$$
Example 37: Solve $(x^2D^2 - 2xyDD' + y^2D'^2 - xD + 3yD')z = \dfrac{8y}{x}$
Solution: Put u = log x, v = log y. The equation reduces to
$$\{d(d-1) - 2dd' + d'(d'-1) - d + 3d'\}z = 8e^{v-u}$$
or
$$(d-d')(d-d'-2)z = 8e^{v-u}$$
$$\text{C.F.} = \varphi_1(u+v) + e^{2u}\varphi_2(u+v) = \psi_1(xy) + x^2\psi_2(xy)$$
$$\text{P.I.} = 8\cdot\frac{1}{(d-d')(d-d'-2)}e^{v-u} = e^{v-u} = \frac{y}{x}$$
Hence
$$z = \psi_1(xy) + x^2\psi_2(xy) + \frac{y}{x}.$$

Example 38: Solve $(x^2D^2 + 2xyDD' + y^2D'^2)z = x^my^n$
Solution: Put u = log x, v = log y. The equation reduces to
$$\{d(d-1) + 2dd' + d'(d'-1)\}z = e^{mu+nv}$$
or
$$(d+d')(d+d'-1)z = e^{mu+nv}$$
$$\text{C.F.} = \varphi_1(u-v) + e^u\varphi_2(u-v) = \psi_1\!\left(\frac{x}{y}\right) + x\psi_2\!\left(\frac{x}{y}\right)$$
$$\text{P.I.} = \frac{1}{(d+d')(d+d'-1)}e^{mu+nv} = \frac{1}{(m+n)(m+n-1)}e^{mu+nv} = \frac{x^my^n}{(m+n)(m+n-1)}$$
$$z = \psi_1\!\left(\frac{x}{y}\right) + x\psi_2\!\left(\frac{x}{y}\right) + \frac{x^my^n}{(m+n)(m+n-1)}.$$
Check Your Progress
5. Write the general linear differential equation with constant coefficients.
6. What are the three types of second order partial differential equations?
7. What is the complementary function of the equation $(A_0D^n + A_1D^{n-1}D' + A_2D^{n-2}D'^2 + \cdots + A_nD'^n)z = 0$ if the roots are distinct?
8. When is a non-homogeneous equation said to be reducible?
9. Which mathematical function is used to reduce partial differential equations to equations with constant coefficients?

9.8 ANSWERS TO ‘CHECK YOUR PROGRESS’

1. The partial differential equation Pp + Qq = R, where P, Q, R are functions of x, y, z, is called Lagrange’s linear differential equation.
2. We have to make the following assumptions:
(a) The mass of the string per unit length is constant (‘homogeneous string’). The string is perfectly elastic and does not offer any resistance to bending.
(b) The tension caused by stretching the string before fixing it at the ends is so large that the action of the gravitational force on the string can be neglected.
(c) The string performs small transverse motions in a vertical plane; that is, every particle of the string moves strictly vertically, so that the deflection and the slope at every point of the string always remain small in absolute value.

3. Each $u_n(x,t) = (B_n\cos\lambda_nt + B_n^*\sin\lambda_nt)\sin\dfrac{n\pi x}{L}$ represents a harmonic motion having the frequency $\lambda_n/2\pi = cn/2L$ cycles per unit time. This motion is called the nth normal mode of the string.
4. Charpit’s method is used to find the solution of the most general partial differential equation of order one.
5. The linear differential equations with constant coefficients are of the form,
$$\frac{d^ny}{dx^n} + P_1\frac{d^{n-1}y}{dx^{n-1}} + P_2\frac{d^{n-2}y}{dx^{n-2}} + \cdots + P_{n-1}\frac{dy}{dx} + P_ny = Q$$
where P₁, P₂, ..., Pₙ are constants and Q is a function of x.
6. The three types of equations are the elliptic type, the parabolic type and the hyperbolic type.
7. Let m₁, m₂, ..., mₙ be the roots of the equation; then C.F. = φ₁(y + m₁x) + φ₂(y + m₂x) + ... + φₙ(y + mₙx), where the φᵢ’s are arbitrary functions.
8. The equation f(D, D′)z = V(x, y) is said to be reducible if the symbolic function f(D, D′) can be resolved into factors each of which is of first degree in D and D′.
9. The logarithm function is used to reduce partial differential equations to equations with constant coefficients.

9.9 SUMMARY

• Lagrange’s equation can be solved by forming auxiliary equations and then finding two independent solutions of the auxiliary equations.
• The wave equation governs the motion of a violin string.
• Charpit’s method is used to find the solution of the most general partial differential equation of order one.
• The general solution of F(D)y = Q consists of two parts:
  o The complementary function, which is the complete primitive of the reduced equation and is of the form y = c₁y₁ + c₂y₂ + ... + cₙyₙ, containing n arbitrary constants.
  o The particular integral, which is a solution of F(D)y = Q containing no arbitrary constant.
• Second order partial differential equations can be classified as elliptic, parabolic or hyperbolic type.
• If f(D, D′)z = V(x, y) and $f(D, D') = A_0D^n + A_1D^{n-1}D' + A_2D^{n-2}D'^2 + \cdots + A_nD'^n$, where A₀, A₁, ..., Aₙ are constants, then the equation is known as a homogeneous equation.
• The roots of a homogeneous equation can be distinct, repeated or imaginary.
• If all the terms on the left hand side of the equation f(D, D′)z = V(x, y) are not of the same degree, then the equation is said to be non-homogeneous. The equation is said to be reducible if the function f(D, D′) can be resolved into factors each of which is of first degree in D and D′, and irreducible otherwise.
• The equation $f(xD, yD')z = V(x, y)$, where $f(xD, yD') = \sum_{r,s} c_{rs}x^ry^sD^rD'^s$, $c_{rs}$ = constant, is reduced to a linear partial differential equation with constant coefficients by the substitution u = log x and v = log y.

9.10 KEY WORDS

• Partial differential equation: Any equation which contains one or more partial derivatives is called a partial differential equation.
• Fundamental mode: The first normal mode is referred to as the fundamental mode.

9.11 SELF ASSESSMENT QUESTIONS AND EXERCISES

Short-Answer Questions
1. Define partial differential equations with suitable examples.
2. How will you identify the order of a partial differential equation?
3. Which equations are termed as singular integral?
4. How will you determine the degree of the partial differential equation?
5. What is a spectrum?
6. Define Wronskian of functions.
7. Give examples of parabolic, elliptic and hyperbolic type equations.
8. What is the difference between homogeneous and non-homogeneous differential equations?
Long-Answer Questions
1. Solve the following differential equations:
a. (3z − 4y)p + (4x − 2z)q = 2y − 3x
b. x(z² − y²)p + y(x² − z²)q = z(y² − x²)
2. How does the frequency of the fundamental mode of the vibrating string depend on (a) the length of the string, (b) the mass per unit length, and (c) the tension? What happens to that frequency if we double the tension?
3. Find u(x, t) of the string of length L = π when c² = 1, the initial velocity is zero, and the initial deflection is
a. 0.01 sin 3x.
b. k(sin x − ½ sin 2x).
c. 0.1x(π − x).
d. 0.1x(π² − x²).
4. Find the deflection u(x, t) of the string of length L = π and c² = 1 for zero initial displacement and ‘triangular’ initial velocity u_t(x, 0) = 0.01x if 0 < x < π/2, u_t(x, 0) = 0.01(π − x) if π/2 < x < π. (Initial conditions with u_t(x, 0) ≠ 0 are hard to realize experimentally.)
5. Find solutions u(x, y) of the following equations by separating variables.
a. uₓ + u_y = 0.
b. uₓ − u_y = 0.
c. y²uₓ − x²u_y = 0.
d. uₓ + u_y = (x + y)u.
e. uₓₓ + u_yy = 0.
f. uₓ_y − u = 0.
g. uₓₓ − u_yy = 0.
h. xuₓ_y + 2yu = 0.
6. Show that
a. The substitution of $u(x,t) = \sum_{n=1}^{\infty} G_n(t)\sin\dfrac{n\pi x}{L}$ (L = length of the string) into the wave equation governing free vibrations leads to $\ddot{G}_n + \lambda_n^2G_n = 0$, $\lambda_n = \dfrac{cn\pi}{L}$.
b. Forced vibrations of the string under an external force P(x, t) per unit length acting normal to the string are governed by the equation $u_{tt} = c^2u_{xx} + \dfrac{P}{\rho}$.
7. Find complete integrals of the following equations:
a. p² + px + q = z.
b. p²x + q²y = z.
c. px + qy = z√(1 + pq).
d. p(1 + q²) = q(z − a).
e. pq + x(2y + 1)p + (y² + y)q − (2y + 1)z = 0.
f. (pq)(px + qy) = 1.
g. pxy + pq + qy = yz.
h. (p² + q²)x = pz.
i. 2(y + zq) = q(xp + yq).
8. Solve the equations:
a. (D 2
)
+ DD′s − 1D′3 z = 0 .

b. (D 3
+ 3D 2 D′ − 4D′3 z = 0 . )
9. Solve the equations:
a. (D² + 2DD′ + D′²)z = 12xy.
b. (D² − 2DD′ − 15D′²)z = 12xy.
c. (D² − 6DD′ − 9D′²)z = 12x² + 16xy.
d. (D³ − 7DD′² − 6D′³)z = x² + xy² + y³.
e. (D²D′ − 2DD′² + D′³)z = 1/x².
10. Solve the equations:
a. (D² − DD′ − 2D′²)z = x − y.
b. (D² − 3DD′ + 2D′²)z = x + y.
c. (4D² − 4DD′ + D′²)z = 16 log(x + 2y).
d. (D³ − 7DD′² − 6D′³)z = cos(x − y) + x² + xy² + y³.
e. (D³ − 7DD′² − 6D′³)z = sin(x + 2y) + e^(3x+y).
f. (D³ − 3DD′² + 2D′³)z = x − 2y.
g. (D³ − 4D²D′ + 5DD′² − 2D′³)z = e^(y+2x) + y + x.
11. Solve the equations:
a. (D³ − 3DD′² − 2D′³)z = cos(x + 2y).
b. (D² + 5DD′ + 5D′²)z = x sin(3x − 2y).
12. Solve the equations:
a. (D² − DD′ − 2D′²)z = (y − 1)eˣ.
b. (D³ − 3DD′² − 2D′³)z = cos(x + 2y) − e^y(3 + 2x).
13. Solve the equations:
a. (DD′ + D′² − 3D′)z = 0.
b. (2D + D′ − 1)²(D − 2D′ + 2)³z = 0.
14. Solve the equations:
a. (2D² − D′² + D)z = 0.
b. (D² + DD′ + D + D′ + 1)z = 0.
15. Solve the equations:
a. (D − D′ − 1)(D + D′ − 2)z = e^(2x−y).
b. (D² − D′)z = e^(x+y).
16. Solve the equations:
a. (D² − DD′ − 2D)z = cos(3x + 4y).
b. (D² − D′)z = A cos(lx + my), where A, l, m are constants.
17. Solve the equations:
a. (D + D′ − 1)(D + 2D′ − 3)z = 4 + 3x + 6y.
b. (D³ − DD′² − D² + DD′)z = (x + 2)/x³.
c. (D² − D′)z = 2y − x².
18. Solve the equations:
a. (D − D′²)z = cos(x − 3y).
b. (D + D′ − 1)(D + D′ − 3)(D + D′)z = e^(x+y) sin(2x + y).
c. (D² + DD′ + D′ − 1)z = 4 sinh x.
d. (D²D′ + D′² − 2)z = sin 3x − e^(2y) cos 2y.
19. Solve the equations:
a. (x²D³ − y³D′²)z = xy.
b. (x²D² + 2xyDD′ + y²D′²)z = x²y².
c. (x²D² − 2xyDD′ − 3y²D′² + xD − 3yD′)z = x²y³ cos(log x).
20. Solve (D³ − 2D²D′ − DD′² + 2D′³)z = e^(x+y).

21. Solve (D + D′ + D′′ − 3DD′D′′)u = x = 3xyz .


3 3 3 3

22. Solve the following equations:


a. r = x2 ey.
b. x ys = 1 .
23. Solve the following equations:
a. t − xq = −sin y − x cos y.
b. t − xq = x².
c. yt − q = xy.
24. Solve the following equations:
a. xr + ys + p = 10xy³.
b. 2yt − xs + 2q = 4yx³.
c. z + r = x cos(x + y).
25. Solve the differential equation, r − 2yp + y²z = (y − 2)e^(2x+3y).

9.12 FURTHER READINGS

Jain, M. K., S. R. K. Iyengar and R. K. Jain. 2007. Numerical Methods for Scientific and Engineering Computation. New Delhi: New Age International (P) Limited.
Atkinson, Kendall E. 1989. An Introduction to Numerical Analysis, 2nd Edition.
US: John Wiley & Sons.
Jain, M. K. 1983. Numerical Solution of Differential Equations. New Delhi:
New Age International (P) Limited.
Conte, Samuel D. and Carl de Boor. 1980. Elementary Numerical Analysis:
An Algorithmic Approach. New York: McGraw-Hill.
Skeel, Robert. D and Jerry B. Keiper. 1993. Elementary Numerical Computing
with Mathematica. New York: McGraw-Hill.
Balaguruswamy, E. 1999. Numerical Methods. New Delhi: Tata McGraw-Hill.
Datta, N. 2007. Computer Oriented Numerical Methods. New Delhi: Vikas
Publishing House Pvt. Ltd.
UNIT 10 ORDINARY DIFFERENTIAL EQUATIONS
Structure
10.0 Introduction
10.1 Objectives
10.2 Ordinary Differential Equations
10.3 Answers to Check Your Progress Questions
10.4 Summary
10.5 Key Words
10.6 Self Assessment Questions and Exercises
10.7 Further Readings

10.0 INTRODUCTION

In mathematics, an ordinary differential equation is a relation that contains functions


of only one independent variable and one or more of their derivatives with respect
to that variable. Ordinary differential equations are distinguished from partial
differential equations, which involve partial derivatives of functions of several
variables. Ordinary differential equations arise in many different contexts including
geometry, mechanics, astronomy and population modelling. The Picard–Lindelöf
theorem, Picard’s existence theorem or Cauchy–Lipschitz theorem is an important
theorem on existence and uniqueness of solutions to first-order equations with
given initial conditions. The Picard method is a way of approximating solutions of
ordinary differential equations. Originally, it was a way of proving the existence of
solutions.
In this unit, you will study about the ordinary differential equations and local
truncation error.

10.1 OBJECTIVES

After going through this unit, you will be able to:


Understand the ordinary differential equations
Analyse the local truncation error

10.2 ORDINARY DIFFERENTIAL EQUATIONS

Even though there are many methods for finding an analytical solution of ordinary differential equations, for many differential equations a solution in closed form cannot be obtained. There are, however, many methods available for finding a numerical solution of differential equations. We consider the solution of an initial value problem associated with a first order differential equation given by,
$$\frac{dy}{dx} = f(x, y) \qquad (10.1)$$
with $y(x_0) = y_0$ (10.2)
In general, the solution of the differential equation may not always exist. For the existence of a unique solution of the differential Equation (10.1), the following conditions, known as Lipschitz conditions, must be satisfied:
(i) The function f(x, y) is defined and continuous in the strip
$$R: x_0 \le x \le b, \quad -\infty < y < \infty$$
(ii) There exists a constant L such that for any x in (x₀, b) and any two numbers y and y₁,
$$|f(x, y) - f(x, y_1)| \le L|y - y_1| \qquad (10.3)$$
The numerical solution of initial value problems consists of finding the approximate numerical solution of y at successive steps x₁, x₂, ..., xₙ of x. A number of good methods are available for computing the numerical solution of differential equations.

Picard’s Method of Successive Approximations

Consider the solution of the initial value problem,
$$\frac{dy}{dx} = f(x, y) \quad \text{with } y(x_0) = y_0$$
Taking y = y(x) as a function of x, we can integrate the differential equation with respect to x from x = x₀ to x, in the form
$$y = y_0 + \int_{x_0}^{x} f(x, y(x))\,dx \qquad (10.4)$$
The integral contains the unknown function y(x) and it is not possible to integrate it directly. In Picard’s method, the first approximate solution y⁽¹⁾(x) is obtained by replacing y(x) by y₀. Thus,
$$y^{(1)}(x) = y_0 + \int_{x_0}^{x} f(x, y_0)\,dx \qquad (10.5)$$
The second approximate solution is derived on replacing y by y⁽¹⁾(x). Thus,
$$y^{(2)}(x) = y_0 + \int_{x_0}^{x} f(x, y^{(1)}(x))\,dx \qquad (10.6)$$
The process can be continued, so that we have the general approximate solution given by,
$$y^{(n)}(x) = y_0 + \int_{x_0}^{x} f(x, y^{(n-1)}(x))\,dx, \quad \text{for } n = 2, 3, \ldots \qquad (10.7)$$
This iteration formula is known as Picard’s iteration for finding the solution of a first order differential equation when an initial condition is given. The iterations are continued until two successive approximate solutions y⁽ᵏ⁾ and y⁽ᵏ⁺¹⁾ give approximately the same result for the desired values of x, up to a desired accuracy.
Note: Due to practical difficulties in evaluating the necessary integrals, this method cannot always be used. However, if f(x, y) is a polynomial in x and y, the successive approximate solutions are obtained as power series in x.
Example 1: Find four successive approximate solutions of the following initial value problem: y′ = x + y, with y(0) = 1, by Picard’s method. Hence compute y(0.1) and y(0.2) correct to five significant digits.
Solution: We have y′ = x + y, with y(0) = 1.
The first approximation by Picard’s method is,
$$y^{(1)}(x) = y(0) + \int_0^x [x + y(0)]\,dx = 1 + \int_0^x (x+1)\,dx = 1 + x + \frac{x^2}{2}$$
The second approximation is,
$$y^{(2)}(x) = 1 + \int_0^x \left(x + 1 + x + \frac{x^2}{2}\right)dx = 1 + x + x^2 + \frac{x^3}{6}$$
Similarly, the third approximation is,
$$y^{(3)}(x) = 1 + \int_0^x \left(1 + 2x + x^2 + \frac{x^3}{6}\right)dx = 1 + x + x^2 + \frac{x^3}{3} + \frac{x^4}{24}$$
The fourth approximation is,
$$y^{(4)}(x) = 1 + \int_0^x \left(1 + 2x + x^2 + \frac{x^3}{3} + \frac{x^4}{24}\right)dx = 1 + x + x^2 + \frac{x^3}{3} + \frac{x^4}{12} + \frac{x^5}{120}$$
It is clear that successive approximations are easily determined as power series in x, each having one degree more than the previous one. The value of y(0.1) is given by,
$$y(0.1) = 1 + 0.1 + (0.1)^2 + \frac{(0.1)^3}{3} + \frac{(0.1)^4}{12} + \cdots \approx 1.1103,$$
correct to five significant digits.
Similarly,
$$y(0.2) = 1 + 0.2 + (0.2)^2 + \frac{(0.2)^3}{3} + \frac{(0.2)^4}{12} + \frac{(0.2)^5}{120} \approx 1.2428.$$
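The successive polynomial approximations of Example 1 can also be generated symbolically. The sketch below (our own code, with sympy) implements the iteration formula (10.7) and reproduces the fourth approximation:

```python
import sympy as sp

# Picard iteration y_(n)(x) = y0 + integral_{x0}^{x} f(t, y_(n-1)(t)) dt,
# applied to y' = x + y, y(0) = 1 (Example 1).
x, t = sp.symbols('x t')

def picard(f, y0, x0, iterations):
    y = sp.Integer(y0)                   # y_(0)(x) = y0
    for _ in range(iterations):
        integrand = f(t, y.subs(x, t))   # f(t, y_(n-1)(t))
        y = y0 + sp.integrate(integrand, (t, x0, x))
    return sp.expand(y)

y4 = picard(lambda t, y: t + y, 1, 0, 4)
# Matches the fourth approximation derived above:
expected = 1 + x + x**2 + x**3/3 + x**4/12 + x**5/sp.Integer(120)
assert sp.expand(y4 - expected) == 0
print(float(y4.subs(x, sp.Rational(1, 10))))   # ~1.11034
```

Each pass through the loop raises the degree of the polynomial by one, exactly as observed in the hand computation.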
3 4 120
Example 2: Find the successive approximate solution of the initial value problem,
y xy 1, with y (0) = 1, by Picard’s method.
Solution: The first approximate solution is given by,
x
x2

(1)
y ( x) = 1 + ( x + 1) dx = 1 + x +
2
0

The second and third approximate solutions are,


x
x2 x2 x3 x 4

( 2)
y ( x ) = 1 + [ x(1 + x + ) + 1]dx = 1 + x + + +
2 2 3 8
0
x
x 2 x3 x4 x 2 x3 x 4 x5 x6

y (3) ( x ) = 1 + [ x(1 + x +
0
2
+
3
+ ) + 1]dx = 1 + x +
4 2
+
3
+ + +
8 15 48

Example 3: Compute y(0.25) and y(0.5) correct to three decimal places by solving the following initial value problem by Picard’s method:
$$\frac{dy}{dx} = \frac{x^2}{1+y^2}, \quad y(0) = 0$$
Solution: By Picard’s method, the first approximation is,
$$y^{(1)}(x) = 0 + \int_0^x \frac{x^2}{1+0}\,dx = \frac{x^3}{3}$$
The second approximate solution is,
$$y^{(2)}(x) = \int_0^x \frac{x^2}{1+[y^{(1)}(x)]^2}\,dx = \int_0^x \frac{x^2}{1+\frac{x^6}{9}}\,dx = \tan^{-1}\frac{x^3}{3}$$
For x = 0.25,
$$y^{(1)}(0.25) = \frac{(0.25)^3}{3} = 0.0052, \quad y^{(2)}(0.25) = \tan^{-1}(0.0052) \approx 0.0052$$
∴ y(0.25) ≈ 0.005, correct to three decimal places.
Again, for x = 0.5,
$$y^{(1)}(0.5) = \frac{(0.5)^3}{3} = 0.0417, \quad y^{(2)}(0.5) = \tan^{-1}\frac{(0.5)^3}{3} = 0.0416$$
Thus, correct to three decimal places, y(0.5) = 0.042.
Note: For this problem we observe that the integral for the third and higher approximate solutions is either difficult or impossible to evaluate, since
$$y^{(3)}(x) = \int_0^x \frac{x^2}{1+\left(\tan^{-1}\dfrac{x^3}{3}\right)^2}\,dx$$
is not integrable in closed form.
Example 4: Use Picard’s method to find two successive approximate solutions of the initial value problem,
$$\frac{dy}{dx} = \frac{y-x}{y+x}, \quad y(0) = 1$$
Solution: The first approximate solution by Picard’s method is given by,
$$y^{(1)}(x) = y_0 + \int_0^x f(x, y_0)\,dx = 1 + \int_0^x \frac{1-x}{1+x}\,dx = 1 + \int_0^x \frac{2-(1+x)}{1+x}\,dx$$
$$y^{(1)}(x) = 1 + 2\log_e|1+x| - x$$
The second approximate solution is given by,
$$y^{(2)}(x) = y_0 + \int_0^x f(x, y^{(1)}(x))\,dx = 1 + \int_0^x \frac{1-2x+2\log_e|1+x|}{1+2\log_e|1+x|}\,dx = 1 + x - 2\int_0^x \frac{x}{1+2\log_e|1+x|}\,dx$$
We observe that it is not possible to evaluate the integral for y⁽²⁾(x) in closed form. Thus Picard’s method is not applicable for obtaining further successive approximate solutions.

Multistep Methods

We have seen that for finding the solution at each step, the Taylor series method and the Runge-Kutta methods require evaluation of several derivatives. We shall now develop multistep methods which require only one derivative evaluation per step; but unlike the self-starting Taylor series or Runge-Kutta methods, the multistep methods make use of the solution at more than one previous step point.
Let the values of y and y′ already have been evaluated by self-starting methods at a number of equally spaced points x₀, x₁, ..., xₙ. We now integrate the differential equation,
$$\frac{dy}{dx} = f(x, y), \quad \text{from } x_n \text{ to } x_{n+1}$$
i.e.,
$$\int_{x_n}^{x_{n+1}} dy = \int_{x_n}^{x_{n+1}} f(x, y)\,dx$$
$$y_{n+1} = y_n + \int_{x_n}^{x_{n+1}} f(x, y(x))\,dx$$
To evaluate the integral on the right hand side, we consider f(x, y) as a function of x and replace it by an interpolating polynomial, i.e., a Newton’s backward difference interpolation using the (m + 1) points xₙ, xₙ₋₁, xₙ₋₂, ..., xₙ₋ₘ,
$$p_m(x) = \sum_{k=0}^{m} (-1)^k\binom{-s}{k}\Delta^k f_{n-k}, \quad \text{where } s = \frac{x - x_n}{h}$$
and
$$(-1)^k\binom{-s}{k} = \frac{s(s+1)(s+2)\cdots(s+k-1)}{k!}$$
Substituting p_m(x) in place of f(x, y), we obtain
$$y_{n+1} = y_n + h\int_0^1 \sum_{k=0}^{m} (-1)^k\binom{-s}{k}\Delta^k f_{n-k}\,ds = y_n + h\left[\gamma_0 f_n + \gamma_1\Delta f_{n-1} + \gamma_2\Delta^2 f_{n-2} + \cdots + \gamma_m\Delta^m f_{n-m}\right]$$
where
$$\gamma_k = (-1)^k\int_0^1\binom{-s}{k}\,ds$$
The coefficients γ_k can be easily computed to give,
$$\gamma_0 = 1, \ \gamma_1 = \frac{1}{2}, \ \gamma_2 = \frac{5}{12}, \ \gamma_3 = \frac{3}{8}, \ \gamma_4 = \frac{251}{720}, \ \text{etc.}$$
Taking m = 3, the above formula gives,
$$y_{n+1} = y_n + h\left[f_n + \frac{1}{2}\Delta f_{n-1} + \frac{5}{12}\Delta^2 f_{n-2} + \frac{3}{8}\Delta^3 f_{n-3}\right]$$
Substituting the expressions of the differences in terms of function values given by,
$$\Delta f_{n-1} = f_n - f_{n-1}, \quad \Delta^2 f_{n-2} = f_n - 2f_{n-1} + f_{n-2}$$
$$\Delta^3 f_{n-3} = f_n - 3f_{n-1} + 3f_{n-2} - f_{n-3}$$
we get on rearranging,
$$y_{n+1} = y_n + \frac{h}{24}\left[55f_n - 59f_{n-1} + 37f_{n-2} - 9f_{n-3}\right] \qquad (10.8)$$
This is known as the Adams-Bashforth formula of order 4. The local error of this formula is,
$$E = h^5\int_0^1\binom{s+3}{4}f^{(iv)}(\xi)\,ds \qquad (10.9)$$
By using the mean value theorem of integral calculus,
$$E = \frac{251}{720}h^5 f^{(iv)}(\xi) \qquad (10.10)$$
The fourth order Adams-Bashforth formula requires four starting values, i.e., the derivatives f₃, f₂, f₁ and f₀. This is a multistep method.
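A concrete sketch of formula (10.8) follows (the test problem and step size are our own, not from the text). The starting values are taken from the exact solution of y′ = x + y, y(0) = 1, namely y = 2eˣ − x − 1:

```python
import math

def f(x, y):
    return x + y                      # test equation y' = x + y

def exact(x):
    return 2 * math.exp(x) - x - 1    # exact solution with y(0) = 1

h, N = 0.05, 20
xs = [i * h for i in range(N + 1)]
ys = [exact(x) for x in xs[:4]]       # four exact starting values y0..y3

# Adams-Bashforth 4: y_{n+1} = y_n + h/24 (55 f_n - 59 f_{n-1} + 37 f_{n-2} - 9 f_{n-3})
for n in range(3, N):
    ys.append(ys[n] + h / 24 * (55 * f(xs[n], ys[n]) - 59 * f(xs[n-1], ys[n-1])
                                + 37 * f(xs[n-2], ys[n-2]) - 9 * f(xs[n-3], ys[n-3])))

err = abs(ys[N] - exact(xs[N]))       # error at x = 1.0
```

Only one new evaluation of f is made per step; the three earlier values are reused, which is the economy multistep methods trade against the need for starting values.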

Predictor-Corrector Methods

These methods use a pair of multistep numerical integration formulas. The first is the Predictor formula, which is an open-type explicit formula derived by using, in the integral, an interpolation formula that interpolates at the points xₙ, xₙ₋₁, ..., xₙ₋ₘ. The second is the Corrector formula, which is obtained by using an interpolation formula that interpolates at the points xₙ₊₁, xₙ, ..., xₙ₋ₚ in the integral.

Euler’s Predictor-Corrector Formula

The simplest pair of formulas of this type is given by,
$$y_{n+1}^{(p)} = y_n + hf(x_n, y_n) \qquad (10.11)$$
$$y_{n+1}^{(c)} = y_n + \frac{h}{2}\left[f(x_n, y_n) + f(x_{n+1}, y_{n+1}^{(p)})\right] \qquad (10.12)$$
In order to determine the solution of the problem up to a desired accuracy, the corrector formula can be employed in an iterative manner as shown below:
Step 1: Compute $y_{n+1}^{(0)}$ using Equation (10.11), i.e., $y_{n+1}^{(0)} = y_n + hf(x_n, y_n)$.
Step 2: Compute $y_{n+1}^{(k)}$ using Equation (10.12), i.e.,
$$y_{n+1}^{(k)} = y_n + \frac{h}{2}\left[f(x_n, y_n) + f(x_{n+1}, y_{n+1}^{(k-1)})\right], \quad \text{for } k = 1, 2, 3, \ldots$$
The computation is continued till the condition given below is satisfied,
$$\left|\frac{y_{n+1}^{(k)} - y_{n+1}^{(k-1)}}{y_{n+1}^{(k)}}\right| < \varepsilon \qquad (10.13)$$
where ε is the prescribed accuracy.
It may be noted that the accuracy achieved will depend on the step size h and on the local error. The local errors in the predictor and corrector formulas are $\dfrac{h^2}{2}y''(\xi_1)$ and $-\dfrac{h^3}{12}y'''(\xi_2)$, respectively.
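The two-step procedure above can be sketched in a few lines of code (the test problem y′ = x + y, y(0) = 1 and its exact solution are our own choices):

```python
import math

# Iterated Euler predictor-corrector for y' = x + y, y(0) = 1
# (exact solution y = 2 e^x - x - 1), integrating from x = 0 to x = 1.
def f(x, y):
    return x + y

h, eps = 0.1, 1e-10
x, y = 0.0, 1.0
for _ in range(10):
    yp = y + h * f(x, y)                               # predictor (10.11)
    while True:
        yc = y + h / 2 * (f(x, y) + f(x + h, yp))      # corrector (10.12)
        if abs((yc - yp) / yc) < eps:                  # convergence test (10.13)
            break
        yp = yc
    x, y = x + h, yc

err = abs(y - (2 * math.exp(1.0) - 2.0))               # compare with exact y(1)
```

The inner loop converges quickly here because |h/2 · ∂f/∂y| = 0.05 is small; when that quantity approaches 1, the corrector iteration may fail to converge and a smaller h is needed.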

Milne’s Predictor-Corrector Formula

A commonly used predictor-corrector system is the fourth order Milne’s predictor-corrector formula. It uses the following as predictor and corrector:
$$y_{n+1}^{(p)} = y_{n-3} + \frac{4h}{3}\left(2f_n - f_{n-1} + 2f_{n-2}\right)$$
$$y_{n+1}^{(c)} = y_{n-1} + \frac{h}{3}\left[f_{n-1} + 4f_n + f(x_{n+1}, y_{n+1}^{(p)})\right] \qquad (10.14)$$
The local errors in these formulas are, respectively,
$$\frac{14}{45}h^5y^{(v)}(\xi_1) \quad \text{and} \quad -\frac{1}{90}h^5y^{(v)}(\xi_2) \qquad (10.15)$$
Example 5: Compute the Taylor series solution of the problem $\dfrac{dy}{dx} = xy + 1$, y(0) = 1, up to x⁵ terms and hence compute the values of y(0.1), y(0.2) and y(0.3). Use Milne’s predictor-corrector method to compute y(0.4) and y(0.5).
Solution: We have y′ = xy + 1, with y(0) = 1, ∴ y′(0) = 1.
Differentiating successively, we get
$$y''(x) = xy' + y \quad \therefore y''(0) = 1$$
$$y'''(x) = xy'' + 2y' \quad \therefore y'''(0) = 2$$
$$y^{(iv)}(x) = xy''' + 3y'' \quad \therefore y^{(iv)}(0) = 3$$
$$y^{(v)}(x) = xy^{(iv)} + 4y''' \quad \therefore y^{(v)}(0) = 8$$
Thus the Taylor series solution is given by,
$$y(x) = y(0) + xy'(0) + \frac{x^2}{2}y''(0) + \frac{x^3}{3!}y'''(0) + \frac{x^4}{4!}y^{(iv)}(0) + \frac{x^5}{5!}y^{(v)}(0)$$
$$= 1 + x + \frac{x^2}{2} + \frac{x^3}{3} + \frac{x^4}{8} + \frac{x^5}{15}$$
$$y(0.1) = 1 + 0.1 + \frac{0.01}{2} + \frac{0.001}{3} + \frac{0.0001}{8} + \frac{0.00001}{15} = 1.1053$$
$$y(0.2) = 1 + 0.2 + \frac{0.04}{2} + \frac{0.008}{3} + \frac{0.0016}{8} + \frac{0.00032}{15} = 1.22288$$
$$y(0.3) = 1 + 0.3 + \frac{0.09}{2} + \frac{0.027}{3} + \frac{0.0081}{8} + \frac{0.00243}{15} = 1.35517$$
For application of Milne’s predictor-corrector method, we compute y′(0.1), y′(0.2) and y′(0.3):
$$y'(0.1) = 0.1 \times 1.1053 + 1 = 1.11053$$
$$y'(0.2) = 0.2 \times 1.22288 + 1 = 1.24458$$
$$y'(0.3) = 0.3 \times 1.35517 + 1 = 1.40655$$
The predictor formula gives,
$$y_4^{(0)} = y(0.4) = y_0 + \frac{4h}{3}\left(2y_1' - y_2' + 2y_3'\right) = 1 + \frac{4 \times 0.1}{3}\left(2 \times 1.11053 - 1.24458 + 2 \times 1.40655\right) = 1.50528$$
$$\therefore y_4' = 1 + 0.4 \times 1.50528 = 1.60211$$
The corrector formula gives,
$$y_4^{(1)} = y_2 + \frac{h}{3}\left(y_2' + 4y_3' + y_4'\right) = 1.22288 + \frac{0.1}{3}\left(1.24458 + 4 \times 1.40655 + 1.60211\right) = 1.22288 + 0.28243 = 1.50531$$
The value y(0.5) is obtained by repeating the same predictor-corrector step with the updated values.
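The single Milne step of Example 5 can be checked numerically. The sketch below (our own code) uses the starting values from the x⁵ Taylor polynomial and applies formula (10.14):

```python
# Milne predictor-corrector step for y' = x*y + 1, y(0) = 1, h = 0.1,
# using Taylor-series starting values for y(0), y(0.1), y(0.2), y(0.3).
h = 0.1

def f(x, y):
    return x * y + 1

xs = [0.0, 0.1, 0.2, 0.3]
ys = [1.0, 1.1053, 1.22288, 1.35517]
fs = [f(x, y) for x, y in zip(xs, ys)]

# Predictor: y4 = y0 + 4h/3 (2 f1 - f2 + 2 f3)
y4p = ys[0] + 4 * h / 3 * (2 * fs[1] - fs[2] + 2 * fs[3])
# Corrector: y4 = y2 + h/3 (f2 + 4 f3 + f(x4, y4p))
y4c = ys[2] + h / 3 * (fs[2] + 4 * fs[3] + f(0.4, y4p))
```

The predicted and corrected values agree to four decimal places here; a larger discrepancy between them would signal that h should be reduced.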

Numerical Solution of Boundary Value Problems

We consider the solution of an ordinary differential equation of order 2 or more, when the value of the dependent variable is given at more than one point, usually at the two ends of an interval in which the solution is required. For example, the simplest boundary value problem associated with a second order differential equation is,
$$y'' + p(x)y' + q(x)y = r(x) \qquad (10.16)$$
with boundary conditions, y(a) = A, y(b) = B. (10.17)
The following two methods reduce the boundary value problem to initial value problems, which are then solved by any of the methods for solving such problems.

Reduction to a Pair of Initial Value Problems

This method is applicable to linear differential equations only. In this method, the solution is assumed to be a linear combination of two solutions in the form,
$$y(x) = u(x) + \lambda v(x) \qquad (10.18)$$
where λ is a suitable constant determined by using the boundary condition, and u(x) and v(x) are the solutions of the following two initial value problems:
(i) $u'' + p(x)u' + q(x)u = r(x)$, with $u(a) = A, \ u'(a) = \alpha_1$ (say). (10.19)
(ii) The associated homogeneous equation $v'' + p(x)v' + q(x)v = 0$, with $v(a) = 0$ and $v'(a) = \alpha_2$ (say), so that y = u + λv satisfies Equation (10.16) for every λ. (10.20)
Here α₁ and α₂ are arbitrarily assumed constants. After solving the two initial value problems, the constant λ is determined by satisfying the boundary condition at x = b. Thus,
$$B = u(b) + \lambda v(b)$$
or
$$\lambda = \frac{B - u(b)}{v(b)}, \quad \text{provided } v(b) \neq 0 \qquad (10.21)$$
Evidently, y(a) = A is already satisfied.
If v(b) = 0, then we solve the initial value problem for v again by choosing v′(a) = α₃, for some other value for which v(b) will be non-zero.
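This superposition procedure can be sketched end to end (the test problem, step size and helper names below are our own): solve y″ + y = x with y(0) = 0 and y(1) = 2, whose exact solution is y = x + sin x / sin 1.

```python
import math

# Superposition method for the BVP  y'' + y = x,  y(0) = 0, y(1) = 2.
# u solves the inhomogeneous IVP, v the homogeneous one; y = u + lam*v.
def rk4_second_order(g, y0, yp0, a, b, h):
    """Integrate y'' = g(x, y, y') by classical RK4; return dict x -> y."""
    x, y, z = a, y0, yp0
    out = {round(x, 10): y}
    for _ in range(round((b - a) / h)):
        k1, l1 = h * z, h * g(x, y, z)
        k2, l2 = h * (z + l1 / 2), h * g(x + h / 2, y + k1 / 2, z + l1 / 2)
        k3, l3 = h * (z + l2 / 2), h * g(x + h / 2, y + k2 / 2, z + l2 / 2)
        k4, l4 = h * (z + l3), h * g(x + h, y + k3, z + l3)
        y += (k1 + 2 * k2 + 2 * k3 + k4) / 6
        z += (l1 + 2 * l2 + 2 * l3 + l4) / 6
        x += h
        out[round(x, 10)] = y
    return out

h = 0.01
u = rk4_second_order(lambda x, y, z: x - y, 0.0, 1.0, 0.0, 1.0, h)  # u'' + u = x
v = rk4_second_order(lambda x, y, z: -y, 0.0, 1.0, 0.0, 1.0, h)     # v'' + v = 0
lam = (2.0 - u[1.0]) / v[1.0]                                       # from (10.21)
y_mid = u[0.5] + lam * v[0.5]                                       # combined solution
```

The arbitrary slopes α₁ = α₂ = 1 used here only change u and v individually; the combination u + λv is the same solution of the boundary value problem.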
Another method which is commonly used for solving boundary problems is
the finite difference method discussed below.
Finite Difference Method
In this method of solving boundary value problem, the derivatives appearing in the
differential equation and boundary conditions, if necessary, are replaced by
appropriate difference gradients.
Consider the differential equation, y′′ +p(x) y ′ + q(x)y = r(x) (10.22)
with the boundary conditions, y (a) = and y (b) = (10.23)
The interval [a, b] is divided into N equal parts each of width h, so that
h = (b–a)/N, and the end points are x0 = a and xn = b. The interior mesh points xi
at which solution values y(xi) are to be determined are,
xn = x0+ nh, n = 1, 2, ..., N – 1
(10.24)
The values of y at the mesh points is denoted by yn given by,
yn = y (x0+ nh), n = 0, 1, 2, ..., N (10.25)
The following central difference approximations are usually used in finite
difference method of solving boundary value problem,
y n +1 − y n −1
y ′( xn ) ≈ (10.26)
2h

y n +1 − 2 y n + y n −1
y ′′( xn ) ≈ 2 (10.27)
h
Substituting these in the differential equation, we have
2(yn+1–2yn+ yn–1) + pn h(yn+1– yn–1) + 2h2gnyn = 2rnh2,
where pn = p(xn), qn = q(xn), rn = r(xn) (10.28)
Rewriting the equation by regrouping we get,
(2–hpn)yn–1+(–4+2h2qn)yn+(2+h2qn)yn+1 = 2rnh2
(10.29)

Self-Instructional
Material 293
Ordinary Differential This equation is to be considered at each of the interior points, i.e., it is true for
Equations
n = 1, 2, ..., N–1.
The boundary conditions of the problem are given by,
NOTES y0 , yn
(10.30)
Introducing these conditions in the relevant equations and arranging them, we
have the following system of linear equations in (N–1) unknowns y1, y2, ..., yn–1.

( 4 2h 2 q1 ) y1 (2 hp1 ) y2 2r1h 2 (2 hp1 )


(2 hp2 ) y1 ( 4 2h 2 q2 ) y2 (2 hp2 ) y3 2r2 h 2
(2 hp3 ) y2 ( 4 2h 2 q3 ) y3 (2 hp3 ) y4 2r3 h 2
... ... ... ... ...
(2 hpN 2 ) ( 4 2h 2 qN 2 ) y N 2 (2 hpN 2 ) yN 1 2rN 2 h 2
(2 hpN 1 ) yN 2 ( 4 2h 2 q N 1 ) y N 1 2rN 1h 2 (2 hpN 1 ) (10.31)
The above system of N–1 equations can be expressed in matrix notation in the
form
Ay = b
(10.32)
Where the coefficient matrix A is a tridiagonal one, of the form

 B1 C1 0 0... 0 0 0 
A B2 C2 0... 0 0 0 
 2 
0 A3 B3 C3 ... 0 0 0 
A=  (10.33)
 ... ... ... ... ... ... ... 
0 0 0 0... AN − 2 BN −2 C N −2 
 
 0 0 0 0... 0 AN −1 B N −1 

Where Bi = −4 + 2h 2 qi , i = 1, 2,..., N − 1
Ci = 2 + hpi , i = 1, 2,..., N − 2 (10.34)
Ai = 2 − hpi , i = 2, 3,..., N − 1

The vector b has components,
b1 = 2r1h² – (2 – hp1)y0
bi = 2rih², for i = 2, 3, ..., N – 2 (10.35)
bN–1 = 2rN–1h² – (2 + hpN–1)yN
The system of linear equations can be solved directly by a method suited to tridiagonal systems, such as the Thomas algorithm.
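The scheme above can be sketched in code. The following is a minimal illustration (all names are my own, not from the text), assuming the equation has the form y′′ + p(x)y′ + q(x)y = r(x) used in deriving Equation (10.29); the tridiagonal system is solved by the Thomas algorithm and checked on y′′ − y = 0, whose exact solution through the chosen boundary values is sinh x.

```python
import math

def solve_bvp(p, q, r, a, b, ya, yb, N):
    """Finite difference solution of y'' + p(x)y' + q(x)y = r(x),
    y(a) = ya, y(b) = yb, via the central-difference equations
    (2 - h*pn)y[n-1] + (-4 + 2h^2 qn)y[n] + (2 + h*pn)y[n+1] = 2 rn h^2
    and the Thomas algorithm for the tridiagonal system."""
    h = (b - a) / N
    x = [a + n * h for n in range(N + 1)]
    A = [2 - h * p(x[n]) for n in range(1, N)]           # sub-diagonal
    B = [-4 + 2 * h * h * q(x[n]) for n in range(1, N)]  # diagonal
    C = [2 + h * p(x[n]) for n in range(1, N)]           # super-diagonal
    d = [2 * r(x[n]) * h * h for n in range(1, N)]
    d[0] -= A[0] * ya        # known boundary values move to the right side
    d[-1] -= C[-1] * yb
    for i in range(1, N - 1):                # forward elimination
        m = A[i] / B[i - 1]
        B[i] -= m * C[i - 1]
        d[i] -= m * d[i - 1]
    y = [0.0] * (N - 1)                      # back substitution
    y[-1] = d[-1] / B[-1]
    for i in range(N - 3, -1, -1):
        y[i] = (d[i] - C[i] * y[i + 1]) / B[i]
    return [ya] + y + [yb]

# Test problem: y'' - y = 0 (p = 0, q = -1, r = 0), y(0) = 0, y(1) = sinh 1
y = solve_bvp(lambda x: 0.0, lambda x: -1.0, lambda x: 0.0,
              0.0, 1.0, 0.0, math.sinh(1.0), 50)
err = max(abs(yi - math.sinh(i / 50)) for i, yi in enumerate(y))
```

With N = 50 the maximum error is of order h², consistent with the second order accuracy of the central difference approximations.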

Example 6: Compute values of y(1.1) and y(1.2) on solving the following initial
value problem, using Runge-Kutta method of order 4:
y′′ + y′/x + y = 0, with y(1) = 0.77, y′(1) = –0.44
Solution: We first rewrite the initial value problem as a pair of first order
equations,
y′ = z, z′ = –z/x – y
with y(1) = 0.77 and z(1) = –0.44.
We now employ Runge-Kutta method of order 4 with h = 0.1,
y(1.1) = y(1) + (1/6)(k1 + 2k2 + 2k3 + k4)
y′(1.1) = z(1.1) = z(1) + (1/6)(l1 + 2l2 + 2l3 + l4)
k1 = 0.1 × (–0.44) = –0.044
l1 = 0.1 × (0.44/1 – 0.77) = –0.033
k2 = 0.1 × (–0.44 – 0.033/2) = –0.04565
l2 = 0.1 × (0.4565/1.05 – 0.748) = –0.031323810
k3 = 0.1 × (–0.44 – 0.031323810/2) = –0.045566190
l3 = 0.1 × (0.455661905/1.05 – 0.747175) = –0.031321128
k4 = 0.1 × (–0.471321128) = –0.047132113
l4 = 0.1 × (0.471321128/1.1 – 0.724433810) = –0.029596005
∴ y(1.1) = 0.77 + (1/6)[–0.044 + 2(–0.04565) + 2(–0.045566190) – 0.047132113]
= 0.77 – 0.045594 = 0.724406
y′(1.1) = –0.44 + (1/6)[–0.033 + 2(–0.031323810) + 2(–0.031321128) – 0.029596005]
= –0.44 – 0.031314
= –0.471314
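Hand arithmetic of this kind is easy to mis-copy, so it is worth cross-checking with a short program (a sketch; `rk4_system` is an illustrative name, not from the text):

```python
def rk4_system(f, g, x, y, z, h):
    """One RK4 step for the coupled system y' = f(x,y,z), z' = g(x,y,z)."""
    k1, l1 = h * f(x, y, z), h * g(x, y, z)
    k2, l2 = (h * f(x + h/2, y + k1/2, z + l1/2),
              h * g(x + h/2, y + k1/2, z + l1/2))
    k3, l3 = (h * f(x + h/2, y + k2/2, z + l2/2),
              h * g(x + h/2, y + k2/2, z + l2/2))
    k4, l4 = (h * f(x + h, y + k3, z + l3),
              h * g(x + h, y + k3, z + l3))
    return (y + (k1 + 2*k2 + 2*k3 + k4) / 6,
            z + (l1 + 2*l2 + 2*l3 + l4) / 6)

# y'' + y'/x + y = 0 as y' = z, z' = -z/x - y; y(1) = 0.77, z(1) = -0.44
f = lambda x, y, z: z
g = lambda x, y, z: -z / x - y
y1, z1 = rk4_system(f, g, 1.0, 0.77, -0.44, 0.1)   # y(1.1), y'(1.1)
```

Running it reproduces y(1.1) ≈ 0.724406 and y′(1.1) ≈ −0.471314.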
Example 7: Compute the solution of the following initial value problem for x =
0.2, using Taylor series solution method of order 4:
d²y/dx² = y + x dy/dx, y(0) = 1, y′(0) = 0

Solution: Given y′′ = y + xy′, we put z = y′, so that
z′ = y + xz, y′ = z and y(0) = 1, z(0) = 0.
We solve for y and z by Taylor series method of order 4. For this we first
compute y′′(0), y′′′(0), y(iv)(0), ...
We have, y′′(0) = y(0) + 0 × y′(0) = 1, z′(0) = 1
y′′′(0) = z′′(0) = y′(0) + z(0) + 0·z′(0) = 0
y(iv)(0) = z′′′(0) = y′′(0) + 2z′(0) + 0·z′′(0) = 3
z(iv)(0) = 4z′′(0) + 0·z′′′(0) = 0
By Taylor series of order 4, we have
y(0 + x) = y(0) + xy′(0) + (x²/2!)y′′(0) + (x³/3!)y′′′(0) + (x⁴/4!)y(iv)(0)
or, y(x) = 1 + x²/2! + (x⁴/4!) × 3
∴ y(0.2) = 1 + (0.2)²/2! + (0.2)⁴/8 = 1.0202
Similarly, y′(0.2) = z(0.2) ≈ 0.2 + (3/3!)(0.2)³ = 0.2 + 0.004 = 0.204
Example 8: Compute the solution of the following initial value problem for x =
0.2 by fourth order Runge-Kutta method: d²y/dx² = xy, y(0) = 1, y′(0) = 1
Solution: Given y′′ = xy, we put y′ = z and obtain the simultaneous first order problem,
y′ = z = f(x, y, z), say; z′ = xy = g(x, y, z), say; with y(0) = 1 and z(0) = 1
We use Runge-Kutta 4th order formulae, with h = 0.2, to compute y(0.2)
and y′(0.2), given below.
k1 = h f(x0, y0, z0) = 0.2 × 1 = 0.2
l1 = h g(x0, y0, z0) = 0.2 × 0 = 0
k2 = h f(x0 + h/2, y0 + k1/2, z0 + l1/2) = 0.2 × (1 + 0) = 0.2
l2 = h g(x0 + h/2, y0 + k1/2, z0 + l1/2) = 0.2 × 0.1 × 1.1 = 0.022
k3 = h f(x0 + h/2, y0 + k2/2, z0 + l2/2) = 0.2 × 1.011 = 0.2022
l3 = h g(x0 + h/2, y0 + k2/2, z0 + l2/2) = 0.2 × 0.1 × 1.1 = 0.022
k4 = h f(x0 + h, y0 + k3, z0 + l3) = 0.2 × 1.022 = 0.2044
l4 = h g(x0 + h, y0 + k3, z0 + l3) = 0.2 × 0.2 × 1.2022 = 0.048088
y(0.2) = 1 + (1/6)(0.2 + 2(0.2 + 0.2022) + 0.2044) = 1.2015
y′(0.2) = 1 + (1/6)(0 + 2(0.022 + 0.022) + 0.048088) = 1.02268
Local Truncation Error
Local truncation error in a numerical method is the error caused by using
simple approximations to represent exact mathematical formulas. The only way to
completely avoid truncation error is to use exact calculations. However, truncation
error can be reduced by applying the same approximation to a larger number of
smaller intervals or by switching to a better approximation. Analysis of truncation
error is the single most important source of information about the theoretical
characteristics that distinguish better methods from poorer ones. With a
combination of theoretical analysis and numerical experiments, it is possible to
estimate truncation error accurately.
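The effect of refining the step can be seen in a small experiment (illustrative, not from the text): Euler's method, whose global error is O(h), is applied to y′ = y, y(0) = 1, and the error at x = 1 is compared for two step sizes.

```python
import math

def euler_error(h):
    """Global error at x = 1 of Euler's method applied to y' = y, y(0) = 1."""
    y, x = 1.0, 0.0
    n = round(1.0 / h)
    for _ in range(n):
        y += h * y          # Euler step for y' = y
        x += h
    return abs(math.e - y)  # exact solution is y = e^x

e1, e2 = euler_error(0.1), euler_error(0.05)   # halving the step size
ratio = e1 / e2            # close to 2 for a first-order method
```

Halving h roughly halves the error (the ratio comes out near 1.9), as expected for a first order method.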

Check Your Progress


1. Define Picard's method of successive approximation.
2. What is a predictor formula?
3. What are local errors in Milne's predictor-corrector formulae?
4. Where can the method of reduction to a pair of initial value problem be
applied?

10.3 ANSWERS TO CHECK YOUR PROGRESS QUESTIONS

1. In Picard’s method the first approximate solution y(1)(x) is obtained by
replacing y(x) by y0. Thus, y(1)(x) = y0 + ∫[x0, x] f(x, y0) dx. The second
approximate solution is derived on replacing y by y(1)(x). Thus,
y(2)(x) = y0 + ∫[x0, x] f(x, y(1)(x)) dx.
This iteration formula is known as Picard’s iteration for finding solution of a
first order differential equation, when an initial condition is given. The iterations
are continued until two successive approximate solutions yk and yk + 1 give
approximately the same result for the desired values of x up to a desired
accuracy.
2. A predictor formula is an open-type explicit formula derived by using, in the
integral, an interpolation formula which interpolates at the points xn, xn – 1,
..., xn – m.

3. The local errors in these formulae are (14/45)h⁵y(v)(ξ1) and –(1/90)h⁵y(v)(ξ2).
4. This method is applicable to linear differential equations only.
10.4 SUMMARY

Picard’s iteration is a method of finding solutions of a first order differential


equation when an initial condition is given.
The multistep method requires only one derivative evaluation per step; but
unlike the self starting Taylor series on Runge-Kutta methods, the multistep
methods make use of the solution at more than one previous step points.
These methods use a pair of multistep numerical integration. The first is the
predictor formula, which is an open-type explicit formula derived by using,
in the integral, an interpolation formula which interpolates at the points
xn, xn – 1, ..., xn – m. The second is the corrector formula which is obtained by
using interpolation formula that interpolates at the points xn + 1, xn, ..., xn – p in
the integral.
A boundary value problem requires the solution of an ordinary differential
equation of order 2 or more, when values of the dependent variable are given
at more than one point, usually at the two ends of an interval in which the
solution is required.
The methods used to solve a boundary value problem numerically include
reduction to a pair of initial value problems and the finite difference method.

10.5 KEY WORDS

Predictor formula: It is an open-type explicit formula derived by using, in


the integral, an interpolation formula which interpolates at the points xn,
xn – 1, ..., xn – m.
Corrector formula: It is obtained by using interpolation formula that
interpolates at the points xn + 1, xn, ..., xn – p in the integral.

10.6 SELF ASSESSMENT QUESTIONS AND EXERCISES

Short-Answer Questions
1. What are ordinary differential equations?
2. Name the methods for computing the numerical solution of differential
equations.
3. When is multistep method used?
4. Name the predictor-corrector methods.
5. How will you find the numerical solution of boundary value problems?
Long-Answer Questions

1. Use Picard’s method to compute values of y(0.1), y(0.2) and y(0.3) correct
to four decimal places, for the problem, y′ = x + y, y(0) = 1.
2. Given dy/dx = (1/2)(1 + x²)y², and y(0) = 1, y(0.1) = 1.06, y(0.2) = 1.12, y(0.3)
= 1.21. Compute y(0.4) by Milne’s predictor-corrector method.

10.7 FURTHER READINGS

Jain, M. K., S. R. K. Iyengar and R. K. Jain. 2007. Numerical Methods for


Scientific and Engineering Computation. New Delhi: New Age
International (P) Limited.
Atkinson, Kendall E. 1989. An Introduction to Numerical Analysis, 2nd Edition.
US: John Wiley & Sons.
Jain, M. K. 1983. Numerical Solution of Differential Equations. New Delhi:
New Age International (P) Limited.
Conte, Samuel D. and Carl de Boor. 1980. Elementary Numerical Analysis:
An Algorithmic Approach. New York: McGraw-Hill.
Skeel, Robert. D and Jerry B. Keiper. 1993. Elementary Numerical Computing
with Mathematica. New York: McGraw-Hill.
Balaguruswamy, E. 1999. Numerical Methods. New Delhi: Tata McGraw-Hill.
Datta, N. 2007. Computer Oriented Numerical Methods. New Delhi: Vikas
Publishing House Pvt. Ltd.

Euler’s Method

UNIT 11 EULER’S METHOD


Structure
11.0 Introduction
11.1 Objectives
11.2 Euler Method
11.3 Answers to Check Your Progress Questions
11.4 Summary
11.5 Key Words
11.6 Self Assessment Questions and Exercises
11.7 Further Readings

11.0 INTRODUCTION

The Euler method is a first-order method, which means that the local error (error
per step) is proportional to the square of the step size, and the global error (error
at a given time) is proportional to the step size. The Euler method often serves as
the basis to construct more complex methods.
In this unit, you will study about the Euler’s method and modified Euler’s
method.

11.1 OBJECTIVES

After going through this unit, you will be able to:


Analyse the Euler’s method
Understand about the modified Euler’s method

11.2 EULER’S METHOD

Euler’s method is a crude but simple method of solving a first order initial value problem:
dy/dx = f(x, y), y(x0) = y0
This is derived by integrating f(x0, y0) instead of f(x, y) over a small interval,
∫[x0, x0+h] dy = ∫[x0, x0+h] f(x0, y0) dx
y(x0 + h) – y(x0) = h f(x0, y0)

Writing y1 = y (x0+ h), we have


y1 = y0+h f (x0, y0) (11.1)
Similarly, we can write

y2 = y (x1+ h) = y1+ h f (x1, y1) (11.2)


where x1 = x0+ h.
Proceeding successively, we can get the solution at any xn = x0+ nh, as NOTES
yn = yn–1+ h f (xn–1, yn–1) (11.3)
This method, known as Euler’s method, can be geometrically interpreted, as
shown in Figure 11.1.
Fig. 11.1 Euler’s Method


For small step size h, the solution curve y = y(x) is approximated by the tangent line.
The local error at any xk, i.e., the truncation error of the Euler’s method is given
by,
ek = y(xk+1) – yk+1
Where yk+1 is the solution by Euler’s method. Thus,
ek = y(xk + h) – {yk + h f(xk, yk)}
= yk + hy′(xk) + (h²/2)y′′(xk + θh) – yk – hy′(xk), 0 < θ < 1
∴ ek = (h²/2)y′′(xk + θh), 0 < θ < 1

Note: The Euler’s method finds a sequence of values {yk} of y for the sequence of
values {xk}of x, step by step. But to get the solution up to a desired accuracy, we
have to take the step size h to be very small. Again, the method should not be used
for a larger range of x about x0, since the propagated error grows as integration
proceeds.
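The recurrence yn+1 = yn + h f(xn, yn) translates directly into code; the sketch below (names are my own) is applied to the problem treated in Example 1.

```python
def euler(f, x0, y0, h, n):
    """Advance the solution of y' = f(x, y) by n Euler steps of size h."""
    xs, ys = [x0], [y0]
    for _ in range(n):
        ys.append(ys[-1] + h * f(xs[-1], ys[-1]))
        xs.append(xs[-1] + h)
    return xs, ys

# y' = x^2 - y, y(0) = 1, h = 0.1
xs, ys = euler(lambda x, y: x * x - y, 0.0, 1.0, 0.1, 3)
# ys[1:] ≈ 0.9000, 0.8110, 0.7339
```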
Example 1: Solve the following differential equation by Euler’s method for x = 0.1,
0.2, 0.3; taking h = 0.1: dy/dx = x² – y, y(0) = 1. Compare the results with exact
solution.
Solution: Given dy/dx = x² – y, with y(0) = 1.

In Euler’s method one computes in successive steps, values of y1, y2, y3,... at
x1 = x0 + h, x2 = x0 + 2h, x3 = x0 + 3h, using the formula,
yn+1 = yn + hf(xn, yn), for n = 0, 1, 2, ...
i.e., yn+1 = yn + h(xn² – yn)

With h = 0.1 and starting with x0 = 0, y0 = 1, we present the successive


computations in the table given below.

n   xn    yn       f(xn, yn) = xn² – yn    yn+1 = yn + hf(xn, yn)
0   0.0   1.000    –1.000                  0.9000
1   0.1   0.900    –0.8900                 0.8110
2   0.2   0.8110   –0.7710                 0.7339
3   0.3   0.7339   –0.6439                 0.6695

The analytical solution of the differential equation, written as dy/dx + y = x², is
y e^x = ∫x²e^x dx + c
Or, y e^x = x²e^x – 2xe^x + 2e^x + c.
Since y = 1 for x = 0, ∴ c = –1.
∴ y = x² – 2x + 2 – e^(–x).
The following table compares the exact solution with the approximate solution
by Euler’s method.
n xn Approximate Solution Exact Solution % Error
1 0.1 0.9000 0.9052 0.57
2 0.2 0.8110 0.8213 1.25
3 0.3 0.7339 0.7492 2.04
Example 2: Compute the solution of the following initial value problem by Euler’s
method, for x = 0.1 correct to four decimal places, taking h = 0.02,
dy/dx = (y – x)/(y + x), y(0) = 1.

Solution: Euler’s method for solving an initial value problem,


dy/dx = f(x, y), y(x0) = y0, is yn+1 = yn + h f(xn, yn), for n = 0, 1, 2, ...
Taking h = 0.02, we have x1 = 0.02, x2 = 0.04, x3 = 0.06, x4 = 0.08, x5 = 0.1.
Using Euler’s method, we have, since y(0) = 1

y(0.02) = y1 = y0 + h f(x0, y0) = 1 + 0.02 × (1 – 0)/(1 + 0) = 1.0200
y(0.04) = y2 = y1 + h f(x1, y1) = 1.0200 + 0.02 × (1.0200 – 0.02)/(1.0200 + 0.02) = 1.0392
y(0.06) = y3 = y2 + h f(x2, y2) = 1.0392 + 0.02 × (1.0392 – 0.04)/(1.0392 + 0.04) = 1.0577
y(0.08) = y4 = y3 + h f(x3, y3) = 1.0577 + 0.02 × (1.0577 – 0.06)/(1.0577 + 0.06) = 1.0756
y(0.1) = y5 = y4 + h f(x4, y4) = 1.0756 + 0.02 × (1.0756 – 0.08)/(1.0756 + 0.08) = 1.0928
Hence, y (0.1) = 1.0928.

Modified Euler’s Method


In order to get somewhat moderate accuracy, Euler’s method is modified by computing
the derivative y ′ = f ( x, y ), at a point xn as the mean of f (xn, yn) and f (xn+1, y(0)n+1),
where,
y(0)n+1 = yn + h f(xn, yn)
y(1)n+1 = yn + (h/2)[f(xn, yn) + f(xn+1, y(0)n+1)] (11.4)
This modified method is known as Euler-Cauchy method. The local truncation
error of the modified Euler’s method is of the order O(h3).
Note: Modified Euler’s method can be used to compute the solution up to a desired
accuracy by applying it in an iterative scheme as stated below.

Compute y(0)n+1 = yn + h f(xn, yn)
Compute y(k+1)n+1 = yn + (h/2)[f(xn, yn) + f(xn+1, y(k)n+1)], for k = 0, 1, 2, ... (11.5)
The iterations are continued until two successive approximations yn( k+)1 and yn( k++11)
coincide to the desired accuarcy. As a rule, the iterations converge rapidly for a
sufficiently small h. If, however, after three or four iteration the iterations still do not
give the necessary accuracy in the solution, the spacing h is decreased and iterations
are performed again.
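The iterative scheme above can be sketched as follows (the helper name and tolerance are my own, illustrative choices). It is tried on y′ = x² + y, y(0) = 1 with h = 0.01, whose exact solution is y = 3e^x − x² − 2x − 2.

```python
import math

def modified_euler(f, x0, y0, h, n, tol=1e-6):
    """Modified (Euler-Cauchy) method: Euler predictor followed by an
    iterated trapezoidal corrector, stopped when two successive
    corrector iterates agree within tol."""
    x, y = x0, y0
    for _ in range(n):
        yp = y + h * f(x, y)                            # predictor
        while True:
            yc = y + h / 2 * (f(x, y) + f(x + h, yp))   # corrector
            if abs(yc - yp) < tol:
                break
            yp = yc
        x, y = x + h, yc
    return y

y = modified_euler(lambda x, y: x * x + y, 0.0, 1.0, 0.01, 2)   # y(0.02)
exact = 3 * math.exp(0.02) - 0.02**2 - 2 * 0.02 - 2
```

Two steps give y(0.02) ≈ 1.02020, in agreement with the exact value to about five decimal places.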
Example 3: Use modified Euler’s method to compute y(0.02) for the initial value
problem, dy/dx = x² + y, with y(0) = 1, taking h = 0.01. Compare the result with the
exact solution.
Solution: Modified Euler’s method consists of obtaining the solution at successive
points, x1 = x0 + h, x2 = x0 + 2h,..., xn = x0 + nh, by the two stage computations given
by,
y(0)n+1 = yn + hf(xn, yn)
y(1)n+1 = yn + (h/2)[f(xn, yn) + f(xn+1, y(0)n+1)].
For the given problem, f(x, y) = x² + y and h = 0.01
y1(0) = y0 + h[x0² + y0] = 1 + 0.01 × 1 = 1.01
y1(1) = 1 + (0.01/2)[1.0 + 1.01 + (0.01)²] = 1.01005
i.e., y1 = y(0.01) = 1.01005
Next, y2(0) = y1 + h[x1² + y1]
= 1.01005 + 0.01[(0.01)² + 1.01005]
= 1.01005 + 0.0101015 = 1.02015
y2(1) = 1.01005 + (0.01/2)[(0.01)² + 1.01005 + (0.02)² + 1.02015]
= 1.01005 + 0.005 × (2.03070)
= 1.01005 + 0.01015
= 1.02020
∴ y2 = y(0.02) = 1.02020
The exact solution of the problem is y = 3e^x – x² – 2x – 2, which gives
y(0.02) = 1.02020; the modified Euler value agrees to five decimal places.

Euler’s Method for a Pair of Differential Equations


Consider an initial value problem associated with a pair of first order differential
equations given by,
dy/dx = f(x, y, z), dz/dx = g(x, y, z) (11.6)
with y (x0) = y0, z (x0) = z0 (11.7)
Euler’s method can be extended to compute approximate values yi and zi of y
(xi) and z (xi) respectively given by,
yi+1 = yi+h f (xi, yi, zi)
zi+1 = zi+h g (xi, yi, zi) (11.8)
starting with i = 0 and continuing step by step for i = 1, 2, 3,... Evidently, we can also
extend Euler’s method for an initial value problem associated with a second order
differential equation by rewriting it as a pair of first order equations.
Consider the initial value problem,
d²y/dx² = g(x, y, dy/dx), with y(x0) = y0, y′(x0) = y0′
We write dy/dx = z, so that dz/dx = g(x, y, z) with y(x0) = y0 and z(x0) = y0′.
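Equations (11.8) in code (a sketch with illustrative names), applied to the second order problem treated in Example 4 that follows:

```python
def euler_system(f, g, x, y, z, h, n):
    """n Euler steps for the pair y' = f(x, y, z), z' = g(x, y, z)."""
    for _ in range(n):
        # both updates use the values from the start of the step
        y, z = y + h * f(x, y, z), z + h * g(x, y, z)
        x += h
    return y, z

# y'' + y'/x + y = 0 rewritten as y' = z, z' = -z/x - y
y, z = euler_system(lambda x, y, z: z,
                    lambda x, y, z: -z / x - y,
                    1.0, 0.77, -0.44, 0.1, 1)
# one step: y(1.1) ≈ 0.726, z(1.1) ≈ -0.473
```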
Example 4: Compute y(1.1) and y(1.2) by solving the initial value problem,
y′′ + y′/x + y = 0, with y(1) = 0.77, y′(1) = –0.44

Solution: We can rewrite the problem as y′ = z, z′ = –z/x – y; with y(1) = 0.77 and
z(1) = –0.44.
Taking h = 0.1, we use Euler’s method for the problem in the form,
yi+1 = yi + hzi
zi+1 = zi + h(–zi/xi – yi), i = 0, 1, 2, ...
Thus, y1 = y (1.1) and z1 = z (1.1) are given by,
y1 = y0 + hz0 = 0.77 + 0.1 × (–0.44) = 0.726
z1 = z0 + h(–z0/x0 – y0) = –0.44 + 0.1 × (0.44 – 0.77)
= –0.44 – 0.033 = –0.473

Similarly, y2 = y(1.2) = y1 + hz1 = 0.726 + 0.1 × (–0.473) = 0.679
z2 = z(1.2) = z1 + h(–z1/x1 – y1)
= –0.473 + 0.1 × (0.473/1.1 – 0.726)
= –0.473 + 0.1 × (–0.296) = –0.503
Thus, y(1.1) = 0.726 and y(1.2) = 0.679.

Example 5: Using Euler’s method, compute y (0.1) and y (0.2) for the initial value
problem,
y ′′ + y = 0, y (0) = 0, y ′(0) = 1

Solution: We rewrite the initial value problem as y′ = z, z′ = –y, with y(0) = 0, z(0) = 1.
Taking h = 0.1, we have by Euler’s method,
y1 = y(0.1) = y0 + hz0 = 0 + 0.1 × 1 = 0.1
z1 = z(0.1) = z0 + h(–y0) = 1 + 0.1 × 0 = 1.0
y2 = y(0.2) = y1 + hz1 = 0.1 + 0.1 × 1.0 = 0.2
z2 = z(0.2) = z1 – hy1 = 1.0 – 0.1 × 0.1 = 0.99

Example 6: For the initial value problem y′′ + xy′ + y = 0, y(0) = 0, y′(0) = 1,
compute the values of y for x = 0.05, 0.10, 0.15 and 0.20, with error not exceeding
0.5 × 10⁻⁴.
Solution: We form the Taylor series expansion using y(0) = 0, y′(0) = 1 and, from the
differential equation,
y′′ + xy′ + y = 0, we get y′′(0) = 0
y′′′(x) = –xy′′ – 2y′ ∴ y′′′(0) = –2
y(iv)(x) = –xy′′′ – 3y′′ ∴ y(iv)(0) = 0
y(v)(x) = –xy(iv) – 4y′′′ ∴ y(v)(0) = 8
And in general, y(2n)(0) = 0, y(2n+1)(0) = –2n y(2n–1)(0) = (–1)ⁿ2ⁿn!
Thus, y(x) = x – x³/3 + x⁵/15 – ... + (–1)ⁿ2ⁿn! x^(2n+1)/(2n+1)! + ...
This is an alternating series whose terms decrease. Using this, we form the
solution for y up to x = 0.2 as given below:
x      0    0.05     0.10     0.15     0.20
y(x)   0    0.0500   0.0997   0.1489   0.1974

Check Your Progress


1. How are Euler's method and Taylor's method related?
2. Why should we not use Euler's method for a larger range of x?

11.3 ANSWERS TO CHECK YOUR PROGRESS QUESTIONS
1. If we take k = 1, we get the Euler’s method, y1 = y0 + h f(x0, y0).
2. The method should not be used for a larger range of x about x0, since the
propagated error grows as integration proceeds.

11.4 SUMMARY

Euler’s is a crude but simple method of solving a first order initial value
problem:
dy/dx = f(x, y), y(x0) = y0
The local error at any xk , i.e., the truncation error of the Euler’s method is
given by,
ek = y(xk+1) – yk+1
Modified Euler’s method can be used to compute the solution up to a desired
accuracy by applying it in an iterative scheme as stated below.

Compute y(0)n+1 = yn + h f(xn, yn)
Compute y(k+1)n+1 = yn + (h/2)[f(xn, yn) + f(xn+1, y(k)n+1)], for k = 0, 1, 2, ...
Euler’s method can be extended to compute approximate values yi and zi of
y (xi) and z (xi) respectively given by,
yi+1 = yi+h f (xi, yi, zi)
zi+1 = zi+h g (xi, yi, zi)

11.5 KEY WORDS


Euler’s method: The Euler’s method finds a sequence of values {yk} of y
for the sequence of values {xk}of x, step by step.

11.6 SELF ASSESSMENT QUESTIONS AND EXERCISES

Short-Answer Questions
1. Define Euler’s method.
2. Explain the Euler’s method for a pair of differential equations.
Long-Answer Questions
1. Compute values of y at x = 0.02, by Euler’s method taking h = 0.01, given
y is the solution of the following initial value problem: dy/dx = x³ + y, y(0) = 1.
2. Evaluate y(0.02) by modified Euler’s method, given y′ = x² + y, y(0) = 1,
correct to four decimal places.

11.7 FURTHER READINGS

Jain, M. K., S. R. K. Iyengar and R. K. Jain. 2007. Numerical Methods for


Scientific and Engineering Computation. New Delhi: New Age
International (P) Limited.
Atkinson, Kendall E. 1989. An Introduction to Numerical Analysis, 2nd Edition.
US: John Wiley & Sons.
Jain, M. K. 1983. Numerical Solution of Differential Equations. New Delhi:
New Age International (P) Limited.
Conte, Samuel D. and Carl de Boor. 1980. Elementary Numerical Analysis:
An Algorithmic Approach. New York: McGraw-Hill.
Skeel, Robert. D and Jerry B. Keiper. 1993. Elementary Numerical Computing
with Mathematica. New York: McGraw-Hill.
Balaguruswamy, E. 1999. Numerical Methods. New Delhi: Tata McGraw-Hill.
Datta, N. 2007. Computer Oriented Numerical Methods. New Delhi: Vikas
Publishing House Pvt. Ltd.
Taylor’s Method
BLOCK - IV
TAYLOR’S METHOD, R.K METHOD AND STABILITY ANALYSIS

UNIT 12 TAYLOR’S METHOD


Structure
12.0 Introduction
12.1 Objectives
12.2 Taylor’s Method
12.3 Answers to Check Your Progress Questions
12.4 Summary
12.5 Key Words
12.6 Self Assessment Questions and Exercises
12.7 Further Readings

12.0 INTRODUCTION

In mathematics, the Taylor series of a function is an infinite sum of terms that are
expressed in terms of the function’s derivatives at a single point. For most common
functions, the function and the sum of its Taylor series are equal near this point.
Taylor’s series are named after Brook Taylor who introduced them in 1715.
In this unit, you will study about the Taylor’s method.

12.1 OBJECTIVES

After going through this unit, you will be able to:


Understand the Taylor’s method
Explain the Taylor’s series

12.2 TAYLOR’S METHOD


Consider the solution of the first order differential equation,
dy/dx = f(x, y) with y(x0) = y0 (12.1)
where f (x, y) is sufficiently differentiable with respect to x and y. The solution y (x)
of the problem can be expanded about the point x0 by a Taylor series in the form,
y(x0 + h) = y(x0) + hy′(x0) + (h²/2!)y′′(x0) + ... + (h^k/k!)y(k)(x0) + (h^(k+1)/(k+1)!)y(k+1)(ξ) (12.2)

The derivatives in the above expansion can be determined as follows,

y ′( x0 ) = f ( x0 , y 0 )
y ′′ ( x0 ) = f x ( x0 , y0 ) + f y ( x0 , y 0 ) y ′( x0 )

y ′′′ ( x0 ) = f xx ( x0 , y0 ) + 2 f xy ( x0 , y 0 ) y ′( x0 ) + f yy ( x0 , y0 ) { y ′ ( x0 )}2 + f y ( x, y ) y ′′ ( x0 ) NOTES


where a suffix x or y denotes partial derivative with respect to x or y.
Thus the value of y1 = y (x0+h), can be computed by taking the Taylor series
expansion shown above. Usually, because of difficulties in obtaining higher order
derivatives, commonly a fourth order method is used. The solution at x2 = x1+h, can
be found by evaluating the derivatives at (x1, y1) and using the expansion; otherwise,
writing x2 = x0+2h, we can use the same expansion. This process can be continued
for determining yn+1 with known values xn, yn.
Note: If we take k = 1, we get the Euler’s method, y1 = y0+h f (x0, y0).
Thus, Euler’s method is a particular case of Taylor series method.
Example 1: Form the Taylor series solution of the initial value problem,
dy/dx = xy + 1, y(0) = 1, up to five terms and hence compute y(0.1) and y(0.2), correct
to four decimal places.
Solution: We have, y′ = xy + 1, y(0) = 1
Differentiating successively we get,
y′′(x) = xy′ + y, ∴ y′′(0) = 1
y′′′(x) = xy′′ + 2y′, ∴ y′′′(0) = 2
y(iv)(x) = xy′′′ + 3y′′, ∴ y(iv)(0) = 3
y(v)(x) = xy(iv) + 4y′′′, ∴ y(v)(0) = 8
Hence, the Taylor series solution y(x) is given by,
y(x) ≈ y(0) + xy′(0) + (x²/2)y′′(0) + (x³/3!)y′′′(0) + (x⁴/4!)y(iv)(0) + (x⁵/5!)y(v)(0)
≈ 1 + x + x²/2 + (x³/6) × 2 + (x⁴/24) × 3 + (x⁵/120) × 8
= 1 + x + x²/2 + x³/3 + x⁴/8 + x⁵/15
∴ y(0.1) ≈ 1 + 0.1 + 0.01/2 + 0.001/3 + 0.0001/8 + 0.00001/15 = 1.1053
Similarly, y(0.2) ≈ 1 + 0.2 + 0.04/2 + 0.008/3 + 0.0016/8 + 0.00032/15 = 1.2229
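Re-deriving the coefficients gives y(v)(0) = 8, so the last retained term is x⁵/15; the truncated polynomial can then be checked against an accurate reference value obtained by running the classical fourth order Runge-Kutta method with a very small step (the code is an illustrative sketch, not from the text).

```python
def taylor5(x):
    """Truncated Taylor solution of y' = xy + 1, y(0) = 1:
    y(x) ~ 1 + x + x^2/2 + x^3/3 + x^4/8 + x^5/15."""
    return 1 + x + x**2/2 + x**3/3 + x**4/8 + x**5/15

def rk4(f, x, y, h, n):
    """Classical RK4, used here only to produce a reference value."""
    for _ in range(n):
        k1 = h * f(x, y)
        k2 = h * f(x + h/2, y + k1/2)
        k3 = h * f(x + h/2, y + k2/2)
        k4 = h * f(x + h, y + k3)
        y += (k1 + 2*k2 + 2*k3 + k4) / 6
        x += h
    return y

ref = rk4(lambda x, y: x * y + 1, 0.0, 1.0, 0.001, 200)   # accurate y(0.2)
# taylor5(0.1) ≈ 1.1053, taylor5(0.2) ≈ 1.2229, within about 1e-5 of ref
```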
Example 2: Find first two non-vanishing terms in the Taylor series solution of the
initial value problem y ′ = x 2 + y 2 , y (0) = 0. Hence, compute y (0.1), y (0.2), y (0.3)
and comment on the accuracy of the solution.

Solution: We have, y′ = x² + y², y(0) = 0
Differentiating successively we have,
y′′ = 2x + 2yy′, ∴ y′′(0) = 0
y′′′ = 2 + 2[yy′′ + (y′)²], ∴ y′′′(0) = 2
y(iv) = 2(yy′′′ + 3y′y′′), ∴ y(iv)(0) = 0
y(v) = 2[yy(iv) + 4y′y′′′ + 3(y′′)²], ∴ y(v)(0) = 0
y(vi) = 2[yy(v) + 5y′y(iv) + 10y′′y′′′], ∴ y(vi)(0) = 0
y(vii) = 2[yy(vi) + 6y′y(v) + 15y′′y(iv) + 10(y′′′)²], ∴ y(vii)(0) = 80
The Taylor series up to two non-vanishing terms is y(x) = (x³/6) × 2 + (x⁷/7!) × 80 = x³/3 + x⁷/63
Hence y(0.1) ≈ 0.00033, y(0.2) ≈ 0.00267 and y(0.3) ≈ 0.00900; since the next
non-vanishing term is of order x¹¹, these values are correct to the places shown.
Example 3: Given xy′ = x – y², y(2) = 1, evaluate y(2.1), y(2.2) and y(2.3) correct
to four decimal places using Taylor series method.
Solution: Given xy′ = x – y², i.e., y′ = 1 – y²/x, and y = 1 for x = 2. To compute
y(2.1) by Taylor series method, we first find the derivatives of y at x = 2.
y′ = 1 – y²/x ∴ y′(2) = 1 – 1/2 = 0.5
Differentiating xy′ = x – y²: xy′′ + y′ = 1 – 2yy′
∴ 2y′′(2) + 1/2 = 1 – 2 × (1/2) = 0, ∴ y′′(2) = –1/4 = –0.25
xy′′′ + 2y′′ = –2(y′)² – 2yy′′
∴ 2y′′′(2) + 2(–1/4) = –2(1/2)² – 2(–1/4)
Or, 2y′′′(2) = 1/2 ∴ y′′′(2) = 1/4 = 0.25
xy(iv) + 3y′′′ = –6y′y′′ – 2yy′′′
∴ 2y(iv)(2) + 3(1/4) = –6(1/2)(–1/4) – 2(1/4) = 3/4 – 1/2 = 1/4
∴ 2y(iv)(2) = –1/2, ∴ y(iv)(2) = –0.25
y(2.1) = y(2) + 0.1y′(2) + ((0.1)²/2)y′′(2) + ((0.1)³/3!)y′′′(2) + ((0.1)⁴/4!)y(iv)(2)
= 1 + 0.1 × 0.5 + (0.01/2)(–0.25) + (0.001/6)(0.25) + (0.0001/24)(–0.25)
= 1 + 0.05 – 0.00125 + 0.00004 – 0.000001
= 1.0488
y(2.2) = 1 + 0.2 × 0.5 + (0.04/2)(–0.25) + (0.008/6)(0.25) + (0.0016/24)(–0.25)
= 1 + 0.1 – 0.005 + 0.00033 – 0.00002
= 1.0953
y(2.3) = 1 + 0.3 × 0.5 + (0.09/2)(–0.25) + (0.027/6)(0.25) + (0.0081/24)(–0.25)
= 1 + 0.15 – 0.01125 + 0.00113 – 0.00008
= 1.1398

Check Your Progress


1. Explain the Taylor’s method.
2. Derive the derivatives of the Taylor’s series.

12.3 ANSWERS TO CHECK YOUR PROGRESS QUESTIONS

1. Consider the solution of the first order differential equation,


dy
= f ( x, y ) with y ( x0 ) = y 0
dx
2. The derivatives of Taylor’s series can be determined as follows:

y ′( x0 ) = f ( x0 , y0 )
y ′′ ( x0 ) = f x ( x0 , y0 ) + f y ( x0 , y0 ) y ′( x0 )
2
y ′′′ ( x0 ) = f xx ( x0 , y0 ) + 2 f xy ( x0 , y0 ) y ′( x0 ) + f yy ( x0 , y0 ) { y ′ ( x0 )} + f y ( x, y ) y ′′ ( x0 )

12.4 SUMMARY

The solution y (x) of the problem can be expanded about the point x0 by a
Taylor series in the form,
y(x0 + h) = y(x0) + hy′(x0) + (h²/2!)y′′(x0) + ... + (h^k/k!)y(k)(x0) + (h^(k+1)/(k+1)!)y(k+1)(ξ)
Because of difficulties in obtaining higher order derivatives, commonly a
fourth order method is used.
The solution at x2 = x1+h, can be found by evaluating the derivatives at
(x1, y1) and using the expansion; otherwise, writing x2 = x0+2h, we can use
the same expansion.

12.5 KEY WORDS
Taylor’s series method: If we take k = 1, we get the Euler’s method,
y1 = y0 + h f(x0, y0). Thus, Euler’s method is a particular case of Taylor
series method.

12.6 SELF ASSESSMENT QUESTIONS AND EXERCISES

Short-Answer Questions
1. What is Taylor’s series?
2. Give one example of Taylor’s method.
Long-Answer Questions
1. Discuss about the Taylor’s method.
2. Compute the derivatives of Taylor’s expansion.

12.7 FURTHER READINGS

Jain, M. K., S. R. K. Iyengar and R. K. Jain. 2007. Numerical Methods for


Scientific and Engineering Computation. New Delhi: New Age
International (P) Limited.
Atkinson, Kendall E. 1989. An Introduction to Numerical Analysis, 2nd Edition.
US: John Wiley & Sons.
Jain, M. K. 1983. Numerical Solution of Differential Equations. New Delhi:
New Age International (P) Limited.
Conte, Samuel D. and Carl de Boor. 1980. Elementary Numerical Analysis:
An Algorithmic Approach. New York: McGraw-Hill.
Skeel, Robert. D and Jerry B. Keiper. 1993. Elementary Numerical Computing
with Mathematica. New York: McGraw-Hill.
Balaguruswamy, E. 1999. Numerical Methods. New Delhi: Tata McGraw-Hill.
Datta, N. 2007. Computer Oriented Numerical Methods. New Delhi: Vikas
Publishing House Pvt. Ltd.

Runge Kutta Method

UNIT 13 RUNGE KUTTA METHOD


Structure
13.0 Introduction
13.1 Objectives
13.2 Runge Kutta Method
13.3 Answers to Check Your Progress Questions
13.4 Summary
13.5 Key Words
13.6 Self Assessment Questions and Exercises
13.7 Further Readings

13.0 INTRODUCTION

In numerical analysis, the Runge–Kutta methods are a family of implicit and explicit
iterative methods, which include the well-known routine called the Euler Method,
used in temporal discretization for the approximate solutions of ordinary differential
equations.
In this unit, you will study about the Runge-Kutta methods and Runge-
Kutta methods for a pair of equations.

13.1 OBJECTIVES

After going through this unit, you will be able to:


Analyse the Runge-Kutta methods
Understand the Runge-Kutta methods for a pair of equations

13.2 RUNGE KUTTA METHOD


Runge-Kutta methods can be of different orders. They are very useful when the
method of Taylor series is not easy to apply because of the complexity of finding
higher order derivatives. Runge-Kutta methods attempt to get better accuracy and
at the same time obviate the need for computing higher order derivatives. These
methods, however, require the evaluation of the first order derivatives at several
off-step points.
Here we consider the derivation of Runge-Kutta method of order 2.
The solution at the (n + 1)th step is assumed in the form,
yn+1 = yn+ ak1+ bk2 (13.1)

Where k1 = h f(xn, yn) and
k2 = h f(xn + αh, yn + βk1), for n = 0, 1, 2,... (13.2)
The unknown parameters a, b, α and β are determined by expanding in Taylor
series and forming equations by equating coefficients of like powers of h. We have,
yn+1 = y(xn + h) = yn + hy′(xn) + (h²/2)y′′(xn) + (h³/6)y′′′(xn) + O(h⁴)
= yn + h f(xn, yn) + (h²/2)[fx + f fy]n + (h³/6)[fxx + 2f fxy + f²fyy + fx fy + f fy²]n + O(h⁴) (13.3)
The subscript n indicates that the functions within brackets are to be evaluated
at (xn, yn).
Again, expanding k2 by Taylor series in two variables, we have
k2 = h[fn + αh(fx)n + βk1(fy)n + (α²h²/2)(fxx)n + αβhk1(fxy)n + (β²k1²/2)(fyy)n + O(h³)] (13.4)
Thus, on substituting the expansion (13.4) of k2 into Equation (13.1), we get
yn+1 = yn + (a + b)hfn + bh²(αfx + βf fy)n + bh³((α²/2)fxx + αβf fxy + (β²/2)f²fyy)n + O(h⁴)
On comparing with the expansion of yn+1 and equating coefficients of h and h²,
we get the relations,
a + b = 1, bα = bβ = 1/2
There are three equations for the determination of four unknown parameters.
Thus, there are many solutions. However, usually a symmetric solution is taken by
setting a = b = 1/2; then α = β = 1.
Thus we can write a Runge-Kutta method of order 2 in the form,
h
yn +1 =yn + [ f ( xn , yn ) + f ( xn + h, yn + h f ( xn , yn ))], for n =0, 1, 2,... (13.5)
2
Proceeding as in second order method, Runge-Kutta method of order 4 can be
formulated. Omitting the derivation, we give below the commonly used Runge-
Kutta method of order 4.

yn+1 = yn + (1/6)(k1 + 2k2 + 2k3 + k4) + O(h⁵)
k1 = h f(xn, yn)
k2 = h f(xn + h/2, yn + k1/2)
k3 = h f(xn + h/2, yn + k2/2)
k4 = h f(xn + h, yn + k3) (13.6)
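Formulae (13.6) are straightforward to program. The sketch below (names are my own) takes one step for y′ = x + y, y(0) = 1, whose exact solution is y = 2e^x − x − 1:

```python
import math

def rk4_step(f, x, y, h):
    """One step of the fourth order Runge-Kutta method, Equations (13.6)."""
    k1 = h * f(x, y)
    k2 = h * f(x + h/2, y + k1/2)
    k3 = h * f(x + h/2, y + k2/2)
    k4 = h * f(x + h, y + k3)
    return y + (k1 + 2*k2 + 2*k3 + k4) / 6

y = rk4_step(lambda x, y: x + y, 0.0, 1.0, 0.1)   # y(0.1) ≈ 1.110342
exact = 2 * math.exp(0.1) - 0.1 - 1
# |y - exact| is about 2e-7, consistent with the O(h^5) local error
```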

Runge-Kutta method of order 4 requires the evaluation of the first order derivative
f (x, y), at four points. The method is self-starting. The error estimate with this
method can be roughly given by,
|y(xn) – yn| ≈ (yn* – yn)/15 (13.7)
where yn* and yn are the approximate values computed with h/2 and h respectively
as step size and y(xn) is the exact solution.
Note: In particular, for the special form of differential equation y′ = F(x), a function
of x alone, the Runge-Kutta method reduces to the Simpson’s one-third formula of
numerical integration from xn to xn+1. Then,
yn+1 = yn + ∫[xn, xn+1] F(x) dx
Or, yn+1 = yn + (h/6)[F(xn) + 4F(xn + h/2) + F(xn + h)]
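The reduction can be verified directly: when f does not depend on y, k2 = k3, and the RK4 weights 1, 2, 2, 1 collapse to Simpson's 1, 4, 1. A quick check (illustrative) with F(x) = x³, for which Simpson's rule is exact:

```python
def rk4_step(F, x, y, h):
    """RK4 step for y' = F(x), a function of x alone."""
    k1 = h * F(x)
    k2 = h * F(x + h/2)
    k3 = h * F(x + h/2)      # identical to k2 since F ignores y
    k4 = h * F(x + h)
    return y + (k1 + 2*k2 + 2*k3 + k4) / 6

def simpson(F, a, b):
    """Simpson's one-third rule over a single interval [a, b]."""
    h = b - a
    return h / 6 * (F(a) + 4 * F((a + b) / 2) + F(b))

F = lambda x: x**3
step = rk4_step(F, 0.0, 0.0, 1.0)   # integral of x^3 over [0, 1]
area = simpson(F, 0.0, 1.0)
# both equal 1/4, the exact integral
```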
Runge-Kutta methods are widely used, particularly for finding starting values at
steps x1, x2, x3, ..., since they do not require evaluation of higher order derivatives. It
is also easy to implement the method in a computer program.
Example 1: Compute values of y (0.1) and y (0.2) by 4th order Runge-Kutta method,
correct to five significant figures for the initial value problem,
dy
= x + y , y ( 0) = 1
dx
dy
Solution: We have = x + y , y ( 0) = 1
dx
∴ f ( x, y ) =
x + y, h=
0.1, x0 =
0, y0 =
1
By Runge-Kutta method,
y(0.1) = y(0) + (1/6)(k1 + 2k2 + 2k3 + k4)
where, k1 = h f(x0, y0) = 0.1 × (0 + 1) = 0.1
k2 = h f(x0 + h/2, y0 + k1/2) = 0.1 × (0.05 + 1.05) = 0.11
k3 = h f(x0 + h/2, y0 + k2/2) = 0.1 × (0.05 + 1.055) = 0.1105
k4 = h f(x0 + h, y0 + k3) = 0.1 × (0.1 + 1.1105) = 0.12105
∴ y(0.1) = 1 + (1/6)[0.1 + 2(0.11 + 0.1105) + 0.12105] = 1.11034
Thus, x1 = 0.1, y1 = 1.11034

y (0.2) = y (0.1) + (1/6)(k1 + 2k2 + 2k3 + k4)

k1 = h f (x1, y1) = 0.1 × (0.1 + 1.11034) = 0.121034
k2 = h f (x1 + h/2, y1 + k1/2) = 0.1 × (0.15 + 1.17086) = 0.132086
k3 = h f (x1 + h/2, y1 + k2/2) = 0.1 × (0.15 + 1.17638) = 0.132638
k4 = h f (x1 + h, y1 + k3) = 0.1 × (0.2 + 1.24298) = 0.144298

∴ y2 = y (0.2) = 1.11034 + (1/6)[0.121034 + 2 (0.132086 + 0.132638) + 0.144298] = 1.2428
Example 2: Use Runge-Kutta method of order 4 to evaluate y (1.1) and y (1.2), by
taking step length h = 0.1 for the initial value problem,

dy/dx = x² + y², y (1) = 0
Solution: For the initial value problem,

dy/dx = f (x, y), y (x0) = y0, the Runge-Kutta method of order 4 is given as,

yn+1 = yn + (1/6)(k1 + 2k2 + 2k3 + k4)

where,
k1 = h f (xn, yn)
k2 = h f (xn + h/2, yn + k1/2)
k3 = h f (xn + h/2, yn + k2/2)
k4 = h f (xn + h, yn + k3), for n = 0, 1, 2,...
For the given problem, f (x, y) = x² + y², x0 = 1, y0 = 0, h = 0.1.
Thus,

k1 = h f (x0, y0) = 0.1 × (1² + 0²) = 0.1
k2 = h f (x0 + h/2, y0 + k1/2) = 0.1 × [(1.05)² + (0.05)²] = 0.11050
k3 = h f (x0 + h/2, y0 + k2/2) = 0.1 × [(1.05)² + (0.05525)²] = 0.110555
k4 = h f (x0 + h, y0 + k3) = 0.1 × [(1.1)² + (0.110555)²] = 0.122222

∴ y1 = y0 + (1/6)(k1 + 2k2 + 2k3 + k4)
      = (1/6)(0.1 + 0.22100 + 0.221111 + 0.122222)
      = 0.110722

For y (1.2):

k1 = 0.1 × [(1.1)² + (0.11072)²] = 0.122226
k2 = 0.1 × [(1.15)² + (0.17183)²] = 0.135203
k3 = 0.1 × [(1.15)² + (0.17832)²] = 0.135430
k4 = 0.1 × [(1.2)² + (0.24615)²] = 0.150059

∴ y2 = y (1.2) = 0.11072 + (1/6)(0.122226 + 0.270406 + 0.270860 + 0.150059)
      = 0.24631

Algorithm: Solution of first order differential equation by Runge-Kutta method of
order 2: y′ = f (x, y) with y (x0) = y0.
Step 1: Define f (x, y)
Step 2: Read x0, y0, h, xf [h is step size, xf is final x]
Step 3: Repeat Steps 4 to 11 until x1 > xf
Step 4: Compute k1 = f (x0, y0)
Step 5: Compute y1 = y0 + hk1
Step 6: Compute x1 = x0 + h
Step 7: Compute k2 = f (x1, y1)
Step 8: Compute y1 = y0 + h × (k1 + k2)/2
Step 9: Write x1, y1
Step 10: Set x0 = x1
Step 11: Set y0 = y1
Step 12: Stop
Algorithm: Solution of y′ = f (x, y), y (x0) = y0 by Runge-Kutta method of
order 4.
Step 1: Define f (x, y)
Step 2: Read x0, y0, h, xf
Step 3: Repeat Step 4 to Step 16 until x1 > xf
Step 4: Compute k1 = h f (x0, y0)
Step 5: Compute x = x0 + h/2
Step 6: Compute y = y0 + k1/2
Step 7: Compute k2 = h f (x, y)
Step 8: Compute y = y0 + k2/2
Step 9: Compute k3 = h f (x, y)
Step 10: Compute x1 = x0 + h
Step 11: Compute y = y0 + k3
Step 12: Compute k4 = h f (x1, y)
Step 13: Compute y1 = y0 + (k1 + 2 (k2 + k3) + k4)/6
Step 14: Write x1, y1
Step 15: Set x0 = x1
Step 16: Set y0 = y1
Step 17: Stop
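The algorithm above translates almost step for step into code; the sketch below (illustrative names, not from the text) marches y′ = f (x, y) from x0 to xf:

```python
def rk4_solve(f, x0, y0, h, xf):
    """March y' = f(x, y), y(x0) = y0 from x0 to xf in steps of h (Steps 3-16)."""
    xs, ys = [x0], [y0]
    while x0 + h <= xf + 1e-12:       # Step 3: repeat until x1 > xf
        k1 = h * f(x0, y0)            # Step 4
        k2 = h * f(x0 + h/2, y0 + k1/2)   # Steps 5-7
        k3 = h * f(x0 + h/2, y0 + k2/2)   # Steps 8-9
        k4 = h * f(x0 + h, y0 + k3)       # Steps 10-12
        y0 = y0 + (k1 + 2*(k2 + k3) + k4) / 6.0   # Step 13
        x0 = x0 + h                   # Steps 15-16: advance
        xs.append(x0)
        ys.append(y0)
    return xs, ys

xs, ys = rk4_solve(lambda x, y: x + y, 0.0, 1.0, 0.1, 0.2)
print(xs[-1], round(ys[-1], 4))       # 0.2 1.2428
```

Run on Example 1, the loop reproduces y(0.2) ≈ 1.2428 after two steps.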

Runge-Kutta Method for a Pair of Equations

Consider an initial value problem associated with a system of two first order ordinary
differential equations in the form,

dy/dx = f (x, y, z), dz/dx = g (x, y, z)
with y (x0) = y0 and z (x0) = z0

The Runge-Kutta method of order 4 can be easily extended in the following
form,

yi+1 = yi + (1/6)(k1 + 2k2 + 2k3 + k4)
zi+1 = zi + (1/6)(l1 + 2l2 + 2l3 + l4), for i = 0, 1, 2,...        (13.8)

where k1 = hf (xi, yi, zi), l1 = hg (xi, yi, zi)
k2 = hf (xi + h/2, yi + k1/2, zi + l1/2), l2 = hg (xi + h/2, yi + k1/2, zi + l1/2)
k3 = hf (xi + h/2, yi + k2/2, zi + l2/2), l3 = hg (xi + h/2, yi + k2/2, zi + l2/2)
k4 = hf (xi + h, yi + k3, zi + l3), l4 = hg (xi + h, yi + k3, zi + l3)

yi = y (xi), zi = z (xi), i = 0, 1, 2,...

The solutions for y (x) and z (x) are determined at successive step points x1 = x0 + h,
x2 = x1 + h = x0 + 2h,..., xN = x0 + Nh.
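As an illustration of scheme (13.8), the sketch below (names and test system chosen for illustration, not from the text) integrates y′ = z, z′ = x − y with y(0) = z(0) = 0, whose exact solution is y = x − sin x, z = 1 − cos x:

```python
import math

def rk4_pair_step(f, g, x, y, z, h):
    # One step of Equation (13.8) for y' = f(x,y,z), z' = g(x,y,z)
    k1, l1 = h * f(x, y, z), h * g(x, y, z)
    k2, l2 = (h * f(x + h/2, y + k1/2, z + l1/2),
              h * g(x + h/2, y + k1/2, z + l1/2))
    k3, l3 = (h * f(x + h/2, y + k2/2, z + l2/2),
              h * g(x + h/2, y + k2/2, z + l2/2))
    k4, l4 = h * f(x + h, y + k3, z + l3), h * g(x + h, y + k3, z + l3)
    return (y + (k1 + 2*k2 + 2*k3 + k4) / 6.0,
            z + (l1 + 2*l2 + 2*l3 + l4) / 6.0)

f = lambda x, y, z: z          # y' = z
g = lambda x, y, z: x - y      # z' = x - y
x, y, z, h = 0.0, 0.0, 0.0, 0.1
for _ in range(10):            # integrate to x = 1
    y, z = rk4_pair_step(f, g, x, y, z, h)
    x += h
print(abs(y - (1.0 - math.sin(1.0))) < 1e-5)   # True: matches y = x - sin x
```

With h = 0.1 the computed values agree with the exact solution to about five decimal places, consistent with the O(h⁴) accuracy of the method.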

Runge-Kutta Method for a Second Order Differential Equation

Consider the initial value problem associated with a second order differential equation,

d²y/dx² = g (x, y, y′)

with y (x0) = y0 and y′ (x0) = y0′

On substituting z = y′, the above problem is reduced to the problem,

dy/dx = z, dz/dx = g (x, y, z)
with y (x0) = y0 and z (x0) = y′ (x0) = y0′

which is an initial value problem associated with a system of two first order differential
equations. Thus we can write the Runge-Kutta method for a second order differential
equation as,

yi+1 = yi + (1/6)(k1 + 2k2 + 2k3 + k4),
zi+1 = y′i+1 = zi + (1/6)(l1 + 2l2 + 2l3 + l4), for i = 0, 1, 2,...    (13.9)

where k1 = h zi, l1 = hg (xi, yi, zi)
k2 = h (zi + l1/2), l2 = hg (xi + h/2, yi + k1/2, zi + l1/2)
k3 = h (zi + l2/2), l3 = hg (xi + h/2, yi + k2/2, zi + l2/2)
k4 = h (zi + l3), l4 = hg (xi + h, yi + k3, zi + l3)
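Scheme (13.9) can likewise be sketched for, say, y″ = −y with y(0) = 1, y′(0) = 0, whose exact solution is y = cos x (illustrative code, not from the text):

```python
import math

def rk4_second_order_step(g, x, y, z, h):
    # One step of Equation (13.9) for y'' = g(x, y, y'), with z = y'
    k1, l1 = h * z,          h * g(x, y, z)
    k2, l2 = h * (z + l1/2), h * g(x + h/2, y + k1/2, z + l1/2)
    k3, l3 = h * (z + l2/2), h * g(x + h/2, y + k2/2, z + l2/2)
    k4, l4 = h * (z + l3),   h * g(x + h, y + k3, z + l3)
    return (y + (k1 + 2*k2 + 2*k3 + k4) / 6.0,
            z + (l1 + 2*l2 + 2*l3 + l4) / 6.0)

g = lambda x, y, z: -y         # y'' = -y
x, y, z, h = 0.0, 1.0, 0.0, 0.1
for _ in range(10):            # integrate to x = 1
    y, z = rk4_second_order_step(g, x, y, z, h)
    x += h
print(abs(y - math.cos(1.0)) < 1e-5)   # True: agrees with y = cos x
```

Note that the k stages only need z (since f = z here), which is why no f appears as an argument.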

Check Your Progress


1. When are Runge-Kutta methods applied?
2. Give the uses of Runge-Kutta method.

13.3 ANSWERS TO CHECK YOUR PROGRESS QUESTIONS

1. Runge-Kutta methods are very useful when the method of Taylor series is
not easy to apply because of the complexity of finding higher order
derivatives.
2. Runge-Kutta methods are widely used particularly for finding starting values
at steps x1, x2, x3,..., since they do not require evaluation of higher order
derivatives. They are also easy to implement in a computer program.

13.4 SUMMARY

Runge-Kutta methods attempt to get better accuracy and at the same time
obviate the need for computing higher order derivatives.
The solution of the (n + 1)th step is assumed in the form,
yn+1 = yn + ak1 + bk2
where k1 = h f (xn, yn) and k2 = h f (xn + h, yn + k1), for n = 0, 1, 2,...
Runge-Kutta method of order 4 requires the evaluation of the first order
derivative f (x, y), at four points. The method is self-starting.
In particular, for the special form of differential equation y′ = F (x), a function
of x alone, the Runge-Kutta method reduces to the Simpson's one-third formula
of numerical integration from xn to xn+1.
The Runge-Kutta method of order 4 can be easily extended to a pair of equations
in the form,
yi+1 = yi + (1/6)(k1 + 2k2 + 2k3 + k4)
zi+1 = zi + (1/6)(l1 + 2l2 + 2l3 + l4), for i = 0, 1, 2,...

13.5 KEY WORDS


Runge-Kutta Method: It can be of different orders. Runge-Kutta methods are
very useful when the method of Taylor series is not easy to apply because of the
complexity of finding higher order derivatives.

13.6 SELF ASSESSMENT QUESTIONS AND EXERCISES

Short-Answer Questions
1. What is the significance of Runge-Kutta methods of different orders?
2. Explain the Runge-Kutta method for a pair of equations.
Long-Answer Questions
1. Using Runge-Kutta method of order 4, compute y (0.1) for each of the
following problems:
(a) dy/dx = x + y, y (0) = 1
(b) dy/dx = x + y², y (0) = 1
2. Compute the solution of the following initial value problem by Runge-Kutta
method of order 4 taking h = 0.2 up to x = 1: y′ = x – y, y (0) = 1.5.

13.7 FURTHER READINGS

Jain, M. K., S. R. K. Iyengar and R. K. Jain. 2007. Numerical Methods for
Scientific and Engineering Computation. New Delhi: New Age
International (P) Limited.
Atkinson, Kendall E. 1989. An Introduction to Numerical Analysis, 2nd Edition.
US: John Wiley & Sons.
Jain, M. K. 1983. Numerical Solution of Differential Equations. New Delhi:
New Age International (P) Limited.
Conte, Samuel D. and Carl de Boor. 1980. Elementary Numerical Analysis:
An Algorithmic Approach. New York: McGraw-Hill.
Skeel, Robert. D and Jerry B. Keiper. 1993. Elementary Numerical Computing
with Mathematica. New York: McGraw-Hill.
Balaguruswamy, E. 1999. Numerical Methods. New Delhi: Tata McGraw-Hill.
Datta, N. 2007. Computer Oriented Numerical Methods. New Delhi: Vikas
Publishing House Pvt. Ltd.

Self-Instructional
Material 321
Stability Analysis

UNIT 14 STABILITY ANALYSIS


Structure
14.0 Introduction
14.1 Objectives
14.2 Stability Analysis
14.3 Answers to Check Your Progress Questions
14.4 Summary
14.5 Key Words
14.6 Self Assessment Questions and Exercises
14.7 Further Readings

14.0 INTRODUCTION

In mathematics, stability theory addresses the stability of solutions of differential


equations and of trajectories of dynamical systems under small perturbations of
initial conditions. The heat equation, for example, is a stable partial differential
equation because small perturbations of initial data lead to small variations in
temperature at a later time as a result of the maximum principle. In partial differential
equations one may measure the distances between functions using Lp norms or
the sup norm, while in differential geometry one may measure the distance between
spaces using the Gromov–Hausdorff distance.
In this unit, you will study stability analysis.

14.1 OBJECTIVES

After going through this unit, you will be able to:


Explain the basic concept of stability analysis
Understand the use of stability concept in finding solutions

14.2 STABILITY ANALYSIS

In mathematics and statistics, stability theory addresses the stability of solutions


of differential equations and of trajectories of dynamical systems under small
perturbations of initial conditions. The heat equation, for example, is a stable partial
differential equation because small perturbations of initial data lead to small variations
in temperature at a later time as a result of the maximum principle. In partial
differential equations one may measure the distances between functions using Lp
norms or the sup norm, while in differential geometry one may measure the distance
between spaces using the Gromov–Hausdorff distance.
Under favourable circumstances, the question may be reduced to a well-studied
problem involving eigenvalues of matrices. A more general method involves
Lyapunov functions. In practice, any one of a number of different stability
criteria is applied.
Overview in Dynamical Systems
Many parts of the qualitative theory of differential equations and dynamical systems
deal with asymptotic properties of solutions and the trajectories—what happens
with the system after a long period of time. The simplest kind of behaviour is
exhibited by equilibrium points, or fixed points, and by periodic orbits. If a particular
orbit is well understood, it is natural to ask next whether a small change in the initial
condition will lead to similar behaviour. Stability theory helps us to find whether
a nearby orbit will indefinitely stay close to a given orbit, or converge to a
given orbit. In the former case, the orbit is called stable; in the latter case, it is
called asymptotically stable and the given orbit is said to be attracting.
An equilibrium solution fe to an autonomous system of first order ordinary
differential equations is called:
Stable if for every (small) ε > 0, there exists a δ > 0 such that every
solution f (t) having initial conditions within distance δ of the equilibrium,
i.e., || f (t0) – fe || < δ, remains within distance ε, i.e., || f (t) – fe || < ε,
for all t ≥ t0.
Asymptotically stable if it is stable and, in addition, there exists δ > 0
such that whenever || f (t0) – fe || < δ, then f (t) → fe as t → ∞.

Stability means that the trajectories do not change too much under small
perturbations. The opposite situation, where a nearby orbit is repelled from the
given orbit, is also of interest. In general, perturbing the initial state in some
directions results in the trajectory asymptotically approaching the given one, and
in other directions in the trajectory getting away from it. There may also be
directions for which the behaviour of the perturbed orbit is more complicated
(neither converging nor escaping completely), and then stability theory does not
give sufficient information about the dynamics.
In stability theory, the qualitative behaviour of an orbit under perturbations
can be analysed using the linearization of the system near the orbit. In particular, at
each equilibrium of a smooth dynamical system with an n-dimensional phase space,
there is a certain n×n matrix A whose eigenvalues characterize the behaviour of
the nearby points (Hartman–Grobman theorem). More precisely, if all eigenvalues
are negative real numbers or complex numbers with negative real parts then the
point is a stable attracting fixed point, and the nearby points converge to it at
an exponential rate (cf. Lyapunov stability and exponential stability). If none of the
eigenvalues are purely imaginary (or zero) then the attracting and repelling directions
are related to the eigen-spaces of the matrix A with eigenvalues whose real part is
negative and, respectively, positive. Analogous statements are known for
perturbations of more complicated orbits.
Stability of Fixed Points
The simplest kind of an orbit is a fixed point, or an equilibrium. If a mechanical
system is in a stable equilibrium state then a small push will result in a localized
motion, for example, small oscillations as in the case of a pendulum. In a system
with damping, a stable equilibrium state is moreover asymptotically stable. On the
other hand, for an unstable equilibrium, such as a ball resting on a top of a hill,
certain small pushes will result in a motion with a large amplitude that may or may
not converge to the original state. Stability of a nonlinear system can be deduced
from the stability of its linearization.
Maps: Let f : R → R be a continuously differentiable function with a fixed
point a, f (a) = a. Consider the dynamical system obtained by iterating the function f:
xn+1 = f (xn), n = 0, 1, 2, . . . .
The fixed point a is stable if the absolute value of the derivative of f at a is
strictly less than 1, and unstable if it is strictly greater than 1. This is because near
the point a, the function f has a linear approximation with slope f ′(a):
f (x) ≈ f (a) + f ′(a)(x – a).

Thus, using f (a) = a,

xn+1 – xn = f (xn) – xn ≈ f (a) + f ′(a)(xn – a) – xn = (f ′(a) – 1)(xn – a)

i.e., (xn+1 – xn)/(xn – a) ≈ f ′(a) – 1

which means that the derivative measures the rate at which the successive
iterates approach the fixed point a or diverge from it. If the derivative at a is
exactly 1 or –1, then more information is needed in order to decide stability.
There is an analogous criterion for a continuously differentiable
map f : Rn → Rn with a fixed point a, expressed in terms of its Jacobian
matrix at a, Ja(f). If all eigenvalues of J are real or complex numbers with absolute
value strictly less than 1 then a is a stable fixed point; if at least one of them has
absolute value strictly greater than 1 then a is unstable. Just as for n = 1, the case of
the largest absolute value being 1 needs to be investigated further; the Jacobian
matrix test is inconclusive. The same criterion holds more generally
for diffeomorphisms of a smooth manifold.
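The derivative criterion can be observed numerically: for f (x) = cos x the fixed point a ≈ 0.739 satisfies |f ′(a)| = |–sin a| ≈ 0.67 < 1, so the iterates converge to it (an illustrative sketch, not from the text):

```python
import math

x = 1.0
for _ in range(100):
    x = math.cos(x)        # iterate x_{n+1} = f(x_n) with f = cos

# x is now close to the fixed point a with cos(a) = a
print(abs(math.cos(x) - x) < 1e-9,    # a fixed point to high accuracy
      abs(-math.sin(x)) < 1.0)        # |f'(a)| < 1: the stability criterion
```

Since the contraction factor is about 0.67 per iteration, 100 iterations shrink the initial error far below the tolerance used in the check.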
Linear Autonomous Systems
The stability of fixed points of a system of constant coefficient linear differential
equations of first order can be analysed using the eigenvalues of the corresponding
matrix.
An autonomous system

x′ = Ax,

where x(t) ∈ Rn and A is an n×n matrix with real entries, has a constant
solution

x(t) = 0.

(In a different language, the origin 0 ∈ Rn is an equilibrium point of the
corresponding dynamical system.) This solution is asymptotically stable as t → ∞
(“in the future”) if and only if for all eigenvalues λ of A, Re(λ) < 0. Similarly, it is
asymptotically stable as t → –∞ (“in the past”) if and only if for all eigenvalues λ
of A, Re(λ) > 0. If there exists an eigenvalue λ of A with Re(λ) > 0 then the
solution is unstable for t → ∞.
The stability of the origin for a linear system can be determined by the Routh–
Hurwitz stability criterion. The eigenvalues of a matrix are the roots of
its characteristic polynomial. A polynomial in one variable with real coefficients is
called a Hurwitz polynomial if the real parts of all roots are strictly negative.
The Routh–Hurwitz theorem implies a characterization of Hurwitz polynomials by
means of an algorithm that avoids computing the roots.
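For a 2×2 matrix A the eigenvalues are the roots of λ² – tr(A)λ + det(A) = 0, so the stability test can be sketched in a few lines (illustrative code; the matrices are example choices, not from the text):

```python
import cmath

def is_asymptotically_stable_2x2(A):
    """Re(lambda) < 0 for both eigenvalues of a real 2x2 matrix A."""
    (a, b), (c, d) = A
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)     # roots of l^2 - tr*l + det = 0
    lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2
    return lam1.real < 0 and lam2.real < 0

print(is_asymptotically_stable_2x2([[0, 1], [-1, -0.5]]),  # damped oscillator
      is_asymptotically_stable_2x2([[0, 1], [1, 0]]))      # saddle: unstable
```

The damped oscillator matrix has eigenvalues –0.25 ± 0.968i (stable), while the saddle has eigenvalues ±1 (unstable), in line with the eigenvalue criterion above.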
Non-Linear Autonomous Systems
Asymptotic stability of fixed points of a non-linear system can be demonstrated
using the Hartman–Grobman theorem.
Suppose that v is a C1-vector field in Rn which vanishes at a point p,
v(p) = 0. Then the corresponding autonomous system

x′ = v(x)

has a constant solution

x(t) = p.

Let Jp(v) be the n×n Jacobian matrix of the vector field v at the point p. If
all eigenvalues of Jp(v) have strictly negative real part then the solution is
asymptotically stable. This condition can be tested using the Routh–Hurwitz
criterion.

Check Your Progress


1. How can linear differential equations of first order be analysed?
2. Define Hurwitz polynomial.
3. How can asymptotic stability of fixed points be demonstrated?

14.3 ANSWERS TO CHECK YOUR PROGRESS QUESTIONS

1. The stability of fixed points of a system of constant coefficient linear


differential equations of first order can be analysed using the eigenvalues of
the corresponding matrix.
2. A polynomial in one variable with real coefficients is called a Hurwitz
polynomial if the real parts of all roots are strictly negative.
3. Asymptotic stability of fixed points of a non-linear system can be
demonstrated using the Hartman–Grobman theorem.

14.4 SUMMARY

The simplest kind of behaviour is exhibited by equilibrium points, or fixed


points, and by periodic orbits.
Stability theory helps us to find whether a nearby orbit will indefinitely
stay close to a given orbit, or converge to it. In the former
case, the orbit is called stable; in the latter case, it is called asymptotically
stable and the given orbit is said to be attracting.
Stability means that the trajectories do not change too much under small
perturbations.
In stability theory, the qualitative behaviour of an orbit under perturbations
can be analysed using the linearization of the system near the orbit.

If all eigenvalues are negative real numbers or complex numbers with
negative real parts then the point is a stable attracting fixed point, and the
nearby points converge to it at an exponential rate (cf. Lyapunov stability
and exponential stability).
The simplest kind of an orbit is a fixed point, or an equilibrium.
Stability of a nonlinear system can be deduced from the stability of
its linearization.
The fixed point a is stable if the absolute value of the derivative of f at a
is strictly less than 1, and unstable if it is strictly greater than 1.
There is an analogous criterion for a continuously differentiable
map f : Rn → Rn with a fixed point a, expressed in terms of its Jacobian
matrix at a, Ja(f).
The stability of the origin for a linear system can be determined by the Routh–
Hurwitz stability criterion.
The Routh–Hurwitz theorem implies a characterization of Hurwitz
polynomials by means of an algorithm that avoids computing the roots.
If all eigenvalues of J have strictly negative real part then the solution is
asymptotically stable.

14.5 KEY WORDS

Maps: Let f : R → R be a continuously differentiable function with a fixed
point a, f (a) = a. Consider the dynamical system obtained by iterating the
function f:
xn+1 = f (xn), n = 0, 1, 2, . . . .

14.6 SELF ASSESSMENT QUESTIONS AND EXERCISES

Short-Answer Questions
1. Define stability.
2. Elaborate on non-linear autonomous systems.
Long-Answer Questions
1. Explain the stability theory.
2. Give details on linear autonomous systems.

14.7 FURTHER READINGS

Jain, M. K., S. R. K. Iyengar and R. K. Jain. 2007. Numerical Methods for
Scientific and Engineering Computation. New Delhi: New Age
International (P) Limited.
Atkinson, Kendall E. 1989. An Introduction to Numerical Analysis, 2nd Edition.
US: John Wiley & Sons.
Jain, M. K. 1983. Numerical Solution of Differential Equations. New Delhi:
New Age International (P) Limited.
Conte, Samuel D. and Carl de Boor. 1980. Elementary Numerical Analysis:
An Algorithmic Approach. New York: McGraw-Hill.
Skeel, Robert. D and Jerry B. Keiper. 1993. Elementary Numerical Computing
with Mathematica. New York: McGraw-Hill.
Balaguruswamy, E. 1999. Numerical Methods. New Delhi: Tata McGraw-Hill.
Datta, N. 2007. Computer Oriented Numerical Methods. New Delhi: Vikas
Publishing House Pvt. Ltd.

