
Numerical Methods

Chapter 1

Er. Ganesh Ram Dhonju


Course Objectives
– The objective of this course is to equip students with a thorough understanding of numerical methods, focusing on their application in obtaining approximate solutions to complex mathematical problems commonly encountered in science and engineering.
– Emphasizing algorithm development, programming, and visualization techniques, the course enables students to apply computational approaches effectively, enhancing their problem-solving capabilities in real-world applications.



Syllabus of Numerical Method
1 Solution of Non-Linear Equations (7 hours)
1.1 Errors and accuracy in numerical computations
1.2 Bisection method
1.3 Regula Falsi method and secant method
1.4 Newton Raphson method
1.5 Fixed point iteration method
1.6 Comparison of the methods (bracketing vs open-ended methods and rates of convergence)
1.7 Solution of system of non-linear equations
1.7.1 Direct approach
1.7.2 Newton Raphson method

2 Solution of System of Linear Algebraic Equations (8 hours)
2.1 Direct methods
2.1.1 Gauss Jordan method
2.1.2 Gauss elimination method, pivoting strategies (partial and complete)
2.1.3 Matrix inverse using Gauss Jordan and Gauss elimination methods
2.1.4 Factorization methods (Do-Little's method and Crout's method)
2.2 Iterative methods
2.2.1 Jacobi's method
2.2.2 Gauss-Seidel method
2.3 Determination of largest and smallest eigenvalues and corresponding vectors using the power method


3 Interpolation (9 hours)
3.1 Polynomial interpolation
3.1.1 Finite differences (forward, backward, central and divided differences)
3.1.2 Interpolation with equally spaced intervals: Newton's forward and backward difference interpolation, Stirling's and Bessel's central difference interpolation
3.1.3 Interpolation with unequally spaced intervals: Newton's divided difference interpolation, Lagrange interpolation
3.2 Least square method of curve fitting
3.2.1 Linear form and forms reducible to linear form
3.2.2 Quadratic form and forms reducible to quadratic form
3.2.3 Higher degree polynomials
3.3 Cubic spline interpolation
3.3.1 Equally spaced intervals
3.3.2 Unequally spaced intervals

4 Numerical Differentiation and Integration (6 hours)
4.1 Numerical differentiation
4.1.1 Differentiation using polynomial interpolation formulae for equally spaced intervals
4.1.2 Local maxima and minima from equally spaced data
4.2 Numerical integration
4.2.1 Newton-Cotes general quadrature formula
4.2.2 Trapezoidal rule, Simpson's 1/3 and 3/8 rules, Boole's rule, Weddle's rule
4.2.3 Romberg integration
4.2.4 Gauss-Legendre integration (up to 3-point formula)


5 Solution of Ordinary Differential Equations (ODE) (8 hours)
5.1 Initial value problems
5.1.1 Solution of first order equations: Taylor's series method, Euler's method, Runge-Kutta methods (second and fourth order)
5.1.2 Solution of system of first order ODEs via Runge-Kutta methods
5.1.3 Solution of second order ODEs via Runge-Kutta methods
5.2 Two-point boundary value problems
5.2.1 Shooting method
5.2.2 Finite difference method

6 Solution of Partial Differential Equations (7 hours)
6.1 Introduction and classification
6.2 Finite difference approximations of partial derivatives
6.3 Solution of elliptic equations
6.3.1 Laplace equation
6.3.2 Poisson's equation
6.4 Solution of parabolic and hyperbolic equations
6.4.1 One-dimensional heat equation: Bendre-Schmidt method, Crank-Nicolson method
6.4.2 Solution of wave equation


Numerical Method
– A numerical method is a mathematical tool designed to solve numerical problems.
– Numerical methods provide a way to solve problems quickly and easily compared to analytic solutions. Whether the goal is integration or the solution of complex differential equations, there are many tools available to reduce what can sometimes be quite difficult analytical mathematics to simple algebra.
Characteristics of Numerical Methods
1. The solution procedure is iterative, with the accuracy of the solution
improving with each iteration.
2. The solution procedure provides only an approximation to the true,
but unknown, solution.
3. An initial estimate of the solution may be required.
4. The algorithm is simple and can be easily programmed.
5. The solution procedure may occasionally diverge from rather than
converge to the true solution.



Use of Numerical Methods
– In solving practical technical problems, engineers use scientific and mathematical tools when available, together with experience and intuition; mathematical models provide a priori estimates of performance, which is very desirable when prototypes or experiments are costly.
– Engineering problems frequently arise in which exact analytical solutions are not available.
– Approximate solutions are normally sufficient for engineering applications, allowing the use of approximate numerical methods.
– Most real-life problems cannot be solved exactly; such problems can be solved approximately using numerical methods.
Approximations
– For many engineering problems, we cannot obtain
analytical solutions.
– Such numbers which represent the given numbers to a
certain degree of accuracy are called approximate numbers.
– Numerical methods yield approximate results, results that
are close to the exact analytical solution. We cannot exactly
compute the errors associated with numerical methods.
– Exact numbers are 1, 5, 10, 55/2, 25.4, etc.
– There are numbers such as 1/3 = 0.33333…, √2 = 1.414213… and π = 3.141592… which cannot be expressed by a finite number of digits but may be approximated by 0.3333, 1.4142 and 3.1416 respectively. Such numbers, which represent the given numbers to a certain degree of accuracy, are called approximate numbers.


– Given data are only rarely exact, since they originate from measurements; therefore, there is probably error in the input information.
– The algorithm itself usually introduces errors as well, e.g., unavoidable round-offs. The output information will then contain error from both of these sources.
– How confident are we in our approximate result?
– The question is: "How much error is present in our calculation, and is it tolerable?"


Significant Figures
– The significant digits of a number are those that can be used with confidence. They correspond to the number of certain digits plus one estimated digit.
– Zeros are not always significant figures because they may be necessary just to locate a decimal point.
– The numbers 0.00001845, 0.0001845, and 0.001845 all have four significant figures.


Significant Figures
– The number of significant figures indicates precision. Significant digits of a number are those that can be used with confidence. For example, how many significant figures does 53,800 have?
5.38 × 10^4 has 3
5.380 × 10^4 has 4
5.3800 × 10^4 has 5
– Zeros are sometimes used only to locate the decimal point and are not significant figures:
0.00001753 has 4
0.0001753 has 4
0.001753 has 4


Rules of significant zeros
– All non zero digits are always significant
– All zeros occurring between non zero digits are always significant
– Leading zeros ( zeros to the left of the first non zero digit) are never significant,
such zeros merely indicate the position of the decimal point.
– 0.00017 has only two significant figures
– 0.000000123 has only three significant figures

– In a number with a decimal point, trailing zeros are significant. For this reason,
it is important to give consideration to when a decimal point is used and to keep
the trailing zeros to indicate the actual number of significant figures
– 400. has three significant figures
– 5.00 has three significant figures
– 0.050 has two significant figures



– When the decimal point is not written, trailing zeros are not considered to be significant.
– 1,700,000 has two significant figures
– 2000 has one significant figure

– To indicate that the trailing zeros are significant, a decimal point must be added.
– The number 400. has three significant digits
– 102400 = 10.24 × 10^4 has four significant figures
– 102400 = 10.240 × 10^4 has five significant figures


Accuracy and Precision
– Accuracy refers to how closely a computed or measured value agrees with the
true value.
– Precision refers to how closely individual computed or measured values agree
with each other.



– Accuracy. How close is a computed or measured value to the true value
– Precision (or reproducibility). How close is a computed or measured value to
previously computed or measured values.
– Inaccuracy (or bias). A systematic deviation from the actual value.
– Imprecision (or uncertainty). Magnitude of scatter.



(Figure panels)
a) Inaccurate and imprecise
b) Accurate and imprecise
c) Inaccurate and precise
d) Accurate and precise


Errors?
Numerical errors arise from the use of approximations to represent exact mathematical operations and quantities.

Why measure errors?

1) To determine the accuracy of numerical results.
2) To develop stopping criteria for iterative algorithms.


Error
– We may not be concerned with the sign of the error, but we are interested in whether the absolute value of the percent relative error is lower than a pre-specified percent tolerance εs.
– It is also convenient to relate these errors to the number of significant figures in the approximation.
– It can be shown that if the criterion given later, |εa| ≤ εs = (0.5 × 10^(2−n))%, is met, we can be assured that the result is correct to at least n significant figures.


Approximate Percent Relative Error
– Sometimes the true value is not known.
– The approximate percent relative error is then used: εa = (approximate error / approximation) × 100%.
– For iterative processes, the error can be approximated as the difference in values between successive iterations.
– Stopping criterion: the computation is repeated until |εa| < εs.


Types of Error
1. Inherent Errors
– Errors which are already present in the statement of a problem before its solution are called inherent errors. Such errors arise either from the given data being approximate or from the limitations of mathematical tables, calculators or the digital computer.
– Inherent errors can be minimized by taking better data or by using high-precision computing aids.
– These errors contain two components, namely data errors and conversion errors.


2. Rounding Errors
– Rounding errors arise from the process of rounding off numbers during the computation. Such errors are unavoidable in most calculations due to the limitations of the computing aids.
– Rounding errors can be reduced:
– by changing the calculation procedure so as to avoid subtraction of nearly equal numbers or division by a small number;
– by retaining at least one more significant figure at each step than that given in the data and rounding off at the last step.


3. Truncation Errors
– Truncation errors are caused by using approximate results or by replacing an infinite process with a finite one. If we are using a decimal computer having a fixed word length of 4 digits, rounding off 13.658 gives 13.66 whereas truncation (chopping) gives 13.65.
– Example 1: the approximation of a derivative using a finite difference equation, f'(x) ≈ (f(x + h) − f(x))/h, truncates the Taylor series of f after the first-order term; the neglected higher-order terms are the truncation error.


Absolute Error
If X is the true value of a quantity and X' is its approximate value, then |X − X'|, i.e. |error|, is called the absolute error Ea.

True value = Approximation + Error
True error: Et = True value − Approximation (carries a + or − sign)
True fractional relative error = true error / true value
True percent relative error: εt = (true error / true value) × 100%
– For numerical methods, the true value will be known only when we deal with functions that can be solved analytically (simple systems). In real-world applications, we usually do not know the answer a priori. Then:

Approximate percent relative error: εa = (approximate error / approximation) × 100%

– For an iterative approach (for example Newton's method):

εa = ((current approximation − previous approximation) / current approximation) × 100% (carries a + or − sign)
– Use the absolute value.
– Computations are repeated until the stopping criterion is satisfied:

|εa| < εs, a pre-specified percent tolerance based on the knowledge of your solution.

– If the criterion |εa| ≤ εs is met with

εs = (0.5 × 10^(2−n))%

you can be sure that the result is correct to at least n significant figures.


Example—True Error
The derivative f'(x) of a function f(x) can be approximated by the equation

f'(x) ≈ (f(x + h) − f(x)) / h

If f(x) = 7e^(0.5x) and h = 0.3:
a) find the approximate value of f'(2)
b) find the true value of f'(2)
c) find the true error for part (a)


Solution:
a) For x = 2 and h = 0.3,
f'(2) ≈ (f(2 + 0.3) − f(2)) / 0.3
      = (f(2.3) − f(2)) / 0.3
      = (7e^(0.5 × 2.3) − 7e^(0.5 × 2)) / 0.3
      = (22.107 − 19.028) / 0.3
      = 10.263

b) The exact value of f'(2) can be found using differential calculus:
f(x) = 7e^(0.5x)
f'(x) = 7 × 0.5 × e^(0.5x) = 3.5e^(0.5x)
So the true value is f'(2) = 3.5e^(0.5 × 2) = 9.5140.

c) The true error is calculated as
Et = True value − Approximate value = 9.5140 − 10.263 = −0.749
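A minimal Python sketch (not part of the original slides) that reproduces parts (a)-(c) above; kept at full precision it gives roughly 10.2646 and −0.7506, which agree with the tabulated 10.263 and −0.749 once the intermediate function values are rounded to three decimals.

import math

def f(x):
    return 7 * math.exp(0.5 * x)

h = 0.3
approx = (f(2 + h) - f(2)) / h      # forward-difference estimate of f'(2)
true = 3.5 * math.exp(0.5 * 2)      # analytical derivative 3.5*e^(0.5x) at x = 2
true_error = true - approx          # E_t = true value - approximation

print(approx)       # ~10.2646
print(true)         # ~9.5140
print(true_error)   # ~-0.7506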


Relative True Error
– Defined as the ratio between the true error and the true value:

Relative true error: εt = True error / True value


Example—Relative True Error
Following from the previous example for true error, find the relative true error for f(x) = 7e^(0.5x) at f'(2) with h = 0.3.

From the previous example, Et = −0.749.

Relative true error is defined as
εt = True error / True value = −0.749 / 9.5140 = −0.0787

As a percentage, εt = −0.0787 × 100% = −7.87%.


Approximate Error
– What can be done if true values are not known or are very difficult to obtain?
– The approximate error is defined as the difference between the present approximation and the previous approximation:

Approximate error: Ea = Present approximation − Previous approximation


Example—Approximate Error
For f(x) = 7e^(0.5x) at x = 2, find the following:
a) f'(2) using h = 0.3
b) f'(2) using h = 0.15
c) the approximate error for the value of f'(2) in part b)

Solution:
a) For x = 2 and h = 0.3,
f'(2) ≈ (f(2 + 0.3) − f(2)) / 0.3 = (f(2.3) − f(2)) / 0.3
      = (7e^(0.5 × 2.3) − 7e^(0.5 × 2)) / 0.3 = (22.107 − 19.028) / 0.3 = 10.263

b) For x = 2 and h = 0.15,
f'(2) ≈ (f(2 + 0.15) − f(2)) / 0.15 = (f(2.15) − f(2)) / 0.15
      = (7e^(0.5 × 2.15) − 7e^(0.5 × 2)) / 0.15 = (20.510 − 19.028) / 0.15 = 9.8800

c) So the approximate error Ea is
Ea = Present approximation − Previous approximation = 9.8800 − 10.263 = −0.38300
Round-off Errors
– Numbers such as π, e, or √7 cannot be expressed by a fixed number of significant figures.
– Because computers use a base-2 representation, they cannot precisely represent certain exact base-10 numbers.
– Fractional quantities are typically represented in a computer using "floating point" form, m × b^e, where m is the mantissa, b is the base of the number system used, and e is the exponent.
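A tiny Python illustration (not from the slides) of the base-2 limitation mentioned above: the decimal 0.1 has no exact binary representation, so repeated sums drift away from the exact decimal answer.

import sys

total = sum(0.1 for _ in range(10))
print(total)                    # 0.9999999999999999, not exactly 1.0
print(total == 1.0)             # False
print((0.1).hex())              # the binary fraction actually stored for 0.1
print(sys.float_info.mant_dig)  # 53 mantissa bits in IEEE-754 double precision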


Relative Approximate Error
– Defined as the ratio between the approximate error and the present approximation:

Relative approximate error: εa = Approximate error / Present approximation


Example—Relative Approximate Error
For f(x) = 7e^(0.5x) at x = 2, find the relative approximate error using the values from h = 0.3 and h = 0.15.

Solution:
From the previous example, the approximate value of f'(2) is 10.263 using h = 0.3 and 9.8800 using h = 0.15.
Ea = Present approximation − Previous approximation = 9.8800 − 10.263 = −0.38300


Solution: (cont.)
εa = Approximate error / Present approximation = −0.38300 / 9.8800 = −0.038765

As a percentage, εa = −0.038765 × 100% = −3.8765%.

The absolute relative approximate error may also need to be calculated:
|εa| = |−0.038765| = 0.038765, or 3.8765%.


How is the Absolute Relative Error Used as a Stopping Criterion?
– If |εa| ≤ εs, where εs is a pre-specified tolerance, then no further iterations are necessary and the process is stopped.
– If at least m significant digits are required to be correct in the final answer, then the iteration is continued until

|εa| ≤ (0.5 × 10^(2−m))%
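A hedged Python sketch (not from the slides) applying this stopping rule to the running example: the forward-difference estimate of f'(2) for f(x) = 7e^(0.5x) is recomputed with h halved each pass until |εa| falls below εs for m = 3 significant digits.

import math

def f(x):
    return 7 * math.exp(0.5 * x)

def dfdx_approx(x, h):
    return (f(x + h) - f(x)) / h         # forward-difference estimate

m = 3                                    # significant digits required
eps_s = 0.5 * 10 ** (2 - m)              # pre-specified tolerance, in percent
h = 0.3
prev = dfdx_approx(2, h)
while True:
    h /= 2
    curr = dfdx_approx(2, h)
    eps_a = abs((curr - prev) / curr) * 100   # absolute percent relative error
    print(f"h = {h:.6f}   estimate = {curr:.6f}   |ea| = {eps_a:.5f}%")
    if eps_a <= eps_s:
        break
    prev = curr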


Chopping
Example: π = 3.14159265358 is to be stored on a base-10 system carrying 7 significant digits.
Chopping: π = 3.141592, chopping error εt = 0.00000065
If rounded: π = 3.141593, error εt = 0.00000035
– Some machines use chopping because rounding adds to the computational overhead. Since the number of significant figures carried is large enough, the resulting chopping error is negligible.
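A small sketch (not from the slides) contrasting chopping with rounding to 7 significant digits for the value of π quoted above; the two helper functions are illustrative, not a standard library API.

import math

def chop(x, sig):
    # keep `sig` significant digits and drop the rest without rounding
    shift = sig - 1 - math.floor(math.log10(abs(x)))
    factor = 10 ** shift
    return math.floor(x * factor) / factor

def round_sig(x, sig):
    # round to `sig` significant digits
    shift = sig - 1 - math.floor(math.log10(abs(x)))
    return round(x, shift)

pi_val = 3.14159265358
print(chop(pi_val, 7), pi_val - chop(pi_val, 7))                 # 3.141592, ~6.5e-07
print(round_sig(pi_val, 7), abs(pi_val - round_sig(pi_val, 7)))  # 3.141593, ~3.5e-07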


– Example: Round-off Error in the Atomic Weight
The atomic weight of oxygen is 15.9994. If we round it to 16, the error is
e = 16 − 15.9994 = 0.0006
The relative true error is
er = 0.0006 / 15.9994 ≈ 0.4 × 10^(−4)


Problems
# Round off the following numbers to four significant digits and then calculate the absolute error and percentage error:
75462343, 3.26425, 35.46735, 0.70035, 0.00032217, 18.265101

# Find the relative error in taking π = 3.141593 as 22/7.

# √29 = 5.385 and √11 = 3.317, correct to 4 significant figures. Find the relative errors in their sum and difference.

# Find the absolute and relative error if the number X = 0.004997 is
i) truncated to three decimal digits
ii) rounded off to three decimal digits.


# Round off the numbers 584970 and 84.79452 to 4 significant figures and compute the absolute error, relative error and percentage error in each case.

# Find the absolute error and relative error if the number X = 0.00547528 is
i) truncated to three decimal digits
ii) rounded off to three decimal digits.


Other Errors
– Blunders – errors caused by malfunctions of the computer or human imperfection.
– Model errors – errors resulting from incomplete mathematical models.
– Data uncertainty – errors resulting from the accuracy and/or precision of the data.


Roots of Equations

• Why?
ax^2 + bx + c = 0  ⇒  x = (−b ± √(b^2 − 4ac)) / (2a)

• But
ax^5 + bx^4 + cx^3 + dx^2 + ex + f = 0  ⇒  x = ?
sin(x) + x = 0  ⇒  x = ?


Nonlinear Equation Solvers (all iterative)

– Bracketing methods: Bisection, False Position (Regula-Falsi)
– Open methods: Newton-Raphson, Secant
– Graphical methods


Bracketing Methods
(or two-point methods for finding roots)
– Two initial guesses for the root are required. These guesses must "bracket", or be on either side of, the root.
– If one root of a real and continuous function f(x) = 0 is bounded by the values x = xl and x = xu, then
f(xl) · f(xu) < 0
(the function changes sign on opposite sides of the root).


Typical cases when checking for a sign change over an interval (figure panels):
– No answer (no root)
– Two roots (might work for a while!)
– Nice case (one root)
– Oops! (two roots!)
– Discontinuous function; needs a special method
– Three roots (might work for a while!)


The incremental search method
– Tests the value of the function at evenly spaced intervals.
– Finds brackets by identifying function sign changes between neighboring points.
– If the spacing between the points of an incremental search is too far apart, brackets may be missed because an even number of roots may lie between two successive points.
– Incremental searches cannot find brackets containing even-multiplicity roots, regardless of spacing.
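A minimal incremental-search sketch in Python (not from the slides): it scans an interval with a fixed step and reports every subinterval where f changes sign; the test function and interval are illustrative choices.

def incremental_search(f, a, b, dx):
    brackets = []
    x_lo = a
    while x_lo < b:
        x_hi = min(x_lo + dx, b)
        if f(x_lo) * f(x_hi) < 0:        # sign change => a root is bracketed
            brackets.append((x_lo, x_hi))
        x_lo = x_hi
    return brackets

# x^2 - 4x - 10 = 0 has roots near -1.742 and 5.742
print(incremental_search(lambda x: x**2 - 4*x - 10, -10.0, 10.0, 0.5))
# [(-2.0, -1.5), (5.5, 6.0)]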


Bisection Method
– The bisection method is a variation of the incremental search method in which the interval is always divided in half.
– If a function changes sign over an interval, the function value at the midpoint is evaluated.
– The location of the root is then determined as lying within the subinterval where the sign change occurs.
– The absolute error is reduced by a factor of 2 at each iteration.


The Bisection Method
For the arbitrary equation of one variable, f(x) = 0:
1. Pick xl and xu such that they bound the root of interest; check that f(xl) · f(xu) < 0.
2. Estimate the root as the midpoint, xm = (xl + xu)/2, and evaluate f(xm).
3. Now check the following:
a) If f(xl) · f(xm) < 0, the root lies between xl and xm; set xu = xm.
b) If f(xl) · f(xm) > 0, the root lies between xm and xu; set xl = xm.
c) If f(xl) · f(xm) = 0, the root is xm; stop the algorithm.
4. Find the new estimate of the root, xm = (xl + xu)/2.
5. Compare εa with εs, where
εa = |(xm_new − xm_old) / xm_new| × 100%
xm_old = previous estimate of the root, xm_new = current estimate of the root.
6. If εa < εs, stop. Otherwise repeat the process. (See the sketch below.)
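A short Python sketch (not from the slides) of the procedure above; es is the percent tolerance on the approximate relative error, and the test equation is the one worked in class.

def bisection(f, xl, xu, es=1e-3, max_iter=100):
    if f(xl) * f(xu) > 0:
        raise ValueError("f(xl) and f(xu) must have opposite signs")
    xm_old = xl
    for _ in range(max_iter):
        xm = (xl + xu) / 2
        ea = abs((xm - xm_old) / xm) * 100 if xm != 0 else float("inf")
        test = f(xl) * f(xm)
        if test < 0:
            xu = xm            # root lies in [xl, xm]
        elif test > 0:
            xl = xm            # root lies in [xm, xu]
        else:
            return xm          # f(xm) is exactly zero
        if ea < es:
            return xm
        xm_old = xm
    return xm

# Root of x^2 - 4x - 10 = 0 bracketed by [5, 6]
print(bisection(lambda x: x**2 - 4*x - 10, 5, 6))   # ~5.742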


# Find a root of the equation x^2 − 4x − 10 = 0 using the bisection method, correct to 3 decimal places.
X0 = (X1 + X2)/2

Iteration   X1   X2   F(X1)   F(X2)   X0   F(X0)   Remarks
1
2
3
4


Problems
# cos(x) = x·e^x
# x^3 − 2x − 5 = 0
# x·log10(x) = 1.2
# x^5 − 3x^3 − 1 = 0
# f(x) = 4 sin(x) − e^x
# f(x) = e^(−x) − x
# f(x) = cos(x) − 3x + 1
# f(x) = 2x − log10(x) − 7


Evaluation of the Method

Pros:
• Easy to implement.
• Always finds a root if the initial interval brackets one.
• The number of iterations required to attain a given absolute error can be computed a priori. (See the sketch below.)

Cons:
• Slow convergence.
• The function may change sign across an interval without a root existing there (for example a discontinuity such as f(x) = 1/x).
• Cannot handle multiple roots.
• If a function f(x) just touches the x-axis (for example f(x) = x^2), it is unable to find lower and upper guesses that bracket the root.
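A one-line illustration (not from the slides) of the a priori iteration count: since bisection halves the bracket every step, the number of steps needed to shrink an initial bracket [xl, xu] below an absolute error E follows directly.

import math

def bisection_iterations(xl, xu, E):
    # smallest n with (xu - xl) / 2**n <= E
    return math.ceil(math.log2((xu - xl) / E))

print(bisection_iterations(5, 6, 1e-3))   # 10 halvings of a unit-width bracket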


The False-Position Method (Regula-Falsi)
– The next guess is obtained by connecting the two endpoints with a straight line.
– The location of the x-intercept of that straight line gives the new estimate xr.
– If a real root of f(x) = 0 is bounded by xl and xu, then we can approximate the solution by doing a linear interpolation between the points [xl, f(xl)] and [xu, f(xu)] to find the value xr such that l(xr) = 0, where l(x) is the linear approximation of f(x).


Procedure
1. Find a pair of values of x, xl and xu, such that fl = f(xl) < 0 and fu = f(xu) > 0.
2. Estimate the value of the root from the following formula and evaluate f(xr):

xr = (xl·fu − xu·fl) / (fu − fl)


3. Use the new point to replace one of the original points, keeping the two points on opposite sides of the x-axis:
If f(xr) < 0 then xl = xr  ==>  fl = f(xr)
If f(xr) > 0 then xu = xr  ==>  fu = f(xr)
If f(xr) = 0 then you have found the root and need go no further!
4. See if the new xl and xu are close enough for convergence to be declared. If they are not, go back to step 2.

– Why this method?
• Faster than bisection.
• Always converges for a single root.


Regula Falsi Method Algorithm:
1. Start
2. Read the values of x0, x1 and e
*Here x0 and x1 are the two initial guesses and e is the degree of accuracy, i.e. the stopping criterion.*
3. Compute the function values f(x0) and f(x1)
4. Check whether the product of f(x0) and f(x1) is negative or not.
If it is positive, take other initial guesses.
If it is negative, go to step 5.
5. Determine:
x = [x0*f(x1) – x1*f(x0)] / [f(x1) – f(x0)]
6. Check whether the product of f(x1) and f(x) is negative or not.
If it is negative, assign x0 = x.
If it is positive, assign x1 = x.
7. Check whether the value of |f(x)| is greater than 0.00001 or not.
If yes, go to step 5.
If no, go to step 8.
*Here the value 0.00001 is the desired degree of accuracy, and hence the stopping criterion.*
8. Display the root as x.
9. Stop
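A compact Python sketch (not from the slides) of the algorithm above; the stopping test on |f(x)| mirrors step 7, and the sample equation is one of the listed practice problems.

def false_position(f, x0, x1, e=1e-5, max_iter=100):
    if f(x0) * f(x1) > 0:
        raise ValueError("initial guesses must bracket the root")
    for _ in range(max_iter):
        x = (x0 * f(x1) - x1 * f(x0)) / (f(x1) - f(x0))
        if abs(f(x)) < e:        # stopping criterion (step 7)
            return x
        if f(x1) * f(x) < 0:     # root lies between x1 and x
            x0 = x
        else:                    # root lies between x0 and x
            x1 = x
    return x

# Root of x^3 - 2x - 5 = 0 bracketed by [2, 3]
print(false_position(lambda x: x**3 - 2*x - 5, 2, 3))   # ~2.0946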
Newton-Raphson Method
– The most widely used method.
– Based on the Taylor series expansion:

f(x_{i+1}) = f(x_i) + f'(x_i)·Δx + f''(x_i)·Δx^2/2! + O(Δx^3)

The root is the value of x_{i+1} when f(x_{i+1}) = 0. Keeping only the first-order term and rearranging,

0 = f(x_i) + f'(x_i)·(x_{i+1} − x_i)

Solving for x_{i+1} gives the Newton-Raphson formula:

x_{i+1} = x_i − f(x_i)/f'(x_i)


Derivation
From the figure, the tangent drawn at the point (x_i, f(x_i)) meets the x-axis at x_{i+1}; the slope of that tangent is

tan(α) = AB/AC, i.e. f'(x_i) = f(x_i) / (x_i − x_{i+1})

Rearranging gives

x_{i+1} = x_i − f(x_i)/f'(x_i)


 A convenient method for functions whose derivatives can be evaluated
analytically. It may not be convenient for functions whose derivatives cannot
be evaluated analytically.



Newton-Raphson Method Algorithm:
1. Start
2. Read x, e, n, d
*Here x is the initial guess, e is the absolute error i.e. the desired degree of accuracy, n is the maximum number of loop iterations, and d is for checking the slope.*
3. Do for i = 1 to n in step of 2
4. f = f(x)
5. f1 = f'(x)
6. If |f1| < d, then display "slope too small" and go to step 11.
7. x1 = x – f/f1
8. If |(x1 – x)/x1| < e, then display the root as x1 and go to step 11.
9. x = x1 and end the loop
10. Display "method does not converge due to oscillation."
11. Stop
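A Python sketch (not from the slides) following the algorithm above: d guards against a near-zero slope, and the relative change between successive iterates is the stopping test. The worked classroom equation x^2 − 4x − 10 = 0 is used as the example.

def newton_raphson(f, fprime, x, e=1e-6, n=100, d=1e-12):
    for _ in range(n):
        f1 = fprime(x)
        if abs(f1) < d:                      # slope too small: pick another guess
            raise ZeroDivisionError("slope too small near x = %g" % x)
        x1 = x - f(x) / f1                   # Newton-Raphson update
        if abs((x1 - x) / x1) < e:
            return x1
        x = x1
    raise RuntimeError("method did not converge")

print(newton_raphson(lambda x: x**2 - 4*x - 10,
                     lambda x: 2*x - 4,
                     x=5))                   # ~5.7417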
# Find a root of x^2 − 4x − 10 = 0 using the NR method.
– f(x) = x^2 − 4x − 10
– f'(x) = 2x − 4

x_i   f(x_i)   f'(x_i)   x_{i+1} = x_i − f(x_i)/f'(x_i)   Remarks (error)

– Calculate a real root of the non-linear equation x·sin(x) + cos(x) = 0 using the NR method.
– x·log10(x) = 1.2
– Find the reciprocal of 3 using the NR method.
– Find the real root of 3x − e^(−x) = 0.
– f(x) = x·e^x − 2
– Evaluate the value of √3 using the NR method.


Advantages

– Converges fast (quadratic convergence), if it converges.


– Requires only one guess



Drawbacks
1. Divergence at inflection points
Selection of an initial guess or an iterated value of the root that is close to an inflection point of the function f(x) may cause the Newton-Raphson method to start diverging away from the root.
For example, to find the root of the equation f(x) = (x − 1)^3 + 0.512 = 0, the Newton-Raphson method reduces to

x_{i+1} = x_i − [(x_i − 1)^3 + 0.512] / [3(x_i − 1)^2]

Table 1 shows the iterated values of the root of the equation.
The root starts to diverge at iteration 6 because the previous estimate, 0.92589, is close to the inflection point at x = 1.
Eventually, after 12 more iterations, the root converges to the exact value of x = 0.2.


Drawbacks – Inflection Points
Table 1: Divergence near the inflection point.

Iteration number    x_i
0                   5.0000
1                   3.6560
2                   2.7465
3                   2.1084
4                   1.6000
5                   0.92589
6                  −30.119
7                  −19.746
…
18                  0.2000

(Figure 8: Divergence at the inflection point for f(x) = (x − 1)^3 + 0.512 = 0.)
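A few lines of Python (not from the slides) reproduce the first rows of Table 1 by iterating the Newton-Raphson update for f(x) = (x − 1)^3 + 0.512 from x0 = 5.0; iteration 6 jumps to about −30, matching the table.

def nr_update(x):
    # Newton-Raphson step for f(x) = (x - 1)**3 + 0.512
    return x - ((x - 1) ** 3 + 0.512) / (3 * (x - 1) ** 2)

x = 5.0
for i in range(7):
    print(i, round(x, 5))
    x = nr_update(x)
# 0 5.0, 1 3.656, 2 ~2.7465, 3 ~2.1084, ..., 6 ~-30.119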


Drawbacks – Division by Zero
2. Division by zero
For the equation

f(x) = x^3 − 0.03x^2 + 2.4 × 10^(−6) = 0

the Newton-Raphson method reduces to

x_{i+1} = x_i − [x_i^3 − 0.03x_i^2 + 2.4 × 10^(−6)] / [3x_i^2 − 0.06x_i]

For x_0 = 0 or x_0 = 0.02, the denominator equals zero.

(Figure 9: Pitfall of division by zero or by a number near zero.)


Drawbacks – Oscillations near a Local Maximum or Minimum
3. Oscillations near a local maximum or minimum
Results obtained from the Newton-Raphson method may oscillate about a local maximum or minimum without converging on a root, being drawn instead toward the local extremum.
Eventually, this may lead to division by a number close to zero and the method may diverge.
For example, the equation f(x) = x^2 + 2 = 0 has no real roots.


Drawbacks – Oscillations near a Local Maximum or Minimum
Table 3: Oscillations near a local minimum in the Newton-Raphson method.

Iteration number    x_i        f(x_i)    |εa| %
0                  −1.0000     3.00
1                   0.5        2.25      300.00
2                  −1.75       5.063     128.571
3                  −0.30357    2.092     476.47
4                   3.1423    11.874     109.66
5                   1.2529     3.570     150.80
6                  −0.17166    2.029     829.88
7                   5.7395    34.942     102.99
8                   2.6955     9.266     112.93
9                   0.97678    2.954     175.96

(Figure: Oscillations around the local minimum for f(x) = x^2 + 2.)
Drawbacks – Root Jumping
4. Root jumping
In some cases where the function f(x) is oscillating and has a number of roots, one may choose an initial guess close to a root; however, the iterates may jump and converge to some other root.
For example, for

f(x) = sin(x) = 0

choosing x_0 = 2.4π = 7.539822 makes the iteration converge to x = 0 instead of the nearby root x = 2π = 6.2831853.

(Figure: Root jumping away from the intended root for f(x) = sin(x) = 0.)
# log(x) = cos(x)
# x·log10(x) − 1.2 = 0
# x^3 − 3x − 5 = 0
# sin(x) = 1 − x
# cos(x) − x^2 − x = 0
# x + log(x) = 2
# 3x − cos(x) − 1 = 0
(correct to 4 decimal places)


The Secant Method
– A slight variation of Newton's method for functions whose derivatives are difficult to evaluate. For these cases the derivative can be approximated by a backward finite divided difference:

1/f'(x_i) ≅ (x_i − x_{i−1}) / (f(x_i) − f(x_{i−1}))

Substituting into the Newton-Raphson formula gives

x_{i+1} = x_i − f(x_i)·(x_i − x_{i−1}) / (f(x_i) − f(x_{i−1})),   i = 1, 2, 3, …

(Figure: geometrical illustration of the secant method.)


• The secant method requires two initial estimates of x, e.g., x0 and x1. However, because f(x) is not required to change sign between the estimates, it is not classified as a "bracketing" method.
• The secant method has the same properties as Newton's method; convergence is not guaranteed for all choices of x0, x1 and f(x).

The secant method can also be derived from geometry, using similar triangles:

AB/AE = DC/DE, which can be written as f(x_i) / (x_i − x_{i+1}) = f(x_{i−1}) / (x_{i−1} − x_{i+1})

On rearranging, the secant method is given as

x_{i+1} = x_i − f(x_i)·(x_i − x_{i−1}) / (f(x_i) − f(x_{i−1}))


Secant Method Algorithm
1. Start
2. Get the values of x0, x1 and e
*Here x0 and x1 are the two initial guesses and e is the stopping criterion, i.e. the absolute error or desired degree of accuracy.*
3. Compute f(x0) and f(x1)
4. Compute x2 = [x0*f(x1) – x1*f(x0)] / [f(x1) – f(x0)]
5. Test for the accuracy of x2:
If |(x2 – x1)/x2| > e,
then assign x0 = x1 and x1 = x2
and go to step 4.
Else,
go to step 6.
6. Display the required root as x2.
7. Stop
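A brief Python sketch (not from the slides) of the algorithm above; no bracketing check is made, and the practice equation x^3 − 9x + 1 = 0 from the next slide is used as the example.

def secant(f, x0, x1, e=1e-6, max_iter=100):
    for _ in range(max_iter):
        x2 = (x0 * f(x1) - x1 * f(x0)) / (f(x1) - f(x0))
        if abs((x2 - x1) / x2) < e:      # relative change between iterates
            return x2
        x0, x1 = x1, x2
    raise RuntimeError("method did not converge")

# One root of x^3 - 9x + 1 = 0, starting from x0 = 2, x1 = 3
print(secant(lambda x: x**3 - 9*x + 1, 2, 3))   # ~2.9428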
# Find the root of x·e^x − cos(x) = 0 using the secant method.

X1        X2        F(X1)     F(X2)     X0        F(X0)
1         2         1.718     13.77     0.85751
2         0.85751   13.77     1.0215    0.76602
0.85751   0.76602   1.0215    0.64797   0.60733
0.76602   0.60733
0.60733

# Find the root of f(x) = x^3 − 9x + 1 using the secant method.


Simple Fixed-Point Iteration
– Rearrange the function f(x) = 0 so that x is alone on the left side of the equation:
f(x) = 0  ⇒  x = g(x)
– Use the new function g to predict a new value of x:
x_k = g(x_{k−1}),  x_0 given, k = 1, 2, ...
– The approximate error is given by
εa = |(x_k − x_{k−1}) / x_k| × 100%
– Fixed-point methods may sometimes "diverge", depending on the starting point (initial guess) and how the function behaves.
Solve f(x) = e^(−x) − x
• Rewrite as x = g(x) by isolating x: x = e^(−x)
• x = g(x) can be expressed as a pair of component equations:
f1 = x
f2 = g(x)
• Start with an initial guess x = 0:

i    x_i       |εa|
0    0
1    1
2    0.3679
3    0.6922
4    0.5005
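A short Python sketch (not from the slides) of the iteration x_k = g(x_{k−1}) for this example; with g(x) = e^(−x) it approaches the fixed point near 0.5671, consistent with the table above.

import math

def fixed_point(g, x0, es=1e-4, max_iter=200):
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if x_new != 0 and abs((x_new - x) / x_new) < es:   # relative change test
            return x_new
        x = x_new
    raise RuntimeError("iteration did not converge")

print(fixed_point(lambda x: math.exp(-x), 0.0))   # ~0.5671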


Find the square root of 5
X = √5, i.e. f(x) = x^2 − 5 = 0, rearranged in three ways:
Case I:   x = g(x) = 5/x,              let x0 = 1
Case II:  x = g(x) = x^2 + x − 5,      let x0 = 0
Case III: 2x = 5/x + x, so x = g(x) = (1/2)·(5/x + x),   let x0 = 1
Example:
f(x) = x^2 − x − 2:   x = g(x) = x^2 − 2,  or  x = g(x) = √(x + 2),  or  x = g(x) = 1 + 2/x, …
f(x) = x^3 − 9x + 1:  x = g(x) = −1/(x^2 − 9),  or  x = g(x) = (x^3 + 1)/9,  or  x = g(x) = (9x − 1)^(1/3), …
f(x) = x^3 + x^2 − 1: x = g(x) = (1 − x^2)^(1/3),  or  x = g(x) = 1/√(x + 1),  or  x = g(x) = (1 − x^3)^(1/2), …
f(x) = x^3 − 3x + 1:  …


– 2x − log10(x) = 7
– cos(x) = 3x − 1


Conclusion

• Fixed-point iteration converges if |g'(x)| < 1 near the root (i.e. the slope of g is flatter than the line y = x).
• When the method converges, the error is roughly proportional to, or less than, the error of the previous step; therefore it is called "linearly convergent."
