
CH 332

Computational Chemistry

Course Instructor: Dr. Debdas Dhabal

Department of Chemistry, IIT Guwahati


29 January 2024
Outline

o Roots of non-linear equations
o Methods for solving non-linear equations
  o Bisection method
    o Choosing the initial guess
    o Termination criteria for the iterations
    o An example
    o Convergence of the Bisection method
  o False Position method
    o An example
    o Convergence of the False Position method
    o Drawbacks of the False Position method
  o Newton-Raphson method
    o Convergence of the Newton-Raphson method
    o Limitations of the Newton-Raphson method
Direct methods for solving non-linear equations do not exist except for certain simple cases

Graphical methods:
o Useful when we are satisfied with approximate solutions to a problem
o Limited accuracy and precision

Trial and Error method:

o Involves a series of guesses for x; the function is evaluated at each guess to see if it is close to zero

o These two methods are satisfactory for simple functions
o They become cumbersome and time-consuming for complicated functions
o They give inaccurate results that are not acceptable for scientific problems
Iterative methods to find the roots of a polynomial equation

o Bracketing methods
  o Start with two initial guesses that bracket the root
  o Systematically reduce the width of the bracket to reach the solution
  o Two popular methods under this category:
    o Bisection method
    o False Position method
o Open-end methods
  o Single initial guess
  o Two initial guesses are possible, but they may not bracket the root
  o Iterative methods that fall under this category are:
    o Newton-Raphson method
    o Secant method
    o Muller's method
    o Fixed-point method
    o Bairstow's method
Choosing initial guesses and criteria to stop the iterations

For a polynomial: f(x) = a_n x^n + a_{n-1} x^{n-1} + ... + a_1 x + a_0

The largest possible root is given by: x_1* = -a_{n-1}/a_n

This value can be taken as the initial guess when no other value is suggested.

How to search for the bracket of roots of a polynomial?

o Search intervals that contain the real roots
o The maximum absolute value of a root is

  |x*| ≤ x*_max = sqrt( (a_{n-1}/a_n)^2 − 2 (a_{n-2}/a_n) )

o Real roots lie within the interval: [ −x*_max, +x*_max ]
Choosing initial guesses and criteria to stop the iterations

Let's consider a polynomial: f(x) = 3x^3 − 9x^2 + 6x + 16

The largest possible root is given by: x_1* = −a_{n-1}/a_n = −(−9)/3 = 3

o All roots must satisfy the relation

  |x*| ≤ sqrt( (a_{n-1}/a_n)^2 − 2 (a_{n-2}/a_n) ) = sqrt( (−9/3)^2 − 2 (6/3) ) = sqrt(9 − 4) = sqrt(5) ≈ 2.236

o Real roots lie within the interval: [ −sqrt(5), +sqrt(5) ]
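The two formulas above can be evaluated in a few lines of code. Below is a minimal Fortran sketch (not taken from the lecture; the program name and variable names are illustrative) that computes the initial guess x_1* and the bound x*_max for this cubic:

! Evaluate the initial-guess and bracketing-bound formulas
! for f(x) = 3x^3 - 9x^2 + 6x + 16 (coefficients a_n, a_(n-1), a_(n-2)).
program root_bounds
  implicit none
  real :: an, an1, an2, x1star, xmax
  an  =  3.0
  an1 = -9.0
  an2 =  6.0
  x1star = -an1/an                             ! largest possible root, used as initial guess
  xmax   = sqrt((an1/an)**2 - 2.0*(an2/an))    ! bound on |x*|
  print *, 'Initial guess x1* =', x1star
  print *, 'Real roots lie in [', -xmax, ',', xmax, ']'
end program root_bounds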


Bisection method: the most reliable and simple method for the solution of nonlinear equations

Also known as binary chopping or the half-interval method. It relies on the fact that if:
o f(x) is real and continuous in the interval x_1 < x < x_2
o f(x_1) and f(x_2) are of opposite signs, i.e. f(x_1) f(x_2) < 0,
then there is at least one root in that interval.

[Figure: plot of f(x) crossing the x-axis between x_1 and x_2]

Let's define the midpoint x_0 = (x_1 + x_2)/2
Bisection method:

[Figure: f(x) with the bracket x_1, x_2 and the midpoint x_0]

Now consider the following three conditions:
o If f(x_0) = 0, we have a root at x_0
o If f(x_0) × f(x_1) < 0, there is a root between x_0 and x_1
o If f(x_0) × f(x_2) < 0, there is a root between x_0 and x_2

Considering these three conditions, what are we doing essentially?
Ans: We are finding out the sign of the function at x_0.

For the above function, f(x_0) × f(x_2) < 0, which suggests that the root lies between x_0 and x_2.
o Now x_1 is replaced by x_0 (the new bracket is [x_0, x_2])
o Further divide this subinterval into two halves to locate a new subinterval containing the root
Bisection method:

[Figure: f(x) with the updated bracket; x_0 is the new midpoint]

Now again consider the following three conditions:
o If f(x_0) = 0, we have a root at x_0
o If f(x_0) × f(x_1) < 0, there is a root between x_0 and x_1
o If f(x_0) × f(x_2) < 0, there is a root between x_0 and x_2

For the above function, f(x_0) × f(x_1) < 0, which suggests that the root lies between x_0 and x_1.
o Now x_2 is replaced by x_0 (the new bracket is [x_1, x_0])
o Further divide this subinterval into two halves to locate a new subinterval containing the root
Overview of the Bisection method

o f(x) is real and continuous in the interval x_1 < x < x_2
o f(x_1) and f(x_2) are of opposite signs: f(x_1) f(x_2) < 0

[Figure: f(x) with the bracket x_1, x_2 and successive midpoints x_0]

o If f(x_0) = 0, we have a root at x_0
o If f(x_0) × f(x_1) < 0, there is a root between x_0 and x_1
o If f(x_0) × f(x_2) < 0, there is a root between x_0 and x_2

First pass: f(x_0) × f(x_2) < 0 suggests that the root lies between x_0 and x_2
o Now x_1 is replaced by x_0

o Again: if f(x_0) × f(x_1) < 0, there is a root between x_0 and x_1; if f(x_0) × f(x_2) < 0, there is a root between x_0 and x_2

Second pass: f(x_0) × f(x_1) < 0 suggests that the root lies between x_0 and x_1
o Now x_2 is replaced by x_0
o Further divide this subinterval into two halves to locate a new subinterval containing the root
Termination criteria for the iterations

o Absolute error in x:                 | x_{i+1} − x_i | ≤ E_a
o Relative error in x:                 | (x_{i+1} − x_i) / x_{i+1} | ≤ E_r

  x_i → estimate of the root at the i-th iteration

o Value of the function at the root:   | f(x_{i+1}) | ≤ E
o Difference in function values:       | f(x_{i+1}) − f(x_i) | ≤ E

Sometimes a combination of two or more criteria is used.
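As a concrete illustration, a stopping test might combine two of these criteria, for example the relative error in x together with the magnitude of f. A minimal Fortran sketch (illustrative only; the function name and the tolerances Er and Ef are assumptions, not taken from the lecture):

! Illustrative stopping test combining two termination criteria.
logical function converged(x_new, x_old, f_new, Er, Ef)
  implicit none
  real, intent(in) :: x_new, x_old, f_new, Er, Ef
  ! True when both the relative change in x and |f| are small enough
  converged = ( abs((x_new - x_old)/x_new) <= Er ) .and. &
              ( abs(f_new) <= Ef )
end function converged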


Bisection method to find out the roots of an equation

Let's consider a polynomial: f(x) = x^2 − 4x − 10

o All roots must satisfy the relation

  |x*| ≤ sqrt( (a_{n-1}/a_n)^2 − 2 (a_{n-2}/a_n) ) = sqrt( (−4/1)^2 − 2 (−10/1) ) = sqrt(16 + 20) = 6

o Real roots lie within the interval: [−6, +6]

o Let's find out the approximate location of the roots:

  x    :  -6  -5  -4  -3  -2  -1    0    1    2    3    4   5  6
  f(x) :  50  35  22  11   2  -5  -10  -13  -14  -13  -10  -5  2
Graphical method to get an idea of the nature of the function

[Figure: plot of f(x) = x^2 − 4x − 10 for x from −10 to 10, showing Root 1 (between −2 and −1) and Root 2 (between 5 and 6)]

Now let's calculate the root in the interval between −2 and −1.5.
Convergence of the Bisection method

o From the above discussion, we know that in the bisection method the interval containing the root is reduced by a factor of 2 at each step. The same procedure is repeated multiple times.

o Let's consider that we repeat the procedure n times. Then the width of the interval containing the root is

  (x_2 − x_1)/2^n = Δx/2^n

o So the root must lie within −Δx/2^n to +Δx/2^n of the current estimate

o The error bound at the n-th iteration is E_n = Δx/2^n

o The error bound at the (n+1)-th iteration is E_{n+1} = Δx/2^{n+1} = E_n/2

o The error reduces linearly, with a factor of 0.5, at each iteration: the Bisection method is linearly convergent
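A useful consequence (not stated on the slide, but it follows directly from E_n = Δx/2^n) is that the number of bisections needed to reach a target error E can be estimated in advance:

  E_n = \frac{\Delta x}{2^n} \le E \quad \Rightarrow \quad n \ge \log_2\left(\frac{\Delta x}{E}\right)

For example, for the bracket [−2, −1.5] used above (Δx = 0.5), reaching E = 10^{-4} requires n ≥ log_2(0.5/10^{-4}) ≈ 12.3, i.e. about 13 iterations.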


A FORTRAN code for the Bisection Method
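The code shown in the lecture is not reproduced here. The following is a minimal sketch of what such a program could look like, assuming the example f(x) = x^2 − 4x − 10 and the bracket [−2, −1.5] used above; the program name, variable names, tolerance, and iteration limit are illustrative choices, not taken from the slides.

! Minimal bisection sketch for f(x) = x**2 - 4*x - 10 on [-2.0, -1.5].
program bisection
  implicit none
  real :: x1, x2, x0, tol
  integer :: i, maxit
  x1 = -2.0; x2 = -1.5
  tol = 1.0e-6; maxit = 100
  if (f(x1)*f(x2) > 0.0) stop 'Initial guesses do not bracket a root'
  do i = 1, maxit
     x0 = 0.5*(x1 + x2)                    ! midpoint of the current bracket
     if (abs(f(x0)) <= tol .or. 0.5*(x2 - x1) <= tol) exit
     if (f(x0)*f(x1) < 0.0) then           ! root lies between x1 and x0
        x2 = x0
     else                                  ! root lies between x0 and x2
        x1 = x0
     end if
  end do
  print *, 'Root =', x0, ' after', i, 'iterations'
contains
  real function f(x)
    real, intent(in) :: x
    f = x**2 - 4.0*x - 10.0
  end function f
end program bisection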
Error estimate in the Bisection method

An approximate percent relative error:

  ε_a = | (x_r^new − x_r^old) / x_r^new | × 100%

where x_r^new is the root estimate for the present iteration and x_r^old is the root estimate from the previous iteration.

True percent relative error:

  ε_t = | (x_r − x_r^i) / x_r | × 100%

where x_r is the actual root of the equation and x_r^i is the root estimate at the i-th iteration.

ε_a captures the general trend of the true error, but is not exact.

Note: the plot shown on this slide is for a specific example; we will be evaluating this example in the practical class.
False Position method

Although the bisection method is a good technique to determine the root of an equation, it has some drawbacks:
o Its brute-force nature makes it inefficient
o It always divides the interval from x_1 to x_2 into equal halves, even though the root may be close to one of the end points

[Figure: f(x) with the points (x_1, f(x_1)) and (x_2, f(x_2)) on either side of the root]

An alternative method is the "False Position method".
False Position method

[Figure: the chord joining (x_1, f(x_1)) and (x_2, f(x_2)) crosses the x-axis at x_0]

Join the points (x_1, f(x_1)) and (x_2, f(x_2)) by a straight line: the point where this line intersects the x-axis is an improved estimate of the root.

The equation of that straight line can be written as:

  ( f(x_2) − f(x_1) ) / ( x_2 − x_1 ) = ( y − f(x_1) ) / ( x − x_1 )
False Position formula

[Figure: the chord intersects the x-axis at x = x_0]

The line intersects the x-axis at x = x_0; at this point y = 0, and the equation becomes

  ( f(x_2) − f(x_1) ) / ( x_2 − x_1 ) = −f(x_1) / ( x_0 − x_1 )

Rearrangement leads to

  x_0 = x_1 − f(x_1) (x_2 − x_1) / ( f(x_2) − f(x_1) )

This is the false-position formula. Note that x_0 is obtained as a correction to x_1.

The difference between the bisection and false position methods is the way x_0 is calculated.
False Position method to find out the roots of an equation

Let's consider the same polynomial as we used for the Bisection method: f(x) = x^2 − 4x − 10

o All roots must satisfy the relation

  |x*| ≤ sqrt( (a_{n-1}/a_n)^2 − 2 (a_{n-2}/a_n) ) = sqrt( (−4/1)^2 − 2 (−10/1) ) = 6

o Real roots lie within the interval: [−6, +6]

o Let's find out the approximate location of the roots:

  x    :  -6  -5  -4  -3  -2  -1    0    1    2    3    4   5  6
  f(x) :  50  35  22  11   2  -5  -10  -13  -14  -13  -10  -5  2

We will find out the root between −2 and −1.


Graphical method to get an idea of the nature of the function

[Figure: left panel, plot of f(x) = x^2 − 4x − 10 for x from −10 to 10 showing Root 1 and Root 2; right panel, the same function zoomed in near Root 1, for x between −2 and −1]
Example

Iteration 1:   x_0 = x_1 − f(x_1)(x_2 − x_1) / ( f(x_2) − f(x_1) )

  x_1 = −2,  f(x_1) = 2
  x_2 = −1,  f(x_2) = −5

  x_0 = −2 − 2(−1 + 2)/(−5 − 2) = −2 − 2/(−7) = −1.71429

Iteration 2:   f(−2) × f(−1.71429) is negative and f(−1) × f(−1.71429) is positive,
  so the root lies between −2 and −1.71429, and x_0 = −1.71429 becomes the new x_2.

  f(x_2) = f(−1.71429) = −0.204050

  x_0 = −2 − 2(−1.71429 + 2)/(−0.204050 − 2) = −2 − 0.57142/(−2.204050) = −2 + 0.25926 = −1.74074
A FORTRAN code for the False Position method
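As with the bisection code, the lecture's program is not reproduced here; the following is a minimal sketch under the same assumptions (illustrative names, tolerance, and iteration limit), using the false-position formula for x_0 in place of the midpoint, for the bracket [−2, −1] used in the example above.

! Minimal false-position sketch for f(x) = x**2 - 4*x - 10 on [-2.0, -1.0].
program false_position
  implicit none
  real :: x1, x2, x0, tol
  integer :: i, maxit
  x1 = -2.0; x2 = -1.0
  tol = 1.0e-6; maxit = 100
  if (f(x1)*f(x2) > 0.0) stop 'Initial guesses do not bracket a root'
  do i = 1, maxit
     ! False-position estimate: intersection of the chord with the x-axis
     x0 = x1 - f(x1)*(x2 - x1)/(f(x2) - f(x1))
     if (abs(f(x0)) <= tol) exit
     if (f(x0)*f(x1) < 0.0) then           ! root lies between x1 and x0
        x2 = x0
     else                                  ! root lies between x0 and x2
        x1 = x0
     end if
  end do
  print *, 'Root =', x0, ' after', i, 'iterations'
contains
  real function f(x)
    real, intent(in) :: x
    f = x**2 - 4.0*x - 10.0
  end function f
end program false_position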
Error estimate: comparison with the Bisection method

True percent relative error:

  ε_t = | (x_r − x_r^i) / x_r | × 100%

The error for false position decreases much faster than for bisection because of its more efficient scheme for locating the root.

Run the two FORTRAN codes (bisection and false position). You will see that the False Position method converges in fewer iterations.
Drawbacks of the False Position method

Let's take this example: f(x) = x^10 − 1

Results for the Bisection method:
Results for the False Position method:
[Iteration tables shown on the slides]

This is due to the shape of the function: one end of the bracket stays fixed, the iterations approach the root from one side only, and the approximate error is therefore smaller than the true error.

Output of the FORTRAN codes

Bisection Method: converged in 21 iterations
False Position Method: not converged in 7,965,570 iterations
Convergence of the False Position method

Similar to the Bisection method, the False Position method converges linearly:

  E_{i+1} = E_i (x_i − b) f''(R) / f'(R)

where b is the end of the bracket that stays fixed during the iterations and R is some point in the interval between x_i and b.
Now we will move to the Newton-Raphson method

Recall the classification of iterative methods given earlier:
o Bracketing methods (Bisection, False Position) start with two initial guesses that bracket the root and systematically reduce the width of the bracket.
o Open-end methods need only a single initial guess (two guesses are possible, but they need not bracket the root). The Newton-Raphson, Secant, Muller's, Fixed-point, and Bairstow's methods fall under this category.
Newton-Raphson method

Let's consider that x_1 is the first approximation to the root of the function f(x). Draw a tangent to the curve f(x) at x = x_1. The point where this tangent intersects the x-axis gives the second approximation to the root.
Newton-Raphson formula

o If the point where the tangent intersects the x-axis is x_2, the slope of the tangent is

  tan α = ( f(x_1) − f(x_2) ) / ( x_1 − x_2 ) = f'(x_1)

o x_2 is where the tangent meets the x-axis, so we set f(x_2) = 0
o f'(x_1) is the slope of f(x) at x = x_1

Rearrangement of the above equation gives the Newton-Raphson formula:

  x_2 = x_1 − f(x_1)/f'(x_1)

The next approximation would be:  x_3 = x_2 − f(x_2)/f'(x_2)

In general, for the n-th iteration:  x_{n+1} = x_n − f(x_n)/f'(x_n)
Newton-Raphson method to find out the roots of an equation

Let's consider a polynomial: f(x) = x^2 − 4x − 10

o All roots must satisfy the relation

  |x*| ≤ sqrt( (a_{n-1}/a_n)^2 − 2 (a_{n-2}/a_n) ) = sqrt( (−4/1)^2 − 2 (−10/1) ) = 6

o Real roots lie within the interval: [−6, +6]

o Let's find out the approximate location of the roots:

  x    :  -6  -5  -4  -3  -2  -1    0    1    2    3    4   5  6
  f(x) :  50  35  22  11   2  -5  -10  -13  -14  -13  -10  -5  2

We will look for the root in the vicinity of −1.
After a few iterations, the Newton-Raphson method gives a result close to that of the Bisection method

f(x) = x^2 − 4x − 10,   f'(x) = 2x − 4

Newton-Raphson formula: x_2 = x_1 − f(x_1)/f'(x_1)

Start with x_1 = −1:
  f(−1) = (−1)^2 − 4(−1) − 10 = −5,  f'(−1) = 2(−1) − 4 = −6
  x_2 = x_1 − f(x_1)/f'(x_1) = −1 − (−5)/(−6) = −1.8333

x_2 = −1.8333:
  f(−1.8333) = 0.69418889,  f'(−1.8333) = −7.6666
  x_3 = x_2 − f(x_2)/f'(x_2) = −1.8333 − 0.69418889/(−7.6666) = −1.74275

x_3 = −1.74275:
  f(−1.74275) = 0.00817756,  f'(−1.74275) = −7.4855
  x_4 = x_3 − f(x_3)/f'(x_3) = −1.74275 − 0.00817756/(−7.4855) = −1.741657

Newton-Raphson Method after 4 iterations: −1.741657
Bisection Method after 7 iterations: −1.7416
Convergence of the Newton-Raphson method

o The error is roughly proportional to the square of the error in the previous iteration:

  E_{n+1} = f''(R) / ( 2 f'(x_n) ) × E_n^2

o The Newton-Raphson method is said to have quadratic convergence

On the other hand, the Bisection method is linearly convergent:

  E_{n+1} = Δx/2^{n+1} = E_n/2

o The Newton-Raphson method may not converge if the initial guess is too far from the expected root. When it does converge, it converges faster than the Bisection method.
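For completeness, the quadratic error relation can be obtained from a Taylor expansion about the current iterate (a standard derivation, not shown on the slides). Writing the true root as r = x_n + E_n and expanding f(r) = 0 about x_n:

  0 = f(r) = f(x_n) + f'(x_n) E_n + \frac{1}{2} f''(R) E_n^2

Dividing by f'(x_n) and using x_{n+1} = x_n − f(x_n)/f'(x_n) gives

  E_{n+1} = r − x_{n+1} = −\frac{f''(R)}{2 f'(x_n)} E_n^2,

which is the relation quoted above (the bounds on the slide are taken in magnitude, so the sign is dropped).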
A FORTRAN code for the Newton-Raphson Method
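As before, the lecture's code is not reproduced; this is a minimal sketch (illustrative names, tolerance, and iteration limit) for the example f(x) = x^2 − 4x − 10 with f'(x) = 2x − 4, starting from x_1 = −1 as in the worked example above.

! Minimal Newton-Raphson sketch for f(x) = x**2 - 4*x - 10, starting at x = -1.
program newton_raphson
  implicit none
  real :: x, fx, dfx, tol
  integer :: i, maxit
  x = -1.0
  tol = 1.0e-6; maxit = 50
  do i = 1, maxit
     fx  = x**2 - 4.0*x - 10.0            ! f(x_n)
     dfx = 2.0*x - 4.0                    ! f'(x_n)
     if (abs(dfx) < 1.0e-12) stop 'Derivative too close to zero'
     x = x - fx/dfx                       ! Newton-Raphson update
     if (abs(fx/dfx) <= tol) exit         ! stop when the correction is small
  end do
  print *, 'Root =', x, ' after', i, 'iterations'
end program newton_raphson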
Limitations of the Newton-Raphson method
o In some situations, division by zero may occur when f'(x_n) = 0 (for example, for f(x) = x^2 − 4x − 10 an iterate at x = 2 gives f'(2) = 0).
o The initial guess should not be too far from the expected root.
o A particular value in the iteration sequence may repeat, which leads to an infinite loop.

THANK YOU!
