Lecture 5
Newton-Raphson Method (NRM)
Methods such as the bisection method and the false position
method of finding roots of a nonlinear equation f(x)= 0 require
bracketing of the root by two guesses. Such methods are called
bracketing methods. These methods are always convergent since
they are based on reducing the interval between the two guesses
so as to zero in on the root of the equation. In the Newton-
Raphson method, the root is not bracketed. In fact, only one
initial guess of the root is needed to get the iterative process
started to find the root of an equation. The method hence falls in
the category of open methods. Convergence in open methods is
not guaranteed but if the method does converge, it does so much
faster than the bracketing methods.
The method is based on the idea of linear approximation. For a
function f(x), the basic idea is to use the tangent line at the current
point x_n to estimate the next point, x_(n+1), which is closer to the root.
Formula
The Newton-Raphson iteration formula is:

x_(n+1) = x_n - f(x_n) / f'(x_n)

where
x_n is the current guess for the root,
f(x_n) is the value of the function at x_n,
f'(x_n) is the value of the derivative of the function at x_n,
x_(n+1) is the next approximation for the root.
Algorithm of Newton-Raphson Method
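In outline, the method evaluates f and f' at the current guess, applies the update formula above, and repeats until successive guesses agree to within a chosen tolerance or an iteration limit is reached. Below is a minimal Python sketch; the name newton_raphson and the parameters tol and max_iter are illustrative choices, not part of the lecture.

def newton_raphson(f, df, x0, tol=1e-6, max_iter=50):
    # Root of f near x0 via the update x_(n+1) = x_n - f(x_n)/f'(x_n).
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        dfx = df(x)
        if dfx == 0:                  # derivative vanishes: the update is undefined
            raise ZeroDivisionError("f'(x) = 0; cannot take a Newton-Raphson step")
        x_new = x - fx / dfx          # Newton-Raphson update
        if abs(x_new - x) < tol:      # successive guesses agree to within tol
            return x_new
        x = x_new
    raise RuntimeError("did not converge within max_iter iterations")

For the function used in Example 1 below, newton_raphson(lambda x: x**3 + 2*x**2 + x - 1, lambda x: 3*x**2 + 4*x + 1, 0.5) returns approximately 0.46557.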
Theorem
Let f:[a,b]→R be any function which is twice
differentiable in (a,b) with only one root α in (a,b).
Let f'(x) and f''(x) denote the first and second order derivatives
of f(x) with respect to x. If α is a simple root and the sequence x_n
is computed by the Newton-Raphson method, then the method
converges if

|f(x) f''(x)| < |f'(x)|^2 for all x in (a,b).
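As a rough numerical check of this criterion (a sketch only: the sample interval [0.4, 0.6] and the use of the function from Example 1 below are assumptions), one can confirm that |f(x) f''(x)| stays below |f'(x)|^2 near the initial guess used later:

# f(x) = x^3 + 2x^2 + x - 1, the function used in Example 1 below
f   = lambda x: x**3 + 2*x**2 + x - 1
df  = lambda x: 3*x**2 + 4*x + 1      # f'(x)
d2f = lambda x: 6*x + 4               # f''(x)

# Check |f(x) f''(x)| < |f'(x)|^2 at sample points in [0.4, 0.6]
xs = [0.4 + 0.01 * i for i in range(21)]
print(all(abs(f(x) * d2f(x)) < df(x) ** 2 for x in xs))   # expected output: True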
Convergence Criteria:
The method typically converges quickly if the initial
guess is close to the actual root and the function is
well-behaved.
However, if the initial guess is too far from the root, or if f'(x) is
very small near a guess, the method may converge slowly, oscillate, or diverge.
Advantages:
Very fast convergence when the initial guess is close to the root.
Requires fewer iterations compared to methods like Bisection or
Regula Falsi.
Disadvantages:
Requires the derivative f′(x), which can be difficult to compute.
May fail to converge if the initial guess is far from the root or if
f'(x) is zero or very small at an iterate.
Oscillations can occur near a local maximum or minimum, where f'(x) is close to zero.
Root jumping: in some cases where the function f(x) is oscillating
and has a number of roots, one may choose an initial guess close to
a root, yet the iterates may jump and converge to some other root,
as illustrated below.
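A quick numerical illustration of root jumping (a sketch; the test function sin(x), whose roots are the multiples of π, and the starting guess 1.5 are assumptions chosen for demonstration):

import math

f  = lambda x: math.sin(x)        # oscillating function with roots at k*pi
df = lambda x: math.cos(x)

x = 1.5                           # initial guess lying between the roots 0 and pi
for n in range(1, 6):
    x = x - f(x) / df(x)          # Newton-Raphson update
    print(n, x)

Although the starting guess sits between the roots 0 and π, the first step lands near -12.6 and the iteration converges to -4π ≈ -12.566, a root far from the initial guess.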
Accuracy: Newton-Raphson is typically the most accurate in terms
of convergence speed and precision, especially when the initial guess
is close to the root.
Example 1
Let f(x) = x^3 + 2x^2 + x - 1, with initial guess x_0 = 0.5.
1st iteration :
f(0.5)=(0.5)^3+2(0.5)^2+0.5-1=0.125
f′(0.5)=3(0.5)^2+4(0.5)+1=3.75
x_1 = 0.5 - (0.125/3.75)
    = 0.4667
Table of iterates for Example 1
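A short sketch to regenerate the iteration table for this example (the stopping tolerance, iteration cap, and printed columns are assumptions):

f  = lambda x: x**3 + 2*x**2 + x - 1
df = lambda x: 3*x**2 + 4*x + 1

x = 0.5                                   # initial guess x_0
print("n      x_n        f(x_n)")
for n in range(1, 6):
    x_new = x - f(x) / df(x)              # Newton-Raphson update
    print(f"{n}   {x_new:.6f}   {f(x_new):+.6f}")
    if abs(x_new - x) < 1e-6:             # assumed stopping tolerance
        break
    x = x_new

With these settings the iterates settle near x ≈ 0.46557, where f(x) ≈ 0.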