Lecture 6
Newton's Method
Brian G. Higgins
Department of Chemical Engineering & Materials Science
University of California, Davis
April 2014, Hanoi, Vietnam
Consider the fixed point iteration x_{n+1} = f(x_n), and suppose p is a fixed point, p = f(p). Expanding f(x_n) in a Taylor series about p and keeping only the linear term gives

f(x_n) ≈ f(p) + (∂f/∂x)_p (x_n − p)

so that

x_{n+1} = p + (∂f/∂x)_p (x_n − p)

Simplifying gives

x_{n+1} − p = (∂f/∂x)_p (x_n − p)    (1)

To find a solution to the above expression we look for solutions of the form x_n − p = q^n u. Substituting into (1) shows that q = (∂f/∂x)_p. Hence x_n → p as n → ∞ if

|(∂f/∂x)_p| < 1

and x_n moves away from p as n → ∞ if

|(∂f/∂x)_p| > 1

To summarize:

(i) If |(∂f/∂x)_p| < 1, then p is an attracting (stable) fixed point.

(ii) If |(∂f/∂x)_p| > 1, then p is a repelling (unstable) fixed point.

(iii) If |(∂f/∂x)_p| = 1, then the linearized analysis is inconclusive.

We can also define a basin of attraction for a fixed point: Suppose p is a fixed point, then the basin of attraction of p consists of all x such that f^[n](x) → p as n increases without bound.
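As a quick numerical illustration of the stability criterion (a sketch that is not part of the original notes; the map Cos[x] and the starting value 0.5 are illustrative choices):

(* Example map with a single attracting fixed point near 0.739 *)
fMap[x_] := Cos[x]
pFix = x /. FindRoot[fMap[x] == x, {x, 0.7}]   (* locate the fixed point numerically *)
Abs[fMap'[pFix]]                               (* ≈ 0.674 < 1, so the fixed point is attracting *)
NestList[fMap, 0.5, 10]                        (* iterates approach pFix, consistent with the criterion *)

For this particular map the basin of attraction is the entire real line, since the first iterate always lands in [−1, 1], where |fMap'(x)| < 1.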
Example 4:
Consider the 1-D map

x_{n+1} = f[x_n] ≡ m x_n (1 − x_n)

First we need to find the fixed points of the map, i.e.

p = f[p] = m p (1 − p)

Solving for p we find

(i) p = 0

(ii) 1 = m − m p  ⟹  p = (m − 1)/m

The derivative of the map is

(∂f/∂x)_p = m (1 − 2 p)

If p = 0, then

(∂f/∂x)_{p=0} = m (1 − 2 p)|_{p=0} = m

Thus the fixed point is stable if −1 < m < 1. For the fixed point p = (m − 1)/m, the Jacobian becomes

(∂f/∂x)_{p=(m−1)/m} = m (1 − 2 p)|_{p=(m−1)/m} = 2 − m

Hence p = (m − 1)/m is attracting if |2 − m| < 1, that is, for 1 < m < 3.
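A minimal numerical check of these stability ranges (a sketch, not part of the original notes; the parameter values m = 0.5 and m = 2.5 and the starting value 0.3 are illustrative choices):

(* Logistic map with parameter m *)
logistic[m_][x_] := m x (1 - x)
NestList[logistic[0.5], 0.3, 10]   (* -1 < m < 1: iterates decay toward the fixed point p = 0 *)
NestList[logistic[2.5], 0.3, 10]   (* 1 < m < 3: iterates approach p = (m - 1)/m = 0.6 *)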
Example 1
In the first example we consider

f(x) = e^x − 3 x^2 = 0

which we transform to

x = g(x) ≡ ln(3) + ln(x^2)

First let us plot this function f(x) for a range of values of x
Plot[Exp[x] - 3 x^2, {x, -3, 5}, Frame -> True, FrameLabel -> {"x", "f(x)"}, PlotStyle -> Blue]
[Plot of f(x) = e^x − 3 x^2 for −3 ≤ x ≤ 5]
It follows from the plot that f(x) = 0 has 1 negative and 2 positive roots. For reference the roots are

p1 = −0.458962,  p2 = 0.910008,  p3 = 3.73308

In the next plot we show a plot of the LHS (= x) and the RHS (= ln(3) + ln(x^2)) for a range of x values. (Note: in Mathematica ln(x) is expressed as Log[x].)
Plot[{x, Log[3] + Log[x^2]}, {x, -3, 5}, Frame -> True,
 FrameLabel -> {"x", "x,g(x)"}, PlotStyle -> {Red, Blue}, PlotRange -> {-10, 5}]
[Plot of x and g(x) = ln(3) + ln(x^2) for −3 ≤ x ≤ 5]
As before we see that there are 3 roots (these are fixed points of g(x) such that p = g(p)). Let us use NestList to generate the sequence for 30 iterations starting with x0 = 1. Here is the result

g[x_] := Log[3] + Log[x^2]

NestList[g, 1, 30] // N

{1., 1.09861, 1.28671, 1.60279, 2.0421, 2.52657, 2.95234, 3.26381, 3.4644, 3.58369, 3.6514, 3.68883, 3.70923, 3.72026, 3.7262, 3.72939, 3.7311, 3.73202, 3.73251, 3.73277, 3.73292, 3.73299, 3.73303, 3.73305, 3.73307, 3.73307, 3.73308, 3.73308, 3.73308, 3.73308, 3.73308}
Note that we have converged to a fixed point p3 ≈ 3.73308…, which is one of the desired roots of f(x) = 0. We can readily test other initial values for x0 to find that the iteration always converges to the same fixed point.
NestList[g, -2, 30] // N

{-2., 2.48491, 2.91908, 3.24115, 3.45047, 3.57563, 3.6469, 3.68637, 3.70789, 3.71954, 3.72581, 3.72918, 3.73099, 3.73196, 3.73248, 3.73276, 3.73291, 3.73299, 3.73303, 3.73305, 3.73306, 3.73307, 3.73307, 3.73308, 3.73308, 3.73308, 3.73308, 3.73308, 3.73308, 3.73308, 3.73308}
Consequently, our iteration method does not allow us to determine the other 2 roots. Here is a graphical
representation of the iterations, starting with x=8
[Plot showing the iterations of x and g(x) starting from x = 8]
Let us now check the local stability of the fixed points. For this calculation we need to compute |(∂g/∂x)_p|, which is given by

Abs[g'[x]]

2/Abs[x]
Let us evaluate the derivative at the fixed points {p1 = −0.458962, p2 = 0.910008, p3 = 3.73308}

Map[Abs[g'[#]] &, {-0.458962, 0.910008, 3.73308}]

{4.35766, 2.19778, 0.535751}
It follows that only p3 = 3.73308 is a stable fixed point, and for this reason our iteration scheme converges to this value.
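For comparison (this rearrangement is not in the original notes), the same equation e^x = 3 x^2 can also be rearranged as x = −√(e^x/3); the magnitude of the derivative of this map at p1 is about 0.23, so iterating it converges to the negative root instead:

gNeg[x_] := -Sqrt[Exp[x]/3]   (* alternative iteration function; the name gNeg is ours *)
Abs[gNeg'[-0.458962]]         (* ≈ 0.23 < 1, so p1 is an attracting fixed point of gNeg *)
NestList[gNeg, 1., 15]        (* iterates approach p1 = -0.458962 *)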
Example 2
In this example we consider the following equation we wish to solve

f(x) = 0,  with  f(x) = e^(−x) − 3 x

Setting f(x) = 0 and solving for x gives the iteration function

x = g1(x) ≡ (1/3) e^(−x)

g1[x_] := (1/3) Exp[-x]
Plot[{x, g1[x]}, {x, 0, 1}, Frame -> True,
 FrameLabel -> {"x", "x,g1(x)"}, PlotStyle -> {Red, Blue}]
[Plot of x and g1(x) for 0 ≤ x ≤ 1]
We see that there is a single positive root. Let us generate a sequence using x0 = 0.2, which is approximately the value of the root.
NestList[g1, 0.2, 20] // N

{0.2, 0.27291, 0.25372, 0.258636, 0.257368, 0.257695, 0.25761, 0.257632, 0.257627, 0.257628, 0.257628, 0.257628, 0.257628, 0.257628, 0.257628, 0.257628, 0.257628, 0.257628, 0.257628, 0.257628, 0.257628}
It is evident the sequence converges to the desired fixed point, and a quick check on stability confirms that the fixed point is stable, viz., |(∂g1/∂x)_p| < 1

Abs[g1'[0.257628]]

0.257628
Now consider a different rearrangement of f(x) = 0, namely x = g2(x) ≡ e^(−x) − 2 x:

g2[x_] := Exp[-x] - 2 x

Plot[{x, g2[x]}, {x, 0, 1}, Frame -> True,
 FrameLabel -> {"x", "x,g2(x)"}, PlotStyle -> {Red, Blue}]
[Plot of x and g2(x) for 0 ≤ x ≤ 1]
It is evident from the plot that the function g2(x) has the same fixed point
NestList[g2, 0.2, 7] // N

{0.2, 0.418731, -0.17958, 1.55588, -2.90075, 23.9892, -47.9784, 6.86679×10^20}
It is clear that after 7 iterations our sequence is diverging. A check on the stability of the fixed point confirms that |(∂g2/∂x)_p| > 1

Abs[g2'[0.257628]]

2.77288
Summary
Thus when we construct an iteration function, we must ensure that the fixed point defined by the iteration function is stable. If not, our iteration scheme will fail. This assumes of course that we know the value of the fixed point. In general this is not the case (if we did, there would be no reason to use an iteration method!). Thus in practice we can attempt to estimate the root and then use the estimate to check on stability. One way of estimating the root is by plotting the function g(x) over a range of values of x.
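A sketch of that workflow, using the two iteration functions g1 and g2 from Example 2 (the crossing estimate 0.26 is simply read off the plot, not computed):

(* Step 1: plot y = x against the candidate iteration functions to locate the crossing *)
Plot[{x, g1[x], g2[x]}, {x, 0, 1}, Frame -> True, FrameLabel -> {"x", "x, g1(x), g2(x)"}]
(* Step 2: test the stability criterion at the estimated crossing *)
xEst = 0.26;
{Abs[g1'[xEst]], Abs[g2'[xEst]]}   (* g1 satisfies |g'| < 1 at the estimate; g2 does not *)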
To derive Newton's method we expand f(x) in a Taylor series about the current iterate x_k:

f(x) = f(x_k) + (∂f/∂x)_{x_k} (x − x_k) + ⋯

We next set x = p, so that Δx ≡ x − x_k = p − x_k represents the deviation from the root of f(x). This means that

f(p) = 0

It then follows that

f(x_k) + (∂f/∂x)_{x_k} (p − x_k) ≈ 0

so that

p ≈ x_k − f(x_k)/(∂f/∂x)_{x_k}    (2)

The RHS of Eq. (2) is our approximation to the root. Our task then is to find an iterate of g(x) such that the RHS of (2) is equal to p. More generally we can write

x_{k+1} = g(x_k) ≡ x_k − f(x_k)/(∂f/∂x)_{x_k}    (3)

For this fixed point iteration to converge we require |(∂g/∂x)_p| < 1. Taking

g(x) = x − f(x)/(∂f/∂x)

we find

∂g/∂x = 1 − [(∂f/∂x)(∂f/∂x) − f (∂²f/∂x²)]/(∂f/∂x)² = f (∂²f/∂x²)/(∂f/∂x)²

Thus we require

| f (∂²f/∂x²)/(∂f/∂x)² |_p < 1

for the fixed point iteration to converge. Recall that at the fixed point f(p) = 0. Hence the convergence requirement is satisfied unless (∂f/∂x)_p = 0.
Consider again the function f(x) = e^x − 3 x^2 from Example 1.

[Plot of f(x) = e^x − 3 x^2 showing its three roots, as in Example 1]
At each root, (∂f/∂x)_p ≠ 0. Thus our fixed point iteration should converge. Let us test it out using the following functions

f[x_] := Exp[x] - 3 x^2

g[x_] := x - f[x]/f'[x]
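As a quick check (the starting values 1.0, −1.0, and 4.0 below are our own illustrative choices), one can iterate g from points near each of the three roots:

NestList[g, 1.0, 8]    (* should converge to the root near 0.910008 *)
NestList[g, -1.0, 8]   (* should converge to the negative root near -0.458962 *)
NestList[g, 4.0, 8]    (* should converge to the root near 3.73308 *)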
This iteration method for finding roots is called Newton's method. A potential drawback of Newton's method is that it requires a calculation of the derivative of the function. When applied to large sets of equations this can be a time-consuming calculation.
e_k = p − x_k    (4)

We can think of e_k as the amount that must be added to x_k to get the value of p. An algorithm converges if successive e_k become smaller, i.e.

|e_{k−1}| > |e_k| > |e_{k+1}|

That is

|e_k| → 0, as k → ∞
Since we do not in general know p at the outset, the error indicator defined by (4) is not terribly useful. Instead we use information about the iterates to judge accuracy. Since we know

x_0, x_1, x_2, …, x_k, x_{k+1}, …
we therefore form the increments from the iterate values

Δx_0 = x_1 − x_0,  Δx_1 = x_2 − x_1,  Δx_2 = x_3 − x_2,  …,  Δx_k = x_{k+1} − x_k,  …

Now if the convergence is rapid we would expect

Δx_k = x_{k+1} − x_k ≈ p − x_k = e_k,  if  |e_{k+1}| << |e_k|

Thus a judgement of when to stop the iteration is when Δx_k is small in some suitably defined way. We note that the most recent available increment, Δx_{k−1} = x_k − x_{k−1}, approximates the error of x_{k−1} and not of the latest iterate x_k. Further, Δx_{k−1}/x_{k−1} approximates the relative error of x_{k−1}. So in summary we can say
Absolute Difference Test:  |Δx_k| ≤ C × 10^(−N)

Relative Difference Test:  |Δx_k / x_k| ≤ C × 10^(−N)

where C is a suitable constant, usually taken as 1.0, and N is normally less than the machine precision, usually 16 digits. Note that the absolute difference test depends on the size of p. If p is say 60000, then on a machine with machine precision of 16, we can expect at most 11 digits of accuracy after the decimal point. Recall that Mathematica's Accuracy function gives you this information

{Accuracy[60000.0], Accuracy[0.06]}

{11.1764, 17.1764}
As a rule, then, it makes sense to use the relative difference test rather than the absolute difference test to stop an iteration algorithm.
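In Mathematica one convenient way to impose a relative difference test (a sketch, not from the original notes; the tolerance 10^-8 is an arbitrary choice) is the SameTest option of FixedPointList, applied here to the Newton map g defined above:

(* Iterate until two successive iterates agree to a relative tolerance of 10^-8 *)
FixedPointList[g, 1.0, 50, SameTest -> (Abs[#2 - #1]/Abs[#2] < 10.^-8 &)]

FixedPointList stops as soon as the test returns True for the last two results, or after 50 iterations.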
Example
Consider the following function

f1(x) = 150047.623 e^(0.005 x) − 800.135 e^(0.005 x) x + e^(0.005 x) x^2

We want to find the roots of this function using Newton's method. Let us define the function

Plot of Function

f1[x_] := 150047.623 Exp[0.005 x] - 800.135 Exp[0.005 x] x + Exp[0.005 x] x^2
[Plot of f1(x) showing roots near x = 300 and x = 500]
We see that the roots are near x = 300 and x = 500. As before we can define Newton's method using the following fixed point iteration algorithm based upon

x_{k+1} = x_k − f1(x_k)/f1'(x_k)

Newton's Method

g1[x_] := x - f1[x]/f1'[x]

Let us try out this iteration using NestList with x0 = 410 and 10 iterations

sol1 = NestList[g1, 410, 10]
{410, 76.1101, 624.259, 562.295, 522.121, 503.91, 500.238, 500.1, 500.099, 500.099, 500.099}
We converge to the root at x = 500.099. We can check on the accuracy of our result by evaluating the Accuracy of the last few iterates. First we can determine the number of digits after the decimal place

Accuracy[sol1[[11]]]

13.2555
So we have 13 digits of accuracy after the decimal point. The actual value of the root stored in the
computer is
sol2 = sol1[[{9, 10, 11}]]; sol2 // InputForm

{500.09940269270874, 500.09940269234096, 500.09940269234113}
Next we can evaluate the value of the function at the 9th, 10th, and 11th iterates

Map[f1[x] /. x -> # &, sol2]

{8.96864×10^-7, -4.65661×10^-10, 0.}
Clearly we can readily increase the accuracy by tightening the tolerance on Δx_k. If we want to use the relative error test we proceed as follows

For[i = 1; x0 = 410; Dx = 1, Abs[Dx/x0] > 0.001, i++,
 x1 = N[x0 - f1[x0]/f1'[x0]]; Print[x1]; Dx = x1 - x0; x0 = x1]
76.1101
624.259
562.295
522.121
503.91
500.238
500.1
f1[x] /. x -> x0

8.96864×10^-7
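The same stopping rule can also be written without an explicit loop (a sketch, not in the original notes) using NestWhileList, which keeps iterating while the relative change between the last two iterates exceeds 0.001:

(* the final argument 2 passes the last two iterates to the test function *)
NestWhileList[N[# - f1[#]/f1'[#]] &, 410, Abs[#2 - #1]/Abs[#1] > 0.001 &, 2]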
Example 1
Consider the following recursive algorithm studied earlier

x_{n+1} = (1/b) [(b − 1) x_n + a/x_n]
Recall this formula is used to compute the square root of a real number a.
We define the function

g3[b_, x_] := (1/b) ((b - 1) x + 78.8/x)
With this in mind we use NestList with a pure function taking the value of b = 1.5, and a = 78.8

iterates = NestList[g3[1.5, #] &, 9, 20]

{9, 8.83704, 8.89036, 8.87248, 8.87842, 8.87644, 8.8771, 8.87688, 8.87695, 8.87693, 8.87694, 8.87694, 8.87694, 8.87694, 8.87694, 8.87694, 8.87694, 8.87694, 8.87694, 8.87694, 8.87694}
To form the error estimates Δx_k, we partition the list into overlapping partitions

partList = Partition[iterates, 2, 1]

{{9, 8.83704}, {8.83704, 8.89036}, {8.89036, 8.87248}, {8.87248, 8.87842}, {8.87842, 8.87644}, {8.87644, 8.8771}, {8.8771, 8.87688}, {8.87688, 8.87695}, {8.87695, 8.87693}, {8.87693, 8.87694}, {8.87694, 8.87694}, {8.87694, 8.87694}, {8.87694, 8.87694}, {8.87694, 8.87694}, {8.87694, 8.87694}, {8.87694, 8.87694}, {8.87694, 8.87694}, {8.87694, 8.87694}, {8.87694, 8.87694}, {8.87694, 8.87694}}
Then we use Map to create the error estimate Δx_k at each iteration step

Dx = Map[Abs[#[[2]] - #[[1]]] &, partList]

and form the ratio of successive increments, which approaches a constant

Δx_{k+1}/Δx_k = C_L ≈ 0.333

indicating linear convergence.
We can combine the above code fragments into a single compound statement as follows

LinearConvergenceTest = (iterates = NestList[g3[1.5, #] &, 9, 20];
  partList = Partition[iterates, 2, 1];
  Dx = Map[Abs[#[[2]] - #[[1]]] &, partList];
  Map[#[[2]]/#[[1]] &, Partition[Dx, 2, 1]])

{0.327186, 0.335332, 0.332662, 0.333557, 0.333259, 0.333358, 0.333325, 0.333336, 0.333332, 0.333334, 0.333333, 0.333333, 0.333333, 0.333333, 0.333333, 0.333333, 0.333333, 0.333333, 0.333332}
We can also test for quadratic convergence by simply modifying the last line of the code

QuadraticConvergenceTest = (iterates = NestList[g3[1.5, #] &, 9, 20];
  partList = Partition[iterates, 2, 1];
  Dx = Map[Abs[#[[2]] - #[[1]]] &, partList];
  Map[#[[2]]/#[[1]]^2 &, Partition[Dx, 2, 1]])

{2.00773, 6.28914, 18.6056, 56.0799, 167.977, 504.194, 1512.32, 4537.22, 13611.4, 40834.4, 122503., 367509., 1.10253×10^6, 3.30758×10^6, 9.92275×10^6, 2.97683×10^7, 8.93048×10^7, 2.67914×10^8, 8.0374×10^8}
It is clear that the ratio

Δx_{k+1}/(Δx_k)^2

does not approach a constant C_Q, so the convergence of this iteration is linear rather than quadratic.
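For comparison (a sketch, not part of the original notes), setting b = 2 turns g3 into Newton's method for √78.8, and the quadratic test then does settle near a constant C_Q ≈ 1/(2√78.8) ≈ 0.056 until Δx_k reaches machine precision:

iterates2 = NestList[g3[2, #] &, 9, 4];        (* only a few steps are needed; Δx underflows quickly *)
Dx2 = Abs[Differences[iterates2]];             (* the increments Δx_k *)
Map[#[[2]]/#[[1]]^2 &, Partition[Dx2, 2, 1]]   (* first ratios ≈ 0.056; the last is roundoff-dominated *)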
Example 2
Consider the following function

f(x) = e^x − 3 x^2

Thus we define the following functions

f[x_] := Exp[x] - 3 x^2

g[x_] := x - f[x]/f'[x]

Newton's algorithm converges to the desired root x = −0.4589…. Let us test the convergence rate and see if it is linear
LinearConvergenceTest = (iterates = NestList[N[g[#]] &, -20, 10];
  partList = Partition[iterates, 2, 1];
  Dx = Map[Abs[#[[2]] - #[[1]]] &, partList];
  Map[#[[2]]/#[[1]] &, Partition[Dx, 2, 1]])

{0.5, 0.499844, 0.495419, 0.462451, 0.348747, 0.153461, 0.0243245, 0.000590959, 3.49192×10^-7}
Clearly there is no constant C_L for these iterates. We can also test for quadratic convergence and find
QuadraticConvergenceTest = (iterates = NestList[N[g[#]] &, -20, 10];
  partList = Partition[iterates, 2, 1];
  Dx = Map[Abs[#[[2]] - #[[1]]] &, partList];
  Map[#[[2]]/#[[1]]^2 &, Partition[Dx, 2, 1]])

{0.05, 0.0999688, 0.19823, 0.373498, 0.609073, 0.768503, 0.793766, 0.792798, 0.792706}

The ratios level off near a constant C_Q ≈ 0.79, so Newton's method converges quadratically for this problem.