
Numerical solutions of nonlinear systems of equations

Tsung-Ming Huang

Department of Mathematics
National Taiwan Normal University, Taiwan
E-mail: min@math.ntnu.edu.tw

August 28, 2011




Outline

1 Fixed points for functions of several variables

2 Newton’s method

3 Quasi-Newton methods

4 Steepest Descent Techniques




Fixed points for functions of several variables

Theorem 1
Let f : D ⊂ Rn → R be a function and x0 ∈ D. If all the partial derivatives of f exist and ∃ δ > 0 and α > 0 such that ∀ ‖x − x0‖ < δ and x ∈ D, we have

  |∂f(x)/∂xj| ≤ α, ∀ j = 1, 2, . . . , n,

then f is continuous at x0.

Definition 2 (Fixed Point)


A function G from D ⊂ Rn into Rn has a fixed point at p ∈ D if G(p) = p.


Theorem 3 (Contraction Mapping Theorem)

Let D = {(x1, · · · , xn)T ; ai ≤ xi ≤ bi, ∀ i = 1, . . . , n} ⊂ Rn. Suppose G : D → Rn is a continuous function with G(x) ∈ D whenever x ∈ D. Then G has a fixed point in D.

Suppose, in addition, that G has continuous partial derivatives and a constant α < 1 exists with

  |∂gi(x)/∂xj| ≤ α/n, whenever x ∈ D,

for j = 1, . . . , n and i = 1, . . . , n. Then, for any x(0) ∈ D, the sequence generated by

  x(k) = G(x(k−1)), for each k ≥ 1,

converges to the unique fixed point p ∈ D, and

  ‖x(k) − p‖∞ ≤ (α^k / (1 − α)) ‖x(1) − x(0)‖∞.
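To make the iteration concrete, here is a minimal Python sketch of the iteration x(k) = G(x(k−1)) with the ∞-norm stopping test suggested by the error bound above; the names fixed_point, G, tol and max_iter are illustrative, not from the slides.

```python
import numpy as np

def fixed_point(G, x0, tol=1e-8, max_iter=100):
    # Iterate x^(k) = G(x^(k-1)) until the infinity-norm update falls below tol.
    x = np.asarray(x0, dtype=float)
    for k in range(1, max_iter + 1):
        x_new = G(x)
        if np.linalg.norm(x_new - x, ord=np.inf) < tol:
            return x_new, k
        x = x_new
    raise RuntimeError("fixed-point iteration did not converge")
```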

Example 4
Consider the nonlinear system

  3x1 − cos(x2 x3) − 1/2 = 0,
  x1² − 81(x2 + 0.1)² + sin x3 + 1.06 = 0,
  e^{−x1 x2} + 20x3 + (10π − 3)/3 = 0.

Fixed-point problem: change the system into the fixed-point problem

  x1 = (1/3) cos(x2 x3) + 1/6 ≡ g1(x1, x2, x3),
  x2 = (1/9) √(x1² + sin x3 + 1.06) − 0.1 ≡ g2(x1, x2, x3),
  x3 = −(1/20) e^{−x1 x2} − (10π − 3)/60 ≡ g3(x1, x2, x3).

Let G : R³ → R³ be defined by G(x) = [g1(x), g2(x), g3(x)]T.



G has a unique fixed point in D ≡ [−1, 1] × [−1, 1] × [−1, 1]:

Existence: ∀ x ∈ D,

  |g1(x)| ≤ (1/3)|cos(x2 x3)| + 1/6 ≤ 0.5,
  |g2(x)| = |(1/9)√(x1² + sin x3 + 1.06) − 0.1| ≤ (1/9)√(1 + sin 1 + 1.06) − 0.1 < 0.09,
  |g3(x)| = (1/20)e^{−x1 x2} + (10π − 3)/60 ≤ (1/20)e + (10π − 3)/60 < 0.61,

which implies that G(x) ∈ D whenever x ∈ D.

Uniqueness:

  ∂g1/∂x1 = 0, ∂g2/∂x2 = 0 and ∂g3/∂x3 = 0,

as well as

  |∂g1/∂x2| ≤ (1/3)|x3| · |sin(x2 x3)| ≤ (1/3) sin 1 < 0.281,



  |∂g1/∂x3| ≤ (1/3)|x2| · |sin(x2 x3)| ≤ (1/3) sin 1 < 0.281,
  |∂g2/∂x1| = |x1| / (9√(x1² + sin x3 + 1.06)) < 1/(9√0.218) < 0.238,
  |∂g2/∂x3| = |cos x3| / (18√(x1² + sin x3 + 1.06)) < 1/(18√0.218) < 0.119,
  |∂g3/∂x1| = (|x2|/20) e^{−x1 x2} ≤ (1/20)e < 0.14,
  |∂g3/∂x2| = (|x1|/20) e^{−x1 x2} ≤ (1/20)e < 0.14.

These imply that g1, g2 and g3 are continuous on D and, ∀ x ∈ D,

  |∂gi/∂xj| ≤ 0.281, ∀ i, j.

Similarly, ∂gi/∂xj are continuous on D for all i and j. Consequently, G has a unique fixed point in D.



Approximated solution:
Fixed-point iteration (I):
Choosing x(0) = [0.1, 0.1, −0.1]T, {x(k)} is generated by

  x1(k) = (1/3) cos(x2(k−1) x3(k−1)) + 1/6,
  x2(k) = (1/9) √((x1(k−1))² + sin x3(k−1) + 1.06) − 0.1,
  x3(k) = −(1/20) e^{−x1(k−1) x2(k−1)} − (10π − 3)/60.

Result:

  k    x1(k)        x2(k)        x3(k)          ‖x(k) − x(k−1)‖∞
  0    0.10000000   0.10000000   -0.10000000
  1    0.49998333   0.00944115   -0.52310127    0.423
  2    0.49999593   0.00002557   -0.52336331    9.4 × 10⁻³
  3    0.50000000   0.00001234   -0.52359814    2.3 × 10⁻⁴
  4    0.50000000   0.00000003   -0.52359847    1.2 × 10⁻⁵
  5    0.50000000   0.00000002   -0.52359877    3.1 × 10⁻⁷
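These iterates can be reproduced with a short script; a sketch, assuming NumPy and the fixed_point helper defined after Theorem 3:

```python
import numpy as np

def G(x):
    # Fixed-point map of Example 4, G(x) = [g1(x), g2(x), g3(x)]^T.
    x1, x2, x3 = x
    return np.array([
        np.cos(x2 * x3) / 3.0 + 1.0 / 6.0,
        np.sqrt(x1**2 + np.sin(x3) + 1.06) / 9.0 - 0.1,
        -np.exp(-x1 * x2) / 20.0 - (10.0 * np.pi - 3.0) / 60.0,
    ])

x, k = fixed_point(G, [0.1, 0.1, -0.1], tol=1e-7)
# x approaches the exact solution [0.5, 0, -pi/6], i.e. [0.5, 0.0, -0.52359877].
```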

Approximated solution (cont.):

Accelerate convergence of the fixed-point iteration:

  x1(k) = (1/3) cos(x2(k−1) x3(k−1)) + 1/6,
  x2(k) = (1/9) √((x1(k))² + sin x3(k−1) + 1.06) − 0.1,
  x3(k) = −(1/20) e^{−x1(k) x2(k)} − (10π − 3)/60,

as in the Gauss-Seidel method for linear systems.
Result:

  k    x1(k)        x2(k)        x3(k)          ‖x(k) − x(k−1)‖∞
  0    0.10000000   0.10000000   -0.10000000
  1    0.49998333   0.02222979   -0.52304613    0.423
  2    0.49997747   0.00002815   -0.52359807    2.2 × 10⁻²
  3    0.50000000   0.00000004   -0.52359877    2.8 × 10⁻⁵
  4    0.50000000   0.00000000   -0.52359877    3.8 × 10⁻⁸
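The accelerated sweep only changes which values feed each component; a sketch of the modified map, reusing the assumed helpers above:

```python
def G_seidel(x):
    # Same map as G, but each component reuses the freshest values,
    # as in the Gauss-Seidel method for linear systems.
    x1, x2, x3 = x
    x1 = np.cos(x2 * x3) / 3.0 + 1.0 / 6.0                       # old x2, x3
    x2 = np.sqrt(x1**2 + np.sin(x3) + 1.06) / 9.0 - 0.1          # new x1
    x3 = -np.exp(-x1 * x2) / 20.0 - (10.0 * np.pi - 3.0) / 60.0  # new x1, x2
    return np.array([x1, x2, x3])

x, k = fixed_point(G_seidel, [0.1, 0.1, -0.1], tol=1e-7)
```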



Newton’s method
First consider solving the following system of nonlinear equations:

  f1(x1, x2) = 0,
  f2(x1, x2) = 0.

Suppose (x1(k), x2(k)) is an approximation to the solution of the system above, and we try to compute h1(k) and h2(k) such that (x1(k) + h1(k), x2(k) + h2(k)) satisfies the system. By Taylor's theorem for two variables,

  0 = f1(x1(k) + h1(k), x2(k) + h2(k))
    ≈ f1(x1(k), x2(k)) + h1(k) ∂f1/∂x1(x1(k), x2(k)) + h2(k) ∂f1/∂x2(x1(k), x2(k)),
  0 = f2(x1(k) + h1(k), x2(k) + h2(k))
    ≈ f2(x1(k), x2(k)) + h1(k) ∂f2/∂x1(x1(k), x2(k)) + h2(k) ∂f2/∂x2(x1(k), x2(k)).

Put this in matrix form:

  [ ∂f1/∂x1(x1(k), x2(k))   ∂f1/∂x2(x1(k), x2(k)) ] [ h1(k) ]   [ f1(x1(k), x2(k)) ]   [ 0 ]
  [ ∂f2/∂x1(x1(k), x2(k))   ∂f2/∂x2(x1(k), x2(k)) ] [ h2(k) ] + [ f2(x1(k), x2(k)) ] ≈ [ 0 ].

The matrix

  J(x1(k), x2(k)) ≡ [ ∂f1/∂x1(x1(k), x2(k))   ∂f1/∂x2(x1(k), x2(k)) ]
                    [ ∂f2/∂x1(x1(k), x2(k))   ∂f2/∂x2(x1(k), x2(k)) ]

is called the Jacobian matrix. Let h1(k) and h2(k) be the solution of the linear system

  J(x1(k), x2(k)) [ h1(k) ]     [ f1(x1(k), x2(k)) ]
                  [ h2(k) ] = − [ f2(x1(k), x2(k)) ];

then

  [ x1(k+1) ]   [ x1(k) ]   [ h1(k) ]
  [ x2(k+1) ] = [ x2(k) ] + [ h2(k) ]

is expected to be a better approximation.



In general, we solve the system of n nonlinear equations fi(x1, · · · , xn) = 0, i = 1, . . . , n. Let

  x = [x1, x2, · · · , xn]T

and

  F(x) = [f1(x), f2(x), · · · , fn(x)]T.

The problem can be formulated as solving

  F(x) = 0, F : Rn → Rn.

Let J(x), whose (i, j) entry is ∂fi(x)/∂xj, be the n × n Jacobian matrix. Then Newton's iteration is defined as

  x(k+1) = x(k) + h(k),

where h(k) ∈ Rn is the solution of the linear system

  J(x(k)) h(k) = −F(x(k)).

Algorithm 1 (Newton's Method for Systems)

Given a function F : Rn → Rn, an initial guess x(0) to the zero of F, and stopping criteria M, δ, and ε, this algorithm performs Newton's iteration to approximate one root of F.

Set k = 0 and h(−1) = e1.
While (k < M) and (‖h(k−1)‖ ≥ δ) and (‖F(x(k))‖ ≥ ε)
  Calculate J(x(k)) = [∂fi(x(k))/∂xj].
  Solve the n × n linear system J(x(k)) h(k) = −F(x(k)).
  Set x(k+1) = x(k) + h(k) and k = k + 1.
End while
Output ("Convergent x(k)") or ("Maximum number of iterations exceeded")
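A direct Python transcription of Algorithm 1 might look as follows; it is a sketch, with np.linalg.solve standing in for any linear solver and the stopping test reduced to the step-size criterion:

```python
import numpy as np

def newton_system(F, J, x0, delta=1e-10, M=50):
    # Newton's method for systems: solve J(x^(k)) h^(k) = -F(x^(k)),
    # then set x^(k+1) = x^(k) + h^(k).
    x = np.asarray(x0, dtype=float)
    for k in range(M):
        h = np.linalg.solve(J(x), -F(x))
        x = x + h
        if np.linalg.norm(h, ord=np.inf) < delta:
            return x, k + 1
    raise RuntimeError("maximum number of iterations exceeded")
```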



Theorem 5
Let x∗ be a solution of G(x) = x. Suppose ∃ δ > 0 with
(i) ∂gi/∂xj continuous on Nδ = {x ; ‖x − x∗‖ < δ} for all i and j;
(ii) ∂²gi(x)/(∂xj ∂xk) continuous and |∂²gi(x)/(∂xj ∂xk)| ≤ M for some M whenever x ∈ Nδ, for each i, j and k;
(iii) ∂gi(x∗)/∂xk = 0 for each i and k.
Then ∃ δ̂ < δ such that the sequence {x(k)} generated by

  x(k) = G(x(k−1))

converges quadratically to x∗ for any x(0) satisfying ‖x(0) − x∗‖∞ < δ̂. Moreover,

  ‖x(k) − x∗‖∞ ≤ (n²M/2) ‖x(k−1) − x∗‖∞², ∀ k ≥ 1.

Example 6
Consider the nonlinear system

  3x1 − cos(x2 x3) − 1/2 = 0,
  x1² − 81(x2 + 0.1)² + sin x3 + 1.06 = 0,
  e^{−x1 x2} + 20x3 + (10π − 3)/3 = 0.

Nonlinear functions: Let

  F(x1, x2, x3) = [f1(x1, x2, x3), f2(x1, x2, x3), f3(x1, x2, x3)]T,

where

  f1(x1, x2, x3) = 3x1 − cos(x2 x3) − 1/2,
  f2(x1, x2, x3) = x1² − 81(x2 + 0.1)² + sin x3 + 1.06,
  f3(x1, x2, x3) = e^{−x1 x2} + 20x3 + (10π − 3)/3.

Nonlinear functions (cont.):

The Jacobian matrix J(x) for this system is

  J(x1, x2, x3) = [ 3                 x3 sin(x2 x3)     x2 sin(x2 x3)
                    2x1               −162(x2 + 0.1)    cos x3
                    −x2 e^{−x1 x2}    −x1 e^{−x1 x2}    20 ].

Newton's iteration with initial x(0) = [0.1, 0.1, −0.1]T:

  [x1(k), x2(k), x3(k)]T = [x1(k−1), x2(k−1), x3(k−1)]T − [h1(k−1), h2(k−1), h3(k−1)]T,

where

  [h1(k−1), h2(k−1), h3(k−1)]T = [J(x1(k−1), x2(k−1), x3(k−1))]⁻¹ F(x1(k−1), x2(k−1), x3(k−1)).

Result:

  k    x1(k)        x2(k)        x3(k)          ‖x(k) − x(k−1)‖∞
  0    0.10000000   0.10000000   -0.10000000
  1    0.50003702   0.01946686   -0.52152047    0.422
  2    0.50004593   0.00158859   -0.52355711    1.79 × 10⁻²
  3    0.50000034   0.00001244   -0.52359845    1.58 × 10⁻³
  4    0.50000000   0.00000000   -0.52359877    1.24 × 10⁻⁵
  5    0.50000000   0.00000000   -0.52359877    0
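The table can be reproduced by feeding this system and its Jacobian to the newton_system sketch above (the names F and J are again illustrative):

```python
def F(x):
    x1, x2, x3 = x
    return np.array([
        3.0 * x1 - np.cos(x2 * x3) - 0.5,
        x1**2 - 81.0 * (x2 + 0.1)**2 + np.sin(x3) + 1.06,
        np.exp(-x1 * x2) + 20.0 * x3 + (10.0 * np.pi - 3.0) / 3.0,
    ])

def J(x):
    x1, x2, x3 = x
    return np.array([
        [3.0,                     x3 * np.sin(x2 * x3),    x2 * np.sin(x2 * x3)],
        [2.0 * x1,                -162.0 * (x2 + 0.1),     np.cos(x3)],
        [-x2 * np.exp(-x1 * x2),  -x1 * np.exp(-x1 * x2),  20.0],
    ])

x, k = newton_system(F, J, [0.1, 0.1, -0.1])  # converges to [0.5, 0, -pi/6]
```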




Quasi-Newton methods

Newton's method
  Advantage: quadratic convergence.
  Disadvantage: each iteration requires O(n³) + O(n²) + O(n) arithmetic operations:
    n² partial derivatives to form the Jacobian matrix (in most situations, the exact evaluation of the partial derivatives is inconvenient);
    n scalar functional evaluations of F;
    O(n³) arithmetic operations to solve the linear system.

Quasi-Newton methods
  Advantage: only n scalar functional evaluations are required per iteration, plus O(n²) arithmetic operations.
  Disadvantage: only superlinear (not quadratic) convergence.

Recall that in the one-dimensional case, one uses the linear model

  ℓk(x) = f(xk) + ak(x − xk)

to approximate the function f(x) at xk. That is, ℓk(xk) = f(xk) for any ak ∈ R. If we further require that ℓ′k(xk) = f′(xk), then ak = f′(xk).



The zero of ℓk(x) is used to give a new approximation for the zero of f(x), that is,

  xk+1 = xk − f(xk)/f′(xk),

which yields Newton's method.

If f′(xk) is not available, one instead asks the linear model to satisfy

  ℓk(xk) = f(xk) and ℓk(xk−1) = f(xk−1).

In doing this, the identity

  f(xk−1) = ℓk(xk−1) = f(xk) + ak(xk−1 − xk)

gives

  ak = (f(xk) − f(xk−1)) / (xk − xk−1).

Solving ℓk(x) = 0 yields the secant iteration

  xk+1 = xk − f(xk) (xk − xk−1) / (f(xk) − f(xk−1)).
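In code, the one-dimensional secant iteration is just a few lines; a minimal sketch (names illustrative):

```python
def secant(f, x0, x1, tol=1e-12, max_iter=50):
    # Secant iteration: x_{k+1} = x_k - f(x_k)(x_k - x_{k-1}) / (f(x_k) - f(x_{k-1})).
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)
    raise RuntimeError("secant iteration did not converge")
```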

In multiple dimensions, the analogous affine model becomes

  Mk(x) = F(xk) + Ak(x − xk),

where x, xk ∈ Rn and Ak ∈ Rn×n, and it satisfies

  Mk(xk) = F(xk)

for any Ak. The zero of Mk(x) is then used to give a new approximation for the zero of F(x), that is,

  xk+1 = xk − Ak⁻¹ F(xk).

Newton's method chooses

  Ak = F′(xk) ≡ J(xk), the Jacobian matrix,

and yields the iteration

  xk+1 = xk − [F′(xk)]⁻¹ F(xk).

When the Jacobian matrix J(xk) ≡ F′(xk) is not available, one can require

  Mk(xk−1) = F(xk−1).

Then

  F(xk−1) = Mk(xk−1) = F(xk) + Ak(xk−1 − xk),

which gives

  Ak(xk − xk−1) = F(xk) − F(xk−1);

this is the so-called secant equation. Let

  hk = xk − xk−1 and yk = F(xk) − F(xk−1).

The secant equation becomes

  Ak hk = yk.

However, this secant equation cannot uniquely determine Ak. One way of choosing Ak is to minimize Mk − Mk−1 subject to the secant equation. Note

  Mk(x) − Mk−1(x) = F(xk) + Ak(x − xk) − F(xk−1) − Ak−1(x − xk−1)
                  = (F(xk) − F(xk−1)) + Ak(x − xk) − Ak−1(x − xk−1)
                  = Ak(xk − xk−1) + Ak(x − xk) − Ak−1(x − xk−1)
                  = Ak(x − xk−1) − Ak−1(x − xk−1)
                  = (Ak − Ak−1)(x − xk−1).

For any x ∈ Rn, we express

  x − xk−1 = αhk + tk

for some α ∈ R and tk ∈ Rn with hkT tk = 0. Then

  Mk − Mk−1 = (Ak − Ak−1)(αhk + tk) = α(Ak − Ak−1)hk + (Ak − Ak−1)tk.

Since

  (Ak − Ak−1)hk = Ak hk − Ak−1 hk = yk − Ak−1 hk,

and both yk and Ak−1 hk are old values, we have no control over the first part α(Ak − Ak−1)hk. In order to minimize Mk(x) − Mk−1(x), we try to choose Ak so that

  (Ak − Ak−1)tk = 0 for all tk ∈ Rn with hkT tk = 0.

This requires Ak − Ak−1 to be a rank-one matrix of the form

  Ak − Ak−1 = uk hkT

for some uk ∈ Rn. Then

  uk hkT hk = (Ak − Ak−1)hk = yk − Ak−1 hk,

which gives

  uk = (yk − Ak−1 hk) / (hkT hk).

Therefore,

  Ak = Ak−1 + (yk − Ak−1 hk) hkT / (hkT hk).    (1)

After Ak is determined, the new iterate xk+1 is derived by solving Mk(x) = 0. This can be done by first noting that

  hk+1 = xk+1 − xk ⟹ xk+1 = xk + hk+1

and

  Mk(xk+1) = 0 ⟹ F(xk) + Ak(xk+1 − xk) = 0 ⟹ Ak hk+1 = −F(xk).

These formulations give Broyden's method.



Algorithm 2 (Broyden's Method)

Given an n-variable nonlinear function F : Rn → Rn, an initial iterate x0 and an initial Jacobian approximation A0 ∈ Rn×n (e.g., A0 = I), this algorithm finds a solution of F(x) = 0.

Given x0, tolerance TOL, maximum number of iterations M.
Set k = 1.
While k ≤ M and ‖xk − xk−1‖2 ≥ TOL
  Solve Ak hk+1 = −F(xk) for hk+1.
  Update xk+1 = xk + hk+1.
  Compute yk+1 = F(xk+1) − F(xk).
  Update Ak+1 = Ak + (yk+1 − Ak hk+1) hk+1T / (hk+1T hk+1) = Ak + (yk+1 + F(xk)) hk+1T / (hk+1T hk+1).
  Set k = k + 1.
End While
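A compact NumPy sketch of Algorithm 2, solving the linear system directly at each step; a variant that updates the inverse instead appears below:

```python
import numpy as np

def broyden(F, x0, A0=None, tol=1e-10, M=100):
    # Broyden's method with the rank-one update (1).
    x = np.asarray(x0, dtype=float)
    A = np.eye(len(x)) if A0 is None else np.asarray(A0, dtype=float)
    for k in range(M):
        h = np.linalg.solve(A, -F(x))             # A_k h_{k+1} = -F(x_k)
        x_new = x + h
        y = F(x_new) - F(x)
        A = A + np.outer(y - A @ h, h) / (h @ h)  # A_{k+1} = A_k + (y - A_k h) h^T / (h^T h)
        x = x_new
        if np.linalg.norm(h, ord=np.inf) < tol:
            return x, k + 1
    raise RuntimeError("maximum number of iterations exceeded")
```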

Solve the linear system Ak hk+1 = −F(xk) for hk+1:

  LU factorization: costs (2/3)n³ + O(n²) floating-point operations.
  Applying the Sherman-Morrison-Woodbury formula

    (B + UVT)⁻¹ = B⁻¹ − B⁻¹U(I + VT B⁻¹U)⁻¹ VT B⁻¹

  to (1), we have

    Ak⁻¹ = [Ak−1 + (yk − Ak−1 hk) hkT / (hkT hk)]⁻¹
         = Ak−1⁻¹ − Ak−1⁻¹ ((yk − Ak−1 hk)/(hkT hk)) (1 + hkT Ak−1⁻¹ (yk − Ak−1 hk)/(hkT hk))⁻¹ hkT Ak−1⁻¹
         = Ak−1⁻¹ + (hk − Ak−1⁻¹ yk) hkT Ak−1⁻¹ / (hkT Ak−1⁻¹ yk).
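Tracking Bk = Ak⁻¹ directly turns each iteration into matrix-vector products only; a sketch of this O(n²) variant (the name broyden_inv and the zero-denominator behavior are assumptions of this sketch):

```python
def broyden_inv(F, x0, B0=None, tol=1e-10, M=100):
    # Broyden's method keeping B_k = A_k^{-1} via the Sherman-Morrison update,
    # so each iteration costs O(n^2) instead of O(n^3).
    x = np.asarray(x0, dtype=float)
    B = np.eye(len(x)) if B0 is None else np.asarray(B0, dtype=float)
    Fx = F(x)
    for k in range(M):
        h = -B @ Fx                                  # h_{k+1} = -A_k^{-1} F(x_k)
        x = x + h
        Fx_new = F(x)
        y = Fx_new - Fx
        By = B @ y
        B = B + np.outer(h - By, h @ B) / (h @ By)   # inverse rank-one update
        Fx = Fx_new
        if np.linalg.norm(h, ord=np.inf) < tol:
            return x, k + 1
    raise RuntimeError("maximum number of iterations exceeded")
```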

Newton-based methods
  Advantage: high speed of convergence once a sufficiently accurate approximation is obtained.
  Weakness: an accurate initial approximation to the solution is needed to ensure convergence.

The Steepest Descent method converges only linearly to the solution, but it will usually converge even for poor initial approximations. Strategy: "find a sufficiently accurate starting approximation using the Steepest Descent method", then "compute the convergent solution using a Newton-based method".

The method of Steepest Descent determines a local minimum of a multivariable function g : Rn → R. A system of the form fi(x1, . . . , xn) = 0, i = 1, 2, . . . , n, has a solution at x iff the function g defined by

  g(x1, . . . , xn) = Σ_{i=1}^{n} [fi(x1, . . . , xn)]²

has the minimal value zero.

Basic idea of the steepest descent method:
(i) Evaluate g at an initial approximation x(0);
(ii) Determine a direction from x(0) that results in a decrease in the value of g;
(iii) Move an appropriate distance in this direction and call the new vector x(1);
(iv) Repeat steps (i) through (iii) with x(0) replaced by x(1).

Definition 7 (Gradient)
If g : Rn → R, the gradient of g at x, ∇g(x), is defined by

  ∇g(x) = [∂g/∂x1(x), · · · , ∂g/∂xn(x)].

Definition 8 (Directional Derivative)
The directional derivative of g at x in the direction of v with ‖v‖2 = 1 is defined by

  Dv g(x) = lim_{h→0} (g(x + hv) − g(x))/h = vT ∇g(x).

Theorem 9
The direction of the greatest decrease in the value of g at x is the direction given by −∇g(x).

Objective: reduce g(x) to its minimal value zero. Hence, for an initial approximation x(0), an appropriate choice for the new vector x(1) is

  x(1) = x(0) − α∇g(x(0)), for some constant α > 0.

Choose α > 0 such that g(x(1)) < g(x(0)): define

  h(α) = g(x(0) − α∇g(x(0))),

then find α∗ such that

  h(α∗) = min_α h(α).

How to find α∗?
  Solving the root-finding problem h′(α) = 0 directly is too costly in general.
  Instead, choose three numbers α1 < α2 < α3, construct the quadratic polynomial P(α) that interpolates h at α1, α2 and α3, i.e.,

    P(α1) = h(α1), P(α2) = h(α2), P(α3) = h(α3),

  and let α̂ be the point where P attains its minimum on [α1, α3]; use P(α̂) to approximate h(α∗). The new iterate is

    x(1) = x(0) − α̂∇g(x(0)).

  To minimize the computation (see the sketch after this list):
    Set α1 = 0.
    Find α3 with h(α3) < h(α1).
    Choose α2 = α3/2.
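A sketch of one steepest-descent step with this three-point quadratic line search, following the α1 = 0, α2 = α3/2 recipe above; degenerate cases (e.g., a vanishing h3) are deliberately not handled:

```python
import numpy as np

def steepest_descent_step(g, grad_g, x, tol=1e-5):
    # One step of steepest descent with a quadratic line search.
    z = grad_g(x)
    z0 = np.linalg.norm(z)
    if z0 == 0.0:
        return x                      # zero gradient: stationary point
    z = z / z0                        # move along -z
    g1 = g(x)                         # h(a1) with a1 = 0
    a3, g3 = 1.0, g(x - z)
    while g3 >= g1:                   # shrink a3 until h(a3) < h(a1)
        a3 /= 2.0
        g3 = g(x - a3 * z)
        if a3 < tol:
            return x                  # no further decrease found
    a2 = a3 / 2.0
    g2 = g(x - a2 * z)
    # Quadratic P(a) = g1 + h1*a + h3*a*(a - a2) through (0, g1), (a2, g2), (a3, g3).
    h1 = (g2 - g1) / a2
    h2 = (g3 - g2) / (a3 - a2)
    h3 = (h2 - h1) / a3
    a0 = 0.5 * (a2 - h1 / h3)         # critical point: P'(a0) = 0
    a = a0 if g(x - a0 * z) < g3 else a3
    return x - a * z
```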



Example 10
Use the Steepest Descent method with x(0) = (0, 0, 0)T to find a reasonable starting approximation to the solution of the nonlinear system

  f1(x1, x2, x3) = 3x1 − cos(x2 x3) − 1/2 = 0,
  f2(x1, x2, x3) = x1² − 81(x2 + 0.1)² + sin x3 + 1.06 = 0,
  f3(x1, x2, x3) = e^{−x1 x2} + 20x3 + (10π − 3)/3 = 0.

Let g(x1, x2, x3) = [f1(x1, x2, x3)]² + [f2(x1, x2, x3)]² + [f3(x1, x2, x3)]². Then

  ∇g(x1, x2, x3) ≡ ∇g(x)
  = [ 2f1(x) ∂f1/∂x1(x) + 2f2(x) ∂f2/∂x1(x) + 2f3(x) ∂f3/∂x1(x),
      2f1(x) ∂f1/∂x2(x) + 2f2(x) ∂f2/∂x2(x) + 2f3(x) ∂f3/∂x2(x),
      2f1(x) ∂f1/∂x3(x) + 2f2(x) ∂f2/∂x3(x) + 2f3(x) ∂f3/∂x3(x) ].

For x(0) = [0, 0, 0]T, we have

  g(x(0)) = 111.975 and z0 = ‖∇g(x(0))‖2 = 419.554.

Let

  z = (1/z0) ∇g(x(0)) = [−0.0214514, −0.0193062, 0.999583]T.

With α1 = 0, we have

  g1 = g(x(0) − α1 z) = g(x(0)) = 111.975.

Let α3 = 1, so that

  g3 = g(x(0) − α3 z) = 93.5649 < g1.

Set α2 = α3/2 = 0.5. Thus

  g2 = g(x(0) − α2 z) = 2.53557.

Form the quadratic polynomial P(α), defined as

  P(α) = g1 + h1α + h3α(α − α2),

that interpolates g(x(0) − αz) at α1 = 0, α2 = 0.5 and α3 = 1 as follows:

  g2 = P(α2) = g1 + h1α2 ⟹ h1 = (g2 − g1)/α2 = −218.878,
  g3 = P(α3) = g1 + h1α3 + h3α3(α3 − α2) ⟹ h3 = 400.937.

Thus

  P(α) = 111.975 − 218.878α + 400.937α(α − 0.5),

so that

  0 = P′(α0) = −419.346 + 801.872α0 ⟹ α0 = 0.522959.

Since

  g0 = g(x(0) − α0 z) = 2.32762 < min{g1, g3},

we set

  x(1) = x(0) − α0 z = [0.0112182, 0.0100964, −0.522741]T.
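Since the gradient above is exactly 2 J(x)T F(x), the F and J sketches from Example 6 can drive the line-search step; a usage sketch (assuming those helpers and steepest_descent_step are in scope):

```python
g = lambda x: float(np.sum(F(x)**2))      # g = f1^2 + f2^2 + f3^2
grad_g = lambda x: 2.0 * J(x).T @ F(x)    # gradient of g

x = np.zeros(3)
x = steepest_descent_step(g, grad_g, x)
# One step should give approximately [0.0112182, 0.0100964, -0.522741],
# matching x(1) in the example, up to line-search details.
```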
