Num Method 05 Root Finding I SP20

Lecture 05: Root Finding

MAE 284
Numerical Methods
R. N. Tantaris

1
Root Finding Methods
• Some basic objectives for Chapter 5:
– Understanding what roots problems are and where they
occur in engineering and science.
– Knowing how to determine a root graphically.
– Understanding the incremental search method and its
shortcomings.
– Knowing how to solve a roots problem with the bisection
method.
– Knowing how to estimate the error of bisection and why
it differs from error estimates for other types of root
location algorithms.
– Understanding the false position method and how it
differs from bisection.
3
Root Finding Motivation
• In mathematics, a root (or a zero) of a function f(x) is
a value of x that produces f(x) = 0.
• Likewise, a root or solution of an equation is a value
of x for which the equation holds true.
• Consider an equation of the form
f(x) = c
where c is a constant.
• Depending on what f(x) is, it may be easy to solve:
ax² + bx = −c
• We can easily solve this using the quadratic formula
x1,2 = (−b ± √(b² − 4ac)) / (2a)
4
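As a quick numerical sanity check of the quadratic formula above, here is a minimal Python sketch (the function name is my own, not from the course):

```python
import math

def quadratic_roots(a, b, c):
    """Roots of a*x**2 + b*x + c = 0 via the quadratic formula."""
    disc = b**2 - 4*a*c          # discriminant b^2 - 4ac
    if disc < 0:
        raise ValueError("complex roots; this sketch handles real roots only")
    sq = math.sqrt(disc)
    return (-b + sq) / (2*a), (-b - sq) / (2*a)

print(quadratic_roots(1, -3, 2))   # roots of x^2 - 3x + 2 = 0 are 2 and 1
```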
Root Finding Motivation
• But consider the equation
e^(−0.5x) cos(x − 0.2) = −0.1     (1)
• There isn't an easy way to solve this equation.
• However, suppose we create a new function, g(x),
defined as
g(x) = f(x) − c
or in this case
g(x) = e^(−0.5x) cos(x − 0.2) + 0.1
• Then, if we find the roots of g(x), they will be the
solutions of equation (1).
5
Root Finding Motivation
• Let's say that our engineering problem is only
interested in positive solutions less than 20.

• We can plot the function

g(x) = e^(−0.5x) cos(x − 0.2) + 0.1

for values of x in the range 0 ≤ x ≤ 20.

6
Root Finding Motivation
g(x) = e^(−0.5x) cos(x − 0.2) + 0.1

This function has two zeros
(values of x that result in g(x) = 0)

7
Root Finding Motivation
• We can use the zoom tool (magnifying glass icon) to zoom in
to get a better idea of the value of the roots of
g(x) = e^(−0.5x) cos(x − 0.2) + 0.1
• We can also use the grid command to draw a grid on the plot.
• With these tools we can see one root is between 2 and 2.5
• And the second root is between 4 and 4.5

8
Root Finding Motivation
• We can continue to zoom in until we can obtain an
approximation of the root to a desired tolerance.
𝑥 ≅ 2.05
𝑥 ≅ 2.054
𝑥 ≅ 2.0538

The zero (to 15 decimals) is


x = 2.053798176384282

9
Root Finding Motivation
• Let's check whether x = 2.053798176384282 is a zero
of the function
g(x) = e^(−0.5x) cos(x − 0.2) + 0.1
g(2.053798176384282) = −5.55 × 10⁻¹⁷
which is essentially zero!
• But remember the problem we are trying to solve.
We are trying to find the values of x for which
f(x) = e^(−0.5x) cos(x − 0.2) = −0.1
Let's calculate the value of f(x) for the value of x we
obtained:
f(x) ≅ −0.100000000000000
10
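The check above can be reproduced numerically. A short Python sketch (function name mine) of the g(x) defined on the previous slides:

```python
import math

def g(x):
    # g(x) = e^(-0.5x) * cos(x - 0.2) + 0.1, the function whose zero we seek
    return math.exp(-0.5 * x) * math.cos(x - 0.2) + 0.1

x = 2.053798176384282
print(g(x))          # essentially zero (the slide reports -5.55e-17)
print(g(x) - 0.1)    # f(x) = g(x) - 0.1, which should be about -0.1
```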
Root Finding Methods
Nonlinear Equation Solvers

• Graphical Methods – useful for obtaining quick rough estimates
• Bracketing Methods – require two initial guesses to surround the root
  – Bisection
  – False Position
• Open Methods – trial and error using one initial guess
  – Newton-Raphson
  – Secant

Bracketing and open methods are iterative.
11
Graphical Method
• An example: f(x) = sin(10x) + cos(3x)

>> x=linspace(0,5,100);
>> f=sin(10*x)+cos(3*x);
>> plot(x,f)
>> xlabel('X')
>> ylabel('Y')

A closer look at some of the roots:
>> x=linspace(3,5,100);
>> f=sin(10*x)+cos(3*x);
>> plot(x,f)

Look at just the 2 distinct roots between 4.2 and 4.3:
>> x=linspace(4.2,4.3,100);
>> f=sin(10*x)+cos(3*x);
>> plot(x,f)
>> grid
13
Textbook Example 5.1
• Your supervisor at a bungee cord manufacturer has
given you the task of determining the maximum
allowable mass of a bungee jumper.
• Strength calculations and tests have shown that the
cord will not break for a mass less than 500 kg.
• However, recent accidents have shown you must also
take into account injuries.
• After researching the issue, you find that medical
studies have shown that the chances of injury in a
bungee jumper increase dramatically if the free-fall
velocity exceeds 36 m/s after 4 s.
14
Textbook Example 5.1
• So now you need to calculate the velocity of a bungee
jumper as a function of the mass of the bungee
jumper, 𝑚.
• You use your knowledge of physics and dynamics to
determine that the velocity of a bungee jumper as a
function of time is:
v(t) = √(mg/C_d) · tanh(√(C_d·g/m) · t)

• Your company has determined that the drag
coefficient of an average bungee jumper is 0.25 kg/m.
15
Textbook Example 5.1
• At this point it is a good idea to write a problem
statement and list everything you know:

Determine the mass, m, of a bungee jumper with a drag


coefficient, cd, of 0.25 [kg/m] which results in a velocity
of 36 [m/s] after 4 [s] of free fall.
v(t) = 36 m/s        t = 4 s
C_d = 0.25 kg/m      g = 9.81 m/s²

v(t) = √(mg/C_d) · tanh(√(C_d·g/m) · t)
16
Textbook Example 5.1
• So we need to determine the value of the mass, m, that satisfies
the equation
36 = √((9.81)m/(0.25)) · tanh(√((0.25)(9.81)/m) · (4))

36 = √(39.24·m) · tanh(4·√(2.4525/m))

• So what do we do?
• First, we need to see that this equation is in the form
f(m) = b
and recall that we can use the graphical method to solve it.
17
Textbook Example 5.1
• As before, we need to write
f(m) = b
as
g(m) = f(m) − b
• We rewrite the equation

36 = √(39.24·m) · tanh(4·√(2.4525/m))

as

g(m) = √(39.24·m) · tanh(4·√(2.4525/m)) − 36

18
Textbook Example 5.1
• Use the graphical approach to determine the mass, m,
of the bungee jumper with a drag coefficient, cd, of
0.25 [kg/m] to have a velocity of 36 [m/s] after 4 [s]
of free fall. The acceleration of gravity, g, in SI units
is 9.81 [m/s²]. Given the following function:

v(t) = √(gm/c_d) · tanh(√(g·c_d/m) · t)

This is what we are trying to drive to zero:

g(m) = √(gm/c_d) · tanh(√(g·c_d/m) · t) − 36
19
Textbook Example 5.1
• Plotting the function in MATLAB:

Note: The function


crosses the m-axis
between 140 and 150 kg.
Rough estimate of the
root is 145 kg.

20
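The graphical estimate can be confirmed by evaluating g(m) at the ends of the suspected bracket. A small Python sketch (names mine), using the constants already folded in (39.24 = g/c_d and 2.4525 = g·c_d):

```python
import math

def g(m):
    # g(m) = sqrt(39.24*m) * tanh(4*sqrt(2.4525/m)) - 36
    return math.sqrt(39.24 * m) * math.tanh(4 * math.sqrt(2.4525 / m)) - 36

# A sign change between 140 and 150 kg confirms the graphical estimate
print(g(140), g(150))   # negative, then positive
```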
Textbook Example 5.1
• So what do you tell your boss?

21
Bracketing Methods

22
Bracketing Methods
• Bracketing methods, sometimes referred to as two
point methods, require two initial guesses for the zero
(also called the root).
• These guesses must “bracket” or be on either side of
the root.
• Bracketing methods always work but converge
slowly (more iterations).
• For a single problem this may not be a consideration.
• But what if you had to find thousands of roots (or
millions)?

23
Bracketing Methods
• All bracketing methods are based on the idea that, for
two function values, 𝑓1 and 𝑓2 , if 𝑓1 and 𝑓2 have
opposite signs, there must be at least one root
between them
• Let xl and xu (lower and upper)
be two guesses such that the
sign of the function changes;
that is, where f(xl)·f(xu) < 0
• In the figure (positive function
values above the axis, negative
below), we could choose
xl = 4, 8, or 12
and xu = 16 or 20
24
General Bracketing Cases
Ways that roots may occur:
1. If the function values at two
guess points have the same
sign, there is an even number
of roots (possibly zero)
between them.
2. A single root may be
bracketed by negative and
positive values.
3. Three roots may be
bracketed by a single pair
of negative and positive
values.
Fig. 5.1 26
Bracketing Cases - Exceptions
Exceptions to the general cases:
1. Tangent points. Here we
have end points (blue
dots) of opposite signs,
but there are an even
number of roots (black
dots) between bounds.
2. Discontinuous functions.
Here the two end points have
opposite signs, yet there is an
even number of roots between
them; the extra sign change
comes from the jump.

Fig. 5.2 27
Incremental Search Method
• The Incremental Search Method is a numerical tool
used by bracketing methods to identify brackets.
• It is NOT a bracketing method.
• The incremental search method tests the value of the
function at evenly spaced points and finds brackets
by identifying sign changes between
neighboring points.
• A sign change between two neighboring points means
they lie on opposite sides of a root.
28
Incremental Search Method
• The basic idea for the incremental search method is
the fact that there will be a sign change before and
after a root is found.
Check whether
f(xl)·f(xu) < 0
If true, then xl and xu
become the bracket that
contains the root.
29
Incremental Search Method
• So how do we find the two initial guesses?
• One way is to plot the function
• Plotting the function should always be the first step in
finding the roots of a function whenever possible.
• A second method is an incremental search.
• Let’s look at an algorithm and corresponding m-file
(you will create in a workshop) that implements an
incremental search to locate roots of any function
func within the range from xmin to xmax with an
optional argument ns allowing the user to specify the
number of intervals within the range.

30
Flow Diagram of Incremental Search
func = function to evaluate
xmin & xmax = lower & upper bounds
ns = number of points to evaluate

a. Allow the user to specify any arbitrary function, func,
   bounds to search for a zero, xmin and xmax, and the
   number of points to evaluate, ns.
b. Evaluate func at ns points of x between xmin and xmax.
c. Initialize the number of brackets found, nb, to zero and
   a list of bracket locations, xb.
d. Begin a loop that goes through all points within our
   range (i = 1 to ns − 1).
e. If the signs of two consecutive points are the same, do
   nothing and move on to the next pair of points.
f. If the signs are different, increment the number of
   brackets, nb, and save the two bracketing points:
   xb(nb,:) = [xi xi+1].
g. Once we reach the end of the loop, output the number
   of brackets nb and the list of bracketing x values xb.
31
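The flow diagram above can be sketched as follows. This is a Python translation (the textbook's incsearch is a MATLAB m-file; the default ns = 50 is inferred from the bracket spacing on the next slide):

```python
import numpy as np

def incsearch(func, xmin, xmax, ns=50):
    """Incremental search: return brackets [x_i, x_i+1] where func changes sign."""
    x = np.linspace(xmin, xmax, ns)            # ns evenly spaced points
    f = func(x)                                 # evaluate func at every point
    brackets = []
    for i in range(ns - 1):
        if np.sign(f[i]) != np.sign(f[i + 1]):  # sign change => root bracketed
            brackets.append((x[i], x[i + 1]))
    return brackets

f = lambda x: np.sin(10 * x) + np.cos(3 * x)
xb = incsearch(f, 3, 6)
print(len(xb))                                  # number of brackets found
for lo, hi in xb:
    print(f"{lo:.4f}  {hi:.4f}")
```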
incsearch
f=@(x) sin(10*x)+cos(3*x);
xb = incsearch(f,3,6)
number of brackets
5
xb =
3.2449 3.3061
3.3061 3.3673
3.7347 3.7959
4.6531 4.7143
5.6327 5.6939

Ex: 5.2
(CHAPRA)

Fig. 5.4 32
Issues with Incremental Search
• If the increment length is too small, the search can be
very time consuming
• If the increment length is too great, the algorithm may
miss roots or bracket regions with multiple roots
• Tangent points and discontinuous functions are not
handled and may result in brackets with more than one
root.

33
Root Finding Methods
Nonlinear Equation Solvers

• Graphical Methods
• Bracketing Methods
  – Bisection
  – False Position
• Open Methods
  – Newton-Raphson
  – Secant

34
Bisection Method
• The bisection method is a variation of the incremental search
method in which the interval from xl to xu is repeatedly divided
in half.
• The algorithm requires two initial
guesses, xl and xu, that bracket the
root (i.e. f(xl)·f(xu) < 0).
• The interval is divided in half and
the function is evaluated at the
midpoint.
• Then the root must either be at the
midpoint or lie in one of the two
sub-intervals.
• The sub-interval containing the
root is determined by seeing
where the function changes sign.

Fig. 5.5: The absolute error is reduced by a
factor of two on each iteration. 35
Bisection Method
1. Choose xl and xu such that they bound the root of interest (the
function changes sign over the interval): check that f(xl)·f(xu) < 0.
2. Calculate the midpoint (the new estimate of the root, xr):
xr = (xl + xu)/2
3. Determine in which subinterval the root lies:
a. If f(xl)*f(xr)<0 the root lies in the lower subinterval.
Set xu = xr and return to step 2.
b. If f(xl)*f(xr)>0, the root lies in the upper subinterval. Set xl
= xr and return to step 2.
c. If f(xl)*f(xr)=0, xr is the root. Terminate the algorithm.

36
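The three steps above can be sketched in Python (the function name, tolerance, and iteration cap are my own choices, not from the textbook):

```python
import math

def bisect(f, xl, xu, tol=1e-10, maxit=100):
    """Bisection: halve the bracket [xl, xu] until it is smaller than tol."""
    if f(xl) * f(xu) > 0:
        raise ValueError("f(xl) and f(xu) must have opposite signs")
    for _ in range(maxit):
        xr = (xl + xu) / 2                 # step 2: midpoint
        test = f(xl) * f(xr)
        if test < 0:                       # step 3a: root in lower subinterval
            xu = xr
        elif test > 0:                     # step 3b: root in upper subinterval
            xl = xr
        else:                              # step 3c: xr is exactly the root
            return xr
        if (xu - xl) < tol:
            break
    return (xl + xu) / 2

# The root of g(x) from the earlier slides, bracketed by [2, 2.5]
g = lambda x: math.exp(-0.5 * x) * math.cos(x - 0.2) + 0.1
print(bisect(g, 2, 2.5))                   # about 2.0538
```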
Bisection Method Error
• The error is determined by:

Approximate Error:
ε_approx = |x_r^new − x_r^old|

Percent Approximate Relative Error:
ε_a = |(x_r^new − x_r^old) / x_r^new| × 100%
• Now, when εa or εapprox becomes less than a specified stopping
criterion εs we will terminate the program or computations.

37
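Applying the percent approximate relative error formula to successive midpoints is straightforward; a short Python sketch (the sample values are the first three bisection estimates from the bungee example that follows):

```python
def approx_rel_error(x_new, x_old):
    """Percent approximate relative error between successive root estimates."""
    return abs((x_new - x_old) / x_new) * 100

# Successive bisection midpoints from the bungee example: 125, 162.5, 143.75
print(approx_rel_error(162.5, 125))      # about 23.08 %
print(approx_rel_error(143.75, 162.5))   # about 13.04 %
```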
Bisection Method Example
gm  gcd 
• Find root of f ( m) 
cd
tanh  t   36
 m 
1. Function changes sign between
Root x=50, 200 so we can evaluate xr to
be 125.
2. Compute: f(50)*f(125)=1.871
3. Since this is >0, xl=xr
4. Begin new iteration. Compute xr
between x=125 and 200 so
xr=162.5.
5. Compute: f(125)*f(162.5)=-0.1469
6. Since this is <0, xu=xr
7. Begin new iteration. Compute xr
between x=125 and 162.5 so
xr=143.75.
8. Compute f(125)*f(143.75)=-0.0084
9. Since this is <0, xu=xr
10. Continue until converged. 38
39
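Running the iteration above to convergence can be sketched in Python (the stopping tolerance is my own choice; 39.24 = g/c_d and 2.4525 = g·c_d as before):

```python
import math

def g(m):
    # g(m) = sqrt(gm/cd) * tanh(sqrt(g*cd/m) * t) - 36, with g=9.81, cd=0.25, t=4
    return math.sqrt(39.24 * m) * math.tanh(4 * math.sqrt(2.4525 / m)) - 36

xl, xu = 50.0, 200.0
while (xu - xl) > 1e-8:
    xr = (xl + xu) / 2            # midpoint of the current bracket
    if g(xl) * g(xr) < 0:
        xu = xr                   # root in lower subinterval
    else:
        xl = xr                   # root in upper subinterval
print(xr)                         # about 142.7 kg
```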
Bisection Method
Advantages:
• Easy to understand
• Easy to implement
• Always finds a root
• Number of iterations required to attain an
absolute error can be computed a priori
(in advance)

Disadvantages:
• Relatively slow
• Requires two guesses that bound the root
• Multiple roots can cause problems
• Doesn't consider the values of f(xl) and f(xu)
(if f(xl) is closer to zero, it is likely that the
root is closer to xl)
41
Root Finding Methods
Nonlinear Equation Solvers

• Graphical Methods
• Bracketing Methods
  – Bisection
  – False Position
• Open Methods
  – Newton-Raphson
  – Secant

42
False Position Method
• Another useful bracketing method is the false
position method.
• This method is also called the linear interpolation
method and Regula Falsi (Latin for "rule of the false").
• It is based on the (false) assumption that the function
can be approximated by a straight line.
• It determines the next guess by connecting the
endpoints with a straight line and finding the
x-intercept of that line (xr).
• The value of xr then replaces whichever of the two
initial guesses results in a function value with
the same sign as f(xr) (just like the bisection method). 43
False Position Method
• Illustration of the false position method:
Recall the geometry of
similar triangles:

(xr − xl) / (xu − xr) = −f(xl) / f(xu)

With some algebraic
manipulation:

xr = xu − f(xu)·(xl − xu) / (f(xl) − f(xu))

Fig. 5.8 (Eq. 5.7)
44
False Position Method
1. Find a pair of values xl and xu such that f(xl) and f(xu) have
different signs.
2. Estimate the value of the root:
xr = xu − f(xu)·(xl − xu) / (f(xl) − f(xu))
and evaluate f(xr).
3. Use the new point to replace one of the original points, keeping
the two points on opposite sides of the x axis:
a. if f(xr)·f(xl) < 0 then xu = xr
b. if f(xr)·f(xl) > 0 then xl = xr
c. if f(xr) = 0 then you have found the root and need go no
further!
4. Check if the new xl and xu are "close enough together". If they
are not, go back to step 2.
45
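The steps above can be sketched in Python (function name, tolerance, and iteration cap are my own; the bungee function from the bisection example is reused for comparison):

```python
import math

def false_position(f, xl, xu, tol=1e-10, maxit=200):
    """Regula falsi: the next estimate is the x-intercept of the chord
    through (xl, f(xl)) and (xu, f(xu))."""
    fl, fu = f(xl), f(xu)
    if fl * fu > 0:
        raise ValueError("root not bracketed")
    xr = xu
    for _ in range(maxit):
        xr_old = xr
        xr = xu - fu * (xl - xu) / (fl - fu)   # Eq. 5.7
        fr = f(xr)
        if fr * fl < 0:        # root in lower subinterval
            xu, fu = xr, fr
        elif fr * fl > 0:      # root in upper subinterval
            xl, fl = xr, fr
        else:                  # xr is exactly the root
            return xr
        if abs(xr - xr_old) < tol:
            break
    return xr

g = lambda m: math.sqrt(39.24 * m) * math.tanh(4 * math.sqrt(2.4525 / m)) - 36
print(false_position(g, 50, 200))    # about 142.7 kg, same root as bisection
```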
Bracketed Root Finding Techniques
• Always check the solution by substituting the
estimated root in the original equation to determine
whether f(xr) ≈ 0.
• False Position algorithm is identical to the bisection
algorithm except for the calculation for the estimated
root location.
Bisection Method False Position Method

xr = (xl + xu)/2          xr = xu − f(xu)·(xl − xu) / (f(xl) − f(xu))

46
+/- of False Position Method
• Advantages
– Usually faster than bisection
– Always converges to the bracketed root

• Disadvantages
– One of the disadvantages is that the method is one-
sided. That is, one of the bracketing points will tend
to stay fixed which will sometimes lead to poor
convergence (i.e. large number of iterations) for
functions with a significant amount of curvature.

47
+/- of False Position Method
Fig. 5.9

48
MATLAB Rootfinding
• Matlab has several functions for dealing with roots (zeros)
• General functions:
fzero: x = fzero(fun,x0) tries to find a zero of fun
near x0, if x0 is a scalar. fun is a function handle. Type
help fzero at the Matlab prompt for more information
on using it.
• Polynomials.
roots: r = roots(c) returns a column vector whose
elements are the roots of the polynomial c. Row vector c
contains the coefficients of a polynomial, ordered in
descending powers.
poly: p = poly(r) where r is a vector returns a row
vector whose elements are the coefficients of the polynomial
whose roots are the elements of r. 49
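For readers working outside MATLAB, NumPy provides analogues of roots and poly (and scipy.optimize.brentq plays a role similar to fzero). A brief sketch of the polynomial functions:

```python
import numpy as np

# Analogue of MATLAB's roots: coefficients in descending powers
c = [1, 0, -4]            # x^2 - 4
r = np.roots(c)
print(sorted(r))          # both roots, -2 and +2

# Analogue of MATLAB's poly: rebuild coefficients from roots
p = np.poly([2, -2])
print(p)                  # coefficients of (x - 2)(x + 2) = x^2 - 4
```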
Function Handles
• We learned about anonymous functions in lecture 4 on
functions.
• The Matlab command >> f = @(x) x^2
creates an anonymous function.
• The variable, f, is defined to be a class (data type) called a
function handle
>> x = 5;
>> f = @(x) x^2
f =
function_handle with value:
@(x)x^2
>> whos
Name Size Bytes Class
f 1x1 32 function_handle
x 1x1 8 double
50
Function Handles
• Function handles can also be created for named functions.
• Suppose you have created a function called myfunction
located in the current Matlab directory.
• You can create a function handle for it as follows
>> g = @myfunction

>> g = @myfunction
g =
function_handle with value:
@myfunction

• For more information:


>> doc function_handle
51
Solarchick Engineering
• Root finding methods
– Bisection Method
– False Position Method
– Newton-Raphson Method
• Numerical Integration methods
– Trapezoid Rule - single application
– Trapezoid Rule – composite application
• Keep an eye on this channel for new videos
• https://www.youtube.com/channel/UCPeL1s0b51zVg
Rm5Q-VLgYA

53
Announcements
• There are several important announcements that have been
made on Canvas – it is your responsibility to make sure you
set up Canvas to receive announcements as soon as possible
and to read them and respond to them if required.

• Be sure that all documents are uploaded to the Canvas drop


box on-time
– No late submissions will be accepted and you will receive a
grade of zero for the assignment.

• You are responsible for keeping up with the schedule: attending
lectures, quizzes, and exams, and submitting homework on
time.
54
