
Second Order ODEs

Often physical or biological systems are best described by second or
higher-order ODEs. That is, second or higher order derivatives appear
in the mathematical model of the system.

For example, from physics we know that Newton's laws of motion
describe trajectory or gravitational problems in terms of relationships
between velocities, accelerations and positions. These can often be
described as IVPs, where the ODE has the form,

    y''(x) = f(x, y)

or

    y''(x) = f(x, y, y').



Second Order ODEs (cont)
A second-order scalar ODE can be reduced to an equivalent system of
first-order ODEs as follows: with y'' = f(x, y, y'), we let Z(x) be defined
by,

    Z(x) = [z_1(x), z_2(x)]^T,

where z_1(x) ≡ y(x) and z_2(x) ≡ y'(x). It is then clear that Z(x) is the
solution of the first-order system of IVPs:

    Z'(x) = [z_1'(x), z_2'(x)]^T = [y'(x), y''(x)]^T
          = [z_2(x), f(x, y, y')]^T = [z_2(x), f(x, z_1, z_2)]^T
          ≡ F(x, Z).
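As a concrete illustration (a sketch, not taken from the notes), the reduction can be coded
directly: given a routine for f(x, y, y'), the system function F(x, Z) just unpacks
Z = [y, y']^T and returns [y', f]. The damped-oscillator right-hand side below is an
assumed example problem, not one from the notes.

    import numpy as np

    def make_F(f):
        """Given f(x, y, yp) for y'' = f(x, y, y'), return F(x, Z) with Z = [z1, z2] = [y, y']."""
        def F(x, Z):
            z1, z2 = Z
            return np.array([z2, f(x, z1, z2)])   # Z' = [y', y''] = [z2, f(x, z1, z2)]
        return F

    # Assumed example: damped oscillator y'' = -y' - y, with y(0) = 1, y'(0) = 0.
    F = make_F(lambda x, y, yp: -yp - y)
    Z0 = np.array([1.0, 0.0])                     # initial conditions [y(a), y'(a)]
    print(F(0.0, Z0))                             # system at the initial point: [y'(0), y''(0)] = [0, -1]

Any first-order IVP solver applied to Z' = F(x, Z), Z(a) = Z0 then returns approximations
to both y and y'.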



Observations re 2nd-order ODEs
Note that in solving this ‘equivalent’ system for Z(x), we determine an
approximation to y'(x) as well as to y(x). This has implications for
numerical methods: when working with this equivalent system, we
will also be trying to accurately approximate y'(x), and this may be
more difficult than just approximating y(x).
Note also that to determine a unique solution to our problem we must
prescribe initial conditions for Z(a), that is, for both y(a) and y'(a).
Second-order systems of ODEs can be reduced to first-order systems
similarly (doubling the number of equations).
Higher-order equations can be reduced to first-order systems in a
similar way.



Numerical Methods for IVPs
Taylor Series Methods:

If f(x, y) is sufficiently differentiable wrt x and y, then we can determine
the Taylor series expansion of the unique solution y(x) to

    y' = f(x, y),   y(a) = y_0,

by differentiating the ODE at the point x_0 = a. That is, for x near x_0 = a
we have,

    y(x) = y(x_0) + (x − x_0) y'(x_0) + ((x − x_0)^2 / 2) y''(x_0) + ··· .



Taylor Series Methods (cont)
To generate the TS coefficients, y^{(n)}(x_0)/n!, we differentiate the ODE and
evaluate at x = x_0 = a. The first few terms are computed from the
expressions,

    y'(x)   = f(x, y) = f,
    y''(x)  = (d/dx) f(x, y) = f_x + f_y y' = f_x + f_y f,
    y'''(x) = (d/dx) [y''(x)] = (f_xx + f_xy f) + (f_yx + f_yy f) f + f_y (f_x + f_y f)
            = f_xx + 2 f_xy f + f_yy f^2 + f_y f_x + f_y^2 f.
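As a quick check (not part of the notes), the formula for y'' can be verified with a computer
algebra system; the sketch below assumes SymPy and an arbitrary illustrative choice
f(x, y) = x + y^2.

    import sympy as sp

    x, Y = sp.symbols('x Y')            # Y stands for the y-argument of f
    y = sp.Function('y')

    # Assumed illustrative right-hand side: f(x, y) = x + y**2.
    f = x + Y**2
    fx, fy = f.diff(x), f.diff(Y)

    # The formula above: y'' = f_x + f_y * f, written along the solution y(x).
    ypp_formula = (fx + fy * f).subs(Y, y(x))

    # Direct route: substitute y(x) into f, differentiate, and use y' = f.
    ypp_direct = f.subs(Y, y(x)).diff(x).subs(y(x).diff(x), f.subs(Y, y(x)))

    print(sp.simplify(ypp_formula - ypp_direct))   # prints 0: the two expressions agree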



Key Observation for TS Methods
In general, if f(x, y) is sufficiently differentiable, we can use the first
(k + 1) terms of the Taylor series as an approximation to y(x) for
|x − x_0| ‘small’. That is, we can approximate y(x) by ẑ_{k,0}(x),

    ẑ_{k,0}(x) ≡ y_0 + (x − x_0) y_0' + ··· + ((x − x_0)^k / k!) y_0^{(k)}.

Note that the derivatives of y become quite complicated, so one usually
chooses a small value of k (k ≤ 6 or 7).
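For the assumed test problem y' = y, y(0) = 1 (used again later in these notes), every
derivative y^{(n)}(0) equals 1, so ẑ_{k,0}(x) is just the degree-k truncation of e^x; a small
sketch:

    import math

    # Truncated Taylor polynomial z-hat_{k,0}(x) given the derivatives y0, y0', ..., y0^(k).
    def z_hat(x, derivs):
        return sum(d * x**n / math.factorial(n) for n, d in enumerate(derivs))

    k = 4
    derivs = [1.0] * (k + 1)        # for y' = y, y(0) = 1: y^(n)(0) = 1 for all n
    for x in (0.1, 0.5, 1.0):
        print(x, z_hat(x, derivs), math.exp(x))   # approximation vs. true solution e^x

The gap between the two columns grows with |x − x_0|, which is why the polynomial is only
used for |x − x_0| ‘small’.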



Key Observation for TS (cont)
One can use ẑ_{k,0}(x_1) as an approximation, y_1, to y(x_1). We can then
evaluate the derivatives of y(x) at x = x_1 to define a new polynomial
ẑ_{k,1}(x) as an approximation to y(x) for |x − x_1| ‘small’ and repeat the
procedure.
Note:
The resulting ẑ_{k,j}(x) for j = 0, 1, ... define a piecewise polynomial
approximation to y(x) that is continuous on [a, b].
How do we choose h_j = (x_j − x_{j−1}) and k?



TS Method – Summary
Let T_k(x, y_{j−1}) denote the first k + 1 terms of the Taylor series expanded
about the discrete approximation, (x_{j−1}, y_{j−1}), and ẑ_{k,j}(x) be the
polynomial approximation (to y(x)) associated with this truncated Taylor
series,

    ẑ_{k,j}(x) = y_{j−1} + ∆ T_k(x, y_{j−1}),

    T_k(x, y_{j−1}) ≡ f(x_{j−1}, y_{j−1}) + (∆/2) f'(x_{j−1}, y_{j−1}) + ··· + (∆^{k−1}/k!) f^{(k−1)}(x_{j−1}, y_{j−1}),

where ∆ = (x − x_{j−1}).
A simple, constant-stepsize (fixed h) TS method is then given by:
- Set h = (b − a)/N;
- for j = 1, 2, ..., N
      x_j = x_{j−1} + h;
      y_j = y_{j−1} + h T_k(x_j, y_{j−1});
- end
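A minimal sketch of this fixed-h algorithm for k = 2 (so T_2 = f + (h/2) f', with
f' = f_x + f_y f as derived earlier), applied to the assumed test problem y' = y, y(0) = 1
on [0, 1], where f = f' = y:

    import math

    def ts2(f, fprime, a, b, y0, N):
        """Fixed-stepsize Taylor series method of order k = 2."""
        h = (b - a) / N
        x, y = a, y0
        for j in range(1, N + 1):
            T2 = f(x, y) + (h / 2.0) * fprime(x, y)   # truncated increment function T_2
            y = y + h * T2                            # y_j = y_{j-1} + h*T_2
            x = a + j * h                             # x_j = x_{j-1} + h
        return y

    f      = lambda x, y: y     # f(x, y) = y
    fprime = lambda x, y: y     # f' = f_x + f_y f = 0 + 1*y = y
    for N in (10, 100, 1000):
        yN = ts2(f, fprime, 0.0, 1.0, 1.0, N)
        print(N, yN, abs(math.e - yN))   # the error at x = 1 shrinks roughly like h^2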



Local/Global Errors
Note that, strictly speaking, ẑ_{k,j}(x) is not a direct approximation to y(x)
but to the solution of the ‘local’ IVP:

    z_j' = f(x, z_j),   z_j(x_{j−1}) = y_{j−1}.

Since y_{j−1} will not be equal to y(x_{j−1}) in general, the solution to this local
problem, z_j(x), will not be the same as y(x).
To understand and appreciate the implications of this observation we
distinguish between the ‘local’ and ‘global’ errors.
Definitions:
The local error associated with step j is z_j(x_j) − y_j.
The global error at x_j is y(x_j) − y_j.
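To make the distinction concrete (a sketch under the assumption that the problem is
y' = y, y(0) = 1 and the method is Euler, k = 1): the local IVP then has the closed-form
solution z_j(x) = y_{j−1} e^{x − x_{j−1}}, so both errors can be evaluated exactly.

    import math

    a, b, y0, N = 0.0, 1.0, 1.0, 10
    h = (b - a) / N
    x, y = a, y0
    for j in range(1, N + 1):
        y_new = y + h * y                       # Euler step: y_j = y_{j-1} + h*f(x_{j-1}, y_{j-1})
        x_new = a + j * h
        local_err  = y * math.exp(h) - y_new    # z_j(x_j) - y_j
        global_err = math.exp(x_new) - y_new    # y(x_j) - y_j
        print(f"j={j:2d}  local error={local_err:.2e}  global error={global_err:.2e}")
        x, y = x_new, y_new

The local errors stay of size roughly h^2/2, while the global error accumulates them and is
an order of magnitude larger by x = 1.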



A Classical Approach
A classical (pre-1965) numerical method approximates y(x) by dividing
[a, b] into equally spaced subintervals, x_j = a + jh (where h = (b − a)/N)
and, proceeding in a step-by-step fashion, generates y_j after
y_1, y_2, ..., y_{j−1} have been determined.
If the Taylor series method is used in this way, then the TS theorem
with remainder shows that the local error on step j (for the TS method
of order k) is:

    E_j = h^{k+1} f^{(k)}(η_j, z_j(η_j)) / (k+1)!  =  h^{k+1} z_j^{(k+1)}(η_j) / (k+1)!.

If k = 1 we have Euler's method, where y_j = y_{j−1} + h f(x_{j−1}, y_{j−1}),
and the associated local error satisfies,

    LE_j = (h^2/2) y''(η_j).



Error Bounds for IVP Methods
Definition: A method is said to converge iff,

    lim_{h→0 (N→∞)}  max_{j=1,2,...,N} |y(x_j) − y_j| = 0.

Theorem: (typical of classical convergence results)
Let [x_j, y_j]_{j=0}^N be the approximate solution of the IVP,
y' = f(x, y), y(a) = y_0 over [a, b], generated by Euler’s method with
constant stepsize h = (b − a)/N. If the exact solution y(x) ∈ C^2[a, b]
and |f_y| < L, |y''(x)| < Y, then the associated GE, e_j = y(x_j) − y_j,
x_j = a + jh, satisfies (for all j > 0),

    |e_j| ≤ (hY / 2L) (e^{(x_j−x_0)L} − 1) + e^{(x_j−x_0)L} |e_0|
          ≤ (hY / 2L) (e^{(b−a)L} − 1) + e^{(b−a)L} |e_0|.



Observations re Convergence
1. e_0 will usually be equal to zero.
2. This bound is generally pessimistic, as it is exponential in (b − a) whereas
linear error growth is often observed on practical or realistic problems.
3. In the general case one can show that when the local error is O(h^{p+1}), the
global error is O(h^p).



Proof of Conv Th (outline)
Euler's method satisfies,

    y_j = y_{j−1} + h f(x_{j−1}, y_{j−1}).

A Taylor series expansion of y(x) about x = x_{j−1} implies

    y(x_j) = y(x_{j−1}) + h f(x_{j−1}, y(x_{j−1})) + (h^2/2) y''(η_j).

Subtracting the first equation from the second we obtain,

    y(x_j) − y_j = y(x_{j−1}) − y_{j−1} + h [f(x_{j−1}, y(x_{j−1})) − f(x_{j−1}, y_{j−1})] + (h^2/2) y''(η_j).

If Y = max_{x∈[a,b]} |y''(x)| and |f_y| ≤ L, then, from the definition of e_j and
the observation that f(x, y) satisfies a Lipschitz condition with respect to y,
we have ...



Proof (cont)
    |e_j| ≤ |e_{j−1}| + hL |y(x_{j−1}) − y_{j−1}| + (h^2/2) |y''(η_j)|
          ≤ |e_{j−1}| + hL |e_{j−1}| + (h^2/2) Y
          = |e_{j−1}| (1 + hL) + (h^2/2) Y.

This is a linear recurrence relation (or inequality) which, after some
straightforward work, can be shown to imply our desired result,

    |e_j| ≤ (hY / 2L) (e^{(b−a)L} − 1) + e^{(b−a)L} |e_0|.

Note that this is only an upper bound on the global error and it may not be
sharp.
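For completeness (this step is not spelled out in the notes), the ‘straightforward work’ is the
usual unrolling of the recurrence combined with the inequality 1 + hL ≤ e^{hL}:

    |e_j| ≤ (1 + hL)^j |e_0| + (h^2 Y / 2) [1 + (1 + hL) + ··· + (1 + hL)^{j−1}]
          = (1 + hL)^j |e_0| + (hY / 2L) [(1 + hL)^j − 1]
          ≤ e^{jhL} |e_0| + (hY / 2L) (e^{jhL} − 1),

and since jh = x_j − x_0 ≤ b − a, this gives the stated bound.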



An Example
Consider the following equation,

    y' = y,   y(0) = 1,   on [0, 1].

Now since ∂f/∂y = 1, L = 1, and since y(x) = e^x, we have Y = e and e_0 = 0.
Applying our error bound with h = 1/N and y_N ≈ y(1) = e we obtain,

    |GE_N| = |y_N − e| ≤ (he/2) (e − 1) < 2.4 h.

But for h = .1 we observe that y_10 = 2.5937... with an associated true error
of .1246... (≡ e − y_10). The error bound is .24. This is an overestimate by a
factor of about 2.
Exercise: Compare the bound to the true error for h = .01 and h = .001.
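A possible sketch for this exercise (not part of the notes): apply Euler's method to y' = y on
[0, 1] and compare the true error at x = 1 with the bound (he/2)(e − 1) ≈ 2.34 h.

    import math

    def euler_yN(N):
        """Euler's method for y' = y, y(0) = 1 on [0, 1] with N steps; returns y_N ≈ y(1)."""
        h, y = 1.0 / N, 1.0
        for _ in range(N):
            y += h * y                  # y_j = y_{j-1} + h*y_{j-1}
        return y

    for N in (10, 100, 1000):           # h = .1, .01, .001
        h = 1.0 / N
        true_err = math.e - euler_yN(N)
        bound = (h * math.e / 2.0) * (math.e - 1.0)
        print(f"h={h:6.3f}  true error={true_err:.6f}  bound={bound:.6f}  ratio={bound/true_err:.2f}")

The bound overestimates the true error by a factor of a bit under 2 for each h, consistent
with the h = .1 case worked above.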



Limitations of Classical Approach
Analysis is valid only in the limit as h → 0.
Bounds are usually very pessimistic (can overestimate
the error by several orders of magnitude).
Analysis does not consider the effect of f.p. arithmetic.

