Chapter 2: Numerical Methods for IVPs
Chengjie Cai
School of Mathematical Sciences
2022-23
Lecture 3: The forward Euler method
Today, we will:
learn how to approximate solutions to first-order ODEs using the
forward Euler method;
evaluate how good the forward Euler method is, and consider how to
make it more accurate.
y (x + h) ≈ y (x) + hy ′(x) = y (x) + hf (x, y (x)),    (1)
where the RHS of the ODE has been used in the last step.
Chengjie Cai (UoN) MATH2052 2022-23 5 / 113
The forward Euler method
To obtain an iterative rule for the approximation (1), let xn = x0 + nh and
let yn ≈ y (xn ). Then
yn+1 = yn + hf (xn, yn).    (2)
[Figure: the exact solution y(x), and the forward Euler approximation joining the mesh points x0, x1, x2, x3, x4.]
Forward Euler method example
yn+1 = yn + 0.2(xn + yn).
n   xn = x0 + nh   yn
0   0.0            0.000000
1   0.2            0.000000
2   0.4            0.040000
3   0.6            0.128000
4   0.8            0.273600
5   1.0            0.488320
[Figure: the forward Euler approximation (h = 0.2) drifting away from the exact solution as x increases.]
We can see that our approximation is quite poor, and also gets worse with
each step.
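The table above can be produced with a few lines of code. This is a sketch under our own naming (the function `forward_euler` is not part of the course materials):

```python
# Forward Euler for the example IVP y' = x + y, y(0) = 0, with h = 0.2.
def forward_euler(f, x0, y0, h, n_steps):
    """Iterate y_{n+1} = y_n + h * f(x_n, y_n)."""
    xs, ys = [x0], [y0]
    for _ in range(n_steps):
        ys.append(ys[-1] + h * f(xs[-1], ys[-1]))
        xs.append(xs[-1] + h)
    return xs, ys

xs, ys = forward_euler(lambda x, y: x + y, 0.0, 0.0, 0.2, 5)
# ys[-1] approximates y(1); the exact value is e - 2 ≈ 0.7183, so the
# Euler value (≈ 0.4883) is indeed quite poor.
```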
Forward Euler method example: error analysis
The previous plot and the errors above suggest that the approximate
solution given by the FE method with a mesh size of h = 0.2 is not
particularly accurate. This can be improved by reducing h.
Of course, in practice we would not know the exact solution, but
applying the method to a problem we do know the answer for allows
us to better understand the method.
Forward Euler method: improving accuracy
Solution:
From (4) we have ; and thus:
where:
xn = x0 + nh are evenly spaced points that are h apart,
yn is our approximation to y (xn ).
(c) Compute the error of the approximation at each mesh point and
briefly comment on your findings.
(d) Comment on how we could make our approximation better.
Today, we will:
learn how to describe and analyse the errors of numerical methods
(through `big O' notation);
consider the next steps in reducing the error of the forward Euler
method;
consider oscillating errors.
We write as h → 0.
In the previous example, we could also show that |f (h)| < 3 for h < 1,
giving f (h) = O(1). While this is true, it is less informative.
An analogy: Suppose the error in a measurement is less than an inch.
While it is also true that the error is less than a mile, it is more
informative to say that it is less than an inch.
★ Always pick the strictest order of magnitude you can, as this will be
the most informative.
Big O notation: a short-cut
If h is very small, then h2 , h3 , . . . are tiny.
The `big O' picks out the largest term... therefore for our purposes, we
do not have to go through the formal M-δ process; it is sufficient to
simply identify the largest term as h → 0.
For functions that are not polynomials, we need to first write them out
as Maclaurin series.
Also, we do not need to include any coefficients; we only need the
power of h.
Example 1: e^h = 1 + h + h²/2! + h³/3! + · · ·
Example 2: sin(3h) = 3h − (3h)³/3! + (3h)⁵/5! − · · ·
Solution:
(a) The largest term is 1 as h → 0; therefore, f1 (h) = O(1).
(b) The largest term is 3h as h → 0; therefore, f2 (h) = O(h).
Let's consider a log-log plot of the global errors at x = 0.5 (the values
in orange), for the 3 different values of h.
The local error for this method is O(h³), and we can expect the global
error to be O(h²).
We will study this method in detail next lecture.
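The slope read off such a log-log plot can also be estimated numerically. A minimal sketch (the helper name is ours, and the error values below are illustrative made-up numbers, not data from the lecture):

```python
import math

# Observed order p from global errors e1, e2 at mesh sizes h1, h2:
# if e ≈ C * h^p, then p ≈ log(e1/e2) / log(h1/h2).
def observed_order(h1, e1, h2, e2):
    return math.log(e1 / e2) / math.log(h1 / h2)

# Illustrative values: halving h cuts the error by 4, i.e. O(h^2).
p = observed_order(0.2, 4.0e-3, 0.1, 1.0e-3)
```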
Oscillating error
From the examples of the forward Euler method in the previous lecture, it
appears as though the error increases as we take more steps; however, as
the following example shows, this is not always the case.
Example: Consider the IVP
y ′ = −y + 2 cos(x),  y (0) = 1,    (5)
on the interval 0 ≤ x ≤ 2π, with h = π/10, π/20, π/40.
Solution:
From (5), we have f (x, y) = −y + 2 cos(x); and thus:
yn+1 = yn + hf (xn, yn) = yn + h(−yn + 2 cos(xn)).
This gure shows our approximation for each h considered. We can see
that the error does not always increase as x increases; the error oscillates.
[Figure: forward Euler approximations with h = π/10, π/20, π/40 plotted against the exact solution on 0 ≤ x ≤ 2π.]
This gure shows the error in our approximation for each h considered.
Although the errors are all bounded, we have a smaller error for smaller h.
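Both observations can be checked numerically. One can verify that y(x) = cos(x) + sin(x) solves (5); the sketch below (our own code, not from the lecture) measures the worst forward Euler error over [0, 2π] for two values of h:

```python
import math

# Forward Euler for y' = -y + 2cos(x), y(0) = 1; the exact solution is
# y(x) = cos(x) + sin(x), so we can track the worst error on [0, 2*pi].
def max_fe_error(h):
    n = round(2 * math.pi / h)
    x, y, worst = 0.0, 1.0, 0.0
    for _ in range(n):
        y = y + h * (-y + 2 * math.cos(x))  # Euler step
        x = x + h
        worst = max(worst, abs(math.cos(x) + math.sin(x) - y))
    return worst

e20 = max_fe_error(math.pi / 20)
e40 = max_fe_error(math.pi / 40)
# The errors stay bounded, and the smaller h gives the smaller error.
```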
[Figure: the errors y(xn) − yn for h = π/10, π/20, π/40 on 0 ≤ x ≤ 2π; each error oscillates within a bounded band, and the band shrinks as h decreases.]
(a) Use big O notation to find the order of each of these functions:
(i) f (h) = 100h + 10 − 1/h.
(ii) g (h) = 2 sin(h²).
(iii) p(h) = (h² + h)/(√h + h³).
(b) Let f (x, y) = xy² + e^(−x) sin(4y). Calculate ∂f /∂x and ∂f /∂y.
(c) What will the error convergence plot look like for the second-order
Taylor method?
Today, we will:
derive and apply the second-order Taylor method to solve first-order
IVPs;
compare this method to the forward Euler method.
y (x + h) = y (x) + hy ′(x) + (h²/2) y ″(x) + O(h³),
where the first three terms on the right-hand side form the second-order Taylor method.
The forward Euler method has a local error of O(h²), whereas the
second-order Taylor method has a local error of O(h³), an order of
magnitude smaller.
Thus:
yn+1 = yn + hf (xn, yn) + (h²/2) [∂f /∂x (xn, yn) + f (xn, yn) ∂f /∂y (xn, yn)].    (7)
Solution:
In this case, f (x, y) = x + y; hence:
∂f /∂x = 1 and ∂f /∂y = 1.
Thus, applying (7), the method becomes
yn+1 = yn + h(xn + yn) + (h²/2)(1 + xn + yn).
First iteration: y1 = 0 + 0 + 0.02(1 + 0 + 0) = 0.020000.
Second iteration: y2 = 0.02 + 0.2(0.22) + 0.02(1.22) = 0.088400.
n xn yn
0 0 0
1 0.2 0.020000
2 0.4 0.088400
3 0.6 0.215848
4 0.8 0.415335
5 1.0 0.702708
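The table can be reproduced with a short routine. This is a sketch under our own naming, with ∂f/∂x = ∂f/∂y = 1 hard-coded for f (x, y) = x + y:

```python
# Second-order Taylor method for y' = x + y, y(0) = 0, h = 0.2:
#   y_{n+1} = y_n + h*f + (h^2/2)*(df/dx + f*df/dy),  df/dx = df/dy = 1.
def taylor2(x0, y0, h, n_steps):
    x, y, ys = x0, y0, [y0]
    for _ in range(n_steps):
        f = x + y
        y = y + h * f + 0.5 * h**2 * (1 + f)
        x = x + h
        ys.append(y)
    return ys

ys = taylor2(0.0, 0.0, 0.2, 5)
# ys reproduces the table above: 0.02, 0.0884, ..., 0.702708.
```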
This plot shows just how much better the second-order Taylor
approximation is for the previous example.
[Figure: the exact solution, the second-order Taylor approximation, and the Euler approximation on 0 ≤ x ≤ 1; the Taylor curve lies much closer to the exact solution.]
[Figure: error convergence plot of log10 of the global error at x = 1 against log10(h); the Euler errors follow an O(h) slope and the second-order Taylor (2nd TS) errors an O(h²) slope.]
Solution:
In this case, f (x, y) =        ; hence:
∂f /∂x =        and ∂f /∂y =        .
(a) Write down the exact solution (we calculated this in the feedback Q
from lecture 3).
(b) Apply the second-order Taylor method to approximate the solution to
the IVP on 0 ≤ x ≤ 1, with h = 0.25.
(c) Compute the error of the approximation at each mesh point and
compare them to those from the Euler method (feedback Q from
lecture 3).
(d) The Euler method and second-order Taylor method are described as
O(h) and O(h2 ) methods respectively. Write down what this actually
means mathematically.
Today, we will:
derive and apply the modified Euler method: an O(h²) method that is
(in many ways) superior to the second-order Taylor method.
In the last few lectures, we discussed the Euler and Taylor series methods:
The (forward) Euler method is explicit and uses just the first-order
derivative, provided by the original ODE.
Higher order Taylor methods are explicit, but use higher order
derivatives that must be derived from the original ODE.
Higher order Taylor methods are more accurate than the Euler method,
but are also more computationally intensive and can be cumbersome.
We now seek methods that are O(h2 ), but do not require the
calculation of higher order derivatives. Two such methods are the
(implicit) trapezoidal method and the (explicit) modified Euler
method.
y (xn+1) − y (xn) = ∫ from xn to xn+1 of f (x, y) dx ≈ (h/2) [f (xn, yn) + f (xn+1, y∗n+1)],
where the unknown yn+1 on the right is replaced by the Euler predictor
y∗n+1 = yn + hf (xn, yn).
Apply the modied Euler method to approximate the solution to the IVP
y ′ = x + y,  y (0) = 0,    (10)
over 0 ≤ x ≤ 1 with h = 0.2, using the auxiliary value notation.
Solution:
Write down k1 and k2 in terms of xn and yn:
k1 = hf (xn, yn) = 0.2(xn + yn)
k2 = hf (xn+1, yn + k1) = 0.2(xn+1 + yn + k1)
First iteration:
k1 = 0.2(x0 + y0) = 0
k2 = 0.2(x1 + y0 + k1) = 0.04
y1 = y0 + ½(k1 + k2) = 0.0200
Second iteration:
k1 = 0.2(x1 + y1) = 0.044
k2 = 0.2(x2 + y1 + k1) = 0.0928
y2 = y1 + ½(k1 + k2) = 0.0884
n xn yn
0 0 0
1 0.2 0.0200
2 0.4 0.0884
3 0.6 0.2158
4 0.8 0.4153
5 1.0 0.7027
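A sketch of the same computation in code (our own function name, mirroring the k1/k2 notation above):

```python
# Modified Euler for y' = x + y, y(0) = 0, h = 0.2 on [0, 1].
def modified_euler(f, x0, y0, h, n_steps):
    xs, ys = [x0], [y0]
    for _ in range(n_steps):
        x, y = xs[-1], ys[-1]
        k1 = h * f(x, y)
        k2 = h * f(x + h, y + k1)      # uses the Euler predictor y + k1
        ys.append(y + 0.5 * (k1 + k2))
        xs.append(x + h)
    return xs, ys

xs, ys = modified_euler(lambda x, y: x + y, 0.0, 0.0, 0.2, 5)
# ys matches the table above: 0.0200, 0.0884, ..., 0.7027.
```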
This plot shows how much more accurate the modified Euler method is
than the original Euler method.
[Figure: the exact solution, the Euler approximation, and the modified Euler approximation on 0 ≤ x ≤ 1.]
Error comparison between the Euler (E) and modified Euler (ME)
methods, over 0 ≤ x ≤ 1 with h = 0.2, for IVP (10).
This error convergence plot shows that the modified Euler (ME) method
does indeed achieve O(h²) convergence.
[Figure: log10(|y(1) − yN|) against log10(h) for the Euler and modified Euler methods, with O(h) and O(h²) reference slopes.]
k2 =
y1 =
Second iteration:
k1 =
k2 =
y2 =
The modied Euler method example 2
We worked out last lecture that the exact solution in this case is
y (x) = e^(x³/3).
Due to these desirable attributes, this method is our favourite (so far!).
Lecture 6 optional feedback question
(a) Apply the modified Euler method to approximate the solution to the
IVP on 0 ≤ x ≤ 1, with h = 0.25.
(b) Compute the error of the approximation at each mesh point and
compare them to those from the Euler and second-order Taylor
methods (feedback Q from lecture 5).
The Euler and modified Euler methods are Runge-Kutta methods of first-
and second-order respectively.
Runge-Kutta methods are predictor-corrector methods.
Runge-Kutta methods do not require the computation of derivatives,
unlike the second-order Taylor series method. Instead, they use
evaluations of f (x, y).
When they agree with the Taylor series up to and including the term
in hᵖ, the order of the Runge-Kutta method is p.
One particular fourth-order Runge-Kutta method is very commonly
used in practice and is often referred to as the Runge-Kutta method.
k1 = hf (x0, y0) =
k2 = hf (x0 + ½h, y0 + ½k1) =
k3 = hf (x0 + ½h, y0 + ½k2) =
k4 = hf (x0 + h, y0 + k3) =
Repeating the same process but using the newly updated values for the ki
and yn gives
y2 = y1 + (1/6)(k1 + 2k2 + 2k3 + k4)
   = 2.5041.
The plot on the following slide shows how the RK4 trajectory is
obtained graphically.
In this (different) example, x0 = 0, y0 = 0.01, h = 0.2, and:
k1 = 0.02,  k2 = 0.06,  k3 = 0.10,  k4 = 0.26.
Solution:
For (13), we have f (x, y) = x + y.
The RK4 method is
yn+1 = yn + (1/6)(k1 + 2k2 + 2k3 + k4).
k1 = hf (x0, y0) = 0
k2 = hf (x0 + ½h, y0 + ½k1) = 0.02
k3 = hf (x0 + ½h, y0 + ½k2) = 0.022
k4 = hf (x0 + h, y0 + k3) = 0.0444
n xn yn
0 0 0
1 0.2 0.021400
2 0.4 0.091818
3 0.6 0.222106
4 0.8 0.425520
5 1.0 0.718250
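The table can be reproduced with a short RK4 routine (a sketch with our own function name):

```python
import math

# Classical RK4 for IVP (13): y' = x + y, y(0) = 0, h = 0.2 on [0, 1].
def rk4(f, x0, y0, h, n_steps):
    xs, ys = [x0], [y0]
    for _ in range(n_steps):
        x, y = xs[-1], ys[-1]
        k1 = h * f(x, y)
        k2 = h * f(x + h / 2, y + k1 / 2)
        k3 = h * f(x + h / 2, y + k2 / 2)
        k4 = h * f(x + h, y + k3)
        ys.append(y + (k1 + 2 * k2 + 2 * k3 + k4) / 6)
        xs.append(x + h)
    return xs, ys

xs, ys = rk4(lambda x, y: x + y, 0.0, 0.0, 0.2, 5)
err = abs(ys[-1] - (math.e - 2))   # exact solution gives y(1) = e - 2
```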
The RK4 method example 2
Again, using the exact solution y (xn) = e^(xn) − xn − 1, we can find the errors at
each xn.
As the table below shows, the RK4 approximation to the solution of (13) is
clearly much closer to the exact solution than the approximations obtained
using the Euler (E) and modified Euler (ME) methods.
[Figure: log10(|y(1) − yN|) against log10(h) for the Euler, modified Euler (ME), and RK4 methods, with O(h), O(h²), and O(h⁴) reference slopes.]
k2 = hf (xn + ½h, yn + ½k1) =
k3 = hf (xn + ½h, yn + ½k2) =
k4 = hf (xn + h, yn + k3) =
y1 =
Second iteration: k1 =
k2 =
k3 =
k4 =
y2 =
The RK4 method example 3
The exact solution in this case is y (x) = e^(x³/3).
The method:
✓ is explicit.
✓ is an O(h⁴) method.
✓ doesn't require any higher order derivatives of f (x, y).
✗ can be unstable if h is too large.
Overall, this method is superior to the forward Euler, Taylor series and
modified Euler methods.
Lecture 7 optional feedback question
(a) Apply the RK4 method to approximate the solution to the IVP on
0 ≤ x ≤ 0.5, with h = 0.25.
(b) Compute the error of the approximation at each mesh point and
compare them to those from the Euler and ME methods (homework
from lecture 6).
Today, we will:
learn how to solve systems of first-order IVPs numerically;
learn how to solve second-order IVPs numerically.
where yn = (y1,n, y2,n, . . . , ym,n).
n xn yn zn
0 0
1 0.05
2 0.10
3 0.15 0.148881 0.933125
4 0.20 0.195537 0.867253
5 0.25 0.238900 0.781262
6 0.30 0.277963 0.676881
Doing this by hand is slow and tedious, but it is (fairly) easy to adapt
our scalar IVP Python programs for y and f to deal with vectors y
and f .
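A minimal sketch of that adaptation, storing each yn as a tuple. The system below (y′ = z, z′ = −y, with exact solution y = sin x) is our own illustrative choice, not the one tabulated above:

```python
import math

# Forward Euler for a first-order system u' = f(x, u), with u a tuple.
def euler_system(f, x0, u0, h, n_steps):
    x, u = x0, u0
    for _ in range(n_steps):
        du = f(x, u)
        u = tuple(ui + h * dui for ui, dui in zip(u, du))
        x += h
    return x, u

# Illustrative system: y' = z, z' = -y, y(0) = 0, z(0) = 1 (y = sin x).
x, (y, z) = euler_system(lambda x, u: (u[1], -u[0]), 0.0, (0.0, 1.0),
                         0.001, 1000)
```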
[Figure: log10(|y(1) − yN|) against log10(h) for the system; the errors follow the O(h) reference slope.]
where
If
y = (y, z)ᵀ,
then we can write k1 and k2 in component form as
k1 = (k1y, k1z)ᵀ =
k2 = (k2y, k2z)ᵀ =
k2 =
y1 = y0 + ½(k1 + k2)
k2 =
y2 = y1 + ½(k1 + k2)
J =
yn+1 =
zn+1 =
Example: y″ + 4y′ + 5y = e^(−x).
Define z = y′, giving y″ = z′. Substitute these into the ODE:
z′ = e^(−x) − 4z − 5y.
We can then solve this system as in the previous example to find y (x).
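A sketch of solving this particular system numerically (our own code; the initial conditions y(0) = 1/2, y′(0) = −1/2 are chosen purely for illustration, since one can check they make the exact solution y(x) = e^(−x)/2):

```python
import math

# y'' + 4y' + 5y = e^(-x) rewritten as the pair y' = z, z' = e^(-x) - 4z - 5y.
def f(x, u):
    y, z = u
    return (z, math.exp(-x) - 4 * z - 5 * y)

# One RK4 step for a system of first-order ODEs, with u stored as a tuple.
def rk4_step(f, x, u, h):
    k1 = f(x, u)
    k2 = f(x + h / 2, tuple(ui + h / 2 * ki for ui, ki in zip(u, k1)))
    k3 = f(x + h / 2, tuple(ui + h / 2 * ki for ui, ki in zip(u, k2)))
    k4 = f(x + h, tuple(ui + h * ki for ui, ki in zip(u, k3)))
    return tuple(ui + h / 6 * (a + 2 * b + 2 * c + d)
                 for ui, a, b, c, d in zip(u, k1, k2, k3, k4))

x, u, h = 0.0, (0.5, -0.5), 0.1
for _ in range(10):
    u = rk4_step(f, x, u, h)
    x += h
# With these initial conditions, u[0] approximates e^(-1)/2 at x = 1.
```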
Converting a 2nd-order IVP into a pair of 1st-order IVPs
You try first!
Convert the following second-order IVP into a pair of first-order IVPs:
y″ + y y′ = cos(x).
Solution:
Define z = y′, giving y″ = z′. Substitute these into the ODE:
z′ = cos(x) − y z.