Dynamical Systems: Lecture Notes

The document provides an overview of dynamical systems and differential equations. It discusses the historical background of dynamical systems beginning with Newton's work in the 1600s. It defines dynamical systems as systems that evolve over time, whether settling into equilibrium or repeating in cycles. The document outlines types of dynamical systems as differential equations, which describe continuous time systems, and difference equations, which describe discrete time systems. It also defines types of differential equations and provides examples of modeling population growth with first order differential equations.

Dynamical Systems

Dr. Joseph K. Ansong


(Dept. of Mathematics, University of Ghana, Legon)

LECTURE NOTES
Dynamical Systems J.K.A

© Dr. Joseph K. Ansong
Contents

1 Introduction
1.1 Overview
1.1.1 Historical Background
1.2 Definitions & Terminology
1.2.1 Types of dynamical systems
1.2.2 Types of differential equations
1.2.3 A general framework
1.2.4 Nonautonomous Systems

2 First Order Differential Equations
2.1 Separable Equations
2.2 Linear First-Order Equations

3 One-dimensional Flows: flows on the line
3.1 Introduction
3.2 A Geometric Approach to one-dimensional equations
3.3 Fixed Points and Stability
3.4 Population Growth
3.5 Linear Stability Analysis
3.6 Existence and Uniqueness
3.7 Impossibility of Periodic Solutions
3.8 Computational and Numerical Methods
3.8.1 The Euler Method
3.8.2 Improved Euler's Method (Heun's Method)
3.8.3 Numerical Solvers

Chapter 1

Introduction

1.1 Overview
This is the subject that deals with change, with systems that evolve in time.
Whether the system in question settles down to equilibrium, keeps repeating
in cycles, or does something more complicated, it is dynamics that we use
to analyze the behavior. Dynamical ideas appear in various places: in differential equations, classical mechanics, chemical kinetics, population biology, etc.

1.1.1 Historical Background


The history is taken from the book by Strogatz; a more detailed account
can be found there. Dynamics was originally a branch of physics, beginning
with the work of Newton on differential equations in the mid-1600s. Newton
discovered the laws of motion and universal gravitation and used them to
explain Kepler’s laws of planetary motion. He solved the two-body problem:
the problem of calculating the motion of the earth around the sun, given the
inverse-square law of gravitational attraction between them. Later, other
physicists and mathematicians tried to extend Newton’s analytical methods
to the three-body problem (e.g., sun, earth, and moon) without success.
However, the work of Poincaré brought a breakthrough in the late 1800s.
He introduced a new point of view that emphasized qualitative rather than
quantitative questions. For example, instead of asking for the exact positions
of the planets at all times, he asked “Is the solar system stable forever, or
will some planets eventually fly off to infinity?” He developed a powerful
geometric approach to analyzing such questions. This is one of the main
approaches in the modern subject of dynamics, with applications in several
different fields beyond celestial mechanics. Poincaré was also the first person


to discuss the possibility of chaos, in which a deterministic system exhibits


aperiodic behavior that depends sensitively on the initial conditions, thereby
rendering long-term prediction impossible.
The invention of high-speed computers in the 1950s led to a revolution in
the study of dynamics. The computer allowed one to experiment with equations in a way that was impossible before, thereby giving some intuition about the behaviour of nonlinear systems. Such experiments led to Lorenz's discovery in 1963 of chaotic motion on a strange attractor. By studying convection rolls in the atmosphere he gained some understanding of the notorious unpredictability of the weather. Lorenz found that the solutions to his equations never settled down to equilibrium or to a periodic state; instead they continued to oscillate in an irregular, aperiodic fashion. Moreover, if he started his simulations from two slightly different initial conditions, the resulting behaviors would soon become totally different. The implication was that the system was inherently unpredictable: tiny errors in measuring the current state of the atmosphere (or any other chaotic system) would be amplified rapidly, eventually leading to embarrassing forecasts. Lorenz
also showed that there was structure in the chaos: when plotted in three
dimensions, the solutions to his equations fell onto a butterfly-shaped set of
points.
Two other major developments in dynamics in the 1970s are fractals
and mathematical biology. Mandelbrot codified and popularized fractals,
produced magnificent computer graphics of them, and showed how they could
be applied in a variety of subjects. In mathematical biology, Winfree applied
the geometric methods of dynamics to biological oscillations, for example
heart rhythms.

1.2 Definitions & Terminology

1.2.1 Types of dynamical systems

The two main types of dynamical systems are differential equations and
difference equations (or iterated maps). Differential equations describe
the evolution of systems in continuous time (see below), whereas iterated
maps arise in problems where time is discrete. Differential equations are
used much more widely in science and engineering, and we shall therefore
concentrate on them.


1.2.2 Types of differential equations


Mathematical models play a critical role in the sciences and engineering by helping us gain a better understanding of real-life phenomena. The models are generally simplified versions of the actual physical phenomena under investigation. They are simplified because the parameters governing the natural phenomena are often not completely understood or may be too complicated to be represented mathematically. The development of mathematical models usually results in an equation or a set of equations specifying how an unknown function (say φ(t)) changes with respect to a variable (say t).
Such an equation is referred to as a differential equation. For example,
suppose the variation in the population of mosquitoes in a certain village is
represented by the differential equation
dM/dt = kM,  (1.1)
where M is the number (population) of mosquitoes at time t, and k is a known
constant (obtained from observational or experimental data). Fortunately for
us, equation (1.1) can be solved easily using integration techniques learned
in your Calculus class. Re-writing the equation and integrating results in
dM/M = k dt,
∫ dM/M = ∫ k dt,
ln(M) = kt + C1,
M = e^(kt+C1) = e^(C1) e^(kt),
M(t) = Ce^(kt).  (1.2)

The integration constant C can be determined if we know the initial population of mosquitoes. Equation (1.2) then helps us to predict the population
of mosquitoes at a future time. For instance, if initially the population of
mosquitoes is 1000 and k is −5, then we have

1000 = Ce^(−5×0) = C,
⟹ M(t) = 1000e^(−5t).  (1.3)

Thus, our simple model may now be used to predict the decline in the pop-
ulation of mosquitoes in the village. A differential equation (e.g. equation
1.1) with the initial value specified (e.g. M = 1000 at t = 0; as in the above
example) is called an Initial Value Problem (IVP).
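As a quick numerical check of this IVP, here is a minimal sketch (forward Euler in Python, using the values M0 = 1000 and k = −5 from the example; the step count is an arbitrary choice):

```python
import math

def euler_decay(m0, k, t_end, n_steps):
    """Integrate dM/dt = k*M with forward Euler from t = 0 to t_end."""
    dt = t_end / n_steps
    m = m0
    for _ in range(n_steps):
        m += dt * k * m
    return m

m0, k, t_end = 1000.0, -5.0, 1.0
approx = euler_decay(m0, k, t_end, 100_000)
exact = m0 * math.exp(k * t_end)       # M(t) = 1000 e^(-5t), equation (1.3)
print(abs(approx - exact) / exact < 1e-3)
```

With a small enough step the numerical population agrees with the exact solution (1.3) to within a fraction of a percent.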


Notice that the solution to a differential equation is not a number but a function (see equation 1.2), and contains arbitrary “constants of integration”.
The presence of these arbitrary constants implies that there is generally no
unique solution to a differential equation. The unknown function in a differential equation (e.g. M in equation 1.1) is called the dependent variable, and the other variables (e.g. t in equation 1.1) are called independent variables.
It is usually clear from the equation which variable is dependent and which
is independent.

Definition 1. [Differential Equation]
A differential equation is an equation involving the rate of change of a quantity. That is, it involves a function and its derivatives.

An ordinary differential equation (ODE) is a differential equation in


which only the derivatives of the unknown function with respect to one independent variable appear in the equation. Examples of ODEs are

dy/dt + kt = 10,

m d²y/dt² + b dy/dt + ky = 0.  (1.4)
Equation (1.4) is the equation for a damped harmonic oscillator and involves
only ordinary derivatives dy/dt and d²y/dt² with respect to one independent variable, t. A partial differential equation (PDE) is a differential equation in which partial derivatives of the unknown function with respect to at least two independent variables appear in the equation. The following are some examples of PDEs:

∂u/∂t = ∂²u/∂x²,

∂φ/∂x + b ∂φ/∂y + b = 0,
where the first equation is often called the heat equation. There are two
independent variables t and x. The order of a differential equation is the
order of the highest derivative appearing in the equation. Examples of first
order equations are
dx/dt + x = 0,  and  dy/dx + 3xy = 0,

and the following are second order equations:

d²y/dx² + cy = 0,  and  d²x/dt² + (dx/dt)³ = 0.


The constants appearing in a differential equation are called parameters;


for example the constants m and b in equation (1.4). A first order ODE is
called explicit if it can be written in the form

dx/dt = f(t, x),  (1.5)
for a real-valued function f of two variables.

1.2.3 A general framework


Even though dynamical systems deals with both ODEs and PDEs, our concern here is with purely temporal behavior, and so we will deal exclusively with ordinary differential equations. A very general framework for ordinary
differential equations is provided by the system:

ẋ1 = f1 (x1 , x2 , · · · , xn )
ẋ2 = f2 (x1 , x2 , · · · , xn )
⋮  (1.6)
ẋn = fn (x1 , x2 , · · · , xn )

where the overdots denote differentiation with respect to t such that ẋi =
dxi /dt. The variables x1 , · · · , xn might represent concentrations of chemicals
in a reactor, populations of different species in an ecosystem, or the positions
and velocities of the planets in the solar system. The functions f1 , · · · , fn are
determined by the problem at hand. A system in which the f's do not explicitly contain the independent variable t is said to be autonomous; otherwise it is called nonautonomous. Every higher-order differential equation may be written as a system of first order equations in the form (1.6).
For example, the damped oscillator (1.4)

m d²x/dt² + b dx/dt + kx = 0  ⟹  mẍ + bẋ + kx = 0
can be written in the form of (1.6). Here’s the approach: define new variables

x1 = x, and x2 = ẋ.

Then ẋ1 = x2. From these definitions and the differential equation, we get

ẋ2 = ẍ = −(b/m)ẋ − (k/m)x = −(b/m)x2 − (k/m)x1.
Thus, the equivalent system of the damped oscillator in the form of (1.6) is
given by

ẋ1 = x2
ẋ2 = −(b/m)x2 − (k/m)x1.
This system is said to be linear, because all the xi on the right-hand side
appear to the first power only. Otherwise the system would be nonlinear.
Typical nonlinear terms are products, powers, and functions of the xi, such as x1x2, (x1)³, or cos x2. For example, the swinging of a pendulum is governed by the equation

ẍ + (g/L) sin x = 0,
where x is the angle of the pendulum from vertical, g is the acceleration due
to gravity, and L is the length of the pendulum. The equivalent system is
nonlinear:

ẋ1 = x2
ẋ2 = −(g/L) sin x1.
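As a sketch (not from the text), the nonlinear pendulum system can be integrated numerically; here g/L = 1 is an assumed illustrative value, and a classical fourth-order Runge–Kutta step is used. Since there is no damping, the energy (1/2)x2² − (g/L) cos x1 should stay nearly constant along the trajectory:

```python
import math

G_OVER_L = 1.0   # assumed value of g/L, for illustration only

def f(x1, x2):
    """Pendulum system: x1' = x2, x2' = -(g/L) sin x1."""
    return x2, -G_OVER_L * math.sin(x1)

def rk4_step(x1, x2, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(x1, x2)
    k2 = f(x1 + dt / 2 * k1[0], x2 + dt / 2 * k1[1])
    k3 = f(x1 + dt / 2 * k2[0], x2 + dt / 2 * k2[1])
    k4 = f(x1 + dt * k3[0], x2 + dt * k3[1])
    x1 += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    x2 += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return x1, x2

def energy(x1, x2):
    return 0.5 * x2**2 - G_OVER_L * math.cos(x1)

x1, x2 = 0.5, 0.0                # released from rest at angle 0.5 rad
e0 = energy(x1, x2)
for _ in range(1_000):           # integrate to t = 10
    x1, x2 = rk4_step(x1, x2, 0.01)
print(abs(energy(x1, x2) - e0) < 1e-6)
```

The near-conservation of energy is a useful sanity check that the first-order system faithfully represents the original second-order equation.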
Nonlinearity makes the pendulum equation very difficult to solve analytically.
The usual way around this is to invoke the small-angle approximation sin x ≈ x for x ≪ 1. This converts the problem to a linear one, which can then
be solved easily. But by restricting to small x, we’re throwing out some
of the physics, like motions where the pendulum whirls over the top. Is it
really necessary to make such drastic approximations? It turns out that the
pendulum equation can be solved analytically, in terms of elliptic functions.
But there ought to be an easier way. After all, the motion of the pendulum is
simple: at low energy, it swings back and forth, and at high energy it whirls
over the top. There should be some way of extracting this information from
the system directly. This is the sort of problem we’ll learn how to solve, using
geometric methods.
Here’s the rough idea. Suppose we happen to know a solution to the
pendulum system, for a particular initial condition. This solution would be
a pair of functions x1 (t) and x2 (t), representing the position and velocity of
the pendulum. If we construct an abstract space with coordinates (x1 , x2 ),
then the solution (x1 (t), x2 (t)) corresponds to a point moving along a curve
in this space (Figure).


This curve is called a trajectory, and the space is called the phase
space for the system. The phase space is completely filled with trajectories,
since each point can serve as an initial condition. Our goal is to run this construction in reverse: given the system, we want to draw the trajectories, and
thereby extract information about the solutions. In many cases, geometric
reasoning will allow us to draw the trajectories without actually solving the
system!
Some terminology: the phase space for the general system (1.6) is the
space with coordinates x1 , · · · , xn . Because this space is n-dimensional, we
will refer to (1.6) as an n-dimensional system or an nth-order system.
Thus n represents the dimension of the phase space.

1.2.4 Nonautonomous Systems


You might worry that (1.6) is not general enough because it doesn’t include
any explicit time dependence (i.e., it is autonomous). How do we deal with time-dependent or nonautonomous equations like the forced harmonic oscillator mẍ + bẋ + kx = F cos t? In this case too there's an easy trick that
allows us to rewrite the system in the form (1.6). We let x1 = x and x2 = ẋ
as before but now we introduce x3 = t. Then ẋ3 = 1 and so the equivalent
system is

ẋ1 = x2
ẋ2 = (1/m)(−kx1 − bx2 + F cos x3)  (1.7)
ẋ3 = 1

which is an example of a three-dimensional system. Similarly, an nth-order time-dependent equation is a special case of an (n + 1)-dimensional system.
By this trick, we can always remove any time dependence by adding an extra
dimension to the system.
The virtue of this change of variables is that it allows us to visualize a
phase space with trajectories frozen in it. Otherwise, if we allowed explicit
time dependence, the vectors and the trajectories would always be wiggling; this would ruin the geometric picture we're trying to build. A more physical motivation is that the state of the forced harmonic oscillator is truly three-dimensional: we need to know three numbers, x, ẋ, and t, to predict the
future, given the present. So a three-dimensional phase space is natural.
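A minimal sketch of this trick in code (the parameter values m, b, k, F are assumptions chosen for illustration): integrating the autonomous system (1.7) with forward Euler, the extra variable x3 simply reproduces time.

```python
import math

# Assumed illustrative parameters (not from the text)
m, b, k, F = 1.0, 0.2, 1.0, 0.5

def rhs(x1, x2, x3):
    """Autonomous 3-D form (1.7) of m x'' + b x' + k x = F cos t."""
    return x2, (-k * x1 - b * x2 + F * math.cos(x3)) / m, 1.0

dt, steps = 0.001, 5_000
x1, x2, x3 = 1.0, 0.0, 0.0       # x3 plays the role of time t
for _ in range(steps):
    d1, d2, d3 = rhs(x1, x2, x3)
    x1, x2, x3 = x1 + dt * d1, x2 + dt * d2, x3 + dt * d3

print(abs(x3 - dt * steps) < 1e-9)   # x3' = 1, so x3 just reproduces t
```

Because ẋ3 = 1 with x3(0) = 0, the third coordinate tracks t exactly, which is what makes the frozen-vector-field picture possible.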

Chapter 2

First Order Differential Equations

2.1 Separable Equations

Definition 2 (Separable Equations). A first order differential equation

dy/dx = f(x, y)

is said to be separable if f(x, y) can be written as the product of a function g(x) that depends only on x and a function h(y) that depends only on y, such that

dy/dx = h(y)g(x).  (2.1)

Method of Solution
To solve

dy/dx = h(y)g(x),  (2.2)

separate variables to get

dy/h(y) = g(x) dx,  h(y) ≠ 0.


Now integrate both sides:


∫ dy/h(y) = ∫ g(x) dx,  h(y) ≠ 0.
The result of the integration above yields the solution to the differential
equation.
Example 1. Solve the following differential equation:

dy/dx = 1/(xy³).

Solution.

dy/dx = 1/(xy³),
⟹ y³ dy = dx/x,
⟹ ∫ y³ dy = ∫ dx/x,
⟹ (1/4)y⁴ = ln|x| + C1,
⟹ y⁴ = 4 ln|x| + 4C1 = 4 ln|x| + C,
∴ y = [4 ln|x| + C]^(1/4).
Example 2. Solve the following differential equation:

dy/dx = y(2 + sin x).  (2.3)

Solution.

dy/dx = y(2 + sin x),
⟹ ∫ dy/y = ∫ (2 + sin x) dx,
⟹ ln(y) = 2x − cos(x) + C1,
⟹ y = e^(2x−cos x+C1) = e^(C1) e^(2x−cos(x)),
∴ y = Ke^(2x−cos(x)).
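The solution can be sanity-checked without redoing the integration; a sketch that compares a centered finite difference of y = K e^(2x−cos x) (K arbitrary) against the right-hand side y(2 + sin x):

```python
import math

K = 1.5    # arbitrary constant for the check

def y(x):
    return K * math.exp(2 * x - math.cos(x))

x, h = 0.7, 1e-6
dy_numeric = (y(x + h) - y(x - h)) / (2 * h)   # centered difference
dy_claimed = y(x) * (2 + math.sin(x))          # right-hand side of (2.3)
print(abs(dy_numeric - dy_claimed) / abs(dy_claimed) < 1e-8)
```

The check passes for any K, consistent with the constant being arbitrary.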
Example 3. Solve

(1 + x)dy − y dx = 0.

Solution.

(1 + x)dy − y dx = 0
⟹ dy/y = dx/(1 + x)
⟹ ∫ dy/y = ∫ dx/(1 + x)
⟹ ln|y| = ln|1 + x| + C1
⟹ ln|y/(1 + x)| = C1
⟹ y/(1 + x) = ±e^(C1) = C
⟹ y = C(1 + x),

where C is an arbitrary constant.

Example 4. Solve

dy/dx = y² − 9.

Solution.

∫ dy/(y² − 9) = ∫ dx,  y ≠ ±3.

Now, we use partial fractions to write:

1/(y² − 9) = 1/[(y − 3)(y + 3)] ≡ A/(y − 3) + B/(y + 3),

⟹ 1 = A(y + 3) + B(y − 3),

y = 3 :  6A = 1, ⟹ A = 1/6
y = −3 :  1 = −6B, ⟹ B = −1/6

1/(y² − 9) = (1/6)/(y − 3) − (1/6)/(y + 3).

So the integral becomes

∫ [(1/6)/(y − 3) − (1/6)/(y + 3)] dy = ∫ dx

(1/6) ln|y − 3| − (1/6) ln|y + 3| = x + C1

ln|(y − 3)/(y + 3)| = 6x + C2,  C2 = 6C1

(y − 3)/(y + 3) = ±e^(C2) e^(6x) = Ce^(6x)

Thus,

y = 3(1 + Ce^(6x))/(1 − Ce^(6x)).
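A quick numerical sanity check of the final formula (the constant C and the test point are arbitrary choices, avoiding the singularity where Ce^(6x) = 1):

```python
import math

C = 0.2    # arbitrary constant; avoid points where C e^(6x) = 1 (a singularity)

def y(x):
    return 3 * (1 + C * math.exp(6 * x)) / (1 - C * math.exp(6 * x))

x, h = 0.1, 1e-6
dy_numeric = (y(x + h) - y(x - h)) / (2 * h)   # centered difference
print(abs(dy_numeric - (y(x)**2 - 9)) < 1e-4)
```

The finite difference matches y² − 9 to high accuracy, confirming the partial-fraction integration.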

2.2 Linear First-Order Equations


A linear first order differential equation is an equation of the form

a1(x) dy/dx + a0(x)y = b(x).

To solve the differential equation, we first write it in a standard form

dy/dx + P(x)y = Q(x).  (2.4)

Now suppose that equation (2.4) could be simplified by multiplying by some function, say μ(x), such that

μ(x) dy/dx + μ(x)P(x)y = μ(x)Q(x)  (2.5)

and that

d/dx [μ(x)y] = μ(x)Q(x).  (2.6)

Then it will be easy to integrate the equation above and obtain a solution to the differential equation. Notice that equations (2.5) and (2.6) imply that we can compute the function μ(x):

μ′(x) = μ(x)P(x),
⟹ ∫ dμ/μ = ∫ P(x) dx,
⟹ μ(x) = e^(∫P(x)dx).  (2.7)

So from equation (2.6) we get the solution

μ(x)y = ∫ μ(x)Q(x) dx,
∴ y = (1/μ(x)) ∫ μ(x)Q(x) dx,  (2.8)

where μ(x) is given by equation (2.7). The function μ(x) is called the integrating factor of the differential equation.

Below is a summary of the approach outlined above.


a) Write the linear equation in a standard form:

dy/dx + P(x)y = Q(x).  (2.10)

b) Calculate the integrating factor:

μ(x) = e^(∫P(x)dx)

c) Multiply (2.10) by μ(x) such that

d/dx [μ(x)y] = μ(x)Q(x)

d) Integrate the equation above to get

y = (1/μ(x)) ∫ μ(x)Q(x) dx

Example 5. Solve the following IVP:

dy/dx + 4y − e^(−x) = 0,  y(0) = 4/3.

Solution.

dy/dx + 4y − e^(−x) = 0
⟹ dy/dx + 4y = e^(−x)

μ(x) = e^(∫4dx) = e^(4x),

where we have ignored the constant of integration because it cancels out in the subsequent steps. Thus

e^(4x) dy/dx + 4e^(4x)y = e^(3x)
⟹ d/dx [e^(4x)y] = e^(3x)
⟹ e^(4x)y = ∫ e^(3x) dx = (1/3)e^(3x) + C
⟹ y = (1/3)e^(−x) + Ce^(−4x)

Applying the initial condition, we have

x = 0, y = 4/3 : ⟹ 4/3 = 1/3 + C, ⟹ C = 1

∴ y = (1/3)e^(−x) + e^(−4x).
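A sketch verifying both the initial condition and the differential equation for the solution just obtained (the test point x = 0.5 is an arbitrary choice):

```python
import math

def y(x):
    return math.exp(-x) / 3 + math.exp(-4 * x)

ic_ok = abs(y(0.0) - 4 / 3) < 1e-12            # initial condition y(0) = 4/3

x, h = 0.5, 1e-6
dy = (y(x + h) - y(x - h)) / (2 * h)           # centered difference
ode_ok = abs(dy + 4 * y(x) - math.exp(-x)) < 1e-8
print(ic_ok and ode_ok)
```

Both checks pass, so the integrating-factor recipe produced a genuine solution of the IVP.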
Example 6. Solve the differential equation

x dy/dx − 4y = x⁶eˣ.

Solution. Re-write in a standard form to get

dy/dx − 4y/x = x⁵eˣ,  x ≠ 0

μ = e^(∫(−4/x)dx) = e^(−4 ln|x|) = e^(ln x⁻⁴) = x⁻⁴

⟹ x⁻⁴ dy/dx − 4x⁻⁵y = xeˣ
⟹ d/dx [x⁻⁴y] = xeˣ
⟹ x⁻⁴y = ∫ xeˣ dx

Using integration by parts, we let

I = ∫ xeˣ dx
⟹ I = xeˣ − eˣ + C = eˣ(x − 1) + C

Thus

x⁻⁴y = eˣ(x − 1) + C
∴ y = x⁴eˣ(x − 1) + Cx⁴.
EXERCISE 1. Solve

1) dy/dx − y = e^(3x)

2) dr/dθ + r tan θ = sec θ

3) dy/dx = y/x + 2x + 1

Chapter 3

One-dimensional Flows: flows on the line

3.1 Introduction
Let’s start with a simple single equation of the form
ẋ = f (x) (3.1)
where x(t) is a real-valued function of time t, and f(x) is a smooth real-valued function of x. The equation is said to be one-dimensional or first
order.

3.2 A Geometric Approach to one-dimensional equations
Graphical illustrations are often more helpful than formulas for analyzing
nonlinear systems. This is illustrated with a simple example. Later, we will
interpret a differential equation as a vector field.
Here’s a simple nonlinear differential equation that can be solved analytically:
ẋ = sin x (3.2)
To solve, we separate variables and integrate:
dt = dx/sin x,
⟹ ∫ dt = ∫ dx/sin x = ∫ csc x dx,
∴ t = −ln|csc x + cot x| + C.
Suppose we have the initial condition that x = x0 at t = 0. Then we have

C = ln | csc x0 + cot x0 |,

so the solution becomes


t = ln |(csc x0 + cot x0)/(csc x + cot x)|.  (3.3)

Even though we have an exact solution to the problem, the solution is difficult to interpret. For example, it is difficult to answer the following questions:

1. Suppose x0 = π/4; describe the qualitative features of the solution x(t) for all t > 0. In particular, what happens as t → ∞?

2. For an arbitrary initial condition x0, what is the behaviour of x(t) as t → ∞?

In comparison to the analytical solution, a graphical analysis of (3.2) is much clearer and simpler, as shown in Figure 3.1. You can think of t as time,
x as position of an imaginary particle moving along the real line, and ẋ as the
velocity of that particle. Then the differential equation ẋ = sin x represents
a vector field on the line: it dictates the velocity vector ẋ at each x. To
sketch the vector field, it is convenient to plot ẋ versus x, and then draw
arrows on the x-axis to indicate the corresponding velocity vector at each x.
The arrows point to the right when ẋ > 0 and to the left when ẋ < 0. A

Figure 3.1:

more physical way to think about the vector field is this: imagine that fluid
is flowing steadily along the x-axis with a velocity that varies from place to
place, according to the rule ẋ = sin x. As shown in Figure 3.1, the flow is
to the right when ẋ > 0 and to the left when ẋ < 0. At points where ẋ = 0,


there is no flow; such points are therefore called fixed points. You can
see that there are two kinds of fixed points in Figure 3.1: solid black dots
represent stable fixed points (often called attractors or sinks, because the
flow is toward them) and open circles represent unstable fixed points (also
known as sources or repellers).
This graphical illustration now gives us more insight and understanding
of the solutions to ẋ = sin x. We start the imaginary particle at x0 and
watch how it is carried along the flow. We can now answer the questions
posed above:

1. Figure 3.1 shows that a particle starting at x0 = π/4 moves to the right faster and faster until it crosses x = π/2 (where sin x reaches its maximum). Then the particle starts slowing down and eventually approaches the stable fixed point x = π from the left. Thus, the qualitative form of the solution is as shown in Figure 3.2. The solution curve is concave up at first, and then concave down; this corresponds to the initial acceleration for x < π/2, followed by the deceleration toward x = π.

2. In the same way, Figure 3.1 shows that for any initial condition x0 ,
if ẋ > 0 initially, the particle heads to the right and asymptotically
approaches the nearest stable fixed point. Similarly, if ẋ < 0 initially,
the particle approaches the nearest stable fixed point to its left. If
ẋ = 0, then x remains constant. The qualitative form of the solution
for any initial condition is sketched in Figure 3.3.

Figure 3.2:


Figure 3.3:
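The behaviour described above can be reproduced numerically; a minimal forward-Euler sketch of ẋ = sin x starting at x0 = π/4 (the step size and duration are arbitrary choices):

```python
import math

x, dt = math.pi / 4, 0.01
for _ in range(5_000):          # integrate x' = sin x out to t = 50
    x += dt * math.sin(x)

# The particle flows to the stable fixed point x* = pi
print(abs(x - math.pi) < 1e-3)
```

The trajectory settles onto x = π, exactly as the vector-field picture predicts.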

One of the drawbacks of the geometric approach is the lack of quantitative information: for instance, we don’t know the time at which the speed |ẋ| is greatest. However, in many cases qualitative information is what we care about, and then a picture is fine.

3.3 Fixed Points and Stability


In general, to determine the qualitative behaviour of solutions to ẋ = f (x),
we just need to draw the graph of f (x) and then use it to sketch the vector
field on the real line as illustrated in Figure 3.4. The idea is to imagine that a
fluid is flowing along the real line with a local velocity f (x). This imaginary
fluid is called the phase fluid, and the real line is the phase space such that

(a) The flow is to the right where f (x) > 0 and to the left where f (x) < 0.

(b) To find the solution to ẋ = f(x) starting from an arbitrary initial condition x0, we place an imaginary particle (known as a phase point) at x0 and watch how it is carried along by the flow.

(c) As time goes on, the phase point moves along the x-axis according to some function x(t). This function is called the trajectory based at x0, and it represents the solution of the differential equation starting from the initial condition x0.

Figure 3.4:

(d) A picture like Figure 3.4, which shows all the qualitatively different trajectories of the system, is called a phase portrait.

The appearance of the phase portrait is controlled by the fixed points x∗, defined by f(x∗) = 0; they correspond to stagnation points of the flow. In
Figure 3.4, the solid black dot is a stable fixed point (the local flow is toward
it) and the open dot is an unstable fixed point (the flow is away from it).
In terms of the original differential equation, fixed points represent equilibrium solutions (sometimes called steady, constant, or rest solutions, since
if x = x∗ initially, then x(t) = x∗ for all time).

Definition 3. An equilibrium is defined to be stable if all sufficiently small disturbances away from it damp out in time.

Thus stable equilibria are represented geometrically by stable fixed points. Conversely, unstable equilibria, in which disturbances grow in time, are represented by unstable fixed points.

Example 7. Find all fixed points for

ẋ = x² − 1,

and classify their stability.


Solution. In this case f(x) = x² − 1. To determine the fixed points, we set f(x∗) = 0 and solve for x∗. Thus

(x∗)² − 1 = 0 ⟹ x∗ = ±1.

To determine stability, we plot f(x) = x² − 1 and then sketch the vector field (see Figure 3.5). The flow is to the right where x² − 1 > 0 and to the left where x² − 1 < 0. Hence

x∗ = −1 is stable, and x∗ = 1 is unstable.

Figure 3.5:
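The same classification can be automated by sampling the sign of f on either side of each fixed point; a minimal sketch (the probe distance eps is an arbitrary choice):

```python
def f(x):
    return x**2 - 1

def classify(x_star, eps=1e-3):
    """Classify a fixed point of x' = f(x) by the flow direction on either side."""
    left, right = f(x_star - eps), f(x_star + eps)
    if left > 0 and right < 0:
        return "stable"        # flow approaches from both sides
    if left < 0 and right > 0:
        return "unstable"      # flow leaves on both sides
    return "half-stable"

print(classify(-1.0), classify(1.0))
```

Running it reproduces the graphical conclusion: x∗ = −1 is stable and x∗ = 1 is unstable.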

Note that the definition of stable equilibrium is based on small disturbances; certain large disturbances may fail to decay. In Example 7, all small disturbances to x∗ = −1 will decay, but a large disturbance that sends x to the right of x = 1 will not decay; in fact, the phase point will be repelled out to +∞. To emphasize this aspect of stability, we sometimes say that x∗ = −1 is locally stable, but not globally stable.
Example 8. Sketch the phase portrait corresponding to

ẋ = x − cos x,

and determine the stability of all the fixed points.

Solution. Our previous method is still valid but it’s difficult to apply because
we need to figure out how to graph x − cos x. An alternative approach, which
is much easier, is to separately graph y = x and y = cos x since we know
how to plot them. By plotting them on the same axes, we find that they
intersect in exactly one point (Figure 3.6). The intersection corresponds to a
fixed point, since x∗ = cos x∗ and therefore f (x∗ ) = x∗ − cos x∗ = 0. To plot


the vector field, we notice that when the line lies above the cosine curve, we
have x > cos x and so ẋ > 0: the flow is to the right. Also, the flow is to
the left where the line is below the cosine curve. Hence x∗ is the only fixed
point, and it is unstable. Note that we can classify the stability of x∗, even though we don't have a formula for x∗ itself!

Figure 3.6:
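Although there is no closed-form expression for x∗, it is easy to compute numerically; a bisection sketch for x = cos x (the bracket [0, 1] works because x − cos x changes sign there):

```python
import math

def f(x):
    return x - math.cos(x)

# f(0) = -1 < 0 and f(1) = 1 - cos(1) > 0, so [0, 1] brackets the root
a, b = 0.0, 1.0
for _ in range(60):            # the bracketing interval halves each pass
    mid = 0.5 * (a + b)
    if f(mid) < 0:
        a = mid
    else:
        b = mid
x_star = 0.5 * (a + b)
print(abs(x_star - 0.7390851) < 1e-6)
```

Bisection converges to x∗ ≈ 0.739085, the unique (and, as argued above, unstable) fixed point.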

3.4 Population Growth


The simplest model for the growth of a population of organisms is

Ṅ = rN,

where N (t) is the population at time t, and r > 0 is the growth rate. This
model predicts exponential growth:

N(t) = N0 e^(rt),

where N0 is the population at t = 0. Realistically, such exponential growth cannot go on forever. To model the effects of overcrowding and limited resources, population biologists and demographers often assume that the per capita growth rate Ṅ/N decreases when N becomes sufficiently large, as depicted in Figure 3.7a. For small N, the growth rate equals r, just as before.
However, for populations larger than a certain carrying capacity K, the
growth rate actually becomes negative. That is, the death rate is higher than
the birth rate.


Figure 3.7:

Thus, the exponentially growing (or Malthusian) model is then modified by assuming that the per capita growth rate Ṅ/N decreases linearly with N (see Figure 3.7b). This leads to the so-called logistic equation

Ṅ = rN (1 − N/K).
This equation was first suggested to describe the growth of human popula-
tions by Verhulst in 1838. The equation can be solved analytically (Exercise)
but we prefer to use the geometric approach.
Figure 3.8 shows a plot of Ṅ versus N to see what the vector field looks
like. In this case we plot only N ≥ 0, since it doesn’t make sense to think
about negative populations. By setting
rN∗ (1 − N∗/K) = 0,
we see that fixed points occur at N ∗ = 0 and N ∗ = K. From the flow, we find
that N ∗ = 0 is an unstable fixed point and N ∗ = K is a stable fixed point.
In biological terms, N = 0 is an unstable equilibrium: a small population
will grow exponentially fast and run away from N = 0. On the other hand,
if N is disturbed slightly from K, the disturbance will decay monotonically
and N (t) → K as t → ∞.
Figure 3.8 shows that if we start a phase point at any N0 > 0, it will
always flow toward N = K. Hence, the population always approaches the
carrying capacity. The only exception is if N0 = 0; then there’s nobody
around to start reproducing and so N = 0 for all time.
We may now deduce the qualitative shape of the solutions using Figure
3.8. For example, if N0 < K/2, the phase point moves faster and faster until


Figure 3.8:

it crosses N = K/2, where the parabola in Figure 3.8 reaches its maximum.
Then the phase point slows down and eventually creeps toward N = K.
In biological terms, this means that the population initially grows in an
accelerating fashion, and the graph of N (t) is concave up. But after N =
K/2, the derivative Ṅ begins to decrease, and so N (t) is concave down as
it asymptotes to the horizontal line N = K (Figure 3.9). Thus the graph of
N (t) is S-shaped or sigmoid for N0 < K/2.
Something qualitatively different occurs if the initial condition N0 lies
between K/2 and K; now the solutions are decelerating from the start. Hence
these solutions are concave down for all t. If the population initially exceeds
the carrying capacity (N0 > K), then N (t) decreases toward N = K and
is concave up. Finally, if N0 = 0 or N0 = K, then the population stays
constant.
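The qualitative conclusions above are easy to check numerically. The following is a minimal Python sketch (the values r = 1, K = 100, the step size, and the initial conditions are illustrative choices of ours, not from the text) that Euler-integrates the logistic equation and confirms that every positive initial population flows to the carrying capacity, while N = 0 stays put:

```python
# Numerically integrate N' = r*N*(1 - N/K) with a simple Euler scheme.
# r, K, dt and the initial conditions below are illustrative choices.

def logistic_rate(N, r=1.0, K=100.0):
    """Right-hand side of the logistic equation."""
    return r * N * (1.0 - N / K)

def integrate(N0, t_end=20.0, dt=0.01):
    """Euler-integrate from N(0) = N0 and return the final population."""
    N = N0
    for _ in range(int(t_end / dt)):
        N += dt * logistic_rate(N)
    return N

# Every positive initial condition flows toward the carrying capacity K,
# while N0 = 0 stays at the unstable fixed point.
for N0 in (1.0, 50.0, 150.0):
    print(round(integrate(N0), 2))   # each ends near 100.0
print(integrate(0.0))                # stays exactly 0.0
```

Runs starting below K/2 pass through the inflection at N = K/2 on the way up, matching the sigmoid shape described above; runs starting above K decay monotonically toward K.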

Critique of the Logistic Model


The logistic equation was tested in laboratory experiments in which colonies
of bacteria, yeast, or other simple organisms were grown in conditions of
constant climate, food supply, and absence of predators. These experiments
often yielded sigmoid growth curves, in some cases with an impressive match
to the logistic predictions.
On the other hand, the agreement was much worse for fruit flies, flour beetles, and other organisms that have complex life cycles involving eggs, larvae, pupae, and adults. In these organisms, the predicted asymptotic approach to a steady carrying capacity was never observed; instead the populations exhibited large, persistent fluctuations after an initial period of logistic growth.

Figure 3.9:


Thus, the logistic model should really be regarded as a metaphor for
populations that have a tendency to grow from zero population up to some
carrying capacity K.

3.5 Linear Stability Analysis


So far we have relied on graphical methods to determine the stability of
fixed points. Frequently one would like to have a more quantitative measure
of stability, such as the rate of decay to a stable fixed point. This sort of
information may be obtained by linearizing about a fixed point, as we now
explain.

Linearization
Let x∗ be a fixed point, and let
η(t) = x(t) − x∗
be a small perturbation away from x∗ . To see whether the perturbation grows
or decays, we derive a differential equation for η. Differentiating the above


equation gives

η̇ = d(x − x∗)/dt = ẋ,
since x∗ is a constant. Thus

η̇ = ẋ = f (x) = f (x∗ + η).

Recall that a Taylor series expansion of any function g(x) about a point, say
a, is given by
g(x) = g(a) + g′(a)(x − a)/1! + g″(a)(x − a)²/2! + g‴(a)(x − a)³/3! + · · ·
Thus, using Taylor’s expansion, we have

f(x∗ + η) = f(x∗) + ηf′(x∗) + O(η²),

where O(η²) denotes quadratically small terms in η. Now, since f(x∗) = 0 (because x∗ is a fixed point), we get

η̇ = ηf′(x∗) + O(η²).

If f′(x∗) ≠ 0, the O(η²) terms are negligible and we may write the approximation

η̇ ≈ ηf′(x∗).
This is a linear equation in η, and is called the linearization about x∗ .
Separating variables and solving η̇ = ηf′(x∗) gives

η(t) = C e^{f′(x∗) t},

where C is an arbitrary constant and f′(x∗) is also a constant. This shows that the perturbation η(t) grows exponentially if f′(x∗) > 0 and decays if f′(x∗) < 0. If f′(x∗) = 0, the O(η²) terms are not negligible and a nonlinear analysis is needed to determine stability; this is discussed shortly.

Implications of linearization
An important conclusion from the linearization above is that the slope f′(x∗) at the fixed point determines its stability. This is consistent with our earlier analysis. In other words, an alternative approach to determining the stability of a fixed point is to compute f′(x∗), rather than first graphing f(x) and indicating the vector field. A further feature of f′(x∗) is that we now have a measure of how stable a fixed point is, determined by the magnitude of f′(x∗).


The magnitude of f′(x∗) plays the role of an exponential growth or decay rate. Its reciprocal 1/|f′(x∗)| is a characteristic time scale; it determines the time required for x(t) to vary significantly in the neighborhood of x∗.
Example 9. Using linear stability analysis, determine the stability of the
fixed points for
ẋ = sin x.
Solution. Fixed points occur where

f (x) = sin x = 0

=⇒ x∗ = nπ,
where n is an integer. Then

f′(x∗) = cos nπ = 1 if n is even, −1 if n is odd.

Thus, x∗ is unstable if n is even and stable if n is odd. This is consistent with the results in Figure 3.1.
Example 10. Classify the fixed points of the logistic equation

Ṅ = rN (1 − N/K)

using linear stability analysis, and find the characteristic time scale in each case.
Solution. Here f(N) = rN (1 − N/K) and the fixed points are N∗ = 0 and N∗ = K. Now

f′(N) = r − 2rN/K.

Thus f′(0) = r and f′(K) = −r. Hence N∗ = 0 is unstable and N∗ = K is stable, as shown earlier using the geometric arguments. The characteristic time scale in either case is

1/|f′(N∗)| = 1/r.
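The linear stability test lends itself to a small numerical helper: approximate f′(x∗) by a central difference and classify the fixed point from its sign. The sketch below (helper names and tolerances are our own choices) reproduces the conclusions of Examples 9 and 10:

```python
import math

def derivative(f, x, h=1e-6):
    """Central-difference approximation to f'(x)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

def classify(f, x_star, tol=1e-4):
    """Classify a fixed point of x' = f(x) from the sign of f'(x*)."""
    slope = derivative(f, x_star)
    if slope < -tol:
        return "stable"
    if slope > tol:
        return "unstable"
    return "inconclusive"   # f'(x*) ~ 0: linearization says nothing

r, K = 1.0, 100.0
logistic = lambda N: r * N * (1.0 - N / K)

print(classify(logistic, 0.0))      # unstable, since f'(0) = r > 0
print(classify(logistic, K))        # stable,   since f'(K) = -r < 0
print(classify(math.sin, 0.0))      # unstable (x* = n*pi, n even)
print(classify(math.sin, math.pi))  # stable   (x* = n*pi, n odd)
```

The "inconclusive" branch corresponds to the f′(x∗) = 0 cases treated in Example 11, where linearization alone cannot decide stability.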


Example 11. What can be said about the stability of a fixed point when f′(x∗) = 0? Consider the following problems:

(a) ẋ = −x³

(b) ẋ = x³

(c) ẋ = x²

(d) ẋ = 0

Solution. In general, we cannot say anything about the stability; it must be considered on a case-by-case basis. In each of the given problems, the system has a fixed point x∗ = 0 with f′(x∗) = 0. However, the stability is different in each case. Using the graphical approach (see Figure 3.10) shows that

(a) x∗ = 0 is stable;

(b) x∗ = 0 is unstable;

(c) x∗ = 0 is a hybrid case, which may be referred to as half-stable, since the fixed point is attracting from the left and repelling from the right;

(d) x∗ = 0 is a whole line of fixed points; perturbations neither grow nor decay.

These examples often arise naturally in the context of bifurcations, to be discussed later.

3.6 Existence and Uniqueness


The theorem for existence and uniqueness of solutions is sometimes referred
to as the Picard-Lindelöf Theorem:

Theorem 1. Consider the initial value problem

ẋ = f (x), x0 = x(0).

Suppose that f(x) and f′(x) are continuous on an open interval R of the x-axis, and suppose that x0 is a point in R. Then the initial value problem has a solution x(t) on some time interval (−τ, τ) about t = 0, and the solution is unique.


Figure 3.10:

Explanation of theorem:
The theorem says that if f(x) is smooth enough, then solutions exist and are unique. Even so, there is no guarantee that solutions exist forever, as some of the examples below show: the theorem only guarantees existence in a (possibly very short) time interval around t = 0.

Example 12. Discuss the existence and uniqueness of solutions to the initial
value problem ẋ = 1 + x2 , x(0) = x0 . Do solutions exist for all time?

Solution. The function f(x) = 1 + x² is continuous and has a continuous derivative for all x. Thus, by the existence and uniqueness theorem, solutions exist and are unique for any initial condition x0.
Consider the initial condition x(0) = 0. Solving the problem analytically


by separation of variables, we get

∫ dx/(1 + x²) = ∫ dt =⇒ tan⁻¹ x = t + C.
Applying the initial condition x(0) = 0 yields C = 0. Thus, the solution is
given by
x(t) = tan t.
Observe that this solution exists for −π/2 < t < π/2 since x(t) → ±∞ as
t → ±π/2. Outside this interval, the problem has no solution for the initial
value x0 = 0. Why?
Also observe that the system has solutions that reach infinity in finite time. This occurrence is referred to as blow-up. It has practical importance, for example in numerical models, and is one reason why it is sometimes necessary to establish the existence and uniqueness of solutions before actually solving an equation.
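The blow-up can also be seen numerically. The rough sketch below (Euler with an arbitrary small step size of our choosing) tracks the exact solution x(t) = tan t well before t = π/2 and then diverges rapidly as t approaches π/2:

```python
import math

def euler_tan(t_end, dt=1e-5):
    """Euler-integrate x' = 1 + x^2 from x(0) = 0 up to time t_end."""
    x = 0.0
    for _ in range(int(t_end / dt)):
        x += dt * (1.0 + x * x)
    return x

# Before the blow-up time the numerical solution tracks tan(t) closely;
# just below t = pi/2 it has already grown very large.
print(abs(euler_tan(1.0) - math.tan(1.0)) < 0.01)  # True
print(euler_tan(1.57) > 50.0)                      # True
```

No fixed-step scheme can integrate past t = π/2 meaningfully; the solution itself ceases to exist there, which is exactly the point of the example.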

Example 13. Consider the initial value problem


dx/dt = 3x^{2/3}, x(2) = 0.

Does Theorem 1 imply the existence of a unique solution?

Solution. We have f(x) = 3x^{2/3} and ∂f/∂x = 2/x^{1/3}. Though the function f is continuous at the initial point (2, 0), its partial derivative ∂f/∂x is not continuous, or even defined, at x = 0. So the hypotheses of Theorem 1 are not satisfied, and we cannot use Theorem 1 to determine whether the IVP has or does not have a unique solution.

Remark. Why did the problem violate the hypotheses of the theorem? First
notice that the equation can be solved via integration:
dx/dt = 3x^{2/3}
=⇒ ∫ x^{−2/3} dx = ∫ 3 dt
=⇒ 3x^{1/3} = 3t + C₁
=⇒ x^{1/3} = t + C
=⇒ x = (t + C)³


Applying the initial condition results in the solution

x = (t − 2)³.
Also notice that x(t) ≡ 0 is another solution to the IVP since it satisfies the
equation. So the IVP has two solutions that both satisfy the initial condition
(see Figure 3.11) and therefore cannot have a unique solution as required by
Theorem 1.
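As a quick sanity check, one can verify numerically that both curves satisfy the differential equation. The helper names below are ours; the real cube root of negative arguments is handled by hand:

```python
import math

def rhs(x):
    """f(x) = 3 * x^(2/3), using the real cube root for negative x."""
    cbrt = math.copysign(abs(x) ** (1.0 / 3.0), x)
    return 3.0 * cbrt * cbrt

def x_cubic(t):
    """The solution x(t) = (t - 2)^3 found above."""
    return (t - 2.0) ** 3

def deriv(f, t, h=1e-6):
    """Central-difference approximation to f'(t)."""
    return (f(t + h) - f(t - h)) / (2.0 * h)

# Both x(t) = (t - 2)^3 and x(t) = 0 satisfy dx/dt = 3 x^(2/3),
# and both pass through the initial point (2, 0): non-uniqueness.
for t in (0.5, 2.0, 3.5):
    assert abs(deriv(x_cubic, t) - rhs(x_cubic(t))) < 1e-4
    assert abs(deriv(lambda s: 0.0, t) - rhs(0.0)) < 1e-12
print("both curves satisfy the ODE")
```

This is exactly the failure mode the theorem guards against: f is continuous at (2, 0), but f′ is unbounded there, and two distinct solution curves pass through the same initial point.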

Figure 3.11: Illustration of the solutions to dx/dt = 3x^{2/3}. Any rectangle encompassing (2, 0) contains the two solution curves.

3.7 Impossibility of Periodic Solutions


There are no periodic solutions to the first-order equation

ẋ = f(x).

The reason is that the dynamics of the system are dominated by fixed points. So far we have seen that trajectories either approach a fixed point or diverge to infinity (see Figure 3.12). These are the only possibilities on the real line: trajectories must increase monotonically, decrease monotonically, or remain fixed. In other words, the phase point (our imaginary fluid particle) never reverses direction.


Figure 3.12:

Theorem 2 (No periodic solutions). There are no periodic solutions to the first-order equation ẋ = f(x).


3.8 Computational and Numerical Methods


3.8.1 The Euler Method
Consider the first order differential equation
dx/dt = f(t, x), x(t0) = x0. (3.4)
Suppose we want to find the values of a solution x(t) of (3.4) at equally
spaced points
t0 , t1 , · · · , tn
with ti+1 − ti = h and x(t0 ) = x0 . Since f (t, x) is known, we can compute
x′(t0) = f(t0, x0).

The Taylor expansion of x(t) at t1 is

x(t1) = x(t0 + h) = x(t0) + hx′(t0) + (1/2)h²x″(t0) + · · ·
Neglecting higher-order terms, we get

x(t1) ≈ x(t0) + hx′(t0) = x(t0) + hf(t0, x0)

=⇒ x1 = x0 + hf(t0, x0). (3.5)

If h is sufficiently small, then (3.5) is a good approximation to x(t1), as shown in Figure 3.13. At t2 = t1 + h, an approximation x2 to x(t2) is given by

x(t2) = x(t1 + h) ≈ x(t1) + hf(t1, x1)

∴ x2 = x1 + hf(t1, x1). (3.6)

Repeating the process, we get the approximation xi+1 to x(ti+1) of a solution to (3.4) as

xi+1 = xi + hf(ti, xi), i = 0, 1, · · · , n − 1. (3.7)

The ODE in (3.4) may also be written as the integral equation

x(t) = x0 + ∫_{t0}^{t} f(s, x(s)) ds, (3.8)


Figure 3.13: Schematic for the Euler method.

leading to the recursive relation

xi+1 = xi + ∫_{ti}^{ti+1} f(s, x(s)) ds. (3.9)

Comparing (3.7) and (3.9), we see that

∫_{ti}^{ti+1} f(s, x(s)) ds ≈ hf(ti, xi). (3.10)

Equation (3.10) implies that the integral over each interval [ti, ti+1] is approximated by the area of a rectangle of width h and height f(ti, x(ti)), as illustrated in Figure 3.14.

Example 14. Use Euler’s method with step size h = 0.1 to approximate the
solution to the IVP
dx/dt = t√x, x(1) = 4

at the points t = 1.1, 1.2, 1.3, 1.4, and 1.5.

Solution. Here x0 = 4, t0 = 1, h = 0.1, and f(t, x) = t√x. Thus,

xi+1 = xi + hf(ti, xi)


Figure 3.14: Schematic for the Euler method.


=⇒ xi+1 = xi + (0.1) ti √xi

i = 0 : x1 = x0 + (0.1) t0 √x0
=⇒ x1 = 4 + (0.1)(1)√4
=⇒ x1 = 4 + (0.1)(2) = 4 + 0.2 = 4.2
∴ t1 = 1.1 =⇒ x1 = 4.2

i = 1 : t2 = t1 + h = 1.1 + 0.1 = 1.2
x2 = x1 + (0.1) t1 √x1
=⇒ x2 = 4.2 + (0.1)(1.1)√4.2
=⇒ x2 = 4.2 + (0.11)(2.0494) = 4.2 + 0.2254 = 4.4254
∴ t2 = 1.2 =⇒ x2 ≈ 4.425.

Similar steps can be followed to get the values of x(t) at the other points as
shown in Table 3.1. It is advisable to use software like MATLAB or Python
to code up the formula for computing the values.
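Following the suggestion above, here is one way the Euler iteration (3.7) might be coded in Python (function and variable names are our own). Applied to the worked example, it reproduces the Euler column of Table 3.1 and the exact-solution column for comparison:

```python
import math

def euler(f, t0, x0, h, n):
    """Return the Euler iterates x_0, ..., x_n for dx/dt = f(t, x)."""
    t, x = t0, x0
    values = [x]
    for _ in range(n):
        x = x + h * f(t, x)   # Euler step (3.7)
        t = t + h
        values.append(x)
    return values

f = lambda t, x: t * math.sqrt(x)
approx = euler(f, t0=1.0, x0=4.0, h=0.1, n=5)

# Exact solution x(t) = (t^2/4 + 7/4)^2, derived in the text.
exact = [((t * t) / 4.0 + 7.0 / 4.0) ** 2
         for t in (1.0, 1.1, 1.2, 1.3, 1.4, 1.5)]

# Euler column of Table 3.1: 4.0, 4.2, 4.425, 4.678, 4.959, 5.271
for xa, xe in zip(approx, exact):
    print(round(xa, 3), round(xe, 3))
```

Note how the Euler values drift steadily below the exact ones; a smaller h (or a higher-order method such as Heun's, below) reduces this error.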

Table 3.1 also shows the values of the exact solution for comparison. The
equation is solved below using the method of separation of variables.
dx/dt = t√x

i   ti    Euler’s method   Exact solution   Heun’s method
0   1.0   4.0              4.0              4.0
1   1.1   4.200            4.213            4.213
2   1.2   4.425            4.452            4.452
3   1.3   4.678            4.720            ?????
4   1.4   4.959            5.018            ?????
5   1.5   5.271            5.348            ?????

Table 3.1: Table of numerical and exact values.


=⇒ ∫ dx/√x = ∫ t dt =⇒ ∫ x^{−1/2} dx = ∫ t dt

=⇒ 2x^{1/2} = t²/2 + C.

Applying the initial condition x(1) = 4, we get

4 = 1/2 + C =⇒ C = 7/2

∴ x = (t²/4 + 7/4)².

3.8.2 Improved Euler’s Method (Heun’s Method)


This method uses the trapezoidal rule to approximate the integral in equation (3.9),

xi+1 = xi + ∫_{ti}^{ti+1} f(s, x(s)) ds,

instead of the rectangle rule in (3.10). So we let

∫_{ti}^{ti+1} f(s, x(s)) ds ≈ h [f(ti, xi) + f(ti+1, xi+1)] / 2
= (h/2)[f(ti, xi) + f(ti+1, xi+1)].
This estimates the area over each interval [ti, ti+1] by averaging the left- and right-endpoint values of the integrand. Thus, we get

xi+1 = xi + (h/2)[f(ti, xi) + f(ti+1, xi+1)].

Because xi+1 now appears on the right-hand side, we use the Euler step (3.7) to approximate it, giving

xi+1 = xi + (h/2)[f(ti, xi) + f(ti+1, xi + hf(ti, xi))]. (3.11)
Example 15. Use the improved Euler method with step size h = 0.1 to
approximate the solution to the IVP (i.e. example 14)
dx/dt = t√x, x(1) = 4

at the points t = 1.1, 1.2, 1.3, 1.4, and 1.5.
Solution. Here f(t, x) = t√x, x0 = 4, t0 = 1, h = 0.1, and ti+1 = ti + h. Then

xi+1 = xi + (h/2)[f(ti, xi) + f(ti+1, xi + hf(ti, xi))]
=⇒ xi+1 = xi + (h/2)[ti √xi + ti+1 √(xi + h ti √xi)]

i = 0 : t1 = 1.1,
x1 = x0 + (h/2)[t0 √x0 + t1 √(x0 + h t0 √x0)]
=⇒ x1 = 4 + (0.1/2)[(1)√4 + (1.1)√(4 + (0.1)(1)√4)]
∴ x1 = 4 + 0.2127 = 4.213.

i = 1 : t2 = 1.2,
x2 = x1 + (h/2)[t1 √x1 + t2 √(x1 + h t1 √x1)]
=⇒ x2 = 4.213 + (0.1/2)[(1.1)√4.213 + (1.2)√(4.213 + (0.1)(1.1)√4.213)]
∴ x2 = 4.213 + 0.2393 = 4.4523.
You may complete the remaining calculations and fill in the missing values in Table 3.1.
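A sketch of the Heun step (3.11) in Python (names are our own) reproduces x1 = 4.213 and x2 = 4.452 above and can be used to check the remaining entries of Table 3.1:

```python
import math

def heun(f, t0, x0, h, n):
    """Improved Euler (Heun) iterates for dx/dt = f(t, x), as in (3.11)."""
    t, x = t0, x0
    values = [x]
    for _ in range(n):
        predictor = x + h * f(t, x)                       # Euler predictor
        x = x + (h / 2.0) * (f(t, x) + f(t + h, predictor))  # trapezoidal corrector
        t = t + h
        values.append(x)
    return values

f = lambda t, x: t * math.sqrt(x)
for t, x in zip((1.0, 1.1, 1.2, 1.3, 1.4, 1.5),
                heun(f, t0=1.0, x0=4.0, h=0.1, n=5)):
    print(t, round(x, 3))
```

The Heun values stay much closer to the exact column of Table 3.1 than the plain Euler values do, reflecting the method's higher order of accuracy.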

3.8.3 Numerical Solvers


Apart from coding up a numerical method to solve differential equations, one may also use dedicated numerical packages or software, for example MATLAB, Python, Mathematica, or Maple.
