Dynamical Systems: Lecture Notes
© Dr. Joseph K. Ansong
Contents

1 Introduction
  1.1 Overview
    1.1.1 Historical Background
  1.2 Definitions & Terminology
    1.2.1 Types of dynamical systems
    1.2.2 Types of differential equations
    1.2.3 A general framework
    1.2.4 Nonautonomous Systems
Chapter 1
Introduction
1.1 Overview
Dynamics is the subject that deals with change, with systems that evolve in time.
Whether the system in question settles down to equilibrium, keeps repeating
in cycles, or does something more complicated, it is dynamics that we use
to analyze the behavior. Dynamical ideas appear in various places: in differential
equations, classical mechanics, chemical kinetics, population biology, and so on.
The two main types of dynamical systems are differential equations and
difference equations (or iterated maps). Differential equations describe
the evolution of systems in continuous time (see below), whereas iterated
maps arise in problems where time is discrete. Differential equations are
used much more widely in science and engineering, and we shall therefore
concentrate on them.
$$1000 = Ce^{-5\times 0} = C,$$
$$\implies M(t) = 1000e^{-5t}. \tag{1.3}$$
Thus, our simple model may now be used to predict the decline in the population
of mosquitoes in the village. A differential equation (e.g. equation (1.1))
with the initial value specified (e.g. M = 1000 at t = 0, as in the above
example) is called an Initial Value Problem (IVP).
$$\frac{dx}{dt} = f(t, x), \tag{1.5}$$
for a real-valued function f of two variables.
$$\begin{aligned}
\dot{x}_1 &= f_1(x_1, x_2, \cdots, x_n) \\
\dot{x}_2 &= f_2(x_1, x_2, \cdots, x_n) \\
&\;\;\vdots \\
\dot{x}_n &= f_n(x_1, x_2, \cdots, x_n)
\end{aligned} \tag{1.6}$$
where the overdots denote differentiation with respect to t, so that ẋi = dxi/dt.
The variables x1, ..., xn might represent concentrations of chemicals
in a reactor, populations of different species in an ecosystem, or the positions
and velocities of the planets in the solar system. The functions f1, ..., fn are
determined by the problem at hand. A system in which the f's do not explicitly
contain the independent variable t is said to be autonomous; otherwise
it is called nonautonomous. Every higher-order differential equation may be
written as a system of first-order equations of the form (1.6).
For example, the damped oscillator (1.4),
$$m\frac{d^2x}{dt^2} + b\frac{dx}{dt} + kx = 0 \quad\Longleftrightarrow\quad m\ddot{x} + b\dot{x} + kx = 0,$$
can be written in the form of (1.6). Here's the approach: define new variables
$$x_1 = x, \qquad x_2 = \dot{x}.$$
Then ẋ1 = x2. From these definitions and the differential equation, we get
$$\dot{x}_2 = \ddot{x} = -\frac{b}{m}\dot{x} - \frac{k}{m}x = -\frac{b}{m}x_2 - \frac{k}{m}x_1.$$
Thus, the equivalent system of the damped oscillator in the form of (1.6) is
given by
$$\begin{aligned}
\dot{x}_1 &= x_2 \\
\dot{x}_2 &= -\frac{b}{m}x_2 - \frac{k}{m}x_1.
\end{aligned}$$
This system is said to be linear, because all the xi on the right-hand side
appear to the first power only. Otherwise the system would be nonlinear.
Typical nonlinear terms are products, powers, and functions of the xi, such
as x1 x2, (x1)^3, or cos x2. For example, the swinging of a pendulum is governed
by the equation
$$\ddot{x} + \frac{g}{L}\sin x = 0,$$
where x is the angle of the pendulum from vertical, g is the acceleration due
to gravity, and L is the length of the pendulum. The equivalent system is
nonlinear:
$$\begin{aligned}
\dot{x}_1 &= x_2 \\
\dot{x}_2 &= -\frac{g}{L}\sin x_1.
\end{aligned}$$
Nonlinearity makes the pendulum equation very difficult to solve analytically.
The usual way around this is to invoke the small-angle approximation sin x ≈
x for x ≪ 1. This converts the problem to a linear one, which can then
be solved easily. But by restricting to small x, we’re throwing out some
of the physics, like motions where the pendulum whirls over the top. Is it
really necessary to make such drastic approximations? It turns out that the
pendulum equation can be solved analytically, in terms of elliptic functions.
But there ought to be an easier way. After all, the motion of the pendulum is
simple: at low energy, it swings back and forth, and at high energy it whirls
over the top. There should be some way of extracting this information from
the system directly. This is the sort of problem we’ll learn how to solve, using
geometric methods.
Here’s the rough idea. Suppose we happen to know a solution to the
pendulum system, for a particular initial condition. This solution would be
a pair of functions x1 (t) and x2 (t), representing the position and velocity of
the pendulum. If we construct an abstract space with coordinates (x1 , x2 ),
then the solution (x1 (t), x2 (t)) corresponds to a point moving along a curve
in this space (Figure).
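As a quick illustration (an addition to the notes, not part of the original text), the following Python sketch integrates the pendulum system with a simple forward Euler scheme and records the points (x1(t), x2(t)); plotting these points, for example with matplotlib, traces out exactly the kind of curve described above. The values of g, L, the initial condition, and the step size are arbitrary choices for illustration.

```python
import math

# Pendulum as a first-order system: x1 = angle, x2 = angular velocity
#   x1' = x2
#   x2' = -(g/L) * sin(x1)
g, L = 9.81, 1.0            # illustrative parameter values
dt, n_steps = 0.001, 5000   # small time step, 5 seconds of motion

x1, x2 = 0.5, 0.0           # released from 0.5 rad at rest (arbitrary choice)
trajectory = []             # points (x1, x2) tracing the curve in phase space

for _ in range(n_steps):
    trajectory.append((x1, x2))
    dx1 = x2
    dx2 = -(g / L) * math.sin(x1)
    x1, x2 = x1 + dt * dx1, x2 + dt * dx2   # forward Euler step

print(trajectory[0], trajectory[-1])        # first and last phase-space points
```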
This curve is called a trajectory, and the space is called the phase
space for the system. The phase space is completely filled with trajectories,
since each point can serve as an initial condition. Our goal is to run this
construction in reverse: given the system, we want to draw the trajectories, and
thereby extract information about the solutions. In many cases, geometric
reasoning will allow us to draw the trajectories without actually solving the
system!
Some terminology: the phase space for the general system (1.6) is the
space with coordinates x1 , · · · , xn . Because this space is n-dimensional, we
will refer to (1.6) as an n-dimensional system or an nth-order system.
Thus n represents the dimension of the phase space.
$$\begin{aligned}
\dot{x}_1 &= x_2 \\
\dot{x}_2 &= \frac{1}{m}\left(-kx_1 - bx_2 + F\cos x_3\right) \\
\dot{x}_3 &= 1
\end{aligned} \tag{1.7}$$
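System (1.7) appears to recast the periodically forced oscillator as an autonomous system by treating time itself as a third state variable x3, so that ẋ3 = 1. The sketch below is my addition for illustration only: the values of m, b, k, and F are arbitrary, and the function name forced_oscillator is hypothetical.

```python
import math

def forced_oscillator(state, m=1.0, b=0.2, k=1.0, F=0.5):
    """Right-hand side of the autonomous system (1.7).

    state = (x1, x2, x3): x1 is position, x2 is velocity, and x3 plays
    the role of time, so its derivative is identically 1.
    """
    x1, x2, x3 = state
    dx1 = x2
    dx2 = (-k * x1 - b * x2 + F * math.cos(x3)) / m
    dx3 = 1.0
    return (dx1, dx2, dx3)

# One forward Euler step starting from rest at the origin:
state, dt = (0.0, 0.0, 0.0), 0.01
state = tuple(s + dt * ds for s, ds in zip(state, forced_oscillator(state)))
print(state)
```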
Chapter 2
Method of Solution
To solve
$$\frac{dy}{dx} = h(y)g(x), \tag{2.2}$$
separate variables to get
$$\frac{dy}{h(y)} = g(x)\,dx, \qquad h(y) \neq 0.$$
Solution.
$$\frac{dy}{dx} = y(2 + \sin x),$$
$$\implies \int \frac{dy}{y} = \int (2 + \sin x)\,dx,$$
$$\implies \ln|y| = 2x - \cos x + C_1,$$
$$\implies y = e^{2x - \cos x + C_1} = e^{C_1} e^{2x - \cos x},$$
$$\therefore\; y = Ke^{2x - \cos x}.$$
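A quick symbolic check of this result (an addition, assuming SymPy is available) recovers the same general solution; SymPy's constant C1 plays the role of K.

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# dy/dx = y*(2 + sin x), the separable equation solved above
ode = sp.Eq(y(x).diff(x), y(x) * (2 + sp.sin(x)))
print(sp.dsolve(ode, y(x)))   # expect y(x) = C1*exp(2*x - cos(x))
```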
Example 3. Solve
(1 + x)dy − ydx = 0
Solution.
$$(1 + x)\,dy - y\,dx = 0$$
$$\implies \frac{dy}{y} = \frac{1}{1 + x}\,dx$$
$$\implies \int \frac{dy}{y} = \int \frac{1}{1 + x}\,dx$$
$$\implies \ln|y| = \ln|1 + x| + C_1$$
$$\implies \ln\left|\frac{y}{1 + x}\right| = C_1$$
$$\implies \frac{y}{1 + x} = \pm e^{C_1} = C$$
$$\implies y = C(1 + x),$$
where C is an arbitrary constant.
Example 4. Solve
$$\frac{dy}{dx} = y^2 - 9.$$
Solution.
$$\int \frac{dy}{y^2 - 9} = \int dx, \qquad y \neq \pm 3.$$
Now, we use partial fractions to write
$$\frac{1}{y^2 - 9} = \frac{1}{(y - 3)(y + 3)} \equiv \frac{A}{y - 3} + \frac{B}{y + 3},$$
which gives A = 1/6 and B = −1/6. Integrating then yields
$$\frac{1}{6}\ln|y - 3| - \frac{1}{6}\ln|y + 3| = x + C_1.$$
$$\ln\left|\frac{y - 3}{y + 3}\right| = 6x + C_2, \qquad C_2 = 6C_1,$$
$$\frac{y - 3}{y + 3} = \pm e^{C_2} e^{6x} = Ce^{6x}.$$
Thus,
$$y = 3\,\frac{1 + Ce^{6x}}{1 - Ce^{6x}}.$$
where µ(x) is given by equation (2.7). The function µ(x) is called the integrating
factor of the differential equation.
$$\implies e^{4x} y = \int e^{3x}\,dx = \frac{1}{3}e^{3x} + C \tag{2.9}$$
$$\implies y = \frac{1}{3}e^{-x} + Ce^{-4x}.$$
Applying the initial condition, we have
$$x = 0,\; y = \frac{4}{3} \implies \frac{4}{3} = \frac{1}{3} + C \implies C = 1,$$
$$\therefore\; y = \frac{1}{3}e^{-x} + e^{-4x}.$$
Example 6. Solve the differential equation
$$x\frac{dy}{dx} - 4y = x^6 e^x.$$
Solution. Re-write in standard form to get
$$\frac{dy}{dx} - \frac{4}{x}y = x^5 e^x, \qquad x \neq 0.$$
The integrating factor is
$$\mu = e^{\int -\frac{4}{x}\,dx} = e^{-4\ln|x|} = e^{\ln x^{-4}} = x^{-4},$$
$$\implies x^{-4}\frac{dy}{dx} - 4x^{-5}y = xe^x$$
$$\implies \frac{d}{dx}\left(x^{-4}y\right) = xe^x$$
$$\implies x^{-4}y = \int xe^x\,dx.$$
Using integration by parts, we let
$$I = \int xe^x\,dx \implies I = xe^x - e^x + C = e^x(x - 1) + C.$$
Thus
$$x^{-4}y = e^x(x - 1) + C,$$
$$\therefore\; y = x^4 e^x(x - 1) + Cx^4.$$
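As a sanity check (again an addition, assuming SymPy is available), dsolve applied to the original equation should return the same family of solutions, up to rearrangement of the constant.

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# x*dy/dx - 4*y = x**6 * exp(x), the linear equation of Example 6
ode = sp.Eq(x * y(x).diff(x) - 4 * y(x), x**6 * sp.exp(x))
sol = sp.dsolve(ode, y(x))
print(sp.simplify(sol.rhs))   # expect something equivalent to x**4*(C1 + (x - 1)*exp(x))
```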
EXERCISE 1. Solve
$$1)\;\; \frac{dy}{dx} - y = e^{3x}$$
$$2)\;\; \frac{dr}{d\theta} + r\tan\theta = \sec\theta$$
$$3)\;\; \frac{dy}{dx} = \frac{y}{x} + 2x + 1$$
Chapter 3
3.1 Introduction
Let's start with a simple single equation of the form
$$\dot{x} = f(x), \tag{3.1}$$
where x(t) is a real-valued function of time t, and f(x) is a smooth real-valued
function of x. The equation is said to be one-dimensional or first order.
$$\implies t = \int \csc x\,dx$$
$$\therefore\; t = -\ln|\csc x + \cot x| + C.$$
Suppose we have the initial condition that x = x0 at t = 0. Then we have
$$C = \ln|\csc x_0 + \cot x_0|,$$
Figure 3.1:
A more physical way to think about the vector field is this: imagine that fluid
is flowing steadily along the x-axis with a velocity that varies from place to
place, according to the rule ẋ = sin x. As shown in Figure 3.1, the flow is
to the right when ẋ > 0 and to the left when ẋ < 0. At points where ẋ = 0,
there is no flow; such points are therefore called fixed points. You can
see that there are two kinds of fixed points in Figure 3.1: solid black dots
represent stable fixed points (often called attractors or sinks, because the
flow is toward them) and open circles represent unstable fixed points (also
known as sources or repellers).
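The content of Figure 3.1 can be checked numerically. The short Python sketch below (an illustrative addition, not from the notes) classifies the fixed points x∗ = nπ of ẋ = sin x by testing the sign of f on either side, and prints the flow direction on a coarse grid of x values.

```python
import math

f = math.sin   # right-hand side of x' = sin(x)

# A fixed point is stable when the flow points toward it from both sides,
# i.e. f > 0 just to its left and f < 0 just to its right.
eps = 1e-3
for n in range(-2, 3):                     # fixed points n*pi in [-2*pi, 2*pi]
    x_star = n * math.pi
    stable = f(x_star - eps) > 0 and f(x_star + eps) < 0
    print(f"x* = {n}*pi : {'stable' if stable else 'unstable'}")

# Flow direction on a coarse grid: '>' means flow to the right, '<' to the left.
for i in range(-8, 9):
    x = i * math.pi / 4
    fx = f(x)
    arrow = ">" if fx > 1e-9 else ("<" if fx < -1e-9 else "o (fixed point)")
    print(f"x = {x:6.2f} : {arrow}")
```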
This graphical illustration now gives us more insight and understanding
of the solutions to ẋ = sin x. We start the imaginary particle at x0 and
watch how it is carried along the flow. We can now answer the questions
posed above:
2. In the same way, Figure 3.1 shows that for any initial condition x0 ,
if ẋ > 0 initially, the particle heads to the right and asymptotically
approaches the nearest stable fixed point. Similarly, if ẋ < 0 initially,
the particle approaches the nearest stable fixed point to its left. If
ẋ = 0, then x remains constant. The qualitative form of the solution
for any initial condition is sketched in Figure 3.3.
Figure 3.2:
Figure 3.3:
(a) The flow is to the right where f (x) > 0 and to the left where f (x) < 0.
(c) As time goes on, the phase point moves along the x-axis according to
Figure 3.4:
(d) A picture like Figure 3.4, which shows all the qualitatively different
trajectories of the system, is called a phase portrait.
$$\dot{x} = x^2 - 1,$$
$$(x^*)^2 - 1 = 0 \implies x^* = \pm 1.$$
To determine stability, we plot f(x) = x² − 1 and then sketch the vector field
(see Figure 3.5). The flow is to the right where x² − 1 > 0 and to the left
where x² − 1 < 0. Hence x∗ = −1 is stable and x∗ = 1 is unstable.
Figure 3.5:
ẋ = x − cos x,
the vector field, we notice that when the line lies above the cosine curve, we
have x > cos x and so ẋ > 0: the flow is to the right. Also, the flow is to
the left where the line is below the cosine curve. Hence x∗ is the only fixed
point, and it is unstable. Note that we can classify the stability of x∗,
even though we don't have a formula for x∗ itself!
Figure 3.6:
$$\dot{N} = rN,$$
where N(t) is the population at time t, and r > 0 is the growth rate. This
model predicts exponential growth:
$$N(t) = N_0 e^{rt}.$$
Figure 3.7:
Figure 3.8:
it crosses N = K/2, where the parabola in Figure 3.8 reaches its maximum.
Then the phase point slows down and eventually creeps toward N = K.
In biological terms, this means that the population initially grows in an
accelerating fashion, and the graph of N (t) is concave up. But after N =
K/2, the derivative Ṅ begins to decrease, and so N (t) is concave down as
it asymptotes to the horizontal line N = K (Figure 3.9). Thus the graph of
N (t) is S-shaped or sigmoid for N0 < K/2.
Something qualitatively different occurs if the initial condition N0 lies
between K/2 and K; now the solutions are decelerating from the start. Hence
these solutions are concave down for all t. If the population initially exceeds
the carrying capacity (N0 > K), then N (t) decreases toward N = K and
is concave up. Finally, if N0 = 0 or N0 = K, then the population stays
constant.
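These qualitative conclusions are easy to confirm numerically. The sketch below is an addition, assuming the standard logistic form Ṅ = rN(1 − N/K), which is consistent with the parabola peaking at N = K/2 described above; the values r = 1 and K = 100 are arbitrary. It integrates the equation from the initial conditions discussed and shows every positive population approaching the carrying capacity.

```python
def logistic_rhs(N, r=1.0, K=100.0):
    """Right-hand side of the logistic equation N' = r*N*(1 - N/K)."""
    return r * N * (1.0 - N / K)

def integrate(N0, t_end=10.0, dt=0.01):
    """Crude forward Euler integration; returns the population at t_end."""
    N = N0
    for _ in range(int(t_end / dt)):
        N += dt * logistic_rhs(N)
    return N

# Initial conditions illustrating the cases discussed above (K = 100):
for N0 in [10.0, 60.0, 150.0, 0.0, 100.0]:
    print(f"N0 = {N0:6.1f} -> N(10) = {integrate(N0):8.3f}")
# Every positive initial population approaches K = 100; N0 = 0 stays at 0.
```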
Figure 3.9:
Linearization
Let x∗ be a fixed point, and let
η(t) = x(t) − x∗
be a small perturbation away from x∗ . To see whether the perturbation grows
or decays, we derive a differential equation for η. Differentiating the above
equation gives
$$\dot{\eta} = \frac{d}{dt}(x - x^*) = \dot{x},$$
since x∗ is a constant. Thus
$$\dot{\eta} = \dot{x} = f(x) = f(x^* + \eta).$$
Recall that the Taylor series expansion of any function g(x) about a point, say
a, is given by
$$g(x) = g(a) + \frac{g'(a)}{1!}(x - a) + \frac{g''(a)}{2!}(x - a)^2 + \frac{g'''(a)}{3!}(x - a)^3 + \cdots$$
Thus, expanding f(x∗ + η) about x∗ and using f(x∗) = 0, we have
$$\dot{\eta} = \eta f'(x^*) + O(\eta^2).$$
If f'(x∗) ≠ 0, the O(η²) terms are negligible and we may write the approximation
$$\dot{\eta} \approx \eta f'(x^*).$$
This is a linear equation in η, and is called the linearization about x∗.
Separating variables and solving η̇ = ηf'(x∗) gives
$$\eta(t) = Ce^{f'(x^*)t},$$
where C is an arbitrary constant and f'(x∗) is also constant. This shows that the
perturbation η(t) grows exponentially if f'(x∗) > 0 and decays if f'(x∗) < 0.
If f'(x∗) = 0, the O(η²) terms are not negligible and a nonlinear analysis is
needed to determine stability; this will be discussed shortly.
Implications of linearization
An important conclusion from the linearization above is that the slope f'(x∗)
at the fixed point determines its stability. This is consistent with our earlier
analysis. In other words, an alternative approach to determining the stability
of a fixed point is to find f'(x∗), as opposed to first graphing f(x) and sketching
the vector field. An additional feature of f'(x∗) is that it gives
a measure of how stable a fixed point is: the degree of stability is determined
by the magnitude of f'(x∗).
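As a concrete (added) illustration of this criterion, the sketch below revisits the earlier example ẋ = x² − 1: it estimates f'(x∗) at the fixed points x∗ = ±1 with a central difference and classifies them, matching the graphical result found before.

```python
def f(x):
    return x**2 - 1.0   # right-hand side of x' = x**2 - 1

def derivative(g, x, h=1e-6):
    """Central-difference approximation of g'(x)."""
    return (g(x + h) - g(x - h)) / (2.0 * h)

for x_star in (-1.0, 1.0):            # fixed points of x' = x**2 - 1
    slope = derivative(f, x_star)     # f'(x*) drives the linearization eta' = f'(x*) * eta
    kind = "stable" if slope < 0 else "unstable" if slope > 0 else "inconclusive"
    print(f"x* = {x_star:+.0f}: f'(x*) = {slope:+.3f} -> {kind}")
# Expected: f'(-1) = -2 (stable), f'(+1) = +2 (unstable).
```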
$$f(x) = \sin x = 0 \implies x^* = n\pi,$$
where n is an integer. Then
$$f'(x^*) = \cos n\pi = \begin{cases} 1, & n \text{ even} \\ -1, & n \text{ odd}. \end{cases}$$
Example 11. What can be said about the stability of a fixed point when
f'(x∗) = 0? Consider the following problems:
(a) ẋ = −x³
(b) ẋ = x³
(c) ẋ = x²
(d) ẋ = 0
Solution.
(a) x∗ = 0 is stable
(b) x∗ = 0 is unstable
Theorem 1 (Existence and Uniqueness). Consider the initial value problem
$$\dot{x} = f(x), \qquad x_0 = x(0).$$
Suppose that f(x) and f'(x) are continuous on an open interval R of the x-axis,
and suppose that x0 is a point in R. Then the initial value problem has
a solution x(t) on some time interval (−τ, τ) about t = 0, and the solution
is unique.
Figure 3.10:
Explanation of theorem:
The theorem says that if f (x) is smooth enough, then solutions exist and are
unique. Even so, there’s no guarantee that solutions exist forever, as shown
by some examples. Again, the theorem does not say that the solutions exist
for all time; they are only guaranteed to exist in a (possibly very short) time
interval around t = 0.
Example 12. Discuss the existence and uniqueness of solutions to the initial
value problem ẋ = 1 + x², x(0) = x0. Do solutions exist for all time?
$$\implies \tan^{-1} x = t + C.$$
Applying the initial condition x(0) = 0 yields C = 0. Thus, the solution is
given by
$$x(t) = \tan t.$$
Observe that this solution exists for −π/2 < t < π/2 since x(t) → ±∞ as
t → ±π/2. Outside this interval, the problem has no solution for the initial
value x0 = 0. Why?
Also observe that the system has solutions that reach infinity in finite
time. This occurrence is referred to as blow-up. It has physical and numerical
importance in many models. That is one reason why it is sometimes necessary
to determine the existence and uniqueness of solutions before actually
solving equations.
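The blow-up is also visible numerically. This small sketch (my addition) integrates ẋ = 1 + x² from x(0) = 0 with forward Euler and stops once the solution exceeds a large threshold; the escape time lands close to π/2 ≈ 1.5708, in agreement with x(t) = tan t.

```python
import math

x, t, dt = 0.0, 0.0, 1e-4    # initial condition x(0) = 0 and a small time step

# Integrate x' = 1 + x**2 until the solution exceeds a large threshold.
while abs(x) < 1e6 and t < 3.0:
    x += dt * (1.0 + x * x)  # forward Euler step
    t += dt

print(f"numerical escape time ~ {t:.4f},  pi/2 = {math.pi / 2:.4f}")
```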
Solution. We have f(x) = 3x^{2/3} and ∂f/∂x = 2/x^{1/3}. Though the function
f is continuous at the initial point (2, 0), its partial derivative ∂f/∂x is
not continuous or even defined at x = 0. So the hypotheses of Theorem 1
are not satisfied, and therefore we cannot use Theorem 1 to determine whether
or not the IVP has a unique solution.
Remark. Why did the problem violate the hypotheses of the theorem? First
notice that the equation can be solved via integration:
$$\frac{dx}{dt} = 3x^{2/3}$$
$$\implies \int x^{-2/3}\,dx = \int 3\,dt$$
$$\implies 3x^{1/3} = 3t + C_1$$
$$\implies x^{1/3} = t + C$$
$$\implies x = (t + C)^3.$$
Figure 3.12:
$$\implies x_1 = x_0 + hf(t_0, x_0). \tag{3.5}$$
Equation (3.10) implies that the integral over each interval [t_i, t_{i+1}] is
approximated by the area of a rectangle of width h and height f(t_i, x(t_i)), as
illustrated in Figure 3.14.
Example 14. Use Euler's method with step size h = 0.1 to approximate the
solution to the IVP
$$\frac{dx}{dt} = t\sqrt{x}, \qquad x(1) = 4,$$
at the points t = 1.1, 1.2, 1.3, 1.4, and 1.5.
Solution.
$$x_0 = 4, \quad t_0 = 1, \quad h = 0.1, \qquad f(t, x) = t\sqrt{x}.$$
Thus,
$$x_{i+1} = x_i + hf(t_i, x_i)$$
$$\implies x_{i+1} = x_i + (0.1)\,t_i\sqrt{x_i}$$
$$i = 0: \quad x_1 = x_0 + (0.1)\,t_0\sqrt{x_0}$$
$$\implies x_1 = 4 + (0.1)(1)\sqrt{4} = 4 + (0.1)(2) = 4 + 0.2 = 4.2$$
$$\therefore\; t_1 = 1.1 \implies x_1 = 4.2$$
Similar steps can be followed to get the values of x(t) at the other points as
shown in Table 3.1. It is advisable to use software like MATLAB or Python
to code up the formula for computing the values.
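For instance, the following Python sketch (an addition, not part of the notes) implements the Euler update x_{i+1} = x_i + h f(t_i, x_i) for this IVP and prints the approximations at t = 1.1, ..., 1.5 alongside the exact solution x(t) = (t²/4 + 7/4)², which is derived below.

```python
import math

def f(t, x):
    return t * math.sqrt(x)                  # right-hand side of dx/dt = t*sqrt(x)

def exact(t):
    return (t**2 / 4.0 + 7.0 / 4.0) ** 2     # exact solution for x(1) = 4 (derived below)

t, x, h = 1.0, 4.0, 0.1
print(f"t = {t:.1f}  Euler x = {x:.4f}  exact x = {exact(t):.4f}")
for _ in range(5):                           # steps to t = 1.1, ..., 1.5
    x = x + h * f(t, x)                      # Euler update
    t = t + h
    print(f"t = {t:.1f}  Euler x = {x:.4f}  exact x = {exact(t):.4f}")
```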
Table 3.1 also shows the values of the exact solution for comparison. The
equation is solved below using the method of separation of variables.
$$\frac{dx}{dt} = t\sqrt{x}$$
$$4 = \frac{1}{2} + C \implies C = \frac{7}{2},$$
$$\therefore\; x = \left(\frac{1}{4}t^2 + \frac{7}{4}\right)^2.$$
$$\approx \frac{h}{2}\left[f(t_i, x_i) + f(t_{i+1}, x_{i+1})\right].$$
This estimates the area over each interval [t_i, t_{i+1}] by averaging the left and right
Riemann sums. Thus, we get
$$x_{i+1} = x_i + \frac{h}{2}\left[f(t_i, x_i) + f(t_{i+1}, x_{i+1})\right].$$
Because x_{i+1} now appears on the right-hand side, we use (3.7) to approximate
it and get
$$x_{i+1} = x_i + \frac{h}{2}\left[f(t_i, x_i) + f(t_{i+1}, x_i + hf(t_i, x_i))\right]. \tag{3.11}$$
Example 15. Use the improved Euler method with step size h = 0.1 to
approximate the solution to the IVP (i.e. Example 14)
$$\frac{dx}{dt} = t\sqrt{x}, \qquad x(1) = 4,$$
at the points t = 1.1, 1.2, 1.3, 1.4, and 1.5.
Solution.
$$f(t, x) = t\sqrt{x}, \qquad x_0 = 4, \quad t_0 = 1, \quad h = 0.1, \quad t_{i+1} = t_i + h,$$
$$x_{i+1} = x_i + \frac{h}{2}\left[f(t_i, x_i) + f(t_{i+1}, x_i + hf(t_i, x_i))\right]$$
$$\implies x_{i+1} = x_i + \frac{h}{2}\left[t_i\sqrt{x_i} + t_{i+1}\sqrt{x_i + ht_i\sqrt{x_i}}\right]$$
$$i = 0: \quad t_1 = 1.1, \quad x_1 = x_0 + \frac{h}{2}\left[t_0\sqrt{x_0} + t_1\sqrt{x_0 + ht_0\sqrt{x_0}}\right]$$
$$\implies x_1 = 4 + \frac{0.1}{2}\left[(1)\sqrt{4} + (1.1)\sqrt{4 + (0.1)(1)\sqrt{4}}\right]$$
$$\therefore\; x_1 = 4 + 0.2127 = 4.213.$$
$$i = 1: \quad t_2 = 1.2, \quad x_2 = x_1 + \frac{h}{2}\left[t_1\sqrt{x_1} + t_2\sqrt{x_1 + ht_1\sqrt{x_1}}\right]$$
$$\implies x_2 = 4.213 + \frac{0.1}{2}\left[(1.1)\sqrt{4.213} + (1.2)\sqrt{4.213 + (0.1)(1.1)\sqrt{4.213}}\right]$$
$$\therefore\; x_2 = 4.213 + 0.2393 = 4.4523.$$
You may complete the rest of the calculations and fill in the missing values
in Table 3.1.
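The remaining entries of Table 3.1 can be generated with a few lines of Python. The sketch below (an addition) implements the improved Euler update (3.11); its first two steps reproduce x1 ≈ 4.213 and x2 ≈ 4.452 computed above.

```python
import math

def f(t, x):
    return t * math.sqrt(x)   # right-hand side of dx/dt = t*sqrt(x)

def improved_euler(f, t0, x0, h, n):
    """Improved Euler (Heun) iteration, formula (3.11)."""
    t, x = t0, x0
    values = [(t, x)]
    for _ in range(n):
        predictor = x + h * f(t, x)                        # plain Euler guess for x_{i+1}
        x = x + (h / 2.0) * (f(t, x) + f(t + h, predictor))
        t = t + h
        values.append((t, x))
    return values

for t, x in improved_euler(f, t0=1.0, x0=4.0, h=0.1, n=5):
    exact = (t**2 / 4.0 + 7.0 / 4.0) ** 2
    print(f"t = {t:.1f}  improved Euler x = {x:.4f}  exact x = {exact:.4f}")
```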