2302 MTL102 ODE Part
1. Introduction: ODEs
A differential equation is an equation involving an unknown function and its derivatives. In general the unknown function may depend on several variables and the equation may include various partial derivatives.
Definition 1.1. A differential equation involving only ordinary derivatives is called an ordinary differential equation (ODE).
• The most general ODE has the form
F(x, y, y', . . . , y^(n)) = 0 ,  (1.1)
where F is a given function of (n + 2) variables and y = y(x) is an unknown function of a real variable x.
• The maximum order n of the derivative y^(n) appearing in (1.1) is called the order of the ODE.
Applications: Differential equations play a central role not only in mathematics but also in almost all areas of science and engineering, economics, and the social sciences:
• Flow of current in a conductor: Consider an RC circuit with resistance R and capacitance C, with no external source. Let x(t) be the capacitor voltage and I(t) the current circulating in the circuit. Then, according to Kirchhoff's law, R I(t) + x(t) = 0. Moreover, the constitutive law of the capacitor yields I(t) = C dx(t)/dt. Hence we get the first order differential equation
x'(t) + x(t)/(RC) = 0 .
• Population dynamics: Let x(t) be the number of individuals of a population at time t, b the birth rate, and d the death rate of the population. Then, according to the simple "Malthus model", the growth rate of the population is proportional to the number of newborn individuals minus the number of deaths. Hence we get the first order ODE
x'(t) = k x(t) , where k = b − d .
• An example of a second order equation is y'' + y = 0, which arises naturally in the study of electrical and mechanical oscillations.
• Motion of a missile; the behaviour of a mixture; the spread of disease; etc.
Definition 1.2. A function y : I → R, where I ⊂ R is an open interval, is said to be a solution of the n-th order ODE (1.1) if it is n-times differentiable and satisfies (1.1) for all x ∈ I.
Example 1.1. Consider the ODE y' = y. Let us first find all positive solutions. Note that y'/y = (ln y)' and therefore we obtain (ln y)' = 1. This implies that
ln y = x + C  ⟹  y = C1 e^x , where C1 = e^C > 0 , C ∈ R .
If y(x) < 0 for all x, then using y'/y = (ln(−y))' we obtain y = −C1 e^x, where C1 > 0. Combining these two cases (together with the trivial solution y ≡ 0), any solution y(x) has the form
y(x) = C e^x , C ∈ R . □
Example 2.1. Find the general solution of x'(t) + 4t x(t) = 8t.
Here α(t) = 4t, and hence A(t) = 2t^2. Therefore, using the method of variation of parameters, the general solution of the given ODE is given by
x(t) = e^{−2t^2} [ C + ∫ 8t e^{2t^2} dt ] = e^{−2t^2} [ C + 2 ∫ (d/dt) e^{2t^2} dt ] = 2 + C e^{−2t^2} .
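The computation can be checked symbolically. The following sketch is purely illustrative (it assumes the sympy library; none of this code comes from the notes) and should reproduce the general solution x(t) = 2 + C e^{−2t^2}:

import sympy as sp

t = sp.symbols('t')
x = sp.Function('x')
# solve x'(t) + 4 t x(t) = 8 t symbolically
sol = sp.dsolve(sp.Eq(x(t).diff(t) + 4*t*x(t), 8*t), x(t))
print(sol)  # expected: Eq(x(t), C1*exp(-2*t**2) + 2)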
1st order linear IVP: Suppose we are interested in solving the initial value problem
y' + α(x) y = b(x) , y(x0) = y0 , where x0 ∈ I .
From the previous calculation, we see that u' = b e^A with A' = α, where u = y e^A. Integrating from x0 to x and taking A(x) = ∫_{x0}^x α(s) ds, we have
u(x) − u(x0) = ∫_{x0}^x b(s) e^{∫_{x0}^s α(r) dr} ds .
Example 2.2. Find the solution of x'(t) + k x(t) = h, x(0) = x0, where h and k are constants. This equation arises in the RC circuit when there is a generator of constant voltage h.
Solution: Using (2.3), we get
x(t) = e^{−kt} [ x0 + ∫_0^t h e^{ks} ds ] = e^{−kt} [ x0 + h ( e^{kt} − 1 )/k ] = ( x0 − h/k ) e^{−kt} + h/k .
Notice that x(t) → h/k as t → ∞, from below if x0 < h/k and from above if x0 > h/k. Moreover, the capacitor voltage x(t) does not decay to 0 but tends to the constant voltage h/k.
2.2. General 1st order ODEs. We consider the general first order ODE of the form
y' = f̄(x, y) ,
where f̄ is some continuous function. We have seen that if f̄(x, y) = −α(x) y + b(x) for some α, b ∈ C(I), then the above ODE has a solution in explicit form (cf. the method of variation of parameters).
2.2.1. Equations with variables separated: Let us consider a separable ODE
y' = f(x) g(y) ,  (2.4)
where f and g are given continuous functions with f(x) ≠ 0. If y = k is any zero of g, then y(x) = k is a constant solution of (2.4). On the other hand, if y(x) = k is a constant solution, then g(k) = 0. Therefore y(x) = k is a constant solution if and only if g(k) = 0.
• y(x) = k is called an equilibrium solution if and only if g(k) = 0.
Hence if y(x) is a non-constant solution, then g(y(x)) ≠ 0 for any x. Any separable equation can be solved by means of the following theorem.
Theorem 2.3 (The method of separation of variables). Let f and g be continuous functions on some intervals I and J respectively such that g ≠ 0 on J. Let F resp. G be a primitive function of f resp. 1/g on I resp. J. Then a function y defined on some subinterval of I solves the equation (2.4) if and only if
G(y(x)) = F(x) + C ,  (2.5)
for all x in the domain of y, where C is a real constant.
Proof. Let y(x) solve (2.4). Since F' = f and G' = 1/g, the equation (2.4) is equivalent to
y' G'(y) = F'(x)  ⟹  ( G(y(x)) )' = F'(x)  ⟹  G(y(x)) = F(x) + C .
Conversely, if a function y satisfies (2.5) and is known to be differentiable in its domain, then differentiating (2.5) in x, we obtain y' G'(y) = F'(x). Arguing backwards, we arrive at (2.4).
Let us show that y is differentiable. Since g(y) ≠ 0, either g(y) > 0 or g(y) < 0 in the whole domain. Then G is either strictly increasing or strictly decreasing in the whole domain. In both cases, the inverse function G^{−1} is well-defined and differentiable. It follows from (2.5) that
y(x) = G^{−1}( F(x) + C ) .
Since both F and G^{−1} are differentiable, we conclude that y is differentiable. □
Example 2.3. Consider the ODE
y'(x) = y(x) , x ∈ R , y(x) > 0 .
Then f(x) ≡ 1 and g(y) = y ≠ 0. Note that F(x) = x and G(y) = log(y). The equation (2.5) becomes
log(y) = x + C  ⟹  y(x) = C e^x ,
where C is any positive constant.
Example 2.4. Consider the equation
y' = √|y| ,
which is defined for all y ∈ R. Note that y = 0 is a trivial solution. In the domains y > 0 and y < 0, the equation can be solved using separation of variables. In the domain y > 0, we obtain
∫ dy/√y = ∫ dx  ⟹  2√y = x + C  ⟹  y = (1/4)(x + C)^2 .
Since y > 0, we must have x > −C, which follows from the second expression. Similarly, in the domain y < 0, we have
y = −(1/4)(x + C)^2 , x < −C .
We see that the integral curves in the domain y > 0 touch the curve y = 0 and so do the integral curves in the domain y < 0.
Example 2.5. The logistic equation
y'(x) = y(x) ( α − β y(x) ) , α, β > 0 .
In this model, y(x) represents the population of some species and therefore y(x) ≥ 0. Note that y(x) = 0 and y(x) = α/β are two equilibrium solutions. Such solutions play an important role in analyzing the trajectories of solutions in general. In order to solve the logistic equation, we separate the variables and obtain (assuming that y ≠ 0 and y ≠ α/β)
dy/( y(α − βy) ) = dx  ⟹  (1/α) [ dy/y + β dy/(α − βy) ] = dx  ⟹  (1/α) log|y| − (1/α) log|α − βy| = x + c
⟹  (1/α) log| y/(α − βy) | = x + c  ⟹  | y/(α − βy) | = k e^{αx} ,
where k = e^{cα}. This is a general solution in implicit form. To solve for y, consider the case 0 < y(x) < α/β. Then y(x) = α k e^{αx} / ( 1 + β k e^{αx} ). For the case y > α/β, the solution takes the form y(x) = α k e^{αx} / ( β k e^{αx} − 1 ). In any case, lim_{x→∞} y(x) = α/β. This shows that all non-constant solutions approach the equilibrium solution y(x) = α/β as x → ∞, some from above the line y = α/β and others from below.
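To see this behaviour concretely, one can integrate the logistic equation numerically. A minimal sketch, assuming scipy and numpy are available and choosing the illustrative values α = 2, β = 1 (so α/β = 2):

import numpy as np
from scipy.integrate import solve_ivp

a, b = 2.0, 1.0                      # alpha, beta (assumed values)
f = lambda x, y: y * (a - b * y)     # logistic right-hand side
for y0 in (0.1, 1.0, 3.0):           # initial data below and above alpha/beta
    sol = solve_ivp(f, (0, 8), [y0], rtol=1e-8)
    print(y0, sol.y[0, -1])          # each final value is close to 2.0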
2.2.2. Exact equations: Suppose that the first order equation y' = f̄(x, y) is written in the form
M(x, y) + N(x, y) y' = 0 ,  (2.6)
where M, N are real-valued functions defined for real x, y on some domain Ω.
Definition 2.2. We say that the equation (2.6) is exact in Ω if there exists a function F having continuous first partial derivatives such that
∂F/∂x = M , ∂F/∂y = N .  (2.7)
Theorem 2.4. Suppose the equation (2.6) is exact in a domain Ω ⊂ R^2, i.e., there exists F such that ∂F/∂x = M, ∂F/∂y = N in Ω. Then every continuously differentiable function φ defined implicitly by a relation
F(x, φ(x)) = c (c = constant)
is a solution of (2.6), and every solution of (2.6) whose graph lies in Ω arises in this way.
Proof. Under the assumptions of the theorem, equation (2.6) becomes
∂F/∂x (x, y) + ∂F/∂y (x, y) y' = 0 .
If φ is any solution on some interval I, then
∂F/∂x (x, φ(x)) + ∂F/∂y (x, φ(x)) φ'(x) = 0 , ∀ x ∈ I .  (2.8)
If ψ(x) = F(x, φ(x)), then from the above equation we see that ψ'(x) = 0, and hence F(x, φ(x)) = c, where c is some constant. Thus the solution φ must be a function which is given implicitly by the relation F(x, φ(x)) = c. Conversely, if φ is a differentiable function on some interval I defined implicitly by the relation F(x, y) = c, then
F(x, φ(x)) = c , ∀ x ∈ I .
Differentiation along with the property ∂F/∂x = M, ∂F/∂y = N yields that φ is a solution of (2.6). This completes the proof. □
We will say that F (x, y) = c is the general solution of (2.6).
Example 2.6. Consider the equation
x − (y^4 − 1) y'(x) = 0 .
Here M = x and N = 1 − y^4. Define F(x, y) = x^2/2 + y − y^5/5. Then the above equation is exact. Hence the solution is given by
F(x, y) = c  ⟹  2y^5 − 10y = 5x^2 + c .
Example 2.7. Find the general solution of the ODE 2y e^{2x} + 2x cos(y) + ( e^{2x} − x^2 sin(y) ) y' = 0.
Solution: The equation is of the form M(x, y) + N(x, y) y' = 0 with M(x, y) = 2y e^{2x} + 2x cos(y) and N(x, y) = e^{2x} − x^2 sin(y). Define F(x, y) = y e^{2x} + x^2 cos(y). Then F has continuous first partial derivatives on R^2 and ∂F/∂x (x, y) = M(x, y), ∂F/∂y (x, y) = N(x, y). Hence the given ODE is exact. Thus, the general solution is given by the formula
y e^{2x} + x^2 cos(y) = c , c ∈ R .
How do we recognize when an equation is exact? The following theorem gives a necessary and sufficient condition.
Theorem 2.5. Let M, N be two real-valued functions which have continuous first partial derivatives on some rectangle
R := { (x, y) ∈ R^2 : |x − x0| ≤ a , |y − y0| ≤ b } .
Then the equation (2.6) is exact in R if and only if
∂M/∂y = ∂N/∂x  (2.9)
in R.
Proof. It is easy to see that if the equation (2.6) is exact, then (2.9) holds. Now suppose that (2.9) holds in the rectangle R. We want to find a function F having continuous first partial derivatives such that ∂F/∂x = M and ∂F/∂y = N. If we had such a function, then
F(x, y) − F(x0, y0) = ∫_{x0}^x ∂F/∂x (s, y) ds + ∫_{y0}^y ∂F/∂y (x0, t) dt = ∫_{x0}^x M(s, y) ds + ∫_{y0}^y N(x0, t) dt .
This suggests that we define (normalizing F(x0, y0) = 0)
F(x, y) = ∫_{x0}^x M(s, y) ds + ∫_{y0}^y N(x0, t) dt ,  (2.11)
and claim that
F(x, y) = ∫_{x0}^x M(s, y0) ds + ∫_{y0}^y N(x, t) dt .  (2.12)
It is clear from (2.11) that ∂F/∂x (x, y) = M(x, y), and from (2.12) that ∂F/∂y (x, y) = N(x, y) for all (x, y) in R. Therefore, we need to show that (2.12) is valid, where F is defined by (2.11). Now, by using the condition (2.9), we have
F(x, y) − [ ∫_{x0}^x M(s, y0) ds + ∫_{y0}^y N(x, t) dt ]
= ∫_{x0}^x [ M(s, y) − M(s, y0) ] ds − ∫_{y0}^y [ N(x, t) − N(x0, t) ] dt
= ∫_{x0}^x ∫_{y0}^y [ ∂M/∂y (s, t) − ∂N/∂x (s, t) ] dt ds = 0 .
This completes the proof. □
Example 2.8. Find the general solution of the ODE
2xy dx + (x^2 + y^2) dy = 0 .
Here, M(x, y) = 2xy and N(x, y) = x^2 + y^2. Note that ∂M/∂y = ∂N/∂x = 2x. Thus, the equation is exact. Define the function F by (taking (x0, y0) = (0, 0))
F(x, y) = ∫_0^x M(s, y) ds + ∫_0^y N(x0, t) dt = ∫_0^x 2sy ds + ∫_0^y t^2 dt = y x^2 + y^3/3 .
Therefore, the general solution is given by the formula y x^2 + y^3/3 = c, where c is an arbitrary real constant.
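The exactness test and the construction of F can be reproduced symbolically. A small sketch (assuming sympy; the names M, N, F mirror the example):

import sympy as sp

x, y = sp.symbols('x y')
M = 2*x*y
N = x**2 + y**2
# necessary and sufficient condition (2.9): M_y - N_x = 0
print(sp.simplify(sp.diff(M, y) - sp.diff(N, x)))   # 0, so the equation is exact
# F from the proof of Theorem 2.5 with (x0, y0) = (0, 0)
F = sp.integrate(M, (x, 0, x)) + sp.integrate(N.subs(x, 0), (y, 0, y))
print(F)                                            # x**2*y + y**3/3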
To find F̃, we know that ∂F̃/∂x = 2x y^3 + 2x, and hence
F̃(x, y) = x^2 y^3 + x^2 + f(y) ,
where f is independent of x. Again, ∂F̃/∂y = Ñ gives
Our aim is to show that on some interval I containing x0 , there is a solution of the above
initial value problem. Let us first show the following.
Theorem 3.1. A function φ is a solution of (3.1) on some interval I if and only if it is a solution of the integral equation (on I)
y = y0 + ∫_{x0}^x f(s, y(s)) ds .  (3.2)
Theorem 3.3. Let f be a continuous function defined on the rectangle R. Further suppose that f is Lipschitz in y. Then the Picard iterates { φ_k } converge to a solution of the initial value problem (3.1) on I.
Proof. Note that
φ_k(x) = φ_0(x) + Σ_{i=1}^k [ φ_i(x) − φ_{i−1}(x) ] , ∀ x ∈ I .
Therefore, it suffices to show that the series
φ_0(x) + Σ_{i=1}^∞ [ φ_i(x) − φ_{i−1}(x) ]
converges. Let us estimate the terms φ_i(x) − φ_{i−1}(x). Observe that, since f satisfies a Lipschitz condition in R,
| φ_2(x) − φ_1(x) | = | ∫_{x0}^x [ f(t, φ_1(t)) − f(t, φ_0(t)) ] dt | ≤ K | ∫_{x0}^x | φ_1(t) − φ_0(t) | dt |
≤ K M | ∫_{x0}^x |t − x0| dt | ≤ K M (x − x0)^2 / 2 .
Claim:
| φ_i(x) − φ_{i−1}(x) | ≤ ( M K^{i−1} / i! ) |x − x0|^i = (M/K) K^i |x − x0|^i / i! , i = 1, 2, . . .  (3.5)
We shall prove (3.5) via induction. Note that (3.5) is true for i = 1 and i = 2. Assume now that (3.5) holds for i = m. Let us assume that x ≥ x0 (the proof is similar for x ≤ x0). By using the Lipschitz condition and the induction hypothesis, we have
| φ_{m+1}(x) − φ_m(x) | ≤ K ∫_{x0}^x | φ_m(t) − φ_{m−1}(t) | dt ≤ K ∫_{x0}^x ( M K^{m−1} / m! ) |t − x0|^m dt
= ( M K^m / (m+1)! ) |x − x0|^{m+1} .
Hence (3.5) holds for i = 1, 2, . . .. It follows that the i-th term of the series |φ_0(x)| + Σ_{i=1}^∞ | φ_i(x) − φ_{i−1}(x) | is less than or equal to M/K times the i-th term of the power series for e^{K|x−x0|}. Hence the series φ_0(x) + Σ_{i=1}^∞ [ φ_i(x) − φ_{i−1}(x) ] converges for all x ∈ I, and therefore the sequence { φ_k } converges to a limit φ(x) as k → ∞.
Properties of the limit function φ: We first show that φ is continuous on I. Indeed, for any x, x̃ ∈ I, we have, by using the boundedness of f,
| φ_{k+1}(x) − φ_{k+1}(x̃) | = | ∫_{x̃}^x f(t, φ_k(t)) dt | ≤ M |x − x̃|
⟹ | φ(x) − φ(x̃) | ≤ M |x − x̃| .
This shows that φ is continuous on I. Moreover, taking x̃ = x0 in the above estimate, we see that
| φ(x) − y0 | ≤ M |x − x0| , ∀ x ∈ I ,
and hence (x, φ(x)) is in R for all x ∈ I.
We now estimate | φ(x) − φ_k(x) |. Note that, since φ(x) is the limit of the series φ_0(x) + Σ_{i=1}^∞ [ φ_i(x) − φ_{i−1}(x) ] and φ_k = φ_0(x) + Σ_{i=1}^k [ φ_i(x) − φ_{i−1}(x) ], we see that
| φ(x) − φ_k(x) | = | Σ_{i=k+1}^∞ [ φ_i(x) − φ_{i−1}(x) ] | ≤ (M/K) Σ_{i=k+1}^∞ K^i |x − x0|^i / i! ≤ (M/K) Σ_{i=k+1}^∞ K^i α^i / i! .
Since L < 1, we get max_{|x−x0|≤δ} |y1(x) − y2(x)| = 0, which then implies that y1 = y2 on I_δ. In particular, y1(x0 ± δ) = y2(x0 ± δ). Repeating the same procedure in the intervals [x0 + δ, x0 + 2δ] and [x0 − 2δ, x0 − δ], we get y1(x) = y2(x) for all x ∈ [x0 − 2δ, x0 + 2δ]. Since the set {x ∈ R : |x − x0| ≤ α} is compact, after a finite number of steps we get y1(x) = y2(x) for all x ∈ [a, b]. This completes the proof. □
Remark 3.1. One can start Picard's iteration procedure with any continuous function φ_0 on |x − x0| ≤ a such that the points (x, φ_0(x)) are in R for |x − x0| ≤ a. One can proceed as in the previous theorem by showing:
i) φ_1, φ_2, . . . exist and are continuous on I, and satisfy | φ_k(x) − y0 | ≤ M |x − x0| for all k = 1, 2, . . .;
ii) | φ_{i+1}(x) − φ_i(x) | ≤ 2 (M/K) K^i |x − x0|^i / i! for all i = 1, 2, . . .;
iii) φ_k tends to a limit function φ, and | φ_i(x) − φ(x) | ≤ 2 (M/K) ( (Kα)^i / i! ) e^{Kα}.
Let us illustrate this method on the following example:
y' = y , y(0) = 1 .
Here φ_0(x) = 1. Moreover, the approximate functions are given by
φ_1(x) = 1 + ∫_0^x φ_0(s) ds = 1 + x ,
φ_2(x) = 1 + ∫_0^x φ_1(s) ds = 1 + x + x^2/2! ,
φ_3(x) = 1 + ∫_0^x φ_2(s) ds = 1 + x + x^2/2! + x^3/3! ,
and by induction
φ_k(x) = 1 + x + x^2/2! + x^3/3! + . . . + x^k/k! , k = 0, 1, 2, . . . .
Clearly, φ_k(x) → e^x as k → ∞, and the function y(x) = e^x indeed solves the above IVP.
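The iteration itself is easy to mechanize. A sketch (assuming sympy) that computes the first few Picard iterates for y' = y, y(0) = 1 and reproduces the partial sums above:

import sympy as sp

x, s = sp.symbols('x s')
phi = sp.Integer(1)                          # phi_0(x) = 1
for _ in range(4):
    # phi_{k+1}(x) = 1 + integral_0^x phi_k(s) ds
    phi = 1 + sp.integrate(phi.subs(x, s), (s, 0, x))
print(sp.expand(phi))   # 1 + x + x**2/2 + x**3/6 + x**4/24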
Definition 3.2. We say that J ⊂ R is the maximal interval of definition of the solution y(x) of the IVP (3.1) if any interval I on which y(x) is defined is contained in J, and y cannot be extended to an interval strictly larger than J.
Example 3.1. Consider the IVP
y' = y^2 , y(x0) = a ≠ 0 .
In view of Theorem 3.3, the above problem has a solution in a neighbourhood of x0. Note that the function φ(x, c) = −1/(x − c) solves y' = y^2. Thus, we need to impose the requirement that φ(x0, c) = a, i.e. c = x0 + 1/a. Let c_a = x0 + 1/a. Hence for a > 0, the solution to the above problem is
y_a(x) = −1/( x − c_a ) , x < c_a .
Thus, in this case, the maximal interval of definition is (−∞, c_a). Similarly, for a < 0, one can show the maximal interval of definition is (c_a, ∞) for the above problem.
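Numerically, the blow-up at c_a is visible as the integrator stalling near the end of the maximal interval. A sketch (assuming scipy) with a = 1 and x0 = 0, so that c_a = 1:

from scipy.integrate import solve_ivp

sol = solve_ivp(lambda x, y: y**2, (0, 2), [1.0], rtol=1e-10)
print(sol.status, sol.t[-1])   # integration fails just below x = 1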
3.1.1. Non-local existence of solutions: Theorem 3.3 guarantees a solution only for x near the initial point x0. There are some cases where the solution exists on the whole interval |x − x0| ≤ a. For example, consider an ODE of the form y' + g(x) y = h(x), where g, h are continuous on |x − x0| ≤ a. Let us define the strip
S := { |x − x0| ≤ a , |y| < ∞ } .
Since g is continuous on |x − x0| ≤ a, there exists K > 0 such that |g(x)| ≤ K. Then the function f(x, y) = −g(x) y + h(x) is Lipschitz continuous on the strip S.
Theorem 3.4. Let f be a real-valued continuous function on the strip S, and Lipschitz continuous on S with constant K > 0. Then the Picard iterations { φ_k } for the problem (3.1) exist on the entire interval |x − x0| ≤ a, and converge there to a solution of (3.1).
Proof. Note that (x, φ_0(x)) ∈ S. Now since f is continuous on S, there exists M > 0 such that |f(x, y0)| ≤ M for |x − x0| ≤ a, and hence
| φ_1(x) | ≤ |y0| + | ∫_{x0}^x f(t, y0) dt | ≤ |y0| + M |x − x0| ≤ |y0| + M a < ∞ .
Moreover, each φ_k is continuous on |x − x0| ≤ a. Now assume that the points
(x, φ_0(x)), (x, φ_1(x)), . . . , (x, φ_k(x))
lie in S. To show the convergence of the sequence { φ_k } to a limit function φ, we can mimic the proof of Theorem 3.3, once we note that
| φ_1(x) − φ_0(x) | ≤ | ∫_{x0}^x |f(t, y0)| dt | ≤ M |x − x0| .
Next we show that φ is continuous. Observe that, thanks to (3.5) (which holds in this case),
| φ_k(x) − y0 | = | Σ_{i=1}^k [ φ_i(x) − φ_{i−1}(x) ] | ≤ Σ_{i=1}^k | φ_i(x) − φ_{i−1}(x) | ≤ (M/K) Σ_{i=1}^k K^i |x − x0|^i / i!
≤ (M/K) Σ_{i=1}^∞ K^i |x − x0|^i / i! = (M/K) ( e^{Ka} − 1 ) := b .
Taking the limit as k → ∞, we obtain
| φ(x) − y0 | ≤ b , (|x − x0| ≤ a) .
Note that f is continuous on R, where the rectangle R is given by
R := { (x, y) ∈ R^2 : |x − x0| ≤ a , |y − y0| ≤ b } ,
and hence there exists a constant N > 0 such that |f(x, y)| ≤ N for (x, y) ∈ R. Let x, x̃ be two points in the interval |x − x0| ≤ a. Then
| φ_{k+1}(x) − φ_{k+1}(x̃) | = | ∫_{x̃}^x f(t, φ_k(t)) dt | ≤ N |x − x̃|
⟹ | φ(x) − φ(x̃) | ≤ N |x − x̃| .
The rest of the proof is a repetition of the analogous parts of the proof of Theorem 3.3, with α replaced by a everywhere. □
Example 3.2. Consider the IVP y' = λ y + x^2 sin(y), y(0) = 1, where λ is a real constant such that |λ| ≤ 1. Show that the solution of the given IVP exists for |x| ≤ 1.
Solution: Here f(x, y) = λ y + x^2 sin(y). Consider the strip S = { |x| ≤ 1, |y| < ∞ }. Then f is continuous on S and Lipschitz continuous on S, as |∂_y f(x, y)| ≤ 2 on S. Thus, by Theorem 3.4, the solution of the given problem exists on the entire interval |x| ≤ 1.
In view of Theorem 3.4, we arrive at the following corollary.
Corollary 3.5. Let f be a real-valued continuous function on the plane |x| < ∞, |y| < ∞, which satisfies a Lipschitz condition on each strip S_a defined by
S_a := { |x| ≤ a , |y| < ∞ } , (a > 0) .
Then every initial value problem
y' = f(x, y) , y(x0) = y0 ,
has a solution which exists for all real x.
Proof. For any real number x, there exists a > 0 such that x is contained inside the interval |x − x0| ≤ a. Consider now the strip
S := { |x − x0| ≤ a , |y| < ∞ } .
Since S is contained in the strip
S̃ := { |x| ≤ |x0| + a , |y| < ∞ } ,
f satisfies all the conditions of Theorem 3.4 on S. Thus, { φ_k(x) } tends to φ(x), where φ is a solution to the initial-value problem. This completes the proof. □
Theorem 3.9. Let f, g be continuous functions on R, and suppose f satisfies a Lipschitz condition there with Lipschitz constant K. Let φ, ψ be solutions of (3.7), (3.8) respectively on an interval I containing x0, with graphs contained in R. Suppose that the following inequalities are valid:
|f(x, y) − g(x, y)| ≤ ε , (x, y) ∈ R ,  (3.9)
|y1 − y2| ≤ δ ,  (3.10)
for some non-negative constants ε, δ. Then
| φ(x) − ψ(x) | ≤ δ e^{K|x−x0|} + (ε/K) ( e^{K|x−x0|} − 1 ) , ∀ x ∈ I .  (3.11)
Proof. Since φ, ψ are solutions of (3.7), (3.8) respectively on an interval I containing x0, we see that
φ(x) − ψ(x) = y1 − y2 + ∫_{x0}^x [ f(t, φ(t)) − g(t, ψ(t)) ] dt
= y1 − y2 + ∫_{x0}^x [ f(t, φ(t)) − f(t, ψ(t)) ] dt + ∫_{x0}^x [ f(t, ψ(t)) − g(t, ψ(t)) ] dt .
Assume that x ≥ x0. Then, in view of (3.9), (3.10), and the Lipschitz condition on f with Lipschitz constant K, we obtain from the above expression
| φ(x) − ψ(x) | ≤ δ + K ∫_{x0}^x | φ(s) − ψ(s) | ds + ε (x − x0) .  (3.12)
Define E(x) = ∫_{x0}^x | φ(s) − ψ(s) | ds. Then E'(x) = | φ(x) − ψ(x) | and E(x0) = 0. Therefore, (3.12) becomes
E'(x) ≤ K E(x) + δ + ε (x − x0) .
has a unique solution, say φ, for |x| ≤ 1/2 such that the graph (x, φ(x)) lies in the rectangle R. The Lipschitz constant K of f on the rectangle R is 1/2, and |f(x, y) − g(x, y)| ≤ 1/10 for all (x, y) ∈ R. Hence, by the continuous dependence estimate, we have, for all |x| ≤ 1/2,
| φ(x) − ψ(x) | ≤ (2/10) ( e^{|x|/2} − 1 ) .
As a consequence of Theorem 3.9, we have:
i) Uniqueness Theorem: Let f be continuous and satisfy a Lipschitz condition on R. If φ and ψ are two solutions of the IVP (3.1) on an interval I containing x0, then φ(x) = ψ(x) for all x ∈ I.
ii) Let f be continuous and satisfy a Lipschitz condition on R, and let g_k, k = 1, 2, . . . be continuous on R such that
|f(x, y) − g_k(x, y)| ≤ ε_k , (x, y) ∈ R ,
with ε_k → 0 as k → ∞. Let y_k → y0 as k → ∞. Let ψ_k be a solution to the IVP
y' = g_k(x, y) , y(x0) = y_k ,
and let φ be a solution to the IVP (3.1) on some interval I containing x0. Then ψ_k(x) → φ(x) on I.
Remark 3.2. The Lipschitz condition on f on the rectangle R cannot simply be dropped if we want uniqueness of the solution of the IVP (3.1). To see this, consider the IVP
y' = 3 y^{2/3} , y(0) = 0 .
It is easy to check that the function φ_k, for any positive number k,
φ_k(x) = 0 for −∞ < x ≤ k , φ_k(x) = (x − k)^3 for k ≤ x < ∞ ,
is a solution of the above IVP. So there are infinitely many solutions on any rectangle R containing the origin. But note that f does NOT satisfy a Lipschitz condition on R.
Remark 3.3. We have shown the existence of a solution of the IVP under a rather strong condition, namely Lipschitz continuity of the function f. One can relax the Lipschitz condition and guarantee the existence of a solution of the IVP under the continuity assumption on f alone. This is the content of Peano's theorem.
Theorem 3.10 (Peano Theorem). If the function f(x, y) is continuous on a rectangle R and if (x0, y0) ∈ R, then the IVP y' = f(x, y) with y(x0) = y0 has a solution in a neighborhood of x0.
Let Ω̃ ⊂ R^n, and Ω = I × Ω̃. We introduce
~y = ( y1, y2, . . . , yn )^T ∈ R^n ; f~(x, ~y) = ( f1(x, ~y), f2(x, ~y), . . . , fn(x, ~y) )^T ∈ R^n .
Then f~ : Ω → R^n, and the system of equations (4.1) can be written in the compact form
~y' = f~(x, ~y) .
An n-th order ODE y^(n) = f(x, y, y', . . . , y^(n−1)) may also be treated as a system of the type (4.1). To see this, let y1 = y, y2 = y', . . . , yn = y^(n−1). Then from the ODE y^(n) = f(x, y, y', . . . , y^(n−1)), we obtain the system
y1' = y2 , y2' = y3 , . . . , y_{n−1}' = y_n , y_n' = f(x, y1, y2, . . . , y_n) .
Example 4.1. Consider the initial value problem y^(3) + 2y' − (y')^3 + y = x^2 + 1 with y(0) = 0, y'(0) = 1, y''(0) = 1. We want to convert it into a system of equations. Let y1 = y, y2 = y' and y3 = y''. Note that y' = y1' = y2. Using these, the required system takes the form
y1' = y2 ; y2' = y3 ; y3' = y2^3 − 2 y2 − y1 + x^2 + 1 ,  (4.3)
y1(0) = 0 ; y2(0) = 1 ; y3(0) = 1 .
Theorem 4.1 (Local existence). Let f~ be a continuous vector-valued function defined on
R = { |x − x0| ≤ a , |~y − ~y0| ≤ b } , a, b > 0 .
This can be written in compact form as ~y' = f~(x, ~y), ~y(0) = ~y0 = (0, 1), where f~(x, ~y) = (y2, −y1). Let us calculate the successive approximations φ~_k(x):
φ~_0(x) = ~y0 = (0, 1) ,
φ~_1(x) = (0, 1) + ∫_0^x (1, 0) ds = (x, 1) ,
φ~_2(x) = (0, 1) + ∫_0^x f~(s, φ~_1(s)) ds = (0, 1) + ∫_0^x (1, −s) ds = (0, 1) + (x, −x^2/2) = (x, 1 − x^2/2) ,
φ~_3(x) = (0, 1) + ∫_0^x (1 − s^2/2, −s) ds = (x − x^3/3!, 1 − x^2/2) ,
φ~_4(x) = (0, 1) + ∫_0^x (1 − s^2/2, −s + s^3/3!) ds = (x − x^3/3!, 1 − x^2/2 + x^4/4!) .
It is not difficult to show that all the φ~_k exist for all real x and φ~_k(x) → (sin(x), cos(x)). Thus, the unique solution of the given IVP is φ~(x) = (sin(x), cos(x)).
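The same IVP can be handed to a numerical integrator as a check. A sketch (assuming scipy and numpy), evaluating the solution at x = π/2 where (sin, cos) = (1, 0):

import numpy as np
from scipy.integrate import solve_ivp

f = lambda x, y: [y[1], -y[0]]     # f(x, y) = (y2, -y1)
sol = solve_ivp(f, (0, np.pi), [0.0, 1.0], t_eval=[np.pi/2], rtol=1e-10)
print(sol.y[:, 0])                 # approximately [1, 0]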
Theorem 4.2 (Non-local existence). Let f~ be a continuous vector-valued function defined on
S = { |x − x0| ≤ a , |~y| < ∞ } , a > 0 ,
which satisfies there a Lipschitz condition. Then the successive approximations { φ~_k }_{k=0}^∞ for the IVP ~y' = f~(x, ~y); ~y(x0) = ~y0 exist on |x − x0| ≤ a, and converge there to a solution of the IVP.
Corollary 4.3. Let f~ be a continuous vector-valued function defined on |x| < ∞, |~y| < ∞, which satisfies a Lipschitz condition on each strip
S_a = { |x| ≤ a , |~y| < ∞ } , a > 0 .
Then every initial value problem ~y' = f~(x, ~y); ~y(x0) = ~y0 has a solution which exists for all x ∈ R.
Example 4.3. Consider the system
y1' = 3y1 + x y3 , y2' = y2 + x^3 y3 , y3' = 2x y1 − y2 + e^{−x} y3 .
This system of equations can be written in the compact form ~y' = f~(x, ~y), where
~y = ( y1, y2, y3 )^T and f~(x, ~y) = ( 3y1 + x y3 , y2 + x^3 y3 , 2x y1 − y2 + e^{−x} y3 )^T .
Note that f~ is a continuous vector-valued function defined on |x| < ∞, |~y| < ∞. It is Lipschitz continuous on each strip S_a = { |x| ≤ a, |~y| < ∞ }, a > 0, since for (x, ~y), (x, ~ỹ) ∈ S_a,
| f~(x, ~y) − f~(x, ~ỹ) | ≤ |3(y1 − ỹ1) + x(y3 − ỹ3)| + |(y2 − ỹ2) + x^3 (y3 − ỹ3)| + |2x(y1 − ỹ1) − (y2 − ỹ2) + e^{−x}(y3 − ỹ3)|
≤ (3 + 2|x|) |y1 − ỹ1| + 2 |y2 − ỹ2| + ( |x| + e^{−x} + |x|^3 ) |y3 − ỹ3|
≤ ( 5 + 3a + e^a + a^3 ) | ~y − ~ỹ | .
Therefore, every initial value problem for this system has a solution which exists for all real x. Moreover, the solution is unique.
Example 4.4. For any Lipschitz continuous function f on R, consider the IVP
y''(x) = f(y) , y(0) = 0 , y'(0) = 0 .
Then the solution of the above IVP is even. Since f is Lipschitz continuous, by writing the above IVP in vector form one can show that the above problem has a solution y(x) defined on the whole real line. Let z(x) = y(−x). Note that z(x) satisfies the above IVP: z''(x) = y''(−x) = f(y(−x)) = f(z(x)), z(0) = 0 and z'(0) = −y'(0) = 0. Hence by uniqueness, z(x) = y(x), i.e. y(−x) = y(x) for all x ∈ R. In other words, y(x) is even in x.
As in the first order (scalar-valued) case, we have the following continuous dependence estimate and uniqueness theorem.
Theorem 4.4 (Continuous dependence estimate). Let f~, ~g be two continuous vector-valued functions defined on a rectangle
R = { |x − x0| ≤ a , |~y − ~y0| ≤ b } , a, b > 0 ,
and suppose f~ satisfies a Lipschitz condition on R with Lipschitz constant K. Let φ~, ψ~ be solutions of the problems ~y' = f~(x, ~y), ~y(x0) = ~y1 and ~y' = ~g(x, ~y), ~y(x0) = ~y2 respectively on some interval I containing x0. If, for ε, δ ≥ 0,
| ~y1 − ~y2 | ≤ δ , | f~(x, ~y) − ~g(x, ~y) | ≤ ε ∀ (x, ~y) ∈ R ,
then
| φ~(x) − ψ~(x) | ≤ δ e^{K|x−x0|} + (ε/K) ( e^{K|x−x0|} − 1 ) ∀ x ∈ I .
In particular, the problem ~y' = f~(x, ~y), ~y(x0) = ~y0 has at most one solution on any interval containing x0.
Theorem 4.5 (Gronwall’s lemma:II). Let u(t), p(t) and q(t) be non-negative continuous func-
tions defined on the interval I = [a, b]. Suppose the following inequality holds:
Z t
u(t) p(t) + q(s)u(s) ds, t 2 I.
a
Then, we have
Z t ⇣Z t ⌘
u(t) p(t) + p(⌧ )q(⌧ ) exp q(s) ds d⌧, t 2 I.
a ⌧
Rt
Proof. Let r(t) = a q(s)u(s) ds. Then r(·) is di↵erentiable in (a, b) with r0 (t) = q(t)u(t) and
r(a) = 0. Hence, by our given condition, one has
r0 (t) = q(t)u(t) q(t){p(t) + r(t)} =) r0 (t) r(t)q(t) p(t)q(t)
⇣ Z t ⌘ ⇣ Z t ⌘
=) r0 (t) r(t)q(t) exp q(s) ds p(t)q(t) exp q(s) ds
a a
dh ⇣ Z t ⌘i ⇣ Z t ⌘
=) r(t) exp q(s) ds p(t)q(t) exp q(s) ds
dt a a
⇣ Z t ⌘ Z t ⇣ Z ⌧ ⌘
=) r(t) exp q(s) ds p(⌧ )q(⌧ ) exp q(s) ds d⌧
a a a
Z t ⇣Z t Z ⌧ ⌘
=) r(t) p(⌧ )q(⌧ ) exp q(s) ds q(s) ds d⌧
a a a
Z t ⇣Z t ⌘
=) r(t) p(⌧ )q(⌧ ) exp q(s) ds d⌧ .
a ⌧
Hence, we get
Z t ⇣Z t ⌘
u(t) p(t) + r(t) p(t) + p(⌧ )q(⌧ ) exp q(s) ds d⌧ .
a ⌧
⇤
Corollary 4.6. Let u(t), p(t) and q(t) be non-negative continuous functions defined on the interval I = [a, b]. Suppose the following inequality holds:
u(t) ≤ p(t) + ∫_a^t q(s) u(s) ds , t ∈ I .
Assume that p(·) is non-decreasing. Then
u(t) ≤ p(t) exp( ∫_a^t q(s) ds ) , t ∈ [a, b] .
4.1. Existence and uniqueness for linear systems: Consider the linear system ~y' = f~(x, ~y), where the components of f~ are given by
f_j(x, ~y) = Σ_{k=1}^n a_{jk}(x) y_k + b_j(x) , j = 1, 2, . . . , n ,
and the functions a_{jk}, b_j are continuous on an interval I containing x0. Now consider the strip S_a = { |x − x0| ≤ a , |~y| < ∞ }. Suppose a_{jk}, b_j are continuous on |x − x0| ≤ a. Then there exists a constant K > 0 such that Σ_{j=1}^n |a_{jk}(x)| ≤ K for all k = 1, 2, . . . , n, and for all x satisfying |x − x0| ≤ a. Note that
| ∂f~/∂y_k (x, ~y) | = | ( a_{1k}(x), a_{2k}(x), . . . , a_{nk}(x) ) | ≤ Σ_{j=1}^n |a_{jk}(x)| ≤ K .
Thus, f~ satisfies a Lipschitz condition on S_a with Lipschitz constant K. Hence we arrive at the following theorem.
Theorem 4.7. Let ~y' = f~(x, ~y) be a linear system as described above. If ~y0 ∈ R^n, there exists one and only one solution φ~ of the IVP ~y' = f~(x, ~y), ~y(x0) = ~y0.
Note that the linear system can be written in the equivalent form
~y'(x) = A(x) ~y(x) + ~b(x) ,
where
A(x) =
[ a11(x) a12(x) a13(x) . . . a1n(x) ]
[ a21(x) a22(x) a23(x) . . . a2n(x) ]
[ . . . ]
[ an1(x) an2(x) an3(x) . . . ann(x) ]
and ~b(x) = ( b1(x), b2(x), . . . , bn(x) )^T .
Example 4.6. Consider the homogeneous linear system y_j' = Σ_{k=1}^n a_{jk} y_k (j = 1, 2, . . . , n), where the a_{jk} are continuous on some interval I. Then it is easy to see that φ~ = ~0 is a solution. This is called the trivial solution. Let K be such that Σ_{j=1}^n |a_{jk}(x)| ≤ K. Let x0 ∈ I, and let φ~ be any solution of the linear homogeneous system. Now consider the two IVPs
~y' = f~(x, ~y), ~y(x0) = ~0 ; ~y' = f~(x, ~y), ~y(x0) = φ~(x0) ,
where the components of f~ are f_j(x, ~y) = Σ_{k=1}^n a_{jk} y_k. Then, according to the continuous dependence estimate theorem (with ε = 0), we have
| φ~(x) | = | φ~(x) − ~0 | ≤ | φ~(x0) − ~0 | e^{K|x−x0|} + (ε/K)( e^{K|x−x0|} − 1 ) = | φ~(x0) | e^{K|x−x0|} ∀ x ∈ I .
For linear equations of order n, we have non-local existence.
Theorem 4.8. Let a_0, a_1, . . . , a_{n−1} and b be continuous real-valued functions on an interval I containing x0. If α_0, α_1, . . . , α_{n−1} are any n constants, there exists one and only one solution φ of the ODE
y^(n) + a_{n−1}(x) y^(n−1) + . . . + a_0(x) y = b(x) on I ,
y(x0) = α_0 , y'(x0) = α_1 , . . . , y^(n−1)(x0) = α_{n−1} .
Proof. Let ~y0 = (α_0, α_1, . . . , α_{n−1}). The given ODE can be written as a system of linear equations
y_j' = y_{j+1} (j = 1, 2, . . . , n−1) ; y_n' = b(x) − a_{n−1}(x) y_n − . . . − a_0(x) y_1 .
Then, according to Theorem 4.7, the above problem has a unique solution φ~ = ( φ_1, φ_2, . . . , φ_n ) on I satisfying φ_1(x0) = α_0, φ_2(x0) = α_1, . . . , φ_n(x0) = α_{n−1}. But, since
φ_2 = φ_1' , φ_3 = φ_2' = φ_1'' , . . . , φ_n = φ_1^{(n−1)} ,
the function φ_1 is the required solution on I. □
Example 5.3. f(t) = sin(t) and g(t) = t^3 are linearly independent functions on the interval (0, π). To see this, we need to show that the Wronskian at some point is non-zero. Note that
W[sin(t), t^3](π/4) = (1/√2) ( 3 (π/4)^2 − (π/4)^3 ) ≠ 0 .
Hence f(t) = sin(t) and g(t) = t^3 are linearly independent functions on the interval (0, π).
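Such Wronskian computations can be delegated to a computer algebra system. A sketch (assuming sympy, which provides a wronskian helper):

import sympy as sp

t = sp.symbols('t')
W = sp.wronskian([sp.sin(t), t**3], t)
print(sp.simplify(W))              # 3*t**2*sin(t) - t**3*cos(t)
print(W.subs(t, sp.pi/4) != 0)     # True: nonzero at t = pi/4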
Remark 5.1. Let us make the following remarks:
i) If u1, u2, . . . , um are linearly dependent, then W[u1(t), u2(t), . . . , um(t)] = 0 for all t ∈ I.
ii) The converse of i) is not true in general. For example, let m = 2, u1(t) = t^2 and u2(t) = |t| t. We first show that u1 and u2 are linearly independent. Let c1 u1(t) + c2 u2(t) = 0. Then
(c1 + c2) t^2 = 0 for t > 0 , (c1 − c2) t^2 = 0 for t < 0  ⟹  c1 = c2 = 0 .
But
W[t^2, |t| t] = det [ t^2 , |t| t ; 2t , 2|t| ] = 0 ∀ t ∈ R .
Theorem 5.2. Let I be an interval in R and let a_i : I → R (i = 0, 1, . . . , m−1) be continuous. Let u1, u2, . . . , um be m solutions of the linear homogeneous ODE (2.1). Suppose that u1, u2, . . . , um are linearly independent. Then
W[u1(t), u2(t), . . . , um(t)] ≠ 0 ∀ t ∈ I .
Proof. Suppose that the result is NOT true. Then there exists t0 ∈ I such that
W[u1(t0), u2(t0), . . . , um(t0)] = 0 .
Then there exists C := ( c1, c2, . . . , cm )^T ≠ ( 0, 0, . . . , 0 )^T such that A C = 0, where
A :=
[ u1(t0) u2(t0) . . . um(t0) ]
[ u1'(t0) u2'(t0) . . . um'(t0) ]
[ . . . ]
[ u1^{(m−1)}(t0) u2^{(m−1)}(t0) . . . um^{(m−1)}(t0) ] .
Now define v(t) = Σ_{i=1}^m c_i u_i(t), t ∈ I. Then
v(t0) = c1 u1(t0) + c2 u2(t0) + . . . + cm um(t0) = 0 ,
v'(t0) = c1 u1'(t0) + c2 u2'(t0) + . . . + cm um'(t0) = 0 ,
. . .
v^{(m−1)}(t0) = c1 u1^{(m−1)}(t0) + c2 u2^{(m−1)}(t0) + . . . + cm um^{(m−1)}(t0) = 0 .
Since the u_i, i = 1, 2, . . . , m, solve the linear homogeneous ODE (2.1), v satisfies the following initial value problem:
v^{(m)}(t) + a_{m−1}(t) v^{(m−1)}(t) + . . . + a_1(t) v'(t) + a_0(t) v(t) = 0 , t ∈ I ,
v(t0) = v'(t0) = . . . = v^{(m−1)}(t0) = 0 .
Thus, v(t) = 0 for all t ∈ I. In other words, Σ_{i=1}^m c_i u_i(t) = 0 for all t ∈ I with the above choice of the constants c1, c2, . . . , cm, which is a contradiction. This finishes the proof. □
Corollary 5.3. Let u1, u2, . . . , um be m solutions of the linear homogeneous ODE (2.1). Then u1, u2, . . . , um are linearly independent if and only if
W[u1(t0), u2(t0), . . . , um(t0)] ≠ 0 for some t0 ∈ I .
Example 5.4. sin(x) and cos(x) are two linearly independent solutions of the homogeneous ODE y''(x) + y(x) = 0. Note that sin(x) and cos(x) solve the ODE. Moreover,
W[sin(x), cos(x)] = −1 ∀ x ∈ R .
Therefore, they are two linearly independent solutions of the homogeneous ODE y''(x) + y(x) = 0.
Theorem 5.4. The real vector space X defined in Theorem 2.1 is finite dimensional and dim X = m.
Corollary 5.5. If u1, u2, . . . , um are any m linearly independent solutions of the linear homogeneous ODE (2.1), then any solution u of (2.1) can be written as
u(t) = c1 u1(t) + c2 u2(t) + . . . + cm um(t) , t ∈ I .
We see that W[u1, u2] satisfies the first order linear homogeneous equation u'(t) + a1(t) u(t) = 0, and hence
W[u1(t), u2(t)] = C exp( −∫_{t0}^t a1(s) ds ) .
Putting t = t0 in the above expression, we obtain C = W[u1(t0), u2(t0)], and hence
W[u1(t), u2(t)] = exp( −∫_{t0}^t a1(s) ds ) W[u1(t0), u2(t0)] .
For general m, one needs to make use of some general properties of the determinant. From the definition of W = W[u1, u2, . . . , um] as a determinant, it follows that its derivative W' is a sum of m determinants
W' = V1 + V2 + . . . + Vm ,
where V_k differs from W only in its k-th row, and the k-th row of V_k is obtained by differentiating the k-th row of W. The first m−1 determinants are all zero, as they each have two identical rows. Hence
W' = det
[ u1 u2 . . . um ]
[ u1' u2' . . . um' ]
[ . . . ]
[ u1^{(m)} u2^{(m)} . . . um^{(m)} ]
= det
[ u1 u2 . . . um ]
[ u1' u2' . . . um' ]
[ . . . ]
[ −Σ_{j=0}^{m−1} a_j u1^{(j)} −Σ_{j=0}^{m−1} a_j u2^{(j)} . . . −Σ_{j=0}^{m−1} a_j um^{(j)} ] .
The value of the determinant is unchanged if we multiply any row by a number and add it to the last row. If we multiply the first row by a_0, the second row by a_1, . . ., the (m−1)-th row by a_{m−2}, and add these to the last row, we have
W' = det
[ u1 u2 . . . um ]
[ u1' u2' . . . um' ]
[ . . . ]
[ −a_{m−1} u1^{(m−1)} −a_{m−1} u2^{(m−1)} . . . −a_{m−1} um^{(m−1)} ]
= −a_{m−1} W .
Thus, W satisfies the linear first order equation u'(t) + a_{m−1}(t) u(t) = 0, and hence
W[u1, u2, . . . , um](t) = exp( −∫_{t0}^t a_{m−1}(s) ds ) W[u1, u2, . . . , um](t0) . □
Theorem 5.7. Let u_p be a particular solution of the non-homogeneous ODE
L(u) := u^{(m)}(t) + a_{m−1}(t) u^{(m−1)}(t) + . . . + a_0(t) u(t) = b(t) ∀ t ∈ I ,
and let v be any solution of the corresponding linear homogeneous ODE. Then any solution u of the non-homogeneous ODE can be written as u = v + u_p.
Proof. Let u be any solution of the non-homogeneous ODE and u_p a particular solution. Then u − u_p is a solution of the homogeneous ODE. Hence u − u_p = c1 v1(t) + c2 v2(t) + . . . + cm vm(t), where v1, v2, . . . , vm are any linearly independent solutions of the linear homogeneous ODE. Since any solution v of the homogeneous ODE can be written in the form c1 v1(t) + c2 v2(t) + . . . + cm vm(t), the result follows easily. □
Later, we use the variation of parameters method to find the particular solution u_p.
Example 5.6. Let x1, x2, x3 and x4 be solutions of the linear homogeneous ODE x^{(4)}(t) − 3x^{(3)}(t) + 2x'(t) − 5x(t) = 0 such that W[x1, x2, x3, x4](0) = 5. Then by Abel's theorem,
W[x1, x2, x3, x4](6) = exp( ∫_0^6 3 ds ) W[x1, x2, x3, x4](0) = 5 e^{18} .
Example 5.7. The functions u1(t) = sin t and u2(t) = t^2 cannot both be solutions of a differential equation u''(t) + a1(t) u'(t) + a0(t) u(t) = 0, where a0, a1 are continuous functions. To see this, consider the Wronskian of u1 and u2. Note that W[u1(t), u2(t)] = 2t sin t − t^2 cos t. Thus W[u1(π/2), u2(π/2)] = π ≠ 0, while W[u1(0), u2(0)] = 0. Thus, in view of the previous theorem, u1 and u2 cannot both be solutions.
Example 5.8. Let us explain why e^t, sin(t) and t cannot be solutions of a third order homogeneous equation with continuous coefficients. Notice that
W[e^t, sin(t), t](0) = 0 ; W[e^t, sin(t), t](π/2) = ( 2 − π/2 ) e^{π/2} ≠ 0 .
If they were solutions of a third order homogeneous equation with continuous coefficients, then by Abel's theorem the Wronskian would be either identically zero or nowhere zero. Therefore, e^t, sin(t) and t cannot be solutions of a third order homogeneous equation with continuous coefficients.
5.2. Linear homogeneous equations with constant coefficients. We are interested in the ODE
u^{(m)}(t) + a_{m−1} u^{(m−1)}(t) + . . . + a_0 u(t) = 0 , where a_i ∈ R .  (5.1)
Define the differential operator L with constant coefficients as
L ≡ Σ_{i=0}^m a_i d^i/dt^i , a_i ∈ R with a_m = 1 .
For u : R → R which is m-times differentiable, we define
L u(t) = Σ_{i=0}^m a_i d^i u(t)/dt^i .
In this notation, we are interested in finding u such that L u(t) = 0 for all t ∈ R. Define the polynomial
p(λ) = λ^m + a_{m−1} λ^{m−1} + . . . + a_1 λ + a_0 .  (5.2)
The polynomial p is called the characteristic polynomial of the operator L.
Remark 5.2. We observe the following:
a) For a given polynomial p of degree m, we can associate a differential operator L_p such that p is the characteristic polynomial of L_p.
b) Let p be a polynomial of degree m such that p = p1 p2, where p1 and p2 are polynomials with real coefficients. Then for any m-times differentiable function u,
L_p(u) = L_{p1} L_{p2}(u) = L_{p2} L_{p1}(u) .
Thus, if u is a solution of L_{p1} u = 0 or L_{p2} u = 0, then u is a solution of L_p u = 0.
By the fundamental theorem of algebra, we can write the characteristic polynomial as
p(λ) = (λ − λ1) . . . (λ − λm) , λ_i ∈ C .
Note that the λ_i's may not be distinct. Suppose that λ_i is not real. Then λ̄_i is also a root of p and hence λ̄_i = λ_j for some j. Hence we can write
p(λ) = (λ − λ1)^{m1} . . . (λ − λ_{k1})^{m_{k1}} (λ − z1)^{n1} (λ − z̄1)^{n1} . . . (λ − z_{k2})^{n_{k2}} (λ − z̄_{k2})^{n_{k2}} ,
where λ_i ∈ R, z_j = x_j + i y_j with y_j ≠ 0, and Σ_{i=1}^{k1} m_i + 2 Σ_{j=1}^{k2} n_j = m. Thus
p = p1 p2 . . . p_{k1} q1 q2 . . . q_{k2} ; p_i(λ) = (λ − λ_i)^{m_i} , q_j(λ) = [ (λ − z_j)(λ − z̄_j) ]^{n_j} .
Therefore, if we find u with L_{p_i}(u) = 0 or L_{q_j}(u) = 0, then u is a solution of L_p u = 0.
Finding the solutions of L_{p_i}(u) = 0: We need to find u such that ( d/dt − λ_i )^{m_i} u = 0. This gives ( d/dt − λ_i )( d/dt − λ_i ) . . . ( d/dt − λ_i ) u = 0. Now ( d/dt − λ_i ) u = 0 ⟹ u' − λ_i u = 0, and hence u = e^{λ_i t}. Suppose that ( d/dt − λ_i )^2 u = 0. Set v = ( d/dt − λ_i ) u. Then ( d/dt − λ_i ) v = 0, and hence v = e^{λ_i t}. Thus,
e^{λ_i t} = u' − λ_i u
⟹ u' e^{−λ_i t} − λ_i u e^{−λ_i t} = 1
⟹ ( u e^{−λ_i t} )' = 1
⟹ u e^{−λ_i t} = t + C
⟹ u(t) = t e^{λ_i t} + C e^{λ_i t} .
Therefore, u = t e^{λ_i t} + C e^{λ_i t} solves L_{p_i} u = 0. This yields that L_{p_i}( t e^{λ_i t} ) = 0, and hence e^{λ_i t} and t e^{λ_i t} are solutions of L_{p_i} u = 0.
Theorem 5.8. The functions e^{λ_i t}, t e^{λ_i t}, . . . , t^{m_i − 1} e^{λ_i t} are m_i linearly independent solutions of L_{p_i} u = 0 and hence solutions of L u = 0.
Proof. We can easily check that L_{p_i}( t^j e^{λ_i t} ) = 0 if j ≤ m_i − 1. Let us show the independence of the functions. Let Σ_{j=0}^{m_i − 1} C_{j+1} t^j e^{λ_i t} = 0. Putting t = 0, we get C_1 = 0, and hence Σ_{j=1}^{m_i − 1} C_{j+1} t^{j−1} e^{λ_i t} = 0. Again putting t = 0 in the above expression, we have C_2 = 0. Continuing in this manner, we obtain C_1 = C_2 = . . . = C_{m_i} = 0, which proves the independence. □
Σ_{i=1}^m c_i'(t) u_i^{(m−1)}(t) = b(t) .
We now arrive at the following theorem.
Theorem 5.10. The function u(t) = Σ_{i=1}^m c_i(t) u_i(t) is a particular solution of the non-homogeneous ODE
u^{(m)}(t) + a_{m−1}(t) u^{(m−1)}(t) + . . . + a_0(t) u(t) = b(t) , t ∈ I ,
if c1, c2, . . . , cm satisfy the matrix equation A C = B, where
A =
[ u1 u2 . . . um ]
[ u1' u2' . . . um' ]
[ . . . ]
[ u1^{(m−1)} u2^{(m−1)} . . . um^{(m−1)} ] ,
C = ( c1', c2', . . . , cm' )^T and B = ( 0, 0, . . . , 0, b )^T .
Remark 5.3. Note that, since u1, u2, . . . , um are linearly independent, the Wronskian is non-zero. Thus the above matrix equation has a unique solution for C. Moreover, by using Cramer's rule we have an explicit formula for c_i', namely
c_i'(t) = W_i(t) b(t) / W[u1, u2, . . . , um](t) (1 ≤ i ≤ m) ,
where W_i is the determinant obtained from W[u1, u2, . . . , u_i, . . . , um] by replacing the i-th column by (0, 0, . . . , 0, 1)^T. One can take c_i(t) = ∫_{t0}^t c_i'(s) ds for some t0 ∈ I. Thus, the particular solution u_p now takes the form
u_p(t) = Σ_{i=1}^m u_i(t) ∫_{t0}^t W_i(s) b(s) / W[u1, u2, . . . , um](s) ds .
Remark 5.4. Fix t0 ∈ I. Then the particular solution u_p of the non-homogeneous ODE
u^{(m)}(t) + a_{m−1}(t) u^{(m−1)}(t) + . . . + a_0(t) u(t) = b(t) , t ∈ I ,
given by u_p(t) = Σ_{i=1}^m u_i(t) c_i(t), where c_i(t) = ∫_{t0}^t W_i(s) b(s) / W[u1, u2, . . . , um](s) ds, satisfies the following initial conditions:
u_p(t0) = u_p'(t0) = . . . = u_p^{(m−1)}(t0) = 0 .
Note that, since Σ_{i=1}^m c_i'(t) u_i^{(j)}(t) = 0 for all 0 ≤ j ≤ m−2, we have
u_p^{(j)}(t) = Σ_{i=1}^m c_i(t) u_i^{(j)}(t) ∀ 0 ≤ j ≤ m−1 .
Since c_i(t0) = 0, it easily follows that u_p(t0) = u_p'(t0) = . . . = u_p^{(m−1)}(t0) = 0.
Example 5.12. Find a general solution of the ODE
y^(3) + y'' + y' + y = 1 .
The characteristic polynomial p(λ) associated to the homogeneous ODE is given by
p(λ) = λ^3 + λ^2 + λ + 1 = (λ + 1)(λ^2 + 1) = (λ + 1)(λ + i)(λ − i) .
Thus, the roots of p are i, −i, and −1. Hence three independent solutions of the homogeneous ODE are given by
u1(t) = e^{−t} , u2(t) = cos(t) , u3(t) = sin(t) .
Similarly, for the equation u^(3) − u' = t, with independent homogeneous solutions u1(t) = 1, u2(t) = e^t, u3(t) = e^{−t}, the same method gives
c1(t) = −∫_0^t s ds = −t^2/2 , c2(t) = (1/2) ∫_0^t s e^{−s} ds = (1/2) [ 1 − e^{−t} − t e^{−t} ] ,
c3(t) = (1/2) ∫_0^t s e^s ds = (1/2) [ 1 − e^t + t e^t ] , and hence u_p = −t^2/2 − 1 + ( e^t + e^{−t} )/2 .
Therefore, a general solution to the non-homogeneous ODE is given by
u(t) = c1 + c2 e^t + c3 e^{−t} − t^2/2 ,
where c1, c2 and c3 are arbitrary constants.
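Example 5.12 can be verified end to end with a computer algebra system. A sketch (assuming sympy):

import sympy as sp

t = sp.symbols('t')
u = sp.Function('u')
ode = sp.Eq(u(t).diff(t, 3) + u(t).diff(t, 2) + u(t).diff(t) + u(t), 1)
print(sp.dsolve(ode, u(t)))
# expected form: u(t) = C1*exp(-t) + C2*sin(t) + C3*cos(t) + 1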
5.4. Euler’s Equation: Consider the equation
tm u(m) (t) + am 1t
m 1 (m 1)
u (t) + . . . + a1 tu0 (t) + a0 u(t) = b(t) (5.3)
where a0 , a1 , . . . , am 1 are constants. Consider t > 0. Let s = log(t) (for t < 0, we must use
s = log( t)). Then
du du 1
=
dt ds t
d2 u d2 u du 1
=
dt2 ds2 ds t2
d3 u d3 u d2 u du 1
3
= 3
3 2 +2
dt ds ds ds t3
.. .. ..
. . .
dm u ⇣ dm u dm 1 u du ⌘ 1
= + C m 1 + . . . + C 1 ,
dtm dsm dsm 1 ds tm
for some constants C1 , C2 , . . . , Cm 1 . Substituting these in (5.3), we obtain a non-homogeneous
ODE with constant coefficients
dm u dm 1 u du
+ B m 1 + . . . + B1 + B0 u = b(es )
dsm dsm 1 ds
for some constants B0 , B1 , . . . , Bm 1 . One can solve this ODE and then substitute s = log(t) to
get the solution of the Euler’s equation (5.3).
Example 5.14. Solve
x^2 y'' + x y' − y = x^3 , x > 0 .
Let x = e^u. Then x y' = dy/du and x^2 y'' = d^2y/du^2 − dy/du. Therefore, we have
d^2y/du^2 − y = e^{3u} , u ∈ R .
The characteristic polynomial corresponding to the homogeneous ODE is given by
p(λ) = λ^2 − 1 = (λ + 1)(λ − 1) .
Therefore, y1 = e^u and y2 = e^{−u} are two linearly independent solutions. Note that W[y1, y2](u) = −2. Moreover,
W1(u) = det [ 0 , e^{−u} ; 1 , −e^{−u} ] = −e^{−u} , W2(u) = det [ e^u , 0 ; e^u , 1 ] = e^u .
Hence
c1(u) = (1/2) ∫_0^u e^{2r} dr = (1/4) ( e^{2u} − 1 ) , c2(u) = −(1/2) ∫_0^u e^{4r} dr = (1/8) ( 1 − e^{4u} ) ,
so that, modulo homogeneous terms, u_p = e^{3u}/8. Returning to x = e^u, the general solution is y(x) = C1 x + C2/x + x^3/8, x > 0.
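The Euler equation of Example 5.14 can also be checked directly. A sketch (assuming sympy), which should return the homogeneous part C1 x + C2/x together with the particular solution x^3/8:

import sympy as sp

x = sp.symbols('x', positive=True)
y = sp.Function('y')
ode = sp.Eq(x**2*y(x).diff(x, 2) + x*y(x).diff(x) - y(x), x**3)
print(sp.dsolve(ode, y(x)))   # expected: y(x) = C1*x + C2/x + x**3/8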
Note that y1(t) = (3t − 1) is a solution, and y1(t) ≠ 0 for t > 1/3. To find another independent solution, using Theorem 5.11 we obtain
y2(t) = (3t − 1) ∫ e^{−∫ 3/(3t−1) dt} / (3t − 1)^2 dt = −1/( 6(3t − 1) ) .
Therefore, the general solution is given by
y(t) = c1 (3t − 1) − c2 / ( 6(3t − 1) ) , t > 1/3 .
One can easily check that the substitution x(t) = y(t) e^{−(1/2) ∫ p(t) dt} transforms the equation x'' + p(t) x' + q(t) x = 0 into the form y'' + P(t) y(t) = 0, where p, q are continuous functions such that p' is continuous. Therefore, instead of studying equations of the form x'' + p(t) x' + q(t) x = 0, we will study equations of the form
y'' + α(x) y(x) = 0 .
Oscillatory behaviour of solutions: Consider the second order linear homogeneous equation
y'' + p(x) y(x) = 0 .  (5.5)
For simplicity, we assume that p(x) is continuous everywhere.
Definition 5.3. We say that a nontrivial solution y(x) of (5.5) is oscillatory (or that it oscillates) if for any number T, y(x) has infinitely many zeros in the interval (T, ∞); or equivalently, for any number τ, there exists a number ξ > τ such that y(ξ) = 0. We also call the equation (5.5) oscillatory if it has an oscillatory solution.
Consider the equation y'' + 4y = 0. Two independent solutions are y1(x) = sin(2x) and y2(x) = cos(2x). Note that y1(x) has three zeros on (0, 2π). Moreover, between two consecutive zeros of y1(x), there is only one zero of y2(x). We have the following general result.
Theorem 5.12 (Sturm Separation Theorem). Let y1(x) and y2(x) be two linearly independent solutions of (5.5), and suppose a and b are two consecutive zeros of y1(x) with a < b. Then y2(x) has exactly one zero in the interval (a, b).
Proof. Notice that y2(a) ≠ 0 ≠ y2(b) (otherwise y1 and y2 would have a common zero and hence their Wronskian would be zero, contradicting the fact that they are linearly independent). Suppose y2(x) ≠ 0 on (a, b). Then y2(x) ≠ 0 on [a, b]. Define h(x) = y1(x)/y2(x). Then h satisfies all the conditions of Rolle's theorem on [a, b]. Hence there exists c ∈ (a, b) such that h'(c) = 0. In other words, W[y1, y2](c)/y2^2(c) = 0. Since y2(c) ≠ 0, W[y1, y2](c) = 0, a contradiction as y1 and y2 are linearly independent. Thus, there exists c ∈ (a, b) such that y2(c) = 0.
We now show the uniqueness. Suppose there exist c1, c2 ∈ (a, b) such that y2(c1) = y2(c2) = 0. Then, by what we have just proved, there would exist a number d between c1 and c2 such that y1(d) = 0, contradicting the fact that a and b are consecutive zeros of y1(x). □
Example 5.17. Show that between any two consecutive zeros of sin(t), there exists only one zero of a1 sin(t) + a2 cos(t), where a1, a2 ∈ R with a2 ≠ 0. To see this, we apply the Sturm Separation Theorem. Note that y1(t) := sin(t) and y2(t) := a1 sin(t) + a2 cos(t) are two solutions of the ODE y''(t) + y(t) = 0. Since W[y1, y2](t) = −a2 ≠ 0, y1 and y2 are linearly independent. Therefore, by Theorem 5.12, between any two consecutive zeros of sin(t) there exists only one zero of a1 sin(t) + a2 cos(t).
In view of Theorem 5.12, we arrive at the following corollary.
Corollary 5.13. If (5.5) has one oscillatory solution, then all of its solutions are oscillatory.
Theorem 6.4. Let λ1[q_i], i = 1, 2, be the first eigenvalues of (p x')' + λ q_i x = 0, x(a) = 0 = x(b). If q1(t) ≤ q2(t) for all t ∈ [a, b], then
λ1[q2] ≤ λ1[q1] .
Let λ1[q] resp. λ̃1[q] be the first eigenvalue of (p x')' + λ q x = 0 resp. (p̃ x')' + λ q x = 0 with boundary conditions x(a) = 0 = x(b). If p(t) ≤ p̃(t) for all t ∈ [a, b], then
λ1[q] ≤ λ̃1[q] .
7. Systems of ODEs
In this section, we study systems of ODEs of the form ~y'(t) = A(t) ~y(t) + ~b(t) with some initial condition, where A(t) = ( a_{ij}(t) )_{n×n} is an n × n matrix with a_{ij} ∈ C(I; R) and ~b ∈ C(I; R^n), and find explicit solutions.
7.1. Constant coefficient homogeneous system: We are interested in ~y'(t) = A ~y(t), where A = ( a_{ij} )_{n×n} is an n × n matrix with real entries. For A = ( a_{ij} )_{n×n} and t ∈ R, we define At = ( t a_{ij} )_{n×n}. By M(n, R) we denote the set of all n × n matrices with real entries. We first show that
e^A = lim_{m→∞} Σ_{k=0}^m A^k / k!
is well-defined. Indeed, for m > n, one has
‖ Σ_{k=0}^m A^k/k! − Σ_{k=0}^n A^k/k! ‖ = ‖ Σ_{k=n+1}^m A^k/k! ‖ ≤ Σ_{k=n+1}^m ‖A‖^k / k! → 0 as m, n → ∞ .
Thus, the sequence { Σ_{k=0}^m A^k/k! }_m is Cauchy. This implies that lim_{m→∞} Σ_{k=0}^m A^k/k! exists and e^A is well-defined. In the above, we have used the following norm: ‖A‖ = sup_{‖~x‖_2 ≤ 1} ‖A ~x‖_2 with ‖~x‖_2 := ( Σ_{i=1}^n x_i^2 )^{1/2}.
Theorem 7.1. For any A ∈ M(n, R), one has ‖e^A‖ ≤ e^{‖A‖}.
Proof. Note that ‖ Σ_{k=0}^n A^k/k! ‖ ≤ Σ_{k=0}^n ‖A‖^k/k! ≤ e^{‖A‖}. Since ‖·‖ is a continuous function, we conclude that ‖e^A‖ ≤ e^{‖A‖}. □
Note that if A and B are two square matrices such that AB = BA, then e^{A+B} = e^A e^B. Hence (e^A)^{−1} = e^{−A}.
Theorem 7.2. Suppose A = C^{−1} B C. Then e^A = C^{−1} e^B C.
Theorem 7.3. Let A ∈ M(n, R) and ~x ∈ R^n. Then the unique solution of the IVP ~y'(t) = A ~y(t), ~y(0) = ~x is given by the formula ~y(t) = e^{At} ~x.
Proof. We have (d/dt) e^{At} = A e^{At}. Note that ~y(0) = I ~x = ~x. Moreover,
~y'(t) = (d/dt) e^{At} ~x = A e^{At} ~x = A ~y(t) , ∀ t ∈ R .
Uniqueness: Let ~y(t) be any solution of the given IVP. Set ~z(t) := e^{−At} ~y(t). Since e^{−At} and A commute with each other and ~y(t) is a solution, we have
~z'(t) = −A e^{−At} ~y(t) + e^{−At} ~y'(t) = 0 ⟹ ~z(t) = ~c .
Since ~z(0) = ~x, we see that ~z(t) = ~c = ~x and hence ~y(t) = e^{At} ~x. This completes the proof. □
Corollary 7.4. The unique solution of the IVP ~y'(t) = A ~y(t), ~y(t0) = ~x is given by the formula ~y(t) = e^{A(t − t0)} ~x.
Example 7.3. Solve the IVP ~y'(t) = A ~y(t) with ~y(0) = (2, 0)^T and A = [ 1 , 2 ; −2 , 1 ]. The solution is given by the formula
~y(t) = e^{At} (2, 0)^T = e^t [ cos(2t) , sin(2t) ; −sin(2t) , cos(2t) ] (2, 0)^T = e^t ( 2 cos(2t) , −2 sin(2t) )^T .
Proof. Note that ~u(t0) = ~0 and ~u(·) is differentiable in I. Differentiating, we have
~u'(t) = ~y(t, t) + ∫_{t0}^t (∂~y/∂t)(t, s) ds = ~b(t) + ∫_{t0}^t A ~y(t, s) ds = ~b(t) + A ∫_{t0}^t ~y(t, s) ds = ~b(t) + A ~u(t) . □
Corollary 7.6. The unique solution of ~y'(t) = A ~y(t) + ~b(t), t ∈ I, ~y(t0) = ~x is given by the formula
~y(t) = e^{A(t − t0)} ~x + ∫_{t0}^t e^{A(t − s)} ~b(s) ds .
Example 7.4. Find the solution of the IVP ~y'(t) = A ~y(t) + ~b(t), ~y(0) = (1, 1)^T, where A = [ 1 , 1 ; 0 , 1 ] and ~b(t) = (t, 0)^T. In this case e^{At} = e^t [ 1 , t ; 0 , 1 ] and hence
∫_0^t e^{A(t − s)} ~b(s) ds = ( ∫_0^t s e^{t − s} ds , 0 )^T = ( e^t − 1 − t , 0 )^T .
Thus, the solution of the given IVP is given by
~y(t) = e^{At} ~y(0) + ∫_0^t e^{A(t − s)} ~b(s) ds = ( 2e^t − 1 − t + t e^t , e^t )^T .
7.3. Non-homogeneous systems of ODEs. We study the non-homogeneous system of the form
~y'(t) = A(t) ~y(t) + ~b(t) , ~y(t0) = ~x ,
where the entries of A(t) and ~b(t) are continuous functions of t on I. We shall first prove the existence of n linearly independent solutions of the associated homogeneous system.
Definition 7.1. Let ~u^1, · · · , ~u^n be n vector-valued functions with ~u^i = ( u^i_1, u^i_2, . . . , u^i_n )^T, 1 ≤ i ≤ n. Their Wronskian W(t) is defined by
W(t) = det
[ u^1_1(t) u^2_1(t) . . . u^n_1(t) ]
[ u^1_2(t) u^2_2(t) . . . u^n_2(t) ]
[ . . . ]
[ u^1_n(t) u^2_n(t) . . . u^n_n(t) ] .
One can easily arrive at the following theorem.
Theorem 7.7. If W(t0) ≠ 0 for some t0 ∈ I, then the vector functions ~u^1, · · · , ~u^n are linearly independent on I.
Moreover, there exist n linearly independent solutions of ~y'(t) = A(t) ~y(t); namely, the n unique solutions of the IVPs
~y'(t) = A(t) ~y(t) , ~y(t0) = ~e_i ,
where ~e_i is the i-th standard basis element of R^n. Furthermore, the unique solution of the IVP
~y'(t) = A(t) ~y(t) , ~y(t0) = ~x := ( x1, x2, . . . , xn )^T
is given by the formula
~y(t) = Σ_{i=1}^n x_i ~u^i(t) ,
where ~u^i is the unique solution of the IVP ~y'(t) = A(t) ~y(t), ~y(t0) = ~e_i, 1 ≤ i ≤ n. Indeed, let ~y(t) be any solution of the given IVP. Consider the function ~z(t) = Σ_{i=1}^n x_i ~u^i(t). Then ~z(t0) = ~x and
d~z/dt = Σ_{i=1}^n x_i d~u^i/dt = Σ_{i=1}^n x_i A(t) ~u^i(t) = A(t) Σ_{i=1}^n x_i ~u^i(t) = A(t) ~z(t) .
Thus, by uniqueness for the IVP, it follows that
~y(t) = ~z(t) = Σ_{i=1}^n x_i ~u^i(t) .  (7.1)
This also shows that ~y(t) is spanned by the n linearly independent solutions ~u^i, 1 ≤ i ≤ n.
Exercise 7.1. Let ~y^i(t), 1 ≤ i ≤ n, be n solutions of the system ~y'(t) = A(t) ~y(t) and t0 ∈ I. Then show that
W(t) = W(t0) e^{∫_{t0}^t Tr(A(s)) ds} ,
where Tr(A(t)) is defined by Tr(A(t)) = Σ_{i=1}^n a_{ii}(t) with A(t) = ( a_{ij}(t) )_{1 ≤ i, j ≤ n}.
Example 7.5. We would like to find two independent solutions of the system ~u'(t) = A(t) ~u(t), where A(t) = [ 0 , 1 ; −1/t^2 , −1/t ], t > 0. Let ~u = (x, y)^T be a solution. Then x' = y and y' = −x/t^2 − y/t. Thus, we have a second order ODE in x, namely
t^2 x'' + t x'(t) + x(t) = 0 , t > 0 .
This is a form of Euler's equation. The general solution is given by
x(t) = C1 cos(log(t)) + C2 sin(log(t)) .
Suppose ~u^1(1) = (1, 0)^T. Then we have ~u^1 = ( cos(log(t)) , −sin(log(t))/t )^T. If ~u^2(1) = (0, 1)^T, then the solution is given by ~u^2(t) = ( sin(log(t)) , cos(log(t))/t )^T. Therefore, two independent solutions of the given system are
~u^1(t) = ( cos(log(t)) , −sin(log(t))/t )^T , ~u^2(t) = ( sin(log(t)) , cos(log(t))/t )^T .
Let us denote the non-singular matrix
Φ(t, t0) := [ ~u^1, ~u^2, . . . , ~u^n ] .
Then, using this notation, we can write the solution of the IVP ~y'(t) = A(t) ~y(t), ~y(t0) = ~x as ~y(t) = Φ(t, t0) ~x.
Definition 7.2. Φ(t, t0) is called the principal fundamental matrix of the linear system ~y'(t) = A(t) ~y(t).
7.3.1. Adjoint system: Let Φ(t) be a fundamental matrix of ~y'(t) = A(t) ~y(t). Then Φ(t) Φ^{−1}(t) = I for all t ∈ I. Hence we have
( d/dt Φ(t) ) Φ^{−1}(t) + Φ(t) d/dt Φ^{−1}(t) = 0
⟹ A(t) Φ(t) Φ^{−1}(t) + Φ(t) d/dt Φ^{−1}(t) = 0
⟹ d/dt Φ^{−1}(t) = −Φ^{−1}(t) A(t) ⟹ d/dt ( Φ^{−1}(t) )^T = −A^T(t) ( Φ^{−1}(t) )^T .
This shows that ( Φ^{−1}(t) )^T is a fundamental matrix for the system ~y'(t) = −A^T(t) ~y(t).
Definition 7.4. The system ~y'(t) = −A^T(t) ~y(t) is called the adjoint system of ~y'(t) = A(t) ~y(t).
Theorem 7.11. Let Φ be a fundamental matrix of ~y'(t) = A(t) ~y(t). Then Ψ is a fundamental matrix for the adjoint system if and only if Ψ^T Φ = C for some non-singular constant matrix C.
Proof. Let Ψ^T Φ = C. Then one has
Φ^T Ψ = C^T ⟹ Ψ = ( Φ^T )^{−1} C^T = ( Φ^{−1} )^T C^T .
We have seen that ( Φ^{−1} )^T is a fundamental matrix of the adjoint equation. Hence, by the previous theorem, Ψ = ( Φ^{−1} )^T C^T is a fundamental matrix of the adjoint system. Conversely, let Ψ be a fundamental matrix of the adjoint system. Define C(t) = Ψ^T(t) Φ(t). Then C(t) is non-singular and differentiable. Moreover,
d/dt C(t) = ( d/dt Ψ(t) )^T Φ(t) + Ψ^T(t) d/dt Φ(t) = ( −A^T(t) Ψ(t) )^T Φ(t) + Ψ^T(t) A(t) Φ(t) = 0 .
Hence C(t) is a constant non-singular matrix. This completes the proof. □
Remark 7.1. If A(t) is skew-symmetric, i.e., A(t) = −A^T(t) for all t ∈ I, then any fundamental matrix Φ of the linear system ~y'(t) = A(t) ~y(t) is also a fundamental matrix of the adjoint system. Thus Φ^T Φ = C, and hence ‖φ~(t)‖_2 is constant for all t ∈ I for every solution φ~.
7.3.2. Variation of parameters: Consider the non-homogeneous system of ODEs
~y'(t) = A(t) ~y(t) + ~b(t) , ~y(t0) = ~y0 .  (7.3)
Let Φ(t, t0) be the principal fundamental matrix for the system ~y'(t) = A(t) ~y(t). Then any solution of ~y'(t) = A(t) ~y(t) is given by ~y(t) = Φ(t, t0) C, where C = ( c1, c2, . . . , cn )^T. Let us try to find a solution of (7.3) in the form
~z(t) = Φ(t, t0) C(t) , C(t) = ( c1(t), c2(t), . . . , cn(t) )^T .
Suppose ~z(t) solves the non-homogeneous system. Then one has
~z'(t) = ( d/dt Φ(t, t0) ) C(t) + Φ(t, t0) d/dt C(t) = A(t) Φ(t, t0) C(t) + Φ(t, t0) d/dt C(t)
= A(t) ~z(t) + Φ(t, t0) d/dt C(t) = ~z'(t) − ~b(t) + Φ(t, t0) d/dt C(t) .
Thus, ~z(t) solves the non-homogeneous system if
d/dt C(t) = Φ(t, t0)^{−1} ~b(t) ⟹ C(t) = C(t0) + ∫_{t0}^t Φ(s, t0)^{−1} ~b(s) ds .
Thus, the solution is given by
~z(t) = Φ(t, t0) ~y0 + Φ(t, t0) ∫_{t0}^t Φ(s, t0)^{−1} ~b(s) ds
= Φ(t, t0) ~y0 + Φ(t, t0) ∫_{t0}^t Φ(t0, s) ~b(s) ds
= Φ(t, t0) ~y0 + ∫_{t0}^t Φ(t, s) ~b(s) ds .
Example 7.7. We solve the non-homogeneous system ~y'(t) = A ~y(t) + ~b(t), where A = [ 1 , 1 ; 0 , 1 ], ~b(t) = (t, 0)^T and ~y(t0) = ~y0 := ( y01, y02 )^T. Note that the principal fundamental matrix of the corresponding homogeneous system is given by
Φ(t, t0) = e^{t − t0} [ 1 , t − t0 ; 0 , 1 ] .
Thus, by variation of parameters, the solution is given by
~y(t) = e^{t − t0} ( y01 + (t − t0) y02 , y02 )^T + e^{t − t0} ( t0 + 1 − (t + 1) e^{t0 − t} , 0 )^T
= e^{t − t0} ( y01 + (t − t0) y02 + t0 + 1 − (t + 1) e^{t0 − t} , y02 )^T .
7.4. Calculation of the principal fundamental matrix: We now recall some basic theorems from linear algebra to calculate the principal fundamental matrix Φ(t, t0) for the system ~y'(t) = A ~y(t).
Theorem 7.12. Let A ∈ M(n, R). Assume that A has n distinct real eigenvalues λ_i, 1 ≤ i ≤ n. Let ~u^i, 1 ≤ i ≤ n, be the corresponding eigenvectors. Then C = [ ~u^1, ~u^2, . . . , ~u^n ] is invertible and C^{−1} A C = diag( λ1, λ2, . . . , λn ). In this case, the principal fundamental matrix Φ(t, t0) is given by the formula
Φ(t, t0) = e^{A(t − t0)} = C diag( e^{λ_j (t − t0)} , 1 ≤ j ≤ n ) C^{−1} .
Example 7.8. We calculate the principal fundamental matrix of the system ~y'(t) = A ~y(t) with A = [ 1 , 2 ; 3 , 2 ]. Note that −1 and 4 are two real distinct eigenvalues of A. Corresponding eigenvectors are ~u^1 = (−1, 1)^T and ~u^2 = (2, 3)^T. Hence C = [ −1 , 2 ; 1 , 3 ], and C^{−1} = (1/5) [ −3 , 2 ; 1 , 1 ]. Thus, the principal fundamental matrix is given by
Φ(t, t0) = [ −1 , 2 ; 1 , 3 ] [ e^{−(t − t0)} , 0 ; 0 , e^{4(t − t0)} ] (1/5) [ −3 , 2 ; 1 , 1 ]
= (1/5) [ 3 e^{−(t−t0)} + 2 e^{4(t−t0)} , −2 e^{−(t−t0)} + 2 e^{4(t−t0)} ; −3 e^{−(t−t0)} + 3 e^{4(t−t0)} , 2 e^{−(t−t0)} + 3 e^{4(t−t0)} ] .
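A quick numerical sanity check of this diagonalization formula (assuming numpy and scipy):

import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 2.0], [3.0, 2.0]])
C = np.array([[-1.0, 2.0], [1.0, 3.0]])        # eigenvectors for -1 and 4
s = 0.7                                         # plays the role of t - t0
Phi = C @ np.diag(np.exp(np.array([-1.0, 4.0]) * s)) @ np.linalg.inv(C)
print(np.allclose(Phi, expm(A * s)))            # True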
Theorem 7.13. Let A ∈ M(2n, R). Assume that A has 2n distinct complex eigenvalues λ1, λ̄1, . . . , λn, λ̄n, where λ_j = a_j + i b_j with b_j ≠ 0. Let ~w_j = ~u_j + i ~v_j, 1 ≤ j ≤ n, be an eigenvector corresponding to λ_j. Let C = [ ~u_1, ~v_1, . . . , ~u_n, ~v_n ]. Then C is invertible, and C^{−1} A C = diag( A_1, A_2, . . . , A_n ), where A_j = [ a_j , −b_j ; b_j , a_j ], 1 ≤ j ≤ n. Moreover, the principal fundamental matrix Φ(t, t0) has the representation
Φ(t, t0) = C e^{diag( A_1 (t − t0), A_2 (t − t0), . . . , A_n (t − t0) )} C^{−1}
= C diag( e^{a_j (t − t0)} [ cos(b_j(t − t0)) , −sin(b_j(t − t0)) ; sin(b_j(t − t0)) , cos(b_j(t − t0)) ] , 1 ≤ j ≤ n ) C^{−1} .
Example 7.9. Find the principal fundamental matrix of the system ~y′(t) = A~y(t) with
\[ A = \begin{pmatrix} 1 & -1 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ 0 & 0 & 3 & -2 \\ 0 & 0 & 1 & 1 \end{pmatrix}. \]
Observe that 1 ± i and 2 ± i are eigenvalues of A; they are 4 distinct complex eigenvalues. Moreover, a corresponding pair of complex eigenvectors are ~w₁ = ~u₁ + i~v₁ = (i, 1, 0, 0)^⊤ and ~w₂ = ~u₂ + i~v₂ = (0, 0, 1 + i, 1)^⊤. Hence
\[ C = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 1 & 0 \end{pmatrix} \]
is invertible with
\[ C^{-1} = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & -1 \end{pmatrix}. \]
Thus, the principal fundamental matrix is given by
\[ \Phi(t,t_0) = C \begin{pmatrix} e^{t-t_0}\cos(t-t_0) & e^{t-t_0}\sin(t-t_0) & 0 & 0 \\ -e^{t-t_0}\sin(t-t_0) & e^{t-t_0}\cos(t-t_0) & 0 & 0 \\ 0 & 0 & e^{2(t-t_0)}\cos(t-t_0) & e^{2(t-t_0)}\sin(t-t_0) \\ 0 & 0 & -e^{2(t-t_0)}\sin(t-t_0) & e^{2(t-t_0)}\cos(t-t_0) \end{pmatrix} C^{-1}. \]
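A similar numerical sketch checks the real canonical form of Theorem 7.13 for this example; the rotation-scaling blocks below follow the sign convention used above.

    import numpy as np
    from scipy.linalg import expm, block_diag

    A = np.array([[1.0, -1.0, 0.0, 0.0],
                  [1.0,  1.0, 0.0, 0.0],
                  [0.0,  0.0, 3.0, -2.0],
                  [0.0,  0.0, 1.0,  1.0]])
    C = np.array([[0.0, 1.0, 0.0, 0.0],         # columns u1, v1, u2, v2
                  [1.0, 0.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0, 1.0],
                  [0.0, 0.0, 1.0, 0.0]])

    def rot(a, b, s):      # e^{a s} [[cos bs, sin bs], [-sin bs, cos bs]]
        return np.exp(a * s) * np.array([[np.cos(b * s), np.sin(b * s)],
                                         [-np.sin(b * s), np.cos(b * s)]])

    s = 0.7                # s = t - t0
    Phi = C @ block_diag(rot(1.0, 1.0, s), rot(2.0, 1.0, s)) @ np.linalg.inv(C)
    print(np.max(np.abs(Phi - expm(A * s))))    # ~1e-14
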
In case A has both real and complex eigenvalues and they are distinct, we have the following result.
Theorem 7.14. Suppose A ∈ M(2n − k, R) has k distinct real eigenvalues λ₁, . . . , λₖ and distinct complex eigenvalues λₖ₊₁, λ̄ₖ₊₁, . . . , λₙ, λ̄ₙ with λⱼ = aⱼ + ibⱼ, bⱼ ≠ 0 for j = k + 1, k + 2, . . . , n. Let ~uⱼ be an eigenvector corresponding to λⱼ, 1 ≤ j ≤ k, and ~wⱼ = ~uⱼ + i~vⱼ be an eigenvector corresponding to λⱼ, k + 1 ≤ j ≤ n. Then C = [~u₁, . . . , ~uₖ, ~uₖ₊₁, ~vₖ₊₁, . . . , ~uₙ, ~vₙ] is invertible and C⁻¹AC = diag(λ₁, . . . , λₖ, Aₖ₊₁, . . . , Aₙ), where
\[ A_j = \begin{pmatrix} a_j & b_j \\ -b_j & a_j \end{pmatrix}, \quad k+1 \le j \le n. \]
Example 7.10. Consider the matrix
\[ A = \begin{pmatrix} 1 & 0 & 0 \\ 1 & 1 & -1 \\ 0 & 1 & 1 \end{pmatrix}. \]
It has distinct eigenvalues λ₁ = 1, λ₂ = 1 + i and λ̄₂ = 1 − i. The corresponding eigenvectors are ~u₁ = (1, 0, 1)^⊤ and ~w₂ = (0, i, 1)^⊤. Hence
\[ C = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix}, \qquad C^{-1} = \begin{pmatrix} 1 & 0 & 0 \\ -1 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}. \]
Thus, the principal fundamental matrix of the system ~y′(t) = A~y(t) is given by
\[ \Phi(t,t_0) = e^{A(t-t_0)} = e^{\,t-t_0}\, C \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos(t-t_0) & \sin(t-t_0) \\ 0 & -\sin(t-t_0) & \cos(t-t_0) \end{pmatrix} C^{-1}
 = e^{\,t-t_0} \begin{pmatrix} 1 & 0 & 0 \\ \sin(t-t_0) & \cos(t-t_0) & -\sin(t-t_0) \\ 1 - \cos(t-t_0) & \sin(t-t_0) & \cos(t-t_0) \end{pmatrix}. \]
Next we will find the exponential matrix e^A when A has multiple eigenvalues.
Definition 7.5. Let A ∈ M(n, R) and let λ be an eigenvalue of multiplicity k ≤ n. Any non-zero solution of (A − λI)ʲ~v = ~0 for some j ∈ {1, 2, . . . , k} is called a generalized eigenvector.
Definition 7.6. A ∈ M(n, R) is said to be nilpotent of order k if Aᵏ = 0 and Aᵏ⁻¹ ≠ 0.
Theorem 7.15. Let A ∈ M(n, R) have real eigenvalues λ₁, λ₂, . . . , λₙ repeated according to multiplicity. Then there exists a basis of generalized eigenvectors of Rⁿ. Let {~u₁, ~u₂, . . . , ~uₙ} be one such basis for Rⁿ. Then C := [~u₁, ~u₂, . . . , ~uₙ] is invertible and C⁻¹SC = diag(λ₁, λ₂, . . . , λₙ), where A = S + N. Furthermore, N = A − S is nilpotent of order k ≤ n, SN = NS, and
\[ e^{A} = e^{N} e^{S} = e^{N}\, C\, \operatorname{diag}\big(e^{\lambda_j},\; 1 \le j \le n\big)\, C^{-1}. \]
Remark 7.2. If λ is an eigenvalue of multiplicity n of A ∈ M(n, R), then S in the above theorem takes the form S = diag(λ, λ, . . . , λ). Hence it is easier to find the principal fundamental matrix:
\[ \Phi(t,t_0) = e^{\lambda(t-t_0)} \sum_{j=0}^{k} \frac{N^j (t-t_0)^j}{j!}. \]
Example 7.11. We will find the principal fundamental matrix for the system ~y′(t) = A~y(t) with
\[ A = \begin{pmatrix} 3 & 1 \\ -1 & 1 \end{pmatrix}. \]
Note that A has an eigenvalue λ = 2 with multiplicity 2. Hence
\[ S = \begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix}, \qquad N = A - S = \begin{pmatrix} 1 & 1 \\ -1 & -1 \end{pmatrix}. \]
It is easy to check that N² = 0. Hence the principal fundamental matrix is given by
\[ \Phi(t,t_0) = e^{A(t-t_0)} = e^{2(t-t_0)} \begin{pmatrix} 1 + (t-t_0) & t-t_0 \\ -(t-t_0) & 1 - (t-t_0) \end{pmatrix}. \]
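In code, the S + N splitting for this example is immediate; the following sketch verifies the claimed nilpotency and the formula e^{As} = e^{2s}(I + Ns).

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[3.0, 1.0], [-1.0, 1.0]])
    S = 2.0 * np.eye(2)
    N = A - S
    assert np.allclose(N @ N, np.zeros((2, 2)))  # N is nilpotent of order 2

    s = 0.9                                      # s = t - t0
    Phi = np.exp(2.0 * s) * (np.eye(2) + N * s)
    print(np.max(np.abs(Phi - expm(A * s))))     # ~1e-15
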
Example 7.12. Consider the system ~y′(t) = A~y(t) with
\[ A = \begin{pmatrix} 1 & 0 & 0 \\ -1 & 2 & 0 \\ 1 & 1 & 2 \end{pmatrix}. \]
One can easily check that A has the eigenvalues λ₁ = 1, λ₂ = λ₃ = 2, and the corresponding eigenvectors are ~u₁ = (1, 1, −2)^⊤ and ~u₂ = (0, 0, 1)^⊤. We therefore must find one generalized eigenvector corresponding to λ = 2, independent of ~u₂, by solving (A − 2I)²~u₃ = ~0. We can choose ~u₃ = (0, 1, 0)^⊤, and hence
\[ C = \begin{pmatrix} 1 & 0 & 0 \\ 1 & 0 & 1 \\ -2 & 1 & 0 \end{pmatrix}, \qquad C^{-1} = \begin{pmatrix} 1 & 0 & 0 \\ 2 & 0 & 1 \\ -1 & 1 & 0 \end{pmatrix}. \]
The matrix S is given by
\[ S = C\,\operatorname{diag}(1, 2, 2)\,C^{-1} = \begin{pmatrix} 1 & 0 & 0 \\ -1 & 2 & 0 \\ 2 & 0 & 2 \end{pmatrix}. \]
Thus, the nilpotent matrix is given by
\[ N = A - S = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ -1 & 1 & 0 \end{pmatrix}, \]
and N² = 0. Thus, the principal fundamental matrix for the given system is given by
\[ \Phi(t,t_0) = \big(I + N(t-t_0)\big)\, C\, \operatorname{diag}\big(e^{t-t_0},\, e^{2(t-t_0)},\, e^{2(t-t_0)}\big)\, C^{-1}
 = \begin{pmatrix} e^{t-t_0} & 0 & 0 \\ e^{t-t_0} - e^{2(t-t_0)} & e^{2(t-t_0)} & 0 \\ -2e^{t-t_0} + \big(2 - (t-t_0)\big)e^{2(t-t_0)} & (t-t_0)e^{2(t-t_0)} & e^{2(t-t_0)} \end{pmatrix}. \]
In the case of multiple complex eigenvalues, we have the following theorem.
Theorem 7.16. Let λ₁, λ̄₁, . . . , λₙ, λ̄ₙ be the eigenvalues of A ∈ M(2n, R), repeated according to multiplicity, with λⱼ = aⱼ + ibⱼ, bⱼ ≠ 0 for j = 1, 2, . . . , n. Then there exists a basis of generalized complex eigenvectors ~wⱼ = ~uⱼ + i~vⱼ, ~w̄ⱼ = ~uⱼ − i~vⱼ, j = 1, 2, . . . , n, such that {~u₁, ~v₁, . . . , ~uₙ, ~vₙ} is a basis for R²ⁿ. Set C = [~u₁, ~v₁, . . . , ~uₙ, ~vₙ]. Then C is invertible, A = S + N, SN = NS, N is nilpotent of order k ≤ 2n, and C⁻¹SC = diag(A₁, A₂, . . . , Aₙ), where
\[ A_j = \begin{pmatrix} a_j & b_j \\ -b_j & a_j \end{pmatrix}, \quad 1 \le j \le n. \]
Example 7.13. Consider the matrix
\[ A = \begin{pmatrix} 0 & -1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & -1 \\ 2 & 0 & 1 & 0 \end{pmatrix}. \]
Observe that it has eigenvalues λ₁ = i and λ̄₁ = −i, each with multiplicity 2. Moreover, one eigenvector is ~w₁ = (0, 0, i, 1)^⊤. We need to find a generalized eigenvector ~w₂. From the equation (A − iI)²~w₂ = ~0, we can choose ~w₂ = (i, 1, 0, 1)^⊤. Thus, the matrix
\[ C = \begin{pmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 1 & 0 \end{pmatrix} \]
is invertible with inverse
\[ C^{-1} = \begin{pmatrix} 0 & -1 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{pmatrix}. \]
The matrix S is computed as
\[ S = C \begin{pmatrix} 0 & 1 & 0 & 0 \\ -1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & -1 & 0 \end{pmatrix} C^{-1} = \begin{pmatrix} 0 & -1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & -1 \\ 1 & 0 & 1 & 0 \end{pmatrix}. \]
The nilpotent matrix is given by
\[ N = A - S = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{pmatrix}. \]
Furthermore, N² = 0. Hence the principal fundamental matrix of the corresponding system is given by
\[ \Phi(t,t_0) = C \begin{pmatrix} \cos(t-t_0) & \sin(t-t_0) & 0 & 0 \\ -\sin(t-t_0) & \cos(t-t_0) & 0 & 0 \\ 0 & 0 & \cos(t-t_0) & \sin(t-t_0) \\ 0 & 0 & -\sin(t-t_0) & \cos(t-t_0) \end{pmatrix} C^{-1}\,\big(I + N(t-t_0)\big). \]
Theorem 7.17. Let λ₁, . . . , λₘ be the real eigenvalues repeated according to multiplicity, and λₘ₊₁, λ̄ₘ₊₁, . . . , λₙ, λ̄ₙ be the complex eigenvalues repeated according to multiplicity. Then there exists a basis {~u₁, . . . , ~uₘ, ~vₘ₊₁, ~uₘ₊₁, . . . , ~vₙ, ~uₙ} of R^{2n−m} such that ~uⱼ is a generalized eigenvector corresponding to λⱼ, 1 ≤ j ≤ m, and ~wⱼ = ~uⱼ + i~vⱼ is a generalized eigenvector corresponding to λⱼ = aⱼ + ibⱼ, j = m + 1, . . . , n. Moreover, C = [~u₁, . . . , ~uₘ, ~vₘ₊₁, ~uₘ₊₁, . . . , ~vₙ, ~uₙ] is invertible and C⁻¹AC = diag(B₁, B₂, . . . , Bᵣ), where the elementary Jordan blocks B = Bⱼ, j = 1, 2, . . . , r, are either of the form
\[ B = \begin{pmatrix} \lambda & 1 & 0 & \cdots & 0 \\ 0 & \lambda & 1 & \cdots & 0 \\ & & \ddots & \ddots & \\ 0 & \cdots & & \lambda & 1 \\ 0 & \cdots & & 0 & \lambda \end{pmatrix} \tag{7.4} \]
for λ one of the real eigenvalues, or of the analogous block form with λ replaced by the 2 × 2 matrix Aⱼ and 1 replaced by the 2 × 2 identity matrix, for a complex pair of eigenvalues.
We shall now examine the relationship between the boundedness of solutions of the perturbed system
\[ \vec u\,'(t) = A\vec u(t) + B(t)\vec u(t), \quad t \in [t_0, \infty), \tag{7.6} \]
and of the linear system ~u′(t) = A~u(t), where A ∈ M(n, R) and t ↦ B(t) is continuous.
Theorem 7.19. Let all the solutions of ~u′(t) = A~u(t) be bounded on [t₀, ∞). Then all solutions of (7.6) are bounded provided
\[ \int_{t_0}^{\infty} \|B(t)\|\, dt < +\infty. \]
Proof. Since all the solutions of ~u′(t) = A~u(t) are bounded on [t₀, ∞), there exists a constant C > 0 such that ‖e^{At}‖ ≤ C for all t ∈ [0, ∞). Let ~u(t₀) = ~x for the problem (7.6). Then, by variation of parameters,
\[ |\vec u(t)| = \Big| e^{A(t-t_0)}\vec x + \int_{t_0}^{t} e^{A(t-s)} B(s)\vec u(s)\, ds \Big| \le C|\vec x| + C \int_{t_0}^{t} \|B(s)\|\,|\vec u(s)|\, ds. \]
Hence by Gronwall's lemma, we get
\[ |\vec u(t)| \le C|\vec x| \exp\Big( C \int_{t_0}^{t} \|B(s)\|\, ds \Big) \le C|\vec x| \exp(M), \quad \forall\, t \in [t_0, \infty), \]
where M := C ∫_{t₀}^{∞} ‖B(s)‖ ds < +∞. This shows that all solutions of (7.6) are bounded. □
Example 7.17. Consider the system ~y′(t) = A~y(t) + B(t)~y(t), where
\[ A = \begin{pmatrix} 0 & 1 \\ -1 & -1 \end{pmatrix}, \qquad B(t) = \begin{pmatrix} 0 & 0 \\ 0 & \frac{1}{(1+t)^2} \end{pmatrix}, \quad t \ge 0. \]
The eigenvalues of A are (−1 ± i√3)/2, and hence all solutions of the linear system ~y′(t) = A~y(t) are bounded. Moreover, it is easy to see that
\[ \lim_{t\to\infty} \int_{0}^{t} \|B(s)\|\, ds = \lim_{t\to\infty} \int_{0}^{t} \frac{ds}{(1+s)^2} < +\infty, \]
so all solutions of the perturbed system are bounded by Theorem 7.19. In fact, they decay to zero. Since both eigenvalues of A have negative real part, there exist M > 0 and α > 0 such that ‖e^{At}‖ ≤ M e^{−αt} for all t ≥ 0. Moreover, since ‖B(t)‖ → 0 as t → ∞, given any C > 0 there exists t_b ≥ t₀ such that
\[ \|B(t)\| \le C, \quad \forall\, t \ge t_b. \]
Thus, multiplying the variation-of-parameters estimate by e^{αt}, we have
\[ e^{\alpha t}|\vec u(t)| \le M e^{\alpha t_0}|\vec x| + M \int_{t_0}^{t_b} \|B(s)\|\,|\vec u(s)|\,e^{\alpha s}\, ds + M \int_{t_b}^{t} \|B(s)\|\,|\vec u(s)|\,e^{\alpha s}\, ds \]
\[ \le M e^{\alpha t_0}|\vec x| + M \int_{t_0}^{t_b} \|B(s)\|\,|\vec u(s)|\,e^{\alpha s}\, ds + CM \int_{t_b}^{t} |\vec u(s)|\,e^{\alpha s}\, ds \le C_0\, e^{MC(t - t_b)}, \quad \forall\, t \in [t_b, \infty), \]
where C₀ := M e^{αt₀}|~x| + M ∫_{t₀}^{t_b} ‖B(s)‖|~u(s)|e^{αs} ds and the last step uses Gronwall's lemma. Since C > 0 is arbitrary, we can choose C such that MC < α, and then |~u(t)| ≤ C₀ e^{MCt_b} e^{(MC−α)t} → 0 as t → ∞. This shows that |~u(t)| → 0 as t → ∞. □
Example 7.18. Consider the linear system ~y′(t) = A~y(t) + B(t)~y(t), where
\[ A = \begin{pmatrix} -1 & 5 \\ 0 & -4 \end{pmatrix}, \qquad B(t) = \begin{pmatrix} e^{-2t} & 0 \\ e^{-t} & \frac{1}{(1+t)^2} \end{pmatrix}, \quad t \ge 0. \]
Since the real parts of all eigenvalues of A are negative (it has the real distinct eigenvalues −1 and −4) and ‖B(t)‖ → 0 as t → ∞, we conclude that all the solutions of the given system converge asymptotically to zero, i.e., |~u(t)| → 0 as t → ∞.
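A short numerical experiment illustrates this conclusion; the initial value (1, 1) below is an arbitrary illustrative choice.

    import numpy as np
    from scipy.integrate import solve_ivp

    A = np.array([[-1.0, 5.0], [0.0, -4.0]])

    def B(t):
        return np.array([[np.exp(-2.0 * t), 0.0],
                         [np.exp(-t), 1.0 / (1.0 + t) ** 2]])

    sol = solve_ivp(lambda t, u: (A + B(t)) @ u, (0.0, 30.0), [1.0, 1.0], rtol=1e-8)
    print(np.linalg.norm(sol.y[:, -1]))   # essentially 0: solutions decay
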
7.5.1. Boundedness and asymptotic behaviour of non-autonomous linear systems: Consider the non-autonomous system
\[ \vec u\,'(t) = A(t)\vec u(t). \tag{7.7} \]
Let ~u(t) be a solution of (7.7). Since |~u(t)|² = ⟨~u(t), ~u(t)⟩, by differentiating we have
\[ \frac{d}{dt}|\vec u(t)|^2 = \langle \vec u\,'(t), \vec u(t)\rangle + \langle \vec u(t), \vec u\,'(t)\rangle = \langle A(t)\vec u(t), \vec u(t)\rangle + \langle \vec u(t), A(t)\vec u(t)\rangle = \big\langle \big(A(t) + A^{\top}(t)\big)\vec u(t),\, \vec u(t)\big\rangle. \]
Note that A(t) + A^⊤(t) is a symmetric matrix and hence all its eigenvalues are real. Define
M(t) := the largest eigenvalue of A(t) + A^⊤(t),  m(t) := the smallest eigenvalue of A(t) + A^⊤(t).
Theorem 7.21. Let t ↦ A(t) ∈ M(n, R) be continuous on [t₀, ∞), and let M(t) and m(t) be defined as above. Then the following hold:
i) If lim_{t→∞} ∫_{t₀}^{t} M(s) ds = −∞, then any solution of (7.7) satisfies lim_{t→∞} |~u(t)| = 0.
ii) If the set { ∫_{t₀}^{t} M(s) ds : t > t₀ } is bounded above, then any solution of (7.7) is bounded.
iii) If lim sup_{t→∞} ∫_{t₀}^{t} m(s) ds = ∞, then every solution of (7.7) with ~u(t₀) ≠ ~0 is unbounded.
Proof. Note that (~u^⊤)′(t) = ~u^⊤(t)A^⊤(t) and |~u(t)|² = ~u^⊤(t)~u(t). Hence, differentiating, we have
\[ \frac{d}{dt}|\vec u(t)|^2 = (\vec u^{\top})'(t)\vec u(t) + \vec u^{\top}(t)\vec u\,'(t) = \vec u^{\top}(t)A^{\top}(t)\vec u(t) + \vec u^{\top}(t)A(t)\vec u(t) = \vec u^{\top}(t)\big(A(t) + A^{\top}(t)\big)\vec u(t). \]
Note that, for any ~x ∈ Rⁿ \ {~0}, one has
\[ m(t) \le \frac{\vec x^{\top}\big(A(t) + A^{\top}(t)\big)\vec x}{\vec x^{\top}\vec x} \le M(t). \]
To study the boundedness and asymptotic behaviour of the perturbed system (7.9), ~u′(t) = [A(t) + B(t)]~u(t), we need the following assumptions:
A.1 lim inf_{t→∞} ∫_{t₀}^{t} Tr(A(s)) ds > −∞ (in particular, this holds if Tr(A(s)) ≡ 0).
A.2 lim_{t→∞} ∫_{t₀}^{t} ‖B(s)‖ ds < +∞.
Theorem 7.22. Let the assumptions A.1 and A.2 hold. If all the solutions of ~u′(t) = A(t)~u(t) are bounded, then all the solutions of (7.9) are also bounded.
Proof. Let Φ(t) be a fundamental matrix of the system ~u′(t) = A(t)~u(t). Since all solutions of ~u′(t) = A(t)~u(t) are bounded, there exists C > 0 such that ‖Φ(t)‖ ≤ C for all t ∈ [t₀, ∞). By Abel's theorem, det(Φ(t)) = det(Φ(t₀)) exp( ∫_{t₀}^{t} Tr(A(s)) ds ), and hence
\[ \Phi^{-1}(t) = \frac{\operatorname{adj}(\Phi(t))}{\det(\Phi(t_0)) \exp\big( \int_{t_0}^{t} \operatorname{Tr}(A(s))\, ds \big)}. \]
Thanks to assumption A.1, we see that ‖Φ⁻¹(t)‖ is bounded on [t₀, ∞). Let us write down the solution of (7.9):
\[ \vec u(t) = \Phi(t, t_0)\vec x + \int_{t_0}^{t} \Phi(t, s)B(s)\vec u(s)\, ds = \Phi(t)\Phi^{-1}(t_0)\vec x + \int_{t_0}^{t} \Phi(t)\Phi^{-1}(s)B(s)\vec u(s)\, ds. \]
Define
\[ C_1 = \max\Big\{ \sup_{t \ge t_0} \|\Phi(t)\|,\; \sup_{t \ge t_0} \|\Phi^{-1}(t)\| \Big\}, \qquad C_0 = C_1^2 |\vec x|. \]
Thus, we have
\[ |\vec u(t)| \le C_0 + C_1^2 \int_{t_0}^{t} \|B(s)\|\,|\vec u(s)|\, ds \;\Longrightarrow\; |\vec u(t)| \le C_0 \exp\Big( C_1^2 \int_{t_0}^{t} \|B(s)\|\, ds \Big) < +\infty \]
by Gronwall's lemma and assumption A.2. This proves the theorem. □
Example 7.21. Consider the linear system ~u′(t) = [A(t) + B(t)]~u(t) with
\[ A(t) = \begin{pmatrix} -\frac{1}{(1+t)^2} & t^2 \\ -t^2 & -\frac{1}{(1+t)^2} \end{pmatrix}, \qquad B(t) = \begin{pmatrix} 0 & 0 \\ 0 & \frac{a}{(1+t)^2} \end{pmatrix}, \quad t \ge 0. \]
Note that the maximum eigenvalue of A(t) + A^⊤(t) is M(t) := −2/(1+t)², so the set { ∫₀ᵗ M(s) ds : t > 0 } is bounded. Thus, by Theorem 7.21 ii), all solutions of the system ~u′(t) = A(t)~u(t) are bounded. If we show that the assumptions A.1 and A.2 hold, then by Theorem 7.22 the given system has bounded solutions. It is easy to check that
\[ \liminf_{t\to\infty} \int_{0}^{t} \operatorname{Tr}(A(s))\, ds = \liminf_{t\to\infty} \Big( -\int_{0}^{t} \frac{2\, ds}{(1+s)^2} \Big) = -2 > -\infty, \]
\[ \lim_{t\to\infty} \int_{0}^{t} \|B(s)\|\, ds = \lim_{t\to\infty} \int_{0}^{t} \frac{|a|\, ds}{(1+s)^2} = |a| < +\infty. \]
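Both limits can also be checked by numerical quadrature; in the sketch below the constant a is set to 1 purely for illustration.

    import numpy as np
    from scipy.integrate import quad

    a = 1.0                                       # illustrative value
    trace_A = lambda s: -2.0 / (1.0 + s) ** 2     # Tr(A(s))
    norm_B = lambda s: abs(a) / (1.0 + s) ** 2    # norm of B(s) (single nonzero entry)

    print(quad(trace_A, 0.0, np.inf)[0])  # -2.0 > -infinity: A.1 holds
    print(quad(norm_B, 0.0, np.inf)[0])   # |a| = 1.0 < +infinity: A.2 holds
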
Theorem 7.23. Under the assumptions A.1 and A.2, if all the solutions of ~u′(t) = A(t)~u(t) converge asymptotically to zero, then the same is true for all solutions of (7.9).
Proof. Asymptotic convergence of the solutions of ~u′(t) = A(t)~u(t) implies that ‖Φ(t)‖ → 0 as t → ∞, where Φ is a fundamental matrix for the system ~u′(t) = A(t)~u(t). Moreover, by using Abel's theorem and assumption A.1, we conclude that ‖Φ⁻¹(t)‖ remains bounded as t → ∞. Let C₁ be defined as in the proof of Theorem 7.22. Now from the representation of the solution of (7.9),
\[ \vec u(t) = \Phi(t)\Phi^{-1}(t_0)\vec x + \int_{t_0}^{t} \Phi(t)\Phi^{-1}(s)B(s)\vec u(s)\, ds, \]
we have
\[ |\vec u(t)| \le \|\Phi(t)\|\Big\{ \|\Phi^{-1}(t_0)\|\,|\vec x| + \int_{t_0}^{t} \|\Phi^{-1}(s)\|\,\|B(s)\|\,|\vec u(s)|\, ds \Big\} \le \|\Phi(t)\|\Big\{ M + C_1 \int_{t_0}^{t} \|B(s)\|\,|\vec u(s)|\, ds \Big\} \]
\[ \Longrightarrow\; \frac{|\vec u(t)|}{\|\Phi(t)\|} \le M + C_1 \int_{t_0}^{t} \|B(s)\|\,\|\Phi(s)\|\,\frac{|\vec u(s)|}{\|\Phi(s)\|}\, ds \qquad (\Phi \text{ is non-singular}) \]
\[ \Longrightarrow\; \frac{|\vec u(t)|}{\|\Phi(t)\|} \le M \exp\Big( C_1 \int_{t_0}^{t} \|\Phi(s)\|\,\|B(s)\|\, ds \Big) \qquad \text{(by Gronwall's lemma)} \]
\[ \Longrightarrow\; \frac{|\vec u(t)|}{\|\Phi(t)\|} \le M \exp\Big( C_1^2 \int_{t_0}^{t} \|B(s)\|\, ds \Big), \]
where M := ‖Φ⁻¹(t₀)‖|~x|. Since ‖Φ(t)‖ → 0 as t → ∞, by assumption A.2 we conclude from the last inequality that |~u(t)| → 0 as t → ∞. This completes the proof. □
8. Stability Analysis:
Consider the initial value problem
\[ \vec u\,' = \vec f(t, \vec u(t)), \quad t \in [t_0, \infty); \qquad \vec u(t_0) = \vec x, \tag{8.1} \]
where we assume that ~f : Ω → Rⁿ is continuous and locally Lipschitz in the second argument. Moreover, we assume that (8.1) has a solution defined on [t₀, ∞). The unique solution is denoted by ~u(t, t₀, ~x).
Definition 8.1. Let ~u(·, t₀, ~x) be a solution of (8.1).
i) It is said to be stable if for every ε > 0, there exists δ = δ(ε, t₀, ~x) > 0 such that
\[ |\vec x - \vec y| < \delta \;\Longrightarrow\; |\vec u(t, t_0, \vec x) - \vec u(t, t_0, \vec y)| < \varepsilon, \quad \forall\, t \ge t_0. \]
ii) It is called asymptotically stable if it is stable and there exists δ₀ > 0 such that for all ~y ∈ B(~x, δ₀), there holds
\[ \lim_{t\to\infty} |\vec u(t, t_0, \vec x) - \vec u(t, t_0, \vec y)| = 0. \]
iii) It is called unstable if it is NOT stable.
Example 8.1. Consider the IVP ~u′ = ~f(t), ~u(t₀) = ~x, where ~f : [t₀, ∞) → Rⁿ is continuous. Then ~u(t, t₀, ~x) = ~x + ∫_{t₀}^{t} ~f(s) ds. Thus,
\[ |\vec u(t, t_0, \vec x) - \vec u(t, t_0, \vec y)| = |\vec x - \vec y|. \]
This shows that ~u(t, t₀, ~x) is stable but NOT asymptotically stable.
Example 8.2. Consider the IVP
\[ u'(t) = a(t)u(t), \quad u(t_0) = x, \quad \text{with } a \in C[t_0, \infty). \]
Then u(t, t₀, x) = x e^{∫_{t₀}^{t} a(s) ds}, and hence
\[ |u(t, t_0, x) - u(t, t_0, y)| = |x - y|\, e^{\int_{t_0}^{t} a(s)\, ds}. \]
i) The solution is stable if there exists M > 0 such that ∫_{t₀}^{t} a(s) ds ≤ M for all t > t₀.
ii) It is unstable if lim_{t→∞} ∫_{t₀}^{t} a(s) ds = ∞.
iii) It is asymptotically stable if lim_{t→∞} ∫_{t₀}^{t} a(s) ds = −∞.
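As a concrete illustration of case iii), take a(t) = −1/(1 + t), whose integral diverges to −∞; the following sketch (with two arbitrary nearby initial values) computes the gap between the corresponding solutions.

    import numpy as np

    t = np.linspace(0.0, 50.0, 6)        # sample times, t0 = 0
    x, y = 1.0, 1.1                      # two nearby initial values
    A_int = np.log(1.0 + t)              # -integral of a(s) ds for a(s) = -1/(1+s)
    gap = np.abs(x - y) * np.exp(-A_int) # |x - y| e^{int a} = |x - y| / (1 + t)
    print(gap)                           # decays to 0: asymptotically stable
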
Theorem 8.1. Let t ↦ A(t) be a continuous function from [t₀, ∞) to M(n, R). Then all solutions ~u(·, t₀, ~x) of the linear system
\[ \vec u\,' = A(t)\vec u(t), \qquad \vec u(t_0) = \vec x \]
are stable if and only if all solutions are bounded.
Proof. Suppose that all solutions are bounded. Since ~u(t, t₀, ~x) = Φ(t, t₀)~x, the norm ‖Φ(t, t₀)‖ is bounded, say by M, for all t ∈ [t₀, ∞), where Φ(t, t₀) is the principal fundamental matrix. Now
\[ |\vec u(t, t_0, \vec x) - \vec u(t, t_0, \vec y)| = |\Phi(t, t_0)\vec x - \Phi(t, t_0)\vec y| \le \|\Phi(t, t_0)\|\,|\vec x - \vec y| \le M|\vec x - \vec y|, \quad \forall\, t \in [t_0, \infty). \]
This shows that all solutions are stable. Conversely, assume that all solutions are stable. In particular, ~u(t, t₀, ~0) ≡ ~0 is stable. Hence, there exists δ > 0 such that
\[ |\vec u(t, t_0, \vec y) - \vec u(t, t_0, \vec 0)| \le 1 \quad \forall\, \vec y \in B(\vec 0, \delta) \;\Longrightarrow\; |\vec u(t, t_0, \vec y)| \le 1 \quad \forall\, \vec y \in B(\vec 0, \delta). \tag{8.2} \]
Let ~x ∈ Rⁿ \ {~0}. Choose ~y = (δ/(2|~x|))~x, so that |~y| = δ/2 and ~y ∈ B(~0, δ). Then, by linearity,
\[ \vec u(t, t_0, \vec x) = \Phi(t, t_0)\vec x = \frac{2|\vec x|}{\delta}\,\Phi(t, t_0)\vec y = \frac{2|\vec x|}{\delta}\,\vec u(t, t_0, \vec y) \;\Longrightarrow\; |\vec u(t, t_0, \vec x)| \le \frac{2|\vec x|}{\delta}, \]
where in the last inequality we have used (8.2). Thus, all solutions are bounded. □
Theorem 8.2. Let t ↦ A(t) be a continuous function from [t₀, ∞) to M(n, R), where A(t) = A + B(t). Then:
a) If the real parts of all multiple eigenvalues of A are negative, the real parts of the simple eigenvalues are non-positive, and ∫_{t₀}^{∞} ‖B(t)‖ dt < +∞, then any solution of ~u′ = A(t)~u(t) is stable.
b) If the real parts of all eigenvalues of A are negative, and ‖B(t)‖ → 0 as t → ∞, then any solution of ~u′ = A(t)~u(t) is asymptotically stable.
Example 8.3. Consider the linear system of equations
\[ u_1' = -u_1 - u_2 + e^{-t}u_3, \qquad u_2' = -u_2 + \frac{1}{1+t}u_4, \]
\[ u_3' = e^{-2t}u_2 - 3u_3 - 2u_4, \qquad u_4' = u_3 - u_4. \]
This can be written as ~u′ = A(t)~u(t) with A(t) = A + B(t), where
\[ A = \begin{pmatrix} -1 & -1 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & -3 & -2 \\ 0 & 0 & 1 & -1 \end{pmatrix}, \qquad B(t) = \begin{pmatrix} 0 & 0 & e^{-t} & 0 \\ 0 & 0 & 0 & \frac{1}{1+t} \\ 0 & e^{-2t} & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}. \]
To find the eigenvalues of A, consider the equation det(A − λI) = 0 and solve it. It is easy to check that the eigenvalues are −1, −1, −2 ± i. Moreover, ‖B(t)‖ ≤ e^{−t} + e^{−2t} + 1/(1+t), and hence ‖B(t)‖ → 0 as t → ∞. Thus, in view of Theorem 8.2, any solution of the given system is asymptotically stable.
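The eigenvalue claim is easy to confirm numerically:

    import numpy as np

    A = np.array([[-1.0, -1.0,  0.0,  0.0],
                  [ 0.0, -1.0,  0.0,  0.0],
                  [ 0.0,  0.0, -3.0, -2.0],
                  [ 0.0,  0.0,  1.0, -1.0]])
    print(np.linalg.eigvals(A))   # -1, -1, -2+1j, -2-1j: all real parts negative
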
Remark 8.1. For nonlinear systems, boundedness and stability are distinct concepts. For example, consider the scalar first order ODE
\[ y'(t) = t^p, \quad p \ge 1, \qquad y(t_0) = y_0. \]
The solution is given by y(t, t₀, y₀) = y₀ + (t^{p+1} − t₀^{p+1})/(p + 1). Hence
\[ |y(t, t_0, y_0) - y(t, t_0, y_0 + \Delta y_0)| = |\Delta y_0| < \varepsilon, \quad \text{if } |\Delta y_0| < \delta := \varepsilon. \]
Thus, the solution is stable. But it is NOT bounded.
8.1. Critical points and their stability: Consider the system ~u′(t) = ~f(~u(t)). If ~x₀ ∈ Ω is an equilibrium point, i.e., ~f(~x₀) = ~0, then ~u(t) ≡ ~x₀ is a solution of the ODE
\[ \vec u\,'(t) = \vec f(\vec u(t)), \quad t > 0; \qquad \vec u(0) = \vec x_0. \]
Conversely, if ~u(t) ≡ ~x₀ is a constant solution, then ~f(~x₀) = ~0.
Definition 8.2. We say that ~x₀ is stable/asymptotically stable/unstable if this constant solution is stable/asymptotically stable/unstable.
For any ~f ∈ C¹(Ω), where Ω is an open subset of Rⁿ, we denote by D~f(~x₀) the matrix
\[ D\vec f(\vec x_0) = \begin{bmatrix} \frac{\partial f_1}{\partial u_1}(\vec x_0) & \frac{\partial f_1}{\partial u_2}(\vec x_0) & \cdots & \frac{\partial f_1}{\partial u_n}(\vec x_0) \\ \frac{\partial f_2}{\partial u_1}(\vec x_0) & \frac{\partial f_2}{\partial u_2}(\vec x_0) & \cdots & \frac{\partial f_2}{\partial u_n}(\vec x_0) \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial f_n}{\partial u_1}(\vec x_0) & \frac{\partial f_n}{\partial u_2}(\vec x_0) & \cdots & \frac{\partial f_n}{\partial u_n}(\vec x_0) \end{bmatrix}. \]
Definition 8.3. A critical point ~x₀ ∈ Ω is said to be hyperbolic if none of the eigenvalues of D~f(~x₀) is purely imaginary (i.e., no eigenvalue has zero real part).
Example 8.4. Consider the nonlinear system
\[ u_1' = -u_1, \qquad u_2' = -u_2 + u_1^2, \qquad u_3' = u_3 + u_1^2. \]
The only equilibrium point of this system is ~0. The matrix D~f(~0) is given by
\[ D\vec f(\vec 0) = \begin{pmatrix} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{pmatrix}. \]
The eigenvalues of D~f(~0) are −1, −1 and 1. Hence the equilibrium point ~0 is hyperbolic.
Theorem 8.3. A hyperbolic equilibrium point ~x₀ is asymptotically stable if all the eigenvalues of D~f(~x₀) have negative real part.
Example 8.5. Consider the nonlinear system
\[ u_1' = -u_1 + u_3^2, \qquad u_2' = u_1^2 - 2u_2, \qquad u_3' = u_1^2 + u_2^3 - 4u_3. \]
Note that ~0 is an equilibrium point of the given system. The matrix D~f(~0) is given by
\[ D\vec f(\vec 0) = \begin{pmatrix} -1 & 0 & 0 \\ 0 & -2 & 0 \\ 0 & 0 & -4 \end{pmatrix}. \]
The eigenvalues of D~f(~0) are −1, −2 and −4; none of them is purely imaginary. Hence ~0 is a hyperbolic equilibrium point. Since all eigenvalues of D~f(~0) have negative real part, we conclude that ~0 is asymptotically stable.
Theorem 8.4. If ~x₀ is a stable equilibrium point of the system ~u′(t) = ~f(~u(t)), then no eigenvalue of D~f(~x₀) has positive real part.
Example 8.6. Consider the non-linear system x′(t) = x(y − 1), y′(t) = x − y − 1. Note that (2, 1)^⊤ is a critical point. The Jacobian matrix at (2, 1)^⊤ is
\[ A = \begin{pmatrix} 0 & 2 \\ 1 & -1 \end{pmatrix}. \]
The eigenvalues of A are given by λ = (−1 ± 3)/2, i.e., λ = 1 and λ = −2. Since one of the eigenvalues of A has positive real part, we conclude that (2, 1)^⊤ is an unstable equilibrium point.
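This test is easily automated; the sketch below evaluates the analytically computed Jacobian of Example 8.6 at the critical point.

    import numpy as np

    def Df(x, y):                         # Jacobian of f(x, y) = (x(y - 1), x - y - 1)
        return np.array([[y - 1.0, x],
                         [1.0, -1.0]])

    print(np.linalg.eigvals(Df(2.0, 1.0)))   # [ 1., -2.]: a positive eigenvalue, unstable
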
Remark 8.2. Hyperbolic equilibrium points are either asymptotically stable or unstable.
8.2. Liapunov functions and stability. The stability of non-hyperbolic equilibrium points is typically more difficult to determine. A method due to Liapunov, described below, is very useful for deciding the stability of non-hyperbolic equilibrium points.
Definition 8.4. Let ~f ∈ C¹(Ω), V ∈ C¹(Ω; R), and let ~φ_t(~x) be the flow of the system ~u′(t) = ~f(~u(t)), i.e., ~φ_t(~x) = ~u(t, ~x). Then, for any ~x ∈ Ω, the derivative of the function V along the solution ~u(t, ~x) is given by
\[ \dot V(\vec x) = \frac{d}{dt}\Big|_{t=0} V(\vec\varphi_t(\vec x)) = \nabla V(\vec x)\cdot \vec u\,'(0, \vec x) = \nabla V(\vec x)\cdot \vec f(\vec x). \]
Theorem 8.5. Let ~x₀ ∈ Ω be an equilibrium point of ~u′(t) = ~f(~u(t)), and let V ∈ C¹(Ω; R) satisfy V(~x₀) = 0 and V(~x) > 0 for all ~x ∈ Ω \ {~x₀}. Then:
i) if V̇(~x) ≤ 0 for all ~x ∈ Ω, then ~x₀ is stable;
ii) if V̇(~x) < 0 for all ~x ∈ Ω \ {~x₀}, then ~x₀ is asymptotically stable;
iii) if V̇(~x) > 0 for all ~x ∈ Ω \ {~x₀}, then ~x₀ is unstable.
Proof of i): Choose ε > 0 with cl(B(~x₀, ε)) ⊂ Ω and set m_ε := min_{|~x−~x₀|=ε} V(~x). Since V(~x₀) = 0 and V(~x) > 0 for all ~x ∈ Ω \ {~x₀}, we have m_ε > 0. Moreover, by continuity of V(·), there exists δ > 0 such that
\[ |V(\vec x)| < \frac{m_\varepsilon}{2} \quad \forall\, |\vec x - \vec x_0| < \delta. \tag{8.3} \]
We claim that, for ~x ∈ B(~x₀, δ),
\[ |\vec u(t, \vec x) - \vec x_0| < \varepsilon, \quad \forall\, t \ge 0. \]
Suppose this is not true. Then there exists t₁ > 0 such that |~u(t₁, ~x) − ~x₀| ≥ ε. Since t ↦ |~u(t, ~x) − ~x₀| is continuous, there exists t₂ ∈ [0, t₁] such that |~u(t₂, ~x) − ~x₀| = ε. Thus, using the monotonicity of V(~u(t, ~x)) in the time variable and (8.3), we get
\[ m_\varepsilon \le V(\vec u(t_2, \vec x)) \le V(\vec u(0, \vec x)) = V(\vec x) < \frac{m_\varepsilon}{2}, \]
a contradiction! Thus, ~x₀ is a stable equilibrium point.
Proof of ii): In this case t ↦ V(~u(t, ~x)) is strictly decreasing when ~x ≠ ~x₀. Choose ε > 0 such that cl(B(~x₀, ε)) ⊂ Ω. Then, from i), there exists δ > 0 such that whenever |~x − ~x₀| < δ, we have |~u(t, ~x) − ~x₀| < ε for all t ≥ 0. We claim that
\[ \lim_{t\to\infty} |\vec u(t, \vec x) - \vec x_0| = 0. \]
We show this via the sequential criterion. Let {t_k} ⊂ [0, ∞) be strictly increasing with t_k → ∞. We then show that
\[ \vec u(t_k, \vec x) \to \vec x_0 \quad \text{as } k \to \infty. \]
Note that ~u(t, ~x) ∈ cl(B(~x₀, ε)) ⊂ Ω for all t ≥ 0. This implies that, up to a subsequence, ~u(t_k, ~x) → ~y₀ with ~y₀ ∈ cl(B(~x₀, ε)) ⊂ Ω. It remains to show that ~y₀ = ~x₀. Suppose ~y₀ ≠ ~x₀. Then, by assumption, V̇(~y₀) < 0. Since t ↦ V(~u(t, ~x)) is strictly decreasing and ~u(t_k, ~x) → ~y₀, we infer that V(~u(t_k, ~x)) decreases strictly to V(~y₀), i.e.,
\[ V(\vec y_0) < V(\vec u(t_k, \vec x)) \quad \forall\, k. \tag{8.4} \]
Since ~y₀ ≠ ~x₀, we have V(~y₀) = V(~u(0, ~y₀)) > V(~u(t, ~y₀)) for t > 0, and hence, for all ~y sufficiently close to ~y₀, we have V(~y₀) > V(~u(t, ~y)). Hence, for large k ∈ N, we have
\[ V(\vec y_0) > V(\vec u(t, \vec u(t_k, \vec x))) = V(\vec u(t + t_k, \vec x)). \]
Take t = t_{k+1} − t_k. Then we have V(~y₀) > V(~u(t_{k+1}, ~x)), a contradiction to (8.4). Hence ~y₀ = ~x₀, and this completes the proof of ii).
Proof of iii): In view of the given assumption, t ↦ V(~u(t, ~x)) is strictly increasing when ~x ≠ ~x₀. We want to show that ~x₀ is unstable. Suppose it is stable. Let ε > 0 be such that cl(B(~x₀, ε)) ⊂ Ω. Then there exists δ > 0 such that
\[ |\vec x - \vec x_0| < \delta \;\Longrightarrow\; |\vec u(t, \vec x) - \vec x_0| < \varepsilon, \quad \forall\, t \ge 0. \]
Fix ~x ≠ ~x₀ such that |~x − ~x₀| < δ. Since V(~x) > 0 and V(·) is continuous, we can choose δ₁ > 0 such that whenever |~y − ~x₀| < δ₁, V(~y) < V(~x)/2. We claim that |~u(t, ~x) − ~x₀| ≥ δ₁ for all t ≥ 0: indeed, since V(~u(t, ~x)) ≥ V(~u(0, ~x)) = V(~x), the trajectory cannot enter B(~x₀, δ₁), where V < V(~x)/2. Hence α := min{ V̇(~y) : δ₁ ≤ |~y − ~x₀| ≤ ε } > 0 along the trajectory. Note that
\[ \dot V(\vec u(\tau, \vec x)) = \frac{d}{dt}\Big|_{t=0} V(\vec u(t, \vec u(\tau, \vec x))) = \frac{d}{dt}\Big|_{t=0} V(\vec u(t + \tau, \vec x)) = \nabla V(\vec u(\tau, \vec x))\cdot \vec u\,'(\tau, \vec x) = \frac{\partial}{\partial\tau} V(\vec u(\tau, \vec x)) \]
\[ \Longrightarrow\; V(\vec u(t, \vec x)) - V(\vec u(0, \vec x)) = \int_0^t \frac{\partial}{\partial\tau} V(\vec u(\tau, \vec x))\, d\tau = \int_0^t \dot V(\vec u(\tau, \vec x))\, d\tau \ge \alpha t \]
\[ \Longrightarrow\; V(\vec u(t, \vec x)) \ge \alpha t + V(\vec x), \quad \forall\, t \ge 0. \]
Let M_ε := max_{|~y − ~x₀| ≤ ε} V(~y). Then we have
\[ M_\varepsilon \ge V(\vec u(t, \vec x)) \ge V(\vec x) + \alpha t \to \infty \quad \text{as } t \to \infty, \]
a contradiction. Thus ~x₀ is unstable. This completes the proof. □
Remark 8.3. A function V satisfying the assumptions of Theorem 8.5 is called a Liapunov function.
Example 8.7. Consider the linear system
\[ x_1' = -4x_1 - 2x_2, \qquad x_2' = x_1. \]
Note that (0, 0) is the only equilibrium point of the given system. Let V : R² → R be the C¹ function defined by
\[ V(x_1, x_2) = c_1 x_1^2 + c_2 x_2^2, \qquad c_1, c_2 \in \mathbb{R}^+. \]
Since V(0, 0) = 0 and V(x₁, x₂) > 0 for (x₁, x₂) ≠ (0, 0), V is a candidate Liapunov function. Now
\[ \dot V(x_1, x_2) = \nabla V(x_1, x_2)\cdot \vec f(x_1, x_2) = -8c_1 x_1^2 + (2c_2 - 4c_1)x_1 x_2. \]
Choose c₁, c₂ > 0 such that c₂ = 2c₁. Taking c₁ = 1, we have c₂ = 2, and hence the Liapunov function takes the form V(x₁, x₂) = x₁² + 2x₂². Note that V̇(x₁, x₂) = −8x₁² ≤ 0 for all ~x ∈ R². Thus, (0, 0) is a stable equilibrium point.
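One can also watch V decrease along a simulated trajectory; in the following sketch the initial value (1, 1) is an arbitrary illustrative choice.

    import numpy as np
    from scipy.integrate import solve_ivp

    f = lambda t, x: [-4.0 * x[0] - 2.0 * x[1], x[0]]
    V = lambda x: x[0] ** 2 + 2.0 * x[1] ** 2

    sol = solve_ivp(f, (0.0, 10.0), [1.0, 1.0], dense_output=True, rtol=1e-9)
    vals = V(sol.sol(np.linspace(0.0, 10.0, 200)))
    print(np.all(np.diff(vals) <= 1e-7))    # True: V is non-increasing along trajectories
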
Example 8.8. Consider the nonlinear system
\[ x_1' = x_2 - x_1 x_2, \qquad x_2' = -x_1 + x_1^2. \]
The origin is an equilibrium point. Consider the Liapunov function V(x₁, x₂) = x₁² + x₂². Then V̇(x₁, x₂) = 0 for all (x₁, x₂) ∈ R². Thus, (0, 0) is a stable equilibrium point. Furthermore, since (d/dt)V(x₁(t), x₂(t)) = 0, we have V(x₁(t), x₂(t)) = c² for some constant c, i.e., the trajectories of this system lie on the circles x₁² + x₂² = c². Hence (0, 0) is NOT an asymptotically stable equilibrium point.
Example 8.9. Consider the second order differential equation x″ + q(x) = 0, where q : R → R is a continuous function such that x q(x) > 0 for all x ≠ 0. This can be written as the system
\[ x_1' = x_2, \qquad x_2' = -q(x_1), \quad \text{where } x_1 = x. \]
The total energy of the system (the sum of the kinetic energy ½(x′)² and the potential energy) is
\[ V(\vec x) = \frac{x_2^2}{2} + \int_0^{x_1} q(s)\, ds. \]
Note that (0, 0) is an equilibrium point, and V(0, 0) = 0. Moreover, since x q(x) > 0 for all x ≠ 0, it is easy to check that V(x₁, x₂) > 0 for all (x₁, x₂) ∈ R² \ {~0}. Therefore, V is a Liapunov function. Now
\[ \dot V(x_1, x_2) = (q(x_1), x_2)\cdot(x_2, -q(x_1)) = 0. \]
The solution curves are given by V(~x) = c, i.e., the energy is constant on the solution curves or trajectories of this system. Hence the origin is a stable equilibrium point.
Example 8.10. Consider the nonlinear system
\[ x_1' = -x_2 + x_1^3 + x_1 x_2^2, \qquad x_2' = x_1 + x_2^3 + x_2 x_1^2. \]
Note that (0, 0) is an equilibrium point. Consider the Liapunov function V(x₁, x₂) = x₁² + x₂². Then
\[ \dot V(x_1, x_2) = (2x_1, 2x_2)\cdot(-x_2 + x_1^3 + x_1 x_2^2,\; x_1 + x_2^3 + x_2 x_1^2) = 2(x_1^2 + x_2^2)^2 > 0, \quad \forall\, (x_1, x_2) \in \mathbb{R}^2 \setminus \{\vec 0\}. \]
Thus, by Theorem 8.5, we conclude that (0, 0) is an unstable equilibrium point.
Example 8.11. Let f(x) be an even polynomial and g(x) an odd polynomial in x. Consider the 2nd order ODE y″ + f(y)y′ + g(y) = 0. This can be written as
\[ x_1' = x_2 - F(x_1), \qquad x_2' = -g(x_1), \quad \text{where } F(x) = \int_0^x f(s)\, ds. \]
To see this, let x₁ = y. Then, from the equation, we have
\[ x_1'' + \frac{d}{dt}F(x_1) + g(x_1) = 0 \;\Longrightarrow\; \frac{d}{dt}\big[x_1' + F(x_1)\big] = -g(x_1). \]
Set x₂ = x₁′ + F(x₁). Then we have x₁′ = x₂ − F(x₁), x₂′ = −g(x₁).
Let G(x) = ∫₀ˣ g(s) ds, and suppose that G(x) > 0 and g(x)F(x) > 0 in a deleted neighborhood of the origin. Then the origin is an asymptotically stable equilibrium point. Indeed, consider the Liapunov function, in a neighborhood of (0, 0),
\[ V(x_1, x_2) = \int_0^{x_1} g(s)\, ds + \frac{1}{2}x_2^2. \]
Note that V(0, 0) = 0 and V(x₁, x₂) > 0 in a deleted neighborhood of (0, 0). Moreover,
\[ \dot V(x_1, x_2) = (g(x_1), x_2)\cdot(x_2 - F(x_1),\, -g(x_1)) = -g(x_1)F(x_1) < 0. \]
Hence the origin is an asymptotically stable equilibrium point. Note that if we instead assume that G(x) > 0 and g(x)F(x) < 0 in a deleted neighborhood of the origin, then the origin is an unstable equilibrium point.
9. Phase-plane analysis:
Consider a second order autonomous equation
\[ x'' + V'(x) = 0, \tag{9.1} \]
where V : R → R is a smooth function. Setting y = x′, equation (9.1) can be written as
\[ x' = y, \qquad y' = -V'(x). \tag{9.2} \]
We assume that solutions of the above problem exist for all t ∈ R.
Remark 9.1. If x(t) is a solution of (9.1), then x(t + h) is also a solution for any h ∈ R.
Since the system (9.2) is nonlinear, explicit solutions are hard to obtain. Phase plane analysis is an analytical approach that provides several types of information about the behaviour of the solutions of the underlying system without solving the equation explicitly.
The (x, y) plane is called the phase plane, and the study of the system (9.2) in it is called phase plane analysis. Note that the system (9.2) can be written as
\[ x' = H_y(x, y); \qquad y' = -H_x(x, y), \]
where
\[ H(x, y) = \frac{1}{2}y^2 + V(x). \]
This is the total energy of (9.2).
Definition 9.1. Let H(x, y) be a differentiable function on R². The autonomous system
\[ x' = H_y(x, y), \qquad y' = -H_x(x, y) \tag{9.3} \]
is called a Hamiltonian system and H is called the Hamiltonian.
Lemma 9.1. If (x(t), y(t)) is a solution of (9.3), then there exists c ∈ R such that H(x(t), y(t)) = c.
Proof. Let (x(t), y(t)) be a solution of (9.3). Then, by the chain rule,
\[ \frac{d}{dt}H(x(t), y(t)) = H_x(x, y)x' + H_y(x, y)y' = H_x H_y - H_y H_x = 0 \;\Longrightarrow\; H(x(t), y(t)) = c. \]
□
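Lemma 9.1 is easy to observe numerically. The sketch below takes V(x) = −cos x (the pendulum, chosen purely for illustration) and checks that H stays constant along a computed solution.

    import numpy as np
    from scipy.integrate import solve_ivp

    H = lambda x, y: 0.5 * y ** 2 - np.cos(x)    # Hamiltonian with V(x) = -cos(x)
    rhs = lambda t, z: [z[1], -np.sin(z[0])]     # x' = H_y = y,  y' = -H_x = -sin(x)

    sol = solve_ivp(rhs, (0.0, 20.0), [1.0, 0.0], rtol=1e-10, atol=1e-12)
    E = H(sol.y[0], sol.y[1])
    print(E.max() - E.min())                     # ~1e-9: H is constant along the solution
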
Remark 9.2. In view of the above lemma, we say that the system (9.2) is a conservative system. Define
\[ E_c = \{(x, y) \in \mathbb{R}^2 : H(x, y) = c\} = \Big\{(x, y) \in \mathbb{R}^2 : \frac{1}{2}y^2 + V(x) = c\Big\}. \]
Note that if (x(t), y(t)) solves (9.3), then (x(t), y(t)) ∈ E_c for all t. Moreover, E_c ≠ ∅ if and only if V(x) ≤ c for some x.
Example 9.1. Let H(x, y) = Ax² + Bxy + Cy². Then (0, 0) is the only equilibrium point.
• If c ≠ 0, then the curve E_c is a conic. Precisely:
i) If B² − 4AC < 0 and c > 0, then E_c is an ellipse.
ii) If B = 0, A = C and c > 0, then E_c is a circle.
iii) If B² − 4AC > 0 and c ≠ 0, then E_c is a hyperbola.
• If c = 0 or B² = 4AC, then the conic can be a pair of straight lines or it reduces to a point.
Remark 9.3. The points (x*, y*) ∈ R² such that H_x(x*, y*) = 0 = H_y(x*, y*) are precisely the equilibria of the Hamiltonian system (9.3).
Remark 9.4. E_c does not contain equilibria of (9.3) if and only if H_x and H_y do NOT vanish simultaneously on E_c.
9.1. Periodic solutions: Denote by (x(t), y(t)) the solution of the Hamiltonian system (9.2) such that H(x(t), y(t)) = c. We are interested in the periodicity of the solutions of (9.2). A solution (x(t), y(t)) of (9.2) is periodic with period T > 0 if
\[ x(t + T) = x(t), \qquad y(t + T) = y(t), \quad \forall\, t \in \mathbb{R}. \]
By a periodic solution, we mean a non-trivial periodic solution, namely a periodic solution which is NOT an equilibrium solution. From the set E_c, we have y = ±√(2c − 2V(x)). It is clear that if x = x(t), y = y(t) is a periodic solution of (9.2), then E_c is a closed bounded curve.
Theorem 9.2. Suppose that E_c ≠ ∅ is a compact curve (closed and bounded) that does not contain equilibria of (9.2). Then (x(t), y(t)) is a periodic solution of (9.2).
Example 9.3. Consider the IVP
\[ x' = 2x + 3y, \quad x(0) = 0; \qquad y' = -3x - 2y, \quad y(0) = 1. \]
It is a Hamiltonian system with Hamiltonian
\[ H(x, y) = \frac{3}{2}x^2 + 2xy + \frac{3}{2}y^2 := Ax^2 + Bxy + Cy^2. \]
The curve E_c has equation (3/2)x² + 2xy + (3/2)y² = c. Since x(0) = 0 and y(0) = 1, we get c = 3/2. Thus the curve E_c is an ellipse (as B² − 4AC = −5 < 0 and c = 3/2 > 0). Note that it does not contain the equilibrium point (0, 0). Hence the solution of the IVP is periodic.
Example 9.4. Consider the IVP
\[ x' = x - 6y, \quad x(0) = 1; \qquad y' = -2x - y, \quad y(0) = 0. \]
This is a Hamiltonian system with Hamiltonian
\[ H(x, y) = x^2 + xy - 3y^2 := Ax^2 + Bxy + Cy^2. \]
The equation of the curve E_c is x² + xy − 3y² = c. From the initial conditions, we get c = 1. Thus the conic E_c is a hyperbola (B² − 4AC = 13 > 0), and hence the solution is unbounded.
Consider a second order autonomous ODE x″ = f(x). It can be re-written as a system of the form (9.2). Hence it is a Hamiltonian system with Hamiltonian
\[ H(x, y) = \frac{1}{2}y^2 - F(x), \quad \text{where } F'(x) = f(x). \]
Thus, if E_c = {H = c} is a compact curve and does not contain any zeros of f, then it carries a periodic solution of the equation x″ = f(x).
Theorem 9.5. Let h₀ = h(P₀). Then for every γ > h₀, the system (9.5) has a periodic solution (x(t), y(t)) such that h(x(t), y(t)) = γ.
Proof. Since P₀ is the global minimum of h, the level set {h(x, y) = γ} is, for every γ > h₀, a non-empty closed curve around P₀. Note that P₀ is the unique equilibrium point of (9.5). Thus, repeating the arguments carried out to prove the existence of periodic solutions of (9.2), one can show easily that the curve {h(x, y) = γ} corresponds to a periodic solution of (9.5). □
Let (x(t), y(t)) be a T-periodic solution of (9.5), which exists thanks to Theorem 9.5. Define the average sizes of the prey and predator populations as
\[ \bar x = \frac{1}{T}\int_0^T x(t)\, dt; \qquad \bar y = \frac{1}{T}\int_0^T y(t)\, dt. \]
Theorem 9.6. If (x(t), y(t)) is a T-periodic solution of (9.5), then x̄ = x₀ and ȳ = y₀.
Proof. Since y′ = y(dx − c), one has y′/y = dx − c. Integrating from 0 to T, we have
\[ d\int_0^T x(t)\, dt - cT = \int_0^T \frac{y'}{y}\, dt = \ln(y(T)) - \ln(y(0)) = 0 \quad (\text{since } y \text{ is } T\text{-periodic}) \]
\[ \Longrightarrow\; d\bar x - c = 0 \;\Longrightarrow\; \bar x = \frac{c}{d} = x_0. \]
Similarly, by using the first equation of (9.5), we can show that ȳ = a/b = y₀. □
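Theorem 9.6 can also be illustrated numerically. The sketch below assumes (9.5) is the Lotka-Volterra system x′ = x(a − by), y′ = y(dx − c), as the proof indicates; the parameter values and initial data are illustrative choices, and averaging over a long time window approximates the average over one period.

    import numpy as np
    from scipy.integrate import solve_ivp

    a, b, c, d = 1.0, 0.5, 1.0, 0.25             # illustrative parameters
    rhs = lambda t, z: [z[0] * (a - b * z[1]), z[1] * (d * z[0] - c)]

    T = 400.0                                     # many periods
    ts = np.linspace(0.0, T, 40001)
    sol = solve_ivp(rhs, (0.0, T), [2.0, 1.0], t_eval=ts, rtol=1e-10, atol=1e-12)
    print(sol.y[0].mean(), c / d)                 # average prey ~ x0 = 4
    print(sol.y[1].mean(), a / b)                 # average predator ~ y0 = 2
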