
DIFFERENTIAL EQUATIONS (2302-MTL102)

ANANTA KUMAR MAJEE

1. Introduction: ODEs
A differential equation is an equation involving an unknown function and its derivatives. In general the unknown function may depend on several variables and the equation may involve various partial derivatives.
Definition 1.1. A differential equation involving only ordinary derivatives is called an ordinary differential equation (ODE).
• The most general ODE has the form
$$ F\big(x, y, y', \ldots, y^{(n)}\big) = 0, \qquad (1.1) $$
where $F$ is a given function of $(n+2)$ variables and $y = y(x)$ is an unknown function of a real variable $x$.
• The maximum order $n$ of the derivative $y^{(n)}$ appearing in (1.1) is called the order of the ODE.
Applications: Differential equations play a central role not only in mathematics but also in almost all areas of science and engineering, economics, and the social sciences:
• Flow of current in a conductor: Consider an RC circuit with resistance $R$ and capacitance $C$, with no external current source. Let $x(t)$ be the capacitor voltage and $I(t)$ the current circulating in the circuit. Then according to Kirchhoff's law, $R\,I(t) + x(t) = 0$. Moreover, the constitutive law of the capacitor yields $I(t) = C\,\frac{dx(t)}{dt}$. Hence we get the first order differential equation
$$ x'(t) + \frac{x(t)}{RC} = 0. $$
• Population dynamics: Let $x(t)$ be the number of individuals of a population at time $t$, $b$ the birth rate and $d$ the death rate of the population. Then according to the simple "Malthus model", the growth rate of the population is proportional to the number of newborn individuals minus the number of deaths. Hence we get the first order ODE
$$ x'(t) = kx(t), \quad \text{where } k = b - d. $$
• An example of a second order equation is $y'' + y = 0$, which arises naturally in the study of electrical and mechanical oscillations.
• Motion of a missile; the behaviour of a mixture; the spread of disease; etc.
Definition 1.2. A function $y : I \to \mathbb{R}$, where $I \subset \mathbb{R}$ is an open interval, is said to be a solution of the $n$-th order ODE if it is $n$-times differentiable and satisfies (1.1) for all $x \in I$.
Example 1.1. Consider the ODE $y' = y$. Let us first find all positive solutions. Note that $\frac{y'}{y} = (\ln y)'$, and therefore we obtain $(\ln y)' = 1$. Thus
$$ \ln y = x + C \implies y = C_1 e^x, \quad \text{where } C_1 = e^C > 0, \; C \in \mathbb{R}. $$
If $y(x) < 0$ for all $x$, then use $\frac{y'}{y} = (\ln(-y))'$ to obtain $y = -C_1 e^x$ with $C_1 > 0$. Combining these two cases (together with the trivial solution $y \equiv 0$), we obtain that any solution $y(x)$ has the form
$$ y(x) = Ce^x, \quad C \in \mathbb{R}. $$

2. Certain classes of nonlinear first order ODE


An $n$-th order linear ODE is a relation of the form
$$ a_n(t)u^{(n)}(t) + a_{n-1}(t)u^{(n-1)}(t) + \cdots + a_1(t)u'(t) + a_0(t)u(t) = b(t) \quad \forall t \in I, \text{ with } a_n \neq 0. $$
Definition 2.1. We say that the linear ODE is homogeneous if $b(t) = 0$ for all $t \in I$. Otherwise we say that it is non-homogeneous.
Theorem 2.1. Consider the linear homogeneous ODE
$$ u^{(m)}(t) + a_{m-1}(t)u^{(m-1)}(t) + \cdots + a_1(t)u'(t) + a_0(t)u(t) = 0, \quad t \in I. \qquad (2.1) $$
Let $X = \big\{ u : I \to \mathbb{R} \,:\, u \text{ is a solution of } (2.1) \big\}$. Then $X$ is a real vector space with the usual addition of functions and scalar multiplication by real numbers.
2.1. 1st order linear ODE. Let us first consider the linear ODE of 1st order of the form
$$ y' + \alpha(x)y = b(x), \qquad (2.2) $$
where $\alpha$ and $b$ are given functions defined on $I$. A linear ODE can be solved as follows:
Theorem 2.2 (The method of variation of parameters). For $\alpha, b \in C(I)$, the general solution of (2.2) has the form
$$ y(x) = e^{-A(x)}\Big[ C + \int b(x)e^{A(x)}\,dx \Big], $$
where $A(x)$ is a primitive of $\alpha(x)$ on $I$, i.e., $A'(x) = \alpha(x)$.
Proof. We want to find a differentiable function $\mu(x) > 0$ such that
$$ \mu(x)y'(x) + \mu(x)\alpha(x)y(x) = \big( \mu(x)y(x) \big)'. $$
Note that $\mu(x) = e^{A(x)}$ does this job (check!). This $\mu(x)$ is called an integrating factor. Let us make the change of the unknown function
$$ u(x) = y(x)e^{A(x)}, \qquad y(x) = u(x)e^{-A(x)}. $$
Substituting this in the given ODE, we obtain
$$ \big( ue^{-A} \big)' + \alpha u e^{-A} = b \implies u'e^{-A} - ue^{-A}A' + \alpha u e^{-A} = b. $$
Since $A' = \alpha$, we have
$$ u' = be^{A} \implies u(x) = C + \int b(x)e^{A(x)}\,dx \implies y(x) = e^{-A(x)}\Big[ C + \int b(x)e^{A(x)}\,dx \Big]. \qquad \square $$
Example 2.1. Find the general solution of $x'(t) + 4tx(t) = 8t$.
Here $\alpha(t) = 4t$, and hence we have $A(t) = 2t^2$. Therefore, using the method of variation of parameters, the general solution of the given ODE is given by
$$ x(t) = e^{-2t^2}\Big[ C + \int 8te^{2t^2}\,dt \Big] = e^{-2t^2}\Big[ C + 2\int \frac{d}{dt}\big( e^{2t^2} \big)\,dt \Big] = 2 + Ce^{-2t^2}. $$
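The computation above is easy to cross-check symbolically. The following is a minimal sketch (not part of the original notes, and assuming sympy is available) that should reproduce $x(t) = 2 + Ce^{-2t^2}$:

    import sympy as sp

    t = sp.symbols('t')
    x = sp.Function('x')

    # the ODE x'(t) + 4 t x(t) = 8 t from Example 2.1
    ode = sp.Eq(x(t).diff(t) + 4*t*x(t), 8*t)
    print(sp.dsolve(ode, x(t)))  # expected: Eq(x(t), C1*exp(-2*t**2) + 2)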

1st order linear IVP: Suppose we are interested in solving the initial value problem
$$ y' + \alpha(x)y = b(x), \quad y(x_0) = y_0, \quad \text{where } x_0 \in I. $$
From the previous calculation, we see that $u' = be^{A}$ with $A' = \alpha$. Integrating from $x_0$ to $x$ and taking $A(x) = \int_{x_0}^x \alpha(s)\,ds$, we have
$$ u(x) - u(x_0) = \int_{x_0}^x b(s)\,e^{\int_{x_0}^s \alpha(r)\,dr}\,ds. $$
Note that $u(x_0) = y(x_0)e^{A(x_0)} = y(x_0)e^0 = y_0$. Thus
$$ u(x) = y_0 + \int_{x_0}^x b(s)\,e^{\int_{x_0}^s \alpha(r)\,dr}\,ds $$
$$ \implies y(x) = e^{-\int_{x_0}^x \alpha(s)\,ds}\Big[ y_0 + \int_{x_0}^x b(s)\,e^{\int_{x_0}^s \alpha(r)\,dr}\,ds \Big]. \qquad (2.3) $$

Example 2.2. Find the solution of $x'(t) + kx(t) = h$, $x(0) = x_0$, where $h$ and $k$ are constants. This equation arises in the RC circuit when there is a generator of constant voltage $h$.
Solution: Using (2.3), we get
$$ x(t) = e^{-kt}\Big[ x_0 + \int_0^t he^{ks}\,ds \Big] = e^{-kt}\Big[ x_0 + h\,\frac{e^{kt} - 1}{k} \Big] = \Big( x_0 - \frac{h}{k} \Big)e^{-kt} + \frac{h}{k}. $$
Notice that $x(t) \to \frac{h}{k}$ as $t \to \infty$, from below if $x_0 < \frac{h}{k}$, and from above if $x_0 > \frac{h}{k}$. Moreover, the capacitor voltage $x(t)$ does not decay to $0$ but tends to the constant voltage $\frac{h}{k}$.
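The convergence $x(t) \to h/k$ from either side can also be seen numerically. A minimal sketch with scipy (the constants $h = 5$, $k = 2$ are illustrative assumptions, not from the notes):

    from scipy.integrate import solve_ivp

    h, k = 5.0, 2.0              # illustrative constants; h/k = 2.5
    for x0 in (0.0, 5.0):        # one start below h/k, one above
        sol = solve_ivp(lambda t, x: h - k * x, (0.0, 6.0), [x0])
        print(x0, "->", sol.y[0, -1])   # both runs approach h/k = 2.5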
2.2. General 1st order ODEs. We consider the general first order ODE of the form
$$ y' = \bar f(x, y), $$
where $\bar f$ is some continuous function. We have seen that if $\bar f(x,y) = -\alpha(x)y + b(x)$ for some $\alpha, b \in C(I)$, then the above ODE has a solution in explicit form (cf. the method of variation of parameters).
2.2.1. Equations with variables separated: Let us consider a separable ODE
$$ y' = f(x)g(y), \qquad (2.4) $$
where $f$ and $g$ are given continuous functions with $f(x) \neq 0$. If $y = k$ is any zero of $g$, then $y(x) = k$ is a constant solution of (2.4). On the other hand, if $y(x) = k$ is a constant solution, then $g(k) = 0$. Therefore $y(x) = k$ is a constant solution if and only if $g(k) = 0$.
• $y(x) = k$ is called an equilibrium solution if and only if $g(k) = 0$.
Hence if $y(x)$ is a non-constant solution, then $g(y(x)) \neq 0$ for any $x$. Any separable equation can be solved by means of the following theorem.
Theorem 2.3 (The method of separation of variables). Let $f$ and $g$ be continuous functions on some intervals $I$ and $J$ respectively such that $g \neq 0$ on $J$. Let $F$ resp. $G$ be a primitive function of $f$ resp. $\frac{1}{g}$ on $I$ resp. $J$. Then a function $y$ defined on some subinterval of $I$ solves the equation (2.4) if and only if
$$ G(y(x)) = F(x) + C, \qquad (2.5) $$
for all $x$ in the domain of $y$, where $C$ is a real constant.
Proof. Let $y(x)$ solve (2.4). Since $F' = f$ and $G' = \frac{1}{g}$, the equation (2.4) is equivalent to
$$ y'\,G'(y) = F'(x) \implies \big( G(y(x)) \big)' = F'(x) \implies G(y(x)) = F(x) + C. $$
Conversely, if a function $y$ satisfies (2.5) and is known to be differentiable in its domain, then differentiating (2.5) in $x$, we obtain $y'G'(y) = F'(x)$. Arguing backwards, we arrive at (2.4).
Let us show that $y$ is differentiable. Since $g(y) \neq 0$, either $g(y) > 0$ or $g(y) < 0$ in the whole domain. Then $G$ is either strictly increasing or strictly decreasing in the whole domain. In both cases, the inverse function $G^{-1}$ is well-defined and differentiable. It follows from (2.5) that
$$ y(x) = G^{-1}\big( F(x) + C \big). $$
Since both $F$ and $G^{-1}$ are differentiable, we conclude that $y$ is differentiable. $\square$
Example 2.3. Consider the ODE
$$ y'(x) = y(x), \quad x \in \mathbb{R}, \quad y(x) > 0. $$
Then $f(x) \equiv 1$ and $g(y) = y \neq 0$. Note that $F(x) = x$ and $G(y) = \log(y)$. The equation (2.5) becomes
$$ \log(y) = x + C \implies y(x) = C_1 e^x, \quad C_1 = e^C, $$
where $C_1$ is any positive constant.
Example 2.4. Consider the equation
$$ y' = \sqrt{|y|}, $$
which is defined for all $y \in \mathbb{R}$. Note that $y = 0$ is a trivial solution. In the domains $y > 0$ and $y < 0$, the equation can be solved using separation of variables. In the domain $y > 0$, we obtain
$$ \int \frac{dy}{\sqrt{y}} = \int dx \implies 2\sqrt{y} = x + C \implies y = \frac{1}{4}(x + C)^2. $$
Since $y > 0$, we must have $x > -C$, which follows from the second expression. Similarly, in the domain $y < 0$, we have
$$ y = -\frac{1}{4}(x + C)^2, \quad x < -C. $$
We see that the integral curves in the domain $y > 0$ touch the curve $y = 0$, and so do the integral curves in the domain $y < 0$.
Example 2.5. The logistic equation
$$ y'(x) = y(x)\big( \alpha - \beta y(x) \big), \quad \alpha, \beta > 0. $$
In this model, $y(x)$ represents the population of some species and therefore $y(x) \geq 0$. Note that $y(x) = 0$ and $y(x) = \frac{\alpha}{\beta}$ are two equilibrium solutions. Such solutions play an important role in analyzing the trajectories of solutions in general. In order to solve the logistic equation, we separate the variables and obtain (assuming that $y \neq 0$ and $y \neq \frac{\alpha}{\beta}$)
$$ \frac{dy}{y(\alpha - \beta y)} = dx \implies \frac{1}{\alpha}\Big[ \frac{dy}{y} + \frac{\beta\,dy}{\alpha - \beta y} \Big] = dx \implies \frac{1}{\alpha}\big[ \log(|y|) - \log(|\alpha - \beta y|) \big] = x + c $$
$$ \implies \frac{1}{\alpha}\log\Big| \frac{y}{\alpha - \beta y} \Big| = x + c \implies \Big| \frac{y}{\alpha - \beta y} \Big| = ke^{\alpha x}, $$
where $k = e^{c\alpha}$. This is a general solution in implicit form. To solve for $y$, consider the case $0 < y(x) < \frac{\alpha}{\beta}$. Then $y(x) = \frac{\alpha k e^{\alpha x}}{1 + \beta k e^{\alpha x}}$. For the case $y > \frac{\alpha}{\beta}$, the solution takes the form $y(x) = \frac{\alpha k e^{\alpha x}}{\beta k e^{\alpha x} - 1}$. In any case, $\lim_{x \to \infty} y(x) = \frac{\alpha}{\beta}$. This shows that all non-constant solutions approach the equilibrium solution $y(x) = \frac{\alpha}{\beta}$ as $x \to \infty$, some from above the line $y = \frac{\alpha}{\beta}$ and others from below.
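This qualitative picture, non-constant trajectories approaching $\alpha/\beta$ from both sides, can be reproduced numerically. A minimal sketch with scipy; the values $\alpha = 2$, $\beta = 1$ are illustrative assumptions, not from the notes:

    from scipy.integrate import solve_ivp

    alpha, beta = 2.0, 1.0       # illustrative values; equilibrium alpha/beta = 2
    f = lambda x, y: y * (alpha - beta * y)
    for y0 in (0.5, 3.0):        # one trajectory below the equilibrium, one above
        sol = solve_ivp(f, (0.0, 8.0), [y0])
        print(y0, "->", sol.y[0, -1])   # both end near 2.0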

2.2.2. Exact equations: Suppose that the first order equation $y' = \bar f(x,y)$ is written in the form
$$ M(x,y) + N(x,y)\,y' = 0, \qquad (2.6) $$
where $M$, $N$ are real-valued functions defined for real $x, y$ on some domain $\Omega$.
Definition 2.2. We say that the equation (2.6) is exact in $\Omega$ if there exists a function $F$ having continuous first partial derivatives such that
$$ \frac{\partial F}{\partial x} = M, \qquad \frac{\partial F}{\partial y} = N. \qquad (2.7) $$
Theorem 2.4. Suppose the equation (2.6) is exact in a domain $\Omega \subset \mathbb{R}^2$, i.e., there exists $F$ such that $\frac{\partial F}{\partial x} = M$, $\frac{\partial F}{\partial y} = N$ in $\Omega$. Then every continuously differentiable function $\phi$ defined implicitly by a relation
$$ F(x, \phi(x)) = c \quad (c = \text{constant}) $$
is a solution of (2.6), and every solution of (2.6) whose graph lies in $\Omega$ arises in this way.
Proof. Under the assumptions of the theorem, equation (2.6) becomes
$$ \frac{\partial F}{\partial x}(x,y) + \frac{\partial F}{\partial y}(x,y)\,y' = 0. $$
If $\phi$ is any solution on some interval $I$, then
$$ \frac{\partial F}{\partial x}(x, \phi(x)) + \frac{\partial F}{\partial y}(x, \phi(x))\,\phi'(x) = 0, \quad \forall x \in I. \qquad (2.8) $$
If $\psi(x) = F(x, \phi(x))$, then from the above equation we see that $\psi'(x) = 0$, and hence $F(x, \phi(x)) = c$, where $c$ is some constant. Thus the solution $\phi$ must be a function given implicitly by the relation $F(x, \phi(x)) = c$. Conversely, if $\phi$ is a differentiable function on some interval $I$ defined implicitly by the relation $F(x,y) = c$, then
$$ F(x, \phi(x)) = c, \quad \forall x \in I. $$
Differentiation, together with the property $\frac{\partial F}{\partial x} = M$, $\frac{\partial F}{\partial y} = N$, yields that $\phi$ is a solution of (2.6). This completes the proof. $\square$
We will say that $F(x,y) = c$ is the general solution of (2.6).
Example 2.6. Consider the equation
$$ x - (y^4 - 1)\,y'(x) = 0. $$
Here $M = x$ and $N = 1 - y^4$. Define $F(x,y) = \frac{1}{2}x^2 + y - \frac{1}{5}y^5$. Then the above equation is exact. Hence the solution is given by
$$ F(x,y) = c \implies 2y^5 - 10y = 5x^2 + c. $$
Example 2.7. Find the general solution of the ODE $2ye^{2x} + 2x\cos(y) + \big( e^{2x} - x^2\sin(y) \big)y' = 0$.
Solution: The equation is of the form $M(x,y) + N(x,y)y' = 0$ with $M(x,y) = 2ye^{2x} + 2x\cos(y)$ and $N(x,y) = e^{2x} - x^2\sin(y)$. Define $F(x,y) = ye^{2x} + x^2\cos(y)$. Then $F$ has continuous first partial derivatives on $\mathbb{R}^2$ and $\frac{\partial F}{\partial x}(x,y) = M(x,y)$, $\frac{\partial F}{\partial y}(x,y) = N(x,y)$. Hence the given ODE is exact. Thus the general solution is given by the formula
$$ ye^{2x} + x^2\cos(y) = c, \quad c \in \mathbb{R}. $$
How do we recognize when an equation is exact? The following theorem gives a necessary and sufficient condition.

Theorem 2.5. Let $M, N$ be two real-valued functions which have continuous first partial derivatives on some rectangle
$$ R := \big\{ (x,y) \in \mathbb{R}^2 : |x - x_0| \leq a, \; |y - y_0| \leq b \big\}. $$
Then the equation (2.6) is exact in $R$ if and only if
$$ \frac{\partial M}{\partial y} = \frac{\partial N}{\partial x} \qquad (2.9) $$
in $R$.
Proof. It is easy to see that if the equation (2.6) is exact, then (2.9) holds. Now suppose that (2.9) holds in the rectangle $R$. We want to find a function $F$ having continuous first partial derivatives such that $\frac{\partial F}{\partial x} = M$ and $\frac{\partial F}{\partial y} = N$. If we had such a function, then
$$ F(x,y) - F(x_0,y_0) = \int_{x_0}^x \frac{\partial F}{\partial x}(s,y)\,ds + \int_{y_0}^y \frac{\partial F}{\partial y}(x_0,t)\,dt = \int_{x_0}^x M(s,y)\,ds + \int_{y_0}^y N(x_0,t)\,dt. $$
Similarly, by writing $F(x,y) - F(x_0,y_0) = \big[ F(x,y) - F(x,y_0) \big] + \big[ F(x,y_0) - F(x_0,y_0) \big]$, we could have
$$ F(x,y) - F(x_0,y_0) = \int_{x_0}^x M(s,y_0)\,ds + \int_{y_0}^y N(x,t)\,dt. \qquad (2.10) $$
We now define $F$ by the formula
$$ F(x,y) = \int_{x_0}^x M(s,y)\,ds + \int_{y_0}^y N(x_0,t)\,dt. \qquad (2.11) $$
Then $F(x_0,y_0) = 0$ and $\frac{\partial F}{\partial x}(x,y) = M(x,y)$ for all $(x,y)$ in $R$. From (2.10), we could also define $F$ by the formula
$$ F(x,y) = \int_{x_0}^x M(s,y_0)\,ds + \int_{y_0}^y N(x,t)\,dt. \qquad (2.12) $$
It is clear from (2.12) that $\frac{\partial F}{\partial y}(x,y) = N(x,y)$ for all $(x,y)$ in $R$. Therefore, we need to show that (2.12) is valid, where $F$ is defined by (2.11). Now, by using the condition (2.9), we have
$$ F(x,y) - \Big[ \int_{x_0}^x M(s,y_0)\,ds + \int_{y_0}^y N(x,t)\,dt \Big] = \int_{x_0}^x \big[ M(s,y) - M(s,y_0) \big]\,ds - \int_{y_0}^y \big[ N(x,t) - N(x_0,t) \big]\,dt $$
$$ = \int_{x_0}^x \int_{y_0}^y \Big[ \frac{\partial M}{\partial y}(s,t) - \frac{\partial N}{\partial x}(s,t) \Big]\,dt\,ds = 0. $$
This completes the proof. $\square$
Example 2.8. Find the general solution of the ODE
$$ 2xy\,dx + (x^2 + y^2)\,dy = 0. $$
Here $M(x,y) = 2xy$ and $N(x,y) = x^2 + y^2$. Note that $\frac{\partial M}{\partial y} = \frac{\partial N}{\partial x} = 2x$. Thus the equation is exact. Define the function $F$ by (taking $(x_0,y_0) = (0,0)$)
$$ F(x,y) = \int_0^x M(s,y)\,ds + \int_0^y N(x_0,t)\,dt = \int_0^x 2sy\,ds + \int_0^y t^2\,dt = yx^2 + \frac{y^3}{3}. $$
Therefore the general solution is given by the formula $yx^2 + \frac{y^3}{3} = c$, where $c$ is an arbitrary real constant.
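The exactness test (2.9) and the construction (2.11) of $F$ are directly computable. A minimal sympy sketch for Example 2.8 (assuming sympy; not part of the original notes):

    import sympy as sp

    x, y, s, t = sp.symbols('x y s t')
    M = 2*x*y
    N = x**2 + y**2

    # exactness test (2.9): dM/dy == dN/dx
    assert sp.simplify(sp.diff(M, y) - sp.diff(N, x)) == 0

    # F via formula (2.11) with (x0, y0) = (0, 0)
    F = sp.integrate(M.subs(x, s), (s, 0, x)) + sp.integrate(N.subs({x: 0, y: t}), (t, 0, y))
    print(sp.expand(F))   # x**2*y + y**3/3, matching the text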

Example 2.9. Solve the ODE $(x^2 - 2y)\,y' = 3x^2 - 2xy$.
Solution: The given ODE can be written as
$$ (3x^2 - 2xy)\,dx + (2y - x^2)\,dy = 0, \quad \text{i.e.,} \quad M(x,y)\,dx + N(x,y)\,dy = 0. $$
A simple calculation shows that $\frac{\partial M}{\partial y}(x,y) = \frac{\partial N}{\partial x}(x,y) = -2x$. Hence the ODE is exact for all $x, y \in \mathbb{R}$. To find $F$, we know that $\frac{\partial F}{\partial x} = M$, $\frac{\partial F}{\partial y} = N$. Thus $F$ satisfies
$$ F(x,y) = x^3 - x^2y + f(y), \quad \text{where } f \text{ is independent of } x. $$
Now, $\frac{\partial F}{\partial y} = N$ gives
$$ f'(y) - x^2 = 2y - x^2 \implies f'(y) = 2y. $$
Taking $f(y) = y^2$, we obtain $F(x,y) = x^2(x - y) + y^2$. Thus the general solution is
$$ x^2(x - y) + y^2 = c, \quad c \in \mathbb{R}. $$
2.2.3. The integrating factor: Sometimes, if the equation (2.6) is NOT exact, one can find a function $u$, nowhere zero, such that the equation
$$ u(x,y)M(x,y)\,dx + u(x,y)N(x,y)\,dy = 0 $$
is exact. Such a function is called an integrating factor. For example, $y\,dx - x\,dy = 0$ ($x > 0$, $y > 0$) is not exact, but multiplying the equation by $u(x,y) = \frac{1}{y^2}$ makes it exact. Note that all three functions
$$ \frac{1}{xy}, \qquad \frac{1}{x^2}, \qquad \frac{1}{y^2} $$
are integrating factors of the above ODE. Thus, integrating factors need not be unique.
Remark 2.1. In view of Theorem 2.5, we see that a function $u$ on a rectangle $R$, having continuous first partial derivatives, is an integrating factor of the equation (2.6) if and only if
$$ u\Big( \frac{\partial M}{\partial y} - \frac{\partial N}{\partial x} \Big) = N\frac{\partial u}{\partial x} - M\frac{\partial u}{\partial y}. \qquad (2.13) $$
i) If $u$ is an integrating factor which is a function of $x$ only, then
$$ p = \frac{1}{N}\Big( \frac{\partial M}{\partial y} - \frac{\partial N}{\partial x} \Big) $$
is a continuous function of $x$ alone, provided $N(x,y) \neq 0$ in $R$.
ii) If $u$ is an integrating factor which is a function of $y$ only, then
$$ q = \frac{1}{M}\Big( \frac{\partial N}{\partial x} - \frac{\partial M}{\partial y} \Big) $$
is a continuous function of $y$ alone, provided $M(x,y) \neq 0$ in $R$.
Example 2.10. Find an integrating factor of
$$ (2y^3 + 2)\,dx + 3xy^2\,dy = 0, \quad x \neq 0, \; y \neq 0, $$
and solve the ODE.
Here $M(x,y) = 2y^3 + 2$ and $N(x,y) = 3xy^2$. Note that the equation is not exact. Now $\frac{\partial M}{\partial y} - \frac{\partial N}{\partial x} = 3y^2$, and hence $\frac{1}{N}\big( \frac{\partial M}{\partial y} - \frac{\partial N}{\partial x} \big) = \frac{1}{x}$ is a continuous function of $x$ alone. Thus the integrating factor should be a function of $x$ only. Note that $u(x) = x$ satisfies the relation (2.13). After multiplication by the integrating factor, the equation becomes
$$ \tilde M(x,y)\,dx + \tilde N(x,y)\,dy = 0, \quad \text{where } \tilde M(x,y) = 2xy^3 + 2x, \; \tilde N(x,y) = 3x^2y^2. $$
To find $\tilde F$, we know that $\frac{\partial \tilde F}{\partial x} = 2xy^3 + 2x$, and hence
$$ \tilde F(x,y) = x^2y^3 + x^2 + f(y), $$
where $f$ is independent of $x$. Again, $\frac{\partial \tilde F}{\partial y} = \tilde N$ gives
$$ f'(y) + 3x^2y^2 = 3x^2y^2 \implies f(y) = c. $$
Thus the general solution is given implicitly by
$$ x^2(y^3 + 1) = c, \quad c \in \mathbb{R}. $$
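Both steps of this example, recognizing that $\frac{1}{N}(M_y - N_x)$ depends on $x$ alone and checking that $u(x) = x$ restores exactness, can be verified symbolically. A minimal sympy sketch (an assumption of this note):

    import sympy as sp

    x, y = sp.symbols('x y')
    M = 2*y**3 + 2
    N = 3*x*y**2

    # (1/N)(M_y - N_x) should be a function of x alone
    print(sp.simplify((sp.diff(M, y) - sp.diff(N, x)) / N))   # 1/x

    # after multiplying by u(x) = x, the exactness test (2.9) passes
    u = x
    assert sp.simplify(sp.diff(u*M, y) - sp.diff(u*N, x)) == 0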
2.2.4. Bernoulli Equations: We are interested in ODEs of the form
$$ y' + p(x)y = q(x)y^n, $$
where $p$ and $q$ are continuous functions. Note that for $n = 0$ or $n = 1$, the equation is linear, and we already know how to solve it. For $n \neq 0, 1$, we first divide the ODE by $y^n$ to get $y^{-n}y' + p(x)y^{1-n} = q(x)$. Take $v = y^{1-n}$. Then $v' = (1-n)y^{-n}y'$, and plugging this in, we have
$$ \frac{1}{1-n}v' + p(x)v = q(x). $$
Since this is a linear ODE, we can use the method of variation of parameters to find its solution, and hence we are able to find the solution of the given Bernoulli equation.
Example 2.11. We wish to find all solutions of the ODE $3y^2y' + y^3 = e^{-x}$. We first write the given ODE in the Bernoulli form: $y' + \frac{1}{3}y = \frac{e^{-x}}{3}y^{-2}$. Here $p(x) = \frac{1}{3}$, $q(x) = \frac{e^{-x}}{3}$ and $n = -2$. Thus, taking $v = y^3$, we have the following linear ODE:
$$ v' + v = e^{-x}. $$
Its general solution is given by $v(x) = e^{-x}(c + x)$, where $c$ is an arbitrary constant. Thus the general solution of the given ODE is
$$ y(x) = (c + x)^{\frac{1}{3}}\,e^{-\frac{x}{3}}, \quad \forall x \in \mathbb{R}. $$
Example 2.12. Find the general solution of the ODE $y' - 2xy = xy^2$.
Solution: Note that the given ODE is a Bernoulli equation with $p(x) = -2x$, $q(x) = x$ and $n = 2$. Taking $v = y^{-1}$, we have the following linear ODE:
$$ v' + 2xv = -x. $$
Its general solution is given by $v(x) = -\frac{1}{2} + ce^{-x^2}$. Hence the general solution of the given ODE is
$$ y(x) = \Big( -\frac{1}{2} + ce^{-x^2} \Big)^{-1}, \quad x \in \mathbb{R}. $$
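sympy's dsolve handles Bernoulli equations, so the answer of Example 2.12 can be cross-checked. A minimal sketch (assuming sympy; the printed constant may be parametrized differently from $c$ above):

    import sympy as sp

    x = sp.symbols('x')
    y = sp.Function('y')

    # the Bernoulli equation y' - 2x y = x y^2 from Example 2.12
    ode = sp.Eq(y(x).diff(x) - 2*x*y(x), x*y(x)**2)
    print(sp.dsolve(ode, y(x)))
    # expected, up to the naming of the constant: y(x) = 1/(C1*exp(-x**2) - 1/2)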

3. Existence and uniqueness of solutions to first order ODE

3.1. The method of successive approximations: Let us consider the initial value problem
$$ y' = f(x,y), \quad y(x_0) = y_0, \qquad (3.1) $$
where $f$ is any continuous real-valued function defined on some rectangle
$$ R := \big\{ (x,y) \in \mathbb{R}^2 : |x - x_0| \leq a, \; |y - y_0| \leq b \big\}, \quad (a, b > 0). $$
Our aim is to show that on some interval $I$ containing $x_0$, there is a solution of the above initial value problem. Let us first show the following.
Theorem 3.1. A function $\phi$ is a solution of (3.1) on some interval $I$ if and only if it is a solution of the integral equation (on $I$)
$$ y(x) = y_0 + \int_{x_0}^x f(s, y(s))\,ds. \qquad (3.2) $$
Proof. Suppose $\phi$ is a solution of the initial value problem on $I$. Then
$$ \phi'(t) = f(t, \phi(t)) \implies \phi(x) = \phi(x_0) + \int_{x_0}^x f(t, \phi(t))\,dt = y_0 + \int_{x_0}^x f(t, \phi(t))\,dt, $$
where in the middle equality we used the fact that $f(t, \phi(t))$ is continuous on $I$. Hence $\phi$ is a solution of (3.2). Conversely, suppose that $\phi$ is a solution of (3.2), i.e.,
$$ \phi(x) = y_0 + \int_{x_0}^x f(s, \phi(s))\,ds, \quad \forall x \in I. $$
Then $\phi(x_0) = y_0$ and $\phi'(x) = f(x, \phi(x))$ for all $x \in I$. Thus $\phi$ is a solution of the initial value problem (3.1). $\square$
We now want to solve the integral equation via approximation. Define Picard's approximations by
$$ \phi_0(x) = y_0, \qquad \phi_{k+1}(x) = y_0 + \int_{x_0}^x f(s, \phi_k(s))\,ds \quad (k = 0, 1, \ldots). \qquad (3.3) $$
First we show that all the functions $\phi_k$, $k = 0, 1, 2, \ldots$ exist on some interval.
Theorem 3.2. The approximations $\phi_k$ exist as continuous functions on
$$ I := \Big\{ x \in \mathbb{R} : |x - x_0| \leq \alpha := \min\big\{ a, \tfrac{b}{M} \big\} \Big\}, $$
and $(x, \phi_k(x)) \in R$ for all $x \in I$, where $M > 0$ is such that $|f(x,y)| \leq M$ for all $(x,y)$ in $R$. Indeed,
$$ |\phi_k(x) - y_0| \leq M|x - x_0|, \quad \forall x \in I. \qquad (3.4) $$
Note that for $x \in I$, $|x - x_0| \leq \frac{b}{M}$, and hence the points $(x, \phi_k(x))$ are in $R$ for all $x \in I$.
Proof. We prove it by induction. Clearly $\phi_0$ exists on $I$ and satisfies (3.4). Now
$$ |\phi_1(x) - y_0| = \Big| \int_{x_0}^x f(s, y_0)\,ds \Big| \leq M|x - x_0|, $$
and $\phi_1$ is continuous on $I$. Now assume that the claim is valid for the functions $\phi_0, \phi_1, \ldots, \phi_k$. Define the function $F_k(t) = f(t, \phi_k(t))$, which exists for $t \in I$ and is continuous on $I$. Therefore $\phi_{k+1}$ exists as a continuous function on $I$. Moreover,
$$ |\phi_{k+1}(x) - y_0| \leq \Big| \int_{x_0}^x F_k(t)\,dt \Big| \leq M|x - x_0|. \qquad \square $$

Definition 3.1. Let $f$ be a function defined for $(x,y)$ in a set $S$. We say that it is Lipschitz in $y$ if there exists a constant $K > 0$ such that
$$ |f(x,y_1) - f(x,y_2)| \leq K|y_1 - y_2|, \quad \forall (x,y_1), (x,y_2) \in S. $$
Exercise 3.1. Show that if the partial derivative $f_y$ exists and is bounded in a rectangle $R$, then $f$ is Lipschitz in $y$ in $R$.
We now show that the approximate solutions $\phi_k$ converge on $I$ to a solution of the initial value problem under certain assumptions on $f$.

Theorem 3.3. Let $f$ be a continuous function defined on the rectangle $R$. Further suppose that $f$ is Lipschitz in $y$. Then $\{\phi_k\}$ converges on $I$ to a solution of the initial value problem (3.1).
Proof. Note that
$$ \phi_k(x) = \phi_0(x) + \sum_{i=1}^{k} \big[ \phi_i(x) - \phi_{i-1}(x) \big], \quad \forall x \in I. $$
Therefore, it suffices to show that the series
$$ \phi_0(x) + \sum_{i=1}^{\infty} \big[ \phi_i(x) - \phi_{i-1}(x) \big] $$
converges. Let us estimate the terms $\phi_i(x) - \phi_{i-1}(x)$. Observe that, since $f$ satisfies a Lipschitz condition in $R$,
$$ |\phi_2(x) - \phi_1(x)| = \Big| \int_{x_0}^x \big[ f(t, \phi_1(t)) - f(t, \phi_0(t)) \big]\,dt \Big| \leq K \Big| \int_{x_0}^x |\phi_1(t) - \phi_0(t)|\,dt \Big| \leq KM \Big| \int_{x_0}^x |t - x_0|\,dt \Big| \leq KM\,\frac{(x - x_0)^2}{2}. $$
Claim:
$$ |\phi_i(x) - \phi_{i-1}(x)| \leq \frac{MK^{i-1}}{i!}|x - x_0|^i = \frac{M}{K}\,\frac{K^i|x - x_0|^i}{i!}, \quad i = 1, 2, \ldots \qquad (3.5) $$
We shall prove (3.5) by induction. Note that (3.5) is true for $i = 1$ and $i = 2$. Assume now that (3.5) holds for $i = m$, and assume that $x \geq x_0$ (the proof is similar for $x \leq x_0$). By using the Lipschitz condition and the induction hypothesis, we have
$$ |\phi_{m+1}(x) - \phi_m(x)| \leq K \int_{x_0}^x |\phi_m(t) - \phi_{m-1}(t)|\,dt \leq K \int_{x_0}^x \frac{MK^{m-1}}{m!}|t - x_0|^m\,dt = \frac{MK^m}{(m+1)!}|x - x_0|^{m+1}. $$
Hence (3.5) holds for $i = 1, 2, \ldots$. It follows that the $i$-th term of the series $|\phi_0(x)| + \sum_{i=1}^{\infty} |\phi_i(x) - \phi_{i-1}(x)|$ is less than or equal to $\frac{M}{K}$ times the $i$-th term of the power series for $e^{K|x - x_0|}$. Hence the series $\phi_0(x) + \sum_{i=1}^{\infty} [\phi_i(x) - \phi_{i-1}(x)]$ is convergent for all $x \in I$, and therefore the sequence $\{\phi_k\}$ converges to a limit $\phi(x)$ as $k \to \infty$.
Properties of the limit function $\phi$: We first show that $\phi$ is continuous on $I$. Indeed, for any $x, \tilde x \in I$ we have, by using the boundedness of $f$,
$$ |\phi_{k+1}(x) - \phi_{k+1}(\tilde x)| = \Big| \int_{\tilde x}^x f(t, \phi_k(t))\,dt \Big| \leq M|x - \tilde x| \implies |\phi(x) - \phi(\tilde x)| \leq M|x - \tilde x|. $$
This shows that $\phi$ is continuous on $I$. Moreover, taking $\tilde x = x_0$ in the above estimate, we see that
$$ |\phi(x) - y_0| \leq M|x - x_0|, \quad \forall x \in I, $$
and hence $(x, \phi(x))$ is in $R$ for all $x \in I$.
We now estimate $|\phi(x) - \phi_k(x)|$. Note that, since $\phi(x)$ is the limit of the series $\phi_0(x) + \sum_{i=1}^{\infty} [\phi_i(x) - \phi_{i-1}(x)]$ and $\phi_k = \phi_0(x) + \sum_{i=1}^{k} [\phi_i(x) - \phi_{i-1}(x)]$, we see that
$$ |\phi(x) - \phi_k(x)| = \Big| \sum_{i=k+1}^{\infty} \big[ \phi_i(x) - \phi_{i-1}(x) \big] \Big| \leq \frac{M}{K} \sum_{i=k+1}^{\infty} \frac{K^i|x - x_0|^i}{i!} \leq \frac{M}{K} \sum_{i=k+1}^{\infty} \frac{K^i\alpha^i}{i!} $$
$$ = \frac{M}{K}\,\frac{(K\alpha)^{k+1}}{(k+1)!}\Big\{ 1 + \frac{K\alpha}{k+2} + \frac{(K\alpha)^2}{(k+2)(k+3)} + \ldots \Big\} \leq \frac{M}{K}\,\frac{(K\alpha)^{k+1}}{(k+1)!} \sum_{i=0}^{\infty} \frac{K^i\alpha^i}{i!} = \frac{M}{K}\,\frac{(K\alpha)^{k+1}}{(k+1)!}\,e^{K\alpha}. \qquad (3.6) $$
Note that $\frac{(K\alpha)^{k+1}}{(k+1)!} \to 0$ as $k \to \infty$.
Now we will show that $\phi$ is a solution of the integral equation (3.2). Note that
$$ \phi_{k+1}(x) = y_0 + \int_{x_0}^x f(t, \phi_k(t))\,dt, $$
and $\phi_{k+1}(x) \to \phi(x)$ as $k \to \infty$. Thus it suffices to show that $\int_{x_0}^x f(t, \phi_k(t))\,dt \to \int_{x_0}^x f(t, \phi(t))\,dt$. Indeed, thanks to the Lipschitz condition on $f$ and (3.6), we have
$$ \Big| \int_{x_0}^x f(t, \phi_k(t))\,dt - \int_{x_0}^x f(t, \phi(t))\,dt \Big| \leq K \Big| \int_{x_0}^x |\phi(t) - \phi_k(t)|\,dt \Big| \leq M\,\frac{(K\alpha)^{k+1}}{(k+1)!}\,e^{K\alpha}\,|x - x_0| \to 0 \quad (k \to \infty). $$

To prove uniqueness, we first consider an interval $I_\delta := \{|x - x_0| \leq \delta\}$, where $\delta > 0$ is such that $L\delta < 1$ ($L$ being the Lipschitz constant of $f$). Suppose there exist two solutions $y_1$ and $y_2$ of the IVP. Then one has
$$ |y_1(x) - y_2(x)| = \Big| \int_{x_0}^x \big[ f(s, y_1(s)) - f(s, y_2(s)) \big]\,ds \Big| \leq L \Big| \int_{x_0}^x |y_1(s) - y_2(s)|\,ds \Big| $$
$$ \leq L|x - x_0| \max_{|x - x_0| \leq \delta} |y_1(x) - y_2(x)| \leq L\delta \max_{|x - x_0| \leq \delta} |y_1(x) - y_2(x)| $$
$$ \implies \max_{|x - x_0| \leq \delta} |y_1(x) - y_2(x)| \leq L\delta \max_{|x - x_0| \leq \delta} |y_1(x) - y_2(x)|. $$
Since $L\delta < 1$, we get $\max_{|x - x_0| \leq \delta} |y_1(x) - y_2(x)| = 0$, which then implies that $y_1 = y_2$ on $I_\delta$. In particular, $y_1(x_0 \pm \delta) = y_2(x_0 \pm \delta)$. Repeating the same procedure on the intervals $[x_0 + \delta, x_0 + 2\delta]$ and $[x_0 - 2\delta, x_0 - \delta]$, we get $y_1(x) = y_2(x)$ for all $x \in [x_0 - 2\delta, x_0 + 2\delta]$. Since the set $\{x \in \mathbb{R} : |x - x_0| \leq \alpha\}$ is compact, after a finite number of steps we get $y_1(x) = y_2(x)$ on all of $I$. This completes the proof. $\square$
Remark 3.1. One can start Picard's iteration procedure with any continuous function $\theta_0$ on $|x - x_0| \leq a$ such that the points $(x, \theta_0(x))$ are in $R$ for $|x - x_0| \leq a$. One can proceed as in the previous theorem by showing:
i) $\theta_1, \theta_2, \ldots$ exist and are continuous on $I$, and satisfy $|\theta_k(x) - y_0| \leq M|x - x_0|$ for all $k = 1, 2, \ldots$;
ii) $|\theta_{i+1}(x) - \theta_i(x)| \leq 2\,\frac{M}{K}\,\frac{K^i|x - x_0|^i}{i!}$ for all $i = 1, 2, \ldots$;
iii) $\theta_k$ tends to the limit function $\phi$, and $|\theta_i(x) - \phi(x)| \leq 2\,\frac{M}{K}\,\frac{(K\alpha)^i}{i!}\,e^{K\alpha}$.
Let us illustrate this method on the following example:
$$ y' = y, \quad y(0) = 1. $$
Here $\phi_0(x) = 1$. Moreover, the approximate functions are given by
$$ \phi_1(x) = 1 + \int_0^x \phi_0(s)\,ds = 1 + x, $$
$$ \phi_2(x) = 1 + \int_0^x \phi_1(s)\,ds = 1 + x + \frac{x^2}{2!}, $$
$$ \phi_3(x) = 1 + \int_0^x \phi_2(s)\,ds = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!}, $$
and by induction
$$ \phi_k(x) = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \ldots + \frac{x^k}{k!}, \quad k = 0, 1, 2, \ldots. $$
Clearly, $\phi_k(x) \to e^x$ as $k \to \infty$, and the function $y(x) = e^x$ indeed solves the above IVP.
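Picard's scheme (3.3) is easy to run symbolically. A minimal sympy sketch (an assumption of this note, not part of the original text) reproducing the partial sums of $e^x$ for $y' = y$, $y(0) = 1$:

    import sympy as sp

    x, s = sp.symbols('x s')
    f = lambda s, y: y           # right-hand side of y' = y
    phi = sp.Integer(1)          # phi_0(x) = y_0 = 1
    for k in range(4):
        # phi_{k+1}(x) = y_0 + integral_0^x f(s, phi_k(s)) ds
        phi = 1 + sp.integrate(f(s, phi.subs(x, s)), (s, 0, x))
        print(phi)               # 1 + x, then 1 + x + x**2/2, ...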
Definition 3.2. We say that $J \subset \mathbb{R}$ is the maximal interval of definition of the solution $y(x)$ of the IVP (3.1) if any interval $I$ on which $y(x)$ is defined is contained in $J$, and $y$ cannot be extended to an interval strictly larger than $J$.
Example 3.1. Consider the IVP
$$ y' = y^2, \quad y(x_0) = a \neq 0. $$
In view of Theorem 3.3, the above problem has a solution in a neighbourhood of $x_0$. Note that the function $\phi(x, c) = -\frac{1}{x - c}$ solves $y' = y^2$. Thus we need to impose the requirement that $\phi(x_0, c) = a \iff c = x_0 + \frac{1}{a}$. Let $c_a = x_0 + \frac{1}{a}$. Hence for $a > 0$, the solution of the above problem is
$$ y_a(x) = -\frac{1}{x - c_a}, \quad x < c_a. $$
Thus, in this case, the maximal interval of definition is $(-\infty, c_a)$. Similarly, for $a < 0$, one can show that the maximal interval of definition is $(c_a, \infty)$ for the above problem.
3.1.1. Non-local existence of solutions: Theorem 3.3 guarantees a solution only for $x$ near the initial point $x_0$. There are some cases where the solution exists on the whole interval $|x - x_0| \leq a$. For example, consider an ODE of the form $y' + g(x)y = h(x)$, where $g, h$ are continuous on $|x - x_0| \leq a$. Let us define the strip
$$ S := \big\{ |x - x_0| \leq a, \; |y| < \infty \big\}. $$
Since $g$ is continuous on $|x - x_0| \leq a$, there exists $K > 0$ such that $|g(x)| \leq K$. Then the function $f(x,y) = -g(x)y + h(x)$ is Lipschitz continuous in $y$ on the strip $S$.
Theorem 3.4. Let $f$ be a real-valued continuous function on the strip $S$, and Lipschitz continuous on $S$ with constant $K > 0$. Then the Picard iterations $\{\phi_k\}$ for the problem (3.1) exist on the entire interval $|x - x_0| \leq a$, and converge there to a solution of (3.1).
Proof. Note that $(x, \phi_0(x)) \in S$. Now, since $f$ is continuous on $S$, there exists $M > 0$ such that $|f(x, y_0)| \leq M$ for $|x - x_0| \leq a$, and hence
$$ |\phi_1(x)| \leq |y_0| + \Big| \int_{x_0}^x f(t, y_0)\,dt \Big| \leq |y_0| + M|x - x_0| \leq |y_0| + Ma < \infty. $$
Moreover, each $\phi_k$ is continuous on $|x - x_0| \leq a$. Now assume that the points
$$ (x, \phi_0(x)), (x, \phi_1(x)), \ldots, (x, \phi_k(x)) $$
are in $S$ for $|x - x_0| \leq a$. We show that $(x, \phi_{k+1}(x))$ lies in $S$. Indeed,
$$ |\phi_{k+1}(x)| \leq |y_0| + \Big| \int_{x_0}^x f(t, \phi_k(t))\,dt \Big| \leq |y_0| + M|x - x_0| + \Big| \int_{x_0}^x \big| f(t, \phi_k(t)) - f(t, y_0) \big|\,dt \Big| $$
$$ \leq |y_0| + Ma + K \Big| \int_{x_0}^x \big( |\phi_k(t)| + |y_0| \big)\,dt \Big|. $$
Since $\phi_k(x)$ is continuous on $|x - x_0| \leq a$, we see that $|\phi_{k+1}(x)| < \infty$ for $|x - x_0| \leq a$. Hence, by induction, the points $(x, \phi_k(x))$ are in $S$.
To show the convergence of the sequence $\{\phi_k\}$ to a limit function $\phi$, we can mimic the proof of Theorem 3.3, once we note that
$$ |\phi_1(x) - \phi_0(x)| \leq \Big| \int_{x_0}^x |f(t, y_0)|\,dt \Big| \leq M|x - x_0|. $$
Next we show that $\phi$ is continuous. Observe that, thanks to (3.5) (which holds in this case),
$$ |\phi_k(x) - y_0| = \Big| \sum_{i=1}^{k} \big[ \phi_i(x) - \phi_{i-1}(x) \big] \Big| \leq \sum_{i=1}^{k} |\phi_i(x) - \phi_{i-1}(x)| \leq \frac{M}{K} \sum_{i=1}^{k} \frac{K^i|x - x_0|^i}{i!} $$
$$ \leq \frac{M}{K} \sum_{i=1}^{\infty} \frac{K^i a^i}{i!} = \frac{M}{K}\big( e^{Ka} - 1 \big) := b. $$
Taking the limit as $k \to \infty$, we obtain
$$ |\phi(x) - y_0| \leq b, \quad (|x - x_0| \leq a). $$
Note that $f$ is continuous on $R$, where the rectangle $R$ is given by
$$ R := \big\{ (x,y) \in \mathbb{R}^2 : |x - x_0| \leq a, \; |y - y_0| \leq b \big\}, $$
and hence there exists a constant $N > 0$ such that $|f(x,y)| \leq N$ for $(x,y) \in R$. Let $x, \tilde x$ be two points in the interval $|x - x_0| \leq a$. Then
$$ |\phi_{k+1}(x) - \phi_{k+1}(\tilde x)| = \Big| \int_{\tilde x}^x f(t, \phi_k(t))\,dt \Big| \leq N|x - \tilde x| \implies |\phi(x) - \phi(\tilde x)| \leq N|x - \tilde x|. $$
The rest of the proof is a repetition of the analogous parts of the proof of Theorem 3.3, with $\alpha$ replaced by $a$ everywhere. $\square$
Example 3.2. Consider the IVP $y' = \lambda y + x^2\sin(y)$, $y(0) = 1$, where $\lambda$ is a real constant such that $|\lambda| \leq 1$. Show that the solution of the given IVP exists for $|x| \leq 1$.
Solution: Here $f(x,y) = \lambda y + x^2\sin(y)$. Consider the strip $S = \{|x| \leq 1, \; |y| < \infty\}$. Then $f$ is continuous on $S$ and Lipschitz continuous in $y$ on $S$, as $|\partial_y f(x,y)| \leq |\lambda| + x^2 \leq 2$ on $S$. Thus, by Theorem 3.4, the solution of the given problem exists on the entire interval $|x| \leq 1$.
In view of Theorem 3.4, we arrive at the following corollary.
Corollary 3.5. Let $f$ be a real-valued continuous function on the plane $|x| < \infty$, $|y| < \infty$, which satisfies a Lipschitz condition on each strip $S_a$ defined by
$$ S_a := \big\{ |x| \leq a, \; |y| < \infty \big\}, \quad (a > 0). $$
Then every initial value problem
$$ y' = f(x,y), \quad y(x_0) = y_0, $$
has a solution which exists for all real $x$.
Proof. For any real number $x$, there exists $a > 0$ such that $x$ is contained inside the interval $|x - x_0| \leq a$. Consider now the strip
$$ S := \big\{ |x - x_0| \leq a, \; |y| < \infty \big\}. $$
Since $S$ is contained in the strip
$$ \tilde S := \big\{ |x| \leq |x_0| + a, \; |y| < \infty \big\}, $$
$f$ satisfies all the conditions of Theorem 3.4 on $S$. Thus $\{\phi_k(x)\}$ tends to $\phi(x)$, where $\phi$ is a solution of the initial value problem. This completes the proof. $\square$
Example 3.3. Consider the equation
$$ y' = h_1(x)\,p(\cos(y)) + h_2(x)\,q(\sin(y)) := f(x,y), $$
where $h_1$ and $h_2$ are continuous functions for all real $x$, and $p, q$ are polynomials. Consider the strip $S_a := \{|x| \leq a, \; |y| < \infty\}$, where $a > 0$. Note that, since $h_1, h_2$ are continuous, there exists $N_a > 0$ such that $|h_i(x)| \leq N_a$ for $|x| \leq a$. Again, since $p, q$ are polynomials, there exists a constant $C > 0$ such that
$$ \max\big\{ |p'(\xi)|, |q'(\xi)| : \xi \in [-1, 1] \big\} \leq C. $$
Let us check that $f$ is Lipschitz continuous in $y$ on the strip $S_a$. Indeed, for any $(x,y_1), (x,y_2) \in S_a$, by the mean value theorem,
$$ |f(x,y_1) - f(x,y_2)| \leq |h_1(x)||p'(\xi_1)||\cos(y_1) - \cos(y_2)| + |h_2(x)||q'(\xi_2)||\sin(y_1) - \sin(y_2)| \leq 2N_aC|y_1 - y_2|. $$
Thus, thanks to Corollary 3.5, every initial value problem for this equation has a solution which exists for all real $x$.
Example 3.4. Show that the IVP $y' = \frac{\cos(y)}{1 - x^2}$ ($|x| < 1$), $y(0) = 2$ has a unique solution for all $|x| < 1$.
Solution: The given IVP has a unique solution for all $|x| < 1$ if we show that $f(x,y) = \frac{\cos(y)}{1 - x^2}$ is continuous on each strip $S_a := \{|x| \leq a, \; |y| < \infty\}$, $0 < a < 1$, and Lipschitz continuous in the second argument on $S_a$. It is easy to see that $f$ is Lipschitz continuous in $y$ on $S_a$ with Lipschitz constant $K = \frac{1}{1 - a^2}$.
Corollary 3.6. Let $\Omega = \mathbb{R} \times \mathbb{R}$, and let $f$ be globally Lipschitz on $\Omega$ in its second argument. Let $(x_0, y_0) \in \Omega$ be given. Then the Cauchy problem $y' = f(x,y)$, $y(x_0) = y_0$ has a unique solution defined on all of $\mathbb{R}$.
Lemma 3.7. If the maximal interval of definition $J$ is not all of $\mathbb{R}$ (assuming that $f$ is defined on $\mathbb{R}^2$), then it cannot be closed.
Proof. By contradiction, let $J = [\alpha, \beta]$ or $(-\infty, \beta]$ or $[\alpha, \infty)$ for some $\alpha, \beta \in \mathbb{R}$ with $\alpha < \beta$. We deal with the first case. Consider the IVP $\tilde y'(x) = f(x, \tilde y)$ with $\tilde y(\beta) = y(\beta)$. Then this problem has a solution on $[\beta, \beta + \delta]$ for some $\delta > 0$. Consider the function
$$ \bar y(x) = \begin{cases} y(x), & \alpha \leq x \leq \beta, \\ \tilde y(x), & \beta \leq x \leq \beta + \delta. \end{cases} $$
One can easily check that $\bar y$ is a solution of the ODE $y' = f(x,y)$ defined on $[\alpha, \beta + \delta]$, contradicting the fact that $J$ is the maximal interval of definition. $\square$
Proposition 3.8. If the solution $y(x)$ of $y' = f(x,y)$, where $f$ is continuous and locally Lipschitz on $\mathbb{R}^2$, is monotone and bounded, then the maximal interval of definition $J$ is all of $\mathbb{R}$.
Proof. By contradiction, suppose $J$ is strictly contained in $\mathbb{R}$. We show that $J$ is closed. If $\beta < +\infty$ is the right endpoint of $J$, then, since $y(x)$ is monotone and bounded, $\lim_{x \to \beta} y(x)$ exists and is finite. Thus $y(x)$ is defined at $x = \beta$, and hence $J$ contains $\beta$. Similarly, if $\alpha > -\infty$ is the left endpoint of $J$, then $J$ also contains $\alpha$. In other words, $J$ is closed, a contradiction to the previous lemma. $\square$
3.2. Continuous dependence estimate: Suppose we have two IVPs
$$ y' = f(x,y), \quad y(x_0) = y_1, \qquad (3.7) $$
$$ y' = g(x,y), \quad y(x_0) = y_2, \qquad (3.8) $$
where $f$ and $g$ are both real-valued continuous functions on the rectangle $R$, and $(x_0, y_1), (x_0, y_2)$ are points in $R$.
Theorem 3.9. Let $f, g$ be continuous functions on $R$, and let $f$ satisfy a Lipschitz condition there with Lipschitz constant $K$. Let $\phi, \psi$ be solutions of (3.7), (3.8) respectively on an interval $I$ containing $x_0$, with graphs contained in $R$. Suppose that the following inequalities are valid:
$$ |f(x,y) - g(x,y)| \leq \varepsilon, \quad (x,y) \in R, \qquad (3.9) $$
$$ |y_1 - y_2| \leq \delta, \qquad (3.10) $$
for some non-negative constants $\varepsilon, \delta$. Then
$$ |\phi(x) - \psi(x)| \leq \delta e^{K|x - x_0|} + \frac{\varepsilon}{K}\big( e^{K|x - x_0|} - 1 \big), \quad \forall x \in I. \qquad (3.11) $$
Proof. Since $\phi, \psi$ are solutions of (3.7), (3.8) respectively on an interval $I$ containing $x_0$, we see that
$$ \phi(x) - \psi(x) = y_1 - y_2 + \int_{x_0}^x \big[ f(t, \phi(t)) - g(t, \psi(t)) \big]\,dt $$
$$ = y_1 - y_2 + \int_{x_0}^x \big[ f(t, \phi(t)) - f(t, \psi(t)) \big]\,dt + \int_{x_0}^x \big[ f(t, \psi(t)) - g(t, \psi(t)) \big]\,dt. $$
Assume that $x \geq x_0$. Then, in view of (3.9), (3.10) and the Lipschitz condition on $f$, we obtain from the above expression
$$ |\phi(x) - \psi(x)| \leq \delta + K \int_{x_0}^x |\phi(s) - \psi(s)|\,ds + \varepsilon(x - x_0). \qquad (3.12) $$
Define $E(x) = \int_{x_0}^x |\phi(s) - \psi(s)|\,ds$. Then $E'(x) = |\phi(x) - \psi(x)|$ and $E(x_0) = 0$. Therefore, (3.12) becomes
$$ E'(x) - KE(x) \leq \delta + \varepsilon(x - x_0). $$
Multiplying this inequality by $e^{-K(x - x_0)}$, and then integrating from $x_0$ to $x$, we have
$$ E(x)e^{-K(x - x_0)} \leq \delta \int_{x_0}^x e^{-K(t - x_0)}\,dt + \varepsilon \int_{x_0}^x (t - x_0)e^{-K(t - x_0)}\,dt $$
$$ = \frac{\delta}{K}\big[ 1 - e^{-K(x - x_0)} \big] + \frac{\varepsilon}{K^2} - \frac{\varepsilon}{K^2}\big[ K(x - x_0) + 1 \big]e^{-K(x - x_0)}. $$
Multiplying both sides of this inequality by $e^{K(x - x_0)}$, we have
$$ E(x) \leq \frac{\delta}{K}\big[ e^{K(x - x_0)} - 1 \big] - \frac{\varepsilon}{K^2}\big[ K(x - x_0) + 1 \big] + \frac{\varepsilon}{K^2}\,e^{K(x - x_0)}. $$
We now use this estimate in (3.12) to arrive at the required result for $x \geq x_0$. A similar proof holds in the case $x \leq x_0$. This completes the proof. $\square$
Example 3.5. Consider the IVP
$$ y' = xy + y^{10}, \quad y(0) = \frac{1}{10}. $$
We first show that this problem has a solution which exists for all $|x| \leq \frac{1}{2}$. To do so, we consider the rectangle $R = \{|x| \leq \frac{1}{2}, \; |y - \frac{1}{10}| \leq \frac{1}{10}\}$. It is easy to see that $g(x,y) = xy + y^{10}$ is continuous on $R$ and $|g(x,y)| < \frac{1}{5} := M$ for all $(x,y) \in R$. Hence the given IVP has a unique solution $\phi$ defined on the interval $I = \{x : |x| \leq \alpha := \min\{\frac{1}{2}, \frac{b}{M}\}\}$, where $b = \frac{1}{10}$. Observe that the IVP
$$ y' = f(x,y) := xy, \quad y(0) = \frac{1}{10} $$
has a unique solution, say $\psi$, for $|x| \leq \frac{1}{2}$ such that the graph $(x, \psi(x))$ lies in the rectangle $R$. The Lipschitz constant $K$ of $f$ on the rectangle $R$ is $\frac{1}{2}$, and $|f(x,y) - g(x,y)| \leq \frac{1}{5^{10}}$ for all $(x,y) \in R$. Hence, by the continuous dependence estimate, we have, for all $|x| \leq \frac{1}{2}$,
$$ |\phi(x) - \psi(x)| \leq \frac{2}{5^{10}}\big( e^{\frac{|x|}{2}} - 1 \big). $$
As a consequence of Theorem 3.9, we have:
i) Uniqueness Theorem: Let $f$ be continuous and satisfy a Lipschitz condition on $R$. If $\phi$ and $\psi$ are two solutions of the IVP (3.1) on an interval $I$ containing $x_0$, then $\phi(x) = \psi(x)$ for all $x \in I$.
ii) Let $f$ be continuous and satisfy a Lipschitz condition on $R$, and let $g_k$, $k = 1, 2, \ldots$, be continuous on $R$ such that
$$ |f(x,y) - g_k(x,y)| \leq \varepsilon_k, \quad (x,y) \in R, $$
with $\varepsilon_k \to 0$ as $k \to \infty$. Let $y_k \to y_0$ as $k \to \infty$. Let $\psi_k$ be a solution of the IVP
$$ y' = g_k(x,y), \quad y(x_0) = y_k, $$
and let $\phi$ be a solution of the IVP (3.1) on some interval $I$ containing $x_0$. Then $\psi_k(x) \to \phi(x)$ on $I$.
Remark 3.2. The Lipschitz condition on $f$ on the rectangle $R$ is necessary to have uniqueness of the solution of the IVP (3.1). To see this, consider the IVP
$$ y' = 3y^{\frac{2}{3}}, \quad y(0) = 0. $$
It is easy to check that, for any positive number $k$, the function
$$ \phi_k(x) = \begin{cases} 0, & -\infty < x \leq k, \\ (x - k)^3, & k \leq x < \infty, \end{cases} $$
is a solution of the above IVP. So there are infinitely many solutions on any rectangle $R$ containing the origin. But note that $f$ does NOT satisfy a Lipschitz condition on $R$.
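Both branches of this non-uniqueness example can be checked directly. A minimal sympy sketch (an assumption of this note), verifying the residual of $y' = 3y^{2/3}$ for $\phi_k$ with $k = 1$ at sample points $x > k$:

    import sympy as sp

    x = sp.symbols('x')
    k = 1                         # any positive number works

    y = (x - k)**3                # the nonzero branch of phi_k
    residual = sp.diff(y, x) - 3 * y**sp.Rational(2, 3)
    for xv in (1.5, 2, 3):        # points with x > k, where y >= 0
        print(residual.subs(x, xv).evalf())   # 0 in each case; y = 0 works trivially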
Remark 3.3. We have shown the existence of a solution of the IVP under the stronger condition of Lipschitzness of the function $f$. But one can relax the Lipschitzness and guarantee the existence of a solution of the IVP under the continuity assumption on $f$ alone. This is Peano's theorem.
Theorem 3.10 (Peano Theorem). If the function $f(x,y)$ is continuous on a rectangle $R$ and $(x_0, y_0) \in R$, then the IVP $y' = f(x,y)$ with $y(x_0) = y_0$ has a solution in a neighborhood of $x_0$.

4. Existence and uniqueness for systems and higher order equations:

Consider the system of differential equations in normal form
$$ y_1' = f_1(x, y_1, y_2, \ldots, y_n), \quad y_2' = f_2(x, y_1, y_2, \ldots, y_n), \quad \ldots, \quad y_n' = f_n(x, y_1, y_2, \ldots, y_n). \qquad (4.1) $$
Let $\tilde\Omega \subset \mathbb{R}^n$, and $\Omega = I \times \tilde\Omega$. We introduce
$$ \vec y = \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix} \in \mathbb{R}^n; \qquad \vec f(x, \vec y) = \begin{pmatrix} f_1(x, \vec y) \\ f_2(x, \vec y) \\ \vdots \\ f_n(x, \vec y) \end{pmatrix} \in \mathbb{R}^n. $$
Then $\vec f : \Omega \to \mathbb{R}^n$, and the system of equations (4.1) can be written in the compact form
$$ \vec y\,' = \vec f(x, \vec y). $$
An $n$-th order ODE $y^{(n)} = f(x, y, y', \ldots, y^{(n-1)})$ may also be treated as a system of the type (4.1). To see this, let $y_1 = y$, $y_2 = y'$, $\ldots$, $y_n = y^{(n-1)}$. Then from the ODE $y^{(n)} = f(x, y, y', \ldots, y^{(n-1)})$, we have
$$ y_1' = y_2; \quad y_2' = y_3; \quad \ldots; \quad y_{n-1}' = y_n; \quad y_n' = f(x, y_1, y_2, \ldots, y_n). \qquad (4.2) $$
Example 4.1. Consider the initial value problem $y^{(3)} + 2y' - (y')^3 + y = x^2 + 1$ with $y(0) = 0$, $y'(0) = 1$, $y''(0) = 1$. We want to convert it into a system of equations. Let $y_1 = y$, $y_2 = y'$ and $y_3 = y''$. Note that $y' = y_1' = y_2$. Using these, the required system takes the form
$$ y_1' = y_2; \quad y_2' = y_3; \quad y_3' = y_2^3 - 2y_2 - y_1 + x^2 + 1, \qquad (4.3) $$
$$ y_1(0) = 0; \quad y_2(0) = 1; \quad y_3(0) = 1. $$
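The reduction (4.3) is exactly the form a numerical integrator consumes. A minimal sketch with scipy's solve_ivp (an assumption of this note, not part of the original text):

    from scipy.integrate import solve_ivp

    def rhs(x, Y):
        # Y = (y1, y2, y3) = (y, y', y''); right-hand side of the system (4.3)
        y1, y2, y3 = Y
        return [y2, y3, y2**3 - 2*y2 - y1 + x**2 + 1]

    sol = solve_ivp(rhs, (0.0, 1.0), [0.0, 1.0, 1.0])  # y(0)=0, y'(0)=1, y''(0)=1
    print(sol.y[0, -1])   # approximate value of y(1)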

Definition 4.1. A solution $\vec\phi = (\phi_1, \phi_2, \ldots, \phi_n)$ of the system $\vec y\,' = \vec f(x, \vec y)$ is a differentiable function on a real interval $I$ such that
i) $(x, \vec\phi(x)) \in \Omega$;
ii) $\vec\phi\,'(x) = \vec f(x, \vec\phi(x))$ for all $x \in I$.
Theorem 4.1 (Local existence). Let $\vec f$ be a continuous vector-valued function defined on
$$ R = \big\{ |x - x_0| \leq a, \; |\vec y - \vec y_0| \leq b \big\}, \quad a, b > 0, $$
and suppose $\vec f$ satisfies a Lipschitz condition on $R$. Then the successive approximations $\{\vec\phi_k\}_{k=0}^{\infty}$,
$$ \vec\phi_0(x) = \vec y_0, \qquad \vec\phi_{k+1}(x) = \vec y_0 + \int_{x_0}^x \vec f(s, \vec\phi_k(s))\,ds, \quad k = 0, 1, 2, \ldots, $$
converge on the interval $I_{con} = \{|x - x_0| \leq \alpha = \min\{a, \frac{b}{M}\}\}$ to a solution $\vec\phi$ of the IVP $\vec y\,' = \vec f(x, \vec y)$, $\vec y(x_0) = \vec y_0$ on $I_{con}$, where $M$ is a positive constant such that $|\vec f(x, \vec y)| \leq M$ for all $(x, \vec y) \in R$. Moreover,
$$ \big| \vec\phi_k(x) - \vec\phi(x) \big| \leq \frac{M}{K}\,\frac{(K\alpha)^{k+1}}{(k+1)!}\,e^{K\alpha} \quad \forall x \in I_{con}, $$
where $K$ is a Lipschitz constant of $\vec f$ on $R$.
Example 4.2. Consider the problem
$$ y_1' = y_2, \quad y_1(0) = 0; \qquad y_2' = -y_1, \quad y_2(0) = 1. $$
This can be written in the compact form $\vec y\,' = \vec f(x, \vec y)$, $\vec y(0) = \vec y_0 = (0, 1)$, where $\vec f(x, \vec y) = (y_2, -y_1)$. Let us calculate the successive approximations $\vec\phi_k(x)$:
$$ \vec\phi_0(x) = \vec y_0 = (0, 1), $$
$$ \vec\phi_1(x) = (0,1) + \int_0^x (1, 0)\,ds = (x, 1), $$
$$ \vec\phi_2(x) = (0,1) + \int_0^x \vec f(s, \vec\phi_1(s))\,ds = (0,1) + \int_0^x (1, -s)\,ds = \Big( x, \; 1 - \frac{x^2}{2} \Big), $$
$$ \vec\phi_3(x) = (0,1) + \int_0^x \Big( 1 - \frac{s^2}{2}, \; -s \Big)\,ds = \Big( x - \frac{x^3}{3!}, \; 1 - \frac{x^2}{2} \Big), $$
$$ \vec\phi_4(x) = (0,1) + \int_0^x \Big( 1 - \frac{s^2}{2}, \; -s + \frac{s^3}{3!} \Big)\,ds = \Big( x - \frac{x^3}{3!}, \; 1 - \frac{x^2}{2} + \frac{x^4}{4!} \Big). $$
It is not difficult to show that all the $\vec\phi_k$ exist for all real $x$ and $\vec\phi_k(x) \to (\sin(x), \cos(x))$. Thus the unique solution of the given IVP is $\vec\phi(x) = (\sin(x), \cos(x))$.
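The vector Picard iteration of this example can be carried out mechanically with sympy, integrating componentwise. A minimal sketch (an assumption of this note):

    import sympy as sp

    x, s = sp.symbols('x s')
    f = lambda s, Y: sp.Matrix([Y[1], -Y[0]])   # f(x, y) = (y2, -y1)
    phi = sp.Matrix([0, 1])                     # phi_0 = y_0 = (0, 1)
    for k in range(4):
        integrand = f(s, phi.subs(x, s))
        phi = sp.Matrix([0, 1]) + integrand.applyfunc(lambda e: sp.integrate(e, (s, 0, x)))
        print(list(phi))   # partial sums of (sin x, cos x)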
Theorem 4.2 (Non-local existence). Let $\vec f$ be a continuous vector-valued function defined on
$$ S = \big\{ |x - x_0| \leq a, \; |\vec y| < \infty \big\}, \quad a > 0, $$
and satisfy there a Lipschitz condition. Then the successive approximations $\{\vec\phi_k\}_{k=0}^{\infty}$ for the IVP $\vec y\,' = \vec f(x, \vec y)$, $\vec y(x_0) = \vec y_0$ exist on $|x - x_0| \leq a$, and converge there to a solution of the IVP.
Corollary 4.3. Let $\vec f$ be a continuous vector-valued function defined on $|x| < \infty$, $|\vec y| < \infty$, which satisfies a Lipschitz condition on each strip
$$ S_a = \big\{ |x| \leq a, \; |\vec y| < \infty \big\}, \quad a > 0. $$
Then every initial value problem $\vec y\,' = \vec f(x, \vec y)$, $\vec y(x_0) = \vec y_0$ has a solution which exists for all $x \in \mathbb{R}$.
Example 4.3. Consider the system
$$ y_1' = 3y_1 + xy_3, \quad y_2' = y_2 + x^3y_3, \quad y_3' = 2xy_1 - y_2 + e^{-x}y_3. $$
This system of equations can be written in the compact form $\vec y\,' = \vec f(x, \vec y)$, where
$$ \vec y = \begin{pmatrix} y_1 \\ y_2 \\ y_3 \end{pmatrix}, \qquad \vec f(x, \vec y) = \begin{pmatrix} 3y_1 + xy_3 \\ y_2 + x^3y_3 \\ 2xy_1 - y_2 + e^{-x}y_3 \end{pmatrix}. $$
Note that $\vec f$ is a continuous vector-valued function defined on $|x| < \infty$, $|\vec y| < \infty$. It is Lipschitz continuous on the strip $S_a = \{|x| \leq a, \; |\vec y| < \infty\}$, since for $(x, \vec y), (x, \vec{\tilde y}) \in S_a$,
$$ \big| \vec f(x, \vec y) - \vec f(x, \vec{\tilde y}) \big| = |3(y_1 - \tilde y_1) + x(y_3 - \tilde y_3)| + |(y_2 - \tilde y_2) + x^3(y_3 - \tilde y_3)| + |2x(y_1 - \tilde y_1) - (y_2 - \tilde y_2) + e^{-x}(y_3 - \tilde y_3)| $$
$$ \leq (3 + 2|x|)|y_1 - \tilde y_1| + 2|y_2 - \tilde y_2| + \big( |x| + e^{-x} + |x|^3 \big)|y_3 - \tilde y_3| \leq \big( 5 + 3a + e^a + a^3 \big)\big| \vec y - \vec{\tilde y} \big|. $$
Therefore, every initial value problem for this system has a solution which exists for all real $x$. Moreover, the solution is unique.
Example 4.4. For any Lipschitz continuous function $f$ on $\mathbb{R}$, consider the IVP
$$ y''(x) = f(y), \quad y(0) = 0, \quad y'(0) = 0. $$
Then the solution of the above IVP is even. Indeed, since $f$ is Lipschitz continuous, by writing the above IVP in vector form, one can show that the above problem has a unique solution $y(x)$ defined on the whole real line. Let $z(x) = y(-x)$. Note that $z(x)$ satisfies the above IVP: $z''(x) = y''(-x) = f(y(-x)) = f(z(x))$, $z(0) = 0$, $z'(0) = -y'(0) = 0$. Hence, by uniqueness, $y(-x) = z(x) = y(x)$ for all $x \in \mathbb{R}$. In other words, $y(x)$ is even in $x$.
As in the 1st order (scalar-valued) ODE case, we have the following continuous dependence estimate and uniqueness theorem.
Theorem 4.4 (Continuous dependence estimate). Let $\vec f, \vec g$ be two continuous vector-valued functions defined on a rectangle
$$ R = \big\{ |x - x_0| \leq a, \; |\vec y - \vec y_0| \leq b \big\}, \quad a, b > 0, $$
and suppose $\vec f$ satisfies a Lipschitz condition on $R$ with Lipschitz constant $K$. Let $\vec\phi, \vec\psi$ be solutions of the problems $\vec y\,' = \vec f(x, \vec y)$, $\vec y(x_0) = \vec y_1$ and $\vec y\,' = \vec g(x, \vec y)$, $\vec y(x_0) = \vec y_2$ respectively on some interval $I$ containing $x_0$. If for $\varepsilon, \delta \geq 0$,
$$ |\vec y_1 - \vec y_2| \leq \delta, \qquad \big| \vec f(x, \vec y) - \vec g(x, \vec y) \big| \leq \varepsilon \quad \forall (x, \vec y) \in R, $$
then
$$ \big| \vec\phi(x) - \vec\psi(x) \big| \leq \delta e^{K|x - x_0|} + \frac{\varepsilon}{K}\big( e^{K|x - x_0|} - 1 \big) \quad \forall x \in I. $$
In particular, the problem $\vec y\,' = \vec f(x, \vec y)$, $\vec y(x_0) = \vec y_0$ has at most one solution on any interval containing $x_0$.
Theorem 4.5 (Gronwall’s lemma:II). Let u(t), p(t) and q(t) be non-negative continuous func-
tions defined on the interval I = [a, b]. Suppose the following inequality holds:
Z t
u(t)  p(t) + q(s)u(s) ds, t 2 I.
a
Then, we have
Z t ⇣Z t ⌘
u(t)  p(t) + p(⌧ )q(⌧ ) exp q(s) ds d⌧, t 2 I.
a ⌧
Rt
Proof. Let r(t) = a q(s)u(s) ds. Then r(·) is di↵erentiable in (a, b) with r0 (t) = q(t)u(t) and
r(a) = 0. Hence, by our given condition, one has
r0 (t) = q(t)u(t)  q(t){p(t) + r(t)} =) r0 (t) r(t)q(t)  p(t)q(t)
⇣ Z t ⌘ ⇣ Z t ⌘
=) r0 (t) r(t)q(t) exp q(s) ds  p(t)q(t) exp q(s) ds
a a
dh ⇣ Z t ⌘i ⇣ Z t ⌘
=) r(t) exp q(s) ds  p(t)q(t) exp q(s) ds
dt a a
⇣ Z t ⌘ Z t ⇣ Z ⌧ ⌘
=) r(t) exp q(s) ds  p(⌧ )q(⌧ ) exp q(s) ds d⌧
a a a
Z t ⇣Z t Z ⌧ ⌘
=) r(t)  p(⌧ )q(⌧ ) exp q(s) ds q(s) ds d⌧
a a a
Z t ⇣Z t ⌘
=) r(t)  p(⌧ )q(⌧ ) exp q(s) ds d⌧ .
a ⌧
Hence, we get
Z t ⇣Z t ⌘
u(t)  p(t) + r(t)  p(t) + p(⌧ )q(⌧ ) exp q(s) ds d⌧ .
a ⌧

Corollary 4.6. Let $u(t)$, $p(t)$ and $q(t)$ be non-negative continuous functions defined on the interval $I = [a,b]$. Suppose the following inequality holds:
$$ u(t) \leq p(t) + \int_a^t q(s)u(s)\,ds, \quad t \in I. $$
Assume that $p(\cdot)$ is non-decreasing. Then
$$ u(t) \leq p(t)\exp\Big( \int_a^t q(s)\,ds \Big), \quad t \in [a,b]. $$
Proof. From Theorem 4.5 and the non-decreasing property of $p(t)$, we have
$$ u(t) \leq p(t) + r(t) \leq p(t) + \int_a^t p(\tau)q(\tau)\exp\Big( \int_\tau^t q(s)\,ds \Big)\,d\tau \leq p(t)\Big\{ 1 + \int_a^t q(\tau)\exp\Big( \int_\tau^t q(s)\,ds \Big)\,d\tau \Big\} $$
$$ = p(t)\Big\{ 1 - \int_a^t \frac{d}{d\tau}\exp\Big( \int_\tau^t q(s)\,ds \Big)\,d\tau \Big\} = p(t)\exp\Big( \int_a^t q(s)\,ds \Big). \qquad \square $$

Example 4.5. Consider the system
$$ y_1' = y_1 + \varepsilon y_2, \quad y_2' = \varepsilon y_1 + y_2, \qquad (4.4) $$
where $\varepsilon$ is a positive constant.
i) Writing the system in vector form, we see that the vector-valued function $\vec f_\varepsilon(x, \vec y) = \begin{pmatrix} y_1 + \varepsilon y_2 \\ \varepsilon y_1 + y_2 \end{pmatrix}$ is continuous and also Lipschitz continuous on each strip $S_a$ with Lipschitz constant $K_a = 1 + \varepsilon$. Thus, in view of Corollary 4.3, every initial value problem for (4.4) has a solution which exists for all real $x$.
ii) Let $\vec\phi_\varepsilon$ be the solution of (4.4) satisfying $\vec\phi_\varepsilon(0) = (1,1)$, and let $\vec\psi$ be the solution of $y_1' = y_1$, $y_2' = y_2$ (with right-hand side $\vec g(x, \vec y) = (y_1, y_2)$) satisfying $\vec\psi(0) = (1,1)$. Then, for each real $x \geq 0$, by using the Lipschitz condition of $\vec f_\varepsilon$, we get that
$$ \big| \vec\phi_\varepsilon(x) - \vec\psi(x) \big| = \Big| \int_0^x \vec f_\varepsilon(s, \vec\phi_\varepsilon(s))\,ds - \int_0^x \vec g(s, \vec\psi(s))\,ds \Big| $$
$$ = \Big| \int_0^x \big[ \vec f_\varepsilon(s, \vec\phi_\varepsilon(s)) - \vec f_\varepsilon(s, \vec\psi(s)) \big]\,ds + \int_0^x \big[ \vec f_\varepsilon(s, \vec\psi(s)) - \vec g(s, \vec\psi(s)) \big]\,ds \Big| $$
$$ \leq \int_0^x \big| \vec f_\varepsilon(s, \vec\phi_\varepsilon(s)) - \vec f_\varepsilon(s, \vec\psi(s)) \big|\,ds + \int_0^x \big| \vec f_\varepsilon(s, \vec\psi(s)) - \vec g(s, \vec\psi(s)) \big|\,ds $$
$$ \leq (1 + \varepsilon)\int_0^x \big| \vec\phi_\varepsilon(s) - \vec\psi(s) \big|\,ds + \varepsilon \int_0^x |\vec\psi(s)|\,ds. $$
An application of Gronwall's lemma then implies
$$ \big| \vec\phi_\varepsilon(x) - \vec\psi(x) \big| \leq \varepsilon e^{(1+\varepsilon)x} \int_0^x |\vec\psi(s)|\,ds \to 0 \quad \text{as } \varepsilon \to 0. $$

4.1. Existence and uniqueness for linear systems: Consider the linear system $\vec y\,' = \vec f(x, \vec y)$, where the components of $\vec f$ are given by
$$ f_j(x, \vec y) = \sum_{k=1}^n a_{jk}(x)y_k + b_j(x), \quad j = 1, 2, \ldots, n, $$
and the functions $a_{jk}, b_j$ are continuous on an interval $I$ containing $x_0$. Now consider the strip $S_a = \{|x - x_0| \leq a, \; |\vec y| < \infty\}$, and suppose $a_{jk}, b_j$ are continuous on $|x - x_0| \leq a$. Then there exists a constant $K > 0$ such that $\sum_{j=1}^n |a_{jk}(x)| \leq K$ for all $k = 1, 2, \ldots, n$, and for all $x$ satisfying $|x - x_0| \leq a$. Note that
$$ \Big| \frac{\partial \vec f}{\partial y_k}(x, \vec y) \Big| = \big| \big( a_{1k}(x), a_{2k}(x), \ldots, a_{nk}(x) \big) \big| = \sum_{j=1}^n |a_{jk}(x)| \leq K. $$
Thus $\vec f$ satisfies a Lipschitz condition on $S_a$ with Lipschitz constant $K$. Hence we arrive at the following theorem.
Theorem 4.7. Let $\vec y\,' = \vec f(x, \vec y)$ be a linear system described as above. If $\vec y_0 \in \mathbb{R}^n$, there exists one and only one solution $\vec\phi$ of the IVP $\vec y\,' = \vec f(x, \vec y)$, $\vec y(x_0) = \vec y_0$.
Note that the linear system can be written in the equivalent form
$$ \vec y\,'(x) = A(x)\vec y(x) + \vec b(x), $$
where
$$ A(x) = \begin{pmatrix} a_{11}(x) & a_{12}(x) & \cdots & a_{1n}(x) \\ a_{21}(x) & a_{22}(x) & \cdots & a_{2n}(x) \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1}(x) & a_{n2}(x) & \cdots & a_{nn}(x) \end{pmatrix} \quad \text{and} \quad \vec b(x) = \begin{pmatrix} b_1(x) \\ b_2(x) \\ \vdots \\ b_n(x) \end{pmatrix}. $$
Example 4.6. Consider the homogeneous linear system $y_j' = \sum_{k=1}^n a_{jk}(x)y_k$ ($j = 1, 2, \ldots, n$), where the $a_{jk}$ are continuous on some interval $I$. It is easy to see that $\vec\phi = \vec 0$ is a solution; this is called the trivial solution. Let $K$ be such that $\sum_{j=1}^n |a_{jk}(x)| \leq K$. Let $x_0 \in I$, and let $\vec\phi$ be any solution of the linear homogeneous system. Now consider the two IVPs
$$ \vec y\,' = \vec f(x, \vec y), \quad \vec y(x_0) = \vec 0; \qquad \vec y\,' = \vec f(x, \vec y), \quad \vec y(x_0) = \vec\phi(x_0), $$
where the components of $\vec f$ are $f_j(x, \vec y) = \sum_{k=1}^n a_{jk}(x)y_k$. Then, according to the continuous dependence estimate theorem (with $\varepsilon = 0$), we have
$$ |\vec\phi(x)| = |\vec\phi(x) - \vec 0| \leq |\vec\phi(x_0) - \vec 0|\,e^{K|x - x_0|} = |\vec\phi(x_0)|\,e^{K|x - x_0|} \quad \forall x \in I. $$
For linear equations of order $n$, we have non-local existence.
Theorem 4.8. Let $a_0, a_1, \ldots, a_{n-1}$ and $b$ be continuous real-valued functions on an interval $I$ containing $x_0$. If $\alpha_0, \alpha_1, \ldots, \alpha_{n-1}$ are any $n$ constants, there exists one and only one solution $\phi$ of the ODE
$$ y^{(n)} + a_{n-1}(x)y^{(n-1)} + \ldots + a_0(x)y = b(x) \quad \text{on } I, $$
$$ y(x_0) = \alpha_0, \quad y'(x_0) = \alpha_1, \quad \ldots, \quad y^{(n-1)}(x_0) = \alpha_{n-1}. $$
Proof. Let $\vec y_0 = (\alpha_0, \alpha_1, \ldots, \alpha_{n-1})$. The given ODE can be written as a system of linear equations
$$ y_j' = y_{j+1} \quad (j = 1, 2, \ldots, n-1); \qquad y_n' = b(x) - a_{n-1}(x)y_n - \ldots - a_0(x)y_1. $$
Then, according to Theorem 4.7, the above problem has a unique solution $\vec\phi = (\phi_1, \phi_2, \ldots, \phi_n)$ on $I$ satisfying $\phi_1(x_0) = \alpha_0$, $\phi_2(x_0) = \alpha_1$, $\ldots$, $\phi_n(x_0) = \alpha_{n-1}$. But since
$$ \phi_2 = \phi_1', \quad \phi_3 = \phi_2' = \phi_1'', \quad \ldots, \quad \phi_n = \phi_1^{(n-1)}, $$
the function $\phi_1$ is the required solution on $I$. $\square$

5. General solution of linear equations

5.1. Linear independence and Wronskian:
Definition 5.1. Let $I$ be an interval in $\mathbb{R}$ and $u_1, u_2, \ldots, u_m$ be real-valued functions defined on $I$. We say that the functions $u_1, u_2, \ldots, u_m$ are
i) linearly dependent if there exist constants $c_1, c_2, \ldots, c_m$, not all zero, such that
$$ \sum_{i=1}^m c_iu_i(t) = 0 \quad \forall t \in I; $$
ii) linearly independent if they are not linearly dependent, i.e., $\sum_{i=1}^m c_iu_i(t) = 0 \; \forall t \in I \implies c_i = 0 \; \forall i = 1, 2, \ldots, m$.
Example 5.1. $\sin(t)$ and $\cos(t)$ are linearly independent on $\mathbb{R}$. To see this, let $c_1\sin(t) + c_2\cos(t) = 0$ for all $t$. Differentiating, $c_1\cos(t) - c_2\sin(t) = 0$. Multiplying the first equation by $\cos(t)$, the second one by $\sin(t)$ and subtracting, we obtain $c_2 = 0$. Now, returning to the first equation, we get $c_1\sin(t) = 0$ for all $t \in \mathbb{R}$. This implies that $c_1 = 0$. Hence $\sin(t)$ and $\cos(t)$ are linearly independent on $\mathbb{R}$.
Example 5.2. If $u_1(t)$ and $u_2(t)$ are two linearly independent functions on the interval $I$, and $v(t)$ is a function such that $v(t) > 0$ on $I$, then $u_1v$ and $u_2v$ are linearly independent on $I$.
Solution: Let $c_1u_1(t)v(t) + c_2u_2(t)v(t) = 0$. Since $v(t) > 0$ on $I$, we may divide the equation by $v(t)$ to get $c_1u_1(t) + c_2u_2(t) = 0$ for all $t \in I$. Since $u_1$ and $u_2$ are independent, we have $c_1 = 0 = c_2$. In other words, $u_1v$ and $u_2v$ are linearly independent on $I$.
Definition 5.2 (Wronskian). Let $I$ be an interval in $\mathbb{R}$ and let $u_1, u_2, \ldots, u_m$ be $(m-1)$-times differentiable real-valued functions defined on $I$. We define the Wronskian of $u_1, u_2, \ldots, u_m$, denoted by $W[u_1(t), u_2(t), \ldots, u_m(t)]$, as
$$ W[u_1(t), u_2(t), \ldots, u_m(t)] = \det \begin{pmatrix} u_1(t) & u_2(t) & \cdots & u_m(t) \\ u_1'(t) & u_2'(t) & \cdots & u_m'(t) \\ \vdots & \vdots & \ddots & \vdots \\ u_1^{(m-1)}(t) & u_2^{(m-1)}(t) & \cdots & u_m^{(m-1)}(t) \end{pmatrix}. $$
Theorem 5.1. Suppose that $W[u_1(t_0), u_2(t_0), \ldots, u_m(t_0)] \neq 0$ for some $t_0 \in I$. Then $u_1, u_2, \ldots, u_m$ are linearly independent.
Proof. Suppose that $u_1, u_2, \ldots, u_m$ are linearly dependent. Then there exist constants $c_1, c_2, \ldots, c_m$, not all zero, such that $\sum_{i=1}^m c_iu_i(t) = 0 \; \forall t \in I$. Differentiating $(m-1)$ times and evaluating at $t_0$, we have
$$ c_1u_1(t_0) + c_2u_2(t_0) + \ldots + c_mu_m(t_0) = 0, $$
$$ c_1u_1'(t_0) + c_2u_2'(t_0) + \ldots + c_mu_m'(t_0) = 0, $$
$$ \vdots $$
$$ c_1u_1^{(m-1)}(t_0) + c_2u_2^{(m-1)}(t_0) + \ldots + c_mu_m^{(m-1)}(t_0) = 0. $$
This can be written in the form $AC = 0$, where $A$ is the $m \times m$ matrix whose $j$-th row is $\big( u_1^{(j-1)}(t_0), \ldots, u_m^{(j-1)}(t_0) \big)$ and $C = (c_1, c_2, \ldots, c_m)^T$. Since $W[u_1(t_0), u_2(t_0), \ldots, u_m(t_0)] = \det A \neq 0$, we see that $A$ is invertible, and therefore $c_1 = c_2 = \ldots = c_m = 0$, which is a contradiction. This completes the proof. $\square$

Example 5.3. $f(t) = \sin(t)$ and $g(t) = t^3$ are linearly independent functions on the interval $(0, \pi)$. To see this, we need to show that the Wronskian at some point is non-zero. Note that
$$ W[\sin(t), t^3]\Big( \frac{\pi}{4} \Big) = \frac{1}{\sqrt{2}}\Big( 3\big( \tfrac{\pi}{4} \big)^2 - \big( \tfrac{\pi}{4} \big)^3 \Big) = \frac{1}{\sqrt{2}}\Big( \frac{\pi}{4} \Big)^2\Big( 3 - \frac{\pi}{4} \Big) \neq 0. $$
Hence $f(t) = \sin(t)$ and $g(t) = t^3$ are linearly independent functions on the interval $(0, \pi)$.
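Wronskians are convenient to compute symbolically. A minimal sympy sketch for Example 5.3, forming the determinant directly (an assumption of this note, not part of the original text):

    import sympy as sp

    t = sp.symbols('t')
    u1, u2 = sp.sin(t), t**3

    W = sp.Matrix([[u1, u2], [sp.diff(u1, t), sp.diff(u2, t)]]).det()
    print(sp.simplify(W))               # 3*t**2*sin(t) - t**3*cos(t)
    print(W.subs(t, sp.pi/4).evalf())   # nonzero, so sin(t), t^3 are independent on (0, pi)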
Remark 5.1. Let us make the following remarks:
i) If $u_1, u_2, \ldots, u_m$ are linearly dependent, then $W[u_1(t), u_2(t), \ldots, u_m(t)] = 0$ for all $t \in I$.
ii) The converse of i) is not true in general. For example, let $m = 2$, $u_1(t) = t^2$ and $u_2(t) = |t|t$. We first show that $u_1$ and $u_2$ are linearly independent. Let $c_1u_1(t) + c_2u_2(t) = 0$. Then
$$ (c_1 + c_2)t^2 = 0 \;\; (t > 0), \qquad (c_1 - c_2)t^2 = 0 \;\; (t < 0) \implies c_1 = c_2 = 0. $$
But
$$ W[t^2, |t|t] = \det \begin{pmatrix} t^2 & |t|t \\ 2t & 2|t| \end{pmatrix} = 0 \quad \forall t \in \mathbb{R}. $$
Theorem 5.2. Let $I$ be an interval in $\mathbb{R}$ and $a_i : I \to \mathbb{R}$ ($i = 0, 1, \ldots, m-1$) be continuous. Let $u_1, u_2, \ldots, u_m$ be $m$ solutions of the linear homogeneous ODE (2.1). Suppose that $u_1, u_2, \ldots, u_m$ are linearly independent. Then
$$ W[u_1(t), u_2(t), \ldots, u_m(t)] \neq 0 \quad \forall t \in I. $$
Proof. Suppose that the result is NOT true. Then there exists $t_0 \in I$ such that
$$ W[u_1(t_0), u_2(t_0), \ldots, u_m(t_0)] = 0. $$
Then there exists $C := (c_1, c_2, \ldots, c_m)^T \neq 0$ such that $AC = 0$, where
$$ A := \begin{pmatrix} u_1(t_0) & u_2(t_0) & \cdots & u_m(t_0) \\ u_1'(t_0) & u_2'(t_0) & \cdots & u_m'(t_0) \\ \vdots & \vdots & \ddots & \vdots \\ u_1^{(m-1)}(t_0) & u_2^{(m-1)}(t_0) & \cdots & u_m^{(m-1)}(t_0) \end{pmatrix}. $$
Now define $v(t) = \sum_{i=1}^m c_iu_i(t)$, $t \in I$. Then
$$ v(t_0) = c_1u_1(t_0) + c_2u_2(t_0) + \ldots + c_mu_m(t_0) = 0, $$
$$ v'(t_0) = c_1u_1'(t_0) + c_2u_2'(t_0) + \ldots + c_mu_m'(t_0) = 0, $$
$$ \vdots $$
$$ v^{(m-1)}(t_0) = c_1u_1^{(m-1)}(t_0) + c_2u_2^{(m-1)}(t_0) + \ldots + c_mu_m^{(m-1)}(t_0) = 0. $$
Since each $u_i$, $i = 1, 2, \ldots, m$, solves the linear homogeneous ODE (2.1), $v$ satisfies the following initial value problem:
$$ v^{(m)}(t) + a_{m-1}(t)v^{(m-1)}(t) + \ldots + a_1(t)v'(t) + a_0(t)v(t) = 0, \quad t \in I, $$
$$ v(t_0) = v'(t_0) = \ldots = v^{(m-1)}(t_0) = 0. $$
Thus $v(t) = 0$ for all $t \in I$. In other words, $\sum_{i=1}^m c_iu_i(t) = 0$ for all $t \in I$ with the above choice of the constants $c_1, c_2, \ldots, c_m$, not all zero, which is a contradiction. This finishes the proof. $\square$

Corollary 5.3. Let $u_1, u_2, \ldots, u_m$ be $m$ solutions of the linear homogeneous ODE (2.1). Then $u_1, u_2, \ldots, u_m$ are linearly independent if and only if
$$ W[u_1(t_0), u_2(t_0), \ldots, u_m(t_0)] \neq 0 \quad \text{for some } t_0 \in I. $$
Example 5.4. $\sin(x)$ and $\cos(x)$ are two linearly independent solutions of the homogeneous ODE $y''(x) + y(x) = 0$. Note that $\sin(x)$ and $\cos(x)$ solve the ODE. Moreover,
$$ W[\sin(x), \cos(x)] = -1 \neq 0 \quad \forall x \in \mathbb{R}. $$
Therefore, they are two linearly independent solutions of the homogeneous ODE $y''(x) + y(x) = 0$.
Theorem 5.4. The real vector space $X$ defined in Theorem 2.1 is finite dimensional and $\dim X = m$.
Corollary 5.5. If $u_1, u_2, \ldots, u_m$ are any $m$ linearly independent solutions of the linear homogeneous ODE (2.1), then any solution $u$ of (2.1) can be written as
$$ u(t) = c_1u_1(t) + c_2u_2(t) + \ldots + c_mu_m(t), \quad t \in I. $$
Example 5.5. The general solution of the ODE $y''(x) + y(x) = 0$ is given by
$$ y(x) = c_1\sin(x) + c_2\cos(x), \quad c_1, c_2 \in \mathbb{R}. $$
Theorem 5.6 (Abel's Theorem). Let $u_1, u_2, \ldots, u_m$ be $m$ solutions of the linear homogeneous ODE (2.1) on an interval $I$, and let $t_0$ be any point in $I$. Then
$$ W[u_1, u_2, \ldots, u_m](t) = \exp\Big( -\int_{t_0}^t a_{m-1}(s)\,ds \Big)\,W[u_1, u_2, \ldots, u_m](t_0). $$
As a consequence, $W[u_1, u_2, \ldots, u_m](t)$ is either identically zero or it never vanishes.


Proof. We prove this result for $m = 2$. In this case, $W[u_1, u_2] = u_1u_2' - u_1'u_2$, and hence $W'[u_1, u_2] = u_1u_2'' - u_1''u_2$. Since $u_1$ and $u_2$ are solutions of the linear homogeneous ODE, we have
$$ u_i''(t) = -a_1(t)u_i'(t) - a_0(t)u_i(t), \quad i = 1, 2. $$
Thus
$$ W'[u_1, u_2] = -a_1\big( u_1u_2' - u_1'u_2 \big) = -a_1W[u_1, u_2]. $$
We see that $W[u_1, u_2]$ satisfies the first order linear homogeneous equation $u'(t) + a_1(t)u(t) = 0$, and hence
$$ W[u_1(t), u_2(t)] = C\exp\Big( -\int_{t_0}^t a_1(s)\,ds \Big). $$
Putting $t = t_0$ in the above expression, we obtain $C = W[u_1(t_0), u_2(t_0)]$, and hence
$$ W[u_1(t), u_2(t)] = \exp\Big( -\int_{t_0}^t a_1(s)\,ds \Big)\,W[u_1(t_0), u_2(t_0)]. $$
For general $m$, one needs to make use of some general properties of the determinant. From the definition of $W = W[u_1, u_2, \ldots, u_m]$ as a determinant, it follows that its derivative $W'$ is a sum of $m$ determinants
$$ W' = V_1 + V_2 + \ldots + V_m, $$
where $V_k$ differs from $W$ only in its $k$-th row, and the $k$-th row of $V_k$ is obtained by differentiating the $k$-th row of $W$. The first $m - 1$ determinants are all zero, as they each have two identical rows. Hence
$$ W' = \det \begin{pmatrix} u_1 & u_2 & \cdots & u_m \\ u_1' & u_2' & \cdots & u_m' \\ \vdots & \vdots & \ddots & \vdots \\ u_1^{(m)} & u_2^{(m)} & \cdots & u_m^{(m)} \end{pmatrix} = \det \begin{pmatrix} u_1 & u_2 & \cdots & u_m \\ u_1' & u_2' & \cdots & u_m' \\ \vdots & \vdots & \ddots & \vdots \\ -\sum_{j=0}^{m-1} a_ju_1^{(j)} & -\sum_{j=0}^{m-1} a_ju_2^{(j)} & \cdots & -\sum_{j=0}^{m-1} a_ju_m^{(j)} \end{pmatrix}. $$
The value of the determinant is unchanged if we multiply any row by a number and add it to the last row. We multiply the first row by $a_0$, the second row by $a_1$, $\ldots$, the $(m-1)$-th row by $a_{m-2}$, and add these to the last row, to get
$$ W' = \det \begin{pmatrix} u_1 & u_2 & \cdots & u_m \\ u_1' & u_2' & \cdots & u_m' \\ \vdots & \vdots & \ddots & \vdots \\ -a_{m-1}u_1^{(m-1)} & -a_{m-1}u_2^{(m-1)} & \cdots & -a_{m-1}u_m^{(m-1)} \end{pmatrix} = -a_{m-1}W. $$
Thus $W$ satisfies the linear first order equation $u'(t) + a_{m-1}(t)u(t) = 0$, and hence
$$ W[u_1, u_2, \ldots, u_m](t) = \exp\Big( -\int_{t_0}^t a_{m-1}(s)\,ds \Big)\,W[u_1, u_2, \ldots, u_m](t_0). \qquad \square $$

Theorem 5.7. Let $u_p$ be a particular solution of the non-homogeneous ODE
$$ L(u) := u^{(m)}(t) + a_{m-1}(t)u^{(m-1)}(t) + \ldots + a_0(t)u(t) = b(t) \quad \forall t \in I. $$
Then every solution $u$ of the non-homogeneous ODE can be written as $u = v + u_p$, where $v$ is a solution of the corresponding linear homogeneous ODE.
Proof. Let $u$ be any solution and $u_p$ the particular solution. Then $u - u_p$ is a solution of the homogeneous ODE. Hence $u - u_p = c_1v_1(t) + c_2v_2(t) + \ldots + c_mv_m(t)$, where $v_1, v_2, \ldots, v_m$ are any linearly independent solutions of the linear homogeneous ODE. Since any solution $v$ of the homogeneous ODE can be written in the form $c_1v_1(t) + c_2v_2(t) + \ldots + c_mv_m(t)$, the result follows easily. $\square$
Later, we will use the variation of parameters method to find the particular solution $u_p$.
Example 5.6. Let $x_1, x_2, x_3$ and $x_4$ be solutions of the linear homogeneous ODE $x^{(4)}(t) - 3x^{(3)}(t) + 2x'(t) - 5x(t) = 0$ such that $W[x_1, x_2, x_3, x_4](0) = 5$. Then by Abel's theorem,
$$ W[x_1, x_2, x_3, x_4](6) = \exp\Big( -\int_0^6 (-3)\,ds \Big)\,W[x_1, x_2, x_3, x_4](0) = 5e^{18}. $$

Example 5.7. The functions u1 (t) = sin t and u2 (t) = t2 can not be solutions of a di↵erential
equation u00 (t) + a1 (t)u0 (t) + a0 (t)u(t) = 0, where a0 , a1 are continuous functions. To see this, we
first consider the Wronskian of u1 and u2 . Note that W [u1 (t), u2 (t)] = 2t sin t t2 cos t. Thus
W [u1 ( ⇡2 ), u2 ( ⇡2 )] = ⇡ 6= 0, and W [u1 (0), u2 (0)] = 0. Thus, in view of the previous theorem, u1
and u2 cannot be a solution.
Example 5.8. Let us explain why et , sin(t) and t cannot be solutions of a third order homoge-
neous equation with continuous coefficients. Notice that
⇡ ⇡ ⇡
W [et , sin(t), t](0) = 0; W [et , sin(t), t]( ) = (2 )e 2 6= 0.
2 2
26 A. K. MAJEE

If they are solutions of a third order homogeneous equation with continuous coefficients, then
by Abel’s theorem, Wronskian be either identically zero or nonzero. Therefore, et , sin(t) and t
cannot be solutions of a third order homogeneous equation with continuous coefficients.
5.2. Linear homogeneous equation with constant coefficients. We are interested in the
ODE
u(m) (t) + am 1u
(m 1)
(t) + . . . + a0 u(t) = 0 , where ai 2 R . (5.1)
Define the di↵erential operator L with constant coefficients as
Xm
di
L⌘ ai i , ai 2 R with am = 1 .
dt
i=0
For u : R ! R which is m-times di↵erentiable, we define
Xm
di u(t)
Lu(t) = ai .
dti
i=0
In this notation, we are interested in finding u such that Lu(t) = 0 for all t 2 R. Define a
polynomial
m m 1
p( ) = + am 1 + . . . + a1 + a0 = 0 . (5.2)
The polynomial p is called the characteristic polynomial of the operator L.
Remark 5.2. We observe the followings:
a) For a given polynomial p of degree m, we can associate a di↵erential operator Lp such
that p is the characteristic polynomial of Lp .
b) Let p be a polynomial of degree m such that p = p1 p2 , where p1 and p2 are polynomial
with real coefficients. Then for any m-times di↵erentiable function u,
Lp (u) = Lp1 Lp2 (u) = Lp2 Lp1 (u) .
Thus, if u is a solution of Lp1 u = 0 or Lp2 u = 0, then u is a solution of Lp u = 0.
By the fundamental theorem of algebra, we can write the characteristic polynomial as
p( ) = ( 1) . . . ( m) , i 2 C.
Note that i ’s may not be distinct. Suppose that i is not real. Then ¯ i is a root of p and hence
¯ i = j for some j. Hence we can write
m1 m k1
p( ) = ( 1) ...( k1 ) z1 ) n 1 (
( z̄1 )n1 . . . ( zk 2 ) n k 2 ( z̄k2 )nk2
P 1 P 2
where i 2 R, zj = xj + iyj with yj 6= 0, and ki=1 mi + 2 kj=1 nj = m. Thus
mi
⇥ ⇤n
p( ) = p1 p2 . . . pk1 q1 q2 . . . qk2 ; pi = ( i) , qj = ( zj )( z̄j ) j .
Therefore, if we find Lpi (u) = 0 or Lqj (u) = 0, then u is a solution of Lp u = 0.
d mi
Finding the solutions of Lpi (u) = 0: We need to find u such that dt i u = 0. This
d d d d 0
gives ( dt i )( dt i ) . . . ( dt i )u = 0. Now ( dt i )u = 0 =) u i u = 0 and hence
t d 2 d d
u = e i . Suppose that ( dt i ) u = 0. Set v = ( dt i )u. Then ( dt i )v = 0, and hence
t
v = e . Thus,
i

e it
= u0 iu

=) u0 e it
u ie it
=1
=) (ue it
)0 = 1
it
=) e u=t+C
ODES AND PDES 27

it it
=) u(t) = te + Ce .
Therefore, u = te i t + Ce i t solves Lpi u = 0. This yields that Lpi (te it ) = 0, and hence e it and
te i t are solutions of Lpi u = 0.
Theorem 5.8. The functions e i t , te i t , . . . , tmi 1e it are mi linearly independent solution of
Lpi u = 0 and hence solution of Lu = 0.
Proof. We can easily check that Lpi (tj e i t ) = 0 if j  mi 1. Let us show the indepen-
P i 1
dence of the functions. Let m j=0 Cj+1 t e
j i t = 0. Putting t = 0, we get C = 0, and hence
1
Pmi 1 j 1 t
j=1 Cj+1 t e = 0. Again putting t = 0 in the above expression, we have C2 = 0. Contin-
i

uing like this, we get C1 = C2 = . . . = Cmi = 0. This completes the proof. ⇤


d d
Finding the solutions of Lqj (u) = 0: Consider the case nj = 1. Then we have ( dt z̄j )( dt
zj )u = 0, and hence ezj t is a solution. Now
⇥ ⇤
ezj t = exj t+iyj t = exj t cos(yj t) + i sin(yj t)
Lqj u = 0 gives that its real part and imaginary part are zero. Again, if u = u1 + iu2 , then
Lu = L(u1 ) + iL(u2 ) and hence L(u) = 0 i↵ L(u1 ) = 0 and L(u2 ) = 0. This implies that
exj t cos(yj t), exj t sin(yj t) are solutions of Lqj u = 0, and they are linearly independent.
Theorem 5.9. The functions
exj t sin(yj t), texj t sin(yj t), . . . , tnj 1 xj t
e sin(yj t) ,
xj t xj t n j 1 xj t
e cos(yj t), te cos(yj t), . . . , t e cos(yj t)
are 2nj -linearly independent solutions of Lqj u = 0. Moreover, a basis for the space of solutions
of Lu = 0 is given by
tj e it
j = 0, 1, . . . , mi 1 (i = 1, 2, . . . , k1 )
j xi t j xi t
t e sin yi t, t e cos yi t j = 0, 1, . . . , ni 1 (i = 1, 2, . . . , k2 ) .
Example 5.9. Solve the initial value problem
2y 00 + y 0 y = 0, y(0) = 1, y 0 (0) = 2 .
We need to find two linearly independent solutions of the above ODE. The characteristic poly-
nomial p in this case is given by p( ) = 2 2 + 1. Since p( ) = (2 1)( + 1), the roots of
1
p( ) are 1 and 2 . Thus the general solution is given by
1
x
y(x) = c1 e + c2 e 2 x .
In view of the initial conditions, we have
1
c1 + c2 = 1, c1 + c2 = 2 .
2
1
Solving the above equation, we get c1 = 1 and c2 = 2. Therefore, y(x) = e x + 2e 2 x is the
desired solution.
Example 5.10. Find a general solution of the ODE:
y (4) + y = 0.
This equation arises in the study of the deflection of beams. The characteristic polynomial is
given by p( ) = 4 + 1. Now p( ) can be written as
p p p
p( ) = ( 2 + 1)2 ( 2 )2 = 2 + 2 + 1 2
2 +1 .
28 A. K. MAJEE

Thus the roots of p( ) are given by


1 1
p (1 ± i), p (1 ± i) .
2 2
Thus every real solution has the form
x ⇥
p x x ⇤ px ⇥ x x ⇤
y(x) = e 2 c1 cos( p ) + c2 sin( p ) + e 2 c3 cos( p ) + c4 sin( p )
2 2 2 2
where c1 , c2 , c3 and c4 are real constants.
Example 5.11. Let us find the solution of the initial-value problem:
y (3) + y = 0, y(0) = 0, y 0 (0) = 1, y 00 (0) = 0 .
The characteristic
p
polynomial p( ) = 3 + 1 = ( + 1)( 2 + 1). Thus root of p is given by
1± 3i
1, 2 . Therefore, the general solution is given by
p p
x⇥ 3 3 ⇤
x
y(x) = c1 e + e 2 c2 cos( x) + c3 sin( x) .
2 2
Note that
y(0) = 0 =) c1 + c2 = 0
p
y 0 (0) = 1 =) 2c1 + c2 + 3c3 = 2
p
y 00 (0) = 0 =) c1 c2 + 3c3 = 0 .
2
Solving the above equations, we get c1 = = 25 and c3 = 5p
5 , c2
4
3
. Thus, the solution is
p p
2 x x ⇥2 3 4 3 ⇤
y(x) = e +e 2 cos( x) + p sin( x) .
5 5 2 5 3 2
5.3. Finding particular solution to non-homogeneous ODE. Let ui (1  i  m) be m-
linearly independent solutions Pm to the linear homogeneous ODE (2.1). We want to find functions
cPi (t) such that up (t) = i=1 ciP
(t)ui (t) is a solution to the⇤non-homogeneous ODE. Let u(t) =
m 0 (t) = m ⇥ 0 0 (t) . Assume that
Pm 0
i=1 c i (t)u i (t). Then u i=1 i c (t)u i (t) + c i (t)u i i=1 ci (t)ui (t) = 0.
Then
m
X X m
⇥ 0 ⇤
u0 (t) = ci (t)u0i (t) and hence u00 (t) = ci (t)u0i (t) + ci (t)u00i (t) .
i=1 i=1
Pm 0 0
Pm
Again assume that i=1 ci (t)ui (t) = 0. Then u00 (t) = 00
i=1 ci (t)ui (t). Therefore by assuming
m
X m
X Xm
(m 2)
c0i (t)ui (t) = 0, c0i (t)u0i (t) = 0, . . . , c0i (t)ui (t) = 0 ,
i=1 i=1 i=1
we get
m
X (j)
u(j) (t) = ci (t)ui (t) = 0 j = 0, 1, . . . , m 1.
i=1
Pm ⇥ 0 (m 1) (m) ⇤
Then, u(m) (t) = i=1 ci (t)ui (t) + ci (t)ui (t) . Now u satisfies the non-homogeneous
equation i↵
m
X Xm m
X
⇥ 0 (m 1) (m) ⇤ (m 1)
ci (t)ui (t) + ci (t)ui (t) + am 1 (t) ci (t)ui (t) + . . . + a0 (t) ci (t)ui (t) = b(t)
i=1 i=1 i=1
m
X m
X h i
(m 1) (m) (m 1)
, c0i (t)ui (t) + ci (t) ui (t) + am 1 (t)ui (t) + . . . + a0 (t)ui (t) = b(t)
i=1 i=1
ODES AND PDES 29

m
X (m 1)
, c0i (t)ui (t) = b(t) .
i=1
We now arrive at the following theorem.
P
Theorem 5.10. The function u(t) = m i=1 ci (t)ui (t) is a particular solution of the non-homogeneous
ODE
u(m) (t) + am 1 (t)u
(m 1)
(t) + . . . + a0 (t)u(t) = b(t) t2I
if c1 , c2 , . . . , cm satisfy the the following matrix equation AC = B, where
2 3 2 03 2 3
u1 u2 ... um c1 0
6 u01 u 0 . . . u 0 7 6 0 7 6 7
6 2 m 7 6 c2 7 6 07
A = 6 .. .. .. .. 7 , C = 6 .. 7 and B = 6 .. 7 .
4 . . . . 5 4 . 5 4.5
(m 1) (m 1) (m 1) 0
u1 u2 . . . um cm b
Remark 5.3. Note that, since u1 , u2 , . . . , um are linearly independent, the Wronskian is non-
zero. Thus the above matrix equation has an unique solution for C. Moreover, by using
Cramer’s rule we have explicit formula for c0i , namely
Wi (t)b(t)
c0i (t) = (1  i  m)
W [u1 , u2 , . . . , um ](t)
where Wi is the determinant obtained from W [u1 , u2 , . . . , ui , . . . , um ] by replacing i-th column
Rt
by 0, 0, . . . , 0, 1. One can take ci (t) = t0 c0i (s) ds for some t0 2 I. Thus, the particular solution
up now takes of the form
Xm Z t
Wi (s)b(s)
up (t) = ui (t) ds .
t0 W [u 1 , u2 , . . . , um ](s)
i=1

Remark 5.4. Fix t0 2 I. Then the particular solution up of the non-homogeneous ODE
u(m) (t) + am 1 (t)u(m 1) (t) + . . . + a0 (t)u(t) = b(t) t 2 I
P Rt Wi (s)b(s)
given by up (t) = mi=1 ui (t)ci (t), where ci (t) = t0 W [u1 ,u2 ,...,um ](s) ds satisfies the following initial
conditions
up (t0 ) = u0p (t0 ) = . . . = u(m
p
1)
(t0 ) = 0 .
Pm 0 (j)
Note that, since i=1 ci (t)ui (t) = 0 for all 0  j  (m 2), we have
m
X (j)
u(j)
p (t) = ci (t)ui (t) 8 0  j  (m 1) .
i=1
(m 1)
Since ci (t0 ) = 0, it easily follows that up (t0 ) = u0p (t0 ) = . . . = up (t0 ) = 0.
Example 5.12. Find a general solution of the ODE
y (3) + y 00 + y 0 + y = 1 .
The characteristic polynomial p( ) associated to the homogeneous ODE is given by
3 2 2
p( ) = + + + 1 = ( + 1)( + 1) = ( + 1)( + i)( i)
Thus, the roots of p are i, i, and 1. Hence three independent solutions of the homogeneous
ODE are given by
u1 (t) = e t , u2 (t) = cos(t), u3 (t) = sin(t) .
30 A. K. MAJEE

Let us first calculate W [u1 , u2 , u3 ](0). Notice that


2 3
1 1 0
W [u1 , u2 , u3 ](0) = det 4 1 0 15 = 2 .
1 1 0
Rt
Thus, W [u1 , u2 , u3 ](t) = exp[ 0 ds]W [u1 , u2 , u3 ](0) = 2e t . Observe that c0i (t) = 12 et Wi (t). To
find a particular solution, we need to calculate Wi (t). Note that
2 3
0 cos(t) sin(t)
W1 (t) = det 40 sin(t) cos(t) 5 = 1 ,
1 cos(t) sin(t)
2 t 3
e 0 sin(t) ⇥ ⇤
W2 (t) = det 4 e t 0 cos(t) 5 = e t sin(t) + cos(t) ,
e t 1 sin(t)
2 t 3
e cos(t) 0 ⇥ ⇤
W3 (t) = det 4 e t sin(t) 05 = e t sin(t) + cos(t) .
e t cos(t) 1
Thus,
Z t
1 1
c1 (t) = es ds = [et 1]
2 0 2
Z t
1 1⇥ ⇤
c2 (t) = sin(s) + cos(s) ds =
cos(t) sin(t) 1 ,
2
0 2
t Z
1 1⇥ ⇤
c3 (t) = sin(s) + cos(s) ds = cos(t) + sin(t) 1 .
2 0 2
Thus the particular solution up is given by
3
X 1⇥ ⇤
up (t) = ci (t)ui (t) = 1 cos(t) + sin(t) + e t .
2
i=1
Hence, a general solution to the non-homogeneous ODE is given by
3
X
t
u(t) = up + ci ui (t) = 1 + c1 e + c2 cos(t) + c3 sin(t) ,
i=1
where c1 , c2 and c3 are arbitrary constants.
Example 5.13. We would like to find general solution of the ODE
y 000 y 0 = x.
The characteristic polynomial p( ) = 3 , and its roots are given by 0, 1, 1. Thus, three
independent solution of the homogeneous ODE y 000 y 0 = 0 are u1 (x) = 1, u2 (x) = ex and
u3 (x) = e x . Note that W [u1 , u2 , u3 ](0) = 2, and hence by Abel’s theorem, W [u1 , u2 , u3 ](t) =
Rt
exp[ 0 0 ds]W [u1 , u2 , u3 ](0) = 2. Observe that c0i (t) = 2t Wi (t). To find a particular solution,
we need to calculate Wi (t). Note that
2 3 2 3
0 et e t 1 0 e t
W1 (t) = det 40 et e t 5 = 2 , W2 (t) = det 40 0 e t5 = e t ,
1 et e t 0 1 e t
2 3
1 et 0
W3 (t) = det 40 et 05 = et .
0 et 1
ODES AND PDES 31

Thus,
Z t Z
t2 1 t 1⇥ ⇤
c1 (t) = s ds = , c2 (t) = se s ds = 1 e t te t ,
0 2 2 0 2
Z t
1 1⇥ ⇤ t2 1
c3 (t) = ses ds = 1 et + tet , and hence up = 1 + (et e t) .
2 0 2 2 2
Therefore, a general solution to the non-homogeneous ODE is given by
t2
u(t) = c1 + c2 et + c3 e t
+ 1,
2
where c1 , c2 and c3 are arbitrary constants.
5.4. Euler’s Equation: Consider the equation
tm u(m) (t) + am 1t
m 1 (m 1)
u (t) + . . . + a1 tu0 (t) + a0 u(t) = b(t) (5.3)
where a0 , a1 , . . . , am 1 are constants. Consider t > 0. Let s = log(t) (for t < 0, we must use
s = log( t)). Then
du du 1
=
dt ds t
d2 u d2 u du 1
=
dt2 ds2 ds t2
d3 u d3 u d2 u du 1
3
= 3
3 2 +2
dt ds ds ds t3
.. .. ..
. . .
dm u ⇣ dm u dm 1 u du ⌘ 1
= + C m 1 + . . . + C 1 ,
dtm dsm dsm 1 ds tm
for some constants C1 , C2 , . . . , Cm 1 . Substituting these in (5.3), we obtain a non-homogeneous
ODE with constant coefficients
dm u dm 1 u du
+ B m 1 + . . . + B1 + B0 u = b(es )
dsm dsm 1 ds
for some constants B0 , B1 , . . . , Bm 1 . One can solve this ODE and then substitute s = log(t) to
get the solution of the Euler’s equation (5.3).
Example 5.14. Solve
x2 y 00 + xy 0 y = x3 x > 0.
dy d2 y dy
Let x = eu . Then xy 0 = du and x2 y 00 = du2 du . Therefore, we have
d2 y
y = e3u u 2 R .
du2
The characteristic polynomial corresponding to the homogeneous ODE is given by
2
p( ) = 1 = ( + 1)( 1) .
Therefore, y1 = eu and y2 = e u are two linearly independent solutions. Note that W [y1 , y2 ](u) =
2. Moreover
  u
0 e u u e 0
W1 (u) = det = e , W2 (u) = det u = eu .
1 e u e 1
Hence
Z u Z u
1 1⇥ ⇤ 1 1
c1 (u) = e dr = e2u
2r
1 , c2 = e4r dr = 1 e4u ,
2 0 4 2 0 8
32 A. K. MAJEE

and therefore the particular solution yp is given by


1 1 u 1
yp = e3u e + e u
.
8 4 8
Therefore the general solution is
1 C 2 x3
y = C1 eu + C2 e u
+ e3u = C1 x + + ,
8 x 8
where C1 , C2 are arbitrary constants.
5.5. On Comparison theorems of Sturns: Consider a general 2nd order linear homogeneous
ODE
y 00 (x) + p(x)y 0 (x) + q(x)y(x) = 0 , p, q 2 C(I) . (5.4)
We know that, if y1 (x) is a solution of (5.4), then cy1 (x) is also a solution, where c is a constant.
Is c(x)y1 (x) a solution? For some particular case, answer is yes, and it is given by the following
theorem.
Theorem 5.11. Let y1 (x) be a solution of (5.4) with y1 (x) 6= 0 on I. Then
Z R
e p(x) dx
y2 (x) = y1 (x) dx
y12 (x)
is a solution of (5.4). Moreover, y1 and y2 are linearly independent.
Proof. Let y2 (x) = v(x)y1 (x). We would like to find v(x) such that y2 (x) satisfies (5.4). To do
so, suppose y2 (x) satisfies (5.4). Then by calculating y200 (x) + p(x)y20 (x) + q(x)y2 (x), we see that
v 0 (x)[2y10 (x) + p(x)y1 (x)] + y1 (x)v 00 (x) = 0
where we have used the fact that y1 is a solution of (5.4). Let w = v 0 . Then w satisfies a first
oder ODE given by
2y 0 (x)
w0 (x) + [ 1 + p(x)]w(x) = 0.
y1 (x)
Hence by the method of variation of parameters, we obtain
R
e p(x) dx
w(x) = c .
y12 (x)
Since we only need one function v(x) so that v(x)y (x) is a solution, we can let c = 1 and hence
R e R p(x) dx R e R p(x)1 dx
v(x) = y 2 (x)
dx. Thus, y2 (x) = y1 (x) y 2 (x)
dx is a solution of (5.4).
1 1 R
Let us calculate the Wronskian of y1 and y2 . It is easy to see that W [y1 , y2 ](x) = e p(x) dx 6=
0. Therefore, y1 and y2 are linearly independent. ⇤
Example 5.15. Find a general solution of
y0 1
y 00 + 2 y = 0 , x > 0.
x x
Note that y1 = x is a solution and Ry1 (x) 6= 0. To find another independent solution, using above
R 1 dx
theorem, we obtain y2 (x) = x e x2 x dx = x ln(x). Therefore, general solution is given by
y(x) = c1 x + c2 x ln(x) .
Example 5.16. Find general solution of the ODE
1
(3t 1)2 y 00 (t) + 3(3t 1)y 0 (t) 9y(t) = 0 , t > .
3
ODES AND PDES 33

Note that y1 (t) = (3t 1) is a solution, and y1 (t) 6= 0. To find another independent solution,
using Theorem 5.11, we obtain
R 3
Z
e 3t 1 dt 1
y2 (t) = (3t 1) 2
dt = .
(3t 1) 6(3t 1)
Therefore, general solution is given by
1 1
y(t) = c1 (3t 1) c2 , t> .
6(3t 1) 3
1
R
One can easily check that the substitution x(t) = y(t)e 2 p(t) dt transform the equation
x + p(t)x0 + q(t)x = 0 into the form y 00 + P (t)y(t) = 0, where p, q are continuous functions such
00

that p0 is continuous. Therefore, instead of studiyng the equation of the form x00 +p(t)x0 +q(t)x =
0, we will study the equation of the form
y 00 + ↵(x)y(x) = 0.
Oscillatory behaviour of solutions: Consider the second order linear homogeneous equation
y 00 + p(x)y(x) = 0 . (5.5)
For simplicity, we assume that p(x) is continuous everywhere.
Definition 5.3. We say that a nontrivial solution y(x) of (5.5) is oscillatory (or it oscillates)
if for any number T , y(x) has infinitely many zeros in the interval (T, 1); or equivalently, for
any number ⌧ , there exists a number ⇠ > ⌧ such that y(⇠) = 0. We also call the equation (5.5)
oscillatory if it has an oscillatory solution.
Consider the equation y 00 + 4y = 0. Two independent solutions are y1 (x) = sin(2x) and
y2 (x) = cos(2x). Note that y1 (x) three zeros on (0, 2⇡). Moreover, between two consecutive
zeros of y1 (x), there is only one zero of y2 (x). We have the following general result.
Theorem 5.12 (Sturm Separation Theorem). Let y1 (x) and y2 (x) be two linearly independent
solutions of (5.5), and suppose a and b are two consecutive zeros of y1 (x) with a < b. Then
y2 (x) has exactly one zero in the interval (a, b).
Proof. Notice that y2 (a) 6= 0 6= y2 (b)( otherwise y1 and y2 would have common zero and hence
their Wronskian would be zero, contradicting the fact that they are linearly independent). Sup-
pose y2 (x) 6= 0 on (a, b). Then y2 (x) 6= 0 on [a, b]. Define h(x) = yy12 (x) (x) . Then h satisfies all
the conditions of Rolle’s theorem. Hence there exists c 2 (a, b) such that h0 (c) = 0. In other
words, W [yy12,y 2 ](c)
= 0. Since y2 (c) 6= 0, W [y1 , y2 ](c) = 0, a contradiction as y1 and y2 are linearly
2 (c)
independent. Thus, there exists c 2 (a, b) such that y2 (c) = 0.
We now show the uniqueness. Suppose there exist c1 , c2 2 (a, b) such that y2 (c1 ) = y2 (c2 ) = 0.
Then, by what we have just proved, there would exist a number d between c1 and c2 such that
y1 (d) = 0, contradicting the fact that a and b are consecutive zeros of y1 (x). ⇤
Example 5.17. Show that between any two consecutive zeros of sin(t), there exists only one zero
of a1 sin(t) + a2 cos(t), where a1 , a2 2 R with a2 6= 0. To see this, we apply Sturm Separation
Theorem. Note that y1 (t) := sin(t) and y2 (t) := a1 sin(t) + a2 cos(t) are two solutions of the
ODE y 00 (t) + y(t) = 0. Since W [y1 , y2 ](t) = a2 6= 0, y1 and y2 are linearly independent.
Therefore, by Theorem 5.12, between any two consecutive zeros of sin(t), there exists only one
zero of a1 sin(t) + a2 cos(t).
In view of Theorem 5.12, we arrive at the following corollary.
Corollary 5.13. If (5.5) has one oscillatory solution, then all of its solutions are oscillatory.
34 A. K. MAJEE

Theorem 5.14 (Sturm Comparison Theorem). Consider the equations


y 00 (x) + ↵(x)y(x) = 0 , (5.6)
y 00 (x) + (x)y(x) = 0 . (5.7)
Suppose that y↵ (x) is a nontrivial solution of (5.6) with consecutive zeros at x = a and x = b.
Assume further that ↵, 2 C[a, b] and ↵(x)  (x) on [a, b]. Let y (x) be any nontrivial solution
of (5.7). Then there exists a number c with a < c < b such that y (c) = 0 unless ↵(x) = (x)
on (a, b).
Proof. W.L.O.G, we assume that y↵ (x) > 0 on the interval (a, b). Suppose y does not have
a zero on (a, b). W.L.O.G, let y > 0 on (a, b). Now, multiplying (5.6) by y (x) and (5.7) by
y↵ (x), and then subtracting the resulting equations, we obtain
y (x)y↵00 y↵ (x)y 00 + (↵(x) (x))y↵ (x)y (x) = 0
0
=) y y↵0 y↵ y 0
= ( (x) ↵(x))y↵ (x)y (x)
Z b Z b
0
=) y y↵0 y↵ y 0 dx = ( (x) ↵(x))y↵ (x)y (x) dx
a a
Z b
=) y (b)y↵0 (b) y↵0 (a)y (a) = ( (x) ↵(x))y↵ (x)y (x) dx .
a
If ↵(x) 6= (x) on (a, b), there exists x0 2 (a, b) such that ↵(x0 ) < (x0 ) and hence (x) ↵(x) >
0 in a nbd. of x0 . Therefore, by positivity of y↵ and y on (a, b), we see that
Z b
( (x) ↵(x))y↵ (x)y (x) dx > 0 .
a
On the other hand, since y > 0 on (a, b), we must have y (a) 0 and y (b) 0. Moreover,
since y↵ (x) > 0 on the interval (a, b) and y↵ (a) = 0 = y↵ (b), we have y↵0 (a) > 0 and y↵0 (b) < 0.
Therefore,
Z b
0 0
0 y (b)y↵ (b) y↵ (a)y (a) = ( (x) ↵(x))y↵ (x)y (x) dx > 0 —a contradiction !
a
Thus, there exists at least one c 2 (a, b) such that y (c) = 0. ⇤
Example 5.18. We now show that any solution of y 00 + x2 y = 0 has infinitely many zeros
there. Note that all solutions of y 00 + y = 0 are oscillatory. Consider the problem on [1, 1). On
this interval, 1  x2 . Hence by Sturm Comparison Theorem, there exists infinitely many zeros
of in [1, 1).
Example 5.19. Show that any solution of y 00 (t) + 2y 0 (t) + t2 y(t) = 0 on the interval [2, 1)
has infinitely many zeros.
Solution: Consider the transformation z(t) = (t)et on the given interval [2, 1). Note that if
z(t) has infinitely many zeros, then (t) must have infinitely many zeros. Observe that, since
(t) satisfy the given ODE, z(t) satisfies the following ODE
z 00 (t) + (t2 1)z(t) = 0.
Note that 1 := ↵(t)  (t) := 1 on the interval [2, 1). Since all solutions of y 00 (t) +
t2
↵(t)y(t) = 0 has infinitely many zeros on [2, 1), by Sturm comparison theorem, z(t) has infin-
itely many zeros and hence (t) has infinitely many zeros.
ODES AND PDES 35

6. Sturm Liouville eigen-value problem:


Consider the boundary value problem
y 00 + A(x)y 0 + B(x)y(x) + C(x)y(x) = 0 , y(a) = 0 = y(b) ,
where
Rx a < b, is a real parameter and A, B, C 2 C[a, b]. Multiplying the equation by p(x) =
A(s) ds
ea , and setting r(x) = p(x)B(x), q(x) = p(x)C(x), the original equation becomes
0
p(x)y 0 + r(x)y(x) + q(x)y(x) = 0 .
Note here that p(t) > 0 and p 2 C 1 [a, b]. We simplify the equation by letting r(x) = 0(
equivalent to letting B(x) = 0). Thus, we consider the following boundary-value problem
0
p(x)y 0 + q(x)y(x) = 0 in [a, b]; y(a) = 0 = y(b) , (6.1)
where p 2 C 1 [a, b] with p > 0 and 0 6= q 2 C[a, b]. Note that one of the solutions of (6.1) is the
trivial solution y(x) ⌘ 0.
Definition 6.1. We say that is an eigen value of (6.1) if it has a non-trivial solution, called
eigenfunction, corresponding to .
Example 6.1. Consider the problem
u00 = u in (0, 1) (6.2)
u(0) = 0 = u(1) , (6.3)
where is a given constant. Let us consider the following case.
Case 1: < 0. Then, = k 2 for some k > 0. A solution of (6.3) takes the form
kt
u(t) = c1 e + c2 ekt .
Since u(0) = u(1) = 0, we get c1 + c2 = 0, and c1 e k + c2 ek = 0. Thus, c1 = c2 = 0. This
implies that for < 0, this problem does not have a non-trivial solution.
Case 2: = 0. in this case, it is easy to check that the problem does not have any non-trivial
solution.
Case 3: > 0. Then a general solution of u00 = u is
p p
u(t) = c1 sin( t) + c2 cos( t).
p p
Now u(0) = 0 gives c2 = 0, and hence u(t) = c1 sin(p t). u(1)p= 0 then gives c1 sin( ) = 0.
Since we are looking for non-trivial solution, sin( ) = 0, i.e., = n⇡ or
2 n2 ⇡ 2 n = 1, 2, . . . .
Conclusion: The problem (6.3) has a non-trivial solution only when 2 n2 ⇡ 2 n = 1, 2, . . . .
Example 6.2. Consider the boundary value problem y 00 + my = 0 with y(a) = y(b) = 0. Then,
for m > 0, the eigenvalues are given by
k2 ⇡2
k = , k = 1, 2, . . . .
m(b a)2
Theorem 6.1. If q(x) > 0, then the eigen values of (6.1) are strictly positive.
Theorem 6.2. Let 1 and 2 be two eigen function of (6.1) associated to 1 and 2 respectively
with 1 6= 2 . Then
Z b
q(t) 1 (t) 2 (t) dt = 0.
a
As a consequence, eigenfunctions corresponding to di↵erent eigenvalues are linearly independent.
36 A. K. MAJEE

Proof. Since 1 and 2 are eigen functions, we have


0 0 0 0
(p 1) + 1q 1 = 0 , (p 2) + 2q 1 = 0.
Multiplying the first equation by 2 and then integrating by parts from a to b along with
boundary condition, we obtain
Z b Z b Z b
0 0
1 q 1 2 dt = (p 1 ) 2 dt = p 01 02 dt .
a a a
Rb Rb 0 0 dt.
Similarly, we have a 2 q 1 2 dt = a p 1 2 Thus,
Z b
( 1 2 )q 1 2 dt = 0.
a
Rb
Since 1 6= 2 , we have a q(t) 1 (t) 2 (t) dt = 0.
Suppose that 1 and 2 are linearly dependent. Then there exists a non-zero constant ↵ such
that 2 = ↵ 1 . Then we have
Z b
0=↵ q(t) 21 (t) dt a contradiction!.
a

In the previous example, we have noticed that there exists infinitely many eigenvalues 0 <
k " 1. This result holds for the equation (6.1) when q(t) > 0.
Theorem 6.3. Suppose that q(t) > 0. Then there exist infinitely many positive eigenvalues k
such that
0 < 1 < 2 < . . . < k < . . . , and lim k = 1.
k!1

Variational characterization of the smallest eigenvalue 1 : Denote by k [q] the eigen-


values of (6.1) and by k (t) a corresponding eigenfunction. Let 1 [q] is the smallest eigen value.
Then by multiplying the equation (p 01 )0 + 1 [q]q 1 by 1 and then integrating by parts, we have
Z b Z b Rb 0 2
2 0 2 a p(t)( 1 ) dt
1 [q] q(t) 1 (t) dt = p(t)( 1 ) dt =) 1 [q] = R b
,
a a q(t) 2 (t) dt
a 1
where in the last last line we have used the fact that q(t) > 0. Let E denotes the class of
functions 2 C 1 (a, b) such that (a) = 0 = (b). Then
Rb 0 2
a p(t)( ) dt
1 [q]  Rb , 8 2E.
q(t) 2 (t) dt
a
Moreover,
Rb
a p(t)( 0 )2 dt
1 [q] = min R( ), where R( ) = R b , called Rayleigh Quotient.
2E q(t) 2 (t) dt
a

Example 6.3. Consider the problem x00


+ x = 0 with boundary condition x(0) = 0 = x(⇡).
Then the eigen values are k = k 2 , k = 1, 2, . . .. Hence by variational characterization of first
eigenvalue, we have Z ⇡ Z ⇡
2
(t) dt  ( 0 )2 dt , 8 2 E.
0 0
The above inequality is known as Poincaré inequality.
One can use variational characterization of the smallest eigenvalue to arrive at the following
theorem.
ODES AND PDES 37

Theorem 6.4. Let 1 [qi ], i = 1, 2 be the first eigenvalues of (px0 )0 + qi x = 0, x(a) = 0 = x(b).
If q1 (t)  q2 (t) for all t 2 [a, b], then
1 [q2 ]  1 [q1 ] .
˜
Let 1 [q] resp. 1 [q] be the first eigenvalue of (px0 )0 + qx = 0 resp. (p̃x0 )0 + qx = 0 with
boundary conditions x(a) = 0 = x(b). If p(t)  p̃ for all t 2 [a, b], then
˜
1 [q]  1 [q] .

Example 6.4. We would like to estimate the first eigenvalue 1 [q] of


0 0
(p(t)x ) + q(t)x = 0, x(a) = 0 = x(b)
under the assumptions 0 < ↵  p(t)  , and 0 < m  q(t)  M in [a, b]. Let us denote ¯ 1 [q]
resp. ˜ 1 [q] the first eigen value of (↵x0 )0 + q(t)x = 0 resp. ( x0 )0 + q(t)x = 0 with boundary
conditions x(a) = 0 = x(b). Then in view of previous theorem, ¯ 1 [M ]  ¯ 1 [q] and ˜ 1 [q]  ˜ 1 [m].
Again since 0 < ↵  p(t)  , we have ¯ 1 [q]  1 [q]  ˜ 1 [q]. Combining these two, we have
¯ 1 [M ]  1 [q]  ˜ 1 [m]. Note that, ¯ 1 [M ] = ↵⇡2 2 and ˜ 1 [m] = ⇡2
M (b a) m(b a)2
. Thus,
↵⇡ 2 ⇡2
 1 [q]  .
M (b a)2 m(b a)2
Example 6.5. Estimate the first eigen value of x00 + (1 + t)x = 0 x(0) = x(1) = 0. Note here
that q(t) = 1 + t, and hence 1  q(t)  2 for all t 2 [0, 1]. Thus, 1 [2]  1 [q]  1 [1]. In other
words,
⇡2
 1 [q]  ⇡ 2 .
2
Remark 6.1. We have considered only the Dirichlet boundary conditions x(a) = 0 = x(b). One
could also consider the Neumann boundary conditions x0 (a) = 0 = x0 (b), or the mixed boundary
conditions
↵1 x(a) + 1 x0 (a) = 0 , ↵2 x(b) + 2 x0 (b) = 0 ,
✓ ◆
↵1 1
where the matrix is non-singular.
↵2 2
38 A. K. MAJEE

7. System of ODE:
In this section, we study system of ODE of the form ~y 0 (t) = A(t)~y (t) + ~b(t) with some initial
condition, where A(t) = (aij (t))n⇥n is a n ⇥ n matrix with aij 2 C(I; R) and ~b 2 C(I; Rn ), and
find its explicit solution.

7.1. Constant coefficient homogeneous system: We are interested in ~y 0 (t) = A~y (t), where
A = (aij )n⇥n is a n ⇥ n matrix with real entries. For A = (aij )n⇥n , and t 2 R, we define
At = (taij )n⇥n . By M(n, R), we denote the set of all n ⇥ n matrices with real entries. We first
show that
Xm
Ak
eA = lim
m!1 k!
k=0
is well-defined. Indeed, for m > n, one has
m
X n
X m
X m
X
Ak Ak Ak kAkk
=  !0
as m, n ! 1 .
k! k! k! k!
k=0 k=0 k=n+1 k=n+1
nP k
o P
m A Ak
Thus, the sequence k=0 k! is Cauchy. This implies that limm!1 m k=0 k! exists and e
A

is well-defined. In the above, we have used the following norm: kAk = supk~xk2 1 kA~xk2 with
qP
n 2
k~xk2 := i=1 xi .

Theorem 7.1. For any A 2 M(n, R), one has keA k  ekAk .
P n Ak Pn kAkk kAk . Since k·k is continuous function, we conclude
Proof. Note that k=0 k!  k=0 k!  e
that keA k  ekAk . ⇤
Note that if A and B are two square matrices such that AB = BA, then eA+B = eA eB . Hence
(eA ) 1 = e A.
Theorem 7.2. Suppose A = C 1 BC, then eA = C 1 eB C.

Proof. It is easy to check by induction that Ak = C 1 Bk C. Hence by definition, we have


n
X 1 Bk C n
X
C 1 Bk 1 B
eA = lim =C lim C=C e C.
n!1 k! n!1 k!
k=0 k=0

One can easily check that if A = diag( 1 , 2 , . . . , n ), then eA = diag(e 1 , e 2 , . . . , e n ).
✓ ◆
a b
Example 7.1. Let A = , a, b 2 R. We wish to calculate eA . Observe that if 1 =
b a
a1 + ib1 and 2 = a2 + ib2 , then
✓ ◆✓ ◆ ✓ ◆
Re( 1 ) Im( 1 ) Re( 2 ) Im( 2 ) Re( 1 2 ) Im( 1 2 )
= .
Im( 1 ) Re( 1 ) Im( 2 ) Re( 2 ) Im( 1 2 ) Re( 1 2 )
Thus, if = a + ib, by induction, we have
✓ ◆ n Pn Re( k ) Pn !
X Im( k )
k Re( k ) Im( k ) Ak
A =
Im( k ) Re( k )
=) = Pk=0n
k!
Im( k ) Pk=0
n
k!
Re( k )
k! k=0 k=0
k=0 k! k!
0 ⇣P k
⌘ ⇣P k
⌘1
n n n
X Ak Re Im
=) =@ ⇣ Pk=0 k! k ⌘ ⇣ Pk=0 k!k ⌘A
k! n n
k=0 Im k=0 k! Re k=0 k!
ODES AND PDES 39
✓ ◆ ✓ ◆
Re(e ) Im(e ) cos(b) sin(b)
=) eA = = ea .
Im(e ) Re(e ) sin(b) cos(b)
✓ ◆
a b
A
Example 7.2. Calculate e for A = , a, b 2 R. We re-write A as A = aI + B, where I
0 a
✓ ◆
0 b
is the identity matrix of order 2 and B = . Observe that aI commutes with B, and hence
0 0
✓ ◆
A aI B a B k B 1 b
e = e e = e e . Since B = 0 for k 2, it is easy to see that e = I + B = , and hence
0 1
✓ ◆
A a 1 b
e =e .
0 1

Theorem 7.3. Let A 2 M(n, R) and ~x 2 Rn . Then the unique solution of the IVP ~y 0 (t) =
A~y (t), ~y (0) = ~x is given by the formula ~y (t) = eAt ~x.

Proof. We first show that for any square matrix A,


d At
e = AeAt .
dt
Since A commutes with itself, we have

d At eA(t+h) eAt eAh I ⇣ Ah X Ak hk 1 1⌘


e = lim = eAt lim = eAt lim +
dt h!0 h h!0 h h!0 h k!
k=2
1
X Ak hk 1
= AeAt + eAt lim .
h!0 k!
k=2

Taking a = kAk, we see that


1
X 1
X 1
X
Ak hk 1 ak |h|k 1 ak 2 |h|k 2
  a2 |h| = a2 |h|ea|h| ! 0 as h ! 0 .
k! k! k!
k=2 k=2 k=2

d At
Thus, we have dt e = AeAt . Note that ~y (0) = I~x = ~x. Moreover,
d At
~y 0 (t) = e ~x = AeAt ~x = A~y (t), 8t 2 R.
dt
Uniqueness: Let ~y (t) be any solution of the given IVP. Set ~z (t) := e At ~
y (t). Since e At and A
commute each other and ~y (t) is a solution, we have

~z 0 (t) = Ae At
~y (t) + e At 0
~y (t) = 0 =) ~z (t) = ~c .

Since ~z (0) = ~x, we see that ~z (t) = ~c = ~x and hence ~y (t) = eAt ~x. This completes the proof. ⇤

Corollary 7.4. The unique solution of the IVP ~y 0 (t) = A~y (t), ~y (t0 ) = ~x is given by the formula
~y (t) = eA(t t0 ) ~x.
✓ ◆ ✓ ◆
0 2 1 2
Example 7.3. Solve the IVP ~y (t) = A~y (t) with ~y (0) = and A = . Solution is
0 2 1
given by the formula
✓ ◆ ✓ ◆✓ ◆ ✓ ◆
At 2 t cos(2t) sin(2t) 2 t 2 cos(2t)
~y (t) = e =e =e .
0 sin(2t) cos(2t) 0 2 sin(2t)
40 A. K. MAJEE

7.2. Constant coefficient non-homogeneous system. Let us consider the non-homogeneous


linear system
~y 0 (t) = A~y (t) + ~b(t), ~y (t0 ) = ~x, ~b 2 C(I; Rn ) .
Duhamel’s principle: For each fixed s 2 R, consider the IVP ~y 0 (t) = A~y (t), ~y (s) = ~b(s).
Then, this problem has a unique solution, denoted by ~y (·, s) i.e., ~y (t, s) = eA(t s)~b(s). With the
help of ~y (t, s), we have the following theorem regarding the non-homogeneous problem.
Rt
Theorem 7.5. Let ~u(t) := t0 ~y (t, s) ds. Then ~u(·) solves the following non-homogeneous prob-
lem
~u0 (t) = A~u(t) + ~b(t), t 2 I, ~u(t0 ) = ~0.

Proof. Note that ~u(t0 ) = ~0 and ~u(·) is di↵erentiable in I. Taking di↵erentiation, we have
Z t Z t Z t
0 @~y ~ ~
~u (t) = ~y (t, t) + (t, s) ds = b(t) + A~y (t, s) ds = b(t) + A ~y (t, s) ds
t0 @t t0 t0

= ~b(t) + A~u(t) .

Corollary 7.6. The unique solution of ~y 0 (t) = A~y (t) + ~b(t), t 2 I, ~y (t0 ) = ~x is given by the
formula Z t
~y (t) = eA(t t0 ) ~x + eA(t s)~b(s) ds.
t0
✓ ◆ ✓ ◆
1 1 1
Example 7.4. Find the solution of the IVP ~y 0 (t) = A~y (t)+~b(t), ~y (0) = , where A =
1 0 1
✓ ◆ ✓ ◆
~ t At t 1 t
and b(t) = . In this case e = e and hence
0 0 t
Z t ✓R t ◆ ✓ t ◆
A(t s)~ set s ds e 1 t
e b(s) ds = 0 = .
0 0 0
Thus, the solution of the given IVP is given by
Z t ✓ t ◆
At A(t s)~ 2e 1 t + tet
~y (t) = e ~y (0) + e b(s) ds = .
0 tet
7.3. Non-homogeneous system of ODE. We study the non-homogeneous system of the form
~y 0 (t) = A(t)~y (t) + ~b(t), ~y (t0 ) = ~x
where entries of A(t) and ~b(t) are continuous function of t on I. We shall first prove the existence
of n linearly independent solutions of associated homogeneous system.
Definition 7.1. Let ~u1 , · · · , ~un be n-vector valued functions. The
0 i1 Wronskian W (t), is defined
0 1 2 3 n
1 u1
u1 (t) u1 (t) u1 (t) . . . u1 (t) B ui C
B u1 (t) u2 (t) u3 (t) . . . un (t)C B 2C
B 2 2 2 2 C i = B ui3 C , 1  i  n.
by W (t) = det B .. .. .. .. . C
.. A where ~
u B C
@ . . . . B .. C
1 2 3 n
@ . A
un (t) un (t) un (t) . . . un (t)
uin
One can easily arrive at the following theorem.
Theorem 7.7. If W (t0 ) 6= 0 for some t0 2 I, then the vector functions ~u1 , · · · , ~un are linearly
independent on I.
ODES AND PDES 41

Moreover, there exist n linearly independent solutions of ~y 0 (t) = A(t)~y (t); namely, n unique
solutions of the IVP
~y 0 (t) = A(t)~y (t), ~y (t0 ) = ~ei
where ~ei is the i-th standard basis element of Rn . Furthermore, the unique solution of the IVP
~y 0 (t) = A(t)~y (t), ~y (t0 ) = ~x := (x1 , x2 , . . . , xn ){
is given by the formula
n
X
~y (t) = xi ~ui (t),
i=1
where ~ui is the unique solution of the IVP ~y 0 (t) = A(t)~y (t), ~y (t0 )P = ~ei , 1  i  n. Indeed, let
~y (t) be any solution of the given IVP. Consider the function ~z (t) = ni=1 xi ~ui (t). Then ~z (t0 ) = ~x
and
n n n
d~z X d~ui X i
X
= xi = xi A(t)~u (t) = A(t) xi ~ui (t) = A(t)~z (t) .
dt dt
i=1 i=1 i=1
Thus, by uniqueness of IVP, it follows that
n
X
~y (t) = ~z (t) = xi ~ui (t) . (7.1)
i=1
This also shows that ~y (t) is spanned by n linearly independent solutions ~ui , 1  i  n.
Exercise 7.1. Let ~y i (t), 1  i  n be n solutions of the system ~y (t) = A(t)~y (t) and t0 2 I.
Then show that Rt
Tr(A(s)) ds
W (t) = W (t0 )e t0 ,
Pn
where Tr(A(t)) is define by Tr(A(t)) = i=1 aii (t) with A(t) = aij (t) 1i,jn .
Example 7.5. 0
✓ We would
◆ like to find two independent solutions of the system ~u (t) = A(t)~u(t),
0 1
where A(t) = 1 1 , t > 0. Let ~u = (x, y)> be a solution. Then x0 = y and y 0 = tx2 yt .
t2 t
Thus, we have a second order ODE in x namely
t2 x00 + tx0 (t) + x(t) = 0, t > 0.
This is a form of Euler’s equation. General solution is given by
x(t) = C1 cos(log(t)) + C2 sin(log(t)).
✓ ◆
> cos(log(t))
Suppose ~u1 (1) = (1, 0) . Then, we have ~u1 = u2 (1) = (0, 1)> , then the
sin(log(t)) . If ~
✓ ◆ t
sin(log(t))
solution is given by ~u2 (t) = cos(log(t)) . Therefore, two independent solutions of the given
t
system are
✓ ◆ ✓ ◆
cos(log(t)) sin(log(t))
~u1 = sin(log(t)) , ~u2 (t) = cos(log(t)) .
t t
Let us denote the non-singular matrix
⇥ ⇤
(t, t0 ) := ~u1 , ~u2 , . . . , ~un .
Then using this notation, we can write the solution of the IVP ~y 0 (t) = A(t)~y (t), ~y (t0 ) = ~x as
~y (t) = (t, t0 )~x.
Definition 7.2. (t, t0 ) is called the principle fundamental matrix of the linear system
~y 0 (t) = A(t)~y (t).
42 A. K. MAJEE

Theorem 7.8. A matrix valued function : I ⇥I ! M(n, R) is a principle fundamental matrix


if and only if solves the IVP
d
(t, t0 ) = A(t) (t, t0 ), (t0 , t0 ) = In⇥n . (7.2)
dt
⇥ 1 2 ⇤
Proof. Let (t, t0 ) be the principle fundamental matrix. Then (t, t0 ) := ~u , ~u , . . . , ~un and
hence
d ⇥ d~u1 (t) d~u2 (t) d~un (t) ⇤ ⇥ ⇤
(t, t0 ) = , ,..., = A(t)~u1 (t), A(t)~u2 (t), . . . , A(t)~un (t) = A(t) (t, t0 ) ,
dt ⇥ 1 dt dt dt ⇤
2 n
(t0 , t0 ) = ~u (t0 ), ~u (t0 ), . . . , ~u (t0 ) = 1n⇥n .
Again, suppose that solves (7.2). Then by uniqueness of the system, we see that (t, t0 ) =
(t, t0 ). This completes the proof. ⇤
Definition 7.3. A matrix valued function : I ! M(n, R) is called a fundamental matrix
d
for the system ~y 0 (t) = A(t)~y (t) if satisfies the system dt (t) = A(t) (t) for all t 2 I and
det( (t)) 6= 0 for all t 2 I.
✓ ◆
0 2 cos2 (t) 1 sin(2t)
Example 7.6. A fundamental matrix for the system ~y (t) = ~y (t)
✓ 2t ◆ 1 sin(2t) 2 sin2 (t)
e cos(t) sin(t)
is given by (t) = .
e 2t sin(t) cos(t)
Theorem 7.9. Let be a fundamental matrix for the system ~y 0 (t) = A(y)~y (t). Then for any
non-singular matrix C, the matrix valued function ¯ (t) := (t)C is also a fundamental matrix.
Conversely, if and ¯ are two fundamental matrix of the system ~y 0 (t) = A(y)~y (t), then there
exists non-singular C 2 M(n, R) such that ¯ = C.
Proof. Let ¯ (t) = (t)C. Then
d ¯ d
(t) = (t) C = A(t) (t)C = A(t) ¯ (t); det( ¯ (t)) = det( (t))detC 6= 0 .
dt dt
Hence ¯ (·) is a fundamental matrix for the given system. Conversely, let (t) and ¯ (t) be
two fundamental matrix. Set C(t) := ( (t)) 1 ¯ (t). Then, we have ¯ (t) = (t)C(t). Moreover,
taking derivative, we have
d ¯ d d
(t) = (t)C(t) + (t) C(t)
dt dt dt
d d
=) A(t) ¯ (t) = A(t) (t)C(t) + (t) C(t) = A(t) ¯ (t) + (t) C(t)
dt dt
d d
=) (t) C(t) = 0 =) C(t) = 0 =) C(t) = C = (cij ) = —a constant matrix.
dt dt
By definition, C is non-singular. This completes the proof. ⇤
Corollary 7.10. Let (t) be a fundamental matrix for the system ~y 0 (t) = A(y)~y (t). Then the
principle fundamental matrix is given by
1
(t, t0 ) = (t)( (t0 )) .
Proof. Since (t) is a given fundamental matrix, (t0 ) is constant matrix which is non-singular.
Hence det(( (t0 )) 1 ) 6= 0 and therefore (t)( (t0 )) 1 is a fundamental matrix. Note that
(t0 )( (t0 )) 1 is the identity matrix and hence the principle fundamental matrix is given by
(t, t0 ) = (t)( (t0 )) 1 . ⇤
Exercise 7.2. Show that (t, t0 ) satisfies the following properties:
a) ( (t, t0 )) 1 = (t0 , t) for all t, t0 2 I.
b) (t, t0 ) = (t, t1 ) (t1 , t0 ) for all t0 , t1 , t 2 I.
ODES AND PDES 43

7.3.1. Adjoint system: Let (t) be a fundamental matrix of ~y 0 (t) = A(y)~y (t). Then (t) 1 (t) =
0 for all t 2 I. Hence we have
d 1 d
(t) (t) + (t) 1 (t) = 0
dt dt
d 1
=) (t) (t) + A(t) (t) 1 (t) = 0
dt
d d
=) 1
(t) = 1
(t)A(t) =) ( 1 (t))> = A> (t)( 1 (t))> .
dt dt
This shows that ( 1 (t))> is a fundamental matrix for the system ~y 0 (t) = A> (t)~y (t).
Definition 7.4. The system ~y 0 (t) = A> (t)~y (t) is called the adjoint system of ~y 0 (t) = A(t)~y (t).
Theorem 7.11. Let be a fundamental matrix of ~y 0 (t) = A(t)~y (t). Then is a fundamental
matrix for the adjoint system if and only if > = C, for some non-singular constant matrix C.
Proof. Let > = C. Then one has
>
= C> =) =( >
) 1 >
C =( 1 > >
) C
We have seen that ( 1 )> is a fundamental matrix of the adjoint equation. Again, by previous
theorem = ( 1 )> C> is a fundamental matrix of the adjoint system. Conversely, let be a
fundamental solution of the adjoint system. Define C(t) = > . Then C(t) is non-singular and
di↵erentiable. Moreover,
d d d >
C(t) = ( (t))> (t) + (t) (t) = A> (t) (t) (t) + (t)> A(t) (t) = 0.
dt dt dt
Hence C(t) is a constant non-singular matrix. This completes the proof. ⇤
Remark 7.1. If A(t) is skew-symmetric i.e., A(t) = A> (t) for all t 2 I, then any fundamental
solution ~ is also a fundamental solution of the adjoint
of the linear system ~y 0 (t) = A(t)y(t)
system. Thus, > = C and hence k (t)k2 is constant for all t 2 I.
7.3.2. Variation of parameter: Consider the non-homogeneous system of ODE
~y 0 (t) = A(t)~y (t) + ~b(t), ~y (t0 ) = ~y0 . (7.3)
Let (t, t0 ) be the principle fundamental matrix for the system ~y 0 (t) = A(t)~y (t). Then any
solution of ~y 0 (t) = A(t)~y (t) is given by ~y (t) = (t, t0 )C, where C = (c1 , c2 , . . . , cn )> . Let us try
to find a solution of (7.3) in the form
>
~z (t) = (t, t0 )C(t), C(t) = c1 (t), c2 (t), . . . , cn (t) .
Suppose ~z (t) solves the non-homogeneous system. Then one has
d d d
~z 0 (t) = (t, t0 )C(t) + (t, t0 ) C(t) = A(t) (t, t0 )C(t) + (t, t0 ) C(t)
dt dt dt
d d
= A(t)~z (t) + (t, t0 ) C(t) = ~z 0 (t) ~b(t) + (t, t0 ) C(t) .
dt dt
Thus, ~z (t) solves the non-homogeneous system if
Z t
d 1~ 1
C(t) = (t, t0 ) b(t) =) C(t) = C(t0 ) + (s, t0 ) ~b(s) ds .
dt t0
Thus, the solution is given by
Z t
1~
~z (t) = (t, t0 )~y0 + (t, t0 ) (s, t0 ) b(s) ds
t0
44 A. K. MAJEE
Z t
= (t, t0 )~y0 + (t, t0 ) (t0 , s)~b(s) ds
t0
Z t
= (t, t0 )~y0 + (t, s)~b(s) ds .
t0
✓ ◆
1 1
Example 7.7. We solve the non-homogeneous system ~y 0 (t) = A~y (t) + ~b(t), where A = ,
0 1
✓ ◆ ✓ ◆
~b(t) = t and ~y (t0 ) = ~y0 := y01 . Note that, the principle fundamental matrix of the
0 y02
corresponding homogeneous system is given by
✓ ◆
t t0 1 t t0
(t, t0 ) = e .
0 1
Thus, by variation of parameter, the solution is given by
✓ ◆ ✓ ◆
t t0 y01 + (t t0 )y02 t t 0 t0 + 1 (t + 1)et0 t
~y (t) = e +e
y02 0
✓ ◆
y + (t t0 )y02 + t0 + 1 (t + 1)et0 t
= et t0 01 .
y02
7.4. Calculation of principle fundamental matrix: We now recall some basic theorems
from Linear algebra to calculate the principle fundamental matrix (t, t0 ) for the system ~y 0 (t) =
A~y (t).
Theorem 7.12. Let A 2 M(n, R). Assume that A has n distinct real eigen values i , 1  i  n.
Let ~ui , 1  i  n be the corresponding eigen-vector. Then C = [~u1 , u~2 , . . . , ~un ] is invertible and
C 1 AC = diag( 1 , 2 , . . . , n ). In this case, the principle fundamental matrix (t, t0 ) is given by
the formula
(t, t0 ) = eA(t=t0 ) = Cediag[ j (t t0 ), 1jn] C 1 .
Example
✓ 7.8. We calculate the principle fundamental matrix of the system ~y 0 (t) = A~y (t) with

1 2
A= . Note that 1 and 4 are two real distinct eigen values of A. Corresponding eigen
3 2
✓ ◆ ✓ 3 2◆
> > 1 2 1 5 5 . Thus,
vectors are ~u1 = ( 1, 1) and ~u2 = (2, 3) . Hence C = , and C = 1 1
1 3 5 5
the principle fundamental matrix is given by
✓ ◆ ✓ (t t ) ◆ ✓ 3 2◆
1 2 e 0 0 5 5
(t, t0 ) = 1 1
1 3 0 e4(t t0 ) 5 5
✓ ◆
1 3e (t t0 ) + 2e4(t t0 ) 2e (t t 0 ) + 2e4(t t0 )
= .
5 3e (t t0 ) + 3e4(t t0 ) 2e (t t0 ) + 3e4(t t0 )
Theorem 7.13. Let A 2 M(2n, R). Assume that A has 2n distinct complex eigen values
¯ ¯
1 , 1 , . . . , n , n , where j = aj + ibj with bj 6= 0. Let wj = uj + ivj , 1  j  n be
an eigen vector corresponding to j . Let C✓= [u~1 , ~v1◆, . . . , ~un , v~n ]. Then C is invertible, and
a j bj
C 1 AC = diag(A1 , A2 , . . . , An ), where Aj = , 1  j  n. Moreover, the principle
bj a j
fundamental matrix (t, t0 ) has the representation
(t, t0 ) = Cediag(A1 (t t0 ),A2 (t t0 ),...,An (t t0 )) C 1
⇣ ✓ ◆ ⌘
aj (t t0 ) cos bj (t t0 ) sin bj (t t0 ) 1
= C diag e , 1jn C .
sin bj (t t0 ) cos bj (t t0 )
ODES AND PDES 45

Example 7.9. Find the principle fundamental matrix of the system ~y 0 (t) = A~y (t) with
0 1
1 1 0 0
B1 1 0 0 C
A=B @0 0 3
C.
2A
0 0 1 1
Observe that 1 ± i and 2 ± i are eigenvalues of A. They are 4 distinct complex eigen values.
Moreover, a corresponding pair of complex eigenvectors
0 are1w ~ 1 = ~u1 + i~v1 = (i, 1, 0, 0)> and
0 1 0 0
B 1 0 0 0C
~ 2 = ~u2 + i~v2 = (0, 0, 1 + i, 1)> . Hence C = B
w C
@0 0 1 1A is invertible. Moreover, C =
1

0 0 1 0
0 1
0 1 0 0
B1 0 0 0 C
B C
@0 0 0 1 A. Thus, the principle fundamental matrix is given by
0 0 1 1
0 1 0 t t0 1
0 1 0 0 e cos(t t0 ) et t0 sin(t t0 ) 0 0
B1 0 0 0C B et t0 sin(t t0 ) et t0 cos(t t0 ) 0 0 C
(t, t0 ) = B
@0 0 1 1A @
CB C
0 0 e 2(t t )
0 cos(t t0 ) e 2(t t )
0 sin(t t0 ) A
0 0 1 0 0 0 e2(t t0 ) sin(t t0 ) e2(t t0 ) cos(t t0 )
0 1
0 1 0 0
B1 0 0 0 C
·B
@0 0 0 1 A .
C

0 0 1 1
In case A has both real and complex eigenvalues and they are distinct, we have the following
result.
Theorem 7.14. Suppose A 2 M(2n k, R) has k-distinct real eigen values 1 , . . . , k and
distinct comples eigen values k+1 , ¯ k+1 , . . . , n , ¯ n with j = aj + ibj , bj 6= 0 for j = k + 1, k +
2, . . . , n. Let ~uj be an eigen vector corresponding to j , 1  j  k and w ~ j = u~j + i~vj be an
eigen vector corresponding to j , k + 1  j  n. Then C = [~u1 , . . . ✓ , ~uk , ~uk+1 ,◆~vk+1 , . . . , ~un , ~vn ] is
a j bj
invertible and C 1 AC = diag( 1 , . . . , k , Ak+1 , . . . , An ) where Aj = , k + 1  j  n.
bj a j
0 1
1 0 0
Example 7.10. Consider the matrix A = @1 1 1A. It has distinct eigen values 1 = 1, 2 =
0 1 1
1 + i and 0 ¯ 2 = 1 i.1 The corresponding eigen > ~ 2 = (0, i, 1)> .
0 1 vectors are ~u1 = (1, 0, 1) and w
1 0 0 1 0 0
Hence C = @0 0 1A and C 1 = @ 1 0 1A. Thus, the principal fundamental matrix of the
1 1 0 0 1 0
0
system ~y (t) = A~y (t) is given by
0 1
1 0 0
(t, t0 ) = eA(t t0 ) = et t0 C @0 cos(t t0 ) sin(t t0 ) A C 1
0 sin(t t0 ) cos(t t0 )
0 1
1 0 0
= et t0 @ sin(t t0 ) cos(t t0 ) sin(t t0 )A .
1 cos(t t0 ) sin(t t0 ) cos(t t0 )
Next we will find the exponential matrix eA , when A has multiple eigen-values.
46 A. K. MAJEE

Definition 7.5. Let A 2 M(n, R) and be an eigen value of multiplicity k  n. Any non-zero
solution of (A I)j ~v = ~0 for some j 2 {1, 2, . . . , k} is called generalized eigenvector.
Definition 7.6. A 2 M(n, R) is said to be nilpotent of order k if Ak = 0, Ak 1 6= 0.
Theorem 7.15. Let A 2 M(n, R) with real eigen-values 1 , 2 , . . . , n repeated accordingly to the
multiplicity . Then there exists a basis of generalized eigenvectors of Rn . Let {~u1 , ~u2 , . . . , ~un } be
one such basis for Rn . Then C := [~u1 , ~u2 , . . . , ~un ] is invertible and C 1 SC = diag( 1 , 2 , . . . , n )
with A = S + N. Furthermore, A S = N is nilpotent of order k  n, SN = NS and
1
eA = eN eS = eN C diag(e j , 1  j  n) C .
Remark 7.2. If is an eigenvalue of multiplicity n of A 2 M(n, R), then S in the above theorem
takes the form S = diag( , , . . . , ). Hence it is easier to find the principle fundamental matrix
:
k
X Nj (t t0 )j
(t, t0 ) = e (t t0 ) .
j!
j=0

Example 7.11. 0
✓ ◆ We will find the principle fundamental matrix for the system ~y (t) = A~y✓ (t) with

3 1 2 0
A= . Note that A has an eigen value = 2 with multiplicity 2. Hence S =
1 1 0 2
✓ ◆
1 1
and N = A S = . It is easy to check that N2 = 0. Hence the principle fundamental
1 1
matrix is given by
✓ ◆
A(t t0 ) 2(t t0 ) 1 + (t t0 ) t t0
(t, t0 ) = e =e .
(t t0 ) 1 (t t0 )
0 1
1 0 0
Example 7.12. Consider the system ~y 0 (t) = A~y (t) with A = @ 1 2 0A. One can easily
1 1 2
check that A has the eigenvalues 1 = 1, 2 = 3 = 2, and the corresponding eigenvectors
are ~u1 = (1, 1, 2)> and ~u2 = (0, 0, 1)> . We therefore must find one generalized eigenvector
corresponding to = 2 and independent
0 of
1 ~u2 by solving
0 (A 2I) 1 u3 = ~0. We can choose
2~

1 0 0 1 0 0
~u3 = (0, 1, 0)> , and hence C = @ 1 0 1A and C 1 = @ 2 0 1A. The matrix S is given
2 1 0 1 1 0
0 1
1 0 0
by S = C diag(1, 2, 2)C 1 = @ 1 2 0A. Thus, the nilpotent matrix is given by N = A S =
2 0 2
0 1
0 0 0
@ 0 0 0A. Note that N2 = 0. Thus, the principle fundamental matrix for the given system
1 1 0
is given by
1
(t, t0 ) = (I + N(t t0 )) C diag(1, 2, 2) C
0 1
et t0 0 0
=@ et t0
e2(t t0 ) e2(t t0 ) 0 A.
2e(t t0 ) + (2 t t0 )e2(t t0 ) (t t0 )e2(t t0 ) e2(t t0 )
In the case of multiple complex eigenvalues, we have the following theorem.
Theorem 7.16. Let 1 , ¯ 1 , . . . n,
¯ n be the eigen-values of A 2 M(2n, R) repeatedly accord-
ingly to the multiplicity. Let j = aj + ibj , bj 6= 0 for j = 1, 2, . . . , n. Then there exists
ODES AND PDES 47

a basis of generalized complex eigenvectors w ~ j = ~uj + i~vj , w ~¯j = ~uj i~vj for j = 1, 2, . . . , n
and {~u1 , ~v1 , . . . , ~un , ~vn } is a basis for R . Set C = [~u1 , ~v1 , . . . , ~un , ~vn ]. Then C is invertible,
2n
1
A = S✓+ N, SN = ◆ NS, N is nilpotent of order k  2n, and C SC = diag(A1 , A2 , . . . , An ), where
a j bj
Aj = , 1  j  n.
bj a j
0 1
0 1 0 0
B1 0 0 0 C
Example 7.13. Consider the matrix A = B @0 0 0
C. Observe that it has eigen values
1A
2 0 1 0
¯
1 = i and 1 = ~ 1 = (0, 0, i, 1)> .
i with multiplicity 2. Moreover, one eigen vector is w
We need to find generalized eigen vector w ~ 2 . From
0 the equation
1 (A iI) w
2 ~ 2 = ~0, we can
0 0 0 1
B0 0 1 0C
~ 2 = (i, 1, 0, 1)> . Thus, the matrix C = B
choose w C
@0 1 0 0A is invertible with its inverse
1 0 1 0
0 1
0 1 0 1
B 0 0 1 0C
C 1=B C
@0 1 0 0A. The matrix S is computed as
1 0 0 0
0 1 0 1
0 1 0 0 0 1 0 0
B 1 0 0 0C B1 0 0 0C
S = CB
@0
CC 1
=B C.
0 0 1A @0 1 0 1A
0 0 1 0 1 0 1 0
0 1
0 0 0 0
B0 0 0 0 C
The nilpotent matrix is given by N = A S = B @0
C. Furthermore, N2 = 0. Hence the
1 0 0A
1 0 0 0
principle fundamental solution of the corresponding system is given by
0 1
cos(t t0 ) sin(t t0 ) 0 0
B sin(t t0 ) cos(t t0 ) 0 0 C 1
(t, t0 ) = C B
@
C C I + N(t t0 ) .
0 0 cos(t t0 ) sin(t t0 ) A
0 0 sin(t t0 ) cos(t t0 )

Theorem 7.17. Let 1 , . . . m be the real eigen values repeated according to the multiplicities and
¯ ¯
m+1 , m+1 , . . . , n , n be the comples eigen values repeated according to the multiplicities . Then
there exists a basis {~u1 , . . . , ~um , ~vm+1 , ~um+1 , . . . , ~vn , ~un } of R2n m such that ~uj is a generalized
eigen vector corresponding to j , 1  j  m, and w ~ j = ~uj + i~vj is a generalized eigen vector cor-
responding to j = aj + ibj j = m + 1, . . . , n. Moreover, C = [~u1 , . . . , ~um , ~vm+1 , ~um+1 , . . . , ~vn , ~un ]
is invertible and C 1 AC = diag(B1 , B2 , . . . Br ), where the elementary Jordan blocks B = Bj , j =
1, 2, . . . , r are either of the form
0 1
1 0 ··· 0
B0 1 · · · 0C
B C
B
B = B· · · C (7.4)
C
@ 0 ··· 1A
0 ··· 0
48 A. K. MAJEE

for one of the real eigenvalues of A or of the form


0 1
D2 I2 02 · · · 02
B 02 D2 I2 · · · 02 C
B C
B=B B· · ·
C
C (7.5)
@ 02 ··· D2 I2 A
02 ··· 02 D2
with ✓ ◆ ✓ ◆ ✓ ◆
a b 1 0 0 0
D2 = , I2 = , 02 =
b a 0 1 0 0
for = a + ib one of the complex eigen-values of A. The principle fundamental matrix then
calculated by the formula
(t, t0 ) = C diag(eBj (t t0 ) ) C 1 .
7.5. Boundedness and asymptotic behaviour of linear system. We are interested in
investigating the behaviour of the solution of a linear system ~y 0 (t) = A(t)~y (t) + ~b(t) for large
value of t.
Thanks to the Jordan decomposition theorem, we have seen that eAt is a matrix whose entries
are linear combinations of functions of the form eat tk cos(bt), eat tk sin(bt), where a + ib is an
eigen values of A and k  n 1. Hence, one arrives at the following theorem.
Theorem 7.18. Consider the linear system ~u0 (t) = A ~u(t), where A 2 M(n, R): set of all n ⇥ n
matrices with real entries.
a) If the real parts of all multiple eigenvalues are negative and the real part of simple eigen-
values are non-positive, then all solutions of the system are bounded.
b) If real part of any eigenvalue is negative, then any solution of the linear system ~u(t) has
the property: limt!1 |~u(t)| = 0 i.e., asymptotically converges to zero.
✓ ◆
0 0 1
Example 7.14. Consider the system ~y (t) = A~y (t), where A = . Note that A has
1 0
eigen values ±i, and hence real parts of the simple eigenvalues of A are non-positive. Thus, any
solution of the given system is bounded.
Example 7.15. Consider the linear system
u01 = u1 + u2 , u02 = u1 u2 .
This can be written as

0 1 1
~u (t) = A ~u(t) , where A = .
1 1
Note that eigenvalues of A are 0 and 2. Hence all solutions of the linear system are bounded.
Example 7.16. Consider the linear system of equation
u01 = u1 , u02 = u1 2u2 , u03 = u1 + 2u2 5u3 .
We rewrite the above system as
2 3
1 0 0
~u0 (t) = A ~u(t) , where A = 4 1 2 05.
1 2 5
It is easy to show that the eigenvalues of A are 1, 2 and 5. Thus, the solution of the given
system asymptotically converges to zero.
ODES AND PDES 49

We shall now examine the relationship between the boundedness of solutions of the perturbed
system
~u0 (t) = A~u(t) + B(t)~u(t), t 2 [t0 , 1) (7.6)
and its linear system ~u0 (t) = A~u(t), where A 2 M(n, R) and t 7! B(t) is continuous.
Theorem 7.19. Let all the solutions of ~u0 (t) = A~u(t) be bounded on [0, 1). Then all solutions
of (7.6) are bounded provided Z 1
kB(t)k dt < +1.
t0

Proof. Since all the solutions of ~u0 (t) = A~u(t) be bounded on [t0 , 1), there exists a constant
C > 0 such that |eAt |  C for all t 2 [0, 1]. Let ~u(t0 ) = ~x for the problem (7.6). Then, we have
Z t Z t
A(t t0 ) A(t s)
|~u(t)| = e ~x + e B(s)~u(s) ds  C|~x| + C kB(s)k|~u(s)| ds
t0 t0
Hence by Gronwall’s lemma, we get
⇣ Z t ⌘
|~u(t)|  C|~x| exp C kB(s)k ds  C|~x| exp(M ), 8 t 2 [t0 , 1)
t0
⇣ R ⌘
1
where M := exp C t0 kB(s)k ds < +1. This shows that all solutions of (7.6) are bounded.

✓ ◆
0 1
Example 7.17. Consider the system ~y 0 (t) = A~y (t) + B(t)~y (t), where A = and B(t) =
1 1
✓ ◆ p
0 0 1±i 3
0 1 , t > 0. The eigenvalues of A are 2 , and hence the linear system ~y 0 (t) = A~y (t)
(1+t)2
has a bounded solution. Moreover, it is easy to see that
Z t Z t
1
lim kB(s)k ds = lim ds < +1.
t!1 t t!1 t (1 + s)2
0 0

Thus, all the solutions of given system are bounded.


Next we examine the asymptotic convergence of the solution of (7.6).
Theorem 7.20. Consider the problem (7.6) where A 2 M(n, R) with all eigenvalues having
negative real parts and kB(t)k ! 0 as t ! 1. Then all solutions of (7.6) asymptotically
converges to zero.
Proof. Since real part of all eigenvalues of A is negative, there exist positive constants ↵ and M
such that
↵t
|eAt |  M e , 8t 0.
Let ~u(t) be the solution of (7.6). Then we have
Z t
A(t t0 )
|~u(t)| = e ~x + eA(t s) B(s)~u(s) ds
t0
Z t
↵(t t0 ) ↵(t s)
 Me |~x| + M e kB(s)k|~u(s)| ds
t0
n Z t o
↵t ↵t0
 Me e |~x| + kB(s)k|~u(s)|e↵s ds
t0
Z t
↵t ↵t0
=) e |~u(t)|  M e |~x| + M kB(s)k|~u(s)|e↵s ds
t0
50 A. K. MAJEE

Since kB(t)k ! 0 as t ! 1, given any C > 0, there exists tb t0 > 0 such that
kB(t)k  C, 8t tb .
Thus, we have
Z tb Z t
↵t ↵t0 ↵s
e |~u(t)|  M e |~x| + M kB(s)k|~u(s)|e ds + M kB(s)k|~u(s)|e↵s ds
t0 tb
Z tb Z t
 M e↵t0 |~x| + M kB(s)k|~u(s)|e↵s ds + CM |~u(s)|e↵s ds
t0 tb
M C(t tb )
 C0 e , 8 t 2 [tb , 1),
R tb
where C0 := M e↵t0 |~x| + M t0 kB(s)k|~u(s)|e↵s ds. Since C > 0 is arbitrary, we can choose C
such that M C < ↵ and hence e(M C ↵)t ! 0 as t ! 1. This shows that |~u(t)| ! 0 as t ! 1.
This completes the proof. ⇤
✓ ◆
1 5
Example 7.18. Consider the linear system ~y 0 (t) = A~y (t) + B(t)~y (t), where A = and
0 4
✓ 2t ◆
e 0
B(t) = 1 , t > 0. Since the real part of all eigenvalues of A are negative (it has real
e t (1+t) 2

distinct eigen values 1 and 4) and kB(t)k ! 0 as t ! 1, we conclude that all the solutions
of the given system asymptotically converge to zero i.e., |~u(t)| ! 0 as t ! 1.
7.5.1. Boundedness and asymptotic behaviour of non-autonomous linear system:
Consider the non-autonomous system
~u0 (t) = A(t)~u(t) . (7.7)
Let ~u(t) be a solution of (7.7). Since |~u|2 = h~u(t), ~u(t)i, by di↵erentiating, we have
d
|~u(t)|2 = h~u0 (t), ~u(t)i + h~u(t), ~u0 (t)i = hA(t)~u(t), ~u(t)i + h~u(t), A(t)~u(t)i
dt ⌦ ↵
= A(t) + A> (t) ~u(t), ~u(t) .
Note that A(t) + A> (t) is a symmetric matrix and hence all the eigenvalues are real. Define
M (t) := largest eigenvalue of A(t) + A> (t), m(t) := smallest eigenvalue of A(t) + A> (t) .
Theorem 7.21. Let t 7! A(t) 2 M(n, R) is continuous on [t0 , 1) and M (t) and m(t) are
defined as above. Then the followings hold:
Rt
i) If limt!1 t0 M (s) ds = 1, then any solution of (7.7) satisfies limt!1 |~u(t)| = 0.
nR o
t
ii) If the set t0 M (s) ds : t > t0 is bounded, then any solution of (7.7) will be bounded

if ~u(t0 ) 6= ~0.
Rt
iii) If lim supt!1 t0 m(s) ds = 1, then every solution if (7.7) is unbounded.

Proof. Note that (~u> )0 (t) = ~u> (t)A> (t) and |~u|2 = ~u> (t)~u(t). Hence taking di↵erentiation, we
have
d
|~u(t)|2 = (~u> )0 (t)~u(t) + ~u> (t), ~u0 (t) = ~u> (t)A> (t)~u(t) + ~u> (t)A(t)~u(t)
dt
= ~u> (t) A(t) + A> (t) ~u(t) .
Note that, for any ~x 2 Rn \ {~0}, one has
~x> A(t) + A> (t) ~x
m(t)   M (t).
~x> ~x
ODES AND PDES 51

Hence, we see that

m(t)|~u(t)|2  ~u> (t) A(t) + A> (t) ~u(t)  M (t)|~u(t)|2


d
=) m(t)|~u(t)|2  |~u(t)|2  M (t)|~u(t)|2 . (7.8)
dt
Thus, from (7.8), we have
d Rt
u(t)|2
dt |~ d M (s) ds
 M (t) =) log(~u(t)|2 )  M (t) =) |~u(t)|2  |~u(t0 )|2 e t0 .
~u(t)|2 dt
Rt
Thus, if limt!1 t0 M (s) ds = 1, then limt!1 |~u(t)| = 0—this proves the assertion i). Suppose
nR o
t
the set t0 M (s) ds : t > t0 is bounded, then any solution of (7.7) will be bounded if ~u(t0 ) 6= ~0.
Furthermore, from (7.8), we get
d Rt
u(t)|2
dt |~ d m(s) ds
m(t) =) log(~u(t)|2 ) m(t) =) |~u(t)|2 |~u(t0 )|2 e t0 .
~u(t)|2 dt
Rt
Thus, if lim supt!1 m(s) ds = 1, then |~u(t)| becomes unbounded. This completes the
t0
proof. ⇤
0 1
t 0 0
Example 7.19. Consider the non-autonomous system ~u0 (t) = A(t)~u(t), where A(t) = @ 0 t2 0 A.
0 0 t2
0 1
2t 0 0
Then A(t) + A> (t) = @ 0 2t2 0 A. Largest eigenvalue is given by
0 0 2t2
(
2t, if t 1
M (t) =
2t2 , if t  1 .
Rt
Since limt!1 t0 M (s) ds = 1, we conclude that limt!1 |~u(t)| = 0.
✓ 1 ◆
(1+t)2
t2
Example 7.20. Consider the non-autonomous system ~u0 (t)
= A(t)~u(t)with A(t) =
t2 1
✓ 2 ◆
2 0 2
for t > 0. Then, A(t) + A> (t) = (1+t) and hence M (t) = (1+t) 2 . Note that the set
0 2
Rt Rt
0 M (s) ds is bounded as limt!1 0 M (s) ds = 2. Thus, any solution of the given system is
bounded.

Next we consider the perturbed system


~u0 (t) = [A(t) + B(t)]~u(t), t > t0 . (7.9)

To study the boundedness and asymptotic behaviour of (7.9), we need the following assump-
tions:
Rt
A.1 lim inf t!1 t0 Tr(A(s)) ds > 1 or Tr(A(s)) = 0.
Rt
A.2 limt!1 t0 kB(s)k ds < +1.

Theorem 7.22. Let the assumptions A.1 and A.2 holds. If all the solutions of ~u0 (t) = A(t)~u(t)
is bounded, then all the solutions of (7.9) are also bounded.
52 A. K. MAJEE

Proof. Let Φ(t) be a fundamental matrix of the system ~u'(t) = A(t)~u(t). Since all solutions of ~u'(t) = A(t)~u(t) are bounded, there exists C > 0 such that ||Φ(t)|| ≤ C for all t ∈ [t0, ∞). By Abel's theorem, det(Φ(t)) = det(Φ(t0)) exp( ∫_{t0}^{t} Tr(A(s)) ds ), and hence

Φ^{-1}(t) = adj(Φ(t)) / ( det(Φ(t0)) exp( ∫_{t0}^{t} Tr(A(s)) ds ) ).

Thanks to the assumption A.1, we see that ||Φ^{-1}(t)|| ≤ C for all t ∈ [t0, ∞) (after enlarging C if necessary). Let us write down the solution of (7.9):

~u(t) = Φ(t, t0)~x + ∫_{t0}^{t} Φ(t, s)B(s)~u(s) ds = Φ(t)Φ^{-1}(t0)~x + ∫_{t0}^{t} Φ(t)Φ^{-1}(s)B(s)~u(s) ds.

Define
C1 = max{ sup_{t ≥ t0} ||Φ(t)||, sup_{t ≥ t0} ||Φ^{-1}(t)|| }, C0 = C1^2 |~x|.
Thus, we have

|~u(t)| ≤ C0 + C1^2 ∫_{t0}^{t} ||B(s)|| |~u(s)| ds ⟹ |~u(t)| ≤ C0 exp( C1^2 ∫_{t0}^{t} ||B(s)|| ds ) < +∞

by Gronwall's inequality and the assumption A.2. This proves the theorem. □
Example 7.21. Consider the linear system ~u'(t) = [A(t) + B(t)]~u(t) for t > 0, with

A(t) = [ −1/(1+t)^2    t^2         ]      B(t) = [ 0   0          ]
       [ −t^2          −1/(1+t)^2  ],            [ 0   a/(1+t)^2  ].

Note that the maximum eigenvalue of A(t) + A^T(t) is M(t) := −2/(1+t)^2, and lim_{t→∞} ∫_0^t M(s) ds < +∞. Thus, all solutions of the system ~u'(t) = A(t)~u(t) are bounded. If we show that the assumptions A.1 and A.2 hold, then by Theorem 7.22, the given system has bounded solutions. It is easy to check that

lim inf_{t→∞} ∫_0^t Tr(A(s)) ds = lim_{t→∞} ( −2 ∫_0^t ds/(1+s)^2 ) = −2 > −∞,
lim_{t→∞} ∫_0^t ||B(s)|| ds = lim_{t→∞} ∫_0^t |a|/(1+s)^2 ds = |a| < +∞.
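The conclusion can be checked numerically as well. The following sketch (not from the notes; the parameter value a = 1 and the use of numpy/scipy are assumptions) integrates the perturbed system and confirms that the solution stays bounded:

```python
import numpy as np
from scipy.integrate import solve_ivp

a = 1.0  # hypothetical choice of the parameter a

def A(t):
    return np.array([[-1/(1+t)**2, t**2], [-t**2, -1/(1+t)**2]])

def B(t):
    return np.array([[0.0, 0.0], [0.0, a/(1+t)**2]])

def rhs(t, u):
    return (A(t) + B(t)) @ u

sol = solve_ivp(rhs, (0.0, 20.0), [1.0, 1.0], rtol=1e-8, atol=1e-10)
print(np.abs(sol.y).max())  # stays of moderate size: the solution is bounded
```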
Theorem 7.23. Under the assumptions A.1 and A.2, if all the solutions of ~u'(t) = A(t)~u(t) converge to zero as t → ∞, then the same holds for all solutions of (7.9).
Proof. Asymptotic convergence to zero of the solutions of ~u'(t) = A(t)~u(t) implies that ||Φ(t)|| → 0 as t → ∞, where Φ is a fundamental matrix for the system ~u'(t) = A(t)~u(t). Moreover, by using Abel's theorem and the assumption A.1, we conclude that ||Φ^{-1}(t)|| remains bounded on [t0, ∞). Let C1 be defined as in the proof of Theorem 7.22. Now from the representation of the solution of (7.9), ~u(t) = Φ(t)Φ^{-1}(t0)~x + ∫_{t0}^{t} Φ(t)Φ^{-1}(s)B(s)~u(s) ds, we have

|~u(t)| ≤ ||Φ(t)|| { ||Φ^{-1}(t0)|| |~x| + ∫_{t0}^{t} ||Φ^{-1}(s)|| ||B(s)|| |~u(s)| ds }
       ≤ ||Φ(t)|| { M + C1 ∫_{t0}^{t} ||B(s)|| |~u(s)| ds }

⟹ |~u(t)| / ||Φ(t)|| ≤ M + C1 ∫_{t0}^{t} ||B(s)|| ||Φ(s)|| ( |~u(s)| / ||Φ(s)|| ) ds   (Φ is non-singular)
⟹ |~u(t)| / ||Φ(t)|| ≤ M exp( C1 ∫_{t0}^{t} ||Φ(s)|| ||B(s)|| ds )   (by Gronwall's lemma)
⟹ |~u(t)| / ||Φ(t)|| ≤ M exp( C1^2 ∫_{t0}^{t} ||B(s)|| ds ),

where M := ||Φ^{-1}(t0)|| |~x|. Since ||Φ(t)|| → 0 as t → ∞, by the assumption A.2 we conclude from the last inequality that |~u(t)| → 0 as t → ∞. This completes the proof. □
8. Stability Analysis:
Consider the initial value problem
~u'(t) = ~f(t, ~u(t)), t ∈ [t0, ∞); ~u(t0) = ~x, (8.1)
where we assume that ~f : Ω → R^n is continuous and locally Lipschitz in the second argument. Moreover, we assume that (8.1) has a solution defined on [t0, ∞). The unique solution is denoted by ~u(t, t0, ~x).
Definition 8.1. Let ~u(·, t0, ~x) be a solution of (8.1).
i) It is said to be stable if for every ε > 0, there exists δ = δ(ε, t0, ~x) > 0 such that
|~x − ~y| < δ ⟹ |~u(t, t0, ~x) − ~u(t, t0, ~y)| < ε for all t ≥ t0.
ii) It is called asymptotically stable if it is stable and there exists δ > 0 such that for all ~y ∈ B(~x, δ), there holds
lim_{t→∞} |~u(t, t0, ~x) − ~u(t, t0, ~y)| = 0.
iii) It is called unstable if it is NOT stable.
Example 8.1. Consider the IVP ~u'(t) = ~f(t), ~u(t0) = ~x, where ~f : [t0, ∞) → R^n is continuous. Then ~u(t, t0, ~x) = ~x + ∫_{t0}^{t} ~f(s) ds. Thus,
|~u(t, t0, ~x) − ~u(t, t0, ~y)| = |~x − ~y|.
This shows that ~u(t, t0, ~x) is stable (take δ = ε) but NOT asymptotically stable.
Example 8.2. Consider the IVP
u'(t) = a(t)u(t), u(t0) = x, with a ∈ C[t0, ∞).
Then u(t, t0, x) = x e^{∫_{t0}^{t} a(s) ds}, and hence
|u(t, t0, x) − u(t, t0, y)| = |x − y| e^{∫_{t0}^{t} a(s) ds}.
i) It is stable if there exists M > 0 such that ∫_{t0}^{t} a(s) ds ≤ M for all t > t0.
ii) It is unstable if lim_{t→∞} ∫_{t0}^{t} a(s) ds = ∞.
iii) It is asymptotically stable if lim_{t→∞} ∫_{t0}^{t} a(s) ds = −∞.
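The three cases are easy to see concretely. The snippet below (an illustration, not from the notes; the choices of a(·) are hypothetical) evaluates the factor e^{∫ a(s) ds} that multiplies |x − y| in each case:

```python
import numpy as np

t0, t = 0.0, 100.0
# |u(t,t0,x) - u(t,t0,y)| = |x - y| * exp(int_{t0}^t a(s) ds); evaluate the factor
cases = {
    "a(s) = sin(s)    (stable)":         np.cos(t0) - np.cos(t),
    "a(s) = 1/(1+s)   (unstable)":       np.log(1 + t) - np.log(1 + t0),
    "a(s) = -1        (asympt. stable)": -(t - t0),
}
for name, integral in cases.items():
    print(name, "->", np.exp(integral))
```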
Theorem 8.1. Let t ↦ A(t) be a continuous function from [t0, ∞) to M(n, R). Then all solutions ~u(·, t0, ~x) of the linear system
~u'(t) = A(t)~u(t), ~u(t0) = ~x
are stable if and only if all solutions are bounded.
Proof. Suppose that all solutions are bounded. Since ~u(t, t0, ~x) = Φ(t, t0)~x, the norm ||Φ(t, t0)|| is bounded, say by M, for all t ∈ [t0, ∞), where Φ(t, t0) is the principal fundamental matrix. Now
|~u(t, t0, ~x) − ~u(t, t0, ~y)| = |Φ(t, t0)~x − Φ(t, t0)~y| ≤ ||Φ(t, t0)|| |~x − ~y| ≤ M |~x − ~y|, ∀ t ∈ [t0, ∞).
This shows that all solutions are stable. Conversely, assume that all solutions are stable. In particular, ~u(t, t0, ~0) is stable. Hence, there exists δ > 0 such that
|~u(t, t0, ~y) − ~u(t, t0, ~0)| ≤ 1 ∀ ~y ∈ B(~0, δ) ⟹ |~u(t, t0, ~y)| ≤ 1 ∀ ~y ∈ B(~0, δ). (8.2)
Let ~x ∈ R^n \ {~0}. Choose ~y = (δ/2) ~x/|~x|, so that ~y ∈ B(~0, δ). By linearity,
~u(t, t0, ~x) = ~u(t, t0, (2|~x|/δ) ~y) = (2|~x|/δ) Φ(t, t0)~y = (2|~x|/δ) ~u(t, t0, ~y)
⟹ |~u(t, t0, ~x)| ≤ (2|~x|/δ) |~u(t, t0, ~y)| ≤ 2|~x|/δ,
where in the last inequality we have used (8.2). Thus, all solutions are bounded. □
Theorem 8.2. Let t ↦ A(t) be a continuous function from [t0, ∞) to M(n, R), where A(t) = A + B(t). Then:
a) If the real parts of all multiple eigenvalues of A are negative, the real parts of the simple eigenvalues are non-positive, and ∫_{t0}^{∞} ||B(t)|| dt < +∞, then every solution of ~u'(t) = A(t)~u(t) is stable.
b) If the real parts of all eigenvalues of A are negative, and ||B(t)|| → 0 as t → ∞, then every solution of ~u'(t) = A(t)~u(t) is asymptotically stable.
Example 8.3. Consider the linear system of equations
u1' = −u1 − u2 + e^{−t} u3,   u2' = −u2 + (1/(1+t)) u4,
u3' = e^{−2t} u2 − 3u3 − 2u4,   u4' = u3 − u4.
This can be written as ~u'(t) = A(t)~u(t) with A(t) = A + B(t), where

A = [ −1  −1   0   0 ]          B(t) = [ 0  0        e^{−t}  0       ]
    [  0  −1   0   0 ]                 [ 0  0        0       1/(1+t) ]
    [  0   0  −3  −2 ]                 [ 0  e^{−2t}  0       0       ]
    [  0   0   1  −1 ],                [ 0  0        0       0       ].

To find the eigenvalues of A, consider the equation det(A − λI) = 0 and solve it. It is easy to check that the eigenvalues are −1, −1, −2 ± i. Moreover, ||B(t)|| ≤ e^{−t} + e^{−2t} + 1/(1+t), and hence ||B(t)|| → 0 as t → ∞. Thus, in view of Theorem 8.2, every solution of the given system is asymptotically stable.
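A short numerical check (not part of the notes; assumes numpy) confirms both hypotheses of Theorem 8.2 b): the eigenvalues of A all have negative real part, and ||B(t)|| → 0:

```python
import numpy as np

# constant part A of Example 8.3
A = np.array([[-1.0, -1.0,  0.0,  0.0],
              [ 0.0, -1.0,  0.0,  0.0],
              [ 0.0,  0.0, -3.0, -2.0],
              [ 0.0,  0.0,  1.0, -1.0]])
print(np.linalg.eigvals(A))  # -1 (twice) and -2 +/- i: all real parts negative

# ||B(t)|| -> 0 as t -> infinity (max-row-sum norm)
for t in [1.0, 10.0, 100.0]:
    B = np.array([[0.0, 0.0,          np.exp(-t), 0.0],
                  [0.0, 0.0,          0.0,        1/(1+t)],
                  [0.0, np.exp(-2*t), 0.0,        0.0],
                  [0.0, 0.0,          0.0,        0.0]])
    print(t, np.linalg.norm(B, np.inf))
```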
Remark 8.1. For nonlinear systems, boundedness and stability are distinct concepts. For example, consider the scalar first order ODE
y'(t) = t^p, p ≥ 1; y(t0) = y0.
Then the solution is given by y(t, t0, y0) = y0 + (1/(p+1)) (t^{p+1} − t0^{p+1}). Hence
|y(t, t0, y0) − y(t, t0, y0 + Δy0)| = |Δy0| < ε, if |Δy0| < ε.
Thus, it is stable. But it is NOT bounded.
8.1. Critical points and their stability: Consider the system ~u'(t) = ~f(~u(t)). If ~x0 ∈ Ω is an equilibrium point, i.e., ~f(~x0) = ~0, then ~u(t) ≡ ~x0 is a solution of the ODE
~u'(t) = ~f(~u(t)), t > 0; ~u(0) = ~x0.
Conversely, if ~u(t) ≡ ~x0 is a constant solution, then ~f(~x0) = ~0.
Definition 8.2. We say that ~x0 is stable/asymptotically stable/unstable if this constant solution is stable/asymptotically stable/unstable.
For any ~f ∈ C^1(Ω), where Ω is an open subset of R^n, we denote by D~f(~x0) the matrix

D~f(~x0) = [ ∂f1/∂u1(~x0)  ∂f1/∂u2(~x0)  ...  ∂f1/∂un(~x0) ]
           [ ∂f2/∂u1(~x0)  ∂f2/∂u2(~x0)  ...  ∂f2/∂un(~x0) ]
           [     ...            ...      ...      ...      ]
           [ ∂fn/∂u1(~x0)  ∂fn/∂u2(~x0)  ...  ∂fn/∂un(~x0) ].
Definition 8.3. A critical point ~x0 ∈ Ω is said to be hyperbolic if none of the eigenvalues of D~f(~x0) are purely imaginary.
Example 8.4. Consider the nonlinear system
u1' = −u1, u2' = −u2 + u1^2, u3' = u3 + u1^2.
The only equilibrium point of this system is ~0. The matrix D~f(~0) is given by
D~f(~0) = diag(−1, −1, 1).
The eigenvalues of D~f(~0) are −1, −1 and 1. Hence the equilibrium point ~0 is hyperbolic.
Theorem 8.3. A hyperbolic equilibrium point ~x0 is asymptotically stable if all the eigenvalues of D~f(~x0) have negative real part.
Example 8.5. Consider the nonlinear system
u1' = −u1 + u3^2, u2' = u1^2 − 2u2, u3' = u1^2 + u2^3 − 4u3.
Note that ~0 is an equilibrium point of the given system. The matrix D~f(~0) is given by
D~f(~0) = diag(−1, −2, −4).
The eigenvalues of D~f(~0) are −1, −2 and −4; none of them is purely imaginary. Hence ~0 is a hyperbolic equilibrium point. Since all eigenvalues of D~f(~0) have negative real part, we conclude by Theorem 8.3 that ~0 is asymptotically stable.
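The classification can be automated symbolically. A minimal sketch (assuming sympy; not part of the notes) computes D~f(~0) and its eigenvalues for this example:

```python
import sympy as sp

u1, u2, u3 = sp.symbols('u1 u2 u3')
f = sp.Matrix([-u1 + u3**2, u1**2 - 2*u2, u1**2 + u2**3 - 4*u3])
J0 = f.jacobian([u1, u2, u3]).subs({u1: 0, u2: 0, u3: 0})
print(J0.eigenvals())  # {-1: 1, -2: 1, -4: 1}: hyperbolic, asymptotically stable
```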
Theorem 8.4. If ~x0 is a stable equilibrium point of the system ~u'(t) = ~f(~u(t)), then no eigenvalue of D~f(~x0) has positive real part.
Example 8.6. Consider the non-linear system x'(t) = x(y − 1), y'(t) = x − y − 1. Note that (2, 1)^T is a critical point. The Jacobian matrix at (2, 1)^T is
A = [ 0   2 ]
    [ 1  −1 ].
The eigenvalues of A are given by λ = (−1 ± 3)/2, i.e., 1 and −2. Since one of the eigenvalues of A has positive real part, we conclude by Theorem 8.4 that (2, 1)^T is an unstable equilibrium point.
Remark 8.2. Hyperbolic equilibrium points are either asymptotically stable or unstable.
8.2. Liapunov functions and stability. The stability of non-hyperbolic equilibrium points is typically more difficult to determine. We now describe a method, due to Liapunov, that is very useful for deciding the stability of non-hyperbolic equilibrium points.
Definition 8.4. Let ~f ∈ C^1(Ω), V ∈ C^1(Ω; R), and let φ_t(~x) be the flow of the system ~u'(t) = ~f(~u(t)), i.e., φ_t(~x) = ~u(t, ~x). Then, for any ~x ∈ Ω, the derivative of the function V along the solution ~u(t, ~x) is given by

V̇(~x) = (d/dt)|_{t=0} V(φ_t(~x)) = ∇V(~x) · ~u'(0, ~x) = ∇V(~x) · ~f(~x).
Theorem 8.5. Let Ω be an open set in R^n and ~f : Ω → R^n be C^1. Let ~x0 ∈ Ω be an equilibrium point of the system ~u'(t) = ~f(~u(t)). Assume that there exists a C^1 function V : Ω → R such that
V(~x0) = 0, V(~x) > 0 ∀ ~x ≠ ~x0.
i) If V̇(~x) ≤ 0 for all ~x ∈ Ω, then ~x0 is stable.
ii) If V̇(~x) < 0 for all ~x ∈ Ω \ {~x0}, then ~x0 is asymptotically stable.
iii) If V̇(~x) > 0 for all ~x ∈ Ω \ {~x0}, then ~x0 is unstable.
Proof. Proof of i): Observe that the function t ↦ V(~u(t, ~x)) is monotonically decreasing. Indeed, by using the chain rule and the given condition, we have

(d/dt) V(~u(t, ~x)) = ∇V(~u(t, ~x)) · ~f(~u(t, ~x)) = V̇(~u(t, ~x)) ≤ 0.

Let ε > 0. Define
m_ε = min_{|~x − ~x0| = ε} V(~x).
Since V(~x0) = 0 and V(~x) > 0 for all ~x ∈ Ω \ {~x0}, we have m_ε > 0. Moreover, by continuity of V(·), there exists δ > 0 such that

|V(~x)| < m_ε/2 ∀ |~x − ~x0| < δ. (8.3)

We claim that, for ~x ∈ B(~x0, δ),
|~u(t, ~x) − ~x0| < ε ∀ t ≥ 0.
Suppose that it is not true. Then there exists t1 > 0 such that |~u(t1, ~x) − ~x0| ≥ ε. Since t ↦ |~u(t, ~x) − ~x0| is continuous, there exists t2 ∈ [0, t1] such that |~u(t2, ~x) − ~x0| = ε. Thus, by using the monotonicity of V(~u(t, ~x)) in the time variable and (8.3), we get

m_ε ≤ V(~u(t2, ~x)) ≤ V(~u(0, ~x)) = V(~x) < m_ε/2

— a contradiction! Thus, ~x0 is a stable equilibrium point.
Proof of ii): In this case t ↦ V(~u(t, ~x)) is strictly decreasing when ~x ≠ ~x0. Choose ε > 0 such that cl(B(~x0, ε)) ⊂ Ω. Then, from i), there exists δ > 0 such that whenever |~x − ~x0| < δ, we have |~u(t, ~x) − ~x0| < ε for all t ≥ 0. We claim that
lim_{t→∞} |~u(t, ~x) − ~x0| = 0.
We show this via the sequential criterion. Let {tk} ⊂ [0, ∞) be an increasing sequence such that tk → ∞. We then show that
~u(tk, ~x) → ~x0 as k → ∞.
Note that ~u(t, ~x) ∈ cl(B(~x0, ε)) ⊂ Ω for all t ≥ 0. This implies that, up to a subsequence, ~u(tk, ~x) → ~y0 with ~y0 ∈ cl(B(~x0, ε)) ⊂ Ω. It remains to show that ~y0 = ~x0. Suppose ~y0 ≠ ~x0. Since t ↦ V(~u(t, ~x)) is strictly decreasing and ~u(tk, ~x) → ~y0, we infer that V(~u(tk, ~x)) strictly decreases to V(~y0), i.e.,
V(~y0) < V(~u(tk, ~x)) ∀ k. (8.4)
Since ~y0 ≠ ~x0, by assumption V̇(~y0) < 0, and hence V(~y0) = V(~u(0, ~y0)) > V(~u(t, ~y0)) for t > 0; therefore, for all ~y sufficiently close to ~y0, we have V(~y0) > V(~u(t, ~y)). Hence, for large k ∈ N, we have
V(~y0) > V(~u(t, ~u(tk, ~x))) = V(~u(t + tk, ~x)).
Take t = t_{k+1} − tk > 0. Then we have V(~y0) > V(~u(t_{k+1}, ~x)) — a contradiction to (8.4). Hence ~y0 = ~x0, and this completes the proof of ii).
Proof of iii): In view of the given assumption, t ↦ V(~u(t, ~x)) is strictly increasing when ~x ≠ ~x0. We want to show that ~x0 is unstable. Suppose it is stable. Let ε > 0 be such that cl(B(~x0, ε)) ⊂ Ω. Then there exists δ > 0 such that
|~x − ~x0| < δ ⟹ |~u(t, ~x) − ~x0| < ε ∀ t ≥ 0.
Fix ~x ≠ ~x0 such that |~x − ~x0| < δ. Since V(~x) > 0 and V(·) is continuous, we can choose δ1 > 0 such that V(~y) < V(~x)/2 whenever |~y − ~x0| < δ1. We claim that
δ1 ≤ |~u(t, ~x) − ~x0| < ε ∀ t ≥ 0.
Indeed, if |~u(t, ~x) − ~x0| < δ1 for some t, then V(~u(t, ~x)) < V(~x)/2 < V(~x) = V(~u(0, ~x)) — a contradiction, as t ↦ V(~u(t, ~x)) is strictly increasing. Define
α := min_{δ1 ≤ |~y − ~x0| ≤ ε} V̇(~y) > 0.
Note that

V̇(~u(τ, ~x)) = (d/dt)|_{t=0} V(~u(t, ~u(τ, ~x))) = (d/dt)|_{t=0} V(~u(t + τ, ~x)) = ∇V(~u(τ, ~x)) · ~u'(τ, ~x) = (∂/∂τ) V(~u(τ, ~x))

⟹ V(~u(t, ~x)) − V(~u(0, ~x)) = ∫_0^t (∂/∂τ) V(~u(τ, ~x)) dτ = ∫_0^t V̇(~u(τ, ~x)) dτ ≥ αt
⟹ V(~u(t, ~x)) ≥ αt + V(~x), ∀ t ≥ 0.

Let M_ε := max_{|~y − ~x0| ≤ ε} V(~y). Then we have
M_ε ≥ V(~u(t, ~x)) ≥ V(~x) + αt → ∞ as t → ∞
— a contradiction. Thus ~x0 is unstable. This completes the proof. □
Remark 8.3. A function V satisfying the assumptions of Theorem 8.5 is called a Liapunov function.
Example 8.7. Consider the linear system
x1' = −4x1 − 2x2, x2' = x1.
Note that (0, 0) is the only equilibrium point of the given system. Let V : R^2 → R be a C^1 function defined by
V(x1, x2) = c1 x1^2 + c2 x2^2, c1, c2 ∈ R+.
Since V(0, 0) = 0 and V(x1, x2) > 0 for (x1, x2) ≠ (0, 0), V is a candidate Liapunov function. Now
V̇(x1, x2) = ∇V(x1, x2) · ~f(x1, x2) = −8c1 x1^2 + (2c2 − 4c1) x1 x2.
Choose c1, c2 > 0 such that c2 = 2c1. Taking c1 = 1, we have c2 = 2, and hence the Liapunov function takes the form V(x1, x2) = x1^2 + 2x2^2. Note that V̇(x1, x2) = −8x1^2 ≤ 0 for all ~x ∈ R^2. Thus, (0, 0) is a stable equilibrium point.
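One can verify this computation symbolically. The sketch below (assuming sympy; not part of the notes) recomputes V̇ for the chosen V:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
V = x1**2 + 2*x2**2                       # Liapunov candidate with c1 = 1, c2 = 2
f = sp.Matrix([-4*x1 - 2*x2, x1])         # right-hand side of the system
Vdot = (sp.Matrix([V]).jacobian([x1, x2]) * f)[0]
print(sp.simplify(Vdot))                  # -8*x1**2, which is <= 0 everywhere
```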
Example 8.8. Consider the nonlinear system
x1' = x2 − x1x2, x2' = −x1 + x1^2.
The origin is an equilibrium point. Consider the Liapunov function V(x1, x2) = x1^2 + x2^2. Then V̇(x1, x2) = 0 for all (x1, x2) ∈ R^2. Thus, (0, 0) is a stable equilibrium point. Furthermore, since (d/dt) V(x1(t), x2(t)) = 0, V(x1(t), x2(t)) = c for some constant c ≥ 0; i.e., the trajectories of this system lie on the circles x1^2 + x2^2 = c. Hence (0, 0) is NOT an asymptotically stable equilibrium point.
Example 8.9. Consider the second order differential equation x'' + q(x) = 0, where q : R → R is a continuous function such that x q(x) > 0 ∀ x ≠ 0. This can be written as the system
x1' = x2, x2' = −q(x1), where x1 = x.
The total energy of the system (the sum of the kinetic energy (1/2)(x1')^2 and the potential energy) is
V(~x) = x2^2/2 + ∫_0^{x1} q(s) ds.
Note that (0, 0) is an equilibrium point, and V(0, 0) = 0. Moreover, since x q(x) > 0 ∀ x ≠ 0, it is easy to check that V(x1, x2) > 0 for all (x1, x2) ∈ R^2 \ {~0}. Therefore, V is a Liapunov function. Now
V̇(x1, x2) = (q(x1), x2) · (x2, −q(x1)) = 0.
The solution curves are given by V(~x) = c, i.e., the energy is constant on the solution curves or trajectories of this system. Hence the origin is a stable equilibrium point.
Example 8.10. Consider the nonlinear system
x1' = −x2 + x1^3 + x1x2^2, x2' = x1 + x2^3 + x2x1^2.
Note that (0, 0) is an equilibrium point. Consider the function V(x1, x2) = x1^2 + x2^2. Then
V̇(x1, x2) = (2x1, 2x2) · (−x2 + x1^3 + x1x2^2, x1 + x2^3 + x2x1^2) = 2(x1^2 + x2^2)^2 > 0, ∀ (x1, x2) ∈ R^2 \ {~0}.
Thus, by Theorem 8.5, we conclude that (0, 0) is an unstable equilibrium point.
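Since V̇ = 2V^2 along trajectories, V grows monotonically (indeed V(t) = V(0)/(1 − 2V(0)t)); a short simulation (a sketch assuming scipy, not part of the notes) makes the instability visible:

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, u):
    x1, x2 = u
    return [-x2 + x1**3 + x1*x2**2, x1 + x2**3 + x2*x1**2]

# V(0) = 0.25, so V(t) = 1/(4 - 2t): the trajectory leaves any small ball
sol = solve_ivp(rhs, (0.0, 1.5), [0.5, 0.0], rtol=1e-10, dense_output=True)
for t in [0.0, 0.5, 1.0, 1.5]:
    x1, x2 = sol.sol(t)
    print(t, x1**2 + x2**2)  # V increases: 0.25, ~1/3, ~1/2, ~1
```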
Example 8.11. Let f(x) be an even polynomial and g(x) an odd polynomial in x. Consider the 2nd order ODE y'' + f(y)y' + g(y) = 0. This can be written as
x1' = x2 − F(x1), x2' = −g(x1), where F(x) = ∫_0^x f(s) ds.
To see this, let x1 = y. Then from the equation, we have
x1'' + (d/dt) F(x1) + g(x1) = 0 ⟹ (d/dt)[x1' + F(x1)] = −g(x1).
Set x2 = x1' + F(x1). Then we have x1' = x2 − F(x1), x2' = −g(x1).
Let G(x) = ∫_0^x g(s) ds, and suppose that G(x) > 0 and g(x)F(x) > 0 in a deleted neighborhood of the origin. Then the origin is an asymptotically stable equilibrium point. Indeed, consider the Liapunov function in a nbd. of (0, 0) given by
V(x1, x2) = ∫_0^{x1} g(s) ds + x2^2/2.
Note that V(0, 0) = 0 and V(x1, x2) > 0 in a deleted nbd. of (0, 0). Moreover,
V̇(x1, x2) = (g(x1), x2) · (x2 − F(x1), −g(x1)) = −g(x1)F(x1) < 0.
Hence the origin is an asymptotically stable equilibrium point. Note that if we assume instead that
G(x) > 0 and g(x)F(x) < 0 in a deleted neighborhood of the origin,
then the origin will be an unstable equilibrium point.
9. Phase-plane analysis:
Consider a second order autonomous equation
x'' + V'(x) = 0, (9.1)
where V : R → R is a smooth function. Setting y = x', equation (9.1) can be written as
x' = y, y' = −V'(x). (9.2)
We assume that the solution of the above problem exists for all t ∈ R.
Remark 9.1. If x(t) is a solution of (9.1), then x(t + h) is also a solution for any h ∈ R.
Since the system (9.2) is nonlinear, an explicit solution is hard to obtain. Phase plane analysis is an analytical approach that provides several types of information about the behaviour of the solution of the underlying system without solving the equation explicitly.
The plane (x, y) is called the phase plane, and the study of the system (9.2) is called phase plane analysis. Note that the system (9.2) can be written as
x' = H_y(x, y); y' = −H_x(x, y),
where
H(x, y) = y^2/2 + V(x).
This is the total energy of (9.2).
Definition 9.1. Let H(x, y) be a differentiable function on R^2. The autonomous system
x' = H_y(x, y), y' = −H_x(x, y) (9.3)
is called a Hamiltonian system, and H is called the Hamiltonian.
Lemma 9.1. If (x(t), y(t)) is a solution of (9.3), then there exists c ∈ R such that H(x(t), y(t)) = c.
Proof. Let (x(t), y(t)) be a solution of (9.3). Then by the chain rule,
(d/dt) H(x(t), y(t)) = H_x(x, y) x' + H_y(x, y) y' = H_x H_y − H_y H_x = 0
⟹ H(x(t), y(t)) = c. □
Remark 9.2. In view of the above lemma, we say that the system (9.2) is a conservative system. Define
E_c = {(x, y) ∈ R^2 : H(x, y) = c} = {(x, y) ∈ R^2 : y^2/2 + V(x) = c}.
Note that if (x(t), y(t)) solves (9.3), then (x(t), y(t)) ∈ E_c for all t, where c = H(x(0), y(0)). Also, E_c ≠ ∅ if and only if V(x) ≤ c for some x.
Example 9.1. Let H(x, y) = Ax^2 + Bxy + Cy^2. Then (0, 0) is the only equilibrium point.
• If c ≠ 0, then the curve E_c is a conic. Precisely:
i) If B^2 − 4AC < 0 and c > 0, then E_c is an ellipse.
ii) If B = 0, A = C and c > 0, then E_c is a circle.
iii) If B^2 − 4AC > 0 and c ≠ 0, then E_c is a hyperbola.
• If c = 0 or B^2 = 4AC, then the conic can be a pair of straight lines, or it reduces to a point.
Definition 9.2. Let Ω be an open set in R^n and ~f : Ω → R^n be locally Lipschitz. Consider the ODE
~u'(t) = ~f(~u(t)). (9.4)
A point ~x0 ∈ Ω is called an equilibrium point or critical point of (9.4) if ~f(~x0) = ~0.
Example 9.2. The equilibrium point of the system x' = x + 1, y' = x + 3y − 1 is (−1, 2/3).
Remark 9.3. The points (x*, y*) ∈ R^2 such that H_x(x*, y*) = 0 = H_y(x*, y*) are precisely the equilibria of the Hamiltonian system (9.3).
Remark 9.4. E_c does not contain equilibria of (9.3) if and only if H_x and H_y do NOT vanish simultaneously on E_c.
9.1. Periodic solutions: Denote by (x(t), y(t)) the solution of the Hamiltonian system (9.2) such that H(x(t), y(t)) = c. We are interested in the periodicity of the solution of (9.2). A solution (x(t), y(t)) of (9.2) is periodic with period T > 0 if
x(t + T) = x(t), y(t + T) = y(t) ∀ t ∈ R.
By a periodic solution, we mean a non-trivial periodic solution, namely a periodic solution which is NOT an equilibrium solution. From the set E_c, we have y = ±√(2c − 2V(x)). It is clear that if x = x(t), y = y(t) is a periodic solution of (9.2), then E_c is a closed bounded curve.
Theorem 9.2. Suppose that E_c ≠ ∅ is a compact (closed and bounded) curve that does not contain equilibria of (9.2). Then (x(t), y(t)) is a periodic solution of (9.2).
Example 9.3. Consider the IVP
x' = 2x + 3y, x(0) = 0; y' = −3x − 2y, y(0) = 1.
It is a Hamiltonian system with Hamiltonian
H(x, y) = 2xy + (3/2)y^2 + (3/2)x^2 := Ax^2 + Bxy + Cy^2.
The curve E_c has equation 2xy + (3/2)y^2 + (3/2)x^2 = c. Since x(0) = 0 and y(0) = 1, we get c = 3/2. Thus the curve E_c is an ellipse (as B^2 − 4AC = 4 − 9 = −5 < 0 and c = 3/2 > 0). Note that it does not contain the equilibrium point (0, 0). Hence the solution of the IVP is periodic.
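A numerical check (not part of the notes; assumes scipy) confirms both the conservation of H and the periodicity — for this linear system the eigenvalues are ±i√5, so the period is 2π/√5:

```python
import numpy as np
from scipy.integrate import solve_ivp

H = lambda x, y: 2*x*y + 1.5*y**2 + 1.5*x**2

def rhs(t, u):
    x, y = u
    return [2*x + 3*y, -3*x - 2*y]

sol = solve_ivp(rhs, (0.0, 10.0), [0.0, 1.0], rtol=1e-10, atol=1e-12,
                dense_output=True)
T = 2*np.pi/np.sqrt(5)                      # period from eigenvalues +/- i*sqrt(5)
print(H(*sol.sol(0.0)), H(*sol.sol(10.0)))  # both ~ 3/2: H is conserved
print(sol.sol(T))                           # ~ (0, 1): back to the start after T
```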
Example 9.4. Consider the IVP
x' = x − 6y, x(0) = 1; y' = −2x − y, y(0) = 0.
This is a Hamiltonian system with Hamiltonian
H(x, y) = xy − 3y^2 + x^2 := Ax^2 + Bxy + Cy^2.
The equation of the curve E_c is x^2 + xy − 3y^2 = c. From the initial conditions, we get c = 1. Thus the conic E_c is a hyperbola (B^2 − 4AC = 13 > 0), and hence the solution is unbounded.
Consider a second order autonomous ODE x'' = f(x). Then it can be re-written as a system of the form (9.2). Hence it is a Hamiltonian system with Hamiltonian
H(x, y) = y^2/2 − F(x), where F'(x) = f(x).
Thus, if H = c is a compact curve, and it does not contain any zeros of f, then it carries a periodic solution of the equation x'' = f(x).
Example 9.5. Consider the IVP
x'' = −x + x^3, x(0) = 0, x'(0) = 1/2.
This is a Hamiltonian system with Hamiltonian
H(x, y) = y^2/2 + x^2/2 − x^4/4.
Moreover, E_c has the equation y^2/2 + x^2/2 − x^4/4 = c. Since x(0) = 0 and x'(0) := y(0) = 1/2, we have c = 1/8. Thus the curve is defined by 2y^2 + 2x^2 − x^4 = 1/2. Note that the curve has a component C which is a closed bounded curve surrounding the origin, and it does not contain any zeros of f, which are 0, 1, −1. Hence the corresponding solution is periodic with energy 1/8.
Example 9.6. Consider the IVP
x'' + x + 6x^5 = 0, x(0) = 0, x'(0) = a ≠ 0.
This is a Hamiltonian system with H(x, y) = y^2/2 + x^2/2 + x^6. Hence the equation of E_c is y^2/2 + x^2/2 + x^6 = c. From the initial conditions, we obtain c = a^2/2, and hence the equation of the curve is given by
x^2 + 2x^6 + y^2 = a^2.
Note that it is a compact curve and does not contain the only zero of f, which is 0. Thus, the solution is periodic.
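The conservation of energy along the orbit can be verified numerically (a sketch assuming scipy and the hypothetical choice a = 1; not part of the notes):

```python
import numpy as np
from scipy.integrate import solve_ivp

a = 1.0                                       # hypothetical initial speed
H = lambda x, y: 0.5*y**2 + 0.5*x**2 + x**6

def rhs(t, u):
    x, y = u
    return [y, -x - 6*x**5]

sol = solve_ivp(rhs, (0.0, 20.0), [0.0, a], rtol=1e-10, atol=1e-12,
                dense_output=True)
print([round(H(*sol.sol(t)), 6) for t in np.linspace(0, 20, 5)])
# all entries ~ a^2/2 = 0.5: the orbit stays on the compact curve E_c
```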
9.2. On planar autonomous systems. A general planar autonomous system is of the form
x' = f(x, y), y' = g(x, y),
where f, g : R^2 → R are given continuous functions. We will study the well-known Lotka-Volterra prey-predator system. Let x(t) denote the number of prey at time t and y(t) the number of predators at time t. Under certain assumptions, the dynamics of biological systems in which prey and predator interact can be modeled as
x'(t) = ax − bxy, y'(t) = −cy + dxy, (9.5)
where a, b, c, d are positive real numbers describing the interaction of the prey and predator. Equation (9.5) is known as the Lotka-Volterra system. Observe that the equilibrium solutions of (9.5) are (0, 0) and P0 = (x0, y0) with x0 = c/d, y0 = a/b.
Lemma 9.3. Let (x(t), y(t)) be a solution of (9.5) with x, y > 0. Define
h(x, y) = dx + by − c ln(x) − a ln(y), x, y > 0.
Then h(x(t), y(t)) is identically constant. In other words, it is a conservative system.
Proof. By using the chain rule along with the fact that (x(t), y(t)) solves (9.5), we have
(d/dt) h(x(t), y(t)) = dx' + by' − c x'/x − a y'/y = 0.
This shows that h(x(t), y(t)) is identically constant. □
Lemma 9.4. P0 is the strict global minimum of h.
Proof. Observe that (x0, y0) solves the equations h_x = 0 = h_y. The Hessian matrix h''(x0, y0) is given by

h''(x0, y0) = [ h_xx(x0, y0)  h_xy(x0, y0) ]  =  [ c/x0^2   0      ]
              [ h_yx(x0, y0)  h_yy(x0, y0) ]     [ 0        a/y0^2 ],

which is positive definite. Thus P0 is the global minimum of h. □
We now discuss periodic solutions of (9.5).
Theorem 9.5. Let h0 = h(P0). Then for every β > h0, the system (9.5) has a periodic solution (x(t), y(t)) such that h(x(t), y(t)) = β.
Proof. Since P0 is the global minimum of h, the level set {h(x, y) = β} is, for every β > h0, a non-empty closed curve around P0. Note that P0 is the unique equilibrium point of (9.5) in the open quadrant x, y > 0. Thus, repeating the arguments carried out to prove the existence of periodic solutions of (9.2), one can show easily that the curve h(x, y) = β corresponds to a periodic solution of (9.5). □
Let (x(t), y(t)) be a T-periodic solution of (9.5), which exists thanks to Theorem 9.5. Define the average sizes of the prey and predator populations as
x̄ = (1/T) ∫_0^T x(t) dt; ȳ = (1/T) ∫_0^T y(t) dt.
Theorem 9.6. If (x(t), y(t)) is a T-periodic solution of (9.5), then x̄ = x0 and ȳ = y0.
Proof. Since y' = y(dx − c), one has y'/y = dx − c. Integrating from 0 to T, we have
d ∫_0^T x(t) dt − cT = ∫_0^T (y'/y) dt = ln(y(T)) − ln(y(0)) = 0 (since y is T-periodic)
⟹ d x̄ − c = 0 ⟹ x̄ = c/d = x0.
Similarly, by using the first equation of (9.5), we can show that ȳ = a/b = y0. □
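Theorem 9.6 can be tested numerically. The sketch below (not from the notes; the parameter choice a = b = c = d = 1 and the initial point (2, 1) are hypothetical) integrates (9.5) over many periods and compares the time averages with (x0, y0) = (1, 1):

```python
import numpy as np
from scipy.integrate import solve_ivp

a, b, c, d = 1.0, 1.0, 1.0, 1.0          # hypothetical parameters: (x0, y0) = (1, 1)

def rhs(t, u):
    x, y = u
    return [a*x - b*x*y, -c*y + d*x*y]

sol = solve_ivp(rhs, (0.0, 200.0), [2.0, 1.0], rtol=1e-10, atol=1e-12,
                dense_output=True)
ts = np.linspace(0.0, 200.0, 20001)
xs, ys = sol.sol(ts)
print(xs.mean(), ys.mean())  # both ~ 1 up to windowing error, as the theorem predicts
```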