
General Properties for Second Order Linear Differential Equations

In this document I have collected a few general results concerning homogeneous linear second-order differential equations, i.e. equations of the form

y'' + P(x) y' + Q(x) y = 0                                            (1)

These are very interesting results that, when no technique is available for finding the analytic form of the solutions, can help in investigating the properties of such solutions.

Wronskian of two independent solutions

If y1 and y2 are two (not necessarily independent) solutions of equation (1), then their Wronskian,

W(y1, y2) = | y1    y2  |
            | y1'   y2' |  = y1 y2' - y2 y1'                          (2)

is either identically zero, or it has constant sign over the whole interval of validity of the two
solutions.
Let us consider the first derivative of W. A straightforward differentiation of formula (2) yields:

W' = y1' y2' + y1 y2'' - y2' y1' - y2 y1'' = y1 y2'' - y2 y1''

Next, given that both y1 and y2 are solutions of the linear second-order homogeneous equation,
the following equations are valid:

y1'' + P(x) y1' + Q(x) y1 = 0

y2'' + P(x) y2' + Q(x) y2 = 0
On multiplying the first equation by y2 , the second by y1 , and subtracting the first from the
second, we obtain:

(y1 y2'' - y2 y1'') + P(x)(y1 y2' - y2 y1') = 0
In the above equation we recognize the expressions for the Wronskian and its first derivative; the equation, thus, can be re-written as:

dW/dx + P(x) W = 0
The general solution of this equation is:
W = k exp( -∫ P(x) dx )

The exponential is always positive, so the Wronskian has the same sign as the constant k. If k = 0, the Wronskian is identically zero. This is what we wanted to prove. It can be further shown (but we will not do it here) that the Wronskian can be zero only if the two solutions are linearly dependent. Therefore the Wronskian of two linearly independent solutions is never zero and has the same sign throughout the interval of validity of the two solutions.
EXAMPLE 1.
Consider the following equation:

y'' + y = 0
sin x and cos x are two linearly independent solutions of this equation. Their Wronskian is:

W(sin x, cos x) = | sin x    cos x |
                  | cos x   -sin x |  = -sin²x - cos²x = -1

i.e. it has constant sign (it is negative) throughout the whole interval of validity of sin x and cos x (which is -∞ < x < +∞).
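As a quick check of such computations, here is a minimal sketch in Python (assuming the sympy library is available) that recomputes this Wronskian directly from definition (2):

import sympy as sp

x = sp.symbols('x')
y1, y2 = sp.sin(x), sp.cos(x)

# Wronskian from definition (2): W = y1*y2' - y2*y1'
W = sp.simplify(y1 * sp.diff(y2, x) - y2 * sp.diff(y1, x))
print(W)  # -1: constant, negative, and never zero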

Zeroes of independent solutions

If y1 and y2 are two independent solutions of equation (1), then the zeros of these functions are distinct and occur alternately. More specifically, y1 vanishes exactly once between any two successive zeros of y2.
This is quite an interesting and powerful observation on the qualitative behaviour of the solutions of linear, second order equations. In order to see why this is true, let us consider the expression for the Wronskian, equation (2). The quantity y1(x) y2'(x) - y2(x) y1'(x) has constant sign (positive or negative, it does not matter), because the two functions are linearly independent. First of all, they cannot have a common zero. Let us, in fact, suppose the opposite, and let us call this common zero x0. Then y1(x0) = y2(x0) = 0, and W(y1(x0), y2(x0)) = y1(x0) y2'(x0) - y2(x0) y1'(x0) = 0. But we know that the Wronskian has to be different from zero, therefore x0 cannot be a zero of both y1 and y2. Let us consider, then, two successive zeros of y2, say x1 and x2, and let us see what happens to y1 in the interval x1 < x < x2. At both x1 and x2 the Wronskian is different from zero. More specifically, W(x1) = y1(x1) y2'(x1) and W(x2) = y1(x2) y2'(x2), since y2 vanishes at both points. To fix ideas, and to make the argument easier to follow, let us suppose that the Wronskian is positive. We also know that y2'(x1) will be of a different sign from y2'(x2), because in order to go from one zero to the next, the function y2 needs to increase and then decrease, or vice-versa. Therefore y1(x1) y2'(x1) can maintain the same sign as y1(x2) y2'(x2) only if y1(x1) has a different sign from y1(x2). But, y1 being a continuous function, it will then necessarily have to vanish somewhere between x1 and x2. Furthermore, y1 will vanish exactly once, because x1 and x2 are two consecutive zeros of y2: if this were not the case, we could apply the same argument with the roles of y1 and y2 exchanged to show that y2 would possess another zero between x1 and x2, thus violating the initial assumption of two successive zeros.
EXAMPLE 2.
Using again the two independent solutions sin x and cos x, it is easy to check the validity of the
above statement, because the sine function has exactly one zero between two successive zeros of
the cosine function.
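The alternation of zeros can also be checked numerically; the following minimal sketch (Python, assuming numpy is available) counts the sign changes of sin x strictly between each pair of successive zeros of cos x:

import numpy as np

# Successive zeros of cos x are at (k + 1/2)*pi. Count how many times
# sin x changes sign strictly between two such zeros.
for k in range(5):
    a, b = (k + 0.5) * np.pi, (k + 1.5) * np.pi
    t = np.linspace(a, b, 10000)[1:-1]   # open interval (a, b)
    sign_changes = np.count_nonzero(np.diff(np.sign(np.sin(t))))
    print(k, sign_changes)               # exactly 1 in every interval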

Reduction of standard form to normal form

The standard form of a linear, second order differential equation is what we have defined at (1).
This can always be reduced to its normal form:

u'' + q(x) u = 0                                                      (3)

where q(x) is defined through P and Q. Let us see how to operate the transformation.
We have to suppose, first, that y can be factorised as uv, with u and v two new functions:
y = uv

y' = u'v + uv'

y'' = u''v + 2u'v' + uv''

Then we replace the above quantities in equation (1) and, successively, collect terms in u'', u' and u:

v u'' + (2v' + P v) u' + (v'' + P v' + Q v) u = 0

To arrive at form (3) we simply have to set the coefficient of u' to zero:

2v' + P v = 0    ⇒    v = exp( -(1/2) ∫ P(x) dx )
From the above expression for v, the first and second derivatives are easily computed:

v' = -(1/2) P v ,    v'' = ( -(1/2) P' + (1/4) P² ) v
The transformed equation becomes, thus,

v u'' + ( -(1/2) P' + (1/4) P² - (1/2) P² + Q ) v u = 0

u'' + ( Q - (1/4) P² - (1/2) P' ) u = 0
(we can divide by v because in this case v ≠ 0 everywhere). The above equation coincides with equation (3) once q(x) is defined as:

q(x) = Q(x) - (1/4) P²(x) - (1/2) P'(x)                               (4)
EXAMPLE 3.
Reduce Bessel's equation,

x² y'' + x y' + (x² - m²) y = 0
to its normal form.
Solution.
First the equation has to be re-written in its standard form, through a division by x²:

y'' + (1/x) y' + ( 1 - m²/x² ) y = 0

Then it is simply a matter of computing q(x) according to formula (4). For this purpose we compute the following quantities:

Q(x) = 1 - m²/x²

(1/4) P²(x) = 1/(4x²)

(1/2) P'(x) = -1/(2x²)

Therefore,
q(x) = 1 - m²/x² - 1/(4x²) + 1/(2x²) = 1 + (1 - 4m²)/(4x²)

and Bessel's equation in its normal form is:

u'' + ( 1 + (1 - 4m²)/(4x²) ) u = 0
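The computation of q(x) lends itself to automation; the following minimal sketch (Python, assuming the sympy library is available; the helper name normal_form_q is only illustrative) applies formula (4) and reproduces the result just obtained:

import sympy as sp

def normal_form_q(P, Q, x):
    # Formula (4): q = Q - P**2/4 - P'/2
    return sp.simplify(Q - P**2 / 4 - sp.diff(P, x) / 2)

x, m = sp.symbols('x m')
# Bessel's equation in standard form: P(x) = 1/x, Q(x) = 1 - m**2/x**2
q = normal_form_q(1 / x, 1 - m**2 / x**2, x)
print(q)  # equivalent to 1 + (1 - 4*m**2)/(4*x**2)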

There is an interesting result about the normal form which we will state, but not prove. If q(x) < 0 and u(x) is a non-trivial solution of (3), then u(x) has at most one zero (that is, it does not have an oscillatory character). So, if one is interested in oscillations, one should concentrate on equations with positive q(x). Consider for example the equation

y'' - y = 0

which is already in its normal form. Here q(x) = -1, i.e. it is negative for all x. The general solution is C1 exp(x) + C2 exp(-x). To visualise how such a solution has at most one zero, let us choose two arbitrary constants, for instance C1 = 1 and C2 = -1, so that y(x) = exp(x) - exp(-x). The plot of this function is shown in Figure 1; it clearly shows only one zero.

Figure 1: plot of y(x) = exp(x) - exp(-x), showing a single zero.
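The single zero is easy to confirm numerically, as in this minimal sketch (Python, assuming numpy is available):

import numpy as np

t = np.linspace(-5.0, 5.0, 100000)   # even number of points, so t never hits 0
y = np.exp(t) - np.exp(-t)
print(np.count_nonzero(np.diff(np.sign(y))))  # 1: a single zero, at x = 0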

Sturm-Liouville problems

A very important class of second order, linear differential equations goes under the name of Sturm-Liouville problems. More specifically, the equations in this class have the following self-adjoint form:

d/dx[ μ(x) dy/dx ] + [ λ ρ(x) + σ(x) ] y = 0                          (5)

where λ is a parameter that can assume several values, and is called an eigenvalue, while the functions μ(x), ρ(x) and σ(x) are continuous on an interval [a, b] and, on this interval, μ(x) > 0 and ρ(x) > 0. In a Sturm-Liouville problem there are boundary conditions, rather than initial conditions. In general they can be written as,

C1 y(a) + C2 y'(a) = 0

D1 y(b) + D2 y'(b) = 0                                                (6)

where, remember, a and b are the boundary points of the interval [a, b].
EXAMPLE 4.
Find a solution for the following Sturm-Liouville problem on the interval [0, π]:

y'' + λ y = 0 ,    y(0) = 0 ,    y(π) = 0
Solution.
The given equation can be straightforwardly written in self-adjoint form as follows:

y'' + λ y = d/dx[ 1 · dy/dx ] + ( λ·1 + 0 ) y = 0

This is exactly form (5) with μ(x) = 1 > 0, ρ(x) = 1 > 0 and σ(x) = 0, a continuous function. We have, therefore, verified that the given problem is a Sturm-Liouville problem. To find solutions, let us consider, in turn, λ < 0, λ = 0 and λ > 0.
λ < 0. If we look at the given equation in its normal form, we realize that q(x) = λ < 0. Thus we know that at most one zero is allowed for any solution. This cannot be the case here, because the boundary conditions force the solution to have at least two zeros. Hence λ cannot be a negative number.
λ = 0. In this case the equation is a very simple one, with general solution y(x) = Ax + B, i.e. a straight line. Such a curve can have at most one zero, therefore it cannot be a solution of our problem; λ cannot be zero either.
λ > 0. This is, obviously, the only interesting case. The general solution is:

y(x) = A cos(√λ x) + B sin(√λ x)

By using the first boundary condition, y(0) = 0, we get A = 0. Thus we are left with y(x) = B sin(√λ x). The second boundary condition becomes:

B sin(√λ π) = 0

Now, B cannot be zero, otherwise the trivial solution would be the only solution to the problem. So the boundary condition transforms into:

sin(√λ π) = 0    ⇒    √λ = n ,    n = 1, 2, 3, . . .

We have found, here, a very interesting and important result, common to all Sturm-Liouville problems. There is usually an infinite set of solutions yn(x), called eigenfunctions, each one corresponding to an allowed value λn of the eigenvalue λ. For the problem just examined, eigenvalues and eigenfunctions are:

λ = λn = n² ,    y(x) = yn(x) = sin(nx) ,    n = 1, 2, 3, . . .
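These eigenpairs can be verified symbolically; a minimal sketch in Python (assuming the sympy library is available) checks both the differential equation and the boundary conditions:

import sympy as sp

x = sp.symbols('x')
n = sp.symbols('n', integer=True, positive=True)
y = sp.sin(n * x)   # candidate eigenfunction, with eigenvalue n**2

print(sp.simplify(sp.diff(y, x, 2) + n**2 * y))  # 0: satisfies y'' + n**2 y = 0
print(y.subs(x, 0), y.subs(x, sp.pi))            # 0 0: both boundary conditions hold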

Given that the starting equation and the boundary conditions are linear and homogeneous, any linear combination of these eigenfunctions will still be a solution of the Sturm-Liouville problem. There is an infinite number of eigenfunctions, thus there will exist functions, expressed as infinite summations of these eigenfunctions, that are solutions to the Sturm-Liouville problem. One very important feature of the eigenfunctions coming from a Sturm-Liouville problem is that they form an orthogonal set. That is, if yn(x) and ym(x) are two eigenfunctions corresponding to eigenvalues λn and λm, then:

∫_a^b w(x) ym(x) yn(x) dx  =  0  if m ≠ n ,    ≠ 0  if m = n          (7)

where w(x) is a so-called weight function, defined and continuous on the interval [a, b]. Let us try and show that this is actually true. First of all let us re-write equation (5) for both eigenvalues:


d/dx[ μ(x) dym/dx ] + [ λm ρ(x) + σ(x) ] ym = 0

d/dx[ μ(x) dyn/dx ] + [ λn ρ(x) + σ(x) ] yn = 0

Let us now multiply the first equation by yn, the second by ym, and subtract the second from the first:

yn d/dx[ μ(x) dym/dx ] - ym d/dx[ μ(x) dyn/dx ] + (λm - λn) ρ(x) ym yn = 0
Further, let us integrate the whole equation between a and b:

(λm - λn) ∫_a^b ρ(x) ym(x) yn(x) dx = ∫_a^b ym d/dx[ μ(x) dyn/dx ] dx - ∫_a^b yn d/dx[ μ(x) dym/dx ] dx
Through integration by parts one obtains:

∫_a^b ym d/dx[ μ(x) dyn/dx ] dx = [ μ ym yn' ]_a^b - ∫_a^b μ ym' yn' dx

and, similarly:

∫_a^b yn d/dx[ μ(x) dym/dx ] dx = [ μ yn ym' ]_a^b - ∫_a^b μ yn' ym' dx

The integrated equation thus becomes:

(λm - λn) ∫_a^b ρ ym yn dx = [ μ ym yn' ]_a^b - ∫_a^b μ ym' yn' dx - [ μ yn ym' ]_a^b + ∫_a^b μ yn' ym' dx

(λm - λn) ∫_a^b ρ(x) ym(x) yn(x) dx = μ(b)[ ym(b) yn'(b) - yn(b) ym'(b) ] - μ(a)[ ym(a) yn'(a) - yn(a) ym'(a) ]

The quantities in square brackets are Wronskians. If we define

Wmn(x) ≡ | ym(x)    yn(x)  |
         | ym'(x)   yn'(x) |  = ym(x) yn'(x) - yn(x) ym'(x)

then the above equation can be re-written as follows:

(λm - λn) ∫_a^b ρ(x) ym(x) yn(x) dx = μ(b) Wmn(b) - μ(a) Wmn(a)

Now we ask: is the right-hand side zero? It is certainly zero if the Wronskians are zero. Let us, then, consider the boundary conditions, and re-write the first of (6) for both ym and yn:

C1 ym(a) + C2 ym'(a) = 0

C1 yn(a) + C2 yn'(a) = 0

At least one of the two constants, C1 and C2, is different from zero. If we look at the above equations as a system for the two unknowns C1 and C2, then it admits solutions other than the trivial one only if the determinant,

| ym(a)   ym'(a) |
| yn(a)   yn'(a) |  ≡ Wmn(a)

is zero. Thus, Wmn(a) must be equal to 0. Similarly, using the second of (6), we can show that Wmn(b) = 0. The integrated equation has, eventually, assumed the following form:
(λm - λn) ∫_a^b ρ(x) ym(x) yn(x) dx = 0

This is equivalent to (7) when m ≠ n, since then λm - λn ≠ 0. Thus, it has been proved that the set of eigenfunctions is an orthogonal set, with weight function equal to ρ(x).
EXAMPLE 5.
In EXAMPLE 4 the set of orthogonal eigenfunctions was {yn(x) = sin(nx), n = 1, 2, 3, . . .}. For them the orthogonality is expressed by the following integral:

∫_0^π sin(mx) sin(nx) dx  =  0  for m ≠ n ,    π/2  for m = n

As ρ(x) = 1 in this example, we expect the weight function to be 1 as well. This is indeed the case in the orthogonality integral just shown.
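These integrals can be confirmed with a computer algebra system, as in the following minimal sketch (Python, assuming the sympy library is available):

import sympy as sp

x = sp.symbols('x')
m, n = sp.symbols('m n', integer=True, positive=True)

# Off-diagonal case: the result vanishes whenever m != n
# (sympy may express it as a Piecewise in m and n)
print(sp.integrate(sp.sin(m * x) * sp.sin(n * x), (x, 0, sp.pi)))
# Diagonal case: the normalisation integral
print(sp.integrate(sp.sin(n * x) ** 2, (x, 0, sp.pi)))  # pi/2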
Where does the term self-adjoint come from? Let us consider a generic linear, second order
homogeneous equation:

A(x) y'' + B(x) y' + C(x) y = 0                                       (8)
Sometimes it is possible to multiply the equation by a specific function, say ν(x), that changes it into the following exact form:

[ ν(x) A(x) y' ]' + [ S(x) y ]' = 0                                   (9)

The same function ν(x) needs to satisfy a specific differential equation; let us set out to determine such an equation. Let us, first, expand equation (9) by carrying out the derivatives:

ν A y'' + ν' A y' + ν A' y' + S y' + S' y = 0

ν A y'' + ( ν' A + ν A' + S ) y' + S' y = 0

A comparison of this equation with equation (8), previously multiplied by ν(x), entails the following relations:

ν' A + ν A' + S = ν B

S' = ν C

or, taking the derivative of the first equation:

ν'' A + ν' A' + ν' A' + ν A'' + S' = ν' B + ν B'

S' = ν C

ν'' A + 2 ν' A' + ν A'' + ν C = ν' B + ν B'

Thus, any chosen function ν(x) will have to obey the following equation:

A(x) ν'' + [ 2A'(x) - B(x) ] ν' + [ A''(x) - B'(x) + C(x) ] ν = 0     (10)

This equation is called, quite sensibly, the adjoint of equation (8). There are cases where the adjoint is exactly equivalent to the equation itself; these equations are called self-adjoint. For example, the Legendre equation,

(1 - x²) y'' - 2x y' + p(p + 1) y = 0

has adjoint equation:

(1 - x²) ν'' + [ 2(-2x) - (-2x) ] ν' + [ -2 - (-2) + p(p + 1) ] ν = 0

(1 - x²) ν'' - 2x ν' + p(p + 1) ν = 0

This is, again, the Legendre equation; thus the adjoint is equivalent to the equation itself. Therefore the Legendre equation has a self-adjoint form.
Is it always possible to transform a linear, second order, homogeneous equation into a self-adjoint
one? The answer is yes, as long as we multiply equation (8) by a factor:
ν(x) = (1/A(x)) exp( ∫ B(x)/A(x) dx )                                 (11)
Things proceed as follows. Let us multiply (8) by the factor in (11):

exp( ∫ B/A dx ) y'' + (B/A) exp( ∫ B/A dx ) y' + (C/A) exp( ∫ B/A dx ) y = 0

The first two terms can be combined into one; this way the equation assumes the form:

d/dx[ exp( ∫ B/A dx ) y' ] + (C/A) exp( ∫ B/A dx ) y = 0

which is essentially the self-adjoint form (5).
EXAMPLE 6.
Cast Bessel's equation,

x² y'' + x y' + (x² - p²) y = 0

into self-adjoint form.

Solution.
The whole equation has to be multiplied by the factor given by (11). In this case it is:

ν(x) = (1/x²) exp( ∫ dx/x ) = (1/x²) exp(ln x) = 1/x

After the multiplication the equation looks like this:

x y'' + y' + ( x - p²/x ) y = 0

The first two terms can be re-written as one; after this, the equation is in its self-adjoint form:

d/dx[ x y' ] + ( x - p²/x ) y = 0
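Formula (11) is mechanical enough to automate; the following minimal sketch (Python, assuming the sympy library is available; the helper name self_adjoint_factor is only illustrative) recovers the factor 1/x used above:

import sympy as sp

x = sp.symbols('x', positive=True)

def self_adjoint_factor(A, B, x):
    # Formula (11): nu(x) = (1/A) * exp( integral of B/A )
    return sp.simplify(sp.exp(sp.integrate(B / A, x)) / A)

# Bessel's equation: A(x) = x**2, B(x) = x
print(self_adjoint_factor(x**2, x, x))  # 1/x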
