Lecture 23
Dylan Zwick
Spring 2014
Today we'll look at how to find a complete set of solutions to the system

x′ = Ax

when the matrix A has a repeated, and possibly defective, eigenvalue.
[1] Admittedly, not one of Sherlock Holmes’s more popular mysteries.
Example - Find a general solution to the system:
     ⎛  9  4  0 ⎞
x′ = ⎜ −6 −1  0 ⎟ x
     ⎝  6  4  3 ⎠
More room for the example problem
A = ⎛ 1 −3 ⎞
    ⎝ 3  7 ⎠
In this situation we call this eigenvalue defective, and the defect of the eigenvalue is the difference between the multiplicity of the root and the number of linearly independent eigenvectors. In the example above the defect is of order 1.
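As a sanity check on claims like this, we can let a computer algebra system count eigenvectors for us. Below is a quick sketch using sympy (not part of the lecture notes, and assuming sympy is available) applied to the matrix A above:

```python
from sympy import Matrix

# The matrix from the example above.
A = Matrix([[1, -3], [3, 7]])

# eigenvects() returns (eigenvalue, algebraic multiplicity, basis of eigenvectors).
(lam, mult, vecs), = A.eigenvects()

# λ = 4 is a double root, but it has only one independent eigenvector,
# so its defect is 2 - 1 = 1.
defect = mult - len(vecs)
print(lam, mult, len(vecs), defect)  # 4 2 1 1
```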
Now, how does this relate to systems of differential equations, and how do we deal with it if a defective eigenvalue shows up?[2] Well, suppose we have the same matrix A as above, and we’re given the differential equation:

x′ = Ax.
The only eigenvalue is λ = 4, a double root, and up to scaling it has only the one eigenvector (1, −1)^T. This gives us the solution

x1(t) = ⎛  1 ⎞ e^{4t},
        ⎝ −1 ⎠
but for a complete set of solutions we’ll need another linearly indepen-
dent solution. How do we get this second solution?
Based on experience we may think that, if v1 e^{λt} is a solution, then we can get a second solution of the form v2 te^{λt}. Let's try out this solution. Plugging it into x′ = Ax gives

v2 e^{λt} + λv2 te^{λt} = Av2 te^{λt},

and matching the e^{λt} terms on each side forces v2 = 0, so this guess fails.[3] We try instead a solution of the form

x(t) = (v1 t + v2)e^{λt}.

This one works.[4] Plugging it into x′ = Ax we get:
[2] No, run and hide is not an acceptable answer.
[3] Nuts!
[4] Hooray!
v1 e^{λt} + λv1 te^{λt} + λv2 e^{λt} = Av1 te^{λt} + Av2 e^{λt}.

If we equate the e^{λt} terms and the te^{λt} terms we get the two equalities:

(A − λI)v1 = 0

and

(A − λI)v2 = v1.
Let’s think about this. What this means is that, if we have a defective
eigenvalue with defect 1, we can find two linearly independent solutions
by simply finding a solution to the equation
(A − λI)²v2 = 0

such that

(A − λI)v2 ≠ 0.
Our two linearly independent solutions will then be

x1(t) = v1 e^{λt}

and

x2(t) = (v1 t + v2)e^{λt}.
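We can verify symbolically that x2 really does solve x′ = Ax. Here's a sketch using sympy, with the concrete values λ = 4, v1 = (−3, 3)^T, and v2 = (1, 0)^T from our running example (this particular v2 is the choice worked out below):

```python
from sympy import Matrix, symbols, exp, simplify

t = symbols('t')
A = Matrix([[1, -3], [3, 7]])
lam = 4
v1 = Matrix([-3, 3])  # eigenvector: (A - 4I)v1 = 0
v2 = Matrix([1, 0])   # generalized eigenvector: (A - 4I)v2 = v1

# x2(t) = (v1 t + v2) e^{λt} should satisfy x2' = A x2.
x2 = (v1 * t + v2) * exp(lam * t)
residual = simplify(x2.diff(t) - A * x2)
print(residual)  # Matrix([[0], [0]])
```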
In our example we compute

(A − 4I)² = ⎛ 0 0 ⎞
            ⎝ 0 0 ⎠
So, any non-zero vector could potentially work as our vector v2 . If we
try
v2 = ⎛ 1 ⎞
     ⎝ 0 ⎠
then we get:
⎛ −3 −3 ⎞⎛ 1 ⎞   ⎛ −3 ⎞
⎝  3  3 ⎠⎝ 0 ⎠ = ⎝  3 ⎠,

so v1 = (A − 4I)v2 = (−3, 3)^T is a genuine eigenvector, and the pair v1, v2 gives us our two linearly independent solutions.
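A quick numerical sanity check of this computation (a numpy sketch, not part of the original notes):

```python
import numpy as np

A = np.array([[1.0, -3.0], [3.0, 7.0]])
B = A - 4 * np.eye(2)   # A - 4I
v2 = np.array([1.0, 0.0])

v1 = B @ v2             # (A - 4I)v2: should be (-3, 3), a genuine eigenvector
print(v1)               # [-3.  3.]
print(B @ v1)           # (A - 4I)^2 v2 = 0, as promised
```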
More generally, we define a rank r generalized eigenvector associated with the eigenvalue λ to be a vector v such that

(A − λI)^r v = 0

but

(A − λI)^{r−1} v ≠ 0.
We note that a rank 1 generalized eigenvector is just our standard eigenvector, where we treat a matrix raised to the power 0 as the identity matrix. We define a length k chain of generalized eigenvectors based on the eigenvector v1 as a set {v1, . . . , vk} of k generalized eigenvectors such that

(A − λI)vk = vk−1
(A − λI)vk−1 = vk−2
⋮
(A − λI)v2 = v1.
It's a theorem[5] that if λ is an eigenvalue with defect d, then we can always find enough chains of generalized eigenvectors to build a complete set of solutions, and every generalized eigenvector u in these chains satisfies

(A − λI)^{d+1} u = 0.
Given a length k chain, the corresponding linearly independent solutions start out as

x1(t) = v1 e^{λt}

x2(t) = (v1 t + v2) e^{λt}

x3(t) = ((1/2)v1 t² + v2 t + v3) e^{λt}
[5] This is mathspeak for “A theorem that’s true but we’re not going to prove. So just trust me.”
⋮

xk(t) = (v1 t^{k−1}/(k − 1)! + · · · + vk−2 t²/2! + vk−1 t + vk) e^{λt}
Example - Find a general solution to the system:[6]

     ⎛  0  1  2 ⎞
x′ = ⎜ −5 −3 −7 ⎟ x.
     ⎝  1  0  0 ⎠
[6] In practice we’ll only be dealing with smaller (2×2, 3×3, maybe a 4×4) systems, so things won’t get too awful.
Room for the example problem.
Even MORE room for the example problem.
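For checking your work on this example, here's one way to organize the computation in sympy (a sketch; the particular chain vectors below are one valid choice among many):

```python
from sympy import Matrix, eye, zeros

A = Matrix([[0, 1, 2], [-5, -3, -7], [1, 0, 0]])

# The only eigenvalue is λ = -1, with multiplicity 3.
print(A.eigenvals())        # {-1: 3}

B = A + eye(3)              # A - λI with λ = -1
print(len(B.nullspace()))   # 1 eigenvector, so the defect is 2
print(B**3 == zeros(3, 3))  # True: chains of length 3 exist

# Build a chain: pick v3 with B^2 v3 != 0, then walk down the chain.
v3 = Matrix([1, 0, 0])      # this particular choice happens to work
v2 = B * v3                 # rank 2 generalized eigenvector
v1 = B * v2                 # a genuine eigenvector: B v1 = 0
print(v1.T, v2.T, v3.T)
```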
Notes on Homework Problems
Problem 5.4.1 is a 2 × 2 system with a repeated eigenvalue. Shouldn’t be too hard. As in section 5.2, the problem also asks you to use a computer system or graphing calculator to construct a direction field and typical solution curves for the system. A sketch is fine.
Problems 5.4.8 and 5.4.16 deal with larger systems than problem 5.4.1,
but the approach is the same. They’re larger, so they’re a bit harder, but on
the upside you don’t have to draw any direction fields!
For problem 5.4.25 you’ve got to find some chains. You should take
some powers of the coefficient matrix. You’ll find it’s nilpotent, and that
should help you a lot in generating these chains!
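To see why nilpotency helps, note that if some power of B = A − λI is the zero matrix, then every vector is a generalized eigenvector, and no chain can be longer than that power. A quick illustration on a made-up nilpotent matrix (NOT the coefficient matrix from problem 5.4.25):

```python
import numpy as np

# A hypothetical nilpotent matrix, for illustration only.
N = np.array([[0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]])

N2 = np.linalg.matrix_power(N, 2)
N3 = np.linalg.matrix_power(N, 3)
print(N2)  # not yet the zero matrix
print(N3)  # the zero matrix, so every chain has length at most 3
```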
Problem 5.4.33 investigates what you do when you’ve got a defective
complex root. Please note that there’s a typo in the textbook! The equation
v2 = (9 0 1)^T i should be v2 = (0 0 1)^T i.