DIFFERENTIAL EQUATIONS LECTURE SLIDES - Complex Eigenvalues

These slides discuss complex eigenvalues for a system x′ = Ax, where A is a constant matrix with real entries. If λ = α + iβ is a complex eigenvalue, then its complex conjugate λ̄ = α − iβ is also an eigenvalue, and a method is given to derive linearly independent real-valued solutions from the complex solutions. The slides also cover the case of repeated eigenvalues and provide examples to illustrate the concepts.

Complex Eigenvalues

Consider the system

x′ = Ax

where A is a constant n × n matrix with real entries.
Recall that the eigenvalues of A are the roots of p(λ) = det(A − λI).
p(λ) is called the characteristic polynomial of A, and p(λ) = 0 is called the characteristic equation of A.
Suppose that λ = α + iβ (α, β ∈ R, β ≠ 0) is a complex eigenvalue of A with
corresponding eigenvector K = [ k1, ..., kn ]^T ∈ C^n.

Taking complex conjugates of both sides of AK = λK
(recall: z = a + ib ⇒ z̄ = a − ib) we get A K̄ = λ̄ K̄ (since the entries of A are real).

This means λ̄ = α − iβ is also an eigenvalue of A and K̄ = [ k̄1, ..., k̄n ]^T is an
eigenvector corresponding to λ̄.
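As a quick numerical illustration (an addition, not part of the original slides), the following Python sketch uses numpy to check that the complex eigenvalues of a real matrix occur in conjugate pairs, with conjugate eigenvectors; the particular matrix A is just an example.

```python
import numpy as np

# A real 2x2 matrix with a complex pair of eigenvalues (it reappears in the example below).
A = np.array([[-3.0, 2.0],
              [-1.0, -1.0]])

eigvals, eigvecs = np.linalg.eig(A)
print(eigvals)                     # the eigenvalues appear as a conjugate pair

# If (lam, K) is an eigenpair, then so is (conj(lam), conj(K)) because A is real.
lam, K = eigvals[0], eigvecs[:, 0]
print(np.allclose(A @ np.conj(K), np.conj(lam) * np.conj(K)))  # True
```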
We have two linearly independent solutions (since λ ≠ λ̄)
x^{(1)}(t) = K e^{λt},  x^{(2)}(t) = K̄ e^{λ̄ t} (the complex conjugate of x^{(1)}(t))
of the system x′ = Ax.
To find two linearly independent real-valued solutions, we take the real and
imaginary parts u(t) and v(t) of x^{(1)}(t) = u(t) + i v(t).
Let K = a + ib where a, b ∈ R^n.
(Recall Euler's formula: e^{iθ} = cos θ + i sin θ.)

x^{(1)}(t) = K e^{λt} = (a + ib) e^{(α+iβ)t} = (a + ib) e^{αt} (cos(βt) + i sin(βt))

⇒ x^{(1)}(t) = e^{αt} (a cos(βt) − b sin(βt)) + i e^{αt} (a sin(βt) + b cos(βt)) = u(t) + i v(t)

So
u(t) = e^{αt} (a cos(βt) − b sin(βt))
and
v(t) = e^{αt} (a sin(βt) + b cos(βt))
are solutions of the system, since they are linear combinations of the two solutions x^{(1)} and x^{(2)}:

u(t) = (x^{(1)}(t) + x^{(2)}(t)) / 2,   v(t) = (x^{(1)}(t) − x^{(2)}(t)) / (2i)

We need to show that u(t) and v(t) are linearly independent. First note
that a and b are linearly independent vectors:

c1 a + c2 b = 0 ⇒ c1 (K + K̄)/2 + c2 (K − K̄)/(2i) = 0
            ⇒ ((i c1 + c2)/(2i)) K + ((i c1 − c2)/(2i)) K̄ = 0

Since K and K̄ are linearly independent we obtain i c1 + c2 = 0 and
i c1 − c2 = 0, which give c1 = c2 = 0. Hence a and b are linearly independent.

Now suppose c1 u(t) + c2 v(t) = 0:
⇒ c1 e^{αt} (a cos(βt) − b sin(βt)) + c2 e^{αt} (a sin(βt) + b cos(βt)) = 0
⇒ (c1 cos(βt) + c2 sin(βt)) a + (−c1 sin(βt) + c2 cos(βt)) b = 0
⇒ (since a and b are linearly independent)
   c1 cos(βt) + c2 sin(βt) = 0
   −c1 sin(βt) + c2 cos(βt) = 0

Multiplying the first equation by cos(βt), the second equation by −sin(βt),
and adding gives c1 = 0.
Multiplying the first equation by sin(βt), the second equation by cos(βt),
and adding gives c2 = 0.
Therefore, u(t) and v(t) are two linearly independent solutions of the
system x′ = Ax.
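Before the worked example, here is a short sketch (an addition, not from the slides) showing how u(t) and v(t) can be built numerically from one complex eigenpair of A; the helper name real_solutions and the test matrix are illustrative only.

```python
import numpy as np

def real_solutions(A, t):
    """Return u(t), v(t) built from one complex eigenpair of the real matrix A.

    Uses u(t) = e^{alpha t}(a cos(beta t) - b sin(beta t)) and
         v(t) = e^{alpha t}(a sin(beta t) + b cos(beta t)),
    where lambda = alpha + i*beta and K = a + i*b.
    """
    eigvals, eigvecs = np.linalg.eig(A)
    idx = np.argmax(eigvals.imag)            # pick the eigenvalue with positive imaginary part
    lam, K = eigvals[idx], eigvecs[:, idx]
    alpha, beta = lam.real, lam.imag
    a_vec, b_vec = K.real, K.imag
    u = np.exp(alpha * t) * (a_vec * np.cos(beta * t) - b_vec * np.sin(beta * t))
    v = np.exp(alpha * t) * (a_vec * np.sin(beta * t) + b_vec * np.cos(beta * t))
    return u, v

A = np.array([[-3.0, 2.0], [-1.0, -1.0]])    # matrix from the example below
# Check that u is (approximately) a solution: compare u'(t) with A u(t) at t = 0.3.
h, t0 = 1e-6, 0.3
u1, _ = real_solutions(A, t0 + h)
u0, _ = real_solutions(A, t0)
print(np.allclose((u1 - u0) / h, A @ u0, atol=1e-4))  # True
```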
Example

a) Find the general solution of the system

   x′ = [ −3   2 ] x
        [ −1  −1 ]

b) Find the solution of the initial value problem

   x′ = [ −3   2 ] x ,   x(0) = [  2 ]
        [ −1  −1 ]              [ −4 ]

Solution

a) A = [ −3   2 ]
       [ −1  −1 ]

p(λ) = det(A − λI) = det [ −3−λ     2   ]
                         [  −1    −1−λ  ]  = (−3−λ)(−1−λ) − 2(−1) = λ² + 4λ + 5

p(λ) = 0 ⇒ λ = (−4 ± √(16 − 20)) / 2 = (−4 ± 2i) / 2 = −2 ± i

⇒ λ1 = −2 + i,  λ2 = λ̄1 = −2 − i are complex eigenvalues.
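As a quick check (not in the original slides), the roots of λ² + 4λ + 5 and the eigenvalues of A can be computed numerically:

```python
import numpy as np

# Roots of the characteristic polynomial lambda^2 + 4*lambda + 5 found above.
print(np.roots([1, 4, 5]))            # [-2.+1.j -2.-1.j]

# The same values directly as eigenvalues of A.
A = np.array([[-3.0, 2.0], [-1.0, -1.0]])
print(np.linalg.eigvals(A))           # [-2.+1.j -2.-1.j]
```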
 = −2 + i : (A 
λ −(−2 +i)I)K
 = 0
 −1 − i 2   k1   0 
  = 
−1 1−i k2 0
 

 
(−1 − i)k1 + 2k2 = 0 (−1 − i)k1 + 2k2 = 0 

 
−k1 + (1 − i)k2 = 0  −(1 + i)k1 + (1 + i)(1 − i)k2 = 0 


(−1 − i)k1 + 2k2 = 0 
⇒ ⇒ (−1 − i)k1 + 2k2 = 0

(−1 − i)k1 + 2k2 = 0 
 
 2 
⇒ K1 =   is an eigenvector belonging to λ1 = −2 + i.
1+i
   
  2  
and K2 = K1 =   is an eigenvector belonging to λ2 = −2 − i.
1−i

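A one-line numerical check (an addition, not from the slides) that K1 really is an eigenvector of A for λ1 = −2 + i:

```python
import numpy as np

A = np.array([[-3.0, 2.0], [-1.0, -1.0]])
K1 = np.array([2.0, 1.0 + 1.0j])
lam1 = -2.0 + 1.0j

# A K1 should equal lam1 * K1.
print(np.allclose(A @ K1, lam1 * K1))  # True
```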
   
 2  (−2+i)t  2  −2t it
K1 eλ1 t =  e = e e
1+i 1+i
 
 2  −2t
=   e (cos t + i sin t)
1+i
 
 2 cos t + i2 sin t 
= e−2t  
cos t − sin t + i(cos t + sin t)
   
 2 cos t  −2t 
2 sin t 
= e−2t   + i e  
cos t − sin t cos t + sin t
| {z } | {z }
u(t) v(t)

The general solution is x(t) = c1 u(t) + c2 v(t), which is

x(t) = c1 e^{−2t} [ 2 cos t, cos t − sin t ]^T + c2 e^{−2t} [ 2 sin t, cos t + sin t ]^T

b) x(0) = [ 2, −4 ]^T  ⇒  c1 [ 2, 1 ]^T + c2 [ 0, 1 ]^T = [ 2, −4 ]^T

⇒ 2c1 = 2  and  c1 + c2 = −4  ⇒  c1 = 1, c2 = −5

The solution of the IVP is

x(t) = e^{−2t} [ 2 cos t, cos t − sin t ]^T − 5 e^{−2t} [ 2 sin t, cos t + sin t ]^T

which is x(t) = e^{−2t} [ 2 cos t − 10 sin t, −4 cos t − 6 sin t ]^T.
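As a sanity check (an addition, not part of the slides), the closed-form IVP solution can be compared with a numerical integration of x′ = Ax; the sketch below uses scipy.integrate.solve_ivp.

```python
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[-3.0, 2.0], [-1.0, -1.0]])
x0 = np.array([2.0, -4.0])

# Closed-form IVP solution derived above.
def x_exact(t):
    return np.exp(-2 * t) * np.array([2 * np.cos(t) - 10 * np.sin(t),
                                      -4 * np.cos(t) - 6 * np.sin(t)])

# Numerical solution of x' = Ax with the same initial condition.
sol = solve_ivp(lambda t, x: A @ x, (0.0, 2.0), x0, rtol=1e-9, atol=1e-12)

print(np.allclose(sol.y[:, -1], x_exact(sol.t[-1]), atol=1e-6))  # True
```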
Repeated Eigenvalues

We will consider only the double real root case.

Consider the system
x′ = Ax
where A is a constant n × n matrix with real entries. Suppose that λ is a
double real root of p(λ) = 0. (That is, λ has algebraic multiplicity 2.)

If λ0 is a root such that p(λ) = (λ − λ0)^m g(λ), where g is a polynomial
with g(λ0) ≠ 0, then we say that λ0 has algebraic multiplicity m.
If the number of linearly independent eigenvectors associated with λ0 is r,
then we say that λ0 has geometric multiplicity r.
It can be proved that 1 ≤ r ≤ m.
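A small sympy sketch (an addition, not from the slides) illustrating how algebraic and geometric multiplicities can be read off in practice; the 3 × 3 matrix is the one used in the next example.

```python
import sympy as sp

# Illustrative only: the 3x3 matrix from the next example.
A = sp.Matrix([[0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])

lam = sp.symbols('lambda')
p = (A - lam * sp.eye(3)).det()
print(sp.factor(p))                      # -(lambda - 2)*(lambda + 1)**2

# Algebraic multiplicities: exponents of the roots of p(lambda).
print(sp.roots(sp.Poly(p, lam)))         # {2: 1, -1: 2}

# Geometric multiplicity of lambda = -1: dimension of the nullspace of A - (-1)I.
print(len((A + sp.eye(3)).nullspace()))  # 2
```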

If the geometric multiplicity of λ is also 2, that is, if we can find 2 linearly
independent eigenvectors K^{(1)}, K^{(2)} corresponding to λ, then we have two
linearly independent solutions x^{(1)}(t) = K^{(1)} e^{λt} and x^{(2)}(t) = K^{(2)} e^{λt}.

 
Example
Find the general solution of the system

x′ = [ 0  1  1 ]
     [ 1  0  1 ] x
     [ 1  1  0 ]

Solution

A = [ 0  1  1 ]
    [ 1  0  1 ]
    [ 1  1  0 ]

p(λ) = det(A − λI) = det [ −λ   1    1 ]
                         [  1  −λ    1 ]
                         [  1   1   −λ ]

Expanding along the first row (the three 2 × 2 minors are λ² − 1, −λ − 1 and 1 + λ):

= (−λ)(λ² − 1) − 1(−λ − 1) + 1(1 + λ)
= λ(1 − λ²) + 1 + λ + 1 + λ = λ(1 − λ)(1 + λ) + 2(1 + λ)
= (1 + λ)(−λ² + λ + 2) = −(1 + λ)(λ² − λ − 2)
= −(1 + λ)(λ − 2)(λ + 1)

p(λ) = (2 − λ)(1 + λ)² = 0 ⇒ λ1 = 2, λ2 = λ3 = −1
λ1 = 2 :  (A − 2I)K = 0

[ −2   1   1 ] [ k1 ]   [ 0 ]
[  1  −2   1 ] [ k2 ] = [ 0 ]
[  1   1  −2 ] [ k3 ]   [ 0 ]

Row reduce the augmented matrix:

[ −2   1   1 | 0 ]
[  1  −2   1 | 0 ]
[  1   1  −2 | 0 ]

R1 ↔ R2:
[  1  −2   1 | 0 ]
[ −2   1   1 | 0 ]
[  1   1  −2 | 0 ]

2R1 + R2, −R1 + R3:
[  1  −2   1 | 0 ]
[  0  −3   3 | 0 ]
[  0   3  −3 | 0 ]

(−1/3)R2:
[  1  −2   1 | 0 ]
[  0   1  −1 | 0 ]
[  0   3  −3 | 0 ]

2R2 + R1, −3R2 + R3:
[  1   0  −1 | 0 ]
[  0   1  −1 | 0 ]
[  0   0   0 | 0 ]

⇒  k1 − k3 = 0 and k2 − k3 = 0  ⇒  k1 = k3, k2 = k3

N(A − 2I) = { [ k3, k3, k3 ]^T : k3 ∈ R }  ⇒  K1 = [ 1, 1, 1 ]^T is an eigenvector belonging to λ1 = 2.
    
 1 1 1  k1  0
    
λ = −1 : (A − (−1)I)K = 0 ⇒  1 1 1   k2 
    
=0
    
1 1 1 k3 0
   
1 1 1 0 1 1 1 0  
  −R1 +R2  
 1 1 1 0  −−−−−−→  0 0 0 0  k + k + k = 0 ⇒ k = −k − k
  −R1 +R3   1 2 3 1 2 3
   
1 1 1 0 0 0 0 0
        

 −k − 
 
 −1 −1 


 k   
         
2 3
       
N(A + I) =   k 
 k ,
2 3k ∈ R = k 
2 1 
 + k  
3 0  2 3k , k ∈ R

 
2
  
      


 
   

 k3   0 1 
   
 −1   −1 
    are 2 linearly independent eigenvectors
 
⇒ K2 =  1  and K3 =   0 

    belonging to λ2 = λ3 = −1.
0 1
     
1  −1   −1 
  2t   −t   −t
general solution is x(t) = c1    
 1  e + c2  1  e + c3  0  e
 
     
1 0 1
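A short numpy check (an addition, not from the slides) that the three vectors found above are eigenvectors for the stated eigenvalues and are linearly independent:

```python
import numpy as np

A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)

K1 = np.array([1.0, 1.0, 1.0])
K2 = np.array([-1.0, 1.0, 0.0])
K3 = np.array([-1.0, 0.0, 1.0])

# A K = lambda K for each eigenpair found above.
print(np.allclose(A @ K1, 2 * K1))    # True
print(np.allclose(A @ K2, -1 * K2))   # True
print(np.allclose(A @ K3, -1 * K3))   # True

# The three eigenvectors are linearly independent (nonzero determinant).
print(np.linalg.det(np.column_stack([K1, K2, K3])) != 0)  # True
```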
Let λ be a double real root of p(λ) = 0 with geometric multiplicity 1 and
let K be an associated eigenvector. Then we have a solution x^{(1)}(t) = K e^{λt}.
A second solution will be of the form x^{(2)}(t) = K t e^{λt} + P e^{λt}.

Note that the single term x = K t e^{λt} alone will not work:
x = K t e^{λt} ⇒ x′ = K e^{λt} + K λ t e^{λt}
x′ = Ax ⇒ K e^{λt} + K λ t e^{λt} = (AK) t e^{λt} = λ K t e^{λt} ⇒ K e^{λt} = 0 ⇒ K = 0,
a contradiction, since K is an eigenvector and so K ≠ 0.

Now try x = K t e^{λt} + P e^{λt}:
x′ = K e^{λt} + K λ t e^{λt} + P λ e^{λt}
x′ = Ax ⇐⇒ K e^{λt} + K λ t e^{λt} + P λ e^{λt} = A K t e^{λt} + A P e^{λt}
       ⇐⇒ K + t λ K + λ P = t (AK) + AP = t λ K + AP
       ⇐⇒ K + λ P = AP
       ⇐⇒ (A − λI)P = K

Therefore, we can find a second solution x^{(2)}(t) = K t e^{λt} + P e^{λt} where P is a
solution of (A − λI)P = K (it can be shown that it is always possible to
solve this equation for P, and it can also be shown that x^{(1)}(t) and x^{(2)}(t)
are linearly independent). A vector P satisfying (A − λI)P = K is called a
generalized eigenvector corresponding to the eigenvalue λ.

In general, for a system x′ = Ax with A an n × n (n ≥ 3) real matrix, A may have eigenvalues with
algebraic multiplicity m, where 3 ≤ m ≤ n, and some of these repeated eigenvalues may have geometric
multiplicity less than m. Discussing the solutions in such cases requires knowledge of the Jordan form from
linear algebra, so we will not cover them in this course.
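In practice a generalized eigenvector can be found by solving the (singular but consistent) linear system (A − λI)P = K. The sympy sketch below (an addition, not from the slides) does this for the matrix of the example that follows; the symbol names p1, p2 are just placeholders.

```python
import sympy as sp

# The 2x2 matrix of the next example; lambda = 2 is a double eigenvalue.
A = sp.Matrix([[1, -1],
               [1, 3]])
lam = 2

# An eigenvector K spanning the (one-dimensional) nullspace of A - 2I.
K = (A - lam * sp.eye(2)).nullspace()[0]
print(K)                     # Matrix([[-1], [1]])

# Solve (A - lam*I) P = K for a generalized eigenvector P.
p1, p2 = sp.symbols('p1 p2')
P = sp.Matrix([p1, p2])
sols = sp.solve(list((A - lam * sp.eye(2)) * P - K), [p1, p2], dict=True)
print(sols)                  # [{p1: 1 - p2}] : any choice of p2 works, e.g. P = [1, 0]^T
```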
 
Example
Find the general solution of the system

x′ = [ 1  −1 ] x
     [ 1   3 ]

Solution

A = [ 1  −1 ]
    [ 1   3 ]

p(λ) = det(A − λI) = det [ 1−λ   −1  ]
                         [  1   3−λ  ]  = (1 − λ)(3 − λ) − (−1)(1) = λ² − 4λ + 4 = (λ − 2)²

p(λ) = (λ − 2)² = 0 ⇒ λ1 = λ2 = 2

λ = 2 :  (A − 2I)K = 0 ⇒

[ −1  −1 ] [ k1 ]   [ 0 ]
[  1   1 ] [ k2 ] = [ 0 ]

[ −1  −1 | 0 ]   R1 + R2:   [ −1  −1 | 0 ]
[  1   1 | 0 ]   ------->   [  0   0 | 0 ]

−k1 − k2 = 0 ⇒ k1 = −k2

N(A − 2I) = { [ −k2, k2 ]^T : k2 ∈ R }  ⇒  K = [ −1, 1 ]^T is an eigenvector belonging to λ1 = λ2 = 2.

Hence we have a solution x^{(1)}(t) = K e^{λt} = [ −1, 1 ]^T e^{2t}.
    
 −1 −1   p1   −1 
(A − 2I)P = K ⇒   = 
1 1 p2 1
   
 −1 −1 −1  R1 +R2  −1 −1 −1 
  −−−−−→  
1 1 1 0 0 0
 
−p1 − p2 = −1 ⇒ p1 = 1 − p2
     
 1−α   1   −1 
⇒P= =  + α  is a solution of (A − 2I)P = K for any
α 0 1
 
 1 
constant α. So we can choose P =   and obtain another solution for x ′ = Ax
0
   
 −1   1 
as x(2) (t) = Kteλt + Peλt =   te2t +   e2t
1 0
general solution is x(t) = c1 x(1) (t) + c2 x(2) (t)
      
 −1  2t  −1  2t  1  2t 
which is x(t) = c1   e + c2   te +  e 
1 1 0

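Finally, a small numerical check (an addition, not from the slides) that x^{(2)}(t) = K t e^{2t} + P e^{2t} really solves x′ = Ax, using a central-difference approximation of the derivative.

```python
import numpy as np

A = np.array([[1.0, -1.0],
              [1.0, 3.0]])
K = np.array([-1.0, 1.0])   # eigenvector for lambda = 2
P = np.array([1.0, 0.0])    # generalized eigenvector: (A - 2I)P = K

def x2(t):
    """Second solution x2(t) = K t e^{2t} + P e^{2t}."""
    return (K * t + P) * np.exp(2 * t)

# Verify (A - 2I)P = K and compare x2'(t) with A x2(t) at a sample time.
print(np.allclose((A - 2 * np.eye(2)) @ P, K))   # True
t0, h = 0.4, 1e-6
deriv = (x2(t0 + h) - x2(t0 - h)) / (2 * h)      # central difference
print(np.allclose(deriv, A @ x2(t0), atol=1e-5)) # True
```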
