
MATH 310 “EIGENSTUFF”: A STEP-BY-STEP GUIDE

If A is a square (n × n) matrix, we often want to find its eigenvalues and eigenvectors, determine whether or not it is diagonalizable, and if so, diagonalize it (write it as A = PDP^{-1} where all three matrices are n × n, P is invertible, and D is diagonal). Here is a brief guide to the steps in this process, which may serve as a memory aid; see the lecture notes for numerous examples.
• Begin by computing det(A − λI), where λ is a variable and I is the n × n identity matrix. A − λI is
just A with λs subtracted from its diagonal entries. You should use cofactor expansion to calculate
this determinant (other methods, such as row reduction to triangular form, are dangerous in the
presence of a variable in the matrix).
• (If the matrix A is already triangular, its eigenvalues are just its diagonal entries, and the calculation
of this determinant can be skipped. If it is not triangular, you are not allowed to row-reduce it with
the goal of making it triangular. Row reduction usually changes the eigenvalues of a matrix!)
• After simplifying, det(A − λI) will be a degree-n polynomial in λ (the leading term will be either λ^n or −λ^n), known as the characteristic polynomial of A. Find the roots (zeros) of this polynomial, that is, the values of λ which, when plugged in, make the polynomial zero. If A is 2 × 2, you can use simple factoring or the quadratic formula for this. If A is 3 × 3 or larger, this is more challenging and you will likely want to use technology (such as the solve command in WolframAlpha, or the sympy sketch after this list) to find the roots.
• The roots of the characteristic polynomial are the eigenvalues of A. In general, it is possible for
some or even all of these roots to be non-real complex numbers, but we will not consider this case
in Math 310. Some of the roots may be repeated; if this happens, you must make a note of it, since
it will be important later. The (algebraic) multiplicity of an eigenvalue is the number of times it is
repeated as a root.
• If A has n distinct (different) eigenvalues, then A is automatically diagonalizable. If A has repeated
eigenvalues, it may or may not be diagonalizable: more analysis is required.
• Remember that λ = 0 can be an eigenvalue (this just means that A is not invertible). This is in contrast to the fact that the zero vector cannot, by definition, be an eigenvector.
• For each eigenvalue λ in turn, find the corresponding eigenvectors. These are the nonzero solutions x of the equation Ax = λx, and an equivalent form of this equation is (A − λI)x = 0. That is, the eigenvectors of A with eigenvalue λ are the nonzero vectors in the null space null(A − λI). Since at this point you are plugging in specific known values of λ, the matrices A − λI are now just square matrices containing numbers only, and you can find their null spaces by row reduction as we have done before (identify the free variables and solve for the pivot variables in terms of them). There will always be at least one free variable unless a mistake was made earlier.
• For each eigenvalue, the number of free variables you get (which is the maximum number of linearly
independent eigenvectors corresponding to that eigenvalue) is at least 1 and at most equal to its
multiplicity (number of times it is repeated as a root). This means that non-repeated eigenvalues,
which have multiplicity 1, always yield exactly one free variable, while for repeated eigenvalues,
there are multiple possibilities. If a matrix has repeated eigenvalues, it is only diagonalizable if
every repeated eigenvalue gives the maximum possible number of free variables!
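
To make the steps above concrete, here is a minimal sketch in Python using the sympy library. The 3 × 3 matrix A is a made-up example chosen only for illustration (it is not taken from the lecture notes), and sympy is used in place of WolframAlpha's solve command; the hand methods described above are what you are expected to know.

    import sympy as sp

    # A made-up 3 x 3 example (an assumption for illustration only).
    A = sp.Matrix([[4, 0, 1],
                   [2, 3, 2],
                   [1, 0, 4]])
    lam = sp.symbols('lambda')
    n = A.shape[0]

    # Step 1: the characteristic polynomial det(A - lambda*I).
    char_poly = (A - lam * sp.eye(n)).det()
    print(sp.factor(char_poly))        # -(lambda - 3)**2 * (lambda - 5), up to ordering

    # Step 2: its roots are the eigenvalues; sp.roots reports each root
    # together with its (algebraic) multiplicity.
    eigenvalues = sp.roots(sp.Poly(char_poly, lam))
    print(eigenvalues)                 # e.g. {3: 2, 5: 1}

    # Step 3: for each eigenvalue, the eigenvectors are the nonzero vectors
    # in null(A - lambda*I); nullspace() returns one basis vector per free variable.
    for ev, multiplicity in eigenvalues.items():
        basis = (A - ev * sp.eye(n)).nullspace()
        print(ev, multiplicity, len(basis), basis)

    # A is diagonalizable exactly when, for every eigenvalue, the number of
    # free variables (basis vectors found) equals that eigenvalue's multiplicity.
    diagonalizable = all(len((A - ev * sp.eye(n)).nullspace()) == m
                         for ev, m in eigenvalues.items())
    print(diagonalizable)              # True for this particular example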


• If you want to find a diagonalization of A, you need a basis for each of the eigenspaces. Non-
repeated eigenvalues have one-dimensional eigenspaces, so you can pick any single eigenvector you
want for each such eigenvalue. For repeated eigenvalues, write the eigenspace (null space of A−λI)
in parametric vector form, and use the vectors appearing (after the free variables have been factored
out) as basis vectors for the eigenspace, just like you did when finding bases for null spaces earlier
in the course.
• You can replace vectors in a basis (for any subspace, in particular for an eigenspace) by nonzero
scalar multiples of themselves and you will still have a basis. Therefore, you can “clear fractions” in
the entries of eigenvectors if you wish (this corresponds to making different choices for the values
of the free variable(s)).
• If A is diagonalizable (which happens if A has n distinct eigenvalues or each of its repeated eigenvalues provides the maximum possible number of free variables), we can find P and D in a diagonalization A = PDP^{-1} as follows: D's diagonal entries are the eigenvalues of A (its other entries are all zeros), and P's columns are linearly independent eigenvectors of A, appearing in the same order as their corresponding eigenvalues (see the sketch after this list).
• Once you have P and D, P^{-1} simply needs to be computed using the matrix inversion methods from earlier in the course (which requires row reduction unless P is 2 × 2, in which case there's a familiar shortcut).
• You can choose any order for the diagonal entries of D, but your choice forces the columns of P to be ordered in the corresponding way. Similarly, you can choose any linearly independent eigenvectors you want as columns of P; changing P will change P^{-1}. There are therefore many possible correct diagonalizations A = PDP^{-1} of a diagonalizable matrix A.
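
As a continuation of the sympy sketch above, here is a hedged illustration of these last steps: it assembles D and P from the eigenvalue/eigenvector pairs, computes P^{-1}, and checks that PDP^{-1} really equals A. The matrix A is the same made-up example as before.

    import sympy as sp

    # Same made-up example as in the earlier sketch.
    A = sp.Matrix([[4, 0, 1],
                   [2, 3, 2],
                   [1, 0, 4]])
    lam = sp.symbols('lambda')
    n = A.shape[0]

    # Pair each eigenvalue with a basis of its eigenspace. Whatever order the
    # eigenvalues come out in, D and P below are built in that same order.
    pairs = []
    for ev, multiplicity in sp.roots(sp.Poly((A - lam * sp.eye(n)).det(), lam)).items():
        for v in (A - ev * sp.eye(n)).nullspace():
            pairs.append((ev, v))

    # D: eigenvalues on the diagonal; P: the matching eigenvectors as columns.
    D = sp.diag(*[ev for ev, _ in pairs])
    P = sp.Matrix.hstack(*[v for _, v in pairs])

    # P is invertible when A is diagonalizable; .inv() uses row reduction internally.
    P_inv = P.inv()

    # Verify the diagonalization A = P D P^{-1}.
    print(P * D * P_inv == A)          # True
    print(D)
    print(P)

(sympy also provides A.diagonalize(), which returns such a P and D in one call; the longer version above is meant to mirror the hand computation described in this guide.)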
