Interactive Linear Algebra

Dan Margalit
Georgia Institute of Technology

Joseph Rabinoff
Georgia Institute of Technology

November 16, 2018


© 2017 Georgia Institute of Technology
Permission is granted to copy, distribute and/or modify this document under the
terms of the GNU Free Documentation License, Version 1.2 or any later version
published by the Free Software Foundation; with no Invariant Sections, no Front-
Cover Texts, and no Back-Cover Texts. A copy of the license is included in the
appendix entitled “GNU Free Documentation License.” All trademarks™ are the
registered® marks of their respective owners.
Contributors to this textbook

DAN MARGALIT
School of Mathematics
Georgia Institute of Technology
dmargalit7@math.gatech.edu

LARRY ROLEN
School of Mathematics
Georgia Institute of Technology
larry.rolen@math.gatech.edu

JOSEPH RABINOFF
School of Mathematics
Georgia Institute of Technology
rabinoff@math.gatech.edu

Joseph Rabinoff contributed all of the figures, the demos, and the technical
aspects of the project, as detailed below.

• The textbook is written in XML and compiled using a variant of Robert
Beezer’s MathBook XML, as heavily modified by Rabinoff.

• The mathematical content of the textbook is written in LaTeX, then converted
to HTML-friendly SVG format using a collection of scripts called PreTeX: this
was coded by Rabinoff and depends heavily on Inkscape for pdf decoding
and FontForge for font embedding. The figures are written in PGF/TikZ and
processed with PreTeX as well.

• The demonstrations are written in JavaScript+WebGL using Steven Wittens’
brilliant framework called MathBox.

All source code can be found on GitHub. It may be freely copied, modified, and
redistributed, as detailed in the appendix entitled “GNU Free Documentation Li-
cense.”
Larry Rolen wrote many of the exercises.

Variants of this textbook

There are several variants of this textbook available.

• The master version is the default version of the book.

• The version for Math 1553 is fine-tuned to contain only the material covered
in Math 1553 at Georgia Tech.

The section numbering is consistent across versions. This explains why Section
6.3 does not exist in the Math 1553 version, for example.
You are currently viewing the master version.

Contents

Contributors to this textbook v

Variants of this textbook vii

1 Overview 1

2 Systems of Linear Equations: Algebra 9


2.1 Systems of Linear Equations . . . . . . . . . . . . . . . . . . . . . . . . 9
2.2 Row Reduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.3 Parametric Form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30

3 Systems of Linear Equations: Geometry 35


3.1 Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3.2 Vector Equations and Spans . . . . . . . . . . . . . . . . . . . . . . . . . 43
3.3 Matrix Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
3.4 Solution Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
3.5 Linear Independence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
3.6 Subspaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
3.7 Basis and Dimension . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
3.8 Bases as Coordinate Systems . . . . . . . . . . . . . . . . . . . . . . . . 102
3.9 The Rank Theorem and the Basis Theorem . . . . . . . . . . . . . . . 108

4 Linear Transformations and Matrix Algebra 113


4.1 Matrix Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
4.2 One-to-one and Onto Transformations . . . . . . . . . . . . . . . . . . 129
4.3 Linear Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
4.4 Matrix Multiplication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
4.5 Matrix Inverses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167

5 Determinants 185
5.1 Determinants: Definition . . . . . . . . . . . . . . . . . . . . . . . . . . 185
5.2 Cofactor Expansions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
5.3 Determinants and Volumes . . . . . . . . . . . . . . . . . . . . . . . . . 220


6 Eigenvalues and Eigenvectors 235


6.1 Eigenvalues and Eigenvectors . . . . . . . . . . . . . . . . . . . . . . . . 236
6.2 The Characteristic Polynomial . . . . . . . . . . . . . . . . . . . . . . . 250
6.3 Similarity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
6.4 Diagonalization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
6.5 Complex Eigenvalues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 298
6.6 Stochastic Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320

7 Orthogonality 337
7.1 Dot Products and Orthogonality . . . . . . . . . . . . . . . . . . . . . . 338
7.2 Orthogonal Complements . . . . . . . . . . . . . . . . . . . . . . . . . . 345
7.3 Orthogonal Projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353
7.4 Orthogonal Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368
7.5 The Method of Least Squares . . . . . . . . . . . . . . . . . . . . . . . . 380

A Complex Numbers 403

B Notation 407

C Hints and Solutions to Selected Exercises 409

D GNU Free Documentation License 411

Index 421
Chapter 1

Overview

The Subject of This Textbook Before starting with the content of the text, we
first ask the basic question: what is linear algebra?
• Linear: having to do with lines, planes, etc.

• Algebra: solving equations involving unknowns.


The name of the textbook highlights an important theme: the synthesis between
algebra and geometry. It will be very important to us to understand systems of
linear equations both algebraically (writing equations for their solutions) and geo-
metrically (drawing pictures and visualizing).
Remark. The term “algebra” was coined by the 9th century mathematician Abu
Ja’far Muhammad ibn Musa al-Khwarizmi. It comes from the Arabic word al-jebr,
meaning reunion of broken parts.
At the simplest level, solving a system of linear equations is not very hard. You
probably learned in high school how to solve a system like
 x + 3y −  z =  4
2x −  y + 3z = 17
      y − 4z = −3.
However, in real life one usually has to be more clever.
• Engineers need to solve many, many equations in many, many variables.
Here is a tiny example:
    3x1 + 4x2 + 10x3 + 19x4 −  2x5 −  3x6 =  141
    7x1 + 2x2 − 13x3 −  7x4 + 21x5 +  8x6 = 2567
    −x1 + 9x2 + 3/2 x3 +   x4 + 14x5 + 27x6 =   26
 1/2 x1 + 4x2 + 10x3 + 11x4 +  2x5 +   x6 =  −15.

• Often it is enough to know some information about the set of solutions,
without having to solve the equations in the first place. For instance, does
there exist a solution? What does the solution set look like geometrically?
Is there still a solution if we change the 26 to a 27?


• Sometimes the coefficients also contain parameters, like the eigenvalue equa-
tion
(7 − λ)x +        y +         3z = 0
    −3x + (2 − λ)y −          3z = 0
    −3x −       2y + (−1 − λ)z = 0.

• In data modeling, a system of equations generally does not actually have a
solution. In that case, what is the best approximate solution?

Accordingly, this text is organized into three main sections.

1. Solve the matrix equation Ax = b (chapters 2–4).

• Solve systems of linear equations using matrices, row reduction, and inverses.
• Analyze systems of linear equations geometrically using the geometry of
solution sets and linear transformations.

2. Solve the matrix equation Ax = λx (chapters 5–6).

• Solve eigenvalue problems using the characteristic polynomial.
• Understand the geometry of matrices using similarity, eigenvalues, diagonalization,
and complex numbers.

3. Approximately solve the matrix equation Ax = b (chapter 7).

• Find best-fit solutions to systems of linear equations that have no actual
solution using least-squares approximations.
• Study the geometry of closest vectors and orthogonal projections.

This text is roughly half computational and half conceptual in nature. The
main goal is to present a library of linear algebra tools, and more importantly, to
teach a conceptual framework for understanding which tools should be applied in
a given context.

If Matlab can find the answer faster than you can, then your question is just
an algorithm: this is not real problem solving.

The subtle part of the subject lies in understanding what computation to ask
the computer to do for you—it is far less important to know how to perform com-
putations that a computer can do better than you anyway.

Uses of Linear Algebra in Engineering The vast majority of undergraduates at
Georgia Tech have to take a course in linear algebra. There is a reason for this:

Most engineering problems, no matter how complicated, can be reduced to
linear algebra:

Ax = b or Ax = λx or Ax ≈ b.

Here we present some sample problems in science and engineering that require
linear algebra to solve.

Example (Civil Engineering). The following diagram represents traffic flow around
the town square. The streets are all one way, and the numbers and arrows indi-
cate the number of cars per hour flowing along each street, as measured by sensors
underneath the roads.

[Figure: traffic flow (cars/hr) around the town square. The unknown flows x, y, z, w are on the four streets of the square; the measured flows on the surrounding streets are 250, 120, 120, 70, 175, 530, 115, 390.]

There are no sensors underneath some of the streets, so we do not know how
much traffic is flowing around the square itself. What are the values of x, y, z, w?
Since the number of cars entering each intersection has to equal the number of
cars leaving that intersection, we obtain a system of linear equations:

w + 120 = x + 250
x + 120 = y + 70
y + 530 = z + 390
z + 115 = w + 175.
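Although the text itself does not use software, the following short Python sketch (using SymPy, which is an assumption of this aside and not something the text requires) row reduces the augmented matrix of the traffic-flow system and shows that one of the unknowns is left free:

from sympy import Matrix

# Each row is one intersection equation, rewritten in the unknowns (x, y, z, w).
A = Matrix([
    [-1,  0,  0,  1,  130],   # w + 120 = x + 250
    [ 1, -1,  0,  0,  -50],   # x + 120 = y + 70
    [ 0,  1, -1,  0, -140],   # y + 530 = z + 390
    [ 0,  0,  1, -1,   60],   # z + 115 = w + 175
])

rref, pivots = A.rref()
print(rref)    # the last row is all zeros, so there is one free variable (w)
print(pivots)  # (0, 1, 2): pivots in the x, y, and z columns only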

Example (Chemical Engineering). A certain chemical reaction (burning) takes
ethane and oxygen, and produces carbon dioxide and water:

x C2H6 + y O2 → z CO2 + w H2O

What ratio of the molecules is needed to sustain the reaction? The following
three equations come from the fact that the number of atoms of carbon, hydro-
gen, and oxygen on the left side has to equal the number of atoms on the right,
respectively:

2x = z
6x = 2w
2 y = 2z + w.
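Since these balancing equations are homogeneous, the ratios live in the null space of the coefficient matrix. Here is a minimal Python/SymPy sketch of that computation (SymPy, and the factor 6 chosen by hand to clear denominators, are assumptions of this aside, not part of the text):

from sympy import Matrix

# Columns correspond to (x, y, z, w); rows to carbon, hydrogen, and oxygen atoms.
A = Matrix([
    [2, 0, -1,  0],   # 2x = z
    [6, 0,  0, -2],   # 6x = 2w
    [0, 2, -2, -1],   # 2y = 2z + w
])

basis = A.nullspace()[0]   # the null space is one-dimensional
print(6 * basis)           # (2, 7, 4, 6): 2 C2H6 + 7 O2 -> 4 CO2 + 6 H2O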

Example (Biology). In a population of rabbits,

1. half of the newborn rabbits survive their first year;

2. of those, half survive their second year;

3. the maximum life span is three years;

4. rabbits produce 0, 6, 8 baby rabbits in their first, second, and third years,
respectively.

If you know the rabbit population in 2016 (in terms of the number of first, sec-
ond, and third year rabbits), then what is the population in 2017? The rules for
reproduction lead to the following system of equations, where x, y, z represent the
number of newborn, first-year, and second-year rabbits, respectively:

6y2016 + 8z2016 = x2017
     (1/2)x2016 = y2017
     (1/2)y2016 = z2017.

A common question is: what is the asymptotic behavior of this system? What will
the rabbit population look like in 100 years? This turns out to be an eigenvalue
problem.
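To see this behavior numerically, one can iterate the rules above as a matrix recurrence. The sketch below uses Python with NumPy and an arbitrary starting population; neither is prescribed by the text.

import numpy as np

# Transition matrix acting on (newborn, first-year, second-year) counts.
A = np.array([[0.0, 6.0, 8.0],    # newborns produced by first- and second-year rabbits
              [0.5, 0.0, 0.0],    # half of the newborns survive their first year
              [0.0, 0.5, 0.0]])   # half of those survive their second year

pop = np.array([10.0, 10.0, 10.0])   # an arbitrary 2016 population
for year in range(10):
    pop = A @ pop                    # advance one year

print(pop / pop.sum())   # the proportions settle down: an eigenvector phenomenon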

Use this link to view the online demo

Left: the population of rabbits in a given year. Right: the proportions of rabbits in
that year. Choose any values you like for the starting population, and click “Advance
1 year” several times. What do you notice about the long-term behavior of the ratios?
This phenomenon turns out to be due to eigenvectors.

Example (Astronomy). An asteroid has been observed at the following locations:

(0, 2), (2, 1), (1, −1), (−1, −2), (−3, 1), (−1, −1).

Its orbit around the sun is elliptical; it is described by an equation of the form

x² + By² + Cxy + Dx + Ey + F = 0.

What is the most likely orbit of the asteroid, given that there was some signifi-
cant error in measuring its position? Substituting the data points into the above
equation yields the system

  (0)² + B(2)²  + C(0)(2)   + D(0)  + E(2)  + F = 0
  (2)² + B(1)²  + C(2)(1)   + D(2)  + E(1)  + F = 0
  (1)² + B(−1)² + C(1)(−1)  + D(1)  + E(−1) + F = 0
 (−1)² + B(−2)² + C(−1)(−2) + D(−1) + E(−2) + F = 0
 (−3)² + B(1)²  + C(−3)(1)  + D(−3) + E(1)  + F = 0
 (−1)² + B(−1)² + C(−1)(−1) + D(−1) + E(−1) + F = 0.

There is no actual solution to this system due to measurement error, but here is
the best-fitting ellipse:

[Figure: the six data points and the best-fitting ellipse.]

266x² + 405y² − 178xy + 402x − 123y − 1374 = 0
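The kind of computation involved can be sketched with Python and NumPy (tools assumed for this aside only): each data point contributes one row of a least-squares problem for the unknown coefficients (B, C, D, E, F).

import numpy as np

pts = [(0, 2), (2, 1), (1, -1), (-1, -2), (-3, 1), (-1, -1)]

# Each point gives one equation  B*y^2 + C*x*y + D*x + E*y + F = -x^2.
M = np.array([[y**2, x*y, x, y, 1.0] for x, y in pts])
b = np.array([-float(x)**2 for x, y in pts])

coeffs, *_ = np.linalg.lstsq(M, b, rcond=None)
print(coeffs)   # best-fit (B, C, D, E, F); compare with the ellipse above after rescaling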

Example (Computer Science). Each web page has some measure of importance,
which it shares via outgoing links to other pages. This leads to zillions of equations
in zillions of variables. Larry Page and Sergey Brin realized that this is a linear
algebra problem at its core, and used the insight to found Google. We will discuss
this example in detail in Section 6.6.

How to Use This Textbook There are a number of different categories of ideas
that are contained in most sections. They are listed at the top of the section, under
Objectives, for easy review. We classify them as follows.

• Recipes: these are algorithms that are generally straightforward (if some-
times tedious), and are usually done by computer in real life. They are
nonetheless important to learn and to practice.

• Vocabulary words: forming a conceptual understanding of the subject of linear
algebra means being able to communicate much more precisely than in
ordinary speech. The vocabulary words have precise definitions, which must
be learned and used correctly.

• Essential vocabulary words: these vocabulary words are essential in that they
form the essence of the subject of linear algebra. For instance, if you do not
know the definition of an eigenvector, then by definition you cannot claim
to understand linear algebra.

• Theorems: these describe in a precise way how the objects of interest relate
to each other. Knowing which recipe to use in a given situation generally
means recognizing which vocabulary words to use to describe the situation,
and understanding which theorems apply to that problem.

• Pictures: visualizing the geometry underlying the algebra means interpreting
and drawing pictures of the objects involved. The pictures are meant to be
a core part of the material in the text: they are not just a pretty add-on.

This textbook is exclusively targeted at Math 1553 at Georgia Tech. As such,
it contains exactly the material that is taught in that class; no more, and no less:
students in Math 1553 are responsible for understanding all visible content. In the
online version some extra material (most examples and proofs, for instance) is
hidden, in that one needs to click on a link to reveal it, like this:

Hidden Content. Hidden content is meant to enrich your understanding of the
topic, but is not an official part of Math 1553. That said, the text will be very
hard to follow without understanding the examples, and studying the proofs is an
excellent way to learn the conceptual part of the material. (Not applicable to the
PDF version.)

Finally, we remark that there are over 140 interactive demos contained in the
text, which were created to illustrate the geometry of the topic. Click the “view in
a new window” link, and play around with them! You will need a modern browser.
Internet Explorer is not a modern browser; try Safari, Chrome, or Firefox. Here is
a demo from Section 7.5:

Use this link to view the online demo

Click and drag the points on the grid on the right.



Feedback Every page of the online version has a link on the bottom for providing
feedback. This will take you to the GitHub Issues page for this book. It requires a
Georgia Tech login to access.
Chapter 2

Systems of Linear Equations: Algebra

Primary Goal. Solve a system of linear equations algebraically in parametric form.

This chapter is devoted to the algebraic study of systems of linear equations
and their solutions. We will learn a systematic way of solving equations of the
form
    3x1 + 4x2 + 10x3 + 19x4 −  2x5 −  3x6 =  141
    7x1 + 2x2 − 13x3 −  7x4 + 21x5 +  8x6 = 2567
    −x1 + 9x2 + 3/2 x3 +   x4 + 14x5 + 27x6 =   26
 1/2 x1 + 4x2 + 10x3 + 11x4 +  2x5 +   x6 =  −15.

In Section 2.1, we will introduce systems of linear equations, the class of equa-
tions whose study forms the subject of linear algebra. In Section 2.2, we will present
a procedure, called row reduction, for finding all solutions of a system of linear
equations. In Section 2.3, you will see how to express all solutions of a system
of linear equations in a unique way using the parametric form of the general solu-
tion.

2.1 Systems of Linear Equations

Objectives

1. Understand the definition of Rn , and what it means to use Rn to label points
on a geometric object.

2. Pictures: solutions of systems of linear equations, parameterized solution sets.

3. Vocabulary words: consistent, inconsistent, solution set.


During the first half of this textbook, we will be primarily concerned with un-
derstanding the solutions of systems of linear equations.
Definition. An equation in the unknowns x, y, z, . . . is called linear if both sides
of the equation are a sum of (constant) multiples of x, y, z, . . ., plus an optional
constant.
For instance,
3x + 4 y = 2z
−x − z = 100
are linear equations, but
3x + yz = 3
sin(x) − cos( y) = 2
are not.
We will usually move the unknowns to the left side of the equation, and move
the constants to the right.
A system of linear equations is a collection of several linear equations, like
 x + 2y + 3z =  6
2x − 3y + 2z = 14          (2.1.1)
3x +  y −  z = −2.

Definition (Solution sets).


• A solution of a system of equations is a list of numbers x, y, z, . . . that make
all of the equations true simultaneously.
• The solution set of a system of equations is the collection of all solutions.
• Solving the system means finding all solutions with formulas involving some
number of parameters.

A system of linear equations need not have a solution. For example, there do
not exist numbers x and y making the following two equations true simultane-
ously:
x + 2y =  3
x + 2y = −3.
In this case, the solution set is empty. As this is a rather important property of a
system of equations, it has its own name.
Definition. A system of equations is called inconsistent if it has no solutions. It
is called consistent otherwise.
A solution of a system of equations in n variables is a list of n numbers. For
example, (x, y, z) = (1, −2, 3) is a solution of (2.1.1). As we will be studying
solutions of systems of equations throughout this text, now is a good time to fix
our notions regarding lists of numbers.

2.1.1 Line, Plane, Space, Etc.


We use R to denote the set of all real numbers, i.e., the number line. This contains
numbers like 0, 3/2, −π, 104, . . .

Definition. Let n be a positive whole number. We define

Rn = all ordered n-tuples of real numbers (x 1 , x 2 , x 3 , . . . , x n ).

An n-tuple of real numbers is called a point of Rn .

In other words, Rn is just the set of all (ordered) lists of n real numbers. We
will draw pictures of Rn in a moment, but keep in mind that this is the definition.
For example, (0, 3/2, −π) and (1, −2, 3) are points of R3 .

Example (The number line). When n = 1, we just get R back: R1 = R. Geometrically,
this is the number line.

[Figure: the number line, with −3, −2, −1, 0, 1, 2, 3 marked.]

Example (The Euclidean plane). When n = 2, we can think of R2 as the xy-plane.
We can do so because every point on the plane can be represented by an ordered
pair of real numbers, namely, its x- and y-coordinates.

[Figure: the points (1, 2) and (0, −3) in the xy-plane.]

Example (3-Space). When n = 3, we can think of R3 as the space we (appear
to) live in. We can do so because every point in space can be represented by an
ordered triple of real numbers, namely, its x-, y-, and z-coordinates.

[Figure: the points (−2, 2, 2) and (1, −1, 3) in 3-space.]

Interactive: Points in 3-Space.

Use this link to view the online demo

A point in 3-space, and its coordinates. Click and drag the point, or move the sliders.

So what is R4 ? or R5 ? or Rn ? These are harder to visualize, so you have to
go back to the definition: Rn is the set of all ordered n-tuples of real numbers
(x 1 , x 2 , x 3 , . . . , x n ).
They are still “geometric” spaces, in the sense that our intuition for R2 and R3
often extends to Rn .
We will make definitions and state theorems that apply to any Rn , but we will
only draw pictures for R2 and R3 .
The power of using these spaces is the ability to label various objects of interest,
such as geometric objects and solutions of systems of equations, by the points of
Rn .

Example (Color Space). All colors you can see can be described by three quanti-
ties: the amount of red, green, and blue light in that color. (Humans are trichro-
matic.) Therefore, we can use the points of R3 to label all colors: for instance, the
point (.2, .4, .9) labels the color with 20% red, 40% green, and 90% blue intensity.

[Figure: the color cube in R3, with red, green, and blue axes.]

Example (Traffic Flow). In Chapter 1, we could have used R4 to label the amount
of traffic (x, y, z, w) passing through four streets. In other words, if there are
10, 5, 3, 11 cars per hour passing through roads x, y, z, w, respectively, then this
can be recorded by the point (10, 5, 3, 11) in R4 . This is useful from a psychologi-
cal standpoint: instead of having four numbers, we are now dealing with just one
piece of data.

[Figure: the traffic-flow diagram from Chapter 1, with unknown flows x, y, z, w.]

Example (QR Codes). A QR code is a method of storing data in a grid of black


and white squares in a way that computers can easily read. A typical QR code
is a 29 × 29 grid. Reading each line left-to-right and reading the lines top-to-
bottom (like you read a book) we can think of such a QR code as a sequence of
29 × 29 = 841 digits, each digit being 1 (for white) or 0 (for black). In such a
way, the entire QR code can be regarded as a point in R841 . As in the previous
example, it is very useful from a psychological perspective to view a QR code as a
single piece of data in this way.

The QR code for this textbook is a 29 × 29 array of black/white squares.

In the above examples, it was useful from a psychological perspective to replace
a list of four numbers (representing traffic flow) or of 841 numbers (representing a
QR code) by a single piece of data: a point in some Rn . This is a powerful concept;
starting in Section 3.2, we will almost exclusively record solutions of systems of
linear equations in this way.

2.1.2 Pictures of Solution Sets


Before discussing how to solve a system of linear equations below, it is helpful to
see some pictures of what these solution sets look like geometrically.

One Equation in Two Variables. Consider the linear equation x + y = 1. We can
rewrite this as y = 1 − x, which defines a line in the plane: the slope is −1, and
the x-intercept is 1.

Definition (Lines). For our purposes, a line is a ray that is straight and infinite in
both directions.

One Equation in Three Variables. Consider the linear equation x + y + z = 1.
This is the implicit equation for a plane in space.

Definition (Planes). A plane is a flat sheet that is infinite in all directions.

Remark. The equation x + y + z + w = 1 defines a “3-plane” in 4-space, and
more generally, a single linear equation in n variables defines an “(n − 1)-plane”
in n-space. We will make these statements precise in Section 3.7.

Two Equations in Two Variables. Now consider the system of two linear equa-
tions
 x − 3y = −3
2x +  y =  8.

Each equation individually defines a line in the plane, pictured below.



A solution to the system of both equations is a pair of numbers (x, y) that makes
both equations true at once. In other words, it as a point that lies on both lines
simultaneously. We can see in the picture above that there is only one point where
the lines intersect: therefore, this system has exactly one solution. (This solution
is (3, 2), as the reader can verify.)
Usually, two lines in the plane will intersect in one point, but of course this is
not always the case. Consider now the system of equations
x − 3y = −3
x − 3y =  3.
These define parallel lines in the plane.

The fact that the lines do not intersect means that the system of equations
has no solution. Of course, this is easy to see algebraically: if x − 3y = −3, then
it cannot also be the case that x − 3y = 3.
There is one more possibility. Consider the system of equations

 x − 3y = −3
2x − 6y = −6.

The second equation is a multiple of the first, so these equations define the same
line in the plane.

In this case, there are infinitely many solutions of the system of equations.
Two Equations in Three Variables. Consider the system of two linear equations
x + y + z = 1
    x − z = 0.
Each equation individually defines a plane in space. The solutions of the system
of both equations are the points that lie on both planes. We can see in the picture
below that the planes intersect in a line. In particular, this system has infinitely
many solutions.

Use this link to view the online demo

The planes defined by the equations x + y + z = 1 and x − z = 0 intersect in the red


line, which is the solution set of the system of both equations.

Remark. In general, the solution set of a system of equations in n variables is the
intersection of “(n − 1)-planes” in n-space. This is always some kind of linear
space, as we will discuss in Section 3.4.

2.1.3 Parametric Description of Solution Sets


According to this definition, solving a system of equations means writing down all
solutions in terms of some number of parameters. We will give a systematic way of
doing so in Section 2.3; for now we give parametric descriptions in the examples
of the previous subsection.

Lines. Consider the linear equation x + y = 1 of this example. In this context,
we call x + y = 1 an implicit equation of the line. We can write the same line in
parametric form as follows:

(x, y) = (t, 1 − t) for any t ∈ R.

This means that every point on the line has the form (t, 1 − t) for some real number
t. In this case, we call t a parameter, as it parameterizes the points on the line.

[Figure: the line x + y = 1, with the points at t = −1, t = 0, and t = 1 marked.]

Now consider the system of two linear equations

x + y + z = 1
    x − z = 0

of this example. These collectively form the implicit equations for a line in R3 .
(At least two equations are needed to define a line in space.) This line also has a
parametric form with one parameter t:

(x, y, z) = (t, 1 − 2t, t).

Use this link to view the online demo

The planes defined by the equations x + y + z = 1 and x − z = 0 intersect in the


yellow line, which is parameterized by (x, y, z) = (t, 1 − 2t, t). Move the slider to
change the parameterized point.

Note that in each case, the parameter t allows us to use R to label the points
on the line. However, neither line is the same as the number line R: indeed, every
point on the first line has two coordinates, like the point (0, 1), and every point
on the second line has three coordinates, like (0, 1, 0).

Planes. Consider the linear equation x + y + z = 1 of this example. This is an
implicit equation of a plane in space. This plane has an equation in parametric
form: we can write every point on the plane as

(x, y, z) = (1 − t − w, t, w) for any t, w ∈ R.

In this case, we need two parameters t and w to describe all points on the plane.

Use this link to view the online demo

The plane in R3 defined by the equation x + y + z = 1. This plane is parameterized


by two numbers t, w; move the sliders to change the parameterized point.

Note that the parameters t, w allow us to use R2 to label the points on the plane.
However, this plane is not the same as the plane R2 : indeed, every point on this
plane has three coordinates, like the point (0, 0, 1).

When there is a unique solution, as in this example, it is not necessary to use
parameters to describe the solution set.

2.2 Row Reduction

Objectives

1. Learn to replace a system of linear equations by an augmented matrix.

2. Learn how the elimination method corresponds to performing row operations
on an augmented matrix.

3. Understand when a matrix is in (reduced) row echelon form.

4. Learn which row reduced matrices come from inconsistent linear systems.

5. Recipe: the row reduction algorithm.

6. Vocabulary words: row operation, row equivalence, matrix, augmented matrix,
pivot, (reduced) row echelon form.

In this section, we will present an algorithm for “solving” a system of linear
equations.

2.2.1 The Elimination Method


We will solve systems of linear equations algebraically using the elimination method.
In other words, we will combine the equations in various ways to try to eliminate
as many variables as possible from each equation. There are three valid operations
we can perform on our system of equations:

• Scaling: we can multiply both sides of an equation by a nonzero number.

 x + 2y + 3z =  6                           −3x − 6y − 9z = −18
2x − 3y + 2z = 14   (multiply 1st by −3) →   2x − 3y + 2z =  14
3x +  y −  z = −2                            3x +  y −  z =  −2

• Replacement: we can add a multiple of one equation to another, replacing


the second equation with the result.

 x + 2y + 3z =  6                            x + 2y + 3z =  6
2x − 3y + 2z = 14   (2nd = 2nd − 2×1st) →        −7y − 4z =  2
3x +  y −  z = −2                           3x +  y −  z = −2

• Swap: we can swap two equations.

 x + 2y + 3z =  6                   3x +  y −  z = −2
2x − 3y + 2z = 14   (3rd ←→ 1st) →  2x − 3y + 2z = 14
3x +  y −  z = −2                    x + 2y + 3z =  6

Example. Solve (2.1.1) using the elimination method.


Solution.

 x + 2 y + 3z = 6  x+ 2 y + 3z = 6
 
2nd = 2nd−2×1st
2x − 3 y + 2z = 14 −−−−−−−−−−→ −7 y − 4z = 2
3x + y − z = −2 3x + = −2
 
y− z
 x + 2 y + 3z = 6

3rd = 3rd−3×1st
−−−−−−−−−→ −7 y − 4z = 2
= −20

−5 y − 10z
 x + 2 y + 3z = 6

2nd ←→ 3rd
−−−−−−→ −5 y − 10z = −20
=

−7 y − 4z 2
 x + 2 y + 3z =6

divide 2nd by −5
−−−−−−−−→ y + 2z =4
=2

−7 y − 4z
 x + 2 y + 3z = 6

3rd = 3rd+7×2nd
−−−−−−−−−−→ y + 2z = 4
= 30

10z

At this point we’ve eliminated both x and y from the third equation, and we can
solve 10z = 30 to get z = 3. Substituting for z in the second equation gives
y + 2 · 3 = 4, or y = −2. Substituting for y and z in the first equation gives
x + 2 · (−2) + 3 · 3 = 6, or x = 1. Thus the only solution is (x, y, z) = (1, −2, 3).
We can check that our solution is correct by substituting (x, y, z) = (1, −2, 3)
into the original equations:

 x + 2y + 3z =  6                   1 + 2 · (−2) + 3 · 3 =  6
2x − 3y + 2z = 14   (substitute) →  2 · 1 − 3 · (−2) + 2 · 3 = 14
3x +  y −  z = −2                   3 · 1 + (−2) − 3 = −2.

Augmented Matrices and Row Operations Solving equations by elimination
requires writing the variables x, y, z and the equals sign = over and over again,
merely as placeholders: all that is changing in the equations is the coefficient
numbers. We can make our life easier by extracting only the numbers, and putting
them in a box:

 x + 2y + 3z =  6                 [ 1  2  3 |  6 ]
2x − 3y + 2z = 14   (becomes) →   [ 2 −3  2 | 14 ]
3x +  y −  z = −2                 [ 3  1 −1 | −2 ]

This is called an augmented matrix. The word “augmented” refers to the vertical
line, which we draw to remind ourselves where the equals sign belongs; a matrix
is a grid of numbers without the vertical line. In this notation, our three valid ways
of manipulating our equations become row operations:

• Scaling: multiply all entries in a row by a nonzero number.

  [ 1  2  3 |  6 ]                    [ −3 −6 −9 | −18 ]
  [ 2 −3  2 | 14 ]   (R1 = R1 × −3) →  [  2 −3  2 |  14 ]
  [ 3  1 −1 | −2 ]                    [  3  1 −1 |  −2 ]

  Here the notation R1 simply means “the first row”, and likewise for R2 , R3 ,
  etc.

• Replacement: add a multiple of one row to another, replacing the second
  row with the result.

  [ 1  2  3 |  6 ]                       [ 1  2  3 |  6 ]
  [ 2 −3  2 | 14 ]   (R2 = R2 − 2×R1) →   [ 0 −7 −4 |  2 ]
  [ 3  1 −1 | −2 ]                       [ 3  1 −1 | −2 ]

• Swap: interchange two rows.

  [ 1  2  3 |  6 ]                  [ 3  1 −1 | −2 ]
  [ 2 −3  2 | 14 ]   (R1 ←→ R3) →   [ 2 −3  2 | 14 ]
  [ 3  1 −1 | −2 ]                  [ 1  2  3 |  6 ]

Example. Solve (2.1.1) using row operations.


Solution. We start by forming an augmented matrix:

 x + 2y + 3z =  6                 [ 1  2  3 |  6 ]
2x − 3y + 2z = 14   (becomes) →   [ 2 −3  2 | 14 ]
3x +  y −  z = −2                 [ 3  1 −1 | −2 ]

Eliminating a variable from an equation means producing a zero to the left of the
line in an augmented matrix. First we produce zeros in the first column (i.e. we
eliminate x) by subtracting multiples of the first row.

[ 1  2  3 |  6 ]                      [ 1  2  3 |  6 ]
[ 2 −3  2 | 14 ]   (R2 = R2 − 2R1) →   [ 0 −7 −4 |  2 ]
[ 3  1 −1 | −2 ]                      [ 3  1 −1 | −2 ]

                                      [ 1  2   3 |   6 ]
                   (R3 = R3 − 3R1) →   [ 0 −7  −4 |   2 ]
                                      [ 0 −5 −10 | −20 ]

This was made much easier by the fact that the top-left entry is equal to 1, so we
can simply multiply the first row by the number below and subtract. In order to
eliminate y in the same way, we would like to produce a 1 in the second column.
We could divide the second row by −7, but this would produce fractions; instead,
let’s divide the third by −5.

[ 1  2   3 |   6 ]                    [ 1  2  3 | 6 ]
[ 0 −7  −4 |   2 ]   (R3 = R3 ÷ −5) →  [ 0 −7 −4 | 2 ]
[ 0 −5 −10 | −20 ]                    [ 0  1  2 | 4 ]

                                      [ 1  2  3 | 6 ]
                     (R2 ←→ R3) →     [ 0  1  2 | 4 ]
                                      [ 0 −7 −4 | 2 ]

                                      [ 1  2  3 |  6 ]
                     (R3 = R3 + 7R2) → [ 0  1  2 |  4 ]
                                      [ 0  0 10 | 30 ]

                                      [ 1  2  3 | 6 ]
                     (R3 = R3 ÷ 10) →  [ 0  1  2 | 4 ]
                                      [ 0  0  1 | 3 ]

We swapped the second and third row just to keep things orderly. Now we translate
this augmented matrix back into a system of equations:

[ 1  2  3 | 6 ]                  x + 2y + 3z = 6
[ 0  1  2 | 4 ]   (becomes) →        y + 2z = 4
[ 0  0  1 | 3 ]                           z = 3

Hence z = 3; back-substituting as in this example gives (x, y, z) = (1, −2, 3).
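This row reduction is easy to check by machine. The sketch below uses Python with SymPy (an assumption of this aside, not something the text requires) to compute the reduced row echelon form of the same augmented matrix.

from sympy import Matrix

aug = Matrix([
    [1,  2,  3,  6],
    [2, -3,  2, 14],
    [3,  1, -1, -2],
])

rref, pivots = aug.rref()
print(rref)   # the last column reads off the solution (x, y, z) = (1, -2, 3)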

The process of doing row operations to a matrix does not change the solution
set of the corresponding linear equations!

Indeed, the whole point of doing these operations is to solve the equations using
the elimination method.
Definition. Two matrices are called row equivalent if one can be obtained from
the other by doing some number of row operations.
So the linear equations of row-equivalent matrices have the same solution set.
Example (An Inconsistent System). Solve the following system of equations using
row operations:

 x +  y = 2
3x + 4y = 5
4x + 5y = 9

Solution. First we put our system of equations into an augmented matrix.

 x +  y = 2                            [ 1 1 | 2 ]
3x + 4y = 5   (augmented matrix) →     [ 3 4 | 5 ]
4x + 5y = 9                            [ 4 5 | 9 ]

We clear the entries below the top-left using row replacement.

[ 1 1 | 2 ]                      [ 1 1 |  2 ]
[ 3 4 | 5 ]   (R2 = R2 − 3R1) →   [ 0 1 | −1 ]
[ 4 5 | 9 ]                      [ 4 5 |  9 ]

                                 [ 1 1 |  2 ]
              (R3 = R3 − 4R1) →   [ 0 1 | −1 ]
                                 [ 0 1 |  1 ]

Now we clear the second entry from the last row.

[ 1 1 |  2 ]                     [ 1 1 |  2 ]
[ 0 1 | −1 ]   (R3 = R3 − R2) →   [ 0 1 | −1 ]
[ 0 1 |  1 ]                     [ 0 0 |  2 ]

This translates back into the system of equations

x + y =  2
    y = −1
    0 =  2.

Our original system has the same solution set as this system. But this system has no
solutions: there are no values of x, y making the third equation true! We conclude
that our original system was inconsistent.
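The same conclusion can be reached by machine. A minimal Python/SymPy sketch (SymPy is assumed here, not prescribed by the text):

from sympy import Matrix

aug = Matrix([
    [1, 1, 2],
    [3, 4, 5],
    [4, 5, 9],
])

rref, pivots = aug.rref()
print(rref)     # the last row is [0, 0, 1]: the impossible equation 0 = 1
print(pivots)   # (0, 1, 2): the augmented column is a pivot column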

2.2.2 Echelon Forms


In the previous subsection we saw how to translate a system of linear equations
into an augmented matrix. We want to find an algorithm for “solving” such an
augmented matrix. First we must decide what it means for an augmented matrix
to be “solved”.

Definition. A matrix is in row echelon form if:

1. All zero rows are at the bottom.

2. The first nonzero entry of a row is to the right of the first nonzero entry of
the row above.

3. Below the first nonzero entry of a row, all entries are zero.

Here is a picture of a matrix in row echelon form:

[ ★ ? ? ? ? ]
[ 0 ★ ? ? ? ]      ? = any number
[ 0 0 0 ★ ? ]      ★ = any nonzero number
[ 0 0 0 0 0 ]

Definition. A pivot is the first nonzero entry of a row of a matrix in row echelon
form.

A matrix in row echelon form is generally easy to solve using back-substitution.
For example,

[ 1 2  3 |  6 ]                x + 2y + 3z =  6
[ 0 1  2 |  4 ]   (becomes) →      y + 2z =  4
[ 0 0 10 | 30 ]                       10z = 30.

We immediately see that z = 3, which implies y = 4 − 2 · 3 = −2 and
x = 6 − 2(−2) − 3 · 3 = 1. See this example.

Definition. A matrix is in reduced row echelon form if it is in row echelon form,
and in addition:

4. Each pivot is equal to 1.

5. Each pivot is the only nonzero entry in its column.

Here is a picture of a matrix in reduced row echelon form:

[ 1 0 ? 0 ? ]
[ 0 1 ? 0 ? ]      ? = any number
[ 0 0 0 1 ? ]      1 = pivot
[ 0 0 0 0 0 ]

A matrix in reduced row echelon form is in some sense completely solved. For
example,

[ 1 0 0 |  1 ]                x =  1
[ 0 1 0 | −2 ]   (becomes) →  y = −2
[ 0 0 1 |  3 ]                z =  3.

Example. The following matrices are in reduced row echelon form:

[ 1 0  2 ]     [ 0 1 8 0 ]     [ 1 17 0 ]     [ 0 0 0 ]
[ 0 1 −1 ]                     [ 0  0 1 ]     [ 0 0 0 ]

The following matrices are in row echelon form but not reduced row echelon form:

[ 2 1 ]     [ 2 7 1 4 ]     [ 1 17 0 ]     [ 2 1 3 ]
[ 0 1 ]     [ 0 0 2 1 ]     [ 0  1 1 ]     [ 0 0 0 ]
            [ 0 0 0 3 ]

The following matrices are not in echelon form:

[ 2 7 1 4 ]     [ 0 17 0 ]     [ 2 1 ]     [ 0 ]
[ 0 0 2 1 ]     [ 0  2 1 ]     [ 2 1 ]     [ 1 ]
[ 0 0 1 3 ]                                [ 0 ]
                                           [ 0 ]
When deciding if an augmented matrix is in (reduced) row echelon form, there
is nothing special about the augmented column(s). Just ignore the vertical
line.

If an augmented matrix is in reduced row echelon form, the corresponding
linear system is viewed as solved. We will see below why this is the case, and we
will show that any matrix can be put into reduced row echelon form using only
row operations.

2.2.3 The Row Reduction Algorithm


Theorem. Every matrix is row equivalent to one and only one matrix in reduced row
echelon form.
We will give an algorithm, called row reduction or Gaussian elimination,
which demonstrates that every matrix is row equivalent to at least one matrix in
reduced row echelon form.

The uniqueness statement is interesting—it means that, no matter how you
row reduce, you always get the same matrix in reduced row echelon form.

This assumes, of course, that you only do the three legal row operations, and
you don’t make any arithmetic errors.
We will not prove uniqueness, but maybe you can!
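One way to convince yourself experimentally is to row reduce two row-equivalent matrices and compare the results. The sketch below does this in Python with SymPy (an assumption of this aside); the second matrix is obtained from the first, which reappears in the example below, by reordering its rows and scaling one of them.

from sympy import Matrix

A = Matrix([[0, -7, -4,  2],
            [2,  4,  6, 12],
            [3,  1, -1, -2]])

B = Matrix([[3,  1, -1, -2],    # the rows of A in a different order ...
            [1,  2,  3,  6],    # ... with the old second row scaled by 1/2
            [0, -7, -4,  2]])

print(A.rref()[0] == B.rref()[0])   # True: both reduce to the same matrix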

Algorithm (Row Reduction).

Step 1a: Swap the 1st row with a lower one so a leftmost nonzero entry is in
the 1st row (if necessary).

Step 1b: Scale the 1st row so that its first nonzero entry is equal to 1.

Step 1c: Use row replacement so all entries below this 1 are 0.

Step 2a: Swap the 2nd row with a lower one so that the leftmost nonzero entry
is in the 2nd row.

Step 2b: Scale the 2nd row so that its first nonzero entry is equal to 1.

Step 2c: Use row replacement so all entries below this 1 are 0.

Step 3a: Swap the 3rd row with a lower one so that the leftmost nonzero entry
is in the 3rd row.

etc.

Last Step: Use row replacement to clear all entries above the pivots, starting
with the last pivot.
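For readers who want to see the algorithm as code, here is a minimal Python/NumPy sketch (NumPy and floating-point arithmetic are assumptions of this aside, not part of the text). It clears the whole pivot column as soon as a pivot is found, which combines Step c with the Last Step.

import numpy as np

def rref(A, tol=1e-12):
    """Return the reduced row echelon form of A and the list of pivot columns."""
    A = A.astype(float).copy()
    rows, cols = A.shape
    pivot_row = 0
    pivot_cols = []
    for col in range(cols):
        # Step a: find a row at or below pivot_row with a nonzero entry in this column.
        nonzero = np.where(np.abs(A[pivot_row:, col]) > tol)[0]
        if nonzero.size == 0:
            continue                                    # no pivot in this column
        swap = pivot_row + nonzero[0]
        A[[pivot_row, swap]] = A[[swap, pivot_row]]     # Step a: swap it into place
        A[pivot_row] /= A[pivot_row, col]               # Step b: scale the pivot to 1
        for r in range(rows):
            if r != pivot_row:
                A[r] -= A[r, col] * A[pivot_row]        # Step c / Last Step: clear the column
        pivot_cols.append(col)
        pivot_row += 1
        if pivot_row == rows:
            break
    return A, pivot_cols

A = np.array([[0, -7, -4, 2], [2, 4, 6, 12], [3, 1, -1, -2]], dtype=float)
print(rref(A)[0])   # matches the reduced row echelon form found in the example below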

Example. Row reduce this matrix:

 
[ 0 −7 −4  2 ]
[ 2  4  6 12 ]
[ 3  1 −1 −2 ].

Solution.

[ 0 −7 −4  2 ]
[ 2  4  6 12 ]        Step 1a: swap rows so that the top-left entry is nonzero.
[ 3  1 −1 −2 ]

(R1 ←→ R2)
[ 2  4  6 12 ]
[ 0 −7 −4  2 ]        Step 1b: scale the first row so its first entry is 1.
[ 3  1 −1 −2 ]

(R1 = R1 ÷ 2)
[ 1  2  3  6 ]
[ 0 −7 −4  2 ]        Step 1c: subtract a multiple of the first row to clear below the pivot.
[ 3  1 −1 −2 ]

(R3 = R3 − 3R1)
[ 1  2   3   6 ]
[ 0 −7  −4   2 ]      Optional: swap rows 2 and 3 to make Step 2b easier next.
[ 0 −5 −10 −20 ]

(R2 ←→ R3)
[ 1  2   3   6 ]      Step 2a: this entry is already nonzero.
[ 0 −5 −10 −20 ]      Step 2b: scale to make it 1.
[ 0 −7  −4   2 ]

(R2 = R2 ÷ −5)
[ 1  2  3  6 ]        Note how Step 2b doesn’t create fractions.
[ 0  1  2  4 ]
[ 0 −7 −4  2 ]        Step 2c: add 7 times the second row to clear below.

(R3 = R3 + 7R2)
[ 1  2  3  6 ]        Step 3a: this entry is already nonzero.
[ 0  1  2  4 ]        Step 3b: scale to make it 1.
[ 0  0 10 30 ]

(R3 = R3 ÷ 10)
[ 1  2  3  6 ]        Last step: add multiples of the third row to clear the entries above its pivot.
[ 0  1  2  4 ]
[ 0  0  1  3 ]

(R2 = R2 − 2R3)
[ 1  2  3  6 ]
[ 0  1  0 −2 ]
[ 0  0  1  3 ]

(R1 = R1 − 3R3)
[ 1  2  0 −3 ]        Last step: add −2 times the second row to clear the entry above the second pivot.
[ 0  1  0 −2 ]
[ 0  0  1  3 ]

(R1 = R1 − 2R2)
[ 1  0  0  1 ]
[ 0  1  0 −2 ]
[ 0  0  1  3 ]

The reduced row echelon form of the matrix is

[ 1 0 0 |  1 ]                      x =  1
[ 0 1 0 | −2 ]   (translates to) →  y = −2
[ 0 0 1 |  3 ]                      z =  3.

The reduced row echelon form of the matrix tells us that the only solution is
(x, y, z) = (1, −2, 3).

Use this link to view the online demo

Animated slideshow of the row reduction in this example.

Here is the row reduction algorithm, summarized in pictures.

[Figure: the row reduction algorithm in pictures — get a 1 in the pivot position, clear down below it, move on to the next pivot and repeat until the matrix is in row echelon form (REF), then clear up above each pivot until the matrix is in reduced row echelon form (RREF).]

Example (An Inconsistent System). Solve the linear system

2x + 10y = −1
3x + 15y =  2

using row reduction.



Solution.

[ 2 10 | −1 ]                       [ 1  5 | −1/2 ]
[ 3 15 |  2 ]   (R1 = R1 ÷ 2) →      [ 3 15 |    2 ]     (Step 1b)

                (R2 = R2 − 3R1) →    [ 1  5 | −1/2 ]
                                    [ 0  0 |  7/2 ]     (Step 1c)

                (R2 = R2 × 2/7) →    [ 1  5 | −1/2 ]
                                    [ 0  0 |    1 ]     (Step 2b)

                (R1 = R1 + 1/2 R2) → [ 1  5 |    0 ]
                                    [ 0  0 |    1 ]     (Step 2c)

This row reduced matrix corresponds to the inconsistent system

x + 5y = 0
     0 = 1.
In the above example, we saw how to recognize the reduced row echelon form
of an inconsistent system.

The Row Echelon Form of an Inconsistent System. An augmented matrix corre-
sponds to an inconsistent system of equations if and only if the last column (i.e., the
augmented column) is a pivot column.

In other words, the row reduced matrix of an inconsistent system looks like this:

[ 1 0 ? ? | 0 ]
[ 0 1 ? ? | 0 ]
[ 0 0 0 0 | 1 ]
0 0 0 0 1

We have discussed two classes of matrices so far:

1. When the reduced row echelon form of a matrix has a pivot in every non-
augmented column, then it corresponds to a system with a unique solution:

[ 1 0 0 |  1 ]                      x =  1
[ 0 1 0 | −2 ]   (translates to) →  y = −2
[ 0 0 1 |  3 ]                      z =  3.

2. When the reduced row echelon form of a matrix has a pivot in the last (aug-
mented) column, then it corresponds to a system with no solutions:

[ 1 5 | 0 ]                      x + 5y = 0
[ 0 0 | 1 ]   (translates to) →       0 = 1.

What happens when one of the non-augmented columns lacks a pivot? This is the
subject of Section 2.3.

Example (A System with Many Solutions). Solve the linear system

2x +  y + 12z =  1
 x + 2y +  9z = −1

using row reduction.

Solution.
 ‹  ‹
2 1 12 1 R1 ←→R2 1 2 9 −1
−−−−→ (Optional)
1 2 9 −1 2 1 12 1
 ‹
R2 =R2 −2R1 1 2 9 −1
−−−−−−→ (Step 1c)
0 −3 −6 3
 ‹
R2 =R2 ÷−3 1 2 9 −1
−−−−−→ (Step 2b)
0 1 2 −1
 ‹
R1 =R1 −2R2 1 0 5 1
−−−−−−→ (Step 2c)
0 1 2 −1

This row reduced matrix corresponds to the linear system

x + 5z =  1
y + 2z = −1.

In what sense is the system solved? We will see in Section 2.3.

2.3 Parametric Form

Objectives

1. Learn to express the solution set of a system of linear equations in parametric
form.

2. Understand the three possibilities for the number of solutions of a system of
linear equations.

3. Recipe: parametric form.

4. Vocabulary word: free variable.



2.3.1 Free Variables


There is one possibility for the row reduced form of a matrix that we did not see
in Section 2.2.
Example (A System with a Free Variable). Consider the linear system
2x +  y + 12z =  1
 x + 2y +  9z = −1.

We solve it using row reduction:

[ 2 1 12 |  1 ]                     [ 1 2  9 | −1 ]
[ 1 2  9 | −1 ]   (R1 ←→ R2) →      [ 2 1 12 |  1 ]     (Optional)

                  (R2 = R2 − 2R1) → [ 1  2  9 | −1 ]
                                    [ 0 −3 −6 |  3 ]    (Step 1c)

                  (R2 = R2 ÷ −3) →  [ 1  2  9 | −1 ]
                                    [ 0  1  2 | −1 ]    (Step 2b)

                  (R1 = R1 − 2R2) → [ 1  0  5 |  1 ]
                                    [ 0  1  2 | −1 ]    (Step 2c)

This row reduced matrix corresponds to the linear system

x + 5z =  1
y + 2z = −1.

In what sense is the system solved? We rewrite as

x =  1 − 5z
y = −1 − 2z.

For any value of z, there is exactly one value of x and y that make the equations
true. But we are free to choose any value of z.
We have found all solutions: it is the set of all values x, y, z, where

x =  1 − 5z
y = −1 − 2z     (z any real number)
z =        z
For instance, setting z = 0 gives the solution (x, y, z) = (1, −1, 0), and setting
z = 1 gives the solution (x, y, z) = (−4, −3, 1).

Use this link to view the online demo

A picture of the solution set (the yellow line) of the linear system in this example.
There is a unique solution for every value of z; move the slider to change z.

Definition. Consider a consistent system of equations in the variables x 1 , x 2 , . . . , x n .


Let A be a row echelon form of the augmented matrix for this system.
We say that x i is a free variable if its corresponding column in A does not
contain a pivot.

In the above example, the variable z was free because the reduced row echelon
form matrix was

[ 1 0 5 |  1 ]
[ 0 1 2 | −1 ].

In the matrix

[ 1 ? 0 ? | ? ]
[ 0 0 1 ? | ? ],

the free variables are x 2 and x 4 . (The augmented column is not free because it
does not correspond to a variable.)

Recipe: Parametric form. The parametric form of the solution set of a con-
sistent system of linear equations is obtained as follows.

1. Write the system as an augmented matrix.

2. Row reduce to reduced row echelon form.

3. Write the corresponding (solved) system of linear equations.

4. Move all free variables to the right hand side of the equations.

Moving the free variables to the right hand side of the equations amounts to
solving for the non-free variables (the ones that come from columns with pivots) in
terms of the free variables. One can think of the free variables as being independent
variables, and the non-free variables being dependent.
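The recipe can also be carried out by machine. The sketch below applies it to the system from the example above using Python with SymPy (an assumption of this aside; the text does not prescribe any software):

from sympy import Matrix, symbols, linsolve

x, y, z = symbols("x y z")

# Augmented matrix of  2x + y + 12z = 1,  x + 2y + 9z = -1.
aug = Matrix([[2, 1, 12,  1],
              [1, 2,  9, -1]])

rref, pivots = aug.rref()
print(rref)     # rows [1, 0, 5, 1] and [0, 1, 2, -1]
print(pivots)   # (0, 1): the z-column has no pivot, so z is a free variable

# linsolve expresses the same solution set with z as the parameter.
print(linsolve(aug, x, y, z))   # {(1 - 5*z, -1 - 2*z, z)}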

Implicit Versus Parameterized Equations. The solution set of the system of linear
equations
2x +  y + 12z =  1
 x + 2y +  9z = −1
is a line in R3 , as we saw in this example. These equations are called the implicit
equations for the line: the line is defined implicitly as the simultaneous solutions
to those two equations.
The parametric form

x =  1 − 5z
y = −1 − 2z

can be written as follows:

(x, y, z) = (1 − 5z, −1 − 2z, z)     (z any real number).

This is called a parameterized equation for the same line. It is an expression that
produces all points of the line in terms of one parameter, z.
One should think of a system of equations as being an implicit equation for
its solution set, and of the parametric form as being the parameterized equation
for the same set. The parametric form is much more explicit: it gives a concrete
recipe for producing all solutions.

You can choose any value for the free variables in a (consistent) linear system.
Free variables come from the columns without pivots in a matrix in row echelon
form.

Example. Suppose that the reduced row echelon form of the matrix for a linear
system in four variables x 1 , x 2 , x 3 , x 4 is
 ‹
1 0 0 3 2
.
0 0 1 4 −1

The free variables are x 2 and x 4 : they are the ones whose columns are not pivot
columns.
This translates into the system of equations

x1 + 3x 4 = 2 x 1 = 2 − 3x 4
§ §
parametric form
−−−−−−−−→
x 3 + 4x 4 = −1 x 3 = −1 − 4x 4 .

What happened to x 2 ? It is a free variable, but no other variable depends on it.


The general solution to the system is

(x 1 , x 2 , x 3 , x 4 ) = (2 − 3x 4 , x 2 , −1 − 4x 4 , x 4 ),

for any values of x 2 and x 4 . For instance, (2, 0, −1, 0) is a solution (with x 2 = x 4 =
0), and (5, 1, 3, −1) is a solution (with x 2 = 1, x 4 = −1).

Example (A Parameterized Plane). The system of one linear equation

x + y + z = 1

comes from the matrix

[ 1 1 1 | 1 ],

which is already in reduced row echelon form. The free variables are y and z. The
parametric form for the general solution is

(x, y, z) = (1 − y − z, y, z)

for any values of y and z. This is the parametric equation for a plane in R3 .

Use this link to view the online demo

A plane described by two parameters y and z. Any point on the plane is obtained by
substituting suitable values for y and z.

2.3.2 Number of Solutions


There are three possibilities for the reduced row echelon form of the augmented
matrix of a linear system.

1. The last column is a pivot column. In this case, the system is inconsistent.
There are zero solutions, i.e., the solution set is empty. For example, the
matrix

[ 1 0 | 0 ]
[ 0 1 | 0 ]
[ 0 0 | 1 ]
comes from a linear system with no solutions.

2. Every column except the last column is a pivot column. In this case, the
system has a unique solution. For example, the matrix
 
[ 1 0 0 | a ]
[ 0 1 0 | b ]
[ 0 0 1 | c ]

tells us that the unique solution is (x, y, z) = (a, b, c).

3. The last column is not a pivot column, and some other column is not a
pivot column either. In this case, the system has infinitely many solutions,
corresponding to the infinitely many possible values of the free variable(s).
For example, in the system corresponding to the matrix
 ‹
1 −2 0 3 1
,
0 0 1 4 −1

any values for x 2 and x 4 yield a solution to the system of equations.


Chapter 3

Systems of Linear Equations: Geometry

Primary Goals.

1. Understand what the solution set of Ax = b looks like.

2. Understand the set of b such that Ax = b is consistent.

This chapter is devoted to the geometric study of two objects:

1. the solution set of a system of linear equations, and

2. the set of all constants that make a particular system consistent.

These objects are related in a beautiful way by the rank theorem in Section 3.9.
We will develop a large amount of vocabulary that we will use to describe
the above objects: vectors (Section 3.1), spans (Section 3.2), linear independence
(Section 3.5), subspaces (Section 3.6), dimension (Section 3.7), coordinate sys-
tems (Section 3.8), etc. We will use these concepts to give a precise geometric
description of the solution set of any system of equations (Section 3.4). We will
also learn how to express systems of equations more simply using matrix equations
(Section 3.3).

3.1 Vectors

Objectives

1. Learn how to add and scalar-multiply vectors in Rn , both algebraically and
geometrically.

2. Understand linear combinations geometrically.

3. Pictures: vector addition, vector subtraction, linear combinations.


4. Vocabulary words: vector, linear combination.

3.1.1 Vectors in Rn
We have been drawing points of Rn as dots in the line, plane, space, etc. We can
also draw them as arrows. Since we have two geometric interpretations in mind,
in what follows we will call an ordered list of n real numbers an element of Rn .
Points and Vectors. A point is an element of Rn , drawn as a point (a dot).

the point (1, 3)

A vector is an element of Rn , drawn as an arrow.

the vector (1, 3)

The difference is purely psychological: points and vectors are both just lists of
numbers.
Interactive: A vector in R3 , by coordinates.

Use this link to view the online demo

A vector in R3 , and its coordinates. Drag the arrow head and tail.

When we think of an element of Rn as a vector, we will usually write it vertically,
like a matrix with one column:

v = [ 1 ]
    [ 3 ].

We will also write 0 for the zero vector.
Why make the distinction between points and vectors? A vector need not start
at the origin: it can be located anywhere! In other words, an arrow is determined
by its length and its direction, not by its location. For instance, these arrows all
represent the vector (1, 2).

Unless otherwise specified, we will assume that all vectors start at the origin.

Vectors makes sense in the real world: many physical quantities, such as ve-
locity, are represented as vectors. But it makes more sense to think of the velocity
of a car as being located at the car.
Remark. Some authors use boldface letters to represent vectors, as in “v”, or use
arrows, as in “~v”. As it is usually clear from context if a letter represents a vector,
we do not decorate vectors in this way.
Note. Another way to think about a vector is as a difference between two points,
or the arrow from one point to another. For instance, (1, 2) is the arrow from (1, 1)
to (2, 3).

[Figure: the arrow from (1, 1) to (2, 3), representing the vector (1, 2).]

3.1.2 Vector Algebra and Geometry


Here we learn how to add vectors together and how to multiply vectors by num-
bers, both algebraically and geometrically.

Vector addition and scalar multiplication.

• We can add two vectors together:

  [ a ]   [ x ]   [ a + x ]
  [ b ] + [ y ] = [ b + y ].
  [ c ]   [ z ]   [ c + z ]

• We can multiply, or scale, a vector by a real number c:

    [ x ]   [ c · x ]
  c [ y ] = [ c · y ].
    [ z ]   [ c · z ]

  We call c a scalar to distinguish it from a vector. If v is a vector and c is a
  scalar, then cv is called a scalar multiple of v.

Addition and scalar multiplication work in the same way for vectors of length n.

Example.

[ 1 ]   [ 4 ]   [ 5 ]            [ 1 ]   [ −2 ]
[ 2 ] + [ 5 ] = [ 7 ]   and   −2 [ 2 ] = [ −4 ].
[ 3 ]   [ 6 ]   [ 9 ]            [ 3 ]   [ −6 ]
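These operations are exactly what entrywise arithmetic on arrays does; here is a one-line check in Python with NumPy (an assumption of this aside, not part of the text):

import numpy as np

v = np.array([1, 2, 3])
w = np.array([4, 5, 6])

print(v + w)    # [5 7 9]
print(-2 * v)   # [-2 -4 -6]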

The Parallelogram Law for Vector Addition Geometrically, the sum of two vec-
tors v, w is obtained as follows: place the tail of w at the head of v. Then v + w is
the vector whose tail is the tail of v and whose head is the head of w. Doing this
both ways creates a parallelogram. For example,
 ‹  ‹  ‹
1 4 5
+ = .
3 2 5

Why? The width of v + w is the sum of the widths, and likewise with the
heights.

[Figure: the parallelogram law for v + w; the width of v + w is 5 = 1 + 4 = 4 + 1 and the height is 5 = 2 + 3 = 3 + 2.]

Interactive: The parallelogram law for vector addition.

Use this link to view the online demo

The parallelogram law for vector addition. Click and drag the heads of v and w.

Vector Subtraction Geometrically, the difference of two vectors v, w is obtained
as follows: place the tail of v and w at the same point. Then v − w is the vector
from the head of w to the head of v. For example,

[ 1 ]   [ 4 ]   [ −3 ]
[ 4 ] − [ 2 ] = [  2 ].

Why? If you add v − w to w, you get v.

[Figure: the vector v − w, drawn from the head of w to the head of v.]

Interactive: Vector subtraction.

Use this link to view the online demo

Vector subtraction. Click and drag the heads of v and w.



Scalar Multiplication A scalar multiple of a vector v has the same (or opposite)
direction, but a different length. For instance, 2v is the vector in the direction of
v but twice as long, and −(1/2)v is the vector in the opposite direction of v, but half
as long. Note that the set of all scalar multiples of a (nonzero) vector v is a line.

[Figure: some multiples of v (2v, 0v, −(1/2)v) on the left; all multiples of v, forming a line, on the right.]

Interactive: Scalar multiplication.

Use this link to view the online demo

Scalar multiplication. Drag the slider to change the scalar.

3.1.3 Linear Combinations


We can add and scalar-multiply vectors in the same equation.
Definition. Let c1 , c2 , . . . , c p be scalars, and let v1 , v2 , . . . , vp be vectors in Rn . The
vector in Rn
c1 v1 + c2 v2 + · · · + c p vp
is called a linear combination of the vectors v1 , v2 , . . . , vp , with weights or coef-
ficients c1 , c2 , . . . , c p .
Geometrically, a linear combination is obtained by stretching / shrinking the
vectors v1 , v2 , . . . , vp according to the coefficients, then adding them together using
the parallelogram law.
Example. Let v1 = (1, 2) and v2 = (1, 0). Here are some linear combinations of v1 and
v2 , drawn as points:

• v1 + v2
• v1 − v2
• 2v1 + 0v2
• 2v2
• −v1

The locations of these points are found using the parallelogram law for vector
addition. Any vector on the plane is a linear combination of v1 and v2 , with suitable
coefficients.

Use this link to view the online demo

Linear combinations of two vectors in R2 : move the sliders to change the coefficients of
v1 and v2 . Note that any vector on the plane can be obtained as a linear combination
of v1 , v2 with suitable coefficients.

Interactive: Linear combinations of three vectors.

Use this link to view the online demo

Linear combinations of three vectors: move the sliders to change the coefficients of
v1 , v2 , v3 . Note how the parallelogram law for addition of three vectors is more of a
“parallelepiped law”.

Example (Linear Combinations of a Single Vector). A linear combination of a single vector $v = \begin{pmatrix} 1 \\ 2 \end{pmatrix}$ is just a scalar multiple of v. So some examples include
\[
v = \begin{pmatrix} 1 \\ 2 \end{pmatrix}, \qquad \frac{3}{2}\,v = \begin{pmatrix} 3/2 \\ 3 \end{pmatrix}, \qquad -\frac{1}{2}\,v = \begin{pmatrix} -1/2 \\ -1 \end{pmatrix}, \qquad \dots
\]

The set of all linear combinations is the line through v. (Unless v = 0, in which
case any scalar multiple of v is again 0.)

Example (Linear Combinations of Collinear Vectors). The set of all linear combinations of the vectors
\[
v_1 = \begin{pmatrix} 2 \\ 2 \end{pmatrix} \qquad\text{and}\qquad v_2 = \begin{pmatrix} -1 \\ -1 \end{pmatrix}
\]
is the line containing both vectors.

[Figure: v1 and v2 lying on the same line through the origin.]

The difference between this and a previous example is that both vectors lie on
the same line. Hence any scalar multiples of v1 , v2 lie on that line, as does their
sum.
Interactive: Linear combinations of two collinear vectors.

Use this link to view the online demo

Linear combinations of two collinear vectors in R2 . Move the sliders to change the
coefficients of v1 , v2 . Note that there is no way to “escape” the line.

3.2 Vector Equations and Spans

Objectives

1. Understand the equivalence between a system of linear equations and a vector equation.

2. Learn the definition of Span{x 1 , x 2 , . . . , x p }, and how to draw pictures of spans.

3. Recipe: solve a vector equation using augmented matrices / decide if a vector is in a span.

4. Pictures: an inconsistent system of equations, a consistent system of equations, spans in R2 and R3 .

5. Vocabulary word: vector equation.

6. Essential vocabulary word: span.

3.2.1 Vector Equations


An equation involving vectors with n coordinates is the same as n equations involving only numbers. For example, the equation
\[
x\begin{pmatrix} 1 \\ 2 \\ 6 \end{pmatrix} + y\begin{pmatrix} -1 \\ -2 \\ -1 \end{pmatrix} = \begin{pmatrix} 8 \\ 16 \\ 3 \end{pmatrix} \tag{3.2.1}
\]
simplifies to
\[
\begin{pmatrix} x \\ 2x \\ 6x \end{pmatrix} + \begin{pmatrix} -y \\ -2y \\ -y \end{pmatrix} = \begin{pmatrix} 8 \\ 16 \\ 3 \end{pmatrix}
\qquad\text{or}\qquad
\begin{pmatrix} x - y \\ 2x - 2y \\ 6x - y \end{pmatrix} = \begin{pmatrix} 8 \\ 16 \\ 3 \end{pmatrix}.
\]
For two vectors to be equal, all of their coordinates must be equal, so this is just the system of linear equations
\[
\left\{\begin{aligned} x - y &= 8 \\ 2x - 2y &= 16 \\ 6x - y &= 3. \end{aligned}\right. \tag{3.2.2}
\]

Definition. A vector equation is an equation involving a linear combination of


vectors with possibly unknown coefficients.
    
Example. Is $\begin{pmatrix} 8 \\ 16 \\ 3 \end{pmatrix}$ a linear combination of $\begin{pmatrix} 1 \\ 2 \\ 6 \end{pmatrix}$ and $\begin{pmatrix} -1 \\ -2 \\ -1 \end{pmatrix}$?
Solution. This means: can we solve the equation
\[
x\begin{pmatrix} 1 \\ 2 \\ 6 \end{pmatrix} + y\begin{pmatrix} -1 \\ -2 \\ -1 \end{pmatrix} = \begin{pmatrix} 8 \\ 16 \\ 3 \end{pmatrix}
\]
in the unknowns x, y? This vector equation translates into the system of linear equations
\[
\left\{\begin{aligned} x - y &= 8 \\ 2x - 2y &= 16 \\ 6x - y &= 3. \end{aligned}\right.
\]
We form an augmented matrix and row reduce:
\[
\left(\begin{array}{cc|c} 1 & -1 & 8 \\ 2 & -2 & 16 \\ 6 & -1 & 3 \end{array}\right)
\xrightarrow{\text{RREF}}
\left(\begin{array}{cc|c} 1 & 0 & -1 \\ 0 & 1 & -9 \\ 0 & 0 & 0 \end{array}\right).
\]
Thus the equation is consistent, and the solution is x = −1 and y = −9. We conclude that $\begin{pmatrix} 8 \\ 16 \\ 3 \end{pmatrix}$ is indeed a linear combination of $\begin{pmatrix} 1 \\ 2 \\ 6 \end{pmatrix}$ and $\begin{pmatrix} -1 \\ -2 \\ -1 \end{pmatrix}$, with coefficients −1 and −9:
\[
-\begin{pmatrix} 1 \\ 2 \\ 6 \end{pmatrix} - 9\begin{pmatrix} -1 \\ -2 \\ -1 \end{pmatrix} = \begin{pmatrix} 8 \\ 16 \\ 3 \end{pmatrix}.
\]

Use this link to view the online demo

A picture of the vector equation (3.2.1). Try to solve the equation geometrically by
moving the sliders.

A Picture of a Consistent System. We saw in the above example that the system
of equations (3.2.2) is consistent. Equivalently, this means that the vector equation
(3.2.1) has a solution. Therefore, the figure above is a picture of a consistent system
of equations. Compare this figure below.

In order to actually solve the vector equation
\[
x\begin{pmatrix} 1 \\ 2 \\ 6 \end{pmatrix} + y\begin{pmatrix} -1 \\ -2 \\ -1 \end{pmatrix} = \begin{pmatrix} 8 \\ 16 \\ 3 \end{pmatrix},
\]
one has to solve the system of linear equations
\[
\left\{\begin{aligned} x - y &= 8 \\ 2x - 2y &= 16 \\ 6x - y &= 3. \end{aligned}\right.
\]
This means forming the augmented matrix
\[
\left(\begin{array}{cc|c} 1 & -1 & 8 \\ 2 & -2 & 16 \\ 6 & -1 & 3 \end{array}\right)
\]

and row reducing. Note that the columns of the augmented matrix are the vectors
from the original vector equation, so it is not actually necessary to write the system
of equations: one can go directly from the vector equation to the augmented matrix
by “smooshing the vectors together”.

Recipe: Solving a vector equation. In general, the vector equation
\[
x_1 v_1 + x_2 v_2 + \cdots + x_p v_p = b,
\]
where $v_1, v_2, \dots, v_p, b$ are vectors in $\mathbb{R}^n$ and $x_1, x_2, \dots, x_p$ are unknown scalars, has the same solution set as the linear system with augmented matrix
\[
\left(\begin{array}{cccc|c} | & | & & | & | \\ v_1 & v_2 & \cdots & v_p & b \\ | & | & & | & | \end{array}\right)
\]
whose columns are the $v_i$'s and $b$.
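
As a computational aside (ours, not the book's), the recipe is easy to carry out with a computer algebra system. Here is a sketch using SymPy for the example above; the variable names are our own.

    from sympy import Matrix

    v1 = Matrix([1, 2, 6])
    v2 = Matrix([-1, -2, -1])
    b = Matrix([8, 16, 3])

    # "Smoosh the vectors together" into an augmented matrix, then row reduce.
    augmented = Matrix.hstack(v1, v2, b)
    rref, pivots = augmented.rref()
    print(rref)    # last column gives x = -1, y = -9
    print(pivots)  # (0, 1): no pivot in the last column, so the system is consistent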

Now we have (at least) two equivalent ways of thinking about systems of equa-
tions:

1. Augmented matrices.

2. Linear combinations of vectors (vector equations).

The second is more geometric in nature: it lends itself to drawing pictures.

3.2.2 Spans
It will be important to know what are all linear combinations of a set of vectors
v1 , v2 , . . . , vp in Rn . In other words, we would like to understand the set of all
vectors b in Rn such that the vector equation (in the unknowns x 1 , x 2 , . . . , x p )

x 1 v1 + x 2 v2 + · · · + x p vp = b

has a solution (i.e. is consistent).



Essential Definition. Let v1 , v2 , . . . , vp be vectors in Rn . The span of v1 , v2 , . . . , vp is the collection of all linear combinations of v1 , v2 , . . . , vp , and is denoted Span{v1 , v2 , . . . , vp }. In symbols:
\[
\operatorname{Span}\{v_1, v_2, \dots, v_p\} = \bigl\{\, x_1 v_1 + x_2 v_2 + \cdots + x_p v_p \;\big|\; x_1, x_2, \dots, x_p \text{ in } \mathbb{R} \,\bigr\}.
\]
We also say that Span{v1 , v2 , . . . , vp } is the subset spanned by or generated by the vectors v1 , v2 , . . . , vp .

The above definition is the first of several essential definitions that we will see
in this textbook. They are essential in that they form the essence of the subject of
linear algebra: learning linear algebra means (in part) learning these definitions.
All of the definitions are important, but it is essential that you learn and understand
the definitions marked as such.

Set Builder Notation. The notation
\[
\bigl\{\, x_1 v_1 + x_2 v_2 + \cdots + x_p v_p \;\big|\; x_1, x_2, \dots, x_p \text{ in } \mathbb{R} \,\bigr\}
\]

reads as: “the set of all things of the form x 1 v1 + x 2 v2 + · · · + x p vp such that
x 1 , x 2 , . . . , x p are in R.” The vertical line is “such that”; everything to the left of it
is “the set of all things of this form”, and everything to the right is the condition
that those things must satisfy to be in the set. Specifying a set in this way is called
set builder notation.
All mathematical notation is only shorthand: any sequence of symbols must
translate into a usual sentence.

Three characterizations of consistency. Now we have three equivalent ways of


making the same statement:

1. A vector b is in the span of v1 , v2 , . . . , vp .

2. The vector equation

x 1 v1 + x 2 v2 + · · · + x p vp = b

has a solution.

3. The linear system with augmented matrix
\[
\left(\begin{array}{cccc|c} | & | & & | & | \\ v_1 & v_2 & \cdots & v_p & b \\ | & | & & | & | \end{array}\right)
\]

is consistent.

Equivalent means that, for any given list of vectors v1 , v2 , . . . , vp , b, either all
three statements are true, or all three statements are false.

Use this link to view the online demo

This is a picture of an inconsistent linear system: the vector w on the right-hand side
of the equation x 1 v1 + x 2 v2 = w is not in the span of v1 , v2 . Convince yourself of this
by trying to solve the equation x 1 v1 + x 2 v2 = w by moving the sliders, and by row
reduction. Compare this figure.

Pictures of Spans Drawing a picture of Span{v1 , v2 , . . . , vp } is the same as drawing a picture of all linear combinations of v1 , v2 , . . . , vp .

[Figure: Pictures of spans in R2 , showing Span{v} (a line) and Span{v, w}.]

Interactive: Span of two vectors in R2 .

Use this link to view the online demo

Interactive picture of a span of two vectors in R2 . Check “Show x.v + y.w” and move
the sliders to see how every point in the violet region is in fact a linear combination
of the two vectors.

[Figure: Span{v}, Span{v, w}, and Span{u, v, w} in R3 .]
Pictures of spans in R3 . The span of two noncollinear vectors is the plane containing
the origin and the heads of the vectors. Note that three coplanar (but not collinear)
vectors span a plane and not a 3-space, just as two collinear vectors span a line and
not a plane.

Interactive: Span of two vectors in R3 .

Use this link to view the online demo

Interactive picture of a span of two vectors in R3 . Check “Show x.v + y.w” and move
the sliders to see how every point in the violet region is in fact a linear combination
of the two vectors.

Interactive: Span of three vectors in R3 .

Use this link to view the online demo

Interactive picture of a span of three vectors in R3 . Check “Show x.v + y.w + z.u”
and move the sliders to see how every point in the violet region is in fact a linear
combination of the three vectors.

3.3 Matrix Equations

Objectives
1. Understand the equivalence between a system of linear equations, an aug-
mented matrix, a vector equation, and a matrix equation.

2. Characterize the vectors b such that Ax = b is consistent, in terms of the


span of the columns of A.

3. Characterize matrices A such that Ax = b is consistent for all vectors b.

4. Recipe: multiply a vector by a matrix (two ways).

5. Picture: the set of all vectors b such that Ax = b is consistent.

6. Vocabulary word: matrix equation.

3.3.1 The Matrix Equation Ax = b.


In this section we introduce a very concise way of writing a system of linear equa-
tions: Ax = b. Here A is a matrix and x, b are vectors (generally of different sizes),
so first we must explain how to multiply a matrix by a vector.

When we say “A is an m×n matrix,” we mean that A has m rows and n columns.

Remark. In this book, we do not reserve the letters m and n for the numbers of
rows and columns of a matrix. If we write “A is an n × m matrix”, then n is the
number of rows of A and m is the number of columns.
Definition. Let A be an m × n matrix with columns v1 , v2 , . . . , vn :
\[
A = \begin{pmatrix} | & | & & | \\ v_1 & v_2 & \cdots & v_n \\ | & | & & | \end{pmatrix}.
\]
The product of A with a vector x in Rn is the linear combination
\[
Ax = \begin{pmatrix} | & | & & | \\ v_1 & v_2 & \cdots & v_n \\ | & | & & | \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}
= x_1 v_1 + x_2 v_2 + \cdots + x_n v_n.
\]

This is a vector in Rm .

Example.
\[
\begin{pmatrix} 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix}
\begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix}
= 1\begin{pmatrix} 4 \\ 7 \end{pmatrix} + 2\begin{pmatrix} 5 \\ 8 \end{pmatrix} + 3\begin{pmatrix} 6 \\ 9 \end{pmatrix}
= \begin{pmatrix} 32 \\ 50 \end{pmatrix}.
\]

In order for Ax to make sense, the number of entries of x has to be the same as
the number of columns of A: we are using the entries of x as the coefficients of the
columns of A in a linear combination. The resulting vector has the same number
of entries as the number of rows of A, since each column of A has that number of
entries.

If A is an m × n matrix (m rows, n columns), then Ax makes sense when x has


n entries. The product Ax has m entries.

The following properties are easy to verify.

Properties of the Matrix-Vector Product. Let A be an m × n matrix, let u, v be


vectors in Rn , and let c be a scalar. Then:

• A(u + v) = Au + Av

• A(cu) = cAu

Definition. A matrix equation is an equation of the form Ax = b, where A is an


m × n matrix, b is a vector in Rm , and x is a vector whose coefficients x 1 , x 2 , . . . , x n
are unknown.

Matrix Equations and Vector Equations. Let v1 , v2 , . . . , vn and b be vectors in Rm .


Consider the vector equation

x 1 v1 + x 2 v2 + · · · + x n vn = b.

This is equivalent to the matrix equation Ax = b, where
\[
A = \begin{pmatrix} | & | & & | \\ v_1 & v_2 & \cdots & v_n \\ | & | & & | \end{pmatrix}
\qquad\text{and}\qquad
x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}.
\]

Conversely, if A is any m×n matrix, then Ax = b is equivalent to the vector equation

x 1 v1 + x 2 v2 + · · · + x n vn = b,

where v1 , v2 , . . . , vn are the columns of A, and x 1 , x 2 , . . . , x n are the entries of x.



Example. Write the vector equation
\[
2v_1 + 3v_2 - 4v_3 = \begin{pmatrix} 7 \\ 2 \\ 1 \end{pmatrix}
\]
as a matrix equation, where v1 , v2 , v3 are vectors in R3 .

Solution. Let A be the matrix with columns v1 , v2 , v3 , and let x be the vector with entries 2, 3, −4. Then
\[
Ax = \begin{pmatrix} | & | & | \\ v_1 & v_2 & v_3 \\ | & | & | \end{pmatrix}
\begin{pmatrix} 2 \\ 3 \\ -4 \end{pmatrix}
= 2v_1 + 3v_2 - 4v_3,
\]
so the vector equation is equivalent to the matrix equation $Ax = \begin{pmatrix} 7 \\ 2 \\ 1 \end{pmatrix}$.

Four Ways of Writing a Linear System. We now have four equivalent ways of
writing (and thinking about) a system of linear equations:
1. As a system of equations:
\[
\left\{\begin{aligned} 2x_1 + 3x_2 - 2x_3 &= 7 \\ x_1 - x_2 - 3x_3 &= 5 \end{aligned}\right.
\]

2. As an augmented matrix:
\[
\left(\begin{array}{ccc|c} 2 & 3 & -2 & 7 \\ 1 & -1 & -3 & 5 \end{array}\right)
\]

3. As a vector equation ($x_1 v_1 + x_2 v_2 + \cdots + x_n v_n = b$):
\[
x_1\begin{pmatrix} 2 \\ 1 \end{pmatrix} + x_2\begin{pmatrix} 3 \\ -1 \end{pmatrix} + x_3\begin{pmatrix} -2 \\ -3 \end{pmatrix} = \begin{pmatrix} 7 \\ 5 \end{pmatrix}
\]

4. As a matrix equation ($Ax = b$):
\[
\begin{pmatrix} 2 & 3 & -2 \\ 1 & -1 & -3 \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}
= \begin{pmatrix} 7 \\ 5 \end{pmatrix}.
\]

In particular, all four have the same solution set.

We will move back and forth freely between the four ways of writing a linear
system, over and over again, for the rest of the book.

Another Way to Compute Ax The above definition is a useful way of defining the
product of a matrix with a vector when it comes to understanding the relationship
between matrix equations and vector equations. Here we give a definition that is
better-adapted to computations by hand.

Definition. A row vector is a matrix with one row. The product of a row vector of length n and a (column) vector of length n is
\[
\begin{pmatrix} a_1 & a_2 & \cdots & a_n \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}
= a_1 x_1 + a_2 x_2 + \cdots + a_n x_n.
\]

This is a scalar.

Recipe: The row-column rule for matrix-vector multiplication. If A is an m × n matrix with rows r1 , r2 , . . . , rm , and x is a vector in Rn , then
\[
Ax = \begin{pmatrix} \text{---}\; r_1 \;\text{---} \\ \text{---}\; r_2 \;\text{---} \\ \vdots \\ \text{---}\; r_m \;\text{---} \end{pmatrix} x
= \begin{pmatrix} r_1 x \\ r_2 x \\ \vdots \\ r_m x \end{pmatrix}.
\]

Example.
\[
\begin{pmatrix} 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix}
\begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix}
= \begin{pmatrix} \begin{pmatrix} 4 & 5 & 6 \end{pmatrix}\begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix} \\[2ex] \begin{pmatrix} 7 & 8 & 9 \end{pmatrix}\begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix} \end{pmatrix}
= \begin{pmatrix} 4\cdot 1 + 5\cdot 2 + 6\cdot 3 \\ 7\cdot 1 + 8\cdot 2 + 9\cdot 3 \end{pmatrix}
= \begin{pmatrix} 32 \\ 50 \end{pmatrix}.
\]
This is the same answer as before:
\[
\begin{pmatrix} 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix}
\begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix}
= 1\begin{pmatrix} 4 \\ 7 \end{pmatrix} + 2\begin{pmatrix} 5 \\ 8 \end{pmatrix} + 3\begin{pmatrix} 6 \\ 9 \end{pmatrix}
= \begin{pmatrix} 1\cdot 4 + 2\cdot 5 + 3\cdot 6 \\ 1\cdot 7 + 2\cdot 8 + 3\cdot 9 \end{pmatrix}
= \begin{pmatrix} 32 \\ 50 \end{pmatrix}.
\]
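
The two ways of computing Ax naturally give the same answer; here is a quick NumPy check, offered as an aside of ours rather than as part of the text.

    import numpy as np

    A = np.array([[4, 5, 6],
                  [7, 8, 9]])
    x = np.array([1, 2, 3])

    by_columns = 1 * A[:, 0] + 2 * A[:, 1] + 3 * A[:, 2]   # linear combination of the columns
    by_rows = np.array([A[0] @ x, A[1] @ x])               # row-column rule

    print(by_columns, by_rows, A @ x)   # all three are [32 50]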

3.3.2 Spans and Consistency


Let A be a matrix with columns v1 , v2 , . . . , vn :
\[
A = \begin{pmatrix} | & | & & | \\ v_1 & v_2 & \cdots & v_n \\ | & | & & | \end{pmatrix}.
\]
Then
\[
\begin{aligned}
Ax = b \text{ has a solution}
&\iff \text{there exist } x_1, x_2, \dots, x_n \text{ such that } A\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} = b \\
&\iff \text{there exist } x_1, x_2, \dots, x_n \text{ such that } x_1 v_1 + x_2 v_2 + \cdots + x_n v_n = b \\
&\iff b \text{ is a linear combination of } v_1, v_2, \dots, v_n \\
&\iff b \text{ is in the span of the columns of } A.
\end{aligned}
\]

Spans and Consistency. The matrix equation Ax = b has a solution if and


only if b is in the span of the columns of A.

This gives an equivalence between an algebraic statement (Ax = b is consis-


tent), and a geometric statement (b is in the span of the columns of A).


Example (An Inconsistent System). Let $A = \begin{pmatrix} 2 & 1 \\ -1 & 0 \\ 1 & -1 \end{pmatrix}$. Does the equation $Ax = \begin{pmatrix} 0 \\ 2 \\ 2 \end{pmatrix}$ have a solution?

Solution. First we answer the question geometrically. The columns of A are
\[
v_1 = \begin{pmatrix} 2 \\ -1 \\ 1 \end{pmatrix} \qquad\text{and}\qquad v_2 = \begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix},
\]
and the target vector (on the right-hand side of the equation) is $w = \begin{pmatrix} 0 \\ 2 \\ 2 \end{pmatrix}$. The equation Ax = w is consistent if and only if w is contained in the span of the columns of A. So we draw a picture:
[Figure: the plane Span{v1 , v2 } with v1 , v2 , and the vector w lying off the plane.]

It does not appear that w lies in Span{v1 , v2 }, so the equation is inconsistent.

Use this link to view the online demo

The vector w is not contained in Span{v1 , v2 }, so the equation Ax = b is inconsistent.


(Try moving the sliders to solve the equation.)

Let us check our geometric answer by solving the matrix equation using row
reduction. We put the system into an augmented matrix and row reduce:
   
\[
\left(\begin{array}{cc|c} 2 & 1 & 0 \\ -1 & 0 & 2 \\ 1 & -1 & 2 \end{array}\right)
\xrightarrow{\text{RREF}}
\left(\begin{array}{cc|c} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array}\right).
\]
The last equation is 0 = 1, so the system is indeed inconsistent, and the matrix equation
\[
\begin{pmatrix} 2 & 1 \\ -1 & 0 \\ 1 & -1 \end{pmatrix} x = \begin{pmatrix} 0 \\ 2 \\ 2 \end{pmatrix}
\]
has no solution.

Example (A Consistent System). Let $A = \begin{pmatrix} 2 & 1 \\ -1 & 0 \\ 1 & -1 \end{pmatrix}$. Does the equation $Ax = \begin{pmatrix} 1 \\ -1 \\ 2 \end{pmatrix}$ have a solution?

Solution. First we answer the question geometrically. The columns of A are
\[
v_1 = \begin{pmatrix} 2 \\ -1 \\ 1 \end{pmatrix} \qquad\text{and}\qquad v_2 = \begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix},
\]
and the target vector (on the right-hand side of the equation) is $w = \begin{pmatrix} 1 \\ -1 \\ 2 \end{pmatrix}$. The equation Ax = w is consistent if and only if w is contained in the span of the columns of A. So we draw a picture:

[Figure: the plane Span{v1 , v2 } with v1 , v2 , and the vector w lying in the plane.]

It appears that w is indeed contained in the span of the columns of A; in fact, we can see
\[
w = v_1 - v_2 \implies x = \begin{pmatrix} 1 \\ -1 \end{pmatrix}.
\]

Use this link to view the online demo

The vector w is contained in Span{v1 , v2 }, so the equation Ax = b is consistent. (Move


the sliders to solve the equation.)

Let us check our geometric answer by solving the matrix equation using row
reduction. We put the system into an augmented matrix and row reduce:
   
\[
\left(\begin{array}{cc|c} 2 & 1 & 1 \\ -1 & 0 & -1 \\ 1 & -1 & 2 \end{array}\right)
\xrightarrow{\text{RREF}}
\left(\begin{array}{cc|c} 1 & 0 & 1 \\ 0 & 1 & -1 \\ 0 & 0 & 0 \end{array}\right).
\]

This gives us x = 1 and y = −1, which is consistent with the picture:
\[
1\begin{pmatrix} 2 \\ -1 \\ 1 \end{pmatrix} - 1\begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix} = \begin{pmatrix} 1 \\ -1 \\ 2 \end{pmatrix}
\qquad\text{or}\qquad
A\begin{pmatrix} 1 \\ -1 \end{pmatrix} = \begin{pmatrix} 1 \\ -1 \\ 2 \end{pmatrix}.
\]

When Solutions Always Exist Building on this note, we have the following cri-
terion for when Ax = b is consistent for every choice of b.

Theorem. Let A be an m× n (non-augmented) matrix. The following are equivalent:

1. Ax = b has a solution for all b in Rm .

2. The span of the columns of A is all of Rm .

3. A has a pivot in every row.

Proof. The equivalence of 1 and 2 is established by this note as applied to every b in Rm .
Now we show that 1 and 3 are equivalent. (Since we know 1 and 2 are equivalent, this implies 2 and 3 are equivalent as well.) If A has a pivot in every row, then its reduced row echelon form looks like this:
\[
\begin{pmatrix} 1 & 0 & ? & 0 & ? \\ 0 & 1 & ? & 0 & ? \\ 0 & 0 & 0 & 1 & ? \end{pmatrix},
\]
and therefore $\bigl(\,A \mid b\,\bigr)$ reduces to this:
\[
\left(\begin{array}{ccccc|c} 1 & 0 & ? & 0 & ? & ? \\ 0 & 1 & ? & 0 & ? & ? \\ 0 & 0 & 0 & 1 & ? & ? \end{array}\right).
\]
There is no b that makes it inconsistent, so there is always a solution. Conversely, if A does not have a pivot in each row, then its reduced row echelon form looks like this:
\[
\begin{pmatrix} 1 & 0 & ? & 0 & ? \\ 0 & 1 & ? & 0 & ? \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix},
\]
which can give rise to an inconsistent system after augmenting with b:
\[
\left(\begin{array}{ccccc|c} 1 & 0 & ? & 0 & ? & 0 \\ 0 & 1 & ? & 0 & ? & 0 \\ 0 & 0 & 0 & 0 & 0 & 16 \end{array}\right).
\]

Recall that equivalent means that, for any given matrix A, either all of the
conditions of the above theorem are true, or they are all false.

Be careful when reading the statement of the above theorem. The first two
conditions look very much like this note, but they are logically quite different
because of the quantifier “for all b”.
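
As an aside not found in the text, the condition "A has a pivot in every row" is easy to test with SymPy, since rref() reports the pivot columns. The helper below is a hypothetical sketch with a name of our choosing.

    from sympy import Matrix

    def has_pivot_in_every_row(A):
        """True if Ax = b is consistent for every b, i.e. A has a pivot in every row."""
        rref, pivot_cols = A.rref()
        return len(pivot_cols) == A.rows   # the number of pivots equals the number of rows

    A = Matrix([[2, 1], [-1, 0], [1, -1]])   # 3 rows but at most 2 pivots
    print(has_pivot_in_every_row(A))         # False: some b make Ax = b inconsistent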

Interactive: The criteria of the theorem are satisfied.

Use this link to view the online demo

An example where the criteria of the above theorem are satisfied. The violet region is
the span of the columns v1 , v2 , v3 of A, which is the same as the set of all b such that
Ax = b has a solution. If you drag b, the demo will solve Ax = b for you and move
x.

Interactive: The critera of the theorem are not satisfied.

Use this link to view the online demo

An example where the criteria of the above theorem are not satisfied. The violet line
is the span of the columns v1 , v2 , v3 of A, which is the same as the set of all b such that
Ax = b has a solution. Try dragging b in and out of the column span.

3.4 Solution Sets

Objectives

1. Understand the relationship between the solution set of Ax = 0 and the solution set of Ax = b.

2. Understand the difference between the solution set and the column span.

3. Recipes: parametric vector form, write the solution set of a homogeneous system as a span.

4. Pictures: solution set of a homogeneous system, solution set of an inhomogeneous system, the relationship between the two.

5. Vocabulary words: homogeneous/inhomogeneous, trivial solution.

In this section we will study the geometry of the solution set of any matrix
equation Ax = b.

3.4.1 Homogeneous Systems


The equation Ax = b is easier to solve when b = 0, so we start with this case.

Definition. A system of linear equations of the form Ax = 0 is called homogeneous.
A system of linear equations of the form Ax = b for b ≠ 0 is called inhomogeneous.

A homogeneous system is just a system of linear equations where all constants


on the right side of the equals sign are zero.

A homogeneous system always has the solution x = 0. This is called the trivial
solution. Any nonzero solution is called nontrivial.

Observation. The equation Ax = 0 has a nontrivial solution ⇐⇒ there is a free


variable, ⇐⇒ A has a column without a pivot.

Example (No nontrivial solutions). What is the solution set of Ax = 0, where
\[
A = \begin{pmatrix} 1 & 3 & 4 \\ 2 & -1 & 2 \\ 1 & 0 & 1 \end{pmatrix}?
\]
Solution. We form an augmented matrix and row reduce:
\[
\left(\begin{array}{ccc|c} 1 & 3 & 4 & 0 \\ 2 & -1 & 2 & 0 \\ 1 & 0 & 1 & 0 \end{array}\right)
\xrightarrow{\text{RREF}}
\left(\begin{array}{ccc|c} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{array}\right).
\]
The only solution is the trivial solution x = 0.

Observation. In the above example, the last column of the augmented matrix
\[
\left(\begin{array}{ccc|c} 1 & 3 & 4 & 0 \\ 2 & -1 & 2 & 0 \\ 1 & 0 & 1 & 0 \end{array}\right)
\]

will be zero throughout the row reduction process. So it is not really necessary to
write augmented matrices when solving homogeneous systems.

Example (The solution set is a line). What is the solution set of Ax = 0, where
\[
A = \begin{pmatrix} 1 & -3 \\ 2 & -6 \end{pmatrix}?
\]

Solution. We row reduce (without augmenting, as suggested in the above observation):
\[
\begin{pmatrix} 1 & -3 \\ 2 & -6 \end{pmatrix}
\xrightarrow{\text{RREF}}
\begin{pmatrix} 1 & -3 \\ 0 & 0 \end{pmatrix}.
\]
This corresponds to the single equation x 1 − 3x 2 = 0. We can write the parametric form as follows:
\[
\left\{\begin{aligned} x_1 &= 3x_2 \\ x_2 &= x_2. \end{aligned}\right.
\]
We wrote the redundant equation x 2 = x 2 in order to turn the above system into a vector equation:
\[
x = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = x_2 \begin{pmatrix} 3 \\ 1 \end{pmatrix}.
\]
This vector equation is called the parametric vector form of the solution set. Since x 2 is allowed to be anything, this says that the solution set is the set of all scalar multiples of $\begin{pmatrix} 3 \\ 1 \end{pmatrix}$, otherwise known as
\[
\operatorname{Span}\left\{ \begin{pmatrix} 3 \\ 1 \end{pmatrix} \right\}.
\]
We know how to draw the picture of a span of a vector: it is a line. Therefore, this is a picture of the solution set:

[Figure: the solution set, a line through the origin, labeled Ax = 0.]

Use this link to view the online demo

Interactive picture of the solution set of Ax = 0. If you drag x along the line spanned by $\begin{pmatrix} 3 \\ 1 \end{pmatrix}$, the product Ax is always equal to zero. This is what it means for $\operatorname{Span}\left\{\begin{pmatrix} 3 \\ 1 \end{pmatrix}\right\}$ to be the solution set of Ax = 0.

Since there were two variables in the above example, the solution set is a subset
of R2 . Since one of the variables was free, the solution set is a line.

In order to actually find a nontrivial solution to Ax = 0 in the above example, it suffices to substitute any nonzero value for the free variable x 2 . For instance, taking x 2 = 1 gives the nontrivial solution $x = 1 \cdot \begin{pmatrix} 3 \\ 1 \end{pmatrix} = \begin{pmatrix} 3 \\ 1 \end{pmatrix}$.

Example (The solution set is a plane). What is the solution set of Ax = 0, where
\[
A = \begin{pmatrix} 1 & -1 & 2 \\ -2 & 2 & -4 \end{pmatrix}?
\]
Solution. We row reduce (without augmenting, as suggested in the above observation):
\[
\begin{pmatrix} 1 & -1 & 2 \\ -2 & 2 & -4 \end{pmatrix}
\xrightarrow{\text{RREF}}
\begin{pmatrix} 1 & -1 & 2 \\ 0 & 0 & 0 \end{pmatrix}.
\]
This corresponds to the single equation x 1 − x 2 + 2x 3 = 0. We can write the parametric form as follows:
\[
\left\{\begin{aligned} x_1 &= x_2 - 2x_3 \\ x_2 &= x_2 \\ x_3 &= x_3. \end{aligned}\right.
\]
We wrote the redundant equations x 2 = x 2 and x 3 = x 3 in order to turn the above system into a vector equation:
\[
x = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}
= x_2 \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} + x_3 \begin{pmatrix} -2 \\ 0 \\ 1 \end{pmatrix}.
\]
This vector equation is called the parametric vector form of the solution set. Since x 2 and x 3 are allowed to be anything, this says that the solution set is the set of all linear combinations of $\begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}$ and $\begin{pmatrix} -2 \\ 0 \\ 1 \end{pmatrix}$. In other words, the solution set is
\[
\operatorname{Span}\left\{ \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}, \begin{pmatrix} -2 \\ 0 \\ 1 \end{pmatrix} \right\}.
\]
We know how to draw the span of two noncollinear vectors in R3 : it is a plane. Therefore, this is a picture of the solution set:
[Figure: the solution set, a plane through the origin, labeled Ax = 0.]

Use this link to view the online demo

Interactive picture of the solution set of Ax = 0. If you drag x along the violet plane,
the product Ax is always equal to zero. This is what it means for the plane to be the
solution set of Ax = 0.

Since there were three variables in the above example, the solution set is a
subset of R3 . Since two of the variables were free, the solution set is a plane.

Example (Four variables). What is the solution set of Ax = 0, where
\[
A = \begin{pmatrix} 1 & 2 & 0 & -1 \\ -2 & -3 & 4 & 5 \\ 2 & 4 & 0 & -2 \end{pmatrix}?
\]
Solution. We row reduce (without augmenting, as suggested in the above observation):
\[
\begin{pmatrix} 1 & 2 & 0 & -1 \\ -2 & -3 & 4 & 5 \\ 2 & 4 & 0 & -2 \end{pmatrix}
\xrightarrow{\text{RREF}}
\begin{pmatrix} 1 & 0 & -8 & -7 \\ 0 & 1 & 4 & 3 \\ 0 & 0 & 0 & 0 \end{pmatrix}.
\]
This corresponds to the system of equations
\[
\left\{\begin{aligned} x_1 - 8x_3 - 7x_4 &= 0 \\ x_2 + 4x_3 + 3x_4 &= 0. \end{aligned}\right.
\]

We can write the parametric form as follows:
\[
\left\{\begin{aligned} x_1 &= 8x_3 + 7x_4 \\ x_2 &= -4x_3 - 3x_4 \\ x_3 &= x_3 \\ x_4 &= x_4. \end{aligned}\right.
\]
We wrote the redundant equations x 3 = x 3 and x 4 = x 4 in order to turn the above system into a vector equation:
\[
x = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix}
= x_3 \begin{pmatrix} 8 \\ -4 \\ 1 \\ 0 \end{pmatrix} + x_4 \begin{pmatrix} 7 \\ -3 \\ 0 \\ 1 \end{pmatrix}.
\]
This vector equation is called the parametric vector form of the solution set. Since x 3 and x 4 are allowed to be anything, this says that the solution set is the set of all linear combinations of $\begin{pmatrix} 8 \\ -4 \\ 1 \\ 0 \end{pmatrix}$ and $\begin{pmatrix} 7 \\ -3 \\ 0 \\ 1 \end{pmatrix}$. In other words, the solution set is
\[
\operatorname{Span}\left\{ \begin{pmatrix} 8 \\ -4 \\ 1 \\ 0 \end{pmatrix}, \begin{pmatrix} 7 \\ -3 \\ 0 \\ 1 \end{pmatrix} \right\}.
\]

Since there were four variables in the above example, the solution set is a subset
of R4 . Since two of the variables were free, the solution set is a plane.

Recipe: Parametric vector form (homogeneous case). Let A be an m × n


matrix. Suppose that the free variables in the homogeneous equation Ax = 0
are x i , x j , x k , . . .. Then the solutions to Ax = 0 can be written in the form

x = x i vi + x j v j + x k vk + · · ·

for some vectors vi , v j , vk , . . . in Rn , and any scalars x i , x j , x k , . . .. This is called


the parametric vector form of the solution. It is obtained by finding the
parametric form of the solution, including the redundant equations x i = x i ,
x j = x j , x k = x k , . . ., putting these equations in order, and making a vector
equation. 
In this case, the solution set can be written Span{vi , v j , vk , . . .}.

Note that the solution set of a homogeneous equation Ax = 0 is a span!
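
As a computational aside (ours, not the book's), SymPy's nullspace method produces exactly the spanning vectors described in the recipe above, one for each free variable. Here is a sketch for the four-variable example.

    from sympy import Matrix

    A = Matrix([[1, 2, 0, -1],
                [-2, -3, 4, 5],
                [2, 4, 0, -2]])

    # Spanning vectors for the solution set of Ax = 0 (free variables x3 and x4).
    for v in A.nullspace():
        print(v.T)   # Matrix([[8, -4, 1, 0]]) and Matrix([[7, -3, 0, 1]])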



3.4.2 Inhomogeneous Systems


Recall that a matrix equation Ax = b is called inhomogeneous when b ≠ 0.

Example (The solution set is a line). What is the solution set of Ax = b, where
\[
A = \begin{pmatrix} 1 & -3 \\ 2 & -6 \end{pmatrix} \qquad\text{and}\qquad b = \begin{pmatrix} -3 \\ -6 \end{pmatrix}?
\]

(Compare this example.)


Solution. We row reduce the associated augmented matrix:
\[
\left(\begin{array}{cc|c} 1 & -3 & -3 \\ 2 & -6 & -6 \end{array}\right)
\xrightarrow{\text{RREF}}
\left(\begin{array}{cc|c} 1 & -3 & -3 \\ 0 & 0 & 0 \end{array}\right).
\]
This corresponds to the single equation x 1 − 3x 2 = −3. We can write the parametric form as follows:
\[
\left\{\begin{aligned} x_1 &= 3x_2 - 3 \\ x_2 &= x_2 + 0. \end{aligned}\right.
\]
We turn the above system into a vector equation:
\[
x = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = x_2 \begin{pmatrix} 3 \\ 1 \end{pmatrix} + \begin{pmatrix} -3 \\ 0 \end{pmatrix}.
\]
This vector equation is called the parametric vector form of the solution set. Since x 2 is allowed to be anything, this says that the solution set is the set of all scalar multiples of $\begin{pmatrix} 3 \\ 1 \end{pmatrix}$, translated by the vector $p = \begin{pmatrix} -3 \\ 0 \end{pmatrix}$. This is a line which contains p and is parallel to $\operatorname{Span}\left\{\begin{pmatrix} 3 \\ 1 \end{pmatrix}\right\}$: it is a translate of a line. We write the solution set as
\[
\operatorname{Span}\left\{ \begin{pmatrix} 3 \\ 1 \end{pmatrix} \right\} + \begin{pmatrix} -3 \\ 0 \end{pmatrix}.
\]
Here is a picture of the solution set:

[Figure: the parallel lines Ax = b (through p) and Ax = 0 (through the origin).]

Use this link to view the online demo

Interactive picture of the solution set of Ax = b. If you drag x along the violet line, the
product Ax is always equal to b. This is what it means for the line to be the solution
set of Ax = b.

In the above example, the solution set was all vectors of the form
\[
x = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = x_2 \begin{pmatrix} 3 \\ 1 \end{pmatrix} + \begin{pmatrix} -3 \\ 0 \end{pmatrix},
\]
where x 2 is any scalar. The vector $p = \begin{pmatrix} -3 \\ 0 \end{pmatrix}$ is also a solution of Ax = b: take x 2 = 0. We call p a particular solution.

Example (The solution set is a plane). What is the solution set of Ax = b, where
\[
A = \begin{pmatrix} 1 & -1 & 2 \\ -2 & 2 & -4 \end{pmatrix} \qquad\text{and}\qquad b = \begin{pmatrix} 1 \\ -2 \end{pmatrix}?
\]

(Compare this example.)


Solution. We row reduce the associated augmented matrix:
\[
\left(\begin{array}{ccc|c} 1 & -1 & 2 & 1 \\ -2 & 2 & -4 & -2 \end{array}\right)
\xrightarrow{\text{RREF}}
\left(\begin{array}{ccc|c} 1 & -1 & 2 & 1 \\ 0 & 0 & 0 & 0 \end{array}\right).
\]
This corresponds to the single equation x 1 − x 2 + 2x 3 = 1. We can write the parametric form as follows:
\[
\left\{\begin{aligned} x_1 &= x_2 - 2x_3 + 1 \\ x_2 &= x_2 + 0 \\ x_3 &= x_3 + 0. \end{aligned}\right.
\]
We turn the above system into a vector equation:
\[
x = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}
= x_2 \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} + x_3 \begin{pmatrix} -2 \\ 0 \\ 1 \end{pmatrix} + \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}.
\]
This vector equation is called the parametric vector form of the solution set. Since x 2 and x 3 are allowed to be anything, this says that the solution set is the set of all linear combinations of $\begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}$ and $\begin{pmatrix} -2 \\ 0 \\ 1 \end{pmatrix}$, translated by the vector $p = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}$.
This is a plane which contains p and is parallel to $\operatorname{Span}\left\{ \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}, \begin{pmatrix} -2 \\ 0 \\ 1 \end{pmatrix} \right\}$: it is a translate of a plane. We write the solution set as
\[
\operatorname{Span}\left\{ \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}, \begin{pmatrix} -2 \\ 0 \\ 1 \end{pmatrix} \right\} + \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}.
\]

Here is a picture of the solution set:

[Figure: the solution set, a plane not through the origin, labeled Ax = b.]

Use this link to view the online demo

Interactive picture of the solution set of Ax = b. If you drag x along the violet plane,
the product Ax is always equal to b. This is what it means for the plane to be the
solution set of Ax = b.

In the above example, the solution set was all vectors of the form
\[
x = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}
= x_2 \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} + x_3 \begin{pmatrix} -2 \\ 0 \\ 1 \end{pmatrix} + \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix},
\]
where x 2 and x 3 are any scalars. In this case, a particular solution is $p = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}$.

In the previous example and the example before it, the parametric vector form
of the solution set of Ax = b was exactly the same as the parametric vector form
of the solution set of Ax = 0 (from this example and this example, respectively),
plus a particular solution.

Key Observation. The set of solutions to Ax = b, if it is nonempty, is obtained


by taking one particular solution p of Ax = b, and adding all solutions of
Ax = 0.
In particular, the solution set of Ax = b is either empty, or it is a translate of a
span.

The parametric vector form of the solutions of Ax = b is just the parametric


vector form of the solutions of Ax = 0, plus a particular solution p.
It is not hard to see why the key observation is true. If p is a particular solution,
then Ap = b, and if x is a solution to the homogeneous equation Ax = 0, then
A(x + p) = Ax + Ap = 0 + b = b,
so x + p is another solution of Ax = b.
Remark. Row reducing to find the parametric vector form will give you one par-
ticular solution p of Ax = b. But the key observation is true for any solution p.
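
As a quick numerical illustration (ours, not the book's), one can check the key observation on the plane example above: a particular solution plus any homogeneous solution is again a solution.

    import numpy as np

    A = np.array([[1, -1, 2],
                  [-2, 2, -4]])
    b = np.array([1, -2])

    p = np.array([1, 0, 0])   # a particular solution: Ap = b
    x_hom = 3 * np.array([1, 1, 0]) - 2 * np.array([-2, 0, 1])   # a solution of Ax = 0

    print(A @ p, A @ x_hom)                  # [ 1 -2] and [0 0]
    print(np.allclose(A @ (x_hom + p), b))   # True: x_hom + p also solves Ax = b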
Example (The solution set is a point). What is the solution set of Ax = b, where
\[
A = \begin{pmatrix} 1 & 3 & 4 \\ 2 & -1 & 2 \\ 1 & 0 & 1 \end{pmatrix} \qquad\text{and}\qquad b = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}?
\]
Solution. We form an augmented matrix and row reduce:
\[
\left(\begin{array}{ccc|c} 1 & 3 & 4 & 0 \\ 2 & -1 & 2 & 1 \\ 1 & 0 & 1 & 0 \end{array}\right)
\xrightarrow{\text{RREF}}
\left(\begin{array}{ccc|c} 1 & 0 & 0 & -1 \\ 0 & 1 & 0 & -1 \\ 0 & 0 & 1 & 1 \end{array}\right).
\]
The only solution is $p = \begin{pmatrix} -1 \\ -1 \\ 1 \end{pmatrix}$.
According to the key observation, this is supposed to be a translate of a span
by p. Indeed, we saw in the first example that the only solution of Ax = 0 is the
trivial solution, i.e., that the solution set is the one-point set {0}. The solution set
of the inhomogeneous equation Ax = b is
 
\[
\{0\} + \begin{pmatrix} -1 \\ -1 \\ 1 \end{pmatrix}.
\]

Note that {0} = Span{0}, so the homogeneous solution set is a span.


[Figure: the one-point solution set of Ax = b, a translate by p of the one-point solution set {0} of Ax = 0.]

See the interactive figures in the next subsection for visualizations of the key
observation.

Dimension of the solution set. As in the first subsection, when there is one free variable in a consistent matrix equation, the solution set is a line; this line does not pass through the origin when the system is inhomogeneous. When there are two free variables, the solution set is a plane, and so on.
We will develop a rigorous definition of dimension in Section 3.7, but for now
it is important to note that the “dimension” of the solution set of a consistent
system is equal to the number of free variables.

3.4.3 Solution Sets and Column Spans


To every m × n matrix A, we have now associated two completely different geo-
metric objects, both described using spans.

• The solution set: for fixed b, this is the set of all x such that Ax = b.

◦ This is a span if b = 0, and it is a translate of a span (or it is empty) if


b 6= 0.
◦ It is a subset of Rn .
◦ It is computed by solving a system of equations: usually by row reduc-
ing and finding the parametric vector form.

• The span of the columns of A: this is the set of all b such that Ax = b is
consistent.

◦ This is always a span.


◦ It is a subset of Rm .
◦ It is not computed by solving a system of equations: row reduction
plays no role.

Do not confuse these two geometric constructions!

Interactive: Solution set and span of the columns (1).

Use this link to view the online demo

Left: the solution set of Ax = b is in violet. Right: the span of the columns of A is
in violet. As you move x, you change b, so the solution set changes—but all solution
sets are parallel planes. If you move b within the span of the columns, the solution
set also changes, and the demo solves the equation to find a particular solution x. If
you move b outside of the span of the columns, the system becomes inconsistent, and
the solution set disappears.

Interactive: Solution set and span of the columns (2).

Use this link to view the online demo

Left: the solution set of Ax = b is in violet. Right: the span of the columns of A is
in violet. As you move x, you change b, so the solution set changes—but all solution
sets are parallel planes. If you move b within the span of the columns, the solution
set also changes, and the demo solves the equation to find a particular solution x. If
you move b outside of the span of the columns, the system becomes inconsistent, and
the solution set disappears.

3.5 Linear Independence

Objectives

1. Understand the concept of linear independence.

2. Learn several criteria for linear independence.

3. Understand the relationship between linear independence and pivot columns / free variables.

4. Recipe: test if a set of vectors is linearly independent / find an equation of linear dependence.

5. Picture: whether a set of vectors in R2 or R3 is linearly independent or not.

6. Vocabulary words: linear dependence relation / equation of linear dependence.

7. Essential vocabulary words: linearly independent, linearly dependent.

Sometimes the span of a set of vectors is “smaller” than you expect from the
number of vectors, as in the picture below. This means that (at least) one of the
vectors is redundant: it can be removed without affecting the span. In the present
section, we formalize this idea in the notion of linear (in)dependence.

[Figure: two sets of vectors whose spans, Span{v, w} and Span{u, v, w}, are smaller than expected.]

Pictures of sets of vectors that are linearly dependent. Note that in each case, one
vector is in the span of the others—so it doesn’t make the span bigger.

3.5.1 The Definition of Linear Independence


Essential Definition. A set of vectors {v1 , v2 , . . . , vp } is linearly independent if
the vector equation
x 1 v1 + x 2 v2 + · · · + x p vp = 0
has only the trivial solution x 1 = x 2 = · · · = x p = 0. The set {v1 , v2 , . . . , vp } is
linearly dependent otherwise.
In other words, {v1 , v2 , . . . , vp } is linearly dependent if there exist numbers
x 1 , x 2 , . . . , x p , not all equal to zero, such that

x 1 v1 + x 2 v2 + · · · + x p vp = 0.

This is called a linear dependence relation or equation of linear dependence.

Note that linear (in)dependence is a notion that applies to a collection of vec-


tors, not to a single vector, or to one vector in the presence of some others.

Example (Checking linear dependence). Is the set
\[
\left\{ \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}, \begin{pmatrix} 1 \\ -1 \\ 2 \end{pmatrix}, \begin{pmatrix} 3 \\ 1 \\ 4 \end{pmatrix} \right\}
\]
linearly independent?
Solution. Equivalently, we are asking if the homogeneous vector equation
\[
x\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} + y\begin{pmatrix} 1 \\ -1 \\ 2 \end{pmatrix} + z\begin{pmatrix} 3 \\ 1 \\ 4 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}
\]
has a nontrivial solution. We solve this by forming a matrix and row reducing (we do not augment because of this observation in Section 3.4):
\[
\begin{pmatrix} 1 & 1 & 3 \\ 1 & -1 & 1 \\ 1 & 2 & 4 \end{pmatrix}
\xrightarrow{\text{row reduce}}
\begin{pmatrix} 1 & 0 & 2 \\ 0 & 1 & 1 \\ 0 & 0 & 0 \end{pmatrix}
\]
This says x = −2z and y = −z. So there exist nontrivial solutions: for instance, taking z = 1 gives this equation of linear dependence:
\[
-2\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} - \begin{pmatrix} 1 \\ -1 \\ 2 \end{pmatrix} + \begin{pmatrix} 3 \\ 1 \\ 4 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}.
\]

Use this link to view the online demo

Move the sliders to solve the homogeneous vector equation in this example. Do you see
why the vectors need to be coplanar in order for there to exist a nontrivial solution?

Example (Checking linear independence). Is the set
\[
\left\{ \begin{pmatrix} 1 \\ 1 \\ -2 \end{pmatrix}, \begin{pmatrix} 1 \\ -1 \\ 2 \end{pmatrix}, \begin{pmatrix} 3 \\ 1 \\ 4 \end{pmatrix} \right\}
\]
linearly independent?
Solution. Equivalently, we are asking if the homogeneous vector equation
\[
x\begin{pmatrix} 1 \\ 1 \\ -2 \end{pmatrix} + y\begin{pmatrix} 1 \\ -1 \\ 2 \end{pmatrix} + z\begin{pmatrix} 3 \\ 1 \\ 4 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}
\]
has a nontrivial solution. We solve this by forming a matrix and row reducing (we do not augment because of this observation in Section 3.4):
\[
\begin{pmatrix} 1 & 1 & 3 \\ 1 & -1 & 1 \\ -2 & 2 & 4 \end{pmatrix}
\xrightarrow{\text{row reduce}}
\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\]

This says x = y = z = 0, i.e., the only solution is the trivial solution. We conclude
that the set is linearly independent.

Use this link to view the online demo

Move the sliders to solve the homogeneous vector equation in this example. Do you
see why the vectors would need to be coplanar in order for there to exist a nontrivial
solution?

The above examples lead to the following recipe.

Recipe: Checking linear (in)dependence. A set of vectors {v1 , v2 , . . . , vp } is linearly independent if and only if the vector equation
\[
x_1 v_1 + x_2 v_2 + \cdots + x_p v_p = 0
\]
has only the trivial solution, if and only if the matrix equation Ax = 0 has only the trivial solution, where A is the matrix with columns v1 , v2 , . . . , vp :
\[
A = \begin{pmatrix} | & | & & | \\ v_1 & v_2 & \cdots & v_p \\ | & | & & | \end{pmatrix}.
\]
This is true if and only if A has a pivot in every column.

See this observation in Section 3.4. To rephrase:

Linear independence and matrix columns.

• The vectors {v1 , v2 , . . . , vp } are linearly independent if and only if the matrix A
with columns v1 , v2 , . . . , vp has a pivot in every column, if and only if Ax = 0
has only the trivial solution.

• Solving the matrix equation Ax = 0 will either verify that the columns v1 , v2 , . . . , vp
are linearly independent, or will produce a linear dependence relation by sub-
stituting any nonzero values for the free variables.
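
As an aside (not part of the text), the pivot-in-every-column test is easy to automate. The helper below is a hypothetical sketch using SymPy, applied to the two examples above.

    from sympy import Matrix

    def linearly_independent(*vectors):
        """True if the matrix with the given columns has a pivot in every column."""
        A = Matrix.hstack(*[Matrix(v) for v in vectors])
        rref, pivot_cols = A.rref()
        return len(pivot_cols) == A.cols

    print(linearly_independent([1, 1, 1], [1, -1, 2], [3, 1, 4]))    # False: linearly dependent
    print(linearly_independent([1, 1, -2], [1, -1, 2], [3, 1, 4]))   # True: linearly independent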

Suppose that A has more columns than rows. Then A cannot have a pivot in
every column (it has at most one pivot per row), so its columns are automatically
linearly dependent.

A wide matrix has linearly dependent columns.

For example, four vectors in R3 are automatically linearly dependent. Note


that a tall matrix may or may not have linearly independent columns.

Facts about linear independence.

1. Two vectors are linearly dependent if and only if they are collinear, i.e., one is
a scalar multiple of the other.

2. Any set containing the zero vector is linearly dependent.

3. If a subset of {v1 , v2 , . . . , vp } is linearly dependent, then {v1 , v2 , . . . , vp } is lin-


early dependent as well.

Proof.

1. If v1 = cv2 then v1 − cv2 = 0, so {v1 , v2 } is linearly dependent. In the other direction, if x 1 v1 + x 2 v2 = 0 with x 1 ≠ 0 (say), then $v_1 = -\frac{x_2}{x_1}v_2$.

2. It is easy to produce a linear dependence relation if one vector is the zero


vector: for instance, if v1 = 0 then

1 · v1 + 0 · v2 + · · · + 0 · vp = 0.

3. After reordering, we may suppose that {v1 , v2 , . . . , vr } is linearly dependent,


with r < p. This means that there is an equation of linear dependence

x 1 v1 + x 2 v2 + · · · + x r vr = 0,

with at least one of x 1 , x 2 , . . . , x r nonzero. This is also an equation of lin-


ear dependence among {v1 , v2 , . . . , vp }, since we can take the coefficients of
vr+1 , . . . , vp to all be zero.

With regard to the first fact, note that the zero vector is a multiple of any vector,
so it is collinear with any other vector. Hence facts 1 and 2 are consistent with each
other.

3.5.2 Criteria for Linear Independence


In this subsection we give several criteria for a set of vectors to be linearly (in)dependent.
Keep in mind, however, that the actual definition is above.
Theorem. A set of vectors {v1 , v2 , . . . , vp } is linearly dependent if and only if one of
the vectors is in the span of the other ones.
Proof. Suppose, for instance, that v3 is in Span{v1 , v2 , v4 }, so we have an equation like
\[
v_3 = 2v_1 - \tfrac{1}{2}v_2 + 6v_4.
\]
We can subtract v3 from both sides of the equation to get
\[
0 = 2v_1 - \tfrac{1}{2}v_2 - v_3 + 6v_4.
\]
This is a linear dependence relation.
In the other direction, if we have a linear dependence relation like
\[
0 = 2v_1 - \tfrac{1}{2}v_2 + v_3 - 6v_4,
\]
then we can move any nonzero term to the left side of the equation and divide by its coefficient:
\[
v_1 = \tfrac{1}{2}\left( \tfrac{1}{2}v_2 - v_3 + 6v_4 \right).
\]
This shows that v1 is in Span{v2 , v3 , v4 }.
We leave it to the reader to generalize this proof for any set of vectors.

Warning. In a linearly dependent set {v1 , v2 , . . . , vp }, it is not generally true


that any vector v j is in the span of the others, only that at least one of them is.
See this figure below.

Theorem. A set of vectors {v1 , v2 , . . . , vp } is linearly dependent if and only if we can


remove one of the vectors without shrinking the span.
Proof. If {v1 , v2 , . . . , vp } is linearly dependent, then we know from the above theorem that one vector is in the span of the others; for instance,
\[
v_3 = 2v_1 - \tfrac{1}{2}v_2 + 6v_4.
\]
In this case, any linear combination of v1 , v2 , v3 , v4 is already a linear combination of v1 , v2 , v4 :
\[
\begin{aligned}
x_1 v_1 + x_2 v_2 + x_3 v_3 + x_4 v_4
&= x_1 v_1 + x_2 v_2 + x_3\left( 2v_1 - \tfrac{1}{2}v_2 + 6v_4 \right) + x_4 v_4 \\
&= (x_1 + 2x_3)v_1 + \left( x_2 - \tfrac{1}{2}x_3 \right)v_2 + (x_4 + 6x_3)v_4.
\end{aligned}
\]

Therefore, Span{v1 , v2 , v3 , v4 } is contained in Span{v1 , v2 , v4 }. Any linear combina-


tion of v1 , v2 , v4 is also a linear combination of v1 , v2 , v3 , v4 (with the v3 -coefficient
equal to zero), so Span{v1 , v2 , v4 } is also contained in Span{v1 , v2 , v3 , v4 }, and thus
they are equal.
In the other direction, suppose that we can remove v3 without shrinking the
span of {v1 , v2 , v3 , v4 }: in other words, Span{v1 , v2 , v3 , v4 } = Span{v1 , v2 , v4 }. Since
v3 is in Span{v1 , v2 , v3 , v4 } (the v3 -coefficient is 1 and the rest are 0), this means
that v3 is in Span{v1 , v2 , v4 }, so the vectors are linearly dependent by the previous
theorem.
We leave it to the reader to generalize this proof for any set of vectors.

The previous theorem makes precise in what sense a set of linearly dependent
vectors is redundant.

Increasing Span Criterion. A set of vectors {v1 , v2 , . . . , vp } is linearly independent


if and only if, for every j, the vector v j is not in Span{v1 , v2 , . . . , v j−1 }.

Proof. It is equivalent to show that {v1 , v2 , . . . , vp } is linearly dependent if and only


if v j is in Span{v1 , v2 , . . . , v j−1 } for some j. The “if” implication is an immediate
consequence of the previous theorem. Suppose then that {v1 , v2 , . . . , vp } is linearly
dependent. This means that some v j is in the span of the others. Choose the largest
such j. We claim that this v j is in Span{v1 , v2 , . . . , v j−1 }. If not, then

v j = x 1 v1 + x 2 v2 + · · · + x j−1 v j−1 + x j+1 v j+1 + · · · + x p vp

with not all of x j+1 , . . . , x p equal to zero. Suppose for simplicity that x p ≠ 0. Then we can rearrange:
\[
v_p = -\frac{1}{x_p}\bigl( x_1 v_1 + x_2 v_2 + \cdots + x_{j-1} v_{j-1} - v_j + x_{j+1} v_{j+1} + \cdots + x_{p-1} v_{p-1} \bigr).
\]

This says that vp is in the span of {v1 , v2 , . . . , vp−1 }, which contradicts our assump-
tion that v j is the last vector in the span of the others.

If you make a set of vectors by adding one vector at a time, and if the span got
bigger every time you added a vector, then your set is linearly independent.

3.5.3 Pictures of Linear Independence


A set containing one vector {v} is linearly independent when v ≠ 0, since xv = 0 implies x = 0.

[Figure: a nonzero vector v and the line Span{v}.]

A set of two noncollinear vectors {v, w} is linearly independent:

• Neither is in the span of the other, so we can apply the first criterion.

• The span got bigger when we added w, so we can apply the increasing span
criterion.

[Figure: two noncollinear vectors v and w with the lines Span{v} and Span{w}.]

The set of three vectors {v, w, u} below is linearly dependent:

• u is in Span{v, w}, so we can apply the first criterion.

• The span did not increase when we added u, so we can apply the increasing
span criterion.

In the picture below, note that v is in Span{u, w}, and w is in Span{u, v}, so we
can remove any of the three vectors without shrinking the span.

[Figure: three vectors v, w, u in the plane Span{v, w}, with the lines Span{v} and Span{w}.]

Two collinear vectors are always linearly dependent:

• w is in Span{v}, so we can apply the first criterion.

• We can remove w without shrinking the span, so we can apply the second
criterion.

• The span did not increase when we added w, so we can apply the increasing
span criterion.

[Figure: two collinear vectors v and w on the line Span{v}.]

These three vectors {v, w, u} are linearly dependent: indeed, {v, w} is already
linearly dependent, so we can use the third fact.

[Figure: collinear vectors v, w on the line Span{v}, together with a third vector u.]

Interactive: Linear independence of two vectors in R2 .



Use this link to view the online demo

Move the vector heads and the demo will tell you if they are linearly independent and
show you their span.

Interactive: Linear dependence of three vectors in R2 .

Use this link to view the online demo

Move the vector heads and the demo will tell you that they are linearly dependent and
show you their span.

The two vectors {v, w} below are linearly independent because they are not
collinear.

[Figure: two noncollinear vectors v, w in R3 with the lines Span{v} and Span{w}.]

The three vectors {v, w, u} below are linearly independent: the span got bigger
when we added w, then again when we added u, so we can apply the increasing
span criterion.

[Figure: three linearly independent vectors v, w, u in R3 ; u does not lie in the plane Span{v, w}.]

The three coplanar vectors {v, w, u} below are linearly dependent:

• u is in Span{v, w}, so we can apply the first criterion.

• We can remove u without shrinking the span, so we can apply the second
criterion.

• The span did not increase when we added u, so we can apply the increasing
span criterion.

[Figure: three coplanar vectors v, w, u, all lying in the plane Span{v, w}.]

Note that three vectors are linearly dependent if and only if they are coplanar.
Indeed, {v, w, u} is linearly dependent if and only if one vector is in the span of the
other two, which is a plane (or a line) (or {0}).

The four vectors {v, w, u, x} below are linearly dependent: they are the columns
of a wide matrix. Note however that u is not contained in Span{v, w, x}. See this
warning.

[Figure: four vectors v, w, u, x in R3 ; v, w, x lie in the plane Span{v, w} while u does not.]

The vectors {v, w, u, x} are linearly dependent, but u is not contained in Span{v, w, x}.

Interactive: Linear independence of two vectors in R3 .

Use this link to view the online demo

Move the vector heads and the demo will tell you if they are linearly independent and
show you their span.

Interactive: Linear independence of three vectors in R3 .

Use this link to view the online demo

Move the vector heads and the demo will tell you if they are linearly independent and
show you their span.

3.5.4 Linear Dependence and Free Variables


In light of this theorem and this criterion, it is natural to ask which columns of a
matrix are “redundant”, i.e., which we can remove without affecting the column
span.

Theorem. Let v1 , v2 , . . . , vp be vectors in Rn , and consider the matrix
\[
A = \begin{pmatrix} | & | & & | \\ v_1 & v_2 & \cdots & v_p \\ | & | & & | \end{pmatrix}.
\]

Then we can delete the columns of A without pivots (the columns corresponding to
the free variables), without changing Span{v1 , v2 , . . . , vp }.
The pivot columns are linearly independent, so we cannot delete any more columns.
Proof. If the matrix is in reduced row echelon form:
\[
A = \begin{pmatrix} 1 & 0 & 2 & 0 \\ 0 & 1 & 3 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}
\]
then the column without a pivot is visibly in the span of the pivot columns:
\[
\begin{pmatrix} 2 \\ 3 \\ 0 \end{pmatrix}
= 2\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} + 3\begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} + 0\begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix},
\]
and the pivot columns are linearly independent:
\[
\begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}
= x_1\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} + x_2\begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} + x_4\begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}
= \begin{pmatrix} x_1 \\ x_2 \\ x_4 \end{pmatrix}
\implies x_1 = x_2 = x_4 = 0.
\]
If the matrix is not in reduced row echelon form, then we row reduce:
\[
A = \begin{pmatrix} 1 & 7 & 23 & 3 \\ 2 & 4 & 16 & 0 \\ -1 & -2 & -8 & 4 \end{pmatrix}
\xrightarrow{\text{RREF}}
\begin{pmatrix} 1 & 0 & 2 & 0 \\ 0 & 1 & 3 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}.
\]
The following two vector equations have the same solution set, as they come from row-equivalent matrices:
\[
\begin{aligned}
x_1\begin{pmatrix} 1 \\ 2 \\ -1 \end{pmatrix} + x_2\begin{pmatrix} 7 \\ 4 \\ -2 \end{pmatrix} + x_3\begin{pmatrix} 23 \\ 16 \\ -8 \end{pmatrix} + x_4\begin{pmatrix} 3 \\ 0 \\ 4 \end{pmatrix} &= 0 \\
x_1\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} + x_2\begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} + x_3\begin{pmatrix} 2 \\ 3 \\ 0 \end{pmatrix} + x_4\begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} &= 0.
\end{aligned}
\]
We conclude that
\[
\begin{pmatrix} 23 \\ 16 \\ -8 \end{pmatrix}
= 2\begin{pmatrix} 1 \\ 2 \\ -1 \end{pmatrix} + 3\begin{pmatrix} 7 \\ 4 \\ -2 \end{pmatrix} + 0\begin{pmatrix} 3 \\ 0 \\ 4 \end{pmatrix}
\]
and that
\[
x_1\begin{pmatrix} 1 \\ 2 \\ -1 \end{pmatrix} + x_2\begin{pmatrix} 7 \\ 4 \\ -2 \end{pmatrix} + x_4\begin{pmatrix} 3 \\ 0 \\ 4 \end{pmatrix} = 0
\]
has only the trivial solution.

Note that it is necessary to row reduce A to find which are its pivot columns.
However, the span of the columns of the row reduced matrix is generally not equal
to the span of the columns of A. See theorem in Section 3.7 for a restatement of
the above theorem.

Example. The matrix
\[
A = \begin{pmatrix} 1 & 2 & 0 & -1 \\ -2 & -3 & 4 & 5 \\ 2 & 4 & 0 & -2 \end{pmatrix}
\]
has reduced row echelon form
\[
\begin{pmatrix} 1 & 0 & -8 & -7 \\ 0 & 1 & 4 & 3 \\ 0 & 0 & 0 & 0 \end{pmatrix}.
\]
Therefore, the first two columns of A are the pivot columns, so we can delete the others without changing the span:
\[
\operatorname{Span}\left\{ \begin{pmatrix} 1 \\ -2 \\ 2 \end{pmatrix}, \begin{pmatrix} 2 \\ -3 \\ 4 \end{pmatrix} \right\}
= \operatorname{Span}\left\{ \begin{pmatrix} 1 \\ -2 \\ 2 \end{pmatrix}, \begin{pmatrix} 2 \\ -3 \\ 4 \end{pmatrix}, \begin{pmatrix} 0 \\ 4 \\ 0 \end{pmatrix}, \begin{pmatrix} -1 \\ 5 \\ -2 \end{pmatrix} \right\}.
\]

Moreover, the first two columns are linearly independent.
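
As an aside of ours (not the book's), SymPy reports the pivot columns directly, and its columnspace method returns exactly those columns of A.

    from sympy import Matrix

    A = Matrix([[1, 2, 0, -1],
                [-2, -3, 4, 5],
                [2, 4, 0, -2]])

    rref, pivot_cols = A.rref()
    print(pivot_cols)        # (0, 1): the first two columns are the pivot columns
    print(A.columnspace())   # the pivot columns (1, -2, 2) and (2, -3, 4)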

Pivot Columns and Dimension. Let d be the number of pivot columns in the matrix
\[
A = \begin{pmatrix} | & | & & | \\ v_1 & v_2 & \cdots & v_p \\ | & | & & | \end{pmatrix}.
\]

• If d = 1 then Span{v1 , v2 , . . . , vp } is a line.

• If d = 2 then Span{v1 , v2 , . . . , vp } is a plane.

• If d = 3 then Span{v1 , v2 , . . . , vp } is a 3-space.

• Et cetera.

3.6 Subspaces

Objectives

1. Learn the definition of a subspace.

2. Learn to determine whether or not a subset is a subspace.

3. Learn the most important examples of subspaces.

4. Recipe: compute a spanning set for a null space.

5. Picture: whether a subset of R2 or R3 is a subspace or not.

6. Vocabulary words: subspace, column space, null space.

In this section we discuss subspaces of Rn . A subspace turns out to be exactly the


same thing as a span, except we don’t have a particular set of spanning vectors in
mind. This change in perspective is quite useful, as it is easy to produce subspaces
which are not obviously spans. For example, the solution set of the equation x +
3 y + z = 0 is a span because the equation is homogeneous, but we would have to
compute the parametric vector form to find a spanning set of vectors.

[Figure: the plane x + 3y + z = 0 in R3 .]

3.6.1 Subspaces: Definition and Examples


Definition. A subset of Rn is any collection of points of Rn .

For instance, the unit circle
\[
C = \bigl\{\, (x, y) \text{ in } \mathbb{R}^2 \;\big|\; x^2 + y^2 = 1 \,\bigr\}
\]

is a subset of R2 .
Above we expressed C in set builder notation.

Definition. A subspace of Rn is a subset V of Rn satisfying:



1. Nonemptiness: The zero vector is in V .

2. Closure under addition: If u and v are in V , then u + v is also in V .

3. Closure under scalar multiplication: If v is in V and c is in R, then cv is


also in V .

As a consequence of these properties, we see:

• If v is a vector in V , then all scalar multiples of v are in V by the third


property. In other words, the line through v is also contained in V .

• If u, v are vectors in V and x, y are scalars, then xu, y v are also in V by the
third property, so xu + y v is in V by the second property. Therefore, all of
Span{u, v} is contained in V

• Similarly, if v1 , v2 , . . . , vn are all in V , then Span{v1 , v2 , . . . , vn } is contained in


V . In other words, a subspace contains the span of any vectors in it.

If you choose enough vectors, then eventually their span will fill up V , so we
already see that a subspace is a span.

Remark. Suppose that V is a nonempty subset of Rn that satisfies properties 2


and 3. Let v be any vector in V . Then 0v = 0 is in V by the third property, so V
automatically satisfies property 1. It follows that the only subset of Rn that satisfies
properties 2 and 3 but not property 1 is the empty subset {}. This is why we call
the first property “nonemptiness”.

Example. The set Rn is a subspace of itself: indeed, it contains zero, and is closed
under addition and scalar multiplication.

Example. The set {0} containing only the zero vector is a subspace of Rn : it con-
tains zero, and if you add zero to itself or multiply it by a scalar, you always get
zero.

Example (A line through the origin). A line L through the origin is a subspace.

Example (A plane through the origin). A plane P through the origin is a subspace.

Non-example (A line not containing the origin). A line L (or any other subset)
that does not contain the origin is not a subspace. It fails the first defining property:
every subspace contains the origin by definition.

Non-example (A circle). The unit circle C is not a subspace. It fails all three
defining properties: it does not contain the origin, it is not closed under addition,
and it is not closed under scalar multiplication. In the picture, one red vector is
the sum of the two black vectors (which are contained in C), and the other is a
scalar multiple of a black vector.

Non-example (The first quadrant). The first quadrant in R2 is not a subspace. It


contains the origin and is closed under addition, but it is not closed under scalar
multiplication (by negative numbers).

Non-example (A line union a plane). The union of a line and a plane in R3 is not
a subspace. It contains the origin and is closed under scalar multiplication, but it
is not closed under addition: the sum of a vector on the line and a vector on the
plane is not contained in the line or in the plane.

Subsets versus Subspaces. A subset of Rn is any collection of vectors whatsoever. For instance, the unit circle
\[
C = \bigl\{\, (x, y) \text{ in } \mathbb{R}^2 \;\big|\; x^2 + y^2 = 1 \,\bigr\}
\]

is a subset of R2 , but it is not a subspace. In fact, all of the non-examples above


are still subsets of Rn . A subspace is a subset that happens to satisfy the three
additional defining properties.

In order to verify that a subset of Rn is in fact a subspace, one has to check the
three defining properties. That is, unless the subset has already been verified to
be a subspace: see this important note below.

Example (Verifying that a subset is a subspace). Let
\[
V = \left\{ \begin{pmatrix} a \\ b \end{pmatrix} \text{ in } \mathbb{R}^2 \;\middle|\; 2a = 3b \right\}.
\]

Verify that V is a subspace.



Solution. First we point out that the condition “2a = 3b” defines whether or not a vector is in V : that is, to say $\begin{pmatrix} a \\ b \end{pmatrix}$ is in V means that 2a = 3b. In other words, a vector is in V if twice its first coordinate equals three times its second coordinate.
Let us check the first property. The subset V does contain the zero vector $\begin{pmatrix} 0 \\ 0 \end{pmatrix}$, because 2 · 0 = 3 · 0.
Next we check the second property. To show that V is closed under addition, we have to check that for any vectors $u = \begin{pmatrix} a \\ b \end{pmatrix}$ and $v = \begin{pmatrix} c \\ d \end{pmatrix}$ in V , the sum u + v is in V . Since we cannot assume anything else about u and v, we must treat them as unknowns.
We have
\[
\begin{pmatrix} a \\ b \end{pmatrix} + \begin{pmatrix} c \\ d \end{pmatrix} = \begin{pmatrix} a+c \\ b+d \end{pmatrix}.
\]
To say that $\begin{pmatrix} a+c \\ b+d \end{pmatrix}$ is contained in V means that 2(a + c) = 3(b + d), or 2a + 2c = 3b + 3d. The one thing we are allowed to assume about u and v is that 2a = 3b and 2c = 3d, so we see that u + v is indeed contained in V .
Next we check the third property. To show that V is closed under scalar multiplication, we have to check that for any vector $v = \begin{pmatrix} a \\ b \end{pmatrix}$ in V and any scalar c in R, the product cv is in V . Again, we must treat v and c as unknowns. We have
\[
c\begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} ca \\ cb \end{pmatrix}.
\]
To say that $\begin{pmatrix} ca \\ cb \end{pmatrix}$ is contained in V means that 2(ca) = 3(cb), i.e., that c · 2a = c · 3b. The one thing we are allowed to assume about v is that 2a = 3b, so cv is indeed contained in V .
Since V satisfies all three defining properties, it is a subspace. In fact, it is the
line through the origin with slope 2/3.

Example (Showing that a subset is not a subspace). Let
\[
V = \left\{ \begin{pmatrix} a \\ b \end{pmatrix} \text{ in } \mathbb{R}^2 \;\middle|\; ab = 0 \right\}.
\]

Is V a subspace?
Solution. First we point out that the condition “ab = 0” defines whether or not a vector is in V : that is, to say $\begin{pmatrix} a \\ b \end{pmatrix}$ is in V means that ab = 0. In other words, a vector is in V if the product of its coordinates is zero, i.e., if one (or both) of its coordinates are zero.
Let us check the first property. The subset V does contain the zero vector $\begin{pmatrix} 0 \\ 0 \end{pmatrix}$, because 0 · 0 = 0.
Next we check the third property. To show that V is closed under scalar multiplication, we have to check that for any vector $v = \begin{pmatrix} a \\ b \end{pmatrix}$ in V and any scalar c in R, the product cv is in V . Since we cannot assume anything else about v and c, we must treat them as unknowns.
We have
\[
c\begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} ca \\ cb \end{pmatrix}.
\]
To say that $\begin{pmatrix} ca \\ cb \end{pmatrix}$ is contained in V means that (ca)(cb) = 0. Rewriting, this means $c^2(ab) = 0$. The one thing we are allowed to assume about v is that ab = 0, so we see that cv is indeed contained in V .
Next we check the second property. It turns out that V is not closed under
addition; to verify this, we must show that there exists some vectors u, v in V such
that u + v is not contained in V . The  easiest way
 to do so is to produce examples
1 0
of such vectors. We can take u = 0 and v = 1 ; these are contained in V because
the products of their coordinates are zero, but
 ‹  ‹  ‹
1 0 1
+ =
0 1 1

is not contained in V because 1 · 1 6= 0.


Since V does not satisfy the second property (it is not closed under addition),
we conclude that V is not a subspace. Indeed, it is the union of the two coordinate
axes, which is not a span.


3.6.2 Common Types of Subspaces


Theorem (Spans are subspaces). If v1 , v2 , . . . , vp are any vectors in Rn , then Span{v1 , v2 , . . . , vp }
is a subspace of Rn .

Proof. We have to verify the three defining properties.

1. The zero vector 0 = 0v1 + 0v2 + · · · + 0vp is in the span.

2. If u = a1 v1 + a2 v2 + · · · + a p vp and v = b1 v1 + b2 v2 + · · · + b p vp are in
Span{v1 , v2 , . . . , vp }, then

u + v = (a1 + b1 )v1 + (a2 + b2 )v2 + · · · + (a p + b p )vp

is also in Span{v1 , v2 , . . . , vp }.

3. If v = a1 v1 + a2 v2 + · · · + a p vp is in Span{v1 , v2 , . . . , vp } and c is a scalar, then

cv = ca1 v1 + ca2 v2 + · · · + ca p vp

is also in Span{v1 , v2 , . . . , vp }.

Since Span{v1 , v2 , . . . , vp } satisfies the three defining properties of a subspace, it is


a subspace.

If V = Span{v1 , v2 , . . . , vp }, we say that V is the subspace spanned by or gen-


erated by the vectors v1 , v2 , . . . , vp .

Every subspace is a span, and every span is a subspace.

A matrix naturally gives rise to two subspaces.

Definition. Let A be an m × n matrix.

• The column space of A is the subspace of Rm spanned by the columns of A.


It is written Col(A).

• The null space of A is the set of all solutions of the homogeneous equation Ax = 0:

    Nul(A) = { x in R^n : Ax = 0 }.

This is a subspace of R^n.

The column space is defined to be a span, so it is a subspace by the above theorem. We still need to verify that the null space is a subspace, which we do by checking the three defining properties.

Proof. We have to verify the three defining properties. To say that a vector v is in
Nul(A) means that Av = 0.

1. The zero vector is in Nul(A) because A0 = 0.



2. Suppose that u, v are in Nul(A). This means that Au = 0 and Av = 0. Hence

A(u + v) = Au + Av = 0 + 0 = 0

by the linearity of the matrix-vector product in Section 3.3. Therefore, u + v


is in Nul(A).

3. Suppose that v is in Nul(A) and c is a scalar. Then

A(cv) = cAv = c · 0 = 0

by the linearity of the matrix-vector product in Section 3.3, so cv is also in


Nul(A).

Since Nul(A) satisfies the three defining properties of a subspace, it is a subspace.

Example. Describe the column space and the null space of

    A = [ 1  1 ]
        [ 1  1 ]
        [ 1  1 ].

Solution. The column space is the span of the columns of A:

    Col(A) = Span{ (1, 1, 1), (1, 1, 1) } = Span{ (1, 1, 1) }.

This is a line in R3.


The null space is the solution set of the homogeneous system Ax = 0. To


compute this, we need to row reduce A. Its reduced row echelon form is

    [ 1  1 ]
    [ 0  0 ]
    [ 0  0 ].

This gives the equation x + y = 0, or

    x = −y
    y = y,

which in parametric vector form is (x, y) = y (−1, 1). Hence the null space is Span{(−1, 1)}, which is a line in R2.


Notice that the column space is a subspace of R3 , whereas the null space is a
subspace of R2 . This is because A has three rows and two columns.
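
For the matrix of the previous example, both subspaces can also be computed directly in Python's SymPy library (a sketch we add for illustration; it is not part of the text):

    from sympy import Matrix

    A = Matrix([[1, 1],
                [1, 1],
                [1, 1]])
    print(A.columnspace())   # one vector, a multiple of (1, 1, 1): a line in R^3
    print(A.nullspace())     # one vector, a multiple of (-1, 1):   a line in R^2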
The column space and the null space of a matrix are both subspaces, so they
are both spans. The column space of a matrix A is defined to be the span of the
columns of A. The null space is defined to be the solution set of Ax = 0, so this
is a good example of a kind of subspace that we can define without any spanning
set in mind. In order to do computations, however, it is usually necessary to find a
spanning set.
To be clear: the null space is the solution set of a (homogeneous) system of
equations. For example, the null space of the matrix

    A = [  1   7   2 ]
        [ −2   1   3 ]
        [  4  −2  −3 ]

is the solution set of Ax = 0, i.e., the solution set of the system of equations

    x + 7y + 2z = 0
    −2x + y + 3z = 0
    4x − 2y − 3z = 0.
To find a spanning set for the null space, one has to solve this system of equations.

Recipe: Compute a spanning set for a null space. To find a spanning set
for Nul(A), compute the parametric vector form of the solutions to the homo-
geneous equation Ax = 0. The vectors attached to the free variables form a
spanning set for Nul(A).
Example (Two free variables). Find a spanning set for the null space of the matrix

    A = [  2  3  −8  −5 ]
        [ −1  2  −3  −8 ].

Solution. We compute the parametric vector form of the solutions of Ax = 0. The reduced row echelon form of A is

    [ 1  0  −1   2 ]
    [ 0  1  −2  −3 ].

The free variables are x3 and x4; the parametric form of the solution set is

    x1 = x3 − 2x4
    x2 = 2x3 + 3x4
    x3 = x3
    x4 = x4,

so the parametric vector form is

    (x1, x2, x3, x4) = x3 (1, 2, 1, 0) + x4 (−2, 3, 0, 1).

Therefore,

    Nul(A) = Span{ (1, 2, 1, 0), (−2, 3, 0, 1) }.
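
As a quick machine check of the recipe (a sketch using Python's SymPy library, which the text does not assume), the row reduction and the spanning vectors above can be reproduced as follows.

    from sympy import Matrix

    A = Matrix([[2, 3, -8, -5],
                [-1, 2, -3, -8]])
    R, pivots = A.rref()       # reduced row echelon form and pivot column indices
    print(R)                   # rows (1, 0, -1, 2) and (0, 1, -2, -3)
    print(A.nullspace())       # one spanning vector per free variable:
                               # the columns (1, 2, 1, 0) and (-2, 3, 0, 1)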

Example (No free variables). Find a spanning set for the null space of the matrix

    A = [ 1  3 ]
        [ 2  4 ].

Solution. We compute the parametric vector form of the solutions of Ax = 0. The reduced row echelon form of A is

    [ 1  0 ]
    [ 0  1 ].

There are no free variables; hence the only solution of Ax = 0 is the trivial solution. In other words,

    Nul(A) = {0} = Span{0}.

How do you know if a subset is a subspace?. It is rarely necessary to ver-


ify the three properties of a subspace directly: most of the verifications were
already done in this section.

• Is your subset a span? Can it be written as a span?

• Can it be written as the column space of a matrix?

• Can it be written as the null space of a matrix? In other words, is it the


solution set of a homogeneous system of equations?

• Is it all of Rn or the zero subspace {0}?

• Can it be written as a type of subspace that we will see later (eigenspace,


orthogonal complement, ...)?

If so, then it is automatically a subspace. If all else fails:

• Can you verify directly that it satisfies the three defining properties of a
subspace?

Example (Verifying that a subset is a subspace, reprise). Let

    V = { (a, b) in R^2 : 2a = 3b }

be the subset of a previous example. Verify that V is a subspace.

Solution. The subset V is exactly the solution set of the homogeneous equation 2x − 3y = 0. Hence it is the null space of the 1 × 2 matrix ( 2  −3 ), so it is a subspace.
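
A one-line check of the same fact, sketched in SymPy (an illustration we add; the text does not rely on it): the null space of ( 2  −3 ) is a line, and the spanning vector it returns satisfies 2a = 3b.

    from sympy import Matrix

    print(Matrix([[2, -3]]).nullspace())   # a single vector, a multiple of (3, 2);
                                           # indeed 2*3 = 3*2, so it spans the line V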

3.7 Basis and Dimension

Objectives

1. Understand the definition of a basis of a subspace.

2. Recipes: basis for a column space, basis for a null space, basis of a span.

3. Picture: basis of a subspace of R2 or R3 .

4. Essential vocabulary words: basis, dimension.



3.7.1 Basis of a Subspace


As we discussed in Section 3.6, a subspace is the same as a span, except we do
not have a set of spanning vectors in mind. There are infinitely many choices
of spanning sets for a nonzero subspace; to avoid redundancy, usually it is most
convenient to choose a spanning set with the minimal number of vectors in it. This
is the idea behind the notion of a basis.

Essential Definition. Let V be a subspace of Rn . A basis of V is a set of vectors


{v1 , v2 , . . . , vm } in V such that:

1. V = Span{v1 , v2 , . . . , vm }, and

2. the set {v1 , v2 , . . . , vm } is linearly independent.

Recall that a set of vectors is linearly independent if and only if, when you re-
move any vector from the set, the span shrinks (Theorem 3.5.12). In other words,
if {v1 , v2 , . . . , vm } is a basis of a subspace V , then no proper subset of {v1 , v2 , . . . , vm }
will span V : it is a minimal spanning set.

A subspace generally has infinitely many different bases, but they all contain
the same number of vectors.

We leave it as an exercise to prove that any two bases have the same number of
vectors; one might want to wait until after learning the invertible matrix theorem
in Section 4.5.

Essential Definition. Let V be a subspace of Rn . The number of vectors in any


basis of V is called the dimension of V , and is written dim V .

Example (A basis of R2 ). Find a basis of R2 .


Solution. We need to find two vectors in R2 that span R2 and are linearly independent. One such basis is { (1, 0), (0, 1) }:

1. They span: any vector (a, b) can be written as a linear combination

    (a, b) = a (1, 0) + b (0, 1).

2. They are linearly independent: if

    x (1, 0) + y (0, 1) = (x, y) = (0, 0),

then x = y = 0.

This shows that the plane R2 has dimension 2.



Example (All bases of R2 ). Find all bases of R2 .


Solution. We know from the previous example that R2 has dimension 2, so any
basis of R2 has two vectors in it. Let v1 , v2 be vectors in R2 , and let A be the matrix
with columns v1 , v2 .
1. To say that {v1 , v2 } spans R2 means that A has a pivot in every row: see this
theorem in Section 3.3.
2. To say that {v1 , v2 } is linearly independent means that A has a pivot in every
column: see this theorem in Section 3.5.
Since A is a 2 × 2 matrix, it has a pivot in every row exactly when it has a pivot
in every column. Hence any two noncollinear vectors form a basis of R2 . For
example,

    { (1, 0), (1, 1) }

is a basis.


Example (The standard basis of Rn). One shows exactly as in the above example that the standard coordinate vectors

    e1 = (1, 0, . . . , 0, 0),   e2 = (0, 1, . . . , 0, 0),   . . . ,   en−1 = (0, 0, . . . , 1, 0),   en = (0, 0, . . . , 0, 1)

form a basis for Rn. This is sometimes known as the standard basis.

In particular, Rn has dimension n.

Example. The previous example implies that any basis for Rn has n vectors in
it. Let v1 , v2 , . . . , vn be vectors in Rn , and let A be the n × n matrix with columns
v1 , v2 , . . . , vn .

1. To say that {v1 , v2 , . . . , vn } spans Rn means that A has a pivot in every row:
see this theorem in Section 3.3.

2. To say that {v1 , v2 , . . . , vn } is linearly independent means that A has a pivot


in every column: see this theorem in Section 3.5.

Since A is a square matrix, it has a pivot in every row if and only if it has a pivot
in every column. We will see in Section 4.5 that the above two conditions are
equivalent to the invertibility of the matrix A.

Example. Let

    V = { (x, y, z) in R^3 : x + 3y + z = 0 },     B = { (−3, 1, 0), (0, 1, −3) }.

Verify that V is a subspace, and show directly that B is a basis for V.


Solution. First we observe that V is the solution set of the homogeneous equation x + 3y + z = 0, so it is a subspace: see this important note in Section 3.6. To show that B is a basis, we really need to verify three things:

1. Both vectors are in V because

    (−3) + 3(1) + (0) = 0     and     (0) + 3(1) + (−3) = 0.

2. Span: suppose that (x, y, z) is in V. Since x + 3y + z = 0 we have y = −(1/3)(x + z), so

    (x, y, z) = (x, −(1/3)(x + z), z) = −(x/3) (−3, 1, 0) − (z/3) (0, 1, −3).

Hence B spans V.

3. Linearly independent:

    c1 (−3, 1, 0) + c2 (0, 1, −3) = (−3c1, c1 + c2, −3c2) = (0, 0, 0)   =⇒   c1 = c2 = 0.

Alternatively, one can observe that the two vectors are not collinear.

Since V has a basis with two vectors, it has dimension two: it is a plane.

Use this link to view the online demo

A picture of the plane V and its basis B = {v1 , v2 }. Note that B spans V and is linearly
independent.

This example is somewhat contrived, in that we will learn systematic methods


for verifying that a subset is a basis. The intention is to illustrate the defining
properties of a basis.

3.7.2 Bases for Common Types of Subspaces

Now we show how to find bases for the types of subspaces we encountered in
Section 3.6, namely: a span, the column space of a matrix, and the null space of
a matrix.

A basis for the column space First we show how to compute a basis for the
column space of a matrix.

Theorem. The pivot columns of a matrix A form a basis for Col(A).

Proof. This is a restatement of a theorem in Section 3.5.


The above theorem is referring to the pivot columns in the original matrix, not its reduced row echelon form. Indeed, a matrix and its reduced row echelon form generally have different column spaces. For example, in the matrix A below:

    A = [  1   2  0  −1 ]              [ 1  0  −8  −7 ]
        [ −2  −3  4   5 ]  --RREF-->   [ 0  1   4   3 ]
        [  2   4  0  −2 ]              [ 0  0   0   0 ]

the pivot columns are the first two columns, so a basis for Col(A) is

    { (1, −2, 2), (2, −3, 4) }.

The first two columns of the reduced row echelon form certainly span a different subspace, as

    Span{ (1, 0, 0), (0, 1, 0) } = { (a, b, 0) : a, b in R },

but Col(A) contains vectors whose last coordinate is nonzero.
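
To see the distinction computationally, here is a short SymPy sketch (again, just an illustration, not part of the text): the pivot positions are read off from the reduced row echelon form, but the basis vectors are taken from the original matrix.

    from sympy import Matrix

    A = Matrix([[1, 2, 0, -1],
                [-2, -3, 4, 5],
                [2, 4, 0, -2]])
    R, pivots = A.rref()
    print(pivots)                       # (0, 1): the first two columns are pivot columns
    print([A.col(i) for i in pivots])   # basis for Col(A): columns of A itself
    print([R.col(i) for i in pivots])   # spans a different subspace (last coordinates are 0)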

A basis of a span Computing a basis for a span is the same as computing a basis
for a column space. Indeed, the span of finitely many vectors v1 , v2 , . . . , vm is the
column space of a matrix, namely, the matrix A whose columns are v1 , v2 , . . . , vm :
 
| | |
A =  v1 v2 · · · vm  .
| | |

Example (A basis of a span). Find a basis of the subspace

    V = Span{ (1, −2, 2), (2, −3, 4), (0, 4, 0), (−1, 5, −2) }.

Solution. The subspace V is the column space of the matrix

    A = [  1   2  0  −1 ]
        [ −2  −3  4   5 ]
        [  2   4  0  −2 ].

The reduced row echelon form of this matrix is

    [ 1  0  −8  −7 ]
    [ 0  1   4   3 ]
    [ 0  0   0   0 ].

The first two columns are pivot columns, so a basis for V is

    { (1, −2, 2), (2, −3, 4) }.

Use this link to view the online demo

A picture of the plane V and its basis B = {v1 , v2 }.

Example (Another basis of the same span). Find a basis of the subspace

    V = Span{ (1, −2, 2), (2, −3, 4), (0, 4, 0), (−1, 5, −2) }

which does not consist of the first two vectors, as in the previous example.

Solution. The point of this example is that the above theorem gives one basis for V; as always, there are infinitely more. Reordering the vectors, we can express V as the column space of

    A′ = [ 0  −1   1   2 ]
         [ 4   5  −2  −3 ]
         [ 0  −2   2   4 ].

The reduced row echelon form of this matrix is

    [ 1  0  3/4  7/4 ]
    [ 0  1  −1   −2  ]
    [ 0  0   0    0  ].

The first two columns are pivot columns, so a basis for V is

    { (0, 4, 0), (−1, 5, −2) }.

These are the last two vectors in the given spanning set.

Use this link to view the online demo

A picture of the plane V and its basis B = {v1 , v2 }.

A basis for the null space In order to compute a basis for the null space of a
matrix, one has to find the parametric vector form of the solutions of the homoge-
neous equation Ax = 0.
Theorem. The vectors attached to the free variables in the parametric vector form of
the solution set of Ax = 0 form a basis of Nul(A).
In lieu of a proof, we illustrate the theorem with an example, and we leave it
to the reader to generalize the argument.
Example (A basis of a null space). Find a basis of the null space of the matrix

    A = [  1   2  0  −1 ]
        [ −2  −3  4   5 ]
        [  2   4  0  −2 ].

Solution. The reduced row echelon form of A is

    [ 1  0  −8  −7 ]
    [ 0  1   4   3 ]
    [ 0  0   0   0 ].

Hence the parametric form and parametric vector form of the solutions of Ax = 0 are

    x1 = 8x3 + 7x4
    x2 = −4x3 − 3x4       =⇒      x = x3 (8, −4, 1, 0) + x4 (7, −3, 0, 1).
    x3 = x3
    x4 = x4

Every solution of Ax = 0 has the above form for some values of x3, x4: this is the point of the parametric vector form. It follows that

    Nul(A) = Span{ (8, −4, 1, 0), (7, −3, 0, 1) }.

Next we check linear independence. Suppose that

    (0, 0, 0, 0) = x3 (8, −4, 1, 0) + x4 (7, −3, 0, 1) = (8x3 + 7x4, −4x3 − 3x4, x3, x4).

Comparing the third and fourth coordinates, we see that x3 = x4 = 0. It follows that

    { (8, −4, 1, 0), (7, −3, 0, 1) }

is indeed a basis for Nul(A).
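
The same computation in SymPy (a sketch, not something the text relies on): nullspace() returns exactly the vectors attached to the free variables, and stacking them into a matrix of rank 2 confirms linear independence.

    from sympy import Matrix

    A = Matrix([[1, 2, 0, -1],
                [-2, -3, 4, 5],
                [2, 4, 0, -2]])
    basis = A.nullspace()          # the columns (8, -4, 1, 0) and (7, -3, 0, 1)
    B = Matrix.hstack(*basis)
    print(A * B)                   # the zero matrix: both vectors lie in Nul(A)
    print(B.rank())                # 2: the two vectors are linearly independent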

3.8 Bases as Coordinate Systems

Objectives

1. Learn to view a basis as a coordinate system on a subspace.

2. Recipes: compute the B-coordinates of a vector, compute the usual coordi-


nates of a vector from its B-coordinates.

3. Picture: the B-coordinates of a vector using its location on a nonstandard


coordinate grid.

4. Vocabulary word: B-coordinates.

In this section, we interpret a basis of a subspace V as a coordinate system on


V , and we learn how to write a vector in V in that coordinate system.
Fact. If B = {v1 , v2 , . . . , vm } is a basis for a subspace V , then any vector x in V can
be written as a linear combination

x = c1 v1 + c2 v2 + · · · + cm vm

in exactly one way.


Proof. Recall that to say B is a basis for V means that B spans V and B is linearly independent. Since B spans V, we can write any x in V as a linear combination of v1, v2, . . . , vm. For uniqueness, suppose that we had two such expressions:

    x = c1 v1 + c2 v2 + · · · + cm vm
    x = c1′ v1 + c2′ v2 + · · · + cm′ vm.

Subtracting the first equation from the second yields

    0 = x − x = (c1 − c1′) v1 + (c2 − c2′) v2 + · · · + (cm − cm′) vm.

Since B is linearly independent, the only solution to the above equation is the trivial solution: all the coefficients must be zero. It follows that ci − ci′ = 0 for all i, which proves that c1 = c1′, c2 = c2′, . . . , cm = cm′.

Example. Consider the standard basis of R3 from this example in Section 3.7:

    e1 = (1, 0, 0),   e2 = (0, 1, 0),   e3 = (0, 0, 1).

According to the above fact, every vector in R3 can be written as a linear combination of e1, e2, e3, with unique coefficients. For example,

    v = (3, 5, −2) = 3 e1 + 5 e2 − 2 e3.

In this case, the coordinates of v are exactly the coefficients of e1 , e2 , e3 .


What exactly are coordinates, anyway? One way to think of coordinates is that
they give directions for how to get to a certain point from the origin. In the above
example, the linear combination 3e1 + 5e2 − 2e3 can be thought of as the following
list of instructions: start at the origin, travel 3 units north, then travel 5 units east,
then 2 units down.
Definition. Let B = {v1, v2, . . . , vm} be a basis of a subspace V, and let

    x = c1 v1 + c2 v2 + · · · + cm vm

be a vector in V. The coefficients c1, c2, . . . , cm are the coordinates of x with respect to B. The B-coordinate vector of x is the vector

    [x]B = (c1, c2, . . . , cm)  in R^m.

If we change the basis, then we can still give instructions for how to get to the point (3, 5, −2), but the instructions will be different. Say for example we take the basis

    v1 = e1 + e2 = (1, 1, 0),   v2 = e2 = (0, 1, 0),   v3 = e3 = (0, 0, 1).

We can write (3, 5, −2) in this basis as 3 v1 + 2 v2 − 2 v3. In other words: start at the origin, travel northeast 3 times as far as v1, then 2 units east, then 2 units down. In this situation, we can say that “3 is the v1-coordinate of (3, 5, −2), 2 is the v2-coordinate of (3, 5, −2), and −2 is the v3-coordinate of (3, 5, −2).”

The above definition gives a way of using Rm to label the points of a subspace of
dimension m: a point is simply labeled by its B-coordinate vector. For instance,
if we choose a basis for a plane, we can label the points of that plane with the
points of R2 .
Example (A nonstandard coordinate system on R2). Define

    v1 = (1, 1),   v2 = (1, −1),   B = {v1, v2}.

1. Verify that B is a basis for R2.

2. If [w]B = (1, 2), then what is w?

3. Find the B-coordinates of v = (5, 3).

Solution.

1. By the basis theorem in Section 3.9, any two linearly independent vectors form a basis for R2. Clearly v1, v2 are not multiples of each other, so they are linearly independent.

2. To say [w]B = (1, 2) means that 1 is the v1-coordinate of w, and that 2 is the v2-coordinate:

    w = v1 + 2 v2 = (1, 1) + 2 (1, −1) = (3, −1).

3. We have to solve the vector equation v = c1 v1 + c2 v2 in the unknowns c1, c2. We form an augmented matrix and row reduce:

    [ 1   1 | 5 ]              [ 1  0 | 4 ]
    [ 1  −1 | 3 ]  --RREF-->   [ 0  1 | 1 ].

We have c1 = 4 and c2 = 1, so v = 4 v1 + v2 and [v]B = (4, 1).

In the following picture, we indicate the coordinate system defined by B by drawing lines parallel to the “v1-axis” and “v2-axis”. Using this grid it is easy to see that the B-coordinates of v are (4, 1), and that the B-coordinates of w are (1, 2).


This picture could be the grid of streets in Palo Alto, California. Residents of
Palo Alto refer to northwest as “north” and to northeast as “east”. There is a reason
for this: the old road to San Francisco is called El Camino Real, and that road runs
from the southeast to the northwest in Palo Alto. So when a Palo Alto resident
says “go south two blocks and east one block”, they are giving directions from the
origin to the Whole Foods at w.

Use this link to view the online demo

A picture of the basis B = {v1 , v2 } of R2 . The grid indicates the coordinate system
defined by the basis B; one set of lines measures the v1 -coordinate, and the other set
measures the v2 -coordinate. Use the sliders to find the B-coordinates of w.

Example. Let

    v1 = (2, −1, 1),   v2 = (1, 0, −1).

These form a basis B for a plane V = Span{v1, v2} in R3. We indicate the coordinate system defined by B by drawing lines parallel to the “v1-axis” and “v2-axis”:

[Figure: the plane V with a coordinate grid determined by v1 and v2, and four marked points u1, u2, u3, u4.]

We can see from the picture that the v1-coordinate of u1 is equal to 1, as is the v2-coordinate, so [u1]B = (1, 1). Similarly, we have

    [u2]B = (−1, 1/2),   [u3]B = (3/2, −1/2),   [u4]B = (0, 3/2).

Use this link to view the online demo

Left: the B-coordinates of a vector x. Right: the vector x. The violet grid on the right
is a picture of the coordinate system defined by the basis B; one set of lines measures
the v1 -coordinate, and the other set measures the v2 -coordinate. Drag the heads of the
vectors x and [x]B to understand the correspondence between x and its B-coordinate
vector.

Example (A coordinate system on a plane). Define

    v1 = (1, 0, 1),   v2 = (1, 1, 1),   B = {v1, v2},   V = Span{v1, v2}.

1. Verify that B is a basis for V.

2. If [w]B = (5, 2), then what is w?

3. Find the B-coordinates of v = (5, 3, 5).

Solution.

1. We need to verify that B spans V, and that it is linearly independent. By definition, V is the span of B; since v1 and v2 are not multiples of each other, they are linearly independent. This shows in particular that V is a plane.

2. To say [w]B = (5, 2) means that 5 is the v1-coordinate of w, and that 2 is the v2-coordinate:

    w = 5 v1 + 2 v2 = 5 (1, 0, 1) + 2 (1, 1, 1) = (7, 2, 7).

3. We have to solve the vector equation v = c1 v1 + c2 v2 in the unknowns c1, c2. We form an augmented matrix and row reduce:

    [ 1  1 | 5 ]              [ 1  0 | 2 ]
    [ 0  1 | 3 ]  --RREF-->   [ 0  1 | 3 ]
    [ 1  1 | 5 ]              [ 0  0 | 0 ].

We have c1 = 2 and c2 = 3, so v = 2 v1 + 3 v2 and [v]B = (2, 3).

Use this link to view the online demo

A picture of the plane V and the basis B = {v1 , v2 }. The violet grid is a picture of the
coordinate system defined by the basis B; one set of lines measures the v1 -coordinate,
and the other set measures the v2 -coordinate. Use the sliders to find the B-coordinates
of v.

Example (A coordinate system on another plane). Define

    v1 = (2, 3, 2),   v2 = (−1, 1, 1),   v3 = (2, 8, 6),   V = Span{v1, v2, v3}.

1. Find a basis B for V.

2. Find the B-coordinates of x = (4, 11, 8).

Solution.

1. We write V as the column space of a matrix A, then row reduce to find the pivot columns, as in this example in Section 3.7.

    A = [ 2  −1  2 ]              [ 1  0  2 ]
        [ 3   1  8 ]  --RREF-->   [ 0  1  2 ]
        [ 2   1  6 ]              [ 0  0  0 ].

The first two columns are pivot columns, so we can take B = {v1, v2} as our basis for V.

2. We have to solve the vector equation x = c1 v1 + c2 v2. We form an augmented matrix and row reduce:

    [ 2  −1 |  4 ]              [ 1  0 | 3 ]
    [ 3   1 | 11 ]  --RREF-->   [ 0  1 | 2 ]
    [ 2   1 |  8 ]              [ 0  0 | 0 ].

We have c1 = 3 and c2 = 2, so x = 3 v1 + 2 v2, and thus [x]B = (3, 2).

Use this link to view the online demo

A picture of the plane V and the basis B = {v1 , v2 }. The violet grid is a picture of the
coordinate system defined by the basis B; one set of lines measures the v1 -coordinate,
and the other set measures the v2 -coordinate. Use the sliders to find the B-coordinates
of x.
Recipes: B-coordinates. If B = {v1, v2, . . . , vm} is a basis for a subspace V and x is in V, then

    [x]B = (c1, c2, . . . , cm)   means   x = c1 v1 + c2 v2 + · · · + cm vm.

Finding the B-coordinates of x means solving the vector equation

    x = c1 v1 + c2 v2 + · · · + cm vm

in the unknowns c1, c2, . . . , cm. This generally means row reducing the augmented matrix

    [ v1  v2  · · ·  vm | x ].
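
In software this recipe is a single row reduction; the following SymPy sketch (an illustration we add, not part of the recipe) finds the B-coordinates of v = (5, 3, 5) with respect to the basis of the plane example above.

    from sympy import Matrix

    v1 = Matrix([1, 0, 1])
    v2 = Matrix([1, 1, 1])
    v = Matrix([5, 3, 5])
    aug = Matrix.hstack(v1, v2, v)   # the augmented matrix [ v1 v2 | v ]
    print(aug.rref()[0])             # the last column reads off c1 = 2, c2 = 3, so [v]_B = (2, 3)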

Remark. Let B = {v1 , v2 , . . . , vm } be a basis of a subspace V . Finding the B-


coordinates of a vector x means solving the vector equation

x = c1 v1 + c2 v2 + · · · + cm vm .

If x is not in V , then this equation has no solution, as x is not in V = Span{v1 , v2 , . . . , vm }.


In other words, the above equation is inconsistent when x is not in V .

3.9 The Rank Theorem and the Basis Theorem

Objectives

1. Learn to use the rank theorem and the basis theorem.

2. Picture: the rank theorem.

3. Theorems: rank theorem, basis theorem.

4. Vocabulary words: rank, nullity.

In this section we present two important general facts about dimensions and
bases.

3.9.1 The Rank Theorem


With the rank theorem, we can finally relate the dimension of the solution set of
a matrix equation with the dimension of the column space.

Definition. The rank of a matrix A, written rank(A), is the dimension of the column
space Col(A).
The nullity of a matrix A, written nullity(A), is the dimension of the null space
Nul(A).

According to this theorem in Section 3.7, the rank of A is equal to the number of
columns with pivots. On the other hand, this theorem in Section 3.7 implies that
nullity(A) equals the number of free variables, which is the number of columns
without pivots. To summarize:

rank(A) = dim Col(A) = the number of columns with pivots


nullity(A) = dim Nul(A) = the number of free variables
= the number of columns without pivots

Clearly (the number of columns with pivots) plus (the number of columns without
pivots) equals (the number of columns of A), so we have proved the following
theorem.

Rank Theorem. If A is an m × n matrix, then

rank(A) + nullity(A) = n.

In other words, for any consistent system of linear equations,

(dim of column span) + (dim of solution set) = (number of variables).

Example (The rank is 2 and the nullity is 2). Consider the following matrix and its reduced row echelon form:

    A = [  1   2  0  −1 ]              [ 1  0  −8  −7 ]
        [ −2  −3  4   5 ]  --RREF-->   [ 0  1   4   3 ]
        [  2   4  0  −2 ]              [ 0  0   0   0 ]

The pivots lie in the first two columns, so the pivot columns of A give a basis of Col(A), and the non-pivot columns correspond to the free variables x3, x4.

A basis for Col(A) is given by the pivot columns:

    { (1, −2, 2), (2, −3, 4) },

so rank(A) = dim Col(A) = 2.


Since there are two free variables x3, x4, the null space of A has a basis with two vectors (see this example in Section 3.7):

    { (8, −4, 1, 0), (7, −3, 0, 1) },

so nullity(A) = 2.

In this case, the rank theorem says that 2 + 2 = 4, where 4 is the number of columns.
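
A quick numerical confirmation of the rank theorem for this matrix, sketched in SymPy (our illustration, not the book's):

    from sympy import Matrix

    A = Matrix([[1, 2, 0, -1],
                [-2, -3, 4, 5],
                [2, 4, 0, -2]])
    rank = A.rank()
    nullity = len(A.nullspace())       # one basis vector per free variable
    print(rank, nullity)               # 2 2
    print(rank + nullity == A.cols)    # True: rank(A) + nullity(A) = n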

Interactive: Rank is 1, nullity is 2.

Use this link to view the online demo

This 3 × 3 matrix has rank 1 and nullity 2. The violet plane on the left is the null
space, and the violet line on the right is the column space.

Interactive: Rank is 2, nullity is 1.

Use this link to view the online demo

This 3 × 3 matrix has rank 2 and nullity 1. The violet line on the left is the null space,
and the violet plane on the right is the column space.

3.9.2 The Basis Theorem


Recall from this example in Section 3.7 that {v1 , v2 , . . . , vn } forms a basis for Rn if
and only if the matrix A with columns v1 , v2 , . . . , vn has a pivot in every row and
column. Since A is an n×n matrix, these two conditions are equivalent: the vectors
span if and only if they are linearly independent. The basis theorem is an abstract
version of the preceding statement, that applies to any subspace.

Basis Theorem. Let V be a subspace of dimension m. Then:

• Any m linearly independent vectors in V form a basis for V .

• Any m vectors that span V form a basis for V .

Proof. Suppose that B = {v1 , v2 , . . . , vm } is a set of linearly independent vectors in


V . In order to show that B is a basis for V , we must prove that V = Span{v1 , v2 , . . . , vm }.
If not, then there exists some vector vm+1 in V that is not contained in Span{v1 , v2 , . . . , vm }.
By the increasing span criterion in Section 3.5, the set {v1 , v2 , . . . , vm , vm+1 } is also
linearly independent. Continuing in this way, we keep choosing vectors until we
eventually do have a linearly independent spanning set: say V = Span{v1 , v2 , . . . , vm , . . . , vm+k }.
3.9. THE RANK THEOREM AND THE BASIS THEOREM 111

Then {v1 , v2 , . . . , vm+k } is a basis for V , which implies that dim(V ) = m + k > m.
But we were assuming that V has dimension m, so B must have already been a
basis.
Now suppose that B = {v1 , v2 , . . . , vm } spans V . If B is not linearly independent,
then by this theorem in Section 3.5, we can remove some number of vectors from B
without shrinking its span. After reordering, we can assume that we removed the
last k vectors without shrinking the span, and that we cannot remove any more.
Now V = Span{v1 , v2 , . . . , vm−k }, and {v1 , v2 , . . . , vm−k } is a basis for V because it is
linearly independent. This implies that dim V = m−k < m. But we were assuming
that dim V = m, so B must have already been a basis.

In other words, if you already know that dim V = m, and if you have a set of
m vectors B = {v1 , v2 , . . . , vm } in V , then you only have to check one of:

1. B is linearly independent, or

2. B spans V ,

in order for B to be a basis of V . If you did not already know that dim V = m, then
you would have to check both properties.
For example, if V is a plane, then any two noncollinear vectors in V form a
basis.

Example (Yet another basis for a span). Find a basis of the subspace

    V = Span{ (1, −2, 2), (2, −3, 4), (0, 4, 0), (−1, 5, −2) }

which is different from the bases in this example in Section 3.7 and this example in Section 3.7.

Solution. We know from the previous examples that dim V = 2. By the basis theorem, it suffices to find any two noncollinear vectors in V. We write two linear combinations of the four given spanning vectors, chosen at random:

    w1 = (1, −2, 2) + (2, −3, 4) = (3, −5, 6),
    w2 = −(2, −3, 4) + (1/2)(0, 4, 0) = (−2, 5, −4).

Since w1, w2 are not collinear, B = {w1, w2} is a basis for V.

Use this link to view the online demo

A picture of the plane V and its basis B = {w1 , w2 }.
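
A sketch of how one might confirm this with SymPy (purely illustrative): adding w1 and w2 to the spanning columns does not raise the rank, so they lie in V, and the two of them have rank 2, so they are linearly independent.

    from sympy import Matrix

    A = Matrix([[1, 2, 0, -1],
                [-2, -3, 4, 5],
                [2, 4, 0, -2]])
    w1 = Matrix([3, -5, 6])
    w2 = Matrix([-2, 5, -4])
    print(Matrix.hstack(A, w1, w2).rank())   # 2: w1, w2 do not enlarge Col(A) = V
    print(Matrix.hstack(w1, w2).rank())      # 2: w1, w2 are linearly independent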


Chapter 4

Linear Transformations and Matrix Algebra

Primary Goal. Learn about linear transformations and their relationship to ma-
trices.

In practice, one is often led to ask questions about the geometry of a trans-
formation: a function that takes an input and produces an output. This kind of
question can be answered by linear algebra if the transformation can be expressed
by a matrix.

Example. Suppose you are building a robot arm with three joints that can move
its hand around a plane, as in the following picture.

 ‹
x
= f (θ , φ, ψ)
ψ y

Define a transformation f as follows: f (θ , φ, ψ) is the (x, y) position of the


hand when the joints are rotated by angles θ , φ, ψ, respectively. The output of f
tells you where the hand will be on the plane when the joints are set at the given
input angles.
Unfortunately, this kind of function does not come from a matrix, so one cannot
use linear algebra to answer questions about this function. In fact, these functions
are rather complicated; their study is the subject of inverse kinematics.


In this chapter, we will be concerned with the relationship between matrices


and transformations. In Section 4.1, we will consider the equation b = Ax as
a function with independent variable x and dependent variable b, and we draw
pictures accordingly. We spend some time studying transformations in the abstract,
and asking questions about a transformation, like whether it is one-to-one and/or
onto (Section 4.2). In Section 4.3 we will answer the question: “when exactly can
a transformation be expressed by a matrix?” We then present matrix multiplication
as a special case of composition of transformations (Section 4.4). This leads to the
study of matrix algebra: that is, to what extent one can do arithmetic with matrices
in the place of numbers. With this in place, we learn to solve matrix equations by
dividing by a matrix in Section 4.5.

4.1 Matrix Transformations

Objectives
1. Learn to view a matrix geometrically as a function.

2. Learn examples of matrix transformations: reflection, dilation, rotation, shear,


projection.

3. Understand the vocabulary surrounding transformations: domain, codomain,


range.

4. Understand the domain, codomain, and range of a matrix transformation.

5. Pictures: common matrix transformations.

6. Vocabulary words: transformation / function, domain, codomain, range,


identity transformation, matrix transformation.

In this section we learn to understand matrices geometrically as functions, or


transformations. We briefly discuss transformations in general, then specialize to
matrix transformations, which are transformations that come from matrices.

4.1.1 Matrices as Functions


Informally, a function is a rule that accepts inputs and produces outputs. For in-
stance, f (x) = x 2 is a function that accepts one number x as its input, and outputs
the square of that number: f (2) = 4. In this subsection, we interpret matrices as
functions.
Let A be a matrix with m rows and n columns. Consider the matrix equation
b = Ax (we write it this way instead of Ax = b to remind the reader of the notation
y = f (x)). If we vary x, then b will also vary; in this way, we think of A as a
function with independent variable x and dependent variable b.

• The independent variable (the input) is x, which is a vector in Rn .

• The dependent variable (the output) is b, which is a vector in Rm .

The set of all possible output vectors consists of the vectors b such that Ax = b has some
solution; this is the same as the column space of A by this note in Section 3.3.


Interactive: A 2 × 3 matrix.

Use this link to view the online demo

A picture of a 2 × 3 matrix, regarded as a function. The input vector is x, which is a


vector in R3 , and the output vector is b = Ax, which is a vector in R2 . The violet line
on the right is the column space; as you vary x, the output b is constrained to lie on
this line.

Interactive: A 3 × 2 matrix.

Use this link to view the online demo

A picture of a 3 × 2 matrix, regarded as a function. The input vector is x, which is


a vector in R2 , and the output vector is b = Ax, which is a vector in R3 . The violet
plane on the right is the column space; as you vary x, the output b is constrained to
lie on this plane.
116 CHAPTER 4. LINEAR TRANSFORMATIONS AND MATRIX ALGEBRA

Example (Projection onto the x y-plane). Let

    A = [ 1  0  0 ]
        [ 0  1  0 ]
        [ 0  0  0 ].

Describe the function b = Ax geometrically.

Solution. In the equation Ax = b, the input vector x and the output vector b are both in R3. First we multiply A by a vector to see what it does:

    A (x, y, z) = (x, y, 0).

Multiplication by A simply sets the z-coordinate equal to zero: it projects onto the x y-plane.

Use this link to view the online demo

Multiplication by the matrix A projects a vector onto the x y-plane. Move the input
vector x to see how the output vector b changes.

Example (Reflection). Let

    A = [ −1  0 ]
        [  0  1 ].

Describe the function b = Ax geometrically.

Solution. In the equation Ax = b, the input vector x and the output vector b are both in R2. First we multiply A by a vector to see what it does:

    A (x, y) = (−x, y).

Multiplication by A negates the x-coordinate: it reflects over the y-axis.



Use this link to view the online demo

Multiplication by the matrix A reflects over the y-axis. Move the input vector x to see
how the output vector b changes.

Example (Dilation). Let

    A = [ 1.5   0  ]
        [  0   1.5 ].

Describe the function b = Ax geometrically.

Solution. In the equation Ax = b, the input vector x and the output vector b are both in R2. First we multiply A by a vector to see what it does:

    A (x, y) = (1.5x, 1.5y) = 1.5 (x, y).

Multiplication by A is the same as scalar multiplication by 1.5: it scales or dilates the plane by a factor of 1.5.


Use this link to view the online demo

Multiplication by the matrix A dilates the plane by a factor of 1.5. Move the input
vector x to see how the output vector b changes.

Example (Identity). Let

    A = [ 1  0 ]
        [ 0  1 ].

Describe the function b = Ax geometrically.

Solution. In the equation Ax = b, the input vector x and the output vector b are both in R2. First we multiply A by a vector to see what it does:

    A (x, y) = (x, y).

Multiplication by A does not change the input vector at all: it is the identity transformation which does nothing.


Use this link to view the online demo

Multiplication by the matrix A does not move the vector x: that is, b = Ax = x. Move
the input vector x to see how the output vector b changes.

Example (Rotation). Let

    A = [ 0  −1 ]
        [ 1   0 ].

Describe the function b = Ax geometrically.

Solution. In the equation Ax = b, the input vector x and the output vector b are both in R2. First we multiply A by a vector to see what it does:

    A (x, y) = (−y, x).

We substitute a few test points in order to understand the geometry of the transformation:

    A (1, 2) = (−2, 1)
    A (−1, 1) = (−1, −1)
    A (0, −2) = (2, 0)

Multiplication by A is counterclockwise rotation by 90°.


Use this link to view the online demo

Multiplication by the matrix A rotates the vector x counterclockwise by 90◦ . Move


the input vector x to see how the output vector b changes.

Example (Shear). Let

    A = [ 1  1 ]
        [ 0  1 ].

Describe the function b = Ax geometrically.

Solution. In the equation Ax = b, the input vector x and the output vector b are both in R2. First we multiply A by a vector to see what it does:

    A (x, y) = (x + y, y).

Multiplication by A adds the y-coordinate to the x-coordinate; this is called a shear in the x-direction.

Use this link to view the online demo

Multiplication by the matrix A adds the y-coordinate to the x-coordinate. Move the
input vector x to see how the output vector b changes.

4.1.2 Transformations
At this point it is convenient to fix our ideas and terminology regarding functions,
or transformations. This allows us to systematize our discussion of matrices as
functions.

Definition. A transformation (or function or map) from Rn to Rm is a rule T that


assigns to each vector x in Rn a vector T (x) in Rm .

• Rn is called the domain of T .

• Rm is called the codomain of T .

• For x in Rn , the vector T (x) in Rm is the image of x under T . The notation


x 7→ y means “ y is the image of x under T ,” i.e., that y = T (x).

• The set of all images {T (x) | x in Rn } is the range of T .

The notation T : Rn −→ Rm means “T is a transformation from Rn to Rm .”

It may help to think of T as a “machine” that takes x as an input, and gives


you T (x) as the output.

The points of the domain Rn are the inputs of T : this simply means that it makes
sense to evaluate T on lists of n numbers. Likewise, the points of the codomain
Rm are the outputs of T : this means that the result of evaluating T is always a list
of m numbers.
The range of T is the set of all vectors in the codomain that actually arise as
outputs of the function T , for some input. In other words, the range is all vectors
b in the codomain such that T (x) = b has a solution x in the domain.

Example (A function of one variable). Most of the functions you may have seen previously have domain and codomain equal to R = R1. For example,

    sin: R −→ R,   sin(x) = (the length of the opposite edge over the hypotenuse of a right triangle with angle x in radians).

Notice that we have defined sin by a rule: a function is defined by specifying what the output of the function is for any possible input.
You may be used to thinking of such functions in terms of their graphs:


In this case, the horizontal axis is the domain, and the vertical axis is the
codomain. This is useful when the domain and codomain are R, but it is hard

to do when, for instance, the domain is R2 and the codomain is R3 . The graph of
such a function is a subset of R5 , which is difficult to visualize. For this reason, we
will rarely graph a transformation.
Note that the range of sin is the interval [−1, 1]: this is the set of all possible
outputs of the sin function.

Example (Functions of several variables). Here is an example of a function from R2 to R3:

    f(x, y) = (x + y, cos(y), y − x^2).

The inputs of f each have two entries, and the outputs have three entries. In this case, we have defined f by a formula, so we evaluate f by substituting values for the variables:

    f(2, 3) = (2 + 3, cos(3), 3 − 2^2) = (5, cos(3), −1).

Here is an example of a function from R3 to R3:

    f(v) = (the counterclockwise rotation of v by an angle of 42° about the z-axis).

In other words, f takes a vector with three entries, then rotates it; hence the output of f also has three entries. In this case, we have defined f by a geometric rule.

Definition. The identity transformation IdRn : Rn → Rn is the transformation de-


fined by the rule
IdRn (x) = x for all x in Rn .

In other words, the identity transformation does not move its input vector: the
output is the same as the input. Its domain and codomain are both Rn , and its
range is Rn as well, since every vector in Rn is the output of itself.

Example (A real-world transformation: robotics). The definition of transformation


and its associated vocabulary may seem quite abstract, but transformations are
extremely common in real life. Here is an example from the fields of robotics and
computer graphics.
Suppose you are building a robot arm with three joints that can move its hand
around a plane, as in the following picture.
[Figure: a robot arm with three joint angles θ, φ, ψ; the hand sits at position (x, y) = f(θ, φ, ψ).]

Define a transformation f : R3 → R2 as follows: f (θ , φ, ψ) is the (x, y) position


of the hand when the joints are rotated by angles θ , φ, ψ, respectively. Evaluating
f tells you where the hand will be on the plane when the joints are set at the given
angles.
It is relatively straightforward to find a formula for f (θ , φ, ψ) using some basic
trigonometry. If you want the robot to fetch your coffee cup, however, you have to
find the angles θ , φ, ψ that will put the hand at the position of your beverage. It
is not at all obvious how to do this, and it is not even clear if the answer is unique!
You can ask yourself: “which positions on the table can my robot arm reach?” or
“what is the arm’s range of motion?” This is the same as asking: “what is the range
of f ?”
Unfortunately, this kind of function does not come from a matrix, so one cannot
use linear algebra to answer these kinds of questions. In fact, these functions are
rather complicated; their study is the subject of inverse kinematics.

4.1.3 Matrix Transformations


Now we specialize the general notions and vocabulary from the previous subsec-
tion to the functions defined by matrices that we considered in the first subsection.
Definition. Let A be an m × n matrix. The matrix transformation associated to A
is the transformation
T : Rn −→ Rm defined by T (x) = Ax.
This is the transformation that takes a vector x in Rn to the vector Ax in Rm .
Example. The definition of a matrix transformation T tells us how to evaluate T on any given vector: we multiply the input vector by a matrix. For instance, let

    A = [ 1  2  3 ]
        [ 4  5  6 ]

and let T(x) = Ax be the associated matrix transformation. Then

    T(−1, −2, −3) = A (−1, −2, −3) = (−14, −32).
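
Evaluating a matrix transformation in code is just a matrix-vector product; here is a minimal SymPy sketch of the computation above (an illustration only).

    from sympy import Matrix

    A = Matrix([[1, 2, 3],
                [4, 5, 6]])

    def T(x):
        return A * x                    # the matrix transformation T(x) = Ax

    print(T(Matrix([-1, -2, -3])))      # the column vector (-14, -32)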
If A has n columns, then it only makes sense to multiply A by vectors with n entries. This is why the domain of T(x) = Ax is Rn. If A has m rows, then Ax has m entries for any vector x in Rn; this is why the codomain of T(x) = Ax is Rm.

Suppose that A has columns v1, v2, . . . , vn. If we multiply A by a general vector x = (x1, x2, . . . , xn), we get

    Ax = x1 v1 + x2 v2 + · · · + xn vn.

This is just a general linear combination of v1, v2, . . . , vn. Therefore, the outputs of T(x) = Ax are exactly the linear combinations of the columns of A: the range of T is the column space of A. See this note in Section 3.3.

Let A be an m × n matrix, and let T (x) = Ax be the associated matrix transfor-


mation.

• The domain of T is Rn , where n is the number of columns of A.

• The codomain of T is Rm , where m is the number of rows of A.

• The range of T is the column space of A.

Interactive: A 2 × 3 matrix: reprise. Let

    A = [  1  −1   2 ]
        [ −2   2  −4 ],

and define T(x) = Ax. The domain of T is R3, and the codomain is R2. The range of T is the column space; since all three columns are collinear, the range is a line in R2.

Use this link to view the online demo

A picture of the matrix transformation T . The input vector is x, which is a vector in


R3 , and the output vector is b = T (x) = Ax, which is a vector in R2 . The violet line
on the right is the range of T ; as you vary x, the output b is constrained to lie on this
line.

Interactive: A 3 × 2 matrix: reprise. Let

    A = [ 1  0 ]
        [ 0  1 ]
        [ 1  0 ],

and define T(x) = Ax. The domain of T is R2, and the codomain is R3. The range of T is the column space; since A has two columns which are not collinear, the range is a plane in R3.

Use this link to view the online demo

A picture of the matrix transformation T . The input vector is x, which is a vector in


R2 , and the output vector is b = T (x) = Ax, which is a vector in R3 . The violet plane
on the right is the range of T ; as you vary x, the output b is constrained to lie on this
plane.

Example (Projection onto the x y-plane: reprise). Let

    A = [ 1  0  0 ]
        [ 0  1  0 ]
        [ 0  0  0 ],

and let T(x) = Ax. What are the domain, the codomain, and the range of T?

Solution. Geometrically, the transformation T projects a vector directly “down”


onto the x y-plane in R3 .

The inputs and outputs have three entries, so the domain and codomain are
both R3 . The possible outputs all lie on the x y-plane, and every point on the x y-
plane is an output of T (with itself as the input), so the range of T is the x y-plane.
Be careful not to confuse the codomain with the range here. The range is a
plane, but it is a plane in R3 , so the codomain is still R3 . The outputs of T all have
three entries; the last entry is simply always zero.

Example (Matrix transformations of R2). In the first subsection we discussed the transformations defined by several 2 × 2 matrices, namely:

    Reflection:  A = [ −1  0 ]
                     [  0  1 ]

    Dilation:    A = [ 1.5   0  ]
                     [  0   1.5 ]

    Identity:    A = [ 1  0 ]
                     [ 0  1 ]

    Rotation:    A = [ 0  −1 ]
                     [ 1   0 ]

    Shear:       A = [ 1  1 ]
                     [ 0  1 ]

In each case, the associated matrix transformation T(x) = Ax has domain and codomain equal to R2. The range is also R2, as can be seen geometrically (what is the input for a given output?), or using the fact that the columns of A are not collinear (so they form a basis for R2).
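
The following SymPy sketch (illustrative only) applies each of these matrices to the test vector (1, 2), which is a concrete way to see the reflection, dilation, rotation, and shear at work.

    from sympy import Matrix, Rational

    x = Matrix([1, 2])
    transformations = {
        "reflection": Matrix([[-1, 0], [0, 1]]),
        "dilation":   Matrix([[Rational(3, 2), 0], [0, Rational(3, 2)]]),  # 1.5 as an exact fraction
        "identity":   Matrix([[1, 0], [0, 1]]),
        "rotation":   Matrix([[0, -1], [1, 0]]),
        "shear":      Matrix([[1, 1], [0, 1]]),
    }
    for name, A in transformations.items():
        print(name, (A * x).T)   # (-1, 2), (3/2, 3), (1, 2), (-2, 1), (3, 2)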

Example (Questions about a matrix transformation). Let

    A = [ 1  1 ]
        [ 0  1 ]
        [ 1  1 ],

and let T(x) = Ax, so T: R2 → R3 is a matrix transformation.

1. Evaluate T(u) for u = (3, 4).

2. Let b = (7, 5, 7). Find a vector v in R2 such that T(v) = b. Is there more than one?

3. Does there exist a vector w in R3 such that there is more than one v in R2 with T(v) = w?

4. Find a vector w in R3 which is not in the range of T.

Note: all of the above questions are intrinsic to the transformation T: they make sense to ask whether or not T is a matrix transformation. See the next example. As T is in fact a matrix transformation, all of these questions will translate into questions about the corresponding matrix A.

Solution.
1. We evaluate T(u) by substituting the definition of T in terms of matrix multiplication:

    T(3, 4) = A (3, 4) = (3 + 4, 4, 3 + 4) = (7, 4, 7).

2. We want to find a vector v such that b = T(v) = Av. In other words, we want to solve the matrix equation Av = b. We form an augmented matrix and row reduce:

    [ 1  1 | 7 ]              [ 1  0 | 2 ]
    [ 0  1 | 5 ]  --RREF-->   [ 0  1 | 5 ]
    [ 1  1 | 7 ]              [ 0  0 | 0 ].

This gives x = 2 and y = 5, so that there is a unique vector v = (2, 5) such that T(v) = b.

3. Translation: is there any vector w in R3 such that the solution set of Av = w has more than one vector in it? The solution set of Av = w, if nonempty, is a translate of the solution set of Av = b above, which has one vector in it. See this key observation in Section 3.4. It follows that the solution set of Av = w can have at most one vector.

4. Translation: find a vector w such that the matrix equation Av = w is not consistent. Notice that if we take w = (1, 2, 3), then the matrix equation Av = w translates into the system of equations

    x + y = 1
        y = 2
    x + y = 3,

which is clearly inconsistent.
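
Parts 2 and 4 can be checked by row reducing the relevant augmented matrices; here is a SymPy sketch (illustrative, not part of the example).

    from sympy import Matrix

    A = Matrix([[1, 1], [0, 1], [1, 1]])
    b = Matrix([7, 5, 7])
    w = Matrix([1, 2, 3])
    print(Matrix.hstack(A, b).rref()[0])   # consistent, unique solution v = (2, 5)
    print(Matrix.hstack(A, w).rref()[0])   # a pivot in the last column: Av = w is inconsistent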

Example (Questions about a non-matrix transformation). Define a transformation T: R2 → R3 by the formula

    T(x, y) = (ln(x), cos(y), ln(x)).
1. Evaluate T(u) for u = (1, π).

2. Let b = (7, 1, 7). Find a vector v in R2 such that T(v) = b. Is there more than one?

3. Does there exist a vector w in R3 such that there is more than one v in R2
with T (v) = w?

4. Find a vector w in R3 which is not in the range of T .

Note: we asked (almost) the exact same questions about a matrix transforma-
tion in the previous example. The point of this example is to illustrate the fact
that the questions make sense for a transformation that has no hope of coming
from a matrix. In this case, these questions do not translate into questions about
a matrix; they have to be answered in some other way.

Solution.

1. We evaluate T(u) using the defining formula:

    T(1, π) = (ln(1), cos(π), ln(1)) = (0, −1, 0).

2. We have

    T(e^7, 2πn) = (ln(e^7), cos(2πn), ln(e^7)) = (7, 1, 7)

for any whole number n. Hence there are infinitely many such vectors.

3. The vector b from the previous part is an example of such a vector.

4. Since cos(y) is always between −1 and 1, the vector

    w = (0, 2, 0)

is not in the range of T.



4.2 One-to-one and Onto Transformations

Objectives

1. Understand the definitions of one-to-one and onto transformations.

2. Recipes: verify whether a matrix transformation is one-to-one and/or onto.

3. Pictures: examples of matrix transformations that are/are not one-to-one


and/or onto.

4. Vocabulary words: one-to-one, onto.

In this section, we discuss two of the most basic questions one can ask about a
transformation: whether it is one-to-one and/or onto. For a matrix transformation,
we translate these questions into the language of matrices.

4.2.1 One-to-one Transformations


Definition (One-to-one transformations). A transformation T : Rn → Rm is one-
to-one if, for every vector b in Rm , the equation T (x) = b has at most one solution
x in Rn .

Remark. Another word for one-to-one is injective.

Here are some equivalent ways of saying “T is one-to-one:”

• The transformation T takes different vectors in Rn to different vectors in Rm .

• Different inputs of T have different outputs.

• If T (u) = T (v) then u = v.


Here are some equivalent ways of saying “T is not one-to-one:”

• There exists some vector b in Rm such that the equation T (x) = b has more
than one solution x in Rn .

• There are two different inputs of T with the same output.

• There exist vectors u, v such that u ≠ v but T (u) = T (v).


Example (Functions of one variable). The function sin: R → R is not one-to-one.


Indeed, sin(0) = sin(π) = 0, so the inputs 0 and π have the same output 0. In fact,
the equation sin(x) = 0 has infinitely many solutions . . . , −2π, −π, 0, π, 2π, . . ..
The function exp: R → R defined by exp(x) = e x is one-to-one. Indeed, if
T (x) = T ( y), then e x = e y , so ln(e x ) = ln(e y ), and hence x = y. The equation
T (x) = A has one solution x = ln(A) if A > 0, and it has zero solutions if A ≤ 0.
The function f : R → R defined by f (x) = x^3 is one-to-one. Indeed, if f (x) = f (y) then x^3 = y^3; taking cube roots gives x = y. In other words, the only solution of f (x) = A is x = A^(1/3).

The function f : R → R defined by f (x) = x 3 − x is not one-to-one. Indeed,


f (0) = f (1) = f (−1) = 0, so the inputs 0, 1, −1 all have the same output 0. The
solutions of the equation x 3 − x = 0 are exactly the roots of f (x) = x(x −1)(x +1),
and this equation has three roots.
The function f : R → R defined by f (x) = x 2 is not one-to-one. Indeed, f (1) =
1 = f (−1), so the inputs 1 and −1 have the same outputs. The function g : R → R
defined by g(x) = |x| is not one-to-one for the same reason.

Example (A real-world transformation: robotics). Suppose you are building a


robot arm with three joints that can move its hand around a plane, as in this
example in Section 4.1.
[Figure: a robot arm with three joint angles θ, φ, ψ; the hand sits at position (x, y) = f(θ, φ, ψ).]

Define a transformation f : R3 → R2 as follows: f (θ , φ, ψ) is the (x, y) position


of the hand when the joints are rotated by angles θ , φ, ψ, respectively. Asking
whether f is one-to-one is the same as asking whether there is more than one way
to move the arm in order to reach your coffee cup. (There is.)

Theorem (One-to-one matrix transformations). Let A be an m × n matrix, and let


T (x) = Ax be the associated matrix transformation. The following statements are
equivalent:

1. T is one-to-one.

2. T (x) = b has at most one solution for every b in Rm .

3. Ax = b has a unique solution or is inconsistent for every b in Rm .

4. Ax = 0 has only the trivial solution.

5. The columns of A are linearly independent.

6. A has a pivot in every column.

Proof. Statements 1, 2, and 3 are translations of each other. The equivalence of


3 and 4 follows from this key observation in Section 3.4: if Ax = 0 has only one
solution, then Ax = b has only one solution as well, or it is inconsistent. The
equivalence of 4, 5, and 6 is a consequence of this theorem in Section 3.5.

Recall that equivalent means that, for a given matrix, either all of the statements
are true simultaneously, or they are all false.
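
In terms of the theorem, checking whether a matrix transformation is one-to-one amounts to checking for a pivot in every column. A short SymPy sketch (ours, not the text's):

    from sympy import Matrix

    A = Matrix([[1, 0], [0, 1], [1, 0]])
    R, pivots = A.rref()
    print(len(pivots) == A.cols)   # True: a pivot in every column, so T(x) = Ax is one-to-one
    print(A.nullspace())           # []: Ax = 0 has only the trivial solution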

Example (A matrix transformation that is one-to-one). Let A be the matrix

    A = [ 1  0 ]
        [ 0  1 ]
        [ 1  0 ],

and define T: R2 → R3 by T(x) = Ax. Is T one-to-one?


132 CHAPTER 4. LINEAR TRANSFORMATIONS AND MATRIX ALGEBRA

Solution. The reduced row echelon form of A is


 
1 0
0 1.
0 0

Hence A has a pivot in every column, so T is one-to-one.
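For readers who want to experiment on a computer, here is a short sketch in Python
with NumPy (an assumed, illustrative choice; the book itself does not rely on any
software). It tests criterion 6 numerically, using the fact that a matrix has a pivot
in every column exactly when its rank equals its number of columns; the helper name
is_one_to_one is ours.

    import numpy as np

    # Criterion 6: T(x) = Ax is one-to-one exactly when A has a pivot in every
    # column, i.e. when the rank of A equals the number of columns.
    A = np.array([[1, 0],
                  [0, 1],
                  [1, 0]])

    def is_one_to_one(A):
        return np.linalg.matrix_rank(A) == A.shape[1]

    print(is_one_to_one(A))   # True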

Use this link to view the online demo

A picture of the matrix transformation T . As you drag the input vector on the left
side, you see that different input vectors yield different output vectors on the right
side.

Example (A matrix transformation that is not one-to-one). Let


 
1 0 0
A = 0 1 0,
0 0 0

and define T : R3 → R3 by T (x) = Ax. Is T one-to-one? If not, find two different


vectors u, v such that T (u) = T (v).
Solution. The matrix A is already in reduced row echelon form. It does not have
a pivot in every column, so T is not one-to-one. Therefore, we know from the
theorem that Ax = 0 has nontrivial solutions. If v is a nontrivial (i.e., nonzero)
solution of Av = 0, then T (v) = Av = 0 = A0 = T (0), so 0 and v are different
vectors with the same output. For instance,
      
0 1 0 0 0 0
T 0 =  0 1 0  0 = 0 = T 0 .
1 0 0 1 1 0

Geometrically, T is projection onto the x y-plane. Any two vectors that lie on
the same vertical line will have the same projection. For b on the x y-plane, the
solution set of T (x) = b is the entire vertical line containing b. In particular,
T (x) = b has infinitely many solutions.

Use this link to view the online demo

A picture of the matrix transformation T . The transformation T projects a vector


onto the x y-plane. The violet line is the solution set of T (x) = 0. If you drag x
along the violet line, the output T (x) = Ax does not change. This demonstrates that
T (x) = 0 has more than one solution, so T is not one-to-one.

Example (A matrix transformation that is not one-to-one). Let A be the matrix


 ‹
1 1 0
A= ,
0 1 1

and define T : R3 → R2 by T (x) = Ax. Is T one-to-one? If not, find two different


vectors u, v such that T (u) = T (v).
Solution. The reduced row echelon form of A is
 ‹
1 0 −1
.
0 1 1

There is not a pivot in every column, so T is not one-to-one. Therefore, we know


from the theorem that Ax = 0 has nontrivial solutions. If v is a nontrivial (i.e.,
nonzero) solution of Av = 0, then T (v) = Av = 0 = A0 = T (0), so 0 and v are
different vectors with the same output. In order to find a nontrivial solution, we
find the parametric form of the solutions of Ax = 0 using the reduced matrix above:

    x − z = 0                 x =  z
                   =⇒
    y + z = 0                 y = −z

The free variable is z. Taking z = 1 gives the nontrivial solution

         1      1 1 0     1      0          0
    T   −1   =  0 1 1    −1   =  0   =  T   0  .
         1                1                 0

Use this link to view the online demo

A picture of the matrix transformation T . The violet line is the null space of A, i.e.,
solution set of T (x) = 0. If you drag x along the violet line, the output T (x) = Ax
does not change. This demonstrates that T (x) = 0 has more than one solution, so T
is not one-to-one.

Example (A matrix transformation that is not one-to-one). Let


 ‹
1 −1 2
A= ,
−2 2 −4

and define T : R3 → R2 by T (x) = Ax. Is T one-to-one? If not, find two different


vectors u, v such that T (u) = T (v).
Solution. The reduced row echelon form of A is
 ‹
1 −1 2
.
0 0 0

There is not a pivot in every column, so T is not one-to-one. Therefore, we know


from the theorem that Ax = 0 has nontrivial solutions. If v is a nontrivial (i.e.,
nonzero) solution of Av = 0, then T (v) = Av = 0 = A0 = T (0), so 0 and v are
different vectors with the same output. In order to find a nontrivial solution, we
find the parametric form of the solutions of Ax = 0 using the reduced matrix above:

x − y + 2z = 0 =⇒ x = y − 2z.

The free variables are y and z. Taking y = 1 and z = 0 gives the nontrivial solution

         1       1 −1  2     1      0          0
    T    1   =  −2  2 −4     1   =  0   =  T   0  .
         0                   0                 0

Use this link to view the online demo

A picture of the matrix transformation T . The violet plane is the solution set of
T (x) = 0. If you drag x along the violet plane, the output T (x) = Ax does not
change. This demonstrates that T (x) = 0 has more than one solution, so T is not
one-to-one.

The previous three examples can be summarized as follows. Suppose that


T (x) = Ax is a matrix transformation that is not one-to-one. By the theorem,
there is a nontrivial solution of Ax = 0. This means that the null space of A is not
the zero space. All of the vectors in the null space are solutions to T (x) = 0. If
you compute a nonzero vector v in the null space (by row reducing and finding
the parametric form of the solution set of Ax = 0, for instance), then v and 0 both
have the same output: T (v) = Av = 0 = T (0).

Wide matrices do not have one-to-one transformations. If T : Rn → Rm is a


one-to-one matrix transformation, what can we say about the relative sizes of n
and m?
The matrix associated to T has n columns and m rows. Each row and each
column can only contain one pivot, so in order for A to have a pivot in every
column, it must have at least as many rows as columns: n ≤ m.
This says that, for instance, R3 is “too big” to admit a one-to-one linear trans-
formation into R2 .
Note that there exist tall matrices that are not one-to-one: for example,

    1 0 0
    0 1 0
    0 0 0
    0 0 0

does not have a pivot in every column.



4.2.2 Onto Transformations


Definition (Onto transformations). A transformation T : Rn → Rm is onto if, for
every vector b in Rm , the equation T (x) = b has at least one solution x in Rn .

Remark. Another word for onto is surjective.

Here are some equivalent ways of saying “T is onto:”

• The range of T is equal to the codomain of T .

• Every vector in the codomain is the output of some input vector.

[Figure: an onto transformation T : Rn → Rm. The range of T fills the entire codomain Rm.]

Here are some equivalent ways of saying “T is not onto:”

• The range of T is smaller than the codomain of T .

• There exists a vector b in Rm such that the equation T (x) = b does not have
a solution.

• There is a vector in the codomain that is not the output of any input vector.

[Figure: a transformation T : Rn → Rm that is not onto. The range of T is smaller than the codomain Rm.]

Example (Functions of one variable). The function sin: R → R is not onto. Indeed,
taking b = 2, the equation sin(x) = 2 has no solution. The range of sin is the closed
interval [−1, 1], which is smaller than the codomain R.
The function exp: R → R defined by exp(x) = e x is not onto. Indeed, taking
b = −1, the equation exp(x) = e x = −1 has no solution. The range of exp is the
set (0, ∞) of all positive real numbers.
The function f : R → R defined by f (x) = x 3 is onto. Indeed, the equation
f (x) = x 3 = b always has the solution x = ∛b.

The function f : R → R defined by f (x) = x 3 − x is onto. Indeed, the solutions


of the equation f (x) = x 3 − x = b are the roots of the polynomial x 3 − x − b; as
this is a cubic polynomial, it has at least one real root.
Example (A real-world transformation: robotics). The robot arm transformation
of this example is not onto. The robot cannot reach objects that are very far away.
Theorem (Onto matrix transformations). Let A be an m × n matrix, and let T (x) =
Ax be the associated matrix transformation. The following statements are equivalent:
1. T is onto.

2. T (x) = b has at least one solution for every b in Rm .

3. Ax = b is consistent for every b in Rm .

4. The columns of A span Rm .

5. A has a pivot in every row.


Proof. Statements 1, 2, and 3 are translations of each other. The equivalence of 3,
4, and 5 follows from this theorem in Section 3.3.
Example (A matrix transformation that is onto). Let A be the matrix
 ‹
1 1 0
A= ,
0 1 1

and define T : R3 → R2 by T (x) = Ax. Is T onto?


Solution. The reduced row echelon form of A is
 ‹
1 0 −1
.
0 1 1
Hence A has a pivot in every row, so T is onto.
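The same kind of numerical check works for the onto criterion: a matrix has a pivot in
every row exactly when its rank equals its number of rows. As before, this Python/NumPy
sketch is an illustration under those assumptions, not part of the exposition.

    import numpy as np

    # Criterion 5: T(x) = Ax is onto exactly when A has a pivot in every row,
    # i.e. when the rank of A equals the number of rows.
    A = np.array([[1, 1, 0],
                  [0, 1, 1]])

    def is_onto(A):
        return np.linalg.matrix_rank(A) == A.shape[0]

    print(is_onto(A))   # True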

Use this link to view the online demo

A picture of the matrix transformation T . Every vector on the right side is the output
of T for a suitable input. If you drag b, the demo will find an input vector x with
output b.

Example (A matrix transformation that is not onto). Let A be the matrix


 
1 0
A = 0 1,
1 0

and define T : R2 → R3 by T (x) = Ax. Is T onto? If not, find a vector b in R3 such


that T (x) = b has no solution.
Solution. The reduced row echelon form of A is
 
1 0
0 1.
0 0

Hence A does not have a pivot in every row, so T is not onto. In fact, since

         x      1 0    x       x
    T        =  0 1         =  y  ,
         y      1 0    y       x

we see that for every output vector of T , the third entry is equal to the first. There-
fore,
b = (1, 2, 3)
is not in the range of T .

Use this link to view the online demo

A picture of the matrix transformation T . The range of T is the violet plane on the
right; this is smaller than the codomain R3 . If you drag b off of the violet plane, then
the equation Ax = b becomes inconsistent; this means T (x) = b has no solution.

Example (A matrix transformation that is not onto). Let


 ‹
1 −1 2
A= ,
−2 2 −4

and define T : R3 → R2 by T (x) = Ax. Is T onto? If not, find a vector b in R2 such


that T (x) = b has no solution.
Solution. The reduced row echelon form of A is
 ‹
1 −1 2
.
0 0 0

There is not a pivot in every row, so T is not onto. The range of T is the column
space of A, which is equal to

            1      −1       2                1
    Span   −2   ,   2   ,  −4     =  Span   −2    ,

since all three columns of A are collinear. Therefore, any vector not on the line
through (1, −2) is not in the range of T . For instance, if b = (1, 1) then T (x) = b
has no solution.

Use this link to view the online demo

A picture of the matrix transformation T . The range of T is the violet line on the
right; this is smaller than the codomain R2 . If you drag b off of the violet line, then
the equation Ax = b becomes inconsistent; this means T (x) = b has no solution.

The previous two examples illustrate the following observation. Suppose that
T (x) = Ax is a matrix transformation that is not onto. This means that range(T ) =
Col(A) is a subspace of Rm of dimension less than m: perhaps it is a line in the
plane, or a line in 3-space, or a plane in 3-space, etc. Whatever the case, the range
of T is very small compared to the codomain. To find a vector not in the range of
T , choose a random nonzero vector b in Rm ; you have to be extremely unlucky
to choose a vector that is in the range of T . Of course, to check whether a given
vector b is in the range of T , you have to solve the matrix equation Ax = b to see
whether it is consistent.

Tall matrices do not have onto transformations. If T : Rn → Rm is an onto matrix


transformation, what can we say about the relative sizes of n and m?
The matrix associated to T has n columns and m rows. Each row and each
column can only contain one pivot, so in order for A to have a pivot in every row,
it must have at least as many columns as rows: m ≤ n.
This says that, for instance, R2 is “too small” to admit an onto linear transfor-
mation to R3 .
Note that there exist wide matrices that are not onto: for example,
 ‹
1 −1 2
−2 2 −4

does not have a pivot in every row.

4.2.3 Comparison
The above expositions of one-to-one and onto transformations were written to
mirror each other. However, “one-to-one” and “onto” are complementary notions:
neither one implies the other. Below we have provided a chart for comparing the
two. In the chart, A is an m × n matrix, and T : Rn → Rm is the matrix transforma-
tion T (x) = Ax.

    T is one-to-one                            T is onto

    T (x) = b has at most one solution         T (x) = b has at least one solution
    for every b.                               for every b.

    The columns of A are linearly              The columns of A span Rm .
    independent.

    A has a pivot in every column.             A has a pivot in every row.

Example (Functions of one variable). The function sin: R → R is neither one-to-


one nor onto.
The function exp: R → R defined by exp(x) = e x is one-to-one but not onto.
The function f : R → R defined by f (x) = x 3 is one-to-one and onto.
The function f : R → R defined by f (x) = x 3 − x is onto but not one-to-one.
Example (A matrix transformation that is neither one-to-one nor onto). Let
 ‹
1 −1 2
A= ,
−2 2 −4

and define T : R3 → R2 by T (x) = Ax. This transformation is neither one-to-one


nor onto, as we saw in this example and this example.

Use this link to view the online demo

A picture of the matrix transformation T . The violet plane is the solution set of
T (x) = 0. If you drag x along the violet plane, the output T (x) = Ax does not
change. This demonstrates that T (x) = 0 has more than one solution, so T is not
one-to-one. The range of T is the violet line on the right; this is smaller than the
codomain R2 . If you drag b off of the violet line, then the equation Ax = b becomes
inconsistent; this means T (x) = b has no solution.

Example (A matrix transformation that is one-to-one but not onto). Let A be the
matrix  
1 0
A = 0 1,
1 0
and define T : R2 → R3 by T (x) = Ax. This transformation is one-to-one but not
onto, as we saw in this example and this example.

Use this link to view the online demo

A picture of the matrix transformation T . The range of T is the violet plane on


the right; this is smaller than the codomain R3 . If you drag b off of the violet plane,
then the equation Ax = b becomes inconsistent; this means T (x) = b has no solution.
However, for b lying on the violet plane, there is a unique vector x such that T (x) = b.

Example (A matrix transformation that is onto but not one-to-one). Let A be the
matrix  ‹
1 1 0
A= ,
0 1 1

and define T : R3 → R2 by T (x) = Ax. This transformation is onto but not one-to-
one, as we saw in this example and this example.

Use this link to view the online demo

A picture of the matrix transformation T . Every vector on the right side is the output of
T for a suitable input. If you drag b, the demo will find an input vector x with output
b. The violet line is the null space of A, i.e., solution set of T (x) = 0. If you drag x
along the violet line, the output T (x) = Ax does not change. This demonstrates that
T (x) = 0 has more than one solution, so T is not one-to-one.

Example (Matrix transformations that are both one-to-one and onto). In this sub-
section in Section 4.1, we discussed the transformations defined by several 2 × 2
matrices, namely:

                        −1  0
    Reflection:   A =
                         0  1

                       1.5   0
    Dilation:     A =
                         0 1.5

                         1  0
    Identity:     A =
                         0  1

                         0 −1
    Rotation:     A =
                         1  0

                         1  1
    Shear:        A =          .
                         0  1

In each case, the associated matrix transformation T (x) = Ax is both one-to-one


and onto. A 2 × 2 matrix A has a pivot in every row if and only if it has a pivot in
every column (if and only if it has two pivots), so in this case, the transformation
T is one-to-one if and only if it is onto. One can see geometrically that they are
onto (what is the input for a given output?), or that they are one-to-one using the
fact that the columns of A are not collinear.

Use this link to view the online demo

Counterclockwise rotation by 90◦ is a matrix transformation. This transformation is


onto (if b is a vector in R2 , then it is the output vector for the input vector which is
b rotated clockwise by 90◦ ), and it is one-to-one (different vectors rotate to different
vectors).

One-to-one is the same as onto for square matrices. We observed in the previ-
ous example that a square matrix has a pivot in every row if and only if it has a
pivot in every column. Therefore, a matrix transformation T from Rn to itself is
one-to-one if and only if it is onto: in this case, the two notions are equivalent.
Conversely, by this note and this note, if a matrix transformation T : Rm → Rn
is both one-to-one and onto, then m = n.

Note that in general, a transformation T is both one-to-one and onto if and


only if T (x) = b has exactly one solution for all b in Rm .

4.3 Linear Transformations

Objectives

1. Learn how to verify that a transformation is linear, or prove that a transfor-


mation is not linear.

2. Understand the relationship between linear transformations and matrix trans-


formations.

3. Recipe: compute the matrix of a linear transformation.

4. Theorem: linear transformations and matrix transformations.

5. Notation: the standard coordinate vectors e1 , e2 , . . ..

6. Vocabulary words: linear transformation, standard matrix, identity ma-


trix.

In Section 4.1, we studied the geometry of matrices by regarding them as func-


tions, i.e., by considering the associated matrix transformations. We defined some
vocabulary (domain, codomain, range), and asked a number of natural questions
about a transformation. For a matrix transformation, these translate into questions
about matrices, which we have many tools to answer.
In this section, we make a change in perspective. Suppose that we are given a
transformation that we would like to study. If we can prove that our transformation
is a matrix transformation, then we can use linear algebra to study it. This raises
two important questions:

1. How can we tell if a transformation is a matrix transformation?

2. If our transformation is a matrix transformation, how do we find its matrix?



For example, we saw in this example in Section 4.1 that the matrix transformation

                                         0 −1
    T : R2 −→ R2          T (x) =                x
                                         1  0
is a counterclockwise rotation of the plane by 90◦ . However, we could have defined
T in this way:

T : R2 −→ R2 T (x) = the counterclockwise rotation of x by 90◦ .

Given this definition, it is not at all obvious a priori that T is a matrix transforma-
tion, or what matrix it is associated to.

4.3.1 Linear Transformations: Definition


In this section, we introduce the class of transformations that come from matrices.
Definition. A linear transformation is a transformation T : Rn → Rm satisfying

T (u + v) = T (u) + T (v)
T (cu) = cT (u)

for all vectors u, v in Rn and all scalars c.


Let T : Rn → Rm be a matrix transformation: T (x) = Ax for an m × n matrix A.
By this proposition in Section 3.3, we have

T (u + v) = A(u + v) = Au + Av = T (u) + T (v)


T (cu) = A(cu) = cAu = cT (u)

for all vectors u, v in Rn and all scalars c. Since a matrix transformation satisfies
the two defining properties, it is a linear transformation.
We will see in the next subsection that a linear transformation is a matrix trans-
formation; we just haven’t computed its matrix yet.
Facts about linear transformations. Let T : Rn → Rm be a linear transformation.
Then:
1. T (0) = 0.

2. For any vectors v1 , v2 , . . . , vk in Rn and scalars c1 , c2 , . . . , ck , we have



T c1 v1 + c2 v2 + · · · + ck vk = c1 T (v1 ) + c2 T (v2 ) + · · · + ck T (vk ).

Proof.
1. Since 0 = −0, we have

T (0) = T (−0) = −T (0)

by the second defining property. The only vector w such that w = −w is the
zero vector.

2. Let us suppose for simplicity that k = 2. Then

T (c1 v1 + c2 v2 ) = T (c1 v1 ) + T (c2 v2 ) first property


= c1 T (v1 ) + c2 T (v2 ) second property.

In engineering, the second fact is called the superposition principle. For exam-
ple, T (cu+d v) = cT (u)+d T (v) for any vectors u, v and any scalars c, d. To restate
the first fact:

A linear transformation necessarily takes the zero vector to the zero vector.

Example (A non-linear transformation). Define T : R → R by T (x) = x + 1. Is T


a linear transformation?
Solution. We have T (0) = 0+1 = 1. Since any linear transformation necessarily
takes zero to zero by the above important note, we conclude that T is not linear.
Note: in this case, it was not necessary to check explicitly that T does not satisfy
both defining properties: since T (0) = 0 is a consequence of these properties, at
least one of them must not be satisfied. (In fact, this T satisfies neither.)

Example (Verifying linearity: dilation). Define T : R2 → R2 by T (x) = 1.5x. Verify


that T is linear.
Solution. We have to check the defining properties for all vectors u, v and all
scalars c. In other words, we have to treat u, v, and c as unknowns. The only thing
we are allowed to use is the definition of T .

T (u + v) = 1.5(u + v) = 1.5u + 1.5v = T (u) + T (v)


T (cu) = 1.5(cu) = c(1.5u) = cT (u).

Since T satisfies both defining properties, T is linear.


Note: we know from this example in Section 4.1 that T is a matrix transforma-
tion: in fact,  ‹
1.5 0
T (x) = x.
0 1.5
Since a matrix transformation is a linear transformation, this is another proof that
T is linear.

Example (Verifying linearity: rotation). Define T : R2 → R2 by

T (x) = the vector x rotated counterclockwise by the angle θ .

Verify that T is linear.


Solution. Since T is defined geometrically, we give a geometric argument. For
the first property, T (u) + T (v) is the sum of the vectors obtained by rotating u

and v by θ . On the other side of the equation, T (u + v) is the vector obtained by


rotating the sum of the vectors u and v. But it does not matter whether we sum
or rotate first, as the following picture shows.

[Figure: rotating u and v by θ and then adding gives the same result as adding first and then rotating: T(u) + T(v) = T(u + v).]

For the second property, cT (u) is the vector obtained by rotating u by the angle
θ , then changing its length by a factor of c (reversing its direction if c < 0). On
the other hand, T (cu) first changes the length of u by a factor of c, then rotates.
But it does not matter in which order we do these two operations.

[Figure: scaling u by c and then rotating gives the same result as rotating first and then scaling: T(cu) = cT(u).]

This verifies that T is a linear transformation. We will find its matrix in the
next subsection. Note however that it is not at all obvious that T can be expressed
as multiplication by a matrix.

Example (A transformation defined by a formula). Define T : R2 → R3 by the
formula
             x        3x − y
        T         =     y     .
             y          x

Verify that T is linear.


Solution. We have to check the defining properties for all vectors u, v and all
scalars c. In other words, we have to treat u, v, and c as unknowns; the only thing
we are allowed to use is the definition of T . Since T is defined in terms of the

coordinates of u, v, we need to give those names as well; say u = (x 1 , y1 ) and
v = (x 2 , y2 ).
For the first property, we have

          x1      x2          x1 + x2        3(x 1 + x 2 ) − ( y1 + y2 )
     T         +        = T             =            y1 + y2
          y1      y2          y1 + y2                 x1 + x2

                              (3x 1 − y1 ) + (3x 2 − y2 )
                        =              y1 + y2
                                       x1 + x2

                              3x 1 − y1       3x 2 − y2          x1         x2
                        =        y1        +      y2       = T        + T        .
                                 x1               x2             y1         y2

For the second property,

            x1           c x1         3(c x 1 ) − (c y1 )
     T    c       = T            =          c y1
            y1           c y1               c x1

                         c(3x 1 − y1 )         3x 1 − y1           x1
                     =        c y1        = c      y1       = cT        .
                              c x1                 x1              y1

Since T satisfies the defining properties, T is a linear transformation.


Note: we will see in this example below that

          x       3 −1     x
     T        =   0  1          .
          y       1  0     y

Hence T is in fact a matrix transformation.
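Since the defining properties must hold for all vectors and all scalars, a computation
can never prove linearity, but it can catch mistakes. The following Python/NumPy sketch
(an assumed, illustrative setup, not part of the exposition) spot-checks the two
properties for this T on a few random inputs.

    import numpy as np

    # A numerical spot check (not a proof!) of the two defining properties for
    # T(x, y) = (3x - y, y, x), on a few randomly chosen vectors and scalars.
    def T(v):
        x, y = v
        return np.array([3 * x - y, y, x])

    rng = np.random.default_rng(0)
    for _ in range(5):
        u = rng.standard_normal(2)
        v = rng.standard_normal(2)
        c = rng.standard_normal()
        assert np.allclose(T(u + v), T(u) + T(v))   # first defining property
        assert np.allclose(T(c * u), c * T(u))      # second defining property
    print("both defining properties hold on all samples")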

One can show that, if a transformation is defined by formulas in the coordi-


nates as in the above example, then the transformation is linear if and only if each
coordinate is a linear expression in the variables with no constant term.

Example (A translation). Define T : R3 → R3 by


 
1
T (x) = x + 2 .
3

This kind of transformation is called a translation. As in a previous example, this


T is not linear, because T (0) is not the zero vector.

Example (More non-linear transformations). Verify that the following transfor-


mations from R2 to R2 are not linear:

          x       |x|            x      xy            x       2x + 1
    T1        =          T2         =          T3         =            .
          y        y             y       y            y       x − 2y

Solution. In order to verify that a transformation T is not linear, we have to


show that T does not satisfy at least one of the two defining properties. For the
first, the negation of the statement “T (u + v) = T (u) + T (v) for all vectors u, v” is
“there exists at least one pair of vectors u, v such that T (u + v) ≠ T (u) + T (v).”
In other words, it suffices to find one example of a pair of vectors u, v such that
T (u + v) ≠ T (u) + T (v). Likewise, for the second, the negation of the statement
“T (cu) = cT (u) for all vectors u and all scalars c” is “there exists some vector u
and some scalar c such that T (cu) ≠ cT (u).” In other words, it suffices to find one
vector u and one scalar c such that T (cu) ≠ cT (u).
For the first transformation, we note that

            1            −1        |−1|       1
    T1   −       = T1         =           =
            0             0          0        0

but that
              1          |1|         1        −1
      −T1         = −         = −         =        .
              0           0          0         0

Therefore, this transformation does not satisfy the second property.
For the second transformation, we note that

             1            2        2·2        4
    T2    2      = T2         =           =
             1            2         2         2

but that
             1         1·1          1        2
     2T2         = 2         = 2         =       .
             1          1           1        2

Therefore, this transformation does not satisfy the second property.
For the third transformation, we observe that

           0        2(0) + 1        1        0
    T3         =               =         ≠       .
           0        0 − 2(0)        0        0

Since T3 does not take the zero vector to the zero vector, it cannot be linear.

When deciding whether a transformation T is linear, generally the first thing


to do is to check whether T (0) = 0; if not, T is automatically not linear. Note
however that the non-linear transformations T1 and T2 of the above example do
take the zero vector to the zero vector.

Challenge. Find an example of a transformation that satisfies the first property of


linearity but not the second.

4.3.2 The Standard Coordinate Vectors


In the next subsection, we will present the relationship between linear transfor-
mations and matrix transformations. Before doing so, we need the following im-
portant notation.

Standard coordinate vectors. The standard coordinate vectors in Rn are


the n vectors

            1            0                       0            0
            0            1                       0            0
    e1 =    ⋮   ,  e2 =  ⋮   ,  . . . ,  en−1 =  ⋮   ,  en =   ⋮   .
            0            0                       1            0
            0            0                       0            1

The ith entry of ei is equal to 1, and the other entries are zero.
From now on, for the rest of the book, we will use the symbols e1 , e2 , . . . to
denote the standard coordinate vectors.

There is an ambiguity in this notation: one has to know from context that e1 is
meant to have n entries. That is, the vectors

                          1
        1
               and        0
        0
                          0

may both be denoted e1 , depending on whether we are discussing vectors in R2 or


in R3 .
The standard coordinate vectors in R2 and R3 are pictured below.

[Figure: the standard coordinate vectors e1 , e2 in R2 and e1 , e2 , e3 in R3 .]

These are the vectors of length 1 that point in the positive directions of each
of the axes.

Multiplying a matrix by the standard coordinate vectors. If A is an m× n matrix


with columns v1 , v2 , . . . , vn , then Aei = vi for each i = 1, 2, . . . , n:
 
| | |
 v1 v2 · · · vn  ei = vi .
| | |

In other words, multiplying a matrix by ei simply selects its ith column.

For example,

    1 2 3    1      1        1 2 3    0      2        1 2 3    0      3
    4 5 6    0   =  4        4 5 6    1   =  5        4 5 6    0   =  6  .
    7 8 9    0      7        7 8 9    0      8        7 8 9    1      9
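This fact is easy to confirm numerically. The sketch below, in Python with NumPy
(assumed here only for illustration), checks that A ei is the ith column of A for the
matrix in the example.

    import numpy as np

    # Multiplying A by the i-th standard coordinate vector selects the i-th column.
    A = np.array([[1, 2, 3],
                  [4, 5, 6],
                  [7, 8, 9]])
    for i in range(3):
        e_i = np.eye(3)[:, i]
        assert np.array_equal(A @ e_i, A[:, i])
    print(A @ np.eye(3)[:, 1])   # [2. 5. 8.], the second column of A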

Definition. The n × n identity matrix is the matrix I n whose columns are the n
standard coordinate vectors in Rn :

           1 0 ··· 0 0
           0 1 ··· 0 0
    I n =  ⋮ ⋮  ⋱  ⋮ ⋮
           0 0 ··· 1 0
           0 0 ··· 0 1

We will see in this example below that the identity matrix is the matrix of the
identity transformation.

4.3.3 The Matrix of a Linear Transformation


Now we can prove that every linear transformation is a matrix transformation, and
we will show how to compute the matrix.

Theorem (The matrix of a linear transformation). Let T : Rn → Rm be a linear


transformation. Let A be the m × n matrix
 
| | |
A =  T (e1 ) T (e2 ) · · · T (en )  .
| | |

Then T is the matrix transformation associated with A: that is, T (x) = Ax.

Proof. We suppose for simplicity that T is a transformation from R3 to R2 . Let A



be the matrix given in the statement of the theorem. Then

         x                1         0         0
    T    y    =  T     x  0   +  y  1   +  z  0
         z                0         0         1

              =  T ( x e1 + y e2 + z e3 )

              =  x T (e1 ) + y T (e2 ) + z T (e3 )

                    |       |       |        x
              =    T (e1 ) T (e2 ) T (e3 )    y
                    |       |       |        z

                     x
              =  A   y  .
                     z

The matrix A in the above theorem is called the standard matrix for T . The
columns of A are the vectors obtained by evaluating T on the n standard coordinate
vectors in Rn . To summarize part of the theorem:

Matrix transformations are the same as linear transformations.

Dictionary. Linear transformations are the same as matrix transformations, which


come from matrices. The correspondence can be summarized in the following
dictionary.

 
n m | | |
T: R →R
−−−→ m × n matrix A = T (e1 )
 T (e2 ) · · · T (en ) 
Linear transformation | | |

T : Rn → Rm
←−−− m × n matrix A
T (x) = Ax

Example (The matrix of a dilation). Define T : R2 → R2 by T (x) = 1.5x. Find the


standard matrix A for T .
Solution. The columns of A are obtained by evaluating T on the standard coor-
dinate vectors e1 , e2 .

                           1.5
    T (e1 ) = 1.5 e1 =
                            0                          1.5   0
                                          =⇒     A =
                            0                            0 1.5
    T (e2 ) = 1.5 e2 =
                           1.5

This is the matrix we started with in this example in Section 4.1.



Example (The matrix of a rotation). Define T : R2 → R2 by

T (x) = the vector x rotated counterclockwise by the angle θ .

Find the standard matrix for T .

Solution. The columns of A are obtained by evaluating T on the standard coor-


dinate vectors e1 , e2 . In order to compute the entries of T (e1 ) and T (e2 ), we have
to do some trigonometry.

[Figure: the vectors e1 and e2 rotated counterclockwise by the angle θ ; the rotated vectors have coordinates (cos(θ ), sin(θ )) and (− sin(θ ), cos(θ )).]

We see from the picture that

                  cos(θ )
    T (e1 ) =
                  sin(θ )                           cos(θ )  − sin(θ )
                                       =⇒    A =
                 − sin(θ )                          sin(θ )    cos(θ )
    T (e2 ) =
                  cos(θ )

We saw in the above example that the matrix for counterclockwise rotation of
the plane by an angle of θ is

         cos(θ )  − sin(θ )
    A =                      .
         sin(θ )    cos(θ )
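If you want to experiment with rotations numerically, the following Python/NumPy
sketch (illustrative; the helper name rotation_matrix is ours) builds this matrix and
rotates e1 and e2 by 90◦ .

    import numpy as np

    # The standard matrix of counterclockwise rotation by theta.
    def rotation_matrix(theta):
        return np.array([[np.cos(theta), -np.sin(theta)],
                         [np.sin(theta),  np.cos(theta)]])

    A = rotation_matrix(np.pi / 2)          # rotation by 90 degrees
    print(np.round(A @ np.array([1, 0])))   # e1 goes to (0, 1)
    print(np.round(A @ np.array([0, 1])))   # e2 goes to (-1, 0)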

Example (A transformation defined by a formula). Define T : R2 → R3 by the


formula
             x        3x − y
        T         =     y     .
             y          x

Find the standard matrix for T .



Solution. We substitute the standard coordinate vectors into the formula defin-
ing T :

                       1        3(1) − 0        3
    T (e1 ) = T            =        0       =   0
                       0            1           1
                                                                  3  −1
                                                      =⇒    A =   0   1  .
                       0        3(0) − 1       −1                 1   0
    T (e2 ) = T            =        1       =   1
                       1            0           0
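The recipe in the theorem is easy to automate: evaluate T on e1 , . . . , en and use the
outputs as columns. The sketch below does this in Python with NumPy (an assumed,
illustrative choice; standard_matrix is our own helper name) for the transformation of
this example.

    import numpy as np

    # The columns of the standard matrix are T(e1), ..., T(en),
    # illustrated for T(x, y) = (3x - y, y, x).
    def T(v):
        x, y = v
        return np.array([3 * x - y, y, x])

    def standard_matrix(T, n):
        return np.column_stack([T(e) for e in np.eye(n)])

    A = standard_matrix(T, 2)
    print(A)            # [[ 3. -1.]
                        #  [ 0.  1.]
                        #  [ 1.  0.]]
    x = np.array([2.0, 5.0])
    print(np.allclose(A @ x, T(x)))   # True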

Example (A transformation defined in steps). Let T : R3 → R3 be the linear trans-


formation that reflects over the x y-plane and then projects onto the yz-plane.
What is the standard matrix for T ?
Solution. This transformation is described geometrically, in two steps. To find
the columns of A, we need to follow the standard coordinate vectors through each
of these steps.

[Figure: e1 under the reflection over the x y-plane followed by the projection onto the yz-plane.]

Since e1 lies on the x y-plane, reflecting over the x y-plane does not move e1 .
Since e1 is perpendicular to the yz-plane, projecting e1 onto the yz-plane sends it
to zero. Therefore,  
0
T (e1 ) = 0 .

0

[Figure: e2 under the reflection over the x y-plane followed by the projection onto the yz-plane.]

Since e2 lies on the x y-plane, reflecting over the x y-plane does not move e2 .
Since e2 lies on the yz-plane, projecting onto the yz-plane does not move e2 either.
Therefore,  
0
T (e2 ) = e2 = 1 .

0

[Figure: e3 under the reflection over the x y-plane followed by the projection onto the yz-plane.]

Since e3 is perpendicular to the x y-plane, reflecting over the x y-plane takes e3


to its negative. Since −e3 lies on the yz-plane, projecting onto the yz-plane does
not move it. Therefore,  
0
T (e3 ) = −e3 =  0  .
−1
Now we have computed all three columns of A:

                   0
     T (e1 ) =     0
                   0
                                             0 0  0
                   0
     T (e2 ) =     1         =⇒     A =      0 1  0  .
                   0                         0 0 −1

                   0
     T (e3 ) =     0
                  −1

Use this link to view the online demo

Illustration of a transformation defined in steps. Click and drag the vector on the left.

Recall from this definition in Section 4.1 that the identity transformation is the
transformation IdRn : Rn → Rn defined by IdRn (x) = x for every vector x.

Example (The standard matrix of the identity transformation). Verify that the
identity transformation IdRn : Rn → Rn is linear, and compute its standard matrix.
Solution. We verify the two defining properties of linear transformations. Let
u, v be vectors in Rn . Then

IdRn (u + v) = u + v = IdRn (u) + IdRn (v).

If c is a scalar, then
IdRn (cu) = cu = c IdRn (u).
Since IdRn satisfies the two defining properties, it is a linear transformation.

Now that we know that IdRn is linear, it makes sense to compute its standard
matrix. For each standard coordinate vector ei , we have IdRn (ei ) = ei . In other
words, the columns of the standard matrix of IdRn are the standard coordinate
vectors, so the standard matrix is the identity matrix

           1 0 ··· 0 0
           0 1 ··· 0 0
    I n =  ⋮ ⋮  ⋱  ⋮ ⋮   .
           0 0 ··· 1 0
           0 0 ··· 0 1

We computed in this example that the matrix of the identity transform is the
identity matrix: for every x in Rn ,

x = IdRn (x) = I n x.

Therefore, I n x = x for all vectors x: the product of the identity matrix and a vector
is the same vector.

4.4 Matrix Multiplication

Objectives

1. Understand compositions of transformations.

2. Understand the relationship between matrix products and compositions of


matrix transformations.

3. Become comfortable doing basic algebra involving matrices.

4. Recipe: matrix multiplication (two ways).

5. Picture: composition of transformations.

6. Vocabulary word: composition.

In this section, we study compositions of transformations: that is, chaining


transformations together. The composition of matrix transformations corresponds
to a notion of multiplying two matrices together. We also discuss addition and
scalar multiplication of transformations and of matrices.

4.4.1 Transformation algebra


In this subsection we describe three operations that one can perform on transfor-
mations: addition, scalar multiplication, and composition. In the next subsection,
we will translate these operations into the language of matrices, for matrix trans-
formations.
Definition.
• Let T, U : Rn → Rm be two transformations. Their sum is the transformation
T + U : Rn → Rm defined by

(T + U)(x) = T (x) + U(x).

Note that addition of transformations is only defined when both transforma-


tions have the same domain and codomain.

• Let T : Rn → Rm be a transformation, and let c be a scalar. The scalar prod-


uct of c with T is the transformation cT : Rn → Rm defined by

(cT )(x) = c · T (x).

The sum of two transformations T, U : Rn → Rm is another transformation


called T + U; its value on an input vector x is the sum of the outputs of T and
U. Similarly, the product of T with a scalar c is another transformation called cT ;
its value on an input vector x is the vector c · T (x).
Example (Functions of one variable). Define f : R → R by f (x) = x 2 and g : R →
R by g(x) = x 3 . The sum f + g : R → R is the transformation defined by the rule

( f + g)(x) = f (x) + g(x) = x 2 + x 3 .

For instance, ( f + g)(−2) = (−2)2 + (−2)3 = −4.


Define exp: R → R by exp(x) = e x . The product 2 exp: R → R is the transfor-
mation defined by the rule

(2 exp)(x) = 2 · exp(x) = 2e x .

For instance, (2 exp)(1) = 2 · exp(1) = 2e.


Properties of addition and scalar multiplication. Let S, T, U : Rn → Rm be trans-
formations and let c, d be scalars. The following properties are easily verified:

    T + U = U + T                     S + (T + U) = (S + T ) + U
    c(T + U) = cT + cU                (c + d)T = cT + d T
    c(d T ) = (cd)T                   T + 0 = T

In one of the above properties, we used 0 to denote the transformation Rn → Rm
that is zero on every input vector: 0(x) = 0 for all x. This is called the zero
transformation.

Definition. Let T : Rn → Rm and U : R p → Rn be transformations. Their composi-


tion is the transformation T ◦ U : R p → Rm defined by

(T ◦ U)(x) = T (U(x)).

Composing two transformations means chaining them together: T ◦ U is the


transformation that first applies U, then applies T (note the order of operations).
More precisely, to evaluate T ◦ U on an input vector x, first you evaluate U(x),
then you take this output vector of U and use it as an input vector of T : that is,
(T ◦ U)(x) = T (U(x)). Of course, this only makes sense when the outputs of U
are valid inputs of T .

[Figure: the composition T ◦ U : Rp → Rm. The input x in Rp is sent by U to U(x) in Rn, which is then sent by T to T ◦ U(x) = T(U(x)) in Rm.]

Here is a picture of the composition T ◦ U as a “machine” that first runs U, then


takes its output and feeds it into T ; there is a similar picture in this subsection in
Section 4.1.

[Figure: T ◦ U drawn as a machine that feeds the output of U into T.]

Domain and codomain of a composition.

• In order for T ◦ U to be defined, the codomain of U must equal the


domain of T .

• The domain of T ◦ U is the domain of U.

• The codomain of T ◦ U is the codomain of T .



Example (Functions of one variable). Define f : R → R by f (x) = x 2 and g : R →


R by g(x) = x 3 . The composition f ◦ g : R → R is the transformation defined by
the rule
f ◦ g(x) = f (g(x)) = f (x 3 ) = (x 3 )2 = x 6 .
For instance, f ◦ g(−2) = f (−8) = 64.

Interactive: A composition of matrix transformations. Define T : R3 → R2 and


U : R2 → R3 by

                1 1 0                            1 0
    T (x) =             x     and     U(x) =     0 1    x.
                0 1 1                            1 0

Their composition is a transformation T ◦ U : R2 → R2 ; it turns out to be the matrix
transformation associated to the 2 × 2 matrix whose entries are all equal to 1.
Use this link to view the online demo

A composition of two matrix transformations, i.e., a transformation performed in


two steps. On the left is the domain of U/the domain of T ◦ U; in the middle is the
codomain of U/the domain of T , and on the right is the codomain of T /the codomain
of T ◦ U. The vector x is the input of U and of T ◦ U; the vector in the middle is the
output of U/the input of T , and the vector on the right is the output of T /of T ◦ U.
Click and drag x.

Interactive: A transformation defined in steps. Let S : R3 → R3 be the linear


transformation that first reflects over the x y-plane and then projects onto the yz-
plane, as in this example in Section 4.3. The transformation S is the composition
T ◦ U, where U : R3 → R3 is the transformation that reflects over the x y-plane,
and T : R3 → R3 is the transformation that projects onto the yz-plane.

Use this link to view the online demo

Illustration of a transformation defined in steps. On the left is the domain of U/the


domain of S; in the middle is the codomain of U/the domain of T , and on the right
is the codomain of T /the codomain of S. The vector u is the input of U and of S; the
vector in the middle is the output of U/the input of T , and the vector on the right is
the output of T /of S. Click and drag u.

Interactive: A transformation defined in steps. Let S : R3 → R3 be the linear


transformation that first projects onto the x y-plane, and then projects onto the
xz-plane. The transformation S is the composition T ◦ U, where U : R3 → R3 is
the transformation that projects onto the x y-plane, and T : R3 → R3 is the trans-
formation that projects onto the xz-plane.

Use this link to view the online demo

Illustration of a transformation defined in steps. Note that projecting onto the x y-


plane, followed by projecting onto the xz-plane, is the projection onto the x-axis.

Recall from this definition in Section 4.1 that the identity transformation is the
transformation IdRn : Rn → Rn defined by IdRn (x) = x for every vector x.
Properties of composition. Let S, T, U be transformations and let c be a scalar.
Suppose that T : Rn → Rm , and that in each of the following identities, the do-
mains and the codomains are compatible when necessary for the composition to
be defined. The following properties are easily verified:

    S ◦ (T + U) = S ◦ T + S ◦ U          (S + T ) ◦ U = S ◦ U + T ◦ U
    c(T ◦ U) = (cT ) ◦ U                 c(T ◦ U) = T ◦ (cU)  if T is linear
    T ◦ IdRn = T                         IdRm ◦ T = T
    S ◦ (T ◦ U) = (S ◦ T ) ◦ U

The final property is called associativity; it simply says that

S ◦ (T ◦ U)(x) = S(T ◦ U(x)) = S(T (U(x))) = S ◦ T (U(x)) = (S ◦ T ) ◦ U(x).

In other words, both S ◦ (T ◦ U) and (S ◦ T ) ◦ U are the transformation defined by


first applying U, then T , then S.

Composition of transformations is not commutative in general. That is, in


general, T ◦ U ≠ U ◦ T , even when both compositions are defined.

Example (Functions of one variable). Define f : R → R by f (x) = x 2 and g : R →


R by g(x) = e x . The composition f ◦ g : R → R is the transformation defined by
the rule
        f ◦ g(x) = f (g(x)) = f (e^x ) = (e^x )^2 = e^(2x) .

The composition g ◦ f : R → R is the transformation defined by the rule

        g ◦ f (x) = g( f (x)) = g(x^2 ) = e^(x^2) .

Note that e^(x^2) ≠ e^(2x) in general; for instance, if x = 1 then e^(x^2) = e and e^(2x) = e^2 .
Example (Non-commutative composition of transformations). Define matrix trans-
formations T, U : R2 → R2 by

                1 1                          1 0
    T (x) =          x     and     U(x) =         x.
                0 1                          1 1

Geometrically, T is a shear in the x-direction, and U is a shear in the y-direction.
We evaluate

                  1          1        2
    T ◦ U         0   = T    1   =    1

and
                  1          1        1
    U ◦ T         0   = U    0   =    1   .

Since T ◦ U and U ◦ T have different outputs for the input vector (1, 0), they are
different transformations. (See this example.)

Use this link to view the online demo

Illustration of the composition T ◦ U.

Use this link to view the online demo

Illustration of the composition U ◦ T .

4.4.2 Matrix algebra


In this subsection, we translate the algebra of linear transformations from the pre-
vious subsection into the language of matrices. First we need some terminology.

Notation. Let A be an m × n matrix. We will generally write ai j for the entry in


the ith row and the jth column. It is called the i, j entry of the matrix.

                    a11  · · ·  a1 j  · · ·  a1n
                     ⋮           ⋮            ⋮
    ith row   →     ai1  · · ·  ai j  · · ·  ain
                     ⋮           ⋮            ⋮
                    am1  · · ·  am j  · · ·  amn
                                 ↑
                             jth column

Definition.

• The sum of two m×n matrices is the matrix obtained by summing the entries
of A and B individually:

        a11 a12 a13       b11 b12 b13       a11 + b11   a12 + b12   a13 + b13
                      +                 =
        a21 a22 a23       b21 b22 b23       a21 + b21   a22 + b22   a23 + b23

In other words, the i, j entry of A + B is the sum of the i, j entries of A and


B. Note that addition of matrices is only defined when both matrices have
the same dimensions.

• The scalar product of a scalar c with a matrix A is obtained by scaling all


entries of A by c:

         a11 a12 a13        ca11 ca12 ca13
    c                   =
         a21 a22 a23        ca21 ca22 ca23

In other words, the i, j entry of cA is c times the i, j entry of A.

Fact. Let T, U : Rn → Rm be linear transformations with standard matrices A, B, re-


spectively, and let c be a scalar.

• The standard matrix for T + U is A + B.

• The standard matrix for cT is cA.

In view of the above fact, the following properties are consequences of the
corresponding properties of transformations. They are easily verified directly from
the definitions as well.

Properties of addition and scalar multiplication. Let A, B, C be m × n matrices


and let c, d be scalars. Then:

    A + B = B + A                     C + (A + B) = (C + A) + B
    c(A + B) = cA + cB                (c + d)A = cA + dA
    c(dA) = (cd)A                     A + 0 = A

In one of the above properties, we used 0 to denote the m × n matrix whose


entries are all zero. This is the standard matrix of the zero transformation, and is
called the zero matrix.

Definition (Matrix multiplication). Let A be an m × n matrix and let B be an n × p


matrix. Denote the columns of B by v1 , v2 , . . . , vp :
 
| | |
B =  v1 v2 · · · vp  .
| | |

The product AB is the m × p matrix with columns Av1 , Av2 , . . . , Avp :


 
| | |
AB =  Av1 Av2 · · · Avp  .
| | |

In other words, matrix multiplication is defined column-by-column, or “dis-


tributes over the columns of B.”

Example.

    1 1 0     1 0         1 1 0    1       1 1 0    0         1   1        1 1
              0 1    =                                    =            =
    0 1 1     1 0         0 1 1    0       0 1 1    1         1   1        1 1
                                   1                0
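The column-by-column definition translates directly into code. The following
Python/NumPy sketch (assumed tooling, for illustration only) builds AB one column at a
time and compares it with the built-in product.

    import numpy as np

    # The definition: the j-th column of AB is A times the j-th column of B.
    A = np.array([[1, 1, 0],
                  [0, 1, 1]])
    B = np.array([[1, 0],
                  [0, 1],
                  [1, 0]])
    AB = np.column_stack([A @ B[:, j] for j in range(B.shape[1])])
    print(AB)                          # [[1 1]
                                       #  [1 1]]
    print(np.array_equal(AB, A @ B))   # True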

In order for the vectors Av1 , Av2 , . . . , Avp to be defined, the number of rows of
B has to equal the number of columns of A.

Dimensions of the matrix product.

• In order for AB to be defined, the number of rows of B has to equal the


number of columns of A.

• The product of an m × n matrix and an n × p matrix is an m × p matrix.

If B has only one column, then AB also has one column. A matrix with one
column is the same as a vector, so the definition of the matrix product generalizes
the definition of the matrix-vector product.
If A is a square matrix, then we can multiply it by itself; we define its powers
to be

A2 = AA A3 = AAA etc.

The row-column rule for matrix multiplication. Recall from this definition in
Section 3.3 that the product of a row vector and a column vector is the scalar

                                 x1
                                 x2
    ( a1  a2  · · ·  an )        ⋮     =  a1 x 1 + a2 x 2 + · · · + an x n .
                                 xn

The following procedure for finding the matrix product is much better adapted
to computations by hand; the previous definition is more suitable for proving the

theorem below.
Recipe: The row-column rule for matrix multiplication. Let A be an m × n
matrix, let B be an n × p matrix, and let C = AB. Then the i j entry of C is the
ith row of A times the jth column of B:

ci j = ai1 b1 j + ai2 b2 j + · · · + ain bn j .

Here is a diagram:

                 a11 · · · a1k · · · a1n      b11 · · · b1 j · · · b1p       c11 · · · c1 j · · · c1p
                  ⋮         ⋮         ⋮        ⋮         ⋮         ⋮          ⋮         ⋮         ⋮
  ith row   →    ai1 · · · aik · · · ain      bk1 · · · bk j · · · bkp   =   ci1 · · · ci j · · · cip
                  ⋮         ⋮         ⋮        ⋮         ⋮         ⋮          ⋮         ⋮         ⋮
                 am1 · · · amk · · · amn      bn1 · · · bn j · · · bnp       cm1 · · · cm j · · · cmp
                                                        ↑                              ↑
                                                   jth column                       i j entry

Proof. The row-column rule for matrix-vector multiplication in Section 3.3 says
that if A has rows r1 , r2 , . . . , rm and x is a vector, then

           — r1 —              r1 x
           — r2 —              r2 x
    Ax =      ⋮       x   =     ⋮    .
           — rm —              rm x
The definition of matrix multiplication is
   
| | | | | |
A  c1 c2 · · · c p  =  Ac1 Ac2 · · · Ac p  .
| | | | | |
It follows that

      — r1 —        |    |         |          r1 c1   r1 c2   · · ·  r1 c p
      — r2 —        c1   c2  · · · c p        r2 c1   r2 c2   · · ·  r2 c p
        ⋮           |    |         |     =      ⋮       ⋮              ⋮     .
      — rm —                                  rm c1   rm c2   · · ·  rm c p
Example. The row-column rule allows us to compute the product matrix one entry
at a time:

    1 2 3     1 −3        1·1 + 2·2 + 3·3                 14
              2 −2    =                            =
    4 5 6     3 −1

    1 2 3     1 −3
              2 −2    =                            =
    4 5 6     3 −1        4·1 + 5·2 + 6·3                 32
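The row-column rule also translates directly into code: each entry is the product of a
row and a column. Here is an illustrative Python/NumPy sketch (our assumption; not part
of the text) that fills in every entry of the product above.

    import numpy as np

    # Entry (i, j) of C = AB is (row i of A) times (column j of B).
    A = np.array([[1, 2, 3],
                  [4, 5, 6]])
    B = np.array([[1, -3],
                  [2, -2],
                  [3, -1]])
    C = np.zeros((A.shape[0], B.shape[1]), dtype=int)
    for i in range(A.shape[0]):
        for j in range(B.shape[1]):
            C[i, j] = A[i, :] @ B[:, j]
    print(C)                           # [[ 14 -10]
                                       #  [ 32 -28]]
    print(np.array_equal(C, A @ B))    # True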

4.4.3 Composition and Matrix Multiplication


The point of this subsection is to show that matrix multiplication corresponds to
composition of transformations.

Theorem. Let T : Rn → Rm and U : R p → Rn be linear transformations, and let A


and B be their standard matrices, respectively, so A is an m × n matrix and B is an
n × p matrix. Then T ◦ U : R p → Rm is a linear transformation, and its standard
matrix is the product AB.

Proof. First we verify that T ◦ U is linear. Let u, v be vectors in R p . Then

T ◦ U(u + v) = T (U(u + v)) = T (U(u) + U(v))


= T (U(u)) + T (U(v)) = T ◦ U(u) + T ◦ U(v).

If c is a scalar, then

T ◦ U(cv) = T (U(cv)) = T (cU(v)) = cT (U(v)) = cT ◦ U(v).

Since T ◦ U satisfies the two defining properties in Section 4.3, it is a linear trans-
formation.
Now that we know that T ◦ U is linear, it makes sense to compute its standard
matrix. Let C be the standard matrix of T ◦ U, so T (x) = Ax, U(x) = B x, and
T ◦ U(x) = C x. By this theorem in Section 4.3, the first column of C is C e1 , and
the first column of B is Be1 . We have

T ◦ U(e1 ) = T (U(e1 )) = T (Be1 ) = A(Be1 ).

By definition, the first column of the product AB is the product of A with the first
column of B, which is Be1 , so

C e1 = T ◦ U(e1 ) = A(Be1 ) = (AB)e1 .

It follows that C has the same first column as AB. The same argument as applied
to the ith standard coordinate vector ei shows that C and AB have the same ith
column; since they have the same columns, they are the same matrix.

The theorem justifies our choice of definition of the matrix product. This is the
one and only reason that matrix products are defined in this way. To rephrase:

Products and compositions. The matrix of the composition of two linear


transformations is the product of the matrices of the transformations.

Example (Composition of rotations). In this example in Section 4.3, we showed


that the standard matrix for the counterclockwise rotation of the plane by an angle
of θ is
cos(θ ) − sin(θ )
 ‹
A= .
sin(θ ) cos(θ )

Let T : R2 → R2 be counterclockwise rotation by 45◦ , and let U : R2 → R2 be coun-


terclockwise rotation by 90◦ . The matrices A and B for T and U are, respectively,

          cos(45◦ )  − sin(45◦ )                  1  −1
    A =                            =   (1/√2)
          sin(45◦ )    cos(45◦ )                  1   1

          cos(90◦ )  − sin(90◦ )        0  −1
    B =                            =           .
          sin(90◦ )    cos(90◦ )        1   0

Here we used the trigonometric identities

    cos(45◦ ) = 1/√2          sin(45◦ ) = 1/√2
    cos(90◦ ) = 0             sin(90◦ ) = 1.

The standard matrix of the composition T ◦ U is

                     1  −1     0  −1                 −1  −1
    AB =   (1/√2)                        =  (1/√2)            .
                     1   1     1   0                  1  −1

This is consistent with the fact that T ◦ U is counterclockwise rotation by 90◦ + 45◦ =
135◦ : we have

        cos(135◦ )  − sin(135◦ )                 −1  −1
                                    =  (1/√2)
        sin(135◦ )    cos(135◦ )                  1  −1

because cos(135◦ ) = −1/√2 and sin(135◦ ) = 1/√2.
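One can confirm this computation numerically. The sketch below, in Python with NumPy
(illustrative assumptions as before), checks that the product of the 45◦ and 90◦
rotation matrices is the 135◦ rotation matrix.

    import numpy as np

    # The matrix of a composition is the product of the matrices: rotating by 90
    # degrees and then by 45 degrees is rotating by 135 degrees.
    def rotation_matrix(theta):
        return np.array([[np.cos(theta), -np.sin(theta)],
                         [np.sin(theta),  np.cos(theta)]])

    A = rotation_matrix(np.radians(45))
    B = rotation_matrix(np.radians(90))
    print(np.allclose(A @ B, rotation_matrix(np.radians(135))))   # True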

Challenge. Derive the trigonometric identities

sin(α ± β) = sin(α) cos(β) ± cos(α) sin(β)

and
cos(α ± β) = cos(α) cos(β) ∓ sin(α) sin(β)
using the theorem as applied to rotation transformations, as in the previous exam-
ple.

Interactive: A composition of matrix transformations. Define T : R3 → R2 and


U : R2 → R3 by

                1 1 0                            1 0
    T (x) =             x     and     U(x) =     0 1    x.
                0 1 1                            1 0

Their composition is a linear transformation T ◦ U : R2 → R2 . By the theorem, its


standard matrix is  
 ‹ 1 0  ‹
1 1 0  1 1
0 1 = ,
0 1 1 1 1
1 0

as we computed in the above example.

Use this link to view the online demo

The matrix of the composition T ◦ U is the product of the matrices for T and U.

Interactive: A transformation defined in steps. Let S : R3 → R3 be the linear


transformation that first reflects over the x y-plane and then projects onto the yz-
plane, as in this example in Section 4.3. The transformation S is the composition
T ◦ U, where U : R3 → R3 is the transformation that reflects over the x y-plane,
and T : R3 → R3 is the transformation that projects onto the yz-plane.
Let us compute the matrix B for U.

[Figure: e1 is not moved by the reflection over the x y-plane, so U(e1 ) = e1 .]

Since e1 lies on the x y-plane, reflecting it over the x y-plane does not move it:
 
1
U(e1 ) = 0 .

0

[Figure: e2 is not moved by the reflection over the x y-plane, so U(e2 ) = e2 .]

Since e2 lies on the x y-plane, reflecting over the x y-plane does not move it
either:  
0
U(e2 ) = e2 = 1 .

0

[Figure: e3 is sent to its negative by the reflection over the x y-plane, so U(e3 ) = −e3 .]

Since e3 is perpendicular to the x y-plane, reflecting over the x y-plane takes


e3 to its negative:  
0
U(e3 ) = −e3 = 0  .

−1
We have computed all of the columns of B:
   
| | | 1 0 0
B =  U(e1 ) U(e2 ) U(e3 )  =  0 1 0  .
| | | 0 0 −1

By a similar method, we find


 
0 0 0
A = 0 1 0.
0 0 1

It follows that the matrix for S = T ◦ U is

           0 0 0     1 0  0
    AB =   0 1 0     0 1  0
           0 0 1     0 0 −1

           0 0 0   1        0 0 0   0        0 0 0    0            0 0  0
       =   0 1 0   0        0 1 0   1        0 1 0    0        =   0 1  0   ,
           0 0 1   0        0 0 1   0        0 0 1   −1            0 0 −1

as we computed in this example in Section 4.3.

Use this link to view the online demo



Example (Non-commutative multiplication of matrices). Define matrix transfor-


mations T, U : R2 → R2 by

                1 1                          1 0
    T (x) =          x     and     U(x) =         x,
                0 1                          1 1

as in this example. The matrix for T ◦ U is

    1 1     1 0       2 1
                  =        ,
    0 1     1 1       1 1

whereas the matrix for U ◦ T is

    1 0     1 1       1 1
                  =        .
    1 1     0 1       1 2

In particular, we have

    1 1     1 0        1 0     1 1
                  ≠                 ,
    0 1     1 1        1 1     0 1

as must be the case, since T ◦ U ≠ U ◦ T .
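A quick computation, for readers following along in Python with NumPy (an illustrative
aside), confirms that the two products are different matrices.

    import numpy as np

    # For the two shears above, AB and BA are different matrices.
    A = np.array([[1, 1],
                  [0, 1]])   # matrix of T (shear in the x-direction)
    B = np.array([[1, 0],
                  [1, 1]])   # matrix of U (shear in the y-direction)
    print(A @ B)                          # [[2 1]
                                          #  [1 1]]
    print(B @ A)                          # [[1 1]
                                          #  [1 2]]
    print(np.array_equal(A @ B, B @ A))   # False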

Recall from this definition in Section 4.3 that the identity matrix is the n × n
matrix I n whose columns are the standard coordinate vectors in Rn .
The identity matrix is the standard matrix of the identity transformation: that is,
x = IdRn (x) = I n x for all vectors x in Rn .
In view of the above theorem, the following properties are consequences of the
corresponding properties of transformations.

Properties of matrix multiplication. Let A, B, C be matrices and let c be a scalar.


Suppose that A has dimensions m × n, and that in each of the following identities,
the dimensions of B and C are compatible when necessary for the product to be
defined. Then:

    C(A + B) = CA + C B               (A + B)C = AC + BC
    c(AB) = (cA)B                     c(AB) = A(cB)
    A I n = A                         I m A = A
    (AB)C = A(BC)

Most of the above properties are easily verified directly from the definitions.
The associativity property (AB)C = A(BC), however, is not (try it!). It is much
easier to prove by relating matrix multiplication to composition of transformations,
and using the obvious fact that composition of transformations is associative.

Although matrix multiplication satisfies many of the properties one would ex-
pect, one must be careful when doing matrix arithmetic, as there are several prop-
erties that are not satisfied in general.

Matrix multiplication caveats.

• Matrix multiplication is not commutative: AB is not usually equal to BA,


even when both products are defined and have the same size. See this
example.

• Matrix multiplication does not satisfy the cancellation law: AB = AC


does not imply B = C, even when A ≠ 0. For example,

      1 0     1 2       1 2       1 0     1 2
                    =         =                 .
      0 0     3 4       0 0       0 0     5 6

• It is possible for AB = 0, even when A ≠ 0 and B ≠ 0. For example,

      1 0     0 0       0 0
                    =        .
      1 0     1 1       0 0

Order of Operations. Let T : Rn → Rm and U : R p → Rn be linear transformations,


and let A and B be their standard matrices, respectively. Recall that T ◦ U(x) is the
vector obtained by first applying U to x, and then T .
On the matrix side, the standard matrix of T ◦U is the product AB, so T ◦U(x) =
(AB)x. By associativity of matrix multiplication, we have (AB)x = A(B x), so the
product (AB)x can be computed by first multiplying x by B, then multiplying the
product by A.
Therefore, matrix multiplication happens in the same order as composition of
transformations. In other words, both matrices and transformations are written in
the order opposite from the order in which they act. But matrix multiplication and
composition of transformations are written in the same order as each other: the
matrix for T ◦ U is AB.

4.5 Matrix Inverses

Objectives

1. Understand what it means for a square matrix to be invertible.

2. Learn about invertible transformations, and understand the relationship be-


tween invertible matrices and invertible transformations.

3. Recipes: compute the inverse matrix, solve a linear system by taking inverses.

4. Picture: the inverse of a transformation.

5. Theorem: the invertible matrix theorem.

6. Vocabulary words: inverse matrix, inverse transformation.

In Section 4.4 we learned to multiply matrices together. In this section, we


learn to “divide” by a matrix. This allows us to solve the matrix equation Ax = b
in an elegant way:
Ax = b ⇐⇒ x = A−1 b.
One has to take care when “dividing by matrices”, however, because not every
matrix has an inverse, and the order of matrix multiplication is important.

4.5.1 Invertible Matrices


The reciprocal or inverse of a nonzero number a is the number b which is charac-
terized by the property that ab = 1. For instance, the inverse of 7 is 1/7. We use
this formulation to define the inverse of a matrix.

Definition. Let A be an n × n (square) matrix. We say that A is invertible if there


is an n × n matrix B such that

AB = I n and BA = I n .

In this case, the matrix B is called the inverse of A, and we write B = A−1 .

We have to require AB = I n and BA = I n because in general, matrix multiplica-


tion is not commutative. However, we will show in the invertible matrix theorem
that if A and B are n × n matrices such that AB = I n , then automatically BA = I n .

Example. Verify that the matrices

         2 1                        1 −1
    A =          and        B =
         1 1                       −1  2

are inverses.
Solution. We will check that AB = I2 and that BA = I2 .
 ‹ ‹  ‹
2 1 1 −1 1 0
AB = =
1 1 −1 2 0 1
 ‹ ‹  ‹
1 −1 2 1 1 0
BA = =
−1 2 1 1 0 1

Therefore, A is invertible, with inverse B.



Remark. There exist non-square matrices whose product is the identity. Indeed, if

\[
A = \begin{pmatrix}1&0&0\\0&1&0\end{pmatrix} \quad\text{and}\quad B = \begin{pmatrix}1&0\\0&1\\0&0\end{pmatrix}
\]

then AB = I_2. However, BA ≠ I_3, so B does not deserve to be called the inverse of A.
One can show using the ideas in this subsection that if A is an n × m matrix
for n ≠ m, then there is no m × n matrix B such that AB = I_n and BA = I_m.
For this reason, we restrict ourselves to square matrices when we discuss matrix
invertibility.

Facts about invertible matrices. Let A and B be invertible n × n matrices.

1. A−1 is invertible, and its inverse is (A−1 )−1 = A.

2. AB is invertible, and its inverse is (AB)−1 = B −1 A−1 (note the order).

Proof.

1. The equations AA−1 = I n and A−1 A = I n at the same time exhibit A−1 as the
inverse of A and A as the inverse of A−1 .

2. We compute

(B −1 A−1 )AB = B −1 (A−1 A)B = B −1 I n B = B −1 B = I n .

Here we used the associativity of matrix multiplication and the fact that
I n B = B. This shows that B −1 A−1 is the inverse of AB.

Why is the inverse of AB not equal to A−1 B −1 ? If it were, then we would have

I n = (AB)(A−1 B −1 ) = ABA−1 B −1 .

But there is no reason for ABA−1 B −1 to equal the identity matrix: one cannot switch
the order of A−1 and B, so there is nothing to cancel in this expression.
More generally, the inverse of a product of several invertible matrices is the
product of the inverses, in the opposite order; the proof is the same. For instance,

(ABC)−1 = C −1 B −1 A−1 .

4.5.2 Computing the Inverse Matrix


So far we have defined the inverse matrix without giving any strategy for comput-
ing it. We do so now, beginning with the special case of 2 × 2 matrices.
Definition. The determinant of a 2 × 2 matrix is the number

\[
\det\begin{pmatrix}a&b\\c&d\end{pmatrix} = ad - bc.
\]

Proposition. Let \(A = \begin{pmatrix}a&b\\c&d\end{pmatrix}\).

1. If det(A) ≠ 0, then A is invertible, and

\[
A^{-1} = \frac{1}{\det(A)}\begin{pmatrix}d&-b\\-c&a\end{pmatrix}.
\]

2. If det(A) = 0, then A is not invertible.


Proof.

1. Suppose that det(A) ≠ 0. Define \(B = \frac{1}{\det(A)}\begin{pmatrix}d&-b\\-c&a\end{pmatrix}\). Then

\[
AB = \begin{pmatrix}a&b\\c&d\end{pmatrix}\cdot\frac{1}{\det(A)}\begin{pmatrix}d&-b\\-c&a\end{pmatrix}
= \frac{1}{\det(A)}\begin{pmatrix}ad-bc&0\\0&ad-bc\end{pmatrix} = I_2.
\]

The reader can check that BA = I_2, so A is invertible and B = A^{-1}.

2. Suppose that det(A) = ad − bc = 0. Let T : R^2 → R^2 be the matrix transformation T(x) = Ax. Then

\[
T\begin{pmatrix}-b\\a\end{pmatrix} = \begin{pmatrix}a&b\\c&d\end{pmatrix}\begin{pmatrix}-b\\a\end{pmatrix}
= \begin{pmatrix}-ab+ab\\-bc+ad\end{pmatrix} = \begin{pmatrix}0\\\det(A)\end{pmatrix} = 0
\]
\[
T\begin{pmatrix}d\\-c\end{pmatrix} = \begin{pmatrix}a&b\\c&d\end{pmatrix}\begin{pmatrix}d\\-c\end{pmatrix}
= \begin{pmatrix}ad-bc\\cd-cd\end{pmatrix} = \begin{pmatrix}\det(A)\\0\end{pmatrix} = 0.
\]

If A is the zero matrix, then it is obviously not invertible. Otherwise, one of
the vectors v = (−b, a) and v = (d, −c) will be a nonzero vector in the null space of A. Suppose
that there were a matrix B such that BA = I_2. Then

\[
v = I_2 v = BAv = B0 = 0,
\]

which is impossible as v ≠ 0. Therefore, A is not invertible.

There is an analogous formula for the inverse of an n × n matrix, but it is not


as simple, and it is computationally intensive. The interested reader can find it in
the last subsection of Section 5.2.

Example. Let

\[
A = \begin{pmatrix}1&2\\3&4\end{pmatrix}.
\]

Then det(A) = 1 · 4 − 2 · 3 = −2. By the proposition, the matrix A is invertible with inverse

\[
\begin{pmatrix}1&2\\3&4\end{pmatrix}^{-1}
= \frac{1}{\det(A)}\begin{pmatrix}4&-2\\-3&1\end{pmatrix}
= -\frac12\begin{pmatrix}4&-2\\-3&1\end{pmatrix}.
\]

We check:

\[
\begin{pmatrix}1&2\\3&4\end{pmatrix}\cdot\left(-\frac12\begin{pmatrix}4&-2\\-3&1\end{pmatrix}\right)
= -\frac12\begin{pmatrix}-2&0\\0&-2\end{pmatrix} = I_2.
\]
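The 2 × 2 formula above is easy to put into code. Here is a small sketch (not from the text) that implements it and compares the result with NumPy's general-purpose inverse; the matrix is the one from the example.

```python
import numpy as np

def inverse_2x2(A):
    """Inverse of a 2x2 matrix via the formula (1/det) * [[d, -b], [-c, a]]."""
    (a, b), (c, d) = A
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is not invertible (determinant is zero)")
    return (1 / det) * np.array([[d, -b], [-c, a]])

A = np.array([[1., 2.], [3., 4.]])
print(inverse_2x2(A))                                  # [[-2.   1. ] [ 1.5 -0.5]]
print(np.allclose(inverse_2x2(A), np.linalg.inv(A)))   # True
```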

The following theorem gives a procedure for computing A−1 in general.

Theorem. Let A be an n × n matrix, and let ( A | I n ) be the matrix obtained by


augmenting A by the identity matrix. If the reduced row echelon form of ( A | I n ) has
the form ( I n | B ), then A is invertible and B = A−1 . Otherwise, A is not invertible.

Proof. First suppose that the reduced row echelon form of ( A | I n ) does not have
the form ( I n | B ). This means that fewer than n pivots are contained in the first n
columns (the non-augmented part), so A has fewer than n pivots. It follows that
Nul(A) 6= {0} (the equation Ax = 0 has a free variable), so there exists a nonzero
vector v in Nul(A). Suppose that there were a matrix B such that BA = I n . Then

v = I n v = BAv = B0 = 0,

which is impossible as v 6= 0. Therefore, A is not invertible.


Now suppose that the reduced row echelon form of ( A | I n ) has the form
( I n | B ). In this case, all pivots are contained in the non-augmented part of the
matrix, so the augmented part plays no role in the row reduction: the entries of
the augmented part do not influence the choice of row operations used. Hence,
row reducing ( A | I_n ) is equivalent to solving the n systems of linear equations
Ax_1 = e_1, Ax_2 = e_2, . . . , Ax_n = e_n, where e_1, e_2, . . . , e_n are the standard coordinate
vectors. For instance, for the 3 × 3 matrix of the next example, the three systems share the
same coefficient matrix and differ only in which augmented column is used:

\[
Ax_1 = e_1,\quad Ax_2 = e_2,\quad Ax_3 = e_3:\qquad
\left(\begin{array}{ccc|ccc}1&0&4&1&0&0\\0&1&2&0&1&0\\0&-3&-4&0&0&1\end{array}\right).
\]

The columns x_1, x_2, . . . , x_n of the matrix B in the row reduced form are the solutions
to these equations. In the example, the reduced row echelon form is

\[
\left(\begin{array}{ccc|ccc}1&0&0&1&-6&-2\\0&1&0&0&-2&-1\\0&0&1&0&3/2&1/2\end{array}\right),
\]

so

\[
A\begin{pmatrix}1\\0\\0\end{pmatrix} = e_1,\qquad
A\begin{pmatrix}-6\\-2\\3/2\end{pmatrix} = e_2,\qquad
A\begin{pmatrix}-2\\-1\\1/2\end{pmatrix} = e_3.
\]

By this fact in Section 4.3, the product Bei is just the ith column x i of B, so

ei = Ax i = ABei

for all i. By the same fact, the ith column of AB is ei , which means that AB is the
identity matrix. Thus B is the inverse of A.
Example (An invertible matrix). Find the inverse of the matrix
 
1 0 4
A = 0 1 2.
0 −3 −4

Solution. We augment by the identity and row reduce:

   
\[
\left(\begin{array}{ccc|ccc}1&0&4&1&0&0\\0&1&2&0&1&0\\0&-3&-4&0&0&1\end{array}\right)
\xrightarrow{R_3 = R_3 + 3R_2}
\left(\begin{array}{ccc|ccc}1&0&4&1&0&0\\0&1&2&0&1&0\\0&0&2&0&3&1\end{array}\right)
\]
\[
\xrightarrow{\substack{R_1 = R_1 - 2R_3\\ R_2 = R_2 - R_3}}
\left(\begin{array}{ccc|ccc}1&0&0&1&-6&-2\\0&1&0&0&-2&-1\\0&0&2&0&3&1\end{array}\right)
\xrightarrow{R_3 = R_3 \div 2}
\left(\begin{array}{ccc|ccc}1&0&0&1&-6&-2\\0&1&0&0&-2&-1\\0&0&1&0&3/2&1/2\end{array}\right).
\]

By the theorem, the inverse matrix is

\[
\begin{pmatrix}1&0&4\\0&1&2\\0&-3&-4\end{pmatrix}^{-1}
= \begin{pmatrix}1&-6&-2\\0&-2&-1\\0&3/2&1/2\end{pmatrix}.
\]

We check:

\[
\begin{pmatrix}1&0&4\\0&1&2\\0&-3&-4\end{pmatrix}
\begin{pmatrix}1&-6&-2\\0&-2&-1\\0&3/2&1/2\end{pmatrix}
= \begin{pmatrix}1&0&0\\0&1&0\\0&0&1\end{pmatrix}.
\]
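For readers who like to see the recipe in code: the sketch below (not part of the text) row reduces the augmented matrix ( A | I_n ) with partial pivoting and reads off the inverse, mirroring the theorem above. It assumes NumPy and raises an error when A is not invertible.

```python
import numpy as np

def inverse_by_row_reduction(A):
    """Compute A^{-1} by row reducing the augmented matrix (A | I)."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    M = np.hstack([A, np.eye(n)])            # the augmented matrix ( A | I_n )
    for i in range(n):
        p = i + np.argmax(np.abs(M[i:, i]))  # choose a nonzero pivot (partial pivoting)
        if np.isclose(M[p, i], 0):
            raise ValueError("matrix is not invertible")
        M[[i, p]] = M[[p, i]]                # row swap
        M[i] /= M[i, i]                      # scale the pivot row so the pivot is 1
        for j in range(n):                   # clear the rest of the pivot column
            if j != i:
                M[j] -= M[j, i] * M[i]
    return M[:, n:]                          # the augmented half is now A^{-1}

A = [[1, 0, 4], [0, 1, 2], [0, -3, -4]]
print(inverse_by_row_reduction(A))
# [[ 1.  -6.  -2. ]
#  [ 0.  -2.  -1. ]
#  [ 0.   1.5  0.5]]
```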

Example (A non-invertible matrix). Is the following matrix invertible?


 
\[
A = \begin{pmatrix}1&0&4\\0&1&2\\0&-3&-6\end{pmatrix}.
\]

Solution. We augment by the identity and row reduce:

\[
\left(\begin{array}{ccc|ccc}1&0&4&1&0&0\\0&1&2&0&1&0\\0&-3&-6&0&0&1\end{array}\right)
\xrightarrow{R_3 = R_3 + 3R_2}
\left(\begin{array}{ccc|ccc}1&0&4&1&0&0\\0&1&2&0&1&0\\0&0&0&0&3&1\end{array}\right).
\]

At this point we can stop, because it is clear that the reduced row echelon form
will not have I3 in the non-augmented part: it will have a row of zeros. By the
theorem, the matrix is not invertible.

4.5.3 Solving Linear Systems using Inverses


In this subsection, we learn to solve Ax = b by “dividing by A.”

Theorem. Let A be an invertible n × n matrix, and let b be a vector in Rn . Then the


matrix equation Ax = b has exactly one solution:

x = A−1 b.

Proof. We calculate:

Ax = b =⇒ A−1 (Ax) = A−1 b


=⇒ (A−1 A)x = A−1 b
=⇒ I n x = A−1 b
=⇒ x = A−1 b.

Here we used associativity of matrix multiplication, and the fact that I n x = x for
any vector x.

Example (Solving a 2 × 2 system using inverses). Solve the matrix equation


 ‹  ‹
1 3 1
x= .
−1 2 1
174 CHAPTER 4. LINEAR TRANSFORMATIONS AND MATRIX ALGEBRA

Solution. By the theorem, the only solution of our linear system is


‹−1  ‹
1 2 −3 1 −1
  ‹ ‹  ‹
1 3 1 1
x= = = .
−1 2 1 5 1 1 1 5 2

Here we used  ‹
1 3
det = 1 · 2 − (−1) · 3 = 5.
−1 2

Example (Solving a 3 × 3 system using inverses). Solve the system of equations


\[
\left\{\begin{aligned} 2x_1 + 3x_2 + 2x_3 &= 1\\ x_1 + 3x_3 &= 1\\ 2x_1 + 2x_2 + 3x_3 &= 1. \end{aligned}\right.
\]

Solution. First we write our system as a matrix equation Ax = b, where


   
\[
A = \begin{pmatrix}2&3&2\\1&0&3\\2&2&3\end{pmatrix} \quad\text{and}\quad b = \begin{pmatrix}1\\1\\1\end{pmatrix}.
\]

Next we find the inverse of A by augmenting and row reducing:

   
2 3 2 1 0 0 1 0 3 0 1 0
R1 ←→R2
 1 0 3 0 1 0  −−−−→ 2 3 2 1 0 0
2 2 3 0 0 1 2 2 3 0 0 1
 
R2 = R2 − 2R1
1 0 3 0 1 0
R3 = R3 − 2R1
−−−−−→  0 3 −4 1 −2 0 
0 2 −3 0 −2 1
 
1 0 3 0 1 0
R2 =R2 −R3
−−−−−→ 0  1 −1 1 0 −1 
0 2 −3 0 −2 1
 
1 0 3 0 1 0
R3 =R3 −2R2
−−−−−−→  0 1 −1 1 0 −1 
0 0 −1 −2 −2 3
 
1 0 3 0 1 0
R3 =−R3
−−−−→ 0  1 −1 1 0 −1 
0 0 1 2 2 −3
 
R1 = R1 − 3R3
1 0 0 −6 −5 9
R = R2 + R3
−2−−−− →0 1 0 3 2 −4 .
0 0 1 2 2 −3

By the theorem, the only solution of our linear system is


   −1       
x1 2 3 2 1 −6 −5 9 1 −2
 x 2  =  1 0 3  1 =  3 2 −4  1 = 1 .
 
x3 2 2 3 1 2 2 −3 1 1

The advantage of solving a linear system using inverses is that it becomes much
faster to solve the matrix equation Ax = b for other, or even unknown, values of
b. For instance, in the above example, the solution of the system of equations

\[
\left\{\begin{aligned} 2x_1 + 3x_2 + 2x_3 &= b_1\\ x_1 + 3x_3 &= b_2\\ 2x_1 + 2x_2 + 3x_3 &= b_3, \end{aligned}\right.
\]

where b1 , b2 , b3 are unknowns, is


   −1       
x1 2 3 2 b1 −6 −5 9 b1 −6b1 − 5b2 + 9b3
 x 2  =  1 0 3   b2  =  3 2 −4   b2  =  3b1 + 2b2 − 4b3  .
x3 2 2 3 b3 2 2 −3 b3 2b1 + 2b2 − 3b3
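A short numerical sketch of this idea (not from the text), assuming NumPy: once A^{-1} is known, solving for each new right-hand side is a single matrix-vector product. In floating-point practice one usually prefers numpy.linalg.solve over forming the inverse explicitly, but the algebra is the same.

```python
import numpy as np

A = np.array([[2., 3., 2.],
              [1., 0., 3.],
              [2., 2., 3.]])
A_inv = np.linalg.inv(A)        # computed once

for b in (np.array([1., 1., 1.]), np.array([2., -1., 0.])):
    x = A_inv @ b               # reused for every right-hand side
    print(x, np.allclose(A @ x, b))
# [-2.  1.  1.] True   <- matches the example above for b = (1, 1, 1)
```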

4.5.4 Invertible linear transformations


As with matrix multiplication, it is helpful to understand matrix inversion as an
operation on linear transformations. Recall that the identity transformation on Rn
is denoted IdRn .

Definition. A transformation T : Rn → Rn is invertible if there exists a transfor-


mation U : Rn → Rn such that T ◦ U = IdRn and U ◦ T = IdRn . In this case, the
transformation U is called the inverse of T , and we write U = T −1 .

The inverse U of T “undoes” whatever T did. We have

T ◦ U(x) = x and U ◦ T (x) = x

for all vectors x. This means that if you apply T to x, then you apply U, you get
the vector x back, and likewise in the other order.

Example (Functions of one variable). Define f : R → R by f (x) = 2x. This is an


invertible transformation, with inverse g(x) = x/2. Indeed,
 ‹  ‹
x x
f ◦ g(x) = f (g(x)) = f =2 =x
2 2
and
2x
g ◦ f (x) = g( f (x)) = g(2x) = = x.
2
176 CHAPTER 4. LINEAR TRANSFORMATIONS AND MATRIX ALGEBRA

In other words, dividing by 2 undoes the transformation that multiplies by 2.


Define f : R → R by f (x) = x 3 . This is an invertible transformation, with
p
inverse g(x) = 3 x. Indeed,
p p 3
f ◦ g(x) = f (g(x)) = f ( x) = x =x
3 3

and p
3
g ◦ f (x) = g( f (x)) = g(x 3 ) = x 3 = x.
In other words, taking the cube root undoes the transformation that takes a num-
ber to its cube.
Define f : R → R by f (x) = x 2 . This is not an invertible function. Indeed, we
have f (2) = 2 = f (−2), so there is no way to undo f : the inverse transformation
would not know if it should send 2 to 2 or −2. More formally, if g : R → R satisfies
g( f (x)) = x, then

2 = g( f (2)) = g(2) and − 2 = g( f (−2)) = g(2),

which is impossible: g(2) is a number, so it cannot be equal to 2 and −2 at the


same time.
Define f : R → R by f (x) = e x . This is not an invertible function. Indeed, if
there were a function g : R → R such that f ◦ g = IdR , then we would have

−1 = f ◦ g(−1) = f(g(−1)) = e^{g(−1)}.

But e x is a positive number for every x, so this is impossible.

Example (Dilation). Let T : R2 → R2 be dilation by a factor of 3/2: that is, T (x) =


3/2x. Is T invertible? If so, what is T −1 ?
Solution. Let U : R2 → R2 be dilation by a factor of 2/3: that is, U(x) = 2/3x.
Then
\[
T \circ U(x) = T\!\left(\tfrac23 x\right) = \tfrac32 \cdot \tfrac23\, x = x
\]
and
\[
U \circ T(x) = U\!\left(\tfrac32 x\right) = \tfrac23 \cdot \tfrac32\, x = x.
\]
Hence T ◦ U = IdR2 and U ◦ T = IdR2 , so T is invertible, with inverse U. In other
words, shrinking by a factor of 2/3 undoes stretching by a factor of 3/2.

U T

Use this link to view the online demo

Shrinking by a factor of 2/3 followed by scaling by a factor of 3/2 is the identity


transformation.

Use this link to view the online demo

Scaling by a factor of 3/2 followed by shrinking by a factor of 2/3 is the identity


transformation.

Example (Rotation). Let T : R2 → R2 be counterclockwise rotation by 45◦ . Is T


invertible? If so, what is T −1 ?
Solution. Let U : R2 → R2 be clockwise rotation by 45◦ . Then T ◦ U first rotates
clockwise by 45◦ , then counterclockwise by 45◦ , so the composition rotates by zero
degrees: it is the identity transformation. Likewise, U ◦T first rotates counterclock-
wise, then clockwise by the same amount, so it is the identity transformation. In
other words, clockwise rotation by 45◦ undoes counterclockwise rotation by 45◦ .

T U

Use this link to view the online demo

Counterclockwise rotation by 45◦ followed by clockwise rotation by 45◦ is the identity


transformation.

Use this link to view the online demo

Clockwise rotation by 45◦ followed by counterclockwise rotation by 45◦ is the identity


transformation.

Example (Reflection). Let T : R2 → R2 be the reflection over the y-axis. Is T


invertible? If so, what is T −1 ?
Solution. The transformation T is invertible; in fact, it is equal to its own in-
verse. Reflecting a vector x over the y-axis twice brings the vector back to where
it started, so T ◦ T = IdR2 .

T T

Use this link to view the online demo

The transformation T is equal to its own inverse: applying T twice takes a vector
back to where it started.

Non-Example (Projection). Let T : R3 → R3 be the projection onto the x y-plane,


introduced in this example in Section 4.1. Is T invertible?
Solution. The transformation T is not invertible. Every vector on the z-axis
projects onto the zero vector, so there is no way to undo what T did: the inverse
transformation would not know which vector on the z-axis it should send the zero
vector to. More formally, suppose there were a transformation U : R3 → R3 such
that U ◦ T = IdR3 . Then

0 = U ◦ T (0) = U(T (0)) = U(0)

and

\[
0 = U \circ T\begin{pmatrix}0\\0\\1\end{pmatrix}
= U\!\left(T\begin{pmatrix}0\\0\\1\end{pmatrix}\right) = U(0).
\]

But U(0) is a single vector in R3 , so it cannot be equal to 0 and to (0, 0, 1) at the


same time.

Use this link to view the online demo

Projection onto the x y-plane is not an invertible transformation: all points on each
vertical line are sent to the same point by T , so there is no way to undo T .

Proposition.

1. A transformation T : Rn → Rn is invertible if and only if it is both one-to-one


and onto.

2. If T is already known to be invertible, then U : Rn → Rn is the inverse of T


provided that either T ◦ U = IdRn or U ◦ T = IdRn : it is only necessary to verify
one.

Proof. To say that T is one-to-one and onto means that T (x) = b has exactly one
solution for every b in Rn .
Suppose that T is invertible. Then T (x) = b always has the unique solution
x = T −1 (b): indeed, applying T −1 to both sides of T (x) = b gives

x = T −1 (T (x)) = T −1 (b),

and applying T to both sides of x = T −1 (b) gives

T (x) = T (T −1 (b)) = b.

Conversely, suppose that T is one-to-one and onto. Let b be a vector in Rn ,


and let x = U(b) be the unique solution of T (x) = b. Then U defines a transfor-
mation from Rn to Rn . For any x in Rn , we have U(T (x)) = x, because x is the
unique solution of the equation T (x) = b for b = T (x). For any b in Rn , we have
T (U(b)) = b, because x = U(b) is the unique solution of T (x) = b. Therefore, U
is the inverse of T , and T is invertible.
Suppose now that T is an invertible transformation, and that U is another
transformation such that T ◦ U = IdRn . We must show that U = T −1 , i.e., that
U ◦ T = IdRn . We compose both sides of the equality T ◦ U = IdRn on the left by
T −1 and on the right by T to obtain

T −1 ◦ T ◦ U ◦ T = T −1 ◦ IdRn ◦T.

We have T −1 ◦ T = IdRn and IdRn ◦U = U, so the left side of the above equation
is U ◦ T . Likewise, IdRn ◦T = T and T −1 ◦ T = IdRn , so our equality simplifies to
U ◦ T = IdRn , as desired.
If instead we had assumed only that U ◦ T = IdRn , then the proof that T ◦ U =
IdRn proceeds similarly.

Remark. It makes sense in the above definition to define the inverse of a trans-
formation T : Rn → Rm , for m 6= n, to be a transformation U : Rm → Rn such that
T ◦ U = IdRm and U ◦ T = IdRn . In fact, there exist invertible transformations
T : Rn → Rm for any m and n, but they are not linear, or even continuous.
If T is a linear transformation, then it can only be invertible when m = n, i.e.,
when its domain is equal to its codomain. Indeed, if T : Rn → Rm is one-to-one,
then n ≤ m by this note in Section 4.2, and if T is onto, then m ≤ n by this note in
Section 4.2. Therefore, when discussing invertibility we restrict ourselves to the
case m = n.

Challenge. Find an invertible (non-linear) transformation T : R2 → R.

Theorem. Let T : Rn → Rn be a linear transformation with standard matrix A. Then


T is invertible if and only if A is invertible, in which case T −1 is linear with standard
matrix A−1 .

Proof. Suppose that T is invertible. Let U : Rn → Rn be the inverse of T . We claim


that U is linear. We need to check the defining properties in Section 4.3. Let u, v
be vectors in Rn . Then

u + v = T (U(u)) + T (U(v)) = T (U(u) + U(v))

by linearity of T . Applying U to both sides gives



U(u + v) = U T (U(u) + U(v)) = U(u) + U(v).

Let c be a scalar. Then


cu = cT (U(u)) = T (cU(u))
by linearity of T . Applying U to both sides gives

U(cu) = U T (cU(u)) = cU(u).

Since U satisfies the defining properties in Section 4.3, it is a linear transformation.


Now that we know that U is linear, we know that it has a standard matrix
B. By the compatibility of matrix multiplication and composition in Section 4.4,
the matrix for T ◦ U is AB. But T ◦ U is the identity transformation IdRn , and the
standard matrix for IdRn is I n , so AB = I n . One shows similarly that BA = I n . Hence
A is invertible and B = A−1 .
Conversely, suppose that A is invertible. Let B = A−1 , and define U : Rn → Rn
by U(x) = B x. By the compatibility of matrix multiplication and composition in
Section 4.4, the matrix for T ◦ U is AB = I n , and the matrix for U ◦ T is BA = I n .
Therefore,

T ◦ U(x) = AB x = I n x = x and U ◦ T (x) = BAx = I n x = x,

which shows that T is invertible with inverse transformation U.

Example (Dilation). Let T : R2 → R2 be dilation by a factor of 3/2: that is, T (x) =


3/2x. Is T invertible? If so, what is T −1 ?
Solution. In this example in Section 4.1 we showed that the matrix for T is
 ‹
3/2 0
A= .
0 3/2

The determinant of A is 9/4 6= 0, so A is invertible with inverse

1
 ‹  ‹
3/2 0 2/3 0
A =
−1
= .
9/4 0 3/2 0 2/3

By the theorem, T is invertible, and its inverse is the matrix transformation for
A−1 :  ‹
2/3 0
T (x) =
−1
x.
0 2/3
We recognize this as a dilation by a factor of 2/3.

Example (Rotation). Let T : R2 → R2 be counterclockwise rotation by 45◦ . Is T


invertible? If so, what is T −1 ?
Solution. In this example in Section 4.3, we showed that the standard matrix
for the counterclockwise rotation of the plane by an angle of θ is

\[
\begin{pmatrix}\cos(\theta)&-\sin(\theta)\\ \sin(\theta)&\cos(\theta)\end{pmatrix}.
\]

Therefore, the standard matrix A for T is

\[
A = \frac{1}{\sqrt2}\begin{pmatrix}1&-1\\1&1\end{pmatrix},
\]

where we have used the trigonometric identities

\[
\cos(45^\circ) = \frac{1}{\sqrt2}, \qquad \sin(45^\circ) = \frac{1}{\sqrt2}.
\]

The determinant of A is

\[
\det(A) = \frac{1}{\sqrt2}\cdot\frac{1}{\sqrt2} - \left(-\frac{1}{\sqrt2}\right)\cdot\frac{1}{\sqrt2}
= \frac12 + \frac12 = 1,
\]

so the inverse is

\[
A^{-1} = \frac{1}{\sqrt2}\begin{pmatrix}1&1\\-1&1\end{pmatrix}.
\]

By the theorem, T is invertible, and its inverse is the matrix transformation for A^{-1}:

\[
T^{-1}(x) = \frac{1}{\sqrt2}\begin{pmatrix}1&1\\-1&1\end{pmatrix}x.
\]

We recognize this as a clockwise rotation by 45°, using the trigonometric identities

\[
\cos(-45^\circ) = \frac{1}{\sqrt2}, \qquad \sin(-45^\circ) = -\frac{1}{\sqrt2}.
\]
Example (Reflection). Let T : R2 → R2 be the reflection over the y-axis. Is T
invertible? If so, what is T −1 ?
Solution. In this example in Section 4.1 we showed that the matrix for T is
 ‹
−1 0
A= .
0 1

This matrix has determinant −1, so it is invertible, with inverse


 ‹  ‹
1 0 −1 0
A =−
−1
= = A.
0 −1 0 1

By the theorem, T is invertible, and it is equal to its own inverse: T −1 = T . This


is another way of saying that a reflection “undoes” itself.
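The three examples above can also be checked numerically. The sketch below (not part of the text) builds the standard matrices for the dilation, the 45° rotation, and the reflection, and verifies that each claimed inverse multiplies to the identity.

```python
import numpy as np

I2 = np.eye(2)
theta = np.pi / 4

dilation   = 1.5 * I2                                  # scale by 3/2
rotation   = np.array([[np.cos(theta), -np.sin(theta)],
                       [np.sin(theta),  np.cos(theta)]])
reflection = np.array([[-1., 0.], [0., 1.]])           # reflect over the y-axis

pairs = [(dilation, (2/3) * I2),   # inverse: scale by 2/3
         (rotation, rotation.T),   # inverse: clockwise rotation, which is the transpose
         (reflection, reflection)] # inverse: the reflection itself

for A, A_inv in pairs:
    print(np.allclose(A @ A_inv, I2), np.allclose(A_inv @ A, I2))
# True True  (printed three times)
```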

4.5.5 Characterizing Invertible Matrices


This subsection consists of a single important theorem, with many equivalent con-
ditions for a matrix to be invertible. This is one of the most important theorems
in this textbook. We will append two more criteria in Section 6.1.

Invertible Matrix Theorem. Let A be an n × n matrix, and let T : Rn → Rn be the


matrix transformation T (x) = Ax. The following statements are equivalent:

1. A is invertible.

2. T is invertible.

3. The reduced row echelon form of A is the identity matrix I n .

4. A has n pivots.

5. Ax = 0 has no solutions other than the trivial one.

6. Nul(A) = {0}.

7. nullity(A) = 0.

8. The columns of A are linearly independent.

9. The columns of A form a basis for Rn .

10. T is one-to-one.

11. Ax = b is consistent for all b in Rn .

12. Ax = b has a unique solution for each b in Rn .

13. The columns of A span Rn .

14. Col(A) = Rn .

15. dim Col(A) = n.

16. rank(A) = n.

17. T is onto.

18. There is a matrix B such that AB = I n .

19. There is a matrix B such that BA = I n .

Proof. (1 ⇐⇒ 2): This was established in a previous theorem.


(3 ⇐⇒ 4): The only reduced row echelon form matrix with n pivots is I n .
(5 ⇐⇒ 6 ⇐⇒ 7 ⇐⇒ 8 ⇐⇒ 9 ⇐⇒ 10): The first three are translations of
each other, because Nul(A) is the set of solutions of Ax = 0, and nullity(A) is the

dimension of Nul(A). The equivalence of 5 and 8 results from this theorem in Sec-
tion 3.5, and the equivalence of 5 and 10 results from this theorem in Section 4.2.
The basis theorem in Section 3.9 implies the equivalence of 8 and 9, since any set
of n linearly independent vectors in Rn forms a basis.
(11 ⇐⇒ 13 ⇐⇒ 14 ⇐⇒ 15 ⇐⇒ 16 ⇐⇒ 17): Assertions 13, 14, and
17 are translations of each other, because Col(A) is the span of the columns of A,
which is equal to the range of T . The equivalence of 11 and 13 follows from this
theorem in Section 3.3. Since dim(Rn ) = n, the only n-dimensional subspace of
Rn is all of Rn , so 14, 15, and 16 are equivalent.
(4 ⇐⇒ 8) and (4 ⇐⇒ 13): Since A has n rows and columns, it has a pivot
in every row (resp. column) if and only if it has n pivots. By this theorem in
Section 3.5, there is a pivot in every column if and only if the columns are linearly
independent, and by this theorem in Section 3.3, there is a pivot in every row if
and only if the columns span Rn .
(2 ⇐⇒ 10 ⇐⇒ 17): By this proposition, the transformation T is invertible
if and only if it is both one-to-one and onto, so 2 is equivalent to (10 and 17). We
have already shown (10 ⇐⇒ 8 ⇐⇒ 4 ⇐⇒ 13 ⇐⇒ 17), so 10 and 17 are
equivalent to each other; thus they are both equivalent to 2.
At this point, we have demonstrated the equivalence of assertions 1–17.
(1 =⇒ 18 =⇒ 17): Invertibility of A means there exists a matrix B such that
AB = I n and BA = I n , so 1 implies 18. Now we prove directly that T is onto. Let b
be a vector in Rn , and let x = B b. Then

T (x) = Ax = AB b = I n b = b,

so T (x) = b has at least one solution.


(1 =⇒ 19 =⇒ 10): Invertibility of A means there exists a matrix B such that
AB = I n and BA = I n , so 1 implies 19. Now we prove directly that T is one-to-one.
Suppose that x and y are vectors such that T (x) = T ( y).

T (x) = T ( y) =⇒ Ax = Ay =⇒ BAx = BAy =⇒ I n x = I n y =⇒ x = y.

Therefore, T (x) = T ( y) =⇒ x = y, so T is one-to-one.


Since 10 and 17 both imply 1, we have finished the proof.

There are two kinds of square matrices:

1. invertible matrices, and

2. non-invertible matrices.

For invertible matrices, all of the statements of the invertible matrix theorem
are true.
For non-invertible matrices, all of the statements of the invertible matrix the-
orem are false.

Example. Is this matrix invertible?


 
\[
A = \begin{pmatrix}1&2&-1\\2&4&7\\-2&-4&1\end{pmatrix}
\]

Solution. The second column is a multiple of the first. The columns are linearly
dependent, so A does not satisfy condition 8 of the invertible matrix theorem.
Therefore, A is not invertible.
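Numerically, several of the equivalent conditions in the invertible matrix theorem are easy to test at once. The sketch below (not from the text) checks the determinant, the rank, and the nullity of the matrix in this example; all three tests agree that A is not invertible.

```python
import numpy as np

A = np.array([[ 1.,  2., -1.],
              [ 2.,  4.,  7.],
              [-2., -4.,  1.]])

n = A.shape[0]
print(np.isclose(np.linalg.det(A), 0))    # True: the condition det(A) != 0 fails
print(np.linalg.matrix_rank(A) == n)      # False: rank(A) < n
print(n - np.linalg.matrix_rank(A))       # 1: nullity(A) != 0
```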

Example. Let A be an n × n matrix and let T (x) = Ax. Suppose that the range of
T is Rn . Show that the columns of A are linearly independent.
Solution. The range of T is the column space of A, so A satisfies condition 14 of
the invertible matrix theorem. Therefore, A also satisfies condition 8, which says
that the columns of A are linearly independent.

Example. Let A be a 3 × 3 matrix such that


   
\[
A\begin{pmatrix}1\\7\\0\end{pmatrix} = A\begin{pmatrix}2\\0\\-1\end{pmatrix}.
\]

Show that the rank of A is at most 2.


Solution. If we set

\[
b = A\begin{pmatrix}1\\7\\0\end{pmatrix} = A\begin{pmatrix}2\\0\\-1\end{pmatrix},
\]
then Ax = b has multiple solutions, so it does not satisfy condition 12 of the
invertible matrix theorem. Therefore, it does not satisfy condition 16, so the rank
of A is not 3. But rank(A) = dim Col(A) and Col(A) is a subspace of R3 , so its rank
cannot exceed 3. Therefore, the rank is at most 2.

Example. Suppose that A is an n × n matrix such that Ax = b is inconsistent for some


vector b. Show that Ax = b has infinitely many solutions for some (other) vector
b.
Solution. By hypothesis, A does not satisfy condition 11 of the invertible matrix
theorem. Therefore, it does not satisfy condition 5, so if we take b = 0, then the
equation Ax = b has infinitely many solutions.
Chapter 5

Determinants

We begin by recalling the overall structure of this book:

1. Solve the matrix equation Ax = b.

2. Solve the matrix equation Ax = λx, where λ is a number.

3. Approximately solve the matrix equation Ax = b.

At this point we have said all that we will say about the first part. This chapter
belongs to the second.

Primary Goal. Learn about determinants: their computation and their properties.

The determinant of a square matrix A is a number det(A). This incredible quan-


tity is one of the most important invariants of a matrix; as such, it forms the basis
of most advanced computations involving matrices.
In Section 5.1, we will define the determinant in terms of its behavior with
respect to row operations. The determinant satisfies many wonderful properties:
for instance, det(A) 6= 0 if and only if A is invertible. We will discuss some of these
properties in Section 5.1 as well. In Section 5.2, we will give a recursive formula
for the determinant of a matrix. This formula is very useful, for instance, when
taking the determinant of a matrix with unknown entries; this will be important
in Chapter 6. Finally, in Section 5.3, we will relate determinants to volumes. This
gives a geometric interpretation for determinants, and explains why the deter-
minant is defined the way it is. This interpretation of determinants is a crucial
ingredient in the change-of-variables formula in multivariable calculus.

5.1 Determinants: Definition

Objectives

1. Learn the definition of the determinant.


2. Learn some ways to eyeball a matrix with zero determinant, and how to
compute determinants of upper- and lower-triangular matrices.

3. Learn the basic properties of the determinant, and how to apply them.

4. Recipe: compute the determinant using row and column operations.

5. Theorems: existence theorem, invertibility property, multiplicativity prop-


erty, transpose property.

6. Vocabulary words: diagonal, upper-triangular, lower-triangular, trans-


pose.

7. Essential vocabulary word: determinant.

In this section, we define the determinant, and we present one way to com-
pute it. Then we discuss some of the many wonderful properties the determinant
enjoys.

5.1.1 The Definition of the Determinant


The determinant of a square matrix A is a real number det(A). It is defined via its
behavior with respect to row operations; this means we can use row reduction to
compute it. We will give a recursive formula for the determinant in Section 5.2.
We will also show in subsection that the determinant is related to invertibility, and
in Section 5.3 that it is related to volumes.

Essential Definition. The determinant is a function



det: n × n matrices −→ R

satisfying the following properties:

1. Doing a row replacement on A does not change det(A).

2. Scaling a row of A by a scalar c multiplies the determinant by c.

3. Swapping two rows of a matrix multiplies the determinant by −1.

4. The determinant of the identity matrix I n is equal to 1.

In each of the first three cases, doing a row operation on a matrix scales the
determinant by a nonzero number. (Multiplying a row by zero is not a row op-
eration.) Therefore, doing row operations on a square matrix A does not change
whether or not the determinant is zero.
The reason behind using these particular defining properties is geometric. See
Section 5.3.
 ‹
1 0
Example. Compute det .
0 3
 ‹
1 0
Solution. Let A = . Since A is obtained from I2 by multiplying the second
0 3
row by the constant 3, we have

det(A) = 3 det(I2 ) = 3 · 1 = 3.
 
Example. Compute \(\det\begin{pmatrix}1&0&0\\0&0&1\\5&1&0\end{pmatrix}\).

Solution. First we row reduce, then we compute the determinant in the opposite
order:

\[
\begin{aligned}
&\begin{pmatrix}1&0&0\\0&0&1\\5&1&0\end{pmatrix} && \det = -1\\
\xrightarrow{R_2 \leftrightarrow R_3}\quad &\begin{pmatrix}1&0&0\\5&1&0\\0&0&1\end{pmatrix} && \det = 1\\
\xrightarrow{R_2 = R_2 - 5R_1}\quad &\begin{pmatrix}1&0&0\\0&1&0\\0&0&1\end{pmatrix} && \det = 1
\end{aligned}
\]

The reduced row echelon form is I3 , which has determinant 1. Working backwards
from I3 and using the four defining properties, we see that the second matrix also
has determinant 1 (it differs from I3 by a row replacement), and the first matrix
has determinant −1 (it differs from the second by a row swap).
 ‹
2 1
Example. Compute det .
1 4
Solution. First we row reduce, then we compute the determinant in the opposite
order:
 ‹
2 1
det = 7
1 4
 ‹
R1 ←→ R2 1 4
−−−−−−−→ det = −7
2 1
 ‹
R2 = R2 − 2R1 1 4
−−−−−−−→ det = −7
0 −7
 ‹
R2 = R2 ÷ −7 1 4
−−−−−−−→ det = 1
0 1
 ‹
R1 = R1 − 4R2 1 0
−−−−−−−→ det = 1
0 1
188 CHAPTER 5. DETERMINANTS

The reduced row echelon form of the matrix is the identity matrix I2 , so its de-
terminant is 1. The second-last step in the row reduction was a row replacement,
so the second-final matrix also has determinant 1. The previous step in the row
reduction was a row scaling by −1/7; since (the determinant of the second ma-
trix times −1/7) is 1, the determinant of the second matrix must be −7. The first
step in the row reduction was a row swap, so the determinant of the first matrix
is negative the determinant of the second. Thus, the determinant of the original
matrix is 7.

Next we will compute some determinants of matrices with special properties,


but before doing so, we need some terminology involving matrices.

Definition.

• The diagonal entries of a matrix A are the entries a11 , a22 , . . .:

diagonal entries
 
  a11 a12 a13
a11 a12 a13 a14 a
 21 a22 a23 
 a21 a22 a23 a24 
  
 
 a31 a32 a33 
a31 a32 a33 a34
a41 a42 a43

• A square matrix is called upper-triangular if its nonzero entries all lie above
the diagonal, and it is called lower-triangular if its nonzero entries all lie
below the diagonal. It is called diagonal if all of its nonzero entries lie on
the diagonal, i.e., if it is both upper-triangular and lower-triangular.

diagonal
upper-triangular lower-triangular  
? ? ? ? ? 0 0 0 ? 0 0 0
0 ? ? ? ? ? 0 0 0 ? 0 0
     
 
0 0 ? ? ?
   
? ? ? 0 0 0 0
 
 
0 0 0 ? ? ? ? ? 0 0 0 ?

Proposition. Let A be an n × n matrix.

1. If A has a zero row or column, then det(A) = 0.

2. If A is upper-triangular or lower-triangular, then det(A) is the product of its


diagonal entries.

Proof.

1. Suppose that A has a zero row. Let B be the matrix obtained by negating
the zero row. Then det(A) = − det(B) by the second defining property. But
A = B, so det(A) = det(B):
   
1 2 3 1 2 3
R2 =−R2
 0 0 0  −−−− → 0 0 0.
7 8 9 7 8 9

Putting these together yields det(A) = − det(A), so det(A) = 0.


Now suppose that A has a zero column. Then A is not invertible by the
invertible matrix theorem in Section 4.5, so its reduced row echelon form has
a zero row. Since row operations do not change whether the determinant is
zero, we conclude det(A) = 0.

2. First suppose that A is upper-triangular, and that one of the diagonal entries
is zero, say aii = 0. We can perform row operations to clear the entries above
the nonzero diagonal entries:

a11 ? ? ? a11 0 ? 0
   
 0 a22 ? ?   0 a22 ? 0 
−−−→ 
 0 0 0 ?  0 0 0 0 
0 0 0 a44 0 0 0 a44

In the resulting matrix, the ith row is zero, so det(A) = 0 by the first part.
Still assuming that A is upper-triangular, now suppose that all of the diagonal
entries of A are nonzero. Then A can be transformed to the identity matrix
by scaling the diagonal entries and then doing row replacements:
     
a ? ? scale by 1 ? ? row 1 0 0
a−1 , b−1 , c −1 replacements
 0 b ?  −−−−−−−→  0 1 ?  −−−−−−− → 0 1 0
0 0 c 0 0 1 0 0 1
det = abc ←−−−−−−− det = 1 ←−−−−−−− det = 1
Since det(I n ) = 1 and we scaled by the reciprocals of the diagonal entries,
this implies det(A) is the product of the diagonal entries.
The same argument works for lower triangular matrices, except that the
row replacements go down instead of up.

Example. Compute the determinants of these matrices:


     
1 2 3 −20 0 0 17 −3 4
0 4 5  π 0 0   0 0 0.
0 0 6 100 3 −7 11/2 1 e

Solution. The first matrix is upper-triangular, the second is lower-triangular, and


the third has a zero row:
 
1 2 3
det  0 4 5  = 1 · 4 · 6 = 24
0 0 6
 
−20 0 0
det  π 0 0  = −20 · 0 · −7 = 0
100 3 −7
 
17 −3 4
det  0 0 0  = 0.
11/2 1 e
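A quick numerical check of the proposition (not part of the text), assuming NumPy, using the three matrices from the example above: for a triangular matrix the determinant equals the product of the diagonal entries, and a zero row forces the determinant to be zero.

```python
import numpy as np

upper    = np.array([[1., 2., 3.], [0., 4., 5.], [0., 0., 6.]])
lower    = np.array([[-20., 0., 0.], [np.pi, 0., 0.], [100., 3., -7.]])
zero_row = np.array([[17., -3., 4.], [0., 0., 0.], [5.5, 1., np.e]])

print(np.isclose(np.linalg.det(upper), np.prod(np.diag(upper))))  # True: both are 24
print(np.isclose(np.linalg.det(lower), np.prod(np.diag(lower))))  # True: both are 0
print(np.isclose(np.linalg.det(zero_row), 0))                     # True
```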

A matrix can always be transformed into row echelon form by a series of row
operations, and a matrix in row echelon form is upper-triangular. Therefore, we
have a systematic way of computing the determinant:

Recipe: Computing determinants by row reducing. Let A be a square ma-


trix. Suppose that you do some number of row operations on A to obtain a
matrix B in row echelon form. Then
(product of the diagonal entries of B)
det(A) = (−1) r · ,
(product of scaling factors used)

where r is the number of row swaps performed.

In other words, the determinant of A is the product of diagonal entries of the


row echelon form B, times a factor of ±1 coming from the number of row swaps
you made, divided by the product of the scaling factors used in the row reduction.
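Here is one way the recipe could be coded (a sketch, not the book's implementation), assuming NumPy. It reduces the matrix to row echelon form using only row swaps and row replacements, so no scaling factors appear and the denominator in the recipe is 1; the determinant is then (−1)^r times the product of the diagonal of the echelon form.

```python
import numpy as np

def det_by_row_reduction(A):
    """Determinant via row reduction, using only swaps and row replacements."""
    M = np.array(A, dtype=float)
    n = M.shape[0]
    swaps = 0
    for i in range(n):
        p = i + np.argmax(np.abs(M[i:, i]))   # pick a pivot row
        if np.isclose(M[p, i], 0):
            return 0.0                        # no pivot available: determinant is 0
        if p != i:
            M[[i, p]] = M[[p, i]]             # row swap flips the sign
            swaps += 1
        for j in range(i + 1, n):             # row replacements do not change det
            M[j] -= (M[j, i] / M[i, i]) * M[i]
    return (-1) ** swaps * np.prod(np.diag(M))

A = [[0, -7, -4], [2, 4, 6], [3, 7, -1]]
print(det_by_row_reduction(A), np.linalg.det(A))   # both approximately -148
```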

Remark. This is an efficient way of computing the determinant of a large matrix,


either by hand or by computer. The computational complexity of row reduction is
O(n3 ); by contrast, the cofactor expansion algorithm we will learn in Section 5.2
p
has complexity O(n!) ≈ O(nn n), which is much larger. (Cofactor expansion has
other uses.)
 
0 −7 −4
Example. Compute det  2 4 6 .
3 7 −1

Solution. We row reduce the matrix, keeping track of the number of row swaps
and of the scaling factors used.
   
0 −7 −4 2 4 6
R1 ←→R2
2 4 6 −−−−→ 0
  −7 −4  r = 1
3 7 −1 3 7 −1
 
1 2 3
R1 =R1 ÷2 1
−−−−−→ 0  −7 −4  scaling factors = 2
3 7 −1
 
1 2 3
R3 =R3 −3R1
−−−−−−→ 0  −7 −4 
0 1 −10
 
1 2 3
R2 ←→R3
−−−−→ 0  1 −10  r = 2
0 −7 −4
 
1 2 3
R3 =R3 +7R2
−−−−−−→ 0  1 −10 
0 0 −74

We made two row swaps and scaled once by a factor of 1/2, so the recipe says
that  
0 −7 −4
1 · 1 · −74
det  2 4 6  = (−1)2 · = −148.
3 7 −1 1/2
 
1 2 3
Example. Compute det  2 −1 1  .
3 0 1
Solution. We row reduce the matrix, keeping track of the number of row swaps
and of the scaling factors used.
   
1 2 3 1 2 3
R2 =R2 −2R1
 2 −1 1  −−−−−− →  0 −5 −5 
R3 =R3 −3R1
3 0 1 0 −6 −8
 
1 2 3
R2 =R2 ÷−5
−−−−−→  0 1 1  scaling factors = − 15
0 −6 −8
 
1 2 3
R3 =R3 +6R2
−−−−−−→  0 1 1 
0 0 −2

We did not make any row swaps, and we scaled once by a factor of −1/5, so
the recipe says that
 
1 2 3
1 · 1 · −2
det  2 −1 1  = = 10.
3 0 1 −1/5

Example (The determinant of a 2 × 2 matrix). Let us


‹ use the recipe to compute
a b
the determinant of a general 2 × 2 matrix A = .
c d

• If a = 0, then
 ‹  ‹  ‹
a b 0 b c d
det = det = − det = −bc.
c d c d 0 b

• If a 6= 0, then
 ‹  ‹  ‹
a b 1 b/a 1 b/a
det = a · det = a · det
c d c d 0 d − c · b/a
= a · 1 · (d − bc/a) = ad − bc.

In either case, we recover the formula in Section 4.5:


 ‹
a b
det = ad − bc.
c d

The determinant is characterized by its defining properties, since we can com-


pute the determinant of any matrix using row reduction, as in the above recipe.
However, we have not yet proved the existence of a function satisfying the defin-
ing properties! Row reducing will compute the determinant if it exists, but we
cannot use row reduction to prove existence, because we do not yet know that
you compute the same number by row reducing in two different ways.

Theorem (Existence of the determinant). There exists one and only one function from
the set of square matrices to the real numbers that satisfies the four defining properties.

We will prove the existence theorem in Section 5.2, by exhibiting a recursive


formula for the determinant. Again, the real content of the existence theorem is:

No matter which row operations you do, you will always compute the same
value for the determinant.

5.1.2 Magical Properties of the Determinant


In this subsection, we will discuss a number of the amazing properties enjoyed by
the determinant: the invertibility property, the multiplicativity property, and the
transpose property.

Invertibility Property. A square matrix A is invertible if and only if det(A) ≠ 0.



Proof. If A is invertible, then its reduced row echelon form is the identity matrix by
the invertible matrix theorem in Section 4.5. Since row operations do not change
whether the determinant is zero, and since det(I n ) = 1, this implies det(A) 6= 0.
Conversely, if A is not invertible, then it is row equivalent to a matrix with a zero
row. Again, row operations do not change whether the determinant is nonzero, so
in this case det(A) = 0.
By the invertibility property, a matrix that does not satisfy any of the properties
of the invertible matrix theorem in Section 4.5 has zero determinant.
Corollary. Let A be a square matrix. If the rows or columns of A are linearly depen-
dent, then det(A) = 0.
Proof. If the columns of A are linearly dependent, then A is not invertible by state-
ment 8 of the invertible matrix theorem in Section 4.5. Suppose now that the rows
of A are linearly dependent. If r1 , r2 , . . . , rn are the rows of A, then one of the rows
is in the span of the others, so we have an equation like

r2 = 3r1 − r3 + 2r4 .

If we perform the following row operations on A:

R2 = R2 − 3R1 ; R2 = R2 + R3 ; R2 = R2 − 2R4

then the second row of the resulting matrix is zero. Hence A is not invertible in
this case either.
Alternatively, if the rows of A are linearly dependent, then one can combine
statement 8 of the invertible matrix theorem in Section 4.5 and the transpose
property below to conclude that det(A) = 0.
In particular, if two rows/columns of A are multiples of each other, then det(A) =
0. We also recover the fact that a matrix with a row or column of zeros has deter-
minant zero.
Example. The following matrices all have zero determinant:

3 1 2 4
       
0 2 −1 5 −15 11 π e 11
 0 5 10  ,  3 −9 2  ,  0 0 0 0 
 4 2 5 12  ,  3π 3e 33  .
0 −7 3 2 −6 16 12 −7 2
−1 3 4 8

The proofs of the multiplicativity property and the transpose property below,
as well as the cofactor expansion theorem in Section 5.2 and the determinants
and volumes theorem in Section 5.3, use the following strategy: define another
function d : {n × n matrices} → R, and prove that d satisfies the same four defining
properties as the determinant. By the existence theorem, the function d is equal to
the determinant. This is an advantage of defining a function via its properties: in
order to prove it is equal to another function, one only has to check the defining
properties.

Multiplicativity Property. If A and B are n × n matrices, then


det(AB) = det(A) det(B).
Proof. In this proof, we need to use the notion of an elementary matrix. This is a
matrix obtained by doing one row operation to the identity matrix. There are three
kinds of elementary matrices: those arising from row replacement, row scaling,
and row swaps:
   
1 0 0 1 0 0
R2 = R2 − 2R1
 0 1 0  −−−−−−−−→  −2 1 0 
0 0 1 0 0 1
   
1 0 0 3 0 0
R1 = 3R1
 0 1 0  −−−−−−−−→ 0 1 0
0 0 1 0 0 1
   
1 0 0 0 1 0
R1 ←→ R2
 0 1 0  −−−−−−−−→ 1 0 0
0 0 1 0 0 1

The important property of elementary matrices is the following claim.


Claim: If E is the elementary matrix for a row operation, then EA is the matrix
obtained by performing the same row operation on A.
In other words, left-multiplication by an elementary matrix applies a row op-
eration. For example,
    
1 0 0 a11 a12 a13 a11 a12 a13
 −2 1 0   a21 a22 a23  =  a21 − 2a11 a22 − 2a12 a23 − 2a13 
0 0 1 a31 a32 a33 a31 a32 a33
    
3 0 0 a11 a12 a13 3a11 3a12 3a13
 0 1 0   a21 a22 a23  =  a21 a22 a23 
0 0 1 a31 a32 a33 a31 a32 a33
    
0 1 0 a11 a12 a13 a21 a22 a23
 1 0 0   a21 a22 a23  =  a11 a12 a13  .
0 0 1 a31 a32 a33 a31 a32 a33
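The Claim is easy to test numerically. In the sketch below (not from the text), each elementary matrix is built by applying one row operation to the identity, and left-multiplication by it reproduces that same row operation on an arbitrary matrix A.

```python
import numpy as np

A = np.arange(1., 10.).reshape(3, 3)            # an arbitrary 3x3 matrix

E_replace = np.eye(3); E_replace[1, 0] = -2.0   # row replacement: R2 = R2 - 2*R1
E_scale   = np.eye(3); E_scale[0, 0]   = 3.0    # row scaling:     R1 = 3*R1
E_swap    = np.eye(3)[[1, 0, 2]]                # row swap:        R1 <-> R2

B = A.copy(); B[1] -= 2 * B[0]                  # do the row replacement by hand
print(np.allclose(E_replace @ A, B))                            # True
print(np.allclose(E_scale @ A, np.vstack([3 * A[0], A[1:]])))   # True
print(np.allclose(E_swap @ A, A[[1, 0, 2]]))                    # True
```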

The proof of the Claim is by direct calculation; we leave it to the reader to gener-
alize the above equalities to n × n matrices.
As a consequence of the Claim and the four defining properties, we have the
following observation. Let C be any square matrix.
1. If E is the elementary matrix for a row replacement, then det(EC) = det(C).
In other words, left-multiplication by E does not change the determinant.

2. If E is the elementary matrix for a row scale by a factor of c, then det(EC) =


c det(C). In other words, left-multiplication by E scales the determinant by a
factor of c.

3. If E is the elementary matrix for a row swap, then det(EC) = − det(C). In


other words, left-multiplication by E negates the determinant.

Now we turn to the proof of the multiplicativity property. Suppose to begin


that B is not invertible. Then AB is also not invertible: otherwise, (AB)−1 AB = I n
implies B −1 = (AB)−1 A. By the invertibility property, both sides of the equation
det(AB) = det(A) det(B) are zero.
Now assume that B is invertible, so det(B) 6= 0. Define a function
 det(C B)
d : n × n matrices −→ R by d(C) = .
det(B)

We claim that d satisfies the four defining properties of the determinant.

1. Let C 0 be the matrix obtained by doing a row replacement on C, and let E


be the elementary matrix for this row replacement, so C 0 = EC. Since left-
multiplication by E does not change the determinant, we have det(EC B) =
det(C B), so

det(C 0 B) det(EC B) det(C B)


d(C 0 ) = = = = d(C).
det(B) det(B) det(B)

2. Let C 0 be the matrix obtained by scaling a row of C by a factor of c, and let


E be the elementary matrix for this row replacement, so C 0 = EC. Since
left-multiplication by E scales the determinant by a factor of c, we have
det(EC B) = c det(C B), so

det(C 0 B) det(EC B) c det(C B)


d(C 0 ) = = = = c · d(C).
det(B) det(B) det(B)

3. Let C 0 be the matrix obtained by swapping two rows of C, and let E be


the elementary matrix for this row replacement, so C 0 = EC. Since left-
multiplication by E negates the determinant, we have det(EC B) = − det(C B),
so
det(C 0 B) det(EC B) − det(C B)
d(C 0 ) = = = = −d(C).
det(B) det(B) det(B)

4. We have
det(I n B) det(B)
d(I n ) = = = 1.
det(B) det(B)

Since d satisfies the four defining properties of the determinant, it is equal to


the determinant by the existence theorem. In other words, for all matrices A, we
have
det(AB)
det(A) = d(A) = .
det(B)
Multiplying through by det(B) gives det(A) det(B) = det(AB).
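A numerical spot-check of the multiplicativity property (not part of the text), assuming NumPy. The matrices are random, so this is evidence rather than a proof.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

print(np.isclose(np.linalg.det(A @ B),
                 np.linalg.det(A) * np.linalg.det(B)))   # True
```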

Recall that taking a power of a square matrix A means taking products of A


with itself:
A2 = AA A3 = AAA etc.
If A is invertible, then we define

A−2 = A−1 A−1 A−3 = A−1 A−1 A−1 etc.

For completeness, we set A0 = I n if A 6= 0.

Corollary. If A is a square matrix, then

det(An ) = det(A)n

for all n ≥ 1. If A is invertible, then the equation holds for all n ≤ 0 as well; in
particular,
1
det(A−1 ) = .
det(A)
Proof. Using the multiplicativity property, we compute

det(A2 ) = det(AA) = det(A) det(A) = det(A)2

and

det(A3 ) = det(AAA) = det(A) det(AA) = det(A) det(A) det(A) = det(A)3 ;

the pattern is clear.


We have
1 = det(I n ) = det(AA−1 ) = det(A) det(A−1 )
by the multiplicativity property and the fourth defining property, which shows that
det(A−1 ) = det(A)−1 . Thus

det(A−2 ) = det(A−1 A−1 ) = det(A−1 ) det(A−1 ) = det(A−1 )2 = det(A)−2 ,

and so on.

Example. Compute det(A100 ), where


 ‹
4 1
A= .
2 1

Solution. We have det(A) = 4 − 2 = 2, so

det(A100 ) = det(A)100 = 2100 .

Nowhere did we have to compute the 100th power of A! (We will learn an efficient
way to do that in Section 6.4.)
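The same shortcut in code (a sketch, not from the text): for a small power we can compare both sides exactly, and for the 100th power we simply use det(A)^100 = 2^100 without ever forming A^100.

```python
import numpy as np

A = np.array([[4., 1.], [2., 1.]])
d = np.linalg.det(A)                 # 2.0

# Spot-check the corollary for a small power (exact in floating point):
print(np.isclose(np.linalg.det(np.linalg.matrix_power(A, 5)), d ** 5))  # True: both are 32

# det(A^100) by the corollary, with no need to compute A^100 at all:
print(d ** 100)                      # 1.2676506002282294e+30, i.e. 2^100
```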

Here is another application of the multiplicativity property.



Corollary. Let A1 , A2 , . . . , A p be n × n matrices. Then the product A1 A2 · · · A p is in-


vertible if and only if each Ai is invertible.

Proof. The determinant of the product is the product of the determinants by the
multiplicativity property:

det(A1 A2 · · · A p ) = det(A1 ) det(A2 ) · · · det(A p ).

By the invertibility property, this is nonzero if and only if A1 A2 · · · A p is invert-


ible. On the other hand, det(A1 ) det(A2 ) · · · det(A p ) is nonzero if and only if each
det(Ai ) 6= 0, which means each Ai is invertible.

Example. For any number n we define



‹
1 n
An = .
1 2

Show that the product


A1 A2 A3 A4 A5
is not invertible.
Solution. When n = 2, the matrix A2 is not invertible, because its rows are iden-
tical:  ‹
1 2
A2 = .
1 2
Hence any product involving A2 is not invertible.

In order to state the transpose property, we need to define the transpose of a


matrix.

Definition. The transpose of an m× n matrix A is the n× m matrix AT whose rows


are the columns of A. In other words, the i j entry of AT is a ji .

AT
A  
  a11 a21 a31
a11 a12 a13 a14  
   a12 a22 a32 
 a21 a22 a23 a24  



   a13 a23 a33 
a31 a32 a33 a34  
a14 a24 a34
flip

Like inversion, transposition reverses the order of matrix multiplication.



Fact. Let A be an m × n matrix, and let B be an n × p matrix. Then

(AB) T = B T AT .

Proof. First suppose that A is a row vector and B is a column vector, i.e., m = p = 1.


Then
 
b1
  b2 
AB = a1 a2 · · · an   ...  = a1 b1 + a2 b2 + · · · + an bn

bn
 
a1
  a2 
= b1 b2 · · · bn  ...  = B A .
 T T

an

Now we use the row-column rule for matrix multiplication. Let r1 , r2 , . . . , rm


be the rows of A, and let c1 , c2 , . . . , c p be the columns of B, so
   
— r1 —   r 1 c1 r 1 c2 · · · r 1 c p
 — r2 —  | | |  r 2 c1 r 2 c2 · · · r 2 c p 
AB = 
 ..   c 1 c 2 · · · c p
 =  .
 .. .. ..  .
. 
| | | . .
— rm — r m c1 r m c2 · · · r m c p

By the case we handled above, we have ri c j = c Tj riT . Then


 
r 1 c1 r 2 c1 · · · r m c1
 r 1 c2 r 2 c2 · · · r m c2 
(AB) T = 
 ... .. .. 
. . 
r1 c p r2 c p · · · rm c p
 T T T T
· · · c1T rmT

c1 r 1 c1 r 2
 c2T r1T c2T r2T · · · c2T rmT 
= ... .. .. 
. . 
c pT r1T
c pT r2T · · · c pT rmT
— c1T — 
 

 — c2T —  | | |
= ..  rT
1
r2T · · · rmT  = B T AT .
 . 
| | |
— c pT —

Transpose Property. For any square matrix A, we have

det(A) = det(AT ).

Proof. We follow the same strategy as in the proof of the multiplicativity property:
namely, we define
d(A) = det(AT ),
and we show that d satisfies the four defining properties of the determinant. Again
we use elementary matrices, also introduced in the proof of the multiplicativity
property.
1. Let C 0 be the matrix obtained by doing a row replacement on C, and let E be
the elementary matrix for this row replacement, so C 0 = EC. The elementary
matrix for a row replacement is either upper-triangular or lower-triangular,
with ones on the diagonal:
   
1 0 3 1 0 0
R1 = R1 + 3R3 :  0 1 0  R3 = R3 + 3R1 :  0 1 0  .
0 0 1 3 0 1

It follows that E T is also either upper-triangular or lower-triangular, with


ones on the diagonal, so det(E T ) = 1 by this proposition. By the fact and the
multiplicativity property,
d(C 0 ) = det((C 0 ) T ) = det((EC) T ) = det(C T E T )
= det(C T ) det(E T ) = det(C T ) = d(C).

2. Let C 0 be the matrix obtained by scaling a row of C by a factor of c, and let


E be the elementary matrix for this row replacement, so C 0 = EC. Then E is
a diagonal matrix:  
1 0 0
R2 = cR2 :  0 c 0  .
0 0 1
Thus det(E T ) = c. By the fact and the multiplicativity property,
d(C 0 ) = det((C 0 ) T ) = det((EC) T ) = det(C T E T )
= det(C T ) det(E T ) = c det(C T ) = c · d(C).

3. Let C 0 be the matrix obtained by swapping two rows of C, and let E be the
elementary matrix for this row swap, so C 0 = EC. Then E is equal to
its own transpose:
   T
0 1 0 0 1 0
R1 ←→ R2 :  1 0 0  =  1 0 0  .
0 0 1 0 0 1

Since E (hence E T ) is obtained by performing one row swap on the identity


matrix, we have det(E T ) = −1. By the fact and the multiplicativity property,
d(C 0 ) = det((C 0 ) T ) = det((EC) T ) = det(C T E T )
= det(C T ) det(E T ) = − det(C T ) = −d(C).

4. Since I nT = I n , we have

d(I n ) = det(I nT ) = det(I n ) = 1.

Since d satisfies the four defining properties of the determinant, it is equal to


the determinant by the existence theorem. In other words, for all matrices A, we
have
det(A) = d(A) = det(AT ).

The transpose property is very useful. For concreteness, we note that det(A) =
det(AT ) means, for instance, that
   
1 2 3 1 4 7
det  4 5 6  = det  2 5 8  .
7 8 9 3 6 9

This implies that the determinant has the curious feature that it also behaves well
with respect to column operations. Indeed, a column operation on A is the same
as a row operation on AT , and det(A) = det(AT ).
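A quick numerical check (not from the text) of the transpose property and of the fact (AB)^T = B^T A^T, assuming NumPy.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 4))

print(np.isclose(np.linalg.det(A), np.linalg.det(A.T)))   # True: det(A) = det(A^T)
print(np.allclose((A @ B).T, B.T @ A.T))                  # True: (AB)^T = B^T A^T
```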

Corollary. The determinant satisfies the following properties with respect to column
operations:

1. Doing a column replacement on A does not change det(A).

2. Scaling a column of A by a scalar c multiplies the determinant by c.

3. Swapping two columns of a matrix multiplies the determinant by −1.

The previous corollary makes it easier to compute the determinant: one is al-
lowed to do row and column operations when simplifying the matrix. (Of course,
one still has to keep track of how the row and column operations change the de-
terminant.)
 
2 7 4
Example. Compute det  3 1 3  .
4 0 1
Solution. It takes fewer column operations than row operations to make this
matrix upper-triangular:

   
2 7 4 −14 7 4
C1 =C1 −4C3
 3 1 3  −−−−−− →  −9 1 3 
4 0 1 0 0 1
 
49 7 4
C1 =C1 +9C2
−−−−−−→  0 1 3 
0 0 1

We performed two column replacements, which does not change the determi-
nant; therefore,
   
2 7 4 49 7 4
det  3 1 3  = det  0 1 3  = 49.
4 0 1 0 0 1

Multilinearity The following observation is useful for theoretical purposes.


We can think of det as a function of the rows of a matrix:
 
— v1 —
 — v2 — 
det(v1 , v2 , . . . , vn ) = det  .. .
 . 
— vn —

Multilinearity Property. Let i be a whole number between 1 and n, and fix n − 1


vectors v1 , v2 , . . . , vi−1 , vi+1 , . . . , vn in Rn . Then the transformation T : Rn → R defined
by
T (x) = det(v1 , v2 , . . . , vi−1 , x, vi+1 , . . . , vn )
is linear.

Proof. First assume that i = 1, so

T (x) = det(x, v2 , . . . , vn ).

We have to show that T satisfies the defining properties in Section 4.3.

• By the first defining property, scaling any row of a matrix by a number c


scales the determinant by a factor of c. This implies that T satisfies the
second property, i.e., that

T (c x) = det(c x, v2 , . . . , vn ) = c det(x, v2 , . . . , vn ) = cT (x).

• We claim that T (v + w) = T (v) + T (w). If w is in Span{v, v2 , . . . , vn }, then

w = cv + c2 v2 + · · · + cn vn

for some scalars c, c2 , . . . , cn . Let A be the matrix with rows v + w, v2 , . . . , vn ,


so T (v + w) = det(A). By performing the row operations

R 1 = R 1 − c2 R 2 ; R1 = R1 − c3 R3 ; ... R 1 = R 1 − cn R n ,

the first row of the matrix A becomes

v + w − (c2 v2 + · · · + cn vn ) = v + cv = (1 + c)v.

Therefore,

T (v + w) = det(A) = det((1 + c)v, v2 , . . . , vn )


= (1 + c) det(v, v2 , . . . , vn )
= T (v) + cT (v) = T (v) + T (cv).

Doing the opposite row operations

R 1 = R 1 + c2 R 2 ; R 1 = R 1 + c3 R 3 ; ... R 1 = R 1 + cn R n

to the matrix with rows cv, v2 , . . . , vn shows that

T (cv) = det(cv, v2 , . . . , vn )
= det(cv + c2 v2 + · · · + cn vn , v2 , . . . , vn )
= det(w, v2 , . . . , vn ) = T (w),

which finishes the proof of the first property in this case.


Now suppose that w is not in Span{v, v2 , . . . , vn }. This implies that {v, v2 , . . . , vn }
is linearly dependent (otherwise it would form a basis for Rn ), so T (v) = 0.
If v is not in Span{v2 , . . . , vn }, then {v2 , . . . , vn } is linearly dependent by the
increasing span criterion in Section 3.5, so T (x) = 0 for all x, as the ma-
trix with rows x, v2 , . . . , vn is not invertible. Hence we may assume v is in
Span{v2 , . . . , vn }. By the above argument with the roles of v and w reversed,
we have T (v + w) = T (v) + T (w).

For i 6= 1, we note that

T (x) = det(v1 , v2 , . . . , vi−1 , x, vi+1 , . . . , vn )


= − det(x, v2 , . . . , vi−1 , v1 , vi+1 , . . . , vn ).

By the previously handled case, we know that −T is linear:

−T (c x) = −cT (x) − T (v + w) = −T (v) − T (w).

Multiplying both sides by −1, we see that T is linear.

For example, we have


     
— v1 — — v1 — — v1 —
det  — av + bw —  = a det  — v —  + b det  — w — 
— v3 — — v3 — — v3 —

By the transpose property, the determinant is also multilinear in the columns of a


matrix:
     
| | | | | | | | |
det  v1 av + bw v3  = a det  v1 v v3  + b det  v1 w v3  .
| | | | | | | | |

Remark (Alternative defining properties). In more theoretical treatments of the


topic, where row reduction plays a secondary role, the defining properties of the
determinant are often taken to be:
1. The determinant det(A) is multilinear in the rows of A.

2. If A has two identical rows, then det(A) = 0.

3. The determinant of the identity matrix is equal to one.


We have already shown that our four defining properties imply these three. Con-
versely, we will prove that these three alternative properties imply our four, so that
both sets of properties are equivalent.
Defining property 2 is just the second defining property in Section 4.3. Suppose
that the rows of A are v1 , v2 , . . . , vn . If we perform the row replacement R i = R i +cR j
on A, then the rows of our new matrix are v1 , v2 , . . . , vi−1 , vi + cv j , vi+1 , . . . , vn , so by
linearity in the ith row,
det(v1 , v2 , . . . , vi−1 , vi + cv j , vi+1 , . . . , vn )
= det(v1 , v2 , . . . , vi−1 , vi , vi+1 , . . . , vn ) + c det(v1 , v2 , . . . , vi−1 , v j , vi+1 , . . . , vn )
= det(v1 , v2 , . . . , vi−1 , vi , vi+1 , . . . , vn ) = det(A),
where det(v1 , v2 , . . . , vi−1 , v j , vi+1 , . . . , vn ) = 0 because v j is repeated. Thus, the
alternative defining properties imply our first two defining properties. For the
third, suppose that we want to swap row i with row j. Using the second alternative
defining property and multilinearity in the ith and jth rows, we have
0 = det(v1 , . . . , vi + v j , . . . , vi + v j , . . . , vn )
= det(v1 , . . . , vi , . . . , vi + v j , . . . , vn ) + det(v1 , . . . , v j , . . . , vi + v j , . . . , vn )
= det(v1 , . . . , vi , . . . , vi , . . . , vn ) + det(v1 , . . . , vi , . . . , v j , . . . , vn )
+ det(v1 , . . . , v j , . . . , vi , . . . , vn ) + det(v1 , . . . , v j , . . . , v j , . . . , vn )
= det(v1 , . . . , vi , . . . , v j , . . . , vn ) + det(v1 , . . . , v j , . . . , vi , . . . , vn ),
as desired.
Example. We have
       
−1 1 0 0
 2  = − 0 + 2 1 + 3 0 .
3 0 0 1

Therefore,
   
−1 7 2 1 7 2
det  2 −3 2  = − det 0 −3
 2
3 1 1 0 1 1
   
0 7 2 0 7 2
+ 2 det  1 −3 2  + 3 det  0 −3 2  .
0 1 1 1 1 1

This is the basic idea behind cofactor expansions in Section 5.2.

Summary: Magical Properties of the Determinant.

1. There is one and only one function det: {n × n matrices} → R satisfying


the four defining properties.

2. The determinant of an upper-triangular or lower-triangular matrix is the


product of the diagonal entries.

3. A square matrix A is invertible if and only if det(A) ≠ 0; in this case,

1
det(A−1 ) = .
det(A)

4. If A and B are n × n matrices, then

det(AB) = det(A) det(B).

5. For any square matrix A, we have

det(AT ) = det(A).

6. The determinant can be computed by performing row and/or column


operations.

5.2 Cofactor Expansions

Objectives
1. Learn to recognize which methods are best suited to compute the determi-
nant of a given matrix.
2. Recipes: the determinant of a 3 × 3 matrix, compute the determinant using
cofactor expansions.
3. Vocabulary words: minor, cofactor.

In this section, we give a recursive formula for the determinant of a matrix,


called a cofactor expansion. The formula is recursive in that we will compute the
determinant of an n × n matrix assuming we already know how to compute the
determinant of an (n − 1) × (n − 1) matrix.
At the end is a supplementary subsection on Cramer’s rule and a cofactor for-
mula for the inverse of a matrix.

5.2.1 Cofactor Expansions


A recursive formula must have a starting point. For cofactor expansions, the start-
ing point is the case of 1×1 matrices. The definition of determinant directly implies
that 
    det( a ) = a.
To describe cofactor expansions, we need to introduce some notation.

Definition. Let A be an n × n matrix.

1. The (i, j) minor, denoted Ai j , is the (n − 1) × (n − 1) matrix obtained from A


by deleting the ith row and the jth column.

2. The (i, j) cofactor Ci j is defined in terms of the minor by

Ci j = (−1)i+ j det(Ai j ).

Note that the signs of the cofactors follow a “checkerboard pattern.” Namely,
(−1)i+ j is pictured in this matrix:

    [ + − + − ]
    [ − + − + ]
    [ + − + − ]
    [ − + − + ] .

Example. For

        [ 1  2  3 ]
    A = [ 4  5  6 ] ,
        [ 7  8  9 ]

compute A23 and C23.

Solution. The minor A23 is obtained by deleting the second row and third column of A:

    A23 = [ 1  2 ]
          [ 7  8 ] .

The corresponding cofactor is

    C23 = (−1)^{2+3} det(A23) = (−1)(1·8 − 2·7) = (−1)(−6) = 6.
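
A quick computational sketch of these definitions (assuming the NumPy library; the helper names minor and cofactor below are ours, not standard functions):

    import numpy as np

    def minor(A, i, j):
        # delete row i and column j (0-based indices)
        return np.delete(np.delete(A, i, axis=0), j, axis=1)

    def cofactor(A, i, j):
        return (-1) ** (i + j) * np.linalg.det(minor(A, i, j))

    A = np.array([[1., 2., 3.],
                  [4., 5., 6.],
                  [7., 8., 9.]])

    print(minor(A, 1, 2))      # [[1. 2.] [7. 8.]], the minor A23 in the text's 1-based notation
    print(cofactor(A, 1, 2))   # 6.0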

The cofactors Ci j of an n × n matrix are determinants of (n − 1) × (n − 1)


submatrices. Hence the following theorem is in fact a recursive procedure for
computing the determinant.

Theorem (Cofactor expansion). Let A be an n × n matrix with entries ai j .



1. For any i = 1, 2, . . . , n, we have


    det(A) = ∑_{j=1}^{n} a_{ij} C_{ij} = a_{i1} C_{i1} + a_{i2} C_{i2} + · · · + a_{in} C_{in} .

This is called cofactor expansion along the ith row.

2. For any j = 1, 2, . . . , n, we have


    det(A) = ∑_{i=1}^{n} a_{ij} C_{ij} = a_{1j} C_{1j} + a_{2j} C_{2j} + · · · + a_{nj} C_{nj} .

This is called cofactor expansion along the jth column.


Proof. First we will prove that cofactor expansion along the first column computes
the determinant. Define a function d : {n × n matrices} → R by
    d(A) = ∑_{i=1}^{n} (−1)^{i+1} a_{i1} det(A_{i1}) .

We want to show that d(A) = det(A). Instead of showing that d satisfies the four
defining properties of the determinant in Section 5.1, we will prove that it satisfies
the three alternative defining properties in Section 5.1, which were shown to be
equivalent.
1. We claim that d is multilinear in the rows of A. Let A be the matrix with rows
v1 , v2 , . . . , vi−1 , v + w, vi+1 , . . . , vn :
 
a11 a12 a13
A =  b 1 + c1 b 2 + c2 b 3 + c3  .
a31 a32 a33

Here we let bi and ci be the entries of v and w, respectively. Let B and C be


the matrices with rows v1 , v2 , . . . , vi−1 , v, vi+1 , . . . , vn and v1 , v2 , . . . , vi−1 , w, vi+1 , . . . , vn ,
respectively:
   
a11 a12 a13 a11 a12 a13
B =  b1 b2 b3  C =  c1 c2 c3  .
a31 a32 a33 a31 a32 a33

We wish to show d(A) = d(B) + d(C). For i′ ≠ i, the (i′, 1)-cofactor of A is the
sum of the (i′, 1)-cofactors of B and C, by multilinearity of the determinants
of (n − 1) × (n − 1) matrices:

    (−1)^{3+1} det(A31) = (−1)^{3+1} det [ a12      a13     ]
                                         [ b2 + c2  b3 + c3 ]

                        = (−1)^{3+1} det [ a12  a13 ]  +  (−1)^{3+1} det [ a12  a13 ]
                                         [ b2   b3  ]                    [ c2   c3  ]

                        = (−1)^{3+1} det(B31) + (−1)^{3+1} det(C31).

On the other hand, the (i, 1)-cofactors of A, B, and C are all the same:

    (−1)^{2+1} det(A21) = (−1)^{2+1} det [ a12  a13 ]  =  (−1)^{2+1} det(B21) = (−1)^{2+1} det(C21).
                                         [ a32  a33 ]

Now we compute

    d(A) = (−1)^{i+1} (b_i + c_i) det(A_{i1}) + ∑_{i′≠i} (−1)^{i′+1} a_{i′1} det(A_{i′1})

         = (−1)^{i+1} b_i det(B_{i1}) + (−1)^{i+1} c_i det(C_{i1})
             + ∑_{i′≠i} (−1)^{i′+1} a_{i′1} ( det(B_{i′1}) + det(C_{i′1}) )

         = [ (−1)^{i+1} b_i det(B_{i1}) + ∑_{i′≠i} (−1)^{i′+1} a_{i′1} det(B_{i′1}) ]
             + [ (−1)^{i+1} c_i det(C_{i1}) + ∑_{i′≠i} (−1)^{i′+1} a_{i′1} det(C_{i′1}) ]

         = d(B) + d(C),

as desired. This shows that d(A) satisfies the first defining property in the
rows of A.

We still have to show that d(A) satisfies the second defining property in the
rows of A. Let B be the matrix obtained by scaling the ith row of A by a factor
of c:
   
a11 a12 a13 a11 a12 a13
A =  a21 a22 a23  B =  ca21 ca22 ca23  .
a31 a32 a33 a31 a32 a33

We wish to show that d(B) = c d(A). For i′ ≠ i, the (i′, 1)-cofactor of B
is c times the (i′, 1)-cofactor of A, by multilinearity of the determinants of
(n − 1) × (n − 1) matrices:

    (−1)^{3+1} det(B31) = (−1)^{3+1} det [ a12   a13  ]
                                         [ ca22  ca23 ]

                        = (−1)^{3+1} · c det [ a12  a13 ]  =  (−1)^{3+1} · c det(A31).
                                             [ a22  a23 ]

On the other hand, the (i, 1)-cofactors of A and B are the same:

    (−1)^{2+1} det(B21) = (−1)^{2+1} det [ a12  a13 ]  =  (−1)^{2+1} det(A21).
                                         [ a32  a33 ]

Now we compute

    d(B) = (−1)^{i+1} c a_{i1} det(B_{i1}) + ∑_{i′≠i} (−1)^{i′+1} a_{i′1} det(B_{i′1})

         = (−1)^{i+1} c a_{i1} det(A_{i1}) + ∑_{i′≠i} (−1)^{i′+1} a_{i′1} · c det(A_{i′1})

         = c [ (−1)^{i+1} a_{i1} det(A_{i1}) + ∑_{i′≠i} (−1)^{i′+1} a_{i′1} det(A_{i′1}) ]

         = c d(A),

as desired. This completes the proof that d(A) is multilinear in the rows of
A.

2. Now we show that d(A) = 0 if A has two identical rows. Suppose that rows
i1 , i2 of A are identical, with i1 < i2 :
 
        [ a11  a12  a13  a14 ]
    A = [ a21  a22  a23  a24 ] .
        [ a31  a32  a33  a34 ]
        [ a11  a12  a13  a14 ]

If i ≠ i1, i2, then the (i, 1)-cofactor of A is equal to zero, since A_{i1} is an
(n − 1) × (n − 1) matrix with identical rows:

                                           [ a12  a13  a14 ]
    (−1)^{2+1} det(A21) = (−1)^{2+1} det   [ a32  a33  a34 ]  = 0.
                                           [ a12  a13  a14 ]

The (i1, 1)-minor can be transformed into the (i2, 1)-minor using i2 − i1 − 1
row swaps:

          [ a22  a23  a24 ]      [ a22  a23  a24 ]      [ a12  a13  a14 ]
    A11 = [ a32  a33  a34 ]  →   [ a12  a13  a14 ]  →   [ a22  a23  a24 ]  =  A41 .
          [ a12  a13  a14 ]      [ a32  a33  a34 ]      [ a32  a33  a34 ]

Therefore,

    (−1)^{i1+1} det(A_{i1,1}) = (−1)^{i1+1} · (−1)^{i2−i1−1} det(A_{i2,1}) = −(−1)^{i2+1} det(A_{i2,1}).

The two remaining cofactors cancel out, so d(A) = 0, as desired.

3. It remains to show that d(I_n) = 1. The first term is the only nonzero term in
the cofactor expansion of the identity:

    d(I_n) = 1 · (−1)^{1+1} det(I_{n−1}) = 1.



This proves that det(A) = d(A), i.e., that cofactor expansion along the first column
computes the determinant.
Now we show that cofactor expansion along the jth column also computes the
determinant. By performing j − 1 column swaps, one can move the jth column
of a matrix to the first column, keeping the other columns in order. For example,
here we move the third column to the first, using two column swaps:

    [ a11  a12  a13  a14 ]      [ a11  a13  a12  a14 ]      [ a13  a11  a12  a14 ]
    [ a21  a22  a23  a24 ]  →   [ a21  a23  a22  a24 ]  →   [ a23  a21  a22  a24 ]
    [ a31  a32  a33  a34 ]      [ a31  a33  a32  a34 ]      [ a33  a31  a32  a34 ]
    [ a41  a42  a43  a44 ]      [ a41  a43  a42  a44 ]      [ a43  a41  a42  a44 ]

Let B be the matrix obtained by moving the jth column of A to the first column
in this way. Then the (i, j) minor Aij is equal to the (i, 1) minor Bi1, since deleting
the ith row and jth column of A is the same as deleting the ith row and first column of B. By construction,
the (i, j)-entry ai j of A is equal to the (i, 1)-entry bi1 of B. Since we know that we
can compute determinants by expanding along the first column, we have
    det(B) = ∑_{i=1}^{n} (−1)^{i+1} b_{i1} det(B_{i1}) = ∑_{i=1}^{n} (−1)^{i+1} a_{ij} det(A_{ij}).

Since B was obtained from A by performing j − 1 column swaps, we have


    det(A) = (−1)^{j−1} det(B) = (−1)^{j−1} ∑_{i=1}^{n} (−1)^{i+1} a_{ij} det(A_{ij})

           = ∑_{i=1}^{n} (−1)^{i+j} a_{ij} det(A_{ij}).

This proves that cofactor expansion along the jth column computes the determi-
nant of A.
By the transpose property in Section 5.1, the cofactor expansion along the ith
row of A is the same as the cofactor expansion along the ith column of AT . Again
by the transpose property, we have det(A) = det(AT ), so expanding cofactors along
a row also computes the determinant.

Note that the theorem actually gives 2n different formulas for the determinant:
one for each row and one for each column. For instance, the formula for cofactor
expansion along the first column is
    det(A) = ∑_{i=1}^{n} a_{i1} C_{i1} = a11 C11 + a21 C21 + · · · + an1 Cn1

           = a11 det(A11) − a21 det(A21) + a31 det(A31) − · · · ± an1 det(An1).

Remember, the determinant of a matrix is just a number, defined by the four defin-
ing properties in Section 5.1, so to be clear:

You obtain the same number by expanding cofactors along any row or column.

Now that we have a recursive formula for the determinant, we can finally prove
the existence theorem in Section 5.1.

Proof. Let us review what we actually proved in Section 5.1. We showed that if
det: {n × n matrices} → R is any function satisfying the four defining properties
of the determinant (or the three alternative defining properties), then it also satis-
fies all of the wonderful properties proved in that section. In particular, since det
can be computed using row reduction by this recipe in Section 5.1, it is uniquely
characterized by the defining properties. What we did not prove was the exis-
tence of such a function, since we did not know that two different row reduction
procedures would always compute the same answer.
Consider the function d defined by cofactor expansion along the first column:

    d(A) = ∑_{i=1}^{n} (−1)^{i+1} a_{i1} det(A_{i1}).

If we assume that the determinant exists for (n − 1) × (n − 1) matrices, then there


is no question that the function d exists, since we gave a formula for it. Moreover,
we showed in the proof of the theorem above that d satisfies the three alternative
defining properties of the determinant, again only assuming that the determinant
exists for (n − 1) × (n − 1) matrices. This proves the existence of the determinant
for n × n matrices!
This is an
 example of a proof by mathematical induction. We start by noticing
that det( a ) = a satisfies the four defining properties of the determinant of a
1 × 1 matrix. Then we showed that the determinant of n × n matrices exists,
assuming the determinant of (n − 1) × (n − 1) matrices exists. This implies that all
determinants exist, by the following chain of logic:

1 × 1 exists =⇒ 2 × 2 exists =⇒ 3 × 3 exists =⇒ · · · .

Example. Find the determinant of


 
2 1 3
A =  −1 2 1  .
−2 2 3

Solution. We make the somewhat arbitrary choice to expand along the first row.
The minors and cofactors are
    A11 = [ 2  1 ]        C11 = + det(A11) = (2)(3) − (1)(2) = 4
          [ 2  3 ]

    A12 = [ −1  1 ]       C12 = − det(A12) = −( (−1)(3) − (1)(−2) ) = 1
          [ −2  3 ]

    A13 = [ −1  2 ]       C13 = + det(A13) = (−1)(2) − (2)(−2) = 2.
          [ −2  2 ]

Thus,
det(A) = a11 C11 + a12 C12 + a13 C13 = (2)(4) + (1)(1) + (3)(2) = 15.
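
As a sanity check, the following sketch (assuming NumPy; expand is our own helper, not a library routine) expands cofactors along every row and every column of this matrix and gets 15 each time, as the theorem promises.

    import numpy as np

    def minor(A, i, j):
        return np.delete(np.delete(A, i, axis=0), j, axis=1)

    def expand(A, axis, k):
        # cofactor expansion along row k (axis=0) or column k (axis=1), 0-based
        n = A.shape[0]
        total = 0.0
        for m in range(n):
            i, j = (k, m) if axis == 0 else (m, k)
            total += (-1) ** (i + j) * A[i, j] * np.linalg.det(minor(A, i, j))
        return total

    A = np.array([[2., 1., 3.],
                  [-1., 2., 1.],
                  [-2., 2., 3.]])

    print([round(expand(A, 0, k), 10) for k in range(3)])  # rows:    [15.0, 15.0, 15.0]
    print([round(expand(A, 1, k), 10) for k in range(3)])  # columns: [15.0, 15.0, 15.0]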
The determinant of a 2 × 2 matrix. Let us compute (again) the determinant of a
general 2 × 2 matrix  ‹
a b
A= .
c d
The minors are

    A11 = ( d )        A12 = ( c )

    A21 = ( b )        A22 = ( a ).

The minors are all 1 × 1 matrices. As we have seen that the determinant of a
1 × 1 matrix is just the number inside of it, the cofactors are therefore
C11 = + det(A11 ) = d C12 = − det(A12 ) = −c
C21 = − det(A21 ) = −b C22 = + det(A22 ) = a
Expanding cofactors along the first column, we find that
det(A) = aC11 + cC21 = ad − bc,
which agrees with the formulas in this definition in Section 4.5 and this example
in Section 5.1.
The determinant of a 3 × 3 matrix. We can also use cofactor expansions to find
a formula for the determinant of a 3 × 3 matrix. Let us compute the determinant
of  
a11 a12 a13
A =  a21 a22 a23 
a31 a32 a33
by expanding along the first row. The minors and cofactors are:
    A11 = [ a22  a23 ]        C11 = + det [ a22  a23 ]
          [ a32  a33 ]                    [ a32  a33 ]

    A12 = [ a21  a23 ]        C12 = − det [ a21  a23 ]
          [ a31  a33 ]                    [ a31  a33 ]

    A13 = [ a21  a22 ]        C13 = + det [ a21  a22 ]
          [ a31  a32 ]                    [ a31  a32 ]

The determinant is:

    det(A) = a11 C11 + a12 C12 + a13 C13

           = a11 det [ a22  a23 ]  − a12 det [ a21  a23 ]  + a13 det [ a21  a22 ]
                     [ a32  a33 ]            [ a31  a33 ]            [ a31  a32 ]

           = a11 (a22 a33 − a23 a32) − a12 (a21 a33 − a23 a31) + a13 (a21 a32 − a22 a31)

           = a11 a22 a33 + a12 a23 a31 + a13 a21 a32 − a13 a22 a31 − a11 a23 a32 − a12 a21 a33 .

The formula for the determinant of a 3 × 3 matrix looks too complicated to


memorize outright. Fortunately, there is the following mnemonic device.

Recipe: Computing the Determinant of a 3 × 3 Matrix. To compute the


determinant of a 3 × 3 matrix, first draw a larger matrix with the first two
columns repeated on the right. Then add the products of the downward diag-
onals together, and subtract the products of the upward diagonals:

 
        [ a11  a12  a13 ]       a11 a22 a33 + a12 a23 a31 + a13 a21 a32
    det [ a21  a22  a23 ]  =
        [ a31  a32  a33 ]     − a13 a22 a31 − a11 a23 a32 − a12 a21 a33

    (Diagram: the matrix with its first two columns repeated on the right; the three
    downward diagonals give the products that are added, and the three upward
    diagonals give the products that are subtracted.)

Alternatively, it is not necessary to repeat the first two columns if you allow your
diagonals to “wrap around” the sides of a matrix, like in Pac-Man or Asteroids.
Example. Find the determinant of

        [ 1   3   5 ]
    A = [ 2   0  −1 ] .
        [ 4  −3   1 ]

Solution. We repeat the first two columns on the right, then add the products of
the downward diagonals and subtract the products of the upward diagonals:

    1   3   5 | 1   3
    2   0  −1 | 2   0
    4  −3   1 | 4  −3

        [ 1   3   5 ]     (1)(0)(1) + (3)(−1)(4) + (5)(2)(−3)
    det [ 2   0  −1 ]  =                                        =  −51.
        [ 4  −3   1 ]     − (5)(0)(4) − (1)(−1)(−3) − (3)(2)(1)
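
Here is a small sketch of the diagonals recipe (assuming NumPy; det3 is our own helper), checked against a library determinant on the example above.

    import numpy as np

    def det3(A):
        # sum of downward diagonal products minus upward diagonal products
        down = sum(A[0, j] * A[1, (j + 1) % 3] * A[2, (j + 2) % 3] for j in range(3))
        up   = sum(A[2, j] * A[1, (j + 1) % 3] * A[0, (j + 2) % 3] for j in range(3))
        return down - up

    A = np.array([[1., 3., 5.],
                  [2., 0., -1.],
                  [4., -3., 1.]])

    print(det3(A), np.linalg.det(A))   # -51.0 and -51.0 (up to rounding)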

Cofactor expansions are most useful when computing the determinant of a


matrix that has a row or column with several zero entries. Indeed, if the (i, j)
entry of A is zero, then there is no reason to compute the (i, j) cofactor. In the
following example we compute the determinant of a matrix with two zeros in the
fourth column by expanding cofactors along the fourth column.
Example. Find the determinant of

        [  2   5  −3  −2 ]
    A = [ −2  −3   2  −5 ] .
        [  1   3  −2   0 ]
        [ −1   6   4   0 ]

Solution. The fourth column has two zero entries. We expand along the fourth
column to find

                   [ −2  −3   2 ]           [  2   5  −3 ]
    det(A) = 2 det [  1   3  −2 ]  − 5 det  [  1   3  −2 ]
                   [ −1   6   4 ]           [ −1   6   4 ]

             − 0 det(don’t care) + 0 det(don’t care).
We only have to compute two cofactors. We can find these determinants using any
method we wish; for the sake of illustration, we will expand cofactors on one and
use the formula for the 3 × 3 determinant on the other.
Expanding along the first column, we compute
 
        [ −2  −3   2 ]
    det [  1   3  −2 ]
        [ −1   6   4 ]

      = −2 det [ 3  −2 ]  −  det [ −3  2 ]  −  det [ −3   2 ]
               [ 6   4 ]         [  6  4 ]         [  3  −2 ]

      = −2(24) − (−24) − 0 = −48 + 24 + 0 = −24.

Using the formula for the 3 × 3 determinant, we have


 
        [  2   5  −3 ]     (2)(3)(4) + (5)(−2)(−1) + (−3)(1)(6)
    det [  1   3  −2 ]  =                                          =  11.
        [ −1   6   4 ]     − (2)(−2)(6) − (5)(1)(4) − (−3)(3)(−1)

Thus, we find that


det(A) = 2(−24) − 5(11) = −103.
Cofactor expansions are also very useful when computing the determinant of
a matrix with unknown entries. Indeed, it is inconvenient to row reduce in this
case, because one cannot be sure whether an entry containing an unknown is a
pivot or not.
Example. Compute the determinant of this matrix containing the unknown λ:

        [ −λ    2     7     12  ]
    A = [  3   1−λ    2     −4  ] .
        [  0    1    −λ      7  ]
        [  0    0     0    2−λ  ]

Solution. First we expand cofactors along the fourth row:


  
    det(A) = 0 det(· · ·) + 0 det(· · ·) + 0 det(· · ·)

                            [ −λ    2    7 ]
              + (2 − λ) det [  3   1−λ   2 ] .
                            [  0    1   −λ ]

We only have to compute one cofactor. To do so, first we clear the (3, 3)-entry by
performing the column replacement C3 = C3 + λC2 , which does not change the
determinant:
   
        [ −λ    2     7  ]        [ −λ    2     7 + 2λ      ]
    det [  3   1−λ    2  ]  = det [  3   1−λ    2 + λ(1−λ)  ] .
        [  0    1    −λ  ]        [  0    1     0           ]

Now we expand cofactors along the third row to find


 
        [ −λ    2     7 + 2λ      ]
    det [  3   1−λ    2 + λ(1−λ)  ]  =  (−1)^{2+3} det [ −λ   7 + 2λ      ]
        [  0    1     0           ]                    [  3   2 + λ(1−λ)  ]

      = − ( −λ( 2 + λ(1−λ) ) − 3(7 + 2λ) )

      = −λ^3 + λ^2 + 8λ + 21.

Therefore, we have

    det(A) = (2 − λ)(−λ^3 + λ^2 + 8λ + 21) = λ^4 − 3λ^3 − 6λ^2 − 5λ + 42.
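
If a computer algebra system is available, this kind of symbolic determinant can be checked directly. A sketch assuming the SymPy library:

    import sympy as sp

    lam = sp.symbols('lambda')
    A = sp.Matrix([[-lam, 2, 7, 12],
                   [3, 1 - lam, 2, -4],
                   [0, 1, -lam, 7],
                   [0, 0, 0, 2 - lam]])

    print(sp.expand(A.det()))   # lambda**4 - 3*lambda**3 - 6*lambda**2 - 5*lambda + 42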



It is often most efficient to use a combination of several techniques when com-


puting the determinant of a matrix. Indeed, when expanding cofactors on a matrix,
one can compute the determinants of the cofactors in whatever way is most con-
venient. Or, one can perform row and column operations to clear some entries of
a matrix before expanding cofactors, as in the previous example.

Summary: methods for computing determinants. We have several ways of


computing determinants:

1. Special formulas for 2 × 2 and 3 × 3 matrices.


This is usually the best way to compute the determinant of a small ma-
trix, except for a 3 × 3 matrix with several zero entries.

2. Cofactor expansion.
This is usually most efficient when there is a row or column with several
zero entries, or if the matrix has unknown entries.

3. Row and column operations.


This is generally the fastest when presented with a large matrix which
does not have a row or column with a lot of zeros in it.

4. Any combination of the above.


Cofactor expansion is recursive, but one can compute the determinants
of the minors using whatever method is most convenient. Or, you can
perform row and column operations to clear some entries of a matrix
before expanding cofactors.

Remember, all methods for computing the determinant yield the same number.

5.2.2 Cramer’s Rule and Matrix Inverses


Recall from this proposition in Section 4.5 that one can compute the inverse
of a 2 × 2 matrix using the rule

        [ a  b ]                      1    [  d  −b ]
    A = [ c  d ]     =⇒     A^{-1} = ----- [ −c   a ] .
                                    det(A)

We computed the cofactors of a 2 × 2 matrix in this example; using C11 = d, C12 =


−c, C21 = −b, C22 = a, we can rewrite the above formula as

               1     [ C11  C21 ]
    A^{-1} = ------  [ C12  C22 ] .
             det(A)

It turns out that this formula generalizes to n × n matrices.



Theorem. Let A be an invertible n × n matrix, with cofactors Ci j . Then


 
               1     [ C11      C21      · · ·   Cn−1,1      Cn1    ]
    A^{-1} = ------  [ C12      C22      · · ·   Cn−1,2      Cn2    ]
             det(A)  [  ⋮        ⋮                 ⋮          ⋮     ]          (5.2.1)
                     [ C1,n−1   C2,n−1   · · ·   Cn−1,n−1   Cn,n−1  ]
                     [ C1n      C2n      · · ·   Cn−1,n      Cnn    ]

The matrix of cofactors is sometimes called the adjugate matrix of A, and is


denoted adj(A):
 
             [ C11      C21      · · ·   Cn−1,1      Cn1    ]
    adj(A) = [ C12      C22      · · ·   Cn−1,2      Cn2    ]
             [  ⋮        ⋮                 ⋮          ⋮     ]
             [ C1,n−1   C2,n−1   · · ·   Cn−1,n−1   Cn,n−1  ]
             [ C1n      C2n      · · ·   Cn−1,n      Cnn    ]

Note that the (i, j) cofactor Cij goes in the (j, i) entry of the adjugate matrix, not the
(i, j) entry: the adjugate matrix is the transpose of the cofactor matrix.

Remark. In fact, one always has A · adj(A) = adj(A) · A = det(A)I n , whether or not
A is invertible.

Example. Use the theorem to compute A−1 , where


 
1 0 1
A = 0 1 1.
1 1 0

Solution. The minors are:

 ‹  ‹  ‹
1 1 0 1 0 1
A11 = A12 = A13 =
1 0 1 0 1 1
 ‹  ‹  ‹
0 1 1 1 1 0
A21 = A22 = A23 =
1 0 1 0 1 1
 ‹  ‹  ‹
0 1 1 1 1 0
A31 = A32 = A33 =
1 1 0 1 0 1

The cofactors are:



C11 = −1 C12 = 1 C13 = −1


C21 = 1 C22 = −1 C23 = −1
C31 = −1 C32 = −1 C33 = 1

Expanding along the first row, we compute the determinant to be

det(A) = 1 · C11 + 0 · C12 + 1 · C13 = −2.

Therefore, the inverse is


   
               1     [ C11  C21  C31 ]          1   [ −1   1  −1 ]
    A^{-1} = ------  [ C12  C22  C32 ]  =  −  ----- [  1  −1  −1 ] .
             det(A)  [ C13  C23  C33 ]          2   [ −1  −1   1 ]
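
A computational sketch of the adjugate formula (assuming NumPy; minor and adjugate are our own helpers), applied to the matrix above:

    import numpy as np

    def minor(A, i, j):
        return np.delete(np.delete(A, i, axis=0), j, axis=1)

    def adjugate(A):
        n = A.shape[0]
        C = np.array([[(-1) ** (i + j) * np.linalg.det(minor(A, i, j))
                       for j in range(n)] for i in range(n)])
        return C.T   # the adjugate is the transpose of the cofactor matrix

    A = np.array([[1., 0., 1.],
                  [0., 1., 1.],
                  [1., 1., 0.]])

    print(adjugate(A) / np.linalg.det(A))   # same matrix as np.linalg.inv(A)
    print(np.linalg.inv(A))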

It is clear from the previous example that (5.2.1) is a very inefficient way of
computing the inverse of a matrix, compared to augmenting by the identity matrix
and row reducing, as in this subsection in Section 4.5. However, it has its uses.
• If a matrix has unknown entries, then it is difficult to compute its inverse
using row reduction, for the same reason it is difficult to compute the de-
terminant that way: one cannot be sure whether an entry containing an
unknown is a pivot or not.

• This formula is useful for theoretical purposes. Notice that the only denomi-
nators in (5.2.1) occur when dividing by the determinant: computing cofac-
tors only involves multiplication and addition, never division. This means,
for instance, that if the determinant is very small, then any measurement
error in the entries of the matrix is greatly magnified when computing the
inverse. In this way, (5.2.1) is useful in error analysis.
The proof of the theorem uses an interesting trick called Cramer’s Rule, which
gives a formula for the entries of the solution of an invertible matrix equation.
Cramer’s Rule. Let x = (x 1 , x 2 , . . . , x n ) be the solution of Ax = b, where A is an
invertible n × n matrix and b is a vector in Rn . Let Ai be the matrix obtained from A
by replacing the ith column by b. Then
    x_i = det(A_i) / det(A).
Proof. First suppose that A is the identity matrix, so that x = b. Then the matrix
Ai looks like this:
    [ 1  0  b1  0 ]
    [ 0  1  b2  0 ]
    [ 0  0  b3  0 ]         (pictured here: n = 4 and i = 3).
    [ 0  0  b4  1 ]

Expanding cofactors along the ith row, we see that det(Ai ) = bi , so in this case,

    x_i = b_i = det(A_i) = det(A_i) / det(A).

Now let A be a general n × n matrix. One way to solve Ax = b is to row reduce


the augmented matrix ( A | b ); the result is ( I n | x ). By the case we handled above,
it is enough to check that the quantity det(Ai )/ det(A) does not change when we
do a row operation to ( A | b ), since det(Ai )/ det(A) = x i when A = I n .

1. Doing a row replacement on ( A | b ) does the same row replacement on A


and on Ai :
 
a11 a12 a13 b1
R2 =R2 −2R3
 a21 a22 a23 b2  −−−−−−→
a31 a32 a33 b3
 
a11 a12 a13 b1
 a21 − 2a31 a22 − 2a32 a23 − 2a33 b2 − 2b3 
a31 a32 a33 b3
   
a11 a12 a13 a11 a12 a13
R2 =R2 −2R3
 a21 a22 a23  −−−−−−→  a21 − 2a31 a22 − 2a32 a23 − 2a33 
a31 a32 a33 a31 a32 a33
   
a11 b1 a13 a11 b1 a13
R2 =R2 −2R3
 a21 b2 a23  −−−−−−→  a21 − 2a31 b2 − 2b3 a23 − 2a33  .
a31 b3 a33 a31 b3 a33

In particular, det(A) and det(Ai ) are unchanged, so det(A)/ det(Ai ) is un-


changed.

2. Scaling a row of ( A | b ) by a factor of c scales the same row of A and of Ai


by the same factor:
   
a11 a12 a13 b1 a11 a12 a13 b1
R2 =cR2
 a21 a22 a23 b2  −−−− →  ca21 ca22 ca23 c b2 
a31 a32 a33 b3 a31 a32 a33 b3
   
a11 a12 a13 a11 a12 a13
R2 =cR2
 a21 a22 a23  −−−− → ca21
 ca22 ca23 
a31 a32 a33 a31 a32 a33
   
a11 b1 a13 a11 b1 a13
R2 =cR2
 a21 b2 a23  −−−− → ca21
 c b2 ca23  .
a31 b3 a33 a31 b3 a33

In particular, det(A) and det(Ai ) are both scaled by a factor of c, so det(Ai )/ det(A)
is unchanged.

3. Swapping two rows of ( A | b ) swaps the same rows of A and of Ai :


   
a11 a12 a13 b1 a21 a22 a23 b2
R1 ←→R2
 a21 a22 a23 b2 −−−−→ a11
  a12 a13 b1 
a31 a32 a33 b3 a31 a32 a33 b3
   
a11 a12 a13 a21 a22 a23
R1 ←→R2
 a21 a22 a23  −−−−→  a11 a12 a13 
a31 a32 a33 a31 a32 a33
   
a11 b1 a13 a21 b2 a23
R1 ←→R2
 a21 b2 a23  −−−−→  a11 b1 a13  .
a31 b3 a33 a31 b3 a33

In particular, det(A) and det(Ai ) are both negated, so det(Ai )/ det(A) is un-
changed.

Example. Compute the solution of Ax = b using Cramer’s rule, where


 ‹  ‹
    A = [ a  b ]         b = [ 1 ]
        [ c  d ]             [ 2 ] .

Here the coefficients of A are unknown, but A may be assumed invertible.

Solution. First we compute the determinants of the matrices obtained by replac-
ing the columns of A with b:

    A1 = [ 1  b ]        det(A1) = d − 2b
         [ 2  d ]

    A2 = [ a  1 ]        det(A2) = 2a − c.
         [ c  2 ]

Now we compute

    det(A1)/det(A) = (d − 2b)/(ad − bc)          det(A2)/det(A) = (2a − c)/(ad − bc).

It follows that

              1      [ d − 2b ]
    x  =  ---------  [ 2a − c ] .
           ad − bc
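
For a numeric system, Cramer's rule is easy to implement directly. A sketch assuming NumPy (cramer is our own helper, and the 2 × 2 system used here is just an illustration):

    import numpy as np

    def cramer(A, b):
        n = A.shape[0]
        detA = np.linalg.det(A)
        x = np.empty(n)
        for i in range(n):
            Ai = A.copy()
            Ai[:, i] = b          # replace the ith column of A by b
            x[i] = np.linalg.det(Ai) / detA
        return x

    A = np.array([[3., 1.], [4., 2.]])
    b = np.array([1., 2.])
    print(cramer(A, b))           # same answer as np.linalg.solve(A, b)
    print(np.linalg.solve(A, b))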

Now we use Cramer’s rule to prove the first theorem of this subsection.

Proof. The jth column of A−1 is x j = A−1 e j . This vector is the solution of the matrix
equation 
Ax = A A−1 e j = I n e j = e j .

By Cramer’s rule, the ith entry of x j is det(Ai )/ det(A), where Ai is the matrix
obtained from A by replacing the ith column of A by e j :
         [ a11  a12  0  a14 ]
    Ai = [ a21  a22  1  a24 ]         (pictured here: i = 3, j = 2).
         [ a31  a32  0  a34 ]
         [ a41  a42  0  a44 ]

Expanding cofactors along the ith column, we see the determinant of Ai is exactly
the (j, i)-cofactor Cji of A. Therefore, the jth column of A−1 is

                1     [ C_{j1} ]
    x_j  =  --------  [ C_{j2} ]
             det(A)   [   ⋮    ] ,
                      [ C_{jn} ]
and thus
 
    A^{-1} = [ x1  x2  · · ·  xn ]

               1     [ C11      C21      · · ·   Cn−1,1      Cn1    ]
           = ------  [ C12      C22      · · ·   Cn−1,2      Cn2    ]
             det(A)  [  ⋮        ⋮                 ⋮          ⋮     ]
                     [ C1,n−1   C2,n−1   · · ·   Cn−1,n−1   Cn,n−1  ]
                     [ C1n      C2n      · · ·   Cn−1,n      Cnn    ] .

5.3 Determinants and Volumes

Objectives
1. Understand the relationship between the determinant of a matrix and the
volume of a parallelepiped.
2. Learn to use determinants to compute volumes of parallelograms and trian-
gles.
3. Learn to use determinants to compute the volume of some curvy shapes like
ellipses.
4. Pictures: parallelepiped, the image of a curvy shape under a linear transfor-
mation.
5. Theorem: determinants and volumes.
6. Vocabulary word: parallelepiped.

In this section we give a geometric interpretation of determinants, in terms


of volumes. This will shed light on the reason behind three of the four defining
properties of the determinant. It is also a crucial ingredient in the change-of-
variables formula in multivariable calculus.

5.3.1 Parallelograms and Parallelepipeds


The determinant computes the volume of the following kind of geometric object.
Definition. The parallelepiped determined by n vectors v1, v2, . . . , vn in Rn is the
subset

    P = { a1 v1 + a2 v2 + · · · + an vn  :  0 ≤ a1, a2, . . . , an ≤ 1 }.
In other words, a parallelepiped is the set of all linear combinations of n vectors
with coefficients in [0, 1]. We can draw parallelepipeds using the parallelogram
law for vector addition.
Example (The unit cube). The parallelepiped determined by the standard coordi-
nate vectors e1 , e2 , . . . , en is the unit n-dimensional cube.

[Figure: the unit square in R2 determined by e1, e2, and the unit cube in R3 determined by e1, e2, e3.]

Example (Parallelograms). When n = 2, a parallelepiped is just a parallelogram


in R2 . Note that the edges come in parallel pairs.

[Figure: a parallelogram P with adjacent sides v1 and v2.]

Example. When n = 3, a parallelepiped is a kind of a skewed cube. Note that the


faces come in parallel pairs.

[Figure: a parallelepiped in R3 with adjacent edges v1, v2, v3; its faces come in parallel pairs.]
222 CHAPTER 5. DETERMINANTS

When does a parallelepiped have zero volume? This can happen only if the
parallelepiped is flat, i.e., it is squashed into a lower dimension.

[Figure: flat (squashed) parallelepipeds in R2 and R3; these have zero volume.]

This means exactly that {v1 , v2 , . . . , vn } is linearly dependent, which by this corol-
lary in Section 5.1 means that the matrix with rows v1 , v2 , . . . , vn has determinant
zero. To summarize:

Key Observation. The parallelepiped defined by v1 , v2 , . . . , vn has zero volume if


and only if the matrix with rows v1 , v2 , . . . , vn has zero determinant.

5.3.2 Determinants and Volumes


The key observation above is only the beginning of the story: the volume of a
parallelepiped is always a determinant.

Theorem (Determinants and volumes). Let v1 , v2 , . . . , vn be vectors in Rn , let P be


the parallelepiped determined by these vectors, and let A be the matrix with rows
v1 , v2 , . . . , vn . Then the absolute value of the determinant of A is the volume of P:

| det(A)| = vol(P).

Proof. Since the four defining properties characterize the determinant, they also
characterize the absolute value of the determinant. Explicitly, | det | is a function
on square matrices which satisfies these properties:

1. Doing a row replacement on A does not change | det(A)|.

2. Scaling a row of A by a scalar c multiplies | det(A)| by |c|.

3. Swapping two rows of a matrix does not change | det(A)|.

4. The determinant of the identity matrix I n is equal to 1.



The absolute value of the determinant is the only such function: indeed, by this
recipe in Section 5.1, if you do some number of row operations on A to obtain a
matrix B in row echelon form, then
    | det(A)| = (product of the diagonal entries of B) / (product of the scaling factors used).
For a square matrix A, we abuse notation and let vol(A) denote the volume
of the parallelepiped determined by the rows of A. Then we can regard vol as a
function from the set of square matrices to the real numbers. We will show that
vol also satisfies the above four properties.
1. For simplicity, we consider a row replacement of the form R n = R n + cR i .
The volume of a paralellepiped is the volume of its base, times its height:
here the “base” is the paralellepiped determined by v1 , v2 , . . . , vn−1 , and the
“height” is the perpendicular distance of vn from the base.

[Figure: for n = 2 and n = 3, the base determined by v1, . . . , vn−1 and the height of vn above it.]

Translating vn by a multiple of vi moves vn in a direction parallel to the base.


This changes neither the base nor the height! Thus, vol(A) is unchanged by
row replacements.

[Figure: replacing v2 by v2 − .5v1 (or v3 by v3 + .5v1) slides the top vector parallel to the base, leaving the base and the height unchanged.]

2. For simplicity, we consider a row scale of the form R n = cR n . This scales the
length of vn by a factor of |c|, which also scales the perpendicular distance
of vn from the base by a factor of |c|. Thus, vol(A) is scaled by |c|.

[Figure: scaling v2 (or v3) by 3/4 scales the height, and hence the volume, by 3/4.]

3. Swapping two rows of A just reorders the vectors v1 , v2 , . . . , vn , hence has


no effect on the parallelepiped determined by those vectors. Thus, vol(A) is
unchanged by row swaps.

[Figure: swapping two of the vectors does not change the parallelepiped they determine.]

4. The rows of the identity matrix I n are the standard coordinate vectors e1 , e2 , . . . , en .
The associated paralellepiped is the unit cube, which has volume 1. Thus,
vol(I n ) = 1.

Since | det | is the only function satisfying these properties, we have

vol(P) = vol(A) = | det(A)|.

This completes the proof.

Since det(A) = det(AT ) by the transpose property, the absolute value of det(A)
is also equal to the volume of the parallelepiped determined by the columns of A.

Example (Length). A 1 × 1 matrix A is just a number a . In this case, the paral-
lelepiped P determined by its one row is just the interval [0, a] (or [a, 0] if a < 0).
The “volume” of a region in R1 = R is just its length, so it is clear in this case that
vol(P) = |a|.

[Figure: the interval P from 0 to a on the number line, with vol(P) = |a|.]

Example (Area). When A is a 2 × 2 matrix, its rows determine a parallelogram in


R2 . The “volume” of a region in R2 is its area, so we obtain a formula for the area
of a parallelogram: it is the determinant of the matrix whose rows are the vectors
forming two adjacent sides of the parallelogram.

 ‹
 ‹
a a b
area = det = |ad − bc|
b c d

 ‹
c
d

It is perhaps surprising that it is possible to compute the area of a parallelogram


without trigonometry. It is a fun geometry problem to prove this formula by hand.
[Hint: first think about the case when the first row of A lies on the x-axis.]

Example. Find the area of the parallelogram with sides (1, 3) and (2, −3).

Solution. The area is


 ‹
    | det [ 1   3 ] |  =  | −3 − 6 |  =  9.
          [ 2  −3 ]

Example. Find the area of the parallelogram in the picture.

Solution. We choose two adjacent sides to be the rows of a matrix. We choose


the top two:
[Figure: a parallelogram whose top two sides are the vectors (2, −1) and (−1, −4).]

Note that we do not need to know where the origin is in the picture: vectors
are determined by their length and direction, not where they start. The area is

    | det [ −1  −4 ] |  =  | 1 + 8 |  =  9.
          [  2  −1 ]

Example (Area of a triangle). Find the area of the triangle with vertices (−1, −2), (2, −1), (1, 3).

Solution. Doubling a triangle makes a parallelogram. We choose two of its sides
to be the rows of a matrix.

[Figure: the triangle doubled to a parallelogram with adjacent sides (3, 1) and (2, 5).]

The area of the parallelogram is

    | det [ 2  5 ] |  =  | 2 − 15 |  =  13,
          [ 3  1 ]

so the area of the triangle is 13/2.
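
The area computations in the last two examples can be reproduced with a few lines of code. A sketch assuming NumPy:

    import numpy as np

    parallelogram = np.array([[1., 3.],
                              [2., -3.]])
    print(abs(np.linalg.det(parallelogram)))           # 9.0

    # triangle with vertices (-1,-2), (2,-1), (1,3): use two edge vectors from one vertex
    v1 = np.array([2., -1.]) - np.array([-1., -2.])    # (3, 1)
    v2 = np.array([1., 3.]) - np.array([-1., -2.])     # (2, 5)
    print(abs(np.linalg.det(np.array([v1, v2]))) / 2)  # 6.5 = 13/2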

You might be wondering: if the absolute value of the determinant is a volume,


what is the geometric meaning of the determinant without the absolute value? The
next remark explains that we can think of the determinant as a signed volume. If
you have taken an integral calculus course, you probably computed negative areas
under curves; the idea here is similar.

Remark (Signed volumes). The theorem on determinants and volumes tells us


that the absolute value of the determinant is the volume of a parallelepiped. This
raises the question of whether the sign of the determinant has any geometric mean-
ing. 
A 1 × 1 matrix A is just a number a . In this case, the parallelepiped P de-
termined by its one row is just the interval [0, a] if a ≥ 0, and it is [a, 0] if a < 0.
In this case, the sign of the determinant determines whether the interval is to the
left or the right of the origin.
For a 2 × 2 matrix with rows v1 , v2 , the sign of the determinant determines
whether v2 is counterclockwise or clockwise from v1 . That is, if the counterclock-
wise angle from v1 to v2 is less than 180◦ , then the determinant is positive; other-
wise it is negative (or zero).

[Figure: the determinant of the matrix with rows v1, v2 is positive when v2 is counterclockwise from v1, and negative when v2 is clockwise from v1.]

For example, if v1 = (a, b), then the counterclockwise rotation of v1 by 90° is
v2 = (−b, a) by this example in Section 4.3, and

    det [  a  b ]  =  a^2 + b^2  >  0.
        [ −b  a ]

On the other hand, the clockwise rotation of v1 by 90° is (b, −a), and

    det [ a   b ]  =  −a^2 − b^2  <  0.
        [ b  −a ]

For a 3×3 matrix with rows v1 , v2 , v3 , the right-hand rule determines the sign of
the determinant. If you point the index finger of your right hand in the direction
of v1 and your middle finger in the direction of v2 , then the determinant is positive
if your thumb points roughly in the direction of v3 , and it is negative otherwise.

[Figure: a right-handed triple v1, v2, v3 gives a positive determinant; a left-handed triple gives a negative determinant.]

In higher dimensions, the notion of signed volume is still important, but it is


usually defined in terms of the sign of a determinant.

5.3.3 Volumes of Regions


Let A be an n × n matrix with columns v1 , v2 , . . . , vn , and let T : Rn → Rn be the
associated matrix transformation T (x) = Ax. Then T (e1 ) = v1 and T (e2 ) = v2 , so
T takes the unit cube C to the parallelepiped P determined by v1 , v2 , . . . , vn :

T
 
e2 C | | P
 v1 v2  v2
| |
e1 v1

Since the unit cube has volume 1 and its image has volume | det(A)|, the trans-
formation T scaled the volume of the cube by a factor of | det(A)|. To rephrase:

If A is an n × n matrix with corresponding matrix transformation T : Rn → Rn ,


and if C is the unit cube in Rn , then the volume of T (C) is | det(A)|.

The notation T (S) means the image of the region S under the transformation
T . In set builder notation, this is the subset

    T(S) = { T(x) | x in S }.

In fact, T scales the volume of any region in Rn by the same factor, even for
curvy regions.

Theorem. Let A be an n × n matrix, and let T : Rn → Rn be the associated matrix


transformation T (x) = Ax. If S is any region in Rn , then

vol(T (S)) = | det(A)| · vol(S).

Proof. Let C be the unit cube, let v1, v2, . . . , vn be the columns of A, and let P
be the parallelepiped determined by these vectors, so T(C) = P and vol(P) =
| det(A)|. For ε > 0 we let εC be the cube with side lengths ε, i.e., the parallelepiped
determined by the vectors εe1, εe2, . . . , εen, and we define εP similarly. By the
second defining property, T takes εC to εP. The volume of εC is ε^n (we scaled each
of the n standard vectors by a factor of ε) and the volume of εP is ε^n | det(A)| (for
the same reason), so we have shown that T scales the volume of εC by | det(A)|.

[Figure: T takes the small cube εC, with vol(εC) = ε^n, to the small parallelepiped εP, with vol(εP) = ε^n | det(A)|.]

By the first defining property, the image of a translate of εC is a translate of εP:

    T(x + εC) = T(x) + εT(C) = T(x) + εP.

Since a translation does not change volumes, this proves that T scales the volume
of a translate of εC by | det(A)|.

At this point, we need to use techniques from multivariable calculus, so we
only give an idea of the rest of the proof. Any region S can be approximated
by a collection of very small cubes of the form x + εC. The image T(S) is then
approximated by the image of this collection of cubes, which is a collection of very
small parallelepipeds of the form T(x) + εP.

[Figure: a region S covered by small cubes x + εC, and its image T(S) covered by the corresponding small parallelepipeds T(x) + εP.]

The volume of S is closely approximated by the sum of the volumes of the
cubes; in fact, as ε goes to zero, the limit of this sum is precisely vol(S). Likewise,
the volume of T(S) is equal to the sum of the volumes of the parallelepipeds,
taken in the limit as ε → 0. The key point is that the volume of each cube is scaled by
| det(A)|. Therefore, the sum of the volumes of the parallelepipeds is | det(A)| times
the sum of the volumes of the cubes. This proves that vol(T(S)) = | det(A)| vol(S).

Example. Let S be a half-circle of radius 1, let

    A = [ 1  2 ]
        [ 2  1 ] ,

and define T : R2 → R2 by T(x) = Ax. What is the area of T(S)?

[Figure: the half-disk S and its image T(S) under the transformation with matrix A.]

Solution. The area of the unit circle is π, so the area of S is π/2. The transfor-
mation T scales areas by a factor of | det(A)| = |1 − 4| = 3, so

    vol(T(S)) = 3 vol(S) = 3π/2.
Example (Area of an ellipse). Find the area of the interior E of the ellipse defined
by the equation

    ( (2x − y)/2 )^2  +  ( (y + 3x)/3 )^2  =  1.

Solution. This ellipse is obtained from the unit circle X^2 + Y^2 = 1 by the linear
change of coordinates

    X = (2x − y)/2          Y = (y + 3x)/3 .

In other words, if we define a linear transformation T : R2 → R2 by

    T(x, y)  =  ( (2x − y)/2 ,  (y + 3x)/3 ),

then T(x, y) lies on the unit circle C whenever (x, y) lies on E.

[Figure: T, with standard matrix [ 1  −1/2 ; 1  1/3 ], takes the ellipse E to the unit circle C.]

We compute the standard matrix A for T by evaluating on the standard coor-
dinate vectors:

    T(1, 0) = (1, 1)        T(0, 1) = (−1/2, 1/3)        =⇒        A = [ 1  −1/2 ]
                                                                       [ 1   1/3 ] .

Therefore, T scales areas by a factor of | det(A)| = | 1/3 + 1/2 | = 5/6. The area of the unit
circle is π, so

    π = vol(C) = vol(T(E)) = | det(A)| · vol(E) = (5/6) vol(E),

and thus the area of E is 6π/5.
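
One can double-check this area without determinants at all, for instance with a crude Monte Carlo estimate. A sketch assuming NumPy (the bounding box below is a rough choice made for this sketch, not part of the example):

    import numpy as np

    rng = np.random.default_rng(0)
    N = 400_000
    # the ellipse easily fits inside the box [-2, 2] x [-3, 3], which has area 24
    x = rng.uniform(-2, 2, N)
    y = rng.uniform(-3, 3, N)
    inside = ((2 * x - y) / 2) ** 2 + ((y + 3 * x) / 3) ** 2 <= 1
    print(24 * inside.mean())   # approximately 6*pi/5 = 3.7699...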

Remark (Multiplicativity of | det |). The above theorem also gives a geometric rea-
son for multiplicativity of the (absolute value of the) determinant. Indeed, let A
and B be n × n matrices, and let T, U : Rn → Rn be the corresponding matrix trans-
formations. If C is the unit cube, then
 
vol T ◦ U(C) = vol T (U(C)) = | det(A)| vol(U(C))
= | det(A)| · | det(B)| vol(C)
= | det(A)| · | det(B)|.

On the other hand, the matrix for the composition T ◦ U is the product AB, so

vol T ◦ U(C) = | det(AB)| vol(C) = | det(AB)|.

Thus | det(AB)| = | det(A)| · | det(B)|.


Chapter 6

Eigenvalues and Eigenvectors

Primary Goal. Solve the matrix equation Ax = λx.


This chapter constitutes the core of any first course on linear algebra: eigen-
values and eigenvectors play a crucial role in most real-world applications of the
subject.
Example. In a population of rabbits,
1. half of the newborn rabbits survive their first year;

2. of those, half survive their second year;

3. the maximum life span is three years;

4. rabbits produce 0, 6, 8 baby rabbits in their first, second, and third years,
respectively.
What is the asymptotic behavior of this system? What will the rabbit population
look like in 100 years?

Use this link to view the online demo

Left: the population of rabbits in a given year. Right: the proportions of rabbits in
that year. Choose any values you like for the starting population, and click “Advance
1 year” several times. What do you notice about the long-term behavior of the ratios?
This phenomenon turns out to be due to eigenvectors.
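
If you prefer to experiment offline, the demo can be approximated with a few lines of code. The matrix below encodes the rules above (its rows correspond to newborns, one-year-olds, and two-year-olds), and it is the same matrix that reappears in Section 6.1; the sketch assumes NumPy.

    import numpy as np

    A = np.array([[0., 6., 8.],
                  [0.5, 0., 0.],
                  [0., 0.5, 0.]])

    v = np.array([10., 10., 10.])   # any starting population will do
    for year in range(20):
        v = A @ v
    print(v / v.sum())              # approaches (16, 4, 1)/21 = [0.762, 0.190, 0.048]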

In Section 6.1, we will define eigenvalues and eigenvectors, and show how to
compute the latter; in Section 6.2 we will learn to compute the former. In Sec-
tion 6.3 we introduce the notion of similar matrices, and demonstrate that similar
matrices do indeed behave similarly. In Section 6.4 we study matrices that are
similar to diagonal matrices and in Section 6.5 we study matrices that are simi-
lar to rotation-scaling matrices, thus gaining a solid geometric understanding of
large classes of matrices. Finally, we spend Section 6.6 presenting a common kind
of application of eigenvalues and eigenvectors to real-world problems, including
searching the Internet using Google’s PageRank algorithm.


6.1 Eigenvalues and Eigenvectors

Objectives

1. Learn the definition of eigenvector and eigenvalue.

2. Learn to find eigenvectors and eigenvalues geometrically.

3. Learn to decide if a number is an eigenvalue of a matrix, and if so, how to


find an associated eigenvector.

4. Recipe: find a basis for the λ-eigenspace.

5. Pictures: whether or not a vector is an eigenvector, eigenvectors of standard


matrix transformations.

6. Theorem: the expanded invertible matrix theorem.

7. Vocabulary word: eigenspace.

8. Essential vocabulary words: eigenvector, eigenvalue.

In this section, we define eigenvalues and eigenvectors. These form the most
important facet of the structure theory of square matrices. As such, eigenvalues
and eigenvectors tend to play a key role in the real-life applications of linear alge-
bra.

6.1.1 Eigenvalues and Eigenvectors


Here is the most important definition in this text.

Essential Definition. Let A be an n × n matrix.

1. An eigenvector of A is a nonzero vector v in Rn such that Av = λv, for some


scalar λ.

2. An eigenvalue of A is a scalar λ such that the equation Av = λv has a non-


trivial solution.

If Av = λv for v ≠ 0, we say that λ is the eigenvalue for v, and that v is an


eigenvector for λ.

The German prefix “eigen” roughly translates to “self” or “own”. An eigenvec-


tor of A is a vector that is taken to a multiple of itself, which partially explains the
terminology.

Note. Eigenvalues and eigenvectors are only for square matrices.



Eigenvectors are by definition nonzero. Eigenvalues may be equal to zero.

We cannot consider the zero vector to be an eigenvector: since A0 = 0 = λ0


for every scalar λ, the associated eigenvalue would be undefined.

Example (Verifying eigenvectors). Consider the matrix


 ‹  ‹  ‹
2 2 1 2
A= and vectors v = w= .
−4 8 1 1

Which are eigenvectors? What are their eigenvalues?


Solution. We have
 ‹ ‹  ‹
2 2 1 4
Av = = = 4v.
−4 8 1 4

Hence, v is an eigenvector of A, with eigenvalue λ = 4. On the other hand,


 ‹ ‹  ‹
2 2 2 6
Aw = = .
−4 8 1 0

which is not a scalar multiple of w. Hence, w is not an eigenvector of A.

[Figure: v and Av lie on the same line through the origin, so v is an eigenvector; w and Aw do not, so w is not an eigenvector.]
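
These checks are easy to automate. A sketch assuming NumPy (note that np.linalg.eig computes all eigenvalues at once, a topic taken up in Section 6.2):

    import numpy as np

    A = np.array([[2., 2.],
                  [-4., 8.]])
    v = np.array([1., 1.])
    w = np.array([2., 1.])

    print(A @ v)                 # [4. 4.] = 4*v, so v is an eigenvector with eigenvalue 4
    print(A @ w)                 # [6. 0.], not a scalar multiple of w
    print(np.linalg.eig(A)[0])   # all eigenvalues of A, here 4 and 6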

Example (Verifying eigenvectors). Consider the matrix


     
        [  0    6   8 ]                       [ 16 ]         [ 2 ]
    A = [ 1/2   0   0 ]     and vectors   v = [  4 ]     w = [ 2 ] .
        [  0   1/2  0 ]                       [  1 ]         [ 2 ]

Which are eigenvectors? What are their eigenvalues?


Solution. We have
    
         [  0    6   8 ] [ 16 ]   [ 32 ]
    Av = [ 1/2   0   0 ] [  4 ] = [  8 ]  =  2v.
         [  0   1/2  0 ] [  1 ]   [  2 ]

Hence, v is an eigenvector of A, with eigenvalue λ = 2. On the other hand,


    
         [  0    6   8 ] [ 2 ]   [ 28 ]
    Aw = [ 1/2   0   0 ] [ 2 ] = [  1 ] ,
         [  0   1/2  0 ] [ 2 ]   [  1 ]

which is not a scalar multiple of w. Hence, w is not an eigenvector of A.


Example (An eigenvector with eigenvalue zero). Let
 ‹  ‹
1 3 −3
A= v= .
2 6 1

Is v an eigenvector of A? If so, what is its eigenvalue?


Solution. The product is
 ‹ ‹  ‹
1 3 −3 0
Av = = = 0v.
2 6 1 0

Hence, v is an eigenvector with eigenvalue zero.


As noted above, an eigenvalue is allowed to be zero, but an eigenvector is not.
To say that Av = λv means that Av and λv are collinear with the origin. So,
an eigenvector of A is a nonzero vector v such that Av and v lie on the same line
through the origin. In this case, Av is a scalar multiple of v; the eigenvalue is the
scaling factor.

Aw w Av
v v is an eigenvector

w is not an eigenvector

For matrices that arise as the standard matrix of a linear transformation, it is


often best to draw a picture, then find the eigenvectors and eigenvalues geometri-
cally by studying which vectors are not moved off of their line. For a transformation
that is defined geometrically, it is not necessary even to compute its matrix to find
the eigenvectors and eigenvalues.
Example (Reflection). Let T : R2 → R2 be the linear transformation that reflects
over the line L defined by y = −x, and let A be the matrix for T . Find the eigen-
values and eigenvectors of A without doing any computations.
Solution. This transformation is defined geometrically, so we draw a picture.

Au

u L

The vector u is not an eigenvector, because Au is not collinear with u and the
origin.

Az L

The vector z is not an eigenvector either.

L
Av

The vector v is an eigenvector because Av is collinear with v and the origin. The
vector Av has the same length as v, but the opposite direction, so the associated
eigenvalue is −1.

w
Aw

L

The vector w is an eigenvector because Aw is collinear with w and the origin:


indeed, Aw is equal to w! This means that w is an eigenvector with eigenvalue 1.
It appears that all eigenvectors lie either on L, or on the line perpendicular to
L. The vectors on L have eigenvalue 1, and the vectors perpendicular to L have
eigenvalue −1.

Use this link to view the online demo

An eigenvector of A is a vector x such that Ax is collinear with x and the origin. Click
and drag the head of x to convince yourself that all such vectors lie either on L, or on
the line perpendicular to L.

Example (Projection). Let T : R2 → R2 be the linear transformation that projects a


vector vertically onto the x-axis, and let A be the matrix for T . Find the eigenvalues
and eigenvectors of A without doing any computations.
Solution. This transformation is defined geometrically, so we draw a picture.

Au

The vector u is not an eigenvector, because Au is not collinear with u and the
origin.

Az

The vector z is not an eigenvector either.



Av

The vector v is an eigenvector. Indeed, Av is the zero vector, which is collinear


with v and the origin; since Av = 0v, the associated eigenvalue is 0.

w Aw

The vector w is an eigenvector because Aw is collinear with w and the origin:


indeed, Aw is equal to w! This means that w is an eigenvector with eigenvalue 1.
It appears that all eigenvectors lie on the x-axis or the y-axis. The vectors on
the x-axis have eigenvalue 1, and the vectors on the y-axis have eigenvalue 0.

Use this link to view the online demo

An eigenvector of A is a vector x such that Ax is collinear with x and the origin. Click
and drag the head of x to convince yourself that all such vectors lie on the coordinate
axes.

Example (Identity). Find all eigenvalues and eigenvectors of the identity matrix
In.
Solution. The identity matrix has the property that I n v = v for all vectors v in
Rn . We can write this as I n v = 1 · v, so every nonzero vector is an eigenvector with
eigenvalue 1.

Use this link to view the online demo

Every nonzero vector is an eigenvector of the identity matrix.



Example (Dilation). Let T : R2 → R2 be the linear transformation that dilates by a


factor of 1.5, and let A be the matrix for T . Find the eigenvalues and eigenvectors
of A without doing any computations.
Solution. We have
Av = T (v) = 1.5v
2
for every vector v in R . Therefore, by definition every nonzero vector is an eigen-
vector with eigenvalue 1.5.

Av
v

Use this link to view the online demo

Every nonzero vector is an eigenvector of a dilation matrix.

Example (Shear). Let

    A = [ 1  1 ]
        [ 0  1 ]
and let T (x) = Ax, so T is a shear in the x-direction. Find the eigenvalues and
eigenvectors of A without doing any computations.
Solution. In equations, we have

    A [ x ]  =  [ 1  1 ] [ x ]  =  [ x + y ]
      [ y ]     [ 0  1 ] [ y ]     [   y   ] .
This tells us that a shear takes a vector and adds its y-coordinate to its x-coordinate.
Since the x-coordinate changes but not the y-coordinate, this tells us that any vec-
tor v with nonzero y-coordinate cannot be collinear with Av and the origin.

v
Av

On the other hand, any vector v on the x-axis has zero y-coordinate, so it is
not moved by A. Hence v is an eigenvector with eigenvalue 1.

w Aw

Accordingly, all eigenvectors of A lie on the x-axis, and have eigenvalue 1.

Use this link to view the online demo

All eigenvectors of a shear lie on the x-axis. Click and drag the head of x to find the
eigenvectors.

Example (Rotation). Let T : R2 → R2 be the linear transformation that rotates


counterclockwise by 90◦ , and let A be the matrix for T . Find the eigenvalues and
eigenvectors of A without doing any computations.
Solution. If v is any nonzero vector, then Av is rotated by an angle of 90◦ from
v. Therefore, Av is not on the same line as v, so v is not an eigenvector. And of
course, the zero vector is never an eigenvector.

v
Av

Therefore, this matrix has no eigenvectors, and hence no eigenvalues.

Use this link to view the online demo

This rotation matrix has no eigenvectors. Click and drag the head of x to find one.

Here we mention one basic fact about eigenvectors.


Fact (Eigenvectors with distinct eigenvalues are linearly independent). Let v1 , v2 , . . . , vk
be eigenvectors of a matrix A, and suppose that the corresponding eigenvalues λ1 , λ2 , . . . , λk
are distinct (all different from each other). Then {v1 , v2 , . . . , vk } is linearly indepen-
dent.
Proof. Suppose that {v1 , v2 , . . . , vk } were linearly dependent. According to the in-
creasing span criterion in Section 3.5, this means that for some j, the vector v j is in
Span{v1 , v2 , . . . , v j−1 }. If we choose the first such j, then {v1 , v2 , . . . , v j−1 } is linearly
independent. Note that j > 1 since v1 ≠ 0.
Since v j is in Span{v1 , v2 , . . . , v j−1 }, we can write
v j = c1 v1 + c2 v2 + · · · + c j−1 v j−1
for some scalars c1 , c2 , . . . , c j−1 . Multiplying both sides of the above equation by A
gives

λ j v j = Av j = A c1 v1 + c2 v2 + · · · + c j−1 v j−1
= c1 Av1 + c2 Av2 + · · · + c j−1 Av j−1
= c1 λ1 v1 + c2 λ2 v2 + · · · + c j−1 λ j−1 v j−1 .
Subtracting λ j times the first equation from the second gives
0 = λ j v j − λ j v j = c1 (λ1 − λ j )v1 + c2 (λ2 − λ j )v2 + · · · + c j−1 (λ j−1 − λ j )v j−1 .
Since v j ≠ 0, not all of the coefficients c i are zero; and since λi ≠ λ j for i < j, this is a nontrivial equation of linear dependence among v1 , v2 , . . . , v j−1 ,
which is impossible because those vectors are linearly independent. Therefore,
{v1 , v2 , . . . , vk } must have been linearly independent after all.
When k = 2, this says that if v1 , v2 are eigenvectors with eigenvalues λ1 ≠ λ2 ,
then v2 is not a multiple of v1 . In fact, any nonzero multiple cv1 of v1 is also an
eigenvector with eigenvalue λ1 :
A(c v1 ) = cAv1 = c(λ1 v1 ) = λ1 (cv1 ).
As a consequence of the above fact, we see that:

An n × n matrix A has at most n eigenvalues.

6.1.2 Eigenspaces
Let A be an n × n matrix, and let λ be a scalar. The eigenvectors with eigenvalue
λ, if any, are the nonzero solutions of the equation Av = λv. We can rewrite this
equation as follows:
Av = λv
⇐⇒ Av − λv = 0
⇐⇒ Av − λI n v = 0
⇐⇒ (A − λI n )v = 0.

Therefore, the eigenvectors of A with eigenvalue λ, if any, are the nontrivial solu-
tions of the matrix equation (A−λI n )v = 0, i.e., the nonzero vectors in Nul(A−λI n ).
If this equation has no nontrivial solutions, then λ is not an eigenvalue of A.
The above observation is important because it says that finding the eigenvec-
tors for a given eigenvalue means solving a homogeneous system of equations. For
instance, if  
7 1 3
A =  −3 2 −3  ,
−3 −2 −1
then an eigenvector with eigenvalue λ is a nontrivial solution of the matrix equa-
tion     
7 1 3 x x
 −3 2 −3   y  = λ  y  .
−3 −2 −1 z z
This translates to the system of equations

     7x +  y + 3z = λx                  (7 − λ)x +        y +         3z = 0
    −3x + 2y − 3z = λy      −−−→             −3x + (2 − λ)y −         3z = 0
    −3x − 2y −  z = λz                       −3x −        2y + (−1 − λ)z = 0.
This is the same as the homogeneous matrix equation
  
7−λ 1 3 x
 −3 2 − λ −3   y  = 0,
−3 −2 −1 − λ z

i.e., (A − λI3 )v = 0.

Definition. Let A be an n × n matrix, and let λ be an eigenvalue of A. The λ-


eigenspace of A is the solution set of (A−λI n )v = 0, i.e., the subspace Nul(A−λI n ).

The λ-eigenspace is a subspace because it is the null space of a matrix, namely,


the matrix A − λI n . This subspace consists of the zero vector and all eigenvectors
of A with eigenvalue λ.

Note. Since a nonzero subspace is infinite, every eigenvalue has infinitely many
eigenvectors. (For example, multiplying an eigenvector by a nonzero scalar gives
another eigenvector.) On the other hand, there can be at most n linearly indepen-
dent eigenvectors of an n × n matrix, since Rn has dimension n.

Example (Computing eigenspaces). For each of the numbers λ = −2, 1, 3, decide


if λ is an eigenvalue of the matrix
 ‹
2 −4
A= ,
−1 −1

and if so, compute a basis for the λ-eigenspace.



Solution. The number 3 is an eigenvalue of A if and only if Nul(A − 3I2 ) is


nonzero. Hence, we have to solve the matrix equation (A − 3I2 )v = 0. We have
 ‹  ‹  ‹
2 −4 1 0 −1 −4
A − 3I2 = −3 = .
−1 −1 0 1 −1 −4

The reduced row echelon form of this matrix is

    [ 1  4 ]
    [ 0  0 ] ,

which gives the parametric form x = −4y, y = y, and the parametric vector form

    [ x ]       [ −4 ]
    [ y ]  = y  [  1 ] .

Since y is a free variable, the null space of A − 3I2 is nonzero, so 3 is an eigenvalue.
A basis for the 3-eigenspace is { (−4, 1) }.
Concretely, we have shown that the eigenvectors of A with eigenvalue 3 are
exactly the nonzero multiples of (−4, 1). In particular, (−4, 1) is an eigenvector,
which we can verify:

    [  2  −4 ] [ −4 ]   [ −12 ]       [ −4 ]
    [ −1  −1 ] [  1 ] = [   3 ]  =  3 [  1 ] .

The number 1 is an eigenvalue of A if and only if Nul(A− I2 ) is nonzero. Hence,


we have to solve the matrix equation (A − I2 )v = 0. We have
 ‹  ‹  ‹
2 −4 1 0 1 −4
A − I2 = − = .
−1 −1 0 1 −1 −2

This matrix has determinant −6, so it is invertible. By the invertible matrix theo-
rem in Section 4.5, we have Nul(A − I2 ) = {0}, so 1 is not an eigenvalue.
The eigenvectors of A with eigenvalue −2, if any, are the nonzero solutions of
the matrix equation (A + 2I2 )v = 0. We have
 ‹  ‹  ‹
2 −4 1 0 4 −4
A + 2I2 = +2 = .
−1 −1 0 1 −1 1

The reduced row echelon form of this matrix is

    [ 1  −1 ]
    [ 0   0 ] ,

which gives the parametric form x = y, y = y, and the parametric vector form

    [ x ]       [ 1 ]
    [ y ]  = y  [ 1 ] .

Hence there exist eigenvectors with eigenvalue −2, namely, any nonzero multiple
of (1, 1). A basis for the −2-eigenspace is { (1, 1) }.

Use this link to view the online demo


The 3-eigenspace is the line spanned by (−4, 1). This means that A scales every vector
in that line by a factor of 3. Likewise, the −2-eigenspace is the line spanned by (1, 1).
in that line by a factor of 3. Likewise, the −2-eigenspace is the line spanned by 1 .
Click and drag the vector x around to see how A acts on that vector.
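
The recipe above (compute Nul(A − λI)) can also be carried out symbolically. A sketch assuming the SymPy library, applied to the matrix of this example:

    import sympy as sp

    A = sp.Matrix([[2, -4],
                   [-1, -1]])
    I2 = sp.eye(2)

    print((A - 3 * I2).nullspace())    # [Matrix([[-4], [1]])]  -> 3 is an eigenvalue
    print((A - 1 * I2).nullspace())    # []                     -> 1 is not an eigenvalue
    print((A + 2 * I2).nullspace())    # [Matrix([[1], [1]])]   -> -2 is an eigenvalue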

Example (Computing eigenspaces). For each of the numbers λ = 0, 1/2, 2, decide if
λ is an eigenvalue of the matrix
 
7/2 0 3
A =  −3/2 2 −3  ,
−3/2 0 −1

and if so, compute a basis for the λ-eigenspace.


Solution. The number 2 is an eigenvalue of A if and only if Nul(A − 2I3 ) is
nonzero. Hence, we have to solve the matrix equation (A − 2I3 )v = 0. We have
     
               [  7/2  0   3 ]       [ 1  0  0 ]     [  3/2  0   3 ]
    A − 2I3 =  [ −3/2  2  −3 ]  − 2  [ 0  1  0 ]  =  [ −3/2  0  −3 ] .
               [ −3/2  0  −1 ]       [ 0  0  1 ]     [ −3/2  0  −3 ]

The reduced row echelon form of this matrix is

    [ 1  0  2 ]
    [ 0  0  0 ] ,
    [ 0  0  0 ]

which gives the parametric form x = −2z, y = y, z = z, and the parametric vector form

    [ x ]       [ 0 ]       [ −2 ]
    [ y ]  = y  [ 1 ]  + z  [  0 ] .
    [ z ]       [ 0 ]       [  1 ]

The matrix A − 2I3 has two free variables, so the null space of A − 2I3 is nonzero,
and thus 2 is an eigenvalue. A basis for the 2-eigenspace is

    { (0, 1, 0),  (−2, 0, 1) }.

This is a plane in R3 .
The eigenvectors of A with eigenvalue 1/2, if any, are the nonzero solutions of
the matrix equation (A − (1/2)I3 )v = 0. We have

                     [  7/2  0   3 ]     1  [ 1  0  0 ]     [   3    0     3  ]
    A − (1/2)I3  =   [ −3/2  2  −3 ]  − --- [ 0  1  0 ]  =  [ −3/2  3/2   −3  ] .
                     [ −3/2  0  −1 ]     2  [ 0  0  1 ]     [ −3/2   0   −3/2 ]

The reduced row echelon form of this matrix is

    [ 1  0   1 ]
    [ 0  1  −1 ] ,
    [ 0  0   0 ]

which gives the parametric form x = −z, y = z, z = z, and the parametric vector form

    [ x ]       [ −1 ]
    [ y ]  = z  [  1 ] .
    [ z ]       [  1 ]

Hence there exist eigenvectors with eigenvalue 1/2, so 1/2 is an eigenvalue. A basis
for the (1/2)-eigenspace is

    { (−1, 1, 1) }.

This is a line in R3 .
The number 0 is an eigenvalue of A if and only if Nul(A − 0I3 ) = Nul(A) is
nonzero. This is the same as asking whether A is noninvertible, by the invertible
matrix theorem in Section 4.5. The determinant of A is det(A) = 2 ≠ 0, so A is
invertible by the invertibility property in Section 5.1. It follows that 0 is not an
eigenvalue of A.

Use this link to view the online demo

The 2-eigenspace is the violet plane. This means that A scales every vector in that
plane by a factor of 2. The (1/2)-eigenspace is the green line. Click and drag the vector x
around to see how A acts on that vector.

Example (Reflection). Let T : R2 → R2 be the linear transformation that reflects


over the line L defined by y = −x, and let A be the matrix for T . Find all
eigenspaces of A.
Solution. We showed in this example that all eigenvectors with eigenvalue 1 lie
on L, and all eigenvectors with eigenvalue −1 lie on the line L ⊥ that is perpendic-
ular to L. Hence, L is the 1-eigenspace, and L ⊥ is the −1-eigenspace.
None of this required any computations, but we can verify our conclusions
using algebra. First we compute the matrix A:
\[
T\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 0 \\ -1 \end{pmatrix}, \qquad T\begin{pmatrix} 0 \\ 1 \end{pmatrix} = \begin{pmatrix} -1 \\ 0 \end{pmatrix} \quad\Longrightarrow\quad A = \begin{pmatrix} 0 & -1 \\ -1 & 0 \end{pmatrix}.
\]

Computing the 1-eigenspace means solving the matrix equation (A − I2)v = 0. We have
\[
A - I_2 = \begin{pmatrix} 0 & -1 \\ -1 & 0 \end{pmatrix} - \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} -1 & -1 \\ -1 & -1 \end{pmatrix} \;\xrightarrow{\text{RREF}}\; \begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix}.
\]

The parametric form of the solution set is x = −y, or equivalently, y = −x, which is exactly the equation for L. Computing the −1-eigenspace means solving the matrix equation (A + I2)v = 0; we have
\[
A + I_2 = \begin{pmatrix} 0 & -1 \\ -1 & 0 \end{pmatrix} + \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix} \;\xrightarrow{\text{RREF}}\; \begin{pmatrix} 1 & -1 \\ 0 & 0 \end{pmatrix}.
\]

The parametric form of the solution set is x = y, or equivalently, y = x, which is


exactly the equation for L ⊥ .

Use this link to view the online demo

The violet line L is the 1-eigenspace, and the green line L ⊥ is the −1-eigenspace.

Recipes: Eigenspaces. Let A be an n × n matrix and let λ be a number.

1. λ is an eigenvalue of A if and only if (A − λI n )v = 0 has a nontrivial


solution, if and only if Nul(A − λI n ) ≠ {0}.

2. In this case, finding a basis for the λ-eigenspace of A means finding a


basis for Nul(A − λI n ), which can be done by finding the parametric
vector form of the solutions of the homogeneous system of equations
(A − λI n )v = 0.

3. The dimension of the λ-eigenspace of A is equal to the number of free


variables in the system of equations (A− λI n )v = 0, which is the number
of columns of A − λI n without pivots.

4. The eigenvectors with eigenvalue λ are the nonzero vectors in Nul(A −


λI n ), or equivalently, the nontrivial solutions of (A − λI n )v = 0.
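The recipe above is easy to carry out with a computer algebra system. The following Python/SymPy sketch is an addition to the text (not part of the original); it computes eigenspace bases for the matrix of the previous example, and the helper name eigenspace_basis is ours.

from sympy import Matrix, Rational, eye

# The 3x3 matrix from the "Computing eigenspaces" example above, with exact entries.
A = Matrix([[Rational(7, 2), 0, 3],
            [Rational(-3, 2), 2, -3],
            [Rational(-3, 2), 0, -1]])

def eigenspace_basis(A, lam):
    # A basis for Nul(A - lam*I), i.e. for the lam-eigenspace of A.
    return (A - lam * eye(A.shape[0])).nullspace()

print(eigenspace_basis(A, 2))               # two vectors: the 2-eigenspace is a plane
print(eigenspace_basis(A, Rational(1, 2)))  # one vector: the 1/2-eigenspace is a line
print(eigenspace_basis(A, 0))               # empty list: 0 is not an eigenvalue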

We conclude with an observation about the 0-eigenspace of a matrix.

Fact. Let A be an n × n matrix.

1. The number 0 is an eigenvalue of A if and only if A is not invertible.

2. In this case, the 0-eigenspace of A is Nul(A).

Proof. We know that 0 is an eigenvalue of A if and only if Nul(A − 0I n ) = Nul(A)


is nonzero, which is equivalent to the noninvertibility of A by the invertible matrix
theorem in Section 4.5. In this case, the 0-eigenspace is by definition Nul(A−0I n ) =
Nul(A).

Concretely, an eigenvector with eigenvalue 0 is a nonzero vector v such that


Av = 0v, i.e., such that Av = 0. These are exactly the nonzero vectors in the null
space of A.

6.1.3 The Invertible Matrix Theorem: Addenda


We now have two new ways of saying that a matrix is invertible, so we add them
to the invertible matrix theorem.

Invertible Matrix Theorem. Let A be an n × n matrix, and let T : Rn → Rn be the


matrix transformation T (x) = Ax. The following statements are equivalent:

1. A is invertible.

2. T is invertible.

3. The reduced row echelon form of A is the identity matrix I n .



4. A has n pivots.

5. Ax = 0 has no solutions other than the trivial one.

6. Nul(A) = {0}.

7. nullity(A) = 0.

8. The columns of A are linearly independent.

9. The columns of A form a basis for Rn .

10. T is one-to-one.

11. Ax = b is consistent for all b in Rn .

12. Ax = b has a unique solution for each b in Rn .

13. The columns of A span Rn .

14. Col(A) = Rn .

15. dim Col(A) = n.

16. rank(A) = n.

17. T is onto.

18. There is a matrix B such that AB = I n .

19. There is a matrix B such that BA = I n .

20. det(A) ≠ 0.

21. 0 is not an eigenvalue of A.

6.2 The Characteristic Polynomial

Objectives

1. Learn that the eigenvalues of a triangular matrix are the diagonal entries.

2. Learn some strategies for finding the zeros of a polynomial.

3. Recipes: compute eigenvalues using the characteristic polynomial, the char-


acteristic polynomial of a 2 × 2 matrix.

4. Vocabulary words: characteristic polynomial, trace.



In Section 6.1 we discussed how to decide whether a given number λ is an


eigenvalue of a matrix, and if so, how to find all of the associated eigenvectors.
In this section, we will give a method for computing all of the eigenvalues of a
matrix. This does not reduce to solving a system of linear equations: indeed, it
requires solving a nonlinear equation in one variable, namely, finding the roots of
the characteristic polynomial.
Definition. Let A be an n × n matrix. The characteristic polynomial of A is the
function
f (λ) = det(A − λI n ).
We will see below that the characteristic polynomial is in fact a polynomial.
Finding the characteristic polynomial means computing the determinant of the
matrix A − λI n , whose entries contain the unknown λ.
Example. Find the characteristic polynomial of the matrix
\[
A = \begin{pmatrix} 5 & 2 \\ 2 & 1 \end{pmatrix}.
\]

Solution. We have
\[
f(\lambda) = \det(A - \lambda I_2) = \det\left(\begin{pmatrix} 5 & 2 \\ 2 & 1 \end{pmatrix} - \begin{pmatrix} \lambda & 0 \\ 0 & \lambda \end{pmatrix}\right) = \det\begin{pmatrix} 5-\lambda & 2 \\ 2 & 1-\lambda \end{pmatrix} = (5-\lambda)(1-\lambda) - 2\cdot 2 = \lambda^2 - 6\lambda + 1.
\]

Example. Find the characteristic polynomial of the matrix
\[
A = \begin{pmatrix} 0 & 6 & 8 \\ 1/2 & 0 & 0 \\ 0 & 1/2 & 0 \end{pmatrix}.
\]

Solution. We compute the determinant by expanding cofactors along the third column:
\[
f(\lambda) = \det(A - \lambda I_3) = \det\begin{pmatrix} -\lambda & 6 & 8 \\ 1/2 & -\lambda & 0 \\ 0 & 1/2 & -\lambda \end{pmatrix} = 8\left(\tfrac12\cdot\tfrac12 - 0\cdot(-\lambda)\right) - \lambda\left(\lambda\cdot\lambda - 6\cdot\tfrac12\right) = -\lambda^3 + 3\lambda + 2.
\]

The point of the characteristic polynomial is that we can use it to compute


eigenvalues.

Theorem (Eigenvalues are roots of the characteristic polynomial). Let A be an n×n


matrix, and let f (λ) = det(A − λI n ) be its characteristic polynomial. Then λ is an
eigenvalue of A if and only if f (λ) = 0.
Proof. By the invertible matrix theorem in Section 6.1, the matrix equation (A −
λI n )x = 0 has a nontrivial solution if and only if det(A − λI n ) = 0. Therefore,

λ is an eigenvalue of A ⇐⇒ Ax = λx has a nontrivial solution


⇐⇒ (A − λI n )x = 0 has a nontrivial solution
⇐⇒ A − λI n is not invertible
⇐⇒ det(A − λI n ) = 0
⇐⇒ f (λ) = 0.

Example (Finding eigenvalues). Find the eigenvalues and eigenvectors of the matrix
\[
A = \begin{pmatrix} 5 & 2 \\ 2 & 1 \end{pmatrix}.
\]

Solution. In the above example we computed the characteristic polynomial of A


to be f(λ) = λ² − 6λ + 1. We can solve the equation λ² − 6λ + 1 = 0 using the quadratic formula:
\[
\lambda = \frac{6 \pm \sqrt{36-4}}{2} = 3 \pm 2\sqrt2.
\]
Therefore, the eigenvalues are 3 + 2√2 and 3 − 2√2.
To compute the eigenvectors, we solve the homogeneous system of equations (A − λI2)x = 0 for each eigenvalue λ. When λ = 3 + 2√2, we have
\[
A - (3+2\sqrt2)I_2 = \begin{pmatrix} 2-2\sqrt2 & 2 \\ 2 & -2-2\sqrt2 \end{pmatrix}
\xrightarrow{R_1 = R_1 \times (2+2\sqrt2)} \begin{pmatrix} -4 & 4+4\sqrt2 \\ 2 & -2-2\sqrt2 \end{pmatrix}
\xrightarrow{R_2 = R_2 + R_1/2} \begin{pmatrix} -4 & 4+4\sqrt2 \\ 0 & 0 \end{pmatrix}
\xrightarrow{R_1 = R_1 \div (-4)} \begin{pmatrix} 1 & -1-\sqrt2 \\ 0 & 0 \end{pmatrix}.
\]
The parametric form of the general solution is x = (1 + √2)y, so the (3 + 2√2)-eigenspace is the line spanned by $\binom{1+\sqrt2}{1}$. We compute in the same way that the (3 − 2√2)-eigenspace is the line spanned by $\binom{1-\sqrt2}{1}$.

Use this link to view the online demo

The green line is the (3 − 2√2)-eigenspace, and the violet line is the (3 + 2√2)-eigenspace.
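As a numerical cross-check of this example (an aside added here, not part of the original text), NumPy can compute both the characteristic polynomial and the eigenvalues directly.

import numpy as np

A = np.array([[5.0, 2.0],
              [2.0, 1.0]])

coeffs = np.poly(A)           # coefficients of det(lambda*I - A): [1, -6, 1]
print(np.roots(coeffs))       # roots of the characteristic polynomial
print(np.linalg.eigvals(A))   # eigenvalues computed directly
print(3 + 2*np.sqrt(2), 3 - 2*np.sqrt(2))  # the exact values found above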

Example (Finding eigenvalues). Find the eigenvalues and eigenvectors of the matrix
\[
A = \begin{pmatrix} 0 & 6 & 8 \\ 1/2 & 0 & 0 \\ 0 & 1/2 & 0 \end{pmatrix}.
\]

Solution. In the above example we computed the characteristic polynomial of A


to be f(λ) = −λ³ + 3λ + 2. We eyeball that f(2) = −8 + 3·2 + 2 = 0. Thus λ − 2 divides f(λ); to find the other roots, we perform polynomial long division:
\[
\frac{-\lambda^3+3\lambda+2}{\lambda-2} = -\lambda^2 - 2\lambda - 1 = -(\lambda+1)^2.
\]
Therefore,
\[
f(\lambda) = -(\lambda-2)(\lambda+1)^2,
\]
so the only eigenvalues are λ = 2, −1.
We compute the 2-eigenspace by solving the homogeneous system (A−2I3 )x =
0. We have
\[
A - 2I_3 = \begin{pmatrix} -2 & 6 & 8 \\ 1/2 & -2 & 0 \\ 0 & 1/2 & -2 \end{pmatrix} \;\xrightarrow{\text{RREF}}\; \begin{pmatrix} 1 & 0 & -16 \\ 0 & 1 & -4 \\ 0 & 0 & 0 \end{pmatrix}.
\]

The parametric form and parametric vector form of the solutions are:
\[
\begin{cases} x = 16z \\ y = 4z \\ z = z \end{cases} \;\longrightarrow\; \begin{pmatrix} x \\ y \\ z \end{pmatrix} = z\begin{pmatrix} 16 \\ 4 \\ 1 \end{pmatrix}.
\]

Therefore, the 2-eigenspace is the line
\[
\operatorname{Span}\left\{ \begin{pmatrix} 16 \\ 4 \\ 1 \end{pmatrix} \right\}.
\]

We compute the −1-eigenspace by solving the homogeneous system (A+I3 )x =


0. We have
\[
A + I_3 = \begin{pmatrix} 1 & 6 & 8 \\ 1/2 & 1 & 0 \\ 0 & 1/2 & 1 \end{pmatrix} \;\xrightarrow{\text{RREF}}\; \begin{pmatrix} 1 & 0 & -4 \\ 0 & 1 & 2 \\ 0 & 0 & 0 \end{pmatrix}.
\]
The parametric form and parametric vector form of the solutions are:
\[
\begin{cases} x = 4z \\ y = -2z \\ z = z \end{cases} \;\longrightarrow\; \begin{pmatrix} x \\ y \\ z \end{pmatrix} = z\begin{pmatrix} 4 \\ -2 \\ 1 \end{pmatrix}.
\]

Therefore, the −1-eigenspace is the line
\[
\operatorname{Span}\left\{ \begin{pmatrix} 4 \\ -2 \\ 1 \end{pmatrix} \right\}.
\]

Use this link to view the online demo

The green line is the −1-eigenspace, and the violet line is the 2-eigenspace.
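For comparison (an added aside, not in the original text), SymPy reproduces this computation exactly, returning each eigenvalue together with its multiplicity and a basis for its eigenspace.

from sympy import Matrix, Rational, symbols

A = Matrix([[0, 6, 8],
            [Rational(1, 2), 0, 0],
            [0, Rational(1, 2), 0]])

lam = symbols('lam')
print(A.charpoly(lam).as_expr())   # lam**3 - 3*lam - 2, which is -f(lam) in the book's sign convention
for eigenvalue, alg_mult, basis in A.eigenvects():
    print(eigenvalue, alg_mult, [list(v) for v in basis])
# -1 appears with multiplicity 2 and eigenvectors proportional to (4, -2, 1);
# 2 appears with eigenvectors proportional to (16, 4, 1).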

Form of the characteristic polynomial It is time that we justified the use of the
term “polynomial.” First we need a vocabulary word.

Definition. The trace of a square matrix A is the number Tr(A) obtained by summing the diagonal entries of A:
\[
\operatorname{Tr}\begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1,n-1} & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2,n-1} & a_{2n} \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ a_{n-1,1} & a_{n-1,2} & \cdots & a_{n-1,n-1} & a_{n-1,n} \\ a_{n1} & a_{n2} & \cdots & a_{n,n-1} & a_{nn} \end{pmatrix} = a_{11} + a_{22} + \cdots + a_{nn}.
\]

Theorem. Let A be an n × n matrix, and let f (λ) = det(A− λI n ) be its characteristic


polynomial. Then f(λ) is a polynomial of degree n. Moreover, f(λ) has the form
\[
f(\lambda) = (-1)^n\lambda^n + (-1)^{n-1}\operatorname{Tr}(A)\,\lambda^{n-1} + \cdots + \det(A).
\]
In other words, the coefficient of λ^{n−1} is ±Tr(A), and the constant term is det(A) (the other coefficients are just numbers without names).

Proof. First we notice that

f (0) = det(A − 0I n ) = det(A),

so that the constant term is always det(A).


We will prove the rest of the theorem only for 2 × 2 matrices; the reader is encouraged to complete the proof in general using cofactor expansions. We can write a 2 × 2 matrix as $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$; then
\[
f(\lambda) = \det(A - \lambda I_2) = \det\begin{pmatrix} a-\lambda & b \\ c & d-\lambda \end{pmatrix} = (a-\lambda)(d-\lambda) - bc = \lambda^2 - (a+d)\lambda + (ad - bc) = \lambda^2 - \operatorname{Tr}(A)\lambda + \det(A).
\]

Recipe: The characteristic polynomial of a 2 × 2 matrix. When n = 2, the


theorem tells us all of the coefficients of the characteristic polynomial:

f(λ) = λ² − Tr(A)λ + det(A).

This is generally the fastest way to compute the characteristic polynomial of a


2 × 2 matrix.

Example. Find the characteristic polynomial of the matrix
\[
A = \begin{pmatrix} 5 & 2 \\ 2 & 1 \end{pmatrix}.
\]

Solution. We have
\[
f(\lambda) = \lambda^2 - \operatorname{Tr}(A)\lambda + \det(A) = \lambda^2 - (5+1)\lambda + (5\cdot 1 - 2\cdot 2) = \lambda^2 - 6\lambda + 1,
\]
as in the above example.
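A quick numerical sanity check of this recipe (added here; not part of the original text) using NumPy:

import numpy as np

A = np.array([[5.0, 2.0],
              [2.0, 1.0]])

# Coefficients of lambda^2 - Tr(A)*lambda + det(A):
print(1.0, -np.trace(A), np.linalg.det(A))   # 1, -6, 1
print(np.poly(A))                            # NumPy's characteristic polynomial agrees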

Factoring the characteristic polynomial If A is an n × n matrix, then the char-


acteristic polynomial f (λ) has degree n by the above theorem. When n = 2, one
can use the quadratic formula to find the roots of f (λ). There exist algebraic for-
mulas for the roots of cubic and quartic polynomials, but these are generally too
cumbersome to apply by hand. Even worse, it is known that there is no algebraic
formula for the roots of a general polynomial of degree at least 5.
In practice, the roots of the characteristic polynomial are found numerically
by computer. That said, there do exist methods for finding roots by hand. For
instance, we have the following consequence of the rational root theorem (which
we also call the rational root theorem):

Rational Root Theorem. Suppose that A is an n × n matrix whose characteristic


polynomial f (λ) has integer (whole-number) coefficients. Then all rational roots of its
characteristic polynomial are integer divisors of det(A).

For example, if A has integer entries, then its characteristic polynomial has
integer coefficients. This gives us one way to find a root by hand, if A has an
eigenvalue that is a rational number. Once we have found one root, then we can
reduce the degree by polynomial long division.

Example. Find the eigenvalues of the matrix


 
7 0 3
A =  −3 2 −3  .
−3 0 −1

Solution. We compute the characteristic polynomial by expanding cofactors along


the first row:
 
7−λ 0 3
f (λ) = det(A − λI3 ) = det  −3 2 − λ −3 
−3 0 −1 − λ
= (7 − λ)(2 − λ)(−1 − λ) + 3 · 3(2 − λ)
= −λ3 + 8λ2 − 14λ + 4.

The determinant of A is the constant term f (0) = 4; its integer divisors are ±1, ±2, ±4.
We check which are roots:

f (1) = −3 f (−1) = 27 f (2) = 0 f (−2) = 72 f (4) = 12 f (−4) = 252.

The only rational root of f(λ) is λ = 2. We divide by λ − 2 using polynomial long division:
\[
\frac{-\lambda^3+8\lambda^2-14\lambda+4}{\lambda-2} = -\lambda^2 + 6\lambda - 2.
\]
We can use the quadratic formula to find the roots of the quotient:
\[
\lambda = \frac{-6 \pm \sqrt{36 - 4\cdot 2}}{-2} = 3 \pm \sqrt7.
\]
We have factored f completely:
\[
f(\lambda) = -(\lambda-2)\bigl(\lambda-(3+\sqrt7)\bigr)\bigl(\lambda-(3-\sqrt7)\bigr).
\]
Therefore, the eigenvalues of A are 2, 3 + √7, and 3 − √7.

In the above example, we could have expanded cofactors along the second column to obtain
\[
f(\lambda) = (2-\lambda)\det\begin{pmatrix} 7-\lambda & 3 \\ -3 & -1-\lambda \end{pmatrix}.
\]
Since 2 − λ was the only nonzero entry in its column, this expression already has the 2 − λ term factored out: the rational root theorem was not needed. The determinant in the above expression is the characteristic polynomial of the matrix $\begin{pmatrix} 7 & 3 \\ -3 & -1 \end{pmatrix}$, so we can compute it using the trace and determinant:
\[
f(\lambda) = (2-\lambda)\bigl(\lambda^2 - (7-1)\lambda + (-7+9)\bigr) = (2-\lambda)(\lambda^2 - 6\lambda + 2).
\]

Example. Find the eigenvalues of the matrix


 
7 0 3
A =  −3 2 −3  .
4 2 0

Solution. We compute the characteristic polynomial by expanding cofactors along


the first row:
\[
f(\lambda) = \det(A - \lambda I_3) = \det\begin{pmatrix} 7-\lambda & 0 & 3 \\ -3 & 2-\lambda & -3 \\ 4 & 2 & -\lambda \end{pmatrix} = (7-\lambda)\bigl(-\lambda(2-\lambda) + 6\bigr) + 3\bigl(-6 - 4(2-\lambda)\bigr) = -\lambda^3 + 9\lambda^2 - 8\lambda.
\]

The constant term is zero, so A has determinant zero. We factor out λ, then eyeball
the roots of the quadratic factor:

f (λ) = −λ(λ2 − 9λ + 8) = −λ(λ − 1)(λ − 8).

Therefore, the eigenvalues of A are 0, 1, and 8.

Finding Eigenvalues of a 3×3 Matrix. Let A be a 3×3 matrix. Here are some
strategies for factoring its characteristic polynomial f (λ). First, you must find
one eigenvalue:

1. Do not multiply out the characteristic polynomial if it is already partially


factored! This happens if you expand cofactors along the second column
in this example.

2. If there is no constant term, you can factor out λ, as in this example.

3. If the matrix is triangular, the roots are the diagonal entries (see below).

4. Guess one eigenvalue using the rational root theorem: if det(A) is an


integer, substitute all (positive and negative) divisors of det(A) into f (λ).

5. Find an eigenvalue using the geometry of the matrix. For instance, a


reflection has eigenvalues ±1.

After obtaining an eigenvalue λ1 , use polynomial long division to compute


f (λ)/(λ − λ1 ). This is a quadratic polynomial, to which you can apply the
quadratic formula to find the remaining roots.
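The following sketch (an addition, not part of the original text) automates strategy 4 for the matrix of the earlier example: it guesses a rational root among the divisors of det(A), divides it out, and finishes with the quadratic formula.

import numpy as np

A = np.array([[7.0, 0.0, 3.0],
              [-3.0, 2.0, -3.0],
              [-3.0, 0.0, -1.0]])

coeffs = np.poly(A)                     # monic characteristic polynomial det(lambda*I - A)
d = round(np.linalg.det(A))             # det(A) = 4, so any rational root divides 4
candidates = [k for k in range(-abs(d), abs(d) + 1) if k != 0 and d % k == 0]
root = next(r for r in candidates if abs(np.polyval(coeffs, r)) < 1e-8)

quotient, _ = np.polydiv(coeffs, [1.0, -float(root)])   # divide by (lambda - root)
print(root, np.roots(quotient))                          # 2 and approximately 3 +/- sqrt(7)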

Eigenvalues of a triangular matrix It is easy to compute the determinant of an


upper- or lower-triangular matrix; this makes it easy to find its eigenvalues as well.

Corollary. If A is an upper- or lower-triangular matrix, then the eigenvalues of A are


its diagonal entries.

Proof. Suppose for simplicity that A is a 3 × 3 upper-triangular matrix:


 
a11 a12 a13
A =  0 a22 a23  .
0 0 a33

Its characteristic polynomial is


 
a11 − λ a12 a13
f (λ) = det(A − λI3 ) = det  0 a22 − λ a23  .
0 0 a33 − λ

This is also an upper-triangular matrix, so the determinant is the product of the


diagonal entries:
f (λ) = (a11 − λ)(a22 − λ)(a33 − λ).
The zeros of this polynomial are exactly a11 , a22 , a33 .

Example. Find the eigenvalues of the matrix

1 7 2 4
 
0 1 3 11 
A= .
0 0 π 101 
0 0 0 0

Solution. The eigenvalues are the diagonal entries 1, π, 0. (The eigenvalue 1


occurs twice, but it counts as one eigenvalue; in Section 6.4 we will define the
notion of algebraic multiplicity of an eigenvalue.)

6.3 Similarity

Objectives

1. Learn to interpret similar matrices geometrically.

2. Understand the relationship between the eigenvalues, eigenvectors, and char-


acteristic polynomials of similar matrices.

3. Recipe: compute Ax in terms of B, C for A = C BC −1 .

4. Picture: the geometry of similar matrices.

5. Vocabulary word: similarity.



Some matrices are easy to understand. For instance, a diagonal matrix
\[
D = \begin{pmatrix} 2 & 0 \\ 0 & 1/2 \end{pmatrix}
\]
just scales the coordinates of a vector: $D\binom{x}{y} = \binom{2x}{y/2}$. The purpose of most of the rest of this chapter is to understand complicated-looking matrices by analyzing to what extent they “behave like” simple matrices. For instance, the matrix

\[
A = \frac{1}{10}\begin{pmatrix} 11 & 6 \\ 9 & 14 \end{pmatrix}
\]
has eigenvalues 2 and 1/2, with corresponding eigenvectors $v_1 = \binom{2/3}{1}$ and $v_2 = \binom{-1}{1}$. Notice that

\[
D(x e_1 + y e_2) = x\,De_1 + y\,De_2 = 2x\,e_1 + \tfrac12 y\,e_2, \qquad
A(x v_1 + y v_2) = x\,Av_1 + y\,Av_2 = 2x\,v_1 + \tfrac12 y\,v_2.
\]

Using v1 , v2 instead of the usual coordinates makes A “behave” like a diagonal


matrix.

Use this link to view the online demo

The matrices A and D behave similarly. Click “multiply” to multiply the colored points
by D on the left and A on the right. (We will see in Section 6.4 why the points follow
hyperbolic paths.)

The other case of particular importance will be matrices that “behave” like a
rotation matrix: indeed, this will be crucial for understanding Section 6.5 geomet-
rically. See this important note.
In this section, we study in detail the situation when two matrices behave sim-
ilarly with respect to different coordinate systems. In Section 6.4 and Section 6.5,
we will show how to use eigenvalues and eigenvectors to find a simpler matrix
that behaves like a given matrix.

6.3.1 Similar Matrices


We begin with the algebraic definition of similarity.

Definition. Two n × n matrices A and B are similar if there exists an invertible


n × n matrix C such that A = C BC −1 .

Example. The matrices
\[
\begin{pmatrix} -12 & 15 \\ -10 & 13 \end{pmatrix} \quad\text{and}\quad \begin{pmatrix} 3 & 0 \\ 0 & -2 \end{pmatrix}
\]

are similar because
\[
\begin{pmatrix} -12 & 15 \\ -10 & 13 \end{pmatrix} = \begin{pmatrix} 1 & 3 \\ 1 & 2 \end{pmatrix}\begin{pmatrix} 3 & 0 \\ 0 & -2 \end{pmatrix}\begin{pmatrix} 1 & 3 \\ 1 & 2 \end{pmatrix}^{-1},
\]

as the reader can verify.

Example. The matrices
\[
\begin{pmatrix} 3 & 0 \\ 0 & -2 \end{pmatrix} \quad\text{and}\quad \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}
\]

are not similar. Indeed, the second matrix is the identity matrix I2, so if C is any invertible 2 × 2 matrix, then
\[
C I_2 C^{-1} = C C^{-1} = I_2 \ne \begin{pmatrix} 3 & 0 \\ 0 & -2 \end{pmatrix}.
\]

As in the above example, one can show that I n is the only matrix that is similar
to I n , and likewise for any scalar multiple of I n .

Similarity is unrelated to row equivalence. Any invertible matrix is row equiv-


alent to I n , but I n is the only matrix similar to I n . For instance,
\[
\begin{pmatrix} 2 & 1 \\ 0 & 2 \end{pmatrix} \quad\text{and}\quad \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}
\]

are row equivalent but not similar.

As suggested by its name, similarity is what is called an equivalence relation.


This means that it satisfies the following properties.

Proposition. Let A, B, and C be n × n matrices.

1. Reflexivity: A is similar to itself.

2. Symmetry: if A is similar to B, then B is similar to A.

3. Transitivity: if A is similar to B and B is similar to C, then A is similar to C.

Proof.

1. Taking $C = I_n = I_n^{-1}$, we have $A = I_n A I_n^{-1}$.

2. Suppose that A = C BC −1 . Multiplying both sides on the left by C −1 and on


the right by C gives

C −1 AC = C −1 (C BC −1 )C = B.

Since (C −1 )−1 = C, we have B = C −1 A(C −1 )−1 , so that B is similar to A.



3. Suppose that A = DBD−1 and B = EC E −1 . Substituting for B and remember-


ing that (DE)−1 = E −1 D−1 , we have

A = D(EC E −1 )D−1 = (DE)C(DE)−1 ,

which shows that A is similar to C.

Example. The matrices
\[
\begin{pmatrix} -12 & 15 \\ -10 & 13 \end{pmatrix} \quad\text{and}\quad \begin{pmatrix} 3 & 0 \\ 0 & -2 \end{pmatrix}
\]
are similar, as we saw in this example. Likewise, the matrices
\[
\begin{pmatrix} -12 & 15 \\ -10 & 13 \end{pmatrix} \quad\text{and}\quad \begin{pmatrix} -12 & 5 \\ -30 & 13 \end{pmatrix}
\]
are similar because
\[
\begin{pmatrix} -12 & 5 \\ -30 & 13 \end{pmatrix} = \begin{pmatrix} 2 & -1 \\ 2 & 1 \end{pmatrix}\begin{pmatrix} -12 & 15 \\ -10 & 13 \end{pmatrix}\begin{pmatrix} 2 & -1 \\ 2 & 1 \end{pmatrix}^{-1}.
\]
It follows that
\[
\begin{pmatrix} -12 & 5 \\ -30 & 13 \end{pmatrix} \quad\text{and}\quad \begin{pmatrix} 3 & 0 \\ 0 & -2 \end{pmatrix}
\]
are similar to each other.

We conclude with an observation about similarity and powers of matrices.

Fact. Let A = C BC −1 . Then for any n ≥ 1, we have

An = C B n C −1 .

Proof. First note that

A2 = AA = (C BC −1 )(C BC −1 ) = C B(C −1 C)BC −1 = C BI n BC −1 = C B 2 C −1 .

Next we have

A3 = A2 A = (C B 2 C −1 )(C BC −1 ) = C B 2 (C −1 C)BC −1 = C B 3 C −1 .

The pattern is clear.

Example. Compute A^{100}, where
\[
A = \begin{pmatrix} 5 & 13 \\ -2 & -5 \end{pmatrix} = \begin{pmatrix} -2 & 3 \\ 1 & -1 \end{pmatrix}\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} -2 & 3 \\ 1 & -1 \end{pmatrix}^{-1}.
\]

Solution. By the fact, we have
\[
A^{100} = \begin{pmatrix} -2 & 3 \\ 1 & -1 \end{pmatrix}\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}^{100}\begin{pmatrix} -2 & 3 \\ 1 & -1 \end{pmatrix}^{-1}.
\]
The matrix $\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$ is a counterclockwise rotation by 90°. If we rotate by 90° four times, then we end up where we started. Hence rotating by 90° one hundred times is the identity transformation, so
\[
A^{100} = \begin{pmatrix} -2 & 3 \\ 1 & -1 \end{pmatrix}\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}\begin{pmatrix} -2 & 3 \\ 1 & -1 \end{pmatrix}^{-1} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}.
\]
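A one-line numerical check of this example (added here, not from the text):

import numpy as np

A = np.array([[5.0, 13.0],
              [-2.0, -5.0]])

print(np.linalg.matrix_power(A, 4))     # the identity matrix: A acts like a 90-degree rotation
print(np.linalg.matrix_power(A, 100))   # also the identity, since 100 is a multiple of 4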

6.3.2 Geometry of Similar Matrices


Similarity is a very interesting construction when viewed geometrically. We will
see that, roughly, similar matrices do the same thing in different coordinate systems.
The reader might want to review B-coordinates and nonstandard coordinate grids
in Section 3.8 before reading this subsection.
By the invertible matrix theorem in Section 6.1, an n × n matrix C is invertible
if and only if its columns v1 , v2 , . . . , vn form a basis for Rn . This means we can speak
of the B-coordinates of a vector in Rn , where B is the basis of columns of C. Recall
that
\[
[x]_{\mathcal B} = \begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{pmatrix}
\quad\text{means}\quad
x = c_1v_1 + c_2v_2 + \cdots + c_nv_n = \begin{pmatrix} | & | & & | \\ v_1 & v_2 & \cdots & v_n \\ | & | & & | \end{pmatrix}\begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{pmatrix}.
\]

Since C is the matrix with columns v1 , v2 , . . . , vn , this says that x = C[x]B . Multi-
plying both sides by C −1 gives [x]B = C −1 x. To summarize:

Let C be an invertible n × n matrix with columns v1 , v2 , . . . , vn , and let B =


{v1 , v2 , . . . , vn }, a basis for Rn . Then for any x in Rn , we have

C[x]B = x and C −1 x = [x]B .

This says that C changes from the B-coordinates to the usual coordinates, and
C −1 changes from the usual coordinates to the B-coordinates.

Suppose that A = C BC −1 . The above observation gives us another way of com-


puting Ax for a vector x in Rn . Recall that C BC −1 x = C(B(C −1 x)), so that multi-
plying C BC −1 by x means first multiplying by C −1 , then by B, then by C. See this

example in Section 4.4.

Recipe: Computing Ax in terms of B. Suppose that A = C BC −1 , where C is


an invertible matrix with columns v1 , v2 , . . . , vn . Let B = {v1 , v2 , . . . , vn }, a basis
for Rn . Let x be a vector in Rn . To compute Ax, one does the following:

1. Multiply x by C −1 , which changes to the B-coordinates: [x]B = C −1 x.

2. Multiply this by B: B[x]B = BC −1 x.

3. Interpreting this vector as a B-coordinate vector, we multiply it by C to


change back to the usual coordinates: Ax = C BC −1 x = C B[x]B .

[Diagram: multiplying by C⁻¹ takes x (usual coordinates) to [x]B (B-coordinates); B takes [x]B to B[x]B; multiplying by C takes B[x]B back to Ax in the usual coordinates.]

To summarize: if A = C BC −1 , then A and B do the same thing, only in different


coordinate systems.
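Here is a small numerical rendition of the recipe (an addition, not part of the original text); the matrices happen to be the ones used in the example that follows.

import numpy as np

B = np.diag([2.0, -1.0])
C = np.array([[1.0, 1.0],
              [1.0, -1.0]])
A = C @ B @ np.linalg.inv(C)

x = np.array([0.0, -2.0])
x_B = np.linalg.solve(C, x)   # step 1: [x]_B = C^{-1} x
y_B = B @ x_B                 # step 2: apply B in B-coordinates
Ax = C @ y_B                  # step 3: convert back to the usual coordinates

print(Ax)       # (-3, -1)
print(A @ x)    # the same vector, computed directly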

The following example is the heart of this section.

Example. Consider the matrices
\[
A = \begin{pmatrix} 1/2 & 3/2 \\ 3/2 & 1/2 \end{pmatrix} \qquad B = \begin{pmatrix} 2 & 0 \\ 0 & -1 \end{pmatrix} \qquad C = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}.
\]
One can verify that A = CBC⁻¹: see this example in Section 6.4. Let $v_1 = \binom{1}{1}$ and $v_2 = \binom{1}{-1}$, the columns of C, and let B = {v₁, v₂}, a basis of R².
The matrix B is diagonal: it scales the x-direction by a factor of 2 and the
y-direction by a factor of −1.

[Figure: B sends e1 to Be1 = 2e1 and e2 to Be2 = −e2.]

To compute Ax, first we multiply by C⁻¹ to find the B-coordinates of x, then we multiply by B, then we multiply by C again. For instance, let x = $\binom{0}{-2}$.

1. We see from the B-coordinate grid below that x = −v₁ + v₂. Therefore, $C^{-1}x = [x]_B = \binom{-1}{1}$.

2. Multiplying by B scales the coordinates: $B[x]_B = \binom{-2}{-1}$.

3. Interpreting $\binom{-2}{-1}$ as a B-coordinate vector, we multiply by C to get
\[
Ax = C\begin{pmatrix} -2 \\ -1 \end{pmatrix} = -2v_1 - v_2 = \begin{pmatrix} -3 \\ -1 \end{pmatrix}.
\]
Of course, this vector lies at (−2, −1) on the B-coordinate grid.

[Diagram: x ↦ [x]B by multiplying by C⁻¹; B scales the first coordinate by 2 and the second by −1 to give B[x]B; multiplying by C returns Ax in the usual coordinates.]

Now let x = $\frac12\binom{5}{-3}$.

1. We see from the B-coordinate grid that x = ½v₁ + 2v₂. Therefore, $C^{-1}x = [x]_B = \binom{1/2}{2}$.

2. Multiplying by B scales the coordinates: $B[x]_B = \binom{1}{-2}$.

3. Interpreting $\binom{1}{-2}$ as a B-coordinate vector, we multiply by C to get
\[
Ax = C\begin{pmatrix} 1 \\ -2 \end{pmatrix} = v_1 - 2v_2 = \begin{pmatrix} -1 \\ 3 \end{pmatrix}.
\]
This vector lies at (1, −2) on the B-coordinate grid.

[Diagram: the same three-step picture for this x: multiply by C⁻¹, scale by B, multiply by C to obtain Ax.]

To summarize:

• B scales the e1 -direction by 2 and the e2 -direction by −1.

• A scales the v1 -direction by 2 and the v2 -direction by −1.



[Figure: B acts on the standard grid (e1 ↦ Be1, e2 ↦ Be2), while A acts the same way on the B-coordinate grid (v1 ↦ Av1, v2 ↦ Av2); C and C⁻¹ translate between the two pictures.]

Use this link to view the online demo

The geometric relationship between the similar matrices A and B acting on R2 . Click
and drag the heads of x and [x]B . Study this picture until you can reliably predict
where the other three vectors will be after moving one of them: this is the essence of
the geometry of similar matrices.

Interactive: Another matrix similar to B. Consider the matrices

\[
A' = \frac15\begin{pmatrix} -8 & -9 \\ 6 & 13 \end{pmatrix} \qquad B = \begin{pmatrix} 2 & 0 \\ 0 & -1 \end{pmatrix} \qquad C' = \frac12\begin{pmatrix} -1 & -3 \\ 2 & 1 \end{pmatrix}.
\]
Then A′ = C′B(C′)⁻¹, as one can verify. Let $v_1' = \frac12\binom{-1}{2}$ and $v_2' = \frac12\binom{-3}{1}$, the columns of C′, and let B′ = {v₁′, v₂′}. Then A′ does the same thing as B, as in the previous example, except A′ uses the B′-coordinate system. In other words:

• B scales the e1 -direction by 2 and the e2 -direction by −1.

• A0 scales the v10 -direction by 2 and the v20 -direction by −1.



[Figure: B acts on the standard grid, while A′ acts the same way on the B′-coordinate grid (v1′ ↦ A′v1′, v2′ ↦ A′v2′); C′ and (C′)⁻¹ translate between the two pictures.]

Use this link to view the online demo

The geometric relationship between the similar matrices A0 and B acting on R2 . Click
and drag the heads of x and [x]B0 .

Example (A matrix similar to a rotation matrix). Consider the matrices
\[
A = \frac16\begin{pmatrix} 7 & -17 \\ 5 & -7 \end{pmatrix} \qquad B = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} \qquad C = \begin{pmatrix} 2 & -1/2 \\ 1 & 1/2 \end{pmatrix}.
\]
One can verify that A = CBC⁻¹. Let $v_1 = \binom{2}{1}$ and $v_2 = \frac12\binom{-1}{1}$, the columns of C, and let B = {v₁, v₂}, a basis of R².

The matrix B rotates the plane counterclockwise by 90°.

B
e2 Be1

e1 Be2

To compute Ax, first we multiply by C⁻¹ to find the B-coordinates of x, then we multiply by B, then we multiply by C again. For instance, let x = $\frac32\binom{1}{1}$.

1. We see from the B-coordinate grid below that x = v₁ + v₂. Therefore, $C^{-1}x = [x]_B = \binom{1}{1}$.

2. Multiplying by B rotates by 90°: $B[x]_B = \binom{-1}{1}$.

3. Interpreting $\binom{-1}{1}$ as a B-coordinate vector, we multiply by C to get
\[
Ax = C\begin{pmatrix} -1 \\ 1 \end{pmatrix} = -v_1 + v_2 = \frac12\begin{pmatrix} -5 \\ -1 \end{pmatrix}.
\]
Of course, this vector lies at (−1, 1) on the B-coordinate grid.

[Diagram: x ↦ [x]B by multiplying by C⁻¹; B rotates [x]B by 90°; multiplying by C gives Ax in the usual coordinates.]

Now let x = $\frac12\binom{-1}{-2}$.

1. We see from the B-coordinate grid that x = −½v₁ − v₂. Therefore, $C^{-1}x = [x]_B = \binom{-1/2}{-1}$.

2. Multiplying by B rotates by 90°: $B[x]_B = \binom{1}{-1/2}$.

3. Interpreting $\binom{1}{-1/2}$ as a B-coordinate vector, we multiply by C to get
\[
Ax = C\begin{pmatrix} 1 \\ -1/2 \end{pmatrix} = v_1 - \tfrac12 v_2 = \frac14\begin{pmatrix} 9 \\ 3 \end{pmatrix}.
\]
This vector lies at (1, −½) on the B-coordinate grid.

[Diagram: the same three-step picture for this x: multiply by C⁻¹, rotate by 90° with B, multiply by C to obtain Ax.]

To summarize:

• B rotates counterclockwise around the circle centered at the origin and pass-
ing through e1 and e2 .

• A rotates counterclockwise around the ellipse centered at the origin and pass-
ing through v1 and v2 .

[Figure: B rotates the circle through e1 and e2, while A rotates the ellipse through v1 and v2; C and C⁻¹ translate between the two pictures.]

Use this link to view the online demo

The geometric relationship between the similar matrices A and B acting on R2 . Click
and drag the heads of x and [x]B .

To summarize and generalize the previous example:

A Matrix Similar to a Rotation Matrix. Let
\[
B = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \qquad C = \begin{pmatrix} | & | \\ v_1 & v_2 \\ | & | \end{pmatrix} \qquad A = CBC^{-1},
\]

where C is assumed invertible. Then:

• B rotates the plane by an angle of θ around the circle centered at the


origin and passing through e1 and e2 , in the direction from e1 to e2 .

• A rotates the plane by an angle of θ around the ellipse centered at the


origin and passing through v1 and v2 , in the direction from v1 to v2 .

[Figure: B rotates the circle through e1 and e2 by the angle θ, while A rotates the ellipse through v1 and v2 by the same angle; C and C⁻¹ translate between the two pictures.]

Interactive: Similar 3 × 3 matrices. Consider the matrices
\[
A = \begin{pmatrix} -1 & 0 & 0 \\ -1 & 0 & 2 \\ -1 & 1 & 1 \end{pmatrix} \qquad B = \begin{pmatrix} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 2 \end{pmatrix} \qquad C = \begin{pmatrix} -1 & 1 & 0 \\ 1 & 1 & 1 \\ -1 & 0 & 1 \end{pmatrix}.
\]

Then A = C BC −1 , as one can verify. Let v1 , v2 , v3 be the columns of C, and let


B = {v1 , v2 , v3 }, a basis of R3 . Then A does the same thing as B, except A uses the
B -coordinate system. In other words:

• B scales the e1 , e2 -plane by −1 and the e3 -direction by 2.

• A scales the v1 , v2 -plane by −1 and the v3 -direction by 2.

Use this link to view the online demo

The geometric relationship between the similar matrices A and B acting on R3 . Click
and drag the heads of x and [x]B .

6.3.3 Eigenvalues of Similar Matrices


Since similar matrices behave in the same way with respect to different coordinate
systems, we should expect their eigenvalues and eigenvectors to be closely related.

Fact. Similar matrices have the same characteristic polynomial.

Proof. Suppose that A = C BC −1 , where A, B, C are n × n matrices. We calculate

A − λI n = C BC −1 − λC C −1 = C BC −1 − CλC −1
= C BC −1 − CλI n C −1 = C(B − λI n )C −1 .

Therefore,

det(A − λI n ) = det(C(B − λI n )C −1 ) = det(C) det(B − λI n ) det(C)−1 = det(B − λI n ).

Here we have used the multiplicativity property in Section 5.1 and its corollary in
Section 5.1.

Since the eigenvalues of a matrix are the roots of its characteristic polynomial,
we have shown:

Similar matrices have the same eigenvalues.

By this theorem in Section 6.2, similar matrices also have the same trace and
determinant.
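A randomized numerical illustration of this fact (added here; not part of the original text):

import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((3, 3))
C = rng.standard_normal((3, 3))        # a random C is invertible with probability 1
A = C @ B @ np.linalg.inv(C)

print(np.poly(A))                      # characteristic polynomial coefficients of A...
print(np.poly(B))                      # ...agree with those of B up to round-off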

Note. The converse of the fact is false. Indeed, the matrices
\[
\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} \quad\text{and}\quad \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}
\]
both have characteristic polynomial f(λ) = (λ − 1)², but they are not similar, because the only matrix that is similar to I2 is I2 itself.

Given that similar matrices have the same eigenvalues, one might guess that
they have the same eigenvectors as well. Upon reflection, this is not what one
should expect: indeed, the eigenvectors should only match up after changing from
one coordinate system to another. This is the content of the next fact, remembering
that C and C −1 change between the usual coordinates and the B-coordinates.

Fact. Suppose that A = C BC −1 . Then

v is an eigenvector of A =⇒ C −1 v is an eigenvector of B
v is an eigenvector of B =⇒ C v is an eigenvector of A.

The eigenvalues of v / C −1 v or v / C v are the same.

Proof. Suppose that v is an eigenvector of A with eigenvalue λ, so that Av = λv.


Then
B(C −1 v) = C −1 (C BC −1 v) = C −1 (Av) = C −1 λv = λ(C −1 v),
so that C −1 v is an eigenvector of B with eigenvalue λ. Likewise if v is an eigen-
vector of B with eigenvalue λ, then Bv = λv, and we have

A(C v) = (C BC −1 )C v = C Bv = C(λv) = λ(C v),

so that C v is an eigenvector of A with eigenvalue λ.

If A = C BC −1 , then C −1 takes the λ-eigenspace of A to the λ-eigenspace of B,


and C takes the λ-eigenspace of B to the λ-eigenspace of A.

Example. We continue with the above example: let
\[
A = \begin{pmatrix} 1/2 & 3/2 \\ 3/2 & 1/2 \end{pmatrix} \qquad B = \begin{pmatrix} 2 & 0 \\ 0 & -1 \end{pmatrix} \qquad C = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix},
\]
so A = CBC⁻¹. Let $v_1 = \binom{1}{1}$ and $v_2 = \binom{1}{-1}$, the columns of C. Recall that:

• B scales the e1 -direction by 2 and the e2 -direction by −1.

• A scales the v1 -direction by 2 and the v2 -direction by −1.



This means that the x-axis is the 2-eigenspace of B, and the y-axis is the −1-
eigenspace of B; likewise, the “v1 -axis” is the 2-eigenspace of A, and the “v2 -axis”
is the −1-eigenspace of A. This is consistent with the fact, as multiplication by C
changes e1 into C e1 = v1 and e2 into C e2 = v2 .

[Figure: the coordinate axes (the 2- and −1-eigenspaces of B) are carried by C to the lines through v1 and v2 (the 2- and −1-eigenspaces of A).]
Use this link to view the online demo

The eigenspaces of A are the lines through v1 and v2 . These are the images under C
of the coordinate axes, which are the eigenspaces of B.

Interactive: Another matrix similar to B. Continuing with this example, let
\[
A' = \frac15\begin{pmatrix} -8 & -9 \\ 6 & 13 \end{pmatrix} \qquad B = \begin{pmatrix} 2 & 0 \\ 0 & -1 \end{pmatrix} \qquad C' = \frac12\begin{pmatrix} -1 & -3 \\ 2 & 1 \end{pmatrix},
\]
so A′ = C′B(C′)⁻¹. Let $v_1' = \frac12\binom{-1}{2}$ and $v_2' = \frac12\binom{-3}{1}$, the columns of C′. Then:
 

• B scales the e1 -direction by 2 and the e2 -direction by −1.

• A0 scales the v10 -direction by 2 and the v20 -direction by −1.


As before, the x-axis is the 2-eigenspace of B, and the y-axis is the −1-eigenspace
of B; likewise, the “v10 -axis” is the 2-eigenspace of A0 , and the “v20 -axis” is the −1-
eigenspace of A0 . This is consistent with the fact, as multiplication by C 0 changes
e1 into C 0 e1 = v10 and e2 into C 0 e2 = v20 .
[Figure: the coordinate axes (the eigenspaces of B) are carried by C′ to the lines through v1′ and v2′ (the 2- and −1-eigenspaces of A′).]

Use this link to view the online demo

The eigenspaces of A0 are the lines through v10 and v20 . These are the images under C 0
of the coordinate axes, which are the eigenspaces of B.

Interactive: Similar 3 × 3 matrices. Continuing with this example, let
\[
A = \begin{pmatrix} -1 & 0 & 0 \\ -1 & 0 & 2 \\ -1 & 1 & 1 \end{pmatrix} \qquad B = \begin{pmatrix} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 2 \end{pmatrix} \qquad C = \begin{pmatrix} -1 & 1 & 0 \\ 1 & 1 & 1 \\ -1 & 0 & 1 \end{pmatrix},
\]

so A = C BC −1 . Let v1 , v2 , v3 be the columns of C. Then:

• B scales the e1 , e2 -plane by −1 and the e3 -direction by 2.

• A scales the v1 , v2 -plane by −1 and the v3 -direction by 2.

In other words, the x y-plane is the −1-eigenspace of B, and the z-axis is the 2-
eigenspace of B; likewise, the “v1 , v2 -plane” is the −1-eigenspace of A, and the
“v3 -axis” is the 2-eigenspace of A. This is consistent with the fact, as multiplication
by C changes e1 into C e1 = v1 , e2 into C e2 = v2 , and e3 into C e3 = v3 .

Use this link to view the online demo

The −1-eigenspace of A is the green plane, and the 2-eigenspace of A is the violet line.
These are the images under C of the x y-plane and the z-axis, respectively, which are
the eigenspaces of B.

6.4 Diagonalization

Objectives

1. Learn two main criteria for a matrix to be diagonalizable.

2. Develop a library of examples of matrices that are and are not diagonalizable.

3. Understand what diagonalizability and multiplicity have to say about simi-


larity.

4. Recipes: diagonalize a matrix, quickly compute powers of a matrix by diag-


onalization.

5. Pictures: the geometry of diagonal matrices, why a shear is not diagonaliz-


able.

6. Theorem: the diagonalization theorem (two variants).

7. Vocabulary words: diagonalizable, algebraic multiplicity, geometric mul-


tiplicity.

Diagonal matrices are the easiest kind of matrices to understand: they just
scale the coordinate directions by their diagonal entries. In Section 6.3, we saw
that similar matrices behave in the same way, with respect to different coordinate
systems. Therefore, if a matrix is similar to a diagonal matrix, it is also relatively
easy to understand. This section is devoted to the question: “When is a matrix
similar to a diagonal matrix?”

6.4.1 Diagonalizability
Before answering the above question, first we give it a name.

Definition. An n×n matrix A is diagonalizable if it is similar to a diagonal matrix:


that is, if there exists an invertible n × n matrix C and a diagonal matrix D such
that
A = C DC −1 .

Example. Any diagonal matrix D is diagonalizable because it is similar to itself. For instance,
\[
\begin{pmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{pmatrix} = I_3\begin{pmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{pmatrix} I_3^{-1}.
\]

Example. Most of the examples in Section 6.3 involve diagonalizable matrices:
\[
\begin{pmatrix} -12 & 15 \\ -10 & 13 \end{pmatrix} \text{ is diagonalizable because it equals } \begin{pmatrix} 1 & 3 \\ 1 & 2 \end{pmatrix}\begin{pmatrix} 3 & 0 \\ 0 & -2 \end{pmatrix}\begin{pmatrix} 1 & 3 \\ 1 & 2 \end{pmatrix}^{-1}
\]
\[
\begin{pmatrix} 1/2 & 3/2 \\ 3/2 & 1/2 \end{pmatrix} \text{ is diagonalizable because it equals } \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}\begin{pmatrix} 2 & 0 \\ 0 & -1 \end{pmatrix}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}^{-1}
\]
\[
\frac15\begin{pmatrix} -8 & -9 \\ 6 & 13 \end{pmatrix} \text{ is diagonalizable because it equals } \frac12\begin{pmatrix} -1 & -3 \\ 2 & 1 \end{pmatrix}\begin{pmatrix} 2 & 0 \\ 0 & -1 \end{pmatrix}\left(\frac12\begin{pmatrix} -1 & -3 \\ 2 & 1 \end{pmatrix}\right)^{-1}
\]
\[
\begin{pmatrix} -1 & 0 & 0 \\ -1 & 0 & 2 \\ -1 & 1 & 1 \end{pmatrix} \text{ is diagonalizable because it equals } \begin{pmatrix} -1 & 1 & 0 \\ 1 & 1 & 1 \\ -1 & 0 & 1 \end{pmatrix}\begin{pmatrix} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 2 \end{pmatrix}\begin{pmatrix} -1 & 1 & 0 \\ 1 & 1 & 1 \\ -1 & 0 & 1 \end{pmatrix}^{-1}.
\]

Example. If a matrix A is diagonalizable, and if B is similar to A, then B is diago-


nalizable as well by proposition in Section 6.3.

Powers of diagonalizable matrices Multiplying diagonal matrices together just multiplies their diagonal entries:
\[
\begin{pmatrix} x_1 & 0 & 0 \\ 0 & x_2 & 0 \\ 0 & 0 & x_3 \end{pmatrix}\begin{pmatrix} y_1 & 0 & 0 \\ 0 & y_2 & 0 \\ 0 & 0 & y_3 \end{pmatrix} = \begin{pmatrix} x_1y_1 & 0 & 0 \\ 0 & x_2y_2 & 0 \\ 0 & 0 & x_3y_3 \end{pmatrix}.
\]

Therefore, it is easy to take powers of a diagonal matrix:
\[
\begin{pmatrix} x & 0 & 0 \\ 0 & y & 0 \\ 0 & 0 & z \end{pmatrix}^n = \begin{pmatrix} x^n & 0 & 0 \\ 0 & y^n & 0 \\ 0 & 0 & z^n \end{pmatrix}.
\]

By this fact in Section 6.3, if A = C DC −1 then An = C D n C −1 , so it is also easy to


take powers of diagonalizable matrices. This will be very important in applications
to difference equations in Section 6.6.

Recipe: Compute powers of a diagonalizable matrix. If A = C DC −1 , where


D is a diagonal matrix, then A^n = C D^n C^{-1}:
\[
A = C\begin{pmatrix} x & 0 & 0 \\ 0 & y & 0 \\ 0 & 0 & z \end{pmatrix}C^{-1} \quad\Longrightarrow\quad A^n = C\begin{pmatrix} x^n & 0 & 0 \\ 0 & y^n & 0 \\ 0 & 0 & z^n \end{pmatrix}C^{-1}.
\]

Example. Let
\[
A = \begin{pmatrix} 1/2 & 3/2 \\ 3/2 & 1/2 \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}\begin{pmatrix} 2 & 0 \\ 0 & -1 \end{pmatrix}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}^{-1}.
\]
Find a closed formula for A^n, where n is any positive whole number.

Solution. We have
\[
A^n = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}\begin{pmatrix} 2 & 0 \\ 0 & -1 \end{pmatrix}^n\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}^{-1}
= \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}\begin{pmatrix} 2^n & 0 \\ 0 & (-1)^n \end{pmatrix}\frac{1}{-2}\begin{pmatrix} -1 & -1 \\ -1 & 1 \end{pmatrix}
= \begin{pmatrix} 2^n & (-1)^n \\ 2^n & (-1)^{n+1} \end{pmatrix}\frac12\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}
= \frac12\begin{pmatrix} 2^n + (-1)^n & 2^n + (-1)^{n+1} \\ 2^n + (-1)^{n+1} & 2^n + (-1)^n \end{pmatrix},
\]
where we used (−1)^{n+2} = (−1)²(−1)^n = (−1)^n.
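One can confirm the closed formula numerically; this check is an addition, not part of the original text.

import numpy as np

A = np.array([[0.5, 1.5],
              [1.5, 0.5]])

def closed_form(n):
    return 0.5 * np.array([[2**n + (-1)**n,       2**n + (-1)**(n + 1)],
                           [2**n + (-1)**(n + 1), 2**n + (-1)**n]])

for n in (1, 2, 5, 10):
    assert np.allclose(np.linalg.matrix_power(A, n), closed_form(n))
print("closed formula agrees with direct matrix powers")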

Now we come to the primary criterion for diagonalizability. It shows that di-
agonalizability is an eigenvalue problem.

Diagonalization Theorem. An n × n matrix A is diagonalizable if and only if A has


n linearly independent eigenvectors.
In this case, A = C DC −1 for

\[
C = \begin{pmatrix} | & | & & | \\ v_1 & v_2 & \cdots & v_n \\ | & | & & | \end{pmatrix} \qquad D = \begin{pmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{pmatrix},
\]

where v1 , v2 , . . . , vn are linearly independent eigenvectors, and λ1 , λ2 , . . . , λn are the


corresponding eigenvalues, in the same order.
Proof. First suppose that A has n linearly independent eigenvectors v1 , v2 , . . . , vn ,
with eigenvalues λ1 , λ2 , . . . , λn . Define C as above, so C is invertible by the invert-
ible matrix theorem in Section 6.1. Let D = C −1 AC, so A = C DC −1 . Multiplying
by standard coordinate vectors picks out the columns of C: we have C ei = vi , so
ei = C −1 vi . We multiply by the standard coordinate vectors to find the columns of
D:
Dei = C −1 AC ei = C −1 Avi = C −1 λi vi = λi C −1 vi = λi ei .
Therefore, the columns of D are multiples of the standard coordinate vectors:
\[
D = \begin{pmatrix} \lambda_1 & 0 & \cdots & 0 & 0 \\ 0 & \lambda_2 & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & \lambda_{n-1} & 0 \\ 0 & 0 & \cdots & 0 & \lambda_n \end{pmatrix}.
\]

Now suppose that A = C DC −1 , where C has columns v1 , v2 , . . . , vn , and D is


diagonal with diagonal entries λ1 , λ2 , . . . , λn . Since C is invertible, its columns
are linearly independent. We have to show that vi is an eigenvector of A with
eigenvalue λi . We know that the standard coordinate vector ei is an eigenvector
of D with eigenvalue λi , so:

Avi = C DC −1 vi = C Dei = Cλi ei = λi C ei = λi vi .

By this fact in Section 6.1, if an n × n matrix A has n distinct eigenvalues


λ1 , λ2 , . . . , λn , then a choice of corresponding eigenvectors v1 , v2 , . . . , vn is auto-
matically linearly independent.

An n × n matrix with n distinct eigenvalues is diagonalizable.

Easy Example. Apply the diagonalization theorem to the matrix


 
1 0 0
A = 0 2 0.
0 0 3

Solution. This diagonal matrix is in particular upper-triangular, so its eigenvalues are the diagonal entries 1, 2, 3. The standard coordinate vectors are eigenvectors of a diagonal matrix:
\[
\begin{pmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{pmatrix}\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} = 1\cdot\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} \qquad \begin{pmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{pmatrix}\begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} = 2\cdot\begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} \qquad \begin{pmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{pmatrix}\begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} = 3\cdot\begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}.
\]

Therefore, the diagonalization theorem says that A = C DC −1 , where the columns


of C are the standard coordinate vectors, and the D is the diagonal matrix with
entries 1, 2, 3:
     −1
1 0 0 1 0 0 1 0 0 1 0 0
0 2 0 = 0 1 00 2 00 1 0 .
0 0 3 0 0 1 0 0 3 0 0 1

This just tells us that A is similar to itself.


Actually, the diagonalization theorem is not completely trivial even for diagonal
matrices. If we put our eigenvalues in the order 3, 2, 1, then the corresponding
eigenvectors are e3 , e2 , e1 , so we also have that A = C 0 D0 (C 0 )−1 , where C 0 is the
matrix with columns e3 , e2 , e1 , and D0 is the diagonal matrix with entries 3, 2, 1:
     −1
1 0 0 0 0 1 3 0 0 0 0 1
0 2 0 = 0 1 00 2 00 1 0 .
0 0 3 1 0 0 0 0 1 1 0 0

In particular, the matrices


   
1 0 0 3 0 0
0 2 0 and 0 2 0
0 0 3 0 0 1

are similar to each other.

Non-Uniqueness of Diagonalization. We saw in the above example that changing


the order of the eigenvalues and eigenvectors produces a different diagonalization
of the same matrix. There are generally many different ways to diagonalize a
matrix, corresponding to different orderings of the eigenvalues of that matrix. The
important thing is that the eigenvalues and eigenvectors have to be listed in the

same order.
   −1
| | | λ1 0 0 | | |
A =  v1 v2 v3   0 λ2 0   v1 v2 v3 
| | | 0 0 λ3 | | |
   −1
| | | λ3 0 0 | | |
=  v3 v2 v1   0 λ2 0   v3 v2 v1  .
| | | 0 0 λ1 | | |

There are other ways of finding different diagonalizations of the same matrix.
For instance, you can scale one of the eigenvectors by a constant c:
   −1
| | | λ1 0 0 | | |
A = v1
 v2 v3   0 λ2 0   v1 v2 v3 
| | | 0 0 λ3 | | |
   −1
| | | λ1 0 0 | | |
=  cv1 v2 v3   0 λ2 0   cv1 v2 v3  ,
| | | 0 0 λ3 | | |

you can find a different basis entirely for an eigenspace of dimension at least 2,
etc.

Example (A diagonalizable 2 × 2 matrix). Diagonalize the matrix


 ‹
1/2 3/2
A= .
3/2 1/2

Solution. We need to find the eigenvalues and eigenvectors of A. First we com-


pute the characteristic polynomial:

f (λ) = λ2 − Tr(A)λ + det(A) = λ2 − λ − 2 = (λ + 1)(λ − 2).

Therefore, the eigenvalues are −1 and 2. We need to compute eigenvectors for


each eigenvalue. We start with λ1 = −1:
\[
(A + I_2)v = 0 \iff \begin{pmatrix} 3/2 & 3/2 \\ 3/2 & 3/2 \end{pmatrix}v = 0 \;\xrightarrow{\text{RREF}}\; \begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix}v = 0.
\]
The parametric form is x = −y, so $v_1 = \binom{-1}{1}$ is an eigenvector with eigenvalue λ₁.
Now we find an eigenvector with eigenvalue λ2 = 2:
\[
(A - 2I_2)v = 0 \iff \begin{pmatrix} -3/2 & 3/2 \\ 3/2 & -3/2 \end{pmatrix}v = 0 \;\xrightarrow{\text{RREF}}\; \begin{pmatrix} 1 & -1 \\ 0 & 0 \end{pmatrix}v = 0.
\]
The parametric form is x = y, so $v_2 = \binom{1}{1}$ is an eigenvector with eigenvalue 2.

The eigenvectors v1 , v2 are linearly independent, so the diagonalization theo-


rem says that
\[
A = CDC^{-1} \quad\text{for}\quad C = \begin{pmatrix} -1 & 1 \\ 1 & 1 \end{pmatrix} \quad D = \begin{pmatrix} -1 & 0 \\ 0 & 2 \end{pmatrix}.
\]
Alternatively, if we choose 2 as our first eigenvalue, then
\[
A = C'D'(C')^{-1} \quad\text{for}\quad C' = \begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix} \quad D' = \begin{pmatrix} 2 & 0 \\ 0 & -1 \end{pmatrix}.
\]

Use this link to view the online demo

The green line is the −1-eigenspace of A, and the violet line is the 2-eigenspace. There
are two linearly independent (noncollinear) eigenvectors visible in the picture: choose
any nonzero vector on the green line, and any nonzero vector on the violet line.

Example (A diagonalizable 2 × 2 matrix with a zero eigenvalue). Diagonalize the matrix
\[
A = \begin{pmatrix} 2/3 & -4/3 \\ -2/3 & 4/3 \end{pmatrix}.
\]

Solution. We need to find the eigenvalues and eigenvectors of A. First we com-


pute the characteristic polynomial:

f (λ) = λ2 − Tr(A)λ + det(A) = λ2 − 2λ = λ(λ − 2).

Therefore, the eigenvalues are 0 and 2. We need to compute eigenvectors for each
eigenvalue. We start with λ1 = 0:
\[
(A - 0I_2)v = 0 \iff \begin{pmatrix} 2/3 & -4/3 \\ -2/3 & 4/3 \end{pmatrix}v = 0 \;\xrightarrow{\text{RREF}}\; \begin{pmatrix} 1 & -2 \\ 0 & 0 \end{pmatrix}v = 0.
\]
The parametric form is x = 2y, so $v_1 = \binom{2}{1}$ is an eigenvector with eigenvalue λ₁. Now we find an eigenvector with eigenvalue λ2 = 2:
\[
(A - 2I_2)v = 0 \iff \begin{pmatrix} -4/3 & -4/3 \\ -2/3 & -2/3 \end{pmatrix}v = 0 \;\xrightarrow{\text{RREF}}\; \begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix}v = 0.
\]
The parametric form is x = −y, so $v_2 = \binom{1}{-1}$ is an eigenvector with eigenvalue 2.
The eigenvectors v1 , v2 are linearly independent, so the diagonalization theo-
rem says that
\[
A = CDC^{-1} \quad\text{for}\quad C = \begin{pmatrix} 2 & 1 \\ 1 & -1 \end{pmatrix} \quad D = \begin{pmatrix} 0 & 0 \\ 0 & 2 \end{pmatrix}.
\]
Alternatively, if we choose 2 as our first eigenvalue, then
\[
A = C'D'(C')^{-1} \quad\text{for}\quad C' = \begin{pmatrix} 1 & 2 \\ -1 & 1 \end{pmatrix} \quad D' = \begin{pmatrix} 2 & 0 \\ 0 & 0 \end{pmatrix}.
\]

In the above example, the (non-invertible) matrix $A = \frac13\begin{pmatrix} 2 & -4 \\ -2 & 4 \end{pmatrix}$ is similar to the diagonal matrix $D = \begin{pmatrix} 0 & 0 \\ 0 & 2 \end{pmatrix}$. Since A is not invertible, zero is an eigenvalue by the invertible matrix theorem, so one of the diagonal entries of D is necessarily zero. Also see this example below.

zero. Also see this example below.
Example (A diagonalizable 3 × 3 matrix). Diagonalize the matrix
 
4 −3 0
A =  2 −1 0  .
1 −1 1

Solution. We need to find the eigenvalues and eigenvectors of A. First we com-


pute the characteristic polynomial by expanding cofactors along the third column:
\[
f(\lambda) = \det(A - \lambda I_3) = (1-\lambda)\det\left(\begin{pmatrix} 4 & -3 \\ 2 & -1 \end{pmatrix} - \lambda I_2\right) = (1-\lambda)(\lambda^2 - 3\lambda + 2) = -(\lambda-1)^2(\lambda-2).
\]

Therefore, the eigenvalues are 1 and 2. We need to compute eigenvectors for each
eigenvalue. We start with λ1 = 1:
   
3 −3 0 1 −1 0
RREF
(A − I3 )v = 0 ⇐⇒  2 −2 0  v = 0 −−→  0 0 0  v = 0.
1 −1 0 0 0 0

The parametric vector form is
\[
\begin{cases} x = y \\ y = y \\ z = z \end{cases} \quad\Longrightarrow\quad \begin{pmatrix} x \\ y \\ z \end{pmatrix} = y\begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} + z\begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}.
\]

Hence a basis for the 1-eigenspace is
\[
\mathcal B_1 = \{v_1, v_2\} \quad\text{where}\quad v_1 = \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix},\; v_2 = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}.
\]

Now we compute the eigenspace for λ2 = 2:


   
2 −3 0 1 0 −3
RREF
(A − 2I3 )v = 0 ⇐⇒  2 −3 0  v = 0 −−→  0 1 −2  v = 0
1 −1 −1 0 0 0

The parametric form is x = 3z, y = 2z, so an eigenvector with eigenvalue 2 is
\[
v_3 = \begin{pmatrix} 3 \\ 2 \\ 1 \end{pmatrix}.
\]

The eigenvectors v1 , v2 , v3 are linearly independent: v1 , v2 form a basis for the


1-eigenspace, and v3 is not contained in the 1-eigenspace because its eigenvalue
is 2. Therefore, the diagonalization theorem says that
\[
A = CDC^{-1} \quad\text{for}\quad C = \begin{pmatrix} 1 & 0 & 3 \\ 1 & 0 & 2 \\ 0 & 1 & 1 \end{pmatrix} \quad D = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 2 \end{pmatrix}.
\]

Use this link to view the online demo

The green plane is the 1-eigenspace of A, and the violet line is the 2-eigenspace. There
are three linearly independent eigenvectors visible in the picture: choose any two
noncollinear vectors on the green plane, and any nonzero vector on the violet line.

Here is the procedure we used in the above examples.

Recipe: Diagonalization. Let A be an n × n matrix. To diagonalize A:

1. Find the eigenvalues of A using the characteristic polynomial.

2. For each eigenvalue λ of A, compute a basis Bλ for the λ-eigenspace.

3. If there are fewer than n total vectors in all of the eigenspace bases Bλ ,
then the matrix is not diagonalizable.

4. Otherwise, the n vectors v1 , v2 , . . . , vn in the eigenspace bases are linearly


independent, and A = C DC −1 for

\[
C = \begin{pmatrix} | & | & & | \\ v_1 & v_2 & \cdots & v_n \\ | & | & & | \end{pmatrix} \quad\text{and}\quad D = \begin{pmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{pmatrix},
\]
where λi is the eigenvalue for vi .

We will justify the linear independence assertion in part 4 in the proof of this
theorem below.
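Numerically, the recipe is essentially what numpy.linalg.eig does. The sketch below is an addition to the text (not part of the original); it diagonalizes the 3 × 3 matrix from the earlier example, and the rank test is our way of checking for n independent eigenvectors in floating point.

import numpy as np

A = np.array([[4.0, -3.0, 0.0],
              [2.0, -1.0, 0.0],
              [1.0, -1.0, 1.0]])

eigenvalues, C = np.linalg.eig(A)      # columns of C are eigenvectors
D = np.diag(eigenvalues)

if np.linalg.matrix_rank(C) == A.shape[0]:            # n independent eigenvectors?
    assert np.allclose(A, C @ D @ np.linalg.inv(C))   # then A = C D C^{-1}
    print(np.round(eigenvalues, 6))                   # 1, 1, 2 in some order
else:
    print("fewer than n independent eigenvectors were found")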
Example (A shear is not diagonalizable). Let
 ‹
1 1
A= ,
0 1

so T (x) = Ax is a shear. The characteristic polynomial of A is f (λ) = (λ − 1)2 , so


the only eigenvalue of A is 1. We compute the 1-eigenspace:
 ‹ ‹
0 1 x
(A − I2 )v = 0 ⇐⇒ = 0 ⇐⇒ y = 0.
0 0 y

In other words, the 1-eigenspace is exactly the x-axis, so all of the eigenvectors
of A lie on the x-axis. It follows that A does not admit two linearly independent
eigenvectors, so by the diagonalization theorem, it is not diagonalizable.
In this example in Section 6.1, we studied the eigenvalues of a shear geomet-
rically; we reproduce the interactive demo here.

Use this link to view the online demo

All eigenvectors of a shear lie on the x-axis.

Example (A projection is diagonalizable). Let L be a line through the origin in


R2 , and define T : R2 → R2 to be the transformation that sends a vector x to the
closest point on L to x, as in the picture below.

[Figure: T(x) is the closest point on the line L to x.]

This is an example of an orthogonal projection. We will see in Section 7.3 that T


is a linear transformation; let A be the matrix for T . Any vector on L is not moved
by T because it is the closest point on L to itself: hence it is an eigenvector of A
with eigenvalue 1. Let L ⊥ be the line perpendicular to L and passing through the
origin. Any vector x on L ⊥ is closest to the zero vector on L, so a (nonzero) such
vector is an eigenvector of A with eigenvalue 0. (See this example in Section 6.1
for a special case.) Since A has two distinct eigenvalues, it is diagonalizable; in
fact, we know from the diagonalization theorem that A is similar to the matrix $\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}$.
Note that we never had to do any algebra! We know that A is diagonalizable
for geometric reasons.

Use this link to view the online demo

The line L (violet) is the 1-eigenspace of A, and L ⊥ (green) is the 0-eigenspace. Since
there are linearly independent eigenvectors, we know that A is diagonalizable.

Example (A non-diagonalizable 3 × 3 matrix). Let


 
1 1 0
A = 0 1 0.
0 0 2

The characteristic polynomial of A is f (λ) = −(λ − 1)2 (λ − 2), so the eigenvalues


of A are 1 and 2. We compute the 1-eigenspace:
  
0 1 0 x
(A − I3 )v = 0 ⇐⇒  0 0 0   y  = 0 ⇐⇒ y = z = 0.
0 0 2 z

In other words, the 1-eigenspace is the x-axis. Similarly,


  
−1 1 0 x
(A − 2I3 )v = 0 ⇐⇒  0 −1 0   y  = 0 ⇐⇒ x = y = 0,
0 0 0 z

so the 2-eigenspace is the z-axis. In particular, all eigenvectors of A lie on the xz-
plane, so there do not exist three linearly independent eigenvectors of A. By the
diagonalization theorem, the matrix A is not diagonalizable.
Notice that A contains a 2 × 2 block on its diagonal that looks like a shear:
 
1 1 0
A = 0 1 0.
0 0 2

This makes one suspect that such a matrix is not diagonalizable.

Use this link to view the online demo

All eigenvectors of A lie on the x- and z-axes.

Example (A rotation matrix). Let


 ‹
0 −1
A= ,
1 0

so T (x) = Ax is the linear transformation that rotates counterclockwise by 90◦ .


We saw in this example in Section 6.1 that A does not have any eigenvectors at all.
It follows that A is not diagonalizable.

Use this link to view the online demo

This rotation matrix has no eigenvectors.

The characteristic polynomial of A is f (λ) = λ2 + 1, which of course does


not have any real roots. If we allow complex numbers, however, then f has two roots, namely, ±i, where i = √−1. Hence the matrix is diagonalizable if we allow
ourselves to use complex numbers. We will treat this topic in detail in Section 6.5.

The following point is often a source of confusion.

Diagonalizability has nothing to do with invertibility. Of the following ma-


trices, the first is diagonalizable and invertible, the second is diagonalizable
but not invertible, the third is invertible but not diagonalizable, and the fourth
is neither invertible nor diagonalizable, as the reader can verify:
\[
\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \qquad \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \qquad \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} \qquad \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}.
\]

Remark (Non-diagonalizable 2 × 2 matrices with an eigenvalue). As in the above


example, one can check that the matrix

\[
A_\lambda = \begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix}
\]

is not diagonalizable for any number λ. We claim that any non-diagonalizable


2 × 2 matrix B with a real eigenvalue λ is similar to Aλ . Therefore, up to similarity,
these are the only such examples.
To prove this, let B be such a matrix. Let v1 be an eigenvector with eigenvalue
λ, and let v2 be any vector in R2 that is not collinear with v1 , so that {v1 , v2 } forms
a basis for R2 . Let C be the matrix with columns v1 , v2 , and consider A = C −1 BC.
We have C e1 = v1 and C e2 = v2 , so C −1 v1 = e1 and C −1 v2 = e2 . We can compute
the first column of A as follows:

Ae1 = C −1 BC e1 = C −1 Bv1 = C −1 λv1 = λC −1 v1 = λe1 .

Therefore, A has the form
\[
A = \begin{pmatrix} \lambda & b \\ 0 & d \end{pmatrix}.
\]

Since A is similar to B, it also has only one eigenvalue λ; since A is upper-triangular,


this implies d = λ, so
\[
A = \begin{pmatrix} \lambda & b \\ 0 & \lambda \end{pmatrix}.
\]

As B is not diagonalizable, we know A is not diagonal (B is similar to A), so b ≠ 0.


Now we observe that
\[
\begin{pmatrix} 1/b & 0 \\ 0 & 1 \end{pmatrix}\begin{pmatrix} \lambda & b \\ 0 & \lambda \end{pmatrix}\begin{pmatrix} 1/b & 0 \\ 0 & 1 \end{pmatrix}^{-1} = \begin{pmatrix} \lambda/b & 1 \\ 0 & \lambda \end{pmatrix}\begin{pmatrix} b & 0 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix} = A_\lambda.
\]

We have shown that B is similar to A, which is similar to Aλ , so B is similar to Aλ


by the transitivity property of similar matrices.

6.4.2 Algebraic and Geometric Multiplicity


In this subsection, we give a variant of the diagonalization theorem that provides
another criterion for diagonalizability. It is stated in the language of multiplicities
of eigenvalues.
Recall that the multiplicity of a root λ0 of a polynomial f (λ) is equal to the
number of factors of λ − λ0 that divide f (λ). For instance, in the polynomial

f (λ) = −λ3 + 4λ2 − 5λ + 2 = −(λ − 1)2 (λ − 2),

the root λ0 = 2 has multiplicity 1, and the root λ0 = 1 has multiplicity 2.

Definition. Let A be an n × n matrix, and let λ be an eigenvalue of A.

1. The algebraic multiplicity of λ is its multiplicity as a root of the character-


istic polynomial of A.

2. The geometric multiplicity of λ is the dimension of the λ-eigenspace.

Since the λ-eigenspace of A is Nul(A− λI n ), its dimension is the number of free


variables in the system of equations (A − λI n )x = 0, i.e., the number of columns
without pivots in the matrix A − λI n .
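The two multiplicities are easy to compute by machine; the following SymPy sketch is an addition (not part of the original text) and uses the shear matrix discussed next.

from sympy import Matrix, eye, symbols, roots

A = Matrix([[1, 1],
            [0, 1]])                       # the shear matrix

lam = symbols('lam')
for eigenvalue, alg_mult in roots(A.charpoly(lam).as_expr(), lam).items():
    geo_mult = len((A - eigenvalue * eye(2)).nullspace())
    print(eigenvalue, alg_mult, geo_mult)  # eigenvalue 1: algebraic 2, geometric 1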

Example. The shear matrix  ‹


1 1
A=
0 1
has only one eigenvalue λ = 1. The characteristic polynomial of A is f (λ) =
(λ − 1)2 , so 1 has algebraic multiplicity 2, as it is a double root of f . On the other
hand, we showed in this example that the 1-eigenspace of A is the x-axis, so the
geometric multiplicity of 1 is equal to 1. This matrix is not diagonalizable.

Use this link to view the online demo

Eigenspace of the shear matrix, with multiplicities.

The identity matrix  ‹


1 0
I2 =
0 1
also has characteristic polynomial (λ−1)2 , so the eigenvalue 1 has algebraic multi-
plicity 2. Since every nonzero vector in R2 is an eigenvector of I2 with eigenvalue
1, the 1-eigenspace is all of R2 , so the geometric multiplicity is 2 as well. This
matrix is diagonal.

Use this link to view the online demo

Eigenspace of the identity matrix, with multiplicities.



Example. Continuing with this example, let


 
4 −3 0
A =  2 −1 0  .
1 −1 1

The characteristic polynomial is f (λ) = −(λ − 1)2 (λ − 2), so that 1 and 2 are the
eigenvalues, with algebraic multiplicities 2 and 1, respectively. We computed that
the 1-eigenspace is a plane and the 2-eigenspace is a line, so that 1 and 2 also have
geometric multiplicities 2 and 1, respectively. This matrix is diagonalizable.

Use this link to view the online demo

The green plane is the 1-eigenspace of A, and the violet line is the 2-eigenspace. Hence
the geometric multiplicity of the 1-eigenspace is 2, and the geometric multiplicity of
the 2-eigenspace is 1.

In this example, we saw that the matrix


 
1 1 0
A = 0 1 0
0 0 2

also has characteristic polynomial f (λ) = −(λ − 1)2 (λ − 2), so that 1 and 2 are
the eigenvalues, with algebraic multiplicities 2 and 1, respectively. In this case,
however, both eigenspaces are lines, so that both eigenvalues have geometric mul-
tiplicity 1. This matrix is not diagonalizable.

Use this link to view the online demo

Both eigenspaces of A are lines, so they both have geometric multiplicity 1.

We saw in the above examples that the algebraic and geometric multiplicities
need not coincide. However, they do satisfy the following fundamental inequality,
the proof of which is beyond the scope of this text.
Theorem (Algebraic and Geometric Multiplicity). Let A be a square matrix and let
λ be an eigenvalue of A. Then
1 ≤ (the geometric multiplicity of λ) ≤ (the algebraic multiplicity of λ).
In particular, if the algebraic multiplicity of λ is equal to 1, then so is the geo-
metric multiplicity.

If A has an eigenvalue λ with algebraic multiplicity 1, then the λ-eigenspace


is a line.

We can use the theorem to give another criterion for diagonalizability (in ad-
dition to the diagonalization theorem).

Diagonalization Theorem, Variant. Let A be an n × n matrix. The following are


equivalent:

1. A is diagonalizable.

2. The sum of the geometric multiplicities of the eigenvalues of A is equal to n.

3. The sum of the algebraic multiplicities of the eigenvalues of A is equal to n, and


the geometric multiplicity equals the algebraic multiplicity of each eigenvalue.

Proof. We will show 1 =⇒ 2 =⇒ 3 =⇒ 1. First suppose that A is diagonalizable.


Then A has n linearly independent eigenvectors v1 , v2 , . . . , vn . This implies that the
sum of the geometric multiplicities is at least n: for instance, if v1 , v2 , v3 have the
same eigenvalue λ, then the geometric multiplicity of λ is at least 3 (as the λ-
eigenspace contains three linearly independent vectors), and so on. But the sum
of the algebraic multiplicities is greater than or equal to the sum of the geometric
multiplicities by the theorem, and the sum of the algebraic multiplicities is at most
n because the characteristic polynomial has degree n. Therefore, the sum of the
geometric multiplicities equals n.
Now suppose that the sum of the geometric multiplicities equals n. As above,
this forces the sum of the algebraic multiplicities to equal n as well. As the algebraic
multiplicities are all greater than or equal to the geometric multiplicities in any
case, this implies that they are in fact equal.
Finally, suppose that the third condition is satisfied. Then the sum of the geo-
metric multiplicities equals n. Suppose that the distinct eigenvalues are λ1 , λ2 , . . . , λk ,
and that Bi is a basis for the λi -eigenspace, which we call Vi . We claim that the col-
lection B = {v1 , v2 , . . . , vn } of all vectors in all of the eigenspace bases Bi is linearly
independent. Consider the vector equation

0 = c1 v1 + c2 v2 + · · · + cn vn .

Grouping the eigenvectors with the same eigenvalues, this sum has the form

0 = (something in V1 ) + (something in V2 ) + · · · + (something in Vk ).

Since eigenvectors with distinct eigenvalues are linearly independent, each “some-
thing in Vi ” is equal to zero. But this implies that all coefficients c1 , c2 , . . . , cn are
equal to zero, since the vectors in each Bi are linearly independent. Therefore, A
has n linearly independent eigenvectors, so it is diagonalizable.

The first part of the third statement simply says that the characteristic poly-
nomial of A factors completely into linear polynomials over the real numbers: in
other words, there are no complex (non-real) roots. The second part of the third
statement says in particular that for any diagonalizable matrix, the algebraic and
geometric multiplicities coincide.

Let A be a square matrix and let λ be an eigenvalue of A. If the algebraic


multiplicity of λ does not equal the geometric multiplicity, then A is not diag-
onalizable.

The examples at the beginning of this subsection illustrate the theorem. Here
we give some general consequences for diagonalizability of 2×2 and 3×3 matrices.

Diagonalizability of 2 × 2 Matrices. Let A be a 2 × 2 matrix. There are four cases:

1. A has two different eigenvalues. In this case, each eigenvalue has algebraic
and geometric multiplicity equal to one. This implies A is diagonalizable.
For example:  ‹
1 7
A= .
0 2

2. A has one eigenvalue λ of algebraic and geometric multiplicity 2. To say


that the geometric multiplicity is 2 means that Nul(A − λI2 ) = R2 , i.e., that
every vector in R2 is in the null space of A − λI2 . This implies that A − λI2
is the zero matrix, so that A is the diagonal matrix λI2 . In particular, A is
diagonalizable. For example:
 ‹
1 0
A= .
0 1

3. A has one eigenvalue λ of algebraic multiplicity 2 and geometric multiplicity


1. In this case, A is not diagonalizable, by part 3 of the theorem. For example,
a shear:  ‹
1 1
A= .
0 1

4. A has no real eigenvalues. This happens when the characteristic polynomial has


no real roots. In particular, A is not diagonalizable. For example, a rotation:
 ‹
1 −1
A= .
1 1

Diagonalizability of 3 × 3 Matrices. Let A be a 3 × 3 matrix. We can analyze the


diagonalizability of A on a case-by-case basis, as in the previous example.

1. A has three different eigenvalues. In this case, each eigenvalue has algebraic
and geometric multiplicity equal to one. This implies A is diagonalizable.
For example:  
1 7 4
A = 0 2 3 .
0 0 −1

2. A has two distinct eigenvalues λ1 , λ2 . In this case, one has algebraic mul-
tiplicity one and the other has algebraic multiplicity two; after reordering,
we can assume λ1 has multiplicity 1 and λ2 has multiplicity 2. This implies
that λ1 has geometric multiplicity 1, so A is diagonalizable if and only if the
λ2 -eigenspace is a plane. For example:
 
1 7 4
A = 0 2 0.
0 0 2

On the other hand, if the geometric multiplicity of λ2 is 1, then A is not


diagonalizable. For example:
 
1 7 4
A = 0 2 1.
0 0 2

3. A has only one eigenvalue λ. If the algebraic multiplicity of λ is 1, then A


is not diagonalizable. This happens when the characteristic polynomial has
two complex (non-real) roots. For example:
 
1 −1 0
A = 1 1 0.
0 0 2

Otherwise, the algebraic multiplicity of λ is equal to 3. In this case, if the


geometric multiplicity is 1:
 
1 1 1
A = 0 1 1
0 0 1

or 2:  
1 0 1
A = 0 1 1
0 0 1

then A is not diagonalizable. If the geometric multiplicity is 3, then Nul(A −


λI3 ) = R3 , so that A − λI3 is the zero matrix, and hence A = λI3 . Therefore,
in this case A is necessarily diagonal, as in:
 
1 0 0
A = 0 1 0.
0 0 1

Similarity and multiplicity Recall from this fact in Section 6.3 that similar ma-
trices have the same eigenvalues. It turns out that both notions of multiplicity of
an eigenvalue are preserved under similarity.

Theorem. Let A and B be similar n × n matrices, and let λ be an eigenvalue of A and


B. Then:

1. The algebraic multiplicity of λ is the same for A and B.

2. The geometric multiplicity of λ is the same for A and B.

Proof. Since A and B have the same characteristic polynomial, the multiplicity of
λ as a root of the characteristic polynomial is the same for both matrices, which
proves the first statement. For the second, suppose that A = C BC −1 for an invert-
ible matrix C. By this fact in Section 6.3, the matrix C takes eigenvectors of B to
eigenvectors of A, both with eigenvalue λ.
Let {v1 , v2 , . . . , vk } be a basis of the λ-eigenspace of B. We claim that {C v1 , C v2 , . . . , C vk }
is linearly independent. Suppose that

c1 C v1 + c2 C v2 + · · · + ck C vk = 0.

Regrouping, this means



C c1 v1 + c2 v2 + · · · + ck vk = 0.

By the invertible matrix theorem in Section 6.1, the null space of C is trivial, so
this implies
c1 v1 + c2 v2 + · · · + ck vk = 0.
Since v1 , v2 , . . . , vk are linearly independent, we get c1 = c2 = · · · = ck = 0, as
desired.
By the previous paragraph, the dimension of the λ-eigenspace of A is greater
than or equal to the dimension of the λ-eigenspace of B. By symmetry (B is similar
to A as well), the dimensions are equal, so the geometric multiplicities coincide.

For instance, the four matrices in this example are not similar to each other,
because the algebraic and/or geometric multiplicities of the eigenvalues do not
match up. Or, combined with the above theorem, we see that a diagonalizable
matrix cannot be similar to a non-diagonalizable one, because the algebraic and
geometric multiplicities of such matrices cannot both coincide.

Example. Continuing with this example, let


 
4 −3 0
A =  2 −1 0  .
1 −1 1

This is a diagonalizable matrix that is similar to


   
1 0 0 1 0 3
D =  0 1 0  using the matrix C = 1 0 2.
0 0 2 0 1 1

The 1-eigenspace of D is the x y-plane, and the 2-eigenspace is the z-axis. The
matrix C takes the x y-plane to the 1-eigenspace of A, which is again a plane, and
the z-axis to the 2-eigenspace of A, which is again a line. This shows that the
geometric multiplicities of A and D coincide.

Use this link to view the online demo

The matrix C takes the x y-plane to the 1-eigenspace of A (the grid) and the z-axis to
the 2-eigenspace (the green line).

The converse of the theorem is false: there exist matrices whose eigenvectors
have the same algebraic and geometric multiplicities, but which are not similar.
See the example below. However, for 2 × 2 and 3 × 3 matrices whose characteristic
polynomial has no complex (non-real) roots, the converse of the theorem is true.
(We will handle the case of complex roots in Section 6.5.)
Example (Matrices that look similar but are not). Show that the matrices
A = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{pmatrix} \quad\text{and}\quad B = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{pmatrix}

have the same eigenvalues with the same algebraic and geometric multiplicities,
but are not similar.
Solution. These matrices are upper-triangular. They both have characteristic
polynomial f (λ) = λ4 , so they both have one eigenvalue 0 with algebraic mul-
tiplicity 4. The 0-eigenspace is the null space, which has dimension 2 in each
case because A and B have two columns without pivots. Hence 0 has geometric
multiplicity 2 in each case.
To show that A and B are not similar, we note that
A^2 = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} \quad\text{and}\quad B^2 = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} = 0,

as the reader can verify. If A = C BC −1 then by this important note, we have

A2 = C B 2 C −1 = C0C −1 = 0,

which is not the case.
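
A quick numerical check of this argument (a sketch using numpy, which is an assumption rather than part of the text): both matrices have a two-dimensional null space, but only B squares to zero.

```python
import numpy as np

A = np.array([[0, 0, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 0]])
B = np.array([[0, 1, 0, 0],
              [0, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 0]])

# Geometric multiplicity of the eigenvalue 0: dim Nul = 4 - rank = 2 for both matrices.
print(4 - np.linalg.matrix_rank(A), 4 - np.linalg.matrix_rank(B))   # 2 2

# A^2 is nonzero but B^2 = 0, so A and B cannot be similar.
print(np.any(A @ A), np.any(B @ B))   # True False
```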



On the other hand, suppose that A and B are diagonalizable matrices with the
same characteristic polynomial. Since the geometric multiplicities of the eigenval-
ues coincide with the algebraic multiplicities, which are the same for A and B, we
conclude that there exist n linearly independent eigenvectors of each matrix, all
of which have the same eigenvalues. This shows that A and B are both similar to
the same diagonal matrix. Using the transitivity property of similar matrices, this
shows:

Diagonalizable matrices are similar if and only if they have the same character-
istic polynomial, or equivalently, the same eigenvalues with the same algebraic
multiplicities.

Example. Show that the matrices


   
1 7 2 1 0 0
A =  0 −1 3  and B =  −2 4 0 
0 0 4 −5 −4 −1

are similar.
Solution. Both matrices have the three distinct eigenvalues 1, −1, 4. Hence they
are both diagonalizable, and are similar to the diagonal matrix
 
1 0 0
 0 −1 0  .
0 0 4

By the transitivity property of similar matrices, this implies that A and B are similar
to each other.
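
Since both matrices are triangular, their eigenvalues can be read off the diagonal; the following sketch (numpy again, our assumption rather than the text's) confirms that the two spectra agree.

```python
import numpy as np

A = np.array([[1.,  7., 2.],
              [0., -1., 3.],
              [0.,  0., 4.]])
B = np.array([[ 1.,  0.,  0.],
              [-2.,  4.,  0.],
              [-5., -4., -1.]])

# Both spectra are {-1, 1, 4}, so both matrices are diagonalizable with the
# same diagonal form, hence similar to each other.
print(np.sort(np.linalg.eigvals(A)))   # [-1.  1.  4.]
print(np.sort(np.linalg.eigvals(B)))   # [-1.  1.  4.]
```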
Example (Diagonal matrices with the same entries are similar). Any two diagonal
matrices with the same diagonal entries (possibly in a different order) are similar
to each other. Indeed, such matrices have the same characteristic polynomial. We
saw this phenomenon in this example, where we noted that

\begin{pmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{pmatrix} = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix}\begin{pmatrix} 3 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix}^{-1}.

6.4.3 The Geometry of Diagonalizable Matrices


A diagonal matrix is easy to understand geometrically, as it just scales the coordinate axes:

\begin{pmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{pmatrix}\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} = 1·\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} \qquad \begin{pmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{pmatrix}\begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} = 2·\begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} \qquad \begin{pmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{pmatrix}\begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} = 3·\begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}.
Therefore, we know from Section 6.3 that a diagonalizable matrix simply scales
the “axes” with respect to a different coordinate system. Indeed, if v1 , v2 , . . . , vn
are linearly independent eigenvectors of an n × n matrix A, then A scales the vi -
direction by the eigenvalue λi .
In the following examples, we visualize the action of a diagonalizable matrix
A in terms of its dynamics. In other words, we start with a collection of vectors
(drawn as points), and we see where they move when we multiply them by A
repeatedly.

Example (Eigenvalues |λ1 | > 1, |λ2 | < 1). Describe how the matrix

A = \frac{1}{10}\begin{pmatrix} 11 & 6 \\ 9 & 14 \end{pmatrix}

acts on the plane.


Solution. First we diagonalize A. The characteristic polynomial is

f(λ) = λ² − Tr(A)λ + det(A) = λ² − \frac{5}{2}λ + 1 = (λ − 2)\left(λ − \frac{1}{2}\right).
We compute the 2-eigenspace:

(A − 2I_2)v = 0 \iff \frac{1}{10}\begin{pmatrix} −9 & 6 \\ 9 & −6 \end{pmatrix}v = 0 \xrightarrow{\text{RREF}} \begin{pmatrix} 1 & −2/3 \\ 0 & 0 \end{pmatrix}v = 0.

The parametric form of this equation is x = (2/3)y, so one eigenvector is v_1 = (2/3, 1).
For the 1/2-eigenspace, we have:

\left(A − \tfrac{1}{2}I_2\right)v = 0 \iff \frac{1}{10}\begin{pmatrix} 6 & 6 \\ 9 & 9 \end{pmatrix}v = 0 \xrightarrow{\text{RREF}} \begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix}v = 0.

The parametric form of this equation is x = −y, so an eigenvector is v_2 = (−1, 1). It
follows that A = CDC^{-1}, where

C = \begin{pmatrix} 2/3 & −1 \\ 1 & 1 \end{pmatrix} \qquad D = \begin{pmatrix} 2 & 0 \\ 0 & 1/2 \end{pmatrix}.

The diagonal matrix D scales the x-coordinate by 2 and the y-coordinate by


1/2. Therefore, it moves vectors closer to the x-axis and farther from the y-axis.
In fact, since (2x)( y/2) = x y, multiplication by D does not move a point off of a
hyperbola x y = C.
The matrix A does the same thing, in the v1 , v2 -coordinate system: multiplying
a vector by A scales the v1 -coordinate by 2 and the v2 -coordinate by 1/2. Therefore,
A moves vectors closer to the 2-eigenspace and farther from the 1/2-eigenspace.

Use this link to view the online demo

Dynamics of the matrices A and D. Click “multiply” to multiply the colored points by
D on the left and A on the right.
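
The "moves closer to the 2-eigenspace" behavior can also be seen numerically. The sketch below (Python/numpy, with a starting vector chosen by us) repeatedly multiplies by A and compares the resulting direction with v_1 = (2/3, 1).

```python
import numpy as np

A = np.array([[11.,  6.],
              [ 9., 14.]]) / 10.

v = np.array([1., 0.])            # any starting vector off the 1/2-eigenspace
for _ in range(8):
    v = A @ v                     # each step doubles the v1-component and halves the v2-component

v1 = np.array([2/3, 1.])
print(v / np.linalg.norm(v))      # direction approaches the 2-eigenspace
print(v1 / np.linalg.norm(v1))
```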

Example (Eigenvalues |λ1 | > 1, |λ2 | > 1). Describe how the matrix

A = \frac{1}{5}\begin{pmatrix} 13 & −2 \\ −3 & 12 \end{pmatrix}

acts on the plane.


Solution. First we diagonalize A. The characteristic polynomial is

f (λ) = λ2 − Tr(A)λ + det(A) = λ2 − 5λ + 6 = (λ − 2)(λ − 3).

Next we compute the 2-eigenspace:

(A − 2I_2)v = 0 \iff \frac{1}{5}\begin{pmatrix} 3 & −2 \\ −3 & 2 \end{pmatrix}v = 0 \xrightarrow{\text{RREF}} \begin{pmatrix} 1 & −2/3 \\ 0 & 0 \end{pmatrix}v = 0.

The parametric form of this equation is x = (2/3)y, so one eigenvector is v_1 = (2/3, 1).
For the 3-eigenspace, we have:

(A − 3I_2)v = 0 \iff \frac{1}{5}\begin{pmatrix} −2 & −2 \\ −3 & −3 \end{pmatrix}v = 0 \xrightarrow{\text{RREF}} \begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix}v = 0.

The parametric form of this equation is x = −y, so an eigenvector is v_2 = (−1, 1). It
follows that A = CDC^{-1}, where

C = \begin{pmatrix} 2/3 & −1 \\ 1 & 1 \end{pmatrix} \qquad D = \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix}.

The diagonal matrix D scales the x-coordinate by 2 and the y-coordinate by 3.


Therefore, it moves vectors farther from both the x-axis and the y-axis, but faster
in the y-direction than the x-direction.
The matrix A does the same thing, in the v1 , v2 -coordinate system: multiplying
a vector by A scales the v1 -coordinate by 2 and the v2 -coordinate by 3. Therefore,
A moves vectors farther from the 2-eigenspace and the 3-eigenspace, but faster in
the v2 -direction than the v1 -direction.

Use this link to view the online demo

Dynamics of the matrices A and D. Click “multiply” to multiply the colored points by
D on the left and A on the right.

Example (Eigenvalues |λ1 | < 1, |λ2 | < 1). Describe how the matrix

A′ = \frac{1}{30}\begin{pmatrix} 12 & 2 \\ 3 & 13 \end{pmatrix}

acts on the plane.
Solution. This is the inverse of the matrix A from the previous example. In that
example, we found A = CDC^{-1} for

C = \begin{pmatrix} 2/3 & −1 \\ 1 & 1 \end{pmatrix} \qquad D = \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix}.

Therefore, remembering that taking inverses reverses the order of multiplication,
we have

A′ = A^{-1} = (CDC^{-1})^{-1} = (C^{-1})^{-1}D^{-1}C^{-1} = C\begin{pmatrix} 1/2 & 0 \\ 0 & 1/3 \end{pmatrix}C^{-1}.

The diagonal matrix D^{-1} does the opposite of what D does: it scales the x-coordinate
by 1/2 and the y-coordinate by 1/3. Therefore, it moves vectors closer to both
coordinate axes, but faster in the y-direction. The matrix A′ does the same thing,
but with respect to the v_1, v_2-coordinate system.

Use this link to view the online demo

Dynamics of the matrices A′ and D^{-1}. Click “multiply” to multiply the colored points
by D^{-1} on the left and A′ on the right.

Example (Eigenvalues |λ1 | = 1, |λ2 | < 1). Describe how the matrix

A = \frac{1}{6}\begin{pmatrix} 5 & −1 \\ −2 & 4 \end{pmatrix}
acts on the plane.
Solution. First we diagonalize A. The characteristic polynomial is
f(λ) = λ² − Tr(A)λ + det(A) = λ² − \frac{3}{2}λ + \frac{1}{2} = (λ − 1)\left(λ − \frac{1}{2}\right).
Next we compute the 1-eigenspace:

(A − I_2)v = 0 \iff \frac{1}{6}\begin{pmatrix} −1 & −1 \\ −2 & −2 \end{pmatrix}v = 0 \xrightarrow{\text{RREF}} \begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix}v = 0.

The parametric form of this equation is x = −y, so one eigenvector is v_1 = (−1, 1).
For the 1/2-eigenspace, we have:

\left(A − \tfrac{1}{2}I_2\right)v = 0 \iff \frac{1}{6}\begin{pmatrix} 2 & −1 \\ −2 & 1 \end{pmatrix}v = 0 \xrightarrow{\text{RREF}} \begin{pmatrix} 1 & −1/2 \\ 0 & 0 \end{pmatrix}v = 0.

The parametric form of this equation is x = (1/2)y, so an eigenvector is v_2 = (1/2, 1).
It follows that A = CDC^{-1}, where

C = \begin{pmatrix} −1 & 1/2 \\ 1 & 1 \end{pmatrix} \qquad D = \begin{pmatrix} 1 & 0 \\ 0 & 1/2 \end{pmatrix}.

The diagonal matrix D scales the y-coordinate by 1/2 and does not move the
x-coordinate. Therefore, it simply moves vectors closer to the x-axis along ver-
tical lines. The matrix A does the same thing, in the v1 , v2 -coordinate system:
multiplying a vector by A scales the v2 -coordinate by 1/2 and does not change
the v1 -coordinate. Therefore, A “sucks vectors into the 1-eigenspace” along lines
parallel to v2 .

Use this link to view the online demo

Dynamics of the matrices A and D. Click “multiply” to multiply the colored points by
D on the left and A on the right.

Interactive: A diagonalizable 3 × 3 matrix. The diagonal matrix


 
1/2 0 0
D= 0 2 0 
0 0 3/2

scales the x-coordinate by 1/2, the y-coordinate by 2, and the z-coordinate by 3/2.
Looking straight down at the x y-plane, the points follow parabolic paths taking
them away from the x-axis and toward the y-axis. The z-coordinate is scaled by
3/2, so points fly away from the x y-plane in that direction.
If A = C DC −1 for some invertible matrix C, then A does the same thing as D,
but with respect to the coordinate system defined by the columns of C.

Use this link to view the online demo

Dynamics of the matrices A and D. Click “multiply” to multiply the colored points by
D on the left and A on the right.

6.5 Complex Eigenvalues

Objectives

1. Learn to find complex eigenvalues and eigenvectors of a matrix.

2. Learn to recognize a rotation-scaling matrix, and compute by how much the


matrix rotates and scales.

3. Understand the geometry of 2 × 2 and 3 × 3 matrices with a complex eigen-


value.

4. Recipe: a 2 × 2 matrix with a complex eigenvalue is similar to a rotation-


scaling matrix.

5. Pictures: the geometry of matrices with a complex eigenvalue.

6. Theorems: the rotation-scaling theorem, the block diagonalization theorem.

7. Vocabulary word: rotation-scaling matrix.

In Section 6.4, we saw that a matrix whose characteristic polynomial has dis-
tinct real roots is diagonalizable: it is similar to a diagonal matrix, which is much
simpler to analyze. In this section, we study matrices whose characteristic poly-
nomial has complex roots. It turns out that such a matrix is similar (in the 2 × 2
case) to a rotation-scaling matrix, which is also relatively easy to understand.
In a certain sense, this entire section is analogous to Section 6.4, with rotation-
scaling matrices playing the role of diagonal matrices.
See Appendix A for a review of the complex numbers.

6.5.1 Matrices with Complex Eigenvalues


As a consequence of the fundamental theorem of algebra as applied to the char-
acteristic polynomial, we see that:

Every matrix has a (possibly real) complex eigenvalue.

We can compute a corresponding (complex) eigenvector in exactly the same


way as before: by row reducing the matrix A − λI n . Now, however, we have to do
arithmetic with complex numbers.

Example (A 2 × 2 matrix). Find the complex eigenvalues and eigenvectors of the


matrix  ‹
1 −1
A= .
1 1

Solution. The characteristic polynomial of A is

f (λ) = λ2 − Tr(A)λ + det(A) = λ2 − 2λ + 2.

The roots of this polynomial are


p
2± 4−8
λ= = 1 ± i.
2

First we compute an eigenvector for λ = 1 + i. We have

A − (1 + i)I_2 = \begin{pmatrix} 1 − (1 + i) & −1 \\ 1 & 1 − (1 + i) \end{pmatrix} = \begin{pmatrix} −i & −1 \\ 1 & −i \end{pmatrix}.

Now we row reduce, noting that the second row is i times the first:

\begin{pmatrix} −i & −1 \\ 1 & −i \end{pmatrix} \xrightarrow{R_2 = R_2 − iR_1} \begin{pmatrix} −i & −1 \\ 0 & 0 \end{pmatrix} \xrightarrow{R_1 = R_1 ÷ (−i)} \begin{pmatrix} 1 & −i \\ 0 & 0 \end{pmatrix}.

The parametric form is x = iy, so that an eigenvector is v_1 = (i, 1). Next we compute
an eigenvector for λ = 1 − i. We have

A − (1 − i)I_2 = \begin{pmatrix} 1 − (1 − i) & −1 \\ 1 & 1 − (1 − i) \end{pmatrix} = \begin{pmatrix} i & −1 \\ 1 & i \end{pmatrix}.

Now we row reduce, noting that the second row is −i times the first:

\begin{pmatrix} i & −1 \\ 1 & i \end{pmatrix} \xrightarrow{R_2 = R_2 + iR_1} \begin{pmatrix} i & −1 \\ 0 & 0 \end{pmatrix} \xrightarrow{R_1 = R_1 ÷ i} \begin{pmatrix} 1 & i \\ 0 & 0 \end{pmatrix}.

The parametric form is x = −iy, so that an eigenvector is v_2 = (−i, 1).
We can verify our answers:

\begin{pmatrix} 1 & −1 \\ 1 & 1 \end{pmatrix}\begin{pmatrix} i \\ 1 \end{pmatrix} = \begin{pmatrix} i − 1 \\ i + 1 \end{pmatrix} = (1 + i)\begin{pmatrix} i \\ 1 \end{pmatrix}
\begin{pmatrix} 1 & −1 \\ 1 & 1 \end{pmatrix}\begin{pmatrix} −i \\ 1 \end{pmatrix} = \begin{pmatrix} −i − 1 \\ −i + 1 \end{pmatrix} = (1 − i)\begin{pmatrix} −i \\ 1 \end{pmatrix}.
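
The same check can be done numerically. The sketch below uses numpy (an assumption, not part of the text); numpy may scale each eigenvector by a complex number, so we only verify the eigenvector equation itself.

```python
import numpy as np

A = np.array([[1., -1.],
              [1.,  1.]])

w, V = np.linalg.eig(A)
print(w)                                 # [1.+1.j  1.-1.j]
for lam, v in zip(w, V.T):               # the columns of V are eigenvectors
    print(np.allclose(A @ v, lam * v))   # True, True
```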

Example (A 3 × 3 matrix). Find the eigenvalues and eigenvectors, real and com-
plex, of the matrix  
4/5 −3/5 0
A =  3/5 4/5 0  .
1 2 2

Solution. We compute the characteristic polynomial by expanding cofactors along


the third row:
 
f(λ) = det\begin{pmatrix} 4/5 − λ & −3/5 & 0 \\ 3/5 & 4/5 − λ & 0 \\ 1 & 2 & 2 − λ \end{pmatrix} = (2 − λ)\left(λ² − \frac{8}{5}λ + 1\right).

This polynomial has one real root at 2, and two complex roots at

λ = \frac{8/5 ± \sqrt{64/25 − 4}}{2} = \frac{4 ± 3i}{5}.

Therefore, the eigenvalues are

λ = 2, \quad \frac{4 + 3i}{5}, \quad \frac{4 − 3i}{5}.
We eyeball that v1 = e3 is an eigenvector with eigenvalue 2, since the third column
is 2e3 .
Next we find an eigenvector with eigenvalue (4 + 3i)/5. We have

A − \frac{4 + 3i}{5}I_3 = \begin{pmatrix} −3i/5 & −3/5 & 0 \\ 3/5 & −3i/5 & 0 \\ 1 & 2 & (6 − 3i)/5 \end{pmatrix} \xrightarrow[R_2 = R_2 × 5/3]{R_1 = R_1 × (−5/3)} \begin{pmatrix} i & 1 & 0 \\ 1 & −i & 0 \\ 1 & 2 & (6 − 3i)/5 \end{pmatrix}.

We row reduce, noting that the second row is −i times the first:

\begin{pmatrix} i & 1 & 0 \\ 1 & −i & 0 \\ 1 & 2 & (6−3i)/5 \end{pmatrix}
\xrightarrow{R_2 = R_2 + iR_1} \begin{pmatrix} i & 1 & 0 \\ 0 & 0 & 0 \\ 1 & 2 & (6−3i)/5 \end{pmatrix}
\xrightarrow{R_3 = R_3 + iR_1} \begin{pmatrix} i & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 2+i & (6−3i)/5 \end{pmatrix}
\xrightarrow{R_2 \leftrightarrow R_3} \begin{pmatrix} i & 1 & 0 \\ 0 & 2+i & (6−3i)/5 \\ 0 & 0 & 0 \end{pmatrix}
\xrightarrow[R_2 = R_2 ÷ (2+i)]{R_1 = R_1 ÷ i} \begin{pmatrix} 1 & −i & 0 \\ 0 & 1 & (9−12i)/25 \\ 0 & 0 & 0 \end{pmatrix}
\xrightarrow{R_1 = R_1 + iR_2} \begin{pmatrix} 1 & 0 & (12+9i)/25 \\ 0 & 1 & (9−12i)/25 \\ 0 & 0 & 0 \end{pmatrix}.
The free variable is z; the parametric form of the solution is

x = −\frac{12 + 9i}{25}z, \qquad y = −\frac{9 − 12i}{25}z.

Taking z = 25 gives the eigenvector

v_2 = \begin{pmatrix} −12 − 9i \\ −9 + 12i \\ 25 \end{pmatrix}.
A similar calculation (replacing all occurrences of i by −i) shows that an eigenvector
with eigenvalue (4 − 3i)/5 is

v_3 = \begin{pmatrix} −12 + 9i \\ −9 − 12i \\ 25 \end{pmatrix}.

We can verify our calculations:

\begin{pmatrix} 4/5 & −3/5 & 0 \\ 3/5 & 4/5 & 0 \\ 1 & 2 & 2 \end{pmatrix}\begin{pmatrix} −12 − 9i \\ −9 + 12i \\ 25 \end{pmatrix} = \begin{pmatrix} −21/5 − 72i/5 \\ −72/5 + 21i/5 \\ 20 + 15i \end{pmatrix} = \frac{4 + 3i}{5}\begin{pmatrix} −12 − 9i \\ −9 + 12i \\ 25 \end{pmatrix}

\begin{pmatrix} 4/5 & −3/5 & 0 \\ 3/5 & 4/5 & 0 \\ 1 & 2 & 2 \end{pmatrix}\begin{pmatrix} −12 + 9i \\ −9 − 12i \\ 25 \end{pmatrix} = \begin{pmatrix} −21/5 + 72i/5 \\ −72/5 − 21i/5 \\ 20 − 15i \end{pmatrix} = \frac{4 − 3i}{5}\begin{pmatrix} −12 + 9i \\ −9 − 12i \\ 25 \end{pmatrix}.

If A is a matrix with real entries, then its characteristic polynomial has real
coefficients, so this note implies that its complex eigenvalues come in conjugate
pairs. In the first example, we notice that

1 + i \text{ has an eigenvector } v_1 = \begin{pmatrix} i \\ 1 \end{pmatrix}, \qquad 1 − i \text{ has an eigenvector } v_2 = \begin{pmatrix} −i \\ 1 \end{pmatrix}.

In the second example,

\frac{4 + 3i}{5} \text{ has an eigenvector } v_1 = \begin{pmatrix} −12 − 9i \\ −9 + 12i \\ 25 \end{pmatrix}, \qquad \frac{4 − 3i}{5} \text{ has an eigenvector } v_2 = \begin{pmatrix} −12 + 9i \\ −9 − 12i \\ 25 \end{pmatrix}.

In these cases, an eigenvector for the conjugate eigenvalue is simply the conju-
gate eigenvector (the eigenvector obtained by conjugating each entry of the first
eigenvector). This is always true; we leave the verification to the reader.

Let A be a matrix with real entries. If

λ is a complex eigenvalue with eigenvector v,


then λ̄ is a complex eigenvalue with eigenvector v̄.

In other words, both eigenvalues and eigenvectors come in conjugate pairs.

Since it can be tedious to divide by complex numbers while row reducing, it is


useful to learn the following trick.
Eigenvector Trick for 2 × 2 Matrices. Let A be a 2 × 2 matrix, and let λ be a (real
or complex) eigenvalue. Then

A − λI_2 = \begin{pmatrix} z & w \\ ? & ? \end{pmatrix} \implies \begin{pmatrix} −w \\ z \end{pmatrix} \text{ is an eigenvector with eigenvalue } λ,
assuming the first row of A − λI2 is nonzero.

Indeed, since λ is an eigenvalue, we know that A − λI_2 is not an invertible ma-
trix. It follows that the rows are collinear (otherwise the determinant is nonzero),
so that the second row is automatically a (complex) multiple of the first:

\begin{pmatrix} z & w \\ ? & ? \end{pmatrix} = \begin{pmatrix} z & w \\ cz & cw \end{pmatrix}.

It is obvious that (−w, z) is in the null space of this matrix, as is (w, −z), for that matter.
Note that we never had to compute the second row of A − λI2 !

Example (A 2 × 2 matrix, the easy way). Find the complex eigenvalues and eigen-
vectors of the matrix  ‹
1 −1
A= .
1 1

Solution. Since the characteristic polynomial of a 2 × 2 matrix A is f (λ) = λ2 −


Tr(A)λ + det(A), its roots are
λ = \frac{Tr(A) ± \sqrt{Tr(A)² − 4 det(A)}}{2} = \frac{2 ± \sqrt{4 − 8}}{2} = 1 ± i.
To find an eigenvector with eigenvalue 1 + i, we compute

A − (1 + i)I_2 = \begin{pmatrix} −i & −1 \\ ? & ? \end{pmatrix} \xrightarrow{\text{eigenvector}} v_1 = \begin{pmatrix} 1 \\ −i \end{pmatrix}.

The eigenvector for the conjugate eigenvalue is the complex conjugate:

v_2 = \bar v_1 = \begin{pmatrix} 1 \\ i \end{pmatrix}.

In this example we found the eigenvectors (i, 1) and (−i, 1) for the eigenvalues 1 + i
and 1 − i, respectively, but in this example we found the eigenvectors (1, −i) and
(1, i) for the same eigenvalues of the same matrix. These vectors do not look like
multiples of each other at first, but since we now have complex numbers at our
disposal, we can see that they actually are multiples:

−i\begin{pmatrix} i \\ 1 \end{pmatrix} = \begin{pmatrix} 1 \\ −i \end{pmatrix}, \qquad i\begin{pmatrix} −i \\ 1 \end{pmatrix} = \begin{pmatrix} 1 \\ i \end{pmatrix}.

6.5.2 Rotation-Scaling Matrices


The most important examples of matrices with complex eigenvalues are rotation-
scaling matrices, i.e., scalar multiples of rotation matrices.

Definition. A rotation-scaling matrix is a 2 × 2 matrix of the form


 ‹
a −b
,
b a

where a and b are real numbers, not both equal to zero.

The following proposition justifies the name.

Proposition. Let A = \begin{pmatrix} a & −b \\ b & a \end{pmatrix}
be a rotation-scaling matrix. Then:

1. A is a product of a rotation matrix

\begin{pmatrix} \cos θ & −\sin θ \\ \sin θ & \cos θ \end{pmatrix} \quad\text{with a scaling matrix}\quad \begin{pmatrix} r & 0 \\ 0 & r \end{pmatrix}.

2. The scaling factor is r = \sqrt{\det(A)} = \sqrt{a² + b²}.

3. The rotation angle θ is the counterclockwise angle from the positive x-axis to
   the vector (a, b).

4. The eigenvalues of A are λ = a ± bi.


Proof. Set r = \sqrt{\det(A)} = \sqrt{a² + b²}. The point (a/r, b/r) has the property that

\left(\frac{a}{r}\right)² + \left(\frac{b}{r}\right)² = \frac{a² + b²}{r²} = 1.

In other words, (a/r, b/r) lies on the unit circle. Therefore, it has the form (cos θ, sin θ),
where θ is the counterclockwise angle from the positive x-axis to the vector (a/r, b/r),
or, since it is on the same line, to (a, b):

\begin{pmatrix} a/r \\ b/r \end{pmatrix} = \begin{pmatrix} \cos θ \\ \sin θ \end{pmatrix}.

It follows that

A = r\begin{pmatrix} a/r & −b/r \\ b/r & a/r \end{pmatrix} = \begin{pmatrix} r & 0 \\ 0 & r \end{pmatrix}\begin{pmatrix} \cos θ & −\sin θ \\ \sin θ & \cos θ \end{pmatrix},

as desired.
For the last statement, we compute the eigenvalues of A as the roots of the
characteristic polynomial:
λ = \frac{Tr(A) ± \sqrt{Tr(A)² − 4 det(A)}}{2} = \frac{2a ± \sqrt{4a² − 4(a² + b²)}}{2} = a ± bi.
Geometrically, a rotation-scaling matrix does exactly what the name says: it
rotates and scales (in either order).
Example (A rotation-scaling matrix). What does the matrix
 ‹
1 −1
A=
1 1

do geometrically?
Solution. This is a rotation-scaling matrix with a = b = 1. Therefore, it scales by a
factor of \sqrt{\det(A)} = \sqrt{2} and rotates counterclockwise by 45°. In a picture: A
rotates the plane by 45° and then scales by \sqrt{2}.

An interactive figure is included below.

Use this link to view the online demo


p
Multiplication by the matrix A rotates the plane by 45◦ and dilates by a factor of 2.
Move the input vector x to see how the output vector b changes.

Example (A rotation-scaling matrix). What does the matrix

A = \begin{pmatrix} −\sqrt{3} & −1 \\ 1 & −\sqrt{3} \end{pmatrix}
do geometrically?
Solution. This is a rotation-scaling matrix with a = −\sqrt{3} and b = 1. Therefore, it
scales by a factor of \sqrt{\det(A)} = \sqrt{3 + 1} = 2 and rotates counterclockwise by the
angle θ from the positive x-axis to the vector (−\sqrt{3}, 1).

To compute this angle, we do a bit of trigonometry:

φ = \tan^{-1}\left(\frac{1}{\sqrt{3}}\right) = \frac{π}{6}, \qquad θ = π − φ = \frac{5π}{6}.

Therefore, A rotates counterclockwise by 5π/6 and scales by a factor of 2.

An interactive figure is included below.

Use this link to view the online demo

Multiplication by the matrix A rotates the plane by 5π/6 and dilates by a factor of 2.
Move the input vector x to see how the output vector b changes.
In the second example, the vector (a, b) = (−\sqrt{3}, 1) is rotated counterclockwise
from the positive x-axis by an angle of 5π/6. This rotation angle is not equal to
\tan^{-1}(1/(−\sqrt{3})) = −π/6. The problem is that arctan always outputs values
between −π/2 and π/2: it does not account for points in the second or third
quadrants. This is why we drew a triangle and used its (positive) edge lengths to
compute the angle φ: φ = \tan^{-1}(1/\sqrt{3}) = π/6 and θ = π − φ = 5π/6.

Alternatively, we could have observed that (−\sqrt{3}, 1) lies in the second quadrant,
so that the angle θ in question is

θ = \tan^{-1}\left(\frac{1}{−\sqrt{3}}\right) + π.

When finding the rotation angle of a vector (a, b), do not blindly compute
\tan^{-1}(b/a), since this will give the wrong answer when (a, b) is in the second or
third quadrant. Instead, draw a picture.
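
One practical way to follow this advice in a computation is the two-argument arctangent, which looks at the signs of both coordinates. The sketch below assumes Python/numpy, which is not part of this text's tooling.

```python
import numpy as np

a, b = -np.sqrt(3), 1.0            # the rotation-scaling matrix from the example above
r = np.hypot(a, b)                 # scaling factor sqrt(a^2 + b^2) = 2
theta = np.arctan2(b, a)           # rotation angle; atan2 handles the quadrant correctly
print(r, theta, 5 * np.pi / 6)     # theta equals 5*pi/6, not -pi/6
```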

6.5.3 Geometry of 2 × 2 Matrices with a Complex Eigenvalue


Let A be a 2 × 2 matrix with a complex, non-real eigenvalue λ. Then A also has the
eigenvalue λ̄ ≠ λ. In particular, A has distinct eigenvalues, so it is diagonalizable
over the complex numbers. However, the following construction is generally more
useful, as all matrices involved have real entries.

Rotation-Scaling Theorem. Let A be a 2 × 2 real matrix with a complex (non-real)


eigenvalue λ, and let v be an eigenvector. Then A = CBC^{-1} for

C = \begin{pmatrix} | & | \\ Re(v) & Im(v) \\ | & | \end{pmatrix} \quad\text{and}\quad B = \begin{pmatrix} Re(λ) & Im(λ) \\ −Im(λ) & Re(λ) \end{pmatrix}.

In particular, A is similar to a rotation-scaling matrix that scales by a factor of |λ| = \sqrt{\det(B)}.

Proof. First we need to show that Re(v) and Im(v) are linearly independent, since
otherwise C is not invertible. If not, then there exist real numbers x, y, not both
equal to zero, such that x Re(v) + y Im(v) = 0. Then

( y + i x)v = ( y + i x) Re(v) + i Im(v)
= y Re(v) − x Im(v) + (x Re(v) + y Im(v)) i
= y Re(v) − x Im(v).

Now, ( y + i x)v is also an eigenvector of A with eigenvalue λ, as it is a scalar


multiple of v. But we just showed that ( y + i x)v is a vector with real entries, and
any real eigenvector of a real matrix has a real eigenvalue. Therefore, Re(v) and
Im(v) must be linearly independent after all.
Let λ = a + bi and v = \begin{pmatrix} x + yi \\ z + wi \end{pmatrix}. We observe that

Av = λv = (a + bi)\begin{pmatrix} x + yi \\ z + wi \end{pmatrix}
        = \begin{pmatrix} (ax − by) + (ay + bx)i \\ (az − bw) + (aw + bz)i \end{pmatrix}
        = \begin{pmatrix} ax − by \\ az − bw \end{pmatrix} + i\begin{pmatrix} ay + bx \\ aw + bz \end{pmatrix}.

On the other hand, we have

A\left(\begin{pmatrix} x \\ z \end{pmatrix} + i\begin{pmatrix} y \\ w \end{pmatrix}\right) = A\begin{pmatrix} x \\ z \end{pmatrix} + iA\begin{pmatrix} y \\ w \end{pmatrix} = A Re(v) + iA Im(v).

Matching real and imaginary parts gives

A Re(v) = \begin{pmatrix} ax − by \\ az − bw \end{pmatrix}, \qquad A Im(v) = \begin{pmatrix} ay + bx \\ aw + bz \end{pmatrix}.

Now we compute CBC^{-1} Re(v) and CBC^{-1} Im(v). Since Ce_1 = Re(v) and
Ce_2 = Im(v), we have C^{-1} Re(v) = e_1 and C^{-1} Im(v) = e_2, so

CBC^{-1} Re(v) = CBe_1 = C\begin{pmatrix} a \\ −b \end{pmatrix} = a Re(v) − b Im(v)
              = a\begin{pmatrix} x \\ z \end{pmatrix} − b\begin{pmatrix} y \\ w \end{pmatrix} = \begin{pmatrix} ax − by \\ az − bw \end{pmatrix} = A Re(v)

CBC^{-1} Im(v) = CBe_2 = C\begin{pmatrix} b \\ a \end{pmatrix} = b Re(v) + a Im(v)
              = b\begin{pmatrix} x \\ z \end{pmatrix} + a\begin{pmatrix} y \\ w \end{pmatrix} = \begin{pmatrix} ay + bx \\ aw + bz \end{pmatrix} = A Im(v).

Therefore, ARe(v) = C BC −1 Re(v) and AIm(v) = C BC −1 Im(v).


Since Re(v) and Im(v) are linearly independent, they form a basis for R2 . Let
w be any vector in R2 , and write w = c Re(v) + d Im(v). Then


Aw = A c Re(v) + d Im(v)
= cARe(v) + dAIm(v)
= cC BC −1 Re(v) + dC BC −1 Im(v)

= C BC −1 c Re(v) + d Im(v)
= C BC −1 w.

This proves that A = C BC −1 .

Here Re and Im denote the real and imaginary parts, respectively:

Re(a + bi) = a, \quad Im(a + bi) = b, \quad Re\begin{pmatrix} x + yi \\ z + wi \end{pmatrix} = \begin{pmatrix} x \\ z \end{pmatrix}, \quad Im\begin{pmatrix} x + yi \\ z + wi \end{pmatrix} = \begin{pmatrix} y \\ w \end{pmatrix}.

The rotation-scaling matrix in question is the matrix

 ‹
a −b
B= with a = Re(λ), b = − Im(λ).
b a

Geometrically, the rotation-scaling theorem says that a 2×2 matrix with a complex
eigenvalue behaves similarly to a rotation-scaling matrix. See this important note
in Section 6.3.
One should regard the rotation-scaling theorem as a close analogue of the di-
agonalization theorem in Section 6.4, with a rotation-scaling matrix playing the

role of a diagonal matrix. Before continuing, we restate the theorem as a recipe:

Recipe: A 2 × 2 matrix with a complex eigenvalue. Let A be a 2 × 2 real


matrix.

1. Compute the characteristic polynomial

f (λ) = λ2 − Tr(A)λ + det(A),

then compute its roots using the quadratic formula.

2. If the eigenvalues are complex, choose one of them, and call it λ.

3. Find a corresponding (complex) eigenvector v using the trick.

4. Then A = CBC^{-1} for

   C = \begin{pmatrix} | & | \\ Re(v) & Im(v) \\ | & | \end{pmatrix} \quad\text{and}\quad B = \begin{pmatrix} Re(λ) & Im(λ) \\ −Im(λ) & Re(λ) \end{pmatrix}.

This scales by a factor of |λ|.
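
Here is a minimal sketch of the recipe in code (Python/numpy; the function name and the use of numpy are our assumptions, not the text's). Since the theorem holds for any choice of complex eigenvector, the eigenvector returned by numpy works.

```python
import numpy as np

def rotation_scaling_form(A):
    """For a real 2x2 matrix A with non-real eigenvalues, return C, B with A = C B C^{-1}."""
    w, V = np.linalg.eig(A)
    lam, v = w[0], V[:, 0]                    # pick one complex eigenvalue/eigenvector
    C = np.column_stack([v.real, v.imag])     # C = [ Re(v)  Im(v) ]
    B = np.array([[ lam.real, lam.imag],
                  [-lam.imag, lam.real]])     # rotation-scaling matrix
    return C, B

A = np.array([[2., -1.],
              [2.,  0.]])                     # the matrix from the next example
C, B = rotation_scaling_form(A)
print(np.allclose(A, C @ B @ np.linalg.inv(C)))   # True
```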

Example. What does the matrix


 ‹
2 −1
A=
2 0
do geometrically?
Solution. The eigenvalues of A are
λ = \frac{Tr(A) ± \sqrt{Tr(A)² − 4 det(A)}}{2} = \frac{2 ± \sqrt{4 − 8}}{2} = 1 ± i.
We choose the eigenvalue λ = 1 − i and find a corresponding eigenvector, using
the trick:
A − (1 − i)I_2 = \begin{pmatrix} 1 + i & −1 \\ ? & ? \end{pmatrix} \xrightarrow{\text{eigenvector}} v = \begin{pmatrix} 1 \\ 1 + i \end{pmatrix}.
According to the rotation-scaling theorem, we have A = CBC^{-1} for

C = \begin{pmatrix} Re\begin{pmatrix} 1 \\ 1+i \end{pmatrix} & Im\begin{pmatrix} 1 \\ 1+i \end{pmatrix} \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix} \qquad B = \begin{pmatrix} Re(λ) & Im(λ) \\ −Im(λ) & Re(λ) \end{pmatrix} = \begin{pmatrix} 1 & −1 \\ 1 & 1 \end{pmatrix}.

The matrix B is the rotation-scaling matrix in the above example: it rotates counterclockwise
by an angle of 45° and scales by a factor of \sqrt{2}. The matrix A does the same thing,
but with respect to the Re(v), Im(v)-coordinate system.

To summarize:
• B rotates around the circle centered at the origin and passing through e1 and
  e2, in the direction from e1 to e2, then scales by \sqrt{2}.

• A rotates around the ellipse centered at the origin and passing through Re(v)
  and Im(v), in the direction from Re(v) to Im(v), then scales by \sqrt{2}.
The reader might want to refer back to this example in Section 6.3.

Use this link to view the online demo

The geometric action of A and B on the plane. Click “multiply” to multiply the colored
points by B on the left and A on the right.

If instead we had chosen λ = 1 + i as our eigenvalue, then we would have found
the eigenvector v = (1, 1 − i). In this case we would have A = C′B′(C′)^{-1}, where

C′ = \begin{pmatrix} Re\begin{pmatrix} 1 \\ 1−i \end{pmatrix} & Im\begin{pmatrix} 1 \\ 1−i \end{pmatrix} \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 1 & −1 \end{pmatrix} \qquad B′ = \begin{pmatrix} Re(λ) & Im(λ) \\ −Im(λ) & Re(λ) \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ −1 & 1 \end{pmatrix}.

So, A is also similar to a clockwise rotation by 45°, followed by a scaling by \sqrt{2}.

Example. What does the matrix

A = \begin{pmatrix} −\sqrt{3} + 1 & −2 \\ 1 & −\sqrt{3} − 1 \end{pmatrix}

do geometrically?

Solution. The eigenvalues of A are

λ = \frac{Tr(A) ± \sqrt{Tr(A)² − 4 det(A)}}{2} = \frac{−2\sqrt{3} ± \sqrt{12 − 16}}{2} = −\sqrt{3} ± i.

We choose the eigenvalue λ = −\sqrt{3} − i and find a corresponding eigenvector, using
the trick:

A − (−\sqrt{3} − i)I_2 = \begin{pmatrix} 1 + i & −2 \\ ? & ? \end{pmatrix} \xrightarrow{\text{eigenvector}} v = \begin{pmatrix} 2 \\ 1 + i \end{pmatrix}.

According to the rotation-scaling theorem, we have A = CBC^{-1} for

C = \begin{pmatrix} Re\begin{pmatrix} 2 \\ 1+i \end{pmatrix} & Im\begin{pmatrix} 2 \\ 1+i \end{pmatrix} \end{pmatrix} = \begin{pmatrix} 2 & 0 \\ 1 & 1 \end{pmatrix} \qquad B = \begin{pmatrix} Re(λ) & Im(λ) \\ −Im(λ) & Re(λ) \end{pmatrix} = \begin{pmatrix} −\sqrt{3} & −1 \\ 1 & −\sqrt{3} \end{pmatrix}.

The matrix B is the rotation-scaling matrix in the above example: it rotates counterclockwise
by an angle of 5π/6 and scales by a factor of 2. The matrix A does the same thing,
but with respect to the Re(v), Im(v)-coordinate system.

To summarize:
• B rotates around the circle centered at the origin and passing through e1 and
e2 , in the direction from e1 to e2 , then scales by 2.

• A rotates around the ellipse centered at the origin and passing through Re(v)
and Im(v), in the direction from Re(v) to Im(v), then scales by 2.
The reader might want to refer back to this example in Section 6.3.

Use this link to view the online demo

The geometric action of A and B on the plane. Click “multiply” to multiply the colored
points by B on the left and A on the right.
If instead we had chosen λ = −\sqrt{3} + i as our eigenvalue, then we would have found
the eigenvector v = (2, 1 − i). In this case we would have A = C′B′(C′)^{-1}, where

C′ = \begin{pmatrix} Re\begin{pmatrix} 2 \\ 1−i \end{pmatrix} & Im\begin{pmatrix} 2 \\ 1−i \end{pmatrix} \end{pmatrix} = \begin{pmatrix} 2 & 0 \\ 1 & −1 \end{pmatrix} \qquad B′ = \begin{pmatrix} Re(λ) & Im(λ) \\ −Im(λ) & Re(λ) \end{pmatrix} = \begin{pmatrix} −\sqrt{3} & 1 \\ −1 & −\sqrt{3} \end{pmatrix}.

So, A is also similar to a clockwise rotation by 5π/6, followed by a scaling by 2.

We saw in the above examples that the rotation-scaling theorem can be applied
in two different ways to any given matrix: one has to choose one of the two con-
jugate eigenvalues to work with. Replacing λ by λ̄ has the effect of replacing v by
v̄, which just negates all imaginary parts, so we also have A = C′B′(C′)^{-1} for

C′ = \begin{pmatrix} | & | \\ Re(v) & −Im(v) \\ | & | \end{pmatrix} \quad\text{and}\quad B′ = \begin{pmatrix} Re(λ) & −Im(λ) \\ Im(λ) & Re(λ) \end{pmatrix}.

The matrices B and B′ are similar to each other. The only difference between them
is the direction of rotation, since (Re(λ), −Im(λ)) and (Re(λ), Im(λ)) are mirror images
of each other over the x-axis.

The discussion that follows is closely analogous to the exposition in this sub-
section in Section 6.4, in which we studied the dynamics of diagonalizable 2 × 2
matrices.

Dynamics of a 2 × 2 Matrix with a Complex Eigenvalue. Let A be a 2 × 2 matrix


with a complex (non-real) eigenvalue λ. By the rotation-scaling theorem, the
matrix A is similar to a matrix that rotates by some amount and scales by |λ|.
Hence, A rotates around an ellipse and scales by |λ|. There are three different
cases.
|λ| > 1: when the scaling factor is greater than 1, then vectors tend to get
longer, i.e., farther from the origin. In this case, repeatedly multiplying a vector
by A makes the vector “spiral out”. For example,
A = \frac{1}{\sqrt{2}}\begin{pmatrix} \sqrt{3} + 1 & −2 \\ 1 & \sqrt{3} − 1 \end{pmatrix} \qquad λ = \frac{\sqrt{3} − i}{\sqrt{2}} \qquad |λ| = \sqrt{2} > 1

gives rise to a picture in which the iterates v, Av, A²v, A³v spiral away from the origin.

|λ| = 1: when the scaling factor is equal to 1, then vectors do not tend to get
longer or shorter. In this case, repeatedly multiplying a vector by A simply “rotates
around an ellipse”. For example,
A = \frac{1}{2}\begin{pmatrix} \sqrt{3} + 1 & −2 \\ 1 & \sqrt{3} − 1 \end{pmatrix} \qquad λ = \frac{\sqrt{3} − i}{2} \qquad |λ| = 1

gives rise to a picture in which the iterates v, Av, A²v, A³v travel around an ellipse.

|λ| < 1: when the scaling factor is less than 1, then vectors tend to get shorter,
i.e., closer to the origin. In this case, repeatedly multiplying a vector by A makes
the vector “spiral in”. For example,
A = \frac{1}{2\sqrt{2}}\begin{pmatrix} \sqrt{3} + 1 & −2 \\ 1 & \sqrt{3} − 1 \end{pmatrix} \qquad λ = \frac{\sqrt{3} − i}{2\sqrt{2}} \qquad |λ| = \frac{1}{\sqrt{2}} < 1

gives rise to a picture in which the iterates v, Av, A²v, A³v spiral in toward the origin.

Interactive: |λ| > 1.


A = \frac{1}{\sqrt{2}}\begin{pmatrix} \sqrt{3} + 1 & −2 \\ 1 & \sqrt{3} − 1 \end{pmatrix} \qquad B = \frac{1}{\sqrt{2}}\begin{pmatrix} \sqrt{3} & −1 \\ 1 & \sqrt{3} \end{pmatrix} \qquad C = \begin{pmatrix} 2 & 0 \\ 1 & 1 \end{pmatrix}

λ = \frac{\sqrt{3} − i}{\sqrt{2}} \qquad |λ| = \sqrt{2} > 1

Use this link to view the online demo

The geometric action of A and B on the plane. Click “multiply” to multiply the colored
points by B on the left and A on the right.

Interactive: |λ| = 1.
A = \frac{1}{2}\begin{pmatrix} \sqrt{3} + 1 & −2 \\ 1 & \sqrt{3} − 1 \end{pmatrix} \qquad B = \frac{1}{2}\begin{pmatrix} \sqrt{3} & −1 \\ 1 & \sqrt{3} \end{pmatrix} \qquad C = \begin{pmatrix} 2 & 0 \\ 1 & 1 \end{pmatrix}

λ = \frac{\sqrt{3} − i}{2} \qquad |λ| = 1

Use this link to view the online demo

The geometric action of A and B on the plane. Click “multiply” to multiply the colored
points by B on the left and A on the right.

Interactive: |λ| < 1.


A = \frac{1}{2\sqrt{2}}\begin{pmatrix} \sqrt{3} + 1 & −2 \\ 1 & \sqrt{3} − 1 \end{pmatrix} \qquad B = \frac{1}{2\sqrt{2}}\begin{pmatrix} \sqrt{3} & −1 \\ 1 & \sqrt{3} \end{pmatrix} \qquad C = \begin{pmatrix} 2 & 0 \\ 1 & 1 \end{pmatrix}

λ = \frac{\sqrt{3} − i}{2\sqrt{2}} \qquad |λ| = \frac{1}{\sqrt{2}} < 1

Use this link to view the online demo

The geometric action of A and B on the plane. Click “multiply” to multiply the colored
points by B on the left and A on the right.

Remark (Classification of 2 × 2 matrices up to similarity). At this point, we can


write down the “simplest” possible matrix which is similar to any given 2×2 matrix
A. There are four cases:
1. A has two real eigenvalues λ1 , λ2 . In this case, A is diagonalizable, so A is
similar to the matrix
λ1 0
 ‹
.
0 λ2
This representation is unique up to reordering the eigenvalues.

2. A has one real eigenvalue λ of geometric multiplicity 2. In this case, we saw


in this example in Section 6.4 that A is equal to the matrix

λ 0
 ‹
.
0 λ

3. A has one real eigenvalue λ of geometric multiplicity 1. In this case, A is not


diagonalizable, and we saw in this remark in Section 6.4 that A is similar to
the matrix
λ 1
 ‹
.
0 λ

4. A has no real eigenvalues. In this case, A has a complex eigenvalue λ, and A


is similar to the rotation-scaling matrix
 ‹
Re(λ) Im(λ)
− Im(λ) Re(λ)

by the rotation-scaling theorem. By this proposition, the eigenvalues of a
rotation-scaling matrix \begin{pmatrix} a & −b \\ b & a \end{pmatrix} are a ± bi, so that two rotation-scaling matrices
\begin{pmatrix} a & −b \\ b & a \end{pmatrix} and \begin{pmatrix} c & −d \\ d & c \end{pmatrix} are similar if and only if a = c and b = ±d.

6.5.4 Block Diagonalization


For matrices larger than 2×2, there is a theorem that combines the diagonalization
theorem in Section 6.4 and the rotation-scaling theorem. It says essentially that a
matrix is similar to a matrix with parts that look like a diagonal matrix, and parts
that look like a rotation-scaling matrix.

Block Diagonalization Theorem. Let A be a real n × n matrix. Suppose that for


each (real or complex) eigenvalue, the algebraic multiplicity equals the geometric
multiplicity. Then A = C BC −1 , where B and C are as follows:

• The matrix B is block diagonal, where the blocks are 1 × 1 blocks containing
the real eigenvalues (with their multiplicities), or 2 × 2 blocks containing the
matrices \begin{pmatrix} Re(λ) & Im(λ) \\ −Im(λ) & Re(λ) \end{pmatrix}
for each non-real eigenvalue λ (with multiplicity).

• The columns of C form bases for the eigenspaces of the real eigenvalues, or
  come in pairs Re(v), Im(v) for the non-real eigenvalues.

The block diagonalization theorem is proved in the same way as the diagonal-
ization theorem in Section 6.4 and the rotation-scaling theorem. It is best under-
stood in the case of 3 × 3 matrices.

Block Diagonalization of a 3 × 3 Matrix with a Complex Eigenvalue. Let A be


a 3 × 3 matrix with a complex eigenvalue λ1 . Then λ1 is another eigenvalue,
and there is one real eigenvalue λ2 . Since there are three distinct eigenvalues,
they have algebraic and geometric multiplicity one, so the block diagonalization
theorem applies to A.
Let v1 be a (complex) eigenvector with eigenvalue λ1 , and let v2 be a (real)
eigenvector with eigenvalue λ2 . Then the block diagonalization theorem says that
A = C BC −1 for
   
| | | Re(λ1 ) Im(λ1 ) 0
C =  Re(v1 ) Im(v1 ) v2  B =  − Im(λ1 ) Re(λ1 ) 0 .
| | | 0 0 λ2

Example (Geometry of a 3 × 3 matrix with a complex eigenvalue). What does the


matrix

A = \frac{1}{29}\begin{pmatrix} 33 & −23 & 9 \\ 22 & 33 & −23 \\ 19 & 14 & 50 \end{pmatrix}
do geometrically?
Solution. First we find the (real and complex) eigenvalues of A. We compute the
characteristic polynomial using whatever method we like:

f (λ) = det(A − λI3 ) = −λ3 + 4λ2 − 6λ + 4.

We search for a real root using the rational root theorem. The possible rational
roots are ±1, ±2, ±4; we find f (2) = 0, so that λ − 2 divides f (λ). Performing
polynomial long division gives

f(λ) = −(λ − 2)(λ² − 2λ + 2).

The quadratic term has roots

λ = \frac{2 ± \sqrt{4 − 8}}{2} = 1 ± i,

so that the complete list of eigenvalues is λ_1 = 1 − i, λ̄_1 = 1 + i, and λ_2 = 2.
Now we compute some eigenvectors, starting with λ1 = 1 − i. We row reduce
(probably with the aid of a computer):

A − (1 − i)I_3 = \frac{1}{29}\begin{pmatrix} 4 + 29i & −23 & 9 \\ 22 & 4 + 29i & −23 \\ 19 & 14 & 21 + 29i \end{pmatrix} \xrightarrow{\text{RREF}} \begin{pmatrix} 1 & 0 & 7/5 + i/5 \\ 0 & 1 & −2/5 + 9i/5 \\ 0 & 0 & 0 \end{pmatrix}.

The free variable is z, and the parametric form is

x = −\left(\frac{7}{5} + \frac{1}{5}i\right)z, \qquad y = \left(\frac{2}{5} − \frac{9}{5}i\right)z.

Taking z = 5 gives the eigenvector v_1 = \begin{pmatrix} −7 − i \\ 2 − 9i \\ 5 \end{pmatrix}.
For λ_2 = 2, we have

A − 2I_3 = \frac{1}{29}\begin{pmatrix} −25 & −23 & 9 \\ 22 & −25 & −23 \\ 19 & 14 & −8 \end{pmatrix} \xrightarrow{\text{RREF}} \begin{pmatrix} 1 & 0 & −2/3 \\ 0 & 1 & 1/3 \\ 0 & 0 & 0 \end{pmatrix}.

The free variable is z, and the parametric form is

x = \frac{2}{3}z, \qquad y = −\frac{1}{3}z.

Taking z = 3 gives the eigenvector v_2 = \begin{pmatrix} 2 \\ −1 \\ 3 \end{pmatrix}.

According to the block diagonalization theorem, we have A = CBC^{-1} for

C = \begin{pmatrix} Re(v_1) & Im(v_1) & v_2 \end{pmatrix} = \begin{pmatrix} −7 & −1 & 2 \\ 2 & −9 & −1 \\ 5 & 0 & 3 \end{pmatrix}

B = \begin{pmatrix} Re(λ_1) & Im(λ_1) & 0 \\ −Im(λ_1) & Re(λ_1) & 0 \\ 0 & 0 & λ_2 \end{pmatrix} = \begin{pmatrix} 1 & −1 & 0 \\ 1 & 1 & 0 \\ 0 & 0 & 2 \end{pmatrix}.

The matrix B is a combination of the rotation-scaling matrix \begin{pmatrix} 1 & −1 \\ 1 & 1 \end{pmatrix} from this
example, and a diagonal matrix. More specifically, B acts on the xy-coordinates by
rotating counterclockwise by 45° and scaling by \sqrt{2}, and it scales the z-coordinate
by 2. This means that points above the xy-plane spiral out away from the z-axis
and move up, and points below the xy-plane spiral out away from the z-axis and
move down.
The matrix A does the same thing as B, but with respect to the {Re(v1 ), Im(v1 ), v2 }-
coordinate system. That is, A acts on the Re(v1 ), Im(v1 )-plane by spiraling out, and
A acts on the v2 -coordinate by scaling by a factor of 2. See the demo below.

Use this link to view the online demo

The geometric action of A and B on R3 . Click “multiply” to multiply the colored points
by B on the left and A on the right. (We have scaled C by 1/6 so that the vectors x
and y have roughly the same size.)
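
The decomposition A = CBC^{-1} found above can be verified numerically; the sketch below assumes Python/numpy, which is not part of the text.

```python
import numpy as np

A = np.array([[33., -23.,   9.],
              [22.,  33., -23.],
              [19.,  14.,  50.]]) / 29.
C = np.array([[-7., -1.,  2.],
              [ 2., -9., -1.],
              [ 5.,  0.,  3.]])
B = np.array([[1., -1., 0.],
              [1.,  1., 0.],
              [0.,  0., 2.]])

print(np.allclose(A, C @ B @ np.linalg.inv(C)))   # True
```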

6.6 Stochastic Matrices

Objectives
1. Learn examples of stochastic matrices and applications to difference equa-
tions.
2. Recipe: find the steady state of a positive stochastic matrix.
3. Picture: dynamics of a positive stochastic matrix.
4. Theorem: the Perron–Frobenius theorem.
5. Vocabulary words: difference equation, (positive) stochastic matrix, steady
state.

This section is devoted to one common kind of application of eigenvalues: to


the study of difference equations, in particular to Markov chains. We will introduce
stochastic matrices, which encode this type of difference equation, and will cover
in detail the most famous example of a stochastic matrix: the Google Matrix.

6.6.1 Difference Equations


Suppose that we are studying a system whose state can be described by a list of
numbers: for instance, the numbers of rabbits aged 0, 1, and 2 years, respectively,
or the number of copies of Prognosis Negative in each of the Red Box kiosks in
Atlanta. In each case, we can represent the state at time t by a vector vt . Suppose
in addition that the state at time t + 1 is related to the state at time t in a linear
way: vt+1 = Avt for some matrix A. This is the situation we will consider in this
subsection.

Definition. A difference equation is an equation of the form

vt+1 = Avt

for an n × n matrix A and vectors v0 , v1 , v2 , . . . in Rn .

In other words:

• vt is the “state at time t,”

• vt+1 is the “state at time t + 1,” and

• vt+1 = Avt means that A is the “change of state matrix.”

Note that
vt = Avt−1 = A2 vt−2 = · · · = At v0 ,
which should hint to you that the long-term behavior of a difference equation is
an eigenvalue problem.
We will use the following example in this subsection and the next. Understand-
ing this section amounts to understanding this example.

Example. Red Box has kiosks all over Atlanta where you can rent movies. You
can return them to any other kiosk. For simplicity, pretend that there are three
kiosks in Atlanta, and that every customer returns their movie the next day. Let
vt be the vector whose entries x t , y t , z t are the number of copies of Prognosis
Negative at kiosks 1, 2, and 3, respectively. Let A be the matrix whose i, j-entry is
the probability that a customer renting Prognosis Negative from kiosk j returns it
to kiosk i. For example, the matrix
 
.3 .4 .5
A =  .3 .4 .3 
.4 .2 .2

encodes a 30% probability that a customer renting from kiosk 3 returns the movie
to kiosk 2, and a 40% probability that a movie rented from kiosk 1 gets returned
to kiosk 3. The second row (for instance) of the matrix A says:

The number of movies returned to kiosk 2 will be (on average):

30% of the movies from kiosk 1


40% of the movies from kiosk 2
30% of the movies from kiosk 3

Applying this to all three rows, this means


   
xt .3x t + .4 y t + .5z t
A  y t  = .3x t + .4 y t + .3z t  .
zt .4x t + .2 y t + .2z t

Therefore, Avt represents the number of movies in each kiosk the next day:

Avt = vt+1 .

This system is modeled by a difference equation.


An important question to ask about a difference equation is: what is its long-
term behavior? Where will the movies be after 100 days? In the next subsection,
we will answer this question for a particular type of difference equation.
Example (Rabbit population). In a population of rabbits,
1. half of the newborn rabbits survive their first year;

2. of those, half survive their second year;

3. the maximum life span is three years;

4. rabbits produce 0, 6, 8 rabbits in their first, second, and third years, respec-
tively.
Let vt be the vector whose entries x t , y t , z t are the number of rabbits aged 0, 1,
and 2, respectively. The rules above can be written as a system of equations:
x_{t+1} = 6y_t + 8z_t
y_{t+1} = (1/2)x_t
z_{t+1} = (1/2)y_t.

In matrix form, this says:

\begin{pmatrix} 0 & 6 & 8 \\ 1/2 & 0 & 0 \\ 0 & 1/2 & 0 \end{pmatrix} v_t = v_{t+1}.
This system is modeled by a difference equation.
Define

A = \begin{pmatrix} 0 & 6 & 8 \\ 1/2 & 0 & 0 \\ 0 & 1/2 & 0 \end{pmatrix}.

We compute that A has eigenvalues 2 and −1, and that an eigenvector with eigenvalue
2 is

v = \begin{pmatrix} 16 \\ 4 \\ 1 \end{pmatrix}.
This partially explains why the ratio x t : y t : z t approaches 16 : 4 : 1 and why all
three quantities eventually double each year in this demo:

Use this link to view the online demo

Left: the population of rabbits in a given year. Right: the proportions of rabbits in
that year. Choose any values you like for the starting population, and click “Advance
1 year” several times. Notice that the ratio x t : y t : z t approaches 16 : 4 : 1, and that
all three quantities eventually double each year.
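
A few lines of code make the convergence of the ratios concrete. This is a sketch (Python/numpy, with a made-up starting population), not part of the text.

```python
import numpy as np

A = np.array([[0.0, 6.0, 8.0],
              [0.5, 0.0, 0.0],
              [0.0, 0.5, 0.0]])

v = np.array([10., 10., 10.])      # rabbits aged 0, 1, 2 (hypothetical starting values)
for _ in range(20):
    v = A @ v                      # advance one year: v_{t+1} = A v_t

print(v / v[2])                    # ratio x_t : y_t : z_t approaches 16 : 4 : 1
```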

6.6.2 Stochastic Matrices and the Steady State


In this subsection, we discuss difference equations representing probabilities, like
the Red Box example. Such systems are called Markov chains. The most important
result in this section is the Perron–Frobenius theorem, which describes the long-
term behavior of a Markov chain.

Definition. A square matrix A is stochastic if all of its entries are nonnegative,


and the entries of each column sum to one.
A matrix is positive if all of its entries are positive numbers.

Example. Continuing with the Red Box example, the matrix


 
.3 .4 .5
A =  .3 .4 .3 
.4 .2 .2

is a positive stochastic matrix. The fact that the columns sum to one says that all
of the movies rented from a particular kiosk must be returned to some other kiosk
(remember that every customer returns their movie the next day). For instance,
the first column says:

Of the movies rented from kiosk 1,

30% will be returned to kiosk 1


30% will be returned to kiosk 2
40% will be returned to kiosk 3.

The sum is 100%, as all of the movies are returned to one of the three
kiosks.

The matrix A represents the change of state from one day to the next:
     
x t+1 xt .3x t + .4 y t + .5z t
 y t+1  = A  y t  = .3x t + .4 y t + .3z t  .
z t+1 zt .4x t + .2 y t + .2z t

If we sum the entries of vt+1 , we obtain

(.3x t + .4 y t + .5z t ) + (.3x t + .4 y t + .3z t ) + (.4x t + .2 y t + .2z t )


= (.3 + .3 + .4)x t + (.4 + .4 + .2) y t + (.5 + .3 + .2)z t
= x t + yt + zt .

This says that the total number of copies of Prognosis Negative in the three kiosks
does not change from day to day, as we expect.
The fact that the entries of the vectors vt and vt+1 sum to the same number is
a consequence of the fact that the columns of a stochastic matrix sum to one.

Let A be a stochastic matrix, let vt be a vector, and let vt+1 = Avt . Then the
sum of the entries of vt equals the sum of the entries of vt+1 .

Computing the long-term behavior of a difference equation turns out to be


an eigenvalue problem. The eigenvalues of stochastic matrices have very special
properties.
Fact. Let A be a stochastic matrix. Then:
1. 1 is an eigenvalue of A.

2. If λ is a (real or complex) eigenvalue of A, then |λ| ≤ 1.


Proof. If A is stochastic, then the rows of AT sum to one. But multiplying a matrix
by the vector (1, 1, . . . , 1) sums the rows:
      
.3 .3 .4 1 .3 + .3 + .4 1
 .4 .4 .2  1 = .4 + .4 + .2 = 1 .
.5 .3 .2 1 .5 + .3 + .2 1

Therefore, 1 is an eigenvalue of AT . But A and AT have the same characteristic


polynomial: 
det(A − λI n ) = det (A − λI n ) T = det(AT − λI n ).
Therefore, 1 is an eigenvalue of A.
Now let λ be any eigenvalue of A, so it is also an eigenvalue of AT . Let x =
(x 1 , x 2 , . . . , x n ) be an eigenvector of AT with eigenvalue λ, so λx = AT x. The jth
entry of this vector equation is

λx_j = \sum_{i=1}^n a_{ij} x_i.

Choose x_j with the largest absolute value, so |x_i| ≤ |x_j| for all i. Then

|λ| · |x_j| = \left|\sum_{i=1}^n a_{ij} x_i\right| ≤ \sum_{i=1}^n a_{ij} · |x_i| ≤ \sum_{i=1}^n a_{ij} · |x_j| = 1 · |x_j|,

where the last equality holds because \sum_{i=1}^n a_{ij} = 1. This implies |λ| ≤ 1.

In fact, for a positive stochastic matrix A, one can show that if λ 6= 1 is a (real
or complex) eigenvalue of A, then |λ| < 1. The 1-eigenspace of a stochastic matrix
is very important.

Definition. A steady state of a stochastic matrix A is an eigenvector w with eigen-


value 1, such that the entries are positive and sum to 1.

The Perron–Frobenius theorem describes the long-term behavior of a difference


equation represented by a stochastic matrix. Its proof is beyond the scope of this
text.

Perron–Frobenius Theorem. Let A be a positive stochastic matrix. Then A admits


a unique steady state vector w, which spans the 1-eigenspace.
Moreover, for any vector v0 with entries summing to some number c, the iterates

v1 = Av0 , v2 = Av1 , . . . , vt = Avt−1 , . . .

approach cw as t gets large.

Translation: The Perron–Frobenius theorem makes the following assertions:

• The 1-eigenspace of a positive stochastic matrix is a line.

• The 1-eigenspace contains a vector with positive entries.

• All vectors approach the 1-eigenspace upon repeated multiplication by A.

One should think of a steady state vector w as a vector of percentages. For example,
if the movies are distributed according to these percentages today, then they will
be have the same distribution tomorrow, since Aw = w. And nomatter the starting
distribution of movies, the long-term distribution will always be the steady state
vector.
The sum c of the entries of v0 is the total number of things in the system being
modeled. The total number does not change, so the long-term state of the system
must approach cw: it is a multiple of w because it is contained in the 1-eigenspace,


and the entries of cw sum to c.

Recipe 1: Compute the steady state vector. Let A be a positive stochastic


matrix. Here is how to compute the steady-state vector of A.

1. Find any eigenvector v of A with eigenvalue 1 by solving (A − I n )v = 0.

2. Divide v by the sum of the entries of v to obtain a vector w whose entries


sum to 1.

3. This vector automatically has positive entries. It is the unique steady-


state vector.

The above recipe is suitable for calculations by hand, but it does not take ad-
vantage of the fact that A is a stochastic matrix. In practice, it is generally faster
to compute a steady state vector by computer as follows:

Recipe 2: Compute the steady state vector by computer. Let A be a positive


stochastic matrix. Here is how to compute the steady-state vector of A with a
computer.

1. Choose any vector v0 whose entries sum to 1 (e.g., a standard coordinate


vector).

2. Compute v1 = Av0 , v2 = Av1 , v3 = Av2 , etc.

3. These converge to the steady state vector w.
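
Both recipes are easy to carry out for the Red Box matrix. The sketch below (Python/numpy, an assumption rather than the text's tooling) finds the steady state first from a 1-eigenvector and then by iteration.

```python
import numpy as np

A = np.array([[.3, .4, .5],
              [.3, .4, .3],
              [.4, .2, .2]])

# Recipe 1: take a 1-eigenvector and divide by the sum of its entries.
w, V = np.linalg.eig(A)
v = np.real(V[:, np.argmin(abs(w - 1))])   # eigenvector for the eigenvalue 1
print(v / v.sum())                         # [0.3888..., 0.3333..., 0.2777...] = (7, 6, 5)/18

# Recipe 2: start with any vector whose entries sum to 1 and iterate.
u = np.array([1., 0., 0.])
for _ in range(50):
    u = A @ u
print(u)                                   # converges to the same steady state
```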

Example (A 2 × 2 stochastic matrix). Consider the positive stochastic matrix


 ‹
3/4 1/4
A= .
1/4 3/4
This matrix has characteristic polynomial

f(λ) = λ² − Tr(A)λ + det(A) = λ² − \frac{3}{2}λ + \frac{1}{2} = (λ − 1)(λ − 1/2).
Notice that 1 is strictly greater than the other eigenvalue, and that it has algebraic
(hence, geometric) multiplicity 1. One computes eigenvectors for the eigenvalues
1, 1/2 to be, respectively,

u_1 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \qquad u_2 = \begin{pmatrix} 1 \\ −1 \end{pmatrix}.

The eigenvector u_1 necessarily has positive entries; the steady-state vector is

w = \frac{1}{1+1}\begin{pmatrix} 1 \\ 1 \end{pmatrix} = \frac{1}{2}\begin{pmatrix} 1 \\ 1 \end{pmatrix} = \begin{pmatrix} 50\% \\ 50\% \end{pmatrix}.

The Perron–Frobenius theorem asserts that, for any vector v0 , the vectors v1 =
Av0 , v2 = Av1 , . . . approach a vector whose entries are the same: 50% of the sum
will be in the first entry, and 50% will be in the second.
We can see this explicitly, as follows. The eigenvectors u_1, u_2 form a basis B for
R², and for any vector x = a_1u_1 + a_2u_2 in R², we have

Ax = A(a_1u_1 + a_2u_2) = a_1Au_1 + a_2Au_2 = a_1u_1 + \frac{a_2}{2}u_2.

Iterating multiplication by A in this way, we have

A^t x = a_1u_1 + \frac{a_2}{2^t}u_2 \longrightarrow a_1u_1

as t → ∞. This shows that A^t x approaches a_1u_1 = \begin{pmatrix} a_1 \\ a_1 \end{pmatrix}.

Note that the sum of the entries of a1 u1 is equal to the sum of the entries of a1 u1 +
a2 u2 , since the entries of u2 sum to zero.
To illustrate the theorem with numbers, let us choose a particular value for u_0,
say u_0 = (1, 0). We compute the values of u_t = (x_t, y_t) in this table:

t xt yt
0 1.000 0.000
1 0.750 0.250
2 0.625 0.375
3 0.563 0.438
4 0.531 0.469
5 0.516 0.484
6 0.508 0.492
7 0.504 0.496
8 0.502 0.498
9 0.501 0.499
10 0.500 0.500

We see that u_t does indeed approach (0.5, 0.5).
Now we turn to visualizing the dynamics of (i.e., repeated multiplication by)
the matrix A. This matrix is diagonalizable; we have A = C DC −1 for
 ‹  ‹
1 1 1 0
C= D= .
1 −1 0 1/2

The matrix D leaves the x-coordinate unchanged and scales the y-coordinate by
1/2. Repeated multiplication by D makes the y-coordinate very small, so it “sucks
all vectors into the x-axis.”

The matrix A does the same thing as D, but with respect to the coordinate
system defined by the columns u1 , u2 of C. This means that A “sucks all vectors
into the 1-eigenspace”, without changing the sum of the entries of the vectors.

Use this link to view the online demo

Dynamics of the stochastic matrix A. Click “multiply” to multiply the colored points
by D on the left and A on the right. Note that on both sides, all vectors are “sucked
into the 1-eigenspace” (the red line).

Example. Continuing with the Red Box example, we can illustrate the Perron–Frobenius
theorem explicitly. The matrix
 
.3 .4 .5
A =  .3 .4 .3 
.4 .2 .2

has characteristic polynomial

f (λ) = −λ3 + 0.12λ − 0.02 = −(λ − 1)(λ + 0.2)(λ − 0.1).

Notice that 1 is strictly greater in absolute value than the other eigenvalues, and
that it has algebraic (hence, geometric) multiplicity 1. One computes eigenvectors
for the eigenvalues 1, −0.2, 0.1 to be, respectively,

u1 = (7, 6, 5)    u2 = (−1, 0, 1)    u3 = (1, −3, 2).

The eigenvector u1 necessarily has positive entries; the steady-state vector is

w = (1/(7 + 6 + 5)) u1 = (1/18)(7, 6, 5).

The eigenvectors u1 , u2 , u3 form a basis B for R3 ; for any vector x = a1 u1 + a2 u2 +


a3 u3 in R3 , we have

Ax = A(a1 u1 + a2 u2 + a3 u3 )
= a1 Au1 + a2 Au2 + a3 Au3
= a1 u1 − 0.2a2 u2 + 0.1a3 u3 .

Iterating multiplication by A in this way, we have

A^t x = a1 u1 − (0.2)^t a2 u2 + (0.1)^t a3 u3 −→ a1 u1

as t → ∞. This shows that A^t x approaches a1 u1, which is an eigenvector with


eigenvalue 1, as guaranteed by the Perron–Frobenius theorem.

What do the above calculations say about the number of copies of Prognosis
Negative in the Atlanta Red Box kiosks? Suppose that the kiosks start with 100
copies of the movie, with 30 copies at kiosk 1, 50 copies at kiosk 2, and 20 copies
at kiosk 3. Let v0 = (30, 50, 20) be the vector describing this state. Then there will
be v1 = Av0 movies in the kiosks the next day, v2 = Av1 the day after that, and so
on. We let vt = (x t , y t , z t ).

t xt yt zt
0 30.000000 50.000000 20.000000
1 39.000000 35.000000 26.000000
2 38.700000 33.500000 27.800000
3 38.910000 33.350000 27.740000
4 38.883000 33.335000 27.782000
5 38.889900 33.333500 27.776600
6 38.888670 33.333350 27.777980
7 38.888931 33.333335 27.777734
8 38.888880 33.333333 27.777786
9 38.888891 33.333333 27.777776
10 38.888889 33.333333 27.777778

(Of course it does not make sense to have a fractional number of movies; the
decimals are included here to illustrate the convergence.) The steady-state vector
says that eventually, the movies will be distributed in the kiosks according to the
percentages

w = (1/18)(7, 6, 5) = (38.888888%, 33.333333%, 27.777778%),

which agrees with the above table. Moreover, this distribution is independent of
the beginning distribution of movies in the kiosks.
Now we turn to visualizing the dynamics of (i.e., repeated multiplication by)
the matrix A. This matrix is diagonalizable; we have A = CDC⁻¹ for

C =  7  −1   1        D =  1    0    0
     6   0  −3             0  −0.2   0
     5   1   2             0    0   0.1 .

The matrix D leaves the x-coordinate unchanged, scales the y-coordinate by −1/5,
and scales the z-coordinate by 1/10. Repeated multiplication by D makes the y-
and z-coordinates very small, so it “sucks all vectors into the x-axis.”
The matrix A does the same thing as D, but with respect to the coordinate
system defined by the columns u1 , u2 , u3 of C. This means that A “sucks all vectors
into the 1-eigenspace”, without changing the sum of the entries of the vectors.

Use this link to view the online demo

Dynamics of the stochastic matrix A. Click “multiply” to multiply the colored points
by D on the left and A on the right. Note that on both sides, all vectors are “sucked
into the 1-eigenspace” (the green line). (We have scaled C by 1/4 so that vectors have
roughly the same size on the right and the left. The “jump” that happens when you
press “multiply” is a negation of the −.2-eigenspace, which is not animated.)

The picture of a positive stochastic matrix is always the same, whether or not
it is diagonalizable: all vectors are “sucked into the 1-eigenspace,” which is a line,
without changing the sum of the entries of the vectors. This is the geometric
content of the Perron–Frobenius theorem.

6.6.3 Google’s PageRank Algorithm


Internet searching in the 1990s was very inefficient. Yahoo or AltaVista would
scan pages for your search text, and simply list the results with the most occur-
rences of those words. Not surprisingly, the more unsavory websites soon learned
that by putting the words “Alanis Morissette” a million times in their pages, they
could show up first every time an angsty teenager tried to find Jagged Little Pill on
Napster.
Larry Page and Sergey Brin invented a way to rank pages by importance. They
founded Google based on their algorithm. Here is how it works. (Roughly.)
Each web page has an associated importance, or rank. This is a positive num-
ber. This rank is determined by the following rule.
The Importance Rule. If a page P links to n other pages Q 1 , Q 2 , . . . , Q n , then each
page Q_i inherits 1/n of P’s importance.
In practice, this means:
• If a very important page links to your page (and not to a zillion other ones
as well), then your page is considered important.

• If a zillion unimportant pages link to your page, then your page is still im-
portant.

• If only one unknown page links to yours, your page is not important.
Alternatively, there is the random surfer interpretation. A “random surfer” just sits
at his computer all day, randomly clicking on links. The pages he spends the most
time on should be the most important. So, the important (high-ranked) pages are
those where a random surfer will end up most often. This measure turns out to be
equivalent to the rank.
The Importance Matrix. Consider an Internet with n pages. The importance
matrix is the n × n matrix A whose i, j-entry is the importance that page j passes
to page i.

Observe that the importance matrix is a stochastic matrix, assuming every page
contains a link: if page i has m links, then the ith column contains the number
1/m, m times, and the number zero in the other entries.

Example. Consider the following Internet with only four pages. Links are indi-
cated by arrows.

[Figure: four pages A, B, C, D. Page A links to B, C, D (each link labeled 1/3); page B links to C, D (each labeled 1/2); page C links to A (labeled 1); page D links to A, C (each labeled 1/2).]

The Importance Rule says:


• Page A has 3 links, so it passes 1/3 of its importance to pages B, C, D.

• Page B has 2 links, so it passes 1/2 of its importance to pages C, D.

• Page C has one link, so it passes all of its importance to page A.

• Page D has 2 links, so it passes 1/2 of its importance to pages A, C.

In terms of matrices, if v = (a, b, c, d) is the vector containing the ranks a, b, c, d


of the pages A, B, C, D, then

  0    0    1   1/2     a       c + d/2                  a
 1/3   0    0    0      b   =   a/3                  =   b
 1/3  1/2   0   1/2     c       a/3 + b/2 + d/2          c
 1/3  1/2   0    0      d       a/3 + b/2                d

The matrix on the left is the Importance Matrix, and the final equality expresses
the Importance Rule.

The above example illustrates the key observation.

Key Observation. The rank vector is an eigenvector of the importance matrix


with eigenvalue one.

In light of the key observation, we would like to use the Perron–Frobenius


theorem to find the rank vector. Unfortunately, the Importance Matrix is not always
a positive stochastic matrix.

Example (A page with no links). Consider the following Internet with three pages:

[Figure: three pages A, B, C. Pages A and B each link to page C; page C has no links.]

The importance matrix is  


0 0 0
0 0 0.
1 1 0
This has characteristic polynomial
 
−λ 0 0
f (λ) = det  0 −λ 0  = −λ3 .
1 1 −λ

So 1 is not an eigenvalue at all: there is no rank vector! The Importance Matrix is


not stochastic because the page C has no links.

Example (Disconnected Internet). Consider the following Internet:

[Figure: five pages A, B, C, D, E. Pages A and B link only to each other; pages C, D, E each link to the other two (each of those links labeled 1/2).]

The Importance Matrix is

0  1   0    0    0
1  0   0    0    0
0  0   0   1/2  1/2
0  0  1/2   0   1/2
0  0  1/2  1/2   0 .

This has linearly independent eigenvectors


   
1 0
1 0
0 and 1 ,
   
0 1
0 1

both with eigenvalue 1. So there is more than one rank vector in this case. Here
the Importance Matrix is stochastic, but not positive.
Here is Page and Brin’s solution. First we fix the importance matrix by replacing
each zero column with a column all of whose entries are 1/n, where n is the number of pages:
   
A =  0 0 0      becomes      A′ =  0 0 1/3
     0 0 0                         0 0 1/3
     1 1 0                         1 1 1/3 .

The modified Importance Matrix A′ is always stochastic.


Now we choose a number p in (0, 1), called the damping factor. (A typical
value is p = 0.15.)
The Google Matrix. Let A be the Importance Matrix for an Internet with n pages,
and let A′ be the modified Importance Matrix. The Google Matrix is the matrix

M = (1 − p) · A′ + p · B,    where B is the n × n matrix all of whose entries equal 1/n.

In the random surfer interpretation, this matrix M says: with probability p,


our surfer will surf to a completely random page; otherwise, he’ll click a random
link on the current page, unless the current page has no links, in which case he’ll
surf to a completely random page anyway.
The reader can verify the following important fact.
Fact. The Google Matrix is a positive stochastic matrix.
If we declare that the ranks of all of the pages must sum to one, then we find:

The 25 Billion Dollar Eigenvector. The PageRank vector is the steady state
of the Google Matrix.

This exists and has positive entries by the Perron–Frobenius theorem. The hard
part is calculating it: in real life, the Google Matrix has zillions of rows.
Example. What is the PageRank vector for the following Internet? (Use the damp-
ing factor p = 0.15.)

[Figure: four pages A, B, C, D. Page A links to B, C, D (each link labeled 1/3); page B links to C, D (each labeled 1/2); page D links to A, C (each labeled 1/2); page C has no links.]

Which page is the most important? Which is the least important?


Solution. First we compute the modified Importance Matrix:
   
0 0 0 21 0 0 14 12
 1 0 0 0  modify  1 0 1 0
A= 1 1
 3 
1 −−−→ A = 
0 3 4
1 1 1 1

3 2 0 2 3 2 4 2
1 1 1 1 1
3 2 0 0 3 2 4 0

Choosing the damping factor p = 0.15, the Google Matrix is

M = 0.85 ·   0    0   1/4  1/2     + 0.15 ·   1/4 1/4 1/4 1/4
            1/3   0   1/4   0                 1/4 1/4 1/4 1/4
            1/3  1/2  1/4  1/2                1/4 1/4 1/4 1/4
            1/3  1/2  1/4   0                 1/4 1/4 1/4 1/4

  ≈   0.0375  0.0375  0.2500  0.4625
      0.3208  0.0375  0.2500  0.0375
      0.3208  0.4625  0.2500  0.4625
      0.3208  0.4625  0.2500  0.0375 .
The PageRank vector is the steady state:

w ≈ (0.2192, 0.1752, 0.3558, 0.2498).
This is the PageRank:

[Figure: the same Internet, with each page labeled by its rank: A ≈ 0.22, B ≈ 0.18, C ≈ 0.35, D ≈ 0.25.]

Page C is the most important, with a rank of 0.3558, and page B is the least
important, with a rank of 0.1752.
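As a sanity check, here is a hedged Python/NumPy sketch that rebuilds the Google Matrix for this four-page Internet and recovers the PageRank vector by power iteration (the variable names are ours):

    import numpy as np

    # Modified importance matrix A' for pages A, B, C, D
    # (page C has no links, so its column was replaced by all 1/4's).
    A_prime = np.array([[0,   0,   1/4, 1/2],
                        [1/3, 0,   1/4, 0  ],
                        [1/3, 1/2, 1/4, 1/2],
                        [1/3, 1/2, 1/4, 0  ]])

    p = 0.15                              # damping factor
    n = 4
    B = np.ones((n, n)) / n               # matrix of all 1/n's
    M = (1 - p) * A_prime + p * B         # the Google Matrix

    w = np.full(n, 1/n)                   # start with entries summing to 1
    for _ in range(100):                  # power iteration
        w = M @ w

    print(np.round(w, 4))                 # approximately [0.2192 0.1752 0.3558 0.2498]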
Chapter 7

Orthogonality

Let us recall one last time the structure of this book:

1. Solve the matrix equation Ax = b.

2. Solve the matrix equation Ax = λx, where λ is a number.

3. Approximately solve the matrix equation Ax = b.

We have now come to the third part.

Primary Goal. Approximately solve the matrix equation Ax = b.

Finding approximate solutions of equations generally requires computing the


closest vector on a subspace from a given vector. This becomes an orthogonality
problem: one needs to know which vectors are perpendicular to the subspace.

[Figure: a vector, a subspace, and the closest point on the subspace; the difference is perpendicular to the subspace.]

First we will define orthogonality and learn to find orthogonal complements of


subspaces in Section 7.1 and Section 7.2. The core of this chapter is Section 7.3,
in which we discuss the orthogonal projection of a vector onto a subspace; this is
a method of calculating the closest vector on a subspace to a given vector. These
calculations become easier in the presence of an orthogonal set, as we will see
in Section 7.4. In Section 7.5 we will present the least-squares method of ap-
proximately solving systems of equations, and we will give applications to data
modeling.


Example. In data modeling, one often asks: “what line is my data supposed to lie
on?” This can be solved using a simple application of the least-squares method.

Example. Gauss invented the method of least squares to find a best-fit ellipse: he
correctly predicted the (elliptical) orbit of the asteroid Ceres as it passed behind
the sun in 1801.

7.1 Dot Products and Orthogonality

Objectives

1. Understand the relationship between the dot product, length, and distance.

2. Understand the relationship between the dot product and orthogonality.

3. Vocabulary words: dot product, length, distance, unit vector, unit vector
in the direction of x.

4. Essential vocabulary word: orthogonal.

In this chapter, it will be necessary to find the closest point on a subspace to a


given point, like so:

[Figure: a point and the closest point to it on a subspace.]

The closest point has the property that the difference between the two points is
orthogonal, or perpendicular, to the subspace. For this reason, we need to develop
robust notions of orthogonality, length, and distance.

7.1.1 The Dot Product


The basic construction in this section is the dot product, which measures angles
between vectors and computes the length of a vector.
Definition. The dot product of two vectors x, y in Rn is

x · y = (x1, x2, . . . , xn) · (y1, y2, . . . , yn) = x1 y1 + x2 y2 + · · · + xn yn.

Thinking of x, y as column vectors, this is the same as x T y.


For example,
     
1 4  4
2 · 5 = 1 2 3 5 = 1 · 4 + 2 · 5 + 3 · 6 = 32.
3 6 6

Notice that the dot product of two vectors is a scalar.


You can do arithmetic with dot products mostly as usual, as long as you re-
member you can only dot two vectors together, and that the result is a scalar.

Properties of the Dot Product. Let x, y, z be vectors in Rn and let c be a scalar.

1. Commutativity: x · y = y · x.

2. Distributivity with addition: (x + y) · z = x · z + y · z.

3. Distributivity with scalar multiplication: (c x) · y = c(x · y).

The dot product of a vector with itself is an important special case:

x · x = x1² + x2² + · · · + xn².

Therefore, for any vector x, we have:

• x·x ≥0

• x · x = 0 ⇐⇒ x = 0.

This leads to a good definition of length.

Fact. The length of a vector x in Rn is the number

‖x‖ = √(x · x) = √(x1² + x2² + · · · + xn²).

It is easy to see why this is true for vectors in R2, by the Pythagorean theorem.

[Figure: the vector (3, 4) drawn in the plane; its length is √(3² + 4²) = 5 by the Pythagorean theorem.]
For vectors in R3 , one can check that kxk really is the length of x, although
now this requires two applications of the Pythagorean theorem.
Note that the length of a vector is the length of the arrow; if we think in terms
of points, then the length is its distance from the origin.

Fact. If x is a vector and c is a scalar, then kc xk = |c| · kxk.



This says that scaling a vector by c scales its length by |c|. For example,

‖(6, 8)‖ = ‖2(3, 4)‖ = 2‖(3, 4)‖ = 10.

Now that we have a good notion of length, we can define the distance between
points in Rn . Recall that the difference between two points x, y is naturally a
vector, namely, the vector y − x pointing from x to y.

Definition. The distance between two points x, y in Rn is the length of the vector
from x to y:
dist(x, y) = k y − xk.

Example. Find the distance from (1, 2) to (4, 4).


Solution. Let x = (1, 2) and y = (4, 4). Then

dist(x, y) = ‖y − x‖ = ‖(3, 2)‖ = √(3² + 2²) = √13.

[Figure: the points x = (1, 2) and y = (4, 4), with the vector y − x drawn from x to y.]

Vectors with length one are very common in applications, so we give them a
name.
p
Definition. A unit vector is a vector x with length kxk = x · x = 1.

The standard coordinate vectors e1, e2, e3, . . . are unit vectors (they have length one):

‖e1‖ = ‖(1, 0, 0)‖ = √(1² + 0² + 0²) = 1.

For any nonzero vector x, there is a unique unit vector pointing in the same direc-
tion. It is obtained by dividing by the length of x.

Fact. Let x be a nonzero vector in Rn. The unit vector in the direction of x is the vector x/‖x‖.

This is in fact a unit vector: since ‖x‖ is a positive number, we have |1/‖x‖| = 1/‖x‖, so the length of x/‖x‖ is (1/‖x‖)‖x‖ = 1.

Example. What is the unit vector u in the direction of x = (3, 4)?

Solution. We divide by the length of x:

u = x/‖x‖ = (1/√(3² + 4²)) (3, 4) = (1/5)(3, 4) = (3/5, 4/5).

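The computations of this subsection are easy to reproduce numerically. Here is an illustrative NumPy sketch (our own, not part of the text's development):

    import numpy as np

    x = np.array([3.0, 4.0])
    y = np.array([1.0, 2.0])

    dot = x @ y                      # dot product: 3*1 + 4*2 = 11
    length = np.sqrt(x @ x)          # length of x: sqrt(3^2 + 4^2) = 5
    dist = np.linalg.norm(y - x)     # distance from x to y
    u = x / np.linalg.norm(x)        # unit vector in the direction of x: [0.6, 0.8]

    print(dot, length, dist, u)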

7.1.2 Orthogonal Vectors


In this section, we show how the dot product can be used to provide a robust
notion of orthogonality, i.e., when two vectors are perpendicular to each other.
Important Fact. Two vectors x, y in Rn are orthogonal or perpendicular if x · y =
0.
Notation: x ⊥ y means x · y = 0.

Since 0 · x = 0 for any vector x, the zero vector is orthogonal to every vector
in Rn .

We can see why the important fact is true using the law of cosines. In our
language, the law of cosines asserts that if x, y are two nonzero vectors, and if α
is the angle between them, then
k y − xk2 = kxk2 + k yk2 − 2kxk k yk cos α.

[Figure: a triangle with sides x, y, and y − x; the angle between x and y is α.]

In particular, α = 90◦ if and only if cos(α) = 0, which happens if and only if


k y − xk2 = kxk2 + k yk2 . Therefore,
x and y are perpendicular ⇐⇒ kxk2 + k yk2 = k y − xk2
⇐⇒ x · x + y · y = ( y − x) · ( y − x)
⇐⇒ x · x + y · y = y · y + x · x − 2x · y
⇐⇒ x · y = 0.
To reiterate:

x⊥y ⇐⇒ x· y =0 ⇐⇒ k y − xk2 = kxk2 + k yk2 .


 
Example. Find all vectors orthogonal to v = (1, 1, −1).

Solution. We have to find all vectors x such that x · v = 0. This means solving the equation

0 = x · v = (x1, x2, x3) · (1, 1, −1) = x1 + x2 − x3.

The parametric form for the solution set is x1 = −x2 + x3, so the parametric vector form of the general solution is

x = (x1, x2, x3) = x2 (−1, 1, 0) + x3 (1, 0, 1).

Therefore, the answer is the plane

Span{ (−1, 1, 0), (1, 0, 1) }.

For instance,

(−1, 1, 0) ⊥ (1, 1, −1)    because    (−1, 1, 0) · (1, 1, −1) = 0.
 
 
Example. Find all vectors orthogonal to both v = (1, 1, −1) and w = (1, 1, 1).

Solution. We have to solve the system of two homogeneous equations

0 = x · v = (x1, x2, x3) · (1, 1, −1) = x1 + x2 − x3
0 = x · w = (x1, x2, x3) · (1, 1, 1) = x1 + x2 + x3.

In matrix form:

1 1 −1    RREF    1 1 0
1 1  1    −−−→    0 0 1 .

The parametric vector form of the solution set is

(x1, x2, x3) = x2 (−1, 1, 0).

Therefore, the answer is the line

Span{ (−1, 1, 0) }.

For instance,

(−1, 1, 0) · (1, 1, −1) = 0    and    (−1, 1, 0) · (1, 1, 1) = 0.

Remark (Angle between two vectors). More generally, the law of cosines gives a
formula for the angle α between two nonzero vectors:

2‖x‖‖y‖ cos(α) = ‖x‖² + ‖y‖² − ‖y − x‖²
               = x · x + y · y − (y − x) · (y − x)
               = x · x + y · y − y · y − x · x + 2x · y
               = 2x · y

=⇒  α = cos⁻¹( (x · y) / (‖x‖‖y‖) ).
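This formula is easy to evaluate numerically. Here is a small NumPy sketch (our own illustration); the clip call only guards against roundoff pushing the cosine slightly outside [−1, 1]:

    import numpy as np

    def angle(x, y):
        """Angle (in radians) between two nonzero vectors x and y."""
        cos_alpha = (x @ y) / (np.linalg.norm(x) * np.linalg.norm(y))
        return np.arccos(np.clip(cos_alpha, -1.0, 1.0))

    x = np.array([1.0, 1.0, -1.0])
    y = np.array([1.0, 1.0,  1.0])
    print(np.degrees(angle(x, y)))   # about 70.53 degrees, since cos(alpha) = 1/3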

7.2 Orthogonal Complements

Objectives

1. Understand the basic properties of orthogonal complements.

2. Recipes: shortcuts for computing the orthogonal complements of common


subspaces.

3. Picture: orthogonal complements in R2 and R3 .

4. Vocabulary words: orthogonal complement, row space.

It will be important to compute the set of all vectors that are orthogonal to a
given set of vectors. It turns out that a vector is orthogonal to a set of vectors if
and only if it is orthogonal to the span of those vectors, which is a subspace, so we
restrict ourselves to the case of subspaces.

Definition. Let W be a subspace of Rn . Its orthogonal complement is the sub-


space 
W⊥ = { v in Rn | v · w = 0 for all w in W }.
The symbol W ⊥ is read “W perp”.

This is the set of all vectors v in Rn that are orthogonal to all of the vectors in
W . We will show below that W ⊥ is indeed a subspace.

Note. We now have two similar-looking pieces of notation:

AT is the transpose of a matrix A.


W ⊥ is the orthogonal complement of a subspace W .

Try not to confuse the two.

Pictures of orthogonal complements The orthogonal complement of a line W


through the origin in R2 is the perpendicular line W ⊥ .

[Figure: a line W through the origin in R2 and the perpendicular line W⊥.]

Interactive: Orthogonal complements in R2 .

Use this link to view the online demo

The orthogonal complement of the line spanned by v is the perpendicular line. Click
and drag the head of v to move it.

The orthogonal complement of a line W in R3 is the perpendicular plane W ⊥ .

[Figure: a line W in R3 and the perpendicular plane W⊥.]

Interactive: Orthogonal complements in R3 .

Use this link to view the online demo

The orthogonal complement of the line spanned by v is the perpendicular plane. Click
and drag the head of v to move it.

The orthogonal complement of a plane W in R3 is the perpendicular line W ⊥ .

[Figure: a plane W in R3 and the perpendicular line W⊥.]

Interactive: Orthogonal complements in R3 .

Use this link to view the online demo

The orthogonal complement of the plane spanned by v, w is the perpendicular line.


Click and drag the heads of v, w to change the plane.

We see in the above pictures that (W ⊥ )⊥ = W .


Example. The orthogonal complement of Rn is {0}, since the zero vector is the
only vector that is orthogonal to all of the vectors in Rn .
For the same reason, we have {0}⊥ = Rn .
Since any subspace is a span, the following proposition gives a recipe for com-
puting the orthogonal complement of any subspace. However, below we will give
several shortcuts for computing the orthogonal complements of other common
kinds of subspaces.
Proposition (The orthogonal complement of a span). Let v1 , v2 , . . . , vm be vectors
in Rn , and let W = Span{v1 , v2 , . . . , vm }. Then
W⊥ = { all vectors orthogonal to each v1, v2, . . . , vm } = Nul  — v1T —
                                                                 — v2T —
                                                                    ⋮
                                                                 — vmT —  .
Proof. To justify the first equality, we need to show that a vector x is perpendicular
to the all of the vectors in W if and only if it is perpendicular only to v1 , v2 , . . . , vm .
Since the vi are contained in W , we really only have to show that if x · v1 = x · v2 =
· · · = x · vm = 0, then x is perpendicular to every vector v in W . Indeed, any vector
in W has the form v = c1 v1 + c2 v2 + · · · + cm vm for suitable scalars c1 , c2 , . . . , cm , so
x · v = x · (c1 v1 + c2 v2 + · · · + cm vm )
= c1 (x · v1 ) + c2 (x · v2 ) + · · · + cm (x · vm )
= c1 (0) + c2 (0) + · · · + cm (0) = 0.
Therefore, x is in W ⊥ .
To prove the second equality, we let
— v1T —
 
 — v2T — 
A= .. .
 . 
— vmT —
By the row-column rule for matrix multiplication in Section 3.3, for any vector x in Rn we have

Ax = ( v1T x, v2T x, . . . , vmT x ) = ( v1 · x, v2 · x, . . . , vm · x ).

Therefore, x is in Nul(A) if and only if x is perpendicular to each vector v1 , v2 , . . . , vm .

By the proposition, computing the orthogonal complement of a span means


solving a system of linear equations. For example, if

v1 = (1, 7, 2)    v2 = (−2, 3, 1)

then Span{v1, v2}⊥ is the solution set of the homogeneous linear system associated to the matrix

— v1T —     =     1  7  2
— v2T —          −2  3  1 .

This is the solution set of the system of equations

   x1 + 7x2 + 2x3 = 0
 −2x1 + 3x2 +  x3 = 0.

Example. Compute W⊥, where

W = Span{ (1, 7, 2), (−2, 3, 1) }.

Solution. According to the proposition, we need to compute the null space of the matrix

 1  7  2    RREF    1  0  −1/17
−2  3  1    −−−→    0  1   5/17 .

The free variable is x3, so the parametric form of the solution set is x1 = x3/17, x2 = −5x3/17, and the parametric vector form is

(x1, x2, x3) = x3 (1/17, −5/17, 1).

Scaling by a factor of 17, we see that

W⊥ = Span{ (1, −5, 17) }.

We can check our work:

(1, 7, 2) · (1, −5, 17) = 0    and    (−2, 3, 1) · (1, −5, 17) = 0.
 
Example. Find all vectors orthogonal to v = (1, 1, −1).

Solution. According to the proposition, we need to compute the null space of the matrix

A = ( — v — ) = ( 1 1 −1 ).

This matrix is in reduced row echelon form. The parametric form for the solution set is x1 = −x2 + x3, so the parametric vector form of the general solution is

x = (x1, x2, x3) = x2 (−1, 1, 0) + x3 (1, 0, 1).

Therefore, the answer is the plane

Span{ (−1, 1, 0), (1, 0, 1) }.

Use this link to view the online demo

The set of all vectors perpendicular to v.

Example. Compute

Span{ (1, 1, −1), (1, 1, 1) }⊥.

Solution. According to the proposition, we need to compute the null space of the matrix

A =  1 1 −1    RREF    1 1 0
     1 1  1    −−−→    0 0 1 .

The parametric vector form of the solution is

(x1, x2, x3) = x2 (−1, 1, 0).

Therefore, the answer is the line

Span{ (−1, 1, 0) }.

Use this link to view the online demo

The orthogonal complement of the plane spanned by v = (1, 1, −1) and w = (1, 1, 1).

In order to find shortcuts for computing orthogonal complements, we need the


following basic facts.

Facts about Orthogonal Complements. Let W be a subspace of Rn . Then:

1. W ⊥ is also a subspace of Rn .

2. (W ⊥ )⊥ = W.

3. dim(W ) + dim(W ⊥ ) = n.

Proof. For the first assertion, we verify the three defining properties of subspaces.

1. The zero vector is in W ⊥ because the zero vector is orthogonal to every vector
in Rn .

2. Let u, v be in W ⊥ , so u · x = 0 and v · x = 0 for every vector x in W . We must


verify that (u + v) · x = 0 for every x in W . Indeed, we have

(u + v) · x = u · x + v · x = 0 + 0 = 0.

3. Let u be in W ⊥ , so u · x = 0 for every x in W , and let c be a scalar. We must


verify that (cu) · x = 0 for every x in W . Indeed, we have

(cu) · x = c(u · x) = c0 = 0.

Next we prove the third assertion. Let v1 , v2 , . . . , vm be a basis for W , so m =


dim(W ), and let vm+1 , vm+2 , . . . , vk be a basis for W ⊥ , so k − m = dim(W ⊥ ). We
need to show k = n. First we claim that {v1 , v2 , . . . , vm , vm+1 , vm+2 , . . . , vk } is linearly
independent. Suppose that c1 v1 +c2 v2 +· · ·+ck vk = 0. Let w = c1 v1 +c2 v2 +· · ·+cm vm
and w′ = cm+1 vm+1 + cm+2 vm+2 + · · · + ck vk, so w is in W, w′ is in W⊥, and w + w′ = 0. Then w = −w′ is in both W and W⊥, which implies w is perpendicular to itself. In particular, w · w = 0, so w = 0, and hence w′ = 0. Therefore, all coefficients
ci are equal to zero, because {v1 , v2 , . . . , vm } and {vm+1 , vm+2 , . . . , vk } are linearly
independent.
It follows from the previous paragraph that k ≤ n. Suppose that k < n. Then
the matrix
— v1T —
 
 — v2T — 
A= .. 
 . 
— vkT —

has more columns than rows (it is “wide”), so its null space is nonzero by this note
in Section 4.2. Let x be a nonzero vector in Nul(A). Then

0 = Ax = ( v1T x, v2T x, . . . , vkT x ) = ( v1 · x, v2 · x, . . . , vk · x )

by the row-column rule for matrix multiplication in Section 3.3. Since v1 · x =


v2 · x = · · · = vm · x = 0, it follows from this proposition that x is in W ⊥ , and
similarly, x is in (W ⊥ )⊥ . As above, this implies x is orthogonal to itself, which
contradicts our assumption that x is nonzero. Therefore, k = n, as desired.
Finally, we prove the second assertion. Clearly W is contained in (W ⊥ )⊥ : this
says that everything in W is perpendicular to the set of all vectors perpendicu-
lar to everything in W . Let m = dim(W ). By 3, we have dim(W ⊥ ) = n − m, so
dim((W ⊥ )⊥ ) = n − (n − m) = m. The only m-dimensional subspace of (W ⊥ )⊥ is all
of (W ⊥ )⊥ , so (W ⊥ )⊥ = W.
See these paragraphs for pictures of the second property. As for the third: for
example, if W is a (2-dimensional) plane in R4 , then W ⊥ is another (2-dimensional)
plane. Explicitly, we have

Span{e1, e2}⊥ = { (x, y, z, w) in R4 | (x, y, z, w) · (1, 0, 0, 0) = 0 and (x, y, z, w) · (0, 1, 0, 0) = 0 }
              = { (0, 0, z, w) in R4 }
              = Span{e3, e4}:

the orthogonal complement of the x y-plane is the zw-plane.


Definition. The row space of a matrix A is the span of the rows of A, and is denoted
Row(A).
If A is an m × n matrix, then the rows of A are vectors with n entries, so Row(A)
is a subspace of Rn . Equivalently, since the rows of A are the columns of AT , the
row space of A is the column space of AT :

Row(A) = Col(AT ).

We showed in the above proposition that if A has rows v1T , v2T , . . . , vmT , then

Row(A)⊥ = Span{v1 , v2 , . . . , vm }⊥ = Nul(A).

Taking orthogonal complements of both sides and using the second fact gives

Row(A) = Nul(A)⊥ .

Replacing A by AT and remembering that Row(A) = Col(AT ) gives

Col(A)⊥ = Nul(AT ) and Col(A) = Nul(AT )⊥ .

To summarize:

Recipes: Shortcuts for computing orthogonal complements. For any vec-


tors v1 , v2 , . . . , vm , we have

Span{v1, v2, . . . , vm}⊥ = Nul  — v1T —
                                — v2T —
                                   ⋮
                                — vmT —  .

For any matrix A, we have

Row(A)⊥ = Nul(A) Nul(A)⊥ = Row(A)


Col(A)⊥ = Nul(AT ) Nul(AT )⊥ = Col(A).
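In software, these recipes amount to a null space computation. Here is a hedged sketch using SymPy (any exact null-space routine would do); it recomputes W⊥ for the span of (1, 7, 2) and (−2, 3, 1) from the earlier example:

    import sympy as sp

    # Rows are the spanning vectors v1, v2 of W.
    A = sp.Matrix([[ 1, 7, 2],
                   [-2, 3, 1]])

    # W-perp = Nul(A); sympy returns a basis for the null space.
    print(A.nullspace())    # [Matrix([[1/17], [-5/17], [1]])], i.e. Span{(1, -5, 17)}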

Example. Compute the orthogonal complement of the subspace



W = { (x, y, z) in R3 | 3x + 2y = z }.

Solution. Rewriting, we see that W is the solution set of the system of equations 3x + 2y − z = 0, i.e., the null space of the matrix A = ( 3 2 −1 ). Therefore,

W⊥ = Row(A) = Span{ (3, 2, −1) }.

No row reduction was needed!

Example (Orthogonal complement of an eigenspace). Find the orthogonal com-


plement of the 5-eigenspace of the matrix
 
2 4 −1
A =  3 2 0 .
−2 4 3

Solution. The 5-eigenspace is

W = Nul(A − 5I3) = Nul  −3  4 −1
                         3 −3  0
                        −2  4 −2 ,

so

W⊥ = Row  −3  4 −1     = Span{ (−3, 4, −1), (3, −3, 0), (−2, 4, −2) }.
           3 −3  0
          −2  4 −2

These vectors are necessarily linearly dependent (why?).
Remark (Row rank and column rank). Let A be an m × n matrix. By the rank
theorem in Section 3.9, we have
dim Col(A) + dim Nul(A) = n.
On the other hand the third fact says that
dim Nul(A)⊥ + dim Nul(A) = n,
which implies dim Col(A) = dim Nul(A)⊥ . Since Nul(A)⊥ = Row(A), we have
dim Col(A) = dim Row(A).
In other words, the span of the rows of A has the same dimension as the span of the
columns of A, even though the first lives in Rn and the second lives in Rm . This fact
is often stated as “the row rank equals the column rank.”

7.3 Orthogonal Projection

Objectives
1. Understand the orthogonal decomposition of a vector with respect to a sub-
space.
2. Understand the relationship between orthogonal decomposition and orthog-
onal projection.
3. Understand the relationship between orthogonal decomposition and the clos-
est vector on / distance to a subspace.
4. Learn the basic properties of orthogonal projections as linear transforma-
tions and as matrix transformations.
5. Recipes: orthogonal projection onto a line, orthogonal decomposition by
solving a system of equations, orthogonal projection via a complicated ma-
trix product.
6. Pictures: orthogonal decomposition, orthogonal projection.
7. Vocabulary words: orthogonal decomposition, orthogonal projection.

Let W be a subspace of Rn and let x be a vector in Rn . In this section, we


will learn to compute the closest vector x W to x in W . The vector x W is called the
orthogonal projection of x onto W .

7.3.1 Orthogonal Decomposition


We begin by fixing some notation.
Notation. Let W be a subspace of Rn and let x be a vector in Rn . We denote the
closest vector to x on W by x W .
To say that x W is the closest vector to x on W means that the difference x − x W
is orthogonal to the vectors in W :

[Figure: the vector x, its closest vector xW on W, and the difference x − xW, which is perpendicular to W.]

In other words, if x W ⊥ = x − x W , then we have x = x W + x W ⊥ , where x W is in


W and x W ⊥ is in W ⊥ . The first order of business is to prove that the closest vector
always exists.
Theorem (Orthogonal decomposition). Let W be a subspace of Rn and let x be a
vector in Rn . Then we can write x uniquely as

x = xW + xW ⊥

where x W is the closest vector to x on W and x W ⊥ is in W ⊥ .


Proof. Let m = dim(W ), so n − m = dim(W ⊥ ) by this fact in Section 7.2. Let
v1 , v2 , . . . , vm be a basis for W and let vm+1 , vm+2 , . . . , vn be a basis for W ⊥ . We
showed in the proof of this fact in Section 7.2 that {v1 , v2 , . . . , vm , vm+1 , vm+2 , . . . , vn }
is linearly independent, so it forms a basis for Rn . Therefore, we can write

x = (c1 v1 + · · · + cm vm ) + (cm+1 vm+1 + · · · + cn vn ) = x W + x W ⊥ ,

where x W = c1 v1 + · · · + cm vm and x W ⊥ = cm+1 vm+1 + · · · + cn vn . Since x W ⊥ is


orthogonal to W , the vector x W is the closest vector to x on W , so this proves that
such a decomposition exists.
As for uniqueness, suppose that

x = x W + x W ⊥ = yW + yW ⊥

for x W , yW in W and x W ⊥ , yW ⊥ in W ⊥ . Rearranging gives

x W − yW = yW ⊥ − x W ⊥ .

Since W and W ⊥ are subspaces, the left side of the equation is in W and the right
side is in W ⊥ . Therefore, x W − yW is in W and in W ⊥ , so it is orthogonal to itself,
which implies x W − yW = 0. Hence x W = yW and x W ⊥ = yW ⊥ , which proves
uniqueness.

Definition. Let W be a subspace of Rn and let x be a vector in Rn . The expression

x = xW + xW ⊥

for x W in W and x W ⊥ in W ⊥ , is called the orthogonal decomposition of x with


respect to W , and the closest vector x W is the orthogonal projection of x onto
W.

Since x W is the closest vector on W to x, the distance from x to the subspace


W is the length of the vector from x W to x, i.e., the length of x W ⊥ . To restate:

Closest vector and distance. Let W be a subspace of Rn and let x be a vector


in Rn .

• The orthogonal projection x W is the closest vector to x in W .

• The distance from x to W is kx W ⊥ k.

Example (Orthogonal decomposition with respect to the x y-plane). Let W be


the x y-plane in R3 , so W ⊥ is the z-axis. It is easy to compute the orthogonal
decomposition of a vector with respect to this W:

x = (1, 2, 3)   =⇒   xW = (1, 2, 0),   xW⊥ = (0, 0, 3)
x = (a, b, c)   =⇒   xW = (a, b, 0),   xW⊥ = (0, 0, c).

We see that the orthogonal decomposition in this case expresses a vector in terms
of a “horizontal” component (in the x y-plane) and a “vertical” component (on the
z-axis).

[Figure: the orthogonal decomposition x = xW + xW⊥, with xW in the xy-plane and xW⊥ on the z-axis.]

Use this link to view the online demo

Orthogonal decomposition of a vector with respect to the x y-plane in R3 . Note that


x W is in the x y-plane and x W ⊥ is in the z-axis. Click and drag the head of the vector
x to see how the orthogonal decomposition changes.

Example (Orthogonal decomposition of a vector in W ). If x is in a subspace W ,


then the closest vector to x in W is itself, so x = x W and x W ⊥ = 0. Conversely, if
x = x W then x is contained in W because x W is contained in W .

Example (Orthogonal decomposition of a vector in W ⊥ ). If W is a subspace and


x is in W ⊥ , then the orthogonal decomposition of x is x = 0 + x, where 0 is in W
and x is in W ⊥ . It follows that x W = 0. Conversely, if x W = 0 then the orthogonal
decomposition of x is x = x W + x W ⊥ = 0 + x W ⊥ , so x = x W ⊥ is in W ⊥ .

Interactive: Orthogonal decomposition in R2 .

Use this link to view the online demo

Orthogonal decomposition of a vector with respect to a line W in R2 . Note that x W


is in W and x W ⊥ is in the line perpendicular to W . Click and drag the head of the
vector x to see how the orthogonal decomposition changes.

Interactive: Orthogonal decomposition in R3 .

Use this link to view the online demo

Orthogonal decomposition of a vector with respect to a plane W in R3 . Note that x W


is in W and x W ⊥ is in the line perpendicular to W . Click and drag the head of the
vector x to see how the orthogonal decomposition changes.

Interactive: Orthogonal decomposition in R3 .

Use this link to view the online demo

Orthogonal decomposition of a vector with respect to a line W in R3 . Note that x W


is in W and x W ⊥ is in the plane perpendicular to W . Click and drag the head of the
vector x to see how the orthogonal decomposition changes.

Now we turn to the problem of computing x W and x W ⊥ . Of course, since x W ⊥ =


x − x W , really all we need is to compute x W . The following theorem gives a method
for computing the orthogonal projection in terms of a spanning set.

Theorem. Let W be a subspace of Rn , let v1 , v2 , . . . , vm be a spanning set for W (e.g.,


a basis), and let A be the n × m matrix with columns v1 , v2 , . . . , vm :
 
| | |
A =  v1 v2 · · · vm  .
| | |

Let x be a vector in Rn . Then the matrix equation AT Ac = AT x in the unknown vector


c is consistent, and x W = Ac for any solution c.
Proof. Let x = x W + x W ⊥ be the orthogonal decomposition with respect to W . We
have AT x W ⊥ = 0 by this proposition in Section 7.2, so

AT x = AT (x W + x W ⊥ ) = AT x W + AT x W ⊥ = AT x W .

Since x W is in W , we can write x W = c1 v1 + c2 v2 + · · · + cm vm for some scalars


c1 , c2 , . . . , cm . Let c be the vector with entries c1 , c2 , . . . , cm . Then Ac = x W , so

AT x = AT x W = AT Ac.

This proves that the matrix equation AT Ac = AT x is consistent, and that x W = Ac


for a solution c.
Example (Orthogonal projection onto a line). Let L = Span{u} be a line in Rn
and let x be a vector in Rn . By the theorem, to find x L we must solve the matrix
equation u T uc = u T x, where we regard u as an n × 1 matrix. But u T u = u · u
and u T x = u · x, so c = (u · x)/(u · u) is a solution of u T uc = u T x, and hence
x L = uc = (u · x)/(u · u) u.

[Figure: a vector x, the line L = Span{u}, the projection xL = ((u · x)/(u · u)) u on L, and the perpendicular part xL⊥.]

To reiterate:

Recipe: Orthogonal projection onto a line. If L = Span{u} is a line, then


u· x
xL = u and x L⊥ = x − x L
u·u
for any vector x.
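This recipe is one line of NumPy. Here is an illustrative sketch (our own), using the numbers from the next example:

    import numpy as np

    def project_onto_line(x, u):
        """Orthogonal projection of x onto the line spanned by the nonzero vector u."""
        return (u @ x) / (u @ u) * u

    x = np.array([-6.0, 4.0])
    u = np.array([ 3.0, 2.0])
    xL = project_onto_line(x, u)     # equals (-10/13) * (3, 2)
    print(xL, x - xL)                # the projection xL and the perpendicular part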

Example (Projection onto a line in R2). Compute the orthogonal projection of x = (−6, 4) onto the line L spanned by u = (3, 2), and find the distance from x to L.

Solution. First we find

xL = (x · u)/(u · u) u = (−18 + 8)/(9 + 4) (3, 2) = −(10/13)(3, 2),    xL⊥ = x − xL = (1/13)(−48, 72).

The distance from x to L is

‖xL⊥‖ = (1/13)√(48² + 72²) ≈ 6.656.

 ‹
−6
4
 ‹
3
2

10 3
 ‹

13 2
L

Use this link to view the online demo

Distance from the line L.

Example (Projection onto a line in R3). Let

x = (−2, 3, −1)    u = (−1, 1, 1),

and let L be the line spanned by u. Compute xL and xL⊥.

Solution.

xL = (x · u)/(u · u) u = (2 + 3 − 1)/(1 + 1 + 1) (−1, 1, 1) = (4/3)(−1, 1, 1),    xL⊥ = x − xL = (1/3)(−2, 5, −7).

Use this link to view the online demo

Orthogonal projection onto the line L.



When W = Span{v1 , v2 , . . . , vm } has dimension greater than one, computing the


orthogonal projection of x onto W means solving the matrix equation AT Ac = AT x,
where A has columns v1 , v2 , . . . , vm . In other words, we can compute the closest
vector by solving a system of linear equations. To be explicit, we state the theorem
as a recipe:

Recipe: Compute an orthogonal decomposition. Let W =


Span{v1 , v2 , . . . , vm } and let A be the matrix with columns v1 , v2 , . . . , vm .
Here is a method to compute the orthogonal decomposition of a vector x with
respect to W :

1. Compute the matrix AT A and the vector AT x.

2. Form the augmented matrix for the matrix equation AT Ac = AT x in the


unknown vector c, and row reduce.

3. This equation is always consistent; choose one solution c. Then

x W = Ac xW ⊥ = x − xW .
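Numerically, this recipe is a single linear solve. Here is a hedged NumPy sketch (our own), using the plane from the example below, with W spanned by (1, 0, −1) and (1, 1, 0) and x = (1, 2, 3):

    import numpy as np

    A = np.array([[ 1, 1],
                  [ 0, 1],
                  [-1, 0]], dtype=float)   # columns span W
    x = np.array([1.0, 2.0, 3.0])

    # Solve A^T A c = A^T x.  (If the columns were linearly dependent,
    # np.linalg.lstsq would be the safer choice.)
    c = np.linalg.solve(A.T @ A, A.T @ x)
    xW = A @ c                             # orthogonal projection of x onto W
    xWperp = x - xW

    print(xW)       # approximately [1/3, 8/3, 7/3]
    print(xWperp)   # approximately [2/3, -2/3, 2/3]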

Example (Projection onto the x y-plane). Use the theorem to compute the orthog-
onal decomposition of a vector with respect to the x y-plane in R3 .
Solution. A basis for the xy-plane is given by the two standard coordinate vectors

e1 = (1, 0, 0)    e2 = (0, 1, 0).

Let A be the matrix with columns e1, e2:

A =  1 0
     0 1
     0 0 .

Then AT A = I2 and AT x = (x1, x2). It follows that the unique solution c of AT Ac = I2 c = AT x is given by the first two coordinates of x, so

xW = Ac = (x1, x2, 0),    xW⊥ = x − xW = (0, 0, x3).

We have recovered this example.



Example (Projection onto a plane in R3). Let

W = Span{ (1, 0, −1), (1, 1, 0) }    x = (1, 2, 3).

Compute xW and the distance from x to W.

Solution. We have to solve the matrix equation AT Ac = AT x, where

A =   1 1
      0 1
     −1 0 .

We have

AT A =  2 1        AT x =  −2
        1 2                 3 .

We form an augmented matrix and row reduce:

2 1 −2    RREF    1 0 −7/3
1 2  3    −−−→    0 1  8/3        =⇒   c = (1/3)(−7, 8).

It follows that

xW = Ac = (1/3)(1, 8, 7)    xW⊥ = x − xW = (1/3)(2, −2, 2).

The distance from x to W is

‖xW⊥‖ = (1/3)√(4 + 4 + 4) ≈ 1.155.

Use this link to view the online demo

Orthogonal projection onto the plane W .

Example (Projection onto a 3-space in R4). Let

W = Span{ (1, 0, −1, 0), (0, 1, 0, −1), (1, 1, 1, −1) }    x = (0, 1, 3, 4).

Compute the orthogonal decomposition of x with respect to W.

Solution. We have to solve the matrix equation AT Ac = AT x, where

A =   1  0  1
      0  1  1
     −1  0  1
      0 −1 −1 .

We compute

AT A =  2 0 0        AT x =  −3
        0 2 2                −3
        0 2 4                 0 .

We form an augmented matrix and row reduce:

2 0 0 −3    RREF    1 0 0 −3/2
0 2 2 −3    −−−→    0 1 0 −3          =⇒   c = (1/2)(−3, −6, 3).
0 2 4  0            0 0 1  3/2

It follows that

xW = Ac = (1/2)(0, −3, 6, 3)    xW⊥ = (1/2)(0, 5, 0, 5).

In the context of the above theorem, if we start with a basis of W , then it turns
out that the square matrix AT A is automatically invertible! (It is always the case
that AT A is square and the equation AT Ac = AT x is consistent, but AT A need not be
invertible in general.)

Corollary. Let W be a subspace of Rn , let v1 , v2 , . . . , vm be a basis for W , and let A be


the n × m matrix with columns v1 , v2 , . . . , vm :
 
| | |
A =  v1 v2 · · · vm  .
| | |

Then the m × m matrix AT A is invertible, and for all vectors x in Rn , we have

x W = A(AT A)−1 AT x.

Proof. We will show that Nul(AT A) = {0}, which implies invertibility by the invert-
ible matrix theorem in Section 6.1. Suppose that AT Ac = 0. Then AT Ac = AT 0, so
0W = Ac by the theorem. But 0W = 0 (the orthogonal decomposition of the zero
vector is just 0 = 0 + 0), so Ac = 0, and therefore c is in Nul(A). Since the columns
of A are linearly independent, we have c = 0, so Nul(AT A) = 0, as desired.
Let x be a vector in Rn and let c be a solution of AT Ac = AT x. Then c =
(AT A)−1 AT x, so x W = Ac = A(AT A)−1 AT x.

Example (Matrix for a projection). Continuing with the above example, let

W = Span{ (1, 0, −1), (1, 1, 0) }    x = (x1, x2, x3).

Compute xW.

Solution. Clearly the spanning vectors are noncollinear, so according to the corollary, we have xW = A(AT A)⁻¹AT x, where

A =   1 1
      0 1
     −1 0 .

We compute

AT A =  2 1       =⇒   (AT A)⁻¹ = (1/3)   2 −1
        1 2                              −1  2 ,

so

xW = A(AT A)⁻¹AT x = (1/3)   2 1 −1     x1
                             1 2  1     x2    = (1/3) ( 2x1 + x2 − x3,  x1 + 2x2 + x3,  −x1 + x2 + 2x3 ).
                            −1 1  2     x3
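As a hedged sanity check, the same projection matrix can be produced numerically with NumPy (our own sketch, assuming as in the corollary that the columns of A are a basis for W):

    import numpy as np

    A = np.array([[ 1, 1],
                  [ 0, 1],
                  [-1, 0]], dtype=float)     # columns form a basis of W

    P = A @ np.linalg.inv(A.T @ A) @ A.T     # projection matrix A (A^T A)^{-1} A^T
    print(np.round(3 * P))                   # 3P = [[2, 1, -1], [1, 2, 1], [-1, 1, 2]]

    x = np.array([1.0, 2.0, 3.0])
    print(P @ x)                             # same xW as before: [1/3, 8/3, 7/3]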

7.3.2 Orthogonal Projection


In this subsection, we change perspective and think of the orthogonal projection
x W as a function of x. This function turns out to be a linear transformation with
many nice properties, and is a good example of a linear transformation which is
not originally defined as a matrix transformation.

Properties of Orthogonal Projections. Let W be a subspace of Rn , and define


T : Rn → Rn by T (x) = x W . Then:

1. T is a linear transformation.

2. T (x) = x if and only if x is in W .

3. T (x) = 0 if and only if x is in W ⊥ .

4. T ◦ T = T .

5. The range of T is W .

Proof.

1. We have to verify the defining properties of linearity in Section 4.3. Let


x, y be vectors in Rn , and let x = x W + x W ⊥ and y = yW + yW ⊥ be their
orthogonal decompositions. Since W and W ⊥ are subspaces, the sums x W +
yW and x W ⊥ + yW ⊥ are in W and W ⊥ , respectively. Therefore, the orthogonal
decomposition of x + y is (x W + yW ) + (x W ⊥ + yW ⊥ ), so

T (x + y) = (x + y)W = x W + yW = T (x) + T ( y).

Now let c be a scalar. Then c x W is in W and c x W ⊥ is in W ⊥ , so the orthogonal


decomposition of c x is c x W + c x W ⊥ , and therefore,

T (c x) = (c x)W = c x W = cT (x).

Since T satisfies the two defining properties in Section 4.3, it is a linear


transformation.

2. See this example.

3. See this example.

4. For any x in Rn the vector T (x) is in W , so T ◦ T (x) = T (T (x)) = T (x) by


2.

5. Any vector x in W is in the range of T , because T (x) = x for such vectors.


On the other hand, for any vector x in Rn the output T (x) = x W is in W , so
W is the range of T .

We compute the standard matrix of the orthogonal projection in the same way
as for any other transformation: by evaluating on the standard coordinate vec-
tors. In this case, this means projecting the standard coordinate vectors onto the
subspace.
Example (Matrix of a projection). Let L be the line in R2 spanned by the vector u = (3, 2), and define T: R2 → R2 by T(x) = xL. Compute the standard matrix B for T.

Solution. The columns of B are T(e1) = (e1)L and T(e2) = (e2)L. We have

(e1)L = (u · e1)/(u · u) u = (3/13)(3, 2)
(e2)L = (u · e2)/(u · u) u = (2/13)(3, 2)
                                             =⇒   B = (1/13)   9 6
                                                               6 4 .

Example (Matrix of a projection). Let L be the line in R3 spanned by the vector

u = (−1, 1, 1),

and define T : R3 → R3 by T (x) = x L . Compute the standard matrix B for T .


Solution. The columns of B are T (e1 ) = (e1 ) L , T (e2 ) = (e2 ) L , and T (e3 ) = (e3 ) L .
We have

(e1)L = (u · e1)/(u · u) u = (−1/3)(−1, 1, 1)
(e2)L = (u · e2)/(u · u) u = (1/3)(−1, 1, 1)
(e3)L = (u · e3)/(u · u) u = (1/3)(−1, 1, 1)
                                               =⇒   B = (1/3)    1 −1 −1
                                                                −1  1  1
                                                                −1  1  1 .
Example (Matrix of a projection). Continuing with this example, let

W = Span{ (1, 0, −1), (1, 1, 0) },

and define T: R3 → R3 by T(x) = xW. Compute the standard matrix B for T.

Solution. The columns of B are T(e1) = (e1)W, T(e2) = (e2)W, and T(e3) = (e3)W. Let

A =   1 1
      0 1
     −1 0 .

To compute each (ei)W, we solve the matrix equation AT Ac = AT ei for c, then use the equality (ei)W = Ac. First we note that

AT A =  2 1        AT ei = the ith column of AT =  1 0 −1
        1 2                                        1 1  0 .

For e1, we form an augmented matrix and row reduce:

2 1 1    RREF    1 0 1/3         =⇒   (e1)W = A (1/3, 1/3) = (1/3)(2, 1, −1).
1 2 1    −−−→    0 1 1/3

We do the same for e2:

2 1 0    RREF    1 0 −1/3        =⇒   (e2)W = A (−1/3, 2/3) = (1/3)(1, 2, 1)
1 2 1    −−−→    0 1  2/3

and for e3:

2 1 −1   RREF    1 0 −2/3        =⇒   (e3)W = A (−2/3, 1/3) = (1/3)(−1, 1, 2).
1 2  0   −−−→    0 1  1/3

It follows that

B = (1/3)   2 1 −1
            1 2  1
           −1 1  2 .

In the previous example, we could have used the fact that

{ (1, 0, −1), (1, 1, 0) }

forms a basis for W, so that

T(x) = xW = A(AT A)⁻¹AT x    for    A =   1 1
                                          0 1
                                         −1 0

by the corollary. In this case, we have already expressed T as a matrix transfor-


mation with matrix A(AT A)−1 AT . See this example.

Let W be a subspace of Rn with basis v1 , v2 , . . . , vm , and let A be the matrix with


columns v1 , v2 , . . . , vm . Then the standard matrix for T (x) = x W is

A(AT A)−1 AT .

We can translate the above properties of orthogonal projections into properties


of the associated standard matrix.

Properties of Projection Matrices. Let W be a subspace of Rn , define T : Rn → Rn


by T (x) = x W , and let B be the standard matrix for T . Then:

1. Col(B) = W.

2. Nul(B) = W ⊥ .

3. B 2 = B.

4. B is similar to the diagonal matrix with m ones and n−m zeros on the diagonal,
where m = dim(W ).

In particular, B has eigenvalues 0 (unless W = Rn ) and 1 (unless W = {0}).



Proof. The first three assertions are translations of properties 5, 3, and 4, respec-
tively, using this important note in Section 4.1 and this theorem in Section 4.4.
For the final assertion, we showed in the proof of this theorem that there is a
basis of Rn of the form {v1 , . . . , vm , vm+1 , . . . , vn }, where {v1 , . . . , vm } is a basis for
W and {vm+1 , . . . , vn } is a basis for W ⊥ . Each vi is an eigenvector of B: indeed, for
i ≤ m we have
Bvi = T (vi ) = vi = 1 · vi
because vi is in W , and for i > m we have

Bvi = T (vi ) = 0 = 0 · vi

because vi is in W ⊥ . Therefore, we have found a basis of eigenvectors, with as-


sociated eigenvalues 1, . . . , 1, 0, . . . , 0 (m ones and n − m zeros). Now we use the
diagonalization theorem in Section 6.4.
Example. Continuing with the above example, we showed that

B = (1/3)   2 1 −1
            1 2  1
           −1 1  2

is the standard matrix of the orthogonal projection onto

W = Span{ (1, 0, −1), (1, 1, 0) }.

One can verify by hand that B² = B (try it!). We compute W⊥ as the null space of

1 0 −1    RREF    1 0 −1
1 1  0    −−−→    0 1  1 .

The free variable is x3, and the parametric form is x1 = x3, x2 = −x3, so that

W⊥ = Span{ (1, −1, 1) }.

It follows that B has eigenvectors

(1, 0, −1),   (1, 1, 0),   (1, −1, 1)

with eigenvalues 1, 1, 0, respectively, so that

B =   1  1  1      1 0 0      1  1  1  ⁻¹
      0  1 −1      0 1 0      0  1 −1
     −1  0  1      0 0 0     −1  0  1    .
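These claims are easy to check numerically. A hedged NumPy sketch (our own):

    import numpy as np

    B = np.array([[ 2, 1, -1],
                  [ 1, 2,  1],
                  [-1, 1,  2]]) / 3.0

    print(np.allclose(B @ B, B))               # True: B^2 = B
    print(np.round(np.linalg.eigvalsh(B), 6))  # eigenvalues 0, 1, 1 (sorted ascending)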

Remark. As we saw in this example, if you are willing to compute bases for W and
W ⊥ , then this provides a third way of finding the standard matrix B for projection
onto W : indeed, if {v1 , v2 , . . . , vm } is a basis for W and {vm+1 , vm+2 , . . . , vn } is a basis
for W ⊥ , then
B = ( v1 v2 · · · vn ) · diag(1, . . . , 1, 0, . . . , 0) · ( v1 v2 · · · vn )⁻¹,
where the middle matrix in the product is the diagonal matrix with m ones and
n − m zeros on the diagonal. However, since you already have a basis for W , it is
faster to multiply out the expression A(AT A)−1 AT as in the corollary.
Remark (Reflections). Let W be a subspace of Rn , and let x be a vector in Rn . The
reflection of x over W is defined to be the vector

refW (x) = x − 2x W ⊥ .

In other words, to find refW (x) one starts at x, then moves to x − x W ⊥ = x W , then
continues in the same direction one more time, to end on the opposite side of W .

[Figure: the vector x, its projection xW onto W, and the reflection refW(x), obtained by moving from x by −xW⊥ twice.]

Since x W ⊥ = x − x W , we also have

refW (x) = x − 2(x − x W ) = 2x W − x.

We leave it to the reader to check using the definition that:


1. refW ◦ refW = IdRn .

2. The 1-eigenspace of refW is W , and the −1-eigenspace of refW is W ⊥ .

3. refW is similar to the diagonal matrix with m = dim(W ) ones on the diagonal
and n − m negative ones.
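Since refW(x) = 2xW − x, the standard matrix of refW is 2B − I, where B is the standard matrix for projection onto W. A hedged NumPy sketch (our own), using the B from the example above:

    import numpy as np

    B = np.array([[ 2, 1, -1],
                  [ 1, 2,  1],
                  [-1, 1,  2]]) / 3.0           # projection onto W
    R = 2 * B - np.eye(3)                       # reflection over W

    print(np.allclose(R @ R, np.eye(3)))        # True: refW composed with refW is the identity
    print(np.round(np.linalg.eigvalsh(R), 6))   # eigenvalues -1, 1, 1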

7.4 Orthogonal Sets

Objectives

1. Understand which is the best method to use to compute an orthogonal pro-


jection in a given situation.

2. Recipes: an orthonormal set from an orthogonal set, Projection Formula, B-


coordinates when B is an orthogonal set, Gram–Schmidt process.

3. Vocabulary words: orthogonal set, orthonormal set.

In this section, we give a formula for orthogonal projection that is considerably


simpler than the one in Section 7.3, in that it does not require row reduction or
matrix inversion. However, this formula, called the Projection Formula, only works
in the presence of an orthogonal basis. We will also present the Gram–Schmidt
process for turning an arbitrary basis into an orthogonal one.

7.4.1 Orthogonal Sets and the Projection Formula


Computations involving projections tend to be much easier in the presence of an
orthogonal set of vectors.

Definition. A set of nonzero vectors {u1 , u2 , . . . , um } is called orthogonal if ui ·u j =


0 whenever i 6= j. It is orthonormal if it is orthogonal, and in addition ui · ui = 1
for all i = 1, 2, . . . , m.

In other words, a set of vectors is orthogonal if different vectors in the set


are perpendicular to each other. An orthonormal set is an orthogonal set of unit
vectors.

Example. The standard coordinate vectors in Rn always form an orthonormal set.


For instance, in R3 we check that

(1, 0, 0) · (0, 1, 0) = 0    (1, 0, 0) · (0, 0, 1) = 0    (0, 1, 0) · (0, 0, 1) = 0.

Since ei · ei = 1 for all i = 1, 2, 3, this shows that {e1 , e2 , e3 } is orthonormal.



[Figure: the standard coordinate vectors e1, e2, e3 in R3, which are mutually perpendicular unit vectors.]

Example. Is this set orthogonal? Is it orthonormal?

B = { (1, 1, 1), (1, −2, 1), (1, 0, −1) }

Solution. We check that

(1, 1, 1) · (1, −2, 1) = 0    (1, 1, 1) · (1, 0, −1) = 0    (1, −2, 1) · (1, 0, −1) = 0.

Therefore, B is orthogonal.

The set B is not orthonormal because, for instance,

(1, 1, 1) · (1, 1, 1) = 3 ≠ 1.

However, we can make it orthonormal by replacing each vector by the unit vector in the direction of each vector:

{ (1/√3)(1, 1, 1),  (1/√6)(1, −2, 1),  (1/√2)(1, 0, −1) }

is orthonormal.

We saw in the previous example that it is easy to produce an orthonormal set


of vectors from an orthogonal one by replacing each vector with the unit vector in
the same direction.

Recipe: An orthonormal set from an orthogonal set. If {v1 , v2 , . . . , vm } is an


orthogonal set of vectors, then

v1 v2 vm
§ ª
, ,...,
kv1 k kv2 k kvm k

is an orthonormal set.

Example. Let a, b be scalars, and let

u1 = (a, b)    u2 = (−b, a).

Is B = {u1, u2} orthogonal?

Solution. We only have to check that

(a, b) · (−b, a) = −ab + ab = 0.

Therefore, {u1 , u2 } is orthogonal, unless a = b = 0.

Non-Example. Is this set orthogonal?

B = { (1, 1, 1), (1, −2, 1), (1, −1, −1) }

Solution. This set is not orthogonal because

(1, 1, 1) · (1, −1, −1) = 1 − 1 − 1 = −1 ≠ 0.

We will see how to produce an orthogonal set from B in this subsection.

A nice property enjoyed by orthogonal sets is that they are automatically lin-
early independent.

Fact. An orthogonal set is linearly independent. Therefore, it is a basis for its span.

Proof. Suppose that {u1 , u2 , . . . , um } is orthogonal. We need to show that the equa-
tion
c1 u 1 + c2 u 2 + · · · + c m u m = 0
has only the trivial solution c1 = c2 = · · · = cm = 0. Taking the dot product of both
sides of this equation with u1 gives

0 = u1 · 0 = u1 · c1 u1 + c2 u2 + · · · + cm um
= c1 (u1 · u1 ) + c2 (u1 · u2 ) + · · · + cm (u1 · um )
= c1 (u1 · u1 )
because u1 · ui = 0 for i > 1. Since u1 6= 0 we have u1 · u1 6= 0, so c1 = 0. Similarly,
taking the dot product with ui shows that each ci = 0, as desired.
One advantage of working with orthogonal sets is that it gives a simple formula
for the orthogonal projection of a vector.
Projection Formula. Let W be a subspace of Rn , and let {u1 , u2 , . . . , um } be an or-
thogonal basis for W . Then for any vector x in Rn , the orthogonal projection of x
onto W is given by the formula
xW = (x · u1)/(u1 · u1) u1 + (x · u2)/(u2 · u2) u2 + · · · + (x · um)/(um · um) um.
Proof. Let
y = (x · u1)/(u1 · u1) u1 + (x · u2)/(u2 · u2) u2 + · · · + (x · um)/(um · um) um.
This vector is contained in W because it is a linear combination of u1 , u2 , . . . , um .
Hence we just need to show that x − y is in W ⊥ , i.e., that ui · (x − y) = 0 for each
i = 1, 2, . . . , m. For u1 , we have
u1 · (x − y) = u1 · ( x − (x · u1)/(u1 · u1) u1 − (x · u2)/(u2 · u2) u2 − · · · − (x · um)/(um · um) um )
             = u1 · x − (x · u1)/(u1 · u1) (u1 · u1) − 0 − · · · − 0
             = 0.
= 0.
A similar calculation shows that ui · (x − y) = 0 for each i, so x − y is in W ⊥ , as
desired.
If {u1 , u2 , . . . , um } is an orthonormal basis for W , then the denominators ui ·ui =
1 go away, so the projection formula becomes even simpler:
x W = (x · u1 ) u1 + (x · u2 ) u2 + · · · + (x · um ) um .
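Here is a hedged NumPy sketch of the Projection Formula (our own); it assumes the columns of U form an orthogonal, not necessarily orthonormal, basis of W, and it uses the plane and vector from a later example in this section:

    import numpy as np

    def project(x, U):
        """Project x onto W = Col(U), assuming the columns of U are orthogonal and nonzero."""
        xW = np.zeros_like(x, dtype=float)
        for u in U.T:                        # loop over the orthogonal basis vectors
            xW += (x @ u) / (u @ u) * u
        return xW

    U = np.array([[ 1, 1],
                  [ 0, 1],
                  [-1, 1]], dtype=float)     # orthogonal columns u1 = (1,0,-1), u2 = (1,1,1)
    x = np.array([2.0, 3.0, -2.0])
    print(project(x, U))                     # [3, 1, -1]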
Example. Suppose that L = Span{u} is a line. The set {u} is an orthogonal basis
for L, so the Projection Formula says that for any vector x, we have
xL = (x · u)/(u · u) u,
as in this example in Section 7.3. See also this example in Section 7.3 and this
example in Section 7.3.

Suppose that {u1 , u2 , . . . , um } is an orthogonal basis for a subspace W , and let


L i = Span{ui } for each i = 1, 2, . . . , m. Then we see that for any vector x, we have
xW = (x · u1)/(u1 · u1) u1 + (x · u2)/(u2 · u2) u2 + · · · + (x · um)/(um · um) um = x_L1 + x_L2 + · · · + x_Lm.

In other words, for an orthogonal basis, the projection of x onto W is the sum of the
projections onto the lines spanned by the basis vectors. In this sense, projection
onto a line is the most important example of an orthogonal projection.

Example (Projection onto the x y-plane). Continuing with this example in Sec-
tion 7.3 and this example in Section 7.3, use the projection formula to compute
the orthogonal projection of a vector onto the x y-plane in R3 .
Solution. A basis for the xy-plane is given by the two standard coordinate vectors

e1 = (1, 0, 0)    e2 = (0, 1, 0).

The set {e1, e2} is orthogonal, so for any vector x = (x1, x2, x3), we have

xW = (x · e1)/(e1 · e1) e1 + (x · e2)/(e2 · e2) e2 = x1 e1 + x2 e2 = (x1, x2, 0).

Use this link to view the online demo

Orthogonal projection of a vector onto the x y-plane in R3 . Note that x W is the sum of
the projections of x onto the e1 - and e2 -coordinate axes (shown in orange and brown,
respectively).

Example (Projection onto a plane in R3). Let

W = Span{ (1, 0, −1), (1, 1, 1) }    x = (2, 3, −2).

Find xW and xW⊥.

Solution. The vectors

u1 = (1, 0, −1)    u2 = (1, 1, 1)

are orthogonal, so we can use the Projection Formula:

xW = (x · u1)/(u1 · u1) u1 + (x · u2)/(u2 · u2) u2 = (4/2)(1, 0, −1) + (3/3)(1, 1, 1) = (3, 1, −1).

Then we have

xW⊥ = x − xW = (−1, 2, −1).

Use this link to view the online demo

Orthogonal projection of a vector onto the plane W . Note that x W is the sum of the
projections of x onto the lines spanned by u1 and u2 (shown in orange and brown,
respectively).

Example (Projection onto a 3-space in R4). Let

W = Span{ (1, 0, −1, 0), (0, 1, 0, −1), (1, 1, 1, 1) },   x = (0, 1, 3, 4).

Compute xW , and find the distance from x to W .

Solution. The vectors

u1 = (1, 0, −1, 0),   u2 = (0, 1, 0, −1),   u3 = (1, 1, 1, 1)

are orthogonal, so we can use the Projection Formula:

xW = (x · u1)/(u1 · u1) u1 + (x · u2)/(u2 · u2) u2 + (x · u3)/(u3 · u3) u3
   = (−3/2)(1, 0, −1, 0) + (−3/2)(0, 1, 0, −1) + (8/4)(1, 1, 1, 1) = (1/2)(1, 1, 7, 7),

xW⊥ = x − xW = (1/2)(−1, 1, −1, 1).

The distance from x to W is

‖xW⊥‖ = (1/2)√(1 + 1 + 1 + 1) = 1.
Now let W be a subspace of Rn with orthogonal basis B = {u1 , u2 , . . . , um }, and let x be a vector in W . Then x = xW , so by the projection formula, we have

x = xW = (x · u1)/(u1 · u1) u1 + (x · u2)/(u2 · u2) u2 + · · · + (x · um)/(um · um) um .

This gives us a way of expressing x as a linear combination of the basis vectors in B : we have computed the B -coordinates of x without row reducing!

Recipe: B-coordinates when B is an orthogonal set. Let W be a subspace of Rn with orthogonal basis B = {u1 , u2 , . . . , um } and let x be a vector in W . Then

[x]B = ( (x · u1)/(u1 · u1), (x · u2)/(u2 · u2), . . . , (x · um)/(um · um) ).

As with orthogonal projections, if {u1 , u2 , . . . , um } is an orthonormal basis of W , then the formula is even simpler:

[x]B = ( x · u1 , x · u2 , . . . , x · um ).
Example (Computing coordinates with respect to an orthogonal basis). Find the B -coordinates of x, where

B = { (1, 2), (−4, 2) },   x = (0, 3).

Solution. Since u1 = (1, 2) and u2 = (−4, 2) form an orthogonal basis of R2 , we have

[x]B = ( (x · u1)/(u1 · u1), (x · u2)/(u2 · u2) ) = ( 6/5, 6/20 ) = ( 6/5, 3/10 ).
Use this link to view the online demo

Computing B-coordinates using the Projection Formula.
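The same computation is easy to reproduce in a few lines of Python (a sketch only, assuming NumPy; the variable names are ours):

import numpy as np

# The orthogonal basis B = {u1, u2} of R^2 and the vector x from the example.
u1 = np.array([1.0, 2.0])
u2 = np.array([-4.0, 2.0])
x = np.array([0.0, 3.0])

# B-coordinates via the recipe: [x]_B = ( x.u1/(u1.u1), x.u2/(u2.u2) ).
coords = np.array([x @ u1 / (u1 @ u1), x @ u2 / (u2 @ u2)])
print(coords)                           # prints [1.2 0.3], i.e. (6/5, 3/10)

# Sanity check: the coordinates recombine to give x.
print(coords[0] * u1 + coords[1] * u2)  # prints [0. 3.]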

The following example shows that the Projection Formula does in fact require
an orthogonal basis.
Non-Example (A non-orthogonal basis). Consider the basis B = {v1 , v2 } of R2 , where

v1 = (2, −1/2),   v2 = (1, 2).

This is not orthogonal because v1 · v2 = 2 − 1 = 1 ≠ 0. Let x = (1, 1). Let us try to compute x = xR2 using the Projection Formula with respect to the basis B :

xR2 = (x · v1)/(v1 · v1) v1 + (x · v2)/(v2 · v2) v2 = (3/2)/(17/4) (2, −1/2) + (3/5)(1, 2) = (111/85, 87/85) ≠ x.
Since x = x R2 , we see that the Projection Formula does not compute the orthogonal
projection in this case. Geometrically, the projections of x onto the lines spanned
by v1 and v2 do not sum to x, as we can see from the picture.
Use this link to view the online demo

When v1 and v2 are not orthogonal, then x R2 = x is not necessarily equal to the sum
(red) of the projections (orange and brown) of x onto the lines spanned by v1 and v2 .

You need an orthogonal basis to use the Projection Formula.
7.4.2 The Gram–Schmidt Process


We saw in the previous subsection that orthogonal projections and B-coordinates
are much easier to compute in the presence of an orthogonal basis for a subspace.
In this subsection, we give a method, called the Gram–Schmidt Process, for com-
puting an orthogonal basis of a subspace.
The Gram–Schmidt Process. Let v1 , v2 , . . . , vm be a basis for a subspace W of Rn . Define:

1. u1 = v1

2. u2 = (v2)Span{u1}⊥ = v2 − (v2 · u1)/(u1 · u1) u1

3. u3 = (v3)Span{u1,u2}⊥ = v3 − (v3 · u1)/(u1 · u1) u1 − (v3 · u2)/(u2 · u2) u2

   ...

m. um = (vm)Span{u1,u2,...,um−1}⊥ = vm − Σ_{i=1}^{m−1} (vm · ui)/(ui · ui) ui .

Then {u1 , u2 , . . . , um } is an orthogonal basis for the same subspace W .


Proof. First we claim that each ui is in W , and in fact that ui is in Span{v1 , v2 , . . . , vi }.
Clearly u1 = v1 is in Span{v1 }. Then u2 is a linear combination of u1 and v2 ,
which are both in Span{v1 , v2 }, so u2 is in Span{v1 , v2 } as well. Similarly, u3 is a
linear combination of u1 , u2 , and v3 , which are all in Span{v1 , v2 , v3 }, so u3 is in
Span{v1 , v2 , v3 }. Continuing in this way, we see that each ui is in Span{v1 , v2 , . . . , vi }.
Now we claim that {u1 , u2 , . . . , um } is an orthogonal set. Let 1 ≤ i < j ≤ m.
Then u j = (v j )Span{u1 ,u2 ,...,u j−1 }⊥ , so by definition u j is orthogonal to every vector in
Span{u1 , u2 , . . . , u j−1 }. In particular, u j is orthogonal to ui .
We still have to prove that each ui is nonzero. Clearly u1 = v1 ≠ 0. Suppose that
ui = 0. Then (vi )Span{u1 ,u2 ,...,ui−1 }⊥ = 0, which means that vi is in Span{u1 , u2 , . . . , ui−1 }.
But each u1 , u2 , . . . , ui−1 is in Span{v1 , v2 , . . . , vi−1 } by the first paragraph, so vi is in
Span{v1 , v2 , . . . , vi−1 }. This contradicts the increasing span criterion in Section 3.5;
therefore, ui must be nonzero.
The previous two paragraphs justify the use of the projection formula in the equalities

(vi)Span{u1,u2,...,ui−1}⊥ = vi − (vi)Span{u1,u2,...,ui−1} = vi − Σ_{j=1}^{i−1} (vi · uj)/(uj · uj) uj

in the statement of the theorem.


Since {u1 , u2 , . . . , um } is an orthogonal set, it is linearly independent. Thus it
is a set of m linearly independent vectors in W , so it is a basis for W by the basis
theorem in Section 3.9. Similarly, for every i, we saw that the set {u1 , u2 , . . . , ui }
is contained in the i-dimensional subspace Span{v1 , v2 , . . . , vi }, so {u1 , u2 , . . . , ui }
is an orthogonal basis for Span{v1 , v2 , . . . , vi }.
Example (Two vectors). Find an orthogonal basis {u1 , u2 } for W = Span{v1 , v2 }, where

v1 = (1, 1, 0),   v2 = (1, 1, 1).

Solution. We run Gram–Schmidt: first take u1 = v1 , then

u2 = v2 − (v2 · u1)/(u1 · u1) u1 = (1, 1, 1) − (2/2)(1, 1, 0) = (0, 0, 1).

Then {u1 , u2 } is an orthogonal basis for W : indeed, it is clear that u1 · u2 = 0.

Geometrically, we are simply replacing v2 with the part of v2 that is perpendicular to the line L1 = Span{v1 }: this is u2 = (v2)L1⊥ .
Example (Three vectors). Find an orthogonal basis {u1 , u2 , u3 } for W = Span{v1 , v2 , v3 } = R3 , where

v1 = (1, 1, 0),   v2 = (1, 1, 1),   v3 = (3, 1, 1).
Solution. We run Gram–Schmidt:

1. u1 = v1 = (1, 1, 0)

2. u2 = v2 − (v2 · u1)/(u1 · u1) u1 = (1, 1, 1) − (2/2)(1, 1, 0) = (0, 0, 1)

3. u3 = v3 − (v3 · u1)/(u1 · u1) u1 − (v3 · u2)/(u2 · u2) u2
      = (3, 1, 1) − (4/2)(1, 1, 0) − (1/1)(0, 0, 1) = (1, −1, 0).

Then {u1 , u2 , u3 } is an orthogonal basis for W : indeed, we have

u1 · u2 = 0,   u1 · u3 = 0,   u2 · u3 = 0.

Geometrically, once we have u1 and u2 , we replace v3 by the part that is orthogonal to W2 = Span{u1 , u2 } = Span{v1 , v2 }: this is u3 = (v3)W2⊥ .
Example (Three vectors in R4). Find an orthogonal basis {u1 , u2 , u3 } for W = Span{v1 , v2 , v3 }, where

v1 = (1, 1, 1, 1),   v2 = (−1, 4, 4, −1),   v3 = (4, −2, −2, 0).

Solution. We run Gram–Schmidt:

1. u1 = v1 = (1, 1, 1, 1)

2. u2 = v2 − (v2 · u1)/(u1 · u1) u1 = (−1, 4, 4, −1) − (6/4)(1, 1, 1, 1) = (−5/2, 5/2, 5/2, −5/2)

3. u3 = v3 − (v3 · u1)/(u1 · u1) u1 − (v3 · u2)/(u2 · u2) u2
      = (4, −2, −2, 0) − (0/4)(1, 1, 1, 1) − (−20/25)(−5/2, 5/2, 5/2, −5/2) = (2, 0, 0, −2).
Then {u1 , u2 , u3 } is an orthogonal basis for W .
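The Gram–Schmidt Process translates directly into code. The following Python sketch (assuming NumPy; the function name gram_schmidt is ours, and it expects linearly independent input vectors) reproduces the computation in the example above.

import numpy as np

def gram_schmidt(vs):
    # Run the Gram-Schmidt Process on a list of vectors v1, ..., vm.
    # Returns the orthogonal vectors u1, ..., um.  Assumes the inputs
    # are linearly independent, so that no ui comes out zero.
    us = []
    for v in vs:
        u = v.astype(float)
        for w in us:
            u = u - (v @ w) / (w @ w) * w   # subtract the projection onto Span{w}
        us.append(u)
    return us

v1 = np.array([1, 1, 1, 1])
v2 = np.array([-1, 4, 4, -1])
v3 = np.array([4, -2, -2, 0])
for u in gram_schmidt([v1, v2, v3]):
    print(u)
# prints [1. 1. 1. 1.], [-2.5  2.5  2.5 -2.5], [ 2.  0.  0. -2.]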

We saw in the proof of the Gram–Schmidt Process that for every i between 1
and m, the set {u1 , u2 , . . . , ui } is an orthogonal basis for Span{v1 , v2 , . . . , vi }.
If we had started with a spanning set {v1 , v2 , . . . , vm } which is linearly depen-
dent, then for some i, the vector vi is in Span{v1 , v2 , . . . , vi−1 } by the increasing
span criterion in Section 3.5. Hence

0 = (vi )Span{v1 ,v2 ,...,vi−1 }⊥ = (vi )Span{u1 ,u2 ,...,ui−1 }⊥ = ui .

You can use the Gram–Schmidt Process to produce an orthogonal basis from
any spanning set: if some ui = 0, just throw away ui and vi , and continue.
7.4.3 Two Methods to Compute the Projection


We have now presented two methods for computing the orthogonal projection of
a vector: this theorem in Section 7.3 involves row reduction, and the projection
formula requires an orthogonal basis. Here are some guidelines for which to use
in a given situation.

1. If you already have an orthogonal basis, it is almost always easier to use the
projection formula. This often happens in the sciences.

2. If you are going to have to compute the projections of many vectors onto the
same subspace, it is worth your time to run Gram–Schmidt to produce an
orthogonal basis, so that you can use the projection formula.

3. If you only have to project one or a few vectors onto a subspace, it is faster
to use the theorem in Section 7.3. This is the method we will follow in
Section 7.5.

7.5 The Method of Least Squares

Objectives

1. Learn examples of best-fit problems.

2. Learn to turn a best-fit problem into a least-squares problem.

3. Recipe: find a least-squares solution (two ways).

4. Picture: geometry of a least-squares solution.

5. Vocabulary words: least-squares solution.

In this section, we answer the following important question:

Suppose that Ax = b does not have a solution. What is the best ap-
proximate solution?

For our purposes, the best approximate solution is called the least-squares solution.
We will present two methods for finding least-squares solutions, and we will give
several applications to best-fit problems.
7.5.1 Least-Squares Solutions


We begin by clarifying exactly what we will mean by a “best approximate solution”
to an inconsistent matrix equation Ax = b.

Definition. Let A be an m × n matrix and let b be a vector in Rm . A least-squares solution of the matrix equation Ax = b is a vector x̂ in Rn such that

dist(b, Ax̂) ≤ dist(b, Ax)

for all other vectors x in Rn .

Recall that dist(v, w) = ‖v − w‖ is the distance between the vectors v and w. The term “least squares” comes from the fact that dist(b, Ax̂) = ‖b − Ax̂‖ is the square root of the sum of the squares of the entries of the vector b − Ax̂. So a least-squares solution minimizes the sum of the squares of the differences between the entries of Ax̂ and b. In other words, a least-squares solution solves the equation Ax = b as closely as possible, in the sense that the sum of the squares of the difference b − Ax is minimized.

Least Squares: Picture Suppose that the equation Ax = b is inconsistent. Recall from this note in Section 3.3 that the column space of A is the set of all other vectors c such that Ax = c is consistent. In other words, Col(A) is the set of all vectors of the form Ax. Hence, the closest vector of the form Ax to b is the orthogonal projection of b onto Col(A). This is denoted bCol(A) , following this notation in Section 7.3.

A least-squares solution of Ax = b is a solution x̂ of the consistent equation

Ax = bCol(A) .

Note. If Ax = b is consistent, then bCol(A) = b, so that a least-squares solution is the same as a usual solution.
Where is x̂ in this picture? If v1 , v2 , . . . , vn are the columns of A, then

Ax̂ = A(x̂1 , x̂2 , . . . , x̂n) = x̂1 v1 + x̂2 v2 + · · · + x̂n vn .

Hence the entries of x̂ are the “coordinates” of bCol(A) with respect to the spanning set {v1 , v2 , . . . , vn } of Col(A). (They are honest B -coordinates if the columns of A are linearly independent.)

Use this link to view the online demo

The violet plane is Col(A). The closest that Ax can get to b is the closest vector on
Col(A) to b, which is the orthogonal projection bCol(A) (in blue). The vectors v1 , v2 are
the columns of A, and the coefficients of b
x are the lengths of the green lines. Click and
drag b to move it.

We learned to solve this kind of orthogonal projection problem in Section 7.3.

Theorem. Let A be an m × n matrix and let b be a vector in Rm . The least-squares solutions of Ax = b are the solutions of the matrix equation

AT Ax = AT b.

Proof. By this theorem in Section 7.3, if x̂ is a solution of the matrix equation AT Ax = AT b, then Ax̂ is equal to bCol(A) . We argued above that a least-squares solution of Ax = b is a solution of Ax = bCol(A) .
In particular, finding a least-squares solution means solving a consistent system of linear equations. We can translate the above theorem into a recipe:

Recipe 1: Compute a least-squares solution. Let A be an m × n matrix and let b be a vector in Rm . Here is a method for computing a least-squares solution of Ax = b:

1. Compute the matrix AT A and the vector AT b.

2. Form the augmented matrix for the matrix equation AT Ax = AT b, and row reduce.

3. This equation is always consistent, and any solution x̂ is a least-squares solution.

To reiterate: once you have found a least-squares solution x̂ of Ax = b, then bCol(A) is equal to Ax̂.

Example. Find the least-squares solutions of Ax = b where:

A = [ 0 1 ; 1 1 ; 2 1 ],   b = (6, 0, 0).

What quantity is being minimized?

Solution. We have

AT A = [ 0 1 2 ; 1 1 1 ] [ 0 1 ; 1 1 ; 2 1 ] = [ 5 3 ; 3 3 ]

and

AT b = [ 0 1 2 ; 1 1 1 ] (6, 0, 0) = (0, 6).

We form an augmented matrix and row reduce:

[ 5 3 | 0 ; 3 3 | 6 ]  →(RREF)  [ 1 0 | −3 ; 0 1 | 5 ].

Therefore, the only least-squares solution is x̂ = (−3, 5).

This solution minimizes the distance from Ax̂ to b, i.e., the sum of the squares of the entries of b − Ax̂ = b − bCol(A) = bCol(A)⊥ . In this case, we have

‖b − Ax̂‖ = ‖(6, 0, 0) − (5, 2, −1)‖ = ‖(1, −2, 1)‖ = √(1 + 4 + 1) = √6.

Therefore, bCol(A) = Ax̂ is √6 units from b.

In the following picture, v1 , v2 are the columns of A:

Use this link to view the online demo

The violet plane is Col(A). The closest that Ax can get to b is the closest vector on
Col(A) to b, which is the orthogonal projection bCol(A) (in blue). The vectors v1 , v2
x are the B-coordinates of bCol(A) , where
are the columns of A, and the coefficients of b
B = {v1 , v2 }.
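Recipe 1 is straightforward to carry out by computer. Here is a Python sketch for the example above (an illustration, assuming NumPy; np.linalg.solve is used only because AT A happens to be invertible here):

import numpy as np

A = np.array([[0.0, 1.0],
              [1.0, 1.0],
              [2.0, 1.0]])
b = np.array([6.0, 0.0, 0.0])

# Recipe 1: form A^T A and A^T b, then solve the normal equations.
ATA = A.T @ A                  # [[5, 3], [3, 3]]
ATb = A.T @ b                  # [0, 6]
x_hat = np.linalg.solve(ATA, ATb)
print(x_hat)                   # prints [-3.  5.]

# b_Col(A) is A x_hat, and the minimized distance is ||b - A x_hat||.
print(A @ x_hat)                        # prints [ 5.  2. -1.]
print(np.linalg.norm(b - A @ x_hat))    # prints 2.449..., i.e. sqrt(6)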

Example. Find the least-squares solutions of Ax = b where:

A = [ 2 0 ; −1 1 ; 0 2 ],   b = (1, 0, −1).

Solution. We have

AT A = [ 2 −1 0 ; 0 1 2 ] [ 2 0 ; −1 1 ; 0 2 ] = [ 5 −1 ; −1 5 ]

and

AT b = [ 2 −1 0 ; 0 1 2 ] (1, 0, −1) = (2, −2).

We form an augmented matrix and row reduce:

[ 5 −1 | 2 ; −1 5 | −2 ]  →(RREF)  [ 1 0 | 1/3 ; 0 1 | −1/3 ].

Therefore, the only least-squares solution is x̂ = (1/3, −1/3).

Use this link to view the online demo

The red plane is Col(A). The closest that Ax can get to b is the closest vector on
Col(A) to b, which is the orthogonal projection bCol(A) (in blue). The vectors v1 , v2
x are the B-coordinates of bCol(A) , where
are the columns of A, and the coefficients of b
B = {v1 , v2 }.

The reader may have noticed that we have been careful to say “the least-squares
solutions” in the plural, and “a least-squares solution” using the indefinite article.
This is because a least-squares solution need not be unique: indeed, if the columns
of A are linearly dependent, then Ax = bCol(A) has infinitely many solutions. The
following theorem, which gives equivalent criteria for uniqueness, is an analogue
of this corollary in Section 7.3.
Theorem. Let A be an m × n matrix and let b be a vector in Rm . The following are equivalent:

1. Ax = b has a unique least-squares solution.

2. The columns of A are linearly independent.

3. AT A is invertible.

In this case, the least-squares solution is

x̂ = (AT A)−1 AT b.

Proof. The set of least-squares solutions of Ax = b is the solution set of the consistent equation AT Ax = AT b, which is a translate of the solution set of the homogeneous equation AT Ax = 0. Since AT A is a square matrix, the equivalence of 1 and 3 follows from the invertible matrix theorem in Section 6.1. The set of least-squares solutions is also the solution set of the consistent equation Ax = bCol(A) , which has a unique solution if and only if the columns of A are linearly independent by this theorem in Section 3.5.
Example (Infinitely many least-squares solutions). Find the least-squares solutions of Ax = b where:

A = [ 1 0 1 ; 1 1 −1 ; 1 2 −3 ],   b = (6, 0, 0).

Solution. We have

AT A = [ 3 3 −3 ; 3 5 −7 ; −3 −7 11 ]   and   AT b = (6, 0, 6).

We form an augmented matrix and row reduce:

[ 3 3 −3 | 6 ; 3 5 −7 | 0 ; −3 −7 11 | 6 ]  →(RREF)  [ 1 0 1 | 5 ; 0 1 −2 | −3 ; 0 0 0 | 0 ].

The free variable is x3 , so the solution set is

x1 = −x3 + 5,   x2 = 2x3 − 3,   x3 = x3 ;

in parametric vector form,

x̂ = (x1 , x2 , x3) = x3 (−1, 2, 1) + (5, −3, 0).

For example, taking x3 = 0 and x3 = 1 gives the least-squares solutions

x̂ = (5, −3, 0)   and   x̂ = (4, −1, 1).

Geometrically, we see that the columns v1 , v2 , v3 of A are coplanar, so there are many ways of writing bCol(A) as a linear combination of v1 , v2 , v3 .

Use this link to view the online demo

The three columns of A are coplanar, so there are many least-squares solutions. (The
demo picks one solution when you move b.)
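Since the columns of A are linearly dependent here, AT A is not invertible, and software will only hand back one particular least-squares solution. The sketch below (assuming NumPy) checks that the two solutions found above satisfy the normal equations, and shows what np.linalg.lstsq returns; as far as we know, lstsq returns the least-squares solution of smallest norm when A is rank-deficient.

import numpy as np

A = np.array([[1.0, 0.0, 1.0],
              [1.0, 1.0, -1.0],
              [1.0, 2.0, -3.0]])
b = np.array([6.0, 0.0, 0.0])

ATA = A.T @ A
ATb = A.T @ b

# Both least-squares solutions found above satisfy A^T A x = A^T b.
for x_hat in (np.array([5.0, -3.0, 0.0]), np.array([4.0, -1.0, 1.0])):
    print(np.allclose(ATA @ x_hat, ATb))        # prints True twice

# lstsq picks one particular least-squares solution.
x_particular = np.linalg.lstsq(A, b, rcond=None)[0]
print(x_particular, np.allclose(ATA @ x_particular, ATb))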
As usual, calculations involving projections become easier in the presence of an orthogonal set. Indeed, if A is an m × n matrix with orthogonal columns u1 , u2 , . . . , un , then we can use the projection formula in Section 7.4 to write

bCol(A) = (b · u1)/(u1 · u1) u1 + (b · u2)/(u2 · u2) u2 + · · · + (b · un)/(un · un) un = A ( (b · u1)/(u1 · u1), (b · u2)/(u2 · u2), . . . , (b · un)/(un · un) ).
Note that the least-squares solution is unique in this case, since an orthogonal set is linearly independent.

Recipe 2: Compute a least-squares solution. Let A be an m × n matrix with orthogonal columns u1 , u2 , . . . , un , and let b be a vector in Rm . Then the least-squares solution of Ax = b is the vector

x̂ = ( (b · u1)/(u1 · u1), (b · u2)/(u2 · u2), . . . , (b · un)/(un · un) ).

This formula is particularly useful in the sciences, as matrices with orthogonal columns often arise in nature.
Example. Find the least-squares solution of Ax = b where:

A = [ 1 0 1 ; 0 1 1 ; −1 0 1 ; 0 −1 1 ],   b = (0, 1, 3, 4).

Solution. Let u1 , u2 , u3 be the columns of A. These form an orthogonal set, so

x̂ = ( (b · u1)/(u1 · u1), (b · u2)/(u2 · u2), (b · u3)/(u3 · u3) ) = ( −3/2, −3/2, 8/4 ) = ( −3/2, −3/2, 2 ).
Compare this example in Section 7.4.
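Recipe 2 amounts to three dot products. Here is a Python sketch for this example (assuming NumPy; the variable names are ours):

import numpy as np

A = np.array([[ 1.0,  0.0, 1.0],
              [ 0.0,  1.0, 1.0],
              [-1.0,  0.0, 1.0],
              [ 0.0, -1.0, 1.0]])
b = np.array([0.0, 1.0, 3.0, 4.0])

# The columns of A are orthogonal, so each entry of the least-squares
# solution is (b . u_i)/(u_i . u_i).
x_hat = np.array([(b @ u) / (u @ u) for u in A.T])
print(x_hat)                                    # prints [-1.5 -1.5  2. ]

# The general method gives the same answer.
print(np.linalg.lstsq(A, b, rcond=None)[0])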

7.5.2 Best-Fit Problems


In this subsection we give an application of the method of least squares to data
modeling. We begin with a basic example.
Example (Best-fit line). Suppose that we have measured three data points
(0, 6), (1, 0), (2, 0),
and that our model for these data asserts that the points should lie on a line. Of
course, these three points do not actually lie on a single line, but this could be due
to errors in our measurement. How do we predict which line they are supposed
to lie on?
The general equation for a (non-vertical) line is

y = M x + B.

If our three data points were to lie on this line, then the following equations would
be satisfied:

6 = M · 0 + B
0 = M · 1 + B          (7.5.1)
0 = M · 2 + B.

In order to find the best-fit line, we try to solve the above equations in the un-
knowns M and B. As the three points do not actually lie on a line, there is no
actual solution, so instead we compute a least-squares solution.
Putting our linear equations into matrix form, we are trying to solve Ax = b for

A = [ 0 1 ; 1 1 ; 2 1 ],   x = (M, B),   b = (6, 0, 0).

We solved this least-squares problem in this example: the only least-squares solution to Ax = b is x̂ = (M, B) = (−3, 5), so the best-fit line is

y = −3x + 5.
What exactly is the line y = f (x) = −3x + 5 minimizing? The least-squares solution x̂ minimizes the sum of the squares of the entries of the vector b − Ax̂. The vector b is the left-hand side of (7.5.1), and

Ax̂ = A(−3, 5) = ( −3(0) + 5, −3(1) + 5, −3(2) + 5 ) = ( f (0), f (1), f (2) ).

In other words, Ax̂ is the vector whose entries are the y-coordinates of the graph of the line at the values of x we specified in our data points, and b is the vector whose entries are the y-coordinates of those data points. The difference b − Ax̂ is the vertical distance of the graph from the data points:

b − Ax̂ = (6, 0, 0) − A(−3, 5) = (1, −2, 1).

The best-fit line minimizes the sum of the squares of these vertical distances.

Interactive: Best-fit line.

Use this link to view the online demo

The best-fit line minimizes the sum of the squares of the vertical distances (violet).
Click and drag the points to see how the best-fit line changes.
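Here is how the best-fit line computation might look in Python (a sketch, assuming NumPy; the design matrix A has rows (x, 1) so that A(M, B) lists the predicted y-values):

import numpy as np

xs = np.array([0.0, 1.0, 2.0])      # x-coordinates of the data points
ys = np.array([6.0, 0.0, 0.0])      # y-coordinates of the data points

A = np.column_stack([xs, np.ones_like(xs)])     # rows (x, 1) for the model y = M x + B
M, B = np.linalg.lstsq(A, ys, rcond=None)[0]
print(M, B)                                     # prints approximately -3.0 5.0

# The quantity being minimized: the sum of squared vertical distances.
residual = ys - A @ np.array([M, B])
print(residual, residual @ residual)            # prints approximately [ 1. -2.  1.] and 6.0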

Example (Best-fit parabola). Find the parabola that best approximates the data
points

(−1, 1/2), (1, −1), (2, −1/2), (3, 2).
What quantity is being minimized?


Solution. The general equation for a parabola is

y = Bx² + C x + D.

If the four points were to lie on this parabola, then the following equations would be satisfied:

 1/2 = B(−1)² + C(−1) + D
  −1 = B(1)² + C(1) + D
−1/2 = B(2)² + C(2) + D          (7.5.2)
   2 = B(3)² + C(3) + D.

We treat this as a system of equations in the unknowns B, C, D. In matrix form, we can write this as Ax = b for

A = [ 1 −1 1 ; 1 1 1 ; 4 2 1 ; 9 3 1 ],   x = (B, C, D),   b = (1/2, −1, −1/2, 2).

We find a least-squares solution by multiplying both sides by the transpose:

AT A = [ 99 35 15 ; 35 15 5 ; 15 5 4 ],   AT b = (31/2, 7/2, 1),

then forming an augmented matrix and row reducing:

[ 99 35 15 | 31/2 ; 35 15 5 | 7/2 ; 15 5 4 | 1 ]  →(RREF)  [ 1 0 0 | 53/88 ; 0 1 0 | −379/440 ; 0 0 1 | −41/44 ],

so x̂ = (53/88, −379/440, −41/44).

The best-fit parabola is

y = (53/88)x² − (379/440)x − 41/44.

Multiplying through by 88, we can write this as

88y = 53x² − (379/5)x − 82.

Now we consider what exactly the parabola y = f (x) is minimizing. The least-squares solution x̂ minimizes the sum of the squares of the entries of the vector b − Ax̂. The vector b is the left-hand side of (7.5.2), and

Ax̂ = ( (53/88)(−1)² − (379/440)(−1) − 41/44,
       (53/88)(1)² − (379/440)(1) − 41/44,
       (53/88)(2)² − (379/440)(2) − 41/44,
       (53/88)(3)² − (379/440)(3) − 41/44 )
    = ( f (−1), f (1), f (2), f (3) ).

In other words, Ax̂ is the vector whose entries are the y-coordinates of the graph of the parabola at the values of x we specified in our data points, and b is the vector whose entries are the y-coordinates of those data points. The difference b − Ax̂ is the vertical distance of the graph from the data points:

b − Ax̂ = (1/2, −1, −1/2, 2) − A(53/88, −379/440, −41/44) = (−7/220, 21/110, −14/55, 21/220).

The best-fit parabola minimizes the sum of the squares of these vertical distances.

Use this link to view the online demo

The best-fit parabola minimizes the sum of the squares of the vertical distances (vio-
let). Click and drag the points to see how the best-fit parabola changes.

Example (Best-fit linear function). Find the linear function f (x, y) that best ap-
proximates the following data:
x y f (x, y)
1 0 0
0 1 1
−1 0 3
0 −1 4
What quantity is being minimized?
Solution. The general equation for a linear function in two variables is
f (x, y) = B x + C y + D.
We want to solve the following system of equations in the unknowns B, C, D:
B(1) + C(0) + D = 0
B(0) + C(1) + D = 1
(7.5.3)
B(−1) + C(0) + D = 3
B(0) + C(−1) + D = 4.
In matrix form, we can write this as Ax = b for

A = [ 1 0 1 ; 0 1 1 ; −1 0 1 ; 0 −1 1 ],   x = (B, C, D),   b = (0, 1, 3, 4).

We observe that the columns u1 , u2 , u3 of A are orthogonal, so we can use recipe 2:

x̂ = ( (b · u1)/(u1 · u1), (b · u2)/(u2 · u2), (b · u3)/(u3 · u3) ) = ( −3/2, −3/2, 2 ).
Therefore, the best-fit linear equation is

f (x, y) = −(3/2)x − (3/2)y + 2.

Now we consider what quantity is being minimized by the function f (x, y). The least-squares solution x̂ minimizes the sum of the squares of the entries of the vector b − Ax̂. The vector b is the right-hand side of (7.5.3), and

Ax̂ = ( −(3/2)(1) − (3/2)(0) + 2, −(3/2)(0) − (3/2)(1) + 2, −(3/2)(−1) − (3/2)(0) + 2, −(3/2)(0) − (3/2)(−1) + 2 )
    = ( f (1, 0), f (0, 1), f (−1, 0), f (0, −1) ).

In other words, Ax̂ is the vector whose entries are the values of f evaluated on the points (x, y) we specified in our data table, and b is the vector whose entries are the desired values of f evaluated at those points. The difference b − Ax̂ is the vertical distance of the graph from the data points. The best-fit linear function minimizes the sum of the squares of these vertical distances.

Use this link to view the online demo

The best-fit linear function minimizes the sum of the squares of the vertical distances
(violet). Click and drag the points to see how the best-fit linear function changes.

All of the above examples have the following form: some number of data points
(x, y) are specified, and we want to find a function
y = B1 g1 (x) + B2 g2 (x) + · · · + Bm g m (x)
that best approximates these points, where g1 , g2 , . . . , g m are fixed functions of
x. Indeed, in the best-fit line example we had g1 (x) = x and g2 (x) = 1; in the
best-fit parabola example we had g1 (x) = x 2 , g2 (x) = x, and g3 (x) = 1; and
in the best-fit linear function example we had g1 (x 1 , x 2 ) = x 1 , g2 (x 1 , x 2 ) = x 2 ,
and g3 (x 1 , x 2 ) = 1 (in this example we take x to be a vector with two entries).
We evaluate the above equation on the given data points to obtain a system of
linear equations in the unknowns B1 , B2 , . . . , Bm —once we evaluate the g i , they
just become numbers, so it does not matter what they are—and we find the least-
squares solution. The resulting best-fit function minimizes the sum of the squares
of the vertical distances from the graph of y = f (x) to our original data points.
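This general recipe can be written once and reused for any choice of functions g1 , g2 , . . . , gm. The following Python sketch (assuming NumPy; the helper name best_fit is ours) builds the matrix whose rows are (g1(x), . . . , gm(x)) evaluated on the data and finds the least-squares coefficients; it reproduces the best-fit line and best-fit parabola from earlier in this subsection.

import numpy as np

def best_fit(gs, xs, ys):
    # Least-squares coefficients B1, ..., Bm for y = B1*g1(x) + ... + Bm*gm(x).
    # gs is a list of functions of x; (xs, ys) are the data points.
    A = np.column_stack([[g(x) for x in xs] for g in gs])
    return np.linalg.lstsq(A, np.asarray(ys, dtype=float), rcond=None)[0]

# Best-fit line: g1(x) = x, g2(x) = 1.
print(best_fit([lambda x: x, lambda x: 1.0], [0, 1, 2], [6, 0, 0]))     # [-3.  5.]

# Best-fit parabola: g1(x) = x^2, g2(x) = x, g3(x) = 1.
print(best_fit([lambda x: x**2, lambda x: x, lambda x: 1.0],
               [-1, 1, 2, 3], [0.5, -1, -0.5, 2]))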
To emphasize that the nature of the functions g i really is irrelevant, consider
the following example.
Example (Best-fit trigonometric function). What is the best-fit function of the form
y = B + C cos(x) + D sin(x) + E cos(2x) + F sin(2x) + G cos(3x) + H sin(3x)
passing through the points

(−4, −1), (−3, 0), (−2, −1.5), (−1, 0.5), (0, 1), (1, −1), (2, −0.5), (3, 2), (4, −1)?
Solution. We want to solve the system of equations

−1 = B + C cos(−4) + D sin(−4) + E cos(−8) + F sin(−8) + G cos(−12) + H sin(−12)


0 = B + C cos(−3) + D sin(−3) + E cos(−6) + F sin(−6) + G cos(−9) + H sin(−9)
−1.5 = B + C cos(−2) + D sin(−2) + E cos(−4) + F sin(−4) + G cos(−6) + H sin(−6)
0.5 = B + C cos(−1) + D sin(−1) + E cos(−2) + F sin(−2) + G cos(−3) + H sin(−3)
1 = B + C cos(0) + D sin(0) + E cos(0) + F sin(0) + G cos(0) + H sin(0)
−1 = B + C cos(1) + D sin(1) + E cos(2) + F sin(2) + G cos(3) + H sin(3)
−0.5 = B + C cos(2) + D sin(2) + E cos(4) + F sin(4) + G cos(6) + H sin(6)
2 = B + C cos(3) + D sin(3) + E cos(6) + F sin(6) + G cos(9) + H sin(9)
−1 = B + C cos(4) + D sin(4) + E cos(8) + F sin(8) + G cos(12) + H sin(12).

All of the terms in these equations are numbers, except for the unknowns
B, C, D, E, F, G, H:

−1 = B − 0.6536C + 0.7568D − 0.1455E − 0.9894F + 0.8439G + 0.5366H


0 = B − 0.9900C − 0.1411D + 0.9602E + 0.2794F − 0.9111G − 0.4121H
−1.5 = B − 0.4161C − 0.9093D − 0.6536E + 0.7568F + 0.9602G + 0.2794H
0.5 = B + 0.5403C − 0.8415D − 0.4161E − 0.9093F − 0.9900G − 0.1411H
1 = B + C + E + G
−1 = B + 0.5403C + 0.8415D − 0.4161E + 0.9093F − 0.9900G + 0.1411H
−0.5 = B − 0.4161C + 0.9093D − 0.6536E − 0.7568F + 0.9602G − 0.2794H
2 = B − 0.9900C + 0.1411D + 0.9602E − 0.2794F − 0.9111G + 0.4121H
−1 = B − 0.6536C − 0.7568D − 0.1455E + 0.9894F + 0.8439G − 0.5366H.

Hence we want to solve the least-squares problem


Ax = b, where

A = [ 1 −0.6536 0.7568 −0.1455 −0.9894 0.8439 0.5366 ;
      1 −0.9900 −0.1411 0.9602 0.2794 −0.9111 −0.4121 ;
      1 −0.4161 −0.9093 −0.6536 0.7568 0.9602 0.2794 ;
      1 0.5403 −0.8415 −0.4161 −0.9093 −0.9900 −0.1411 ;
      1 1 0 1 0 1 0 ;
      1 0.5403 0.8415 −0.4161 0.9093 −0.9900 0.1411 ;
      1 −0.4161 0.9093 −0.6536 −0.7568 0.9602 −0.2794 ;
      1 −0.9900 0.1411 0.9602 −0.2794 −0.9111 0.4121 ;
      1 −0.6536 −0.7568 −0.1455 0.9894 0.8439 −0.5366 ],

x = (B, C, D, E, F, G, H),   b = (−1, 0, −1.5, 0.5, 1, −1, −0.5, 2, −1).

We find the least-squares solution with the aid of a computer:

x̂ ≈ (−0.1435, 0.2611, −0.2337, 1.116, −0.5997, −0.2767, 0.1076).

Therefore, the best-fit function is

y ≈ −0.1435 + 0.2611 cos(x) − 0.2337 sin(x) + 1.116 cos(2x) − 0.5997 sin(2x) − 0.2767 cos(3x) + 0.1076 sin(3x).
As in the previous examples, the best-fit function minimizes the sum of the squares of the vertical distances from the graph of y = f (x) to the data points.

Use this link to view the online demo

The best-fit function minimizes the sum of the squares of the vertical distances (violet).
Click and drag the points to see how the best-fit function changes.
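For the record, here is a Python sketch of the computation “with the aid of a computer” (assuming NumPy); it should reproduce the coefficients reported above, up to rounding.

import numpy as np

xs = np.array([-4.0, -3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0, 4.0])
ys = np.array([-1.0, 0.0, -1.5, 0.5, 1.0, -1.0, -0.5, 2.0, -1.0])

# Columns: 1, cos(x), sin(x), cos(2x), sin(2x), cos(3x), sin(3x).
A = np.column_stack([np.ones_like(xs),
                     np.cos(xs), np.sin(xs),
                     np.cos(2 * xs), np.sin(2 * xs),
                     np.cos(3 * xs), np.sin(3 * xs)])
coeffs = np.linalg.lstsq(A, ys, rcond=None)[0]
print(np.round(coeffs, 4))
# Expected (up to rounding): B, C, D, E, F, G, H roughly equal to
# -0.1435, 0.2611, -0.2337, 1.116, -0.5997, -0.2767, 0.1076.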

The next example has a somewhat different flavor from the previous ones.

Example (Best-fit ellipse). Find the best-fit ellipse through the points

(0, 2), (2, 1), (1, −1), (−1, −2), (−3, 1), (−1, −1).
What quantity is being minimized?


Solution. The general equation for an ellipse (actually, for a nondegenerate conic
section) is
x 2 + B y 2 + C x y + Dx + E y + F = 0.

This is an implicit equation: the ellipse is the set of all solutions of the equation,
just like the unit circle is the set of solutions of x 2 + y 2 = 1. To say that our data
points lie on the ellipse means that the above equation is satisfied for the given
values of x and y:

(0)² + B(2)² + C(0)(2) + D(0) + E(2) + F = 0
(2)² + B(1)² + C(2)(1) + D(2) + E(1) + F = 0
(1)² + B(−1)² + C(1)(−1) + D(1) + E(−1) + F = 0          (7.5.4)
(−1)² + B(−2)² + C(−1)(−2) + D(−1) + E(−2) + F = 0
(−3)² + B(1)² + C(−3)(1) + D(−3) + E(1) + F = 0
(−1)² + B(−1)² + C(−1)(−1) + D(−1) + E(−1) + F = 0.

To put this in matrix form, we move the constant terms to the right-hand side of the equals sign; then we can write this as Ax = b for

A = [ 4 0 0 2 1 ; 1 2 2 1 1 ; 1 −1 1 −1 1 ; 4 2 −1 −2 1 ; 1 −3 −3 1 1 ; 1 1 −1 −1 1 ],   x = (B, C, D, E, F),   b = (0, −4, −1, −1, −9, −1).

We compute

AT A = [ 36 7 −5 0 12 ; 7 19 9 −5 1 ; −5 9 16 1 −2 ; 0 −5 1 12 0 ; 12 1 −2 0 6 ],   AT b = (−19, 17, 20, −9, −16).

We form an augmented matrix and row reduce:

[ 36 7 −5 0 12 | −19 ; 7 19 9 −5 1 | 17 ; −5 9 16 1 −2 | 20 ; 0 −5 1 12 0 | −9 ; 12 1 −2 0 6 | −16 ]
  →(RREF)  [ 1 0 0 0 0 | 405/266 ; 0 1 0 0 0 | −89/133 ; 0 0 1 0 0 | 201/133 ; 0 0 0 1 0 | −123/266 ; 0 0 0 0 1 | −687/133 ].

The least-squares solution is

x̂ = (405/266, −89/133, 201/133, −123/266, −687/133),

so the best-fit ellipse is

x² + (405/266)y² − (89/133)xy + (201/133)x − (123/266)y − 687/133 = 0.

Multiplying through by 266, we can write this as

266x² + 405y² − 178xy + 402x − 123y − 1374 = 0.
Now we consider the question of what quantity is minimized by this ellipse. The least-squares solution x̂ minimizes the sum of the squares of the entries of the vector b − Ax̂, or equivalently, of Ax̂ − b. The vector −b contains the constant terms of the left-hand sides of (7.5.4), and the vector Ax̂ contains the rest of the terms on the left-hand side of (7.5.4). Therefore, the entries of Ax̂ − b are the quantities obtained by evaluating the function

f (x, y) = x² + (405/266)y² − (89/133)xy + (201/133)x − (123/266)y − 687/133

on the given data points.
If our data points actually lay on the ellipse defined by f (x, y) = 0, then eval-
uating f (x, y) on our data points would always yield zero, so Ab x − b would be
the zero vector. This is not the case; instead, Abx − b contains the actual values of
f (x, y) when evaluated on our data points. The quantity being minimized is the
sum of the squares of these values:

minimized = f (0, 2)² + f (2, 1)² + f (1, −1)² + f (−1, −2)² + f (−3, 1)² + f (−1, −1)².
One way to visualize this is as follows. We can put this best-fit problem into the
framework of this example by asking to find an equation of the form

f (x, y) = x 2 + B y 2 + C x y + Dx + E y + F

which best approximates the data table

x y f (x, y)
0 2 0
2 1 0
1 −1 0
−1 −2 0
−3 1 0
−1 −1 0.

The resulting function minimizes the sum of the squares of the vertical distances
from these data points (0, 2, 0), (2, 1, 0), . . ., which lie on the x y-plane, to the
graph of f (x, y).

Use this link to view the online demo

The best-fit ellipse minimizes the sum of the squares of the vertical distances (violet)
from the points (x, y, 0) to the graph of f (x, y) on the left. The ellipse itself is the
zero set of f (x, y), on the right. Click and drag the points on the right to see how the
best-fit ellipse changes. Can you arrange the points so that the best-fit conic section
is actually a hyperbola?
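The ellipse fit can also be checked numerically. In the sketch below (assuming NumPy), each data point (x, y) contributes the row (y², xy, x, y, 1) and the right-hand side −x², exactly as in the matrix form above; the output should match the fractions computed there, up to rounding.

import numpy as np

pts = np.array([[0.0, 2.0], [2.0, 1.0], [1.0, -1.0],
                [-1.0, -2.0], [-3.0, 1.0], [-1.0, -1.0]])
x, y = pts[:, 0], pts[:, 1]

# Model: x^2 + B y^2 + C x y + D x + E y + F = 0, rewritten as
#        B y^2 + C x y + D x + E y + F = -x^2.
A = np.column_stack([y**2, x * y, x, y, np.ones_like(x)])
b = -x**2
B, C, D, E, F = np.linalg.lstsq(A, b, rcond=None)[0]
print(B, C, D, E, F)
# Expected approximately: 405/266, -89/133, 201/133, -123/266, -687/133.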

Note. Gauss invented the method of least squares to find a best-fit ellipse: he
correctly predicted the (elliptical) orbit of the asteroid Ceres as it passed behind
the sun in 1801.
Appendix A

Complex Numbers

In this Appendix we give a brief review of the arithmetic and basic properties of
the complex numbers.
As motivation, notice that the rotation matrix

A = [ 0 −1 ; 1 0 ]

has characteristic polynomial f (λ) = λ² + 1. A zero of this function is a square root of −1. If we want this polynomial to have a root, then we have to use a larger
number system: we need to declare by fiat that there exists a square root of −1.

Definition.

1. The imaginary number i is defined to satisfy the equation i 2 = −1.

2. A complex number is a number of the form a + bi, where a, b are real


numbers.

The set of all complex numbers is denoted C.

The real numbers are just the complex numbers of the form a + 0i, so that R is
contained in C.
We can identify C with R2 by a + bi ←→ (a, b). So when we draw a picture of C, we draw the plane, with the real axis horizontal and the imaginary axis vertical; for example, 1, i, and 1 − i are points of this plane.
Arithmetic of Complex Numbers. We can perform all of the usual arithmetic


operations on complex numbers: add, subtract, multiply, divide, absolute value.
There is also an important new operation called complex conjugation.
• Addition is performed componentwise:

(a + bi) + (c + di) = (a + c) + (b + d)i.

• Multiplication is performed using distributivity and i 2 = −1:

(a + bi)(c + di) = ac + adi + bci + bdi 2 = (ac − bd) + (ad + bc)i.

• Complex conjugation replaces i with −i, and is denoted with a bar: the complex conjugate of z = a + bi is

z̄ = a − bi.

One checks that for any two complex numbers z, w, the conjugate of z + w is z̄ + w̄, and the conjugate of zw is z̄ · w̄. Also, (a + bi)(a − bi) = a² + b², so z z̄ is a nonnegative real number for any complex number z.

• The absolute value of a complex number z is the real number |z| = √(z z̄):

|a + bi| = √(a² + b²).

One checks that |zw| = |z| · |w|.

• Division by a nonzero real number proceeds componentwise:

(a + bi)/c = a/c + (b/c) i.

• Division by a nonzero complex number requires multiplying the numerator and denominator by the complex conjugate of the denominator:

z/w = (z w̄)/(w w̄) = (z w̄)/|w|².

For example,

(1 + i)/(1 − i) = (1 + i)²/(1² + (−1)²) = (1 + 2i + i²)/2 = i.
• The real and imaginary parts of a complex number are

Re(a + bi) = a Im(a + bi) = b.
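All of these operations are built into most programming languages. For instance, in Python (where the imaginary unit is written 1j), the arithmetic above can be checked directly; this snippet is an illustration only, not part of the text.

# Python's built-in complex numbers: the imaginary unit is 1j.
z = 1 + 1j
w = 1 - 1j

print(z + w, z * w)        # (2+0j) (2+0j): addition and multiplication
print(z.conjugate())       # (1-1j): the complex conjugate of 1 + i
print(abs(z))              # 1.4142...: the absolute value sqrt(1^2 + 1^2)
print(z / w)               # 1j: matches the worked example (1 + i)/(1 - i) = i
print(z.real, z.imag)      # 1.0 1.0: the real and imaginary parts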


The point of introducing complex numbers is to find roots of polynomials. It


turns out that introducing i is sufficient to find the roots of any polynomial.
Fundamental Theorem of Algebra. Every polynomial of degree n has exactly n
(real and) complex roots, counted with multiplicity.
Equivalently, if f (x) = x n + an−1 x n−1 + · · · + a1 x + a0 is a polynomial of degree
n, then f factors as

f (x) = (x − λ1 )(x − λ2 ) · · · (x − λn )

for (not necessarily distinct) complex numbers λ1 , λ2 , . . . , λn .


Degree-2 Polynomials. The quadratic formula gives the roots of a degree-2 polynomial, real or complex:

f (x) = x² + bx + c   =⇒   x = (−b ± √(b² − 4c))/2.

For example, if f (x) = x² − √2 x + 1, then

x = (√2 ± √(−2))/2 = (√2/2)(1 ± i) = (1 ± i)/√2.
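Numerically, such roots can be found with a polynomial root finder; for instance, NumPy's np.roots takes the coefficients listed from highest degree to lowest (this snippet is an illustration only).

import numpy as np

# Roots of f(x) = x^2 - sqrt(2) x + 1, as in the example above.
print(np.roots([1.0, -np.sqrt(2.0), 1.0]))
# prints approximately [0.7071+0.7071j  0.7071-0.7071j], i.e. (1 ± i)/sqrt(2)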
Note that if b, c are real numbers, then the two roots are complex conjugates.
A complex number z is real if and only if z = z. This leads to the following
observation.

If f is a polynomial with real coefficients, and if λ is a complex root of f , then so is its conjugate λ̄:

0 = f (λ) = λ^n + an−1 λ^(n−1) + · · · + a1 λ + a0 ,

and taking complex conjugates (using that the coefficients ai are real) gives

0 = λ̄^n + an−1 λ̄^(n−1) + · · · + a1 λ̄ + a0 = f (λ̄).

Therefore, complex roots of real polynomials come in conjugate pairs.

Degree-3 Polynomials. A real cubic polynomial has either three real roots, or one
real root and a conjugate pair of complex roots.
For example, f (x) = x³ − x = x(x − 1)(x + 1) has three real roots, at x = 0, 1, and −1, so its graph crosses the x-axis three times.

On the other hand, the polynomial

g(x) = x³ − 5x² + x − 5 = (x − 5)(x² + 1) = (x − 5)(x + i)(x − i)

has one real root at 5 and a conjugate pair of complex roots ±i, so its graph crosses the x-axis only once.
Appendix B

Notation

The following table defines the notation used in this book. Page numbers or ref-
erences refer to the first appearance of each symbol.

Symbol Description Page

0 The number zero 11


R The real numbers 11
Rn Real n-space 11
Ri Row i of a matrix 21
(1, 2) A vector 37
0 The zero vector 37
Span{v1 , v2 , . . . , vp } Span of vectors 46
{x | condition} Set builder notation 46
m × n matrix Dimensions of a matrix 49
Col(A) Column space 90
Nul(A) Null space 90
dim V Dimension of a subspace 95
rank(A) The rank of a matrix 109
nullity(A) The nullity of a matrix 109
x ↦ y image of a point under a transformation 120
T : Rn → Rm transformation with domain Rn and codomain Rm 120
IdRn Identity transformation 122
e1 , e2 , . . . Standard coordinate vectors 147
In n × n identity matrix 148
0 The zero transformation 154
ai j The i, j entry of a matrix 158
0 The zero matrix 159
A−1 Inverse of a matrix 168
T −1 Inverse of a transformation 175
det(A) The determinant of a matrix 186
AT Transpose of a matrix 197
Ai j Minor of a matrix 205
Ci j Cofactor of a matrix 205


adj(A) Adjugate matrix 216
vol(P) Volume of a region 222
vol(A) Volume of the parallelepiped of a matrix 223
T (S) The image of a region under a transformation 230
Tr(A) Trace of a matrix 254
Re(v) Real part of a complex vector 309
Im(v) Imaginary part of a complex vector 309
x·y Dot product of two vectors 339
x⊥y x is orthogonal to y 342
W⊥ Orthogonal complement of a subspace 345
Row(A) Row space of a matrix 351
xW Orthogonal projection of x onto W 355
xW ⊥ Orthogonal part of x with respect to W 355
C The complex numbers 403
z Complex conjugate 404
Re(z) Real part of a complex number 404
Im(z) Imaginary part of a complex number 404
Appendix C

Hints and Solutions to Selected Exercises
Appendix D

GNU Free Documentation License

Version 1.3, 3 November 2008


Copyright © 2000, 2001, 2002, 2007, 2008 Free Software Foundation, Inc.
<http://www.fsf.org/>
Everyone is permitted to copy and distribute verbatim copies of this license
document, but changing it is not allowed.

0. PREAMBLE The purpose of this License is to make a manual, textbook, or


other functional and useful document “free” in the sense of freedom: to assure ev-
eryone the effective freedom to copy and redistribute it, with or without modifying
it, either commercially or noncommercially. Secondarily, this License preserves for
the author and publisher a way to get credit for their work, while not being con-
sidered responsible for modifications made by others.
This License is a kind of “copyleft”, which means that derivative works of the
document must themselves be free in the same sense. It complements the GNU
General Public License, which is a copyleft license designed for free software.
We have designed this License in order to use it for manuals for free software,
because free software needs free documentation: a free program should come with
manuals providing the same freedoms that the software does. But this License is
not limited to software manuals; it can be used for any textual work, regardless of
subject matter or whether it is published as a printed book. We recommend this
License principally for works whose purpose is instruction or reference.

1. APPLICABILITY AND DEFINITIONS This License applies to any manual or


other work, in any medium, that contains a notice placed by the copyright holder
saying it can be distributed under the terms of this License. Such a notice grants
a world-wide, royalty-free license, unlimited in duration, to use that work under
the conditions stated herein. The “Document”, below, refers to any such manual
or work. Any member of the public is a licensee, and is addressed as “you”. You
accept the license if you copy, modify or distribute the work in a way requiring
permission under copyright law.


A “Modified Version” of the Document means any work containing the Doc-
ument or a portion of it, either copied verbatim, or with modifications and/or
translated into another language.
A “Secondary Section” is a named appendix or a front-matter section of the
Document that deals exclusively with the relationship of the publishers or authors
of the Document to the Document’s overall subject (or to related matters) and con-
tains nothing that could fall directly within that overall subject. (Thus, if the Doc-
ument is in part a textbook of mathematics, a Secondary Section may not explain
any mathematics.) The relationship could be a matter of historical connection
with the subject or with related matters, or of legal, commercial, philosophical,
ethical or political position regarding them.
The “Invariant Sections” are certain Secondary Sections whose titles are desig-
nated, as being those of Invariant Sections, in the notice that says that the Docu-
ment is released under this License. If a section does not fit the above definition of
Secondary then it is not allowed to be designated as Invariant. The Document may
contain zero Invariant Sections. If the Document does not identify any Invariant
Sections then there are none.
The “Cover Texts” are certain short passages of text that are listed, as Front-
Cover Texts or Back-Cover Texts, in the notice that says that the Document is re-
leased under this License. A Front-Cover Text may be at most 5 words, and a
Back-Cover Text may be at most 25 words.
A “Transparent” copy of the Document means a machine-readable copy, rep-
resented in a format whose specification is available to the general public, that is
suitable for revising the document straightforwardly with generic text editors or
(for images composed of pixels) generic paint programs or (for drawings) some
widely available drawing editor, and that is suitable for input to text formatters
or for automatic translation to a variety of formats suitable for input to text for-
matters. A copy made in an otherwise Transparent file format whose markup, or
absence of markup, has been arranged to thwart or discourage subsequent mod-
ification by readers is not Transparent. An image format is not Transparent if
used for any substantial amount of text. A copy that is not “Transparent” is called
“Opaque”.
Examples of suitable formats for Transparent copies include plain ASCII with-
out markup, Texinfo input format, LaTeX input format, SGML or XML using a
publicly available DTD, and standard-conforming simple HTML, PostScript or PDF
designed for human modification. Examples of transparent image formats include
PNG, XCF and JPG. Opaque formats include proprietary formats that can be read
and edited only by proprietary word processors, SGML or XML for which the DTD
and/or processing tools are not generally available, and the machine-generated
HTML, PostScript or PDF produced by some word processors for output purposes
only.
The “Title Page” means, for a printed book, the title page itself, plus such fol-
lowing pages as are needed to hold, legibly, the material this License requires to
appear in the title page. For works in formats which do not have any title page
as such, “Title Page” means the text near the most prominent appearance of the

work’s title, preceding the beginning of the body of the text.


The “publisher” means any person or entity that distributes copies of the Doc-
ument to the public.
A section “Entitled XYZ” means a named subunit of the Document whose title
either is precisely XYZ or contains XYZ in parentheses following text that trans-
lates XYZ in another language. (Here XYZ stands for a specific section name
mentioned below, such as “Acknowledgements”, “Dedications”, “Endorsements”,
or “History”.) To “Preserve the Title” of such a section when you modify the Doc-
ument means that it remains a section “Entitled XYZ” according to this definition.
The Document may include Warranty Disclaimers next to the notice which
states that this License applies to the Document. These Warranty Disclaimers are
considered to be included by reference in this License, but only as regards dis-
claiming warranties: any other implication that these Warranty Disclaimers may
have is void and has no effect on the meaning of this License.

2. VERBATIM COPYING You may copy and distribute the Document in any
medium, either commercially or noncommercially, provided that this License, the
copyright notices, and the license notice saying this License applies to the Doc-
ument are reproduced in all copies, and that you add no other conditions what-
soever to those of this License. You may not use technical measures to obstruct
or control the reading or further copying of the copies you make or distribute.
However, you may accept compensation in exchange for copies. If you distribute
a large enough number of copies you must also follow the conditions in section 3.
You may also lend copies, under the same conditions stated above, and you
may publicly display copies.

3. COPYING IN QUANTITY If you publish printed copies (or copies in media that
commonly have printed covers) of the Document, numbering more than 100, and
the Document’s license notice requires Cover Texts, you must enclose the copies
in covers that carry, clearly and legibly, all these Cover Texts: Front-Cover Texts
on the front cover, and Back-Cover Texts on the back cover. Both covers must also
clearly and legibly identify you as the publisher of these copies. The front cover
must present the full title with all words of the title equally prominent and visible.
You may add other material on the covers in addition. Copying with changes
limited to the covers, as long as they preserve the title of the Document and satisfy
these conditions, can be treated as verbatim copying in other respects.
If the required texts for either cover are too voluminous to fit legibly, you should
put the first ones listed (as many as fit reasonably) on the actual cover, and con-
tinue the rest onto adjacent pages.
If you publish or distribute Opaque copies of the Document numbering more
than 100, you must either include a machine-readable Transparent copy along with
each Opaque copy, or state in or with each Opaque copy a computer-network lo-
cation from which the general network-using public has access to download using
public-standard network protocols a complete Transparent copy of the Document,

free of added material. If you use the latter option, you must take reasonably pru-
dent steps, when you begin distribution of Opaque copies in quantity, to ensure
that this Transparent copy will remain thus accessible at the stated location until
at least one year after the last time you distribute an Opaque copy (directly or
through your agents or retailers) of that edition to the public.
It is requested, but not required, that you contact the authors of the Document
well before redistributing any large number of copies, to give them a chance to
provide you with an updated version of the Document.

4. MODIFICATIONS You may copy and distribute a Modified Version of the Doc-
ument under the conditions of sections 2 and 3 above, provided that you release
the Modified Version under precisely this License, with the Modified Version fill-
ing the role of the Document, thus licensing distribution and modification of the
Modified Version to whoever possesses a copy of it. In addition, you must do these
things in the Modified Version:

A. Use in the Title Page (and on the covers, if any) a title distinct from that of
the Document, and from those of previous versions (which should, if there
were any, be listed in the History section of the Document). You may use the
same title as a previous version if the original publisher of that version gives
permission.

B. List on the Title Page, as authors, one or more persons or entities respon-
sible for authorship of the modifications in the Modified Version, together
with at least five of the principal authors of the Document (all of its prin-
cipal authors, if it has fewer than five), unless they release you from this
requirement.

C. State on the Title page the name of the publisher of the Modified Version, as
the publisher.

D. Preserve all the copyright notices of the Document.

E. Add an appropriate copyright notice for your modifications adjacent to the


other copyright notices.

F. Include, immediately after the copyright notices, a license notice giving the
public permission to use the Modified Version under the terms of this License,
in the form shown in the Addendum below.

G. Preserve in that license notice the full lists of Invariant Sections and required
Cover Texts given in the Document’s license notice.

H. Include an unaltered copy of this License.



I. Preserve the section Entitled “History”, Preserve its Title, and add to it an
item stating at least the title, year, new authors, and publisher of the Modi-
fied Version as given on the Title Page. If there is no section Entitled “History”
in the Document, create one stating the title, year, authors, and publisher of
the Document as given on its Title Page, then add an item describing the
Modified Version as stated in the previous sentence.

J. Preserve the network location, if any, given in the Document for public access
to a Transparent copy of the Document, and likewise the network locations
given in the Document for previous versions it was based on. These may be
placed in the “History” section. You may omit a network location for a work
that was published at least four years before the Document itself, or if the
original publisher of the version it refers to gives permission.

K. For any section Entitled “Acknowledgements” or “Dedications”, Preserve the


Title of the section, and preserve in the section all the substance and tone of
each of the contributor acknowledgements and/or dedications given therein.

L. Preserve all the Invariant Sections of the Document, unaltered in their text
and in their titles. Section numbers or the equivalent are not considered part
of the section titles.

M. Delete any section Entitled “Endorsements”. Such a section may not be in-
cluded in the Modified Version.

N. Do not retitle any existing section to be Entitled “Endorsements” or to conflict


in title with any Invariant Section.

O. Preserve any Warranty Disclaimers.

If the Modified Version includes new front-matter sections or appendices that


qualify as Secondary Sections and contain no material copied from the Document,
you may at your option designate some or all of these sections as invariant. To
do this, add their titles to the list of Invariant Sections in the Modified Version’s
license notice. These titles must be distinct from any other section titles.
You may add a section Entitled “Endorsements”, provided it contains noth-
ing but endorsements of your Modified Version by various parties — for example,
statements of peer review or that the text has been approved by an organization
as the authoritative definition of a standard.
You may add a passage of up to five words as a Front-Cover Text, and a passage
of up to 25 words as a Back-Cover Text, to the end of the list of Cover Texts in the
Modified Version. Only one passage of Front-Cover Text and one of Back-Cover
Text may be added by (or through arrangements made by) any one entity. If the
Document already includes a cover text for the same cover, previously added by
you or by arrangement made by the same entity you are acting on behalf of, you
may not add another; but you may replace the old one, on explicit permission from
the previous publisher that added the old one.

The author(s) and publisher(s) of the Document do not by this License give
permission to use their names for publicity for or to assert or imply endorsement
of any Modified Version.

5. COMBINING DOCUMENTS You may combine the Document with other doc-
uments released under this License, under the terms defined in section 4 above
for modified versions, provided that you include in the combination all of the In-
variant Sections of all of the original documents, unmodified, and list them all
as Invariant Sections of your combined work in its license notice, and that you
preserve all their Warranty Disclaimers.
The combined work need only contain one copy of this License, and multiple
identical Invariant Sections may be replaced with a single copy. If there are mul-
tiple Invariant Sections with the same name but different contents, make the title
of each such section unique by adding at the end of it, in parentheses, the name of
the original author or publisher of that section if known, or else a unique number.
Make the same adjustment to the section titles in the list of Invariant Sections in
the license notice of the combined work.
In the combination, you must combine any sections Entitled “History” in the
various original documents, forming one section Entitled “History”; likewise com-
bine any sections Entitled “Acknowledgements”, and any sections Entitled “Dedi-
cations”. You must delete all sections Entitled “Endorsements”.

6. COLLECTIONS OF DOCUMENTS You may make a collection consisting of


the Document and other documents released under this License, and replace the
individual copies of this License in the various documents with a single copy that
is included in the collection, provided that you follow the rules of this License for
verbatim copying of each of the documents in all other respects.
You may extract a single document from such a collection, and distribute it
individually under this License, provided you insert a copy of this License into
the extracted document, and follow this License in all other respects regarding
verbatim copying of that document.

7. AGGREGATION WITH INDEPENDENT WORKS A compilation of the Document or its derivatives with other separate and independent documents or works,
in or on a volume of a storage or distribution medium, is called an “aggregate” if
the copyright resulting from the compilation is not used to limit the legal rights of
the compilation’s users beyond what the individual works permit. When the Doc-
ument is included in an aggregate, this License does not apply to the other works
in the aggregate which are not themselves derivative works of the Document.
If the Cover Text requirement of section 3 is applicable to these copies of the
Document, then if the Document is less than one half of the entire aggregate,
the Document’s Cover Texts may be placed on covers that bracket the Document
within the aggregate, or the electronic equivalent of covers if the Document is in
electronic form. Otherwise they must appear on printed covers that bracket the
whole aggregate.

8. TRANSLATION Translation is considered a kind of modification, so you may distribute translations of the Document under the terms of section 4. Replacing
Invariant Sections with translations requires special permission from their copy-
right holders, but you may include translations of some or all Invariant Sections
in addition to the original versions of these Invariant Sections. You may include
a translation of this License, and all the license notices in the Document, and any
Warranty Disclaimers, provided that you also include the original English version
of this License and the original versions of those notices and disclaimers. In case
of a disagreement between the translation and the original version of this License
or a notice or disclaimer, the original version will prevail.
If a section in the Document is Entitled “Acknowledgements”, “Dedications”, or
“History”, the requirement (section 4) to Preserve its Title (section 1) will typically
require changing the actual title.

9. TERMINATION You may not copy, modify, sublicense, or distribute the Doc-
ument except as expressly provided under this License. Any attempt otherwise to
copy, modify, sublicense, or distribute it is void, and will automatically terminate
your rights under this License.
However, if you cease all violation of this License, then your license from a par-
ticular copyright holder is reinstated (a) provisionally, unless and until the copy-
right holder explicitly and finally terminates your license, and (b) permanently, if
the copyright holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is reinstated per-
manently if the copyright holder notifies you of the violation by some reasonable
means, this is the first time you have received notice of violation of this License
(for any work) from that copyright holder, and you cure the violation prior to 30
days after your receipt of the notice.
Termination of your rights under this section does not terminate the licenses
of parties who have received copies or rights from you under this License. If your
rights have been terminated and not permanently reinstated, receipt of a copy of
some or all of the same material does not give you any rights to use it.

10. FUTURE REVISIONS OF THIS LICENSE The Free Software Foundation may
publish new, revised versions of the GNU Free Documentation License from time
to time. Such new versions will be similar in spirit to the present version, but may
differ in detail to address new problems or concerns. See http://www.gnu.org/
copyleft/.
Each version of the License is given a distinguishing version number. If the
Document specifies that a particular numbered version of this License “or any later
version” applies to it, you have the option of following the terms and conditions
either of that specified version or of any later version that has been published (not
as a draft) by the Free Software Foundation. If the Document does not specify a
version number of this License, you may choose any version ever published (not as
a draft) by the Free Software Foundation. If the Document specifies that a proxy
can decide which future versions of this License can be used, that proxy’s public
statement of acceptance of a version permanently authorizes you to choose that
version for the Document.

11. RELICENSING “Massive Multiauthor Collaboration Site” (or “MMC Site”) means any World Wide Web server that publishes copyrightable works and also
provides prominent facilities for anybody to edit those works. A public wiki that
anybody can edit is an example of such a server. A “Massive Multiauthor Collab-
oration” (or “MMC”) contained in the site means any set of copyrightable works
thus published on the MMC site.
“CC-BY-SA” means the Creative Commons Attribution-Share Alike 3.0 license
published by Creative Commons Corporation, a not-for-profit corporation with a
principal place of business in San Francisco, California, as well as future copyleft
versions of that license published by that same organization.
“Incorporate” means to publish or republish a Document, in whole or in part,
as part of another Document.
An MMC is “eligible for relicensing” if it is licensed under this License, and
if all works that were first published under this License somewhere other than
this MMC, and subsequently incorporated in whole or in part into the MMC, (1)
had no cover texts or invariant sections, and (2) were thus incorporated prior to
November 1, 2008.
The operator of an MMC Site may republish an MMC contained in the site
under CC-BY-SA on the same site at any time before August 1, 2009, provided the
MMC is eligible for relicensing.

ADDENDUM: How to use this License for your documents To use this License
in a document you have written, include a copy of the License in the document
and put the following copyright and license notices just after the title page:
Copyright (C) YEAR YOUR NAME.
Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License, Version 1.3
or any later version published by the Free Software Foundation;
with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts.
A copy of the license is included in the section entitled "GNU
Free Documentation License".
If you have Invariant Sections, Front-Cover Texts and Back-Cover Texts, replace the “with...Texts.” line with this:
with the Invariant Sections being LIST THEIR TITLES, with the
Front-Cover Texts being LIST, and with the Back-Cover Texts being LIST.

If you have Invariant Sections without Cover Texts, or some other combination of
the three, merge those two alternatives to suit the situation.
If your document contains nontrivial examples of program code, we recom-
mend releasing these examples in parallel under your choice of free software li-
cense, such as the GNU General Public License, to permit their use in free software.
Index

B-coordinates
    change of basis matrix, 262
    computing
        row reduction, 108
        with respect to an orthogonal basis, 374
    definition of, 103
    informally, 103
    labeling points, 103
    nonstandard grid, 105

Algebraic multiplicity
    and diagonalizability, 289
    and geometric multiplicity, 288
    definition of, 287
    equals one, 288
    of similar matrices, 292
Approximate solution, see Least-squares
Augmented matrix, see Matrix

Basis
    and invertibility, 249
    and orthogonal projection, 361
    basis theorem, 110
    coordinates with respect to, see B-coordinates
    definition of, 95
    infinitely many, 95
    making orthogonal, 376
    of Rn, 96, 97
    of a column space, 98
    of a null space, 101
    of a span, 99
    span of an orthogonal set, 370
    uniqueness with respect to, 102
Best-fit problem, 387
    best-fit line, 387
    general setup, 395
Block Diagonalization Theorem, 318

Characteristic polynomial
    and eigenvalues, 252
    definition of, 251
    factoring by hand, 255
    form of, 254
    of a 2 × 2 matrix, 255
    of similar matrices, 272
Codomain, see Transformation
Cofactor, see Matrix
Color space, 12
Column rank, see Rank
Column space
    and invertibility, 249
    and rank, see Rank
    basis of, see Basis
    definition of, 90
    is a subspace, 90
    is row space of transpose, 351
    of an orthogonal projection, 365
    orthogonal complement of, 352
    range of a transformation, 124
    versus the solution set, 67
Column span, see Column space
Complex conjugation, see Complex numbers
Complex eigenvalue
    2 × 2 matrices
        and rotation-scaling matrices, 308
    and rotation-scaling matrices
        computing, 310
        different rotation-scaling matrices, 314
        dynamics of, 314, 316, 317
        geometry of, 307
    conjugate pairs, 302
    existence of, 299
Complex numbers
    absolute value, 404
    arithmetic of, 404
    conjugation, 404
    definition of, 403
    real and imaginary parts of, 404
    vectors, 309
Consistent, see System of linear equations
Cramer’s rule, 217
    and computing inverses, 216

Determinant
    alternative defining properties of, 203
    and column operations, 200
    and computing inverses, 216
    and powers of matrices, 196
    and row operations, 186
    and volumes, 222
    computation of
        cofactor expansion, 205
        row reduction, 190
    defining properties of, 186
    existence and uniqueness of, 192
    identity matrix, 186
    invertibility property, 192, 249
    methods of computation, 215
    multilinearity property, 201
    multiplicativity property, 194
        and volumes, 233
    of a 2 × 2 matrix, 170, 192, 211
    of a 3 × 3 matrix, 212
    of similar matrices, 272
    properties of, 204
    transpose property, 198
Diagonal, see Matrix, 188
Diagonalizability
    algebraic-geometric multiplicity criterion, 289
    criterion, 278
    criterion for similarity, 294
    definition of, 276
    diagonal matrices, 276
    distinct eigenvalues, 278
    geometry of, 294
    is unrelated to invertibility, 286
    of 2 × 2 matrices, 290
    of 3 × 3 matrices, 290
    of a projection matrix, 365
    order of eigenvalues, 279
    powers of, 277
    projection, 284
    recipe, 283
    shear, 283
    similar matrices, 276
Difference equation, 321
Dilation, 125
Dimension
    definition of, 95
    of a column space, 109
    of a null space, 109
    of a solution set, 67
    of an orthogonal complement, 350
Domain, see Transformation
Dot product
    and angles, 344
    and distance, 341
    and length, 340
    definition of, 339
    properties of, 340

Eigenspace
    and the null space, 249
    computation, 249
    definition of, 245
    is a solution set, 245
    is a subspace, 245
    of a projection matrix, 365
    of similar matrices, 273
Eigenvalue
    algebraic multiplicity of, see Algebraic multiplicity
    and diagonalizability, 278
    and invertibility, 249
    and stochastic matrices, 324
    and the characteristic polynomial, see Characteristic polynomial
    complex, see Complex eigenvalue
    definition of, 236
    eigenvector for, 236
    geometric multiplicity of, see Geometric multiplicity
    maximum number of, 244
    of a projection matrix, 365
    of a triangular matrix, 257
    of similar matrices, 272
    zero, 249
Eigenvector
    and collinearity, 238
    and diagonalizability, 278
    and stochastic matrices, 325
    computation, 249
        trick for 2 × 2 matrices, 302
    definition of, 236
    eigenvalue for, 236
    linear independence of, 244
    of a projection matrix, 365
    of similar matrices, 273
Elimination method, 20
Equation of linear dependence, see Linear independence

Free variable, 31
Function, see Transformation
Fundamental theorem of algebra, 405

Gaussian elimination, see Row reduction
Geometric multiplicity
    and algebraic multiplicity, 288
    and diagonalizability, 289
    definition of, 287
    of similar matrices, 292
Google PageRank
    Google Matrix, 333
        eigenvector of, 333
    Importance Matrix, 330
        eigenvector of, 331
    Importance Rule, 330
Gram–Schmidt Process, 376
    detecting linear dependence, 379

Homogeneous, see System of linear equations

Identity matrix
    and matrix multiplication, 166
    and standard coordinate vectors, 148
    as a matrix transformation, 125
    definition of, 148
    determinant of, 186
    similarity, 260
Identity transformation
    and composition, 157
    definition of, 122
Imaginary number, see Complex numbers
Imaginary part, see Complex numbers
Implicit equation, 18, 32
Inconsistent, see System of linear equations
Increasing span criterion, see Linear independence
Inhomogeneous, see System of linear equations
Invertible matrix
    and invertible transformation, 179
    basic facts, 169
    computation
        2 × 2 case, 170
        in general, 171
        using Cramer’s rule, 216
    definition of, 168
    determinant of, 192, 249
    inverse of, 169
    invertible matrix theorem, 249
    solving linear systems with, 173
Invertible matrix theorem, 182, 249
Invertible transformation
    and invertible matrices, 179, 249
    definition of, 175
    one-to-one and onto, 178

Least-squares
    and Ax = b_Col(A), 381
    computation of
        complicated matrix formula, 385
        Projection Formula, 387
        row reduction, 382, 383
    definition of, 381
    picture of, 381
    uniqueness of, 385
Line
    dimension-1 solution set, 67
    geometric definition of, 14
    number line, 11
    orthogonal projection onto, 357, 371
    parametric form of, 18
Linear combination
    collinear vectors, picture of, 42
    definition of, 40
    single vector, picture of, 41
    two vectors, picture of, 41
Linear dependence, see Linear independence
Linear dependence relation, see Linear independence
Linear equation
    definition of, 10
    system of, see System of linear equations
Linear Independence
    and matrix columns, 71
    basic facts, 72
    equation of linear dependence, 69
    increasing span criterion, 74
    linear dependence relation, 69
    pictures of, 74
    redundancy criterion, 73
    verifying, 71
    wide matrices, 72
Linear independence
    and determinants, 193
    and invertibility, 249
    definition of, 69
    of an orthogonal set, 370
    verifying
        with Gram–Schmidt, 379
Linear transformation
    addition of, see Transformation
    and volumes, see Matrix transformation
    are matrix transformations, 149
    basic facts, 142
    composition of, see Transformation
        and matrix multiplication, 162
        linearity of, 162
    definition of, 142
    dictionary, 149
    invertible, see Invertible transformation
    scalar multiplication of, see Transformation
    standard matrix of, 148
        orthogonal projection, 363
    when defined by a formula, 145
Lower-triangular, see Matrix, 188

Matrix
    addition of, 158
    as a function, see Matrix transformation, 114
    augmented, 21
    cofactor of, 205
        and determinants, 205
        sign of, 205
    definition of, 21
    determinant of, see Determinant
    diagonal entries of, 188
    dimensions of, 49
    inverse of, see Invertible matrix
    invertible, see Invertible matrix
    lower-triangular, 188
        determinant of, 188
        eigenvalues of, 257
    minor of, 205
    multiplication, see Matrix multiplication
    nullity of, see Nullity
    parallelepiped determined by, 222
    product with vector, see Matrix-vector product
    projection, see Orthogonal projection, standard matrix of
    rank of, see Rank
    rotation-scaling, see Rotation-scaling matrix
    scalar multiplication of, 158
    similar, see Similarity
    stochastic, see Stochastic matrix
    trace of, 254
        similar matrices, 272
    transpose of, 197
        and products, 198
        determinant of, 198
    upper-triangular, 188
        determinant of, 188
        eigenvalues of, 257
Matrix equation
    always consistent, 56
    and invertibility, 249
    definition of, 50
    equivalence with vector equation, 50
    solving with the inverse matrix, 173
    spans and consistency, 53
Matrix multiplication
    and composition of transformations, 162
    and the matrix-vector product, 160
    associativity of, 166
    caveats, 167
    definition of, 159
    determinant of, 194
    dimensions of, 160
    inverse of, 169
    noncommutativity of, 166
    order of operations, 167
    powers, 160
        and diagonalizability, 277
        and similarity, 261
    properties of, 166
    row-column rule, 161
Matrix transformation
    addition of, see Transformation
    and volumes, 230
    codomain of, 124
    composition of, see Linear transformation, see Transformation
    definition of, 123
    dictionary, 149
    domain of, 124
    invertible, see Invertible transformation
    linearity of, 142
    of R2, 125
    one-to-one criteria, 131
    onto criteria, 136
    range of, 124
    scalar multiplication of, see Transformation
    tall matrices, 138
    wide matrices, 134
Matrix-vector product
    and matrix multiplication, 160
    definition of, 49
    row-column rule, 52
    with standard coordinate vectors, 148
Minor, see Matrix
Multiplicity
    algebraic, see Algebraic multiplicity
    geometric, see Geometric multiplicity

Nontrivial solution, see System of linear equations
Null space
    and invertibility, 249
    basis of, see Basis
    computing, 92
    definition of, 90
    is a solution set, 92
    is a subspace, 90
    is the 0-eigenspace, 249
    of an orthogonal projection, 365
    orthogonal complement of, 352
Nullity, 109
    and invertibility, 249
    rank theorem, 109

One-to-one
    and invertibility, 249
    criteria for matrix transformations, 131
    definition of, 129
    equivalent formulations, 129
    finding two vectors with the same image, 134
    functions of one variable, 130
    negation of, 130
    square matrices, 141
    versus onto, 138
    wide matrices, 134
Onto
    and invertibility, 249
    criteria for matrix transformations, 136
    definition of, 135
    equivalent formulations, 135
    finding a vector not in the range, 138
    functions of one variable, 136
    negation of, 135
    square matrices, 141
    tall matrices, 138
    versus one-to-one, 138
Orthogonal complement
    definition of, 345
    dimension of, 350
    of a column space, 352
    of a null space, 352
    of a row space, 352
    of a span, 347, 352
    orthogonal complement of, 350
    pictures of, 345
    system of linear equations, 348
Orthogonal decomposition, see Orthogonal projection
Orthogonal projection
    and B-coordinates, 374
    as a transformation, 362
    composed with itself, 362
    computation of
        complicated matrix formula, 361
        Projection Formula, 371
        row reduction, 357, 359
    definition of, 355
    distance from, 355
    existence of, 354
    is the closest vector, 355
    linearity of, 362
    of a vector in W, 356
    of a vector in W⊥, 356
    onto a column space, 381
    onto a line, 357, 371
    properties of, 362
    range of, 362
    standard matrix of, 363
        column space of, 365
        complicated matrix formula, 365
        diagonalizability of, 365
        eigenvalues of, 365
        eigenvectors of, 365
        noninvertibility of, 365
        null space of, 365
        properties of, 365
        square of, 365
Orthogonal set
    and B-coordinates, 374
    and least squares, 387
    definition of, 368
    linear independence of, 370
    making orthonormal, 370
    necessity of, 375
    orthonormality of, 368
        standard coordinate vectors, 368
    producing from a basis, 376
Orthogonality
    and the Pythagorean theorem, 343
    definition of, 342
    zero vector, 342
Orthonormal set, see Orthogonal set

Parallelepiped
    definition of, 221
    flat, 222
    parallelogram, 221
        area of, 225
    volume of, 222
Parallelogram, see Parallelepiped
Parameterized equation, 32
Parametric form, 18, 32
Parametric vector form
    of a homogeneous equation, 62
    of an inhomogeneous equation, 66
    particular solution, 66
Particular solution, see Parametric vector form
Perron–Frobenius theorem, 325
Pivot, 24
Plane
    xy-plane, 11
    dimension-2 solution set, 67
    geometric definition of, 15
    parametric form of, 19
Point, 36
    distance between, 341
Polynomial
    characteristic, see Characteristic polynomial
    complex roots, 405
    conjugate roots, 405
    cubic, 405
    factoring by hand, 255
    quadratic, 405
    rational roots, 255
Power of a matrix, see Matrix multiplication
Projection
    diagonalizability of, 284
Projection Formula, 371
Projection matrix, see Orthogonal projection, standard matrix of

QR codes, 13
Quadratic formula, 405

Range, see Transformation
Rank, 109
    and invertibility, 249
    rank theorem, 109
    row and column, 353
Rational Root Theorem, 255
Real n-space, 11
    as a subspace of itself, 85
    point of, 11
Real numbers R, 11
Real part, see Complex numbers
Red Box, 321, 323, 328
Reduced row echelon form, 24
    and invertibility, 249
Reflection
    in general, 367
    over the y-axis, 125
Rotation
    composition of, 162
    counterclockwise by 90°, 125
    non-diagonalizability of, 285
Rotation-scaling matrix
    and complex eigenvalues, 308
    computing the angle, 307
    definition of, 304
    structure of, 304
Rotation-Scaling Theorem, 308
Row echelon form, 24
Row equivalence, 23
Row operations, 21
    and determinants, 186
    replacement, 21
    scaling, 21
    swap, 21
Row rank, see Rank
Row reduction
    algorithm, 26
    computing determinants, 190
    picture of, 28
Row replacement, see Row operations, replacement
Row space
    definition of, 351
    is column space of transpose, 351
    orthogonal complement of, 352
Row vector, see Vector

Set builder notation, 46
Shear
    in the x-direction, 125
    multiplicities, 287
    non-diagonalizability of, 283
Similarity
    action on a vector, 263
    and eigenspaces, 273
    and eigenvalues, 272
    and eigenvectors, 273
    and multiplicities, 292
    and powers, 261
    and the characteristic polynomial, 272
    and the determinant, 272
    and the trace, 272
    definition of, 259
    equivalence relation, 260
    geometry of, 262
    identity matrix, 260
    of 2 × 2 matrices, 317
    of diagonal matrices, 294
    of diagonalizable matrices, 294
    to a diagonal matrix, see Diagonalizability
    worked example, 263
Solution, see System of linear equations
Solution set
    definition of, 10
    of a homogeneous system is a null space, 92
    of a homogeneous system is a span, 62
    picture of, 14
    size of, 34
    translate of a span, 66
    versus the column space, 67
Space
    R3, 12
    color space, 12
    dimension-3 solution set, 67
Span
    basis of, see Basis
    definition of, 46
    is a subspace, 90
    orthogonal complement of, 347, 352
    pictures of, 47
Standard coordinate vectors
    and matrix columns, 148
    are unit vectors, 341
    columns of the identity matrix, 148
    definition of, 147
    orthonormality of, 368
    picture of, 147
Standard matrix, see Linear transformation
Steady state, see Stochastic matrix
Stochastic matrix
    definition of, 323
    eigenvalues of, 324
    long-term behavior of, 325
    picture of, 328, 330
    steady state of, 325
        computing, 326
    sum of entries of vector, 324
Subset
    definition of, 84
    set builder notation, 46
    versus subspace, 87
Subspace
    and spans, 90
    definition of, 84
    orthogonal complement of, see Orthogonal complement
    real n-space, 85
    verifying, 94
    versus subset, 87
    zero, 85
Superposition principle, 143
System of linear equations
    consistent, 10
        picture of, 44
        span criterion, 46, 53
    definition of, 10
    four ways of writing, 51
    homogeneous, 58
        trivial solution, 58
    inconsistent, 10
        picture of, 47
        RREF criterion, 29
    inhomogeneous, 58
    nontrivial solution, 58
        and free variables, 58
        and invertibility, 249
        finding, 59
    number of solutions of, 34
    parametric form of, see Parametric form
    parametric vector form of, see Parametric vector form
    particular solution of, see Parametric vector form
    solution of, 10
    solving with the inverse matrix, 173
    trivial solution, 58
        and invertibility, 249

Tall matrix, see Matrix transformation
Trace, see Matrix
Transformation
    addition of, 154
    as a machine, 120
    associated to a matrix, see Matrix transformation
    codomain of, 120
    composition of, 155
        noncommutativity of, 157
        order of operations, 167
    definition of, 120
    domain of, 120
    identity, see Identity transformation
    invertible, see Invertible transformation
    linear, see Linear transformation
    of one variable, 121
    of several variables, 122
    one-to-one, see One-to-one
    onto, see Onto
    range of, 120
    scalar multiplication of, 154
Transpose, see Matrix
Trivial solution, see System of linear equations

Unit cube, 221
Unit vector
    and orthonormality, 368
    definition of, 341
    in the direction of a vector, 342
    standard coordinate vectors, 341
Upper-triangular, see Matrix, 188

Vector
    addition, 38
        parallelogram law, 38
    angle between, 344
    definition of, 36
    distance between, 341
    length of, 340
    linear combination of, see Linear combination
    orthogonal, see Orthogonality
    product with matrix, see Matrix-vector product
    real and imaginary parts of, 309
    row vector, 52
        product with column vector, 52
    scalar multiplication, 38
        picture of, 40
    subtraction
        picture of, 39
    unit vector, see Unit vector
    unit vector in the direction of, 342
Vector equation
    consistent, see System of linear equations, consistent
    definition of, 43
    equivalence with matrix equation, 50
    equivalence with system of equations, 43
    inconsistent, see System of linear equations, inconsistent
    solving, 45
Volume
    and length, 225
    of a parallelepiped, 222
    of a region, 230
    signed, 228

Wide matrix, see Linear independence, see Matrix transformation
This book was authored in MathBook XML, with modifications by Joe Rabinoff.
