
The American Mathematical Monthly
ISSN: 0002-9890 (Print), 1930-0972 (Online). Journal homepage: https://www.tandfonline.com/loi/uamm20

To cite this article: Gilbert Strang (1993) The Fundamental Theorem of Linear Algebra, The American Mathematical Monthly, 100:9, 848-855, DOI: 10.1080/00029890.1993.11990500

To link to this article: https://doi.org/10.1080/00029890.1993.11990500

The Fundamental Theorem of Linear
Algebra

Gilbert Strang

This paper is about a theorem and the pictures that go with it. The theorem
describes the action of an m by n matrix. The matrix A produces a linear
transformation from R^n to R^m, but this picture by itself is too large. The "truth"
about Ax = b is expressed in terms of four subspaces (two of R^n and two of R^m).
The pictures aim to illustrate the action of A on those subspaces, in a way that
students won't forget.
The first step is to see Ax as a combination of the columns of A. Until then the
multiplication Ax is just numbers. This step raises the viewpoint to subspaces. We
see Ax in the column space. Solving Ax = b means finding all combinations of the
columns that produce b in the column space:

Ax  =  x_1 (column 1) + x_2 (column 2) + ... + x_n (column n)  =  b.
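As a small numerical illustration (not from the paper, with a made-up 3 by 2 matrix), the product Ax and the explicit column combination agree:

```python
import numpy as np

# A made-up 3 by 2 matrix and a vector x of weights.
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
x = np.array([2.0, -1.0])

Ax = A @ x                                   # matrix-vector product
combo = x[0] * A[:, 0] + x[1] * A[:, 1]      # explicit combination of the columns

print(Ax)                      # [0. 2. 4.]
print(np.allclose(Ax, combo))  # True: b = Ax lies in the column space of A
```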

The column space is the range R(A), a subspace of R^m. This abstraction, from
entries in A or x or b to the picture based on subspaces, is absolutely essential.
Note how subspaces enter for a purpose. We could invent vector spaces and
construct bases at random. That misses the purpose. Virtually all algorithms and
all applications of linear algebra are understood by moving to subspaces.
The key algorithm is elimination. Multiples of rows are subtracted from other
rows (and rows are exchanged). There is no change in the row space. This subspace
contains all combinations of the rows of A, which are the columns of A^T. The row
space of A is the column space R(A^T).
The other subspace of Rn is the nullspace N(A). It contains all solutions to
Ax = 0. Those solutions are not changed by elimination, whose purpose is to
compute them. A by-product of elimination is to display the dimensions of these
subspaces, which is the first part of the theorem.
The Fundamental Theorem of Linear Algebra has as many as four parts. Its
presentation often stops with Part 1, but the reader is urged to include Part 2.
(That is the only part we will prove-it is too valuable to miss. This is also as far as
we go in teaching.) The last two parts, at the end of this paper, sharpen the first
two. The complete picture shows the action of A on the four subspaces with the
right bases. Those bases come from the singular value decomposition.
The Fundamental Theorem begins with
Part 1. The dimensions of the subspaces.
Part 2. The orthogonality of the subspaces.



The dimensions obey the most important laws of linear algebra:

dim R(A) = dim R(A^T)   and   dim R(A) + dim N(A) = n.


When the row space has dimension r, the nullspace has dimension n - r.
Elimination identifies r pivot variables and n - r free variables. Those variables
correspond, in the echelon form, to columns with pivots and columns without
pivots. They give the dimension count r and n - r. Students see this for the
echelon matrix and believe it for A.
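A quick numerical check of Part 1, sketched with numpy on an arbitrary 3 by 3 example (not from the paper): the rank r gives dim R(A) = dim R(A^T), and n - r and m - r give the nullspace dimensions.

```python
import numpy as np

# An arbitrary 3 by 3 example with one dependent row.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 0.0, 1.0]])
m, n = A.shape

r = np.linalg.matrix_rank(A)
print(r)          # 2: dim R(A) = dim R(A^T)
print(n - r)      # 1: dim N(A)
print(m - r)      # 1: dim N(A^T)
```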
The orthogonality of those spaces is also essential, and very easy. Every x in the
nullspace is perpendicular to every row of the matrix, exactly because Ax = 0:
        [ --- row 1 --- ]        [ 0 ]
Ax  =   [      ...      ]  x  =  [ : ]
        [ --- row m --- ]        [ 0 ]
The first zero is the dot product of x with row 1. The last zero is the dot product
with row m. One at a time, the rows are perpendicular to any x in the nullspace.
So x is perpendicular to all combinations of the rows.
The nullspace N(A) is orthogonal to the row space R(A^T).
What is the fourth subspace? If the matrix A leads to R(A) and N(A), then its
transpose must lead to R(A^T) and N(A^T). The fourth subspace is N(A^T), the
nullspace of A^T. We need it! The theory of linear algebra is bound up in the
connections between row spaces and column spaces. If R(A^T) is orthogonal to
N(A), then, just by transposing, the column space R(A) is orthogonal to the
"left nullspace" N(A^T). Look at A^T y = 0:

           [ column 1 of A ]        [ 0 ]
A^T y  =   [      ...      ]  y  =  [ : ]
           [ column n of A ]        [ 0 ]
Since y is orthogonal to each column (producing each zero), y is orthogonal to the
whole column space. The point is that A^T is just as good a matrix as A. Nothing is
new, except A^T is n by m. Therefore the left nullspace has dimension m - r.
A^T y = 0 means the same as y^T A = 0^T. With the vector on the left, y^T A is a
combination of the rows of A. Contrast that with Ax = combination of the
columns.
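Part 2 can also be checked numerically. The sketch below (not from the paper) uses numpy's SVD, which appears later in this article, to produce orthonormal bases for the four subspaces of a made-up rank-one matrix and confirms both orthogonality statements.

```python
import numpy as np

# A made-up rank-one 2 by 3 matrix, so n - r = 2 and m - r = 1.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])
U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))

row_space = Vt[:r].T      # orthonormal basis of R(A^T)
nullspace = Vt[r:].T      # orthonormal basis of N(A)
col_space = U[:, :r]      # orthonormal basis of R(A)
left_null = U[:, r:]      # orthonormal basis of N(A^T)

print(np.allclose(row_space.T @ nullspace, 0))   # True: R(A^T) is orthogonal to N(A)
print(np.allclose(col_space.T @ left_null, 0))   # True: R(A) is orthogonal to N(A^T)
```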
The First Picture: Linear Equations
Figure 1 shows how A takes x into the column space. The nullspace goes to the
zero vector. Nothing goes elsewhere in the left nullspace-which is waiting its
turn.
With b in the column space, Ax = b can be solved. There is a particular
solution x_r in the row space. The homogeneous solutions x_n form the nullspace.
The general solution is x_r + x_n. The particularity of x_r is that it is orthogonal to
every x_n.
May I add a personal note about this figure? Many readers of Linear Algebra
and Its Applications [4] have seen it as fundamental. It captures so much about
Ax =b. Some letters suggested other ways to draw the orthogonal subspaces-
artistically this is the hardest part. The four subspaces (and very possibly the figure
itself) are of course not original. But as a key to the teaching of linear algebra, this
illustration is a gold mine.



Figure 1. The action of A: Row space to column space, nullspace to zero. (In R^n the row space and nullspace have dimensions r and n - r; in R^m the column space and left nullspace have dimensions r and m - r. A general x splits into x_r + x_n.)

Other writers made a further suggestion. They proposed a lower level textbook,
recognizing that the range of students who need linear algebra (and the variety of
preparation) is enormous. That new book contains Figures 1 and 2-also Figure 0,
to show the dimensions first. The explanation is much more gradual than in this
paper-but every course has to study subspaces! We should teach the important
ones.
The Second Figure: Least Squares Equations
If b is not in the column space, Ax = b cannot be solved. In practice we still
have to come up with a "solution." It is extremely common to have more equations
than unknowns-more output data than input controls, more measurements than
parameters to describe them. The data may lie close to a straight line b = C + Dt.
A parabola C + Dt + Et^2 would come closer. Whether we use polynomials or
sines and cosines or exponentials, the problem is still linear in the coefficients
C,D,E:

C + Dt_1 = b_1, ..., C + Dt_m = b_m     or     C + Dt_1 + Et_1^2 = b_1, ..., C + Dt_m + Et_m^2 = b_m.
There are n = 2 or n = 3 unknowns, and m is larger. There is no x = (C, D) or
x = (C, D, E) that satisfies all m equations. Ax = b has a solution only when the
points lie exactly on a line or a parabola-then b is in the column space of the m
by 2 or m by 3 matrix A.
The solution is to make the error b - Ax as small as possible. Since Ax can
never leave the column space, choose the closest point to b in that subspace. This
point is the projection p. Then the error vector e = b - p has minimal length.
To repeat: The best combination p = Ax̂ is the projection of b onto the column
space. The error e is perpendicular to that subspace. Therefore e = b - Ax̂ is in
the left nullspace:

A^T(b - Ax̂) = 0   or   A^T A x̂ = A^T b.

Calculus reaches the same linear equations by minimizing the quadratic ||b - Ax||^2.
The chain rule just multiplies both sides of Ax = b by A^T.



The "normal equations" are A 'lAx = ATb. They illustrate what is almost invari-
ably true-applications that start with a rectangular A end up computing with the
square symmetric matrix ATA. This matrix is invertible provided A has indepen-
dent columns. We make that assumption: The nullspace of A contains only x = 0.
(Then ATA.x = 0 implies x1A1Ax = 0 which implies Ax= 0 which forces x = 0, so
A1A is invertible.) The picture for least squares shows the action over on the right
side-the splitting of b into p + e.
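A minimal numpy sketch of this fit, with made-up data points (t_i, b_i) and the model b = C + Dt: the normal equations give x̂, and the error e = b - Ax̂ is checked to be perpendicular to the columns of A. (numpy's lstsq reaches the same x̂ without forming A^T A explicitly.)

```python
import numpy as np

# Made-up data points (t_i, b_i); fit the line b = C + D t.
t = np.array([0.0, 1.0, 2.0, 3.0])
b = np.array([1.1, 1.9, 3.2, 3.9])

A = np.column_stack([np.ones_like(t), t])    # columns: all ones, and t
x_hat = np.linalg.solve(A.T @ A, A.T @ b)    # normal equations A^T A x_hat = A^T b
C, D = x_hat

p = A @ x_hat                   # projection of b onto the column space
e = b - p                       # error vector, in the left nullspace
print(C, D)                     # fitted intercept and slope
print(np.allclose(A.T @ e, 0))  # True: e is perpendicular to the columns of A
```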

Figure 2. Least squares: x̂ minimizes ||b - Ax||^2 by solving A^T A x̂ = A^T b. (Ax = b is not possible; b splits into p + e, and Ax̂ = p.)

The Third Figure: Orthogonal Bases


Up to this point, nothing was said about bases for the four subspaces. Those
bases can be constructed from an echelon form-the output from elimination.
This construction is simple, but the bases are not perfect. A really good choice, in
fact a "canonical choice" that is close to unique, would achieve much more. To
complete the Fundamental Theorem, we make two requirements:
Part 3. The basis vectors are orthonormal.
Part 4. The matrix with respect to these bases is diagonal.
If v_1, ..., v_r is the basis for the row space and u_1, ..., u_r is the basis for the
column space, then Av_i = σ_i u_i. That gives a diagonal matrix Σ. We can further
ensure that σ_i > 0.
Orthonormal bases are no problem: the Gram-Schmidt process is available.
But a diagonal form involves eigenvalues. In this case they are the eigenvalues of
A^T A and AA^T. Those matrices are symmetric and positive semidefinite, so they
have nonnegative eigenvalues and orthonormal eigenvectors (which are the bases!).
Starting from A^T A v_i = σ_i^2 v_i, here are the key steps:

v_i^T A^T A v_i = σ_i^2 v_i^T v_i   so that   ||A v_i|| = σ_i

A A^T A v_i = σ_i^2 A v_i   so that   u_i = A v_i / σ_i is a unit eigenvector of AA^T.

All these matrices have rank r. The r positive eigenvalues σ_i^2 give the diagonal
entries σ_i of Σ.



The whole construction is called the singular value decomposition (SVD). It
amounts to a factorization of the original matrix A into UΣV^T, where
1. U is an m by m orthogonal matrix. Its columns u_1, ..., u_r, ..., u_m are basis
vectors for the column space and left nullspace.
2. Σ is an m by n diagonal matrix. Its nonzero entries are σ_1 > 0, ..., σ_r > 0.
3. V is an n by n orthogonal matrix. Its columns v_1, ..., v_r, ..., v_n are basis
vectors for the row space and nullspace.
The equations Av_i = σ_i u_i mean that AV = UΣ. Then multiplication by V^T
gives A = UΣV^T.
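Here is a short numpy sketch (not part of the original article) that computes the SVD of the 2 by 2 matrix used in the Example below and confirms A = UΣV^T, with squared singular values equal to the eigenvalues 50 and 0 of A^T A.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 6.0]])             # the matrix of the Example below

U, s, Vt = np.linalg.svd(A)            # s holds the singular values
Sigma = np.zeros(A.shape)
np.fill_diagonal(Sigma, s)

print(np.allclose(A, U @ Sigma @ Vt))  # True: A = U Sigma V^T
print(s**2)                            # approximately [50, 0]: eigenvalues of A^T A
```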
When A itself is symmetric, its eigenvectors u_i make it diagonal: A = UΛU^T.
The singular value decomposition extends this spectral theorem to matrices that
are not symmetric and not square. The eigenvalues are in Λ, the singular values
are in Σ. The factorization A = UΣV^T joins A = LU (elimination) and A = QR
(orthogonalization) as a beautifully direct statement of a central theorem in linear
algebra.
The history of the SVD is cloudy, beginning with Beltrami and Jordan in the
1870's, but its importance is clear. For a very quick history and proof, and much
more about its uses, please see [1]. "The most recurring theme in the book is the
practical and theoretical value of this matrix decomposition." The SVD in linear
algebra corresponds to the Cartan decomposition in Lie theory [3]. This is one
more case, if further convincing is necessary, in which mathematics gets the
properties right-and the applications follow.

Example

A  =  [ 1  2 ]  =  (1/√10) [ 1  -3 ]  [ √50  0 ]  (1/√5) [  1  2 ]
      [ 3  6 ]             [ 3   1 ]  [  0   0 ]         [ -2  1 ]

All four subspaces are 1-dimensional. The columns of A are multiples of [1; 3] in U.
The rows are multiples of [1  2] in V^T. Both A^T A and AA^T have eigenvalues 50
and 0. So the only singular value is σ_1 = √50.

Figure 3. Orthonormal bases that diagonalize A (Av_i = σ_i u_i).



The SVD expresses A as a combination of r rank-one matrices:

A = UΣV^T = σ_1 u_1 v_1^T + ... + σ_r u_r v_r^T.
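A brief numerical check of this rank-one expansion, sketched with numpy on the Example matrix (rank r = 1):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 6.0]])
U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))            # r = 1 for this matrix

# Sum of r rank-one pieces sigma_i * u_i * v_i^T rebuilds A.
A_rebuilt = sum(s[i] * np.outer(U[:, i], Vt[i]) for i in range(r))
print(np.allclose(A, A_rebuilt))      # True
```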

The Fourth Figure: The Pseudoinverse


The SVD leads directly to the "pseudoinverse" of A. This is needed, just as the
least squares solution x̂ was needed, to invert A and solve Ax = b when those
steps are strictly speaking impossible. The pseudoinverse A^+ agrees with A^-1
when A is invertible. The least squares solution of minimum length (having no
nullspace component) is x^+ = A^+ b. It coincides with x̂ when A has full column
rank r = n; then A^T A is invertible and Figure 4 becomes Figure 2.
A^+ takes the column space back to the row space [4]. On these spaces of equal
dimension r, the matrix A is invertible and A^+ inverts it. On the left nullspace,
A^+ is zero. I hope you will feel, after looking at Figure 4, that this is the one
natural best definition of an inverse. Despite those good adjectives, the SVD and
A^+ are too much for an introductory linear algebra course. They belong in a second
course. Still the picture with the four subspaces is absolutely intuitive.

Figure 4. The inverse of A (where possible) is the pseudoinverse A^+: column space back to row space, left nullspace to zero.

The SVD gives an easy formula for A^+, because it chooses the right bases. Since
Av_i = σ_i u_i, the inverse has to be A^+ u_i = v_i / σ_i. Thus the pseudoinverse of Σ
contains the reciprocals 1/σ_i. The orthogonal matrices U and V^T are inverted by
U^T and V. All together, the pseudoinverse of A = UΣV^T is A^+ = VΣ^+ U^T.
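A small numpy sketch of this formula (not from the paper): build A^+ = VΣ^+U^T by inverting only the positive singular values, and compare with numpy's built-in pinv. The matrix is the Example matrix, so A^+ should be (1/50)[1 3; 2 6].

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 6.0]])
U, s, Vt = np.linalg.svd(A)

# Sigma^+ inverts the positive singular values and leaves the zeros alone.
s_plus = np.array([1.0 / si if si > 1e-10 else 0.0 for si in s])
A_plus = Vt.T @ np.diag(s_plus) @ U.T

print(np.allclose(A_plus, np.linalg.pinv(A)))  # True
print(50 * A_plus)                             # approximately [[1, 3], [2, 6]]
```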

Example (continued)

A^+  =  VΣ^+U^T  =  (1/√5) [ 1  -2 ]  [ 1/√50  0 ]  (1/√10) [  1  3 ]  =  (1/50) [ 1  3 ]
                           [ 2   1 ]  [   0    0 ]          [ -3  1 ]            [ 2  6 ]

Always A^+A is the identity matrix on the row space, and zero on the nullspace:

A^+A  =  (1/50) [ 10  20 ]  =  projection onto the line through [ 1 ]
                [ 20  40 ]                                      [ 2 ]



Similarly AA^+ is the identity on the column space, and zero on the left nullspace:

AA^+  =  (1/50) [  5  15 ]  =  projection onto the line through [ 1 ]
                [ 15  45 ]                                      [ 3 ]
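These two projections can be confirmed numerically; a short sketch for the Example matrix:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 6.0]])
A_plus = np.linalg.pinv(A)

P_row = A_plus @ A    # projection onto the row space (the line through [1, 2])
P_col = A @ A_plus    # projection onto the column space (the line through [1, 3])

print(np.allclose(P_row, np.array([[10.0, 20.0], [20.0, 40.0]]) / 50))  # True
print(np.allclose(P_col, np.array([[ 5.0, 15.0], [15.0, 45.0]]) / 50))  # True
print(np.allclose(P_row @ P_row, P_row))   # True: a projection is idempotent
```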
A Summary of the Key Ideas
From its r-dimensional row space to its r-dimensional column space, A yields
an invertible linear transformation.
Proof" Suppose x and x' are in the row space, and Ax equals Ax' in the column
space. Then x - x' is in both the row space and nullspace. It is perpendicular to
itself. Therefore x = x' and the transformation is one-to-one.
The SVD chooses good bases for those subspaces. Compare with the Jordan form
for a real square matrix. There we are choosing the same basis for both domain
and range; our hands are tied. The best we can do is SAS^-1 = J or SA = JS. In
general J is not real. If real, then in general it is not diagonal. If diagonal, then in
general S is not orthogonal. By choosing two bases, not one, every matrix does as
well as a symmetric matrix. The bases are orthonormal and A is diagonalized.
Some applications permit two bases and others don't. For powers A^n we need
S^-1 to cancel S. Only a similarity is allowed (one basis). In a differential equation
u' = Au, we can make one change of variable u = Sv. Then v' = S^-1 A S v. But for
Ax = b, the domain and range are philosophically "not the same space." The row
and column spaces are isomorphic, but their bases can be different. And for least
squares the SVD is perfect.
This figure by Tom Hern and Cliff Long [2] shows the diagonalization of A.
Basis vectors go to basis vectors (principal axes). A circle goes to an ellipse. The
matrix is factored into UΣV^T. Behind the scenes are two symmetric matrices A^T A
and AA^T. So we reach two orthogonal matrices U and V.

[Figure: the unit circle of row-space basis vectors maps to an ellipse in the column space, with principal axes σ_1 u_1 and σ_2 u_2.]

We close by summarizing the action of A and A^T and A^+:

Av_i = σ_i u_i,    A^T u_i = σ_i v_i,    A^+ u_i = v_i / σ_i,    1 ≤ i ≤ r.

The nullspaces go to zero. Linearity does the rest.
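For completeness, a last numerical check of these three relations on the Example matrix (a sketch, not from the paper):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 6.0]])
U, s, Vt = np.linalg.svd(A)
u1, v1, sigma1 = U[:, 0], Vt[0], s[0]

print(np.allclose(A @ v1, sigma1 * u1))                   # A v_1   = sigma_1 u_1
print(np.allclose(A.T @ u1, sigma1 * v1))                 # A^T u_1 = sigma_1 v_1
print(np.allclose(np.linalg.pinv(A) @ u1, v1 / sigma1))   # A^+ u_1 = v_1 / sigma_1
```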



The support of the National Science Foundation (DMS 90-06220) is gratefully acknowledged.

REFERENCES

1. Gene Golub and Charles Van Loan, Matrix Computations, 2nd ed., Johns Hopkins University Press
(1989).
2. Thomas Hern and Cliff Long, Viewing some concepts and applications in linear algebra, Visualization in Teaching and Learning Mathematics, MAA Notes 19 (1991) 173-190.
3. Roger Howe, Very basic Lie theory, American Mathematical Monthly, 90 (1983) 600-623.
4. Gilbert Strang, Linear Algebra and Its Applications, 3rd ed., Harcourt Brace Jovanovich (1988).
5. Gilbert Strang, Introduction to Linear Algebra, Wellesley-Cambridge Press (1993).

Department of Mathematics
Massachusetts Institute of Technology
Cambridge, MA 02139
gs@math.mit.edu

An Identity of Daubechies
The generalization of an identity of Daubechies using a probabilistic interpretation by D. Zeilberger [100 (1993) 487] has already appeared in SIAM Review Problem 85-10 (June, 1985) in a slightly more general context. In addition to a similar probabilistic derivation there is also a direct algebraic proof. Incidentally, problem 10223 [99 (1992) 462] is the same as the identity of Daubechies, and a slight generalization of this identity has appeared previously as problem 183, Crux Math. 3 (1977) 69-70, and came from a list of problems considered for the Canadian Mathematical Olympiad. There was an inductive solution of the latter by Mark Kleinman, a high school student at the time and one of the top students in the U.S.A.M.O. and the I.M.O.
M. S. Klamkin
Department of Mathematics
University of Alberta
Edmonton, Alberta
CANADA T6G 2G1

