Matrix Operations - Richard Bronson - 2011 - SCHAUM'S OUTLINES

SCHAUM'S outlines

Matrix Operations
Second Edition

363 fully solved problems
• Treats matrix computations, algorithms, and operations
• Covers all course fundamentals and supplements any text

Linear Algebra * Matrix Algebra * Engineering Analysis * Applied Mathematics

Richard Bronson, Ph.D.
Professor of Mathematics and Computer Science, Fairleigh Dickinson University
RICHARD BRONSON, who is professor and chairman of mathematics and computer science at Fairleigh Dickinson
University, received his Ph.D. in applied mathematics from Stevens Institute of Technology in 1968. Dr. Bronson is currently
an associate editor of the journal Simulation, contributing editor to SIAM News, has served as a consultant to Bell Telephone
Laboratories, and has published over 25 technical articles and books, the latter including Schaum's Outline of Modern
Introductory Differential Equations and Schaum's Outline of Operations Research.

Copyright © 2011 by The McGraw-Hill Companies, Inc. All rights reserved. Printed in the United States of America. Except
as permitted under the United States Copyright Act of 1976, no part of this publication may be reproduced or distributed in
any form or by any means, or stored in a database or retrieval system, without the prior written permission of the publisher.

ISBN 978-0-07-175604-4
MH 0-07-175604-3

Trademarks: McGraw-Hill, the McGraw-Hill Publishing logo, Schaum's and related trade dress are trademarks or registered
trademarks of The McGraw-Hill Companies and/or its affiliates in the United States and other countries and may not be used
without written permission. All other trademarks are the property of their respective owners. The McGraw-Hill Companies is
not associated with any product or vendor mentioned in this book.
Preface

Perhaps no area of mathematics has changed as dramatically as matrices over the last 25 years. This is due both to the advent of the computer and to the introduction and acceptance of matrix methods into other applied disciplines. Computers provide an efficient mechanism for doing iterative computations. This, in turn, has revolutionized the methods used for locating eigenvalues and eigenvectors and has altered the usefulness of many classical techniques, such as those for obtaining inverses and solving simultaneous equations. Relatively new fields, such as operations research, lean heavily on matrix algebra, while established fields, such as economics, probability, and differential equations, continue to expand their reliance on matrices for clarifying and simplifying complex concepts.

This book is an algorithmic approach to matrix operations. The more complicated procedures are given as a series of steps which may be coded in a straightforward manner for computer implementation. The emphasis throughout is on computationally efficient methods. These should be of value to anyone who needs to apply matrix methods to his or her own work.

The material in this book is self-contained; all concepts and procedures are stated directly in terms of matrix operations. There are no prerequisites for using most of this book other than a working knowledge of high school algebra. Some of the applications, however, do require additional expertise, but these are self-evident and are limited to short portions of the book. For example, elementary calculus is needed for some of the material on applications.
Contents

Chapter 1   BASIC OPERATIONS .............................................. 1
            Matrices. Vectors and dot products. Matrix addition and matrix
            subtraction. Scalar multiplication and matrix multiplication. Row-echelon
            form. Elementary row and column operations. Rank.

Chapter 2   SIMULTANEOUS LINEAR EQUATIONS ................................ 11
            Consistency. Matrix notation. Theory of solutions. Simplifying
            operations. Gaussian elimination algorithm. Pivoting strategies.

Chapter 3   SQUARE MATRICES .............................................. 24
            Diagonals. Elementary matrices. LU decomposition. Simultaneous linear
            equations. Powers of a matrix.

Chapter 4   MATRIX INVERSION
            The inverse. Simple inverses. Calculating inverses. Simultaneous linear
            equations. Properties of the inverse.

Chapter 5   DETERMINANTS
            Expansion by cofactors. Properties of determinants. Determinants of
            partitioned matrices. Pivotal condensation. Inversion by determinants.

Chapter 9   CANONICAL BASES .............................................. 82
            Generalized eigenvectors. Chains. Canonical basis. The minimum
            polynomial.

Chapter 10  SIMILARITY ................................................... 91
            Similar matrices. Modal matrix. Jordan canonical form. Similarity and
            Jordan canonical form. Functions of matrices.

Chapter 11  INNER PRODUCTS .............................................. 103
            Complex conjugates. The inner product. Properties of inner products.
            Orthogonality. Gram-Schmidt orthogonalization.

Chapter 12  NORMS
            Vector norms. Normalized vectors and distance. Matrix norms. Induced
            norms. Compatibility. Spectral radius.

Chapter 13  HERMITIAN MATRICES
            Normal matrices. Hermitian matrices. Real symmetric matrices. The
            adjoint. Self-adjoint matrices.

Chapter 14  POSITIVE DEFINITE MATRICES .................................. 128
            Definite matrices. Tests for positive definiteness. Square roots of
            matrices. Cholesky decomposition.

Chapter 19  POWER METHODS FOR LOCATING REAL EIGENVALUES ................. 169
            Numerical methods. The power method. The inverse power method. The
            shifted inverse power method. Gerschgorin's theorem.

Chapter 20  THE QR ALGORITHM ............................................ 181
            The modified Gram-Schmidt process. QR decomposition. The QR
            algorithm. Accelerating convergence.

Chapter 21  GENERALIZED INVERSES
            Properties. A formula for generalized inverses. Singular-value
            decomposition. A stable formula for the generalized inverse. Least-squares
            solutions.

ANSWERS TO SUPPLEMENTARY PROBLEMS
Chapter 1
Basic Operations
MATRICES
A matrix is a rectangular array of elements arranged in horizontal rows and vertical columns, and
usually enclosed in brackets. In this book, the elements of a matrix will almost always be numbers or
functions of the variable t. A matrix is real-valued (or, simply, real) if all its elements are real
numbers or real-valued functions; it is complex-valued if at least one element is a complex number or
a complex-valued function. If all its elements are numbers, then a matrix is called a constant matrix.

Example 1.1

    [  3 ]      [ sin t ]
    [  4 ]      [   0   ]      and      [-1.7, 2 + i6, -3i, 0]
    [ -6 ]      [ cos t ]

are all matrices. The first two on the left are real-valued, whereas the third is complex-valued (with i
denoting the imaginary unit); the first and third are constant matrices, but the second is not constant.

Matrices are designated by boldface uppercase letters. A general matrix A having r rows and c
columns may be written

    A = [ a11  a12  ...  a1c ]
        [ a21  a22  ...  a2c ]
        [ ...                ]
        [ ar1  ar2  ...  arc ]

which is often abbreviated to A = [a_ij], where the elements of the matrix are double subscripted so
that the subscripts denote location: a_ij is the element in the ith row and jth column. The matrix A is
said to have order r x c (read "r by c").

2 BASIC OPERATIONS [CHAP. 1

MATRIX ADDITION AND MATRIX SUBTRACTION

The sum A + B of two matrices A = [a_ij] and B = [b_ij] having the same order is the matrix
obtained by adding corresponding elements of A and B. That is,

    A + B = [a_ij] + [b_ij] = [a_ij + b_ij]

Matrix addition is both associative and commutative. Thus,

    A + (B + C) = (A + B) + C    and    A + B = B + A

(See Problem 1.2.)

The matrix subtraction A - B is defined similarly: A and B must have the same order, and the
subtractions must be performed on corresponding elements to yield the matrix [a_ij - b_ij]. (See
Problem 1.3.)
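The element-by-element definitions above translate directly into code. The following is an illustrative Python sketch (not code from the text); the matrices used are arbitrary examples:

```python
def mat_add(A, B):
    # A and B must have the same order (the same numbers of rows and columns).
    if len(A) != len(B) or len(A[0]) != len(B[0]):
        raise ValueError("matrices must have the same order")
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_sub(A, B):
    # Subtraction is addition of the negated matrix, element by element.
    return mat_add(A, [[-b for b in row] for row in B])

A = [[5, 1], [7, 3]]
B = [[6, 2], [1, 1]]
# Matrix addition is commutative: A + B = B + A.
assert mat_add(A, B) == mat_add(B, A) == [[11, 3], [8, 4]]
assert mat_sub(A, B) == [[-1, -1], [6, 2]]
```

Because each entry of the sum depends only on the corresponding entries of A and B, commutativity of matrix addition follows directly from commutativity of ordinary addition.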

SCALAR MULTIPLICATION AND MATRIX MULTIPLICATION

For any scalar k (in this book, usually a number or a function of t), the matrix kA (or,
equivalently, Ak) is obtained by multiplying every element of A by the scalar k. That is,
kA = k[a_ij] = [k a_ij]. (See Problem 1.3.)

Let A = [a_ij] and B = [b_ij] have orders r x p and p x c, respectively, so that the number of
columns of A equals the number of rows of B. Then the product AB is defined to be the matrix
C = [c_ij] of order r x c whose elements are given by

    c_ij = sum over k = 1, 2, ..., p of a_ik b_kj    (i = 1, 2, ..., r; j = 1, 2, ..., c)

Each element c_ij of AB is a dot product; it is obtained by forming the transpose of the ith row of A
and then taking its dot product with the jth column of B. (See Problems 1.4 through 1.7.)

Matrix multiplication is associative and distributes over addition and subtraction; in general, it is
not commutative. Thus,

    A(BC) = (AB)C        A(B + C) = AB + AC        (B - C)A = BA - CA

but, in general, AB and BA need not be equal.

Example 1.3. The matrix

    [ 1  2  -3  0   4 ]
    [ 0  0   1  2  -1 ]
    [ 0  0   0  0   0 ]

satisfies all four conditions and so is in row-echelon form. (See Problems 1.11 to 1.15 and 1.18.)

ELEMENTARY ROW AND COLUMN OPERATIONS

There are three elementary row operations which may be used to transform a matrix into
row-echelon form. The origins of these operations are discussed in Chapter 2; the operations
themselves are:

(E1): Interchange any two rows.
(E2): Multiply the elements of any row by a nonzero scalar.
(E3): Add to any row, element by element, a scalar times the corresponding elements of another
row.

Three elementary column operations are defined analogously.


An algorithm for using elementary row operations to transform a matrix into row-echelon form is
as follows:

STEP 1.1: Let R denote the work row, and initialize R = 1 (so that the top row is the first
          work row).
STEP 1.2: Find the first column containing a nonzero element in either row R or any succeeding
          row. If no such column exists, stop; the transformation is complete. Otherwise, let C
          denote this column.
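The initialization steps above, together with the interchange, scaling, and elimination steps applied in the solved problems, can be sketched in Python. This is an illustrative sketch, not code from the text:

```python
def row_echelon(M):
    """Reduce a matrix (list of rows) to row-echelon form using the
    elementary row operations E1-E3."""
    A = [row[:] for row in M]
    rows, cols = len(A), len(A[0])
    R = 0                                 # work row (Step 1.1, zero-based here)
    for C in range(cols):                 # work column (Step 1.2)
        # Find a row at or below R with a nonzero entry in column C.
        pivot_row = next((i for i in range(R, rows) if A[i][C] != 0), None)
        if pivot_row is None:
            continue                      # no such column entry: move right
        A[R], A[pivot_row] = A[pivot_row], A[R]      # E1: interchange rows
        P = A[R][C]
        A[R] = [x / P for x in A[R]]                 # E2: make the pivot 1
        for i in range(R + 1, rows):                 # E3: eliminate below it
            V = A[i][C]
            A[i] = [x - V * y for x, y in zip(A[i], A[R])]
        R += 1
        if R == rows:
            break
    return A

# The matrix reduced by hand in Problem 1.13:
M = [[1, 2, -1, 6], [3, 8, 9, 10], [2, -1, 2, -2]]
print(row_echelon(M))
# [[1.0, 2.0, -1.0, 6.0], [0.0, 1.0, 6.0, -4.0], [0.0, 0.0, 1.0, -1.0]]
```

The printed result matches the row-echelon form obtained by hand in the solved problems.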

Solved Problems

1.1 Find A · B and B · C for

        [ 2 ]        [  5 ]        [  7 ]
    A = [ 3 ]    B = [  6 ]    C = [ -8 ]
        [ 4 ]        [ -7 ]        [ -9 ]

    A · B = 2(5) + 3(6) + 4(-7) = 0
    B · C = 5(7) + 6(-8) + (-7)(-9) = 50

1.2 Show that A + B = B + A for

    A = [ 1  2 ]    and    B = [ 5  6 ]
        [ 3  4 ]               [ 7  8 ]

    A + B = [ 1+5  2+6 ] = [  6   8 ] = [ 5+1  6+2 ] = B + A
            [ 3+7  4+8 ]   [ 10  12 ]   [ 7+3  8+4 ]

The two sums are equal because the addition of the individual elements is commutative.

1.6 Verify that (BA)' = A'B' for the matrices of Problem 1.5.

    A'B' = [ 1   4 ] [ 7   0 ]   [ 1(7) + 4(8)     1(0) + 4(-9)    ]   [  39  -36 ]
           [ 2  -5 ] [ 8  -9 ] = [ 2(7) + (-5)(8)  2(0) + (-5)(-9) ] = [ -26   45 ]
           [ 3   6 ]             [ 3(7) + 6(8)     3(0) + 6(-9)    ]   [  69  -54 ]

which is the transpose of the product BA found in Problem 1.5.

1.7 Find AB and AC if

    A = [  4   2  0 ]    B = [  2   3   1 ]    C = [  3  -3   1 ]
        [  2   1  0 ]        [  2  -2  -2 ]        [  0  10  -2 ]
        [ -2  -1  1 ]        [ -1   2   1 ]        [ -1   2   1 ]

    AB = [ 4(2) + 2(2) + 0(-1)      4(3) + 2(-2) + 0(2)      4(1) + 2(-2) + 0(1)     ]   [ 12   8  0 ]
         [ 2(2) + 1(2) + 0(-1)      2(3) + 1(-2) + 0(2)      2(1) + 1(-2) + 0(1)     ] = [  6   4  0 ]
         [ -2(2) + (-1)(2) + 1(-1)  -2(3) + (-1)(-2) + 1(2)  -2(1) + (-1)(-2) + 1(1) ]   [ -7  -2  1 ]

    AC = [ 4(3) + 2(0) + 0(-1)      4(-3) + 2(10) + 0(2)     4(1) + 2(-2) + 0(1)     ]   [ 12   8  0 ]
         [ 2(3) + 1(0) + 0(-1)      2(-3) + 1(10) + 0(2)     2(1) + 1(-2) + 0(1)     ] = [  6   4  0 ]
         [ -2(3) + (-1)(0) + 1(-1)  -2(-3) + (-1)(10) + 1(2) -2(1) + (-1)(-2) + 1(1) ]   [ -7  -2  1 ]

Note that, for these matrices, AB = AC and yet B is not equal to C. This shows that the cancellation
law is not valid for matrix multiplication.

A partitioned matrix can be viewed as a matrix whose elements are themselves matrices.

1.9 The arithmetic operations defined above for matrices having scalar elements apply as well to
partitioned matrices, provided the matrices are partitioned conformably: each submatrix is treated
as an element, and the indicated products and sums of submatrices are themselves computed as
matrix products and sums. Thus, if

    A = [ A11  A12 ]    and    B = [ B11  B12 ]
        [ A21  A22 ]               [ B21  B22 ]

then

    AB = [ A11 B11 + A12 B21    A11 B12 + A12 B22 ]
         [ A21 B11 + A22 B21    A21 B12 + A22 B22 ]

and A - B is obtained by subtracting corresponding submatrices.

We form the partitioned matrices and find their product. The product AB is the upper left part of the
resulting matrix. Since the row sums of this product, the column sums, and their sums are correctly
given in the matrix, the multiplication checks.

1.11 Determine which of the given matrices are in row-echelon form.

1.13 Transform the following matrix into row-echelon form:

    [ 1   2  -1   6 ]
    [ 3   8   9  10 ]
    [ 2  -1   2  -2 ]

Here (and in later problems) we shall use an arrow to indicate the row that results from each
elementary row operation.

    [ 1   2  -1   6 ]    Step 1.6 with R = 1, C = 1, N = 2,
  → [ 0   2  12  -8 ]    and V = 3: Add -3 times the first
    [ 2  -1   2  -2 ]    row to the second row.

    [ 1   2  -1   6 ]    Step 1.6 with R = 1, C = 1, N = 3,
    [ 0   2  12  -8 ]    and V = 2: Add -2 times the first
  → [ 0  -5   4 -14 ]    row to the third row.

    [ 1   2  -1   6 ]    Step 1.4 with R = 2, C = 2, and
  → [ 0   1   6  -4 ]    P = 2: Multiply the second row
    [ 0  -5   4 -14 ]    by 1/2.

    [ 1   2  -1   6 ]    Step 1.6 with R = 2, C = 2, N = 3,
    [ 0   1   6  -4 ]    and V = -5: Add 5 times the
  → [ 0   0  34 -34 ]    second row to the third row.

    [ 1   2  -1   6 ]    Step 1.4 with R = 3, C = 3, and
    [ 0   1   6  -4 ]    P = 34: Multiply the third row by
  → [ 0   0   1  -1 ]    1/34.

1.15 Transform the following matrix into row-echelon form:

    [ 3   2  1  -4   1 ]
    [ 2   3  0  -1  -1 ]
    [ 1  -6  3  -8   7 ]

  → [ 1  2/3  1/3  -4/3  1/3 ]    Step 1.4: Multiply the first row
    [ 2   3    0    -1    -1 ]    by 1/3.
    [ 1  -6    3    -8     7 ]

    [ 1  2/3   1/3  -4/3   1/3 ]    Step 1.6: Add -2 times the first
  → [ 0  5/3  -2/3   5/3  -5/3 ]    row to the second row.
    [ 1  -6     3    -8     7  ]

    [ 1    2/3   1/3   -4/3   1/3 ]    Step 1.6: Add -1 times the first
    [ 0    5/3  -2/3    5/3  -5/3 ]    row to the third row.
  → [ 0  -20/3   8/3  -20/3  20/3 ]

    [ 1    2/3   1/3   -4/3   1/3 ]    Step 1.4: Multiply the second row
  → [ 0     1   -2/5     1    -1  ]    by 3/5.
    [ 0  -20/3   8/3  -20/3  20/3 ]

    [ 1  2/3   1/3  -4/3  1/3 ]    Step 1.6: Add 20/3 times the
    [ 0   1   -2/5    1   -1  ]    second row to the third row.
  → [ 0   0     0     0    0  ]
1.16 Determine the rank of the matrix of Problem 1.14.

Since the row-echelon form of this matrix has three nonzero rows, the rank of the original matrix is 3.
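The rank computation can be cross-checked numerically. The sketch below assumes NumPy is available (it is not part of the text) and uses the coefficient matrix reduced in Problem 1.13, whose row-echelon form also has three nonzero rows:

```python
import numpy as np

M = np.array([[1, 2, -1],
              [3, 8, 9],
              [2, -1, 2]])
# The rank equals the number of nonzero rows in the row-echelon form of M.
print(np.linalg.matrix_rank(M))  # 3
```

Note that `matrix_rank` works from the singular values of M rather than from a row reduction, which makes it robust to roundoff, but for exact integer matrices the two approaches agree.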

Supplementary Problems

In Problems 1.19 through 1.32, A, B, C, D, E, and F denote given constant matrices.

1.19 Find (a) A + B; (b) 3A; (c) 2A - 3B; (d) C - D; and (e) A + F.

1.20 Designate the columns of A as A1 and A2, and the columns of C as C1, C2, and C3, from left to right.
Then calculate (a) A1 · A2; (b) C1 · C2; and (c) C2 · C3.

1.21 Find (a) AB; (b) BA; (c) (AB)'; (d) B'A'; and (e) A'B'.

1.22 Find (a) CD and (b) DC.

1.23 Find A(A + B).

1.24 Find (a) CE and (b) EC.

1.25 Find (a) CF and (b) FC.

1.26 Find (a) EF and (b) FE.

1.27 Transform A to row-echelon form.
Chapter 2
Simultaneous Linear Equations
CONSISTENCY
A system of simultaneous linear equations is a set of equations of the form

    a11 x1 + a12 x2 + a13 x3 + ... + a1n xn = b1
    a21 x1 + a22 x2 + a23 x3 + ... + a2n xn = b2
    ..........................................
    am1 x1 + am2 x2 + am3 x3 + ... + amn xn = bm        (2.1)

The coefficients a_ij (i = 1, 2, ..., m; j = 1, 2, ..., n) and the quantities b_i (i = 1, 2, ..., m) are
known constants. The x_j (j = 1, 2, ..., n) are the unknowns whose values are sought.

A solution for system (2.1) is a set of values, one for each unknown, that, when substituted in
the system, renders all its equations valid. (See Problem 2.1.) A system of simultaneous linear
equations may possess no solutions, exactly one solution, or more than one solution.

Example 2.1 The system

    x1 + x2 = 1
    x1 + x2 = 0

has no solutions, because there are no values for x1 and x2 that sum to 1 and 0 simultaneously.
12 SIMULTANEOUS LINEAR EQUATIONS [CHAP. 2

THEORY OF SOLUTIONS
Theorem 2.1: The system AX = B is consistent if and only if the rank of A equals the rank of [A |B].
Theorem 2.2: Denote the rank of A as k, and the number of unknowns as n. If the system AX = B is
consistent, then the solution contains n — k arbitrary scalars.
(See Problems 2.5 to 2.7.)
System (2.1) is said to be homogeneous if B = 0; that is, if b1 = b2 = ... = bm = 0. If B ≠ 0 (i.e.,
if at least one b_i (i = 1, 2, ..., m) is not zero), the system is nonhomogeneous. Homogeneous
systems are consistent and admit the solution x1 = x2 = ... = xn = 0, which is called the trivial solution;
a nontrivial solution is one that contains at least one nonzero value.

Theorem 2.3: Denote the rank of A as k, and the number of unknowns as n. The homogeneous
system AX = 0 has a nontrivial solution if and only if n ≠ k. (See Problem 2.7.)

SIMPLIFYING OPERATIONS

Three operations that alter the form of a system of simultaneous linear equations but do not alter
its solution set are:

(O1): Interchanging the sequence of two equations.
(O2): Multiplying an equation by a nonzero scalar.
(O3): Adding to one equation a scalar times another equation.

Applying operations O1, O2, and O3 to system (2.1) is equivalent to applying the elementary
row operations E1, E2, and E3 (see Chapter 1) to the augmented matrix associated with that system.
Gaussian elimination is an algorithm for applying these operations systematically, to obtain a set of
equations that is easy to analyze for consistency and easy to solve if it is consistent.

Partial pivoting involves searching the work column of the augmented matrix for the largest
element in absolute value appearing in the current work row or a succeeding row. That element
becomes the new pivot. To use partial pivoting, replace Step 1.3 of the algorithm for transforming a
matrix to row-echelon form with the following:

STEP 1.3': Beginning with row R and continuing through successive rows, locate the largest
element in absolute value appearing in work column C. Denote the first row in which
this element appears as row I. If I is different from R, interchange rows I and R
(elementary row operation E1). Row R will now have, in column C, the largest
nonzero element in absolute value appearing in column C of row R or any row
succeeding it. This element in row R and column C is called the pivot; let P denote its
value.

(See Problems 2.9 and 2.10.)

Two other pivoting strategies are described in Problems 2.11 and 2.12; they are successively
more powerful but require additional computations. Since the goal is to avoid significant roundoff
error, it is not necessary to find the best pivot element at each stage, but rather to avoid bad ones.
Thus, partial pivoting is the strategy most often implemented.
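Gaussian elimination with the partial-pivoting rule of Step 1.3' can be sketched as code. The following is an illustrative Python sketch (not from the text); the example system is the 3 x 3 system worked with pivoting strategies in the solved problems below:

```python
def solve_partial_pivot(A, b):
    """Gaussian elimination with partial pivoting: at each stage the pivot is
    the largest entry, in absolute value, in the work column at or below the
    work row (Step 1.3'), followed by back substitution."""
    n = len(A)
    M = [A[i][:] + [b[i]] for i in range(n)]       # augmented matrix [A | b]
    for R in range(n):
        I = max(range(R, n), key=lambda i: abs(M[i][R]))
        M[R], M[I] = M[I], M[R]                    # interchange rows I and R
        for i in range(R + 1, n):
            V = M[i][R] / M[R][R]
            M[i] = [x - V * y for x, y in zip(M[i], M[R])]
    x = [0.0] * n
    for R in range(n - 1, -1, -1):                 # back substitution
        x[R] = (M[R][n] - sum(M[R][j] * x[j] for j in range(R + 1, n))) / M[R][R]
    return x

A = [[1, 2, 3], [2, 1, -4], [-5, 8, 17]]
b = [18, -30, 96]
print([round(v, 6) for v in solve_partial_pivot(A, b)])  # [1.5, -3.0, 7.5]
```

The row interchange guarantees that no multiplier V exceeds 1 in magnitude, which is what keeps roundoff error from being amplified.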

2.3 Write the following system of equations in matrix form, and then determine its augmented
matrix:

    3x1 + 2x2 + x3 - 4x4 =  1
    2x1 + 3x2      -  x4 = -1
     x1 - 6x2 + 3x3 - 8x4 =  7

This system is equivalent to the matrix equation

    [ 3   2  1  -4 ] [ x1 ]   [  1 ]
    [ 2   3  0  -1 ] [ x2 ] = [ -1 ]
    [ 1  -6  3  -8 ] [ x3 ]   [  7 ]
                     [ x4 ]

The associated augmented matrix is

    [A | B] = [ 3   2  1  -4 |  1 ]
              [ 2   3  0  -1 | -1 ]
              [ 1  -6  3  -8 |  7 ]
2.4 Determine the set of simultaneous equations that corresponds to the augmented matrix

    [A | B] = [ 1  2/3   1/3  -4/3 | 1/3 ]
              [ 0   1   -2/5    1  |  -1 ]
              [ 0   0     0     0  |   0 ]

The corresponding equations are

    x1 + (2/3)x2 + (1/3)x3 - (4/3)x4 = 1/3
              x2 - (2/5)x3 +      x4 = -1
                                   0 = 0

2.5 Solve the set of equations given in Problem 2.3 by Gaussian elimination.

The augmented matrix for this system was determined in Problem 2.3 to be

    [A | B] = [ 3   2  1  -4 |  1 ]
              [ 2   3  0  -1 | -1 ]
              [ 1  -6  3  -8 |  7 ]

Using the results of Problem 1.15, we transform this matrix into the row-echelon form

    [C | D] = [ 1  2/3   1/3  -4/3 | 1/3 ]
              [ 0   1   -2/5    1  |  -1 ]
              [ 0   0     0     0  |   0 ]

It follows from Problem 1.17 that the rank of [C | D] is 2. Submatrix C is also in row-echelon form, and it
also has rank 2. Thus, the original set of equations is consistent.

Now, using the results of Problem 2.4, we write

    x1 + (2/3)x2 + (1/3)x3 - (4/3)x4 = 1/3
              x2 - (2/5)x3 +      x4 = -1

as the set of equations associated with [C | D]. Solving by back substitution, with x3 and x4 arbitrary,
gives

    x2 = -1 + (2/5)x3 - x4
    x1 = 1/3 - (2/3)x2 - (1/3)x3 + (4/3)x4 = 1 - (3/5)x3 + 2x4

The rank of the coefficient matrix A is thus 2, and because there are three unknowns in the original set
of equations, the system has nontrivial solutions. In the set of equations associated with the augmented
matrix [C | D], the third equation reduces to 0 = 0, so x3 is arbitrary; solving for x1 and x2 in terms of
x3 by back substitution then yields the nontrivial solutions.
2.8 Solve the following set of equations:

     x1 + 2x2 -  x3 =  6
    3x1 + 8x2 + 9x3 = 10
    2x1 -  x2 + 2x3 = -2

The augmented matrix associated with this system is

    [A | B] = [ 1   2  -1 |  6 ]
              [ 3   8   9 | 10 ]
              [ 2  -1   2 | -2 ]

Using the results of Problem 1.13, we transform this matrix into the row-echelon form

    [ 1  2  -1 |  6 ]
    [ 0  1   6 | -4 ]
    [ 0  0   1 | -1 ]

Both this matrix and its coefficient submatrix have rank 3, so the system is consistent. Back substitution
gives x3 = -1, then x2 = -4 - 6(-1) = 2, and x1 = 6 - 2(2) + (-1) = 1.
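A quick numerical cross-check of a system of this form is easy with NumPy (not part of the text); the coefficients below are those read off above:

```python
import numpy as np

A = np.array([[1.0, 2, -1],
              [3, 8, 9],
              [2, -1, 2]])
b = np.array([6.0, 10, -2])
x = np.linalg.solve(A, b)
# x is approximately (1, 2, -1), matching the back-substitution result.
print(x)
```

`np.linalg.solve` uses an LU factorization with partial pivoting internally, so it follows essentially the same elimination strategy discussed in this chapter.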

2.9 Solve the system

    0.00001 x1 + x2 = 1.00001
          x1 + x2 = 2

(a) by Gaussian elimination, rounding all computations to four significant figures, and (b) by Gaussian
elimination with partial pivoting, rounding as in (a). (The exact solution is x1 = x2 = 1.)

(a) We write the system in matrix form, rounding 1.00001 to 1.000. Then we transform the augmented
matrix into row-echelon form using the algorithm of Chapter 1, in the following steps:

    [ 0.00001  1 | 1.000 ]
    [    1     1 |   2   ]

  → [ 1  100000 | 100000 ]
    [ 1     1   |    2   ]

    [ 1   100000 |  100000 ]
  → [ 0  -100000 | -100000 ]

    [ 1  100000 | 100000 ]
  → [ 0     1   |    1   ]

(Note that we round to -100000 twice in the next-to-last step.) The resulting augmented
matrix shows that the system is consistent. The equations associated with this matrix are

    x1 + 100000 x2 = 100000
                x2 = 1

which have the solution x1 = 0 and x2 = 1. However, substitution into the original equations
shows that this is not the solution to the original system.

(b) Transforming the augmented matrix into row-echelon form using partial pivoting yields

    [    1     1 |   2   ]    Rows 1 and 2 are interchanged
    [ 0.00001  1 | 1.000 ]    because row 2 has the largest
                              element in column 1, the current
                              work column.

  → [ 1  1 | 2 ]    Rounding to four significant
    [ 0  1 | 1 ]    figures.

The equations associated with this matrix are x1 + x2 = 2 and x2 = 1, which give x1 = 1 and
x2 = 1, the solution of the original system.
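The hand computation above can be simulated in code. The sketch below is illustrative (not from the text): it assumes the system 0.00001 x1 + x2 = 1.00001, x1 + x2 = 2, and rounds every arithmetic result to four significant figures:

```python
def rnd(x):
    """Round to four significant figures, mimicking the hand computation."""
    return float(f"{x:.4g}")

a11, a12, b1 = rnd(0.00001), rnd(1.0), rnd(1.00001)   # 1.00001 rounds to 1.000
a21, a22, b2 = 1.0, 1.0, 2.0

# (a) Naive elimination: the tiny element 0.00001 is used as the pivot.
m = rnd(a21 / a11)
x2 = rnd(rnd(b2 - rnd(m * b1)) / rnd(a22 - rnd(m * a12)))
x1 = rnd(rnd(b1 - rnd(a12 * x2)) / a11)
print(x1, x2)   # 0.0 1.0  -- wrong: (0, 1) does not satisfy x1 + x2 = 2

# (b) Partial pivoting: the rows are interchanged so the pivot is 1.
m = rnd(a11 / a21)
x2 = rnd(rnd(b1 - rnd(m * b2)) / rnd(a12 - rnd(m * a22)))
x1 = rnd(rnd(b2 - rnd(a22 * x2)) / a21)
print(x1, x2)   # 1.0 1.0  -- correct
```

The large multiplier m = 100000 in case (a) is what amplifies the rounding of 1.00001 into a completely wrong x1; pivoting keeps the multiplier small.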

2.10 Use partial pivoting to solve the system

      x1 + 2x2 +  3x3 =  18
     2x1 +  x2 -  4x3 = -30
    -5x1 + 8x2 + 17x3 =  96

In transforming the augmented matrix, we need to use Step 1.3' immediately, with R = 1 and C = 1. The
largest element in absolute value in column 1 is -5, appearing in row 3. We interchange the first and third
rows, and then continue the transformation to row-echelon form:

  → [ -5   8  17 |  96 ]
    [  2   1  -4 | -30 ]
    [  1   2   3 |  18 ]

  → [ 1  -1.6  -3.4 | -19.2 ]
    [ 2    1    -4  |  -30  ]
    [ 1    2     3  |   18  ]

    [ 1  -1.6  -3.4 | -19.2 ]
  → [ 0   4.2   2.8 |   8.4 ]
    [ 1    2     3  |   18  ]

    [ 1  -1.6  -3.4 | -19.2 ]
    [ 0   4.2   2.8 |   8.4 ]
  → [ 0   3.6   6.4 |  37.2 ]

We next apply Step 1.3' with R = 2 and C = 2. Considering only rows 2 and 3, we find that the largest
element in absolute value in column 2 is 4.2, so I = 2 and no row interchange is required. Continuing
with the Gaussian elimination, we calculate

    [ 1  -1.6  -3.4     | -19.2 ]
  → [ 0    1   0.666667 |    2  ]
    [ 0   3.6   6.4     |  37.2 ]

    [ 1  -1.6  -3.4     | -19.2 ]
    [ 0    1   0.666667 |    2  ]
  → [ 0    0    4       |   30  ]

    [ 1  -1.6  -3.4     | -19.2 ]
    [ 0    1   0.666667 |    2  ]
  → [ 0    0    1       |  7.5  ]

Back substitution then gives x3 = 7.5, x2 = 2 - 0.666667(7.5) = -3, and
x1 = -19.2 + 1.6(-3) + 3.4(7.5) = 1.5.

2.11 Solve the system of Problem 2.10 using scaled pivoting, in which each row is assigned a scale
factor equal to the largest element of the row in absolute value, and the pivot is the work-column
element whose quotient with its row's scale factor is largest in absolute value.

We add a column consisting of these scale factors to the augmented matrix for the system, and then
transform it to row-echelon form as follows:

    [  1  2   3 |  18 |  3 ]    The scale-factor quotients for the
    [  2  1  -4 | -30 |  4 ]    elements in column 1 are 1/3 =
    [ -5  8  17 |  96 | 17 ]    0.333, 2/4 = 0.500, and 5/17 =
                                0.294.

    [  2  1  -4 | -30 |  4 ]    The largest quotient is 0.500, so
  → [  1  2   3 |  18 |  3 ]    the pivot is 2, which appears in
    [ -5  8  17 |  96 | 17 ]    row 2. Since I = 2 and R = 1, the
                                first and second rows are
                                interchanged.

  → [  1  0.5  -2 | -15 |  4 ]    Divide the first row by the pivot,
    [  1   2    3 |  18 |  3 ]    and then add -1 times the first
  → [  0  1.5   5 |  33 |  3 ]    row to the second row and 5 times
  → [  0 10.5   7 |  21 | 17 ]    the first row to the third row.

    [ 1  0.5   -2 | -15 |  4 ]    Now the work row is 2, and the work
    [ 0  1.5    5 |  33 |  3 ]    column is 2. The quotients are
    [ 0 10.5    7 |  21 | 17 ]    1.5/3 = 0.500 and 10.5/17 = 0.618.
                                  The largest quotient is 0.618, so
                                  the pivot is 10.5, which appears in
                                  row 3. The second and third rows
                                  are interchanged.

    [ 1  0.5  -2        | -15 |  4 ]    After the interchange, the second
  → [ 0   1   0.666667  |   2 | 17 ]    row is divided by the pivot 10.5.
    [ 0  1.5   5        |  33 |  3 ]

    [ 1  0.5  -2        | -15 |  4 ]    Add -1.5 times the second row to
    [ 0   1   0.666667  |   2 | 17 ]    the third row, and then divide
  → [ 0   0   1         | 7.5 |  3 ]    the third row by 4.

As in Problem 2.10, back substitution now gives x3 = 7.5, x2 = -3, and x1 = 1.5.

2.12 Solve the system of Problem 2.10 using complete pivoting. We add the bookkeeping row 0 to the
augmented matrix of Problem 2.10; this row records the order of the unknowns, which changes whenever
columns are interchanged. Then, beginning with row 1, we transform the remaining rows into
row-echelon form.

    [  1  2   3 |  18 ]    R = 1 and C = 1. The largest
    [  2  1  -4 | -30 ]    element in absolute value in the
    [ -5  8  17 |  96 ]    coefficient submatrix is 17, in row
                           3 and column 3. We first
                           interchange rows 1 and 3, and
                           then columns 1 and 3.

  → [ 17  8  -5 |  96 ]
    [ -4  1   2 | -30 ]
    [  3  2   1 |  18 ]

  → [ 1  0.470588  -0.294118 | 5.64706 ]    Divide the first row by the
    [ -4     1          2    |  -30    ]    pivot 17.
    [  3     2          1    |   18    ]

    [ 1  0.470588  -0.294118 |  5.64706 ]    Add 4 times the first row to the
  → [ 0  2.88235    0.823528 | -7.41176 ]    second row, and -3 times the
  → [ 0  0.588236   1.882354 |  1.05882 ]    first row to the third row.

The work row and work column are now R = 2 and C = 2.

2.13 Gauss-Jordan elimination adds a step between Steps 2.3 and 2.4 of the algorithm for Gaussian
elimination. Once the augmented matrix has been reduced to row-echelon form, it is then
reduced still further. Beginning with the last pivot element and continuing sequentially
backward to the first, each pivot element is used to transform all other elements in its column
to zero.

Use Gauss-Jordan elimination to solve Problem 2.8.

The first two steps of the Gaussian elimination algorithm are used to reduce the augmented matrix
to row-echelon form as in Problems 1.13 and 2.8:

    [ 1  2  -1 |  6 ]
    [ 0  1   6 | -4 ]
    [ 0  0   1 | -1 ]

Then the matrix is reduced further, as follows:

    [ 1  2  -1 |  6 ]    Add -6 times the third row to the
  → [ 0  1   0 |  2 ]    second row.
    [ 0  0   1 | -1 ]

  → [ 1  2   0 |  5 ]    Add the third row to the first row.
    [ 0  1   0 |  2 ]
    [ 0  0   1 | -1 ]

  → [ 1  0   0 |  1 ]    Add -2 times the second row to
    [ 0  1   0 |  2 ]    the first row.
    [ 0  0   1 | -1 ]

The set of equations associated with this augmented matrix is x1 = 1, x2 = 2, and x3 = -1, which is the
solution set for the original system; no back substitution is required.

Supplementary Problems

2.15 Which of

    (a) x1 = x2 = x3 = 1            (b) x1 = 8, x2 = -1, x3 = 0
    (c) x1 = 12, x2 = -3, x3 = 2    (d) x1 = 2, x2 = -2, x3 = 9

are solutions to the system

     x1 + 3x2 +  x3 =  5
    2x1 +  x2 - 3x3 = 15
     x1 + 7x2 + 5x3 =  1

2.16 Write the augmented matrix for the system given in Problem 2.15.


    1.0001x1 + 2.0000x2 + 3.0000x3 + 4.0000x4 = 5
    1.0000x1 + 2.0001x2 + 3.0000x3 + 4.0000x4 = 6
    1.0000x1 + 2.0000x2 + 3.0001x3 + 4.0000x4 = 7
    1.0000x1 + 2.0000x2 + 3.0000x3 + 4.0001x4 = 8

What would be the result of solving this system by working with only four significant figures?

    0.00001x1 + x2 + 0.00001x3 = 0.00002
                x2             = 1
    0.00001x1 + x2 - 0.00001x3 = 0.00001

Use Gaussian elimination to determine values of k for which solutions exist to the given systems, and
then find the solutions.
Chapter 3
Square Matrices

DIAGONALS
A matrix is square if it has the same number of rows and columns. Its general form is then

    A = [ a11  a12  a13  ...  a1n ]
        [ a21  a22  a23  ...  a2n ]
        [ a31  a32  a33  ...  a3n ]
        [ ...                     ]
        [ an1  an2  an3  ...  ann ]

The elements a11, a22, a33, ..., ann lie on and form the diagonal, also called the main diagonal or
principal diagonal. The elements a12, a23, ..., a(n-1),n immediately above the diagonal elements form
the superdiagonal, and the elements a21, a32, ..., an,(n-1) immediately below the diagonal elements
form the subdiagonal.

A diagonal matrix is a square matrix in which all elements not on the main diagonal are equal to
zero. An identity matrix I is a diagonal matrix in which all the diagonal elements are equal to unity.
The 2 x 2 and 4 x 4 identity matrices are

    [ 1  0 ]        [ 1  0  0  0 ]
    [ 0  1 ]        [ 0  1  0  0 ]
                    [ 0  0  1  0 ]
                    [ 0  0  0  1 ]

Identity matrices play the same role in matrix arithmetic as the number 1 plays in real-number
arithmetic. In particular, for any matrix A, AI = A and IA = A provided, in each case, that I is of the
appropriate order for the indicated multiplication.
CHAP. 3] SQUARE MATRICES 25

where

    L = [ l11   0    0   ...   0  ]        U = [ 1  u12  u13  ...  u1n ]
        [ l21  l22   0   ...   0  ]            [ 0   1   u23  ...  u2n ]
        [ l31  l32  l33  ...   0  ]            [ 0   0    1   ...  u3n ]
        [ ...                     ]            [ ...                   ]
        [ ln1  ln2  ln3  ...  lnn ]            [ 0   0    0   ...   1  ]

so that L is lower triangular and U is upper triangular with unit diagonal.
Crout's reduction is an algorithm for calculating the elements of L and U. In this procedure, the
first column of L is determined first, then the first row of U, the second column of L, the second row
of U, the third column of L, the third row of U, and so on until all elements have been found. The
order of L and U is the same as that of A, which we here assume is n x n.

STEP 3.1: Initialization: If a11 = 0, stop; factorization is not possible. Otherwise, the first column
          of L is the first column of A; remaining elements of the first row of L are zero. The first
          row of U is the first row of A divided by l11 = a11; remaining elements of the first column
          of U are zero. Set a counter at N = 2.
STEP 3.2: For i = N, N + 1, ..., n, set Li' equal to that portion of the ith row of L that has
          already been determined. That is, Li' consists of the first N - 1 elements of the ith row
          of L.
STEP 3.3: For j = N, N + 1, ..., n, set Uj' equal to that portion of the jth column of U that has
          already been determined. That is, Uj' consists of the first N - 1 elements of the jth
          column of U.
STEP 3.4: Compute the Nth column of L. For each element of that column on or below the main
          diagonal, compute

              l_iN = a_iN - Li' · UN'    (i = N, N + 1, ..., n)

STEP 3.5: Compute the Nth row of U. Its diagonal element u_NN = 1; for each element of that row
          to the right of the main diagonal, compute

              u_Nj = (a_Nj - LN' · Uj') / l_NN    (j = N + 1, N + 2, ..., n)

          The remaining elements of the Nth row of L and of the Nth column of U are zero.
STEP 3.6: If N = n, stop; otherwise, increase N by 1 and return to Step 3.2.

Consider a system of simultaneous linear equations AX = B, which, in light of Eq. (3.1), may be rewritten as L(UX) = B. To obtain X, we first decompose A and then solve the system associated with

    LY = B        (3.2)

for Y. Then, once Y is known, we solve the system associated with

    UX = Y        (3.3)

for X. Both (3.2) and (3.3) are easy to solve: the first by forward substitution, and the second by backward substitution. (See Problem 3.7.)
When A is a square matrix, LU factorization and Gaussian elimination are equally efficient for solving a single set of equations. LU factorization is superior when the system AX = B must be solved repeatedly with different right sides, because the same LU factorization of A is used for all B. (See Problem 3.8.) A drawback with LU factorization is that the factorization does not exist when a pivot element is zero. However, this rarely occurs in practice, and the problem can usually be eliminated by reordering the equations. Gaussian elimination is applicable to all systems, and for that reason is often the preferred algorithm.
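Forward and backward substitution for (3.2) and (3.3) can be sketched as follows (a minimal illustration; the names are mine):

```python
def solve_lu(L, U, b):
    """Solve LUX = b: forward substitution on LY = b (Eq. 3.2), then
    backward substitution on UX = Y (Eq. 3.3), where U has a unit
    diagonal as produced by Crout's reduction."""
    n = len(b)
    y = [0] * n
    for i in range(n):                      # top to bottom
        y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    x = [0] * n
    for i in reversed(range(n)):            # bottom to top; u_ii = 1
        x[i] = y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))
    return x

# With L = [[1, 0], [3, -2]] and U = [[1, 2], [0, 1]] (so LU = [[1, 2], [3, 4]]),
# solve_lu(L, U, [5, 11]) returns [1.0, 2.0].
```

Because the same L and U serve every right-hand side b, this is exactly where the savings over repeated elimination come from.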

Solved Problems

3.3 Find a matrix P such that PA is in row-echelon form when

    A = [ 1   2  -1 ]
        [ 3   8   9 ]
        [ 2  -1   2 ]

The matrix A consists of the first three columns of the matrix considered in Problem 1.13, so the same sequence of elementary row operations utilized in that problem will convert this matrix to row-echelon form. The elementary matrices corresponding to those operations are, sequentially,

    E1 = [  1  0  0 ]     E2 = [  1  0  0 ]     E3 = [ 1   0   0 ]
         [ -3  1  0 ]          [  0  1  0 ]          [ 0  1/2  0 ]
         [  0  0  1 ]          [ -2  0  1 ]          [ 0   0   1 ]

    E4 = [ 1  0  0 ]     E5 = [ 1  0   0   ]
         [ 0  1  0 ]          [ 0  1   0   ]
         [ 0  5  1 ]          [ 0  0  1/34 ]

Then

    P = E5 E4 E3 E2 E1 = [    1       0     0   ]
                         [  -3/2     1/2    0   ]
                         [ -19/68   5/68  2/68  ]

and

    PA = [    1       0     0  ] [ 1   2  -1 ]   [ 1  2  -1 ]
         [  -3/2     1/2    0  ] [ 3   8   9 ] = [ 0  1   6 ]
         [ -19/68   5/68  2/68 ] [ 2  -1   2 ]   [ 0  0   1 ]

STEP 3.6: To this point we have determined the first two columns of L and the first two rows of U. Since N = 2 and n = 4, we increase N by 1 to N = 3, and Steps 3.2 through 3.5 are repeated for N = 3 and then N = 4 until every element of L and U has been found. [The numerical displays of this example are illegible in this copy.]
CHAP. 3] SQUARE MATRICES 29

3.6 Factor the following matrix into an upper triangular matrix and a lower triangular matrix:

    A = [  2  2   1  0 ]
        [  3  0  -1  1 ]
        [  0  1   0  5 ]
        [ -1  1   0  0 ]

Using Crout's reduction, we have

STEP 3.1: The first column of L is the first column of A, namely [2, 3, 0, -1]ᵀ, and the first row of U is the first row of A divided by l11 = 2, namely [1, 1, 1/2, 0]. Set N = 2.

STEP 3.2: L2' = [3], L3' = [0], and L4' = [-1].

STEP 3.3: U2' = [1], U3' = [1/2], and U4' = [0].

STEP 3.4: l22 = a22 - L2' · U2' = 0 - [3] · [1] = 0 - 3 = -3
          l32 = a32 - L3' · U2' = 1 - [0] · [1] = 1 - 0 = 1
          l42 = a42 - L4' · U2' = 1 - [-1] · [1] = 1 - (-1) = 2

STEP 3.5: u22 = 1
          u23 = (a23 - L2' · U3') / l22 = (-1 - [3] · [1/2]) / (-3) = 5/6
          u24 = (a24 - L2' · U4') / l22 = (1 - [3] · [0]) / (-3) = -1/3

STEP 3.6: Since N = 2 and n = 4, we increase N by 1 to N = 3. Repeating Steps 3.2 through 3.5 with N = 3 gives l33 = -5/6, l43 = -7/6, and u34 = -32/5, so that to this point we have

    L = [  2   0    0    .  ]      U = [ 1  1  1/2    0   ]
        [  3  -3    0    .  ]          [ 0  1  5/6  -1/3  ]
        [  0   1  -5/6   .  ]          [ 0  0   1   -32/5 ]
        [ -1   2  -7/6   .  ]          [ 0  0   0     1   ]

Since N = 3 and n = 4, we increase N by 1 to N = 4.

STEP 3.2: L4' = [-1, 2, -7/6].

STEP 3.3: U4' = [0, -1/3, -32/5].

STEP 3.4: l44 = a44 - L4' · U4' = 0 - [(-1)(0) + 2(-1/3) + (-7/6)(-32/5)] = -34/5

STEP 3.5: u44 = 1. Since N = 4 = n, the factorization is done. We have A = LU, with

    L = [  2   0    0     0   ]      U = [ 1  1  1/2    0   ]
        [  3  -3    0     0   ]          [ 0  1  5/6  -1/3  ]
        [  0   1  -5/6    0   ]          [ 0  0   1   -32/5 ]
        [ -1   2  -7/6  -34/5 ]          [ 0  0   0     1   ]

3.8 Solve the system of equations given in Problem 3.7 if the right side of the second equation is changed from -11 to 11.

The coefficient matrix A is unchanged, so both L and U are as they were. From (3.2),

    2y1                             = 10
    3y1 - 3y2                       = 11
          y2 - (5/6)y3              =  5
    -y1 + 2y2 - (7/6)y3 - (34/5)y4  = 14

Solving this system sequentially from top to bottom, we obtain y1 = 5, y2 = 4/3, y3 = -22/5, and y4 = -28/17. With these values and U as given in Problem 3.6, (3.3) becomes

    x1 + x2 + (1/2)x3             =  5
         x2 + (5/6)x3 - (1/3)x4   =  4/3
                   x3 - (32/5)x4  = -22/5
                              x4  = -28/17

Solving this system sequentially from bottom to top, we obtain the solution to the system of interest: x4 = -28/17, x3 = -254/17, x2 = 225/17, x1 = -13/17.
3.9 Formally derive Crout's algorithm for 3 x 3 matrices.

For an arbitrary 3 x 3 matrix A = [a_ij], we seek a lower triangular matrix L and a unit upper triangular matrix U, of the forms shown in Eq. (3.1), such that A = LU.


A square matrix A is said to be nilpotent if Aᵖ = 0 for some positive integer p. If p is the least positive integer for which Aᵖ = 0, then A is said to be nilpotent of index p. Show that

    A = [ 1  5  -2 ]
        [ 1  2  -1 ]
        [ 3  6  -3 ]

is nilpotent of index 3.

That is indeed the case, because

    A² = AA = [ 0  3  -1 ]
              [ 0  3  -1 ]
              [ 0  9  -3 ]

is not the zero matrix, while

    A³ = A²A = [ 0  0  0 ]
               [ 0  0  0 ]
               [ 0  0  0 ]
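The nilpotency check is mechanical to automate. In the sketch below, the test matrix's entries are taken as best reconstructed from this copy, so treat them as illustrative:

```python
def mat_mul(A, B):
    """Product of two square matrices stored as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def nilpotent_index(A, max_power=10):
    """Return the least p <= max_power with A**p = 0, else None."""
    n = len(A)
    zero = [[0] * n for _ in range(n)]
    P = A
    for p in range(1, max_power + 1):
        if P == zero:
            return p            # A**p is the zero matrix
        P = mat_mul(P, A)       # advance to the next power
    return None
```

For A = [[1, 5, -2], [1, 2, -1], [3, 6, -3]], the square is nonzero while the cube vanishes, so `nilpotent_index(A)` returns 3.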

In Problems 3.23 through 3.29, use LU factorization to solve for the unknowns.

9.23. Qxot Axo tty =: 6 3.24 x, +2x,-2x,+3x,= 2


3x, Tits> sym eh, x, + x, +2x,=-4
a + 5x,=-9 3553x549 4x,+ #2616
—¥,F x, au 2x,+ X,+ *,-2x,= 9
(Hint: See Problems 3.7 and 3.8.) (Hint: See Problem 3.4.)

3.25 Repeat Problem 3.24, but with B = [-3, -1, 0, 4]ᵀ.

¥, + th2+ ox, = 4
4x, +5x,+6x,=16 | é
Chapter 4
Matrix Inversion

THE INVERSE
Matrix B is the inverse of a square matrix A if

    AB = BA = I        (4.1)

For both products to be defined simultaneously, A and B must be square matrices of the same order.

Example 4.1 The matrix B = [ -2, 1; 3/2, -1/2 ] is the inverse of A = [ 1, 2; 3, 4 ], because

    [ 1  2 ] [ -2     1  ]   [ 1  0 ]            [ -2     1  ] [ 1  2 ]   [ 1  0 ]
    [ 3  4 ] [ 3/2  -1/2 ] = [ 0  1 ]    and     [ 3/2  -1/2 ] [ 3  4 ] = [ 0  1 ]

A square matrix is said to be singular if it does not have an inverse; a matrix that has an inverse is called nonsingular or invertible. The inverse of A, when it exists, is denoted A⁻¹.

STEP 4.4: Beginning with the last column of C and progressing backward iteratively through the second column, use elementary row operation E3 to transform all elements above the diagonal of C to zero. Apply each operation, however, to the entire matrix [C | D]. Denote the result as [I | B]. The matrix B is the inverse of the original matrix A.

(See Problems 4.5 through 4.7.) If exact arithmetic is not used in Step 4.2, then a pivoting strategy (see Chapter 2) should be employed. No pivoting strategy is used in Step 4.4; the pivot is always one of the unity elements on the diagonal of C. Interchanging any rows after Step 4.2 has been completed will undo the work of that step and, therefore, is not allowed.
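Steps 4.1 through 4.4 amount to row-reducing the partitioned matrix [A | I] until the left block becomes I. The sketch below compresses the forward and backward passes into a single loop and uses exact rational arithmetic; it illustrates the idea rather than reproducing the book's exact two-pass procedure:

```python
from fractions import Fraction

def invert(A):
    """Row-reduce [A | I]; when the left block reaches I, the right
    block is A's inverse. Raises ValueError when A is singular."""
    n = len(A)
    M = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for c in range(n):
        # find a row at or below row c with a nonzero pivot in column c
        p = next((r for r in range(c, n) if M[r][c] != 0), None)
        if p is None:
            raise ValueError("matrix is singular")
        M[c], M[p] = M[p], M[c]
        M[c] = [x / M[c][c] for x in M[c]]          # make the pivot 1
        for r in range(n):
            if r != c and M[r][c] != 0:             # clear the rest of column c
                M[r] = [a - M[r][c] * b for a, b in zip(M[r], M[c])]
    return [row[n:] for row in M]
```

For example, `invert([[1, 2], [3, 4]])` returns [[-2, 1], [3/2, -1/2]].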

SIMULTANEOUS LINEAR EQUATIONS


A set of linear equations in the matrix form

    AX = B        (4.2)

can be solved easily if A is invertible and its inverse is known. Multiplying each side of this matrix equation by A⁻¹ yields A⁻¹AX = A⁻¹B, which simplifies to

    X = A⁻¹B        (4.3)
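As a small illustration of (4.3), using a worked example of my own (the inverse of [[1, 2], [3, 4]] is [[-2, 1], [3/2, -1/2]]):

```python
def mat_vec(A, x):
    """Matrix-vector product for a matrix stored as a list of rows."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

# Solve AX = B for A = [[1, 2], [3, 4]] and B = [5, 11] via X = A^(-1) B:
A_inv = [[-2, 1], [1.5, -0.5]]
x = mat_vec(A_inv, [5, 11])     # x = [1, 2.0], i.e., x1 = 1, x2 = 2
```

The point of (4.3) is that once A⁻¹ is in hand, each new right side B costs only one matrix-vector product.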

B is not square, so it has no inverse. In particular, the product BG is not defined.


For C, matrix multiplication gives CG = I and GC = I, so G is the inverse of C.

G and D do not have the same order, so they cannot be inverses of one another.

4.2 Determine the inverses of the following elementary matrices. [The matrices displayed here are illegible in this copy.] In general: an elementary matrix of the first kind (an identity matrix with two rows interchanged) is its own inverse; the inverse of an elementary matrix of the second kind (an identity matrix with one row multiplied by a nonzero scalar k) is obtained by replacing k with 1/k; and the inverse of an elementary matrix of the third kind (an identity matrix with k times one row added to another row) is obtained by replacing k with -k.

Thus,

    A⁻¹ = [ 1/2  -1/2  -1/6 ]
          [  0     1   -2/3 ]
          [  0     0    1/3 ]

Determine the inverses of two lower triangular matrices A and B, where A has a zero on its main diagonal and every element on the main diagonal of B is nonzero.

Both matrices are lower triangular. Since A has a zero element on its main diagonal, it does not have an inverse. In contrast, all the elements on the main diagonal of B are nonzero, so B has an inverse, which itself must be lower triangular. Since BB⁻¹ = I, we may write

The left side of this partitioned matrix is in row-echelon form. Since it contains no zero rows, the original matrix has an inverse. Applying Step 4.4 to the second column (adding -0.6 times the second row to the first row) completes the reduction to [I | A⁻¹], from which A⁻¹ can be read directly.

Determine the inverse of

    A = [ 0   1   1 ]
        [ 5   1  -1 ]
        [ 2  -3  -3 ]

We apply Steps 4.1 through 4.4 to the partitioned matrix [A | I]. Interchanging the first and second rows, multiplying the new first row by 1/5, and then adding -2 times the first row to the third row gives

    [ 1   1/5   -1/5  :  0   1/5  0 ]
    [ 0    1      1   :  1    0   0 ]
    [ 0  -17/5 -13/5  :  0  -2/5  1 ]
    [ 1  1/5  -1/5 :   0    1/5  0 ]     Adding 17/5 times the second row
    [ 0   1     1  :   1     0   0 ]     to the third row
    [ 0   0    4/5 : 17/5  -2/5  1 ]

    [ 1  1/5  -1/5 :   0    1/5   0  ]   Multiplying the third row by 5/4
    [ 0   1     1  :   1     0    0  ]
    [ 0   0     1  : 17/4  -2/4  5/4 ]

The left side of this partitioned matrix is in row-echelon form and contains no zero rows; thus, the original matrix has an inverse. Applying Step 4.4, we obtain

    [ 1  1/5  0 :  17/20  1/10  1/4 ]    Adding -1 times the third row to
    [ 0   1   0 : -13/4   2/4  -5/4 ]    the second row, and 1/5 times the
    [ 0   0   1 :  17/4  -2/4   5/4 ]    third row to the first row

    [ 1  0  0 :   6/4    0    2/4 ]      Adding -1/5 times the second row
    [ 0  1  0 : -13/4   2/4  -5/4 ]      to the first row
    [ 0  0  1 :  17/4  -2/4   5/4 ]

Therefore,

    A⁻¹ = (1/4) [   6   0   2 ]
                [ -13   2  -5 ]
                [  17  -2   5 ]
4.8 Solve the system

          x2 +  x3 =  2
    5x1 +  x2 -  x3 =  3
    2x1 - 3x2 - 3x3 = -6

The coefficient matrix of this system is the matrix A whose inverse was found in Problem 4.7, so

    X = A⁻¹B = (1/4) [   6   0   2 ] [  2 ]   [   0  ]
                     [ -13   2  -5 ] [  3 ] = [  5/2 ]
                     [  17  -2   5 ] [ -6 ]   [ -1/2 ]

The solution is x1 = 0, x2 = 5/2, x3 = -1/2.

4.10 Prove that the inverse is unique when it exists.

Assume that A has two inverses, B and C. Then AB = I and CA = I. It follows that

    C = CI = C(AB) = (CA)B = IB = B

4.11 Prove that (A⁻¹)⁻¹ = A when A is nonsingular.

(A⁻¹)⁻¹ is, by definition, the inverse of A⁻¹. But A also is the inverse of A⁻¹. These inverses must be equal as a consequence of Problem 4.10.

4.12 Prove that (AB)⁻¹ = B⁻¹A⁻¹ if both A and B are invertible.

(AB)⁻¹ is, by definition, the inverse of AB. Furthermore,

    (B⁻¹A⁻¹)(AB) = B⁻¹(A⁻¹A)B = B⁻¹IB = B⁻¹B = I
    (AB)(B⁻¹A⁻¹) = A(BB⁻¹)A⁻¹ = AIA⁻¹ = AA⁻¹ = I

so B⁻¹A⁻¹ is also an inverse of AB. These inverses must be equal as a consequence of Problem 4.10.


Each E_i is an elementary matrix of either the second or third kind, so each is lower triangular and invertible. It follows from Problem 4.12 that if

    P = E_k E_{k-1} ··· E_2 E_1        (2)

then

    P⁻¹ = (E_k E_{k-1} ··· E_2 E_1)⁻¹ = E_1⁻¹ E_2⁻¹ ··· E_{k-1}⁻¹ E_k⁻¹

P⁻¹ is thus the product of lower triangular matrices and is itself lower triangular (Problem 3.17). From (1), PA = U, whereupon

    A = IA = (P⁻¹P)A = P⁻¹(PA) = P⁻¹U

Supplementary Problems

In Problems 4.15 through 4.26 find the inverse of the given matrix if it exists.
Chapter 5
Determinants

EXPANSION BY COFACTORS
The determinant of a square matrix A, denoted det A or |A|, is a scalar. If the matrix is written out as an array of elements, then its determinant is indicated by replacing the brackets with vertical lines. For 1 x 1 matrices,

    det A = |a11| = a11

For 2 x 2 matrices,

    det A = | a11  a12 | = a11 a22 - a12 a21
            | a21  a22 |

Determinants of higher-order matrices are obtained by expansion utilizing minors and cofactors, as follows. The minor M_ij of an n x n matrix A is the determinant of the (n - 1) x (n - 1) submatrix that remains after the ith row and jth column are deleted from A. The cofactor A_ij is the signed minor

    A_ij = (-1)^(i+j) M_ij

The determinant may then be computed by expanding along any single row i (or, analogously, down any single column):

    det A = a_i1 A_i1 + a_i2 A_i2 + ··· + a_in A_in
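Expansion by cofactors translates directly into a recursive routine. This is an illustrative sketch expanding along the first row (the name is mine):

```python
def det(A):
    """Determinant by cofactor expansion along the first row:
    det A = sum over j of a_1j * (-1)**(1+j) * M_1j."""
    n = len(A)
    if n == 1:
        return A[0][0]              # det of a 1 x 1 matrix is its element
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in A[1:]]   # delete row 1, column j
        total += (-1) ** j * A[0][j] * det(minor)
    return total
```

`det([[1, 2], [3, 4]])` gives 1(4) - 2(3) = -2, matching the 2 x 2 formula. The cost grows like n!, which is what motivates the pivotal condensation method used later in the chapter.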


Property 5.3: If B is formed from a square matrix A by interchanging two rows or two columns of
A, then det A= —detB.
Property 5.4: If B is formed from a square matrix A by multiplying every element of a row or
column of A by a scalar k, then det A= (1/k) det B.
Property 5.5: If B is formed from a square matrix A by adding a constant times one row (or
column) of A to another row (or column) of A, then det A = det B.
Property 5.6: If one row or one column of a square matrix is zero, its determinant is zero.
Property 5.7: det Aᵀ = det A, provided A is a square matrix.

Property 5.8: If two rows of a square matrix are equal, its determinant is zero.
Property 5.9: A matrix A (not necessarily square) has rank k if and only if it possesses at least one
k X k submatrix with a nonzero determinant while all square submatrices of larger
order have zero determinants.
Property 5.10: If A has an inverse, then det A⁻¹ = 1/det A.

DETERMINANTS OF PARTITIONED MATRICES



INVERSION BY DETERMINANTS
The cofactor matrix Aᶜ associated with a square matrix A is obtained by replacing each element of A with its cofactor. If det A ≠ 0, then

    A⁻¹ = (1/det A)(Aᶜ)ᵀ        (5.3)

If det A is zero, then A does not have an inverse. (See Problems 5.9 through 5.11 and Problems 5.18 through 5.20.) The method given in Chapter 4 for inversion is almost always quicker than using (5.3), with 2 x 2 and 3 x 3 matrices being the exceptions.
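Equation (5.3) can be sketched as follows; the block is self-contained, with a small cofactor-expansion determinant repeated so it runs on its own, and exact rationals keep the entries clean (the names are mine):

```python
from fractions import Fraction

def det(A):
    """Determinant by cofactor expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det([r[:j] + r[j + 1:] for r in A[1:]])
               for j in range(len(A)))

def inverse_by_cofactors(A):
    """A^(-1) = (1/det A) (A^c)^T, per Eq. (5.3): replace each element
    by its cofactor, transpose, and divide by the determinant."""
    d = Fraction(det(A))
    if d == 0:
        raise ValueError("det A = 0, so A has no inverse")
    n = len(A)
    cof = [[(-1) ** (i + j) * det([r[:j] + r[j + 1:]
                                   for k, r in enumerate(A) if k != i])
            for j in range(n)] for i in range(n)]
    # transpose the cofactor matrix and scale by 1/det A
    return [[cof[j][i] / d for j in range(n)] for i in range(n)]
```

For A = [[3, -1], [5, 4]], det A = 17 and the routine returns [[4/17, 1/17], [-5/17, 3/17]].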


(c) Expanding along the second column gives us

    det A = a12 A12 + a22 A22 + a32 A32

          = 3(-1)^(1+2) | -5  6 | + 5(-1)^(2+2) | 2  4 | + 8(-1)^(3+2) |  2  4 |
                        |  7  9 |               | 7  9 |               | -5  6 |

          = 3(-1)[(-5)(9) - 6(7)] + 5(1)[2(9) - 4(7)] + 8(-1)[2(6) - 4(-5)]

          = 3(-1)(-87) + 5(1)(-10) + 8(-1)(32) = -45

5.3 Calculate the determinant of

    B = [ -3   4  0 ]
        [ -2   7  6 ]
        [  5  -8  0 ]

by expanding along (a) the second row and (b) the third column.

(a) Expanding along the second row gives

    det B = -2(-1)^(2+1) |  4  0 | + 7(-1)^(2+2) | -3  0 | + 6(-1)^(2+3) | -3   4 |
                         | -8  0 |               |  5  0 |               |  5  -8 |

          = -2(-1)(0) + 7(1)(0) + 6(-1)[(-3)(-8) - 4(5)] = 6(-1)(4) = -24

(b) Expanding along the third column,

    det B = 0·B13 + 6B23 + 0·B33 = 6B23 = 6(-1)^(2+3) | -3   4 | = 6(-1)(4) = -24
                                                      |  5  -8 |

Verify that det AB = det A det B (Property 5.1) for the matrices given in Problems 5.2 and 5.3.

From the results of those problems, we know that det A det B = (-45)(-24) = 1080. Now

    AB = [  2  3  4 ] [ -3   4  0 ]   [  8   -3  18 ]
         [ -5  5  6 ] [ -2   7  6 ] = [ 35  -33  30 ]
         [  7  8  9 ] [  5  -8  0 ]   [  8   12  48 ]

To calculate det AB, we expand along the first row, finding that

    det AB = 8(-1)^(1+1) | -33  30 | + (-3)(-1)^(1+2) | 35  30 | + 18(-1)^(1+3) | 35  -33 |
                         |  12  48 |                  |  8  48 |                |  8   12 |

           = 8(1)(-1944) + (-3)(-1)(1440) + 18(1)(684) = 1080

Use pivotal condensation to evaluate the determinant of a 4 x 4 matrix A. [The entries of A as printed here are illegible in this copy.]

We initialize D = 1 and use elementary row operations to reduce A to row-echelon form:

= 3 4 Adding 1 times the first row to the


11-14 third row: D remains 1
wr 5G
=3 6
| 4 Adding ~—6 times the first row to
11 —-14 the fourth row: D remains 1
gt pee Ts
accre
ooor iS -18
os 4 Multiplying the second row by
-11/6 14/6 ~1/6: D<— D(—6) = 1(-6)
=] 10 = -6
iS-> ~18
ek 4 Adding —5 times the second row
-11/6 14/6 to the third row: D remains —6
13/6 —5/3
15 -18
= 4 Adding 7 times the second row to
-11/6 14/6] the fourth row: D remains —6
13/6. =5/3 |
13/6 +. =5/3 | Bre. Jo 33
-3 4 Multiplying the third row by 6/13:
-11/6 14/6 | D+<#D(13/6)= -6(13/6)=-13 | |
Ee omOTS:
IG + SIS D>» |
«3 Adding —13/6 times the third row
-11/6 14/6 | to the fourth row: D remains —13
1 is : an ee e e

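The bookkeeping in the reduction above, tracking a factor D while bringing the matrix to row-echelon form with unit pivots, can be sketched as follows (the function name and the exact rational arithmetic are choices made here):

```python
from fractions import Fraction

def det_by_reduction(A):
    """Determinant via row reduction to row-echelon form with unit pivots.
    D starts at 1; interchanging two rows negates D, and dividing a row by
    its pivot multiplies D by that pivot. Adding a multiple of one row to
    another leaves D unchanged (Properties 5.3 to 5.5)."""
    M = [[Fraction(x) for x in row] for row in A]
    n, D = len(A), Fraction(1)
    for c in range(n):
        p = next((r for r in range(c, n) if M[r][c] != 0), None)
        if p is None:
            return Fraction(0)          # no pivot available: det A = 0
        if p != c:
            M[c], M[p] = M[p], M[c]     # row interchange
            D = -D
        D *= M[c][c]                    # record the pivot before scaling
        M[c] = [x / M[c][c] for x in M[c]]
        for r in range(c + 1, n):
            M[r] = [a - M[r][c] * b for a, b in zip(M[r], M[c])]
    return D
```

Since the final echelon matrix has a unit diagonal, its determinant is 1 and D itself equals det A.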

5.9 Calculate the inverse of

    A = [ 3  -1 ]
        [ 5   4 ]

We shall use Eq. (5.3). Since the determinant of a 1 x 1 matrix is the element itself, we have

    A11 = (-1)^(1+1) det [4] = (1)(4) = 4
    A12 = (-1)^(1+2) det [5] = (-1)(5) = -5
    A21 = (-1)^(2+1) det [-1] = (-1)(-1) = 1
    A22 = (-1)^(2+2) det [3] = (1)(3) = 3

The determinant of A is 3(4) - (-1)(5) = 17, so

    A⁻¹ = (1/17) [  4  1 ]
                 [ -5  3 ]
5.10 Use determinants to compute the inverse of

    A = [  2  3  4 ]
        [ -5  5  6 ]
        [  7  8  9 ]

This is the matrix of Problem 5.2, for which det A = -45. Replacing each element by its cofactor gives the cofactor matrix

    Aᶜ = [ -3   87  -75 ]
         [  5  -10    5 ]
         [ -2  -32   25 ]

so, by Eq. (5.3),

    A⁻¹ = (1/det A)(Aᶜ)ᵀ = -(1/45) [  -3    5   -2 ]
                                   [  87  -10  -32 ]
                                   [ -75    5   25 ]

5.13 Prove that the determinant of an elementary matrix of the first kind is -1.

An elementary matrix E of the first kind is an identity matrix with two rows interchanged. The proof is inductive on the order of E. If E is 2 x 2, then

    E = [ 0  1 ]
        [ 1  0 ]

and det E = -1. Now assume the proposition is true for all elementary matrices of the first kind with order (k - 1) x (k - 1), and consider an elementary matrix E of order k x k. Find the first row of E that was not interchanged, and denote it as row m. Expanding by cofactors along row m yields

    det E = a_m1 A_m1 + a_m2 A_m2 + ··· + a_mk A_mk = A_mm

because a_mj = 0 for all j ≠ m, and a_mm = 1. Now

    A_mm = (-1)^(m+m) M_mm = M_mm

But M_mm is the determinant of an elementary matrix of the first kind having order (k - 1) x (k - 1), so by induction it is equal to -1. Thus, det E = A_mm = M_mm = -1.

5.14 Prove Property 5.3.

If B is obtained from A by interchanging two rows of A, then B = EA, where E is an elementary matrix of the first kind. Using Property 5.1 and the result of Problem 5.13, we obtain

    det B = det EA = det E det A = -det A

from which Property 5.3 immediately follows.

5.15 Prove Property 5.4.

Suppose that B is obtained from the n x n matrix A by multiplying the ith row of A by the scalar k, so that b_ij = k a_ij for each j while every other row of B coincides with the corresponding row of A. The cofactors B_ij and A_ij are then identical, since they do not involve the ith row. A cofactor expansion of det B along its ith row gives

    det B = k a_i1 A_i1 + k a_i2 A_i2 + ··· + k a_in A_in = k det A

from which det A = (1/k) det B, which is Property 5.4.

Prove that if the determinant of a matrix A is zero, then the matrix does not have an inverse.

Assume that A does have an inverse. Then

    1 = det I = det (A⁻¹A) = det A⁻¹ det A = det A⁻¹ (0) = 0

which is absurd. Thus, A cannot have an inverse.

Prove that if each element of the ith row of an n x n matrix is multiplied by the cofactor of the corresponding element of the kth row (i, k = 1, 2, ..., n; i ≠ k), then the sum of these n products is zero.

For any n x n matrix A, construct a new matrix B by replacing the kth row of A with its ith row (i, k = 1, 2, ..., n; i ≠ k). The ith and kth rows of B are identical, for both are the ith row of A; it follows from Property 5.8 that det B = 0. Thus, evaluating det B via expansion by cofactors along its kth row, we may write

    0 = det B = sum_{j=1}^{n} b_kj B_kj = sum_{j=1}^{n} a_ij B_kj        (1)

since the kth row of B consists of the elements a_i1, a_i2, ..., a_in. Moreover, B_kj = A_kj for each j, because B and A differ only in their kth rows, and those rows are deleted in forming the minors that define these cofactors. Substituting into (1) yields

    a_i1 A_k1 + a_i2 A_k2 + ··· + a_in A_kn = 0

which is the desired result.

Supplementary Problems

In Problems 5.21 through 5.26, let

0)
2
1
2 1 3 1-253 Suid: 1
D=|4 2 =~-1 E=|3 61 F=|
69 2 #=5
» ie ie 1
5.21 Find (a) det A and (b) det B, and (c) show that det AB = det A det B.

5.22 Find (a) det C and (b) det D, and (c) show that det CD = detC det D.

5.23 Find (a) det E and (b) det F.

5.24 Use determinants to find (a) A⁻¹ and (b) B⁻¹.

5.25 Use determinants to find D⁻¹.
Chapter 6

Vectors

DIMENSION
A vector is a matrix having either one row or one column. The number of elements in a row
vector or a column vector is its dimension, and the elements are called components. The transpose of
a row vector is a column vector, and vice versa.

LINEAR DEPENDENCE AND INDEPENDENCE

A set of m-dimensional vectors {V1, V2, ..., Vn} of the same type (row or column) is linearly dependent if there exist constants c1, c2, ..., cn, not all zero, such that

    c1V1 + c2V2 + ··· + cnVn = 0        (6.1)
Example 6.1 The set of five-dimensional vectors

    {[1, 0, -2, 0, 0]ᵀ, [2, 0, 3, 0, 0]ᵀ, [0, 2, 0, 0, 1]ᵀ, [5, 0, 4, 0, 0]ᵀ}

is linearly dependent, because, for instance,

    (-1)[1, 0, -2, 0, 0]ᵀ + (-2)[2, 0, 3, 0, 0]ᵀ + 0[0, 2, 0, 0, 1]ᵀ + 1[5, 0, 4, 0, 0]ᵀ = 0

A set of vectors that is not linearly dependent is linearly independent. A vector V is a linear combination of the vectors V1, V2, ..., Vn if there exist constants d1, d2, ..., dn such that

    V = d1V1 + d2V2 + ··· + dnVn        (6.2)
Example 6.2 The vector [-3, 4, -1, 0, 2]ᵀ is a linear combination of the vectors of Example 6.1, because

    [-3, 4, -1, 0, 2]ᵀ = 0[1, 0, -2, 0, 0]ᵀ + 1[2, 0, 3, 0, 0]ᵀ + 2[0, 2, 0, 0, 1]ᵀ + (-1)[5, 0, 4, 0, 0]ᵀ

Equation (6.2) represents a set of simultaneous linear equations in the unknowns d1, d2, ..., dn. The algorithms given in Chapter 2 may be used to determine whether or not the di (i = 1, 2, ..., n) exist and what they are. (See Problems 6.4 and 6.5.)
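The Chapter 2 machinery reduces in practice to a rank computation: stack the vectors as rows, row-reduce, and count the nonzero rows. A floating-point sketch (the names and the tolerance are mine):

```python
def rank(vectors):
    """Row-reduce the matrix whose rows are the given vectors and count
    the nonzero rows; the set is linearly independent exactly when the
    rank equals the number of vectors."""
    M = [[float(x) for x in v] for v in vectors]
    r = 0
    for c in range(len(M[0])):
        p = next((i for i in range(r, len(M)) if abs(M[i][c]) > 1e-12), None)
        if p is None:
            continue                       # no pivot in this column
        M[r], M[p] = M[p], M[r]
        M[r] = [x / M[r][c] for x in M[r]]
        for i in range(len(M)):
            if i != r:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        r += 1
    return r
```

For example, `rank([[1, 2, -1, 6], [3, 8, 9, 10], [2, -1, 2, -2]])` is 3 (an independent set), while `rank([[3, 2, 1, -4, 1], [2, 3, 0, -1, -1], [1, -6, 3, -8, 7]])` is 2.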
PROPERTIES OF LINEARLY DEPENDENT VECTORS

Property 6.1: Every set of m + 1 or more m-dimensional vectors of the same type (either row or column) is linearly dependent.
Property 6.2: An ordered set of nonzero vectors is linearly dependent if and only if one vector can be written as a linear combination of the vectors that precede it.
Property 6.3: If a set of vectors is linearly independent, then any subset of those vectors is also linearly independent.
Property 6.4: If a set of vectors is linearly dependent, then any larger set containing this set is also linearly dependent.
Property 6.5: Every set of vectors of the same dimension and type that contains the zero vector is linearly dependent.

Solved Problems

6.1 Determine whether the set {[1, 1, 3], [2, -1, 3], [0, 1, 1], [4, 4, 3]} is linearly independent.

Since the set contains more vectors (four) than the dimension of its member vectors (three), the vectors are linearly dependent by Property 6.1. They are thus not linearly independent.

6.2 Determine whether the set {[1, 2, -1, 6], [3, 8, 9, 10], [2, -1, 2, -2]} is linearly independent.

Using Steps 6.1 through 6.3, we first construct

    V = [ 1   2  -1   6 ]
        [ 3   8   9  10 ]
        [ 2  -1   2  -2 ]

Matrix V was transformed in Problem 1.13 into the row-echelon form

    [ 1  2  -1   6 ]
    [ 0  1   6  -4 ]
    [ 0  0   1  -1 ]

By inspection, the rank of V is 3, which equals the number of vectors in the given set; hence the given set of vectors is linearly independent.

6.3 Determine whether the set {[3, 2, 1, -4, 1]ᵀ, [2, 3, 0, -1, -1]ᵀ, [1, -6, 3, -8, 7]ᵀ} is linearly independent.

Since these are column vectors, we construct the matrix V whose rows are the transposes of the given vectors:

    V = [ 3   2  1  -4   1 ]
        [ 2   3  0  -1  -1 ]
        [ 1  -6  3  -8   7 ]

Reducing V to row-echelon form leaves a zero third row, so the rank of V is 2, which is less than the number of vectors in the given set; hence the given set of vectors is linearly dependent.

    [5, 1, 8] = d1[2, 3, 5] + d2[1, 6, 7] + d3[0, 1, 1]
              = [2d1 + d2, 3d1 + 6d2 + d3, 5d1 + 7d2 + d3]

which is equivalent to the system

    2d1 +  d2       = 5
    3d1 + 6d2 + d3  = 1
    5d1 + 7d2 + d3  = 8

This system was shown in Problem 2.5 to be inconsistent, so [5, 1, 8] is not a linear combination of the other three vectors.

6.6 Prove that every set of m + 1 or more m-dimensional vectors of the same type (either row or column) is linearly dependent.

Consider a set of n such vectors, with n > m. Equation (6.1) generates m homogeneous equations (one for each component of the vectors under consideration) in the n unknowns c1, c2, ..., cn. If we were to solve those equations by Gaussian elimination (see Chapter 2), we would find that the solution set has at least n - m arbitrary unknowns. Since these arbitrary unknowns may be chosen to be nonzero, there exists a solution set for (6.1) which is not all zero; thus the n vectors are linearly dependent.

6.7 Prove that an elementary row operation of the first kind does not alter the row rank of a matrix.

Let B be obtained from matrix A by interchanging two rows. Clearly the rows of A span the same set of vectors as the rows of B, so A and B must have the same row rank.

6.8 Prove that if AX = 0 and BX = 0 have the same set of solutions, then A and B have the same column rank.

Suppose the column rank of A exceeded that of B, and choose a maximal set of linearly independent columns A1, A2, ..., Aa of A. The corresponding columns B1, B2, ..., Ba of B would then be linearly dependent, so there would exist constants d1, d2, ..., da, not all zero, with d1B1 + d2B2 + ··· + daBa = 0. The vector X having these d's in the corresponding positions and zeros elsewhere satisfies BX = 0, hence also AX = 0, which gives

    d1A1 + d2A2 + ··· + daAa = 0

where, as noted, the constants d1, d2, ..., da are not all zero. But this implies that A1, A2, ..., Aa are linearly dependent, which is a contradiction. Thus the column rank of A cannot be greater than the column rank of B.

A similar argument, with the roles of A and B reversed, shows that the column rank of B cannot be greater than the column rank of A, so the two column ranks must be equal.

6.9 Prove that an elementary row operation of any kind does not alter the column rank of a
matrix.
Denote the original matrix as A, and the matrix obtained by applying an elementary row operation
to A as B. The two homogeneous systems of equations AX = 0 and BX = 0 have the same set of solutions
(see Chapter 2). Thus, as a result of Problem 6.8, A and B have the same column rank.

6.10 Prove that the row rank and column rank of any matrix are identical.

Assume that the row rank of an m x n matrix A is r, and its column rank is c. We wish to show that r = c. Rearrange the rows of A so that the first r rows are linearly independent and the remaining m - r rows are linear combinations of the first r rows. It follows from Problems 6.7 and 6.9 that the column rank and row rank of A remain unaltered. Denote the rows of A as A1, A2, ..., Am, in order, and let B be the r x n matrix whose rows are A1, A2, ..., Ar. Since every row of A is a linear combination of the rows of B, there is an m x r matrix T with A = TB. Each column of A is then T times the corresponding column of B, so every column of A lies in the span of the r columns of T; hence the column rank of A is at most r, that is, c ≤ r. Applying the same reasoning to Aᵀ, whose row rank is c and whose column rank is r, gives r ≤ c. Therefore r = c.

6.12 Problem 6.8 suggests the following algorithm for choosing a maximal subset of linearly
independent vectors from any given set: Construct a matrix A whose columns are the given set
of vectors, and transform the matrix into row-echelon form U using elementary row
operations. Then AX = 0 has the same solution set as UX = 0, which implies that any subset of
the columns of A are linearly independent vectors if and only if the same subset of columns of
U are linearly independent. Now the columns of U containing the first nonzero element in
each of the nonzero rows of U are a maximal set of linearly independent column vectors for U,
so those same columns in A are a maximal set of linearly independent column vectors for A.
Use this algorithm to choose a maximal set of linearly independent vectors from [3, 2, 1], [2, 3, -6], [1, 0, 3], [-4, -1, -8], and [1, -1, 7].

We form the matrix

    A = [ 3   2   1  -4   1 ]
        [ 2   3   0  -1  -1 ]
        [ 1  -6   3  -8   7 ]

which, as shown in Problem 1.15, has the row-echelon form

    U = [ 1  2/3   1/3  -4/3   1/3 ]
        [ 0   1   -2/5    1    -1  ]
        [ 0   0     0     0     0  ]

The first and second columns of U contain the first nonzero element in each nonzero row of U. Therefore, the first and second columns of A constitute a maximal set of linearly independent columns of A. That is, [3, 2, 1] and [2, 3, -6] are linearly independent, and all the other vectors in the original set are linear combinations of those two. In particular,

    [1, 0, 3] = (3/5)[3, 2, 1] - (2/5)[2, 3, -6]
    [-4, -1, -8] = (-2)[3, 2, 1] + [2, 3, -6]
    [1, -1, 7] = [3, 2, 1] - [2, 3, -6]
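The algorithm of Problem 6.12 can be sketched directly: row-reduce and report the columns holding each nonzero row's leading entry (the names are mine; exact rationals avoid roundoff):

```python
from fractions import Fraction

def pivot_columns(A):
    """Row-reduce A and return the indices of the columns containing the
    first nonzero element of each nonzero row; those columns of the
    original A form a maximal linearly independent set."""
    M = [[Fraction(x) for x in row] for row in A]
    m, n = len(M), len(M[0])
    pivots, r = [], 0
    for c in range(n):
        if r == m:
            break
        p = next((i for i in range(r, m) if M[i][c] != 0), None)
        if p is None:
            continue                     # dependent column, no new pivot
        M[r], M[p] = M[p], M[r]
        piv = M[r][c]
        M[r] = [x / piv for x in M[r]]
        for i in range(r + 1, m):
            M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    return pivots
```

`pivot_columns([[3, 2, 1, -4, 1], [2, 3, 0, -1, -1], [1, -6, 3, -8, 7]])` returns [0, 1], singling out [3, 2, 1] and [2, 3, -6].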


Suppose the set is linearly dependent, and let i be the first integer between 2 and n for which {V1, V2, ..., Vi} forms a linearly dependent set. Such an integer must exist, and at the very worst i = n. Then there exists a set of constants d1, d2, ..., di, not all zero, such that

    d1V1 + d2V2 + ··· + d_{i-1}V_{i-1} + d_iV_i = 0

Furthermore, d_i ≠ 0, for otherwise the set {V1, V2, ..., V_{i-1}} would be linearly dependent, contradicting the defining property of i. Hence,

    V_i = -(d1/d_i)V1 - (d2/d_i)V2 - ··· - (d_{i-1}/d_i)V_{i-1}

That is, V_i can be written as a linear combination of the vectors that precede it.

On the other hand, suppose that for some i (i = 2, 3, ..., n)

    V_i = d1V1 + d2V2 + ··· + d_{i-1}V_{i-1}

Then

    d1V1 + d2V2 + ··· + d_{i-1}V_{i-1} + (-1)V_i + 0V_{i+1} + ··· + 0Vn = 0

This is (6.1) with c_i = -1 ≠ 0, c_k = d_k (k = 1, ..., i-1), and c_k = 0 (k = i+1, i+2, ..., n). So the set of vectors is linearly dependent.

Supplementary Problems

6.26 Choose a maximal subset of linearly independent vectors from those given in Problem 6.16.

6.27 Choose a maximal set of linearly independent vectors from the following: {1,2, 1, —1], [1,0, —1, 2],
(2):2;0, 2),2[3, 3, 0,3).
2; +1, 3},[0)1, 1,0], (3,

6.28 An m-dimensional vector V is a convex combination of the m-dimensional vectors V1, V2, ..., Vn of the same type (row or column) if there exist nonnegative constants d1, d2, ..., dn whose sum is 1, such that V = d1V1 + d2V2 + ··· + dnVn. Show that [5/3, 5/6] is a convex combination of the vectors [1, 1], [3, 0], and [1, 2].
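A convexity check of this kind is easy to automate. The weights used below, d = (1/2, 1/3, 1/6), are my own solution of the small linear system for Problem 6.28's data; they are nonnegative, sum to 1, and reproduce [5/3, 5/6]:

```python
def convex_check(v, vectors, d):
    """True when the weights d are nonnegative, sum to 1, and the
    combination sum(d_i * V_i) equals v componentwise."""
    if any(w < 0 for w in d) or abs(sum(d) - 1) > 1e-12:
        return False
    combo = [sum(w * vec[k] for w, vec in zip(d, vectors))
             for k in range(len(v))]
    return all(abs(a - b) < 1e-12 for a, b in zip(combo, v))

# (1/2)[1, 1] + (1/3)[3, 0] + (1/6)[1, 2] = [5/3, 5/6]
```

One can verify the weights by hand: 1/2 + 1 + 1/6 = 5/3 and 1/2 + 0 + 1/3 = 5/6.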

6.29 Determine whether [0, 7]' can be written as a convex combination of the vectors

Hasek
6 9 1 1

6.30 Prove that if {V1, V2, ..., Vn} is linearly independent and V cannot be written as a linear combination of this set, then {V1, V2, ..., Vn, V} is also linearly independent.

‘Re. ProvePopes cre


aS Ri vt 8 set of all vectorswhichae soluonsofAX
=@ D tert
min thenu
ome a aig waged | ; r
wAahtcthet ar inte
Chapter 7
Eigenvalues and Eigenvectors
CHARACTERISTIC EQUATION
A nonzero column vector X is an eigenvector (or right eigenvector or right characteristic vector) of
a square matrix A if there exists a scalar λ such that

    AX = λX        (7.1)

Then λ is an eigenvalue (or characteristic value) of A. Eigenvalues may be zero; an eigenvector may not be the zero vector.

Example 7.1 [1, -1]ᵀ is an eigenvector corresponding to the eigenvalue λ = -2 for the matrix

    A = [  3   5 ]
        [ -2  -4 ]

because

    [  3   5 ] [  1 ]   [ -2 ]        [  1 ]
    [ -2  -4 ] [ -1 ] = [  2 ]  = -2  [ -1 ]
The characteristic equation of an n x n matrix A is the nth-degree polynomial equation

    det(A - λI) = 0        (7.2)

Solving the characteristic equation for λ gives the eigenvalues of A, which may be real, complex, or repeated. Once an eigenvalue is determined, it may be substituted into (7.1), and then that equation may be solved for the corresponding eigenvectors. (See Problems 7.1 through 7.3.) The left side of (7.2) is known as the characteristic polynomial of A.
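For a 2 x 2 matrix the characteristic equation is just the quadratic λ² - (a + d)λ + (ad - bc) = 0, which can be solved directly. A sketch for the real-eigenvalue case, using the Example 7.1 matrix as it reads in this copy:

```python
import math

def eigen_2x2(a, b, c, d):
    """Real roots of det(A - lam*I) = lam**2 - (a + d)*lam + (a*d - b*c)
    for A = [[a, b], [c, d]], by the quadratic formula."""
    tr, det = a + d, a * d - b * c        # trace and determinant of A
    disc = tr * tr - 4 * det
    if disc < 0:
        raise ValueError("complex eigenvalues")
    s = math.sqrt(disc)
    return sorted([(tr - s) / 2, (tr + s) / 2])

# eigen_2x2(3, 5, -2, -4) returns [-2.0, 1.0]: the eigenvalues -2 and 1.
```

For larger matrices this route is impractical, which is the point of the Computational Considerations section below.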


LINEARLY INDEPENDENT EIGENVECTORS


The eigenvectors corresponding to a particular eigenvalue contain one or more arbitrary scalars.
(See Problems 7.1 through 7.3.) The number of arbitrary scalars is the number of linearly
independent eigenvectors associated with that eigenvalue. To obtain a maximal set of linearly
independent eigenvectors corresponding to an eigenvalue, sequentially set each of these arbitrary
scalars equal to a convenient nonzero number (usually chosen to avoid fractions) with all other
arbitrary scalars set equal to zero. It follows from Property 7.2 that when the sets corresponding to
all the eigenvalues are combined, the result is a maximal set of linearly independent eigenvectors for
the matrix. (See Problems 7.4 through 7.6.)

COMPUTATIONAL CONSIDERATIONS
There are no theoretical difficulties in determining eigenvalues, but there are practical ones.
First, evaluating the determinant in (7.2) for an n X n matrix requires approximately n! multiplica-
| tions, which for large n is a prohibitive number. Second, obtaining the roots of a general
characteristic polynomial poses an intractable algebraic problem. Consequently, numerical algor-
{ ithms are employed for determining the eigenvalues of large matrices (see Chapters 19 and 20).
THE CAYLEY-HAMILTON THEOREM

Theorem 7.1: Every square matrix satisfies its own characteristic equation. That is, if

    det(A - λI) = b_n λⁿ + b_{n-1} λ^(n-1) + ··· + b_1 λ + b_0

then

    b_n Aⁿ + b_{n-1} A^(n-1) + ··· + b_1 A + b_0 I = 0

(See Problems 7.15 through 7.17.)
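A small numerical check of Theorem 7.1 (the 2 x 2 matrix is an example of my choosing; its characteristic polynomial is λ² + λ - 2):

```python
def mat_mul(A, B):
    """Product of two square matrices stored as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[3, 5], [-2, -4]]          # det(A - lam*I) = lam**2 + lam - 2
A2 = mat_mul(A, A)
# Cayley-Hamilton predicts A**2 + A - 2I is the zero matrix:
check = [[A2[i][j] + A[i][j] - 2 * (1 if i == j else 0) for j in range(2)]
         for i in range(2)]
# check == [[0, 0], [0, 0]]
```

The same pattern verifies the theorem for any square matrix once its characteristic polynomial is known.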

     2x1 + 5x2 = 0
    -2x1 - 5x2 = 0

The solution to this system is x1 = -(5/2)x2 with x2 arbitrary, so the eigenvectors corresponding to λ = 1 are

    X = [ x1 ] = [ -(5/2)x2 ] = x2 [ -5/2 ]
        [ x2 ]   [    x2    ]      [   1  ]

with x2 arbitrary.

When λ = -2, (7.1) may be written

    [ 3 - (-2)      5      ] [ x1 ]   [ 0 ]
    [    -2     -4 - (-2)  ] [ x2 ] = [ 0 ]

which is equivalent to the set of linear equations

     5x1 + 5x2 = 0
    -2x1 - 2x2 = 0

The solution to this system is x1 = -x2 with x2 arbitrary, so the eigenvectors corresponding to λ = -2 are

    X = [ x1 ] = [ -x2 ] = x2 [ -1 ]
        [ x2 ]   [  x2 ]      [  1 ]

    2x1 + 2x2 + 2x3 = 0
    3x1 + 3x2 + 3x3 = 0
    6x1 + 6x2 + 6x3 = 0

The solution to this system is x1 = -x2 - x3 with x2 and x3 arbitrary; the eigenvectors corresponding to λ = 3 are thus

    X = [ x1 ]   [ -x2 - x3 ]      [ -1 ]      [ -1 ]
        [ x2 ] = [    x2    ] = x2 [  1 ] + x3 [  0 ]
        [ x3 ]   [    x3    ]      [  0 ]      [  1 ]

with x2 and x3 arbitrary.

When λ = 14, (7.1) becomes

    [ 5  2  2 ] [ x1 ]      [ 1  0  0 ] [ x1 ]
    [ 3  6  3 ] [ x2 ] = 14 [ 0  1  0 ] [ x2 ]
    [ 6  6  9 ] [ x3 ]      [ 0  0  1 ] [ x3 ]

or

    [ -9   2   2 ] [ x1 ]   [ 0 ]
    [  3  -8   3 ] [ x2 ] = [ 0 ]
    [  6   6  -5 ] [ x3 ]   [ 0 ]

which is equivalent to the set of linear equations

    -9x1 + 2x2 + 2x3 = 0
     3x1 - 8x2 + 3x3 = 0
     6x1 + 6x2 - 5x3 = 0

The solution to this system is x1 = (1/3)x3 and x2 = (1/2)x3 with x3 arbitrary; the eigenvectors corresponding to λ = 14 are thus

    X = x3 [ 1/3 ]
           [ 1/2 ]
           [  1  ]

with x3 arbitrary.


    (4 - i2)x1 + 4x2 = 0
    -5x1 + (-4 - i2)x2 = 0

The solution to this system is x1 = (-4/5 - i2/5)x2 with x2 arbitrary; the eigenvectors corresponding to λ = -1 + i2 are thus

    X = [ x1 ] = [ (-4/5 - i2/5)x2 ] = x2 [ -4/5 - i2/5 ]
        [ x2 ]   [        x2       ]      [       1     ]

with x2 arbitrary.

With λ = -1 - i2, the corresponding eigenvectors are found in a similar manner to be

    X = [ x1 ] = [ (-4/5 + i2/5)x2 ] = x2 [ -4/5 + i2/5 ]
        [ x2 ]   [        x2       ]      [       1     ]

with x2 arbitrary.

7.4 Choose a maximal set of linearly independent eigenvectors for the matrix given in Problem 7.2.

The eigenvectors associated with λ = 3 were found in Problem 7.2 to be

    x2 [ -1 ] + x3 [ -1 ]     with x2, x3 arbitrary
       [  1 ]      [  0 ]
       [  0 ]      [  1 ]

There are two linearly independent eigenvectors associated with λ = 3, one for each arbitrary scalar. One is obtained by setting x2 = 1 and x3 = 0, the other by setting x2 = 0 and x3 = 1. The eigenvectors associated with λ = 14 are

    x3 [ 1/3 ]
       [ 1/2 ]
       [  1  ]

There is one linearly independent eigenvector associated with this eigenvalue, and it may be obtained by choosing x3 to be any nonzero scalar. A convenient choice here is x3 = 1. Collecting the linearly independent eigenvectors for the two eigenvalues, we have

    [ -1 ]   [ -1 ]   [ 1/3 ]
    [  1 ]   [  0 ]   [ 1/2 ]
    [  0 ]   [  1 ]   [  1  ]

as a maximal set of linearly independent eigenvectors for the matrix.

7.6 Choose a maximal set of linearly independent eigenvectors for the matrix

    [2 1 0 0 0]
    [0 2 0 0 0]
    [0 0 2 1 0]
    [0 0 0 2 0]
    [0 0 0 0 2]

Since this matrix is upper triangular, its eigenvalues are the elements on its main diagonal. Thus, λ = 2 is an eigenvalue of multiplicity five. The eigenvectors associated with this eigenvalue are

    X = [x₁, 0, x₃, 0, x₅]^T = x₁[1, 0, 0, 0, 0]^T + x₃[0, 0, 1, 0, 0]^T + x₅[0, 0, 0, 0, 1]^T

with x₁, x₃, and x₅ arbitrary. Because there are three arbitrary scalars, there are three linearly independent eigenvectors associated with λ = 2, one for each arbitrary scalar.
c₁X₁ + c₂X₂ + ··· + c_mX_m = 0
c₁λ₁X₁ + c₂λ₂X₂ + ··· + c_mλ_mX_m = 0
c₁λ₁²X₁ + c₂λ₂²X₂ + ··· + c_mλ_m²X_m = 0
. . . . . . . . . . . . . . . . . . . . . .
c₁λ₁^{m−1}X₁ + c₂λ₂^{m−1}X₂ + ··· + c_mλ_m^{m−1}X_m = 0

generated by sequentially multiplying each equation on the left by A. This system can be written in the matrix form

    [   1          1       ···      1      ] [ c₁X₁  ]   [0]
    [  λ₁         λ₂       ···     λ_m     ] [ c₂X₂  ]   [0]
    [  λ₁²        λ₂²      ···     λ_m²    ] [  ⋮    ] = [⋮]     (4)
    [   ⋮          ⋮                 ⋮      ] [       ]
    [ λ₁^{m−1}  λ₂^{m−1}   ···  λ_m^{m−1}  ] [ c_mX_m]   [0]

The first matrix on the left is an m × m matrix which we shall denote as Q. Its determinant is called the Vandermonde determinant and is

    (λ₂ − λ₁)(λ₃ − λ₂)(λ₃ − λ₁)(λ₄ − λ₃)(λ₄ − λ₂)(λ₄ − λ₁) ··· (λ_m − λ₁)

which is not zero in this situation because all the eigenvalues are different. As a result, Q is nonsingular, and the system (4) can be written as

    [c₁X₁, c₂X₂, ..., c_mX_m]^T = Q⁻¹0 = 0

Then each c_iX_i = 0, and because every eigenvector X_i is nonzero, each c_i = 0; the eigenvectors X₁, X₂, ..., X_m are therefore linearly independent.

AX = A(d₁X₁ + d₂X₂ + ··· + d_nX_n)
   = d₁AX₁ + d₂AX₂ + ··· + d_nAX_n
   = d₁λX₁ + d₂λX₂ + ··· + d_nλX_n
   = λ(d₁X₁ + d₂X₂ + ··· + d_nX_n) = λX

Thus, X is an eigenvector of A. Note that a nonzero constant times an eigenvector is also an eigenvector corresponding to the same eigenvalue.

7.13 A left eigenvector of a matrix A is a nonzero row vector X having the property that XA = λX or, equivalently, that

    X(A − λI) = 0     (1)

for some scalar λ. Again λ is an eigenvalue for A, and it is found as before. Once λ is determined, it is substituted into (1), and then that equation is solved for X. Find the eigenvalues and left eigenvectors for

    A = [ 3  5]
        [−2 −4]

The eigenvalues were found in Problem 7.1 to be λ = 1 and λ = −2. Set X = [x₁, x₂]. With λ = 1, (1) becomes

    [x₁, x₂] ([ 3  5] − [1 0]) = [0, 0]
             ([−2 −4]   [0 1])

or

    [x₁, x₂] [ 2  5] = [0, 0]
             [−2 −5]

which is equivalent to the set of equations

2x₁ − 2x₂ = 0
5x₁ − 5x₂ = 0

The solution to this system is x₁ = x₂ with x₂ arbitrary, so the left eigenvectors corresponding to λ = 1 are X = x₂[1, 1].
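Numerically, left eigenvectors can be obtained as ordinary (right) eigenvectors of the transpose, since XA = λX is equivalent to A^T X^T = λX^T. A short NumPy check (illustrative, not from the text):

```python
import numpy as np

A = np.array([[3.0, 5.0],
              [-2.0, -4.0]])

# Left eigenvectors of A are right eigenvectors of A transpose.
vals, vecs = np.linalg.eig(A.T)
for lam, v in zip(vals, vecs.T):
    X = v.reshape(1, -1)                 # row vector
    assert np.allclose(X @ A, lam * X)   # X A = lambda X

# For lambda = 1 the left eigenvectors are multiples of [1, 1]:
assert np.allclose(np.array([[1.0, 1.0]]) @ A, np.array([[1.0, 1.0]]))
```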

The characteristic equation for A was determined in Problem 7.1 to be λ² + λ − 2 = 0. Substituting A for λ, we obtain

    A² + A − 2I = [−1 −5] + [ 3  5] − 2[1 0] = [0 0]
                  [ 2  6]   [−2 −4]    [0 1]   [0 0]

7.16 Verify the Cayley-Hamilton theorem for the matrix of Problem 7.2.

The characteristic equation for A was found in Problem 7.2 to be −λ³ + 20λ² − 93λ + 126 = 0. Therefore, we evaluate

    −A³ + 20A² − 93A + 126I

      = −[  521   494   494]      [ 43  34  34]      [5 2 2]       [1 0 0]
         [  741   768   741] + 20 [ 51  60  51] − 93 [3 6 3] + 126 [0 1 0]
         [1,482 1,482 1,509]      [102 102 111]      [6 6 9]       [0 0 1]

      = [0 0 0]
        [0 0 0]
        [0 0 0]
7.17 Prove the Cayley-Hamilton theorem (Theorem 7.1).

Define the characteristic polynomial of an n × n matrix A as

    det(A − λI) = b_nλ^n + b_{n−1}λ^{n−1} + ··· + b₁λ + b₀     (1)

The elements of A − λI are polynomials in λ of degree at most 1, so the elements of adj(A − λI) are polynomials in λ of degree at most n − 1, and we may write

    adj(A − λI) = M_{n−1}λ^{n−1} + M_{n−2}λ^{n−2} + ··· + M₁λ + M₀

where each M_i is an n × n matrix of constants. Since (A − λI) adj(A − λI) = det(A − λI)I, equating coefficients of like powers of λ yields

b_nI = −M_{n−1}
b_{n−1}I = AM_{n−1} − M_{n−2}
. . . . . . . . . . . . . . .
b₂I = AM₂ − M₁
b₁I = AM₁ − M₀
b₀I = AM₀

Multiplying the first of these equations by A^n, the second by A^{n−1}, the third by A^{n−2}, and so on (the last equation will be multiplied by A⁰ = I) and then adding, we find that the terms on the right side cancel, leaving

    b_nA^n + b_{n−1}A^{n−1} + b_{n−2}A^{n−2} + ··· + b₁A + b₀I = 0

which is the Cayley-Hamilton theorem for A with characteristic polynomial given by (1).

Supplementary Problems

In Problems 7.18 through 7.26, find the eigenvalues and corresponding eigenvectors of the given matrix.

3. 1 1 i..=t ia
7.38 2. Salil 0m i -1 3 -1
-l1 -1 4 -1 - ate

7.41 Verify the Cayley-Hamilton theorem for the matrix in (a) Problem 7.18; (b) Problem 7.24; and (c)
Problem 7.30.

7.42 Show that if A is an eigenvalue of A with corresponding eigenvector X, then X is also an eigenvector of
A’ corresponding to A°.

7.43 Show that if A is an eigenvalue of A with corresponding eigenvector X, then for any scalar c, X is an
eigenvector of A — cI corresponding to the eigenvalue A — c.

Prove that if A has order n Xn, then


    det(A − λI) = (−1)^n {λ^n − (trace A)λ^{n−1} + O(λ^{n−2})}

where O(λ^{n−2}) denotes a polynomial in λ of degree n − 2 or less.

7.45 Prove that the trace of a square matrix is equal to the sum of the eigenvalues of that matrix.

7.46 Prove that trace(A + B) = trace A + trace B whenever the sum is defined.
Chapter 8
Functions of Matrices

SEQUENCES AND SERIES OF MATRICES


A sequence {B_k} of matrices B_k = [b_ij^(k)], all of the same order, converges to a matrix B = [b_ij] if the elements b_ij^(k) converge to b_ij for every i and j. The infinite series Σ_{k=0}^∞ B_k converges to B if the sequence of partial sums {S_n = Σ_{k=0}^n B_k} converges to B. (See Problem 8.1.)

WELL-DEFINED FUNCTIONS
If a function f(z) of a complex variable z has a Maclaurin series expansion

    f(z) = Σ_{n=0}^∞ a_nz^n

which converges for |z| < R, then the matrix series Σ_{n=0}^∞ a_nA^n converges, provided A is square and each of its eigenvalues has absolute value less than R. In such a case, f(A) is defined as

    f(A) = Σ_{n=0}^∞ a_nA^n

and is called a well-defined function. By convention, A⁰ = I. (See Problems 8.2 and 8.3.)

COMPUTING WELL-DEFINED FUNCTIONS
A well-defined function of an n × n matrix A can be computed as a polynomial in A of degree n − 1:

    f(A) = a_{n−1}A^{n−1} + a_{n−2}A^{n−2} + ··· + a₁A + a₀I     (8.1)

where the scalar coefficients a_{n−1}, ..., a₁, a₀ are determined as follows. Set

    r(λ) = a_{n−1}λ^{n−1} + ··· + a₁λ + a₀

STEP 8.1: Determine the distinct eigenvalues λ₁, λ₂, ..., λ_s of A and the multiplicity k_i of each.
STEP 8.2: For each eigenvalue λ_i, formulate the equation

    f(λ)|_{λ=λ_i} = r(λ)|_{λ=λ_i}     (8.2)
FUNCTIONS OF MATRICES [CHAP. 8

STEP 8.3: If λ_i is an eigenvalue of multiplicity k_i, for k_i > 1, then formulate also the following equations, involving derivatives of f(λ) and r(λ) with respect to λ:

    f'(λ)|_{λ=λ_i} = r'(λ)|_{λ=λ_i}
    f''(λ)|_{λ=λ_i} = r''(λ)|_{λ=λ_i}
    . . . . . . . . . . . . . . . . .     (8.3)
    f^{(k_i−1)}(λ)|_{λ=λ_i} = r^{(k_i−1)}(λ)|_{λ=λ_i}

STEP 8.4: Solve the set of all equations obtained in Steps 8.2 and 8.3 for the unknown scalars a₀, a₁, ..., a_{n−1}.

Once the scalars determined in Step 8.4 are substituted into (8.1), f(A) may be calculated. (See Problems 8.4 through 8.6.)

THE FUNCTION e^{At}
For any constant square matrix A and real variable t, the matrix function e^{At} is computed by setting B = At and then calculating e^B as described in the preceding section. (See Problems 8.7 through 8.10.) The eigenvalues of B = At are the eigenvalues of A multiplied by t (see Property 7.5). Note that (8.3) involves derivatives with respect to λ and not t; the correct sequence of steps is to first take the derivatives of f(λ) and r(λ) with respect to λ and then substitute λ = λ_i (a function of t) into (8.2) and (8.3). The reverse order, substituting λ = λ_i first and then differentiating with respect to t, can give erroneous results.
CHAP. 8} FUNCTIONS OF MATRICES 73

In (8.4) and (8.5), the matrices e^{A(t−t₀)}, e^{−As}, and e^{A(t−s)} are easily computed from e^{At} by replacing the variable t with t − t₀, −s, and t − s, respectively. Usually, X(t) is obtained more easily from (8.5) than from (8.4), because the former involves one fewer matrix multiplication. However, the integrals arising in (8.5) are generally more difficult to evaluate than those in (8.4). (See Problems 8.13 and 8.14.)
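In numerical work, e^{At} is usually obtained from a library routine rather than by hand. A sketch with SciPy's expm, checked against the closed form found in Problem 8.7:

```python
import numpy as np
from scipy.linalg import expm

# Problem 8.7: for this A, e^{At} = [[cos t, sin t], [-sin t, cos t]].
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
t = 0.7

E = expm(A * t)
expected = np.array([[np.cos(t), np.sin(t)],
                     [-np.sin(t), np.cos(t)]])
assert np.allclose(E, expected)
print(E.round(4))
```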

THE MATRIX EQUATION AX + XB=C


The equation AX + XB = C, where A, B, and C denote constant square matrices of the same order, has a unique solution if and only if A and −B have no eigenvalues in common. This unique solution is given by

    X = −∫₀^∞ e^{At}Ce^{Bt} dt     (8.6)

provided the integral exists (see Problem 8.15).
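SciPy solves AX + XB = C directly, without evaluating the integral. A sketch (the matrices are arbitrary stable examples of my own, chosen so that the integral in (8.6) exists):

```python
import numpy as np
from scipy.linalg import solve_sylvester

A = np.array([[-1.0, 0.0],
              [0.0, -2.0]])
B = np.array([[-3.0, 1.0],
              [0.0, -4.0]])
C = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# A and -B share no eigenvalues ({-1, -2} vs {3, 4}), so X is unique.
X = solve_sylvester(A, B, C)    # solves A X + X B = C
assert np.allclose(A @ X + X @ B, C)
```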

Example 8.2 For A = I and B = 0, the matrix equation has the unique solution X = C, but the integral in (8.6) diverges.

Solved Problems

8.3 Determine whether arctan A is well defined for

The Maclaurin series for arctan z is

    arctan z = Σ_{n=0}^∞ (−1)^n z^{2n+1}/(2n + 1) = z − z³/3 + z⁵/5 − ···

which converges for all values of z having absolute value less than 1. Therefore,

    arctan A = A − A³/3 + A⁵/5 − ···

is well defined for any square matrix whose eigenvalues are all less than 1 in absolute value. The given matrix A has eigenvalues λ₁ = 0 and λ₂ = 4. Since the second of these eigenvalues has absolute value greater than 1, arctan A is not well defined for this matrix.

8.4 Find cos A for the matrix given in Problem 8.3.

We know that cos A is well defined for all matrices, because the Maclaurin series for cos z converges for all finite values of z. For this 2 × 2 matrix, (8.1) becomes cos A = a₁A + a₀I.

8.6 Find sin A for

    A = [−2  2  0]
        [ 0 −2  1]
        [ 0  0 −2]

The Maclaurin series for sin z converges for all finite values of z, so sin A is well defined for all matrices. For the given 3 × 3 matrix, (8.1) becomes

    sin A = a₂A² + a₁A + a₀I = a₂[4 −8  2]      [−2  2  0]      [1 0 0]
                                 [0  4 −4] + a₁ [ 0 −2  1] + a₀ [0 1 0]
                                 [0  0  4]      [ 0  0 −2]      [0 0 1]

          = [4a₂ − 2a₁ + a₀      −8a₂ + 2a₁          2a₂       ]
            [      0          4a₂ − 2a₁ + a₀      −4a₂ + a₁    ]     (1)
            [      0                0          4a₂ − 2a₁ + a₀  ]

Matrix A has eigenvalue λ = −2 with multiplicity three, so we will have to use Step 8.3. We determine

    f(λ) = sin λ        r(λ) = a₂λ² + a₁λ + a₀
    f'(λ) = cos λ       r'(λ) = 2a₂λ + a₁
    f''(λ) = −sin λ     r''(λ) = 2a₂

and write (8.2) and (8.3) as, respectively,

    sin(−2) = a₂(−2)² + a₁(−2) + a₀
    cos(−2) = 2a₂(−2) + a₁
    −sin(−2) = 2a₂

Solving these equations, we obtain a₂ = −(1/2) sin(−2) = 0.454649; a₁ = cos(−2) − 2 sin(−2) = 1.40245; and a₀ = sin(−2) + 2a₁ − 4a₂ = 0.077004. Substituting these values into (1) and simplifying give us

    sin A = [−0.909297  −0.832294   0.909297]
            [ 0         −0.909297  −0.416147]
            [ 0          0         −0.909297]

a₁ = (1/(i2t))(e^{it} − e^{−it}) = (sin t)/t        a₀ = (1/2)(e^{it} + e^{−it}) = cos t

Substituting these values into (1), we determine

    e^{At} = [ cos t  sin t]
             [−sin t  cos t]

8.8 Find e”’ for

— _
no ase| 81 <!
t

ae vst iar 2

and femenee e”.


¢ Sage B is of orderéx 2, (8.1) becomes
° #o “Bs

| Peepsod
t he stat wee.

8.10 Establish the equations that are needed to find e^{At} for a given 6 × 6 matrix A.

We set B = At and compute e^B. Since B is a 6 × 6 matrix, (8.1) becomes

    e^B = a₅B⁵ + a₄B⁴ + a₃B³ + a₂B² + a₁B + a₀I     (1)

The distinct eigenvalues of B are λ₁ = t with multiplicity three, λ₂ = 2t with multiplicity two, and λ₃ = 0 with multiplicity one. We determine

    f(λ) = e^λ       r(λ) = a₅λ⁵ + a₄λ⁴ + a₃λ³ + a₂λ² + a₁λ + a₀
    f'(λ) = e^λ      r'(λ) = 5a₅λ⁴ + 4a₄λ³ + 3a₃λ² + 2a₂λ + a₁
    f''(λ) = e^λ     r''(λ) = 20a₅λ³ + 12a₄λ² + 6a₃λ + 2a₂

and (8.2) and (8.3) become

    e^t = a₅t⁵ + a₄t⁴ + a₃t³ + a₂t² + a₁t + a₀
    e^t = 5a₅t⁴ + 4a₄t³ + 3a₃t² + 2a₂t + a₁
    e^t = 20a₅t³ + 12a₄t² + 6a₃t + 2a₂
    e^{2t} = a₅(2t)⁵ + a₄(2t)⁴ + a₃(2t)³ + a₂(2t)² + a₁(2t) + a₀
    e^{2t} = 5a₅(2t)⁴ + 4a₄(2t)³ + 3a₃(2t)² + 2a₂(2t) + a₁
    1 = e⁰ = a₅(0)⁵ + a₄(0)⁴ + a₃(0)³ + a₂(0)² + a₁(0) + a₀

which must be simplified before they are solved.


2 4 2
A(i~t) Al 4e°+2e. e Ba sce %
Cc z|Se -8e ™ de" ++ te" -4 "a
ah ee 4e7?? + 2e* Pee vd {°]- ie 1e*

F(s) +|4¢ —2s -ie" yaad + 4e* e -= ze"* +?‘,Ss

AS =
[Gert -beds
: 2 as Ss

cee

|e F(s)ds 30| -10e7'
+ 4e" +6
[iuer + fe” )ds
eg! ag* l| =is te” +6
“eo F(s )ds = =mle +2e- 4
30 8e7-8e~* 2c +4e°* || -10e7' + 4e" +6
(4e” + 2e7")(-Se™'-e +6) +(e" - e)(—10e"'
+4e"' +6)
- imLhe" ~eM Se" eit 6) +e" + 4c '\(—10e' + 4e“ + 6)

oe? eee &

: gk ASR Le’
te he” — te

8.15 Solve the matrix equation AX + XB=C for X when

: ae[-¢ 3] wefP 7] e-[2 9


In preparation for the use of (8.6), we calculate

eo € a

q 4 ct 2 Ath 2 fn aE
Alig Bt __ 36 © Se 5e e
Then e” Ce 5 Tee ) ra ae 1
:

—/ aut x =—Ti
: eee Al Br = 84 ze v7 Ls —3¢' + xe [6 Sel ed

ane + I— Ege" + fool be" -de |S 7/27 2/27


8.16 Prove that e^{At}e^{Bt} = e^{(A+B)t} if and only if the matrices A and B commute (that is, if and only if the commutative property for multiplication holds for A and B).

If AB = BA, and only then, we have

    (A + B)² = (A + B)(A + B) = A² + AB + BA + B² = A² + 2AB + B²

and, in general, by the binomial theorem,

    (A + B)^n = Σ_{k=0}^n (n choose k) A^k B^{n−k}
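The failure of the exponential law for noncommuting matrices is easy to see numerically; a sketch with SciPy (the matrices are my own examples):

```python
import numpy as np
from scipy.linalg import expm

# A and B below do not commute, and the exponential law fails:
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0, 0.0],
              [1.0, 0.0]])
assert not np.allclose(expm(A) @ expm(B), expm(A + B))

# Any matrix commutes with itself, so there the law holds:
assert np.allclose(expm(A) @ expm(A), expm(2 * A))
```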

8.18 Prove that e^0 = I.

From the definition of matrix multiplication, 0^n = 0 for n ≥ 1. Hence,

    e^0 = Σ_{n=0}^∞ (1/n!)0^n = I + Σ_{n=1}^∞ (1/n!)0^n = I + 0 = I

Supplementary Problems

8.19 Determine the limit of each of the following sequences of matrices as k goes to ∞:
=.

a ae = e! et

kL ae Ok’ +k SRI

nes. A eh ah7, aad ee

a ae note”
well defined? Ae
ner ate , ue) eer. |
0.13 Fae
CHAP. 8] FUNCTIONS OF MATRICES

8.32 Find sin Ar for

8.34 Solve X(t) = AX(r) + F(t) when

8.35 gihis X(t) = AX(t) + F(t); X(0) =C when


Chapter 9

Canonical Bases
GENERALIZED EIGENVECTORS
A vector X_m is a generalized (right) eigenvector of rank m for the square matrix A and associated eigenvalue λ if

    (A − λI)^m X_m = 0     but     (A − λI)^{m−1} X_m ≠ 0

(See Problems 9.1 through 9.4.) Right eigenvectors, as defined in Chapter 7, are generalized eigenvectors of rank 1.

CHAINS
A chain generated by a generalized eigenvector X_m of rank m associated with the eigenvalue λ is the set of vectors {X_m, X_{m−1}, ..., X₁} defined recursively as

    X_j = (A − λI)X_{j+1}     (j = m − 1, m − 2, ..., 1)     (9.1)

(See Problems 9.5 and 9.6.) A chain is a linearly independent set of generalized eigenvectors of ranks m through 1. The number of vectors in the set is called the length of the chain.
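The defining conditions and the recursion (9.1) translate directly into NumPy. A sketch using a small 4 × 4 matrix (patterned after Problem 9.3, with λ = 4; treat the specific entries as illustrative):

```python
import numpy as np

A = np.array([[4.0, 0.0, 0.0, 0.0],
              [1.0, 4.0, 0.0, 0.0],
              [-1.0, 0.0, 4.0, 0.0],
              [0.0, 0.0, 0.0, 3.0]])
lam = 4.0
N = A - lam * np.eye(4)              # A - lambda*I

# X2 is a generalized eigenvector of rank 2:
X2 = np.array([1.0, 0.0, 0.0, 0.0])
assert np.allclose(np.linalg.matrix_power(N, 2) @ X2, 0)   # (A-4I)^2 X2 = 0
assert not np.allclose(N @ X2, 0)                          # (A-4I) X2 != 0

# Recursion (9.1) produces the rest of the chain:
X1 = N @ X2
print(X1)            # [0, 1, -1, 0] -- an ordinary eigenvector of A
assert np.allclose(N @ X1, 0)
```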

A canonical basis for an n × n matrix A is a set of n linearly independent generalized eigenvectors composed entirely of chains: if a generalized eigenvector appears in the basis, then so does every vector in the chain it generates.
CANONICAL BASES

generalized eigenvectors associated with λ. Form the chain generated by this vector, and include it in the basis. Return to Step 9.4.
(See Problems 9.10 through 9.13.)

THE MINIMUM POLYNOMIAL


The minimum polynomial m(λ) for an n × n matrix A is the monic polynomial of least degree for which m(A) = 0. Designate the distinct eigenvalues of A as λ₁, λ₂, ..., λ_s (1 ≤ s ≤ n), and for each λ_i determine a p_i as in Step 9.1 above. The minimum polynomial for A is then

    m(λ) = (λ − λ₁)^{p₁}(λ − λ₂)^{p₂} ··· (λ − λ_s)^{p_s}     (9.3)

(See Problems 9.14 and 9.15.)
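A quick numerical illustration (the matrix is my own example): for a 3 × 3 matrix with the single eigenvalue 2 and p₁ = 2, the minimum polynomial (9.3) is (λ − 2)², even though the characteristic polynomial is (λ − 2)³.

```python
import numpy as np

# One Jordan block of order 2 plus a 1x1 block, all for eigenvalue 2.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 2.0]])
N = A - 2.0 * np.eye(3)

assert not np.allclose(N, 0)                          # (A - 2I)^1 != 0
assert np.allclose(np.linalg.matrix_power(N, 2), 0)   # m(A) = (A - 2I)^2 = 0
```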

CANONICAL BASES (CHAP. 9

9.3 Find a generalized eigenvector of rank 2 corresponding to the eigenvalue λ = 4 for the matrix

    A = [ 4 0 0 0]
        [ 1 4 0 0]
        [−1 0 4 0]
        [ 0 0 0 3]

We seek a four-dimensional vector X₂ = [x₁, x₂, x₃, x₄]^T such that (A − 4I)²X₂ = 0 and (A − 4I)X₂ ≠ 0. We have

    (A − 4I) = [ 0 0 0  0]        (A − 4I)² = [0 0 0 0]
               [ 1 0 0  0]                    [0 0 0 0]
               [−1 0 0  0]                    [0 0 0 0]
               [ 0 0 0 −1]                    [0 0 0 1]

so to satisfy (A − 4I)²X₂ = 0 we must have x₄ = 0. Then, to satisfy (A − 4I)X₂ ≠ 0, we must have x₁ ≠ 0. A simple choice is x₁ = 1, x₂ = x₃ = 0, which gives us X₂ = [1, 0, 0, 0]^T.

ding toth or

CO ii athe vector X. =[xr.. %i. xz. Pee he conditions (A —42)°K. 41) :


i ca

oye ae 2 ote o i a | a is 7 — ye s iy , ' ,


as WS. -—- a ; a

ae . Si 2 ee. 2 — 4 j :
\ —. ; T i a
=
~
;
7

¥ 2
Re.‘ =6(A °.~=10.0,0,%,.°
eo

ii We Pe ee es ,akg 0,
10,0, “a te:
Xf = 4 “" - r 4 2 rs ed by _ '
aes ae

9.6 Determine the chain that is generated by the generalized eigenvector of rank 2 found in
Problem 9.3.
From Problem 9.3 we have X₂ = [1, 0, 0, 0]^T, corresponding to λ = 4. Using (9.1), we write

    X₁ = (A − 4I)X₂ = [ 0 0 0  0] [1]   [ 0]
                      [ 1 0 0  0] [0] = [ 1]
                      [−1 0 0  0] [0]   [−1]
                      [ 0 0 0 −1] [0]   [ 0]

The chain is {X₂, X₁} = {[1, 0, 0, 0]^T, [0, 1, −1, 0]^T}.

9.7 Show that if X,,, is a generalized eigenvector of rank m for matrix A and eigenvalue A, then X,
as defined by (9.1) is a generalized eigenvector of rank j corresponding to the same matrix
and eigenvalue.
Since X_m is a generalized eigenvector of rank m,

    (A − λI)^m X_m = 0     and     (A − λI)^{m−1} X_m ≠ 0

It follows from Eq. (9.1) that

    X_j = (A − λI)X_{j+1} = (A − λI)²X_{j+2} = ··· = (A − λI)^{m−j} X_m

Therefore     (A − λI)^j X_j = (A − λI)^j (A − λI)^{m−j} X_m = (A − λI)^m X_m = 0
and           (A − λI)^{j−1} X_j = (A − λI)^{j−1}(A − λI)^{m−j} X_m = (A − λI)^{m−1} X_m ≠ 0

which together imply that X_j is a generalized eigenvector of rank j for A and λ.

9.8 Show that a chain is a linearly independent set of vectors.

The proof is inductive on the length of the chain. A chain of length one consists of a single generalized eigenvector of rank 1; such a vector is nonzero by definition, so the chain is linearly independent.

The eigenvalues for this matrix were found in Problem 7.1 to be λ = 1 and λ = −2. Since they are distinct, a canonical basis for A will consist of one eigenvector for each eigenvalue. Eigenvectors corresponding to λ = 1 were determined in Problem 7.1 as x₂[−5/2, 1]^T with x₂ arbitrary. We set x₂ = 2 to avoid fractions, and obtain the single eigenvector [−5, 2]^T. The eigenvectors associated with λ = −2 are x₂[−1, 1]^T with x₂ again arbitrary. Selecting x₂ = 1 in this case, we obtain the single eigenvector [−1, 1]^T. A canonical basis for A is thus {[−5, 2]^T, [−1, 1]^T}.

9.10 Determine the number of generalized eigenvectors of each rank corresponding to λ = 4 that will appear in a canonical basis for a given 6 × 6 matrix A.

For this 6 × 6 matrix, the eigenvalue λ = 4 has multiplicity five (while λ = 7 has multiplicity one), so five of the six vectors in a canonical basis for A will be generalized eigenvectors associated with λ = 4.

9.11 Find a canonical basis for the matrix given in Problem 9.10.
We first find the vectors in the basis corresponding to λ = 4, using the information obtained in the solution to Problem 9.10. There is one generalized eigenvector of rank p = 3, which we denote as X₃ = [x₁, x₂, x₃, x₄, x₅, x₆]^T. We note that to have (A − 4I)³X₃ = 0, we must set x₆ = 0; and to have (A − 4I)²X₃ ≠ 0, we must have x₃ ≠ 0. A simple choice is X₃ = [0, 0, 1, 0, 0, 0]^T, which generates as the rest of its chain
0
0
0
X, =(A-41)X, = 0
0
0

has rank 0, so p = 2. Then

    N₂ = rank(A − 3I)¹ − rank(A − 3I)² = 2 − 0 = 2
    N₁ = rank(A − 3I)⁰ − rank(A − 3I)¹ = rank(I) − rank(A − 3I) = 4 − 2 = 2

A canonical basis for A will contain two generalized eigenvectors of rank 2. We denote one of these as X₂ = [x₁, x₂, x₃, x₄]^T. The condition (A − 3I)²X₂ = 0 is satisfied by all four-dimensional vectors, so it places no constraints on X₂. The requirement (A − 3I)X₂ ≠ 0 is satisfied if either

    x₄ ≠ 0     or     2x₁ + x₂ ≠ 0     (1)

A convenient choice, therefore, is X₂ = [0, 0, 0, −1]^T, which generates the remaining vector of its chain:
04 0 ro |

_|o00 0
=hod 0 -1
Dio...
Another generalized eigenvector of rank 2 for λ = 3, linearly independent of X₂ and satisfying (1), is Y₂ = [0, 1, 0, 0]^T. This generates the remaining vector of its chain:
a.
; te)
i ah oe) .
Raia. he £6 +e fvdie . FQ" haat =
th st xe bee 0 ‘pa pak,
ay }1] Jo)i

| on
fr a | }o} |
sp

0) =3
0
(A — 3I)' = 0)
0 -4
0 4 -4
has rank 1. Thus, p = 3, and N₃ = 2 − 1 = 1, N₂ = 3 − 2 = 1, and N₁ = 5 − 3 = 2.
A generalized eigenvector of rank p = 3 for λ = 3 is X₃ = [0, 0, 1, 0, 0]^T, which generates

    X₂ = (A − 3I)X₃ =

> the N,by | to obtain as


=, = } an
peer ot tank,1 fisee This is an.

Supplementary Problems

Determine which of the following are generalized eigenvectors of rank 3 corresponding to λ = 1 for the matrix
matrix

(a) [1, 1, 0, 0, 0]^T     (b) [0, 0, 0, 0, 2]^T     (c) [0, 0, 0, 1, 0]^T

(d) [1, 1, 0, 2, 0]^T     (e) [0, 0, 0, 0, 0]^T     (f) [1, 1, 1, 1, 1]^T

Find the chain generated by X₄ = [0, 0, 0, 0, 1]^T, a generalized eigenvector of rank 4 corresponding to λ = 1 for the matrix of the preceding problem.

Find a generalized eigenvector of rank 2 corresponding to λ = 5 for the matrix
Chapter 10
Similarity
SIMILAR MATRICES
A matrix A is similar to a matrix B if there exists an invertible matrix S such that

    A = S⁻¹BS     (10.1)

If A is similar to B, then B is also similar to A, and both matrices must be square and of the same order.

Property 10.1: Similar matrices have the same characteristic equation and, therefore, the same eigenvalues and the same trace.

Property 10.2: If X is an eigenvector of A associated with eigenvalue λ and (10.1) holds, then Y = SX is an eigenvector of B associated with the same eigenvalue.

(See Problems 10.1 through 10.3 and 10.43.)
MODAL MATRICES
A modal matrix for an n × n matrix A is an n × n matrix M whose columns form a canonical basis for A, with the vectors of each chain kept together and arranged in order of increasing rank. (See Problems 10.4 through 10.6.)

JORDAN CANONICAL FORM
A Jordan block is a square matrix having a repeated scalar λ on its main diagonal, 1s on the diagonal immediately above the main diagonal, and 0s elsewhere. A matrix is in Jordan canonical form if it is a block-diagonal matrix

    J = diag(D, J₁, J₂, ..., J_k)

where D denotes a diagonal matrix (whose diagonal elements need not be equal) and J_i (i = 1, 2, ..., k) represents a Jordan block. Although the diagonal elements in any one Jordan block must be equal, different Jordan blocks within the Jordan canonical form may have different diagonals. (See Problem 10.7.)

SIMILARITY AND JORDAN CANONICAL FORM

Every square matrix A is similar to a matrix J in Jordan canonical form. If M is a modal matrix for A, then

    J = M⁻¹AM     (10.2)

which has the form of (10.1) with S = M⁻¹. The Jordan canonical form J in (10.2) is uniquely determined by M. Each chain of length r appearing in M and corresponding to the eigenvalue λ generates an r × r Jordan block in J with λ on the diagonal. The chains of length one (if they exist) generate the diagonal submatrix D; the diagonal elements of that submatrix are the eigenvalues associated with those chains. (See Problems 10.8 through 10.13.)
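Symbolic packages can produce J and a modal matrix at once. A sketch with SymPy's jordan_form, shown on the matrix of Problem 7.2 (which turns out to be diagonalizable, so J has no off-diagonal 1s):

```python
import sympy as sp

A = sp.Matrix([[5, 2, 2],
               [3, 6, 3],
               [6, 6, 9]])
P, J = A.jordan_form()        # returns P and J with A = P J P^{-1}

assert A == P * J * P.inv()
print(J)    # diagonal matrix with entries 3, 3, 14
```

The columns of P form a canonical basis for A, grouped by chain, exactly as described above for a modal matrix.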

FUNCTIONS OF MATRICES IN JORDAN FORM
If J_i is an r × r Jordan block with diagonal element λ, then for a well-defined function f,

    f(J_i) = [f(λ)  f'(λ)/1!  f''(λ)/2!   ···   f^{(r−1)}(λ)/(r−1)!]
             [ 0      f(λ)    f'(λ)/1!    ···   f^{(r−2)}(λ)/(r−2)!]
             [ ⋮                  ⋱                       ⋮         ]
             [ 0       0         0        ···        f'(λ)/1!      ]
             [ 0       0         0        ···          f(λ)        ]     (10.4)

where all derivatives are taken with respect to λ. (See Problem 10.15.) For a partitioned matrix in Jordan canonical form J = diag(D, J₁, J₂, ..., J_k),

    f(J) = diag(f(D), f(J₁), f(J₂), ..., f(J_k))

and for a matrix A with modal matrix M and Jordan form J, f(A) = M f(J) M⁻¹.

Solved Problems

10.1 Determine whether

a=|? 4 is similar to B=|? fl


= -2 0 2
The matrices are similar if and only if there exists a nonsingular matrix S such that A = S⁻¹BS or, equivalently, such that

    SA = BS     (1)

Me J
Then (1) becomes © a

“abs al-lo olle al



A canonical basis for A was found in Problem 9.13. It consists of one chain of length three,

    X₃ = [0, 0, 1, 0, 0]^T     X₂ = [4, 1, 0, 0, 0]^T     X₁ = [2, −2, 0, 0, 0]^T

and two chains of length one,

    Y₁ = [0, −1, −7, 2, 2]^T     and     Z₁ = [−3, 9, 0, −4, 4]^T

Thus,

    M = [Y₁, Z₁, X₁, X₂, X₃] = [ 0 −3  2 4 0]
                               [−1  9 −2 1 0]
                               [−7  0  0 0 1]
                               [ 2 −4  0 0 0]
                               [ 2  4  0 0 0]

A second modal matrix may be obtained by interchanging the first two columns of M.

10.5 Construct a modal matrix for


Me ps St
| 04-100. 5 ee
eee Le Oe a 20
er
10 0-6-8 9
00 00 0 ‘
A canonical basis for this matrix was determined in Problem 9.11 to consist of one chain of length three, {X₁, X₂, X₃}, one chain of length two, {Y₁, Y₂}, and one chain of length one, {Z₁}. A modal matrix is then M = [Z₁, X₁, X₂, X₃, Y₁, Y₂].

[−1, 1, 0]^T, [−1, 0, 1]^T, and [2, 3, 6]^T. Since these three vectors form a full complement of generalized eigenvectors of rank 1, they are a canonical basis for A. A modal matrix for A is then

    M = [−1 −1 2]
        [ 1  0 3]
        [ 0  1 6]

Any permutation of the columns of M will produce another equally acceptable modal matrix.

10.7 Determine which of the following matrices are in Jordan canonical form:

All three matrices are in Jordan canonical form: A, because it is a diagonal matrix; B, because it is a single Jordan block; and C, because it is a block-diagonal matrix made up of a diagonal matrix D and a Jordan block J₁.


The chain of length three, {X₁, X₂, X₃}, corresponds to the eigenvalue λ = 3, so it generates the Jordan block

    J₁ = [3 1 0]
         [0 3 1]
         [0 0 3]

A is thus similar to the 5 × 5 matrix J = diag(D, J₁), where D is the 2 × 2 diagonal matrix whose diagonal elements are the eigenvalues associated with the two chains of length one.
10.10 Find a matrix J in Jordan canonical form that is similar to the matrix A of Problem 10.5.

In Problem 10.5, we found that M = [Z₁, X₁, X₂, X₃, Y₁, Y₂]. The single generalized eigenvector of rank 1, Z₁, corresponds to the eigenvalue 7 and generates the 1 × 1 diagonal submatrix of J comprised of this eigenvalue. The chain of length three, {X₁, X₂, X₃}, corresponds to the eigenvalue 4 and generates the Jordan block

    J₁ = [4 1 0]
         [0 4 1]
         [0 0 4]

The chain of length two, {Y₁, Y₂}, also corresponds to the eigenvalue 4 and generates the Jordan block

    J₂ = [4 1]
         [0 4]

Thus A is similar to

    J = diag(7, J₁, J₂) = [7 0 0 0 0 0]
                          [0 4 1 0 0 0]
                          [0 0 4 1 0 0]
                          [0 0 0 4 0 0]
                          [0 0 0 0 4 1]
                          [0 0 0 0 0 4]


10.12 Verify (10.2) for a modal matrix consisting solely of generalized eigenvectors of rank 1.

Denote the columns of M as E₁, E₂, ..., E_n, where each E_i (i = 1, 2, ..., n) is an eigenvector of A. Thus, AE_i = λ_iE_i. The eigenvalues λ₁, λ₂, ..., λ_n of A need not be distinct. Now define J = diag(λ₁, λ₂, ..., λ_n), and note that

    AM = A[E₁, E₂, ..., E_n] = [AE₁, AE₂, ..., AE_n]
       = [λ₁E₁, λ₂E₂, ..., λ_nE_n] = [E₁, E₂, ..., E_n]J = MJ

from which (10.2) follows.

10.13 Verify (10.2) for a modal matrix consisting of a single chain of length r.

Denote the columns of M as X₁, X₂, ..., X_r, where each X_i (i = 1, 2, ..., r) is a generalized eigenvector of rank i for A and all correspond to the same eigenvalue λ. Then AX₁ = λX₁, and from (9.1), (A − λI)X_j = X_{j−1}, so that

    AX_j = X_{j−1} + λX_j     (j = 2, 3, ..., r)

10.15 Calculate sin J for the Jordan block

    J = [2 1 0 0]
        [0 2 1 0]
        [0 0 2 1]
        [0 0 0 2]

Here f(λ) = sin λ, f'(λ) = cos λ, f''(λ) = −sin λ, and f'''(λ) = −cos λ, so f(2) = sin 2, f'(2) = cos 2, f''(2) = −sin 2, and f'''(2) = −cos 2. It follows from (10.4) that

    sin J = [sin 2   cos 2   −(sin 2)/2!   −(cos 2)/3!]
            [  0     sin 2     cos 2       −(sin 2)/2!]
            [  0       0       sin 2         cos 2    ]
            [  0       0         0           sin 2    ]

          = [0.909297  −0.416147  −0.454649   0.0693578]
            [   0       0.909297  −0.416147  −0.454649 ]
            [   0          0       0.909297  −0.416147 ]
            [   0          0          0       0.909297 ]

10.17 Calculate sin A for the matrix given in Problem 10.6.

Using the results of Problems 10.6, 10.8, and 10.14 along with (10.2), we have

    sin A = M(sin J)M⁻¹
          = [−1 −1 2] [0.141120    0        0     ]       [−3  8 −3]
            [ 1  0 3] [   0     0.141120    0     ] (1/11)[−6 −6  5]
            [ 0  1 6] [   0        0     0.990607 ]       [ 1  1  1]

          = [0.295572  0.154452  0.154452]
            [0.231678  0.372798  0.231678]
            [0.463357  0.463357  0.604477]

10.18 Calculate e^{At} for

    A = [ 0 1]
        [−1 0]

The eigenvalues of A are i and −i, each of multiplicity one, so A is similar to J = diag(i, −i) and e^{Jt} = diag(e^{it}, e^{−it}). A modal matrix of eigenvectors and its inverse are

    M = [1   1]          M⁻¹ = [1/2  −i/2]
        [i  −i]                [1/2   i/2]

Then

    e^{At} = Me^{Jt}M⁻¹ = [1   1] [e^{it}     0    ] [1/2  −i/2] = [ cos t  sin t]
                          [i  −i] [  0     e^{−it} ] [1/2   i/2]   [−sin t  cos t]

(Compare with Problem 8.7.)

Supplementary Problems

10.20 Determine which of the following pairs of matrices are similar matrices:
gis 2.3 Se
— Is 7] ae [5
Se ye Denies oe | F 4 hy Oi 24
ee eee ! 172 3 a ee
Bcc Ska ae {4 5 6) ‘and ae ;


10.29 The matrix in Problem 10.23. 10.30 The matrix in Problem 10.24.

10.31 The matrix in Problem 10.25.

3 0 5
0 0 0
10.32 |_, 0 10.33 |0
0 3 0
0
(Hint: See Problem 9.30.)
(Hint: See Problem 9.31.)

10.34 Find cosA for the following matrices:


200 $3
(a) a-[i | (b) s-[3 2
0 0 245 emokt?:* *42¢nesnstgas-

Chapter 11
Inner Products

COMPLEX CONJUGATES
The complex conjugate of a scalar z = a + ib (where a and b are real) is z = a — ib; the complex
conjugate of a matrix A is the matrix A whose elements are the complex conjugates of the elements of
A. The following properties are valid for scalars x and y and matrices A and B:

(C1): The conjugate of x̄ is x; and the conjugate of Ā is A.
(C2): x is real if and only if x̄ = x; and A is a real matrix if and only if Ā = A.
(C3): x + x̄ is a real scalar; and A + Ā is a real matrix.
(C4): The conjugate of xy is (x̄)(ȳ); and the conjugate of AB is (Ā)(B̄) if the latter product is defined.
(C5): The conjugate of x + y is x̄ + ȳ; and the conjugate of A + B is Ā + B̄ if the latter sum is defined.
(C6): xx̄ = |x|² is always real and positive, except that xx̄ = 0 when x = 0.

THE INNER PRODUCT


Let W denote a nonsingular n × n matrix. The inner product of n-dimensional vectors X and Y with respect to W, denoted ⟨X, Y⟩_W, is the dot product (see Chapter 1)

    ⟨X, Y⟩_W = (WX)·(WY)     (11.1)

If W = I, the identity matrix, then the inner product is called the Euclidean inner product and is written simply ⟨X, Y⟩.
INNER PRODUCTS |CHAP. 11

concept of perpendicularity under the Euclidean inner product when the vectors are real and restricted to two or three dimensions.
A set of vectors is orthogonal if each vector in the set is orthogonal to every other vector in that set. Such a set is linearly independent when the vectors are all nonzero. (See Problem 11.27.)

GRAM-SCHMIDT ORTHOGONALIZATION
Every finite set of linearly independent vectors {X,,X,,...,X,} has associated with it an
orthogonal set of nonzero vectors {Q,,Q,,...,Q,} with respect to a specified inner product, such
that each vector Q, (j=1,2,...,M) is a linear combination of X, through X,_,. The following
algorithm for producing the vectors Q) is called the Gram-Schmidt orthogonalization process.
STEP 11.1: Set

    Q₁ = (1/√⟨X₁, X₁⟩_W) X₁     and     j = 1

STEP 11.2: If j = n, stop; the algorithm is complete. Otherwise, increase j by 1 and continue.
STEP 11.3: Calculate

    Y_j = X_j − Σ_{i=1}^{j−1} ⟨X_j, Q_i⟩_W Q_i

STEP 11.4: Set

    Q_j = (1/√⟨Y_j, Y_j⟩_W) Y_j

STEP 11.5: Return to Step 11.2.
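Steps 11.1 through 11.5 are easy to code for real vectors; a sketch using NumPy (the helper name is my own):

```python
import numpy as np

def gram_schmidt(X, W=None):
    """Orthonormalize the columns of X w.r.t. <U, V>_W = (WU).(WV).

    Real vectors only; the columns of X are assumed linearly independent.
    """
    n = X.shape[0]
    W = np.eye(n) if W is None else W
    ip = lambda u, v: (W @ u) @ (W @ v)      # inner product (11.1)

    Q = []
    for j in range(X.shape[1]):
        y = X[:, j].astype(float)
        for q in Q:                          # Step 11.3
            y = y - ip(X[:, j], q) * q
        Q.append(y / np.sqrt(ip(y, y)))      # Steps 11.1 / 11.4
    return np.column_stack(Q)

# Euclidean case (W = I):
Q = gram_schmidt(np.array([[1.0, 1.0],
                           [0.0, 2.0]]))
assert np.allclose(Q.T @ Q, np.eye(2))       # orthonormal columns
```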


11.3 Calculate (X,X),,if

3) Cae
and are 0
had aa ae
weet sd ta, 2, |

(X,X)y
= (WX)- (Wx) =| ||Pi btal='ae nk ali aes gay

Calculate (X,Y)y if

W is a singular matrix, so the inner product ⟨X, Y⟩_W is not defined, even though the matrix operations on the right side of Eq. (11.1) can be performed. When the matrix W is singular, it is always possible to find a nonzero vector Z such that WZ = 0 and therefore ⟨Z, Z⟩_W = 0, thereby violating Property 11.2.

Co Cee |
and Q,== eure |apelay ~ | (8 + i2)/V153

The orthogonal set is {Q,,Q,}.

Use the Gram-Schmidt orthogonalization process with the Euclidean inner product to
construct an orthogonal set of vectors associated with {X,, X,,X,, X,} when

0 1 1 1
11 ee 0
X,< 1
X,=|,1
1 0
1 Ll 1 0
These vectors can be shown to be linearly independent (see Chapter 6). Using Steps 11.1 through 11.5, we calculate
Pe$5.2~ (X,0,)0, st EE — cies Bi

bs -2/3, 1/3, ela tty oe ae id},


Ty takewR i ‘atesi
i Lage
aes said om as, 3

aes.9 9
fet. 23
iy" 2
WF iawiog
ti5 0
Using Steps 11.1 through 11.5, we calculate

    WX₁ = [1, 6, 4, −3]^T

so ⟨X₁, X₁⟩_W = (WX₁)·(WX₁) = 62

and Q₁ = (1/√62)X₁ = [0, 1/√62, 1/√62, 1/√62]^T

Then WQ₁ = [1/√62, 6/√62, 4/√62, −3/√62]^T

and WX₂ = [3, 5, 5, 0]^T

so ⟨X₂, Q₁⟩_W = (WX₂)·(WQ₁) = 53/√62

and Y₂ = X₂ − ⟨X₂, Q₁⟩_W Q₁ = X₂ − (53/62)X₁ = [1, −53/62, 9/62, 9/62]^T

;¥, = (133/62, -8/62, 98/62, 159/62)"

| its TUR Eneibor

Y= apgaal. -58. 9 91
We ara 8 981
es


11.10 Prove Property 11.4.

    ⟨cX, Y⟩_W = (W(cX))·(WY) = {c(WX)}·(WY) = c{(WX)·(WY)} = c⟨X, Y⟩_W
    ⟨X, cY⟩_W = (WX)·(W(cY)) = (WX)·{c(WY)} = c̄{(WX)·(WY)} = c̄⟨X, Y⟩_W

11.11 Prove that ⟨0, Y⟩_W = 0 for any Y of appropriate dimension.

    ⟨0, Y⟩_W = ⟨0·0, Y⟩_W = 0⟨0, Y⟩_W = 0

because the inner product is a scalar, and zero times any scalar is zero.

11.12 Prove the Schwarz inequality.

When X = 0, both sides of the inequality are zero (see Problem 11.11), and the inequality is satisfied. If X ≠ 0, then ⟨X, X⟩_W ≠ 0 (Property 11.2), and for any vectors X and Y and any scalar c we have

    0 ≤ ⟨cX + Y, cX + Y⟩_W     (Property 11.1)
      = cc̄⟨X, X⟩_W + c⟨X, Y⟩_W + c̄⟨Y, X⟩_W + ⟨Y, Y⟩_W     (Properties 11.4 and 11.5)

Setting c = −⟨Y, X⟩_W/⟨X, X⟩_W and using Property 11.3 then yields

    0 ≤ ⟨Y, Y⟩_W − |⟨X, Y⟩_W|²/⟨X, X⟩_W

from which the Schwarz inequality follows.

11.15 X,

11.17 X,

11.19 X,

a 4
Chapter 12
Norms

VECTOR NORMS
A norm for an arbitrary finite-dimensional vector X, denoted ||X||, is a real-valued function
satisfying the following four conditions for all vectors X and Y of the same dimension:
(N1): ‖X‖ ≥ 0.
(N2): ‖X‖ = 0 if and only if X = 0.
(N3): ‖cX‖ = |c| ‖X‖ for any scalar c.
(N4) (Triangle inequality): ‖X + Y‖ ≤ ‖X‖ + ‖Y‖.
A vector norm is a measure of the length or magnitude of a vector. Just as there are various bases for
measuring scalar length—such as feet and meters—there are alternative norms for measuring the
magnitude of a vector. Some of the more common vector norms for X = [x₁, x₂, ..., x_n]^T are:

•  The inner-product-generated norm: ‖X‖_W = √⟨X, X⟩_W
•  The Euclidean (or l₂) norm: ‖X‖₂ = (|x₁|² + |x₂|² + ··· + |x_n|²)^{1/2}
•  The l₁ norm: ‖X‖₁ = |x₁| + |x₂| + ··· + |x_n|
•  The l∞ norm: ‖X‖∞ = max(|x₁|, |x₂|, ..., |x_n|)
•  The l_p norm: ‖X‖_p = (|x₁|^p + |x₂|^p + ··· + |x_n|^p)^{1/p}

The Euclidean norm is the l_p norm with p = 2, and it is a special case of the inner-product-generated norm, with W = I. (See Problems 12.1 through 12.3.)
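NumPy computes the l₁, l₂, and l∞ norms directly; shown below for the vector [1, 2, 3]^T used in Problems 12.1 and 12.2, with the normalizations of Problem 12.5 as a check:

```python
import numpy as np

X = np.array([1.0, 2.0, 3.0])

l2 = np.linalg.norm(X)             # sqrt(14), approximately 3.7417
l1 = np.linalg.norm(X, 1)          # 6.0
linf = np.linalg.norm(X, np.inf)   # 3.0
print(l2, l1, linf)

# Normalizing by each norm gives the unit vectors of Problem 12.5:
assert np.allclose(X / l2, X / np.sqrt(14))
assert np.allclose(X / l1, [1/6, 2/6, 3/6])
assert np.allclose(X / linf, [1/3, 2/3, 1.0])
```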

Because of the added consistency condition (M5), not all vector norms can be extended to become matrix norms. (See Problem 12.6.) Two that can be extended are the l₁ norm (see Problem 12.7) and the Euclidean norm. For the n × n matrix A = [a_ij], the Euclidean norm becomes

•  The Frobenius (or Euclidean) norm: ‖A‖_F = (Σ_{i=1}^n Σ_{j=1}^n |a_ij|²)^{1/2}

INDUCED NORMS
Each vector norm induces (or generates) the matrix norm

||A|| = max over ||X|| = 1 of (||AX||)          (12.1)

on an arbitrary m × n matrix A, where the maximum is taken over all n-dimensional vectors X having
vector norm equal to unity. Some induced norms for A = [aᵢⱼ] are:

• The L₁ norm (induced by the l₁ norm):

||A||₁ = max over j of (Σᵢ |aᵢⱼ|)

which is the largest column sum of absolute values.

• The L∞ norm (induced by the l∞ norm):

||A||∞ = max over i of (Σⱼ |aᵢⱼ|)

which is the largest row sum of absolute values.
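These matrix norms reduce to a few lines of numpy; the matrix below is a small illustrative example, not one from the text:

```python
import numpy as np

A = np.array([[1.0, -2.0],
              [3.0,  4.0]])

fro  = np.sqrt(np.sum(np.abs(A)**2))       # Frobenius norm
L1   = np.max(np.sum(np.abs(A), axis=0))   # largest column sum of absolute values
Linf = np.max(np.sum(np.abs(A), axis=1))   # largest row sum of absolute values
```

The last two agree with numpy's built-in `np.linalg.norm(A, 1)` and `np.linalg.norm(A, np.inf)`.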
• The spectral norm (induced by the l₂ norm):

||A||₂ = √(largest eigenvalue of AᴴA)          (12.2)

SPECTRAL RADIUS
The spectral radius σ(A) of a matrix A is the largest of the absolute values of its eigenvalues. For
any matrix norm,

σ(A) ≤ ||A||          (12.3)

Inequality (12.3) provides bounds on the eigenvalues of a matrix. (See Problems 12.17 and 12.18.)
An equivalent expression for the spectral radius is

σ(A) = lim as m → ∞ of ||Aᵐ||^(1/m)          (12.4)
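Equation (12.4) can be illustrated numerically; the triangular matrix below is an illustrative choice whose eigenvalues (2 and 1) are visible on its diagonal, so σ(A) = 2:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 1.0]])            # triangular: eigenvalues 2 and 1

sigma = max(abs(np.linalg.eigvals(A)))                    # spectral radius
m = 60
est = np.linalg.norm(np.linalg.matrix_power(A, m), 'fro') ** (1.0 / m)
```

For m = 60 the estimate is already within about 1 percent of σ(A).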

Solved Problems

12.1 Determine ||X||_W and ||Y||_W for given vectors X and Y and a given matrix W, where

X = [1, 2, 3]ᵀ

For the given vectors, we have

||X||_W = √(X, X)_W    and    ||Y||_W = √(Y, Y)_W

each of which is evaluated by forming WX (or WY) and taking the Euclidean norm of the result.

12.5 Normalize the vector X given in Problem 12.1 with respect to (a) the l₂ norm, (b) the l₁
norm, and (c) the l∞ norm.
Using the results of Problem 12.2, we obtain the normalized vectors (a) [1/√14, 2/√14, 3/√14]ᵀ;
(b) [1/6, 2/6, 3/6]ᵀ; and (c) [1/3, 2/3, 1]ᵀ. Each of these vectors is a unit vector with respect to its
associated norm.

12.6 Show that the l∞ norm for vectors does not extend to a matrix norm.
The l∞ norm is simply the largest component of a vector in absolute value, and its extension to
matrices would be the largest element of a matrix in absolute value. That is,

||A|| = max over i, j of (|aᵢⱼ|)

Consider the matrices

A = B = [1 1; 1 1]    so that    AB = [2 2; 2 2]

Then ||A|| = ||B|| = 1, whereas ||AB|| = 2. Since condition M5 is violated, the proposed norm is not a
matrix norm.


12.8 Calculate the (a) Frobenius norm, (b) L₁ norm, (c) L∞ norm, and (d) spectral norm for

A = [7 -2 0; -4 -6 0; 0 0 -9]

(a) ||A||_F = {(7)² + (-2)² + (0)² + (-4)² + (-6)² + (0)² + (0)² + (0)² + (-9)²}^(1/2) = 13.638.
(b) ||A||₁ = max(|7| + |-4| + |0|, |-2| + |-6| + |0|, |0| + |0| + |-9|) = max(11, 8, 9) = 11, the
largest column sum of absolute values.
(c) ||A||∞ = max(|7| + |-2| + |0|, |-4| + |-6| + |0|, |0| + |0| + |-9|) = max(9, 10, 9) = 10, the
largest row sum of absolute values.
(d) Here we have

AᵀA = [65 10 0; 10 40 0; 0 0 81]

which has eigenvalues 68.5078, 36.4922, and 81. Thus, ||A||₂ = max(√68.5078, √36.4922, √81) = 9.
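These computations can be checked with numpy, using the chapter's conventions (L₁ is the largest column sum and L∞ the largest row sum):

```python
import numpy as np

A = np.array([[ 7.0, -2.0,  0.0],
              [-4.0, -6.0,  0.0],
              [ 0.0,  0.0, -9.0]])

fro      = np.linalg.norm(A, 'fro')                        # sqrt(186) = 13.638...
L1       = np.max(np.sum(np.abs(A), axis=0))               # largest column sum
Linf     = np.max(np.sum(np.abs(A), axis=1))               # largest row sum
spectral = np.sqrt(np.max(np.linalg.eigvalsh(A.T @ A)))    # sqrt of largest eigenvalue of A^T A
```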


12.11 Show that the l₁ vector norm induces the L₁ matrix norm under Eq. (12.1).
Set ||A||ᵢ = max over ||X||₁ = 1 of (||AX||₁). Then ||A||ᵢ is a matrix norm as a result of Problem 12.10.
Denote the columns of A as the vectors A₁, A₂, ..., Aₙ, and set

H = max over k of (||Aₖ||₁)

We wish to show that ||A||ᵢ = H.
For any unit vector X = [x₁, x₂, ..., xₙ]ᵀ,

||AX||₁ = ||x₁A₁ + x₂A₂ + ··· + xₙAₙ||₁
        ≤ ||x₁A₁||₁ + ||x₂A₂||₁ + ··· + ||xₙAₙ||₁ = |x₁| ||A₁||₁ + |x₂| ||A₂||₁ + ··· + |xₙ| ||Aₙ||₁
        ≤ |x₁|H + |x₂|H + ··· + |xₙ|H
        = H(|x₁| + |x₂| + ··· + |xₙ|) = H||X||₁ = H

Thus,    ||A||ᵢ = max over ||X||₁ = 1 of (||AX||₁) ≤ H          (1)

But for the unit vectors Yₖ (k = 1, 2, ..., n) having a 1 as the kth component and 0s as all other
components,

||A||ᵢ = max over ||X||₁ = 1 of (||AX||₁) ≥ ||AYₖ||₁ = ||Aₖ||₁

so that    ||A||ᵢ ≥ max over k of (||Aₖ||₁) = H          (2)

Together, (1) and (2) imply that ||A||ᵢ = H, the L₁ norm of A.

12.12 Show that the l∞ vector norm induces the L∞ matrix norm under (12.1).
Set ||A||ᵢ = max over ||X||∞ = 1 of (||AX||∞). Then ||A||ᵢ is a matrix norm as a result of Problem
12.10. Set H = max over i of (Σⱼ |aᵢⱼ|), the largest row sum of absolute values. For any unit vector X
(so that each |xⱼ| ≤ 1), the ith component of AX satisfies

|Σⱼ aᵢⱼxⱼ| ≤ Σⱼ |aᵢⱼ| ≤ H

so that    ||A||ᵢ = max over ||X||∞ = 1 of (||AX||∞) ≤ H          (1)

Now let p be a row for which the sum of absolute values equals H, and define X by setting
xⱼ = āₚⱼ/|aₚⱼ| when aₚⱼ ≠ 0 and xⱼ = 1 otherwise. Then ||X||∞ = 1, and the pth component of AX is
Σⱼ |aₚⱼ| = H, whence

||A||ᵢ ≥ ||AX||∞ ≥ H          (2)

Together, (1) and (2) imply the desired equality.

12.13 Show that an induced matrix norm with its associated vector norm satisfy the compatibility
condition ||AY|| ≤ ||A|| ||Y||.
The inequality is immediate when Y = 0. For any nonzero vector Y, Y/||Y|| is a unit vector, and

||A|| = max over ||X|| = 1 of (||AX||) ≥ ||A(Y/||Y||)|| = ||AY||/||Y||

so that ||AY|| ≤ ||A|| ||Y||.
12.14 Show that ||A|| = max over ||X|| = 1 of (||AX||) = max over X ≠ 0 of (||AX||/||X||).
Set H = max over X ≠ 0 of (||AX||/||X||). We must show that ||A|| = H. First, we have

||A|| = max over ||X|| = 1 of (||AX||) = max over ||X|| = 1 of (||AX||/||X||) ≤ max over X ≠ 0 of (||AX||/||X||) = H

where the inequality follows from taking the maximum over a larger set of vectors. Conversely, for any
X ≠ 0, the vector X/||X|| is a unit vector, so

||AX||/||X|| = ||A(X/||X||)|| ≤ max over ||Y|| = 1 of (||AY||) = ||A||

and therefore H ≤ ||A||. The two inequalities together give ||A|| = H.

12.17 Determine bounds on the eigenvalues of


Ty 478 7
aly eg G COOKS
| See 10 9
too. 10
The row sums and column sums are both 32, 23, 33, and 31, so ||A||₁ = ||A||∞ = 33. The Frobenius
norm is ||A||_F = 30.5450. It follows from (12.3) that σ(A) ≤ 33 and σ(A) ≤ 30.5450, from which we
conclude that every eigenvalue must be no greater than 30.5450 in absolute value. (See also Problem
20.8.) Of course, other norms not considered here might place a still lower bound on the eigenvalues of
A.

12.18 Prove that σ(A) ≤ ||A|| for any matrix norm.
Let λ be an eigenvalue of A for which |λ| = σ(A), and let X denote a corresponding eigenvector.
Construct a matrix B having each of its columns equal to X. Then AB = λB, and for any matrix norm

|λ| ||B|| = ||λB|| = ||AB|| ≤ ||A|| ||B||

Since B is not a zero matrix, it follows that |λ| ≤ ||A||. But |λ| = σ(A), so σ(A) ≤ ||A||.
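A numerical spot-check of this result on an arbitrary random matrix, comparing the spectral radius against the three matrix norms computed in this chapter:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))

sigma = max(abs(np.linalg.eigvals(A)))      # spectral radius
norms = [np.linalg.norm(A, 'fro'),          # Frobenius
         np.max(np.sum(np.abs(A), axis=0)), # L1 (largest column sum)
         np.max(np.sum(np.abs(A), axis=1))] # L-infinity (largest row sum)
```

Every norm in the list bounds the spectral radius from above, as Problem 12.18 guarantees.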

- Supplementary Problems

12.28 Determine the Frobenius norms for the following matrices:

2 —} A [3 t [ e fe- 2-14
m{, 3} w[oshieeh sl ly Sh Olemee en
Determine the L₁ norms of the matrices in Problem 12.28.

Determine the L∞ norms of the matrices in Problem 12.28.

Determine the spectral norms of the matrices in Problem 12.28.

Prove that for any induced matrix norm, ||I|| = 1.

Show that the Frobenius matrix norm satisfies condition M5.

Prove the Pythagorean theorem for an inner-product-generated vector norm; that is, prove that if
(X, Y)_W = 0, then ||X + Y||²_W = ||X||²_W + ||Y||²_W.
Chapter 13
Hermitian Matrices

NORMAL MATRICES
The Hermitian transpose of a matrix A, denoted Aᴴ, is the complex conjugate transpose of A;
that is, Aᴴ = Āᵀ. A matrix A is normal if

AAᴴ = AᴴA          (13.1)


(See Problem 13.1.) Normal matrices have the following properties:
Property 13.1: Every normal matrix is similar to a diagonal matrix.
Property 13.2: Every normal matrix possesses a canonical basis of eigenvectors which can be
arranged to form an orthonormal set.
(See Problems 13.7 through 13.9.)
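Condition (13.1) is easy to test numerically. In the sketch below (both matrices are illustrative examples, not from the text), the Hermitian matrix H is normal while the nondiagonal triangular matrix T is not:

```python
import numpy as np

def is_normal(A, tol=1e-12):
    """Check the defining condition (13.1): A A^H == A^H A."""
    AH = A.conj().T
    return np.allclose(A @ AH, AH @ A, atol=tol)

H = np.array([[2.0, 1.0 - 1.0j],
              [1.0 + 1.0j, 3.0]])    # Hermitian, hence normal
T = np.array([[1.0, 2.0],
              [0.0, 1.0]])           # upper triangular but not diagonal: not normal
```

The second example anticipates Problem 13.14: a triangular matrix can be normal only if it is diagonal.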

HERMITIAN MATRICES
A matrix A is Hermitian if A = Aᴴ. The sum of Hermitian matrices is Hermitian, as is the product
of a Hermitian matrix with a real scalar. Every Hermitian matrix is also normal, because
AAᴴ = AA = AᴴA. Therefore, Hermitian matrices satisfy Properties 13.1 and 13.2. In addition:
Property 13.3: The eigenvalues of a Hermitian matrix are real.
Property 13.4: If a Hermitian matrix is reduced to upper triangular form using only elementary
row operations of the third kind, then the number of positive (negative, zero)
diagonal elements of the result equals the number of positive (negative, zero)
eigenvalues of the matrix.
Property 13.5: The inner product (AX, X) is real for every vector X whenever A is Hermitian.
(See Problems 13.3, 13.4, 13.10, and 13.11.)

THE ADJOINT
The adjoint of a matrix A with respect to the inner product defined by a nonsingular matrix W,
denoted A*, is the matrix satisfying

(X, AY)_W = (A*X, Y)_W          (13.3)

for all m-dimensional vectors Y and n-dimensional vectors X, where the inner product is as defined
in Chapter 11. The adjoint always exists, and it is

A* = (WᴴW)⁻¹Aᴴ(WᴴW)          (13.4)

For the special case W = I (the Euclidean inner product), (13.4) reduces to

A* = Aᴴ          (13.5)

(See Problems 13.16 through 13.18.) Adjoints satisfy the following identities:
(A1): (A*)* = A.
(A2): (A + B)* = A* + B*.
(A3): (AB)* = B*A*.
(A4): (cA)* = c̄A* for any scalar c.
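Equation (13.4) and the defining relation (13.3) can be checked numerically. In the sketch below, W is a random matrix assumed nonsingular, and the inner product follows the Chapter 11 convention (linear in the first argument, conjugate-linear in the second):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
W = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))  # assumed nonsingular

M = W.conj().T @ W
A_star = np.linalg.inv(M) @ A.conj().T @ M      # Eq. (13.4)

def inner(u, v):
    # (U, V)_W = (WV)^H (WU): linear in the first argument
    return np.vdot(W @ v, W @ u)

X = rng.standard_normal(n) + 1j * rng.standard_normal(n)
Y = rng.standard_normal(n) + 1j * rng.standard_normal(n)
```

With W = I the matrix M is the identity and A_star collapses to Aᴴ, which is (13.5).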

SELF-ADJOINT MATRICES
A matrix A is self-adjoint if it equals its own adjoint. Such a matrix is necessarily square, and it
satisfies

(AX, Y)_W = (X, AY)_W          (13.6)

for all vectors X and Y of appropriate dimension. With respect to the Euclidean inner product
(W = I), a matrix is self-adjoint if and only if it is Hermitian. (See Problem 13.16.)

13.3 Prove that the eigenvalues of AᴴA are nonnegative.
If λ is an eigenvalue of AᴴA, then there must exist a nonzero eigenvector X associated with λ
satisfying the equality AᴴAX = λX. For the Euclidean inner product, it follows from Property 11.1 and
Eqs. (13.3) and (13.5) that

0 ≤ (AX, AX) = (A*AX, X) = (AᴴAX, X) = (λX, X) = λ(X, X)          (1)

Since X is an eigenvector, it is nonzero, and we may infer from Property 11.2 that (X, X) is positive.
Dividing (1) by (X, X) yields λ ≥ 0.

13.4 Prove that the eigenvalues of a Hermitian matrix are real.
Let λ denote an eigenvalue of a Hermitian matrix A, and let X denote a corresponding eigenvector.
Then, under the Euclidean inner product,

λ(X, X) = (λX, X) = (AX, X) = (X, A*X) = (X, AᴴX) = (X, AX) = (X, λX) = λ̄(X, X)          (1)

Since X is an eigenvector, it is nonzero and so too is (X, X). Dividing (1) by (X, X) gives us λ = λ̄,
which implies that λ is real.

13.5 Show that if X is an eigenvector of a normal matrix A corresponding to eigenvalue λ, then X is
also an eigenvector of Aᴴ corresponding to λ̄.
Using the Euclidean inner product and (13.1), we obtain

(AX, AX) = (AᴴAX, X) = (AAᴴX, X) = (AᴴX, AᴴX)

It then follows that

0 = (0, 0) = (AX - λX, AX - λX)
  = (AX, AX) - λ̄(AX, X) - λ(X, AX) + λλ̄(X, X)
  = (AᴴX, AᴴX) - λ(AᴴX, X) - λ̄(X, AᴴX) + λλ̄(X, X)
  = (AᴴX - λ̄X, AᴴX - λ̄X)

so AᴴX - λ̄X = 0 (Property 11.2); that is, AᴴX = λ̄X.

13.8 Determine a canonical basis of orthonormal eigenvectors with respect to the Euclidean inner
product for

A = [2 2 -2; 2 2 -2; -2 -2 6]
The matrix is real and symmetric and, therefore, normal. The eigenvalues for A are 0, 2, and 8, and
a corresponding set of eigenvectors is

X₁ = [-1, 1, 0]ᵀ    X₂ = [1, 1, 1]ᵀ    X₃ = [-1, -1, 2]ᵀ
Since each eigenvector corresponds to a different eigenvalue, the vectors are guaranteed to be
orthogonal with respect to the Euclidean inner product. Dividing each vector by its Euclidean norm, we
obtain the orthonormal set of eigenvectors

Q₁ = [-1/√2, 1/√2, 0]ᵀ    Q₂ = [1/√3, 1/√3, 1/√3]ᵀ    Q₃ = [-1/√6, -1/√6, 2/√6]ᵀ

These vectors form a canonical basis of orthonormal eigenvectors with respect to the Euclidean inner
product.

13.10 Verify Property 13.4 for the matrix in Problem 13.8.


The eigenvalues for the matrix are 0, 2, and 8, so it has one zero eigenvalue and two positive
eigenvalues. Reducing the matrix to upper triangular form using only elementary row operations of the
third kind, we obtain

[2 2 -2; 2 2 -2; -2 -2 6]

→ [2 2 -2; 0 0 0; -2 -2 6]    Adding -1 times the first row to the second row

→ [2 2 -2; 0 0 0; 0 0 4]      Adding the first row to the third row
This last matrix is in upper triangular form, and the diagonal elements consist of one zero and two
positive numbers.

13.11 Verify Property 13.4 for the matrix in Problem 13.9.
The eigenvalues of that matrix are 5, 5, -1, and -1, so it has two positive and two negative
eigenvalues. When the matrix is reduced to upper triangular form via elementary row operations of
the third kind, the diagonal elements of the result likewise consist of two positive and two negative
numbers, as Property 13.4 requires.

13.14 Show that if a matrix is upper triangular and normal, then it must be a diagonal matrix.
Let A = [aᵢⱼ] be an n × n upper triangular matrix that is also normal. Then aᵢⱼ = 0 for i > j. We show
sequentially, for i = 1, 2, ..., n - 1, that aᵢⱼ = 0 when i < j. Since

AᴴA = AAᴴ          (1)

it follows from equating the (1,1) elements of the two products in (1) that

|a₁₁|² = Σⱼ₌₁ⁿ |a₁ⱼ|²    so that    0 = Σⱼ₌₂ⁿ |a₁ⱼ|²

Since each summand is nonnegative, each must be zero. Thus,

a₁ⱼ = 0    (j = 2, 3, ..., n)          (2)

Next, equating the (2,2) elements of the two products in (1) and using (2), we obtain

|a₁₂|² + |a₂₂|² = Σⱼ₌₂ⁿ |a₂ⱼ|²    so that    0 = Σⱼ₌₃ⁿ |a₂ⱼ|²

from which we infer that

a₂ⱼ = 0    (j = 3, 4, ..., n)

Continuing in this manner with each successive diagonal element in turn, we find that all
off-diagonal elements of A must be zero; hence A is diagonal.

C and E are self-adjoint because both are Hermitian.

13.17 Determine the adjoint of A under an inner product with respect to W, where A and W are the
given complex matrices.
Using (13.4), we calculate A* = (WᴴW)⁻¹Aᴴ(WᴴW) directly.

13.18 Derive (13.4).
For an arbitrary inner product defined with respect to a nonsingular matrix W, we have

(X, AY)_W = (WX)·(WAY) = (WAY)ᴴ(WX) = YᴴAᴴWᴴWX

and

(A*X, Y)_W = (WA*X)·(WY) = (WY)ᴴ(WA*X) = YᴴWᴴWA*X

For these to be equal for all X and Y, as (13.3) requires, we must have AᴴWᴴW = WᴴWA*, so that
A* = (WᴴW)⁻¹Aᴴ(WᴴW), which is (13.4).

Supplementary Problems

13.20 Determine which of the following matrices are Hermitian:


S 4 “| ae = E- 3|
- s Af Oo Goa
a -1 1-2 32* —1 -1 1
o ll : 4
nN
E=|1+i2 3 2-ié5| Fe=/-1 -1 -1
9
: —-i3 245 0 1 -1 -1
3 i
G=|2 -2| H=|_) i
0 0 0
1 Oe

13.21 Determine which of the matrices in Problem 13.20 are normal.



13.22 Find a canonical basis of orthonormal vectors for matrix F in Problem 13.20.

Prove that the diagonal elements of a Hermitian matrix must be real.

A matrix A is skew-Hermitian if A = -Aᴴ. Show that such a matrix is normal.

Show that if A is skew-Hermitian, then iA is Hermitian.

Show that if A is an n × n skew-Hermitian matrix, then (AX, X) is pure imaginary for every
n-dimensional vector X.

Show that if A is skew-Hermitian, then every eigenvalue of A is pure imaginary.

A matrix A is skew-symmetric if A = -Aᵀ. Show that a real skew-symmetric matrix is skew-Hermitian.

Show that any real matrix can be written as the sum of a symmetric matrix and a skew-symmetric matrix,
and show that any complex-valued matrix can be written as the sum of a Hermitian matrix and a
skew-Hermitian matrix.
Prove that a well-defined function of a Hermitian matrix is Hermitian.
Chapter 14
Positive Definite Matrices

DEFINITE MATRICES
An n X n Hermitian matrix A is positive definite if
(AX, X) > 0          (14.1)

for all nonzero n-dimensional vectors X; and A is positive semidefinite if

(AX, X) ≥ 0          (14.2)
If the inequalities in (14.1) and (14.2) are reversed, then A is negative definite and negative
semidefinite, respectively.
The sum of two definite matrices of the same type is again a definite matrix of that type, as is the
Hermitian transpose of such a matrix. Positive (or negative) definite matrices are invertible, and their
inverses are also positive (or negative) definite.

TESTS FOR POSITIVE DEFINITENESS
Each of the following three tests stipulates necessary and sufficient conditions for an n × n
Hermitian matrix A to be positive definite. That is, a Hermitian matrix A is positive definite if and only
if it passes any one of these tests.

Test 14.1: A is positive definite if and only if it can be reduced to upper triangular form using only
elementary row operations E3 and the diagonal elements of the resulting matrix (the
pivots) are all positive.

Test 14.2: A principal minor of A is the determinant of any submatrix obtained from A by deleting
its last k rows and columns (k = 0, 1, ..., n - 1). A is positive definite if and only if all
its principal minors are positive.

Test 14.3: A is positive definite if and only if all its eigenvalues are positive.

In addition, the following conditions are necessary (but not sufficient) for a Hermitian matrix
A = [aᵢⱼ] to be positive definite:

Test 14.4: All diagonal elements must be positive.
Test 14.5: The largest element in absolute value must lie on the main diagonal.
Test 14.6: aᵢᵢaⱼⱼ > |aᵢⱼ|² for all i ≠ j.
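Tests 14.2 and 14.3 translate directly into code. The sketch below applies them to the matrix of Problem 14.1, whose eigenvalues are 4, 6, and 12; the entries used here are a reconstruction consistent with the stated principal minors 6, 32, and 288:

```python
import numpy as np

def eigenvalue_test(A, tol=1e-12):
    """Test 14.3: all eigenvalues of the Hermitian matrix A are positive."""
    return bool(np.all(np.linalg.eigvalsh(A) > tol))

def principal_minors(A):
    """Test 14.2: determinants of the upper-left k x k submatrices, k = 1..n."""
    n = A.shape[0]
    return [np.linalg.det(A[:k, :k]) for k in range(1, n + 1)]

A = np.array([[ 6.0,  2.0, -2.0],
              [ 2.0,  6.0, -2.0],
              [-2.0, -2.0, 10.0]])
```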

Because the eigenvalues of a positive semidefinite matrix are nonnegative, its positive semidefinite
square root is a well-defined function. In such cases it may be calculated by the methods given in
Chapters 8 and 10. (See Problems 14.13 and 14.14.)

CHOLESKY DECOMPOSITION
Any positive definite matrix A may be factored into
A = LLᴴ          (14.3)
where L is a lower triangular matrix having positive values on its diagonal. Equation (14.3) defines
the Cholesky decomposition for A, which is unique.
The following algorithm generates the Cholesky decomposition for an n × n matrix A = [aᵢⱼ] by
sequentially identifying the columns of L on and below the main diagonal. It is a simplification of the
LU decomposition given in Chapter 3.
STEP 14.1: Initialization: Set all elements of L above the main diagonal equal to zero, and let
l₁₁ = √a₁₁. The remainder of the first column of L is the first column of A divided by
l₁₁. Set a counter j = 2.

STEP 14.2: If j = n + 1, stop; the algorithm is complete. Otherwise, for i = j, j + 1, ..., n, denote
by Lᵢ the vector consisting of the first j - 1 elements of the ith row of L.

STEP 14.3: Compute

l_jj = √(a_jj - (Lⱼ, Lⱼ))

where the inner product is Euclidean.

STEP 14.4: If j = n, skip to Step 14.5. Otherwise, compute the jth column of L below the main
diagonal: for i = j + 1, j + 2, ..., n, compute

l_ij = (a_ij - (Lᵢ, Lⱼ))/l_jj

STEP 14.5: Increase j by 1, and return to Step 14.2.
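The steps above can be transcribed almost literally. The following sketch omits pivoting and validity checks, so the input must already be Hermitian positive definite; the sample matrix is illustrative:

```python
import numpy as np

def cholesky(A):
    """Cholesky decomposition A = L L^H following Steps 14.1-14.5 (a sketch)."""
    A = np.asarray(A, dtype=complex)
    n = A.shape[0]
    L = np.zeros((n, n), dtype=complex)
    L[0, 0] = np.sqrt(A[0, 0].real)          # Step 14.1
    L[1:, 0] = A[1:, 0] / L[0, 0]
    for j in range(1, n):                    # Steps 14.2 and 14.5
        # Step 14.3: (L_j, L_j) is the squared length of row j's computed part
        L[j, j] = np.sqrt((A[j, j] - np.vdot(L[j, :j], L[j, :j])).real)
        for i in range(j + 1, n):            # Step 14.4
            L[i, j] = (A[i, j] - L[i, :j] @ L[j, :j].conj()) / L[j, j]
    return L

A = np.array([[ 4.0,  2.0j, -1.0j],
              [-2.0j, 10.0,  1.0],
              [ 1.0j,  1.0,  5.0]])   # Hermitian positive definite example
L = cholesky(A)
```

The result agrees with `np.linalg.cholesky`, whose convention (lower triangular, positive real diagonal) makes the factor unique.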

Test 14.2: The principal minors of A are

det[6] = 6    det [6 2; 2 6] = 36 - 4 = 32    det [6 2 -2; 2 6 -2; -2 -2 10] = 288

Since all three principal minors are positive, the matrix is positive definite.
Test 14.3: The eigenvalues of A are 4, 6, and 12. Since all three are positive, the matrix is positive
definite.

14.2 Use the tests to determine whether the following matrix is positive definite:

A = [2 10 -2; 10 5 8; -2 8 11]

Test 14.1:

[2 10 -2; 10 5 8; -2 8 11] → [2 10 -2; 0 -45 18; -2 8 11]    Adding -5 times the first row to
the second row

Since the second pivot, -45, is negative, A is neither positive definite nor positive
semidefinite. We can also rule out A being either negative definite or negative
semidefinite, because the first pivot, 2, is positive.


14.4 Determine whether the following matrix is positive definite:

A = [2 -17 7; -17 -4 1; 7 1 -14]

A is not positive definite because it fails Tests 14.4, 14.5, and 14.6: Its diagonal elements are not all
positive; the largest element in absolute value, -17, is not on the main diagonal; and a₁₁a₂₂ = -8 is not
greater than |a₁₂|² = 289.

14.5 Prove that the diagonal elements of a positive definite matrix must be positive.
If A has order n X n, define X to be an n-dimensional vector having one of its components, say the
kth, equal to unity and all other components equal to zero. For this vector, (14.1) becomes
0 < (AX, X) = (AX)·X = a_kk

14.6 Prove that if A = [aᵢⱼ] is an n × n positive definite matrix, then for any distinct i and j
(i, j = 1, 2, ..., n), aᵢᵢaⱼⱼ > |aᵢⱼ|².
Define X to be an n-dimensional vector having all components equal to zero except for the ith and
jth components. Denote these as xᵢ and xⱼ, respectively. For this vector, (14.1) becomes

0 < (AX, X) = (AX)·X = aᵢᵢxᵢx̄ᵢ + aᵢⱼxⱼx̄ᵢ + aⱼᵢxᵢx̄ⱼ + aⱼⱼxⱼx̄ⱼ

Setting xᵢ = -aᵢⱼ/aᵢᵢ and xⱼ = 1, we find that the first two terms on the right cancel, and we are left
with

0 < (1/aᵢᵢ)(-aᵢⱼaⱼᵢ + aᵢᵢaⱼⱼ)

The desired inequality follows, since aᵢᵢ is positive (see Problem 14.5) and, because A is Hermitian,
aⱼᵢ = āᵢⱼ.

(Property 6.1). But the orthonormal eigenvectors are linearly independent (Problem 11.27), so it follows
from Property 6.2 that there exist constants d,,d,,...,d, such that
X = d₁X₁ + d₂X₂ + ··· + dₙXₙ

Then AX = d₁AX₁ + d₂AX₂ + ··· + dₙAXₙ = d₁λ₁X₁ + d₂λ₂X₂ + ··· + dₙλₙXₙ

and (AX, X) = (d₁λ₁X₁ + d₂λ₂X₂ + ··· + dₙλₙXₙ, d₁X₁ + d₂X₂ + ··· + dₙXₙ)
            = |d₁|²λ₁ + |d₂|²λ₂ + ··· + |dₙ|²λₙ
because the eigenvectors are orthonormal. Since the eigenvalues are given to be positive, this last
quantity is positive for any nonzero vector X; thus the matrix A satisfies (14.1) and is positive definite.

Show that the determinant of a positive definite matrix is positive.


The determinant of a matrix is the product of its eigenvalues (Property 7.8), and each eigenvalue of
a positive definite matrix is positive (Problem 14.8).

14.11 Show that all principal minors of a positive definite matrix must be positive.
Let A be an n × n positive definite matrix, and let B be a submatrix of A obtained by deleting from
A its last k rows and columns (k = 0, 1, ..., n - 1). Then B has order (n - k) × (n - k). Let Y denote
any nonzero (n - k)-dimensional vector, and let X be the n-dimensional vector whose first n - k
components are those of Y and whose last k components are equal to zero. It follows from (14.1) that

0 < (AX, X) = (BY, Y)

Thus B is positive definite, and its determinant, which is a principal minor of A, is positive by Problem
14.10.

Hi es Ma
has the property that its square is A. Only D is positive definite.

14.15 Determine the Cholesky decomposition for

A = [4 i2 -i; -i2 10 1; i 1 5]

Since A is a 3 × 3 matrix, so too is L in (14.3).
STEP 14.1: Set l₁₁ = √4 = 2; then l₂₁ = -i2/2 = -i and l₃₁ = i/2. Set j = 2.

STEP 14.2: Define L₂ = [-i] and L₃ = [i/2].

STEP 14.3: Compute

l₂₂ = √(a₂₂ - (L₂, L₂)) = √(10 - 1) = 3

STEP 14.4: Compute

l₃₂ = (a₃₂ - (L₃, L₂))/l₂₂ = (1 - (-1/2))/3 = 1/2

STEP 14.5: Increase j to 3 and return to Step 14.2, where now L₃ = [i/2, 1/2]; Step 14.3 then gives

l₃₃ = √(a₃₃ - (L₃, L₃)) = √(5 - 1/2) = √4.5

Thus

L = [2 0 0; -i 3 0; i/2 1/2 √4.5]

and a direct multiplication confirms that LLᴴ = A.

STEP 14.2: Define L₂ = [-0.75], L₃ = [1.25], and L₄ = [-2].

STEP 14.3: Compute

l₂₂ = √(a₂₂ - (L₂, L₂)) = √(16 - 0.5625) = 3.92906

STEP 14.4: Compute

l₃₂ = (a₃₂ - (L₃, L₂))/l₂₂ = (-5 + 0.9375)/3.92906 = -1.03396
l₄₂ = (a₄₂ - (L₄, L₂))/l₂₂ = (8 - 1.5)/3.92906 = 1.65434

To this point, we have

L = [4 0 0 0; -0.75 3.92906 0 0; 1.25 -1.03396 l₃₃ 0; -2 1.65434 l₄₃ l₄₄]

STEP 14.5: Increase j by 1 to j = 3.

Supplementary Problems

14.17 Determine which of the following matrices are positive definite and which are positive semidefinite:
ae | 5
B Sti? | c
=t 3

i2 2 3. Atiz 32-22
Weide i F=|1+i2 ” 2+i
1h @ a= 12 244 9

G=
=3
Ovid
cog a) ee A

14.18 Find the square root of matrix A in Problem 14.17, given that its eigenvalues are 2, 3, and 6.

14.19 Find the square root of matrix B in Problem 14.17, given its eigenvalues.


Chapter 15
Unitary Transformations

UNITARY MATRICES
A matrix is unitary if its inverse equals its Hermitian transpose; that is, U is unitary if

U⁻¹ = Uᴴ          (15.1)

Unitary matrices are normal because UUᴴ = UU⁻¹ = I = U⁻¹U = UᴴU. In addition, they have the
following properties:
Property 15.1: A matrix is unitary if and only if its columns (or rows) form an orthonormal set of
vectors.
Property 15.2: The product of unitary matrices of the same order is a unitary matrix.
Property 15.3: If U is unitary, then (UX, UY) = (X, Y) for all vectors X and Y of appropriate
dimension.
Property 15.4: All eigenvalues of a unitary matrix have absolute value equal to 1.
Property 15.5: The determinant of a unitary matrix has absolute value equal to 1.
(See Problems 15.5 to 15.13 and 15.24.) Unitary matrices are invaluable for constructing
similarity transformations (see Chapter 10), because their inverses are so easy to obtain.
An orthogonal matrix is a unitary matrix whose elements are all real. If P is orthogonal, then
P⁻¹ = Pᵀ.

SCHUR DECOMPOSITION
For every square matrix A there is a unitary matrix U such that UᴴAU = T is upper triangular;
this factorization is a Schur decomposition of A. It may be constructed iteratively: set T₀ = A and, at
the kth stage, build a unitary matrix Vₖ from a unit eigenvector of the trailing submatrix of Tₖ₋₁ (as in
Problem 15.3) and embed it in the partitioned matrix

Uₖ = [I_{k-1} 0; 0 Vₖ]

where I_{k-1} is the (k - 1) × (k - 1) identity matrix.


STEP 15.5: Calculate Tₖ = UₖᴴTₖ₋₁Uₖ.
(See Problems 15.8 and 15.9.)
If A is normal, then the Schur decomposition implies:
Theorem 15.1: Every normal matrix is similar to a diagonal matrix, and the similarity transforma-
tion can be effected with a unitary matrix.
(See Problem 15.11.)

ELEMENTARY REFLECTORS
An elementary reflector (or Householder transformation) associated with a real n-dimensional
column vector V is the n × n matrix

R = I - (2/||V||₂²)VVᵀ          (15.2)

An elementary reflector is both real symmetric and orthogonal, and its square is the identity
matrix. (See Problems 15.13, 15.14, and 15.22.)
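Equation (15.2) in code, applied to V₁ = [1, 2]ᵀ of Problem 15.13; one can confirm that the result is symmetric and orthogonal and that R² = I:

```python
import numpy as np

def reflector(v):
    """Elementary reflector R = I - (2 / ||v||^2) v v^T for a real column vector v."""
    v = np.asarray(v, dtype=float)
    return np.eye(len(v)) - (2.0 / (v @ v)) * np.outer(v, v)

R1 = reflector([1.0, 2.0])
```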



Solved Problems

15.1 Determine which of the following matrices are unitary:


1 0
6it...2 ae : 1 oof
ae
1/2-i/2 eee
1/2+i/2 5) -6/7
B=|3/7 -2/7|
ase C=1/V3 1 0 i
i 1
All three are unitary, because the product of each with its Hermitian transpose yields an identity
matrix. Since the elements of B are real, that matrix is also orthogonal.

15.2 Prove that a matrix is unitary if and only if its rows (or columns) form an orthonormal set of
vectors.
Designate the rows of U as U₁, U₂, ..., Uₙ. Then the (i, j) element (i = 1, 2, ..., n;
j = 1, 2, ..., n) of UUᴴ is

(UUᴴ)ᵢⱼ = Uᵢ·Ūⱼ = (Uᵢ, Uⱼ)

If U is unitary, then UUᴴ = I, and the (i, j) element (Uᵢ, Uⱼ) must be 1 when i = j and 0 otherwise.
This, in turn, implies that the set {U₁, U₂, ..., Uₙ} is an orthonormal set of vectors. Conversely, if the
rows form an orthonormal set, then UUᴴ = I, so Uᴴ = U⁻¹ and U is unitary. (The columns of U are
handled in the same manner, by considering the product UᴴU instead.)

This has the row-echelon form

0 1
0 ger ael
which indicates that the first, second, and third vectors of the set form a maximal linearly independent
set. Applying the Gram-Schmidt process to the set {Y,E,,E,}, we obtain the orthonormal set
{Q, =Y,Q, =E,,Q, = (0, 1/V2, 1/V2]"}. Then
0 12g
U=| -1/V2 0 1/V2
pie 0 el

15.5 Prove that the product of unitary matrices of the same order is also a unitary matrix.
If A and B are unitary, then

(AB)⁻¹ = B⁻¹A⁻¹ = BᴴAᴴ = (AB)ᴴ

so AB is unitary.

This matrix possesses the eigenvalue λ = 3 with corresponding unit eigenvector Y = [1/√2, -1/√2]ᵀ.
Using the procedure given in Problem 15.3 with n =2, we generate the unitary matrix

N= [INE 14]
which is expanded into

3 2/V2 0
0; 1/V2 sothat T,=U/T,U,=|0 3 2
O:-1/V2 1/V2 Gris Qe sny)
Setting U = U₁U₂, we have UᴴAU = T₂, a matrix in upper triangular form. In this case, all the elements
of U are real, so it is orthogonal.

Find a Schur decomposition for


Gakic 2: bw 910
Main sveginiy eh

A a nis Ber
a Pte. 1/3
\

which we expand into

2 2iV3) 21Vv2
U,= so that
2 2/V2 9
01 1/V2 -1/V2 Nips , ; :
O'1/V2 1/V2
Setting U = U₁U₂U₃, we have UᴴAU = T₃, a matrix in upper triangular form.

15.10 Show that if U is unitary and A = UᴴBU, then B is normal if and only if A is normal.
If B is normal, then BᴴB = BBᴴ, and

AᴴA = (UᴴBU)ᴴ(UᴴBU) = (UᴴBᴴU)(UᴴBU) = (UᴴBᴴ)(UUᴴ)(BU)
    = (UᴴBᴴ)(UU⁻¹)(BU) = (UᴴBᴴ)(BU) = Uᴴ(BᴴB)U = Uᴴ(BBᴴ)U
    = (UᴴB)(BᴴU) = (UᴴB)(UU⁻¹)(BᴴU) = (UᴴBU)(UᴴBᴴU)
    = (UᴴBU)(UᴴBU)ᴴ = AAᴴ

so A is normal. The converse follows by reversing the roles of A and B, since B = UAUᴴ.
15.11 Show that a normal matrix is unitarily similar to a diagonal matrix.
Let A be normal, and let T = UᴴAU be a Schur decomposition of A. By Problem 15.10, T is also
normal. Since T is both upper triangular and normal, it follows from Problem 13.14 that T is diagonal.

15.13 Find elementary reflectors associated with (a) V₁ = [1, 2]ᵀ and (b) V₂ = [9, 3, -6]ᵀ.
(a) We compute ||V₁||₂² = 5, so

R₁ = [1 0; 0 1] - (2/5)[1 2; 2 4] = [3/5 -4/5; -4/5 -3/5]

(b) Similarly, ||V₂||₂² = 126, so

R₂ = [1 0 0; 0 1 0; 0 0 1] - (1/63)[81 27 -54; 27 9 -18; -54 -18 36]
   = [-2/7 -3/7 6/7; -3/7 6/7 2/7; 6/7 2/7 3/7]

15.14 Prove that an elementary reflector R is symmetric.
For any real column vector V, (VVᵀ)ᵀ = (Vᵀ)ᵀVᵀ = VVᵀ, so VVᵀ is symmetric, as is any scalar
multiple cVVᵀ. Since I is symmetric and the sum of symmetric matrices is symmetric, R is symmetric.

Supplementary Problems

15.16 Determine which of the following matrices are unitary:

V3... .cAn v2
1/V3 -1/V2 0 C =(1/V3)
1/V3 0 —1/V2

p=| iv? at g- (13 i


1/V2 1/V2 V2 1/V2

15.17 Apply the procedure of Problem 15.3 to construct a unitary matrix having, as its first column, an
eigenvector corresponding to λ = 3 for:

clean | 6) @)Ba]1. $2) @enji $1},


15.18 Find a Schur decomposition for each of the matrices in Problem 15.17.
Find elementary reflectors associated with the following vectors:
Chapter 16

Quadratic Forms and Congruence

QUADRATIC FORM
A quadratic form in the real variables x₁, x₂, ..., xₙ is a polynomial of the type

Σᵢ Σⱼ aᵢⱼxᵢxⱼ          (16.1)

with real-valued coefficients aᵢⱼ. This expression has the matrix representation

XᵀAX          (16.2)

with A = [aᵢⱼ] and X = [x₁, x₂, ..., xₙ]ᵀ. The quadratic form XᵀAX is algebraically equivalent to
Xᵀ{(A + Aᵀ)/2}X. Since (A + Aᵀ)/2 is symmetric, it is standard to use it rather than a nonsymmetric
matrix in expression (16.2). Thus, in what follows, we shall assume that A is symmetric. (See
Problems 16.1 and 16.2.)
A complex quadratic form is one that has the matrix representation

XᴴAX          (16.3)

with A Hermitian. Expression (16.3) reduces to (16.2) when X and A are real-valued, and
both expressions are equivalent to the Euclidean inner product (AX, X). The inner product (AX, X) is
real whenever A is Hermitian (Property 13.5). If the quadratic form (AX, X) is positive (or
nonnegative, negative, or nonpositive) for all nonzero vectors X, then A is positive definite (or
positive semidefinite, negative definite, or negative semidefinite), and the tests listed in Chapter 14 may
be applied.
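The equivalence of (16.2) with its symmetrized form can be seen numerically; A below is an illustrative nonsymmetric coefficient matrix:

```python
import numpy as np

def quadratic_form(A, x):
    """Evaluate X^T A X, expression (16.2)."""
    x = np.asarray(x, dtype=float)
    return x @ A @ x

A = np.array([[1.0, 4.0],
              [0.0, 3.0]])
S = (A + A.T) / 2          # the equivalent symmetric representation
x = np.array([2.0, -1.0])
```

Both representations give the same value (here, -1), which is why A may be assumed symmetric.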

A matrix A is Hermitian congruent (or conjunctive) to a matrix B if there exists a nonsingular


matrix P such that
A = PBP” (16.5)
Hermitian congruence reduces to congruence when P is real. Both Hermitian congruence and
congruence are reflexive, symmetric, and transitive.
Two quadratic forms (AX, X) and (BY, Y) are congruent if and only if A and B are congruent.

INERTIA
Every n X n Hermitian matrix of rank r is congruent to a unique matrix in the partitioned form
L342
0! =I, (16.6)
Goduabe on nian as r-7 j

; 0:0 '0
where I, and I,, are identity matrices of order k x k and m X m, respectively. An inertia matrix is a
matrix having form (16.6).
Property 16.1: (Sylvester's law of inertia) Two Hermitian matrices are congruent if and only if
they are congruent to the same inertia matrix, and then they both have k positive
eigenvalues, m negative eigenvalues, and n — k — m zero eigenvalues. |
The integer k defined by form (16.6) is called the index of A, and s = k - m is called its signature.
An algorithm for obtaining the inertia matrix of a given Hermitian matrix A is the following:
STEP 16.1: Construct the partitioned matrix [A | I], where I is an identity matrix having the same
order as A.
STEP 16.2: Reduce the left partition of [A | I] to upper triangular form, one column at a time and
without elementary row operation E2, applying each row operation to the full
partitioned matrix but the corresponding column operation to the left partition only.
STEP 16.3: Set all elements above the main diagonal of the left partition equal to zero.
STEPS 16.4 and 16.5: Rearrange the diagonal elements of the left partition so that the positive
elements come first, followed by the negative elements and then the zeros, each time
interchanging the corresponding rows of the right partition as well.
STEP 16.6: Divide each row having a nonzero diagonal element d by √|d|.
The left partition is then the inertia matrix, and the right partition is a matrix P for which PAPᴴ is
that inertia matrix.
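Since congruence preserves the signs of the eigenvalues (Property 16.1), the index k, the count m, and the signature can also be read off directly from the eigenvalues; a sketch, using a sample matrix with eigenvalues 0, 2, and 8:

```python
import numpy as np

def inertia(A, tol=1e-9):
    """Counts (k, m, z) of positive, negative, and zero eigenvalues of Hermitian A.
    By Sylvester's law of inertia these determine the congruent inertia matrix (16.6)."""
    w = np.linalg.eigvalsh(A)               # real, since A is Hermitian
    k = int(np.sum(w > tol))                # index k
    m = int(np.sum(w < -tol))
    return k, m, len(w) - k - m

A = np.array([[ 2.0,  2.0, -2.0],
              [ 2.0,  2.0, -2.0],
              [-2.0, -2.0,  6.0]])
k, m, z = inertia(A)
signature = k - m
```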

RAYLEIGH QUOTIENT
The Rayleigh quotient for a Hermitian matrix A is the ratio

R(X) = (AX, X)/(X, X)          (16.7)

Property 16.2: (Rayleigh's principle) If the eigenvalues of a Hermitian matrix A are ordered so
that λ₁ ≤ λ₂ ≤ ··· ≤ λₙ, then

λ₁ ≤ R(X) ≤ λₙ          (16.8)

R(X) achieves its maximum when X is an eigenvector corresponding to λₙ; R(X)
achieves its minimum when X is an eigenvector corresponding to λ₁.
(See Problem 16.10.)
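Rayleigh's principle is easy to check numerically for a random symmetric matrix:

```python
import numpy as np

def rayleigh(A, x):
    """Rayleigh quotient (16.7) for a Hermitian matrix A."""
    return np.vdot(x, A @ x).real / np.vdot(x, x).real

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
A = (A + A.T) / 2                      # symmetrize, so A is Hermitian
w, V = np.linalg.eigh(A)               # eigenvalues in ascending order
x = rng.standard_normal(4)
r = rayleigh(A, x)
```

For any x the quotient lies in [λ₁, λₙ], and the bounds are attained at the extreme eigenvectors.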


16.3 Determine whether the quadratic form given in Problem 16.1 is positive definite.
The results of Problems 16.1 and 14.2 indicate that the matrix representation of the quadratic form
is not positive definite. Therefore the quadratic form itself is not positive definite.

16.4 Determine whether the quadratic form given in Problem 16.2 is positive definite.
From the results of Problems 16.2 and 14.3, we determine that the quadratic form is not positive
definite because its matrix representation is not. The quadratic form is, however, positive semidefinite.

16.5 Transform the quadratic form given in Problem 16.1 into a diagonal quadratic form.
Given the result of Problem 16.1, we set

A = [2 10 -2; 10 5 8; -2 8 11]

which has eigenvalues -9, 9, and 18 and corresponding orthonormal eigenvectors
Q₁ = [2/3, -2/3, 1/3]ᵀ, Q₂ = [2/3, 1/3, -2/3]ᵀ, and Q₃ = [1/3, 2/3, 2/3]ᵀ. Setting X = MY with
M = [Q₁ Q₂ Q₃], the quadratic form XᵀAX becomes -9y₁² + 9y₂² + 18y₃².

addition, we interchange the first and second columns of A but make no corresponding change to the
columns in the right partition. Steps 16.1 through 16.6 are as follows:

Interchanging the first and second


rows

Interchanging the first and second


columns of the left partition only

1 0| Adding —1 times the first row to


the second row

Adding -3 times the first row to
the third row

Augmenting onto A the 4 x 4 identity matrix, and then reducing A to upper triangular form, one
column at a time and without using elementary row operation E2, we finally obtain

1
0 !
nN | i) > | nN —

0 0 0 Of} -1 -1
0 ©
Oo
or -—-
&
Oo

4 The left partition is in upper triangular form. Setting all elements above the main diagonal in that
partition equal to zero yields

15e0 QxQiRERe a6 0°09


q 0-20 0| -2 100
. 0 00 OO} -1 -1 10
0: (0°
@) 1¢foea 2 0 1
Following Step 16.4, we interchange the diagonal elements in the third and fourth rows of the left
partition while simultaneously interchanging the entire third and fourth rows of the right partition. The
result is
| i 0 80614 -3ee x
6-2 6 Oo. tae
0110 16 OANARARe OF TEAL
0, 0 0 OF FEST 10) bet

ing Step |
16.5, we next interchange the (2,2) diagonal element with the (3,3) diagonal clement i
“partitionand simultaneously interchange the order of the second and third rows in the1
. That us the essai matrix :
Bewarn me
oa eis) Tubdy eae ead aT wo

Uris id G-
cee “; a ie

as. 0 : aby .

triangular form. Under a congruence transformation, both sets of operations are applied to A, resulting
in a diagonal matrix. This is the rationale for Steps 16.1 through 16.3.
Interchanging the position of two diagonal elements of a diagonal matrix is equivalent to
interchanging both the rows and the columns in which the two diagonal elements appear. We
interchange only the designated rows in P, since a postmultiplication by Pᵀ will effect the same type of
column interchange automatically. This is the rationale for Steps 16.4 and 16.5.
Finally, a nonzero diagonal element d is made equal to 1 in absolute value by dividing its row and its
column by √|d|. Since the divisions will be done in tandem, we have Step 16.6.

16.10 Prove Rayleigh’s principle.


Let U be a unitary matrix that diagonalizes A. Then

UᴴAU = D = diag(λ1, λ2, ..., λn)

and we may assume that the columns of U have been ordered so that λ1 ≤ λ2 ≤ ⋯ ≤ λn. Setting
X = UY and using Property 15.1, the result follows.

16.14 Using the results of Problem 16.13, determine whether quadratic forms (a) and (b) of Problem 16.11 are
congruent.

16.15 Using the results of Problem 16.13, determine how many positive and negative eigenvalues are
associated with the symmetric matrix corresponding to each quadratic form in Problem 16.11.

16.16 Characterize the inertia matrix of a positive definite quadratic form.

16.17 Find a nonsingular matrix P such that PAPᵀ is an inertia matrix for
16.18 Determine the inertia matrix associated with
Chapter 17
Nonnegative Matrices

EIGENVALUES AND EIGENVECTORS


A matrix A is nonnegative, written A ≥ O, if all its elements are real and nonnegative; A is
positive, written A > O, if all its elements are real and positive. A matrix A is greater than a matrix B
of identical order, denoted A > B, if A − B is positive. Similarly, A ≥ B if A − B is nonnegative.
The spectral radii (see Chapter 12) of nonnegative square matrices have the following properties:

Property 17.1: If O ≤ A ≤ B, then σ(A) ≤ σ(B).


Property 17.2: If A ≥ O and if the row (or column) sums of A are a constant k, then σ(A) = k.
Property 17.3: If m is the minimum of the row (or column) sums of A, M is the maximum of the
row (or column) sums of A, and A ≥ O, then m ≤ σ(A) ≤ M.
Property 17.4: A nonnegative square matrix has an eigenvalue equal to its spectral radius, and
there exist a right eigenvector and a left eigenvector corresponding to this eigenvalue
that have real, nonnegative components.

Property 17.5: (Perron's theorem) A positive square matrix has an eigenvalue of multiplicity one
equal to its spectral radius; this eigenvalue is larger in absolute value than every
other eigenvalue, and there exist a right eigenvector and a left eigenvector
corresponding to σ(A) that have positive components.

(See Problems 17.1 through 17.6.)
CHAP. 17] NONNEGATIVE MATRICES

PRIMITIVE MATRICES
A nonnegative matrix is primitive if it is irreducible and has only one eigenvalue with absolute
value equal to its spectral radius. A nonnegative matrix is regular if one of its powers is a positive
matrix. A nonnegative matrix is primitive if and only if it is regular.
Property 17.9: If A is a nonnegative primitive matrix, then the limit L=lim, .. ({1/o(A)}A)”
exists and is positive. Furthermore, if X and Y are, respectively, left and right
positive eigenvectors of A corresponding to the eigenvalue equal to o(A) and scaled
so that YX = 1, then L= XY.
Positive matrices are primitive and have the limit described in Property 17.9. (See Problem 17.12.)
Reducible matrices may or may not have such a limit. (See Problem 17.13.)
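Property 17.9 can be checked numerically. The sketch below (NumPy assumed; the 2 × 2 matrix is hypothetical illustrative data) raises the scaled matrix to a high power and compares the limit with the outer product of the Perron eigenvectors:

```python
import numpy as np

# A hypothetical positive (hence primitive) matrix, for illustration only.
A = np.array([[1.0, 2.0],
              [3.0, 2.0]])

# Right Perron eigenvector Y and spectral radius sigma(A).
vals, vecs = np.linalg.eig(A)
k = int(np.argmax(vals.real))
rho = vals[k].real
Y = np.abs(vecs[:, k].real)        # Perron vector may be taken positive

# Left Perron eigenvector X is a right eigenvector of A-transpose.
valsT, vecsT = np.linalg.eig(A.T)
kT = int(np.argmax(valsT.real))
X = np.abs(vecsT[:, kT].real)
X = X / (X @ Y)                    # scale so the inner product of X and Y is 1

# Property 17.9: ((1/rho) A)^m converges to the outer product of Y and X.
L = np.linalg.matrix_power(A / rho, 50)
print(np.allclose(L, np.outer(Y, X), atol=1e-8))   # True
```

Here the eigenvalues are 4 and −1, so the scaled powers converge quickly.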

STOCHASTIC MATRICES
A nonnegative matrix is stochastic if all its row sums or all its column sums equal 1. It is doubly
stochastic if all its row sums and all its column sums equal 1. It follows from Property 17.2 that the
spectral radius of such a matrix is unity. If the row (column) sums are all 1, then a right (left)
eigenvector corresponding to λ = 1 has all its components equal.
A stochastic matrix is ergodic if the only eigenvalue of absolute value 1 is 1 itself and if, when
λ = 1 has multiplicity k, there exist k linearly independent eigenvectors corresponding to it.

Property 17.10: If P is ergodic, then L = lim(m→∞) Pᵐ exists.

Property 17.11: If P is ergodic with λ = 1 a simple eigenvalue, then each row of L = lim(m→∞) Pᵐ is
the left eigenvector of P corresponding to λ = 1 whose components sum to unity.
NONNEGATIVE MATRICES [CHAP. 17

represents the proportion of objects in each state at the beginning of the process. Necessarily,
X⁽ᵐ⁾ ≥ O, and the sum of the components of X⁽ᵐ⁾ is 1 for each m = 0, 1, 2, .... Furthermore,

X⁽ᵐ⁾ = X⁽⁰⁾Pᵐ

If P is primitive, then

X⁽∞⁾ = lim(m→∞) X⁽ᵐ⁾ = X⁽⁰⁾L     (17.3)

which is the positive left eigenvector of P corresponding to λ = 1 and having the sum of its
components equal to unity. The ith component of X⁽∞⁾ represents the approximate proportion of
objects in state i after a large number of time periods, and this limiting value is independent of the
initial distribution defined by X⁽⁰⁾. If P is ergodic but not primitive, (17.3) still may be used to obtain
the limiting state distribution, but it will then depend on the value of X⁽⁰⁾. (See Problems 17.16 and
17.17.)
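The limit in (17.3) can be illustrated with the training-program chain of Problem 17.17; the transition probabilities below are taken from that problem's data, and NumPy is assumed:

```python
import numpy as np

# Transition matrix for the training-program chain of Problem 17.17:
# states 1-4 are dropped, classroom trainee, apprentice, supervisor.
P = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.4, 0.0, 0.6, 0.0],
              [0.2, 0.0, 0.1, 0.7],
              [0.0, 0.0, 0.0, 1.0]])

X0 = np.array([0.0, 45/66, 21/66, 0.0])   # initial state distribution

# X^(m) = X^(0) P^m; iterate until the distribution stops changing.
Xm = X0.copy()
for _ in range(200):
    Xm = Xm @ P

print(np.round(Xm, 4))   # approximately [0.4343, 0, 0, 0.5657]
```

The result matches the limiting distribution computed analytically in Problem 17.17.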

Solved Problems

If O ≤ A ≤ B, then Aᵐ ≤ Bᵐ for any positive integer m and, therefore, ‖Aᵐ‖₁ ≤ ‖Bᵐ‖₁. It follows
from (12.4) that

σ(A) = lim(m→∞) ‖Aᵐ‖₁^(1/m) ≤ lim(m→∞) ‖Bᵐ‖₁^(1/m) = σ(B)

17.5 Prove that if the row (or column) sums of a nonnegative square matrix A are a constant k,
then σ(A) = k.
Using (12.3), we may write
σ(A) ≤ ‖A‖∞ = k     (1)
If we set X = [1, 1, ..., 1]ᵀ, it follows from the row sums being k that AX = kX, so that k is an
eigenvalue of A. Since σ(A) must be the largest eigenvalue in absolute value,
σ(A) ≥ k     (2)
Together, (1) and (2) imply σ(A) = k. The proof for column sums follows if we consider Aᵀ in place of A.
17.6 Prove that if m is the minimum row (or column) sum and M is the maximum row (or column)
sum of an n × n nonnegative matrix A = [a_ij], then m ≤ σ(A) ≤ M.
Construct a matrix B = [b_ij] of the same order as A and such that

Therefore, A is reducible.

17.9 Determine whether the following matrix is irreducible

     0 2 0 0
     0 0 4 0
A =  0 0 0 2
     1 0 0 0

To use Property 17.7, we calculate


             1  6 24 16
             8  1 12 24
(I + A)³ =   6  4  1  6
             3  6  8  1

Since this matrix is positive, A is irreducible.
Verify the Perron-Frobenius theorem for the matrix in Problem 17.1.

The matrix is irreducible, and its spectral radius, found to four decimal places, is an eigenvalue of
the matrix with corresponding left and right eigenvectors having positive components, as the theorem
requires.

17.14 Find L = lim(m→∞) Aᵐ for the matrix in Problem 17.3.


The matrix is stochastic and primitive (Problem 17.12), and it has a left eigenvector given by
[62, 48, 37]. If we divide each component of that eigenvector by the sum of the components, 62 + 48 +
37 = 147, we obtain a positive left eigenvector whose components sum to unity. Then

62/147 48/147 37/147


62/147 48/147 37/147
62/147 48/147 37/147

17.15 Determine whether the stochastic matrix

      1    0   0    0
     0.4   0  0.6   0
P =  0.2   0  0.1  0.7
      0    0   0    1

is ergodic, and, if so, calculate L = lim(m→∞) Pᵐ.


The eigenvalues of P are λ1 = λ2 = 1, λ3 = 0.1, and λ4 = 0, so the matrix is not primitive. P does,
however, possess two linearly independent right eigenvectors corresponding to λ = 1,
[45, 24, 10, 0]ᵀ and [−35, −14, 0, 10]ᵀ
so it is ergodic and L exists. As an easy calculation shows, the right eigenvectors
[0, 6, 1, 0]ᵀ and [0, 4, 0, 0]ᵀ
correspond, respectively, to λ3 and λ4.

X⁽∞⁾ = [62/147, 48/147, 37/147] = [0.422, 0.327, 0.252]


Over the long run, approximately 42 percent of the apartments controlled by this real estate manage-
ment firm will be in poor condition, 33 percent will be in average condition, and 25 percent will be in
excellent condition.

17.17 Formulate the following problem as a Markov chain and solve it: The training program for
production supervisors at a particular company consists of two phases. Phase 1, which involves
three weeks of classroom work, is followed by phase 2, which is a three-week apprenticeship
program under the direction of working supervisors. From past experience, the company
expects only 60 percent of those beginning classroom training to be graduated into the
apprenticeship phase, with the remaining 40 percent dropped completely from the training
program. Of those who make it to the apprenticeship phase, 70 percent are graduated as
supervisors, 10 percent are asked to repeat the second phase, and 20 percent are dropped
completely from the program. How many supervisors can the company expect from its current
training program if it has 45 people in the classroom phase and 21 people in the apprenticeship phase?

_ We consider one time period to be three weeks, and define states 1 through 4 as the classification of
being dropped, a classroom trainee, an apprentice, and a supervisor, respectively. If we assume that
discharged individuals never reenter the training program and that supervisors remain supervisors, then
__ the probabilities of moving from one state to the next are given by the stochastic matrix in Problem
17.15. There are 45 + 21 = 66 people currently in the training program, so the initial probability vector
for trainees is X⁽⁰⁾ = [0, 45/66, 21/66, 0]. It follows from Eq. (17.3) and the results of Problem
17.15 that

                               1     0  0   0
                              8/15   0  0  7/15
X⁽∞⁾ = [0, 45/66, 21/66, 0]   2/9    0  0  7/9   = [0.4343, 0, 0, 0.5657]
                               0     0  0   1

Thus, over the long run, the company can expect about 66(0.5657) ≈ 37 supervisors from its current
training program.

Supplementary Problems

In Problems 17.20 through 17.29, determine whether the given matrix is irreducible, primitive, or
stochastic, and estimate its spectral radius. For those matrices P that are stochastic, determine
lim(m→∞) Pᵐ if it exists.

§- +2":
] 17.22 650...4
1 20,0

23
0 2 17,25
epotiay 0.
19,9 0,1
02.02. 0.6
4 1
e< 0 05 0 05 j
0.79 0 17.27 K 1 | 17.28 0
0.35 0.48| 03 0 074 ene
0.1 0.6 0.3
17.29 |0.6 0.2 0.2|
Chapter 18
Patterned Matrices

CIRCULANT MATRICES
A circulant matrix is a square matrix in which every row beginning with the second can be
obtained from the preceding row by moving each of its elements one column to the right, with the
last element circling around to become the first. Circulant matrices have the general form

      a1    a2    a3   ...  an
      an    a1    a2   ...  a(n-1)
A =   a(n-1) an   a1   ...  a(n-2)
      ..........................
      a2    a3    a4   ...  a1

If a circulant matrix A has order n × n, then its eigenvalues are the n values of
γ = a1 + a2·r + a3·r² + ⋯ + an·r^(n−1)
obtained as r ranges over the roots of rⁿ = 1; [1, r, r², ..., r^(n−1)]ᵀ is an eigenvector
corresponding to the eigenvalue associated with root r.
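The eigenvalue formula for circulant matrices is easy to verify numerically; the first row below is hypothetical illustrative data, and NumPy is assumed:

```python
import numpy as np

# First row of a hypothetical 4 x 4 circulant matrix.
a = np.array([5.0, 1.0, 2.0, 3.0])
n = len(a)

# Build the circulant: each row is the previous row shifted one place right.
A = np.array([np.roll(a, k) for k in range(n)])

# gamma = a1 + a2*r + ... + an*r^(n-1) for each nth root of unity r.
roots = np.exp(2j * np.pi * np.arange(n) / n)
gammas = np.array([np.sum(a * r ** np.arange(n)) for r in roots])

# Every formula value should match a numerically computed eigenvalue.
eigs = np.linalg.eigvals(A)
print(all(np.min(np.abs(eigs - g)) < 1e-8 for g in gammas))   # True
```

Note that r = 1 always gives the row sum as an eigenvalue, as shown in the solved problem on circulants below.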
CHAP. 18] PATTERNED MATRICES 161

TRIDIAGONAL MATRICES
A tridiagonal matrix is a band matrix of width three. Nonzero elements appear only on the main
diagonal, the superdiagonal, and the subdiagonal; all other diagonals contain only zero elements.
Property 18.6: The eigenvalues of an n X n tridiagonal Toeplitz matrix with elements a on the main
diagonal, b on the superdiagonal, and c on the subdiagonal are

λ_k = a + 2√(bc) cos(kπ/(n + 1)),   k = 1, 2, ..., n

(See Problem 18.4.)
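Property 18.6 can be checked numerically. The sketch below (NumPy assumed) uses a hypothetical 6 × 6 tridiagonal Toeplitz matrix; since bc < 0 here, the eigenvalues are complex:

```python
import numpy as np

# Hypothetical tridiagonal Toeplitz matrix: a on the diagonal,
# b on the superdiagonal, c on the subdiagonal.
a, b, c, n = 2.0, -1.0, 1.0, 6
A = a * np.eye(n) + b * np.eye(n, k=1) + c * np.eye(n, k=-1)

# Property 18.6: lambda_k = a + 2*sqrt(b*c)*cos(k*pi/(n+1)), k = 1..n.
k = np.arange(1, n + 1)
lam = a + 2 * np.sqrt(complex(b * c)) * np.cos(k * np.pi / (n + 1))

# Each formula value should match a numerically computed eigenvalue.
eigs = np.linalg.eigvals(A)
print(all(np.min(np.abs(eigs - v)) < 1e-8 for v in lam))   # True
```

With these values of a, b, and c the formula reproduces the six complex eigenvalues listed in Problem 18.5.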


Crout's reduction (see Chapter 3) is an algorithm for obtaining an LU factorization
of a square matrix such that L = [l_ij] is lower triangular, and U = [u_ij] is upper
triangular with unity elements on the main diagonal. For a tridiagonal matrix
A = [a_ij] of order n × n, the algorithm simplifies to the following:

STEP 18.1: Initialization: If a11 = 0, stop; factorization is not possible. Otherwise, set l11 = a11; set
the subdiagonal of L equal to the subdiagonal of A; set each diagonal element of U
equal to unity; set all other elements of L and U equal to zero; and set a counter at
i = 2.
STEP 18.2: Calculate u(i−1,i) = a(i−1,i)/l(i−1,i−1).
STEP 18.3: Calculate l(i,i) = a(i,i) − l(i,i−1)u(i−1,i). If i = n, stop; the algorithm is complete.
STEP 18.4: If l(i,i) is zero, stop; factorization is not possible. Otherwise, increase i by 1 and return
to Step 18.2.

This factorization produces an L matrix having nonzero elements only on its diagonal and
subdiagonal, and a U matrix having nonzero elements only on its diagonal and superdiagonal.
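Steps 18.1 through 18.4 can be sketched as follows (NumPy assumed; the tridiagonal matrix is illustrative):

```python
import numpy as np

def tridiagonal_crout(A):
    """Crout LU factorization of a tridiagonal matrix, following Steps
    18.1-18.4: L keeps the subdiagonal of A, U has a unit diagonal."""
    n = A.shape[0]
    L, U = np.zeros((n, n)), np.eye(n)
    if A[0, 0] == 0:
        raise ValueError("factorization is not possible")
    L[0, 0] = A[0, 0]
    for i in range(1, n):
        L[i, i - 1] = A[i, i - 1]                      # subdiagonal of A
        U[i - 1, i] = A[i - 1, i] / L[i - 1, i - 1]    # Step 18.2
        L[i, i] = A[i, i] - L[i, i - 1] * U[i - 1, i]  # Step 18.3
        if L[i, i] == 0:                               # Step 18.4
            raise ValueError("factorization is not possible")
    return L, U

# An illustrative tridiagonal matrix (hypothetical numbers).
A = (np.diag([1.0, 2.0, 1.0, -1.0, 2.0])
     + np.diag([-2.0, -2.0, 1.0, 1.0], k=1)
     + np.diag([1.0, 2.0, 3.0, -3.0], k=-1))
L, U = tridiagonal_crout(A)
print(np.allclose(L @ U, A))   # True
```

L comes out lower bidiagonal and U unit upper bidiagonal, as the text states.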


Solved Problems

18.1 Determine whether the following matrices are circulant, Toeplitz, band matrices, tridiagonal,
and/or in Hessenberg form:


This system, with (2), has the matrix form AX = γX, for X = [1, r, r², ..., r^(n−1)]ᵀ. Thus γ, as given by
(2), is an eigenvalue, and X is a corresponding eigenvector, for every root r.
Given (2) and the fact that r = 1 is always a root of (1), it follows that the sum of any row of a
circulant matrix is an eigenvalue of that matrix.

18.5 Determine the eigenvalues of matrix B in Problem 18.1.

Using Property 18.6 with a = 2, b = −1, and c = 1, we have λ_k = 2 + 2√((−1)(1)) cos(kπ/7), which,
for k = 1, 2, ..., 6, yields
λ1 = 2 + i1.801938   λ2 = 2 + i1.246980   λ3 = 2 + i0.445042
λ4 = 2 − i0.445042   λ5 = 2 − i1.246980   λ6 = 2 − i1.801938

18.6 Determine an LU decomposition for matrix A in Problem 18.1.

We apply Steps 18.1 through 18.4 to A = [a_ij], initializing with l11 = a11 = 1, with the subdiagonal
of L equal to the subdiagonal of A, and with unity elements on the diagonal of U.

For i = 2:
u12 = a12/l11 = −2/1 = −2
l22 = a22 − l21u12 = 2 − (1)(−2) = 4

For i = 3:
u23 = a23/l22 = −2/4 = −1/2
l33 = a33 − l32u23 = 1 − 2(−1/2) = 2

For i = 4:
u34 = a34/l33 = 1/2
l44 = a44 − l43u34 = −1 − 3(1/2) = −5/2

For i = 5:
u45 = a45/l44 = 1/(−5/2) = −2/5
l55 = a55 − l54u45 = 2 − (−3)(−2/5) = 4/5

The factorization is, then,

     1  0  0   0    0             1  -2    0    0    0
     1  4  0   0    0             0   1  -1/2   0    0
L =  0  2  2   0    0    and U =  0   0    1   1/2   0
     0  0  3 -5/2   0             0   0    0    1  -2/5
     0  0  0  -3   4/5            0   0    0    0    1

18.7 Transform to Hessenberg form the matrix

      -2/7  -3/7   6/7
R1 =  -3/7   6/7   2/7
       6/7   2/7   3/7

      1    0     0     0
      0  -2/7  -3/7   6/7
P1 =  0  -3/7   6/7   2/7
      0   6/7   2/7   3/7

and

       1     0        0        0
      -7  -114/49  -115/49   27/49
A1 =   0  -115/49  -64/49   142/49
       0   27/49   142/49   227/49
The second iteration (k = 2) yields
X2 = [−115/49, 27/49]ᵀ
for which ‖X2‖₂ = √5.811745 = 2.410756

and V2 = X2 + ‖X2‖₂[1, 0]ᵀ = [−115/49 + 2.410756, 27/49]ᵀ = [0.0638173, 0.5510204]ᵀ

for which ‖V2‖₂² = 0.307696. Then

R2 = I − (2/0.307696)V2V2ᵀ =   0.973528  -0.228567
                              -0.228567  -0.973528

18.10 Any n × m matrix X can be converted into an nm × 1 column vector x (denoted with a
lowercase boldface letter) by taking the transposes of the rows of X and placing them
successively below one another in x. The matrix equation
AXB = C     (18.1)
is then equivalent to the matrix-vector equation
(A ⊗ Bᵀ)x = c     (18.2)
where x and c are the vector representations of the matrices X and C, respectively. Equation
(18.1) may be solved for the unknown matrix X in terms of A, B, and C by solving (18.2) for
the vector x using the methods developed in Chapter 2. Equation (18.1) may possess exactly
one solution, no solutions, or infinitely many solutions.
Solve the matrix equation AXB = C for X when

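The vec approach of Problem 18.10 can be sketched as follows; A, B, and C below are hypothetical data chosen so that (18.2) has a unique solution (NumPy assumed; NumPy's default row-major flattening matches the row-wise vectorization used here):

```python
import numpy as np

# Solve AXB = C via Eq. (18.2): with x the row-wise vectorization of X,
# the system becomes (A kron B-transpose) x = c.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[2.0, 0.0],
              [1.0, 1.0]])
X_true = np.array([[1.0, -1.0],
                   [0.0, 2.0]])
C = A @ X_true @ B                    # manufactured right-hand side

K = np.kron(A, B.T)                   # (A ⊗ B^T) of Eq. (18.2)
x = np.linalg.solve(K, C.flatten())   # row-wise vec of C
X = x.reshape(A.shape[1], B.shape[0])

print(np.allclose(X, X_true))         # True
```

Since A and B are both nonsingular here, the Kronecker system has exactly one solution; a singular A or B would give the no-solution or infinitely-many-solutions cases mentioned above.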

Supplementary Problems

18.12 Determine whether the following matrices are circulant, Toeplitz, band matrices, tridiagonal, and/or in
Hessenberg form:

18.13 Find the eigenvalues of, and a canonical basis for, the matrix in Problem 18.12(b)..

18.14 Find the eigenvalues of, and a canonical basis for, the matrix in Problem 18.12(d).
18.15 Determine the eigenvalues of the matrix in Problem 18.12(e).

18.16 Construct an LU factorization for the matrix in Problem 18.12.

18.17 Construct an LU factorization for the matrix in Problem 18.12.

18.23 Construct C®D when


713 Ls
c=|j ad and p=|} ed
18.24 Rework Problem 18.11 with C = [1, 2, 3].

18.25 Solve the matrix equation AXB =C for X when

a-[} 3]
18.26 Solve the matrix equation AXB = C for X when
J

B=(1,1, 1]

Chapter 19
Power Methods for Locating Real Eigenvalues

NUMERICAL METHODS
Algebraic procedures for determining eigenvalues and eigenvectors, as described in Chapter 7,
are impractical for most matrices of large order. Instead, numerical methods that are efficient and
stable when programmed on high-speed computers have been developed for this purpose. Such
methods are iterative, and, in the ideal case, converge to the eigenvalues and eigenvectors of
interest. Included with each method are termination criteria, generally a test to determine when a
specified precision has been achieved (if the results are converging) and an upper bound on the
number of iterations to be performed (in case convergence does not occur).
This chapter describes algorithms for locating a single real eigenvalue and its associated
eigenvector. The first method presented is the simplest; the last is the most powerful. Chapter 20
describes a procedure for obtaining all eigenvalues of a matrix; it is usually packaged with the shifted
inverse power method as an excellent general-purpose algorithm.

THE POWER METHOD

Applied to a matrix A, the power method consists in choosing a vector X0 and forming the sequence

c0X0, c1AX0, c2A²X0, c3A³X0, ...

where c0, c1, ... are scaling constants chosen to avoid computer overflow due to extremely large
components. The sequence will generally converge to an eigenvector of A, and if the scaling
constants are suitably chosen, the eigenvalue will be obvious too. This eigenvalue is the dominant
eigenvalue of A, the eigenvalue of greatest absolute value, provided a unique dominant eigenvalue
exists and X0 has a nonzero component in the direction of an associated eigenvector.
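A minimal sketch of the power method (NumPy assumed; the 2 × 2 matrix is hypothetical illustrative data with dominant eigenvalue 5). Each iterate is scaled by its component of largest absolute value, and that scale factor converges to the dominant eigenvalue:

```python
import numpy as np

# Hypothetical test matrix with eigenvalues 5 and 2.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

X = np.array([1.0, 0.0])            # initial vector
lam = 0.0
for _ in range(100):
    Y = A @ X
    lam = Y[np.argmax(np.abs(Y))]   # scaling constant = largest component
    X = Y / lam

print(round(lam, 6))                # -> 5.0 (dominant eigenvalue)
```

The iterate X converges to the eigenvector [1, 1] associated with the eigenvalue 5.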

170 POWER METHODS FOR LOCATING REAL EIGENVALUES [CHAP. 19

THE INVERSE POWER METHOD


The inverse power method is the power method applied to A⁻¹, provided the matrix is
nonsingular. The procedure will converge to the dominant eigenvalue of A⁻¹, the reciprocal of which
is the eigenvalue of A having the smallest absolute value. The associated eigenvector is the same for
both (Property 7.4). The steps are identical to those of the power method with the exception of the
following:
STEP 19.2': Calculate Y_k = A⁻¹X_(k−1) by solving the system AY_k = X_(k−1) using LU decomposition. If
this system does not have a unique solution, stop; zero is an eigenvalue of A.
(See Problems 19.8 and 19.9.)
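The inverse power method can be sketched as follows (NumPy assumed; the matrix is the same hypothetical 2 × 2 example used above, and for brevity this sketch calls a dense solver where the text uses an LU decomposition):

```python
import numpy as np

# Hypothetical test matrix with eigenvalues 5 and 2; the eigenvalue of
# smallest absolute value is therefore 2.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

X = np.array([1.0, 0.0])
lam = 0.0
for _ in range(100):
    Y = np.linalg.solve(A, X)       # Y = A^{-1} X without forming A^{-1}
    lam = Y[np.argmax(np.abs(Y))]
    X = Y / lam

# lam converges to the dominant eigenvalue of A^{-1}; its reciprocal is
# the eigenvalue of A nearest zero.
print(round(1 / lam, 6))            # -> 2.0
```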

THE SHIFTED INVERSE POWER METHOD


The inverse power method may be used to find all real eigenvalues of a matrix if estimates of
their locations are available. If u is an estimate of an eigenvalue, then A − uI will have an eigenvalue near zero,
and its reciprocal will be the dominant eigenvalue of (A − uI)⁻¹. Therefore, if λ and X are the
eigenvalue and eigenvector obtained by applying the inverse power method to A − uI, then u + 1/λ
and X are an eigenvalue and eigenvector of A. (See Problem 19.11.)
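The shift can be added to the previous sketch in a few lines (NumPy assumed; matrix and shift are hypothetical illustrative data):

```python
import numpy as np

# Hypothetical test matrix with eigenvalues 5 and 2.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
u = 4.6                         # rough estimate of the eigenvalue near 5

B = A - u * np.eye(2)           # shifted matrix A - uI
X = np.array([1.0, 0.0])
lam = 0.0
for _ in range(100):
    Y = np.linalg.solve(B, X)   # inverse power step on the shifted matrix
    lam = Y[np.argmax(np.abs(Y))]
    X = Y / lam

print(round(u + 1 / lam, 6))    # -> 5.0, the eigenvalue of A nearest u
```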

GERSCHGORIN'S THEOREM

Each diagonal element of a square matrix generates a Gerschgorin disk, which is bounded by a circle
whose center is that diagonal element and whose radius is the sum of the absolute values of the
remaining elements in that row. Gerschgorin's theorem states that every eigenvalue of the matrix lies
within at least one of its Gerschgorin disks. (See Problem 19.12.)
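The disks are easy to compute; the sketch below (NumPy assumed; the matrix is hypothetical illustrative data) checks that every eigenvalue falls in at least one disk:

```python
import numpy as np

# A hypothetical matrix for illustration.
A = np.array([[5.0, 1.0, 0.5],
              [1.0, -3.0, 1.0],
              [0.5, 1.0, 1.0]])

# Disk centers are the diagonal elements; each radius is the sum of the
# absolute values of the remaining elements in that row.
centers = np.diag(A)
radii = np.sum(np.abs(A), axis=1) - np.abs(centers)

eigs = np.linalg.eigvals(A)
covered = all(any(abs(e - c) <= r for c, r in zip(centers, radii))
              for e in eigs)
print(covered)   # True
```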


Solved Problems

19.1 Use the power method to locate an eigenvalue and eigenvector for

oe wl. 7
Ast=—] ~-1° 1
7 nS

We choose X, =[1, 1, 1]”. Then we have:


First iteration:
Y1 = AX0 = [11, −1, 13]ᵀ
λ1 = 13
X1 = (1/λ1)Y1 = [0.846154, −0.076923, 1.000000]ᵀ

Second iteration:
Y2 = AX1 = [11.307692, 0.230767, 10.846157]ᵀ
λ2 = 11.307692
X2 = (1/λ2)Y2 = [1.000000, 0.020408, 0.959184]ᵀ

Third iteration:
Y3 = AX2 = [11.693874, −0.061220, 11.816327]ᵀ
λ3 = 11.816327
X3 = (1/λ3)Y3 = [0.989638, −0.005181, 1.000000]ᵀ

First iteration:
Y1 = AX0 = [9, 12, 21]ᵀ
λ1 = 21
X1 = (1/λ1)Y1 = [0.428571, 0.571429, 1.000000]ᵀ

Second iteration:
Y2 = AX1 = [5.285714, 7.714286, 15.000000]ᵀ
λ2 = 15
X2 = (1/λ2)Y2 = [0.352381, 0.514286, 1.000000]ᵀ

Third iteration:
Y3 = AX2 = [4.790476, 7.142857, 14.200000]ᵀ
λ3 = 14.2
X3 = (1/λ3)Y3 = [0.337357, 0.503018, 1.000000]ᵀ

Continuing in this manner, we generate Table 19.2, where all entries are rounded to four decimal places.
X_k is converging to the eigenvector [1/3, 1/2, 1]ᵀ with corresponding eigenvalue λ = 14.

Table 19.3

|Iteration | Eigenvector components Eigenvalue

— 13.0000
— 13.1538
— 13.3158
— 13.4822
— 13.6491

— 14.3651
— 14.9160
— 14,9907
— 14,9990
— 14.9997

19.4 Derive the power method.

Assume that the matrix A has order n × n and possesses n real eigenvalues λ1, λ2, ..., λn with

|λ1| > |λ2| ≥ ⋯ ≥ |λn|

Furthermore, assume that the eigenvectors V1, V2, ..., Vn corresponding to these eigenvalues
form a linearly independent set. Then for any n-dimensional vector X0 there exist scalars
d1, d2, ..., dn such that

X0 = d1V1 + d2V2 + ⋯ + dnVn

Table 19.4

|Iteration | ne components Eigenvalue

oy ee Tis
0.5455 1.0000 0.0227
—0.8076 -—0.4290 1.0000

0.7229 1.0000 -—0.2567


—0.6691 —0.1957 1.0000
0.2492 1.0000 0.5958
1.0000 0.8220 -0.9414

19.6 A modification of the power method particularly suited to real symmetric matrices is
initialized with a unit vector (in the Euclidean norm) having all its components equal.
Y_k is determined as before, but the eigenvalue is approximated as λ_k = X_(k−1) · Y_k, which is
a Rayleigh quotient; then X_k = Y_k/‖Y_k‖₂ unless Y_k = 0, in which case the algorithm
terminates. Use this method on the matrix

     10  7   8   7
      7  5   6   5
A =   8  6  10   9
      7  5   9  10
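This modification can be sketched as follows (NumPy assumed). The symmetric matrix used is reconstructed from the shifted matrix B = A − 30.2887I displayed in Problem 19.7, and the iteration converges to the dominant eigenvalue near 30.2887:

```python
import numpy as np

# Symmetric matrix reconstructed from Problem 19.7's shifted matrix.
A = np.array([[10.0, 7.0,  8.0,  7.0],
              [ 7.0, 5.0,  6.0,  5.0],
              [ 8.0, 6.0, 10.0,  9.0],
              [ 7.0, 5.0,  9.0, 10.0]])

n = A.shape[0]
X = np.ones(n) / np.sqrt(n)   # unit vector with equal components
lam = 0.0
for _ in range(100):
    Y = A @ X
    lam = X @ Y               # Rayleigh quotient (X is a unit vector)
    X = Y / np.linalg.norm(Y)

print(round(lam, 4))          # -> 30.2887
```

For symmetric matrices the Rayleigh quotient converges roughly twice as fast (in digits) as the scale factors of the basic power method.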

19.7 Use the modified power method described in Problem 19.6 to determine a second eigenvalue
and associated eigenvector for the matrix in Problem 19.6.
Having determined that 30.2887 is an eigenvalue of A, we can apply the modified power method to

                   −20.2887     7         8         7
                       7     −25.2887     6         5
B = A − 30.2887I =     8         6     −20.2887     9
                       7         5         9     −20.2887
We initialize with X, = (0.5, 0.5, 0.5, 0.5]’. Then with all calculations rounded to four decimal places, we
have:

First iteration:

Y1 = BX0 = [0.8557, −3.6444, 1.3557, 0.3557]ᵀ
λ1 = X0 · Y1 = −0.5387
‖Y1‖₂ = 3.9972
X1 = (1/‖Y1‖₂)Y1 = [0.2141, −0.9117, 0.3391, 0.0890]ᵀ

Second iteration:

Y2 = BX1 = [−7.3891, 27.0345, −9.8380, −1.8130]ᵀ
λ2 = X1 · Y2 = −29.7275
‖Y2‖₂ = 29.7579
X2 = (1/‖Y2‖₂)Y2 = [−0.2483, 0.9085, −0.3306, −0.0609]ᵀ

Continuing in this manner, we generate Table 19.5. Four-place precision is attained after a few
iterations, although it takes a few additional iterations before confidence in that precision is
established. The algorithm is converging to −30.2786; adding back the shift gives
30.2887 − 30.2786 = 0.0101 as a second eigenvalue of A.

     5   0    0               1  0.4   0.4
L =  3  4.8   0     and U =   0   1   0.375
     6  3.6  5.25             0   0     1

With X, =[1, 1, 1]’, the algorithm yields the following:


First iteration: Solve LZ, = X, to obtain
Z, = (0.200000, 0.083333, —0.095238]"
Solve UY, = Z, to obtain
Y, = [0.190476, 0.119048, —0.095238]”
A, = 0.190476

X1 = (1/λ1)Y1 = [1.000000, 0.625000, −0.500000]ᵀ

Second iteration: Solve LZ2 = X1 to obtain
Z2 = [0.200000, 0.005208, −0.327381]ᵀ
Solve UY2 = Z2 to obtain
Y2 = [0.279762, 0.127976, −0.327381]ᵀ
λ2 = −0.327381
X2 = (1/λ2)Y2 = [−0.854545, −0.390909, 1.000000]ᵀ

Third iteration: Solve LZ3 = X2 to obtain
Z3 = [−0.170909, 0.025379, 0.368398]ᵀ

19.9 Use the inverse power method to obtain an eigenvalue and eigenvector for the matrix in
Problem 19.6.
For this matrix LU decomposition yields

10 0 0 7 03° 7
20.1 0
8 0.4 0
740.1 0.5

With X, =[1, 1,1, 1]’, the algorithm yields the following:


First iteration: Solve LZ1 = X0 to obtain
Z1 = [0.100000, 3.000000, −0.500000, 3.000000]ᵀ
Solve UY1 = Z1 to obtain
Y1 = [−12.000000, 20.000000, −5.000000, 3.000000]ᵀ
λ1 = 20
X1 = (1/λ1)Y1 = [−0.600000, 1.000000, −0.250000, 0.150000]ᵀ
Second iteration: Solve LZ2 = X1 to obtain
Z2 = [−0.060000, 14.200000, −2.725000, 14.650000]ᵀ
Solve UY2 = Z2 to obtain
Y2 = [−59.400000, 98.350000, −24.700000, 14.650000]ᵀ
λ2 = 98.35
X2 = (1/λ2)Y2 = [−0.603965, 1.000000, −0.251144, 0.148958]ᵀ

19.11 Find the eigenvalues and a corresponding set of eigenvectors for the matrix in Problem 19.10.
From Problem 19.10, we know that one real eigenvalue is located in the interval 24= z= 34. We
take u = 28 as an estimate of this eigenvalue and apply the inverse power method to A — 281. A better
estimate for the eigenvalue might be the center of the interval, u = 29, but an LU decomposition for
A — 291 is not possible because that matrix has a zero in the (1,1) position. For A — 281, we have

0 0 1 -1 4
—47 0 and U=|0 1 —0.127660
6 —42.234043 0 O 1

Applying the inverse power method with these matrices, we obtain, after five iterations, X, =
[1.0, —0.015180, 0.138939]” with A, = 0.636563. The corresponding eigenvalue for A is A=28+
1/0.636563 = 29.5709.
From Problem 19.10 we know that a second real eigenvalue lies between —15 and —21. We estimate
this eigenvalue as u = —19. The LU decomposition for A + 191 has
ie ao ee "1 —0.020833 0.083333
L=|-1 0.979167 0 and U=|0 1 2.127660
A 2006333 saa Np 1
power method yields, after five iterations, an eigenvector X5 and scale factor λ5 for which the
corresponding eigenvalue of A is −19 + 1/λ5 = −18.2509.
We estimate the remaining eigenvalue as u = 0 and apply the inverse power method to A itself. This
yields λ = 1.470566, so the corresponding eigenvalue of A is 0 + 1/1.470566 = 0.6800. The three
eigenvalues of A are therefore 29.5709, −18.2509, and 0.6800.

19.12 Prove that every eigenvalue of a square matrix lies in at least one Gerschgorin disk.

Supplementary Problems

19.13 Apply five iterations of the power method to

19.14 Use the power method to locate a second eigenvector and eigenvalue for the matrix in Problem 19.2.
Observe that convergence occurs even though that eigenvalue has multiplicity two.

19.15 Apply the power method to the matrix in Problem 19.11 and stop after four iterations.

19.16 Apply the power method to


rede Dee}
. A=| 01 0

19.17 Explain why the power method did not converge to the dominant eigenvalue in Problem 19.16.

19.18 Apply the power method to



19.25 The matrix in Problem 19.24 is known to have an eigenvalue near 9. Use the shifted inverse power
method to find it.

19.26 The matrix in Problem 19.18 is known to have an eigenvalue near 2.5. Use the shifted inverse power
method to find it.

19.27 A modification of the shifted inverse power method uses the Rayleigh quotient as an estimate for the
eigenvalue and then shifts by that amount. At the kth iteration, the shift is λ_k = X_kᵀAX_k / X_kᵀX_k. Thus, the
shift is different for each iteration. Termination of the algorithm occurs when two successive λ iterates
are within the prescribed tolerance of each other. Use this variable-shift method on the matrix in
Problem 19.20.
Chapter 20
The QR Algorithm
THE MODIFIED GRAM-SCHMIDT PROCESS
The Gram-Schmidt orthogonalization process (as presented in Chapter 11) may yield grossly
inaccurate results due to roundoff error under finite-digit arithmetic (see Problems 20.10 and 20.11).
A modification of that algorithm exists which is more stable and which generates the same vectors in
the absence of rounding (see Problem 20.12). This modification also transforms a set of linearly
independent vectors {X1, X2, ..., Xn} into a set of orthonormal vectors {Q1, Q2, ..., Qn} such that
each vector Q_k (k = 1, 2, ..., n) is a linear combination of X1 through X_k. The modified algorithm
is iterative, with the kth iteration given by the following steps:
STEP 20.1: Set r_kk = ‖X_k‖₂ and Q_k = (1/r_kk)X_k.
STEP 20.2: For j = k + 1, k + 2, ..., n, set r_kj = ⟨X_j, Q_k⟩.
STEP 20.3: For j = k + 1, k + 2, ..., n, replace X_j with X_j − r_kj Q_k.
(See Problems 20.1 and 20.3.)
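Steps 20.1 through 20.3 can be sketched as follows (NumPy assumed). The input columns below are reconstructed from the Q and R computed in Problem 20.1, which is an assumption; any linearly independent columns would do:

```python
import numpy as np

def modified_gram_schmidt(V):
    """Modified Gram-Schmidt per Steps 20.1-20.3: at iteration k the kth
    column is normalized, and its component is immediately removed from
    every remaining column."""
    V = V.astype(float).copy()
    n = V.shape[1]
    Q = np.zeros_like(V)
    R = np.zeros((n, n))
    for k in range(n):
        R[k, k] = np.linalg.norm(V[:, k])       # Step 20.1
        Q[:, k] = V[:, k] / R[k, k]
        for j in range(k + 1, n):
            R[k, j] = Q[:, k] @ V[:, j]         # Step 20.2
            V[:, j] -= R[k, j] * Q[:, k]        # Step 20.3
    return Q, R

# Columns are the vectors X1, X2, X3 (reconstructed from Problem 20.1).
X = np.array([[-4.0,  2.0, 2.0],
              [ 3.0, -3.0, 3.0],
              [ 6.0,  6.0, 0.0]])
Q, R = modified_gram_schmidt(X)
print(np.allclose(Q @ R, X), np.allclose(Q.T @ Q, np.eye(3)))  # True True
```

The same Q and R also give the QR decomposition of the matrix whose columns are the X's, as the next section describes.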

QR DECOMPOSITION

Every m × n matrix A (m ≥ n) can be factored into the product of a matrix Q, having
orthonormal vectors for its columns, and an upper (right) triangular matrix R:

A = QR

(See Problem 20.2.)
THE QR ALGORITHM {CHAP. 20

Each A, is similar to its predecessor and has the same eigenvalues (see Problem 20.9). In general,
the sequence {A,} converges to a partitioned matrix having either of two forms:

(20.4)

-}---
and (20.5)

_If form (20.4) occurs, then the element a is an eigenvalue, and the remaining eigenvalues are
obtained by applying the QR algorithm anew to the matrix E. If form (20.5) arises, then two
eigenvalues can be determined from the characteristic equation of the 2 x 2 submatrix in the lower
right partition, and the remaining eigenvalues are obtained by applying the QR algorithm to the
matrix G. If E or G is already a 2 x 2 matrix, its eigenvalues are determined from its characteristic
equation.
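The basic (unshifted) iteration can be sketched as follows (NumPy assumed; the symmetric matrix is hypothetical illustrative data with well-separated eigenvalues, for which the diagonal of the iterates converges to the eigenvalues):

```python
import numpy as np

# A hypothetical symmetric matrix with distinct eigenvalues.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# QR algorithm: factor A_k = Q_k R_k, then set A_{k+1} = R_k Q_k.
# Each iterate is similar to A, so the eigenvalues are preserved.
Ak = A.copy()
for _ in range(200):
    Q, R = np.linalg.qr(Ak)
    Ak = R @ Q

print(np.round(np.sort(np.diag(Ak)), 4))   # approximately [1.2679, 3, 4.7321]
```

In practice, the shifted variant described next converges far faster, which is why the text pairs it with the shifted inverse power method.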

In the shifted QR algorithm, a scalar s_k is subtracted from each diagonal element of A_k before
factoring: a QR decomposition is constructed for the shifted matrix

A_k − s_k I = Q_k R_k     (20.6)

and the next iterate is then

A_(k+1) = R_k Q_k + s_k I     (20.7)

(See Problems 20.7 and 20.9.)

Solved Problems

20.1 Use the modified Gram-Schmidt process to construct an orthogonal set of vectors from the

linearly independent set {X1, X2, X3} when

X1 = [−4, 3, 6]ᵀ    X2 = [2, −3, 6]ᵀ    X3 = [2, 3, 0]ᵀ

First iteration:

r11 = ‖X1‖₂ = √61 = 7.810250

Q1 = (1/r11)X1 = [−4/√61, 3/√61, 6/√61]ᵀ = [−0.512148, 0.384111, 0.768221]ᵀ

r12 = ⟨X2, Q1⟩ = 19/√61 = 2.432701

r13 = ⟨X3, Q1⟩ = 1/√61 = 0.128037

X2 ← X2 − r12Q1 = [198/61, −240/61, 252/61]ᵀ

X3 ← X3 − r13Q1 = [126/61, 180/61, −6/61]ᵀ

Second iteration (using vectors from the first iteration):

—0.512148 0.494524 0.702247 7.810250 2.432701 0.128037


Q=| 0.384111 —0.599423 0.702247 and R= 0 6.563686 —0.809222
0.768221 0.629395 0.117041 0 0 3.511234

A direct calculation shows that A = QR.

20.3 Use the modified Gram-Schmidt process to construct an orthogonal set of vectors from the
linearly independent set {X,,X,, X,, X,} when
1
0
4 a
X,= 1
1
fees ee

First iteration:

Tidices). +: 5 Ln | ORD ae leit = a


i]
». ap

: ° a Bx \ ‘ ) 4 3 oe: a tg ee wus Ne oe ie

Third iteration (using vectors from the second iteration):

r33 = ‖X3‖₂ = √((3/5)² + (3/5)² + (−4/5)² + (1/5)²) = √35/5

Q3 = (1/r33)X3 = [3/√35, 3/√35, −4/√35, 1/√35]ᵀ

r34 = ⟨X4, Q3⟩ = 2/√35

X4 ← X4 − r34Q3 = [3/5, 3/5, 1/5, −4/5]ᵀ − (2/√35)[3/√35, 3/√35, −4/√35, 1/√35]ᵀ
                = [3/7, 3/7, 3/7, −6/7]ᵀ

Fourth iteration (using the vector from the third iteration):

r44 = ‖X4‖₂ = √((3/7)² + (3/7)² + (3/7)² + (−6/7)²) = √63/7 = 1.133893

Q4 = (1/r44)X4 = [1/√7, 1/√7, 1/√7, −2/√7]ᵀ

An orthonormal set is {Q1, Q2, Q3, Q4}. (Compare with the vectors produced by the Gram-Schmidt
process of Chapter 11.)

Second iteration:

A1 − 5.4I =  -2.8  0.2  = Q1R1 =  -0.997459  0.071247   2.807134  -0.199492
              0.2   0              0.071247  0.997459       0      0.014249

A2 = R1Q1 + 5.4I =  2.807134  -0.199492   -0.997459  0.071247  + 5.4I =  2.585787  0.001015
                        0      0.014249    0.071247  0.997459            0.001015  5.414213

Third iteration:

A2 − 5.414213I =  -2.828426  0.001015  = Q2R2 =  -1.000000  0.000359   2.828427  -0.001015
                   0.001015      0                0.000359  1.000000       0      0.000000

A3 = R2Q2 + 5.414213I =  2.585786  0.000000
                         0.000000  5.414213
| — - a ;_ At this point we have generated form (20.4). It follows that one eigenvalue is 5.414213 and the
second is 2.585786. (Observe the roundoff error in Q,, which results in columns that are only
ae approximately unit vectors.)
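These iterations can be replayed in code. The sketch below assumes the starting matrix was A1 = [[2.6, 0.2], [0.2, 5.4]] (an inference from the figures shown, since the problem statement is not fully legible) and uses the current (2,2) entry as the shift at each step:

```python
import numpy as np

A = np.array([[2.6, 0.2],
              [0.2, 5.4]])

for _ in range(3):
    s = A[-1, -1]                       # shift s_k = current (2,2) entry
    Q, R = np.linalg.qr(A - s * np.eye(2))
    A = R @ Q + s * np.eye(2)           # similarity transform: eigenvalues preserved
```

After three iterations the off-diagonal entries are negligible and the diagonal holds the eigenvalues 4 ± √2 ≈ 5.414214 and 2.585786, matching the values found above.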


[ 3.174780   0.197407   4.602285 ]
[ 0.127357   3.143844   3.353551 ]
[ 0.405644   0.458158  13.681376 ]

[ 2.995229  −0.005512  −4.152641 ]
[ −0.003362  2.996115  −2.926538 ]
[ 0.012647   0.014613  14.008656 ]

[ 2.999996  −0.000004  −4.166180 ]
[ −0.000003  2.999997  −2.939863 ]
[ 0.000010   0.000011  14.000007 ]

[ 3.000000  −0.000000  −4.166189 ]
[ −0.000000  3.000000  −2.939868 ]
[ 0.000000   0.000000  14.000000 ]
Convergence is established to the number of decimal places shown. One eigenvalue, 14, appears in the
(3,3) position; it is obvious that the other two eigenvalues are 3 and 3. (Compare with Problem 19.2.)

20.7 Apply the shifted QR algorithm to


This matrix has form (20.4), so one eigenvalue estimate is 3.858057. To determine the others, we apply
the shifted QR algorithm anew to

[ 30.288604  0.049230  0.006981 ]
[  0.049230  0.021553  0.096466 ]
[  0.006981  0.096466  0.831786 ]

After two iterations, we obtain

[ 30.288686  0.000038  0.000000 ]
[  0.000038  0.010150  0.000000 ]
[  0.000000  0.000000  0.843107 ]
so a second eigenvalue estimate is 0.843107. The characteristic equation of the upper left 2 × 2 submatrix
is λ² − 30.298836λ + 0.307430 = 0, which can be solved explicitly to obtain 30.288686 and 0.010150
as the two remaining eigenvalues. (Compare these values with those obtained in Problems 19.6 and
19.7.)
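The deflation used in this problem — accept the (n, n) entry as an eigenvalue once the last row has converged, then repeat on the leading submatrix — can be sketched as follows. The routine and the 3 × 3 test matrix are illustrative stand-ins of my own (real symmetric, so single real shifts suffice), not the matrices of the problem:

```python
import numpy as np

def shifted_qr_eigenvalues(A, tol=1e-10, max_iter=500):
    """Eigenvalues via the shifted QR algorithm with deflation.

    Shift strategy: s = current (n, n) entry.  When the last row's
    off-diagonal part is negligible, A[n-1, n-1] is accepted as an
    eigenvalue and the problem deflates to the leading submatrix.
    (Single real shifts assume real eigenvalues.)
    """
    A = A.astype(float).copy()
    eigs = []
    while A.shape[0] > 1:
        n = A.shape[0]
        for _ in range(max_iter):
            if np.max(np.abs(A[n - 1, :n - 1])) < tol:
                break
            s = A[n - 1, n - 1]
            Q, R = np.linalg.qr(A - s * np.eye(n))
            A = R @ Q + s * np.eye(n)
        eigs.append(A[n - 1, n - 1])
        A = A[:n - 1, :n - 1]            # deflate
    eigs.append(A[0, 0])
    return sorted(eigs)

# Stand-in symmetric matrix with eigenvalues 3 and 3 +/- sqrt(3).
vals = shifted_qr_eigenvalues(np.array([[2.0, 1.0, 0.0],
                                        [1.0, 3.0, 1.0],
                                        [0.0, 1.0, 4.0]]))
```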

20.9 Show that the shifted QR algorithm is a series of similarity transformations that leave the
eigenvalues invariant.
Since the Q matrix in any QR decomposition is unitary, it has an inverse. Therefore, (20.6) may be
rewritten as

R(k−1) = Q(k−1)⁻¹(A(k−1) − s(k−1)I)

Substituting this into (20.7), we obtain

A(k) = R(k−1)Q(k−1) + s(k−1)I = Q(k−1)⁻¹(A(k−1) − s(k−1)I)Q(k−1) + s(k−1)I = Q(k−1)⁻¹A(k−1)Q(k−1)

so each iterate is similar to its predecessor, and the eigenvalues are invariant under similarity.

and WCY,, ¥,) = 0.2452 ~ 10"!


1
Y, =[-0.7765 x 10°’, —0.8193, 0.4107, 0.4001]”
= Q: = 95452 x 107
For these vectors, (Q2, Q3) = 0.8672, which is not near zero as it should be. Similar results are obtained
whenever the components 1.01 are replaced by numbers of the form 1 + 10⁻ᵏ and all numerical values are rounded to 2k
significant digits.
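The breakdown described here is easy to reproduce in ordinary floating-point arithmetic. The following sketch (my own stand-in data — a Läuchli-type matrix with ε = 10⁻⁸, playing the role of the nearly dependent vectors of Problem 20.10) contrasts the classical process against the modified one:

```python
import numpy as np

def classical_gs(A):
    """Classical Gram-Schmidt: each column is orthogonalized using
    projections of the ORIGINAL column, accumulating cancellation error."""
    m, n = A.shape
    Q = np.zeros((m, n))
    for j in range(n):
        v = A[:, j].copy()
        for i in range(j):
            v -= (Q[:, i] @ A[:, j]) * Q[:, i]   # uses original column
        Q[:, j] = v / np.linalg.norm(v)
    return Q

def modified_gs(A):
    """Modified Gram-Schmidt: projections are subtracted from the
    progressively updated vector, which is far more stable."""
    m, n = A.shape
    Q = np.zeros((m, n))
    V = A.astype(float).copy()
    for j in range(n):
        Q[:, j] = V[:, j] / np.linalg.norm(V[:, j])
        for k in range(j + 1, n):
            V[:, k] -= (Q[:, j] @ V[:, k]) * Q[:, j]
    return Q

eps = 1e-8
A = np.array([[1.0, 1.0, 1.0],
              [eps, 0.0, 0.0],
              [0.0, eps, 0.0],
              [0.0, 0.0, eps]])
err = lambda Q: np.max(np.abs(Q.T @ Q - np.eye(3)))   # departure from orthonormality
```

On this data the classical process loses orthogonality almost completely (an error near 0.5), while the modified process stays orthonormal to roughly machine precision times the condition number.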

20.11 Redo Problem 20.10 using the modified Gram-Schmidt process and show that the results are
better.
First iteration:

r11 = ||X1||2 = 2.005

Q1 = (1/r11)X1 = [0.4988, 0.5037, 0.4988, 0.4988]ᵀ

r12 = (X2, Q1) = 2.005
r13 = (X3, Q1) = 2.005

X2 ← X2 − 2.005Q1 = [−0.9400 × 10⁻⁴, −0.9919 × 10⁻², 0.9906 × 10⁻², −0.9400 × 10⁻⁴]ᵀ
X3 ← X3 − 2.005Q1 = [−0.9400 × 10⁻⁴, −0.9919 × 10⁻², −0.9400 × 10⁻⁴, 0.9906 × 10⁻²]ᵀ
Second iteration:

r22 = ||X2||2 = 0.1402 × 10⁻¹

Q2 = (1/r22)X2 = [−0.6705 × 10⁻², −0.7075, 0.7066, −0.6705 × 10⁻²]ᵀ

The first three steps of the modified Gram-Schmidt process applied to X(k+1) are

X′(k+1) = X(k+1) − (X(k+1), Q1)Q1    (1)

X″(k+1) = X′(k+1) − (X′(k+1), Q2)Q2    (2)

X‴(k+1) = X″(k+1) − (X″(k+1), Q3)Q3    (3)

Substituting (1) into (2) and noting that Q1 and Q2 are orthogonal, we obtain

X″(k+1) = X(k+1) − (X(k+1), Q1)Q1 − ((X(k+1) − (X(k+1), Q1)Q1), Q2)Q2
        = X(k+1) − (X(k+1), Q1)Q1 − (X(k+1), Q2)Q2 + (X(k+1), Q1)(Q1, Q2)Q2
        = X(k+1) − (X(k+1), Q1)Q1 − (X(k+1), Q2)Q2

Substituting this result into (3) and noting that Q3 is orthogonal to both Q1 and Q2, we obtain

X‴(k+1) = X(k+1) − (X(k+1), Q1)Q1 − (X(k+1), Q2)Q2 − ((X(k+1) − (X(k+1), Q1)Q1 − (X(k+1), Q2)Q2), Q3)Q3
        = X(k+1) − (X(k+1), Q1)Q1 − (X(k+1), Q2)Q2 − (X(k+1), Q3)Q3
              + (X(k+1), Q1)(Q1, Q3)Q3 + (X(k+1), Q2)(Q2, Q3)Q3
        = X(k+1) − (X(k+1), Q1)Q1 − (X(k+1), Q2)Q2 − (X(k+1), Q3)Q3

Continuing in this manner, we find that each vector produced by the modified process is identical to the
corresponding vector Y(k+1) produced by the classical Gram-Schmidt process, so in exact arithmetic the
two processes generate the same set of orthonormal vectors.

Third iteration:

r33 = ||X3||2 = 0.159839 × 10⁻⁴

Q3 = (1/r33)X3 = [−0.335525, −0.523965, −0.782869]ᵀ

Observe that r33 is very close to zero, and the last X3 vector is very close to the zero vector; had we
not rounded intermediate results, both would be exactly zero. However, because of the rounding neither is
zero, and Q3 can be calculated with what are, in effect, error terms. The result is a vector which is not
orthogonal to either Q1 or Q2.

Supplementary Problems

20.14 Construct QR decompositions for the following matrices:


Chapter 21
Generalized Inverses

PROPERTIES
The (Moore-Penrose) generalized inverse (or pseudoinverse) of a matrix A, not necessarily
square, is a matrix A⁺ that satisfies the conditions:
(I1): AA⁺ and A⁺A are Hermitian.
(I2): AA⁺A = A.
(I3): A⁺AA⁺ = A⁺.
A generalized inverse exists for every matrix. If A has order n × m, then A⁺ has order m × n and has
the following properties:
Property 21.1: A⁺ is unique.
Property 21.2: A⁺ = A⁻¹ for nonsingular A.
Property 21.3: (A⁺)⁺ = A.
Property 21.4: (kA)⁺ = (1/k)A⁺ for k ≠ 0.
Property 21.5: (Aᴴ)⁺ = (A⁺)ᴴ.
Property 21.6: 0⁺ = 0.
Property 21.7: The rank of A⁺ equals the rank of A.

To compute A⁺, first determine the rank k of A; then find permutation matrices P and Q of appropriate
orders so that the product PAQ has a nonsingular k × k submatrix in its upper left position, and factor
PAQ = BC, where B has order n × k, C has order k × m, and both matrices have rank k. Then:
CHAP. 21} GENERALIZED INVERSES 193

STEP 21.5: A⁺ = QCᴴ(CCᴴ)⁻¹(BᴴB)⁻¹BᴴP    (21.1)


(See Problems 21.1, 21.2, and 21.16.) When the columns of A form a linearly independent set of
vectors, (21.1) reduces to

A⁺ = (AᴴA)⁻¹Aᴴ    (21.2)

(See Problem 21.3.)
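As a numerical aside (not part of the original text), formula (21.2) is one line of code when the columns of A are independent. A sketch (the helper name and the 3 × 2 example are mine), checked against NumPy's SVD-based pseudoinverse:

```python
import numpy as np

def pinv_full_column_rank(A):
    """Generalized inverse via (21.2): A+ = (A^H A)^{-1} A^H.
    Valid only when the columns of A are linearly independent."""
    A = np.asarray(A, dtype=float)
    return np.linalg.solve(A.T @ A, A.T)   # solve rather than form the inverse

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
Ap = pinv_full_column_rank(A)
```

For a full-column-rank A, A⁺A = I, which the result satisfies.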

SINGULAR-VALUE DECOMPOSITION

Equations (21.1) and (21.2) are useful formulas for calculating generalized inverses. However,
they are not stable when roundoff error is involved, because small errors in the elements of a matrix
A can result in large errors in the computed elements of A⁺. (See Problem 21.12.) In such situations
a better algorithm exists.

For any matrix A, not necessarily square, the product AᴴA is normal and has nonnegative
eigenvalues (see Problems 13.2 and 13.3). The positive square roots of these eigenvalues are the
singular values of A. Moreover, there exist unitary matrices U and V such that

A = UΣVᴴ    (21.3)

where Σ is the block matrix

Σ = [ D  0 ]
    [ 0  0 ]

in which D is a diagonal matrix whose diagonal elements are the nonzero singular values of A. Σ has
the same order as A and, therefore, is square only when A is square. Equation (21.3) is a
singular-value decomposition for A. An algorithm for constructing such a decomposition is given in
Steps 21.6 through 21.11; in terms of it, the generalized inverse is

A⁺ = V1D⁻¹U1ᴴ    (21.4)
where V, and U, are defined by Steps 21.8 and 21.9, respectively. For the purpose of calculating a
generalized inverse, Steps 21.10 and 21.11 can be ignored. (See Problems 21.6 and 21.7.)
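Equation (21.4) translates directly into code. The helper below (its name is mine) truncates at a tolerance to decide the numerical rank; the 3 × 3 example is the matrix that Problems 21.1 and 21.4 appear to use, reconstructed from the legible entries, so treat it as an assumption:

```python
import numpy as np

def pinv_svd(A, tol=1e-12):
    """Generalized inverse via (21.4): A+ = V1 D^{-1} U1^H, where D holds
    the singular values judged nonzero (real matrices assumed here)."""
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    r = int(np.sum(s > tol * s[0]))          # numerical rank
    return Vh[:r].T @ np.diag(1.0 / s[:r]) @ U[:, :r].T

A = np.array([[ 2.0,  2.0, -2.0],
              [ 2.0,  2.0, -2.0],
              [-2.0, -2.0,  6.0]])           # rank 2; singular values 8 and 2
Ap = pinv_svd(A)
```

The result satisfies all three Penrose conditions and agrees with np.linalg.pinv.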

LEAST-SQUARES SOLUTIONS

A least-squares solution to a set of simultaneous linear equations AX = B is the vector of smallest
Euclidean norm that minimizes ||AX − B||2. That vector is

X = A⁺B    (21.5)

When A has an inverse, (21.5) reduces to X = A⁻¹B, which is the unique solution. For consistent
systems (see Chapter 2) that admit infinitely many solutions, (21.5) identifies the solution having
minimum Euclidean norm. Equation (21.5) also identifies a solution for inconsistent systems, the
one that is best in the least-squares sense. (See Problems 21.8 through 21.11.)
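Equation (21.5) in code, with illustrative data of my own (an inconsistent 3 × 2 system): X = A⁺B is both a least-squares solution and, among all least-squares solutions, the one of minimum Euclidean norm. np.linalg.lstsq returns the same vector:

```python
import numpy as np

# Overdetermined, inconsistent system (illustrative data, not from the text):
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
B = np.array([1.0, 2.0, 2.0])

X = np.linalg.pinv(A) @ B                      # X = A+ B, as in (21.5)
X_ref = np.linalg.lstsq(A, B, rcond=None)[0]   # same minimum-norm least-squares solution
print(np.round(X, 6))                          # [0.666667 0.5]
```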

Solved Problems

21.1 Find the generalized inverse of

21.2 Find the generalized inverse of


-(9 92 2]
ee ae
The matrix has rank 2. A 2 × 2 submatrix of A having rank 2 (but not the only one) is obtained by
deleting the second and fourth columns of A. This submatrix can be moved into the upper left position
by interchanging the order of the second and third columns. Then, setting
oth Gi
S Fai-0 _|0 01 0
P=| H and -OFig 10 0
OO. 4

we get pag=|{ ri" a with Annee |


and with both A,, and A,, empty. Then

B-an-[t 3) Feavan=[T oll Sl-[o 2] e=fe tla ia


oatmeal aL 2A] om ccon[§ P= 2)
and a. = Qc"(cc”)'(B”B)'B’P
as . 3

“its OO Oe OF ek 6 ‘a ee es
: {0 501 One 0oie oi. 5 a yf a ah
r OM 6 OFS Of 226 6/26j.-2) 4d Ae SP ee
Suced:. 2a

ae . Bs
; sigtdortat
pi:

a

STEP 21.7: The nonzero eigenvalues of AᴴA are 64 and 4, so the nonzero singular values of A are 8 and 2, and

D = [ 8  0 ]
    [ 0  2 ]

STEP 21.8:

V1 = [ −1/√6  1/√3 ]    and    V2 = [ −1/√2 ]    with V = [V1 | V2]
     [ −1/√6  1/√3 ]                [  1/√2 ]
     [  2/√6  1/√3 ]                [  0    ]

STEP 21.9:

U1 = AV1D⁻¹ = [  2   2  −2 ] [ −1/√6  1/√3 ] [ 1/8   0  ]   [ −1/√6  1/√3 ]
              [  2   2  −2 ] [ −1/√6  1/√3 ] [  0   1/2 ] = [ −1/√6  1/√3 ]
              [ −2  −2   6 ] [  2/√6  1/√3 ]                [  2/√6  1/√3 ]

STEP 21.10: Augmenting the 3 × 3 identity matrix onto U1, we generate

[ −1/√6  1/√3  1  0  0 ]
[ −1/√6  1/√3  0  1  0 ]
[  2/√6  1/√3  0  0  1 ]

STEP 21.11: The first three columns of this matrix form a maximal set of linearly independent column
vectors. Discarding the last two columns and applying the modified Gram-Schmidt
process to the first three columns, we obtain

U = [ −1/√6  1/√3  −1/√2 ]
    [ −1/√6  1/√3   1/√2 ]
    [  2/√6  1/√3   0    ]

which, in this case, is identical to V. A direct calculation shows that

A = U [ 8  0  0 ] Vᴴ
      [ 0  2  0 ]
      [ 0  0  0 ]

—3/V28 1/V7 = 15/420 0 0 0 0


—2/V28 1/V7 -10/V420 10/V210 0 0 0
—1/V28 1/V7—— 7/420 -8/V210 = 2/10 0 0
U= 0 WV7 —--4/V 420 -S/V210 -2/V10 3/V30 0
VV28 WNT -1 AB 20/910 1/10 4/9011
2/V28 1/V7 2/V420 ~=1/V 210 0 -1/V30 -2/V6
3/V28 1/V7 Sia are. 1S) =D a’S
A direct calculation shows that A = UΣVᴴ.
21.6 Use (21.4) to calculate the generalized inverse of the matrix in Problem 21.1.
Using what we have already found in Problem 21.4, we compute

A⁺ = V1D⁻¹U1ᴴ = [ −1/√6  1/√3 ] [ 1/8   0  ] [ −1/√6  −1/√6  2/√6 ]   [ 3/16  3/16  1/8 ]
                [ −1/√6  1/√3 ] [  0   1/2 ] [  1/√3   1/√3  1/√3 ] = [ 3/16  3/16  1/8 ]
                [  2/√6  1/√3 ]                                       [ 1/8   1/8   1/4 ]
21.7 Use (21.4) to calculate the generalized inverse of the matrix in Problem 21.3.
Using what we have already found in Problem 21.5, we compute

-8/26 5/26
_ |-16/26 10/26 (:]-
X=! 9/26 2/26 |L24~
12/26 1/26
Thus, x1 = 1/13, x2 = 2/13, x3 = 3/13, and x4 = 5/13.

21.10 Verify that the solution obtained in Problem 21.9 is the solution of minimum Euclidean norm
for the set of equations given in that problem.
Interchanging the order of the two equations, we obtain the system

x1 + 2x2 + 2x3 + 3x4 = 2
           x3 + 2x4 = 1

whose coefficient matrix is in row-echelon form. Using the techniques of Chapter 2, we determine the
solution to be

x1 = x4 − 2x2    x3 = 1 − 2x4

with x2 and x4 arbitrary. For this vector, the square of the Euclidean norm is

||X||2² = (x4 − 2x2)² + x2² + (1 − 2x4)² + x4² = 5x2² + 6x4² − 4x2x4 − 4x4 + 1

Setting the partial derivatives with respect to x2 and x4 equal to zero gives x2 = 2/13 and x4 = 5/13,
whence x1 = 1/13 and x3 = 3/13, which is precisely the solution found in Problem 21.9.

Writing this system in matrix form, and then using (21.5) and the results of either Problem 21.3 or
Problem 21.7, we obtain

wel ie| ce -2/28 -1/28 0 1/28 2/28 3/28]),. ead


fad U6 S98 h)7/ a abide eee tare | Ahk) 1/7 25

The equation of the line that best fits the data in the least-squares sense is S = 3t + 25.

21.12 Working to four significant digits, show that (21.2) is numerically unstable when applied to
At Le “Te:
1 1,004) |
_ Rounding all stored (intermediate) numerical antes four signi
H 3.000 3.004
2 nals004Sak

A = H =
(at 3 a
200 —200 551.2)

Pay vat tees


moran p vith the act <
te | i = b, 22 5

Sat) fees teh ; .


. y

I1: GG⁺ = (PAQ)(QᴴA⁺Pᴴ) = PA(QQᴴ)A⁺Pᴴ = PAIA⁺Pᴴ = P(AA⁺)Pᴴ
    G⁺G = (QᴴA⁺Pᴴ)(PAQ) = QᴴA⁺(PᴴP)AQ = QᴴA⁺IAQ = Qᴴ(A⁺A)Q
Both are Hermitian since AA⁺ and A⁺A are.
I2: GG⁺G = (PAQ)(QᴴA⁺Pᴴ)(PAQ) = PA(QQᴴ)A⁺(PᴴP)AQ = PAIA⁺IAQ = P(AA⁺A)Q = PAQ = G
I3: G⁺GG⁺ = (QᴴA⁺Pᴴ)(PAQ)(QᴴA⁺Pᴴ) = QᴴA⁺(PᴴP)A(QQᴴ)A⁺Pᴴ = QᴴA⁺IAIA⁺Pᴴ
          = Qᴴ(A⁺AA⁺)Pᴴ = QᴴA⁺Pᴴ = G⁺

Show that if A can be factored into the product BC, where both BᴴB and CCᴴ are invertible,
then A⁺ = Cᴴ(CCᴴ)⁻¹(BᴴB)⁻¹Bᴴ.
We need to show that A⁺ satisfies the three conditions required of a generalized inverse.
I1: AA⁺ = (BC)Cᴴ(CCᴴ)⁻¹(BᴴB)⁻¹Bᴴ = B(CCᴴ)(CCᴴ)⁻¹(BᴴB)⁻¹Bᴴ = B(BᴴB)⁻¹Bᴴ
    A⁺A = Cᴴ(CCᴴ)⁻¹(BᴴB)⁻¹Bᴴ(BC) = Cᴴ(CCᴴ)⁻¹(BᴴB)⁻¹(BᴴB)C = Cᴴ(CCᴴ)⁻¹C
Both are obviously Hermitian.
I2: AA⁺A = (BC)Cᴴ(CCᴴ)⁻¹(BᴴB)⁻¹Bᴴ(BC) = B[(CCᴴ)(CCᴴ)⁻¹][(BᴴB)⁻¹(BᴴB)]C = BIIC = BC = A
I3: A⁺AA⁺ = Cᴴ(CCᴴ)⁻¹(BᴴB)⁻¹Bᴴ(BC)Cᴴ(CCᴴ)⁻¹(BᴴB)⁻¹Bᴴ
          = Cᴴ(CCᴴ)⁻¹[(BᴴB)⁻¹(BᴴB)][(CCᴴ)(CCᴴ)⁻¹](BᴴB)⁻¹Bᴴ
          = Cᴴ(CCᴴ)⁻¹(BᴴB)⁻¹Bᴴ = A⁺
The algorithm given by Steps 21.1 through 21.5 produces such a factorization, with PAQ = BC, and
therefore yields A⁺ = QCᴴ(CCᴴ)⁻¹(BᴴB)⁻¹BᴴP.

Supplementary Problems

In Problems 21.18 through 21.24, find the generalized inverse of the given matrix.
1 22
21.18 Hi
i 21.19 *113
| 4 21.20 1/1 1 1| ~ 20.21 $1 >|
- 2
e et i 4 2
21.22. |2 0 ; ‘ a
|
In Problems 21.25 through 21.28, find the least-squares solution to the given system of equations.

21.25 x, +%2¢3x,;=1, 21.26 x, +x, +%x,=1


Bp xy 3x, = 2 Mek tg +x)2
e. +x, heed

21.27) x, +x,=1 21.28 38x, +2x,+3x, 4x Se PMIORO


iy 2 ee 8: EG | i et)Fone
1AM. 2 ies Oh Ail BRS Oh od, Re ML cf teh en. 55
7 ae [sMe, sis fk: an , rH ern ie tf el i>

for the originaleye),


sid Gh Beose atl é

:
Show
Ba
nA 208 position of A, then
ey C 0 thake 7 .

= -/
es =R“Q"B, which rechiees
oor - % ye a
iaes
2 ne s
ta f
78 om
3 5 Tie t ‘
"i re oe

Bes:ae ‘

21.37 Prove that 0⁺ = 0.

21.38 Prove that (A⁺)⁺ = A.

21.39 Prove that (kA)⁺ = (1/k)A⁺, provided k ≠ 0.

21.40 Prove that if A is Hermitian, then so too is A⁺.

21.41 Prove that if A is Hermitian and idempotent, then A⁺ = A.

21.42 Prove that AA⁺ and A⁺A are Hermitian and idempotent.

21.43 Show that if A has order m × n with m ≥ n, then A can be factored into A = UΣVᴴ, where Σ is an n × n
diagonal matrix with nonnegative diagonal elements, V is an n × n unitary matrix, and U is an m × n matrix with orthonormal columns.

21.44 For the factorization in Problem 21.43, show that VΣVᴴ is positive semidefinite.

21.45 Use the results of Problems 21.43 and 21.44 to show that any n × n matrix A can be factored into
A = MP, where M has orthonormal columns and P is positive semidefinite. Such a factorization is
called a polar decomposition.
21.46 Find a polar decomposition of the matrix in Problem 21.1.
Answers to Supplementary Problems

CHAPTER 1

1.19 (a) 5 “ (b) ® | (c) t eA (d) |i= 8 “1 (e) undefined


4 -2 &.—}2 Or pie
-§ -4 -2
1.20 (a2) -10; (6) 23; (c) -1
1.21 (a) 4 (b) [s 34 (c) [2 #4 (d) ip a4 (e) ; 7]
6 -2 ~ de eB

he. 26-13), We) [3956 Se | 9 4


[-
1.22 (a)
-10 -10 -5 6 12 6 -13 20
1 16%: 8 9 18 9

1.24 (a) 5] (6) undefined 1.25 (a) undefined; (6b) (2, 5S, 5)
5; ? )

|2 (b) [14] 1.27 E a]


4 ae
6 «a
a
pa %

| a
ae
<
e
De
Cee
t

ae 428 [1 2] a eae Es
: ects es . Ns , iat:
tie— . 0 1) 3
et *
a= 0 1

5 ee: A es
ANSWERS TO SUPPLEMENTARY PROBLEMS

x, =8+2x,,x,=—-1-—4;,, x, arbitrary

The system is not consistent.

Only the trivial solution x, = x, = x, =0 exists.

x, =4+x,, x, = —2x,, x, arbitrary

x, =8, x,=-2,x,=-3

4, ayoe8, 2 2, x, 2-1

= ~560,x, =4860, x, = — 10920, x, = 7000


(2.26 = ~19 999,302022, X,= 9999 .298601, x, = 0.690113, x, = 10000.707204, The system is inconsistent
a i" are continually rounded to four significant figures.

x, =0.49998,x, = 0.00001, 05
~ : e oe pti4 ogee i Pimtep: ef.
2 ts CConsistent only if k=7,an hen =2 + op a -: lene (0) contentoly
it=?
3 , ei
nx i,oi ee

0 1
12 & 0
3/2 11/7 JLO

The factorization cannot be done.

B30 0 0 1
2.6 0 0 0
te §41/6 0) 0
2 -2 -20/3 -155/41jLO 0

Bem ey xy S2,x,2-2, 3.24: s.4, 1-0, chee x,=0


on boo OS FES tere ci ie “Fi! 2a

an weixls 2 Xi, H=—h} » te xX, =4+y,, 2, Sg ae a: icone. de “ial

) 7
/ Cannotbe solved; LY =B is inconsistent, 3.28 x, =8,x,=-2,x,=-3
oy i ‘.

ee
1s ie Sheeus}

4,24 -6 ~-16 17 4.25 } 3/2 1/2


=i 0 16, -8 1 od | 0
Siis 16 -11 2 12 1/2
oe a a es
are ae
A
Se
4.27 x, = 37/28, x, = —26/28, x, = —44/28

428 x, 3, %,°0,%,=-3 4.29 x, = —-9/2, x, =1/2, x, =5/2, 2x, =1/2

4.30 (Aᵀ)⁻¹ is the inverse of Aᵀ. Now Aᵀ(A⁻¹)ᵀ = (A⁻¹A)ᵀ = Iᵀ = I, so (A⁻¹)ᵀ is also an inverse of Aᵀ.
Equality follows from the uniqueness of the inverse.

1 Each part follows from the uniqueness of the inverse: (a2) A~'B™' and B~'A™' are both inverses of AB;
-@) A’ 'B and BA' are both inverses of AB’; (c) AB~' and B~'A are both inverses of BA™’.

© a6;3015=-3(-33) =99
= as
a 0-0

; 50 setis
undefinedbecause F is notsanere,

“si pe ae :

swear ioe a
rt aes

ee ied

CHAPTER 6

6.15 Linearly dependent 6.16 Linearly dependent

6.17 Linearly independent 6,18 Linearly dependent

6.19 Linearly independent 6,20 Linearly dependent

6.21 No

6.22 (a) Yes, [0,0, 1] = O[1, 1,2] + 1[2,2,2]+(-1)[2,2,1]; (6) no

6.23 (a) Yes, (2, 1,2, 1] = 2[2, 0, 1, 1] + (-1)(0, 1, 2, -1] + (-2)[1, -1, -1, 1] + 0[0, 0, 1, 2);
(b) Yes, [0, 0, 0, 1]=(1/3)[2, 0, 1, 1] + (-2/3)[0, 1, 2, -1] + (-2/3)[1, —1, 1, 1] + (1/3)[0, 0, 1, 2]
6.24 batbeak= oe 11,
0, abeFomor 2,0)+ Bh heen 4.a

21) 6.26 {{1, 1, 2},


ard: ay}

6.27M2 2“Ms (1,0, —1, 2}, (0,1, 1,0)}


6.28 [5/3 sis)=(ayn 1)+ (1/3)[3,0} + 1/6901,2
a 2% GY <x) + i, 4 6] ’ ; P Lhe ae | ib
oa : (seats):
+o} of is sorte

for A= 6 and 717] for A= —6

hy= MeO for A= V5 and ;,| 3° ae for A= —V5

for A=2+ i2 and | rigid for


A= 2-2

1 0
| for A= 1, aH for A=2, and aH for A=3
0. | 1

J) ae i. x}
for A=0 (of multiplicity two) and -| 1 for A= -1
0

: | ie 1/37,
3qd
for a =-3 oF mat two) and | of] for A= 1)
, j S| ' AL.

$16.;1-,1 0), (k.i— 2. 0) eae sai

eer ELEM) it. AHS 1) © foree Od ta

ela bot1) ce
&a

7.36 [-—1, 1], corresponding to the eigenvalue 1 (of multiplicity two)

7.37 [1, —2] and [1,4], corresponding to eigenvalues 3 and 9, respectively

7.38 [1,—1,9], [1, 1,2], and [1, 1, —1], corresponding to eigenvalues 1, 3, and 6, respectively

7.39 [-1,1,0], [1,0, 1], and [1,1, —1], corresponding to eigenvalues 0,0, and 3, respectively

7.40 [1,1,0], [-1,0, 1], and [1, —1, 1], corresponding to eigenvalues 2, 2, and 5, respectively

7.42 A²X = A(AX) = A(λX) = λ(AX) = λ(λX) = λ²X

7.43 (A − cI)X = AX − cIX = λX − cX = (λ − c)X

7.44 The proof is by induction on the order of the matrices. The proposition is certainly true for 1 × 1
matrices. Assume it is true for k × k matrices, and let A be an arbitrary (k + 1) × (k + 1) matrix.
Designate as A′ the matrix obtained from A by deleting its first row and column. Then A′ has order
k × k, and the induction hypothesis can be used on it. Evaluating det (A − λI) by expansion along its first
row, we get

det (A − λI) = (a11 − λ) det (A′ − λI′) + O(λ^(k−1))
             = (a11 − λ)(−1)ᵏ{λᵏ − (trace A′)λ^(k−1) + O(λ^(k−2))} + O(λ^(k−1))    (by induction)
             = (−1)^(k+1){λ^(k+1) − (a11 + trace A′)λᵏ + O(λ^(k−1))}
             = (−1)^(k+1){λ^(k+1) − (trace A)λᵏ + O(λ^(k−1))}

7 ae “< IBenoss the Saaiisices of Aas A,, AnyBs.Ay Then Pek


Bee salenad -“Ay )(A= ay “(= An)=ERO 2 2 eH
rote = eres
|
oae equatie
ilwe h t r
ngte
Rise
ee:
Rh:

y*
ae y a 4

CHAPTER8

1 ] ; =|: 0 4
8.19 lim A, =|} , lim B, = > 10
kx

lim C, does not exist because tim {(k - k*)/(k + 1)} = —2,

Every square matrix A

All eigenvalues must be less than 4 in absolute value.

es 1 Lea (-1) 2sin5 —2sin(—1) af Canam —0.039151


6 |4sin5—4sin(—1) 2sin5+4sin(—1)|~ |0.078302 —0.880622
er 1 |4e° 136.
ee 2e° —2e~ J= [2eee i973)
98.6969 49.7163
“h- ‘ % f “ities ap

[4cos1 - 20s (1) ~2.¢0s 1 +2¢0s (-1)] 5 byes


4cos(—1)- eet Aan lp
'&
oe
ae 9 LIK: PA) 10 bth
es edessnti) -*”ALLE ihre

(0, 1, 0)"

X,, X, =[0, 1,0)’, X, =[1,0,0]”


(a) Two chains of length 2; (b) one chain of length 2 and two chains of length 1; (c) one chain of length 3
and one chain of length 1; (d) two chains of length 3 and one chain of length 2; (e) one chain of length
3, one chain of length 2, and three chains of length 1; (f) three chains of length 2 and two chains of
length 1; (g) the eigenvalue rank numbers as given are impossible.

? N,=N,=N,=1 and N,=2 for A=1; (b) the vectors found in Problem 9.17, along with
= [0,; 1,0, 0, 0}”

(a) N,=N,=N,=1 for A=5; (b) the vectors found in Problem 9.19

(a) N,=N,=1 for A=-2; (b) X,=[0,1)’, X; a

for both A=0 and A=4; (b) X,=(-3,1],¥ nay"


9.24 (a) N,=1
ytd N,=1,N,=2 ¥,==(0,-1 ie
forA=2; (6)X,=[0,0, 1)",X, =[1,0,0]’,
bo etch Aa Be oh (6) X,=[1,0,1]% pee 2,

nd=1; @) Hy210.01", =Dtaal ea a a



10.37 a 690)3) 10.38: a a


on. 2 0 B... e
mee 2

10.39 3e'—2e' Beet 10.40 i ae


—2e'+2e' -—2e'+3e' e?!

10.41 i a —3e '+3¢e*' el

10.42 Premultiply (10.1) on the left by S; then postmultiply on the right by S~' and set T=S7'.

10.43 Premultiply (10.1) on the left by S to obtain SA = BS. Then

BY = B(SX) = (BS)X = (SA)X = S(AX) = S(AX) = A(SX) = AY

CHAPTER ll

7yre 11.13 wo (b)0;(c) 3; (d)2;(e) 14; (f) -6; (g)2 | Pai mae.
“f@i+k (b) 1-1; (c)
4- i2; (d) -1-i; (e) i5; (f) 50— i25 |

| AL5 Q=(VDI 1)’, Q, = (1V2)[-1, 1]” os


: a :sect.oe oe (/VERMIN,7sa)"
"
7

sak linearly i dependent. T


’ oe


0= (0,Q.)y =(> QQ), 5 2 €(Q,, Qi) w= ¢(Q,,Q))w

Since Q, is nonzero, so too is (Q,,Q,),; thus, c, =0.

11.28 Set WX = Y = [y1, y2, …, yn]ᵀ. Then (X, X)_W = (WX)·(WX) = Y·Y = Σⁿᵢ₌₁ |yᵢ|². Therefore,
(X, X)_W ≥ 0. Furthermore, if X ≠ 0, then Y = WX ≠ 0, and Σⁿᵢ₌₁ |yᵢ|² is positive.

11.29 Continuing from Problem 11.28, it follows that (X, X)_W = 0 if and only if Y = 0, and that is the case if
and only if X = W⁻¹Y = 0.
«

oF
CHAPTER 12
“gi <2
. * i s
ae ~<
a

| 12.19 (a) 1; (b) VT; (€)VI; (d) VB; (e) VB


m2

Te a) 1;(6) 6; (c) 7; (d) 9; (e) 12


A

ee

(a) vi; (b) V46; (c) V29; (d) V298; (e) V464
(a) 3°";(6)V3; (6) 35 (4) (185)"5 (€) 11; (4) 4
| (6) 8;(d) VO

12.34 ||X + Y||²_W = (X + Y, X + Y)_W = (X, X)_W + (X, Y)_W + (Y, X)_W + (Y, Y)_W
                   = (X, X)_W + (Y, Y)_W = ||X||²_W + ||Y||²_W
(a) V30; (b) 8; (c) V66; (d) V22; (e) V75
(a) 5.3723; (b) 6.7958: (c) 8.1231
4.8990

The eigenvalues of A and A’ are identical.

(a) 15: (b) 4.158: (c) 66: (d) 2.729; (e) 2.147
I-'=I and |lI|| =1 from Problem 12.31.

For nonsingular A, 1 = |{I|| = ||AA~'|| =< |JA||||A’‘||= c(A).

21x E, F, pee 8.

=[1WV3, 1WV3, 0)", Q:=[-1/V8, 1%, avaa0,7 VSN


en aiues 72.
| —2, and1,eto At A

, 0}, Q,= (0,v3.a /3,.0


INS.
| uN45 :
72 eeEA

13.32 AAᴴ = A(−A) = (−A)A = AᴴA

13.33 (iA)ᴴ = −iAᴴ = −i(−A) = iA

13.34 (AX, X) = (X, AᴴX) = (X, −AX) = −(X, AX) = −(AX, X)

13.35 Let λ be an eigenvalue of A corresponding to the eigenvector X. Then

λ(X, X) = (λX, X) = (AX, X) = (X, AᴴX) = (X, −AX) = (X, −λX) = −λ̄(X, X)

Thus, λ = −λ̄, and λ is pure imaginary.

13.36 -AY=-A7=-A™=A
4(A + A”) is Hermitian and }(A — A”) is skew-Hermitian for any matrix A. For real A, these matrices
are symmetric. and skew-symmetric, respectively.

A ording to (8.1), f(A) can be written as an (n — 1)-degree polynomial in A. iia the eigenvalues of A
e : eal,so are the coefficients of such a polynomial. The result then follows from Problems 13.29 and

and 3 are positive


itive definite;
Band D are positive semidefinite.

14.30 If

a=|)0 2)
1 and X = [x Nae x,\F
|

with X real and nonzero, then (AX, X) = x? + 2x,x, + x =(x,+x,)’>0.

CHAPTER 15

15.16 Cand E

15.17 (a) [sven ‘iV91) sa 0 23 f v3 (c) i/V2.. 268


4IVOI_ -9/V55 20
-1/V2 0 1/V2 -1/V31/V6 1/V2
1/V2 0 1/V2 1/V3 -1/V6 1/V2
aaa) —-AASST myers.1B

| 0 0h Beto 0 3
(b) With U = |
-1/V2 0 va -V2/3 vr] we have U”BU= 0
V2 0 1/V2SL0 V3 ~V273 } ees
urea + Sel obs 2G «. 0 ulhivg s10 ohh, oe
dail
ll
ings
ot, (© WithU=| -1/V3_1/V6_1/V2|[0 -v3/2 1/2 | we have: UNCL
o

~~
+
2
aoe. V3 -1/V6 1/V2JL0 1/2 V3/24 eo .
a,

18.19 (a)[3/5 ses) |) [9 91) igs oy “By a


Fo i Seathie a a oa WG GEH2V2 tid go og!
a EO A ee le cof
aeee nena
ee
= 1% Ab ala
+
z i

(c) 9 -3 O -3 (d) 1-12 =1


-—3 € 3 6 =§ 226000
oe 2 .38/32
-3 0-3 6 <= 2a

16.12 (a) and (c) are positive definite; (b) is positive semidefinite.

Dr eass (a) {1 0 0 (b) [1 0 0 (c) [1 0 0 0 id fee: #2


bee eT oto] 0 1 o| 0100 ‘4. 2a
sare 001 00 0 0010 “sa ee
0001 0 @.-1
d i ag

16.14 They are not congruent because they do not have the same inertia matrix.

5 (a) Three positive eigenvalues; (b) two positive eigenvalues and one zero eigenvalue; (c) four positive
ic (d) two positive and two negative eigenvalues

ae. 6. 1. -w) f + 6.0 ‘jt e's


-2/V17_ W/V 17 -3/2 0 1/2 -8/9 1/9 1
ors | eyDai a BE aa. | eee

ert 0 Bs i 0 0
m=|0 1 E 140142) || f 0 i
S
os
ce PS bs es Be ™ ig -(S-i)/S 1/5}

can
at.at Bi¥ a
By
*).aah Aroe

20 SSE s a * - ie 4 - ao o Pld

} fe i” m , és ie bes d i ay
mae hg ee pee ig Pot? :
r : ‘ i 2 . - oe
; . ; a ¢ ues
oy y “AS er ete, ew as aes

e's
<<SC I 5 ee
o>" ) 2 oe

~_— ewe

17.25 Irreducible, primitive, stochastic, ergodic, a = 1,

19/45 17/45 9/45


L=| 19/45 17/45 9/45
19/45 17/45 9/45

17.26 Reducible, not primitive, stochastic, ergodic, o = 1,

17.27 Reducible, not primitive, stochastic, ergodic, a = 1,

3/8 0 5/8
Lal 0:48
3/8 0 5/8

-Reducible, not primitive, stochastic, not ergodic, a = 1, no limit exists.

Irreducible, primitive, doubly stochastic, ergodic, a = ie ou

a. £3
ao. VS
ry $eer
p Eas box]

ng CEEBA
p ®55.56percent
S$6.48 percent:
Fj

18.19 With U as in Problem 18.18, we have

vev=|-

| 0 o. , to-V3 0 oO
orien =siV2. 0 IN Tan, | ~W2 1/2 -1/2) «0

0 tV2 6 1h 0 0 v2 -!1

0.6 0.8 , 2.32 2.76 1.20


18.21 With R,,(0.927295) =| -0.8 0.6 we have R/,AR,,=| —1.24 0.68 ~1.60
| Lf 0 0 5.00 1.00

18.22, First iteration (k =@.03, j = 2):

[0.554700 -0.832080 ¢
0.554 700
oa— | 0-832nian050
5794) ee
> i ied G 3 f

- 2.153846 ~1.230769 4.160251 0 |


—1.230769 -1.153846 -0.832050 7.211103 |
& 4 0251. . : | ; oa {a} .
i.
. 12
Was _
: a

18.24 No solution 18.25 a= i: ey 18.26 X=[1,—1,0]’

CHAPTER 19
19.13

19.14

—0.2857
—0.7143 0.2857
, oe.
0.7143 0.28571.
: eae Widteninine @ 14+(-11)=3.
a NO Ae fay
Co) : a>
“atomPred

I * gulgurreyi be. ;
i - a eed

, rr

19.18 There is no convergence, implying that the dominant eigenvalue is complex.

Eigenvector components Eigenvalue

1.0000
0.9697 33.0000
0.9591 : 30.3939
0.9578 , . 30.3011
0.9577 : ; | 30.2902
0.9576 : ‘ 30.2889
0.9576 0. 1.0 | 30.2887

0.5774 (0.5774
5 0.7621 |
“00147 oe om

ona Eigenvector components

3 oe yee
0.1596 0.0851
—0.0661 0.0968

A =9 + 1/3.08114 = 9.3246

A = 2.5 + 1/39.5709 = 2.5253

First iteration: shift = 7.66667; eigenvector = [1, —0.486452, 0.89935]”


Second iteration: shift=10.5092; eigenvector = [0.976710, 0.067597, 1]”
Third iteration: shift = 11.969339; eigenvector = [1, —0. 000168, 0.999924)"
Fourth iteration: shift=12. 0000; eigenvector= [1, 0, aT oh

sa [-2.cs 0.2357 0.7071] [6 —-5 -0.6667]


=| 0. PD
7 ese 0.7071. —R=|00 2.8284
2.8284 0.2357|
0.9428 0 Lo 0 2aaat

20.17 3.41421, 2, 0.585786 20.18 1, —-17+ i24, -17—i24

20.19 3.61803, 2.61803, 1.38197, 0.381966 20.20 990, 660, 440, 330

20.21 9,9,9,9 20.22 -1+iV3,+i

20.23 The QR algorithm does not converge. 20.24 232.275, 79.6707, 63.8284, 24.2261

CHAPTER 21

21.18 [1/2,1/2] 21.19 , {1.1 21.20 fi 43 21.21 Se, 3)


. Bhs 5) eet 1 -
213.3] ee fer |

21.24
es 1

% BATA

Pa: th. |. 3A
WV11l -1/V2 3/V22 1/V2 es Cigar
i)

u bay —1/V2 0
21.34

3/V1l 0 ahi22

—0.597354 ed
U=U,= —0.845435 Beene ial a 0
21.35 v-v,=| 0.801978 0.597354 —0,534078 —0.845435 0 0.444992
21.36 A⁻¹ satisfies conditions I1 through I3 because
(AA⁻¹)ᴴ = Iᴴ = I = AA⁻¹ (and similarly for A⁻¹A)
AA⁻¹A = A(A⁻¹A) = AI = A
and A⁻¹AA⁻¹ = (A⁻¹A)A⁻¹ = IA⁻¹ = A⁻¹
The result then follows from Property 21.1.

21.37 A⁺ = 0 satisfies conditions I1 through I3 when A = 0.


21.38 Conditions I1 through I3 are symmetric with respect to A and A⁺. Thus, if A⁺ is the generalized inverse
of A, then A is also the generalized inverse of A⁺. That is, A = (A⁺)⁺.

21.39 Show that conditions I1 through I3 are satisfied.

21.40 It follows from Property 21.5 that (A⁺)ᴴ = (Aᴴ)⁺ = A⁺.


21.41 Conditions I1 through I3 are satisfied with A⁺ = A because
(A⁺A)ᴴ = (AA)ᴴ = Aᴴ = A = AA = A⁺A
A⁺AA⁺ = AAA = A(AA) = AA = A
so, by uniqueness (Property 21.1), A⁺ = A.

as 06 A A WEN. .A jo

i] wrattiinss sot woe 16


index

Addition of matrices, 2 Diagonal, 24, 127


Adjoint, 119 . Diagonal matrix, 24, 127, 137
Angle between vectors, 143 Diagonal quadratic form, 144
Augmented matrix, 11 Differential equations, 72
AX + XB=C, 73 Dimension, 52
AX = B (see Simultaneous linear equations) Direct product, 165
AXB=C, 166 Distance between vectors, 110
Distribution vector, 153
Band matrix, 160 Dominant eigenvalue, 111, 152, 169
Bessel function, 80 Dot product, 1
Block matrix, 43 Doubly stochastic matrix, 153

Canonical basis, 82
e^{At} (matrix exponential), 72, 100


for a normal matrix, 119 for a positive definite matrix, 135
Cayley-Hamilton theorem, 61 Eigenvalue, 60
Chain, 82 bounds on, 112
Characteristic equation, 60 of a circulant matrix, 160
of similar matrices, 91 dominant, 111, 152, 169
Characteristic polynomial, 60 of an elementary reflector, 143
Characteristic value (see Eigenvalue) of f(A), 93 *~
Characteristic vector (see Eigenvector) of a Hermitian matrix, 119
Cholesky decomposition, 129 of an inverse, 60
-Circulant matrix, 160 of a nonnegative matrix, 152...
Coefficient matrix, 11 by numerical eee sd 170
es 42, 50

Expansion by cofactors, 42 Jordan block, 91


Exponential of a matrix, 71 Jordan canonical form, 91

Finite Markov chain, 153


Kronecker product, 165
_ Frobenius norm, 111, 116
Function of a matrix, 71, 92
/, norm, 110
eigenvalues of, 93 /, norm, 110
Hermitian, 127 /, norm, 110
/. norm, 110
L, norm, 111
_ Gaussian elimination, 12 L.. norm, 110
_ Gauss-Jordan elimination, 21 Least squares, 194, 201
Generalized eigenvector, 82 Left eigenvector, 67, 154
Generalized inverse, 192 Length, of a chain, 82
_ Gerschgorin’s theorem, 170 of a vector, 110, 143
Givens' method, 167
Gram-Schmidt orthogonalization process, 104, 121
  modified, 181
Linear combination, 52
  of eigenvectors, 66
Linear dependence, 52, 53
Linear equations (see Simultaneous linear equations)
Linear independence, 52, 53
  of a chain, 85
  of eigenvectors, 61
Lower triangular matrix, 24
  determinant of, 42
  eigenvalues of, 60
  inverse of, 32

Nonsingular matrix, 34
Nontrivial solutions, 12
Norm, 110
Normal equations, 201
Normal matrix, 119, 136
  similar to a diagonal matrix, 137
Normalized vector, 110
Null space, 59
Numerical methods for eigenvalues, 169, 170, 181

Order, 1
Orthogonal matrix, 136, 137
Orthogonal vectors, 103
Orthonormal vectors, 110, 119, 136

Partial pivoting, 13
  (See also Pivoting strategies)
Partitioned matrix, 5
  determinant of, 43
Permutation matrix, 152
Perpendicular (see Orthogonal vectors)
Perron's theorem, 152
Perron-Frobenius theorem, 152
Pivot, 3
Pivotal condensation, 43
Pivoting strategies, 13, 18
Positive definite matrix, 128

QR decomposition, 202

Row vector, 1
Row-echelon form, 2
  reduction to, 3

Scalar, 1
Scalar multiplication, 2
Scaled pivoting, 18
Schur decomposition, 136
Schwarz inequality, 103
Self-adjoint matrix, 120
Sequences of matrices, 71
Series of matrices, 71
Shifted inverse power method, 170
  modified, 180
Shifted QR algorithm, 182
Signature, 145
Similar matrices, 91, 123
Similarity transformation, 137
  QR algorithm, 188
Simultaneous linear equations, 11, 25, 35
  homogeneous, 12, 55, 59
  matrix form, 11
  (See also Solutions of simultaneous linear equations)
Singular matrix, 34
  eigenvalues of, 60
Singular value, 193
Singular value decomposition
Skew-symmetric matrix
Solutions of simultaneous linear equations

Transpose (continued)
  inverse of, 35
  of an orthogonal matrix, 136
  rank of, 53
  of a right eigenvector, 67
  (See also Hermitian matrix)
Triangle inequality, 110
Tridiagonal matrix, 161
Trivial solution, 12

Unit vector, 110
Unitarily similar, 137
Unitary matrix, 136
Unitary transformation, 136
  preservation of angles, 143
  preservation of norms, 143
Upper triangular matrix, 24, 136
  determinant of, 42
  eigenvalues of, 60
  inverse of, 34
  when normal, 124

Vandermonde determinant, 66
Vector, 1, 52
  angle between, 143
Vector norm, 110

Well-defined function, 71
  (See also Function of a matrix)
Work column, 3

Zero matrix, 2, 53, 192
SCHAUM'S SOLVED PROBLEMS SERIES

■ Learn the best strategies for solving tough problems in step-by-step detail
■ Prepare effectively for exams and save time in doing homework problems
■ Use the indexes to quickly locate the types of problems you need the most help solving
■ Save these books for reference in other courses and even for your professional library

To order, please check the appropriate box(es) and complete the following coupon.

3000 SOLVED PROBLEMS IN BIOLOGY
ORDER CODE 005022-8/$16.95  406 pp.

3000 SOLVED PROBLEMS IN CALCULUS
ORDER CODE 041523-4/$19.95  442 pp.

3000 SOLVED PROBLEMS IN CHEMISTRY
ORDER CODE 023684-4/$20.95  624 pp.

2500 SOLVED PROBLEMS IN COLLEGE ALGEBRA & TRIGONOMETRY
ORDER CODE 055373-4/$14.95  608 pp.

2500 SOLVED PROBLEMS IN DIFFERENTIAL EQUATIONS
ORDER CODE 007979-x/$19.95  448 pp.

2000 SOLVED PROBLEMS IN DISCRETE MATHEMATICS
ORDER CODE 038031-7/$16.95  412 pp.

3000 SOLVED PROBLEMS IN ELECTRIC CIRCUITS
ORDER CODE 045936-3/$21.95  746 pp.

2000 SOLVED PROBLEMS IN ELECTROMAGNETICS
ORDER CODE 045902-9/$18.95  480 pp.

2000 SOLVED PROBLEMS IN ELECTRONICS
ORDER CODE 010284-8/$19.95  640 pp.

2500 SOLVED PROBLEMS IN FLUID MECHANICS & HYDRAULICS
ORDER CODE 019784-9/$21.95  800 pp.

1000 SOLVED PROBLEMS IN HEAT TRANSFER
ORDER CODE 050204-8/$19.95  750 pp.

3000 SOLVED PROBLEMS IN LINEAR ALGEBRA
ORDER CODE 038023-6/$19.95  750 pp.

2000 SOLVED PROBLEMS IN MECHANICAL ENGINEERING THERMODYNAMICS
ORDER CODE 037863-0/$19.95  406 pp.

2000 SOLVED PROBLEMS IN NUMERICAL ANALYSIS
ORDER CODE 055233-9/$20.95  704 pp.

3000 SOLVED PROBLEMS IN ORGANIC CHEMISTRY
ORDER CODE 056424-8/$22.95  688 pp.

2000 SOLVED PROBLEMS IN PHYSICAL CHEMISTRY
ORDER CODE 041716-4/$21.95  448 pp.

3000 SOLVED PROBLEMS IN PHYSICS
ORDER CODE 025734-5/$20.95  752 pp.

3000 SOLVED PROBLEMS IN PRECALCULUS
ORDER CODE (illegible)  385 pp.
SOLVED PROBLEMS IN VECTOR MECHANICS FOR ENGINEERS

ASK FOR THE SCHAUM'S SOLVED PROBLEMS SERIES AT YOUR LOCAL BOOKSTORE
OR CHECK THE APPROPRIATE BOX(ES) ON THE PRECEDING PAGE
Confusing Textbooks? Missed Lectures? Tough Test Questions?

Fortunately, there's Schaum's.
More than 40 million students have trusted Schaum's Outlines to help them
succeed in the classroom and on exams. Schaum's is the key to faster learning
and higher grades in every subject. Each Outline presents all the essential
course information in an easy-to-follow, topic-by-topic format. You also get
hundreds of examples, solved problems, and practice exercises to test your skills.

This Schaum's Outline gives you:

* 363 fully solved problems with step-by-step solutions
* Clear, concise explanations of matrix operations
* Coverage of all course fundamentals

Fully compatible with your classroom text, Schaum's highlights all the important
facts you need to know. Use Schaum's to shorten your study time—and get your
best test scores!

Schaum's Outlines—Problem Solved.

Learn more. Do more.


MHPROFESSIONAL.COM
