Matrix Operations - Richard Bronson - 2011 - SCHAUM's OUTLINES
SCHAUM'S Outlines
Matrix Operations
Second Edition
363 fully solved problems
- Treats matrix computations, algorithms, and operations
- Covers all course fundamentals and supplements any text
Copyright © 2011 by The McGraw-Hill Companies, Inc. All rights reserved. Printed in the United States of America. Except
as permitted under the United States Copyright Act of 1976, no part of this publication may be reproduced or distributed in
any form or by any means, or stored in a database or retrieval system, without the prior written permission of the publisher.
ISBN 978-0-07-175604-4
MH 0-07-175604-3
Trademarks: McGraw-Hill, the McGraw-Hill Publishing logo, Schaum's, and related trade dress are trademarks or registered trademarks of The McGraw-Hill Companies and/or its affiliates in the United States and other countries and may not be used without written permission. All other trademarks are the property of their respective owners. The McGraw-Hill Companies is not associated with any product or vendor mentioned in this book.
Preface
Contents
Chapter 11  INNER PRODUCTS ......................... 103
  Complex conjugates. The inner product. Properties of inner products. Orthogonality. Gram-Schmidt orthogonalization.
Chapter 12  NORMS
  Vector norms. Normalized vectors and distance. Matrix norms. Induced norms. Compatibility. Spectral radius.
Example 1.1  Arrays such as
[ 3  4 ]      [ sin t ]
[ ... ]  and  [ cos t ]   and   [-1.7, 2 + i6, -3i, 0]
are all matrices. The first two are real-valued, whereas the third is complex-valued (with i denoting the imaginary unit); the first and third are constant matrices, but the second is not constant, since its elements depend on t.

The elements of a matrix are denoted a_ij, where the double subscript indicates the element's location: the first subscript gives the row and the second gives the column.

Matrix multiplication distributes over addition and subtraction:
A(B + C) = AB + AC          (B - C)A = BA - CA
A matrix in which every nonzero row begins with more leading zeros than the row above it, and in which all zero rows appear at the bottom, satisfies all four conditions and so is in row-echelon form. (See Problems 1.11 to 1.15 and 1.18.)
STEP 1.1: Let R denote the work row, and initialize R = 1 (so that the top row is the first work row).
STEP 1.2: Find the first column containing a nonzero element in either row R or any succeeding row. If no such column exists, stop; the transformation is complete. Otherwise, let C denote this column.
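These two steps begin the full reduction algorithm. As a rough illustration, the following Python/NumPy sketch carries the idea through to a complete row-echelon reduction; the function name and tolerance are my own choices, and the test matrix is the one used later in Problem 1.13.

```python
import numpy as np

def row_echelon(A, tol=1e-12):
    """Reduce a copy of A to row-echelon form with elementary row operations."""
    A = A.astype(float).copy()
    rows, cols = A.shape
    R = 0                                   # work row (Step 1.1)
    for C in range(cols):                   # work column (Step 1.2)
        pivot_rows = [i for i in range(R, rows) if abs(A[i, C]) > tol]
        if not pivot_rows:
            continue                        # no pivot available in this column
        i = pivot_rows[0]
        A[[R, i]] = A[[i, R]]               # interchange rows (operation E1)
        A[R] = A[R] / A[R, C]               # scale the pivot row so the pivot is 1 (E2)
        for k in range(R + 1, rows):        # clear the entries below the pivot (E3)
            A[k] = A[k] - A[k, C] * A[R]
        R += 1
        if R == rows:
            break
    return A

print(row_echelon(np.array([[1, 2, -1, 6],
                            [3, 8,  9, 10],
                            [2, -1, 2, -2]])))
```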
Solved Problems
1.6 Verify that (BA)’ = A’B’ for the matrices of Problem 1.5.
Computing the product A'B' entry by entry yields a matrix
which is the transpose of the product BA found in Problem 1.5.
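The identity can also be checked numerically. The matrices below are illustrative stand-ins (the entries of Problem 1.5 are not legible in this copy); only the identity (BA)' = A'B' itself comes from the text.

```python
import numpy as np

A = np.array([[1, 2], [3, 6], [0, -5]])   # illustrative 3 x 2 matrix
B = np.array([[7, 0, 4], [8, -9, 2]])     # illustrative 2 x 3 matrix

lhs = (B @ A).T        # transpose of the product BA
rhs = A.T @ B.T        # product of the transposes, in reverse order
print(np.array_equal(lhs, rhs))           # True: (BA)^T = A^T B^T
```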
A partitioned matrix can be viewed as a matrix whose elements are themselves matrices.
1.9 The arithmetic operations defined above for matrices having scalar elements apply as well to
partitioned matrices. Determine AB and A -— B if
where A and B are partitioned into blocks C, D, E, F, and G, each of which is itself a matrix. Treating the blocks as if they were scalar elements, the product AB and the difference A - B are formed block by block.
The product AB is the upper left part of the resulting matrix. Since the row sums of this product, the
column sums, and their sums are correctly given in the matrix, the multiplication checks.
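The bordering check described here is easy to reproduce: append a row of column sums to A and a column of row sums to B, multiply, and compare. A minimal sketch with illustrative matrices:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])                 # illustrative matrices
B = np.array([[5, 6, 7], [8, 9, 10]])

ones_col = np.ones((B.shape[1], 1))
ones_row = np.ones((1, A.shape[0]))

A_bordered = np.vstack([A, ones_row @ A])      # append a row of column sums to A
B_bordered = np.hstack([B, B @ ones_col])      # append a column of row sums to B

P = A_bordered @ B_bordered
AB = P[:-1, :-1]                               # the upper left block is AB

print(np.allclose(P[:-1, -1], AB.sum(axis=1))) # last column reproduces row sums of AB
print(np.allclose(P[-1, :-1], AB.sum(axis=0))) # last row reproduces column sums of AB
print(np.isclose(P[-1, -1], AB.sum()))         # corner entry reproduces the grand total
```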
Here (and in later problems) we shall use an arrow to indicate the row that results from each
elementary row operation.
Reducing the matrix to row-echelon form with elementary row operations (Steps 1.4 and 1.6 of the algorithm, applied column by column) produces three nonzero rows. Since the row-echelon form of this matrix has three nonzero rows, the rank of the original matrix is 3.
Supplementary Problems
1.19 Find (a) A + B; (b) 3A; (c) 2A - 3B; (d) C - D; and (e) A + F.
Designate the columns of A as A, and A,, and the columns of C as C,,C,, and C,, from left to right.
Then calculate (a) A,*A,; (b) C,+C,; and (c) C,-C,.
Find (a) AB; (b) BA; (c) (AB)’; (d) B’A’; and (e) A’B’.
Fir
oS
d (a) CF and (b) FC.
Chapter 2
Simultaneous Linear Equations
CONSISTENCY
A system of simultaneous linear equations is a set of equations of the form
a₁₁x₁ + a₁₂x₂ + ⋯ + a₁ₙxₙ = b₁
a₂₁x₁ + a₂₂x₂ + ⋯ + a₂ₙxₙ = b₂
. . . . . . . . . . . . . . . . .
aₘ₁x₁ + aₘ₂x₂ + ⋯ + aₘₙxₙ = bₘ          (2.1)

In matrix form this is AX = B, where A = [aᵢⱼ] is the coefficient matrix, X is the column vector of unknowns, and B is the column vector of right-hand sides. A system is consistent if it has at least one solution; otherwise it is inconsistent. For example, the system
x₁ + x₂ = 1
x₁ + x₂ = 0
has no solution, because there are no values of x₁ and x₂ that sum to 1 and 0 simultaneously; it is therefore inconsistent.
THEORY OF SOLUTIONS
Theorem 2.1: The system AX = B is consistent if and only if the rank of A equals the rank of [A |B].
Theorem 2.2: Denote the rank of A as k, and the number of unknowns as n. If the system AX = B is
consistent, then the solution contains n — k arbitrary scalars.
(See Problems 2.5 to 2.7.)
System (2.1) is said to be homogeneous if B = 0; that is, if b₁ = b₂ = ⋯ = bₘ = 0. If B ≠ 0 (i.e., if at least one bᵢ, i = 1, 2, ..., m, is not zero), the system is nonhomogeneous. Homogeneous systems are consistent and admit the solution x₁ = x₂ = ⋯ = xₙ = 0, which is called the trivial solution;
a nontrivial solution is one that contains at least one nonzero value.
Theorem 2.3: Denote the rank of A as k, and the number of unknowns as n. The homogeneous
system AX = 0 has a nontrivial solution if and only if n #k. (See Problem 2.7.)
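Theorems 2.1 and 2.2 translate directly into a rank test. A hedged sketch (the system shown is illustrative, not one of the book's examples):

```python
import numpy as np

A = np.array([[1.0, 3.0, 1.0],
              [2.0, 1.0, -3.0]])          # illustrative coefficient matrix
b = np.array([[5.0], [1.0]])              # illustrative right-hand side

rank_A  = np.linalg.matrix_rank(A)
rank_Ab = np.linalg.matrix_rank(np.hstack([A, b]))
n = A.shape[1]                            # number of unknowns

if rank_A == rank_Ab:
    # Theorem 2.1: consistent; Theorem 2.2: solution has n - k arbitrary scalars
    print(f"consistent; solution contains {n - rank_A} arbitrary scalar(s)")
else:
    print("inconsistent")
```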
SIMPLIFYING OPERATIONS
Three operations that alter the form of a system of simultaneous linear equations but do not alter its solution set are:
(O1): Interchanging the sequence of two equations.
(O2): Multiplying an equation by a nonzero scalar.
(O3): Adding to one equation a scalar times another equation.
Applying operations O1, O2, and O3 to system (2.1) is equivalent to applying the elementary row operations E1, E2, and E3 (see Chapter 1) to the augmented matrix associated with that system. Gaussian elimination is an algorithm for applying these operations systematically, to obtain a set of equations that is easy to analyze for consistency and easy to solve if it is consistent.

PIVOTING STRATEGIES
Partial pivoting involves searching the work column of the augmented matrix for the largest
element in absolute value appearing in the current work row or a succeeding row.- That element
becomes the new pivot. To use partial pivoting, replace Step 1.3 of the algorithm for transforming a
matrix to row-echelon form with the following:
STEP 1.3’: Beginning with row R and continuing through successive rows, locate the largest
element in absolute value appearing in work column C. Denote the first row in which
this element appears as row I. If I is different from R, interchange rows I and R
(elementary row operation E1). Row R will now have, in column C, the largest
nonzero element in absolute value appearing in column C of row R or any row
succeeding it. This element in row R and column C is called the pivot; let P denote its
value.
(See Problems
2.9 and 2.10.)
Two other pivoting strategies are described in Problems 2.11 and 2.12; they are successively
more powerful but require additional computations. Since the goal is to avoid significant roundoff
error, it is not necessary to find the best pivot element at each stage, but rather to avoid bad ones.
Thus, partial pivoting is the strategy most often implemented.
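As an illustration of Step 1.3', the sketch below performs Gaussian elimination with partial pivoting on an ill-scaled system of the kind examined in Problem 2.9; the right-hand side is illustrative, since the original is not fully legible, and the function name is my own.

```python
import numpy as np

def gauss_partial_pivot(A, b):
    """Solve AX = b by Gaussian elimination with partial pivoting (Step 1.3')."""
    M = np.hstack([A.astype(float), b.reshape(-1, 1).astype(float)])
    n = len(b)
    for R in range(n):
        # Step 1.3': choose the row at or below R whose entry in column R is largest in magnitude
        I = R + int(np.argmax(np.abs(M[R:, R])))
        M[[R, I]] = M[[I, R]]              # interchange rows R and I
        M[R] = M[R] / M[R, R]              # scale so the pivot is 1
        for k in range(R + 1, n):
            M[k] -= M[k, R] * M[R]         # eliminate below the pivot
    x = np.zeros(n)                        # back substitution
    for R in range(n - 1, -1, -1):
        x[R] = M[R, -1] - M[R, R + 1:n] @ x[R + 1:]
    return x

A = np.array([[0.00001, 1.0], [1.0, 1.0]])   # ill-scaled coefficients as in Problem 2.9
b = np.array([1.0, 2.0])                     # illustrative right-hand side
print(gauss_partial_pivot(A, b))
```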
2.3 Write the following system of equations in matrix form, and then determine its augmented
matrix:
2.4 Determine the set of simultaneous equations that corresponds to the augmented matrix
[A | B] = [ 1  2/3  1/3  -4/3 | 1/3 ]
          [ 0   1  -2/5    1  | -1  ]
          [ 0   0    0     0  |  0  ]
It follows from Problem 1.17 that the rank of [C |D] is 2. Submatrix C is also in row-echelon form, and it
also has rank 2. Thus, the original set of equations is consistent.
Now, using the results of Problem 2.4, we write
the set of equations associated with [C | D]. Solving these equations by back substitution, with the unknowns that do not correspond to leading entries treated as arbitrary parameters, gives the general solution of the original system.
: The rank of the coefficient matrix A is thus 2, and because there are three unknowns in the original set
as of equations, the system has nontrivial solutions. The set of equations associated with the augmented
matrix [C |D} is
two nonzero equations in the three unknowns, together with the identity 0 = 0. Solving them expresses x₁ and x₂ in terms of the arbitrary unknown x₃, which yields the nontrivial solutions.
(a) We write the system in matrix form, rounding 1.00001 to 1.000. Then we transform the augmented
matrix into row-echelon form using the algorithm of Chapter 1, in the following steps:
Transforming the augmented matrix with the algorithm of Chapter 1, rounding to four significant figures at each step, yields a row-echelon form whose second equation reduces to x₂ = 1.
(Note that we round to —100000 twice in the next-to-last step.) The resulting augmented
matrix shows that the system is consistent. The equations associated with this matrix are
x, + 100000x, = 100000
x,=1
which have the solution x, =0 and x, =1. However, substitution into the Fie aes equation
shows that this is not the solution to the original system.
(b) Transforming the augmented matrix into row-echelon form using partial pivoting yields
a different sequence of steps: rows 1 and 2 are interchanged because row 2 has the largest element in column 1, the current work column. Continuing the elimination and again rounding to four significant figures produces a row-echelon form whose associated equations give a good approximation to the solution of the original system.
In transforming this matrix, we need to use Step 1.3’ immediately, with R = 1 and C = 1. The largest
element in absolute value in column | is —5, appearing in row 3. We interchange the first and third rows,
and then continue the transformation to row-echelon form.
We next apply Step 1.3' with R = 2 and C = 2. Considering only rows 2 and 3, we find that the largest
element in absolute value in column 2 is 4.2, so / = 2 and no row interchange is required. Continuing
____with the Gaussian elimination, we calculate
Continuing in this manner produces the row-echelon form, from which the solution follows by back substitution.

We add a column consisting of these scale factors to the augmented matrix for the system and then transform it to row-echelon form, at each stage choosing as pivot the element that is largest in absolute value relative to the scale factor of its row.
Solve the system of Problem 2.10 using complete pivoting. We add the bookkeeping row 0 to the
augmented matrix of Problem 2.10. Then, beginning with row 1, we transform the remaining rows into
row-echelon form.
With R = 1 and C = 1, the largest element in absolute value in the lower left submatrix is 17, in row 3 and column 3. We first interchange rows 1 and 3, and then columns 1 and 3, and continue the elimination to row-echelon form.
2.13 Gauss-Jordan elimination adds a step between Steps 2.3 and 2.4 of the algorithm for Gaussian
elimination. Once the augmented matrix has been reduced to row-echelon form, it is then
reduced still further. Beginning with the last pivot element and continuing sequentially
backward to the first, each pivot element is used to transform all other elements in its column
to zero.
Use Gauss-Jordan elimination to solve Problem 2.8.
The first two steps of the Gaussian elimination algorithm are used to reduce the augmented matrix to row-echelon form as in Problems 1.13 and 2.8. The set of equations associated with the resulting augmented matrix is x₁ = 1, x₂ = 2, and x₃ = …; this is the solution set for the original system (no back substitution is required).
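A compact way to see the extra backward sweep is in code. The sketch below is my own illustration (matrix, tolerance, and names are assumptions, not from the text): the forward loop produces the row-echelon form, and the final loop is the Gauss-Jordan step that clears the entries above each pivot.

```python
import numpy as np

def gauss_jordan(M, tol=1e-12):
    """Reduce an augmented matrix [A | B] to reduced row-echelon form."""
    M = M.astype(float).copy()
    rows, cols = M.shape
    pivots, R = [], 0
    for C in range(cols - 1):                   # forward sweep (Gaussian elimination)
        if R == rows:
            break
        I = R + int(np.argmax(np.abs(M[R:, C])))
        if abs(M[I, C]) < tol:
            continue
        M[[R, I]] = M[[I, R]]
        M[R] /= M[R, C]
        for k in range(R + 1, rows):
            M[k] -= M[k, C] * M[R]
        pivots.append(C)
        R += 1
    for r in reversed(range(len(pivots))):      # backward sweep (the Gauss-Jordan step)
        C = pivots[r]
        for k in range(r):
            M[k] -= M[k, C] * M[r]              # clear entries above each pivot
    return M

aug = np.array([[1.0,  2.0,  1.0, 4.0],
                [2.0,  1.0, -1.0, 1.0],
                [1.0, -1.0,  2.0, 3.0]])        # illustrative [A | B]
print(gauss_jordan(aug))                        # the solution appears in the last column
```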
Supplementary Problems
2.15 Determine which of the following are solutions to the system
x₁ + 3x₂ + x₃ = 5
2x₁ + x₂ - 3x₃ = 18
x₁ + 7x₂ + 5x₃ = 1
(a) x₁ = x₂ = x₃ = 1   (b) x₁ = 8, x₂ = 1, x₃ = 0   (c) x₁ = 12, x₂ = -3, x₃ = 2   (d) x₁ = 2, x₂ = -2, x₃ = 9
Write the augmented matrix for the system givenin Problem 2.15.
What would be the result of solving this system by working with only four significant figures?
Use Gaussian elimination to determine values of k for which solutions exist to the following systems, and
then find the solutions:
Chapter 3
Square Matrices
DIAGONALS
A matrix is square if it has the same number of rows and columns. Its general form ts then
A = [ a₁₁ a₁₂ ⋯ a₁ₙ ]
    [ a₂₁ a₂₂ ⋯ a₂ₙ ]
    [  ⋮   ⋮       ⋮ ]
    [ aₙ₁ aₙ₂ ⋯ aₙₙ ]
The elements a₁₁, a₂₂, a₃₃, ..., aₙₙ lie on and form the diagonal, also called the main diagonal or principal diagonal. The elements a₁₂, a₂₃, ..., aₙ₋₁,ₙ immediately above the diagonal elements form the superdiagonal, and the elements a₂₁, a₃₂, ..., aₙ,ₙ₋₁ immediately below the diagonal elements form the subdiagonal.
LU FACTORIZATION
A square matrix A can often be factored as
A = LU          (3.1)
where L is a lower triangular matrix and U is an upper triangular matrix with unit diagonal.
Crout’s reduction is an algorithm for calculating the elements of L and U. In this procedure, the
first column of L is determined first, then the first row of U, the second column of L, the second row
of U, the third column of L, the third row of U, and so on until all elements have been found. The
order of L and U is the same as that of A, which we here assume is n X n.
STEP 3.1: Initialization: If a₁₁ = 0, stop; factorization is not possible. Otherwise, the first column of L is the first column of A; the remaining elements of the first row of L are zero. The first row of U is the first row of A divided by l₁₁ = a₁₁; the remaining elements of the first column of U are zero. Set a counter at N = 2.
STEP 3.2: For i = N, N + 1, ..., n, set Lᵢ′ equal to that portion of the ith row of L that has already been determined. That is, Lᵢ′ consists of the first N - 1 elements of the ith row of L.
STEP 3.3: For j = N, N + 1, ..., n, set Uⱼ′ equal to that portion of the jth column of U that has already been determined. That is, Uⱼ′ consists of the first N - 1 elements of the jth column of U.
A system of simultaneous linear equations in matrix form is AX = B, which, in light of Eq. (3.1), may be rewritten as L(UX) = B. To obtain X, we first decompose A and then solve the system associated with
LY=B (3.2)
for Y. Then, once Y is known, we solve the system associated with
UX=Y (3.3)
for X. Both (3.2) and (3.3) are easy to solve—the first by forward substitution, and the second by
_ backward substitution. (See Problem 3.7.)
When A is a square matrix, LU factorization and Gaussian elimination are equally efficient for solving a single set of equations. LU factorization is superior when the system AX = B must be solved repeatedly with different right sides, because the same LU factorization of A is used for all B. (See Problem 3.8.) A drawback with LU factorization is that the factorization does not exist when a pivot element is zero. However, this rarely occurs in practice, and the problem can usually be eliminated by reordering the equations. Gaussian elimination is applicable to all systems, and for that reason is often the preferred algorithm.
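In code, the factor-once/solve-many pattern looks like the following sketch. Here SciPy's lu_factor and lu_solve stand in for the Crout reduction of this chapter; note that these routines use partial pivoting, so they actually produce a permuted factorization PA = LU. The matrix and right-hand sides are illustrative.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[ 2.0,  1.0, 1.0],
              [ 4.0, -6.0, 0.0],
              [-2.0,  7.0, 2.0]])          # illustrative coefficient matrix

lu, piv = lu_factor(A)                     # factor A once (with partial pivoting)

# solve AX = B repeatedly for different right-hand sides, reusing the factorization
for b in (np.array([5.0, -2.0, 9.0]), np.array([1.0, 0.0, 0.0])):
    x = lu_solve((lu, piv), b)
    print(x, np.allclose(A @ x, b))        # each solve reuses the same factors
```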
A = [ 1  2 -1 ]
    [ 3  8  9 ]
    [ 2 -1  2 ]
The matrix A consists of the first three columns of the matrix considered in Problem 1.13, so the
same sequence of elementary row operations utilized in that problem will convert this matrix to
row-echelon form. The elementary matrices corresponding to those operations are, sequentially,
E₁, E₂, ..., E₅. Multiplying these elementary matrices together in the same order gives a matrix P for which PA is the row-echelon form of A; that is, PA is upper triangular with unit diagonal.
3.6 Factor the following matrix into an upper triangular matrix and a lower triangular matrix.
Using Crout’s reduction, we have
Applying Steps 3.1 through 3.4 repeatedly determines, in turn, each column of L and each row of U.
STEP 3.5: Since N = 4 = n, the factorization is done. We have A = LU.
3.8 Solve the system of equations given in Problem 3.7 if the right side of the second equation is
changed from —11 to 11.
The coefficient matrix A is unchanged, so both L and U are as they were. From (3.2),
Solving this system sequentially from top to bottom, we obtain y₁ = 5, y₂ = 4/3, y₃ = -22/5, and y₄ = -28/17. With these values and U as given in Problem 3.6, (3.3) becomes a system that is solved sequentially from bottom to top, yielding the solution of interest; in particular, x₄ = -28/17.
Direct computation for the 3 × 3 matrix under consideration shows that A² - 9A + 10I = 0.
A square matrix A is said to be nilpotent if Aᵖ = 0 for some positive integer p. If p is the least positive integer for which Aᵖ = 0, then A is said to be nilpotent of index p. Show that the given 3 × 3 matrix A is nilpotent of index 3.
That is indeed the case: squaring A gives a nonzero matrix A², while A³ = AA² = 0.
In Problems 3.23 through 3.29, use LU factorization to solve for the unknowns.
THE INVERSE
Matrix B is the inverse of a square matrix A if
AB = BA=I | (4.1)
For both products to be defined simultaneously, A and B must be square matrices of the same order.
A square matrix is said to be singular if it does not have an inverse; a matrix that has an inverse is called nonsingular or invertible. The inverse of A, when it exists, is denoted as A⁻¹.
STEP 4.4: Beginning with the last column of C and progressing backward iteratively through the
second column, use elementary row operation E3 to transform all elements above the
diagonal of C to zero. Apply each operation, however, to the entire matrix [C |D].
Denote the result as {I| B]. The matrix B is the inverse of the original matrix A.
(See Problems 4.5 through 4.7.) If exact arithmetic is not used in Step 4.2, then a pivoting strategy
(see Chapter 2) should be employed. No pivoting strategy is used in Step 4.4; the pivot is always one
of the unity elements on the diagonal of C. Interchanging any rows after Step 4.2 has been completed
will undo the work of that step and, therefore, is not allowed.
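For comparison, a library inverse performs the same computation as Steps 4.1 through 4.4. A brief sketch with an illustrative matrix, checking both products required by Eq. (4.1):

```python
import numpy as np

A = np.array([[5.0, 3.0, 1.0],
              [2.0, 1.0, 0.0],
              [1.0, 0.0, 2.0]])            # illustrative nonsingular matrix

A_inv = np.linalg.inv(A)                   # library equivalent of Steps 4.1-4.4
print(np.allclose(A @ A_inv, np.eye(3)))   # AB = I
print(np.allclose(A_inv @ A, np.eye(3)))   # BA = I, so A_inv is the inverse (Eq. 4.1)
```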
so G is the inverse of C.
G andD do not have the same order, so they cannot be inverses of one another.
Both matrices are lower triangular. Since A has a zero element on its main diagonal, it does not have an inverse. In contrast, all the elements on the main diagonal of B are nonzero, so B has an inverse, which itself must be lower triangular.
The left side of this partitioned matrix is in row-echelon form. Since it contains no zero rows, the original
matrix has an inverse. Applying Step 4.4 to the second column, we obtain
Therefore, A⁻¹ is the right half of the resulting partitioned matrix.
The left side of this partitioned matrix is in row-echelon form and contains no zero rows; thus, the original matrix has an inverse. Applying Step 4.4 then produces the inverse on the right half of the partition.
X = A⁻¹B = [0, 5/2, -1/2]ᵀ
The solution is x, =0, x, =5/2, x, = -1/2.
4.12 Prove that (AB)⁻¹ = B⁻¹A⁻¹ if both A and B are invertible.
(AB)⁻¹ is, by definition, the inverse of AB. Furthermore,
(B⁻¹A⁻¹)(AB) = B⁻¹(A⁻¹A)B = B⁻¹IB = B⁻¹B = I
(AB)(B⁻¹A⁻¹) = A(BB⁻¹)A⁻¹ = AIA⁻¹ = AA⁻¹ = I
so B⁻¹A⁻¹ is also an inverse of AB. These inverses must be equal as a consequence of Problem 4.10.
Each Eᵢ is an elementary matrix of either the second or third kind, so each is lower triangular and invertible. It follows from Problem 4.12 that if
P = EₖEₖ₋₁⋯E₂E₁          (2)
then P⁻¹ = (EₖEₖ₋₁⋯E₂E₁)⁻¹ = E₁⁻¹E₂⁻¹⋯Eₖ₋₁⁻¹Eₖ⁻¹
P⁻¹ is thus the product of lower triangular matrices and is itself lower triangular (Problem 3.17). From (1), PA = U, whereupon A = P⁻¹U, which exhibits A as the product of a lower triangular matrix and an upper triangular matrix.
Supplementary Problems
In Problems 4.15 through 4.26 find the inverse of the given matrix if it exists.
Chapter 5
Determinants
EXPANSION BY COFACTORS
The determinant of a square matrix A, denoted det A or |A|, is a scalar. If the matrix is written out as an array of elements, then its determinant is indicated by replacing the brackets with vertical lines. For 1 × 1 matrices,
det A = |a₁₁| = a₁₁
For 2 × 2 matrices,
det A = | a₁₁ a₁₂ |
        | a₂₁ a₂₂ | = a₁₁a₂₂ - a₁₂a₂₁
Property 5.3: If B is formed from a square matrix A by interchanging two rows or two columns of
A, then det A= —detB.
Property 5.4: If B is formed from a square matrix A by multiplying every element of a row or
column of A by a scalar k, then det A= (1/k) det B.
Property 5.5: If B is formed from a square matrix A by adding a constant times one row (or
column) of A to another row (or column) of A, then det A = det B.
Property 5.6: If one row or one column of a square matrix is zero, its determinant is zero.
Property 5.7: det A’ = det A, provided A is a square matrix.
Property 5.8: If two rows of a square matrix are equal, its determinant is zero.
Property 5.9: A matrix A (not necessarily square) has rank k if and only if it possesses at least one
k X k submatrix with a nonzero determinant while all square submatrices of larger
order have zero determinants.
Property 5.10: If A has an inverse, then det A⁻¹ = 1/det A.
INVERSION BY DETERMINANTS
The cofactor matrix Aᶜ associated with a square matrix A is obtained by replacing each element of A with its cofactor. If det A ≠ 0, then
A⁻¹ = (1/det A)(Aᶜ)ᵀ          (5.3)
If det A is zero, then A does not have an inverse. (See Problems 5.9 through 5.11 and Problems 5.18
through 5.20.) The method given in Chapter 4 for inversion is almost always quicker than using
(5.3), with 2 x 2 and 3 x 3 matrices being exceptions.
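Formula (5.3) can be coded directly for small matrices: form the cofactor matrix, transpose it, and divide by det A. A minimal sketch (the function name and test matrix are my own):

```python
import numpy as np

def inverse_by_cofactors(A):
    """Invert A via Eq. (5.3): A^{-1} = (1/det A) * (cofactor matrix)^T."""
    n = A.shape[0]
    det_A = np.linalg.det(A)
    if np.isclose(det_A, 0.0):
        raise ValueError("det A = 0, so A has no inverse")
    C = np.zeros_like(A, dtype=float)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)   # cofactor of a_ij
    return C.T / det_A

A = np.array([[2.0, 3.0], [4.0, 5.0]])
print(inverse_by_cofactors(A))             # [[-2.5, 1.5], [2.0, -1.0]]
print(np.linalg.inv(A))                    # agrees with the library inverse
```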
Evaluate the determinant of the given 3 × 3 matrix by expanding along (a) the second row and (b) the third column. Both expansions yield the same value for the determinant.
Verify that det AB = det A det B (Property 5.1) for the matrices given in Problems 5.2 and 5.3.
From the results of those problems, we know that det A det B = (-45)(-24) = 1080. Computing the product AB and expanding its determinant along the first row gives det AB = 1080 as well, so the property is verified.
We shall use Eq. (5.3). Since the determinant of a 1 × 1 matrix is the element itself, the cofactors are easy to compute, and the inverse follows directly.
5.13 Prove that the determinant of an elementary matrix of the first kind is —1.
An elementary matrix E of the first kind is an identity matrix with two rows interchanged. The proof
is inductive on the order of E. If E is 2 x 2, then
and det E= —1. Now assume the proposition is true for all elementary matrices of the first kind with
order (k — 1) x (k — 1), and consider an elementary matrix E of order k x k. Find the first row of E that
was not interchanged, and denote it as row m. Expanding by cofactors along row m yields
det E = a_{m,1}A_{m,1} + a_{m,2}A_{m,2} + ⋯ + a_{m,k}A_{m,k}
Since row m of E contains a single nonzero element, a 1 in the (m, m) position, this reduces to det E = A_{m,m} = (-1)^{m+m}M_{m,m} = M_{m,m}.
But M_{m,m} is the determinant of an elementary matrix of the first kind having order (k - 1) × (k - 1), so by induction it is equal to -1. Thus, det E = A_{m,m} = M_{m,m} = -1.

5.14 Prove Property 5.3.
If B is obtained from A by interchanging two rows of A, then B = EA, where E is an elementary matrix of the first kind. Using Property 5.1 and the result of Problem 5.13, we obtain
det B = det EA = det E det A = -det A
Prove that if the determinant of a matrix A is zero, then the matrix does not have an inverse.
Assume that A does have an inverse. Then 1 = det I = det (AA⁻¹) = det A det A⁻¹ = 0 · det A⁻¹ = 0, which is absurd; hence A cannot have an inverse.
Prove that if each element of the ith row of an n × n matrix is multiplied by the cofactor of the corresponding element of the kth row (i, k = 1, 2, ..., n; i ≠ k), then the sum of these n products is zero.
For any n × n matrix A, construct a new matrix B by replacing the kth row of A with its ith row (i, k = 1, 2, ..., n; i ≠ k). The ith and kth rows of B are identical, for both are the ith row of A; it follows from Property 5.8 that det B = 0. Evaluating det B via expansion by cofactors along its kth row, we may write
0 = det B = b_{k,1}B_{k,1} + b_{k,2}B_{k,2} + ⋯ + b_{k,n}B_{k,n} = a_{i,1}A_{k,1} + a_{i,2}A_{k,2} + ⋯ + a_{i,n}A_{k,n}
since the cofactors of the kth row of B are identical to those of the kth row of A (row k is deleted when they are formed), while the elements of the kth row of B are those of the ith row of A.
Supplementary Problems
5.21 Find (a) det A and (b) det B, and (c) show that det AB = det A det B.
5.22 Find (a) det C and (b) det D, and (c) show that det CD = detC det D.
we 5.23 Find (a) det E and (b) det F.
Chapter 6
Vectors
DIMENSION
A vector is a matrix having either one row or one column. The number of elements in a row
vector or a column vector is its dimension, and the elements are called components. The transpose of
a row vector is a column vector, and vice versa.
A set of vectors {V₁, V₂, ..., Vₖ} of the same dimension and type is linearly dependent if there exist constants c₁, c₂, ..., cₖ, not all zero, such that
c₁V₁ + c₂V₂ + ⋯ + cₖVₖ = 0          (6.1)
The vectors are linearly independent if the only constants satisfying (6.1) are c₁ = c₂ = ⋯ = cₖ = 0.
Example 6.2 The vector [-3, 4, -1, 0, 2]ᵀ is a linear combination of the vectors of Example 6.1 because
[-3, 4, -1, 0, 2]ᵀ = 0[1, 0, -2, 0, 0]ᵀ + 1[2, 0, 3, 0, 0]ᵀ + 2[0, 2, 0, 0, 1]ᵀ + (-1)[5, 0, 4, 0, 0]ᵀ
PROPERTIES OF LINEARLY DEPENDENT VECTORS
Property 6.1: Every set of m+ 1 or more m-dimensional vectors of the same type (either row or
column) is linearly dependent.
Property 6.2: An ordered set of nonzero vectors is linearly dependent if and only if one vector can be written as a linear combination of the vectors that precede it.
Property 6.3: If a set of vectors is linearly independent, then any subset of those vectors is also linearly independent.
Property 6.4: If a set of vectors is linearly dependent, then any larger set containing this set is also linearly dependent.
Property 6.5: Every set of vectors of the same dimension and type that contains the zero vector is linearly dependent.
Solved Problems
6.1 Determine whether the set {[{1, 1,3], [2, —1,3], [0, 1,1], [4,4,3]} is linearly independent.
Since the set contains more vectors (four) than the dimension of its member vectors (three), the
vectors are linearly dependent by Property 6.1. They are thus not linearly independent.
6.2 Determine whether the set {{1, 2, —1, 6], [3, 8,9, 10], [2, —1, 2, —2]} is linearly independent.
Using Steps 6.1 through 6.3, we first construct
V = [ 1  2 -1  6 ]
    [ 3  8  9 10 ]
    [ 2 -1  2 -2 ]
Matrix V was transformed in Problem 1.13 into a row-echelon form having three nonzero rows. The rank of V is therefore 3, which equals the number of vectors in the given set; hence, the given set of vectors is linearly independent.
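The rank test used in Problems 6.1 through 6.3 is one call in NumPy; the rows below are the vectors of Problem 6.2:

```python
import numpy as np

vectors = [[1, 2, -1, 6],
           [3, 8,  9, 10],
           [2, -1, 2, -2]]                 # the rows of V from Problem 6.2

V = np.array(vectors, dtype=float)
rank = np.linalg.matrix_rank(V)
print(rank == len(vectors))                # True: the set is linearly independent
```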
6.3 Determine whether the set {[3, 2, 1, -4, 1]ᵀ, [2, 3, 0, -1, -1]ᵀ, [1, -6, 3, -8, 7]ᵀ} is linearly independent.
Using the procedure of this chapter, we construct a matrix V whose rows are the given vectors and transform it to row-echelon form; the number of nonzero rows in that form is the rank of V, which is then compared with the number of vectors in the set.
6.6 Prove that every set of m + 1 or more m-dimensional vectors of the same type (either row or
column) is linearly dependent.
Consider a set of n-such vectors, with n > m. Equation (6.1) generates m-homogeneous equations
(one for each component of the vectors under consideration) in the n-unknowns c,,c,,...,C¢,. If we
were to solve those equations by Gaussian elimination (see Chapter 2), we would find that the solution
set has at least n — m arbitrary unknowns. Since these arbitrary unknowns may be chosen to be nonzero,
there exists a solution set for (6.1) which is not all zero; thus the n vectors are linearly dependent.
6.7 Prove that an elementary row operation of the first kind does not alter the row rank of a matrix.
Let B be obtained from matrix A by interchanging two rows. Clearly the rows of A form the same set of vectors as the rows of B; so A and B must have the same row rank.
d₁A₁ + d₂A₂ + ⋯ + dₙAₙ = 0
where, as noted, the constants d₁, d₂, ..., dₙ are not all zero. But this implies that A₁, A₂, ..., Aₙ are linearly dependent, which is a contradiction. Thus the column rank of A cannot be greater than the column rank of B.
A similar argument, with the roles of A and B reversed, shows that the column rank of B cannot be
greater than the column rank of A, so the two column ranks must be equal.
6.9 Prove that an elementary row operation of any kind does not alter the column rank of a
matrix.
Denote the original matrix as A, and the matrix obtained by applying an elementary row operation
to A as B. The two homogeneous systems of equations AX = 0 and BX = 0 have the same set of solutions
(see Chapter 2). Thus, as a result of Problem 6.8, A and B have the same column rank.
6.10 Prove that the row rank and column rank of any matrix are identical.
Assume that the row rank of an m X n matrix A is r, and its column rank is c. We wish to show that
r = c. Rearrange the rows of A so that the first r rows are linearly independent and the remaining m - r rows are linear combinations of the first r rows. It follows from Problems 6.7 and 6.9 that the column rank and row rank of A remain unaltered. Denote the rows of A as A₁, A₂, ..., Aₘ, in order.
6.12 Problem 6.8 suggests the following algorithm for choosing a maximal subset of linearly
independent vectors from any given set: Construct a matrix A whose columns are the given set
of vectors, and transform the matrix into row-echelon form U using elementary row
operations. Then AX = 0 has the same solution set as UX = 0, which implies that any subset of
the columns of A are linearly independent vectors if and only if the same subset of columns of
U are linearly independent. Now the columns of U containing the first nonzero element in
each of the nonzero rows of U are a maximal set of linearly independent column vectors for U,
so those same columns in A are a maximal set of linearly independent column vectors for A.
Use this algorithm to choose a maximal set of linearly independent vectors from [3, 2, 1], [2, 3, -6], [1, 0, 3], [-4, -1, -8], and [1, -1, 7].
We form the matrix
A whose columns are the given vectors, and transform it into row-echelon form U. The first and second columns of U contain the first nonzero element in each nonzero row of U; therefore, the first and second columns of A constitute a maximal set of linearly independent column vectors for A. That is, [3, 2, 1] and [2, 3, -6] are linearly independent, and all the other vectors in the original set are linear combinations of those two.
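The pivot-column idea of this algorithm is exactly what SymPy's rref reports. The sketch below uses a set of vectors patterned on this problem (one of the vectors is only partly legible in this copy, so the list should be taken as illustrative):

```python
import sympy as sp

vectors = [[3, 2, 1], [2, 3, -6], [1, 0, 3], [-4, -1, -8], [1, -1, 7]]  # illustrative

A = sp.Matrix([list(row) for row in zip(*vectors)])   # the vectors become the columns of A
_, pivot_cols = A.rref()                              # pivot columns of the echelon form

maximal_subset = [vectors[j] for j in pivot_cols]
print(pivot_cols)        # indices of a maximal linearly independent set of columns
print(maximal_subset)    # here: the first two vectors; the rest are combinations of them
```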
Suppose the set is linearly dependent, and let i be the first integer between 2 and n for which {V₁, V₂, ..., Vᵢ} forms a linearly dependent set. Such an integer must exist, and at the very worst i = n. Then there exists a set of constants d₁, d₂, ..., dᵢ, not all zero, such that
d₁V₁ + d₂V₂ + ⋯ + dᵢ₋₁Vᵢ₋₁ + dᵢVᵢ = 0
Furthermore, dᵢ ≠ 0, for otherwise the set {V₁, V₂, ..., Vᵢ₋₁} would be linearly dependent, contradicting the defining property of i. Hence,
Vᵢ = -(d₁/dᵢ)V₁ - (d₂/dᵢ)V₂ - ⋯ - (dᵢ₋₁/dᵢ)Vᵢ₋₁
That is, Vᵢ can be written as a linear combination of the vectors that precede it.
On the other hand, suppose that for some i (i = 2, 3, ..., n)
Vᵢ = d₁V₁ + d₂V₂ + ⋯ + dᵢ₋₁Vᵢ₋₁
Then d₁V₁ + d₂V₂ + ⋯ + dᵢ₋₁Vᵢ₋₁ + (-1)Vᵢ + 0Vᵢ₊₁ + ⋯ + 0Vₙ = 0
This is (6.1) with cᵢ = -1 ≠ 0, cₖ = dₖ (k = 1, ..., i - 1), and cₖ = 0 (k = i + 1, i + 2, ..., n). So the set of vectors is linearly dependent.
Supplementary Problems
6.26 Choose a maximal subset of linearly independent vectors from those given in Problem 6.16.
6.27 Choose a maximal set of linearly independent vectors from the following: [1, 2, 1, -1], [1, 0, -1, 2], [2, 2, 0, 2], [0, 1, 1, 0], and [3, 3, 0, 3].
6.28 An m-dimensional vector V is a convex combination of the m-dimensional vectors V₁, V₂, ..., Vₙ of the same type (row or column) if there exist nonnegative constants d₁, d₂, ..., dₙ whose sum is 1, such that V = d₁V₁ + d₂V₂ + ⋯ + dₙVₙ. Show that [5/3, 5/6] is a convex combination of the vectors [1, 1], [3, 0], and [1, 2].
6.29 Determine whether [0, 7]' can be written as a convex combination of the vectors
6.30 Prove that if {V₁, V₂, ..., Vₙ} is linearly independent and V cannot be written as a linear combination of this set, then {V₁, V₂, ..., Vₙ, V} is also linearly independent.
Chapter 7
Eigenvalues and Eigenvectors

A nonzero column vector X is a (right) eigenvector of a square matrix A if there exists a scalar λ, called an eigenvalue of A, such that AX = λX; equivalently,
(A - λI)X = 0          (7.1)

Example 7.1 [1, -1]ᵀ is an eigenvector corresponding to the eigenvalue λ = -2 for the matrix
A = [  3  5 ]
    [ -2 -4 ]
because
[  3  5 ][ 1 ]   [ -2 ]      [  1 ]
[ -2 -4 ][-1 ] = [  2 ] = -2 [ -1 ]

CHARACTERISTIC EQUATION
The characteristic equation of an n × n matrix A is the nth-degree polynomial equation
det(A - λI) = 0          (7.2)
Solving the characteristic equation for λ gives the eigenvalues of A, which may be real or complex, and some of which may be repeated. Once an eigenvalue is determined, it may be substituted into (7.1), and then that equation may be solved for the corresponding eigenvectors. (See Problems 7.1 through 7.3.) The left side of (7.2) is known as the characteristic polynomial of A.
COMPUTATIONAL CONSIDERATIONS
There are no theoretical difficulties in determining eigenvalues, but there are practical ones.
First, evaluating the determinant in (7.2) for an n X n matrix requires approximately n! multiplica-
| tions, which for large n is a prohibitive number. Second, obtaining the roots of a general
characteristic polynomial poses an intractable algebraic problem. Consequently, numerical algor-
{ ithms are employed for determining the eigenvalues of large matrices (see Chapters 19 and 20).
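In practice one therefore calls a library eigenvalue routine rather than forming and factoring the characteristic polynomial. A quick sketch using the 2 × 2 matrix of Problem 7.1 as reconstructed above (eigenvalues 1 and -2):

```python
import numpy as np

A = np.array([[ 3.0,  5.0],
              [-2.0, -4.0]])               # matrix of Problem 7.1 (reconstructed)

eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)                          # approximately 1 and -2

# each column of `eigenvectors` satisfies A x = lambda x
for lam, x in zip(eigenvalues, eigenvectors.T):
    print(np.allclose(A @ x, lam * x))
```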
For λ = 1, (7.1) is equivalent to the set of linear equations
2x₁ + 5x₂ = 0
-2x₁ - 5x₂ = 0
The solution to this system is x₁ = -(5/2)x₂ with x₂ arbitrary, so the eigenvectors corresponding to λ = 1 are
X = [x₁, x₂]ᵀ = [-(5/2)x₂, x₂]ᵀ = x₂[-5/2, 1]ᵀ
with x₂ arbitrary.
When λ = -2, (7.1) may be written (A + 2I)X = 0,
which is equivalent to the set of linear equations
5x₁ + 5x₂ = 0
-2x₁ - 2x₂ = 0
The solution to this system is x₁ = -x₂ with x₂ arbitrary, so the eigenvectors corresponding to λ = -2 are
X = [x₁, x₂]ᵀ = [-x₂, x₂]ᵀ = x₂[-1, 1]ᵀ
(4 - i2)x₁ + 4x₂ = 0
-5x₁ + (-4 - i2)x₂ = 0
The solution to this system is x₁ = (-4/5 - i2/5)x₂ with x₂ arbitrary; the eigenvectors corresponding to λ = -1 + i2 are thus
X = [x₁, x₂]ᵀ = x₂[-4/5 - i2/5, 1]ᵀ
with x₂ arbitrary.
With λ = -1 - i2, the corresponding eigenvectors are found in a similar manner to be
X = [x₁, x₂]ᵀ = x₂[-4/5 + i2/5, 1]ᵀ
with x₂ arbitrary.
7.5 Choose a maximal set of linearly independent eigenvectors for the matrix of the preceding problem. For the repeated eigenvalue, the eigenvectors involve two arbitrary scalars.
There is one linearly independent eigenvector associated with this eigenvalue, and it may be obtained by
choosing x, to be any nonzero scalar. A convenient choice here is x, =1. Collecting the linearly
independent eigenvectors for the two eigenvalues, we have
Pal 1
as a maximal set of linearly independent eigenvectors for the matrix.
7.6 Choose a maximal set of linearly independent eigenvectors for the matrix
Since this matrix is upper triangular, its eigenvalues are the elements on its main diagonal. Thus, λ = 2 is an eigenvalue of multiplicity five. The eigenvectors associated with this eigenvalue contain three arbitrary scalars; because there are three arbitrary scalars, there are three linearly independent eigenvectors associated with this eigenvalue.
generated by sequentially multiplying each equation on the left by A. This system can be written in the
matrix form .
1 1 1 1 en, 0
A, A; 3 Ain CK, 0
pt, AS Axess RF c,X, |=| 0 , (4)
whichis not zero in this situation because all the eigenvalues are different. As a result Q is nonsingular,
__ and the system (4) can be written as
Thus, X is an eigenvector of A. Note that a nonzero constant times an eigenvector is also an eigenvector
corresponding to the same eigenvalue.
7.13 A left eigenvector of a matrix A is a nonzero row vector X having the property that XA = AX
or, equivalently, that
X(A—Al) = 0 (1)
for some scalar A. Again A is an eigenvalue for A, and it is found as before. Once A is determined, it is
substituted into (1) and then that equation is solved for X. Find the eigenvalues and left eigenvectors for
A = [  3  5 ]
    [ -2 -4 ]
The eigenvalues were found in Problem 7.1 to be A= 1 andA= —2. SetX=[x,,x,]. WithA=1,(I)
becomes X(A - I) = 0, which is equivalent to the set of equations
2x₁ - 2x₂ = 0
5x₁ - 5x₂ = 0
so x₁ = x₂ and the left eigenvectors corresponding to λ = 1 are X = x₂[1, 1]. With λ = -2, (1) yields 5x₁ - 2x₂ = 0, so the left eigenvectors corresponding to λ = -2 are X = x₁[1, 5/2].
The characteristic equation for A was determined in Problem 7.1 to be A? +A—2=0. Substituting
A for A, we obtain
A² + A - 2I = [ -1 -5 ] + [  3  5 ] - [ 2 0 ] = [ 0 0 ]
              [  2  6 ]   [ -2 -4 ]   [ 0 2 ]   [ 0 0 ]
The characteristic equation for A was found in Problem 7.2 to be -λ³ + 20λ² - 93λ + 126 = 0.
Therefore, we evaluate -A³ + 20A² - 93A + 126I; direct computation shows that this combination is the 3 × 3 zero matrix, as the Cayley-Hamilton theorem requires.
Supplementary Problems
In Problems 7.18 through 7.26, find the eigenvalues and corresponding eigenvectors of the given matrices.
7.41 Verify the Cayley-Hamilton theorem for the matrix in (a) Problem 7.18; (b) Problem 7.24; and (c)
Problem 7.30.
7.42 Show that if A is an eigenvalue of A with corresponding eigenvector X, then X is also an eigenvector of
A’ corresponding to A°.
7.43 Show that if A is an eigenvalue of A with corresponding eigenvector X, then for any scalar c, X is an
eigenvector of A — cI corresponding to the eigenvalue A — c.
7.44 Prove that the trace of a square matrix is equal to the sum of the eigenvalues of that matrix.
Chapter 8
Functions of Matrices
WELL-DEFINED FUNCTIONS
If a function f(z) of a complex variable z has a Maclaurin series expansion
f(z) = Σₙ₌₀^∞ aₙzⁿ
which converges for |z| < R, then the matrix series Σₙ₌₀^∞ aₙAⁿ converges, provided A is square and each of its eigenvalues has absolute value less than R. In such a case, f(A) is defined by this series and is called a well-defined function of A. By convention, A⁰ = I. (See Problems 8.1 and 8.2.)
STEP 8.3: lf A, is an eigenvalue of multiplicity k, for k>1, then formulate also the following
equations, involving derivatives of f(A) and r(A) with respect to A:
STEP 8.4: Solve the set of all equations obtained in Steps 8.2 and 8.3 for the unknown scalars a₀, a₁, ..., aₙ₋₁.
Once the scalars determined in Step 8.4 are substituted into (8.1), f(A) may be
calculated. (See Problems 8.4 through 8.6.)
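For a matrix whose eigenvalues are distinct, Steps 8.1 through 8.4 amount to solving a small Vandermonde system for a₀, ..., aₙ₋₁. The sketch below applies the idea to f(A) = e^A for an illustrative matrix and compares the result with a library routine; all names and the test matrix are my own.

```python
import numpy as np
from scipy.linalg import expm

def matrix_function(A, f):
    """Evaluate f(A) = r(A) for a matrix A with distinct eigenvalues (Steps 8.1-8.4)."""
    lam = np.linalg.eigvals(A)
    n = len(lam)
    V = np.vander(lam, n, increasing=True)       # rows: 1, lambda, lambda^2, ...
    a = np.linalg.solve(V, f(lam))               # coefficients a_0, ..., a_{n-1} (Step 8.4)
    return sum(a[k] * np.linalg.matrix_power(A, k) for k in range(n))

A = np.array([[1.0, 2.0],
              [0.0, 3.0]])                       # illustrative matrix, eigenvalues 1 and 3
print(matrix_function(A, np.exp))                # polynomial-in-A evaluation of e^A
print(expm(A))                                   # library value, for comparison
```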
THE FUNCTION e*
For a given constant square matrix A and real variable t, the matrix function e^{At} is computed by setting f(A) = e^{At} in (8.1) and then calculating the scalars a₀, a₁, ..., aₙ₋₁ as described in the preceding section. (See Problems 8.7 through 8.10.) The eigenvalues of At are the eigenvalues of A multiplied by t (see Property 7.5). Note that (8.3) involves derivatives with respect to λ and not t; the correct sequence of steps is to first take the derivatives with respect to λ and then substitute the eigenvalues of At. The reverse order (substituting λᵢt, a function of t, into (8.2) and then taking derivatives with respect to t) can give erroneous results.
In (8.4) and (8.5), the matrices e““~, e~**, and e““” are easily computed from e*’ by
replacing the variable ¢ with ¢—,, —s, and t— 5, respectively. Usually, X(t) is obtained more easily
from (8.5) than from (8.4), because the former involves one fewer matrix multiplication. However,
the integrals arising in (8.5) are generally more difficult to evaluate that those in (8.4). (See
Problems 8.13 and 8.14.)
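The matrix exponential itself is available as scipy.linalg.expm, so the homogeneous part of the solution, X(t) = e^{A(t - t₀)}X(t₀), can be evaluated directly. A minimal sketch with an illustrative system:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[ 0.0,  1.0],
              [-2.0, -3.0]])               # illustrative constant coefficient matrix
x0 = np.array([1.0, 0.0])                  # initial condition X(t0) at t0 = 0

for t in (0.5, 1.0, 2.0):
    x_t = expm(A * t) @ x0                 # X(t) = e^{A(t - t0)} X(t0), homogeneous case
    print(t, x_t)
```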
Example 8.2 For A = I and B = 0 the matrix equation has the unique solution X = C, but the integral (8.6) diverges.
Solved Problems
which converges for all values of z having absolute value less than 1. Therefore,
arctan A = A - (1/3)A³ + (1/5)A⁵ - (1/7)A⁷ + ⋯
is well defined for any square matrix whose eigenvalues are all less than 1 in absolute value. The given matrix A has eigenvalues λ₁ = 0 and λ₂ = 4. Since the second of these eigenvalues has absolute value greater than 1, arctan A is not well defined for this matrix.
The Maclaurin series for sin z converges for all finite values of z, so sin A is well defined for all matrices. For the given 3 × 3 matrix, (8.1) becomes
sin A = a₂A² + a₁A + a₀I          (1)
Matrix A has eigenvalue λ = -2 with multiplicity three, so we will have to use Step 8.3. We determine
f(λ) = sin λ          r(λ) = a₂λ² + a₁λ + a₀
f′(λ) = cos λ         r′(λ) = 2a₂λ + a₁
f″(λ) = -sin λ        r″(λ) = 2a₂
and write (8.2) and (8.3) as, respectively,
sin(-2) = a₂(-2)² + a₁(-2) + a₀
cos(-2) = 2a₂(-2) + a₁
-sin(-2) = 2a₂
Solving these equations, we obtain a₂ = -(1/2) sin(-2) = 0.454649, a₁ = cos(-2) - 2 sin(-2) = 1.40245, and a₀ = sin(-2) + 2a₁ - 4a₂ ≈ 0.0770. Substituting these values into (1) and simplifying gives the matrix sin A.
Supplementary Problems
8.19 Determine the limit of each of the following sequences of matrices as k goes to ∞:
Chapter 9
Canonical Bases
GENERALIZED EIGENVECTORS
A vector Xₘ is a generalized (right) eigenvector of rank m for the square matrix A and associated eigenvalue λ if
(A - λI)ᵐXₘ = 0   but   (A - λI)ᵐ⁻¹Xₘ ≠ 0
(See Problem 9.1 through 9.4.) Right eigenvectors, as defined in Chapter 7, are generalized
eigenvectors of rank 1.
CHAINS
A chain generated by a generalized eigenvector Xₘ of rank m associated with the eigenvalue λ is the set of vectors {Xₘ, Xₘ₋₁, ..., X₁} defined recursively as
Xⱼ = (A - λI)Xⱼ₊₁          (j = m - 1, m - 2, ..., 1)          (9.1)
(See Problems 9.5 and 9.6.) A chain is a linearly independent set of generalized eigenvectors of descending ranks. The number of vectors in the set is called the length of the chain.
CANONICAL BASES
generalized eigenvectors associated with A. Form the chain generated by this vector,
and include it in the basis. Return to Step 9.4.
(See Problems 9.10 through 9.13.)
9.3 Find a generalized eigenvector of rank 2 corresponding to the eigenvalue A = 4 for the matrix
We seek a four-dimensional vector X, =[x,, ¥5, x, X,]’ such that (A — 41)°X, = 0 and (A — 41)X, #
0. We have
Carrying out the multiplications shows that (A - 4I)²X₂ = 0 forces one of the components to be zero, while (A - 4I)X₂ ≠ 0 requires a nonzero leading component. A simple choice is X₂ = [1, 0, 0, 0]ᵀ.
9.6 Determine the chain that is generated by the generalized eigenvector of rank 2 found in
Problem 9.3.
From Problem 9.3 we have X, =[1,0,0,0]’, corresponding to A = 4. Using (9.1), we write
X₁ = (A - 4I)X₂ = [0, 1, -1, 0]ᵀ
The chain is {X₂, X₁} = {[1, 0, 0, 0]ᵀ, [0, 1, -1, 0]ᵀ}.
9.7 Show that if X,,, is a generalized eigenvector of rank m for matrix A and eigenvalue A, then X,
as defined by (9.1) is a generalized eigenvector of rank j corresponding to the same matrix
and eigenvalue.
Since X,, is a generalized eigenvector of rank m,
(A - λI)ᵐXₘ = 0   and   (A - λI)ᵐ⁻¹Xₘ ≠ 0
It follows from Eq. (9.1) that
Xⱼ = (A - λI)Xⱼ₊₁ = (A - λI)ᵐ⁻ʲXₘ
Therefore (A - λI)ʲXⱼ = (A - λI)ʲ(A - λI)ᵐ⁻ʲXₘ = (A - λI)ᵐXₘ = 0
and (A - λI)ʲ⁻¹Xⱼ = (A - λI)ʲ⁻¹(A - λI)ᵐ⁻ʲXₘ = (A - λI)ᵐ⁻¹Xₘ ≠ 0
which together imply that Xⱼ is a generalized eigenvector of rank j for A and λ.
9.8 Show that a chain is a linearly independent set of vectors.
The proof is inductive on the length of the chain; a chain of length one consists of a single nonzero vector and is therefore linearly independent.
The eigenvalues for this matrix were found in Problem 7.1 to be A= 1 and A= —2. Since they are
distinct, a canonical basis for A will consist of one eigenvector for each eigenvalue. Eigenvectors
corresponding to A = 1 were determined in Problem 7.1 as xf—5/2,1]’ with x, arbitrary, We set x, = 2
to avoid fractions, and obtain the single eigenvector (—5, 2]’. The eigenvectors associated with A = —2
are x,[~ 1,1]’ with x, again arbitrary. Selecting x, =1 in this case, we obtain the single eigenvector
[—1, 1]'. A canonical basis for A is thus[—5, 2]’, [—1, 1]’.
9.11 Find a canonical basis for the matrix given in Problem 9.10.
We first find the vectors in the basis corresponding to A = 4, using the information obtained in the
solution to Problem 9.10. There is one generalized eigenvector of rank ρ = 3, which we denote as X₃ = [x₁, x₂, x₃, x₄, x₅, x₆]ᵀ. To have (A - 4I)³X₃ = 0 certain components must be zero, and to have (A - 4I)²X₃ ≠ 0 we must have x₃ ≠ 0. A simple choice is X₃ = [0, 0, 1, 0, 0, 0]ᵀ, which generates as the rest of its chain
Another generalized eigenvector of rank 2 for λ = 3, linearly independent of the one obtained from (1), is Y₂ = [0, 1, 0, 0]ᵀ; it generates the remaining vectors of the canonical basis.
(A - 3I)³ has rank 1. Thus, ρ = 3, and N₃ = 2 - 1 = 1, N₂ = 3 - 2 = 1, and N₁ = 5 - 3 = 2.
A generalized eigenvector of rank ρ = 3 for λ = 3 is X₃ = [0, 0, 1, 0, 0]ᵀ, which generates the rest of its chain via X₂ = (A - 3I)X₃ and X₁ = (A - 3I)X₂.
Supplementary Problems
Determine which of the following are generalized eigenvectors of rank 3 corresponding to A = | for the
matrix
Find the chain generated by X,=[0, 0, 0,0, , a parncraliged eigenvector of rank 4 corresponding to
A= 1 for the matrix given inProblem 9.ae e
Chapter 10
Similarity
SIMILAR MATRICES
A matrix A is similar to a matrix B if there exists an invertible matrix S such that
A = S⁻¹BS          (10.1)
If A is similar to B, then B is also similar to A and both matrices must be of the same order and
square.
Property 10.1: Similar matrices have the same characteristic equation and, therefore, the same eigenvalues and the same trace.
Property 10.2: If X is an eigenvector of A associated with eigenvalue λ and (10.1) holds, then Y = SX is an eigenvector of B associated with the same eigenvalue.
(See Problems 10.1 through 10.3 and 10.43.)
Bb Sadar Teh Baie se:
fi ss gaat kate tos
ao 4 taie bat’ dpoi
\L MATRI ix
Benneine
4
-
eS ar
<7
SIMILARITY (CHAP. 10
Ji
where D denotes a diagonal matrix (whose diagonal elements need not be equal) and J, (i=
1,2,...,k) represents a Jordan block. Although the diagonal elements in any one Jordan block
must be equal, different Jordan blocks within the Jordan canonical form may have different
diagonals. (See Problem 10.7.)
For a single r × r Jordan block J with diagonal element λ,
f(J) = [ f(λ)  f′(λ)/1!  f″(λ)/2!  ⋯  f⁽ʳ⁻¹⁾(λ)/(r-1)! ]
       [  0     f(λ)     f′(λ)/1!  ⋯  f⁽ʳ⁻²⁾(λ)/(r-2)! ]
       [  ⋮                   ⋱              ⋮          ]
       [  0      0          0      ⋯        f(λ)        ]
where all derivatives are taken with respect to λ. For a matrix in Jordan canonical form, f is applied block by block: f of the matrix is the block-diagonal matrix whose blocks are f of the individual Jordan blocks.
Solved Problems
A canonical basis for A was found in Problem 9.13. It consists of one chain of length three: [-1, 1, 0]ᵀ, [-1, 0, 1]ᵀ, and [2, 3, 6]ᵀ. Since these three vectors form a full complement of generalized eigenvectors, they are a canonical basis for A. A modal matrix for A is then
M = [ -1 -1 2 ]
    [  1  0 3 ]
    [  0  1 6 ]
Any permutation of the columns of M will produce another equally acceptable modal matrix.
10.7 Determine which of the following matrices are in Jordan canonical form:
' . . . . i . ; a bat, : P . v3
m __ All three matrices are in Jordan canonical form: A, because it isa diagonal matrix; B, because it is
-. inthe form - a .
The chain of length three, {X₁, X₂, X₃}, corresponds to the eigenvalue λ = 3, so it generates the Jordan block
J₁ = [ 3 1 0 ]
     [ 0 3 1 ]
     [ 0 0 3 ]
A is thus similar to a matrix in Jordan canonical form having this block on its diagonal.
10.10 Find a matrix J in Jordan canonical form that is similar to the matrix A of Problem 10.5.
In Problem 10.5, we found that M = [Z₁, X₁, X₂, X₃, Y₁, Y₂]. The single generalized eigenvector of rank 1, Z₁, corresponds to the eigenvalue 7 and generates the 1 × 1 diagonal submatrix of J comprised of this eigenvalue. The chain of length three, {X₁, X₂, X₃}, corresponds to the eigenvalue 4 and generates the Jordan block
J₁ = [ 4 1 0 ]
     [ 0 4 1 ]
     [ 0 0 4 ]
The chain of length two, {Y₁, Y₂}, also corresponds to the eigenvalue 4 and generates the Jordan block
J₂ = [ 4 1 ]
     [ 0 4 ]
10.12 Verify (10.2) for a modal matrix consisting solely of generalized eigenvectors of rank 1.
Denote the columns of M as E,,E,,...,E,, where each E, (i=1,2,..., m) is an eigenvector of A.
Thus, AEᵢ = λᵢEᵢ. The eigenvalues λ₁, λ₂, ..., λₙ of A need not be distinct. Now define J = diag(λ₁, λ₂, ..., λₙ). Then
AM = A[E₁, E₂, ..., Eₙ] = [AE₁, AE₂, ..., AEₙ] = [λ₁E₁, λ₂E₂, ..., λₙEₙ] = [E₁, E₂, ..., Eₙ]J = MJ
from which (10.2) follows.

Here f(λ) = sin λ, f′(λ) = cos λ, f″(λ) = -sin λ, and f‴(λ) = -cos λ, so f(2) = sin 2, f′(2) = cos 2, f″(2) = -sin 2, and f‴(2) = -cos 2. It follows from (10.4) that
Supplementary Problems
10.20 Determine which of the following pairs of matrices are similar matrices:
10.29 The matrix in Problem 10.23. 10.30 The matrix in Problem 10.24.
Chapter 11
Inner Products
COMPLEX CONJUGATES
The complex conjugate of a scalar z = a + ib (where a and b are real) is z̄ = a - ib; the complex conjugate of a matrix A is the matrix Ā whose elements are the complex conjugates of the elements of A. The following properties are valid for scalars x and y and matrices A and B:
(C2): x is real if and only if x̄ = x; and A is a real matrix if and only if Ā = A.
(C3): x + x̄ is a real scalar; and A + Ā is a real matrix.
(C4): The conjugate of xy is x̄ȳ; and the conjugate of AB is ĀB̄ if the latter product is defined.
(C5): The conjugate of x + y is x̄ + ȳ; and the conjugate of A + B is Ā + B̄ if the latter sum is defined.
(C6): xx̄ = |x|² is always real and positive, except that xx̄ = 0 when x = 0.
concept of perpendicularity under the Euclidean inner product when the vectors are real and
restricted to two or three dimensions.
A set of vectors is orthogonal if each vector in the set is orthogonal to every other vector in that
set. Such a set is linearly independent when the vectors are all nonzero. (See Problem 11,27.)
GRAM-SCHMIDT ORTHOGONALIZATION
Every finite set of linearly independent vectors {X₁, X₂, ..., Xₙ} has associated with it an orthogonal set of nonzero vectors {Q₁, Q₂, ..., Qₙ} with respect to a specified inner product, such that each vector Qⱼ (j = 1, 2, ..., n) is a linear combination of X₁ through Xⱼ. The following algorithm for producing the vectors Qⱼ is called the Gram-Schmidt orthogonalization process.
STEP 11.1: Set Q1 = X1. Steps 11.2 through 11.5 then produce each remaining Qj (j = 2, 3, ..., n) by subtracting from Xj its projections, computed with the specified inner product, onto the previously constructed vectors Q1 through Qj-1, keeping the nonzero remainder as Qj.
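To make the projection-and-subtraction step concrete, here is a minimal sketch in Python/NumPy, assuming the Euclidean inner product; the function name gram_schmidt and the sample vectors are illustrative choices, not the book's notation, and the step numbering above may group the operations differently.

```python
import numpy as np

def gram_schmidt(X):
    """Orthogonalize the columns of X with respect to the Euclidean inner product."""
    X = np.asarray(X, dtype=float)
    Q = []
    for j in range(X.shape[1]):
        v = X[:, j].copy()
        for q in Q:
            # subtract the projection of the current vector onto each earlier Q
            v -= (v @ q) / (q @ q) * q
        Q.append(v)
    return np.column_stack(Q)

# three linearly independent vectors as columns
X = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
Q = gram_schmidt(X)
print(np.round(Q.T @ Q, 10))   # off-diagonal entries are zero
```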
Calculate (X, Y)W if W is a singular matrix. In that case the inner product (X, Y)W is not defined, even though the matrix operations on the right side of Eq. (11.1) can be performed. When the matrix W is singular, it is always possible to find a nonzero vector Z for which (Z, Z)W = 0, thereby violating Property 11.2.
Use the Gram-Schmidt orthogonalization process with the Euclidean inner product to
construct an orthogonal set of vectors associated with {X1, X2, X3, X4} when

    X1 = [0, 1, 1, 1]'    X2 = [1, 0, 1, 1]'    X3 = [1, 1, 0, 1]'    X4 = [1, 1, 1, 0]'

These vectors can be shown to be linearly independent (see Chapter 6). Using Steps 11.1 through 11.5, we calculate Q1 through Q4 in turn.
11.15 X,
11.17 X,
11.19 X,
Chapter 12
Norms
VECTOR NORMS
A norm for an arbitrary finite-dimensional vector X, denoted ||X||, is a real-valued function
satisfying the following four conditions for all vectors X and Y of the same dimension:
(N1): ||X|| ≥ 0.
(N2): ||X|| = 0 if and only if X = 0.
(N3): ||cX|| = |c| ||X|| for any scalar c.
(N4) (Triangle inequality): ||X + Y|| ≤ ||X|| + ||Y||.
A vector norm is a measure of the length or magnitude of a vector. Just as there are various bases for
measuring scalar length—such as feet and meters—there are alternative norms for measuring the
length of a vector. Some of the more common vector norms for X = [x1, x2, ..., xn]' are:
The inner-product-generated norm:  ||X||W = sqrt((X, X)W)
The Euclidean (or l2) norm:        ||X||2 = (|x1|² + |x2|² + ... + |xn|²)^(1/2)
The l1 norm:                       ||X||1 = |x1| + |x2| + ... + |xn|
The l∞ norm:                       ||X||∞ = max(|x1|, |x2|, ..., |xn|)
The lp norm:                       ||X||p = (|x1|^p + |x2|^p + ... + |xn|^p)^(1/p)

The Euclidean norm is the lp norm with p = 2, and it is a special case of the inner-product-generated norm. (See Problems 12.1 through 12.3.)
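The common vector norms above are one-line computations; the short Python/NumPy sketch below evaluates them for a sample vector (the variable names and the choice p = 3 are illustrative assumptions).

```python
import numpy as np

X = np.array([1.0, 2.0, 3.0])

l1   = np.sum(np.abs(X))                    # l1 norm: 6
l2   = np.sqrt(np.sum(np.abs(X) ** 2))      # Euclidean norm: sqrt(14)
linf = np.max(np.abs(X))                    # l-infinity norm: 3
p = 3
lp   = np.sum(np.abs(X) ** p) ** (1.0 / p)  # general lp norm
print(l1, l2, linf, lp)
```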
Because of the added consistency condition (M5), not all vector norms can be extended to become
matrix norms. (See Problem 12.6.) Two that can be extended are the /, norm (see Problem 12.7) and
the Euclidean norm. For the n × n matrix A = [a_ij], the Euclidean norm becomes

    ||A|| = ( Σ_i Σ_j |a_ij|² )^(1/2)
INDUCED NORMS

Each vector norm induces (or generates) the matrix norm

    ||A|| = max ||AX||,  the maximum taken over all unit vectors X        (12.1)

Inequality (12.3) provides bounds on the eigenvalues of a matrix. (See Problems 12.17 and 12.18.)

An equivalent expression for the spectral radius is

    σ(A) = lim_{m→∞} ||A^m||^(1/m)        (12.4)
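As an illustration of these quantities, the sketch below computes the Frobenius norm, the max-column-sum and max-row-sum norms, the spectral norm, and a finite-power approximation to the limit (12.4); the sample matrix, the power m, and the identification of max column sum with the l1-induced norm (the standard convention) are assumptions of this sketch rather than statements of the text.

```python
import numpy as np

A = np.array([[1.0, -2.0],
              [3.0,  4.0]])

frobenius = np.sqrt(np.sum(np.abs(A) ** 2))                # Euclidean (Frobenius) norm
col_sum   = np.max(np.sum(np.abs(A), axis=0))              # max absolute column sum
row_sum   = np.max(np.sum(np.abs(A), axis=1))              # max absolute row sum
spectral  = np.sqrt(np.max(np.linalg.eigvalsh(A.T @ A)))   # largest singular value

# spectral radius and the limit (12.4) approximated with a finite power m
sigma  = np.max(np.abs(np.linalg.eigvals(A)))
m      = 60
approx = np.linalg.norm(np.linalg.matrix_power(A, m)) ** (1.0 / m)
print(frobenius, col_sum, row_sum, spectral, sigma, approx)
```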
Solved Problems
1 4 Pade @)
X=|2 Y=|5. w=/0 11
3 a ee eee
12.5 Normalize the vector X given in Problem 12.1 with respect to (a) the l2 norm, (b) the l1 norm, and (c) the l∞ norm.

Using the results of Problem 12.2, we obtain the normalized vectors (a) [1/√14, 2/√14, 3/√14]'; (b) [1/6, 2/6, 3/6]'; and (c) [1/3, 2/3, 1]'. Each of these vectors is a unit vector with respect to its associated norm.
Show that the l∞ norm for vectors does not extend to a matrix norm.

The l∞ norm is simply the largest component of a vector in absolute value, and its extension to matrices would be the largest element of a matrix in absolute value; that is, ||A|| = max |a_ij| over all i and j. This function fails the consistency condition (M5), so it does not define a matrix norm.
Calculate the (a) Frobenius norm, (b) L1 norm, (c) L∞ norm, and (d) spectral norm for

    A = [ 7  -2   0]
        [-4  -6   0]
        [ 0   0  -9]

(a) ||A||F = {(7)² + (-4)² + (0)² + (-2)² + (-6)² + (0)² + (0)² + (0)² + (-9)²}^(1/2) = 13.638.
(b) ||A||1 = max(|7| + |-2| + |0|, |-4| + |-6| + |0|, |0| + |0| + |-9|) = max(9, 10, 9) = 10.
(c) ||A||∞ = max(|7| + |-4| + |0|, |-2| + |-6| + |0|, |0| + |0| + |-9|) = max(11, 8, 9) = 11.
(d) Here we have

    A^H A = [65 10  0]
            [10 40  0]
            [ 0  0 81]

which has eigenvalues 68.5078, 36.4922, and 81. Thus, ||A||2 = max(√68.5078, √36.4922, √81) = 9.
12.11 Show that the l1 vector norm induces the L1 matrix norm under Eq. (12.1).

Set ||A||1 = max (||AX||1), the maximum taken over all unit vectors X. Then ||A||1 is a matrix norm as a result of Problem 12.10. Denote the columns of A as vectors A1, A2, ..., An, and set

    k = max(||A1||1, ||A2||1, ..., ||An||1)

We wish to show that ||A||1 = k. For any unit vector X = [x1, x2, ..., xn]',

    ||AX||1 = ||x1A1 + x2A2 + ... + xnAn||1
            ≤ ||x1A1||1 + ||x2A2||1 + ... + ||xnAn||1
            = |x1| ||A1||1 + |x2| ||A2||1 + ... + |xn| ||An||1
            ≤ (|x1| + |x2| + ... + |xn|)k = k
12.13 Show that an induced matrix norm with its associated vector norm satisfy the compatibility condition ||AY|| ≤ ||A|| ||Y||.

The inequality is immediate when Y = 0. For any nonzero vector Y, Y/||Y|| is a unit vector, and

    ||A|| = max(||AX||) ≥ ||A(Y/||Y||)|| = ||AY|| / ||Y||

so ||AY|| ≤ ||A|| ||Y||.
12.14 Show that ||A|| = max over unit vectors X of ||AX|| = max over X ≠ 0 of (||AX||/||X||).

    ||A|| = max over ||X|| = 1 of ||AX|| = max over ||X|| = 1 of (||AX||/||X||) ≤ max over X ≠ 0 of (||AX||/||X||)

where the inequality follows from taking the maximum over a larger set of vectors. Conversely, for any X ≠ 0 the vector X/||X|| is a unit vector, so ||AX||/||X|| = ||A(X/||X||)|| ≤ max over unit vectors Z of ||AZ|| = ||A||. Thus, the two maxima are equal.
Supplementary Problems
Determine the L, norms of the matrices in Problem 12.28.
Prove the Pythagorean theorem for an inner-product-generated vector norm; that is, prove that if (X, Y) = 0, then ||X + Y||² = ||X||² + ||Y||².
Chapter 13
Hermitian Matrices
NORMAL MATRICES
The Hermitian transpose of a matrix A, denoted A”, is the complex conjugate transpose of A;
that is, A^H = (Ā)'. A matrix A is normal if A^H A = A A^H.
The sum of Hermitian matrices is Hermitian, as is the product of a Hermitian matrix with a real scalar. Every Hermitian matrix is also normal, because A^H = A gives A^H A = A A^H. (See Problems 13.1 and 13.2.)
For the special case W = I (the Euclidean inner product), (13.4) reduces to

    A* = A^H        (13.5)

(See Problems 13.16 through 13.18.) Adjoints satisfy the following identities:

(A1): (A*)* = A.
(A2): (A + B)* = A* + B*.
(A3): (AB)* = B*A*.
(A4): (cA)* = c̄A* for any scalar c.
SELF-ADJOINT MATRICES

A matrix A is self-adjoint if it equals its own adjoint. Such a matrix is necessarily square, and it satisfies

    (AX, Y)W = (X, AY)W        (13.6)

for all vectors X and Y with respect to the specified inner product.
13.5 Show that if X is an eigenvector of a normal matrix A corresponding to eigenvalue λ, then X is also an eigenvector of A^H corresponding to λ̄.

Using the Euclidean inner product and (13.1), we obtain

    (AX, AX) = (A^H AX, X) = (AA^H X, X) = (A^H X, A^H X)

It then follows that

    0 = (0, 0) = (AX - λX, AX - λX)
              = (AX, AX) - λ̄(AX, X) - λ(X, AX) + λλ̄(X, X)
              = (A^H X, A^H X) - λ̄(X, A^H X) - λ(A^H X, X) + λλ̄(X, X)
              = (A^H X - λ̄X, A^H X - λ̄X)

so A^H X - λ̄X = 0; that is, A^H X = λ̄X.
13.8 Determine a canonical basis of orthonormal eigenvectors with respect to the Euclidean inner
product for

    A = [ 2  2 -2]
        [ 2  2 -2]
        [-2 -2  6]

The matrix is real and symmetric and, therefore, normal. The eigenvalues for A are 0, 2, and 8, and
a corresponding set of eigenvectors is

    X1 = [1, -1, 0]'    X2 = [1, 1, 1]'    X3 = [-1, -1, 2]'

Since each eigenvector corresponds to a different eigenvalue, the vectors are guaranteed to be
orthogonal with respect to the Euclidean inner product. Dividing each vector by its Euclidean norm, we
obtain the orthonormal set of eigenvectors

    [1/√2, -1/√2, 0]'    [1/√3, 1/√3, 1/√3]'    [-1/√6, -1/√6, 2/√6]'

which is a canonical basis of orthonormal vectors with respect to the Euclidean inner product.
This last matrix is in upper triangular form, and the diagonal elements consist of one zero and two
positive numbers.
13.14 Show that if a matrix is upper triangular and normal, then it must be a diagonal matrix.

Let A = [a_ij] be an n × n upper triangular matrix that is also normal. Then a_ij = 0 for i > j. We show sequentially, for i = 1, 2, ..., n - 1, that a_ij = 0 when i < j. Since

    A^H A = A A^H        (1)

it follows from equating the (1,1) elements of the two products in (1) that

    |a11|² = |a11|² + |a12|² + ... + |a1n|²

Thus,

    a1j = 0    (j = 2, 3, ..., n)        (2)

Next, equating the (2,2) elements of the two products in (1) and using (2), we obtain

    |a22|² = |a22|² + |a23|² + ... + |a2n|²

so a2j = 0 (j = 3, 4, ..., n). Continuing in this manner through row n - 1 shows that every element above the main diagonal is zero, and A is diagonal.
13.17 Determine the adjoint of A under an inner product with respect to W, where
pt ol tag ae
a=|2, pred ang w=| “4 4
Using (13.4), we calculate
egy | = 18 wat i4 ‘l-l3 ae] :
hart Lo tihaker bi 14d te ed 1526 | a
eae ye ee prt 3 [2 21)
| = ae ogoe ilo 3-i4jli2 2)
; : _ [-2,077
+11,764 2,184+i2,574 ee
+e
a
‘Seag | | ~ | 1,428
+11,680 2,082—11,768 | veg
Supplementary Problems
13.22 Find a canonical basis of orthonormal vectors for matrix F in Problem 13.20.
Show that if A is an nXn skew-Hermitian matrix, then (AX,X) is pure imaginary for every
n-dimensional vector X.
Show that any real matrix can be written as the sum of a symmetric matrix and a skew-symmetric matrix, and show that any complex-valued matrix can be written as the sum of a Hermitian matrix and a skew-Hermitian matrix.
Chapter 14
Positive Definite Matrices
DEFINITE MATRICES
An n X n Hermitian matrix A is positive definite if
    (AX, X) > 0        (14.1)

for all nonzero n-dimensional vectors X; and A is positive semidefinite if

    (AX, X) ≥ 0        (14.2)
If the inequalities in (14.1) and (14.2) are reversed, then A is negative definite and negative
semidefinite, respectively.
The sum of two definite matrices of the same type is again a definite matrix of that type, as is the
Hermitian transpose of such a matrix. Positive (or negative) definite matrices are invertible, and their
inverses are also positive (or negative) definite.
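Because all eigenvalues of a positive definite Hermitian matrix are positive (this is one of the standard tests used later in the chapter), definiteness can be checked numerically from the eigenvalue signs. The sketch below is an illustration only; the function name, tolerance, and sample matrix are assumptions.

```python
import numpy as np

def definiteness(A, tol=1e-12):
    """Classify a Hermitian matrix by the signs of its eigenvalues."""
    eigs = np.linalg.eigvalsh(A)
    if np.all(eigs > tol):
        return "positive definite"
    if np.all(eigs >= -tol):
        return "positive semidefinite"
    if np.all(eigs < -tol):
        return "negative definite"
    if np.all(eigs <= tol):
        return "negative semidefinite"
    return "indefinite"

A = np.array([[ 6.0,  2.0, -2.0],
              [ 2.0,  6.0, -2.0],
              [-2.0, -2.0, 10.0]])
print(definiteness(A))   # positive definite (eigenvalues 4, 6, 12)
```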
i “4 > ‘
root is a well-defined function. In such cases it may be calculated by the methods given in Chapters 8
and 10. (See Problems 14.13 and 14.14.)
CHOLESKY DECOMPOSITION
Any positive definite matrix A may be factored into
    A = L L^H        (14.3)
where L is a lower triangular matrix having positive values on its diagonal. Equation (14.3) defines
the Cholesky decomposition for A, which is unique.
The following algorithm generates the Cholesky decomposition for an n x n matrix A= [a,,] by
sequentially identifying the columns of L on and below the main diagonal. It is a simplification of the
LU decomposition given in Chapter 3.
STEP 14.1: Initialization: Set all elements of L above the main diagonal equal to zero, and let
l11 = √a11. The remainder of the first column of L is the first column of A divided by
l11. Set a counter j = 2.

Steps 14.2 through 14.4 then compute, in turn, the diagonal element l_jj and (when j < n) the elements
l_ij (i = j + 1, j + 2, ..., n) below it, each from the corresponding element of A and the rows of L already
determined; j is then increased by 1, and the algorithm stops when j = n + 1.
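The column-by-column computation can be sketched compactly in Python/NumPy; this is an illustrative rendering of the idea (function name and test matrix are assumptions), not the book's step numbering.

```python
import numpy as np

def cholesky_lower(A):
    """Lower triangular L with positive diagonal such that A = L L^H,
    built one column at a time."""
    A = np.asarray(A, dtype=complex)
    n = A.shape[0]
    L = np.zeros((n, n), dtype=complex)
    for j in range(n):
        # diagonal element of column j
        s = A[j, j] - np.sum(np.abs(L[j, :j]) ** 2)
        L[j, j] = np.sqrt(s.real)
        # elements below the diagonal in column j
        for i in range(j + 1, n):
            L[i, j] = (A[i, j] - L[i, :j] @ L[j, :j].conj()) / L[j, j]
    return L

A = np.array([[4, 2j, -1j],
              [-2j, 10, 1],
              [1j, 1, 5]], dtype=complex)   # Hermitian positive definite
L = cholesky_lower(A)
print(np.allclose(L @ L.conj().T, A))       # True
```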
The three leading principal minors are

    det[6] = 6        det[6 2; 2 6] = 36 - 4 = 32        det[6 2 -2; 2 6 -2; -2 -2 10] = 288
Since all three principal minors are positive, the matrix is positive definite.
Test 14.3: The eigenvalues of A are 4, 6, and 12. Since all three are positive, the matrix is positive
definite.
14.2 Use the tests of this chapter to determine whether the following matrix is positive definite:

    A = [  2  10  -2 ]
        [ 10   5   8 ]
        [ -2   8  11 ]

Test 14.1: Adding -5 times the first row to the second row produces the new second row [0, -45, 18].
Since the second pivot, -45, is negative, A is neither positive definite nor positive
semidefinite. We can also rule out A being either negative definite or negative
semidefinite, because the first pivot, 2, is positive.
    A = [  2  -17    7 ]
        [-17   -4    1 ]
        [  7    1  -14 ]

A is not positive definite because it fails tests 14.4, 14.5, and 14.6: Its diagonal elements are not all
positive; the largest element in absolute value, -17, is not on the main diagonal; and a11a22 = -8 is not
greater than |a12|² = 289.
14.5 Prove that the diagonal elements of a positive definite matrix must be positive.
If A has order n X n, define X to be an n-dimensional vector having one of its components, say the
kth, equal to unity and all other components equal to zero. For this vector, (14.1) becomes
    0 < (AX, X) = (AX)'X̄ = a_kk
14.6 Prove that if A = [a_ij] is an n × n positive definite matrix, then for any distinct i and j
(i, j = 1, 2, ..., n), a_ii a_jj > |a_ij|².

Define X to be an n-dimensional vector having all components equal to zero except for the ith and the
jth. Denote these as x_i and x_j, respectively. For this vector, (14.1) becomes

    0 < (AX, X) = (AX)'X̄ = a_ii x_i x̄_i + a_ij x_j x̄_i + a_ji x_i x̄_j + a_jj x_j x̄_j

Setting x_i = -a_ij / a_ii and x_j = 1, we find that the first two terms on the right cancel, and we are left with

    0 < -(a_ij a_ji)/a_ii + a_jj

Since a_ii > 0 (Problem 14.5) and a_ji = ā_ij for a Hermitian matrix, this gives a_ii a_jj > a_ij ā_ij = |a_ij|².
(Property 6.1). But the orthonormal eigenvectors are linearly independent (Problem 11.27), so it follows
from Property 6.2 that there exist constants d1, d2, ..., dn such that

    X = d1X1 + d2X2 + ... + dnXn

Then    AX = d1AX1 + d2AX2 + ... + dnAXn = d1λ1X1 + d2λ2X2 + ... + dnλnXn

and     (AX, X) = (d1λ1X1 + d2λ2X2 + ... + dnλnXn, d1X1 + d2X2 + ... + dnXn)
                = |d1|²λ1 + |d2|²λ2 + ... + |dn|²λn

because the eigenvectors are orthonormal. Since the eigenvalues are given to be positive, this last
quantity is positive for any nonzero vector X; thus the matrix A satisfies (14.1) and is positive definite.
14.11 Show that all principal minors of a positive definite matrix must be positive.

Let A be an n × n positive definite matrix, and let B be a submatrix of A obtained by deleting from A a
set of k rows and the corresponding k columns (1 ≤ k ≤ n - 1). Then B has order (n - k) × (n - k). Let Y
denote any nonzero (n - k)-dimensional vector, and let X be the n-dimensional vector that agrees with Y in
the retained positions and has zeros in the deleted positions. It follows from (14.1) that

    0 < (AX, X) = (BY, Y)

so B is positive definite; its determinant, which is a principal minor of A, is therefore positive.
has the property that its square is A. Only D is positive definite.

Construct the Cholesky decomposition for

    A = [  4   i2   -i ]
        [ -i2  10    1 ]
        [   i   1    5 ]
Since A is a 3 X 3 matrix, so too is L in (14.3).
STEP 14.1: Set l,, =V4=2; then /,, = —i2/2 =—i and /,, =i/2. Set j=2. To this point, then, we
have
    L = [  2    0    0 ]
        [ -i    ·    0 ]
        [ i/2   ·    · ]

where the dots denote elements not yet determined.
Supplementary Problems
14.17 Determine which of the following matrices are positive definite and which are positive semidefinite:
ae | 5
B Sti? | c
=t 3
i2 2 3. Atiz 32-22
Weide i F=|1+i2 ” 2+i
1h @ a= 12 244 9
G=
=3
Ovid
cog a) ee A
14.18 Find the square root of matrix A in Problem 14.17, given that its eigenvalues are 2, 3, and 6.
Chapter 15

Unitary Transformations

UNITARY MATRICES

A matrix is unitary if its inverse equals its Hermitian transpose; that is, U is unitary if

    U^-1 = U^H

Unitary matrices are normal because UU^H = UU^-1 = I = U^-1U = U^H U. In addition, they have the
following properties:

Property 15.1: A matrix is unitary if and only if its columns (or rows) form an orthonormal set of
vectors.
Property 15.2: The product of unitary matrices of the same order is a unitary matrix.
Property 15.3: If U is unitary, then (UX, UY) = (X, Y) for all vectors X and Y of appropriate
dimension.
Property 15.4: All eigenvalues of a unitary matrix have absolute value equal to 1.
Property 15.5: The determinant of a unitary matrix has absolute value equal to 1.

(See Problems 15.5 to 15.13 and 15.24.) Unitary matrices are invaluable for constructing
similarity transformations (see Chapter 10), because their inverses are so easy to obtain.

An orthogonal matrix is a unitary matrix whose elements are all real. If P is orthogonal, then
P^-1 = P'.
ELEMENTARY REFLECTORS
An elementary reflector (or Householder transformation) associated with a real n-dimensional
column vector V is the n X n matrix
    R = I - (2/||V||2²) VV'

An elementary reflector is both real symmetric and orthogonal, and its square is the identity matrix.
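A minimal Python/NumPy sketch of this construction is given below (the function name and the sample vector are assumptions); it also confirms the symmetry and involutory properties just stated.

```python
import numpy as np

def elementary_reflector(V):
    """Householder transformation R = I - (2/||V||^2) V V' for a real column vector V."""
    V = np.asarray(V, dtype=float).reshape(-1, 1)
    n = V.shape[0]
    return np.eye(n) - (2.0 / float(V.T @ V)) * (V @ V.T)

R = elementary_reflector([1.0, 2.0])
print(R)                                # [[ 0.6 -0.8], [-0.8 -0.6]]
print(np.allclose(R @ R, np.eye(2)))    # its square is the identity
print(np.allclose(R, R.T))              # and it is symmetric
```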
Solved Problems
15.2 Prove that a matrix is unitary if and only if its rows (or columns) form an orthonormal set of
vectors.

Designate the rows of U as U1, U2, ..., Un. Then the (i, j) element (i = 1, 2, ..., n; j = 1, 2, ..., n) of UU^H is

    (UU^H)_ij = Ui · Ūj = (Ui, Uj)
0 1
0 ger ael
which indicates that the first, second, and third vectors of the set form a maximal linearly independent
set. Applying the Gram-Schmidt process to the set {Y, E1, E2}, we obtain the orthonormal set
{Q1 = Y, Q2 = E1, Q3 = [0, 1/√2, 1/√2]'}. Then

    U = [    0      1      0   ]
        [ -1/√2     0    1/√2  ]
        [  1/√2     0    1/√2  ]
15.5 Prove that the product of unitary matrices of the same order is also a unitary matrix.

If A and B are unitary, then

    (AB)^-1 = B^-1 A^-1 = B^H A^H = (AB)^H

so AB is unitary.
This matrix possesses the eigenvalue A =3 with corresponding unit eigenvector ¥ =[1/V2, —1/V 2)’.
Using the procedure given in Problem 15.3 with n =2, we generate the unitary matrix
N= [INE 14]
which is expanded into
3 2/V2 0
0; 1/V2 sothat T,=U/T,U,=|0 3 2
O:-1/V2 1/V2 Gris Qe sny)
Setting U = U1U2, we have U^H AU = T2, a matrix in upper triangular form. In this case, all the elements
of U are real, so it is orthogonal.
A a nis Ber
a Pte. 1/3
\
2 2iV3) 21Vv2
U,= so that
2 2/V2 9
01 1/V2 -1/V2 Nips , ; :
O'1/V2 1/V2
Setting U = U1U2U3, we have U^H AU = T3, a matrix in upper triangular form.
15.10 Show that if U is unitary and A = U^H BU, then B is normal if and only if A is normal.

If B is normal, then B^H B = BB^H, and

    A^H A = (U^H BU)^H (U^H BU) = (U^H B^H U)(U^H BU) = (U^H B^H)(UU^H)(BU)
          = (U^H B^H)(UU^-1)(BU) = (U^H B^H)(BU) = U^H (B^H B)U = U^H (BB^H)U
          = (U^H B)(B^H U) = (U^H B)(UU^-1)(B^H U) = (U^H BU)(U^H B^H U)
          = (U^H BU)(U^H BU)^H = AA^H

so A is normal. Since B = UAU^H, the same argument with the roles of A and B interchanged establishes the converse.
Show that a normal matrix is unitarily similar to a diagonal matrix.

By the Schur decomposition, there is a unitary matrix U for which T = U^H AU is upper triangular. By Problem 15.10, T is normal, and by Problem 13.14 an upper triangular normal matrix must be diagonal.
15.13 Find elementary reflectors associated with (a) V, ={1,2]’ and (b) V, =[9, 3, —6)".
(a) We compute ||V,||, = V5, so
    R1 = I - (2/5) V1V1' = [ 3/5  -4/5 ]
                           [-4/5  -3/5 ]
- Provethatan
a nine. e¢
For any constant Gs i
Supplementary Problems
V3... .cAn v2
1/V3 -1/V2 0 C =(1/V3)
1/V3 0 —1/V2
15.17 Apply the procedure of Problem 15.3 to construct a unitary matrix having, as its first column, an
eigenvector corresponding to A = 3 for:
Chapter 16

Quadratic Forms and Congruence

QUADRATIC FORM

A quadratic form in the real variables x1, x2, ..., xn is a polynomial of the type

    Σ_i Σ_j a_ij x_i x_j        (16.1)

with real-valued coefficients a_ij. This expression has the matrix representation

    X'AX        (16.2)

with A = [a_ij] and X = [x1, x2, ..., xn]'. The quadratic form X'AX is algebraically equivalent to
X'{(A + A')/2}X. Since (A + A')/2 is symmetric, it is standard to use it rather than a nonsymmetric
matrix in expression (16.2). Thus, in what follows, we shall assume that A is symmetric. (See
Problems 16.1 and 16.2.)

A complex quadratic form is one that has the matrix representation

    X^H AX        (16.3)

with A assumed Hermitian. Expression (16.3) reduces to (16.2) when X and A are real valued, and
both expressions are equivalent to the Euclidean inner product (AX, X).
The inner product (AX, X) is real whenever A is Hermitian (Property 13.5). If this quantity is positive
(or nonnegative, negative, or nonpositive) for all nonzero vectors X, then the quadratic form and the
matrix A are positive definite (or positive semidefinite, negative definite, or negative semidefinite,
respectively).
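The equivalence between a quadratic form and its symmetric representation can be checked directly; the short sketch below (sample matrix and vector are illustrative assumptions) evaluates X'AX and X'{(A + A')/2}X and shows they agree.

```python
import numpy as np

A = np.array([[1.0, 4.0],
              [0.0, 3.0]])      # not symmetric
S = (A + A.T) / 2.0             # symmetric matrix representing the same form

X = np.array([2.0, -1.0])
print(X @ A @ X, X @ S @ X)     # both evaluate to the same number
```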
INERTIA
Every n × n Hermitian matrix of rank r is congruent to a unique matrix in the partitioned form

    [ I_k    0    0 ]
    [  0   -I_m   0 ]        (16.6)
    [  0     0    0 ]

where I_k and I_m are identity matrices of order k × k and m × m, respectively, with k + m = r. An inertia matrix is a
matrix having form (16.6).
Property 16.1: (Sylvester's law of inertia) Two Hermitian matrices are congruent if and only if
they are congruent to the same inertia matrix, and then they both have k positive
eigenvalues, m negative eigenvalues, and n — k — m zero eigenvalues. |
The integer k defined by form (16.6) is called the index of A, and s = k - m is called its signature.
An algorithm for obtaining the inertia matrix of a given matrix A is the following:

STEP 16.1: Construct the partitioned matrix [A | I], where I is an identity matrix having the same order as A.
RAYLEIGH QUOTIENT

The Rayleigh quotient for a Hermitian matrix A is the ratio

    R(X) = (AX, X) / (X, X)        (16.7)

Property 16.2: (Rayleigh's principle) If the eigenvalues of a Hermitian matrix A are ordered so
that λ1 ≤ λ2 ≤ ... ≤ λn, then

    λ1 ≤ R(X) ≤ λn        (16.8)

R(X) achieves its maximum when X is an eigenvector corresponding to λn; R(X)
achieves its minimum when X is an eigenvector corresponding to λ1. (See Problem 16.10.)
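A small numerical illustration of Rayleigh's principle follows; the function name and the 2 × 2 example are assumptions of this sketch.

```python
import numpy as np

def rayleigh_quotient(A, X):
    """R(X) = (AX, X)/(X, X) for a Hermitian A and a nonzero vector X."""
    X = np.asarray(X, dtype=complex)
    return float((np.vdot(X, A @ X) / np.vdot(X, X)).real)

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
print(np.linalg.eigvalsh(A))               # eigenvalues 1 and 3 bound every R(X)
print(rayleigh_quotient(A, [1.0, 1.0]))    # 3.0, attained at an eigenvector for 3
print(rayleigh_quotient(A, [1.0, 0.0]))    # 2.0, strictly between the extremes
```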
16.3 Determine whether the quadratic form given in Problem 16.1 is positive definite.
The results of Problems 16.1 and 14.2 indicate that the matrix representation of the quadratic form
is not positive definite. Therefore the quadratic form itself is not positive definite.
16.4 Determine whether the quadratic form given in Problem 16.2 is positive definite.
From the results of Problems 16.2 and 14.3, we determine that the quadratic form is not positive
definite because its matrix representation is not. The quadratic form is, however, positive semidefinite.
16.5 Transform the quadratic form given in Problem 16.1 into a diagonal quadratic form.

Given the result of Problem 16.1, we set

    A = [  2  10  -2 ]
        [ 10   5   8 ]
        [ -2   8  11 ]

This matrix has eigenvalues -9, 9, and 18, with corresponding orthonormal eigenvectors
Q1 = [2/3, -2/3, 1/3]', Q2 = [2/3, 1/3, -2/3]', and Q3 = [1/3, 2/3, 2/3]'. With

    U = [  2/3   2/3   1/3 ]
        [ -2/3   1/3   2/3 ]
        [  1/3  -2/3   2/3 ]

we have U'AU = diag(-9, 9, 18), so under the substitution X = UY the quadratic form becomes
-9y1² + 9y2² + 18y3².
addition, we interchange the first and second columns of A but make no corresponding change to the
columns in the right partition. Steps 16.1 through 16.6 are as follows:
t} etn ~3times
thefirst row to
the third row
te ~— row
i tehisd FW
CHAP. 16] QUADRATIC FORMS AND CONGRUENCE 149
Augmenting onto A the 4 x 4 identity matrix, and then reducing A to upper triangular form, one
column at a time and without using elementary row operation E2, we finally obtain
1
0 !
nN | i) > | nN —
0 0 0 Of} -1 -1
0 ©
Oo
or -—-
&
Oo
4 The left partition is in upper triangular form. Setting all elements above the main diagonal in that
partition equal to zero yields
Following Step 16.5, we next interchange the (2,2) diagonal element with the (3,3) diagonal element in the left
partition and simultaneously interchange the order of the second and third rows in the right partition.
triangular form. Under a congruence transformation, both sets of operations are applied to A, resulting
in a diagonal matrix. This is the rationale for Steps 16.1 through 16.3.
Interchanging the position of two diagonal elements of a diagonal matrix is equivalent to
interchanging both the rows and the columns in which the two diagonal elements appear. We
interchange only the designated rows in P, since a postmultiplication by P' will effect the same type of
column interchange automatically. This is the rationale for Steps 16.4 and 16.5.

Finally, a nonzero diagonal element d is made equal to 1 in absolute value by dividing its row and its
column by √|d|. Since the divisions will be done in tandem, we have Step 16.6.
    U^H AU = D = diag(λ1, λ2, ..., λn)
Using the results of Problem 16.13, determine whether quadratic forms (a) and (b) of Problem 16.11 are
congruent.
Using the results of Problem 16.13, determine how many positive and negative eigenvalues are
associated with the symmetric matrix corresponding to each quadratic form in Problem 16.11.
Chapter 17
Nonnegative Matrices
PRIMITIVE MATRICES
A nonnegative matrix is primitive if it is irreducible and has only one eigenvalue with absolute
value equal to its spectral radius. A nonnegative matrix is regular if one of its powers is a positive
matrix. A nonnegative matrix is primitive if and only if it is regular.
Property 17.9: If A is a nonnegative primitive matrix, then the limit L = lim_{m→∞} ({1/σ(A)}A)^m
exists and is positive. Furthermore, if X and Y are, respectively, left and right
positive eigenvectors of A corresponding to the eigenvalue equal to o(A) and scaled
so that YX = 1, then L= XY.
Positive matrices are primitive and have the limit described in Property 17.9. (See Problem 17.12.)
Reducible matrices may or may not have such a limit. (See Problem 17.13.)
STOCHASTIC MATRICES
A nonnegative matrix is stochastic if all its row sums or all its column sums equal 1. It is doubly
stochastic if all its row sums and all its column sums equal 1. It follows from Property 17.2 that the
spectral radius of such a matrix is unity. If the row (column) sums are all 1, then a right (left)
eigenvector corresponding to λ = 1 has all its components equal.

A stochastic matrix is ergodic if the only eigenvalue of absolute value 1 is 1 itself and, when λ = 1 has
multiplicity k, there exist k linearly independent eigenvectors corresponding to it.

Property 17.10: If P is ergodic, then L = lim_{m→∞} P^m exists.

A primitive stochastic matrix is ergodic with k = 1, and for it the limiting matrix has a simple form:
each row of L is the left eigenvector of P corresponding to λ = 1, scaled so that the sum of its
components equals 1.
represents the proportion of objects in each state at the beginning of the process. Necessarily,
X^(m) ≥ 0, and the sum of the components of X^(m) is 1 for each m = 0, 1, 2, .... Furthermore,

    X^(m) = X^(0) P^m

If P is primitive, then

    X^(∞) = lim_{m→∞} X^(m) = X^(0) L        (17.3)

which is the positive left eigenvector of P corresponding to λ = 1 and having the sum of its
components equal to unity. The ith component of X^(∞) represents the approximate proportion of
objects in state i after a large number of time periods, and this limiting value is independent of the
initial distribution defined by X^(0). If P is ergodic but not primitive, (17.3) still may be used to obtain
the limiting state distribution, but it will depend on the value of X^(0). (See Problems 17.16 and
17.17.)
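A limiting distribution of this kind can be approximated numerically by raising P to a high power, as in the sketch below. The transition matrix is assembled from the percentages in the training-program problem solved later in this chapter; the state ordering (dropped, classroom trainee, apprentice, supervisor) and the power 200 are assumptions of this illustration.

```python
import numpy as np

# row-stochastic transition matrix for the training-program chain
P = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.4, 0.0, 0.6, 0.0],
              [0.2, 0.0, 0.1, 0.7],
              [0.0, 0.0, 0.0, 1.0]])

X0 = np.array([0.0, 45/66, 21/66, 0.0])      # initial distribution

# approximate the limit (17.3) by a high power of P
Xinf = X0 @ np.linalg.matrix_power(P, 200)
print(np.round(Xinf, 4))                     # [0.4343, 0, 0, 0.5657]
print(round(66 * Xinf[3]))                   # expected number of supervisors
```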
-Solved Problems
Prove that if 0 ≤ A ≤ B, then σ(A) ≤ σ(B).

If 0 ≤ A ≤ B, then A^m ≤ B^m for any positive integer m and, therefore, ||A^m|| ≤ ||B^m||. It follows
from (12.4) that

    σ(A) = lim_{m→∞} ||A^m||^(1/m) ≤ lim_{m→∞} ||B^m||^(1/m) = σ(B)
17.5 Prove that if the row (or column) sums of a nonnegative square matrix A are a constant k,
then o(A) = k.
Using (12.3), we may write
    σ(A) ≤ ||A||∞ = k        (1)

If we set X = [1, 1, ..., 1]', it follows from the row sums being k that AX = kX, so that k is an
eigenvalue of A. Since σ(A) must be the largest eigenvalue in absolute value,

    σ(A) ≥ k        (2)

Together, (1) and (2) imply σ(A) = k. The proof for column sums follows if we consider A' in place of A.

17.6 Prove that if m is the minimum row (or column) sum and M is the maximum row (or column)
sum of an n × n nonnegative matrix A = [a_ij], then m ≤ σ(A) ≤ M.
Construct a matrix B = [b_ij] of the same order as A and such that 0 ≤ B ≤ A, with every row sum of B equal to m.
Therefore, A is reducible.
02 0
00 4 0
A='0"'9 0 2
mp 0 0
17.17 Formulate the following problem as a Markov chain and solve it: The training program for
production supervisors at a particular company consists of two phases. Phase 1, which involves
three weeks of classroom work, is followed by phase 2, which is a three-week apprenticeship
program under the direction of working supervisors. From past experience, the company
expects only 60 percent of those beginning classroom training to be graduated into the
apprenticeship phase, with the remaining 40 percent dropped completely from the training
program. Of those who make it to the apprenticeship phase, 70 percent are graduated as
supervisors, 10 percent are asked to repeat the second phase, and 20 percent are dropped
completely from the program. How many supervisors can the company expect from its current
training program if it has 45 people in the classroom phase and 21 people in the apprenticeship
_ We consider one time period to be three weeks, and define states 1 through 4 as the classification of
being dropped, a classroom trainee, an apprentice, and a supervisor, respectively. If we assume that
discharged individuals never reenter the training program and that supervisors remain supervisors, then
__ the probabilities of moving from one state to the next are given by the stochastic matrix in Problem
17.15. There are 45 + 21 = 66 people currently in the training program, so the initial probability vector
of trainees is X^(0) = [0, 45/66, 21/66, 0]. It follows from Eq. (17.3) and the limiting matrix L found for
this chain that

    X^(∞) = [0, 45/66, 21/66, 0] [  1     0  0    0   ]   = [0.4343, 0, 0, 0.5657]
                                 [ 8/15   0  0  7/15  ]
                                 [ 2/9    0  0  7/9   ]
                                 [  0     0  0    1   ]

so about 56.57 percent of the 66 people now in training, or approximately 37, can be expected to become
supervisors.
Supplementary Problems
In Problems 17.20 through 17.29, determine whether the given matrix is irreducible, primitive, or
stochastic, and estimate its spectral radius. For those matrices P that are stochastic, determine
lim_{m→∞} P^m if it exists.
§- +2":
] 17.22 650...4
1 20,0
23
0 2 17,25
epotiay 0.
19,9 0,1
02.02. 0.6
4 1
e< 0 05 0 05 j
0.79 0 17.27 K 1 | 17.28 0
0.35 0.48| 03 0 074 ene
0.1 0.6 0.3
17.29 |0.6 0.2 0.2|
Chapter 18
Patterned Matrices
CIRCULANT MATRICES
A circulant matrix is a square matrix in which every row beginning with the second can be
obtained from the preceding row by moving each of its elements one column to the right, with the
last element circling to become the first. Circulant matrices have the general form
If a circulant matrix A has order n × n and first row [a1, a2, ..., an], then its eigenvalues are the numbers

    λ = a1 + a2 r + a3 r² + ... + an r^(n-1)

where r ranges over the n distinct nth roots of unity; [1, r, r², ..., r^(n-1)]' is a corresponding eigenvector.
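This eigenvalue description is easy to check numerically; the sketch below evaluates the formula at each nth root of unity and compares with the eigenvalues of the full circulant matrix (the sample first row is an assumption).

```python
import numpy as np

row = np.array([2.0, -1.0, 3.0, 0.0])       # first row of the circulant matrix
n = len(row)
roots = np.exp(2j * np.pi * np.arange(n) / n)
lams = np.array([np.sum(row * r ** np.arange(n)) for r in roots])

# build the circulant matrix by cyclically shifting the first row
C = np.array([np.roll(row, k) for k in range(n)])
print(np.sort_complex(lams))
print(np.sort_complex(np.linalg.eigvals(C)))   # same set of eigenvalues
```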
TRIDIAGONAL MATRICES
A tridiagonal matrix is a band matrix of width three. Nonzero elements appear only on the main
diagonal, the superdiagonal, and the subdiagonal; all other diagonals contain only zero elements.
Property 18.6: The eigenvalues of an n × n tridiagonal Toeplitz matrix with elements a on the main
diagonal, b on the superdiagonal, and c on the subdiagonal are

    λ_k = a + 2√(bc) cos(kπ/(n + 1))        k = 1, 2, ..., n
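The closed-form eigenvalues of Property 18.6 can be verified against a direct computation, as in the sketch below; the parameter values n = 5, a = 2, b = c = 1 are illustrative assumptions.

```python
import numpy as np

n, a, b, c = 5, 2.0, 1.0, 1.0
k = np.arange(1, n + 1)
formula = a + 2 * np.sqrt(b * c) * np.cos(k * np.pi / (n + 1))

# assemble the tridiagonal Toeplitz matrix and compare
T = np.diag(a * np.ones(n)) + np.diag(b * np.ones(n - 1), 1) + np.diag(c * np.ones(n - 1), -1)
print(np.sort(formula))
print(np.sort(np.linalg.eigvals(T).real))
```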
Solved Problems
18.1 Determine whether the following matrices are circulant, Toeplitz, band matrices, tridiagonal,
and/or in Hessenberg form:
1 -2 ©
-
A=
-4 /
3
-2 i
This system, with (2), has the matrix form γX = AX, for X = [1, r, r², ..., r^(n-1)]'. Thus, γ, as given by
(2), is an eigenvalue, and X is a corresponding eigenvector for every root r.

Given (2) and the fact that r = 1 is always a root of (1), it follows that the sum of any row of a
circulant matrix is an eigenvalue of that matrix.
For i = 2:

    u12 = a12/l11 = 2/1 = 2
    l22 = a22 - l21 u12 = 2 - (-1)(2) = 4
    P = [ 1    0     0    0  ]
        [ 0  -2/7  -3/7  6/7 ]
        [ 0  -3/7   6/7  2/7 ]
        [ 0   6/7   2/7  3/7 ]

and

    [  1      -7        0        0    ]
    [ -7   -114/49  -115/49   27/49  ]
    [  0   -115/49   -64/49  142/49  ]
    [  0     27/49   142/49  227/49  ]

The second iteration (k = 2) yields

    X1 = [-115/49, 27/49]'

for which ||X1||2 = √5.811745 = 2.410756, and

    V1 = X1 + ||X1||2 E1 = [-115/49 + 2.410756, 27/49]' = [0.0638173, 0.5510204]'

for which ||V1||2² = 0.307696. Then

    I - (2/0.307696) V1V1' = [ 0.973528  -0.228567 ]
                             [-0.228567  -0.973528 ]

which is embedded in the lower right corner of a 4 × 4 identity matrix to form the next orthogonal factor.
18.10 Any n × m matrix X can be converted into an nm × 1 column vector x (denoted with a
lowercase boldface letter) by taking the transpose of all the rows of X and placing them
successively below one another into x. The matrix equation

    AXB = C        (18.1)

is then equivalent to the matrix-vector equation

    (A ⊗ B')x = c        (18.2)

where x and c are the vector representations of the matrices X and C, respectively. Equation
(18.1) may be solved for the unknown matrix X in terms of A, B, and C by solving (18.2) for
the vector x using the methods developed in Chapter 2. Equation (18.1) may possess exactly
one solution, no solutions, or infinitely many solutions.
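The row-stacking convention of Eq. (18.2) matches NumPy's default (row-major) flattening, so a small sketch of the solution procedure looks as follows; the particular matrices A, B, and C are illustrative assumptions, not the book's example.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[1.0, 0.0],
              [1.0, 1.0]])
C = np.array([[5.0, 2.0],
              [9.0, 4.0]])

K = np.kron(A, B.T)                      # coefficient matrix of (18.2)
x = np.linalg.solve(K, C.reshape(-1))    # c is the row-stacking of C
X = x.reshape(A.shape[1], B.shape[0])    # unstack the rows of X
print(np.allclose(A @ X @ B, C))         # True
```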
Solve the matrix equation AXB = C for X when
Supplementary Problems
18.12 Determine whether the following matrices are circulant, Toeplitz, band matrices, tridiagonal, and/or in
Hessenberg form:
. Se 1010 101 9
eta ¥ b iO
=). .3...
4 oD QO. grey ian
ee a
ee (b) rope BK (c) ee ae |
eo °Tha Orskefis aor a= eee 2
rez 3 120 23 i
ye a (e) PS bee fy ABO 24
B24 0 bine
pe Og
By aa
mo 3-2
ih:
ne a
18.13 Find the eigenvalues of, and a canonical basis for, the matrix in Problem 18.12(b).

18.14 Find the eigenvalues of, and a canonical basis for, the matrix in Problem 18.12(d).
Find an LU factorization for the matrix in Problem 18.12.
a-[} 3]
18.26 Solve the matrix equation AXB = C for X when
J
B=(1,1, 1]
4 ily»
Chapter 19
Power Methods for Locating Real Eigenvalues
NUMERICAL METHODS
Algebraic procedures for determining eigenvalues and eigenvectors, as described in Chapter 7,
are impractical for most matrices of large order. Instead, numerical methods that are efficient and
stable when programmed on high-speed computers have been developed for this purpose. Such
methods are iterative, and, in the ideal case, converge to the eigenvalues and eigenvectors of
interest. Included with each method are termination criteria, generally a test to determine when a =
specified precision has been achieved (if the results are converging) and an upper bound on the Es
number of iterations to be performed (in case convergence does not occur).
This chapter describes algorithms for locating a single real eigenvalue and its associated
eigenvector. The first method presented is the simplest; the last is the most powerful. Chapter 20
describes a procedure for obtaining all eigenvalues of a matrix; it is usually packaged with the shifted
inverse power method as an excellent general-purpose algorithm. :
GERSCHGORIN'S THEOREM

Each row of a square matrix generates a Gerschgorin disk, which is bounded by a circle whose
center is the diagonal element in that row and whose radius is the sum of the absolute values of the
remaining elements in that row. Gerschgorin's theorem states that every eigenvalue of a matrix lies
within at least one of its Gerschgorin disks.
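Gerschgorin disks are inexpensive to compute and give quick eigenvalue bounds before any iteration begins; the sketch below is an illustration (function name and sample matrix are assumptions).

```python
import numpy as np

def gerschgorin_disks(A):
    """Return (center, radius) pairs, one disk per row of A."""
    A = np.asarray(A, dtype=complex)
    centers = np.diag(A)
    radii = np.sum(np.abs(A), axis=1) - np.abs(centers)
    return list(zip(centers, radii))

A = np.array([[10.0, 2.0, 3.0],
              [-1.0, 2.0, 1.0],
              [ 0.0, 1.0, -5.0]])
print(gerschgorin_disks(A))     # disks centered at 10, 2, -5 with radii 5, 2, 1
print(np.linalg.eigvals(A))     # each eigenvalue lies in at least one disk
```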
Solved Problems
19.1 Use the power method to locate an eigenvalue and eigenvector for
oe wl. 7
Ast=—] ~-1° 1
7 nS
Second iteration:
xX F y, =|Q.'
san ae
First iteration:
Table 19.3
— 13.0000
— 13.1538
— 13.3158
— 13.4822
— 13.6491
— 14.3651
— 14.9160
— 14,9907
— 14,9990
— 14.9997
Table 19.4
oy ee Tis
0.5455 1.0000 0.0227
—0.8076 -—0.4290 1.0000
19.6 A modification of the power method particularly suited to real symmetric matrices is
initialized with a unit vector (in the Euclidean norm) having all its components equal. Each
vector Yk is determined as before, but the eigenvalue is approximated as λk = Xk-1 · Yk, the
Rayleigh quotient; then Xk = Yk/||Yk||2 unless the termination criteria are met. Use this
method to find the dominant eigenvalue and a corresponding eigenvector of

    A = [ 10  7   8   7 ]
        [  7  5   6   5 ]
        [  8  6  10   9 ]
        [  7  5   9  10 ]
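A compact sketch of this Rayleigh-quotient modification follows; the function name, iteration count, and the symmetric test matrix (reconstructed from the shifted matrix shown in the next problem) are assumptions of the illustration.

```python
import numpy as np

def modified_power_method(A, iters=25):
    """Power method for a real symmetric A, estimating the eigenvalue by the
    Rayleigh quotient at each step."""
    n = A.shape[0]
    X = np.ones(n) / np.sqrt(n)     # unit vector with equal components
    lam = None
    for _ in range(iters):
        Y = A @ X
        lam = X @ Y                 # Rayleigh quotient, since ||X||_2 = 1
        X = Y / np.linalg.norm(Y)
    return lam, X

A = np.array([[10.0, 7.0, 8.0, 7.0],
              [ 7.0, 5.0, 6.0, 5.0],
              [ 8.0, 6.0,10.0, 9.0],
              [ 7.0, 5.0, 9.0,10.0]])
lam, X = modified_power_method(A)
print(round(lam, 4))                # about 30.2887, the dominant eigenvalue
```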
19.7 Use the modified power method described in Problem 19.6 to determine a second eigenvalue
and associated eigenvector for the matrix in Problem 19.6.
Having determined that 30.2887 is an eigenvalue of A, we can apply the modified power method to

    B = A - 30.2887 I = [ -20.2887      7          8          7      ]
                        [      7    -25.2887       6          5      ]
                        [      8         6     -20.2887       9      ]
                        [      7         5          9     -20.2887   ]

We initialize with X0 = [0.5, 0.5, 0.5, 0.5]'. Then, with all calculations rounded to four decimal places, we
have:

First iteration:

Second iteration:

    Y2 = BX1 = [-7.3891, 27.0345, -9.8380, -1.8130]'
    λ = X1 · Y2 = -29.275
    ||Y2||2 = 29.7579
    X2 = Y2/||Y2||2 = [-0.2483, 0.9085, -0.3306, -0.0609]'

Continuing in this manner, we generate Table 19.5. Four-place precision is attained after several more
iterations. The algorithm is converging to -30.2786; adding back the shift gives -30.2786 + 30.2887 = 0.0101
as a second eigenvalue of A.
5 0 0 1 04 04
L=|3 48 0 and U=|0 1 = 0.375
& 36 5.25 0 0 1
19.9 Use the inverse power method to obtain an eigenvalue and eigenvector for the matrix in
Problem 19.6.
For this matrix, LU decomposition yields

    L = [ 10    0    0    0  ]        U = [ 1  0.7  0.8  0.7 ]
        [  7  0.1    0    0  ]            [ 0   1    4    1  ]
        [  8  0.4    2    0  ]            [ 0   0    1   1.5 ]
        [  7  0.1    3  0.5  ]            [ 0   0    0    1  ]
19.11 Find the eigenvalues and a corresponding set of eigenvectors for the matrix in Problem 19.10.
From Problem 19.10, we know that one real eigenvalue is located in the interval 24 ≤ λ ≤ 34. We
take u = 28 as an estimate of this eigenvalue and apply the inverse power method to A — 281. A better
estimate for the eigenvalue might be the center of the interval, u = 29, but an LU decomposition for
A — 291 is not possible because that matrix has a zero in the (1,1) position. For A — 281, we have
0 0 1 -1 4
—47 0 and U=|0 1 —0.127660
6 —42.234043 0 O 1
Applying the inverse power method with these matrices, we obtain, after five iterations, X, =
[1.0, —0.015180, 0.138939]” with A, = 0.636563. The corresponding eigenvalue for A is A=28+
1/0.636563 = 29.5709.
From Problem 19.10 we know that a second real eigenvalue lies between —15 and —21. We estimate
this eigenvalue as u = —19. The LU decomposition for A + 191 has
ie ao ee "1 —0.020833 0.083333
L=|-1 0.979167 0 and U=|0 1 2.127660
A 2006333 saa Np 1
Applying the inverse power method with these matrices yields, after five iterations, a unit eigenvector
estimate X2; the corresponding eigenvalue for A is -19 + 1/λ2 = -18.2509.

We estimate the third eigenvalue as μ = 0 and apply the inverse power method directly to A, obtaining
λ3 = 1.470566 after several iterations; the third eigenvalue of A is therefore 1/1.470566 = 0.6800. As a
check, we note that the sum of the three eigenvalues,

    (29.5709) + (-18.2509) + (0.6800) = 12.0000

equals the trace of A.

Prove that every eigenvalue of a square matrix must lie in at least one Gerschgorin disk.
Supplementary Problems
19.14 Use the power method to locate a second eigenvector and eigenvalue for the matrix in Problem 19.2.
Observe that convergence occurs even though that eigenvalue has multiplicity two.
19.15 Apply the power method to the matrix in Problem 19.11 and stop after four iterations.
19.25 The matrix in Problem 19.24 is known to have an eigenvalue near 9. Use the shifted inverse power
method to find it.
19.26 The matrix in Problem 19.18 is known to have an eigenvalue near 2.5. Use the shifted inverse power
method to find it.
19.27 A modification of the shifted inverse power method uses the Rayleigh quotient as an estimate for the
eigenvalue and then shifts by that amount. At the kth iteration, the shift is λk = Xk'AXk/Xk'Xk. Thus, the
shift is different for each iteration. Termination of the algorithm occurs when two successive λ iterates
are within the prescribed tolerance of each other. Use this variable shift method on the matrix in
Problem 19.20.
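For reference, a fixed-shift inverse power iteration can be sketched as below; the variable-shift version of Problem 19.27 would simply recompute the shift from the Rayleigh quotient on each pass. The function name, iteration count, test matrix, and shift 3.5 are assumptions of this illustration.

```python
import numpy as np

def shifted_inverse_power(A, mu, iters=10):
    """Inverse power method applied to A - mu*I; returns an eigenvalue of A near mu."""
    n = A.shape[0]
    X = np.ones(n) / np.sqrt(n)
    lam = None
    for _ in range(iters):
        Y = np.linalg.solve(A - mu * np.eye(n), X)   # one inverse-iteration step
        lam = X @ Y                                  # estimate for the dominant eigenvalue of (A - mu I)^(-1)
        X = Y / np.linalg.norm(Y)
    return mu + 1.0 / lam, X

A = np.array([[10.0, 7.0, 8.0, 7.0],
              [ 7.0, 5.0, 6.0, 5.0],
              [ 8.0, 6.0,10.0, 9.0],
              [ 7.0, 5.0, 9.0,10.0]])
print(round(shifted_inverse_power(A, 3.5)[0], 4))    # eigenvalue near 3.5 (about 3.8581)
```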
Chapter 20

The QR Algorithm

QR DECOMPOSITION

Every m × n matrix A (m ≥ n) can be factored into the product of a matrix Q, having
orthonormal vectors for its columns, and an upper (right) triangular matrix R:

    A = QR
THE QR ALGORITHM {CHAP. 20
Each A, is similar to its predecessor and has the same eigenvalues (see Problem 20.9). In general,
the sequence {A,} converges to a partitioned matrix having either of two forms:
(20.4)
-}---
and (20.5)
_If form (20.4) occurs, then the element a is an eigenvalue, and the remaining eigenvalues are
obtained by applying the QR algorithm anew to the matrix E. If form (20.5) arises, then two
eigenvalues can be determined from the characteristic equation of the 2 x 2 submatrix in the lower
right partition, and the remaining eigenvalues are obtained by applying the QR algorithm to the
matrix G. If E or G is already a 2 x 2 matrix, its eigenvalues are determined from its characteristic
equation.
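The basic (unshifted) iteration described above can be sketched in a few lines; the shifted version used in the text converges faster but follows the same pattern. The function name, iteration count, and the symmetric test matrix below are assumptions of this illustration.

```python
import numpy as np

def qr_algorithm(A, iters=100):
    """Unshifted QR iteration: factor A_{k-1} = Q_k R_k, then set A_k = R_k Q_k.
    Each iterate is similar to A, and the iterates approach (block) triangular form."""
    Ak = np.array(A, dtype=float)
    for _ in range(iters):
        Q, R = np.linalg.qr(Ak)
        Ak = R @ Q
    return Ak

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
print(np.round(qr_algorithm(A), 4))        # nearly diagonal; diagonal entries are the eigenvalues
print(np.round(np.linalg.eigvals(A), 4))
```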
SHIFTED QR ALGORITHM

Convergence is accelerated by working with shifted matrices. At each iteration a scalar shift s_{k-1} is
chosen and a QR decomposition is constructed for the shifted matrix,

    A_{k-1} - s_{k-1} I = Q_{k-1} R_{k-1}        (20.6)

and the next iterate is formed as

    A_k = R_{k-1} Q_{k-1} + s_{k-1} I        (20.7)
Solved Problems
20.1 Use the modified Gram-Schmidt process to construct an orthogonal set of vectors from the
linearly independent set {X,, X,, X,} when
First iteration:
20.3 Use the modified Gram-Schmidt process to construct an orthogonal set of vectors from the
linearly independent set {X,,X,, X,, X,} when
1
0
4 a
X,= 1
1
fees ee
First iteration:
hak | 3 3 —4 ome
1 |
2
ae ane (X,,Q,) = V35
Second iteration:
#8 ge ae=[3 ;
This matrix has form (20.4), so one eigenvalue estimate is 3.858057. To determine the others, we apply
the shifted QR algorithm anew to
20.9 Prove that the shifted QR algorithm is a series of similarity transformations that leave the
eigenvalues invariant.

Since the Q matrix in any QR decomposition is unitary, it has an inverse. Therefore, (20.6) may be
rewritten as

    R_{k-1} = Q_{k-1}^H (A_{k-1} - s_{k-1} I)

Substituting this into (20.7), we obtain

    A_k = Q_{k-1}^H (A_{k-1} - s_{k-1} I) Q_{k-1} + s_{k-1} I = Q_{k-1}^H A_{k-1} Q_{k-1}

so each iterate is similar to its predecessor and has the same eigenvalues.
20.11 Redo Problem 20.10 using the modified Gram-Schmidt process and show that the results are
better.

First iteration:

    r11 = ||X1||2 = 2.005
    Q1 = (1/r11)X1 = [0.4988, 0.5037, 0.4988, 0.4988]'
    r12 = (X2, Q1) = 2.005
    r13 = (X3, Q1) = 2.005
    X2 ← X2 - 2.005Q1 = [-0.9400 x 10^-4, -0.9919 x 10^-2, 0.9906 x 10^-2, 0.9400 x 10^-4]'
    X3 ← X3 - 2.005Q1 = [0.9400 x 10^-4, -0.9919 x 10^-2, -0.9400 x 10^-4, 0.9906 x 10^-2]'
‘Second iteration:
Third iteration:
Observe that r33 is very close to zero, and the last X3 vector is very close to the zero vector; if we were
not rounding intermediate results, they would not exist. However, because of the rounding neither is
zero, and Q3 can be calculated with what are, in effect, error terms. The result is a vector which is not
orthogonal to either Q1 or Q2.
Supplementary Problems
Chapter 21

Generalized Inverses

PROPERTIES

The (Moore-Penrose) generalized inverse (or pseudoinverse) of a matrix A, not necessarily
square, is a matrix A^+ that satisfies the conditions:

(I1): AA^+ and A^+A are Hermitian.
(I2): AA^+A = A.
(I3): A^+AA^+ = A^+.

A generalized inverse exists for every matrix. If A has order n × m, then A^+ has order m × n and has
the following properties:

Property 21.1: A^+ is unique.
Property 21.2: A^+ = A^-1 for nonsingular A.
Property 21.3: (A^+)^+ = A.
Property 21.4: (kA)^+ = (1/k)A^+ for k ≠ 0.
Property 21.5: (A')^+ = (A^+)'.
Property 21.6: A^+ = 0 if and only if A = 0.
Property 21.7: The rank of A^+ equals the rank of A.
aes goes of he a en orders so that the product PAQ is
matris ae
AAAr =A'AfeteonlyitAYcan|
ceeear
SINGULAR-VALUE DECOMPOSITION 2
Equations (21.1) and (21.2) are useful formulas for calculating generalized inverses. However,
they are not stable when roundoff error is involved, because small errors in the elements of a matrix
A can result in large errors in the computed elements of A^+. (See Problem 21.12.) In such situations
a better algorithm exists.
For any matrix A, not necessarily square, the product A^H A is normal and has nonnegative
eigenvalues (see Problems 13.2 and 13.3). The positive square roots of these eigenvalues are the
singular values of A. Moreover, there exist unitary matrices U and V such that

    A = U Σ V^H        (21.3)

where Σ is a block matrix of the form

    Σ = [ D  0 ]
        [ 0  0 ]

in which D is a diagonal matrix whose diagonal elements are the nonzero singular values of A. Σ has
the same order as A and, therefore, is square only when A is square. Equation (21.3) is a
singular-value decomposition for A. An algorithm for constructing this decomposition is given below.
    A^+ = V1 D^-1 U1^H        (21.4)
where V, and U, are defined by Steps 21.8 and 21.9, respectively. For the purpose of calculating a
generalized inverse, Steps 21.10 and 21.11 can be ignored. (See Problems 21.6 and 21.7.)
LEAST-SQUARES SOLUTIONS

A least-squares solution to a set of simultaneous linear equations AX = B is the vector of smallest
Euclidean norm that minimizes ||AX - B||2. That vector is

    X = A^+ B        (21.5)

When A has an inverse, (21.5) reduces to X = A^-1 B, which is the unique solution. For consistent
systems (see Chapter 2) that admit infinitely many solutions, (21.5) identifies the solution having
minimum Euclidean norm. Equation (21.5) also identifies a solution for inconsistent systems, the
one that is best in the least-squares sense. (See Problems 21.8 through 21.11.)
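The combination of (21.4) and (21.5) can be sketched numerically as below; the function name, tolerance, and the small inconsistent system are assumptions of the illustration, and the result is checked against NumPy's own least-squares routine.

```python
import numpy as np

def generalized_inverse(A, tol=1e-12):
    """Moore-Penrose inverse built from a singular-value decomposition."""
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    s_inv = np.array([1.0 / x if x > tol else 0.0 for x in s])
    return Vh.conj().T @ np.diag(s_inv) @ U.conj().T

# least-squares solution (21.5) of an inconsistent system AX = B
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
B = np.array([2.0, 3.0, 5.0])
X = generalized_inverse(A) @ B
print(np.round(X, 4))
print(np.round(np.linalg.lstsq(A, B, rcond=None)[0], 4))   # same result
```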
Solved Problems

21.1 Find the generalized inverse of
STEP 21.7:
—-1/V6 1/V3
p=|
F salle 8 -1/V2
0 2
STEP 21.8: V,| -1/V6 1/V3 and =6V,=| 1/V2 with V=[V,|V¥.]
2/V6 1/V3 0
-1/V6 1/V3 1 0 0
-1/V6 1/V3 01 0
2/V6 1/V3 0 0 1
" si i]) ‘21.11: The first three columns of this matrix form a maximal set of linearly independent column
as vectors. Discarding the last two columns and applying the modified Gram-Schmidt
process to the first three columns, we obtain
21.6 Use (21.4) to calculate the generalized inverse of the matrix in Problem 21.1.
Using what we have already found in Problem 21.4, we compute
: asa
[p
Bese Wellin ee Bie ate ins)
ve 1 Vs
i 3 2/V6 1/V3 —t
“ae
i
[| o l"
ear NB
" Re: aN AN san
V28 (2B
-8/26 5/26
_ |-16/26 10/26 (:]-
X=! 9/26 2/26 |L24~
12/26 1/26
Thus, x1 = 1/13, x2 = 2/13, x3 = 3/13, and x4 = 5/13.
21.10 Verify that the solution obtained in Problem 21.9 is the solution of minimum Euclidean norm
for the set of equations given in that problem.
Interchanging the order of the two equations, we obtain the system
x, +2x,+2x,+3x,=2
Zz, + 2x,>=1
whose coefficient matrix is in row-echelon form. Using the techniques of Chapter 2, we determine the
solution to be
er, re regis ‘ . SPS Oey s 2x, +X
_ ¥ J : x A
ee 5
Writing this system in matrix form, and then using (21.5) and the results of either Problem 21.3 or
Problem 21.7, we obtain
The equation of the line that best fits the data in the least-squares sense is S = 3¢1 + 25.
21.12 Working to four significant digits, show that (21.2) is numerically unstable when applied to
)sa Bash
At Le “Te:
1 1,004) |
_ Rounding all stored (intermediate) numerical antes four signi
H 3.000 3.004
2 nals004Sak
A = H =
(at 3 a
200 —200 551.2)
Show that if A can be factored into the product BC, where both B^H B and CC^H are invertible,
then A^+ = C^H (CC^H)^-1 (B^H B)^-1 B^H.

We need to show that A^+ satisfies the three conditions required of a generalized inverse.

I1: AA^+ = (BC)C^H (CC^H)^-1 (B^H B)^-1 B^H = B(CC^H)(CC^H)^-1 (B^H B)^-1 B^H = B(B^H B)^-1 B^H
    A^+A = C^H (CC^H)^-1 (B^H B)^-1 B^H (BC) = C^H (CC^H)^-1 (B^H B)^-1 (B^H B)C = C^H (CC^H)^-1 C
    Both are obviously Hermitian.
I2: AA^+A = (BC)C^H (CC^H)^-1 (B^H B)^-1 B^H (BC) = B[(CC^H)(CC^H)^-1][(B^H B)^-1(B^H B)]C = BIIC = BC = A
I3: A^+AA^+ = C^H (CC^H)^-1 (B^H B)^-1 B^H (BC)C^H (CC^H)^-1 (B^H B)^-1 B^H
            = C^H (CC^H)^-1 [(B^H B)^-1(B^H B)][(CC^H)(CC^H)^-1] (B^H B)^-1 B^H
            = C^H (CC^H)^-1 (B^H B)^-1 B^H = A^+
Supplementary Problems
In Problems 21.18 through 21.24, find the generalized inverse of the given matrix.
1 22
21.18 Hi
i 21.19 *113
| 4 21.20 1/1 1 1| ~ 20.21 $1 >|
- 2
e et i 4 2
21.22. |2 0 ; ‘ a
|
In Problems 21.25 through 21.38, find the least-squares solution to the given system of equations.
:
Show that if A = QR is a QR decomposition of A, then the least-squares solution of AX = B is X = R^-1 Q^H B.
21.42 Prove that AA* and A“A are Hermitian and idempotent.
21.43 Show that if A has order m × n with m ≥ n, then A can be factored into A = UΣV^H, where Σ is an n × n
diagonal matrix with nonnegative diagonal elements, V is an n × n unitary matrix, and U is an m × n matrix with orthonormal columns.
Answers to Supplementary Problems
CHAPTER 1
1.24 (a) 5] (6) undefined 1.25 (a) undefined; (6b) (2, 5S, 5)
5; ? )
| a
ae
<
e
De
Cee
t
ae 428 [1 2] a eae Es
: ects es . Ns , iat:
tie— . 0 1) 3
et *
a= 0 1
5 ee: A es
x, =8+2x,,x,=—-1-—4;,, x, arbitrary
x, =8, x,=-2,x,=-3
4, ayoe8, 2 2, x, 2-1
x, =0.49998,x, = 0.00001, 05
~ : e oe pti4 ogee i Pimtep: ef.
2 ts CConsistent only if k=7,an hen =2 + op a -: lene (0) contentoly
it=?
3 , ei
nx i,oi ee
0 1
12 & 0
3/2 11/7 JLO
B30 0 0 1
2.6 0 0 0
te §41/6 0) 0
2 -2 -20/3 -155/41jLO 0
) 7
/ Cannotbe solved; LY =B is inconsistent, 3.28 x, =8,x,=-2,x,=-3
oy i ‘.
ee
1s ie Sheeus}
1 Each part follows from the uniqueness of the inverse: (a2) A~'B™' and B~'A™' are both inverses of AB;
-@) A’ 'B and BA' are both inverses of AB’; (c) AB~' and B~'A are both inverses of BA™’.
© a6;3015=-3(-33) =99
= as
a 0-0
; 50 setis
undefinedbecause F is notsanere,
“si pe ae :
swear ioe a
rt aes
ee ied
CHAPTER 6
6.21 No
6.23 (a) Yes, (2, 1,2, 1] = 2[2, 0, 1, 1] + (-1)(0, 1, 2, -1] + (-2)[1, -1, -1, 1] + 0[0, 0, 1, 2);
(b) Yes, [0, 0, 0, 1]=(1/3)[2, 0, 1, 1] + (-2/3)[0, 1, 2, -1] + (-2/3)[1, —1, 1, 1] + (1/3)[0, 0, 1, 2]
6.24 batbeak= oe 11,
0, abeFomor 2,0)+ Bh heen 4.a
1 0
| for A= 1, aH for A=2, and aH for A=3
0. | 1
J) ae i. x}
for A=0 (of multiplicity two) and -| 1 for A= -1
0
: | ie 1/37,
3qd
for a =-3 oF mat two) and | of] for A= 1)
, j S| ' AL.
ela bot1) ce
&a
7.38 [1,—1,9], [1, 1,2], and [1, 1, —1], corresponding to eigenvalues 1, 3, and 6, respectively
7.39 [-1,1,0], [1,0, 1], and [1,1, —1], corresponding to eigenvalues 0,0, and 3, respectively
7.40 [1,1,0], [-1,0, 1], and [1, —1, 1], corresponding to eigenvalues 2, 2, and 5, respectively
7.44 The proof is by induction on the order of the matrices. The proposition is certainly true for 1 x 1
_ matrices. Assume it is true for k x k matrices, and let A be an arbitrary (k + 1) x (k +1) matrix. —
Designate as A’ the matrix obtained from A by deleting its first row and column. Then A’ has order
k x k, and the induction hypothesis can be unee on it. Evaluating det (A — AI) by x Saget ets
row, we get =
y*
ae y a 4
CHAPTER8
1 ] ; =|: 0 4
8.19 lim A, =|} , lim B, = > 10
kx
lim C, does not exist because tim {(k - k*)/(k + 1)} = —2,
(0, 1, 0)"
? N,=N,=N,=1 and N,=2 for A=1; (b) the vectors found in Problem 9.17, along with
= [0,; 1,0, 0, 0}”
(a) N,=N,=N,=1 for A=5; (b) the vectors found in Problem 9.19
10.42 Premultiply (10.1) on the left by S; then postmultiply on the right by S~' and set T=S7'.
CHAPTER ll
7yre 11.13 wo (b)0;(c) 3; (d)2;(e) 14; (f) -6; (g)2 | Pai mae.
“f@i+k (b) 1-1; (c)
4- i2; (d) -1-i; (e) i5; (f) 50— i25 |
b411,29 If we continue on Problem 11.28, it follows that (X,X), =0 if and only if Y = 0 and that is the case if
~ ages and only if X=W ‘Y=0.
CHAPTER 12
(a) vi; (b) V46; (c) V29; (d) V298; (e) V464
(a) 3°";(6)V3; (6) 35 (4) (185)"5 (€) 11; (4) 4
| (6) 8;(d) VO
(a) 15: (b) 4.158: (c) 66: (d) 2.729; (e) 2.147
I-'=I and |lI|| =1 from Problem 12.31.
21x E, F, pee 8.
13.36 -AY=-A7=-A™=A
4(A + A”) is Hermitian and }(A — A”) is skew-Hermitian for any matrix A. For real A, these matrices
are symmetric. and skew-symmetric, respectively.
A ording to (8.1), f(A) can be written as an (n — 1)-degree polynomial in A. iia the eigenvalues of A
e : eal,so are the coefficients of such a polynomial. The result then follows from Problems 13.29 and
14.30 If
a=|)0 2)
1 and X = [x Nae x,\F
|
CHAPTER 15
15.16 Cand E
| 0 0h Beto 0 3
(b) With U = |
-1/V2 0 va -V2/3 vr] we have U”BU= 0
V2 0 1/V2SL0 V3 ~V273 } ees
urea + Sel obs 2G «. 0 ulhivg s10 ohh, oe
dail
ll
ings
ot, (© WithU=| -1/V3_1/V6_1/V2|[0 -v3/2 1/2 | we have: UNCL
o
~~
+
2
aoe. V3 -1/V6 1/V2JL0 1/2 V3/24 eo .
a,
16.12 (a) and (c) are positive definite; (b) is positive semidefinite.
16.14 They are not congruent because they do not have the same inertia matrix.
5 (a) Three positive eigenvalues; (b) two positive eigenvalues and one zero eigenvalue; (c) four positive
ic (d) two positive and two negative eigenvalues
ert 0 Bs i 0 0
m=|0 1 E 140142) || f 0 i
S
os
ce PS bs es Be ™ ig -(S-i)/S 1/5}
3/8 0 5/8
Lal 0:48
3/8 0 5/8
a. £3
ao. VS
ry $eer
p Eas box]
ng CEEBA
p ®55.56percent
S$6.48 percent:
Fj
vev=|-
| 0 o. , to-V3 0 oO
orien =siV2. 0 IN Tan, | ~W2 1/2 -1/2) «0
0 tV2 6 1h 0 0 v2 -!1
[0.554700 -0.832080 ¢
0.554 700
oa— | 0-832nian050
5794) ee
> i ied G 3 f
CHAPTER 19
19.13
19.14
—0.2857
—0.7143 0.2857
, oe.
0.7143 0.28571.
: eae Widteninine @ 14+(-11)=3.
a NO Ae fay
Co) : a>
“atomPred
I * gulgurreyi be. ;
i - a eed
, rr
1.0000
0.9697 33.0000
0.9591 : 30.3939
0.9578 , . 30.3011
0.9577 : ; | 30.2902
0.9576 : ‘ 30.2889
0.9576 0. 1.0 | 30.2887
0.5774 (0.5774
5 0.7621 |
“00147 oe om
3 oe yee
0.1596 0.0851
—0.0661 0.0968
A =9 + 1/3.08114 = 9.3246
20.19 3.61803, 2.61803, 1.38197, 0.381966 20.20 990, 660, 440, 330
20.23 The QR algorithm does not converge. 20.24 232.275, 79.6707, 63.8284, 24.2261
CHAPTER 21
21.24
es 1
% BATA
Pa: th. |. 3A
WV11l -1/V2 3/V22 1/V2 es Cigar
i)
u bay —1/V2 0
21.34
3/V1l 0 ahi22
—0.597354 ed
U=U,= —0.845435 Beene ial a 0
21.35 v-v,=| 0.801978 0.597354 —0,534078 —0.845435 0 0.444992
21.36 A^-1 satisfies conditions I1 through I3 because

    (AA^-1)^H = I^H = I = AA^-1   (and similarly for A^-1 A)
    AA^-1A = A(A^-1A) = AI = A
    A^-1AA^-1 = (A^-1A)A^-1 = IA^-1 = A^-1
The result then follows from Property 21.1.
of a chain, 85
of eigenvectors, 61
Lower triangular matrix, 24
determinant of, 42
eigenvalues of, 60
inverse of, 32