
INTRODUCTION TO LINEAR ALGEBRA
Fifth Edition

MANUAL FOR INSTRUCTORS

Gilbert Strang
Massachusetts Institute of Technology

math.mit.edu/linearalgebra
web.mit.edu/18.06
video lectures: ocw.mit.edu
math.mit.edu/~gs
www.wellesleycambridge.com
email: linearalgebrabook@gmail.com

Wellesley-Cambridge Press
Box 812060
Wellesley, Massachusetts 02482
70 Solutions to Exercises

Problem Set 4.1, page 202

1 Both nullspace vectors will be orthogonal to the row space vector in R3 . The column

space of A and the nullspace of AT are perpendicular lines in R2 because rank = 1.

2 The nullspace of a 3 by 2 matrix with rank 2 is Z (only the zero vector because the 2

columns are independent). So xn = 0, and row space = R2 . Column space = plane


perpendicular to left nullspace = line in R3 (because the rank is 2).
 
3 (a) One way is to use these two columns directly:
A = [ 1 2 −3 ; 2 −3 1 ; −3 5 −2 ]
(b) Impossible, because N(A) and C(AT) are orthogonal subspaces: (2, −3, 5) is not orthogonal to (1, 1, 1).
(c) (1, 1, 1) in C(A) and (1, 0, 0) in N(AT) is impossible: not perpendicular.
(d) Rows orthogonal to columns makes A times A = zero matrix. An example is A = [ 1 −1 ; 1 −1 ].

(e) (1, 1, 1) in the nullspace (columns add to the zero vector) and also (1, 1, 1) is in
the row space: no such matrix.

4 If AB = 0, the columns of B are in the nullspace of A and the rows of A are in the left

nullspace of B. If rank = 2, all those four subspaces have dimension at least 2 which
is impossible for 3 by 3.

5 (a) If Ax = b has a solution and AT y = 0, then y is perpendicular to b. bT y =

(Ax)T y = xT (AT y) = 0. This says again that C(A) is orthogonal to N (AT ).


(b) If AT y = (1, 1, 1) has a solution, (1, 1, 1) is a combination of the rows of A. It is
in the row space and is orthogonal to every x in the nullspace.

6 Multiply the equations by y1, y2, y3 = 1, 1, −1. Now the equations add to 0 = 1 so there is no solution. In subspace language, y = (1, 1, −1) is in the left nullspace. Ax = b would need 0 = (yT A)x = yT b, but here yT b = 1.

7 Multiply the 3 equations by y = (1, 1, −1). Then x1 − x2 = 1 plus x2 − x3 = 1 minus

x1 − x3 = 1 is 0 = 1. Key point: This y in N (AT ) is not orthogonal to b = (1, 1, 1)


so b is not in the column space and Ax = b has no solution.

8 Figure 4.3 has x = xr + xn, where xr is in the row space and xn is in the nullspace. Then Axn = 0 and Ax = Axr + Axn = Axr. The example has x = (1, 0) and row space = line through (1, 1), so the splitting is x = xr + xn = (1/2, 1/2) + (1/2, −1/2). All Ax are in C(A).

9 Ax is always in the column space of A. If AT Ax = 0 then Ax is also in the nullspace of AT. Those subspaces are perpendicular. So Ax is perpendicular to itself. Conclusion: Ax = 0 if AT Ax = 0.
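A quick numerical check of this conclusion (an added sketch, not part of the manual; the rank-1 matrix A below is made up for illustration):

```python
import numpy as np

# A rank-1 matrix: its columns are dependent, so A^T A is singular.
A = np.array([[1., 2.],
              [2., 4.],
              [3., 6.]])

x = np.array([2., -1.])      # x is in the nullspace of A
AtA = A.T @ A

print(AtA @ x)               # zero vector: A^T A x = 0
print(A @ x)                 # zero vector too: A^T A x = 0 forces Ax = 0
```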

10 (a) With AT = A, the column and row spaces are the same. The nullspace is always

perpendicular to the row space. (b) x is in the nullspace and z is in the column
space = row space: so these “eigenvectors” x and z have xT z = 0.

11 For A: The nullspace is spanned by (−2, 1), the row space is spanned by (1, 2). The

column space is the line through (1, 3) and N (AT ) is the perpendicular line through
(3, −1). For B: The nullspace of B is spanned by (0, 1), the row space is spanned by
(1, 0). The column space and left nullspace are the same as for A.

12 x = (2, 0) splits into xr + xn = (1, −1) + (1, 1). Notice N (AT ) is the y − z plane.

13 V T W = zero matrix makes each column of V orthogonal to each column of W . This

means: each basis vector for V is orthogonal to each basis vector for W . Then every
v in V (combinations of the basis vectors) is orthogonal to every w in W .
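This statement can be verified numerically (an added sketch; the matrices V and W below are made-up bases of two orthogonal subspaces of R4):

```python
import numpy as np

# V^T W = 0 column-by-column: each basis vector of V hits each basis vector of W.
V = np.array([[1., 0.],
              [0., 1.],
              [0., 0.],
              [0., 0.]])
W = np.array([[0., 0.],
              [0., 0.],
              [1., 0.],
              [0., 1.]])

v = V @ np.array([2., 3.])   # any combination of V's basis vectors
w = W @ np.array([-1., 5.])  # any combination of W's basis vectors
print(v @ w)                 # 0: every v in V is orthogonal to every w in W
```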
 
14 Ax = Bx̂ means that [ A B ][ x ; −x̂ ] = 0. Three homogeneous equations (zero right-hand sides) in four unknowns always have a nonzero solution. Here x = (3, 1) and x̂ = (1, 0), and Ax = Bx̂ = (5, 6, 5) is in both column spaces. Two planes in R3 must share a line.

15 A p-dimensional and a q-dimensional subspace of Rn share at least a line if p + q > n. (The p + q basis vectors of V and W cannot be independent, so some combination of the basis vectors of V is also a combination of the basis vectors of W.)

16 AT y = 0 leads to (Ax)T y = xT AT y = 0. Then y ⊥ Ax and N (AT ) ⊥ C(A).

17 If S is the subspace of R3 containing only the zero vector, then S ⊥ is all of R3 .

If S is spanned by (1, 1, 1), then S ⊥ is the plane spanned by (1, −1, 0) and (1, 0, −1).
If S is spanned by (1, 1, 1) and (1, 1, −1), then S ⊥ is the line spanned by (1, −1, 0).

18 S⊥ contains all vectors perpendicular to those two given vectors. So S⊥ is the nullspace of A = [ 1 5 1 ; 2 2 2 ]. Therefore S⊥ is a subspace even if S is not.

19 L⊥ is the 2-dimensional subspace (a plane) in R3 perpendicular to L. Then (L⊥ )⊥ is

a 1-dimensional subspace (a line) perpendicular to L⊥ . In fact (L⊥ )⊥ is L.

20 If V is the whole space R4 , then V ⊥ contains only the zero vector. Then (V ⊥ )⊥ =

all vectors perpendicular to the zero vector = R4 = V .


 
21 For example (−5, 0, 1, 1) and (0, 1, −1, 0) span S⊥ = nullspace of A = [ 1 2 2 3 ; 1 3 3 2 ].

22 (1, 1, 1, 1) is a basis for the line P⊥ orthogonal to P. A = [ 1 1 1 1 ] has P as its nullspace and P⊥ as its row space.

23 x in V ⊥ is perpendicular to every vector in V . Since V contains all the vectors in S,

x is perpendicular to every vector in S. So every x in V ⊥ is also in S ⊥ .

24 AA−1 = I: Column 1 of A−1 is orthogonal to rows 2, 3, . . . , n and therefore to the

space spanned by those rows.

25 If the columns of A are unit vectors, all mutually perpendicular, then AT A = I. Simple

but important ! We write Q for such a matrix.


26 A = [ 2 2 −1 ; −1 2 2 ; 2 −1 2 ] shows a matrix with perpendicular columns. AT A = 9I is diagonal: (AT A)ij = (column i of A) · (column j of A). When the columns are unit vectors, then AT A = I.
27 The lines 3x + y = b1 and 6x + 2y = b2 are parallel. They are the same line if

b2 = 2b1 . In that case (b1 , b2 ) is perpendicular to (−2, 1). The nullspace of the 2 by 2
matrix is the line 3x + y = 0. One particular vector in the nullspace is (−1, 3).

28 (a) (1, −1, 0) is in both planes. Normal vectors are perpendicular, but the planes still intersect! Two planes in R3 can't be orthogonal. (b) Need three orthogonal vectors to span the whole orthogonal complement in R5. (c) Lines in R3 can meet at the zero vector without being orthogonal.
   
29 A = [ 1 2 3 ; 2 1 0 ; 3 0 1 ] has v = (1, 2, 3) in its row and column spaces; B = [ 1 1 −1 ; 2 −1 0 ; 3 0 −1 ] has v in its column space and nullspace. v cannot be in the nullspace and row space, or in the left nullspace and column space: those spaces are orthogonal and vT v ≠ 0.

30 When AB = 0, every column of B is multiplied by A to give zero. So the column

space of B is contained in the nullspace of A. Therefore the dimension of C(B) ≤


dimension of N (A). This means rank(B) ≤ 4 − rank(A).
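A made-up 4 by 4 pair with AB = 0 (an added check, not from the manual) shows the bound rank(A) + rank(B) ≤ 4 holding with equality:

```python
import numpy as np

# AB = 0 forces C(B) to lie inside N(A), so rank(B) <= n - rank(A).
A = np.array([[1., 0., 0., 0.],
              [0., 1., 0., 0.],
              [0., 0., 0., 0.],
              [0., 0., 0., 0.]])   # rank 2, nullspace spanned by e3, e4
B = np.array([[0., 0., 0., 0.],
              [0., 0., 0., 0.],
              [1., 0., 0., 0.],
              [0., 1., 0., 0.]])   # columns lie in N(A), rank 2

print(np.allclose(A @ B, 0))                                       # True
print(np.linalg.matrix_rank(A) + np.linalg.matrix_rank(B))         # 4 = n
```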

31 null(N ′ ) produces a basis for the row space of A (perpendicular to N(A)).

32 We need rT n = 0 and cT ℓ = 0. All possible examples have the form A = a crT with scalar a ≠ 0.

33 Both r's must be orthogonal to both n's, both c's must be orthogonal to both ℓ's, and each pair (r's, n's, c's, and ℓ's) must be independent. Fact: all A's with these subspaces have the form [ c1 c2 ] M [ r1 r2 ]T for a 2 by 2 invertible M. You must take [ c1 c2 ] times [ r1 r2 ]T.

Problem Set 4.2, page 214

1 (a) aT b/aT a = 5/3; p = 5a/3 = (5/3, 5/3, 5/3); e = (−2, 1, 1)/3



(b) aT b/aT a = −1; p = −a; e = 0.
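Part (a) can be reproduced in a few lines (an added sketch; a = (1, 1, 1) and b = (1, 2, 2) are the vectors behind the numbers above):

```python
import numpy as np

# Project b onto the line through a.
a = np.array([1., 1., 1.])
b = np.array([1., 2., 2.])

xhat = a @ b / (a @ a)       # a^T b / a^T a = 5/3
p = xhat * a                 # projection (5/3, 5/3, 5/3)
e = b - p                    # error (-2, 1, 1)/3, perpendicular to a
print(xhat, p, e @ a)
```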

2 (a) The projection of b = (cos θ, sin θ) onto a = (1, 0) is p = (cos θ, 0)

(b) The projection of b = (1, 1) onto a = (1, −1) is p = (0, 0) since aT b = 0.

The picture for part (a) has the vector b at an angle θ with the horizontal a. The picture
for part (b) has vectors a and b at a 90◦ angle.
       
3 P1 = (1/3)[ 1 1 1 ; 1 1 1 ; 1 1 1 ] and P1 b = (5, 5, 5). P2 = (1/11)[ 1 3 1 ; 3 9 3 ; 1 3 1 ] and P2 b = (1, 3, 1).

4 P1 = [ 1 0 ; 0 0 ] projects onto (1, 0); P2 = aaT/aT a = (1/2)[ 1 −1 ; −1 1 ] projects onto (1, −1). P1 P2 ≠ 0 and P1 + P2 is not a projection matrix: (P1 + P2)² is different from P1 + P2.
   
5 P1 = (1/9)[ 1 −2 −2 ; −2 4 4 ; −2 4 4 ] and P2 = (1/9)[ 4 4 −2 ; 4 4 −2 ; −2 −2 1 ]. P1 and P2 are the projection matrices onto the lines through a1 = (−1, 2, 2) and a2 = (2, 2, −1). P1 P2 = zero matrix because a1 ⊥ a2.

6 p1 = (1/9, −2/9, −2/9) and p2 = (4/9, 4/9, −2/9) and p3 = (4/9, −2/9, 4/9). So p1 + p2 + p3 = b.
     
7 P1 + P2 + P3 = (1/9)[ 1 −2 −2 ; −2 4 4 ; −2 4 4 ] + (1/9)[ 4 4 −2 ; 4 4 −2 ; −2 −2 1 ] + (1/9)[ 4 −2 4 ; −2 1 −2 ; 4 −2 4 ] = I. We can add projections onto orthogonal vectors to get the projection matrix onto the larger space. This is important.
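An added numerical check of this sum, using the three orthogonal directions a1, a2, a3 from Problems 5-7:

```python
import numpy as np

# Three mutually orthogonal directions.
a1 = np.array([-1., 2., 2.])
a2 = np.array([ 2., 2., -1.])
a3 = np.array([ 2., -1., 2.])

def proj(a):
    """Rank-one projection matrix a a^T / a^T a onto the line through a."""
    return np.outer(a, a) / (a @ a)

P = proj(a1) + proj(a2) + proj(a3)
print(np.allclose(P, np.eye(3)))   # True: the three projections add to I
```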

8 The projections of (1, 1) onto (1, 0) and (1, 2) are p1 = (1, 0) and p2 = (3/5)(1, 2). Then p1 + p2 ≠ b. The sum of projections is not a projection onto the space spanned by (1, 0) and (1, 2) because those vectors are not orthogonal.

9 Since A is invertible, P = A(AT A)−1 AT separates into AA−1 (AT )−1 AT = I. And

I is the projection matrix onto all of R2 .


10 P2 = a2 a2T/a2T a2 = [ 0.2 0.4 ; 0.4 0.8 ] and P2 a1 = (0.2, 0.4). P1 = a1 a1T/a1T a1 = [ 1 0 ; 0 0 ] and P1 P2 a1 = (0.2, 0). This is not a1 = (1, 0): no, P1 P2 ≠ (P1 P2)².

11 (a) p = A(AT A)−1 AT b = (2, 3, 0), e = (0, 0, 4), AT e = 0

(b) p = (4, 4, 6) and e = 0 because b is in the column space of A.


 
12 P1 = [ 1 0 0 ; 0 1 0 ; 0 0 0 ] = projection matrix onto the column space of A (the xy plane). P2 = [ 0.5 0.5 0 ; 0.5 0.5 0 ; 0 0 1 ] = projection matrix A(AT A)−1 AT onto the second column space. Certainly (P2)² = P2: a true projection matrix.
       
13 A = [ 1 0 0 ; 0 1 0 ; 0 0 1 ; 0 0 0 ], P = square matrix = [ 1 0 0 0 ; 0 1 0 0 ; 0 0 1 0 ; 0 0 0 0 ], and p = P(1, 2, 3, 4) = (1, 2, 3, 0).

14 The projection of this b onto the column space of A is b itself, because b is in that column space. But P is not necessarily I. Here b = 2(column 1 of A):
A = [ 0 1 ; 1 2 ; 2 0 ] gives P = (1/21)[ 5 8 −4 ; 8 17 2 ; −4 2 20 ] and b = P b = p = (0, 2, 4).

15 2A has the same column space as A. Then P is the same for A and 2A, but x̂ for 2A is half of x̂ for A.

16 (1/2)(1, 2, −1) + (3/2)(1, 0, 1) = (2, 1, 1). So b is in the plane. Projection shows P b = b.

17 If P² = P then (I − P)² = (I − P)(I − P) = I − P I − I P + P² = I − P. When P projects onto the column space, I − P projects onto the left nullspace.
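This identity is easy to confirm numerically (an added sketch; the 4 by 2 matrix A below is an arbitrary full-rank choice):

```python
import numpy as np

# If P is a projection then so is I - P: check (I - P)^2 = I - P.
A = np.array([[1., 0.],
              [1., 1.],
              [1., 3.],
              [1., 4.]])                      # arbitrary full-rank columns
P = A @ np.linalg.inv(A.T @ A) @ A.T          # projects onto C(A)
IP = np.eye(4) - P                            # projects onto N(A^T)

print(np.allclose(P @ P, P))                  # True
print(np.allclose(IP @ IP, IP))               # True
print(np.allclose(P @ IP, np.zeros((4, 4))))  # True: orthogonal ranges
```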

18 (a) I − P is the projection matrix onto (1, −1) in the perpendicular direction to (1, 1)

(b) I − P projects onto the plane x + y + z = 0 perpendicular to (1, 1, 1).


 
19 For any basis vectors in the plane x − y − 2z = 0, say (1, 1, 0) and (2, 0, 1), the matrix P = A(AT A)−1 AT is [ 5/6 1/6 1/3 ; 1/6 5/6 −1/3 ; 1/3 −1/3 1/3 ].

20 e = (1, −1, −2), Q = eeT/eT e = [ 1/6 −1/6 −1/3 ; −1/6 1/6 1/3 ; −1/3 1/3 2/3 ], and I − Q = [ 5/6 1/6 1/3 ; 1/6 5/6 −1/3 ; 1/3 −1/3 1/3 ].
21 (A(AT A)−1 AT)² = A(AT A)−1(AT A)(AT A)−1 AT = A(AT A)−1 AT. So P² = P. P b is in the column space (where P projects). Then its projection P(P b) is also P b.

22 P T = (A(AT A)−1 AT)T = A((AT A)−1)T AT = A(AT A)−1 AT = P. (AT A is symmetric!)

23 If A is invertible then its column space is all of Rn . So P = I and e = 0.

24 The nullspace of AT is orthogonal to the column space C(A). So if AT b = 0, the projection of b onto C(A) should be p = 0. Check: P b = A(AT A)−1 AT b = A(AT A)−1 0 = 0.

25 The column space of P is the space that P projects onto. The column space of A

always contains all outputs Ax and here the outputs P x fill the subspace S. Then rank
of P = dimension of S = n.

26 A−1 exists since the rank is r = m. Multiply A2 = A by A−1 to get A = I.

27 If AT Ax = 0 then Ax is in the nullspace of AT . But Ax is always in the column

space of A. To be in both of those perpendicular spaces, Ax must be zero. So A and


AT A have the same nullspace : AT Ax = 0 exactly when Ax = 0.

28 P 2 = P = P T give P T P = P . Then the (2, 2) entry of P equals the (2, 2) entry of

P T P . But the (2, 2) entry of P T P is the length squared of column 2.

29 A = B T has independent columns, so AT A (which is BB T ) must be invertible.


   
3 T 9 12
aa 1
30 (a) The column space is the line through a =   so PC = T =  .
4 a a 25 12 16
Solutions to Exercises 77

The formula P = A(AT A)−1 AT needs independent columns—this A has dependent


columns. The update formula is correct.

(b) The row space is the line through v = (1, 2, 2) and PR = vv T /v T v. Always
PC A = A (columns of A project to themselves) and APR = A. Then PC APR = A.

31 Test: The error e = b − p must be perpendicular to all the a’s.

32 Since P1 b is in C(A) and P2 projects onto that column space, P2 (P1 b) equals P1 b.

So P2 P1 = P1 = aaT /aT a where a = (1, 2, 0).


33 Each of b1 to b999 is multiplied by 1/999 − (1/1000)(1/999) = 999/(1000 · 999) = 1/1000. The last pages of the book discuss least squares and the Kalman filter.

Problem Set 4.3, page 229


   
1 A = [ 1 0 ; 1 1 ; 1 3 ; 1 4 ] and b = (0, 8, 8, 20) give AT A = [ 4 8 ; 8 26 ] and AT b = (36, 112). AT A x̂ = AT b gives x̂ = (1, 4). Then p = A x̂ = (1, 5, 13, 17) and e = b − p = (−1, 3, −5, 3), with E = ‖e‖² = 44.
     
2 [ 1 0 ; 1 1 ; 1 3 ; 1 4 ][ C ; D ] = (0, 8, 8, 20). This Ax = b is unsolvable. Project b to p = P b = (1, 5, 13, 17); when p replaces b, x̂ = (1, 4) exactly solves A x̂ = p.
3 In Problem 2, p = A(AT A)−1 AT b = (1, 5, 13, 17) and e = b − p = (−1, 3, −5, 3). This e is perpendicular to both columns of A. The shortest distance is ‖e‖ = √44.
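The same least-squares line can be computed directly (an added sketch using NumPy; the data are the four points from Problems 1-3):

```python
import numpy as np

# Fit the best line C + Dt through (0,0), (1,8), (3,8), (4,20).
t = np.array([0., 1., 3., 4.])
b = np.array([0., 8., 8., 20.])
A = np.column_stack([np.ones(4), t])

xhat, *_ = np.linalg.lstsq(A, b, rcond=None)   # solves A^T A xhat = A^T b
p = A @ xhat
e = b - p
print(xhat)            # [1. 4.]  -> best line b = 1 + 4t
print(e, e @ e)        # error (-1, 3, -5, 3) with ||e||^2 = 44
```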

4 E = (C + 0D)² + (C + 1D − 8)² + (C + 3D − 8)² + (C + 4D − 20)². Then ∂E/∂C = 2C + 2(C + D − 8) + 2(C + 3D − 8) + 2(C + 4D − 20) = 0 and ∂E/∂D = 1 · 2(C + D − 8) + 3 · 2(C + 3D − 8) + 4 · 2(C + 4D − 20) = 0. These two normal equations are again [ 4 8 ; 8 26 ][ C ; D ] = [ 36 ; 112 ].
5 E = (C − 0)² + (C − 8)² + (C − 8)² + (C − 20)². AT = [ 1 1 1 1 ] and AT A = [ 4 ]. AT b = [ 36 ] and (AT A)−1 AT b = 9 = best height C for the horizontal line. Errors e = b − p = (−9, −1, −1, 11) still add to zero.

6 a = (1, 1, 1, 1) and b = (0, 8, 8, 20) give x̂ = aT b/aT a = 9 and the projection is x̂ a = p = (9, 9, 9, 9). Then eT a = (−9, −1, −1, 11)T(1, 1, 1, 1) = 0 and the shortest distance from b to the line through a is ‖e‖ = √204.
7 Now the 4 by 1 matrix in Ax = b is A = [ 0 1 3 4 ]T. Then AT A = [ 26 ] and AT b = [ 112 ]. Best D = 112/26 = 56/13.

8 x̂ = aT b/aT a = 56/13 and p = (56/13)(0, 1, 3, 4). (C, D) = (9, 56/13) don't match (C, D) = (1, 4) from Problems 1-4. Columns of A were not perpendicular so we can't project separately to find C and D.
   
9 Parabola: project b in R4 onto the 3-dimensional column space of A = [ 1 0 0 ; 1 1 1 ; 1 3 9 ; 1 4 16 ], with b = (0, 8, 8, 20). AT A x̂ = AT b is [ 4 8 26 ; 8 26 92 ; 26 92 338 ][ C ; D ; E ] = [ 36 ; 112 ; 400 ]. Figure 4.9 (a) is fitting 4 points and 4.9 (b) is a projection in R4: same problem!
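An added check that rebuilds this 3 by 3 system from the data:

```python
import numpy as np

# Normal equations for the parabola C + Dt + Et^2 through the same four points.
t = np.array([0., 1., 3., 4.])
b = np.array([0., 8., 8., 20.])
A = np.column_stack([np.ones(4), t, t**2])

AtA = A.T @ A
Atb = A.T @ b
print(AtA)    # [[4, 8, 26], [8, 26, 92], [26, 92, 338]]
print(Atb)    # [36, 112, 400]

xhat = np.linalg.solve(AtA, Atb)   # best parabola coefficients (C, D, E)
```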
        
10 [ 1 0 0 0 ; 1 1 1 1 ; 1 3 9 27 ; 1 4 16 64 ][ C ; D ; E ; F ] = [ 0 ; 8 ; 8 ; 20 ]. Then (C, D, E, F) = (0, 47/3, −28/3, 5/3). Exact cubic, so p = b and e = 0. This Vandermonde matrix gives exact interpolation by a cubic at 0, 1, 3, 4.

11 (a) The best line x = 1 + 4t gives the center point b̂ = 9 at the center time t̂ = 2.
(b) The first equation Cm + D Σ ti = Σ bi divided by m gives C + D t̂ = b̂. This shows: the best line goes through b̂ at time t̂.

12 (a) a = (1, . . . , 1) has aT a = m and aT b = b1 + · · · + bm. Therefore x̂ = aT b/m is the mean of the b's (their average value).
(b) e = b − x̂ a and ‖e‖² = (b1 − mean)² + · · · + (bm − mean)² = variance (denoted by σ²).
(c) p = (3, 3, 3) and e = (−2, −1, 3) with pT e = 0. Projection matrix P = (1/3)[ 1 1 1 ; 1 1 1 ; 1 1 1 ].
13 (AT A)−1 AT(b − Ax) = x̂ − x. This tells us: when the components of Ax − b add to zero, so do the components of x̂ − x. Unbiased.

14 The matrix (x̂ − x)(x̂ − x)T is (AT A)−1 AT(b − Ax)(b − Ax)T A(AT A)−1. When the average of (b − Ax)(b − Ax)T is σ²I, the average of (x̂ − x)(x̂ − x)T will be the output covariance matrix (AT A)−1 AT σ²A(AT A)−1, which simplifies to σ²(AT A)−1. That gives the average of the squared output errors x̂ − x.

15 When A has 1 column of 4 ones, Problem 14 gives the expected error (x̂ − x)² as σ²(AT A)−1 = σ²/4. By taking m measurements, the variance drops from σ² to σ²/m. This leads to the Monte Carlo method in Section 12.1.

16 x̂10 = (1/10) b10 + (9/10) x̂9 = (1/10)(b1 + · · · + b10). Knowing x̂9 avoids adding all ten b's.
   
17 [ 1 −1 ; 1 1 ; 1 2 ][ C ; D ] = [ 7 ; 7 ; 21 ]. The solution x̂ = (9, 4) comes from the normal equations [ 3 2 ; 2 6 ][ C ; D ] = [ 35 ; 42 ].
18 p = A x̂ = (5, 13, 17) gives the heights of the closest line. The vertical errors are b − p = (2, −6, 4). This error e has P e = P b − P p = p − p = 0.

19 If b = error e then b is perpendicular to the column space of A. Projection p = 0.

20 The matrix A has columns (1, 1, 1) and (−1, 1, 2). If b = A x̂ = (5, 13, 17) then x̂ = (9, 4) and e = 0, since b = 9(column 1) + 4(column 2) is in the column space of A.

21 e is in N(AT); p is in C(A); x̂ is in C(AT); N(A) = {0} = zero vector only.
    
22 The least squares equation is [ 5 0 ; 0 10 ][ C ; D ] = [ 5 ; −10 ]. Solution: C = 1, D = −1. The best line is b = 1 − t. Symmetric t's ⇒ diagonal AT A ⇒ easy solution.

23 e is orthogonal to p in Rm; then ‖e‖² = eT(b − p) = eT b = bT b − bT p.

24 The derivatives of ‖Ax − b‖² = xT AT Ax − 2bT Ax + bT b (this last term is constant) are zero when 2AT Ax = 2AT b, or x = (AT A)−1 AT b.

25 3 points on a line will give equal slopes (b2 − b1)/(t2 − t1) = (b3 − b2)/(t3 − t2). Linear algebra: orthogonal to the columns (1, 1, 1) and (t1, t2, t3) is y = (t2 − t3, t3 − t1, t1 − t2) in the left nullspace of A. b is in the column space! Then yT b = 0 is the same equal-slopes condition written as (b2 − b1)(t3 − t2) = (b3 − b2)(t2 − t1).
   
26 The unsolvable equations for C + Dx + Ey = (0, 1, 3, 4) at the 4 corners are [ 1 1 0 ; 1 0 1 ; 1 −1 0 ; 1 0 −1 ][ C ; D ; E ] = [ 0 ; 1 ; 3 ; 4 ]. Then AT A = [ 4 0 0 ; 0 2 0 ; 0 0 2 ] and AT b = [ 8 ; −2 ; −3 ], giving (C, D, E) = (2, −1, −3/2). At x, y = 0, 0 the best plane 2 − x − (3/2)y has height C = 2 = average of 0, 1, 3, 4.

27 The shortest link connecting two lines in space is perpendicular to those lines.

28 If A has dependent columns, then AT A is not invertible and the usual formula P = A(AT A)−1 AT will fail. Replace A in that formula by the matrix B that keeps only the pivot columns of A.

29 Only 1 plane contains 0, a1 , a2 unless a1 , a2 are dependent. Same test for a1 , . . . , an−1 .

If they are dependent, there is a vector v perpendicular to all the a’s. Then they all lie
on the plane v T x = 0 going through x = (0, 0, . . . , 0).

30 When A has orthogonal columns (1, . . . , 1) and (T1, . . . , Tm), the matrix AT A is diagonal with entries m and T1² + · · · + Tm². Also AT b has entries b1 + · · · + bm and T1 b1 + · · · + Tm bm. The solution with that diagonal AT A is just the given x̂ = (C, D).

Problem Set 4.4, page 242

1 (a) Independent (b) Independent and orthogonal (c) Independent and orthonormal.

For orthonormal vectors, (a) becomes (1, 0), (0, 1) and (b) is (.6, .8), (.8, −.6).
 
  5/9 2/9 −4/9
Divide by length 3 to get 1 0  
2 QT Q =   but QQT =   2/9 8/9

2/9 .
2 2 1 1 2 2
q 1 = ( 3 , 3 , − 3 ). q 2 = (− 3 , 3 , 3 ). 0 1  
−4/9 2/9 5/9
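An added numerical check of QT Q versus QQT for this Q:

```python
import numpy as np

# Q has the two orthonormal columns q1 = (2,2,-1)/3 and q2 = (-1,2,2)/3.
Q = np.array([[ 2., -1.],
              [ 2.,  2.],
              [-1.,  2.]]) / 3.0

print(Q.T @ Q)              # 2 by 2 identity
P = Q @ Q.T                 # 3 by 3, NOT the identity
print(np.allclose(P @ P, P))   # True: QQ^T is the projection onto C(Q)
```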
3 (a) AT A will be 16I. (b) AT A will be diagonal with entries 1², 2², 3² = 1, 4, 9.
   
4 (a) Q = [ 1 0 ; 0 1 ; 0 0 ] has QQT = [ 1 0 0 ; 0 1 0 ; 0 0 0 ] ≠ I. Any Q with n < m has QQT ≠ I.
(b) (1, 0) and (0, 0) are orthogonal, not independent. Nonzero orthogonal vectors are independent. (c) From q1 = (1, 1, 1)/√3 my favorite is q2 = (1, −1, 0)/√2 and q3 = (1, 1, −2)/√6.

5 Orthogonal vectors are (1, −1, 0) and (1, 1, −1). Orthonormal after dividing by their lengths: (1/√2, −1/√2, 0) and (1/√3, 1/√3, −1/√3).

6 Q1 Q2 is orthogonal because (Q1 Q2)T Q1 Q2 = Q2T Q1T Q1 Q2 = Q2T Q2 = I.

7 When Gram-Schmidt gives Q with orthonormal columns, QT Q x̂ = QT b becomes x̂ = QT b. No cost to solve the normal equations!

8 If q1 and q2 are orthonormal vectors in R5 then p = (q1T b)q1 + (q2T b)q2 is closest to b. The error e = b − p is orthogonal to q1 and q2.


   
9 (a) Q = [ .8 −.6 ; .6 .8 ; 0 0 ] has P = QQT = [ 1 0 0 ; 0 1 0 ; 0 0 0 ] = projection on the xy plane.
(b) (QQT)(QQT) = Q(QT Q)QT = QQT.

10 (a) If q1, q2, q3 are orthonormal then the dot product of q1 with c1 q1 + c2 q2 + c3 q3 = 0 gives c1 = 0. Similarly c2 = c3 = 0. This proves: the q's are independent.
(b) Qx = 0 leads to QT Qx = 0, which says x = 0.


11 (a) Two orthonormal vectors are q1 = (1/10)(1, 3, 4, 5, 7) and q2 = (1/10)(−7, 3, 4, −5, 1).
(b) Closest projection in the plane = projection QQT(1, 0, 0, 0, 0) = (0.5, −0.18, −0.24, 0.4, 0).

12 (a) Orthonormal a's: a1T b = a1T(x1 a1 + x2 a2 + x3 a3) = x1(a1T a1) = x1.
(b) Orthogonal a's: a1T b = a1T(x1 a1 + x2 a2 + x3 a3) = x1(a1T a1). Therefore x1 = a1T b/a1T a1.
(c) x1 is the first component of A−1 times b (A is 3 by 3 and invertible).


   
13 The multiple to subtract is aT b/aT a = 2. Then B = b − (aT b/aT a) a = (4, 0) − 2(1, 1) = (2, −2).

14 [ 1 4 ; 1 0 ] = [ q1 q2 ][ ‖a‖ q1T b ; 0 ‖B‖ ] = [ 1/√2 1/√2 ; 1/√2 −1/√2 ][ √2 2√2 ; 0 2√2 ] = QR.
15 (a) Gram-Schmidt chooses q1 = a/‖a‖ = (1/3)(1, 2, −2) and q2 = (1/3)(2, 1, 2). Then q3 = (1/3)(2, −2, −1).
(b) The nullspace of AT contains q3.
(c) x̂ = (AT A)−1 AT(1, 2, 7) = (1, 2).

16 p = (aT b/aT a) a = 14a/49 = 2a/7 is the projection of b onto a. q1 = a/‖a‖ = a/7 is (4, 5, 2, 2)/7. B = b − p = (−1, 4, −4, −4)/7 has ‖B‖ = 1, so q2 = B.

17 p = (aT b/aT a) a = (3, 3, 3) and e = (−2, 0, 2). Then Gram-Schmidt will choose q1 = (1, 1, 1)/√3 and q2 = (−1, 0, 1)/√2.

18 A = a = (1, −1, 0, 0); B = b − p = (1/2, 1/2, −1, 0); C = c − pA − pB = (1/3, 1/3, 1/3, −1). Notice the pattern in those orthogonal A, B, C. In R5, D would be (1/4, 1/4, 1/4, 1/4, −1). Gram-Schmidt would go on to normalize q1 = A/‖A‖, q2 = B/‖B‖, q3 = C/‖C‖.



19 If A = QR then AT A = RT QT QR = RT R = lower triangular times upper triangular (this Cholesky factorization of AT A uses the same R as Gram-Schmidt!). The example has A = [ −1 1 ; 2 1 ; 2 4 ] = (1/3)[ −1 2 ; 2 −1 ; 2 2 ][ 3 3 ; 0 3 ] = QR, and the same R appears in AT A = [ 9 9 ; 9 18 ] = [ 3 0 ; 3 3 ][ 3 3 ; 0 3 ] = RT R.
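An added check of both factorizations in this example:

```python
import numpy as np

# The example: A = QR, and the same R gives A^T A = R^T R (Cholesky).
A = np.array([[-1., 1.],
              [ 2., 1.],
              [ 2., 4.]])
Q = np.array([[-1., 2.],
              [ 2., -1.],
              [ 2., 2.]]) / 3.0
R = np.array([[3., 3.],
              [0., 3.]])

print(np.allclose(A, Q @ R))             # True
print(np.allclose(A.T @ A, R.T @ R))     # True: both equal [[9, 9], [9, 18]]
```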

20 (a) True, because QT Q = I gives Q−1 = QT and then (Q−1)T(Q−1) = QQT = I.
(b) True. Qx = x1 q1 + x2 q2 and ‖Qx‖² = x1² + x2² because q1 · q2 = 0. Also ‖Qx‖² = xT QT Qx = xT x.

21 The orthonormal vectors are q1 = (1, 1, 1, 1)/2 and q2 = (−5, −1, 1, 5)/√52. Then b = (−4, −3, 3, 0) projects to p = (q1T b)q1 + (q2T b)q2 = (−7, −3, −1, 3)/2. And b − p = (−1, −3, 7, −3)/2 is orthogonal to both q1 and q2.

22 A = (1, 1, 2), B = (1, −1, 0), C = (−1, −1, 1). These are not yet unit vectors. As in

Problem 18, Gram-Schmidt will divide by ||A|| and ||B|| and ||C||.
        
23 You can see why q1 = (1, 0, 0), q2 = (0, 0, 1), q3 = (0, 1, 0). Then A = [ 1 0 0 ; 0 0 1 ; 0 1 0 ][ 1 2 4 ; 0 3 6 ; 0 0 5 ] = QR. This Q is just a permutation matrix, certainly orthogonal.

24 (a) One basis for the subspace S of solutions to x1 + x2 + x3 − x4 = 0 is the 3 special solutions v1 = (−1, 1, 0, 0), v2 = (−1, 0, 1, 0), v3 = (1, 0, 0, 1).
(b) Since S contains solutions to (1, 1, 1, −1)T x = 0, a basis for S⊥ is (1, 1, 1, −1).
(c) Split (1, 1, 1, 1) into b1 + b2 by projection on S⊥ and S: b2 = (1/2, 1/2, 1/2, −1/2) and b1 = (1/2, 1/2, 1/2, 3/2).

25 This question shows 2 by 2 formulas for QR; breakdown (R22 = 0) for singular A.
Nonsingular example: [ 2 1 ; 1 1 ] = (1/√5)[ 2 −1 ; 1 2 ] · (1/√5)[ 5 3 ; 0 1 ].
Singular example: [ 1 1 ; 1 1 ] = (1/√2)[ 1 −1 ; 1 1 ] · (1/√2)[ 2 2 ; 0 0 ].
The Gram-Schmidt process breaks down when ad − bc = 0.

26 (q2T C*) q2 = (BT c/BT B) B because q2 = B/‖B‖, and the extra q1 in C* is orthogonal to q2.

27 When a and b are not orthogonal, the projections onto these lines do not add to the projection onto the plane of a and b. We must use the orthogonal A and B (or orthonormal q1 and q2) to be allowed to add projections on those lines.

28 There are (1/2)m²n multiplications to find the numbers rkj and the same for vij.

29 q1 = (1/3)(2, 2, −1), q2 = (1/3)(2, −1, 2), q3 = (1/3)(1, −2, −2).

30 The columns of the wavelet matrix W are orthonormal. Then W −1 = W T . This is a

useful orthonormal basis with many zeros.

31 (a) c = 1/2 normalizes all the orthogonal columns to have unit length.
(b) The projection (aT b/aT a) a of b = (1, 1, 1, 1) onto the first column is p1 = (1/2)(−1, 1, 1, 1). To project onto the plane, add p2 = (1/2)(1, −1, 1, 1) to get (0, 0, 1, 1).
 
  1 0 0
1 0  
32 Q1 =   reflects across x axis, Q2 = 
0 0

−1  across plane y + z = 0.
0 −1  
0 −1 0

33 Orthogonal and lower triangular ⇒ ±1 on the main diagonal and zeros elsewhere.

34 (a) Qu = (I − 2uuT)u = u − 2uuT u. This is −u, provided that uT u equals 1.
(b) Qv = (I − 2uuT)v = v − 2uuT v = v, provided that uT v = 0.
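An added sketch of this reflection; u = (1, 2, 2)/3 and v = (2, −1, 0) are made-up vectors with uT u = 1 and uT v = 0:

```python
import numpy as np

# Householder reflection Q = I - 2 u u^T for a unit vector u.
u = np.array([1., 2., 2.]) / 3.0          # unit vector: u^T u = 1
Q = np.eye(3) - 2.0 * np.outer(u, u)

v = np.array([2., -1., 0.])               # orthogonal to u: u^T v = 0
print(Q @ u)        # -u: u is reversed
print(Q @ v)        # v: vectors in the mirror plane are unchanged
```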

35 Starting from A = (1, −1, 0, 0), the orthogonal (not orthonormal) vectors B = (1, 1, −2, 0) and C = (1, 1, 1, −3) and D = (1, 1, 1, 1) are in the directions of q2, q3, q4. The 4 by 4 and 5 by 5 matrices with integer orthogonal columns (not orthogonal rows, since not orthonormal Q!) are
[ A B C D ] = [ 1 1 1 1 ; −1 1 1 1 ; 0 −2 1 1 ; 0 0 −3 1 ] and [ 1 1 1 1 1 ; −1 1 1 1 1 ; 0 −2 1 1 1 ; 0 0 −3 1 1 ; 0 0 0 −4 1 ].

36 [Q, R] = qr(A) produces from A (m by n of rank n) a "full-size" square Q = [ Q1 Q2 ] and the factor [ R ; 0 ]. The columns of Q1 are the orthonormal basis from Gram-Schmidt of the column space of A. The m − n columns of Q2 are an orthonormal basis for the left nullspace of A. Together the columns of Q = [ Q1 Q2 ] are an orthonormal basis for Rm.

37 This question describes the next qn+1 in Gram-Schmidt using the matrix Q with the columns q1, . . . , qn (instead of using those q's separately). Start from a, subtract its projection p = QQT a onto the earlier q's, and divide by the length of e = a − QQT a to get qn+1 = e/‖e‖.
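This step can be written as a short function (an added sketch; the matrix Q and the vector a below are made-up inputs):

```python
import numpy as np

def next_q(Q, a):
    """One Gram-Schmidt step: remove the projection of a onto C(Q), then normalize."""
    e = a - Q @ (Q.T @ a)        # subtract the projection Q Q^T a
    return e / np.linalg.norm(e)

# Earlier q's are the columns of Q (here q1 = e1, q2 = e2 for illustration).
Q = np.array([[1., 0.],
              [0., 1.],
              [0., 0.]])
q3 = next_q(Q, np.array([1., 2., 3.]))
print(q3)    # (0, 0, 1): the unit vector orthogonal to the earlier q's
```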
