Answers to the Even-Numbered Exercises
$$\cos \theta = \frac{p \cdot q}{|p||q|} = \frac{1+1-1}{\sqrt{3}\cdot\sqrt{3}} = \frac{1}{3}, \qquad \theta \approx 70.5^{\circ}.$$

Between two face diagonals, $p = (1, 0, -1)$ and $q = (0, 1, -1)$:

$$\cos \theta = \frac{p \cdot q}{|p||q|} = \frac{0+0+1}{\sqrt{2}\cdot\sqrt{2}} = \frac{1}{2}, \qquad \theta = 60^{\circ}.$$

Between a space diagonal and an edge, $p = (1, 1, 1)$ and $q = (1, 0, 0)$:

$$\cos \theta = \frac{p \cdot q}{|p||q|} = \frac{1+0+0}{\sqrt{3}\cdot 1} = \frac{1}{\sqrt{3}}, \qquad \theta \approx 54.7^{\circ}.$$

Between a space diagonal and a face diagonal, $p = (1, 1, 1)$ and $q = (1, 1, 0)$:

$$\cos \theta = \frac{p \cdot q}{|p||q|} = \frac{1+1+0}{\sqrt{3}\cdot\sqrt{2}} = \frac{2}{\sqrt{6}}, \qquad \theta \approx 35.3^{\circ}.$$
The other types of angles are obvious: 45 ◦ or 90 ◦ .
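The four angle computations above can be verified numerically; a minimal sketch (numpy assumed; the particular second space diagonal $(1,1,-1)$ is an arbitrary choice, since any pair gives the same angle):

```python
import numpy as np

def angle_deg(p, q):
    # angle between vectors p and q, in degrees
    p, q = np.asarray(p, float), np.asarray(q, float)
    c = p @ q / (np.linalg.norm(p) * np.linalg.norm(q))
    return np.degrees(np.arccos(c))

space_space = angle_deg([1, 1, 1], [1, 1, -1])   # two space diagonals
face_face   = angle_deg([1, 0, -1], [0, 1, -1])  # two face diagonals
space_edge  = angle_deg([1, 1, 1], [1, 0, 0])    # space diagonal vs. edge
space_face  = angle_deg([1, 1, 1], [1, 1, 0])    # space diagonal vs. face diagonal
```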
1.2.8. a) Let $p = \overrightarrow{AB} = (1, 1)$ and $q = \overrightarrow{BC} = (2, -5)$. The projection of $p$ onto $q$ is then

$$p_1 = \frac{p \cdot q}{|q|^2}\, q = \frac{2-5}{4+25}\,(2, -5) = \left(-\frac{6}{29}, \frac{15}{29}\right).$$
b) If $D$ denotes the foot of the projection of $p$ onto $q$, then

$$\overrightarrow{AD} = p - p_1 = (1, 1) - \left(-\frac{6}{29}, \frac{15}{29}\right) = \left(\frac{35}{29}, \frac{14}{29}\right),$$

and

$$\text{dist of } A \text{ from } BC = |\overrightarrow{AD}| = \frac{1}{29}\sqrt{35^2 + 14^2} = \frac{7}{\sqrt{29}}.$$
c)

$$\text{Area of } \Delta = \frac{1}{2}|\overrightarrow{BC}||\overrightarrow{AD}| = \frac{1}{2}\sqrt{2^2 + 5^2}\cdot\frac{7}{\sqrt{29}} = \frac{7}{2}.$$
1.2.10.

$$|p + q|^2 = (p + q) \cdot (p + q) = p^2 + 2p \cdot q + q^2,$$

and

$$|p - q|^2 = (p - q) \cdot (p - q) = p^2 - 2p \cdot q + q^2.$$

Thus

$$|p + q|^2 + |p - q|^2 = 2p^2 + 2q^2 = 2|p|^2 + 2|q|^2.$$
Geometrically, this result says that the sum of the squares of the lengths of
the diagonals of a parallelogram equals the sum of the squares of the lengths
of the four sides.
1.2.12. The result is trivially true if $q = 0$, so we may assume that $q \neq 0$. Following the hint, let $f(\lambda) = |p - \lambda q|^2 = |p|^2 - (2p \cdot q)\lambda + \lambda^2|q|^2$ for any scalar $\lambda$. Now, $f$ is a quadratic function of $\lambda$ and $f(\lambda) \geq 0$ for all $\lambda$. Hence the graph of $f$ is a parabola opening upwards, and the minimum value occurs at the vertex, where $f'(\lambda) = 2|q|^2\lambda - 2(p \cdot q) = 0$. Thus $\lambda_0 = \frac{p \cdot q}{|q|^2}$ yields the minimum value of $f$, which is

$$f(\lambda_0) = |p|^2 - \frac{(p \cdot q)^2}{|q|^2}.$$

Cauchy's inequality follows from the fact that $f(\lambda_0) \geq 0$. Finally, equality occurs if and only if $q = 0$ or $|p - \lambda_0 q| = 0$, that is, if and only if $q = 0$ or $p = \lambda q$ for some scalar $\lambda$, and thus, if and only if $p$ and $q$ are parallel.
1.2.14.
a. Substituting r = p − q into the Triangle Inequality, |r + q| ≤ |r| + |q|
yields the inequality |p| − |q| ≤ |p − q|. Similarly, |q| − |p| ≤ |q − p| =
|p − q|. Combining these two inequalities yields the desired result.
b. Equality occurs when the vectors p and q are parallel and point in the same
direction; that is, p = λq, with λ ≥ 0, or q = λp, with λ ≥ 0. This follows
from the fact that equality occurs in the Triangle Inequality if and only if the
vectors are parallel and point in the same direction.
1.2.16.
a. From Figure 1.12 we see that

$$\cos \alpha_3 = \frac{p_3}{|p|}.$$

On the other hand,

$$u_p \cdot k = \frac{p \cdot k}{|p|} = \frac{(p_1, p_2, p_3) \cdot (0, 0, 1)}{|p|} = \frac{p_3}{|p|},$$

and so $u_p \cdot k = \cos \alpha_3$. The other two relations can be obtained similarly.
b. By expressing each of the cosines as in the first equation above, we obtain

$$\cos^2 \alpha_1 + \cos^2 \alpha_2 + \cos^2 \alpha_3 = \frac{p_1^2}{|p|^2} + \frac{p_2^2}{|p|^2} + \frac{p_3^2}{|p|^2} = \frac{p_1^2 + p_2^2 + p_3^2}{|p|^2} = 1.$$

This is the three-dimensional analog of the formula $\sin^2 \alpha + \cos^2 \alpha = 1$.
c.

$$p = (p_1, p_2, p_3) = (|p| \cos \alpha_1, |p| \cos \alpha_2, |p| \cos \alpha_3) = |p|(\cos \alpha_1, \cos \alpha_2, \cos \alpha_3).$$
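A numeric spot-check of part b — the squared direction cosines of any nonzero vector sum to 1 (the sample vector is an arbitrary choice; numpy assumed):

```python
import numpy as np

p = np.array([3.0, -2.0, 6.0])      # an arbitrary sample vector, |p| = 7
u_p = p / np.linalg.norm(p)         # unit vector in the direction of p
cosines = u_p                       # cos(alpha_i) = u_p . e_i, i.e. the components of u_p
total = float(np.sum(cosines ** 2))
```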
(8, 3, −4) and v = (0, 1, 9) − (1, −2, 4) = (−1, 3, 5). Thus we obtain
p = (1, −2, 4) + s(8, 3, −4) + t(−1, 3, 5) as a parametric vector equation of
the plane. (As a check, notice that s = 1, t = 0 gives p = a, and s = 0, t = 1
gives p = b.)
1.3.18. The vectors u = p1 −0 = (1, 6, −3) and v = p2 −0 = (7, −2, 5)
are two nonparallel vectors lying in the required plane. Since a vector para-
metric equation of the plane is p = s(1, 6, −3) + t(7, −2, 5), the corre-
sponding scalar form of the equation is x = s + 7t, y = 6s − 2t, z =
−3s + 5t. Eliminating the parameters results in the nonparametric equation
12x − 13y − 22z = 0.
1.3.20. We can proceed as in Example 1.3.3 in the text and decompose the vector equation into the scalar equations

$$5 - 2s = 3 + 2t, \qquad 1 + s = -2 + t, \qquad 1 + 6s = 1 - 3t.$$
Solving this system yields s = −1 and t = 2. Hence the lines intersect at the
point (7, 0, −5).
1.3.22. In terms of scalar components, the equation of the given line is
x = 5 − 2s, y = 1 + s, z = 1 + 6s.
Substituting these equations into that of the given plane results in 7(5 − 2s) +
(1+s)+2(1+6s) = 8; the solution is s = 30. Hence the point of intersection
is (−55, 31, 181).
1.3.24. Rewriting the equation of the given line in component form and
replacing s by r yield
x = 3 − 3r, y = −2 + 5r, z = 6 + 7r
and rewriting the equation of the given plane in component form gives
x = 4 − 2s + t, y = −2 + s + 3t, z = 1 + 3s + 2t.
Combining those sets of equations and simplifying results in the system

$$3r - 2s + t = -1, \qquad 5r - s - 3t = 0, \qquad 7r - 3s - 2t = -5.$$
1.3.28. First find the plane through 0 that is parallel to the direction vec-
tors of the given lines, as in Example 1.3.6: p = s(−3, 5, 7) + t(−2, 1, 6),
or, in scalar parametric form, x = −3s − 2t, y = 5s + t, z = 7s + 6t.
Eliminating the parameters yields the equation of the plane in scalar form:
23x + 4y + 7z = 0, from which we obtain a normal vector, n = (23, 4, 7), to
the plane, and hence also to the two given lines.
The point $P(3, -2, 6)$ lies on the first line and $Q(5, 1, 1)$ lies on the second one. Thus $\overrightarrow{PQ} = (2, 3, -5)$, and as in Example 1.3.6,

$$D = \frac{|\overrightarrow{PQ} \cdot n|}{|n|} = \frac{|2 \cdot 23 + 3 \cdot 4 + (-5) \cdot 7|}{\sqrt{529 + 16 + 49}} = \frac{23}{\sqrt{594}}.$$
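The normal vector and the distance can be cross-checked numerically (numpy assumed):

```python
import numpy as np

u1 = np.array([-3.0, 5.0, 7.0])   # direction of the first line
u2 = np.array([-2.0, 1.0, 6.0])   # direction of the second line
P  = np.array([3.0, -2.0, 6.0])   # point on the first line
Q  = np.array([5.0, 1.0, 1.0])    # point on the second line

n = np.cross(u1, u2)              # normal to both lines: (23, 4, 7)
D = abs((Q - P) @ n) / np.linalg.norm(n)
```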
1.3.30. Following the hint, let $Q$ be the point $(3, -2, 6)$ on the line $L$, so $\overrightarrow{QP_0} = (0, -6, 6)$. The direction of the line $L$ is $u = (-3, 5, 7)$, and thus the component of $\overrightarrow{QP_0}$ parallel to $L$ is given by

$$\left(\overrightarrow{QP_0} \cdot \frac{u}{|u|}\right)\frac{u}{|u|} = \frac{12}{83}(-3, 5, 7).$$

Then the component of $\overrightarrow{QP_0}$ orthogonal to $L$ is

$$(0, -6, 6) - \frac{12}{83}(-3, 5, 7) = \frac{1}{83}(36, -558, 414),$$

and therefore the distance from $P_0$ to $L$ is

$$d = \frac{1}{83}\left|(36, -558, 414)\right| \approx 8.38.$$
2.1.6.

$$\begin{pmatrix} 1 & 0 & -1 & 1\\ -2 & 3 & -1 & 0\\ -6 & 6 & 0 & -2 \end{pmatrix} \to \begin{pmatrix} 1 & 0 & -1 & 1\\ 0 & 3 & -3 & 2\\ 0 & 6 & -6 & 4 \end{pmatrix} \to \begin{pmatrix} 1 & 0 & -1 & 1\\ 0 & 3 & -3 & 2\\ 0 & 0 & 0 & 0 \end{pmatrix}.$$

Column 3 is free. Thus we set $x_3 = t$. Then the second row of the reduced matrix gives $3x_2 - 3t = 2$, and so $x_2 = t + \frac{2}{3}$, and from the first row we get $x_1 = t + 1$. Hence

$$x = t\begin{pmatrix}1\\1\\1\end{pmatrix} + \begin{pmatrix}1\\2/3\\0\end{pmatrix}.$$
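A quick check that this parametric solution satisfies the original system for every $t$ (numpy assumed):

```python
import numpy as np

A = np.array([[ 1.0, 0.0, -1.0],
              [-2.0, 3.0, -1.0],
              [-6.0, 6.0,  0.0]])
b = np.array([1.0, 0.0, -2.0])

def x(t):
    # the one-parameter family of solutions found above
    return np.array([1.0, 2.0 / 3.0, 0.0]) + t * np.array([1.0, 1.0, 1.0])

ok = all(np.allclose(A @ x(t), b) for t in (-2.0, 0.0, 3.5))
```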
2.1.8.

$$\begin{pmatrix} 3 & -6 & -1 & 1 & 12\\ -1 & 2 & 2 & 3 & 1\\ 6 & -8 & -3 & -2 & 9 \end{pmatrix} \xrightarrow{r_1 \leftrightarrow r_2} \begin{pmatrix} -1 & 2 & 2 & 3 & 1\\ 3 & -6 & -1 & 1 & 12\\ 6 & -8 & -3 & -2 & 9 \end{pmatrix} \to \begin{pmatrix} -1 & 2 & 2 & 3 & 1\\ 0 & 0 & 5 & 10 & 15\\ 0 & 4 & 9 & 16 & 15 \end{pmatrix} \xrightarrow{r_2 \leftrightarrow r_3} \begin{pmatrix} -1 & 2 & 2 & 3 & 1\\ 0 & 4 & 9 & 16 & 15\\ 0 & 0 & 5 & 10 & 15 \end{pmatrix}.$$

The last matrix is in echelon form and the forward elimination is finished. The fourth column has no pivot, and so $x_4$ is free and we set $x_4 = t$. Then the last row corresponds to the equation $5x_3 + 10t = 15$, which gives $x_3 = 3 - 2t$. The second row yields $4x_2 + 9(3 - 2t) + 16t = 15$, whence $x_2 = \frac{1}{2}t - 3$. Finally, the first row gives $x_1 = -1 + 2\left(\frac{1}{2}t - 3\right) + 2(3 - 2t) + 3t = -1$. In vector form the solution is

$$x = \begin{pmatrix}-1\\-3\\3\\0\end{pmatrix} + \begin{pmatrix}0\\1/2\\-2\\1\end{pmatrix} t.$$
2.1.10.

$$\begin{pmatrix} 2 & 4 & 1 & 7\\ 0 & 1 & 3 & 7\\ 3 & 3 & -1 & 9\\ 1 & 2 & 3 & 11 \end{pmatrix} \xrightarrow{r_1 \leftrightarrow r_4} \begin{pmatrix} 1 & 2 & 3 & 11\\ 0 & 1 & 3 & 7\\ 3 & 3 & -1 & 9\\ 2 & 4 & 1 & 7 \end{pmatrix} \to \begin{pmatrix} 1 & 2 & 3 & 11\\ 0 & 1 & 3 & 7\\ 0 & -3 & -10 & -24\\ 0 & 0 & -5 & -15 \end{pmatrix} \to \begin{pmatrix} 1 & 2 & 3 & 11\\ 0 & 1 & 3 & 7\\ 0 & 0 & -1 & -3\\ 0 & 0 & -5 & -15 \end{pmatrix} \to \begin{pmatrix} 1 & 2 & 3 & 11\\ 0 & 1 & 3 & 7\\ 0 & 0 & -1 & -3\\ 0 & 0 & 0 & 0 \end{pmatrix}.$$

Thus $x_3 = 3$, $x_2 = -2$, and $x_1 = 6$, and in vector form,

$$x = \begin{pmatrix}6\\-2\\3\end{pmatrix}.$$
2.1.12.

$$\begin{pmatrix} 3 & -6 & -1 & 1 & 7\\ -1 & 2 & 2 & 3 & 1\\ 4 & -8 & -3 & -2 & 6 \end{pmatrix} \xrightarrow{r_1 \leftrightarrow r_2} \begin{pmatrix} -1 & 2 & 2 & 3 & 1\\ 3 & -6 & -1 & 1 & 7\\ 4 & -8 & -3 & -2 & 6 \end{pmatrix} \to \begin{pmatrix} -1 & 2 & 2 & 3 & 1\\ 0 & 0 & 5 & 10 & 10\\ 0 & 0 & 5 & 10 & 10 \end{pmatrix} \to \begin{pmatrix} -1 & 2 & 2 & 3 & 1\\ 0 & 0 & 5 & 10 & 10\\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}.$$

The last matrix is in echelon form and the forward elimination is finished. The second and fourth columns have no pivot, and so we set $x_2 = s$ and $x_4 = t$. Then the second row yields $5x_3 + 10t = 10$, whence $x_3 = 2 - 2t$. Finally, the first row gives $x_1 = -1 + 2s + 2(2 - 2t) + 3t = 2s - t + 3$. Thus, in vector form the solution is

$$x = \begin{pmatrix}3\\0\\2\\0\end{pmatrix} + \begin{pmatrix}2\\1\\0\\0\end{pmatrix} s + \begin{pmatrix}-1\\0\\-2\\1\end{pmatrix} t.$$
2.1.14. In the second step of the row-reduction, two elementary row operations are performed simultaneously, so that $r_2$ and $r_3$ are both replaced by essentially the same expression, apart from a minus sign. Geometrically this action means that the line of intersection of two planes is replaced by a plane containing that line, rather than by two new planes intersecting in the same line. Indeed, if we substitute $s = \frac{14}{5} - 2t$ in the wrong solution, we obtain the correct one. Algebraically the mistake is that the second step is not reversible: it enlarges the solution set, and so the new augmented matrix is not row-equivalent to the original one.
2.1.16. The three planes have equations

$$x + 2y = 2, \qquad 3x + 6y - z = 8, \qquad x + 2y + z = 4.$$

Hence all three lines of intersection of pairs of the planes have the same direction vector, $(-2, 1, 0)$, and thus are parallel.
2.2.2.

$$\begin{pmatrix} p_1 & * & *\\ 0 & p_2 & *\\ 0 & 0 & p_3 \end{pmatrix},\quad \begin{pmatrix} p_1 & * & *\\ 0 & p_2 & *\\ 0 & 0 & 0 \end{pmatrix},\quad \begin{pmatrix} p_1 & * & *\\ 0 & 0 & p_2\\ 0 & 0 & 0 \end{pmatrix},\quad \begin{pmatrix} p_1 & * & *\\ 0 & 0 & 0\\ 0 & 0 & 0 \end{pmatrix},$$

$$\begin{pmatrix} 0 & p_1 & *\\ 0 & 0 & p_2\\ 0 & 0 & 0 \end{pmatrix},\quad \begin{pmatrix} 0 & p_1 & *\\ 0 & 0 & 0\\ 0 & 0 & 0 \end{pmatrix},\quad \begin{pmatrix} 0 & 0 & p_1\\ 0 & 0 & 0\\ 0 & 0 & 0 \end{pmatrix},\quad \begin{pmatrix} 0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0 \end{pmatrix},$$

$$\begin{pmatrix} 0 & 1 & 0\\ 0 & 0 & 1\\ 0 & 0 & 0 \end{pmatrix},\quad \begin{pmatrix} 0 & 1 & *\\ 0 & 0 & 0\\ 0 & 0 & 0 \end{pmatrix},\quad \begin{pmatrix} 0 & 0 & 1\\ 0 & 0 & 0\\ 0 & 0 & 0 \end{pmatrix},\quad \begin{pmatrix} 0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0 \end{pmatrix}.$$
$$v = s\begin{pmatrix}-1\\1\\0\\0\end{pmatrix} + t\begin{pmatrix}-1/2\\0\\-1\\1\end{pmatrix}.$$

$$x = \begin{pmatrix}1/2\\0\\-1\\0\end{pmatrix} + s\begin{pmatrix}-1\\1\\0\\0\end{pmatrix} + t\begin{pmatrix}-1/2\\0\\-1\\1\end{pmatrix}$$
or as

$$x = \begin{pmatrix}-1/2\\1\\-1\\0\end{pmatrix} + s\begin{pmatrix}-1\\1\\0\\0\end{pmatrix} + t\begin{pmatrix}-1/2\\0\\-1\\1\end{pmatrix}.$$

The two equations represent the same plane, since replacing $s$ by $s + 1$ in the first of these results in the second one.
2.4.2. By Equation (2.69) in Definition 2.4.2, $(cA)x = c(Ax)$ for all $x \in \mathbb{R}^n$. Now, $(Ax)_i = \sum_j a_{ij}x_j$ and $[(cA)x]_i = \sum_j (cA)_{ij}x_j$. Thus, from Equation (2.69) we get

$$\sum_j (cA)_{ij}x_j = c\sum_j a_{ij}x_j = \sum_j (ca_{ij})x_j.$$

Since this equation must hold for all $x_j$, the coefficients of the $x_j$ must be the same on both sides (just choose one $x_j = 1$ and the rest $0$, for $j = 1, 2, \ldots, n$), that is, we must have

$$(cA)_{ij} = ca_{ij} \quad \text{for all } i, j.$$
2.4.4.

$$AB = \begin{pmatrix} 2 & 3 & 5\\ 1 & -2 & 3 \end{pmatrix}\begin{pmatrix} 3 & -4\\ 2 & 2\\ 1 & -3 \end{pmatrix} = \begin{pmatrix} 17 & -17\\ 2 & -17 \end{pmatrix},$$

$$BA = \begin{pmatrix} 3 & -4\\ 2 & 2\\ 1 & -3 \end{pmatrix}\begin{pmatrix} 2 & 3 & 5\\ 1 & -2 & 3 \end{pmatrix} = \begin{pmatrix} 2 & 17 & 3\\ 6 & 2 & 16\\ -1 & 9 & -4 \end{pmatrix}.$$
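Both products can be confirmed directly (numpy assumed):

```python
import numpy as np

A = np.array([[2, 3, 5],
              [1, -2, 3]])
B = np.array([[3, -4],
              [2, 2],
              [1, -3]])

AB = A @ B   # 2x2 product
BA = B @ A   # 3x3 product
```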
2.4.6.

$$BA = \begin{pmatrix} 3\\ 2\\ 1 \end{pmatrix}\begin{pmatrix} 1 & -2 & 3 & -4 \end{pmatrix} = \begin{pmatrix} 3 & -6 & 9 & -12\\ 2 & -4 & 6 & -8\\ 1 & -2 & 3 & -4 \end{pmatrix},$$

and $AB$ does not exist.
2.4.8. $AB$ is undefined and

$$BA = \begin{pmatrix} 3 & -4\\ 2 & 2\\ 1 & -3\\ -2 & 5 \end{pmatrix}\begin{pmatrix} 2 & 3 & 5\\ 1 & -2 & 3 \end{pmatrix} = \begin{pmatrix} 2 & 17 & 3\\ 6 & 2 & 16\\ -1 & 9 & -4\\ 1 & -16 & 5 \end{pmatrix}.$$
2.4.10.

$$R_\alpha R_\beta = \begin{pmatrix} \cos\alpha & -\sin\alpha\\ \sin\alpha & \cos\alpha \end{pmatrix}\begin{pmatrix} \cos\beta & -\sin\beta\\ \sin\beta & \cos\beta \end{pmatrix} = \begin{pmatrix} \cos\alpha\cos\beta - \sin\alpha\sin\beta & -\cos\alpha\sin\beta - \cos\beta\sin\alpha\\ \cos\alpha\sin\beta + \cos\beta\sin\alpha & \cos\alpha\cos\beta - \sin\alpha\sin\beta \end{pmatrix} = \begin{pmatrix} \cos(\alpha+\beta) & -\sin(\alpha+\beta)\\ \sin(\alpha+\beta) & \cos(\alpha+\beta) \end{pmatrix} = R_{\alpha+\beta}.$$
Similarly,
Then

$$A^2 = \begin{pmatrix} 0 & 0 & 1\\ 0 & 0 & 0\\ 0 & 0 & 0 \end{pmatrix} \quad\text{and}\quad A^3 = \begin{pmatrix} 0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0 \end{pmatrix}.$$
2.4.18.

where

$$X_{11} = \begin{pmatrix} 1 & -2\\ 3 & 4 \end{pmatrix}\begin{pmatrix} 1 & -2\\ 2 & 0 \end{pmatrix} + \begin{pmatrix} 1 & 0\\ 0 & 1 \end{pmatrix}\begin{pmatrix} 0 & 0\\ 0 & 0 \end{pmatrix} = \begin{pmatrix} -3 & -2\\ 11 & -6 \end{pmatrix},$$

$$X_{12} = \begin{pmatrix} 1 & -2\\ 3 & 4 \end{pmatrix}\begin{pmatrix} 1 & 0\\ -3 & 1 \end{pmatrix} + \begin{pmatrix} 1 & 0\\ 0 & 1 \end{pmatrix}\begin{pmatrix} 2 & 3\\ 7 & 4 \end{pmatrix} = \begin{pmatrix} 9 & 1\\ -2 & 8 \end{pmatrix},$$

$$X_{21} = \begin{pmatrix} -1 & 0\\ 0 & -1 \end{pmatrix}\begin{pmatrix} 1 & -2\\ 2 & 0 \end{pmatrix} + \begin{pmatrix} 0 & 0\\ 0 & 0 \end{pmatrix}\begin{pmatrix} 0 & 0\\ 0 & 0 \end{pmatrix} = \begin{pmatrix} -1 & 2\\ -2 & 0 \end{pmatrix},$$

and

$$X_{22} = \begin{pmatrix} -1 & 0\\ 0 & -1 \end{pmatrix}\begin{pmatrix} 1 & 0\\ -3 & 1 \end{pmatrix} + \begin{pmatrix} 0 & 0\\ 0 & 0 \end{pmatrix}\begin{pmatrix} 2 & 3\\ 7 & 4 \end{pmatrix} = \begin{pmatrix} -1 & 0\\ 3 & -1 \end{pmatrix}.$$

Thus

$$\begin{pmatrix} 1 & -2 & 1 & 0\\ 3 & 4 & 0 & 1\\ -1 & 0 & 0 & 0\\ 0 & -1 & 0 & 0 \end{pmatrix}\begin{pmatrix} 1 & -2 & 1 & 0\\ 2 & 0 & -3 & 1\\ 0 & 0 & 2 & 3\\ 0 & 0 & 7 & 4 \end{pmatrix} = \begin{pmatrix} -3 & -2 & 9 & 1\\ 11 & -6 & -2 & 8\\ -1 & 2 & -1 & 0\\ -2 & 0 & 3 & -1 \end{pmatrix}.$$
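Blockwise multiplication agrees with ordinary multiplication of the assembled matrices; a check using numpy's `block` (numpy assumed):

```python
import numpy as np

# the 2x2 blocks of the two factors in Exercise 2.4.18
A11, A12 = np.array([[1, -2], [3, 4]]), np.eye(2, dtype=int)
A21, A22 = -np.eye(2, dtype=int), np.zeros((2, 2), dtype=int)
B11, B12 = np.array([[1, -2], [2, 0]]), np.array([[1, 0], [-3, 1]])
B21, B22 = np.zeros((2, 2), dtype=int), np.array([[2, 3], [7, 4]])

# blockwise products
X11 = A11 @ B11 + A12 @ B21
X12 = A11 @ B12 + A12 @ B22
X21 = A21 @ B11 + A22 @ B21
X22 = A21 @ B12 + A22 @ B22

# the same product computed on the assembled 4x4 matrices
full = np.block([[A11, A12], [A21, A22]]) @ np.block([[B11, B12], [B21, B22]])
```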
2.5.2.

$$A^{-1} = \frac{1}{14}\begin{pmatrix} 4 & -2\\ -3 & 5 \end{pmatrix}.$$
2.5.4.

$$A^{-1} = \frac{1}{10}\begin{pmatrix} -2 & 6 & -2\\ -6 & 8 & -6\\ -13 & 24 & -18 \end{pmatrix}.$$
2.5.6.

$$A^{-1} = \frac{1}{2}\begin{pmatrix} 2 & 1 & 1 & 0\\ 0 & 1 & 1 & 0\\ 0 & -1 & 1 & 0\\ -2 & -2 & 0 & 2 \end{pmatrix}.$$
2.5.8. a) Augment the given matrix by the $2 \times 2$ unit matrix, and reduce it as follows:

$$\begin{pmatrix} 2 & 0 & 4 & 1 & 0\\ 4 & -1 & 1 & 0 & 1 \end{pmatrix} \to \begin{pmatrix} 2 & 0 & 4 & 1 & 0\\ 0 & -1 & -7 & -2 & 1 \end{pmatrix}.$$

We may write the equations corresponding to the reduced matrix as we did in Example 2.5.1:

$$\begin{pmatrix} 2 & 0 & 4\\ 0 & -1 & -7 \end{pmatrix}\begin{pmatrix} x_{11} & x_{12}\\ x_{21} & x_{22}\\ x_{31} & x_{32} \end{pmatrix} = \begin{pmatrix} 1 & 0\\ -2 & 1 \end{pmatrix}.$$

This matrix equation corresponds to a system of two scalar equations for the unknown $x$'s in the first column and another system of two scalar equations for those in the second column. The unknowns $x_{31}$ and $x_{32}$ are free. Choosing $x_{31} = s$ and $x_{32} = t$, we get the systems of equations

$$2x_{11} + 4s = 1, \quad -x_{21} - 7s = -2, \qquad 2x_{12} + 4t = 0, \quad -x_{22} - 7t = 1,$$

from which $x_{11} = \frac{1}{2} - 2s$, $x_{21} = 2 - 7s$, $x_{12} = -2t$, and $x_{22} = -1 - 7t$. Thus

$$X = \begin{pmatrix} \frac{1}{2} - 2s & -2t\\ 2 - 7s & -1 - 7t\\ s & t \end{pmatrix}$$

is a right inverse of $A$ for any $s, t$, and every right inverse is of this form.
b) From

$$\begin{pmatrix} y_{11} & y_{12}\\ y_{21} & y_{22}\\ y_{31} & y_{32} \end{pmatrix}\begin{pmatrix} 2 & 0 & 4\\ 4 & -1 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{pmatrix}$$

we obtain the equations

$$2y_{11} + 4y_{12} = 1, \qquad -y_{12} = 0, \qquad 4y_{11} + y_{12} = 0.$$

The augmented matrix of this system can be reduced as

$$\begin{pmatrix} 2 & 4 & 1\\ 0 & -1 & 0\\ 4 & 1 & 0 \end{pmatrix} \to \begin{pmatrix} 2 & 4 & 1\\ 0 & -1 & 0\\ 0 & -7 & -2 \end{pmatrix} \to \begin{pmatrix} 2 & 4 & 1\\ 0 & -1 & 0\\ 0 & 0 & -2 \end{pmatrix}.$$

The last row corresponds to the equation $0 = -2$, so this system is inconsistent and $A$ has no left inverse.
4. A has a (unique) two-sided inverse.
2.5.12. Writing $a_i$ for the $i$th row of any $3 \times n$ matrix $A$, we can evaluate the product $EA$ as

$$EA = \begin{pmatrix} 1 & 0 & 0\\ c & 1 & 0\\ 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} a_1\\ a_2\\ a_3 \end{pmatrix} = \begin{pmatrix} a_1\\ ca_1 + a_2\\ a_3 \end{pmatrix}.$$

Thus

$$E^{-1} = \begin{pmatrix} 1 & 0 & 0\\ -c & 1 & 0\\ 0 & 0 & 1 \end{pmatrix},$$

and

$$E^{-1}A = \begin{pmatrix} 1 & 0 & 0\\ -c & 1 & 0\\ 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} a_1\\ a_2\\ a_3 \end{pmatrix} = \begin{pmatrix} a_1\\ -ca_1 + a_2\\ a_3 \end{pmatrix}.$$
But then $P$ also produces the desired row interchange for any $3 \times n$ matrix $A$:

$$PA = \begin{pmatrix} 0 & 0 & 1\\ 0 & 1 & 0\\ 1 & 0 & 0 \end{pmatrix}\begin{pmatrix} a_1\\ a_2\\ a_3 \end{pmatrix} = \begin{pmatrix} a_3\\ a_2\\ a_1 \end{pmatrix}.$$

The inverse matrix $P^{-1}$ can be obtained by row reduction:

$$\begin{pmatrix} 0 & 0 & 1 & 1 & 0 & 0\\ 0 & 1 & 0 & 0 & 1 & 0\\ 1 & 0 & 0 & 0 & 0 & 1 \end{pmatrix} \xrightarrow{r_3 \leftrightarrow r_1} \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 1\\ 0 & 1 & 0 & 0 & 1 & 0\\ 0 & 0 & 1 & 1 & 0 & 0 \end{pmatrix}.$$

Thus $P^{-1} = P$. This result could also have been obtained without any computation, by observing that an application of $P$ to itself would interchange the first and third rows of $P$, and so restore $I$.
2.5.16. Applying the definition $A^{-n} = (A^{-1})^n$, and using the properties of positive integer powers of a matrix from Exercise 2.4.14, we obtain

$$A^n(A^{-1})^n = (A^{n-1}A)(A^{-1}(A^{-1})^{n-1}) = A^{n-1}(AA^{-1})(A^{-1})^{n-1} = A^{n-1}I(A^{-1})^{n-1} = A^{n-1}(A^{-1})^{n-1} = \cdots = AA^{-1} = I.$$

Thus $A^{-n} = (A^{-1})^n$ is the inverse of $A^n$, that is, $A^{-n} = (A^n)^{-1}$. (Remark: this derivation can be made more rigorous through the use of mathematical induction.)
Again applying the definition A−n = (A−1 )n , together with the properties
of positive integer powers of a matrix from Exercise 2.4.14, we have
case, cx ∈ V and so cx ∈ U ∪ V. Furthermore, U ∪ V is closed under addition
only if V ⊂ U or U ⊂ V. For, otherwise, assume that neither V ⊂ U nor
U ⊂ V holds, that is, that there exist u ∈ U, u ∉ V and v ∈ V, v ∉ U. Then u
and v both belong to U ∪ V, but u + v ∉ U ∪ V, because, by the closure of U,
u + v ∈ U would imply v ∈ U, and, by the closure of V, u + v ∈ V would
imply u ∈ V. Thus, U ∪ V is a subspace of X only if V ⊂ U or U ⊂ V. It is
easy to see that the converse is also true. Hence, U ∪ V is a subspace of X if
and only if V ⊂ U or U ⊂ V .
3.2.12. U = {x|x ∈ Rn , a · x = 0} is a subspace of Rn :
a) 0 ∈ U , and so U is nonempty.
b) U is closed under addition: Let u, v ∈ U . Then a · u = 0 and a · v = 0.
Hence a · (u + v) = 0, which shows that u + v ∈ U .
c) U is closed under multiplication by scalars: Let u ∈ U and c ∈ R. Then
a · u = 0, and therefore c (a · u) = a · (cu) = 0, which shows that cu ∈ U.
Alternatively, a · x = aT x, and with A = aT , Theorem 3.2.2 proves the
statement.
3.3.2. Suppose c has a decomposition

$$c = \sum_{i=1}^n s_i a_i,$$

and suppose b has two different decompositions

$$b = \sum_{i=1}^n t_i a_i \quad\text{and}\quad b = \sum_{i=1}^n u_i a_i.$$

Then

$$c = c + b - b = \sum_{i=1}^n s_i a_i + \sum_{i=1}^n t_i a_i - \sum_{i=1}^n u_i a_i = \sum_{i=1}^n (s_i + t_i - u_i)a_i.$$

Since the two decompositions of b were different, for some i we must have $s_i + t_i - u_i \neq s_i$. Thus the last expression provides a new decomposition of c.
3.3.4. We have to solve

$$\begin{pmatrix} 4 & 4 & 1\\ 2 & -3 & 3\\ 1 & 2 & -1 \end{pmatrix}\begin{pmatrix} s_1\\ s_2\\ s_3 \end{pmatrix} = \begin{pmatrix} 7\\ 16\\ -3 \end{pmatrix}.$$

Since this system is the same as that of Exercise 3.3.3 without the second row, which did not contribute anything to the problem, it must have the same solution. Thus $b = 2a_1 - a_2 + 3a_3$.
3.3.6. We have to solve

$$\begin{pmatrix} 4 & 4 & 0\\ 2 & -3 & 5\\ 1 & 2 & -1 \end{pmatrix}\begin{pmatrix} s_1\\ s_2\\ s_3 \end{pmatrix} = \begin{pmatrix} 4\\ 7\\ 0 \end{pmatrix}.$$

The system

$$\begin{pmatrix} 4 & 4 & 1\\ 2 & -3 & 3\\ 1 & 2 & -1 \end{pmatrix}\begin{pmatrix} s_1\\ s_2\\ s_3 \end{pmatrix} = \begin{pmatrix} 0\\ 0\\ 0 \end{pmatrix}$$

has only the trivial solution, as can be seen by row reduction:

$$\begin{pmatrix} 4 & 4 & 1 & 0\\ 2 & -3 & 3 & 0\\ 1 & 2 & -1 & 0 \end{pmatrix} \to \begin{pmatrix} 1 & 2 & -1 & 0\\ 2 & -3 & 3 & 0\\ 4 & 4 & 1 & 0 \end{pmatrix} \to \begin{pmatrix} 1 & 2 & -1 & 0\\ 0 & -7 & 5 & 0\\ 0 & -4 & 5 & 0 \end{pmatrix} \to \begin{pmatrix} 1 & 2 & -1 & 0\\ 0 & -7 & 5 & 0\\ 0 & 0 & 15/7 & 0 \end{pmatrix}.$$
3.4.4. Row-reduce A as

$$\begin{pmatrix} 1 & 1 & 1 & 2 & -1\\ 2 & 2 & 2 & 4 & -2\\ 3 & 3 & 0 & 4 & 4 \end{pmatrix} \to \begin{pmatrix} 1 & 1 & 1 & 2 & -1\\ 0 & 0 & -3 & -2 & 7\\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}.$$

Because the pivots are in the first and third columns of the reduced matrix, the corresponding columns of A constitute a basis for Col(A). Thus a basis for Col(A) is the set

$$\left\{\begin{pmatrix}1\\2\\3\end{pmatrix}, \begin{pmatrix}1\\2\\0\end{pmatrix}\right\},$$

and a basis for Row(A) is

$$\left\{\begin{pmatrix}1\\1\\1\\2\\-1\end{pmatrix}, \begin{pmatrix}0\\0\\-3\\-2\\7\end{pmatrix}\right\}.$$
These are the basis vectors for Null(A). Thus, to get the desired decomposition of $x = (1, 1, 1, 1, 1)^T$, we must solve $x = sc_1 + tc_2 + uc_3 + vc_4 + wb_1$. By row reduction:

$$\begin{pmatrix} -1 & -1 & -2 & 1 & 1 & 1\\ 1 & 0 & 0 & 0 & 1 & 1\\ 0 & 1 & 0 & 0 & 1 & 1\\ 0 & 0 & 1 & 0 & 2 & 1\\ 0 & 0 & 0 & 1 & -1 & 1 \end{pmatrix} \to \begin{pmatrix} 1 & 0 & 0 & 0 & 1 & 1\\ 0 & 1 & 0 & 0 & 1 & 1\\ 0 & 0 & 1 & 0 & 2 & 1\\ 0 & 0 & 0 & 1 & -1 & 1\\ 0 & 0 & 0 & 0 & 8 & 4 \end{pmatrix}.$$
Hence $w = 1/2$, $v = 3/2$, $u = 0$, $t = 1/2$, $s = 1/2$, and so

$$x_0 = \frac{1}{2}c_1 + \frac{1}{2}c_2 + \frac{3}{2}c_4 = \frac{1}{2}(1, 1, 1, 0, 3)^T$$

and

$$x_R = \frac{1}{2}(1, 1, 1, 2, -1)^T.$$
2
Alternatively, we may solve $x = sc_1 + tc_2 + uc_3 + vc_4 + wb_1$ for $w$ by left-multiplying it with $b_1^T$ and obtain, because of the orthogonality of the row space to the nullspace, $b_1^T x = wb_1^T b_1$, which becomes $4 = 8w$. Thus again $w = 1/2$ and $x_R = \frac{1}{2}(1, 1, 1, 2, -1)^T$. From here $x_0 = x - x_R = \frac{1}{2}(1, 1, 1, 0, 3)^T$, as before.
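The shortcut in the last paragraph is easy to verify numerically (numpy assumed):

```python
import numpy as np

x  = np.array([1.0, 1.0, 1.0, 1.0, 1.0])
b1 = np.array([1.0, 1.0, 1.0, 2.0, -1.0])  # the row-space basis vector

w  = (b1 @ x) / (b1 @ b1)   # 4/8 = 1/2
xR = w * b1                 # component in the row space
x0 = x - xR                 # component in the nullspace
```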
3.5.4. a) $\dim(\mathrm{Row}(A)) = \dim(\mathrm{Col}(A)) = 3$, and, by Corollary 3.5.2 and Theorem 3.5.4, $\dim(\mathrm{Null}(A)) = 2$ and $\dim(\text{Left-null}(A)) = 0$.
b) Since A is an echelon matrix, its transposed rows form a basis for Row(A). To find a basis for Null(A), solve $Ax = 0$ by setting $x_3 = t$ and $x_5 = u$. Then we get $x_4 = -u$, $x_2 = 0$, and $x_1 = -2x_4 + x_5 = 3u$. So any vector of Null(A) can be written as $x_0 = t_1c_1 + t_2c_2$, where

$$c_1 = \begin{pmatrix}0\\0\\1\\0\\0\end{pmatrix}, \qquad c_2 = \begin{pmatrix}3\\0\\0\\-1\\1\end{pmatrix}.$$
These are the basis vectors for Null(A). Thus, to get the desired decomposition of $x = (6, 2, 1, 4, 8)^T$, we solve $(6, 2, 1, 4, 8)^T = t_1c_1 + t_2c_2 + s_1b_1 + s_2b_2 + s_3b_3$, where the $b_i$ are the transposed rows of A. We solve by row reduction:

$$\begin{pmatrix} 0 & 3 & 1 & 0 & 0 & 6\\ 0 & 0 & 0 & 2 & 0 & 2\\ 1 & 0 & 0 & 0 & 0 & 1\\ 0 & -1 & 2 & 0 & 2 & 4\\ 0 & 1 & -1 & 0 & 2 & 8 \end{pmatrix} \to \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 1\\ 0 & -1 & 2 & 0 & 2 & 4\\ 0 & 1 & -1 & 0 & 2 & 8\\ 0 & 0 & 0 & 2 & 0 & 2\\ 0 & 3 & 1 & 0 & 0 & 6 \end{pmatrix} \to \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 1\\ 0 & 1 & -1 & 0 & 2 & 8\\ 0 & 0 & 1 & 0 & 4 & 12\\ 0 & 0 & 0 & 2 & 0 & 2\\ 0 & 0 & 7 & 0 & 6 & 18 \end{pmatrix} \to \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 1\\ 0 & 1 & -1 & 0 & 2 & 8\\ 0 & 0 & 1 & 0 & 4 & 12\\ 0 & 0 & 0 & 2 & 0 & 2\\ 0 & 0 & 0 & 0 & -22 & -66 \end{pmatrix}.$$
Hence $s_3 = 3$, $s_2 = 1$, $s_1 = 0$, $t_2 = 2$, $t_1 = 1$, and

$$x_0 = 1\cdot\begin{pmatrix}0\\0\\1\\0\\0\end{pmatrix} + 2\cdot\begin{pmatrix}3\\0\\0\\-1\\1\end{pmatrix} = \begin{pmatrix}6\\0\\1\\-2\\2\end{pmatrix}.$$
Indeed,

$$\begin{pmatrix}0\\2\\0\\6\\6\end{pmatrix} + \begin{pmatrix}6\\0\\1\\-2\\2\end{pmatrix} = \begin{pmatrix}6\\2\\1\\4\\8\end{pmatrix}$$

and

$$\begin{pmatrix}0 & 2 & 0 & 6 & 6\end{pmatrix}\begin{pmatrix}6\\0\\1\\-2\\2\end{pmatrix} = 0.$$
Hence $t_3 = 1$, $t_2 = 0$, $t_1 = 3$, $s_2 = 2$, $s_1 = 1$,

$$x_0 = 3\cdot\begin{pmatrix}-1\\0\\1\\0\\0\end{pmatrix} + 0\cdot\begin{pmatrix}-2\\0\\0\\1\\0\end{pmatrix} + 1\cdot\begin{pmatrix}3\\-2\\0\\0\\1\end{pmatrix} = \begin{pmatrix}0\\-2\\3\\0\\1\end{pmatrix}$$

and

$$x_R = 1\cdot\begin{pmatrix}1\\1\\1\\2\\-1\end{pmatrix} + 2\cdot\begin{pmatrix}0\\2\\0\\0\\4\end{pmatrix} = \begin{pmatrix}1\\5\\1\\2\\7\end{pmatrix}.$$

Indeed,

$$\begin{pmatrix}0\\-2\\3\\0\\1\end{pmatrix} + \begin{pmatrix}1\\5\\1\\2\\7\end{pmatrix} = \begin{pmatrix}1\\3\\4\\2\\8\end{pmatrix}$$

and

$$\begin{pmatrix}0 & -2 & 3 & 0 & 1\end{pmatrix}\begin{pmatrix}1\\5\\1\\2\\7\end{pmatrix} = 0.$$
3.5.8.
a) 0 ∈ U + V , and so U + V is nonempty.
b) U + V is closed under addition: Let x, y ∈ U + V . Then we can write
x = u1 +v1 and y = u2 +v2 , where u1 , u2 ∈ U and v1 , v2 ∈ V . Since both
U and V are vector spaces, they are closed under addition, and consequently
u1 +u2 ∈ U and v1 +v2 ∈ V . Thus, x + y = (u1 +u2 )+(v1 +v2 ) ∈ U +V .
c) U + V is closed under multiplication by scalars: Let x = u + v ∈ U + V
and c ∈ R, with u ∈ U and v ∈ V . Since both U and V are vector spaces,
they are closed under multiplication by scalars, that is, cu ∈ U and cv ∈ V .
Thus, cx = cu + cv ∈ U + V .
3.5.10. Write $A = (a_1, a_2, \ldots, a_n)$ and $B = (b_1, b_2, \ldots, b_p)$. Then the members of Col(A) have the form $\sum_{i=1}^n s_ia_i$, and those of Col(B) the form $\sum_{i=1}^p t_ib_i$. Hence any member of Col(A) + Col(B) can be written as $\sum_{i=1}^n s_ia_i + \sum_{i=1}^p t_ib_i$, which is exactly the same as the form of an arbitrary member of Col[A B].
3.5.12.
a) Col(A) has $\{a_1, a_2\}$ for a basis, and since $a_1 = e_1$ and $a_2 - a_1 = 2e_2$, it has the set $\{e_1, e_2\}$ for a basis as well. B can be reduced to the echelon form $(0, e_1, e_2)$, and so $\{b_2, b_3\}$ is a basis for Col(B). Since
$$A + B = \begin{pmatrix} 1 & 2 & 0\\ 0 & 2 & 0\\ 0 & 2 & 0 \end{pmatrix},$$

the vectors $(1, 0, 0)^T$ and $(2, 2, 2)^T$ form a basis for Col(A + B).
b) Clearly, Col(A) + Col(B) = R3 , Col(A) ∩ Col(B) = {se1 }, and so they
have {e1 , e2 , e3 } and {e1 }, respectively, for bases.
c) No, because Col(A+B) is two-dimensional and Col(A)+Col(B) is three-
dimensional. Addition of matrices is different from addition of subspaces!
d) 3 = 2 + 2 − 1.
3.5.14. If A is m × p and B is p × n, then AB is m × n. By the result
of Exercise 3.4.9, we have nullity(B) = nullity(AB), and, by Corollary
3.5.1, rank(B) = n − nullity(B) and rank(AB) = n − nullity(AB). Thus
rank(B) = rank(AB).
3.5.16. Assume that A is m × p and B is p × n.
The elements of Col(A) are of the form As, where s ∈ Rp , and the
elements of Col(AB) are of the form ABt, where t ∈ Rn . Thus, writing
s = Bt in ABt, we can see that every element of Col(AB) is also an element
of Col(A).
The elements of Row(B) are of the form B T s, where s ∈ Rp , and the
elements of Row(AB) are of the form B T AT t, where t ∈ Rm . Thus, writing
s = AT t in B T AT t, we can see that every element of Row(AB) is also an
element of Row(B).
Since the rank of a matrix equals the dimension of its column space, we
have rank(AB) ≤ rank(A), and because it also equals the dimension of
its row space, rank(AB) ≤ rank(B). Since, furthermore, the number of
columns is the same n for both B and AB and nullity + rank = n, we have
nullity(AB) ≥ nullity(B).
3.5.18. The first two columns are independent, and so they form a basis
for U. Similarly, the last two columns form a basis for V .
To find a basis for U ∩ V , we must find all vectors x that can be written
both as s1 a1 +s2 a2 and as t4 a4 +t5 a5 , that is, find all solutions of the equation
$s_1a_1 + s_2a_2 = t_4a_4 + t_5a_5$ or, equivalently, of

$$s_1\begin{pmatrix}1\\0\\0\end{pmatrix} + s_2\begin{pmatrix}1\\2\\0\end{pmatrix} - t_4\begin{pmatrix}2\\0\\2\end{pmatrix} - t_5\begin{pmatrix}-1\\4\\2\end{pmatrix} = \begin{pmatrix}0\\0\\0\end{pmatrix},$$

has $(4, 3, -4)^T$ for a basis.
3.5.20. The subspace S generated by $U \cup V$ is the set of all linear combinations of the elements of $U \cup V$. Thus

$$S = \left\{ s \;\middle|\; s = \sum_{i=1}^m a_iu_i + \sum_{j=1}^n b_jv_j;\ m, n \in \mathbb{N}^+;\ \text{all } u_i \in U,\ v_j \in V \right\}.$$

Since

$$u = \sum_{i=1}^m a_iu_i \in U \quad\text{and}\quad v = \sum_{j=1}^n b_jv_j \in V,$$

we have $S = U + V$.
3.5.22. Assume that there is an $x \in U + V$ with two different decompositions: $x = u_1 + v_1 = u_2 + v_2$, with $u_1, u_2 \in U$, $u_1 \neq u_2$ and $v_1, v_2 \in V$, $v_1 \neq v_2$. Then $u_1 - u_2 = v_2 - v_1 \neq 0$, where $u_1 - u_2 \in U$ and $v_2 - v_1 \in V$. Hence $U \cap V \neq \{0\}$, and the sum is not direct.
Conversely, assume that $U \cap V \neq \{0\}$; say, $w \in U \cap V$ and $w \neq 0$. For any $u \in U$ and $v \in V$, let $x = u + w + v$. Then $x = (u + w) + v$ and $x = u + (w + v)$ provide two different decompositions of $x$ with one term from $U$ and another from $V$.
3.5.24. For $n > 2$ subspaces $U_i$ of a vector space X, we define

$$\sum_{i=1}^n U_i = \left\{ \sum_{i=1}^n u_i \;\middle|\; u_i \in U_i \text{ for } i = 1, \ldots, n \right\}.$$
3.5.26. If $U \perp V$, then, for all $u \in U$ and $v \in V$, $u^Tv = 0$. Choose $w \in U \cap V$ for both $u$ and $v$ above. Then $w^Tw = 0$, which implies $w = 0$.
The converse is not true: In $\mathbb{R}^2$ let $U = \{u \mid u = t\,i\}$ and $V = \{v \mid v = t(i + j)\}$. Then $U \cap V = \{0\}$, but U is not orthogonal to V, since $i \cdot (i + j) \neq 0$.
3.5.28.As in the solution of Exercise 3.5.27, let A be a basis for U ∩ V, B
a basis for U ∩ V ⊥ , C a basis for U ⊥ ∩ V , and D a basis for U ⊥ ∩ V ⊥ . Then
U + V has A ∪ B ∪ C for a basis, and so D is a basis not only for U ⊥ ∩ V ⊥
but also for (U + V )⊥ . Hence these two subspaces must be identical.
3.5.30.(U ⊥ )⊥ is the set of all vectors of Rn that are orthogonal to all
vectors of U ⊥ . In other words, for all v ∈ (U ⊥ )⊥ we have vT u = 0 for all
u ∈ U ⊥ . This identity is true, however, precisely for all v ∈ U.
3.5.32. Taking the orthogonal complement of both sides of the equation Null(A) = Row(B), we get Row(A) = Null(B). Hence the transposed column vectors of a basis for Null(B) may serve as the rows of A.
Since B is an echelon matrix, no row reduction is needed for finding its nullspace. The variables $x_3$ and $x_5$ are free: Set $x_3 = s_1$ and $x_5 = s_2$. Then $x_4 = -s_2$, $x_2 = -2s_2$, and $x_1 - 2s_2 + s_1 - 2s_2 - s_2 = 0$, so finally $x_1 = -s_1 + 5s_2$. Hence we can write the solution vectors as

$$x = \begin{pmatrix}-1\\0\\1\\0\\0\end{pmatrix}s_1 + \begin{pmatrix}5\\-2\\0\\-1\\1\end{pmatrix}s_2$$

and obtain

$$A = \begin{pmatrix} -1 & 0 & 1 & 0 & 0\\ 5 & -2 & 0 & -1 & 1 \end{pmatrix}.$$
3.5.34. According to the algorithm of Exercise 3.5.33, if we reduce $[B\,|\,I]$ to the form $\begin{pmatrix} U & L\\ O & M \end{pmatrix}$ with U an echelon matrix, then the transposed rows of M will form a basis for Left-null(B). Thus M can be taken as the matrix A such that Null(A) = Row(B). The reduction goes as follows:

$$\begin{pmatrix} 1 & 1 & 1 & 1 & 0 & 0 & 0\\ 0 & 2 & 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 1 & 0\\ 2 & 1 & 0 & 0 & 0 & 0 & 1 \end{pmatrix} \to \begin{pmatrix} 1 & 1 & 1 & 1 & 0 & 0 & 0\\ 2 & 1 & 0 & 0 & 0 & 0 & 1\\ 0 & 2 & 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 1 & 0 \end{pmatrix} \to \begin{pmatrix} 1 & 1 & 1 & 1 & 0 & 0 & 0\\ 0 & -1 & -2 & -2 & 0 & 0 & 1\\ 0 & 2 & 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 1 & 0 \end{pmatrix} \to \begin{pmatrix} 1 & 1 & 1 & 1 & 0 & 0 & 0\\ 0 & -1 & -2 & -2 & 0 & 0 & 1\\ 0 & 0 & -4 & -4 & 1 & 0 & 2\\ 0 & 0 & 0 & 0 & 0 & 1 & 0 \end{pmatrix}.$$

The zero row of the echelon part carries $M = (0, 0, 1, 0)$ in the right-hand block, and so we can take $A = (0\ \ 0\ \ 1\ \ 0)$.
and

$$S = A^{-1}B = \frac{1}{5}\begin{pmatrix} 1 & 2 & 0\\ -2 & 1 & 0\\ 0 & 0 & 5 \end{pmatrix}\begin{pmatrix} 3 & 1 & 0\\ 2 & 1 & 1\\ 0 & 0 & 1 \end{pmatrix} = \frac{1}{5}\begin{pmatrix} 7 & 3 & 2\\ -4 & -1 & 1\\ 0 & 0 & 5 \end{pmatrix}.$$

b) Hence

$$S^{-1} = \begin{pmatrix} -1 & -3 & 1\\ 4 & 7 & -3\\ 0 & 0 & 1 \end{pmatrix}$$

and

$$x_B = S^{-1}x_A = \begin{pmatrix} -1 & -3 & 1\\ 4 & 7 & -3\\ 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} 2\\ 4\\ 3 \end{pmatrix} = \begin{pmatrix} -11\\ 27\\ 3 \end{pmatrix}.$$

Thus $x = -11b_1 + 27b_2 + 3b_3$.
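The transition computation can be replayed numerically (numpy assumed; `solve` is used in place of forming $S^{-1}$ explicitly):

```python
import numpy as np

Ainv = np.array([[1, 2, 0], [-2, 1, 0], [0, 0, 5]]) / 5.0
B = np.array([[3.0, 1.0, 0.0], [2.0, 1.0, 1.0], [0.0, 0.0, 1.0]])

S = Ainv @ B                      # the transition matrix S = A^{-1} B
xA = np.array([2.0, 4.0, 3.0])
xB = np.linalg.solve(S, xA)       # same as S^{-1} x_A
```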
3.6.6. We have

$$A = R_\theta = \begin{pmatrix} \cos\theta & -\sin\theta\\ \sin\theta & \cos\theta \end{pmatrix}$$

and

$$A^{-1} = R_{-\theta} = \begin{pmatrix} \cos\theta & \sin\theta\\ -\sin\theta & \cos\theta \end{pmatrix}.$$

Thus,

$$M_A = A^{-1}MA = \begin{pmatrix} \cos^2\theta & -\sin\theta\cos\theta\\ -\sin\theta\cos\theta & \sin^2\theta \end{pmatrix}.$$
in Equation (3.138) to

$$\begin{pmatrix} I & C\\ O & O \end{pmatrix}.$$

Hence $IS = C$, that is, the solution of $AS = B$ is the $S = C$ obtained by the reduction.
3.6.10. The vector c2 must be a linear combination x1 a1 + x2 a2 such that
a1 · c2 = 0. Thus, we need to find a solution of a1 · (x1 a1 + x2 a2 ) = 0. This
is equivalent to 14x1 + 5x2 = 0, which is solved by x1 = 5 and x2 = −14.
Hence c2 = 5a1 − 14a2 = 5(1, 2, 3)T − 14(0, 4, −1)T = (5, −46, 29)T , or
any nonzero multiple of it, is a solution.
3.6.12.
a) $(L + M)_A = A^{-1}(L + M)A = A^{-1}LA + A^{-1}MA = L_A + M_A$.
b) $(LM)_A = A^{-1}(LM)A = A^{-1}LAA^{-1}MA = L_AM_A$.
3.6.14. For any nonzero $t$ and $t'$ we can choose $S = \begin{pmatrix} t' & 0\\ 0 & t \end{pmatrix}$, and then $SM(t)S^{-1} = M(t')$. If $t = 0$, then $M(0) = I$, which is similar only to itself.
3.6.16. Yes: The permutation matrix $S = \begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix}$ is its own inverse, and premultiplying the first given matrix by it switches the rows, and postmultiplying switches the columns.
3.6.18. Let X denote the set of n × n matrices. For matrices A, B of X,
we say that B is similar to A if there exists an invertible matrix S in X such
that B = S −1 AS.
1. This relation on X is reflexive: For any A we have A = I −1 AI.
2. It is symmetric: If B = S −1 AS, then A = T −1 BT with T = S −1 .
3. It is transitive: If B = S −1 AS and C = T −1 BT , then C = T −1 S −1 AST =
(ST )−1 A(ST ).
3.6.20.
a) $(AB)_{ik} = \sum_{j=1}^n a_{ij}b_{jk}$, and so $\mathrm{Tr}(AB) = \sum_{i=1}^n\sum_{j=1}^n a_{ij}b_{ji}$. Similarly, $\mathrm{Tr}(BA) = \sum_{i=1}^n\sum_{j=1}^n b_{ij}a_{ji}$, and so $\mathrm{Tr}(AB) = \mathrm{Tr}(BA)$.
b) If $A = SBS^{-1}$, then $\mathrm{Tr}(A) = \mathrm{Tr}(S(BS^{-1})) = \mathrm{Tr}((BS^{-1})S) = \mathrm{Tr}(B)$.
c) They are not similar: the trace of the first matrix is 4, but the trace of the second matrix is 5.
4.1.2. Let $x = \sum_{i=1}^n x_{Ai}a_i$ and $y = \sum_{i=1}^n y_{Ai}a_i$. Then

$$x + y = \sum_{i=1}^n (x_{Ai} + y_{Ai})a_i.$$

Thus

$$[T] = \begin{pmatrix} 1 & -1 & 0\\ 0 & 1 & -1\\ -1 & 0 & 1 \end{pmatrix}.$$
4.1.8. We have

$$T\begin{pmatrix}1\\1\end{pmatrix} = [T]\begin{pmatrix}1\\1\end{pmatrix} = \begin{pmatrix}1\\1\\1\end{pmatrix}$$

and

$$T\begin{pmatrix}1\\-1\end{pmatrix} = [T]\begin{pmatrix}1\\-1\end{pmatrix} = \begin{pmatrix}1\\-1\\-1\end{pmatrix}.$$
we get

$$[T] = \begin{pmatrix} 1 & 1\\ 1 & -1\\ 1 & -1 \end{pmatrix}\cdot\frac{1}{2}\begin{pmatrix} 1 & 1\\ 1 & -1 \end{pmatrix} = \begin{pmatrix} 1 & 0\\ 0 & 1\\ 0 & 1 \end{pmatrix}.$$

Furthermore,

$$T(x) = [T]x = \begin{pmatrix} 1 & 0\\ 0 & 1\\ 0 & 1 \end{pmatrix}\begin{pmatrix} x_1\\ x_2 \end{pmatrix} = \begin{pmatrix} x_1\\ x_2\\ x_2 \end{pmatrix}.$$
[Figure: reflection about the line $ax + by = 0$, showing the basis vectors $i$ and $j$, the direction vector $v$ of the line, and the projections $v_1$ and $t_1$.]
Then, from the picture,

$$t_1 = i + 2(v_1 - i) = \frac{1}{a^2+b^2}\begin{pmatrix} 2b^2\\ -2ab \end{pmatrix} - \begin{pmatrix} 1\\ 0 \end{pmatrix} = \frac{1}{a^2+b^2}\begin{pmatrix} b^2 - a^2\\ -2ab \end{pmatrix}.$$

Similarly,

$$v_2 = \frac{j \cdot v}{|v|^2}\,v = \frac{-a}{a^2+b^2}\begin{pmatrix} b\\ -a \end{pmatrix}$$

and

$$t_2 = j + 2(v_2 - j) = \frac{1}{a^2+b^2}\begin{pmatrix} -2ab\\ 2a^2 \end{pmatrix} - \begin{pmatrix} 0\\ 1 \end{pmatrix} = \frac{1}{a^2+b^2}\begin{pmatrix} -2ab\\ a^2 - b^2 \end{pmatrix}.$$

Thus,

$$[T] = \frac{1}{a^2+b^2}\begin{pmatrix} b^2 - a^2 & -2ab\\ -2ab & a^2 - b^2 \end{pmatrix}.$$
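The reflection matrix $[T]$ has the expected properties — it fixes the line's direction $(b, -a)$, negates the normal $(a, b)$, and is an involution. A sketch with a sample line ($a = 1$, $b = 2$ is an arbitrary choice; numpy assumed):

```python
import numpy as np

def reflection(a, b):
    # matrix of the reflection about the line a x + b y = 0
    d = a * a + b * b
    return np.array([[b * b - a * a, -2 * a * b],
                     [-2 * a * b, a * a - b * b]]) / d

T = reflection(1.0, 2.0)      # sample line x + 2y = 0
v = np.array([2.0, -1.0])     # direction vector (b, -a) of that line
n = np.array([1.0, 2.0])      # normal vector (a, b)
```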
and

$$y_B = B^{-1}y = B^{-1}T(x) = \frac{1}{2}\begin{pmatrix} 1 & -1\\ 1 & 1 \end{pmatrix}\begin{pmatrix} x_1 - x_2\\ x_1 + x_3 \end{pmatrix} = \frac{1}{2}\begin{pmatrix} -x_2 - x_3\\ 2x_1 - x_2 + x_3 \end{pmatrix}.$$

Thus,

$$T_{AB}x_A = \frac{1}{2}\begin{pmatrix} -1 & -2 & -1\\ 1 & 0 & 3 \end{pmatrix}\cdot\frac{1}{2}\begin{pmatrix} x_1 + x_2 - x_3\\ -x_1 + x_2 + x_3\\ x_1 - x_2 + x_3 \end{pmatrix} = \frac{1}{2}\begin{pmatrix} -x_2 - x_3\\ 2x_1 - x_2 + x_3 \end{pmatrix},$$

the same as $y_B$.
4.1.18. The matrix [A] that represents the transformation from the standard basis $E = (1, x, \ldots, x^n)$ to the new basis $A = (1, 1 + x, 1 + x + x^2, \ldots, 1 + x + \cdots + x^n)$ is the $(n+1) \times (n+1)$ matrix

$$[A] = \begin{pmatrix} 1 & 1 & 1 & \cdots & 1\\ 0 & 1 & 1 & \cdots & 1\\ 0 & 0 & 1 & \cdots & 1\\ \vdots & \vdots & \vdots & & \vdots\\ 0 & 0 & 0 & \cdots & 1 \end{pmatrix}.$$
4.1.20. Similarly to Example 4.1.13 in the text, consider the ordered bases $A = (1, x, \ldots, x^n)$ and $B = (1, x, \ldots, x^{n+1})$. Then, using the notation $a_j = x^{j-1}$ for $j = 1, 2, \ldots, n+1$ and $b_i = x^{i-1}$ for $i = 1, 2, \ldots, n+2$, we have

$$\begin{aligned} X(a_1) &= 0b_1 + 1b_2 + 0b_3 + \cdots + 0b_{n+2},\\ X(a_2) &= 0b_1 + 0b_2 + 1b_3 + \cdots + 0b_{n+2},\\ &\ \ \vdots\\ X(a_{n+1}) &= 0b_1 + 0b_2 + 0b_3 + \cdots + 1b_{n+2}. \end{aligned}$$

According to Equation 4.29, the coefficients of the $b_i$ here form the transpose of the matrix that represents X relative to these bases. Thus $X_{A,B}$ is the $(n+2) \times (n+1)$ matrix

$$X_{A,B} = \begin{pmatrix} 0 & 0 & 0 & \cdots & 0\\ 1 & 0 & 0 & \cdots & 0\\ 0 & 1 & 0 & \cdots & 0\\ 0 & 0 & 1 & \cdots & 0\\ \vdots & \vdots & \vdots & & \vdots\\ 0 & 0 & 0 & \cdots & 1 \end{pmatrix}.$$
The linear independence of the bi implies that ci = 0 for all i. Hence the ai
vectors are independent, and consequently, by the result of Exercise 3.5.40,
they form a basis for U.
All that remains to show is that T is a one-to-one mapping of U onto V.
But this proof is now easy: Let $a \in U$ such that $T(a) = 0$. Since the $a_i$ vectors form a basis for U, we can write $a = \sum_{i=1}^n c_ia_i$. But then

$$0 = T(a) = T\left(\sum_{i=1}^n c_ia_i\right) = \sum_{i=1}^n c_iT(a_i) = \sum_{i=1}^n c_ib_i.$$
The linear independence of the bi implies that ci = 0 for all i, and therefore
a = 0. Thus the kernel of T contains just the zero vector, which, by Exercise
4.2.3, implies that T is one-to-one.
4.2.10. If $T_{A,B}$ represents an isomorphism of U onto V, then $\mathrm{rank}(T_{A,B}) = \mathrm{rank}(T) = \dim(V) = \dim(U)$. Thus $T_{A,B}$ is nonsingular. $T_{A,B}^{-1}$ represents the reverse isomorphism $T^{-1}$.
4.2.12. One possible choice for K is the subspace of $\mathbb{R}^3$ generated by $e_2$ and $e_3$, that is,

$$K = \{x : x = ae_2 + be_3, \text{ for } a, b \in \mathbb{R}\}.$$
4.3.2. First translate by $(-1, 2)$ to move the first vertex to the origin, then shrink in the x-direction by a factor of $1/3$, and in the y-direction by a factor of $1/4$ (other solutions would be more involved):

$$L = \begin{pmatrix} 1 & 0 & 0\\ 0 & 1/4 & 0\\ 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} 1/3 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} 1 & 0 & -1\\ 0 & 1 & 2\\ 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} 1/3 & 0 & -1/3\\ 0 & 1/4 & 1/2\\ 0 & 0 & 1 \end{pmatrix}.$$
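The product of the three homogeneous-coordinate matrices can be checked with numpy; the assembled L should also send the first vertex $(1, -2)$ to the origin:

```python
import numpy as np

translate = np.array([[1.0, 0.0, -1.0],
                      [0.0, 1.0,  2.0],
                      [0.0, 0.0,  1.0]])
scale_x = np.diag([1.0 / 3.0, 1.0, 1.0])
scale_y = np.diag([1.0, 1.0 / 4.0, 1.0])

L = scale_y @ scale_x @ translate
vertex = np.array([1.0, -2.0, 1.0])   # homogeneous coordinates of the vertex (1, -2)
```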
4.3.4. a) Let $p = \sqrt{p_1^2 + p_2^2 + p_3^2}$ and $p_{12} = \sqrt{p_1^2 + p_2^2}$, and let $R_1$ denote the matrix of the rotation by $\theta$ about the z-axis, where $\theta$ stands for the angle from the y-axis to the vector $(p_1, p_2, 0)^T$. Then

$$R_1 = \begin{pmatrix} \cos\theta & -\sin\theta & 0\\ \sin\theta & \cos\theta & 0\\ 0 & 0 & 1 \end{pmatrix} = \frac{1}{p_{12}}\begin{pmatrix} p_2 & -p_1 & 0\\ p_1 & p_2 & 0\\ 0 & 0 & p_{12} \end{pmatrix}.$$

Hence,

$$R = R_2R_1 = \frac{1}{pp_{12}}\begin{pmatrix} p & 0 & 0\\ 0 & p_3 & -p_{12}\\ 0 & p_{12} & p_3 \end{pmatrix}\begin{pmatrix} p_2 & -p_1 & 0\\ p_1 & p_2 & 0\\ 0 & 0 & p_{12} \end{pmatrix} = \frac{1}{pp_{12}}\begin{pmatrix} pp_2 & -pp_1 & 0\\ p_1p_3 & p_2p_3 & -p_{12}^2\\ p_1p_{12} & p_2p_{12} & p_3p_{12} \end{pmatrix}.$$
b) Let $p_{23} = \sqrt{p_2^2 + p_3^2}$, and let $R_1$ denote the matrix of the rotation by $\psi$ about the x-axis, where $\psi$ stands for the angle from the z-axis to the vector $(0, p_2, p_3)^T$. Then

$$R_1 = \begin{pmatrix} 1 & 0 & 0\\ 0 & \cos\psi & -\sin\psi\\ 0 & \sin\psi & \cos\psi \end{pmatrix} = \frac{1}{p_{23}}\begin{pmatrix} p_{23} & 0 & 0\\ 0 & p_3 & -p_2\\ 0 & p_2 & p_3 \end{pmatrix}.$$

Similarly, the rotation by the angle $\varphi$ about the y-axis is given by

$$R_2 = \frac{1}{p}\begin{pmatrix} p_{23} & 0 & -p_1\\ 0 & p & 0\\ p_1 & 0 & p_{23} \end{pmatrix}.$$

Thus,

$$R = R_2R_1 = \frac{1}{pp_{23}}\begin{pmatrix} p_{23} & 0 & -p_1\\ 0 & p & 0\\ p_1 & 0 & p_{23} \end{pmatrix}\begin{pmatrix} p_{23} & 0 & 0\\ 0 & p_3 & -p_2\\ 0 & p_2 & p_3 \end{pmatrix} = \frac{1}{pp_{23}}\begin{pmatrix} p_{23}^2 & -p_1p_2 & -p_1p_3\\ 0 & pp_3 & -pp_2\\ p_1p_{23} & p_2p_{23} & p_3p_{23} \end{pmatrix}.$$
4.3.6.

$$T(t_1, t_2, t_3) = \begin{pmatrix} 1 & 0 & 0 & t_1\\ 0 & 1 & 0 & t_2\\ 0 & 0 & 1 & t_3\\ 0 & 0 & 0 & 1 \end{pmatrix}.$$
Then the required matrix is given by the product T (1, 0, 0) Rθ , where T (1, 0, 0)
is the shift matrix from Exercise 4.3.6 with t1 = 1, t2 = t3 = 0.
4.3.10. By Corollary 3.6.1, the transition matrix from the standard basis to the new basis B is $S = B = (u, v, n)$, and, since S is orthogonal, the transition matrix in the reverse direction is $S^{-1} = S^T$. (See Theorem 5.2.2.) Also, for any vector x, the corresponding coordinate vector relative to the new basis is $x_B = S^{-1}x$. If T denotes the matrix that deletes the n-component, then $Tx_B = TS^{-1}x$ is the projection in the new basis, and $x_V = STx_B = STS^{-1}x$ is the projection in the standard basis. Thus the projection matrix is

$$P(u, v) = STS^{-1} = \begin{pmatrix} u_1 & v_1 & n_1\\ u_2 & v_2 & n_2\\ u_3 & v_3 & n_3 \end{pmatrix}\begin{pmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 0 \end{pmatrix}\begin{pmatrix} u_1 & u_2 & u_3\\ v_1 & v_2 & v_3\\ n_1 & n_2 & n_3 \end{pmatrix} = \begin{pmatrix} u_1^2 + v_1^2 & u_1u_2 + v_1v_2 & u_1u_3 + v_1v_3\\ u_1u_2 + v_1v_2 & u_2^2 + v_2^2 & u_2u_3 + v_2v_3\\ u_1u_3 + v_1v_3 & u_2u_3 + v_2v_3 & u_3^2 + v_3^2 \end{pmatrix}.$$
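A numeric sanity check of $P(u, v) = STS^{-1}$ (the orthonormal pair $u$, $v$ below is an arbitrary sample; numpy assumed):

```python
import numpy as np

u = np.array([1.0, 0.0, 0.0])          # sample orthonormal u, v spanning a plane
v = np.array([0.0, 3.0, 4.0]) / 5.0
n = np.cross(u, v)                     # unit normal to the plane

S = np.column_stack([u, v, n])
T = np.diag([1.0, 1.0, 0.0])           # deletes the n-component
P = S @ T @ S.T                        # projection onto the u,v-plane

outer = np.outer(u, u) + np.outer(v, v)   # the closed form computed above
```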
5.1.6.

$$P = \begin{pmatrix} 1 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0 \end{pmatrix}.$$

5.1.8.

$$P = \frac{1}{a^2 + b^2 + c^2}\begin{pmatrix} a^2 & ab & ac\\ ab & b^2 & bc\\ ac & bc & c^2 \end{pmatrix}.$$
and

$$A^Tz = \begin{pmatrix} x_1 & x_2 & \cdots & x_m\\ y_1 & y_2 & \cdots & y_m\\ 1 & 1 & \cdots & 1 \end{pmatrix}\begin{pmatrix} z_1\\ z_2\\ \vdots\\ z_m \end{pmatrix} = \begin{pmatrix} \sum x_iz_i\\ \sum y_iz_i\\ \sum z_i \end{pmatrix}.$$
5.1.20.

$$A^TA = \begin{pmatrix} x_1^2 & x_2^2 & \cdots & x_m^2\\ x_1 & x_2 & \cdots & x_m\\ 1 & 1 & \cdots & 1 \end{pmatrix}\begin{pmatrix} x_1^2 & x_1 & 1\\ x_2^2 & x_2 & 1\\ \vdots & \vdots & \vdots\\ x_m^2 & x_m & 1 \end{pmatrix} = \begin{pmatrix} \sum x_i^4 & \sum x_i^3 & \sum x_i^2\\ \sum x_i^3 & \sum x_i^2 & \sum x_i\\ \sum x_i^2 & \sum x_i & m \end{pmatrix}$$

and

$$A^Ty = \begin{pmatrix} x_1^2 & x_2^2 & \cdots & x_m^2\\ x_1 & x_2 & \cdots & x_m\\ 1 & 1 & \cdots & 1 \end{pmatrix}\begin{pmatrix} y_1\\ y_2\\ \vdots\\ y_m \end{pmatrix} = \begin{pmatrix} \sum x_i^2y_i\\ \sum x_iy_i\\ \sum y_i \end{pmatrix}.$$
5.2.2. From Equation 4.115, the viewplane V is the column space of the matrix $A = (u, v)$, that is,

$$A = \begin{pmatrix} u_1 & v_1\\ u_2 & v_2\\ u_3 & v_3 \end{pmatrix} \quad\text{and}\quad A^T = \begin{pmatrix} u_1 & u_2 & u_3\\ v_1 & v_2 & v_3 \end{pmatrix}.$$

Hence $P = AA^T = P(u, v)$.
5.2.4. Orthogonal matrices preserve angles: Let Q be any $n \times n$ orthogonal matrix, x, y any vectors in $\mathbb{R}^n$, $\theta$ the angle between Qx and Qy, and $\varphi$ the angle between x and y. Then

$$\cos\theta = \frac{(Qx)\cdot(Qy)}{|Qx||Qy|} = \frac{(Qx)^TQy}{|Qx||Qy|} = \frac{x^TQ^TQy}{|Qx||Qy|} = \frac{x^Ty}{|x||y|} = \frac{x\cdot y}{|x||y|} = \cos\varphi.$$

Since the angles are in $[0, \pi]$ and the cosine function is one-to-one there, we obtain $\theta = \varphi$.
5.2.6. (P Q)T = QT P T = Q−1 P −1 = (P Q)−1 .
5.2.8. 1. As in Example 5.2.2, let $b_1 = a_1$,

$$a_2 - p_2 = a_2 - \frac{a_2\cdot b_1}{b_1\cdot b_1}b_1 = (1, 0, 0, 1)^T - \frac{1}{2}(0, 0, -1, 1)^T,$$

and

$$b_2 = 2(a_2 - p_2) = (2, 0, 0, 2)^T - (0, 0, -1, 1)^T = (2, 0, 1, 1)^T.$$

Similarly, let

$$a_3 - p_3 = a_3 - \frac{a_3\cdot b_1}{b_1\cdot b_1}b_1 - \frac{a_3\cdot b_2}{b_2\cdot b_2}b_2 = (1, 0, -1, 0)^T - \frac{1}{2}(0, 0, -1, 1)^T - \frac{1}{6}(2, 0, 1, 1)^T,$$

and

$$b_3 = (6, 0, -6, 0)^T - (0, 0, -3, 3)^T - (2, 0, 1, 1)^T = (4, 0, -4, -4)^T.$$

Hence

$$q_1 = \frac{1}{\sqrt{2}}b_1, \qquad q_2 = \frac{1}{\sqrt{6}}b_2, \qquad q_3 = \frac{1}{\sqrt{48}}b_3$$

form an orthonormal basis for U.
2. On the other hand, it is clear that e2 is orthogonal to U, and so it extends
the basis above to an orthonormal basis for R4 . But then e1 , e3 , e4 must also
form an orthonormal basis for U.
3. The QR factorization of A is

$$A = \begin{pmatrix} 0 & 1 & 1\\ 0 & 0 & 0\\ -1 & 0 & -1\\ 1 & 1 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 2/\sqrt{6} & 1/\sqrt{3}\\ 0 & 0 & 0\\ -1/\sqrt{2} & 1/\sqrt{6} & -1/\sqrt{3}\\ 1/\sqrt{2} & 1/\sqrt{6} & -1/\sqrt{3} \end{pmatrix}\begin{pmatrix} \sqrt{2} & 1/\sqrt{2} & 1/\sqrt{2}\\ 0 & \sqrt{6}/2 & 1/\sqrt{6}\\ 0 & 0 & 2/\sqrt{3} \end{pmatrix}.$$
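The Gram–Schmidt steps of part 1 and the resulting QR factorization can be reproduced numerically (numpy assumed):

```python
import numpy as np

a1 = np.array([0.0, 0.0, -1.0, 1.0])
a2 = np.array([1.0, 0.0, 0.0, 1.0])
a3 = np.array([1.0, 0.0, -1.0, 0.0])

# Gram-Schmidt as in part 1 (unnormalized)
b1 = a1
b2 = a2 - (a2 @ b1) / (b1 @ b1) * b1
b3 = a3 - (a3 @ b1) / (b1 @ b1) * b1 - (a3 @ b2) / (b2 @ b2) * b2

Q = np.column_stack([b / np.linalg.norm(b) for b in (b1, b2, b3)])
A = np.column_stack([a1, a2, a3])
R = Q.T @ A    # upper triangular, since span(q1..qk) = span(a1..ak)
```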
6.1.2.

$$\begin{vmatrix} 0 & 1 & 2\\ 4 & 0 & 3\\ 3 & 2 & 1 \end{vmatrix} = \begin{vmatrix} 0 & 1 & 2\\ 4 & 0 & 3\\ 3 & 3 & 3 \end{vmatrix} = 3\begin{vmatrix} 0 & 1 & 2\\ 4 & 0 & 3\\ 1 & 1 & 1 \end{vmatrix} = (-3)\begin{vmatrix} 1 & 1 & 1\\ 4 & 0 & 3\\ 0 & 1 & 2 \end{vmatrix} = (-3)\begin{vmatrix} 1 & 1 & 1\\ 0 & -4 & -1\\ 0 & 1 & 2 \end{vmatrix} = 3\begin{vmatrix} 1 & 1 & 1\\ 0 & 1 & 2\\ 0 & -4 & -1 \end{vmatrix} = 3\begin{vmatrix} 1 & 1 & 1\\ 0 & 1 & 2\\ 0 & 0 & 7 \end{vmatrix} = 21.$$
6.1.4.

$$\begin{vmatrix} -1 & -2 & 2 & 1\\ 0 & 0 & -1 & 0\\ 0 & 0 & 1 & -1\\ 1 & -1 & 1 & -1 \end{vmatrix} = \begin{vmatrix} -1 & -2 & 2 & 1\\ 0 & 0 & -1 & 0\\ 0 & 0 & 1 & -1\\ 0 & -3 & 3 & 0 \end{vmatrix} = (-1)\begin{vmatrix} -1 & -2 & 2 & 1\\ 0 & -3 & 3 & 0\\ 0 & 0 & 1 & -1\\ 0 & 0 & -1 & 0 \end{vmatrix} = (-1)\begin{vmatrix} -1 & -2 & 2 & 1\\ 0 & -3 & 3 & 0\\ 0 & 0 & 1 & -1\\ 0 & 0 & 0 & -1 \end{vmatrix} = 3.$$
6.1.8. Yes, a similar statement is true for any n. The fact that all products
in Theorem 6.1.4 are zero, except the product of the secondary diagonal ele-
ments, can be proved much the same way as for Theorem 6.1.7; just replace
the word "diagonal" by "secondary diagonal" everywhere in the proof and call
the zero entries above the secondary diagonal the bad entries. Thus, Equation
6.1.17 becomes
$$\det(A) = \epsilon(P)\,a_{n1}a_{n-1,2}\cdots a_{1n},$$
where P = (n, n − 1, . . . , 1), that is, P is the permutation of the first n natural numbers in descending order. This P requires [n/2] switches around the middle to be brought to the natural order (where [n/2] is the greatest integer ≤ n/2), and so ε(P) = (−1)^{[n/2]}.
An alternative way of proving the statement is by exchanging columns (or
rows) of the given matrix so as to change it into upper triangular form, count-
ing the number of exchanges, and applying Theorem 6.1.1 to each exchange.
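The sign (−1)^{[n/2]} for the secondary-diagonal formula can be spot-checked numerically; a small sketch (numpy assumed, not part of the text):

```python
import numpy as np

def antidiag_sign(n):
    """Sign of the reversal permutation (n, n-1, ..., 1): (-1)^floor(n/2)."""
    return (-1) ** (n // 2)

for n in range(1, 8):
    # A matrix whose only nonzero entries sit on the secondary diagonal.
    M = np.zeros((n, n))
    vals = np.arange(1, n + 1, dtype=float)
    for i in range(n):
        M[i, n - 1 - i] = vals[i]
    expected = antidiag_sign(n) * np.prod(vals)
    assert round(np.linalg.det(M)) == round(expected)
```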
6.1.10. A and B are similar if there exists an invertible matrix S such that B = S^{-1}AS. In that case
$$\det(B) = \det(S^{-1})\det(A)\det(S) = \frac{1}{\det(S)}\det(A)\det(S) = \det(A).$$
$$s_1 + a s_2 + a^2 s_3 = 0,$$
$$s_1 + b s_2 + b^2 s_3 = 0,$$
$$s_1 + c s_2 + c^2 s_3 = 0.$$
6.2.4. Expansion along the second row and then along the second row of the resulting determinant gives
$$\begin{vmatrix} -1 & -2 & 2 & 1 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 1 & -1 \\ 1 & -1 & 1 & -1 \end{vmatrix}
= -(-1)\begin{vmatrix} -1 & -2 & 1 \\ 0 & 0 & -1 \\ 1 & -1 & -1 \end{vmatrix}
= (-1)^2\begin{vmatrix} -1 & -2 \\ 1 & -1 \end{vmatrix} = 3.$$
6.2.6.
$$x_1 = \begin{vmatrix} -1 & 3 \\ 3 & 1 \end{vmatrix} \div \begin{vmatrix} 1 & 3 \\ 2 & 1 \end{vmatrix} = \frac{-10}{-5} = 2,$$
and
$$x_2 = \begin{vmatrix} 1 & -1 \\ 2 & 3 \end{vmatrix} \div \begin{vmatrix} 1 & 3 \\ 2 & 1 \end{vmatrix} = \frac{5}{-5} = -1.$$
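The system itself is not reproduced in this excerpt, but the Cramer determinants displayed correspond to x_1 + 3x_2 = −1, 2x_1 + x_2 = 3. A quick numerical sketch of Cramer's rule for that inferred system (numpy assumed, not part of the text):

```python
import numpy as np

# Inferred system: x1 + 3 x2 = -1, 2 x1 + x2 = 3.
A = np.array([[1., 3.],
              [2., 1.]])
b = np.array([-1., 3.])

detA = np.linalg.det(A)
# Cramer's rule: replace one column of A by b at a time.
x1 = np.linalg.det(np.column_stack([b, A[:, 1]])) / detA
x2 = np.linalg.det(np.column_stack([A[:, 0], b])) / detA

assert np.isclose(x1, 2.0) and np.isclose(x2, -1.0)
```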
6.2.12. If we apply Theorem 6.2.3 to A^{-1} in place of A, we get
$$\left(A^{-1}\right)^{-1} = \frac{\mathrm{Adj}\,(A^{-1})}{|A^{-1}|} = \frac{\mathrm{Adj}\,(A^{-1})}{1/|A|},$$
and so
$$\mathrm{Adj}\,(A^{-1}) = \frac{\left(A^{-1}\right)^{-1}}{|A|} = \frac{A}{|A|} = \left(|A|\,A^{-1}\right)^{-1} = \left(\mathrm{Adj}\,(A)\right)^{-1}.$$
6.2.14. Expanding the determinant along the first row, we can see that
this is a linear equation; and substituting the coordinates of either of the given
points for x, y in the determinant, we get two equal rows, which makes the
determinant vanish. Thus this is an equation of the line containing the given
points.
6.2.16. Expanding the determinant along the first row, we obtain a linear
combination of the elements of that row. That combination can be rearranged
into the given form, because the coefficient of x^2 + y^2 is a nonzero determinant by the result in Exercise 6.2.13, since the given points are noncollinear.
Substituting the coordinates of any of the given points for x, y in the deter-
minant, we get two equal rows, which makes the determinant vanish, and so
these points lie on the circle.
6.2.18.
$$\frac{1}{6}\begin{vmatrix} a_1 & a_2 & a_3 & 1 \\ b_1 & b_2 & b_3 & 1 \\ c_1 & c_2 & c_3 & 1 \\ d_1 & d_2 & d_3 & 1 \end{vmatrix}
= \frac{1}{6}\begin{vmatrix} a_1 & a_2 & a_3 & 1 \\ b_1 - a_1 & b_2 - a_2 & b_3 - a_3 & 0 \\ c_1 - a_1 & c_2 - a_2 & c_3 - a_3 & 0 \\ d_1 - a_1 & d_2 - a_2 & d_3 - a_3 & 0 \end{vmatrix}$$
$$= -\frac{1}{6}\begin{vmatrix} b_1 - a_1 & b_2 - a_2 & b_3 - a_3 \\ c_1 - a_1 & c_2 - a_2 & c_3 - a_3 \\ d_1 - a_1 & d_2 - a_2 & d_3 - a_3 \end{vmatrix}
= -\frac{1}{6}\,|\,b - a \;\; c - a \;\; d - a\,|.$$
By the remarks in the last paragraph of Section 6.2, the absolute value of
the determinant above gives the volume of the parallelepiped with the given
edges. (A geometric proof, involving the cross product, is suggested in Ex-
ercise 6.3.5.) The volume of the tetrahedron is one sixth of that of the corre-
sponding parallelepiped.
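The one-sixth rule above is easy to verify numerically; a minimal sketch (numpy assumed, not part of the text), using the unit "corner" tetrahedron as a hypothetical example:

```python
import numpy as np

def tetra_volume(a, b, c, d):
    """Volume of the tetrahedron with vertices a, b, c, d: one sixth of
    the parallelepiped spanned by the edge vectors b-a, c-a, d-a."""
    M = np.array([b - a, c - a, d - a], dtype=float)
    return abs(np.linalg.det(M)) / 6.0

a = np.array([0., 0., 0.])
b = np.array([1., 0., 0.])
c = np.array([0., 1., 0.])
d = np.array([0., 0., 1.])
assert np.isclose(tetra_volume(a, b, c, d), 1/6)
```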
6.2.20. Write T(e_1) = t_1 and T(e_2) = t_2. Then, by Theorem 4.1.2, [T] = [t_1 t_2]. Thus the unit square is mapped to the parallelogram spanned by t_1 and t_2, which has area |det([T])|.
6.3.2. We may choose
$$n = \overrightarrow{AB} \times \overrightarrow{AC} = \begin{vmatrix} i & j & k \\ -1 & 0 & 1 \\ 2 & 1 & 0 \end{vmatrix} = -i + 2j - k$$
as the normal vector of the plane, and so the plane's equation can be written as n · (r − a) = 0, that is, as −(x − 1) + 2(y + 1) − (z − 2) = 0.
6.3.4.
$$w\cdot(u\times v) = w\cdot\left[(u_2 v_3 - u_3 v_2)i - (u_1 v_3 - u_3 v_1)j + (u_1 v_2 - u_2 v_1)k\right]$$
$$= w_1(u_2 v_3 - u_3 v_2) - w_2(u_1 v_3 - u_3 v_1) + w_3(u_1 v_2 - u_2 v_1)$$
$$= \begin{vmatrix} w_1 & w_2 & w_3 \\ u_1 & u_2 & u_3 \\ v_1 & v_2 & v_3 \end{vmatrix} = \begin{vmatrix} w_1 & u_1 & v_1 \\ w_2 & u_2 & v_2 \\ w_3 & u_3 & v_3 \end{vmatrix} = \det(w, u, v) = \det(u, v, w).$$
Similarly,
$$u\cdot(v\times w) = \det(u, v, w)$$
and
$$v\cdot(w\times u) = \det(v, w, u) = \det(u, v, w).$$
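The cyclic triple-product identities can be verified numerically for arbitrary vectors; a short sketch (numpy assumed, not part of the text):

```python
import numpy as np

# Hypothetical example vectors.
u = np.array([1., 2., 3.])
v = np.array([-1., 0., 4.])
w = np.array([2., 5., -2.])

det_uvw = np.linalg.det(np.column_stack([u, v, w]))

# All three cyclic triple products equal det(u, v, w).
assert np.isclose(w @ np.cross(u, v), det_uvw)
assert np.isclose(u @ np.cross(v, w), det_uvw)
assert np.isclose(v @ np.cross(w, u), det_uvw)
```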
6.3.6. Let the edge vectors be directed as shown in the diagram (figure omitted: a tetrahedron with edge vectors a_1, . . . , a_5), and let n_1 = (1/2) a_1 × a_2, n_2 = (1/2) a_2 × a_3, n_3 = (1/2) a_3 × a_4, and n_4 = (1/2) a_4 × a_5. Then a_4 = a_1 − a_3 and a_5 = a_1 − a_2. Hence
$$2(n_1 + n_2 + n_3 + n_4) = a_1\times a_2 + a_2\times a_3 + a_3\times(a_1 - a_3) + (a_1 - a_3)\times(a_1 - a_2)$$
$$= a_1\times a_2 + a_2\times a_3 + a_3\times a_1 - a_3\times a_1 - a_1\times a_2 + a_3\times a_2 = 0.$$
or in echelon form if
$$(A - 0I)s = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}\begin{pmatrix} s_1 \\ s_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.$$
Thus, s_2 is free and s_1 = 0 or, equivalently, s = s_2 (0, 1)^T, which shows that λ = 0 has geometric multiplicity 1, and so A is defective.
7.1.6. The characteristic equation is
$$|A - \lambda I| = \begin{vmatrix} 2-\lambda & 0 & 1 \\ 0 & 2-\lambda & 0 \\ 0 & 0 & 2-\lambda \end{vmatrix} = (2-\lambda)^3 = 0,$$
so λ = 2 is an eigenvalue of algebraic multiplicity three. In the system (A − 2I)s = 0, the variables s_1 and s_2 are free and s_3 = 0. Thus the geometric multiplicity of the eigenvalue λ = 2 is two, the corresponding eigenspace is spanned by s_1 = e_1 and s_2 = e_2, and A is defective.
7.1.8. The characteristic equation is
$$|A - \lambda I| = \begin{vmatrix} 1-\lambda & 0 & 0 & 1 \\ 0 & 1-\lambda & 1 & 1 \\ 0 & 0 & 2-\lambda & 0 \\ 0 & 0 & 0 & 2-\lambda \end{vmatrix} = 0,$$
or equivalently, (3 − λ)^2 − 2^2 = 0. The solutions are λ_1 = 1 and λ_2 = 5, and corresponding normalized eigenvectors are
$$s_1 = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ -1 \end{pmatrix} \quad\text{and}\quad s_2 = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ 1 \end{pmatrix}.$$
Hence
$$S = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ -1 & 1 \end{pmatrix}$$
is an orthogonal matrix, and so
$$S^{-1} = S^T = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix}.$$
Also,
$$\Lambda = \begin{pmatrix} 1 & 0 \\ 0 & 5 \end{pmatrix}.$$
Thus,
$$A^{100} = S\Lambda^{100}S^{-1} = \frac{1}{2}\begin{pmatrix} 1 & 1 \\ -1 & 1 \end{pmatrix}\begin{pmatrix} 1 & 0 \\ 0 & 5^{100} \end{pmatrix}\begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix} = \frac{1}{2}\begin{pmatrix} 1 + 5^{100} & -1 + 5^{100} \\ -1 + 5^{100} & 1 + 5^{100} \end{pmatrix}.$$
Similarly,
$$A^{1/2} = S\Lambda^{1/2}S^{-1} = \frac{1}{2}\begin{pmatrix} 1 & 1 \\ -1 & 1 \end{pmatrix}\begin{pmatrix} 1 & 0 \\ 0 & 5^{1/2} \end{pmatrix}\begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix} = \frac{1}{2}\begin{pmatrix} 1 + 5^{1/2} & -1 + 5^{1/2} \\ -1 + 5^{1/2} & 1 + 5^{1/2} \end{pmatrix}.$$
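The matrix A itself is on a page missing from this excerpt, but the displayed S and Λ determine it: A = S diag(1, 5) S^T = [[3, 2], [2, 3]]. A quick numerical check of the square root (numpy assumed, not part of the text):

```python
import numpy as np

S = np.array([[1., 1.],
              [-1., 1.]]) / np.sqrt(2)
Lam = np.diag([1., 5.])
A = S @ Lam @ S.T          # S is orthogonal, so S^{-1} = S^T

assert np.allclose(A, [[3., 2.], [2., 3.]])

# A^(1/2) = S diag(1, sqrt(5)) S^T squares back to A.
sqrtA = S @ np.diag([1., np.sqrt(5.)]) @ S.T
assert np.allclose(sqrtA @ sqrtA, A)

# The closed form for A^(1/2) from the text.
r5 = np.sqrt(5.)
assert np.allclose(sqrtA, 0.5 * np.array([[1 + r5, -1 + r5],
                                          [-1 + r5, 1 + r5]]))
```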
and
$$\Lambda = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 2 \end{pmatrix}.$$
Hence
$$S^{-1} = \begin{pmatrix} 1 & 0 & 0 & -1 \\ 0 & 1 & -1 & -1 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$
and
$$A^4 = S\Lambda^4 S^{-1} = \begin{pmatrix} 1 & 0 & 0 & 1 \\ 0 & 1 & 1 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 16 & 0 \\ 0 & 0 & 0 & 16 \end{pmatrix}\begin{pmatrix} 1 & 0 & 0 & -1 \\ 0 & 1 & -1 & -1 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 & 15 \\ 0 & 1 & 15 & 15 \\ 0 & 0 & 16 & 0 \\ 0 & 0 & 0 & 16 \end{pmatrix}.$$
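The matrix product above is easy to double-check numerically; a minimal sketch (numpy assumed, not part of the text):

```python
import numpy as np

S = np.array([[1, 0, 0, 1],
              [0, 1, 1, 1],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
S_inv = np.array([[1, 0, 0, -1],
                  [0, 1, -1, -1],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
Lam = np.diag([1., 1., 2., 2.])

assert np.allclose(S @ S_inv, np.eye(4))   # S_inv really inverts S

A = S @ Lam @ S_inv
A4 = np.linalg.matrix_power(A, 4)
expected = np.array([[1, 0, 0, 15],
                     [0, 1, 15, 15],
                     [0, 0, 16, 0],
                     [0, 0, 0, 16]], dtype=float)
assert np.allclose(A4, expected)
assert np.allclose(S @ np.diag([1., 1., 16., 16.]) @ S_inv, expected)
```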
7.2.10. Substituting the expressions (7.91) and (7.92) into the initial conditions (7.93) and (7.94), we obtain c_{11} + c_{12} = 0 and c_{21} + c_{22} = Q. Now, substituting the expressions (7.91) and (7.92) into the differential equations (7.83) and (7.84), and considering the above relations, we get
$$c_{11}\left(\lambda_1 e^{\lambda_1 t} - \lambda_2 e^{\lambda_2 t}\right) = -\frac{R}{L}\,c_{11}\left(e^{\lambda_1 t} - e^{\lambda_2 t}\right) - \frac{1}{LC}\,c_{21}\left(e^{\lambda_1 t} - e^{\lambda_2 t}\right) - \frac{Q}{LC}\,e^{\lambda_2 t}$$
and
$$c_{21}\left(\lambda_1 e^{\lambda_1 t} - \lambda_2 e^{\lambda_2 t}\right) + Q\lambda_2 e^{\lambda_2 t} = c_{11}\left(e^{\lambda_1 t} - e^{\lambda_2 t}\right).$$
Setting here t = 0 results in
$$c_{11}(\lambda_1 - \lambda_2) = -\frac{Q}{LC}$$
and
$$c_{21}(\lambda_1 - \lambda_2) = -Q\lambda_2.$$
These equations in conjunction with Equations (7.89) and (7.90) yield the desired formulas for the c_{ij}.
7.2.12.
a. From
$$u(t) = e^{At}u_0 = \left(I + At + \frac{(At)^2}{2!} + \frac{(At)^3}{3!} + \cdots\right)u_0$$
we get
$$\frac{du(t)}{dt} = \left(O + A + A^2\,\frac{2t}{2!} + A^3\,\frac{3t^2}{3!} + \cdots\right)u_0 = A\left(I + At + \frac{(At)^2}{2!} + \frac{(At)^3}{3!} + \cdots\right)u_0 = Au.$$
Similarly,
$$u(0) = e^{A0}u_0 = \left(I + A0 + \frac{(A0)^2}{2!} + \frac{(A0)^3}{3!} + \cdots\right)u_0 = Iu_0 = u_0.$$
b.
$$e^{At} = e^{S\Lambda S^{-1}t} = I + S\Lambda S^{-1}t + \frac{(S\Lambda S^{-1}t)^2}{2!} + \frac{(S\Lambda S^{-1}t)^3}{3!} + \cdots = S\left(I + \Lambda t + \frac{(\Lambda t)^2}{2!} + \frac{(\Lambda t)^3}{3!} + \cdots\right)S^{-1} = Se^{\Lambda t}S^{-1},$$
and, by Equation (7.51),
$$\left(e^{\Lambda t}\right)_{ii} = \left(I + \Lambda t + \frac{(\Lambda t)^2}{2!} + \frac{(\Lambda t)^3}{3!} + \cdots\right)_{ii} = e^{\lambda_i t}$$
and (e^{Λt})_{ij} = 0 if i ≠ j. Thus
$$e^{\Lambda t} = \begin{pmatrix} e^{\lambda_1 t} & 0 & \cdots & 0 \\ 0 & e^{\lambda_2 t} & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & e^{\lambda_n t} \end{pmatrix}.$$
Writing s^{kl} for the kl component of S^{-1} and u_{0l} for the l component of u_0, we obtain
$$\left(Se^{\Lambda t}S^{-1}u_0\right)_j = \sum_k e^{\lambda_k t}\,s_{jk}\left(\sum_l s^{kl}u_{0l}\right).$$
Letting c_k denote the vector that has s_{jk} times the sum in parentheses on the right above as its jth component results in the desired expression
$$u(t) = Se^{\Lambda t}S^{-1}u_0 = \sum_{k=1}^n e^{\lambda_k t}\,c_k.$$
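The identity e^{At} = S e^{Λt} S^{-1} can be checked against a truncated power series; a minimal sketch (numpy assumed, not part of the text), using the 2×2 matrix from part c as a test case:

```python
import numpy as np

A = np.array([[-5., -4.],
              [1., 0.]])
u0 = np.array([0., 10.])
t = 0.3

lam, S = np.linalg.eig(A)                 # A = S Lam S^{-1}
expAt = S @ np.diag(np.exp(lam * t)) @ np.linalg.inv(S)

# Compare with the truncated series I + At + (At)^2/2! + ...
series = np.zeros_like(A)
term = np.eye(2)
for k in range(1, 30):
    series = series + term
    term = term @ (A * t) / k

assert np.allclose(expAt, series)
assert np.allclose(expAt @ u0, series @ u0)
```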
c. From the given data and Equation (7.85), the matrix of the system is
$$A = \begin{pmatrix} -5 & -4 \\ 1 & 0 \end{pmatrix},$$
with eigenvalues λ_1 = −4 and λ_2 = −1, eigenvector matrix
$$S = \begin{pmatrix} -4 & 1 \\ 1 & -1 \end{pmatrix},$$
and inverse
$$S^{-1} = -\frac{1}{3}\begin{pmatrix} 1 & 1 \\ 1 & 4 \end{pmatrix}.$$
Thus,
$$u(t) = Se^{\Lambda t}S^{-1}u_0 = -\frac{1}{3}\begin{pmatrix} -4 & 1 \\ 1 & -1 \end{pmatrix}\begin{pmatrix} e^{-4t} & 0 \\ 0 & e^{-t} \end{pmatrix}\begin{pmatrix} 1 & 1 \\ 1 & 4 \end{pmatrix}\begin{pmatrix} 0 \\ 10 \end{pmatrix} = \frac{1}{3}\begin{pmatrix} 40\,(e^{-4t} - e^{-t}) \\ 10\,(4e^{-t} - e^{-4t}) \end{pmatrix},$$
with the first component giving i(t) and the second one q(t).
Plots (figures omitted): i(t) = (40/3)(e^{-4t} - e^{-t}) and q(t) = (10/3)(4e^{-t} - e^{-4t}).
d. From the given data and Equation (7.85), the matrix of this system is
$$A = \begin{pmatrix} -2 & -1 \\ 1 & 0 \end{pmatrix}.$$
The entries of
$$e^{At} = I + At + \frac{(At)^2}{2!} + \frac{(At)^3}{3!} + \cdots$$
are
$$\left(e^{At}\right)_{11} = 1 - 2t + \frac{3t^2}{2!} - \frac{4t^3}{3!} + \cdots = e^{-t}(1 - t),$$
$$\left(e^{At}\right)_{21} = -\left(e^{At}\right)_{12} = t - t^2 + \frac{t^3}{2!} - \frac{t^4}{3!} + \cdots = te^{-t},$$
$$\left(e^{At}\right)_{22} = 1 - \frac{t^2}{2!} + \frac{2t^3}{3!} - \frac{3t^4}{4!} + \cdots = e^{-t}(1 + t).$$
Thus
$$e^{At} = e^{-t}\begin{pmatrix} 1 - t & -t \\ t & 1 + t \end{pmatrix}$$
and
$$u(t) = e^{At}u_0 = 10e^{-t}\begin{pmatrix} -t \\ 1 + t \end{pmatrix}.$$
Plots (figures omitted): i(t) = -10te^{-t} and q(t) = 10(1 + t)e^{-t}.
$$A = \frac{1}{25}\begin{pmatrix} 7 & 24 \\ 24 & -7 \end{pmatrix}.$$
In the basis S the equation of the ellipsoid takes on the standard form
$$y_1^2 + y_2^2 + \frac{y_3^2}{4} = 1.$$
This is an ellipsoid of revolution, with major axis of half-length 2 pointing in the s_3 = (1/\sqrt{3})(1, 1, 1)^T direction, and a circle of radius 1 as its cross section in the plane of the minor axes.
7.3.6. The matrix of the given quadratic form is
$$A = \begin{pmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix}.$$
The eigenvalues are λ_1 = 2 and λ_2 = λ_3 = −1, and an orthogonal matrix of corresponding eigenvectors is
$$S = \frac{1}{6}\begin{pmatrix} 2\sqrt{3} & 0 & 2\sqrt{6} \\ 2\sqrt{3} & -3\sqrt{2} & -\sqrt{6} \\ 2\sqrt{3} & 3\sqrt{2} & -\sqrt{6} \end{pmatrix}.$$
In the basis S the equation of the surface takes on the standard form
$$2y_1^2 - y_2^2 - y_3^2 = 1.$$
This is a hyperboloid of revolution of two sheets, with its axis pointing in the s_1 = (1/\sqrt{3})(1, 1, 1)^T direction.
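The eigenvalues and the axis direction can be confirmed numerically; a short sketch (numpy assumed, not part of the text):

```python
import numpy as np

A = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])
lam, V = np.linalg.eigh(A)     # eigenvalues in ascending order: -1, -1, 2

assert np.allclose(lam, [-1., -1., 2.])

# One positive and two negative eigenvalues: x^T A x = 1 is a
# hyperboloid of two sheets; its axis is the eigenvector for lambda = 2,
# which is (1, 1, 1)/sqrt(3) up to sign.
axis = V[:, 2]
assert np.allclose(np.abs(axis), np.ones(3) / np.sqrt(3))
```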
7.4.2.
a) x^H = (2 − 4i, 1 + 2i) and y^H = (1 + 5i, 4 − 2i),
b)
$$|x|^2 = x^H x = (2 - 4i,\; 1 + 2i)\begin{pmatrix} 2 + 4i \\ 1 - 2i \end{pmatrix} = 25 \quad\text{and}\quad |x| = 5,$$
$$|y|^2 = y^H y = (1 + 5i,\; 4 - 2i)\begin{pmatrix} 1 - 5i \\ 4 + 2i \end{pmatrix} = 46 \quad\text{and}\quad |y| = \sqrt{46},$$
c)
$$x^H y = (2 - 4i,\; 1 + 2i)\begin{pmatrix} 1 - 5i \\ 4 + 2i \end{pmatrix} = -18 - 14i + 10i = -18 - 4i,$$
$$y^H x = (1 + 5i,\; 4 - 2i)\begin{pmatrix} 2 + 4i \\ 1 - 2i \end{pmatrix} = -18 + 14i - 10i = -18 + 4i.$$
7.4.4.
a) x^H = (2, −2i, 1 − i) and y^H = (−5i, 4 − i, 4 + i),
b)
$$|x|^2 = x^H x = (2,\; -2i,\; 1 - i)\begin{pmatrix} 2 \\ 2i \\ 1 + i \end{pmatrix} = 10 \quad\text{and}\quad |x| = \sqrt{10},$$
$$|y|^2 = y^H y = (-5i,\; 4 - i,\; 4 + i)\begin{pmatrix} 5i \\ 4 + i \\ 4 - i \end{pmatrix} = 59 \quad\text{and}\quad |y| = \sqrt{59},$$
c)
$$x^H y = (2,\; -2i,\; 1 - i)\begin{pmatrix} 5i \\ 4 + i \\ 4 - i \end{pmatrix} = 5 - 3i,$$
$$y^H x = (-5i,\; 4 - i,\; 4 + i)\begin{pmatrix} 2 \\ 2i \\ 1 + i \end{pmatrix} = 5 + 3i.$$
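These Hermitian inner products can be checked numerically; a quick sketch (numpy assumed, not part of the text; note that numpy's `vdot` conjugates its first argument):

```python
import numpy as np

x = np.array([2, 2j, 1 + 1j])
y = np.array([5j, 4 + 1j, 4 - 1j])

xHy = np.vdot(x, y)            # x^H y: vdot conjugates the first argument
yHx = np.vdot(y, x)

assert np.isclose(np.vdot(x, x).real, 10)    # |x|^2
assert np.isclose(np.vdot(y, y).real, 59)    # |y|^2
assert np.isclose(xHy, 5 - 3j)
assert np.isclose(yHx, np.conj(xHy))         # y^H x is the conjugate of x^H y
```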
7.4.6. The characteristic equation is (cos θ − λ)^2 + sin^2 θ = 0. Thus, the eigenvalues are λ = cos θ ± i sin θ = e^{±iθ}. The corresponding eigenvectors are
$$s_1 = s\begin{pmatrix} i \\ 1 \end{pmatrix} \quad\text{and}\quad s_2 = t\begin{pmatrix} 1 \\ i \end{pmatrix}.$$
$$\frac{di(t)}{dt} = \frac{2Q}{LC\sqrt{|D|}}\left(\alpha e^{-\alpha t}\sin\omega t - \omega e^{-\alpha t}\cos\omega t\right) = \frac{QR}{L^2 C\sqrt{|D|}}\,e^{-\alpha t}\sin\omega t - \frac{Q}{LC}\,e^{-\alpha t}\cos\omega t.$$
Similarly, using also |D| = \frac{4}{LC} - \frac{R^2}{L^2}, we find that
$$\frac{dq(t)}{dt} = -\alpha e^{-\alpha t}\left(\frac{QR}{L\sqrt{|D|}}\sin\omega t + Q\cos\omega t\right) + e^{-\alpha t}\left(-Q\omega\sin\omega t + \frac{QR}{L\sqrt{|D|}}\,\omega\cos\omega t\right) = -\frac{2Q}{LC\sqrt{|D|}}\,e^{-\alpha t}\sin\omega t = i(t).$$
$$\begin{pmatrix} 1 & 2 \\ 2 & 4 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 2 & 1 \end{pmatrix}\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}\begin{pmatrix} 1 & 2 \\ 0 & 0 \end{pmatrix}.$$
The machine would then solve this system by back substitution and obtain the wrong solution x_2 = 4 and x_1 = 0. The correct solution is x_2 = 11998/3001 = 3.9980... and x_1 = 6000/6002 = 0.9996..., the same as in Exercise 8.2.1.
The reason for the discrepancy is that in the first step of the back substitu-
tion the machine rounded x2 = 3.998 . . . to 4, and in the next step the machine
had to multiply x2 by 1000 in solving for x1 . Here the small roundoff error,
hidden in taking x2 as 4, got magnified a thousandfold.
8.2.4. The scale factors are s_1 = 4, s_2 = 4 and s_3 = 5, and the ratios r_1 = 1/2, r_2 = 1/4 and r_3 = 1. Since r_3 > r_1 > r_2, we put the third row on top, and then proceed with the row reduction:
$$\begin{pmatrix} 5 & 2 & 1 & 2 \\ 2 & 4 & -2 & 6 \\ 1 & 3 & 4 & -1 \end{pmatrix} \rightarrow \begin{pmatrix} 5 & 2 & 1 & 2 \\ 0 & 16 & -12 & 26 \\ 0 & 13 & 19 & -7 \end{pmatrix}.$$
The new scale factors are s_2 = 16 and s_3 = 19, and the ratios r_2 = 1 and r_3 = 13/19. Thus we leave the rows in place, and reduce further to obtain
$$\rightarrow \begin{pmatrix} 5 & 2 & 1 & 2 \\ 0 & 16 & -12 & 26 \\ 0 & 0 & 46 & -45 \end{pmatrix}.$$
Hence x_3 = −45/46, x_2 = 41/46, and x_1 = 11/46.
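The final solution can be verified against the original system (its rows, before reordering, are inferred from the quoted scale factors and ratios); a minimal sketch (numpy assumed, not part of the text):

```python
import numpy as np

# Inferred original system of Exercise 8.2.4, rows in their original order.
A = np.array([[2., 4., -2.],
              [1., 3., 4.],
              [5., 2., 1.]])
b = np.array([6., -1., 2.])

x = np.linalg.solve(A, b)
assert np.allclose(x, [11/46, 41/46, -45/46])
```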
8.2.6.
a. Since for such matrices, c = |λ_2|/|λ_1| and |λ_2| ≥ |λ_1|, the relation c ≥ 1 follows at once.
b. By the given formula, c = 1 implies λ_2 = ±λ_1 for the two eigenvalues.
c. The characteristic equation is
$$|A - \lambda I| = \begin{vmatrix} 0.001 - \lambda & 1 \\ 1 & 1 - \lambda \end{vmatrix} = 0,$$
or equivalently, (0.001 − λ)(1 − λ) − 1 = 0. The eigenvalues are approximately λ_1 ≈ (1 − \sqrt{5})/2 and λ_2 ≈ (1 + \sqrt{5})/2. Thus c ≈ 2.62.
d. The given formula does not apply in this case, because the matrix is not symmetric. In this case, c is given by the formula c = \sqrt{|\mu_n|/|\mu_1|} with μ_i denoting the eigenvalues of the matrix A^T A. Hence c ≈ 2.62 again.
e. When c = 1, the solution of Ax = b is 100% accurate. For c = 2.62 it can be in serious error on a two-digit machine, but the condition number does not show the advantage resulting from partial pivoting.
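The value c ≈ 2.62 from part c can be reproduced numerically; a short sketch (numpy assumed, not part of the text):

```python
import numpy as np

A = np.array([[0.001, 1.],
              [1., 1.]])

# For a symmetric matrix, the 2-norm condition number equals the ratio
# of the extreme absolute eigenvalues, here roughly (1+sqrt(5))/(sqrt(5)-1).
lam = np.linalg.eigvalsh(A)
c_eig = max(abs(lam)) / min(abs(lam))
c = np.linalg.cond(A, 2)

assert np.isclose(c, c_eig)
assert abs(c - 2.62) < 0.01
```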
8.3.2. The eigenvalues of B = A − cI are 1 − c, 2 − c, and 3 − c. For a given c, the eigenvalue 2 − c would be dominant if and only if |2 − c| > |1 − c| and |2 − c| > |3 − c| were true simultaneously. But this is impossible (diagram omitted): since 2 − c = ((1 − c) + (3 − c))/2, we have |2 − c| ≤ (|1 − c| + |3 − c|)/2 ≤ max(|1 − c|, |3 − c|), so 2 − c can never exceed both of the others in absolute value.
Taking square roots results in the second part of inequality A.22. The first part follows from the second one by observing that
$$|z_1| = |(z_1 - z_2) + z_2| \le |z_1 - z_2| + |z_2|,$$
and thus |z_1| − |z_2| ≤ |z_1 − z_2|. Similarly, |z_2| − |z_1| ≤ |z_2 − z_1|. Hence
$$\big||z_1| - |z_2|\big| \le |z_1 - z_2|.$$
A.2.4. Note first that $\sum_{n=0}^k z_n = \sum_{n=0}^k x_n + i\sum_{n=0}^k y_n$. Suppose that $\sum_{n=0}^\infty z_n$ converges to z = x + iy. Then, since
$$\left|\sum_{n=0}^k x_n - x\right| = \left|\Re\left(\sum_{n=0}^k z_n - z\right)\right| \le \left|\sum_{n=0}^k z_n - z\right|,$$
the real part $\sum_{n=0}^\infty x_n$ converges to x. Similarly, the imaginary part $\sum_{n=0}^\infty y_n$ converges to y.
Conversely, if $\sum_{n=0}^\infty x_n$ converges to x and $\sum_{n=0}^\infty y_n$ converges to y, then the inequality
$$\left|\sum_{n=0}^k z_n - z\right| = \left|\sum_{n=0}^k x_n - x + i\left(\sum_{n=0}^k y_n - y\right)\right| \le \left|\sum_{n=0}^k x_n - x\right| + \left|\sum_{n=0}^k y_n - y\right|$$
implies that $\sum_{n=0}^\infty z_n$ converges to z.
A.2.6. If
$$z = re^{i(\varphi + 2k\pi)},$$
with z ≠ 0, and if w^n = z, where w = Re^{iΦ}, then
$$w^n = R^n e^{in\Phi}.$$
Thus, we must have R^n = r and e^{inΦ} = e^{i(φ + 2kπ)}, from which it follows that R = r^{1/n} and Φ = (φ + 2kπ)/n. Therefore,
$$z^{1/n} = r^{1/n}\,e^{i(\varphi + 2k\pi)/n}.$$
Substituting k = 0, 1, 2, . . . , n − 1 into the last expression results in n distinct roots of z. (Note that k = n leads to the angle Φ = (φ + 2nπ)/n = (φ/n) + 2π, which is equivalent to the angle φ/n corresponding to k = 0, and k = n + 1 leads to an angle equivalent to the earlier angle corresponding to k = 1, etc.)
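The root formula above translates directly into code; a minimal sketch (using the standard cmath module, with the cube roots of 8i as a hypothetical example):

```python
import cmath

def nth_roots(z, n):
    """The n distinct nth roots of a nonzero complex z:
    r^(1/n) * exp(i*(phi + 2*k*pi)/n) for k = 0, ..., n-1."""
    r, phi = cmath.polar(z)
    return [r ** (1.0 / n) * cmath.exp(1j * (phi + 2 * cmath.pi * k) / n)
            for k in range(n)]

roots = nth_roots(8j, 3)
# Each root cubes back to 8i, and the three roots are distinct.
assert all(abs(w ** 3 - 8j) < 1e-9 for w in roots)
assert all(abs(roots[i] - roots[j]) > 1e-9
           for i in range(3) for j in range(i + 1, 3))
```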