SolutionHoffmanAlgebra Chapter3

This document provides solutions to exercises related to linear transformations. It examines which of several functions are linear transformations and finds the range, rank, null space, and nullity of the zero and identity transformations on a vector space. It also analyzes the range and null space of differentiation and integration transformations on polynomials. Further exercises involve determining if specific linear transformations exist based on given conditions and explicitly describing a linear transformation based on its actions on a basis.


Chapter 3: Linear Transformations

Section 3.1: Linear Transformations


Exercise 1: Which of the following functions T from R2 into R2 are linear transformations?

(a) T (x1 , x2 ) = (1 + x1 , x2 );

(b) T (x1 , x2 ) = (x2 , x1 );

(c) T (x1, x2) = (x1^2, x2);

(d) T (x1 , x2 ) = (sin x1 , x2 );

(e) T (x1 , x2 ) = (x1 − x2 , 0).

Solution:

(a) T is not a linear transformation because T (0, 0) = (1, 0) and according to the comments after Example 5 on page 68 we
know that it must always be that T (0, 0) = (0, 0).

(b) T is a linear transformation. Let α = (x1 , x2 ) and β = (y1 , y2 ). Then T (cα+β) = T ((cx1 +y1 , cx2 +y2 )) = (cx2 +y2 , cx1 +y1 ) =
c(x2 , x1 ) + (y2 , y1 ) = cT (α) + T (β).

(c) T is not a linear transformation. If T were a linear transformation then we'd have (1, 0) = T ((−1, 0)) = T (−1 · (1, 0)) = −1 · T (1, 0) = −1 · (1, 0) = (−1, 0), a contradiction since (1, 0) ≠ (−1, 0).

(d) T is not a linear transformation. If T were a linear transformation then (0, 0) = T (π, 0) = T (2 · (π/2, 0)) = 2T ((π/2, 0)) = 2(sin(π/2), 0) = 2(1, 0) = (2, 0), a contradiction since (0, 0) ≠ (2, 0).
" #
1 0
(e) T is a linear transformation. Let Q = . Then (identifying R2 with R1×2 ) T (x1 , x2 ) = [x1 x2 ]Q so from Example
−1 0
4, page 68, (with P being the identity matrix), it follows that T is a linear transformation.
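As a quick sanity check, the defining identity T(cα + β) = cT(α) + T(β) can be tested on sample vectors. This is an illustrative Python sketch of ours (not part of the original solution); passing samples are only evidence, but a single failing sample is a genuine disproof:

```python
import math

# The five candidate maps from Exercise 1.
maps = {
    "a": lambda x1, x2: (1 + x1, x2),
    "b": lambda x1, x2: (x2, x1),
    "c": lambda x1, x2: (x1 ** 2, x2),
    "d": lambda x1, x2: (math.sin(x1), x2),
    "e": lambda x1, x2: (x1 - x2, 0),
}

def is_linear_on_samples(T):
    """Check T(c*alpha + beta) == c*T(alpha) + T(beta) on a few samples."""
    samples = [(2.0, (1.0, 3.0), (-2.0, 0.5)), (-1.0, (1.0, 0.0), (0.0, 0.0))]
    for c, (x1, x2), (y1, y2) in samples:
        lhs = T(c * x1 + y1, c * x2 + y2)
        ta, tb = T(x1, x2), T(y1, y2)
        rhs = (c * ta[0] + tb[0], c * ta[1] + tb[1])
        if not all(abs(l - r) < 1e-9 for l, r in zip(lhs, rhs)):
            return False
    return True

results = {k: is_linear_on_samples(T) for k, T in maps.items()}
```

Maps (b) and (e) pass, while (a), (c), (d) each fail already on the first sample, in agreement with the solution above.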

Exercise 2: Find the range, rank, null space, and nullity for the zero transformation and the identity transformation on a
finite-dimensional vector space V.

Solution: Suppose V has dimension n. The range of the zero transformation is the zero subspace {0}; the range of the identity
transformation is the whole space V. The rank of the zero transformation is the dimension of the range which is zero; the rank
of the identity transformation is the rank of the whole space V which is n. The null space of the zero transformation is the
whole space V; the null space of the identity transformation is the zero subspace {0}. The nullity of the zero transformation is
the dimension of its null space, which is the whole space, so is n; the nullity of the identity transformation is the dimension
of its null space, which is the zero space, so is 0.


Exercise 3: Describe the range and the null space for the differentiation transformation of Example 2. Do the same for the
integration transformation of Example 5.

Solution: V is the space of polynomials. The range of the differentiation transformation is all of V, since if f(x) = c0 + c1 x + · · · + cn x^n then f(x) = (Dg)(x) where g(x) = c0 x + (c1/2) x^2 + (c2/3) x^3 + · · · + (cn/(n + 1)) x^(n+1). The null space of the differentiation transformation is the set of constant polynomials, since (Dc)(x) = 0 for constants c ∈ F.

The range of the integration transformation is all polynomials with constant term equal to zero. Let f(x) = c1 x + c2 x^2 + · · · + cn x^n. Then f(x) = (T g)(x) where g(x) = c1 + 2c2 x + 3c3 x^2 + · · · + n cn x^(n−1). Clearly the integration transformation of any polynomial has constant term equal to zero, so this is the entire range of the integration transformation. The null space of the integration transformation is the zero space {0}, since the (indefinite) integral of any non-zero polynomial is non-zero.

Exercise 4: Is there a linear transformation T from R3 into R2 such that T (1, −1, 1) = (1, 0) and T (1, 1, 1) = (0, 1)?

Solution: Yes, there is such a linear transformation. Clearly α1 = (1, −1, 1) and α2 = (1, 1, 1) are linearly independent. By Corollary 2, page 46, there exists a third vector α3 such that {α1, α2, α3} is a basis for R3. By Theorem 1, page 69, there is a linear transformation that takes α1, α2, α3 to any three vectors we want. Therefore we can find a linear transformation that takes α1 ↦ (1, 0), α2 ↦ (0, 1) and α3 ↦ (0, 0). (We could have used any vector instead of (0, 0).)

Exercise 5: If
α1 = (1, −1), β1 = (1, 0)
α2 = (2, −1), β2 = (0, 1)
α3 = (−3, 2), β3 = (1, 1)
is there a linear transformation T from R2 to R2 such that T αi = βi for i = 1, 2 and 3?

Solution: No, there is no such transformation. If there were, then since {α1, α2} is a basis for R2, the images of α1 and α2 would determine T completely. Now α3 = −α1 − α2, so it would have to be that T(α3) = T(−α1 − α2) = −T(α1) − T(α2) = −(1, 0) − (0, 1) = (−1, −1) ≠ (1, 1) = β3. Thus no such T can exist.
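The dependence α3 = −α1 − α2 and the forced value of T(α3) can be confirmed numerically (an illustrative Python sketch of ours):

```python
a1, a2, a3 = (1, -1), (2, -1), (-3, 2)
b1, b2, b3 = (1, 0), (0, 1), (1, 1)

# alpha3 is forced to be -alpha1 - alpha2 ...
assert a3 == (-a1[0] - a2[0], -a1[1] - a2[1])

# ... so linearity forces T(alpha3) = -beta1 - beta2, which is not beta3.
forced = (-b1[0] - b2[0], -b1[1] - b2[1])
assert forced == (-1, -1) and forced != b3
```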

Exercise 6: Describe explicitly (as in Exercises 1 and 2) the linear transformation T from F2 into F2 such that Tε1 = (a, b), Tε2 = (c, d).

Solution: I'm not 100% sure I understand what they want here. Let A be the matrix

    A = [ a  b ]
        [ c  d ]

Then the range of T is the row-space of A, which can have dimension 0, 1, or 2 depending on the row-rank. Explicitly it is all vectors of the form x(a, b) + y(c, d) = (ax + cy, bx + dy), where x, y are arbitrary elements of F; in other words, T(x, y) = (ax + cy, bx + dy). The rank of T is the dimension of this row-space: it is 0 if a = b = c = d = 0, and if not all of a, b, c, d are zero then, by Exercise 1.6.8, page 27, the rank is 2 if ad − bc ≠ 0 and equals 1 if ad − bc = 0.

Now let A′ be the matrix

    A′ = [ a  c ]
         [ b  d ]

Then the null space of T is the solution space of A′X = 0. Thus the nullity is 2 if a = b = c = d = 0, and if not all of a, b, c, d are zero then, by Exercise 1.6.8, page 27 and Theorem 13, page 23, the nullity is 0 if ad − bc ≠ 0 and 1 if ad − bc = 0.

Exercise 7: Let F be a subfield of the complex numbers and let T be the function from F 3 into F 3 defined by

T (x1 , x2 , x3 ) = (x1 − x2 + 2x3 , 2x1 + x2 , −x1 − 2x2 + 2x3 ).

(a) Verify that T is a linear transformation.


(b) If (a, b, c) is a vector in F 3 , what are the conditions on a, b, and c that the vector be in the range of T ? What is the rank
of T ?

(c) What are the conditions on a, b, and c that (a, b, c) be in the null space of T ? What is the nullity of T ?

Solution: (a) Let

    P = [  1  −1  2 ]
        [  2   1  0 ]
        [ −1  −2  2 ]

Then T can be represented by X ↦ PX, where X is the column vector with entries x1, x2, x3 (identifying F3 with F3×1). By Example 4, page 68, this is a linear transformation, taking Q in Example 4 to be the identity matrix.

(b) The range of T is the column space of P, or equivalently the row space of

    P^T = [  1  2  −1 ]
          [ −1  1  −2 ]
          [  2  0   2 ]

We row reduce this matrix as follows:

    → [ 1   2  −1 ]
      [ 0   3  −3 ]
      [ 0  −4   4 ]

    → [ 1  2  −1 ]
      [ 0  1  −1 ]
      [ 0  0   0 ]

    → [ 1  0   1 ]
      [ 0  1  −1 ]
      [ 0  0   0 ]

Let ρ1 = (1, 0, 1) and ρ2 = (0, 1, −1). Then the elements of the row space are exactly the vectors b1ρ1 + b2ρ2 = (b1, b2, b1 − b2). Thus the rank of T is two, and (a, b, c) is in the range of T exactly when c = a − b.

Alternatively, we can row reduce the augmented matrix

    [  1  −1  2 | a ]
    [  2   1  0 | b ]
    [ −1  −2  2 | c ]

    → [ 1  −1   2 | a      ]
      [ 0   3  −4 | b − 2a ]
      [ 0  −3   4 | a + c  ]

    → [ 1  −1   2 | a          ]
      [ 0   3  −4 | b − 2a     ]
      [ 0   0   0 | −a + b + c ]

    → [ 1  −1   2    | a          ]
      [ 0   1  −4/3  | (b − 2a)/3 ]
      [ 0   0   0    | −a + b + c ]

    → [ 1   0   2/3  | (a + b)/3  ]
      [ 0   1  −4/3  | (b − 2a)/3 ]
      [ 0   0   0    | −a + b + c ]

from which we arrive at the condition −a + b + c = 0, or equivalently c = a − b.


 
(c) We must find all X = (a, b, c)^T such that PX = 0, where P is the matrix from part (a). We row reduce P:

    [  1  −1  2 ]
    [  2   1  0 ]
    [ −1  −2  2 ]

    → [ 1  −1   2 ]
      [ 0   3  −4 ]
      [ 0  −3   4 ]

    → [ 1  −1   2 ]
      [ 0   3  −4 ]
      [ 0   0   0 ]

    → [ 1  −1   2    ]
      [ 0   1  −4/3  ]
      [ 0   0   0    ]

    → [ 1   0   2/3  ]
      [ 0   1  −4/3  ]
      [ 0   0   0    ]

Therefore

    a + (2/3)c = 0
    b − (4/3)c = 0

So the elements of the null space of T are the vectors (−(2/3)c, (4/3)c, c) for arbitrary c ∈ F, and the dimension of the null space (the nullity) equals one.
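Both answers can be double-checked with exact rational arithmetic. A small illustrative Python sketch (the helper `matvec` is ours, not from the text):

```python
from fractions import Fraction as F

P = [[F(1), F(-1), F(2)],
     [F(2), F(1),  F(0)],
     [F(-1), F(-2), F(2)]]

def matvec(M, v):
    """Multiply a 3x3 matrix by a length-3 column vector."""
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

# Nullity claim: (-2c/3, 4c/3, c) is killed by P; take c = 3 to clear fractions.
assert matvec(P, [F(-2), F(4), F(3)]) == [F(0), F(0), F(0)]

# Range condition c = a - b: every image (a, b, c) = PX satisfies it,
# because the third row of P is the first row minus the second.
assert P[2] == [P[0][j] - P[1][j] for j in range(3)]
```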

Exercise 8: Describe explicitly a linear transformation from R3 to R3 which has as its range the subspace spanned by (1, 0, −1)
and (1, 2, 2).

Solution: By Theorem 1, page 69, (and its proof) there is a linear transformation T from R3 to R3 such that T (1, 0, 0) =
(1, 0, −1), T (0, 1, 0) = (1, 0, −1) and T (0, 0, 1) = (1, 2, 2) and the range of T is exactly the subspace generated by

{T (1, 0, 0), T (0, 1, 0), T (0, 0, 1)} = {(1, 0, −1), (1, 2, 2)}.

Exercise 9: Let V be the vector space of all n × n matrices over the field F, and let B be a fixed n × n matrix. If

T (A) = AB − BA

verify that T is a linear transformation from V into V.

Solution: T (cA1 + A2 ) = (cA1 + A2 )B − B(cA1 + A2 ) = cA1 B + A2 B − cBA1 − BA2

= c(A1 B − BA1 ) + (A2 B − BA2 ) = cT (A1 ) + T (A2 ).



Exercise 10: Let V be the set of all complex numbers regarded as a vector space over the field of real numbers (usual oper-
ations). Find a function from V into V which is a linear transformation on the above vector space, but which is not a linear
transformation on C1 , i.e., which is not complex linear.

Solution: Let T : V → V be given by a + bi ↦ a. Let z = a + bi and w = a′ + b′i and c ∈ R. Then T (cz + w) = T ((ca + a′) + (cb + b′)i) = ca + a′ = cT (z) + T (w). Thus T is real linear. However, if T were complex linear then we would have 0 = T (i) = T (i · 1) = i · T (1) = i · 1 = i. But 0 ≠ i, a contradiction. Thus T is not complex linear.

Exercise 11: Let V be the space of n × 1 matrices over F and let W be the space of m × 1 matrices over F. Let A be a fixed
m × n matrix over F and let T be the linear transformation from V into W defined by T (X) = AX. Prove that T is the zero
transformation if and only if A is the zero matrix.

Solution: If A is the zero matrix then clearly T is the zero transformation. Conversely, suppose A is not the zero matrix, say the k-th column Ak has a non-zero entry. Then T (εk) = Ak ≠ 0, where εk is the k-th standard basis vector of F n×1, so T is not the zero transformation.

Exercise 12: Let V be an n-dimensional vector space over the field F and let T be a linear transformation from V into V such
that the range and null space of T are identical. Prove that n is even. (Can you give an example of such a linear transformation
T ?)

Solution: From Theorem 2, page 71, we know rank(T ) + nullity(T ) = dim V. In this case we are assuming both terms on the
left hand side are equal, say equal to m. Thus m + m = n or equivalently n = 2m which implies n is even.

The simplest example is V = {0} the zero space. Then trivially the range and null space are equal. To give a less trivial
example assume V = R2 and define T by T (1, 0) = (0, 0) and T (0, 1) = (1, 0). We can do this by Theorem 1, page 69 because
{(1, 0), (0, 1)} is a basis for R2 . Then clearly the range and null space are both equal to the subspace of R2 generated by (1, 0).
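For the R2 example, T(x, y) = (y, 0) in coordinates, and one can check directly that the range and null space coincide (an illustrative Python sketch of ours):

```python
def T(x, y):
    # T(1,0) = (0,0), T(0,1) = (1,0)  =>  T(x,y) = (y, 0)
    return (y, 0)

# Range: all vectors (y, 0) -- the line spanned by (1, 0).
# Null space: T(x, y) = 0 iff y = 0 -- the same line.
assert T(1, 0) == (0, 0) and T(0, 1) == (1, 0)
assert T(3, 0) == (0, 0)          # (3, 0) is in the null space
assert T(0, 3) == (3, 0)          # (3, 0) is also in the range
# T^2 = 0 here, consistent with range contained in null space.
assert T(*T(5, 7)) == (0, 0)
```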

Exercise 13: Let V be a vector space and T a linear transformation from V into V. Prove that the following two statements
about T are equivalent.
(a) The intersection of the range of T and the null space of T is the zero subspace of V.
(b) If T (T α) = 0, then T α = 0.
Solution: (a) ⇒ (b): Statement (a) says that nothing in the range gets mapped to zero except for 0. In other words if x is in
the range of T then T x = 0 ⇒ x = 0. Now T α is in the range of T , thus T (T α) = 0 ⇒ T α = 0.

(b) ⇒ (a): Suppose x is in both the range and null space of T . Since x is in the range, x = T α for some α. But then x in the
null space of T implies T (x) = 0 which implies T (T α) = 0. Thus statement (b) implies T α = 0 or equivalently x = 0. Thus
the only thing in both the range and null space of T is the zero vector 0.

Section 3.2: The Algebra of Linear Transformations


Page 76: Typo in line 1: It says Ai j , . . . , Am j , it should say A1 j , . . . , Am j .

Exercise 1: Let T and U be the linear operators on R2 defined by

T (x1 , x2 ) = (x2 , x1 ) and U(x1 , x2 ) = (x1 , 0).

(a) How would you describe T and U geometrically?


(b) Give rules like the ones defining T and U for each of the transformations (U + T ), UT , T U, T 2 , U 2 .

Solution: (a) Geometrically, in the x−y plane, T is the reflection about the diagonal x = y and U is a projection onto the x-axis.

(b)
• (U + T )(x1 , x2 ) = (x2 , x1 ) + (x1 , 0) = (x1 + x2 , x1 ).
• (UT )(x1 , x2 ) = U(x2 , x1 ) = (x2 , 0).
• (T U)(x1 , x2 ) = T (x1 , 0) = (0, x1 ).
• T 2 (x1 , x2 ) = T (x2 , x1 ) = (x1 , x2 ), the identity function.
• U 2 (x1 , x2 ) = U(x1 , 0) = (x1 , 0). So U 2 = U.
Exercise 2: Let T be the (unique) linear operator on C3 for which

    Tε1 = (1, 0, i), Tε2 = (0, 1, 1), Tε3 = (i, 1, 0).

Is T invertible?

Solution: By Theorem 9 part (v), top of page 82, T is invertible if {Tε1, Tε2, Tε3} is a basis of C3. Since C3 has dimension three, it suffices (by Corollary 1, page 46) to show Tε1, Tε2, Tε3 are linearly independent. To do this we row reduce the matrix

    [ 1  0  i ]
    [ 0  1  1 ]
    [ i  1  0 ]

to row-reduced echelon form. If it reduces to the identity then its rows are independent; otherwise they are dependent. Row reduction follows:

    [ 1  0  i ]      [ 1  0  i ]      [ 1  0  i ]
    [ 0  1  1 ]  →   [ 0  1  1 ]  →   [ 0  1  1 ]
    [ i  1  0 ]      [ 0  1  1 ]      [ 0  0  0 ]

This is in row-reduced echelon form and is not the identity. Thus T is not invertible.
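The dependence is visible directly: the third image equals i times the first plus the second. A quick check with Python's built-in complex numbers (our own sketch):

```python
T1, T2, T3 = (1, 0, 1j), (0, 1, 1), (1j, 1, 0)

# T(e3) = i*T(e1) + T(e2), so the three images are linearly dependent
# and cannot form a basis of C^3 -- hence T is not invertible.
assert all(1j * a + b == c for a, b, c in zip(T1, T2, T3))
```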

Exercise 3: Let T be the linear operator on R3 defined by

T (x1 , x2 , x3 ) = (3x1 , x1 − x2 , 2x1 + x2 + x3 ).

Is T invertible? If so, find a rule for T −1 like the one which defines T .

Solution: The matrix representation of the transformation is X ↦ AX, where

    A = [ 3   0  0 ]
        [ 1  −1  0 ]
        [ 2   1  1 ]

and we've identified R3 with R3×1. T is invertible if and only if A is invertible. To determine this we row-reduce A; we row-reduce the augmented matrix [A | I] in order to also find the inverse for the second part of the Exercise.

    [ 3   0  0 | 1  0  0 ]
    [ 1  −1  0 | 0  1  0 ]
    [ 2   1  1 | 0  0  1 ]

    → [ 1  −1  0 | 0  1  0 ]
      [ 3   0  0 | 1  0  0 ]
      [ 2   1  1 | 0  0  1 ]

    → [ 1  −1  0 | 0   1  0 ]
      [ 0   3  0 | 1  −3  0 ]
      [ 0   3  1 | 0  −2  1 ]

    → [ 1  −1  0 | 0    1   0 ]
      [ 0   1  0 | 1/3  −1  0 ]
      [ 0   3  1 | 0    −2  1 ]

    → [ 1  0  0 | 1/3   0  0 ]
      [ 0  1  0 | 1/3  −1  0 ]
      [ 0  0  1 | −1    1  1 ]

Since the left side transformed into the identity, T is invertible, and the inverse transformation is X ↦ A^-1 X with

    A^-1 = [ 1/3   0  0 ]
           [ 1/3  −1  0 ]
           [ −1    1  1 ]

So
    T^-1 (x1, x2, x3) = (x1/3, x1/3 − x2, −x1 + x2 + x3).
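The computed inverse can be verified with exact arithmetic (an illustrative sketch; `matmul` is our helper name):

```python
from fractions import Fraction as F

A    = [[F(3), F(0), F(0)], [F(1), F(-1), F(0)], [F(2), F(1), F(1)]]
Ainv = [[F(1, 3), F(0), F(0)], [F(1, 3), F(-1), F(0)], [F(-1), F(1), F(1)]]

def matmul(X, Y):
    """Multiply two square matrices given as lists of rows."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

I = [[F(i == j) for j in range(3)] for i in range(3)]
assert matmul(A, Ainv) == I and matmul(Ainv, A) == I
```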
Exercise 4: For the linear operator T of Exercise 3, prove that

(T 2 − I)(T − 3I) = 0.

Solution: Working with the matrix representation of T we must show (A^2 − I)(A − 3I) = 0, where

    A = [ 3   0  0 ]
        [ 1  −1  0 ]
        [ 2   1  1 ]

Calculating:

    A^2 = [ 3   0  0 ] [ 3   0  0 ]   [ 9  0  0 ]
          [ 1  −1  0 ] [ 1  −1  0 ] = [ 2  1  0 ]
          [ 2   1  1 ] [ 2   1  1 ]   [ 9  0  1 ]

Thus

    A^2 − I = [ 8  0  0 ]        A − 3I = [ 0   0   0 ]
              [ 2  0  0 ]                 [ 1  −4   0 ]
              [ 9  0  0 ]                 [ 2   1  −2 ]

and therefore

    (A^2 − I)(A − 3I) = [ 8  0  0 ] [ 0   0   0 ]   [ 0  0  0 ]
                        [ 2  0  0 ] [ 1  −4   0 ] = [ 0  0  0 ]
                        [ 9  0  0 ] [ 2   1  −2 ]   [ 0  0  0 ]
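A mechanical confirmation of the identity using integer matrices (an illustrative sketch of ours):

```python
def matmul(X, Y):
    """Multiply two square matrices given as lists of rows."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def sub(X, Y):
    """Entry-wise matrix subtraction."""
    return [[a - b for a, b in zip(rx, ry)] for rx, ry in zip(X, Y)]

A = [[3, 0, 0], [1, -1, 0], [2, 1, 1]]
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
threeI = [[3 * e for e in row] for row in I]

lhs = matmul(sub(matmul(A, A), I), sub(A, threeI))
assert lhs == [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
```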

Exercise 5: Let C2×2 be the complex vector space of 2 × 2 matrices with complex entries. Let

    B = [  1  −1 ]
        [ −4   4 ]

and let T be the linear operator on C2×2 defined by T (A) = BA. What is the rank of T? Can you describe T^2?

Solution: An (ordered) basis for C2×2 is given by

    A11 = [ 1 0 ]    A21 = [ 0 0 ]    A12 = [ 0 1 ]    A22 = [ 0 0 ]
          [ 0 0 ]          [ 1 0 ]          [ 0 0 ]          [ 0 1 ]

If we identify C2×2 with C4 by

    [ a  b ]
    [ c  d ]  ↦  (a, b, c, d)

then since

    A11 ↦ A11 − 4A21
    A21 ↦ −A11 + 4A21
    A12 ↦ A12 − 4A22
    A22 ↦ −A12 + 4A22

the matrix of the transformation is given by

    [  1  −4   0   0 ]
    [ −1   4   0   0 ]
    [  0   0   1  −4 ]
    [  0   0  −1   4 ]

To find the rank of T we row-reduce this matrix:

    → [ 1  −4  0   0 ]
      [ 0   0  1  −4 ]
      [ 0   0  0   0 ]
      [ 0   0  0   0 ]

This has rank two, so the rank of T is 2.

T^2(A) = T (T (A)) = T (BA) = B(BA) = B^2 A. Thus T^2 is given by matrix multiplication just as T is, but by B^2 instead of B. Explicitly,

    B^2 = [  1  −1 ] [  1  −1 ]   [   5   −5 ]
          [ −4   4 ] [ −4   4 ] = [ −20   20 ]
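Both claims, that T has rank 2 and that T^2 is multiplication by B^2, can be spot-checked (an illustrative Python sketch of ours):

```python
def matmul2(X, Y):
    """Multiply two 2x2 matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

B = [[1, -1], [-4, 4]]
B2 = matmul2(B, B)
assert B2 == [[5, -5], [-20, 20]]

# T(A) = BA, so T^2(A) = B(BA) = B^2 A for any A; spot-check one A:
A = [[2, 3], [5, 7]]
assert matmul2(B, matmul2(B, A)) == matmul2(B2, A)

# Rank of T: in the 4x4 matrix of T, row 2 = -row 1 and row 4 = -row 3,
# leaving exactly two independent rows.
M = [[1, -4, 0, 0], [-1, 4, 0, 0], [0, 0, 1, -4], [0, 0, -1, 4]]
assert M[1] == [-x for x in M[0]] and M[3] == [-x for x in M[2]]
```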
Exercise 6: Let T be a linear transformation from R3 into R2 , and let U be a linear transformation from R2 into R3 . Prove
that the transformation UT is not invertible. Generalize the theorem.

Solution: Let {α1, α2, α3} be a basis for R3. Then T(α1), T(α2), T(α3) must be linearly dependent in R2, because R2 has dimension 2. So suppose b1T(α1) + b2T(α2) + b3T(α3) = 0 where not all of b1, b2, b3 are zero. Then b1α1 + b2α2 + b3α3 ≠ 0 and

    UT(b1α1 + b2α2 + b3α3)
    = U(T(b1α1 + b2α2 + b3α3))
    = U(b1T(α1) + b2T(α2) + b3T(α3))
    = U(0) = 0.

Thus (by the definition at the bottom of page 79) UT is not non-singular, and so by Theorem 9, page 81, UT is not invertible.

The obvious generalization is that if n > m and T : Rn → Rm and U : Rm → Rn are linear transformations, then UT is not invertible. The proof is an immediate generalization of the proof of the special case above; just replace α1, α2, α3 with α1, . . . , αn.

Exercise 7: Find two linear operators T and U on R2 such that TU = 0 but UT ≠ 0.

Solution: Identify R2 with R2×1 and let T and U be given by the matrices

    A = [ 0  1 ]        B = [ 1  0 ]
        [ 0  0 ]            [ 0  0 ]

More precisely, for X = (x, y)^T, let T be given by X ↦ AX and let U be given by X ↦ BX. Then TU is given by X ↦ ABX and UT is given by X ↦ BAX. But AB = 0 while BA = A ≠ 0, so we have the desired example.
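Such a pair is easy to verify by direct multiplication. In this illustrative sketch (our own), A plays the role of the matrix of T and B that of U:

```python
def matmul2(X, Y):
    """Multiply two 2x2 matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[0, 1], [0, 0]]   # matrix of T
B = [[1, 0], [0, 0]]   # matrix of U

Z = [[0, 0], [0, 0]]
assert matmul2(A, B) == Z      # TU = 0
assert matmul2(B, A) != Z      # UT != 0  (in fact BA = A)
```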

Exercise 8: Let V be a vector space over the field F and T a linear operator on V. If T^2 = 0, what can you say about the relation of the range of T to the null space of T? Give an example of a linear operator T on R2 such that T^2 = 0 but T ≠ 0.

Solution: If T^2 = 0 then the range of T must be contained in the null space of T: if y is in the range of T then y = Tx for some x, so Ty = T(Tx) = T^2 x = 0, and thus y is in the null space of T.

To give an example of an operator with T^2 = 0 but T ≠ 0, let V = R2×1 and let T be given by the matrix

    A = [ 0  1 ]
        [ 0  0 ]

Specifically, for X = (x, y)^T, let T be given by X ↦ AX. Since A ≠ 0, T ≠ 0. Now T^2 is given by X ↦ A^2 X, but A^2 = 0. Thus T^2 = 0.

Exercise 9: Let T be a linear operator on the finite-dimensional space V. Suppose there is a linear operator U on V such
that T U = I. Prove that T is invertible and U = T −1 . Give an example which shows that this is false when V is not finite-
dimensional. (Hint: Let T = D, be the differentiation operator on the space of polynomial functions.)

Solution: By the comments in the Appendix on functions, at the bottom of page 389, since TU = I as functions, necessarily T is onto and U is one-to-one. It then follows immediately from Theorem 9, page 81, that T is invertible. Now TT^-1 = I = TU, and multiplying on the left by T^-1 gives T^-1 TT^-1 = T^-1 TU, which implies (I)T^-1 = (I)U and thus U = T^-1.

For the infinite-dimensional counterexample, let V be the space of polynomial functions in one variable over R. Let T = D be the differentiation operator and let U be the integration operator (U f)(x) = ∫0^x f(t) dt (cf. Example 11, page 80). Then TU = I, since differentiating this integral recovers f; but UT ≠ I, since UT sends the constant polynomial 1 to 0. Here T is not invertible, since it is not one-to-one (it kills all constants). Thus this example fulfills the requirement.
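The one-sided inverse can be checked on coefficient lists. In this illustrative sketch (our names `D` and `U`, representing a polynomial c0 + c1x + · · · by the list [c0, c1, . . .]):

```python
from fractions import Fraction

def D(p):
    """Differentiation on coefficient lists [c0, c1, ...]."""
    return [Fraction(k) * c for k, c in enumerate(p)][1:] or [Fraction(0)]

def U(p):
    """Integration from 0: c_k x^k -> c_k/(k+1) x^(k+1), zero constant term."""
    return [Fraction(0)] + [c / (k + 1) for k, c in enumerate(p)]

f = [Fraction(c) for c in (4, 1, 3)]   # 4 + x + 3x^2
assert D(U(f)) == f                    # TU = I on this sample
assert U(D(f)) != f                    # UT kills the constant term
assert U(D(f)) == [Fraction(0), Fraction(1), Fraction(3)]
```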

Exercise 10: Let A be an m × n matrix with entries in F and let T be the linear transformation from F n×1 into F m×1 defined
by T X = AX. Show that if m < n it may happen that T is onto without being non-singular. Similarly, show that if m > n we
may have T non-singular but not onto.

Solution: Let B = {α1 , . . . , αn } be a basis for F n×1 and let B0 = {β1 , . . . , βm } be a basis for F m×1 . We can define a linear
transformation from F n×1 to F m×1 uniquely by specifying where each member of B goes in F m×1 . If m < n then we can define
a linear transformation that maps at least one member of B to each member of B0 and maps at least two members of B to the
same member of B0 . Any linear transformation so defined must necessarily be onto without being one-to-one. Similarly, if
m > n then we can map each member of B to a unique member of B0 with at least one member of B0 not mapped to by any
member of B. Any such transformation so defined will necessarily be one-to-one but not onto.
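Concrete matrices witnessing both phenomena (our own illustrative choices, not from the original solution):

```python
def apply(A, X):
    """Apply the m x n matrix A to the length-n vector X."""
    return [sum(a * x for a, x in zip(row, X)) for row in A]

# m = 1 < n = 2: T(x1, x2) = x1 is onto F^1 but not one-to-one.
A = [[1, 0]]
assert apply(A, [5, 9]) == [5]          # every scalar is hit
assert apply(A, [0, 1]) == [0]          # non-zero vector in the null space

# m = 2 > n = 1: T(x) = (x, 0) is one-to-one but misses (0, 1).
B = [[1], [0]]
assert apply(B, [7]) == [7, 0]          # image always has second entry 0
```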

Exercise 11: Let V be a finite-dimensional vector space and let T be a linear operator on V. Suppose that rank(T 2 ) = rank(T ).
Prove that the range and null space of T are disjoint, i.e., have only the zero vector in common.

Solution: Let {α1 , . . . , αn } be a basis for V. Then the rank of T is the number of linearly independent vectors in the set
{T α1 , . . . , T αn }. Suppose the rank of T equals k and suppose WLOG that {T α1 , . . . , T αk } is a linearly independent set (it might
be that k = 1, pardon the notation). Then {T α1 , . . . , T αk } give a basis for the range of T . It follows that {T 2 α1 , . . . , T 2 αk } span
the range of T 2 and since the dimension of the range of T 2 is also equal to k, {T 2 α1 , . . . , T 2 αk } must be a basis for the range
of T 2 . Now suppose v is in the range of T . Then v = c1 T α1 + · · · + ck T αk . Suppose v is also in the null space of T . Then
0 = T (v) = T (c1 T α1 + · · · + ck T αk ) = c1 T 2 α1 + · · · + ck T 2 αk . But {T 2 α1 , . . . , T 2 αk } is a basis, so T 2 α1 , . . . , T 2 αk are linearly
independent, thus it must be that c1 = · · · = ck = 0, which implies v = 0. Thus we have shown that if v is in both the range of
T and the null space of T then v = 0, as required.

Exercise 12: Let p, m, and n be positive integers and F a field. Let V be the space of m × n matrices over F and W the space
of p × n matrices over F. Let B be a fixed p × m matrix and let T be the linear transformation from V into W defined by
T (A) = BA. Prove that T is invertible if and only if p = m and B is an invertible m × m matrix.

Solution: We showed in Exercise 2.3.12, page 49, that the dimension of V is mn and the dimension of W is pn. By Theorem 9 (iv), page 81, an invertible linear transformation must take a basis to a basis, so if there is an invertible linear transformation between V and W then both spaces have the same dimension. Thus if T is invertible then pn = mn, which implies p = m. The matrix B must then be invertible: if it were not, then by Theorem 13, page 23, there would be a column vector X ≠ 0 with BX = 0; letting A be the m × n matrix each of whose columns equals X gives A ≠ 0 with T(A) = BA = 0, contradicting that T is one-to-one (Theorem 9 (ii), page 81). Conversely, if p = m and B is an invertible m × m matrix, then T^-1(A) = B^-1 A defines the inverse transformation, so T is invertible.

Section 3.3: Isomorphism


Exercise 1: Let V be the set of complex numbers and let F be the field of real numbers. With the usual operations, V is a
vector space over F. Describe explicitly an isomorphism of this space onto R2 .

Solution: The natural isomorphism from V to R2 is given by a + bi 7→ (a, b). Since i acts like a placeholder for addition in C,
(a + bi) + (c + di) = (a + c) + (b + d)i 7→ (a + c, b + d) = (a, b) + (c, d). And c(a + bi) = ca + cbi 7→ (ca, cb) = c(a, b). Thus this
is a linear transformation. The inverse is clearly (a, b) 7→ a + bi. Thus the two spaces are isomorphic as vector spaces over R.

Exercise 2: Let V be a vector space over the field of complex numbers, and suppose there is an isomorphism T of V onto C3. Let α1, α2, α3, α4 be vectors in V such that

    Tα1 = (1, 0, i), Tα2 = (−2, 1 + i, 0),



    Tα3 = (−1, 1, 1), Tα4 = (√2, i, 3).

(a) Is α1 in the subspace spanned by α2 and α3 ?

(b) Let W1 be the subspace spanned by α1 and α2 , and let W2 be the subspace spanned by α3 and α4 . What is the intersection
of W1 and W2 ?

(c) Find a basis for the subspace of V spanned by the four vectors α j .

Solution: (a) Since T is an isomorphism, it suffices to determine whether Tα1 is contained in the subspace spanned by Tα2 and Tα3. In other words we need to determine if there is a solution to

    [ −2     −1 ] [ x ]   [ 1 ]
    [ 1 + i   1 ] [ y ] = [ 0 ]
    [ 0       1 ]         [ i ]

To do this we row-reduce the augmented matrix

    [ −2     −1 | 1 ]
    [ 1 + i   1 | 0 ]
    [ 0       1 | i ]

    → [ 1       1/2 | −1/2 ]
      [ 1 + i   1   | 0    ]
      [ 0       1   | i    ]

    → [ 1   1/2         | −1/2      ]
      [ 0   (1 − i)/2   | (1 + i)/2 ]
      [ 0   1           | i         ]

    → [ 1   0 | (−1 − i)/2 ]
      [ 0   1 | i          ]
      [ 0   0 | 0          ]

(The second row becomes [0 1 | i] after multiplying by 2/(1 − i) = 1 + i, so it agrees with the third row, which produces the zero row.) The zero row on the left of the dividing line has a zero on the right as well. This means the system has a solution, namely x = (−1 − i)/2, y = i. Therefore α1 is in the subspace generated by α2 and α3.

(b) Since Tα1 and Tα2 are linearly independent, and Tα3 and Tα4 are linearly independent, dim(W1) = dim(W2) = 2. We row-reduce the matrix A whose columns are the Tαi:

    A = [ 1  −2      −1  √2 ]
        [ 0  1 + i    1  i  ]
        [ i  0        1  3  ]

which yields the row-reduced echelon form

    R = [ 1  0  −i          0 ]
        [ 0  1  (1 − i)/2   0 ]
        [ 0  0  0           1 ]

from which we deduce that Tα1, Tα2, Tα3, Tα4 generate a space of dimension three; thus dim(W1 + W2) = 3. Since dim(W1) = dim(W2) = 2, it follows from Theorem 6, page 46, that dim(W1 ∩ W2) = 1. Now AX = 0 ⇔ RX = 0, since R = PA with P invertible (multiply both sides of AX = 0 on the left by P). Solving RX = 0, the general solution has the form (ic, ((i − 1)/2)c, c, 0). Letting c = 2 gives

    2iTα1 + (i − 1)Tα2 + 2Tα3 = 0,

which implies Tα3 = −iTα1 + ((1 − i)/2)Tα2, so Tα3 ∈ TW1 and thus α3 ∈ W1. Hence α3 ∈ W1 ∩ W2, and since dim(W1 ∩ W2) = 1 it follows that W1 ∩ W2 = Cα3.

(c) We determined in part (b) that α1, α2, α3, α4 span a space of dimension three, and that α3 lies in the space generated by α1 and α2. Thus {α1, α2, α4} is a basis for the subspace spanned by {α1, α2, α3, α4}, which (under the isomorphism T) corresponds to all of C3.

Exercise 3: Let W be the set of all 2 × 2 complex Hermitian matrices, that is, the set of 2 × 2 complex matrices A such that Aij is the complex conjugate of Aji. As we pointed out in Example 6 of Chapter 2, W is a vector space over the field of real numbers, under the usual operations. Verify that

    (x, y, z, t) ↦ [ t + x    y + iz ]
                   [ y − iz   t − x  ]

is an isomorphism of R4 onto W.

Solution: The function is linear since the four entries of the image matrix are linear combinations of the components of the domain vector (x, y, z, t). Identify C2×2 with C4 by A ↦ (A11, A12, A21, A22). Then the matrix of the transformation is given by

    [  1  0   0  1 ]
    [  0  1   i  0 ]
    [  0  1  −i  0 ]
    [ −1  0   0  1 ]

As usual, the transformation is an isomorphism if this matrix is invertible. We row-reduce the augmented matrix in order to verify invertibility and to find the inverse explicitly:

    [  1  0   0  1 | 1  0  0  0 ]
    [  0  1   i  0 | 0  1  0  0 ]
    [  0  1  −i  0 | 0  0  1  0 ]
    [ −1  0   0  1 | 0  0  0  1 ]

This reduces to

    → [ 1  0  0  0 | 1/2    0     0    −1/2 ]
      [ 0  1  0  0 | 0      1/2   1/2   0   ]
      [ 0  0  1  0 | 0     −i/2   i/2   0   ]
      [ 0  0  0  1 | 1/2    0     0     1/2 ]

Since the left side becomes the identity, the map is an isomorphism, and the inverse transformation is

    [ x  y ]
    [ z  w ]  ↦  ( (x − w)/2, (y + z)/2, i(z − y)/2, (x + w)/2 ).
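A round-trip check with Python complex numbers (an illustrative sketch; `to_W` and `from_W` are our names):

```python
def to_W(x, y, z, t):
    """(x, y, z, t) -> 2x2 Hermitian matrix [[t+x, y+iz], [y-iz, t-x]]."""
    return [[t + x, y + 1j * z], [y - 1j * z, t - x]]

def from_W(M):
    """Inverse map read off from the row reduction above."""
    (a, b), (c, d) = M
    return ((a - d) / 2, (b + c) / 2, 1j * (c - b) / 2, (a + d) / 2)

v = (3, -1, 2, 5)
M = to_W(*v)
# Hermitian: off-diagonal entries are conjugates, diagonal is real.
assert M[0][1] == M[1][0].conjugate()
# The two maps are mutually inverse on this sample.
assert from_W(M) == (3, -1, 2, 5)
```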

Exercise 4: Show that F m×n is isomorphic to F mn .

Solution: Define the bijection σ from {(a, b) | a, b ∈ N, 1 ≤ a ≤ m, 1 ≤ b ≤ n} to {1, 2, . . . , mn} by (a, b) ↦ (a − 1)n + b. Define the function G from F m×n to F mn as follows: map A ∈ F m×n to the mn-tuple that has Aij in the σ(i, j) position. In other words A ↦ (A11, A12, . . . , A1n, A21, A22, . . . , A2n, . . . , Am1, . . . , Amn). Since addition in F m×n and in F mn is performed component-wise, G(A + B) = G(A) + G(B). Similarly, since scalar multiplication acts component-wise in the same way in F m×n as in F mn, we also have G(cA) = cG(A). Thus G is linear. G is clearly one-to-one (as well as clearly onto), and both F m×n and F mn have dimension mn (by Example 17, page 45 and Exercise 2.3.12, page 49); thus (by Theorem 9, page 81) G has an inverse and therefore is an isomorphism.

Exercise 5: Let V be the set of complex numbers regarded as a vector space over the field of real numbers (Exercise 1). We define a function T from V into the space of 2 × 2 real matrices, as follows. If z = x + iy with x and y real numbers, then

    T(z) = [ x + 7y    5y     ]
           [ −10y      x − 7y ]

(a) Verify that T is a one-one (real) linear transformation of V into the space of 2 × 2 matrices.

(b) Verify that T(z1z2) = T(z1)T(z2).



(c) How would you describe the range of T ?

Solution:

(a) The four entries of T(z) are linear combinations of the coordinates x and y of z (as a vector over R), so T is a linear transformation. To see that T is one-to-one, let z = x + yi and w = a + bi and suppose T(z) = T(w). Then considering the top right entry of the matrix we see that 5y = 5b, which implies b = y. It now follows from the top left entry that x = a. Thus T(z) = T(w) ⇒ z = w, so T is one-to-one.

(b) Let z1 = x + yi and z2 = a + bi. Then

    T(z1z2) = T((ax − by) + (ay + bx)i)
            = [ (ax − by) + 7(ay + bx)    5(ay + bx)              ]
              [ −10(ay + bx)              (ax − by) − 7(ay + bx)  ]

On the other hand,

    T(z1)T(z2) = [ x + 7y    5y     ] [ a + 7b    5b     ]
                 [ −10y      x − 7y ] [ −10b      a − 7b ]

               = [ (ax − by) + 7(ay + bx)    5(ay + bx)              ]
                 [ −10(ay + bx)              (ax − by) − 7(ay + bx)  ]

Thus T(z1z2) = T(z1)T(z2).
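A numerical spot-check of the multiplicative property (an illustrative sketch of ours):

```python
def T(z):
    """The embedding of C into 2x2 real matrices from Exercise 5."""
    x, y = z.real, z.imag
    return [[x + 7 * y, 5 * y], [-10 * y, x - 7 * y]]

def matmul2(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

z1, z2 = 3 - 2j, -1 + 4j
assert T(z1 * z2) == matmul2(T(z1), T(z2))
assert T(1 + 0j) == [[1, 0], [0, 1]]      # T sends 1 to the identity
```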

(c) The range of T has (real) dimension equal to two by part (a), and so the range of T is isomorphic to C as real vector
spaces. But both spaces also have a natural multiplication and in part (b) we showed that T respects the multiplication. Thus
the range of T is isomorphic to C as fields and we have essentially found an isomorphic copy of the field C in the algebra of
2 × 2 real matrices.

Exercise 6: Let V and W be finite-dimensional vector spaces over the field F. Prove that V and W are isomorphic if and only
if dim(V) = dim(W).

Solution: Suppose dim(V) = dim(W) = n. By Theorem 10, page 84, both V and W are isomorphic to F n , and consequently,
since isomorphism is an equivalence relation, V and W are isomorphic to each other. Conversely, suppose T is an isomor-
phism from V to W. Suppose dim(W) = n. Then by Theorem 10 again, there is an isomorphism S : W → F n . Thus S T is an
isomorphism from V to F n implying also dim(V) = n.

Exercise 7: Let V and W be vector spaces over the field F and let U be an isomorphism of V onto W. Prove that T → UT U −1
is an isomorphism of L(V, V) onto L(W, W).

Solution: L(V, V) is defined on page 75 as the vector space of linear transformations from V to V, and likewise L(W, W) is the
vector space of linear transformations from W to W.

Call the function f . We know f (T ) is linear since it is a composition of three linear transformations UT U −1 . Thus indeed f
is a function from L(V, V) to L(W, W). Now f (aT + T 0 ) = U(aT + T 0 )U −1 = (aUT + UT 0 )U −1 = aUT U −1 + UT 0 U −1 =
a f (T ) + f (T 0 ). Thus f is linear. We just must show f has an inverse. Let g be the function from L(W, W) to L(V, V) given by
g(T ) = U −1 T U. Then g f (T ) = U −1 (UT U −1 )U = T . Similarly f g = I. Thus f and g are inverses. Thus f is an isomorphism.

Section 3.4: Representation of Transformations by Matrices


Page 90: Typo. Four lines from the bottom it says “Example 12” where they probably meant Example 10 (page 78).

Page 91: Just before (3-8) it says ”By definition”. I think it’s more than just by definition, see bottom of page 88.

Exercise 1: Let T be the linear operator on C2 defined by T (x1 , x2 ) = (x1 , 0). Let B be the standard ordered basis for C2 and
let B0 = {α1 , α2 } be the ordered basis defined by α1 = (1, i), α2 = (−i, 2).
(a) What is the matrix of T relative to the pair B, B0 ?
(b) What is the matrix of T relative to the pair B0 , B?
(c) What is the matrix of T in the ordered basis B0 ?
(d) What is the matrix of T in the ordered basis {α2 , α1 }?
Solution: (a) According to the comments at the bottom of page 87, the i-th column of the matrix is given by [T εi ]B0 , where
ε1 = (1, 0) and ε2 = (0, 1) are the standard basis vectors of C2 . Now T ε1 = (1, 0) and T ε2 = (0, 0). To write these in terms
of α1 and α2 we use the approach of row-reducing the augmented matrix

[ 1  −i  1  0 ]    [ 1  −i   1  0 ]    [ 1  0   2  0 ]
[ i   2  0  0 ] →  [ 0   1  −i  0 ] →  [ 0  1  −i  0 ] .

Thus T ε1 = 2α1 − iα2 and T ε2 = 0 · α1 + 0 · α2 and the matrix of T relative to B, B0 is


" #
2 0
.
−i 0

(b) In this case we have to write T α1 and T α2 as linear combinations of ε1 , ε2 .

T α1 = (1, 0) = 1 · ε1 + 0 · ε2

T α2 = (−i, 0) = −i · ε1 + 0 · ε2 .

Thus the matrix of T relative to B0 , B is

[ 1  −i ]
[ 0   0 ] .
(c) In this case we need to write T α1 and T α2 as linear combinations of α1 and α2 . T α1 = (1, 0), T α2 = (−i, 0). We row-reduce
the augmented matrix:

[ 1  −i  1  −i ]    [ 1  −i   1  −i ]    [ 1  0   2  −2i ]
[ i   2  0   0 ] →  [ 0   1  −i  −1 ] →  [ 0  1  −i   −1 ] .

Thus the matrix of T in the ordered basis B0 is

[  2  −2i ]
[ −i   −1 ] .
(d) In this case we need to write T α2 and T α1 as linear combinations of α2 and α1 . In this case the matrix we need to
row-reduce is just the same as in (c) but with columns switched:
" # " # " #
−i 1 −i 1 1 i 1 i 1 i 1 i
→ →
2 i 0 0 2 i 0 0 0 −i −2 −2i
" # " #
1 i 1 i 1 0 −1 −i
→ →
0 1 −2i 2 0 1 −2i 2
Thus the matrix of T in the ordered basis {α2 , α1 } is
" #
−1 −i
.
−2i 2
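All four coordinate matrices of this exercise can be checked at once with NumPy. Below, M is the standard matrix of T and the columns of P are α1, α2 (the variable names are ours); the change-of-basis identities give the matrices relative to B, B′ and in B′:

```python
import numpy as np

M = np.array([[1, 0], [0, 0]], dtype=complex)  # T in the standard basis B
P = np.array([[1, -1j], [1j, 2]])              # columns are alpha1, alpha2
Pinv = np.linalg.inv(P)

assert np.allclose(Pinv @ M, [[2, 0], [-1j, 0]])         # pair B, B' (part a)
assert np.allclose(M @ P, [[1, -1j], [0, 0]])            # pair B', B (part b)
assert np.allclose(Pinv @ M @ P, [[2, -2j], [-1j, -1]])  # basis B'   (part c)
```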

Exercise 2: Let T be the linear transformation from R3 to R2 defined by

T (x1 , x2 , x3 ) = (x1 + x2 , 2x3 − x1 ).



(a) If B is the standard ordered basis for R3 and B0 is the standard ordered basis for R2 , what is the matrix of T relative to
the pair B, B0 ?

(b) If B = {α1 , α2 , α3 } and B0 = {β1 , β2 }, where

α1 = (1, 0, −1), α2 = (1, 1, 1), α3 = (1, 0, 0), β1 = (0, 1), β2 = (1, 0)

what is the matrix of T relative to the pair B, B0 ?

Solution: With respect to the standard bases, the matrix is simply


" #
1 1 0
.
−1 0 2

(b) We must write T α1 , T α2 , T α3 in terms of β1 , β2 .

T α1 = (1, −3)

T α2 = (2, 1)
T α3 = (1, 0).
We row-reduce the augmented matrix
" # " #
0 1 1 2 1 1 0 −3 1 0
→ .
1 0 −3 1 0 0 1 1 2 1

Thus the matrix of T with respect to B, B0 is " #


−3 1 0
.
1 2 1
Exercise 3: Let T be a linear operator on F n , let A be the matrix of T in the standard ordered basis for F n , and let W be the
subspace of F n spanned by the column vectors of A. What does W have to do with T ?

Solution: Since {ε1 , . . . , εn } is a basis of F n , we know {T ε1 , . . . , T εn } generate the range of T . But T εi equals the i-th column
vector of A. Thus the column vectors of A generate the range of T (where we identify F n with F n×1 ). We can also conclude
that a subset of the columns of A give a basis for the range of T .

Exercise 4: Let V be a two-dimensional vector space over the field F, and let B be an ordered basis for V. If T is a linear
operator on V and
[T ]B = [ a  b ]
        [ c  d ]
prove that T 2 − (a + d)T + (ad − bc)I = 0.

Solution: The coordinate matrix of T 2 − (a + d)T + (ad − bc)I with respect to B is


" #2 " # " #
a b a b 1 0
[T 2 − (a + d)T + (ad − bc)I]B = − (a + d) + (ad − bc)
c d c d 0 1

Expanding gives
a2 + bc ab + bd a2 + ad ab + bd
" # " # " #
ad − bc 0
= − +
ac + cd bc + d2 ac + cd ad + d2 0 ad − bc
" #
0 0
= .
0 0

Thus T 2 − (a + d)T + (ad − bc)I is represented by the zero matrix with respect to B. Thus T 2 − (a + d)T + (ad − bc)I = 0.
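This 2 × 2 Cayley–Hamilton identity is easy to confirm numerically for any concrete matrix (the example entries below are ours):

```python
import numpy as np

A = np.array([[2.0, -3.0], [5.0, 7.0]])  # an arbitrary 2x2 example
a, b = A[0]
c, d = A[1]

# A^2 - (a+d) A + (ad-bc) I = 0 for every 2x2 matrix A
assert np.allclose(A @ A - (a + d) * A + (a * d - b * c) * np.eye(2), 0)
```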

Exercise 5: Let T be the linear operator on R3 , the matrix of which in the standard ordered basis is
 
 1 2 1 
 0 1 1  .
 
−1 3 4

Find a basis for the range of T and a basis for the null space of T .

Solution: The range is the column-space, which is the row-space of the following matrix (the transpose):
 
 1 0 −1 
 2 1 3 
 
1 1 4

which we can easily determine a basis of by putting it in row-reduced echelon form.


     
 1 0 −1   1 0 −1   1 0 −1 
 2 1 3  →  0 1 5  →  0 1 5  .

    
1 1 4 0 1 5 0 0 0

So a basis of the range is {(1, 0, −1), (0, 1, 5)}.

The null space can be found by row-reducing the matrix


     
 1 2 1   1 2 1   1 0 −1 
 0 1 1  →  0 1 1  →  0 1 1
     

−1 3 4 0 5 5 0 0 0

So
x − z = 0
y + z = 0
which implies
x = z
y = −z
The solutions are parameterized by the one variable z, thus the null space has dimension equal to one. A basis is obtained by
setting z = 1. Thus {(1, −1, 1)} is a basis for the null space.
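Both bases can be sanity-checked with NumPy: the null-space vector must be killed by the matrix, and appending either range vector as an extra column must not raise the rank:

```python
import numpy as np

A = np.array([[1, 2, 1], [0, 1, 1], [-1, 3, 4]])

# The claimed null-space basis vector (1, -1, 1) is sent to zero
assert np.allclose(A @ np.array([1, -1, 1]), 0)

# The claimed range basis vectors lie in the column space: rank stays 2
for v in ([1, 0, -1], [0, 1, 5]):
    aug = np.column_stack([A, v])
    assert np.linalg.matrix_rank(aug) == np.linalg.matrix_rank(A) == 2
```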

Exercise 6: Let T be the linear operator on R2 defined by

T (x1 , x2 ) = (−x2 , x1 ).

(a) What is the matrix of T in the standard ordered basis for R2 ?


(b) What is the matrix of T in the ordered basis B = {α1 , α2 }, where α1 = (1, 2) and α2 = (1, −1)?
(c) Prove that for every real number c the operator (T − cI) is invertible.
(d) Prove that if B is any ordered basis for R2 and [T ]B = A, then A12 A21 ≠ 0.
Solution: (a) We must write T ε1 = (0, 1) and T ε2 = (−1, 0) in terms of ε1 and ε2 . Clearly T ε1 = ε2 and T ε2 = −ε1 . Thus the
matrix is
[ 0  −1 ]
[ 1   0 ] .

(b) We must write T α1 = (−2, 1) and T α2 = (1, 1) in terms of α1 , α2 . We can do this by row-reducing the augmented matrix
" #
1 1 −2 1
2 −1 1 1
" #
1 1 −2 1

0 −3 5 −1
" #
1 1 −2 1

0 1 −5/3 1/3
" #
1 0 −1/3 2/3

0 1 −5/3 1/3
Thus the matrix of T in the ordered basis B is
" #
−1/3 2/3
[T ]B = .
−5/3 1/3

(c) The matrix of T − cI with respect to the standard basis is


" # " #
0 −1 1 0
−c
1 0 0 1
" # " #
0 −1 c 0

1 0 0 c
" #
−c −1
.
1 −c
Row-reducing the matrix " # " # " #
−c −1 1 −c 1 −c
→ → .
1 −c −c −1 0 −1 − c2
Now −1 − c2 , 0 (since c2 ≥ 0). Thus we can continue row-reducing by dividing the second row by −1 − c2 to get
" # " #
1 −c 1 0
→ → .
0 1 0 1

Thus the matrix has rank two, thus T is invertible.

(d) Let {α1 , α2 } be any basis. Write α1 = (a, b), α2 = (c, d). Then T α1 = (−b, a), T α2 = (−d, c). We need to write T α1 and
T α2 in terms of α1 and α2 . We can do this by row reducing the augmented matrix
" #
a c −b −d
.
b d a c
" #
a b
Since {α1 , α2 } is a basis, the matrix is invertible. Thus (recalling Exercise 1.6.8, page 27), ad − bc , 0. Thus the
c d
matrix row-reduces to
c2 +d2
 ac+bd

 1 0 
 ad−bc ad−bc  .
a2 +b2 ac+bd
0 1
 
ad−bc ad−bc
Assuming a ≠ 0 this can be shown as follows:

[ a  c  −b  −d ]    [ 1  c/a  −b/a  −d/a ]
[ b  d   a   c ] →  [ b  d     a     c   ]

→ [ 1  c/a           −b/a           −d/a         ]
  [ 0  (ad − bc)/a   (a2 + b2 )/a   (ac + bd)/a  ]

→ [ 1  c/a  −b/a                   −d/a                  ]
  [ 0  1    (a2 + b2 )/(ad − bc)   (ac + bd)/(ad − bc)   ]

→ [ 1  0  −(ac + bd)/(ad − bc)   −(c2 + d2 )/(ad − bc) ]
  [ 0  1   (a2 + b2 )/(ad − bc)   (ac + bd)/(ad − bc)  ] .

If b ≠ 0 then a similar computation results in the same thing. Thus

[T ]B = [ −(ac + bd)/(ad − bc)   −(c2 + d2 )/(ad − bc) ]
        [  (a2 + b2 )/(ad − bc)   (ac + bd)/(ad − bc)  ] .

Now ad − bc ≠ 0 implies that at least one of a or b is non-zero and at least one of c or d is non-zero, so a2 + b2 > 0 and
c2 + d2 > 0. Thus (a2 + b2 )(c2 + d2 ) ≠ 0, and therefore

A12 A21 = −[(c2 + d2 )/(ad − bc)] · [(a2 + b2 )/(ad − bc)] ≠ 0.
Exercise 7: Let T be the linear operator on R3 defined by

T (x1 , x2 , x3 ) = (3x1 + x3 , −2x1 + x2 , −x1 + 2x2 + 4x3 ).

(a) What is the matrix of T in the standard ordered basis for R3 .

(b) What is the matrix of T in the ordered basis


(α1 , α2 , α3 )
where α1 = (1, 0, 1), α2 = (−1, 2, 1), and α3 = (2, 1, 1)?

(c) Prove that T is invertible and give a rule for T −1 like the one which defines T .

Solution: (a) As usual we can read the matrix in the standard basis right off the definition of T :
 
 3 0 1 
[T ]{1 ,2 ,3 } =  −2 1 0  .
 
−1 2 4
 

(b) T α1 = (4, −2, 3), T α2 = (−2, 4, 9) and T α3 = (7, −3, 4). We must write these in terms of α1 , α2 , α3 . We do this by
row-reducing the augmented matrix  
 1 −1 2 4 −2 7 
 0 2 1 −2 4 −3 
 
1 1 1 3 9 4
 
 1 −1 2 4 −2 7 
→  0 2 1 −2 4 −3 
 
0 2 −1 −1 11 −3
 
 
 1 −1 2 4 −2 7 
→  0 2 1 −2 4 −3 
 
0 0 −2 1 7 0
 
 
 1 −1 2 4 −2 7 
→  0 1 1/2 −1 2 −3/2 
 
0 0 1 −1/2 −7/2 0
 
 
 1 0 5/2 3 0 11/2 
→  0 1 1/2 −1 2 −3/2
 

0 0 1 −1/2 −7/2 0

 
 1 0 0 17/4 35/4 11/2 
→  0 1 0 −3/4 15/4 −3/2
 

0 0 1 −1/2 −7/2 0

Thus the matrix of T in the basis {α1 , α2 , α3 } is


 
 17/4 35/4 11/2 
[T ]{α1 ,α2 ,α3 } =  −3/4 15/4 −3/2  .
 
−1/2 −7/2 0

(c) We row reduce the augmented matrix (of T in the standard basis). If we achieve the identity matrix on the left of the
dividing line then T is invertible and the matrix on the right will represent T −1 in the standard basis, from which we will be
able read the rule for T −1 by inspection.
 
 3 0 1 1 0 0 
 −2 1 0 0 1 0 
 
−1 2 4 0 0 1
 
 −1 2 4 0 0 1 
→  3 0 1 1 0 0 
 
−2 1 0 0 1 0
 
 
 1 −2 −4 0 0 −1 
→  3 0 1 1 0 0 
 
−2 1 0 0 1 0
 
 
 1 −2 −4 0 0 −1 
→  0 6 13 1 0 3 
 
0 −3 −8 0 1 −2
 
 
 1 −2 −4 0 0 −1 
→  0 0 −3 1 2 −1 
 
0 −3 −8 0 1 −2
 
 
 1 −2 −4 0 0 −1 
→  0 −3 −8 0 1 −2 
 
0 0 −3 1 2 −1
 
 
 1 −2 −4 0 0 −1 
→  0 1 8/3 0 −1/3 2/3 
 
0 0 −3 1 2 −1
 
 
 1 0 4/3 0 −2/3 1/3 
→  0 1 8/3 0 −1/3 2/3 
 
0 0 −3 1 2 −1
 
 
 1 0 4/3 0 −2/3 1/3 
→  0 1 8/3 0 −1/3 2/3 
 
0 0 1 −1/3 −2/3 1/3
 
 
 1 0 0 4/9 2/9 −1/9 
→  0 1 0 8/9 13/9 −2/9 
 
0 0 1 −1/3 −2/3 1/3
 

Thus T is invertible and the matrix for T −1 in the standard basis is


 
 4/9 2/9 −1/9 
 8/9 13/9 −2/9  .
 
−1/3 −2/3 1/3
 
Thus T −1 (x1 , x2 , x3 ) = ( (4/9)x1 + (2/9)x2 − (1/9)x3 , (8/9)x1 + (13/9)x2 − (2/9)x3 , −(1/3)x1 − (2/3)x2 + (1/3)x3 ).
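The rule for T⁻¹ can be verified directly by composing it with T in both orders (helper names `T` and `T_inv` are ours):

```python
import numpy as np

def T(x):
    x1, x2, x3 = x
    return np.array([3*x1 + x3, -2*x1 + x2, -x1 + 2*x2 + 4*x3])

def T_inv(x):
    x1, x2, x3 = x
    return np.array([(4*x1 + 2*x2 - x3) / 9,
                     (8*x1 + 13*x2 - 2*x3) / 9,
                     (-3*x1 - 6*x2 + 3*x3) / 9])

v = np.array([1.0, -2.0, 5.0])
assert np.allclose(T_inv(T(v)), v)
assert np.allclose(T(T_inv(v)), v)
```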

Exercise 8: Let θ be a real number. Prove that the following two matrices are similar over the field of complex numbers:

[ cos θ   − sin θ ]     [ eiθ     0   ]
[ sin θ     cos θ ] ,   [ 0     e−iθ  ]

(Hint: Let T be the linear operator on C2 which is represented by the first matrix in the standard ordered basis. Then find
vectors α1 and α2 such that T α1 = eiθ α1 , T α2 = e−iθ α2 , and {α1 , α2 } is a basis.)

Solution: Let B be the standard basis. Following the hint, let T be the linear operator on C2 which is represented by the first
matrix in the standard ordered basis B. Thus [T ]B is the first matrix above. Let α1 = (i, 1), α2 = (i, −1). Then α1 , α2 are
clearly linearly independent so B0 = {α1 , α2 } is a basis for C2 (as a vector space over C). Since eiθ = cos θ + i sin θ, it follows
that T α1 = (i cos θ − sin θ, i sin θ + cos θ) = (cos θ + i sin θ)(i, 1) = eiθ α1 , and similarly since e−iθ = cos θ − i sin θ, it follows
that T α2 = e−iθ α2 . Thus the matrix of T with respect to B0 is

[T ]B0 = [ eiθ     0   ]
         [ 0     e−iθ  ] .

By Theorem 14, page 92, [T ]B and [T ]B0 are similar.
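The similarity can be checked concretely: with the α's as the columns of P (a specific θ chosen by us), P⁻¹RP is exactly the diagonal matrix:

```python
import numpy as np

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta), np.cos(theta)]])
P = np.array([[1j, 1j], [1, -1]])  # columns: alpha1 = (i,1), alpha2 = (i,-1)

# P^{-1} R P = diag(e^{i theta}, e^{-i theta})
D = np.linalg.inv(P) @ R @ P
assert np.allclose(D, np.diag([np.exp(1j * theta), np.exp(-1j * theta)]))
```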

Exercise 9: Let V be a finite-dimensional vector space over the field F and let S and T be linear operators on V. We ask:
When do there exist ordered bases B and B0 for V such that [S ]B = [T ]B0 ? Prove that such bases exist if and only if there is
an invertible linear operator U on V such that T = US U −1 . (Outline of proof: If [S ]B = [T ]B0 , let U be the operator which
carries B onto B0 and show that S = UT U −1 . Conversely, if T = US U −1 for some invertible U, let B be any ordered basis
for V and let B0 be its image under U. Then show that [S ]B = [T ]B0 .)

Solution: We follow the hint. Suppose there exist bases B = {α1 , . . . , αn } and B0 = {β1 , . . . , βn } such that [S ]B = [T ]B0 . Let
U be the operator which carries B onto B0 . Then by Theorem 14, page 92, [US U −1 ]B0 = [U]B −1 [US U −1 ]B [U]B , and by the
comments at the very bottom of page 90, this equals [U]B −1 [U]B [S ]B [U]B −1 [U]B , which equals [S ]B , which we've assumed
equals [T ]B0 . Thus [US U −1 ]B0 = [T ]B0 . Thus US U −1 = T .

Conversely, assume T = US U −1 for some invertible U. Let B be any ordered basis for V and let B0 be its image under U.
Then [T ]B0 = [US U −1 ]B0 = [U]B0 [S ]B0 [U]B0 −1 , which by Theorem 14, page 92, equals [S ]B (because U −1 carries B0 into B).
Thus [T ]B0 = [S ]B .

Exercise 10: We have seen that the linear operator T on R2 defined by T (x1 , x2 ) = (x1 , 0) is represented in the standard
ordered basis by the matrix
A = [ 1  0 ]
    [ 0  0 ] .
This operator satisfies T 2 = T . Prove that if S is a linear operator on R2 such that S 2 = S , then S = 0, or S = I, or there is an
ordered basis B for R2 such that [S ]B = A (above).

Solution: Suppose S 2 = S . Let ε1 , ε2 be the standard basis vectors for R2 . Consider {S ε1 , S ε2 }.

If both S ε1 = S ε2 = 0 then S = 0. Thus suppose WLOG that S ε1 ≠ 0.



First note that if x ∈ S (R2 ) then x = S (y) for some y ∈ R2 and therefore S (x) = S (S (y)) = S 2 (y) = S (y) = x. In other words
S (x) = x ∀ x ∈ S (R2 ).

Case 1: Suppose ∃ c ∈ R such that S ε2 = cS ε1 . Then S (ε2 − cε1 ) = 0. In this case S is singular because it maps a
non-zero vector to zero. Thus since S ε1 ≠ 0 we can conclude that dim(S (R2 )) = 1. Let α1 be a basis for S (R2 ). Let
α2 ∈ R2 be such that {α1 , α2 } is a basis for R2 . Then S α2 = kα1 for some k ∈ R. Let α02 = α2 − kα1 . Then {α1 , α02 }
span R2 because if x = aα1 + bα2 then x = (a + bk)α1 + bα02 . Thus {α1 , α02 } is a basis for R2 . We now determine the
matrix of S with respect to this basis. Since α1 ∈ S (R2 ) and S (x) = x ∀ x ∈ S (R2 ), it follows that S α1 = α1 . And
consequently S (α1 ) = 1 · α1 + 0 · α02 . Thus the first column of the matrix of S with respect to α1 , α02 is [1, 0]T . Also
S α02 = S (α2 − kα1 ) = S α2 − kS α1 = S α2 − kα1 = kα1 − kα1 = 0 = 0 · α1 + 0 · α02 . So the second column of the matrix is
[0, 0]T . Thus the matrix of S with respect to the basis {α1 , α02 } is exactly A.

Case 2: There does not exist c ∈ R such that S ε2 = cS ε1 . In this case S ε1 and S ε2 are linearly independent from each other.
Thus if we let αi = S εi then {α1 , α2 } is a basis for R2 . Now by assumption S (x) = x ∀ x ∈ S (R2 ), thus S α1 = α1 and S α2 = α2 .
Thus the matrix of S with respect to the basis {α1 , α2 } is exactly the identity matrix I.

Exercise 11: Let W be the space of all n × 1 column matrices over a field F. If A is an n × n matrix over F, then A defines a
linear operator LA on W through left multiplication: LA (X) = AX. Prove that every linear operator on W is left multiplication
by some n × n matrix, i.e., is LA for some A.
Now suppose V is an n-dimensional vector space over the field F, and let B be an ordered basis for V. For each α in
V, define Uα = [α]B . Prove that U is an isomorphism of V onto W. If T is a linear operator on V, then UT U −1 is a linear
operator on W. Accordingly, UT U −1 is left multiplication by some n × n matrix A. What is A?

Solution: Part 1: I’m confused by the first half of this question because isn’t this exactly Theorem 11, page 87 in the special
case V = W where B = B0 is the standard basis of F n×1 . This special case is discussed on page 88 after Theorem 12, and in
particular in Example 13. I don’t know what we’re supposed to add to that.

Part 2: Since U(cα1 + α2 ) = [cα1 + α2 ]B = c[α1 ]B + [α2 ]B = cU(α1 ) + U(α2 ), U is linear, we just must show it is invertible.
Suppose B = {α1 , . . . , αn }. Let T be the function from W to V defined as follows:

[ a1 ]
[ a2 ]
[ ⋮  ]  ↦  a1 α1 + · · · + an αn .
[ an ]

Then T is well defined and linear and it is also clear by inspection that T U is the identity transformation on V and UT is the
identity transformation on W. Thus U is an isomorphism from V to W.

It remains to determine the matrix of UT U −1 . Now Uαi is the standard n × 1 matrix with all zeros except in the i-th place which
equals one. Let B0 be the standard basis for W. Then the matrix of U with respect to B and B0 is the identity matrix. Likewise
the matrix of U −1 with respect to B0 and B is the identity matrix. Thus [UT U −1 ]B0 = I[T ]B I −1 = [T ]B . Therefore the matrix
A is simply [T ]B , the matrix of T with respect to B.

Problem 12: Let V be an n-dimensional vector space over the field F, and let B = {α1 , . . . , αn } be an ordered basis for V.

(a) According to Theorem 1, there is a unique linear operator T on V such that

T α j = α j+1 , j = 1, . . . , n − 1, T αn = 0.

What is the matrix A of T in the ordered basis B?


(b) Prove that T n = 0 but T n−1 ≠ 0.

(c) Let S be any linear operator on V such that S n = 0 but S n−1 , 0. Prove that there is an ordered basis B0 for V such that
the matrix of S in the ordered basis B0 is the matrix A of part (a).
(d) Prove that if M and N are n × n matrices over F such that M n = N n = 0 but M n−1 ≠ 0 ≠ N n−1 , then M and N are similar.
Solution: (a) The i-th column of A is given by the coefficients obtained by writing αi in terms of {α1 , . . . , αn }. Since T αi =
αi+1 , i < n and T αn = 0, the matrix is therefore
 
 0 0 0 0 · · · 0 0 
 1 0 0 0 · · · 0 0 
 
 0 1 0 0 · · · 0 0 
A =  0 0 1 0 · · · 0 0  .
 .. .. .. .. . . .. .. 
 
 . . . . . . . 
 
0 0 0 0 ··· 1 0

(b) A has all zeros except 1’s along the diagonal one below the main diagonal. Thus A2 has all zeros except 1’s along the
diagonal that is two diagonals below the main diagonal, as follows:

A2 = [ 0  0  0  0  · · ·  0  0 ]
     [ 0  0  0  0  · · ·  0  0 ]
     [ 1  0  0  0  · · ·  0  0 ]
     [ 0  1  0  0  · · ·  0  0 ]
     [ 0  0  1  0  · · ·  0  0 ]
     [ ⋮  ⋮  ⋮  ⋮    ⋱    ⋮  ⋮ ]
     [ 0  0  0  0  · · ·  0  0 ] .

Similarly A3 has all zeros except the diagonal three below the main diagonal. Continuing we see that An−1 is the matrix that
is all zeros except for the bottom left entry which is a 1:
 
 0 0 0 0 ··· 0 0 

 0 0 0 0 ··· 0 0 

 0 0 0 0 ··· 0 0 
An−1 =   .
 
0 0 0 0 ··· 0 0
.. .. .. .. .. .. ..
 
. . . . . . .
 
 

1 0 0 0 ··· 0 0

Multiplying by A one more time then yields the zero matrix, An = 0. Since A represents T with respect to the basis B, and Ai
represents T i , we see that T n−1 ≠ 0 and T n = 0.
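The nilpotency pattern of part (b) is easy to confirm numerically for a concrete size (n = 5 chosen by us):

```python
import numpy as np

n = 5
A = np.diag(np.ones(n - 1), k=-1)  # 1's on the subdiagonal, as in part (a)

# A^{n-1} is nonzero (a single 1 in the bottom-left corner), but A^n = 0
assert np.any(np.linalg.matrix_power(A, n - 1) != 0)
assert np.all(np.linalg.matrix_power(A, n) == 0)
```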

(c) We will first show that dim(S k (V)) = n − k. Suppose dim(S (V)) = n. Then dim(S k (V)) = n ∀ k = 1, 2, . . . , which
contradicts the fact that S n = 0. Thus it must be that dim(S (V)) ≤ n − 1. Now dim(S 2 (V)) cannot be greater than dim(S (V))
because a linear transformation cannot map a space onto one with higher dimension. Thus dim(S 2 (V)) ≤ n − 1. Suppose that
dim(S 2 (V)) = n − 1. Thus n − 1 = dim(S 2 (V)) ≤ dim(S (V)) ≤ n − 1. Thus it must be that dim(S (V)) = n − 1. Thus S is
an isomorphism on S (V) because S (V) and S (S (V)) have the same dimension. It follows that S k is also an isomorphism on
S (V) ∀ k ≥ 2. Thus it follows that dim(S k (V)) = n − 1 for all k = 2, 3, 4, . . . , another contradiction. Thus dim(S 2 (V)) ≤ n − 2.
Suppose that dim(S 3 (V)) = n − 2, then it must be that dim(S 2 (V)) = n − 2 and therefore S is an isomorphism on S 2 (V), from
which it follows that dim(S k (V)) = n−2 for all k = 3, 4, . . . , a contradiction. Thus dim(S 3 (V)) ≤ n−3. Continuing in this way
we see that dim(S k (V)) ≤ n − k. Thus dim(S n−1 (V)) ≤ 1. Since we are assuming S n−1 ≠ 0 it follows that dim(S n−1 (V)) = 1.
We have seen that dim(S k (V)) cannot equal dim(S k+1 (V)) for k = 1, 2, . . . , n − 1, thus it follows that the dimension must go
down by one for each application of S . In other words dim(S n−2 (V)) must equal 2, and then in turn dim(S n−3 (V)) must equal
3, and generally dim(S k (V)) = n − k.

Now let α1 be any basis vector for S n−1 (V) which we have shown has dimension one. Now S n−2 (V) has dimension two and
S takes this space onto a space S n−1 (V) of dimension one. Thus there must be α2 ∈ S n−2 (V) \ S n−1 (V) such that S (α2 ) = α1 .
Since α2 is not in the space generated by α1 and {α1 , α2 } are in the space S n−2 (V) of dimension two, it follows that {α1 , α2 }
is a basis for S n−2 (V). Now S n−3 (V) has dimension three and S takes this space onto a space S n−2 (V) of dimension two.
Thus there must be α3 ∈ S n−3 (V) \ S n−2 (V) such that S (α3 ) = α2 . Since α3 is not in the space generated by α1 and α2
and {α1 , α2 , α3 } are in the space S n−3 (V) of dimension three, it follows that {α1 , α2 , α3 } is a basis for S n−3 (V). Continuing
in this way we produce a sequence of elements {α1 , α2 , . . . , αk } that is a basis for S n−k (V) and such that S (αi ) = αi−1 for all
i = 2, 3, . . . , k. In particular we have a basis {α1 , α2 , . . . , αn } for V and such that S (αi ) = αi−1 for all i = 2, 3, . . . , n. Reverse
the ordering of this bases to give B = {αn , αn−1 , . . . , α1 }. Then B therefore is the required basis for which the matrix of S with
respect to this basis will be the matrix given in part (a).

(d) Suppose S is the transformation of F n×1 given by v 7→ Mv and similarly let T be the transformation v 7→ Nv. Then
S n = T n = 0 and S n−1 ≠ 0 ≠ T n−1 . Then we know from the previous parts of this problem that there is a basis B for which
S is represented by the matrix from part (a). By Theorem 14, page 92, it follows that M is similar to the matrix in part (a).
Likewise there’s a basis B0 for which T is represented by the matrix from part (a) and thus the matrix N is also similar to the
matrix in part (a). Since similarity is an equivalence relation (see last paragraph page 94), it follows that since M and N are
similar to the same matrix that they must be similar to each other.

Exercise 13: Let V and W be finite-dimensional vector spaces over the field F and let T be a linear transformation from V
into W. If
B = {α1 , . . . , αn } and B0 = {β1 , . . . , βm }
are ordered bases for V and W, respectively, define the linear transformations E p,q as in the proof of Theorem 5: E p,q (αi ) =
δi,q β p . Then the E p,q , 1 ≤ p ≤ m, 1 ≤ q ≤ n, form a basis for L(V, W), and so
T = Σ_{p=1}^{m} Σ_{q=1}^{n} A_{pq} E^{p,q}

for certain scalars A pq (the coordinates of T in this basis for L(V, W)). Show that the matrix A with entries A(p, q) = A pq is
precisely the matrix of T relative to the pair B, B0 .
Solution: Let E_M^{p,q} be the matrix of the linear transformation E^{p,q} with respect to the bases B and B0 . Then by the formula
for a matrix associated to a linear transformation as given in the proof of Theorem 11, page 87, E_M^{p,q} is the matrix all of
whose entries are zero except for the p, q-th entry, which is one. Thus A = Σ_{p,q} A_{pq} E_M^{p,q} . Since the association
between linear transformations and matrices is an isomorphism, T ↦ A implies Σ_{p,q} A_{pq} E^{p,q} ↦ Σ_{p,q} A_{pq} E_M^{p,q} .
And thus A is exactly the matrix whose entries are the A_{pq} 's.

Section 3.5: Linear Functionals


Page 100: Typo line 5 from the top. It says f (αi ) = αi , should be f (αi ) = ai .

Page 100: In Example 22, it says the matrix


 
 1 1 1 
 t1 t2 t3
 

t12 t22 t32
is invertible “as a short computation shows.” The way to see this is with what we know so far is to row reduce the matrix. As
long as t1 , t2 we can get to
 t2 −t3 
 1 0 t2 −t1 
t3 −t1
0 1  .
 
 t2 −t1
0 0 (t3 −t1 )(t3 −t2 ) 
 
t22 −t12
74 Chapter 3: Linear Transformations

Now we can continue and obtain  t2 −t3 


 1 0 t2 −t1 
t3 −t1
 
 0 1 
 t2 −t1 
0 0 1
as long as (t3 − t1 )(t3 − t2 ) , 0. From there we can finish row-reducing to obtain the identity. Thus we can row-reduce the
matrix to the identity if and only if t1 , t2 , t3 are distinct, that is no two of them are equal.

Exercise 1: In R3 let α1 = (1, 0, 1), α2 = (0, 1, −2), α3 = (−1, −1, 0).

(a) If f is a linear functional on R3 such that

f (α1 ) = 1, f (α2 ) = −1, f (α3 ) = 3,

and if α = (a, b, c), find f (α).

(b) Describe explicitly a linear functional f on R3 such that

f (α1 ) = f (α2 ) = 0 but f (α3 ) ≠ 0.

(c) Let f be any linear functional such that

f (α1 ) = f (α2 ) = 0 and f (α3 ) ≠ 0.

If α = (2, 3, −1), show that f (α) ≠ 0.

Solution: (a) We need to write (a, b, c) in terms of α1 , α2 , α3 . We can do this by row reducing the following augmented matrix
whose columns are the αi ’s.
 1 0 −1 a 
 0 1 −1 b 
 
1 −2 0 c
 
 1 0 −1 a 
→  0 1 −1 b 
 
0 −2 −1 c − a
 
 
 1 0 −1 a 
→  0 1 −1 b
 

0 0 −1 c − a + 2b

 
 1 0 −1 a 
→  0 1 −1 b
 

0 0 1 a − 2b − c

 
 1 0 0 2a − 2b − c 
→  0 1 0 a − b − c 
 
0 0 1 a − 2b − c
 

Thus if (a, b, c) = x1 α1 + x2 α2 + x3 α3 then x1 = 2a − 2b − c, x2 = a − b − c and x3 = a − 2b − c. Now f (a, b, c) =


f (x1 α1 + x2 α2 + x3 α3 ) = x1 f (α1 ) + x2 f (α2 ) + x3 f (α3 ) = (2a − 2b − c) · 1 + (a − b − c) · (−1) + (a − 2b − c) · 3 =
(2a − 2b − c) − (a − b − c) + (3a − 6b − 3c) = 4a − 7b − 3c. In summary

f (α) = 4a − 7b − 3c.

(b) Let f (x, y, z) = x − 2y − z. Then f (1, 0, 1) = 0, f (0, 1, −2) = 0, and f (−1, −1, 0) = 1.

(c) Using part (a) we know that α = (2, 3, −1) = −α1 − 3α3 (plug in a = 2, b = 3, c = −1 for the formulas for x1 , x2 , x3 ). Thus
f (α) = − f (α1 ) − 3 f (α3 ) = 0 − 3 f (α3 ) and since f (α3 ) ≠ 0, −3 f (α3 ) ≠ 0 and thus f (α) ≠ 0.

Exercise 2: Let B = {α1 , α2 , α3 } be the basis for C3 defined by


α1 = (1, 0, −1), α2 = (1, 1, 1), α3 = (2, 2, 0).
Find the dual basis of B.

Solution: The dual basis { f1 , f2 , f3 } is given by fi (x1 , x2 , x3 ) = Σ_{j=1}^{3} Ai j x j where (A1,1 , A1,2 , A1,3 ) is the solution to the system

 
 1 0 −1 1 
 1 1 1 0  ,
 
2 2 0 0
(A2,1 , A2,2 , A2,3 ) is the solution to the system  
 1 0 −1 0 
 1 1 1 1  ,
 
2 2 0 0
and (A3,1 , A3,2 , A3,3 ) is the solution to the system
 
 1 0 −1 0 
 1 1 1 0  ,
 
2 2 0 1
We row reduce the generic matrix
[ 1  0  −1  a ]    [ 1  0  0  a + b − (1/2)c ]
[ 1  1   1  b ] →  [ 0  1  0  c − b − a      ]
[ 2  2   0  c ]    [ 0  0  1  b − (1/2)c     ]
a = 1, b = 0, c = 0 ⇒ f1 (x1 , x2 , x3 ) = x1 − x2
a = 0, b = 1, c = 0 ⇒ f2 (x1 , x2 , x3 ) = x1 − x2 + x3
a = 0, b = 0, c = 1 ⇒ f3 (x1 , x2 , x3 ) = − 12 x1 + x2 − 12 x3 .

Then { f1 , f2 , f3 } is the dual basis to {α1 , α2 , α3 }.

Exercise 3: If A and B are n × n matrices over the field F, show that trace(AB) = trace(BA). Now show that similar matrices
have the same trace.

Solution: (AB)i j = Σ_{k=1}^{n} Aik Bk j and (BA)i j = Σ_{k=1}^{n} Bik Ak j . Thus

trace(AB) = Σ_{i=1}^{n} (AB)ii = Σ_{i=1}^{n} Σ_{k=1}^{n} Aik Bki = Σ_{k=1}^{n} Σ_{i=1}^{n} Bki Aik = Σ_{k=1}^{n} (BA)kk = trace(BA).

Suppose A and B are similar. Then ∃ an invertible n × n matrix P such that A = PBP−1 . Thus trace(A) = trace(PBP−1 ) =
trace((P)(BP−1 )) = trace((BP−1 )(P)) = trace(B).
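Both facts are easy to spot-check on random matrices (the seed and the near-identity construction of P, which makes P invertible with overwhelming probability, are our choices):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
P = np.eye(4) + rng.standard_normal((4, 4)) / 10  # invertible (near identity)

# trace(AB) = trace(BA), and similar matrices have equal trace
assert np.isclose(np.trace(A @ B), np.trace(B @ A))
assert np.isclose(np.trace(P @ B @ np.linalg.inv(P)), np.trace(B))
```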

Exercise 4: Let V be the vector space of all polynomial functions p from R into R which have degree 2 or less:
p(x) = c0 + c1 x + c2 x2 .
Define three linear functionals on V by
f1 (p) = ∫_0^1 p(x) dx,    f2 (p) = ∫_0^2 p(x) dx,    f3 (p) = ∫_0^3 p(x) dx.

Show that { f1 , f2 , f3 } is a basis for V ∗ by exhibiting the basis for V of which it is the dual.

Solution:

∫_0^a (c0 + c1 x + c2 x2 ) dx = [c0 x + (1/2)c1 x2 + (1/3)c2 x3 ]_0^a = c0 a + (1/2)c1 a2 + (1/3)c2 a3 .

Thus

∫_0^1 p(x) dx = c0 + (1/2)c1 + (1/3)c2
∫_0^2 p(x) dx = 2c0 + 2c1 + (8/3)c2
∫_0^3 p(x) dx = 3c0 + (9/2)c1 + 9c2

Thus we need to solve the following system three times

c0 + (1/2)c1 + (1/3)c2 = u
2c0 + 2c1 + (8/3)c2 = v
3c0 + (9/2)c1 + 9c2 = w

Once when (u, v, w) = (1, 0, 0), once when (u, v, w) = (0, 1, 0) and once when (u, v, w) = (0, 0, 1).

We therefore row reduce the following matrix


 
 1 1/2 1/3 1 0 0 
 2 2 8/3 0 1 0
 

3 9/2 9 0 0 1
 
 1 1/2 1/3 1 0 0 
→  0 1 2 −2 1 0 
 
0 3 8 −3 0 1
 
 
 1 0 −2/3 2 −1/2 0 
→  0 1 2 −2 1 0 
 
0 0 2 3 −3 1
 
 
 1 0 −2/3 2 −1/2 0 
→  0 1 2 −2 1 0 
 
0 0 1 3/2 −3/2 1/2
 
 
 1 0 0 3 −3/2 1/3 
→  0 1 0 −5 4 −1  .
 
0 0 1 3/2 −3/2 1/2
 

Thus
α1 = 3 − 5x + (3/2)x2
α2 = −3/2 + 4x − (3/2)x2
α3 = 1/3 − x + (1/2)x2 .

Exercise 5: If A and B are n × n complex matrices, show that AB − BA = I is impossible.

Solution: Recall for n × n matrices M, trace(M) = Σ_{i=1}^{n} Mii . The trace is clearly additive: trace(M1 + M2 ) = trace(M1 ) +
trace(M2 ). We know from Exercise 3 that trace(AB) = trace(BA). Thus trace(AB − BA) = trace(AB) − trace(BA) =
trace(AB) − trace(AB) = 0. But trace(I) = n and n ≠ 0 in C.

Exercise 6: Let m and n be positive integers and F a field. Let f1 , . . . , fm be linear functionals on F n . For α in F n define

T (α) = ( f1 (α), . . . , fm (α)).

Show that T is a linear transformation from F n into F m . Then show that every linear transformation from F n into F m is of the
above form, for some f1 , . . . , fm .

Solution: Clearly T is a well defined function from F n into F m . We must just show it is linear. Let α, β ∈ F n , c ∈ F. Then

T (cα + β) = ( f1 (cα + β), . . . , fm (cα + β))

= (c f1 (α) + f1 (β), . . . , c fm (α) + fm (β))

= c( f1 (α), . . . , fm (α)) + ( f1 (β), . . . , fm (β))

= cT (α) + T (β).
Thus T is a linear transformation.

Let S be any linear transformation from F n to F m . Let M be the matrix of S with respect to the standard bases of F n and
F m . Then M is an m × n matrix and S is given by X ↦ MX where we identify F n with F n×1 and F m with F m×1 . Now for each
i = 1, . . . , m let fi (x1 , . . . , xn ) = Σ_{j=1}^{n} Mi j x j . Then X ↦ MX is the same as X ↦ ( f1 (X), . . . , fm (X)) (keeping in mind
our identification of F m with F m×1 ). Thus S has been written in the desired form.

Exercise 7: Let α1 = (1, 0, −1, 2) and α2 = (2, 3, 1, 1), and let W be the subspace of R4 spanned by α1 and α2 . Which linear
functionals f :
f (x1 , x2 , x3 , x4 ) = c1 x1 + c2 x2 + c3 x3 + c4 x4
are in the annihilator of W?

Solution: The two vectors α1 and α2 are linearly independent since neither is a multiple of the other. Thus W has dimension
2 and {α1 , α2 } is a basis for W. Therefore a functional f is in the annihilator of W if and only if f (α1 ) = f (α2 ) = 0. We find
such f by solving the system
f(α1) = 0
f(α2) = 0

or equivalently

c1 − c3 + 2c4 = 0
2c1 + 3c2 + c3 + c4 = 0.

We do this by row reducing the matrix

[ 1 0 −1 2 ]
[ 2 3 1 1 ]

→

[ 1 0 −1 2 ]
[ 0 1 1 −1 ].
Therefore
c1 = c3 − 2c4
c2 = −c3 + c4 .

The general element of W⁰ is therefore

f(x1, x2, x3, x4) = (c3 − 2c4)x1 + (−c3 + c4)x2 + c3x3 + c4x4,

for arbitrary c3 and c4. Thus W⁰ has dimension 2, as expected.
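A quick numerical check (a sketch assuming numpy; c3, c4 stand for arbitrary scalars) that a functional with these coefficients does kill α1 and α2:

```python
import numpy as np

# f(x) = (c3 - 2c4)x1 + (-c3 + c4)x2 + c3*x3 + c4*x4 should vanish on alpha1, alpha2.
alpha1 = np.array([1, 0, -1, 2])
alpha2 = np.array([2, 3, 1, 1])

def f(x, c3, c4):
    c = np.array([c3 - 2 * c4, -c3 + c4, c3, c4])
    return c @ x

for c3, c4 in [(1, 0), (0, 1), (3, -5)]:
    print(f(alpha1, c3, c4), f(alpha2, c3, c4))  # 0 0 in every case
```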

Exercise 8: Let W be the subspace of R5 which is spanned by the vectors

α1 = ε1 + 2ε2 + ε3,    α2 = ε2 + 3ε3 + 3ε4 + ε5,
α3 = ε1 + 4ε2 + 6ε3 + 4ε4 + ε5.

Find a basis for W⁰.

Solution: The vectors α1, α2, α3 are linearly independent, as can be seen by row reducing the matrix

[ 1 2 1 0 0 ]
[ 0 1 3 3 1 ]
[ 1 4 6 4 1 ]

→

[ 1 2 1 0 0 ]
[ 0 1 3 3 1 ]
[ 0 2 5 4 1 ]

→

[ 1 0 −5 −6 −2 ]
[ 0 1 3 3 1 ]
[ 0 0 −1 −2 −1 ]

→

[ 1 0 −5 −6 −2 ]
[ 0 1 3 3 1 ]
[ 0 0 1 2 1 ]

→

[ 1 0 0 4 3 ]
[ 0 1 0 −3 −2 ]
[ 0 0 1 2 1 ].

Thus W has dimension 3 and {α1, α2, α3} is a basis for W. Every functional on R5 is given by f(x1, x2, x3, x4, x5) = c1x1 + c2x2 + c3x3 + c4x4 + c5x5 for some c1, . . . , c5. From the row reduced matrix we see that the general element of W⁰ is

f(x1, x2, x3, x4, x5) = (−4c4 − 3c5)x1 + (3c4 + 2c5)x2 − (2c4 + c5)x3 + c4x4 + c5x5.

Taking (c4, c5) = (1, 0) and (c4, c5) = (0, 1) gives the basis {−4x1 + 3x2 − 2x3 + x4, −3x1 + 2x2 − x3 + x5} for W⁰.
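A quick numerical check (a sketch assuming numpy; c4, c5 stand for arbitrary scalars) that these coefficient vectors annihilate α1, α2, α3:

```python
import numpy as np

# Coefficient vector (-4c4 - 3c5, 3c4 + 2c5, -(2c4 + c5), c4, c5)
# should be orthogonal to each alpha_i.
alphas = np.array([
    [1, 2, 1, 0, 0],
    [0, 1, 3, 3, 1],
    [1, 4, 6, 4, 1],
])

def coeffs(c4, c5):
    return np.array([-4*c4 - 3*c5, 3*c4 + 2*c5, -(2*c4 + c5), c4, c5])

for c4, c5 in [(1, 0), (0, 1), (2, 7)]:
    print(alphas @ coeffs(c4, c5))  # [0 0 0] each time
```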

Exercise 9: Let V be the vector space of all 2 × 2 matrices over the field of real numbers, and let

B = [ 2 −2 ]
    [ −1 1 ].

Let W be the subspace of V consisting of all A such that AB = 0. Let f be a linear functional on V which is in the annihilator of W. Suppose that f(I) = 0 and f(C) = 3, where I is the 2 × 2 identity matrix and

C = [ 0 0 ]
    [ 0 1 ].

Find f(B).

Solution: The general linear functional on V is of the form f(A) = aA11 + bA12 + cA21 + dA22 for some a, b, c, d ∈ R. If A = [x y; z w] ∈ W then

[ x y ] [ 2 −2 ]   [ 0 0 ]
[ z w ] [ −1 1 ] = [ 0 0 ]

implies y = 2x and w = 2z. So (relabeling z as y) W consists of all matrices of the form

[ x 2x ]
[ y 2y ].

Now f ∈ W⁰ ⇒ f([x 2x; y 2y]) = 0 ∀ x, y ∈ R ⇒ ax + 2bx + cy + 2dy = 0 ∀ x, y ∈ R ⇒ (a + 2b)x + (c + 2d)y = 0 ∀ x, y ∈ R ⇒ b = −a/2 and d = −c/2. So the general f ∈ W⁰ is of the form

f(A) = aA11 − (a/2)A12 + cA21 − (c/2)A22.

Now f(C) = 3 ⇒ d = 3 ⇒ −c/2 = 3 ⇒ c = −6. And f(I) = 0 ⇒ a + d = a − c/2 = 0 ⇒ a = c/2 = −3. Thus

f(A) = −3A11 + (3/2)A12 − 6A21 + 3A22.

Thus

f(B) = −3 · 2 + (3/2) · (−2) − 6 · (−1) + 3 · 1 = 0.
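A quick numerical check of this functional (a sketch assuming numpy): it vanishes on W, satisfies the two given conditions, and gives f(B) = 0.

```python
import numpy as np

# f(A) = -3*A11 + 1.5*A12 - 6*A21 + 3*A22, as found above.
B = np.array([[2.0, -2.0], [-1.0, 1.0]])
C = np.array([[0.0, 0.0], [0.0, 1.0]])

def f(A):
    return -3*A[0, 0] + 1.5*A[0, 1] - 6*A[1, 0] + 3*A[1, 1]

print(f(np.eye(2)), f(C), f(B))  # 0.0 3.0 0.0

# Sample elements of W have rows (x, 2x) and (y, 2y).
for x, y in [(1.0, 0.0), (0.0, 1.0), (2.0, -3.0)]:
    A = np.array([[x, 2*x], [y, 2*y]])
    assert np.allclose(A @ B, 0)  # A really lies in W
    print(f(A))                   # 0.0
```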
Exercise 10: Let F be a subfield of the complex numbers. We define n linear functionals on F n (n ≥ 2) by

fk(x1, . . . , xn) = ∑_{j=1}^n (k − j)xj,    1 ≤ k ≤ n.

What is the dimension of the subspace annihilated by f1 , . . . , fn ?

Solution: Write Ng for the null space of a functional g. Note that

fk(x1, . . . , xn) = k ∑_{j=1}^n xj − ∑_{j=1}^n jxj,

so each fk is a linear combination of the two functionals g(x1, . . . , xn) = ∑j xj and h(x1, . . . , xn) = ∑j jxj. Conversely g = f2 − f1 and h = g − f1, so for n ≥ 2 the span of f1, . . . , fn equals the span of {g, h}. Therefore the subspace annihilated by f1, . . . , fn is Ng ∩ Nh. By the comments on page 101, Ng and Nh are hyperspaces of dimension n − 1, and they are distinct: for example ε1 − ε2 ∈ Ng but h(ε1 − ε2) = 1 − 2 = −1 ≠ 0. The intersection of two distinct hyperspaces has dimension n − 2, so the subspace annihilated by f1, . . . , fn has dimension n − 2.
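A numerical check of this dimension count (a sketch assuming numpy): the matrix with entries k − j has rank 2 for every n ≥ 2, so its null space, the annihilated subspace, has dimension n − 2.

```python
import numpy as np

# Rows of M are the coefficient vectors of f_1, ..., f_n.
for n in [2, 3, 5, 8]:
    M = np.array([[k - j for j in range(1, n + 1)]
                  for k in range(1, n + 1)], dtype=float)
    rank = np.linalg.matrix_rank(M)
    nullity = n - rank  # dimension of the annihilated subspace
    print(n, rank, nullity)  # rank is 2 for every n >= 2
```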

Exercise 11: Let W1 and W2 be subspaces of a finite-dimensional vector space V.

(a) Prove that (W1 + W2)⁰ = W1⁰ ∩ W2⁰.
(b) Prove that (W1 ∩ W2)⁰ = W1⁰ + W2⁰.
Solution: (a) f ∈ (W1 + W2)⁰ ⇒ f(v) = 0 ∀ v ∈ W1 + W2 ⇒ f(w1 + w2) = 0 ∀ w1 ∈ W1, w2 ∈ W2 ⇒ f(w1) = 0 ∀ w1 ∈ W1 (take w2 = 0) and f(w2) = 0 ∀ w2 ∈ W2 (take w1 = 0). Thus f ∈ W1⁰ and f ∈ W2⁰, so f ∈ W1⁰ ∩ W2⁰. Thus (W1 + W2)⁰ ⊆ W1⁰ ∩ W2⁰.

Conversely, let f ∈ W1⁰ ∩ W2⁰. Let v ∈ W1 + W2. Then v = w1 + w2 where wi ∈ Wi. Thus f(v) = f(w1 + w2) = f(w1) + f(w2) = 0 + 0 (since f ∈ W1⁰ and f ∈ W2⁰). Thus f(v) = 0 ∀ v ∈ W1 + W2, so f ∈ (W1 + W2)⁰. Thus W1⁰ ∩ W2⁰ ⊆ (W1 + W2)⁰.

Since both inclusions hold, (W1 + W2)⁰ = W1⁰ ∩ W2⁰.

(b) f ∈ W1⁰ + W2⁰ ⇒ f = f1 + f2 for some fi ∈ Wi⁰. Now let v ∈ W1 ∩ W2. Then f(v) = ( f1 + f2)(v) = f1(v) + f2(v) = 0 + 0 = 0. Thus f ∈ (W1 ∩ W2)⁰. Thus W1⁰ + W2⁰ ⊆ (W1 ∩ W2)⁰.

Now let f ∈ (W1 ∩ W2)⁰. In the proof of Theorem 6 on page 46 it was shown that we can choose a basis for W1 + W2

{α1, . . . , αk, β1, . . . , βm, γ1, . . . , γn}

where {α1, . . . , αk} is a basis for W1 ∩ W2, {α1, . . . , αk, β1, . . . , βm} is a basis for W1 and {α1, . . . , αk, γ1, . . . , γn} is a basis for W2. We expand this to a basis for all of V

{α1, . . . , αk, β1, . . . , βm, γ1, . . . , γn, λ1, . . . , λℓ}.

Now the general element v ∈ V can be written as

v = ∑_{i=1}^k xi αi + ∑_{i=1}^m yi βi + ∑_{i=1}^n zi γi + ∑_{i=1}^ℓ wi λi    (20)

and f is given by

f(v) = ∑_{i=1}^k ai xi + ∑_{i=1}^m bi yi + ∑_{i=1}^n ci zi + ∑_{i=1}^ℓ di wi

for some constants ai, bi, ci, di. Since f(v) = 0 for all v ∈ W1 ∩ W2, it follows that a1 = · · · = ak = 0. So

f(v) = ∑ bi yi + ∑ ci zi + ∑ di wi.

Define

f1(v) = ∑ ci zi + ∑ di wi    and    f2(v) = ∑ bi yi.

Then f = f1 + f2. Now if v ∈ W1 then

v = ∑_{i=1}^k xi αi + ∑_{i=1}^m yi βi,

so the coefficients zi and wi in (20) are all zero. Thus f1(v) = 0, so f1 ∈ W1⁰. Similarly if v ∈ W2 then the coefficients yi and wi in (20) are all zero and thus f2(v) = 0, so f2 ∈ W2⁰. Thus f = f1 + f2 where f1 ∈ W1⁰ and f2 ∈ W2⁰. Thus f ∈ W1⁰ + W2⁰. Thus (W1 ∩ W2)⁰ ⊆ W1⁰ + W2⁰.

Since both inclusions hold, (W1 ∩ W2)⁰ = W1⁰ + W2⁰.

Exercise 12: Let V be a finite-dimensional vector space over the field F and let W be a subspace of V. If f is a linear functional on W, prove that there is a linear functional g on V such that g(α) = f(α) for each α in the subspace W.

Solution: Let B be a basis for W and let B′ be a basis for V such that B ⊆ B′. A linear functional on a vector space is uniquely determined by its values on a basis, and conversely any function on the basis can be extended to a linear functional on the space. Thus we define g on B by g(β) = f(β) ∀ β ∈ B, and define g(β) = 0 for all β ∈ B′ \ B. Since we have defined g on B′, it determines a linear functional on V, and since it agrees with f on a basis for W it agrees with f on all of W.

Exercise 13: Let F be a subfield of the field of complex numbers and let V be any vector space over F. Suppose that f and
g are linear functionals on V such that the function h defined by h(α) = f (α)g(α) is also a linear functional on V. Prove that
either f = 0 or g = 0.

Solution: Suppose neither f nor g is the zero function; we will derive a contradiction. Let v ∈ V. Then h(2v) = f(2v)g(2v) = 4 f(v)g(v). But also h(2v) = 2h(v) = 2 f(v)g(v). Therefore 2 f(v)g(v) = f(v)g(v) ∀ v ∈ V, so f(v)g(v) = 0 ∀ v ∈ V.

Let B be a basis for V. Let B1 = {β ∈ B | f(β) = 0} and B2 = {β ∈ B | g(β) = 0}. Since f(β)g(β) = 0 ∀ β ∈ B, we have B = B1 ∪ B2. Suppose B1 ⊆ B2. Then B2 = B and consequently g is the zero function, a contradiction. Thus B1 ⊈ B2, and similarly B2 ⊈ B1. Thus we can choose β1 ∈ B1 \ B2 and β2 ∈ B2 \ B1, so that f(β1) = g(β2) = 0 while f(β2) ≠ 0 and g(β1) ≠ 0. Then f(β1 + β2)g(β1 + β2) = f(β1)g(β1) + f(β2)g(β1) + f(β1)g(β2) + f(β2)g(β2) = f(β2)g(β1), which is non-zero since each factor is non-zero. And this contradicts the fact that f(v)g(v) = 0 ∀ v ∈ V.

Exercise 14: Let F be a field of characteristic zero and let V be a finite-dimensional vector space over F. If α1 , . . . , αm are
finitely many vectors in V, each different from the zero vector, prove that there is a linear functional f on V such that

f (αi ) , 0, i = 1, . . . , m.

Solution: Re-index if necessary so that {α1 , . . . , αk } is a basis for the subspace generated by {α1 , . . . , αm }. So each αk+1 , . . . , αm
can be written in terms of α1 , . . . , αk . Extend {α1 , . . . , αk } to a basis for V

{α1 , . . . , αk , β1 , . . . , βn }.

For each i = k + 1, . . . , m write αi = ∑_{j=1}^k Aij αj. Since αk+1, . . . , αm are all non-zero, for each i = k + 1, . . . , m there exists ji ≤ k such that Ai,ji ≠ 0. Now define f by mapping α1, . . . , αk to k arbitrary non-zero values and map each βi to zero. Then f(αk+1) = ∑_{j=1}^k Ak+1,j f(αj). If f(αk+1) = 0 then, leaving f(αi) fixed for all other i ≤ k and adjusting f(αjk+1), the value f(αk+1) equals zero for exactly one possible value of f(αjk+1) (since Ak+1,jk+1 ≠ 0). Thus we can redefine f(αjk+1) so that f(αk+1) ≠ 0 while maintaining f(αjk+1) ≠ 0.

Now if f(αk+2) = 0, then leaving f(αi) fixed for i ≠ jk+2, it equals zero for exactly one possible value of f(αjk+2) (since Ak+2,jk+2 ≠ 0). Since F has characteristic zero, it is infinite, so we can adjust f(αjk+2) to avoid the finitely many bad values and make f(αjk+2) ≠ 0, f(αk+1) ≠ 0 and f(αk+2) ≠ 0 simultaneously.

Continuing in this way we can adjust f(αjk+3), . . . , f(αjm) as necessary until all of f(αk+1), . . . , f(αm) are non-zero and also all of f(α1), . . . , f(αk) are non-zero.

Exercise 15: According to Exercise 3, similar matrices have the same trace. Thus we can define the trace of a linear operator
on a finite-dimensional space to be the trace of any matrix which represents the operator in an ordered basis. This is well-
defined since all such representing matrices for one operator are similar.
Now let V be the space of all 2 × 2 matrices over the field F and let P be a fixed 2 × 2 matrix. Let T be the linear operator
on V defined by T (A) = PA. Prove that trace(T ) = 2trace(P).

Solution: Write

P = [ P11 P12 ]
    [ P21 P22 ].

Let

e11 = [ 1 0 ],  e12 = [ 0 1 ],
      [ 0 0 ]         [ 0 0 ]

e21 = [ 0 0 ],  e22 = [ 0 0 ].
      [ 1 0 ]         [ 0 1 ]

Then B = {e11, e12, e21, e22} is an ordered basis for V. We find the matrix of the linear transformation T with respect to this basis.

T(e11) = [ P11 0 ] = P11 e11 + P21 e21
         [ P21 0 ]

T(e12) = [ 0 P11 ] = P11 e12 + P21 e22
         [ 0 P21 ]

T(e21) = [ P12 0 ] = P12 e11 + P22 e21
         [ P22 0 ]

T(e22) = [ 0 P12 ] = P12 e12 + P22 e22.
         [ 0 P22 ]

Thus the matrix of T with respect to B is

[ P11 0   P12 0   ]
[ 0   P11 0   P12 ]
[ P21 0   P22 0   ]
[ 0   P21 0   P22 ].

The trace of this matrix is 2P11 + 2P22 = 2 trace(P).
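The computation above can be checked numerically (a sketch assuming numpy) by building the 4 × 4 matrix of T(A) = PA in the basis {e11, e12, e21, e22} for a sample P:

```python
import numpy as np

rng = np.random.default_rng(1)
P = rng.standard_normal((2, 2))

# Basis e11, e12, e21, e22 in row-major order.
basis = []
for i in range(2):
    for j in range(2):
        E = np.zeros((2, 2))
        E[i, j] = 1.0
        basis.append(E)

# Column k of M holds the coordinates of T(basis[k]) in the basis;
# for 2x2 matrices those coordinates are just the entries, flattened row-major.
M = np.column_stack([(P @ E).reshape(-1) for E in basis])
print(np.isclose(np.trace(M), 2 * np.trace(P)))  # True
```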

Exercise 16: Show that the trace functional on n × n matrices is unique in the following sense. If W is the space of n × n
matrices over the field F and if f is a linear functional on W such that f (AB) = f (BA) for each A and B in W, then f is a
scalar multiple of the trace function. If, in addition, f (I) = n then f is the trace function.

Solution: Let A and B be n × n matrices. The ℓ, m entry of AB is

(AB)ℓm = ∑_{k=1}^n Aℓk Bkm    (21)

and the ℓ, m entry of BA is

(BA)ℓm = ∑_{k=1}^n Bℓk Akm.    (22)

Fix i, j ∈ {1, . . . , n} such that i > j. Let A be the matrix where Aij = 1 and all other entries are zero. Let B be the matrix where Bii = 1 and all other entries are zero. Consider the general entry of AB from (21). The only non-zero entry of A in the sum on the right is Aij, but Bjm = 0 for every m since j ≠ i and only Bii ≠ 0. Thus AB is the zero matrix.

Now we compute BA. From (22) the only non-zero term occurs when ℓ = i, m = j and k = i. Thus the matrix BA has zeros in every position except for the i, j position, where it equals one.

Now the general functional on n × n matrices is of the form

f(M) = ∑_{ℓ=1}^n ∑_{m=1}^n cℓm Mℓm

for some constants cℓm. Now f(AB) = f(0) = 0 and f(BA) = cij. So if f(AB) = f(BA) then it follows that cij = 0.

Thus we have shown that cij = 0 for all i > j. Similarly cij = 0 for all i < j. Thus the only possible non-zero coefficients are c11, . . . , cnn, and

f(M) = ∑_{i=1}^n cii Mii.

We will be done if we show c11 = cmm for all m = 2, . . . , n. Fix 2 ≤ i ≤ n. Let A be the matrix such that A11 = Ai1 = 1 and Aℓm = 0 in all other positions. Let B = Aᵗ. Then AB is zero in every position except (AB)11 = (AB)1i = (AB)i1 = (AB)ii = 1, and BA is zero in every position except (BA)11 = 2. Thus f(AB) = c11 + cii (recall that the off-diagonal coefficients are zero) and f(BA) = 2c11. Thus if f(AB) = f(BA) then c11 + cii = 2c11, which implies c11 = cii. Thus there is a constant c such that cii = c for all i.

Thus f is given by

f(M) = c ∑_{i=1}^n Mii = c · trace(M).

If f (I) = n then c = 1 and we have the trace function.

Exercise 17: Let W be the space of n × n matrices over the field F, and let W0 be the subspace spanned by the matrices
C of the form C = AB − BA. Prove that W0 is exactly the subspace of matrices which have trace zero. (Hint: What is the
dimension of the space of matrices of trace zero? Use the matrix ’units,’ i.e., matrices with exactly one non-zero entry, to
construct enough linearly independent matrices of the form AB − BA.)

Solution: Let W′ = {w ∈ W | trace(w) = 0}. We want to show W′ = W0. We know from Exercise 3 that trace(AB − BA) = 0 for all matrices A, B. Since matrices of the form AB − BA span W0, it follows that trace(M) = 0 for all M ∈ W0. Thus W0 ⊆ W′.

Since the trace function is a non-zero linear functional, the dimension of W′ is dim(W) − 1 = n² − 1. Thus if we show the dimension of W0 is also n² − 1 then we will be done. We do this by exhibiting n² − 1 linearly independent elements of W0. Denote by Eij the matrix with a one in the i, j position and zeros in all other positions. Let Hij = Eii − Ejj. Let B = {Eij | i ≠ j} ∪ {H1i | 2 ≤ i ≤ n}. We will show that B ⊆ W0 and that B is a linearly independent set. First, the elements of B are linearly independent because Eij is the only vector in B with a non-zero value in the i, j position (i ≠ j) and H1i is the only vector in B with a non-zero value in the i, i position (i ≥ 2). Now Eij = Eii Eij − Eij Eii (for i ≠ j) and Hij = Eij Eji − Eji Eij. Thus Eij ∈ W0 and Hij ∈ W0. Now |B| = |{Eij | i ≠ j}| + |{H1i | 2 ≤ i ≤ n}| = (n² − n) + (n − 1) = n² − 1. Thus we are done.
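Commutator identities of this kind are easy to check numerically (a sketch assuming numpy; here Eij = Eii Eij − Eij Eii and Hij = Eij Eji − Eji Eij are verified for n = 3):

```python
import numpy as np

n = 3
def E(i, j):
    # Matrix unit with a one in position (i, j).
    M = np.zeros((n, n))
    M[i, j] = 1.0
    return M

for i in range(n):
    for j in range(n):
        if i == j:
            continue
        H = E(i, i) - E(j, j)
        assert np.array_equal(E(i, j), E(i, i) @ E(i, j) - E(i, j) @ E(i, i))
        assert np.array_equal(H, E(i, j) @ E(j, i) - E(j, i) @ E(i, j))
print("identities hold")
```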

Section 3.6: The Double Dual


Exercise 1: Let n be a positive integer and F a field. Let W be the set of all vectors (x1 , . . . , xn ) in F n such that x1 +· · ·+ xn = 0.
(a) Prove that W 0 consists of all linear functionals f of the form
f(x1, . . . , xn) = c ∑_{j=1}^n xj.

(b) Show that the dual space W ∗ of W can be ‘naturally’ identified with the linear functionals

f (x1 , . . . , xn ) = c1 x1 + · · · + cn xn

on F n which satisfy c1 + · · · + cn = 0.
Solution: (a) Let g be the functional g(x1, . . . , xn) = x1 + · · · + xn. Then W is exactly the kernel of g, so dim(W) = n − 1. Let αi = ε1 − εi+1 for i = 1, . . . , n − 1. Then α1, . . . , αn−1 are linearly independent and are all in W, so they must be a basis for W. Let f(x1, . . . , xn) = c1x1 + · · · + cnxn be a linear functional. Then f ∈ W⁰ ⇒ f(α1) = · · · = f(αn−1) = 0 ⇒ c1 − ci = 0 ∀ i = 2, . . . , n ⇒ ∃ c such that ci = c ∀ i. Thus f(x1, . . . , xn) = c(x1 + · · · + xn).

(b) Consider the sequence of functions

W → (F n)∗ → W∗

where the first function is

(c1, . . . , cn) ↦ fc1,...,cn,

with fc1,...,cn(x1, . . . , xn) = c1x1 + · · · + cnxn, and the second function is restriction from F n to W. Here we regard W itself as the space of coefficient vectors (c1, . . . , cn) with c1 + · · · + cn = 0. We know W and W∗ have the same dimension, so if we show the composition of these two functions is one-to-one then it must be an isomorphism.

Suppose (c1, . . . , cn) ∈ W maps to fc1,...,cn = 0 ∈ W∗. In other words, ∑ci = 0 and ∑ci xi = 0 for all (x1, . . . , xn) such that ∑xi = 0.

Let {α1, . . . , αn−1} be the basis for W from part (a). Then fc1,...,cn(αi) = 0 ∀ i = 1, . . . , n − 1, which implies c1 = ci ∀ i = 2, . . . , n. Thus ∑ci = nc1. But ∑ci = 0, so nc1 = 0 and hence c1 = 0 (at least when the characteristic of F does not divide n). Thus fc1,...,cn is the zero function.

Thus the mapping W → W∗ is a natural isomorphism. We therefore naturally identify each element of W∗ with a linear functional f(x1, . . . , xn) = c1x1 + · · · + cnxn where ∑ci = 0.
P

Exercise 2: Use Theorem 20 to prove the following. If W is a subspace of a finite-dimensional vector space V and if {g1, . . . , gr} is any basis for W⁰, then

W = ∩_{i=1}^r Ngi.

Solution: {g1, . . . , gr} a basis for W⁰ ⇒ gi ∈ W⁰ ∀ i ⇒ gi(W) = 0 ∀ i ⇒ W ⊆ Ngi ∀ i ⇒ W ⊆ ∩_{i=1}^r Ngi. Let n = dim(V). By Theorem 2, page 71, we know dim(Ng1) = n − 1. Since the gi's are linearly independent, g2 is not a multiple of g1; thus by Theorem 20, Ng1 ⊈ Ng2. Thus dim(Ng1 ∩ Ng2) ≤ n − 2. By Theorem 20 again, Ng1 ∩ Ng2 ⊈ Ng3 since g3 is not a linear combination of g1 and g2. Thus dim(Ng1 ∩ Ng2 ∩ Ng3) ≤ n − 3. By induction dim(∩_{i=1}^r Ngi) ≤ n − r. Now by Theorem 16, dim(W) = n − r. Thus since W ⊆ ∩_{i=1}^r Ngi, it follows that dim(∩_{i=1}^r Ngi) ≥ n − r. Thus dim(∩_{i=1}^r Ngi) = n − r, and it must be that W = ∩_{i=1}^r Ngi since we have shown the left hand side is contained in the right hand side and both sides have the same dimension.

Exercise 3: Let S be a set, F a field, and V(S ; F) the space of all functions from S into F:

( f + g)(x) = f (x) + g(x)

(c f )(x) = c f (x).
Let W be any n-dimensional subspace of V(S ; F). Show that there exist points x1 , . . . , xn in S and functions f1 , . . . , fn in W
such that fi (x j ) = δi j .

Solution: I'm not sure using the double dual is really the easiest way to prove this. It can be done rather easily directly by induction on n (in fact see question 121704 on math.stackexchange.com). However, H&K clearly want this done with the double dual. At first glance you might try to think of W as a dual on S and W∗ as the double dual somehow, but that doesn't work since S is just a set. Instead I think you have to consider the double dual W∗∗ of W to make it work. I came up with the following solution.

Let s ∈ S. We first show that the function

φs : W → F,    w ↦ w(s)

is a linear functional on W (in other words, for each s we have φs ∈ W∗).

Let w1, w2 ∈ W, c ∈ F. Then φs(cw1 + w2) = (cw1 + w2)(s), which by definition equals cw1(s) + w2(s), which equals cφs(w1) + φs(w2). Thus φs is a linear functional on W.

Suppose φs(w) = 0 for all s ∈ S, w ∈ W. Then w(s) = 0 ∀ s ∈ S, w ∈ W, which implies dim(W) = 0. So as long as n > 0, ∃ s1 ∈ S such that φs1(w) ≠ 0 for some w ∈ W. Equivalently there is an s1 ∈ S and a w1 ∈ W such that w1(s1) ≠ 0. This means φs1 ≠ 0 as an element of W∗. It follows that ⟨φs1⟩, the subspace of W∗ generated by φs1, has dimension one. By scaling if necessary, we can further assume w1(s1) = 1.

Now suppose ∀ s ∈ S that we have φs ∈ ⟨φs1⟩, the subspace of W∗ generated by φs1. Then for each s ∈ S there is a c(s) ∈ F such that φs = c(s)φs1 in W∗. Then for each s ∈ S, w(s) = c(s)w(s1) for all w ∈ W. In particular w1(s) = c(s) (recall w1(s1) = 1). Let w ∈ W and let b = w(s1). Then w(s) = c(s)w(s1) = bw1(s) ∀ s ∈ S. Notice that b depends on w but does not depend on s. Thus w = bw1 as functions on S, where b ∈ F is a fixed constant. Thus w ∈ ⟨w1⟩, the subspace of W generated by w1. Since w was arbitrary, it follows that dim(W) = 1. Thus as long as dim(W) ≥ 2 we can find w2 ∈ W and s2 ∈ S such that ⟨w1, w2⟩ (the subspace of W generated by w1, w2) and ⟨φs1, φs2⟩ (the subspace of W∗ generated by {φs1, φs2}) both have dimension two. Let W0 = ⟨w1, w2⟩. Then we've shown that {φs1, φs2} (restricted to W0) is a basis for W0∗. Therefore there's a dual basis {F1, F2} ⊆ W0∗∗, so that Fi(φsj) = δij, i, j ∈ {1, 2}. By Theorem 17, there are corresponding w1, w2 ∈ W0 so that Fi = Lwi (in the notation of Theorem 17). Therefore δij = Fi(φsj) = Lwi(φsj) = φsj(wi) = wi(sj) for i, j ∈ {1, 2}.

Now suppose ∀ s ∈ S that we have φs ∈ ⟨φs1, φs2⟩ ⊆ W∗. Then ∀ s ∈ S there are constants c1(s), c2(s) ∈ F such that w(s) = c1(s)w(s1) + c2(s)w(s2) for all w ∈ W. Similar to the argument in the previous paragraph, this implies dim(W) ≤ 2 (for w ∈ W let b1 = w(s1) and b2 = w(s2) and argue as before). Therefore, as long as dim(W) ≥ 3 we can find s3 so that ⟨φs1, φs2, φs3⟩, the subspace of W∗ generated by φs1, φs2, φs3, has dimension three. And as before we can find w3 ∈ W such that wi(sj) = δij for i, j ∈ {1, 2, 3}.

Continuing in this way we can find n elements s1, . . . , sn ∈ S such that φs1, . . . , φsn are linearly independent in W∗ and corresponding elements w1, . . . , wn ∈ W such that wi(sj) = δij. Let fi = wi and we are done.

Section 3.7: The Transpose of a Linear Transformation


Exercise 1: Let F be a field and let f be the linear functional on F² defined by f(x1, x2) = ax1 + bx2. For each of the following linear operators T, let g = T t f, and find g(x1, x2).

(a) T(x1, x2) = (x1, 0);
(b) T(x1, x2) = (−x2, x1);
(c) T(x1, x2) = (x1 − x2, x1 + x2).

Solution:

(a) g(x1, x2) = T t f(x1, x2) = f(T(x1, x2)) = f(x1, 0) = ax1.

(b) g(x1, x2) = T t f(x1, x2) = f(T(x1, x2)) = f(−x2, x1) = −ax2 + bx1.

(c) g(x1, x2) = T t f(x1, x2) = f(T(x1, x2)) = f(x1 − x2, x1 + x2) = a(x1 − x2) + b(x1 + x2) = (a + b)x1 + (b − a)x2.

Exercise 2: Let V be the vector space of all polynomial functions over the field of real numbers. Let a and b be fixed real
numbers and let f be the linear functional on V defined by
f(p) = ∫_a^b p(x) dx.

If D is the differentiation operator on V, what is Dt f ?

Solution: Let p(x) = c0 + c1x + · · · + cnxⁿ. Then

Dt f(p) = f(D(p))
= f(c1 + 2c2x + 3c3x² + · · · + ncnxⁿ⁻¹)
= [c1x + c2x² + · · · + cnxⁿ]_a^b
= p(b) − p(a).

Thus Dt f is the linear functional p ↦ p(b) − p(a).
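This is just the fundamental theorem of calculus, which can be checked numerically for a sample polynomial (a sketch assuming numpy's polynomial module):

```python
import numpy as np
from numpy.polynomial import Polynomial

p = Polynomial([2.0, -1.0, 0.5, 3.0])    # 2 - x + 0.5 x^2 + 3 x^3
a, b = -1.0, 2.0

dp = p.deriv()                           # D(p)
f_of_Dp = dp.integ()(b) - dp.integ()(a)  # f(D(p)) = integral of p' from a to b
print(np.isclose(f_of_Dp, p(b) - p(a)))  # True
```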

Exercise 3: Let V be the space of all n × n matrices over a field F and let B be a fixed n × n matrix. If T is the linear operator
on V defined by T (A) = AB − BA, and if f is the trace function, what is T t f ?

Solution: By exercise 3 in section 3.5, we know trace(AB) = trace(BA). Thus T t f (A) = f (T (A)) = trace(AB − BA) =
trace(AB) − trace(BA) = 0.

Exercise 4: Let V be a finite-dimensional vector space over the field F and let T be a linear operator on V. Let c be a scalar
and suppose there is a non-zero vector α in V such that T α = cα. Prove that there is a non-zero linear functional f on V such
that T t f = c f .

Solution: Consider the operator U = T − cI. Then U(α) = 0 with α ≠ 0, so rank(U) < n. Therefore rank(U t) = rank(U) < n as an operator on V∗, so U t has a non-trivial null space. It follows that there is a non-zero f ∈ V∗ such that U t( f ) = 0. Now U t = T t − cI, thus T t( f ) = c f.

Exercise 5: Let A be an m × n matrix with real entries. Prove that A = 0 if and only if trace(At A) = 0.

Solution: Suppose A is an m × n matrix with entries aij and B is an n × p matrix with entries bij. Then the i, j entry of AB is

∑_{k=1}^n aik bkj.

Substituting At for A and A for B, we get that the i, j entry of At A is (note that At A has dimension n × n)

∑_{k=1}^m aki akj.

Thus the diagonal entries of At A are

∑_{k=1}^m aki²

for i = 1, . . . , n. Thus the trace is

trace(At A) = ∑_{i=1}^n ∑_{k=1}^m aki².

If all aij ∈ R then this sum of squares is zero if and only if each aij = 0, because every entry of A appears in it.
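Numerically (a sketch assuming numpy), trace(AᵗA) equals the sum of the squares of the entries of A, so it is strictly positive for any non-zero real A:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 4))

lhs = np.trace(A.T @ A)
rhs = np.sum(A**2)
print(np.isclose(lhs, rhs), lhs > 0)                    # True True for non-zero A
print(np.trace(np.zeros((3, 4)).T @ np.zeros((3, 4))))  # 0.0 for the zero matrix
```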

Exercise 6: Let n be a positive integer and let V be the space of all polynomial functions over the field of real numbers which
have degree at most n, i.e., functions of the form

f (x) = c0 + c1 x + · · · + cn xn .

Let D be the differentiation operator on V. Find a basis for the null space of the transpose operator Dt .

Solution: The null space of Dt consists of all linear functionals g : V → R such that Dt g = 0, or equivalently g(D( f )) = 0 ∀ f ∈ V. Now g(a0 + a1x + · · · + anxⁿ) = c0a0 + c1a1 + · · · + cnan for some constants c0, . . . , cn ∈ R. So

g(D(a0 + a1x + · · · + anxⁿ)) = g(a1 + 2a2x + · · · + nanxⁿ⁻¹) = ∑_{i=0}^{n−1} (i + 1)ci ai+1.

This sum equals zero for all coefficient vectors (a0, a1, . . . , an) ∈ Rⁿ⁺¹ if and only if c0 = c1 = · · · = cn−1 = 0, while cn can be anything. Therefore the null space has dimension one and a basis is given by taking cn = 1, which gives the function g : ∑ aixⁱ ↦ an, the projection onto the xⁿ coefficient.

Exercise 7: Let V be a finite-dimensional vector space over the field F. Show that T → T t is an isomorphism of L(V, V) onto
L(V ∗ , V ∗ ).

Solution: Choose a basis B for V. This gives an isomorphism L(V, V) → Mn the space of n × n matrices. And the dual basis
B0 gives an isomorphism L(V ∗ , V ∗ ) → Mn . Now we know by Theorem 23 that the following diagram of functions commutes.

This means that if we start at L(V, V) and follow two functions to the Mn in the bottom right, it does not matter which way around the diagram we go; we end up at the same place.

L(V, V) −→ Mn
transpose ↓ ↓ transpose
L(V ∗ , V ∗ ) −→ Mn

Both horizontal arrows are isomorphisms (by Theorem 12, page 88). Now clearly transpose on matrices is a one-to-one and
onto function from the set of matrices to itself. Also (rA)t = rAt and (A + B)t = At + Bt for any two n × n matrices A and
B. Thus transpose is also a linear transformation. Thus transpose is an isomorphism. Therefore three of the arrows in this
diagram are isomorphisms and it follows that the fourth arrow must also be an isomorphism.

Exercise 8: Let V be the vector space of n × n matrices over the field F.

(a) If B is a fixed n × n matrix, define a function fB on V by fB(A) = trace(Bt A). Show that fB is a linear functional on V.
(b) Show that every linear functional on V is of the above form, i.e., is fB for some B.
(c) Show that B ↦ fB is an isomorphism of V onto V∗.

Solution:

(a) This follows from the fact that the trace function is a linear functional and left multiplication by a fixed matrix is a linear transformation from V to V. In other words, fB(cA1 + A2) = trace(Bt(cA1 + A2)) = trace(cBt A1 + Bt A2) = c · trace(Bt A1) + trace(Bt A2) = c · fB(A1) + fB(A2).

(b) Let f : V → F be a linear functional. Let A = (aij) ∈ V. Then

f(A) = ∑_{i, j=1}^n cij aij    (23)

for some fixed constants cij ∈ F. Now let B = (bij) ∈ V be any matrix. Then the i, j entry of Bt A is ∑_{k=1}^n bki akj. Thus

trace(Bt A) = ∑_{i=1}^n ∑_{k=1}^n bki aki.    (24)

Comparing (23) and (24), we see each aij appears exactly once in each sum. So setting bki = cki for all i, k = 1, . . . , n we get the appropriate matrix B such that trace(Bt A) = f(A) for all A, i.e., fB = f.

(c) Let F be the function F : V → V∗ such that F(B) = fB. Part (a) shows this function maps into V∗. Part (b) shows it is onto V∗. We must show it is linear and one-to-one. Let r ∈ F and B1, B2 ∈ V. Then (rB1 + B2)t A = (rB1t + B2t)A = rB1t A + B2t A, and the trace function itself is linear; thus F is linear. Now suppose trace(Bt A) = 0 ∀ A ∈ V. Fix i, j ∈ {1, 2, . . . , n} and let A be the matrix with a one in the i, j position and zeros elsewhere. Then by the computation in part (b), trace(Bt A) = bij. So if F(B) = 0 then bij = 0 for all i, j, and B must be the zero matrix. Thus F is one-to-one. It follows that F is an isomorphism.
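The key computation here is that trace(BᵗA) is the entrywise pairing ∑ bij aij, which is easy to confirm numerically (a sketch assuming numpy):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

f_B = np.trace(B.T @ A)     # f_B(A) = trace(B^t A)
entrywise = np.sum(B * A)   # sum over i, j of b_ij * a_ij
print(np.isclose(f_B, entrywise))  # True
```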
