

MTL101
Lecture 13 (Matrix representation of a linear transformation)
(43) We continue to assume that the vector spaces we consider are all finite dimensional (unless
otherwise stated). Suppose T : V → W is a linear transformation of vector spaces over F
and suppose B = {v_1, v_2, . . . , v_m}, B′ = {w_1, w_2, . . . , w_n} are ordered bases of V and W
respectively. We know that the image of each vector in B is a linear combination of vectors in
B′. Let

    T(v_j) = ∑_{i=1}^{n} t_{i,j} w_i   for 1 ≤ j ≤ m,

where t_{i,j} ∈ F. Thus we obtain an n × m matrix A (warning: not m × n) whose (i, j)-th
entry is t_{i,j}; the coefficients of T(v_j) in the above expression constitute the j-th
column of A. This matrix is called the matrix representation of T with respect to the bases
B, B′ and is denoted by [T]^{B′}_B. When V = W and B = B′, we denote the matrix representation
of T simply by [T]_B.

Example: Let the linear transformation T : R^3 → R^2 be defined by T(x, y, z) = (2x + z, y + 3z),
B = {(1, 1, 0), (1, 0, 1), (1, 1, 1)}, B′ = {(2, 3), (3, 2)}. Then

    T(1, 1, 0) = (2, 1) = −1/5 (2, 3) + 4/5 (3, 2),
    T(1, 0, 1) = (3, 3) =  3/5 (2, 3) + 3/5 (3, 2),
    T(1, 1, 1) = (3, 4) =  6/5 (2, 3) + 1/5 (3, 2).

Thus

    [T]^{B′}_B = [ −1/5  3/5  6/5 ]
                 [  4/5  3/5  1/5 ]
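
As a quick sanity check of the computation above, here is a minimal NumPy sketch (an illustration, not part of the lecture): it builds the j-th column of [T]^{B′}_B by solving for the coordinates of T(v_j) in the basis B′. The names T, B and Bp are ad hoc choices for this sketch.

```python
import numpy as np

# Numerical check of the example: T(x, y, z) = (2x + z, y + 3z), B spans R^3, B' spans R^2.
T = lambda v: np.array([2*v[0] + v[2], v[1] + 3*v[2]])
B  = [np.array([1., 1., 0.]), np.array([1., 0., 1.]), np.array([1., 1., 1.])]
Bp = np.column_stack([[2., 3.], [3., 2.]])   # columns are the vectors of B'

# j-th column of [T]^{B'}_B = coordinates of T(v_j) in B', obtained by solving Bp * c = T(v_j).
M = np.column_stack([np.linalg.solve(Bp, T(v)) for v in B])
print(M)   # [[-0.2  0.6  1.2]
           #  [ 0.8  0.6  0.2]]
```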

(44) Given any vector v ∈ V, we have [T(v)]_{B′} = [T]^{B′}_B [v]_B. This formula relates the coordinate
vectors of v and T(v) by means of the matrix representation of T with respect to the same pair
of bases.
Proof of the formula: Suppose [v]_B = (a_1 a_2 · · · a_m)^T (a column having m rows), so that
v = ∑_{j=1}^{m} a_j v_j. Then

    T(v) = ∑_{j=1}^{m} a_j T(v_j)
         = ∑_{j=1}^{m} a_j ∑_{i=1}^{n} t_{i,j} w_i
         = ∑_{i=1}^{n} ( ∑_{j=1}^{m} t_{i,j} a_j ) w_i,

so that [T(v)]_{B′} = ( ∑_{j=1}^{m} t_{1,j} a_j, ∑_{j=1}^{m} t_{2,j} a_j, . . . , ∑_{j=1}^{m} t_{n,j} a_j )^T = [T]^{B′}_B · (a_1 a_2 · · · a_m)^T, which is what was to be
shown.
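
The formula in (44) can also be checked numerically. The following sketch (an illustration, not part of the lecture) reuses the example of (43): it computes [v]_B and [T(v)]_{B′} by solving linear systems and compares [T(v)]_{B′} with [T]^{B′}_B [v]_B.

```python
import numpy as np

# Check of [T(v)]_{B'} = [T]^{B'}_B [v]_B for the running example.
T = lambda v: np.array([2*v[0] + v[2], v[1] + 3*v[2]])
Bmat  = np.column_stack([[1., 1., 0.], [1., 0., 1.], [1., 1., 1.]])  # columns = basis B of R^3
Bpmat = np.column_stack([[2., 3.], [3., 2.]])                        # columns = basis B' of R^2
M = np.array([[-1/5, 3/5, 6/5],
              [ 4/5, 3/5, 1/5]])                                     # [T]^{B'}_B from the example

v = np.array([1., 2., 3.])                 # an arbitrary test vector
v_B   = np.linalg.solve(Bmat, v)           # coordinate vector [v]_B
Tv_Bp = np.linalg.solve(Bpmat, T(v))       # coordinate vector [T(v)]_{B'}
print(np.allclose(Tv_Bp, M @ v_B))         # True
```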

(45) Lemma: Suppose A, B ∈ M_{m×n}(F). If AX = BX for every X ∈ M_{n×1}(F), then A = B.
Proof of the lemma: Pick X = e_i, the column matrix whose i-th entry is 1 and whose other
entries are all zero. Then Ae_i = Be_i, i.e., (a_{1,i} a_{2,i} · · · a_{m,i})^T = (b_{1,i} b_{2,i} · · · b_{m,i})^T for each i.
Thus a_{i,j} = b_{i,j} for each i and each j, so that A = B. □
Lemma: The map v ↦ [v]_B from V to F^n (where n = dim V) is an isomorphism.
Proof of the lemma: We know that, given an ordered basis B, the coefficients of any vector
are uniquely determined. This assures that the map is well-defined. Further, if [v]_B = [w]_B,
then v = w, so the map is one to one. The map is onto because, given any
n-tuple X = (a_1, a_2, . . . , a_n), the vector v = ∑_{i=1}^{n} a_i v_i ∈ V (where B = {v_1, v_2, . . . , v_n}) is
mapped to X. We leave it to the students to show that [v + w]_B = [v]_B + [w]_B and [av]_B = a[v]_B
for a ∈ F and v, w ∈ V. □
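
A small numerical illustration of this lemma (assuming NumPy; the basis is the one from the earlier example): the coordinate map v ↦ [v]_B is computed by solving a linear system, and the two identities left to the students are checked on sample vectors.

```python
import numpy as np

# The coordinate map v -> [v]_B for a basis B of R^3, and a check of its linearity.
Bmat = np.column_stack([[1., 1., 0.], [1., 0., 1.], [1., 1., 1.]])  # columns = ordered basis B
coord = lambda v: np.linalg.solve(Bmat, v)                          # v -> [v]_B

v = np.array([1., 2., 3.])
w = np.array([0., 1., -1.])
a = 4.0
print(np.allclose(coord(v + w), coord(v) + coord(w)))   # [v + w]_B = [v]_B + [w]_B
print(np.allclose(coord(a * v), a * coord(v)))          # [a v]_B = a [v]_B
```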

(46) Let T : V → W be a linear transformation of finite dimensional vector spaces, let B_1, B′_1
be bases of V and let B_2, B′_2 be bases of W. Recall the change of basis rule:

    [v]_{B′_1} = P [v]_{B_1}   in V,   (3)
    [Tv]_{B′_2} = Q [Tv]_{B_2}   in W,   (4)

where P ∈ M_{m×m}(F) and Q ∈ M_{n×n}(F) are invertible matrices (recall how to write down these
change of basis matrices). Now we apply the formula relating coordinate vectors by the matrix
representation of T to both sides of the second equality (4):

    [T]^{B′_2}_{B′_1} [v]_{B′_1} = Q [T]^{B_2}_{B_1} [v]_{B_1}.

Now we use (3) and get

    [T]^{B′_2}_{B′_1} P [v]_{B_1} = Q [T]^{B_2}_{B_1} [v]_{B_1}.

Then for A = [T]^{B′_2}_{B′_1} P and B = Q [T]^{B_2}_{B_1}, we have AX = BX for every X ∈ M_{m×1}(F) (since
every X is attained as [v]_{B_1}; see the second lemma above). We conclude that [T]^{B′_2}_{B′_1} P = Q [T]^{B_2}_{B_1}
(using the first lemma above). We know that the change of basis matrices P, Q are invertible,
so

    [T]^{B_2}_{B_1} = Q^{−1} [T]^{B′_2}_{B′_1} P.

In particular, when V = W and B_1 = B_2 = B and B′_1 = B′_2 = B′, we have P = Q and
[T]^{B}_{B} = P^{−1} [T]^{B′}_{B′} P. In this case we normally denote the matrix of T with respect to B, B by
simply [T]_B. Thus the above relation becomes

    [T]_B = P^{−1} [T]_{B′} P.

Example: Let the linear transformation T : R^3 → R^2 be defined by T(x, y, z) = (2x + z, y + 3z),
B′_1 = {e_1, e_2, e_3}, B_1 = {(1, 1, 0), (1, 0, 1), (1, 1, 1)}, B′_2 = {e_1, e_2}, B_2 = {(2, 3), (3, 2)}. Then
(see the example above),

    [T]^{B_2}_{B_1} = [ −1/5  3/5  6/5 ]
                      [  4/5  3/5  1/5 ],

    [T]^{B′_2}_{B′_1} = [ 2  0  1 ]
                        [ 0  1  3 ],

    P = [ 1  1  1 ]
        [ 1  0  1 ]
        [ 0  1  1 ],

    Q = [ 2  3 ]
        [ 3  2 ].

You may verify that

    [T]^{B′_2}_{B′_1} P = [ 2  3  3 ] = Q [T]^{B_2}_{B_1}.
                          [ 1  3  4 ]
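
The verification suggested above can also be carried out numerically; the following NumPy sketch (an illustration, not part of the lecture) checks both [T]^{B′_2}_{B′_1} P = Q [T]^{B_2}_{B_1} and [T]^{B_2}_{B_1} = Q^{−1} [T]^{B′_2}_{B′_1} P.

```python
import numpy as np

# Check of the change-of-basis relation for the example above.
T_std = np.array([[2., 0., 1.],
                  [0., 1., 3.]])                                     # [T]^{B2'}_{B1'} (standard bases)
P = np.column_stack([[1., 1., 0.], [1., 0., 1.], [1., 1., 1.]])      # [v]_{B1'} = P [v]_{B1}
Q = np.column_stack([[2., 3.], [3., 2.]])                            # [Tv]_{B2'} = Q [Tv]_{B2}
M = np.array([[-1/5, 3/5, 6/5],
              [ 4/5, 3/5, 1/5]])                                     # [T]^{B2}_{B1}

print(T_std @ P)                                     # [[2. 3. 3.]
                                                     #  [1. 3. 4.]]
print(np.allclose(T_std @ P, Q @ M))                 # True
print(np.allclose(M, np.linalg.inv(Q) @ T_std @ P))  # [T]^{B2}_{B1} = Q^{-1} [T]^{B2'}_{B1'} P
```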

(47) If S, T : V → W are linear transformations, then for any bases B, B′ (respectively of V, W), we
have the following properties (verifications are left to the students; a numerical check of (c) is
sketched after the list):
(a) [S + T]^{B′}_B = [S]^{B′}_B + [T]^{B′}_B.
(b) [λT]^{B′}_B = λ[T]^{B′}_B.
(c) If T′ : W → U is another linear transformation and B′′ is an ordered basis of U, then
    [T′ ∘ T]^{B′′}_B = [T′]^{B′′}_{B′} · [T]^{B′}_B.
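
Here is a hypothetical numerical check of property (c), using the bases of the running example. The helper matrix_rep, the operator T′ (Tp) and the basis B′′ are ad hoc choices made only for this illustration.

```python
import numpy as np

# Check of property (c): [T'∘T]^{B''}_B = [T']^{B''}_{B'} · [T]^{B'}_B.
def matrix_rep(f, dom_basis, cod_basis):
    """Matrix whose j-th column holds the coordinates of f(v_j) in the codomain basis."""
    C = np.column_stack([np.asarray(w, float) for w in cod_basis])
    return np.column_stack([np.linalg.solve(C, f(np.asarray(v, float))) for v in dom_basis])

T  = lambda v: np.array([2*v[0] + v[2], v[1] + 3*v[2]])   # the running example, R^3 -> R^2
Tp = lambda w: np.array([w[0] - w[1], 2*w[0] + w[1]])     # an arbitrary T': R^2 -> R^2

B   = [[1, 1, 0], [1, 0, 1], [1, 1, 1]]   # basis of R^3
Bp  = [[2, 3], [3, 2]]                    # basis B' of R^2
Bpp = [[1, 0], [1, 1]]                    # basis B'' of U = R^2, chosen for illustration

lhs = matrix_rep(lambda v: Tp(T(v)), B, Bpp)              # [T'∘T]^{B''}_B
rhs = matrix_rep(Tp, Bp, Bpp) @ matrix_rep(T, B, Bp)      # [T']^{B''}_{B'} [T]^{B'}_B
print(np.allclose(lhs, rhs))                              # True
```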

MTL101
Lecture 14 (Eigenvalue, Eigenvector, characteristic polynomial of an operator)
(48) Suppose T : V → V is a linear operator on a vector space V . A scalar λ is said to be an
eigenvalue of T if there is a nonzero vector v ∈ V such that T (v) = λv. Such a nonzero vector
v is called an eigenvector of T associated to the eigenvalue λ. For example, zero is the only eigenvalue
of the zero operator; one is the only eigenvalue of the identity operator. However, if zero is
the only eigenvalue of an operator T then T need not be the zero operator. For instance, zero
is the only eigenvalue of T : R2 → R2 defined by T (x, y) = (0, x). Further, if one is the only
eigenvalue of an operator, then it need not be the identity operator. For instance, one is the
only eigenvalue of T (x, y) = (x + y, y).
Example: Let T(x, y) = (2x + 3y, 3x + 2y) be a linear operator on R^2. We have to find λ ∈ R
and (x, y) ≠ (0, 0) in R^2 such that (2x + 3y, 3x + 2y) = λ(x, y), i.e., (2 − λ)x + 3y = 0, 3x + (2 − λ)y = 0,
which is a system of homogeneous linear equations in two unknowns. We know that this system
has a non-zero solution if and only if the determinant of the coefficient matrix is zero. Thus we
must have

    det [ 2 − λ    3    ] = (2 − λ)^2 − 9 = 0,
        [   3    2 − λ  ]

i.e., λ = −1, 5. When λ = −1, 3x + 3y = 0, so that (1, −1)
is an eigenvector (in fact, (−a, a) is an eigenvector corresponding to the eigenvalue −1 for every a ≠ 0).
When λ = 5, 3x − 3y = 0, so that (1, 1) is an eigenvector (in fact, (a, a) is an eigenvector
corresponding to the eigenvalue 5 for every a ≠ 0).
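
For comparison, the same eigenvalues and eigenvectors can be obtained numerically (a sketch assuming NumPy; eig may return them in a different order and normalised to unit length).

```python
import numpy as np

# Eigenvalues and eigenvectors of the matrix of T(x, y) = (2x + 3y, 3x + 2y) in the standard basis.
A = np.array([[2., 3.],
              [3., 2.]])
vals, vecs = np.linalg.eig(A)
print(vals)    # approximately [ 5. -1.] (possibly in a different order)
print(vecs)    # columns are unit eigenvectors, proportional to (1, 1) and (1, -1)
```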
(49) Suppose B, B′ are bases of V. Recall: for an operator T : V → V, the matrices [T]_B and [T]_{B′}
are similar matrices (see the previous lecture). We define the trace and determinant of T by
tr(T) := tr([T]_B), det T := det [T]_B.
We have already seen that the trace and the determinant of similar matrices are the same; therefore,
the trace and the determinant of a linear operator are independent of the choice of basis.
The polynomial det(XI − T), computed as det(XI − [T]_B) for any basis B (again independent of the
choice of B, since similar matrices have the same characteristic polynomial), is called the characteristic
polynomial of T. This is a monic polynomial (the coefficient of the highest degree term is one) of
degree n = dim V. The equation det(XI − T) = 0 is called the characteristic equation.
Theorem: A scalar λ ∈ F is an eigenvalue of T if and only if λ is a root of the characteristic
polynomial of T .
Remark: As a result of the above theorem, finding the eigenvalues of a linear transformation boils down
to finding the roots of a polynomial. You may see in the example above that while finding eigenvalues
we solved a quadratic equation. We cannot escape solving the characteristic equation
to find eigenvalues. We will not discuss the proof of this theorem.
Example: To find the eigenvalues of T : R^3 → R^3 defined by T(x, y, z) = (x + y, y + z, z + x),
we first write down the characteristic polynomial. Choose the standard basis B = {e_1, e_2, e_3} of
R^3; then

    [T]_B = [ 1  1  0 ]
            [ 0  1  1 ]
            [ 1  0  1 ],

so that

    det [ X − 1   −1      0   ]
        [   0    X − 1   −1   ] = X^3 − 3X^2 + 3X − 2 = (X − 2)(X^2 − X + 1)
        [  −1      0    X − 1 ]

is the characteristic polynomial. X = 2 is an eigenvalue and the other
two roots of the characteristic equation are non-real (since the discriminant of the quadratic
factor is −3). We leave it as an exercise to find the eigenvectors corresponding to the eigenvalue 2.
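
A numerical cross-check of this example (assuming NumPy): np.poly returns the coefficients of det(XI − A) for a square matrix A, and np.linalg.eigvals returns its roots.

```python
import numpy as np

# Characteristic polynomial and eigenvalues of [T]_B from the example.
A = np.array([[1., 1., 0.],
              [0., 1., 1.],
              [1., 0., 1.]])
print(np.poly(A))             # approximately [ 1. -3.  3. -2.], i.e. X^3 - 3X^2 + 3X - 2
print(np.linalg.eigvals(A))   # 2 together with the non-real conjugate pair (1 ± i*sqrt(3))/2
```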
Example: T : C^2 → C^2, where T(z_1, z_2) = (z_1 − z_2, z_1 + z_2). The characteristic polynomial is
X^2 − 2X + 2 and the eigenvalues are 1 + i, 1 − i. (1, −i) and (1, i) are eigenvectors corresponding
to 1 + i and 1 − i respectively. Observe that B = {(1, −i), (1, i)} is a basis of C^2 over C and

    [T]_B = [ 1 + i    0   ]
            [   0    1 − i ].

Note that T′ : R^2 → R^2 defined by (x, y) ↦ (x − y, x + y) over R has
no eigenvalue.
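
A NumPy sketch of the contrast drawn in this example (an illustration only; the matrix is the same whether viewed over R or over C).

```python
import numpy as np

# Matrix of T(z1, z2) = (z1 - z2, z1 + z2) in the standard basis.
A = np.array([[1., -1.],
              [1.,  1.]])
print(np.linalg.eigvals(A))        # approximately [1.+1.j  1.-1.j]: both eigenvalues are non-real
print(A @ np.array([1., -1j]))     # equals (1 + i)·(1, -i), confirming (1, -i) is an eigenvector for 1 + i
```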
(50) If A ∈ M_{n×n}(F), the polynomial det(XI − A) is called the characteristic polynomial of the
matrix A. The roots of the characteristic polynomial that lie in F are called the eigenvalues
of A. We often take F = C, the field of complex numbers, so that every root of the characteristic
polynomial is included in our discussion.
