
MA 106: Linear Algebra

Lecture 20

Prof. B.V. Limaye


IIT Dharwad

Wednesday, 21 February 2018



Linear Transformations
In the last lecture, we saw that the elements of an abstract
vector space V play the role of column vectors very well. In
this lecture we shall show that linear transformations from a
vector space V to a vector space W play the role of matrices.
In Lecture 9, we had discussed linear transformations from a
subspace V of Kn×1 to a subspace W of Km×1 . Those
concepts carry over to the general case as follows.
Let V and W be vector spaces over K. A linear
transformation or a linear map from V to W is a function
T : V → W which ‘preserves’ the operations of addition and
scalar multiplication, that is, for all u, v ∈ V and α ∈ K,

T (u + v ) = T (u) + T (v ) and T (α v ) = α T (v ).

It follows that if T : V → W is linear, then T (0) = 0.


Also, T ‘preserves’ linear combinations of elements of V :

T (α1 v1 + · · · + αk vk ) = α1 T (v1 ) + · · · + αk T (vk )

for all v1 , . . . , vk ∈ V and α1 , . . . , αk ∈ K.


Examples
1 . Let A be an m × n matrix with entries in K. Then the
map T : Kn×1 → Km×1 defined by T (x) := A x is linear.
Similarly, the map T ′ : K1×m → K1×n defined by T ′ (y) := yA
is linear.
More generally, the map T : Kn×p → Km×p defined by
T (X) := A X is linear. Similarly, the map T ′ : Kp×m → Kp×n
defined by T ′ (Y) := YA is linear.
2 . The map T : Km×n → Kn×m defined by T (A) := AT is
linear.
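
The linearity of these maps can be checked numerically. Here is a minimal sketch using NumPy; the particular matrices and vectors are arbitrary illustrations, not part of the lecture.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))               # an arbitrary 3 x 4 real matrix
x, y = rng.standard_normal(4), rng.standard_normal(4)
alpha = 2.5

# T(x) := A x preserves addition and scalar multiplication
print(np.allclose(A @ (x + y), A @ x + A @ y))        # True
print(np.allclose(A @ (alpha * x), alpha * (A @ x)))  # True

# T(A) := A^T is linear in A
B = rng.standard_normal((3, 4))
print(np.allclose((A + alpha * B).T, A.T + alpha * B.T))  # True
```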



3 . Let V := c0 , the set of all sequences in K which converge
to 0. Then the map T : V → V defined by
T (x1 , x2 , . . .) := (0, x1 , x2 , . . .) is linear, and so is the map
defined by T ′ (x1 , x2 , . . .) := (x2 , x3 , . . .). Note that T ′ ◦T is the
identity map on V , but T ◦T ′ is not the identity map on V .
The map T is called the right shift operator and the map T ′
is called the left shift operator on V .
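
A sketch of the two shift maps acting on a finite truncation of a sequence; the function names right_shift and left_shift are ad hoc, chosen to match the terminology above.

```python
# T prepends a zero (right shift); T' drops the first entry (left shift).
def right_shift(x):
    return (0.0,) + tuple(x)

def left_shift(x):
    return tuple(x)[1:]

x = (1.0, 0.5, 0.25, 0.125)              # first few terms of a sequence tending to 0
print(left_shift(right_shift(x)))        # (1.0, 0.5, 0.25, 0.125): T'∘T acts as the identity
print(right_shift(left_shift(x)))        # (0.0, 0.5, 0.25, 0.125): T∘T' forgets the first term
```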
4 . Let V := C 1 ([a, b]), the set of all real-valued continuously
differentiable functions, and let W := C ([a, b]), the set of all
real-valued continuous functions on [a, b]. Then the map
T ′ : V → W defined by T ′ (f ) = f ′ is linear.
Also, the map T : W → V defined by T (f )(x) := ∫_a^x f (t) dt
for x ∈ [a, b], is linear.
In fact, the Fundamental Theorem of Calculus says that T ′ ◦T
is the identity map on W . Is T ◦T ′ the identity map on V ?
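
A small symbolic check of the Fundamental Theorem of Calculus statement above, assuming SymPy is available; the second computation also hints at the answer to the question just posed.

```python
import sympy as sp

t, x, a = sp.symbols('t x a', real=True)

# (T'∘T)(f)(x) = d/dx ∫_a^x f(t) dt = f(x), checked for a sample f
f = sp.sin(t)
F = sp.integrate(f, (t, a, x))                       # T(f)(x)
print(sp.simplify(sp.diff(F, x) - f.subs(t, x)))     # 0

# (T∘T')(g)(x) = ∫_a^x g'(t) dt, checked for a sample g in C^1([a, b])
g = sp.exp(t)
print(sp.simplify(sp.integrate(sp.diff(g, t), (t, a, x)) - g.subs(t, x)))  # -exp(a), not 0
```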



Let V and W be vector spaces over K, and let T : V → W be
a linear map. Two important subspaces associated with T are
(i) N (T ) := {v ∈ V : T (v ) = 0}, the null space of T , which
is a subspace of V ,
(ii) I(T ) := {T (v ) : v ∈ V }, the image space of T , which
is a subspace of W .
Suppose V is finite dimensional, and let dim V = n. Since
N (T ) is a subspace of V , it is finite dimensional and
dim N (T ) ≤ n.
Let v1 , . . . , vn be a basis for V . If v ∈ V , then there are
α1 , . . . , αn ∈ K such that v = α1 v1 + · · · + αn vn , so that
T (v ) = α1 T (v1 ) + · · · + αn T (vn ). This shows that
I(T ) = span{T (v1 ), . . . , T (vn )}. Hence I(T ) is also finite
dimensional and dim I(T ) ≤ n.



The dimension of N (T ) is called the nullity of the linear
map T , and the dimension of I(T ) is called the rank of the
linear map T .
Example
Let A ∈ Km×n , and define the linear map TA : Kn×1 → Km×1
by TA (x) := A x. Then it is easy to see that N (TA ) = N (A)
and I(TA ) = C(A), the column space of A. Hence
nullity(TA ) = nullity(A), rank(TA ) = column rank(A) = rank(A).

Therefore, the Rank-Nullity Theorem for a matrix A that we
proved in Lecture 6 is a special case of the following result.

Proposition (Rank-Nullity Theorem for Linear Maps)


Let V and W be vector spaces over K, and let T : V → W
be a linear map. Suppose dim V = n ∈ N. Then
rank(T ) + nullity(T ) = n.
Proof.
Let nullity(T ) = k. If k = n, then T = O, and so rank(T ) = 0.
Suppose now k < n. Let {v1 , . . . , vk } be a basis for N (T ).
Then there are vk+1 , . . . , vn in V such that {v1 , . . . , vn } is a
basis for V .
We claim that {T (vk+1 ), . . . , T (vn )} is a basis for I(T ). We
have already seen that I(T ) = span{T (v1 ), . . . , T (vn )}. But
T (v1 ) = · · · = T (vk ) = 0, and so

I(T ) = span{T (vk+1 ), . . . , T (vn )}.

Also, {T (vk+1 ), . . . , T (vn )} is a linearly independent set. To
see this, let αk+1 T (vk+1 ) + · · · + αn T (vn ) = 0, where
αk+1 , . . . , αn ∈ K. Then T (αk+1 vk+1 + · · · + αn vn ) = 0, that
is, (αk+1 vk+1 + · · · + αn vn ) ∈ N (T ).
Hence αk+1 vk+1 + · · · + αn vn is a linear combination of
v1 , . . . , vk . But {v1 , . . . , vn } is a linearly independent set.
This shows that αk+1 = · · · = αn = 0 as desired.
Thus nullity(T ) = k and rank(T ) = dim I(T ) = n − k. As a
consequence, rank(T ) + nullity(T ) = n.
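
As a quick numerical illustration of this result for a map TA as in the earlier example, here is a sketch using NumPy and SciPy; the matrix below is an arbitrary example, and scipy.linalg.null_space is used to compute a basis of N (A).

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(1)
m, n = 4, 6
# an arbitrary 4 x 6 real matrix of rank 3 (a product of a 4 x 3 and a 3 x 6 factor)
A = rng.standard_normal((m, 3)) @ rng.standard_normal((3, n))

rank = np.linalg.matrix_rank(A)          # rank(T_A) = dim I(T_A) = rank(A)
nullity = null_space(A).shape[1]         # nullity(T_A) = dim N(T_A) = nullity(A)
print(rank, nullity, rank + nullity)     # 3 3 6, and 6 = dim of the domain K^{6x1}
```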
Corollary
Suppose V and W are finite dimensional vector spaces with
dim V = n and dim W = m. Let T : V → W be a linear map.
Then T is one-one ⇐⇒ rank(T ) = n.
If T is one-one, then n ≤ m. Further, in case m = n, the
linear map T is one-one ⇐⇒ T is onto.

Proof. The first assertion follows from the Rank-Nullity
Theorem since T is one-one ⇐⇒ N (T ) = {0}, that is,
nullity(T ) = 0.
If T is one-one, then n = rank(T ) = dim I(T ) ≤ dim W = m.
If m = n, then rank(T ) = n ⇐⇒ T is onto.
As another application of the Rank-Nullity Theorem, we find
an interesting relation between dimensions of finite
dimensional subspaces of a vector space.
Proposition
Let W1 and W2 be finite dimensional subspaces of a vector
space V . Then

dim W1 + dim W2 = dim(W1 ∩ W2 ) + dim(W1 + W2 ).

Proof. The dimension of the vector space W1 × W2 is equal to
dim W1 + dim W2 . (See Tutorial 7.) Define
T : W1 × W2 → W1 + W2 by T (w1 , w2 ) := w1 − w2 .
Then T is linear, N (T ) = {(w , w ) : w ∈ W1 ∩ W2 } and
I(T ) = W1 + W2 . Since w 7→ (w , w ) is a one-one linear map
from W1 ∩ W2 onto N (T ), we have dim N (T ) = dim(W1 ∩ W2 ).
Hence by the Rank-Nullity Theorem,
dim(W1 + W2 ) + dim(W1 ∩ W2 ) = dim(W1 × W2 ).
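
A small numerical illustration of this identity, using subspaces of R4×1 whose intersection is known by construction; a sketch with NumPy, and the particular subspaces are arbitrary.

```python
import numpy as np

# W1 = span{e1, e2} and W2 = span{e2, e3} inside R^{4x1}; by construction
# W1 ∩ W2 = span{e2} (dimension 1) and W1 + W2 = span{e1, e2, e3} (dimension 3).
e = np.eye(4)
W1 = e[:, [0, 1]]
W2 = e[:, [1, 2]]

dim_W1 = np.linalg.matrix_rank(W1)                       # 2
dim_W2 = np.linalg.matrix_rank(W2)                       # 2
dim_sum = np.linalg.matrix_rank(np.hstack([W1, W2]))     # dim(W1 + W2) = 3
dim_int = 1                                              # known from the construction

print(dim_W1 + dim_W2 == dim_int + dim_sum)              # True: 2 + 2 = 1 + 3
```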
To study a linear map from a finite dimensional vector space
to a finite dimensional space, it is convenient to associate a
matrix with the map. We have outlined this for subspaces of
Rn×1 and Rm×1 toward the end of Lecture 9. The same
reasoning works in general. Let us work it out.
Let V be a vector space of dimension n, and let
E := (v1 , . . . , vn ) be an ordered basis for V . Also, let W be a
vector space of dimension m, and let F := (w1 , . . . , wm ) be an
ordered basis for W . Let T : V → W be a linear map. Then
there are unique a11 , . . . , a1n , . . . , am1 , . . . , amn ∈ K such that
T (v1 ) = a11 w1 + · · · + aj1 wj + · · · + am1 wm ,
    ⋮
T (vk ) = a1k w1 + · · · + ajk wj + · · · + amk wm ,
    ⋮
T (vn ) = a1n w1 + · · · + ajn wj + · · · + amn wm .
The m×n matrix A := [ajk ] is called the matrix of the linear
transformation T : V → W with respect to the ordered
basis E := (v1 , . . . , vn ) of V and the ordered basis
F := (w1 , . . . , wm ) of W . It is denoted by M_E^F(T ).
The m × n matrix M_E^F(T ) represents the linear map T in the
following sense. For α1 , . . . , αn ∈ K,
\[
T(\alpha_1 v_1 + \cdots + \alpha_n v_n)
= \sum_{k=1}^{n} \alpha_k T(v_k)
= \sum_{k=1}^{n} \sum_{j=1}^{m} \alpha_k a_{jk} w_j
= \sum_{j=1}^{m} \Big( \sum_{k=1}^{n} a_{jk} \alpha_k \Big) w_j,
\]

that is,

\[
T\left( \begin{bmatrix} v_1 & \cdots & v_n \end{bmatrix}
\begin{bmatrix} \alpha_1 \\ \vdots \\ \alpha_n \end{bmatrix} \right)
= \begin{bmatrix} w_1 & \cdots & w_m \end{bmatrix}
\begin{bmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1} & \cdots & a_{mn} \end{bmatrix}
\begin{bmatrix} \alpha_1 \\ \vdots \\ \alpha_n \end{bmatrix}.
\]
Conversely, suppose we are given an m × n matrix A. Define
TA : V → W as follows. For v := α1 v1 + · · · + αn vn , let

\[
T_A(v) := \beta_1 w_1 + \cdots + \beta_m w_m,
\quad \text{where} \quad
\begin{bmatrix} \beta_1 \\ \vdots \\ \beta_m \end{bmatrix}
:= A \begin{bmatrix} \alpha_1 \\ \vdots \\ \alpha_n \end{bmatrix}.
\]

Then TA is a linear map. It can be checked that
M_E^F(TA ) = A and TB = T , where B := M_E^F(T ).
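
When V and W are realized concretely as spaces of column vectors, the recipe above can be carried out mechanically: column k of M_E^F(T ) is obtained by solving for the F -coordinates of T (vk ). A minimal sketch follows; the helper name matrix_of is hypothetical, and V , W are taken to be R3×1 and R2×1 for illustration.

```python
import numpy as np

def matrix_of(T, E, F):
    """Return M_E^F(T): column k holds the F-coordinates of T(v_k).

    Assumes V and W are realized as column spaces R^{n x 1} and R^{m x 1},
    E is a list of n basis vectors of V, and F is an m x m matrix whose
    columns are the basis vectors w_1, ..., w_m of W.
    """
    columns = [np.linalg.solve(F, T(v)) for v in E]   # solve F a = T(v_k) for a
    return np.column_stack(columns)

# usage: with the standard bases, T_A(x) = A x is represented by A itself
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])
E = list(np.eye(3).T)          # standard ordered basis of R^{3x1}
F = np.eye(2)                  # standard ordered basis of R^{2x1}
print(matrix_of(lambda x: A @ x, E, F))   # reproduces A, i.e. M_E^F(T_A) = A
```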

Remark The correspondence between an m × n matrix and a
linear map from an n dimensional vector space V to an m
dimensional vector space W allows us to obtain two versions
of the same result. A case in point is the Rank-Nullity
Theorem. While we have given independent proofs of both
versions, any one version can be derived from the other.
Examples
1. For n ∈ N, let Pn denote the vector space of all polynomial
functions of degree less than or equal to n. Define
T : Pn → Pn−1 by T (p) = p ′ , the derivative of p.
Let E := (1, t, . . . , t n ) and F := (1, t, . . . , t n−1 ) be the ordered
bases of Pn and Pn−1 respectively. Then the n × (n + 1)
matrix of the linear map T with respect to these bases is

\[
M_E^F(T) :=
\begin{bmatrix}
0 & 1 & 0 & 0 & \cdots & 0 \\
0 & 0 & 2 & 0 & \cdots & 0 \\
0 & 0 & 0 & 3 & \cdots & 0 \\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & 0 & \cdots & n
\end{bmatrix}.
\]
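
A short sketch building this matrix for n = 3 and applying it to a coefficient vector; the variable names are illustrative only.

```python
import numpy as np

n = 3
# M_E^F(T) for T = d/dt : P_3 -> P_2 with E = (1, t, t^2, t^3), F = (1, t, t^2)
D = np.zeros((n, n + 1))
for k in range(1, n + 1):
    D[k - 1, k] = k                       # since d/dt t^k = k t^(k-1)

p = np.array([5.0, -1.0, 2.0, 4.0])       # E-coordinates of 5 - t + 2t^2 + 4t^3
print(D @ p)                              # [-1. 4. 12.]: F-coordinates of -1 + 4t + 12t^2
```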



2. Let V := P3 , the set of all polynomials of degree 3 or less,
and let W := R3×1 . Define T : V → W by
\[
T(\alpha_0 + \alpha_1 t + \alpha_2 t^2 + \alpha_3 t^3)
:= \begin{bmatrix} \alpha_0 + \alpha_1 & \alpha_1 + \alpha_2 & \alpha_2 + \alpha_3 \end{bmatrix}^{\mathsf T}
\]

for α0 , α1 , α2 , α3 ∈ R. Consider the ordered bases
E := (1, t, t^2 , t^3 ) for V and F := (e1 , e2 , e3 ) for W . Then

\[
M_E^F(T) =
\begin{bmatrix}
1 & 1 & 0 & 0 \\
0 & 1 & 1 & 0 \\
0 & 0 & 1 & 1
\end{bmatrix}.
\]
Next, consider the bases E ′ := (1, t − 1, (t − 1)^2 , (t − 1)^3 ) for V
and F ′ := (e1 + e2 , e2 + e3 , e3 + e1 ) for W . Then

\[
T(1) = \begin{bmatrix} 1 & 0 & 0 \end{bmatrix}^{\mathsf T} = e_1
= \tfrac{1}{2}(e_1 + e_2) - \tfrac{1}{2}(e_2 + e_3) + \tfrac{1}{2}(e_3 + e_1),
\]
\[
T(t - 1) = \begin{bmatrix} 0 & 1 & 0 \end{bmatrix}^{\mathsf T} = e_2
= \tfrac{1}{2}(e_1 + e_2) + \tfrac{1}{2}(e_2 + e_3) - \tfrac{1}{2}(e_3 + e_1),
\]
\[
T\big((t - 1)^2\big) = T(t^2) - 2T(t) + T(1)
= \begin{bmatrix} 0 & 1 & 1 \end{bmatrix}^{\mathsf T}
- 2 \begin{bmatrix} 1 & 1 & 0 \end{bmatrix}^{\mathsf T}
+ \begin{bmatrix} 1 & 0 & 0 \end{bmatrix}^{\mathsf T}
= \begin{bmatrix} -1 & -1 & 1 \end{bmatrix}^{\mathsf T}
= -e_1 - e_2 + e_3
= -\tfrac{3}{2}(e_1 + e_2) + \tfrac{1}{2}(e_2 + e_3) + \tfrac{1}{2}(e_3 + e_1),
\]
\[
T\big((t - 1)^3\big) = T(t^3) - 3T(t^2) + 3T(t) - T(1)
= \begin{bmatrix} 0 & 0 & 1 \end{bmatrix}^{\mathsf T}
- 3 \begin{bmatrix} 0 & 1 & 1 \end{bmatrix}^{\mathsf T}
+ 3 \begin{bmatrix} 1 & 1 & 0 \end{bmatrix}^{\mathsf T}
- \begin{bmatrix} 1 & 0 & 0 \end{bmatrix}^{\mathsf T}
= \begin{bmatrix} 2 & 0 & -2 \end{bmatrix}^{\mathsf T}
= 2e_1 - 2e_3
= 2(e_1 + e_2) - 2(e_2 + e_3).
\]

Hence

\[
M_{E'}^{F'}(T) = \frac{1}{2}
\begin{bmatrix}
1 & 1 & -3 & 4 \\
-1 & 1 & 1 & -4 \\
1 & -1 & 1 & 0
\end{bmatrix}.
\]
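
This matrix can also be obtained mechanically: write T and the bases E ′ , F ′ in the standard coordinates of P3 and R3×1 , and solve for the F ′ -coordinates of T applied to each E ′ vector. A sketch with NumPy:

```python
import numpy as np

# T in the standard coefficient coordinates of P_3 (basis 1, t, t^2, t^3) and R^{3x1}
M = np.array([[1., 1., 0., 0.],
              [0., 1., 1., 0.],
              [0., 0., 1., 1.]])

# columns: 1, t-1, (t-1)^2, (t-1)^3 expanded in powers of t
E_prime = np.array([[1., -1.,  1., -1.],
                    [0.,  1., -2.,  3.],
                    [0.,  0.,  1., -3.],
                    [0.,  0.,  0.,  1.]])

# columns: e1 + e2, e2 + e3, e3 + e1
F_prime = np.array([[1., 0., 1.],
                    [1., 1., 0.],
                    [0., 1., 1.]])

# column k of M_{E'}^{F'}(T) = F'-coordinates of T applied to the k-th E' vector
result = np.linalg.solve(F_prime, M @ E_prime)
print(2 * result)   # [[ 1.  1. -3.  4.]  [-1.  1.  1. -4.]  [ 1. -1.  1.  0.]]
```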
One more level up!
We have seen that the set of all m × n matrices with entries in
K is a vector space. The correspondence between matrices
and linear maps suggests that the set of all linear maps from
an n dimensional vector space to an m dimensional vector
space would also form a vector space.
Indeed, let V and W be vector spaces, and let L(V , W )
denote the set of all linear maps from V to W . Let
T1 , T2 ∈ L(V , W ) and α1 , α2 ∈ K. Define
(α1 T1 + α2 T2 )(v ) := α1 T1 (v ) + α2 T2 (v ) for v ∈ V .
Then (α1 T1 + α2 T2 ) ∈ L(V , W ). Further, if U is a vector
space and S ∈ L(U, V ), then T ◦ S ∈ L(U, W ) for every
T ∈ L(V , W ) since for u, v ∈ U and α, β ∈ K,
(T ◦ S)(αu + βv ) = T (αS(u) + βS(v ))
= α(T ◦ S)(u) + β(T ◦ S)(v ).
If U, V , W are all finite dimensional, then the composition of
linear maps T and S corresponds to the multiplication of the
matrices associated with T and S as follows.
Suppose G , E and F are ordered bases for the vector spaces
U, V and W respectively. If S : U → V and T : V → W , then

\[
M_G^F(T \circ S) = M_E^F(T)\, M_G^E(S).
\]

(We omit the proof.) This says that the matrix of the
composition of two linear maps is the product of the matrices
of the two linear maps taken in the same order, and offers a
justification of the matrix multiplication defined in Lecture 1.
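
A numerical illustration of this rule (a sketch with NumPy; S, T and the bases are arbitrary, and the randomly generated basis matrices are assumed, as is almost surely the case, to be invertible):

```python
import numpy as np

rng = np.random.default_rng(2)

# S : U -> V and T : V -> W in the standard coordinates of R^2, R^3, R^2
S = rng.standard_normal((3, 2))
T = rng.standard_normal((2, 3))

# ordered bases: columns of G for U, of E for V, of F for W
G = rng.standard_normal((2, 2))
E = rng.standard_normal((3, 3))
F = rng.standard_normal((2, 2))

M_S  = np.linalg.solve(E, S @ G)       # M_G^E(S): E-coordinates of S(g_k)
M_T  = np.linalg.solve(F, T @ E)       # M_E^F(T): F-coordinates of T(e_k)
M_TS = np.linalg.solve(F, T @ S @ G)   # M_G^F(T∘S)

print(np.allclose(M_TS, M_T @ M_S))    # True
```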
If V = W and T := I , the identity map, then we write
M_E^F := M_E^F(I ), the matrix associated with the change of
basis from E to F . Note that if E = F , then M_E^F = I, the
identity matrix.
Considering the chain V → V → V → V given by I (with bases
F , E ), then T (with bases E , E ), then I (with bases E , F ),
we obtain

\[
M_F^F(T) = M_E^F\, M_E^E(T)\, M_F^E.
\]

In particular, if T = I , then M_E^E(T ) = I = M_F^F(T ), and so

\[
I = M_E^F M_F^E, \quad \text{that is,} \quad (M_F^E)^{-1} = M_E^F.
\]

Thus

\[
M_F^F(T) = (M_F^E)^{-1} M_E^E(T)\, M_F^E,
\]

so that the matrix B := M_F^F(T ) of the linear map T : V → V
with respect to the ordered basis F of V is similar to the
matrix A := M_E^E(T ) of the linear map T : V → V with
respect to the ordered basis E of V .
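
A quick numerical illustration of this similarity (a sketch with NumPy; A and the basis matrix P are arbitrary, with P assumed invertible). Similar matrices share, for example, the trace, the determinant and the characteristic polynomial.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4

A = rng.standard_normal((n, n))    # A = M_E^E(T), taking E to be the standard basis
P = rng.standard_normal((n, n))    # P = M_F^E: columns are the F basis vectors in E-coordinates

B = np.linalg.solve(P, A @ P)      # B = M_F^F(T) = (M_F^E)^{-1} M_E^E(T) M_F^E

print(np.allclose(np.trace(A), np.trace(B)))              # True
print(np.allclose(np.linalg.det(A), np.linalg.det(B)))    # True
print(np.allclose(np.poly(A), np.poly(B)))                # True: same characteristic polynomial
```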



Solutions of Operator Equations
Let V and W be vector spaces, and let T ∈ L(V , W ). Just as
we attempted to find solutions of a linear system Ax = b, we
can attempt to find solutions of the operator equation
T (v ) = w . The nature of these solutions is the same as the
nature of the solutions of a linear system.
Just as the linear system Ax = b has a solution if and only if
b ∈ C(A), the operator equation T (v ) = w has a solution if
and only if w ∈ I(T ). Also, the solution space of the
homogeneous operator equation T (v ) = 0 is a subspace of V .
When V is finite dimensional, its dimension is the nullity of T ,
which equals n − r , where n is the dimension of V and r is the
rank of T .
Further, if v0 is a particular solution of the operator equation
T (v ) = w , then its general solution is given by v0 + vh , where
vh satisfies the homogeneous operator equation T (vh ) = 0.
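
A small illustration for T = TA (a sketch with NumPy and SciPy; the matrix A and right-hand side b are arbitrary, with b chosen inside C(A) so that the equation is solvable):

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1., 2., 3.],
              [2., 4., 6.]])                   # rank 1, so the nullity is 3 - 1 = 2
b = np.array([2., 4.])                         # b = 2 * (first column), so b ∈ C(A)

v0 = np.linalg.lstsq(A, b, rcond=None)[0]      # a particular solution of A v = b
N = null_space(A)                              # columns span the solution space of A v = 0

print(np.allclose(A @ v0, b))                  # True: v0 solves the equation
print(N.shape[1])                              # 2 = nullity(T_A) = n - r

v = v0 + N @ np.array([3., -1.])               # particular + homogeneous
print(np.allclose(A @ v, b))                   # True: still a solution
```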
Eigenvalue Problems for Linear Operators
Let V be a vector space over K, and let T : V → V be a
linear operator. A scalar λ ∈ K is called an eigenvalue of T
if there is a nonzero v ∈ V such that T (v ) = λv , and then v
is called an eigenvector or an eigenfunction of T
corresponding to λ. The subspace N (T − λI ) is called the
eigenspace of T corresponding to λ.
Example
Let V denote the vector space of all real-valued infinitely
differentiable functions on R. Define T (f ) = f ′ for f ∈ V .
Then T is a linear operator on V .
Given λ ∈ R, consider fλ (t) := e λt for t ∈ R. Then fλ ∈ V ,
fλ ̸= 0 and T (fλ ) = λ fλ . Thus every λ ∈ R is an eigenvalue of
T with fλ as a corresponding eigenfunction. In fact, any
eigenfunction of T corresponding to λ is a scalar multiple of fλ .
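
A one-line symbolic check of T (fλ ) = λ fλ , assuming SymPy is available:

```python
import sympy as sp

t, lam = sp.symbols('t lambda', real=True)
f = sp.exp(lam * t)                            # f_lambda(t) = e^(lambda t)
print(sp.simplify(sp.diff(f, t) - lam * f))    # 0, i.e. T(f_lambda) = lambda * f_lambda
```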
Let us assume that the vector space V is finite dimensional.
Let E be an ordered basis for V , and let A := M_E^E(T ), the
matrix of the linear operator T with respect to E . We remark
that if F is another ordered basis for V , and B := M_F^F(T ), the
matrix of the linear operator T with respect to F , then B is
similar to A; in fact we have seen that B = (M_F^E)^{-1} A M_F^E.
The geometric multiplicity of an eigenvalue λ of T is the
dimension of the corresponding eigenspace. It equals the
geometric multiplicity of λ as an eigenvalue of the associated
matrix A. Further, the linear operator T is called
diagonalizable if the matrix A is diagonalizable.
Results about the linear independence of eigenvectors
corresponding to distinct eigenvalues hold in the general case.
The characteristic polynomial of a linear operator T is
defined to be the characteristic polynomial of A = M_E^E(T ).



The algebraic multiplicity of an eigenvalue of the linear
operator T is defined to be its algebraic multiplicity as an
eigenvalue of the associated matrix A. Hence the relationships between the
geometric multiplicity and the algebraic multiplicity of an
eigenvalue of a square matrix hold for a linear operator as well.
The above definitions do not depend on the choice of the
ordered basis E for V because of the similarity of the matrices
involved therein.
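
A numerical illustration that these multiplicities do not depend on the chosen basis (a sketch with NumPy; the operator and the change-of-basis matrix below are arbitrary illustrations):

```python
import numpy as np

# An operator written in two ordered bases: A = M_E^E(T) and B = M_F^F(T) = P^{-1} A P
A = np.array([[2., 1., 0.],
              [0., 2., 0.],
              [0., 0., 5.]])     # eigenvalue 2: algebraic mult. 2, geometric mult. 1
P = np.array([[1., 1., 0.],
              [0., 1., 1.],
              [1., 0., 1.]])     # an invertible change-of-basis matrix (det = 2)
B = np.linalg.solve(P, A @ P)

for M in (A, B):
    geo = 3 - np.linalg.matrix_rank(M - 2 * np.eye(3), tol=1e-8)   # dim N(M - 2I)
    alg = int(np.isclose(np.linalg.eigvals(M), 2).sum())           # multiplicity of 2 as a root
    print(geo, alg)              # 1 2 for both A and B
```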

