Chapter 3

The document provides a comprehensive overview of linear transformations, defining them as functions between vector spaces that satisfy specific properties. It includes examples such as the zero and identity transformations, differentiation and integral transformations, as well as properties and theorems related to linear transformations. Additionally, it discusses the concepts of range, null space, rank, and nullity, along with illustrative problems and solutions.

Linear Transformations

Jagannath
IIITDM Kancheepuram, Chennai
Linear Transformations

Definition:
Let V and W be vector spaces over the field F . A linear transformation
from V into W is a function T : V −→ W such that

T (cα + β) = cT (α) + T (β) for all α, β ∈ V , c ∈ F .

1
Examples of Linear Transformations

(1) Let V , W be vector spaces over the same field F . We define a


function 0 : V −→ W as 0(v ) = 0 for all v ∈ V , where 0 is the zero
vector of W . Then

0(cα + β) = 0 = c · 0 + 0 = c0(α) + 0(β) for all α, β ∈ V , c ∈ F .

Thus, 0 is a linear transformation, called the zero linear


transformation.

(2) Let V be a vector space over the field F . We define a function


I : V −→ V as I (v ) = v for all v ∈ V . Then

I (cα + β) = cα + β = cI (α) + I (β) for all α, β ∈ V , c ∈ F .

Thus, I is a linear transformation, called the identity linear


transformation.

2
Examples

(3) Let V = {f (x) = c0 + c1 x + c2 x 2 + · · · + cn x n : n ∈ N, ci ∈ F }.
That is, V is the vector space of all polynomials over the field F .
We define a function D : V −→ V as
(Df )(x) := c1 + 2c2 x + · · · + ncn x n−1 . Then D is a linear
transformation, called the differentiation transformation.
(4) Let F = R. Let V be the vector space of all continuous functions
from R to R. We define a function T : V −→ V as
(Tf )(x) := ∫₀ˣ f (t) dt. Then T is a linear transformation, called the
integral transformation.
(5) Let A ∈ F m×n be a fixed m × n matrix. Define a function
T : F n×1 −→ F m×1 as T (X ) = AX . Then

T (cX + Y ) = A(cX + Y ) = cAX + AY = cT (X ) + T (Y ).

Hence, T is a linear transformation.


3
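The linearity identity in Example (5) can be spot-checked numerically. The sketch below assumes NumPy is available; the matrix A and the vectors X, Y, and the scalar c are arbitrary choices, not from the slides.

```python
import numpy as np

# Spot-check T(cX + Y) = cT(X) + T(Y) for T(X) = AX.
rng = np.random.default_rng(0)
A = rng.integers(-5, 5, size=(3, 2)).astype(float)  # arbitrary fixed 3x2 matrix
X = rng.standard_normal((2, 1))
Y = rng.standard_normal((2, 1))
c = 2.5

lhs = A @ (c * X + Y)         # T(cX + Y)
rhs = c * (A @ X) + (A @ Y)   # cT(X) + T(Y)
print(np.allclose(lhs, rhs))  # True
```

A single random check is of course not a proof; the proof is the one-line computation on the slide.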
Properties of linear transformations

Property 1: T (0) = 0.
Proof. As

T (0) = T (1 · 0 + 0)
= 1 · T (0) + T (0) (∵ T is a L.T. )
= T (0) + T (0).

This implies
T (0) = 0.

4
Properties of linear transformations

Property 2: The following two conditions are equivalent:

1. T (cα + β) = cT (α) + T (β).


2. T (α + β) = T (α) + T (β) and T (cα) = cT (α).

Proof. (1 ⇒ 2).

T (α + β) = T (1 · α + β) = 1 · T (α) + T (β) = T (α) + T (β)

and

T (cα) = T (cα + 0) = cT (α) + T (0) = cT (α) + 0 = cT (α).

(2 ⇒ 1).

T (cα + β) = T (cα) + T (β) = cT (α) + T (β).

5
Properties of linear transformations

Property 3:

T (c1 α1 + c2 α2 + · · · + cn αn ) = c1 T (α1 ) + c2 T (α2 ) + · · · + cn T (αn ).

Proof.

T (c1 α1 + c2 α2 + · · · + cn αn )
= c1 T (α1 ) + T (c2 α2 + · · · + cn αn ) (∵ T is a L.T.)
= c1 T (α1 ) + c2 T (α2 ) + T (c3 α3 + · · · + cn αn ) (∵ T is a L.T.)
..
.
= c1 T (α1 ) + c2 T (α2 ) + · · · + cn T (αn ).

6
Problem 1:

Verify which of the following functions T : R2 −→ R2 are linear


transformations.

(1) T (x1 , x2 ) = (1 + x1 , x2 ).
Ans: It is not a linear transformation as
T (0, 0) = (1, 0) =⇒ T (0) ̸= 0.
(2) T (x1 , x2 ) = (x2 , x1 ).
Ans: It is a linear transformation, as

  T [ x1 ] = [ 0 1 ] [ x1 ]   =⇒ T (X ) = AX .
    [ x2 ]   [ 1 0 ] [ x2 ]
(3) T (x1 , x2 ) = (x12 , x2 ).
Ans: It is not a linear transformation (Verify!).

7
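For part (3), a single failed instance of T (cα) = cT (α) settles the "Verify!". A minimal numerical check (assuming NumPy; the test vector and scalar are arbitrary choices):

```python
import numpy as np

# Problem 1(3): T(x1, x2) = (x1^2, x2) is not linear.
def T(v):
    x1, x2 = v
    return np.array([x1**2, x2])

alpha = np.array([1.0, 1.0])
c = 2.0
print(T(c * alpha))      # [4. 2.]
print(c * T(alpha))      # [2. 2.]  -- not equal, so T is not linear
```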
Linear transformations are special !!

[Diagram: T maps each basis vector αj of V to a vector βj of W .]

Ordered basis B = {α1 , α2 , . . . , αn } of V ; the βj 's need not be distinct.
T is the unique linear transformation with T (αj ) = βj .

8
Theorem 1.

Let V be a finite-dimensional vector space over the field F and let


{α1 , α2 , . . . , αn } be an ordered basis for V . Let W be a vector space over
the same field F and let β1 , β2 , . . . , βn be any vectors in W . Then there is
precisely one linear transformation T : V −→ W such that T (αj ) = βj for
j = 1, 2, . . . , n.
Proof: Since {α1 , α2 , . . . , αn } is an ordered basis for V , for a given
vector α ∈ V , there is a unique n−tuple (x1 , x2 , . . . , xn ) such that

α = x1 α1 + x2 α2 + · · · + xn αn .

9
Proof contd.

We define a function T : V −→ W as

T (α) = T (x1 α1 + · · · + xn αn ) := x1 β1 + · · · + xn βn .

Claim 1: T (αj ) = βj .

T (αj ) = T (0α1 + · · · + 0αj−1 + 1αj + 0αj+1 + · · · + 0αn )


= 0β1 + · · · + 0βj−1 + 1βj + 0βj+1 + · · · + 0βn
= βj .

Claim 2: T is a linear transformation.


We show that T (cα + β) = cT (α) + T (β) for all α, β ∈ V , c ∈ F .
Let β = y1 α1 + y2 α2 + · · · + yn αn . Then

cα + β = (cx1 + y1 )α1 + (cx2 + y2 )α2 + · · · + (cxn + yn )αn .

10
Proof contd.

Now, by the definition of T we have

T (cα + β) = (cx1 + y1 )β1 + (cx2 + y2 )β2 + · · · + (cxn + yn )βn


= c(x1 β1 + x2 β2 + · · · + xn βn ) + (y1 β1 + y2 β2 + · · · + yn βn )
= cT (x1 α1 + · · · + xn αn ) + T (y1 α1 + · · · + yn αn )
= cT (α) + T (β).

Claim 3: T is unique.
It is enough to prove that if U : V −→ W is a linear transformation with
U(αj ) = βj for j = 1, 2, . . . , n, then T (α) = U(α) for all α ∈ V .
Consider

U(α) = U(x1 α1 + x2 α2 + · · · + xn αn )
= x1 U(α1 ) + x2 U(α2 ) + · · · + xn U(αn ) (∵ U is a L.T.)
= x1 β1 + x2 β2 + · · · + xn βn (∵ U(αj ) = βj )
= T (α).

This completes the proof of the theorem.

11


Problem 2.

Let B = {α1 = (1, 2), α2 = (3, 4)} be an ordered basis for R2 . Let
β1 = (3, 2, 1), β2 = (6, 5, 4) ∈ R3 . Find the unique linear transformation
T : R2 −→ R3 such that T (αj ) = βj for j = 1, 2.
Solution: T (α1 ) = T (1, 2) = (3, 2, 1) = β1 and
T (α2 ) = T (3, 4) = (6, 5, 4) = β2 .
Let α = (x, y ) ∈ R2 . As {α1 , α2 } is a basis, we have α = aα1 + bα2 for
some a, b ∈ R. That is (x, y ) = a(1, 2) + b(3, 4) = (a + 3b, 2a + 4b). So,

a + 3b = x and 2a + 4b = y .

On solving these two equations for a, b we obtain


   
a = −2x + (3/2)y and b = x − (1/2)y .

12
Problem 2 contd.

So,

(x, y ) = (−2x + (3/2)y ) α1 + (x − (1/2)y ) α2 .

Thus,

T (x, y ) = (−2x + (3/2)y ) β1 + (x − (1/2)y ) β2   (from the definition of T )
          = (−2x + (3/2)y )(3, 2, 1) + (x − (1/2)y )(6, 5, 4)
          = ((3/2)y , x + (1/2)y , 2x − (1/2)y ).

The transformation T is unique, thanks to Theorem 1.

13
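The formula obtained in Problem 2 can be spot-checked by confirming it really sends α1 to β1 and α2 to β2 (a sketch assuming NumPy is available):

```python
import numpy as np

# Problem 2: T(x, y) = (3y/2, x + y/2, 2x - y/2)
def T(x, y):
    return np.array([1.5 * y, x + 0.5 * y, 2 * x - 0.5 * y])

print(T(1, 2))  # [3. 2. 1.]  = β1
print(T(3, 4))  # [6. 5. 4.]  = β2
```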
Range of a linear transformation

Let V , W be vector spaces over a field F . Let T : V −→ W be a linear


transformation. The Range of T is defined by

R(T ) := {β ∈ W : T (α) = β for some α ∈ V } .

14
Range of T

[Diagram: R(T ) is the part of W consisting of all images β = T (α) of vectors α ∈ V .]

15
Proposition-1: R(T ) is a subspace of W .

Proof: Let T : V −→ W be a linear transformation. As T (0) = 0, we


have 0 ∈ R(T ). Thus, R(T ) ̸= ϕ.
Now, let β1 , β2 ∈ R(T ) and c ∈ F . By the definition of R(T ) there exist
α1 , α2 ∈ V such that T (α1 ) = β1 and T (α2 ) = β2 .
As
cβ1 + β2 = cT (α1 ) + T (α2 ) = T (cα1 + α2 ),

from the definition of R(T ) it follows that cβ1 + β2 ∈ R(T ). This proves
R(T ) is a subspace of W .

Definition: The rank of a linear transformation T is defined as the


dimension of the range of T .
That is

Rank of T := dim R(T ).

16
The null space of a linear transformation

Let V , W be vector spaces over a field F . Let T : V −→ W be a linear


transformation. The null space of T is defined as

N(T ) := {α ∈ V : T (α) = 0} .

17
Null space of T

[Diagram: N(T ) is the part of V consisting of all α ∈ V with T (α) = 0.]

18
Proposition-2: N(T ) is a subspace of V .

Proof: Let T : V −→ W be a linear transformation. As T (0) = 0, we


have 0 ∈ N(T ). This implies N(T ) ̸= ϕ.
Let α1 , α2 ∈ N(T ) and c ∈ F . Then T (α1 ) = T (α2 ) = 0. Since T is a
linear transformation

T (cα1 + α2 ) = cT (α1 ) + T (α2 ) = c0 + 0 = 0.

This implies cα1 + α2 ∈ N(T ). Hence N(T ) is a subspace of V .

Definition: The nullity of a linear transformation T is defined as the


dimension of the null space of T . That is

Nullity of T = dim N(T ).

19
Examples

Example 1. Find the rank and nullity of the zero linear transformation
O : V −→ W defined by O(α) = 0 for all α ∈ V .

Solution:

R(O) = {β ∈ W : β = O(α) for some α ∈ V } = {0}.

N(O) = {α ∈ V : O(α) = 0} = V .

Hence,
Rank (O) = dim R(O) = 0

and
Nullity (O) = dim N(O) = dim V .
20
Example 2. Find the rank and nullity of the identity linear transformation
I : V −→ V defined by I (α) = α for all α ∈ V .

Solution:

R(I ) = {β ∈ V : β = I (α) for some α ∈ V }


= {β ∈ V : β = I (α) = α for some α ∈ V }
= {α ∈ V }
= V.

N(I ) = {α ∈ V : I (α) = 0} = {0}.

Hence,
Rank (I ) = dim R(I ) = dim V

and
Nullity (I ) = dim N(I ) = 0.

21
Example 3: Find the rank and nullity of the linear transformation
T : R2 −→ R3 defined as

T (x1 , x2 ) = (x1 , 0, 0).

Solution:

R(T ) = {Y ∈ R3 : Y = T (X ) for some X ∈ R2 }
      = {Y ∈ R3 : Y = T (X ) = (x1 , 0, 0) for some X ∈ R2 }
      = {Y = (x1 , 0, 0) : X = (x1 , x2 ) ∈ R2 }
      = {(x1 , 0, 0) : x1 ∈ R}
      = {x1 (1, 0, 0) : x1 ∈ R}
      = Span of {(1, 0, 0)} .

As (1, 0, 0) is a non-zero vector of R3 , the set {(1, 0, 0)} is linearly
independent in R3 and thus forms a basis of R(T ). Hence,

Rank(T ) = 1.
22
N(T ) = {X ∈ R2 : T (X ) = 0}
      = {X = (x1 , x2 ) : T (x1 , x2 ) = 0}
      = {(x1 , x2 ) : (x1 , 0, 0) = (0, 0, 0)}
      = {(0, x2 ) : x2 ∈ R}
      = {x2 (0, 1) : x2 ∈ R}
      = Span of {(0, 1)} .

As (0, 1) is a non-zero vector of R2 , the set {(0, 1)} is linearly
independent in R2 and thus forms a basis of N(T ). Hence,

Nullity (T ) = 1.

23
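The answers of Example 3 can be cross-checked with NumPy (a sketch; it writes T as T (X ) = AX and uses the rank-nullity relation proved later in Theorem 2 to get the nullity from the rank):

```python
import numpy as np

# Example 3: T(x1, x2) = (x1, 0, 0), i.e. T(X) = AX with this A.
A = np.array([[1.0, 0.0],
              [0.0, 0.0],
              [0.0, 0.0]])

rank = np.linalg.matrix_rank(A)   # dim R(T)
nullity = A.shape[1] - rank       # dim N(T) = dim V - rank
print(rank, nullity)              # 1 1
```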
Example 4: Let T : R3 −→ R3 be a function defined by

T (x1 , x2 , x3 ) = (x1 − x2 + 2x3 , 2x1 + x2 , −x1 − 2x2 + 2x3 ).

Show that T is a linear transformation. Also find rank(T) and nullity(T).


Solution: Note that the function T defined above can also be written in
the following form:
    
  T [ x1 ]   [  1 −1 2 ] [ x1 ]
    [ x2 ] = [  2  1 0 ] [ x2 ]
    [ x3 ]   [ −1 −2 2 ] [ x3 ]

This is of the form


T (X ) = AX .

Hence T is a linear transformation.

24
Now,

R(T ) = {Y ∈ R3 : Y = T (X ) for some X ∈ R3 }
      = {Y ∈ R3 : Y = AX , X ∈ R3 }
      = {AX : X ∈ R3 }
      = Set of linear combinations of columns of A
      = Column space of A
      = Row space of At .

To find the row space of At we first find the row-reduced echelon form of
At .

25
   
     [  1 2 −1 ]   [ 1  2 −1 ]   [ 1 2 −1 ]   [ 1 0  1 ]
At = [ −1 1 −2 ] ∼ [ 0  3 −3 ] ∼ [ 0 1 −1 ] ∼ [ 0 1 −1 ]
     [  2 0  2 ]   [ 0 −4  4 ]   [ 0 0  0 ]   [ 0 0  0 ]
So,

R(T ) = Row space of At


= Span {(1, 0, 1), (0, 1, −1)}
= {a(1, 0, 1) + b(0, 1, −1) : a, b ∈ R}
= {(a, b, a − b) : a, b ∈ R} .

As the set {(1, 0, 1), (0, 1, −1)} is linearly independent and spans the
range space R(T ), it forms a basis for R(T ). Hence,
Rank(T ) = dim R(T ) = 2.
26
Now,

N(T ) = {X ∈ R3 : T (X ) = 0}
      = {X ∈ R3 : AX = 0}
      = Set of solutions of the system AX = 0
      = Solution space of AX = 0.

To find the solution space of AX = 0 we first find the row-reduced echelon


form of A.

27
   
    [  1 −1 2 ]   [ 1 −1  2 ]   [ 1 −1  2   ]   [ 1 0  2/3  ]
A = [  2  1 0 ] ∼ [ 0  3 −4 ] ∼ [ 0  1 −4/3 ] ∼ [ 0 1 −4/3  ]
    [ −1 −2 2 ]   [ 0 −3  4 ]   [ 0  0  0   ]   [ 0 0  0    ]

AX = 0 =⇒ x1 + (2/3)x3 = 0, x2 − (4/3)x3 = 0.

Take x3 = a. This implies x1 = −(2/3)a, x2 = (4/3)a.

28
Hence,

N(T ) = Solution space of AX = 0
      = {(−(2/3)a, (4/3)a, a) : a ∈ R}
      = {a (−2/3, 4/3, 1) : a ∈ R}
      = Span {(−2/3, 4/3, 1)} .

As the set {(−2/3, 4/3, 1)} is linearly independent and spans the null
space N(T ), it forms a basis for N(T ). Hence,

nullity (T ) = dim N(T ) = 1.

29
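Example 4 can likewise be cross-checked numerically (a sketch assuming NumPy; the nullity is obtained from the rank via the rank-nullity relation):

```python
import numpy as np

A = np.array([[ 1., -1.,  2.],
              [ 2.,  1.,  0.],
              [-1., -2.,  2.]])

rank = np.linalg.matrix_rank(A)
nullity = A.shape[1] - rank
print(rank, nullity)  # 2 1

# The basis vector (-2/3, 4/3, 1) of N(T) is indeed killed by A:
v = np.array([-2/3, 4/3, 1.0])
print(np.allclose(A @ v, 0))  # True
```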
Some remarks

• The method that we used to find the rank and nullity of the linear
transformation in Example-4 can also be used in Example-3.
• In all four examples above, we have

rank(T ) + nullity (T ) = dim(V ),

where V represents the domain set of the linear transformation T .


• The above observation is indeed a general fact: it holds for any linear
transformation whose domain is a finite-dimensional vector space. We
prove this fact in the next theorem.

30
Theorem 2 (Rank-Nullity-Dimension Theorem)

Note: This theorem is also known as the Rank-Nullity Theorem.


Statement: Let V and W be vector spaces over the field F and let
T : V −→ W be a linear transformation. Suppose that V is
finite-dimensional. Then

rank(T ) + nullity(T ) = dim(V ).

Proof: Let dim(V ) = n. Let {α1 , α2 , . . . , αk } be a basis for N(T ). This


means nullity (T ) = k. As N(T ) is a subspace of V , we have k ≤ n.
Since {α1 , α2 , . . . , αk } ⊆ V and it is linearly independent in V , using
Corollary 2 of Theorem 5 we can extend this linearly independent set to a
basis of V . That is, there are non-zero vectors αk+1 , . . . , αn ∈ V such
that {α1 , . . . , αk , αk+1 , . . . , αn } is a basis for V .
We prove that B = {T (αk+1 ), . . . , T (αn )} is a basis for R(T ).
31
Theorem 2 contd.

[Diagram: the basis vectors α1 , . . . , αk of N(T ) are sent to 0, while the
images T (αk+1 ), . . . , T (αn ) of the remaining basis vectors span R(T ).]

32
Theorem 2 contd.

Claim 1: The set {T (αk+1 ), . . . , T (αn )} spans R(T ).


Let β ∈ R(T ). Then by the definition of range of T there exists α ∈ V
such that β = T (α). Since α ∈ V = Span {α1 , . . . , αn }, there exist
scalars c1 , c2 , . . . , cn such that

α = c1 α1 + · · · + ck αk + ck+1 αk+1 + · · · + cn αn .

So,

β = T (α) = T (c1 α1 + · · · + ck αk + ck+1 αk+1 + · · · + cn αn ).

As T is a linear transformation, we have

β = c1 T (α1 ) + · · · + ck T (αk ) + ck+1 T (αk+1 ) + · · · + cn T (αn ).

But, α1 , . . . , αk ∈ N(T ), so we have T (α1 ) = 0, . . . , T (αk ) = 0.


33
Thus,
β = ck+1 T (αk+1 ) + · · · + cn T (αn ).

Hence, the set {T (αk+1 ), . . . , T (αn )} spans R(T ).

Claim 2: {T (αk+1 ), . . . , T (αn )} is an L.I. set.


Consider
ck+1 T (αk+1 ) + · · · + cn T (αn ) = 0.

As T is a linear transformation, we have

T (ck+1 αk+1 + · · · + cn αn ) = 0.

This implies
ck+1 αk+1 + · · · + cn αn ∈ N(T ).

But,
N(T ) = Span {α1 , . . . , αk } .

34
So, there exist scalars b1 , . . . , bk ∈ F such that

ck+1 αk+1 + · · · + cn αn = b1 α1 + · · · + bk αk

=⇒ b1 α1 + · · · + bk αk − ck+1 αk+1 − · · · − cn αn = 0.

Since {α1 , . . . , αk , αk+1 , . . . , αn } is an L.I. set, we have

b1 = · · · = bk = −ck+1 = · · · = −cn = 0.

This shows that

ck+1 T (αk+1 ) + · · · + cn T (αn ) = 0 implies ck+1 = · · · = cn = 0.

This proves Claim 2. By Claims 1 and 2, B is a basis of R(T ) and dim


R(T ) = |B| = n − k. This implies dim R(T ) = dim(V ) − dim N(T ).
Hence,
rank(T ) + nullity(T ) = dim(V ).

35
Theorem 3.

If A is an m × n matrix, then row rank (A) = column rank (A).


Proof: We define a linear transformation T : F n×1 −→ F m×1 by
T (X ) = AX . By Rank-Nullity-Dimension Theorem,

rank (T ) + nullity (T ) = dim V = dim F n×1 = n.    — (1)

Now,

R(T ) = {Y ∈ F m×1 : T (X ) = Y for some X ∈ F n×1 }
      = {Y ∈ F m×1 : AX = Y for some X ∈ F n×1 }
      = {AX : X ∈ F n×1 }
      = Set of all linear combinations of columns of A
      = Column space (A).

rank (T ) = dim R(T ) = dim column space (A) = column rank (A).    — (2)

36
Theorem 3 contd.

Now,

N(T ) = {X ∈ F n×1 : T (X ) = 0}
      = {X ∈ F n×1 : AX = 0}
      = S (the solution space of AX = 0).

Let R be the row-reduced echelon matrix row-equivalent to A. Let r be


the number of non-zero rows of R.

r = row rank (R) = row rank (A).    — (3)

The system RX = 0 has n − r free variables, thus

n − r = dim S = dim N(T ) = nullity (T ).    — (4)

37
Theorem 3 contd.

From (1), (2) and (4),

column rank (A) + n − r = n

=⇒ column rank (A) = r = row rank (A), by (3)

This completes the proof.

Definition: The common value is called the rank of A:

rank (A) = column rank (A) = row rank (A).

38
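The statement of Theorem 3 can be spot-checked numerically. NumPy's `matrix_rank` computes the single common value, so the claim amounts to rank(A) = rank(Aᵗ); the matrix below is an arbitrary random choice (a sketch, not a proof):

```python
import numpy as np

# Theorem 3: row rank (A) = column rank (A), i.e. rank(A) = rank(A^t).
rng = np.random.default_rng(1)
A = rng.integers(-3, 4, size=(4, 6)).astype(float)
print(np.linalg.matrix_rank(A) == np.linalg.matrix_rank(A.T))  # True
```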
Problem 3

Find a linear transformation (if one exists) T : R3 −→ R3 such that


N(T ) = Span {(1, 1, 1)} and R(T ) = Span {(1, 0, −1), (1, 2, 2)}.

Solution: It is given that {α1 = (1, 1, 1)} is a basis for N(T ). By


extending this basis we construct a basis for V = R3 , say
{α1 = (1, 1, 1), α2 = (0, 1, 1), α3 = (0, 0, 1)} (We have solved similar
problems in the past!).

Note that β1 = (0, 0, 0), β2 = (1, 0, −1), β3 = (1, 2, 2) ∈ R(T ). Let us


construct T such that

T (α1 ) = T (1, 1, 1) = β1 = (0, 0, 0),

T (α2 ) = T (0, 1, 1) = β2 = (1, 0, −1)

and
T (α3 ) = T (0, 0, 1) = β3 = (1, 2, 2).
39
Problem 3 contd.

Let

(x, y , z) = aα1 + bα2 + cα3 = a(1, 1, 1) + b(0, 1, 1) + c(0, 0, 1).

Comparing components gives a = x, a + b = y , a + b + c = z, so
a = x, b = y − x, c = z − y . Hence

(x, y , z) = xα1 + (y − x)α2 + (z − y )α3

=⇒ T (x, y , z) = xβ1 + (y − x)β2 + (z − y )β3

=⇒ T (x, y , z) = x(0, 0, 0) + (y − x)(1, 0, −1) + (z − y )(1, 2, 2)

=⇒ T (x, y , z) = (−x + z, −2y + 2z, x − 3y + 2z)


  
                   [ −1  0 1 ] [ x ]
=⇒ T (x, y , z) =  [  0 −2 2 ] [ y ] .
                   [  1 −3 2 ] [ z ]

40
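The matrix constructed in Problem 3 can be sanity-checked numerically (a sketch assuming NumPy; appending the two target spanning vectors as extra columns is just one way to confirm they lie in the column space):

```python
import numpy as np

A = np.array([[-1.,  0., 1.],
              [ 0., -2., 2.],
              [ 1., -3., 2.]])

# N(T) should contain (1, 1, 1):
print(np.allclose(A @ np.array([1., 1., 1.]), 0))   # True

# R(T) = column space of A should be 2-dimensional and contain
# (1, 0, -1) and (1, 2, 2):
print(np.linalg.matrix_rank(A))                     # 2
B = np.column_stack([A, np.array([1., 0., -1.]), np.array([1., 2., 2.])])
print(np.linalg.matrix_rank(B))                     # 2  (no new directions added)
```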
L(V , W ): Set of all linear transformations from V into W .

Let V , W be vector spaces over the field F . Define the set

L(V , W ) = {T : T : V −→ W is a L.T. } .

Observation 1: L(V , W ) is a vector space under the operations

(T + U)(α) = T (α) + U(α)

and
(cT )(α) = cT (α)

for all T , U ∈ L(V , W ) and c ∈ F .

Observation 2: If V and W are finite dimensional vector spaces, then

dim L(V , W ) = dim V · dim W .

41
Linear Operator

Definition:
If V is a vector space over the field F , then a linear operator T is a linear
transformation from V into V .

42
One to one (1:1) function.
A function f : X −→ Y is said to be one-to-one if distinct elements of X
have distinct images in Y . In other words,

if f (x) = f (y ), then x = y .

Onto function.
A function f : X −→ Y is said to be an onto function if the range of f is
Y.

Invertible function.
A function f : X −→ Y is said to be invertible if there exists a
function g : Y −→ X such that
(i) g ◦ f : X −→ X and
(ii) f ◦ g : Y −→ Y are identity functions.

Proposition 1. A function f : X −→ Y is invertible if and only if f is


one-to-one and onto.

43
If T is linear then T −1 is linear

Theorem 4. Let V and W be two vector spaces over the field F and let
T : V −→ W be a linear transformation. If T is invertible, then the
inverse function T −1 : W −→ V is a linear transformation.

Proof: Suppose that T : V −→ W is an invertible linear transformation.


Then there exists a function T −1 : W −→ V such that
TT −1 : W −→ W and T −1 T : V −→ V are identity functions.

We want to show that

T −1 (cβ1 + β2 ) = cT −1 (β1 ) + T −1 (β2 ) for all β1 , β2 ∈ W , c ∈ F .

44
Let α1 = T −1 (β1 ) and α2 = T −1 (β2 ). Since T is invertible, α1 , α2 are
unique vectors in V such that T (α1 ) = β1 and T (α2 ) = β2 . Since T is a
linear transformation,

T (cα1 + α2 ) = cT (α1 ) + T (α2 ) = cβ1 + β2 .

=⇒ T −1 (cβ1 + β2 ) = T −1 T (cα1 + α2 )

=⇒ T −1 (cβ1 + β2 ) = cα1 + α2 = cT −1 (β1 ) + T −1 (β2 ) .

Hence T −1 is a linear transformation.

45
Problem 4. Let T (x1 , x2 ) = (x1 + x2 , x1 ) be a linear operator defined on
F 2 . Find T −1 if it exists.

Solution:

T (x1 , x2 ) = [ 1 1 ] [ x1 ]   =⇒ T (X ) = AX .
               [ 1 0 ] [ x2 ]

Now,

[A|I ] = [ 1 1 | 1 0 ] ∼ [ 1 0 | 0  1 ] = [I |A−1 ].
         [ 1 0 | 0 1 ]   [ 0 1 | 1 −1 ]

Thus,

T −1 (X ) = A−1 X = [ 0  1 ] [ x1 ] .
                    [ 1 −1 ] [ x2 ]

This implies
T −1 (x1 , x2 ) = (x2 , x1 − x2 ).

46
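The inverse found in Problem 4 can be spot-checked numerically (a sketch assuming NumPy; the test vector X is an arbitrary choice):

```python
import numpy as np

A = np.array([[1., 1.],
              [1., 0.]])                # T(x1, x2) = (x1 + x2, x1)
A_inv = np.linalg.inv(A)
print(np.allclose(A_inv, [[0., 1.], [1., -1.]]))  # True: T^{-1}(x1, x2) = (x2, x1 - x2)

# Check T^{-1}(T(X)) = X on a sample vector:
X = np.array([3., 7.])
print(np.allclose(A_inv @ (A @ X), X))            # True
```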
Problem 5. Find the inverse of a linear operator T on R3 defined as

T (x1 , x2 , x3 ) = (3x1 , x1 − x2 , 2x1 + x2 + x3 ).

Solution:

T (x1 , x2 , x3 ) = [ 3  0 0 ] [ x1 ]
                    [ 1 −1 0 ] [ x2 ] .
                    [ 2  1 1 ] [ x3 ]

Now,

[A|I ] = [ 3  0 0 | 1 0 0 ]   [ 1 0 0 |  1/3  0 0 ]
         [ 1 −1 0 | 0 1 0 ] ∼ [ 0 1 0 |  1/3 −1 0 ] = [I |A−1 ].
         [ 2  1 1 | 0 0 1 ]   [ 0 0 1 | −1    1 1 ]

47
Thus,
T −1 (X ) = A−1 X .

This implies

T −1 (x1 , x2 , x3 ) = ((1/3)x1 , (1/3)x1 − x2 , −x1 + x2 + x3 ).

48
Definition. A linear transformation T : V −→ W is non-singular

if T (α) = 0 implies α = 0.

That is, N(T ) = {0}.

49
Lemma 1. Let T : V −→ W be a linear transformation. Then the
following statements are equivalent.
(1) T is one-to-one.
(2) T is non-singular.
Proof: (1) =⇒ (2). Suppose that T is one-to-one. Let T (α) = 0. As T
is a linear transformation we have T (0) = 0. This implies T (α) = T (0).
But T is one-to-one, so α = 0. This shows T is non-singular.

(2) =⇒ (1). Suppose that T is non-singular. Then by definition


N(T ) = {0}.

Let T (α) = T (β)


=⇒ T (α) − T (β) = 0
=⇒ T (α − β) = 0 (∵ T is a L.T.)
=⇒ α − β ∈ N(T ) = {0}
=⇒ α = β.

This proves T is one-to-one.

50


Non-singular linear transformations preserve linear independence

Theorem 5. Let T : V −→ W be a linear transformation. Then T is


non-singular if and only if T carries each linearly independent subset of V
onto a linearly independent subset of W .

Proof:
Case 1: Suppose that T is non-singular. Then by definition N(T ) = {0}.
Let S = {α1 , α2 , . . . , αk } be a linearly independent set in V . We show that
{T (α1 ), T (α2 ), . . . , T (αk )} is linearly independent in W .

Let c1 T (α1 ) + c2 T (α2 ) + · · · + ck T (αk ) = 0


=⇒ T (c1 α1 + c2 α2 + · · · + ck αk ) = 0 (∵ T is a L.T.)
=⇒ c1 α1 + c2 α2 + · · · + ck αk ∈ N(T ) = {0}
=⇒ c1 α1 + c2 α2 + · · · + ck αk = 0.

51
As S = {α1 , α2 , . . . , αk } is linearly independent, we have

c1 = c2 = · · · = ck = 0.

This shows if

c1 T (α1 ) + c2 T (α2 ) + · · · + ck T (αk ) = 0,

then
c1 = c2 = · · · = ck = 0.

Hence, {T (α1 ), T (α2 ), . . . , T (αk )} is a linearly independent subset of W .


This completes the proof of Case 1.

Case 2: Suppose that T carries linearly independent subsets onto linearly
independent subsets.
Let T (α) = 0. If α ̸= 0, then T carries a linearly independent set {α}
onto a linearly dependent set {T (α)} = {0}, a contradiction. Thus,
α = 0. This implies T is non-singular.
52
Theorem 6. Let V and W be finite dimensional vector spaces over the
field F such that dim V = dim W . If T : V −→ W is a linear
transformation, then the following are equivalent.

(i) T is invertible.
(ii) T is non-singular.
(iii) T is onto.
(iv) T carries a basis of V to a basis of W. That is, if {α1 , α2 , . . . , αn } is
a basis for V , then {T (α1 ), T (α2 ), . . . , T (αn )} is a basis for W .

53
Proof. Let dim V = dim W = n. By Rank-Nullity-Dimension Theorem,

rank (T ) + nullity (T ) = dim V = n. (1)

(i) =⇒ (ii). Assume that T is invertible. So, by Proposition 1, T is


one-to-one. Then Lemma 1 implies T is non-singular.

(ii) =⇒ (iii). Assume that T is non-singular. Then by definition


N(T ) = {0}. So, nullity (T ) = 0. From equation (1) we have
rank (T ) = n. Thus, dim R(T ) = dim W . This implies

R(T ) = W (∵ R(T ) ⊆ W and dim R(T ) = dim W ).

Hence, T is onto.

54
(iii) =⇒ (iv ). Assume that T is onto. That is R(T ) = W . Let
{α1 , . . . , αn } be a basis for V . Our aim is to show that
{T (α1 ), . . . , T (αn )} is a basis for W .

First, we prove that {T (α1 ), . . . , T (αn )} spans W . Let β ∈ W = R(T ).


Since T is onto, there exists α ∈ V such that T (α) = β. Since α ∈ V
and {α1 , α2 , . . . , αn } is a basis for V , there exist scalars c1 , c2 , . . . , cn
such that α = c1 α1 + c2 α2 + · · · + cn αn . So,

β = T (α) = c1 T (α1 ) + c2 T (α2 ) + · · · + cn T (αn ).

This implies {T (α1 ), . . . , T (αn )} spans R(T ) = W . Since dim W = n,


{T (α1 ), . . . , T (αn )} is a basis for R(T ) = W .

55
(iv ) =⇒ (i). Let {α1 , . . . , αn } be a basis for V . By our assumption
{T (α1 ), . . . , T (αn )} forms a basis for R(T ). Since
dim W = n = dim R(T ) and R(T ) ⊆ W , we must have R(T ) = W .
Thus, T is onto.

Next, we show that T is one-to-one. That is N(T ) = {0}. Let α ∈ N(T ).


As we have a basis {α1 , α2 , . . . , αn } for V , there exist scalars
c1 , c2 , . . . , cn such that α = c1 α1 + c2 α2 + · · · + cn αn . So,

0 = T (α) = c1 T (α1 ) + c2 T (α2 ) + · · · + cn T (αn ).

Since {T (α1 ), . . . , T (αn )} is a basis, it is in particular an L.I. set. So,


c1 = c2 = · · · = cn = 0. This implies α = 0. So, T is one-to-one. Hence
T is invertible by Proposition 1.

56
If A is a given m × n matrix, then we can define a linear transformation
from Rn into Rm by
T (x) = Ax.

What about the converse?

57
Theorem 7. Let V be an n-dimensional vector space over the field F and
W an m-dimensional vector space over F . Let B be an ordered basis for
V and B ′ an ordered basis for W . For each linear transformation
T : V −→ W there is an m × n matrix A with entries in F such that

[T (α)]B ′ = A [α]B ,

for every vector α ∈ V . Furthermore T −→ A is a one-to-one


correspondence between the set of all linear transformations from V into
W and the set of all m × n matrices over the field F .

58
Proof.
Let B = {α1 , . . . , αn } and B ′ = {β1 , . . . , βm }. Note that T (αj ) ∈ W .
Since {β1 , . . . , βm } is a basis for W there exist unique scalars
A1j , A2j , . . . , Amj such that

T (αj ) = A1j β1 + A2j β2 + · · · + Amj βm = Σmi=1 Aij βi

for j = 1, 2, . . . , n. Therefore

              [ A1j ]
[T (αj )]B ′ = [ A2j ]
              [  ..  ]
              [ Amj ]

for j = 1, 2, . . . , n.

59
Define the matrix

                                                    [ A11 A12 · · · A1n ]
A = [[T (α1 )]B ′ , [T (α2 )]B ′ , . . . , [T (αn )]B ′ ] = [ A21 A22 · · · A2n ]
                                                    [  ..   ..  · · ·  .. ]
                                                    [ Am1 Am2 · · · Amn ]

This m × n matrix A is called the matrix of T relative to the ordered bases
B, B ′ . The matrix A is denoted by

A = [T ]B,B ′ .

60
Our aim is to understand explicitly how the matrix A determines the linear
transformation T .
We claim that
[T (α)]B ′ = A [α]B .

Proof. Let α ∈ V . As B = {α1 , α2 , . . . , αn } is a basis for V there exist


unique scalars x1 , x2 , . . . , xn such that
α = x1 α1 + x2 α2 + · · · + xn αn = Σnj=1 xj αj .

Since T is a linear transformation, we have

T (α) = T (Σnj=1 xj αj ) = Σnj=1 xj T (αj )
      = Σnj=1 xj (Σmi=1 Aij βi ) = Σmi=1 (Σnj=1 Aij xj ) βi .

61
So,

             [ Σnj=1 A1j xj ]
[T (α)]B ′ = [ Σnj=1 A2j xj ]
             [      ..      ]
             [ Σnj=1 Amj xj ]

             [ A11 x1 + A12 x2 + · · · + A1n xn ]
           = [ A21 x1 + A22 x2 + · · · + A2n xn ]
             [                ..                ]
             [ Am1 x1 + Am2 x2 + · · · + Amn xn ]

             [ A11 A12 · · · A1n ] [ x1 ]
           = [ A21 A22 · · · A2n ] [ x2 ]
             [  ..   ..  · · ·  .. ] [ .. ] .
             [ Am1 Am2 · · · Amn ] [ xn ]

62
This implies
[T (α)]B ′ = A [α]B ,

where A = [[T (α1 )]B ′ , [T (α2 )]B ′ , . . . , [T (αn )]B ′ ] .

This completes the proof of the theorem.

63
Problem 6. Let T : R2 −→ R3 be a linear transformation defined as

T (x1 , x2 ) = (x2 , x1 − x2 , x1 + x2 ) .

Let
B = {α1 = (1, 0), α2 = (0, 1)}

and
B ′ = {β1 = (1, 1, 1), β2 = (1, 1, 0), β3 = (1, 0, 0)}

be respective ordered bases for R2 and R3 . Find [T ]B,B ′ .
Solution.

T (α1 ) = T (1, 0)
= (0, 1, 1)
= (1, 1, 1) + 0(1, 1, 0) − (1, 0, 0)
= β1 + 0β2 − β3 .

64
 
So, [T (α1 )]B ′ = (1, 0, −1)t .

Similarly,

T (α2 ) = T (0, 1)
        = (1, −1, 1)
        = (1, 1, 1) − 2(1, 1, 0) + 2(1, 0, 0)
        = β1 − 2β2 + 2β3 .

So, [T (α2 )]B ′ = (1, −2, 2)t .

Hence,

                                             [  1  1 ]
A = [T ]B,B ′ = [[T (α1 )]B ′ , [T (α2 )]B ′ ] = [  0 −2 ] .
                                             [ −1  2 ]

65
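Theorem 7's identity [T (α)]B ′ = A [α]B can be spot-checked for Problem 6 (a sketch assuming NumPy; since B is the standard basis, [α]B = α, and the test vector α is an arbitrary choice):

```python
import numpy as np

# Problem 6: A is the computed matrix of T relative to B, B'.
A = np.array([[ 1.,  1.],
              [ 0., -2.],
              [-1.,  2.]])
Bp = np.array([[1., 1., 1.],     # columns are β1, β2, β3
               [1., 1., 0.],
               [1., 0., 0.]])

def T(v):
    x1, x2 = v
    return np.array([x2, x1 - x2, x1 + x2])

alpha = np.array([3., -2.])               # [α]_B = α, as B is standard
coords = np.linalg.solve(Bp, T(alpha))    # [T(α)]_{B'}
print(np.allclose(coords, A @ alpha))     # True
```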
Note: Let V be a finite dimensional vector space and B an ordered basis
for V . If T : V −→ V is a linear operator, then A is denoted as [T ]B . So
by Theorem 7 we have

[T (α)]B = [T ]B [α]B .

The matrix [T ]B is called the matrix of T relative to the ordered basis B.

66
Problem 7. Let T : R2 −→ R2 be a linear transformation defined as
T (x1 , x2 ) = (x1 , 0). Let B = {α1 = (1, 1), α2 = (1, 2)} be an ordered
basis for R2 . Find [T ]B .

Solution.
T (α1 ) = T (1, 1) = (1, 0) = 2α1 − α2 .

So, [T (α1 )]B = (2, −1)t .

Similarly,
T (α2 ) = T (1, 2) = (1, 0) = 2α1 − α2 .

So, [T (α2 )]B = (2, −1)t .

Hence, [T ]B = ([T (α1 )]B , [T (α2 )]B ) = [  2  2 ] .
                                           [ −1 −1 ]
67
Problem 8. Let P3 be the vector space of all real polynomials of degree at
most three and P2 be the vector space of all real polynomials of degree at
most two. Let D be the differentiation transformation from P3 into P2 .
Let B = {1, x, x 2 , x 3 } and B ′ = {1, x, x 2 } be two ordered bases for P3
and P2 , respectively. Find [D]B,B ′ .

68
Solution.

D(α1 ) = D(1) = 0 = 0 · 1 + 0 · x + 0 · x 2 = 0 · β1 + 0 · β2 + 0 · β3

D(α2 ) = D(x) = 1 = 1 · 1 + 0 · x + 0 · x 2 = 1 · β1 + 0 · β2 + 0 · β3

D(α3 ) = D(x 2 ) = 2x = 0 · 1 + 2 · x + 0 · x 2 = 0 · β1 + 2 · β2 + 0 · β3

D(α4 ) = D(x 3 ) = 3x 2 = 0 · 1 + 0 · x + 3 · x 2 = 0 · β1 + 0 · β2 + 3 · β3

Hence,
            [ 0 1 0 0 ]
[D]B,B ′ =  [ 0 0 2 0 ] .
            [ 0 0 0 3 ]

69
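The differentiation matrix of Problem 8 can be spot-checked on a sample polynomial (a sketch assuming NumPy; the polynomial is an arbitrary choice):

```python
import numpy as np

# Problem 8: [D] in bases B = {1, x, x^2, x^3} and B' = {1, x, x^2}.
D = np.array([[0., 1., 0., 0.],
              [0., 0., 2., 0.],
              [0., 0., 0., 3.]])

# p(x) = 5 + 4x + 3x^2 + 2x^3 has B-coordinates (5, 4, 3, 2).
p = np.array([5., 4., 3., 2.])
print(D @ p)   # [4. 6. 6.]  i.e. p'(x) = 4 + 6x + 6x^2
```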
Theorem 8. Let V be a finite dimensional vector space over the field F
and let B = {α1 , α2 , . . . , αn } and B ′ = {β1 , β2 , . . . , βn } be two ordered
bases for V . Suppose T : V −→ V is a linear operator. If
P = [P1 , P2 , . . . , Pn ] is the n × n matrix with columns Pj = [βj ]B , then

[T ]B ′ = P −1 [T ]B P.

Proof: Reading assignment.

Similar matrices.
Let A and B be n × n matrices over the field F . We say B is similar to A
over F if there exists an invertible n × n matrix P over F such that

B = P −1 AP.

Note. From Theorem 8, it follows that the matrices [T ]B and [T ]B ′ are similar.

70
Problem 9. Let T be a linear operator on R2 defined as
T (x1 , x2 ) = (x1 , 0). Let B = {α1 = (1, 1), α2 = (1, 2)} be an ordered
basis for R2 . Let B ′ = {β1 = (1, 0), β2 = (0, 1)} denotes the standard
basis of R2 . Find a matrix P such that [T ]B ′ = P −1 [T ]B P.
Solution. From Problem 7 we know that

[T ]B = [  2  2 ] .
        [ −1 −1 ]

Similarly, we can show that

[T ]B ′ = [ 1 0 ] .
          [ 0 0 ]
Now we find the matrix P. Note that

β1 = (1, 0) = 2(1, 1) − (1, 2) = 2α1 − α2

and
β2 = (0, 1) = −(1, 1) + (1, 2) = −α1 + α2 .

71
So,

P = [  2 −1 ]   and   P −1 = [ 1 1 ] .
    [ −1  1 ]                [ 1 2 ]

Therefore

P −1 [T ]B P = [ 1 1 ] [  2  2 ] [  2 −1 ] = [ 1 0 ] = [T ]B ′ .
               [ 1 2 ] [ −1 −1 ] [ −1  1 ]   [ 0 0 ]

72
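The similarity computation of Problem 9 can be reproduced with NumPy (a sketch; it simply evaluates P⁻¹ [T ]B P and compares with [T ]B ′):

```python
import numpy as np

T_B = np.array([[ 2.,  2.],
                [-1., -1.]])
P = np.array([[ 2., -1.],
              [-1.,  1.]])

T_Bp = np.linalg.inv(P) @ T_B @ P
print(np.allclose(T_Bp, [[1., 0.], [0., 0.]]))  # True
```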
Problem 10. Let P3 be the vector space of all real polynomials of degree
at most three. Let D be the differentiation operator on P3 . Let
B = {1, x, x 2 , x 3 } and B ′ = {1, 2x, −3x 2 , 2x 3 } be two ordered bases for
P3 . Find a matrix P such that [D]B ′ = P −1 [D]B P.

73
Eigenvalues / Characteristic values / Characteristic roots

Definitions.

• Let A be an n × n (square) matrix over the field F . A scalar λ ∈ F is


an eigenvalue of A if there exists a non-zero vector X ∈ F n×1 such
that
AX = λX .

• Any non-zero vector X such that AX = λX is called an eigenvector


of A corresponding to the eigenvalue λ.

• EA (λ) = {X : AX = λX } = {X : (λI − A)X = 0} is called the


eigenspace of A associated to λ.

Proposition 2. The eigenspace EA (λ) associated with the eigenvalue λ is


a subspace of F n×1 .

74
How to find the eigenvalues.
Let A be a given n × n matrix and λ be an eigenvalue of A. Then by
definition there exists a non-zero vector X such that AX = λX . This
implies the system
(λI − A)X = 0

has a non-trivial solution. This holds if and only if (λI − A) is not


invertible. This holds if and only if

det (λI − A) = 0.

Characteristic Polynomial.
Let A be an n × n matrix over the field F . The polynomial
f (x) = det(xI − A) is called the characteristic polynomial of A.

Note. Thus, the eigenvalues of a matrix A are the roots of the


characteristic polynomial of matrix A.

75
How to find the eigenvectors.
Fix one eigenvalue λ of matrix A. Then solve the system (λI − A)X = 0.
The non-zero solutions are the eigenvectors of matrix A corresponding to
the eigenvalue λ.

76
Problem 11. Find the eigenvalues and the corresponding eigenvectors of
the matrix A = [ 1 2 ] .
               [ 0 2 ]

Solution: Consider

det (λI − A) = 0

=⇒ det [ λ − 1    −2   ] = 0
       [   0    λ − 2  ]

=⇒ (λ − 1)(λ − 2) = 0

=⇒ λ = 1, 2.

So, λ1 = 1, λ2 = 2 are two eigenvalues of the given matrix.

77
The eigenspace corresponding to λ = 1.

EA (1) = {X : (λI −A)X = 0} = {X : (I −A)X = 0} = {X : (A−I )X = 0}.

Consider the system of equations

(A − I )X = 0

=⇒ [ 0 2 ] [ x ] = [ 0 ]
   [ 0 1 ] [ y ]   [ 0 ]

=⇒ y = 0.

The solutions of this system are of the form (a, 0), where a ∈ R. Hence

EA (1) = {(a, 0) : a ∈ R} = {a(1, 0) : a ∈ R} = Span {(1, 0)}.

Each nonzero vector in EA (1) is an eigenvector of A corresponding to the


eigenvalue λ = 1.

78
The eigenspace corresponding to λ = 2.

EA (2) = {X : (2I − A)X = 0} = {X : (A − 2I )X = 0}.

Consider the system of equations

(A − 2I )X = 0

=⇒ [ −1  2 ] [ x ] = [ 0 ]
   [  0  0 ] [ y ]   [ 0 ] .

=⇒ x = 2y .

Set y = a, then x = 2a. Thus, the solutions of this system are of the form
(2a, a), where a ∈ R. Hence

EA (2) = {(2a, a) : a ∈ R} = {a(2, 1) : a ∈ R} = Span {(2, 1)}.

Each nonzero vector in EA (2) is an eigenvector of A corresponding to the
eigenvalue λ = 2.
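The two eigenpairs found above can be verified directly; a quick NumPy check (illustrative, not part of the text):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 2.0]])

v1 = np.array([1.0, 0.0])   # spans EA(1)
v2 = np.array([2.0, 1.0])   # spans EA(2)

# A v = lambda v for each eigenpair.
print(np.allclose(A @ v1, 1 * v1))  # True
print(np.allclose(A @ v2, 2 * v2))  # True
```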
Problem 12. Find the eigenvalues and corresponding eigenspaces of the
matrix  
A = [  5  −6  −6 ]
    [ −1   4   2 ] .
    [  3  −6  −4 ]

Solution. The characteristic polynomial of A is

fA (λ) = det(λI − A) = | λ − 5     6       6   |
                       |   1     λ − 4    −2   | = (λ − 2)^2 (λ − 1).
                       |  −3       6     λ + 4 |

=⇒ λ = 1, 2, 2.

Hence eigenvalues of A are 1, 2.

The eigenspace corresponding to λ = 1.

EA (1) = {X : (I − A)X = 0} = {X : (A − I )X = 0} .
   
4 −6 −6 1 0 −1
1
A − I =  −1 3 2 ∼ 0 1
   
3 
3 −6 −5 0 0 0
1
(A − I )X = 0 =⇒ x1 − x3 = 0, x2 + x3 = 0
3
Note that (i) pivot variables = {x1 , x2 } and (ii) free variables = {x3 }. Let
x3 = a. This implies x1 = a and x2 = − 3a . Thus
n a  o na o
EA (1) = a, − , a : a ∈ R = (3, −1, 3) : a ∈ R = Span {(3, −1, 3)} .
3 3

The eigenspace corresponding to λ = 2.

EA (2) = {X : (2I − A)X = 0} = {X : (A − 2I )X = 0} .


   
A − 2I = [  3  −6  −6 ]   [ 1  −2  −2 ]
         [ −1   2   2 ] ∼ [ 0   0   0 ] .
         [  3  −6  −6 ]   [ 0   0   0 ]
(A − 2I )X = 0 =⇒ x1 − 2x2 − 2x3 = 0.

Note that (i) pivot variables = {x1 } and (ii) free variables = {x2 , x3 }. Let
x2 = a and x3 = b. This implies x1 = 2a + 2b. Therefore

EA (2) = {(2a + 2b, a, b) : a, b ∈ R} = {a(2, 1, 0) + b(2, 0, 1) : a, b ∈ R} .

Thus,
EA (2) = Span {(2, 1, 0), (2, 0, 1)} .
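As in Problem 11, the eigenspace bases found above can be verified numerically; a NumPy sketch (illustrative, not part of the text):

```python
import numpy as np

A = np.array([[ 5.0, -6.0, -6.0],
              [-1.0,  4.0,  2.0],
              [ 3.0, -6.0, -4.0]])

v  = np.array([3.0, -1.0, 3.0])   # spans EA(1)
w1 = np.array([2.0,  1.0, 0.0])   # w1, w2 together span EA(2)
w2 = np.array([2.0,  0.0, 1.0])

print(np.allclose(A @ v,  1 * v))   # True
print(np.allclose(A @ w1, 2 * w1))  # True
print(np.allclose(A @ w2, 2 * w2))  # True
```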

Notes. Let A = [ a  b ] . Then λI − A = [ λ − a    −b   ] .
               [ c  d ]                 [  −c    λ − d  ]

The characteristic polynomial of A is

fA (λ) = det(λI − A) = (λ − a)(λ − d) − bc = λ^2 − (a + d)λ + ad − bc.

That is
fA (λ) = λ^2 − trace (A)λ + det (A).

But, if λ1 , λ2 are the roots of the characteristic polynomial, then

fA (λ) = (λ − λ1 )(λ − λ2 ) = λ^2 − (λ1 + λ2 )λ + λ1 λ2 .

Hence,
λ1 + λ2 = trace (A) and λ1 λ2 = det (A).
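These two identities can be checked on the matrix of Problem 11; a NumPy sketch (illustrative, not part of the text):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 2.0]])

l1, l2 = np.linalg.eigvals(A)
print(np.isclose(l1 + l2, np.trace(A)))       # True: sum of eigenvalues = trace
print(np.isclose(l1 * l2, np.linalg.det(A)))  # True: product = determinant
```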

Notes. Let A be an n × n matrix over the field F.

• The characteristic polynomial of A is of the form

fA (λ) = λ^n + (−1)^1 trace (A) λ^{n−1} + · · · + (−1)^n det(A).

• If λ1 , . . . , λn are the eigenvalues of A, then

trace (A) = λ1 + · · · + λn

and
det(A) = λ1 · · · λn .
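The same identities hold for any square matrix; a quick check on an arbitrary random 4 × 4 matrix (the eigenvalues may be complex, but conjugate pairs make the sum and product real):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))   # an arbitrary random real matrix

eigs = np.linalg.eigvals(A)       # possibly complex eigenvalues
print(np.isclose(eigs.sum().real, np.trace(A)))        # True: trace = sum
print(np.isclose(eigs.prod().real, np.linalg.det(A)))  # True: det = product
```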

The Cayley-Hamilton theorem

Theorem 9. (Cayley-Hamilton theorem)


Every square matrix satisfies its own characteristic polynomial.
That is, if A is an n × n matrix over the field F and f (λ) is the
characteristic polynomial of A, then f (A) = 0.
Example. Let A = [ 1  2 ] . Then from problem 11 we know that the
                 [ 0  2 ]
characteristic polynomial of A is f (λ) = λ^2 − 3λ + 2. Then

f (A) = A^2 − 3A + 2I = [ 0  0 ] . (verify !)
                        [ 0  0 ]
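The "verify!" step, carried out in NumPy (illustrative, not part of the text):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 2.0]])
I = np.eye(2)

# f(A) = A^2 - 3A + 2I should be the zero matrix.
f_of_A = A @ A - 3 * A + 2 * I
print(np.allclose(f_of_A, np.zeros((2, 2))))  # True
```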

Applications of Cayley-Hamilton theorem

Application 1. Let f (λ) = λ^n + c_{n−1} λ^{n−1} + · · · + c_1 λ + c_0 be the
characteristic polynomial of A. By the Cayley-Hamilton theorem

f (A) = 0
=⇒ A^n + c_{n−1} A^{n−1} + · · · + c_1 A + c_0 I = 0
=⇒ A^n = −c_{n−1} A^{n−1} − · · · − c_1 A − c_0 I .

Thus, all the higher powers of A starting from n can be calculated as a
linear combination of the lower powers A^0 , A^1 , . . . , A^{n−1} .
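For the matrix of Problem 11, f (λ) = λ^2 − 3λ + 2 gives A^2 = 3A − 2I, so every power A^k has the form pA + qI. A sketch of this reduction (illustrative, not part of the text):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 2.0]])
I = np.eye(2)

# A^2 = 3A - 2I, so if A^k = p*A + q*I then
# A^(k+1) = p*A^2 + q*A = (3p + q)*A - 2p*I.
p, q = 1.0, 0.0        # A^1 = 1*A + 0*I
for _ in range(9):     # climb from A^1 up to A^10
    p, q = 3 * p + q, -2 * p

A10 = p * A + q * I
print(np.allclose(A10, np.linalg.matrix_power(A, 10)))  # True
```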

Applications of Cayley-Hamilton theorem

Application 2. Let f (λ) = λ^n + c_{n−1} λ^{n−1} + · · · + c_1 λ + c_0 be the
characteristic polynomial of A. By the Cayley-Hamilton theorem

f (A) = 0
=⇒ A^n + c_{n−1} A^{n−1} + · · · + c_1 A + c_0 I = 0
=⇒ A(A^{n−1} + c_{n−1} A^{n−2} + · · · + c_1 I ) = −c_0 I
=⇒ A ( (−1/c_0) A^{n−1} − (c_{n−1}/c_0) A^{n−2} − · · · − (c_1/c_0) I ) = I .

Thus, provided c_0 ≠ 0 (equivalently, A is invertible, since c_0 = (−1)^n det(A)),

A^{−1} = (−1/c_0) A^{n−1} − (c_{n−1}/c_0) A^{n−2} − · · · − (c_1/c_0) I .

Example
Let A = [ 1  2 ] . Then from problem 11 we know that the characteristic
        [ 0  2 ]
polynomial of A is λ^2 − 3λ + 2. By the Cayley-Hamilton theorem

f (A) = 0
=⇒ A^2 − 3A + 2I = 0
=⇒ A(A − 3I ) = −2I
=⇒ A ( (−1/2) A + (3/2) I ) = I .

Thus,

A^{−1} = (−1/2) A + (3/2) I = [ 1   −1  ] .
                              [ 0   1/2 ]
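The same computation in NumPy, compared against the built-in inverse (illustrative, not part of the text):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 2.0]])
I = np.eye(2)

# From f(lambda) = lambda^2 - 3*lambda + 2: A^{-1} = (-1/2)A + (3/2)I.
A_inv = -0.5 * A + 1.5 * I

print(np.allclose(A_inv @ A, I))             # True
print(np.allclose(A_inv, np.linalg.inv(A)))  # True
```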

