Chapter 3
Jagannath
IIITDM Kancheepuram, Chennai
Linear Transformations
Definition:
Let V and W be vector spaces over the field F . A linear transformation
from V into W is a function T : V −→ W such that
T (cα + β) = cT (α) + T (β) for all vectors α, β ∈ V and all scalars c ∈ F .
1
Examples of Linear Transformations
2
Examples
(3) Let V = {f (x) = c0 + c1 x + c2 x^2 + · · · + cn x^n : n ∈ N, ci ∈ F }.
That is, V is the vector space of all polynomials over the field F .
We define a function D : V −→ V as
(Df )(x) := c1 + 2c2 x + · · · + ncn x^{n−1} . Then D is a linear
transformation, called the differentiation transformation.
(4) Let F = R. Let V be the vector space of all continuous functions
from R to R. We define a function T : V −→ V as
(Tf )(x) := ∫_0^x f (t) dt. Then T is a linear transformation, called the
integral transformation.
(5) Let A ∈ F m×n be a fixed m × n matrix. Define a function
T : F n×1 −→ F m×1 as T (X ) = AX . Then T is a linear transformation.
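A quick way to see Example (5) in action is to check the defining identity numerically. The following sketch uses Python with sympy; the particular matrix A and the vectors X, Y below are illustrative choices, not taken from the notes.

from sympy import Matrix, Rational

A = Matrix([[1, 2, 0], [3, -1, 4]])   # an illustrative fixed 2x3 matrix
X = Matrix([1, 0, 2])                 # column vectors in F^(3x1)
Y = Matrix([2, -1, 5])
c = Rational(7, 3)                    # an arbitrary scalar

# T(X) = AX is linear: T(cX + Y) = cT(X) + T(Y)
assert A * (c * X + Y) == c * (A * X) + A * Y
print("T(X) = AX satisfies T(cX + Y) = cT(X) + T(Y)")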
Property 1: T (0) = 0.
Proof. As
T (0) = T (1 · 0 + 0)
= 1 · T (0) + T (0) (∵ T is a L.T. )
= T (0) + T (0).
This implies
T (0) = 0.
4
Properties of linear transformations
Proof. (1 ⇒ 2).
and
(2 ⇒ 1).
5
Properties of linear transformations
Property 3: If α1 , . . . , αn ∈ V and c1 , . . . , cn ∈ F , then
T (c1 α1 + c2 α2 + · · · + cn αn ) = c1 T (α1 ) + c2 T (α2 ) + · · · + cn T (αn ).
Proof.
T (c1 α1 + c2 α2 + · · · + cn αn )
= c1 T (α1 ) + T (c2 α2 + · · · + cn αn ) (∵ T is a L.T.)
= c1 T (α1 ) + c2 T (α2 ) + T (c3 α3 + · · · + cn αn ) (∵ T is a L.T.)
⋮
= c1 T (α1 ) + c2 T (α2 ) + · · · + cn T (αn ).
6
Problem 1: Determine which of the following functions T : R2 −→ R2 are linear transformations.
(1) T (x1 , x2 ) = (1 + x1 , x2 ).
Ans: It is not a linear transformation as
T (0, 0) = (1, 0) =⇒ T (0) ̸= 0.
(2) T (x1 , x2 ) = (x2 , x1 ).
Ans: It is a linear transformation as
T [x1 ; x2 ] = [0 1; 1 0] [x1 ; x2 ] =⇒ T (X ) = AX .
(3) T (x1 , x2 ) = (x1^2 , x2 ).
Ans: It is not a linear transformation (Verify!).
7
Linear transformations are special !!
(Diagram: T maps each basis vector αj of V to the vector βj of W .)
Ordered basis,B = {α1 , α2 , . . . , αn }
The βj 's need not be distinct.
There is a unique linear transformation T with T (αj ) = βj for each j.
8
Theorem 1. Let V be a finite-dimensional vector space over the field F and let
B = {α1 , α2 , . . . , αn } be an ordered basis for V . Let W be a vector space over F and let
β1 , β2 , . . . , βn be any vectors in W . Then there exists exactly one linear transformation
T : V −→ W such that T (αj ) = βj for j = 1, 2, . . . , n.
Proof. Let α ∈ V . Since B is a basis for V , there exist unique scalars x1 , . . . , xn ∈ F such that
α = x1 α1 + x2 α2 + · · · + xn αn .
9
Proof contd.
We define a function T : V −→ W as
T (α) = T (x1 α1 + · · · + xn αn ) := x1 β1 + · · · + xn βn .
Claim 1: T (αj ) = βj . This follows from the definition of T by taking xj = 1 and xi = 0 for all i ̸= j.
10
Proof contd.
Claim 3: T is unique.
It is enough to prove that if U : V −→ W is a linear transformation with
U(αj ) = βj for j = 1, 2, . . . , n, then T (α) = U(α) for all α ∈ V .
Consider
U(α) = U(x1 α1 + x2 α2 + · · · + xn αn )
= x1 U(α1 ) + x2 U(α2 ) + · · · + xn U(αn ) (∵ U is a L.T.)
= x1 β1 + x2 β2 + · · · + xn βn (∵ U(αj ) = βj )
= T (α).
Problem 2. Let B = {α1 = (1, 2), α2 = (3, 4)} be an ordered basis for R2 . Let
β1 = (3, 2, 1), β2 = (6, 5, 4) ∈ R3 . Find the unique linear transformation
T : R2 −→ R3 such that T (αj ) = βj for j = 1, 2.
Solution: T (α1 ) = T (1, 2) = (3, 2, 1) = β1 and
T (α2 ) = T (3, 4) = (6, 5, 4) = β2 .
Let α = (x, y ) ∈ R2 . As {α1 , α2 } is a basis, we have α = aα1 + bα2 for
some a, b ∈ R. That is (x, y ) = a(1, 2) + b(3, 4) = (a + 3b, 2a + 4b). So,
a + 3b = x and 2a + 4b = y .
12
Problem 2 contd.
So,
(x, y ) = (−2x + (3/2)y ) α1 + (x − (1/2)y ) α2 .
Thus,
T (x, y ) = (−2x + (3/2)y ) β1 + (x − (1/2)y ) β2 (from the definition of T )
= (−2x + (3/2)y )(3, 2, 1) + (x − (1/2)y )(6, 5, 4)
= ( (3/2)y , x + (1/2)y , 2x − (1/2)y ) .
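The arithmetic above can also be reproduced symbolically. This is only a verification sketch in Python with sympy; the names alpha, beta and the use of Matrix.solve are choices made here, not notation from the notes.

from sympy import Matrix, symbols

x, y = symbols('x y')
alpha = Matrix([[1, 3], [2, 4]])           # columns: alpha1 = (1, 2), alpha2 = (3, 4)
beta  = Matrix([[3, 6], [2, 5], [1, 4]])   # columns: beta1 = (3, 2, 1), beta2 = (6, 5, 4)

coeffs = alpha.solve(Matrix([x, y]))       # a, b with (x, y) = a*alpha1 + b*alpha2
T_xy = (beta * coeffs).expand()            # T(x, y) = a*beta1 + b*beta2
print(T_xy)                                # (3y/2, x + y/2, 2x - y/2), as computed above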
13
Range of a linear transformation
Definition: Let T : V −→ W be a linear transformation. The range of T is the set
R(T ) := {β ∈ W : β = T (α) for some α ∈ V } = {T (α) : α ∈ V } .
14
Range of T
(Diagram: T maps V onto the subset R(T ) of W ; every β ∈ R(T ) is of the form β = T (α) for some α ∈ V .)
15
Proposition-1: R(T ) is a subspace of W .
Proof. Since T (0) = 0, we have 0 ∈ R(T ). Let β1 , β2 ∈ R(T ) and c ∈ F . Then β1 = T (α1 )
and β2 = T (α2 ) for some α1 , α2 ∈ V . As T (cα1 + α2 ) = cT (α1 ) + T (α2 ) = cβ1 + β2 ,
from the definition of R(T ) it follows that cβ1 + β2 ∈ R(T ). This proves
R(T ) is a subspace of W .
16
The null space of a linear transformation
N(T ) := {α ∈ V : T (α) = 0} .
17
Null space of T
(Diagram: every vector α ∈ N(T ) is mapped by T to the zero vector of W .)
18
Proposition-2: N(T ) is a subspace of V .
Proof. Since T (0) = 0, we have 0 ∈ N(T ). Let α1 , α2 ∈ N(T ) and c ∈ F . Then
T (cα1 + α2 ) = cT (α1 ) + T (α2 ) = c · 0 + 0 = 0, so cα1 + α2 ∈ N(T ). This proves N(T ) is a
subspace of V .
Definitions: Rank (T ) := dim R(T ) and Nullity (T ) := dim N(T ).
19
Examples
Example 1. Find the rank and nullity of the zero linear transformation
O : V −→ W defined by O(α) = 0 for all α ∈ V .
Solution:
R(O) = {O(α) : α ∈ V } = {0} and N(O) = {α ∈ V : O(α) = 0} = V .
Hence,
Rank (O) = dim R(O) = 0
and
Nullity (O) = dim N(O) = dim V .
20
Example 2. Find the rank and nullity of the identity linear transformation
I : V −→ V defined by I (α) = α for all α ∈ V .
Solution:
R(I ) = {I (α) : α ∈ V } = V and N(I ) = {α ∈ V : I (α) = α = 0} = {0}.
Hence,
Rank (I ) = dim R(I ) = dim V
and
Nullity (I ) = dim N(I ) = 0.
21
Example 3: Find the rank and nullity of the linear transformation
T : R2 −→ R3 defined as T (x1 , x2 ) = (x1 , 0, 0).
Solution:
R(T ) = {Y ∈ R3 : Y = T (X ) for some X ∈ R2 }
= {Y ∈ R3 : Y = T (X ) = (x1 , 0, 0) for some X ∈ R2 }
= {Y = (x1 , 0, 0) : X = (x1 , x2 ) ∈ R2 }
= {(x1 , 0, 0) : x1 ∈ R}
= {x1 (1, 0, 0) : x1 ∈ R}
= Span of {(1, 0, 0)} .
Rank(T ) = 1.
22
N(T ) = {X ∈ R2 : T (X ) = 0}
= {X = (x1 , x2 ) : T (x1 , x2 ) = 0}
= {(x1 , x2 ) : (x1 , 0, 0) = (0, 0, 0)}
= {(0, x2 ) : x2 ∈ R}
= {x2 (0, 1) : x2 ∈ R}
= Span of {(0, 1)} .
Nullity (T ) = 1.
23
Example 4: Let T : R3 −→ R3 be a function defined by T (X ) = AX , where
A = [1 −1 2; 2 1 0; −1 −2 2]. Find the rank and nullity of T .
24
Now,
R(T ) = {Y ∈ R3 : Y = T (X ) for some X ∈ R3 }
= {Y ∈ R3 : Y = AX , X ∈ R3 }
= {AX : X ∈ R3 }
= Set of linear combinations of columns of A
= Column space of A
= Row space of At .
To find the row space of At we first find the row-reduced echelon form of
At .
25
At = [1 2 −1; −1 1 −2; 2 0 2] ∼ [1 2 −1; 0 3 −3; 0 −4 4]
∼ [1 2 −1; 0 1 −1; 0 0 0] ∼ [1 0 1; 0 1 −1; 0 0 0]
So, Row space of At = Span {(1, 0, 1), (0, 1, −1)} .
As the set {(1, 0, 1), (0, 1, −1)} is linearly independent and spans the
range space R(T ), thus forms a basis for R(T ). Hence,
Rank(T ) = dim R(T ) = 2.
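The row reduction of At can be checked with sympy (a minimal sketch; the matrix is the one from Example 4):

from sympy import Matrix

At = Matrix([[1, 2, -1], [-1, 1, -2], [2, 0, 2]])   # A^t from Example 4

rref_At, pivots = At.rref()
print(rref_At)     # Matrix([[1, 0, 1], [0, 1, -1], [0, 0, 0]])
print(At.rank())   # 2, so Rank(T) = 2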
26
Now,
N(T ) = {X ∈ R3 : T (X ) = 0}
= {X ∈ R3 : AX = 0}
= Set of solutions of the system AX = 0
= Solution space of AX = 0.
27
A = [1 −1 2; 2 1 0; −1 −2 2] ∼ [1 −1 2; 0 3 −4; 0 −3 4]
∼ [1 −1 2; 0 1 −4/3; 0 0 0] ∼ [1 0 2/3; 0 1 −4/3; 0 0 0]
AX = 0 =⇒ x1 + (2/3)x3 = 0, x2 − (4/3)x3 = 0.
Take x3 = a. This implies x1 = −(2/3)a and x2 = (4/3)a.
28
Hence,
N(T ) = {(−(2/3)a, (4/3)a, a) : a ∈ R} = {(a/3)(−2, 4, 3) : a ∈ R} = Span {(−2, 4, 3)}
and Nullity (T ) = dim N(T ) = 1.
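The null space computation can be checked the same way (sketch; A is the matrix of Example 4):

from sympy import Matrix

A = Matrix([[1, -1, 2], [2, 1, 0], [-1, -2, 2]])

print(A.rref()[0])     # Matrix([[1, 0, 2/3], [0, 1, -4/3], [0, 0, 0]])
print(A.nullspace())   # one basis vector, a multiple of (-2, 4, 3)

# rank + nullity = dimension of the domain R^3
assert A.rank() + len(A.nullspace()) == 3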
29
Some remarks
• The method that we used to find the rank and nullity of the linear
transformation in Example-4 can also be used in Example-3.
• In all the four examples above, we have Rank (T ) + Nullity (T ) = dim V , where V is the domain of T .
30
Theorem 2 (Rank-Nullity-Dimension Theorem)
Let V and W be vector spaces over the field F and let T : V −→ W be a linear
transformation. If V is finite-dimensional, then
Rank (T ) + Nullity (T ) = dim V .
(Diagram: a basis {α1 , . . . , αk } of N(T ) is extended to a basis {α1 , . . . , αk , αk+1 , . . . , αn } of V ; the vectors T (αk+1 ), . . . , T (αn ) lie in R(T ).)
32
Theorem 2 contd.
α = c1 α1 + · · · + ck αk + ck+1 αk+1 + · · · + cn αn .
So,
T (ck+1 αk+1 + · · · + cn αn ) = 0.
This implies
ck+1 αk+1 + · · · + cn αn ∈ N(T ).
But,
N(T ) = Span {α1 , . . . , αk } .
34
So, there exist scalars b1 , . . . , bk ∈ F such that
ck+1 αk+1 + · · · + cn αn = b1 α1 + · · · + bk αk
=⇒ b1 α1 + · · · + bk αk − ck+1 αk+1 − · · · − cn αn = 0.
As {α1 , . . . , αn } is linearly independent, this forces
b1 = · · · = bk = −ck+1 = · · · = −cn = 0.
In particular ck+1 = · · · = cn = 0, so {T (αk+1 ), . . . , T (αn )} is linearly independent.
35
Theorem 3. Let A be an m × n matrix with entries in the field F . Then
row rank (A) = column rank (A).
Proof. Define T : F n×1 −→ F m×1 by T (X ) = AX . Now,
R(T ) = {Y ∈ F m×1 : T (X ) = Y for some X ∈ F n×1 }
= {Y ∈ F m×1 : AX = Y for some X ∈ F n×1 }
= {AX : X ∈ F n×1 }
= Set of all linear combinations of columns of A
= Column space (A)
rank (T ) = dim R(T ) = dim column space (A) = column rank (A) — (2)
36
Theorem 3 contd.
Now,
N(T ) = {X ∈ F n×1 : T (X ) = 0}
= {X ∈ F n×1 : AX = 0}
= S (the solution space of AX = 0).
37
Theorem 3 contd.
Hence nullity (T ) = dim N(T ) = dim S. By Theorem 2, rank (T ) + nullity (T ) = n, so
column rank (A) + dim S = n. Since row rank (A) + dim S = n as well, we conclude that
row rank (A) = column rank (A).
Definition: The rank of the matrix A is the common value of its row rank and column rank,
denoted rank (A).
38
Problem 3
and
T (α3 ) = T (0, 0, 1) = β3 = (1, 2, 2).
39
Problem 3 contd.
Let
40
L(V , W ): Set of all linear transformations from V into W .
L(V , W ) = {T : T : V −→ W is a L.T. } .
For T , U ∈ L(V , W ) and c ∈ F , define
(T + U)(α) := T (α) + U(α) and (cT )(α) := cT (α) for all α ∈ V .
With these operations, L(V , W ) is a vector space over F .
41
Linear Operator
Definition:
If V is a vector space over the field F , then a linear operator T is a linear
transformation from V into V .
42
One-to-one (1:1) function.
A function f : X −→ Y is said to be a one-to-one function if distinct elements of X have
distinct images in Y . In other words,
if f (x) = f (y ), then x = y .
Onto function.
A function f : X −→ Y is said to be an onto function if the range of f is
Y.
Invertible function.
A function f : X −→ Y is said to be an invertible function if there exists a
function g : Y −→ X such that
(i) g ◦ f : X −→ X and
(ii) f ◦ g : Y −→ Y are identity functions.
43
If T is linear then T −1 is linear
Theorem 4. Let V and W be two vector spaces over the field F and let
T : V −→ W be a linear transformation. If T is invertible, then the
inverse function T −1 : W −→ V is a linear transformation.
44
Proof. Let β1 , β2 ∈ W and c ∈ F . Let α1 = T −1 (β1 ) and α2 = T −1 (β2 ). Since T is
invertible, α1 , α2 are the unique vectors in V such that T (α1 ) = β1 and T (α2 ) = β2 . Since T
is a linear transformation, T (cα1 + α2 ) = cT (α1 ) + T (α2 ) = cβ1 + β2 .
=⇒ T −1 (cβ1 + β2 ) = T −1 T (cα1 + α2 ) = cα1 + α2 = cT −1 (β1 ) + T −1 (β2 ).
Hence T −1 is a linear transformation.
45
Problem 4. Let T (x1 , x2 ) = (x1 + x2 , x1 ) be a linear operator defined on
F 2 . Find T −1 if it exists.
Solution:
T (x1 , x2 ) = [1 1; 1 0] [x1 ; x2 ] =⇒ T (X ) = AX .
Now,
[A|I ] = [1 1 | 1 0; 1 0 | 0 1] ∼ [1 0 | 0 1; 0 1 | 1 −1] = [I |A−1 ].
This implies
T −1 (x1 , x2 ) = (x2 , x1 − x2 ).
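The same T −1 can be recovered directly from the matrix A. In the sketch below, sympy's inv() stands in for the row reduction [A|I ] ∼ [I |A−1 ] carried out above.

from sympy import Matrix, symbols

x1, x2 = symbols('x1 x2')
A = Matrix([[1, 1], [1, 0]])      # matrix of T(x1, x2) = (x1 + x2, x1)

Ainv = A.inv()
print(Ainv)                       # Matrix([[0, 1], [1, -1]])
print(Ainv * Matrix([x1, x2]))    # (x2, x1 - x2) = T^(-1)(x1, x2)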
46
Problem 5. Find the inverse of the linear operator T on R3 defined as
T (x1 , x2 , x3 ) = (3x1 , x1 − x2 , 2x1 + x2 + x3 ).
Solution:
T (x1 , x2 , x3 ) = [3 0 0; 1 −1 0; 2 1 1] [x1 ; x2 ; x3 ] =⇒ T (X ) = AX .
Now,
[A|I ] = [3 0 0 | 1 0 0; 1 −1 0 | 0 1 0; 2 1 1 | 0 0 1]
∼ [1 0 0 | 1/3 0 0; 0 1 0 | 1/3 −1 0; 0 0 1 | −1 1 1] = [I |A−1 ].
47
Thus,
T −1 (X ) = A−1 X .
This implies
T −1 (x1 , x2 , x3 ) = ( (1/3)x1 , (1/3)x1 − x2 , −x1 + x2 + x3 ) .
48
Definition. A linear transformation T : V −→ W is non-singular
if T (α) = 0 implies α = 0.
49
Lemma 1. Let T : V −→ W be a linear transformation. Then the
following statements are equivalent.
(1) T is one-to-one.
(2) T is non-singular.
Proof: (1) =⇒ (2). Suppose that T is one-to-one. Let T (α) = 0. As T
is a linear transformation we have T (0) = 0. This implies T (α) = T (0).
But T is one-to-one, so α = 0. This shows T is non-singular.
(2) =⇒ (1). Suppose that T is non-singular. Let T (α) = T (β). Then
T (α − β) = T (α) − T (β) = 0. As T is non-singular, α − β = 0, that is, α = β. This shows T
is one-to-one.
Theorem 5. Let T : V −→ W be a linear transformation. Then T is non-singular if and only
if T carries each linearly independent subset of V onto a linearly independent subset of W .
Proof:
Case 1: Suppose that T is non-singular. Then by definition N(T ) = {0}.
Let S = {α1 , α2 , . . . , αk } be a linearly independent set V . We show that
{T (α1 ), T (α2 ), . . . , T (αk )} is linearly independent in W .
51
Suppose that c1 T (α1 ) + c2 T (α2 ) + · · · + ck T (αk ) = 0 for some scalars c1 , . . . , ck ∈ F .
Then T (c1 α1 + c2 α2 + · · · + ck αk ) = 0, so c1 α1 + · · · + ck αk ∈ N(T ) = {0}. That is,
c1 α1 + c2 α2 + · · · + ck αk = 0.
As S = {α1 , α2 , . . . , αk } is linearly independent, we have
c1 = c2 = · · · = ck = 0.
This shows that {T (α1 ), T (α2 ), . . . , T (αk )} is linearly independent in W .
Theorem 6. Let V and W be finite-dimensional vector spaces over the field F with
dim V = dim W , and let T : V −→ W be a linear transformation. Then the following are
equivalent.
(i) T is invertible.
(ii) T is non-singular.
(iii) T is onto.
(iv) T carries a basis of V to a basis of W. That is, if {α1 , α2 , . . . , αn } is
a basis for V , then {T (α1 ), T (α2 ), . . . , T (αn )} is a basis for W .
53
Proof. Let dim V = dim W = n.
(i) =⇒ (ii). If T is invertible, then T is one-to-one, hence non-singular by Lemma 1.
(ii) =⇒ (iii). Assume that T is non-singular, so that Nullity (T ) = 0. By the
Rank-Nullity-Dimension Theorem, Rank (T ) = dim V − Nullity (T ) = n = dim W . As
R(T ) ⊆ W and dim R(T ) = dim W , we get R(T ) = W .
Hence, T is onto.
54
(iii) =⇒ (iv ). Assume that T is onto. That is R(T ) = W . Let
{α1 , . . . , αn } be a basis for V . Our aim is to show that
{T (α1 ), . . . , T (αn )} is a basis for W . Every vector of W = R(T ) is of the form
T (x1 α1 + · · · + xn αn ) = x1 T (α1 ) + · · · + xn T (αn ), so {T (α1 ), . . . , T (αn )} spans W . Since
dim W = n, these n vectors form a basis for W .
55
(iv ) =⇒ (i). Let {α1 , . . . , αn } be a basis for V . By our assumption
{T (α1 ), . . . , T (αn )} forms a basis for R(T ). Since
dim W = n = dim R(T ) and R(T ) ⊆ W , we must have R(T ) = W .
Thus, T is onto. Also, if T (α) = 0 where α = x1 α1 + · · · + xn αn , then
x1 T (α1 ) + · · · + xn T (αn ) = 0; as {T (α1 ), . . . , T (αn )} is linearly independent,
x1 = · · · = xn = 0, so α = 0. Hence T is non-singular, therefore one-to-one. Being
one-to-one and onto, T is invertible.
56
If A is a given m × n matrix, then we can define a linear transformation
from Rn into Rm by
T (x) = Ax.
57
Theorem 7. Let V be an n-dimensional vector space over the field F and
W an m-dimensional vector space over F . Let B be an ordered basis for
V and B ′ an ordered basis for W . For each linear transformation
T : V −→ W there is an m × n matrix A with entries in F such that
[T (α)]B ′ = A [α]B for every vector α ∈ V .
58
Proof. Write B = {α1 , . . . , αn } and B ′ = {β1 , . . . , βm }.
Note that T (αj ) ∈ W . Since {β1 , . . . , βm } is a basis for W there exist
unique scalars A1j , A2j , . . . , Amj such that
T (αj ) = A1j β1 + A2j β2 + · · · + Amj βm = Σ_{i=1}^{m} Aij βi
for j = 1, 2, . . . , n. Therefore
[T (αj )]B ′ = [A1j ; A2j ; . . . ; Amj ]
for j = 1, 2, . . . , n.
59
Define the matrix
A = [ [T (α1 )]B ′ [T (α2 )]B ′ · · · [T (αn )]B ′ ] = [A11 A12 · · · A1n ; A21 A22 · · · A2n ; . . . ; Am1 Am2 · · · Amn ].
60
Our aim is to understand explicitly how the matrix A determines the linear
transformation T .
We claim that
[T (α)]B ′ = A [α]B .
61
Let α = x1 α1 + x2 α2 + · · · + xn αn ∈ V , so that [α]B = [x1 ; x2 ; . . . ; xn ]. Then
T (α) = x1 T (α1 ) + · · · + xn T (αn ) = Σ_{j=1}^{n} xj Σ_{i=1}^{m} Aij βi = Σ_{i=1}^{m} ( Σ_{j=1}^{n} Aij xj ) βi .
So,
[T (α)]B ′ = [ Σ_{j=1}^{n} A1j xj ; Σ_{j=1}^{n} A2j xj ; . . . ; Σ_{j=1}^{n} Amj xj ]
= [ A11 x1 + A12 x2 + · · · + A1n xn ; A21 x1 + A22 x2 + · · · + A2n xn ; . . . ; Am1 x1 + Am2 x2 + · · · + Amn xn ]
= [A11 A12 · · · A1n ; A21 A22 · · · A2n ; . . . ; Am1 Am2 · · · Amn ] [x1 ; x2 ; . . . ; xn ].
62
This implies
[T (α)]B ′ = A [α]B for every α ∈ V .
63
Problem 6. Let T : R2 −→ R3 be a linear transformation defined as
T (x1 , x2 ) = (x2 , x1 − x2 , x1 + x2 ) .
Let
B = {α1 = (1, 0), α2 = (0, 1)}
and
B ′ = {β1 = (1, 1, 1), β2 = (1, 1, 0), β3 = (1, 0, 0)}
be respective ordered bases for R2 and R3 . Find [T ]B,B ′ .
Solution.
T (α1 ) = T (1, 0)
= (0, 1, 1)
= (1, 1, 1) + 0(1, 1, 0) − (1, 0, 0)
= β1 + 0β2 − β3 .
64
So, [T (α1 )]B ′ = [1; 0; −1].
Similarly,
T (α2 ) = T (0, 1)
= (1, −1, 1)
= (1, 1, 1) − 2(1, 1, 0) + 2(1, 0, 0)
= β1 − 2β2 + 2β3 .
So, [T (α2 )]B ′ = [1; −2; 2].
Hence,
A = [T ]B,B ′ = [ [T (α1 )]B ′ [T (α2 )]B ′ ] = [1 1; 0 −2; −1 2].
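The coordinate computations of Problem 6 can be automated as follows. This is a sketch; the helper function coords is introduced only for this check and simply solves for coordinates relative to B ′.

from sympy import Matrix

Bprime = Matrix([[1, 1, 1], [1, 1, 0], [1, 0, 0]])   # columns: beta1, beta2, beta3

def T(x1, x2):
    # the linear transformation of Problem 6
    return Matrix([x2, x1 - x2, x1 + x2])

def coords(v):
    # coordinates of v relative to B' (illustrative helper)
    return Bprime.solve(v)

A = Matrix.hstack(coords(T(1, 0)), coords(T(0, 1)))
print(A)   # Matrix([[1, 1], [0, -2], [-1, 2]])

# since B is the standard basis, [alpha]_B = alpha; check [T(alpha)]_{B'} = A [alpha]_B
assert coords(T(5, 7)) == A * Matrix([5, 7])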
65
Note: Let V be a finite dimensional vector space and B an ordered basis
for V . If T : V −→ V is a linear operator, then A is denoted as [T ]B . So
by Theorem 7 we have
[T (α)]B = [T ]B [α]B .
66
Problem 7. Let T : R2 −→ R2 be a linear transformation defined as
T (x1 , x2 ) = (x1 , 0). Let B = {α1 = (1, 1), α2 = (1, 2)} be an ordered
basis for R2 . Find [T ]B .
Solution.
T (α1 ) = T (1, 1) = (1, 0) = 2α1 − α2 .
So, [T (α1 )]B = [2; −1].
Similarly,
T (α2 ) = T (1, 2) = (1, 0) = 2α1 − α2 .
So, [T (α2 )]B = [2; −1].
Hence, [T ]B = [ [T (α1 )]B [T (α2 )]B ] = [2 2; −1 −1].
67
Problem 8. Let P3 be the vector space of all real polynomials of degree at
most three and P2 be the vector space of all real polynomials of degree at
most two. Let D be the differentiation transformation from P3 into P2 .
Let B = {1, x, x^2 , x^3 } and B ′ = {1, x, x^2 } be two ordered bases for P3
and P2 , respectively. Find [D]B,B ′ .
68
Solution. Write α1 = 1, α2 = x, α3 = x^2 , α4 = x^3 and β1 = 1, β2 = x, β3 = x^2 .
D(α1 ) = D(1) = 0 = 0 · 1 + 0 · x + 0 · x^2 = 0 · β1 + 0 · β2 + 0 · β3
D(α2 ) = D(x) = 1 = 1 · 1 + 0 · x + 0 · x^2 = 1 · β1 + 0 · β2 + 0 · β3
D(α3 ) = D(x^2 ) = 2x = 0 · 1 + 2 · x + 0 · x^2 = 0 · β1 + 2 · β2 + 0 · β3
D(α4 ) = D(x^3 ) = 3x^2 = 0 · 1 + 0 · x + 3 · x^2 = 0 · β1 + 0 · β2 + 3 · β3
Hence,
[D]B,B ′ = [0 1 0 0; 0 0 2 0; 0 0 0 3].
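The same matrix can be produced with sympy's symbolic differentiation. This is a sketch; the helper coords_in_Bprime just reads off coefficients relative to B ′ = {1, x, x^2 } and is not part of the notes.

from sympy import Matrix, symbols, S

x = symbols('x')
B = [S.One, x, x**2, x**3]   # ordered basis of P3

def coords_in_Bprime(p):
    # coefficients of a polynomial of degree <= 2 relative to B' = {1, x, x^2}
    return Matrix([p.coeff(x, k) for k in range(3)])

D = Matrix.hstack(*[coords_in_Bprime(p.diff(x)) for p in B])
print(D)   # Matrix([[0, 1, 0, 0], [0, 0, 2, 0], [0, 0, 0, 3]])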
69
Theorem 8. Let V be a finite dimensional vector space over the field F
and let B = {α1 , α2 , . . . , αn } and B ′ = {β1 , β2 , . . . , βn } be two ordered
bases for V . Suppose T : V −→ V is a linear operator. If
P = [P1 , P2 , . . . , Pn ] is the n × n matrix with columns Pj = [βj ]B , then
[T ]B ′ = P −1 [T ]B P.
Similar matrices.
Let A and B be n × n matrices over the field F . We say B is similar to A
over F if there exists an invertible n × n matrix P over F such that
B = P −1 AP.
70
Problem 9. Let T be a linear operator on R2 defined as
T (x1 , x2 ) = (x1 , 0). Let B = {α1 = (1, 1), α2 = (1, 2)} be an ordered
basis for R2 . Let B ′ = {β1 = (1, 0), β2 = (0, 1)} denote the standard
basis of R2 . Find a matrix P such that [T ]B ′ = P −1 [T ]B P.
Solution. From Problem 7 we know that [T ]B = [2 2; −1 −1].
Similarly, we can show that [T ]B ′ = [1 0; 0 0].
Now we find the matrix P. Note that
β1 = (1, 0) = 2(1, 1) − (1, 2) = 2α1 − α2
and
β2 = (0, 1) = −(1, 1) + (1, 2) = −α1 + α2 .
71
So,
P = [2 −1; −1 1]
and
P −1 = [1 1; 1 2].
Therefore
P −1 [T ]B P = [1 1; 1 2] [2 2; −1 −1] [2 −1; −1 1] = [1 0; 0 0] = [T ]B ′ .
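A short sympy check of the computation above (sketch; the matrices are exactly those of Problems 7 and 9):

from sympy import Matrix

TB      = Matrix([[2, 2], [-1, -1]])   # [T]_B from Problem 7
TBprime = Matrix([[1, 0], [0, 0]])     # [T]_{B'}
P       = Matrix([[2, -1], [-1, 1]])   # columns are [beta1]_B and [beta2]_B

assert P.inv() == Matrix([[1, 1], [1, 2]])
assert P.inv() * TB * P == TBprime
print("[T]_{B'} = P^(-1) [T]_B P verified")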
72
Problem 10. Let P3 be the vector space of all real polynomials of degree
at most three. Let D be the differentiation operator on P3 . Let
B = {1, x, x^2 , x^3 } and B ′ = {1, 2x, −3x^2 , 2x^3 } be two ordered bases for
P3 . Find a matrix P such that [D]B ′ = P −1 [D]B P.
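Problem 10 can be set up along the same lines as Problem 9. The sketch below is one possible solution outline, not worked out in the notes: the columns of P are the coordinates of 1, 2x, −3x^2 , 2x^3 relative to B, and [D]B is the matrix of the differentiation operator relative to B.

from sympy import Matrix

# [D]_B relative to B = {1, x, x^2, x^3}: D(x^k) = k x^(k-1)
DB = Matrix([[0, 1, 0, 0],
             [0, 0, 2, 0],
             [0, 0, 0, 3],
             [0, 0, 0, 0]])

# columns of P are [1]_B, [2x]_B, [-3x^2]_B, [2x^3]_B
P = Matrix([[1, 0,  0, 0],
            [0, 2,  0, 0],
            [0, 0, -3, 0],
            [0, 0,  0, 2]])

print(P.inv() * DB * P)
# Matrix([[0, 2, 0, 0], [0, 0, -3, 0], [0, 0, 0, -2], [0, 0, 0, 0]])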
73
Eigenvalues / Characteristic values / Characteristic roots
Definitions.
Let A be an n × n matrix over the field F . A scalar λ ∈ F is called an eigenvalue
(characteristic value, characteristic root) of A if there exists a non-zero vector X ∈ F n×1 such
that AX = λX . Any such non-zero vector X is called an eigenvector of A corresponding to the
eigenvalue λ.
74
How to find the eigenvalues.
Let A be a given n × n matrix and λ be an eigenvalue of A. Then by
definition there exists a non-zero vector X such that AX = λX . This
implies the system
(λI − A)X = 0
has a non-zero solution. This happens if and only if
det (λI − A) = 0.
Characteristic Polynomial.
Let A be an n × n matrix over the field F . The polynomial
f (x) = det(xI − A) is called the characteristic polynomial of A.
75
How to find the eigenvectors.
Fix one eigenvalue λ of matrix A. Then solve the system (λI − A)X = 0.
The non-zero solutions are the eigenvectors of matrix A corresponding to
the eigenvalue λ.
76
Problem 11. Find the eigenvalues and the corresponding eigenvectors of
the matrix A = [1 2; 0 2].
Solution: Consider
det (λI − A) = 0
=⇒ det [λ − 1 −2; 0 λ − 2] = 0
=⇒ (λ − 1)(λ − 2) = 0
=⇒ λ = 1, 2.
77
The eigenspace corresponding to λ = 1.
(A − I )X = 0
=⇒ [0 2; 0 1] [x; y ] = [0; 0]
=⇒ y = 0.
The solutions of this system are of the form (a, 0), where a ∈ R. Hence
EA (1) = {(a, 0) : a ∈ R} = Span {(1, 0)} .
78
The eigenspace corresponding to λ = 2.
(A − 2I )X = 0
=⇒ [−1 2; 0 0] [x; y ] = [0; 0]
=⇒ x = 2y .
Set y = a, then x = 2a. Thus, the solutions of this system are of the form
(2a, a), where a ∈ R. Hence
EA (2) = {(2a, a) : a ∈ R} = Span {(2, 1)} .
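A quick sympy check of Problem 11 (sketch):

from sympy import Matrix

A = Matrix([[1, 2], [0, 2]])

print(A.eigenvals())    # {1: 1, 2: 1}
print(A.eigenvects())   # eigenvectors proportional to (1, 0) and (2, 1)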
Next, consider the matrix A = [5 −6 −6; −1 4 2; 3 −6 −4]. Its characteristic polynomial is
fA (λ) = det(λI − A) = det [λ − 5 6 6; 1 λ − 4 −2; −3 6 λ + 4] = (λ − 2)^2 (λ − 1)
=⇒ λ = 1, 2, 2.
80
The eigenspace corresponding to λ = 1.
EA (1) = {X : (I − A)X = 0} = {X : (A − I )X = 0} .
A − I = [4 −6 −6; −1 3 2; 3 −6 −5] ∼ [1 0 −1; 0 1 1/3; 0 0 0]
(A − I )X = 0 =⇒ x1 − x3 = 0, x2 + (1/3)x3 = 0
Note that (i) pivot variables = {x1 , x2 } and (ii) free variables = {x3 }. Let
x3 = a. This implies x1 = a and x2 = −a/3. Thus
EA (1) = {(a, −a/3, a) : a ∈ R} = {(a/3)(3, −1, 3) : a ∈ R} = Span {(3, −1, 3)} .
81
The eigenspace corresponding to λ = 2.
EA (2) = {X : (A − 2I )X = 0} .
A − 2I = [3 −6 −6; −1 2 2; 3 −6 −6] ∼ [1 −2 −2; 0 0 0; 0 0 0]
(A − 2I )X = 0 =⇒ x1 − 2x2 − 2x3 = 0.
Note that (i) pivot variables = {x1 } and (ii) free variables = {x2 , x3 }. Let
x2 = a and x3 = b. This implies x1 = 2a + 2b. Therefore
EA (2) = {(2a + 2b, a, b) : a, b ∈ R} = {a(2, 1, 0) + b(2, 0, 1) : a, b ∈ R}.
Thus,
EA (2) = Span {(2, 1, 0), (2, 0, 1)} .
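The 3 × 3 example can be verified in the same way (sketch; the symbol name lam below stands for λ):

from sympy import Matrix, symbols, factor

lam = symbols('lambda')
A = Matrix([[5, -6, -6], [-1, 4, 2], [3, -6, -4]])

print(factor(A.charpoly(lam).as_expr()))   # (lambda - 1)*(lambda - 2)**2
for val, mult, vecs in A.eigenvects():
    print(val, mult, [list(v) for v in vecs])
# lambda = 1 gives a 1-dimensional eigenspace, lambda = 2 a 2-dimensional one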
82
Notes. Let A = [a b; c d]. Then λI − A = [λ − a −b; −c λ − d], so
fA (λ) = det(λI − A) = (λ − a)(λ − d) − bc = λ^2 − (a + d)λ + (ad − bc).
That is,
fA (λ) = λ^2 − trace (A)λ + det (A).
Hence, if λ1 , λ2 are the eigenvalues of A, then
λ1 + λ2 = trace (A) and λ1 λ2 = det (A).
83
Notes. Let A be an n × n matrix over the field F with eigenvalues λ1 , . . . , λn (counted
with multiplicity). Then
trace (A) = λ1 + · · · + λn
and
det(A) = λ1 · · · λn .
84
The Cayley-Hamilton theorem
Theorem. Let A be an n × n matrix over the field F with characteristic polynomial
f (λ) = det(λI − A). Then f (A) = 0; that is, every square matrix satisfies its own
characteristic equation.
85
Applications of Cayley-Hamilton theorem
Write f (λ) = λ^n + c_{n−1} λ^{n−1} + · · · + c1 λ + c0 . By the Cayley-Hamilton theorem,
f (A) = 0
=⇒ A^n + c_{n−1} A^{n−1} + · · · + c1 A + c0 I = 0
=⇒ A^n = −c_{n−1} A^{n−1} − · · · − c1 A − c0 I .
Thus high powers of A can be computed from lower ones.
86
Applications of Cayley-Hamilton theorem
Suppose c0 ̸= 0 (equivalently, det A ̸= 0, so that A is invertible). Then
f (A) = 0
=⇒ A^n + c_{n−1} A^{n−1} + · · · + c1 A + c0 I = 0
=⇒ A(A^{n−1} + c_{n−1} A^{n−2} + · · · + c1 I ) = −c0 I
=⇒ A ( −(1/c0 ) A^{n−1} − (c_{n−1} /c0 ) A^{n−2} − · · · − (c1 /c0 ) I ) = I .
Thus,
A^{−1} = −(1/c0 ) A^{n−1} − (c_{n−1} /c0 ) A^{n−2} − · · · − (c1 /c0 ) I .
87
Example
Let A = [1 2; 0 2]. Then from Problem 11 we know that the characteristic
polynomial of A is f (λ) = λ^2 − 3λ + 2. By the Cayley-Hamilton theorem,
f (A) = 0
=⇒ A^2 − 3A + 2I = 0
=⇒ A(A − 3I ) = −2I
=⇒ A ( −(1/2) A + (3/2) I ) = I .
Thus,
A^{−1} = −(1/2) A + (3/2) I = [1 −1; 0 1/2].
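The computation can be double-checked with sympy (sketch; everything below comes from the example and Problem 11):

from sympy import Matrix, Rational, eye, zeros, symbols

lam = symbols('lambda')
A = Matrix([[1, 2], [0, 2]])

print(A.charpoly(lam).as_expr())           # lambda**2 - 3*lambda + 2
assert A**2 - 3*A + 2*eye(2) == zeros(2)   # Cayley-Hamilton: f(A) = 0

Ainv = -Rational(1, 2)*A + Rational(3, 2)*eye(2)
assert Ainv == A.inv()
print(Ainv)                                # Matrix([[1, -1], [0, 1/2]])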
88