
Vector Spaces and Linear Functional

A Project Report Submitted


In Partial Fulfillment
of the Requirement for the Degree of
Integrated Master’s
in Mathematics

By

Vishiv Raj Singh

Department of Mathematics

School of Sciences

Cluster University of Jammu, J&K

Dr. Charu Sharma (Supervisor)
Dr. Narinder Sharma (Coordinator)
CERTIFICATE

This is to certify that the project work entitled “Vector Spaces and Linear Functional” has been submitted in partial fulfillment of the requirement for the degree of Integrated Master’s in Mathematics by a student of B.Sc. Honours/M.Sc. Integrated, 6th semester, Department of Mathematics, School of Sciences, Cluster University of Jammu. He/She has completed his/her project under my supervision. His/Her work is commendable and I wish him/her all the best in his/her future endeavours.

Dr. Charu Sharma


Department of Mathematics
Faculty of Sciences
Cluster University of Jammu (J&K)
Dated:

ACKNOWLEDGEMENT

Primarily, I would like to thank God for being able to complete this project with success.
Then I would like to express my special thanks of gratitude to my project advisor, Dr. Charu Sharma, Department of Mathematics, Cluster University of Jammu, who gave me the golden opportunity to do this wonderful project on the topic “Vector Spaces and Linear Functional”. I am extremely grateful for having such a competent and kind teacher to provide me with knowledge of, and interest in, this project, which most people fear. This project has strengthened my passion for mathematics and boosted my confidence in the topic “Vector Spaces and Linear Functional”. Of course it was very hard work, but the project was most interesting, and I have learned a lot from it besides having a chance to sharpen my computer skills.
I would also like to thank Dr. Sapna ma’am, whose valuable guidance served as a major contributor towards the completion of the project.
Furthermore, I would like to thank the rest of my teammates and the staff of the Mathematics Department for the collaborative effort they made to complete this project in such a time frame.

Vishiv Raj Singh Class Roll No.: 121-PGI-MTH-2018

University Roll No.: 18532040021

Contents

1 Preliminaries
  1.1 Introduction
  1.2 Matrix Arithmetic
  1.3 The Invertible Matrix Theorem (IMT)
  1.4 Systems in Triangular and Echelon Forms
  1.5 Introduction to Rank

2 Group Theory
  2.1 Group
  2.2 Finite and Infinite Groups
  2.3 Some Special Compositions
  2.4 Rings
  2.5 Field
  2.6 Integral Domain
  2.7 Ideals in a Ring

3 Vector Spaces
  3.1 Vector Space (Linear Space) (Vector Axioms)
  3.2 Subspaces of a Vector Space V(F)
  3.3 Linear Sum of Subspaces
  3.4 Linear Combination of Vectors
  3.5 Basis
  3.6 Finite Dimensional Vector Space OR Finitely Generated Vector Space
  3.7 Ordered Basis and Coordinates
  3.8 Cosets
  3.9 Quotient Space
  3.10 Rank and Nullity of a Linear Transformation
  3.11 Non-singular and Singular Transformations

4 Linear Operator and Linear Functional
  4.1 Practical Operators
  4.2 Linear Operator in 3-D Space
  4.3 Invariance
  4.4 Linear Transformation as Vector Space
  4.5 Linear Functional
  4.6 Dual Space (Conjugate Space of V(F))
  4.7 Dual of Dual Space
  4.8 Annihilator
  4.9 Quadratic Forms

Appendix on Symbol Notations
INTRODUCTION

A vector space (also called a linear space) is a set of objects called vectors, which may be added together and multiplied (‘scaled’) by numbers, called scalars. Scalars are often taken to be real numbers, but there are also vector spaces with scalar multiplication by complex numbers, rational numbers, or generally any field. The operations of vector addition and scalar multiplication must satisfy certain requirements, called vector axioms (listed in Chapter 3). To specify that the scalars are real or complex numbers, the terms real vector space and complex vector space are often used.
Certain sets of Euclidean vectors are common examples of a vector space. They represent
physical quantities such as forces, where any two forces (of the same type) can be added to yield
a third, and the multiplication of a force vector by a real multiplier is another force vector. In
the same way (but in a more geometric sense), vectors representing displacements in the plane or
three-dimensional space also form vector spaces. Vectors in vector spaces do not necessarily have to
be arrow-like objects as they appear in the mentioned examples: vectors are regarded as abstract
mathematical objects with particular properties, which in some cases can be visualized as arrows.
Vector spaces are the subject of linear algebra and are well characterized by their dimension,
which, roughly speaking, specifies the number of independent directions in the space. Infinite-
dimensional vector spaces arise naturally in mathematical analysis as function spaces, whose vectors
are functions. These vector spaces are generally endowed with some additional structure such as
a topology, which allows the consideration of issues of proximity and continuity. Among these
topologies, those that are defined by a norm or inner product are more commonly used (being
equipped with a notion of distance between two vectors). This is particularly the case of Banach
spaces and Hilbert spaces, which are fundamental in mathematical analysis.
Historically, the first ideas leading to vector spaces can be traced back as far as the 17th century’s
analytic geometry, matrices, systems of linear equations and Euclidean vectors. The modern, more
abstract treatment, first formulated by Giuseppe Peano in 1888, encompasses more general objects
than Euclidean space, but much of the theory can be seen as an extension of classical geometric
ideas like lines, planes and their higher-dimensional analogs.
Today, vector spaces are applied throughout mathematics, science and engineering. They are the
appropriate linear-algebraic notion to deal with systems of linear equations. They offer a framework
for Fourier expansion, which is employed in image compression routines, and they provide an en-
vironment that can be used for solution techniques for partial differential equations. Furthermore,
vector spaces furnish an abstract, coordinate-free way of dealing with geometrical and physical
objects such as tensors. This in turn allows the examination of local properties of manifolds by lin-
earization techniques. Vector spaces may be generalized in several ways, leading to more advanced
notions in geometry and abstract algebra.
This project deals mainly with finite-dimensional vector spaces. However, many of the principles are also valid for infinite-dimensional vector spaces.

Chapter 1

1 Preliminaries
The purpose of this chapter is to give a brief account of a number of useful concepts and facts which are required in the text. The reader may be familiar with the concepts of this chapter. Nonetheless, this may serve as a short review, and an introduction to the basic notation.

1.1 Introduction
This chapter investigates matrices and algebraic operations defined on them. These matrices may be viewed as rectangular arrays of elements where each entry depends on two subscripts (as compared with vectors, where each entry depends on only one subscript). Systems of linear equations and their solutions may be efficiently investigated using the language of matrices. Furthermore, certain abstract objects introduced in later chapters, such as “change of basis,” “linear transformations,” and “quadratic forms,” can be represented by these matrices (rectangular arrays). On the other hand, the abstract treatment of linear algebra presented later on will give us new insight into the structure of these matrices. The entries in our matrices will come from some arbitrary, but fixed, field K. The elements of K are called numbers or scalars. Nothing essential is lost if the reader assumes that K is the real field R.

1.2 Matrix Arithmetic


In this note we explore matrix arithmetic for its own sake. For a shortcut notation, instead of writing a matrix A in full as

A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix},

we will write A = (aij)m×n, or just A = (aij) if the size of A is understood.
When given a set of objects in mathematics, there are two basic questions we should ask: When
are two objects equal? and How can we combine two objects to produce a third object? For the
first question we have the following definition.
Definition 1
Two matrices A = (aij) and B = (bij) are equal, denoted by A = B, provided they have the same size and their corresponding entries are equal, that is, their sizes are both m × n and for each 1 ≤ i ≤ m and 1 ≤ j ≤ n, aij = bij.
Example
Let A = \begin{pmatrix} 1 & -9 & 7 \\ 0 & 1 & -5 \end{pmatrix} and B = \begin{pmatrix} 1 & -9 \\ 0 & 1 \\ 7 & -5 \end{pmatrix}. Are A and B equal?
Since the size of A is 2 × 3 and that of B is 3 × 2, A ≠ B. Do note, however, that they have the same entries.

   
Find all values of x and y so that \begin{pmatrix} x^2 & y-x \\ 0 & y^2 \end{pmatrix} = \begin{pmatrix} 1 & x-y \\ x+1 & 1 \end{pmatrix}.
We see that the size of each matrix is 2 × 2. So we set their corresponding entries equal:
x² = 1, y − x = x − y, 0 = x + 1, y² = 1.
We see that x = ±1 and y = ±1. From 0 = x + 1, we get that x must be −1. From y − x = x − y, we get that 2y = 2x and so x = y. Thus y must also be −1.
As for the second question, we have been doing this for quite a while now: adding, subtracting, multiplying, and dividing (when possible) real numbers. So we add and subtract two matrices. Eventually we will multiply matrices, but for now we consider another multiplication. Here are the definitions.
Definition 2
Let A = (aij) and B = (bij) be m × n matrices. We define their sum, denoted by A + B, and their difference, denoted by A − B, to be the respective matrices (aij + bij) and (aij − bij). We define scalar multiplication by: for any r ∈ R, rA is the matrix (raij).
These definitions should appear quite natural: When two matrices have the same size, we just
add or subtract their corresponding entries, and for the scalar multiplication, we just multiply each
entry by the scalar. Just a note: Since multiplication of real numbers is commutative, we have that
rA = Ar for any real number r and matrix A. Here are some examples.
Example
Let A = \begin{pmatrix} 2 & 3 \\ -1 & 2 \end{pmatrix}, B = \begin{pmatrix} -1 & 2 \\ 6 & -2 \end{pmatrix}, and C = \begin{pmatrix} 1 & 2 & 3 \\ -1 & -2 & -3 \end{pmatrix}. Compute each of the following, if possible. If a computation is not possible, explain why it is not.

A + B: Since A and B are both 2 × 2 matrices, we can add them. Here we go:

A + B = \begin{pmatrix} 2 & 3 \\ -1 & 2 \end{pmatrix} + \begin{pmatrix} -1 & 2 \\ 6 & -2 \end{pmatrix} = \begin{pmatrix} 2+(-1) & 3+2 \\ -1+6 & 2+(-2) \end{pmatrix} = \begin{pmatrix} 1 & 5 \\ 5 & 0 \end{pmatrix}.

B − A: Since A and B are both 2 × 2 matrices, we can subtract them. Here we go:

B − A = \begin{pmatrix} -1 & 2 \\ 6 & -2 \end{pmatrix} - \begin{pmatrix} 2 & 3 \\ -1 & 2 \end{pmatrix} = \begin{pmatrix} -1-2 & 2-3 \\ 6-(-1) & -2-2 \end{pmatrix} = \begin{pmatrix} -3 & -1 \\ 7 & -4 \end{pmatrix}.

B + C: No can do. B and C have different sizes: B is 2 × 2 and C is 2 × 3.

4C: We just multiply each entry of C by 4:

4C = 4\begin{pmatrix} 1 & 2 & 3 \\ -1 & -2 & -3 \end{pmatrix} = \begin{pmatrix} 4(1) & 4(2) & 4(3) \\ 4(-1) & 4(-2) & 4(-3) \end{pmatrix} = \begin{pmatrix} 4 & 8 & 12 \\ -4 & -8 & -12 \end{pmatrix}.

2A − 3B: Since scalar multiplication does not affect the size of a matrix, the matrices 2A and 3B have the same size and so we can subtract them. We’ll do the scalar multiplication first and then the subtraction. Here we go:

2A − 3B = \begin{pmatrix} 4 & 6 \\ -2 & 4 \end{pmatrix} - \begin{pmatrix} -3 & 6 \\ 18 & -6 \end{pmatrix} = \begin{pmatrix} 7 & 0 \\ -20 & 10 \end{pmatrix}.

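These entrywise operations are easy to check numerically. Below is a minimal sketch in Python with NumPy (an illustrative choice of library, not part of the original text) reproducing the sums, differences, and scalar multiples above.

    import numpy as np

    A = np.array([[2, 3], [-1, 2]])
    B = np.array([[-1, 2], [6, -2]])
    C = np.array([[1, 2, 3], [-1, -2, -3]])

    print(A + B)        # [[ 1  5] [ 5  0]]
    print(B - A)        # [[-3 -1] [ 7 -4]]
    print(4 * C)        # [[ 4  8 12] [-4 -8 -12]]
    print(2*A - 3*B)    # [[  7   0] [-20  10]]
    # B + C raises ValueError (shapes (2,2) and (2,3) do not match),
    # mirroring the fact that matrices of different sizes cannot be added.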
Matrix arithmetic has some of the same properties as real number arithmetic.
Properties of Matrix Arithmetic
Let A, B, and C be m × n matrices and r, s ∈ R.
1. A + B = B + A Matrix addition is commutative.
2. A + (B + C) = (A + B) + C Matrix addition is associative.
3. r(A + B) = rA + rB Scalar multiplication distributes over matrix addition.
4. (r + s)A = rA + sA Real number addition distributes over scalar multiplication.
5. (rs)A = r(sA) An associativity for scalar multiplication.
6. There is a unique m × n matrix Θ such that for any m × n matrix M , M + Θ = M .
7. For every m × n matrix M there is a unique m × n matrix N such that M + N = Θ.

The above Θ is suggestively called the m × n zero matrix. The above N is suggestively called the negative of M and is so denoted by −M. Let’s prove something. How about that real number addition distributes over scalar multiplication, that there is a unique zero matrix, and that each matrix has a unique negative? You should prove the rest at some point in your life.
Proof
Let A = (aij ) be an m × n matrix and r, s ∈ R. By definition, (r + s)A and rA + sA have the same
size. Now we must show that their corresponding entries are equal. Let 1 ≤ i ≤ m and 1 ≤ j ≤ n.
Then the ij-entry of (r + s)A is (r + s)aij . Using the usual properties of real number arithmetic,
we have (r + s)(aij ) = raij + saij , which is the sum of the ij-entries of rA and sA, that is, it’s the
ij-entry of rA + sA. Hence (r + s)A = rA + sA.
Let M = (mij ) be an m × n matrix and let Θ be the m × n matrix all of whose entries are 0.
By assumption M , Θ, and M + Θ have the same size. Notice that the ij-entry of M + Θ is mij + 0.
This is exactly the ij-entry of M . Hence M + Θ = M . For uniqueness, suppose that Ψ is an m × n
matrix with the property that for any m × n matrix C, C + Ψ = C. Then Θ = Θ + Ψ by the
property of Ψ. But by the property of Θ, Ψ = Ψ + Θ. Since matrix addition is commutative, we
see that Θ = Ψ. Hence Θ is unique.
Let N = (−mij ). Now this makes sense as each mij is a real number and so its negative is also a
real number. Notice that M , N , M + N , and Θ all have the same size. Now the ij-entry of M + N
is mij + (−mij ) = 0, the ij-entry of Θ. Hence a desired N exists. For uniqueness suppose that P
is an m × n matrix with the property that M + P = Θ. Then

N =N +Θ as Θ is the zero matrix


= N + (M + P ) as M + P = Θ
= (N + M ) + P by associativity of matrix addition
=Θ+P as N + M = Θ
=P as Θ is the zero matrix.

Hence this N is unique.


Now we will multiply matrices, but not in the way we might first think. We will never just simply multiply the corresponding entries; what we do is an extension of the dot product of vectors. First we will multiply a row by a column, and the result will be a real number (or scalar).

Definition 3
We take a row vector (a_1 \; a_2 \; \cdots \; a_p) with p entries and a column vector \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_p \end{pmatrix} with p entries and define their product, denoted by

(a_1 \; a_2 \; \cdots \; a_p) \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_p \end{pmatrix},

to be the real number a_1 b_1 + a_2 b_2 + \cdots + a_p b_p. Notice that we’re just taking the sum of the products of the corresponding entries and that we may view a real number as a 1 × 1 matrix.
Let’s do a couple of examples.
Example
Multiply the row and column vectors.

(2 \; 3 \; 4) \begin{pmatrix} 3 \\ 4 \\ 5 \end{pmatrix} = 2(3) + 3(4) + 4(5) = 38.

(-1 \; 2 \; -2 \; 3) \begin{pmatrix} 2 \\ -2 \\ -1 \\ 2 \end{pmatrix} = -1(2) + 2(-2) + (-2)(-1) + 3(2) = 2.
Now we’ll multiply a general matrix by a column. After all, we can view a matrix as several row vectors of the same size put together. To do such a multiplication, the number of entries in each row must be the number of entries in the column, and then we multiply each row of the matrix by the column.

Definition 4

Let A be an m × p matrix and b̄ a p × 1 column vector. We define their product, denoted by Ab̄, to be the m × 1 column vector whose i-th entry, 1 ≤ i ≤ m, is the product of the i-th row of A and b̄.
Here are a couple of examples.

Example
Multiply the matrix by the column.

\begin{pmatrix} 1 & 2 & 3 \\ -2 & 1 & 2 \end{pmatrix} \begin{pmatrix} 1 \\ 2 \\ -3 \end{pmatrix} = \begin{pmatrix} 1(1)+2(2)+3(-3) \\ -2(1)+1(2)+2(-3) \end{pmatrix} = \begin{pmatrix} -4 \\ -6 \end{pmatrix}.

\begin{pmatrix} 2 & -2 \\ 0 & 3 \\ -1 & 4 \end{pmatrix} \begin{pmatrix} 5 \\ -1 \end{pmatrix} = \begin{pmatrix} 2(5)+(-2)(-1) \\ 0(5)+3(-1) \\ -1(5)+4(-1) \end{pmatrix} = \begin{pmatrix} 12 \\ -3 \\ -9 \end{pmatrix}.
We now extend this multiplication to appropriately sized arbitrary matrices. We can view a matrix as several column vectors of the same size put together. To multiply a row by a column, we must be sure that they have the same number of entries. This means that the number of columns of our first matrix must be the number of rows of the second.
Definition 5
Let A be an m × p matrix and B a p × n matrix. We define their product, denoted by AB, to be the m × n matrix whose ij-entry, 1 ≤ i ≤ m and 1 ≤ j ≤ n, is the product of the i-th row of A and the j-th column of B.
Here are a few examples.

Example
Let A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}, B = \begin{pmatrix} 4 & -3 \\ -2 & 1 \end{pmatrix}, C = \begin{pmatrix} 2 & 2 & 9 \\ -1 & 0 & 8 \end{pmatrix}, and D = \begin{pmatrix} 1 & 2 & 3 \\ 5 & 2 & 3 \end{pmatrix}. Compute each of the following, if possible. If a computation is not possible, explain why it is not.

AB: This computation is possible. Since the size of both matrices is 2 × 2, the number of columns of the first is the same as the number of rows of the second. Note that the size of the product is 2 × 2. Here we go (the ij-entry is the i-th row of A times the j-th column of B):

AB = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} \begin{pmatrix} 4 & -3 \\ -2 & 1 \end{pmatrix} = \begin{pmatrix} 1(4)+2(-2) & 1(-3)+2(1) \\ 3(4)+4(-2) & 3(-3)+4(1) \end{pmatrix} = \begin{pmatrix} 0 & -1 \\ 4 & -5 \end{pmatrix}.

BA: Again, the size of both matrices is 2 × 2, so this computation is possible and the size of the product is 2 × 2. Here we go again:

BA = \begin{pmatrix} 4 & -3 \\ -2 & 1 \end{pmatrix} \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} = \begin{pmatrix} 4(1)+(-3)(3) & 4(2)+(-3)(4) \\ -2(1)+1(3) & -2(2)+1(4) \end{pmatrix} = \begin{pmatrix} -5 & -4 \\ 1 & 0 \end{pmatrix}.

We have that AB ≠ BA. Yes, it’s true: matrix multiplication is not commutative.

CD: Not possible. The size of the first matrix is 2 × 3 and the size of the second is also 2 × 3. The number of columns of the first, 3, is not the same as the number of rows of the second, 2.

BC: This computation is possible. The size of the first matrix is 2 × 2 and the size of the second is 2 × 3. The number of columns of the first, 2, is the same as the number of rows of the second, 2. Then the size of this product is 2 × 3. Here we go:

BC = \begin{pmatrix} 4 & -3 \\ -2 & 1 \end{pmatrix} \begin{pmatrix} 2 & 2 & 9 \\ -1 & 0 & 8 \end{pmatrix} = \begin{pmatrix} 4(2)+(-3)(-1) & 4(2)+(-3)(0) & 4(9)+(-3)(8) \\ -2(2)+1(-1) & -2(2)+1(0) & -2(9)+1(8) \end{pmatrix} = \begin{pmatrix} 11 & 8 & 12 \\ -5 & -4 & -10 \end{pmatrix}.

CB: Not possible. The size of the first matrix is 2 × 3 and the size of the second is 2 × 2. The number of columns of the first, 3, is not the same as the number of rows of the second, 2.
We were able to find the product BC, but not the product CB. It’s not that BC ≠ CB; it’s that CB isn’t even possible. This is what makes matrix multiplication so very non-commutative.
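To see this non-commutativity concretely, here is a small Python/NumPy sketch (an illustrative aside, not part of the original text) computing the products above.

    import numpy as np

    A = np.array([[1, 2], [3, 4]])
    B = np.array([[4, -3], [-2, 1]])
    C = np.array([[2, 2, 9], [-1, 0, 8]])

    print(A @ B)                          # [[ 0 -1] [ 4 -5]]
    print(B @ A)                          # [[-5 -4] [ 1  0]]
    print(np.array_equal(A @ B, B @ A))   # False: AB != BA
    print(B @ C)                          # [[ 11   8  12] [ -5  -4 -10]], a 2x3 product
    # C @ B raises ValueError: a 2x3 times 2x2 product is undefined,
    # just as CB is undefined on paper.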
Now we state some more properties of matrix arithmetic.

More Properties of Matrix Arithmetic


Let A, B, and C be matrices of the appropriate sizes and r ∈ R. Then

1. A(BC) = (AB)C Matrix multiplication is associative.


2. (rA)B = r(AB) = A(rB) Scalar multiplication commutes with matrix multiplication.
3. A(B + C) = AB + AC
4. (A + B)C = AC + BC Matrix multiplication distributes over matrix addition.
5. There exists a unique n × n matrix I such that for all n × n matrices M , IM = M I = M .

The matrix I is called the n × n identity matrix. A proof that matrix multiplication is associative would be quite messy at this point. We will just take it to be true. There is an elegant proof, but we need to learn some more linear algebra first, which is in Chapter Three of the text. Let’s prove the first distributive property and existence of the identity matrix. You should prove the rest at some point in your life.
Proof
Let A be an m × p matrix and B and C p × n matrices. This is what we mean by appropriate sizes: B and C must be the same size in order to add them, and the number of columns in A must be the number of rows in B and C in order to multiply them. We have that the two matrices on each side of the equals sign have the same size, namely, m × n. Now we show their corresponding entries are equal. Let 1 ≤ i ≤ m and 1 ≤ j ≤ n. For simplicity, let’s write the i-th row of A as (a_1 \; a_2 \; \cdots \; a_p) and the j-th columns of B and C as \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_p \end{pmatrix} and \begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_p \end{pmatrix}, respectively. Then the j-th column of B + C is \begin{pmatrix} b_1+c_1 \\ b_2+c_2 \\ \vdots \\ b_p+c_p \end{pmatrix}. So the ij-entry of A(B + C) is the product of the i-th row of A and the j-th column of B + C. Multiplying and then using the usual properties of real number arithmetic, we have

(a_1 \; a_2 \; \cdots \; a_p) \begin{pmatrix} b_1+c_1 \\ b_2+c_2 \\ \vdots \\ b_p+c_p \end{pmatrix} = a_1(b_1+c_1) + a_2(b_2+c_2) + \cdots + a_p(b_p+c_p)
= a_1 b_1 + a_1 c_1 + a_2 b_2 + a_2 c_2 + \cdots + a_p b_p + a_p c_p
= (a_1 b_1 + a_2 b_2 + \cdots + a_p b_p) + (a_1 c_1 + a_2 c_2 + \cdots + a_p c_p).

We see that the two expressions in parentheses are the products of the i-th row of A with the j-th columns of B and C, respectively. We know that the sum of these two is the ij-entry of AB + AC. And we’re done.
Now we will prove that last statement, about this mysterious identity matrix. We need a definition first: the main diagonal of a matrix A consists of its entries of the form aii. Let M = (mij) be an n × n matrix. Let I be the n × n matrix whose main diagonal entries are all 1’s and all of whose other entries are 0’s, that is,

I = \begin{pmatrix} 1 & 0 & 0 & \cdots & 0 \\ 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \end{pmatrix}.

Since the sizes of M and I are n × n, the sizes of the products IM and MI are also n × n. Let 1 ≤ i ≤ n and 1 ≤ j ≤ n. Notice that the i-th row of I is the row vector whose i-th entry is 1 and all others 0’s. So when we multiply this i-th row of I by the j-th column of M, the only entry in the column that gets multiplied by the 1 is the i-th, which is mij. Thus IM = M. Now notice that the j-th column of I is the column vector whose j-th entry is a 1 and all others 0’s. So when we multiply the i-th row of M by the j-th column of I, the only entry in the row that gets multiplied by the 1 is the j-th, which is just mij. Thus MI = M. The proof that I is unique is quite similar to that of the zero matrix. And we’re done.

Now we return to linear systems. Here’s a generic one now:

a11 x1 + a12 x2 + · · · + a1n xn = b1
a21 x1 + a22 x2 + · · · + a2n xn = b2
...
am1 x1 + am2 x2 + · · · + amn xn = bm.

We can express this system as a matrix equation Ax̄ = b̄. Just look at each equation: we’re multiplying a’s by x’s and adding them up. This is exactly how we multiply a row by a column. The matrix A we need is the matrix of the coefficients in the system, the x̄ is the column vector of the variables, and the b̄ is the column vector of the constants. More explicitly, we have

A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix}, \quad \bar{x} = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}, \quad \bar{b} = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{pmatrix}.

So our matrix equation Ax̄ = b̄ represents the system of linear equations, which is a much more concise way of writing the system. It also provides a more convenient way of determining whether or not ū = (u1, u2, ..., un)ᵀ is a solution to the system: just check whether or not Aū = b̄. Let’s do an example.

Example
Consider the linear system

2x − y = 0
x + z = 4
x + 2y − 2z = −1.

Write it as a matrix equation Ax̄ = b̄.

Following the above, we let A be the matrix of coefficients, x̄ the column vector of the variables, and b̄ the column vector of the constants. We have

A = \begin{pmatrix} 2 & -1 & 0 \\ 1 & 0 & 1 \\ 1 & 2 & -2 \end{pmatrix}, \quad \bar{x} = \begin{pmatrix} x \\ y \\ z \end{pmatrix}, \quad \bar{b} = \begin{pmatrix} 0 \\ 4 \\ -1 \end{pmatrix}.

Then the equation is

\begin{pmatrix} 2 & -1 & 0 \\ 1 & 0 & 1 \\ 1 & 2 & -2 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 0 \\ 4 \\ -1 \end{pmatrix}.

Determine whether or not (1, 2, 3)ᵀ is a solution to the system.
Let ū = (1, 2, 3)ᵀ. Then multiplying, we have

Aū = \begin{pmatrix} 2(1)-1(2)+0(3) \\ 1(1)+0(2)+1(3) \\ 1(1)+2(2)-2(3) \end{pmatrix} = \begin{pmatrix} 0 \\ 4 \\ -1 \end{pmatrix} = \bar{b}.

So (1, 2, 3)ᵀ is a solution to the system. Determine whether or not (2, 4, 2)ᵀ is a solution.
Let v̄ = (2, 4, 2)ᵀ. Then multiplying, we have

Av̄ = \begin{pmatrix} 2(2)-1(4)+0(2) \\ 1(2)+0(4)+1(2) \\ 1(2)+2(4)-2(2) \end{pmatrix} = \begin{pmatrix} 0 \\ 4 \\ 6 \end{pmatrix} ≠ \bar{b}.

So (2, 4, 2)ᵀ is not a solution to the system.
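Checking a candidate solution is exactly one matrix-vector product. A hedged Python/NumPy sketch of the two checks above (illustrative only):

    import numpy as np

    A = np.array([[2, -1, 0], [1, 0, 1], [1, 2, -2]])
    b = np.array([0, 4, -1])

    u = np.array([1, 2, 3])
    v = np.array([2, 4, 2])

    print(np.array_equal(A @ u, b))   # True:  u solves the system
    print(np.array_equal(A @ v, b))   # False: A @ v = [0, 4, 6] != b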
We know that every nonzero real number x has a multiplicative inverse, namely x⁻¹ = 1/x, as xx⁻¹ = 1. Is there an analogous inverse for matrices? That is, for any nonzero n × n matrix A, is there an n × n matrix B such that AB = BA = I, where I is the n × n identity matrix? Consider A = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}. Let’s try to find its B. Write B = \begin{pmatrix} a & b \\ c & d \end{pmatrix}. Then AB = \begin{pmatrix} a & b \\ 0 & 0 \end{pmatrix}. Oh. The matrix AB has a row of 0’s, so it can never be the identity, which is \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}. So the answer to our question is no. Here, then, is a definition:
Definition 6
An n × n matrix A is invertible provided that there is an n × n matrix B for which AB = BA = I. This B is called an inverse of A.
Notice that II = I, so there is at least one invertible matrix for each possible size.
Example
Let A = \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix} and B = \begin{pmatrix} 1 & -1 \\ -1 & 2 \end{pmatrix}. Multiplying, we get that

AB = \begin{pmatrix} 2(1)+1(-1) & 2(-1)+1(2) \\ 1(1)+1(-1) & 1(-1)+1(2) \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} and

BA = \begin{pmatrix} 1(2)-1(1) & 1(1)-1(1) \\ -1(2)+2(1) & -1(1)+2(1) \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}.

So A is invertible. Note that B is also invertible.


Hence there are lots of invertible matrices. But there are also lots of matrices that are not
invertible. Before determining a method to check whether or not a matrix is invertible, let’s state
and prove some properties of invertible matrices.

Properties of Invertible Matrices

If an n × n matrix is invertible, then its inverse is unique. The inverse of an invertible matrix is invertible; furthermore, if A is an n × n invertible matrix, then the inverse of the inverse of A is A. The product of two invertible n × n matrices is invertible; moreover, if A and B are invertible n × n matrices, then (AB)⁻¹ = B⁻¹A⁻¹.
Since the inverse is unique, we can refer to the inverse of a square matrix A, and we will write its inverse as A⁻¹ and read it as “A inverse”. In this case we have AA⁻¹ = A⁻¹A = I.
Proof
Let A, C, and D be n × n matrices and I the n × n identity matrix. Assume that A is invertible
and C and D are its inverses. So we have that AC = CA = AD = DA = I. Now C = CI =
C(AD) = (CA)D = ID = D. Notice that we used the associativity of matrix multiplication here.
Now we have that AA−1 = A−1 A = I. So A satisfies the definition for A−1 being invertible.
Thus the inverse of the inverse of A is A, that is, (A−1 )−1 = A.
Finally, let B be an n × n invertible matrix. To show that AB is invertible, we will just multiply, taking full advantage of the associativity of matrix multiplication:

(AB)(B −1 A−1 ) = A(BB −1 )A−1 = AIA−1 = AA−1 = I and


(B −1 A−1 )(AB) = B −1 (A−1 A)B = B −1 IB = B −1 B = I.

Hence AB is invertible and its inverse is B −1 A−1 .


The proof of the following corollary is an exercise using mathematical induction.
Corollary 1
If A1, A2, ..., Am are invertible n × n matrices, then A1A2···Am is invertible and

(A_1 A_2 \cdots A_m)^{-1} = A_m^{-1} \cdots A_2^{-1} A_1^{-1}.

How do we know if a matrix is invertible or not? The following theorem tells us. All vectors in Rⁿ will be written as columns.
1.3 The Invertible Matrix Theorem (IMT)
Let A be an n × n matrix, I the n × n identity matrix, and θ̄ the vector in Rⁿ all of whose entries are zero. Then the following are equivalent.
1. A is invertible.
2. The reduced echelon form of A is I.
3. For any b̄ ∈ Rⁿ, the matrix equation Ax̄ = b̄ has exactly one solution.
4. The matrix equation Ax̄ = θ̄ has only x̄ = θ̄ as its solution.
Example
The first part of the proof provides a method for determining whether or not a matrix is invertible and, if so, finding its inverse: given an n × n matrix A, we form the giant augmented matrix (A|I) and reduce it until the A part is in reduced echelon form. If this form is I, then we know that A is invertible and the matrix in the I part is its inverse; if this form is not I, then A is not invertible.

Determine whether or not the matrix A = \begin{pmatrix} 2 & 1 \\ 1 & -1 \end{pmatrix} is invertible and, if so, find its inverse.
As stated above, we form the giant augmented matrix (A|I) and reduce:

\left(\begin{array}{cc|cc} 2 & 1 & 1 & 0 \\ 1 & -1 & 0 & 1 \end{array}\right) \sim \left(\begin{array}{cc|cc} 1 & -1 & 0 & 1 \\ 2 & 1 & 1 & 0 \end{array}\right) \sim \left(\begin{array}{cc|cc} 1 & -1 & 0 & 1 \\ 0 & 3 & 1 & -2 \end{array}\right) \sim \left(\begin{array}{cc|cc} 1 & -1 & 0 & 1 \\ 0 & 1 & 1/3 & -2/3 \end{array}\right) \sim \left(\begin{array}{cc|cc} 1 & 0 & 1/3 & 1/3 \\ 0 & 1 & 1/3 & -2/3 \end{array}\right).

So we see that the reduced echelon form of A is the identity. Thus A is invertible and A⁻¹ = \begin{pmatrix} 1/3 & 1/3 \\ 1/3 & -2/3 \end{pmatrix}. We can rewrite this inverse a bit more nicely by factoring out the 1/3: A⁻¹ = \frac{1}{3}\begin{pmatrix} 1 & 1 \\ 1 & -2 \end{pmatrix}.

Now let A = \begin{pmatrix} 1 & 0 & 2 \\ -1 & 1 & -2 \\ 2 & 2 & 1 \end{pmatrix}. We form the giant augmented matrix (A|I) and reduce:

\left(\begin{array}{ccc|ccc} 1 & 0 & 2 & 1 & 0 & 0 \\ -1 & 1 & -2 & 0 & 1 & 0 \\ 2 & 2 & 1 & 0 & 0 & 1 \end{array}\right) \sim \left(\begin{array}{ccc|ccc} 1 & 0 & 2 & 1 & 0 & 0 \\ 0 & 1 & 0 & 1 & 1 & 0 \\ 0 & 2 & -3 & -2 & 0 & 1 \end{array}\right) \sim \left(\begin{array}{ccc|ccc} 1 & 0 & 2 & 1 & 0 & 0 \\ 0 & 1 & 0 & 1 & 1 & 0 \\ 0 & 0 & -3 & -4 & -2 & 1 \end{array}\right) \sim \left(\begin{array}{ccc|ccc} 1 & 0 & 2 & 1 & 0 & 0 \\ 0 & 1 & 0 & 1 & 1 & 0 \\ 0 & 0 & 1 & 4/3 & 2/3 & -1/3 \end{array}\right) \sim \left(\begin{array}{ccc|ccc} 1 & 0 & 0 & -5/3 & -4/3 & 2/3 \\ 0 & 1 & 0 & 1 & 1 & 0 \\ 0 & 0 & 1 & 4/3 & 2/3 & -1/3 \end{array}\right).

So we see that A is invertible and A⁻¹ = \begin{pmatrix} -5/3 & -4/3 & 2/3 \\ 1 & 1 & 0 \\ 4/3 & 2/3 & -1/3 \end{pmatrix}. Factoring out a 1/3, we get A⁻¹ = \frac{1}{3}\begin{pmatrix} -5 & -4 & 2 \\ 3 & 3 & 0 \\ 4 & 2 & -1 \end{pmatrix}.

Now let B = \begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix}. We form the giant augmented matrix (B|I) and reduce:

\left(\begin{array}{cc|cc} 1 & -1 & 1 & 0 \\ -1 & 1 & 0 & 1 \end{array}\right) \sim \left(\begin{array}{cc|cc} 1 & -1 & 1 & 0 \\ 0 & 0 & 1 & 1 \end{array}\right).

Since the reduced echelon form of B is not I, B is not invertible.

As seen in the proof of the theorem, we can use the inverse of a matrix to solve a linear system with the same number of equations and unknowns. Specifically, we express the system as a matrix equation Ax̄ = b̄, where A is the matrix of the coefficients. If A is invertible, then the solution is x̄ = A⁻¹b̄. Solve the following linear system using the inverse of the matrix of coefficients:

x + 2z = 3
−x + y − 2z = −3
2x + 2y + z = 6.

Notice that the coefficient matrix is, conveniently, the matrix A from part (b) above, whose inverse we’ve already found. The matrix equation for this system is Ax̄ = b̄ where x̄ = (x, y, z)ᵀ and b̄ = (3, −3, 6)ᵀ. Multiplying and using the fact that scalars commute with matrix multiplication, we get that

\bar{x} = A^{-1}\bar{b} = \frac{1}{3}\begin{pmatrix} -5 & -4 & 2 \\ 3 & 3 & 0 \\ 4 & 2 & -1 \end{pmatrix} \begin{pmatrix} 3 \\ -3 \\ 6 \end{pmatrix} = \begin{pmatrix} -5 & -4 & 2 \\ 3 & 3 & 0 \\ 4 & 2 & -1 \end{pmatrix} \begin{pmatrix} 1 \\ -1 \\ 2 \end{pmatrix} = \begin{pmatrix} 3 \\ 0 \\ 0 \end{pmatrix}.

So x = 3, y = 0, and z = 0.
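The same computation is easy to reproduce numerically. A minimal Python/NumPy sketch (illustrative; note that in practice np.linalg.solve is preferred over forming an explicit inverse):

    import numpy as np

    A = np.array([[1.0, 0.0, 2.0],
                  [-1.0, 1.0, -2.0],
                  [2.0, 2.0, 1.0]])
    b = np.array([3.0, -3.0, 6.0])

    A_inv = np.linalg.inv(A)          # matches (1/3)[[-5,-4,2],[3,3,0],[4,2,-1]]
    print(A_inv @ b)                  # [3. 0. 0.], i.e. x = 3, y = 0, z = 0
    print(np.linalg.solve(A, b))      # same answer without forming the inverse

    # A singular matrix such as [[1,-1],[-1,1]] makes np.linalg.inv raise
    # LinAlgError, matching the failed row reduction above.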
The following Theorem tells us that if the product of two square matrices is the identity, then
they are in fact inverses of each other.
Theorem 2
Let A and B be n × n matrices and I the n × n identity matrix. If AB = I, then A and B are
invertible and A−1 = B.
Proof
Let A and B be n × n matrices, I the n × n identity matrix, and θ̄ the vector in Rn all of whose
entries are zero. Assume AB = I. We will use the IMT to prove that B is invertible first.
Consider the matrix equation B x̄ = θ̄ and let ū ∈ Rn be a solution. So we have

B ū = θ̄
A(B ū) = Aθ̄
(AB)ū = θ̄
I ū = θ̄
ū = θ̄.

The only solution to Bx̄ = θ̄ is x̄ = θ̄. Hence by the IMT, B is invertible, so B⁻¹ exists. Then multiplying both sides of AB = I on the right by B⁻¹ gives us that A = B⁻¹. Since B⁻¹ is invertible, A is too, and A⁻¹ = (B⁻¹)⁻¹ = B.
We finish this note off with what’s called the transpose of a matrix. Here’s the definition.
Definition 7
Let A = (aij) be an m × n matrix. The transpose of A, denoted by Aᵀ, is the matrix whose i-th column is the i-th row of A, or equivalently, whose j-th row is the j-th column of A. Notice that Aᵀ is an n × m matrix. We will write Aᵀ = (aᵀji) where aᵀji = aij.
Notice that the ji-entry of Aᵀ is the ij-entry of A. This tells us that the main diagonals of a matrix and its transpose are the same and that the entries of Aᵀ are the entries of A reflected about the main diagonal. Here are a couple of examples.

Example

Find the transpose of each of the given matrices.

A = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{pmatrix}.
The first row of A is (1 2 3) and the second row is (4 5 6). So these become the columns of Aᵀ, that is, Aᵀ = \begin{pmatrix} 1 & 4 \\ 2 & 5 \\ 3 & 6 \end{pmatrix}. Alternatively, we see that the columns of A are \begin{pmatrix} 1 \\ 4 \end{pmatrix}, \begin{pmatrix} 2 \\ 5 \end{pmatrix}, and \begin{pmatrix} 3 \\ 6 \end{pmatrix}, so these become the rows of Aᵀ, as we can see above.

B = \begin{pmatrix} -1 & 0 & 6 \\ -4 & 1 & 9 \\ 2 & 3 & 0 \end{pmatrix}.
We make the rows of B the columns of Bᵀ. Doing so, we get Bᵀ = \begin{pmatrix} -1 & -4 & 2 \\ 0 & 1 & 3 \\ 6 & 9 & 0 \end{pmatrix}. Notice how the entries of Bᵀ are those of B reflected about the main diagonal.

Properties of the Transpose

Let A and B be appropriately sized matrices and r ∈ R. Then
1. (Aᵀ)ᵀ = A.
2. (A + B)ᵀ = Aᵀ + Bᵀ.
3. (rA)ᵀ = rAᵀ.
4. (AB)ᵀ = BᵀAᵀ.

Proof
Let A be an m × p matrix and B a p × n matrix. Then AB is an m × n matrix, so (AB)ᵀ is an n × m matrix. Then Bᵀ is an n × p matrix and Aᵀ is a p × m matrix. Thus multiplying BᵀAᵀ makes sense and its size is also n × m. But what about their corresponding entries? Let 1 ≤ i ≤ m and 1 ≤ j ≤ n. The ji-entry of (AB)ᵀ is the ij-entry of AB, which is the i-th row of A times the j-th column of B. For simplicity, let (a_1 \; a_2 \; \cdots \; a_p) be the i-th row of A and \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_p \end{pmatrix} the j-th column of B. Then the ij-entry of AB is a_1 b_1 + a_2 b_2 + \cdots + a_p b_p, but this is also equal to b_1 a_1 + b_2 a_2 + \cdots + b_p a_p, which is the product of (b_1 \; b_2 \; \cdots \; b_p) and \begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_p \end{pmatrix}. This is exactly the product of the j-th row of Bᵀ and the i-th column of Aᵀ, which is the ji-entry of BᵀAᵀ. Thus the ji-entry of (AB)ᵀ is the ji-entry of BᵀAᵀ. Therefore (AB)ᵀ = BᵀAᵀ.
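A quick numerical illustration of the reversal rule (AB)ᵀ = BᵀAᵀ, sketched in Python/NumPy with arbitrarily chosen matrices (illustrative only):

    import numpy as np

    A = np.array([[1, 2, 3], [4, 5, 6]])     # 2x3
    B = np.array([[1, 0], [2, 1], [0, 3]])   # 3x2

    lhs = (A @ B).T          # transpose of the 2x2 product
    rhs = B.T @ A.T          # note the reversed order of the factors
    print(np.array_equal(lhs, rhs))   # True: (AB)^T = B^T A^T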

1.4 Systems in Triangular and Echelon Forms

Here we develop the main method for solving systems of linear equations. We consider two simple types of systems of linear equations: systems in triangular form and the more general systems in echelon form.
Triangular Form
Consider the following system of linear equations, which is in triangular form:

2x1 − 3x2 + 5x3 − 2x4 = 9
5x2 − x3 + 3x4 = 1
7x3 − x4 = 3
2x4 = 8

That is, the first unknown x1 is the leading unknown in the first equation, the second unknown x2 is the leading unknown in the second equation, and so on. Thus, in particular, the system is square and each leading unknown is directly to the right of the leading unknown in the preceding equation. Such a triangular system always has a unique solution, which may be obtained by back-substitution. That is,
(1) First solve the last equation for the last unknown to get x4 = 4.
(2) Then substitute this value x4 = 4 in the next-to-last equation, and solve for the next-to-last unknown x3 as follows: 7x3 − 4 = 3 or 7x3 = 7 or x3 = 1.
(3) Then substitute x3 = 1 and x4 = 4 in the second equation, and solve for the second unknown x2 as follows: 5x2 − 1 + 12 = 1 or 5x2 + 11 = 1 or 5x2 = −10 or x2 = −2.
(4) Finally, substitute x2 = −2, x3 = 1, x4 = 4 in the first equation, and solve for the first unknown x1 as follows: 2x1 + 6 + 5 − 8 = 9 or 2x1 + 3 = 9 or 2x1 = 6 or x1 = 3.
Thus, x1 = 3, x2 = −2, x3 = 1, x4 = 4, or, equivalently, the vector ū = (3, −2, 1, 4) is the unique solution of the system.
Remark: There is an alternative form for back-substitution (which will be used when solving a system using the matrix format). Namely, after first finding the value of the last unknown, we substitute this value for the last unknown in all the preceding equations before solving for the next-to-last unknown. This yields a triangular system with one less equation and one less unknown. For example, in the above triangular system, we substitute x4 = 4 in all the preceding equations to obtain the triangular system

2x1 − 3x2 + 5x3 = 17
5x2 − x3 = −11
7x3 = 7

We then repeat the process using the new last equation. And so on.

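Back-substitution is mechanical enough to state as a short program. Here is a hedged Python/NumPy sketch (assuming an upper-triangular coefficient matrix with nonzero diagonal; the helper name back_substitute is our own) that reproduces the unique solution (3, −2, 1, 4) above.

    import numpy as np

    def back_substitute(U, b):
        """Solve Ux = b for upper-triangular U with nonzero diagonal."""
        n = len(b)
        x = np.zeros(n)
        for i in range(n - 1, -1, -1):
            # subtract the contribution of already-known unknowns,
            # then divide by the leading (pivot) coefficient
            x[i] = (b[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
        return x

    U = np.array([[2.0, -3.0, 5.0, -2.0],
                  [0.0, 5.0, -1.0, 3.0],
                  [0.0, 0.0, 7.0, -1.0],
                  [0.0, 0.0, 0.0, 2.0]])
    b = np.array([9.0, 1.0, 3.0, 8.0])
    print(back_substitute(U, b))   # [ 3. -2.  1.  4.]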
Echelon Form, Pivot and Free Variables
The following system of linear equations is said to be in echelon form:

2x1 + 6x2 − x3 + 4x4 − 2x5 = 15
x3 + 2x4 + 2x5 = 5
3x4 − 9x5 = 6

That is, no equation is degenerate and the leading unknown in each equation other than the first is to the right of the leading unknown in the preceding equation. The leading unknowns in the system, x1, x3, x4, are called pivot variables, and the other unknowns, x2 and x5, are called free variables.
Generally speaking, an echelon system or a system in echelon form has the following form:

a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + a_{14}x_4 + \cdots + a_{1n}x_n = b_1
a_{2j_2}x_{j_2} + a_{2,j_2+1}x_{j_2+1} + \cdots + a_{2n}x_n = b_2
\cdots
a_{rj_r}x_{j_r} + \cdots + a_{rn}x_n = b_r

where 1 < j2 < ··· < jr and a11, a2j2, ..., arjr are not zero. The pivot variables are x1, xj2, ..., xjr.
Note that r ≤ n. The solution set of any echelon system is described in the following theorem.
Theorem 3: Consider a system of linear equations in echelon form, say with r equations in n unknowns. There are two cases:
(i) r = n. That is, there are as many equations as unknowns (triangular form). Then the system has a unique solution.
(ii) r < n. That is, there are more unknowns than equations. Then we can arbitrarily assign values to the n − r free variables and solve uniquely for the r pivot variables, obtaining a solution of the system.
Suppose an echelon system contains more unknowns than equations. Assuming the field K is infinite, the system has an infinite number of solutions, because each of the n − r free variables may be assigned any scalar.
The general solution of a system with free variables may be described in either of two equivalent ways, which we illustrate using the above echelon system, where there are r = 3 equations and n = 5 unknowns. One description is called the “Parametric Form” of the solution, and the other description is called the “Free-Variable Form.”

Parametric Form
Assign arbitrary values, called parameters, to the free variables x2 and x5, say x2 = a and x5 = b, and then use back-substitution to obtain values for the pivot variables x1, x3, x4 in terms of the parameters a and b. Specifically,
(1) Substitute x5 = b in the last equation, and solve for x4: 3x4 − 9b = 6 or 3x4 = 6 + 9b or x4 = 2 + 3b.
(2) Substitute x4 = 2 + 3b and x5 = b into the second equation, and solve for x3: x3 + 2(2 + 3b) + 2b = 5 or x3 + 4 + 8b = 5 or x3 = 1 − 8b.
(3) Substitute x2 = a, x3 = 1 − 8b, x4 = 2 + 3b, x5 = b into the first equation, and solve for x1: 2x1 + 6a − (1 − 8b) + 4(2 + 3b) − 2b = 15 or x1 = 4 − 3a − 9b.
Accordingly, the general solution in parametric form is
x1 = 4 − 3a − 9b, x2 = a, x3 = 1 − 8b, x4 = 2 + 3b, x5 = b,
or, equivalently, v = (4 − 3a − 9b, a, 1 − 8b, 2 + 3b, b) where a and b are arbitrary numbers.

Free-Variable Form
Use back-substitution to solve for the pivot variables x1, x3, x4 directly in terms of the free variables x2 and x5. That is, the last equation gives x4 = 2 + 3x5. Substitution in the second equation yields x3 = 1 − 8x5, and then substitution in the first equation yields x1 = 4 − 3x2 − 9x5. Accordingly,
x1 = 4 − 3x2 − 9x5, x2 = free variable, x3 = 1 − 8x5, x4 = 2 + 3x5, x5 = free variable,
or, equivalently, v = (4 − 3x2 − 9x5, x2, 1 − 8x5, 2 + 3x5, x5)
is the free-variable form for the general solution of the system.
We emphasize that there is no difference between the above two forms of the general solution, and the use of one or the other to represent the general solution is simply a matter of taste.
Remark: A particular solution of the above system can be found by assigning any values to the free variables and then solving for the pivot variables by back-substitution. For example, setting x2 = 1 and x5 = 1, we obtain
x4 = 2 + 3 = 5, x3 = 1 − 8 = −7, x1 = 4 − 3 − 9 = −8.
Thus, u = (−8, 1, −7, 5, 1) is the particular solution corresponding to x2 = 1 and x5 = 1.
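The parametric solution can be verified by substituting it back into the system for several choices of the parameters. A small Python/NumPy sketch (illustrative; the helper name solution is our own):

    import numpy as np

    def solution(a, b):
        """General solution of the echelon system in parametric form."""
        return np.array([4 - 3*a - 9*b, a, 1 - 8*b, 2 + 3*b, b])

    A = np.array([[2.0, 6.0, -1.0, 4.0, -2.0],
                  [0.0, 0.0, 1.0, 2.0, 2.0],
                  [0.0, 0.0, 0.0, 3.0, -9.0]])
    rhs = np.array([15.0, 5.0, 6.0])

    for a, b in [(0, 0), (1, 1), (-2, 5)]:
        # every choice of parameters must satisfy all three equations
        print(np.allclose(A @ solution(a, b), rhs))   # True each time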

Echelon Matrices
A matrix A is called an echelon matrix, or is said to be in echelon form, if the following two
conditions hold (where a leading nonzero element of a row of A is the first nonzero element in the
row):
(1) All zero rows, if any, are at the bottom of the matrix.
(2) Each leading nonzero entry in a row is to the right of the leading nonzero entry in the preceding
row.
That is, A = [aij] is an echelon matrix if there exist nonzero entries

a_{1j_1}, a_{2j_2}, \ldots, a_{rj_r}, \quad j_1 < j_2 < \cdots < j_r,

with the property that a_{ij} = 0 (i) for i ≤ r, j < j_i, and (ii) for i > r.
The entries a_{1j_1}, a_{2j_2}, \ldots, a_{rj_r}, which are the leading nonzero elements in their respective rows, are called the pivots of the echelon matrix.
Example: The following is an echelon matrix, whose pivots are the entries 2, 3, 5, 8 (circled in the original):

A = \begin{pmatrix} 0 & 2 & 3 & 4 & 5 & 9 & 0 & 7 \\ 0 & 0 & 0 & 3 & 4 & 1 & 2 & 5 \\ 0 & 0 & 0 & 0 & 0 & 5 & 7 & 2 \\ 0 & 0 & 0 & 0 & 0 & 0 & 8 & 6 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix}

Observe that the pivots are in columns C2, C4, C6, C7, and each is to the right of the one above. Using the above notation, the pivots are
a_{1j_1} = 2, a_{2j_2} = 3, a_{3j_3} = 5, a_{4j_4} = 8,
where j1 = 2, j2 = 4, j3 = 6, j4 = 7. Here r = 4.

1.5 Introduction to Rank


We begin by recalling the following results:
(i) A submatrix of a given matrix A is either the matrix itself or a matrix obtained from A after deleting some rows or columns or both.
(ii) The determinant of an r-rowed square submatrix of a matrix A is called a minor of order r of A.
(iii) If every r-rowed minor of A is zero, then every minor of order higher than r is also zero.

Rank of a Matrix

A number r is said to be the rank of a non-zero matrix A if

(i) there exists at least one minor of order r of A which does not vanish, and
(ii) every minor of order (r + 1), if any, vanishes.
The rank of a matrix A is denoted by ρ(A).
∴ we have ρ(A) = r.

Another definition of Rank of a Matrix

The rank of a non-zero matrix is the largest order of any non-vanishing minor of the matrix.

Example: Find the rank of the matrix \begin{pmatrix} 0 & -1 & 2 \\ 4 & 3 & 1 \\ 4 & 2 & 3 \end{pmatrix}.

 
Sol. Let A = \begin{pmatrix} 0 & -1 & 2 \\ 4 & 3 & 1 \\ 4 & 2 & 3 \end{pmatrix}.
Interchanging R1 and R3:

∼ \begin{pmatrix} 4 & 2 & 3 \\ 4 & 3 & 1 \\ 0 & -1 & 2 \end{pmatrix}

R2 → R2 − R1:

∼ \begin{pmatrix} 4 & 2 & 3 \\ 0 & 1 & -2 \\ 0 & -1 & 2 \end{pmatrix}

R3 → R3 + R2:

∼ \begin{pmatrix} 4 & 2 & 3 \\ 0 & 1 & -2 \\ 0 & 0 & 0 \end{pmatrix}

C2 → C2 − (1/2)C1 and C3 → C3 − (3/4)C1:

∼ \begin{pmatrix} 4 & 0 & 0 \\ 0 & 1 & -2 \\ 0 & 0 & 0 \end{pmatrix}

C3 → C3 + 2C2:

∼ \begin{pmatrix} 4 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix}

The rank of \begin{pmatrix} 4 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix} is 2, as the minor \begin{vmatrix} 4 & 0 \\ 0 & 1 \end{vmatrix} = 4 ≠ 0 of order 2 does not vanish, while the only minor of order 3 vanishes. Hence ρ(A) = 2.
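The answer agrees with a direct numerical computation; a one-line check in Python/NumPy (illustrative only):

    import numpy as np

    A = np.array([[0, -1, 2], [4, 3, 1], [4, 2, 3]])
    print(np.linalg.matrix_rank(A))   # 2, matching rho(A) = 2 above
    print(np.linalg.det(A))           # ~0: the single 3x3 minor vanishes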

Chapter 2

2 Group Theory
The axioms for a group are short and natural. . . . Yet somehow hidden behind these axioms is
the monster simple group, a huge and extraordinary mathematical object, which appears to rely on
numerous bizarre coincidences to exist. The axioms for groups give no obvious hint that anything
like this exists.
Richard Borcherds, in Mathematicians: An Outer View of the Inner World.
The one thing I would really like to know before I die is why the monster group exists.
John Conway, in a 2014 interview on Numberphile.

Group Theory is the study of symmetries.

Binary Composition: A composition ‘∗’ on a set A is said to be a binary composition if ∀ a, b ∈ A there exists a unique c ∈ A such that a ∗ b = c.
In mapping form, f : A × A → A is a binary composition.

Example:
1. The set of integers forms a binary composition under addition.
2. The arithmetic operations +, −, × are binary operations on suitable sets of numbers (such as R).
3. Matrix addition and multiplication are binary operations on the set of all n × n matrices.

Note: The number of binary compositions on a set A having n elements is the number of functions f : A × A → A, i.e. n^{n²} or |A|^{|A|²}.

If ‘∗’ is replaced by +, −, · then the composition is called addition, subtraction, or multiplication respectively.

Groupoid:
A non-empty set G with a binary operation ‘∗’ is said to be a groupoid if it satisfies the closure property:
a, b ∈ G ⇒ a ∗ b ∈ G

Example: (N, +), (N, ·), (W, +), (W, ·), etc.

Theorem: (W, −) is not a groupoid.

Proof: Let 1, 2 ∈ W.
⇒ 1 − 2 = −1 ∉ W
Thus (W, −) is not a groupoid.

Algebraic Structure
A non-empty set A is called an algebraic structure w.r.t. a binary operation ‘∗’ if
(a ∗ b) ∈ A ∀ a, b ∈ A,

i.e. ‘∗’ satisfies the closure property on A.

Example: (N, +), (Z, −), (R, +)

Semi-Group

A non-empty set S is said to be a semi-group under an operation ‘∗’ if

1) a ∈ S, b ∈ S ⇒ a ∗ b = c ∈ S, and c is unique;
2) a ∗ (b ∗ c) = (a ∗ b) ∗ c ∀ a, b, c ∈ S, i.e. ∗ is associative on S.

Example: (N, ·), (Z, +), (R, +) are all semi-groups.

Example: Consider an algebraic system (A, ∗), where A = {1, 3, 5, 7, 9, ...} is the set of positive odd integers and ∗ is the binary operation of multiplication. Determine whether (A, ∗) is a semi-group.

Solution:
Closure property: The operation ∗ is a closed operation, because the product of two positive odd integers is a positive odd integer.

Associative property: The operation ∗ is an associative operation on the set A, since for every a, b, c ∈ A we have
(a ∗ b) ∗ c = a ∗ (b ∗ c).

Hence, the algebraic system (A, ∗) is a semi-group.

Monoid:
A non-empty set G is said to be a monoid under a binary operation ‘∗’ if it satisfies the following properties:

1. Closure property:
if a, b ∈ G then a ∗ b ∈ G

2. Associative property:
a ∗ (b ∗ c) = (a ∗ b) ∗ c ∀ a, b, c ∈ G

3. Existence of identity element:

∃ an element e ∈ G such that a ∗ e = a = e ∗ a ∀ a ∈ G

‘e’ is then called the identity of G.

Example: The semigroup (N, ·) is a monoid because 1 is the identity for multiplication, but the semigroup (N, +) is not, because 0, the identity for addition, is not in N.

2.1 Group
A non-empty set G is said to be a group under a composition ‘∗’ if it satisfies the following postulates:

P1: Closure property

i.e. a ∈ G, b ∈ G ⇒ a ∗ b ∈ G

P2: Associative property

i.e. (a ∗ b) ∗ c = a ∗ (b ∗ c) ∀ a, b, c ∈ G

P3: Existence of identity

i.e. ∃ e ∈ G such that ∀ a ∈ G
e ∗ a = a = a ∗ e,
where e is the identity element of G

P4: Existence of inverse

i.e. ∀ a ∈ G ∃ b ∈ G such that
a ∗ b = e = b ∗ a,
where e is the identity and b is called the inverse of a.

Remark: If, in addition to the above 4 postulates, G also satisfies commutativity (a ∗ b = b ∗ a ∀ a, b ∈ G), it is called an abelian group, as defined below.

Exercise: Show that the set of integers is a group w.r.t. the addition composition.
Proof: P1: Closure property:
a ∈ Z, b ∈ Z
⇒ a + b ∈ Z
Hence Z is closed under addition.

P2: Associative property:

a + (b + c) = (a + b) + c ∀ a, b, c ∈ Z

P3: Existence of identity:

∃ 0 ∈ Z such that
a + 0 = a = 0 + a ∀ a ∈ Z
Hence 0 is the additive identity of Z.

P4: Existence of inverse:
∀ a ∈ Z, ∃ −a ∈ Z such that
a + (−a) = 0 = (−a) + a
Hence −a is the inverse of a ∈ Z under addition.

Hence (Z, +) is a group.

Example: Is N, the set of natural numbers, a group or not under the addition composition?

Solution: No, because there does not exist e ∈ N such that
a + e = a = e + a ∀ a ∈ N,

although N is closed under the addition composition and addition is associative.

Hence (N, +) is not a group but is a semi-group.

Abelian Group
A group (G, ∗) is said to be an abelian group if
a ∗ b = b ∗ a ∀ a, b ∈ G

Example: Let G = {±1, ±i, ±j, ±k}. Define a product on G by the usual multiplication together with

i² = j² = k² = −1, ij = −ji = k, jk = −kj = i, ki = −ik = j.

Then G forms a group. G is not abelian, as ij ≠ ji.
This is called the Quaternion group.

Example: (Q, +) is an abelian group.

We have to prove five properties, as follows:

1. Closure property: Let a, b ∈ Q.

We know that the sum of two rational numbers is rational, which means a + b is also rational.
⇒ a + b ∈ Q

2. Associative property: Since addition of rationals is always associative, it follows that

a + (b + c) = (a + b) + c ∀ a, b, c ∈ Q.

3. Identity element: Clearly 0 ∈ Q, and when 0 is added to any rational number a, a remains the same, i.e.
a + 0 = a = 0 + a ∀ a ∈ Q.

4. Inverse element: For each a ∈ Q ∃ (−a) ∈ Q s.t. a + (−a) = 0 = (−a) + a ∀ a ∈ Q.

⇒ (−a) is the additive inverse of a in Q.

5. Abelian: Since the sum of rational numbers is always commutative, we have

a + b = b + a ∀ a, b ∈ Q.

Hence (Q, +) is an abelian group.

2.2 Finite and Infinite groups.


Finite and Infinite Group: A group (G, ∗) is said to be finite if it has a finite number of elements; otherwise it is an infinite group.

Example: The symmetric group Sn, the cyclic group Zn, and the alternating group An are examples of finite groups.
(Z, +), (Q, +), (Q − {0}, ·) are some examples of infinite groups.

Order of an element of a Group vs. Order of a Group: Students generally think that the order of an element of a group and the order of a group are the same notion, but they are not.

Order of a Group: The order of a group G is the number of elements of the set G and is denoted by O(G).
Order of an element of a Group: If G is a group, then we say that an element a of the group G has order n if n is the least positive integer such that a ∗ a ∗ ... ∗ a (n times) = e, where e is the identity and ∗ is the group composition; this is denoted by o(a) = n.

Note: If the group composition ∗ is addition +, then we write a + a + ... + a = na = e,
and if ∗ is multiplication ×, then a × a × ... × a = aⁿ = e.

Note: If there does not exist any positive integer n such that aⁿ = e,
then we say that the element has infinite order.
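As a concrete illustration of the order of an element, here is a short Python sketch (illustrative only; the helper name order_in_Zn is our own) computing orders in the additive group Zn:

    def order_in_Zn(a, n):
        """Order of a in (Z_n, + mod n): least k >= 1 with k*a = 0 (mod n)."""
        k, total = 1, a % n
        while total != 0:
            total = (total + a) % n
            k += 1
        return k

    print(order_in_Zn(2, 6))   # 3, since 2+2+2 = 6 = 0 (mod 6)
    print(order_in_Zn(5, 6))   # 6, since 5 generates Z_6
    print(order_in_Zn(0, 6))   # 1, the identity has order 1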

2.3 Some special compositions

1. Addition modulo m: It is denoted by a ⊕ₘ b and is defined as the remainder left when the sum a + b is divided by m, i.e. a ⊕ₘ b = r
such that 0 ≤ r < m.

Example: If m = 3, a = 4 and b = 5, then a ⊕ₘ b = 4 ⊕₃ 5 = 0,

since 4 + 5 = 9, which when divided by 3 leaves remainder 0.

2. Multiplication modulo m: It is denoted by a ⊗ₘ b and is defined as the remainder left when the product ab is divided by m, i.e. a ⊗ₘ b = r
such that 0 ≤ r < m.

Example: If m = 3, a = 4 and b = 5,
then 4 ⊗₃ 5 = 2,
since 4 · 5 = 20, which when divided by 3 leaves remainder 2.
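In Python these compositions are just the % (remainder) operator; a minimal sketch (the helper names are our own):

    def add_mod(a, b, m):
        # addition modulo m: remainder of a+b on division by m
        return (a + b) % m

    def mul_mod(a, b, m):
        # multiplication modulo m: remainder of a*b on division by m
        return (a * b) % m

    print(add_mod(4, 5, 3))   # 0, since 9 = 3*3 + 0
    print(mul_mod(4, 5, 3))   # 2, since 20 = 3*6 + 2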

2.4 Rings
A non-empty set R with two operations ‘+’ and ‘·’ is said to be a ring.

OR
(R, +, ·) is said to be a ring if it satisfies the following properties:
(a) (R, +) is an abelian group.
(b) (i) (R, ·) is a semi-group.
(ii) ∀ a, b, c ∈ R,
a(b + c) = ab + ac
& (b + c)a = ba + ca

In addition, if
• (R, ·) has unity ‘1’, then (R, +, ·) is said to be a ring with unity;

• (R, ·) is commutative, i.e. a·b = b·a, then (R, +, ·) is said to be a commutative ring.

Subring: A non-empty subset H of a ring R is said to be its subring if it is itself a ring under the same compositions as in R.

2.5 Field
A ring (R, +, ·) is said to be a field if
(i) R has a unity element,
(ii) R is commutative, and
(iii) every non-zero element of R has a multiplicative inverse in R.
Subfield: A non-empty subset S of a field F is said to be its subfield if it is itself a field under the same compositions as in F.

Example: R (the set of real numbers) is a field under the operations ‘+’ and ‘·’.

Now Q (the set of rationals) ⊆ R, and Q is a field under the operations ‘+’ and ‘·’. Hence Q is a subfield of R. Similarly, R is a subfield of (C, +, ·), and Q is a subfield of (C, +, ·).

Division ring or Skew field

A ring (R, +, ·) is said to be a skew field if
(a) R has a unity element, and
(b) every non-zero element of R has a multiplicative inverse in R.

2.6 Integral Domain

A ring (R, +, ·) is said to be an integral domain if
(i) R has a unity element,
(ii) R is commutative, and
(iii) R is without zero-divisors, i.e. if a·b = 0 for a, b ∈ R, then either a = 0 or b = 0.

Note: Every finite integral domain is a field.

Example: Show that the ring Zp of integers modulo p is a field iff p is prime.

Proof: Let p be a prime and a, b ∈ Zp

s.t. a·b ≡ 0 (mod p) - (1)
Since a, b ∈ Zp = {0, 1, 2, ..., p−1},
⇒ 0 ≤ a, b ≤ p−1 - (2)
⇒ p | (a·b − 0) - [from (1)]
⇒ p | a·b
Now, p being a prime ⇒ p | a or p | b.
Using (2), we have
either a = 0 or b = 0
(as 0 is the only element of [0, p−1] divisible by p).
∴ a·b ≡ 0 (mod p) ⇒ a = 0 or b = 0.
Thus Zp is without zero-divisors.
Also, since Zp is commutative and has unity 1, Zp is an integral domain; being a finite integral domain, it is therefore a field.

Conversely, let Zp be an integral domain.

We claim: p is prime.
If possible, let p not be a prime number.
⇒ p = a·b, where 1 < a, b < p - (3)
i.e. a, b are factors of p other than 1 and p.
Also, p ≡ 0 (mod p)
⇒ a·b ≡ 0 (mod p) - [from (3)]
⇒ a = 0 or b = 0 (∵ Zp is an integral domain),
which contradicts 1 < a, b < p. Hence our supposition that p is not prime is wrong.
∴ p is prime.

Hence the result.
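A brute-force check of this fact for small moduli, sketched in Python (illustrative only; the helper name is our own): every non-zero element of Zn has a multiplicative inverse exactly when n is prime.

    def is_field_Zn(n):
        """True iff every non-zero element of Z_n has a multiplicative inverse."""
        return all(any((a * b) % n == 1 for b in range(1, n))
                   for a in range(1, n))

    for n in [2, 3, 4, 5, 6, 7, 8, 11, 12]:
        print(n, is_field_Zn(n))
    # True exactly for n = 2, 3, 5, 7, 11 -- the primes in the list.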

2.7 Ideals in a Ring


Left ideal: A non-empty subset S of a ring R is called a left ideal of R if
(i) (S, +) is a subgroup of (R, +), i.e. a ∈ S and b ∈ S ⇒ a − b ∈ S;
(ii) a ∈ S and r ∈ R ⇒ ra ∈ S.

Right ideal: A non-empty subset S of a ring R is called a right ideal of R if

(i) (S, +) is a subgroup of (R, +), i.e. a ∈ S and b ∈ S ⇒ a − b ∈ S;
(ii) a ∈ S and r ∈ R ⇒ ar ∈ S.

Ideal: A non-empty subset S of a ring R is called an ideal of R if

(i) (S, +) is a subgroup of (R, +), i.e. a ∈ S and b ∈ S ⇒ a − b ∈ S;
(ii) a ∈ S and r ∈ R ⇒ ra ∈ S and ar ∈ S.

REMARK: In a commutative ring, every left ideal or right ideal is a two-sided ideal.

Theorem: Every ideal of a ring R is a subring of R, but the converse need not be true.

Proof: Let S be an ideal of R.

⇒ a − b ∈ S ∀ a, b ∈ S - (1)
⇒ ra ∈ S & ar ∈ S ∀ a ∈ S, r ∈ R - (2)
Now, clearly S is a subgroup of (R, +). We only need to prove that
a·b ∈ S.
Let a ∈ S & b ∈ S.
⇒ a ∈ S & b ∈ R (since S ⊆ R) ⇒ a·b ∈ S - [from (2)]
Hence S is a subring of R.
The converse need not be true;
e.g. Z (the set of integers) is a subring of Q (the set of rationals).
However, Z is not an ideal of Q (for instance, ½ ∈ Q and 1 ∈ Z, but ½ · 1 = ½ ∉ Z).
Theorem: The intersection of two ideals of a ring R is an ideal of R.

Proof: Let A and B be two ideals of R.

Now, 0 ∈ A and 0 ∈ B
⇒ 0 ∈ A ∩ B
⇒ A ∩ B is non-empty.
Let x, y ∈ A ∩ B.
⇒ x, y ∈ A and x, y ∈ B
⇒ x − y ∈ A and x − y ∈ B
⇒ x − y ∈ A ∩ B.
Let r ∈ R and x ∈ A ∩ B.
⇒ r ∈ R and x ∈ A, & r ∈ R and x ∈ B
⇒ rx and xr ∈ A, & rx and xr ∈ B
⇒ rx and xr ∈ A ∩ B.
Hence A ∩ B is an ideal of R.

Theorem: The union of two ideals of a ring R need not be an ideal of R.

Proof: Let A = {..., −4, −2, 0, 2, 4, ...} and B = {..., −6, −3, 0, 3, 6, ...} be two ideals of the ring Z of integers. Then
A ∪ B = {..., −6, −4, −3, −2, 0, 2, 3, 4, 6, ...}.
Now, 2 ∈ A & 3 ∈ B ⇒ 2, 3 ∈ A ∪ B.
But 3 − 2 = 1 ∉ A ∪ B.
Hence the union of two ideals of a ring R need not be an ideal of R.

Question: Prove that a field F has only the ideals (0) and F itself, i.e. F has no proper ideals.
Proof:
Let U be an ideal of F.
If U = (0), then there is nothing to prove.
If U ≠ (0), we shall show that U = F.
Since U ≠ (0), there exists u ∈ U with u ≠ 0, and u⁻¹ ∈ F (∵ U ⊆ F and F is a field)
such that u·u⁻¹ = 1 = u⁻¹·u, so 1 ∈ U (since U is an ideal of F).
Now x ∈ F, 1 ∈ U ⇒ x·1 ∈ U
⇒ x ∈ F ⇒ x ∈ U ⇒ F ⊆ U - (1)
Also U is an ideal of F ⇒ U ⊆ F - (2)
From (1) and (2),
F = U.

Hence (0) and F itself are the only ideals of the field F.

Example: Let R be a ring with unity. Prove that no proper ideal of R can contain an invertible element of R.

Proof: Let A be a proper ideal of R and suppose a ∈ A is invertible,

i.e. there is a⁻¹ ∈ R with a·a⁻¹ = a⁻¹·a = 1.
Since a⁻¹ ∈ R and a ∈ A ⇒ a⁻¹·a ∈ A ⇒ 1 ∈ A.
⇒ for any r ∈ R: r ∈ R and 1 ∈ A ⇒ r·1 ∈ A ⇒ R ⊆ A,
and A ⊆ R ⇒ A = R.
Hence A is not a proper ideal, which contradicts our statement that A is a proper ideal of R. Hence our supposition that ∃ an invertible a ∈ A is wrong.
Hence no proper ideal of R can contain an invertible element.

Simple Ring:

A ring R is called a simple ring if

1) ∃ two elements a, b in R such that ab ≠ 0;
2) R has no proper ideals, i.e. the only ideals of R are {0} and R.

Example: Z2 = {0, 1}, the ring of integers modulo 2.

Here 1·1 = 1 ≠ 0 and Z2 has no proper ideals.
Hence Z2 is a simple ring.

Maximal and Prime Ideal

Maximal ideal: An ideal M ≠ R of a ring R is said to be maximal iff for any ideal U of R such that M ⊆ U ⊆ R,
⇒ M = U or U = R,
or iff ∄ any proper ideal of R strictly between M and R.

Example: Find the maximal ideals of Z6, the ring of integers modulo 6.

Solution: Z6 = {0, 1, 2, 3, 4, 5}.
The ideals generated by its elements are
(3) = {0, 3},  (2) = {0, 2, 4},
(4) = {0, 4, 2} = (2),
(5) = {0, 5, 4, 3, 2, 1} = Z6, not a proper ideal,
(1) = {0, 1, 2, 3, 4, 5} = Z6, not a proper ideal (a maximal ideal M of Z6 must satisfy M ≠ R).
Hence the only proper ideals are (2) and (3).
Here (2) is a maximal ideal of Z6,
and similarly (3) is a maximal ideal of Z6,
as ∄ any proper ideal strictly between (2) and Z6,
or between (3) and Z6.

Prime ideal:
An ideal P of a ring R is called a prime ideal if for any a ∈ R, b ∈ R,

ab ∈ P ⇒ a ∈ P or b ∈ P.

Example: The ideal {0} in Z (the ring of integers) is prime.

Let a, b ∈ Z such that ab ∈ {0} ⇒ ab = 0 ⇒ a = 0 or b = 0
⇒ either a ∈ {0} or b ∈ {0}.
Hence {0} is a prime ideal of Z.

Example: For any prime number p, (p) = {px : x ∈ Z} is a prime ideal of Z.

Solution: Let a, b ∈ Z such that ab ∈ (p)

⇒ ab = px for some x ∈ Z
⇒ p | ab ⇒ p | a or p | b (∵ p is prime)
⇒ a = px₂ or b = px₃ for some x₂, x₃ ∈ Z, i.e. a ∈ (p) or b ∈ (p).
Hence (p) = {px : x ∈ Z} is a prime ideal of Z.
Hence (2), (3), (5), (7), ... are prime ideals of Z.
The ideal (4) is not a prime ideal, since 2·6 = 12 ∈ (4) but 2 ∉ (4) and 6 ∉ (4).
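
The prime-ideal test for principal ideals (n) ⊂ Z is easy to automate; the sketch below (plain Python, illustrative helper name) searches a small window for a violating pair.

    # (n) is prime iff ab ∈ (n) forces a ∈ (n) or b ∈ (n); search small a, b.
    def is_prime_ideal(n, bound=50):
        for a in range(1, bound):
            for b in range(1, bound):
                if (a * b) % n == 0 and a % n != 0 and b % n != 0:
                    return False, (a, b)
        return True, None

    print(is_prime_ideal(5))   # (True, None)
    print(is_prime_ideal(4))   # (False, (2, 2)): 2*2 ∈ (4) but 2 ∉ (4)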

Chapter 3

3 Vector Spaces
Pure Mathematicians love to generalize ideas. If they manage, by means of a new trick, to
prove some conjecture, they always endeavor to get maximum mileage out of the idea by searching
for other situations in which it can be used. To do this successfully one must discard unnecessary
details and focus attention only on what is really needed to make the idea work; furthermore,
this should lead both to simplifications and deeper understanding. It also leads naturally to an
axiomatic approach to mathematics, in which one lists initially as axioms all the things which have
to hold before the theory will be applicable, and then attempts to derive consequences of these
axioms. Potentially this kills many birds with one stone, since good theories are applicable in many
different situations. We have already used the axiomatic approach in Chapter 2 in the definition of
‘field’, and in this chapter we proceed to the definition of ‘vector space’. We start with a discussion
of linearity, since one of the major reasons for introducing vector spaces is to provide a suitable
context for discussion of this concept.

Binary Composition
A mapping f : A × A → A is called a binary composition (binary operation) on A. If f is denoted by the symbol ∗, then ∀ a, b ∈ A, a ∗ b = c ∈ A.
The number of binary compositions on A, if A contains n elements, is n^(n²) = |A|^(|A|×|A|).

Internal and external composition

(1) An operation ∗ on A is said to be an Internal Composition on A if ∀ a, b ∈ A, a ∗ b ∈ A and it

is unique; hence an internal composition is a binary operation, Internal Composition = f : A × A → A.

(2) A composition is said to be an External Composition in a set A over B if ∀ a ∈ A,

b ∈ B,
b ∗ a = c ∈ A and c is a unique element.
Note: An internal composition is also an external composition (of A over A), but an external composition may not be an internal composition.

Example: Multiplication is an external composition in C over C,

in R over R,
in C over R,
in C over Q,
in R over Q,
and in Q over Q;
but it is not an external composition in R over C (a complex scalar times a real number need not be real).

3.1 Vector Space (Linear Space) (Vector Axioms)

A vector space should be a set (whose elements will be called vectors) which is equipped with an
operation of addition, and a scalar multiplication function which determines an element λx of the
vector space whenever x is an element of the vector space and λ is a scalar. That is, if x and y are
two vectors then their sum x + y must exist and be another vector, and if x is a vector and λ a
scalar then λx must exist and be a vector.
Definition: Let (F, +, ·) be a field of scalars (generally denoted a, b, c, ...) and V a non-empty
set, called the set of vectors (denoted α, β, γ, ...). Then V is a vector space over F,
denoted by V(F), if
(1) the internal composition +, known as vector addition, makes (V, +) an abelian group;
(2) an external composition in V over F, known as scalar multiplication, is defined, i.e. a ∈ F, α ∈ V ⇒
aα ∈ V and aα is unique;
(3) scalar multiplication (external composition) and vector addition (internal composition) satisfy
the following:
(a) a(α + β) = aα + aβ ; ∀ a ∈ F, α, β ∈ V.
(b) (a+b)α = aα + bα ; ∀ a, b ∈ F, α ∈ V.
(c) (ab)α = a(bα) ; ∀ a, b ∈ F, α ∈ V.
(d) 1·α = α, where 1 ∈ F and ∀ α ∈ V.

Example: Every field K is a vector space over any of its subfields F, i.e. K(F) satisfies the above postulates. Since C, R, Q are subfields of C,

C(C), C(R), C(Q)

are vector spaces; similarly Zp(Zp) and the zero spaces {0}(R), {0}(C), {0}(Q), {0}(Zp) are vector spaces.

Note: If K is a field and F is a proper superfield of K, then K(F) is not a vector space, as ∃ a ∈ F with a ∉ K, and 1 ∈ K, but a·1 = a ∉ K, i.e. the external composition is not defined.
For example: R(C), Q(R), Q(C) are not vector spaces.

• The set of all matrices of order m×n whose entries are from a field F is a vector space over F:
Mm×n(F) = F^(m×n)(F).

• The number of elements in the vector space Mm×n(Zp) is p^(m×n).

Example 3: If F is a field, then the set of all functions from a domain D to F is a vector space over
the field F w.r.t. vector addition
(f+g)(x) = f(x) + g(x) ∀ x ∈ D, and scalar multiplication (af)(x) = a(f(x))
∀ x ∈ D and a ∈ F.

Solution: Here V = {f | f : D → F}.

(1) Internal composition (vector addition) on V:

(a) Let f ∈ V & g ∈ V ⇒ f : D → F & g : D → F.

Now (f+g)(x) = f(x) + g(x), so f + g ∈ V ; x ∈ D ; f, g ∈ V.

(b) ((f + g) + h)(x) = (f + g)(x) + h(x) = f(x) + (g + h)(x) = (f + (g + h))(x).

(c) ∃ the zero function 0 : D → F, 0(x) = 0, s.t.
(f + 0)(x) = f(x) + 0(x) = f(x).

(d) For f : D → F ∈ V, ∃ −f ∈ V
s.t. (f + (−f))(x) = f(x) + (−f(x)) = f(x) − f(x) = 0.

(e) (f + g)(x) = (g + f)(x).

Hence (V, +) is an abelian group.

Now
(2) External composition in V over F:

(i) Let a ∈ F & f : D → F

⇒ (af)(x) = a f(x) ∈ F for each x ∈ D, so af ∈ V.

(ii) (a(f + g))(x) = a f(x) + a g(x) = (af + ag)(x).

(iii) ((a + b)f)(x) = a f(x) + b f(x) ; a, b ∈ F & f ∈ V.

(iv) ((ab)f)(x) = a((bf)(x)).

(v) ∃ 1 ∈ F s.t. (1·f)(x) = f(x).

Hence V = {f | f : D → F} is a vector space over the field F.

Example *: Let X be any set and let S be the set of all functions from X to F (where F is a
field). If f, g ∈ S and λ ∈ F, define f + g, λf ∈ S as follows:
(f + g)(a) = f(a) + g(a)
(λf)(a) = λ(f(a))
for all a ∈ X. (Note that the addition and multiplication on the right hand side of these equations take
place in F; you do not have to be able to add and multiply elements of X for the definitions to make
sense.) It can now be shown that these definitions of addition and scalar multiplication make S into a
vector space over F; to do this we must verify that all the axioms are satisfied. In each case the
proof is routine, based on the fact that F satisfies the field axioms.

(i) Let f, g, h ∈ S. Then for all a in X,

((f + g) + h)(a) = (f + g)(a) + h(a) (by definition of addition on S)
= (f(a) + g(a)) + h(a) (same reason)
= f(a) + (g(a) + h(a)) (addition in F is associative)
= f(a) + (g + h)(a) (definition of addition on S)
= (f + (g + h))(a) (same reason)
and so (f + g) + h = f + (g + h).

(ii) Let f, g ∈ S. Then for all a ∈ X,

(f + g)(a) = f(a) + g(a) (by definition)

= g(a) + f(a) (since addition in F is commutative)
= (g + f)(a) (by definition),
so that f + g = g + f.

(iii) Define z : X → F by z(a) = 0 (the zero element of F) for all a ∈ X. We must show that this

zero function z satisfies f + z = f for all f ∈ S. For all a in X we have
(f + z)(a) = f(a) + z(a) = f(a) + 0 = f(a)

by the definition of addition in S and the Zero Axiom for fields, whence the result.
(iv) Suppose that f ∈ S. Define g ∈ S by g(a) = −f(a) for all a ∈ X. Then for all a ∈ X,
(g + f)(a) = g(a) + f(a) = 0 = z(a),
so that g + f = z. Thus each element of S has a negative.

(v) Suppose that f ∈ S. By the definition of scalar multiplication for S and the Identity Axiom for
fields, we have, for all a ∈ X,
(1f)(a) = 1(f(a)) = f(a)
and therefore 1·f = f.

(vi) Let λ, µ ∈ F and f ∈ S. Then for all a ∈ X,

(λ(µf))(a) = λ((µf)(a)) = λ(µ(f(a))) = (λµ)(f(a)) = ((λµ)f)(a)
by the definition of scalar multiplication for S and associativity of multiplication in the field F. Thus
λ(µf) = (λµ)f.

(vii) Let λ, µ ∈ F and f ∈ S. Then for all a ∈ X,

((λ + µ)f)(a) = (λ + µ)(f(a)) = λ(f(a)) + µ(f(a)) = (λf)(a) + (µf)(a) = (λf + µf)(a),
and so (λ + µ)f = λf + µf. The remaining distributive axiom λ(f + g) = λf + λg is verified in exactly the same way.

Trivial consequences of the axioms

Having defined vector spaces in the previous section, our next objective is to prove Theorems
about vector spaces. In doing this we must be careful to make sure that our proofs use only the
axioms and nothing else, for only that way can we be sure that every system which satisfies the
axioms will also satisfy all the Theorems that we prove. An unfortunate consequence of this is that
we must start by proving trivialities, or, rather, things which seem to be trivialities because they
are familiar to us in slightly different contexts. It is necessary to prove that these things are indeed
consequences of the vector space axioms. It is also useful to learn the art of constructing proofs by
doing proofs of trivial facts before trying to prove difficult Theorems.
Throughout this section F will be a field and V a vector space over F.

Proposition 3.1 For all u, v, w ∈ V, if v + u = w + u then v = w.

Proof. Assume that v + u = w + u. By the Inverse law in the definition of a vector space there exists t in
V such that u + t = 0. Now adding t to both sides of the equation gives
(v + u) + t = (w + u) + t
v + (u + t) = w + (u + t) (by the Associative law)
v + 0 = w + 0 (by the choice of t)
v = w (by the Identity law).

Proposition 3.2 For each v ∈ V there is a unique t ∈ V which satisfies t + v = 0.

Proof. The existence of such a t for each v is immediate from the Inverse and Commutative axioms. Uniqueness follows from Proposition 3.1 above, since if t + v = 0 and t' + v = 0 then, by
Proposition 3.1, t = t'.
By Proposition 3.2 there is no ambiguity in using the customary notation ‘−v’ for the negative of a
vector v, and we will do this henceforward.
Proposition 3.3 If u, v ∈ V and u + v = v then u = 0.

Proof. Assume that u + v = v. Then by the Identity law we have u + v = 0 + v, and by Proposition 3.1,
u = 0.

We comment that Proposition 3.3 shows that V cannot have more than one zero element.

Proposition 3.4 Let λ ∈ F and v ∈ V. Then λ0 = 0 = 0v, and, conversely, if λv = 0 then either
λ = 0 or v = 0. We also have (−1)v = −v.

Proof. By definition of 0 we have 0 + 0 = 0, and therefore

λ(0 + 0) = λ0
λ0 + λ0 = λ0 (by the distributive property)
λ0 = 0 (by Proposition 3.3).

By the field axioms (there is a zero element in F) we have 0 + 0 = 0, and so by the distributive
property 0v + 0v = (0 + 0)v = 0v,
whence 0v = 0 by Proposition 3.3. For the converse, suppose that λv = 0 and λ ≠ 0. The field axioms guarantee
that λ⁻¹ exists, and we deduce that

v = 1v = (λ⁻¹λ)v = λ⁻¹(λv) = λ⁻¹0 = 0.

For the last part, observe that we have (−1) + 1 = 0 by the field axioms, and therefore
(−1)v + v = (−1)v + 1v = ((−1) + 1)v = 0v = 0.
By Proposition 3.2 above we deduce that (−1)v = −v.

3.2 Subspaces of A Vector Space V(F)


If V is a vector space over a field F and U is a subset of V we may ask whether the operation of
addition on V gives rise to an operation of addition on U. Since an operation on U is a function U
× U → U, it does so if and only if adding two elements of U always gives another element of U.
Likewise, the scalar multiplication function for V gives rise to a scalar multiplication function for
U if and only if multiplying an element of U by a scalar always gives another element of U.

Definition: A subset U of a vector space V is said to be closed under addition and scalar
multiplication if (i) u1 + u2 ∈ U ∀ u1, u2 ∈ U;
(ii) λu ∈ U ∀ u ∈ U and all scalars λ.

If a subset U of V is closed in this sense, it is natural to ask whether U is a vector space relative
to the addition and scalar multiplication inherited from V ; if it is we say that U is a subspace of
V.

Definition: A subset U of a vector space V is called a subspace of V if U is itself a vector space


relative to addition and scalar multiplication inherited from V .

It turns out that a subset which is closed under addition and scalar multiplication is always a
subspace, provided only that it is nonempty.

Theorem 3.1 If V is a vector space and U a subset of V which is nonempty and closed under
addition and scalar multiplication, then U is a subspace of V.

Proof: It is necessary only to verify that the inherited operations satisfy the vector space axioms.
In most cases the fact that a given axiom is satisfied in V trivially implies that the same axiom is
satisfied in U. Let x, y, z ∈ U. Then x, y, z ∈ V, and so by the Associative law for V it follows that (x
+ y) + z = x + (y + z). Thus the Associative law holds in U. Let x, y ∈ U. Then x, y ∈ V, and so
x + y = y + x. Thus the Commutative law holds in U. The next task is to prove that U has a
zero element. Since V is a vector space we know that V has a zero element, which we will denote
by '0', but at first sight it seems possible that '0' may fail to be in the subset U. However, since U
is nonempty there certainly exists at least one element in U. Let x be such an element. By closure
under scalar multiplication we have that 0x ∈ U. Also we know 0x = 0 (since x is an element of V),
and so, after all, it is necessarily true that '0' ∈ U. It is now trivial that '0' is also a zero element
for U, since if y ∈ U is arbitrary then y ∈ V and the Identity law for V gives 0 + y = y. For the Inverse law
we must prove that each x ∈ U has a negative in U. Since the Inverse law for V guarantees that
x has a negative in V, and since the zero of U is the same as the zero of V, it suffices to show that
−x ∈ U. But x ∈ U gives (−1)x ∈ U (by closure under scalar multiplication), and as we know −x =
(−1)x, the result follows.
The remaining axioms are proved trivially by arguments similar to those used for the Associative
and Commutative laws above.
Example: Let F = R, V = R³ and U = { (x, y, x+y)ᵀ | x, y ∈ R }. Prove that U is a subspace
of V.
Solution: In view of Theorem 3.1, we must prove that U is nonempty and closed under addition
and scalar multiplication.
It is clear that U is nonempty: (0, 0, 0)ᵀ ∈ U.
Let u, v be arbitrary elements of U. Then

u = (x, y, x+y)ᵀ,  v = (x′, y′, x′+y′)ᵀ

for some x, y, x′, y′ ∈ R, and we see that

u + v = (x″, y″, x″+y″)ᵀ  (where x″ = x + x′, y″ = y + y′),

and this is an element of U. Hence U is closed under addition. Let u be an arbitrary element of
U and λ an arbitrary scalar. Then u = (x, y, x+y)ᵀ for some x, y ∈ R, and

λu = (λx, λy, λ(x+y))ᵀ = (λx, λy, λx+λy)ᵀ ∈ U.

Thus U is closed under scalar multiplication.
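
A quick numerical check of the two closure conditions (a sketch, assuming NumPy is available; make_u is an illustrative helper, not from the text):

    import numpy as np

    def make_u(x, y):
        # Build the vector (x, y, x+y) of U.
        return np.array([x, y, x + y])

    u, v, lam = make_u(1.0, 2.0), make_u(-3.0, 0.5), 4.0
    for w in (u + v, lam * u):
        # Membership in U means: third coordinate = first + second.
        assert np.isclose(w[2], w[0] + w[1])
    print("closure verified for a sample pair and scalar")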
Example: Use the result proved in Example * and elementary calculus to prove that the set
C of all real valued continuous functions on the closed interval [0, 1] is a vector space over R.

Solution: By Example * the set S of all real valued functions on [0, 1] is a vector space over
R; so it will suffice to prove that C is a subspace of S. By Theorem 3.1 it then suffices to prove that
C is nonempty and closed under addition and scalar multiplication.
The zero function is clearly continuous; so C is nonempty.
Let f, g ∈ C and t ∈ R. For all a ∈ [0, 1] we have

lim_{x→a} (f + g)(x) = lim_{x→a} (f(x) + g(x)) (definition of f + g)
= lim_{x→a} f(x) + lim_{x→a} g(x) (basic calculus)
= f(a) + g(a) (since f, g are continuous)
= (f + g)(a)

and similarly

lim_{x→a} (tf)(x) = lim_{x→a} t(f(x)) (definition of tf)
= t lim_{x→a} f(x) (basic calculus)
= t(f(a)) (since f is continuous)
= (tf)(a),

so that f + g and tf are continuous. Hence C is closed under addition and scalar multiplication.

Algebra of Subspaces
Remark: The intersection of two subspaces is again a subspace.

Theorem 3.2 The union of two subspaces is a subspace iff one is contained in the other, i.e. iff
they are comparable.

Proof: Let w1 and w2 be two subspaces of V(F), and first suppose w1 ⊆ w2 or w2 ⊆ w1.

If w1 ⊆ w2 then w1 ∪ w2 = w2 is a subspace of V(F);
if w2 ⊆ w1 then w2 ∪ w1 = w1 is a subspace of V(F).

Hence the union of the subspaces is a subspace.

Conversely, let w1 ∪ w2 be a subspace of V(F), and if possible suppose w1 ⊄ w2 and w2 ⊄ w1.

Hence ∃ some α ∈ w1 such that α ∉ w2,

and β ∈ w2 such that β ∉ w1.

Now α ∈ w1 and β ∈ w2
⇒ α ∈ w1 ∪ w2 and β ∈ w1 ∪ w2
⇒ α + β ∈ w1 ∪ w2 (∵ w1 ∪ w2 is a subspace)
⇒ either α + β ∈ w1 or α + β ∈ w2.
If α + β ∈ w1, then, since −α ∈ w1 for α ∈ w1,
⇒ −α + (α + β) ∈ w1 ⇒ β ∈ w1,
which is a contradiction to our assumption that β ∉ w1.

Similarly, if α + β ∈ w2, then α ∈ w2, again a contradiction.
Hence our assumption is wrong, so either w1 ⊆ w2 or w2 ⊆ w1.

Example: If w1 = { (x,y,z) | 2x + 3y + 4z = 0 and x+4y+3z = 0 }

and w2 = { (x,y,z) | x − y + z = 0 } are subspaces of R³(R), then w1 ∪ w2 is a
subspace of R³(R).

Solution: Here w1 and w2 are two subspaces of R³(R).

Row-reducing the combined coefficient matrix
[2 3 4; 1 4 3; 1 −1 1],
the third row is a combination of the first two: (1, −1, 1) = (2, 3, 4) − (1, 4, 3).
Hence every solution of the two equations defining w1 also satisfies x − y + z = 0, i.e. w1 ⊆ w2.

Hence w1 ∪ w2 = w2, which is a subspace of R³(R).

3.3 Linear sum of subspaces

w1 + w2 = { α + β | α ∈ w1 and β ∈ w2 }

Theorem 3.3: The linear sum of two subspaces of a vector space V(F) is a subspace of V(F).

Proof: Let α, β ∈ w1 + w2
⇒ α = a1 + b1, β = a2 + b2,
a1, a2 ∈ w1 and b1, b2 ∈ w2. For a ∈ F,
now aα + β
= a(a1 + b1) + (a2 + b2)
= (a·a1 + a2) + (a·b1 + b2).
Now since w1 is a subspace of V over F,
⇒ a·a1 + a2 ∈ w1 for a ∈ F;
similarly a·b1 + b2 ∈ w2 for a ∈ F.

⇒ (a·a1 + a2) + (a·b1 + b2) ∈ w1 + w2
∴ w1 + w2 is a subspace of V(F).

Note: If U, V, W are subspaces of a vector space, then

1: (U ∩ V) + W ⊆ (U + W) ∩ (V + W)
2: U ∩ (V + W) ⊇ (U ∩ V) + (U ∩ W)

Example: Which of the following may be true for subspaces U, V and W?

1: (U ∩ V) + W ⊂ (U + W) ∩ (V + W)   (may be true)
2: (U ∩ V) + W = (U + W) ∩ (V + W)   (may be true)

3: (U ∩ V) + W ⊃ (U + W) ∩ (V + W)   (never true)

If instead the question asks which must be true,

then none of the three is correct, as all that always holds is (U ∩ V) + W ⊆ (U + W) ∩ (V + W).

3.4 Linear Combination of Vectors

If S = { α1, α2, ..., αn } is a set of vectors from a vector space V(F), then a1α1 + a2α2 + ...
+ anαn is called a linear combination of vectors of S, for some a1, a2, ..., an ∈ F.

Example: S = { (x + 2), x², x³ } ⊆ P3(R);

then 5(x + 2) + 7x² + (−11)x³ is a linear combination of vectors in S.

Linear Span:

Definition: If S = { α1, α2, ..., αn } is a set of vectors from a vector space V(F), then

the set of all linear combinations of vectors of S is known as the linear span of S, denoted by L(S) or {S}:

L(S) = { a1α1 + a2α2 + ... + anαn | ai ∈ F }.

Note: If S = { α1, α2, ..., αn } then L(S) is the smallest subspace of V(F) containing S.

Proof: Let W be any subspace of V containing S.

We shall prove that W contains L(S).
Since S ⊆ W,
⇒ ∀ ai ∈ F, aiαi ∈ W (by closure under scalar multiplication)
⇒ a1α1 + a2α2 + ... + anαn ∈ W (by closure under addition).

Hence L(S) ⊆ W.
Hence L(S) is the smallest subspace of V(F) containing S.

Example: If S = { (1, 2, 1), (3, 4, 5) } ⊆ R³, write down L(S) in proper form.
Solution: L(S) = { a1(1, 2, 1) + a2(3, 4, 5) | a1, a2 ∈ R }
= { (a + 3b, 2a + 4b, a + 5b) | a, b ∈ R }
= { (x, y, z) | a + 3b = x, 2a + 4b = y, a + 5b = z }
Now eliminating a and b:
a + 3b = x ........ (1)
2a + 4b = y ........ (2)
a + 5b = z ........ (3)

From (1) and (3):

2b = z − x ........ (4)

From (1) and (2):
6b − 4b = 2x − y
2b = 2x − y ........ (5)

From (4) and (5):

2x − y = z − x
2x − y − z + x = 0
3x − y − z = 0

Hence L(S) = { (x, y, z) | 3x − y − z = 0 }.
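
A small check that both spanning vectors satisfy the plane equation just derived (a sketch, assuming NumPy):

    import numpy as np

    normal = np.array([3, -1, -1])          # from 3x - y - z = 0
    for v in (np.array([1, 2, 1]), np.array([3, 4, 5])):
        assert normal @ v == 0               # both generators lie on the plane
    print("L(S) is the plane 3x - y - z = 0")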

Example: If V(F) is a vector space and S, T ⊆ V, then L(S ∪ T) = L(S) + L(T).

Solution: S ⊆ S ∪ T ⇒ L(S) ⊆ L(S ∪ T)

and T ⊆ S ∪ T ⇒ L(T) ⊆ L(S ∪ T)
⇒ L(S) + L(T) ⊆ L(S ∪ T) (∵ L(S ∪ T) is a subspace of V(F)) .......... (1)
Conversely, every linear combination of vectors of S ∪ T splits as a combination of vectors of S plus a combination of vectors of T, so L(S ∪ T) ⊆ L(S) + L(T) ........ (2)
From (1) and (2),
L(S ∪ T) = L(S) + L(T).

Note: If S and T are subspaces of V(F), then

L(S) = S and L(T) = T,
so L(S ∪ T) = S + T.

Theorem 3.5: If S ⊆ V, where V(F) is a vector space, then L(S) is a subspace of V.

Proof: L(S) = { a1α1 + a2α2 + ... + anαn | ai ∈ F }.

Since ai ∈ F and αi ∈ V,
⇒ aiαi ∈ V (by scalar multiplication),

and consequently, by the closure of V under addition,

a1α1 + a2α2 + ... + anαn ∈ V
⇒ L(S) ⊆ V.

Now S is non-empty,

⇒ ∃ αk ∈ S,
and 1 ∈ F ⇒ 1·αk = αk ∈ L(S),
hence L(S) is non-empty.

Now let α, β ∈ L(S):

α = a1α1 + ... + anαn,  β = b1α1 + ... + bnαn ;  ai, bi ∈ F.

Now, for a ∈ F,

aα + β = a(a1α1 + ... + anαn) + (b1α1 + ... + bnαn)
= (a·a1 + b1)α1 + ... + (a·an + bn)αn.

Since ai, bi, a ∈ F ⇒ a·ai + bi ∈ F,
⇒ aα + β = (a·a1 + b1)α1 + ... + (a·an + bn)αn ∈ L(S).

So L(S) is a subspace of V(F).

Direct sum: A vector space V(F) is said

to be the direct sum of its two
subspaces W1 and W2, denoted by V = W1 ⊕ W2, if

1. V = W1 + W2,

2. W1 ∩ W2 = {0}, the zero space.

Here W1 and W2 are said to be complementary subspaces of V(F).

Example: Mn(R) = W1 ⊕ W2 in the following cases.

1. W1 = Mn(R) and W2 = {0n×n}.

Clearly W1 + W2 = Mn(R) and W1 ∩ W2 = {0n×n}.

Hence W1 and W2 are complementary subspaces of Mn(R).

2. W1 = {A | Aᵀ = A} (symmetric) and W2 = {A | Aᵀ = −A} (skew-symmetric).

Clearly W1 + W2 = Mn(R), since every A = (A + Aᵀ)/2 + (A − Aᵀ)/2,

and {0n×n} is the only matrix such that

(0n×n)ᵀ = 0n×n = −0n×n.

Hence W1 ∩ W2 = {0n×n},

and W1 and W2 are complementary subspaces of Mn(R).

3. W1 = {[aij] | aij = 0 whenever i > j} (upper triangular),

W2 = {[aij] | aij = 0 whenever i < j} (lower triangular).

Clearly W1 + W2 = Mn(R),

but W1 ∩ W2 consists of all matrices whose off-diagonal elements vanish, i.e. all diagonal matrices.
Hence W1 ∩ W2 ≠ {0} (W1 ∩ W2 contains zero but not only zero),
hence this is not a direct sum.

4. W1 = {[aij] | aij = 0 whenever i ≥ j} (strictly upper triangular),

W2 = {[aij] | aij = 0 whenever i < j} (lower triangular including the diagonal).

Sol: Here W1 + W2 = Mn(R),

and also W1 ∩ W2 = {0},

hence Mn(R) = W1 ⊕ W2.

5. W1 = {A | Aᶿ = A} and W2 = {A | Aᶿ = −A} (Aᶿ the conjugate transpose):

since for Mn(R), F = R,

⇒ Aᶿ = Aᵀ; see example 2.
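
Example 2 is easy to see concretely: the sketch below (assuming NumPy) splits a random matrix into its symmetric and skew-symmetric parts and confirms the decomposition.

    import numpy as np

    A = np.random.rand(3, 3)
    S = (A + A.T) / 2      # symmetric part: S.T == S
    K = (A - A.T) / 2      # skew-symmetric part: K.T == -K
    assert np.allclose(S, S.T) and np.allclose(K, -K.T)
    assert np.allclose(A, S + K)   # A = S + K, so Mn(R) = W1 + W2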

Example: R³(R) = W1 ⊕ W2, where W1 and W2 are

W1 = {(x,y,z) | e^(xyz) > −1} = R³ (the condition holds for every vector, the exponential being always positive),

W2 = {(x,y,z) | e^(x²+2y²+2z⁴) = 1}.

Sol: Clearly W1 = R³ ⇒ W1 + W2 = R³(R).

Since e⁰ = 1 ⇒ W2 = {(x,y,z) | x² + 2y² + 2z⁴ = 0} = the zero space.

Hence W1 ∩ W2 = {0}, so R³(R) = W1 ⊕ W2.

2. W1 = { (x,y,z) | x = y = z },

W2 = { (0,b,c) | b, c ∈ R }.

Solution: Here W1 + W2 = {(x, x+b, x+c)} = R³.

We have,

in W1 ∩ W2 the first element is 0 (by W2), and by W1, x = y = z,

⇒ in W1 ∩ W2 the other elements of the triplet are also zero,

⇒ W1 ∩ W2 is the zero space,

⇒ R³(R) = W1 ⊕ W2.
Linear Dependence and Independence.

Linear Dependence: A set of vectors S = {α1, α2, ..., αn} is said to be a linearly dependent
set of vectors if there exist scalars c1, ..., cn, with at least one ci ≠ 0, such that
c1α1 + c2α2 + ... + cnαn = 0.

Linear Independence: A set of vectors S = {α1, α2, ..., αn} is said to be a
linearly independent set of vectors if
c1α1 + c2α2 + ... + cnαn = 0
⇒ c1 = c2 = c3 = ... = cn = 0.

Note: A set containing the 0 vector is always linearly dependent, for if αi = 0 for some i,

then 0·α1 + 0·α2 + ... + 1·αi + ... + 0·αn = 0

with the coefficient of αi non-zero.

Note: A set containing a single non-zero vector is linearly independent.
Proof: Let α ∈ S such that α ≠ 0, i.e. S = {α},

and c·α = 0; since α ≠ 0 ⇒ c = 0.

Hence S is linearly independent.

Note: (1) Every superset of a linearly dependent set is linearly dependent.

(2) Every subset of a linearly independent set is linearly independent.

Theorem 3.6: If V(F) is a vector space, then any set S = {α1, α2, ..., αn}
of non-zero vectors is either linearly independent, or ∃ k with 2 ≤ k ≤ n
such that αk is a linear combination of the preceding vectors.

Proof: If S is linearly independent, we have nothing to prove. Otherwise, if S is linearly dependent, then
c1α1 + c2α2 + ... + cnαn = 0 ...........(1)

with ∃ ci ≠ 0.

Now let k be the largest index in eq. (1) such that ck ≠ 0;

then c_{k+1} = c_{k+2} = ... = cn = 0.

Now k ≠ 1, for if k = 1,

then c1 ≠ 0

and c2 = c3 = c4 = ... = cn = 0 .........(2)

Using (2) in (1) we get

c1α1 = 0 ⇒ c1 = 0 (∵ α1 ≠ 0),

but c1 ≠ 0, which is a contradiction.

Hence k ≠ 1 ⇒ 2 ≤ k ≤ n, and

c1α1 + c2α2 + ... + ckαk = 0, with ck ≠ 0,

⇒ αk = −ck⁻¹ (c1α1 + c2α2 + ... + c_{k−1}α_{k−1}).

Hence αk is a linear combination of the preceding vectors.

Example: Consider S = {1 + x, 3 + x², 9 + 11x + 7x³, x³ + 1}, a set of vectors from

P3(R). Is it linearly independent or linearly dependent?

Solution: a(1 + x) + b(3 + x²) + c(9 + 11x + 7x³) + d(x³ + 1) = 0

⇒ (a + 3b + 9c + d) + (a + 11c)x + bx² + (7c + d)x³ = 0

⇒ a + 3b + 9c + d = 0, a + 11c = 0, b = 0, 7c + d = 0; in matrix form

[1 3 9 1; 1 0 11 0; 0 1 0 0; 0 0 7 1] [a b c d]ᵀ = [0 0 0 0]ᵀ.

From the second, third and fourth equations: a = −11c, b = 0, d = −7c; substituting in the first,
−11c + 0 + 9c − 7c = −9c = 0 ⇒ c = 0

⇒ a = 0, b = 0, c = 0, d = 0.

Hence S is linearly independent.
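
The same conclusion follows from the rank (or determinant) of the coefficient matrix; a sketch assuming NumPy:

    import numpy as np

    # Rows: coefficients of 1, x, x^2, x^3 in the equation above.
    M = np.array([[1, 3, 9, 1],
                  [1, 0, 11, 0],
                  [0, 1, 0, 0],
                  [0, 0, 7, 1]])
    print(np.linalg.matrix_rank(M))   # 4  -> only the trivial solution, S is L.I.
    print(np.linalg.det(M))           # -9.0, non-zero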

3.5 Basis
A set S ⊆ V is a basis of the vector space V(F) if
1) S is linearly independent;
2) S spans V, i.e. L(S) = V.

Since L(S) is the smallest subspace of V containing S, we may say a basis of a vector space is a
maximal independent set of vectors from V(F).

3.6 Finite Dimensional Vector Space

OR
Finitely Generated Vector Space
A vector space is said to be a finitely generated or finite dimensional vector space if ∃
a finite subset S ⊆ V
such that L(S) = V; else the V.S. (vector space) is said to be an infinite dimensional vector space.

Existence Theorem: Basis of a F.D.V.S.

If V(F) is a finite dimensional vector space, then there exists a finite set of vectors S ⊆ V such
that L(S) = V. If S is L.I., then S is a basis; else there exists αi ∈ S such that αi ∈ L(S − {αi})
⇒ L(S − {αi}) = V.

If S − {αi} is L.I., then it is a basis of V; else there exists αj ∈ S − {αi} such that αj ∈ L(S − {αi, αj}), and so on.
Continuing the process, we eventually reach a L.I. subset which still spans V, in the extreme case a single non-zero vector.
Hence there exists a basis of each finite dimensional vector space.

Theorem 3.7: Any two bases of a finite dimensional vector space have the same number of elements.

Proof: Let V(F) be a finite dimensional vector space, so it must have a basis.
Let B1 = {x1, x2, ..., xm} and B2 = {y1, y2, ..., yn} be two bases of V(F).
⇒ B1 and B2 span V(F) and are linearly independent.

Now
y1 ∈ B2 ⊆ V(F)
⇒ y1 ∈ V(F).

Since B1 spans V(F),

→ y1 is a linear combination of elements of B1:
→ y1 = α1x1 + α2x2 + ... + αmxm
→ y1, x1, x2, ..., xm are linearly dependent.
∴ ∃ xi s.t. xi is a L.C. of the preceding vectors y1, x1, ..., x_{i−1},
i.e. xi = αy1 + α1x1 + ... + α_{i−1}x_{i−1}.

Now
consider S1 = {y1, x1, x2, ..., x_{i−1}, x_{i+1}, ..., xm}.

We shall show L(S1) = V.

Let x ∈ V → x is a linear combination of vectors of B1:

⇒ x = β1x1 + β2x2 + ... + βmxm

= β1x1 + ... + β_{i−1}x_{i−1} + βixi + β_{i+1}x_{i+1} + ... + βmxm

= β1x1 + ... + β_{i−1}x_{i−1} + βi(αy1 + α1x1 + ... + α_{i−1}x_{i−1}) + β_{i+1}x_{i+1} + ... + βmxm

= (αβi)y1 + (β1 + βiα1)x1 + ... + (β_{i−1} + βiα_{i−1})x_{i−1} + β_{i+1}x_{i+1} + ... + βmxm.

⇒ x is a L.C. of the vectors y1, x1, x2, ..., x_{i−1}, x_{i+1}, ..., xm of S1

⇒ x ∈ L(S1) ⇒ V ⊆ L(S1) (also L(S1) ⊆ V)

⇒ V = L(S1),
i.e. S1 generates V.

Now y2 ∈ B2 and B2 ⊆ V
⇒ y2 ∈ V, and L(S1) = V, so y2 can be expressed as a L.C. of elements of S1 ⇒ the set X = {y2, y1, x1, x2, ..., x_{i−1}, x_{i+1}, ..., xm}
is a L.D. set.
∴ ∃ a vector of X which is a L.C. of the preceding vectors.
We claim this vector is not y1, for if it were, then y1 would be a L.C. of y2 alone, i.e. {y1, y2} would be a L.D.
set ⇒ B2 ⊇ {y1, y2} is L.D. [∵ a superset of a L.D. set is L.D.],
which is a contradiction; hence some xk is a L.C. of the preceding vectors.
Now by a similar process we show that S2 = {y2, y1, x1, x2, ..., x_{i−1}, x_{i+1}, ..., x_{k−1},
x_{k+1}, ..., xm}, obtained after deleting xk, generates V. We go on repeating the process, and one
element of B1 is removed and one element of B2 is added at each step.
We claim that m ≮ n: if m < n, then after m steps all the xi's are removed and we remain with the
set Sm = {ym, y_{m−1}, ..., y1}, which spans V.
At the next step we conclude that y_{m+1} ∈ B2 ⇒ y_{m+1} ∈ V(F), and hence y_{m+1} is a L.C. of elements
of Sm,
i.e. {y1, y2, ..., ym, y_{m+1}} is a L.D. set
⇒ B2 ⊇ {y1, y2, ..., y_{m+1}} is a L.D. set, which is a contradiction.
∴ our supposition is wrong,
hence m ≮ n .... (*)
Similarly, interchanging the roles of B1 and B2 (removing one element of B2 at each step), we
conclude that n ≮ m .... (**)
From (*) and (**),
m = n.

Dimension of a finitely generated vector space

The number of elements in a basis of a finitely generated vector space is known as the dimension of the
vector space, denoted dim V.
Note: 1. If V(F) is an n-dimensional vector space, then any set of n + 1 or more vectors from V(F)
is L.D.; i.e. a basis is a maximal L.I. set in a finite dimensional vector space, and hence the number of elements
in any L.I. subset of the space is ≤ dim V,
i.e. if S is a L.I. set in V(F), then |S| ≤ n = dim V.
2. If S spans an n-dimensional vector space, then |S| ≥ n; if |S| = n,
then S is also L.I. and S is a basis of V; if |S| > n then S is L.D., hence S is not a basis of the V.S.

Standard basis of a vector space

If V(F) is a vector space, write a general vector of V as a linear combination, with essential arbitrary
coefficients from the field F, of fixed vectors of V. These fixed vectors form the standard basis of
V(F). The number of essential arbitrary constants introduced over the field F to write down the vectors of V is
known as dim V, i.e. dim(V(F)).

Example: Find the standard basis and dimension of Vn(F) = Fⁿ(F).

Solution: Fⁿ(F) = { (a1, a2, ..., an) | ai ∈ F }
= a1(1, 0, 0, ..., 0) + a2(0, 1, 0, ..., 0) + ... + an(0, 0, ..., 0, 1)
or a1e1 + a2e2 + ... + anen;
hence {e1, e2, e3, ..., en} is the standard basis of Vn(F) = Fⁿ(F), and dim Fⁿ(F) = n.
Note: If V(F) is an n-dimensional vector space, then any set of n L.I. vectors from V(F) is a basis
of V(F).

Extension Theorem: If V(F) is a finite dimensional vector space, then every linearly indepen-

dent set of vectors from V(F) is either a basis of V(F), or it is a part of a basis of V(F) (i.e. can be extended to a basis).

Proof: Let the dimension of V(F) be n.

Now let S = {α1, α2, α3, ..., αm} be a L.I. set of vectors of V(F).
Now, if m = n, then, since any n L.I. vectors of an n-dimensional space form a basis,
S = {α1, α2, α3, ..., αm} is a basis of V(F).
Now, if m < n, then ∃ β1 ∈ V s.t. β1 ∉ L(S),
so S1 = {α1, α2, α3, ..., αm, β1} is L.I.; it is a basis if m+1 = n.
If m+1 < n, then
∃ β2 ∈ V s.t. β2 ∉ L(S1),
so S2 = {α1, α2, α3, ..., αm, β1, β2} is L.I.

If m+2 = n, then it is a basis for V(F).
If m+2 < n, then we go on repeating the process till we get a set S_{n−m} = {α1, α2, α3, ..., αm, β1, β2, ..., β_{n−m}}.
|S_{n−m}| = m + (n − m) = n,
and S_{n−m} is L.I.
Hence it is a basis for V(F). Hence any L.I. set of a finite dimensional vector space either is a basis or can be extended
to form a basis of V(F).

Example: If V(F) is a finite dimensional vector space and W is a subspace of V(F), then dim
W ≤ dim V.

Solution: Let W be a subspace of V(F).

Hence W ⊆ V(F) and W is itself a vector space.
Clearly, if dim W = m, then W has m L.I. vectors. Also W ⊆ V(F),
hence the number of L.I. vectors in V(F) is greater than or equal to m,
→ dim V ≥ m = dim W.
Hence dim W ≤ dim V.

Theorem 3.8: If V(F) is a vector space of dimension n and S = {α1, α2, α3, ..., αn} is a basis of
V(F), then every vector α ∈ V is uniquely expressed as α = a1α1 + a2α2 + a3α3 + ... + anαn, ai ∈ F.

Proof: If also α = b1α1 + b2α2 + b3α3 + ... + bnαn, bi ∈ F,

−→ (a1 − b1)α1 + (a2 − b2)α2 + ... + (an − bn)αn = 0
−→ (a1 − b1) = 0, (a2 − b2) = 0, ..., (an − bn) = 0
(∵ S = {α1, α2, α3, ..., αn} is a basis and so is a L.I. set)
⇒ a1 = b1, a2 = b2, ..., an = bn, i.e. ai = bi ∀ i.

3.7 Ordered Basis and Coordinates

Ordered basis: If V(F) is an n-dimensional vector space, then by an ordered basis S = {α1, α2, α3, ..., αn} we
mean that αi is the i-th vector of S.

Co-ordinates of α w.r.t. an ordered basis: If V(F) is an n-dimensional vector space with

ordered basis S = {α1, α2, α3, ..., αn}, and if α ∈ V is s.t. α = a1α1 +
a2α2 + a3α3 + ... + anαn, then (a1, a2, ..., an) are known as the co-ordinates of α w.r.t. the ordered basis
S of V(F), and [a1, a2, ..., an]ᵀ is known as the co-ordinate matrix of α w.r.t. the ordered basis S.

Example: Find the co-ordinates of (20, 16, 2016) ∈ R³ w.r.t. the ordered basis {(1,1,1), (1,1,0), (1,0,0)}.

Solution: Let (20, 16, 2016) = a(1, 1, 1) + b(1, 1, 0) + c(1, 0, 0)

= (a+b+c, a+b, a)
−→ a = 2016; a + b = 16 −→ b = −2000;
a + b + c = 20 −→ c = 4.
The co-ordinates of (20, 16, 2016) ∈ R³ w.r.t. the ordered basis are (2016, −2000, 4).
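
Coordinates w.r.t. an ordered basis are just the solution of a linear system whose columns are the basis vectors; a sketch assuming NumPy:

    import numpy as np

    B = np.array([[1, 1, 1],
                  [1, 1, 0],
                  [1, 0, 0]]).T          # columns = ordered basis vectors
    alpha = np.array([20, 16, 2016])
    coords = np.linalg.solve(B, alpha)
    print(coords)                        # [ 2016. -2000.     4.]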

Theorem 3.9: If W1 and W2 are subspaces of a finite dimensional vector space V(F), then
dim(W1 + W2) = dim W1 + dim W2 − dim(W1 ∩ W2).

Proof: Let B_{W1∩W2} = {α1, α2, α3, ..., αm} be a basis of W1 ∩ W2.
Hence W1 and W2 both contain this L.I. set of vectors. Thus B_{W1∩W2} can be extended to form
a basis of W1 and of W2:
Let B_{W1} = {α1, α2, α3, ..., αm, β1, β2, ..., βp},

B_{W2} = {α1, α2, α3, ..., αm, γ1, γ2, ..., γq}.

Let B = {α1, α2, α3, ..., αm, β1, β2, ..., βp, γ1, γ2, ..., γq}.

Clearly |B| = |B_{W1}| + |B_{W2}| − |B_{W1∩W2}|.

Also B contains the elements of B_{W1} and B_{W2} only,

−→ L(B) = W1 + W2.

Now we claim: B is a basis for W1 + W2.

For this, we shall prove that B is L.I.

If B is not L.I., then ∃ some γi s.t. γi ∈ L(B_{W1})
−→ γi ∈ W1 and γi ∈ W2, so γi ∈ W1 ∩ W2, which is not true (the extension vectors γi lie outside W1 ∩ W2). Therefore our supposition is wrong,
−→ B is L.I.
Hence B is a basis for W1 + W2, with |B| = |B_{W1}| + |B_{W2}| − |B_{W1∩W2}|,
or dim(W1 + W2) = dim W1 + dim W2 − dim(W1 ∩ W2).

Example: Find the basis and dimension of the zero space.

Solution: V = {0}; the set {0} is L.D.,

−→ B = {} = φ, and L(φ) = zero space = {0},
∴ dim V = 0.

Note: dim V(F) = the number of essential arbitrary constants introduced over the field F to write down the
set of vectors of V.
Example: If W1 = { [a b; c d] | a+b+c+d = 0,
a−b+c−d = 0
& 3a+b+3c+d = 0 } and
W2 = { [a a; a+b a+3b] | a, b ∈ R } are subspaces of M2(R), then find
(i) dim W1  (ii) dim W2
(iii) dim(W1 ∩ W2)  (iv) dim(W1 + W2).

Solution: (i) For W1:
[1 1 1 1; 1 −1 1 −1; 3 1 3 1] [a b c d]ᵀ = [0 0 0]ᵀ
−→ [1 1 1 1; 1 −1 1 −1; 3 1 3 1] ∼ [1 1 1 1; 0 −2 0 −2; 0 −2 0 −2] ∼ [1 1 1 1; 0 2 0 2; 0 0 0 0]
(here rank = 2, and hence 2 dependent variables)
−→ dim W1 = 2 (total variables − number of dependent variables = 4 − 2),
and a+b+c+d = 0 & 2b+2d = 0 −→ b+d = 0
−→ a+c = 0 & b+d = 0.

(ii) [a a; a+b a+3b] = a[1 1; 1 1] + b[0 0; 1 3],
and these two matrices are L.I., so dim W2 = 2.

(iii) W1 ∩ W2: substituting the entries of a matrix of W2 (namely a, a, a+b, a+3b) into the three conditions of W1:
a + a + (a+b) + (a+3b) = 4a + 4b = 0,
a − a + (a+b) − (a+3b) = −2b = 0,
3a + a + 3(a+b) + (a+3b) = 8a + 6b = 0
−→ b = 0 & a = 0
−→ the number of free variables is 0,
∴ dim(W1 ∩ W2) = 0.

(iv) dim(W1 + W2) = dim W1 + dim W2 − dim(W1 ∩ W2) = 2 + 2 − 0 = 4.

3.8 Cosets
If V(F) is a vector space and W is a subspace of V, and if α ∈ V, then the set
W+α = {γ + α : γ ∈ W} is said to be the right coset of W in V generated by α.

Similarly, the left coset is α+W = {α + γ : γ ∈ W}. Since (V, +) is abelian,

∴ every left coset is a right coset.
NOTE: (i) W+0 = W.
(ii) W+α = W iff α ∈ W.
(iii) W+α = W+β iff α − β ∈ W.

3.9 Quotient Space

If V(F) is a vector space and W is a subspace of V, then the set of all cosets of W in V,
i.e. V/W = {W+α | α ∈ V}, forms a vector space with respect to the vector addition and scalar multiplication
(W+α) + (W+β) = W + (α + β) and
a(W+α) = W + aα;

if a = 0, 0(W+α) = W.

Theorem 3.10: If V(F) is a finite dimensional vector space and W is a subspace of V(F), then dim V/W =
dim V − dim W.
Proof: First we fix bases for V and W.
Let B_W = {α1, α2, ..., αm}.
Since W is a subspace of V, B_W can be extended to form a basis of V.
Now let B_V = {α1, α2, ..., αm, β1, β2, ..., βn}.

Now let B = {W+β1, W+β2, ..., W+βn}. We show that B is a basis for V/W.
(i) To show B is L.I.:
Let C1(W+β1) + C2(W+β2) + ... + Cn(W+βn) = W (∵ W = W+0 is the zero of V/W)
⇒ W + (C1β1 + C2β2 + ... + Cnβn) = W

⇒ C1β1 + C2β2 + ... + Cnβn ∈ W

⇒ C1β1 + ... + Cnβn = a1α1 + ... + amαm for some aj ∈ F, αj ∈ W

⇒ a1α1 + ... + amαm + (−C1)β1 + ... + (−Cn)βn = 0.

Now the αj, βi ∈ B_V,
L(αj, βi) = V and the αj, βi are L.I.,
∴ all the aj and Ci must be zero,
i.e. B is L.I.

(ii) To show B spans V/W:

Let W+α ∈ V/W, and write α = a1α1 + ... + amαm + C1β1 + ... + Cnβn
⇒ W+α = W + (a1α1 + ... + amαm + C1β1 + ... + Cnβn)

= W + (C1β1 + ... + Cnβn) (∵ a1α1 + ... + amαm ∈ W)

= C1(W+β1) + ... + Cn(W+βn) ∈ L(B)

⇒ dim V/W = n.
Also dim W = m, dim V = m + n,
and n = (m + n) − m,

⇒ dim V/W = dim V − dim W.

Example: If V = M3(R) and W = {A ∈ M3 | AB = 0},

where B = [1 3 4; 2 15 17; 3 18 21], then dim V/W = ?

Solution: Let A = [a11 a12 a13; a21 a22 a23; a31 a32 a33] ∈ W,

so that AB = 0.
The first row of AB = 0 gives

⇒ a11 + 2a12 + 3a13 = 0, 3a11 + 15a12 + 18a13 = 0, 4a11 + 17a12 + 21a13 = 0,

with coefficient matrix
[1 2 3; 3 15 18; 4 17 21] ∼ [1 2 3; 0 9 9; 0 9 9],
which has rank 2; hence the first row of A satisfies 2 independent linear conditions.
Similarly, the second row and the third row of A each satisfy 2 independent conditions.
Hence there are in total 6 independent linear conditions and 9 variables in all;
⇒ the 6 independent equations suppress 6 variables.
Hence dim W = 9 − 6 = 3,
and dim V = 9,
⇒ dim V/W = dim V − dim W
= 9 − 3 = 6 = 3 × 2 = n × rank B.

Note: In general, if V = Mn(R) and its subspace W = {A ∈ Mn(R) | AB = 0},

where rank B = r, then
dim W = n² − nr = n(n − r) and
dim V/W = nr.
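
The counts above are easy to confirm symbolically; a sketch assuming SymPy (the nullspace of Bᵀ gives the admissible rows of A):

    import sympy as sp

    B = sp.Matrix([[1, 3, 4], [2, 15, 17], [3, 18, 21]])
    r = B.rank()                    # 2
    left_null = B.T.nullspace()     # each row of A with A*B = 0 must lie here
    dim_W = 3 * len(left_null)      # 3 rows, each free in a (3 - r)-dim space
    print(r, dim_W, 9 - dim_W)      # 2 3 6  -> dim V/W = 6 = n*r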
Example: In P2(R), if W = L({1+2x}), then which of the following pairs of cosets of W
are identical?
(i) W+(7 + x² + x); W+(5 + x² − 3x)
(ii) W+(x² + 5x); W+(x² + 6x + 1/2)
(iii) W+(x² + 7x + 11); W+(13x + 3x² + 15)
Solution: (i) (7 + x² + x) − (5 + x² − 3x)
= 2 + 4x = 2(1+2x) ∈ L({1+2x}).
Hence W+(7 + x² + x) = W+(5 + x² − 3x).
(ii) W+(x² + 5x); W+(x² + 6x + 1/2):
⇒ (x² + 5x) − (x² + 6x + 1/2)
= −x − 1/2
= −(1/2)(1+2x) ∈ W, as −1/2 ∈ R and −(1/2)(1+2x) ∈ L({1+2x}); the cosets are identical.
(iii) (x² + 7x + 11) − (13x + 3x² + 15)
= −2x² − 6x − 4 ∉ L({1+2x}) (it has an x² term).
Hence these cosets of W are not identical.
Example: If V = R⁴(R) and W = L({(1,1,1,1), (1,1,0,0), (0,0,1,1)}),
then dim V/W = ?

Solution: [1 1 1 1; 1 1 0 0; 0 0 1 1] ∼ [0 0 0 0; 1 1 0 0; 0 0 1 1] → rank 2
(since (1,1,1,1) = (1,1,0,0) + (0,0,1,1)).
Hence dim W = 2,
dim V = 4,
⇒ dim V/W = 4 − 2 = 2.

Definition: If U(F) and V(F) are vector spaces over the same field F, then a transformation T
from U(F) to V(F) is a linear transformation if ∀ a ∈ F and ∀ α, β ∈ U,
T(aα + β) = a·T(α) + T(β),
or, equivalently,
(i) T(α + β) = T(α) + T(β) ∀ α, β ∈ U;
(ii) T(aα) = a·T(α) ∀ a ∈ F, α ∈ U.

NOTE: (a) A linear transformation T : V(F) → V(F) from a space to itself is known as a linear operator. E.g., f : R³
→ R³ given by
f(x,y,z) = (ax+by+cz, a1x+b1y+c1z, a2x+b2y+c2z) is a linear operator,
but
f(x,y,z) = (x²yz, e^(xyz), sin(x)+3y+4z³) is not a linear operator, since its components are not
linear combinations of x, y, z.

(b) If U(F) and V(F) are vector spaces, then a function T : U(F) → V(F) is a linear transformation iff
under T every component of T(α) is a linear combination of the components of α.

Example: Under what conditions on a, b, c, d ∈ R is T : C(C) → C(C), given by T(z) = T(x+iy) = (ax+by) + i(cx+dy), a linear transformation?
Solution: For a C-linear T, T(z) must be a scalar multiple of z over U = C(C):
−→ T(z) = Kz for some K ∈ C
−→ T(x+iy) = K(x+iy)
= (K1 + iK2)(x+iy)  {∵ K ∈ C −→ K = K1 + iK2}
= (K1x − K2y) + i(K2x + K1y)
−→ a = K1, b = −K2 and c = K2, d = K1
−→ a = d and c = −b
−→ (a, b) = (d, −c).

Example: If P and Q are fixed matrices of order m and n respectively over a field F, then
T : M_{m×n}(F) → M_{m×n}(F) given by T(A) = PAQ is a linear transformation.
Solution: Since PAQ = [P]_{m×m}[A]_{m×n}[Q]_{n×n} = [PAQ]_{m×n} over F, T is well defined; and
T(aA + B) = P(aA + B)Q = aPAQ + PBQ = aT(A) + T(B).
Hence T : M_{m×n}(F) → M_{m×n}(F) given by T(A) = PAQ is a linear transformation, and in fact a
linear operator.

Example: Let V be the set of all real valued functions f over R satisfying D³(D−1)f(x) = 0, and let T : V(R) → M2(R) be given
by T(f(x)) = [f(0) f′(0); f″(0) f‴(0)].
Solution: D³(D−1)f(x) = 0
−→ auxiliary equation m³(m−1) = 0
−→ m = 0, 0, 0, 1
−→ f(x) = (C1 + C2x + C3x²)e^(0x) + C4eˣ
−→ f(0) = C1 + C4;
f′(x) = C2 + 2C3x + C4eˣ,
f′(0) = C2 + C4;
f″(x) = 2C3 + C4eˣ,
f″(0) = 2C3 + C4;
f‴(x) = C4eˣ,
f‴(0) = C4.
−→ T(f(x)) = [C1+C4  C2+C4; 2C3+C4  C4].
Example: Show that T : P4(R) → P4(R) given by T(p(x)) = p(2x+5) is a linear transformation.
Solution: For p, q ∈ P4(R) and a ∈ R,
T(ap + q)(x) = (ap + q)(2x+5) = a·p(2x+5) + q(2x+5) = a·T(p)(x) + T(q)(x),
and p(2x+5) is again a polynomial of degree ≤ 4; e.g. p(x) = C0 + C1x + C2x² + C3x³ + C4x⁴ gives
p(2x+5) = C0 + C1(2x+5) + C2(2x+5)² + C3(2x+5)³ + C4(2x+5)⁴ ∈ P4(R).
∴ T : P4(R) → P4(R) is a linear transformation.

REMARK: More generally, T : Pn(R) → Pn(R), T(p(x)) = p(ax+b), is a linear transformation.

• Zero Transformation: A transformation T : U(F) → V(F) is known as the zero transformation if ∀

α ∈ U, T(α) = 0 ∈ V.

• Identity Transformation: A transformation T : V(F) → V(F) s.t. T(α) = α is known as the identity transformation.

• Negative of a Linear transformation: If T : U(F) → V(F) is a linear transformation, then

(−T) : U(F) → V(F) is given by (−T)(α) = −(T(α)) ∀ α ∈ U.

• Elementary properties of L.T.: If U(F) and V(F) are vector spaces and T is a L.T. from
U(F) to V(F), then
(a) T(0) = 0, 0 ∈ U and 0 ∈ V.
(b) T(−α) = −T(α).
(c) T(α − β) = T(α) − T(β).
(d) T(a1α1 + ... + anαn) = a1T(α1) + ... + anT(αn).
NOTE: To determine a L.T. from given values, we first use
T(a1α1 + ... + anαn) = a1T(α1) + ... + anT(αn) - (*), where the αi are the given vectors, and, if the αi do not span the domain, we
find the relation the components (x, y, z) of vectors in L({αi}) satisfy, say ax+by+cz = 0 - (1).
Then T(p, q, r) can be determined only if (p, q, r) satisfies equation (1).
Using the relation (*), we find the equation of the linear transformation from the given data.

Example: Find the L.T. T : R²(R) → R²(R) s.t. T(1,1) = (2,5) and T(1,2) = (4,11).
Solution: Let a(1,1) + b(1,2) = 0

−→ a + b = 0 and a + 2b = 0

−→ a = 0, b = 0.

Hence the given vectors are linearly independent (and so form a basis of R²).

Now T(a(1,1) + b(1,2)) = a·T(1,1) + b·T(1,2) = a(2,5) + b(4,11)

∴ T(a+b, a+2b) = (2a+4b, 5a+11b).
Putting x = a+b, y = a+2b gives a = 2x−y, b = y−x, so
T(x, y) = (2(2x−y)+4(y−x), 5(2x−y)+11(y−x)) = (2y, 6y−x).
Check: T(1,1) = (2,5) and T(1,2) = (4,11).
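
The same T can be found by a matrix computation; a sketch assuming NumPy:

    import numpy as np

    basis  = np.array([[1, 1], [1, 2]]).T    # columns (1,1) and (1,2)
    images = np.array([[2, 5], [4, 11]]).T   # columns T(1,1) and T(1,2)
    M = images @ np.linalg.inv(basis)        # matrix of T in the standard basis
    print(M)                                 # [[ 0. 2.] [-1. 6.]], i.e. T(x,y) = (2y, 6y - x)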

Range or Range Space of a L.T.

Definition: If T : U(F) → V(F) is a L.T. from the vector space U(F) to V(F), then the range of the L.T.,
denoted by R(T) = Range T, is {β ∈ V | T(α) = β for some α ∈ U}.

Null Space or Kernel of a L.T.

Definition: If T : U(F) → V(F) is a L.T. from the vector space U(F) to V(F), then the null space or kernel
of T, denoted by kernel(T) = ker(T) = N(T), is {α ∈ U | T(α) = 0 ∈ V}.

Note: Ker(T) is a subspace of U(F).

If B_U = {α1, α2, ..., αn} and T : U(F) → V(F) is a L.T., then R(T) = L({T(α1), T(α2), T(α3), ..., T(αn)}) (∵
αi ∈ U → T(αi) ∈ V, and the range is the homomorphic image).
Since dim U = n, at most n L.I. vectors exist in R(T) → dim R(T) ≤ dim U.

3.10 Rank and Nullity of a Linear Transformation

If T : U(F) → V(F) is a L.T. from the V.S. U(F) to the V.S. V(F), then dim R(T) = Rank T = ρ(T) and dim
ker(T) = dim N(T) = Nullity(T) = γ(T).
If U is a finite dimensional vector space, then
ρ(T) ≤ dim U and γ(T) ≤ dim U.

Rank Nullity Theorem

If U(F) is a F.D.V.S. and T : U(F) → V(F) is a linear transformation, then ρ(T) + γ(T) = dim U.

Proof: Let B_{N(T)} = {α1, α2, ..., αn}.

Since N(T) is a subspace of U,
∴ the αi's form a L.I. set of vectors of U and can be extended to form a basis of U.
Let B_U = {α1, α2, ..., αn, β1, β2, ..., βm}.
Now
let B_{R(T)} = {T(β1), T(β2), ..., T(βm)}.
Now we show that B_{R(T)} is a basis for R(T).

(1) B_{R(T)} is L.I.:

Let a1T(β1) + a2T(β2) + ... + amT(βm) = 0 → T(a1β1 + a2β2 + ... + amβm) = 0 ∈ V (∵ T(a1x1 + a2x2 + ... + amxm) =
a1T(x1) + ... + amT(xm)).

⇒ a1β1 + a2β2 + ... + amβm ∈ N(T).

Hence it is a linear combination of the basis of N(T):

a1β1 + ... + amβm = c1α1 + ... + cnαn

⇒ a1β1 + ... + amβm − (c1α1 + ... + cnαn) = 0

⇒ ai = 0 ∀ i and cj = 0 ∀ j (∵ the αj, βi together form the L.I. set B_U)

⇒ B_{R(T)} is a linearly independent set.

(2) Let β ∈ R(T) ⇒ β = T(α) for some α ∈ U

⇒ β = T(c1α1 + ... + cnαn + a1β1 + ... + amβm) for ai, cj ∈ F

⇒ β = T(a1β1 + ... + amβm) (∵ T(c1α1 + ... + cnαn) = 0, the αj being in N(T)) = a1T(β1) + ... + amT(βm) ∈ L(B_{R(T)})

⇒ B_{R(T)} is a basis for R(T).

Clearly dim N(T) = n,
dim R(T) = m, and dim U = m + n,

⇒ dim U = dim N(T) + dim R(T)
⇒ dim U = γ(T) + ρ(T), where γ(T), ρ(T) are the Nullity and Rank of T respectively.
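
For a matrix transformation T(x) = Ax the theorem reads rank(A) + nullity(A) = number of columns; a quick check assuming SymPy:

    import sympy as sp

    A = sp.Matrix([[1, 0, 2, -1],
                   [2, 1, 0,  3],
                   [3, 1, 2,  2]])          # T : R^4 -> R^3, T(x) = A x
    rank, nullity = A.rank(), len(A.nullspace())
    print(rank, nullity, rank + nullity)    # 2 2 4 = dim U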

Note: ρ(T) ≤ dim V, hence ρ(T) ≤ min(dim U, dim V).

Example: If T : Mn(R) → Mn(R) is s.t. T(A) = A if A is symmetric and T(A) = 0 if A is skew-symmetric
(extended linearly to all of Mn(R) via A = (A+Aᵀ)/2 + (A−Aᵀ)/2), find ρ(T) and γ(T).

Solution: N(T) = {A ∈ Mn(R) s.t. T(A) = 0} ⇒ A is a skew-symmetric matrix.

For skew-symmetric matrices the diagonal elements are all zero, and of the remaining entries only the n(n−1)/2
above the diagonal are independent.

Hence γ(T) = n(n−1)/2,

and dim Mn(R) = n²,

⇒ ρ(T) = n² − n(n−1)/2 = n(n+1)/2.

Example: If T : Pn(C) → C is given by T(p(x)) = p′(1), find γ(T) and ρ(T), regarding Pn(C) as a vector space over R.

Solution: N(T) = {p(x) ∈ Pn(C) : T(p(x)) = 0},

i.e. T(p(x)) = p′(1) = 0.
Let p(x) = (a0 + b0i) + (a1 + b1i)x + ... + (an + bni)xⁿ
⇒ p′(x) = (a1 + b1i) + 2(a2 + b2i)x + ... + n(an + bni)xⁿ⁻¹ ⇒ p′(1) = (a1 + b1i) + 2(a2 + b2i) + ... + n(an + bni)
= (a1 + 2a2 + 3a3 + ... + nan) + i(b1 + 2b2 + 3b3 + ... + nbn) = 0,
which imposes 2 independent real linear conditions on the 2(n+1) real variables.
Hence Nullity(T) = γ(T) = 2(n+1) − 2 = 2n.
Also dim Pn(C) = 2(n+1) = 2n + 2 over R,
⇒ ρ(T) = dim Pn(C) − γ(T) = 2(n+1) − 2n = 2.
3.11 Non-singular and Singular Transformations
Definition: If U(F) and V(F) are vector spaces, then a linear transformation

T : U(F) → V(F) is said to be a non-singular transformation if γ(T) = 0, and

if γ(T) ≥ 1, then it is called a singular transformation.

Example: Show that a linear transformation T : U → V over F is one-one iff

Nullity(T) = 0.

Proof: Let Nullity(T) = 0,

i.e. T(α) = 0 ⇒ α = 0, α ∈ U(F) ......(1)
Now let T(α1) = T(α2)

⇒ T(α1) − T(α2) = 0

⇒ T(α1 − α2) = 0 ⇒ α1 − α2 = 0 [by (1)]

⇒ α1 = α2.

Hence T is one-one.

Conversely, let T be one-one, i.e.

T(α1) = T(α2)
⇒ α1 = α2.

Let α ∈ N(T),

i.e. α ∈ U such that T(α) = 0;

also T(0) = 0

⇒ T(α) = T(0)

⇒ α = 0 (T is one-one).

Hence N(T) = {0}, i.e. Nullity(T) = 0.

Note: A necessary condition for T : U(F) → V(F) to be one-one is that dim U ≤ dim V.

Solution: If T is one-one ⇒ γ(T) = 0

⇒ ρ(T) = dim U (by the rank-nullity theorem);

also ρ(T) ≤ dim V

⇒ dim U ≤ dim V.

Theorem: A linear transformation T : U(F) → V(F) is onto iff Rank(T) = dim V.

Proof: If T is onto, then for every element β of V(F) ∃ α ∈ U such that T(α) = β.

Hence R(T) contains every element of V(F), i.e. R(T) = V, hence Rank(T) = dim V.

Conversely, if Rank(T) = dim V, then R(T) is a subspace of V of the same dimension as V, so R(T) = V.

Hence every element of V is the image of some element of U under the mapping T,

hence T is onto.

Theorem: A necessary condition for T : U(F) → V(F) to be onto is that dim
V ≤ dim U.

Proof: If T : U(F) → V(F) is onto and also one-one,

then T is bijective ⇒ dim V = dim U ......(i)

If T : U(F) → V(F) is onto but many-one,

then ∃ at least one pair having the same image,

so γ(T) ≥ 1 while ρ(T) = dim V (T being onto),

hence dim U = ρ(T) + γ(T) > dim V,

i.e. dim V < dim U ......(ii)

From (i) and (ii),

dim V ≤ dim U.

Note: A linear transformation T : U(F) → V(F) is said to be an invertible linear

transformation if it is both one-one and onto.

Theorem: A necessary condition for a linear transformation T : U(F) → V(F) to be one-one and
onto is that dim U = dim V.

Proof: T : U(F) → V(F) is one-one,

hence γ(T) = 0

⇒ ρ(T) = dim U ......(i)
Also ρ(T) ≤ dim V.

Now T : U(F) → V(F) is onto also,

hence every element of V(F) has a pre-image in U(F),

i.e. ∀ β ∈ V(F)

∃ α ∈ U(F) such that T(α) = β

(since T is one-one, α is unique)

⇒ Range(T) = V(F)

⇒ ρ(T) = dim V ......(ii)

From (i) and (ii),

dim U = dim V.

Example: If T : P2(R) → R³(R) is given by

T(p(x)) = ( p(0), p(0) + 2p(1), p(0) + p(1) − p(2) ), then T is

1. One-One  2. Onto  3. Invertible  4. None.

Solution: Let p(x) = a0 + a1x + a2x²

⇒ p(0) = a0, p(1) = a0 + a1 + a2,

p(2) = a0 + 2a1 + 4a2

⇒ T(p(x)) = ( a0, 3a0 + 2a1 + 2a2, a0 − a1 − 3a2 ).

Now dim P2(R) = 3,
dim R³(R) = 3,

hence if T is one-one → T is onto

→ T is invertible; otherwise 4) None.

N(T) = { p(x) ∈ P2(R) such that

T(p(x)) = (0,0,0) }

⇒ a0 = 0,

3a0 + 2a1 + 2a2 = 0,

a0 − a1 − 3a2 = 0.

[1 0 0; 3 2 2; 1 −1 −3] ∼ [1 0 0; 0 2 2; 0 −1 −3] ∼ [1 0 0; 0 1 1; 0 0 −2]

⇒ a0 = 0, a1 = 0 and a2 = 0

⇒ T(p(x)) = (0,0,0) only for p = 0; hence γ(T) = 0 and ρ(T) = dim P2(R) = 3 = dim R³(R).

Hence T is one-one, onto and hence invertible.

Now T(a0 + a1x + a2x²) = (a0, 3a0 + 2a1 + 2a2, a0 − a1 − 3a2).

Let a0 = a,
3a0 + 2a1 + 2a2 = b,
a0 − a1 − 3a2 = c

⇒ 2a1 + 2a2 = b − 3a,

−a1 − 3a2 = c − a,
or a1 + 3a2 = a − c

⇒ 4a2 = 2(a1 + 3a2) − 2(a1 + a2) = 2(a − c) − (b − 3a) = 5a − b − 2c
⇒ a2 = (5a − b − 2c)/4,

and a1 = (−11a + 3b + 2c)/4

⇒ T⁻¹(a, b, c) = a + ((−11a + 3b + 2c)/4) x + ((5a − b − 2c)/4) x².
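
A matrix check of the inverse just computed (a sketch assuming NumPy; coordinates of p are (a0, a1, a2)):

    import numpy as np

    M = np.array([[1, 0, 0],     # T in coordinates: (a0, a1, a2) -> (a, b, c)
                  [3, 2, 2],
                  [1, -1, -3]])
    Minv = np.linalg.inv(M)
    print(Minv * 4)   # rows give 4*a0, 4*a1, 4*a2 in terms of (a, b, c):
                      # [[ 4.  0.  0.], [-11.  3.  2.], [ 5. -1. -2.]]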

Isomorphic vector spaces

Vector spaces U(F) and V(F) are said to be isomorphic to each other if ∃ a homomorphism (L.T.) from
one vector space to the other which is one-one and onto.

Note: A necessary and sufficient condition for finite dimensional vector spaces U(F) and V(F) to be isomorphic
to each other is that dim U(F) = dim V(F).

NOTE: If T1 and T2 are linear transformations from U(F) to V(F), then

(i) T1 + T2 is again a linear transformation from U(F) to V(F), defined as
(T1 + T2)(α) = T1(α) + T2(α);
(ii) cT is also a linear transformation, defined as
(cT)(α) = c(T(α)).
In general, using (i) and (ii),
if T1, T2, ..., Tn are linear transformations,
then C1T1 + C2T2 + ... + CnTn is also a linear transformation, Ci ∈ F, and
(C1T1 + C2T2 + ... + CnTn)(α) = C1T1(α) + C2T2(α) + ... + CnTn(α).

Example: Find the Rank and Nullity of the transformation

T : P2(R) → P4(R) given by
T(p(x)) = 2p′(x) + 3∫₀ˣ p(t) dt + 7p(x²).

Solution: Let p(x) ∈ P2(R),

⇒ p(x) = a0 + a1x + a2x².

Now T(p(x)) = T(a0 + a1x + a2x²)

= 2(a1 + 2a2x) + 3[a0x + a1x²/2 + a2x³/3] + 7[a0 + a1x² + a2x⁴]

= (7a0 + 2a1) + (3a0 + 4a2)x + (17/2)a1x² + a2x³ + 7a2x⁴.   (1)

N(T) = { p(x) ∈ P2(R) such that T(p(x)) = 0 },

i.e. eq. (1) = 0
⇒ 7a0 + 2a1 = 0,
3a0 + 4a2 = 0,
(17/2)a1 = 0, a2 = 0, 7a2 = 0
⇒ a1 = 0,
a2 = 0,
a0 = 0.
N(T) = { p(x) | a0 = a1 = a2 = 0 }
= { 0 + 0x + 0x² }
= {0}.
Hence N(T) = {0} and
B_{N(T)} = {}
⇒ γ(T) = 0.
Now dim P2(R) = Rank T + γ(T)
⇒ 3 = Rank T + 0
⇒ Rank T = 3,
or ρ(T) = 3.
COMPOSITION:

Let f : A → B and g : B → C; then the composition of these mappings is defined by gof : A → C.
Note: (i) If gof is one-one, then f must be one-one, while g may or may not be one-one.
(ii) If gof is onto, then g must be onto, and f may or may not be onto.
(iii) Thus a necessary condition for gof to be one-one is that f is one-one, and a necessary condition for
gof to be onto is that g is onto.
Corollary: gof is bijective ⇒ f is injective and g is surjective.

PRODUCT OF LINEAR TRANSFORMATIONS

If T1 : V(F) → W(F) and T2 : U(F) → V(F), then the product T1T2 : U(F) → W(F) is defined by T1T2(α) = T1(T2(α)),

∀ α ∈ U.

Example: T1, T2 : R³(R) → R³(R) are given by T1(x,y,z) = (y,x,0) and T2(x,y,z) = (0,0,x). Find T1T2 and

T2T1.
Solution: T1T2(x,y,z) = T1(T2(x,y,z)) = T1(0,0,x) = (0,0,0);
T2T1(x,y,z) = T2(T1(x,y,z)) = T2(y,x,0) = (0,0,y).

Note: Consider T1 : R⁴(R) → R⁵(R) and T2 : R⁵(R) → R⁴(R), so that T2T1 : R⁴(R) → R⁴(R).
Now T2T1 is invertible only if
T1 is one-one and T2 is onto.
Since T1 : R⁴(R) → R⁵(R)
and dim R⁴(R) < dim R⁵(R),
T1 may be one-one,
and T2 : R⁵(R) → R⁴(R)
⇒ T2 may be onto.
Hence T2T1 may be invertible;
also T2T1 may not be invertible.

POLYNOMIALS OF A LINEAR OPERATOR

If V(F) is a vector space and T : V(F) → V(F) is a linear operator on V(F), then
I : V(F) → V(F) ; I(α) = α ∀ α ∈ V,
C0I : V(F) → V(F),
C1T : V(F) → V(F),
C2T² : V(F) → V(F),
...
CnTⁿ : V(F) → V(F),
and
(C0I + C1T + ... + CnTⁿ) is called a polynomial of the linear operator and is also a linear operator.

Example: If T : R³(R) → R³(R) is the linear transformation such that

T(x,y,z) = (x+y+z, x−y+z, x+z), then (2T² + 5T + 3I)(x,y,z) = ?
Solution: T²(x,y,z) = T(T(x,y,z)) = T(x+y+z, x−y+z, x+z)
= (3x+3z, x+2y+z, 2x+y+2z)
⇒ 2T²(x,y,z) = (6x+6z, 2x+4y+2z, 4x+2y+4z);

5T(x,y,z) = (5x+5y+5z, 5x−5y+5z, 5x+5z);

3I(x,y,z) = (3x, 3y, 3z).
(2T² + 5T + 3I)(x,y,z) = (14x+5y+11z, 7x+2y+7z, 9x+2y+12z).
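
In matrix form the same computation is one line of arithmetic; a sketch assuming NumPy (A is the standard matrix of T):

    import numpy as np

    A = np.array([[1, 1, 1],
                  [1, -1, 1],
                  [1, 0, 1]])
    P = 2 * (A @ A) + 5 * A + 3 * np.eye(3)
    print(P)   # [[14. 5. 11.], [ 7. 2.  7.], [ 9. 2. 12.]]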

Chapter 4

4 Linear Operator and Linear Functional

4.1 Practical Operators
Consider a vector at a point P with x-coordinate x and y-coordinate y, making an angle
α with the positive x-axis and having modulus r.

Let the new coordinates after rotation by an angle θ be (x′, y′).

Now the point (x, y) in polar coordinates will be given as (x, y) = (r cos α, r sin α).
After rotation, r (the modulus) of the vector remains the same and there is a change in the inclination angle, i.e.
from α → θ + α;
hence (x′, y′) = (r cos(θ + α), r sin(θ + α)).

Hence the linear operator on R²(R) which rotates each vector of R²(R) by an angle θ in the anticlockwise direction is given by
T(x, y) = (x′, y′) = (r cos(θ + α), r sin(θ + α))
= (x cos θ − y sin θ, y cos θ + x sin θ);
if rotated clockwise by θ, then T(x, y) = (r cos(α − θ), r sin(α − θ))
= (x cos θ + y sin θ, y cos θ − x sin θ).
Example: Find the linear operator on R²(R) which rotates every vector by an angle of 60° in the clockwise direction.

Solution: T(x, y) = (x cos 60° + y sin 60°, y cos 60° − x sin 60°)
= ( x/2 + (√3/2)y , y/2 − (√3/2)x ).

Example: Find the linear operator on R²(R) which scales every vector of R² by factors a and b in
the directions of the x-axis and y-axis respectively, and the angle θ of the rotation having the same effect on a given vector (x, y).

Solution: By the question, T(x, y) = (ax, by);

also, for a rotation,
T(x, y) = (x cos θ − y sin θ, x sin θ + y cos θ)
=⇒ x cos θ − y sin θ = ax ——— (1)
and x sin θ + y cos θ = by ——— (2)
Multiplying (1) by y and (2) by x and subtracting, we have
x² sin θ + y² sin θ = bxy − axy

=⇒ (x² + y²) sin θ = (b − a)xy
=⇒ sin θ = (b − a)xy / (x² + y²)

=⇒ θ = sin⁻¹[ (b − a)xy / (x² + y²) ];
similarly cos θ can be found from the same pair of equations.

Example: For T : R²(R) → R²(R), i.e. on 2-D space, the possibilities are:

Rank T = 2, Nullity T = 0 : R(T) = the 2-D space, N(T) = {(0,0)}.
Rank T = 1, Nullity T = 1 : R(T) = a straight line through the origin, N(T) = a straight line through the origin {(x,y) | ax+by = 0}.
Rank T = 0, Nullity T = 2 : R(T) = the point (origin), N(T) = the 2-D space.

Solution:
(i) If Nullity T = 0, then ax+by = 0 only for (x,y) = (0,0), i.e. T(x,y) = (0,0) ⇒ (x,y) = (0,0);
Rank T = 2 and R(T) is the 2-D space.
(ii) If Nullity T = 1, then N(T) = {(x,y) | ax + by = 0} for some (a,b) ≠ (0,0).
If a ≠ 0 then a⁻¹ exists,
=⇒ ax = −by
=⇒ x = −by/a,
a straight line through the origin;

if b ≠ 0 then y = (−a/b)x, a straight line through the origin.

(iii) If Nullity T = 2, then T(x,y) = (0,0) for all (x,y),

i.e. every (x,y) satisfies the defining condition of N(T), which is the whole 2-D space,
and R(T) = {(0,0)}.

4.2 Linear operators in 3-D space

For T : R³(R) → R³(R):

Rank T = 3, Nullity T = 0 : R(T) = the 3-D space, N(T) = {(0,0,0)}.
Rank T = 2, Nullity T = 1 : R(T) = a plane through the origin, N(T) = a line through the origin.
Rank T = 1, Nullity T = 2 : R(T) = a line through the origin, N(T) = a plane through the origin.
Rank T = 0, Nullity T = 3 : R(T) = {(0,0,0)}, N(T) = the 3-D space.

Example: If T : R³(R) → R³(R) is T(X) = AX, where A = [1 2 3; 4 5 6; 7 11 15], then:
(1) R(T) and N(T) both are lines — ×

(2) R(T) and N(T) are planes — ×
(3) R(T) is a line and N(T) is a plane — ×
(4) R(T) is a plane and N(T) is a line — ✓

Solution: [1 2 3; 4 5 6; 7 11 15] ∼ [1 2 3; 4 5 6; 0 0 0], since (7, 11, 15) = 3(1, 2, 3) + (4, 5, 6); the rank of the matrix is 2.
Hence, with AX = A(x, y, z)ᵀ,
T(X) = (x+2y+3z, 4x+5y+6z, 7x+11y+15z)ᵀ,
and Rank T = 2 =⇒ R(T) is a plane through the origin.
N(T) = { X s.t. T(X) = 0 }

=⇒ x + 2y + 3z = 0 ——-(1)

4x + 5y + 6z = 0 ——–(2)

(the third equation is a combination of these two).

Note: Using (1) and (2), two of the variables are determined by the third, so N(T) has only one linearly
independent vector
=⇒ Nullity T = 1,
hence N(T) is a straight line through the origin.
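
The rank and the null space can be read off directly; a sketch assuming SymPy:

    import sympy as sp

    A = sp.Matrix([[1, 2, 3], [4, 5, 6], [7, 11, 15]])
    print(A.rank())        # 2  -> R(T) is a plane
    print(A.nullspace())   # one basis vector, e.g. (1, -2, 1)^T -> N(T) is a line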

4.3 Invariance
If V(F) is a vector space and T : V(F) → V(F) is a linear operator, then a subspace W of V is said
to be invariant under T if
T(W) ⊆ W, i.e. α ∈ W
=⇒ T(α) ∈ W. (For example, the zero space and the vector space V itself are invariant under any linear operator.)

Example: If T : P3(R) → P3(R) is the linear operator given by T(p(x)) = p(x+1), then which of
the following subspaces of P3(R) are invariant under T?

(i) L{1, x, x²+1} ✓  (ii) {p(x) | p(0) = 0} ×

(iii) L{x, x³} ×  (iv) {p(x) | p″(x) = 0} ✓

Solution: (i) The span L of a set of vectors is a subspace,

=⇒ L{1, x, x²+1} is a subspace of P3(R).
Now let α = a + bx + c(x²+1) ∈ L
=⇒ T(a + bx + c(x²+1)) = a + b(x+1) + c((x+1)²+1)
= a + bx + b + c(x² + 2x + 2)
= (a+b+2c) + (b+2c)x + cx²
= (a+b+c) + (b+2c)x + c(x²+1)
∈ L{1, x, x²+1}

=⇒ α ∈ L =⇒ T(α) ∈ L
=⇒ L{1, x, x²+1} is invariant under T.

(ii) Let α ∈ {p(x) | p(0) = 0}, say

α = bx + cx² + dx³ (constant term zero)
=⇒ T(α) = b(x+1) + c(x+1)² + d(x+1)³,
and at x = 0,
T(α)(0) = b + c + d,
which need not be zero: e.g. α = x ∈ W but T(α) = x+1, with T(α)(0) = 1 ≠ 0,
hence T(α) ∉ {p(x) | p(0) = 0}
∴ W is not invariant under T.

(iii) T(x) = x + 1,

T(x³) = (x+1)³.
Now x + 1 ∈ T(W) but x + 1 ∉ L{x, x³} (its constant term is non-zero),
∴ T(W) ⊄ W:
not invariant.

(iv) Let p(x) = a + bx + cx² + dx³ with p″(x) = 0

=⇒ p′(x) = b + 2cx + 3dx²,
p″(x) = 2c + 6dx,
p″(x) = 0 =⇒ 2c = 0 and 6d = 0, i.e. p(x) = a + bx.
Now T(p(x)) = a + b(x+1) = (a+b) + bx,
and (T(p))″(x) = 0
=⇒ T(p(x)) ∈ {p(x) | p″(x) = 0};
hence p(x) ∈ W =⇒ T(p(x)) ∈ W,
hence W is invariant.

Reducibility:

If V(F) is a vector space and T : V(F) → V(F) is a linear operator, then V is said to be reduced

by the subspaces W1 and W2 under T if

1) W1 and W2 are invariant under T;

2) V = W1 ⊕ W2 (i.e. the direct sum: W1 + W2 = V and W1 ∩ W2 = {0}).

Note: 1) V is reduced by the pair (V, {0}) under any linear operator T, since V and {0} are invariant

under T,

2) and V ⊕ {0} = V (V ∩ {0} = the zero space and V + {0} = V).

Example: Which of the following pairs of subspaces of M2 (R) reduces to under linear operation

68
 
a b
T =
 c d 
a+b a-b
c+d 3c+3d
 
a b
1) W1 ={ | a, b ∈ R }
0 0
 
0 0
W2 = { | c, d ∈ R }
c d
 
a b
2) W1 = { |a+b+c+d=0}
c d
 
a b a+b+c=0
W2 = { | }
c d b+cd=0
 
a b a+b+c=0
3) W1 = { | }
c d a+b+c=0
 
a b a+b=0
W2 = { | }
c d c+d=0
Solution:
 
0 0
1) W1 ∩ W2 = = zero space
0 0
 
a b
W1 + W2 = | a, b , c, d ∈ R = M2 (R)
c d
here M2 (R) = W1 ⊕ W2
   
a+b a-b a b
T(α) = ∈ W1 ∀ α ∈ W1 { α = }
0 0 0 0
   
0 0 0 0
T (β) = ∈ W2 ∀ β ∈ W2 { β = }
c + d 3c + 3d c d
Hence from (1) and (2) W1 and W2 reduces M2 (R).
  a+b+c+d=0
a b
2) W1 ∩ W2 = | a+b+c=0
c d
b+cd=0
⇒ a = 0 , d = 0 and b = −c
   
0 b 0 -c
⇒ W1 ∩ W2 = or 6 [0] if c, b 6= 0
=
-b 0 c 0
Hence M2 (R) 6= W1 ⊕ W2 .

3) W1 ∩ W2 = { \begin{pmatrix} a & b \\ c & d \end{pmatrix} | a+b+c+d = 0, a-b+c-d = 0, a+b = 0, c+d = 0 }
Adding and subtracting the first two equations gives a + c = 0 and b + d = 0; combined with a + b = 0 and c + d = 0 this forces
a = -b = -c = d,
so
W1 ∩ W2 = { \begin{pmatrix} a & -a \\ -a & a \end{pmatrix} | a ∈ R } ≠ {0} for a ≠ 0.
Hence M2(R) ≠ W1 ⊕ W2, and this pair does not reduce M2(R) either.
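
For pair (1), which does reduce M2(R), both conditions can also be checked symbolically; a short SymPy sketch (illustrative only):

    import sympy as sp

    a, b, c, d = sp.symbols('a b c d')

    def T(M):
        # the operator from the example
        return sp.Matrix([[M[0, 0] + M[0, 1], M[0, 0] - M[0, 1]],
                          [M[1, 0] + M[1, 1], 3*M[1, 0] + 3*M[1, 1]]])

    W1 = sp.Matrix([[a, b], [0, 0]])   # generic element of W1
    W2 = sp.Matrix([[0, 0], [c, d]])   # generic element of W2
    print(T(W1))   # bottom row stays zero: T(W1) is contained in W1
    print(T(W2))   # top row stays zero:    T(W2) is contained in W2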

Projection:

If V(F) is a vector space and W1 and W2 are subspaces of V such that

V = W1 ⊕ W2,

then every vector α ∈ V can be uniquely expressed as α = β + γ, where β ∈ W1 and γ ∈ W2. The linear operator

T : V(F) → V(F) given by T(α) = β ∀ α ∈ V is called the projection of V upon W1 along W2.

Example: If V(F) is a vector space, then a linear operator T on V(F) is a projection iff T² = T.

Solution: Let V = W1 ⊕ W2 and let T be the projection upon W1 along W2.
∴ α ∈ V ⇒ T(α) = β ——(1)
where α = β + γ, β ∈ W1, γ ∈ W2 ——(2)
Now T(α) = β
⇒ T(T(α)) = T(β) = T(β + 0), 0 ∈ W2 (because of (2))
⇒ T²(α) = β = T(α) ⇒ T² = T.
Conversely, let T² = T.
If β ∈ R(T) ∩ N(T), then β = T(γ) for some γ ∈ V and T(β) = 0,
so β = T(γ) = T²(γ) = T(T(γ)) = T(β) = 0
⇒ R(T) ∩ N(T) = {0}.
Also, every α ∈ V can be written as α = T(α) + (α − T(α)), where T(α) ∈ R(T) and α − T(α) ∈ N(T), since T(α − T(α)) = T(α) − T²(α) = 0
⇒ V = R(T) ⊕ N(T).
Then T(α) = T(β + γ) = β with β = T(α) ∈ R(T) and γ ∈ N(T),
i.e. T is the projection of V upon R(T) along N(T).
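
A concrete numerical instance (chosen here for illustration, not taken from the text): the matrix P below projects R² onto the x-axis along the line spanned by (1, 1).

    import numpy as np

    P = np.array([[1, -1],
                  [0,  0]])              # projection onto x-axis along span{(1,1)}
    print(np.array_equal(P @ P, P))      # True: P^2 = P

    v = np.array([3, 1])
    beta = P @ v                         # component in R(P): [2, 0]
    gamma = v - beta                     # component in N(P): [1, 1]
    print(beta, gamma, P @ gamma)        # P kills gamma, as expected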

Example: If T1 and T2 are projections on V(F), then T1 + T2 is again a projection on V(F) iff T1T2 = 0 = T2T1.

Solution: Let T1 + T2 be a projection on V(F)
⇒ (T1 + T2)² = T1 + T2
⇒ T1² + T1T2 + T2T1 + T2² = T1 + T2
⇒ T1 + T1T2 + T2T1 + T2 = T1 + T2
⇒ T1T2 + T2T1 = 0 ——(1)
Left-multiplying (1) by T1: T1²T2 + T1T2T1 = 0
⇒ T1T2 + T1T2T1 = 0 ——(2)
Right-multiplying (1) by T1: T1T2T1 + T2T1² = 0
⇒ T1T2T1 + T2T1 = 0 ——(3)
From (2) and (3): T1T2 = T2T1,
and with (1): 2T1T2 = 0 ⇒ T1T2 = 0 = T2T1.
Conversely, if T1T2 = 0 = T2T1, then (T1 + T2)² = T1² + T2² = T1 + T2, so T1 + T2 is a projection.

Example: If T is a projection on V(F), show that I − T is again a projection on V(F).

Solution: (I − T)² = I − 2T + T² = I − 2T + T = I − T.

Hence I − T is a projection on V(F).

Example: If T1, T2 are projections on V(F), then T1T2 is again a projection on V(F) if T1 and T2 commute with each other.

Solution: Let T1T2 = T2T1. Then
(T1T2)² = T1T2T1T2
= T1(T2T1)T2
= T1(T1T2)T2
= T1²T2²
= T1T2.
Hence T1T2 is a projection on V(F) whenever T1 and T2 commute.
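
As an illustration, two commuting projections and their product (a hypothetical pair of coordinate projections, not from the text):

    import numpy as np

    T1 = np.diag([1, 1, 0])    # projection onto span{e1, e2}
    T2 = np.diag([0, 1, 1])    # projection onto span{e2, e3}
    print(np.array_equal(T1 @ T2, T2 @ T1))     # True: T1 and T2 commute
    prod = T1 @ T2                              # projects onto span{e2}
    print(np.array_equal(prod @ prod, prod))    # True: T1*T2 is a projection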

4.4 Linear Transformation as Vector Space

If U(F) and V(F) are n-dimensional and m-dimensional vector spaces respectively, then the set of all linear transformations from U(F) to V(F),

L(U,V) = {T | T : U(F) → V(F) is a linear transformation},

is itself a vector space.

Note: dim(L(U,V)) = dim U · dim V = n × m

Proof:

Let B_U = {α1, α2, ..., αn} and B_V = {β1, β2, ..., βm}.
For 1 ≤ i ≤ n and 1 ≤ j ≤ m define Tij : U(F) → V(F) on the basis by
Tij(αk) = δik βj,
i.e. Tij(αi) = βj and Tij(αk) = 0 for k ≠ i.
(For instance, T11(α1) = β1 but T11(α2) = 0; T12(α1) = β2 but T21(α1) = 0; and so on.)

Let B = {Tij | 1 ≤ i ≤ n, 1 ≤ j ≤ m}.

For L.I.:
let Σ_{i=1}^{n} Σ_{j=1}^{m} cij Tij = 0,
i.e. (Σ_{i=1}^{n} Σ_{j=1}^{m} cij Tij)(a1α1 + a2α2 + ... + anαn) = 0 ∀ ai ∈ F

⇒ a1(c11β1 + c12β2 + ... + c1mβm) + a2(c21β1 + c22β2 + ... + c2mβm) +
... + an(cn1β1 + cn2β2 + ... + cnmβm) = 0

⇒ the coefficient of each ai is 0

∴ c11β1 + c12β2 + ... + c1mβm = 0, ..., cn1β1 + cn2β2 + ... + cnmβm = 0

and since the βj are linearly independent,
∴ cij = 0 ∀ i, j

∴ B is L.I.

To prove that B spans L(U,V), let T : U(F) → V(F) be any linear transformation, and write T(αi) = ci1β1 + ci2β2 + ... + cimβm. Then

T(a1α1 + a2α2 + ... + anαn)

= a1T(α1) + a2T(α2) + ... + anT(αn)

= a1(c11β1 + c12β2 + ... + c1mβm) + a2(c21β1 + c22β2 + ... + c2mβm) + ...
... + an(cn1β1 + cn2β2 + ... + cnmβm)

= (Σ_{i=1}^{n} Σ_{j=1}^{m} cij Tij)(a1α1 + ... + anαn),

i.e. T = Σ_{i=1}^{n} Σ_{j=1}^{m} cij Tij ∈ L(B)

⇒ B spans L(U,V) ....(2)

⇒ B is a basis of L(U,V)

∴ dim L(U,V) = nm = dim U · dim V
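
The basis {Tij} can be made concrete by identifying each Tij with an m × n matrix carrying a single 1. A small NumPy sketch for dim U = 2, dim V = 3 (dimensions chosen arbitrarily for illustration):

    import numpy as np

    n, m = 2, 3                        # dim U = 2, dim V = 3
    basis = []
    for i in range(n):
        for j in range(m):
            Tij = np.zeros((m, n))     # matrix of the map sending alpha_i to beta_j
            Tij[j, i] = 1
            basis.append(Tij)
    print(len(basis))                  # 6 = dim U * dim V

    # the flattened Tij are linearly independent:
    M = np.array([T.flatten() for T in basis])
    print(np.linalg.matrix_rank(M))    # 6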

Example: dim L(M5(R), P20(R)) = ?

Solution: dim M5(R) = 5 × 5 = 25 and dim P20(R) = 20 + 1 = 21

∴ dim L(M5(R), P20(R)) = 25 × 21 = 525

4.5 Linear Functional

If V(F) is a vector space, then we know that F(F) is also a vector space, so we can take linear transformations from V(F) to F(F); such a transformation is called a linear functional on V.
For example, T : Pn(F) → F(F) given by
T(Σ_{i=0}^{n} ai x^i) = Σ_{i=0}^{n} ci ai
is a linear functional. Concretely, T : P3(R) → R(R) given by

T(a0 + a1x + a2x² + a3x³) = 4a0 + 2a1 − 7a2 + 6a3

is a linear functional on P3(R).

4.6 Dual Space (Conjugate Space of V(F))

The set of all linear functionals on a vector space V(F) is known as its dual space. It is denoted by V′ or V*:
V′ = {f | f : V(F) → F(F) is a linear transformation}

Note: If dim V(F) = n, then dim V′ = n.

Proof: V′ = L(V(F), F(F)), so
dim V′ = dim(L(V(F), F(F)))
= dim V(F) · dim F(F)
= dim V(F) · 1 = dim V [∵ B_{F(F)} = {1}]

Dual Basis

If V(F) is a vector space with ordered basis B_V = {α1, α2, ..., αn}, then its dual basis, i.e. the basis of its dual space V′, is B_{V′} = {f1, f2, ..., fn}, where

fi(αj) = δij (1 ≤ i, j ≤ n),

i.e. f1(α1) = 1 while f1(α2) = ... = f1(αn) = 0; f2(α2) = 1 while f2(αj) = 0 for j ≠ 2; ...; fn(αn) = 1 while fn(αj) = 0 for j ≠ n.

Example: If the ordered basis of R³(R) is {(1,1,1), (1,1,2), (1,2,4)}, then find the coordinates of the linear functional on R³ given by
L(x,y,z) = 7x + 11y − 9z.
Solution: Let (a, b, c) be the coordinates of L, so that L = af1 + bf2 + cf3. Applying L to each basis vector (and using fi(αj) = δij):
a = L(1,1,1) = 7(1) + 11(1) − 9(1) = 9
b = L(1,1,2) = 7(1) + 11(1) − 9(2) = 0
c = L(1,2,4) = 7(1) + 11(2) − 9(4) = −7
⇒ L = 9f1 − 7f3
∴ the coordinates of L are (9, 0, −7).
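
Since fi(αj) = δij, the coordinates are obtained simply by evaluating L at the basis vectors; a three-line check (illustrative):

    def L(v):
        x, y, z = v
        return 7*x + 11*y - 9*z

    basis = [(1, 1, 1), (1, 1, 2), (1, 2, 4)]
    print([L(v) for v in basis])   # [9, 0, -7]: the coordinates of L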

Second Dual Space:

If V(F) is a vector space, then V′, i.e. the set of all linear functionals on V, is again a vector space; the set of all linear functionals on V′, denoted by V″, is known as the second dual space of V.

4.7 Dual of Dual Space

For each α ∈ V define Lα ∈ V″ by
Lα(f) = f(α), α ∈ V, f ∈ V′.
Each Lα is a linear transformation (a linear functional on V′).

Result:
Let B_V = {α1, α2, ..., αn},
B_{V′} = {f1, f2, ..., fn},
B_{V″} = {Lα1, Lα2, ..., Lαn}.
Now Lαi(fj) = fj(αi) = δij,
so Lα1(f1) = 1 with Lα1(fi) = 0 for i ≠ 1; similarly Lα2(f2) = 1 with Lα2(fi) = 0 for i ≠ 2; ...; Lαn(fn) = 1 with Lαn(fi) = 0 for i ≠ n.
Thus {Lα1, ..., Lαn} is precisely the dual basis of B_{V′}, and under the identification α ↔ Lα,
B_{V″} = {α1, α2, ..., αn} ⇒ B_V = B_{V″}.

4.8 Annihilator
If V(F) is a vector space, then for any subset S of V the annihilator of S, denoted by S⁰ or A(S), consists of those linear functionals in V′ which transform every vector of S to 0 ∈ F:
S⁰ = {f ∈ V′ | f(α) = 0 ∀ α ∈ S}.
For example, V⁰ = zero space of V′ (only the zero functional annihilates all of V), and
{0}⁰ = V′.

Result: If V(F) is a vector space and S ⊆ V, then S⁰ is a subspace of V′.

Proof: S⁰ is non-empty (∵ the zero functional ∈ S⁰).
Let f, g ∈ S⁰, i.e.
f(α) = 0, g(α) = 0 ∀ α ∈ S
⇒ (af + g)(α) = a·f(α) + g(α) = 0 ∀ α ∈ S
⇒ af + g ∈ S⁰
⇒ S⁰ is a subspace of V′.

Result: If W is a subspace of a finite-dimensional vector space V(F), then dim W + dim W⁰ = dim V.

Proof: Let B_W = {α1, α2, ..., αm}
and extend it to B_V = {α1, ..., αm, β1, ..., βn}, with dual basis B_{V′} = {f1, ..., fm, g1, ..., gn}.
Let B = {g1, g2, ..., gn}.
Since B ⊆ B_{V′},
⇒ B is L.I.

Now let f ∈ W⁰. As an element of V′,
f = a1f1 + ... + amfm + c1g1 + ... + cngn,

and f(α) = 0 ∀ α ∈ W,

so for α = b1α1 + ... + bmαm ∈ W,

f(α) = a1b1 + a2b2 + ... + ambm = 0 ∀ bi ∈ F [∵ fi(αk) = δik and gj(αk) = 0]

⇒ each ai = 0 (1 ≤ i ≤ m)

∴ f ∈ W⁰ ⇒ f = c1g1 + ... + cngn,

i.e. f ∈ L(B)
⇒ B is a basis of W⁰,
so dim W⁰ = n;
also dim W = m
and dim V = m + n
⇒ dim V = dim W + dim W⁰
       
EXAMPLE: If S = { \begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} 1 & 2 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} 3 & 5 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} 4 & 7 \\ 0 & 0 \end{pmatrix} } ⊆ M2(R), then dim(S⁰) = ?
Solution: S⁰ = (L(S))⁰
⇒ dim S⁰ = dim(L(S))⁰. Also dim W + dim W⁰ = dim V,
and L(S) is a subspace of V = M2(R),
⇒ dim L(S) + dim(L(S))⁰ = dim V
or dim(L(S))⁰ = dim V − dim L(S)
⇒ dim(S⁰) = dim V − dim L(S).
Now for dim L(S), let
a\begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix} + b\begin{pmatrix} 1 & 2 \\ 0 & 0 \end{pmatrix} + c\begin{pmatrix} 3 & 5 \\ 0 & 0 \end{pmatrix} + d\begin{pmatrix} 4 & 7 \\ 0 & 0 \end{pmatrix} = 0
⇒ \begin{pmatrix} a+b+3c+4d & a+2b+5c+7d \\ 0 & 0 \end{pmatrix} = 0
⇒ a + b + 3c + 4d = 0 and a + 2b + 5c + 7d = 0.
Row-reducing the coefficient matrix:
\begin{pmatrix} 1 & 1 & 3 & 4 \\ 1 & 2 & 5 & 7 \end{pmatrix} ∼ \begin{pmatrix} 1 & 1 & 3 & 4 \\ 0 & 1 & 2 & 3 \end{pmatrix} ∼ \begin{pmatrix} 1 & 0 & 1 & 1 \\ 0 & 1 & 2 & 3 \end{pmatrix}
⇒ a + c + d = 0
b + 2c + 3d = 0,
so c and d are free (two L.I. parameters) while a and b depend on them. The dependence relations among the four matrices therefore form a 2-dimensional space, and only 4 − 2 = 2 of the matrices are linearly independent

⇒ dim L(S) = 2

dim S⁰ = dim V − dim L(S) = 4 − 2 = 2
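
The same count follows from a rank computation, flattening each matrix of S into a row vector of R⁴; a SymPy sketch (illustrative):

    import sympy as sp

    # each matrix of S flattened to a row of R^4 (M2(R) has dimension 4)
    S = sp.Matrix([[1, 1, 0, 0],
                   [1, 2, 0, 0],
                   [3, 5, 0, 0],
                   [4, 7, 0, 0]])
    dim_LS = S.rank()              # dim L(S) = 2
    print(dim_LS, 4 - dim_LS)      # dim S0 = dim V - dim L(S) = 2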

Note: dim W + dim W⁰ = dim V and
dim W⁰ + dim W⁰⁰ = dim V′ = dim V

⇒ dim W = dim W⁰⁰;

in fact, under the canonical identification of V with V″,
W = W⁰⁰ = W⁰⁰⁰⁰ = W⁰⁰⁰⁰⁰⁰ = ...

Matrix Representation of Linear Transformation

If T : U(F) → V(F) with ordered bases B_U = {α1, α2, ..., αn} of U and

B_V = {β1, β2, ..., βm} of V,
then the transformation matrix of T with respect to B_U and B_V, written [T; B_U, B_V] = [T], has as its j-th column the coordinate matrix of T(αj) with respect to the ordered basis of V,
i.e. the 1st column holds the coordinates of T(α1),
the 2nd column holds the coordinates of T(α2),
...
the n-th column holds the coordinates of T(αn).
Thus if T(αj) = a1jβ1 + a2jβ2 + ... + amjβm, then [T] = (aij) is an m × n matrix, 1 ≤ i ≤ m, 1 ≤ j ≤ n.
Note: If an ordered basis is not given, we take the standard basis.

Example: If T : P3(R) → P3(R) is given by T(p(x)) = p(2x+11), then trace(T) = ?

Solution: Take {1, x, x², x³} as the standard basis of both copies of P3(R) (∵ it is the standard basis of polynomials). Then

[T] = \begin{pmatrix} 1 & 11 & 121 & 1331 \\ 0 & 2 & 44 & 726 \\ 0 & 0 & 4 & 132 \\ 0 & 0 & 0 & 8 \end{pmatrix}

∵ T(1) = 1 (column 1),
T(x) = 2x + 11 = 2·x + 11·1 (column 2),
T(x²) = (2x+11)² = 4x² + 44x + 121 (column 3),
T(x³) = (2x+11)³ = 8x³ + 132x² + 726x + 1331 (column 4).

For the trace we only need the diagonal elements, i.e. 1, 2, 4, 8

⇒ trace(T) = 1 + 2 + 4 + 8 = 15
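
The whole matrix (and hence the trace) can be generated symbolically; a SymPy sketch (illustrative):

    import sympy as sp

    x = sp.symbols('x')
    basis = [1, x, x**2, x**3]
    cols = []
    for p in basis:
        q = sp.expand(sp.sympify(p).subs(x, 2*x + 11))   # T(p(x)) = p(2x+11)
        cols.append([q.coeff(x, k) for k in range(4)])   # coordinates in the basis
    M = sp.Matrix(cols).T   # j-th column = coordinates of the image of the j-th basis vector
    print(M)                # the upper triangular matrix above
    print(M.trace())        # 15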

Example: If T : P3(R) → P3(R) is given by T(p(x)) = D(p(x)), i.e. differentiation, find its (triangular) matrix with respect to the ordered basis {1, x, x², x³}.

Solution:
[T] = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \\ 0 & 0 & 0 & 0 \end{pmatrix}
since
T(1) = D(1) = 0 = 0·1 + 0·x + 0·x² + 0·x³
T(x) = D(x) = 1 = 1·1 + 0·x + 0·x² + 0·x³
T(x²) = D(x²) = 2x = 0·1 + 2·x + 0·x² + 0·x³
T(x³) = D(x³) = 3x² = 0·1 + 0·x + 3·x² + 0·x³
Note: Since all eigenvalues are zero,
⇒ the given matrix is nilpotent, of index 4.
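
A quick check of this nilpotency claim (illustrative):

    import sympy as sp

    D = sp.Matrix([[0, 1, 0, 0],
                   [0, 0, 2, 0],
                   [0, 0, 0, 3],
                   [0, 0, 0, 0]])   # matrix of differentiation on P3(R)
    print(D.eigenvals())            # {0: 4}: every eigenvalue is zero
    print(D**3)                     # nonzero
    print(D**4)                     # zero matrix: nilpotent of index 4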

Example: If T : R³(R) → R²(R), find the linear transformation whose transformation matrix is
\begin{pmatrix} 13 & 18 & 12 \\ -3 & -8 & -5 \end{pmatrix}
with respect to the ordered basis {(1,1,1), (1,1,0), (1,0,0)} of R³ and the ordered basis {(1,1), (1,2)} of R².

Solution: Let T(x,y,z) be the required linear transformation.

Since the ordered basis of R³ is {(1,1,1), (1,1,0), (1,0,0)},
every element of R³(R) can be written as C1(1,1,1) + C2(1,1,0) + C3(1,0,0)
⇒ (x,y,z) = (C1 + C2 + C3, C1 + C2, C1)
⇒ C1 = z,
C1 + C2 = y ⇒ C2 = y − z,
C1 + C2 + C3 = x ⇒ C3 = x − y.
Hence (x,y,z) ∈ R³(R) ⇒ (x,y,z) = z(1,1,1) + (y−z)(1,1,0) + (x−y)(1,0,0)
⇒ T(x,y,z) = z·T(1,1,1) + (y−z)·T(1,1,0) + (x−y)·T(1,0,0) ——(1)
Also, from the transformation matrix (each column gives coordinates with respect to {(1,1), (1,2)}):
T(1,1,1) = 13(1,1) − 3(1,2) = (10,7)
T(1,1,0) = 18(1,1) − 8(1,2) = (10,2)
T(1,0,0) = 12(1,1) − 5(1,2) = (7,2) ——(2)
Using (2) in (1), we have
T(x,y,z) = z(10,7) + (y−z)(10,2) + (x−y)(7,2)
= (10z + 10y − 10z + 7x − 7y, 7z + 2y − 2z + 2x − 2y)
= (7x + 3y, 2x + 5z),
i.e. T : R³(R) → R²(R) such that
T(x,y,z) = (7x + 3y, 2x + 5z)
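
The recovered formula can be verified mechanically; a SymPy sketch (illustrative):

    import sympy as sp

    x, y, z = sp.symbols('x y z')
    Bv = [sp.Matrix([1, 1]), sp.Matrix([1, 2])]
    M = sp.Matrix([[13, 18, 12], [-3, -8, -5]])

    # images of the domain basis vectors: T(alpha_j) = sum_i M[i, j] * beta_i
    images = [M[0, j]*Bv[0] + M[1, j]*Bv[1] for j in range(3)]

    # coordinates of (x, y, z) in {(1,1,1), (1,1,0), (1,0,0)}: z, y - z, x - y
    v = z*images[0] + (y - z)*images[1] + (x - y)*images[2]
    print(sp.expand(v[0]), sp.expand(v[1]))   # 7*x + 3*y and 2*x + 5*z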

Example: If T : M2(R) → M2(R) is given by T(X) = AX, where A is a fixed matrix of order 2, then

(i) find the transformation matrix of T;
(ii) find k such that trace(T) = k·trace(A);
(iii) find l such that det(T) = (det A)^l.

Solution: Let A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}.
We know that the standard basis of M2(R) is
B_{M2(R)} = { E11 = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, E12 = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, E21 = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, E22 = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} }.
Since T(X) = AX:
T(E11) = \begin{pmatrix} a & 0 \\ c & 0 \end{pmatrix} = a·E11 + c·E21
T(E12) = \begin{pmatrix} 0 & a \\ 0 & c \end{pmatrix} = a·E12 + c·E22
T(E21) = \begin{pmatrix} b & 0 \\ d & 0 \end{pmatrix} = b·E11 + d·E21
T(E22) = \begin{pmatrix} 0 & b \\ 0 & d \end{pmatrix} = b·E12 + d·E22
Writing these coordinates as columns,
[T] = \begin{pmatrix} a & 0 & b & 0 \\ 0 & a & 0 & b \\ c & 0 & d & 0 \\ 0 & c & 0 & d \end{pmatrix}

⇒ trace(T) = a + a + d + d = 2a + 2d;
also trace(A) = a + d
⇒ trace(T) = 2(a + d) = 2·trace(A), so k = 2.
Now [T] can be written in block form as
[T] = \begin{pmatrix} aI2 & bI2 \\ cI2 & dI2 \end{pmatrix}
and since these blocks commute,
det(T) = det((ad − bc)I2) = (ad − bc)² = (det A)²,
so l = 2.
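
In NumPy the operator X ↦ AX is exactly the Kronecker product A ⊗ I2, which makes both identities easy to test numerically (the entries of A below are arbitrary sample values):

    import numpy as np

    a, b, c, d = 2.0, 3.0, 5.0, 7.0
    A = np.array([[a, b], [c, d]])
    T = np.kron(A, np.eye(2))     # matrix of X -> AX in basis E11, E12, E21, E22
    print(np.trace(T), 2*np.trace(A))                # both 18.0: k = 2
    print(np.linalg.det(T), np.linalg.det(A)**2)     # both equal (ad-bc)^2: l = 2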
Note: In finding the transformation matrix, sometimes the basis is given only indirectly, and we have to recognize it. If T is a linear operator on V(F), then T is nilpotent/idempotent/invertible/symmetric/orthogonal etc. according as its transformation matrix has the same property.
Note: Given the characteristic polynomial C(x) and the minimal polynomial m(x) of T (or of its transformation matrix), we can write down its Jordan canonical form.
Example: If V(F) is an n-dimensional vector space and B = {α1, α2, ..., αn} is a basis of V(F), and T1 is a linear operator on V(F) such that T1(αi) = kαi for some k ≠ 0, and T2 is a linear operator on V(F) which has n L.I. eigenvectors, then which of the following is/are correct?
(i) T1 is a diagonal operator.
(ii) T2 is a diagonal operator.
Solution: Since T1(αi) = kαi,
the matrix representation of T1 with respect to B is
\begin{pmatrix} k & 0 & \cdots & 0 \\ 0 & k & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & k \end{pmatrix}
which is a diagonal matrix; hence (i) is correct.
Now let {X1, X2, ..., Xn} be the n L.I. eigenvectors of T2; they can be taken as a basis of V(F),
and T2(Xi) = λiXi where each λi is a scalar,
so with respect to this basis the matrix of T2 is diag(λ1, λ2, ..., λn) — the same kind of (diagonal) representation as for T1.
Hence T2 is also a diagonal operator, i.e. (ii) is correct.
Note: Since {X1, X2, ..., Xn} are n L.I. vectors in the n-dimensional space V(F), every vector of V(F) can be expressed as a linear combination of the Xi; hence {X1, X2, ..., Xn} spans V(F) and is indeed a basis of V(F).

4.9 Quadratic Forms


Quadratic forms are the next simplest functions after linear ones. Like linear functions they have
a matrix representation, so that studying quadratic forms reduces to studying symmetric matrices.
This is what this section is about.

Consider the function F : R² → R, where F(x) = a11x1² + a12x1x2 + a22x2². We call this a quadratic form in R². Notice that this can be expressed in matrix form as

F(x) = \begin{pmatrix} x1 & x2 \end{pmatrix} \begin{pmatrix} a11 & a12/2 \\ a12/2 & a22 \end{pmatrix} \begin{pmatrix} x1 \\ x2 \end{pmatrix} = X^T A X

where X = (x1, x2)^T, and A is unique and symmetric.


The quadratic form in Rⁿ is

F(x) = Σ_{i,j=1}^{n} aij xi xj

where X = (x1, ..., xn)^T, and A is unique and symmetric. This can also be expressed in matrix form:

F(x) = \begin{pmatrix} x1 & x2 & \cdots & xn \end{pmatrix} \begin{pmatrix} a11 & a12/2 & \cdots & a1n/2 \\ a21/2 & a22 & \cdots & a2n/2 \\ \vdots & \vdots & \ddots & \vdots \\ an1/2 & an2/2 & \cdots & ann \end{pmatrix} \begin{pmatrix} x1 \\ x2 \\ \vdots \\ xn \end{pmatrix} = X^T A X

X^T A X is known as the matrix representation of the quadratic form;

X is the variable matrix and A is the coefficient matrix.

Note: (1) When diagonalizing A by congruence we start from A = I·A·I; each elementary row transformation performed on A is also performed on the pre-factor (the left identity) of the RHS, and

Note: (2) each elementary column transformation performed on A is also performed on the post-factor (the right identity) of the RHS.

Example: Find the matrix representation of the quadratic form Q(x,y,z) = 7x² + 9xy + 11y² + 12yz + 14xz − 9z².

Solution:
Q(x,y,z) = \begin{pmatrix} x & y & z \end{pmatrix} \begin{pmatrix} 7 & 9/2 & 7 \\ 9/2 & 11 & 6 \\ 7 & 6 & -9 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix}
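
The correspondence between A and Q can be confirmed by expanding X^T A X; a SymPy sketch (illustrative):

    import sympy as sp

    x, y, z = sp.symbols('x y z')
    A = sp.Matrix([[7, sp.Rational(9, 2), 7],
                   [sp.Rational(9, 2), 11, 6],
                   [7, 6, -9]])
    X = sp.Matrix([x, y, z])
    print(sp.expand((X.T * A * X)[0]))
    # 7*x**2 + 9*x*y + 14*x*z + 11*y**2 + 12*y*z - 9*z**2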

Example: If Q(x,y) = 5x² + 10xy + 25y² = ax1² + bx2², where a, b ∈ {0, 1, −1},
then find a and b, and also find P such that
P\begin{pmatrix} x1 \\ x2 \end{pmatrix} = \begin{pmatrix} x \\ y \end{pmatrix}
Solution: In ax1² + bx2² only the diagonal entries of the coefficient matrix are nonzero; the others are zero.
With X = \begin{pmatrix} x \\ y \end{pmatrix}, Q(x,y) = X^T A X where
A = \begin{pmatrix} 5 & 5 \\ 5 & 25 \end{pmatrix}
Now write
\begin{pmatrix} 5 & 5 \\ 5 & 25 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} A \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}
R2 → R2 − R1, C2 → C2 − C1:
\begin{pmatrix} 5 & 0 \\ 0 & 20 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ -1 & 1 \end{pmatrix} A \begin{pmatrix} 1 & -1 \\ 0 & 1 \end{pmatrix}
R1 → (1/√5)R1, C1 → (1/√5)C1:
\begin{pmatrix} 1 & 0 \\ 0 & 20 \end{pmatrix} = \begin{pmatrix} 1/√5 & 0 \\ -1 & 1 \end{pmatrix} A \begin{pmatrix} 1/√5 & -1 \\ 0 & 1 \end{pmatrix}
R2 → (1/√20)R2, C2 → (1/√20)C2:
\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 1/√5 & 0 \\ -1/√20 & 1/√20 \end{pmatrix} A \begin{pmatrix} 1/√5 & -1/√20 \\ 0 & 1/√20 \end{pmatrix}
Since 1/√20 = 1/(2√5), this gives
P = \begin{pmatrix} 1/√5 & -1/(2√5) \\ 0 & 1/(2√5) \end{pmatrix},  B = P^T A P = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}
With a, b ∈ {0, 1, −1} and neither entry zero, a diagonal B has four sign possibilities (a has 2 possibilities, b has 2 possibilities):
\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix}, \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix}
(over C one could even pass from diag(1,1) to diag(−1,−1) by the scalings R1 → iR1, C1 → iC1 and R2 → iR2, C2 → iC2). Over R, however, congruence preserves the signature, and A is positive definite (det A = 100 > 0 and trace A = 30 > 0), so
a = 1, b = 1.

Check: X = PY gives
\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 1/√5 & -1/(2√5) \\ 0 & 1/(2√5) \end{pmatrix} \begin{pmatrix} x1 \\ x2 \end{pmatrix}
i.e. x = (2x1 − x2)/(2√5) and y = x2/(2√5).
Putting these values of x and y in
Q(x,y) = 5x² + 10xy + 25y²
= 5·((2x1 − x2)/(2√5))² + 10·((2x1 − x2)/(2√5))·(x2/(2√5)) + 25·(x2²/20)

= (x1² − x1x2 + x2²/4) + (x1x2 − x2²/2) + 5x2²/4

= x1² + (6x2² − 2x2²)/4

= x1² + x2²
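
Both claims — P^T A P = I and the substitution X = PY — can be verified exactly; a SymPy sketch (illustrative):

    import sympy as sp

    A = sp.Matrix([[5, 5], [5, 25]])
    P = sp.Matrix([[1/sp.sqrt(5), -1/(2*sp.sqrt(5))],
                   [0,             1/(2*sp.sqrt(5))]])
    print(sp.simplify(P.T * A * P))    # identity matrix: a = b = 1

    x1, x2 = sp.symbols('x1 x2')
    x, y = P * sp.Matrix([x1, x2])     # x and y in terms of x1, x2
    print(sp.expand(5*x**2 + 10*x*y + 25*y**2))   # x1**2 + x2**2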

Appendix on Symbol Notations
= equals

≡ is defined as

⇒ implies

⇔ is equivalent to

∃ there exists

∀ for all

∈ is an element of

∪ union

∩ intersect

⊂ subset or proper subset

⊆ subset

+ vector addition

⊕ vector addition or direct sum

· dot product or scalar multiplication

‖u‖ norm of u

Σ sum

Σ_{i=1}^{n} ui   u1 + u2 + . . . + un

d(u, v) distance between u and v


R set of Real Numbers
C set of Complex Numbers
Q set of Rational Numbers
N set of Natural Numbers
