
Lecture Notes on Linear Algebra

For the Partial Fulfillment of the Course


Course Code: MATH 243
Course Title: Vector Calculus, Linear Algebra and Complex Variables

Prepared by
Muhammad Shahnewaz Bhuyan

Department of Mathematics
Chittagong University of Engineering and Technology
Chattogram-4349

Started: September 30, 2024


Last Updated On: January 13, 2025
Outline

Course Code: MATH 243


Course Title: Vector Calculus, Linear Algebra and Complex Variables
Credit: 3.0
Contact Hours/Week: 3.0

Syllabus
Matrices and elementary row operations, Rank, Inverse of a square matrix.
Systems of linear equations and their applications in network flow and electric circuits.
Vectors in Rn , Linear combinations, Linear dependence and independence
Eigenvalues and eigenvectors, Diagonalization.

Books Recommended
(i) H. Anton and C. Rorres, Elementary Linear Algebra (Applications Version), John Wiley &
Sons, Eleventh Edition, 2020-21.

(ii) S. Lipschutz and M. L. Lipson, Theory and Problems of Linear Algebra, Schaum’s Outline
Series, McGraw-Hill, Third Edition, 2013-2014.

(iii) M. Abdur Rahman, College Linear Algebra: Theory of Matrices with Applications, Nahar
Book Depot & Publications, Dhaka, Sixth Edition (Reprint), 2012.

Class Routine
Monday
11:00 AM — 11:50 AM (Section B)
11:50 AM — 12:40 PM (Section A)

Tuesday
11:00 AM — 11:50 AM (Section B)
11:50 AM — 12:40 PM (Section A)

Contents

Outline ii

I Linear Algebra 1
1 Matrices 2
1.1 Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.1.1 Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.1.2 Order of a matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.1.3 Equality of two matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 Matrix Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2.1 Sum of two matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2.2 Subtraction of two matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2.3 Scalar multiplication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2.4 Matrix multiplication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2.5 Visualizing the effect of matrix multiplication on plane . . . . . . . . . . . . 6
1.3 Differences between Real Arithmetic and Matrix Arithmetic . . . . . . . . . . . . . 6
1.3.1 Failure of multiplicative commutativity . . . . . . . . . . . . . . . . . . . . . 6
1.3.2 Failure of cancellation Law . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.3.3 Matrix arithmetic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.4 Some Special Types of Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.4.1 Row matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.4.2 Column matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.4.3 Zero matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.4.4 Zero product with nonzero factors . . . . . . . . . . . . . . . . . . . . . . . . 9
1.4.5 Identity matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.4.6 Diagonal matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.4.7 Lower triangular matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.4.8 Upper triangular matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.4.9 Scalar matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.4.10 Transpose matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.4.11 Symmetric matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.4.12 Skew-symmetric matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.4.13 Orthogonal matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.4.14 Involutory matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.4.15 Idempotent matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1.4.16 Nilpotent matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.4.17 Periodic matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.4.18 The Vandermonde matrix of order m . . . . . . . . . . . . . . . . . . . . . . 14
1.5 Square Root of a Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14

1.6 Matrix Polynomial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.7 Application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.7.1 A special set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15

2 Determinants 16
2.1 Determinants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.1.1 Determinants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.2 Evaluating Determinants by Cofactor Expansion . . . . . . . . . . . . . . . . . . . . 16
2.2.1 Determinant of a 1 × 1 matrix . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.2.2 Determinant of a 2 × 2 matrix . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.2.3 Minors and cofactors of a square matrix . . . . . . . . . . . . . . . . . . . . 17
2.2.4 Determinant of a general n × n square matrix . . . . . . . . . . . . . . . . . 19
2.3 Determinants for Solving a System of Linear Equations . . . . . . . . . . . . . . . . 21
2.4 Some Properties of Determinants . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.4.1 Some Properties of Determinants . . . . . . . . . . . . . . . . . . . . . . . . 23
2.5 Construction of a Square Matrix with Given Order and Determinant . . . . . . . . 28
2.5.1 Construction of an n × n matrix with determinant ∆ . . . . . . . . . . . . . 28
2.6 Inverse of a Square Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
2.6.1 Cofactor matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
2.6.2 Adjoint matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
2.6.3 Another approach for finding inverse matrix . . . . . . . . . . . . . . . . . . 33

3 System of Linear Equations 34


3.1 System of Linear Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
3.1.1 Linear equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
3.1.2 System of linear equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3.1.3 Consistent and inconsistent linear system . . . . . . . . . . . . . . . . . . . . 35
3.2 Augmented Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3.2.1 Matrix representation of a system of Linear equations with two variables . . 35
3.2.2 Matrix representation of a general linear system . . . . . . . . . . . . . . . . 36
3.2.3 Augmented matrix of a linear system . . . . . . . . . . . . . . . . . . . . . . 36
3.2.4 Different representations of augmented matrices in different texts . . . . . . 36
3.3 Elementary Row Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
3.3.1 Objective of algebraic operation on a linear system . . . . . . . . . . . . . . 37
3.3.2 When should performing algebraic operations be stopped . . . . . . . . . . . 37
3.3.3 Elementary row operations of matrices . . . . . . . . . . . . . . . . . . . . . 37
3.4 Methods for Solving a Linear System . . . . . . . . . . . . . . . . . . . . . . . . . . 37
3.4.1 Solving a system of linear equations by using the concept of inverse matrix . 37
3.4.2 Gaussian elimination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
3.5 Application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
3.5.1 Application in network flow . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
3.5.2 Application in electrical circuits . . . . . . . . . . . . . . . . . . . . . . . . . 40

4 Row Echelon Form of a Matrix 41


4.1 Row Echelon Form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
4.1.1 Row echelon form of a matrix . . . . . . . . . . . . . . . . . . . . . . . . . . 41
4.1.2 Reduced row echelon form . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
4.2 Inverse and Rank of a Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
4.2.1 Rank of a matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
4.2.2 Finding inverse of a matrix by row operations . . . . . . . . . . . . . . . . . 42

5 Vector Spaces 44
5.1 Vectors in Rn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
5.1.1 Lines and planes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
5.1.2 Norm, dot product and distance . . . . . . . . . . . . . . . . . . . . . . . . . 45
5.1.3 Distance between two vectors . . . . . . . . . . . . . . . . . . . . . . . . . . 45
5.1.4 Normalizing a vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
5.1.5 Euclidean inner product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
5.1.6 Inequalities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
5.2 Vector Spaces and Subspaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
5.2.1 Real vector space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
5.2.2 Vector subspaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
5.3 Linear Independence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
5.3.1 Linear combination of vectors . . . . . . . . . . . . . . . . . . . . . . . . . . 47
5.3.2 Linear independence of vectors . . . . . . . . . . . . . . . . . . . . . . . . . 47
5.4 Basis and Dimension . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
5.4.1 Spanning set and basis of a vector space . . . . . . . . . . . . . . . . . . . . 47
5.4.2 Dimension . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
5.5 Row Spaces and Column Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
5.6 Rank and Nullity of a Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50

6 Basis and Dimension 51

7 Eigenvalues and Eigenvectors 52


7.1 Eigenvalues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
7.1.1 Eigenvalues of a matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
7.1.2 How to find eigenvalues and eigenvectors . . . . . . . . . . . . . . . . . . . . 52
7.2 Diagonalization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
7.2.1 Similar matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
7.2.2 Computing higher power of matrix . . . . . . . . . . . . . . . . . . . . . . . 55
7.3 The Cayley-Hamilton Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
7.3.1 Statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
7.3.2 Importance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
7.4 Application of Eigenvalues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
7.4.1 Stability test of a dynamical system . . . . . . . . . . . . . . . . . . . . . . . 56

8 Linear Transformation 57
8.1 Linear Transformation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
8.1.1 Reflection about x-axis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
8.1.2 Reflection about line y = x . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
8.1.3 Combination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
8.2 Kernel and Range . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
8.3 Rank and Nullity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
8.4 Isomorphism of Vector Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
8.4.1 One-one . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
8.4.2 Onto . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
8.4.3 Inverse transformation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
8.4.4 Isomorphism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
8.5 Matrices for General Linear Transformation . . . . . . . . . . . . . . . . . . . . . . 58

9 Quadratic Forms 60
9.1 Quadratic Forms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
9.2 Application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60

II Geometry from Algebraic Point of View 61


10 Algebra of Lines and Planes 62
10.1 Parametric Equation of Line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
10.1.1 Parametric equation of a line . . . . . . . . . . . . . . . . . . . . . . . . . . 62
10.2 Lines in 2-dimensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
10.2.1 Lines in 2-dimensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
10.3 Planes in 3-dimensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
10.3.1 Planes in 3-dimensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
10.3.2 Meet of three planes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
10.4 Lines in 3-dimensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
10.4.1 Lines in 3-dimensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
10.4.2 Deriving cartesian equation of a line from its parametric equation . . . . . . 64
10.4.3 Meet of a line and a plane . . . . . . . . . . . . . . . . . . . . . . . . . . 65
10.5 Meet of Two Planes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
10.5.1 Meet of two planes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
10.5.2 Finding meet of two planes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
10.6 Cartesian and Parametric Equations of a Plane . . . . . . . . . . . . . . . . . . . . 67
10.6.1 Parametric equation of a plane . . . . . . . . . . . . . . . . . . . . . . . . . 67
10.6.2 Deducing cartesian equation of a plane from its parametric equation . . . . . 67
10.6.3 Deducing parametric equation of a plane from its cartesian equation . . . . . 68

Bibliography 69

Part I

Linear Algebra

Chapter 1

Matrices

For detailed explanations and more solved problems, readers are referred to Anton and Rorres [2], Hamilton [4], Hadley [3], Larson and Falvo [5], Lipschutz and Lipson [6], Abdur Rahman [7], and Wildberger [8].

1.1 Matrices
1.1.1 Matrices
A matrix is a rectangular array of numbers arranged in horizontal and vertical lines. Each number in the array is referred to as an entry of the matrix; the horizontal lines of numbers are called the rows and the vertical lines of numbers are called the columns of the matrix.

Definition 1.1.1 A matrix A with m rows and n columns is a rectangular array of numbers, represented in either of the forms

A = \begin{bmatrix} a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\ a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\ a_{31} & a_{32} & a_{33} & \cdots & a_{3n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & a_{m3} & \cdots & a_{mn} \end{bmatrix}
\quad \text{or} \quad
A = \begin{pmatrix} a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\ a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\ a_{31} & a_{32} & a_{33} & \cdots & a_{3n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & a_{m3} & \cdots & a_{mn} \end{pmatrix}.

Here

- (a_{11} \; a_{12} \; a_{13} \; \cdots \; a_{1n}) is called the first row of A,
- (a_{21} \; a_{22} \; a_{23} \; \cdots \; a_{2n}) is called the second row of A,
- (a_{31} \; a_{32} \; a_{33} \; \cdots \; a_{3n}) is called the third row of A,

and so on. Similarly,

\begin{pmatrix} a_{11} \\ a_{21} \\ a_{31} \\ \vdots \\ a_{m1} \end{pmatrix}, \quad \begin{pmatrix} a_{12} \\ a_{22} \\ a_{32} \\ \vdots \\ a_{m2} \end{pmatrix}, \quad \begin{pmatrix} a_{13} \\ a_{23} \\ a_{33} \\ \vdots \\ a_{m3} \end{pmatrix}

are respectively called the first column, second column and third column of the matrix A, and so on. Sometimes, for simplicity, we represent the matrix A by writing A = [a_{ij}] or A = (a_{ij}).


Remark 1.1.1 In this text, we use the bracket symbol [ ] for matrix notation.

Definition 1.1.2 In a matrix A = [a_{ij}], each a_{ij} is called an entry, located at the intersection of the i-th row and j-th column. The entry a_{ij} is also referred to as the (i, j)-th entry.
 
Example 1.1.1 The rectangular array of real numbers \begin{bmatrix} 0 & 1 & 4 \\ 3 & 13 & 5 \end{bmatrix} is a matrix.

Counterexample 1.1.1 Suppose that

A = \begin{bmatrix} 1 & 2 & 3 \\ 11 & {} & 7 \\ 6 & -2 & 0 \end{bmatrix} \quad \text{and} \quad B = \begin{bmatrix} 0 & 1 & \infty \\ 0 & 9 & 3 \\ -1 & \pi & 5 \end{bmatrix}.

Here A is not a matrix, as its 2nd row, 2nd column position is empty. Again, B is not a matrix, as in its 1st row, 3rd column position there is the symbol ∞, which is not a number.

Definition 1.1.3 A matrix A = [a_{ij}] is said to be a real matrix if all a_{ij} ∈ R.

Example 1.1.2 A = \begin{bmatrix} 1 & 4 \\ -4 & 8 \end{bmatrix} is a real matrix, but B = \begin{bmatrix} 1 & 4 \\ \sqrt{-4} & 8 \end{bmatrix} is not a real matrix,

as \sqrt{-4} is an entry of B, which does not belong to R.

Remark 1.1.2 In this text, unless otherwise specified explicitly, all matrices are assumed to be
real.

Question 1.1.1 Is a_{213} the (21, 3)-th entry or the (2, 13)-th entry in [a_{ij}]?

Answer The notation by itself is ambiguous; the intended reading is resolved by the row and column numbers of the neighbouring entries.

Definition 1.1.4 A matrix having an equal number of rows and columns is called a square matrix.

Definition 1.1.5 Let A = [a_{ij}] be a square matrix of order n × n. Then the entries a_{ij} for which i = j form the principal diagonal of A and are called the diagonal entries of A.
 
Example 1.1.3 \begin{bmatrix} 2 & 3 & 5 \\ 1 & 0 & 6 \\ 0 & 0 & -2 \end{bmatrix} is a square matrix in which 2, 0 and −2 are the diagonal entries. On the other hand, \begin{bmatrix} 2 & 3 & 5 \\ 1 & 0 & 6 \end{bmatrix} is not a square matrix.

1.1.2 Order of a matrix


Let A be a matrix with m rows and n columns. Then m × n, read as m by n, is called the order
or size of A. A matrix with order m × n is called an m by n matrix.
 
Example 1.1.4 The matrix \begin{bmatrix} 0 & -1 & 4 \\ 3 & 4 & 5 \end{bmatrix} has 2 rows and 3 columns. So its order is 2 × 3.

1.1.3 Equality of two matrices


Two matrices are equal when they have the same order (or size) and their corresponding entries are equal.

 
Definition 1.1.6 Let A = [a_{ij}] and B = [b_{ij}] be two matrices of the same order m × n. Then A and B are said to be equal if, for each i = 1, 2, 3, . . . , m and j = 1, 2, 3, . . . , n,

a_{ij} = b_{ij}.

To denote that two matrices A and B are equal we write A = B, whereas to denote that two matrices A and B are not equal we write A ̸= B.

Example 1.1.5 Consider the matrices

A = \begin{bmatrix} 1 & 2 \\ 3 & 7 \end{bmatrix}, \quad B = \begin{bmatrix} 5 & 2 \\ 3 & 7 \end{bmatrix}, \quad C = \begin{bmatrix} 1 & 2 \\ 3 & 7 \\ 5 & 6 \end{bmatrix}, \quad D = \begin{bmatrix} 1 & 2 \\ 3 & 7 \end{bmatrix}.

Here A and D are equal, as their orders and corresponding entries are the same. Again, A and B are not equal even though they have the same order, as the (1, 1)-th entry of A is 1 whereas the (1, 1)-th entry of B is 5. Moreover, A and C are not equal, as their orders are different.

1.2 Matrix Operations


1.2.1 Sum of two matrices
Let A and B be two matrices of the same size. The sum of A and B is denoted by A + B and is obtained by adding the entries of B to the corresponding entries of A.

Example 1.2.1 Suppose that A = \begin{bmatrix} 1 & 3 & 4 \\ 5 & 6 & 7 \end{bmatrix}, B = \begin{bmatrix} 3 & 3 & 4 \\ 5 & 0 & 7 \end{bmatrix} and C = \begin{bmatrix} 3 & 3 \\ 5 & 0 \end{bmatrix}. Here A and B are of the same order. So A + B is defined and

A + B = \begin{bmatrix} 4 & 6 & 8 \\ 10 & 6 & 14 \end{bmatrix}.

On the other hand, A + C is not defined, as their orders are not the same.

1.2.2 Subtraction of two matrices


 
Let A = [a_{ij}] and B = [b_{ij}] be two matrices of the same order m × n. The subtraction of B from A, denoted by A − B, is another matrix C = [c_{ij}], where c_{ij} = a_{ij} − b_{ij}.

Example 1.2.2 Suppose that A = \begin{bmatrix} 1 & 3 & 4 \\ 5 & 6 & 7 \end{bmatrix}, B = \begin{bmatrix} 3 & 3 & 4 \\ 5 & 0 & 7 \end{bmatrix} and C = \begin{bmatrix} 3 & 3 \\ 5 & 0 \end{bmatrix}. Here A and B are of the same order. So A − B is defined and

A − B = \begin{bmatrix} -2 & 0 & 0 \\ 0 & 6 & 0 \end{bmatrix}.

On the other hand, A − C is not defined, as their orders are not the same.
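The entrywise rules for addition and subtraction are easy to machine-check. Below is a minimal pure-Python sketch (the helper names `mat_add` and `mat_sub` are ours, not part of the notes), applied to the matrices of Examples 1.2.1 and 1.2.2:

```python
def mat_add(A, B):
    # A + B: defined only when A and B have the same order
    assert len(A) == len(B) and len(A[0]) == len(B[0]), "orders must match"
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))] for i in range(len(A))]

def mat_sub(A, B):
    # A - B: entrywise subtraction, same order required
    assert len(A) == len(B) and len(A[0]) == len(B[0]), "orders must match"
    return [[A[i][j] - B[i][j] for j in range(len(A[0]))] for i in range(len(A))]

A = [[1, 3, 4], [5, 6, 7]]
B = [[3, 3, 4], [5, 0, 7]]
print(mat_add(A, B))  # [[4, 6, 8], [10, 6, 14]]
print(mat_sub(A, B))  # [[-2, 0, 0], [0, 6, 0]]
```

Calling `mat_add(A, C)` with the 2 × 2 matrix C of the examples raises an assertion error, mirroring the fact that A + C is undefined.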

1.2.3 Scalar multiplication



Let A = [a_{ij}] be a matrix and α be a scalar. The scalar multiplication of A by α is denoted by αA and defined as

αA = α[a_{ij}] = [αa_{ij}].

 
Example 1.2.3 Suppose that A = \begin{bmatrix} 1 & 3 & 4 \\ 5 & 0 & 7 \\ 0 & 3 & -4 \end{bmatrix} and α = 4. Then

αA = 4 \begin{bmatrix} 1 & 3 & 4 \\ 5 & 0 & 7 \\ 0 & 3 & -4 \end{bmatrix} = \begin{bmatrix} 4 & 12 & 16 \\ 20 & 0 & 28 \\ 0 & 12 & -16 \end{bmatrix}.

1.2.4 Matrix multiplication


Matrix multiplication¹ is somewhat complicated, as the product of two matrices may not be defined. The product of two matrices is possible only when the number of columns of the first matrix equals the number of rows of the second matrix. The product of two matrices produces another matrix.
 
Definition 1.2.1 Let A = [a_{ij}] be a matrix of order m × l and B = [b_{ij}] be a matrix of order l × n. Then the product of A and B, denoted by AB, is another matrix C = [c_{ij}], where

c_{ij} = \sum_{k=1}^{l} a_{ik} b_{kj} = a_{i1}b_{1j} + a_{i2}b_{2j} + a_{i3}b_{3j} + \cdots + a_{il}b_{lj}, \quad 1 ≤ i ≤ m, \; 1 ≤ j ≤ n.

The order of C = AB is m × n.

The formula used in the above definition is known as the row-column rule for matrix multiplication.

Note 1.2.1 With the help of the above row-column formula for matrix multiplication, it is possible
to find a specific entry of a product matrix without determining the product matrix fully.
   
Example 1.2.4 Suppose that A = \begin{bmatrix} 1 & 3 & 4 \\ 5 & 6 & 0 \end{bmatrix} and B = \begin{bmatrix} 3 & 3 & 4 \\ 5 & 0 & 7 \end{bmatrix}. Here AB is not defined, as the number of columns of A is not equal to the number of rows of B.
 
Example 1.2.5 Suppose that A = \begin{bmatrix} 1 & 3 & 4 \\ 5 & 6 & 0 \end{bmatrix} and B = \begin{bmatrix} 3 & 3 \\ -1 & 0 \\ 1 & 0 \end{bmatrix}. Here AB is defined, as the number of columns of A is equal to the number of rows of B. Say AB = C. Clearly, C is a matrix of order 2 × 2. Now,

c_{11} = \sum_{k=1}^{3} a_{1k}b_{k1} = a_{11}b_{11} + a_{12}b_{21} + a_{13}b_{31} = 1(3) + 3(-1) + 4(1) = 4,
c_{12} = \sum_{k=1}^{3} a_{1k}b_{k2} = a_{11}b_{12} + a_{12}b_{22} + a_{13}b_{32} = 1(3) + 3(0) + 4(0) = 3,
c_{21} = \sum_{k=1}^{3} a_{2k}b_{k1} = a_{21}b_{11} + a_{22}b_{21} + a_{23}b_{31} = 5(3) + 6(-1) + 0(1) = 9,
c_{22} = \sum_{k=1}^{3} a_{2k}b_{k2} = a_{21}b_{12} + a_{22}b_{22} + a_{23}b_{32} = 5(3) + 6(0) + 0(0) = 15.

Therefore C = \begin{bmatrix} 4 & 3 \\ 9 & 15 \end{bmatrix}.

¹The notion of matrix multiplication was first introduced by the German mathematician Gotthold Eisenstein, a student of Gauss, around 1844. This concept was expanded and formalized further by Cayley in 1858. Eisenstein suffered from bad health throughout his life and died when he was only 30, so his potential was never realized [2, Page 30].
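The row-column rule translates directly into code. The sketch below (helper names are ours) recomputes Example 1.2.5 and also extracts the single entry c₂₁ without forming the whole product, as Note 1.2.1 suggests; indices are 0-based in code:

```python
def entry(A, B, i, j):
    # (i, j)-entry of AB by the row-column rule: sum over k of a_ik * b_kj
    return sum(A[i][k] * B[k][j] for k in range(len(B)))

def mat_mul(A, B):
    # the product is defined only when columns of A = rows of B
    assert len(A[0]) == len(B), "columns of A must equal rows of B"
    return [[entry(A, B, i, j) for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 3, 4], [5, 6, 0]]
B = [[3, 3], [-1, 0], [1, 0]]
print(entry(A, B, 1, 0))  # c_21 alone: 9
print(mat_mul(A, B))      # [[4, 3], [9, 15]]
```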
 
Problem 1.2.1 Find the product of A = \begin{bmatrix} 1 & 3 & 0 \\ 5 & 6 & 0 \end{bmatrix} and B = \begin{bmatrix} 3 & 3 \\ -1 & 0 \\ 1 & 0 \end{bmatrix}.

Solution Here

AB = \begin{bmatrix} 1 & 3 & 0 \\ 5 & 6 & 0 \end{bmatrix} \begin{bmatrix} 3 & 3 \\ -1 & 0 \\ 1 & 0 \end{bmatrix} = \begin{bmatrix} 1(3) + 3(-1) + 0(1) & 1(3) + 3(0) + 0(0) \\ 5(3) + 6(-1) + 0(1) & 5(3) + 6(0) + 0(0) \end{bmatrix} = \begin{bmatrix} 0 & 3 \\ 9 & 15 \end{bmatrix}.

Problem 1.2.2 Problem 25 [2, Page 37 of Exercise 1.3]

1.2.5 Visualizing the effect of matrix multiplication on plane


Here the reader is referred to [1].

1.3 Differences between Real Arithmetic and Matrix Arithmetic

Some laws that are valid in real arithmetic fail in matrix arithmetic, for instance the commutative law and the cancellation law.

1.3.1 Failure of multiplicative commutativity


Matrix multiplication is not commutative. That is, for two matrices A, B, in general AB ̸= BA.

Problem 1.3.1 Find two such matrices A and B for which AB ̸= BA.

Question 1.3.1 Is (A + B)2 = A2 + 2AB + B 2 true? Why?

Answer No. Because in general AB ̸= BA for two matrices A and B.
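A concrete pair exhibiting AB ≠ BA (the matrices here are our own choice; any small noncommuting pair works, and the helper name is ours):

```python
def mat_mul(A, B):
    # row-column rule: (AB)[i][j] = sum over k of A[i][k] * B[k][j]
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
print(mat_mul(A, B))  # [[2, 1], [4, 3]]
print(mat_mul(B, A))  # [[3, 4], [1, 2]] -- different, so AB != BA
```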

Proposition 1.3.1 The square of a matrix exists if and only if it is a square matrix.

Proof. We have A^2 = A · A. But A · A is defined only when the number of columns of A equals its number of rows, that is, when A is a square matrix.

Problem 1.3.2 Prove or disprove that (AB)^2 = A^2 B^2 for any two square matrices A and B.

Disproof Suppose that A = \begin{bmatrix} 2 & 1 \\ 3 & 2 \end{bmatrix} and B = \begin{bmatrix} 1 & 3 \\ 4 & 1 \end{bmatrix}. Here

(AB)^2 = \begin{bmatrix} 113 & 119 \\ 187 & 198 \end{bmatrix}, \quad \text{whereas} \quad A^2 B^2 = \begin{bmatrix} 123 & 94 \\ 212 & 163 \end{bmatrix}.

So the statement that (AB)^2 = A^2 B^2 for any two square matrices A and B is false.
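The disproof can be checked mechanically. A short sketch (helper name ours) forms both sides for the pair above and confirms that they disagree:

```python
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[2, 1], [3, 2]]
B = [[1, 3], [4, 1]]
AB = mat_mul(A, B)
lhs = mat_mul(AB, AB)                        # (AB)^2
rhs = mat_mul(mat_mul(A, A), mat_mul(B, B))  # A^2 B^2
print(lhs != rhs)  # True: the two sides disagree for this pair
```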

1.3.2 Failure of cancellation law

The products of two different pairs of matrices may be the same. So in matrix arithmetic the cancellation law does not hold. The next example clarifies this fact.

Example 1.3.1 Consider the matrices A = \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}, B = \begin{bmatrix} 1 & 1 \\ -1 & -1 \end{bmatrix} and C = \begin{bmatrix} 2 & -3 \\ -2 & 3 \end{bmatrix}.

Here AB = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} = AC, though B ̸= C.
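Checking Example 1.3.1 in code (the helper name is ours):

```python
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 1], [1, 1]]
B = [[1, 1], [-1, -1]]
C = [[2, -3], [-2, 3]]
print(mat_mul(A, B) == mat_mul(A, C))  # True: AB = AC ...
print(B == C)                          # False: ... yet B != C
```

So cancelling A from AB = AC would wrongly conclude B = C.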

1.3.3 Matrix arithmetic


The next theorem lists the basic algebraic properties of matrix operations.

Theorem 1.3.1 Let α, β be two scalars and A, B, C be three matrices of suitable order so that
indicated operations are defined. Then

(a) A + B = B + A (commutativity of matrix addition)

(b) A + (B + C) = (A + B) + C (associativity of matrix addition)

(c) A(BC) = (AB)C (associativity of matrix multiplication)

(d) A(B + C) = AB + AC (left distributivity)

(e) (A + B)C = AC + BC (right distributivity)

(f) α(A + B) = αA + αB

(g) (α + β)A = αA + βA

(h) α(βA) = (αβ)A.

(i) α(AB) = (αA)B = A(αB).

Problem 1.3.3 Problem 23 of [2, Page 50 of Exercise 1.4]

Problem 1.3.4 Problem 24 of [2, Page 50 of Exercise 1.4]


   
Exercise 1.3.1 If A = \begin{bmatrix} 1 & 0 & -1 \\ 0 & 1 & 2 \\ 1 & 5 & -5 \end{bmatrix} and B = \begin{bmatrix} 4 & 0 & 0 \\ 0 & 1 & 2 \\ 7 & 0 & 0 \end{bmatrix}, then find A^2 + AB − 2(A + B) − I_3.

Question 1.3.2 If A2 + AB − 2(A + B) − I3 is defined, then what are the order of it and A?

Answer Order of A2 + AB − 2(A + B) − I3 is 3 × 3 and order of A is also 3 × 3.

1.4 Some Special Types of Matrices


1.4.1 Row matrix
A matrix consisting of only one row is called a row matrix. Row matrices are also referred to as row vectors.

Example 1.4.1 \begin{bmatrix} 2 & 3 & 4 & 8 \end{bmatrix}, \begin{bmatrix} 2 & 8 \end{bmatrix}, etc. are row matrices.

1.4.2 Column matrix


A matrix consisting of only one column is called a column matrix. Column matrices are also referred to as column vectors.

Example 1.4.2 \begin{bmatrix} 2 \\ 3 \\ 4 \end{bmatrix}, \begin{bmatrix} 2 \\ 8 \end{bmatrix}, etc. are column matrices.

Question 1.4.1 Is there any matrix which is at the same time a row matrix and a column matrix?

Answer Yes, infinitely many: every 1 × 1 matrix. For example, \begin{bmatrix} 4 \end{bmatrix}, \begin{bmatrix} -4 \end{bmatrix}, \begin{bmatrix} 0 \end{bmatrix}, etc.

1.4.3 Zero matrix


A matrix with all entries zero is called a null matrix or zero matrix. We denote a zero matrix by 0. A zero matrix may or may not be square.

Example 1.4.3 \begin{bmatrix} 0 \end{bmatrix}, \begin{bmatrix} 0 & 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}, etc. are zero matrices.

Note 1.4.1 Two zero matrices may not be equal. For example, \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} and \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} are two zero matrices, but they are not equal, as their orders differ.

The next theorem lists some properties of zero matrices.

Theorem 1.4.1 Let α ∈ R be a scalar and 0 be a zero matrix. Then for any matrix A of suitable order, so that the indicated operations are defined:

(a) A + 0 = 0 + A = A.

(b) A − A = A + (−A) = 0.

(c) A − 0 = A.

(d) A0 = 0A = 0.

(e) αA = 0 implies either α = 0 or A = 0.

1.4.4 Zero product with nonzero factors

In the usual algebra of real numbers, the product of two nonzero numbers cannot be zero. But in the algebra of matrices, the product of two nonzero matrices can be a zero matrix. The following example demonstrates this fact.

Example 1.4.4 For the matrices A = \begin{bmatrix} 0 & -1 \\ 0 & 3 \end{bmatrix} and B = \begin{bmatrix} 10 & 1 \\ 0 & 0 \end{bmatrix}, though AB = 0, neither A = 0 nor B = 0.

Question 1.4.2 For two matrices A and B, does AB = 0 imply either A = 0 or B = 0? Justify
your answer.

Answer No. The following counterexample justifies this.

Counterexample 1.4.2 Consider the matrices A and B of Example 1.4.4. There, though AB = 0, neither A = 0 nor B = 0.
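Verifying Example 1.4.4 numerically (the helper name is ours):

```python
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[0, -1], [0, 3]]
B = [[10, 1], [0, 0]]
product = mat_mul(A, B)
print(product)  # [[0, 0], [0, 0]] -- a zero product with nonzero factors
```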

1.4.5 Identity matrix



Let I = [a_{ij}] be a square matrix. Then I is said to be a unit matrix or an identity matrix if

a_{ij} = \begin{cases} 1, & \text{if } i = j \\ 0, & \text{if } i \neq j. \end{cases}

That is, an identity matrix is a square matrix in which all diagonal entries are 1 and all off-diagonal entries are 0. The identity matrix of order n × n is denoted by I_n. For any n × n square matrix A, always I_n A = A I_n = A.

Example 1.4.5 \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} is an identity matrix of order 4 × 4. Simply, it is also denoted by I_4.

1.4.6 Diagonal matrix



A square matrix A = [a_{ij}] is called a diagonal matrix if

a_{ij} = 0 whenever i ̸= j.

In a diagonal matrix all off-diagonal entries must be zero, but the diagonal entries may be any numbers.

Example 1.4.6 Identity and zero (square) matrices are diagonal.

Example 1.4.7 \begin{bmatrix} 4 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -1 \end{bmatrix} is a diagonal matrix.

1.4.7 Lower triangular matrix



A square matrix A = [a_{ij}] is called lower triangular if

a_{ij} = 0 whenever i < j.

That is, in a lower triangular matrix all entries above the diagonal are zero.

Example 1.4.8 \begin{bmatrix} 4 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & -1 \end{bmatrix} is a lower triangular matrix.

1.4.8 Upper triangular matrix



A square matrix A = [a_{ij}] is called upper triangular if

a_{ij} = 0 whenever i > j.

That is, in an upper triangular matrix all entries below the diagonal are zero.

Example 1.4.9 \begin{bmatrix} 4 & 0 & 5 \\ 0 & 0 & 1 \\ 0 & 0 & -1 \end{bmatrix} is an upper triangular matrix.

Note 1.4.2 The zero (square) matrix and the identity matrix are at the same time upper triangular and lower triangular.

Note 1.4.3 A triangular matrix is one that is either upper triangular or lower triangular.

1.4.9 Scalar matrix


A square matrix A is called a scalar matrix, if

A = αI for some scalar α.

1.4.10 Transpose matrix



Let A = [a_{ij}] be a matrix. Then the transpose matrix of A, denoted by A^T, is defined as

A^T = [a_{ji}].

If a matrix A is of order m × n, then its transpose matrix will be of order n × m.

Example 1.4.10 Consider the matrix A = \begin{bmatrix} 4 & 3 & 3 \\ -1 & 0 & -1 \end{bmatrix}. Then the transpose of A is A^T = \begin{bmatrix} 4 & -1 \\ 3 & 0 \\ 3 & -1 \end{bmatrix}.

The following theorem describes some properties of transpose matrices.

Theorem 1.4.3 Let A and B be two matrices of suitable orders so that the indicated matrix operations are defined. Then

(a) (A^T)^T = A.
(b) (A + B)^T = A^T + B^T.
(c) (AB)^T = B^T A^T.
(d) (αA)^T = αA^T for a scalar α.
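Property (c) is the least obvious of the four, so a quick numerical check is worthwhile. The sketch below uses the matrix of Example 1.4.10 together with an arbitrary 3 × 2 companion of our own choosing (helper names are also ours):

```python
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(M):
    # rows of M become the columns of M^T
    return [[M[i][j] for i in range(len(M))] for j in range(len(M[0]))]

A = [[4, 3, 3], [-1, 0, -1]]  # matrix of Example 1.4.10
B = [[1, 2], [0, 1], [2, 0]]  # arbitrary 3x2 companion (our choice)
print(transpose(mat_mul(A, B)) == mat_mul(transpose(B), transpose(A)))  # True
```

Note the order reversal on the right-hand side: A is 2 × 3 and B is 3 × 2, so B^T A^T is the only way the transposed factors can even be multiplied.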

Question 1.4.3 Can the property (AB)^T = B^T A^T be extended to 3 matrices? Can it be extended to any finite number of matrices? What about an infinite number of matrices?

Answer Yes, and yes: since the product of any two matrices is again a matrix, induction gives

(A_1 A_2 A_3 \cdots A_n)^T = A_n^T A_{n-1}^T A_{n-2}^T \cdots A_1^T.

For an infinite number of matrices the property cannot even be stated, as there is no last factor from which the reversed product could start.

1.4.11 Symmetric matrix


A matrix A is said to be symmetric if

A^T = A.

Obviously, every symmetric matrix is a square matrix.

Example 1.4.11 The matrix

A = \begin{bmatrix} 4 & 5 & 6 & 0 \\ 5 & 3 & 0 & 10 \\ 6 & 0 & 7 & -1 \\ 0 & 10 & -1 & 8 \end{bmatrix}

is a symmetric matrix, as

A^T = \begin{bmatrix} 4 & 5 & 6 & 0 \\ 5 & 3 & 0 & 10 \\ 6 & 0 & 7 & -1 \\ 0 & 10 & -1 & 8 \end{bmatrix} = A.

1.4.12 Skew-symmetric matrix


A matrix A is said to be skew-symmetric if

A^T = −A.

Obviously, every skew-symmetric matrix is a square matrix. In a skew-symmetric matrix, all entries along the principal diagonal are 0 (zero).

Example 1.4.12 The matrix

A = \begin{bmatrix} 0 & 5 & 6 & 0 \\ -5 & 0 & 0 & 10 \\ -6 & 0 & 0 & -1 \\ 0 & -10 & 1 & 0 \end{bmatrix}

is a skew-symmetric matrix, as

A^T = \begin{bmatrix} 0 & -5 & -6 & 0 \\ 5 & 0 & 0 & -10 \\ 6 & 0 & 0 & 1 \\ 0 & 10 & -1 & 0 \end{bmatrix} = −A.

Example 1.4.13 The matrix

A = \begin{bmatrix} 0 & 5 & 6 & -8 \\ -5 & 0 & -4 & 10 \\ -6 & 4 & 0 & -1 \\ 8 & -10 & 1 & 0 \end{bmatrix}

is a skew-symmetric matrix, as

A^T = \begin{bmatrix} 0 & -5 & -6 & 8 \\ 5 & 0 & 4 & -10 \\ 6 & -4 & 0 & 1 \\ -8 & 10 & -1 & 0 \end{bmatrix} = −A.

Proposition 1.4.1 The transpose of a skew-symmetric matrix is also skew-symmetric.

1.4.13 Orthogonal matrix


A matrix A is said to be orthogonal if

AAT = AT A = I.

That means, for an orthogonal matrix A, always

A−1 = AT .

Obviously, every orthogonal matrix is a square matrix. All identity matrices are orthogonal.

Example 1.4.14 The matrix

A = [ 1/√2    1/√2
      1/√2   −1/√2 ]

is an orthogonal matrix, as

AT = [ 1/√2    1/√2
       1/√2   −1/√2 ]

and

AAT = AT A = [ 1  0
               0  1 ].

Proposition 1.4.2 Product of two orthogonal matrices is again orthogonal.

Proposition 1.4.3 Transpose of an orthogonal matrix is also orthogonal.

Note 1.4.4 Collection O of all orthogonal matrices of order m × m forms a group, called or-
thogonal group.
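The defining identity AAT = I can be checked numerically for the matrix of Example 1.4.14; a short Python sketch (floating-point arithmetic, so the comparison allows a tiny round-off tolerance):

```python
import math

# Verifying that the matrix of Example 1.4.14 satisfies A A^T = I
# (up to floating-point round-off).

def transpose(A):
    return [list(col) for col in zip(*A)]

def matmul(A, B):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*B)] for row in A]

r = 1 / math.sqrt(2)
A = [[r, r],
     [r, -r]]

P = matmul(A, transpose(A))
assert all(abs(P[i][j] - (1 if i == j else 0)) < 1e-12 for i in range(2) for j in range(2))
```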

1.4.14 Involutory matrix


A matrix A is called involutory, if A2 = I.
 
Example 1.4.15 The matrix

[  4   3   3
  −1   0  −1
  −4  −4  −3 ]

is involutory.

Problem 1.4.1 If A is an involutory matrix, then what is An equal to for n ∈ N?


Solution An = I if n is even, and An = A if n is odd.

1.4.15 Idempotent matrix
Let A be a square matrix. Then A is said to be idempotent, if A2 = A.

Example 1.4.16 The matrices

[ −1   3   5
   1  −3  −5
  −1   3   5 ]

and

[ 1/2  1/2
  1/2  1/2 ]

etc. are idempotent.
 
Problem 1.4.2 Show that

[  2  −2  −4
  −1   3   4
   1  −2  −3 ]

is an idempotent matrix.

Example 1.4.17 Any n × n matrix in which every entry is 1/n is idempotent.
Problem 1.4.3 If A is an idempotent matrix, then what is An equal to for n ∈ N?

Solution A.

Problem 1.4.4 Write down a 4 × 4 nonzero idempotent matrix A for which |A| = 0.

Solution The matrix

[ 1/4  1/4  1/4  1/4
  1/4  1/4  1/4  1/4
  1/4  1/4  1/4  1/4
  1/4  1/4  1/4  1/4 ]

is of such form.
Proposition 1.4.4 If A is an idempotent matrix, then I − A is also idempotent.

Proof. Let A be an idempotent matrix. So A2 = A. Then

(I − A)2 = (I − A)(I − A)
         = I 2 − IA − AI + A2
         = I − A − A + A
         = I − A.

Hence the statement.

Theorem 1.4.4 Let A and B be two idempotent matrices. Then AB is idempotent, when
AB = BA.

Proof. Suppose that AB = BA. Since A and B are idempotent,

A2 = A and B 2 = B.

Here

(AB)2 = (AB)(AB) = A(BA)B; by associativity


= A(AB)B; by hypothesis
= (AA)(BB) = A2 B 2 = AB.

Hence AB is idempotent.

Theorem 1.4.5 Let A and B be two idempotent matrices. Then A + B is idempotent if and
only if AB = BA = 0.

Theorem 1.4.6 Let A and B be two square matrices of order n with AB = A and BA = B. Then A and B are also idempotent.

1.4.16 Nilpotent matrix


A matrix A is called a nilpotent matrix, if there exists a least positive integer n such that

An = 0.

In this case, n is called the index of the nilpotent matrix A.


 
Exercise 1.4.1 Show that

[  1   1   3
   5   2   6
  −2  −1  −3 ]

is a nilpotent matrix. Then find its index also.
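One way to attack Exercise 1.4.1 numerically is to compute successive powers of the matrix until the zero matrix appears; a pure-Python sketch (the bound max_power is an arbitrary safety cap, not part of the definition):

```python
# Finding the index of a nilpotent matrix by computing successive powers.

def matmul(A, B):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*B)] for row in A]

def nilpotent_index(A, max_power=10):
    # Returns the least n with A^n = 0, or None if no such n <= max_power exists.
    size = len(A)
    zero = [[0] * size for _ in range(size)]
    P = A
    for k in range(1, max_power + 1):
        if P == zero:       # here P == A^k
            return k
        P = matmul(P, A)
    return None
```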

1.4.17 Periodic matrix


A matrix A is called a periodic matrix, if there exists a least positive integer n such that

An+1 = A.

In this case, n is called the period of A.


 
Exercise 1.4.2 Show that

[  1  −2  −6
  −3   2   9
   2   0  −3 ]

is a periodic matrix of period 2.

1.4.18 The Vandermonde matrix of order n

A square matrix of the form

[ 1   a1   (a1 )2   . . .   (a1 )n−1
  1   a2   (a2 )2   . . .   (a2 )n−1
  1   a3   (a3 )2   . . .   (a3 )n−1
  ..   ..     ..    . . .       ..
  1   an   (an )2   . . .   (an )n−1 ]

is called the Vandermonde matrix of order n. It is usually denoted by Vn .

Example 1.4.18 The Vandermonde matrix of order 3 is of the following form:

V3 = [ 1   a1   (a1 )2
       1   a2   (a2 )2
       1   a3   (a3 )2 ].
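The construction of Vn from its nodes a1 , . . . , an translates directly into code; a short Python sketch (the nodes 2, 3, 5 are an arbitrary illustrative choice):

```python
# Building the Vandermonde matrix V_n from its nodes; row i is
# (1, a_i, a_i^2, ..., a_i^(n-1)).

def vandermonde(nodes):
    n = len(nodes)
    return [[a ** j for j in range(n)] for a in nodes]

V3 = vandermonde([2, 3, 5])
# V3 == [[1, 2, 4], [1, 3, 9], [1, 5, 25]]
```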

1.5 Square Root of a Matrix


Definition 1.5.1 (Square Root of a matrix) A matrix B is called a square root of a matrix A,
if BB = A.

Problem 1.5.1 [2, like Problem 29 of Exercise Set 1.3, Page 38] Find a square root of the matrix

B = [ 1  1
      0  1 ].

Solution Take

A = [ 1  1/2
      0   1  ].

Then

A2 = [ 1  1
       0  1 ] = B,

so A is a square root of B.
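The square root found above can be verified with exact rational arithmetic; a short Python check:

```python
from fractions import Fraction

# Verifying Problem 1.5.1: A = [[1, 1/2], [0, 1]] squares to
# B = [[1, 1], [0, 1]]. Fractions keep the arithmetic exact.

def matmul(A, B):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*B)] for row in A]

A = [[Fraction(1), Fraction(1, 2)],
     [Fraction(0), Fraction(1)]]

assert matmul(A, A) == [[1, 1], [0, 1]]
```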

Problem 1.5.2 Problem 29(a) and Problem 29(b) [2, Exercise Set 1.3, Page 38]

1.6 Matrix Polynomial


Definition 1.6.1 [2, Page 46-48]

1.7 Application
Problem 1.7.1 Suppose that three students Jone, Pitar and Strang obtained marks in three assignments A20, A10, A10 and in class performance C05 as in the following Table 1.1.

Student's Name   A20   A10   A10   C05
Jone             20    07    9.5   03
Pitar            19    08    09    05
Strang           10    06    07    00
Tk per mark      05    03    04    10

Table 1.1:

If anyone gets Tk 05, Tk 03, Tk 04 and Tk 10 for each mark in A20, A10, A10 and C05 respectively, then calculate the amount of money received by Jone, Pitar and Strang.

Solution Matrix formulation of the given problem is as follows.

[ 20  07  9.5  03     [ 05       [ 20 × 05 + 07 × 03 + 9.5 × 04 + 03 × 10      [ 189
  19  08  09   05  ×    03    =    19 × 05 + 08 × 03 + 09 × 04 + 05 × 10   =     205
  10  06  07   00 ]     04         10 × 05 + 06 × 03 + 07 × 04 + 00 × 10 ]       96 ]
                        10 ]
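The same computation as a Python sketch; the row-times-rate sums reproduce the matrix product above:

```python
# The computation of Problem 1.7.1 as a matrix-vector product in pure Python.

marks = [[20, 7, 9.5, 3],   # Jone
         [19, 8, 9, 5],     # Pitar
         [10, 6, 7, 0]]     # Strang
rate = [5, 3, 4, 10]        # Tk per mark in A20, A10, A10, C05

pay = [sum(m * r for m, r in zip(row, rate)) for row in marks]
# Jone 189, Pitar 205, Strang 96 (taka)
```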

1.7.1 A special set

M = { ± [ 1  0      ± [ 1   0      ± [ 0  1      ± [ 0  −1
          0  1 ],       0  −1 ],       1  0 ],       1   0 ] }
Chapter 2

Determinants

For detailed explanations and solving more problems, readers are referred to Anton and Rorres [2],
Hamilton [4], Hadley [3], Larson and Falvo [5], Lipschutz and Lipson [6].

2.1 Determinants
2.1.1 Determinants
Determinant is defined for square matrices. Corresponding to every square matrix there exists a unique number, which is referred to as its determinant.

Definition 2.1.1 Let A be a square matrix of order m × m. Then the determinant of A, denoted by det(A) or |A|, is a number associated with it.

Determinant of a square matrix may be negative, positive or zero. Every square matrix relates
to a unique determinant. That is why, determinants can also be interpreted as a function.

Definition 2.1.2 Let S = {A : A is a square matrix}. Determinant, det, of square matrices is


a function
det : S → R
that assigns a unique number det(A) to every square matrix A.

Repelling a misconception
Sometimes it is said that determinant is the value of a square matrix. This is a misconception.
Determinant is not the value of a matrix, rather it is simply a number related to a square matrix.

2.2 Evaluating Determinants by Cofactor Expansion


2.2.1 Determinant of a 1 × 1 matrix
Any 1 × 1 matrix is a square matrix and so determinant of a 1 × 1 matrix is defined.
 
Definition 2.2.1 Determinant of the 1 × 1 matrix [a11 ], denoted by det([a11 ]) or |a11 |, is defined as

|a11 | = a11 .


Example 2.2.1 Consider the 1 × 1 matrix A = [−2]. Then

|−2| = −2.

Observe that here the determinant of the matrix A is negative.

2.2.2 Determinant of a 2 × 2 matrix


Like the determinant of a 1 × 1 matrix, determinant of a 2 × 2 matrix is also defined, as any 2 × 2
matrix is a square matrix.
 
Definition 2.2.2 Determinant of the 2 × 2 matrix

[ a11  a12
  a21  a22 ]

is denoted by

det [ a11  a12         | a11  a12 |
      a21  a22 ]   or  | a21  a22 |

and defined as

| a11  a12 |
| a21  a22 |  = a11 a22 − a21 a12 .
 
Example 2.2.2 Consider the 2 × 2 matrix

A = [ −2  0
       2  3 ].

Then the determinant of A is

| −2  0 |
|  2  3 |  = (−2)(3) − (2)(0) = −6 .

Remark 2.2.1 From here on in this text, we use the notation det(A) together with the name A of the square matrix A = [aij ], and the bar notation |aij | elsewhere.

2.2.3 Minors and cofactors of a square matrix


Minors and cofactors of a square matrix is defined in terms of determinants.

Definition 2.2.3 Let A = [aij ] be a square matrix of order n × n. Then the minor corresponding to the (i, j)-th entry aij of the matrix A, denoted by Mij , is defined as the determinant of the square matrix of order (n − 1) × (n − 1) obtained by deleting the i-th row and j-th column of A.

As minor is itself a determinant, it is simply a number.


 
Example 2.2.3 Consider the matrix

A = [ 1  −4
      8   0 ].

Here the minor corresponding to the (1, 1)-th entry 1 of the matrix A is

M11 = |0| = 0,

the minor corresponding to the (1, 2)-th entry −4 is

M12 = |8| = 8,

the minor corresponding to the (2, 1)-th entry 8 is

M21 = |−4| = −4

and the minor corresponding to the (2, 2)-th entry 0 is

M22 = |1| = 1.

Example 2.2.4 Consider the matrix

A = [  1  −4    4
       5   0    8
      −5  −9  1/2 ].

Here the minor corresponding to the (1, 1)-th entry 1 is

M11 = |  0    8  | = 0 − (−72) = 72,
      | −9   1/2 |

the minor corresponding to the (1, 2)-th entry −4 is

M12 = |  5    8  | = 5/2 − (−40) = 85/2,
      | −5   1/2 |

the minor corresponding to the (1, 3)-th entry 4 is

M13 = |  5   0 | = (−45) − (0) = −45,
      | −5  −9 |

and similarly

M21 = 34,   M22 = 41/2,   M23 = −29,
M31 = −32,  M32 = −12,   M33 = 20.

Definition 2.2.4 Let A = [aij ] be a square matrix and Mij be the minor corresponding to the entry aij . Then the cofactor corresponding to the entry aij is denoted by Cij and defined as

Cij = (−1)i+j · Mij .

Clearly, cofactor corresponding to an entry of a square matrix is nothing but the corresponding
signed minor and so cofactor is also a number.

Example 2.2.5 In Example 2.2.4 we obtained the minors corresponding to the entries of the matrix

A = [  1  −4    4
       5   0    8
      −5  −9  1/2 ]

as

M11 = 72,   M12 = 85/2,  M13 = −45,
M21 = 34,   M22 = 41/2,  M23 = −29,
M31 = −32,  M32 = −12,   M33 = 20.

Then the cofactor corresponding to the (1, 1)-th entry of A is

C11 = (−1)1+1 · M11 = (1) · (72) = 72,

the cofactor corresponding to the (1, 2)-th entry of A is

C12 = (−1)1+2 · M12 = (−1) · (85/2) = −85/2,

the cofactor corresponding to the (1, 3)-th entry of A is

C13 = (−1)1+3 · M13 = (1) · (−45) = −45

and similarly

C21 = −34,  C22 = 41/2,  C23 = 29,
C31 = −32,  C32 = 12,    C33 = 20.

Here observe that the cofactor corresponding to any entry of the matrix A is either the same as or the opposite of its corresponding minor.

18
Note 2.2.1 The sign of the cofactor corresponding to an entry of a square matrix is either same
or opposite to the sign of its corresponding minor.

Note 2.2.2 The checkerboard array

[ +  −  +  −  +  . . .
  −  +  −  +  −  . . .
  +  −  +  −  +  . . .
  −  +  −  +  −  . . .
  +  −  +  −  +  . . .
  .  .  .  .  .       ]

displays the rule (−1)i+j for assigning the sign of the cofactor Cij corresponding to the entry aij of a square matrix. According to the pattern of this checkerboard array,

C11 = M11 , C12 = −M12 , C13 = M13 , C21 = −M21

and so on.
Observe that in the checkerboard array all signs along the principal diagonal are positive.

Note 2.2.3 The sum of the products of the elements of a row (or a column) and the respective cofactors of any other row (or any other column) is zero. For example, in

A = [ 10   5
       3  −8 ]

we consider the elements of the 1st row and the corresponding cofactors of the 2nd row to obtain that

a11 C21 + a12 C22 = 10(−1)2+1 (5) + 5(−1)2+2 (10) = −50 + 50 = 0.

2.2.4 Determinant of a general n × n square matrix



Let A = [aij ] be a square matrix of order n × n. Then the determinant of A is defined by

det(A) = ai1 Ci1 + ai2 Ci2 + ai3 Ci3 + · · · + ain Cin    (cofactor expansion along the i-th row)

or

det(A) = a1j C1j + a2j C2j + a3j C3j + · · · + anj Cnj    (cofactor expansion along the j-th column).
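The cofactor-expansion definition translates directly into a recursive routine; a pure-Python sketch expanding along the first row:

```python
# Recursive determinant via cofactor (Laplace) expansion along the first row.

def det(A):
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j.
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(minor)
    return total

assert det([[-2, 0], [2, 3]]) == -6                        # Example 2.2.2
assert det([[-2, 0, 1], [2, 3, 0], [-1, -3, 4]]) == -27
```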
 
Example 2.2.6 Consider the matrix

A = [ −2  0
       2  3 ]

given in Example 2.2.2. All the minors and cofactors of A are respectively

M11 = 3, M12 = 2, M21 = 0, M22 = −2,

and

C11 = 3, C12 = −2, C21 = −0 = 0, C22 = −2.
Taking cofactor expansion along the 1st row of A, we obtain

det(A) = a11 C11 + a12 C12 = (−2)(3) + (0)(−2) = −6,

which coincides with the det(A) obtained in Example 2.2.2 .


Again, taking cofactor expansion along the 1st column of A, we obtain

det(A) = a11 C11 + a21 C21 = (−2)(3) + (2)(0) = −6,

which also coincides with the det(A) obtained in Example 2.2.2 .
Similarly, we get the same value of det(A), taking cofactor expansion along the 2nd row and
2nd column of A.
 
Example 2.2.7 Consider the matrix

A = [ −2   0  1
       2   3  0
      −1  −3  4 ].

Here the cofactor corresponding to
the (1, 1)-th entry −2 of A is
C11 = (−1)1+1 · |  3  0 | = (1)(12 − 0) = 12,
                | −3  4 |

the cofactor corresponding to the (1, 2)-th entry 0 of A is

C12 = (−1)1+2 · |  2  0 | = (−1)(8 − 0) = −8,
                | −1  4 |

the cofactor corresponding to the (1, 3)-th entry 1 of A is

C13 = (−1)1+3 · |  2   3 | = (1) ((−6) − (−3)) = −3,
                | −1  −3 |
and similarly
C21 = −3, C22 = −7, C23 = −6,
C31 = −3, C32 = 2, C33 = −6.
Taking cofactor expansion along the 1st row of A, we obtain
det(A) = a11 C11 + a12 C12 + a13 C13
= (−2)(12) + (0)(−8) + (1)(−3) = −27.
Again, taking cofactor expansion along the 1st column of A, we obtain
det(A) = a11 C11 + a21 C21 + a31 C31
= (−2)(12) + (2)(−3) + (−1)(−3) = −27,
which coincides with the det(A) obtained by taking cofactor expansion along the 1st row of A.
Similarly, we get the same value of det(A) taking cofactor expansion along any row and any column of A.
 
Problem 2.2.1 For the matrix

A = [  0  −2  1
      −1   3  0
       2  −3  3 ],
a. find det(A), taking cofactor expansion along the 1st row.
b. find det(A), taking cofactor expansion along the 2nd column.
c. compare the values of det(A) obtained by cofactor expansion along the 1st row and 2nd column
of A.

Solution a. Taking cofactor expansion along the 1st row of A, we obtain


det(A) = a11 C11 + a12 C12 + a13 C13
       = (0) · (1) · |  3  0 | + (−2) · (−1) · | −1  0 | + (1) · (1) · | −1   3 |
                     | −3  3 |                 |  2  3 |               |  2  −3 |
       = (0)(9) + (−2)(3) + (1)(−3) = −9.

b. Taking cofactor expansion along the 2nd column of A, we obtain

det(A) = a12 C12 + a22 C22 + a32 C32
       = (−2) · (−1) · | −1  0 | + (3) · (1) · | 0  1 | + (−3) · (−1) · |  0  1 |
                       |  2  3 |               | 2  3 |                 | −1  0 |
       = (2)(−3) + (3)(−2) + (3)(1) = −9.

c. Values of det(A) obtained by cofactor expansion along the 1st row and 2nd column of A
are equal.
 
Exercise 2.2.1 To find the determinant of the matrix

A = [ 0  70  1000
      5   0    10
      8   0     2 ],

cofactor expansion along which row or column will be comparatively easy? Why? Find det(A) also.

Problem 2.2.2 Prove or disprove that

• determinants of two different matrices with different orders can never be equal;

• or, that two matrices having equal determinants implies they are of the same order;

• or, that determinants of two different matrices cannot be equal.


 
Disproof Consider the matrices

A = [ −2   2       and   B = [ 5   0  0
      −2  −3 ]                 1   3  1
                               8  −2  0 ].

Here the orders of A and B are not equal, but det(A) = 10 = det(B).

2.3 Determinants for Solving a System of Linear Equations


Theorem 2.3.1 (Cramer’s Rule 1 for solving a system of linear equations in two
variables) Solution, if exists, of the system of linear equations in two variables

a1 x + b 1 y = c 1 (2.1)
a2 x + b 2 y = c 2 (2.2)

is
Dx Dy
x= , y= ,
D D
where
a1 b1 c1 b1 a1 c1
D= , Dx = , Dy = .
a2 b2 c2 b2 a2 c2

Proof. Multiplying Equation (2.1) by b2 and Equation (2.2) by b1 , we obtain respectively

a1 b 2 x + b 1 b 2 y = b 2 c 1 (2.3)
a2 b 1 x + b 1 b 2 y = b 1 c 2 (2.4)
1
Though variations of Cramer's rule were fairly well known before it was discussed in the research work of the Swiss mathematician Gabriel Cramer (1704-1752) in the year 1750, it was popularised by him. So mathematicians attached his name to this method (see [2, Page 124]).

Subtracting Equation (2.4) from Equation (2.3), we find

x = (b2 c1 − b1 c2) / (a1 b2 − a2 b1)

  = | c1  b1 |  ÷  | a1  b1 |  = Dx / D.
    | c2  b2 |     | a2  b2 |

Again, multiplying Equation (2.1) by a2 and Equation (2.2) by a1 , we obtain respectively

a1 a2 x + a2 b1 y = a2 c1 (2.5)
a1 a2 x + a1 b2 y = a1 c2 (2.6)

Subtracting Equation (2.6) from Equation (2.5), we find

y = (a1 c2 − a2 c1) / (a1 b2 − a2 b1)

  = | a1  c1 |  ÷  | a1  b1 |  = Dy / D.
    | a2  c2 |     | a2  b2 |

Hence the statement.

Problem 2.3.1 Using Cramer’s Rule, solve the following system of linear equations

2x + 3y = 1
3x − 4y = 3

Solution The given system of linear equations is

2x + 3y = 1
3x − 4y = 3        (2.7)

Now, for the system (2.7), we get

D = | 2   3 | = −17,   Dx = | 1   3 | = −13,   Dy = | 2  1 | = 3.
    | 3  −4 |               | 3  −4 |               | 3  3 |

Thus

x = Dx / D = 13/17,   y = Dy / D = −3/17.

Therefore our required solution is

(x, y) = (13/17, −3/17).

Answer. (x, y) = (13/17, −3/17).
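Cramer's rule for two unknowns is short to implement; a Python sketch applied to the system above, with exact arithmetic via fractions (the helper names are my own, not from the text):

```python
from fractions import Fraction

# Cramer's rule for two equations, applied to the system of Problem 2.3.1.

def det2(a, b, c, d):
    # Determinant of [[a, b], [c, d]].
    return a * d - c * b

def cramer2(a1, b1, c1, a2, b2, c2):
    # Solves a1 x + b1 y = c1 and a2 x + b2 y = c2, assuming D != 0.
    D = det2(a1, b1, a2, b2)
    Dx = det2(c1, b1, c2, b2)
    Dy = det2(a1, c1, a2, c2)
    return Fraction(Dx, D), Fraction(Dy, D)

x, y = cramer2(2, 3, 1, 3, -4, 3)
assert (x, y) == (Fraction(13, 17), Fraction(-3, 17))
```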

Theorem 2.3.2 (Cramer’s rule for solving a system of linear equations in three vari-
ables) Solution, if exists, of the system of linear equations in three variables
a1 x + b1 y + c1 z = d1 (2.8)
a2 x + b2 y + c2 z = d2 (2.9)
a3 x + b 3 y + c 3 z = d 3 (2.10)
is
Dx Dy Dz
x= , y= , z= ,
D D D
where
a1 b1 c1 d1 b1 c1 a1 d1 c1
D = a2 b2 c2 , D x = d 2 b2 c2 , Dy = a2 d2 c2 ,
a3 b3 c3 d3 b3 c3 a3 d3 c3
a1 b1 d1
D z = a2 b2 d2 .
a3 b3 d3
Proof. Left as an exercise to the reader.

Note 2.3.1 Since determinants are used to solve a system a linear equations by means of Cramer’s
rule, this method is also referred as determinant method.

Exercise 2.3.1 Using determinant method, solve the following system of linear equations
2x + 3y + 3z = 1
3x − 4y + z = 3
x − z = 0.

2.4 Some Properties of Determinants


2.4.1 Some Properties of Determinants
Here we list some basic properties of determinants.
The next Theorem 2.4.1 shows that the determinant of the transpose of a square matrix is the same as its own determinant.

Theorem 2.4.1 If A is a square matrix, then


det(AT ) = det(A).
Proof. Simple consequence of the definition of determinant.
Theorem 2.4.1 implies that if the rows of a square matrix are changed into columns and its
columns are changed into rows, then the determinant of that matrix remains unaltered.
 
Example 2.4.1 Consider the matrix

A = [ 1  2   0
      3  0  −2
      5  3   0 ].

Then

AT = [ 1   3  5
       2   0  3
       0  −2  0 ].

Here det(AT ) = det(A).

Theorem 2.4.2 If A is a triangular matrix, then

det(A) = product of the diagonal elements of A.

Proof. Simple consequence of the definition of determinant.


 
Example 2.4.2 Consider the matrix

A = [ 1  2   0
      0  0  −2
      0  0   5 ].

Here A is an upper triangular matrix and

det(A) = 0 = 1 × 0 × 5 = product of the diagonal elements of A.

Corollary 2.4.1 If A is a diagonal matrix, then

det(A) = product of the diagonal elements.


 
Example 2.4.3 Consider the matrix

A = [ 1  0  0
      0  2  0
      0  0  5 ].

Here A is a diagonal matrix and

det(A) = 10 = 1 × 2 × 5 = product of the diagonal elements of A.

Corollary 2.4.2 det(In ) = 1.

Example 2.4.4 Consider the identity matrix I3 of order 3


 
I3 = [ 1  0  0
       0  1  0
       0  0  1 ].

Here
det(I3 ) = 1 × 1 × 1 = 1.

Theorem 2.4.3 If a square matrix A has a row or column of zeros, then

det(A) = 0.
 
Example 2.4.5 Consider the matrix

A = [ 5  10  0
      0   0  0
      2   6  5 ].

Here all entries along the 2nd row of A are 0 (zero) and its determinant is

det(A) = 0.

Theorem 2.4.4 If any row (or a column) in a square matrix A is a multiple of another row (or
column) of it, then det(A) = 0.
 
Example 2.4.6 Consider the matrix

A = [ 1   3   0
      3   9  −1
      7  21   1 ].

Here the elements of the 2nd column are 3 times the corresponding elements of the 1st column of A and

det(A) = 0.

Note 2.4.1 The statement

If the ratios of the corresponding elements of any two rows or columns are equal in a square matrix A, then det(A) = 0

is not an equivalent statement of the above theorem. For example, in case of the matrix

A = [  3   9  0
       1   3  0
      17  21  1 ],

the first two elements of the 1st row are 3 times the corresponding elements of the 2nd row, but the ratio of the third elements of these two rows is 0/0, which is an indeterminate form. In this case also

det(A) = 1(9 − 9) = 0.

Corollary 2.4.3 If a square matrix A has two identical rows or columns, then det(A) = 0.
 
Example 2.4.7 Consider the matrix

A = [ 1   3  10
      1   3  10
      7  21   1 ].

Here the 1st row and 2nd row of A are identical and

det(A) = 0.
 
Problem 2.4.1 Let

∆ = det [ a  a  x
          m  m  m
          b  x  b ].

The roots of the equation ∆ = 0 are

(a) x = a, m

(b) x = b, m

(c) x = a, b

(d) None.

Answer (c) x = a, b.

Theorem 2.4.5 If two rows or two columns of a square matrix are interchanged, then the deter-
minant of the new matrix will be the negative of the old one.

Example 2.4.8

| a11  a12  a13 |       | a11  a12  a13 |
| a21  a22  a23 |  = −  | a31  a32  a33 |
| a31  a32  a33 |       | a21  a22  a23 |

or

| a11  a12  a13 |       | a12  a11  a13 |
| a21  a22  a23 |  = −  | a22  a21  a23 |
| a31  a32  a33 |       | a32  a31  a33 |

Theorem 2.4.6 If each element of a row or of a column in a square matrix A are multiplied by
a constant k, then the determinant of the new matrix A′ will be equal to k · det(A).

Example 2.4.9

| ka11  ka12  ka13 |        | a11  a12  a13 |
|  a21   a22   a23 |  =  k  | a21  a22  a23 |
|  a31   a32   a33 |        | a31  a32  a33 |

or

| ka11  a12  a13 |        | a11  a12  a13 |
| ka21  a22  a23 |  =  k  | a21  a22  a23 |
| ka31  a32  a33 |        | a31  a32  a33 |

Theorem 2.4.7 If each element of a row (column) in a square matrix A are increased or de-
creased by the equal multiple of the corresponding elements of another row (or column), then the
determinant of the new matrix A′ remains same as det(A).

Example 2.4.10
a11 + ka12 a12 a13 a11 a12 a13
a21 + ka22 a22 a23 = a21 a22 a23
a31 + ka32 a32 a33 a31 a32 a33
or
a11 a12 − ka11 a13 a11 a12 a13
a21 a22 − ka21 a23 = a21 a22 a23
a31 a32 − ka31 a33 a31 a32 a33
 
Problem 2.4.2 For

det [ α  α  x
      β  β  β
      θ  x  θ ] = 0,

find x.

Solution Here

det [ α  α  x
      β  β  β
      θ  x  θ ] = 0

implies that

det [   0    α  x
        0    β  β   = 0;        c′1 = c1 − c2
      θ − x  x  θ ]

⇒ (θ − x) · (−1)3+1 · det [ α  x   = 0;    expanding along the 1st column
                            β  β ]

⇒ (θ − x) · β(α − x) = 0;       assuming β ̸= 0

∴ x = α, θ.

Answer x = α, θ.

Theorem 2.4.8 If each entry of a row (column) in a square matrix A of order n × n can be
expressed as the sum of the entries of corresponding rows (columns) of two square matrices A′ and
A′′ with size n × n, then det(A) can be expressed as the sum of det(A′ ) and det(A′′ ).

Example 2.4.11

| a11 + k1  a12  a13 |     | a11  a12  a13 |     | k1  a12  a13 |
| a21 + k2  a22  a23 |  =  | a21  a22  a23 |  +  | k2  a22  a23 |
| a31 + k3  a32  a33 |     | a31  a32  a33 |     | k3  a32  a33 |

or

| a11 + k1  a12 + k2  a13 + k3 |     | a11  a12  a13 |     | k1   k2   k3  |
|   a21       a22       a23    |  =  | a21  a22  a23 |  +  | a21  a22  a23 |
|   a31       a32       a33    |     | a31  a32  a33 |     | a31  a32  a33 |
 
Problem 2.4.3 Let

F (x) = det [ f (x)  ϕ(x)
              g(x)  ϕ(x) ].

Then establish the equality

F (x + h) − F (x) = ∆1 + ∆2 ,

whenever

∆1 = det [ f (x + h) − f (x)   ϕ(x + h)      and   ∆2 = det [ f (x)   ϕ(x + h) − ϕ(x)
           g(x + h) − g(x)     ϕ(x + h) ]                     g(x)   ϕ(x + h) − ϕ(x) ].
   
Problem 2.4.4 Solve

det [ 1  4  x         [ 22  4  x
      2  5  8 ] = det   26  5  8
      3  6  9 ]         30  6  9 ].

Solution Here

det [ 1  4  x         [ 22  4  x
      2  5  8 ] = det   26  5  8
      3  6  9 ]         30  6  9 ]

⇒ det [ 1  4  x         [ 22  4  x
        2  5  8 ] − det   26  5  8   = 0
        3  6  9 ]         30  6  9 ]

⇒ det [ 1 − 22  4  x
        2 − 26  5  8   = 0;    the two determinants differ only in the 1st column
        3 − 30  6  9 ]

⇒ det [ −21  4  x
        −24  5  8   = 0
        −27  6  9 ]

⇒ (−3) · det [ 7  4  x
               8  5  8   = 0
               9  6  9 ]

⇒ det [ 7  4  x
        8  5  8   = 0.
        9  6  9 ]

Expanding the last determinant along the 3rd column gives 3x − 21 = 0, so x = 7; equivalently, x = 7 is exactly the value that makes the 1st and 3rd columns identical.

Answer 7.

Theorem 2.4.9 If A and B are two square matrices with same order, then

det(AB) = det(A)det(B).

The above Theorem 2.4.9 reflects the fact that the determinant of the product of two square matrices of the same order is equal to the product of their individual determinants. That means, determinant is a multiplicative function.
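Theorem 2.4.9 can be spot-checked for 2 × 2 matrices; the entries below are arbitrary illustrative choices:

```python
# A spot check of det(AB) = det(A) det(B) for 2 x 2 matrices.

def det2(A):
    return A[0][0] * A[1][1] - A[1][0] * A[0][1]

def matmul(A, B):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*B)] for row in A]

A = [[2, 3], [1, 4]]
B = [[0, 5], [7, -2]]

assert det2(matmul(A, B)) == det2(A) * det2(B)
```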

Definition 2.4.1 The determinant of a Vandermonde matrix of order n is called the Vander-
monde determinant of order n.

Exercise 2.4.1 Solve the equation

det [ x2  x   2
      2   1   1   = 0.
      0   0  −5 ]

Answer x = 0, 2.

2.5 Construction of a Square Matrix with Given Order and Determinant

Here we study how a square matrix of order n × n having determinant ∆ can be formed.

2.5.1 Construction of an n × n matrix with determinant ∆

There are infinitely many matrices having ∆ as their determinant.
If n = 1, then the matrix is simply [∆].
If n > 1, then the matrix will have n2 entries. In this case, just fill any n2 − 1 entries of the matrix with any random values that come to your mind. When the last entry remains, one of the following two cases can happen :

• Case 01 : The minor of the remaining entry is nonzero. Just write a in place of the remaining entry, expand the determinant, set it equal to ∆ and solve the equation for a. There you have your matrix.

• Case 02 : The minor of the remaining entry is zero. Change one of the other values that occur in the calculation of the minor so that the minor becomes nonzero and then follow Case 01.
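The procedure above can be sketched in Python: the determinant is linear in any single entry, with that entry's cofactor as the slope, so the last entry solves a linear equation (Case 02 is handled here by simply re-randomizing). The helper names are my own, not from the text:

```python
from fractions import Fraction
import random

# Build an n x n matrix with prescribed determinant delta by filling
# n^2 - 1 entries at random and solving for the last one.

def det(A):
    # Cofactor expansion along the first row, as in Section 2.2.
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))

def matrix_with_determinant(n, delta, seed=0):
    rng = random.Random(seed)
    while True:
        A = [[Fraction(rng.randint(-5, 5)) for _ in range(n)] for _ in range(n)]
        A[-1][-1] = Fraction(0)
        base = det(A)            # determinant with the last entry set to 0
        A[-1][-1] = Fraction(1)
        cof = det(A) - base      # cofactor of the last entry (Case 01 needs it nonzero)
        if cof != 0:
            A[-1][-1] = (Fraction(delta) - base) / cof
            return A
        # Case 02: the cofactor vanished; re-randomize and try again.

A = matrix_with_determinant(3, 7)
assert det(A) == 7
```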

Question 2.5.1 How many 1 × 1 square matrices are there having ∆ as determinant?

Answer Exactly one, namely [∆].

2.6 Inverse of a Square Matrix


Definition 2.6.1 Two matrices A and B are said to be inverse of each other, if AB = BA = I.
Sometimes, inverse of a matrix may not exist. When inverse of a matrix exists, then that matrix
is said to be invertible.
 
Example 2.6.1 Consider that

A = [  1  5      and   B = [ 3/13  −5/13
      −2  3 ]                2/13   1/13 ].

Here A and B are inverse of each other, as

AB = BA = [ 1  0
            0  1 ] = I.

Question 2.6.1 Can AB be equal to I without BA being equal to I? Explain with an example.

Answer Yes. Consider that

A = [ 1  3  −1      and   B = [  1  1
      1  1   0 ]                −1  0
                                −3  1 ].

Here AB = I, but BA ̸= I.
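The pair from the answer above can be checked mechanically; a short Python sketch:

```python
# Checking that AB = I_2 while BA (a 3 x 3 matrix) is not an identity.

def matmul(A, B):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*B)] for row in A]

A = [[1, 3, -1],
     [1, 1, 0]]
B = [[1, 1],
     [-1, 0],
     [-3, 1]]

assert matmul(A, B) == [[1, 0], [0, 1]]                     # AB = I_2
assert matmul(B, A) != [[1, 0, 0], [0, 1, 0], [0, 0, 1]]    # BA != I_3
```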

Note 2.6.1 Some similar pairs of matrices satisfying the answer of Question 2.6.1 are as follows:

[ 0  0  1       [ 0  1
  1  1  0 ],      0  0   ;
                  1  0 ]

[ 1  2  0       [ −2     1
  3  4  0 ],      3/2  −1/2   ;
                   0     0  ]

[ 6  2  1       [  1  −2
  5  1  3 ],      −2   5   ;
                  −1   2 ]

[ 2  3  0       [  2  −3
  1  2  0 ],      −1   2    etc.
                   0   0 ]
Question 2.6.2 Why must both the left product AB and the right product BA be I in the definition of inverse matrices?

Answer Because for some pairs of matrices A and B, it may happen that AB = I whereas BA ̸= I. For instance, consider the pair of matrices

A = [  2  −3   4      and   B = [ −3  −3
      −2   2  −3 ]                 3   2
                                   4   3 ].

Here AB = I, though BA ̸= I.

Note 2.6.2 Some similar pairs of matrices satisfying the answer of Question 2.6.2 are as follows:

[ 1  1  0       [  3  −1
  2  3  0 ],      −2   1   ;
                   0   0 ]

[ 1  0  0       [ 1  0
  0  0  1 ],      1  0   ;
                  0  1 ]

[ 3  1  0       [  2   2
  5  2  3 ],      −5  −6   ;
                   0   1 ]

[ 1  0  0  0    [ 1  0  0
  0  1  0  0      0  1  0
  0  0  0  1 ],   0  0  0   ;
                  0  0  1 ]

[ 5  4  0       [ −1     2
  3  2  0 ],      3/2  −5/2    etc.
                   0     0  ]
   
Note 2.6.3 Let

A = [ a  b      and   A−1 = [ p  q
      c  d ]                  r  s ].

Adding a column of zeros to A and adding a row of zeros to A−1 , we obtain respectively two such matrices

B = [ a  b  0      and   C = [ p  q
      c  d  0 ]                r  s
                               0  0 ],

for which BC = I whereas

CB = [ 1  0  0
       0  1  0   ̸= I.
       0  0  0 ]

Theorem 2.6.1 Inverse of a matrix, if exists, is unique.

Proof. Let A be a square matrix. If possible, then let us suppose that B, C are two inverse matrices
of A. So
AB = BA = I and AC = CA = I.
Now,

B = BI
= B(AC)
= (BA)C; [as matrix multiplication is associative]
= IC = C.

Hence the proof is complete.

Note 2.6.4 Since the inverse of a matrix is unique, the inverse of a matrix A, if it exists, is denoted by A−1 .

Theorem 2.6.2 If A and B are two invertible matrices, then

(AB)−1 = B −1 A−1 .

Proof. Let A and B be two invertible matrices. So A−1 and B −1 exist.
Now,
(AB)(B −1 A−1 ) = A(BB −1 )A−1 = AIA−1 = (AI)A−1 = AA−1 = I.
Again,
(B −1 A−1 )(AB) = B −1 (A−1 A)B = B −1 IB = B −1 (IB) = B −1 B = I.
Hence (AB)−1 = B −1 A−1 .

Theorem 2.6.3 If A is an invertible matrix, then (A−1 )−1 = A.

Proof. Since AA−1 = A−1 A = I, the matrix A is an inverse of A−1 . By the uniqueness of the inverse (Theorem 2.6.1), (A−1 )−1 = A.

Definition 2.6.2 Let A be a square matrix. Then A is said to be singular, if det(A) = 0. If


det(A) ̸= 0, then A is said to be a non-singular matrix.
   
Example 2.6.2 Consider the matrices

A = [ 1  2      and   B = [ 1  0
      3  6 ]                3  6 ].

Here A is a singular matrix, as det(A) = 6 − 6 = 0 and B is a non-singular matrix, as det(B) = 6 − 0 = 6 ̸= 0.

The next theorem known as determinant test for invertibility provides an important crite-
rion to determine whether a matrix is invertible.

Theorem 2.6.4 A square matrix A is invertible if and only if det(A) ̸= 0.

Proof. See [2, page 120–121].


The next theorem provides a technique to find the inverse of a 2 × 2 square matrix easily.

Theorem 2.6.5 The inverse of the matrix

A = [ a  b
      c  d ]

is

A−1 = (1/det(A)) · [  d  −b
                     −c   a ],

provided det(A) ̸= 0.

Proof. Left as an exercise to the reader.

2.6.1 Cofactor matrix



Let A = [aij ] be a square matrix and Cij be the cofactor corresponding to the entry aij . Then the cofactor matrix of A is denoted by cof(A) and defined as

cof(A) = [Cij ].
 
Example 2.6.3 Consider the matrix

A = [ −2   0  1
       2   3  0
      −1  −3  4 ].

All cofactors corresponding to the entries of the matrix A are
the entries of the matrix A are
C11 = 12, C12 = −8, C13 = −3,
C21 = −3, C22 = −7, C23 = −6,
C31 = −3, C32 = 2, C33 = −6.
So here

cof(A) = [ 12  −8  −3
           −3  −7  −6
           −3   2  −6 ].

2.6.2 Adjoint matrix

Let A = [aij ] be a square matrix and Cij be the cofactor corresponding to the entry aij . Then the adjoint matrix of A is denoted by adj(A) and defined as

adj(A) = [Cij ]T = [Cji ].

That is, the adjoint matrix of a given square matrix is the transpose of its cofactor matrix.
 
Example 2.6.4 Consider the matrix

A = [ −2   0  1
       2   3  0
      −1  −3  4 ]

given in Example 2.6.3. The adjoint matrix of A is

adj(A) = [ 12  −3  −3
           −8  −7   2
           −3  −6  −6 ].

Theorem 2.6.6 A · adj(A) = det(A)I.

Proof. The (i, j)-th entry of A · adj(A) is ai1 Cj1 + ai2 Cj2 + · · · + ain Cjn . For i = j this is the cofactor expansion of det(A) along the i-th row, and for i ̸= j it is zero by Note 2.2.3. Hence A · adj(A) = det(A)I.
The next Theorem provides a technique to find the inverse of any invertible square matrix.

Theorem 2.6.7 If the matrix A is invertible, then


1
A−1 = adj(A) .
det(A)

Proof.
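The formula A−1 = adj(A)/det(A) can be implemented directly with exact rational arithmetic; a pure-Python sketch checked against the matrix of Examples 2.6.3-2.6.4 (the helper names are my own, not from the text):

```python
from fractions import Fraction

# Inverse via the adjugate: entry (i, j) of A^{-1} is C_ji / det(A).

def minor_matrix(A, i, j):
    # Delete row i and column j.
    return [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]

def det(A):
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor_matrix(A, 0, j)) for j in range(len(A)))

def matmul(A, B):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*B)] for row in A]

def inverse(A):
    n = len(A)
    d = Fraction(det(A))
    if d == 0:
        raise ValueError("matrix is singular")
    if n == 1:
        return [[1 / d]]
    # Entry (i, j) of adj(A) is the cofactor C_ji of A.
    return [[(-1) ** (i + j) * det(minor_matrix(A, j, i)) / d for j in range(n)]
            for i in range(n)]

A = [[-2, 0, 1], [2, 3, 0], [-1, -3, 4]]
assert matmul(A, inverse(A)) == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```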
   
Problem 2.6.1 If

A = [ 4  3      and   AB = [ 10  17
      2  1 ]                  4   7 ],

then find the matrix B.

Solution Given that


 
AB = [ 10  17
        4   7 ]

⇒ A−1 (AB) = A−1 [ 10  17      multiplying A−1 on the left of both sides
                    4   7 ];

⇒ (A−1 A)B = A−1 [ 10  17      by associativity
                    4   7 ];

⇒ I2 B = A−1 [ 10  17          as A−1 A = I2
                4   7 ];

⇒ B = A−1 [ 10  17             as I2 B = B
             4   7 ];

⇒ B = [ −1/2  3/2   [ 10  17      as A−1 = [ −1/2  3/2
          1   −2  ]    4   7 ];               1    −2 ]

∴ B = [ 1  2
        2  3 ].
Theorem 2.6.8 If A is a square matrix, then
1
det(A−1 ) =
det(A)

Proof. We have

det(AA−1 ) = det(In )
⇒ det(A)det(A−1 ) = 1 [as A and A−1 are of same of order]
1
∴ det(A−1 ) = .
det(A)

Hence our proof is complete.

Problem 2.6.2 If A is a n × n matrix and det(A) = σ, then for λ ∈ R find det((λA)−1 ).

Solution Now,

det(λA) = λn det(A) = λn σ.

So
1
det((λA)−1 ) =
det(λA)
1
= n .
λ σ
Problem 2.6.3 Problem 34 of [2, Page 50 of Exercise 1.4]

Problem 2.6.4 Problem 35 of [2, Page 50 of Exercise 1.4]

Problem 2.6.5 When AAT = In , then determine det(A).

Solution Since AAT = In , we have

det(AAT ) = det(In )
⇒ det(A)det(AT ) = 1
⇒ det(A)det(A) = 1; [by Theorem 2.4.1]
⇒ (det(A))2 = 1
∴ det(A) = ±1 .

Exercise 2.6.1 Suppose that

T = {A | A is a square matrix and AAT = In }.

Then obtain Range(det) for the function

det : T → R.

Solution Range(det) = {−1, 1}.

Problem 2.6.6 (Application of Problem 2.6.5) Define an orthogonal matrix of order 2 × 2 that includes the trigonometric functions cos θ and sin θ as its entries.

Solution Let our required orthogonal matrix be

A = [ a  b
      c  d ].

According to the definition of an orthogonal matrix, AAT = I2 and so either det(A) = 1 or det(A) = −1.

Here AT = [ a  c
            b  d ] and so

AAT = [ a2 + b2   ac + bd       = I2 = [ 1  0
        ac + bd   c2 + d2 ]              0  1 ]

⇒ a2 + b2 = c2 + d2 = 1, ac + bd = 0 .

Now,

• as a2 + b2 = 1, we can take a = cos θ and b = sin θ for θ ∈ (−π, π].

• for any real number ρ, if we take c = ρ sin θ and d = −ρ cos θ or c = −ρ sin θ and d = ρ cos θ, then ac + bd becomes 0 (zero). But in this problem c2 + d2 = 1, so we must take ρ = 1. Thus either c = sin θ and d = − cos θ, or c = − sin θ and d = cos θ.

Hence our required matrix is

A = [ cos θ    sin θ       or   [  cos θ  sin θ
      sin θ  − cos θ ]          − sin θ   cos θ ].

Exercise 2.6.2 Obtain the determinants of the following matrices:

[ cos θ    sin θ        [  cos θ  sin θ
  sin θ  − cos θ ],      − sin θ  cos θ ].
 
Exercise 2.6.3 For which value of m will the matrix

[ cos θ    sin θ
  sin θ  − cos θ ]

represent a reflection about the line y = mx?
 
Exercise 2.6.4 If

[  cos θ  sin θ
  − sin θ  cos θ ]

represents a rotation through α, what will be the value of α?

2.6.3 Another approach for finding inverse matrix


It is discussed in Subsection 4.2.2 of this text.

Chapter 3

System of Linear Equations

For better understanding and detail discussions of this chapter the reader is referred to Anton and
Rorres [2, Chapter 1], Lipschutz and Lipson [6, Chapter 3], Wildgerger [8].

3.1 System of Linear Equations


3.1.1 Linear equation
Definition 3.1.1 (Linear equation) [2, Page 2] An equation expressible in the form

a1 x 1 + a2 x 2 + a3 x 3 + · · · + an x n = b

is called a linear equation in the variables xi , where ai , b are constants for 1 ≤ i ≤ n and all ai
are not zero at a time.

In special case, when b = 0, then the linear equation is referred as homogeneous.

Definition 3.1.2 (Homogeneous linear equation) [2, Page 2] A linear equation in n variables x1 ,
x2 , x3 , · · · , xn having the form

a1 x1 + a2 x2 + a3 x3 + · · · + an xn = 0

is called a homogeneous linear equation.

Definition 3.1.3 (Degenerate linear equation) [6, Page 62] A linear equation expressible in the
form
0x1 + 0x2 + 0x3 + · · · + 0xn = b
is called a degenerate linear equation in the variables xi .

The solution of a degenerate linear equation only depends on the value of b. If b ̸= 0, then
0x1 + 0x2 + 0x3 + · · · + 0xn = b has no solution. On the other hand, if b = 0, it has infinitely
many solutions. In fact, any n-tuples of Rn -space is a solution of the degenerate linear equation in
n-variables.

Exercise 3.1.1 Write down which of the following equations are linear and which are not.

(a) 2x² + y − 3 = 0 (b) x − 5y³ − 3 = 0 (c) x − 5y − 3 = 0
(d) x³ − 5y = 8 (e) x − 5y − 10 = 0 (f) 3/x − 5y − 10³ = 0
(g) xy − 5y = 8 (h) x/y − 5y − 10 = 0 (i) 3xy − 5y − 10³ = 0.

34
3.1.2 System of linear equations
A system of linear equations is nothing but a finite set of linear equations.

Definition 3.1.4 (Linear system) [2, Page 2] A system of m linear equations in n unknown
variables x1 , x2 , x3 , · · · , xn is of the form
\begin{align*}
a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + \cdots + a_{1n}x_n &= b_1\\
a_{21}x_1 + a_{22}x_2 + a_{23}x_3 + \cdots + a_{2n}x_n &= b_2\\
a_{31}x_1 + a_{32}x_2 + a_{33}x_3 + \cdots + a_{3n}x_n &= b_3\\
&\ \,\vdots\\
a_{m1}x_1 + a_{m2}x_2 + a_{m3}x_3 + \cdots + a_{mn}x_n &= b_m
\end{align*}
The variables x1, x2, x3, · · · , xn are called unknowns.

Briefly, a system of linear equations is referred to as a linear system.

Exercise 3.1.2 Write down which of the following systems of equations are systems of linear
equations and which are not:
(a) 2x + y − 8 = 0, √x + 5 = 0 (b) 3x + 5y − 8 = 0, √x + 5y = 8 (c) x + 5y − 87 = 0, x + 5y = 12
(d) xy + 5y − 9 = 0, x + 5y = 12 (e) eˣ + 5y − 9 = 0, x + 5y = 12 (f) 2x + 5y − 9 = 0, 5/x + 5y = 12.
Definition 3.1.5 A solution of a linear system in n unknowns x1 , x2 , x3 , · · · , xn is the ordered
n-tuple
(x1 , x2 , x3 , · · · , xn ) = (s1 , s2 , s3 , · · · , sn )
for which the substitution x1 = s1 , x2 = s2 , x3 = s3 , · · · , xn = sn makes each equation a true
statement.

Definition 3.1.6 (Equivalent linear systems) [6, Page 63] Two linear systems are called equivalent
if they have the same solution set.

3.1.3 Consistent and inconsistent linear system


Definition 3.1.7 [2, Page 3] A linear system is consistent if it has at least one solution and
inconsistent if it has no solution.

Every linear system has either no solution, exactly one solution, or infinitely many solutions; thus a consistent linear system has either exactly one solution or infinitely many solutions.

3.2 Augmented Matrix


3.2.1 Matrix representation of a system of Linear equations with two
variables
The system of linear equations in two variables
\[
\left.\begin{aligned}
a_1x + b_1y &= c_1\\
a_2x + b_2y &= c_2
\end{aligned}\right\} \tag{3.1}
\]
can be represented as
\[
\begin{pmatrix} a_1 & b_1 \\ a_2 & b_2 \end{pmatrix}
\begin{pmatrix} x \\ y \end{pmatrix}
=
\begin{pmatrix} c_1 \\ c_2 \end{pmatrix}.
\]

35
3.2.2 Matrix representation of a general linear system
The general linear system
\begin{align*}
a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + \cdots + a_{1n}x_n &= b_1\\
a_{21}x_1 + a_{22}x_2 + a_{23}x_3 + \cdots + a_{2n}x_n &= b_2\\
a_{31}x_1 + a_{32}x_2 + a_{33}x_3 + \cdots + a_{3n}x_n &= b_3\\
&\ \,\vdots\\
a_{m1}x_1 + a_{m2}x_2 + a_{m3}x_3 + \cdots + a_{mn}x_n &= b_m
\end{align*}
can be represented as
\[
\begin{pmatrix}
a_{11} & a_{12} & a_{13} & \cdots & a_{1n}\\
a_{21} & a_{22} & a_{23} & \cdots & a_{2n}\\
a_{31} & a_{32} & a_{33} & \cdots & a_{3n}\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
a_{m1} & a_{m2} & a_{m3} & \cdots & a_{mn}
\end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_n \end{pmatrix}
=
\begin{pmatrix} b_1 \\ b_2 \\ b_3 \\ \vdots \\ b_m \end{pmatrix}.
\]

3.2.3 Augmented matrix of a linear system


Augmented matrices are used to abbreviate a system of linear equations.

Definition 3.2.1 Let A⃗x = ⃗b be the matrix equation of a system of linear equations

\begin{align*}
a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + \cdots + a_{1n}x_n &= b_1\\
a_{21}x_1 + a_{22}x_2 + a_{23}x_3 + \cdots + a_{2n}x_n &= b_2\\
a_{31}x_1 + a_{32}x_2 + a_{33}x_3 + \cdots + a_{3n}x_n &= b_3\\
&\ \,\vdots\\
a_{m1}x_1 + a_{m2}x_2 + a_{m3}x_3 + \cdots + a_{mn}x_n &= b_m.
\end{align*}
Then the augmented matrix of the system is obtained by adjoining ⃗b to A as the last column:
\[
\left(A \mid \vec b\,\right) =
\begin{pmatrix}
a_{11} & a_{12} & a_{13} & \cdots & a_{1n} & b_1\\
a_{21} & a_{22} & a_{23} & \cdots & a_{2n} & b_2\\
a_{31} & a_{32} & a_{33} & \cdots & a_{3n} & b_3\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
a_{m1} & a_{m2} & a_{m3} & \cdots & a_{mn} & b_m
\end{pmatrix}.
\]
The matrix (A | ⃗b) is called the augmented matrix of the linear system A⃗x = ⃗b.
h i
Note 3.2.1 The vertical partition line | in (A | ⃗b) is optional [2, Page 34].

3.2.4 Different representations of augmented matrices in different texts


In different texts augmented matrices are represented in either of the following ways:
\[
\left(\begin{array}{ccc:c}
2 & 1 & 3 & 1\\
1 & -1 & 1 & 3\\
1 & -1 & 2 & 0
\end{array}\right)
\quad\text{or}\quad
\left(\begin{array}{ccc|c}
2 & 1 & 3 & 1\\
1 & -1 & 1 & 3\\
1 & -1 & 2 & 0
\end{array}\right)
\quad\text{or}\quad
\begin{pmatrix}
2 & 1 & 3 & 1\\
1 & -1 & 1 & 3\\
1 & -1 & 2 & 0
\end{pmatrix}.
\]

36
3.3 Elementary Row Operations
3.3.1 Objective of algebraic operation on a linear system
[2, Page 7] Each row of the augmented matrix corresponds to an equation of the associated linear
system. To solve a linear system, algebraic operations are performed in such a way that the solution
set of the system remains unaltered while the system becomes simpler.

3.3.2 When should performing algebraic operations be stopped


[2, Page 7] Algebraic operations are performed on the system until it can be ascertained whether
the linear system is consistent or inconsistent and, if solutions exist, what they are.

3.3.3 Elementary row operations of matrices


Definition 3.3.1 [2, Page 7] The three below listed operations are referred as elementary row
operations.

(i) Interchange two rows: ri ↔ rj .

(ii) Multiply a row through by a nonzero constant: kri → ri for a nonzero constant k.

(iii) Add a constant times one row to another: kri + rj → rj for a nonzero constant k.
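The three operations above can be sketched in NumPy; the helper names are our own, not from any particular library:

```python
import numpy as np

def swap(M, i, j):
    """Interchange two rows: r_i <-> r_j."""
    M[[i, j]] = M[[j, i]]

def scale(M, i, k):
    """Multiply a row through by a nonzero constant: k*r_i -> r_i."""
    M[i] = k * M[i]

def add_multiple(M, i, j, k):
    """Add a constant times one row to another: k*r_i + r_j -> r_j."""
    M[j] = k * M[i] + M[j]

# Augmented matrix of the system 2x + 3y = 1, 3x - 4y = 3.
M = np.array([[2.0, 3.0, 1.0],
              [3.0, -4.0, 3.0]])
add_multiple(M, 0, 1, -3/2)   # eliminate x from the second row
scale(M, 1, -2/17)            # make the second pivot 1
```

After these two operations the second row reads (0, 1, −3/17), i.e. y = −3/17; back-substitution into the first row then gives x.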

Question 3.3.1 Why are column operations not defined?

Answer Each row of the augmented matrix corresponds to an equation of the associated linear
system, but a column does not. Moreover, a column operation would, for example, add the coefficients
of x to the coefficients of y, which is not permissible: it would destroy the linear system.

3.4 Methods for Solving a Linear System


3.4.1 Solving a system of linear equations by using the concept of in-
verse matrix
The solution of system (3.1) is
\[
\begin{pmatrix} x \\ y \end{pmatrix}
=
\begin{pmatrix} a_1 & b_1 \\ a_2 & b_2 \end{pmatrix}^{-1}
\begin{pmatrix} c_1 \\ c_2 \end{pmatrix},
\]
provided the coefficient matrix is invertible.
Again, the system of linear equations in three variables
\[
\left.\begin{aligned}
a_1x + b_1y + c_1z &= d_1\\
a_2x + b_2y + c_2z &= d_2\\
a_3x + b_3y + c_3z &= d_3
\end{aligned}\right\} \tag{3.2}
\]
can be represented as
\[
\begin{pmatrix} a_1 & b_1 & c_1\\ a_2 & b_2 & c_2\\ a_3 & b_3 & c_3 \end{pmatrix}
\begin{pmatrix} x \\ y \\ z \end{pmatrix}
=
\begin{pmatrix} d_1 \\ d_2 \\ d_3 \end{pmatrix}
\]

37
and the solution of system (3.2) is
\[
\begin{pmatrix} x \\ y \\ z \end{pmatrix}
=
\begin{pmatrix} a_1 & b_1 & c_1\\ a_2 & b_2 & c_2\\ a_3 & b_3 & c_3 \end{pmatrix}^{-1}
\begin{pmatrix} d_1 \\ d_2 \\ d_3 \end{pmatrix}.
\]
Problem 3.4.1 Using the concept of inverse matrix, solve the following system of linear equations
2x + 3y = 1
3x − 4y = 3
Solution Given system of linear equations is
)
2x + 3y = 1
(3.3)
3x − 4y = 3

Now, system (3.3) can be represented as
\[
\begin{pmatrix} 2 & 3 \\ 3 & -4 \end{pmatrix}
\begin{pmatrix} x \\ y \end{pmatrix}
=
\begin{pmatrix} 1 \\ 3 \end{pmatrix}.
\]
Thus
\begin{align*}
\begin{pmatrix} x \\ y \end{pmatrix}
&= \begin{pmatrix} 2 & 3 \\ 3 & -4 \end{pmatrix}^{-1} \begin{pmatrix} 1 \\ 3 \end{pmatrix}
= -\frac{1}{17}\begin{pmatrix} -4 & -3 \\ -3 & 2 \end{pmatrix}\begin{pmatrix} 1 \\ 3 \end{pmatrix}\\
&= \frac{1}{17}\begin{pmatrix} 4 & 3 \\ 3 & -2 \end{pmatrix}\begin{pmatrix} 1 \\ 3 \end{pmatrix}
= \begin{pmatrix} \tfrac{13}{17} \\[4pt] -\tfrac{3}{17} \end{pmatrix}.
\end{align*}
Therefore our required solution is
\[
(x, y) = \left(\frac{13}{17},\ -\frac{3}{17}\right).
\]
Answer. (x, y) = (13/17, −3/17).
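The inverse-matrix method used above can be checked numerically; a minimal sketch assuming NumPy is available:

```python
import numpy as np

# Problem 3.4.1: solve 2x + 3y = 1, 3x - 4y = 3 via the inverse matrix.
A = np.array([[2.0, 3.0],
              [3.0, -4.0]])
b = np.array([1.0, 3.0])
solution = np.linalg.inv(A) @ b   # (x, y) = A^{-1} b
```

In practice `np.linalg.solve(A, b)` is preferred over forming the inverse explicitly, but the line above mirrors the hand computation.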

Exercise 3.4.1 Using the concept of inverse matrix, solve the following system of linear equations
2x + 3y + 3z = 1
3x − 4y + z = 3
x − z = 0.

3.4.2 Gaussian elimination


[6, Pages 64, 69] Gaussian elimination is the main method for finding the solution of a given linear
system. This method consists of using the elementary row operations to transform a given system
into an equivalent system whose solution can be easily obtained. This method essentially consists
of two parts:
(i) Forward elimination: yields either a degenerate equation with no solution or an equivalent
simpler system.
(ii) Backward elimination: Step-by-step back-substitution gives the solution of the simpler sys-
tem.

38
Pivot variable and free variables
See [6, Page 68].

Note 3.4.1 Forward elimination part of Gaussian elimination tells that whether the linear system
is consistent or inconsistent. If the system is inconsistent, then the backward elimination part should
not be applied.
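The two parts of the method can be sketched as a small routine (our own minimal NumPy implementation with partial pivoting, assuming a square system with a unique solution); the system of Problem 3.4.4 serves as a trial input:

```python
import numpy as np

def gaussian_solve(A, b):
    """Forward elimination to upper-triangular form, then back-substitution."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    # Forward elimination.
    for k in range(n):
        p = k + np.argmax(np.abs(A[k:, k]))   # partial pivoting
        A[[k, p]] = A[[p, k]]
        b[[k, p]] = b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Backward elimination (back-substitution).
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

# Trial input: x + y - z = 8, 2x - y + 3z = 9, 2x + 5y + z = 8.
A = np.array([[1.0, 1.0, -1.0],
              [2.0, -1.0, 3.0],
              [2.0, 5.0, 1.0]])
b = np.array([8.0, 9.0, 8.0])
x = gaussian_solve(A, b)
```

This sketch does not handle degenerate rows; detecting a row 0 = b with b ≠ 0 during forward elimination is exactly how inconsistency is recognized.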

Problem 3.4.2 [6, Example 3.8] Solve the following system

x1 + 3x2 − 2x3 + 5x4 = 4


2x1 + 8x2 − x3 + 9x4 = 9
3x1 + 5x2 − 12x3 + 17x4 = 7.

by Gaussian elimination.

Solution Forward elimination produces the degenerate equation 0x1 + 0x2 + 0x3 + 0x4 = −3, so this system has no solution.

Problem 3.4.3 [6, Page 71] Solve the following system

x − 3y − 2z = 6
2x − 4y − 3z = 8
−3x + 6y + 8z = −5.

by Gaussian elimination.

Solution x = 1, y = −3, z = 2.

Problem 3.4.4 Does the following system of equations

x+y−z =8
2x − y + 3z = 9
2x + 5y + z = 8

have unique solution? Why?

Problem 3.4.5 [6, Example 3.7] Solve the following system

x + 2y − 3z = 1
2x + 5y − 8z = 4
3x + 8y − 13z = 7.

by Gaussian elimination.

Solution Here z is the free variable. Say z = λ. Then y = 2 + 2λ, x = −3 − λ.

Problem 3.4.6 [2, Problem 25 & 26 of Exercise 1.2] Determine the value of a for which the
following system has no solutions, exactly one solution, or infinitely many solutions.

(a) x + 2y − 3z = 4
    3x − y + 5z = 2
    4x + y + (a² − 14)z = a + 2

(b) x + 2y + z = 2
    2x − 2y + 3z = 1
    x + 2y − (a² − 3)z = a

(c) x + y + az = 2
    3x + 4y + 2z = a
    2x + 3y − z = 1.

39
Solution (a) For a = −4 the system has no solution. For a ̸= −4 and a ̸= 4, it has a unique
solution. For a = 4, it has infinitely many solutions.

(b) For a = √2 or a = −√2 the system has no solution. For a ̸= ±√2, it has a unique solution.
There is no value of a for which the system has infinitely many solutions.

(c) There is no value of a for which the system has no solution. For a ̸= 3, it has a unique
solution. For a = 3, it has infinitely many solutions.

Problem 3.4.7 [2, Problem 27 & 28 of Exercise 1.2] What condition, if any, must a, b, and c
satisfy for the following linear system to be consistent?

(a) x + 3y − z = a
    x + y + 2z = b
    2y − 3z = c

(b) x + 3y + z = a
    −x − 2y + z = b
    3x + 7y − z = c.

Solution (a) The echelon form of the corresponding augmented matrix is
\[
\begin{pmatrix}
1 & 3 & -1 & a\\[2pt]
0 & 1 & -\tfrac{3}{2} & \tfrac{a-b}{2}\\[2pt]
0 & 0 & 0 & -a + b + c
\end{pmatrix}.
\]

For −a + b + c = 0, the system is consistent. Otherwise, it is inconsistent.


(b) The echelon form of the corresponding augmented matrix is
 
1 3 1 a
 0 1 2 a+b 
0 0 0 −a + 2b + c

If −a + 2b + c = 0, then the system is consistent. Otherwise, it is inconsistent.
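The consistency conditions above can be verified numerically: a linear system is consistent exactly when the coefficient matrix and the augmented matrix have the same rank. A sketch for part (a), assuming NumPy:

```python
import numpy as np

# Coefficient matrix of part (a): x + 3y - z = a, x + y + 2z = b, 2y - 3z = c.
A = np.array([[1.0, 3.0, -1.0],
              [1.0, 1.0, 2.0],
              [0.0, 2.0, -3.0]])

def consistent(a, b, c):
    """Consistent iff rank(A) == rank([A | b])."""
    rhs = np.array([[a], [b], [c]])
    return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(np.hstack([A, rhs]))
```

For instance (a, b, c) = (3, 1, 2) satisfies −a + b + c = 0 and is consistent, while (1, 0, 0) violates it and is not.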

3.5 Application
3.5.1 Application in network flow
Problem 3.5.1 Example 1 [2, Page 84].

Problem 3.5.2 Solve [2, Problem 1 of Exercise 1.9, Page 94].

Problem 3.5.3 Solve [2, Problem 2 of Exercise 1.9, Page 94].

3.5.2 Application in electrical circuits


Problem 3.5.4 Example 4 [2, Page 87].

Problem 3.5.5 Solve [2, Problem 5 of Exercise 1.9, Page 94].

Problem 3.5.6 Solve [2, Problem 6 of Exercise 1.9, Page 94].

Problem 3.5.7 Solve [2, Problem 7 of Exercise 1.9, Page 94].

Problem 3.5.8 Solve [2, Problem 8 of Exercise 1.9, Page 94].

40
Chapter 4

Row Echelon Form of a Matrix

For detailed explanations and more solved problems readers are referred to Anton and Rorres [2],
Lipschutz and Lipson [6], and Abdur Rahman [7].

4.1 Row Echelon Form


4.1.1 Row echelon form of a matrix
Definition 4.1.1 A matrix is said to be in row echelon form if the following conditions hold:

(a) All rows consisting entirely of zeros are at the bottom.

(b) Reading from top to bottom, the leading (leftmost nonzero) entry of each nonzero row lies strictly to the right of the leading entry of the previous row.

Problem 4.1.1 Which one is not in echelon form?
\[
\begin{pmatrix} 1 & 1 & 1\\ 0 & 0 & 0\\ 0 & 2 & 0 \end{pmatrix},
\quad
\begin{pmatrix} 0 & 1 & 1\\ 1 & 0 & 1\\ 0 & 0 & 1 \end{pmatrix},
\quad
\begin{pmatrix} 1 & 0\\ 0 & 5\\ 0 & 0 \end{pmatrix}
\]

Problem 4.1.2 Using row reduction solve the system of equations

2x + y + 3z = 1
x − y + z = 3
x − y + 2z = 0.

41
Solution The augmented matrix of the given system is
\[
\begin{pmatrix}
2 & 1 & 3 & 1\\
1 & -1 & 1 & 3\\
1 & -1 & 2 & 0
\end{pmatrix}
\]
\begin{align*}
&\sim \begin{pmatrix} 1 & -1 & 1 & 3\\ 2 & 1 & 3 & 1\\ 1 & -1 & 2 & 0 \end{pmatrix};
\quad r_1 \leftrightarrow r_2\\
&\sim \begin{pmatrix} 1 & -1 & 1 & 3\\ 0 & 3 & 1 & -5\\ 0 & 0 & 1 & -3 \end{pmatrix};
\quad \begin{matrix} r_2' = r_2 - 2r_1\\ r_3' = r_3 - r_1 \end{matrix}
\end{align*}
which is in row echelon form. Back-substitution gives z = −3, then 3y = −5 − z = −2, so
y = −2/3, and x = 3 + y − z = 16/3. Hence (x, y, z) = (16/3, −2/3, −3).
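The answer can be confirmed numerically (a sketch assuming NumPy; the third equation is read as x − y + 2z = 0, matching the third row of the augmented matrix):

```python
import numpy as np

# System of Problem 4.1.2.
A = np.array([[2.0, 1.0, 3.0],
              [1.0, -1.0, 1.0],
              [1.0, -1.0, 2.0]])
b = np.array([1.0, 3.0, 0.0])
x, y, z = np.linalg.solve(A, b)
```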

Note 4.1.1 [6, Remark 2 of Page 73] If a system has more than four unknowns and four equations,
then it may be more convenient to employ the matrix format to solve a linear system.

4.1.2 Reduced row echelon form


For details, go through [2, Page 11].

Definition 4.1.2 A matrix in row echelon form is said to be in reduced row echelon form, if
the following two conditions hold:

(i) all leading entries are 1

(ii) each column containing a leading 1 has zeros everywhere else in that column.

Note 4.1.2

4.2 Inverse and Rank of a Matrix


4.2.1 Rank of a matrix
Definition 4.2.1 The number of nonzero rows in the row echelon form of a matrix is called the
rank of that matrix.

4.2.2 Finding inverse of a matrix by row operations


For details go through [2, Section 1.5].
 
Problem 4.2.1 [2, Example 5, Page 57] Employ row operations to show that
\[
\begin{pmatrix} 1 & 6 & 4\\ 2 & 4 & -1\\ -1 & 2 & 5 \end{pmatrix}
\]
is not invertible.

42
Problem 4.2.2 [2, Example 4, Page 56] Employ row operations to find the inverse of
\[
\begin{pmatrix} 1 & 2 & 3\\ 2 & 5 & 3\\ 1 & 0 & 8 \end{pmatrix},
\]
if it exists.
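The idea behind both problems — row-reduce the partitioned matrix [A | I] until the left block becomes I, at which point the right block is A⁻¹ — can be sketched as follows (a minimal NumPy implementation; the helper name is our own):

```python
import numpy as np

def inverse_by_row_ops(A):
    """Gauss-Jordan reduction of [A | I] to [I | A^{-1}]."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])
    for k in range(n):
        p = k + np.argmax(np.abs(M[k:, k]))   # pick the largest pivot
        if np.isclose(M[p, k], 0.0):
            raise ValueError("matrix is not invertible")
        M[[k, p]] = M[[p, k]]
        M[k] = M[k] / M[k, k]                 # make the pivot 1
        for i in range(n):
            if i != k:
                M[i] -= M[i, k] * M[k]        # clear the rest of the column
    return M[:, n:]

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 5.0, 3.0],
              [1.0, 0.0, 8.0]])
A_inv = inverse_by_row_ops(A)
```

Applied to the matrix of Problem 4.2.1, the routine hits a zero pivot column and raises, confirming that that matrix is not invertible.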

43
Chapter 5

Vector Spaces

For detailed explanations, better understanding and more solved problems readers are referred to
Anton and Rorres [2], Lipschutz and Lipson [6].

5.1 Vectors in Rn
See [2, Chapter 2]

5.1.1 Lines and planes


Vector equation of a line
In terms of affine combinations the line AB joining the points A and B can be described as follows:
\[
(1 - \lambda)A + \lambda B = A + \lambda \overrightarrow{AB}.
\]

Theorem 5.1.1 Let A and B be two points on a line l. Then a vector equation of l is
\[
A + \lambda \overrightarrow{AB},
\]
where ⃗AB is a direction vector of l.

Note 5.1.1 The vector equation of a line is not unique; in fact it is highly non-unique.

Example 5.1.1 Vector equation of a line passing through the points (2, 0) and (6, 2) is

(x, y) = ((2, 0) + λ[(6, 2) − (2, 0)])


= ((2, 0) + λ(4, 2))
= (2 + 4λ, 2λ)

That is x = 2 + 4λ and y = 2λ.

Example 5.1.2 A vector equation of the line passing through the points A(3, 1, 1) and B(2, 2, −1)
is
\begin{align*}
&(3, 1, 1) + \lambda\,(\text{direction vector } \overrightarrow{BA}), \text{ where } \lambda \text{ is a parameter}\\
&= (3, 1, 1) + \lambda(3 - 2,\ 1 - 2,\ 1 - (-1))\\
&= (3, 1, 1) + \lambda(1, -1, 2).
\end{align*}

Question 5.1.1 How many vector equations of a line do exist?

Answer Infinitely many.

Question 5.1.2 How many parameters are involved in the vector equation of a straight line?

Answer One.

Vector equation of a plane


Let A, B and C be three non-collinear points lying on a plane p. Then a vector equation of p is
\[
A + \lambda \overrightarrow{AB} + \eta \overrightarrow{AC},
\]

where λ and η are two parameters.

Note 5.1.2 Vector equation of a plane is highly non-unique.

Example 5.1.3 Vector equation of the plane containing the points (1, −1, 0), (2, 1, 4) and (1, −1, 9)
is
(x, y, z) = (1, −1, 0) + λ[(2, 1, 4) − (1, −1, 0)] + η[(1, −1, 9) − (1, −1, 0)],
where λ and η are two parameters.

5.1.2 Norm, dot product and distance



Definition 5.1.1 Let ⃗v = (v1 , v2 , v3 , · · · , vn ) be a vector in Rn. Then the norm of ⃗v is denoted
by ||⃗v|| and defined as
\[
\|\vec v\| = \sqrt{v_1^2 + v_2^2 + v_3^2 + \cdots + v_n^2}.
\]

Note 5.1.3 Also known as length or magnitude.

Theorem 5.1.2 If ⃗v = (v1 , v2 , v3 , · · · , vn ) is a vector in Rn and α is a scalar, then

(i) ||⃗v|| ≥ 0.

(ii) ||⃗v|| = 0 ⇔ ⃗v = ⃗0.

(iii) ||α⃗v|| = |α| ||⃗v||.

5.1.3 Distance between two vectors


5.1.4 Normalizing a vector
5.1.5 Euclidean inner product
Definition 5.1.2 Let ⃗u = (u1 , u2 , u3 , · · · , un ) and ⃗v = (v1 , v2 , v3 , · · · , vn ) be two vectors in Rn.
Then the Euclidean inner product of ⃗u and ⃗v is denoted by ⃗u · ⃗v and defined as
\[
\vec u \cdot \vec v = u_1v_1 + u_2v_2 + u_3v_3 + \cdots + u_nv_n.
\]

Note 5.1.4 Also known as the dot product.

5.1.6 Inequalities
Theorem 5.1.3 (Cauchy-Schwarz inequality) If ⃗u = (u1 , u2 , u3 , · · · , un ) and ⃗v = (v1 , v2 , v3 , · · · , vn )
are two vectors in Rn, then
\[
|\vec u \cdot \vec v| \le \|\vec u\|\,\|\vec v\|.
\]

Theorem 5.1.4 (Triangle inequality for vectors) If ⃗u and ⃗v are any two vectors in Rn, then
||⃗u + ⃗v|| ≤ ||⃗u|| + ||⃗v||.

Note 5.1.5 Equality can occur (for example when one of the vectors is the zero vector), which is why ≤ is written instead of <.

Theorem 5.1.5 (Triangle inequality for distances) If u, v, w ∈ Rn are any three points, then
d (u, v) ≤ d(u, w) + d(w, v).
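The norm, the dot product and the two inequalities of this subsection can be checked numerically for a sample pair of vectors (a sketch assuming NumPy):

```python
import numpy as np

u = np.array([1.0, -2.0, 3.0, 0.0])
v = np.array([2.0, 1.0, -1.0, 4.0])

norm_u = np.sqrt(np.sum(u**2))   # ||u|| = sqrt(u1^2 + ... + un^2)
dot_uv = np.sum(u * v)           # u . v = u1*v1 + ... + un*vn
```

The assertions below check the definition of the norm against `np.linalg.norm`, then the Cauchy-Schwarz and triangle inequalities for this pair.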

5.2 Vector Spaces and Subspaces


Vector spaces may be real or complex; the scalars come from a field.

Definition 5.2.1 (Scalar multiplication) Let V be a nonempty set and let k be a scalar, that is, an
element of a field F. Then scalar multiplication is a function from F × V to V that assigns to
every pair (k, ⃗u) with ⃗u ∈ V a unique vector k⃗u ∈ V .

Definition 5.2.2 (Addition) Let V be a nonempty set and ⃗u, ⃗v ∈ V . Then addition or sum of
⃗u and ⃗v , denoted by ⃗u + ⃗v , is a function from V × V to V that assigns to every (⃗u, ⃗v ) ∈ V × V a
unique ⃗u + ⃗v ∈ V .

Definition 5.2.3 (Vector Spaces) [2, Page 184] Let V be a nonempty set and F be a field with
underlying set F . Suppose that scalar multiplication and addition are defined on V . Then V
⃗ ∈ V and
together with these two operations is called a vector space over the field F, if for ⃗u, ⃗v , w
α, β, 1 ∈ F , the undermentioned axioms are obeyed:
(i) ⃗u + ⃗v = ⃗v + ⃗u
(ii) ⃗u + (⃗v + w)
⃗ = (⃗u + ⃗v ) + w

(iii) there exists a ⃗0 ∈ V such that ⃗u + ⃗0 = ⃗0 + ⃗u = ⃗u

(iv) for each ⃗u ∈ V , there exists a −⃗u ∈ V such that ⃗u + (−⃗u) = (−⃗u) + ⃗u = ⃗0

(v) α(⃗u + ⃗v ) = α⃗u + α⃗v


(vi) (α + β)⃗u = α⃗u + β⃗u
(vii) α(β⃗u) = (αβ)⃗u
(viii) 1⃗u = ⃗u.
An element of a vector space is called a vector.

Example 5.2.1 R2 forms a vector space with the usual coordinatewise addition and scalar multiplication.

Example 5.2.2 The set of all 2 × 2 matrices forms a vector space with matrix addition and scalar
multiplication.

Example 5.2.3 Let V be the set of all real-valued functions defined on R. Then V forms a vector
space with addition and scalar multiplication of real-valued functions.

5.2.1 Real vector space
Scalars k ∈ R come from real field [2, Page 184, side note]. That means, here the scalars are
numbers.

5.2.2 Vector subspaces


Problem 5.2.1 Show that T = {(x, y, z, w) ∈ R4 : 2x − 3y + 5z − w = 0} is a vector subspace of
R4 .

Problem 5.2.2 Show that Ms = {A : A is a 2 × 2 symmetric matrix} is a vector subspace of the


vector space M = {P : P is a 2 × 2 matrix}.

Problem 5.2.3 Show that the line y = x is a vector subspace of the vector space R2 .

5.3 Linear Independence


5.3.1 Linear combination of vectors
Definition 5.3.1 Linear combination of vectors [6, Page 4]
       
Problem 5.3.1 Can (−5, 7, 10) be written as a linear combination of (3, 1, 2), (1, 3, 5) and
(0, 2, 4)? Why?

Solution Let ⃗v = (−5, 7, 10), ⃗v1 = (3, 1, 2), ⃗v2 = (1, 3, 5) and ⃗v3 = (0, 2, 4).
10 2 5 4
     
Problem 5.3.2 In how many ways can (5, 3) be written as a linear combination of (1, −1), (2, 1)
and (−1, 4)? Why? With the help of the affine grid plane A2 interpret your answer also.

Solution Let ⃗v = (5, 3), ⃗v1 = (1, −1), ⃗v2 = (2, 1) and ⃗v3 = (−1, 4).

5.3.2 Linear independence of vectors


Definition 5.3.2

      
Problem 5.3.3 Are the vectors (1, 0, −1), (2, 1, 3), (−4, 0, 1) and (3, −1, 5) linearly independent?
Why?
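One way to settle Problem 5.3.3 numerically: vectors are linearly independent exactly when the matrix having them as columns has rank equal to the number of vectors (a sketch assuming NumPy):

```python
import numpy as np

# The four vectors of Problem 5.3.3 as columns.
V = np.array([[1.0, 2.0, -4.0, 3.0],
              [0.0, 1.0, 0.0, -1.0],
              [-1.0, 3.0, 1.0, 5.0]])
rank = int(np.linalg.matrix_rank(V))
independent = rank == V.shape[1]
```

Four vectors in R3 can never be independent: the rank is at most 3, which is less than 4.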

5.4 Basis and Dimension


5.4.1 Spanning set and basis of a vector space
Definition 5.4.1 (Spanning set of a vector space) [8, Part 16, 27 min] Let S = {v⃗1 , v⃗2 , v⃗3 , . . . , v⃗n }
be a set of vectors in a finite dimensional vector space V . Then S is called a spanning set for V

if for all ⃗v ∈ V ,
⃗v = α1 v⃗1 + α2 v⃗2 + α3 v⃗3 + · · · + αn v⃗n
for some numbers α1 , α2 , α3 , · · · , αn . If S is a spanning set of V , then it is said that S spans V .
         
 1 0 0 2  3
Example 5.4.1 0 , 1 , 0 ,  1  spans R3 . For example, 1
0 0 1 −5 4
 
         
 1 0 2 3  0
Example 5.4.2 0 , 1 , 5 , −1 does not span R3 . For example, 0 can not
0 0 0 0 1
 
be written as a linear combination of the vectors of the given set.
      
 1 2 −1 
Problem 5.4.1 Does  2 , 0 , 4  span R3 ? Why?
   
−1 1 3
 
 
y1
Solution Let y2  be any vector of R3 . Yes. Unique solution.

y3
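The reasoning can be checked numerically: the three vectors span R3 exactly when the matrix having them as columns is invertible, so that every right-hand side gives a unique solution (a sketch assuming NumPy):

```python
import numpy as np

# The three vectors of Problem 5.4.1 as columns.
A = np.array([[1.0, 2.0, -1.0],
              [2.0, 0.0, 4.0],
              [-1.0, 1.0, 3.0]])
det = np.linalg.det(A)
coords = np.linalg.solve(A, np.array([1.0, 2.0, 3.0]))  # sample target vector
```

A nonzero determinant confirms that the set spans R3 (and, being three vectors, is therefore a basis).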

Definition 5.4.2 (Basis of a vector space) [2, Page 214] Let S = {v⃗1 , v⃗2 , v⃗3 , . . . , v⃗n } be a set of
vectors in a finite dimensional vector space V . Then S is called a basis for V if the following two
conditions hold :

(i) S spans V

(ii) S is linearly independent.

Theorem 5.4.1 [2, Theorem 4.5.1 of Page 221] All bases of a finite-dimensional vector space
have the same number of vectors.

5.4.2 Dimension
Definition 5.4.3 [2, Page 222] Let the set of vectors S be a basis of a finite dimensional vector
space V . Then the dimension of V , denoted by dim(V ), is the number of vectors of S. Zero
vector space is defined to have the dimension zero.

Problem 5.4.2 Define basis of a vector space. Find a basis for the solution space of the linear
system of equations

x + 2y + 2z − s + 3t = 0
x + 2y + 3z + s + t = 0
3x + 6y + 8z + s + 5t = 0

and find the dimension of that space.

Solution See Definition 5.4.2.


[2, Like Problems 1-6 of Exercise Set 4.5, Page 228] The augmented matrix of the given linear
system is
 
\[
\begin{pmatrix}
1 & 2 & 2 & -1 & 3 & 0\\
1 & 2 & 3 & 1 & 1 & 0\\
3 & 6 & 8 & 1 & 5 & 0
\end{pmatrix}
\]
\begin{align*}
&\sim \begin{pmatrix} 1 & 2 & 2 & -1 & 3 & 0\\ 0 & 0 & -1 & -2 & 2 & 0\\ 0 & 0 & -2 & -4 & 4 & 0 \end{pmatrix};
\quad \begin{matrix} r_2' = r_1 - r_2\\ r_3' = 3r_1 - r_3 \end{matrix}\\
&\sim \begin{pmatrix} 1 & 2 & 2 & -1 & 3 & 0\\ 0 & 0 & 1 & 2 & -2 & 0\\ 0 & 0 & 1 & 2 & -2 & 0 \end{pmatrix};
\quad \begin{matrix} r_2' = (-1)r_2\\ r_3' = (-\tfrac12)r_3 \end{matrix}\\
&\sim \begin{pmatrix} 1 & 2 & 2 & -1 & 3 & 0\\ 0 & 0 & 1 & 2 & -2 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix};
\quad r_3' = r_2 - r_3\\
&\sim \begin{pmatrix} 1 & 2 & 0 & -5 & 7 & 0\\ 0 & 0 & 1 & 2 & -2 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix};
\quad r_1' = r_1 - 2r_2
\end{align*}

which is the reduced row echelon form of the augmented matrix of the given system.
Here y, s and t are the three free variables. Say y = λ, s = µ and t = η. Then

x = −2λ + 5µ − 7η and z = −2µ + 2η.

So

(x, y, z, s, t) = (−2λ + 5µ − 7η, λ, −2µ + 2η, µ, η)
= λ(−2, 1, 0, 0, 0) + µ(5, 0, −2, 1, 0) + η(−7, 0, 2, 0, 1).

Let ⃗v1 = (−2, 1, 0, 0, 0), ⃗v2 = (5, 0, −2, 1, 0) and ⃗v3 = (−7, 0, 2, 0, 1). Thus the solution space is
spanned by the vectors ⃗v1 , ⃗v2 and ⃗v3 . By inspection, these vectors are linearly independent since

λ⃗v1 + µ⃗v2 + η⃗v3 = ⃗0 ⇒ λ = µ = η = 0.

Therefore ⃗v1 , ⃗v2 and ⃗v3 form a basis for the solution space and the dimension of the solution space
is 3.
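The solution space can also be computed numerically via the singular value decomposition; this produces an orthonormal basis of the null space rather than the particular vectors found by hand, but the dimension agrees (a sketch assuming NumPy):

```python
import numpy as np

# Coefficient matrix of the homogeneous system in Problem 5.4.2.
A = np.array([[1.0, 2.0, 2.0, -1.0, 3.0],
              [1.0, 2.0, 3.0, 1.0, 1.0],
              [3.0, 6.0, 8.0, 1.0, 5.0]])
_, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))
null_basis = Vt[rank:]            # rows span null(A)
dim = null_basis.shape[0]         # nullity = 5 - rank(A)
```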

5.5 Row Spaces and Column Spaces


Definition 5.5.1 (Row space) [2, Page 237] Let A be an m × n matrix. Then the subspace of Rn
spanned by the row vectors of A is called the row space of A and it is denoted by row(A).

Definition 5.5.2 (Column space) [2, Page 237] Let A be an m × n matrix. Then the subspace
of Rm spanned by the column vectors of A is called the column space of A and it is denoted by
col(A).

Definition 5.5.3 (Null space) [2, Page 237] Let A be an m × n matrix. Then the solution space
of the homogeneous linear system
A⃗x = ⃗0
is called the null space of A and it is denoted by null(A). The null space is a subspace of Rn .

5.6 Rank and Nullity of a Matrix
For details see [2, Section 4.8].

Definition 5.6.1 [2, Section 4.8] The common dimension of the row space and column space of
a matrix A is called the rank of A. It is denoted by rank(A).

Definition 5.6.2 [2, Section 4.8] The dimension of the null space of a matrix A is called the
nullity of A. It is denoted by nullity(A).

Theorem 5.6.1 (Dimension theorem for matrices) [2, Theorem 4.8.2] If A is a matrix with n
columns, then rank(A) + nullity(A) = n.

Problem 5.6.1 Find the rank of
\[
\begin{pmatrix} 2 & 1 & 3 & 1\\ 1 & -1 & 1 & 3\\ 1 & -1 & 2 & 0 \end{pmatrix},
\quad
\begin{pmatrix} 2 & 1 & 0 & 1\\ 0 & -1 & 0 & 3\\ 1 & -1 & 2 & 0 \end{pmatrix}
\quad\text{and}\quad
\begin{pmatrix} 2 & 1 & 3\\ 1 & -1 & 1\\ 1 & -1 & 2 \end{pmatrix}.
\]

Solution
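A numerical sketch for the solution, assuming NumPy:

```python
import numpy as np

M1 = np.array([[2.0, 1.0, 3.0, 1.0], [1.0, -1.0, 1.0, 3.0], [1.0, -1.0, 2.0, 0.0]])
M2 = np.array([[2.0, 1.0, 0.0, 1.0], [0.0, -1.0, 0.0, 3.0], [1.0, -1.0, 2.0, 0.0]])
M3 = np.array([[2.0, 1.0, 3.0], [1.0, -1.0, 1.0], [1.0, -1.0, 2.0]])
ranks = [int(np.linalg.matrix_rank(M)) for M in (M1, M2, M3)]
```

All three matrices have full row rank 3, which can also be seen by exhibiting a 3 × 3 submatrix with nonzero determinant in each.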

Chapter 6

Basis and Dimension

For detailed explanations and more solved problems readers are referred to Anton and Rorres [2],
Lipschutz and Lipson [6], and Abdur Rahman [7].

Chapter 7

Eigenvalues and Eigenvectors

For detailed explanations, better understanding and more solved problems readers are referred to
Anton and Rorres [2], Lipschutz and Lipson [6], and Wildberger [8].

7.1 Eigenvalues
7.1.1 Eigenvalues of a matrix
Definition 7.1.1 (Eigenvalues of a matrix) [2, Page 291] Let A be a square matrix of order n
and ⃗v ∈ Rn such that ⃗v ̸= ⃗0 and

A⃗v = λ⃗v for some scalar λ.

Then λ is called an eigenvalue of A and ⃗v is called an eigenvector corresponding to λ.

Definition 7.1.2 (Spectrum) The set of all eigenvalues of a matrix is called its spectrum.

7.1.2 How to find eigenvalues and eigenvectors


Theorem 7.1.1 If A is an n × n matrix, then λ is an eigenvalue of A if and only if

det(λI − A) = 0.

Definition 7.1.3 (Characteristic equation) Let A be an n × n matrix. Then the equation

det(λI − A) = 0

is called the characteristic equation of A.

Definition 7.1.4 (Characteristic polynomial) Let A be an n × n matrix. Then the polynomial,
denoted by P (λ) and defined as
\[
P(\lambda) = \det(\lambda I - A) = \lambda^n + c_1\lambda^{n-1} + c_2\lambda^{n-2} + \cdots + c_n,
\]
is called the characteristic polynomial of A. In this case, λn + c1 λn−1 + c2 λn−2 + · · · + cn = 0 is
called the characteristic equation of A.

Note 7.1.1 The characteristic polynomial of a matrix A of order n × n is of degree n. So it has


at most n distinct roots or zeros, which are the eigenvalues of A. Thus A has at most n distinct
eigenvalues.

Note 7.1.2 A matrix A of order n × n defined over the real field may have complex eigenvalues.

The next theorem is due to [2, Theorem 5.1.2]

Theorem 7.1.2 (Eigenvalues of a triangular matrix) If A is an n × n triangular (upper trian-


gular, lower triangular or diagonal) matrix, then the entries of the principal diagonal of A are its
eigenvalues.
 
Problem 7.1.1 Find the spectrum of
\[
\begin{pmatrix}
0 & 0 & -2 & 5 & 7\\
0 & 2 & 1 & 8 & 100\\
0 & 0 & 3 & 0 & 0\\
0 & 0 & 0 & 8 & 10\\
0 & 0 & 0 & 0 & 10
\end{pmatrix}.
\]
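A numerical check for Problem 7.1.1 (a sketch assuming NumPy): since the matrix is upper triangular, its spectrum is just the set of diagonal entries, {0, 2, 3, 8, 10}.

```python
import numpy as np

A = np.array([[0.0, 0.0, -2.0, 5.0, 7.0],
              [0.0, 2.0, 1.0, 8.0, 100.0],
              [0.0, 0.0, 3.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 8.0, 10.0],
              [0.0, 0.0, 0.0, 0.0, 10.0]])
spectrum = np.sort(np.linalg.eigvals(A).real)
```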

The following theorem is due to [2, Theorem 5.1.3].

Theorem 7.1.3 (Alternative ways of describing eigenvalues) If A is an n × n matrix, then the


below mentioned statements are equivalent:

(i) λ is an eigenvalue of A.

(ii) λ is a solution of det(λI − A) = 0.

(iii) the linear system (λI − A)⃗v = ⃗0 has nontrivial solutions.

(iv) there exists a nonzero vector ⃗v belonging to Rn such that A⃗v = λ⃗v .

Definition 7.1.5 (Eigenspace and its basis) [2, Page 295] Let A be an n × n matrix and λ be an
eigenvalue of A. Then the solution space of the linear system

(λI − A)⃗v = ⃗0

is called the eigenspace of A corresponding to λ, and a basis of this solution space is called a
basis for the eigenspace.
 
Problem 7.1.2 Find the eigenvalues and eigenvectors of the matrix
\[
\begin{pmatrix} 4 & -2\\ 2 & 6 \end{pmatrix}.
\]
 
Problem 7.1.3 [2, Example 6] Find bases for the eigenspaces of the matrix
\[
\begin{pmatrix} -1 & 3\\ 2 & 0 \end{pmatrix}.
\]
Give the geometrical interpretation of the eigenvalues and eigenvectors of this matrix with the help
of a sketch.
 
Problem 7.1.4 [2, Example 7] Find the eigenvalues and eigenvectors of the matrix
\[
\begin{pmatrix} 0 & 0 & -2\\ 1 & 2 & 1\\ 1 & 0 & 3 \end{pmatrix}.
\]
Or, find bases for the eigenspaces of this matrix.
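A numerical check for Problem 7.1.4 (a sketch assuming NumPy); the characteristic polynomial of this matrix is (λ − 1)(λ − 2)², so the eigenvalues are 1 and 2 (the latter repeated):

```python
import numpy as np

A = np.array([[0.0, 0.0, -2.0],
              [1.0, 2.0, 1.0],
              [1.0, 0.0, 3.0]])
vals, vecs = np.linalg.eig(A)   # columns of vecs are eigenvectors
```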
 
Problem 7.1.5 [2, Problem 1(a), Page 343] Show that if 0 < θ < π, then
\[
\begin{pmatrix} \cos\theta & -\sin\theta\\ \sin\theta & \cos\theta \end{pmatrix}
\]
has no real eigenvalues and consequently no real eigenvectors.

Problem 7.1.6 [2, Like Problem 27 of Exercise Set 5.1, Page 301] Find the matrix A such that
its eigenvalues are 2, 4, 6 and the corresponding eigenvectors are
\[
\begin{pmatrix} 1\\ 1\\ 0 \end{pmatrix},\quad
\begin{pmatrix} 1\\ 0\\ 1 \end{pmatrix},\quad
\begin{pmatrix} 0\\ 1\\ 1 \end{pmatrix}.
\]

Solution Substitution of the given eigenvectors ⃗v and the corresponding eigenvalues λ into A⃗v = λ⃗v
yields the equations
\[
A\begin{pmatrix} 1\\ 1\\ 0 \end{pmatrix} = 2\begin{pmatrix} 1\\ 1\\ 0 \end{pmatrix} = \begin{pmatrix} 2\\ 2\\ 0 \end{pmatrix},
\quad
A\begin{pmatrix} 1\\ 0\\ 1 \end{pmatrix} = 4\begin{pmatrix} 1\\ 0\\ 1 \end{pmatrix} = \begin{pmatrix} 4\\ 0\\ 4 \end{pmatrix}
\quad\text{and}\quad
A\begin{pmatrix} 0\\ 1\\ 1 \end{pmatrix} = 6\begin{pmatrix} 0\\ 1\\ 1 \end{pmatrix} = \begin{pmatrix} 0\\ 6\\ 6 \end{pmatrix}.
\]
The above three equations can be written together as
\[
A\begin{pmatrix} 1 & 1 & 0\\ 1 & 0 & 1\\ 0 & 1 & 1 \end{pmatrix}
= \begin{pmatrix} 2 & 4 & 0\\ 2 & 0 & 6\\ 0 & 4 & 6 \end{pmatrix}
\;\Rightarrow\;
A = \begin{pmatrix} 2 & 4 & 0\\ 2 & 0 & 6\\ 0 & 4 & 6 \end{pmatrix}
\begin{pmatrix} 1 & 1 & 0\\ 1 & 0 & 1\\ 0 & 1 & 1 \end{pmatrix}^{-1}.
\]
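Equivalently, with P having the eigenvectors as columns and D = diag(2, 4, 6), the required matrix is A = P D P⁻¹; a numerical sketch assuming NumPy:

```python
import numpy as np

P = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])   # eigenvectors as columns
D = np.diag([2.0, 4.0, 6.0])
A = P @ D @ np.linalg.inv(P)
```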

The next theorem describes a relationship between eigenvalues and the invertibility of a matrix.

Theorem 7.1.4 A square matrix A is invertible if and only if λ = 0 is not an eigenvalue of A.

7.2 Diagonalization
For details go through Anton and Rorres [2, Section 5.2].

7.2.1 Similar matrices


Definition 7.2.1 Let A and B be two square matrices. Then B is called similar to A, if there
exists an invertible matrix P such that B = P −1 AP .

If A is similar to B, then B is also similar to A.

Problem 7.2.1 If B is similar to A, then show that A is also similar to B.

Proof. Let B be similar to A. Then there exists an invertible matrix P such that B = P −1 AP .
Suppose that Q = P −1 . So Q−1 = (P −1 )−1 = P .
Now,
\begin{align*}
B = P^{-1}AP &\Rightarrow Q^{-1}B = Q^{-1}P^{-1}AP = (PP^{-1})AP = (IA)P = AP\\
&\Rightarrow Q^{-1}BQ = AP P^{-1} = A(PP^{-1}) = AI = A.
\end{align*}
Hence A is also similar to B.

Definition 7.2.2 A square matrix A is called diagonalizable, if it is similar to some diagonal


matrix. That is, a matrix is diagonalizable if and only if there exists an invertible matrix P such
that P −1 AP is a diagonal matrix. In this case, we say that P diagonalizes A.

Note 7.2.1 A and P −1 AP are similar and have the same determinant, rank, nullity, trace, characteristic polynomial, eigenvalues, etc. [2, Table 1, Page 303].

7.2.2 Computing higher power of matrix


 
Theorem 7.2.1 If P diagonalizes A with P −1 AP = D = diag(λ1 , λ2 , . . . , λn ), then
\[
P^{-1}A^kP = D^k =
\begin{pmatrix}
\lambda_1^k & 0 & 0 & \cdots & 0\\
0 & \lambda_2^k & 0 & \cdots & 0\\
0 & 0 & \lambda_3^k & \cdots & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
0 & 0 & 0 & \cdots & \lambda_n^k
\end{pmatrix}
\quad\text{and so}\quad
A^k = P D^k P^{-1}.
\]
 
Problem 7.2.2 Find a matrix P that diagonalizes
\[
A = \begin{pmatrix} 0 & 0 & -2\\ 1 & 2 & 1\\ 1 & 0 & 3 \end{pmatrix}.
\]
Determine A^{13} also.

Exercise 7.2.1 Find a matrix P that diagonalizes
\[
A = \begin{pmatrix} 2 & 4 & 2\\ 1 & 0 & 5\\ 2 & 4 & 3 \end{pmatrix}.
\]
Determine A^{100} also.

Problem 7.2.3 Show that the matrix
\[
\begin{pmatrix} 1 & 0 & 0\\ 1 & 2 & 0\\ -3 & 5 & 2 \end{pmatrix}
\]
is not diagonalizable.
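Diagonalization and Theorem 7.2.1 can be illustrated numerically for the matrix of Problem 7.2.2 (a sketch assuming NumPy; `eig` supplies the eigenvector matrix P directly):

```python
import numpy as np

A = np.array([[0.0, 0.0, -2.0],
              [1.0, 2.0, 1.0],
              [1.0, 0.0, 3.0]])
vals, P = np.linalg.eig(A)                 # columns of P are eigenvectors
D = np.diag(vals)
A13 = P @ np.diag(vals**13) @ np.linalg.inv(P)   # A^13 = P D^13 P^{-1}
```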

7.3 The Cayley-Hamilton Theorem


7.3.1 Statement
Theorem 7.3.1 (The Cayley-Hamilton theorem) If A is an n × n matrix with characteristic
equation λn + c1 λn−1 + c2 λn−2 + · · · + cn = 0, then An + c1 An−1 + c2 An−2 + · · · + cn In = 0.

That is, every square matrix satisfies its characteristic equation.

 
0 0 −2
Problem 7.3.1 Verify the Cayley-Hamilton theorem for the matrix 1
 2 1 .
1 0 3
 
1 0 0 0 0
0 2 0 0 0
 
0
Problem 7.3.2 Verify the Cayley-Hamilton theorem for the matrix  0 3 0 0 .
0 0 0 8 0
0 0 0 0 −1

7.3.2 Importance
The Cayley-Hamilton theorem helps to calculate the power of a matrix.
 5
2 1 1
Problem 7.3.3 Find 1 1 0 using the Cayley-Hamilton theorem.
1 0 1
 5
2 1 1
Solution Let 1 1 0 . Eigenvalues of A are 0, 1, 3. So the corresponding characteristic
1 0 1
3 2
equation is λ − 4λ + 3λ = 0. Thus by the Cayley-Hamilton theorem,

A3 − 4A2 + 3A = 0 ⇒ A3 = 4A2 − 3A.

Thus A4 = 4A3 −3A2 = 4(4A2 −3A)−3A2 = 13A2 −12A. Therefore A5 = 13A3 −12A2 = 40A2 −39A
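The computation of Problem 7.3.3 can be verified numerically (a sketch assuming NumPy):

```python
import numpy as np

A = np.array([[2.0, 1.0, 1.0],
              [1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0]])
A2 = A @ A
cayley = A @ A2 - 4.0 * A2 + 3.0 * A   # A^3 - 4A^2 + 3A, should be the zero matrix
A5 = 40.0 * A2 - 39.0 * A              # A^5 per the Cayley-Hamilton reduction
```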
 5
2 0 0
Problem 7.3.4 Find 1 1 0  using the Cayley-Hamilton theorem.
1 0 −1

7.4 Application of Eigenvalues


For details, go through the provided sheet.

7.4.1 Stability test of a dynamical system


Let the coefficient matrix A of order n have the n eigenvalues ri = ai + bi ι for i = 1, 2, 3, · · · , n. Then the
dynamical system is

(i) stable, if Re(ri ) ≤ 0 for all i

(ii) asymptotically stable, if Re(ri ) < 0 for all i

(iii) unstable, if at least one of the Re(ri ) > 0.

Chapter 8

Linear Transformation

For detailed explanations and more solved problems readers are referred to Anton and Rorres [2,
Chapter 8], Lipschutz and Lipson [6].

8.1 Linear Transformation


Definition 8.1.1 (Linear transformation and linear operator) [2, Definition 1 of Page 447]

8.1.1 Reflection about x-axis


 
\[
\begin{pmatrix} 1 & 0\\ 0 & -1 \end{pmatrix}
\]

8.1.2 Reflection about line y = x

\[
\begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix}
\]

8.1.3 Combination

\[
\begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix}
\begin{pmatrix} 1 & 0\\ 0 & -1 \end{pmatrix}
=
\begin{pmatrix} 0 & -1\\ 1 & 0 \end{pmatrix}
\]
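The combination can be verified numerically; composing the reflection about the x-axis with the reflection about y = x gives a rotation through 90°. A sketch assuming NumPy:

```python
import numpy as np

reflect_x = np.array([[1.0, 0.0],
                      [0.0, -1.0]])       # reflection about the x-axis
reflect_y_eq_x = np.array([[0.0, 1.0],
                           [1.0, 0.0]])   # reflection about the line y = x
combo = reflect_y_eq_x @ reflect_x        # apply reflect_x first, then reflect_y_eq_x
```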

Problem 8.1.1 [2, Like Problems 1, 2, 20, 21 of Exercise Set 4.11, Pages 287-288] Find and sketch
the image of the line y = 2x + 1 under multiplication by the matrix
\[
A = \begin{pmatrix} 3 & 1\\ 6 & 2 \end{pmatrix}.
\]
6 2

Solution Let the coordinates (x, y) on the line y = 2x + 1 be transformed to the point with
coordinates (x′ , y ′ ) under multiplication by A. Then
\[
\begin{pmatrix} x'\\ y' \end{pmatrix}
= \begin{pmatrix} 3 & 1\\ 6 & 2 \end{pmatrix}
\begin{pmatrix} x\\ y \end{pmatrix}
\;\Rightarrow\;
\begin{pmatrix} x'\\ y' \end{pmatrix}
= \begin{pmatrix} 3x + y\\ 6x + 2y \end{pmatrix}
= \begin{pmatrix} 3x + y\\ 2(3x + y) \end{pmatrix}.
\]
Thus y ′ = 2x′ : every point of the plane R2 (in particular every point of the line y = 2x + 1) maps onto the line y = 2x under the described map.
Theorem 8.1.1 [2, Theorem 8.1.1] If T : V → W is a linear transformation, then T (⃗0) = ⃗0
and T (⃗u − ⃗v ) = T (⃗u) − T (⃗v ) for all ⃗u, ⃗v ∈ V .

u −→

v ) = T (→

u ) − T (→

v ) for all →

u ,→

v ∈V.

Problem 8.1.2 [2, Example 10, Page 451] Consider the basis {(1, 1, 1), (1, 1, 0), (1, 0, 0)} for R3 .
Suppose that T : R3 → R2 is a linear transformation for which
T (1, 1, 1) = (1, 0), T (1, 1, 0) = (2, −1), T (1, 0, 0) = (4, 3).
Determine a formula for T (x, y, z), and then employ this formula to compute T (2, −3, 5).
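The strategy for Problem 8.1.2 — write (x, y, z) in the given basis and map the coordinates through the images of the basis vectors — can be sketched numerically (assuming NumPy):

```python
import numpy as np

B = np.array([[1.0, 1.0, 1.0],
              [1.0, 1.0, 0.0],
              [1.0, 0.0, 0.0]])    # basis vectors (1,1,1), (1,1,0), (1,0,0) as columns
TB = np.array([[1.0, 2.0, 4.0],
               [0.0, -1.0, 3.0]])  # their images (1,0), (2,-1), (4,3) as columns

def T(x, y, z):
    c = np.linalg.solve(B, np.array([x, y, z]))  # coordinates of (x,y,z) in the basis
    return TB @ c                                # T is linear, so apply T to the combination
```

Solving by hand gives the closed form T(x, y, z) = (4x − 2y − z, 3x − 4y + z), and in particular T(2, −3, 5) = (9, 23).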

8.2 Kernel and Range


Definition 8.2.1 (Kernel and range) [2, Definition 2 of Page 452]

Theorem 8.2.1 [2, Theorem 8.1.3] If T : V → W is a linear transformation, then ker(T ) and
R(T ) are subspaces of V and W respectively.

8.3 Rank and Nullity


Definition 8.3.1 (Rank and nullity) [2, Definition 3 of Page 454] Let T : V → W be a linear
transformation. Then rank(T ) is the dimension of the range of T and nullity(T ) is the dimension
of the kernel of T .

Theorem 8.3.1 (Dimension theorem for linear transformation) [2, Theorem 8.1.4] If T is a linear
transformation from a finite dimensional vector space V to a vector space W , then the range of T
is finite dimensional and rank(T ) + nullity(T ) = dim(V ).

Problem 8.3.1 Determine the kernel of the linear transformation defined by the matrix
\[
\begin{pmatrix}
1 & 0 & 4 & -2\\
1 & -1 & 3 & 0\\
1 & 1 & 5 & -4
\end{pmatrix}.
\]
Hence find the dimension of the kernel space (or the nullity of the transformation).
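A numerical sketch for Problem 8.3.1 using the rank-nullity relation (assuming NumPy):

```python
import numpy as np

A = np.array([[1.0, 0.0, 4.0, -2.0],
              [1.0, -1.0, 3.0, 0.0],
              [1.0, 1.0, 5.0, -4.0]])
rank = int(np.linalg.matrix_rank(A))
nullity = A.shape[1] - rank   # dimension theorem: rank + nullity = 4
```

The second and third rows each differ from the first by a multiple of the same vector, so the rank is 2 and the kernel is 2-dimensional.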

8.4 Isomorphism of Vector Spaces


8.4.1 One-one
8.4.2 Onto
8.4.3 Inverse transformation
8.4.4 Isomorphism
Importance of Rn

Theorem 8.4.1 Every real n-dimensional vector space is isomorphic to Rn .

8.5 Matrices for General Linear Transformations


Problem 8.5.1 [2, Example 3, Page 475] Suppose that T : R2 → R3 is a linear transformation defined as

T(x, y) = (y, −5x + 13y, −7x + 16y), with standard matrix

[  0  1 ]
[ −5 13 ]
[ −7 16 ].

Find the matrix of T with respect to the bases B = {(3, 1), (5, 2)} for R2 and B ′ = {(1, 0, −1), (−1, 2, 2), (0, 1, 2)} for R3 .
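A computational sketch of one way to carry this out (the `solve` routine and all names are my own, not from [2]): the j-th column of the matrix [T]_{B′,B} is the coordinate vector of T(b_j) relative to B′, found by solving a 3×3 linear system; a round-trip assertion checks the coordinates are consistent.

```python
from fractions import Fraction as F

def solve(A, b):
    # Gauss-Jordan elimination with exact rational arithmetic.
    n = len(A)
    M = [[F(A[i][j]) for j in range(n)] + [F(b[i])] for i in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [v / M[col][col] for v in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                M[r] = [a - M[r][col] * v for a, v in zip(M[r], M[col])]
    return [row[n] for row in M]

def T(x, y):  # the transformation from the problem
    return (y, -5 * x + 13 * y, -7 * x + 16 * y)

B = [(3, 1), (5, 2)]                      # basis for R^2
Bp = [(1, 0, -1), (-1, 2, 2), (0, 1, 2)]  # basis B' for R^3
Mp = [[Bp[j][i] for j in range(3)] for i in range(3)]  # columns = B' vectors

# Column j of [T]_{B',B} is the B'-coordinate vector of T(b_j).
cols = [solve(Mp, list(T(*b))) for b in B]

# Round-trip: rebuilding T(b) from its B'-coordinates recovers T(b).
for b, c in zip(B, cols):
    rebuilt = tuple(sum(c[j] * Bp[j][i] for j in range(3)) for i in range(3))
    assert rebuilt == T(*b)
```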

Chapter 9

Quadratic Forms

For detailed explanations and more solved problems, readers are referred to Anton and Rorres [2,
Sections 7.3 and 7.4].

9.1 Quadratic Forms


See Anton and Rorres [2, Section 7.3].

9.2 Application
See Anton and Rorres [2, Section 7.4].

Part II

Geometry from Algebraic Point of View

Chapter 10

Algebra of Lines and Planes

For detailed explanations and more solved problems, readers are referred to Anton and Rorres [2],
Lipschutz and Lipson [6], and Wildberger [8].

10.1 Parametric Equation of Line


10.1.1 Parametric equation of a line
Let A and B be two points on a line l. Then a parametric equation of l is

A + λ·AB,

where AB is the direction vector from A to B and λ is a parameter.

Note 10.1.1 The parametric equation of a line is not unique; in fact, it is highly non-unique.

Example 10.1.1 Parametric equation of a line passing through the points (2, 0) and (6, 2) is

(x, y) = ((2, 0) + λ[(6, 2) − (2, 0)])


= ((2, 0) + λ(4, 2))
= (2 + 4λ, 2λ)

That is x = 2 + 4λ and y = 2λ.

Example 10.1.2 A parametric equation of the line passing through the points A(3, 1, 1) and
B(2, 2, −1) is

(3, 1, 1) + λ·BA, where λ is a parameter
= (3, 1, 1) + λ(3 − 2, 1 − 2, 1 − (−1))
= (3, 1, 1) + λ(1, −1, 2).

10.2 Lines in 2-dimensions


10.2.1 Lines in 2-dimensions
Parametric form of a straight line

Problem 10.2.1 Find the meet of the lines x − 2y = 2 and 5x + 4y = 20.

Solution In matrix form,

[ 1 −2 ] [ x ]   [  2 ]
[ 5  4 ] [ y ] = [ 20 ]

⇒ [ x ]   [ 1 −2 ]^(−1) [  2 ]          [  4 2 ] [  2 ]   [ 24/7 ]
  [ y ] = [ 5  4 ]      [ 20 ] = (1/14) [ −5 1 ] [ 20 ] = [  5/7 ].

Therefore the lines meet at (24/7, 5/7).
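The same computation can be sketched with Cramer's rule in exact arithmetic (a check, not part of the original notes):

```python
from fractions import Fraction as F

# Meet of x - 2y = 2 and 5x + 4y = 20 by Cramer's rule.
a, b, c = 1, -2, 2    # first line:  a x + b y = c
d, e, f = 5, 4, 20    # second line: d x + e y = f
det = a * e - b * d               # 1*4 - (-2)*5 = 14
x = F(c * e - b * f, det)         # (2*4 - (-2)*20) / 14 = 24/7
y = F(a * f - c * d, det)         # (1*20 - 2*5) / 14 = 5/7
assert (x, y) == (F(24, 7), F(5, 7))

# Both line equations are satisfied:
assert x - 2 * y == 2 and 5 * x + 4 * y == 20
```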
Problem 10.2.2 Find the meet of the lines

l1 : (2, 0) + λ(4, 2) and l2 : (0, 5) + µ(4, −5).

Solution We need to solve

(2 + 4λ, 2λ) = (4µ, 5 − 5µ),

that is,

4λ − 4µ = −2
2λ + 5µ = 5

⇒ [ λ ]   [ 4 −4 ]^(−1) [ −2 ]          [  5 4 ] [ −2 ]   [ 5/14 ]
  [ µ ] = [ 2  5 ]      [  5 ] = (1/28) [ −2 4 ] [  5 ] = [  6/7 ].

Hence the lines meet at (2 + 4λ, 2λ) = (24/7, 5/7).
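The solve for λ and µ can be sketched in code (exact arithmetic; a check, not part of the original notes), confirming that both parametrizations produce the same meeting point:

```python
from fractions import Fraction as F

# Solve 4λ - 4µ = -2, 2λ + 5µ = 5 by Cramer's rule, then confirm that
# l1 and l2 pass through the same point at those parameter values.
det = 4 * 5 - (-4) * 2                  # 28
lam = F(-2 * 5 - (-4) * 5, det)         # 10/28 = 5/14
mu = F(4 * 5 - (-2) * 2, det)           # 24/28 = 6/7
p1 = (2 + 4 * lam, 2 * lam)             # point on l1
p2 = (4 * mu, 5 - 5 * mu)               # point on l2
assert p1 == p2 == (F(24, 7), F(5, 7))
```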

10.3 Planes in 3-dimensions


10.3.1 Planes in 3-dimensions
The analogue of 2-dimensional lines in 3-dimensions is planes: planes in 3-dimensions act most
like lines in 2-dimensions.

Example 10.3.1 Three points on the plane p : 2x + 6y + 3z = 6 are (0, 0, 2), (0, 1, 0) and (3, 0, 0).
Moreover,

(i) when x = 0, then 2y + z = 2;

(ii) when y = 0, then 2x + 3z = 6;

(iii) when z = 0, then x + 3y = 3.

10.3.2 Meet of three planes


Three planes generally intersect at a single point; exceptions occur when the coefficient matrix is
singular, for instance when at least two of the planes are parallel.

Problem 10.3.1 Find the meet of the planes



x + 2y − z = −3
3x + 7y + 2z = 1          (10.1)
4x − 2y + z = −2

Solution We have to solve the equivalent matrix form of (10.1):

[ x ]   [ 1  2 −1 ]^(−1) [ −3 ]   [ −1 ]
[ y ] = [ 3  7  2 ]      [  1 ] = [  0 ].
[ z ]   [ 4 −2  1 ]      [ −2 ]   [  2 ]
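A computational sketch of this solve (the elimination routine is my own, not from the notes), checking the meeting point exactly:

```python
from fractions import Fraction as F

def solve3(A, b):
    # Gauss-Jordan elimination with exact rational arithmetic.
    M = [[F(v) for v in row] + [F(rhs)] for row, rhs in zip(A, b)]
    for c in range(3):
        p = next(r for r in range(c, 3) if M[r][c] != 0)
        M[c], M[p] = M[p], M[c]
        M[c] = [v / M[c][c] for v in M[c]]
        for r in range(3):
            if r != c and M[r][c] != 0:
                M[r] = [a - M[r][c] * v for a, v in zip(M[r], M[c])]
    return [M[r][3] for r in range(3)]

A = [[1, 2, -1], [3, 7, 2], [4, -2, 1]]
b = [-3, 1, -2]
x, y, z = solve3(A, b)
assert (x, y, z) == (-1, 0, 2)

# The point satisfies all three plane equations:
for row, rhs in zip(A, b):
    assert row[0] * x + row[1] * y + row[2] * z == rhs
```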

10.4 Lines in 3-dimensions
10.4.1 Lines in 3-dimensions
Lines in 3-dimensions are more complicated than lines in 2-dimensions. Any line is determined by
two points on it. Moreover, a line can be interpreted as the intersection of two planes.

10.4.2 Deriving cartesian equation of a line from its parametric equation
A line also has a cartesian equation. The cartesian equation of a line actually consists of two
separate plane equations whose intersection is the line. These two planes are also non-unique,
because the same line can be obtained as the intersection of infinitely many pairs of planes.

Problem 10.4.1 Find the cartesian equation of the line

(3, 1, 1) + λ(−1, −1, 2), where λ is a parameter.

Solution Let

(x, y, z) = (3, 1, 1) + λ(−1, −1, 2)
⇒ x = 3 − λ, y = 1 − λ, z = 1 + 2λ
⇒ λ = 3 − x, λ = 1 − y, λ = (z − 1)/2
⇒ 3 − x = 1 − y = (z − 1)/2 = λ
⇒ (x − 3)/(−1) = (y − 1)/(−1) = (z − 1)/2,

which is the required cartesian equation of the given straight line.
Note 10.4.1 In the above problem, (x − 3)/(−1) = (y − 1)/(−1) is a plane and (y − 1)/(−1) = (z − 1)/2 is another plane.

Note 10.4.2 Observe that the coordinates of the point (3, 1, 1) are respectively subtracted from
x, y and z in the numerators. Moreover, the components of the direction vector (−1, −1, 2) appear
in the corresponding denominators.

Note 10.4.3 The cartesian equation of a line is also highly non-unique, because

(i) we can subtract the coordinates of any point (x0 , y0 , z0 ) lying on the line respectively from x,
y and z in the numerator and

(ii) we can place the corresponding components of any scalar multiple of the direction vector
in the corresponding denominators, as any scalar multiple of a direction vector is again a
direction vector.

Problem 10.4.2 Find the cartesian equation of the line

l : (3, −4, 9) + λ(7, 0, −2), where λ is a parameter.


Solution (x − 3)/7 = (z − 9)/(−2), y = −4.

10.4.3 Meet of a line and a plane
If a line and a plane meet, then we get a point.

Problem 10.4.3 Find the meet of the line l : (−1, 2, −3) + λ(0, 4, 6) and the plane p : x − 2y + 3z = 6.

Solution From the parametric equation of l,

x = −1, y = 2 + 4λ, z = −3 + 6λ.

Substituting these expressions in the equation of p,

−1 − 2(2 + 4λ) + 3(−3 + 6λ) = 6 ⇒ 10λ = 20 ∴ λ = 2.

Therefore the intersecting point is

(x, y, z) = (−1, 2 + 4 × 2, −3 + 6 × 2) = (−1, 10, 9).
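The substitution above can be sketched in a few lines (a check, assuming the dot-product form n · (x, y, z) = rhs of the plane equation):

```python
from fractions import Fraction as F

# Substitute the parametric coordinates of l into the plane equation
# x - 2y + 3z = 6 and solve the resulting linear equation for λ.
P0, d = (-1, 2, -3), (0, 4, 6)          # l : P0 + λ d
n, rhs = (1, -2, 3), 6                  # plane: n · (x, y, z) = 6

dot = lambda u, v: sum(a * b for a, b in zip(u, v))
lam = F(rhs - dot(n, P0), dot(n, d))    # (6 - (-14)) / 10 = 2
meet = tuple(p + lam * v for p, v in zip(P0, d))
assert lam == 2 and meet == (-1, 10, 9)

# The meet lies on the plane:
assert dot(n, meet) == rhs
```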

10.5 Meet of Two Planes


10.5.1 Meet of two planes
Usually, the meet of two planes is a line.

10.5.2 Finding meet of two planes


When asked to find the meet of two planes

ax + by + cz = d
ex + fy + gz = h

algebraically, introduce a third plane z = λ and solve the new system. If

| a b c |
| e f g | = 0,
| 0 0 1 |

then we instead introduce x = λ or y = λ (as in Problem 10.5.2).

Problem 10.5.1 Find the meet of two planes

x + 2y − z = −3
3x + 7y + 2z = 1.

Solution Let z = λ be another plane. Then the corresponding matrix equation of the system

x + 2y − z = −3
3x + 7y + 2z = 1
z = λ

is

[ 1 2 −1 ] [ x ]   [ −3 ]
[ 3 7  2 ] [ y ] = [  1 ].
[ 0 0  1 ] [ z ]   [  λ ]
 
Since

| 1 2 −1 |
| 3 7  2 | = 1(7 − 6) = 1 ≠ 0,
| 0 0  1 |

the matrix is invertible. Thus

[ x ]   [ 1 2 −1 ]^(−1) [ −3 ]   [  7 −2 11 ] [ −3 ]   [ −23 + 11λ ]
[ y ] = [ 3 7  2 ]      [  1 ] = [ −3  1 −5 ] [  1 ] = [  10 − 5λ  ].
[ z ]   [ 0 0  1 ]      [  λ ]   [  0  0  1 ] [  λ ]   [     λ     ]

Therefore the meet of the given two planes is the line with parametric equation

(x, y, z) = (−23 + 11λ, 10 − 5λ, λ) = (−23, 10, 0) + λ(11, −5, 1),

which has the cartesian equation (x + 23)/11 = (y − 10)/(−5) = z/1.

Verification technique of the answer

Both plane equations are satisfied by the point (−23, 10, 0). The direction vector (11, −5, 1) of
the line is a direction common to both planes, so substituting it into the left-hand side of each
plane equation gives 0:

11 + 2 × (−5) − 1 = 0 and 3 × 11 + 7 × (−5) + 2 × 1 = 0.

That is, if we add any multiple of this vector to a solution of the two plane equations, the result
is again a solution.
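This verification can be automated (a sketch, not part of the original notes): check that the base point plus any multiple of the direction vector stays on both planes.

```python
# Verify that (-23, 10, 0) plus any multiple of (11, -5, 1) satisfies
# both plane equations x + 2y - z = -3 and 3x + 7y + 2z = 1.
planes = [((1, 2, -1), -3), ((3, 7, 2), 1)]   # (normal, rhs) pairs
P, d = (-23, 10, 0), (11, -5, 1)

dot = lambda u, v: sum(a * b for a, b in zip(u, v))
for n, rhs in planes:
    assert dot(n, d) == 0                     # direction lies in both planes
    for t in range(-3, 4):
        q = tuple(p + t * v for p, v in zip(P, d))
        assert dot(n, q) == rhs               # every such point is a solution
```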

Problem 10.5.2 Find the meet of two planes

x + 3y − 2z = 2
2x + 6y − 5z = 3.

Solution Let z = λ be another plane. Then the corresponding matrix equation of the system

x + 3y − 2z = 2
2x + 6y − 5z = 3
z = λ

is

[ 1 3 −2 ] [ x ]   [ 2 ]
[ 2 6 −5 ] [ y ] = [ 3 ].
[ 0 0  1 ] [ z ]   [ λ ]

Since

| 1 3 −2 |
| 2 6 −5 | = 0,
| 0 0  1 |

the coefficient matrix is not invertible. Thus we instead introduce another plane y = λ, and the new system

x + 3y − 2z = 2
2x + 6y − 5z = 3
y = λ

is

[ 1 3 −2 ] [ x ]   [ 2 ]
[ 2 6 −5 ] [ y ] = [ 3 ].
[ 0 1  0 ] [ z ]   [ λ ]

Since

| 1 3 −2 |
| 2 6 −5 | = 1 ≠ 0,
| 0 1  0 |

the matrix is invertible. Thus

[ x ]   [ 1 3 −2 ]^(−1) [ 2 ]   [ 5 −2 −3 ] [ 2 ]   [ 4 − 3λ ]
[ y ] = [ 2 6 −5 ]      [ 3 ] = [ 0  0  1 ] [ 3 ] = [    λ   ].
[ z ]   [ 0 1  0 ]      [ λ ]   [ 2 −1  0 ] [ λ ]   [    1   ]
Therefore the meet of the given two planes is the line with parametric equation

(x, y, z) = (4 − 3λ, λ, 1) = (4, 0, 1) + λ(−3, 1, 0),

which has the cartesian equation (x − 4)/(−3) = y, z = 1.
−3

10.6 Cartesian and Parametric Equations of a Plane


10.6.1 Parametric equation of a plane
Let A, B and C be three non-collinear points lying on a plane p. Then a parametric equation of
p is

A + λ·AB + η·AC,

where λ and η are two parameters and AB, AC are the direction vectors from A to B and from A to C.

Note 10.6.1 The parametric equation of a plane is highly non-unique.

Example 10.6.1 Parametric equation of the plane containing the points (1, −1, 0), (2, 1, 4) and
(1, −1, 9) is
(x, y, z) = (1, −1, 0) + λ[(2, 1, 4) − (1, −1, 0)] + η[(1, −1, 9) − (1, −1, 0)],
where λ and η are two parameters.

10.6.2 Deducing cartesian equation of a plane from its parametric equation
The determinant

| Ax Ay Az |
| Bx By Bz |
| Cx Cy Cz |,

written as the triple product A · (B × C), represents the signed volume of the parallelepiped
formed by the three vectors A, B and C. The necessary and sufficient condition for the three
vectors A, B and C to be coplanar is that A · (B × C) is zero. It is to be noted that

A · (B × C) = B · (C × A) = C · (A × B).

Also

A · (B × C) = (A × B) · C.

Problem 10.6.1 Obtain the cartesian equation of the plane

p : (1, −1, 0) + λ(1, 2, 4) + η(0, 0, 9).

Solution Let P (x, y, z) be any point on p, A = (1, −1, 0), u = (1, 2, 4), v = (0, 0, 9). So
AP = (x − 1, y + 1, z). Here AP, u and v must be coplanar. So

| x − 1  1  0 |
| y + 1  2  0 | = 0
|   z    4  9 |

⇒ (−1)^(3+3) · 9[2(x − 1) − (y + 1)] = 0 ∴ 2x − y = 3.
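The coplanarity test used here is easy to check computationally (a sketch, not from the notes): a point lies on the plane exactly when the determinant with AP, u, v as rows vanishes.

```python
def det3(M):
    # Cofactor expansion of a 3x3 determinant along the first row.
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

A, u, v = (1, -1, 0), (1, 2, 4), (0, 0, 9)

def on_plane(P):
    # P is on the plane iff AP, u, v are coplanar.
    AP = tuple(p - a for p, a in zip(P, A))
    return det3([AP, u, v]) == 0

# Points satisfying 2x - y = 3 (with z arbitrary) lie on the plane:
for x, z in [(0, 0), (2, 5), (-1, 7)]:
    assert on_plane((x, 2 * x - 3, z))

# A point off the plane 2x - y = 3 does not:
assert not on_plane((0, 0, 0))
```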

10.6.3 Deducing parametric equation of a plane from its cartesian equation
From the cartesian equation of a plane, it can easily be found three points lying on it.

Problem 10.6.2 Find the parametric equation of the plane 2x + y + 3z = 6.

Solution Clearly, A = (0, 0, 2), B = (0, 6, 0) and C = (3, 0, 0) lie on the given plane. Hence a parametric equation is

(x, y, z) = A + λ·AB + η·AC = (0, 0, 2) + λ(0, 6, −2) + η(3, 0, −2),

where λ and η are two parameters.
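A quick check (a sketch, not part of the original notes) that the point A = (0, 0, 2) together with the direction vectors AB and AC really sweeps out the plane 2x + y + 3z = 6:

```python
from fractions import Fraction as F

# Parametric form of 2x + y + 3z = 6 built from three points on it.
A, B, C = (0, 0, 2), (0, 6, 0), (3, 0, 0)
AB = tuple(b - a for a, b in zip(A, B))   # (0, 6, -2)
AC = tuple(c - a for a, c in zip(A, C))   # (3, 0, -2)

def point(lam, eta):
    return tuple(a + lam * u + eta * v for a, u, v in zip(A, AB, AC))

# Every (λ, η) yields a point of the plane:
for lam in (F(0), F(1, 2), F(-2)):
    for eta in (F(0), F(3), F(-1, 3)):
        x, y, z = point(lam, eta)
        assert 2 * x + y + 3 * z == 6
```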

Bibliography

[1] Khan Academy, URL: https://www.khanacademy.org/math/multivariable-calculus/


thinking-about-multivariable-function/x786f2022:vectors-and-matrices/a/
matrices--visualized-mvc.

[2] H. Anton and C. Rorres, Elementary Linear Algebra (Applications Version), Eleventh ed., John
Wiley & Sons, 2020-21.

[3] G. Hadley, Linear Algebra, Addison-Wesley World Student Series ed., Addison Wesley Publish-
ing Company, INC, 1977.

[4] A. G. Hamilton, Linear Algebra: An Introduction with Concurrent Examples, First ed., Cambridge
University Press, Cambridge, 1989.

[5] R. Larson and D. C. Falvo, Elementary Linear Algebra, Sixth ed., Houghton Mifflin Harcourt
Publishing Company, Boston, New York, 2009.

[6] S. Lipschutz and M. L. Lipson, Theory and Problems of Linear Algebra, Third ed., Schaum's
Outline Series, McGraw-Hill, 2013-2014.

[7] M. Abdur Rahman, College Linear Algebra: Theory of Matrices with Applications, Sixth
(Reprint) ed., Nahar Book Depot & Publications, Dhaka, 2012.

[8] N. J. Wildberger, Wild Linear Algebra, Insights into Mathematics, Youtube


Channel, URL: https://www.youtube.com/watch?v=yAb12PWrhV0&list=
PLIljB45xT85BhzJ-oWNug1YtUjfWp1qAp, March 08, 2011.

