Applied Linear Algebra MTH 3003 29aug
Dr. S Panda
Department of Mathematics
ITER (SOA Deemed to be University)
Chapter 1: Matrices and Gaussian Elimination
Lecture 1: The Geometry of Linear Equations
System of linear equations
2x + y − z = 2
x − y + z = 1
Matrix Representation
Ax = b
The Matrix Representation
(i) x + 2y = 3        (ii) x + 2y = 3        (iii) x + 2y = 3
    4x + 5y = 6            4x + 8y = 6             4x + 8y = 12
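A brief elaboration (not on the original slide) of how the three systems differ under one elimination step:

\[
\begin{aligned}
&\text{(i)}\;\; x + 2y = 3,\;\; 4x + 5y = 6: && R_2 - 4R_1 \;\Rightarrow\; -3y = -6 \;\Rightarrow\; y = 2,\; x = -1 \quad \text{(unique solution)}\\
&\text{(ii)}\;\; x + 2y = 3,\;\; 4x + 8y = 6: && R_2 - 4R_1 \;\Rightarrow\; 0 = -6 \quad \text{(no solution)}\\
&\text{(iii)}\; x + 2y = 3,\;\; 4x + 8y = 12: && R_2 - 4R_1 \;\Rightarrow\; 0 = 0 \quad \text{(infinitely many solutions)}
\end{aligned}
\]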
Row Picture
x + 2y = 3 . . . (1)
4x + 5y = 6 . . . (2)
[Figure: row picture — the two lines in the plane and their point of intersection]
Column Picture
x + 2y = 3
4x + 5y = 6
Column Picture
The columns (1, 4) and (2, 5) are combined to produce b = (3, 6).
[Figure: column picture — the vectors (1, 4), (2, 5) and b = (3, 6) in the plane]
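For completeness (the coefficients are not spelled out on the slide), the combination of the columns that produces b is given by the solution of the system:

\[
-1\begin{bmatrix}1\\4\end{bmatrix} + 2\begin{bmatrix}2\\5\end{bmatrix}
= \begin{bmatrix}-1+4\\-4+10\end{bmatrix}
= \begin{bmatrix}3\\6\end{bmatrix}.
\]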
Consistency and Solution of a System
Practice Problems
x + y = 2, 2x − 2y = 4.
(3) With reference to Q.(2), write down the augmented matrix for each of the above systems.
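As an illustration of the format asked for (this refers to the system x + y = 2, 2x − 2y = 4 printed above, not to the problems in Q.(2)):

\[
\left[\begin{array}{cc|c} 1 & 1 & 2\\ 2 & -2 & 4 \end{array}\right]
\]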
Lecture 2: Gaussian Elimination
Elementary Row Operations & Elementary Matrices
Properties
Pivots
Pivot
The first non-zero entry in each row is called a pivot.
For example, consider

A = \begin{bmatrix} 1 & 2 \\ 0 & -1 \end{bmatrix}, \qquad B = \begin{bmatrix} 0 & 0 & 1 \\ 1 & 1 & 0 \end{bmatrix}

The pivots of A are 1 (row 1) and −1 (row 2); the pivots of B are the 1 in row 1 (column 3) and the 1 in row 2 (column 1).
The columns containing the pivots are called the pivotal columns of the matrix.
Row Echelon Form (REF)
Exercise
List all REF of 2 × 2 matrices.
Ans. The possible structures are as follows:
" # " # " # " #
a1 ∗ a1 ∗ 0 a1 0 0
, , , ,
0 a2 0 0 0 0 0 0
Examples
Consider the following matrices:
" # " # " # 1 0
1 3 1 0 0 1
A= , B= , C= , D = 0 0
0 2 2 1 1 0
2 1
Properties
a) REF of a non-zero matrix is not unique.
b) The number of non-zero rows in the REF is called the rank of
the matrix.
18
Reduced Row Echelon Form (RREF)
A matrix R is said to be in reduced row echelon form (RREF) if
(i) R is in REF,
(ii) the pivots of R are all 1, and
(iii) the entries above and below each pivot are all zero.
Exercise
List all RREF of 2 × 2 matrices.
Ans. The possible structures are as follows:
" # " # " # " #
1 0 1 ∗ 0 1 0 0
, , , ,
0 1 0 0 0 0 0 0
where ∗ ∈ R.
19
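The RREF, the pivotal columns, and the rank can be computed with SymPy (a sketch for experimenting, not part of the lecture):

```python
from sympy import Matrix

# Matrix A from the Pivots example above
A = Matrix([[1,  2],
            [0, -1]])

# rref() returns the reduced row echelon form together with the
# indices of the pivotal columns.
R, pivot_cols = A.rref()
print(R)           # Matrix([[1, 0], [0, 1]])
print(pivot_cols)  # (0, 1) -> both columns are pivotal
print(A.rank())    # 2 = number of non-zero rows in the REF
```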
Test for Consistency
Elimination Method
Lecture 3: Matrix Factorization
LU Factorization
LDU Factorization (A = LU = LDU ′ )
\begin{bmatrix} 2 & 1 & 0 \\ 1 & 2 & 1 \\ 0 & 1 & 2 \end{bmatrix}
= \underbrace{\begin{bmatrix} 1 & 0 & 0 \\ 1/2 & 1 & 0 \\ 0 & 2/3 & 1 \end{bmatrix}}_{L}
\underbrace{\begin{bmatrix} 2 & 1 & 0 \\ 0 & 3/2 & 1 \\ 0 & 0 & 4/3 \end{bmatrix}}_{U}
= \underbrace{\begin{bmatrix} 1 & 0 & 0 \\ 1/2 & 1 & 0 \\ 0 & 2/3 & 1 \end{bmatrix}}_{L}
\underbrace{\begin{bmatrix} 2 & 0 & 0 \\ 0 & 3/2 & 0 \\ 0 & 0 & 4/3 \end{bmatrix}}_{D}
\underbrace{\begin{bmatrix} 1 & 1/2 & 0 \\ 0 & 1 & 2/3 \\ 0 & 0 & 1 \end{bmatrix}}_{U'}

Similarly,

A = \begin{bmatrix} 1 & 3 \\ 5 & 7 \end{bmatrix}
= \underbrace{\begin{bmatrix} 1 & 0 \\ 5 & 1 \end{bmatrix}}_{L}
\underbrace{\begin{bmatrix} 1 & 3 \\ 0 & -8 \end{bmatrix}}_{U}
= \underbrace{\begin{bmatrix} 1 & 0 \\ 5 & 1 \end{bmatrix}}_{L}
\underbrace{\begin{bmatrix} 1 & 0 \\ 0 & -8 \end{bmatrix}}_{D}
\underbrace{\begin{bmatrix} 1 & 3 \\ 0 & 1 \end{bmatrix}}_{U'}
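A numerical cross-check of the 3 × 3 factorization above (a sketch; note that scipy.linalg.lu uses partial pivoting and in general returns a permutation P as well, but this matrix needs no row exchanges):

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])

P, L, U = lu(A)                  # A = P @ L @ U; here P is the identity
D = np.diag(np.diag(U))          # pivots 2, 3/2, 4/3 on the diagonal
U_prime = np.linalg.solve(D, U)  # scale each row of U by 1/pivot

print(np.allclose(A, L @ U))             # True
print(np.allclose(A, L @ D @ U_prime))   # True
```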
Practice Problems
Lecture 4: Gauss-Jordan Method & Matrix Inverse
Inverse of a Matrix
Definition
The inverse of A is a matrix B such that BA = I and AB = I. There
is at most one such B, and it is denoted by A−1 .
AA−1 = A−1 A = I.
Properties of Inverse
1) Existence: The inverse exists if and only if elimination
produces n pivots (row exchanges allowed).
2) Uniqueness: The matrix A cannot have two different inverses: if BA = I and AC = I, then B = C.
3) Unique solution: If A is invertible, the one and only solution to
Ax = b is x = A−1 b.
4) Suppose there is a nonzero vector x such that Ax = 0. Then A
cannot have an inverse.
Gauss-Jordan Method
Finding Inverse of a Matrix
Example
[A\,|\,I] = \left[\begin{array}{ccc|ccc} 1 & 0 & 1 & 1 & 0 & 0 \\ 1 & 1 & 0 & 0 & 1 & 0 \\ 0 & 1 & 1 & 0 & 0 & 1 \end{array}\right]
\xrightarrow{R_2 - R_1}
\left[\begin{array}{ccc|ccc} 1 & 0 & 1 & 1 & 0 & 0 \\ 0 & 1 & -1 & -1 & 1 & 0 \\ 0 & 1 & 1 & 0 & 0 & 1 \end{array}\right]

\xrightarrow{R_3 - R_2}
\left[\begin{array}{ccc|ccc} 1 & 0 & 1 & 1 & 0 & 0 \\ 0 & 1 & -1 & -1 & 1 & 0 \\ 0 & 0 & 2 & 1 & -1 & 1 \end{array}\right]
\xrightarrow{R_1 - \frac{1}{2}R_3,\; R_2 + \frac{1}{2}R_3}
\left[\begin{array}{ccc|ccc} 1 & 0 & 0 & 1/2 & 1/2 & -1/2 \\ 0 & 1 & 0 & -1/2 & 1/2 & 1/2 \\ 0 & 0 & 2 & 1 & -1 & 1 \end{array}\right]

\xrightarrow{\frac{1}{2}R_3}
\left[\begin{array}{ccc|ccc} 1 & 0 & 0 & 1/2 & 1/2 & -1/2 \\ 0 & 1 & 0 & -1/2 & 1/2 & 1/2 \\ 0 & 0 & 1 & 1/2 & -1/2 & 1/2 \end{array}\right] = [I\,|\,A^{-1}]
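A quick numerical check of the result (a sketch with NumPy, not part of the slide):

```python
import numpy as np

A = np.array([[1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])

A_inv = np.linalg.inv(A)
print(A_inv)
# [[ 0.5  0.5 -0.5]
#  [-0.5  0.5  0.5]
#  [ 0.5 -0.5  0.5]]
print(np.allclose(A @ A_inv, np.eye(3)))  # True
```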
Practice Problems
Lecture 5: Transpose, Symmetric & Skew-symmetric Matrices
Transpose
Note that
The rows (respectively, columns) of A become the columns (respectively, rows) of A^T.
For example,

A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \implies A^T = \begin{bmatrix} 1 & 3 \\ 2 & 4 \end{bmatrix}, \qquad \text{and} \qquad
B = \begin{bmatrix} 1 & 2 & 0 \\ 0 & 2 & 1 \end{bmatrix} \implies B^T = \begin{bmatrix} 1 & 0 \\ 2 & 2 \\ 0 & 1 \end{bmatrix}
Properties of Transpose
Symmetric Matrices
Symmetric Matrices
A matrix A is called symmetric if
AT = A.
Observe that

A = \begin{bmatrix} 1 & 2 \\ 2 & 4 \end{bmatrix}, \qquad B = \begin{bmatrix} 1 & 0 & 1 \\ 0 & 2 & 4 \\ 1 & 4 & 1 \end{bmatrix}

are examples of symmetric matrices.
Results
a) Symmetric matrices are all square matrices.
b) For any square matrix A, A + A^T is symmetric.
c) AA^T and A^T A are always symmetric for any matrix A (A need not be square).
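A small numerical illustration of result c) (a sketch with NumPy, not from the slides):

```python
import numpy as np

# A 2x3 (non-square) matrix: both A @ A.T and A.T @ A come out symmetric.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 2.0, 1.0]])

S1 = A @ A.T   # 2x2
S2 = A.T @ A   # 3x3
print(np.allclose(S1, S1.T), np.allclose(S2, S2.T))  # True True
```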
Skew-symmetric Matrices
Skew-symmetric Matrices
A matrix A is called skew-symmetric if
AT = −A.
Observe that

A = \begin{bmatrix} 0 & 2 \\ -2 & 0 \end{bmatrix}, \qquad B = \begin{bmatrix} 0 & -2 & 1 \\ 2 & 0 & 4 \\ -1 & -4 & 0 \end{bmatrix}

are examples of skew-symmetric matrices.
Properties
Theorem
Every matrix can be expressed as the sum of a symmetric and a
skew-symmetric matrix.
Note that

A = \underbrace{\tfrac{1}{2}\left(A + A^T\right)}_{\text{symmetric}} + \underbrace{\tfrac{1}{2}\left(A - A^T\right)}_{\text{skew-symmetric}}

Note that
If A is a symmetric matrix (i.e., A^T = A) that can be factored into A = LDU without row exchanges, then L = U^T.
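A small numerical illustration of the symmetric/skew-symmetric split (a sketch, not from the slides):

```python
import numpy as np

A = np.array([[1.0, 7.0],
              [3.0, 5.0]])

S = 0.5 * (A + A.T)   # symmetric part
K = 0.5 * (A - A.T)   # skew-symmetric part

print(np.allclose(S, S.T))     # True
print(np.allclose(K, -K.T))    # True
print(np.allclose(A, S + K))   # True: A is recovered as the sum
```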
Chapter 2: Vector Spaces
Lecture 6: Vector Spaces and Subspaces
Vector Space
Subspace
A subset W of a vector space V is called a subspace of V if
(1) W ≠ ∅,
(2) u, v ∈ W =⇒ u + v ∈ W,
(3) u ∈ W, a ∈ R =⇒ au ∈ W.
Example
Problem
Examine whether the given subset of R^3 is a subspace:
V = {(b_1, b_2, b_3) ∈ R^3 : b_1 = 0}.
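The slide leaves the verification to the reader; a brief sketch of the check:

\[
(0, b_2, b_3) + (0, c_2, c_3) = (0,\, b_2 + c_2,\, b_3 + c_3) \in V,
\qquad
a\,(0, b_2, b_3) = (0,\, a b_2,\, a b_3) \in V.
\]

Also (0, 0, 0) ∈ V, so V is non-empty and closed under addition and scalar multiplication; hence V is a subspace of R^3.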
Linear Span
The linear span or the span of a set of vectors S = {v1 , . . . , vn } is
defined as
span S = {c1 v1 + · · · + cn vn : c1 , . . . , cn ∈ R}
For example,
The span of T = {(1, 1), (1, −1)} is
( " # " # ) (" # )
1 1 x+y
span T = x +y : x, y ∈ R = : x, y ∈ R = R2
1 −1 x−y
Result
Suppose that V is a vector space and S ⊂ V . Then span S is a
subspace of V .
Spanning set
Suppose V is a vector space and T ⊂ V is such that
span T = V.
Then T is called a spanning set of V.
For example,
T = {(1, 1), (1, −1)} is a spanning set of R2 .
Similarly, B = {(1, 0), (0, 1)} is another spanning set of R2 .
Note that a spanning set of a vector space is, in general, not unique.
Lecture 7: Solving Ax = 0 and Ax = b
Solving Ax = 0 when A is not invertible
Solving Ax = 0 when N (A) ̸= {0}
Results
1) If x_p is a particular solution of Ax = b and x_n is any solution of Ax = 0, then x_p + x_n is also a solution of Ax = b. In other words, if x_p is a particular solution of Ax = b, then every solution of Ax = b can be written as x = x_p + x_n for some x_n ∈ N(A).
Solving Ax = 0 when A is not invertible
Example
Solve: Ax = 0, where A = \begin{bmatrix} 1 & 3 & 3 & 2 \\ 2 & 6 & 9 & 7 \\ -1 & -3 & 3 & 4 \end{bmatrix} and x = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix}.

Solution.

\begin{bmatrix} 1 & 3 & 3 & 2 \\ 2 & 6 & 9 & 7 \\ -1 & -3 & 3 & 4 \end{bmatrix}
\xrightarrow{\substack{R_2 = R_2 - 2R_1 \\ R_3 = R_3 + R_1}}
\begin{bmatrix} 1 & 3 & 3 & 2 \\ 0 & 0 & 3 & 3 \\ 0 & 0 & 6 & 6 \end{bmatrix}
\xrightarrow{R_3 = R_3 - 2R_2}
\underbrace{\begin{bmatrix} 1 & 3 & 3 & 2 \\ 0 & 0 & 3 & 3 \\ 0 & 0 & 0 & 0 \end{bmatrix}}_{U}

The second and fourth columns do not contain a pivot. We call the corresponding variables x_2 and x_4 free variables; x_1 and x_3 are called basic variables. Set parameters for the free variables: x_2 = s, x_4 = t. Then the reduced system gives x_1 + 3s + 3x_3 + 2t = 0 and 3x_3 + 3t = 0, i.e., x_3 = -t and x_1 = t - 3s. Hence

x = \begin{bmatrix} t - 3s \\ s \\ -t \\ t \end{bmatrix}
= s \begin{bmatrix} -3 \\ 1 \\ 0 \\ 0 \end{bmatrix} + t \begin{bmatrix} 1 \\ 0 \\ -1 \\ 1 \end{bmatrix}, \qquad s, t \in R.
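The same null space can be cross-checked symbolically (a sketch using SymPy, not part of the slide):

```python
from sympy import Matrix

A = Matrix([[ 1,  3, 3, 2],
            [ 2,  6, 9, 7],
            [-1, -3, 3, 4]])

# nullspace() returns a basis of N(A); here it gives exactly the
# two vectors found above.
for v in A.nullspace():
    print(v.T)
# Matrix([[-3, 1, 0, 0]])
# Matrix([[1, 0, -1, 1]])
```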
Solving Ax = b when A is not invertible
Example
" # " # x1
1 3 3 1
Solve: Ax = b where A = ,b= and x = x2 .
2 6 9 2
x3
Solution.Here we start with the augmented matrix [A|b], i.e.,
1 3 3 1 R2 =R2 −2R1 1 3 3 1
−−−−−−−−−→ = U.
2 6 9 2 0 0 3 0
Here x2 is the free variable. We set x2 = a. Then, the reduced system implies,
x1 = 1 − 3a
" # " #
−3
1
x1 + 3a + 3x3 =1
⇒ x2 =a ⇒x= a 1 + 0 , a∈R .
x3 =0 0 0
x3 =0
| {z } |{z}
xn xp
51
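The complete solution can also be obtained symbolically (a sketch using SymPy, not on the slide):

```python
from sympy import Matrix, linsolve, symbols

x1, x2, x3 = symbols('x1 x2 x3')
A = Matrix([[1, 3, 3],
            [2, 6, 9]])
b = Matrix([1, 2])

# linsolve returns the solution set with the free variable kept symbolic.
print(linsolve((A, b), x1, x2, x3))
# {(1 - 3*x2, x2, 0)}  -> x_p = (1, 0, 0) plus x2 * (-3, 1, 0)
```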
Lecture 8: Linear Independence, Basis & Dimension
Linear Dependence & Independence
Definition
A set S = {v⃗_1, . . . , v⃗_n} is said to be linearly independent if and only if
c_1 v⃗_1 + · · · + c_n v⃗_n = 0⃗ =⇒ c_1 = · · · = c_n = 0.
If a set of vectors S is not linearly independent, we say that S is linearly dependent. In other words, the vectors v⃗_1, . . . , v⃗_n are linearly dependent if there exist scalars c_1, . . . , c_n, not all zero, such that
c_1 v⃗_1 + · · · + c_n v⃗_n = 0⃗.
The above definition provides a method to test for linear independence: finding c_1, . . . , c_n is the same as solving

\begin{bmatrix} \vec{v}_1 & \cdots & \vec{v}_n \end{bmatrix}
\begin{bmatrix} c_1 \\ \vdots \\ c_n \end{bmatrix}
= \begin{bmatrix} 0 \\ \vdots \\ 0 \end{bmatrix}
\implies
\begin{cases} \text{no zero rows} \implies c_i = 0 \ \forall i & \text{(LI)} \\ \exists \text{ a zero row} \implies \text{some } c_i \neq 0 & \text{(LD)} \end{cases}
Linear Independence
Example
Examine whether the vectors {(1, 2, 1), (2, 4, 3), (3, 6, 4)} are linearly
independent?
Solution. Consider the matrix A = \begin{bmatrix} 1 & 2 & 1 \\ 2 & 4 & 3 \\ 3 & 6 & 4 \end{bmatrix} (whose rows are the given vectors). Then

\begin{bmatrix} 1 & 2 & 1 \\ 2 & 4 & 3 \\ 3 & 6 & 4 \end{bmatrix}
\xrightarrow{\substack{R_2 = R_2 - 2R_1 \\ R_3 = R_3 - 3R_1}}
\begin{bmatrix} 1 & 2 & 1 \\ 0 & 0 & 1 \\ 0 & 0 & 1 \end{bmatrix}
\xrightarrow{R_3 = R_3 - R_2}
\begin{bmatrix} 1 & 2 & 1 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix}.

Thus REF(A) has a zero row. Therefore, the given vectors are linearly dependent.
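The same conclusion can be checked quickly via the rank (a sketch using SymPy, not on the slide):

```python
from sympy import Matrix

# Rows are the vectors being tested.
A = Matrix([[1, 2, 1],
            [2, 4, 3],
            [3, 6, 4]])

# rank < number of vectors  =>  the vectors are linearly dependent.
print(A.rank())  # 2, but there are 3 vectors, so they are dependent
```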
Basis & Dimension
Basis
A linearly independent spanning set B of a vector space V is called a
basis for the vector space V .
Properties:
Problem
Examine whether the given set {(1, 2, 2), (−1, 2, 1), (0, 8, 6)} forms a basis of R^3.
Solution.

A = \begin{bmatrix} 1 & 2 & 2 \\ -1 & 2 & 1 \\ 0 & 8 & 6 \end{bmatrix}
\xrightarrow{R_2 = R_2 + R_1}
\begin{bmatrix} 1 & 2 & 2 \\ 0 & 4 & 3 \\ 0 & 8 & 6 \end{bmatrix}
\xrightarrow{R_3 = R_3 - 2R_2}
\begin{bmatrix} 1 & 2 & 2 \\ 0 & 4 & 3 \\ 0 & 0 & 0 \end{bmatrix}

Since the REF has a zero row, the vectors are linearly dependent and therefore they cannot form a basis.
Exercises
Problem
Find a basis for the plane x + y + z = 0 in R3 and a basis for the
intersection of this plane with the yz-plane.
Solution. The points on the plane satisfy z = -x - y and can therefore be written as

\left\{ \begin{bmatrix} x \\ y \\ -x - y \end{bmatrix} : x, y \in R \right\}
= \left\{ x \begin{bmatrix} 1 \\ 0 \\ -1 \end{bmatrix} + y \begin{bmatrix} 0 \\ 1 \\ -1 \end{bmatrix} : x, y \in R \right\}

Hence {(1, 0, −1), (0, 1, −1)} is a basis for the plane.
Exercise
Problem
Let A_{n \times n} = \begin{bmatrix} a_1 & a_2 & \cdots & a_n \end{bmatrix}, where a_i ∈ R^n for each i, and suppose c_1, . . . , c_n ∈ R, not all zero, are such that

\sum_{i=1}^{n} c_i a_i = 0 \quad \text{and} \quad \sum_{i=1}^{n} a_i = b.
Fundamental Subspaces
Note that
the row space, column space, null space, and left-null space of A and A^T are related as follows:
R(A) = C(A^T) and C(A) = R(A^T),
and
LN(A) = N(A^T) and LN(A^T) = N(A).
Dimensions of the Fundamental Subspaces
Result
If A is a square matrix of order n and rank(A) = r, then
dim(R(A)) = dim(C(A)) = r,   dim(N(A)) = dim(N(A^T)) = n − r.
More generally, if A is an m × n matrix with rank(A) = r, then
dim(R(A)) = dim(C(A)) = r,   dim(N(A)) = n − r,   dim(N(A^T)) = m − r.
Practice Problem
Example
Determine the dimensions of the four fundamental subspaces of

A = \begin{bmatrix} 1 & 2 & 1 \\ 2 & 4 & 3 \\ 3 & 6 & 4 \end{bmatrix}.

Solution.

A = \begin{bmatrix} 1 & 2 & 1 \\ 2 & 4 & 3 \\ 3 & 6 & 4 \end{bmatrix}
\xrightarrow{\substack{R_2 = R_2 - 2R_1 \\ R_3 = R_3 - 3R_1}}
\begin{bmatrix} 1 & 2 & 1 \\ 0 & 0 & 1 \\ 0 & 0 & 1 \end{bmatrix}
\xrightarrow{R_3 = R_3 - R_2}
\begin{bmatrix} 1 & 2 & 1 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix}

Thus rank(A) = 2. Therefore,
dim(R(A)) = dim(C(A)) = 2,   dim(N(A)) = dim(N(A^T)) = 1.
Row Space
Suppose A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix} is any m × n matrix.
In other words, the row space contains all linear combinations of the
rows of A.
Properties of Row Space
Properties
If A is an m × n matrix, then R(A) is a subspace of Rn and
Example
Determine the row space of A = \begin{bmatrix} 1 & 1 & 0 & 2 \\ 1 & 0 & 1 & 2 \\ 0 & 1 & 2 & 1 \end{bmatrix}. Also determine the dimensions of the four fundamental subspaces.
Solution. R(A) = span{(1, 1, 0, 2), (1, 0, 1, 2), (0, 1, 2, 1)}.
" # " # " #
1 1 0 2 1 1 0 2 1 1 0 2
R2 ↔R3 R3 =R3 −R1
1 0 1 2 −
−−−−−
→ 0 1 2 1 −−−−−−−−→ 0 1 2 1
0 1 2 1 1 0 1 2 0 −1 1 0
dim R(A) = dim C(A) = 3
" #
1 1 0 2
R3 =R3 +R2
−−−−−−−−→ 0 1 2 1 ⇒ rank(A) = 3 =⇒ dim N (A) = 3 − 3 =0
0 0 3 1
64
dim N (AT ) = 4 − 3 = 1.
Column Space
Column Space
The column space of Am×n contains all linear combinations of the
columns of A. It is a subspace of Rm .
Suppose A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix} is any m × n matrix.
Theorem
The system Ax = b is solvable if and only if the vector b can be
expressed as a combination of the columns of A. Then b is in the
column space.
Exercise
Determine the column space of A = \begin{bmatrix} 1 & 0 & 2 \\ 0 & 2 & 1 \\ 1 & 2 & 3 \end{bmatrix} and find a basis for C(A).

Solution. Here the column space is C(A) = span{(1, 0, 1), (0, 2, 2), (2, 1, 3)}.
For a basis, we test these vectors for linear independence:

\begin{bmatrix} 1 & 0 & 1 \\ 0 & 2 & 2 \\ 2 & 1 & 3 \end{bmatrix}
\xrightarrow{R_3 = R_3 - 2R_1}
\begin{bmatrix} 1 & 0 & 1 \\ 0 & 2 & 2 \\ 0 & 1 & 1 \end{bmatrix}
\xrightarrow{R_3 = R_3 - \frac{1}{2}R_2}
\begin{bmatrix} 1 & 0 & 1 \\ 0 & 2 & 2 \\ 0 & 0 & 0 \end{bmatrix}
\;\Rightarrow\; \text{rank}(A) = 2.

Thus {(1, 0, 1), (0, 2, 2)} is a basis for C(A).
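A symbolic cross-check (a sketch using SymPy, not on the slide; columnspace() returns the pivot columns of A as a basis, which here matches the basis found above):

```python
from sympy import Matrix

A = Matrix([[1, 0, 2],
            [0, 2, 1],
            [1, 2, 3]])

for col in A.columnspace():
    print(col.T)
# Matrix([[1, 0, 1]])
# Matrix([[0, 2, 2]])
```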
Null Space
N (A) = {x ∈ Rn : Ax = 0}.
Note that
(1) nullity(A) = dim N (A) = n − r, where r = rank(A). In other
words,
rank(A) + nullity(A) = n.
This is famously known as the rank-nullity theorem.
(2) N(A) is orthogonal to the row space R(A), i.e., each vector in R(A) is orthogonal to every vector in N(A).
Left-null Space
Note that
(1) dim LN (A) = dim N (AT ) = m − r, where
r = rank(A) = rank(AT ). In other words,
rank(AT ) + nullity(AT ) = m.
Chapter 3: Orthogonality
Lecture 10: Orthogonal Vectors and Subspaces
Lecture 11: Projections onto Lines
Lecture 12: Projections and Least Squares
Chapter 4: Determinants
Lecture 13: Properties of the Determinant
Lecture 10: Orthogonal Vectors and Subspaces
Length of a Vector
Length
The length ∥x∥ in Rn is the positive square root of xT x. In other
words, the length of a vector x = (x1 , . . . , xn ) ∈ Rn is given by
∥x∥ = \sqrt{x^T x} = \sqrt{x_1^2 + \cdots + x_n^2}.
For example,
The length of the vector u = (1, −2, 3) is
∥u∥ = \sqrt{ \begin{bmatrix} 1 & -2 & 3 \end{bmatrix} \begin{bmatrix} 1 \\ -2 \\ 3 \end{bmatrix} } = \sqrt{1^2 + (-2)^2 + 3^2} = \sqrt{14}.
Inner Product of Two Vectors
Inner Product
The inner product of any two vectors x = (x1 , . . . , xn ) and
y = (y1 , . . . , yn ) in Rn is defined by
x^T y = \begin{bmatrix} x_1 & x_2 & \cdots & x_n \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix} = \sum_{i=1}^{n} x_i y_i.
For example,
The inner product of two vectors u = (1, −2, 1) and v = (1, 1, 1) is
u^T v = \begin{bmatrix} 1 & -2 & 1 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} = 1 + (-2) + 1 = 0.
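A quick numerical illustration of length and inner product (a sketch with NumPy, not part of the slides):

```python
import numpy as np

u = np.array([1.0, -2.0, 3.0])   # vector from the length example
v = np.array([1.0, -2.0, 1.0])   # vectors from the inner-product example
w = np.array([1.0, 1.0, 1.0])

print(np.linalg.norm(u))   # 3.7416... = sqrt(14)
print(np.sqrt(u @ u))      # same value, computed as sqrt(u^T u)
print(v @ w)               # 0.0 -> the two vectors are orthogonal
```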
Properties of Inner Product
Properties
(1) xT y = y T x, for all x, y ∈ Rn .
(2) xT x = ∥x∥2 , for all x ∈ Rn .
(3) This product is also called the scalar product or the dot
product of two vectors.
(4) xT y = ∥x∥∥y∥ cos θ, where θ is the angle between the vectors x
and y.
(5) In particular, the inner product xT y is zero if and only if x
and y are orthogonal vectors. If xT y > 0, their angle is less
than 90◦ . If xT y < 0, their angle is greater than 90◦ .
Orthogonal Vectors
Orthogonality
Any two non-zero vectors x = (x1 , . . . , xn ) and y = (y1 , . . . , yn ) in Rn
are said to be orthogonal if
xT y = 0.
Exercise
Suppose A is a matrix of order m × n. Then show that each vector in
R(A) is orthogonal to all vectors of N (A).
Hints: Let a_k^T denote the k-th row of A. Now, x ∈ N(A) implies

Ax = 0 \implies \begin{cases} a_1^T x = 0 \\ \;\;\vdots \\ a_m^T x = 0 \end{cases} \implies x \perp a_k \text{ for all } k.
Properties of Orthogonal Vectors
Theorem
If nonzero vectors v1 , . . . , vk are mutually orthogonal (every vector
is perpendicular to every other), then those vectors are linearly inde-
pendent.
Remark
If nonzero vectors v_1, . . . , v_k are linearly independent, then there exist mutually orthogonal vectors u_1, . . . , u_k such that
span{v_1, . . . , v_k} = span{u_1, . . . , u_k}.
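One standard way to produce such mutually orthogonal vectors is the Gram–Schmidt process (not worked out on the slide); a minimal sketch:

```python
import numpy as np

def gram_schmidt(vectors):
    """Return mutually orthogonal vectors with the same span.

    Each input vector has its projections onto the previously accepted
    vectors subtracted off; (near-)zero remainders are dropped, so the
    output is an orthogonal basis of the span.
    """
    basis = []
    for v in vectors:
        w = v.astype(float).copy()
        for u in basis:
            w -= (u @ w) / (u @ u) * u   # remove the component along u
        if np.linalg.norm(w) > 1e-12:    # keep only nonzero remainders
            basis.append(w)
    return basis

# Example: two independent vectors in R^3
vs = [np.array([1.0, 1.0, 0.0]), np.array([1.0, 0.0, 1.0])]
u1, u2 = gram_schmidt(vs)
print(u1 @ u2)   # ~0: the outputs are orthogonal
```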
Orthogonal Basis
If B is a basis for a vector space V where the basis vectors are
mutually orthogonal, then B is said to be an orthogonal basis of
V . If further all the basis vectors are chosen to be unit vectors then
it is called an orthonormal basis.
Practice Problems
(2) Find the space of all vectors orthogonal to the column space of
B = \begin{bmatrix} 1 & 2 \\ 2 & 4 \\ 3 & 6 \end{bmatrix}.
Lecture 11: Projections onto Lines
Lecture 12: Projections and Least Squares
Chapter 4: Determinants
Lecture 13: Properties of the Determinant
Four of the main uses of determinants
Basic Properties:
1) det I = 1.
2) The sign (of determinant) is reversed by a row exchange.
3) The determinant is linear in each row separately.
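A small numerical illustration of properties 1) and 2) (a sketch, not from the slides):

```python
import numpy as np

print(np.linalg.det(np.eye(3)))   # 1.0  (property 1: det I = 1)

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
A_swapped = A[[1, 0], :]          # exchange the two rows
print(np.linalg.det(A), np.linalg.det(A_swapped))  # -2.0 and 2.0 (sign reversed)
```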
Properties of the Determinant