Applied Linear Algebra MTH 3003 29aug

The document outlines the course Applied Linear Algebra (MTH 3003) taught by Dr. S Panda, covering topics such as matrices, Gaussian elimination, and systems of linear equations. It includes definitions, matrix representations, row and column pictures, consistency of systems, and methods for solving equations. Additionally, it provides practice problems and discusses elementary row operations and matrix factorization.


Applied Linear Algebra (MTH 3003)

B. Tech. 2nd Year

Dr. S Panda

Department of Mathematics
ITER (SOA Deemed to be University)
Chapter 1: Matrices and Gaussian Elimination


Lecture 1: The Geometry of Linear Equations
Lecture 2: The Gaussian Elimination
Lecture 3: Matrix Factorization
Lecture 4: Gauss-Jordan Method & Matrix Inverse
Lecture 5: Transpose, Symmetric & Skew-symmetric Matrices

2
Lecture 1: The Geometry of Linear
Equations

3
System of linear equations

Definition (System of linear equations)


A system of m linear equations in n variables is given by

a11 x1 + a12 x2 + · · · + a1n xn = b1
a21 x1 + a22 x2 + · · · + a2n xn = b2
        . . .
am1 x1 + am2 x2 + · · · + amn xn = bm
For example,

2x + y − z = 2
x−y+z = 1

is a system of two equations in three variables.

4
Matrix Representation

A system of linear equations can be expressed as a matrix equation,

Ax = b

where A is the coefficient matrix, x is the unknown vector, and b


is the constant vector so that
     
    [ a11  a12  · · ·  a1n ]                 [ x1 ]                 [ b1 ]
A = [ a21  a22  · · ·  a2n ]             x = [ x2 ] ∈ R^n,      b = [ b2 ] ∈ R^m.
    [  ·    ·           ·  ]                 [  · ]                 [  · ]
    [ am1  am2  · · ·  amn ] (m × n)         [ xn ]                 [ bm ]

5
The Matrix Representation

The augmented matrix of the system is given by


 
        [ a11  a12  · · ·  a1n | b1 ]
[A|b] = [ a21  a22  · · ·  a2n | b2 ]
        [  ·    ·           ·  |  · ]
        [ am1  am2  · · ·  amn | bm ]

For instance, in our previous example


 
    [ 2   1  −1 ]        [ x ]        [ 2 ]               [ 2   1  −1 | 2 ]
A = [ 1  −1   1 ],   x = [ y ],   b = [ 1 ]   =>  [A|b] = [ 1  −1   1 | 1 ]
                         [ z ]

6
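As a sanity check on the representation above, the example system can be set up numerically. This is an illustration, not part of the notes; it assumes NumPy is available:

```python
import numpy as np

# Example system: 2x + y - z = 2, x - y + z = 1
A = np.array([[2.0, 1.0, -1.0],
              [1.0, -1.0, 1.0]])
b = np.array([2.0, 1.0])

# Augmented matrix [A|b]
Ab = np.hstack([A, b.reshape(-1, 1)])

# Any x satisfying Ax = b solves the system; x = (1, 1, 1) is one such vector
x = np.array([1.0, 1.0, 1.0])
print(A @ x)  # [2. 1.]  -- equals b
```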
The Matrix Representation

Write down the following systems in Ax = b form:

x + 2y = 3 x + 2y = 3 x + 2y = 3
4x + 5y = 6 4x + 8y = 6 4x + 8y = 12

Also write down their corresponding augmented matrices.

7
Row Picture

Each row of the augmented matrix corresponds to an equation from the


system. Thus the row picture corresponds to the graphs of each (lin-
ear) equation in a given system. Therefore, a solution to the system
is basically when there is a common point of intersection among
all the graphs of the system.

Draw the row pictures of the systems given in the previous exercise.

For the system

x + 2y = 3 . . . (1)
4x + 5y = 6 . . . (2)

each equation is a line: line (1) passes through A1(3, 0) and B1(0, 1.5), and line (2) passes through A2(1.5, 0) and B2(0, 1.2). The two lines intersect at the solution P(−1, 2).

[Figure: the two lines and their intersection point P(−1, 2).]

8
Column Picture

If aj denotes the j-th column of the coefficient matrix A, then the


system can be viewed as a single vector equation
x1 [a1 ] + · · · + xn [an ] = [b] ,
where each aj and b are vectors in Rm . The graph of this vector equa-
tion is the column picture of the system.
Consider the system

x + 2y = 3
4x + 5y = 6

The column picture representation is of the form


x [ 1 ] + y [ 2 ] = [ 3 ]
  [ 4 ]     [ 5 ]   [ 6 ]

9
Column Picture

x [ 1 ] + y [ 2 ] = [ 3 ]
  [ 4 ]     [ 5 ]   [ 6 ]

[Figure: the column vectors (1, 4) and (2, 5) together with b = (3, 6); the combination −1·(1, 4) + 2·(2, 5) = (3, 6) solves the system.]

10
Consistency and Solution of a system

A system is said to be consistent if it has a solution (not necessarily


unique). However, if a system has no solution, then it is said to be
inconsistent.

11
Practice Problems

(1) Solve by sketching row picture for the system

x + y = 2, 2x − 2y = 4.

(2) Determine the solution(s) (if any) of the following systems by


sketching their row pictures.
  


(a) x + 2y = 3,   x − y = −3,   2x + y = 0
(b) x + 2y = 3,   x − y = 0,    2x + y = 6
(c) x + 2y = 3,   x − y = −3,   2x + 4y = 4

(3) With reference to Q.(2), write down the augmented matrix for
each of the above systems.

12
Lecture 2: The Gaussian Elimination

13
Elementary Row Operations & Elementary Matrices

There are three types of elementary row operations:


(1) Row exchange (Rij ) : i-th and j-th row are interchanged.
(2) Scalar multiplication (Ri ← cRi (c ̸= 0)) : i-th row is multiplied
by a non-zero constant c.
(3) Row modification using other row (Ri ← Ri + cRj ) : i-th row is
added with c times j-th row.
Elementary matrices
Each elementary row operation on a matrix A can be represented by
matrix multiplication from left of A. For instance, for a 3 × 3 matrix
A, the matrices for the above three operations are as follows:
     
(1) E12 = [ 0 1 0 ]     (2) cE2 = [ 1 0 0 ]     (3) E2+3c = [ 1 0 0 ]
          [ 1 0 0 ]               [ 0 c 0 ]                 [ 0 1 c ]
          [ 0 0 1 ]               [ 0 0 1 ]                 [ 0 0 1 ]

14
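The claim that each row operation is a multiplication from the left by an elementary matrix can be checked numerically. A small NumPy sketch (illustrative only):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])

# (1) Row exchange R12
E12 = np.array([[0., 1., 0.], [1., 0., 0.], [0., 0., 1.]])
# (2) Scale row 2 by c = 5
cE2 = np.array([[1., 0., 0.], [0., 5., 0.], [0., 0., 1.]])
# (3) R2 <- R2 + 2*R3
E23 = np.array([[1., 0., 0.], [0., 1., 2.], [0., 0., 1.]])

print(E12 @ A)  # rows 1 and 2 swapped
print(cE2 @ A)  # row 2 multiplied by 5
print(E23 @ A)  # row 2 has 2*(row 3) added to it
```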
Properties

1. Elementary matrices are non-singular (i.e. determinant ̸= 0)


or invertible.
2. If a matrix B can be obtained from another matrix A by a
sequence of elementary row operations, then B is said to be row
equivalent to A.
3. B is row equivalent to A if and only if A is row equivalent to B.
4. If A and B are row-equivalent, then Ax = 0 and Bx = 0 have the
same solution. Such systems having the same solution are called
equivalent systems.
5. If the augmented matrix [A|b] and [C|d] are row-equivalent then
the systems Ax = b and Cx = d have the same set of solutions.

15
Pivots

Pivot
The first non-zero entry in each row is called a pivot.

For example
A = [ 1   2 ]      B = [ 0  0  1 ]
    [ 0  −1 ],         [ 1  1  0 ]

The pivots of A are 1 (row 1) and −1 (row 2); the pivots of B are the 1 in the third column (row 1) and the 1 in the first column (row 2).
The columns containing the pivots are called pivotal columns of the
matrix.

16
Row Echelon Form (REF)

Row Echelon Form


A matrix R is said to be in row echelon form if

(i) the pivots of R go from top-left to bottom-right,
(ii) the entries (if any) below a pivot are zero, and
(iii) any all-zero rows lie at the bottom.

Exercise
List all REF of 2 × 2 matrices.
Ans. The possible structures are as follows:

[ a1 ∗ ]    [ a1 ∗ ]    [ 0 a1 ]    [ 0 0 ]
[ 0 a2 ],   [ 0  0 ],   [ 0  0 ],   [ 0 0 ],

where the ai are any non-zero reals, and ∗ ∈ R.


17
REF of a matrix

Examples
Consider the following matrices:
 
A = [ 1 3 ]    B = [ 1 0 ]    C = [ 0 1 ]    D = [ 1 0 ]
    [ 0 2 ],       [ 2 1 ],       [ 1 0 ],       [ 0 0 ]
                                                 [ 2 1 ]

Here A is in REF but B, C, D are not in REF.

Properties
a) REF of a non-zero matrix is not unique.
b) The number of non-zero rows in the REF is called the rank of
the matrix.

18
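Property (b) can be checked numerically: NumPy's `matrix_rank` computes the same number (via the SVD rather than elimination). A sketch, assuming NumPy, using matrices from the examples above:

```python
import numpy as np

A = np.array([[1., 3.], [0., 2.]])            # already in REF, two nonzero rows
D = np.array([[1., 0.], [0., 0.], [2., 1.]])  # the 3 x 2 example above

print(np.linalg.matrix_rank(A))  # 2
print(np.linalg.matrix_rank(D))  # 2
```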
Reduced Row Echelon Form (RREF)

Reduced Row Echelon Form


A matrix R is said to be in reduced row echelon form (RREF) if

(i) R is in REF.
(ii) Pivots of R are all 1.
(iii) Entries above and below a pivot are all zero.

Exercise
List all RREF of 2 × 2 matrices.
Ans. The possible structures are as follows:

[ 1 0 ]    [ 1 ∗ ]    [ 0 1 ]    [ 0 0 ]
[ 0 1 ],   [ 0 0 ],   [ 0 0 ],   [ 0 0 ],

where ∗ ∈ R.
19
Test for Consistency

Consider a system of n linear equations in n variables given by


Ax = b. Then the given system has

(i) no solution if rank([A|b]) > rank(A),
(ii) a unique solution if rank([A|b]) = rank(A) = n (the order of A),
(iii) infinitely many solutions if rank([A|b]) = rank(A) < n.

20
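The rank test above can be turned into a small routine. A NumPy sketch; `classify` is a hypothetical helper, not part of the notes. It is applied to the three systems from the earlier exercise (x + 2y = 3 with 4x + 5y = 6, 4x + 8y = 6, and 4x + 8y = 12):

```python
import numpy as np

def classify(A, b):
    """Classify Ax = b for a square A by comparing rank([A|b]) with rank(A)."""
    n = A.shape[1]
    rA = np.linalg.matrix_rank(A)
    rAb = np.linalg.matrix_rank(np.hstack([A, b.reshape(-1, 1)]))
    if rAb > rA:
        return "no solution"
    return "unique solution" if rA == n else "infinitely many solutions"

A1 = np.array([[1.0, 2.0], [4.0, 5.0]])
A2 = np.array([[1.0, 2.0], [4.0, 8.0]])

print(classify(A1, np.array([3.0, 6.0])))   # unique solution
print(classify(A2, np.array([3.0, 6.0])))   # no solution
print(classify(A2, np.array([3.0, 12.0])))  # infinitely many solutions
```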
Elimination Method

Consider the system of equations given by


2x + y + z = 5
4x − 6y = −2
−2x + 7y + 2z = 9
GE Method: Consider the augmented matrix [A|b] and apply row
operations to obtain a REF. Finally use back substitution to solve the
system:
   
[A|b] = [  2   1   1 |  5 ]
        [  4  −6   0 | −2 ]
        [ −2   7   2 |  9 ]

R2 ← R2 − 2R1, R3 ← R3 + R1:

        [ 2   1   1 |   5 ]
        [ 0  −8  −2 | −12 ]
        [ 0   8   3 |  14 ]

R3 ← R3 + R2:

        [ 2   1   1 |   5 ]
        [ 0  −8  −2 | −12 ]
        [ 0   0   1 |   2 ]

Back substitution gives z = 2, y = 1, x = 1.
21
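The back-substitution result can be confirmed with NumPy's solver (an illustrative check, assuming NumPy):

```python
import numpy as np

A = np.array([[2.0, 1.0, 1.0],
              [4.0, -6.0, 0.0],
              [-2.0, 7.0, 2.0]])
b = np.array([5.0, -2.0, 9.0])

x = np.linalg.solve(A, b)
print(x)  # [1. 1. 2.]  ->  x = 1, y = 1, z = 2
```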
Practice Problems

(1) Determine b so that Gaussian elimination leads to a missing pivot in the system:
x + by = 0, x − 2y − z = 0, y + z = 0.
(2) Apply Gaussian elimination to solve:
2x + 3y = 0, 4x + 5y + z = 3, 2x − y − 3z = 5.
(3) Evaluate d for which the system has unique solution:
x + y + 2z = 1, x + 2y + 3z = 2, x + 4y + 2dz = 1.
(4) List all possible RREFs of real 2 × 3 matrices.
(5) Determine the ranks of
     
A = [ 1   2  −3 ]    B = [  1  −2   1 ]    C = [ 1   0  −1 ]
    [ 2  −4   2 ],       [  0   1  −2 ],       [ 0  −1   1 ]
    [ 1  −1   0 ]        [ −2   0   1 ]        [ 1  −1   0 ]

22
Lecture 3: Matrix Factorization

23
LU Factorization

The LU-factorization of a matrix A expresses A as a product of two matrices

A = LU,

where L is a lower triangular matrix and U is an upper triangular matrix.
Gaussian elimination is used to determine this factorization.
Example
Determine the LU decomposition of the matrix

A = [ 1 2 0 ]
    [ 0 1 0 ]
    [ 3 6 1 ]

Soln. Applying the elementary row operation R3 ← R3 − 3R1:

[ 1 2 0 ]      [ 1 2 0 ]              [ 1 0 0 ]
[ 0 1 0 ]  →   [ 0 1 0 ] = U,     L = [ 0 1 0 ]
[ 3 6 1 ]      [ 0 0 1 ]              [ 3 0 1 ]
24
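The factors found above can be verified by multiplying them back together. A NumPy check (illustrative only):

```python
import numpy as np

A = np.array([[1., 2., 0.],
              [0., 1., 0.],
              [3., 6., 1.]])
L = np.array([[1., 0., 0.],
              [0., 1., 0.],
              [3., 0., 1.]])
U = np.array([[1., 2., 0.],
              [0., 1., 0.],
              [0., 0., 1.]])

print(np.allclose(L @ U, A))  # True
```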
Determine the LU factorization
 
(a) A = [ 1 3 ]      (b) B = [ 2 1 0 ]
        [ 5 7 ]              [ 1 2 1 ]
                             [ 0 1 2 ]

(a) Here, applying R2 ← R2 − 5R1,

A = [ 1 3 ]  →  [ 1  3 ] = U (a REF of A),   so  L = [ 1 0 ]
    [ 5 7 ]     [ 0 −8 ]                             [ 5 1 ]

(b) Now, applying R2 ← R2 − (1/2)R1 and then R3 ← R3 − (2/3)R2,

B = [ 2 1 0 ]  →  [ 2  1   0 ]  →  [ 2  1    0  ]
    [ 1 2 1 ]     [ 0 3/2  1 ]     [ 0 3/2   1  ] = U
    [ 0 1 2 ]     [ 0  1   2 ]     [ 0  0   4/3 ]

Therefore

L = [  1   0  0 ] [ 1  0   0 ]   [  1   0   0 ]
    [ 1/2  1  0 ] [ 0  1   0 ] = [ 1/2  1   0 ]
    [  0   0  1 ] [ 0 2/3  1 ]   [  0  2/3  1 ]

25
LDU Factorization (A = LU = LDU ′ )

    
[ 2 1 0 ]   [  1   0   0 ] [ 2  1    0  ]
[ 1 2 1 ] = [ 1/2  1   0 ] [ 0 3/2   1  ]
[ 0 1 2 ]   [  0  2/3  1 ] [ 0  0   4/3 ]
                 L              U

            [  1   0   0 ] [ 2  0    0  ] [ 1 1/2   0  ]
          = [ 1/2  1   0 ] [ 0 3/2   0  ] [ 0  1   2/3 ]
            [  0  2/3  1 ] [ 0  0   4/3 ] [ 0  0    1  ]
                 L              D              U′

Similarly,

A = [ 1 3 ] = [ 1 0 ] [ 1  3 ] = [ 1 0 ] [ 1  0 ] [ 1 3 ]
    [ 5 7 ]   [ 5 1 ] [ 0 −8 ]   [ 5 1 ] [ 0 −8 ] [ 0 1 ]
                 L       U          L       D        U′

26
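Splitting U into D and U′ is mechanical: D holds the pivots and U′ = D⁻¹U has a unit diagonal. A NumPy sketch for the 3 × 3 example above (illustrative; assumes NumPy):

```python
import numpy as np

B = np.array([[2., 1., 0.],
              [1., 2., 1.],
              [0., 1., 2.]])
L = np.array([[1., 0., 0.],
              [0.5, 1., 0.],
              [0., 2. / 3., 1.]])
U = np.array([[2., 1., 0.],
              [0., 1.5, 1.],
              [0., 0., 4. / 3.]])

D = np.diag(np.diag(U))          # pivots on the diagonal
Uprime = np.linalg.solve(D, U)   # D^{-1} U, which has a unit diagonal

print(np.allclose(L @ D @ Uprime, B))     # True
print(np.allclose(np.diag(Uprime), 1.0))  # True
```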
Practice Problems

(1) Find LDU factorization of


   
A = [ 1 1 0 ]      B = [ 1 0 2 ]
    [ 1 2 1 ],         [ 2 1 0 ]
    [ 0 1 2 ]          [ 0 0 2 ]

(2) Write down the LU decomposition of


 
C = [ 1 1 0 ]
    [ 1 2 1 ]
    [ 0 1 2 ]

27
Lecture 4: Gauss-Jordan Method &
Matrix Inverse

28
Inverse of a Matrix

Definition
The inverse of A is a matrix B such that BA = I and AB = I. There
is at most one such B, and it is denoted by A−1 .

AA−1 = A−1 A = I.

Properties of Inverse
1) Existence: The inverse exists if and only if elimination
produces n pivots (row exchanges allowed).
2) Uniqueness: The matrix A cannot have two different inverses: if BA = I and AC = I, then B = C.
3) Unique solution: If A is invertible, the one and only solution to
Ax = b is x = A−1 b.
4) Suppose there is a nonzero vector x such that Ax = 0. Then A
cannot have an inverse.
29
Gauss-Jordan Method

Suppose that A is an n × n invertible matrix and I is the identity


matrix of order n. Now consider the augmented matrix [A|I]. Then

RREF([A|I]) = [ I | A^{-1} ].

Use Gauss-Jordan method to find the inverse of

A = [ 1 0 1 ]
    [ 1 1 0 ]
    [ 0 1 1 ]

30
Finding Inverse of a Matrix

Example

   
[A|I] = [ 1 0 1 | 1 0 0 ]
        [ 1 1 0 | 0 1 0 ]
        [ 0 1 1 | 0 0 1 ]

R2 ← R2 − R1:   [ 1 0  1 |  1 0 0 ]
                [ 0 1 −1 | −1 1 0 ]
                [ 0 1  1 |  0 0 1 ]

R3 ← R3 − R2:   [ 1 0  1 |  1  0 0 ]
                [ 0 1 −1 | −1  1 0 ]
                [ 0 0  2 |  1 −1 1 ]

R1 ← R1 − (1/2)R3, R2 ← R2 + (1/2)R3:

                [ 1 0 0 |  1/2  1/2 −1/2 ]
                [ 0 1 0 | −1/2  1/2  1/2 ]
                [ 0 0 2 |   1   −1    1  ]

R3 ← (1/2)R3:   [ 1 0 0 |  1/2  1/2 −1/2 ]
                [ 0 1 0 | −1/2  1/2  1/2 ]  = [ I | A^{-1} ]
                [ 0 0 1 |  1/2 −1/2  1/2 ]

31
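The inverse computed by Gauss-Jordan can be cross-checked against `numpy.linalg.inv` (an illustrative check, not part of the notes):

```python
import numpy as np

A = np.array([[1., 0., 1.],
              [1., 1., 0.],
              [0., 1., 1.]])

Ainv = np.linalg.inv(A)
expected = np.array([[0.5, 0.5, -0.5],
                     [-0.5, 0.5, 0.5],
                     [0.5, -0.5, 0.5]])

print(np.allclose(Ainv, expected))       # True
print(np.allclose(A @ Ainv, np.eye(3)))  # True
```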
Practice Problems

(1) Use Gauss-Jordan method to find inverse of the following


matrices:
     
1 0 0 1 0 1 0 0 1
A = 1 1 1 , B = 1 1 0 , C = 1 0 0
     
0 0 1 0 0 1 0 1 0

32
Lecture 5: Transpose, Symmetric &
Skew-symmetric Matrices

33
Transpose

The transpose of A is denoted by A^T and is defined by

(A^T)_{ij} = (A)_{ji}.

Note that
The rows (respectively columns) of A becomes the columns
(respectively rows) of AT .

For example,
 
A = [ 1 2 ]  =>  A^T = [ 1 3 ]      B = [ 1 2 0 ]  =>  B^T = [ 1 0 ]
    [ 3 4 ]            [ 2 4 ],         [ 0 2 1 ]            [ 2 2 ]
                                                             [ 0 1 ]

In other words transpose flips a matrix about its diagonal.

34
Properties of Transpose

(1) If A is an m × n matrix, then A^T is an n × m matrix.
(2) (A^T)^T = A.
(3) (A + B)^T = A^T + B^T.
(4) (AB)^T = B^T A^T.
(5) (A^T)^{-1} = (A^{-1})^T, when A is invertible.

35
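Properties (4) and (5) are easy to spot-check numerically. A NumPy sketch with a fixed random seed (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((3, 2))

# (4): the transpose of a product reverses the order of the factors
print(np.allclose((A @ B).T, B.T @ A.T))  # True

# (5): transposing commutes with inverting
M = np.array([[1., 3.], [2., 4.]])
print(np.allclose(np.linalg.inv(M).T, np.linalg.inv(M.T)))  # True
```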
Symmetric Matrices

Symmetric Matrices
A matrix A is called symmetric if

A^T = A.

Observe that

A = [ 1 2 ]      B = [ 1 0 1 ]
    [ 2 4 ],         [ 0 2 4 ]
                     [ 1 4 1 ]

are examples of symmetric matrices.
Results
a) Every symmetric matrix is a square matrix.
b) For any square matrix A, A + A^T is symmetric.
c) AA^T and A^T A are always symmetric for any matrix A (A need not be square).

36
Skew-symmetric Matrices

Skew-symmetric Matrices
A matrix A is called skew-symmetric if

A^T = −A.

Observe that

A = [  0  2 ]     B = [  0 −2  1 ]
    [ −2  0 ],        [  2  0  4 ]
                      [ −1 −4  0 ]

are examples of skew-symmetric matrices.


Results
a) A skew-symmetric matrix must have all zeros on its diagonal.
b) For any square matrix A, A − A^T is skew-symmetric.

37
Properties

Theorem
Every matrix can be expressed as the sum of a symmetric and a
skew-symmetric matrix.
Note that

A = (1/2)(A + A^T) + (1/2)(A − A^T),

where the first term is symmetric and the second is skew-symmetric.

Note that
if a symmetric matrix A (i.e., A^T = A) can be factored as A = LDU without row exchanges, then

L = U^T.

38
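The symmetric/skew-symmetric splitting in the theorem is directly computable. A NumPy sketch; the matrix A here is an arbitrary example, not one from the notes:

```python
import numpy as np

A = np.array([[1., 7., 3.],
              [0., 2., -5.],
              [4., 1., 6.]])

S = (A + A.T) / 2  # symmetric part
K = (A - A.T) / 2  # skew-symmetric part

print(np.allclose(S, S.T))    # True: S is symmetric
print(np.allclose(K, -K.T))   # True: K is skew-symmetric
print(np.allclose(S + K, A))  # True: the parts sum back to A
```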
Chapter 2: Vector Spaces


Lecture 6: Vector Spaces and Subspaces
Lecture 7: Solving Ax = 0 and Ax = b
Lecture 8: Linear Independence, Basis & Dimension
Lecture 9: Four Fundamental Subspaces of a Matrix

39
Lecture 6: Vector Spaces and Subspaces

40
Vector Space

A non-empty set V is called a vector space over R (the real numbers)


if the following properties are satisfied:
(1) v, w ∈ V =⇒ v + w ∈ V .
(2) u + (v + w) = (u + v) + w
(3) There exists a unique 0 ∈ V such that 0 + v = v + 0 = v for all
v ∈V.
(4) For each v ∈ V , there exists a unique (−v) ∈ V such that
v + (−v) = (−v) + v = 0.
(5) v + w = w + v for all v, w ∈ V .
(6) α ∈ R, v ∈ V =⇒ αv ∈ V .
(7) 1v = v for all v ∈ V .
(8) α, β ∈ R, v ∈ V =⇒ α(βv) = (αβ)v.
(9) α, β ∈ R, v ∈ V =⇒ (α + β)v = αv + βv.
(10) α ∈ R, v, w ∈ V =⇒ α(v + w) = αv + αw.
41
Vector Space - Examples

1) R, R2 , R3 , . . . , Rn are vector spaces over R. . . . . . Euclidean spaces


2) Mn (R) i.e. the space of all n × n matrices with real entries is a
vector space over R.
3) C([0, 1]), i.e. the space of all continuous functions f : [0, 1] → R, is a vector space over R.
4) R is a vector space over Q (rational numbers). However Q is not
a vector space over R.

Elements in a vector space are called vectors.


Vector space is always defined over a field of scalars. In our above
examples, this field of scalars is R.

A field of numbers is a set of numbers where addition, subtraction,


multiplication and division are well defined operations.

42
Subspace

A subspace of a vector space is a nonempty subset that satisfies the


requirements for a vector space:

Linear combinations stay in the subspace.

Test for subspace:


Suppose V is a vector space and W ⊂ V . Then W is said to be a
subspace of V if

(1) W ̸= ∅.
(2) u, v ∈ W, =⇒ u + v ∈ W .
(3) u ∈ W, a ∈ R =⇒ au ∈ W .

43
Example

Problem
Examine whether the given subset of R3 is a subspace?

V = { (b1, b2, b3) ∈ R^3 : b1 = 0 }

Solution. Note that

(i) (0, 0, 0) ∈ V by the description of V.
(ii) Suppose u = (0, b2, b3), v = (0, c2, c3) ∈ V. Then
        u + v = (0, b2, b3) + (0, c2, c3) = (0, b2 + c2, b3 + c3) ∈ V.
(iii) Also, for any k ∈ R,
        ku = k(0, b2, b3) = (0, kb2, kb3) ∈ V.

From (i), (ii), and (iii) it follows that V is a subspace of R^3.
44
Linear Span

Linear Span
The linear span or the span of a set of vectors S = {v1 , . . . , vn } is
defined as

span S = {c1 v1 + · · · + cn vn : c1 , . . . , cn ∈ R}

For example,
The span of T = {(1, 1), (1, −1)} is

span T = { x(1, 1) + y(1, −1) : x, y ∈ R } = { (x + y, x − y) : x, y ∈ R } = R^2.

Note that, for any element (a, b) ∈ R^2, we can take x = (a + b)/2 and y = (a − b)/2, so that (a, b) = (x + y, x − y). Thus T spans R^2.
45
Linear Span (continued..)

Result
Suppose that V is a vector space and S ⊂ V . Then span S is a
subspace of V .

Spanning set
Suppose V is a vector space and T ⊂ V such that

span T = V.

Then T is called a spanning set of V .

For example,
T = {(1, 1), (1, −1)} is a spanning set of R2 .
Similarly, B = {(1, 0), (0, 1)} is another spanning set of R2 .
Note that, spanning set of a vector space is not unique in general.

46
Lecture 7: Solving Ax = 0 and Ax = b

47
Solving Ax = 0 when A is not invertible

Note that our previous discussion describes the method to solve a system Ax = b when A is an invertible matrix. But if A is not a square matrix, or if the echelon form U = REF(A) misses some pivots, then it is not immediately clear how to determine a solution.
For an invertible matrix A, the nullspace N (A) contains only x = 0
(multiply Ax = 0 by A−1 ). The column space is the whole space
(Ax = b has a solution for every b).
Question.
How to find the solution when the nullspace contains more than the
zero vector and/or the column space contains less than all vectors?

48
Solving Ax = 0 when N (A) ̸= {0}

Results
1) If xp is a particular solution of Ax = b, and xn is any solution of
Ax = 0, then xp + xn is also a solution of Ax = b. In other
words, if xp is a particular solution of Ax = b then

Ax̃ = b =⇒ x̃ ∈ {xp + xn : xn ∈ N (A)}.

2) If x1 , x2 are two solutions of Ax = b, then x1 − x2 ∈ N (A).

Ax = b has a solution if and only if b ∈ C(A). Therefore, if b ∉ C(A),
then Ax = b does not have a solution.
Question.
What are the conditions on b that make Ax = b solvable?

49
Solving Ax = 0 when A is not invertible

Example
Solve Ax = 0, where

A = [  1   3  3  2 ]        [ x1 ]
    [  2   6  9  7 ],   x = [ x2 ]
    [ −1  −3  3  4 ]        [ x3 ]
                            [ x4 ]

Solution. Applying R2 ← R2 − 2R1 and R3 ← R3 + R1, then R3 ← R3 − 2R2:

[  1   3  3  2 ]     [ 1  3  3  2 ]     [ 1  3  3  2 ]
[  2   6  9  7 ]  →  [ 0  0  3  3 ]  →  [ 0  0  3  3 ] = U
[ −1  −3  3  4 ]     [ 0  0  6  6 ]     [ 0  0  0  0 ]

The second and fourth columns contain no pivot. We call the corresponding variables x2 and x4 free variables; x1 and x3 are called basic variables. Set parameters for the free variables: x2 = s, x4 = t. Then, from the reduced system, x1 + 3s + 3x3 + 2t = 0 and 3x3 + 3t = 0, i.e.,

x3 = −t,   x1 = t − 3s,

so the solutions are

x = s(−3, 1, 0, 0) + t(1, 0, −1, 1),   s, t ∈ R.

50
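The two special solutions found above (one per free variable) can be checked by multiplying by A. A NumPy verification (illustrative only):

```python
import numpy as np

A = np.array([[1., 3., 3., 2.],
              [2., 6., 9., 7.],
              [-1., -3., 3., 4.]])

# Special solutions: s-direction (x2 = 1, x4 = 0) and t-direction (x2 = 0, x4 = 1)
s_dir = np.array([-3., 1., 0., 0.])
t_dir = np.array([1., 0., -1., 1.])

print(np.allclose(A @ s_dir, 0))  # True
print(np.allclose(A @ t_dir, 0))  # True
```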
Solving Ax = b when A is not invertible

Example
Solve Ax = b, where

A = [ 1 3 3 ]        [ 1 ]        [ x1 ]
    [ 2 6 9 ],   b = [ 2 ],   x = [ x2 ]
                                  [ x3 ]

Solution. Here we start with the augmented matrix [A|b]. Applying R2 ← R2 − 2R1:

[ 1 3 3 | 1 ]  →  [ 1 3 3 | 1 ]
[ 2 6 9 | 2 ]     [ 0 0 3 | 0 ] = U.

Here x2 is the free variable. We set x2 = a. Then the reduced system gives 3x3 = 0, so x3 = 0, and x1 + 3a + 3x3 = 1, so x1 = 1 − 3a. Hence

x = a(−3, 1, 0) + (1, 0, 0),   a ∈ R,

where xn = a(−3, 1, 0) is the nullspace part and xp = (1, 0, 0) is a particular solution.
51
Lecture 8: Linear Independence, Basis &
Dimension

52
Linear Dependence & Independence

Definition
A set S = {v1, . . . , vn} is said to be linearly independent if and only if

c1 v1 + · · · + cn vn = 0  =>  c1 = · · · = cn = 0.

If a set of vectors S is not linearly independent, we say that S is linearly dependent. In other words, the vectors v1, . . . , vn are linearly dependent if there exist scalars c1, . . . , cn, not all zero, such that c1 v1 + · · · + cn vn = 0.

The above definition provides a method to test for linear independence: finding c1, . . . , cn is the same as solving the homogeneous system

[ v1 · · · vn ] (c1, . . . , cn)^T = 0:

if the REF has no zero row, then ci = 0 for all i (linearly independent);
if the REF has a zero row, then some ci ≠ 0 (linearly dependent).

53
Linear Independence

Example
Examine whether the vectors {(1, 2, 1), (2, 4, 3), (3, 6, 4)} are linearly
independent?
 
Solution. Consider the matrix whose rows are the given vectors,

A = [ 1 2 1 ]
    [ 2 4 3 ]
    [ 3 6 4 ]

Applying R2 ← R2 − 2R1 and R3 ← R3 − 3R1, then R3 ← R3 − R2:

[ 1 2 1 ]     [ 1 2 1 ]     [ 1 2 1 ]
[ 2 4 3 ]  →  [ 0 0 1 ]  →  [ 0 0 1 ]
[ 3 6 4 ]     [ 0 0 1 ]     [ 0 0 0 ]

Thus REF(A) has a zero row. Therefore, the given vectors are linearly dependent.

54
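The zero-row test corresponds to a rank computation: the vectors are independent exactly when the rank equals the number of vectors. A NumPy sketch for the example above (illustrative only):

```python
import numpy as np

V = np.array([[1., 2., 1.],
              [2., 4., 3.],
              [3., 6., 4.]])  # rows are the given vectors

r = np.linalg.matrix_rank(V)
print(r)  # 2, which is less than the 3 rows -> linearly dependent
```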
Basis & Dimension

Basis
A linearly independent spanning set B of a vector space V is called a
basis for the vector space V .
Properties:

(1) A basis is also defined as a minimal spanning set.


(2) For a given vector space V , a basis is not unique. However
the number of elements in any basis of V is same. This constant
number is called the dimension of the vector space.
(3) The vector space of all n × n symmetric matrices has dimension n(n + 1)/2, and the space of all n × n skew-symmetric matrices has dimension n(n − 1)/2.
(4) The Euclidean space R^n has the standard basis {e1, . . . , en}, where ek denotes the k-th column of the identity matrix. Thus dim R^n = n.
55
Practice Problems

Problem
Examine whether the given set forms a basis of R3 ?

{(1, 2, 2), (−1, 2, 1), (0, 8, 6)}

Solution. Applying R2 ← R2 + R1, then R3 ← R3 − 2R2:

A = [  1 2 2 ]     [ 1 2 2 ]     [ 1 2 2 ]
    [ −1 2 1 ]  →  [ 0 4 3 ]  →  [ 0 4 3 ]
    [  0 8 6 ]     [ 0 8 6 ]     [ 0 0 0 ]

Since the REF has a zero row, the vectors are linearly dependent and
therefore, they cannot form a basis.

56
Exercises

Problem
Find a basis for the plane x + y + z = 0 in R3 and a basis for the
intersection of this plane with the yz-plane.
Solution. The points on the plane satisfy z = −x − y and therefore can be written as

{ (x, y, −x − y) : x, y ∈ R } = { x(1, 0, −1) + y(0, 1, −1) : x, y ∈ R }.

Therefore a basis for the plane is given by

{(1, 0, −1), (0, 1, −1)}.

Any point on the yz-plane is of the form (0, y, z) with y, z ∈ R. Therefore a basis for the line of intersection (i.e., x = 0, y + z = 0) is {(0, 1, −1)}.

57
Exercise

Problem
Let A = [a1 a2 · · · an] be n × n, where ai ∈ R^n for each i, and suppose there exist c1, . . . , cn ∈ R, not all zero, such that

c1 a1 + · · · + cn an = 0,   and   a1 + · · · + an = b.

Then how many solutions does Ax = b have? Justify.

Solution. Since there exist c1, . . . , cn ∈ R, not all zero, with c1 a1 + · · · + cn an = 0, the columns a1, . . . , an of A are linearly dependent. Thus Ax = b cannot have a unique solution.
Now, a1 + · · · + an = b implies that (1, 1, . . . , 1) is a solution of Ax = b, so the system has at least one solution.
From these two conclusions, it follows that Ax = b has infinitely many solutions.
58
Lecture 9: Four Fundamental Subspaces
of a Matrix

59
Fundamental Subspaces

There are four fundamental subspaces of a matrix A:

(1) Row Space R(A): the span of all rows of A.
(2) Column Space C(A): the span of all columns of A.
(3) Null Space N(A): the set of all vectors x such that Ax = 0.
(4) Left-null Space N(A^T): the set of all vectors y such that y^T A = 0.

Note that
the row space, column space, null space, and the left-null space of A
and AT are related by the following relations:

C(A) = R(AT ) and R(A) = C(AT )

and,
LN (A) = N (AT ) and LN (AT ) = N (A).

60
Dimensions of the Fundamental Subspaces

Result
If A is a square matrix of order n and rank(A) = r, then

dim(R(A)) = dim(C(A)) = r,
T
dim(N (A)) = dim(N (A )) = n − r.

However if A is an m × n rectangular matrix, then


rank(A) = r ≤ min{m, n} and

dim(R(A)) = dim(C(A)) = r,
T
dim(N (A)) = n − r, dim(N (A )) = m − r.

61
Practice Problem

Example
Determine the dimensions of the four fundamental subspaces of

A = [ 1 2 1 ]
    [ 2 4 3 ]
    [ 3 6 4 ]

Solution. Applying R2 ← R2 − 2R1 and R3 ← R3 − 3R1, then R3 ← R3 − R2:

A = [ 1 2 1 ]     [ 1 2 1 ]     [ 1 2 1 ]
    [ 2 4 3 ]  →  [ 0 0 1 ]  →  [ 0 0 1 ]
    [ 3 6 4 ]     [ 0 0 1 ]     [ 0 0 0 ]

Thus rank(A) = 2. Therefore,

dim(R(A)) = dim(C(A)) = 2,   dim(N(A)) = dim(N(A^T)) = 1.

62
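The dimension counts follow from a single rank computation. A NumPy sketch for the same matrix (illustrative only):

```python
import numpy as np

A = np.array([[1., 2., 1.],
              [2., 4., 3.],
              [3., 6., 4.]])

m, n = A.shape
r = np.linalg.matrix_rank(A)

print(r)      # 2  -> dim R(A) = dim C(A)
print(n - r)  # 1  -> dim N(A)
print(m - r)  # 1  -> dim N(A^T)
```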
Row Space
 
Suppose A is any m × n matrix with entries aij, and denote its rows, viewed as vectors in R^n, by

r1^T = [ a11 a12 · · · a1n ],   . . . ,   rm^T = [ am1 am2 · · · amn ].

Then the row space of A is given by

R(A) = span{r1, . . . , rm}.

In other words, the row space contains all linear combinations of the rows of A.
63
Properties of Row Space

Properties
If A is an m × n matrix, then R(A) is a subspace of Rn and

dim(R(A)) = number of non-zero rows in REF (A) = rank(A).

Example
Determine the row space of

A = [ 1 1 0 2 ]
    [ 1 0 1 2 ]
    [ 0 1 2 1 ]

Also determine the dimensions of the four fundamental subspaces.

Solution. R(A) = span{(1, 1, 0, 2), (1, 0, 1, 2), (0, 1, 2, 1)}.

Applying R2 ↔ R3, then R3 ← R3 − R1, then R3 ← R3 + R2:

[ 1 1 0 2 ]     [ 1 1 0 2 ]     [ 1  1 0 2 ]     [ 1 1 0 2 ]
[ 1 0 1 2 ]  →  [ 0 1 2 1 ]  →  [ 0  1 2 1 ]  →  [ 0 1 2 1 ]
[ 0 1 2 1 ]     [ 1 0 1 2 ]     [ 0 −1 1 0 ]     [ 0 0 3 1 ]

Thus rank(A) = 3, so

dim R(A) = dim C(A) = 3,   dim N(A) = n − r = 4 − 3 = 1,   dim N(A^T) = m − r = 3 − 3 = 0.

64
Column Space

Column Space
The column space of an m × n matrix A contains all linear combinations of the columns of A. It is a subspace of R^m.

Suppose the columns of A, viewed as vectors in R^m, are

a1 = (a11, . . . , am1),   a2 = (a12, . . . , am2),   . . . ,   an = (a1n, . . . , amn).

Then the column space of A is given by

col(A) = span{a1, a2, . . . , an}.
65
Column Space (continued..)

Theorem
The system Ax = b is solvable if and only if the vector b can be
expressed as a combination of the columns of A. Then b is in the
column space.
Exercise
Determine the column space of

A = [ 1 0 2 ]
    [ 0 2 1 ]
    [ 1 2 3 ]

and find a basis for C(A).

Solution. Here the column space is C(A) = span{(1, 0, 1), (0, 2, 2), (2, 1, 3)}. For a basis, we test these vectors (written as rows) for linear independence, applying R3 ← R3 − 2R1 and then R3 ← R3 − (1/2)R2:

[ 1 0 1 ]     [ 1 0 1 ]     [ 1 0 1 ]
[ 0 2 2 ]  →  [ 0 2 2 ]  →  [ 0 2 2 ]   =>  rank(A) = 2.
[ 2 1 3 ]     [ 0 1 1 ]     [ 0 0 0 ]

Thus {(1, 0, 1), (0, 2, 2)} is a basis for C(A).

66
Null Space

The null space of Am×n is a subspace of Rn given by

N (A) = {x ∈ Rn : Ax = 0}.

Note that
(1) nullity(A) = dim N (A) = n − r, where r = rank(A). In other
words,
rank(A) + nullity(A) = n.
This is famously known as the rank-nullity theorem.
(2) N(A) is orthogonal to R(A), i.e., each vector in R(A) is orthogonal to every vector in N(A).

67
Left-null Space

The left-null space of Am×n is a subspace of Rm given by

LN(A) = N(A^T) = { y ∈ R^m : y^T A = 0 }.

Note that
(1) dim LN (A) = dim N (AT ) = m − r, where
r = rank(A) = rank(AT ). In other words,

rank(AT ) + nullity(AT ) = m.

(2) N(A^T) is orthogonal to C(A), i.e., each vector in C(A) is orthogonal to every vector in N(A^T).

68
Chapter 3: Orthogonality
Lecture 10: Orthogonal Vectors and Subspaces
Lecture 11: Projections onto Lines
Lecture 12: Projections and Least Squares

Chapter 4: Determinants
Lecture 13: Properties of the Determinant

69
Lecture 10: Orthogonal Vectors and
Subspaces

70
Length of a Vector

Length
The length ∥x∥ in R^n is the positive square root of x^T x. In other words, the length of a vector x = (x1, . . . , xn) ∈ R^n is given by

∥x∥ = √(x^T x) = √(x1² + · · · + xn²).

For example,
The length of the vector u = (1, −2, 3) is

∥u∥ = √(1² + (−2)² + 3²) = √14.
71
Inner Product of Two Vectors

Inner Product
The inner product of any two vectors x = (x1, . . . , xn) and y = (y1, . . . , yn) in R^n is defined by

x^T y = x1 y1 + x2 y2 + · · · + xn yn.

For example,
The inner product of the two vectors u = (1, −2, 1) and v = (1, 1, 1) is

u^T v = (1)(1) + (−2)(1) + (1)(1) = 1 − 2 + 1 = 0.
72
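These formulas map directly onto NumPy's `@` operator and `norm` function. An illustrative sketch, assuming NumPy:

```python
import numpy as np

u = np.array([1., -2., 3.])
print(np.linalg.norm(u))  # sqrt(14), about 3.7417
print(np.sqrt(u @ u))     # same value, computed as sqrt(u^T u)

v = np.array([1., -2., 1.])
w = np.array([1., 1., 1.])
print(v @ w)  # 0.0 -> v and w are orthogonal
```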
Properties of Inner Product

Properties
(1) x^T y = y^T x, for all x, y ∈ R^n.
(2) x^T x = ∥x∥², for all x ∈ R^n.
(3) This product is also called the scalar product or the dot product of two vectors.
(4) x^T y = ∥x∥∥y∥ cos θ, where θ is the angle between the vectors x and y.
(5) In particular, the inner product x^T y is zero if and only if x and y are orthogonal vectors. If x^T y > 0, their angle is less than 90°. If x^T y < 0, their angle is greater than 90°.

73
Orthogonal Vectors

Orthogonality
Any two non-zero vectors x = (x1, . . . , xn) and y = (y1, . . . , yn) in R^n are said to be orthogonal if

x^T y = 0.

Exercise
Suppose A is a matrix of order m × n. Then show that each vector in
R(A) is orthogonal to all vectors of N (A).
Hints: Let ak^T denote the k-th row of A. Now, x ∈ N(A) implies

Ax = 0  =>  a1^T x = 0, . . . , am^T x = 0  =>  x ⊥ ak, for all k.

74
Properties of Orthogonal Vectors

Theorem
If nonzero vectors v1 , . . . , vk are mutually orthogonal (every vector
is perpendicular to every other), then those vectors are linearly inde-
pendent.
Remark
If nonzero vectors v1 , . . . , vk are linearly independent, then there
exists mutually orthogonal vectors u1 , . . . , uk such that

span{v1 , . . . , vk } = span{u1 , . . . , uk }

Orthogonal Basis
If B is a basis for a vector space V where the basis vectors are
mutually orthogonal, then B is said to be an orthogonal basis of
V . If further all the basis vectors are chosen to be unit vectors then
it is called an orthonormal basis.
75
Practice Problems

(1) Determine a vector x orthogonal to the row space of A and a


vector y orthogonal to the column space of A, where
 
A = [ 1 2 1 ]
    [ 2 4 3 ]
    [ 3 6 4 ]

(2) Find the space of all vectors orthogonal to the column space of

B = [ 1 2 ]
    [ 2 4 ]
    [ 3 6 ]

76
Lecture 11: Projections onto Lines

77
Lecture 12: Projections and Least
Squares

79
Chapter 4: Determinants
Lecture 13: Properties of the
Determinant

81
Four of the main uses of determinants

1) The test for invertibility:


If the determinant of A is zero, then A is singular. If det A ̸= 0,
then A is invertible (and A−1 involves 1/ det A).
2) Volume of a box in Rn :
The determinant of A equals the volume of a box in n-dimensional
space. The edges of the box come from the rows of A. The
columns of A would give an entirely different box with the same
volume. (recall that det A = det AT .)
3) Formula for pivots:
The determinant gives a formula for each pivot.

determinant = ±(product of the pivots)

4) The determinant measures the dependence of A^{-1} b on each element of b.
82
Three basic properties of determinant

Basic Properties:
1) det I = 1.
2) The sign (of determinant) is reversed by a row exchange.
3) The determinant is linear in each row separately.

There are other important properties of determinant. But every


property is a consequence of the first three.

83
Properties of determinant

1) The determinant of the identity matrix is 1.


2) The determinant changes sign when two rows are exchanged.
3) The determinant depends linearly on the first row.
4) If two rows of A are equal, then det A = 0.
5) Subtracting a multiple of one row from another row leaves the
same determinant.
6) If A has a row of zeros, then det A = 0.
7) If A is triangular then det A is the product a11 a22 . . . ann of the
diagonal entries. If the triangular A has 1s along the diagonal,
then det A = 1.
8) If A is singular, then det A = 0. If A is invertible, then det A ̸=
0.
9) The determinant of AB is the product of det A times det B.
10) The transpose of A has the same determinant as A itself: det AT =
det A.
84
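Properties 2, 9, and 10 are easy to spot-check numerically. A NumPy sketch with a fixed random seed (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# Property 9: det(AB) = det(A) det(B)
print(np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B)))  # True

# Property 10: det(A^T) = det(A)
print(np.isclose(np.linalg.det(A.T), np.linalg.det(A)))  # True

# Property 2: exchanging two rows reverses the sign
P = A[[1, 0, 2], :]
print(np.isclose(np.linalg.det(P), -np.linalg.det(A)))  # True
```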
Formulas for the Determinant

Pivots and determinant


If A is invertible, then P A = LDU and det P = ±1. The product rule
gives

det A = ± det L det D det U = ±(product of the pivots).

85
