PETE-560 Mathematical Methods in Petroleum Engineering, Lecture Notes, Chapter # 5: Linear Algebra
Square matrix:
\( \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix} \)
Non-square matrix:
\( \begin{bmatrix} 1 & 2 \\ 3 & 4 \\ 5 & 6 \end{bmatrix} \)
Symmetric Matrix (\( A^T = A \)):
\( \begin{bmatrix} 1 & 2 & 3 \\ 2 & 4 & 5 \\ 3 & 5 & 6 \end{bmatrix} \)
Scalar Matrix:
\( \begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 2 \end{bmatrix} \)
Skew Symmetric Matrix (\( A^T = -A \)):
\( \begin{bmatrix} 0 & 2 & 3 \\ -2 & 0 & 5 \\ -3 & -5 & 0 \end{bmatrix} \)
5.2 PROPERTIES:
Commutative law of Addition:
a) \( A + B = B + A \)
b) Associative law: \( (A + B) + C = A + (B + C) \)
Distributive law:
\( k(A + B) = kA + kB \)
Matrix multiplication requires conformable dimensions:
\( A_{l \times m} \, B_{m \times n} = C_{l \times n} \)
Example:
\( A = \begin{bmatrix} 3 & 0 & 1 \\ 2 & 4 & 0 \end{bmatrix}_{2 \times 3}, \quad B = \begin{bmatrix} 5 & 0 \\ 2 & 1 \\ 1 & 2 \end{bmatrix}_{3 \times 2} \)
\( AB = \begin{bmatrix} 3(5)+0(2)+1(1) & 3(0)+0(1)+1(2) \\ 2(5)+4(2)+0(1) & 2(0)+4(1)+0(2) \end{bmatrix} = \begin{bmatrix} 16 & 2 \\ 18 & 4 \end{bmatrix}_{2 \times 2} \)
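A quick numerical check of the product above (and of the l×m by m×n dimension rule) can be run with NumPy. This is a minimal sketch; the matrix entries are the ones reconstructed in the example above.

    import numpy as np

    A = np.array([[3, 0, 1],
                  [2, 4, 0]])     # 2x3
    B = np.array([[5, 0],
                  [2, 1],
                  [1, 2]])        # 3x2

    AB = A @ B                    # (2x3)(3x2) -> 2x2
    print(AB)                     # [[16  2]
                                  #  [18  4]]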
5.4 Fundamental Properties of Scalar Algebra that "Do Not" Hold for Matrices:
5.5 Properties of Matrix Operations:
1) If A and B are two square matrices of the same order, then \( |AB| = |A|\,|B| \).
It follows that whenever the product of two matrices A and B is 0, then at least one of them has a determinant
equal to zero.
Example (the converse is not true: two singular matrices need not have a zero product):
\( A = \begin{bmatrix} 1 & 2 & 3 \\ 0 & -3 & 2 \\ -2 & -7 & -4 \end{bmatrix}, \quad B = \begin{bmatrix} 3 & 8 & -1 \\ -4 & -6 & -1 \\ 5 & 2 & 4 \end{bmatrix} \)
\( |A| = 1(12 + 14) - 2(0 + 4) + 3(0 - 6) \)
\( = 26 - 8 - 18 \)
\( |A| = 0 \)
\( |B| = 3(-24 + 2) - 8(-16 + 5) + (-1)(-8 + 30) \)
\( = 3(-22) - 8(-11) - 1(22) \)
\( = -66 + 88 - 22 \)
\( |B| = 0 \)
Yet:
\( AB = \begin{bmatrix} 3-8+15 & 8-12+6 & -1-2+12 \\ 0+12+10 & 0+18+4 & 0+3+8 \\ -6+28-20 & -16+42-8 & 2+7-16 \end{bmatrix} = \begin{bmatrix} 10 & 2 & 9 \\ 22 & 22 & 11 \\ 2 & 18 & -7 \end{bmatrix} \)
\( AB \neq 0 \)
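The identity \( |AB| = |A||B| \) and the computation above are easy to confirm numerically; this sketch uses the A and B reconstructed in the example.

    import numpy as np

    A = np.array([[1, 2, 3], [0, -3, 2], [-2, -7, -4]])
    B = np.array([[3, 8, -1], [-4, -6, -1], [5, 2, 4]])

    print(np.linalg.det(A))   # ~0: A is singular
    print(np.linalg.det(B))   # ~0: B is singular
    print(A @ B)              # nonzero matrix: AB != 0
    # |AB| = |A||B| holds for any pair of square matrices of the same order:
    print(np.isclose(np.linalg.det(A @ B),
                     np.linalg.det(A) * np.linalg.det(B)))   # True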
2) Associative law:
\( A(BC) = (AB)C = ABC \)
3) Distributive law:
\( A(B + C) = AB + AC \)
Also \( (B + C)A = BA + CA \)
4) Only a square matrix can multiply itself:
If A is \( l \times h \) (with \( l \neq h \)) and B is \( m \times n \) (with \( m \neq n \)),
then AA and BB do not exist.
5) The product of two diagonal matrices A and B is a diagonal matrix, and here:
\( AB = BA \)
Example:
\( A = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 5 \end{bmatrix} \) and \( B = \begin{bmatrix} 3 & 0 & 0 \\ 0 & 5 & 0 \\ 0 & 0 & 2 \end{bmatrix} \)
\( AB = \begin{bmatrix} 3 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 10 \end{bmatrix} \)
\( BA = \begin{bmatrix} 3 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 10 \end{bmatrix} = AB \)
6) Transpose of a Matrix:
Let
\( A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \\ 5 & 6 \end{bmatrix} \); then \( A^T = \begin{bmatrix} 1 & 3 & 5 \\ 2 & 4 & 6 \end{bmatrix} \)
(a) \( (A^T)^T = A \)
(b) If A is symmetric, then \( A^T = A \).
(c) If A is skew symmetric, then \( A^T = -A \).
(d) \( (AB)^T = B^T A^T \); note that \( (AB)^T \neq A^T B^T \) in general.
(e) \( (A + B)^T = A^T + B^T \)
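A minimal NumPy sketch of rules (d) and (e); the random matrices here are illustrative choices, not from the notes.

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.integers(-5, 5, (3, 2)).astype(float)
    B = rng.integers(-5, 5, (2, 4)).astype(float)
    C = rng.integers(-5, 5, (3, 2)).astype(float)

    # (d): transposing a product reverses the order of the factors
    print(np.allclose((A @ B).T, B.T @ A.T))   # True
    # (e): the transpose of a sum is the sum of the transposes
    print(np.allclose((A + C).T, A.T + C.T))   # True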
(8) Inverse of a Matrix:
Let A be any matrix; then a matrix B, if it exists, such that
\( AB = BA = I \)
is called the inverse of A.
(a) A non-square matrix has no inverse.
(b) The inverse of a matrix, when it exists, is unique.
(c) The inverse of A is denoted \( A^{-1} \) and is given by
\( A^{-1} = \dfrac{\operatorname{adj} A}{|A|} \)
where \( \operatorname{adj} A \) is the transpose of the matrix that contains the cofactors of A.
(d) A has an inverse if and only if \( |A| \neq 0 \), i.e., A is a nonsingular matrix.
(e) \( (AB)^{-1} = B^{-1} A^{-1} \)
\( |A| = a_{11}(a_{22}a_{33} - a_{23}a_{32}) - a_{12}(a_{21}a_{33} - a_{31}a_{23}) + a_{13}(a_{21}a_{32} - a_{31}a_{22}) \)
\( |A| = a_{11}a_{22}a_{33} + a_{12}a_{23}a_{31} + a_{13}a_{21}a_{32} - a_{31}a_{22}a_{13} - a_{32}a_{23}a_{11} - a_{33}a_{21}a_{12} \)
Example:
\( A = \begin{bmatrix} 1 & 2 & 3 \\ 0 & 4 & 5 \\ 1 & 0 & 6 \end{bmatrix} \)
Repeating the first two columns (Sarrus' rule):
\( \begin{vmatrix} 1 & 2 & 3 \\ 0 & 4 & 5 \\ 1 & 0 & 6 \end{vmatrix} \begin{matrix} 1 & 2 \\ 0 & 4 \\ 1 & 0 \end{matrix} \)
\( |A| = 24 + 10 + 0 - 12 - 0 - 0 \)
\( |A| = 22 \)
Properties of determinants:
(a) \( |I| = 1 \)
(b) \( |A^T| = |A| \)
(c) \( |A^{-1}| = \dfrac{1}{|A|} \)
(d) \( |AB| = |A|\,|B| \)
(e) If A is an \( n \times n \) matrix and m is a scalar quantity, then \( |mA| = m^n |A| \)
(g) \( |A| = \prod_{i=1}^{n} \lambda_i \): the determinant of A is the product of the eigenvalues of A.
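These properties can be spot-checked with NumPy on the 3×3 example above; eigenvalues come from np.linalg.eigvals.

    import numpy as np

    A = np.array([[1, 2, 3], [0, 4, 5], [1, 0, 6]], dtype=float)

    print(np.linalg.det(A))                                         # ~22
    print(np.isclose(np.linalg.det(A.T), np.linalg.det(A)))         # (b)
    print(np.isclose(np.linalg.det(np.linalg.inv(A)), 1 / 22))      # (c)
    m, n = 3.0, A.shape[0]
    print(np.isclose(np.linalg.det(m * A), m**n * np.linalg.det(A)))  # (e)
    print(np.isclose(np.prod(np.linalg.eigvals(A)).real, 22))       # (g)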
(h) If a column (or row) of a matrix can be expressed as a linear combination of two or more other
columns (rows) of that matrix, then the determinant of that matrix is zero.
e.g., take \( A = \begin{bmatrix} 1 & 2 & \cdot \\ 0 & 5 & \cdot \\ 3 & 1 & \cdot \end{bmatrix} \)
and make the third column satisfy a linear relation with the other two columns,
such as \( c_2 = c_3 + 3c_1 \)
or \( c_2 = 2c_3 + 5c_1 \);
then \( |A| = 0 \).
(i) Interchanging any two columns of a matrix multiplies its determinant by \( -1 \).
(j) Adding a scalar multiple of one column to another column does not change the determinant
of the matrix.
e.g., \( A = \begin{bmatrix} 1 & 2 & 3 \\ 1 & 2 & 3 \\ 1 & 2 & 3 \end{bmatrix} \) and \( B = \begin{bmatrix} 1 & 2 & 6 \\ 1 & 2 & 6 \\ 1 & 2 & 6 \end{bmatrix} \),
where the third column of B is \( c_3 + c_1 + c_2 \); then \( |A| = |B| \).
For a 2×2 matrix
\( A = \begin{bmatrix} a & b \\ c & d \end{bmatrix} \),
the cofactor matrix is
\( \operatorname{Co} A = \begin{bmatrix} d & -c \\ -b & a \end{bmatrix} \)
Example:
\( A = \begin{bmatrix} 1 & 2 & 3 \\ 0 & 4 & 5 \\ 1 & 0 & 6 \end{bmatrix} \)
Find the cofactors of A:
\( M_{11} = \begin{vmatrix} 4 & 5 \\ 0 & 6 \end{vmatrix} = 24 \)
\( M_{12} = \begin{vmatrix} 0 & 5 \\ 1 & 6 \end{vmatrix} = -5 \)
\( M_{13} = \begin{vmatrix} 0 & 4 \\ 1 & 0 \end{vmatrix} = -4 \), \( M_{21} = \begin{vmatrix} 2 & 3 \\ 0 & 6 \end{vmatrix} = 12 \)
\( M_{22} = 3 \), \( M_{23} = -2 \), \( M_{31} = -2 \), \( M_{32} = 5 \), \( M_{33} = 4 \)
\( C_{ij} = (-1)^{i+j} M_{ij} \)
\( \operatorname{cof} A = \begin{bmatrix} 24 & 5 & -4 \\ -12 & 3 & 2 \\ -2 & -5 & 4 \end{bmatrix} \)
(4) \( |\operatorname{Adj} A| = |A|^{n-1} \) for \( n \times n \) matrices.
(5) \( A\,(\operatorname{Adj} A) = (\operatorname{Adj} A)\,A = |A|\,I \)
Example:
\( A = \begin{bmatrix} 1 & 2 & 3 \\ 0 & -3 & 2 \\ -2 & -7 & -4 \end{bmatrix} \) (the singular matrix from before, with \( |A| = 0 \))
\( \operatorname{Co} A = \begin{bmatrix} 26 & -4 & -6 \\ -13 & 2 & 3 \\ 13 & -2 & -3 \end{bmatrix} \)
\( \operatorname{Adj} A = \begin{bmatrix} 26 & -13 & 13 \\ -4 & 2 & -2 \\ -6 & 3 & -3 \end{bmatrix} \)
\( A\,\operatorname{Adj} A = \begin{bmatrix} 1 & 2 & 3 \\ 0 & -3 & 2 \\ -2 & -7 & -4 \end{bmatrix} \begin{bmatrix} 26 & -13 & 13 \\ -4 & 2 & -2 \\ -6 & 3 & -3 \end{bmatrix} \)
\( = \begin{bmatrix} 26-8-18 & -13+4+9 & 13-4-9 \\ 0+12-12 & 0-6+6 & 0+6-6 \\ -52+28+24 & 26-14-12 & -26+14+12 \end{bmatrix} \)
\( A\,\operatorname{Adj} A = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} \)
Note: a singular matrix A, when multiplied by its adjugate, results in the zero matrix:
If \( |A| = 0 \), then \( A\,(\operatorname{Adj} A) = \) zero matrix.
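NumPy has no built-in adjugate, so here is a small cofactor-based sketch that reproduces the zero product above; the helper adjugate() is introduced for illustration only.

    import numpy as np

    def adjugate(M):
        """Adjugate: transpose of the cofactor matrix."""
        n = M.shape[0]
        C = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                minor = np.delete(np.delete(M, i, axis=0), j, axis=1)
                C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
        return C.T

    A = np.array([[1, 2, 3], [0, -3, 2], [-2, -7, -4]], dtype=float)
    print(np.round(A @ adjugate(A)))   # zero matrix, since |A| = 0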
Rank of a Matrix:
1. The row rank \( r_{row} \) of a matrix is the maximum number of linearly independent rows in the
matrix.
2. The column rank \( r_{col} \) of a matrix is the maximum number of linearly independent columns in that
matrix.
3. If A is an \( m \times n \) matrix, then \( r_{row}(A) \leq m \) and \( r_{col}(A) \leq n \).
5.11 Elementary Transformation:
These are operations that transform a matrix into an equivalent matrix.
Elementary transformations include:
1. Interchange of any two columns (or rows).
2. Addition of a multiple of any column (or row) to another column (or row).
3. Multiplication of any column (or row) by a non-zero scalar.
Note:
A) The rank of a matrix is unaltered by elementary transformations.
B) If all the elements of a column (or row) of a square matrix are multiplied by \( \alpha \), the determinant
of the matrix is multiplied by \( \alpha \).
C) If two of the rows or columns are interchanged, the determinant changes sign.
Example-1: Reduce \( A = \begin{bmatrix} 3 & -2 \\ 2 & 3 \end{bmatrix} \) to row echelon form.
\( \begin{bmatrix} 3 & -2 \\ 2 & 3 \end{bmatrix} \xrightarrow{R_2 - \frac{2}{3}R_1} \begin{bmatrix} 3 & -2 \\ 0 & \frac{13}{3} \end{bmatrix} \)
\( r(A) = 2 \)
\( |A| = 13 \)
If an \( n \times n \) matrix A has rank \( r_{row} = r_{col} < n \), it is called rank deficient.
\( A = \begin{bmatrix} 1 & 5 & 4 \\ 0 & 3 & 2 \\ 2 & 13 & 10 \end{bmatrix} \)
Swap \( R_2 \leftrightarrow R_3 \) (call the result \( A_2 \); note \( |A_2| = -|A| \)):
\( \begin{bmatrix} 1 & 5 & 4 \\ 2 & 13 & 10 \\ 0 & 3 & 2 \end{bmatrix} \)
\( R_2 - 2R_1: \quad \begin{bmatrix} 1 & 5 & 4 \\ 0 & 3 & 2 \\ 0 & 3 & 2 \end{bmatrix} \)
\( R_3 - R_2: \quad \begin{bmatrix} 1 & 5 & 4 \\ 0 & 3 & 2 \\ 0 & 0 & 0 \end{bmatrix} \)
\( r(A) = 2 \)
And \( |A| = 0 \)
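np.linalg.matrix_rank gives the same answer directly (internally it uses the SVD rather than elimination):

    import numpy as np

    A = np.array([[1, 5, 4], [0, 3, 2], [2, 13, 10]], dtype=float)
    print(np.linalg.matrix_rank(A))   # 2 -> rank deficient
    print(np.linalg.det(A))           # ~0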
\( A = \begin{bmatrix} 1 & 2 & 1 \\ 2 & 4 & 2 \\ 1 & 2 & 1 \end{bmatrix}, \quad B = \begin{bmatrix} 6 & 1 & 1 & 1 \\ 16 & 1 & -1 & 5 \\ 7 & 2 & 3 & 0 \end{bmatrix} \)
For A: \( R_2 - 2R_1 \), \( R_3 - R_1 \):
\( \begin{bmatrix} 1 & 2 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} \)
\( r(A) = 1 \), \( |A| = 0 \)
For B, interchange columns \( c_1 \leftrightarrow c_4 \):
\( \begin{bmatrix} 1 & 1 & 1 & 6 \\ 5 & 1 & -1 & 16 \\ 0 & 2 & 3 & 7 \end{bmatrix} \)
\( R_2 - 5R_1: \quad \begin{bmatrix} 1 & 1 & 1 & 6 \\ 0 & -4 & -6 & -14 \\ 0 & 2 & 3 & 7 \end{bmatrix} \)
\( R_3 + \frac{1}{2}R_2: \quad \begin{bmatrix} 1 & 1 & 1 & 6 \\ 0 & -4 & -6 & -14 \\ 0 & 0 & 0 & 0 \end{bmatrix} \)
\( r(B) = 2 \)
e.g., if there are three unknowns \( x_1, x_2, x_3 \) and only two equations, you will have to assume (you have the freedom to choose) one
variable in order to find the other two.
Non-Homogeneous Equations:
\( a_{11}x_1 + a_{12}x_2 + \dots + a_{1n}x_n = b_1 \)
\( a_{21}x_1 + a_{22}x_2 + \dots + a_{2n}x_n = b_2 \)
\( \vdots \)
\( Ax = b \)
A is the coefficient matrix and \( [A, b] \) is the augmented matrix.
Theorem: \( Ax = b \) has a solution (i.e., the system of equations is consistent) if and only if A and the
augmented matrix \( [A, b] \) have the same rank.
Rule: If a system of m linear equations with n unknowns is given, then to know the nature of
the solution, find \( r([A, b]) \) and \( r(A) \):
(a) If \( r([A, b]) \neq r(A) \), the equations are inconsistent.
(b) If \( r([A, b]) = r(A) = n \), the equations are consistent, and they have a unique solution.
(c) If \( r([A, b]) = r(A) < n \), the equations are consistent, and they have an infinite number of solutions.
Example:
\( 2x + y - z = 7 \)
\( 3x - y + 5z = 13 \)
\( x - y + z = 5 \)
In matrix form:
\( \begin{bmatrix} 2 & 1 & -1 \\ 3 & -1 & 5 \\ 1 & -1 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 7 \\ 13 \\ 5 \end{bmatrix} \)
To find r(A):
\( A = \begin{bmatrix} 2 & 1 & -1 \\ 3 & -1 & 5 \\ 1 & -1 & 1 \end{bmatrix} \)
\( R_2 - \frac{3}{2}R_1, \; R_3 - \frac{1}{2}R_1: \quad \begin{bmatrix} 2 & 1 & -1 \\ 0 & -\frac{5}{2} & \frac{13}{2} \\ 0 & -\frac{3}{2} & \frac{3}{2} \end{bmatrix} \)
\( 2R_2, \; 2R_3: \quad \begin{bmatrix} 2 & 1 & -1 \\ 0 & -5 & 13 \\ 0 & -3 & 3 \end{bmatrix} \)
For the augmented matrix:
\( [A, b] = \begin{bmatrix} 2 & 1 & -1 & 7 \\ 3 & -1 & 5 & 13 \\ 1 & -1 & 1 & 5 \end{bmatrix} \)
\( R_2 - \frac{3}{2}R_1, \; R_3 - \frac{1}{2}R_1: \quad \begin{bmatrix} 2 & 1 & -1 & 7 \\ 0 & -\frac{5}{2} & \frac{13}{2} & \frac{5}{2} \\ 0 & -\frac{3}{2} & \frac{3}{2} & \frac{3}{2} \end{bmatrix} \)
\( 2R_2, \; 2R_3: \quad \begin{bmatrix} 2 & 1 & -1 & 7 \\ 0 & -5 & 13 & 5 \\ 0 & -3 & 3 & 3 \end{bmatrix} \)
\( R_3 - \frac{3}{5}R_2: \quad \begin{bmatrix} 2 & 1 & -1 & 7 \\ 0 & -5 & 13 & 5 \\ 0 & 0 & -\frac{24}{5} & 0 \end{bmatrix} \)
Here \( r(A) = r([A, b]) = 3 = n \), so the system has a unique solution. The reduced system of equations is:
\( 2x + y - z = 7 \)
\( 0x - 5y + 13z = 5 \)
\( 0x + 0y - \frac{24}{5}z = 0 \)
\( z = 0, \; y = -1, \; x = 4 \)
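The solution and the consistency rule are quick to verify numerically:

    import numpy as np

    A = np.array([[2, 1, -1], [3, -1, 5], [1, -1, 1]], dtype=float)
    b = np.array([7, 13, 5], dtype=float)

    Ab = np.column_stack([A, b])
    print(np.linalg.matrix_rank(A),
          np.linalg.matrix_rank(Ab))    # 3 3 -> consistent, unique solution
    print(np.linalg.solve(A, b))        # [ 4. -1.  0.]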
Example: Find the value of a for which
\( A = \begin{bmatrix} 6 & 3 & 5 & 9 \\ 5 & 2 & 3 & 6 \\ 3 & 1 & 2 & 3 \\ 2 & 1 & 1 & a \end{bmatrix} \)
has rank 3.
\( R_2 - \frac{5}{6}R_1, \; R_3 - \frac{1}{2}R_1, \; R_4 - \frac{1}{3}R_1: \)
\( \begin{bmatrix} 6 & 3 & 5 & 9 \\ 0 & -\frac{1}{2} & -\frac{7}{6} & -\frac{3}{2} \\ 0 & -\frac{1}{2} & -\frac{1}{2} & -\frac{3}{2} \\ 0 & 0 & -\frac{2}{3} & a-3 \end{bmatrix} \)
\( R_3 - R_2: \)
\( \begin{bmatrix} 6 & 3 & 5 & 9 \\ 0 & -\frac{1}{2} & -\frac{7}{6} & -\frac{3}{2} \\ 0 & 0 & \frac{2}{3} & 0 \\ 0 & 0 & -\frac{2}{3} & a-3 \end{bmatrix} \)
\( R_4 + R_3: \)
\( \begin{bmatrix} 6 & 3 & 5 & 9 \\ 0 & -\frac{1}{2} & -\frac{7}{6} & -\frac{3}{2} \\ 0 & 0 & \frac{2}{3} & 0 \\ 0 & 0 & 0 & a-3 \end{bmatrix} \)
To make the rank = 3:
\( a - 3 = 0 \)
\( a = 3 \)
5.15 Determinant from Gaussian Elimination:
The determinant of the original square matrix can be obtained by taking the product of the diagonal elements
of the resulting triangular matrix, provided that adjustments are made for two elementary operations: (a) row
swapping and (b) multiplication of a row by a scalar.
\( A = \begin{bmatrix} 2 & 1 & -1 \\ 3 & -1 & 5 \\ 1 & -1 & 1 \end{bmatrix} \)
From the earlier elimination (in which \( R_2 \) and \( R_3 \) were each multiplied by 2):
\( B = \begin{bmatrix} 2 & 1 & -1 \\ 0 & -5 & 13 \\ 0 & 0 & -\frac{24}{5} \end{bmatrix} \)
\( |B| = 2 \times (-5) \times \left(-\frac{24}{5}\right) \)
\( |B| = 48 \)
\( |B| = 2 \times 2 \times |A| \)
\( |A| = \frac{|B|}{4} \)
\( |A| = 12 \)
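The same bookkeeping is what an LU factorization does internally; with scipy.linalg.lu the determinant is the product of U's diagonal times the sign of the permutation:

    import numpy as np
    from scipy.linalg import lu

    A = np.array([[2, 1, -1], [3, -1, 5], [1, -1, 1]], dtype=float)
    P, L, U = lu(A)                     # A = P @ L @ U, diag(L) = 1
    sign = np.linalg.det(P)             # +1 or -1 from the row swaps
    print(sign * np.prod(np.diag(U)))   # 12.0
    print(np.linalg.det(A))             # 12.0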
5.16 LU DECOMPOSITION:
LU without pivoting:
Matrix A can be decomposed into two triangular matrices, L (lower triangular) and U (upper triangular).
To solve the linear system
\( Ax = b \),
we let \( A = LU \), so that
\( LUx = b \).
Now let \( Ux = y \), so that
\( Ly = b \).
Note: The LU decomposition of a matrix A does not always exist, even if A is nonsingular.
Further reading: the Jacobi method and the Gauss-Seidel method (iterative methods to solve linear systems).
Writing out \( A = LU \) entry by entry for a 3×3 matrix, the third row gives:
\( l_{31}u_{11} = a_{31} \)
\( l_{31}u_{12} + l_{32}u_{22} = a_{32} \)
\( l_{31}u_{13} + l_{32}u_{23} + l_{33}u_{33} = a_{33} \)
Here we have 9 equations but 12 unknowns (the L and U elements), so we assume 3 unknowns to get a unique
pair of matrices L and U.
There are 3 sets of linear equations, and in each set there is one redundancy (more unknowns than equations).
This would lead to an infinite number of solutions. To ensure that the system leads to a unique solution, we
have to fix some of the unknowns.
We choose to fix the diagonal of L: \( l_{11} = l_{22} = l_{33} = 1 \).
\( A = LU: \)
\( \begin{bmatrix} 1 & 2 & 3 \\ 2 & 5 & 12 \\ 0 & 2 & 10 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ 0 & 2 & 1 \end{bmatrix} \begin{bmatrix} 1 & 2 & 3 \\ 0 & 1 & 6 \\ 0 & 0 & -2 \end{bmatrix} \)
2. Record the factors used in every step of the Gaussian elimination, i.e., the matrix L stores m, where
m is the factor in \( R_i - mR_j \) that creates the zero in the upper triangular matrix.
\( A = \begin{bmatrix} 1 & 2 & 3 \\ 2 & 5 & 12 \\ 0 & 2 & 10 \end{bmatrix} \)
\( R_2 - 2R_1 \) (\( m = 2 \) for \( l_{21} \)), \( R_3 - 0R_1 \) (\( m = 0 \) for \( l_{31} \)):
\( \begin{bmatrix} 1 & 2 & 3 \\ 0 & 1 & 6 \\ 0 & 2 & 10 \end{bmatrix} \)
\( R_3 - 2R_2 \) (\( m = 2 \) for \( l_{32} \)):
\( \begin{bmatrix} 1 & 2 & 3 \\ 0 & 1 & 6 \\ 0 & 0 & -2 \end{bmatrix} \)
So here:
\( U = \begin{bmatrix} 1 & 2 & 3 \\ 0 & 1 & 6 \\ 0 & 0 & -2 \end{bmatrix}, \quad L = \begin{bmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ 0 & 2 & 1 \end{bmatrix} \)
In general, with row pivoting, \( A = PLU \).
The cost of LU factorization is \( \frac{2}{3}n^3 \) flops if the order of A is n.
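scipy.linalg.lu always applies partial pivoting, so its L and U may differ from the hand-computed (no-pivot) factors above; both reconstruct A:

    import numpy as np
    from scipy.linalg import lu

    A = np.array([[1, 2, 3], [2, 5, 12], [0, 2, 10]], dtype=float)
    P, L, U = lu(A)                    # partial pivoting: A = P @ L @ U
    print(np.allclose(P @ L @ U, A))   # True
    # The hand-computed no-pivot factors also check out:
    L0 = np.array([[1, 0, 0], [2, 1, 0], [0, 2, 1]], dtype=float)
    U0 = np.array([[1, 2, 3], [0, 1, 6], [0, 0, -2]], dtype=float)
    print(np.allclose(L0 @ U0, A))     # True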
Solving a linear equation by LU factorization:
\( A = PLU \); cost: \( \frac{2}{3}n^3 \) flops*
(1) Permutation: \( Pz = b \Rightarrow z = P^{-1}b = P^T b \). Cost: 0 flops.
(2) Forward substitution: solve \( Ly = z \). Cost: \( n^2 \) flops.
(3) Backward substitution: solve \( Ux = y \). Cost: \( n^2 \) flops.
Total cost: \( \frac{2}{3}n^3 + n^2 + n^2 = \frac{2}{3}n^3 + 2n^2 \approx \frac{2}{3}n^3 \) when n is very large.
*Flops: the number of multiplications and divisions to be performed.
When A becomes very large, direct solution methods such as LU become inefficient or
impractical. Then iterative solvers become handy, e.g.:
Jacobi, Gauss-Seidel, Successive Over-Relaxation (SOR): stationary methods.
Conjugate Gradient*, Biconjugate Gradient*, Minimum Residual (MINRES), GMRES (Generalized Minimum Residual): non-stationary methods, also called Krylov subspace methods.
\( K = \operatorname{span}\{ b, Ab, A^2b, A^3b, \dots, A^{n-1}b \} \) is the Krylov subspace.
Orthogonal Matrix: a matrix whose rows are mutually orthogonal unit vectors, i.e., the dot product of two distinct rows is zero.
They are also called rotation matrices. Their inverse is equal to their transpose:
\( P^{-1} = P^T \)
\( PP^T = I \)
\( |P|\,|P^T| = |I| = 1 \)
\( |P|^2 = 1 \Rightarrow |P| = \pm 1 \)
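As a taste of the stationary methods listed above, here is a minimal Jacobi iteration sketch: each sweep updates \( x_i \leftarrow (b_i - \sum_{j \neq i} a_{ij}x_j)/a_{ii} \). Convergence is only guaranteed for suitable matrices (e.g., diagonally dominant ones); the test system here is an illustrative choice.

    import numpy as np

    def jacobi(A, b, iters=50):
        """Plain Jacobi iteration starting from x = 0."""
        D = np.diag(A)                 # diagonal entries a_ii
        R = A - np.diagflat(D)         # off-diagonal part of A
        x = np.zeros_like(b)
        for _ in range(iters):
            x = (b - R @ x) / D        # simultaneous update of all x_i
        return x

    A = np.array([[4.0, 1, 1], [1, 5, 2], [1, 2, 6]])   # diagonally dominant
    b = np.array([6.0, 8, 9])
    print(jacobi(A, b))                # close to np.linalg.solve(A, b)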
Linear Systems with Multiple R.H.S.:
\( AX = B \)
where \( X = [x_1, x_2, x_3, \dots, x_M] \) and \( B = [b_1, b_2, b_3, \dots, b_M] \), i.e.
\( Ax_1 = b_1 \)
\( Ax_2 = b_2 \)
\( Ax_3 = b_3 \)
\( \vdots \)
\( Ax_M = b_M \)
Solving each system independently gives a total cost of \( \left( \frac{2}{3}n^3 + 2n^2 \right) M \).
This is a very expensive process because A would be factorized M times. Instead, we should
decompose A into L and U only once:
\( A = PLU \) costs \( \frac{2}{3}n^3 \) flops,
and then implement the forward and backward substitutions M times each.
Total cost: \( \frac{2}{3}n^3 + 2Mn^2 \)
Q: Is pivoting optional, or are there problems that cannot be solved without pivoting?
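SciPy exposes exactly this reuse pattern: lu_factor does the \(O(n^3)\) factorization once, and lu_solve then performs only the two \(O(n^2)\) triangular solves per right-hand side.

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    A = np.array([[4.0, 1, 1], [1, 5, 2], [1, 2, 6]])
    lu_piv = lu_factor(A)                  # O(n^3) work, done once

    for b in ([6.0, 8, 9], [1.0, 0, 0], [0.0, 1, 0]):
        x = lu_solve(lu_piv, np.array(b))  # O(n^2) work per RHS
        print(x)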
5.17 Cholesky Factorization (Direct Solver):
This factorization is only defined for symmetric or Hermitian positive definite matrices. We shall restrict our
discussion to the case where A is real and symmetric positive definite (SPD).
Positive definite: all the eigenvalues are positive.
Positive semi-definite: all the eigenvalues are either positive or zero.
Negative semi-definite: all the eigenvalues are either negative or zero.
Negative definite: all the eigenvalues are negative.
Symmetric Positive Definite (SPD):
A matrix \( A \in \mathbb{R}^{n \times n} \) is SPD if it is symmetric (\( A^T = A \)) and, for all non-zero vectors
\( x \in \mathbb{R}^n \), \( x^T A x > 0 \).
Theorem-1: Given an SPD matrix A, there exists a lower triangular matrix L such that \( A = LL^T \). That is,
every SPD matrix can be factored as \( A = LL^T \).
1. L is known as the Cholesky factor of A.
2. The Cholesky factorization is unique if the diagonal elements of L are positive.
3. The cost of factorizing A into \( LL^T \) is \( \frac{1}{3}n^3 \).
\( Ax = b \)
Let \( A = LL^T \),
so that \( LL^T x = b \),
where \( L^T x = y \).
Steps:
1. Factorize A into \( LL^T \).
2. Solve by forward substitution:
\( Ly = b \)
3. Solve by backward substitution:
\( L^T x = y \)
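A NumPy/SciPy sketch of the three steps; the small SPD matrix is an illustrative choice.

    import numpy as np
    from scipy.linalg import solve_triangular

    A = np.array([[4.0, 2, 2], [2, 5, 3], [2, 3, 6]])   # SPD
    b = np.array([1.0, 2, 3])

    L = np.linalg.cholesky(A)                  # step 1: A = L @ L.T
    y = solve_triangular(L, b, lower=True)     # step 2: forward substitution
    x = solve_triangular(L.T, y, lower=False)  # step 3: backward substitution
    print(np.allclose(A @ x, b))               # True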
5.18 Eigen Values and Eigen Vectors:
Let A be a given matrix of order \( n \times n \). If there exists a non-zero column vector x of order \( n \times 1 \) and a
scalar \( \lambda \) such that \( Ax = \lambda x \), then x is called an eigenvector of A and \( \lambda \) is called an eigenvalue of A.
Example: \( A = \begin{bmatrix} 1 & 4 \\ 3 & 2 \end{bmatrix} \). Verify that \( \lambda = -2 \) is an eigenvalue and find the corresponding eigenvector.
Solution: \( Ax = \lambda x \). Here \( \lambda = -2 \):
\( \begin{bmatrix} 1 & 4 \\ 3 & 2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = -2 \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \)
\( x_1 + 4x_2 = -2x_1 \Rightarrow 3x_1 + 4x_2 = 0 \)
\( 3x_1 + 2x_2 = -2x_2 \Rightarrow 3x_1 + 4x_2 = 0 \)
Subtracting the two equations gives \( 0x_1 + 0x_2 = 0 \): both equations are the same, so one variable is free.
\( x_1 = -\frac{4}{3}x_2 \)
So, let \( x_2 = 3 \); then
\( x_1 = -4 \)
\( x = \begin{bmatrix} -4 \\ 3 \end{bmatrix} \)
(1) Every non-zero vector is an eigenvector of the identity matrix I and of the zero matrix 0.
(2) Eigenvalues and eigenvectors exist only for square matrices.
(3) Eigenvectors must be non-zero (eigenvalues, however, may be zero).
Theorem-I: If x is an eigenvector of a matrix A, then any non-zero scalar multiple of x is also an eigenvector
of A.
Proof: Let \( Ax = \lambda x \)
and \( y = \alpha x \) with \( \alpha \neq 0 \).
Then \( A(\alpha x) = \alpha Ax \)
\( = \alpha \lambda x \)
\( = \lambda (\alpha x) \)
i.e., \( Ay = \lambda y \).
Theorem-II: An eigenvector cannot correspond to two different eigenvalues.
Proof (by contradiction):
If \( Ax = \lambda x \) and \( Ax = \mu x \),
subtracting both: \( 0 = (\lambda - \mu)x \).
Since \( x \neq 0 \), then \( \lambda - \mu = 0 \), i.e., \( \lambda = \mu \).
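np.linalg.eig returns both eigenvalues and (column) eigenvectors. For the 2×2 example above it recovers \( \lambda = 5, -2 \), with the \( \lambda = -2 \) eigenvector proportional to \( [-4, 3]^T \):

    import numpy as np

    A = np.array([[1.0, 4], [3, 2]])
    w, V = np.linalg.eig(A)
    print(w)                            # [ 5. -2.]
    v = V[:, np.argmin(w)]              # eigenvector for lambda = -2
    print(v / v[1] * 3)                 # [-4.  3.] after rescaling
    print(np.allclose(A @ v, -2 * v))   # True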
Example-2: Find the eigenvalues of
(a) \( A = \begin{bmatrix} 1 & 3 \\ 3 & 1 \end{bmatrix}; \quad B = \begin{bmatrix} 1 & 2 \\ 3 & 0 \end{bmatrix}; \quad C = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} \)
For B:
\( Bx = \lambda x \)
\( Bx - \lambda x = 0 \)
\( (B - \lambda I)x = 0 \)
Since x is never zero, and the above equation represents a homogeneous system, a non-zero solution x exists only if the
matrix \( B - \lambda I \) is singular (the case of infinite solutions). This leads to:
\( |B - \lambda I| = 0 \)
i.e. \( \begin{vmatrix} 1-\lambda & 2 \\ 3 & 0-\lambda \end{vmatrix} = 0 \)
\( (1-\lambda)(0-\lambda) - (2)(3) = 0 \)
\( \lambda^2 - \lambda - 6 = 0 \)
\( \lambda^2 - 3\lambda + 2\lambda - 6 = 0 \)
\( \lambda(\lambda - 3) + 2(\lambda - 3) = 0 \)
\( (\lambda + 2)(\lambda - 3) = 0 \)
\( \lambda = 3, -2 \)
For \( C = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} \):
\( \begin{vmatrix} -\lambda & 1 \\ -1 & -\lambda \end{vmatrix} = 0 \)
\( \lambda^2 + 1 = 0 \)
\( \lambda^2 = -1 \)
\( \lambda_{1,2} = \pm i \)
Now finding the eigenvectors of B:
\( (B - \lambda I)x = 0 \)
\( B - \lambda I = \begin{bmatrix} 1-\lambda & 2 \\ 3 & -\lambda \end{bmatrix} \)
For \( \lambda = 3 \):
\( B - 3I = \begin{bmatrix} -2 & 2 \\ 3 & -3 \end{bmatrix} \)
Reducing to row echelon form (\( R_2 + \frac{3}{2}R_1 \)):
\( B - 3I \sim \begin{bmatrix} -2 & 2 \\ 0 & 0 \end{bmatrix} \)
So the equation becomes:
\( \begin{bmatrix} -2 & 2 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \)
\( -2x_1 + 2x_2 = 0 \)
\( x_1 = x_2 \)
Let \( x_1 = 1 \); then \( x_2 = 1 \).
Therefore \( x = \begin{bmatrix} 1 \\ 1 \end{bmatrix} \) for \( \lambda = 3 \). Similarly, solve for \( \lambda = -2 \):
\( \begin{bmatrix} 3 & 2 \\ 3 & 2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \)
\( 3x_1 + 2x_2 = 0 \)
\( x_1 = -\frac{2}{3}x_2 \)
At \( x_2 = 3 \), \( x_1 = -2 \):
\( V_2 = \begin{bmatrix} -2 \\ 3 \end{bmatrix} \)
Now for \( A = \begin{bmatrix} 1 & 3 \\ 3 & 1 \end{bmatrix} \):
\( (A - \lambda I)x = 0 \) and \( |A - \lambda I| = 0 \)
\( \begin{vmatrix} 1-\lambda & 3 \\ 3 & 1-\lambda \end{vmatrix} = 0 \)
\( (1-\lambda)^2 - 9 = 0 \)
\( (1-\lambda)^2 = 9 \)
\( 1-\lambda = \pm 3 \)
\( \lambda_1 = -2, \quad \lambda_2 = 4 \)
So \( (A - \lambda I)x = 0 \):
For \( \lambda_1 = -2 \):
\( \begin{bmatrix} 1+2 & 3 \\ 3 & 1+2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \)
\( \begin{bmatrix} 3 & 3 \\ 3 & 3 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \Rightarrow \begin{bmatrix} 3 & 3 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \)
\( 3x_1 + 3x_2 = 0 \)
\( x_1 = -x_2 \)
Let \( x_1 = 1 \), \( x_2 = -1 \):
\( V_1 = \begin{bmatrix} 1 \\ -1 \end{bmatrix} \)
For \( \lambda_2 = 4 \):
\( \begin{bmatrix} -3 & 3 \\ 3 & -3 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \)
\( -3x_1 + 3x_2 = 0 \)
\( x_1 = x_2 \)
Let \( x_1 = 1 \), \( x_2 = 1 \):
\( V_2 = \begin{bmatrix} 1 \\ 1 \end{bmatrix} \)
Example: Find the eigenvalues and eigenvectors of
\( A = \begin{bmatrix} 2 & 2 & 0 \\ 2 & 1 & -1 \\ 7 & -2 & -3 \end{bmatrix} \)
\( |A - \lambda I| = 0 \)
\( \begin{vmatrix} 2-\lambda & 2 & 0 \\ 2 & 1-\lambda & -1 \\ 7 & -2 & -3-\lambda \end{vmatrix} = 0 \)
\( (2-\lambda)\left[ (1-\lambda)(-3-\lambda) - 2 \right] - 2\left[ 2(-3-\lambda) + 7 \right] + 0 = 0 \)
\( (2-\lambda)(\lambda^2 + 2\lambda - 5) - 2(1 - 2\lambda) = 0 \)
\( -\lambda^3 + 13\lambda - 12 = 0 \)
\( \lambda^3 - 13\lambda + 12 = 0 \)
At \( \lambda = 1 \):
\( 1^3 - 13(1) + 12 = 0 \)
\( 0 = 0 \)
so \( (\lambda - 1) \) is a factor:
\( (\lambda - 1)(\lambda^2 + \lambda - 12) = 0 \)
\( \lambda^2 + \lambda - 12 = 0 \)
\( \lambda^2 + 4\lambda - 3\lambda - 12 = 0 \)
\( (\lambda + 4)(\lambda - 3) = 0 \)
So \( \lambda = 1, 3, -4 \).
For \( \lambda = 1 \):
\( A - I = \begin{bmatrix} 1 & 2 & 0 \\ 2 & 0 & -1 \\ 7 & -2 & -4 \end{bmatrix} \)
\( R_2 - 2R_1, \; R_3 - 7R_1: \quad \begin{bmatrix} 1 & 2 & 0 \\ 0 & -4 & -1 \\ 0 & -16 & -4 \end{bmatrix} \)
\( R_3 - 4R_2: \quad \begin{bmatrix} 1 & 2 & 0 \\ 0 & -4 & -1 \\ 0 & 0 & 0 \end{bmatrix} \)
\( \begin{bmatrix} 1 & 2 & 0 \\ 0 & -4 & -1 \\ 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \)
\( x_1 + 2x_2 = 0 \), \( -4x_2 - x_3 = 0 \)
\( x_1 = -2x_2 \), \( x_3 = -4x_2 \)
At \( x_2 = 1 \): \( x_1 = -2 \), \( x_3 = -4 \)
\( v_1 = \begin{bmatrix} -2 \\ 1 \\ -4 \end{bmatrix} \)
For \( \lambda = 3 \):
\( A - 3I = \begin{bmatrix} -1 & 2 & 0 \\ 2 & -2 & -1 \\ 7 & -2 & -6 \end{bmatrix} \)
Reducing in the same way (\( R_2 + 2R_1 \), \( R_3 + 7R_1 \), then \( R_3 - 6R_2 \)) gives \( x_1 = 2x_2 \) and \( x_3 = 2x_2 \), i.e.
\( v_2 = \begin{bmatrix} 2 \\ 1 \\ 2 \end{bmatrix} \)
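A numerical check of the reconstructed matrix and its spectrum:

    import numpy as np

    A = np.array([[2.0, 2, 0], [2, 1, -1], [7, -2, -3]])
    w, V = np.linalg.eig(A)
    print(np.sort(w.real))                # [-4.  1.  3.]
    # verify v1 = [-2, 1, -4] for lambda = 1:
    v1 = np.array([-2.0, 1, -4])
    print(np.allclose(A @ v1, 1 * v1))    # True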
Theorem: The eigenvalues of a real symmetric matrix are real.
Proof: \( A^T = A \) (A is symmetric).
Let \( Ax = \lambda x \).
Take the complex conjugate of both sides (A is real, so \( \bar{A} = A \)):
\( A\bar{x} = \bar{\lambda}\bar{x} \)
Then take the transpose. (When taking the transpose of a product of matrices, we reverse the
order: \( (AB)^T = B^T A^T \).)
\( \bar{x}^T A^T = \bar{\lambda}\bar{x}^T \), i.e. \( \bar{x}^T A = \bar{\lambda}\bar{x}^T \)
Multiply on the right by x:
\( \bar{x}^T A x = \bar{\lambda}\,\bar{x}^T x \)
But also, from \( Ax = \lambda x \):
\( \bar{x}^T A x = \lambda\,\bar{x}^T x \)
Therefore:
\( \lambda\,\bar{x}^T x = \bar{\lambda}\,\bar{x}^T x \)
\( (\lambda - \bar{\lambda})\,\bar{x}^T x = 0 \)
Since x is non-zero, \( \bar{x} \) is also non-zero, and \( \bar{x}^T x \) is a non-zero (positive) scalar.
Then \( \lambda - \bar{\lambda} = 0 \),
which shows that \( \lambda \) contains only a real part.
(Side note: if \( x = 2 + 3i \), then \( \bar{x} = 2 - 3i \);
\( x + \bar{x} \) is real only,
\( x - \bar{x} \) is only imaginary.)
(c) Orthogonal matrices (\( P^{-1} = P^T \)) have eigenvalues of unit modulus.
\( Ax = \lambda x \)
Taking the conjugate transpose:
\( \bar{x}^T A^T = \bar{\lambda}\bar{x}^T \)
\( \bar{x}^T A^{-1} = \bar{\lambda}\bar{x}^T \)
Multiply on the right by \( Ax = \lambda x \):
\( \bar{x}^T A^{-1} A x = \bar{\lambda}\,\bar{x}^T A x \)
\( \bar{x}^T x = \lambda\bar{\lambda}\,\bar{x}^T x \)
\( (1 - \lambda\bar{\lambda})\,\bar{x}^T x = 0 \)
Since \( \bar{x}^T x \neq 0 \):
\( 1 - \lambda\bar{\lambda} = 0 \)
or \( |\lambda| = 1 \)
Theorem # 06: The eigenvectors corresponding to distinct eigenvalues of a real symmetric matrix are
orthogonal.
Theorem # 07: If \( x_1 \) and \( x_2 \) are eigenvectors corresponding to distinct eigenvalues of a real symmetric
matrix of order 3, then the cross product of \( x_1 \) and \( x_2 \) is the third eigenvector.
(Recall: a symmetric matrix satisfies \( A^T = A \), e.g. \( A = \begin{bmatrix} 3 & 2 \\ 2 & 1 \end{bmatrix} \);
a normal matrix satisfies \( AA^T = A^T A \).)
Theorem # 10: The eigenvectors corresponding to distinct eigenvalues of a Hermitian matrix are
orthogonal.
Theorem # 11: The eigenvectors corresponding to distinct eigenvalues of a normal matrix are
orthogonal.
Example # 01: If \( \lambda \) is an eigenvalue of A, show that \( \lambda \) is also an eigenvalue of \( A^T \).
Let \( Ax = \lambda x \),
leading to \( Ax - \lambda x = 0 \)
\( (A - \lambda I)x = 0 \)
which leads to \( |A - \lambda I| = 0 \).
Since the determinant of a matrix equals that of its transpose:
\( |A - \lambda I| = |(A - \lambda I)^T| = |A^T - \lambda I| = 0 \)
so \( \lambda \) is also an eigenvalue of \( A^T \).
Example # 02: If \( \lambda \) is an eigenvalue of A, find the eigenvalues of \( A + kI \).
Solution:
We know that
\( Ax = \lambda x \).
Adding \( kx \) to both sides:
\( Ax + kx = \lambda x + kx \)
\( (A + kI)x = (\lambda + k)x \)
which shows that \( \lambda + k \) is an eigenvalue of \( A + kI \).
Note: if you add a scalar quantity to the diagonal elements of A, the eigenvalues will be increased by the same
scalar.
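Both facts are one-liners to confirm numerically (k = 2 is an arbitrary choice here):

    import numpy as np

    A = np.array([[1.0, 4], [3, 2]])
    k = 2.0
    print(np.sort(np.linalg.eigvals(A).real))               # [-2.  5.]
    print(np.sort(np.linalg.eigvals(A.T).real))             # same eigenvalues
    print(np.sort(np.linalg.eigvals(A + k * np.eye(2)).real))  # shifted by k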
5.20 Linear Independence:
A finite set of n vectors \( V_1, V_2, V_3, \dots, V_n \) from the vector space V is linearly dependent if there exists a
set of n scalars \( \alpha_1, \alpha_2, \alpha_3, \dots, \alpha_n \), not all zero, such that
\( \alpha_1 V_1 + \alpha_2 V_2 + \dots + \alpha_n V_n = 0 \).
If such scalars don't exist, then the vectors are said to be linearly independent.
Example: Check whether the vectors \( \begin{bmatrix} 1 \\ 4 \end{bmatrix} \) and \( \begin{bmatrix} 3 \\ 2 \end{bmatrix} \) are linearly independent.
Solution:
Method-I: Put these vectors into a square matrix
\( A = \begin{bmatrix} 1 & 3 \\ 4 & 2 \end{bmatrix} \);
if \( |A| \neq 0 \), the vectors are independent.
Method-II: Reduce to row echelon form and find the rank; if A is rank deficient, then the vectors are dependent.
Method-III: Set
\( \alpha_1 V_1 + \alpha_2 V_2 = 0 \):
\( \alpha_1 + 3\alpha_2 = 0 \)
\( 4\alpha_1 + 2\alpha_2 = 0 \)
Substituting \( \alpha_1 = -3\alpha_2 \): \( -12\alpha_2 + 2\alpha_2 = -10\alpha_2 = 0 \)
\( \alpha_2 = 0 \)
And \( \alpha_1 = -3(0) = 0 \)
\( \alpha_1 = 0 \)
So, since all values of \( \alpha \) are zero, the vectors are independent.
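Method-I is one line in NumPy: a non-zero determinant (full rank) means the vectors are independent.

    import numpy as np

    A = np.column_stack([[1, 4], [3, 2]])   # vectors as columns
    print(np.linalg.det(A))                 # -10.0 != 0 -> independent
    print(np.linalg.matrix_rank(A))         # 2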
Example-2: Are
\( \begin{bmatrix} 1 \\ 4 \end{bmatrix}, \begin{bmatrix} 3 \\ 2 \end{bmatrix} \) and \( \begin{bmatrix} 2 \\ 1 \end{bmatrix} \) linearly independent?
Solution: Here the rank of the matrix formed with these vectors cannot be more than 2, and hence it will always be
rank deficient (three vectors, rank at most 2), so the vectors will be linearly dependent.
Therefore, if \( V_1, \dots, V_n \in \mathbb{R}^m \) with \( n > m \), they will be linearly dependent.
Example-3: Check whether the vectors \( [1, 4, 2, 3]^T \), \( [7, 10, -4, -1]^T \) and \( [2, -1, -5, -4]^T \) are linearly independent.
Method-I: Form the matrix A whose columns are the given vectors and reduce it to row echelon form:
\( A = \begin{bmatrix} 1 & 7 & 2 \\ 4 & 10 & -1 \\ 2 & -4 & -5 \\ 3 & -1 & -4 \end{bmatrix} \)
\( R_2 - 4R_1, \; R_3 - 2R_1, \; R_4 - 3R_1: \quad \begin{bmatrix} 1 & 7 & 2 \\ 0 & -18 & -9 \\ 0 & -18 & -9 \\ 0 & -22 & -10 \end{bmatrix} \)
\( R_3 - R_2, \; R_4/2: \quad \begin{bmatrix} 1 & 7 & 2 \\ 0 & -18 & -9 \\ 0 & 0 & 0 \\ 0 & -11 & -5 \end{bmatrix} \)
Swapping \( R_3 \leftrightarrow R_4 \) and applying \( R_3 - \frac{11}{18}R_2 \):
\( \begin{bmatrix} 1 & 7 & 2 \\ 0 & -18 & -9 \\ 0 & 0 & \frac{1}{2} \\ 0 & 0 & 0 \end{bmatrix} \)
\( r(A) = 3 = \) the number of vectors, so the vectors are linearly independent.
Method-II: Set \( \alpha_1 V_1 + \alpha_2 V_2 + \alpha_3 V_3 = 0 \):
\( \alpha_1 + 7\alpha_2 + 2\alpha_3 = 0 \)   (1)
\( 4\alpha_1 + 10\alpha_2 - \alpha_3 = 0 \)
\( 2\alpha_1 - 4\alpha_2 - 5\alpha_3 = 0 \)
\( 3\alpha_1 - \alpha_2 - 4\alpha_3 = 0 \)
Using the same coefficients as in the row-echelon reduction, these equations reduce to \( \alpha_3 = 0 \), then
\( \alpha_2 = 0 \), and from eq. (1) \( \alpha_1 = 0 \). Since all the \( \alpha \) are zero, the vectors are linearly independent.
Theorem-1: Suppose \( V = \{V_1, V_2, \dots, V_n\} \) is a set of vectors in \( \mathbb{R}^m \) (i.e., m elements in each vector). If
\( n > m \), then the set of vectors is linearly dependent.
Theorem-2: A finite set of vectors that contains the zero vector 0 will be linearly dependent.
Vector Spaces:
Let V be a set on which addition and scalar multiplication are defined. If the
following axioms are true for all objects u, v, w in V and all scalars \( \alpha \) and \( \beta \), then V is called a
Vector Space and the objects in V are called Vectors.
(h) \( (\alpha + \beta)u = \alpha u + \beta u \)
(i) \( \alpha(\beta u) = (\alpha\beta)u \)
(j) \( 1u = u \)
Example-1:
The set \( V = \mathbb{R}^2 \) with standard vector addition, and with scalar multiplication defined as
\( \alpha \begin{bmatrix} v_1 \\ v_2 \end{bmatrix} = \begin{bmatrix} \alpha v_1 \\ v_2 \end{bmatrix} \)
Now show if V is a vector space.
Solution:
Check closure under addition: here let \( u = \begin{bmatrix} u_1 \\ u_2 \end{bmatrix} \), \( v = \begin{bmatrix} v_1 \\ v_2 \end{bmatrix} \).
\( u + v = \begin{bmatrix} u_1 + v_1 \\ u_2 + v_2 \end{bmatrix} \) also belongs to \( \mathbb{R}^2 \).
It does obey this axiom.
Also check \( (\alpha + \beta)u = \alpha u + \beta u \):
\( (\alpha + \beta)u = \begin{bmatrix} (\alpha + \beta)u_1 \\ u_2 \end{bmatrix} \)
\( \alpha u + \beta u = \begin{bmatrix} \alpha u_1 \\ u_2 \end{bmatrix} + \begin{bmatrix} \beta u_1 \\ u_2 \end{bmatrix} = \begin{bmatrix} (\alpha + \beta)u_1 \\ 2u_2 \end{bmatrix} \)
Since \( u_2 \neq 2u_2 \) in general, it does not obey this axiom; so V is not a vector space.
Example-2:
The set \( V = \mathbb{R}^3 \) with standard vector addition, and with scalar multiplication defined as
\( \alpha \begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ \alpha v_3 \end{bmatrix} \)
(j) \( 1u = \begin{bmatrix} 0 \\ 0 \\ u_3 \end{bmatrix} \neq u \): it does not satisfy axiom (j), so V is not a vector space.
5.21 Subspaces: Suppose that V is a vector space and W is a subset of V. If, under the vector addition and
scalar multiplication defined on V, W is also a vector space, then W is a subspace of V.
(1) The zero vector 0 belongs to W, i.e., 0 is an element of W.
Here also, for any \( u = \begin{bmatrix} u_1 \\ u_2 \\ u_3 \end{bmatrix} \) in W, the additive inverse \( -u = \begin{bmatrix} -u_1 \\ -u_2 \\ -u_3 \end{bmatrix} \)
and the additive identity \( \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \) (with \( u + (-u) = 0 \)) also belong to W;
thus it obeys the axioms.
Example 1: Determine if the given set is a subspace of the given vector space.
Example 2:
\( S = \left\{ \begin{bmatrix} a \\ 0 \\ b \end{bmatrix} : a, b \text{ are real} \right\} \). Is S a subspace of \( \mathbb{R}^3 \)? (Ans: Yes)
Example 3:
\( S = \left\{ \begin{bmatrix} x_1 \\ 0 \\ x_3 \\ 5x_1 \end{bmatrix} : x_1, x_3 \in \mathbb{R} \right\} \). Is S a subspace of \( \mathbb{R}^4 \)? (Ans: Yes)
Example 4:
\( S = \left\{ \begin{bmatrix} a + 2b \\ 2a - 3b \end{bmatrix} : a, b \text{ are real} \right\} \). Is S a subspace of \( \mathbb{R}^2 \)? (Ans: Yes)
Example 5:
\( S = \left\{ \begin{bmatrix} x \\ x + 1 \end{bmatrix} : x \in \mathbb{R} \right\} \). Is S a subspace of \( \mathbb{R}^2 \)? (Ans: No)
Sol Ex # 5:
\( \begin{bmatrix} x \\ x+1 \end{bmatrix} + \begin{bmatrix} y \\ y+1 \end{bmatrix} = \begin{bmatrix} x+y \\ x+y+2 \end{bmatrix} \), but \( \begin{bmatrix} x+y \\ x+y+2 \end{bmatrix} \notin S \),
so S is not closed under addition.
Sol Ex # 3:
\( \begin{bmatrix} x_1 \\ 0 \\ x_3 \\ 5x_1 \end{bmatrix} + \begin{bmatrix} y_1 \\ 0 \\ y_3 \\ 5y_1 \end{bmatrix} = \begin{bmatrix} x_1+y_1 \\ 0 \\ x_3+y_3 \\ 5(x_1+y_1) \end{bmatrix} \in S \): satisfied.
\( \alpha \begin{bmatrix} x_1 \\ 0 \\ x_3 \\ 5x_1 \end{bmatrix} = \begin{bmatrix} \alpha x_1 \\ 0 \\ \alpha x_3 \\ 5(\alpha x_1) \end{bmatrix} \in S \): satisfied.
Sol Ex # 4:
\( \begin{bmatrix} a+2b \\ 2a-3b \end{bmatrix} + \begin{bmatrix} c+2d \\ 2c-3d \end{bmatrix} = \begin{bmatrix} (a+c) + 2(b+d) \\ 2(a+c) - 3(b+d) \end{bmatrix} \in S \)
\( \alpha \begin{bmatrix} a+2b \\ 2a-3b \end{bmatrix} = \begin{bmatrix} (\alpha a) + 2(\alpha b) \\ 2(\alpha a) - 3(\alpha b) \end{bmatrix} \in S \)
5.22 SPAN:
Def 1:
We say that the vector W from a vector space V is a linear combination of the vectors \( V_1, V_2, \dots, V_n \), all from
V, if there exist scalars \( \alpha_1, \alpha_2, \dots, \alpha_n \) such that
\( W = \alpha_1 V_1 + \alpha_2 V_2 + \dots + \alpha_n V_n \).
Def 2:
Given a vector space V, the span of a set S (not necessarily finite) is defined as the intersection W of
all subspaces of V that contain S. W is referred to as the subspace of V spanned by S. S is called the spanning
set of W, and we say that S spans W. If S is a finite subset of V, then the span of S is the set of all linear
combinations of the elements in S.
Def 2b:
Let \( S = \{V_1, V_2, \dots, V_n\} \) be a set of vectors from the vector space V, and let W be the set of all linear
combinations of the vectors \( V_1, V_2, \dots, V_n \). The set W is the span of the vectors \( V_1, V_2, \dots, V_n \), and it is
denoted by
\( W = \operatorname{span}(S) \) or \( W = \operatorname{span}\{V_1, V_2, V_3, \dots, V_n\} \)
\( W = \operatorname{span}(S) = \left\{ \sum_{i=1}^{n} a_i V_i : n \in \mathbb{N},\; V_i \in S,\; a_i \in \mathbb{R} \right\} \)
Def 2c:
Let V be a vector space and let \( S = \{V_1, V_2, V_3, \dots, V_n\} \) be a subset of V. We say S spans W, a subset of V, if
every vector in W can be written as a linear combination of the vectors in S.
Example:
(a) Show that \( S = \left\{ \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix}, \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}, \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} \right\} \) spans \( \mathbb{R}^3 \).
(b) Write the vector \( \begin{bmatrix} 2 \\ 4 \\ 8 \end{bmatrix} \) as a linear combination of the vectors in S.
Solution: Any vector b in \( \mathbb{R}^3 \) has the form \( \begin{bmatrix} x \\ y \\ z \end{bmatrix} \). Hence, we need to show that every vector b can be written
as
\( b = \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \alpha_1 \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix} + \alpha_2 \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix} + \alpha_3 \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} \)
\( 0\alpha_1 + \alpha_2 + \alpha_3 = x \)
\( \alpha_1 + 0\alpha_2 + \alpha_3 = y \)
\( \alpha_1 + \alpha_2 + 0\alpha_3 = z \)
i.e. \( A\alpha = b \) with
\( A = \begin{bmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{bmatrix} \)
\( |A| = 0(0 - 1) - 1(0 - 1) + 1(1 - 0) = 2 \)
Since \( |A| = 2 \neq 0 \), A is non-singular and the system of equations has a unique solution for every b, which means
that the scalars \( \alpha_1, \alpha_2, \alpha_3 \) always exist and b is a linear combination of the vectors of S.
Thus, S spans \( \mathbb{R}^3 \).
(b) Here:
\( b = \begin{bmatrix} 2 \\ 4 \\ 8 \end{bmatrix} \)
\( \begin{bmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{bmatrix} \begin{bmatrix} \alpha_1 \\ \alpha_2 \\ \alpha_3 \end{bmatrix} = \begin{bmatrix} 2 \\ 4 \\ 8 \end{bmatrix} \)
Solving \( A\alpha = b \) (e.g., by LU factorization of A) gives the answer:
\( \alpha_1 = 5, \quad \alpha_2 = 3, \quad \alpha_3 = -1 \)
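The coefficients can be found directly with np.linalg.solve:

    import numpy as np

    A = np.array([[0.0, 1, 1], [1, 0, 1], [1, 1, 0]])
    b = np.array([2.0, 4, 8])
    alpha = np.linalg.solve(A, b)
    print(alpha)        # [ 5.  3. -1.]
    print(A @ alpha)    # [2. 4. 8.] -> b recovered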
Example:
Does the set \( \left\{ \begin{bmatrix} 2 \\ 1 \\ 1 \end{bmatrix}, \begin{bmatrix} 3 \\ 0 \\ 1 \end{bmatrix}, \begin{bmatrix} 7 \\ 2 \\ 3 \end{bmatrix}, \begin{bmatrix} 4 \\ -1 \\ 1 \end{bmatrix} \right\} \) span \( \mathbb{R}^3 \)?
For an arbitrary vector \( b = \begin{bmatrix} a \\ b \\ c \end{bmatrix} \), form the augmented matrix:
\( [A, b] = \begin{bmatrix} 2 & 3 & 7 & 4 & a \\ 1 & 0 & 2 & -1 & b \\ 1 & 1 & 3 & 1 & c \end{bmatrix} \)
\( R_2 - \frac{1}{2}R_1, \; R_3 - \frac{1}{2}R_1: \)
\( \begin{bmatrix} 2 & 3 & 7 & 4 & a \\ 0 & -\frac{3}{2} & -\frac{3}{2} & -3 & b - \frac{a}{2} \\ 0 & -\frac{1}{2} & -\frac{1}{2} & -1 & c - \frac{a}{2} \end{bmatrix} \)
\( R_3 - \frac{1}{3}R_2: \)
\( \begin{bmatrix} 2 & 3 & 7 & 4 & a \\ 0 & -\frac{3}{2} & -\frac{3}{2} & -3 & b - \frac{a}{2} \\ 0 & 0 & 0 & 0 & c - \frac{a}{3} - \frac{b}{3} \end{bmatrix} \)
Since the rank of A (= 2) is less than n = 3, the set S does not span \( \mathbb{R}^3 \). The subspace W spanned by S is
obtained by setting
\( c - \frac{a}{3} - \frac{b}{3} = 0 \)   (1)
i.e., only vectors satisfying eq. (1) belong to W.
For example, with a = 3 and b = 9:
\( c - \frac{3}{3} - \frac{9}{3} = 0 \)
\( c - 3 - 1 = 0 \)
\( c = 4 \)
so \( w = \begin{bmatrix} 3 \\ 9 \\ 4 \end{bmatrix} \) belongs to W.
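The rank test, and the membership of w in the span, in NumPy (appending w does not raise the rank, so w lies in the column space):

    import numpy as np

    A = np.array([[2, 3, 7, 4],
                  [1, 0, 2, -1],
                  [1, 1, 3, 1]], dtype=float)
    print(np.linalg.matrix_rank(A))     # 2 < 3 -> S does not span R^3

    w = np.array([3.0, 9, 4])
    Aw = np.column_stack([A, w])
    print(np.linalg.matrix_rank(Aw))    # still 2 -> w belongs to span(S)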
Remark: The subspace {0} has no basis. This is because every vector in a linearly independent family must
be non-zero.
Notes:
1) A basis of a set W is the largest collection of linearly independent vectors in W.
2) A basis of W is the smallest collection of vectors spanning W.
3) All bases of W have the same number of vectors.
e.g. \( \left\{ \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix}, \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}, \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 2 \\ 4 \\ 3 \end{bmatrix} \right\} \)
spans \( \mathbb{R}^3 \) but does not form a basis of \( \mathbb{R}^3 \), because at least one vector is linearly dependent on the others.
However, \( \left\{ \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix}, \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}, \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} \right\} \) spans \( \mathbb{R}^3 \)
and also forms a basis for \( \mathbb{R}^3 \).
Example 1: Do \( \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \begin{bmatrix} 0 \\ 2 \end{bmatrix} \) form a basis of \( \mathbb{R}^2 \)?
Example 2: Do \( \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix} \) form a basis of \( \mathbb{R}^3 \)?
Example 3: Do \( \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \begin{bmatrix} 1 \\ -1 \end{bmatrix} \) form a basis of \( \mathbb{R}^2 \)?
Solution 1:
Here the number of vectors is more than the number of rows, which means a linearly dependent set of vectors; therefore, they do not
form a basis.
Solution 2:
Here m < n (two vectors in \( \mathbb{R}^3 \)); therefore, even though they are linearly independent, they will not form a basis, as the number of
vectors is less than the number of rows.
Solution 3:
\( A = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}, \quad |A| = -2 \neq 0 \)
Here A is non-singular and the number of vectors equals the dimension, so the vectors form a basis.
5.24 Matrix Subspaces:
Row Space of a Matrix: Let A be an \( m \times n \) matrix. The space spanned by the rows of A is called the row
space of A, denoted by R(A). It is a subspace of \( \mathbb{R}^n \).
(1) The collection \( R_s = \{r_1, r_2, \dots, r_m\} \) consisting of the rows of A may not form a basis for the row
space of A, R(A), because the collection may not be linearly independent.
(2) However, a maximal linearly independent subset of \( R_s = \{r_1, r_2, \dots, r_m\} \) will form a basis of the
row space of A.
(3) Since the maximum number of linearly independent rows in A is equal to the rank of A:
\( \dim R(A) = r(A) = \operatorname{rank} A \)
Column Space of a Matrix: the space spanned by the columns \( C_s = \{C_1, C_2, C_3, \dots, C_n\} \) of A, denoted C(A).
(1) A maximal linearly independent subset of \( C_s = \{C_1, C_2, C_3, \dots, C_n\} \) forms a basis of the
column space of A, C(A).
(2) The maximum number of linearly independent columns in A is equal to the rank of A.
Null Space:
Given \( Ax = b \)   (1)
the null space of an \( m \times n \) matrix A, denoted N(A) or Null A, is the set of all solutions of the corresponding homogeneous
equation \( Ax = 0 \):
\( \operatorname{Null} A = \{ x : x \in \mathbb{R}^n \text{ and } Ax = 0 \} \)
(1) The null space of A is a subspace of \( \mathbb{R}^n \).
(2) Null A can never be equal to the empty set:
\( \operatorname{Null} A \neq \varnothing \)
since it must contain the zero vector 0 (the trivial solution).
(3) The kind of elements Null A contains (which vector space they belong to) depends on the
number of columns in A.
(4) To prove that N(A) is a subspace of \( \mathbb{R}^n \), show that it is closed under both vector addition and scalar
multiplication.
Example 1:
\( A = \begin{bmatrix} 2 & 6 & 3 \\ 1 & 0 & 1 \\ 0 & 2 & 1 \\ 1 & 1 & 9 \end{bmatrix} \)
(a) Find a basis for the row space of A.
(b) Find a basis for the column space of A.
(c) Find the null space of A.
(a) Reducing A to row echelon form (with the pivots scaled to 1) gives:
\( \begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix} \)
Basis of R(A) \( = \left\{ \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix}, \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} \right\} \)
(b) Take the transpose of A and then find \( r(A^T) \) using the row echelon form.
(c) \( Ax = 0 \):
\( \begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \) (since the last row is zero, there is no need to keep it)
\( x_3 = 0, \; x_2 = 0, \; x_1 = 0 \)
\( \operatorname{Null} A = \left\{ \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \right\} \)
Example:
Find Null(A) for
\( A = \begin{bmatrix} 1 & 1 & -2 & 4 \\ 2 & 0 & 1 & 7 \end{bmatrix} \)
Reducing to row echelon form (\( R_2 - 2R_1 \)):
\( \begin{bmatrix} 1 & 1 & -2 & 4 \\ 0 & -2 & 5 & -1 \end{bmatrix} \)
\( Ax = 0 \):
\( \begin{bmatrix} 1 & 1 & -2 & 4 \\ 2 & 0 & 1 & 7 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \)
\( x_1 + x_2 - 2x_3 + 4x_4 = 0 \)
\( 2x_1 + 0x_2 + x_3 + 7x_4 = 0 \)
Two equations in four unknowns: two variables are free. Let \( x_2 = a \) and \( x_3 = b \):
From the first equation: \( x_1 = -a + 2b - 4x_4 \).
Substituting into the second: \( -2a + 4b - 8x_4 + b + 7x_4 = 0 \)
\( x_4 = -2a + 5b \)
\( x_1 = -a + 2b - 4(-2a + 5b) = 7a - 18b \)
\( N(A) = \left\{ a \begin{bmatrix} 7 \\ 1 \\ 0 \\ -2 \end{bmatrix} + b \begin{bmatrix} -18 \\ 0 \\ 1 \\ 5 \end{bmatrix} : a, b \in \mathbb{R} \right\} \)
50