
Linear Algebra

Dr. Abdullah Alazemi


Department of Mathematics
Kuwait University

February 7, 2023
Contents

1 System of Linear Equations and Matrices ... 1
  1.1 Introduction to Systems of Linear Equations ... 1
  1.2 Gaussian Elimination ... 4
  1.3 Matrices and Matrix Operations ... 11
  1.4 Inverses; Algebraic Properties of Matrices ... 21
  1.5 A Method for Finding A^{-1} ... 30
  1.6 More on Linear Systems and Invertible Matrices ... 34
  1.7 Diagonal, Triangular, and Symmetric Matrices ... 41
2 Determinants ... 49
  2.1 Determinants by Cofactor Expansion ... 49
  2.2 Evaluating Determinants by Row Reduction ... 55
  2.3 Properties of Determinants ... 60
3 Euclidean Vector Spaces ... 69
  3.1 Vectors in R^n ... 69
  3.2 Norm, Dot Product, and Distance in R^n ... 72
4 General Vector Spaces ... 81
  4.1 Real Vector Spaces ... 81
  4.2 Subspaces ... 86
  4.3 Linear Independence ... 93
  4.4 Coordinates and Basis ... 98
  4.5 Dimension ... 102
  4.7 Row Space, Column Space, and Null Space ... 107
  4.8 Rank, Nullity and the Fundamental Matrix Spaces ... 115
5 Eigenvalues and Eigenvectors ... 119
  5.1 Eigenvalues and Eigenvectors ... 119
  5.2 Diagonalization ... 125
6 Applications of Vectors in R^2 and R^3 ... 133
  6.1 Orthogonality ... 133
  6.2 Cross Product ... 137
  6.3 Lines and Planes in R^3 ... 143


Chapter 1. System of Linear Equations and Matrices

Section 1.1: Introduction to Systems of Linear Equations

Remark 1.1.1

A general linear system of m equations in the n unknowns x1, x2, ..., xn has the form

$$\begin{aligned}
a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + \cdots + a_{1n}x_n &= b_1\\
a_{21}x_1 + a_{22}x_2 + a_{23}x_3 + \cdots + a_{2n}x_n &= b_2\\
&\;\;\vdots\\
a_{m1}x_1 + a_{m2}x_2 + a_{m3}x_3 + \cdots + a_{mn}x_n &= b_m
\end{aligned} \tag{1.1.1}$$

A system in the form (1.1.1) is called a homogeneous system if b1 = b2 = ... = bm = 0; otherwise it is called a non-homogeneous system.
A solution, if any, of a linear system in n unknowns x1, x2, ..., xn is a sequence of n numbers s1, s2, ..., sn such that setting x1 = s1, x2 = s2, ..., xn = sn satisfies all of the above equations.
A system may have a unique solution, no solutions, or infinitely many solutions. In general, we say that a system is consistent if it has at least one solution, and inconsistent if it has no solutions.

Example 1.1.1

Solve the following linear system:

$$x + y = 1, \qquad 2x - y = 5$$

Solution:

Adding the two (non-homogeneous) equations gives 3x = 6. Thus x = 2, and hence y = -1. Therefore, the system has a unique solution: x = 2 and y = -1.


Example 1.1.2

Solve the following linear system:

$$x + y = 1, \qquad 2x + 2y = 2$$

Solution:

We can eliminate x from the second equation by adding -2 times the first equation to the second. This gives

x + y = 1
    0 = 0

so we may simply omit the second equation. The solutions are therefore of the form x = 1 - y. Setting a parameter t ∈ R for y, we get solutions of the form x = 1 - t and y = t. Therefore, the system has infinitely many solutions.

Example 1.1.3

Solve the following linear system:

$$\begin{aligned}
x_1 + x_2 - x_3 &= 1\\
x_2 - 2x_3 &= 0\\
2x_1 + 2x_2 - 2x_3 &= 5
\end{aligned}$$

Solution:

Adding the third equation to −2 times the first equation, we get 0 = 3 which is impossible.
Therefore, this system has no solutions.

Definition 1.1.1 Augmented Matrix

Given a linear system of m equations in n unknowns as in (1.1.1), we transform this system into the following matrix form:

$$\begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n} & b_1\\
a_{21} & a_{22} & \cdots & a_{2n} & b_2\\
\vdots & \vdots & & \vdots & \vdots\\
a_{m1} & a_{m2} & \cdots & a_{mn} & b_m
\end{bmatrix} \tag{1.1.2}$$

This form is called the augmented matrix. Note that each row of the augmented matrix corresponds to an equation in the associated system.

Example 1.1.4

Here is an example of transforming a system of equations into its augmented matrix form:

$$\begin{aligned}
x_1 + x_2 - x_3 &= 1\\
x_2 - 2x_3 &= 0\\
2x_1 + 2x_2 - 2x_3 &= 5
\end{aligned}
\quad\Rightarrow\quad
\begin{bmatrix}
1 & 1 & -1 & 1\\
0 & 1 & -2 & 0\\
2 & 2 & -2 & 5
\end{bmatrix}$$

The basic method for solving a linear system is to perform algebraic operations on the system
that do not change the solution set so that it produces a simpler version of the same system.

In matrix form, these algebraic operations are called elementary row operations:

Remark 1.1.2

Given an augmented matrix of some linear system, we define the following elementary row
operations:
1. interchanging two rows,
2. multiplying a row by a non-zero scalar,
3. adding a multiple of a row to another row.
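For readers who like to experiment, the three elementary row operations can be sketched in code. The following is an illustrative sketch of mine, not part of the notes; the function names are made up, and `fractions.Fraction` keeps the arithmetic exact. Applied to the augmented matrix of Example 1.1.1, the operations reproduce its solution.

```python
from fractions import Fraction

def swap_rows(M, i, j):
    """Operation 1: interchange row i and row j."""
    M[i], M[j] = M[j], M[i]

def scale_row(M, i, c):
    """Operation 2: multiply row i by a non-zero scalar c."""
    assert c != 0, "the scalar must be non-zero"
    M[i] = [c * x for x in M[i]]

def add_multiple(M, i, j, c):
    """Operation 3: add c times row j to row i."""
    M[i] = [x + c * y for x, y in zip(M[i], M[j])]

# Augmented matrix of Example 1.1.1:  x + y = 1,  2x - y = 5.
M = [[Fraction(1), Fraction(1), Fraction(1)],
     [Fraction(2), Fraction(-1), Fraction(5)]]
add_multiple(M, 1, 0, -2)           # r2 <- r2 - 2 r1   gives [0, -3, 3]
scale_row(M, 1, Fraction(-1, 3))    # r2 <- -(1/3) r2   gives [0, 1, -1]
add_multiple(M, 0, 1, -1)           # r1 <- r1 - r2     gives [1, 0, 2]
# M now encodes x = 2, y = -1, the unique solution found in Example 1.1.1.
```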

Section 1.2: Gaussian Elimination

In this section, we solve systems of linear equations by applying the elementary row operations to the augmented matrix of the system. Consider the augmented matrix

$$\begin{bmatrix}
1 & 0 & 0 & 1\\
0 & 1 & 0 & 2\\
0 & 0 & 1 & 3
\end{bmatrix}$$

which describes the solution x = 1, y = 2 and z = 3 of some system. This matrix is an example of a matrix in reduced row echelon form, abbreviated r.r.e.f.

Definition 1.2.1 Reduced Row Echelon Form

A matrix A is said to be in reduced row echelon form (r.r.e.f. for short) if it satisfies the following conditions:

1. any rows of zeros are at the bottom of A,
2. the leading (first nonzero) entry of each non-zero row is 1,
3. the leading entry of row i + 1 lies to the right of the leading entry of row i,
4. in every column that contains a leading 1, all other entries are zero.

A matrix is said to be in row echelon form (r.e.f.) if it satisfies the first three conditions (the fourth may fail).

Example 1.2.1

The following matrices are in reduced row echelon form.

$$\begin{bmatrix}
1 & 0 & 0 & 3\\
0 & 1 & 0 & 5\\
0 & 0 & 1 & -2
\end{bmatrix},\quad
\begin{bmatrix}
1 & 0 & 0\\
0 & 1 & 0\\
0 & 0 & 1
\end{bmatrix},\quad
\begin{bmatrix}
0 & 1 & -2 & 0 & 7 & 0\\
0 & 0 & 0 & 1 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 1\\
0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix},\quad
\begin{bmatrix}
0 & 0\\
0 & 0
\end{bmatrix}.$$

The following matrices are in row echelon form, not satisfying the fourth condition.

$$\begin{bmatrix}
1 & 4 & 2 & 3\\
0 & 1 & -1 & 5\\
0 & 0 & 1 & -2
\end{bmatrix},\quad
\begin{bmatrix}
1 & 7 & 0\\
0 & 1 & 5\\
0 & 0 & 1
\end{bmatrix}.$$

Theorem 1.2.1

Every non-zero m × n matrix is row equivalent to a unique matrix in the r.r.e.f.



In the following three examples, we consider different augmented matrices for three systems of
linear equations.

Example 1.2.2 Linear System with No Solution!!

Solve the linear system in the three unknowns x, y and z which is given by the following
augmented matrix in the reduced row echelon form.
 
$$\begin{bmatrix}
1 & 0 & 0 & 1\\
0 & 1 & 2 & 0\\
0 & 0 & 0 & 3
\end{bmatrix}.$$

Solution:

The last row of the augmented matrix corresponds to the equation 0x + 0y + 0z = 3. This
equation is not satisfied for any values of x, y and z. Hence, the system has no solution. That
is, this system is inconsistent.

Example 1.2.3 Linear System with a Unique Solution

Solve the linear system in the three unknowns x, y and z which is given by the following
augmented matrix in the reduced row echelon form.
 
$$\begin{bmatrix}
1 & 0 & 0 & 1\\
0 & 1 & 0 & 1\\
0 & 0 & 1 & 2
\end{bmatrix}.$$

Solution:

The resulting reduced system is x = 1, y = 1, and z = 2. The system has a unique solution, and hence this system is consistent.

Example 1.2.4 Linear System with Infinite Many Solutions

Solve the linear system in the three unknowns x, y and z which is given by the following
augmented matrix in the reduced row echelon form.
 
$$\begin{bmatrix}
1 & 0 & 1 & 1\\
0 & 1 & -1 & 1\\
0 & 0 & 0 & 0
\end{bmatrix}.$$

Solution:

The last row of the augmented matrix corresponds to the equation

0x + 0y + 0z = 0.

This equation imposes no restrictions on x, y and z, and thus can be omitted. Hence the resulting reduced system is

x + z = 1
y - z = 1

In this system, x and y are called the leading variables, since they correspond to the leading 1's in the augmented matrix. The remaining variables (here only z) are called free variables.
Solving for the leading variables in terms of the free variables, we get

x = 1 - z
y = 1 + z

The free variable z can be assigned an arbitrary parameter value t, which then determines the values of x and y. Thus, the solution set can be expressed by the parametric equations

x = 1 - t,  y = 1 + t,  and  z = t.

These equations produce infinitely many solutions of the linear system; in particular, this system is consistent.

Remark 1.2.1 General Solution

If a linear system has infinitely many solutions, then a set of parametric equations expressing
all solutions to the system is called a general solution of the system.

Remark 1.2.2 Gauss-Jordan Elimination

This method is used to solve systems of linear equations by applying the following steps:

1. Find the r.r.e.f. of the augmented matrix by applying elementary row operations,
2. Solve the reduced system.

Note that the solution of the reduced system is the solution of the original one.

Example 1.2.5

Solve the following system using the Gauss-Jordan method.


$$\begin{aligned}
x + y &= 2\\
2x + y + z &= 3\\
3x + 3z &= 3
\end{aligned}$$

Solution:

We first write the system in its augmented matrix form and then apply elementary row operations to reach the reduced row echelon form:

$$\begin{bmatrix}
1 & 1 & 0 & 2\\
2 & 1 & 1 & 3\\
3 & 0 & 3 & 3
\end{bmatrix}
\xrightarrow{\,r_2 - 2r_1,\ r_3 - 3r_1\,}
\begin{bmatrix}
1 & 1 & 0 & 2\\
0 & -1 & 1 & -1\\
0 & -3 & 3 & -3
\end{bmatrix}
\xrightarrow{\,-r_2,\ \frac{1}{3}r_3\,}
\begin{bmatrix}
1 & 1 & 0 & 2\\
0 & 1 & -1 & 1\\
0 & -1 & 1 & -1
\end{bmatrix}
\xrightarrow{\,r_1 - r_2,\ r_3 + r_2\,}
\begin{bmatrix}
1 & 0 & 1 & 1\\
0 & 1 & -1 & 1\\
0 & 0 & 0 & 0
\end{bmatrix}$$

Therefore, the reduced system is

x + z = 1, so x = 1 - z
y - z = 1, so y = 1 + z

The free variable here is z, and hence we have the following general solution:

x = 1 - t,  y = 1 + t,  and  z = t.
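The Gauss-Jordan steps of this example can be automated. The sketch below is mine, not the notes' (though it performs exactly the three elementary row operations of Remark 1.1.2), and uses exact rational arithmetic; applied to the augmented matrix of this example, it reproduces the r.r.e.f. computed by hand.

```python
from fractions import Fraction

def rref(M):
    """Return the reduced row echelon form of M (a list of rows)."""
    M = [[Fraction(x) for x in row] for row in M]
    nrows, ncols = len(M), len(M[0])
    r = 0
    for c in range(ncols):
        # find a pivot (non-zero entry) in column c at or below row r
        piv = next((i for i in range(r, nrows) if M[i][c] != 0), None)
        if piv is None:
            continue                                # no pivot in this column
        M[r], M[piv] = M[piv], M[r]                 # interchange rows
        M[r] = [x / M[r][c] for x in M[r]]          # scale to get a leading 1
        for i in range(nrows):                      # clear the rest of column c
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        r += 1
        if r == nrows:
            break
    return M

# Augmented matrix of Example 1.2.5.
R = rref([[1, 1, 0, 2],
          [2, 1, 1, 3],
          [3, 0, 3, 3]])
# R is [[1, 0, 1, 1], [0, 1, -1, 1], [0, 0, 0, 0]], as computed by hand.
```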

Remark 1.2.3

A linear system is called homogeneous if the constant terms in the last column of the augmented matrix of the system are all zero:

$$\begin{aligned}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= 0\\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &= 0\\
&\;\;\vdots\\
a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n &= 0
\end{aligned}$$

Every homogeneous linear system is consistent because all such systems have

x_1 = 0, x_2 = 0, ..., x_n = 0

as a solution. This solution is called the trivial solution. If there are other solutions, they are called nontrivial solutions.
There are two possibilities for the solutions of a homogeneous linear system:

• The system has only the trivial solution.
• The system has infinitely many solutions, in addition to the trivial solution.

Example 1.2.6

Solve the following system using the Gauss-Jordan method.


$$\begin{aligned}
x + y + w &= 0\\
2x + y + z + 2w &= 0\\
3x + 3z + 3w &= 0
\end{aligned}$$

Solution:

Clearly

$$\begin{bmatrix}
1 & 1 & 0 & 1 & 0\\
2 & 1 & 1 & 2 & 0\\
3 & 0 & 3 & 3 & 0
\end{bmatrix}
\Rightarrow \cdots \Rightarrow
\begin{bmatrix}
1 & 0 & 1 & 1 & 0\\
0 & 1 & -1 & 0 & 0\\
0 & 0 & 0 & 0 & 0
\end{bmatrix}$$

Therefore, the reduced system is

x + z + w = 0, so x = -z - w
y - z = 0, so y = z

Setting z = t and w = s, the system has the nontrivial solutions x = -t - s, y = t, z = t, and w = s. Note that the trivial solution results when t = s = 0.

Theorem 1.2.2 Solutions of a Homogeneous System

• If a homogeneous linear system has n unknowns and the reduced row echelon form of its augmented matrix has k nonzero rows, then the system has n - k free variables.
• A homogeneous linear system has infinitely many solutions whenever the number of unknowns is greater than the number of equations.
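As an illustrative check of the first bullet (my sketch, reusing the r.r.e.f. already computed in Example 1.2.6):

```python
# r.r.e.f. of the augmented matrix from Example 1.2.6 (n = 4 unknowns: x, y, z, w).
R = [[1, 0, 1, 1, 0],
     [0, 1, -1, 0, 0],
     [0, 0, 0, 0, 0]]
n = len(R[0]) - 1                       # unknowns = columns minus the constants column
k = sum(1 for row in R if any(row))     # k = number of nonzero rows of the r.r.e.f.
free_variables = n - k                  # 4 - 2 = 2 free variables (z and w)
```

This agrees with the example, where z and w were the two free variables parameterized by t and s.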

Example 1.2.7

Suppose that the following matrices are augmented matrices for linear systems in the unknowns x, y, z and w. These matrices are all in row echelon form. Discuss the existence and uniqueness of solutions of the corresponding systems.

$$\text{(a)}\ \begin{bmatrix}
1 & 1 & 0 & 0 & 2\\
0 & 1 & 2 & 0 & 5\\
0 & 0 & 1 & 2 & 8\\
0 & 0 & 0 & 0 & 3
\end{bmatrix},\quad
\text{(b)}\ \begin{bmatrix}
1 & 1 & 0 & 0 & 2\\
0 & 1 & 2 & 0 & 5\\
0 & 0 & 1 & 2 & 8\\
0 & 0 & 0 & 1 & 3
\end{bmatrix},\quad
\text{(c)}\ \begin{bmatrix}
1 & 1 & 0 & 0 & 2\\
0 & 1 & 2 & 0 & 5\\
0 & 0 & 1 & 2 & 8\\
0 & 0 & 0 & 0 & 0
\end{bmatrix}.$$
Exercise.

Exercise 1.2.1

Find the reduced row echelon form (r.r.e.f.) of the following matrix:
 
$$\begin{bmatrix}
2 & 4 & 6 & 0\\
1 & 2 & 4 & 0\\
1 & 3 & 3 & 1
\end{bmatrix}$$

Exercise 1.2.2

Solve the system:

$$\begin{aligned}
x + y + z &= 0\\
x + 2y + 3z &= 0\\
x + 3y + 4z &= 0\\
x + 4y + 5z &= 0
\end{aligned}$$

Exercise 1.2.3

Solve the system:

$$\begin{aligned}
x_1 + 2x_2 - 3x_3 &= 6\\
2x_1 - x_2 + 4x_3 &= 1\\
x_1 - x_2 + x_3 &= 3
\end{aligned}$$

Exercise 1.2.4

Solve the following system using the Gauss-Jordan method.

$$\begin{aligned}
x_1 + x_2 - x_3 + 4x_4 &= 1\\
x_2 - 3x_3 + 4x_4 &= 0\\
2x_1 + 2x_2 - 2x_3 + 8x_4 &= 2
\end{aligned}$$

Exercise 1.2.5

Solve the following system using the Gauss-Jordan method.

$$\begin{aligned}
x_1 + x_2 - x_3 + 4x_4 &= 1\\
x_2 - 3x_3 + 4x_4 &= 0\\
2x_1 + 2x_2 - 2x_3 + 8x_4 &= 3
\end{aligned}$$

Exercise 1.2.6

Solve the following system using the Gauss-Jordan method.

$$\begin{aligned}
x + 2y + 3z &= 0\\
x + 3y + 2z &= 0\\
2x + y - 2z &= 0
\end{aligned}$$

Exercise 1.2.7

Solve the following system using the Gauss-Jordan method.

$$\begin{aligned}
x_1 + x_2 - x_3 + 4x_4 &= 0\\
x_2 - 3x_3 + 4x_4 &= 0\\
2x_1 + 2x_2 - 2x_3 + 8x_4 &= 0
\end{aligned}$$

Section 1.3: Matrices and Matrix Operations

Definition 1.3.1

An m × n matrix A is a rectangular array of m · n real numbers arranged in m horizontal rows and n vertical columns. That is,

$$A = \begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1j} & \cdots & a_{1n}\\
a_{21} & a_{22} & \cdots & a_{2j} & \cdots & a_{2n}\\
\vdots & \vdots & & \vdots & & \vdots\\
a_{i1} & a_{i2} & \cdots & a_{ij} & \cdots & a_{in}\\
\vdots & \vdots & & \vdots & & \vdots\\
a_{m1} & a_{m2} & \cdots & a_{mj} & \cdots & a_{mn}
\end{bmatrix}$$

In the matrix A above, we have:


• a11 , a12 , . . . , amn are called elements (or entries) of the matrix.
• The entry aij lies in the intersection of row i and column j.
• We write A = [aij ] where aij correspond to the entry at row i and column j.
• We also might write (A)ij to denote the same entry aij .
• The size of A is m (rows) by n (columns), written as m × n.
• The matrix is called a square matrix if m = n.
If a matrix A is a 1 × n matrix, then we say that A is a row vector. If A is an n × 1 matrix, then we say that A is a column vector. In Chapter 3, we speak of n-vectors as elements of R^n.

Example 1.3.1
 
  3 0 1
2 3 −2  
 is a 2 × 3 matrix and matrix B =  4 2 0  is 3 × 3 matrix.
Matrix A = 
4 1 −1  
−1 4 1

The entries a11, a22, ..., ann of a square (n × n) matrix A are said to be on the main diagonal of A.

$$A = \begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n}\\
a_{21} & a_{22} & \cdots & a_{2n}\\
\vdots & \vdots & \ddots & \vdots\\
a_{n1} & a_{n2} & \cdots & a_{nn}
\end{bmatrix}$$

Definition 1.3.2 Trace

The trace of a square (n × n) matrix A = [a_ij], denoted by tr(A), is the sum of the entries on the main diagonal. That is,

$$\mathrm{tr}(A) = \sum_{i=1}^{n} a_{ii} = a_{11} + a_{22} + \cdots + a_{nn}.$$

If A is not a square matrix, then the trace of A is undefined.

Definition 1.3.3 Transpose

If A = [a_ij] is an m × n matrix, then the n × m matrix A^T = [a_ji] is called the transpose of A. That is, (A^T)_ij = a_ji. Observe that A^T results from interchanging the rows and columns of A.

Example 1.3.2

Here are some examples of matrices and their transposes:

$$A = \begin{bmatrix}
a & b & c\\
d & e & f\\
g & h & i
\end{bmatrix}_{3\times 3},\quad
B = \begin{bmatrix}
1 & 4 & 0\\
-1 & 2 & 3
\end{bmatrix}_{2\times 3},\quad
C = \begin{bmatrix} 2 & 5 & 7 \end{bmatrix}_{1\times 3}$$

$$A^T = \begin{bmatrix}
a & d & g\\
b & e & h\\
c & f & i
\end{bmatrix}_{3\times 3},\quad
B^T = \begin{bmatrix}
1 & -1\\
4 & 2\\
0 & 3
\end{bmatrix}_{3\times 2},\quad
C^T = \begin{bmatrix} 2\\ 5\\ 7 \end{bmatrix}_{3\times 1}$$
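Both operations are straightforward to sketch in code. The helper names below are mine (illustrative only), and the data comes from Examples 1.3.1 and 1.3.2.

```python
def tr(A):
    """Trace: the sum of the main-diagonal entries of a square matrix."""
    assert len(A) == len(A[0]), "trace is undefined for non-square matrices"
    return sum(A[i][i] for i in range(len(A)))

def transpose(A):
    """Transpose: (A^T)_ij = a_ji, i.e. rows and columns interchanged."""
    return [list(col) for col in zip(*A)]

B = [[1, 4, 0],
     [-1, 2, 3]]          # the 2 x 3 matrix B of Example 1.3.2
BT = transpose(B)         # the 3 x 2 matrix B^T: [[1, -1], [4, 2], [0, 3]]

# trace of the 3 x 3 matrix B of Example 1.3.1: 3 + 2 + 1 = 6
t = tr([[3, 0, 1], [4, 2, 0], [-1, 4, 1]])
```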

Definition 1.3.4 Equality

Two m × n matrices A and B are said to be equal if aij = bij for all 1 ≤ i ≤ m and 1 ≤ j ≤ n.

Example 1.3.3

Find the values of a, b, c, and d if

$$\begin{bmatrix}
a+b & c+d\\
c-d & a-b
\end{bmatrix} =
\begin{bmatrix}
1 & 2\\
-2 & 1
\end{bmatrix}.$$

Solution:

Since the two matrices are equal, we have

a + b = 1, a - b = 1  and  c + d = 2, c - d = -2.

Thus 2a = 2 and 2c = 0, which gives a = 1, b = 0, c = 0 and d = 2.

Definition 1.3.5 Addition and Subtraction

If A = [aij ] and B = [bij ] are two m × n matrices, then their sum is A + B = [cij ] where
cij = aij + bij , and their difference is A − B = [dij ] where dij = aij − bij .

Example 1.3.4

If possible, find tr(C), A + B, A - B, A + C, and B - C, where

$$A = \begin{bmatrix}
2 & 0 & 3\\
-1 & 1 & 0
\end{bmatrix},\quad
B = \begin{bmatrix}
1 & 1 & 1\\
0 & -1 & -2
\end{bmatrix},\quad
C = \begin{bmatrix}
2 & 1\\
0 & -1
\end{bmatrix}.$$

Solution:

Clearly tr(C) = 2 + (-1) = 1. Moreover,

$$A + B = \begin{bmatrix}
3 & 1 & 4\\
-1 & 0 & -2
\end{bmatrix}
\quad\text{and}\quad
A - B = \begin{bmatrix}
1 & -1 & 2\\
-1 & 2 & 2
\end{bmatrix}.$$

The expressions A + C and B - C are undefined since the sizes of the matrices differ.

Definition 1.3.6 Scalar Multiples

If A = [aij ] is any m × n matrix and c is any scalar, then c A = [c aij ] for all 1 ≤ i ≤ m and
1 ≤ j ≤ n. The matrix c A is called a scalar multiple of A. That is,

(cA)ij = c(A)ij = c aij .

Example 1.3.5

If possible, find 2A + B^T, where

$$A = \begin{bmatrix}
2 & -1\\
0 & 1\\
3 & 0
\end{bmatrix}
\quad\text{and}\quad
B = \begin{bmatrix}
1 & 1 & 1\\
0 & -1 & -2
\end{bmatrix}.$$

Solution:

Clearly

$$2A + B^T = \begin{bmatrix}
4 & -2\\
0 & 2\\
6 & 0
\end{bmatrix} +
\begin{bmatrix}
1 & 0\\
1 & -1\\
1 & -2
\end{bmatrix} =
\begin{bmatrix}
5 & -2\\
1 & 1\\
7 & -2
\end{bmatrix}.$$

A matrix can be partitioned into smaller matrices by inserting horizontal and vertical rules between selected rows and columns.

Example 1.3.6

The matrix

$$A = \begin{bmatrix}
a_{11} & a_{12} & a_{13} & a_{14}\\
a_{21} & a_{22} & a_{23} & a_{24}\\
a_{31} & a_{32} & a_{33} & a_{34}
\end{bmatrix}$$

can be partitioned into four submatrices A11, A12, A21 and A22:

$$A = \begin{bmatrix}
A_{11} & A_{12}\\
A_{21} & A_{22}
\end{bmatrix}.$$

It can also be partitioned into row vectors or column vectors (respectively):

$$A = \begin{bmatrix}
r_1\\
r_2\\
r_3
\end{bmatrix}
\quad\text{or}\quad
A = \begin{bmatrix}
c_1 & c_2 & c_3 & c_4
\end{bmatrix}.$$

We use the terms row vector, r_i(A), and column vector, c_j(A), to denote the i-th row and j-th column of the matrix A. That is,

$$r_i(A) = \begin{bmatrix} a_{i1} & a_{i2} & \cdots & a_{in} \end{bmatrix}_{1\times n}
\quad\text{and}\quad
c_j(A) = \begin{bmatrix} a_{1j}\\ a_{2j}\\ \vdots\\ a_{mj} \end{bmatrix}_{m\times 1}.$$

Definition 1.3.7 Dot Product

An n-vector (row or column vector) x = [x1 x2 ... xn] can be written as x = (x1, x2, ..., xn), which is a vector in R^n. The dot (inner) product of the n-vectors x = (x1, x2, ..., xn) and y = (y1, y2, ..., yn) is defined by

$$x \cdot y = x_1 y_1 + x_2 y_2 + \cdots + x_n y_n = \sum_{i=1}^{n} x_i y_i.$$

Example 1.3.7

If x = (1, 0, 2, −1) and y = (3, 5, −1, 4), then x · y = 3 + 0 + (−2) + (−4) = −3.
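A direct sketch of the definition (mine, illustrative), checked against Example 1.3.7:

```python
def dot(x, y):
    """Dot product of two n-vectors: the sum of x_i * y_i."""
    assert len(x) == len(y), "the vectors must have the same length"
    return sum(a * b for a, b in zip(x, y))

result = dot((1, 0, 2, -1), (3, 5, -1, 4))   # 3 + 0 - 2 - 4 = -3
```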

Definition 1.3.8 Product of Matrices

Let A = [a_ij] be an m × p matrix and B = [b_ij] be a p × n matrix. Then the product of A and B is the m × n matrix AB = [c_ij] where c_ij = r_i(A) · c_j(B). That is,

$$c_{ij} = a_{i1}b_{1j} + a_{i2}b_{2j} + \cdots + a_{ip}b_{pj} = \sum_{k=1}^{p} a_{ik}b_{kj},
\quad\text{for } 1 \le i \le m \text{ and } 1 \le j \le n.$$

The product is undefined if the number of columns of A does not equal the number of rows of B.

Example 1.3.8

Evaluate AB, where $A = \begin{bmatrix} 1 & 3 & -1\\ 2 & 0 & 1 \end{bmatrix}_{2\times 3}$ and $B = \begin{bmatrix} 2 & 1\\ 0 & 2\\ -1 & 2 \end{bmatrix}_{3\times 2}$.

Solution:

Clearly, $AB = \begin{bmatrix} 2+0+1 & 1+6-2\\ 4+0-1 & 2+0+2 \end{bmatrix} = \begin{bmatrix} 3 & 5\\ 3 & 4 \end{bmatrix}_{2\times 2}$.
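The entry formula c_ij = Σ a_ik b_kj translates directly into code. This is my illustrative sketch, checked against Example 1.3.8.

```python
def matmul(A, B):
    """Product of an m x p matrix A and a p x n matrix B via c_ij = sum_k a_ik * b_kj."""
    p = len(B)
    assert len(A[0]) == p, "columns of A must equal rows of B"
    return [[sum(A[i][k] * B[k][j] for k in range(p))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 3, -1],
     [2, 0, 1]]
B = [[2, 1],
     [0, 2],
     [-1, 2]]
AB = matmul(A, B)       # [[3, 5], [3, 4]], as in Example 1.3.8
```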

Example 1.3.9

Find AB and A^T B, if possible, where $A = \begin{bmatrix} 1 & 0 & 2\\ 1 & -2 & 0 \end{bmatrix}$ and $B = \begin{bmatrix} 4 & 1 & -1\\ 2 & 5 & 1 \end{bmatrix}$.

Solution:

• AB is not defined, since the number of columns of A does not equal the number of rows of B.
• $A^T B = \begin{bmatrix} 1 & 1\\ 0 & -2\\ 2 & 0 \end{bmatrix} \begin{bmatrix} 4 & 1 & -1\\ 2 & 5 & 1 \end{bmatrix} = \begin{bmatrix} 6 & 6 & 0\\ -4 & -10 & -2\\ 8 & 2 & -2 \end{bmatrix}$

Example 1.3.10

Evaluate the (3, 2)-entry of A^T B, where $A = \begin{bmatrix} 1 & 0 & 2 & 1 & 2\\ 3 & -2 & 1 & 0 & -1 \end{bmatrix}$ and $B = \begin{bmatrix} 3 & 0 & 1 & 2 & 3\\ 1 & 2 & -1 & 0 & 4 \end{bmatrix}$.

Solution:

Clearly, the (3, 2)-entry of A^T B equals $r_3(A^T) \cdot c_2(B) = \begin{bmatrix} 2 & 1 \end{bmatrix} \cdot \begin{bmatrix} 0\\ 2 \end{bmatrix} = 0 + 2 = 2$.
16 Chapter 1. System of Linear Equations and Matrices

Example 1.3.11

Let $A = \begin{bmatrix} 1 & -2 & 0 \end{bmatrix}$. Find all values of c so that c A · A^T = 15.

Solution:

Note that $cA \cdot A^T = c \begin{bmatrix} 1 & -2 & 0 \end{bmatrix} \begin{bmatrix} 1\\ -2\\ 0 \end{bmatrix} = 5c$. Therefore, 5c = 15 and hence c = 3.

The product of two matrices A (m × p) and B (p × n) can be computed column by column:

$$AB = A\begin{bmatrix} b_1 & b_2 & \cdots & b_n \end{bmatrix} = \begin{bmatrix} Ab_1 & Ab_2 & \cdots & Ab_n \end{bmatrix},$$

or row by row:

$$AB = \begin{bmatrix} a_1\\ a_2\\ \vdots\\ a_m \end{bmatrix} B = \begin{bmatrix} a_1 B\\ a_2 B\\ \vdots\\ a_m B \end{bmatrix}.$$

Theorem 1.3.1

If A is an m × p matrix and B is a p × n matrix, then

(a) for each 1 ≤ j ≤ n, c_j(AB) = A c_j(B);
(b) for each 1 ≤ i ≤ m, r_i(AB) = r_i(A) B.

Example 1.3.12

Find the first column of AB and the second row of AB, where

$$A = \begin{bmatrix}
1 & 2\\
3 & 4\\
-1 & 5
\end{bmatrix}_{3\times 2}
\quad\text{and}\quad
B = \begin{bmatrix}
-2 & 3 & 4\\
3 & 2 & 1
\end{bmatrix}_{2\times 3}.$$

Solution:

Clearly

$$c_1(AB) = A\,c_1(B) = \begin{bmatrix}
1 & 2\\
3 & 4\\
-1 & 5
\end{bmatrix}
\begin{bmatrix} -2\\ 3 \end{bmatrix} =
\begin{bmatrix} 4\\ 6\\ 17 \end{bmatrix}.$$

In a similar way,

$$r_2(AB) = r_2(A)\,B = \begin{bmatrix} 3 & 4 \end{bmatrix}
\begin{bmatrix}
-2 & 3 & 4\\
3 & 2 & 1
\end{bmatrix} =
\begin{bmatrix} 6 & 17 & 16 \end{bmatrix}.$$
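Theorem 1.3.1 can be checked numerically on this example. The sketch below is mine (illustrative helper names), computing a single column and a single row of AB without forming the whole product.

```python
A = [[1, 2], [3, 4], [-1, 5]]
B = [[-2, 3, 4], [3, 2, 1]]

def matvec(A, v):
    """A times a column vector v."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def vecmat(v, B):
    """A row vector v times B."""
    return [sum(v[k] * B[k][j] for k in range(len(v))) for j in range(len(B[0]))]

col1 = matvec(A, [-2, 3])   # c1(AB) = A c1(B) = [4, 6, 17]
row2 = vecmat([3, 4], B)    # r2(AB) = r2(A) B = [6, 17, 16]
```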

Definition 1.3.9 Linear Combination

If A1 , A2 , · · · , Ar are matrices of the same size and c1 , c2 , · · · , cr are scalars, then the form

c1 A1 + c2 A2 + · · · + cr Ar

is called a linear combination of A1 , A2 , · · · , Ar with coefficients c1 , c2 , · · · , cr .

Theorem 1.3.2

If A is an m × n matrix and x is an n × 1 column vector, then the product Ax can be expressed as a linear combination of the columns of A in which the coefficients are the entries of x.

Proof:

$$Ax = \begin{bmatrix}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n\\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n\\
\vdots\\
a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n
\end{bmatrix}
= x_1\begin{bmatrix} a_{11}\\ a_{21}\\ \vdots\\ a_{m1} \end{bmatrix}
+ x_2\begin{bmatrix} a_{12}\\ a_{22}\\ \vdots\\ a_{m2} \end{bmatrix}
+ \cdots
+ x_n\begin{bmatrix} a_{1n}\\ a_{2n}\\ \vdots\\ a_{mn} \end{bmatrix}.$$

That is, Ax = x_1 c_1(A) + x_2 c_2(A) + ... + x_n c_n(A), so Ax is a linear combination of the columns of A whose coefficients are the entries of x.

Example 1.3.13

Find the second column of AB as a linear combination of the columns of A, where

$$A = \begin{bmatrix}
1 & 2\\
3 & 4\\
-1 & 5
\end{bmatrix}
\quad\text{and}\quad
B = \begin{bmatrix}
-2 & 3 & 4\\
3 & 2 & 1
\end{bmatrix}.$$

Solution:

Clearly,

$$c_2(AB) = A\,c_2(B) = \begin{bmatrix}
1 & 2\\
3 & 4\\
-1 & 5
\end{bmatrix}
\begin{bmatrix} 3\\ 2 \end{bmatrix}
= 3\begin{bmatrix} 1\\ 3\\ -1 \end{bmatrix}
+ 2\begin{bmatrix} 2\\ 4\\ 5 \end{bmatrix}.$$
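Numerically, the two sides of Theorem 1.3.2 agree on this example. This is my illustrative sketch: it computes A x directly and as the linear combination x1 c1(A) + x2 c2(A).

```python
A = [[1, 2], [3, 4], [-1, 5]]
x = [3, 2]                    # c2(B): the coefficients of the combination

# direct product A x
direct = [sum(a * xi for a, xi in zip(row, x)) for row in A]

# linear combination x1 * c1(A) + x2 * c2(A)
columns = list(zip(*A))       # columns of A: (1, 3, -1) and (2, 4, 5)
combo = [x[0] * c1 + x[1] * c2 for c1, c2 in zip(columns[0], columns[1])]
# both equal [7, 17, 7], the second column of AB
```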

Remark 1.3.1

Note that a system of linear equations

$$\begin{aligned}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1\\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &= b_2\\
&\;\;\vdots\\
a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n &= b_m
\end{aligned}$$

can be represented as a single matrix equation:

$$\begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n}\\
a_{21} & a_{22} & \cdots & a_{2n}\\
\vdots & \vdots & & \vdots\\
a_{m1} & a_{m2} & \cdots & a_{mn}
\end{bmatrix}
\begin{bmatrix} x_1\\ x_2\\ \vdots\\ x_n \end{bmatrix}
= \begin{bmatrix} b_1\\ b_2\\ \vdots\\ b_m \end{bmatrix}.$$

Denoting these matrices by A, x and b, respectively, the system can be replaced by the single matrix equation Ax = b. In this case, the matrix A is called the coefficient matrix of the system.
In the augmented matrix form, we add a vertical bar | to mark off the scalars in the last column of the matrix. That is, a system of linear equations may be transformed into the augmented matrix form [A | b], which can be expressed in the following way:

$$\begin{bmatrix} A \mid b \end{bmatrix} =
\left[\begin{array}{cccc|c}
a_{11} & a_{12} & \cdots & a_{1n} & b_1\\
a_{21} & a_{22} & \cdots & a_{2n} & b_2\\
\vdots & \vdots & & \vdots & \vdots\\
a_{m1} & a_{m2} & \cdots & a_{mn} & b_m
\end{array}\right].$$

Exercise 1.3.1
 
Let $A = \begin{bmatrix} 1 & 2\\ 0 & -1 \end{bmatrix}$. Compute A² + 3A.

Exercise 1.3.2
 
Let $A = \begin{bmatrix} 1 & 2\\ 4 & 5\\ 3 & 6 \end{bmatrix}$ and $B = \begin{bmatrix} 0 & 1 & -3\\ -3 & 1 & 4 \end{bmatrix}$. Find the third row of AB.

Exercise 1.3.3
 
Let $A = \begin{bmatrix} 1 & 1\\ 0 & 2\\ 2 & 0 \end{bmatrix}$ and $B = \begin{bmatrix} 4 & 1 & -1\\ 2 & 5 & 1 \end{bmatrix}$. Find AB and express the second column of AB as a linear combination of the columns of A.

Exercise 1.3.4

Write the following product as a linear combination of the columns of the first matrix.

$$\begin{bmatrix} 1 & 3\\ 2 & 1\\ 4 & 2 \end{bmatrix}
\begin{bmatrix} 3\\ -2 \end{bmatrix}.$$

Exercise 1.3.5
 
Let $A = \begin{bmatrix} 1 & 2 & 7\\ 1 & -5 & 7\\ 0 & -1 & 10 \end{bmatrix}$. Find tr(A).

Exercise 1.3.6
 
Let $A = \begin{bmatrix} 1 & 1\\ 0 & 1 \end{bmatrix}$. Find A^{1977}, and find all matrices B such that AB = BA.

Exercise 1.3.7
 
Let $A = \begin{bmatrix} \frac{1}{2} & \frac{1}{2}\\ \frac{1}{2} & \frac{1}{2} \end{bmatrix}$. Find A^{100}.
2 2

Exercise 1.3.8
     
Find a 2 × 2 matrix $B \neq \begin{bmatrix} 0 & 0\\ 0 & 0 \end{bmatrix}$ and $B \neq \begin{bmatrix} 1 & 0\\ 0 & 1 \end{bmatrix}$ so that AB = BA, where $A = \begin{bmatrix} 1 & 2\\ 0 & 1 \end{bmatrix}$.

Exercise 1.3.9

Show that if A and B are n × n matrices, then tr(A + B) = tr(A) + tr(B).

Exercise 1.3.10
 
Show that there are no 2 × 2 matrices A and B so that AB - BA = I₂, where $I_2 = \begin{bmatrix} 1 & 0\\ 0 & 1 \end{bmatrix}$.

Exercise 1.3.11

If A and B are n × n matrices, show that:

1. tr(cA) = c tr(A), where c is any real number.
2. tr(AB) = tr(BA).

Section 1.4: Inverses; Algebraic Properties of Matrices

We write Mm×n to denote the set of all matrices of size m × n.

Theorem 1.4.1

Let A, B, and C be matrices of appropriate sizes, and let r, s ∈ R. Then:


1. A + B = B + A (Commutative law for matrix addition)
2. A + (B + C) = (A + B) + C (Associative law for matrix addition)
3. A(BC) = (AB)C (Associative law for matrix multiplication)
4. A(B + C) = AB + AC (Left distributive law)
5. (A + B)C = AC + BC (right distributive law)
6. A(B − C) = AB − AC
7. (A − B)C = AC − BC
8. r(A + B) = rA + rB
9. r(A − B) = rA − rB
10. (r + s)A = rA + sA
11. (r − s)A = rA − sA
12. r(sA) = (rs)A
13. r(AB) = (rA)B = A(rB)

Proof:

We only prove (1) and (10). Let A = [aij ] and B = [bij ] for 1 ≤ i ≤ m and 1 ≤ j ≤ n. Then
1. A + B = [aij + bij ] = [bij + aij ] = B + A.
10. (r + s)A = (r + s)[aij ] = [(r + s) aij ] = [r aij + s aij ] = [r aij ] + [s aij ] = rA + sA.

Definition 1.4.1 Zero Matrix

A matrix whose entries are all zero is called a zero matrix and is denoted by 0.

Definition 1.4.2 Identity Matrix

A square (n × n) matrix with 1’s on main diagonal and zeros elsewhere is called an identity
matrix and is denoted by In .

Theorem 1.4.2 Properties of Zero Matrices

Let c be a scalar, and let A be a matrix of an appropriate size. Then:


1. A + 0 = 0 + A = A
2. A − 0 = A
3. A − A = A + (−A) = 0
4. 0A = 0
5. If cA = 0, then c = 0 or A = 0.

Proof:

We only prove (5). Let A = [aij ] for 1 ≤ i ≤ m and 1 ≤ j ≤ n. If cA = 0, then for each i
and j, we have (cA)ij = c(A)ij = c aij = 0. Therefore, either c = 0 or we have aij = 0 for all
i and j. Therefore, either c = 0 or A = 0.

Remark 1.4.1

Let

$$A = \begin{bmatrix} 0 & 1\\ 0 & 2 \end{bmatrix},\quad
B = \begin{bmatrix} 1 & 3\\ 1 & 2 \end{bmatrix},\quad
C = \begin{bmatrix} 2 & 5\\ 1 & 2 \end{bmatrix},\quad
D = \begin{bmatrix} 1 & 2\\ 0 & 0 \end{bmatrix}.$$

Then

$$AB = AC = \begin{bmatrix} 1 & 2\\ 2 & 4 \end{bmatrix},$$

but B ≠ C. That is, the cancellation law does not hold here.
For any a, b ∈ R, ab = 0 implies that a = 0 or b = 0. In matrices, however, we have AD = 0 while A ≠ 0 and D ≠ 0.
Moreover, the n × n identity matrix I commutes with any other matrix. That is, AI = IA = A for any n × n matrix A.

Remark 1.4.2

If A is an m × n matrix, observe that A In = A and A = Im A.

Theorem 1.4.3

If R is the reduced row echelon form of an n × n matrix A, then either R has a row of zeros
or R is the identity matrix In .

Definition 1.4.3 Inverse Matrix

A matrix A ∈ M_{n×n} is called nonsingular or invertible if there exists a matrix B ∈ M_{n×n} such that

A B = B A = I_n.

In this case, B is called the inverse of A and is denoted by A^{-1}. If there is no such B, we say that A is singular, which means that A has no inverse.

Example 1.4.1

Here is an example of a 2 × 2 matrix along with its inverse.

$$\begin{bmatrix} 1 & 1\\ 2 & 3 \end{bmatrix}
\begin{bmatrix} 3 & -1\\ -2 & 1 \end{bmatrix} =
\begin{bmatrix} 3 & -1\\ -2 & 1 \end{bmatrix}
\begin{bmatrix} 1 & 1\\ 2 & 3 \end{bmatrix} =
\begin{bmatrix} 1 & 0\\ 0 & 1 \end{bmatrix} = I_2.$$

Remark 1.4.3

Note that if A is a square matrix with a row (or a column) of zeros, then A is singular. For example, if A is 3 × 3 with a column of zeros, then for any matrix B,

$$BA = B\begin{bmatrix} c_1 & c_2 & 0 \end{bmatrix} = \begin{bmatrix} Bc_1 & Bc_2 & 0 \end{bmatrix} \neq I_3.$$

Theorem 1.4.4 Uniqueness of Inverse

If a matrix has an inverse, then its inverse is unique.

Proof:

Assume that A ∈ Mn×n is a matrix with two inverses B and C, then

B = B In = B (A C) = (B A) C = In C = C.

Therefore, A has a unique inverse.



Theorem 1.4.5 Inverse of 2 × 2 Matrices


 
The matrix $A = \begin{bmatrix} a & b\\ c & d \end{bmatrix}$ is invertible iff ad - bc ≠ 0, in which case the inverse of A is

$$A^{-1} = \frac{1}{ad - bc}\begin{bmatrix} d & -b\\ -c & a \end{bmatrix}.$$

Proof:

We simply show that AA^{-1} = A^{-1}A = I. Here, we only do the following:

$$AA^{-1} = \begin{bmatrix} a & b\\ c & d \end{bmatrix} \cdot \frac{1}{ad - bc}\begin{bmatrix} d & -b\\ -c & a \end{bmatrix}
= \frac{1}{ad - bc}\begin{bmatrix} ad - bc & -ab + ab\\ cd - cd & -bc + ad \end{bmatrix}
= \frac{1}{ad - bc}\begin{bmatrix} ad - bc & 0\\ 0 & -bc + ad \end{bmatrix} = I.$$
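The 2 × 2 formula is easy to implement. This is my illustrative sketch (function name is mine, exact arithmetic via `Fraction`), checked against the matrix of Example 1.4.1.

```python
from fractions import Fraction

def inverse2x2(A):
    """Inverse of a 2 x 2 matrix [[a, b], [c, d]] via the ad - bc formula."""
    (a, b), (c, d) = A
    det = a * d - b * c
    if det == 0:
        raise ValueError("singular: ad - bc = 0, so no inverse exists")
    return [[Fraction(d, det), Fraction(-b, det)],
            [Fraction(-c, det), Fraction(a, det)]]

Ainv = inverse2x2([[1, 1], [2, 3]])   # the matrix of Example 1.4.1; det = 1
# Ainv == [[3, -1], [-2, 1]]
```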

Theorem 1.4.6

If A and B are invertible n × n matrices, then A B is invertible and (A B)−1 = B −1 A−1 .

Proof:

Since both A and B are invertible, both A^{-1} and B^{-1} exist. Thus

$$(AB)(B^{-1}A^{-1}) = A(BB^{-1})A^{-1} = A I_n A^{-1} = AA^{-1} = I_n,$$
$$(B^{-1}A^{-1})(AB) = B^{-1}(A^{-1}A)B = B^{-1} I_n B = B^{-1}B = I_n.$$

Thus, AB is invertible and (AB)^{-1} = B^{-1}A^{-1}. Note that this result can be generalized to any number of invertible matrices. That is,

$$(A_1 A_2 \cdots A_k)^{-1} = A_k^{-1} A_{k-1}^{-1} \cdots A_1^{-1}.$$
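A quick numerical check of (AB)^{-1} = B^{-1}A^{-1} on 2 × 2 matrices (my sketch; the matrices B, B^{-1} below are my own example, and all the inverses follow from the formula of Theorem 1.4.5):

```python
def mul2(X, Y):
    """Product of two 2 x 2 matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A, Ainv = [[1, 1], [2, 3]], [[3, -1], [-2, 1]]     # ad - bc = 1
B, Binv = [[2, 1], [1, 1]], [[1, -1], [-1, 2]]     # ad - bc = 1

AB = mul2(A, B)                # [[3, 2], [7, 5]]
candidate = mul2(Binv, Ainv)   # B^{-1} A^{-1} = [[5, -2], [-7, 3]]
check = mul2(AB, candidate)    # (AB)(B^{-1} A^{-1}) = I_2, as the theorem asserts
```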

Remark 1.4.4

Let m and n be nonnegative integers and let A and B be matrices of appropriate sizes. Then:

1. A⁰ = I and Aⁿ = A A ··· A (n times),
2. Aᵐ Aⁿ = A^{m+n} and (Aᵐ)ⁿ = A^{mn},
3. (AB)ⁿ = (AB)(AB)···(AB) (n times), and in general (AB)ⁿ ≠ Aⁿ Bⁿ,
4. in general, (A + B)² = (A + B)(A + B) = A² + AB + BA + B² ≠ A² + 2AB + B².

Theorem 1.4.7

If A is an invertible matrix and n is a nonnegative integer, then:


1. A−1 is invertible and (A−1 )−1 = A.
2. An is invertible and (An )−1 = A−n = (A−1 )n .
3. k A is invertible for any nonzero scalar k, and (k A)−1 = k −1 A−1 .

Proof:

We use the idea of the definition. A matrix B is the inverse of matrix C if BC = CB = I.


1. Clearly, A−1 A = A A−1 = I and hence A is the inverse of A−1 .
2. An A−n = An−n = A0 = I. That is A−n is the inverse of An .
3. (kA)(k −1 A−1 ) = (kk −1 )(AA−1 ) = I. Also, (k −1 A−1 )(kA) = I. Thus, (kA)−1 = k −1 A−1 .

Theorem 1.4.8

Let A and B be two matrices of appropriate size and let c be a scalar. Then:
1. (AT )T = A,
2. (A ± B)T = AT ± B T ,
3. (c A)T = c AT , and
4. (AB)T = B T AT . This result can be extended to three or more factors.

Exercise 1.4.1

If A is a square matrix and n is a positive integer, is it true that (An )T = (AT )n ? Justify
your answer.

Theorem 1.4.9

If A is invertible matrix, then AT is also invertible and (AT )−1 = (A−1 )T .

Proof:

We simply show that the product AT (A−1 )T = (A−1 )T AT = I. Clearly, AT (A−1 )T =


(A−1 A)T = I T = I. Similarly, (A−1 )T AT = I.

Example 1.4.2
   
2 0 1 4
Let A =  and B =  . Find AB, and B T AT .
1 −1 3 5

Solution:
     
2 + 0 8 + 0  2 8  2 −2
Clearly, AB =  = . Moreover, B T AT = (AB)T = 
1−3 4−5 −2 −1 8 −1

Definition 1.4.4

Let A be an n × n (square) matrix and p(x) = a0 + a1 x + · · · + am xm be any polynomial.


Then we define the matrix polynomial in A as the n × n matrix p(A) where

p(A) = a0 In + a1 A + · · · + am Am .

Example 1.4.3
 
Compute p(A), where p(x) = x² - 2x - 3 and $A = \begin{bmatrix} -1 & 2\\ 0 & 3 \end{bmatrix}$.

Solution:

$$p(A) = A^2 - 2A - 3I_2
= \begin{bmatrix} -1 & 2\\ 0 & 3 \end{bmatrix}^2
- 2\begin{bmatrix} -1 & 2\\ 0 & 3 \end{bmatrix}
- 3\begin{bmatrix} 1 & 0\\ 0 & 1 \end{bmatrix}
= \begin{bmatrix} 1 & 4\\ 0 & 9 \end{bmatrix}
- \begin{bmatrix} -2 & 4\\ 0 & 6 \end{bmatrix}
- \begin{bmatrix} 3 & 0\\ 0 & 3 \end{bmatrix}
= \begin{bmatrix} 0 & 0\\ 0 & 0 \end{bmatrix} = 0.$$
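The computation can be repeated in code (an illustrative sketch of mine):

```python
def mul2(X, Y):
    """Product of two 2 x 2 matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[-1, 2], [0, 3]]
A2 = mul2(A, A)                        # [[1, 4], [0, 9]]
I2 = [[1, 0], [0, 1]]
# p(A) = A^2 - 2A - 3I, entry by entry
pA = [[A2[i][j] - 2 * A[i][j] - 3 * I2[i][j] for j in range(2)]
      for i in range(2)]               # the zero matrix, as computed by hand
```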

Example 1.4.4

Show that if A² + 5A - 2I = 0, then A^{-1} = ½(A + 5I).

Solution:

A² + 5A - 2I = 0 implies that 2I = A² + 5A = A(A + 5I). Then I = ½ A(A + 5I). Therefore,

$$I = A\left(\tfrac{1}{2}(A + 5I)\right),$$

which shows that A^{-1} = ½(A + 5I).

Example 1.4.5

Show that if A is a square matrix such that A^k = 0 for some positive integer k, then the matrix I − A is invertible and (I − A)^{-1} = I + A + A^2 + · · · + A^{k−1}.

Solution:

(I − A)(I + A + · · · + A^{k−1}) = (I + A + A^2 + · · · + A^{k−1}) − (A + A^2 + · · · + A^{k−1} + A^k).

All of the intermediate powers cancel, so the product equals I − A^k = I − 0 = I.
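A numerical instance of this identity, using a strictly upper triangular (hence nilpotent) matrix as the example:

```python
import numpy as np

# Strictly upper triangular => nilpotent: here A^3 = 0.
A = np.array([[0, 1, 2],
              [0, 0, 3],
              [0, 0, 0]])
I = np.eye(3, dtype=int)
assert (np.linalg.matrix_power(A, 3) == 0).all()

# (I - A)^{-1} = I + A + A^2  (the geometric sum up to A^{k-1}, with k = 3).
geometric_sum = I + A + A @ A
assert ((I - A) @ geometric_sum == I).all()
assert (geometric_sum @ (I - A) == I).all()
```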

Exercise 1.4.2
 
Let A = \begin{bmatrix} 1 \\ 3 \\ -1 \\ 3 \end{bmatrix}. Find all constants c ∈ R such that (cA)^T (cA) = 5.

Exercise 1.4.3

Let A be an n × n matrix such that A3 = O. With justification, prove or disprove that

(In − A)−1 = A2 + A + In .

Exercise 1.4.4

A square matrix A is said to be idempotent if A2 = A.


1. Show that if A is idempotent, then so is I − A.
2. Show that if A is idempotent, then 2A − I is invertible and is its own inverse.

Exercise 1.4.5

Let p1 (x) = x2 − 9, p2 (x) = x + 3, and p3 (x) = x − 3. Show that p1 (A) = p2 (A)p3 (A) for any
square matrix A.

Exercise 1.4.6

Let A be a square matrix. Then


1. Show that (I − A)−1 = I + A + A2 + A3 if A4 = 0.
2. Show that (I − A)−1 = I + A + A2 + · · · + An if An+1 = 0.

Exercise 1.4.7

Let Jn be the n × n matrix each of whose entries is 1. Show that if n > 1, then
(I − J_n)^{-1} = I − \frac{1}{n-1} J_n.

Exercise 1.4.8

Let A, B, and C be n × n matrices such that D = AB + AC is invertible.


1. Find A−1 if possible.
2. Find (B + C)−1 if possible.

Exercise 1.4.9
   
Let A = \begin{bmatrix} 1 & 2 \\ 1 & 3 \end{bmatrix} and B = \begin{bmatrix} 1 & -1 & 2 \\ 3 & 2 & 0 \end{bmatrix}. Find C if (B^T + C) A^{-1} = B^T.

Exercise 1.4.10
   
Let A^{-1} = \begin{bmatrix} 1 & 2 & 0 \\ 0 & 1 & -1 \\ 2 & 0 & 1 \end{bmatrix} and B = \begin{bmatrix} 1 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 1 \end{bmatrix}. Find C if AC = B^T.
[Hint: Consider multiplying both sides from the left with A^{-1}.]

Section 1.5: A Method for Finding A−1

Definition 1.5.1

An m × n matrix A is called row equivalent to an m × n matrix B if B can be obtained by applying a finite sequence of elementary row operations to A. In this case, we write A ≈ B.

Remark 1.5.1 Properties of matrix equivalence

For any m × n matrices A, B, and C we have


1. A ≈ A,
2. A ≈ B → B ≈ A,
3. A ≈ B and B ≈ C → A ≈ C.

Theorem 1.5.1

Let Ax = B and Cx = D be two linear systems of equations. If [ A | B ] ≈ [ C | D ], then the two systems have the same solutions.

Remark 1.5.2

Unless otherwise specified, all matrices in this section are considered to be square matrices.

Theorem 1.5.2

If A is an n × n matrix, then the following statements are equivalent:


1. A is invertible.
2. AX = 0 has only the trivial solution.
3. A is row equivalent to In .

Remark 1.5.3 How to find A−1 for a given A ∈ Mn×n

1. form the augmented matrix [ A | In ],


2. find the r.r.e.f. of [ A | In ], say [ C | B ]:
(a) if C = In , then A is invertible and A−1 = B,
(b) if C ≠ I_n, then C has a row of zeros and A is a singular matrix with no inverse.

Example 1.5.1
 
Find, if possible, A^{-1} for A = \begin{bmatrix} 1 & 1 \\ 2 & 3 \end{bmatrix}.

Solution:

\left[\begin{array}{cc|cc} 1 & 1 & 1 & 0 \\ 2 & 3 & 0 & 1 \end{array}\right] \xrightarrow{r_2-2r_1 \Rightarrow r_2} \left[\begin{array}{cc|cc} 1 & 1 & 1 & 0 \\ 0 & 1 & -2 & 1 \end{array}\right] \xrightarrow{r_1-r_2 \Rightarrow r_1} \left[\begin{array}{cc|cc} 1 & 0 & 3 & -1 \\ 0 & 1 & -2 & 1 \end{array}\right].

Therefore, A is invertible and A^{-1} = \begin{bmatrix} 3 & -1 \\ -2 & 1 \end{bmatrix}.
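The procedure of Remark 1.5.3 can be sketched as a short routine. This is an illustrative implementation (with partial pivoting added for numerical stability), not the book's algorithm verbatim:

```python
import numpy as np

def invert_via_gauss_jordan(A):
    """Row-reduce [A | I] to [I | A^{-1}]; return None if A is singular."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    M = np.hstack([A, np.eye(n)])           # the augmented matrix [A | I]
    for col in range(n):
        pivot = np.argmax(np.abs(M[col:, col])) + col   # partial pivoting
        if np.isclose(M[pivot, col], 0.0):
            return None                     # no pivot in this column -> singular
        M[[col, pivot]] = M[[pivot, col]]   # swap rows
        M[col] /= M[col, col]               # scale pivot row to a leading 1
        for r in range(n):
            if r != col:
                M[r] -= M[r, col] * M[col]  # clear the rest of the column
    return M[:, n:]

# Example 1.5.1: the inverse of [[1, 1], [2, 3]] is [[3, -1], [-2, 1]].
Ainv = invert_via_gauss_jordan([[1, 1], [2, 3]])
assert np.allclose(Ainv, [[3, -1], [-2, 1]])
```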

Example 1.5.2
 
Find, if possible, A^{-1} for A = \begin{bmatrix} 1 & 0 & 1 \\ 1 & 1 & 2 \\ 2 & 0 & 1 \end{bmatrix}.

Solution:

\left[\begin{array}{ccc|ccc} 1 & 0 & 1 & 1 & 0 & 0 \\ 1 & 1 & 2 & 0 & 1 & 0 \\ 2 & 0 & 1 & 0 & 0 & 1 \end{array}\right] \xrightarrow[r_3-2r_1 \Rightarrow r_3]{r_2-r_1 \Rightarrow r_2} \left[\begin{array}{ccc|ccc} 1 & 0 & 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & -1 & 1 & 0 \\ 0 & 0 & -1 & -2 & 0 & 1 \end{array}\right] \xrightarrow{-r_3} \left[\begin{array}{ccc|ccc} 1 & 0 & 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & -1 & 1 & 0 \\ 0 & 0 & 1 & 2 & 0 & -1 \end{array}\right] \xrightarrow[r_2-r_3 \Rightarrow r_2]{r_1-r_3 \Rightarrow r_1} \left[\begin{array}{ccc|ccc} 1 & 0 & 0 & -1 & 0 & 1 \\ 0 & 1 & 0 & -3 & 1 & 1 \\ 0 & 0 & 1 & 2 & 0 & -1 \end{array}\right].

Therefore, A is invertible and A^{-1} = \begin{bmatrix} -1 & 0 & 1 \\ -3 & 1 & 1 \\ 2 & 0 & -1 \end{bmatrix}.

Example 1.5.3
 
Find, if possible, A^{-1} for A = \begin{bmatrix} 1 & 2 & -3 \\ 1 & -2 & 1 \\ 5 & -2 & -3 \end{bmatrix}.

Solution:

\left[\begin{array}{ccc|ccc} 1 & 2 & -3 & 1 & 0 & 0 \\ 1 & -2 & 1 & 0 & 1 & 0 \\ 5 & -2 & -3 & 0 & 0 & 1 \end{array}\right] \xrightarrow[r_3-5r_1 \Rightarrow r_3]{r_2-r_1 \Rightarrow r_2} \left[\begin{array}{ccc|ccc} 1 & 2 & -3 & 1 & 0 & 0 \\ 0 & -4 & 4 & -1 & 1 & 0 \\ 0 & -12 & 12 & -5 & 0 & 1 \end{array}\right] \xrightarrow{r_3-3r_2 \Rightarrow r_3} \left[\begin{array}{ccc|ccc} 1 & 2 & -3 & 1 & 0 & 0 \\ 0 & -4 & 4 & -1 & 1 & 0 \\ 0 & 0 & 0 & -2 & -3 & 1 \end{array}\right].

At this point, we can conclude that the matrix is singular, because the left block has a row of zeros. In particular, A^{-1} does not exist.

Remark 1.5.4

Let A be an n × n matrix:

1. if A is row equivalent to In , then it is non-singular,


Example 1.5.4
 
A = \begin{bmatrix} -3 & 0 \\ 0 & 5 \end{bmatrix} is row equivalent to I_2.

2. if A is row equivalent to a matrix with a row of zeros, then it is singular.

Example 1.5.5

A = \begin{bmatrix} -1 & 1 \\ 0 & 0 \end{bmatrix} is row equivalent to a matrix with a row of zeros.

TRUE or FALSE:

? If A and B are singular matrices, then so is A + B. (FALSE).
reason: A = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} and B = \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} are two singular matrices, while A + B = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} is invertible.
? If A and B are invertible matrices, then so is A + B. (FALSE).
reason: A = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} and B = \begin{bmatrix} -1 & 0 \\ 0 & -1 \end{bmatrix} are two invertible matrices, while A + B = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} is singular.

Exercise 1.5.1
 
Let A = \begin{bmatrix} 1 & 0 & 1 \\ 1 & 1 & 0 \\ 2 & 0 & 1 \end{bmatrix}. Find (2A^T)^{-1}.
[Hint: If c is a nonzero constant, then what is (cA)^{-1}?]

Exercise 1.5.2
 
Let A^{-1} = \begin{bmatrix} 1 & 2 & 3 \\ 0 & 1 & 2 \\ 0 & 0 & 1 \end{bmatrix}. Find all x, y, z ∈ R such that \begin{bmatrix} x & y & z \end{bmatrix} A = \begin{bmatrix} 1 & 2 & 3 \end{bmatrix}.

Exercise 1.5.3
   
Let A = \begin{bmatrix} 0 & 1 & -1 \\ 1 & 1 & 0 \\ 0 & 1 & 2 \end{bmatrix} and B = \begin{bmatrix} 1 & 2 & -1 \\ 1 & 3 & 0 \\ 1 & 1 & -1 \end{bmatrix}.
1. Find B^{-1}.
2. Find C if A = B C.

Section 1.6: More on Linear Systems and Invertible Matrices

Consider a system of linear equations written in matrix form Ax = b, where A is the coefficient matrix, x is the column vector of unknowns, and b is the column vector of scalars.

Solving Ax = b:
• Ax = b with b ≠ 0 (non-homogeneous): a unique solution, no solution, or infinitely many solutions.
• Ax = 0 (homogeneous): the trivial solution only, or non-trivial solutions as well.

Remark 1.6.1 Non-Homogeneous System

Let Ax = b be a system of linear equations. Then,

• the system has no solution if A ≈ a matrix with a row of zeros while [ A | b ] ≈ a matrix with no rows of zeros.
• the system has one (unique) solution if A ≈ I.
• the system has infinitely many solutions if the number of unknowns (columns of matrix A) is more than the number of equations (rows of matrix A).

Remark 1.6.2 Homogeneous System

Let Ax = 0 be a homogeneous system of linear equations. Then,

• the system has only the trivial solution if A ≈ I.
• the system has non-trivial solutions if the number of unknowns (columns of matrix A) is greater than the number of equations (rows of matrix A) in the reduced system.

Theorem 1.6.1 Non-Homogeneous System

A (nonhomogeneous) system of linear equations has zero, one, or infinitely many solutions.

Proof:

If Ax = b, b 6= O has no solutions or one solution, then we are done. So, we only need to
show that if the system has more than one solution, then it has infinitely many solutions.
Assume that y and z are two solutions for the system Ax = b. Then, for r, s ∈ R, define
u = r y + s z to get:

Au = A(r y + s z) = A(r y) + A(s z) = r (Ay) + s (Az) = r b + s b = (r + s)b.



If r + s = 1, then u is a solution to the system. Since y ≠ z, different choices of r ∈ R (with s = 1 − r) give different vectors u, and there are infinitely many such choices. Hence the system has infinitely many solutions.

TRUE or FALSE:

? If x_1 and x_2 are two solutions for Ax = b, then (3/2) x_1 − 2 x_2 is also a solution. (FALSE).
reason: Since (3/2) − 2 = −1/2 ≠ 1.

Theorem 1.6.2

If A is an invertible n × n matrix, then for each n × 1 matrix b, the system of equations


Ax = b has a unique solution, namely x = A−1 b.

Proof:

Given Ax = b, we multiply both sides (from left) by A−1 to get x = A−1 b. That is A−1 b is
a solution to the system.
To show that it is unique, assume that y is any solution for Ax = b. Hence Ay = b and
again y = A−1 b.

Example 1.6.1

Solve the following system "Ax = b" (given in its matrix form)

\begin{bmatrix} 1 & 0 & 1 \\ 1 & 1 & 2 \\ 2 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \\ -1 \end{bmatrix}

Solution:

We solve the system by using A^{-1}; we found the inverse of A in Example 1.5.2:

A^{-1} = \begin{bmatrix} -1 & 0 & 1 \\ -3 & 1 & 1 \\ 2 & 0 & -1 \end{bmatrix}

Then, using Theorem 1.6.2, we get the following unique solution:

x = \begin{bmatrix} x \\ y \\ z \end{bmatrix} = A^{-1} b = \begin{bmatrix} -1 & 0 & 1 \\ -3 & 1 & 1 \\ 2 & 0 & -1 \end{bmatrix} \begin{bmatrix} 0 \\ 1 \\ -1 \end{bmatrix} = \begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix}.
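The computation above can be reproduced in a few lines of NumPy. (In floating-point practice, `np.linalg.solve` is preferred over forming A^{-1} explicitly; both are shown here for comparison.)

```python
import numpy as np

# Example 1.6.1: solve Ax = b.
A = np.array([[1.0, 0.0, 1.0],
              [1.0, 1.0, 2.0],
              [2.0, 0.0, 1.0]])
b = np.array([0.0, 1.0, -1.0])

# Via the inverse, as in Theorem 1.6.2: x = A^{-1} b.
x = np.linalg.inv(A) @ b
assert np.allclose(x, [-1.0, 0.0, 1.0])

# Direct solve gives the same (unique) solution.
assert np.allclose(np.linalg.solve(A, b), x)
```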

Example 1.6.2

Solve the following system "Ax = 0"

\begin{bmatrix} 1 & 0 & 1 \\ 1 & 1 & 2 \\ 2 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}

Solution:

Again, the inverse of A from Example 1.5.2 is

A^{-1} = \begin{bmatrix} -1 & 0 & 1 \\ -3 & 1 & 1 \\ 2 & 0 & -1 \end{bmatrix}

Then, using Theorem 1.6.2, we get the trivial solution:

x = \begin{bmatrix} x \\ y \\ z \end{bmatrix} = A^{-1} 0 = \begin{bmatrix} -1 & 0 & 1 \\ -3 & 1 & 1 \\ 2 & 0 & -1 \end{bmatrix} \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}.

Remark 1.6.3 Solving Multiple Systems with the Same Coefficient Matrix

Given the systems Ax = b1 ; Ax = b2 ; · · · ; Ax = bk , an efficient way to solve these systems


at once is to form the augmented matrix [A | b_1 | b_2 | · · · | b_k ] and use Gauss-Jordan elimination to solve all of the systems at the same time.

Example 1.6.3

Solve the following linear systems:


x + z = 0 x + z = 1
x + y + 2z = 1 and x + y + 2z = 2
2x + 3z = 1 2x + 3z = 3

Solution:

\left[\begin{array}{ccc|c|c} 1 & 0 & 1 & 0 & 1 \\ 1 & 1 & 2 & 1 & 2 \\ 2 & 0 & 3 & 1 & 3 \end{array}\right] \xrightarrow[r_3-2r_1 \Rightarrow r_3]{r_2-r_1 \Rightarrow r_2} \left[\begin{array}{ccc|c|c} 1 & 0 & 1 & 0 & 1 \\ 0 & 1 & 1 & 1 & 1 \\ 0 & 0 & 1 & 1 & 1 \end{array}\right] \xrightarrow[r_2-r_3 \Rightarrow r_2]{r_1-r_3 \Rightarrow r_1} \left[\begin{array}{ccc|c|c} 1 & 0 & 0 & -1 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 1 & 1 \end{array}\right]

Therefore, the solution of the first system is x = −1, y = 0, z = 1 and the solution of the second system is x = 0, y = 0, z = 1.
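Solving several right-hand sides at once is exactly what `np.linalg.solve` does when the right-hand side is a matrix whose columns are b_1, ..., b_k:

```python
import numpy as np

# Example 1.6.3: one elimination, two right-hand sides.
A = np.array([[1.0, 0.0, 1.0],
              [1.0, 1.0, 2.0],
              [2.0, 0.0, 3.0]])
B = np.array([[0.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])   # columns are b1 and b2

X = np.linalg.solve(A, B)    # solves AX = B, i.e. both systems at once
assert np.allclose(X[:, 0], [-1.0, 0.0, 1.0])  # solution of Ax = b1
assert np.allclose(X[:, 1], [0.0, 0.0, 1.0])   # solution of Ax = b2
```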

Theorem 1.6.3

Let A be a square matrix. Then


1. If B is a square matrix satisfying BA = I, then B = A−1 .
2. If B is a square matrix satisfying AB = I, then B = A−1 .

Proof:

1. Assuming BA = I, we show that A is invertible by showing that the system Ax = 0


has only the trivial solution. Multiplying both sides (from left) by B, we get

B (Ax) = B 0. That is, (BA)x = Ix = x = 0.

Thus, the system Ax = 0 has only the trivial solution. Theorem 1.5.2 implies that A is
invertible. Therefore BA = I implies (BA)A−1 = IA−1 . Hence B = A−1 .
2. Using part (1), AB = I implies that A = B −1 . Taking the inverse for both sides, we
get A−1 = B as desired.

Theorem 1.6.4 Equivalent Statements

If A is an n × n matrix, then the following statements are equivalent:


1. A is invertible.
2. Ax = 0 has only the trivial solution.
3. A is row equivalent to In .
4. Ax = b is consistent for every n × 1 matrix b.
5. Ax = b has exactly one solution for every n × 1 matrix b.

Theorem 1.6.5

Let A and B be two square matrices of the same size. If AB is invertible, then A and B must also be invertible.

Example 1.6.4

Consider the system:

x + y + z = 2
x + 2y − z = 2
x + 2y + (a^2 − 5)z = a

Find all values of a so that the system has:


(a) a unique solution (consistent).
(b) infinitely many solutions (consistent).
(c) no solution (inconsistent).

Solution:

In this kind of question it is not necessary to reach the r.r.e.f., so we focus on the last row, which contains the parameter a:

\left[\begin{array}{ccc|c} 1 & 1 & 1 & 2 \\ 1 & 2 & -1 & 2 \\ 1 & 2 & a^2-5 & a \end{array}\right] \xrightarrow[r_3-r_1 \Rightarrow r_3]{r_2-r_1 \Rightarrow r_2} \left[\begin{array}{ccc|c} 1 & 1 & 1 & 2 \\ 0 & 1 & -2 & 0 \\ 0 & 1 & a^2-6 & a-2 \end{array}\right] \xrightarrow{r_3-r_2 \Rightarrow r_3} \left[\begin{array}{ccc|c} 1 & 1 & 1 & 2 \\ 0 & 1 & -2 & 0 \\ 0 & 0 & a^2-4 & a-2 \end{array}\right]

At this point, we have:
(a) a unique solution if a^2 − 4 ≠ 0 ⇐⇒ a ≠ ±2 ⇐⇒ a ∈ R\{−2, +2}.
(b) infinitely many solutions if a^2 − 4 = 0 and a − 2 = 0 ⇐⇒ a = +2.
(c) no solution if a^2 − 4 = 0 and a − 2 ≠ 0 ⇐⇒ a = −2.
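These three cases can be spot-checked numerically by comparing rank(A) with rank([A | b]) (a rank criterion that is not introduced in the text until later, but is convenient for a quick check):

```python
import numpy as np

def solution_count(a):
    """Classify the system of Example 1.6.4 by comparing ranks."""
    A = np.array([[1.0, 1.0, 1.0],
                  [1.0, 2.0, -1.0],
                  [1.0, 2.0, a**2 - 5.0]])
    b = np.array([[2.0], [2.0], [a]])
    rA = np.linalg.matrix_rank(A)
    rAb = np.linalg.matrix_rank(np.hstack([A, b]))
    if rA < rAb:
        return "none"           # inconsistent
    return "unique" if rA == 3 else "infinite"

assert solution_count(0.0) == "unique"    # a != +-2
assert solution_count(2.0) == "infinite"  # a = +2
assert solution_count(-2.0) == "none"     # a = -2
```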

Example 1.6.5

Discuss the consistency of the following non-homogeneous system:


x + 3y + kz = 4
2x + ky + 12z = 6

Solution:

\left[\begin{array}{ccc|c} 1 & 3 & k & 4 \\ 2 & k & 12 & 6 \end{array}\right] \xrightarrow{r_2-2r_1 \Rightarrow r_2} \left[\begin{array}{ccc|c} 1 & 3 & k & 4 \\ 0 & k-6 & 12-2k & -2 \end{array}\right].

The system is consistent only when k ≠ 6:
(a) it can never have a unique solution (A is not a square matrix, so it cannot be row equivalent to an identity matrix),
(b) it has infinitely many solutions when k ≠ 6,
(c) it has no solution when k = 6.

Exercise 1.6.1
   
Let A = \begin{bmatrix} 1 & 0 & 3 & 3 \\ 0 & 1 & 1 & -1 \\ 1 & -2 & 3 & 1 \\ 0 & 2 & 0 & a^2+1 \end{bmatrix} and b = \begin{bmatrix} 1 \\ 0 \\ 0 \\ a+2 \end{bmatrix}. Find all value(s) of a such that the system Ax = b is consistent.

Exercise 1.6.2

Find all value(s) of a for which the system

x − y + (a + 3)z = a^3 − a − 7
−x + ay − az = a
2(a^2 − 1)y + (a + 2)z = 8a − 14

has (a) no solution (inconsistent), (b) a unique solution (consistent), and (c) infinitely many solutions (consistent).

Exercise 1.6.3

Discuss the consistency of the following homogeneous system:

x + y − z = 0
x − y + 3z = 0
x + y + (a^2 − 5)z = 0

Exercise 1.6.4

Show that if c1 and c2 are solutions of the system Ax = b, then 4c1 − 3c2 is also a solution
of this system.

Exercise 1.6.5

Let u and v be two solutions of the homogenous system Ax = 0. Show that ru + sv (for
r, s ∈ R) is a solution to the same system.

Exercise 1.6.6

Show that if A is an invertible matrix, then AA^T and A^T A are both invertible matrices.



Exercise 1.6.7

Show that if A, B, and A + B are invertible matrices with the same size, then

A (A−1 + B −1 ) B (A + B)−1 = I.

Exercise 1.6.8

Let A be an m × n matrix, and b be an m × 1 column vector. Show that the system Ax = b


has a solution if and only if b is a linear combination of columns of A. [Hint: Recall Theorem
1.3.2].

Section 1.7: Diagonal, Triangular, and Symmetric Matrices

Remark 1.7.1 Diagonal Matrix

A "diagonal matrix" D = [d_{ij}] is an n × n matrix such that d_{ij} = 0 for all i ≠ j:

D = \begin{bmatrix} d_{11} & 0 & \cdots & 0 \\ 0 & d_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & d_{nn} \end{bmatrix}; it can also be written as D = \begin{bmatrix} d_1 & 0 & \cdots & 0 \\ 0 & d_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & d_n \end{bmatrix}.

Moreover, we sometimes write D = diag(d_1, d_2, · · · , d_n). If all scalars in the diagonal matrix are equal, say equal to c, then D is said to be a scalar matrix. In particular, the identity matrix I_n is a scalar matrix with c = 1. That is, I_n = diag(1, 1, · · · , 1).

1. D is invertible if and only if all of its diagonal entries are nonzero; in this case we have

D^{-1} = \begin{bmatrix} 1/d_1 & 0 & \cdots & 0 \\ 0 & 1/d_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1/d_n \end{bmatrix}

2. If k is a positive integer, then D^k is computed as

D^k = \begin{bmatrix} d_1^k & 0 & \cdots & 0 \\ 0 & d_2^k & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & d_n^k \end{bmatrix}

3. If A is an n × m matrix and B is an m × n matrix, then DA scales the rows of A and BD scales the columns of B:

DA = \begin{bmatrix} d_1 r_1(A) \\ d_2 r_2(A) \\ \vdots \\ d_n r_n(A) \end{bmatrix} and BD = \begin{bmatrix} d_1 c_1(B) & d_2 c_2(B) & \cdots & d_n c_n(B) \end{bmatrix}

Example 1.7.1
 
  1 0 −1  
1 0 0   1 1 −1 0
   1 1 0   
Let D =  0
 3 0  , A = 
 ,

and B =  2
 3 1  . Compute
2 
  2 1 4 
0 0 2  
4 2 1 3
5 1 −1
D−3 , AD, and DB.

Solution:

Clearly D = diag(1, 3, 2) and D^{-1} = diag(1, 1/3, 1/2). That is, D^{-3} = (D^{-1})^3 = diag(1, 1/27, 1/8).

AD = \begin{bmatrix} 1 & 0 & -1 \\ 1 & 1 & 0 \\ 2 & 1 & 4 \\ 5 & 1 & -1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 2 \end{bmatrix} = \begin{bmatrix} 1 & 0 & -2 \\ 1 & 3 & 0 \\ 2 & 3 & 8 \\ 5 & 3 & -2 \end{bmatrix} = \begin{bmatrix} c_1(A) & 3 c_2(A) & 2 c_3(A) \end{bmatrix},

and DB = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 2 \end{bmatrix} \begin{bmatrix} 1 & 1 & -1 & 0 \\ 2 & 3 & 1 & 2 \\ 4 & 2 & 1 & 3 \end{bmatrix} = \begin{bmatrix} 1 & 1 & -1 & 0 \\ 6 & 9 & 3 & 6 \\ 8 & 4 & 2 & 6 \end{bmatrix} = \begin{bmatrix} r_1(B) \\ 3 r_2(B) \\ 2 r_3(B) \end{bmatrix}.
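The row/column scaling behavior of diagonal matrices is easy to confirm with NumPy broadcasting:

```python
import numpy as np

# Example 1.7.1: powers of a diagonal matrix, and D scaling rows / columns.
D = np.diag([1.0, 3.0, 2.0])
A = np.array([[1.0, 0.0, -1.0],
              [1.0, 1.0, 0.0],
              [2.0, 1.0, 4.0],
              [5.0, 1.0, -1.0]])
B = np.array([[1.0, 1.0, -1.0, 0.0],
              [2.0, 3.0, 1.0, 2.0],
              [4.0, 2.0, 1.0, 3.0]])

# D^{-3} = diag(1, 1/27, 1/8).
assert np.allclose(np.linalg.matrix_power(np.linalg.inv(D), 3),
                   np.diag([1.0, 1 / 27, 1 / 8]))

# AD scales the columns of A; DB scales the rows of B.
assert np.allclose(A @ D, A * np.array([1.0, 3.0, 2.0]))          # column scaling
assert np.allclose(D @ B, B * np.array([[1.0], [3.0], [2.0]]))    # row scaling
```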

Example 1.7.2

Find all 2 × 2 diagonal matrices A that satisfy the equation A2 − 3A + 2I = 0.

Solution:

Assume that A = \begin{bmatrix} a & 0 \\ 0 & b \end{bmatrix} is a 2 × 2 diagonal matrix. Then,

A^2 − 3A + 2I = \begin{bmatrix} a^2-3a+2 & 0 \\ 0 & b^2-3b+2 \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}.

Hence, a^2 − 3a + 2 = 0 and b^2 − 3b + 2 = 0. That is, (a − 1)(a − 2) = 0 and (b − 1)(b − 2) = 0, which implies that a = 1 or 2 and b = 1 or 2. Therefore,

A ∈ \left\{ \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \begin{bmatrix} 1 & 0 \\ 0 & 2 \end{bmatrix}, \begin{bmatrix} 2 & 0 \\ 0 & 1 \end{bmatrix}, \begin{bmatrix} 2 & 0 \\ 0 & 2 \end{bmatrix} \right\}.

Remark 1.7.2 Triangular Matrices

The "lower triangular matrix" L = [l_{ij}] is an n × n matrix such that l_{ij} = 0 for all i < j.
The "upper triangular matrix" U = [u_{ij}] is an n × n matrix such that u_{ij} = 0 for all i > j.

L = \begin{bmatrix} l_{11} & 0 & \cdots & 0 \\ l_{21} & l_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ l_{n1} & l_{n2} & \cdots & l_{nn} \end{bmatrix}, and U = \begin{bmatrix} u_{11} & u_{12} & \cdots & u_{1n} \\ 0 & u_{22} & \cdots & u_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & u_{nn} \end{bmatrix}

Theorem 1.7.1

• The transpose of a lower triangular matrix is an upper triangular matrix, and the transpose of an upper triangular matrix is a lower triangular matrix.
• The product of lower triangular matrices is lower triangular, and the product of upper triangular matrices is upper triangular.
• A triangular matrix is invertible if and only if its diagonal entries are all nonzero.
• The inverse of an invertible lower triangular matrix is lower triangular, and the inverse of an invertible upper triangular matrix is upper triangular.

Example 1.7.3
 
Find a lower triangular matrix that satisfies A^3 = \begin{bmatrix} 8 & 0 \\ 9 & -1 \end{bmatrix}.

Solution:

Assume that A = \begin{bmatrix} a & 0 \\ b & c \end{bmatrix} is a 2 × 2 lower triangular matrix. Then,

A^3 = \begin{bmatrix} a^3 & 0 \\ b(a^2+ac+c^2) & c^3 \end{bmatrix} = \begin{bmatrix} 8 & 0 \\ 9 & -1 \end{bmatrix}.

Hence a = 2 and c = −1, and thus b(4 − 2 + 1) = 3b = 9 implies that b = 3.

Definition 1.7.1 Symmetric Matrix

A square matrix A is called symmetric if AT = A. It is called skew-symmetric if AT = −A.

Remark 1.7.3

A square matrix A = [a_{ij}] is symmetric if a_{ij} = a_{ji}, and it is skew-symmetric if a_{ij} = −a_{ji}. For example,

\begin{bmatrix} 1 & 2 & 3 \\ 2 & 4 & 5 \\ 3 & 5 & 0 \end{bmatrix} is symmetric, and \begin{bmatrix} 0 & -2 & 3 \\ 2 & 0 & -5 \\ -3 & 5 & 0 \end{bmatrix} is skew-symmetric.

Theorem 1.7.2

If A and B are symmetric matrices with the same size, and k is any scalar, then:
1. AT is symmetric.
2. A + B and A − B are symmetric.
3. kA is symmetric.

Proof:

Note that A and B are symmetric matrices and hence AT = A and B T = B. Then,
1. (AT )T = A = AT . Then AT is symmetric.
2. (A ± B)T = AT ± B T = A ± B. Then A + B and A − B are symmetric.
3. (kA)T = kAT = kA. Then, kA is symmetric.

Theorem 1.7.3

If A and B are symmetric matrices of the same size, then AB is symmetric if and only if AB = BA.

Theorem 1.7.4

If A is a symmetric invertible matrix, then A−1 is symmetric.

Proof:

Assume that A is symmetric invertible matrix. Then A−1 is symmetric as well since

(A−1 )T = (AT )−1 = A−1 .

Example 1.7.4

Show that AT A and AAT are both symmetric matrices.

Solution:

It is clear that (AT A)T = AT (AT )T = AT A which shows that AT A is symmetric. In addition,
(AAT )T = (AT )T AT = AAT shows that AAT is also symmetric.

Example 1.7.5

1. If A is nonsingular skew-symmetric matrix, then A−1 is skew-symmetric.


2. If A and B are skew-symmetric matrices, then so are AT , A + B, A − B and kA for any
scalar k.
3. Every square matrix A can be expressed as the sum of a symmetric matrix and a skew-
symmetric matrix.

Solution:

1. Assume that A^T = −A. Then (A^{-1})^T = (A^T)^{-1} = (−A)^{-1} = −A^{-1}. That is, A^{-1} is skew-symmetric.
2. We only show that A + B is skew-symmetric: (A + B)^T = A^T + B^T = −A + (−B) = −(A + B).
3. If A is any square matrix, then A = \frac{1}{2}(A + A^T) + \frac{1}{2}(A − A^T). Then we only need to show that \frac{1}{2}(A + A^T) and \frac{1}{2}(A − A^T) are symmetric and skew-symmetric matrices, respectively.
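The symmetric/skew-symmetric split is a one-liner in NumPy. Here it is applied to a concrete matrix (the same one that appears in Exercise 1.7.7 below):

```python
import numpy as np

A = np.array([[2.0, -1.0, 3.0],
              [0.0, 4.0, 1.0],
              [1.0, -2.0, -3.0]])

S = (A + A.T) / 2     # symmetric part
K = (A - A.T) / 2     # skew-symmetric part

assert np.allclose(S, S.T)        # S^T = S
assert np.allclose(K, -K.T)       # K^T = -K
assert np.allclose(S + K, A)      # A = S + K
```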

Theorem 1.7.5

If A is an invertible matrix, then AAT and AT A are also invertible.

Proof:

Since A is invertible, AT is also invertible. Then, AAT and AT A are invertible since they are
product of two invertible matrices.

Exercise 1.7.1

Show that if A and B are symmetric matrices, then AB − BA is a skew-symmetric matrix.

Exercise 1.7.2

Let A be a 2 × 2 skew-symmetric matrix. Show that if A^2 = A, then A = 0.

Exercise 1.7.3

If A and B are lower triangular matrices, show that A + B is lower triangular as well.

Exercise 1.7.4

If A and B are skew-symmetric matrices and AB = BA, then AB is symmetric.

Exercise 1.7.5

Let A ∈ M_{n×n}. Show that

1. A^T + A is symmetric.

2. A − A^T is skew-symmetric.

Exercise 1.7.6

Let A ∈ Mn×n . Then A can be written as A = S + K, where S is symmetric matrix and K


is skew-symmetric matrix.

Exercise 1.7.7
 
Let A = \begin{bmatrix} 2 & -1 & 3 \\ 0 & 4 & 1 \\ 1 & -2 & -3 \end{bmatrix}. Find a symmetric matrix S and a skew-symmetric matrix K such that A = S + K.

Exercise 1.7.8

Find all values of a, b, c and d for which A is skew-symmetric, where


 
A = \begin{bmatrix} 0 & 2a-3b+c & 3a-5b+5c \\ -2 & 0 & 5a-8b+6c \\ -3 & -5 & d \end{bmatrix}.
Chapter 2: Determinants

Section 2.1: Determinants by Cofactor Expansion

 
Recall from Theorem 1.4.5 that the 2 × 2 matrix A = \begin{bmatrix} a & b \\ c & d \end{bmatrix} is invertible if and only if ad − bc ≠ 0. The expression ad − bc is called the determinant of A. It is denoted by det(A) or |A|. That is,

det(A) = |A| = \begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad − bc,

and the inverse of A (if it exists) can be expressed as

A^{-1} = \frac{1}{\det(A)} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}.

Remark 2.1.1 Determinant of a 2 × 2 Matrix


 
If A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}, then det(A) = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} = a_{11} a_{22} − a_{12} a_{21}.

Definition 2.1.1 Minors and Cofactors

If A is a square matrix, then the minor of entry aij is denoted by Mij and is defined to be
the determinant of the submatrix that remains after the ith row and j th column are deleted
from A. The number (−1)i+j Mij is denoted by Cij and is called the cofactor of entry aij .
Moreover, the cofactor matrix denoted by coef(A) = [Cij ] where 1 ≤ i, j ≤ n.

Example 2.1.1

Compute the minors, the cofactors, and the cofactor matrix of A, where

A = \begin{bmatrix} 1 & 2 & 1 \\ 0 & 1 & -1 \\ 3 & 1 & -2 \end{bmatrix}.

Solution:

M_{11} = \begin{vmatrix} 1 & -1 \\ 1 & -2 \end{vmatrix} = -1, M_{12} = \begin{vmatrix} 0 & -1 \\ 3 & -2 \end{vmatrix} = 3, M_{13} = \begin{vmatrix} 0 & 1 \\ 3 & 1 \end{vmatrix} = -3,
M_{21} = \begin{vmatrix} 2 & 1 \\ 1 & -2 \end{vmatrix} = -5, M_{22} = \begin{vmatrix} 1 & 1 \\ 3 & -2 \end{vmatrix} = -5, M_{23} = \begin{vmatrix} 1 & 2 \\ 3 & 1 \end{vmatrix} = -5,
M_{31} = \begin{vmatrix} 2 & 1 \\ 1 & -1 \end{vmatrix} = -3, M_{32} = \begin{vmatrix} 1 & 1 \\ 0 & -1 \end{vmatrix} = -1, M_{33} = \begin{vmatrix} 1 & 2 \\ 0 & 1 \end{vmatrix} = 1.

Thus, the cofactors are:

C_{11} = (−1)^2 M_{11} = −1, C_{12} = (−1)^3 M_{12} = −3, C_{13} = (−1)^4 M_{13} = −3,
C_{21} = (−1)^3 M_{21} = 5, C_{22} = (−1)^4 M_{22} = −5, C_{23} = (−1)^5 M_{23} = 5,
C_{31} = (−1)^4 M_{31} = −3, C_{32} = (−1)^5 M_{32} = 1, C_{33} = (−1)^6 M_{33} = 1.

Therefore, coef(A) = \begin{bmatrix} -1 & -3 & -3 \\ 5 & -5 & 5 \\ -3 & 1 & 1 \end{bmatrix}.
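The minor/cofactor computations above are mechanical, which makes them a natural candidate for code. A sketch (the helper names `minor` and `cofactor_matrix` are ours):

```python
import numpy as np

def minor(A, i, j):
    """Determinant of A with row i and column j deleted (0-based indices)."""
    sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
    return np.linalg.det(sub)

def cofactor_matrix(A):
    """coef(A): the matrix of cofactors C_ij = (-1)^(i+j) M_ij."""
    n = A.shape[0]
    return np.array([[(-1) ** (i + j) * minor(A, i, j) for j in range(n)]
                     for i in range(n)])

# The matrix of Example 2.1.1.
A = np.array([[1.0, 2.0, 1.0],
              [0.0, 1.0, -1.0],
              [3.0, 1.0, -2.0]])
expected = np.array([[-1.0, -3.0, -3.0],
                     [5.0, -5.0, 5.0],
                     [-3.0, 1.0, 1.0]])
assert np.allclose(cofactor_matrix(A), expected)
```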

Remark 2.1.2

Note that the minors Mij and the cofactors Cij are either the same or the negative of each
other. This results from the (−1)i+j entry in the cofactor value.
This pattern holds for a general n × n matrix and can be seen in the following sign matrix:

\begin{bmatrix} + & - & + & \cdots \\ - & + & - & \cdots \\ + & - & + & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{bmatrix}

For example, C_{11} = M_{11}, C_{21} = −M_{21}, and C_{31} = M_{31}.

Theorem 2.1.1

If A is n × n matrix, then choosing any row or any column of A, the number obtained by
multiplying the entries in that row or column by the corresponding cofactors and adding the
resulting product is always the same.

Definition 2.1.2

If A is an n × n matrix, then the number obtained by summing the products of the entries in any row (or column) with their corresponding cofactors is called the determinant of A, and the sums themselves are called cofactor expansions of A.
That is, the determinant of A using the cofactor expansion along the ith row is:

det (A) = ai1 Ci1 + ai2 Ci2 + · · · + ain Cin .

While the determinant of A using the cofactor expansion along the j th column is:

det (A) = a1j C1j + a2j C2j + · · · + anj Cnj .

Theorem 2.1.2

Let A be an n × n matrix. Then for each 1 ≤ i, k ≤ n we have

a_{i1} C_{k1} + a_{i2} C_{k2} + · · · + a_{in} C_{kn} = \begin{cases} \det(A) & \text{if } i = k, \\ 0 & \text{if } i \neq k, \end{cases}

and for each 1 ≤ j, k ≤ n

a_{1j} C_{1k} + a_{2j} C_{2k} + · · · + a_{nj} C_{nk} = \begin{cases} \det(A) & \text{if } j = k, \\ 0 & \text{if } j \neq k. \end{cases}

Example 2.1.2

Use the cofactor expansion method to compute det (A),


 
A = \begin{bmatrix} 1 & 2 & 1 \\ 0 & 1 & -1 \\ 3 & 1 & -2 \end{bmatrix}.

Solution:

Here we can choose any row or column to compute the determinant. Using the cofactor
expansion along the 1st row, we get

det (A) = a11 C11 + a12 C12 + a13 C13 = (1)(−1) + (2)(−3) + (1)(−3) = −10.

If we choose the second row, then we only need to know two cofactors:

det (A) = a21 C21 + a22 C22 + a23 C23 = (0)(C21 ) + (1)(−5) + (−1)(5) = −10.

Example 2.1.3

Compute det (A) using the cofactor expansion method, where


 
A = \begin{bmatrix} 0 & 3 & 0 & 1 \\ 2 & 1 & 1 & 2 \\ 0 & 0 & 1 & 2 \\ 0 & 1 & 0 & 1 \end{bmatrix}.

Solution:

We use the cofactor expansion along the 1st column since it has the most zeros:

\begin{vmatrix} 0 & 3 & 0 & 1 \\ 2 & 1 & 1 & 2 \\ 0 & 0 & 1 & 2 \\ 0 & 1 & 0 & 1 \end{vmatrix} = (0) C_{11} + (−1)^{2+1} (2) \begin{vmatrix} 3 & 0 & 1 \\ 0 & 1 & 2 \\ 1 & 0 & 1 \end{vmatrix} + (0) C_{31} + (0) C_{41}

= (−2) \begin{vmatrix} 3 & 0 & 1 \\ 0 & 1 & 2 \\ 1 & 0 & 1 \end{vmatrix} = (−2)(1) \begin{vmatrix} 3 & 1 \\ 1 & 1 \end{vmatrix} = (−2)[3 − 1] = −4,

where the 3 × 3 determinant was expanded along its 2nd column.

Remark 2.1.3 Determinants of 2 × 2 and 3 × 3 Matrices

Determinants of 2 × 2 and 3 × 3 matrices can be evaluated by the diagonal (stripe) rule.

For a 2 × 2 matrix C_1 = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}, we get det(C_1) = a_{11} a_{22} − a_{12} a_{21}: the product along the main diagonal minus the product along the anti-diagonal.

For a 3 × 3 matrix C_2 = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}, we first recopy the first two columns to the right of the matrix, then add up the products of the three left-to-right diagonals and subtract the products of the three right-to-left diagonals:

det(C_2) = \left( a_{11} a_{22} a_{33} + a_{12} a_{23} a_{31} + a_{13} a_{21} a_{32} \right) − \left( a_{13} a_{22} a_{31} + a_{11} a_{23} a_{32} + a_{12} a_{21} a_{33} \right).

For example, \begin{vmatrix} 3 & 2 \\ 4 & 2 \end{vmatrix} = 6 − 8 = −2, and \begin{vmatrix} 3 & 0 & 1 \\ 0 & 1 & 2 \\ 1 & 0 & 1 \end{vmatrix} = (3 + 0 + 0) − (1 + 0 + 0) = 2.

Theorem 2.1.3 Determinant of Triangular Matrices

If A is an n × n triangular matrix (upper triangular, lower triangular, or diagonal), then

det (A) = a11 a22 · · · ann .

Example 2.1.4

We use Theorem 2.1.3 to compute:

\begin{vmatrix} 3 & 0 & 0 \\ 2 & 5 & 0 \\ 100 & 1987 & 2 \end{vmatrix} = (3)(5)(2) = 30.

Can you check the answer using another method?
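One way to check the answer: compare the product of the diagonal entries with a general-purpose determinant routine.

```python
import numpy as np

# Example 2.1.4: a lower triangular matrix.
A = np.array([[3.0, 0.0, 0.0],
              [2.0, 5.0, 0.0],
              [100.0, 1987.0, 2.0]])

# Theorem 2.1.3: det of a triangular matrix = product of diagonal entries.
assert np.isclose(np.prod(np.diag(A)), 30.0)

# Agrees with NumPy's general determinant.
assert np.isclose(np.linalg.det(A), 30.0)
```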

Exercise 2.1.1
 
Let D = \begin{bmatrix} 0 & 4 & 4 & 0 \\ 1 & 1 & 2 & 0 \\ 1 & 3 & 5 & 3 \\ 0 & 1 & 2 & 6 \end{bmatrix}. Evaluate |D|.
Final answer: |D| = −12.

Exercise 2.1.2
     
Let A = \begin{bmatrix} a+e & b+f \\ c & d \end{bmatrix}, B = \begin{bmatrix} a & b \\ c & d \end{bmatrix}, and C = \begin{bmatrix} e & f \\ c & d \end{bmatrix}. Show that |A| = |B| + |C|.

Exercise 2.1.3
 
Let A = \begin{bmatrix} 1 & 4 & 2 \\ 5 & -3 & 6 \\ 2 & 3 & 2 \end{bmatrix}. Compute the cofactors C_{11}, C_{12}, and C_{13}, and show that 5C_{11} − 3C_{12} + 6C_{13} = 0.

Exercise 2.1.4

Find det(A) for

A = \begin{bmatrix} 2 & 0 & 1 \\ -2 & 3 & 4 \\ 5 & 1 & 2 \end{bmatrix}.

Section 2.2: Evaluating Determinants by Row Reduction

In this section, we introduce some basic properties and theorems to compute determinants.

Theorem 2.2.1

Let A be an n × n matrix.
1. If A is a square matrix with a row of zeros (or a column of zeros), then det(A) = 0.
2. If A is a square matrix, then det(A) = det(A^T):

\begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} = \begin{vmatrix} a_{11} & a_{21} & a_{31} \\ a_{12} & a_{22} & a_{32} \\ a_{13} & a_{23} & a_{33} \end{vmatrix}

3. If B is obtained from A by multiplying a single row (or a single column) by a scalar k, then det(B) = k det(A). This result can be generalized as det(kA) = k^n det(A):

\begin{vmatrix} k a_{11} & k a_{12} & k a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} = k \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix}

4. If B is obtained from A by interchanging two rows (or columns), then det(B) = − det(A):

\begin{vmatrix} a_{21} & a_{22} & a_{23} \\ a_{11} & a_{12} & a_{13} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} = − \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix}

5. If B is obtained from A by adding a multiple of one row (or one column) to another row (or column, respectively), then det(B) = det(A):

\begin{vmatrix} a_{11}+k a_{21} & a_{12}+k a_{22} & a_{13}+k a_{23} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} = \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix}

6. If A has two proportional rows (or columns), then det(A) = 0:

\begin{vmatrix} a_{11} & a_{12} & a_{13} \\ k a_{11} & k a_{12} & k a_{13} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} = 0

Example 2.2.1

Evaluate the determinant of the matrix A, where

A = \begin{bmatrix} 2 & -2 & 5 & 3 \\ 6 & -6 & 15 & 9 \\ 0 & 3 & 1 & 9 \\ 1 & 4 & 2 & -1 \end{bmatrix}.

Solution:

Note that the second row of A is 3 times the first one. Then, det(A) = 0:

\begin{vmatrix} 2 & -2 & 5 & 3 \\ 6 & -6 & 15 & 9 \\ 0 & 3 & 1 & 9 \\ 1 & 4 & 2 & -1 \end{vmatrix} \overset{r_2-3r_1}{=} \begin{vmatrix} 2 & -2 & 5 & 3 \\ 0 & 0 & 0 & 0 \\ 0 & 3 & 1 & 9 \\ 1 & 4 & 2 & -1 \end{vmatrix} = 0.

Example 2.2.2

Evaluate the determinant of the matrix A, where

A = \begin{bmatrix} 2 & -2 & 5 \\ 0 & 3 & 1 \\ 1 & 4 & 2 \end{bmatrix}.

Solution:

\begin{vmatrix} 2 & -2 & 5 \\ 0 & 3 & 1 \\ 1 & 4 & 2 \end{vmatrix} \overset{r_1 \leftrightarrow r_3}{=} (−1) \begin{vmatrix} 1 & 4 & 2 \\ 0 & 3 & 1 \\ 2 & -2 & 5 \end{vmatrix} \overset{r_3-2r_1}{=} (−1) \begin{vmatrix} 1 & 4 & 2 \\ 0 & 3 & 1 \\ 0 & -10 & 1 \end{vmatrix}

\overset{c_2 \leftrightarrow c_3}{=} (−1)(−1) \begin{vmatrix} 1 & 2 & 4 \\ 0 & 1 & 3 \\ 0 & 1 & -10 \end{vmatrix} \overset{r_3-r_2}{=} \begin{vmatrix} 1 & 2 & 4 \\ 0 & 1 & 3 \\ 0 & 0 & -13 \end{vmatrix} = (1)(1)(−13) = −13.

Example 2.2.3

Evaluate the determinant of the matrix A, where

A = \begin{bmatrix} 2 & -2 & 5 & 1 \\ 3 & 3 & 1 & 3 \\ 1 & 1 & 2 & 1 \\ 4 & 3 & 5 & 6 \end{bmatrix}.

Solution:

\begin{vmatrix} 2 & -2 & 5 & 1 \\ 3 & 3 & 1 & 3 \\ 1 & 1 & 2 & 1 \\ 4 & 3 & 5 & 6 \end{vmatrix} \overset{\substack{r_1-2r_3 \\ r_2-3r_3 \\ r_4-4r_3}}{=} \begin{vmatrix} 0 & -4 & 1 & -1 \\ 0 & 0 & -5 & 0 \\ 1 & 1 & 2 & 1 \\ 0 & -1 & -3 & 2 \end{vmatrix} = (+1)(1) \begin{vmatrix} -4 & 1 & -1 \\ 0 & -5 & 0 \\ -1 & -3 & 2 \end{vmatrix} = (−5) \begin{vmatrix} -4 & -1 \\ -1 & 2 \end{vmatrix} = (−5)[−8 − 1] = 45,

where we expanded along the 1st column (the entry 1 in position (3,1) carries sign (−1)^{3+1} = +1) and then along the 2nd row (the entry −5 carries sign (−1)^{2+2} = +1).

Example 2.2.4
   
Let \begin{vmatrix} a & b & c \\ d & e & f \\ g & h & i \end{vmatrix} = −6. Compute \begin{vmatrix} 3g & 3h & 3i \\ 2a+d & 2b+e & 2c+f \\ d & e & f \end{vmatrix}.

Solution:

We use the properties stated in Theorem 2.2.1:

\begin{vmatrix} 3g & 3h & 3i \\ 2a+d & 2b+e & 2c+f \\ d & e & f \end{vmatrix} \overset{\substack{r_1 \leftrightarrow r_2 \\ r_2 \leftrightarrow r_3}}{=} (−1)(−1) \begin{vmatrix} 2a+d & 2b+e & 2c+f \\ d & e & f \\ 3g & 3h & 3i \end{vmatrix} \overset{r_1-r_2}{=} \begin{vmatrix} 2a & 2b & 2c \\ d & e & f \\ 3g & 3h & 3i \end{vmatrix} = (2)(3) \begin{vmatrix} a & b & c \\ d & e & f \\ g & h & i \end{vmatrix} = (6)(−6) = −36.

Exercise 2.2.1
 
Compute det(A) where A = \begin{bmatrix} 1 & 0 & 0 & 3 \\ 2 & 7 & 0 & 6 \\ 0 & 6 & 3 & 0 \\ 2 & 3 & 1 & 5 \end{bmatrix}.
[Hint: Simply reduce A to a lower triangular matrix! Find a relation between the first and the fourth columns.]

Exercise 2.2.2

Let \begin{vmatrix} a & b & c \\ d & e & f \\ g & h & i \end{vmatrix} = −6. Compute \begin{vmatrix} -a & -b & -c \\ 2d & 2e & 2f \\ 3g & 3h & 3i \end{vmatrix}.
Final answer: 36.

Exercise 2.2.3

Solve for x:

\begin{vmatrix} x & 1 \\ 1 & x-1 \end{vmatrix} = \begin{vmatrix} 0 & 1 & 0 \\ x & 3 & -2 \\ 1 & 5 & -1 \end{vmatrix}.

Final answer: x = 1.

Exercise 2.2.4

Given that \begin{vmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{vmatrix} = 7, evaluate \begin{vmatrix} b_1 & b_2 & b_1-3b_3 \\ a_1 & a_2 & a_1-3a_3 \\ c_1 & c_2 & c_1-3c_3 \end{vmatrix}.
Final answer: 21.

Exercise 2.2.5
   
Let A = \begin{bmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{bmatrix}, and B = \begin{bmatrix} 2a_3 & 2a_2 & 2a_1 \\ b_3-a_3 & b_2-a_2 & b_1-a_1 \\ c_3+3b_3 & c_2+3b_2 & c_1+3b_1 \end{bmatrix}. If |A| = −4, find |B|.
Final answer: 8.

Exercise 2.2.6
 
Let A be a matrix with A^{-1} = \begin{bmatrix} 7 & 1 & 0 & 3 \\ 2 & 0 & 0 & 0 \\ 1 & 3 & 5 & 4 \\ 6 & 2 & 0 & 5 \end{bmatrix}. Find det(A).

Section 2.3: Properties of Determinants

Theorem 2.3.1

Let A and B be n × n matrices.


1. In general, det (A + B) 6= det (A) + det (B).
2. A is invertible if and only if det (A) 6= 0.
3. det (AB) = det (A) det (B).

Theorem 2.3.2
If A is an invertible matrix, then det(A^{-1}) = \frac{1}{\det(A)}.

Proof:

Since A^{-1} A = I, it follows that det(A^{-1} A) = det(I) = 1. That is, det(A^{-1}) det(A) = 1. Since A is invertible, det(A) ≠ 0 and hence

det(A^{-1}) = \frac{1}{\det(A)}.

Definition 2.3.1 Adjoint

Let A be an n × n matrix. The adjoint matrix of A, denoted by adj(A), is given by

adj (A) = (coef(A))T .

Example 2.3.1
 
Compute the adjoint of A, where A is as given in Example 2.1.1: A = \begin{bmatrix} 1 & 2 & 1 \\ 0 & 1 & -1 \\ 3 & 1 & -2 \end{bmatrix}.

Solution:

In Example 2.1.1, we got coef(A) = \begin{bmatrix} -1 & -3 & -3 \\ 5 & -5 & 5 \\ -3 & 1 & 1 \end{bmatrix}. Thus adj(A) = (coef(A))^T = \begin{bmatrix} -1 & 5 & -3 \\ -3 & -5 & 1 \\ -3 & 5 & 1 \end{bmatrix}.

Example 2.3.2
 
Suppose that adj(A) = \begin{bmatrix} 1 & 2 \\ 4 & 6 \end{bmatrix}. Find A.

Solution:

Assume that A = \begin{bmatrix} x & y \\ z & w \end{bmatrix}. Then coef(A) = \begin{bmatrix} w & -z \\ -y & x \end{bmatrix}. Therefore,

adj(A) = (coef(A))^T = \begin{bmatrix} w & -y \\ -z & x \end{bmatrix} = \begin{bmatrix} 1 & 2 \\ 4 & 6 \end{bmatrix}.

Thus, x = 6, y = −2, z = −4, and w = 1, and A = \begin{bmatrix} 6 & -2 \\ -4 & 1 \end{bmatrix}.

Theorem 2.3.3

Let A = [aij ] ∈ Mn×n . Then,

A adj (A) = adj (A) A = det (A) In .

Proof:

We first prove that A adj(A) = det(A) I_n:

A adj(A) = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix} \begin{bmatrix} C_{11} & C_{21} & \cdots & C_{n1} \\ C_{12} & C_{22} & \cdots & C_{n2} \\ \vdots & \vdots & \ddots & \vdots \\ C_{1n} & C_{2n} & \cdots & C_{nn} \end{bmatrix}

= \begin{bmatrix} \det(A) & 0 & \cdots & 0 \\ 0 & \det(A) & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \det(A) \end{bmatrix} = \det(A) \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix} = \det(A) I_n,

since by Theorem 2.1.2 each diagonal entry is a cofactor expansion of det(A), while each off-diagonal entry pairs the entries of one row with the cofactors of another row and hence equals 0. The proof of adj(A) A = det(A) I_n is similar.

Theorem 2.3.4

If A is an invertible matrix, then A^{-1} = \frac{1}{\det(A)} adj(A).

Proof:

Theorem 2.3.3 implies that A adj(A) = det(A) I. Since A is invertible, det(A) ≠ 0. Hence

\frac{1}{\det(A)} \left[ A \, adj(A) \right] = \frac{1}{\det(A)} \det(A) I \; ⟹ \; A \left[ \frac{1}{\det(A)} adj(A) \right] = I.

Multiplying both sides (from left) by A^{-1} implies

A^{-1} = \frac{1}{\det(A)} adj(A).

Example 2.3.3

Compute the inverse of A, where A is the 3 × 3 matrix of Example 2.1.1:

A = \begin{bmatrix} 1 & 2 & 1 \\ 0 & 1 & -1 \\ 3 & 1 & -2 \end{bmatrix}.

Solution:

The determinant of A computed in Example 2.1.2 is −10. Moreover, as we have seen in Example 2.3.1,

adj(A) = \begin{bmatrix} -1 & 5 & -3 \\ -3 & -5 & 1 \\ -3 & 5 & 1 \end{bmatrix}.

Therefore, by Theorem 2.3.4 we get

A^{-1} = \frac{1}{-10} \begin{bmatrix} -1 & 5 & -3 \\ -3 & -5 & 1 \\ -3 & 5 & 1 \end{bmatrix}.
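The adjugate formula can be checked numerically. Building on the cofactor routine from Section 2.1 (the helper name `adjugate` is ours):

```python
import numpy as np

def adjugate(A):
    """adj(A): the transpose of the cofactor matrix of A."""
    n = A.shape[0]
    C = np.array([[(-1) ** (i + j) *
                   np.linalg.det(np.delete(np.delete(A, i, axis=0), j, axis=1))
                   for j in range(n)] for i in range(n)])
    return C.T

# Example 2.3.3: A^{-1} = adj(A) / det(A).
A = np.array([[1.0, 2.0, 1.0],
              [0.0, 1.0, -1.0],
              [3.0, 1.0, -2.0]])
adjA = adjugate(A)
assert np.allclose(adjA, [[-1.0, 5.0, -3.0],
                          [-3.0, -5.0, 1.0],
                          [-3.0, 5.0, 1.0]])
assert np.allclose(adjA / np.linalg.det(A), np.linalg.inv(A))
```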

Example 2.3.4

Show that if A ∈ Mn×n is a skew-symmetric matrix and n is odd, then det(A) = 0.

Solution:

Since A is skew-symmetric, A^T = −A. Taking the determinant of both sides,

    det(A^T) = det(−A)  ⇐⇒  det(A) = (−1)^n det(A).

Since n is odd, we get det(A) = −det(A), which means that det(A) = 0.
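The claim is easy to test on a random example (a sketch assuming NumPy): any matrix of the form M − M^T is skew-symmetric, and in odd dimension its determinant vanishes.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M - M.T                        # A^T = -A, so A is skew-symmetric
print(np.allclose(A.T, -A))        # True
print(abs(np.linalg.det(A)))       # 0 up to rounding error
```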

TRUE or FALSE:

? If A, B ∈ Mn×n with det(A) = det(B), then A = B. (FALSE).
  Reason: I2 ≠ −I2 while |I2| = 1 and |−I2| = (−1)² |I2| = 1.

Example 2.3.5

Show that if A⁻¹ = A^T, then det(A) = 1 or det(A) = −1.

Solution:

Since A⁻¹ = A^T, taking determinants of both sides gives

    |A⁻¹| = |A^T|  ⇐⇒  1/|A| = |A|  ⇐⇒  |A|² = 1  ⇐⇒  |A| = ±1.
Example 2.3.6

Show that if A is an invertible n × n matrix, then det(adj(A)) = det(A)^(n−1).

Solution:

Since A is invertible, det(A) ≠ 0. Theorem 2.3.3 implies

    A adj(A) = |A| In
    |A adj(A)| = | |A| In |
    det(A) det(adj(A)) = det(A)^n |In|
    det(adj(A)) = det(A)^(n−1).
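For an invertible A, Theorem 2.3.3 gives adj(A) = det(A) A⁻¹, which makes the identity easy to check numerically (a sketch assuming NumPy; the diagonal shift below just makes the random matrix comfortably invertible):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n)) + n * np.eye(n)   # comfortably invertible
adjA = np.linalg.det(A) * np.linalg.inv(A)        # adj(A), by Theorem 2.3.3
print(np.isclose(np.linalg.det(adjA), np.linalg.det(A) ** (n - 1)))  # True
```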

Example 2.3.7

Let A, B ∈ M3×3 with det(A) = 2 and det(B) = −2. Find det(2 A⁻¹ (B²)^T).

Solution:

    |2 A⁻¹ (B²)^T| = 2³ |A⁻¹| |B²| = 8 · (1/|A|) · |B| |B|
                   = 8 · (1/2) · (−2)(−2) = 16.
Example 2.3.8

If A is a 5 × 5 matrix with |A| = −2, then find

1. |4 (A²)^T|.
2. |A⁻¹ adj(A)|.

Solution:

1. |4 (A²)^T| = 4⁵ |A²| = 4⁵ (−2)² = 4⁶.
2. |A⁻¹ adj(A)| = |A⁻¹| |adj(A)| = (−1/2)(−2)⁴ = −8.
Theorem 2.3.5 The Extended Version of Theorem 1.6.4

If A is an n × n matrix, then the following statements are equivalent:

1. A is invertible.
2. AX = O has only the trivial solution.
3. A is row equivalent to In.
4. AX = B is consistent for every n × 1 matrix B.
5. AX = B has exactly one solution for every n × 1 matrix B.
6. det(A) ≠ 0.
Exercise 2.3.1

Let adj(A) = [ 2  8 ; 1  4 ]. Find A.
Exercise 2.3.2

Let A be an invertible n × n matrix.

1. Show that adj(A) is invertible.
2. Prove that adj(A⁻¹) = (1/det(A)) A.
Exercise 2.3.3

Let A = [  1  -1  2 ]
        [  0   3  1 ]
        [ -1   3  1 ] .

1. Compute det(A) using a cofactor expansion along row 2.
2. Find A adj(A).
3. Find det(2 A⁻¹ adj(A)).
Exercise 2.3.4

Find all values of α for which the matrix

    [ 1  2  4  ]
    [ 1  3  9  ]
    [ 1  α  α² ]

is singular.

Final answer: α = 2, 3.
Exercise 2.3.5

Let A and B be two n × n matrices such that A is invertible and B is singular. Prove that A⁻¹B is singular.

Exercise 2.3.6

If A and B are 2 × 2 matrices with det(A) = 2 and det(B) = 5, compute |3 A² (A B⁻¹)^T|.
Exercise 2.3.7

Let adj(A) = [ 2  8 ; 1  4 ]. Find A.
Exercise 2.3.8

Let A be a nonsingular n × n matrix.

1. Show that adj(A) is nonsingular.
2. Prove that adj(A⁻¹) = (1/det(A)) A.
Exercise 2.3.9

Show that if A is non-singular, then adj(A) is non-singular and

    (adj(A))⁻¹ = (1/|A|) A = adj(A⁻¹).

Hint: Use Theorem 2.3.3.
Exercise 2.3.10

Let A be a non-singular 4 × 4 matrix with det(A⁻¹) = −2. Find

(a) det(adj(A)), and
(b) |(1/2) A^T adj(A⁻¹)|.

Final answer: (a) −1/8 and (b) 1/4.
Exercise 2.3.11

Let A be a 4 × 4 matrix with det(adj(A)) = −8. Find 2A adj(2A).

Final answer: −32 I4.
Exercise 2.3.12

Let A be an invertible n × n matrix. Show that det(adj(A)) = |A^(n−1)|.

Hint: Look at Example 2.3.6.
Exercise 2.3.13

Let A be an invertible n × n matrix. Show that adj(adj(A)) = det(A)^(n−2) A.

Hint: Use Theorem 2.3.3.
Exercise 2.3.14

Let A, B ∈ Mn×n. Show that, if A B = In, then B A = In.

Exercise 2.3.15

Show that if |A B| = 0, then either det(A) = 0 or det(B) = 0.

Exercise 2.3.16

Show that if A B = In, then det(A) ≠ 0 and det(B) ≠ 0.

Exercise 2.3.17

Show that if A is non-singular and A² = A, then det(A) = 1.
Exercise 2.3.18

Show that for any A, B, C ∈ Mn×n, if A B = A C and det(A) ≠ 0, then B = C.

Exercise 2.3.19

Find all values of α for which the matrix

    [ 1  2  4  ]
    [ 1  3  9  ]
    [ 1  α  α² ]

is singular.

Exercise 2.3.20

Let A and B be two n × n matrices such that A is invertible and B is singular. Prove that A⁻¹B is singular.
Exercise 2.3.21

If A and B are 2 × 2 matrices with det(A) = 2 and det(B) = 5, compute |3 A² (A B⁻¹)^T|.

Chapter 3: Euclidean Vector Spaces

Section 3.1: Vectors in Rn
In this chapter, we deal with vectors in Rn, which we sometimes simply call n-vectors. For instance,

    (row vector)  x = [ x1  x2  ···  xn ],    and    (column vector)  y = [ y1 ]
                                                                         [ y2 ]
                                                                         [ ⋮  ]
                                                                         [ yn ]

are vectors in Rn. In the notation of Rn, a vector is simply written as x = (x1, x2, ···, xn), while a point is written as x(x1, x2, ···, xn).
Remark 3.1.1 Vectors in Rn

Vectors in R3 can be represented by arrows starting at an initial point and pointing at a terminal point. [Figure: the vector x has its initial point at P and its terminal point at Q.]

For instance, if P(p1, p2, p3) and Q(q1, q2, q3) are points in R3, then the vector x is written as x = PQ (the arrow from P to Q) = (q1 − p1, q2 − p2, q3 − p3). This vector is different from its opposite vector −x, which is given by −x = QP.
Definition 3.1.1 Vectors

Let x = (x1, x2, ···, xn), y = (y1, y2, ···, yn) ∈ Rn and let c ∈ R.

1. The vector 0 = (0, 0, ···, 0) ∈ Rn is called the zero vector.
2. x and y are said to be equal if x1 = y1, x2 = y2, ···, xn = yn.
3. When adding or subtracting two vectors, they must have the same number of components:

       x ± y = (x1 ± y1, x2 ± y2, ···, xn ± yn) ∈ Rn.

4. Multiplying a vector by a scalar:

       c x = (c x1, c x2, ···, c xn) ∈ Rn.
Remark 3.1.2

Try to prove the statements in the following two theorems.

Theorem 3.1.1 Closure of Rn under Vector Addition and Scalar Multiplication

Let x, y, z ∈ Rn and let c, d ∈ R. Then

1. Rn is closed under vector addition (i.e. x + y ∈ Rn):
   (a) x + y = y + x,
   (b) x + (y + z) = (x + y) + z,
   (c) there exists a vector 0 ∈ Rn such that x + 0 = 0 + x = x ("additive identity"),
   (d) for any x ∈ Rn, there exists (−x) such that x + (−x) = (−x) + x = 0 ("additive inverse").
2. Rn is closed under scalar multiplication (i.e. c x ∈ Rn):
   (a) c (x + y) = c x + c y,
   (b) (c + d) x = c x + d x,
   (c) c (d x) = (c d) x,
   (d) 1 x = x, where 1 ∈ R.
Theorem 3.1.2

If x is a vector in Rn and c is any scalar, then

1. 0 x = 0.
2. c 0 = 0.
3. (−1) x = −x.
Definition 3.1.2 Linear Combinations

If x ∈ Rn, then we say that x is a linear combination of the vectors v1, v2, ···, vn ∈ Rn if it can be expressed as

    x = c1 v1 + c2 v2 + ··· + cn vn,

where c1, c2, ···, cn are scalars in R. These scalars are called the coefficients of the linear combination.
Exercise 3.1.1

Find all scalars c1, c2, c3 ∈ R such that

    c1 (1, 2, 0) + c2 (2, 1, 1) + c3 (0, 0, 1) = (0, 0, 0).

Final answer: c1 = c2 = c3 = 0. "Try to create a system of linear equations."
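The exercise reduces to a 3 × 3 linear system; the sketch below (assuming NumPy) puts the three given vectors as the columns of a matrix C and solves C c = 0.

```python
import numpy as np

C = np.array([[1.0, 2, 0],
              [2, 1, 0],
              [0, 1, 1]])          # columns: (1,2,0), (2,1,1), (0,0,1)
print(np.linalg.det(C))            # ≈ -3: nonzero, so only the trivial solution
c = np.linalg.solve(C, np.zeros(3))
print(c)                           # all zeros: c1 = c2 = c3 = 0
```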



Section 3.2: Norm, Dot Product, and Distance in Rn

Definition 3.2.1 Dot Product and Norm

Let x = (x1, x2, ···, xn) and y = (y1, y2, ···, yn) be two vectors in Rn. Then

1. The dot product of x and y is defined as

       x · y = Σ_{i=1}^{n} xi yi = x1 y1 + x2 y2 + ··· + xn yn.

2. The norm (or length) of the vector x is defined as

       ‖x‖ = √(x1² + x2² + ··· + xn²).
Theorem 3.2.1

If x is a vector in Rn and c ∈ R, then:

1. ‖x‖ ≥ 0.
2. ‖x‖ = 0 if and only if x = 0.
3. ‖c x‖ = |c| ‖x‖.

Proof:

The proof of the first two parts is easy, so we only prove the third statement of the theorem. Let x = (x1, x2, ···, xn). Then c x = (c x1, c x2, ···, c xn). Thus,

    ‖c x‖ = √((c x1)² + (c x2)² + ··· + (c xn)²) = √(c² (x1² + x2² + ··· + xn²))
          = |c| √(x1² + x2² + ··· + xn²) = |c| ‖x‖.
Remark 3.2.1

We note that ‖x‖² = x1² + x2² + ··· + xn² = x · x. Therefore, ‖x‖² = x · x.

Remark 3.2.2 Normalizing a Vector

If x is a nonzero vector in Rn, then the vector u = (1/‖x‖) x is a vector of norm 1 in the same direction as x. Clearly,

    ‖u‖ = ‖(1/‖x‖) x‖ = |1/‖x‖| ‖x‖ = (1/‖x‖) ‖x‖ = 1.
Definition 3.2.2 Unit Vector

A vector of norm 1 is called a unit vector in Rn. That is, u is a unit vector if ‖u‖ = 1.

Example 3.2.1

Let x = (1, 0, −1) and y = (0, 1, 1) be two vectors in R3. Compute x · y and ‖x‖, and construct a unit vector in the same direction as x.

Solution:

1. Clearly, x · y = 0 + 0 + (−1) = −1.
2. ‖x‖ = √(1² + 0² + (−1)²) = √2.
3. Consider

       u = (1/‖x‖) x = (1/√2) (1, 0, −1) = (1/√2, 0, −1/√2).
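The same computations in code (a sketch assuming NumPy):

```python
import numpy as np

x = np.array([1.0, 0, -1])
y = np.array([0.0, 1, 1])
dot = x @ y                        # dot product x . y
norm_x = np.linalg.norm(x)         # ||x||
u = x / norm_x                     # unit vector in the direction of x
print(dot)                         # -1.0
print(norm_x)                      # 1.414... = sqrt(2)
print(np.linalg.norm(u))           # 1.0 (up to rounding)
```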

Definition 3.2.3 Distance in Rn

Let x = (x1, x2, ···, xn) and y = (y1, y2, ···, yn) be two points in Rn. Then the distance between x and y is defined as

    d(x, y) = ‖x − y‖ = √((x1 − y1)² + (x2 − y2)² + ··· + (xn − yn)²).

Remark 3.2.3

We note that if P(p1, ···, pn), Q(q1, ···, qn) ∈ Rn, then the distance between P and Q is

    ‖P − Q‖ = ‖Q − P‖ = √((p1 − q1)² + ··· + (pn − qn)²).
Example 3.2.2

Find the distance between the points x = (2, 4, 3, 1) and y = (5, 3, 2, 1) in R4.

Solution:

x − y = (−3, 1, 1, 0) and hence

    d(x, y) = ‖x − y‖ = √((−3)² + 1² + 1² + 0²) = √11.
Remark 3.2.4 Standard Units

The vectors i = (1, 0) and j = (0, 1) are called the standard unit vectors in R2. The vectors i = (1, 0, 0), j = (0, 1, 0), and k = (0, 0, 1) are called the standard unit vectors in R3. In general, the vectors e1 = (1, 0, ···, 0), e2 = (0, 1, 0, ···, 0), ···, en = (0, ···, 0, 1) are called the standard unit vectors in Rn. Note that any vector x = (x1, x2, ···, xn) ∈ Rn is a linear combination of these vectors: x = x1 e1 + x2 e2 + ··· + xn en.
Remark 3.2.5 Another definition of dot product

If x and y are nonzero vectors in Rn and if θ is the angle between x and y, then the dot product (also called the Euclidean inner product) of x and y is defined as

    x · y = ‖x‖ ‖y‖ cos θ,   or   cos θ = (x · y) / (‖x‖ ‖y‖),

where 0 ≤ θ ≤ π. Since −1 ≤ cos θ ≤ 1, we get

    −1 ≤ (x · y) / (‖x‖ ‖y‖) ≤ 1.
Example 3.2.3

Find the angle between x = (0, 1, 1, 0) and y = (1, 1, 0, 0).

Solution:

We have cos θ = (x · y) / (‖x‖ ‖y‖), where x · y = 0 + 1 + 0 + 0 = 1 and

    ‖x‖ = √(0² + 1² + 1² + 0²) = √2 = ‖y‖.

Therefore, cos θ = 1/2, which implies that θ = π/3.
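In code (a sketch assuming NumPy):

```python
import numpy as np

x = np.array([0.0, 1, 1, 0])
y = np.array([1.0, 1, 0, 0])
cos_theta = (x @ y) / (np.linalg.norm(x) * np.linalg.norm(y))
theta = np.arccos(cos_theta)
print(cos_theta)                     # 0.5 (up to rounding)
print(np.isclose(theta, np.pi / 3))  # True
```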

Theorem 3.2.2

Let x, y, z be vectors in Rn and c ∈ R. Then:

1. x · y = y · x.
2. x · (y + z) = x · y + x · z.
3. c (x · y) = (c x) · y.
4. x · x ≥ 0, and x · x = 0 if and only if x = 0.

Theorem 3.2.3

Let x, y, z be vectors in Rn and c ∈ R. Then:

1. 0 · x = x · 0 = 0.
2. (x + y) · z = x · z + y · z.
3. x · (y − z) = x · y − x · z.
4. (x − y) · z = x · z − y · z.
5. c (x · y) = x · (c y).
Example 3.2.4

Let x and y be two vectors in Rn so that x · y = 4, ‖x‖ = 3, and ‖y‖ = 2. Evaluate (x − 2y) · (3x + y).

Solution:

Clearly,

    (x − 2y) · (3x + y) = x · (3x + y) − 2y · (3x + y)
                        = 3(x · x) + x · y − 6(y · x) − 2(y · y)
                        = 3‖x‖² − 5(x · y) − 2‖y‖²
                        = 27 − 20 − 8 = −1.
Theorem 3.2.4 Cauchy-Schwarz Inequality

If x = (x1, x2, ···, xn) and y = (y1, y2, ···, yn) are vectors in Rn, then

    |x · y| ≤ ‖x‖ ‖y‖.

Or, in terms of components,

    |x1 y1 + x2 y2 + ··· + xn yn| ≤ (x1² + x2² + ··· + xn²)^(1/2) (y1² + y2² + ··· + yn²)^(1/2).

Theorem 3.2.5 Triangle Inequality for Vectors and Distances

If x, y, z are vectors in Rn, then:

1. ‖x + y‖ ≤ ‖x‖ + ‖y‖.  (Triangle inequality for vectors)
2. d(x, y) ≤ d(x, z) + d(z, y).  (Triangle inequality for distances)
Proof:

1. By Remark 3.2.1, we have

       ‖x + y‖² = (x + y) · (x + y) = x · x + x · y + y · x + y · y
                = ‖x‖² + 2 x · y + ‖y‖²
                ≤ ‖x‖² + 2 |x · y| + ‖y‖²        (absolute value)
                ≤ ‖x‖² + 2 ‖x‖ ‖y‖ + ‖y‖²        (Cauchy-Schwarz inequality)
                = (‖x‖ + ‖y‖)².

2. By Part 1, we have

       d(x, y) = ‖x − y‖ = ‖(x − z) + (z − y)‖
               ≤ ‖x − z‖ + ‖z − y‖ = d(x, z) + d(z, y).
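Both inequalities can be spot-checked on random vectors (a sketch assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(2)
for _ in range(100):
    x = rng.standard_normal(6)
    y = rng.standard_normal(6)
    # Cauchy-Schwarz: |x . y| <= ||x|| ||y||
    assert abs(x @ y) <= np.linalg.norm(x) * np.linalg.norm(y) + 1e-12
    # Triangle inequality: ||x + y|| <= ||x|| + ||y||
    assert np.linalg.norm(x + y) <= np.linalg.norm(x) + np.linalg.norm(y) + 1e-12
print("both inequalities hold on 100 random pairs")
```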

Theorem 3.2.6 Parallelogram Equation for Vectors

If x and y are vectors in Rn, then

    ‖x + y‖² + ‖x − y‖² = 2 (‖x‖² + ‖y‖²).

Proof:

Clearly,

    ‖x + y‖² + ‖x − y‖² = (x + y) · (x + y) + (x − y) · (x − y)
                        = 2(x · x) + 2(y · y) = 2 (‖x‖² + ‖y‖²).
Theorem 3.2.7

If x and y are vectors in Rn with the Euclidean inner product, then

    x · y = (1/4) ‖x + y‖² − (1/4) ‖x − y‖².

Proof:

Clearly,

    ‖x + y‖² = (x + y) · (x + y) = ‖x‖² + 2(x · y) + ‖y‖²,
    ‖x − y‖² = (x − y) · (x − y) = ‖x‖² − 2(x · y) + ‖y‖².

The result follows by simple algebra.
Remark 3.2.6

If x and y are in Rn, then ‖x − y‖ ≥ | ‖x‖ − ‖y‖ |.

Proof:

Recall that for real values x and a, we have |x| ≤ a iff −a ≤ x ≤ a, that is, a ≥ x and a ≥ −x. Therefore, we simply show that ‖x − y‖ ≥ ‖x‖ − ‖y‖ and ‖x − y‖ ≥ ‖y‖ − ‖x‖. First,

    ‖x‖ = ‖(x − y) + y‖ ≤ ‖x − y‖ + ‖y‖   ⇒   ‖x‖ − ‖y‖ ≤ ‖x − y‖.

For the second inequality, we use the first one (interchanging x and y) in the following way:

    ‖x − y‖ = ‖y − x‖ ≥ ‖y‖ − ‖x‖   (by the first inequality).
Example 3.2.5

If ‖x‖ = 2 and ‖y‖ = 3, what are the largest and smallest values possible for ‖x − y‖?

Solution:

By the triangle inequality, we have ‖x − y‖ ≤ ‖x‖ + ‖y‖ = 5, which is the largest value of ‖x − y‖. For the smallest value, we use Remark 3.2.6. That is,

    ‖x − y‖ ≥ | ‖x‖ − ‖y‖ | = |2 − 3| = 1.
Exercise 3.2.1

Use the Cauchy-Schwarz inequality to show that

    (ab − cd + xy)² ≤ (a² + d² + y²)(b² + c² + x²)

for all real numbers a, b, c, d, x, and y.

Hint: Find two suitable vectors in R3.
Exercise 3.2.2

Let U, V ∈ Rn be unit vectors. Prove that (U + 2V) · (2U − V) ≤ 3.

Exercise 3.2.3

Let θ be the angle between the vectors U = (4, −2, 1, 2) and V = (4, 2, 5, 2). Find cos θ.

Final answer: 21/35.
Exercise 3.2.4

For any vectors x and y in Rn, show that ‖x‖ ≤ ‖x − 2y‖ + 2‖y‖.

Hint: Triangle inequality.

Exercise 3.2.5

Let x and y be two vectors in Rn. Prove that ‖x − y‖ ≤ ‖x‖ + ‖y‖.

Hint: Triangle inequality.

Exercise 3.2.6

Find a vector X, of length 6, in the opposite direction of Y = (1, 2, −2).

Hint: What is −6 (1/‖Y‖) Y?
Exercise 3.2.7

Let X and Y be vectors in Rn such that ‖X‖ = ‖Y‖. Show that (X + Y) · (X − Y) = 0.

Exercise 3.2.8

Let U and V be two vectors in R3 such that ‖U‖ = 2 and ‖V‖ = 3.

1. Find the maximum possible value for ‖2U + 3V‖.
2. If U · V = 0, find ‖2U + 3V‖.
Exercise 3.2.9

Let X, Y ∈ Rn. Find X · Y given that ‖X + Y‖ = 1 and ‖X − Y‖ = 5.

Final answer: −6.

Exercise 3.2.10

Answer each of the following as True or False:

1. If U and V are two unit vectors in Rn, then ‖U − 6V‖ ≥ 5.
2. There exist X, Y ∈ R4 such that ‖X‖ = ‖Y‖ = 2 and X · Y = 6.

Exercise 3.2.11

Find all values of a for which x · y = 0, where x = (a² − a, −3, −1) and y = (2, a − 1, 2a).

Final answer: a = 1/2 or a = 3.
Exercise 3.2.12

Show that if x and y are vectors in Rn, then x · y = (1/4) ‖x + y‖² − (1/4) ‖x − y‖².

Exercise 3.2.13

Show that if X · Y = 0 for all Y ∈ Rn, then X = 0. Hint: use the standard unit vectors of Rn for Y.

Exercise 3.2.14

Show that if X · Z = Y · Z for all Z ∈ Rn, then X = Y. Hint: use the standard unit vectors of Rn for Z.
Chapter 4: General Vector Spaces

Section 4.1: Real Vector Spaces
Definition 4.1.1 Real Vector Space

A real vector space V is a set of elements with two operations ⊕ and ⊙ satisfying the following conditions. For short, we write (V, ⊕, ⊙) is a vector space if

(α) if x, y ∈ V, then x ⊕ y ∈ V, that is, "V is closed under ⊕": for all x, y, z ∈ V
    (a) x ⊕ y = y ⊕ x,
    (b) x ⊕ (y ⊕ z) = (x ⊕ y) ⊕ z,
    (c) there exists 0 ∈ V (zero vector of V) such that x ⊕ 0 = 0 ⊕ x = x for all x ∈ V,
    (d) for each x ∈ V, there exists −x ∈ V (a negative of x) such that (−x) ⊕ x = x ⊕ (−x) = 0.
(β) if x ∈ V and c ∈ R, then c ⊙ x ∈ V, that is, "V is closed under ⊙": for all x, y ∈ V and for all c, d ∈ R
    (a) c ⊙ (x ⊕ y) = (c ⊙ x) ⊕ (c ⊙ y),
    (b) (c + d) ⊙ x = (c ⊙ x) ⊕ (d ⊙ x),
    (c) c ⊙ (d ⊙ x) = (c d) ⊙ x,
    (d) 1 ⊙ x = x.

Remark 4.1.1 Real Vector Space Rn

(Rn, +, ·) is a vector space. That is, Rn with vector addition and scalar multiplication is a vector space.

Remark 4.1.2 Zero Vector Space

Let V consist of a single object, which we denote by 0, and define

    0 ⊕ 0 = 0   and   k ⊙ 0 = 0

for all scalars k. This is a vector space, which is called the zero vector space.

Remark 4.1.3

For simplicity, we write + and · instead of ⊕ and ⊙. That is, we write (V, +, ·) instead of (V, ⊕, ⊙) for a vector space V with addition operation + and scalar multiplication ·.
Example 4.1.1

Consider V = {(x, y, z) : x, y, z ∈ R} with

    (x1, y1, z1) + (x2, y2, z2) = (x1 + x2, y1 + y2, z1 + z2),

and

    c(x, y, z) = (cx, cy, 0).

Is (V, +, ·) a vector space? Explain.

Solution:

Clearly, the α conditions are satisfied because this is the usual vector addition, and hence V is closed under +. Thus, we only check the β conditions. Let x = (x1, y1, z1) and y = (x2, y2, z2) be any two vectors in V. Then

1. c(x + y) = c(x1 + x2, y1 + y2, z1 + z2) = (cx1 + cx2, cy1 + cy2, 0) = (cx1, cy1, 0) + (cx2, cy2, 0) = cx + cy. This condition is satisfied.
2. (c + d)x = ((c + d)x1, (c + d)y1, 0) = (cx1, cy1, 0) + (dx1, dy1, 0) = cx + dx. This condition is satisfied.
3. c(dx) = c(dx1, dy1, 0) = (cdx1, cdy1, 0) = (cd)x. This condition is satisfied.
4. 1x = (x1, y1, 0) ≠ (x1, y1, z1) whenever z1 ≠ 0. This condition is NOT satisfied.

Therefore, (V, +, ·) is not a vector space.
Example 4.1.2

Let V = {(x, y, z) : x, y, z ∈ R and z > 0} be associated with the operations:

    (x1, y1, z1) + (x2, y2, z2) = (x1 + x2, y1 + y2, z1 + z2),   and   c(x, y, z) = (cx, cy, cz).

Is (V, +, ·) a vector space? Explain.

Solution:

No. If c ∈ R with c < 0, then c(x, y, z) = (cx, cy, cz) ∉ V since cz < 0.

Example 4.1.3

Is the set of real numbers under subtraction and the usual scalar multiplication a vector space? Explain.

Solution:

No. Addition is not commutative: for x, y ∈ R with x ≠ y, x + y = x − y ≠ y − x = y + x.
Theorem 4.1.1

Let V be a vector space, x a vector in V, and c a scalar. Then:

1. The zero vector is unique in V.
2. 0 x = 0.
3. c 0 = 0.
4. (−1)x = −x.
5. If c x = 0, then c = 0 or x = 0.

Exercise 4.1.1

Let V = {(x, y, 0) | x, y ∈ R} be associated with the operations:

    (x1, y1, 0) + (x2, y2, 0) = (x1 + x2, y1 + y2, 0),   and   c(x, y, 0) = (cx, cy, 0).

Does (V, +, ·) form a vector space? Explain. (This is a vector space!)
Exercise 4.1.2

Let V = {(x, y) | x, y ∈ R}. Define addition and scalar multiplication on V as follows: for each (x, y), (x′, y′) ∈ V and a ∈ R,

    (x, y) + (x′, y′) = (x + x′, y + y′)   and   a(x, y) = (ay, ax).

Determine whether V with the given operations is a vector space. Justify your answer.

Exercise 4.1.3

Consider R2 with the operations + and · where (x, y) + (x′, y′) = (2x − x′, 2y − y′) and c(x, y) = (cx, cy). Does the property (c + d)x = cx + dx hold for all c, d ∈ R and all x ∈ R2? Explain.
Exercise 4.1.4

Consider the set V = {(x, y, z) : x, y, z ∈ R} with the following operations:

    (x1, y1, z1) + (x2, y2, z2) = (x1 + x2, y1 + y2, z1 + z2)   and   c(x, y, z) = (z, y, x).

Is V a vector space? Explain.

Exercise 4.1.5

Let V = {(x, y, z) : x, y, z ∈ R} and define

    (x, y, z) + (a, b, c) = (x + a, y + b, z + c)   and   k(x, y, z) = (kx, ky, 0).

Show that V is not a vector space.

Exercise 4.1.6

Consider the set V = {(x, y) | x, y ∈ R} with the following operations:

    (x, y) + (x′, y′) = (x − x′, y − y′)   and   k(x, y) = (kx, ky).

Determine whether (V, +, ·) is a vector space (justify your answer).
Exercise 4.1.7

Let V be a real vector space. Show that 0x = 0 for any x ∈ V.

Exercise 4.1.8

Let V be a vector space with a zero vector 0. Show that the zero vector 0 of V is unique.

Exercise 4.1.9

Prove that the negative of a vector x in a vector space V is unique.

Exercise 4.1.10

Determine whether V = R is a vector space with respect to the following operations: x ⊕ y = 2x − y and c ⊙ x = cx, for all x, y ∈ V and for all c ∈ R.

Exercise 4.1.11

Determine whether V = {(x, y, z) : x, y, z ∈ R} is a vector space with respect to the following operations: (x, y, z) + (x′, y′, z′) = (xx′, yy′, zz′) and c(x, y, z) = (cx, cy, cz), for all (x, y, z), (x′, y′, z′) ∈ V and for all c ∈ R.
Exercise 4.1.12

Answer each of the following as True or False:

1. (T) V = {(x, y) ∈ R2 : y < 0} is closed under the operation c(x, y) = (cx, y).
2. (F) V = {(x, y) ∈ R2 : y < 0} is closed under the operation c(x, y) = (cy, x).
Section 4.2: Subspaces

Definition 4.2.1 Subspace

Let (V, +, ·) be a vector space and let W ⊆ V be non-empty. If (W, +, ·) is a vector space, then W is a subspace of V.

Remark 4.2.1 Trivial Subspaces

If V is a vector space, then V and {0} are subspaces of V. They are called the trivial subspaces of V.

Theorem 4.2.1

Let (V, +, ·) be a vector space and let W be a subset of V. Then W is a subspace of V if and only if the following conditions hold:

1. W ≠ ∅,
2. for all x, y ∈ W, x + y ∈ W,
3. for all x ∈ W and c ∈ R, cx ∈ W.
Example 4.2.1

Is W = {(x, y, 0, z²) : x, y ∈ R and z ∈ Z} a subspace of R4? Explain.

Solution:

1. (0, 0, 0, 0) ∈ W and hence W is non-empty.
2. Let (x1, y1, 0, z1²), (x2, y2, 0, z2²) ∈ W. Then

       (x1, y1, 0, z1²) + (x2, y2, 0, z2²) = (x1 + x2, y1 + y2, 0, z1² + z2²),

   and z1² + z2² need not be the square of an integer.

For example, (0, 0, 0, 4), (0, 0, 0, 9) ∈ W while their sum (0, 0, 0, 13) ∉ W, since 13 is not a perfect square. Therefore, W is not a subspace of R4.
Example 4.2.2

Is W = {(x, y, z) : x + y + z = 1, where x, y, z ∈ R} a subspace of R3? Explain.

Solution:

Clearly, (0, 0, 0) ∉ W and hence W is not a vector space, so it is not a subspace of R3.
Example 4.2.3

Let W = {(a, b, c, d) : d = 2a − b and c = a}. Is (W, +, ·) a subspace of R4? Explain.

Solution:

1. (0, 0, 0, 0) ∈ W, so W ≠ ∅.
2. Let x = (a1, b1, a1, 2a1 − b1) and y = (a2, b2, a2, 2a2 − b2). Then

       x + y = (a1 + a2, b1 + b2, a1 + a2, 2(a1 + a2) − (b1 + b2)) ∈ W.

3. For x = (a, b, a, 2a − b) ∈ W and k ∈ R, we have

       k(a, b, a, 2a − b) = (ka, kb, ka, 2(ka) − (kb)) ∈ W.

Therefore, W is a subspace of R4.
Theorem 4.2.2

If W1, W2, ···, Wn are subspaces of a vector space V, then the intersection of these subspaces is also a subspace of V.

Proof:

Let W be the intersection of these subspaces. Then W is not empty, since Wi contains the zero vector for all 1 ≤ i ≤ n. Moreover, if x, y ∈ W, then x, y ∈ Wi for all i, and hence x + y ∈ Wi, which implies that x + y ∈ W. Finally, if c is a scalar and x ∈ W, then x ∈ Wi for all i, and hence c x ∈ Wi, which implies that c x ∈ W. Therefore, W is a subspace of V.
Definition 4.2.2 Linear Combination

Let x1, x2, ···, xn be vectors in a vector space V. A vector x ∈ V is called a linear combination of the vectors x1, x2, ···, xn if and only if there are some scalars c1, c2, ···, cn such that

    x = c1 x1 + c2 x2 + ··· + cn xn.

These scalars are called the coefficients of the linear combination.

Definition 4.2.3 Span of S

Let S = {x1, x2, ···, xk} be a subset of a vector space V. The set of all vectors in V that are linear combinations of the vectors in S is denoted by span S or span {x1, x2, ···, xk}. Moreover, if W = span S, then W is a subspace of V and we say that S spans W or that W is spanned by S.
Theorem 4.2.3 Span S is the Smallest Subspace of V Containing S

If S = {x1, x2, ···, xk} is a nonempty set of vectors in a vector space V, then

1. span S is a subspace of V.
2. span S is the smallest subspace of V that contains all of the vectors in S. That is, any other subspace that contains S contains span S.

Proof:

1. Let W = span S = {z : z = c1 x1 + c2 x2 + ··· + ck xk} ⊆ V. Then,
   (a) W ≠ ∅, since z = x1 + x2 + ··· + xk ∈ W (taking ci = 1 for all i = 1, 2, ···, k),
   (b) let z1 = c1 x1 + c2 x2 + ··· + ck xk, z2 = d1 x1 + d2 x2 + ··· + dk xk ∈ W; then

           z1 + z2 = (c1 + d1)x1 + (c2 + d2)x2 + ··· + (ck + dk)xk ∈ W,

   (c) for c ∈ R and z = c1 x1 + c2 x2 + ··· + ck xk ∈ W, we have

           c z = (c c1) x1 + (c c2) x2 + ··· + (c ck) xk ∈ W.

   Therefore, W = span S is a subspace of V.

2. If W = span S and W0 is any subspace of V that contains all vectors of S, then since W0 is closed under addition and scalar multiplication, it contains all linear combinations of the vectors in S, and hence it contains W.
Remark 4.2.2 Standard Unit Vectors in Rn

Recall that the standard unit vectors in Rn are

e1 = (1, 0, 0, · · · , 0), e2 = (0, 1, 0, · · · , 0), · · · , en = (0, 0, 0, · · · , 1).

Observe that {e1 , e2 , · · · , en } spans Rn . That is, span {e1 , e2 , · · · , en } = Rn since every
vector x = (x1 , x2 , · · · , xn ) in Rn can be written as

x = x1 e 1 + x2 e 2 + · · · + xn e n .

Example 4.2.4

Determine whether the vector x = (2, 1, 5) is a linear combination of the set of vectors {x1, x2, x3}, where x1 = (1, 2, 1), x2 = (1, 0, 2), and x3 = (1, 1, 0).

Solution:

x is a linear combination of x1, x2, x3 if there are scalars c1, c2, c3 so that x = c1 x1 + c2 x2 + c3 x3. Consider c1 (1, 2, 1) + c2 (1, 0, 2) + c3 (1, 1, 0) = (2, 1, 5); this is a system in three unknowns:

    c1  +  c2 + c3 = 2
    2c1 +  0  + c3 = 1
    c1  + 2c2 + 0  = 5

This system has a unique solution (c1, c2, c3) = (1, 2, −1). Thus, x = x1 + 2x2 − x3.

Note that we can also solve the problem as follows: the coefficient matrix above has determinant equal to 3, and hence the system has a unique solution. Therefore there are c1, c2, and c3 satisfying the linear combination equation. This shows that x is a linear combination of x1, x2, x3, and we are done without solving the system. In particular, span {x1, x2, x3} = R3.
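The system above can also be solved directly (a sketch assuming NumPy); the columns of M are x1, x2, x3:

```python
import numpy as np

M = np.array([[1.0, 1, 1],
              [2, 0, 1],
              [1, 2, 0]])          # columns: x1, x2, x3
x = np.array([2.0, 1, 5])
c = np.linalg.solve(M, x)
print(np.allclose(c, [1, 2, -1]))       # True: x = x1 + 2 x2 - x3
print(np.isclose(np.linalg.det(M), 3))  # True: unique solution
```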

Example 4.2.5

Let S = {u1 = (1, 1, 0, 1), u2 = (1, −1, 0, 1), u3 = (0, 1, 2, 1)}. Determine whether x and y belong to span S, where x = (2, 3, 2, 3) and y = (0, 1, 2, 3).

Solution:

x = 2u1 + 0u2 + 1u3, so x ∈ span S. However, y ∉ span S since y is not a linear combination of u1, u2, u3: the system c1 u1 + c2 u2 + c3 u3 = y forces c3 = 1 (third component), while the first and fourth components then give c1 + c2 = 0 and c1 + c2 = 2, a contradiction.
Definition 4.2.4 Null Space

Let Ax = 0 be a homogeneous system for A ∈ Mm×n and x ∈ Rn. We define the null space (or the solution space of Ax = 0) of A by W = {x : Ax = 0} ⊆ Rn.

Theorem 4.2.4 The Null Space of A is a Subspace of Rn

The solution space of the homogeneous system AX = 0 (the null space of A), where A is an m × n matrix and X ∈ Rn, is a subspace of Rn.

Proof:

Let W = {x : Ax = 0} ⊆ Rn be the null space of A. Then

1. Clearly, W ≠ ∅, since Ax = 0 always has a solution (either trivial or non-trivial),
2. If x, y ∈ W, then Ax = Ay = 0. But A(x + y) = Ax + Ay = 0 + 0 = 0. Thus x + y ∈ W,
3. For any c ∈ R and x ∈ W, we have A(cx) = cAx = c 0 = 0, thus cx ∈ W.

Therefore, W is a subspace of Rn.
Example 4.2.6 Three Different Ways

Let W = {(a, b, c, d) : d = 2a − b and c = a}. Is (W, +, ·) a subspace of R4? Explain.

Solution: Definition

We simply show it by the definition of subspaces; see Example 4.2.3.

Solution: Null Space

W = {(a, b, c, d) : 2a − b − d = 0 and a − c = 0}. That is, W is the solution space of Ax = 0, where

    A = [ 2  -1   0  -1 ]
        [ 1   0  -1   0 ]

and x = (a, b, c, d). Therefore, W is a subspace of R4.

Solution: The Span

W = {(a, b, a, 2a − b) : a, b ∈ R} = {a (1, 0, 1, 2) + b (0, 1, 0, −1) : a, b ∈ R}.

Therefore W = span {(1, 0, 1, 2), (0, 1, 0, −1)}, so W is a subspace of R4.
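The null-space description is easy to verify numerically (a sketch assuming NumPy): the two spanning vectors satisfy Ax = 0, and so does any linear combination of them.

```python
import numpy as np

A = np.array([[2.0, -1, 0, -1],
              [1, 0, -1, 0]])
v1 = np.array([1.0, 0, 1, 2])
v2 = np.array([0.0, 1, 0, -1])
print(np.allclose(A @ v1, 0))              # True
print(np.allclose(A @ v2, 0))              # True
print(np.allclose(A @ (3*v1 - 2*v2), 0))   # True: combinations stay in W
```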



Example 4.2.7

Let S = {x1, x2}, where x1 = (1, 1, 0) and x2 = (1, 1, 1). Does S span R3? Explain.

Solution:

Let x = (a, b, c) be any vector in R3. Consider

    c1 (1, 1, 0) + c2 (1, 1, 1) = (a, b, c).

Note that we cannot use the determinant argument here, since we have no square matrix. Thus, solving the system, we get

    [ 1  1 | a ]   r2 − r1   [ 1  1 | a     ]
    [ 1  1 | b ]  ------->   [ 0  0 | b − a ]
    [ 0  1 | c ]             [ 0  1 | c     ]

This system has no solution if b − a ≠ 0. Therefore, such an x (with b ≠ a) is not a linear combination of S, and S does not span R3.

Theorem 4.2.5 Spanning Sets Are Not Unique

If S = {x1, x2, ···, xn} and S′ = {y1, y2, ···, yn} are nonempty sets of vectors in a vector space V, then span S = span S′ if and only if each vector of S is a linear combination of vectors in S′ and each vector of S′ is a linear combination of vectors in S.
Exercise 4.2.1

Let x, y ∈ Rn and W = {z : z = ax + by, for a, b ∈ R}. Is W a subspace of Rn? Explain.

Exercise 4.2.2

Let x1 = (1, 0, 2), x2 = (2, 0, 1), and x3 = (1, 0, 3) be vectors in R3. Determine whether the vector x = (1, 2, 3) can be written as a linear combination of x1, x2, and x3.

Exercise 4.2.3

Determine whether the subset W = {(a, a + √2, 3a) : a ∈ R} of R3 is a subspace of R3.

Exercise 4.2.4

Determine whether the vectors x1 = (3, 0, 0, 0), x2 = (0, −1, 2, 1), x3 = (6, 2, −6, 0), and x4 = (3, −2, 3, 3) span the vector space R4.

Exercise 4.2.5

Let W = {(a, 2a, b, a − b) : a, b ∈ R} be a subset of R4. Show that W is a subspace of R4.

Exercise 4.2.6

Determine whether W1 and W2 are subspaces of R4.

1. W1 = {(a, b, c, d) : a² + b² + c² + d² > 0}.
2. W2 = {(a, b, c, d) : a + 3b − 2c + 4d = 0 and a − 5b + 4c + 7d = 0}.

Exercise 4.2.7

Determine whether W1 and W2 are subspaces of R3.

1. W1 = {(a, b, c) : a − c = b}.
2. W2 = {(a, b, c) : ab ≥ 0}.

Exercise 4.2.8

Let W = {(a, b, a, 2a − b) : a, b ∈ R}. Is (W, +, ·) a subspace of R4? Explain.
Section 4.3: Linear Independence

Definition 4.3.1 Linear Independence

The set of vectors S = {x1, x2, ···, xn} in a vector space V is said to be linearly dependent if there exist scalars c1, c2, ···, cn, not all zeros, such that

    c1 x1 + c2 x2 + ··· + cn xn = 0.

Otherwise, S is said to be linearly independent. That is, x1, x2, ···, xn are linearly independent if whenever c1 x1 + c2 x2 + ··· + cn xn = 0, we must have c1 = c2 = ··· = cn = 0.
Remark 4.3.1 Standard Unit Vectors in Rn

Note that the standard unit vectors e1, e2, ···, en are linearly independent in Rn, since

    c1 (1, 0, ···, 0) + c2 (0, 1, ···, 0) + ··· + cn (0, 0, ···, 1) = (0, 0, ···, 0)

clearly has only the trivial solution c1 = c2 = ··· = cn = 0.
Example 4.3.1

Determine whether x1 = (1, 0, 1, 2), x2 = (0, 1, 1, 2), and x3 = (1, 1, 1, 3) in R4 are linearly independent or linearly dependent. Explain.

Solution:

We solve the homogeneous system c1 x1 + c2 x2 + c3 x3 = 0. That is,

    [ 1  0  1 | 0 ]             [ 1  0  0 | 0 ]
    [ 0  1  1 | 0 ]             [ 0  1  0 | 0 ]
    [ 1  1  1 | 0 ]  → ··· →    [ 0  0  1 | 0 ]
    [ 2  2  3 | 0 ]             [ 0  0  0 | 0 ]

Therefore, c1 = c2 = c3 = 0, and thus x1, x2, x3 are linearly independent.
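An equivalent check (a sketch assuming NumPy): put x1, x2, x3 as the columns of a matrix; full column rank means the homogeneous system has only the trivial solution, i.e. the vectors are independent.

```python
import numpy as np

M = np.array([[1.0, 0, 1],
              [0, 1, 1],
              [1, 1, 1],
              [2, 2, 3]])          # columns: x1, x2, x3
print(np.linalg.matrix_rank(M))    # 3 = number of vectors, so independent
```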

Remark 4.3.2

Let S = {x1, x2, ···, xn} be a set of n vectors in Rn and let A be the n × n matrix whose columns are the n vectors of S. Then

1. if A is singular, then S is linearly dependent,
2. if A is invertible, then S is linearly independent.

Example 4.3.2

Determine whether the vectors x1 = (1, 2, −1), x2 = (1, −2, 1), x3 = (−3, 2, −1), and x4 = (2, 0, 0) in R3 are linearly independent or linearly dependent. Explain.

Solution:

Consider the homogeneous system c1 x1 + c2 x2 + c3 x3 + c4 x4 = 0. This system has nontrivial solutions because the number of unknowns (4) is greater than the number of equations (3). Therefore, x1, x2, x3, x4 are linearly dependent.

In addition, we can show that this set is linearly dependent by means of the r.r.e.f. as follows:

    [  1   1  -3  2 | 0 ]             [ 1  0  -1  1 | 0 ]
    [  2  -2   2  0 | 0 ]  → ··· →    [ 0  1  -2  1 | 0 ]
    [ -1   1  -1  0 | 0 ]             [ 0  0   0  0 | 0 ]

From the reduced system above, we see (from the third column) that x3 = −x1 − 2x2, and (from the fourth column) that x4 = x1 + x2.

Theorem 4.3.1

1. A set that contains the zero vector 0 is linearly dependent.
2. A set with exactly one vector is linearly independent iff that vector is not 0.
3. A set with exactly two nonzero vectors is linearly independent iff neither vector is a scalar multiple of the other.
Example 4.3.3

For what values of α are the vectors (−1, 0, −1), (2, 1, 2), (1, 1, α) in R3 linearly dependent?

Solution:

We want the vectors to be linearly dependent, so consider the system c1 (−1, 0, −1) +
c2 (2, 1, 2) + c3 (1, 1, α) = (0, 0, 0). This system has nontrivial solutions only if det (A) = 0,
where A is the matrix whose columns are [−1, 0, −1]T , [2, 1, 2]T , and [1, 1, α]T . That is,
           | -1  2  1 |
det (A) =  |  0  1  1 | = 0  ⇐⇒  −(α − 2) − (2 − 1) = 0  ⇐⇒  2 − α − 1 = 0  ⇐⇒  α = 1.
           | -1  2  α |
Therefore, if α = 1 the vectors are linearly dependent. Otherwise if α ∈ R\{1}, the vectors
are linearly independent.
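The determinant condition can also be solved symbolically; a sketch with sympy (assumed available), using the three vectors of this example as columns:

```python
from sympy import Matrix, symbols, solve

alpha = symbols('alpha')
# Columns are the three vectors from Example 4.3.3.
A = Matrix([[-1, 2, 1],
            [ 0, 1, 1],
            [-1, 2, alpha]])

d = A.det()              # simplifies to 1 - alpha
print(solve(d, alpha))   # [1]: the vectors are dependent exactly when alpha = 1
```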
4.3. Linear Independence 95

Theorem 4.3.2

Let S = {x1 , x2 , · · · , xm } be a set of vectors in Rn . If m > n, then S is linearly dependent.

Example 4.3.4

Suppose that S = {x1 , x2 , x3 } is a linearly independent set of vectors in a vector space


V. Show that T = {y1 , y2 , y3 } is also a linearly independent set, where y1 = x1 + x2 + x3 ,
y2 = x2 + x3 , and y3 = x3 .

Solution:

Consider the system c1 y1 + c2 y2 + c3 y3 = 0. Therefore,


c1 y 1 + c2 y 2 + c3 y 3 = 0
c1 (x1 + x2 + x3 ) + c2 (x2 + x3 ) + c3 (x3 ) = 0
c1 x 1 + c1 x 2 + c1 x 3 + c2 x 2 + c2 x 3 + c3 x 3 = 0
(c1 )x1 + (c1 + c2 )x2 + (c1 + c2 + c3 )x3 = 0
But x1 , x2 , x3 are linearly independent, thus c1 = c1 + c2 = c1 + c2 + c3 = 0. Therefore,
c1 = c2 = c3 = 0 and hence T is linearly independent.

Example 4.3.5

Suppose that S = {x1 , x2 , x3 } is a linearly dependent set of vectors in a vector space V.


Show that T = {y1 , y2 , y3 } is also a linearly dependent set, where y1 = x1 , y2 = x1 + x2 , and
y3 = x1 + x2 + x3 .

Solution:

Consider the system c1 y1 + c2 y2 + c3 y3 = 0. Therefore,


c1 y 1 + c2 y 2 + c3 y 3 = 0
c1 (x1 ) + c2 (x1 + x2 ) + c3 (x1 + x2 + x3 ) = 0
(c1 + c2 + c3 )x1 + (c2 + c3 )x2 + (c3 )x3 = 0
Since x1 , x2 , x3 are linearly dependent, there are scalars d1 , d2 , d3 , not all zero, with
d1 x1 + d2 x2 + d3 x3 = 0. Choosing c3 = d3 , c2 = d2 − d3 , and c1 = d1 − d2 gives c1 + c2 + c3 = d1 ,
c2 + c3 = d2 , and c3 = d3 , so c1 y1 + c2 y2 + c3 y3 = 0 with c1 , c2 , c3 not all zero. Hence T is
linearly dependent.

Remark 4.3.3

A linearly independent set must consist of distinct nonzero vectors.



Remark 4.3.4

Let S1 , S2 be two subsets of a vector space V with S1 ⊆ S2 . Then,


1. if S1 is linearly dependent, then S2 is linearly dependent,
2. if S2 is linearly independent, then S1 is linearly independent.

Exercise 4.3.1

Show that if {x1 , x2 } is a linearly dependent set, then one of the vector is a scalar multiple
of the other.

Exercise 4.3.2

Show that any subset of a vector space V that contains the zero vector is a linearly dependent set.

Exercise 4.3.3

Show that if {x1 , x2 , · · · , xn } is a linearly dependent set, then we can express one of the
vectors in terms of the others.

Exercise 4.3.4

Let x, y, z ∈ Rn be three nonzero vectors where the dot product of any (distinct) two vectors
is 0. Show that the set {x, y, z} is linearly independent.

Section 4.4: Coordinates and Basis

Definition 4.4.1 Basis

A set S = {x1 , x2 , · · · , xn } of distinct nonzero vectors in a vector space V is called a basis iff
1. S spans V (V = span S),
2. S is linearly independent set.
The dimension of V is the number of vectors in its basis and is denoted by dim(V).

Remark 4.4.1 The Standard Unit Vectors

The set of standard unit vectors {e1 , e2 , · · · , en } in Rn forms the standard basis for Rn and
hence dim(Rn ) = n.

Example 4.4.1

Show that the set S = {x1 = (1, 0, 1), x2 = (0, 1, −1), x3 = (0, 2, 2)} is a basis for R3 .

Solution:

To show that S is a basis for R3 , we show that S is a linearly independent set that spans R3 .
1. Is S linearly independent? Consider the homogeneous system

c1 (1, 0, 1) + c2 (0, 1, −1) + c3 (0, 2, 2) = (0, 0, 0).

This system has only the trivial solution if det (A) ≠ 0, where A is the matrix of coefficients.
That is,
      | 1  0  0 |
|A| = | 0  1  2 | = (1) |  1  2 | = 2 − (−2) = 4 ≠ 0.
      | 1 -1  2 |       | -1  2 |
Thus, the system has only the trivial solution and hence S is linearly independent.
2. S spans R3 ? For any x = (a, b, c) ∈ R3 , consider the nonhomogenous system:

c1 (1, 0, 1) + c2 (0, 1, −1) + c3 (0, 2, 2) = (a, b, c).

Since det (A) ≠ 0, where A is the matrix of coefficients, the system has a unique
solution and thus S spans R3 .
Therefore, S is a basis for R3 .

Theorem 4.4.1 Uniqueness of Basis

Let S = {x1 , x2 , · · · , xn } be a basis for a vector space V. Then, every vector in V can be
written in exactly one way as a linear combination of the vectors in S.

Proof:

Let x ∈ V. Since S is a basis of V, then S spans V. That is, we can write

x = c1 x1 + c2 x2 + · · · + cn xn , and (4.4.1)

x = d1 x1 + d2 x2 + · · · + dn xn . (4.4.2)

Subtracting Equation (4.4.2) from Equation (4.4.1), we get

0 = (c1 − d1 )x1 + (c2 − d2 )x2 + · · · + (cn − dn )xn .

But S is a linearly independent set (it is a basis). Thus, c1 − d1 = 0, · · · , cn − dn = 0.


Therefore, c1 = d1 , c2 = d2 , · · · , cn = dn , and hence x can be written in one and only one
way as a linear combination of vectors in S.

Definition 4.4.2 Coordinates of a Vector

If S = {x1 , x2 , · · · , xn } is a basis for a vector space V, and x = c1 x1 + c2 x2 + · · · + cn xn is


the expression for a vector x in terms of the basis S, then the scalars c1 , c2 , · · · , cn are called
the coordinates of x relative to the basis S.
The vector (c1 , c2 , · · · , cn ) in Rn constructed in this way is called the coordinate vector of
x relative to S and is denoted by

(x)S = (c1 , c2 , · · · , cn ).

Example 4.4.2 Coordinates Relative to the Standard Basis for Rn

Consider the vector space R3 with its standard basis S = {e1 , e2 , e3 }.


For any vector x = (a, b, c) ∈ R3 , the coordinate vector relative to the standard basis for R3
is simply

x = a(1, 0, 0) + b(0, 1, 0) + c(0, 0, 1)  =⇒  (x)S = (a, b, c),

which is the same as the vector x.



Example 4.4.3

Let S = {x1 = (1, 0, 1), x2 = (0, 1, −1), x3 = (0, 2, 2)} be a basis for R3 .
1. Find the coordinate vector of x = (−1, 1, 2).
2. Find the vector x in R3 whose coordinate vector relative to S is (x)S = (−1, 3, 2).

Solution:

1. To find (x)S , we find c1 , c2 , c3 of the system

(−1, 1, 2) = c1 (1, 0, 1) + c2 (0, 1, −1) + c3 (0, 2, 2).

Solving this system, we obtain c1 = −1, c2 = −1 and c3 = 1. That is, (x)S = (−1, −1, 1).
2. We simply evaluate x = (−1)x1 + (3)x2 + (2)x3 to get x = (−1, 7, 0).
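Both computations amount to one matrix equation B c = x, where the basis vectors are the columns of B; a minimal sketch with sympy (assumed available):

```python
from sympy import Matrix

# Basis vectors x1, x2, x3 of S (Example 4.4.3) as the columns of B.
B = Matrix([[1, 0, 0],
            [0, 1, 2],
            [1, -1, 2]])

# (1) Coordinates of x = (-1, 1, 2): solve B*c = x.
c = B.solve(Matrix([-1, 1, 2]))
print(tuple(c))          # (-1, -1, 1)

# (2) Vector whose coordinate vector is (-1, 3, 2): evaluate B*c.
x = B * Matrix([-1, 3, 2])
print(tuple(x))          # (-1, 7, 0)
```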

Exercise 4.4.1

Show that the set S = {x1 , x2 , x3 , x4 } is a basis for R4 , where

x1 = (1, 0, 1, 0), x2 = (0, 1, −1, 2), x3 = (0, 2, 2, 1), and x4 = (1, 0, 0, 1).

Exercise 4.4.2

Let S = {x1 , x2 , · · · , xn } be a set of vectors in a vector space V. Show that S is a basis for V
if and only if every vector in V can be expressed in exactly one way as a linear combination
of the vectors in S. Hint: for ”⇒”, use Theorem 4.4.1; for ”⇐”, show the linear independence
of S using the uniqueness.

Section 4.5: Dimension

Theorem 4.5.1 Dimension

All bases for a finite-dimensional vector space V have the same number of vectors; this number
is called the dimension of V and is denoted by dim(V). The zero vector space has dimension zero.

Theorem 4.5.2

Let V be a finite-dimensional vector space, and let {x1 , x2 , · · · , xn } be any basis:


1. If a set has more than n vectors, then it is linearly dependent.
2. If a set has fewer than n vectors, then it does not span V.

Remark 4.5.1 Dimension of span S

If S = {x1 , x2 , · · · , xr } is a linearly independent set in a vector space V, then S is a basis for
the subspace span S and dim(span S) = r.

Example 4.5.1

Find a basis for the subspace of all vectors of the form (a, b, −a − b, a − b), for a, b ∈ R. Find
its dimension.

Solution:

Let W = {(a, b, −a − b, a − b) | a, b ∈ R} ⊆ R4 . Let x be any vector of W, then

x = (a, b, −a − b, a − b) = a(1, 0, −1, 1) + b(0, 1, −1, −1) ∈ W.

Therefore, S = {(1, 0, −1, 1), (0, 1, −1, −1)} spans W.


Clearly,

c1 (1, 0, −1, 1) + c2 (0, 1, −1, −1) = (0, 0, 0, 0)

holds only if c1 = c2 = 0 which shows that S is linearly independent. That is, S is a basis for
W and dim(W) = 2.

Example 4.5.2

Find a basis for and the dimension of the solution space of the homogeneous system
x1 + x2 + 2x4 = 0
x2 − x3 + x4 = 0
x1 + x2 + 2x4 = 0

Solution:

We first solve the system using Gauss-Jordan method:


   
[ 1 1  0 2 | 0 ]   r.r.e.f.     [ 1 0  1 1 | 0 ]
[ 0 1 -1 1 | 0 ]   ≈ ······ ≈   [ 0 1 -1 1 | 0 ]
[ 1 1  0 2 | 0 ]                [ 0 0  0 0 | 0 ]
Thus the solutions are x1 = −x3 − x4 and x2 = x3 − x4 , with x3 = t and x4 = r free, for t, r ∈ R.
That is, the solution space of the homogeneous system is W = {(−t − r, t − r, t, r) : t, r ∈ R}.
Therefore, any vector x in W is of the form: x = t(−1, 1, 1, 0) + r(−1, −1, 0, 1) which means
that S = {(−1, 1, 1, 0), (−1, −1, 0, 1)} spans W.
As S is a linearly independent set (none of the vectors is a scalar multiple of the other), S
forms a basis for W, and hence the solution space has dimension 2.
In fact, this method always produces a basis for the solution space of the system.
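The same basis (up to scaling) can be produced directly; a sketch with sympy (assumed available), applied to the coefficient matrix of this example:

```python
from sympy import Matrix

# Coefficient matrix of the homogeneous system in Example 4.5.2.
A = Matrix([[1, 1, 0, 2],
            [0, 1, -1, 1],
            [1, 1, 0, 2]])

basis = A.nullspace()    # basis vectors of the solution space
print(len(basis))        # 2 = dimension of the solution space
```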

Theorem 4.5.3

Let V be an n-dimensional vector space, and let S = {x1 , x2 , · · · , xn } be a set of n vectors in V. Then,


S is a basis for V iff S spans V or S is linearly independent.

Remark 4.5.2

The set S = {x1 = (1, 5), x2 = (1, 4)} is linearly independent in the 2-dimensional vector
space R2 . Hence, S forms a basis for R2 .
Moreover, considering S = {x1 = (1, 0, 5), x2 = (1, 0, 4), x3 = (1, 1, 1)}, we see that x1 and x2
form a linearly independent set in the xz-plane. The vector x3 lies outside the xz-plane, so
the set S is a linearly independent set in R3 . Hence, S forms a basis for R3 .

Example 4.5.3

Find all values of a for which S = {(a2 , 0, 1), (0, a, 2), (1, 0, 1)} is a basis for R3 .

Solution:

Since dim(R3 ) = 3 = size of S, it is enough to show that S is linearly independent (or it


spans R3 ) to show that it is a basis for R3 . Consider c1 (a2 , 0, 1) + c2 (0, a, 2) + c3 (1, 0, 1) =
(0, 0, 0). Clearly, S is linearly independent if det (A) ≠ 0, where A is the coefficient matrix.
That is,
| a^2  0  1 |
|  0   a  0 | = a(a^2 − 1) ≠ 0  =⇒  a ≠ 0 and a ≠ ±1.
|  1   2  1 |
Therefore, S is a basis for R3 if a ∈ R\{−1, 0, 1}.
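The excluded values can be found symbolically; a sketch with sympy (assumed available), with the vectors of S as columns:

```python
from sympy import Matrix, symbols, solve

a = symbols('a')
# Columns are the three vectors of S from Example 4.5.3.
A = Matrix([[a**2, 0, 1],
            [0,    a, 0],
            [1,    2, 1]])

d = A.det()                  # a*(a**2 - 1)
print(sorted(solve(d, a)))   # [-1, 0, 1]: S fails to be a basis exactly here
```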

Theorem 4.5.4 Reduction and Extension Theorem

Let S be a finite set of vectors in a finite-dimensional vector space V.


1. If S spans V but is not a basis, then S can be reduced to a basis for V by removing
appropriate vectors from S.
2. If S is a linearly independent set that is not a basis for V, then S can be extended to
a basis for V by adding appropriate vectors to S.

Remark 4.5.3 How to construct a basis?

Let V be a vector space and let S = {x1 , x2 , · · · , xn } be a subset of V. The procedure to find a
subset of S that is a basis for W = span S is:
1. form the linear combination c1 x1 + c2 x2 + · · · + cn xn = 0,
2. form the augmented matrix of the homogeneous system in step (1),
3. find the r.r.e.f. of the augmented matrix,
4. Vectors in S corresponding to leading columns form a basis for W = span S.

Example 4.5.4

Let S = {x1 = (1, 0, 1), x2 = (1, 1, 1), x3 = (0, −1, 0), x4 = (2, 1, 2)} be a set of vectors in R3 .
Find a subset of S that is a basis for W = span S, and find the dimension of W.

Solution:

We form the homogeneous system c1 x1 + c2 x2 + c3 x3 + c4 x4 = 0 to find a linearly independent
subset of S:
   
[ 1 1  0 2 | 0 ]   r.r.e.f.     [ 1 0  1 1 | 0 ]
[ 0 1 -1 1 | 0 ]   ≈ ······ ≈   [ 0 1 -1 1 | 0 ]
[ 1 1  0 2 | 0 ]                [ 0 0  0 0 | 0 ]
The leading entries appear in the first two columns, namely columns 1 and 2. Therefore,
{x1 , x2 } is linearly independent and it spans W. Thus, {x1 , x2 } is a basis for
W = span S and dim(W) = 2.
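The procedure of Remark 4.5.3 boils down to reading off the pivot columns; a sketch with sympy (assumed available):

```python
from sympy import Matrix

# Columns are x1, x2, x3, x4 from Example 4.5.4.
A = Matrix([[1, 1, 0, 2],
            [0, 1, -1, 1],
            [1, 1, 0, 2]])

_, pivots = A.rref()
print(pivots)    # (0, 1): columns 1 and 2, i.e. {x1, x2}, give the basis
```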

Example 4.5.5

Find a basis for R4 that contains the vectors x1 = (1, 0, 1, 0) and x2 = (−1, 1, −1, 0).

Solution:

Consider the set S = {x1 , x2 , e1 , e2 , e3 , e4 }. The set S spans R4 but it contains some linearly
dependent vectors. To remove them, we proceed as follows:
   
[ 1 -1 1 0 0 0 | 0 ]   r.r.e.f.     [ 1 0 0 1  1 0 | 0 ]
[ 0  1 0 1 0 0 | 0 ]   ≈ ······ ≈   [ 0 1 0 1  0 0 | 0 ]
[ 1 -1 0 0 1 0 | 0 ]                [ 0 0 1 0 -1 0 | 0 ]
[ 0  0 0 0 0 1 | 0 ]                [ 0 0 0 0  0 1 | 0 ]
The leading entries appear in columns 1, 2, 3, and 6. Therefore, the set {x1 , x2 , e1 , e4 }
is a basis for R4 containing x1 and x2 .
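The extension step can be automated by adjoining the standard basis and keeping the pivot columns; a sketch with sympy (assumed available):

```python
from sympy import Matrix, eye

x1 = Matrix([1, 0, 1, 0])
x2 = Matrix([-1, 1, -1, 0])

# Adjoin the standard basis of R^4 and keep the pivot columns (Example 4.5.5).
M = Matrix.hstack(x1, x2, eye(4))
_, pivots = M.rref()
print(pivots)    # (0, 1, 2, 5): keep x1, x2, e1, e4
```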

Theorem 4.5.5

If W is a subspace of a finite-dimensional vector space V, then


1. W is finite-dimensional.
2. dim(W) ≤ dim(V).
3. W = V iff dim(W) = dim(V).

Exercise 4.5.1

Let S = {x1 , x2 , x3 , x4 , x5 } be a set of R4 where x1 = (1, 2, −2, 1), x2 = (−3, 0, −4, 3),
x3 = (2, 1, 1, −1), x4 = (−3, 3, −9, 6), and x5 = (9, 3, 7, −6). Find a subset of S that is a basis
for W = span S. Find dim(W). Final answer: {x1 , x2 } is a basis for W and the dimension
is 2.

Exercise 4.5.2

Find the dimension of the subspace of all vectors of the form (a, b, c, d) where c = a − b and
d = a + b (for a, b ∈ R). Final answer: the dimension is 2.

Exercise 4.5.3

Find the dimension of the subspace of all vectors of the form (a + c, a + b + 2c, a + c, a − b)
where a, b, c ∈ R. Final answer: the dimension is 2.

Exercise 4.5.4

Let S = {x1 , x2 , x3 } be a basis for a vector space V. Show that T = {y1 , y2 , y3 } is also a
basis for V, where y1 = x1 + x2 + x3 , y2 = x2 + x3 , and y3 = x3 .

Exercise 4.5.5

Find a standard basis vector for R3 that can be added to the set {x1 = (1, 1, 1), x2 = (2, −1, 3)}
to produce a basis for R3 . Final answer: any vector of the standard basis will work.

Exercise 4.5.6

The set S = {x1 = (1, 2, 3), x2 = (0, 1, 1)} is linearly independent in R3 . Extend (enlarge) S
to a basis for R3 . Final answer: S = {x1 = (1, 2, 3), x2 = (0, 1, 1), x3 = (1, 0, 0)}

Exercise 4.5.7

Let S = {x1 = (1, 0, 2), x2 = (−1, 0, −1)} be a set of vectors in R3 . Find a basis for R3 that
contains the set S. Final answer: {(1, 0, 2), (−1, 0, −1), (0, 1, 0)}.

Section 4.7: Row Space, Column Space, and Null Space

Definition 4.7.1
 
        [ a11  a12  · · ·  a1n ]
        [ a21  a22  · · ·  a2n ]
Let A = [  ..   ..          .. ] ∈ Mm×n . The rows of A are the vectors
        [ am1  am2  · · ·  amn ]

x1 = (a11 , a12 , · · · , a1n ),
x2 = (a21 , a22 , · · · , a2n ),
   ...
xm = (am1 , am2 , · · · , amn ),

each in Rn . These row vectors span a subspace of Rn which is called the row space of A.
Similarly, the columns of A are the vectors

y1 = (a11 , a21 , · · · , am1 ), y2 = (a12 , a22 , · · · , am2 ), · · · , yn = (a1n , a2n , · · · , amn ),

each in Rm . These column vectors span a subspace of Rm which is called the column space of A.
Moreover, the solution space of the homogeneous system Ax = 0 (which is a subspace of Rn )
is called the null space of A.

Theorem 4.7.1

A linear system Ax = b is consistent if and only if b is in the column space of A.

Proof:

Recall from Theorem 1.3.2 that the product Ax can be expressed as

Ax = x1 c1 + x2 c2 + · · · + xn cn

where c1 , c2 , · · · , cn denote the column vectors of A and x = (x1 , x2 , · · · , xn ). Thus, Ax = b


can be written as

x1 c1 + x2 c2 + · · · + xn cn = b

from which we conclude that Ax = b is consistent if and only if b can be written as a linear
combination of the column vectors of A.
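The criterion of Theorem 4.7.1 can be tested numerically: Ax = b is consistent iff adjoining b does not increase the rank. A sketch with sympy (assumed available); the small matrix and vectors here are illustrative choices, not taken from the notes.

```python
from sympy import Matrix

A = Matrix([[1, 0],
            [0, 1],
            [1, 1]])
b_in  = Matrix([1, 2, 3])   # = 1*col1 + 2*col2, so it lies in the column space
b_out = Matrix([1, 2, 4])   # does not lie in the column space

def consistent(A, b):
    # Ax = b is consistent iff adjoining b does not raise the rank,
    # i.e. iff b is in the column space of A.
    return Matrix.hstack(A, b).rank() == A.rank()

print(consistent(A, b_in), consistent(A, b_out))   # True False
```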

Theorem 4.7.2

Elementary row operations do not change the null and row spaces of a matrix.

Theorem 4.7.3

If R is a matrix in row echelon form, then the nonzero row vectors (those with the leading 1’s)
form a basis for the row space of R, and the columns of R that contain the leading 1’s form a
basis for the column space of R.

Example 4.7.1 Bases for Row Space and Column Space

Let
        [  1 -2 0 3 -4 ]
    A = [  3  2 8 1  4 ] .
        [  2  3 7 2  3 ]
        [ -1  2 0 4 -3 ]
1. find a basis for the row space of A,
2. find a basis for the column space of A,
3. find a basis for the row space that contains only rows of A,
4. find a basis for the column space that contains only columns of A.

Solution:

1. To find a basis for the row space of A, we have to find the r.r.e.f. of A, then the set of
non-zero rows of the r.r.e.f. forms a basis for the row space.
 
1 −2 0 3 −4  
1 0 2 0 1 ←
 
 3 2 8 1 4  r.r.e.f.
≈ ······ ≈ ←
 
0 1 1 0 1
   
 2 3 7 2 3   
  0 0 0 1 −1 ←
−1 2 0 4 −3
 
0 0 0 0 0

Therefore, the set {(1, 0, 2, 0, 1), (0, 1, 1, 0, 1), (0, 0, 0, 1, −1)} forms a basis for the row
space of A.

2. To find a basis for the column space of A, we have to find a basis for the row space of

AT . Therefore,
 
      [  1 3 2 -1 ]   r.r.e.f.     [ 1 0 0  11/24 ] ←
      [ -2 2 3  2 ]   ≈ ······ ≈   [ 0 1 0 -49/24 ] ←
AT =  [  0 8 7  0 ]                [ 0 0 1    7/3 ] ←
      [  3 1 2  4 ]                [ 0 0 0      0 ]
      [ -4 4 3 -3 ]                [ 0 0 0      0 ]

Therefore, the set {(1, 0, 0, 11/24), (0, 1, 0, −49/24), (0, 0, 1, 7/3)} is a basis for the row space of
AT and it is a basis for the column space of A.

3. To find a basis for the row space of A that contains only rows of A, we do as follows:
 
      [  1 3 2 -1 ]   r.r.e.f.     [ 1 0 0  11/24 ]
      [ -2 2 3  2 ]   ≈ ······ ≈   [ 0 1 0 -49/24 ]
AT =  [  0 8 7  0 ]                [ 0 0 1    7/3 ]
      [  3 1 2  4 ]                [ 0 0 0      0 ]
      [ -4 4 3 -3 ]                [ 0 0 0      0 ]
        ↑ ↑ ↑
Then, the leading entries are pointing to column 1, column 2, and column 3 in the r.r.e.f.
of AT, which correspond to row 1, row 2, and row 3 in A. Thus,

{(1, −2, 0, 3, −4), (3, 2, 8, 1, 4), (2, 3, 7, 2, 3)}

forms a basis for the row space of A containing only rows of A.

4. To find a basis for the column space of A that only contains columns of A, we do the
following:
 
[  1 -2 0 3 -4 ]   r.r.e.f.     [ 1 0 2 0  1 ]
[  3  2 8 1  4 ]   ≈ ······ ≈   [ 0 1 1 0  1 ]
[  2  3 7 2  3 ]                [ 0 0 0 1 -1 ]
[ -1  2 0 4 -3 ]                [ 0 0 0 0  0 ]
   ↑  ↑     ↑
Then, the leading entries are pointing to column 1, column 2, and column 4 in the r.r.e.f.
of A. Thus,

{(1, 3, 2, −1), (−2, 2, 3, 2), (3, 1, 2, 4)}

forms a basis for the column space of A containing only columns of A.
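All four bases of this example can be recovered from the r.r.e.f. and its pivot columns; a sketch with sympy (assumed available):

```python
from sympy import Matrix

# The matrix A of Example 4.7.1.
A = Matrix([[ 1, -2, 0, 3, -4],
            [ 3,  2, 8, 1,  4],
            [ 2,  3, 7, 2,  3],
            [-1,  2, 0, 4, -3]])

R, pivots = A.rref()
row_basis = [tuple(r) for r in R.tolist() if any(r)]  # nonzero rows of rref
col_basis = [A.col(j) for j in pivots]                # columns of A itself
print(pivots)    # (0, 1, 3): columns 1, 2, and 4 of A
```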



Example 4.7.2 Basis for Span Subspace

Find a basis for the subspace of R4 spanned by S = {x1 , x2 , x3 } where

x1 = (1, 0, 2, 1), x2 = (1, 1, −1, 0) and x3 = (0, −1, 3, 1).

Solution:

The space spanned by these vectors is the row space of the matrix
 
    [ 1  0  2 1 ]
A = [ 1  1 -1 0 ] .
    [ 0 -1  3 1 ]
Reducing this matrix to its r.r.e.f. we get
 
[ 1 0  2  1 ]
[ 0 1 -3 -1 ] .
[ 0 0  0  0 ]
The nonzero vectors in this matrix are

y1 = (1, 0, 2, 1) and y2 = (0, 1, −3, −1).

These vectors form a basis for the row space of A and consequently form a basis for span S.

Example 4.7.3 Basis and Linear Combination

Let S = {x1 = (1, 1, 0), x2 = (0, 1, −1), x3 = (2, −1, 3), x4 = (1, 0, 1)}.
1. Find a subset of S that forms a basis for span S.
2. Express each vector not in the basis as a linear combination of the basis vectors.

Solution:

1. We begin by constructing a matrix whose columns are the vectors of S.


 
    [ 1  0  2 1 ]
A = [ 1  1 -1 0 ] .
    [ 0 -1  3 1 ]
The r.r.e.f. of A is
 
[ 1 0  2  1 ]
[ 0 1 -3 -1 ] .
[ 0 0  0  0 ]
The leading 1’s occur in columns 1 and 2, so the basis is formed by x1 and x2 .
2. We write x3 and x4 as linear combinations of x1 and x2 . Considering the third column

of the r.r.e.f. matrix, we obtain

x3 = (2)x1 + (−3)x2 .

The fourth column of the r.r.e.f. implies that

x4 = (1)x1 + (−1)x2 .

Example 4.7.4
 
        [  1 -2 0 3 -4 ]
Let A = [  3  2 8 1  4 ] .
        [  2  3 7 2  3 ]
        [ -1  2 0 4 -3 ]
1. Find bases for the row and column spaces of A,
2. Find a basis for the null space of A.
3. Does x = (1, 2, 4, 3, 0) belong to the row space of A? Explain.
4. Express each column of A not in the basis of column space as a linear combination of
the vectors in the basis you got in step 1.

Solution:

1. To get bases for the row space and column spaces of A, we do the following:
 
[  1 -2 0 3 -4 ]   r.r.e.f.     [ 1 0 2 0  1 ] ←
[  3  2 8 1  4 ]   ≈ ······ ≈   [ 0 1 1 0  1 ] ←
[  2  3 7 2  3 ]                [ 0 0 0 1 -1 ] ←
[ -1  2 0 4 -3 ]                [ 0 0 0 0  0 ]
   ↑  ↑     ↑
Thus, the set {(1, 0, 2, 0, 1), (0, 1, 1, 0, 1), (0, 0, 0, 1, −1)} forms a basis for the row space
of A, while the set {(1, 3, 2, −1), (−2, 2, 3, 2), (3, 1, 2, 4)} forms a basis for the column
space of A that only contains columns of A. This is fine since the question places no
restrictions on the basis of the column space of A.
2. Using what we got in the previous step, the solution space of the homogeneous system
is:

x1 + 2x3 + x5 = 0

x2 + x3 + x 5 = 0

x4 − x5 = 0

Let x3 = r, x5 = t, where t, r ∈ R, to get

    [ -2r - t ]     [ -2 ]     [ -1 ]
    [  -r - t ]     [ -1 ]     [ -1 ]
x = [       r ] = r [  1 ] + t [  0 ] .
    [       t ]     [  0 ]     [  1 ]
    [       t ]     [  0 ]     [  1 ]
Therefore, {(−2, −1, 1, 0, 0), (−1, −1, 0, 1, 1)} is a basis for the null space of A.
3. Yes. It is clear that x = (1)(1, 0, 2, 0, 1) + (2)(0, 1, 1, 0, 1) + (3)(0, 0, 0, 1, −1) where those
vectors are the vectors of the basis of the row space that were found in (1). It is also
possible to consider the non-homogenous system x = c1 (1, 0, 2, 0, 1) + c2 (0, 1, 1, 0, 1) +
c3 (0, 0, 0, 1, −1) to find the same answer.
4. Let the columns of A be called x1 , · · · , x5 . We express x3 and x5 (not in the basis) as
linear combinations of the basis vectors {x1 , x2 , x4 }. We can do so by looking at the
r.r.e.f. we got in step 1. For x3 : the third column of the r.r.e.f. matrix suggests that
x3 = 2 x1 + x2 + 0 x4 . For x5 : the fifth column of the r.r.e.f. matrix suggests that
x5 = x1 + x2 − x4 . Can you confirm this?
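The null space computation in part 2 can be cross-checked directly; a sketch with sympy (assumed available):

```python
from sympy import Matrix

# The matrix A of Example 4.7.4.
A = Matrix([[ 1, -2, 0, 3, -4],
            [ 3,  2, 8, 1,  4],
            [ 2,  3, 7, 2,  3],
            [-1,  2, 0, 4, -3]])

null_basis = A.nullspace()
print(len(null_basis))    # 2, matching nullity = 5 - rank = 5 - 3
```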

Exercise 4.7.1
 
Let A = [ -1 -1 0 ]
        [  2  0 4 ] .
Find a basis for the null space of A. Final answer: {(−2, 2, 1)}.

Exercise 4.7.2
 
        [ 0 0 0 -1 ]
Let A = [ 0 1 0  0 ] .
        [ 0 0 0  1 ]
1. Find a basis for the null space of A.
2. Find a basis for the row space of AT.
3. Find a basis for the row space of A.
Final answer:
1. a basis for the null space of A = {(1, 0, 0, 0), (0, 0, 1, 0)}.
2. a basis for the row space of AT = {(0, 1, 0), (−1, 0, 1)}.
3. a basis for the row space of A = {(0, 1, 0, 0), (0, 0, 0, 1)}.

Exercise 4.7.3
 
        [ 1 0 -1 1 ]
Let A = [ 1 1  1 1 ] .
        [ 1 2  3 1 ]
1. Find a basis for the null space of A.
2. Find a basis for the row space of AT.
3. Find a basis for the column space of A.
Final answer:
1. a basis for the null space of A = {(1, −2, 1, 0), (−1, 0, 0, 1)}.
2. a basis for the row space of AT = {(1, 1, 1), (0, 1, 2)}.
3. a basis for the column space of A = {(1, 1, 1), (0, 1, 2)}.

Exercise 4.7.4

Let S = {x1 , x2 , x3 , x4 , x5 }, where x1 = (1, −2, 0, 3), x2 = (2, −5, −3, 6), x3 = (0, 1, 3, 0), x4 =
(2, −1, 4, −7), and x5 = (5, −8, 1, 2).
1. Find a subset of S that forms a basis for the subspace span S.
2. Express each vector not in the basis as a linear combination of the basis vectors.
3. If A is the 4 × 5 matrix whose columns are the vectors of S in order, then find a basis
for the row space of A, a basis for the column space of A, and a basis for the null space
of A.
Hint: Here is the reduced row echelon form (r.r.e.f.) of A:
[ 1 0  2 0 1 ]
[ 0 1 -1 0 1 ]
[ 0 0  0 1 1 ] .
[ 0 0  0 0 0 ]

Section 4.8: Rank, Nullity and the Fundamental Matrix Spaces

Theorem 4.8.1 Rank

The row space and column space of a matrix A have the same dimension.

Definition 4.8.1 Rank and Nullity

The common dimension of the row space and column space of a matrix A is called the rank
of A and is denoted by rank(A); the dimension of the null space of A is called the nullity of
A and is denoted by nullity(A).

Example 4.8.1 Find Rank and Nullity

Find the rank and nullity of the matrix


 
    [  1  2 0  2  5 ]
A = [ -2 -5 1 -1 -8 ] .
    [  0 -3 3  4  1 ]
    [  3  6 0 -7  2 ]

Solution:

The reduced row echelon form of A is


 
[ 1 0  2 0 1 ]
[ 0 1 -1 0 1 ]
[ 0 0  0 1 1 ] .
[ 0 0  0 0 0 ]
Since it has 3 leading 1’s, its row and column spaces have dimension 3. Hence rank(A) = 3.
To get the nullity, we find the dimension of the null space by considering the system Ax = 0
where x = (x1 , x2 , x3 , x4 , x5 ). By the above r.r.e.f. we conclude that

x1 + 2x3 + x5 = 0,

x2 − x3 + x5 = 0,

x4 + x5 = 0.

Let (free variables) x3 = t and x5 = r to get

    [ -2t - r ]     [ -2 ]     [ -1 ]
    [   t - r ]     [  1 ]     [ -1 ]
x = [       t ] = t [  1 ] + r [  0 ] .
    [      -r ]     [  0 ]     [ -1 ]
    [       r ]     [  0 ]     [  1 ]
Therefore, the two vectors on the right side above form a basis for the null space of A and
hence nullity(A) = 2.
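Rank and nullity are one call each; a sketch with sympy (assumed available), illustrating the Dimension Theorem on this matrix:

```python
from sympy import Matrix

# The matrix A of Example 4.8.1.
A = Matrix([[ 1,  2, 0,  2,  5],
            [-2, -5, 1, -1, -8],
            [ 0, -3, 3,  4,  1],
            [ 3,  6, 0, -7,  2]])

rank = A.rank()
nullity = len(A.nullspace())
print(rank, nullity)      # 3 2, and rank + nullity = 5 = number of columns
```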

Remark 4.8.1 Maximum Value for Rank

If A is any m × n matrix, then rank(A) ≤ min (m, n).

Theorem 4.8.2 Dimension Theorem for Matrices

If A is a matrix with n columns, then

rank(A) + nullity(A) = n.

Remark 4.8.2

The matrix A of Example 4.8.1 has 5 columns with rank(A) = 3 and nullity(A) = 2.

Theorem 4.8.3 Dimension Theorem for Matrices

If A is an m × n matrix, then
1. rank(A) = the number of leading variables in the general solution of Ax = 0.
2. nullity(A) = the number of parameters in the general solution of Ax = 0.

Remark 4.8.3

Let A be a 3 × 5 matrix. Then: the largest possible rank of A is 3 and the smallest possible
rank of A is 0 (the zero matrix). This is because, rank of A = row rank = column rank, and
we only have 3 rows. Also, the largest nullity of A is 5 (zero matrix) and the smallest nullity
is 2 (when rank of A = 3). Moreover, the largest possible rank of AT is 3, and the largest
possible nullity of AT is 3.

Theorem 4.8.4
 
If A is any matrix, then rank(A) = rank(AT).

Example 4.8.2

If A is a 3 × 7 matrix, what is the largest possible rank of A (or what is the largest possible
number of linearly independent columns of A)?

Solution:

The largest possible row rank of A is 3 while the largest possible column rank of A is 7, but
row rank of A = column rank of A. Therefore, the largest possible rank of A is 3. The same
thing applies for the largest possible number of linearly independent columns of A.

Remark 4.8.4

Let A be any m × n matrix, then


1. the row rank of A = the column rank of A = the rank of A = the rank of AT .
2. n = the nullity of A + the rank of A.
3. m = the nullity of AT + the rank of A.

Theorem 4.8.5 Equivalent Statements

If A is an n × n matrix, then the following statements are equivalent:


1. A is invertible.
2. Ax = 0 has only the trivial solution.
3. A is row equivalent to In .
4. Ax = b is consistent and has a unique solution for every n × 1 matrix b.
5. det (A) ≠ 0.
6. The column vectors of A are: linearly independent; span Rn ; and form a basis for Rn .
7. The row vectors of A are: linearly independent; span Rn ; and form a basis for Rn .
8. A has rank n.
9. A has nullity 0.

Exercise 4.8.1
 
Let A = [ -1 -1 0 ]
        [  2  0 4 ] .
Find a basis for the null space of A and determine the nullity of A.
Final answer: a basis is {(−2, 2, 1)} and nullity(A) = 1.

Exercise 4.8.2
 
        [ 0 0 0 -1 ]
Let A = [ 0 1 0  0 ] .
        [ 0 0 0  1 ]
1. Find rank(A), nullity(A), rank(AT ), and nullity(AT ).
2. Find a basis for the null space of A.
3. Find a basis for the row space of AT .
4. Find a basis for the row space of A.
Final answer:
1. rank(A) = 2, nullity(A) = 2, rank(AT ) = 2, and nullity(AT ) = 1.
2. a basis for the null space of A = {(1, 0, 0, 0), (0, 0, 1, 0)}.
3. a basis for the row space of AT = {(0, 1, 0), (−1, 0, 1)}.
4. a basis for the row space of A = {(0, 1, 0, 0), (0, 0, 0, 1)}.

Exercise 4.8.3
 
        [ 1  1 4 1 2 ]
        [ 0  1 2 1 1 ]
Let A = [ 0  0 0 1 2 ] . Find the rank of A and the nullity of A.
        [ 1 -1 0 0 2 ]
        [ 2  1 6 0 1 ]
Chapter 5: Eigenvalues and Eigenvectors

Section 5.1: Eigenvalues and Eigenvectors

Definition 5.1.1 Eigenproblem

If A is an n × n matrix, then a nonzero vector x ∈ Rn is called an eigenvector of A if

Ax = λx

for some scalar λ. The scalar λ is called an eigenvalue of A and x is called an eigenvector
corresponding to λ.

   
For example, let A = [  1 -1 ]  and  x = [  1 ] . Then,
                     [ -1  1 ]           [ -1 ]

Ax = [  1 -1 ] [  1 ] = [  2 ] = 2 [  1 ] = 2x.
     [ -1  1 ] [ -1 ]   [ -2 ]     [ -1 ]
Therefore, x is an eigenvector of A corresponding to λ = 2 (or λ = 2 is an eigenvalue of A corre-
sponding to eigenvector x).

Definition 5.1.2 The Characteristic Polynomial

If A ∈ Mn×n (R), then pA (λ) = | λIn − A | is called the characteristic polynomial of A.

Theorem 5.1.1 The Characteristic Equation

If A is an n × n matrix, then λ is an eigenvalue of A if and only if pA (λ) = | λIn − A | = 0.

Proof:

λ is an eigenvalue of A iff λ satisfies Ax = λx with x ≠ 0, iff λ satisfies λx − Ax = 0 with
x ≠ 0, iff (λIn − A)x = 0 has nontrivial solutions, iff | λIn − A | = 0.


Example 5.1.1 Finding Eigenvalues

Find the eigenvalues of A,


 
[ 1 0  2 ]
[ 1 0  0 ] .
[ 0 0 -1 ]

Solution:

We solve the characteristic equation pA (λ) = |λI3 − A| = 0. Expanding along the second
column,

| λ−1   0    −2  |
|  −1   λ     0  |  =  λ | λ−1  −2  |  =  λ(λ − 1)(λ + 1)  =  0,
|   0   0   λ+1  |       |  0   λ+1 |
which implies that

λ1 = −1 , λ2 = 0 , λ3 = 1 .
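The eigenvalues can be confirmed with a computer algebra system; a sketch with sympy (assumed available):

```python
from sympy import Matrix

# The matrix A of Example 5.1.1.
A = Matrix([[1, 0, 2],
            [1, 0, 0],
            [0, 0, -1]])

eigs = A.eigenvals()       # {eigenvalue: algebraic multiplicity}
print(sorted(eigs))        # [-1, 0, 1]
```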

Remark 5.1.1

Let A be an n × n matrix. Then


1. det (A) is the product of the eigenvalues of A.
2. A is invertible if and only if λ = 0 is not an eigenvalue of A.

Remark 5.1.2

Note that if λ is an eigenvalue of an n × n matrix A, then

| λIn − A | = λ^n + c1 λ^(n−1) + c2 λ^(n−2) + · · · + cn = 0,

and setting λ = 0 gives | A | = (−1)^n cn .

Theorem 5.1.2

If A is an n × n triangular matrix (upper triangular, lower triangular, or diagonal), then the


eigenvalues of A are the entries on the main diagonal.

Theorem 5.1.3

Let A be an n × n matrix. Then λ is an eigenvalue of A iff the system (λIn − A)x = 0 has
nontrivial solutions, iff there is a nonzero vector x such that Ax = λx, iff λ is a root of
pA (λ) = | λIn − A | = 0.

Definition 5.1.3 Eigenspace of a Matrix

Let A be an n × n matrix with an eigenvalue λ. The eigenspace of A corresponding to λ,


denoted Eλ , is defined as the solution space of the homogeneous system (λIn − A)x = 0. That
is, Eλ is the null space of the matrix λIn − A.

Example 5.1.2 Computing Eigenvectors of a Matrix

Find the eigenvalues and their corresponding eigenvectors of A,


 
[ 1 0  2 ]
[ 1 0  0 ] .
[ 0 0 -1 ]
OR: Find bases for the eigenspaces of A.

Solution:

As in Example 5.1.1, the characteristic equation pA (λ) = |λI3 − A| = 0 reads

| λ−1   0    −2  |
|  −1   λ     0  |  =  λ | λ−1  −2  |  =  λ(λ − 1)(λ + 1)  =  0,
|   0   0   λ+1  |       |  0   λ+1 |
which implies that

λ1 = −1 , λ2 = 0 , λ3 = 1 .

Thus, there are three eigenspaces of A corresponding to these eigenvalues. To find bases for
these eigenspaces, we solve the homogeneous system (λI3 − A)x = 0, for λ1 , λ2 , λ3 . That is:
 
[ λi − 1     0       −2    | 0 ]
[   −1       λi       0    | 0 ]                (5.1.1)
[    0       0     λi + 1  | 0 ]

1. λ1 = −1 ⇒ (λ1 I3 − A)x1 = 0, x1 = (a, b, c) ≠ (0, 0, 0). Substitute λ1 = −1 in


Equation 5.1.1 to get:

[ -2  0 -2 | 0 ]              [ 1 0  1 | 0 ]
[ -1 -1  0 | 0 ]  ⇒ ··· ⇒    [ 0 1 -1 | 0 ] .
[  0  0  0 | 0 ]              [ 0 0  0 | 0 ]

Thus, a + c = 0 and b − c = 0. That is, a = −c and b = c. Let c = t ∈ R\{0} to
get x1 = (−t, t, t). Choosing t = 1, we get a basis for Eλ1 containing the vector
P1 = (−1, 1, 1).

2. λ2 = 0 ⇒ (λ2 I3 − A)x2 = 0, x2 = (a, b, c) ≠ (0, 0, 0). Substitute λ2 = 0 in Equation
5.1.1 to get:

[ -1  0 -2 | 0 ]              [ 1 0 0 | 0 ]
[ -1  0  0 | 0 ]  ⇒ ··· ⇒    [ 0 0 1 | 0 ] .
[  0  0  1 | 0 ]              [ 0 0 0 | 0 ]

Thus, a = c = 0. Let b = t ∈ R\{0} to get x2 = (0, t, 0). Choosing t = 1, we get a
basis for Eλ2 containing the vector P2 = (0, 1, 0).

3. λ3 = 1 ⇒ (λ3 I3 − A)x3 = 0, x3 = (a, b, c) ≠ (0, 0, 0). Substitute λ3 = 1 in Equation
5.1.1 to get:

[  0  0 -2 | 0 ]              [ 1 -1 0 | 0 ]
[ -1  1  0 | 0 ]  ⇒ ··· ⇒    [ 0  0 1 | 0 ] .
[  0  0  2 | 0 ]              [ 0  0 0 | 0 ]

Thus, a − b = 0 and c = 0. If b = t ∈ R\{0}, then a = t as well and we get x3 = (t, t, 0).
Choosing t = 1, we get a basis for Eλ3 containing the vector P3 = (1, 1, 0).
5.1. Eigenvalues and Eigenvectors 123
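As a numerical check (not part of the original notes), the eigenvalues and the eigenvector P1 found in Example 5.1.2 can be verified with NumPy; the use of NumPy here is an illustrative assumption:

```python
import numpy as np

# Matrix A from Example 5.1.2
A = np.array([[1.0, 0.0, 2.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, -1.0]])

# numpy returns eigenvalues in no particular order; sort for comparison
eigvals = np.sort(np.linalg.eigvals(A).real)

# Check that P1 = (-1, 1, 1) is an eigenvector for lambda = -1
P1 = np.array([-1.0, 1.0, 1.0])
residual = np.linalg.norm(A @ P1 - (-1.0) * P1)
```

The sorted eigenvalues should match (−1, 0, 1), and the residual should vanish, confirming the hand computation above.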

Theorem 5.1.4 Powers of a Matrix

If k is a positive integer, λ is an eigenvalue of a matrix A, and x is a corresponding eigenvector,


then λk is an eigenvalue of Ak and x is a corresponding eigenvector.

Proof:

If Ax = λx, then we have A²x = A(Ax) = A(λx) = λ(Ax) = λ²x. Applying this simple idea k times, we get

A^k x = A^(k−1) (Ax) = λ (A^(k−1) x) = λ² (A^(k−2) x) = ··· = λ^k x.
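Theorem 5.1.4 can be illustrated numerically (a sketch added to these notes, assuming NumPy), using the matrix and eigenpair from Example 5.1.2:

```python
import numpy as np

A = np.array([[1.0, 0.0, 2.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, -1.0]])
lam = -1.0
x = np.array([-1.0, 1.0, 1.0])   # eigenvector of A for lam = -1

k = 5
Ak = np.linalg.matrix_power(A, k)   # A^k
# Theorem 5.1.4: A^k x should equal lam^k x
err = np.linalg.norm(Ak @ x - lam**k * x)
```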

Theorem 5.1.5 Eigenvalues of the Inverse of a Matrix

If λ is an eigenvalue of an invertible matrix A, and x is a corresponding eigenvector, then 1/λ is an eigenvalue of A⁻¹ and x is a corresponding eigenvector.

Proof:

If Ax = λx and A is invertible, then λ ≠ 0 (otherwise Ax = 0 for the nonzero vector x, and A would be singular). Multiplying both sides by A⁻¹ (from the left), we get

A⁻¹ · Ax = A⁻¹ · λx  ⇒  x = λ A⁻¹ x  ⇒  (1/λ) x = A⁻¹ x.
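Theorem 5.1.5 can also be checked numerically (an added sketch, assuming NumPy), using the invertible matrix of Exercise 5.1.3, whose eigenvalues are −2, 1, 3:

```python
import numpy as np

A = np.array([[1.0, 3.0, 3.0],
              [1.0, -1.0, -4.0],
              [-1.0, -1.0, 2.0]])   # invertible: eigenvalues -2, 1, 3

ev_A = np.sort(np.linalg.eigvals(A).real)
ev_inv = np.sort(np.linalg.eigvals(np.linalg.inv(A)).real)
# The eigenvalues of A^{-1} should be the reciprocals -1/2, 1/3, 1
```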

Exercise 5.1.1

Show that A and Aᵀ have the same eigenvalues. Hint: |λIn − A| = |(λIn − A)ᵀ| = |λIn − Aᵀ|.

Exercise 5.1.2

Suppose that pA(λ) = λ²(λ + 3)³(λ − 4) is the characteristic polynomial of some matrix A.


Then,
1. What is the size of A? Explain.
2. Is A invertible? Why?
3. How many eigenspaces does A have? Explain.

Exercise 5.1.3
 
Find the eigenvalues and bases for the eigenspaces of

A =
[  1    3    3 ]
[  1   −1   −4 ]
[ −1   −1    2 ]

Final result: λ1 = −2, λ2 = 1, λ3 = 3, with P1 = (−1, 1, 0), P2 = (2, −1, 1), and P3 = (0, −1, 1).

Exercise 5.1.4
 
Find the eigenvalues of

A =
[  0     1     0  ]
[  0     0     1  ]
[ k³   −3k²   3k  ]

Final result: λ = k, since pA(λ) = (λ − k)³.

Exercise 5.1.5
 
Show that if a, b, c, d are integers such that a + b = c + d, then

A =
[ a   b ]
[ c   d ]

has integer eigenvalues λ1 = a + b and λ2 = a − c. Hint: Use your algebraic abilities.

Exercise 5.1.6

Find det (A) given that A has the characteristic polynomial pA (λ) = λ3 + 2λ2 − 4λ − 5. Hint:
Use Remark 5.1.2.
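For Exercise 5.1.6, the key fact is that pA(0) = |0·In − A| = |−A| = (−1)ⁿ det(A), so det(A) = (−1)ⁿ pA(0); here n = 3 and pA(0) = −5, giving det(A) = 5. This relation can be checked numerically on any matrix (an added sketch, assuming NumPy; the matrix below is an arbitrary illustration, not from the exercise):

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 1.0]])   # an arbitrary 3x3 example

coeffs = np.poly(A)          # coefficients of p_A(lambda) = |lambda*I - A|
pA_at_0 = coeffs[-1]         # constant term = p_A(0)
n = A.shape[0]
det_from_poly = (-1)**n * pA_at_0   # should equal det(A)
```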

Section 5.2: Diagonalization

Definition 5.2.1 Similar Matrices

If A and B are two square matrices, then we say that B is similar to A if there is an invertible matrix P such that B = P⁻¹AP. In that case, we write B ≡ A.

Remark 5.2.1 Properties of Similar Matrices

1. A ≡ A since A = I −1 AI.
2. If B ≡ A, then A ≡ B.

Proof. If B ≡ A, then there exists an invertible matrix P such that B = P⁻¹AP, i.e., PBP⁻¹ = A. Let Q = P⁻¹ to get A = Q⁻¹BQ. Thus, A ≡ B.

3. If A ≡ B and B ≡ C, then A ≡ C.

Proof. A ≡ B and B ≡ C imply that there exist invertible matrices P and Q such that A = P⁻¹BP and B = Q⁻¹CQ. Therefore,

A = P⁻¹BP = P⁻¹Q⁻¹C QP = (QP)⁻¹ C (QP)  ⇒  A ≡ C.

4. If A ≡ B, then | A | = | B |.

Proof. If A ≡ B, then there exists an invertible matrix P, with |P| ≠ 0, such that B = P⁻¹AP. Therefore,

|B| = |P⁻¹ A P| = |P⁻¹| |A| |P| = (1/|P|) |A| |P| = |A|.

5. If A ≡ B, then Aᵀ ≡ Bᵀ.

Proof. If A ≡ B, then there exists an invertible matrix P such that B = P⁻¹AP. Thus,

Bᵀ = (P⁻¹ A P)ᵀ = Pᵀ Aᵀ (P⁻¹)ᵀ = Pᵀ Aᵀ (Pᵀ)⁻¹.

Let Q⁻¹ = Pᵀ to get Bᵀ = Q⁻¹ Aᵀ Q. Therefore, Bᵀ ≡ Aᵀ.



Theorem 5.2.1 Similar Matrices

Similar matrices have the same eigenvalues.

Proof:

Let A and B be two similar n × n matrices. Then, there is an invertible matrix P such that
B = P −1 A P . Then,

pB (λ) = | λIn − B | = λIn − P −1 A P = P −1 (λP P −1 − A)P

= P−1
 | λI − A | | P
| = | λI − A | = p (λ).

n  n A

The characteristic polynomials of A and B are the same. Hence they have the same eigenval-
ues.
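Theorem 5.2.1 can be illustrated numerically (an added sketch, assuming NumPy; the invertible matrix P below is an arbitrary choice):

```python
import numpy as np

A = np.array([[1.0, 0.0, 2.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, -1.0]])

# Any invertible P yields a matrix B similar to A
P = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])   # det(P) = 2, so P is invertible
B = np.linalg.inv(P) @ A @ P

ev_A = np.sort(np.linalg.eigvals(A).real)
ev_B = np.sort(np.linalg.eigvals(B).real)
# Similar matrices share the same eigenvalues
```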

Definition 5.2.2 Diagonalizability

An n × n matrix A is diagonalizable if and only if A is similar to a diagonal matrix D, i.e.,

D = P⁻¹ A P with |P| ≠ 0.

D: its diagonal entries are the eigenvalues of A.


P: its columns are the linearly independent eigenvectors of A.
That is,

D = diag(λ1 , λ2 , · · · , λn ) and P = [P1 | P2 | · · · | Pn ] .

Theorem 5.2.2 Distinct Eigenvalues

An n × n matrix A has n linearly independent eigenvectors if all of its eigenvalues are real and distinct.

Theorem 5.2.3 Linearly Independent Eigenvectors

An n × n matrix A is diagonalizable if and only if it has n linearly independent eigenvectors.

Definition 5.2.3 Multiplicities

If A is an n × n matrix with eigenvalue λ0, then the dimension of the eigenspace corresponding to λ0 is called the geometric multiplicity of λ0, and the number of times that λ − λ0 appears as a factor in the characteristic polynomial of A is called the algebraic multiplicity of λ0.

Theorem 5.2.4 Geometric and Algebraic Multiplicities

Let A be a square matrix. Then


1. For every eigenvalue of A, the geometric multiplicity is less than or equal to the algebraic
multiplicity.
2. A is diagonalizable if and only if the geometric multiplicity of every eigenvalue is equal
to the algebraic multiplicity.

Example 5.2.1
 
Let

A =
[ 1   0    2 ]
[ 1   0    0 ]
[ 0   0   −1 ]

If possible, find an invertible matrix P and a diagonal matrix D such that D = P⁻¹AP.

Solution:

Recall that in Example 5.1.2 we found λ1 = −1, λ2 = 0, and λ3 = 1, with bases

P1 = (−1, 1, 1),  P2 = (0, 1, 0),  P3 = (1, 1, 0).

Since we have real and distinct eigenvalues, the eigenvectors P1, P2, and P3 are linearly independent. Thus, A is diagonalizable, and

D =
[ −1   0   0 ]
[  0   0   0 ]
[  0   0   1 ]

and

P =
[ −1   0   1 ]
[  1   1   1 ]
[  1   0   0 ]
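The conclusion of Example 5.2.1 can be verified numerically (an added sketch, assuming NumPy): with P built from the eigenvectors, P⁻¹AP should be exactly diag(−1, 0, 1).

```python
import numpy as np

A = np.array([[1.0, 0.0, 2.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, -1.0]])
# Columns of P are the eigenvectors P1, P2, P3
P = np.array([[-1.0, 0.0, 1.0],
              [1.0, 1.0, 1.0],
              [1.0, 0.0, 0.0]])

D = np.linalg.inv(P) @ A @ P   # should be diag(-1, 0, 1)
```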

Example 5.2.2

Find a matrix P that diagonalizes


 
A =
[ 0   0   1 ]
[ 0   1   2 ]
[ 0   0   1 ]

Solution:

We first compute the characteristic polynomial pA(λ) = |λI3 − A| = 0 as follows:

| λ     0     −1  |
| 0    λ−1    −2  |  = λ(λ − 1)(λ − 1) = 0.
| 0     0    λ−1  |

Thus, λ1 = 1, λ2 = 1, and λ3 = 0. To find the corresponding bases for the eigenspaces of A, we solve the homogeneous system

[ λi     0       −1    | 0 ]
[ 0    λi − 1    −2    | 0 ]        (5.2.1)
[ 0      0     λi − 1  | 0 ]

1. λ1 = λ2 = 1 ⇒ (λ1 I3 − A)x1 = 0, x1 = (a, b, c) ≠ (0, 0, 0). Substitute λ1 = λ2 = 1 in Equation (5.2.1) to get:

[ 1   0   −1 | 0 ]
[ 0   0   −2 | 0 ]
[ 0   0    0 | 0 ]

Thus, we get a − c = 0 and −2c = 0, which implies that a = c = 0. If b = t ∈ R\{0}, then x1 = (0, t, 0). We choose t = 1 to get a basis for Eλ1 with one vector, P1 = (0, 1, 0).

We see here that the geometric multiplicity (the dimension of Eλ1) is 1, while the algebraic multiplicity of λ1 is 2. Therefore, A is not diagonalizable.
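The geometric multiplicity in Example 5.2.2 can be found numerically via the rank-nullity relation (an added sketch, assuming NumPy): the nullity of λI3 − A is n − rank(λI3 − A).

```python
import numpy as np

A = np.array([[0.0, 0.0, 1.0],
              [0.0, 1.0, 2.0],
              [0.0, 0.0, 1.0]])

lam = 1.0
M = lam * np.eye(3) - A
rank = np.linalg.matrix_rank(M)
geom_mult = 3 - rank   # nullity = n - rank = dim of eigenspace
# geometric multiplicity 1 < algebraic multiplicity 2 => not diagonalizable
```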

Example 5.2.3

Find, if possible, matrices D and P so that D = P −1 A P , where


 
A =
[  1   0   0 ]
[  0   0   0 ]
[ −1   0   0 ]

Solution:

We first compute the characteristic polynomial pA(λ) = |λI3 − A| = 0 as follows:

| λ−1   0   0 |
|  0    λ   0 |  = λ²(λ − 1) = 0.
|  1    0   λ |

Thus, λ1 = 0, λ2 = 0, and λ3 = 1. To find the corresponding bases for the eigenspaces of A, we solve the homogeneous system

[ λi − 1   0    0   | 0 ]
[   0      λi   0   | 0 ]        (5.2.2)
[   1      0    λi  | 0 ]

1. λ1 = λ2 = 0 ⇒ (λ1 I3 − A)x1 = 0, x1 = (a, b, c) ≠ (0, 0, 0). Substitute λ1 = λ2 = 0 in Equation 5.2.2 to get:

[ −1   0   0 | 0 ]
[  0   0   0 | 0 ]
[  1   0   0 | 0 ]

Thus, we get a = 0. Let b = t and c = r (not both zero) be two real numbers to get

x1 = (0, t, r) = t (0, 1, 0) + r (0, 0, 1).

Choosing t = 1, r = 0 gives P1 = (0, 1, 0), and choosing t = 0, r = 1 gives P2 = (0, 0, 1).

Here, the dimension of Eλ1 is 2, which equals the algebraic multiplicity of λ = 0. So, we continue with the other eigenvalue.
2. λ3 = 1 ⇒ (λ3 I3 − A)x3 = 0, x3 = (a, b, c) ≠ (0, 0, 0). Substitute λ3 = 1 in Equation 5.2.2 to get:

[ 0   0   0 | 0 ]
[ 0   1   0 | 0 ]
[ 1   0   1 | 0 ]

Thus, we get b = 0 and a + c = 0. If a = t ∈ R\{0}, then c = −t and x3 = (t, 0, −t). We choose t = 1 to get a basis with one vector, P3 = (1, 0, −1).

There are three basis vectors in total, so the matrix P = [P3 | P1 | P2] diagonalizes A, and we get D = P⁻¹AP = diag(1, 0, 0). That is,

D =
[ 1   0   0 ]
[ 0   0   0 ]
[ 0   0   0 ]

and

P =
[  1   0   0 ]
[  0   1   0 ]
[ −1   0   1 ]
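Diagonalization is what makes powers like those in Exercise 5.2.5 tractable: A^k = P D^k P⁻¹, and D^k is computed entry-by-entry on the diagonal. A numerical sketch for Example 5.2.3 (an addition to the notes, assuming NumPy):

```python
import numpy as np

A = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0],
              [-1.0, 0.0, 0.0]])
# P = [P3 | P1 | P2] from Example 5.2.3
P = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [-1.0, 0.0, 1.0]])

D = np.linalg.inv(P) @ A @ P      # should be diag(1, 0, 0)

k = 7
Ak_direct = np.linalg.matrix_power(A, k)
# A^k via diagonalization: raise only the diagonal entries to the k-th power
Ak_diag = P @ np.diag(np.diag(D)**k) @ np.linalg.inv(P)
```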

Exercise 5.2.1

Show that similar matrices have the same trace. Hint: Recall that tr(AB) = tr(BA).

Exercise 5.2.2

Show that A andB in each 


of the following are not similar matrices:
2 1 2 0
1. A =  and B =  .
3 4 3 3
   
4 2 0 0 3 4
   
2. A = 2 1 0 and B = 0
  7 2 .
1 1 7 0 0 4
Hint: Similar matrices shares some properties like determinants and traces.

Exercise 5.2.3
 
Let

A =
[  0   −3   −3 ]
[  1    4    1 ]
[ −1   −1    2 ]

1. Find the eigenvalues of A.
2. For each eigenvalue λ, find the rank of the matrix λI3 − A.
3. Is A diagonalizable? Why?

Hint: For part 3, use what you got in part 2 and recall that for an n × n matrix, we have n = rank + nullity.

Exercise 5.2.4

Show that if A is diagonalizable, then


1. AT is diagonalizable.
2. Ak is diagonalizable, for any positive integer k.
Hint: A is diagonalizable implies that A = P D P −1 .

Exercise 5.2.5
 
Let

A =
[ 1   −2    8 ]
[ 0   −1    0 ]
[ 0    0   −1 ]

1. Find A^10000.
2. Find A^20021.
3. Find A^−20021.

Hint: Write A in the form A = P D P⁻¹.

Exercise 5.2.6

Show that if A and B are invertible matrices, then AB and BA are similar. Hint: They are
similar if AB = (?)−1 (BA) (?).

Exercise 5.2.7

Prove: If A and B are n × n invertible matrices, then A B −1 and B −1 A have the same
eigenvalues. Hint: Show that they have the same characteristic polynomial.

Exercise 5.2.8
 
Let

A =
[ 1   2   0 ]
[ 0   2   0 ]
[ 1   a   2 ]

where a ∈ R.
1. Find all eigenvalues of A.
2. For a = −2, determine whether A is diagonalizable.
3. For a 6= −2, find all eigenvectors of A.
Final answer: Eigenvalues: 1 and 2. If a = −2, A is diagonalizable. Otherwise, A is not
diagonalizable.

Exercise 5.2.9

1. Show that a square matrix A is singular iff it has an eigenvalue 0.
2. Use part 1 to show that 0 is an eigenvalue of the matrix

A =
[ 2017   −2017   2020 ]
[ 2018   −2018   2021 ]
[ 2019   −2019   2022 ]
Chapter 6
Applications of Vectors in R2 and R3

Section 6.1: Orthogonality

Definition 6.1.1 Orthogonal Vectors

Two nonzero vectors x and y in Rn are said to be orthogonal (or perpendicular) if x·y = 0.

Remark 6.1.1

• The set of standard unit vectors {i, j, k} in R3 is called an orthogonal set since the
dot product of any pair of distinct vectors is 0. Moreover, it is called an orthonormal
set since each vector in the set is a unit vector.

Example 6.1.1

Show that x = (−2, 3, 1, 4) and y = (1, 2, 0, −1) are orthogonal in R4 .

Solution:

Clearly, x and y are orthogonal since x · y = −2 + 6 + 0 − 4 = 0.

Example 6.1.2

Find a unit vector that is orthogonal to both x = (1, 0, 1) and y = (0, 1, 1).

Solution:

Consider z = (a, b, c) such that x · z = y · z = 0. Then,

a + c = 0 and b + c = 0.

Thus, z is of the form (−c, −c, c). Setting c = 1, we get z = (−1, −1, 1) which is orthogonal
to both x and y.
To find a unit vector that is orthogonal to both x and y, we consider

w = (1/‖z‖) z = (−1/√3, −1/√3, 1/√3).
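The construction in Example 6.1.2 can be checked numerically (an added sketch, assuming NumPy): w should be a unit vector orthogonal to both x and y.

```python
import numpy as np

x = np.array([1.0, 0.0, 1.0])
y = np.array([0.0, 1.0, 1.0])

z = np.array([-1.0, -1.0, 1.0])     # the solution found above
w = z / np.linalg.norm(z)           # normalize to get a unit vector

dot_wx = float(w @ x)
dot_wy = float(w @ y)
norm_w = float(np.linalg.norm(w))
```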


Theorem 6.1.1 The Pythagoras Theorem in Rn

If x and y are orthogonal vectors in Rn, then

‖x + y‖² = ‖x‖² + ‖y‖².

Proof:

Since x and y are orthogonal, we have x · y = 0. Then

‖x + y‖² = (x + y) · (x + y) = ‖x‖² + 2 (x · y) + ‖y‖² = ‖x‖² + ‖y‖².

Example 6.1.3

Find all vectors in R4 that are orthogonal to both

x = (1, 1, 1, 1) and y = (1, 1, −1, −1).

Solution:

Let z = (a, b, c, d) ∈ R4 so that z · x = z · y = 0. Therefore, we get the following homogeneous system:

a + b + c + d = 0
a + b − c − d = 0

with augmented matrix

[ 1   1    1    1 | 0 ]   ∼   [ 1   1   0   0 | 0 ]
[ 1   1   −1   −1 | 0 ]       [ 0   0   1   1 | 0 ]

Solving this system, we get a = −b and c = −d. Let b = r and d = s, where r, s ∈ R, to get z = (−r, r, −s, s), which is the form of any vector in R4 that is orthogonal to both x and y.

Example 6.1.4

Show that the triangle with vertices P1 (2, 3, −4), P2 (3, 1, 2), and P3 (7, 0, 1) is a right triangle.

Solution:

We start with Figure 1, as we do not know where a right angle might be. We create three vectors, namely

x = P1P2 = P2 − P1 = (1, −2, 6),
y = P1P3 = P3 − P1 = (5, −3, 5),
z = P2P3 = P3 − P2 = (4, −1, −1).

This is drawn in Figure 2. Then, we want to find two vectors whose dot product is zero, which holds for x and z. That is,

x · z = 4 + 2 − 6 = 0.

Therefore, this triangle has a right angle at P2, and it is drawn in Figure 3.

Also, we can use the Pythagoras Theorem to show that ‖y‖² = ‖x‖² + ‖z‖².
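Both checks from Example 6.1.4 (the zero dot product and the Pythagoras identity) can be run numerically (an added sketch, assuming NumPy):

```python
import numpy as np

P1 = np.array([2.0, 3.0, -4.0])
P2 = np.array([3.0, 1.0, 2.0])
P3 = np.array([7.0, 0.0, 1.0])

x = P2 - P1   # (1, -2, 6)
y = P3 - P1   # (5, -3, 5)
z = P3 - P2   # (4, -1, -1)

right_angle_at_P2 = abs(float(x @ z)) < 1e-12
# Pythagoras: the side opposite the right angle is the hypotenuse
pythagoras = abs(float(y @ y) - (float(x @ x) + float(z @ z))) < 1e-12
```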

Exercise 6.1.1

Find all values of c so that x = (c, 2, 1, c) and y = (c, −1, −2, −3) are orthogonal.

Exercise 6.1.2

Show that if x and y are orthogonal unit vectors in Rn, then

‖a x + b y‖ = √(a² + b²).

Exercise 6.1.3

Show that if x and y are orthogonal unit vectors in Rn , then

‖4x + 3y‖ = 5.

Exercise 6.1.4

Let x and y be two vectors in Rn so that ‖x‖ = ‖y‖. Show that x − y and x + y are


orthogonal.

Exercise 6.1.5

Find all values of a so that x = (a² − a, −3, −1) and y = (2, a − 1, 2a) are orthogonal.

Exercise 6.1.6

Find a unit vector that is orthogonal to both x = (1, 1, 0) and y = (−1, 0, 1).

Exercise 6.1.7

Show that the triangle with vertices P1 (2, 3, −4), P2 (3, 1, 2), and P3 (7, 0, 1) is a right triangle.

Section 6.2: Cross Product

Remark 6.2.1

• Recall that if 0 ≤ θ ≤ π is the angle between two nonzero vectors x and y in Rn, then

−1 ≤ cos θ = (x · y) / (‖x‖ ‖y‖) ≤ 1,  or  x · y = ‖x‖ ‖y‖ cos θ.

• Two nonzero vectors x and y in Rn are said to be orthogonal (or perpendicular) if x · y = 0.

Remark 6.2.2 Cross Product

Let x, y ∈ Rn be nonzero vectors with angle 0 ≤ θ ≤ π between them. Then

1. x ⊥ y (orthogonal) ⇔ θ = π/2 ⇔ cos θ = 0 ⇔ x · y = 0.
2. x ∥ y (parallel, same direction) ⇔ θ = 0 ⇔ cos θ = 1 ⇔ x · y = ‖x‖ ‖y‖ ⇔ y = c x with c > 0.
3. x ∥ y (parallel, opposite direction) ⇔ θ = π ⇔ cos θ = −1 ⇔ x · y = −‖x‖ ‖y‖ ⇔ y = c x with c < 0.

Definition 6.2.1

Let x = (x1, x2, x3) = x1 i + x2 j + x3 k and y = (y1, y2, y3) = y1 i + y2 j + y3 k be two vectors in R3. Then the cross product of x and y, denoted by x × y, is defined by

        | i    j    k  |
x × y = | x1   x2   x3 |
        | y1   y2   y3 |

      = | x2  x3 | i  −  | x1  x3 | j  +  | x1  x2 | k.
        | y2  y3 |       | y1  y3 |       | y1  y2 |

That is,

x × y = (x2 y3 − x3 y2 , x3 y1 − x1 y3 , x1 y2 − x2 y1).

Example 6.2.1

Find x × y where x = (2, 1, 2) = 2i + j + 2k and y = (3, −1, −1) = 3i − j − k.

Solution:

        | i    j    k  |
x × y = | 2    1    2  |
        | 3   −1   −1  |

      = |  1   2 | i  −  | 2   2 | j  +  | 2   1 | k  =  i + 8j − 5k = (1, 8, −5).
        | −1  −1 |       | 3  −1 |       | 3  −1 |
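Example 6.2.1 can be verified numerically (an added sketch, assuming NumPy), which also illustrates Theorem 6.2.2(1):

```python
import numpy as np

x = np.array([2.0, 1.0, 2.0])
y = np.array([3.0, -1.0, -1.0])

xy = np.cross(x, y)   # should be (1, 8, -5)
# x cross y is orthogonal to both x and y
orthogonal = (abs(float(xy @ x)) < 1e-12) and (abs(float(xy @ y)) < 1e-12)
```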

Theorem 6.2.1 Properties of Cross Product

If x, y, z ∈ R3 and c ∈ R, then

1. x × y = −(y × x),
2. x × (y + z) = x × y + x × z,
3. (x + y) × z = x × z + y × z,
4. c (x × y) = (c x) × y = x × (c y),
5. x × x = 0,
6. x × 0 = 0 × x = 0,
7. x × (y × z) = (x · z) y − (x · y) z,
8. (x × y) × z = (x · z) y − (y · z) x.

Remark 6.2.3 Cross Product of Unit Vectors

It can be shown (try it yourself) that

i × i = 0      j × j = 0      k × k = 0
i × j = k      j × k = i      k × i = j
j × i = −k     k × j = −i     i × k = −j

Remark 6.2.4 Scalar Triple Product

If x = (x1, x2, x3), y = (y1, y2, y3), and z = (z1, z2, z3) are in R3, then observe that

              | x1   x2   x3 |
x · (y × z) = | y1   y2   y3 |  = (x × y) · z.
              | z1   z2   z3 |
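The scalar triple product identity can be checked numerically (an added sketch, assuming NumPy), using the vectors of Example 6.2.3:

```python
import numpy as np

x = np.array([1.0, -2.0, 3.0])
y = np.array([1.0, 3.0, 1.0])
z = np.array([2.0, 1.0, 2.0])

triple1 = float(x @ np.cross(y, z))                 # x . (y x z)
triple2 = float(np.cross(x, y) @ z)                 # (x x y) . z
det = float(np.linalg.det(np.array([x, y, z])))     # determinant with rows x, y, z
```

All three quantities coincide, which is exactly the remark above.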

Theorem 6.2.2 Cross Product and Dot Product

If x, y, z ∈ R3 , then
1. x · (x × y) = 0 and y · (x × y) = 0, [x × y is orthogonal to both x and y]
2. ‖x × y‖² = ‖x‖² ‖y‖² − (x · y)², [Lagrange's identity]

Remark 6.2.5

If θ denotes the angle between x and y, then Theorem 6.2.2(2) and Remark 6.2.1 imply that

‖x × y‖ = ‖x‖ ‖y‖ sin θ.

Remark 6.2.6

If x, y, z ∈ R3, then

1. x ⊥ y ⟺ θ = π/2 ⟺ sin θ = 1 ⟺ ‖x × y‖ = ‖x‖ ‖y‖,
2. x ∥ y ⟺ θ = 0 or π ⟺ ‖x × y‖ = 0 ⟺ x × y = 0,
3. Area of a triangle: with adjacent sides x and y, the height satisfies sin θ = h/‖x‖, so h = ‖x‖ sin θ, and if the base is ‖y‖ = a, then

A△ = (1/2) a h = (1/2) ‖y‖ ‖x‖ sin θ = (1/2) ‖x × y‖.

4. Area of a parallelogram (two triangles): A = ‖x × y‖.
5. Volume of a parallelepiped with edges x, y, z: Volume = |x · (y × z)|.

Example 6.2.2

Find the area of the triangle with vertices P1(2, 2, 4), P2(−1, 0, 5), and P3(3, 4, 3). Then find the area of the parallelogram with adjacent sides P1P2 and P1P3.

Solution:
Let x = P1P2 = P2 − P1 = (−3, −2, 1) and y = P1P3 = P3 − P1 = (1, 2, −1). Then,

        | i    j    k  |
x × y = | −3  −2    1  |
        | 1    2   −1  |

      = | −2   1 | i  −  | −3   1 | j  +  | −3  −2 | k  =  0i − 2j − 4k.
        |  2  −1 |       |  1  −1 |       |  1   2 |

Therefore, ‖x × y‖ = √(4 + 16) = √20 = 2√5 and A△ = (1/2) ‖x × y‖ = √5.

The area of the parallelogram is simply A = 2 A△ = 2√5.

Example 6.2.3

Find the volume of the parallelpiped with a vertex at the origin and edges x = (1, −2, 3),
y = (1, 3, 1), and z = (2, 1, 2).

Solution:

                              | 1   −2   3 |
x · (y × z) = (x × y) · z  =  | 1    3   1 |
                              | 2    1   2 |

= 1(6 − 1) − 1(−4 − 3) + 2(−2 − 9) = 5 + 7 − 22 = −10,

so Volume = |x · (y × z)| = |−10| = 10.

Theorem 6.2.3

If x = (x1, x2, x3), y = (y1, y2, y3), and z = (z1, z2, z3) have the same initial point, then they lie in the same plane if and only if

x · (y × z) = 0.

Exercise 6.2.1

Consider the points P (3, −1, 4), Q(6, 0, 2) and R(5, 1, 1). Find the point S in R3 whose first
−−→ −→
component is −1 and such that PQ is parallel to RS.

Exercise 6.2.2

Use cross product to find a vector that is orthogonal to both x = (1, −1, 2) and y = (2, 1, 2).
Hint: Find x × y.

Exercise 6.2.3

Find the area of the parallelogram determined by x = (1, 3, 4) and y = (5, 1, 2). Hint: See
Remark 6.2.6.

Exercise 6.2.4

Find the area of the triangle whose vertices are P (1, 1, 2), Q(0, −3, 4), and R(2, 3, 5). Hint:
See Example 6.2.2.

Exercise 6.2.5

Find the volume of the parallelpiped with sides x = (−3, 1, −2), y = (−1, 2, 3), and z =
(1, 2, 4). Hint: See Example 6.2.3.

Exercise 6.2.6

Determine whether x = (0, 1, −1), y = (2, 2, 0), and z = (4, 1, 2) lie in the same plane when
positioned so that their initial points coincide. Hint: See Theorem 6.2.3.

Exercise 6.2.7

Show that two nonzero vectors x and y in R3 are parallel, if and only if, x × y = 0.

Exercise 6.2.8

If x and y are nonzero vectors in R3 such that ‖(2x) × (2y)‖ = −4 x · y, compute the angle between x and y.
Hint: What is θ if tan θ = −1?

Exercise 6.2.9

Find the area of the triangle whose vertices are P (1, 0, −1), Q(2, −1, 3) and R(0, 1, −2).

Exercise 6.2.10

Let x and y be unit vectors in R3 . Show that kx × yk2 + (x · y)2 = 1.

Exercise 6.2.11
Let x and y be two nonzero vectors in R3, with angle θ = π/3 between them. Find ‖x × y‖ if ‖x‖ = 3 and ‖−2y‖ = 4.

Exercise 6.2.12

Find the area of the parallelogram determined by x = (1, 3, 4) and y = (5, 1, 2).

Final answer: 2√131.

Exercise 6.2.13

Let x, y, z be in R3 such that (x × y) · z = 6. Find: a) x · (y × z), b) 2x · (y × z), c) x · (z × y), and d) x × (y × 4x).

Exercise 6.2.14

Find a vector of length 12 so that it is perpendicular to both

x = 2i − j − 2k and y = 2i + j.

Section 6.3: Lines and Planes in R3

Definition 6.3.1

A line in R3 is determined by a fixed point P0 = (x0, y0, z0) and a directional vector u = (a, b, c). The line L through P0 and parallel to u consists of the points P(x, y, z) such that

(x, y, z) = (x0, y0, z0) + t(a, b, c), where t ∈ R.

Such an equation is written as x = P0 + t u, where x = (x, y, z).

The parametric equations of the line L are:

x = x0 + at,  y = y0 + bt,  z = z0 + ct,  t ∈ R,

while the symmetric form of L is given by:

t := (x − x0)/a = (y − y0)/b = (z − z0)/c.

Example 6.3.1

Let P1 (2, −2, 3), P2 (−1, 0, 4) be two points in R3 . Find the parametric equation and the
symmetric form of the line that passes through the points P1 and P2 .

Solution:
Let u = P1P2 = P2 − P1 = (−3, 2, 1) and let P0 = P1 be a fixed point on the line, call it L. Then, L: (x, y, z) = (2, −2, 3) + t(−3, 2, 1), with parametric equations:

x = 2 − 3t,  y = −2 + 2t,  z = 3 + t,  t ∈ R,

while the symmetric form of L is:

(x − 2)/(−3) = (y + 2)/2 = (z − 3)/1  ⟺  (2 − x)/3 = (y + 2)/2 = z − 3.

Remark 6.3.1

Let u1 = (a1, b1, c1) and u2 = (a2, b2, c2) be the directional vectors associated with lines L1 and L2, where

L1: (x − x1)/a1 = (y − y1)/b1 = (z − z1)/c1  and  L2: (x − x2)/a2 = (y − y2)/b2 = (z − z2)/c2.

Then,

1. L1 ⊥ L2 ⟺ u1 ⊥ u2 ⟺ u1 · u2 = 0,
2. L1 ∥ L2 ⟺ u1 ∥ u2 ⟺ u1 × u2 = 0 ⟺ u2 = c u1 for some c ∈ R.

Example 6.3.2

Show that the line L1 through P1(4, −1, 4) with directional vector u1 = (1, 1, −3) and the line L2 through P2(3, −1, 0) with directional vector u2 = (2, 1, 1) intersect orthogonally, and find the intersection point.

Solution:

Clearly, u1 · u2 = 2 + 1 − 3 = 0. Then, u1 ⊥ u2 =⇒ L1 ⊥ L2 . To find the intersection point


P (x, y, z), we look for a point satisfying both parametric equations at the same time:
L1 : x = 4 + t1 , y = −1 + t1 , and z = 4 − 3t1 ,
L2 : x = 3 + 2t2 , y = −1 + t2 , and z = t2 .
Clearly, since y = −1 + t1 = −1 + t2 , we get t1 = t2 . Substituting this in z = 4 − 3t1 = t2 , we
get 4 − 3t1 = t1 which implies that t1 = 1 = t2 . Therefore, the intersection point according
to L1 is P (4 + 1, −1 + 1, 4 − 3), that is P (5, 0, 1).

Definition 6.3.2

The equation of a plane Π is determined by a fixed point P0(x0, y0, z0) contained in Π and a normal vector n = (a, b, c) which is orthogonal to Π. A point P(x, y, z) lies on the plane Π if and only if

n ⊥ P0P  ⟺  n · P0P = 0.

The point-normal equation (general form) of the plane Π that passes through P0(x0, y0, z0) with normal vector n = (a, b, c) is

a(x − x0) + b(y − y0) + c(z − z0) = 0,

while the standard form of the plane Π is ax + by + cz + d = 0.

Example 6.3.3

The equation of the plane Π containing the point P0(1, 0, −1) with normal vector n = (2, −1, 1) is

2(x − 1) + (−1)(y − 0) + 1(z − (−1)) = 0.

That is, the standard form of Π is

2x − y + z = 1.

Remark 6.3.2

To find an equation of a plane Π containing three points P1(x1, y1, z1), P2(x2, y2, z2), and P3(x3, y3, z3), we can use one of the following two ways:

1. For any point P(x, y, z) ∈ Π, we apply the standard form of Π, ax + by + cz + d = 0, to the points P, P1, P2, and P3 to get a homogeneous system in a, b, c, and d. This system has nontrivial solutions if

| x    y    z   1 |
| x1   y1   z1  1 |
| x2   y2   z2  1 |  = 0.
| x3   y3   z3  1 |

Expanding this determinant, we get an equation in the standard form for Π.

2. Another way is to compute two vectors contained in Π, namely x = P1P2 and y = P1P3, and consider the normal vector to Π, n = x × y = (a, b, c), which is orthogonal to Π. Then, the general form of Π is a(x − x1) + b(y − y1) + c(z − z1) = 0.

Example 6.3.4

Let P1 (2, −2, 1), P2 (−1, 0, 3), P3 (5, −3, 4), and P4 (4, −3, 7) be four points in R3 . Then,
1. Find an equation of the plane Π that passes through P1 , P2 , and P3 .
2. Is P4 contained in Π? Explain.

Solution:
−−→ −−→
1. Let x = P1 P2 = P2 − P1 = (−3, 2, 2) and y = P1 P3 = P3 − P1 = (3, −1, 3). These
two vectors are contained in Π while n = x × y = (8, 15, −3) is a normal vector to Π.
Therefore, a general form of Π is 8(x − 2) + 15(y + 2) − 3(z − 1) = 0. The standard
form of Π is

8x + 15y − 3z + 17 = 0.

2. P4 is contained in Π if it satisfies its equation:

8(4) + 15(−3) − 3(7) + 17 = −17 6= 0.

Therefore, P4 is not on the plane Π.
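The second method of Remark 6.3.2, as carried out in Example 6.3.4, can be checked numerically (an added sketch, assuming NumPy):

```python
import numpy as np

P1 = np.array([2.0, -2.0, 1.0])
P2 = np.array([-1.0, 0.0, 3.0])
P3 = np.array([5.0, -3.0, 4.0])
P4 = np.array([4.0, -3.0, 7.0])

n = np.cross(P2 - P1, P3 - P1)   # normal vector, should be (8, 15, -3)
d = -float(n @ P1)               # so the plane is n.(x, y, z) + d = 0

def on_plane(P):
    return abs(float(n @ P) + d) < 1e-9
```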



Remark 6.3.3

Let Π1: a1x + b1y + c1z + d1 = 0 and Π2: a2x + b2y + c2z + d2 = 0, with normal vectors n1 and n2. Then,

1. Π1 ∥ Π2 ⟺ n1 ∥ n2 ⟺ n1 × n2 = 0 ⟺ n2 = c n1, where c ∈ R,
2. Π1 ⊥ Π2 ⟺ n1 ⊥ n2 ⟺ n1 · n2 = 0.

Example 6.3.5

Find the parametric equation of the intersection line of the two planes:

Π1 : x − y + 2z = 3 and Π2 : 2x + 4y − 2z = −6.

Solution:

We form a non-homogeneous system to solve for the parametric equations of the intersection line:

[ 1   −1    2 |  3 ]             [ 1   0    1 |  1 ]
[ 2    4   −2 | −6 ]  → ··· →    [ 0   1   −1 | −2 ]

Therefore, the reduced system is:

x + z = 1 and y − z = −2.

Let z = t ∈ R to get the parametric equations of the intersection line:

x = 1 − t,  y = −2 + t,  z = t.

Example 6.3.6

Find two equations of two planes whose intersection line is the line L:

x = −2 + 3t; y = 3 − 2t; z = 5 + 4t; where t ∈ R.



Solution:

The symmetric form of L is:

(x + 2)/3 = (y − 3)/(−2) = (z − 5)/4.

Therefore, a first plane is obtained by equating (x + 2)/3 = (y − 3)/(−2), to get −2x − 4 = 3y − 9. Thus, Π1: 2x + 3y − 5 = 0. Another plane is obtained by equating (x + 2)/3 = (z − 5)/4, to get 4x + 8 = 3z − 15. Thus, Π2: 4x − 3z + 23 = 0.

Further Considerations:
Remark 6.3.4

Let u = (a1, b1, c1) be the directional vector of the line L: (x − x0)/a1 = (y − y0)/b1 = (z − z0)/c1, and let n = (a2, b2, c2) be the normal vector of the plane Π: a2x + b2y + c2z + d2 = 0. Then,

1. L ⊥ Π ⟺ u ∥ n ⟺ u × n = 0 ⟺ u = c n, where c ∈ R,
2. L ∥ Π ⟺ u ⊥ n ⟺ u · n = 0.

Example 6.3.7

Find a plane that passes through the point (2, 4, −3) and is parallel to the plane −2x + 4y −
5z + 6 = 0.

Solution:

Since the two planes are parallel, we can choose the normal vector of the given plane, that is, n = (−2, 4, −5). Thus, the equation of the plane is

−2(x − 2) + 4(y − 4) − 5(z + 3) = 0 =⇒ −2x + 4y − 5z − 27 = 0.



Example 6.3.8

Find a line that passes through the point (−2, 5, −3) and is perpendicular to the plane 2x −
3y + 4z + 7 = 0.

Solution:

The line L is perpendicular to our plane, so it is parallel to the plane's normal vector; hence we can choose u = n = (2, −3, 4), and the parametric equations of L are

x = −2 + 2t,  y = 5 − 3t,  z = −3 + 4t,  t ∈ R.

Example 6.3.9

Show that the plane Π: 6x − 4y + 2z = 0 and the line L: (x − 1)/3 = (−y + 4)/2 = z − 5 intersect orthogonally. Find the intersection point.
orthogonally. Find the intersection point.

Solution:

We first have to write the symmetric form as

L: (x − 1)/3 = (y − 4)/(−2) = (z − 5)/1.

Then, the normal vector of Π is n = (6, −4, 2) and the directional vector of L is u = (3, −2, 1). Clearly, n = 2u, which implies that n ∥ u ⟺ Π ⊥ L.

A point of intersection with respect to L has the form

x = 1 + 3t,  y = 4 − 2t,  z = 5 + t,  t ∈ R.

Therefore, plugging these values into the plane equation, we get

6(1 + 3t) − 4(4 − 2t) + 2(5 + t) = 0
6 + 18t − 16 + 8t + 10 + 2t = 0
28t = 0.

Therefore, we get t = 0. Substituting this into the parametric equations, we get the intersection point P(1, 4, 5).

Exercise 6.3.1

Consider the planes:

Π1 : x + y + z = 3, Π2 : x + 2y − 2z = 2k, and Π3 : x + k 2 z = 2.

Find all values of k for which the intersection of the three planes is a line. Hint: Any point on the intersection of the three planes must satisfy all three equations. This gives a system of three equations, which must have infinitely many solutions to describe a line.

Exercise 6.3.2

Consider the lines:


L1: (x + 1)/3 = (y + 4)/2 = z − 1,  and  L2: (x − 3)/2 = (y − 4)/(−4) = (z − 2)/2.
1. Show that L1 and L2 are perpendicular and find their point of intersection.
2. Find an equation of the plane Π that contains both L1 and L2 .
Hint: (a) Show that u1 · u2 = 0 and then find a point satisfying both equations of x, y, and
z in terms of t1 and t2 , for instance. (b) Consider n = u1 × u2 .

Exercise 6.3.3

Let L be the line through the points P1 (−4, 2, −6) and P2 (1, 4, 3).
1. Find parametric equations for L.
2. Find two planes whose intersection is L.

Exercise 6.3.4

Find the parametric equations for the line L which passes through the points P (2, −1, 4) and
Q(4, 4, −2). For what value of k is the point R(k + 2, 14, −14) on the line L?

Exercise 6.3.5

Find the point of intersection of the line x = 1 − t, y = 1 + t, z = t, and the plane 3x + y + 3z − 1 = 0.

Exercise 6.3.6

Find the equations in symmetric form of line of intersection of planes:

Π1 : x + 2y − z = 2, and Π2 : 3x + 7y + z = 11.

Exercise 6.3.7

Find an equation of the plane containing the lines

L1 : x = 3 + t, y = 1 − t, z = 3t, and L2 : x = 2s, y = −2 + s, z = 5 − s.

Exercise 6.3.8

Find a, b ∈ R so that the point P (3, a − 2b, 2a + b) lies on the line

L : x = 1 + 2t, y = 2 − t, z = 4 + 3t.
perpendicular vectors . . . . . . . . . . . . . . . 133, 137
nonsingular . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
plane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143, 144
scalar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
general form . . . . . . . . . . . . . . . . . . . . . . . . . 144
similar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
standard form . . . . . . . . . . . . . . . . . . . . . . . .144
singular . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
point-normal equation . . . . . . . . . . . . . . . . . . . 144
size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
skew-symmetric . . . . . . . . . . . . . . . . . . . . . . . 43
square . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11 R
sum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13 rank . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
symmetric . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43 reduced row echelon form . . . . . . . . . . . . . . . . . . 4
trace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12 row echelon form . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
THE INDEX 153

row equivalent . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30 inconsistent . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1


row space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107 non-homogeneous . . . . . . . . . . . . . . . . . . . . . . 1
row vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11, 14 non-homogenous . . . . . . . . . . . . . . . . . . . . . . 34

T
S
terminal point . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
scalar multiple of a matrix . . . . . . . . . . . . . . . . 13
transpose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
scalar multiplication
triangle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
closed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
triangle inequality for distance . . . . . . . . . . . . 75
similar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
triangle inequality for vector . . . . . . . . . . . . . . 75
size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
Triangle Inequality for vectors and distances
skew-symmetric matrix . . . . . . . . . . . . . . . . . . . 43
75
solution
triangular
infinite . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
lower . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .42
no . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
upper . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
non-trivial . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
triple vector product . . . . . . . . . . . . . . . . . . . . . 138
trivial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
trivial solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
unique . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
space U

column . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107 unit

null . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107 standard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74

row . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107 upper triangular matrix . . . . . . . . . . . . . . . . . . . 42

spanned by . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88 V
spans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88 vector
standard form . . . . . . . . . . . . . . . . . . . . . . . . . . . 144 column . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11, 14
standard unit vector . . . . . . . . . . . . . . . . . . . . . . 74 coordinates . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
subspace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86 cross product . . . . . . . . . . . . . . . . . . . . . . . . 137
trivial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86 directional . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
symmetric matrix . . . . . . . . . . . . . . . . . . . . . . . . . 43 distance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
system dot product . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
augmented matrix form . . . . . . . . . . . . . . . . 2 inequality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
consistent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1 initial point . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
homogeneous . . . . . . . . . . . . . . . . . . . . . . . . . . . 1 length . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .72
homogenous . . . . . . . . . . . . . . . . . . . . . . . . . . . 34 norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .72
154 THE INDEX

normal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144 triangle inequality . . . . . . . . . . . . . . . . . . . . . 75


orthogonal . . . . . . . . . . . . . . . . . . . . . . 133, 137 unit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73

parallel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137 zero . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69

parallelogram equation . . . . . . . . . . . . . . . . 76 vector space . . . . . . . . . . . . . . . . . . . . . . . . . . . 69, 81

perpendicular . . . . . . . . . . . . . . . . . . . 133, 137 general . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81


linear combination . . . . . . . . . . . . . . . . . . . . 88
row . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11, 14
standard unit . . . . . . . . . . . . . . . . . . . . . . . . . 74 Z
terminal point . . . . . . . . . . . . . . . . . . . . . . . . 69 zero vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
