m111 Notes-1
February 7, 2023
Contents
2 Determinants 49
3.1 Vectors in Rn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
4.2 Subspaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
1
System of Linear Equations and Matrices
Remark 1.1.1
Example 1.1.1
Solution:
Clearly adding the two (non-homogeneous) equations, we get 3x = 6. Thus, x = 2 and hence
y = −1. Therefore, the system has a unique solution: x = 2 and y = −1.
2 Chapter 1. System of Linear Equations and Matrices
Example 1.1.2
Solution:
We can eliminate x from the second equation by adding −2 times the first equation to the
second. Thus, we get
x + y = 1
0 = 0
Thus, we can simply omit the second equation. Therefore, the solutions are of the form
x = 1 − y. Setting a parameter t ∈ R for y, we get infinitely many solutions of the form x = 1 − t
and y = t. Therefore, the system has infinitely many solutions.
Example 1.1.3
Solution:
Adding the third equation to −2 times the first equation, we get 0 = 3 which is impossible.
Therefore, this system has no solutions.
Example 1.1.4
Here is an example of transforming a system of equations into its augmented matrix form:
x1 + x2 − x3 = 1                  [ 1  1  −1 | 1 ]
      x2 − 2x3 = 0       ⇒        [ 0  1  −2 | 0 ]
2x1 + 2x2 − 2x3 = 5               [ 2  2  −2 | 5 ]
The basic method for solving a linear system is to perform algebraic operations on the system
that do not change the solution set so that it produces a simpler version of the same system.
In matrix form, these algebraic operations are called elementary row operations:
Remark 1.1.2
Given an augmented matrix of some linear system, we define the following elementary row
operations:
1. interchanging two rows,
2. multiplying a row by a non-zero scalar,
3. adding a multiple of a row to another row.
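The three operations above are easy to mechanize. Below is a minimal sketch in Python (assuming NumPy is available; the helper names are illustrative, not from the notes) that applies each elementary row operation to an augmented matrix:

import numpy as np

def swap_rows(M, i, j):
    M[[i, j]] = M[[j, i]]          # interchange rows i and j

def scale_row(M, i, c):
    assert c != 0                  # the scalar must be non-zero
    M[i] = c * M[i]                # multiply row i by c

def add_multiple(M, i, j, c):
    M[i] = M[i] + c * M[j]         # add c times row j to row i

# augmented matrix of  x + y = 1,  2x + 2y = 2
M = np.array([[1., 1., 1.],
              [2., 2., 2.]])
add_multiple(M, 1, 0, -2.0)        # r2 <- r2 - 2 r1 wipes out the redundant equation
print(M)                           # [[1. 1. 1.] [0. 0. 0.]]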
In this section, we solve systems of linear equations. We do that using some operations on the rows
of the augmented matrix of the system. Consider the following augmented matrix form
1 0 0 1
0 1 0 2
0 0 1 3
which describes the solution x = 1, y = 2 and z = 3 of some system. This matrix is an example of a
matrix in reduced row echelon form, abbreviated as r.r.e.f.
A matrix A is said to be in reduced row echelon form (r.r.e.f. for short) if it satisfies
the following conditions:
1. any rows consisting entirely of zeros are at the bottom of the matrix,
2. the first nonzero entry of each nonzero row is a 1 (called a leading 1),
3. the leading 1 of each nonzero row lies to the right of the leading 1 of the row above it,
4. each column that contains a leading 1 has zeros in all its other entries.
A matrix satisfying only the first three conditions is said to be in row echelon form (r.e.f.), that is, the last condition need not be satisfied.
Example 1.2.1
Theorem 1.2.1
In the following three examples, we consider different augmented matrices for three systems of
linear equations.
Solve the linear system in the three unknowns x, y and z which is given by the following
augmented matrix in the reduced row echelon form.
1 0 0 1
0 1 2 0.
0 0 0 3
Solution:
The last row of the augmented matrix corresponds to the equation 0x + 0y + 0z = 3. This
equation is not satisfied for any values of x, y and z. Hence, the system has no solution. That
is, this system is inconsistent.
Solve the linear system in the three unknowns x, y and z which is given by the following
augmented matrix in the reduced row echelon form.
1 0 0 1
0 1 0 1.
0 0 1 2
Solution:
The resulting reduced system is x = 1, y = 1, and z = 2. The system has a unique solution
and hence this system is consistent.
Solve the linear system in the three unknowns x, y and z which is given by the following
augmented matrix in the reduced row echelon form.
1 0 1 1
0 1 −1 1 .
0 0 0 0
Solution:
0x + 0y + 0z = 0.
This equation is of no use since it adds no restrictions on x, y and z and thus can be omitted.
Hence the resulting reduced system is
x + z = 1
y − z = 1
In this system, x and y are called the leading variables since they correspond to the leading
1’s in the augmented matrix. The remaining variables, which is z in this case, are called free
variables.
Solving the leading variables in terms of the free variables, we get
x = 1 − z
y = 1 + z
The free variable z can be assigned an arbitrary parameter value t which then determines the
values x and y. Thus, the solution set can be expressed by the parameteric equations as
x = 1 − t, y = 1 + t, and z = t.
This solution set produces infinitely many solutions to the linear system. That is, this system is
consistent.
If a linear system has infinitely many solutions, then a set of parametric equations expressing
all solutions to the system is called a general solution of the system.
This method is used to solve systems of linear equations by applying the following steps:
1. Find the r.r.e.f. of the augmented matrix by applying elementary row operations,
2. Solve the reduced system.
Note that the solution of the reduced system is the same as the solution of the original one.
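Both steps can be reproduced with a computer algebra system. A short sketch (assuming SymPy is available; not part of the notes), applied to the augmented matrix used in Example 1.2.5 below:

from sympy import Matrix

# augmented matrix of  x + y = 2,  2x + y + z = 3,  3x + 3z = 3
M = Matrix([[1, 1, 0, 2],
            [2, 1, 1, 3],
            [3, 0, 3, 3]])
R, pivots = M.rref()     # reduced row echelon form and the pivot column indices
print(R)                 # Matrix([[1, 0, 1, 1], [0, 1, -1, 1], [0, 0, 0, 0]])
print(pivots)            # (0, 1): x and y are leading variables, z is free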
1.2. Gaussian Elimination 7
Example 1.2.5
Solution:
We first write the system in its augmented matrix form and then apply elementary row operations
to reach the reduced row echelon form:

[1 1 0 | 2; 2 1 1 | 3; 3 0 3 | 3]  --(r2 − 2r1, r3 − 3r1)-->  [1 1 0 | 2; 0 −1 1 | −1; 0 −3 3 | −3]
  --(−r2, (1/3)r3)-->  [1 1 0 | 2; 0 1 −1 | 1; 0 −1 1 | −1]  --(r1 − r2, r3 + r2)-->  [1 0 1 | 1; 0 1 −1 | 1; 0 0 0 | 0]
Therefore, the reduced system is:
x + z = 1,  y − z = 1     ⇒     x = 1 − z,  y = 1 + z
The free variable here is z and hence we have the following general solution:
x = 1 − t, y = 1 + t, and z = t.
Remark 1.2.3
A linear system is called homogeneous if the constant terms in the last column of the aug-
mented matrix of the system are all zeros.
a11 x1 + a12 x2 + · · · + a1n xn = 0
a21 x1 + a22 x2 + · · · + a2n xn = 0
..
.
am1 x1 + am2 x2 + · · · + amn xn = 0
Every homogeneous linear system is consistent because all such systems have
x1 = 0, x2 = 0, · · · , xn = 0,
as a solution. This solution is called the trivial solution. If there are other solutions, they
are called nontrivial solutions.
There are two possibilities for the solutions of a homogeneous linear system:
Example 1.2.6
Solution:
Clearly
[1 1 0 1 | 0; 2 1 1 2 | 0; 3 0 3 3 | 0]   ⇒ · · · ⇒   [1 0 1 1 | 0; 0 1 −1 0 | 0; 0 0 0 0 | 0]
Therefore, the reduced system is:
x + z + w = 0,  y − z = 0     ⇒     x = −z − w,  y = z
Setting z = t and w = s, the system has nontrivial solutions: x = −t − s, y = t, z = t, and
w = s. Note that the trivial solution results when t = s = 0.
• If a homogeneous linear system has n unknowns and if the reduced row echelon form of its
augmented matrix has k nonzero rows, then the system has n − k free variables.
• A homogeneous linear system has infinitely many solutions if the number of unknowns (the
number of columns of the coefficient matrix) is greater than the number of equations (the number
of rows).
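For instance, both observations can be checked on the homogeneous system of Example 1.2.6; a small sketch (SymPy assumed):

from sympy import Matrix

# homogeneous system of Example 1.2.6: 3 equations, 4 unknowns
A = Matrix([[1, 1, 0, 1],
            [2, 1, 1, 2],
            [3, 0, 3, 3]])
R, pivots = A.rref()
print(R)                      # two nonzero rows
print(len(pivots))            # 2 leading variables
print(A.cols - len(pivots))   # 4 - 2 = 2 free variables, so nontrivial solutions exist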
Example 1.2.7
Suppose that the following matrices are augmented matrices for linear systems in the un-
knowns x, y, z and w. These matrices are all in row echelon form. Discuss the existence and
uniqueness of solutions of the corresponding systems.
(a) [1 1 0 0 | 2; 0 1 2 0 | 5; 0 0 1 2 | 8; 0 0 0 0 | 3],
(b) [1 1 0 0 | 2; 0 1 2 0 | 5; 0 0 1 2 | 8; 0 0 0 1 | 3],
(c) [1 1 0 0 | 2; 0 1 2 0 | 5; 0 0 1 2 | 8; 0 0 0 0 | 0].
Exercise.
1.2. Gaussian Elimination 9
Exercise 1.2.1
Find the reduced row echelon form (r.r.e.f.) of the following matrix:
2 4 6 0
1 2 4 0
1 3 3 1
Exercise 1.2.2
Exercise 1.2.3
Exercise 1.2.4
Exercise 1.2.5
Exercise 1.2.6
Exercise 1.2.7
Definition 1.3.1
Example 1.3.1
Matrix A = [2 3 −2; 4 1 −1] is a 2 × 3 matrix, and matrix B = [3 0 1; 4 2 0; −1 4 1] is a 3 × 3 matrix.
The entries a11, a22, . . . , ann of the square (n × n) matrix A are said to be on the main diagonal of A:
A = [a11 a12 · · · a1n; a21 a22 · · · a2n; · · · ; an1 an2 · · · ann].
12 Chapter 1. System of Linear Equations and Matrices
The trace of a square (n × n) matrix A = [aij], denoted by tr(A), is the sum of the entries
on the main diagonal. That is, tr(A) = a11 + a22 + · · · + ann. If A is not a square
matrix, then the trace of A is undefined.
Example 1.3.2
A^T = [a d g; b e h; c f i] (3 × 3),   B^T = [1 −1; 4 2; 0 3] (3 × 2),   and   C^T = [2; 5; 7] (3 × 1).
Two m × n matrices A and B are said to be equal if aij = bij for all 1 ≤ i ≤ m and 1 ≤ j ≤ n.
Example 1.3.3
Solution:
If A = [aij ] and B = [bij ] are two m × n matrices, then their sum is A + B = [cij ] where
cij = aij + bij , and their difference is A − B = [dij ] where dij = aij − bij .
Example 1.3.4
Solution:
If A = [aij ] is any m × n matrix and c is any scalar, then c A = [c aij ] for all 1 ≤ i ≤ m and
1 ≤ j ≤ n. The matrix c A is called a scalar multiple of A. That is,
Example 1.3.5
Solution:
Clearly
2A + B^T = [4 −2; 0 2; 6 0] + [1 0; 1 −1; 1 −2] = [5 −2; 1 1; 7 −2].
14 Chapter 1. System of Linear Equations and Matrices
A matrix can be partitioned in smaller matrices by inserting horizontal and vertical rules between
selected rows and columns.
Example 1.3.6
The matrix A = [a11 a12 a13 a14; a21 a22 a23 a24] can be partitioned in several ways by inserting such rules.
We use the terms row vector, ri(A), and column vector, cj(A), to denote the ith row and the jth
column of the matrix A. That is,
ri(A) = [ai1 ai2 · · · ain]  (of size 1 × n)   and   cj(A) = [a1j; a2j; · · · ; amj]  (of size m × 1).
Example 1.3.7
If x = (1, 0, 2, −1) and y = (3, 5, −1, 4), then x · y = 3 + 0 + (−2) + (−4) = −3.
1.3. Matrices and Matrix Operations 15
The product is undefined if the number of columns of A does not equal the number of rows of B.
Example 1.3.8
Evaluate AB, where A = [1 3 −1; 2 0 1] (2 × 3) and B = [2 1; 0 2; −1 2] (3 × 2).
Solution:
Clearly, AB = [2 + 0 + 1   1 + 6 − 2; 4 + 0 − 1   2 + 0 + 2] = [3 5; 3 4] (2 × 2).
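The row-times-column rule used in Example 1.3.8 can be verified numerically; a short sketch (NumPy assumed):

import numpy as np

A = np.array([[1, 3, -1],
              [2, 0,  1]])        # 2 x 3
B = np.array([[ 2, 1],
              [ 0, 2],
              [-1, 2]])           # 3 x 2
print(A @ B)                      # [[3 5] [3 4]], a 2 x 2 matrix

# the (i, j) entry is the dot product of row i of A with column j of B
print(A[0] @ B[:, 1])             # 1*1 + 3*2 + (-1)*2 = 5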
Example 1.3.9
Find AB and A^T B if possible, where A = [1 0 2; 1 −2 0] and B = [4 1 −1; 2 5 1].
Solution:
• AB is not defined since the number of columns of A does not equal the number of rows of B.
• A^T B = [1 1; 0 −2; 2 0] [4 1 −1; 2 5 1] = [6 6 0; −4 −10 −2; 8 2 −2].
Example 1.3.10
Evaluate the (3, 2)-entry of A^T B, where A = [1 0 2 1 2; 3 −2 1 0 −1] and B = [3 0 1 2 3; 1 2 −1 0 4].
Solution:
Clearly, the (3, 2)-entry of A^T B equals r3(A^T) · c2(B). That is, [2 1] · [0; 2] = 0 + 2 = 2.
16 Chapter 1. System of Linear Equations and Matrices
Example 1.3.11
Let A = [1 −2 0]. Find all values of c so that c A · A^T = 15.
Solution:
Note that c A · A^T = c [1 −2 0] · [1; −2; 0] = 5c. Therefore, 5c = 15 and hence c = 3.
or, row by row,
AB = [a1; a2; · · · ; am] B = [a1 B; a2 B; · · · ; am B],
where ai denotes the ith row of A.
Theorem 1.3.1
Example 1.3.12
Find the first column of AB and the second row of AB, where
A = [1 2; 3 4; −1 5] (3 × 2)   and   B = [−2 3 4; 3 2 1] (2 × 3).
Solution:
Clearly
c1(AB) = A c1(B) = [1 2; 3 4; −1 5] [−2; 3] = [4; 6; 17].
In a similar way,
r2(AB) = r2(A) B = [3 4] [−2 3 4; 3 2 1] = [6 17 16].
If A1 , A2 , · · · , Ar are matrices of the same size and c1 , c2 , · · · , cr are scalars, then the form
c1 A1 + c2 A2 + · · · + cr Ar
Theorem 1.3.2
If A is an m × n matrix and x is an n × 1 column vector, then the product Ax can be expressed
as a linear combination of the columns of A in which the coefficients are the entries of x.
Proof:
Ax = [a11 x1 + a12 x2 + · · · + a1n xn; a21 x1 + a22 x2 + · · · + a2n xn; · · · ; am1 x1 + am2 x2 + · · · + amn xn]
   = x1 [a11; a21; · · · ; am1] + x2 [a12; a22; · · · ; am2] + · · · + xn [a1n; a2n; · · · ; amn].
That is, Ax = x1 c1(A) + x2 c2(A) + · · · + xn cn(A) and hence Ax is a linear combination of the
columns of A in which the entries of x are the coefficients.
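Theorem 1.3.2 is easy to check numerically; a brief sketch (NumPy assumed), using the matrix of Example 1.3.12:

import numpy as np

A = np.array([[ 1, 2],
              [ 3, 4],
              [-1, 5]])
x = np.array([3, 2])
# Ax equals x1*c1(A) + x2*c2(A)
print(A @ x)                              # [ 7 17  7]
print(x[0] * A[:, 0] + x[1] * A[:, 1])    # same vector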
Example 1.3.13
Solution:
Clearly,
c2(AB) = A c2(B) = [1 2; 3 4; −1 5] [3; 2] = 3 [1; 3; −1] + 2 [2; 4; 5].
18 Chapter 1. System of Linear Equations and Matrices
Remark 1.3.1
Exercise 1.3.1
Let A = [1 2; 0 −1]. Compute A^2 + 3A.
Exercise 1.3.2
Let A = [1 2; 4 5; 3 6] and B = [0 1 −3; −3 1 4]. Find the third row of AB.
Exercise 1.3.3
Let A = [1 1; 0 2; 2 0] and B = [4 1 −1; 2 5 1]. Find AB and express the second column of AB as a
linear combination of the columns of A.
Exercise 1.3.4
Write the following product as a linear combination of the columns of the first matrix.
[1 3; 2 1; 4 2] [3; −2].
Exercise 1.3.5
Let A = [1 2 7; 1 −5 7; 0 −1 10]. Find tr(A).
Exercise 1.3.6
Let A = [1 1; 0 1]. Find A^1977, and find all matrices B such that AB = BA.
20 Chapter 1. System of Linear Equations and Matrices
Exercise 1.3.7
Let A = [1/2 1/2; 1/2 1/2]. Find A^100.
Exercise 1.3.8
Find a 2 × 2 matrix B ≠ [0 0; 0 0] and B ≠ [1 0; 0 1] so that AB = BA if A = [1 2; 0 1].
Exercise 1.3.9
Exercise 1.3.10
Show that there are no 2 × 2 matrices A and B so that AB − BA = I2, where I2 = [1 0; 0 1].
Exercise 1.3.11
Theorem 1.4.1
Proof:
We only prove (1) and (10). Let A = [aij ] and B = [bij ] for 1 ≤ i ≤ m and 1 ≤ j ≤ n. Then
1. A + B = [aij + bij ] = [bij + aij ] = B + A.
10. (r + s)A = (r + s)[aij ] = [(r + s) aij ] = [r aij + s aij ] = [r aij ] + [s aij ] = rA + sA.
A matrix whose entries are all zero is called a zero matrix and is denoted by 0.
A square (n × n) matrix with 1’s on main diagonal and zeros elsewhere is called an identity
matrix and is denoted by In .
22 Chapter 1. System of Linear Equations and Matrices
Proof:
We only prove (5). Let A = [aij ] for 1 ≤ i ≤ m and 1 ≤ j ≤ n. If cA = 0, then for each i
and j, we have (cA)ij = c(A)ij = c aij = 0. Therefore, either c = 0 or we have aij = 0 for all
i and j. Therefore, either c = 0 or A = 0.
Remark 1.4.1
Let
A = [0 1; 0 2],  B = [1 3; 1 2],  C = [2 5; 1 2],  and  D = [1 2; 0 0].
Then,
AB = AC = [1 2; 2 4],
but B ≠ C. That is, the cancellation law does not hold here.
For any a, b ∈ R, we have ab = 0 implies that a = 0 or b = 0. However, for matrices we have
AD = 0 but A ≠ 0 and D ≠ 0.
Moreover, the n×n identity matrix I commutes with any other matrix. That is, AI = IA = A
for any n × n matrix A.
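The failure of the cancellation law in Remark 1.4.1 can be reproduced directly; a minimal sketch (NumPy assumed):

import numpy as np

A = np.array([[0, 1], [0, 2]])
B = np.array([[1, 3], [1, 2]])
C = np.array([[2, 5], [1, 2]])
D = np.array([[1, 2], [0, 0]])

print(np.array_equal(A @ B, A @ C))   # True, although B != C
print(A @ D)                          # the zero matrix, although A != 0 and D != 0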
Remark 1.4.2
Theorem 1.4.3
If R is the reduced row echelon form of an n × n matrix A, then either R has a row of zeros
or R is the identity matrix In .
1.4. Inverses; Algebraic Properties of Matrices 23
A B = B A = In .
Example 1.4.1
Remark 1.4.3
Note that if A is a square matrix with a row (or a column) of zeros, then A is singular. For
example, if A is a 3 × 3 with a column of zeros, then
BA = B [c1  c2  0] = [B c1  B c2  0] ≠ I3.
Proof:
B = B In = B (A C) = (B A) C = In C = C.
Proof:
Theorem 1.4.6
Proof:
Since both A and B are invertible, then both A−1 and B −1 exist. Thus
Thus, A B is invertible and (A B)−1 = B −1 A−1 . Note that, this result can be generalized to
any number of invertible matrices. That is,
Remark 1.4.4
Let m and n be nonnegative integers and let A and B be matrices of appropriate sizes, then
1. A0 = I and An = A A · · · A (n-times),
2. Am An = Am+n and (Am )n = Amn ,
3. (AB)^n = (AB)(AB) · · · (AB) (n times), and in general (AB)^n ≠ A^n B^n.
4. In general, (A + B)^2 = (A + B)(A + B) = A^2 + AB + BA + B^2 ≠ A^2 + 2AB + B^2.
1.4. Inverses; Algebraic Properties of Matrices 25
Theorem 1.4.7
Proof:
Theorem 1.4.8
Let A and B be two matrices of appropriate size and let c be a scalar. Then:
1. (AT )T = A,
2. (A ± B)T = AT ± B T ,
3. (c A)T = c AT , and
4. (AB)T = B T AT . This result can be extended to three or more factors.
Exercise 1.4.1
If A is a square matrix and n is a positive integer, is it true that (An )T = (AT )n ? Justify
your answer.
Theorem 1.4.9
Proof:
Example 1.4.2
Let A = [2 0; 1 −1] and B = [1 4; 3 5]. Find AB and B^T A^T.
Solution:
Clearly, AB = [2 + 0   8 + 0; 1 − 3   4 − 5] = [2 8; −2 −1]. Moreover, B^T A^T = (AB)^T = [2 −2; 8 −1].
Definition 1.4.4
p(A) = a0 In + a1 A + · · · + am Am .
Example 1.4.3
Compute p(A) where p(x) = x^2 − 2x − 3 and A = [−1 2; 0 3].
Solution:
p(A) = A^2 − 2A − 3I2 = [−1 2; 0 3]^2 − 2 [−1 2; 0 3] − 3 [1 0; 0 1]
     = [1 4; 0 9] − [−2 4; 0 6] − [3 0; 0 3] = [0 0; 0 0] = 0.
Example 1.4.4
Solution:
Example 1.4.5
Show that if A is a square matrix such that A^k = 0 for some positive integer k, then the
matrix I − A is invertible and (I − A)^{-1} = I + A + A^2 + · · · + A^{k−1}.
Solution:
(I − A)(I + A + · · · + A^{k−1}) = (I + A + A^2 + · · · + A^{k−1}) − (A + A^2 + · · · + A^{k−1} + A^k)
                                = I − A^k = I − 0 = I.
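A quick numerical check of Example 1.4.5 with a concrete nilpotent matrix (a sketch, NumPy assumed):

import numpy as np

A = np.array([[0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]])                      # A^3 = 0, so k = 3
I = np.eye(3)
inv_candidate = I + A + A @ A                  # I + A + A^2
print(A @ A @ A)                               # the zero matrix
print((I - A) @ inv_candidate)                 # the identity, so (I - A)^(-1) = I + A + A^2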
28 Chapter 1. System of Linear Equations and Matrices
Exercise 1.4.2
1
3
Let A = . Find all constants c ∈ R such that (cA)T · (cA) = 5.
−1
3
Exercise 1.4.3
(In − A)−1 = A2 + A + In .
Exercise 1.4.4
Exercise 1.4.5
Let p1 (x) = x2 − 9, p2 (x) = x + 3, and p3 (x) = x − 3. Show that p1 (A) = p2 (A)p3 (A) for any
square matrix A.
Exercise 1.4.6
Exercise 1.4.7
Let Jn be the n × n matrix each of whose entries is 1. Show that if n > 1, then
(I − Jn)^{-1} = I − (1/(n − 1)) Jn.
1.4. Inverses; Algebraic Properties of Matrices 29
Exercise 1.4.8
Exercise 1.4.9
Let A = [1 2; 1 3] and B = [1 −1 2; 3 2 0]. Find C if (B^T + C) A^{-1} = B^T.
Exercise 1.4.10
Let A^{-1} = [1 2 0; 0 1 −1; 2 0 1] and B = [1 1 0; 1 0 0; 0 1 1]. Find C if A C = B^T.
[Hint: Consider multiplying both sides from the left with A^{-1}.]
30 Chapter 1. System of Linear Equations and Matrices
Definition 1.5.1
Theorem 1.5.1
Remark 1.5.2
Unless otherwise specified, all matrices in this section are considered to be square matrices.
Theorem 1.5.2
Example 1.5.1
Find, if possible, A^{-1} for A = [1 1; 2 3].
Solution:
[1 1 | 1 0; 2 3 | 0 1]  --(r2 − 2r1 ⇒ r2)-->  [1 1 | 1 0; 0 1 | −2 1]  --(r1 − r2 ⇒ r1)-->  [1 0 | 3 −1; 0 1 | −2 1].
Therefore, A is invertible and A^{-1} = [3 −1; −2 1].
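The [A | I] → [I | A^{-1}] procedure of Example 1.5.1 can be mimicked with a computer algebra system; a sketch (SymPy assumed):

from sympy import Matrix, eye

A = Matrix([[1, 1],
            [2, 3]])
aug = A.row_join(eye(2))      # the augmented matrix [A | I]
R, _ = aug.rref()             # row reduce to [I | A^{-1}]
print(R[:, 2:])               # Matrix([[3, -1], [-2, 1]]), matching the example
print(A.inv())                # the same result from the built-in inverse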
Example 1.5.2
Find, if possible, A^{-1} for A = [1 0 1; 1 1 2; 2 0 1].
Solution:
[1 0 1 | 1 0 0; 1 1 2 | 0 1 0; 2 0 1 | 0 0 1]  --(r2 − r1 ⇒ r2, r3 − 2r1 ⇒ r3)-->  [1 0 1 | 1 0 0; 0 1 1 | −1 1 0; 0 0 −1 | −2 0 1]
  --(−r3 ⇒ r3)-->  [1 0 1 | 1 0 0; 0 1 1 | −1 1 0; 0 0 1 | 2 0 −1]  --(r1 − r3 ⇒ r1, r2 − r3 ⇒ r2)-->  [1 0 0 | −1 0 1; 0 1 0 | −3 1 1; 0 0 1 | 2 0 −1].
Therefore, A is invertible and A^{-1} = [−1 0 1; −3 1 1; 2 0 −1].
Example 1.5.3
Find, if possible, A^{-1} for A = [1 2 −3; 1 −2 1; 5 −2 −3].
Solution:
[1 2 −3 | 1 0 0; 1 −2 1 | 0 1 0; 5 −2 −3 | 0 0 1]  --(r2 − r1 ⇒ r2, r3 − 5r1 ⇒ r3)-->  [1 2 −3 | 1 0 0; 0 −4 4 | −1 1 0; 0 −12 12 | −5 0 1]
  --(r3 − 3r2 ⇒ r3)-->  [1 2 −3 | 1 0 0; 0 −4 4 | −1 1 0; 0 0 0 | −2 −3 1].
At this point, we can conclude that the matrix is singular with no inverse, because the third row
of the left block is a row of zeros. In particular, A^{-1} does not exist.
32 Chapter 1. System of Linear Equations and Matrices
Remark 1.5.4
Let A be an n × n matrix:
TRUE or FALSE:
? If A and B are singular matrices, then so is A + B. (FALSE).
Reason: A = [1 0; 0 0] and B = [0 0; 0 1] are two singular matrices, while A + B = [1 0; 0 1] is invertible.
? If A and B are invertible matrices, then so is A + B. (FALSE).
Reason: A = [1 0; 0 1] and B = [−1 0; 0 −1] are two invertible matrices, while A + B = [0 0; 0 0] is singular.
1.5. A Method for Finding A−1 33
Exercise 1.5.1
1 0 1
T −1
Let A =
1 1 0. Find (2A) .
2 0 1
h
Hint: If c is a nonzero constant, then what is (cA)−1 ?]
Exercise 1.5.2
Let A^{-1} = [1 2 3; 0 1 2; 0 0 1]. Find all x, y, z ∈ R such that [x y z] A = [1 2 3].
Exercise 1.5.3
0 1 −1 1 2 −1
Let A = 1 1 0 and B = 1 3 0
.
0 1 2 1 1 −1
1. Find B −1 .
2. Find C if A = B C.
34 Chapter 1. System of Linear Equations and Matrices
Consider a system of linear equations written as the matrix equation Ax = b, where A is the
coefficient matrix, x is the column vector of unknowns, and b is the column vector of constants.
Solving Ax = b
A (nonhomogeneous) system of linear equations has zero, one, or infinitely many solutions.
Proof:
If Ax = b, b 6= O has no solutions or one solution, then we are done. So, we only need to
show that if the system has more than one solution, then it has infinitely many solutions.
Assume that y and z are two solutions for the system Ax = b. Then, for r, s ∈ R, define
u = r y + s z to get:
If r + s = 1, then u is a solution to the system. Since there are infinitely many choices for r
and s in R, the system has infinitely many solutions.
TRUE or FALSE:
? If x1 and x2 are two solutions for Ax = b, then (3/2) x1 − 2x2 is also a solution. (FALSE).
Reason: Since (3/2) + (−2) = −1/2 ≠ 1.
Theorem 1.6.2
Proof:
Given Ax = b, we multiply both sides (from left) by A−1 to get x = A−1 b. That is A−1 b is
a solution to the system.
To show that it is unique, assume that y is any solution for Ax = b. Hence Ay = b and
again y = A−1 b.
Example 1.6.1
Solution:
We solve the system by using A^{-1}; the inverse of A was found in Example 1.5.2:
A^{-1} = [−1 0 1; −3 1 1; 2 0 −1].
Then, using Theorem 1.6.2, we get the following unique solution:
x = [x; y; z] = A^{-1} b = [−1 0 1; −3 1 1; 2 0 −1] [0; 1; −1] = [−1; 0; 1].
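The computation in Example 1.6.1 is easy to confirm numerically; a sketch (NumPy assumed):

import numpy as np

A = np.array([[1., 0., 1.],
              [1., 1., 2.],
              [2., 0., 1.]])
b = np.array([0., 1., -1.])
print(np.linalg.inv(A) @ b)    # [-1.  0.  1.], i.e. x = -1, y = 0, z = 1
print(np.linalg.solve(A, b))   # the same answer without forming A^{-1} explicitly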
36 Chapter 1. System of Linear Equations and Matrices
Example 1.6.2
Solution:
z 2 0 −1 0 0
Remark 1.6.3 Solving Multiple Systems with the Same Coefficient Matrix
Example 1.6.3
Solution:
[1 0 1 | 0 1; 1 1 2 | 1 2; 2 0 3 | 1 3]  --(r2 − r1 ⇒ r2, r3 − 2r1 ⇒ r3)-->  [1 0 1 | 0 1; 0 1 1 | 1 1; 0 0 1 | 1 1]
  --(r1 − r3 ⇒ r1, r2 − r3 ⇒ r2)-->  [1 0 0 | −1 0; 0 1 0 | 0 0; 0 0 1 | 1 1]
Therefore, the solution of the first system is x = −1, y = 0, z = 1 and the solution of the
second system is x = 0, y = 0, z = 1.
1.6. More on Linear Systems and Invertible Matrices 37
Theorem 1.6.3
Proof:
Thus, the system Ax = 0 has only the trivial solution. Theorem 1.5.2 implies that A is
invertible. Therefore BA = I implies (BA)A−1 = IA−1 . Hence B = A−1 .
2. Using part (1), AB = I implies that A = B −1 . Taking the inverse for both sides, we
get A−1 = B as desired.
Theorem 1.6.5
Let A and B be two square matrices of the same size. If AB is invertible, then A and B must
be also invertible.
Example 1.6.4
Solution:
In this kind of question, it is not necessary to reach the r.r.e.f. We instead focus on the
last row, which contains the parameter a:
[1 1 1 | 2; 1 2 −1 | 2; 1 2 a^2 − 5 | a]  --(r2 − r1 ⇒ r2, r3 − r1 ⇒ r3)-->  [1 1 1 | 2; 0 1 −2 | 0; 0 1 a^2 − 6 | a − 2]
  --(r3 − r2 ⇒ r3)-->  [1 1 1 | 2; 0 1 −2 | 0; 0 0 a^2 − 4 | a − 2]
At this point, we have:
(a) a unique solution if a2 − 4 6= 0 ⇐⇒ a 6= ±2 ⇐⇒ a ∈ R\{−2, +2}.
n o n o
(b) infinite solutions if a2 − 4 = 0 and a − 2 = 0 ⇐⇒ a = ±2 and a = +2 ⇐⇒ a =
+2.
n o n o
(c) no solution if a2 − 4 = 0 and a − 2 6= 0 ⇐⇒ a = ±2 and a 6= +2 ⇐⇒ a = −2.
Example 1.6.5
Solution:
[1 3 k | 4; 2 k 12 | 6]  --(r2 − 2r1 ⇒ r2)-->  [1 3 k | 4; 0 k − 6 12 − 2k | −2].
The system is consistent only if k ≠ 6:
(a) it cannot have a unique solution (the coefficient matrix is not square, so it is never row equivalent to an identity matrix),
(b) it has infinitely many solutions when k ≠ 6,
(c) it has no solutions when k = 6.
1.6. More on Linear Systems and Invertible Matrices 39
Exercise 1.6.1
Let A = [1 0 3 3; 0 1 1 −1; 1 −2 3 1; 0 2 0 a^2 + 1] and b = [1; 0; 0; a + 2]. Find all value(s) of a such that the system
Ax = b is consistent.
Exercise 1.6.2
Exercise 1.6.3
Exercise 1.6.4
Show that if c1 and c2 are solutions of the system Ax = b, then 4c1 − 3c2 is also a solution
of this system.
Exercise 1.6.5
Let u and v be two solutions of the homogeneous system Ax = 0. Show that ru + sv (for
r, s ∈ R) is a solution to the same system.
Exercise 1.6.6
Exercise 1.6.7
Show that if A, B, and A + B are invertible matrices with the same size, then
A (A−1 + B −1 ) B (A + B)−1 = I.
Exercise 1.6.8
Example 1.7.1
Let D = [1 0 0; 0 3 0; 0 0 2], A = [1 0 −1; 1 1 0; 2 1 4; 5 1 −1], and B = [1 1 −1 0; 2 3 1 2; 4 2 1 3]. Compute
D^{-3}, AD, and DB.
42 Chapter 1. System of Linear Equations and Matrices
Solution:
Clearly D = diag(1, 3, 2) and D^{-1} = diag(1, 1/3, 1/2). That is, D^{-3} = (D^{-1})^3 = diag(1, 1/27, 1/8).
AD = [1 0 −1; 1 1 0; 2 1 4; 5 1 −1] [1 0 0; 0 3 0; 0 0 2] = [1 0 −2; 1 3 0; 2 3 8; 5 3 −2] = [c1(A)  3 c2(A)  2 c3(A)],
and DB = [1 0 0; 0 3 0; 0 0 2] [1 1 −1 0; 2 3 1 2; 4 2 1 3] = [1 1 −1 0; 6 9 3 6; 8 4 2 6] = [r1(B); 3 r2(B); 2 r3(B)].
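Powers of and products with diagonal matrices, as in Example 1.7.1, reduce to operations on the diagonal entries; a brief sketch (NumPy assumed):

import numpy as np

D = np.diag([1., 3., 2.])
print(np.linalg.matrix_power(D, -3))        # diag(1, 1/27, 1/8)

B = np.array([[1., 1., -1., 0.],
              [2., 3.,  1., 2.],
              [4., 2.,  1., 3.]])
print(D @ B)                                # row i of B gets multiplied by the i-th diagonal entry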
Example 1.7.2
Solution:
Assume that A = [a 0; 0 b] is a 2 × 2 diagonal matrix. Then,
A^2 − 3A + 2I = [a^2 0; 0 b^2] − [3a 0; 0 3b] + [2 0; 0 2] = [0 0; 0 0].
The ”lower triangular matrix” , L = [lij ], is an n × n matrix so that lij = 0 for all i < j.
The ”upper triangular matrix” , U = [uij ], is an n × n matrix so that uij = 0 for all i > j.
L = [l11 0 · · · 0; l21 l22 · · · 0; · · · ; ln1 ln2 · · · lnn],   and   U = [u11 u12 · · · u1n; 0 u22 · · · u2n; · · · ; 0 0 · · · unn].
1.7. Diagonal, Triangular, and Symmetric Matrices 43
Theorem 1.7.1
Example 1.7.3
Find a lower triangular matrix A that satisfies A^3 = [8 0; 9 −1].
Solution:
Assume that A = [a 0; b c] is a 2 × 2 lower triangular matrix. Then,
A^3 = [a^3 0; a^2 b + c(ab + bc) c^3] = [8 0; 9 −1].
Hence, a = 2 and c = −1, and thus 4b − (2b − b) = 9 implies that b = 3. That is, A = [2 0; 3 −1].
Remark 1.7.3
A square matrix A = [aij ] is symmetric if aij = aji and it is skew-symmetric if aij = −aji for all i, j.
For example,
[1 2 3; 2 4 5; 3 5 0] is symmetric, and [0 −2 3; 2 0 −5; −3 5 0] is skew-symmetric.
44 Chapter 1. System of Linear Equations and Matrices
Theorem 1.7.2
If A and B are symmetric matrices with the same size, and k is any scalar, then:
1. AT is symmetric.
2. A + B and A − B are symmetric.
3. kA is symmetric.
Proof:
Note that A and B are symmetric matrices and hence AT = A and B T = B. Then,
1. (AT )T = A = AT . Then AT is symmetric.
2. (A ± B)T = AT ± B T = A ± B. Then A + B and A − B are symmetric.
3. (kA)T = kAT = kA. Then, kA is symmetric.
Theorem 1.7.3
Theorem 1.7.4
Proof:
Assume that A is symmetric invertible matrix. Then A−1 is symmetric as well since
Example 1.7.4
Solution:
It is clear that (AT A)T = AT (AT )T = AT A which shows that AT A is symmetric. In addition,
(AAT )T = (AT )T AT = AAT shows that AAT is also symmetric.
1.7. Diagonal, Triangular, and Symmetric Matrices 45
Example 1.7.5
Solution:
1. Assume that AT = −A. Then (A−1 )T = (AT )−1 = (−A)−1 = −A−1 . That is A−1 is
skew-symmetric.
2. We only show that A + B is skew-symmetric: (A + B)T = AT + B T = −A + (−B) =
−(A + B).
3. If A is any square matrix, then A = (1/2)(A + A^T) + (1/2)(A − A^T). Then we only need
to show that (1/2)(A + A^T) and (1/2)(A − A^T) are symmetric and skew-symmetric matrices,
respectively.
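Part 3 of the example can be made concrete; a sketch (NumPy assumed) splitting a square matrix (here the matrix of Exercise 1.7.7 below) into its symmetric and skew-symmetric parts:

import numpy as np

A = np.array([[2., -1., 3.],
              [0.,  4., 1.],
              [1., -2., -3.]])
S = (A + A.T) / 2                                   # symmetric part
K = (A - A.T) / 2                                   # skew-symmetric part
print(np.allclose(S, S.T), np.allclose(K, -K.T))    # True True
print(np.allclose(A, S + K))                        # True: A = S + K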
Theorem 1.7.5
Proof:
Since A is invertible, AT is also invertible. Then, AAT and AT A are invertible since they are
product of two invertible matrices.
46 Chapter 1. System of Linear Equations and Matrices
Exercise 1.7.1
Exercise 1.7.2
Exercise 1.7.3
If A and B are lower triangular matrices, show that A + B is lower triangular as well.
Exercise 1.7.4
Exercise 1.7.5
1. A^T + A is symmetric.
2. A − AT is skew-symmetric.
Exercise 1.7.6
Exercise 1.7.7
2 −1 3
Let A = 0
4 1. Find a symmetric matrix S and a skew symmetric matrix K such
1 −2 −3
that A = S + K.
1.7. Diagonal, Triangular, and Symmetric Matrices 47
Exercise 1.7.8
2
Determinants
Recall from Theorem 1.4.5 that the 2 × 2 matrix A = [a b; c d] is invertible if and only if ad − bc ≠ 0.
The expression ad − bc is called the determinant of A. It is denoted by det(A) or | A |. That is,
det(A) = | A | = ad − bc,
and the inverse of A (if it exists) can be expressed as
A^{-1} = (1/det(A)) [d −b; −c a].
If A is a square matrix, then the minor of entry aij is denoted by Mij and is defined to be
the determinant of the submatrix that remains after the ith row and j th column are deleted
from A. The number (−1)i+j Mij is denoted by Cij and is called the cofactor of entry aij .
Moreover, the cofactor matrix denoted by coef(A) = [Cij ] where 1 ≤ i, j ≤ n.
Example 2.1.1
Compute the minors, the cofactors, and the cofactor matrix of A, where
A = [1 2 1; 0 1 −1; 3 1 −2].
Solution:
M11 = |1 −1; 1 −2| = −1,   M12 = |0 −1; 3 −2| = 3,   M13 = |0 1; 3 1| = −3,
M21 = |2 1; 1 −2| = −5,   M22 = |1 1; 3 −2| = −5,   M23 = |1 2; 3 1| = −5,
M31 = |2 1; 1 −1| = −3,   M32 = |1 1; 0 −1| = −1,   M33 = |1 2; 0 1| = 1.
Thus, the cofactors are:
C11 = (−1)^2 M11 = −1,   C12 = (−1)^3 M12 = −3,   C13 = (−1)^4 M13 = −3,
C21 = (−1)^3 M21 = 5,   C22 = (−1)^4 M22 = −5,   C23 = (−1)^5 M23 = 5,
C31 = (−1)^4 M31 = −3,   C32 = (−1)^5 M32 = 1,   C33 = (−1)^6 M33 = 1.
Therefore, coef(A) = [−1 −3 −3; 5 −5 5; −3 1 1].
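The minors and cofactors above follow a mechanical recipe: delete a row and a column, take a determinant, attach a sign. A sketch (NumPy assumed; the helper names are illustrative):

import numpy as np

def minor(A, i, j):
    sub = np.delete(np.delete(A, i, axis=0), j, axis=1)   # remove row i and column j
    return np.linalg.det(sub)

def cofactor_matrix(A):
    n = A.shape[0]
    return np.array([[(-1) ** (i + j) * minor(A, i, j) for j in range(n)]
                     for i in range(n)])

A = np.array([[1., 2., 1.],
              [0., 1., -1.],
              [3., 1., -2.]])
print(np.round(cofactor_matrix(A)))    # matches coef(A) computed in Example 2.1.1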
Remark 2.1.2
Note that the minors Mij and the cofactors Cij are either the same or the negative of each
other. This results from the (−1)i+j entry in the cofactor value.
This pattern holds for a general n × n matrix and can be seen in the following checkerboard sign matrix:
[+ − + · · · ; − + − · · · ; + − + · · · ; · · · ].
For example, C11 = M11 , C21 = −M21 and C31 = M31 .
Theorem 2.1.1
If A is n × n matrix, then choosing any row or any column of A, the number obtained by
multiplying the entries in that row or column by the corresponding cofactors and adding the
resulting product is always the same.
2.1. Determinants by Cofactor Expansion 51
Definition 2.1.2
If A is an n × n matrix, then the number resulting from the sum of the products of the entries in any
row (or column) with the corresponding cofactors is called the determinant of A, and the sums themselves
are called cofactor expansions of A.
That is, the determinant of A using the cofactor expansion along the ith row is
det(A) = ai1 Ci1 + ai2 Ci2 + · · · + ain Cin,
while the determinant of A using the cofactor expansion along the jth column is
det(A) = a1j C1j + a2j C2j + · · · + anj Cnj.
Theorem 2.1.2
Example 2.1.2
Solution:
Here we can choose any row or column to compute the determinant. Using the cofactor
expansion along the 1st row, we get
det (A) = a11 C11 + a12 C12 + a13 C13 = (1)(−1) + (2)(−3) + (1)(−3) = −10.
If we choose the second row, then we only need to know two cofactors:
det (A) = a21 C21 + a22 C22 + a23 C23 = (0)(C21 ) + (1)(−5) + (−1)(5) = −10.
52 Chapter 2. Determinants
Example 2.1.3
Solution:
We use the cofactor expansion along the 1st -column since it has the most zeros:
|0 3 0 1; 2 1 1 2; 0 0 1 2; 0 1 0 1| = (0) C11 + (−1)^{2+1} (2) |3 0 1; 0 1 2; 1 0 1| + (0) C31 + (0) C41
  = (−2) |3 0 1; 0 1 2; 1 0 1| = (−2)(1) |3 1; 1 1| = (−2)[3 − 1] = −4,
where the 3 × 3 determinant is expanded along its second column.
C1 = [a11 a12; a21 a22]   and   C2 = [a11 a12 a13; a21 a22 a23; a31 a32 a33].
For the 2 × 2 matrix C1, we get det(C1) = a11 a22 − a12 a21. It is simply the product along the
main diagonal minus the product along the anti-diagonal.
For the 3 × 3 matrix C2, we first recopy the first two columns to the right of the matrix and then add up the
products along the three down-right stripes and subtract the products along the three up-right stripes:
det(C2) = a11 a22 a33 + a12 a23 a31 + a13 a21 a32 − (a13 a22 a31 + a11 a23 a32 + a12 a21 a33).
For example, |3 2; 4 2| = 6 − 8 = −2, and |3 0 1; 0 1 2; 1 0 1| = (3 + 0 + 0) − (1 + 0 + 0) = 2.
2.1. Determinants by Cofactor Expansion 53
Example 2.1.4
Exercise 2.1.1
Let D = [0 4 4 0; 1 1 2 0; 1 3 5 3; 0 1 2 6]. Evaluate |D|.
Final answer: |D| = −12.
Exercise 2.1.2
Let A = [a+e b+f; c d], B = [a b; c d], and C = [e f; c d]. Show that | A | = | B | + | C |.
Exercise 2.1.3
Let A = [1 4 2; 5 −3 6; 2 3 2]. Compute the cofactors C11, C12, and C13, and show that 5C11 −
3C12 + 6C13 = 0.
Exercise 2.1.4
In this section, we introduce some basic properties and theorems to compute determinants.
Theorem 2.2.1
Let A be an n × n matrix.
1. If A is a square matrix with a row of zeros (or a column of zeros), then det (A) = 0.
2. If A is a square matrix, then det (A) = det (A^T):
|a11 a12 a13; a21 a22 a23; a31 a32 a33| = |a11 a21 a31; a12 a22 a32; a13 a23 a33|.
3. If B is obtained from A by multiplying a single row (or a single column) by a scalar k,
then det (B) = k det (A). This result can be generalized as det (k A) = k^n det (A):
|k a11  k a12  k a13; a21 a22 a23; a31 a32 a33| = k |a11 a12 a13; a21 a22 a23; a31 a32 a33|.
4. If B is obtained from A by interchanging two rows (or columns), then det (B) = − det (A):
|a21 a22 a23; a11 a12 a13; a31 a32 a33| = − |a11 a12 a13; a21 a22 a23; a31 a32 a33|.
5. If B is obtained from A by adding a multiple of one row (one column) to another row
(column, respectively), then det (B) = det (A):
|a11 + k a21  a12 + k a22  a13 + k a23; a21 a22 a23; a31 a32 a33| = |a11 a12 a13; a21 a22 a23; a31 a32 a33|.
6. If A has two proportional rows (or columns), then det (A) = 0:
|a11 a12 a13; k a11 k a12 k a13; a31 a32 a33| = 0.
56 Chapter 2. Determinants
Example 2.2.1
Solution:
Note that the second row of A is 3 times the first one. Then, det (A) = 0:
|2 −2 5 3; 6 −6 15 9; 0 3 1 9; 1 4 2 −1|  =(r2 − 3r1)=  |2 −2 5 3; 0 0 0 0; 0 3 1 9; 1 4 2 −1| = 0.
Example 2.2.2
Solution:
|2 −2 5; 0 3 1; 1 4 2|  =(r1 ↔ r3)=  (−1) |1 4 2; 0 3 1; 2 −2 5|  =(r3 − 2r1)=  (−1) |1 4 2; 0 3 1; 0 −10 1|
  =(c2 ↔ c3)=  (−1)(−1) |1 2 4; 0 1 3; 0 1 −10|  =(r3 − r2)=  |1 2 4; 0 1 3; 0 0 −13| = (1)(1)(−13) = −13.
2.2. Evaluating Determinants by Row Reduction 57
Example 2.2.3
Solution:
|2 −2 5 1; 3 3 1 3; 1 1 2 1; 4 3 5 6|  =(r1 − 2r3, r2 − 3r3, r4 − 4r3)=  |0 −4 1 −1; 0 0 −5 0; 1 1 2 1; 0 −1 −3 2|
  = + |−4 1 −1; 0 −5 0; −1 −3 2|   (cofactor expansion along the first column)
  = (−5) |−4 −1; −1 2| = (−5)[−8 − 1] = 45.
Example 2.2.4
Let |a b c; d e f; g h i| = −6. Compute |3g 3h 3i; 2a + d  2b + e  2c + f; d e f|.
Solution:
|3g 3h 3i; 2a + d  2b + e  2c + f; d e f| = 3 |g h i; 2a + d  2b + e  2c + f; d e f| = 3 |g h i; 2a 2b 2c; d e f|
  = 6 |g h i; a b c; d e f| = 6 |a b c; d e f; g h i| = 6(−6) = −36,
where we used, in order: a common factor 3 from row 1, subtracting row 3 from row 2, a common factor 2
from row 2, and two row interchanges (which leave the sign unchanged).
Exercise 2.2.1
1 0 0 3
2 7 0 6
Compute det (A) where A = .
0 6 3 0
2 3 1 5
[Hint: Simply reduce A to a lower triangular matrix! Find a relation between the first and
the fourth columns.]
Exercise 2.2.2
a b c −a −b −c
Let d e f = −6. Compute 2d 2e 2f .
g h i 3g 3h 3i
Final answer: 36.
Exercise 2.2.3
Solve for x:
|x 1; 1 x − 1| = |0 1 0; x 3 −2; 1 5 −1|.
Final answer: x = 1.
Exercise 2.2.4
a1 b 1 c1 b1 b2 b1 − 3b3
Given that a2 b2 c2 = 7, evaluate a1 a2 a1 − 3a3 .
a3 b 3 c3 c1 c2 c1 − 3c3
Final answer: 21.
Exercise 2.2.5
a1 a2 a3 2a3 2a2 2a1
Let A = b1 b2 b3 , and B = b3 − a3 b 2 − a2 . If A = −4, find B .
b 1 − a1
Exercise 2.2.6
7 1 0 3
2 0 0 0
Let A be a matrix with A−1 = . Find det (A).
1 3 5 4
6 2 0 5
60 Chapter 2. Determinants
Theorem 2.3.1
Theorem 2.3.2
If A is an invertible matrix, then det (A^{-1}) = 1 / det (A).
Proof:
Since A−1 A = I, it follows that det (A−1 A) = det (I) = 1. That is, det (A−1 ) det (A) = 1.
Since A is invertible, det (A) ≠ 0 and hence
det (A^{-1}) = 1 / det (A).
Example 2.3.1
Compute the adjoint of A, where A is as given in Example 2.1.1: A = [1 2 1; 0 1 −1; 3 1 −2].
Solution:
In Example 2.1.1, we got coef(A) = [−1 −3 −3; 5 −5 5; −3 1 1]. Thus adj(A) = coef(A)^T = [−1 5 −3; −3 −5 1; −3 5 1].
2.3. Properties of Determinants 61
Example 2.3.2
Suppose that adj (A) = [1 2; 4 6]. Find A.
Solution:
Assume that A = [x y; z w]. Then coef(A) = [w −z; −y x]. Therefore,
adj (A) = (coef(A))^T = [w −y; −z x] = [1 2; 4 6].
Thus, x = 6, y = −2, z = −4, and w = 1, and A = [6 −2; −4 1].
Theorem 2.3.3
Proof:
Theorem 2.3.4
If A is an invertible matrix, then A^{-1} = (1 / det (A)) adj (A).
Proof:
62 Chapter 2. Determinants
Theorem 2.3.3 implies that A adj (A) = det (A) I. Since A is invertible, det (A) ≠ 0. Hence
A adj (A) = det (A) I   ⇒   A [(1 / det (A)) adj (A)] = I.
Multiplying both sides (from the left) by A^{-1} implies
A^{-1} = (1 / det (A)) adj (A).
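Theorem 2.3.4 can be checked on the matrix of Example 2.1.1; a short sketch (SymPy assumed):

from sympy import Matrix, eye

A = Matrix([[1, 2, 1],
            [0, 1, -1],
            [3, 1, -2]])
adj = A.adjugate()                    # the transpose of the cofactor matrix
print(adj)                            # Matrix([[-1, 5, -3], [-3, -5, 1], [-3, 5, 1]])
print(A * adj == A.det() * eye(3))    # True: A adj(A) = det(A) I
print(A.inv() == adj / A.det())       # True: A^{-1} = adj(A) / det(A)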
Example 2.3.3
Proof:
Example 2.3.4
Show that if A ∈ Mn×n is skew-symmetric matrix and n is odd, then det (A) = 0.
Solution:
Since A is skew-symmetric, then AT = −A and taking the determinant for both sides
det AT = det (−A) ⇐⇒ det (A) = (−1)n det (A).
Since n is odd, we get det (A) = − det (A) which means that det (A) = 0.
TRUE or FALSE:
2.3. Properties of Determinants 63
Example 2.3.5
Solution:
Example 2.3.6
Solution:
Example 2.3.7
Let A, B ∈ M3×3 with det (A) = 2 and det (B) = −2. Find det 2 A−1 (B 2 )T .
Solution:
|2 A^{-1} (B^2)^T| = 2^3 |A^{-1}| |B^2| = 8 · (1 / |A|) · |B| |B| = 8 · (1/2) · (−2)(−2) = 16.
64 Chapter 2. Determinants
Example 2.3.8
Solution:
1. |4 (A^2)^T| = 4^5 |A^2| = 4^5 (−2)^2 = 4^6.
2. |A^{-1} adj (A)| = |A^{-1}| |adj (A)| = (−1/2)(−2)^4 = −8.
Exercise 2.3.1
2 8
Let adj (A) = . Find A.
1 4
Exercise 2.3.2
Exercise 2.3.3
1 −1 2
Let A =
0 3 1 .
−1 3 1
1. Compute det (A) using a cofactor expansion along row 2.
2. Find A adj (A).
3. Find det (2A−1 adj (A)).
Exercise 2.3.4
1 2 4
Find all values of α for which the matrix
1 3 9 is singular.
1 α α2
Final answer: α = 2, 3.
Exercise 2.3.5
Let A and B be two n × n matrices such that A is invertible and B is singular. Prove that
A−1 B is singular.
Exercise 2.3.6
If A and B are 2 × 2 matrices with det (A) = 2 and det (B) = 5, compute 3 A2 (AB −1 )T .
66 Chapter 2. Determinants
Exercise 2.3.7
2 8
Let adj (A) = . Find A.
1 4
Exercise 2.3.8
Exercise 2.3.9
Exercise 2.3.10
Exercise 2.3.11
Exercise 2.3.12
Exercise 2.3.13
Exercise 2.3.14
Exercise 2.3.15
Exercise 2.3.16
Exercise 2.3.17
Exercise 2.3.18
Exercise 2.3.19
1 2 4
Find all values of α for which the matrix
1 3 9 is singular.
1 α α2
Exercise 2.3.20
Let A and B be two n × n matrices such that A is invertible and B is singular. Prove that
A−1 B is singular.
68 Chapter 2. Determinants
Exercise 2.3.21
If A and B are 2 × 2 matrices with det (A) = 2 and det (B) = 5, compute 3 A2 (AB −1 )T .
Chapter
3
Euclidean Vector Spaces
In this chapter, we deal with vectors in Rn, or sometimes we simply call them n-vectors. For instance,
the row vector x = [x1 x2 · · · xn] and the column vector y = [y1; y2; · · · ; yn] are vectors in Rn.
In the notation of Rn, a vector is simply written as x = (x1, x2, · · · , xn), while a point is written
as X(x1, x2, · · · , xn).
x ± y = (x1 ± y1 , x2 ± y2 , · · · , xn ± yn ) ∈ Rn .
c x = (c x1 , c x2 , · · · , c xn ) ∈ Rn .
69
70 Chapter 3. Euclidean Vector Spaces
Remark 3.1.2
Theorem 3.1.2
x = c1 v 1 + c2 v 2 + · · · + cn v n ,
where c1 , c2 , · · · , cn are scalars in R. These scalars are called the coefficients of the linear
combination.
3.1. Vectors in Rn 71
Exercise 3.1.1
Theorem 3.2.1
Proof:
The proof of the first two parts is easy. So we only prove the third statement of the theorem.
Let x = (x1, x2, · · · , xn). Then c x = (c x1, c x2, · · · , c xn). Thus,
|| c x || = sqrt((c x1)^2 + (c x2)^2 + · · · + (c xn)^2) = sqrt(c^2 (x1^2 + x2^2 + · · · + xn^2))
          = | c | sqrt(x1^2 + x2^2 + · · · + xn^2) = | c | || x ||.
Remark 3.2.1
If x is a nonzero vector in Rn, then the vector u = (1 / || x ||) x is a vector of norm 1 in the same
direction as x. Clearly,
|| u || = || (1 / || x ||) x || = (1 / || x ||) || x || = 1.
3.2. Norm, Dot Product, and Distance in Rn 73
Example 3.2.1
Let x = (1, 0, −1) and y = (0, 1, 1) be two vectors in R3 . Compute x · y, k x k and construct
a unit vector in the same direction of x.
Solution:
Remark 3.2.3
Example 3.2.2
Solution:
The vectors i = (1, 0) and j = (0, 1) are called the standard units in R2 .
The vectors i = (1, 0, 0), j = (0, 1, 0) and k = (0, 0, 1) are called the standard units in R3 .
In general, the vectors e1 = (1, 0, · · · , 0), e2 = (0, 1, 0, · · · , 0), · · · , en = (0, · · · , 0, 1) are called
the standard unit vectors in Rn . Note that any vector x = (x1 , x2 , · · · , xn ) ∈ Rn is a linear
combination of these vectors: x = x1 e1 + x2 e2 + · · · + xn en .
The angle θ between x and y is defined via
x · y = || x || || y || cos θ,   or   cos θ = (x · y) / (|| x || || y ||),
where 0 ≤ θ ≤ π. Since −1 ≤ cos θ ≤ 1, we get
−1 ≤ (x · y) / (|| x || || y ||) ≤ 1.
Example 3.2.3
Solution:
We have cos θ = (x · y) / (|| x || || y ||), where x · y = 0 + 1 + 0 + 0 = 1 and
|| x || = sqrt(0^2 + 1^2 + 1^2 + 0^2) = sqrt(2) = || y ||.
Therefore, cos θ = 1/2, which implies that θ = π/3.
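These quantities (dot product, norm, unit vector, angle) are one-liners numerically; a sketch (NumPy assumed), using the vectors of Example 3.2.1:

import numpy as np

x = np.array([1., 0., -1.])
y = np.array([0., 1., 1.])
print(x @ y)                              # dot product: -1
print(np.linalg.norm(x))                  # sqrt(2)
u = x / np.linalg.norm(x)                 # unit vector in the direction of x
print(np.linalg.norm(u))                  # 1.0
cos_theta = (x @ y) / (np.linalg.norm(x) * np.linalg.norm(y))
print(np.degrees(np.arccos(cos_theta)))   # 120 degrees, i.e. theta = 2*pi/3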
Theorem 3.2.2
Theorem 3.2.3
Example 3.2.4
Solution:
Clearly,
= 3k x k2 − 5(x · y) − 2k y k2
= 27 − 20 − 8 = −1.
| x · y | ≤ k x k k y k.
Proof:
k x + y k2 = (x + y) · (x + y) = x · x + x · y + y · x + y · y
= k x k2 + 2 x · y + k y k2 absolute value.
= k x k2 + 2 | x · y | + k y k2 Cauchy-Schwarz Inequality.
≤ k x k2 + 2 k x k k y k + k y k2 = (k x k + k y k)2 .
2. By Part 1, we have
d(x, y) = k x − y k = k (x − z) + (z − y) k
Proof:
Clearly,
k x + y k2 + k x − y k2 = (x + y) · (x + y) + (x − y) · (x − y)
= 2(x · x) + 2(y · y) = 2 k x k2 + k y k2 .
Theorem 3.2.7
Clearly,
k x + y k2 = (x + y) · (x + y) = k x k2 + 2(x · y) + k y k2
k x − y k2 = (x − y) · (x − y) = k x k2 − 2(x · y) + k y k2 .
Remark 3.2.6
Proof:
Recall that for real values x and a, we have | x | ≤ a iff −a ≤ x ≤ a. That is a ≥ x and
a ≥ −x.
Therefore, we simply show that k x − y k ≥ k x k − k y k and k x − y k ≥ k y k − k x k. First
k x k = k (x − y) + y k ≤ k x − y k + k y k ⇒ k x k − k y k ≤ k x − y k.
For the second inequality, we use the first one (interchanging x and y) in the following way:
Example 3.2.5
If k x k = 2 and k y k = 3, what are the largest and smallest values possible for k x − y k?
Solution:
k x − y k ≥ k x k − k y k = | 2 − 3 | = 1.
78 Chapter 3. Euclidean Vector Spaces
Exercise 3.2.1
Exercise 3.2.2
Exercise 3.2.3
Let θ be the angle between the vectors U = (4, −2, 1, 2) and V = (4, 2, 5, 2). Find cos θ.
21
Final answer: 35
.
Exercise 3.2.4
Exercise 3.2.5
Exercise 3.2.6
Exercise 3.2.7
Exercise 3.2.8
Exercise 3.2.9
Exercise 3.2.10
Exercise 3.2.11
Find all values of a for which x · y = 0, where x = (a2 − a, −3, −1) and y = (2, a − 1, 2a).
1
Final answer: a = 2
or 3.
Exercise 3.2.12
1 1
If x and y are vectors in Rn , then x · y = k x + y k2 − k x − y k2 .
4 4
Exercise 3.2.13
Show that if X · Y = 0 for all Y ∈ Rn , then X = O. Use the standard unit vectors of Rn for
Y.
Exercise 3.2.14
Show that if X · Z = Y · Z for all Z ∈ Rn , then X = Y . Use the standard unit vectors of Rn
for Z.
80 Chapter 3. Euclidean Vector Spaces
Chapter
4
General Vector Spaces
A real vector space V is a set of elements with two operations ⊕ and satisfying the
following conditions. For short, we write (V, ⊕, ) is a vector space if
(α) if x, y ∈ V, then x ⊕ y ∈ V, that is ”V is closed under ⊕”: for all x, y, z ∈ V
(a) x ⊕ y = y ⊕ x,
(b) x ⊕ (y ⊕ z) = (x ⊕ y) ⊕ z,
(c) there exists 0 ∈ V (zero vector of V) such that x ⊕ 0 = 0 ⊕ x = x for all x ∈ V,
(d) for each x ∈ V, there exists −x ∈ V (a negative of x) such that (−x) ⊕ x =
x ⊕ (−x) = 0.
(β) if x ∈ V and c ∈ R, then c x ∈ V, that is ”V is closed under ”: for all x, y ∈ V
and for all c, d ∈ R
(a) c (x ⊕ y) = c x⊕c y,
(b) (c + d) x=c x⊕d x,
(c) c (d x) = (c d) x,
(d) 1 x=x 1 = x.
(Rn , +, ·) is a vector space. That is, Rn with vector addition and scalar multiplication is a
vector space.
0 ⊕ 0 = 0 and k 0=0
for all scalars k. This is a vector space which is called the zero vector space.
81
82 Chapter 4. General Vector Spaces
Remark 4.1.3
For simplicity, we write + and · instead of ⊕ and . That is, we write (V, +, ·) instead of
(V, ⊕, ) for a vector space V with + addition operation and · scalar multiplication.
Example 4.1.1
and
Solution:
Clearly, the α conditions are satisfied because this is the usual vector addition and hence V is
closed under +. Thus, we only check on the β conditions. Let x = (x1 , y1 , z1 ), y = (x2 , y2 , z2 )
be any two vectors in V, then
1. c(x + y) = c(x1 + x2 , y1 + y2 , z1 + z2 ) = (cx1 + cx2 , cy1 + cy2 , 0) = (cx1 , cy1 , 0) +
(cx2 , cy2 , 0) = cx + cy. This condition is satisfied.
2. (c+d)x = ((c + d)x1 , (c + d)y1 , 0) = (cx1 , cy1 , 0)+(dx1 , dy1 , 0) = cx+dx. This condition
is satisfied.
3. c(dx) = c(dx1 , dy1 , 0) = (cdx1 , cdy1 , 0) = (cd)x. This condition is satisfied.
4. 1x = (x1 , y1 , 0) 6= (x1 , y1 , z1 ). This condition is NOT satisfied.
Therefore, (V, +, ·) is not a vector space.
Example 4.1.2
Solution:
No. If c ∈ R with c < 0, then c(x, y, z) = (cx, cy, cz) 6∈ V since cz < 0.
4.1. Real Vector Spaces 83
Example 4.1.3
Is the set of real numbers under the subtraction and scalar multiplication a vector space?
Explain.
Solution:
Theorem 4.1.1
Exercise 4.1.1
Exercise 4.1.2
Let V = {(x, y) | x, y ∈ R}. Define addition and scalar multiplication on V as follows: for
each (x, y), (x0 , y 0 ) ∈ V and a ∈ R,
Determine whether V with the given operations is a vector space. Justify your answer.
Exercise 4.1.3
Consider R2 with the operations + and · where (x, y) + (x0 , y 0 ) = (2x − x0 , 2y − y 0 ) and
c(x, y) = c(x, y). Does the property (c + d)x = cx + dx hold for all c, d ∈ R and all x ∈ R2 ?
Explain.
Exercise 4.1.4
Exercise 4.1.5
Exercise 4.1.6
Exercise 4.1.7
Exercise 4.1.8
Let V be a vector space with a zero vector 0. Show that the zero vector 0 of V is unique.
Exercise 4.1.9
Exercise 4.1.10
Exercise 4.1.11
Exercise 4.1.12
Let (V, +, ·) be a vector space and let W ⊆ V be non-empty. If (W, +, ·) is a vector space,
then W is a subspace of V.
If V is a vector space, then V and {0} are subspaces of V. They are called trivial subspaces
of V.
Theorem 4.2.1
Example 4.2.1
Solution:
Example 4.2.2
Solution:
Clearly, (0, 0, 0) 6∈ W and hence W is not a vector space and it is not a subspace of R3 .
Example 4.2.3
Solution:
1. (0, 0, 0, 0) ∈ W, then W 6= ∅,
2. let x = (a1 , b1 , a1 , 2a1 − b1 ) and y = (a2 , b2 , a2 , 2a2 − b2 ). Then,
Therefore, W is a subspace of R4 .
Theorem 4.2.2
Proof:
Let W be the intersection of these subspaces. Then W is not empty since Wi contains the
zero vector for all 1 ≤ i ≤ n.
Moreover, if x, y ∈ W, then x, y ∈ Wi for all i and hence x + y ∈ Wi which implies that
x + y ∈ W.
Finaly, if c is a scalar and x ∈ W, then x ∈ Wi for all i and hence c x ∈ Wi which implies
that c x ∈ W.
Therefore, W is a subspace of V.
88 Chapter 4. General Vector Spaces
x = c1 x 1 + c2 x 2 + · · · + cn x n .
Let S = {x1 , x2 , · · · , xk } be a subset of a vector space V. The set of all vectors in V that
are linear combination of the vectors in S is denoted by span S or span {x1 , x2 , · · · , xk }.
Moreover, if W = span S then W is a subspace of V and we say that S spans W or that
W is spanned by S.
Proof:
z = c c1 x1 + c c2 x2 + · · · + c ck xk ∈ W.
Observe that {e1 , e2 , · · · , en } spans Rn . That is, span {e1 , e2 , · · · , en } = Rn since every
vector x = (x1 , x2 , · · · , xn ) in Rn can be written as
x = x1 e 1 + x2 e 2 + · · · + xn e n .
Example 4.2.4
Determine whether the vector x = (2, 1, 5) is a linear combination of the set of vectors
{x1 , x2 , x3 } where x1 = (1, 2, 1), x2 = (1, 0, 2), and x3 = (1, 1, 0).
Solution:
c1 + c2 + c3 = 2
2c1 + 0 + c3 = 1
c1 + 2c2 + 0 = 5
This system has a unique solution (c1 , c2 , c3 ) = (1, 2, −1). Thus, x = x1 + 2x2 − x3
Note that we can solve the problem as follows: The matrix of coefficient above has determi-
nant equals to 3 and hence the system has a unique solution. Therefore there are c1 , c2 , and
c3 satisfying the linear combination equation. This shows that x is a linear combination of
x1 , x2 , x3 and we are done without solving the system. In particular, span {x1 , x2 , x3 } = R3 .
Example 4.2.5
Let S = {u1 = (1, 1, 0, 1), u2 = (1, −1, 0, 1), u3 = (0, 1, 2, 1)}. Determine whether x and y
belong to span S, where x = (2, 3, 2, 3), and y = (0, 1, 2, 3).
Solution:
x = 2u1 + 0u2 + 1u3 . Thus, x ∈ span S. However, y 6∈ span S since y is not a linear
combination of u1 , u2 , u3 .
90 Chapter 4. General Vector Spaces
Let Ax = 0 be a homogeneous system for A ∈ Mm×n and x ∈ Rn. We define the null space
(or the solution space of Ax = 0) of A by W = {x : Ax = 0} ⊆ Rn.
The solution space of the homogeneous system Ax = 0 (the null space of A), where A is an m × n
matrix and x ∈ Rn, is a subspace of Rn.
Proof:
Solution: Definition
We simply show it by the meaning of the definition of subspaces. Look at the Example 4.2.3.
Example 4.2.7
Let S = {x1 , x2 } where x1 = (1, 1, 0), and x2 = (1, 1, 1). Does S spans R3 ? Explain.
Solution:
Note that we cannot use the determinant argument here since we have no square matrix.
Thus, solving the system c1 x1 + c2 x2 = (a, b, c), we get
[1 1 | a; 1 1 | b; 0 1 | c]  --(r2 − r1)-->  [1 1 | a; 0 0 | b − a; 0 1 | c].
This system has no solution if b − a ≠ 0. Therefore, such a vector x is not a linear combination of S and
S does not span R3.
Exercise 4.2.1
Exercise 4.2.2
Let x1 = (1, 0, 2), x2 = (2, 0, 1) and x3 = (1, 0, 3) be vectors in R3 . Determine whether the
vector x = (1, 2, 3) can be written as a linear combination of x1 , x2 , and x3 .
Exercise 4.2.3
n √ o
Determine whether the subset W = a, a + 2, 3a : a ∈ R of R3 is a subspace of R3 .
Exercise 4.2.4
Determine whether the vectors x1 = (3, 0, 0, 0), x2 = (0, −1, 2, 1), x3 = (6, 2, −6, 0), and
x4 = (3, −2, 3, 3) span the vector space R4.
Exercise 4.2.5
Exercise 4.2.6
Exercise 4.2.7
Exercise 4.2.8
c1 x1 + c2 x2 + · · · + cn xn = 0.
Note that the standard unit vectors e1 , e2 , · · · , en are linearly independent in Rn since
Example 4.3.1
Determine whether x1 = (1, 0, 1, 2), x2 = (0, 1, 1, 2), and x3 = (1, 1, 1, 3) in R4 are linearly
independent or linearly dependent? Explain.
Solution:
Remark 4.3.2
Let S = {x1 , x2 , · · · , xn } be a set of vectors in Rn and let A be an n×n matrix whose columns
are the n-vectors of S. Then,
1. if A is singular, then S is linearly dependent,
2. if A is invertible, then S is linearly independent.
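A sketch (NumPy assumed) of the test in the remark above: place the vectors as the columns of A and check whether A is singular; for collections that do not form a square matrix, compare the rank with the number of vectors instead.

import numpy as np

# three vectors in R^3, written as the columns of A
A = np.array([[1, 0, 0],
              [0, 1, 2],
              [1, -1, 2]], dtype=float)
print(np.linalg.det(A))            # 4.0, nonzero, so the columns are linearly independent

# the four vectors of Example 4.3.2 below, as columns of a 3 x 4 matrix
B = np.array([[ 1,  1, -3, 2],
              [ 2, -2,  2, 0],
              [-1,  1, -1, 0]], dtype=float)
print(np.linalg.matrix_rank(B))    # 2, fewer than 4 columns, so the vectors are linearly dependent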
94 Chapter 4. General Vector Spaces
Example 4.3.2
Determine whether the vectors x1 = (1, 2, −1), x2 = (1, −2, 1), x3 = (−3, 2, −1), and x4 =
(2, 0, 0) in R3 are linearly independent or linearly dependent. Explain.
Solution:
Theorem 4.3.1
Example 4.3.3
For what values of α are the vectors (−1, 0, −1), (2, 1, 2), (1, 1, α) in R3 linearly dependent?
Solution:
We want the vectors to be linearly dependent, so consider the system c1 (−1, 0, −1) +
c2 (2, 1, 2) + c3 (1, 1, α) = (0, 0, 0). This system has nontrivial solutions only if det (A) = 0,
where A is the matrix whose columns are [−1, 0, −1]T , [2, 1, 2]T , and [1, 1, α]T . That is,
det (A) = |−1 2 1; 0 1 1; −1 2 α| = 0   ⇐⇒   −(α − 2) − (2 − 1) = 0   ⇐⇒   2 − α − 1 = 0   ⇐⇒   α = 1.
Therefore, if α = 1 the vectors are linearly dependent. Otherwise if α ∈ R\{1}, the vectors
are linearly independent.
4.3. Linear Independence 95
Theorem 4.3.2
Example 4.3.4
Solution:
Example 4.3.5
Solution:
Remark 4.3.3
Remark 4.3.4
Exercise 4.3.1
Show that if {x1 , x2 } is a linearly dependent set, then one of the vector is a scalar multiple
of the other.
Exercise 4.3.2
Show that any subset of a vector space V that contains the zero vector is a linearly dependent set.
Exercise 4.3.3
Show that if {x1 , x2 , · · · , xn } is a linearly dependent set, then we can express one of the
vectors in terms of the others.
Exercise 4.3.4
Let x, y, z ∈ Rn be three nonzero vectors where the dot product of any (distinct) two vectors
is 0. Show that the set {x, y, z} is linearly independent.
98 Chapter 4. General Vector Spaces
A set S = {x1 , x2 , · · · , xn } of distinct nonzero vectors in a vector space V is called a basis iff
1. S spans V (V = span S),
2. S is linearly independent set.
The dimension of V is the number of vectors in its basis and is denoted by dim(V).
The set of standard unit vectors {e1 , e2 , · · · , en } ∈ Rn forms the standard basis for Rn and
hence dim(Rn ) = n.
Example 4.4.1
Show that the set S = {x1 = (1, 0, 1), x2 = (0, 1, −1), x3 = (0, 2, 2)} is a basis for R3 .
Solution:
To show that S is a basis for R3 , we show that S is a linearly independent set that spans R3 .
1. Is S linearly independent? Consider the homogeneous system c1 x1 + c2 x2 + c3 x3 = 0.
This system has only the trivial solution if det (A) ≠ 0, where A is the matrix of coefficients.
That is,
| A | = |1 0 0; 0 1 2; 1 −1 2| = (1) |1 2; −1 2| = 2 − (−2) = 4 ≠ 0.
Thus, the system has only the trivial solution and hence S is linearly independent.
2. S spans R3 ? For any x = (a, b, c) ∈ R3 , consider the nonhomogenous system:
Since the det (A) 6= 0 where A is the matrix of coefficients, the system has a unique
solution and thus S spans R3 .
Therefore, S is a basis for R3 .
4.4. Coordinates and Basis 99
Let S = {x1 , x2 , · · · , xn } be a basis for a vector space V. Then, every vector in V can be
written in exactly one way as a linear combination of the vectors in S.
Proof:
x = c1 x1 + c2 x2 + · · · + cn xn , and (4.4.1)
x = d1 x1 + d2 x2 + · · · + dn xn . (4.4.2)
(x)S = (c1 , c2 , · · · , cn ).
Example 4.4.3
Let S = {x1 = (1, 0, 1), x2 = (0, 1, −1), x3 = (0, 2, 2)} be a basis for R3 .
1. Find the coordinate vector of x = (−1, 1, 2).
2. Find the vector x in R3 whose coordinate vector relative to S is (x)S = (−1, 3, 2).
Solution:
Solving this system, we obtain c1 = −1, c2 = −1 and c3 = 1. That is, (x)S = (−1, −1, 1).
2. We simply evaluate x = (−1)x1 + (3)x2 + (2)x3 to get x = (−1, 7, 0).
4.4. Coordinates and Basis 101
Exercise 4.4.1
x1 = (1, 0, 1, 0), x2 = (0, 1, −1, 2), x3 = (0, 2, 2, 1), and x4 = (1, 0, 0, 1).
Exercise 4.4.2
Let S = {x1 , x2 , · · · , xn } be a set of vectors in a vector space V. Show that S is a basis for V
if and only if every vector in V can be expressed in exactly one way as a linear combination of
the vectors in S. ” ⇒ ” : Use Theorem 4.4.1. And for ” ⇐ ” : Show the linear independence
of S using the uniqueness.
102 Chapter 4. General Vector Spaces
All bases for a finite-dimensional vector space have the same number of vectors, defined as
dimension and is denoted by dim(V). The zero vector space is of dimension zero.
Theorem 4.5.2
Example 4.5.1
Find a basis for the subspace of all vectors of the form (a, b, −a − b, a − b), for a, b ∈ R. Find
its dimension.
Solution:
holds only if c1 = c2 = 0 which shows that S is linearly independent. That is, S is a basis for
W and dim(W) = 2.
4.5. Dimension 103
Example 4.5.2
Find a basis for and the dimension of the solution space of the homogeneous system
x1 + x2 + 2x4 = 0
x2 − x3 + x4 = 0
x1 + x2 + 2x4 = 0
Solution:
Theorem 4.5.3
Remark 4.5.2
The set S = {x1 = (1, 5), x2 = (1, 4)} is linearly independent in the 2-dimensional vector
space R2 . Hence, S forms a basis for R2 .
Moreover, considering S = {x1 = (1, 0, 5), x2 = (1, 0, 4), x3 = (1, 1, 1)}, we see that x1 and x2
form a linearly independent set in the xz-plane. The vector x3 is outside of the xz-plane, so
the set S is linearly independent set in R3 . Hence, S forms a basis for R3 .
104 Chapter 4. General Vector Spaces
Example 4.5.3
Find all values of a for which S = {(a2 , 0, 1), (0, a, 2), (1, 0, 1)} is a basis for R3 .
Solution:
Example 4.5.4
Let S = {x1 = (1, 0, 1), x2 = (1, 1, 1), x3 = (0, −1, 0), x4 = (2, 1, 2)} be a set of vectors in R3 .
Find a subset of S that is a basis for W = span S, and find the dimension of W.
4.5. Dimension 105
Solution:
Example 4.5.5
Find a basis for R4 that contains the vectors x1 = (1, 0, 1, 0) and x2 = (−1, 1, −1, 0).
Solution:
Consider the set S = {x1 , x2 , e1 , e2 , e3 , e4 }. The set S spans R4 but it contains some linearly
dependent vectors. In order to delete those, we follow the following procedure:
[1 −1 1 0 0 0 | 0; 0 1 0 1 0 0 | 0; 1 −1 0 0 1 0 | 0; 0 0 0 0 0 1 | 0]  ≈ · · · ≈ (r.r.e.f.)  [1 0 0 1 1 0 | 0; 0 1 0 1 0 0 | 0; 0 0 1 0 −1 0 | 0; 0 0 0 0 0 1 | 0]
The leading entries point to columns 1, 2, 3, and 6. Therefore, the set {x1, x2, e1, e4}
is a basis for R4 containing x1 and x2.
Theorem 4.5.5
Exercise 4.5.1
Let S = {x1 , x2 , x3 , x4 , x5 } be a set of R4 where x1 = (1, 2, −2, 1), x2 = (−3, 0, −4, 3),
x3 = (2, 1, 1, −1), x4 = (−3, 3, −9, 6), and x5 = (9, 3, 7, −6). Find a subset of S that is a basis
for W = span S. Find dim(W). Final answer: {x1 , x2 } is a basis for W and the dimension
is 2.
Exercise 4.5.2
Find the dimension of the subspace of all vectors of the form (a, b, c, d) where c = a − b and
d = a + b (for a, b ∈ R). Final answer: the dimension is 2.
Exercise 4.5.3
Find the dimension of the subspace of all vectors of the form (a + c, a + b + 2c, a + c, a − b)
where a, b, c ∈ R. Final answer: the dimension is 2.
Exercise 4.5.4
Let S = {x1 , x2 , x3 } be a basis for a vector space V. Show that T = {y1 , y2 , y3 } is also a
basis for V, where y1 = x1 + x2 + x3 , y2 = x2 + x3 , and y3 = x3 .
Exercise 4.5.5
Find a standard basis vector for R3 that can be added to the set {x1 = (1, 1, 1), x2 = (2, −1, 3)}
to produce a basis a basis for R3 . Final answer: any vector of the standard basis will work.
Exercise 4.5.6
The set S = {x1 = (1, 2, 3), x2 = (0, 1, 1)} is linearly independent in R3 . Extend (enlarge) S
to a basis for R3 . Final answer: S = {x1 = (1, 2, 3), x2 = (0, 1, 1), x3 = (1, 0, 0)}
Exercise 4.5.7
Let S = {x1 = (1, 0, 2), x2 = (−1, 0, −1)} be a set of vectors in R3 . Find a basis for R3 that
contains the set S. Final answer: {(1, 0, 2), (−1, 0, −1), (0, 1, 0)}.
4.7. Row Space, Column Space, and Null Space 107
Definition 4.7.1
a11 a12 ··· a1n
a21 a22 ··· a2n
Let A =
.. .. .. ∈ Mm×n . The set of rows of A are:
..
. . . .
am1 am2 ··· amn
x1 = [a11 a12 ··· a1n ]
x2 = [a21 a22 ··· a2n ]
.. .. .. ∈ Rn
. . .
xm = [am1 am2 ··· amn ]
These row vectors span a subspace of Rn which is called the row space of A.
Similarly, the columns of A are:
a a a
11 12 1n
a21 a22 a2n
y1 = . , y2 = . , · · · , yn = . ∈ Rm .
.. .. ..
am1 am2 amn
These column vectors span a subspace of Rm which is called the column space of A .
Moreover, the solution space of the homogeneous system Ax = 0 (which is a subspace of Rn )
is called the null space of A.
Theorem 4.7.1
Solution:
Ax = x1 c1 + x2 c2 + · · · + xn cn
x1 c1 + x2 c2 + · · · + xn cn = b
from which we conclude that Ax = b is consistent if and only if b can be written as a linear
combination of the column vectors of A.
108 Chapter 4. General Vector Spaces
Theorem 4.7.2
Elementary row operations do not change the null and row spaces of a matrix.
Theorem 4.7.3
If R is a matrix in row echelon form, then the row vectors with the leading 1’s (the nonzero
row vectors) form a basis for the row space of R; and the column vectors with the leading 1’s
of the row vectors form a basis for the column space of R.
Let
1 −2 0 3 −4
3 2 8 1 4
A= .
2 3 7 2 3
−1 2 0 4 −3
1. find a basis for the row space of A,
2. find a basis for the column space of A,
3. find a basis for the row space that contains only rows of A,
4. find a basis for the column space that contains only columns of A.
Solution:
1. To find a basis for the row space of A, we have to find the r.r.e.f. of A, then the set of
non-zero rows of the r.r.e.f. forms a basis for the row space.
1 −2 0 3 −4
1 0 2 0 1 ←
3 2 8 1 4 r.r.e.f.
≈ ······ ≈ ←
0 1 1 0 1
2 3 7 2 3
0 0 0 1 −1 ←
−1 2 0 4 −3
0 0 0 0 0
Therefore, the set {(1, 0, 2, 0, 1), (0, 1, 1, 0, 1), (0, 0, 0, 1, −1)} forms a basis for the row
space of A.
2. To find a basis for the column space of A, we have to find a basis for the row space of
4.7. Row Space, Column Space, and Null Space 109
AT . Therefore,
1 3 2 −1
11
1 0 0 ←
−2 2 3 2
24
−49
r.r.e.f. 0 1 0 ←
AT =
0 8 7 0 ≈ ······ ≈ 24
7
0 0 1 ←
3 1 2 4 3
0 0 0 0
−4 4 3 −3
0 0 0 0
n o
11
Therefore, the set (1, 0, 0, 24 ), (0, 1, 0, −49
24
), (0, 0, 1, 73 ) is a basis for the row space of
AT and it is a basis for the column space of A.
3. To find a basis for the row space of A that contains only rows of A, we do as follows:
1 3 2 −1
11
0 0 1
−2 2 3 2 24
−49
1 0 0
T
r.r.e.f.
A =
0 8 7 ≈ ······ ≈
0
24
0 7
3
0 1 3
1 2 4
0 0 0 0
−4 4 3 −3
0 0 0 0
↑ ↑ ↑
Then, the leading entries are pointing to column 1, column 2, and column 3 in the r.r.e.f.
of AT which correspond to row 1, row 2, and row 3 in A. Thus,
4. To find a basis for the column space of A that only contains columns of A, we do the
following:
[  1  −2  0  3  −4 ]               [ 1  0  2  0   1 ]
[  3   2  8  1   4 ]   r.r.e.f.    [ 0  1  1  0   1 ]
[  2   3  7  2   3 ]  ≈ ······ ≈   [ 0  0  0  1  −1 ]
[ −1   2  0  4  −3 ]               [ 0  0  0  0   0 ]
   ↑    ↑      ↑
Then, the leading entries are pointing to column 1, column 2, and column 4 in the r.r.e.f.
of A. Thus, the set {(1, 3, 2, −1), (−2, 2, 3, 2), (3, 1, 2, 4)}, consisting of columns 1, 2, and 4 of A,
forms a basis for the column space of A that contains only columns of A.
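As a quick computational check of the example above (an aside, not part of the notes), SymPy's rref reproduces the reduced row echelon forms used in parts 1–4:

from sympy import Matrix

A = Matrix([[1, -2, 0, 3, -4],
            [3,  2, 8, 1,  4],
            [2,  3, 7, 2,  3],
            [-1, 2, 0, 4, -3]])

R, pivots = A.rref()               # R is the r.r.e.f. of A; pivots are the leading columns (0, 1, 3)
print(R)                           # nonzero rows of R: a basis for the row space of A
print(A.T.rref()[0])               # r.r.e.f. of A^T: its nonzero rows give a basis for the column space of A
print([A.col(j) for j in pivots])  # columns 1, 2 and 4 of A: a column-space basis using columns of A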
Solution:
The space spanned by these vectors is the row space of the matrix
       [ 1   0   2  1 ]
A =    [ 1   1  −1  0 ] .
       [ 0  −1   3  1 ]
Reducing this matrix to its r.r.e.f. we get
[ 1  0   2   1 ]
[ 0  1  −3  −1 ] .
[ 0  0   0   0 ]
The nonzero vectors in this matrix are (1, 0, 2, 1) and (0, 1, −3, −1).
These vectors form a basis for the row space of A and consequently form a basis for span S.
Let S = {x1 = (1, 1, 0), x2 = (0, 1, −1), x3 = (2, −1, 3), x4 = (1, 0, 1)}.
1. Find a subset of S that forms a basis for span S.
2. Express each vector not in the basis as a linear combination of the basis vectors.
Solution:
1. The vectors x3 and x4 are linear combinations of x1 and x2 (see part 2), while x1 and x2
are linearly independent. Hence the subset {x1 , x2 } forms a basis for span S.
2. Solving for the coefficients, we get
x3 = (2)x1 + (−3)x2 ,
x4 = (1)x1 + (−1)x2 .
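The coefficients in these combinations can be checked by solving a small linear system; the following SymPy sketch (an illustrative aside) solves a x1 + b x2 = x3 and a x1 + b x2 = x4 :

from sympy import Matrix, symbols, solve

a, b = symbols('a b')
x1, x2 = Matrix([1, 1, 0]), Matrix([0, 1, -1])
x3, x4 = Matrix([2, -1, 3]), Matrix([1, 0, 1])

# a*x1 + b*x2 - x3 = 0 read componentwise gives three equations in a and b
print(solve(list(a*x1 + b*x2 - x3), [a, b]))   # expected: a = 2, b = -3
print(solve(list(a*x1 + b*x2 - x4), [a, b]))   # expected: a = 1, b = -1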
Example 4.7.4
Let
       [  1  −2  0  3  −4 ]
A =    [  3   2  8  1   4 ]
       [  2   3  7  2   3 ]
       [ −1   2  0  4  −3 ] .
1. Find bases for the row and column spaces of A,
2. Find a basis for the null space of A.
3. Does x = (1, 2, 4, 3, 0) belong to the row space of A? Explain.
4. Express each column of A not in the basis of column space as a linear combination of
the vectors in the basis you got in step 1.
Solution:
1. To get bases for the row space and column spaces of A, we do the following:
[  1  −2  0  3  −4 ]               [ 1  0  2  0   1 ]  ←
[  3   2  8  1   4 ]   r.r.e.f.    [ 0  1  1  0   1 ]  ←
[  2   3  7  2   3 ]  ≈ ······ ≈   [ 0  0  0  1  −1 ]  ←
[ −1   2  0  4  −3 ]               [ 0  0  0  0   0 ]
   ↑    ↑      ↑
Thus, the set {(1, 0, 2, 0, 1), (0, 1, 1, 0, 1), (0, 0, 0, 1, −1)} forms a basis for the row space
of A, while the set {(1, 3, 2, −1), (−2, 2, 3, 2), (3, 1, 2, 4)} forms a basis for the column
space of A that only contains columns of A, but this is fine since there are no restrictions
on the basis of column space of A mentioned in the question.
2. Using what we got in the previous step, the solution space of the homogeneous system
is given by the reduced system:
x1 + 2x3 + x5 = 0
x2 + x3 + x5 = 0
x4 − x5 = 0
Setting x3 = s and x5 = t, we get x1 = −2s − t, x2 = −s − t, and x4 = t. Therefore, the set
{(−2, −1, 1, 0, 0), (−1, −1, 0, 1, 1)} forms a basis for the null space of A.
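As a computational aside (not part of the notes), SymPy's nullspace method returns a basis of the solution space of Ax = 0 directly, which can be compared with the reduced system above:

from sympy import Matrix

A = Matrix([[1, -2, 0, 3, -4],
            [3,  2, 8, 1,  4],
            [2,  3, 7, 2,  3],
            [-1, 2, 0, 4, -3]])

for v in A.nullspace():   # one basis vector per free variable (here x3 and x5)
    print(v.T)            # printed as a row vector
    print(A * v)          # each product should be the zero vector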
112 Chapter 4. General Vector Spaces
Exercise 4.7.1
Let
A =    [ −1  −1  0 ]
       [  2   0  4 ] .
Find a basis for the null space of A. Final answer: S = {(−2, 2, 1)}.
Exercise 4.7.2
Let
       [ 0  0  0  −1 ]
A =    [ 0  1  0   0 ]
       [ 0  0  0   1 ] .
1. Find a basis for the null space of A.
2. Find a basis for the row space of AT .
3. Find a basis for the row space of A.
Final answer: rank(A) = 2, nullity(A) = 2, rank(AT ) = 2, and nullity(AT ) = 1. Moreover,
1. a basis for the null space of A = {(1, 0, 0, 0), (0, 0, 1, 0)},
2. a basis for the row space of AT = {(0, 1, 0), (−1, 0, 1)},
3. a basis for the row space of A = {(0, 1, 0, 0), (0, 0, 0, 1)}.
Exercise 4.7.3
Let
       [ 1  0  −1  1 ]
A =    [ 1  1   1  1 ]
       [ 1  2   3  1 ] .
1. Find a basis for the null space of A.
2. Find a basis for the row space of AT .
3. Find a basis for the column space of A.
Final answer: rank(A) = 2, nullity(A) = 2, rank(AT ) = 2, and nullity(AT ) = 1. Moreover,
1. a basis for the null space of A = {(1, −2, 1, 0), (−1, 0, 0, 1)},
2. a basis for the row space of AT = {(1, 0, −1), (0, 1, 2)},
3. a basis for the column space of A = {(1, 1, 1), (0, 1, 2)}.
114 Chapter 4. General Vector Spaces
Exercise 4.7.4
Let S = {x1 , x2 , x3 , x4 , x5 }, where x1 = (1, −2, 0, 3), x2 = (2, −5, −3, 6), x3 = (0, 1, 3, 0), x4 =
(2, −1, 4, −7), and x5 = (5, −8, 1, 2).
1. Find a subset of S that forms a basis for the subspace span S.
2. Express each vector not in the basis as a linear combination of the basis vectors.
3. If A is the 4 × 5 matrix whose columns are the vectors of S in order, then find a basis
for the row space of A, a basis for the column space of A, and a basis for the null space
of A.
Hint: Here is the reduced row echelon form (r.r.e.f.) of A:
[ 1  0   2  0  1 ]
[ 0  1  −1  0  1 ]
[ 0  0   0  1  1 ] .
[ 0  0   0  0  0 ]
4.8. Rank, Nullity and the Fundamental Matrix Spaces 115
The row space and column space of a matrix A have the same dimension.
The common dimension of the row space and column space of a matrix A is called the rank
of A and is denoted by rank(A); the dimension of the null space of A is called the nullity of
A and is denoted by nullity(A).
Solution:
x1 + 2x3 + x5 = 0,
x2 − x3 + x5 = 0,
x4 + x5 = 0.
116 Chapter 4. General Vector Spaces
rank(A) + nullity(A) = n.
Remark 4.8.2
The matrix A of Example 4.8.1 has 5 columns with rank(A) = 3 and nullity(A) = 2.
If A is an m × n matrix, then
1. rank(A) = the number of leading variables in the general solution of Ax = 0.
2. nullity(A) = the number of parameters in the general solution of Ax = 0.
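As an aside (using the matrix of the earlier row/column space example), the following short SymPy sketch illustrates these counts and the fact that rank and nullity add up to the number of columns n:

from sympy import Matrix

A = Matrix([[1, -2, 0, 3, -4],
            [3,  2, 8, 1,  4],
            [2,  3, 7, 2,  3],
            [-1, 2, 0, 4, -3]])

rank = A.rank()                # number of leading 1's in the r.r.e.f.
nullity = len(A.nullspace())   # number of parameters in the general solution of Ax = 0
print(rank, nullity, A.cols)   # expected: 3, 2 and n = 5, with rank + nullity = n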
Remark 4.8.3
Let A be a 3 × 5 matrix. Then: the largest possible rank of A is 3 and the smallest possible
rank of A is 0 (the zero matrix). This is because rank of A = row rank = column rank, and
we only have 3 rows. Also, the largest nullity of A is 5 (zero matrix) and the smallest nullity
is 2 (when rank of A = 3). Moreover, the largest possible rank of AT is 3, and the largest
possible nullity of AT is 3.
4.8. Rank, Nullity and the Fundamental Matrix Spaces 117
Theorem 4.8.4
If A is any matrix, then rank(A) = rank(AT ).
Example 4.8.2
If A is a 3 × 7 matrix, what is the largest possible rank of A (or what is the largest possible
number of linearly independent columns of A)?
Solution:
The largest possible row rank of A is 3 while the largest possible column rank of A is 7, but
row rank of A = column rank of A. Therefore, the largest possible rank of A is 3. The same
thing applies for the largest possible number of linearly independent columns of A.
Remark 4.8.4
Exercise 4.8.1
Let
A =    [ −1  −1  0 ]
       [  2   0  4 ] .
Find a basis for the null space of A and determine the nullity of A.
Final answer: a basis for the null space is S = {(−2, 2, 1)}, so nullity(A) = 1.
Exercise 4.8.2
Let
       [ 0  0  0  −1 ]
A =    [ 0  1  0   0 ]
       [ 0  0  0   1 ] .
1. Find rank(A), nullity(A), rank(AT ), and nullity(AT ).
2. Find a basis for the null space of A.
3. Find a basis for the row space of AT .
4. Find a basis for the row space of A.
Final answer:
1. rank(A) = 2, nullity(A) = 2, rank(AT ) = 2, and nullity(AT ) = 1.
2. a basis for the null space of A = {(1, 0, 0, 0), (0, 0, 1, 0)}.
3. a basis for the row space of AT = {(0, 1, 0), (−1, 0, 1)}.
4. a basis for the row space of A = {(0, 1, 0, 0), (0, 0, 0, 1)}.
Exercise 4.8.3
Let
       [ 1   1  4  1  2 ]
       [ 0   1  2  1  1 ]
A =    [ 0   0  0  1  2 ]
       [ 1  −1  0  0  2 ]
       [ 2   1  6  0  1 ] .
Find the rank of A and the nullity of A.
Chapter
5
Eigenvalues and Eigenvectors
Let A be an n × n matrix. A nonzero vector x in Rn is called an eigenvector of A if
Ax = λx
for some scalar λ. The scalar λ is called an eigenvalue of A and x is called an eigenvector
corresponding to λ.
For example, let
A =    [  1  −1 ]
       [ −1   1 ]
and x = (1, −1). Then,
Ax = (1·1 + (−1)·(−1), (−1)·1 + 1·(−1)) = (2, −2) = 2(1, −1) = 2x.
Therefore, x is an eigenvector of A corresponding to λ = 2 (or λ = 2 is an eigenvalue of A corre-
sponding to eigenvector x).
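As an illustrative aside, the eigenvalue equation Ax = λx can be checked symbolically; the SymPy sketch below uses the 2 × 2 matrix from the example above.

from sympy import Matrix

A = Matrix([[1, -1],
            [-1, 1]])
x = Matrix([1, -1])

print(A * x == 2 * x)    # True: x is an eigenvector for the eigenvalue 2
print(A.eigenvals())     # eigenvalues with their algebraic multiplicities, expected {0: 1, 2: 1}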
Proof:
119
120 Chapter 5. Eigenvalues and Eigenvectors
Solution:
λ1 = −1 , λ2 = 0 , λ3 = 1 .
Remark 5.1.1
Remark 5.1.2
Theorem 5.1.2
Theorem 5.1.3
Let A be an n × n matrix. Then λ is an eigenvalue of A iff the system (λIn − A)x = 0 has
nontrivial solutions iff there is a nonzero vector x such that Ax = λx iff λ is a solution of
pA (λ) = | λIn − A | = 0.
Solution:
λ1 = −1 , λ2 = 0 , λ3 = 1 .
Thus, there are three eigenspaces of A corresponding to these eigenvalues. To find bases for
these eigenspaces, we solve the homogeneous system (λI3 − A)x = 0, for λ1 , λ2 , λ3 . That is:
[ λi − 1    0       −2    | 0 ]
[   −1     λi        0    | 0 ]          (5.1.1)
[    0      0    λi + 1   | 0 ]
For λ1 = −1, the solutions of this system are of the form t(−1, 1, 1), t ∈ R. Setting t = 1, we
get a basis with one vector
P1 = (−1, 1, 1) .
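As a computational aside, SymPy can solve the homogeneous systems (λI3 − A)x = 0 for all eigenvalues at once; the matrix A used below is inferred from the coefficient matrix in (5.1.1) above.

from sympy import Matrix

A = Matrix([[1, 0,  2],
            [1, 0,  0],
            [0, 0, -1]])   # matrix inferred from (5.1.1)

# eigenvects() returns (eigenvalue, algebraic multiplicity, basis of the eigenspace)
for lam, mult, basis in A.eigenvects():
    print(lam, [v.T for v in basis])
# expected eigenvalues: -1, 0, 1; e.g. the eigenspace of -1 is spanned by (-1, 1, 1)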
Proof:
If Ax = λx, then we have A²x = A(Ax) = A(λx) = λ(Ax) = λ²x. Applying this simple idea
k times, we get
A^k x = A^{k−1} (Ax) = λ A^{k−1} x = λ² A^{k−2} x = · · · = λ^k x.
If λ is an eigenvalue of an invertible matrix A, and x is a corresponding eigenvector, then 1/λ
is an eigenvalue of A−1 and x is a corresponding eigenvector. (Note that λ ≠ 0 since A is invertible.)
Proof:
If Ax = λx and A is invertible, then multiplying both sides by A−1 (from the left), we get
A−1 Ax = A−1 (λx)   ⇒   x = λ A−1 x   ⇒   (1/λ) x = A−1 x .
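As an aside, both facts (the eigenvalues of A^k are λ^k, and the eigenvalues of A−1 are 1/λ) are easy to check in SymPy; the invertible matrix below is chosen only for illustration.

from sympy import Matrix, Rational

A = Matrix([[2, 1],
            [1, 2]])       # invertible, with eigenvalues 1 and 3
x = Matrix([1, 1])         # eigenvector for the eigenvalue 3

print(A**3 * x == 27 * x)                 # True: 3**3 = 27 is an eigenvalue of A**3
print(A.inv() * x == Rational(1, 3) * x)  # True: 1/3 is an eigenvalue of the inverse of A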
124 Chapter 5. Eigenvalues and Eigenvectors
Exercise 5.1.1
Show that A and AT have the same eigenvalues. Hint: |λIn − A| = |(λIn − A)T | = |λIn − AT |.
Exercise 5.1.2
Exercise 5.1.3
Find the eigenvalues and bases for the eigenspaces of
       [  1   3   3 ]
A =    [  1  −1  −4 ]
       [ −1  −1   2 ] .
Final result: λ1 = −2, λ2 = 1, λ3 = 3, with P1 = (−1, 1, 0), P2 = (2, −1, 1), and P3 = (0, −1, 1).
Exercise 5.1.4
Find the eigenvalues of
       [  0      1     0 ]
A =    [  0      0     1 ]
       [ k^3  −3k^2   3k ] .
Final result: λ = k since p(λ) = (λ − k)^3 .
Exercise 5.1.5
Show that if a, b, c, d are integers such that a + b = c + d, then
A =    [ a  b ]
       [ c  d ]
has integer eigenvalues λ1 = a + b and λ2 = a − c. Hint: Use your algebraic abilities.
Exercise 5.1.6
Find det (A) given that A has the characteristic polynomial pA (λ) = λ3 + 2λ2 − 4λ − 5. Hint:
Use Remark 5.1.2.
5.2. Diagonalization 125
If A and B are two square matrices, then we say that B is similar to A if there is an
invertible matrix P such that B = P −1 AP . In that case, we write B ≡ A.
1. A ≡ A since A = I −1 AI.
2. If B ≡ A, then A ≡ B.
3. If A ≡ B and B ≡ C, then A ≡ C.
Proof. A ≡ B and B ≡ C imply that there are invertible matrices P and Q such that
A = P −1 BP and B = Q−1 CQ. Therefore,
A = P −1 (Q−1 CQ)P = (QP )−1 C (QP ), and hence A ≡ C.
4. If A ≡ B, then | A | = | B |.
5. If A ≡ B, then AT ≡ B T .
B = P −1 A P,
BT = (P −1 A P )T ,
BT = P T AT (P −1 )T ,
BT = P T AT (P T )−1 .
Proof:
Let A and B be two similar n × n matrices. Then, there is an invertible matrix P such that
B = P −1 A P . Then,
pB (λ) = | λIn − B | = | λIn − P −1 A P | = | P −1 (λIn − A) P | = | P −1 | | λIn − A | | P | = | λIn − A | = pA (λ),
since | P −1 | | P | = | P −1 P | = | In | = 1.
The characteristic polynomials of A and B are the same. Hence they have the same eigenval-
ues.
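As an illustrative aside, this can be checked for a concrete pair of similar matrices with SymPy (the matrices A and P below are arbitrary choices made only for the illustration):

from sympy import Matrix, symbols, expand

lam = symbols('lambda')
A = Matrix([[1, 2],
            [3, 4]])
P = Matrix([[1, 1],
            [0, 1]])        # any invertible matrix
B = P.inv() * A * P         # B is similar to A

pA = expand(A.charpoly(lam).as_expr())
pB = expand(B.charpoly(lam).as_expr())
print(pA, pB, pA == pB)     # the two characteristic polynomials coincide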
D = P −1 A P with | P | ≠ 0.
An n × n matrix A has n linearly independent eigenvectors if all of its eigenvalues are real and distinct.
If A is an n×n matrix with eigenvalue λ0 , then the dimension of the eigenspace corresponding
to λ0 is called the geometric multiplicity of λ0 ; and the number of times that λ − λ0 appears
as a factor in the characteristic polynomial of A is called the algebraic multiplicity of λ0 .
5.2. Diagonalization 127
Example 5.2.1
Let
       [ 1  0   2 ]
A =    [ 1  0   0 ] .
       [ 0  0  −1 ]
If possible, find matrices P and D so that A is diagonalizable.
Solution:
Example 5.2.2
Solution:
128 Chapter 5. Eigenvalues and Eigenvectors
We see here that the (geometric multiplicity) dimension of Eλ1 is 1 while the (algebraic)
multiplicity of λ1 is 2. Therefore, A is not diagonalizable.
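As an aside, the comparison between geometric and algebraic multiplicity can be carried out mechanically in SymPy; the matrix below is a standard non-diagonalizable example chosen here only for illustration (it is not the matrix of Example 5.2.2, whose statement is not reproduced above).

from sympy import Matrix

A = Matrix([[1, 1],
            [0, 1]])

# eigenvects() returns (eigenvalue, algebraic multiplicity, eigenspace basis);
# the geometric multiplicity is the number of vectors in that basis.
for lam, alg_mult, basis in A.eigenvects():
    print(lam, alg_mult, len(basis))   # expected: 1, 2, 1  (geometric < algebraic)

print(A.is_diagonalizable())           # False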
Example 5.2.3
Solution:
The solutions are of the form (t, 0, −t), t ∈ R. Setting
t = 1, we get a basis with one vector P3 = (1, 0, −1).
There are three basis vectors in total, so the matrix P = [P1 | P2 | P3 ] diagonalizes A and we
get D = P −1 AP = diag(1, 0, 0).
       [ 1  0  0 ]               [  1  0  0 ]
D =    [ 0  0  0 ]   and  P =    [  0  1  0 ] .
       [ 0  0  0 ]               [ −1  0  1 ]
130 Chapter 5. Eigenvalues and Eigenvectors
Exercise 5.2.1
Show that similar matrices have the same trace. Hint: Recall that tr(AB) = tr(BA).
Exercise 5.2.2
Exercise 5.2.3
Let
       [  0  −3  −3 ]
A =    [  1   4   1 ] .
       [ −1  −1   2 ]
1. Find the eigenvalues of A.
2. For each eigenvalue λ, find the rank of the matrix λ I3 − A.
3. Is A diagonalizable? Why?
Hint: For part 3, use what you got in part 2 and recall that for an n × n matrix, we have
n = rank + nullity.
Exercise 5.2.4
Exercise 5.2.5
Let
       [ 1  −2   8 ]
A =    [ 0  −1   0 ] .
       [ 0   0  −1 ]
1. Find A^10000 .
5.2. Diagonalization 131
2. Find A^20021 .
3. Find A^−20021 .
Hint: Write A in the form A = P D P −1 .
Exercise 5.2.6
Show that if A and B are invertible matrices, then AB and BA are similar. Hint: They are
similar if AB = (?)−1 (BA) (?).
Exercise 5.2.7
Prove: If A and B are n × n invertible matrices, then A B −1 and B −1 A have the same
eigenvalues. Hint: Show that they have the same characteristic polynomial.
Exercise 5.2.8
Let
       [ 1  2  0 ]
A =    [ 0  2  0 ] ,  where a ∈ R.
       [ 1  a  2 ]
1. Find all eigenvalues of A.
2. For a = −2, determine whether A is diagonalizable.
3. For a ≠ −2, find all eigenvectors of A.
Final answer: Eigenvalues: 1 and 2. If a = −2, A is diagonalizable. Otherwise, A is not
diagonalizable.
Exercise 5.2.9
Chapter
6
Applications of Vectors in R2 and R3
Two nonzero vectors x and y in Rn are said to be orthogonal (or perpendicular) if x·y = 0.
Remark 6.1.1
• The set of standard unit vectors {i, j, k} in R3 is called an orthogonal set since the
dot product of any pair of distinct vectors is 0. Moreover, it is called an orthonormal
set since each vector in the set is a unit vector.
Example 6.1.1
Solution:
Example 6.1.2
Find a unit vector that is orthogonal to both x = (1, 0, 1) and y = (0, 1, 1).
Solution:
Let z = (a, b, c) be orthogonal to both x and y. Then z · x = 0 and z · y = 0 give
a + c = 0 and b + c = 0.
Thus, z is of the form (−c, −c, c). Setting c = 1, we get z = (−1, −1, 1) which is orthogonal
to both x and y.
To find a unit vector that is orthogonal to both x and y, we consider
w = (1/‖z‖) z = ( −1/√3 , −1/√3 , 1/√3 ) .
133
134 Chapter 6. Applications of Vectors in R2 and R3
If x and y are orthogonal vectors in Rn , then
‖ x + y ‖² = ‖ x ‖² + ‖ y ‖² .
[Figure: the right triangle with legs x and y and hypotenuse x + y.]
Proof:
Since x · y = 0, we have
‖ x + y ‖² = (x + y) · (x + y) = ‖ x ‖² + 2 (x · y) + ‖ y ‖² = ‖ x ‖² + ‖ y ‖² .
Example 6.1.3
Solution:
Example 6.1.4
Show that the triangle with vertices P1 (2, 3, −4), P2 (3, 1, 2), and P3 (7, 0, 1) is a right triangle.
Solution:
[Figures 1–3: the triangle with vertices P1 , P2 , P3 , shown first with plain sides, then with the
vectors along its sides drawn, then with the right angle marked.]
We start with Figure 1 as we do not know if there is a right angle. We create three vectors,
6.1. Orthogonality 135
namely
x = P1P2 = P2 − P1 = (1, −2, 6) ,
y = P1P3 = P3 − P1 = (5, −3, 5) ,
z = P2P3 = P3 − P2 = (4, −1, −1) .
This is drawn in Figure 2. Then, we want to find two vectors whose dot product is zero, which
holds for x and z. That is,
x · z = 4 + 2 − 6 = 0.
Hence the triangle has a right angle at P2 , so it is a right triangle.
Exercise 6.1.1
Find all values of c so that x = (c, 2, 1, c) and y = (c, −1, −2, −3) are orthogonal.
Exercise 6.1.2
Exercise 6.1.3
‖ 4x + 3y ‖ = 5.
Exercise 6.1.4
Exercise 6.1.5
Find all values of a so that x = (a2 − a, −3, −1) and y = (2, a − 1, 2a) are orthogonal.
Exercise 6.1.6
Find a unit vector that is orthogonal to both x = (1, 1, 0) and y = (−1, 0, 1).
Exercise 6.1.7
Show that the triangle with vertices P1 (2, 3, −4), P2 (3, 1, 2), and P3 (7, 0, 1) is a right triangle.
6.2. Cross Product 137
Remark 6.2.1
Definition 6.2.1
If x = (x1 , x2 , x3 ) and y = (y1 , y2 , y3 ) are vectors in R3 , then the cross product of x and y is
x × y = (x2 y3 − x3 y2 , x3 y1 − x1 y3 , x1 y2 − x2 y1 ).
138 Chapter 6. Applications of Vectors in R2 and R3
Example 6.2.1
Find x × y where x = (2, 1, 2) and y = (3, −1, −1).
Solution:
            [ i   j   k ]
x × y = det [ 2   1   2 ]
            [ 3  −1  −1 ]
= (1·(−1) − 2·(−1)) i − (2·(−1) − 2·3) j + (2·(−1) − 1·3) k = i + 8j − 5k = (1, 8, −5).
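As an aside, the computation can be verified with SymPy's built-in cross product, which also illustrates the orthogonality property stated in the next theorem (x × y is orthogonal to both factors):

from sympy import Matrix

x = Matrix([2, 1, 2])
y = Matrix([3, -1, -1])

v = x.cross(y)
print(v.T)                  # expected: (1, 8, -5)
print(x.dot(v), y.dot(v))   # both 0: v is orthogonal to x and to y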
If x, y, z ∈ R3 and c ∈ R, then
1. x × y = −(y × x),
2. x × (y + z) = x × y + x × z,
3. (x + y) × z = x × z + y × z,
4. c (x × y) = (c x) × y = x × (c y),
5. x × x = 0,
6. x × 0 = 0 × x = 0,
7. x × (y × z) = (x · z) y − (x · y) z,
8. (x × y) × z = (x · z) y − (y · z) x.
i × i = 0 ,   j × j = 0 ,   k × k = 0 ,
i × j = k ,   j × k = i ,   k × i = j ,
j × i = −k ,   k × j = −i ,   i × k = −j .
[Figure: the cyclic diagram i → j → k; products taken in the direction of the arrows carry a
plus sign, products against the arrows carry a minus sign.]
If x, y, z ∈ R3 , then
1. x · (x × y) = 0, and y · (x × y) = 0, [x × y is orthogonal to both x and y]
2. ‖ x × y ‖² = ‖ x ‖² ‖ y ‖² − (x · y)² , [Lagrange’s identity]
Remark 6.2.5
If θ denotes the angle between x and y, then Theorem 6.2.2(2) and Remark 6.2.1 imply that
‖ x × y ‖ = ‖ x ‖ ‖ y ‖ sin θ.
Remark 6.2.6
If x, y, z ∈ R3 , then
1. x ⊥ y ⇐⇒ θ = π/2 ⇐⇒ sin θ = 1 ⇐⇒ ‖x × y‖ = ‖x‖ ‖y‖,
2. x // y ⇐⇒ θ = 0 or π ⇐⇒ ‖x × y‖ = 0 ⇐⇒ x × y = 0,
3. Area of triangle: A∆ = (1/2) a h, where a is the base and h is the height. For the triangle
with sides x and y and angle θ between them,
sin θ = h / ‖x‖  =⇒  h = ‖x‖ sin θ, and if ‖y‖ = a, then
A∆ = (1/2) ‖y‖ ‖x‖ sin θ = (1/2) ‖x × y‖.
[Figure: a triangle with sides x and y, height h and angle θ between the sides.]
4. Area of parallelogram (two triangles):
A = ‖x × y‖.
5. Volume of parallelepiped:
Volume = | x · (y × z) | .
[Figure: a parallelepiped with edges x, y and z.]
140 Chapter 6. Applications of Vectors in R2 and R3
Example 6.2.2
Find the area of the triangle with vertices: P1 (2, 2, 4), P2 (−1, 0, 5), and P3 (3, 4, 3). Find the
area of the parallelogram with adjacent sides P1P2 and P1P3 .
Solution:
Let x = P1P2 = P2 − P1 = (−3, −2, 1) and y = P1P3 = P3 − P1 = (1, 2, −1). Then,
            [  i   j   k ]
x × y = det [ −3  −2   1 ]
            [  1   2  −1 ]
= ((−2)(−1) − (1)(2)) i − ((−3)(−1) − (1)(1)) j + ((−3)(2) − (−2)(1)) k = 0 i − 2 j − 4 k.
Therefore, ‖x × y‖ = √(4 + 16) = √20 = 2√5 and A∆ = (1/2) ‖x × y‖ = √5.
The area of the parallelogram A is simply 2A∆ = 2√5.
Example 6.2.3
Find the volume of the parallelpiped with a vertex at the origin and edges x = (1, −2, 3),
y = (1, 3, 1), and z = (2, 1, 2).
Solution:
                                           [ 1  −2  3 ]
Volume = | x · (y × z) | = | (x × y) · z | = | det [ 1   3  1 ] |
                                           [ 2   1  2 ]
= | 1(6 − 1) − 1(−4 − 3) + 2(−2 − 9) | = | 5 + 7 − 22 | = | −10 | = 10.
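As a computational aside, the area in Example 6.2.2 and the volume in Example 6.2.3 can both be reproduced with SymPy:

from sympy import Matrix, Rational

# Example 6.2.2: area of the triangle with sides x and y
x = Matrix([-3, -2, 1])
y = Matrix([1, 2, -1])
print(Rational(1, 2) * x.cross(y).norm())   # expected: sqrt(5)

# Example 6.2.3: volume of the parallelepiped with edges x, y, z
x, y, z = Matrix([1, -2, 3]), Matrix([1, 3, 1]), Matrix([2, 1, 2])
print(abs(x.dot(y.cross(z))))               # expected: 10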
Theorem 6.2.3
If x = (x1 , x2 , x3 ), y = (y1 , y2 , y3 ), and z = (z1 , z2 , z3 ) have the same initial point, then they lie
in the same plane if and only if
x · (y × z) = 0.
6.2. Cross Product 141
Exercise 6.2.1
Consider the points P (3, −1, 4), Q(6, 0, 2) and R(5, 1, 1). Find the point S in R3 whose first
component is −1 and such that the vector PQ is parallel to the vector RS.
Exercise 6.2.2
Use cross product to find a vector that is orthogonal to both x = (1, −1, 2) and y = (2, 1, 2).
Hint: Find x × y.
Exercise 6.2.3
Find the area of the parallelogram determined by x = (1, 3, 4) and y = (5, 1, 2). Hint: See
Remark 6.2.6.
Exercise 6.2.4
Find the area of the triangle whose vertices are P (1, 1, 2), Q(0, −3, 4), and R(2, 3, 5). Hint:
See Example 6.2.2.
Exercise 6.2.5
Find the volume of the parallelpiped with sides x = (−3, 1, −2), y = (−1, 2, 3), and z =
(1, 2, 4). Hint: See Example 6.2.3.
Exercise 6.2.6
Determine whether x = (0, 1, −1), y = (2, 2, 0), and z = (4, 1, 2) lie in the same plane when
positioned so that their initial points coincide. Hint: See Theorem 6.2.3.
Exercise 6.2.7
Show that two nonzero vectors x and y in R3 are parallel if and only if x × y = 0.
142 Chapter 6. Applications of Vectors in R2 and R3
Exercise 6.2.8
If x and y are nonzero vectors in R3 such that ‖(2x) × (2y)‖ = −4 x · y, compute the angle
between x and y.
Hint: What is θ if tan(θ) = −1?
Exercise 6.2.9
Find the area of the triangle whose vertices are P (1, 0, −1), Q(2, −1, 3) and R(0, 1, −2).
Exercise 6.2.10
Exercise 6.2.11
Let x and y be two nonzero vectors in R3 , with angle θ = π/3 between them. Find ‖ x × y ‖,
given that ‖ x ‖ = 3 and ‖ −2y ‖ = 4.
Exercise 6.2.12
Find the area of the parallelogram determined by x = (1, 3, 4) and y = (5, 1, 2).
Final answer: 2√131.
Exercise 6.2.13
Exercise 6.2.14
x = 2i − j − 2k and y = 2i + j.
6.3. Lines and Planes in R3 143
Definition 6.3.1
Example 6.3.1
Let P1 (2, −2, 3), P2 (−1, 0, 4) be two points in R3 . Find the parametric equation and the
symmetric form of the line that passes through the points P1 and P2 .
Solution:
Let u = P1P2 = P2 − P1 = (−3, 2, 1) and let P0 = P1 be a fixed point on the line, call it L.
Then, L : (x, y, z) = (2, −2, 3) + t(−3, 2, 1) with parametric equations:
x = 2 − 3t, y = −2 + 2t, z = 3 + t, t ∈ R.
while the symmetric form of L is:
(x − 2)/(−3) = (y − (−2))/2 = (z − 3)/1   ⇐⇒   (2 − x)/3 = (y + 2)/2 = z − 3.
Remark 6.3.1
Let u1 = (a1 , b1 , c1 ) and u2 = (a2 , b2 , c2 ) be two vectors associated with the lines L1 and L2 , so that
L1 : (x − x1 )/a1 = (y − y1 )/b1 = (z − z1 )/c1   and   L2 : (x − x2 )/a2 = (y − y2 )/b2 = (z − z2 )/c2 .
Then,
1. L1 ⊥ L2 ⇐⇒ u1 ⊥ u2 ⇐⇒ u1 · u2 = 0,
2. L1 // L2 ⇐⇒ u1 // u2 ⇐⇒ u1 × u2 = 0 ⇐⇒ u2 = c u1 for some c ∈ R.
144 Chapter 6. Applications of Vectors in R2 and R3
Example 6.3.2
Show that L1 : P1 (4, −1, 4) and u1 = (1, 1, −3) and L2 : P2 (3, −1, 0) and u2 = (2, 1, 1)
intersect orthogonally, and find the intersection point.
Solution:
Definition 6.3.2
Example 6.3.3
The equation of the plane Π containing the point P0 (1, 0, −1) with normal vector n =
(2, −1, 1) is
2(x − 1) − (y − 0) + (z + 1) = 0, that is, 2x − y + z = 1.
6.3. Lines and Planes in R3 145
Remark 6.3.2
Example 6.3.4
Let P1 (2, −2, 1), P2 (−1, 0, 3), P3 (5, −3, 4), and P4 (4, −3, 7) be four points in R3 . Then,
1. Find an equation of the plane Π that passes through P1 , P2 , and P3 .
2. Is P4 contained in Π? Explain.
Solution:
1. Let x = P1P2 = P2 − P1 = (−3, 2, 2) and y = P1P3 = P3 − P1 = (3, −1, 3). These
two vectors are contained in Π while n = x × y = (8, 15, −3) is a normal vector to Π.
Therefore, a general form of Π is 8(x − 2) + 15(y + 2) − 3(z − 1) = 0. The standard
form of Π is
8x + 15y − 3z + 17 = 0.
2. Substituting P4 (4, −3, 7) into this equation gives 8(4) + 15(−3) − 3(7) + 17 = 32 − 45 − 21 + 17 = −17 ≠ 0.
Hence, P4 is not contained in Π.
Remark 6.3.3
Let n1 and n2 be normal vectors of the planes Π1 and Π2 , respectively. Then,
1. Π1 // Π2 ⇐⇒ n1 // n2 ,
2. Π1 ⊥ Π2 ⇐⇒ n1 ⊥ n2 ⇐⇒ n1 · n2 = 0.
[Figures: two parallel planes with parallel normals (Π1 // Π2 ) and two perpendicular planes
with perpendicular normals (Π1 ⊥ Π2 ).]
Example 6.3.5
Find the parametric equation of the intersection line of the two planes:
Π1 : x − y + 2z = 3 and Π2 : 2x + 4y − 2z = −6.
Solution:
We form a nonhomogenous system to solve for the parametric equation of the intersection
line:
[ 1  −1   2 |  3 ]              [ 1  0   1 |  1 ]
[ 2   4  −2 | −6 ]   → ··· →   [ 0  1  −1 | −2 ]
Therefore, the reduced system is:
x + z = 1 and y − z = −2.
Setting z = t, we obtain the parametric equations of the intersection line:
x = 1 − t, y = −2 + t, z = t, t ∈ R.
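As an aside, the same parametric description can be obtained by solving the two plane equations simultaneously with SymPy's linsolve (here the free variable z plays the role of the parameter t):

from sympy import symbols, linsolve

x, y, z = symbols('x y z')
eqs = [x - y + 2*z - 3,          # Pi_1: x - y + 2z = 3
       2*x + 4*y - 2*z + 6]      # Pi_2: 2x + 4y - 2z = -6

print(linsolve(eqs, x, y, z))    # expected: {(1 - z, z - 2, z)}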
Example 6.3.6
Find two equations of two planes whose intersection line is the line L:
Solution:
Further Considerations:
Remark 6.3.4
Let u = (a1 , b1 , c1 ) be a vector associated with the line L : (x − x0 )/a1 = (y − y0 )/b1 = (z − z0 )/c1
and let n = (a2 , b2 , c2 ) be a normal vector of the plane Π : a2 x + b2 y + c2 z + d2 = 0. Then,
1. L ⊥ Π ⇐⇒ u // n ⇐⇒ u × n = 0 ⇐⇒ u = c n for some c ∈ R,
2. L // Π ⇐⇒ u ⊥ n ⇐⇒ u · n = 0.
[Figures: a line L perpendicular to a plane Π (u parallel to the normal n) and a line L parallel
to a plane Π (u perpendicular to the normal n).]
Example 6.3.7
Find a plane that passes through the point (2, 4, −3) and is parallel to the plane −2x + 4y −
5z + 6 = 0.
Solution:
Since the two planes are parallel, we can choose the normal vector of the given plane. That is,
n = (−2, 4, −5). Thus, the equation of the plane is
−2(x − 2) + 4(y − 4) − 5(z + 3) = 0, that is, −2x + 4y − 5z − 27 = 0.
Example 6.3.8
Find a line that passes through the point (−2, 5, −3) and is perpendicular to the plane 2x −
3y + 4z + 7 = 0.
Solution:
The line L is perpendicular to our plane. So, it is parallel to its normal vector, so we can
choose the normal vector as u. That is u = (2, −3, 4) and hence the parametric equation of
L is
x = −2 + 2t, y = 5 − 3t, z = −3 + 4t, t ∈ R.
Example 6.3.9
Show that the plane Π : 6x − 4y + 2z = 0 and the line L : (x − 1)/3 = (−y + 4)/2 = z − 5 intersect
orthogonally. Find the intersection point.
Solution:
A direction vector of L is u = (3, −2, 1) and a normal vector of Π is n = (6, −4, 2) = 2u. Thus
u // n, so L and Π intersect orthogonally. Writing L parametrically as x = 1 + 3t, y = 4 − 2t,
z = 5 + t and substituting into the equation of Π, we get
6(1 + 3t) − 4(4 − 2t) + 2(5 + t) = 0
6 + 18t − 16 + 8t + 10 + 2t = 0
28t = 0
Therefore, we get t = 0. Substituting this in the parametric equations, we get the intersection
point P (1, 4, 5).
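As a final computational aside, substituting the parametric form of L into the equation of Π and solving for t can be done symbolically:

from sympy import symbols, solve

t = symbols('t')
x, y, z = 1 + 3*t, 4 - 2*t, 5 + t    # parametric equations of L
equation = 6*x - 4*y + 2*z           # left-hand side of Pi: 6x - 4y + 2z = 0

sol = solve(equation, t)             # expected: [0]
print(sol)
print((x.subs(t, sol[0]), y.subs(t, sol[0]), z.subs(t, sol[0])))   # intersection point (1, 4, 5)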
6.3. Lines and Planes in R3 149
Exercise 6.3.1
Π1 : x + y + z = 3, Π2 : x + 2y − 2z = 2k, and Π3 : x + k 2 z = 2.
Find all values of k for which the intersection of the three planes is a line. Hint: Any point
on the intersection of the three planes must satisfy the three equations. This gives a
system of three equations, which must have infinitely many solutions in order to describe a line.
Exercise 6.3.2
Exercise 6.3.3
Let L be the line through the points P1 (−4, 2, −6) and P2 (1, 4, 3).
1. Find parametric equations for L.
2. Find two planes whose intersection is L.
Exercise 6.3.4
Find the parametric equations for the line L which passes through the points P (2, −1, 4) and
Q(4, 4, −2). For what value of k is the point R(k + 2, 14, −14) on the line L?
Exercise 6.3.5
Find the point of intersection of the line x = 1−t, y = 1+t, z = t, and the plane 3x+y+3z−1 =
0.
150 Chapter 6. Applications of Vectors in R2 and R3
Exercise 6.3.6
Π1 : x + 2y − z = 2, and Π2 : 3x + 7y + z = 11.
Exercise 6.3.7
Exercise 6.3.8
L : x = 1 + 2t, y = 2 − t, z = 4 + 3t.
The Index
A
addition
  closed, 70
adjoint, 60
algebraic multiplicity, 126
augmented matrix, 2
B
basis, 98
C
Cauchy-Schwarz inequality, 75
characteristic polynomial, 119
cofactor, 49
cofactor expansion, 51
column space, 107
column vector, 11, 14
consistent, 1
coordinates, 98, 99
cross product, 137
D
determinant, 49
  cofactor expansion, 49
determinant of matrix, 51
diagonal matrix, 41
diagonalizable, 126
diagonalization, 125
dimension, 102
distance, 72
  inequality, 75
  triangle inequality, 75
dot product, 14, 72
E
eigenspace, 121
eigenvalue, 119
  algebraic multiplicity, 126
  geometric multiplicity, 126
eigenvector, 119
elementary row operations, 3
Euclidean vector space, 69
F
free variable, 6
G
general solution, 6
general vector space, 81
geometric multiplicity, 126
H
homogeneous system, 1
  nontrivial solution, 7
  trivial solution, 7
I
idempotent, 28
inconsistent, 1
initial point, 69
inner product, 14
M
matrix, 11
  adjoint, 60
  cofactor, 49
  determinant, 49, 51
  diagonal, 41
  difference, 13
  identity, 21, 41
  inverse, 23
  invertible, 23
  nonsingular, 23
  scalar, 41
  similar, 125
  singular, 23
  size, 11
  skew-symmetric, 43
  square, 11
  sum, 13
  symmetric, 43
  trace, 12
O
orthogonal, 133, 137
orthogonal vectors, 133, 137
P
parallelogram, 139
parallelogram equation for vectors, 76
parallelepiped, 139
parametric equations, 6
perpendicular, 133, 137
perpendicular vectors, 133, 137
plane, 143, 144
  general form, 144
  standard form, 144
point-normal equation, 144
product, 13
R
rank, 115
reduced row echelon form, 4
row echelon form, 4
S
scalar multiple of a matrix, 13
scalar multiplication
  closed, 70
similar, 125
size, 11
skew-symmetric matrix, 43
solution
  infinite, 34
  no, 34
  non-trivial, 34
  trivial, 34
  unique, 34
space
  spanned by, 88
spans, 88
standard form, 144
standard unit vector, 74
subspace, 86
  trivial, 86
symmetric matrix, 43
system
  augmented matrix form, 2
  consistent, 1
  homogeneous, 1, 34
T
terminal point, 69
transpose, 12
triangle, 139
triangle inequality for distance, 75
triangle inequality for vectors, 75
triangular
  lower, 42
  upper, 42
triple vector product, 138
trivial solution, 7
V
vector
  column, 11, 14
  coordinates, 99
  cross product, 137
  directional, 143
  distance, 73
  dot product, 72
  inequality, 75
  initial point, 69
  length, 72
  norm, 72