1-Matrices and Determinants

Chapter 7 of CHE 111 covers linear algebra, focusing on matrices, vectors, determinants, and linear systems. It introduces matrix operations, the Gauss elimination method for solving linear equations, and discusses the importance of matrices in various fields. The chapter also addresses matrix multiplication, properties of linear transformations, and the concept of transposition.


CHE 111 – Advanced Engineering Mathematics in

Chemical Engineering
DCHET, COET, MSU-IIT
7. Matrices,
Vectors, Determinants.
Linear Systems
▪ Linear algebra is a fairly extensive subject that covers vectors and
matrices, determinants, systems of linear equations, vector spaces
and linear transformations, eigenvalue problems, and other topics.
As an area of study, it has a broad appeal in that it has many
applications in engineering, physics, geometry, computer science,
economics, and other areas. It also contributes to a deeper
understanding of mathematics itself.
▪ Matrices, which are rectangular arrays of numbers or functions,
and vectors are the main tools of linear algebra. Matrices are
important because they let us express large amounts of data and
functions in an organized and concise form. Furthermore, since
matrices are single objects, we denote them by single letters and
calculate with them directly. All these features have made matrices
and vectors very popular for expressing scientific and
mathematical ideas.
The chapter keeps a good mix between applications and theory.
Chapter 7 is structured as follows:
▪ Sections 7.1 and 7.2 provide an intuitive introduction to matrices
and vectors and their operations, including matrix
multiplication.
▪ The next block of sections, that is, Secs. 7.3–7.5, provides the most
important method for solving systems of linear equations by
the Gauss elimination method. This method is a cornerstone of
linear algebra, and the method itself and variants of it appear in
different areas of mathematics and in many applications. It leads
to a consideration of the behavior of solutions and concepts such
as rank of a matrix, linear independence, and bases.
▪ We shift to determinants, a topic that has declined in
importance, in Secs. 7.6 and 7.7. Section 7.8 covers inverses of
matrices. The chapter ends with vector spaces, inner product
spaces, linear transformations, and composition of linear
transformations. Eigenvalue problems follow in Chap. 8.
7.1 Matrices, Vectors:
Addition and Scalar
Multiplication
ADDITION AND SCALAR MULTIPLICATION
Let us first take a leisurely look at matrices before we formalize our
discussion. A matrix is a rectangular array of numbers or functions
which we will enclose in brackets. For example,

(1)

are matrices. The numbers (or functions) are called entries or, less
commonly, elements of the matrix. The first matrix in (1) has two rows,
which are the horizontal lines of entries. Furthermore, it has three
columns, which are the vertical lines of entries. The second and third
matrices are square matrices, which means that each has as many
rows as columns— 3 and 2, respectively.
ADDITION AND SCALAR MULTIPLICATION

(1)

The entries of the second matrix have two indices, signifying their
location within the matrix. The first index is the number of the row
and the second is the number of the column, so that together the
entry’s position is uniquely identified. For example, 𝑎23 (read a two
three) is in Row 2 and Column 3, etc. The notation is standard and
applies to all matrices, including those that are not square.
ADDITION AND SCALAR MULTIPLICATION
Matrices having just a single row or column are called vectors. Thus,
the fourth matrix in (1) has just one row and is called a row vector.
The last matrix in (1) has just one column and is called a column
vector. Because the goal of the indexing of entries was to uniquely
identify the position of an element within a matrix, one index suffices
for vectors, whether they are row or column vectors. Thus, the third
entry of the row vector in (1) is denoted by 𝑎3 .
ADDITION AND SCALAR MULTIPLICATION
Example 1: Linear Systems, a Major Application of Matrices
We are given a system of linear equations, briefly a linear system,
such as

where 𝑥1 , 𝑥2 , 𝑥3 are the unknowns. We form the coefficient matrix,


call it 𝐀, by listing the coefficients of the unknowns in the position in
which they appear in the linear equations. In the second equation,
there is no unknown 𝑥2 , which means that the coefficient of 𝑥2 is 0 and
hence in matrix 𝐀, 𝑎22 = 0. Thus,
ADDITION AND SCALAR MULTIPLICATION
We form another matrix 𝐀̃ by augmenting 𝐀 with the right sides of the
linear system and call it the augmented matrix of the system.

Since we can go back and recapture the system of linear equations


directly from the augmented matrix 𝐀̃, 𝐀̃ contains all the information
of the system and can thus be used to solve the linear system. This
means that we can just use the augmented matrix to do the
calculations needed to solve the system.

The notation 𝑥1 , 𝑥2 , 𝑥3 for the unknowns is practical but not essential;


we could choose x, y, z or some other letters.
GENERAL CONCEPTS AND NOTATIONS
Let us formalize what we have just discussed. We shall denote
matrices by capital boldface letters 𝐀, 𝐁, 𝐂, or by writing the general
entry in brackets; thus, 𝐀 = [𝑎𝑗𝑘] and so on. By an 𝒎 × 𝒏 matrix
(read 𝑚 by 𝑛 matrix) we mean a matrix with 𝑚 rows and 𝑛 columns—
rows always come first! 𝑚 × 𝑛 is called the size of the matrix. Thus, an
𝐦 × 𝐧 matrix is of the form

(2)
VECTORS
A vector is a matrix with only one row or column. Its entries are
called the components of the vector. We shall denote vectors by
lowercase boldface letters 𝐚, 𝐛, or by its general component in
brackets, 𝐚 = [𝑎𝑗 ] , and so on. Our special vectors in (1) suggest that a
(general) row vector is of the form

A column vector is of the form


ADDITION AND SCALAR MULTIPLICATION OF MATRICES AND VECTORS
What makes matrices and vectors really useful and particularly
suitable for computers is the fact that we can calculate with them
almost as easily as with numbers. Indeed, we now introduce rules for
addition and for scalar multiplication (multiplication by numbers)
that were suggested by practical applications. (Multiplication of
matrices by matrices follows in the next section.) We first need the
concept of equality.

DEFINITION:
ADDITION AND SCALAR MULTIPLICATION OF MATRICES AND VECTORS
Example 2: Equality of Matrices
Let

Then

The following matrices are all different. Explain!


ADDITION AND SCALAR MULTIPLICATION OF MATRICES AND VECTORS
DEFINITION:

As a special case, the sum 𝐚 + 𝐛 of two row vectors or two column


vectors, which must have the same number of components, is
obtained by adding the corresponding components.

Example 3: Addition of Matrices and Vectors


If
ADDITION AND SCALAR MULTIPLICATION OF MATRICES AND VECTORS
𝐀 in Example 2 and our present 𝐀 cannot be added. If 𝐚 = [5 7 2]
and 𝐛 = [−6 2 0], then 𝐚 + 𝐛 = [−1 9 2].

DEFINITION:

Here −1 𝐀 is simply written −𝐀 and is called the negative of 𝐀.


Similarly, −𝑘 𝐀 is written −𝑘𝐀 . Also, 𝐀 + −𝐁 is written 𝐀 − 𝐁 and is
called the difference of 𝐀 and 𝐁 (which must have the same size!).
ADDITION AND SCALAR MULTIPLICATION OF MATRICES AND VECTORS
Example 4: Scalar Multiplication
If

If a matrix 𝐁 shows the distances between some cities in miles, 1.609𝐁


gives these distances in kilometers.
ADDITION AND SCALAR MULTIPLICATION OF MATRICES AND VECTORS
Rules for Matrix Addition and Scalar Multiplication. From the
familiar laws for the addition of numbers we obtain similar laws for
the addition of matrices of the same size 𝑚 × 𝑛, namely,

(3)

Here 𝟎 denotes the zero matrix (of size 𝑚 × 𝑛), that is, the 𝑚 × 𝑛
matrix with all entries zero. If 𝑚 = 1 or 𝑛 = 1, this is a vector, called a
zero vector.
ADDITION AND SCALAR MULTIPLICATION OF MATRICES AND VECTORS
Hence matrix addition is commutative and associative [by (3a) and
(3b)]. Similarly, for scalar multiplication we obtain the rules

(4)
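As a quick numerical check, the rules in (3) and (4) can be verified directly. The sketch below uses NumPy, which is an assumption of ours (the text prescribes no software), and two arbitrarily chosen 2 × 3 matrices.

```python
import numpy as np

# Arbitrary sample matrices (hypothetical; not taken from the text)
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
B = np.array([[-1.0, 0.0, 2.0],
              [ 3.0, 1.0, 0.0]])
Z = np.zeros((2, 3))   # the 2 x 3 zero matrix 0
c, k = 2.0, -3.0       # arbitrary scalars

# Rules (3): addition is commutative and associative,
# 0 is the additive identity, and -A is the additive inverse.
assert np.array_equal(A + B, B + A)
assert np.array_equal((A + B) + Z, A + (B + Z))
assert np.array_equal(A + Z, A)
assert np.array_equal(A + (-A), Z)

# Rules (4): scalar multiplication distributes and associates.
assert np.array_equal(c * (A + B), c * A + c * B)
assert np.array_equal((c + k) * A, c * A + k * A)
assert np.array_equal(c * (k * A), (c * k) * A)
assert np.array_equal(1 * A, A)
```

Any matrices of the same size would do; the rules hold entrywise because they hold for numbers.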
7.2 Matrix
Multiplication
MATRIX MULTIPLICATION
Matrix multiplication means that one multiplies matrices by matrices.
Its definition is standard, but it looks artificial. Thus you have to study
matrix multiplication carefully and multiply a few matrices together
for practice until you are comfortable with it. Here then is the
definition.

DEFINITION:
MATRIX MULTIPLICATION
The condition 𝑟 = 𝑛 means that the second factor, 𝐁, must have as
many rows as the first factor has columns, namely 𝑛. A diagram of
sizes that shows when matrix multiplication is possible is as follows:

The entry 𝑐𝑗𝑘 in (1) is obtained by multiplying each entry in the 𝑗th
row of 𝐀 by the corresponding entry in the 𝑘th column of 𝐁 and then
adding these 𝑛 products. For instance, 𝑐21 = 𝑎21 𝑏11 + 𝑎22 𝑏21 + ⋯ +
𝑎2𝑛 𝑏𝑛1 and so on. One calls this briefly a multiplication of rows into
columns. For 𝑛 = 3, this is illustrated by
MATRIX MULTIPLICATION
Let us illustrate the main points of matrix multiplication by some
examples. Note that matrix multiplication also includes multiplying a
matrix by a vector, since, after all, a vector is a special matrix.

Example 1: Matrix Multiplication

Here

and so on. The entry in the box is

The product 𝐁𝐀 is not defined.
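The rows-into-columns rule translates directly into code. Below is a minimal sketch of our own (NumPy assumed; the function name and sample matrices are ours, not the text's) that mirrors the formula 𝑐𝑗𝑘 = 𝑎𝑗1 𝑏1𝑘 + ⋯ + 𝑎𝑗𝑛 𝑏𝑛𝑘.

```python
import numpy as np

def matmul_rows_into_columns(A, B):
    """C = AB, with c_jk obtained by multiplying the j-th row of A
    into the k-th column of B and adding the n products."""
    m, n = A.shape
    n2, p = B.shape
    if n != n2:                 # B must have as many rows as A has columns
        raise ValueError("product undefined: sizes do not match")
    C = np.zeros((m, p))
    for j in range(m):          # j-th row of A
        for k in range(p):      # k-th column of B
            for l in range(n):  # multiply rows into columns, then add
                C[j, k] += A[j, l] * B[l, k]
    return C

A = np.array([[3.0, 5.0, -1.0],
              [4.0, 0.0,  2.0]])   # 2 x 3
B = np.array([[2.0, -2.0],
              [5.0,  0.0],
              [9.0, -4.0]])        # 3 x 2, so AB is defined and 2 x 2
C = matmul_rows_into_columns(A, B)
assert np.allclose(C, A @ B)       # agrees with NumPy's built-in product
```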


MATRIX MULTIPLICATION
Example 2: Multiplication of a Matrix and a Vector

whereas

is undefined.
MATRIX MULTIPLICATION
Example 3: Products of Row and Column Vectors

CAUTION! Matrix Multiplication Is Not Commutative, 𝐀𝐁 ≠ 𝐁𝐀


in General
This is illustrated by Examples 1 and 2, where one of the two products
is not even defined, and by Example 3, where the two products have
different sizes. But it also holds for square matrices. For instance,

It is interesting that this also shows that 𝐀𝐁 = 𝟎 does not necessarily


imply 𝐁𝐀 = 𝟎 or 𝐀 = 𝟎 or 𝐁 = 𝟎.
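Both failures can be seen in a two-line experiment. The 2 × 2 matrices below are hypothetical examples of ours (not the ones on the slide), chosen so that 𝐀𝐁 = 𝟎 even though 𝐀 ≠ 𝟎, 𝐁 ≠ 𝟎, and 𝐁𝐀 ≠ 𝟎; NumPy is assumed.

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[1.0, 0.0],
              [0.0, 0.0]])

AB = A @ B
BA = B @ A
assert np.array_equal(AB, np.zeros((2, 2)))      # AB = 0 ...
assert not np.array_equal(BA, np.zeros((2, 2)))  # ... but BA != 0
assert not np.array_equal(AB, BA)                # hence AB != BA
```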
MATRIX MULTIPLICATION
Our examples show that in matrix products the order of factors must
always be observed very carefully. Otherwise, matrix multiplication
satisfies rules similar to those for numbers, namely,

(2)

provided 𝐀, 𝐁, and 𝐂 are such that the expressions on the left are
defined; here, 𝑘 is any scalar. (2b) is called the associative law. (2c)
and (2d) are called the distributive laws.
MATRIX MULTIPLICATION
Since matrix multiplication is a multiplication of rows into columns,
we can write the defining formula (1) more compactly as

(3)

where 𝐚𝑗 is the 𝑗th row vector of 𝐀 and 𝐛𝑘 is the 𝑘th column vector of
𝐁, so that in agreement with (1),
MATRIX MULTIPLICATION
Example 5: Product in Terms of Row and Column Vectors

If 𝐀 = [𝑎𝑗𝑘] is of size 3 × 3 and 𝐁 = [𝑏𝑗𝑘] is of size 3 × 4, then

(4)
MATRIX MULTIPLICATION
Example 6: Computing Products Columnwise by (5)
To obtain

from (5), calculate the columns

of 𝐀𝐁 and then write them as a single matrix, as shown in the first


formula on the right.
MOTIVATION OF MULTIPLICATION BY LINEAR TRANSFORMATIONS
Let us now motivate the “unnatural” matrix multiplication by its use in
linear transformations. For 𝑛 = 2 variables these transformations are
of the form

(6*)

and suffice to explain the idea. (For general 𝑛 they will be discussed
in Sec. 7.9.) For instance, (6*) may relate an 𝑥1 𝑥2 -coordinate system to
a 𝑦1 𝑦2 -coordinate system in the plane. In vectorial form we can write
(6*) as

(6)

Now suppose further that the 𝑥1 𝑥2 -system is related to a 𝑤1 𝑤2 -system


by another linear transformation, say,

(7)
MOTIVATION OF MULTIPLICATION BY LINEAR TRANSFORMATIONS
Then the 𝑦1 𝑦2 -system is related to the 𝑤1 𝑤2 -system indirectly via the
𝑥1 𝑥2 -system, and we wish to express this relation directly. Substitution
will show that this direct relation is a linear transformation, too, say,

(8)

Indeed, substituting (7) into (6), we obtain


MOTIVATION OF MULTIPLICATION BY LINEAR TRANSFORMATIONS
Comparing this with (8), we see that

This proves that 𝐂 = 𝐀𝐁 with the product defined as in (1). For larger
matrix sizes the idea and result are exactly the same. Only the
number of variables changes. We then have 𝑚 variables 𝑦 and 𝑛
variables 𝑥 and 𝑝 variables 𝑤. The matrices 𝐀, 𝐁, and 𝐂 = 𝐀𝐁 then
have sizes 𝑚 × 𝑛, 𝑛 × 𝑝 and 𝑚 × 𝑝, respectively. And the requirement
that 𝐂 be the product 𝐀𝐁 leads to formula (1) in its general form. This
motivates matrix multiplication.
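The derivation above is easy to replay numerically: applying 𝐁 and then 𝐀 to a vector 𝐰 gives the same result as applying 𝐂 = 𝐀𝐁 once. The matrices and vector below are hypothetical samples of ours; NumPy assumed.

```python
import numpy as np

A = np.array([[1.0, 2.0],   # y = A x  (x1x2-system to y1y2-system)
              [3.0, 4.0]])
B = np.array([[0.0, 1.0],   # x = B w  (w1w2-system to x1x2-system)
              [1.0, 1.0]])
C = A @ B                   # direct transformation y = C w, with C = AB

w = np.array([5.0, -2.0])
x = B @ w                   # first transformation
y_two_steps = A @ x         # then the second transformation
y_direct = C @ w            # single combined transformation
assert np.allclose(y_two_steps, y_direct)
```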
TRANSPOSITION
We obtain the transpose of a matrix by writing its rows as columns (or
equivalently its columns as rows). This also applies to the transpose of
vectors. Thus, a row vector becomes a column vector and vice versa.
In addition, for square matrices, we can also “reflect” the elements
along the main diagonal, that is, interchange entries that are
symmetrically positioned with respect to the main diagonal to obtain
the transpose. Hence 𝑎12 becomes 𝑎21 , 𝑎31 becomes 𝑎13 and so forth.
Example 7 illustrates these ideas. Also note that, if 𝐀 is the given
matrix, then we denote its transpose by 𝐀T .

Example 7: Transposition of Matrices and Vectors


If
TRANSPOSITION
A little more compactly, we can write

Furthermore, the transpose [6 2 3]ᵀ of the row vector [6 2 3] is the
column vector with entries 6, 2, 3.
TRANSPOSITION
DEFINITION:

Transposition gives us a choice in that we can work either with the


matrix or its transpose, whichever is more convenient.
TRANSPOSITION
Rules for transposition are

CAUTION! Note that in (10d) the transposed matrices are in


reversed order.
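The four transposition rules (assuming the numbering (10a)–(10d) used above) can be checked numerically on hypothetical sample matrices; NumPy assumed, with sizes chosen so the product 𝐀𝐂 is defined.

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])   # 2 x 3
B = np.array([[0.0, 1.0, -1.0],
              [2.0, 0.0,  5.0]])  # 2 x 3, same size as A
C = np.array([[1.0, 0.0],
              [2.0, 1.0],
              [0.0, 3.0]])        # 3 x 2, so AC is defined

assert np.array_equal(A.T.T, A)               # (10a): (A^T)^T = A
assert np.array_equal((A + B).T, A.T + B.T)   # (10b): (A+B)^T = A^T + B^T
assert np.array_equal((5 * A).T, 5 * A.T)     # (10c): (cA)^T = c A^T
assert np.array_equal((A @ C).T, C.T @ A.T)   # (10d): note the reversed order!
```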
SPECIAL MATRICES
Symmetric and Skew-Symmetric Matrices. Transposition gives
rise to two useful classes of matrices. Symmetric matrices are square
matrices whose transpose equals the matrix itself. Skew-symmetric
matrices are square matrices whose transpose equals minus the
matrix. Both cases are defined in (11) and illustrated by Example 8.

Example 8: Symmetric and Skew-Symmetric Matrices


SPECIAL MATRICES
Triangular Matrices. Upper triangular matrices are square
matrices that can have nonzero entries only on and above the main
diagonal, whereas any entry below the diagonal must be zero.
Similarly, lower triangular matrices can have nonzero entries only
on and below the main diagonal. Any entry on the main diagonal of a
triangular matrix may be zero or not.

Example 9: Upper and Lower Triangular Matrices


SPECIAL MATRICES
Diagonal Matrices. These are square matrices that can have
nonzero entries only on the main diagonal. Any entry above or below
the main diagonal must be zero. If all the diagonal entries of a
diagonal matrix 𝐒 are equal, say, 𝑐, we call 𝐒 a scalar matrix because
multiplication of any square matrix 𝐀 of the same size by 𝐒 has the
same effect as the multiplication by a scalar, that is,

(12)

In particular, a scalar matrix whose entries on the main diagonal are
all 1 is called a unit matrix (or identity matrix) and is denoted by
𝐈𝑛 or simply by 𝐈. For 𝐈, formula (12) becomes

(13)
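A short check of (12) and (13) on a hypothetical 2 × 2 matrix (NumPy assumed):

```python
import numpy as np

A = np.array([[2.0, -1.0],
              [0.0,  3.0]])   # any 2 x 2 matrix
c = 5.0
S = c * np.eye(2)             # scalar matrix: c on the main diagonal, 0 elsewhere
I = np.eye(2)                 # the 2 x 2 unit (identity) matrix

assert np.array_equal(A @ S, c * A)   # (12): AS = cA
assert np.array_equal(S @ A, c * A)   #       SA = cA
assert np.array_equal(A @ I, A)       # (13): AI = A
assert np.array_equal(I @ A, A)       #       IA = A
```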
SPECIAL MATRICES
Example 10: Diagonal Matrix 𝐃. Scalar Matrix 𝐒. Unit Matrix 𝐈
7.3 Linear Systems
of Equations.
Gauss Elimination
LINEAR SYSTEM, COEFFICIENT MATRIX, AUGMENTED MATRIX
A linear system of 𝒎 equations in 𝒏 unknowns 𝑥1 , 𝑥2 , ⋯ , 𝑥𝑛 is a set
of equations of the form

(1)

The system is called linear because each variable 𝑥𝑗 appears in the


first power only, just as in the equation of a straight line. 𝑎11 , ⋯ , 𝑎𝑚𝑛
are given numbers, called the coefficients of the system. 𝑏1 , ⋯ , 𝑏𝑚 on
the right are also given numbers. If all the 𝑏𝑗 are zero, then (1) is
called a homogeneous system. If at least one is not zero, then (1) is
called a nonhomogeneous system.
A solution of (1) is a set of numbers 𝑥1 , ⋯ , 𝑥𝑛 that satisfies all the 𝑚
equations. A solution vector of (1) is a vector 𝐱 whose components
form a solution of (1). If the system (1) is homogeneous, it always has
at least the trivial solution 𝑥1 = 0, ⋯ , 𝑥𝑛 = 0.
LINEAR SYSTEM, COEFFICIENT MATRIX, AUGMENTED MATRIX
Matrix Form of the Linear System (1).
From the definition of matrix multiplication, we see that the 𝑚
equations of (1) may be written as a single vector equation

(2)

where the coefficient matrix 𝐀 = [𝑎𝑗𝑘] is the 𝑚 × 𝑛 matrix

are column vectors. We assume that the coefficients 𝑎𝑗𝑘 are not all
zero, so that 𝐀 is not a zero matrix. Note that 𝐱 has 𝑛 components,
whereas 𝐛 has 𝑚 components.
LINEAR SYSTEM, COEFFICIENT MATRIX, AUGMENTED MATRIX
The matrix

is called the augmented matrix of the system (1). The dashed


vertical line could be omitted, as we shall do later. It is merely a
reminder that the last column of 𝐀̃ did not come from matrix 𝐀 but
came from vector 𝐛. Thus, we augmented the matrix 𝐀.
Note that the augmented matrix 𝐀̃ determines the system (1)
completely because it contains all the given numbers appearing in
(1).
LINEAR SYSTEM, COEFFICIENT MATRIX, AUGMENTED MATRIX
Example 1: Geometric Interpretation. Existence and Uniqueness of
Solutions
If 𝑚 = 𝑛 = 2, we have two equations in two unknowns 𝑥1 , 𝑥2

If we interpret 𝑥1 , 𝑥2 as coordinates in the 𝑥1 , 𝑥2 -plane, then each of


the two equations represents a straight line, and 𝑥1 , 𝑥2 is a solution if
and only if the point 𝑃 with coordinates 𝑥1 , 𝑥2 lies on both lines. Hence
there are three possible cases:
(a) Precisely one solution if the lines intersect
(b) Infinitely many solutions if the lines coincide
(c) No solution if the lines are parallel
LINEAR SYSTEM, COEFFICIENT MATRIX, AUGMENTED MATRIX
For instance,
GAUSS ELIMINATION AND BACK SUBSTITUTION
The Gauss elimination method can be motivated as follows. Consider
a linear system that is in triangular form (in full, upper triangular
form) such as

(Triangular means that all the nonzero entries of the corresponding
coefficient matrix lie on and above the main diagonal and form an
upside-down 90° triangle.) Then we can solve the system by back
substitution, that is,
GAUSS ELIMINATION AND BACK SUBSTITUTION
For instance, let the given system be

We eliminate 𝑥1 from the second equation, to get a triangular system.


For this we add twice the first equation to the second, and we do the
same operation on the rows of the augmented matrix. This gives

This is the Gauss elimination (for 2 equations in 2 unknowns) giving


the triangular form, from which back substitution now yields 𝑥2 = −2
and 𝑥1 = 6, as before.
Since a linear system is completely determined by its augmented
matrix, Gauss elimination can be done by merely considering the
matrices, as we have just indicated.
GAUSS ELIMINATION AND BACK SUBSTITUTION
Example 2: Gauss Elimination.
Solve the linear system

Solution by Gauss Elimination.


This system could be solved rather quickly by noticing its particular
form. But this is not the point. The point is that the Gauss elimination is
systematic and will work in general, also for large systems. We apply
it to our system and then do back substitution. As indicated, let us
write the augmented matrix of the system first and then the system
itself:
GAUSS ELIMINATION AND BACK SUBSTITUTION

Step 1. Elimination of 𝒙𝟏
Call the first row of 𝐀 the pivot row and the first equation the pivot
equation. Call the coefficient 1 of its 𝑥1 -term the pivot in this step.
Use this equation to eliminate 𝑥1 (get rid of 𝑥1 ) in the other equations.
For this, do:
Add 1 times the pivot equation to the second equation.
Add −20 times the pivot equation to the fourth equation.
GAUSS ELIMINATION AND BACK SUBSTITUTION
This corresponds to row operations on the augmented matrix as
indicated in BLUE behind the new matrix in (3). So the operations are
performed on the preceding matrix. The result is

(3)

Step 2. Elimination of 𝒙𝟐
The first equation remains as it is. We want the new second equation
to serve as the next pivot equation. But since it has no 𝑥2 -term (in fact,
it is 0 = 0), we must first change the order of the equations and the
corresponding rows of the new matrix. We put 0 = 0 at the end and
move the third equation and the fourth equation one place up. This is
called partial pivoting (as opposed to the rarely used total pivoting,
in which the order of the unknowns is also changed). It gives
GAUSS ELIMINATION AND BACK SUBSTITUTION

To eliminate 𝑥2 , do:
Add −3 times the pivot equation to the third equation.

The result is

(4)
GAUSS ELIMINATION AND BACK SUBSTITUTION
Back Substitution. Determination of 𝒙𝟑 , 𝒙𝟐 , 𝒙𝟏 (in this order)
Working backward from the last to the first equation of this
“triangular” system (4), we can now readily find 𝑥3 , then 𝑥2 , and then
𝑥1 :

This is the answer to our problem. The solution is unique.
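The whole procedure of Example 2 (forward elimination with partial pivoting, then back substitution) can be sketched as one routine. This is our own minimal implementation for nonsingular square systems, not the text's notation; NumPy is assumed and the sample system is hypothetical.

```python
import numpy as np

def gauss_solve(A, b):
    """Gauss elimination with partial pivoting followed by back
    substitution; a teaching sketch for nonsingular square A only."""
    n = len(b)
    Ab = np.hstack([A.astype(float), b.astype(float).reshape(-1, 1)])  # [A | b]
    # Forward elimination: reduce to (upper) triangular form.
    for k in range(n - 1):
        p = k + np.argmax(np.abs(Ab[k:, k]))  # partial pivoting: largest pivot
        Ab[[k, p]] = Ab[[p, k]]               # row interchange
        for i in range(k + 1, n):
            m = Ab[i, k] / Ab[k, k]
            Ab[i, k:] -= m * Ab[k, k:]        # add -m times pivot row to row i
    # Back substitution: determine x_n, ..., x_1 in this order.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (Ab[i, -1] - Ab[i, i + 1:n] @ x[i + 1:]) / Ab[i, i]
    return x

A = np.array([[1.0,  1.0, 1.0],
              [2.0,  3.0, 1.0],
              [1.0, -1.0, 2.0]])
b = np.array([6.0, 11.0, 5.0])
x = gauss_solve(A, b)
assert np.allclose(x, [1.0, 2.0, 3.0])   # the unique solution of this system
```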
ELEMENTARY ROW OPERATIONS. ROW-EQUIVALENT SYSTEMS
Elementary Row Operations for Matrices:
Interchange of two rows
Addition of a constant multiple of one row to another row
Multiplication of a row by a nonzero constant c

CAUTION! These operations are for rows, not for columns! They
correspond to the following

Elementary Operations for Equations:


Interchange of two equations
Addition of a constant multiple of one equation to another
equation
Multiplication of an equation by a nonzero constant c
ELEMENTARY ROW OPERATIONS. ROW-EQUIVALENT SYSTEMS
We now call a linear system 𝑆1 row-equivalent to a linear system 𝑆2 if
𝑆1 can be obtained from 𝑆2 by (finitely many!) row operations. This
justifies Gauss elimination and establishes the following result.

Theorem 1:

Because of this theorem, systems having the same solution sets are
often called equivalent systems. But note well that we are dealing with
row operations. No column operations on the augmented matrix are
permitted in this context because they would generally alter the
solution set.
GAUSS ELIMINATION: THE THREE POSSIBLE CASES OF SYSTEMS
Example 3: Gauss Elimination if Infinitely Many Solutions Exist
Solve the following linear system of three equations in four unknowns
whose augmented matrix is

(5)

Thus
GAUSS ELIMINATION: THE THREE POSSIBLE CASES OF SYSTEMS
Solution.
As in the previous example, we circle pivots and box terms of
equations and corresponding entries to be eliminated. We indicate
the operations in terms of equations and operate on both equations
and matrices.
Step 1. Elimination of 𝒙𝟏 from the second and third equations by
adding

This gives the following, in which the pivot of the next step is circled.

(6)
GAUSS ELIMINATION: THE THREE POSSIBLE CASES OF SYSTEMS
Step 2. Elimination of 𝒙𝟐 from the third equation of (6) by adding
times

This gives

(7)

Back Substitution.
From the second equation, 𝑥2 = 1 − 𝑥3 + 4𝑥4 . From this and the first
equation, 𝑥1 = 2 − 𝑥4 . Since 𝑥3 and 𝑥4 remain arbitrary, we have
infinitely many solutions. If we choose a value of 𝑥3 and a value of 𝑥4 ,
then the corresponding values of 𝑥1 and 𝑥2 are uniquely determined.
GAUSS ELIMINATION: THE THREE POSSIBLE CASES OF SYSTEMS
Example 4: Gauss Elimination if no Solution Exists
What will happen if we apply the Gauss elimination to a linear system
that has no solution? The answer is that in this case the method will
show this fact by producing a contradiction. For instance, consider

Step 1. Elimination of 𝒙𝟏 from the second and third equations by


adding
GAUSS ELIMINATION: THE THREE POSSIBLE CASES OF SYSTEMS
This gives

Step 2. Elimination of 𝒙𝟐 from the third equation gives

The false statement 0 = 12 shows that the system has no solution.


ROW ECHELON FORM AND INFORMATION FROM IT
At the end of the Gauss elimination the form of the coefficient matrix,
the augmented matrix, and the system itself are called the row
echelon form. In it, rows of zeros, if present, are the last rows, and, in
each nonzero row, the leftmost nonzero entry is farther to the right
than in the previous row. For instance, in Example 4 the coefficient
matrix and its augmented matrix in row echelon form are

(8)

The original system of 𝑚 equations in 𝑛 unknowns has augmented
matrix [𝐀 | 𝐛]. This is to be row reduced to matrix [𝐑 | 𝐟]. The two
systems 𝐀𝐱 = 𝐛 and 𝐑𝐱 = 𝐟 are equivalent: if either one has a solution,
so does the other, and the solutions are identical.
ROW ECHELON FORM AND INFORMATION FROM IT
At the end of the Gauss elimination (before the back substitution), the
row echelon form of the augmented matrix will be

(9)

Here, 𝑟 ≦ 𝑚, 𝑟11 ≠ 0, and all entries in the blue triangle and blue
rectangle are zero.
7.7 Determinants.
Cramer’s Rule
DETERMINANTS. CRAMER’S RULE
A determinant of order 𝑛 is a scalar associated with an 𝑛 × 𝑛 (hence
square!) matrix 𝐀 = [𝑎𝑗𝑘] and is denoted by

(1)

For 𝑛 = 1, this determinant is defined by

(2)
DETERMINANTS. CRAMER’S RULE
For 𝑛 ≧ 2, it is defined by

(3a)

or

(3b)

Here,

and 𝑀𝑗𝑘 is a determinant of order 𝑛 − 1, namely, the determinant of


the submatrix of 𝐀 obtained from 𝐀 by omitting the row and column of
the entry 𝑎𝑗𝑘 , that is, the 𝑗th row and the 𝑘th column.
DETERMINANTS. CRAMER’S RULE
A determinant of third order can be defined by

Note the following. The signs on the right are + − + . Each of the
three terms on the right is an entry in the first column of 𝐷 times its
minor, that is, the second-order determinant obtained from 𝐷 by
deleting the row and column of that entry; thus, for 𝑎11 delete the first
row and first column, and so on.
DETERMINANTS. CRAMER’S RULE
Example 1: Expansions of a Third-Order Determinant

This is the expansion by the first row. The expansion by the third
column is

Verify that the other four expansions also give the value −12.
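The expansion by the first row translates into a short recursive routine. This is a sketch of ours (NumPy assumed); the 3 × 3 sample matrix is our own choice, since the matrix of Example 1 is not reproduced in this copy, and happens to have determinant −12 as well.

```python
import numpy as np

def det_cofactor(A):
    """Determinant by cofactor expansion along the first row.
    Cost grows like n!, so this is for understanding, not computing."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    if n == 1:
        return A[0, 0]                 # definition (2) for n = 1
    total = 0.0
    for k in range(n):
        # M_1k: delete row 1 and column k to get the minor's submatrix
        minor = np.delete(np.delete(A, 0, axis=0), k, axis=1)
        total += (-1) ** k * A[0, k] * det_cofactor(minor)  # + - + ... signs
    return total

A = [[ 1, 3, 0],
     [ 2, 6, 4],
     [-1, 0, 2]]
assert det_cofactor(A) == -12.0
assert np.isclose(det_cofactor(A), np.linalg.det(np.array(A, dtype=float)))
```

Expanding along any other row or column (with the checkerboard signs) gives the same value.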
DETERMINANTS. CRAMER’S RULE
Checkerboard pattern

Example 2: Determinant of a Triangular Matrix


CRAMER’S RULE
CRAMER’S RULE
Example 2: Cramer’s Rule for Two Equations
If

then
CRAMER’S RULE
Cramer’s Rule for Linear Systems of Three Equations

is

and
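Cramer's rule is mechanical to code: 𝑥𝑘 = 𝐷𝑘 / 𝐷, where 𝐷𝑘 is the determinant obtained by replacing the 𝑘th column of the coefficient matrix by the right side 𝐛. A sketch of ours (NumPy assumed, with np.linalg.det supplying the determinants; the sample system is hypothetical):

```python
import numpy as np

def cramer_solve(A, b):
    """Solve Ax = b by Cramer's rule: x_k = D_k / D.
    Valid only for square systems with D = det(A) != 0."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    D = np.linalg.det(A)
    if np.isclose(D, 0.0):
        raise ValueError("Cramer's rule requires det(A) != 0")
    x = np.empty(len(b))
    for k in range(len(b)):
        Ak = A.copy()
        Ak[:, k] = b                  # D_k: k-th column replaced by b
        x[k] = np.linalg.det(Ak) / D
    return x

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])
x = cramer_solve(A, b)
assert np.allclose(x, [1.0, 3.0])
```

For large 𝑛 this is far more expensive than Gauss elimination; the rule's value is mainly theoretical.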
7.8 Inverse of a
Matrix.
Gauss–Jordan
Elimination
INVERSE OF A MATRIX. GAUSS–JORDAN ELIMINATION
In this section we consider square matrices exclusively.
The inverse of an 𝑛 × 𝑛 matrix 𝐀 = [𝑎𝑗𝑘] is denoted by 𝐀−𝟏 and is an
𝑛 × 𝑛 matrix such that

(1)

where 𝐈 is the 𝑛 × 𝑛 unit matrix.


If 𝐀 has an inverse, then 𝐀 is called a nonsingular matrix. If 𝐀 has
no inverse, then 𝐀 is called a singular matrix.
If 𝐀 has an inverse, the inverse is unique.
Indeed, if both 𝐁 and 𝐂 are inverses of 𝐀, then 𝐀𝐁 = 𝐈 and 𝐂𝐀 = 𝐈,
so that we obtain the uniqueness from
INVERSE OF A MATRIX. GAUSS–JORDAN ELIMINATION
DETERMINATION OF THE INVERSE BY THE GAUSS–JORDAN METHOD
To actually determine the inverse 𝐀−𝟏 of a nonsingular 𝑛 × 𝑛 matrix 𝐀,
we can use a variant of the Gauss elimination (Sec. 7.3), called the
Gauss–Jordan elimination.
DETERMINATION OF THE INVERSE BY THE GAUSS–JORDAN METHOD

Example 1: Finding the Inverse of a Matrix by Gauss–Jordan


Elimination
Determine the inverse 𝐀−𝟏 of

Solution.
We apply the Gauss elimination (Sec. 7.3) to the following matrix,
where BLUE always refers to the previous matrix.
DETERMINATION OF THE INVERSE BY THE GAUSS–JORDAN METHOD
This is 𝐔 𝐇 as produced by the Gauss elimination. Now follow the
additional Gauss–Jordan steps, reducing U to I, that is, to diagonal
form with entries 1 on the main diagonal.
DETERMINATION OF THE INVERSE BY THE GAUSS–JORDAN METHOD
The last three columns constitute 𝐀−𝟏 . Check:

Hence 𝐀𝐀−𝟏 = 𝐈. Similarly, 𝐀−𝟏 𝐀 = 𝐈.
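The Gauss–Jordan procedure row-reduces [𝐀 | 𝐈] until the left half becomes 𝐈; the right half is then 𝐀−𝟏. Below is a minimal sketch of that idea in our own code (NumPy assumed; the 2 × 2 sample matrix is hypothetical, not the matrix of Example 1).

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Row-reduce [A | I] to [I | A^-1] with partial pivoting."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    M = np.hstack([A, np.eye(n)])             # the n x 2n matrix [A | I]
    for k in range(n):
        p = k + np.argmax(np.abs(M[k:, k]))   # partial pivoting
        if np.isclose(M[p, k], 0.0):
            raise ValueError("matrix is singular; no inverse exists")
        M[[k, p]] = M[[p, k]]                 # row interchange
        M[k] /= M[k, k]                       # scale pivot row: pivot becomes 1
        for i in range(n):
            if i != k:
                M[i] -= M[i, k] * M[k]        # clear column k above and below
    return M[:, n:]                           # right half is A^-1

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])
Ainv = gauss_jordan_inverse(A)
assert np.allclose(Ainv, [[3.0, -1.0], [-5.0, 2.0]])
assert np.allclose(A @ Ainv, np.eye(2))       # A A^-1 = I
assert np.allclose(Ainv @ A, np.eye(2))       # A^-1 A = I
```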


INVERSE OF A MATRIX BY DETERMINANTS
INVERSE OF A MATRIX BY DETERMINANTS
Example 2: Inverse of a Matrix by Determinants

Example 3: Further Illustration of Theorem 2


Using (4), find the inverse of
INVERSE OF A MATRIX BY DETERMINANTS
Solution.
We obtain det 𝐀 = −1(−7) − 1(13) + 2(8) = 10 and in (4),

so that by (4), in agreement with Example 1,


METHOD OF INVERSE
From the compact form:
𝐀𝐱 = 𝐛

Multiplying both sides with 𝐀−𝟏 :


𝐀−𝟏 𝐀𝐱 = 𝐀−𝟏 𝐛

Note that 𝐀−𝟏 𝐀 produces the identity matrix 𝐈.

Then
𝐱 = 𝐀−𝟏 𝐛
Therefore, if we can find 𝐀−𝟏 , the solution for 𝐱 can be obtained.
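The steps above can be replayed numerically. Note that in numerical practice one rarely forms 𝐀−𝟏 explicitly; a direct solve is cheaper and more accurate. A sketch with a hypothetical system, NumPy assumed:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])

x = np.linalg.inv(A) @ b      # x = A^-1 b, as derived above
assert np.allclose(A @ x, b)  # x indeed satisfies Ax = b
assert np.allclose(x, [1.0, 3.0])

# Preferred in numerical work: solve Ax = b directly, without forming A^-1.
x2 = np.linalg.solve(A, b)
assert np.allclose(x, x2)
```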
Thank You!

Engr. Karl C. Ondoy, M.Sc.


Assistant Professor III
Department of Chemical Engineering and Technology
MSU-Iligan Institute of Technology
Iligan City, Philippines
