
Topic 1: Matrices and Systems of Linear Equations.

Let us start with a review of some linear algebra concepts we have already
learned, such as matrices, determinants, etc. Also, we shall review the method of
solving systems of linear equations, as well as the geometric interpretation of the
solution set.

1. Matrices and Determinants. Rank of a matrix. Matrix multiplication and inverse of a matrix

An n × m matrix is a set of n × m real numbers arranged in n rows and m
columns. A matrix A of order n × m is represented as follows

        [ a11  a12  ···  a1m ]
        [ a21  a22  ···  a2m ]
    A = [  ..   ..   ··   .. ]
        [ an1  an2  ···  anm ]

that is, the element aij is the entry in the i-th row and the j-th column. We write
A ∈ Mn×m . Sometimes, we denote the matrix that has elements aij using a more
compact notation: A = (aij)i=1,...,n; j=1,...,m, or simply A = (aij)ij.
The elements aii are called the diagonal of A.
When a matrix has n rows and m columns we say that the matrix is of
dimension (size) n × m. If the number of rows and columns is the same (i.e., if
m = n), the matrix is called a square matrix.
The square matrix

    [ 1  0  ...  0 ]
    [ 0  1  ...  0 ]
    [ ..  ..  ..  .. ]
    [ 0  0  ...  1 ]

is known as the identity matrix of dimension n, and is denoted by In .
Definition. Given a matrix

        [ a11  a12  ···  a1m ]
        [ a21  a22  ···  a2m ]
    A = [  ..   ..   ··   .. ]  ∈ Mn×m
        [ an1  an2  ···  anm ]

we define the transpose matrix At ∈ Mm×n (also written A∗ ) of A, as the matrix
whose row i is equal to column i of A. That is,

              [ a11  a21  ···  an1 ]
              [ a12  a22  ···  an2 ]
    At = A∗ = [  ..   ..   ··   .. ]  ∈ Mm×n
              [ a1m  a2m  ···  anm ]

Definition. Given a square matrix

        [ a11  a12  ···  a1n ]
        [ a21  a22  ···  a2n ]
    A = [  ..   ..   ··   .. ]  ∈ Mn×n
        [ an1  an2  ···  ann ]

the trace of A is the following real number

    trace(A) = a11 + a22 + · · · + ann
1.1. Addition and multiplication by scalars. Matrices of the same dimension
can be added. The sum is obtained by adding the elements that are located in
the same row and column of the two matrices. Thus, if A = (aij)i=1,...,n; j=1,...,m
and B = (bij)i=1,...,n; j=1,...,m (as you see, the matrices have the same dimension),
then

    A + B = (aij + bij)i=1,...,n; j=1,...,m
Example.

        [ 2  1  3 ]        [ 1   4   0 ]
    A = [ 9  6  5 ]    B = [ 5  −2  −3 ]

            [ 2+1  1+4      3+0    ]   [  3  5  3 ]
    A + B = [ 9+5  6+(−2)  5+(−3) ] = [ 14  4  2 ]
Multiplication of a matrix by a real number is defined similarly. If
A = (aij)i=1,...,n; j=1,...,m and λ ∈ R, then

    λA = (λaij)i=1,...,n; j=1,...,m

Example. Let λ be any (real) number and consider the matrix

        [ 2  1  3 ]
    A = [ 9  6  5 ]

then

         [ λ·2  λ·1  λ·3 ]
    λA = [ λ·9  λ·6  λ·5 ]

If we fix λ = 7,

         [ 7·2  7·1  7·3 ]   [ 14   7  21 ]
    7A = [ 7·9  7·6  7·5 ] = [ 63  42  35 ]
The matrix addition and multiplication by scalars satisfy the following properties,
which can be easily derived from the above definitions:
Proposition 1. Let A, B and C be matrices of the same dimension and let α
and β be any real numbers. Then,
(1) A + B = B + A (commutativity).
(2) A + (B + C) = (A + B) + C (associativity).
(3) α(A + B) = αA + αB.
(4) (α + β)A = αA + βA.
(5) α(βA) = (αβ)A.
(6) (A + B)t = At + Bt.
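Properties (1)–(6) can be spot-checked numerically. Here is a minimal sketch using NumPy; the matrices A, B, C and the scalars are our own arbitrary choices, not from the text:

```python
import numpy as np

# Arbitrary matrices of the same dimension and arbitrary scalars,
# chosen only to illustrate Proposition 1.
A = np.array([[2, 1, 3], [9, 6, 5]])
B = np.array([[1, 4, 0], [5, -2, -3]])
C = np.array([[0, 1, 1], [2, 0, 3]])
alpha, beta = 3, -2

assert (A + B == B + A).all()                              # (1) commutativity
assert (A + (B + C) == (A + B) + C).all()                  # (2) associativity
assert (alpha * (A + B) == alpha * A + alpha * B).all()    # (3)
assert ((alpha + beta) * A == alpha * A + beta * A).all()  # (4)
assert (alpha * (beta * A) == (alpha * beta) * A).all()    # (5)
assert ((A + B).T == A.T + B.T).all()                      # (6)
```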

1.2. Matrix Multiplication. Consider now A = (aij)i=1,...,n; j=1,...,m, an n × m
matrix, and B = (bij) of size m × l. That is, the number of columns of matrix A
equals the number of rows of matrix B. This is a necessary (and sufficient)
condition for being able to calculate the product matrix A · B.
Definition. The product matrix C = A · B is the matrix with n rows (same number
as A) and l columns (same as B) such that the element

    cij = ai1 b1j + ai2 b2j + · · · + aim bmj

that is, the element of the product matrix in the position i, j is obtained by
multiplying row i of A times column j of B.
Example. Consider the matrices

        [  2  1  5 ]        [ 1   6 ]
    A = [ −3  0  2 ]    B = [ 7  −4 ]
                            [ 8   0 ]

the matrix C = A · B is the 2 × 2 matrix

        [ 49    8 ]
    C = [ 13  −18 ]

where, for instance, the entry c12 = 8 = 2 · 6 + 1 · (−4) + 5 · 0.
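The product above can be reproduced with NumPy's `@` operator, which implements exactly this row-times-column rule:

```python
import numpy as np

# The worked example: a 2x3 matrix times a 3x2 matrix gives a 2x2 matrix.
A = np.array([[2, 1, 5], [-3, 0, 2]])
B = np.array([[1, 6], [7, -4], [8, 0]])

C = A @ B  # row i of A times column j of B
assert (C == np.array([[49, 8], [13, -18]])).all()
```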
Remark. In general, the product of matrices is not commutative. In fact, in many
cases it is only possible to calculate one of the products, but not the other.
For instance, if we consider the matrices

        [ 2  1 ]        [ 1  2  5 ]
    A = [ 3  1 ]    B = [ 1  4  4 ]

we can calculate A · B. But B · A is not defined, since the number of columns of
B does not coincide with the number of rows of A.
If we take

        [ 1  2 ]        [ 3  1 ]
    A = [ 2  1 ]    B = [ 2  1 ]

then,

            [ 7  3 ]            [ 5  7 ]
    A · B = [ 8  3 ]    B · A = [ 4  5 ]
Proposition 2. If the product AB exists, then the product Bt At also exists, and

    (AB)t = Bt At
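A quick numerical sketch of Proposition 2, reusing the matrices from the earlier product example:

```python
import numpy as np

# Check of (AB)^t = B^t A^t on a 2x3 and a 3x2 matrix.
A = np.array([[2, 1, 5], [-3, 0, 2]])    # 2x3
B = np.array([[1, 6], [7, -4], [8, 0]])  # 3x2

lhs = (A @ B).T
rhs = B.T @ A.T  # note the reversed order of the factors
assert (lhs == rhs).all()
```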
1.3. Equivalent Matrices. The Gauss-Jordan Elimination Method. Given
a matrix A we say that B is equivalent to A if we can transform A into B using
any combination of the following elementary operations:
• Multiply a row of A by any real number not equal to zero.
• Interchange two rows.
• Add (a multiple of) a row of A to any other row.
These three operations can be described by means of matrix products:
• Multiplying the i-th row of an n × m matrix by a real number a is equivalent
to multiplying on the left by the identity matrix In in which the element
in the position ii is replaced by a.

For instance, if we consider the matrix

    [ 2  4 ]
    [ 3  6 ]
    [ 7  9 ]

and want to multiply the 2nd row by 5, we have to multiply this matrix by

    [ 1  0  0 ]
    [ 0  5  0 ]
    [ 0  0  1 ]
• To swap two rows, say rows i and j, the only thing we have to do is to
multiply on the left by an identity matrix with rows i and j swapped.
For instance, if in the previous matrix:

    [ 2  4 ]
    [ 3  6 ]
    [ 7  9 ]

we want to exchange rows 1 and 3, then we multiply

    [ 0  0  1 ]   [ 2  4 ]
    [ 0  1  0 ] · [ 3  6 ]
    [ 1  0  0 ]   [ 7  9 ]

• The operation of adding the j-th row multiplied by a real number a to the
i-th row is equivalent to multiplying on the left by the identity matrix with
an a in the entry ij.
For instance, if in the matrix

    [ 2  4 ]
    [ 3  6 ]
    [ 7  9 ]

we want to add to row 2 seven times row 3, we can do it in the following
way,

    [ 1  0  0 ]   [ 2  4 ]
    [ 0  1  7 ] · [ 3  6 ]
    [ 0  0  1 ]   [ 7  9 ]

that is, we put a 7 into the position 2, 3.
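All three operations can be checked as left-multiplications in NumPy; the example matrix is the one from the text:

```python
import numpy as np

# The three elementary operations, written as left-multiplications
# by modified identity matrices.
M = np.array([[2, 4], [3, 6], [7, 9]])

scale = np.array([[1, 0, 0], [0, 5, 0], [0, 0, 1]])  # multiply row 2 by 5
swap  = np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0]])  # exchange rows 1 and 3
add   = np.array([[1, 0, 0], [0, 1, 7], [0, 0, 1]])  # add 7 * row 3 to row 2

assert (scale @ M == np.array([[2, 4], [15, 30], [7, 9]])).all()
assert (swap @ M == np.array([[7, 9], [3, 6], [2, 4]])).all()
assert (add @ M == np.array([[2, 4], [52, 69], [7, 9]])).all()
```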
Definition 3. We say that a matrix A ∈ Mn×m is in (row) echelon form if it
satisfies the following:
(1) All zero rows are at the bottom of the matrix.
(2) For each row i = 1, . . . , n, if the first non-zero element is aij (that is, aij ≠ 0
and aik = 0 for every k < j), then alk = 0 for every l > i and every k ≤ j.
That is, if the matrix A is in echelon form, then for any row i whose first
non-zero element is aij, everything below row i and at or to the left of column j
is zero. In an echelon matrix, every row (except perhaps the first one) starts with
0, and each row i + 1 starts with at least one more 0 than the preceding row i.

For example,

    [ 5  3  0  4  1 ]      [ 2  0  0  1 ]
    [ 0  1  6  2  0 ]      [ 0  1  2  3 ]
    [ 0  0  1  3  0 ]      [ 0  0  0  5 ]
    [ 0  0  0  0  1 ]      [ 0  0  0  0 ]

are matrices in echelon form, while

    [ 2  0  0  1 ]
    [ 0  1  2  3 ]
    [ 0  0  0  0 ]
    [ 0  0  0  1 ]

is not.
The Gauss-Jordan elimination method gives us a systematic way of obtaining
an echelon form for any matrix by means of elementary matrix operations. We will
explain the method by means of an example. Consider the following matrix,

    [ 0  3  2  5  7 ]
    [ 1  7  2  4  3 ]
    [ 0  0  0  1  3 ]
    [ 0  0  0  0  0 ]
    [ 0  5  0  4  7 ]

(1) Reorder the rows so that all the rows with all elements equal to zero, if any,
are below all other rows.

    [ 0  3  2  5  7 ]
    [ 1  7  2  4  3 ]
    [ 0  0  0  1  3 ]
    [ 0  5  0  4  7 ]
    [ 0  0  0  0  0 ]

(2) Look for the first column that doesn't have all zeros.

    [ 0  3  2  5  7 ]
    [ 1  7  2  4  3 ]
    [ 0  0  0  1  3 ]
    [ 0  5  0  4  7 ]
    [ 0  0  0  0  0 ]

(3) Reorder the rows again, so that all the zeros in this column are below all
the non-zero elements.

    [ 1  7  2  4  3 ]
    [ 0  3  2  5  7 ]
    [ 0  0  0  1  3 ]
    [ 0  5  0  4  7 ]
    [ 0  0  0  0  0 ]

(4) If the matrix is already in echelon form, we are finished.



(5) If not, we look for the first element that violates the echelon condition (we
call it pivotal) and from now on forget all the rows above it.

    [ 1  7  2  4  3 ]
    [ 0  3  2  5  7 ]
    [ 0  0  0  1  3 ]
    [ 0  5  0  4  7 ]
    [ 0  0  0  0  0 ]

(6) Repeat the previous steps (ignoring the upper rows that have been already
dealt with). If the matrix is already in echelon form, we are done.

    [ 1  7  2  4  3 ]
    [ 0  3  2  5  7 ]
    [ 0  5  0  4  7 ]
    [ 0  0  0  1  3 ]
    [ 0  0  0  0  0 ]
(7) If not, eliminate all the non-zero numbers that are in the same column as
the pivotal element (let it be aij) and below it, by adding multiples of row
i to (multiples of) the rows below it:

    3 · row 3 − 5 · row 2:

    [ 1       7        2        4        3   ]
    [ 0       3        2        5        7   ]
    [ 0  15 − 15   0 − 10  12 − 25  21 − 35  ]
    [ 0       0        0        1        3   ]
    [ 0       0        0        0        0   ]

      [ 1  7    2    4    3  ]
      [ 0  3    2    5    7  ]
    = [ 0  0  −10  −13  −14  ]
      [ 0  0    0    1    3  ]
      [ 0  0    0    0    0  ]
(8) If the matrix is already in echelon form, we are done. Otherwise, repeat
the operations with the rows that are still not in echelon form, until you are
done.
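The procedure above can be sketched in code. The following is a minimal Gaussian-elimination sketch (not the textbook's exact step order: it picks the first available non-zero pivot instead of reordering all rows first), applied to the matrix of the example:

```python
import numpy as np

# A minimal sketch of reduction to echelon form by forward elimination.
# Note: plain float comparisons with zero are fine for this small integer
# example but would need a tolerance in general.
def echelon_form(A):
    A = A.astype(float)
    n_rows, n_cols = A.shape
    row = 0
    for col in range(n_cols):
        if row == n_rows:
            break
        # find a row at or below `row` with a non-zero entry in this column
        nonzero = [r for r in range(row, n_rows) if A[r, col] != 0]
        if not nonzero:
            continue
        A[[row, nonzero[0]]] = A[[nonzero[0], row]]  # swap it up
        for r in range(row + 1, n_rows):             # eliminate below the pivot
            A[r] -= (A[r, col] / A[row, col]) * A[row]
        row += 1
    return A

M = np.array([[0, 3, 2, 5, 7],
              [1, 7, 2, 4, 3],
              [0, 0, 0, 1, 3],
              [0, 0, 0, 0, 0],
              [0, 5, 0, 4, 7]])
E = echelon_form(M)
# E is an echelon form of M with four non-zero rows (its rank, see below);
# the exact entries differ from the text's because different multiples are used.
```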

1.4. Determinants. To any square matrix one can assign a real number called
the determinant of the matrix. As we are going to see, this number is important
and useful. We shall define it by induction.
Let A be a 1 × 1 matrix, i.e. it has a single row and column. In other words, A
is a scalar, A = (a). The determinant in this case is simply defined to be a.
Let A be a 2 × 2 matrix. Then, the determinant is defined as

             | a  b |
    det(A) = |      | = ad − cb
             | c  d |

Let A be a 3 × 3 matrix. In this case we have two equivalent ways of defining
the determinant:

(1) Sarrus' rule:

    | a11  a12  a13 |
    | a21  a22  a23 | = a11 a22 a33 + a12 a23 a31 + a13 a21 a32
    | a31  a32  a33 |     − a31 a22 a13 − a32 a23 a11 − a33 a21 a12
(2) Expanding by a row (or column):

    | a11  a12  a13 |
    | a21  a22  a23 | = a11 | a22  a23 | − a21 | a12  a13 | + a31 | a12  a13 |
    | a31  a32  a33 |       | a32  a33 |       | a32  a33 |       | a22  a23 |

We don't have to use the first row or column; any row or column would
work. We have to be careful, though, with signs, since the minor multiplying
the element aij also has to be multiplied by (−1)^(i+j).
Example. If we want to expand the determinant by column 2,

    | 1  2  1 |
    | 4  3  5 | = (−1)^(1+2) · 2 · | 4  5 | + (−1)^(2+2) · 3 · | 1  1 | + (−1)^(3+2) · 1 · | 1  1 |
    | 3  1  3 |                    | 3  3 |                    | 3  3 |                    | 4  5 |

               = −2 · (−3) + 3 · 0 − 1 · 1 = 5
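The expansion rule translates directly into a short recursive function. This sketch (our own helper, using plain Python lists) expands along the first row; it is fine for tiny matrices but far too slow for large ones:

```python
# Cofactor (Laplace) expansion of a determinant along the first row.
def det(matrix):
    n = len(matrix)
    if n == 1:
        return matrix[0][0]
    total = 0
    for j in range(n):
        # minor: delete row 0 and column j
        minor = [row[:j] + row[j + 1:] for row in matrix[1:]]
        total += (-1) ** j * matrix[0][j] * det(minor)
    return total

assert det([[1, 2, 1], [4, 3, 5], [3, 1, 3]]) == 5  # the example above
```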
For matrices of higher dimensions the rule is the same as for the 3 × 3 matrices:
we can expand by rows or columns, in such a way that the determinant of a 4 × 4
matrix can be calculated by calculating 4 determinants of 3 × 3 matrices.
In general, if A = (aij) is a square n × n matrix, the minor complementary to
the element aij of A is the determinant of the (n−1) × (n−1) submatrix that is
obtained by eliminating the row i and the column j from the matrix A. The cofactor
Aij of the element aij of A is the minor complementary to aij multiplied by the
factor (−1)^(i+j). With this notation, we can write the expansion of the determinant
of a matrix A by row i as,

    |A| = ai1 Ai1 + ai2 Ai2 + · · · + ain Ain

or by column j,

    |A| = a1j A1j + a2j A2j + · · · + anj Anj
Example. Consider the determinant

    | 1  2  0  3 |
    | 4  7  1  1 |
    | 1  3  3  1 |
    | 0  2  0  7 |

It is best to expand this determinant by row 4 or column 3, because there are
more zeroes in them, so in the end we have to do fewer calculations. Let us
expand by column 3:

    | 1  2  0  3 |
    | 4  7  1  1 |                        | 1  2  3 |                   | 1  2  3 |
    | 1  3  3  1 | = 0 + (−1)^(2+3) · 1 · | 1  3  1 | + (−1)^(3+3) · 3 · | 4  7  1 | + 0
    | 0  2  0  7 |                        | 0  2  7 |                   | 0  2  7 |

As you see from this example, the more zeroes there are, the easier the
calculations become. We shall now explain how to produce as many zeroes as
possible.

Proposition 4.
(1) Let A and B be square matrices of the same size. Then,
    det(A · B) = det(A) · det(B)
(2) If all elements of a row or a column of a matrix are equal to zero, then the
determinant also equals zero.
The remaining properties are easily derived from those above and can be used
to increase the number of zeros.
Proposition 5.
(1) If we multiply a row or a column of a matrix by a number, then the
determinant is also multiplied by the same number.
(2) If we swap two rows (or two columns) the determinant changes sign.
(3) If we add to a row (or a column) a multiple of another row (respectively,
column) the determinant does not change.
(4) If two rows or columns are equal, then the determinant is equal to 0.
Using the rules above, we can simplify the computation of determinants by using
the same elementary operations we have used to put the matrix into its echelon
form.
Example. Consider the determinant:

    | 1  2  3  4 |   | 1   2   3    4 |
    | 2  3  5  1 |   | 0  −1  −1   −7 |   | −1  −1   −7 |
    | 3  1  4  2 | = | 0  −5  −5  −10 | = | −5  −5  −10 | =
    | 2  5  5  1 |   | 0   1  −1   −7 |   |  1  −1   −7 |

        | 1   1    7 |     | 1   1    7 |   | 1   1    7 |
    = − | −5  −5  −10 | = − | 0   0   25 | = | 0  −2  −14 | = 1 · (−2) · 25 = −50
        | 1  −1   −7 |     | 0  −2  −14 |   | 0   0   25 |

(we subtract multiples of row 1 from the other rows, expand by the first column,
take the factor −1 out of the first row, subtract multiples of row 1 again, and
finally swap rows 2 and 3, which changes the sign once more).
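As a check, NumPy's determinant routine gives the same value:

```python
import numpy as np

# Numerical check of the determinant computed by row reduction above.
A = np.array([[1, 2, 3, 4],
              [2, 3, 5, 1],
              [3, 1, 4, 2],
              [2, 5, 5, 1]])
d = round(np.linalg.det(A))
assert d == -50
```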
1.5. Rank of a Matrix.
Remark. The echelon form obtained by the above procedure is not unique. That
is, when we apply the method above to reduce a matrix to an echelon form, the
final matrix that we obtain depends on the exact way the steps are carried out.
However, one can prove that the number of zero rows obtained is independent of
the procedure used to find the echelon form.
Definition 6. Given any matrix A, we define the rank of A to be the number of
non-zero rows that the matrix has in any of its echelon forms.
Example. Consider the matrix

        [ 0  3  2  5  7 ]
        [ 1  7  2  4  3 ]
    A = [ 0  0  0  1  3 ]
        [ 0  0  0  0  0 ]
        [ 0  5  0  4  7 ]

it is equivalent to the echelon matrix:

    [ 1  7    2    4    3  ]
    [ 0  3    2    5    7  ]
    [ 0  0  −10  −13  −14  ]
    [ 0  0    0    1    3  ]
    [ 0  0    0    0    0  ]

so that the rank of A is four.
There is an equivalent definition of the matrix rank that uses determinants.
Proposition 7. Let A be any matrix. The rank of A is the size of the largest
square submatrix with a non-zero determinant that we can construct from the
original matrix by eliminating rows and columns.
Example. (1) Take the matrix

    [ 1  2  5 ]
    [ 2  4  9 ]

The rank of this matrix can't be more than 2, since it is impossible to
construct inside it a 3 × 3 matrix. Let us see if we can construct a 2 × 2
matrix with a non-zero determinant:

    | 1  2 |
    | 2  4 | = 0

doesn't work, but

    | 1  5 |
    | 2  9 | = −1 ≠ 0

Since this determinant does not equal zero, the rank of this matrix is 2.
(2) If we now take

    [ 1  2   5 ]
    [ 2  4  10 ]

then, as before, the rank can't be more than 2. Nor can it be smaller than
1, since for that the matrix would have to be a matrix of zeroes. Let us see
if it is 2:

    | 1  2 |        | 1   5 |        | 2   5 |
    | 2  4 | = 0,   | 2  10 | = 0,   | 4  10 | = 0

Therefore, the rank is 1.
Remark. The rank of two equivalent matrices is the same.
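Both rank examples can be verified with NumPy:

```python
import numpy as np

# Checking the two rank examples above.
A = np.array([[1, 2, 5], [2, 4, 9]])   # rank 2
B = np.array([[1, 2, 5], [2, 4, 10]])  # rank 1 (second row is twice the first)

assert np.linalg.matrix_rank(A) == 2
assert np.linalg.matrix_rank(B) == 1
```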
1.6. Inverse Matrix. Given an n × n square matrix A, we say that it has an
inverse if there exists an n × n matrix, which we denote by A−1 , such that A−1 A =
AA−1 = In , where In is the identity matrix of order n × n.
Remark. Is the inverse matrix unique? In other words, is it possible that
there are two different matrices, say B and C, such that BA = AB = In and
CA = AC = In ? Let us show that, if these equations hold, then we must have
B = C.
Since CA = In , multiplying by B on the right we get that

    (CA)B = In B = B

and since

    (CA)B = C(AB) = CIn = C

we see that B = C.
Proposition 8. A square matrix has an inverse if and only if its determinant is
not equal to zero. Equivalently, an n × n square matrix has an inverse if and only
if it has rank n (that is, it has full rank).
Therefore, using the properties of determinants it is easy to derive the following:
Proposition 9. If A has an inverse, then det(A−1) = 1/det(A).
Proof: Since A · A−1 = In , we must have that det(A · A−1) = det(In). One
checks easily that det(In) = 1. And, since det(A · A−1) = det(A) det(A−1), we must
have that det(A) det(A−1) = 1, and the proposition follows.

Proposition 10. Let A and B be n × n square matrices. Then, A · B and B · A
have inverses if and only if both A and B have inverses. Furthermore,

    (A · B)−1 = B−1 · A−1        (B · A)−1 = A−1 · B−1
There are various methods of computing the inverse of a matrix, but undoubtedly
the easiest is the Gauss-Jordan elimination method that we have used to put
matrices in echelon form.
Consider the matrix

        [ a11  a12  ...  a1n ]
        [ a21  a22  ...  a2n ]
    A = [  ..   ..  ...   .. ]
        [ an1  an2  ...  ann ]

Construct the new matrix

    [ a11  a12  ...  a1n | 1  0  ...  0 ]
    [ a21  a22  ...  a2n | 0  1  ...  0 ]
    [  ..   ..  ...   .. | ..  ..  ..  .. ]
    [ an1  an2  ...  ann | 0  0  ...  1 ]

To this entire matrix we apply the elementary operations until the left side
becomes an identity matrix. It can be shown that this can be done if and only if
the original matrix has full rank, in which case we obtain

    [ 1  0  ...  0 | b11  b12  ...  b1n ]
    [ 0  1  ...  0 | b21  b22  ...  b2n ]
    [ ..  ..  ..  .. |  ..   ..  ...   .. ]
    [ 0  0  ...  1 | bn1  bn2  ...  bnn ]

and the matrix on the right is the inverse of A.
This method works since the matrix that we obtain on the right is the product of
all those matrices by which we were multiplying the original matrix to turn it into
the identity matrix.

Example. Consider the matrix

    [ 1  1  0 ]
    [ 0  1  1 ]
    [ 1  0  1 ]

construct the larger matrix:

    [ 1  1  0 | 1  0  0 ]
    [ 0  1  1 | 0  1  0 ]
    [ 1  0  1 | 0  0  1 ]

Make the following operations

              [ 1   1  0 |  1  0  0 ]             [ 1  1  0 |  1  0  0 ]
    (f3 −f1 ) [ 0   1  1 |  0  1  0 ]   (f3 +f2 ) [ 0  1  1 |  0  1  0 ]
              [ 0  −1  1 | −1  0  1 ]             [ 0  0  2 | −1  1  1 ]

At this point we see that this is a rank 3 matrix, so it can be inverted

               [ 1  1  0 |  1  0   0 ]              [ 2  0  0 |  1  −1   1 ]
    (2f2 −f3 ) [ 0  2  0 |  1  1  −1 ]   (2f1 −f2 ) [ 0  2  0 |  1   1  −1 ]
               [ 0  0  2 | −1  1   1 ]              [ 0  0  2 | −1   1   1 ]

Finally, dividing by 2

    [ 1  0  0 |  1/2  −1/2   1/2 ]
    [ 0  1  0 |  1/2   1/2  −1/2 ]
    [ 0  0  1 | −1/2   1/2   1/2 ]

Therefore, the inverse matrix is

           [  1/2  −1/2   1/2 ]
    A−1 =  [  1/2   1/2  −1/2 ]
           [ −1/2   1/2   1/2 ]
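A quick check of this computation with NumPy (which uses a different algorithm internally but must give the same inverse):

```python
import numpy as np

# Check of the worked inversion above.
A = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1]])
A_inv = np.linalg.inv(A)

expected = 0.5 * np.array([[1, -1, 1],
                           [1, 1, -1],
                           [-1, 1, 1]])
assert np.allclose(A_inv, expected)
assert np.allclose(A @ A_inv, np.eye(3))  # A * A^{-1} = I
```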
For those who are absolutely in love with formulas, there does exist a formula
that provides the inverse of a matrix. For a square matrix A, its adjoint matrix
Adj(A) is the matrix of cofactors of A. That is, the element in the i-th row and
j-th column of Adj(A) is Aij .
Proposition. If |A| ≠ 0, then

    A−1 = (1/|A|) (Adj(A))t
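This formula can be sketched for the 3 × 3 case; det2, minor, det3 and inverse3 below are small helpers written just for this illustration (exact arithmetic via fractions), applied to the matrix inverted by Gauss-Jordan above:

```python
from fractions import Fraction

# A sketch of the cofactor formula A^{-1} = (1/|A|) Adj(A)^t for 3x3 matrices.
def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def minor(m, i, j):
    # delete row i and column j
    return [row[:j] + row[j + 1:] for k, row in enumerate(m) if k != i]

def det3(m):
    return sum((-1) ** j * m[0][j] * det2(minor(m, 0, j)) for j in range(3))

def inverse3(m):
    d = det3(m)
    # entry (i, j) of the inverse is the cofactor A_{ji} divided by |A|
    # (the transposition shows up as the swapped indices j, i)
    return [[Fraction((-1) ** (i + j) * det2(minor(m, j, i)), d)
             for j in range(3)] for i in range(3)]

A = [[1, 1, 0], [0, 1, 1], [1, 0, 1]]
A_inv = inverse3(A)  # rows of ±1/2 entries, matching the example above
```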

2. Systems of Linear Equations

A system of linear equations is a system of equations of the form

    a11 x1 + · · · + a1n xn = b1
        ..                    ..
    am1 x1 + · · · + amn xn = bm

where aij and bk are known real numbers and x1 , . . . , xn are the unknowns (or
variables) of the system.

A system of linear equations can be rewritten using matrix notation as:

    [ a11  ...  a1n ] [ x1 ]   [ b1 ]
    [  ..  ...   .. ] [ .. ] = [ .. ]
    [ am1  ...  amn ] [ xn ]   [ bm ]

A solution of the system is an n-dimensional vector that solves the matrix
equation. Or, in other words, it is a vector of real numbers (x∗1 , . . . , x∗n ) which
satisfies all the equations of the system.
Definition 11. A system of linear equations is called consistent if it has a
solution; otherwise it is called inconsistent or overdetermined. If the system of
linear equations has multiple (actually, infinitely many) solutions, it is called
underdetermined.
Example. The system of linear equations

    2x + y = 5
    4x + 2y = 7

doesn't have a solution, so it is inconsistent.
The system of equations

    x + y = 5
    4y = 8

has as its unique solution the vector (3, 2). Thus, it is consistent and uniquely
determined.
The system of equations

    x + y = 4
    2x + 2y = 8

has infinitely many solutions, since all vectors of the form (x, 4 − x) solve it;
therefore, it is consistent and underdetermined.
2.1. The Rouché-Frobenius Theorem. The Rouché-Frobenius theorem gives
us a criterion to decide whether a system is consistent or inconsistent, and, in
the former case, how many parameters we need to describe the set of solutions.
For instance, consider the system of equations

    x + y = 2
    2x + 2y = 4

It can be easily seen that the second equation is just twice the first, and
is, therefore, redundant. Thus, the set of solutions is the set of two-dimensional
vectors (x, y) that satisfy the condition y = 2 − x. If we choose a value of x we,
therefore, obtain the unique value of y that satisfies the system. This set,

    {(x, 2 − x) : x ∈ R}

can be described by means of a single parameter. We say that it has one degree of
freedom.
The Rouché-Frobenius theorem tells us exactly how many parameters (degrees
of freedom) we need to describe the solutions of any system. Consider the system
of linear equations,


    a11 x1 + · · · + a1n xn = b1
        ..                    ..
    am1 x1 + · · · + amn xn = bm

with m equations and n unknowns. We define the matrix of the system to be

        [ a11  ...  a1n ]
    A = [  ..  ...   .. ]
        [ am1  ...  amn ]

The extended matrix of the system is

            [ a11  ...  a1n | b1 ]
    (A|b) = [  ..  ...   .. | .. ]
            [ am1  ...  amn | bm ]
Theorem 1 (Rouché-Frobenius).
(1) The system is consistent if and only if rank A = rank(A|b).
(2) Suppose the system is consistent (hence rank A = rank(A|b) ≤ n). Then,
(a) The system has a unique solution if and only if rank A = rank(A|b) = n.
(b) The system is underdetermined if and only if rank A = rank(A|b) < n.
In this case, the number of parameters necessary to describe the
solutions of the system is n − rank(A).
Example. Consider the system

    x + y + 2z = 1
    2x + y + 3z = 2
    3x + 2y + 5z = 3

Its extended matrix is

    [ 1  1  2 | 1 ]
    [ 2  1  3 | 2 ]
    [ 3  2  5 | 3 ]

Calculate the rank of A and of (A|b):

    [ 1  1  2 | 1 ]   [ 1   1   2 | 1 ]   [ 1  1  2 | 1 ]
    [ 2  1  3 | 2 ] ∼ [ 0  −1  −1 | 0 ] ∼ [ 0  1  1 | 0 ]
    [ 3  2  5 | 3 ]   [ 0  −1  −1 | 0 ]   [ 0  0  0 | 0 ]

Therefore, the rank of A and of (A|b) is 2. Hence, the system is consistent and
underdetermined.
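The same classification can be carried out in NumPy by comparing the two ranks, as the theorem prescribes:

```python
import numpy as np

# Classifying the example system above via the Rouché-Frobenius theorem.
A = np.array([[1, 1, 2], [2, 1, 3], [3, 2, 5]])
b = np.array([[1], [2], [3]])
Ab = np.hstack([A, b])  # the extended matrix (A|b)

rank_A = np.linalg.matrix_rank(A)
rank_Ab = np.linalg.matrix_rank(Ab)
n = A.shape[1]

assert rank_A == rank_Ab == 2  # equal ranks: consistent
assert rank_A < n              # rank < n: underdetermined, n - rank(A) = 1 degree of freedom
```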
Proposition 12. A homogeneous system (i.e., a system in which all free terms on
the right-hand side of the equations are equal to zero) is always consistent (it has
at least one solution).

2.2. Gauss-Jordan elimination method for solving systems of equations.
The Rouché-Frobenius theorem tells us when a system of linear equations has (or
does not have) a solution. Combining it with the Gauss-Jordan elimination method,
we can explicitly obtain the solutions. The idea is that if two systems have
equivalent extended matrices, they have the same solutions.
On the other hand, if a matrix is already put into its echelon form, it is very
easy to calculate the solutions. Consider, for example, the system whose extended
matrix is

    [ 1  3  5 | 1 ]
    [ 0  1  4 | 2 ]
    [ 0  0  2 | 4 ]

The associated system has a unique solution. To find the solution we only have to
recall that each column corresponds to a variable and that the column after the
line corresponds to the free terms. To give the solution, let us calculate the
variables from the bottom up.
The last row tells us that 2z = 4, so we can substitute z = 2 into the second
equation and get y + 8 = 2, so that y = −6. Finally, we substitute these values into
the first equation to obtain that x − 18 + 10 = 1. Hence, x = 9.
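The bottom-up substitution just described is known as back substitution; a minimal sketch on the triangular system above:

```python
import numpy as np

# Back substitution: solve each unknown from the bottom row up,
# substituting the values already found into the rows above.
def back_substitute(U, b):
    n = len(b)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

U = np.array([[1.0, 3, 5], [0, 1, 4], [0, 0, 2]])
b = np.array([1.0, 2, 4])
x = back_substitute(U, b)
assert np.allclose(x, [9, -6, 2])  # matches x = 9, y = -6, z = 2 above
```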
Let us now consider the system represented by the matrix:

    [ 1  3  5 | 1 ]
    [ 0  1  4 | 2 ]
    [ 0  0  0 | 0 ]

This system is consistent and underdetermined, since the rank of A and of the
augmented matrix are both equal to 2 and the number of unknowns is 3. Therefore,
we have one degree of freedom. The second equation is y + 4z = 2, which we
rewrite as

    y = 2 − 4z

Substituting now into the first equation, we obtain that x + 3(2 − 4z) + 5z = 1,
meaning that

    x = −5 + 7z

Therefore, the set of all solutions of our system is

    {(7z − 5, 2 − 4z, z) : z ∈ R}

Sometimes we have to be careful with the variables we leave as parameters. In
the previous case any one of the variables would work fine. In the following
example that's not the case.

    [ 1  4  −1 | 1 ]
    [ 0  1   0 | 1 ]
    [ 0  0   0 | 0 ]

Here the second equation tells us that y = 1, so y cannot be left as a parameter.
If, on the other hand, we substitute the value y = 1 into the first equation, we
obtain that x + 4 − z = 1, so that x = z − 3. Here, the variables that can be left
as parameters are x or z. The general solution of the system can be written as

    {(z − 3, 1, z) ∈ R3 : z ∈ R}

or, if you so prefer,

    {(x, 1, x + 3) ∈ R3 : x ∈ R}

The reason that systems represented by equivalent matrices have the same
solutions is the following:
• If we multiply an equation by a real number different from zero, the solutions
don't change.
• If we reorder the equations, the solutions of the system don't change.
• If we add (a multiple of) one of the equations to another, the solutions don't
change.
Thus, if two systems are represented by (row) equivalent matrices, their solutions
are the same. The Gauss-Jordan elimination method consists of transforming a
system into another one, representable by a matrix in echelon form. The solutions
of the new system are easy to calculate and are the same as those of the original
system.
2.3. Cramer's Rule. Suppose now we have a system of n equations in n unknowns.
In this case the system can be represented by a square matrix. The system has a
unique solution if and only if this matrix has full rank, which, in turn, is
equivalent to its determinant being non-zero. Cramer's rule provides a way of
finding the solution of the system using determinants. This is how it works.
Consider the system:

    a11 x1 + · · · + a1n xn = b1
        ..                    ..
    an1 x1 + · · · + ann xn = bn

and suppose that

    | a11  a12  ...  a1n |
    |  ..   ..  ...   .. | ≠ 0
    | an1  an2  ...  ann |

so the system has a unique solution. Denote the unique solution of the system as
(x∗1 , x∗2 , . . . , x∗n ). Then,

          | b1   a12  ...  a1n |
          | ..    ..  ...   .. |
          | bn   an2  ...  ann |
    x∗1 = ----------------------
          | a11  a12  ...  a1n |
          |  ..   ..  ...   .. |
          | an1  an2  ...  ann |

          | a11  b1  ...  a1n |
          |  ..  ..  ...   .. |
          | an1  bn  ...  ann |
    x∗2 = ---------------------
          | a11  a12  ...  a1n |
          |  ..   ..  ...   .. |
          | an1  an2  ...  ann |

    ...

          | a11  a12  ...  b1 |
          |  ..   ..  ...  .. |
          | an1  an2  ...  bn |
    x∗n = ---------------------
          | a11  a12  ...  a1n |
          |  ..   ..  ...   .. |
          | an1  an2  ...  ann |

that is, x∗k is the determinant of the matrix obtained by replacing column k of A
with the column of free terms, divided by the determinant of A.
We shall make two observations about this method.
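Cramer's rule is only a few lines of code. A sketch (cramer is our own helper), applied to the uniquely determined system x + y = 5, 4y = 8 from the examples above, whose solution is (3, 2):

```python
import numpy as np

# Cramer's rule: replace column k of A by the free terms b and
# divide the two determinants.
def cramer(A, b):
    d = np.linalg.det(A)
    x = []
    for k in range(A.shape[1]):
        Ak = A.astype(float).copy()
        Ak[:, k] = b  # column k replaced by the free terms
        x.append(np.linalg.det(Ak) / d)
    return np.array(x)

A = np.array([[1.0, 1], [0, 4]])
b = np.array([5.0, 8])
x = cramer(A, b)
assert np.allclose(x, [3, 2])
```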

• Cramer's method requires substantially more operations than the Gauss-Jordan
elimination method.
• It is easy to extend Cramer's method to underdetermined systems (we won't
do this in this class, though).
