
Systems of Linear Equations

(Reference book: Sections 1.1 and 1.2)


Introduction to Systems of Linear Equations
Notes: Systems of equations

• A system of equations is a collection of finitely many equations involving the same set of variables.
• The solution set of the system is the intersection of the solution sets of the individual equations.
• If some equation has no solution, the system has no solution.
• It is possible that each equation has a solution, and yet the system has none.
Lines and planes

• Line in R²: take a, b, c ∈ R with a, b not both zero

  ax + by = c

  The numbers a, b, c are constants. The variables are x, y.

• Plane in R³: take a, b, c, d ∈ R with a, b, c not all zero

  ax + by + cz = d

  The numbers a, b, c, d are constants. The variables are x, y, z.

Logic remark: Notice the difference between
"are (all) non-zero" and "are not all zero".
Linear equations

A linear equation in the n variables x1, . . . , xn is

  a1 x1 + a2 x2 + . . . + an xn = b

where a1, a2, . . . , an, b are constants and a1, a2, . . . , an are not all zero.

A homogeneous linear equation is

  a1 x1 + a2 x2 + . . . + an xn = 0
Examples of linear equations

  x + 2y = 5        not homogeneous
  x + 2y − 5 = 0    not homogeneous
  x + 2y = 0        homogeneous

Not linear equations:

  x² + y² = 1
  sin x + y = 0
  √x + y = 0
Linear system
A system of linear equations or linear system in the unknowns x1, . . . , xn is

  a11 x1 + a12 x2 + . . . + a1n xn = b1
  a21 x1 + a22 x2 + . . . + a2n xn = b2
  ...
  am1 x1 + am2 x2 + . . . + amn xn = bm

where aij for i = 1, . . . , m, j = 1, . . . , n, and bk for k = 1, . . . , m are constants.

Example:

  7x + 3y = 10
   x −  y =  0
  2x − 2y =  0

Solution of a linear system

A solution of the linear system is a sequence of numbers (s1, . . . , sn) such that the substitution

  x1 = s1, x2 = s2, . . . , xn = sn

makes each equation a true equality.

Example: (s1, s2) = (1, 0) is a solution for

  5x + y = 5
      y = 0

Remark: The solution is an ordered pair, an ordered triple, or in general an ordered n-tuple.
It is important to know what the unknowns are!

Warning: Maybe not all variables appear in the equations.

For example

  x = 5

could have as solution 5, if there is only the unknown x.
But if we have unknowns x, y, the above equation stands for

  x + 0y = 5

and the solutions are (5, t) where t ∈ R.


Linear system in two unknowns
A linear equation in two unknowns is (except in trivial cases) the
equation of a line. For a system of two such equations we have to
intersect the two lines.

• The lines are parallel and distinct: no solutions.
• The lines intersect at only one point: exactly one solution.
• The lines coincide: infinitely many solutions.
Linear system in three unknowns
Solving a linear system of three equations in three unknowns amounts (except in trivial cases) to intersecting three planes.
Important result

IMPORTANT RESULT: Every linear system has either zero, one, or infinitely many solutions.

Terminology: A linear system is consistent if it has at least one solution and inconsistent if it has no solutions.

Any linear system of homogeneous equations is consistent (because one can set all variables to be zero).
Examples of linear systems

• The only solution is (1, 0):

    5x + y = 5
        y = 0

• There are no solutions:

    5x + y = 5
    5x + y = 3

• There are infinitely many solutions, (t, 5 − 5t) for t ∈ R:

     5x +  y =  5
    10x + 2y = 10
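As a quick check (an addition to these notes, not part of the reference book), the three cases can be verified with SymPy's linsolve, which returns the solution set of a linear system; this is a minimal sketch assuming SymPy is available.

    from sympy import symbols, linsolve

    x, y = symbols("x y")

    # Exactly one solution: {(1, 0)}
    print(linsolve([5*x + y - 5, y], x, y))

    # No solutions: the empty set
    print(linsolve([5*x + y - 5, 5*x + y - 3], x, y))

    # Infinitely many solutions: {(x, 5 - 5*x)}, with x acting as the parameter t
    print(linsolve([5*x + y - 5, 10*x + 2*y - 10], x, y))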
Infinitely many solutions

Example:

  y = −x − 1
  z = −x + 2

The solutions are (x, y, z) = (t, −t − 1, −t + 2) with t ∈ R.
We can assign an arbitrary value to the parameter t. We have parametric equations (called the general solution of the system)

  x = t
  y = −t − 1
  z = −t + 2

If there are infinitely many solutions for a linear system:
the solutions can be expressed with finitely many parameters to which we can assign arbitrary values in R.
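For illustration (an added sketch, assuming SymPy), linsolve returns exactly this kind of general solution, expressed in terms of the free unknown:

    from sympy import symbols, linsolve

    x, y, z = symbols("x y z")

    # The system y = -x - 1, z = -x + 2, written as expressions equal to zero.
    print(linsolve([y + x + 1, z + x - 2], x, y, z))
    # {(x, -x - 1, 2 - x)}: here x plays the role of the parameter t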
Augmented matrix for a linear system
Augmented matrix
A matrix is a rectangular array of numbers, consisting of rows and
columns.
The augmented matrix associated to the system

  a11 x1 + a12 x2 + . . . + a1n xn = b1
  a21 x1 + a22 x2 + . . . + a2n xn = b2
  ...
  am1 x1 + am2 x2 + . . . + amn xn = bm

is

  [ a11  a12  . . .  a1n  b1 ]
  [ a21  a22  . . .  a2n  b2 ]
  [ ...                      ]
  [ am1  am2  . . .  amn  bm ]

Notice: the row number is mentioned before the column number.
Augmented matrix

• We associate a row to each equation.
• We associate a column to each variable.
  (Warning: order the variables in the same way in each equation!)
• We also consider one additional column to take care of the constant terms.
Example of augmented matrix

The augmented matrix associated to the system

   x1 +  x2 + 2x3 = 9
  2x1       − 3x3 = 1
  3x1 + 6x2 − 5x3 = 0

is

  [ 1  1   2  9 ]
  [ 2  0  −3  1 ]
  [ 3  6  −5  0 ]
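As a small illustration (not from the notes; it assumes NumPy), the augmented matrix can be built by appending the column of constants to the coefficient matrix:

    import numpy as np

    # Coefficient matrix and right-hand side of the example system above.
    A = np.array([[1.0, 1.0,  2.0],
                  [2.0, 0.0, -3.0],
                  [3.0, 6.0, -5.0]])
    b = np.array([9.0, 1.0, 0.0])

    # The augmented matrix [A | b]: A with b appended as one extra column.
    augmented = np.column_stack([A, b])
    print(augmented)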
Solving a linear system/Elementary row operations

We perform algebraic operations that do not alter the solution set and that produce simpler systems, until it is evident what the solutions are.

1. Multiply an equation by a nonzero constant.
2. Interchange two equations.
3. Add a constant times one equation to another equation.

The equations correspond to the rows of the augmented matrix. We have the corresponding elementary row operations:

1. Multiply a row by a nonzero constant.
2. Interchange two rows.
3. Add a constant times one row to another row.
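A minimal sketch of the three elementary row operations on a NumPy array (an illustration added here, not part of the reference book; the function names are ours):

    import numpy as np

    def scale_row(M, i, c):
        """Multiply row i by the nonzero constant c."""
        R = M.astype(float).copy()
        R[i] = c * R[i]
        return R

    def swap_rows(M, i, j):
        """Interchange rows i and j."""
        R = M.astype(float).copy()
        R[[i, j]] = R[[j, i]]
        return R

    def add_multiple(M, i, j, c):
        """Add c times row j to row i."""
        R = M.astype(float).copy()
        R[i] = R[i] + c * R[j]
        return R

    M = np.array([[7.0,  3.0, 10.0],
                  [1.0, -1.0,  0.0]])
    print(add_multiple(M, 0, 1, -7.0))   # clears the 7 in column 0 of row 0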
Example

We now perform operations both on the linear system and on the augmented matrix. Later we will only work with the augmented matrix.
Reduced Row Echelon Form
Row Echelon and Reduced Row Echelon Form

ROW ECHELON FORM:


• If there are any rows that consist entirely of zeros, then they are
grouped together at the bottom of the matrix.
• If a row does not consist entirely of zeros, then the first nonzero
number in the row is a 1. We call this a leading 1.
• In any two successive rows that do not consist entirely of zeros,
the leading 1 in the lower row occurs farther to the right than the
leading 1 in the higher row.

REDUCED ROW ECHELON FORM: If moreover we have


• Each column that contains a leading 1 has zeros everywhere else
in that column.
In the reduced row echelon form there are zeros not only below the
leading 1, but also above.
Examples

These matrices are surely in row echelon form (maybe also in reduced row echelon form):
[matrices shown on the slides]

These matrices are surely in reduced row echelon form:
[matrices shown on the slides]
Some Facts About Echelon Forms

• Every matrix has a unique reduced row echelon form.
• Row echelon forms are not unique. But they all have the same number of zero rows, and the leading 1's always occur in the same positions.
Reduced row echelon form
The aim of our elementary operations should be to put the
augmented matrix in reduced row echelon form.

If the augmented matrix is in reduced row echelon form, then the solution set of the linear system can be read off (either by direct inspection or by converting certain linear equations to parametric form).

We call leading variables the variables corresponding to the columns with the leading 1's.
The other variables are called free variables. The free variables can be treated as parameters and can be assigned an arbitrary value.

The rows consisting only of 0 can be ignored because they correspond to a trivial equation:

  0x1 + 0x2 + · · · + 0xn = 0
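To make the distinction concrete, here is a short sketch (an addition, assuming SymPy) that reads the pivot columns off the reduced row echelon form; the pivot columns give the leading variables, and the remaining columns except the last give the free variables.

    from sympy import Matrix

    # Augmented matrix of a system in the unknowns x, y, z (illustrative numbers).
    aug = Matrix([[1, 0,  3, -1],
                  [0, 1, -4,  2],
                  [0, 0,  0,  0]])

    rref_matrix, pivot_columns = aug.rref()
    print(pivot_columns)               # (0, 1): the leading variables are x and y

    n_unknowns = aug.cols - 1          # the last column holds the constants
    free = [j for j in range(n_unknowns) if j not in pivot_columns]
    print(free)                        # [2]: z is a free variable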


Example: Unique solution

In this case, every variable is a leading variable.


There is a leading 1 in each column different from the last one.
Example: No solutions

Consider the augmented matrix

  [ 1  0  0  0 ]
  [ 0  1  2  0 ]
  [ 0  0  0  1 ]

The last row corresponds to the equation

  0x + 0y + 0z = 1

which has no solutions. The system is inconsistent.

No solutions?
Trick: As soon as you spot an unsolvable equation, the system is inconsistent.

CRITERION: An inhomogeneous linear system is inconsistent if and only if the augmented matrix, when in row echelon form, has its last non-zero row of the form

  [ 0  0  · · ·  0  1 ]

Why? This row corresponds to the unsolvable equation

  0x1 + 0x2 + · · · + 0xn = 1

Warning: A leading 1 in the last column does not correspond to a leading variable!
It means that the inhomogeneous system is inconsistent.
Example: Infinitely many solutions

Consider the augmented matrix (where the variables are x, y, z):

  [ 1  0   3  −1 ]
  [ 0  1  −4   2 ]
  [ 0  0   0   0 ]

The leading variables are x and y. The free variable is z.

  x + 3z = −1        x = −3z − 1
  y − 4z =  2        y = 4z + 2

Solutions: (x, y, z) = (−3s − 1, 4s + 2, s) with s ∈ R.

The free variables become parameters.
We express the leading variables as a sum of multiples of the free variables, and possibly a constant term.
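A quick symbolic check of this general solution (an added illustration, assuming SymPy): substituting (x, y, z) = (−3s − 1, 4s + 2, s) makes both equations hold for every value of s.

    from sympy import symbols, simplify

    s = symbols("s")
    x, y, z = -3*s - 1, 4*s + 2, s

    print(simplify(x + 3*z + 1) == 0)   # True: x + 3z = -1 for every s
    print(simplify(y - 4*z - 2) == 0)   # True: y - 4z = 2 for every s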
Example: Infinitely many solutions
Trick: We can remove zero rows, because they are useless equations (everything is a solution).

Consider the augmented matrix (where the variables are x, y, z):

  [ 1  −5  1  4 ]
  [ 0   0  0  0 ]
  [ 0   0  0  0 ]

After removing the zero rows this is just [ 1  −5  1  4 ].

The leading variable is x. The free variables are y and z.

  x − 5y + z = 4
  x = 5y − z + 4

The solutions are

  (x, y, z) = (5s − t + 4, s, t) for s, t ∈ R


Remark
Warning: We cannot remove a column of zeros, because that would mean removing a variable or the constant terms!

Consider the augmented matrix (where the variables are x, y, z, w):

  [ 1  0  0  4  2 ]
  [ 0  0  1  1  3 ]

The leading variables are x and z. The free variables are y and w.

  x = −4w + 2
  z = −w + 3

The solutions are

  (x, y, z, w) = (−4t + 2, s, −t + 3, t) for s, t ∈ R


Gauss-Jordan elimination
The standard procedure to reduce a matrix to row echelon form is called Gaussian elimination.
The standard procedure to reduce a matrix to reduced row echelon form is called Gauss-Jordan elimination (the first part is Gaussian elimination, used to get the matrix into row echelon form).

The algorithm consists of two phases:
a forward phase, in which zeros are introduced below the leading 1's;
a backward phase (for the reduced form), in which zeros are introduced above the leading 1's.
Gaussian elimination

We work with the leftmost nonzero column C.

• Swap rows so that the top entry of C is non-zero.
  (For this swap we select the first suitable row.)
• Multiply the top row by a non-zero constant to make the top entry of C a 1.
• Add suitable multiples of the top row to the other rows so that all entries of C apart from the top one become 0.

We cover the top row and iterate the above procedure on the remaining submatrix.

We continue the iteration until we have covered all rows, or until all remaining rows consist of zeros: in both cases the matrix is then in row echelon form.
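The forward phase can be sketched in code as follows (a NumPy illustration added to these notes, not a robust library routine; it follows the three steps above, choosing the first suitable row for each swap):

    import numpy as np

    def row_echelon_form(M, tol=1e-12):
        """Forward phase: reduce M to a row echelon form with leading 1's."""
        A = M.astype(float).copy()
        m, n = A.shape
        row = 0
        for col in range(n):                      # leftmost nonzero column first
            if row >= m:
                break
            # first row at or below `row` with a nonzero entry in this column
            pivot = next((r for r in range(row, m) if abs(A[r, col]) > tol), None)
            if pivot is None:
                continue                          # no pivot here, move one column right
            A[[row, pivot]] = A[[pivot, row]]     # swap the pivot row to the top
            A[row] = A[row] / A[row, col]         # scale so the leading entry becomes 1
            for r in range(row + 1, m):           # clear the entries below the leading 1
                A[r] = A[r] - A[r, col] * A[row]
            row += 1                              # "cover" the top row and iterate
        return A

    aug = np.array([[1.0, 1.0,  2.0, 9.0],
                    [2.0, 0.0, -3.0, 1.0],
                    [3.0, 6.0, -5.0, 0.0]])
    print(row_echelon_form(aug))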
Example: Gaussian elimination

The matrix is now in row echelon form.


Gauss-Jordan elimination

To transform a matrix from row echelon form to reduced row echelon form:

We work with the last nonzero row R.

• Consider the leading 1 of R, which is in some column C.
• Multiply R by suitable non-zero constants and add it to the previous rows, making all entries of C other than the leading 1 equal to 0.

We cover R and iterate the above procedure on the remaining submatrix.

We continue the iteration until all entries above all leading 1's are zero.
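A matching sketch of the backward phase (again a NumPy illustration added here; it assumes the input is already in row echelon form with leading 1's):

    import numpy as np

    def backward_phase(R, tol=1e-12):
        """Clear the entries above each leading 1 of a row echelon form matrix."""
        A = R.astype(float).copy()
        for i in range(A.shape[0] - 1, -1, -1):      # last row first, then work upward
            nonzero = np.flatnonzero(np.abs(A[i]) > tol)
            if nonzero.size == 0:
                continue                             # skip zero rows
            lead = nonzero[0]                        # column of the leading 1 of row i
            for r in range(i):
                A[r] = A[r] - A[r, lead] * A[i]      # zero out the entry above the leading 1
        return A

    ref = np.array([[1.0, 2.0, -1.0, 3.0],
                    [0.0, 1.0,  4.0, 2.0],
                    [0.0, 0.0,  1.0, 5.0]])
    print(backward_phase(ref))    # reduced row echelon form of this small example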
Example: Gauss-Jordan elimination

The matrix is now in reduced row echelon form.


Recap
To solve a linear system:
• Write down the augmented matrix.
• Perform Gauss-Jordan elimination.
• From the matrix in reduced row echelon form, read off the solution set.
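The whole recipe in one place, using SymPy as a stand-in for the hand computation (an added sketch; rref and linsolve are SymPy functions, not part of the notes):

    from sympy import Matrix, symbols, linsolve

    x1, x2, x3 = symbols("x1 x2 x3")

    # Augmented matrix of the example system used earlier in these notes.
    aug = Matrix([[1, 1,  2, 9],
                  [2, 0, -3, 1],
                  [3, 6, -5, 0]])

    print(aug.rref()[0])               # the reduced row echelon form
    print(linsolve(aug, x1, x2, x3))   # the solution set read off from it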

Warning: Ordering the variables in a different way may lead to a different set of free variables. Example:

  x + y = 1   gives   x = −s + 1, y = s   (s ∈ R)
  y + x = 1   gives   y = −t + 1, x = t   (t ∈ R)
Consistent linear systems
Homogeneous linear systems

  a11 x1 + a12 x2 + . . . + a1n xn = 0
  a21 x1 + a22 x2 + . . . + a2n xn = 0
  ...
  am1 x1 + am2 x2 + . . . + amn xn = 0

There is always the trivial solution

  (x1, x2, . . . , xn) = (0, 0, . . . , 0)

If there are other solutions, they are called nontrivial solutions.

• EITHER the homogeneous system has only the trivial solution,
• OR the homogeneous system has infinitely many solutions.
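The alternative can be decided from the number of leading variables of the coefficient matrix (its rank): nontrivial solutions exist exactly when there are free variables, i.e. when the rank is smaller than the number of unknowns. A small sketch (an addition, assuming NumPy, with made-up numbers):

    import numpy as np

    # Coefficient matrix of a homogeneous system A x = 0 (illustrative numbers).
    A = np.array([[1.0,  -5.0, 1.0],
                  [2.0, -10.0, 2.0]])

    n_unknowns = A.shape[1]
    rank = np.linalg.matrix_rank(A)    # number of leading variables

    if rank == n_unknowns:
        print("only the trivial solution")
    else:
        print("infinitely many solutions")   # this example: rank 1 < 3 unknowns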
Example: Homogeneous linear system
Homogeneous linear systems

• Geometry: the trivial solution corresponds to the origin.
• The leading variables can each be expressed as sums of multiples of the free variables, without an additional constant term (because the constant terms in the system are all 0).
• There cannot be a leading 1 in the last column (because elementary row operations preserve the last column being zero).
• Every linear system has an associated homogeneous linear system (simply set the constant terms to zero).
  Geometry: for a line, this is like considering the parallel line through the origin.
Superposition principle

Consider a consistent inhomogeneous linear system. Fix a solution.
All solutions are obtained by summing to the chosen solution the solutions of the associated homogeneous linear system.

Example: x + 2y = 10

  (10, 0) + (−2t, t) = (−2t + 10, t)
  (4, 3) + (−2s, s) = (4 − 2s, 3 + s)
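A tiny symbolic check of the example (an added illustration, assuming SymPy): adding any solution of x + 2y = 0 to the fixed solution (10, 0) again solves x + 2y = 10, for every value of the parameter.

    from sympy import symbols, simplify

    t = symbols("t")

    x_p, y_p = 10, 0          # a fixed solution of x + 2y = 10
    x_h, y_h = -2*t, t        # general solution of the homogeneous equation x + 2y = 0

    x, y = x_p + x_h, y_p + y_h
    print(simplify(x + 2*y - 10) == 0)   # True for every t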
Consistent linear systems: number of free variables
For any consistent linear system: the number of parameters in the solution set is the number of free variables.
(No free variables means that there is exactly one solution. Free variables mean that there are infinitely many solutions.)

• The number of free variables is the number of variables minus the number of leading variables.
• The number of leading variables is the number of non-zero rows in the row echelon form (because each non-zero row has a leading 1, which for consistent systems is not in the last column).

Since the number of leading variables does not exceed the number of equations, we get in particular: if there are more unknowns than equations, there must be free variables.
Additional notions
Terminology: Pivot positions and Pivot columns

The positions of the leading 1's are the pivot positions.
The columns with the leading 1's are the pivot columns.

There is no need to define 'pivot rows', because they would simply be the non-zero rows.
Alternative method: Back substitution
To solve a linear system we can use Gauss-Jordan elimination or Gaussian elimination + back substitution.

Back substitution means starting with a linear system whose augmented matrix is in row echelon form and solving the system with substitutions as follows:
• Consider the free variables as parameters.
• Solve the equations for the leading variables (xi = . . .).
• Beginning with the bottom equation and working upward, successively substitute each equation into all the equations above it.
  This means replacing the expression for a leading variable in all expressions above.
• Write the solutions.
  In the end each leading variable is expressed in terms of the free variables plus a constant.
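For the special case of a square system in row echelon form with a unique solution, back substitution can be sketched as follows (an added NumPy illustration; the general case with free variables needs symbolic parameters instead):

    import numpy as np

    def back_substitution(R):
        """Solve a square system whose augmented matrix R is in row echelon form
        with leading 1's on the diagonal, so every variable is a leading variable."""
        m = R.shape[0]
        x = np.zeros(m)
        for i in range(m - 1, -1, -1):                 # bottom equation first, then upward
            # equation i reads: x_i + sum of R[i, j] * x_j over j > i  =  R[i, -1]
            x[i] = R[i, -1] - R[i, i + 1:m] @ x[i + 1:m]
        return x

    ref = np.array([[1.0, 2.0, -1.0, 3.0],
                    [0.0, 1.0,  4.0, 2.0],
                    [0.0, 0.0,  1.0, 5.0]])
    print(back_substitution(ref))   # [44., -18., 5.], matching the Gauss-Jordan result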
Example: Back substitution
Proof of the superposition principle
Consider the linear system where the i-th equation is

  ai1 x1 + ai2 x2 + . . . + ain xn = bi

The corresponding homogeneous linear system has i-th equation

  ai1 x1 + ai2 x2 + . . . + ain xn = 0

Let (X1, . . . , Xn) be a fixed solution, (x1, . . . , xn) any solution, and (y1, . . . , yn) any solution of the homogeneous system. Then

  ai1 (x1 − X1) + . . . + ain (xn − Xn)
    = (ai1 x1 + . . . + ain xn) − (ai1 X1 + . . . + ain Xn) = bi − bi = 0

so (x1 − X1, . . . , xn − Xn) is a solution of the homogeneous system.

Conversely,

  ai1 (X1 + y1) + . . . + ain (Xn + yn)
    = (ai1 X1 + . . . + ain Xn) + (ai1 y1 + . . . + ain yn) = bi + 0 = bi

so summing to (X1, . . . , Xn) any solution of the homogeneous system gives again a solution.
