Numerical Methods-04
Lecture Note 04
Introduction
• If an equation system contains only first-order terms of its parameters or variables, the equation system is said to be a linear algebraic equation system. It has the following general form:

a11 x1 + a12 x2 + . . . + a1n xn = b1
a21 x1 + a22 x2 + . . . + a2n xn = b2
. . .
an1 x1 + an2 x2 + . . . + ann xn = bn

or, in matrix form, Ax = b, where A is the coefficient matrix, x is the vector of unknowns, and b is the constant vector.
A system of n linear equations in n unknowns can have a unique solution, infinitely many solutions, or no solution, depending on the properties of its coefficient matrix. A system has a unique solution if:
• The determinant of the coefficient matrix is nonzero (det(A) ≠ 0 for square matrices).
• The matrix is nonsingular, meaning its rows and columns are linearly independent (no row can be written as a combination of the other rows).
• The system is consistent and has exactly one intersection point.
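The nonsingularity test above can be sketched in code. This is a minimal pure-Python cofactor (Laplace) expansion, adequate for the small matrices in these notes but not for large systems; the sample matrices are illustrative, not taken from the lecture:

```python
# Minimal sketch: decide whether A x = b has a unique solution by
# checking det(A) != 0, using cofactor (Laplace) expansion along row 0.

def det(A):
    """Determinant of a square matrix by cofactor expansion (small n only)."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0.0
    for j in range(n):
        # minor: delete row 0 and column j
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(minor)
    return total

A = [[1.0, 1.0, 1.0],
     [1.0, 2.0, 2.0],
     [1.0, 2.0, 3.0]]
print(det(A))                      # nonzero -> unique solution

B = [[1.0, 2.0],
     [2.0, 4.0]]                   # row 2 = 2 * row 1
print(det(B))                      # 0.0 -> singular, no unique solution
```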
A system has infinitely many solutions when:
• The determinant of the coefficient matrix is zero (det(A) = 0), and
• The system is consistent: the rank of A equals the rank of the augmented matrix [A|b] but is less than n. Geometrically, the equations describe the same line or plane, so every point on it is a solution.
A system has no solution when:
• The determinant of the coefficient matrix is zero (det(A) = 0), and
• The system is inconsistent: the rank of the augmented matrix [A|b] is greater than the rank of A. Geometrically, the equations describe parallel lines or planes that never intersect.
Cramer’s Rule
Cramer's Rule is another solution technique used to solve systems of linear algebraic equations when the coefficient matrix is square and has a nonzero determinant.
The solution for each unknown is obtained using determinants:

xi = det(Ai) / det(A),   i = 1, 2, . . . , n

where:
• Ai is the matrix obtained by replacing the i-th column of A with the constant vector b.
• det(A) is the determinant of the original coefficient matrix.
• det(Ai) is the determinant of the modified matrix Ai.
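The rule can be sketched in a few lines of Python. This is a hedged illustration, not the lecture's code: the helper names are ours, and the test system is the one consistent with the determinants shown in Example 1 (det(A) = −0.0022, det(A2) = 0.0649):

```python
# Sketch of Cramer's rule: x_i = det(A_i) / det(A), where A_i is A with
# its i-th column replaced by b. Practical only for very small systems,
# since cofactor expansion costs grow factorially with n.

def det(A):
    n = len(A)
    if n == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] *
               det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(n))

def cramer(A, b):
    d = det(A)
    if d == 0:
        raise ValueError("det(A) = 0: no unique solution")
    x = []
    for i in range(len(A)):
        # replace column i of A with the constant vector b
        Ai = [row[:i] + [bi] + row[i + 1:] for row, bi in zip(A, b)]
        x.append(det(Ai) / d)
    return x

A = [[0.3, 0.52, 1.0],
     [0.5, 1.0,  1.9],
     [0.1, 0.3,  0.5]]
b = [-0.01, 0.67, -0.44]
print(cramer(A, b))        # approximately [-14.9, -29.5, 19.8]
```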
Example 1: Cramer’s Rule
To find x2, replace the second column of A with the constant vector b and divide by det(A):

     | 0.3  -0.01  1.0 |
     | 0.5   0.67  1.9 |
     | 0.1  -0.44  0.5 |     0.0649
x2 = -------------------- = --------- = -29.5
          det(A)             -0.0022
Direct Methods
Here are three popular direct methods, each utilizing elementary row operations to transform a system of linear equations into a more easily solvable form:
• Gauss elimination
• LU decomposition
• Gauss-Jordan elimination
Gauss Elimination
Gauss elimination proceeds in two phases: forward elimination, which uses elementary row operations to reduce the coefficient matrix to upper triangular form, and back substitution, which solves for the unknowns starting from the last equation and working upward.
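The two phases can be sketched as follows. This is a minimal illustration (no pivoting, so it assumes no zero pivot is encountered); the 3 × 3 system used below is our own example, not the system of Example 2, whose numbers are only partially preserved in these notes:

```python
# Sketch of naive Gauss elimination: forward elimination to upper
# triangular form, then back substitution. Assumes nonzero pivots.

def gauss_solve(A, b):
    n = len(A)
    A = [row[:] for row in A]          # work on copies
    b = b[:]
    # forward elimination
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]      # multiplier (this is L[i][k])
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    # back substitution
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

A = [[ 2.0,  1.0, -1.0],
     [-3.0, -1.0,  2.0],
     [-2.0,  1.0,  2.0]]
b = [8.0, -11.0, -3.0]
print(gauss_solve(A, b))   # approximately [2, 3, -1]
```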
Example 2: Gauss Elimination
Back substitution:
Obtain the value of x3 from (9)
x3 = 70.0843 / 10.0200
x3 = 7.00003 (10)
x1 = 3.0    (12)
Pivoting in Gaussian Elimination
If a pivot element is zero, elimination breaks down; if it is very small, round-off errors can grow. Partial pivoting addresses this: at each elimination step, rows are swapped so that the pivot row is the one whose element in the pivot column has the largest absolute value.
Example 3: Solving the following system using Gaussian Elimination with pivoting.
Solution:
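The pivoting strategy can be sketched by extending the elimination loop with a row swap. This is our own minimal illustration; the 2 × 2 test systems are chosen to show the two failure modes partial pivoting prevents (a zero pivot, and a very small pivot like the 0.00002 coefficient discussed under Scaling below):

```python
# Sketch of Gauss elimination with partial pivoting: before eliminating
# column k, swap in the row whose entry in column k has the largest
# absolute value. This avoids zero pivots and limits round-off growth.

def gauss_pivot_solve(A, b):
    n = len(A)
    A = [row[:] for row in A]
    b = b[:]
    for k in range(n - 1):
        # partial pivoting: pick the largest |A[i][k]| for i >= k
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        if p != k:
            A[k], A[p] = A[p], A[k]
            b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

# zero pivot in position (0, 0): naive elimination would divide by zero
print(gauss_pivot_solve([[0.0, 1.0], [1.0, 1.0]], [1.0, 2.0]))

# tiny pivot: pivoting keeps the arithmetic well-behaved
print(gauss_pivot_solve([[0.00002, 1.0], [1.0, 1.0]], [1.0, 2.0]))
```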
Scaling
Scaling normalizes each equation by dividing it by the coefficient of largest absolute value in that row. It is useful when the coefficients in different rows differ by orders of magnitude, because it lets partial pivoting select the pivot that is largest in relative terms.
• Scaling transforms the original equations to
0.00002x1 + x2 = 1
x1 + x2 = 2
Solution: x1 ≈ x2 ≈ 1
LU Decomposition Method
A = LU
where:
• L is a lower triangular matrix with ones on the diagonal.
• U is an upper triangular matrix.
One reason for introducing LU decomposition is that it provides an effective means of calculating the inverse of the matrix A.
Once A is factored, the system Ax = b is solved in two steps:
1. Solve Ly = b for y using forward substitution.
2. Solve Ux = y for x using back substitution.
The commonly used variants differ in the constraints they place on the factors:

Name                         Constraints
Doolittle’s decomposition    Lii = 1, i = 1, 2, . . . , n
Choleski’s decomposition     L = U^T
Doolittle’s Decomposition Method
Doolittle’s decomposition is closely related to Gauss elimination. To illustrate the relationship, consider a 3 × 3 matrix A and assume that there exist triangular matrices

    | 1    0    0 |        | U11  U12  U13 |
L = | L21  1    0 |    U = | 0    U22  U23 |
    | L31  L32  1 |        | 0    0    U33 |

such that A = LU. Then, we get:

    | U11      U12                U13                      |
A = | U11 L21  U12 L21 + U22      U13 L21 + U23            |
    | U11 L31  U12 L31 + U22 L32  U13 L31 + U23 L32 + U33  |

The first pass of the elimination procedure consists of choosing the first row as the pivot row and applying the elementary operations row 2 ← row 2 − L21 × row 1 and row 3 ← row 3 − L31 × row 1:

     | U11  U12      U13            |
A’ = | 0    U22      U23            |
     | 0    U22 L32  U23 L32 + U33  |

1. The matrix U is identical to the upper triangular matrix that results from Gauss elimination.
2. The off-diagonal elements of L are the pivot-equation multipliers used during Gauss elimination; that is, Lij is the multiplier that eliminated Aij.

The diagonal elements of L do not have to be stored, because it is understood that each of them is unity. The final form of the coefficient matrix would thus be the following mixture of L and U:

| U11  U12  U13 |
| L21  U22  U23 |
| L31  L32  U33 |
Example 4: Solve the following system of equations with Doolittle’s Decomposition Method. Let A = LU, where L is the lower triangular matrix and U is the upper triangular matrix, and assume that the diagonal entries of L are equal to 1.

x1 + x2 + x3 = 5
x1 + 2x2 + 2x3 = 6
x1 + 2x2 + 3x3 = 8
Solution:

    | 1  1  1 |        | x1 |        | 5 |
A = | 1  2  2 |    X = | x2 |    B = | 6 |
    | 1  2  3 |        | x3 |        | 8 |

          | 1  0  0 |   | 1  1  1 |
A = LU =  | 1  1  0 | . | 0  1  1 |
          | 1  1  1 |   | 0  0  1 |

Forward substitution, Ly = B:
y1 = 5
y1 + y2 = 6        →  y2 = 1
y1 + y2 + y3 = 8   →  y3 = 2

Back substitution, Ux = y:
x3 = 2
x2 + x3 = 1        →  x2 = -1
x1 + x2 + x3 = 5   →  x1 = 4

    |  4 |
X = | -1 |
    |  2 |
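Example 4 can be reproduced with a short Doolittle factorization sketch. The function names are ours, a minimal illustration rather than a library-grade routine (no pivoting, so it assumes the factorization exists without row exchanges):

```python
# Sketch of Doolittle's method: factor A = LU with unit diagonal in L,
# then solve Ly = b (forward substitution) and Ux = y (back substitution).

def lu_decompose(A):
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0
        for j in range(i, n):          # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):      # column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

def lu_solve(L, U, b):
    n = len(b)
    y = [0.0] * n
    for i in range(n):                 # forward: Ly = b
        y[i] = b[i] - sum(L[i][k] * y[k] for k in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):     # backward: Ux = y
        x[i] = (y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x

A = [[1.0, 1.0, 1.0],
     [1.0, 2.0, 2.0],
     [1.0, 2.0, 3.0]]
L, U = lu_decompose(A)
print(lu_solve(L, U, [5.0, 6.0, 8.0]))   # [4.0, -1.0, 2.0]
```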
Choleski’s Decomposition Method
Choleski’s decomposition requires approximately n³/6 long operations plus n square-root computations. This is about half the number of operations required in LU decomposition. The relative efficiency of Choleski’s decomposition is due to its exploitation of symmetry; it applies to symmetric, positive definite matrices.

A = L L^T

Choleski’s decomposition of a 3 × 3 matrix:

| A11  A12  A13 |   | L11  0    0   |   | L11  L21  L31 |
| A12  A22  A23 | = | L21  L22  0   | . | 0    L22  L32 |
| A13  A23  A33 |   | L31  L32  L33 |   | 0    0    L33 |
After completing the matrix multiplication on the right-hand side, we get

| A11  A12  A13 |   | L11²     L11 L21            L11 L31             |
| A12  A22  A23 | = | L11 L21  L21² + L22²        L21 L31 + L22 L32   |
| A13  A23  A33 |   | L11 L31  L21 L31 + L22 L32  L31² + L32² + L33²  |

By equating the elements in the first column, starting with the first row and proceeding downward, we can compute L11, L21, and L31 in that order:

L11 = √A11      L21 = A12 / L11      L31 = A13 / L11

The second column, starting with the second row, yields L22 and L32:

L22 = √(A22 − L21²)      L32 = (A23 − L21 L31) / L22

Proceeding to the other columns, we observe that the only unknown in the equation for Aij is Lij (the other elements of L appearing in the equation have already been computed). Taking the term containing Lij outside the summation, we obtain:

Aij = Σ(k=1..j) Lik Ljk

Ljj = √(Ajj − Σ(k=1..j−1) Ljk²),   j = 2, 3, . . . , n

Lij = (Aij − Σ(k=1..j−1) Lik Ljk) / Ljj,   j = 2, 3, . . . , n − 1,  i = j + 1, j + 2, . . . , n
Example 5: Compute Choleski’s decomposition of the matrix

    |  4  -2   2 |
A = | -2   2  -4 |
    |  2  -4  11 |

Solution:
L11 = √A11 = √4 = 2           L22 = √(A22 − L21²) = √(2 − 1) = 1
L21 = A12/L11 = -2/2 = -1     L32 = (A23 − L21 L31)/L22 = (-4 − (-1)(1))/1 = -3
L31 = A13/L11 = 2/2 = 1       L33 = √(A33 − L31² − L32²) = √(11 − 1 − 9) = 1

    |  2   0   0 |
L = | -1   1   0 |
    |  1  -3   1 |

Check, A = L L^T:

|  2   0   0 |   | 2  -1   1 |   |  4  -2   2 |
| -1   1   0 | . | 0   1  -3 | = | -2   2  -4 |
|  1  -3   1 |   | 0   0   1 |   |  2  -4  11 |
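The column-by-column recipe above translates directly into code. A minimal sketch (our own helper name; assumes a symmetric positive definite input), applied to the matrix consistent with the factors computed in Example 5:

```python
# Sketch of Choleski's decomposition A = L L^T for a symmetric
# positive definite matrix, computed column by column:
#   Ljj = sqrt(Ajj - sum_k Ljk^2),  Lij = (Aij - sum_k Lik Ljk) / Ljj

from math import sqrt

def cholesky(A):
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for j in range(n):
        # diagonal entry of column j
        L[j][j] = sqrt(A[j][j] - sum(L[j][k] ** 2 for k in range(j)))
        # entries below the diagonal in column j
        for i in range(j + 1, n):
            L[i][j] = (A[i][j] - sum(L[i][k] * L[j][k] for k in range(j))) / L[j][j]
    return L

A = [[ 4.0, -2.0,  2.0],
     [-2.0,  2.0, -4.0],
     [ 2.0, -4.0, 11.0]]
for row in cholesky(A):
    print(row)         # rows of L: [2, 0, 0], [-1, 1, 0], [1, -3, 1]
```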
Gauss-Jordan Method
The steps of Gauss-Jordan elimination:
1. Write the augmented matrix [A|b] of the system.
2. Use elementary row operations to obtain a leading 1 in each pivot position and zeros below each pivot.
3. Continue the row operations to create zeros above each pivot as well, until the left part of the augmented matrix is the identity matrix (reduced row echelon form).
4. Read the solution directly from the last column.
Example 6: Solve the following system by using the Gauss-Jordan elimination method.
x + y + z = 5
2x + 3y + 5z = 8
4x + 5z = 2
Solution: The augmented matrix of the system is the following.

| 1  1  1 |   5 |
| 2  3  5 |   8 |
| 4  0  5 |   2 |

R2 - 2R1:
| 1  1  1 |   5 |
| 0  1  3 |  -2 |
| 4  0  5 |   2 |

R3 - 4R1:
| 1  1  1 |   5 |
| 0  1  3 |  -2 |
| 0 -4  1 | -18 |

R3 + 4R2:
| 1  1  1 |   5 |
| 0  1  3 |  -2 |
| 0  0 13 | -26 |

(1/13)R3:
| 1  1  1 |   5 |
| 0  1  3 |  -2 |
| 0  0  1 |  -2 |

R2 - 3R3:
| 1  1  1 |   5 |
| 0  1  0 |   4 |
| 0  0  1 |  -2 |

R1 - R3:
| 1  1  0 |   7 |
| 0  1  0 |   4 |
| 0  0  1 |  -2 |

R1 - R2:
| 1  0  0 |   3 |
| 0  1  0 |   4 |
| 0  0  1 |  -2 |
From this final matrix, we can read the solution of the system. It is
x = 3
y = 4
z = -2
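The row-operation sequence above can be automated in a few lines. A minimal sketch under the same assumption as the hand calculation (no zero pivot is ever encountered, so no row exchanges are needed), run on the augmented matrix of Example 6:

```python
# Sketch of Gauss-Jordan elimination on an augmented matrix [A|b]:
# normalize each pivot row, then clear the pivot column in every other
# row. The left block becomes the identity; the last column is x.

def gauss_jordan(aug):
    n = len(aug)
    for k in range(n):
        piv = aug[k][k]                       # assumed nonzero
        aug[k] = [v / piv for v in aug[k]]    # make the pivot 1
        for i in range(n):
            if i != k:
                m = aug[i][k]
                # subtract m * (pivot row) to zero out column k
                aug[i] = [a - m * p for a, p in zip(aug[i], aug[k])]
    return [row[-1] for row in aug]

aug = [[1.0, 1.0, 1.0, 5.0],
       [2.0, 3.0, 5.0, 8.0],
       [4.0, 0.0, 5.0, 2.0]]
print(gauss_jordan(aug))   # [3.0, 4.0, -2.0]
```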
Iterative Methods
Iterative methods start from an initial guess x^(0) and generate a sequence of approximations x^(1), x^(2), . . . that, ideally, converges to the solution of Ax = b.
• One major advantage of iterative methods is their ability to store only the nonzero elements of the coefficient matrix. This makes them especially well-suited for solving large sparse systems, which frequently occur in scientific computing, simulations, and engineering applications.
• Despite these strengths, iterative methods do have a drawback: they do not always guarantee convergence. In other words, depending on the properties of the coefficient matrix, the method may fail to reach an acceptable approximation of the solution.
Jacobi method
A sufficient condition for convergence of the Jacobi method is that the coefficient matrix is diagonally dominant: in every row, the magnitude of the diagonal element exceeds the sum of the magnitudes of the off-diagonal elements, |aii| > Σ(j≠i) |aij|. A system that is not diagonally dominant can often be made so by reordering its equations.
Jacobi iteration
Starting from an initial guess x^(0), each component is updated using only values from the previous iterate:

x1^(1) = (1/a11) (b1 − a12 x2^(0) − . . . − a1n xn^(0))
x2^(1) = (1/a22) (b2 − a21 x1^(0) − a23 x3^(0) − . . . − a2n xn^(0))
. . .
xn^(1) = (1/ann) (bn − an1 x1^(0) − an2 x2^(0) − . . . − an,n−1 xn−1^(0))

In general, for iteration k:

xi^(k+1) = (1/aii) (bi − Σ(j=1..i−1) aij xj^(k) − Σ(j=i+1..n) aij xj^(k))
Jacobi method Algorithm
1. Choose an initial guess x^(0) and a tolerance ε.
2. Compute x^(k+1) from x^(k) using the Jacobi update formula.
3. Check if ∥x^(k+1) − x^(k)∥ < ε.
4. Repeat until convergence.
Example 7: (Jacobi Iteration)

| 4  -1   1 | | x1 |   |  7  |
| 4  -8   1 | | x2 | = | -21 |
| -2  1   5 | | x3 |   |  15 |

Starting guess: x^(0) = (0, 0, 0), with residual ∥b − Ax^(0)∥ = 26.7395.

Given the second iterate x^(2) = (1.65625, 3.875, 4.225), the third iteration gives:

x1^(3) = (7 + 3.875 − 4.225) / 4 = 1.6625
x2^(3) = (21 + 4(1.65625) + 4.225) / 8 = 3.98125
x3^(3) = (15 + 2(1.65625) − 3.875) / 5 = 2.8875

The residual after this iteration is ∥b − Ax^(3)∥ = 1.9534.
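The iteration of Example 7 can be sketched in code. This is our own minimal illustration (the helper name `jacobi` is not from the lecture); since the coefficient matrix is diagonally dominant, the iterates converge to the exact solution (2, 4, 3):

```python
# Sketch of Jacobi iteration: every component of x^(k+1) is computed
# from the previous iterate x^(k) only.

def jacobi(A, b, x0, iters):
    n = len(b)
    x = x0[:]
    for _ in range(iters):
        x_new = [0.0] * n
        for i in range(n):
            # sum of off-diagonal terms using the OLD iterate
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x_new[i] = (b[i] - s) / A[i][i]
        x = x_new
    return x

A = [[ 4.0, -1.0, 1.0],
     [ 4.0, -8.0, 1.0],
     [-2.0,  1.0, 5.0]]
b = [7.0, -21.0, 15.0]
print(jacobi(A, b, [0.0, 0.0, 0.0], 60))   # converges toward [2, 4, 3]
```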
Gauss-Seidel method
The Gauss-Seidel method is a refinement of the Jacobi method: as soon as a component of the new iterate has been computed, it is used immediately in the remaining updates of the same sweep. This usually speeds up convergence.
Gauss-Seidel (GS) iteration
The first sweep, starting from x^(0), uses each new value as soon as it is available:

x1^(1) = (1/a11) (b1 − a12 x2^(0) − . . . − a1n xn^(0))
x2^(1) = (1/a22) (b2 − a21 x1^(1) − a23 x3^(0) − . . . − a2n xn^(0))
. . .
xn^(1) = (1/ann) (bn − an1 x1^(1) − an2 x2^(1) − . . . − an,n−1 xn−1^(1))

In general, for iteration k:

xi^(k+1) = (1/aii) (bi − Σ(j=1..i−1) aij xj^(k+1) − Σ(j=i+1..n) aij xj^(k))
Example 8: Gauss-Seidel Iteration

| 4  -1   1 | | x1 |   |  7  |
| 4  -8   1 | | x2 | = | -21 |
| -2  1   5 | | x3 |   |  15 |

Starting guess: x^(0) = (0, 0, 0), with residual ∥b − Ax^(0)∥ = 26.7395.

First sweep:
x1^(1) = (7 + x2^(0) − x3^(0)) / 4 = 7/4 = 1.75
x2^(1) = (21 + 4 x1^(1) + x3^(0)) / 8 = (21 + 7 + 0)/8 = 3.5
x3^(1) = (15 + 2 x1^(1) − x2^(1)) / 5 = (15 + 3.5 − 3.5)/5 = 3.0

The residual after one sweep is ∥b − Ax^(1)∥ = 3.0414.
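The sweep of Example 8 can be sketched by a one-line change to the Jacobi code: the iterate is overwritten in place, so later components already see the updated values. A minimal illustration with our own helper name:

```python
# Sketch of Gauss-Seidel iteration: each updated component is used
# immediately within the same sweep (x is overwritten in place).

def gauss_seidel(A, b, x0, iters):
    n = len(b)
    x = x0[:]
    for _ in range(iters):
        for i in range(n):
            # off-diagonal terms: components j < i are already updated
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

A = [[ 4.0, -1.0, 1.0],
     [ 4.0, -8.0, 1.0],
     [-2.0,  1.0, 5.0]]
b = [7.0, -21.0, 15.0]
print(gauss_seidel(A, b, [0.0, 0.0, 0.0], 1))    # [1.75, 3.5, 3.0]
print(gauss_seidel(A, b, [0.0, 0.0, 0.0], 40))   # converges toward [2, 4, 3]
```

The first sweep reproduces the hand calculation of Example 8 exactly, and the later sweeps converge toward the solution (2, 4, 3) noticeably faster than Jacobi on the same system.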