
NUMERICAL METHODS

Lecture Note 04

Dr. Oubah ISMAN OKIEH


Systems of Linear
Algebraic Equations
Introduction
• Systems of linear algebraic equations are fundamental in numerical
methods for electrical engineering, as they frequently arise in circuit
analysis, power system studies, and electromagnetic simulations.
• These equations are commonly used in applications such as
Kirchhoff’s circuit laws, load flow analysis, and finite element
modeling in electromagnetics.
• Solving large-scale systems efficiently is crucial due to the increasing
complexity of modern electrical networks and computational
constraints.

Introduction
• If an equation system contains only first-order expressions of its parameters or variables, the equation system is said to be a linear algebraic equation system. It has the following form:

a11x1 + a12x2 + · · · + a1nxn = b1
a21x1 + a22x2 + · · · + a2nxn = b2
        . . .
an1x1 + an2x2 + · · · + annxn = bn

where the coefficients aij and the constants bi are known, while x1, x2, . . . , xn represent the unknowns.

Introduction
• In matrix notation the equations are written as:

[A] · [x] = [b]

or, in compact form:

Ax = b

Introduction
A system of n linear equations in n unknowns can have a unique solution, infinitely many solutions, or no solution, depending on the properties of its coefficient matrix. A system has a unique solution if:
• The determinant of the coefficient matrix is nonzero (det(A) ≠ 0 for square matrices).
• The matrix is nonsingular, meaning its rows and columns are linearly independent (no row can be written as a combination of other rows).
• The system is consistent and has exactly one intersection point.
Introduction
A system has infinitely many solutions when:

• The coefficient matrix is singular (det(A) = 0), meaning its rows and columns are linearly dependent.
• One equation is a multiple of another.
• The system is consistent (the augmented matrix does not produce a contradiction).

Introduction
A system has no solutions when:

• The coefficient matrix is singular, but the augmented matrix (including the right-hand-side vector) leads to an inconsistency.
• The equations contradict each other, for example x1 + x2 = 1 and x1 + x2 = 2.

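These three cases can be checked numerically by comparing the rank of A with the rank of the augmented matrix [A b]. A minimal MATLAB sketch (the function name classify_system is illustrative, not from these notes):

function classify_system(A, b)
% Classify a linear system Ax = b by comparing ranks.
rA  = rank(A);          % rank of the coefficient matrix
rAb = rank([A b]);      % rank of the augmented matrix
if rA < rAb
    disp('No solution: the augmented system is inconsistent.')
elseif rA == size(A, 2)
    disp('Unique solution: the coefficient matrix is nonsingular.')
else
    disp('Infinitely many solutions: consistent but rank-deficient.')
end
end

For instance, classify_system([1 1; 1 1], [1; 2]) reports no solution, matching the contradictory example above.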
Cramer’s Rule
Cramer's Rule is another solution technique used to solve systems of linear algebraic equations when the coefficient matrix is square and has a nonzero determinant.
The solution for each unknown is obtained using determinants:

xi = det(Ai) / det(A)

where:
• Ai is the matrix obtained by replacing the i-th column of A with the constant vector b.
• det(A) is the determinant of the original coefficient matrix.
• det(Ai) is the determinant of the modified matrix Ai.
Example 1: Cramer’s Rule

Solve the given equation system by using Cramer's rule.

0.3x1 + 0.52x2 + x3 = -0.01
0.5x1 + x2 + 1.9x3 = 0.67
0.1x1 + 0.3x2 + 0.5x3 = -0.44

     | 0.3  0.52  1   |
D =  | 0.5  1     1.9 |  = -0.0022
     | 0.1  0.3   0.5 |

     | -0.01  0.52  1   |
     |  0.67  1     1.9 |
     | -0.44  0.3   0.5 |     0.03278
x1 = --------------------- = ---------- = -14.9
           -0.0022            -0.0022

     | 0.3  -0.01  1   |
     | 0.5   0.67  1.9 |
     | 0.1  -0.44  0.5 |      0.0649
x2 = --------------------- = ---------- = -29.5
           -0.0022            -0.0022

     | 0.3  0.52  -0.01 |
     | 0.5  1      0.67 |
     | 0.1  0.3   -0.44 |     -0.04356
x3 = --------------------- = ---------- = 19.8
           -0.0022            -0.0022
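The same computation is easy to script. A short MATLAB sketch of Cramer's rule, applied to Example 1:

% Cramer's rule for the 3x3 system of Example 1
A = [0.3 0.52 1; 0.5 1 1.9; 0.1 0.3 0.5];
b = [-0.01; 0.67; -0.44];
D = det(A);                      % determinant of the coefficient matrix
x = zeros(3, 1);
for i = 1:3
    Ai = A;
    Ai(:, i) = b;                % replace the i-th column with b
    x(i) = det(Ai) / D;
end
disp(x)                          % approximately [-14.9; -29.5; 19.8]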
Methods of Solution
Two families of numerical techniques are most often employed to solve systems of linear algebraic equations:

• Direct methods, such as Gaussian elimination and LU decomposition, which provide exact solutions (up to round-off error) but can be computationally expensive for large systems.
• Iterative methods, such as Jacobi and Gauss-Seidel, which are more suitable for sparse and large-scale matrices.

Direct Methods
Three popular direct methods are covered here, each utilizing elementary operations to transform a system of linear equations into a more easily solvable form: Gauss elimination, LU decomposition, and the Gauss-Jordan method.

Gauss Elimination

• This method can be applied to large sets of equations.
• Gaussian elimination is a direct method used to solve systems of linear equations. It transforms a given system into an upper triangular form using elementary row operations, making it easy to solve by back-substitution.

The method has two steps:

1. Forward elimination of unknowns: converts the coefficient matrix into an upper triangular form by eliminating the elements below the main diagonal.
2. Back substitution: solves the system starting from the last equation (bottom row) and working upwards.
Gauss Elimination

1. Forward elimination reduces the set of equations to an upper triangular system: convert the system to an augmented matrix, then reduce the coefficient part to an upper triangular matrix.

Gauss Elimination

2. Back substitution: once the system is in upper triangular form, we solve for xn first, then back-substitute to find the other unknowns.

• The process is repeated to evaluate the remaining x's.

Example 2 : Gauss Elimination

3x1 - 0.1x2 - 0.2x3 = 7.85     (1)
0.1x1 + 7x2 - 0.3x3 = -19.3    (2)
0.3x1 - 0.2x2 + 10x3 = 71.4    (3)

Carry six significant figures during the computation.

Example 2 : Gauss Elimination

Forward elimination.
Multiply equation (1) by (0.1)/3 and subtract the result from (2) to give

7.00333x2 - 0.293333x3 = -19.5617

Example 2 : Gauss Elimination

Multiply equation (1) by (0.3)/3 and subtract the result from (3) to eliminate x1:

-0.190000x2 + 10.0200x3 = 70.6150

Example 2 : Gauss Elimination

After the first elimination pass the system becomes:

3x1 - 0.1x2 - 0.2x3 = 7.85            (4)
7.00333x2 - 0.293333x3 = -19.5617     (5)
-0.190000x2 + 10.0200x3 = 70.6150     (6)
Example 2 : Gauss Elimination

To eliminate the -0.190000x2 term from (6):

• multiply (5) by -0.190000 / 7.00333,
• subtract the result from (6).

Example 2 : Gauss Elimination

The result is the system in upper triangular form:

3x1 - 0.1x2 - 0.2x3 = 7.85            (7)
7.00333x2 - 0.293333x3 = -19.5617     (8)
10.0120x3 = 70.0843                   (9)

Example 2 : Gauss Elimination

Back substitution:
Obtain the value of x3 from (9):
x3 = 70.0843 / 10.0120
x3 = 7.00003    (10)

Example 2 : Gauss Elimination

Substitute the value of x3 into (8):

7.00333x2 - 0.293333(7.00003) = -19.5617

x2 = (-19.5617 + 0.293333(7.00003)) / 7.00333
x2 = -2.50000    (11)

Example 2 : Gauss Elimination

Substitute (10) and (11) into (7):

3x1 - 0.1(-2.5) - 0.2(7.00003) = 7.85

x1 = (7.85 + 0.1(-2.5) + 0.2(7.00003)) / 3
x1 = 3.00000    (12)
Example 2 : Gauss Elimination

Although there is a slight round-off error, the results are very close to the exact solution of
x1 = 3, x2 = -2.5, and x3 = 7.

Verify the results:

3(3) - 0.1(-2.5) - 0.2(7.00003) = 7.84999 ≈ 7.85
0.1(3) + 7(-2.5) - 0.3(7.00003) = -19.3000 ≈ -19.3
0.3(3) - 0.2(-2.5) + 10(7.00003) = 71.4003 ≈ 71.4

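The whole procedure is easy to automate. A minimal MATLAB sketch of naive Gauss elimination (no pivoting, so it assumes nonzero pivots; the function name gauss_naive is illustrative), checked against Example 2:

function x = gauss_naive(A, b)
% Naive Gauss elimination: forward elimination + back substitution.
n  = length(b);
Ab = [A b];                                % augmented matrix
for k = 1:n-1                              % forward elimination
    for i = k+1:n
        m = Ab(i,k) / Ab(k,k);             % multiplier (pivot assumed nonzero)
        Ab(i,k:end) = Ab(i,k:end) - m*Ab(k,k:end);
    end
end
x = zeros(n, 1);                           % back substitution
x(n) = Ab(n,end) / Ab(n,n);
for i = n-1:-1:1
    x(i) = (Ab(i,end) - Ab(i,i+1:n)*x(i+1:n)) / Ab(i,i);
end
end

% Example 2:
% A = [3 -0.1 -0.2; 0.1 7 -0.3; 0.3 -0.2 10];  b = [7.85; -19.3; 71.4];
% gauss_naive(A, b) returns approximately [3; -2.5; 7]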
Pivoting in Gaussian Elimination

Pivoting is a technique used in Gaussian elimination to improve numerical stability. It is essential for:
• Avoiding division by zero (if a pivot element is zero, elimination cannot proceed).
• Reducing round-off errors (small pivot values can lead to large numerical errors).

To avoid these problems, rows (and possibly columns) are interchanged at each step to put the coefficient with the largest magnitude on the diagonal.

Pivoting in Gaussian Elimination

There are three main types of pivoting used in Gaussian elimination:

• Partial pivoting (row pivoting): at each step, the row with the largest absolute value in the pivot column is swapped with the current row.
• Complete pivoting (row and column pivoting): at each step, both row and column swaps are performed to select the element with the largest absolute value in the entire remaining matrix.
• Scaled partial pivoting: similar to partial pivoting, but each row is scaled before comparison. The row with the largest relative pivot (ratio of the pivot candidate to the largest element in that row) is selected.
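Partial pivoting only changes the forward-elimination loop of the sketch shown after Example 2: before eliminating in column k, the row holding the largest pivot candidate is swapped into the pivot position. A minimal MATLAB sketch (function name gauss_pivot is illustrative):

function x = gauss_pivot(A, b)
% Gauss elimination with partial (row) pivoting.
n  = length(b);
Ab = [A b];
for k = 1:n-1
    [~, p] = max(abs(Ab(k:n, k)));         % largest pivot candidate in column k
    p = p + k - 1;
    Ab([k p], :) = Ab([p k], :);           % swap it into the pivot row
    for i = k+1:n
        m = Ab(i,k) / Ab(k,k);
        Ab(i,k:end) = Ab(i,k:end) - m*Ab(k,k:end);
    end
end
x = zeros(n, 1);
x(n) = Ab(n,end) / Ab(n,n);
for i = n-1:-1:1
    x(i) = (Ab(i,end) - Ab(i,i+1:n)*x(i+1:n)) / Ab(i,i);
end
end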
Example 3: Solving the following system using Gaussian Elimination
with pivoting.

Solution:

To eliminate x1, we must swap the 1st row with another row, since its pivot is 0. The 4th row is chosen because 6 is the largest available pivot in the first column.

Scaling

• Scaling is used to minimize round-off errors for those cases where some of the equations in a system have larger coefficients than others.

• Solve the following set of equations using Gauss elimination with a scaling transformation:

2x1 + 100,000x2 = 100,000
x1 + x2 = 2

Scaling
• Scaling transforms the original equations to

0.00002x1 + x2 = 1
x1 + x2 = 2

• Pivot the rows to put the greatest value on the diagonal:

x1 + x2 = 2
0.00002x1 + x2 = 1

Solution: x1 = x2 = 1
LU Decomposition Method

• LU decomposition is one of the direct methods used for solving systems of linear algebraic equations. It decomposes (factors) the coefficient matrix A into two matrices:

A = LU

where:
• L is a lower triangular matrix with ones on the diagonal.
• U is an upper triangular matrix.

One reason for using LU decomposition is that it provides an effective means of calculating the inverse of the matrix.

Once A is factored, the system Ax = b is solved in two steps:
1. Solve Ly = b for y using forward substitution.
2. Solve Ux = y for x using back substitution.
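In MATLAB, this two-step solve can be sketched with the built-in lu function, which factors with partial pivoting (P*A = L*U); reusing Example 2's system:

A = [3 -0.1 -0.2; 0.1 7 -0.3; 0.3 -0.2 10];
b = [7.85; -19.3; 71.4];
[L, U, P] = lu(A);        % P*A = L*U, P is a permutation matrix
y = L \ (P*b);            % forward substitution on Ly = Pb
x = U \ y;                % back substitution on Ux = y
disp(x)                   % approximately [3; -2.5; 7]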
LU Decomposition Method

• However, LU decomposition is not unique: many different pairs of L and U can satisfy A = LU unless additional constraints are imposed. To standardize the decomposition, different constraints are used to define specific types of LU factorizations.
• Two commonly used decompositions are listed below:

Name                         Constraints
Doolittle's decomposition    Lii = 1, i = 1, 2, . . . , n
Choleski's decomposition     L = U^T

Doolittle’s Decomposition Method
Doolittle's decomposition is closely related to Gauss elimination. To illustrate the relationship, consider a 3 × 3 matrix A and assume that there exist triangular matrices

L = | 1    0    0 |        U = | U11  U12  U13 |
    | L21  1    0 |            | 0    U22  U23 |
    | L31  L32  1 |            | 0    0    U33 |

such that A = LU. Multiplying out, we get:

A = | U11      U12               U13                |
    | U11 L21  U12 L21 + U22     U13 L21 + U23      |
    | U11 L31  U12 L31 + U22 L32 U13 L31 + U23 L32 + U33 |

The first pass of the elimination procedure consists of choosing the first row as the pivot row and applying the elementary operations

row 2 ← row 2 − L21 × row 1   (eliminates A21)
row 3 ← row 3 − L31 × row 1   (eliminates A31)

The second pass uses the second row as the pivot row:

row 3 ← row 3 − L32 × row 2   (eliminates A32)

after which the matrix that remains is exactly U.
Doolittle’s Decomposition Method
Doolittle's decomposition has two important properties:

1. The matrix U is identical to the upper triangular matrix that results from Gauss elimination.
2. The off-diagonal elements of L are the pivot equation multipliers used during Gauss elimination; that is, Lij is the multiplier that eliminated Aij.

It is usual practice to store the multipliers in the lower triangular portion of the coefficient matrix, replacing the coefficients as they are eliminated (Lij replacing Aij). The diagonal elements of L do not have to be stored, because it is understood that each of them is unity. The final form of the coefficient matrix is thus the following mixture of L and U:

| U11  U12  U13 |
| L21  U22  U23 |
| L31  L32  U33 |

Doolittle’s Decomposition Method
Example 4: Solve the system of equations with Doolittle's decomposition method.

1. Create matrices A, X and B, where A is the coefficient matrix, X contains the unknowns, and B contains the constants.

2. Let A = LU, where L is the lower triangular matrix and U is the upper triangular matrix; assume that the diagonal entries of L are equal to 1.

3. Let Ly = B and solve for the y's (forward substitution).

4. Let Ux = y and solve for the unknowns x (back substitution).

x1 + x2 + x3 = 5
x1 + 2x2 + 2x3 = 6
x1 + 2x2 + 3x3 = 8
Solution:

A = | 1  1  1 |     X = | x1 |     B = | 5 |
    | 1  2  2 |         | x2 |         | 6 |
    | 1  2  3 |         | x3 |         | 8 |

A = LU = | 1  0  0 |   | d  e  f |
         | a  1  0 | · | 0  g  h |
         | b  c  1 |   | 0  0  i |

From the first row:   d = 1,  e = 1,  f = 1
From the second row:  ad = 1 → a = 1;   ae + g = 2 → g = 1;   af + h = 2 → h = 1
From the third row:   bd = 1 → b = 1;   be + cg = 2 → c = 1;   bf + ch + i = 3 → i = 1

Ly = B:

| 1  0  0 |   | y1 |   | 5 |
| 1  1  0 | · | y2 | = | 6 |
| 1  1  1 |   | y3 |   | 8 |

y1 = 5
y1 + y2 = 6  →  y2 = 1
y1 + y2 + y3 = 8  →  y3 = 2

Ux = y:

| 1  1  1 |   | x1 |   | 5 |
| 0  1  1 | · | x2 | = | 1 |
| 0  0  1 |   | x3 |   | 2 |

x3 = 2
x2 + x3 = 1  →  x2 = -1
x1 + x2 + x3 = 5  →  x1 = 4

X = | 4 |
    | -1 |
    | 2 |

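A compact MATLAB sketch of Doolittle's factorization (Lii = 1; the function name doolittle is illustrative), checked against Example 4:

function [L, U] = doolittle(A)
% Doolittle LU decomposition: A = L*U with ones on the diagonal of L.
n = size(A, 1);
L = eye(n);
U = zeros(n);
for k = 1:n
    for j = k:n                                   % k-th row of U
        U(k,j) = A(k,j) - L(k,1:k-1)*U(1:k-1,j);
    end
    for i = k+1:n                                 % k-th column of L
        L(i,k) = (A(i,k) - L(i,1:k-1)*U(1:k-1,k)) / U(k,k);
    end
end
end

% Example 4: [L, U] = doolittle([1 1 1; 1 2 2; 1 2 3])
% gives L = [1 0 0; 1 1 0; 1 1 1] and U = [1 1 1; 0 1 1; 0 0 1];
% then y = L\[5;6;8] = [5;1;2] and x = U\y = [4;-1;2].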
Choleski’s Decomposition Method

Choleski's decomposition A = LL^T has two limitations:

1. Since LL^T is always a symmetric matrix, Choleski's decomposition requires A to be symmetric.
2. The decomposition process involves taking square roots of certain combinations of the elements of A. It can be shown that to avoid square roots of negative numbers, A must be positive definite.

Choleski's decomposition contains approximately n³/6 long operations plus n square root computations. This is about half the number of operations required in LU decomposition. The relative efficiency of Choleski's decomposition is due to its exploitation of symmetry.

Choleski's decomposition of a 3 × 3 matrix has the form:

| A11  A12  A13 |   | L11  0    0   |   | L11  L21  L31 |
| A21  A22  A23 | = | L21  L22  0   | · | 0    L22  L32 |
| A31  A32  A33 |   | L31  L32  L33 |   | 0    0    L33 |

After completing the matrix multiplication on the right-hand side, we can equate elements. By equating the elements in the first column, starting with the first row and proceeding downward, we can compute L11, L21 and L31 in that order:

A11 = L11²       →  L11 = sqrt(A11)
A21 = L11 L21    →  L21 = A21 / L11
A31 = L11 L31    →  L31 = A31 / L11

The second column, starting with the second row, yields L22 and L32:

A22 = L21² + L22²          →  L22 = sqrt(A22 - L21²)
A32 = L21 L31 + L22 L32    →  L32 = (A32 - L21 L31) / L22

Proceeding to the other columns, we observe that the only unknown in the expression for Aij is Lij (the other elements of L appearing in the equation have already been computed). Taking the term containing Lij outside the summation, we obtain:

Aij = Lij Ljj + Σ (k = 1 .. j-1) Lik Ljk

If i = j (a diagonal term), the solution is:

Ljj = sqrt( Ajj - Σ (k = 1 .. j-1) Ljk² ),   j = 2, 3, . . . , n

For a nondiagonal term we get:

Lij = ( Aij - Σ (k = 1 .. j-1) Lik Ljk ) / Ljj,   j = 2, 3, . . . , n - 1,  i = j + 1, j + 2, . . . , n

Example 5: Compute Choleski's decomposition of the matrix

A = |  4  -2   2 |
    | -2   2  -4 |
    |  2  -4  11 |

Solution:
L11 = sqrt(A11) = sqrt(4) = 2        L22 = sqrt(A22 - L21²) = sqrt(2 - 1) = 1
L21 = A21/L11 = -2/2 = -1            L32 = (A32 - L21 L31)/L22 = (-4 - (-1)(1))/1 = -3
L31 = A31/L11 = 2/2 = 1              L33 = sqrt(A33 - L31² - L32²) = sqrt(11 - 1 - 9) = 1

L = |  2   0   0 |
    | -1   1   0 |
    |  1  -3   1 |

Check:   A = LL^T = |  4  -2   2 |
                    | -2   2  -4 |
                    |  2  -4  11 |

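MATLAB's built-in chol can reproduce this result for the matrix of Example 5; with the 'lower' option it returns the lower triangular factor directly (by default chol returns the upper factor R with A = R'*R), and it raises an error if A is not positive definite:

A = [4 -2 2; -2 2 -4; 2 -4 11];
L = chol(A, 'lower')      % lower triangular Choleski factor
% L = [ 2  0  0
%      -1  1  0
%       1 -3  1 ]
norm(A - L*L', 'fro')     % essentially zero: the factorization checks out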
Gauss-Jordan Method

• This is another elimination technique (direct method). It is a variation of Gauss elimination.
• The difference is that when an unknown is eliminated, it is eliminated from all other equations, not just the subsequent ones. At the same time, all rows are normalized by dividing them by their pivot element.
• At the end of the elimination step, the Gauss-Jordan method yields an identity matrix, not an upper triangular one.
• There is no need for back substitution: the right-hand-side vector holds the results.

THE STEPS TO GAUSS-JORDAN ELIMINATION:

• Turn the equations into an augmented matrix.

• Use elementary row operations on the augmented matrix


to transform A into diagonal form. Make sure there are no zeroes
in the diagonal.

• Divide the diagonal element and the right-hand element (b)


for that diagonal element's row so that the diagonal element
is equal to one.

46
Example 6: Solve the following system by using the Gauss-Jordan elimination method.

x + y + z = 5
2x + 3y + 5z = 8
4x + 5z = 2

Solution: The augmented matrix of the system is the following.

| 1  1  1 |   5 |
| 2  3  5 |   8 |
| 4  0  5 |   2 |

We will now perform row operations until we obtain a matrix in reduced row echelon form.

R2 - 2R1:     | 1  1  1 |   5 |
              | 0  1  3 |  -2 |
              | 4  0  5 |   2 |

R3 - 4R1:     | 1  1  1 |   5 |
              | 0  1  3 |  -2 |
              | 0 -4  1 | -18 |

R3 + 4R2:     | 1  1  1 |   5 |
              | 0  1  3 |  -2 |
              | 0  0 13 | -26 |

(1/13)R3:     | 1  1  1 |   5 |
              | 0  1  3 |  -2 |
              | 0  0  1 |  -2 |

R2 - 3R3:     | 1  1  1 |   5 |
              | 0  1  0 |   4 |
              | 0  0  1 |  -2 |

R1 - R3:      | 1  1  0 |   7 |
              | 0  1  0 |   4 |
              | 0  0  1 |  -2 |

R1 - R2:      | 1  0  0 |   3 |
              | 0  1  0 |   4 |
              | 0  0  1 |  -2 |

From this final matrix, we can read the solution of the system. It is

x = 3
y = 4
z = -2

Substitute x, y, z into the system equations; if the right side equals the left side in all three equations, then the solution is correct:

• 3 + 4 - 2 = 5
• 2(3) + 3(4) + 5(-2) = 8
• 4(3) + 5(-2) = 2
MATLAB Implementation of the Gauss-Jordan Method

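A minimal sketch of such an implementation, following the steps above (no pivoting, so it assumes nonzero diagonal entries; the function name gauss_jordan is illustrative):

function x = gauss_jordan(A, b)
% Gauss-Jordan elimination: reduce [A b] until A becomes the identity.
Ab = [A b];
n  = size(A, 1);
for k = 1:n
    Ab(k,:) = Ab(k,:) / Ab(k,k);          % normalize the pivot row
    for i = [1:k-1, k+1:n]                % eliminate x_k from ALL other rows
        Ab(i,:) = Ab(i,:) - Ab(i,k) * Ab(k,:);
    end
end
x = Ab(:, end);                           % right-hand side now holds the solution
end

% Example 6: gauss_jordan([1 1 1; 2 3 5; 4 0 5], [5; 8; 2]) returns [3; 4; -2]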
Iterative Methods

• Unlike direct methods (e.g., Gauss elimination), iterative


methods start with an initial guess for the solution and improve
it through successive approximations.
• These methods often require many iterations to achieve
sufficient accuracy, making them slower than direct methods in
some cases. However, they become highly valuable and
efficient for very large systems where storing and manipulating
the entire matrix is computationally expensive or impractical.

Iterative Methods
• One major advantage of iterative methods is their
ability to store only the nonzero elements of the
coefficient matrix. This makes them especially
well-suited for solving large sparse systems,
which frequently occur in scientific computing,
simulations, and engineering applications.
• Despite these strengths, iterative methods do have a drawback: they do not always guarantee convergence. In other words, depending on the properties of the coefficient matrix, the method may fail to reach an acceptable approximation of the solution.
Iterative Methods

• To apply iterative methods, we solve the i-th equation for xi and write each equation in the form:

xi = ( bi - Σ (j ≠ i) aij xj ) / aii

Jacobi method

• The Jacobi method calculates all new values of x simultaneously, using only the values from the previous iteration.
• The method is guaranteed to converge for diagonally dominant matrices. A square matrix is said to be diagonally dominant if for each row the magnitude of the diagonal element is greater than or equal to the sum of the magnitudes of all other elements in that row.

• For example, the matrix used in Example 7 below,

|  4  -1   1 |
|  4  -8   1 |
| -2   1   5 |

is diagonally dominant (|4| ≥ |-1| + |1|, |-8| ≥ |4| + |1|, |5| ≥ |-2| + |1|), whereas swapping its first two rows would destroy the diagonal dominance.

Jacobi iteration

Given the system

a11 x1 + a12 x2 + · · · + a1n xn = b1
a21 x1 + a22 x2 + · · · + a2n xn = b2
        . . .
an1 x1 + an2 x2 + · · · + ann xn = bn

and an initial guess x(0) = [x1(0), x2(0), . . . , xn(0)]^T, one Jacobi sweep computes

x1(1) = ( b1 - a12 x2(0) - · · · - a1n xn(0) ) / a11
x2(1) = ( b2 - a21 x1(0) - a23 x3(0) - · · · - a2n xn(0) ) / a22
        . . .
xn(1) = ( bn - an1 x1(0) - an2 x2(0) - · · · - an,n-1 xn-1(0) ) / ann

or, in general:

xi(k+1) = ( bi - Σ (j = 1 .. i-1) aij xj(k) - Σ (j = i+1 .. n) aij xj(k) ) / aii

Jacobi method Algorithm

1. Start with initial guess 𝑥(0)


2. For each iteration:
Compute 𝑥(𝑘+1) using previous 𝑥(𝑘)

3. Check if ∥𝑥(𝑘+1)−𝑥(𝑘)∥<𝜖
4. Repeat until convergence

59
Example 7: (Jacobi Iteration)

 4  1 1  x1   7   0
 4  8 1  x    21 x 0  0
   2   b  Ax 0 26.7395
2
  2 1 5  x3   15   0

Diagonally dominant matrix


1 7  x20  x30 7
x 
1  1.75
4 4
0 0
21  4 x  x 21
x12  1 3
 2.625 b  Ax1 10.0452
2
8 8
0 0
1 15  2 x  x 15
x3  1 2
 3.0
5 5 60
x1(2) = ( 7 + x2(1) - x3(1) ) / 4 = ( 7 + 2.625 - 3 ) / 4 = 1.65625
x2(2) = ( 21 + 4 x1(1) + x3(1) ) / 8 = ( 21 + 4(1.75) + 3 ) / 8 = 3.875
x3(2) = ( 15 + 2 x1(1) - x2(1) ) / 5 = ( 15 + 2(1.75) - 2.625 ) / 5 = 3.175

‖b - Ax(2)‖2 = 1.8061

x1(3) = ( 7 + x2(2) - x3(2) ) / 4 = ( 7 + 3.875 - 3.175 ) / 4 = 1.925
x2(3) = ( 21 + 4 x1(2) + x3(2) ) / 8 = ( 21 + 4(1.65625) + 3.175 ) / 8 = 3.85
x3(3) = ( 15 + 2 x1(2) - x2(2) ) / 5 = ( 15 + 2(1.65625) - 3.875 ) / 5 = 2.8875

‖b - Ax(3)‖2 = 1.0027

As the matrix is diagonally dominant, the Jacobi iterations are converging (toward the exact solution x = [2, 4, 3]^T).
MATLAB code for Jacobi

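A minimal sketch of such a routine (the function name jacobi is illustrative):

function x = jacobi(A, b, x0, tol, maxit)
% Jacobi iteration: every component is updated from the PREVIOUS iterate.
D = diag(diag(A));                 % diagonal part of A
R = A - D;                         % off-diagonal part
x = x0;
for k = 1:maxit
    xnew = D \ (b - R*x);          % one Jacobi sweep
    if norm(xnew - x, inf) < tol
        x = xnew;
        return                     % converged
    end
    x = xnew;
end
end

% Example 7: jacobi([4 -1 1; 4 -8 1; -2 1 5], [7; -21; 15], zeros(3,1), 1e-6, 100)
% converges to approximately [2; 4; 3]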
Gauss-Seidel method

• The Gauss-Seidel method is a popular iterative technique used for solving systems of linear equations, especially when the system is large.
• It improves upon Jacobi by using the newly computed values immediately, as they become available in the current iteration.
• Gauss-Seidel iteration converges more rapidly than Jacobi iteration since it uses the latest updates.

Gauss-Seidel (GS) iteration

Gauss-Seidel uses the latest updates: components already computed in the current sweep enter the formulas for the remaining components.

x1(1) = ( b1 - a12 x2(0) - · · · - a1n xn(0) ) / a11
x2(1) = ( b2 - a21 x1(1) - a23 x3(0) - · · · - a2n xn(0) ) / a22
        . . .
xn(1) = ( bn - an1 x1(1) - an2 x2(1) - · · · - an,n-1 xn-1(1) ) / ann

or, in general:

xi(k+1) = ( bi - Σ (j = 1 .. i-1) aij xj(k+1) - Σ (j = i+1 .. n) aij xj(k) ) / aii

Example 8: Gauss-Seidel Iteration

 4  1 1  x1   7   0
 4  8 1  x    21 x 0  0
   2   b  Ax 0 26.7395
2
  2 1 5  x3   15   0

Diagonally dominant matrix

7  x20  x30 7
1
x 
1
 1.75 b  Ax1 3.0414
4 4 2

21  4 x11  x30 21  4 1.75


1
x2   3.5
8 8 b  Ax1 10.0452
2
15  2 x11  x12 15  2 1.75  3.5
1
x3   3.0
5 5
Jacobi iteration
65
x1(2) = ( 7 + x2(1) - x3(1) ) / 4 = ( 7 + 3.5 - 3 ) / 4 = 1.875
x2(2) = ( 21 + 4 x1(2) + x3(1) ) / 8 = ( 21 + 4(1.875) + 3 ) / 8 = 3.9375
x3(2) = ( 15 + 2 x1(2) - x2(2) ) / 5 = ( 15 + 2(1.875) - 3.9375 ) / 5 = 2.9625

‖b - Ax(2)‖2 = 0.4765      (Jacobi iteration: 1.8061)

When both Jacobi and Gauss-Seidel iterations converge, Gauss-Seidel converges faster.
MATLAB code for Gauss-Seidel

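A minimal sketch of such a routine (the function name gauss_seidel is illustrative):

function x = gauss_seidel(A, b, x0, tol, maxit)
% Gauss-Seidel iteration: each component uses the LATEST available values.
n = length(b);
x = x0;
for k = 1:maxit
    xold = x;
    for i = 1:n
        x(i) = ( b(i) - A(i,1:i-1)*x(1:i-1) - A(i,i+1:n)*x(i+1:n) ) / A(i,i);
    end
    if norm(x - xold, inf) < tol
        return                     % converged
    end
end
end

% Example 8: gauss_seidel([4 -1 1; 4 -8 1; -2 1 5], [7; -21; 15], zeros(3,1), 1e-6, 100)
% converges to approximately [2; 4; 3] in fewer sweeps than Jacobi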
