
NA Lecture 13

This document discusses methods for solving linear systems of equations, including Gaussian elimination, Gauss-Jordan elimination, and iterative methods like Jacobi's and Gauss-Seidel. It explains the relaxation method for improving solutions by reducing residuals and provides an example of solving a system of equations using this method. The document concludes with a numerical solution and comparison to the exact solution.


Numerical Analysis
Lecture 13

Chapter 3: Solution of Linear System of Equations and Matrix Inversion
Introduction
Gaussian Elimination
Gauss-Jordan Elimination
Crout's Reduction
Jacobi's and Gauss-Seidel Iteration
Relaxation
Matrix Inversion
Gauss–Seidel Iteration Method

It is another well-known iterative method for solving a system of linear equations of the form

$$
\begin{aligned}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1 \\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &= b_2 \\
&\ \ \vdots \\
a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n &= b_n
\end{aligned}
$$
In Jacobi's method, the (r + 1)th approximation to the above system is given by the equations

$$
\begin{aligned}
x_1^{(r+1)} &= \frac{b_1}{a_{11}} - \frac{a_{12}}{a_{11}}x_2^{(r)} - \cdots - \frac{a_{1n}}{a_{11}}x_n^{(r)} \\
x_2^{(r+1)} &= \frac{b_2}{a_{22}} - \frac{a_{21}}{a_{22}}x_1^{(r)} - \cdots - \frac{a_{2n}}{a_{22}}x_n^{(r)} \\
&\ \ \vdots \\
x_n^{(r+1)} &= \frac{b_n}{a_{nn}} - \frac{a_{n1}}{a_{nn}}x_1^{(r)} - \cdots - \frac{a_{n,n-1}}{a_{nn}}x_{n-1}^{(r)}
\end{aligned}
$$
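The Jacobi update can be sketched in Python. This is a minimal illustration, not code from the lecture; the 2×2 system below is an assumed example chosen to be diagonally dominant so the iteration converges:

```python
def jacobi_step(A, b, x):
    """One Jacobi sweep: every new component is built only from the
    previous iterate x, so no updated value is used within the sweep."""
    n = len(A)
    return [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)]

# Illustrative diagonally dominant system (assumed, not from the lecture):
# 4x1 + x2 = 9, 2x1 + 5x2 = 9 has the exact solution x1 = 2, x2 = 1.
A = [[4.0, 1.0], [2.0, 5.0]]
b = [9.0, 9.0]
x = [0.0, 0.0]
for _ in range(50):
    x = jacobi_step(A, b, x)
```

Note that `jacobi_step` builds the whole new vector from the old one before any component is replaced, which is exactly the point contrasted with Gauss-Seidel below.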
Here we can observe that no element of $x_i^{(r+1)}$ replaces $x_i^{(r)}$ until the next cycle of computation.

In the Gauss-Seidel method, the corresponding elements of $x_i^{(r+1)}$ replace those of $x_i^{(r)}$ as soon as they become available. Hence, it is called the method of successive displacements.
For illustration, consider

$$
\begin{aligned}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1 \\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &= b_2 \\
&\ \ \vdots \\
a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n &= b_n
\end{aligned}
$$
In Gauss-Seidel iteration, the (r + 1)th approximation or iteration is computed from:

$$
\begin{aligned}
x_1^{(r+1)} &= \frac{b_1}{a_{11}} - \frac{a_{12}}{a_{11}}x_2^{(r)} - \cdots - \frac{a_{1n}}{a_{11}}x_n^{(r)} \\
x_2^{(r+1)} &= \frac{b_2}{a_{22}} - \frac{a_{21}}{a_{22}}x_1^{(r+1)} - \cdots - \frac{a_{2n}}{a_{22}}x_n^{(r)} \\
&\ \ \vdots \\
x_n^{(r+1)} &= \frac{b_n}{a_{nn}} - \frac{a_{n1}}{a_{nn}}x_1^{(r+1)} - \cdots - \frac{a_{n,n-1}}{a_{nn}}x_{n-1}^{(r+1)}
\end{aligned}
$$
Thus, the general procedure can be written in the following compact form:

$$
x_i^{(r+1)} = \frac{b_i}{a_{ii}} - \sum_{j=1}^{i-1}\frac{a_{ij}}{a_{ii}}x_j^{(r+1)} - \sum_{j=i+1}^{n}\frac{a_{ij}}{a_{ii}}x_j^{(r)}
$$

for all $i = 1, 2, \ldots, n$ and $r = 1, 2, \ldots$
To describe the system: in the first equation, we substitute the r-th approximation into the right-hand side and denote the result by $x_1^{(r+1)}$. In the second equation, we substitute $(x_1^{(r+1)}, x_3^{(r)}, \ldots, x_n^{(r)})$ and denote the result by $x_2^{(r+1)}$. In the third equation, we substitute $(x_1^{(r+1)}, x_2^{(r+1)}, x_4^{(r)}, \ldots, x_n^{(r)})$ and denote the result by $x_3^{(r+1)}$, and so on. This process is continued till we arrive at the desired result.
For illustration, we consider
the following example :
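The compact formula above translates directly into code. The sketch below is a minimal Python version (not code from the lecture), applied here to the example system that is solved by relaxation later in this lecture, with its rows ordered to be diagonally dominant:

```python
def gauss_seidel_step(A, b, x):
    """One Gauss-Seidel sweep: components j < i were already updated in
    this sweep (x^(r+1)); components j > i still hold x^(r)."""
    n = len(A)
    x = x[:]  # copy; entries are overwritten left to right
    for i in range(n):
        s_new = sum(A[i][j] * x[j] for j in range(i))         # uses x_j^(r+1)
        s_old = sum(A[i][j] * x[j] for j in range(i + 1, n))  # uses x_j^(r)
        x[i] = (b[i] - s_new - s_old) / A[i][i]
    return x

# The lecture's example system: 6x1 - 3x2 + x3 = 11, x1 - 7x2 + x3 = 10,
# 2x1 + x2 - 8x3 = -15, with exact solution (1, -1, 2).
A = [[6.0, -3.0, 1.0], [1.0, -7.0, 1.0], [2.0, 1.0, -8.0]]
b = [11.0, 10.0, -15.0]
x = [0.0, 0.0, 0.0]
for _ in range(50):
    x = gauss_seidel_step(A, b, x)
```

Because each `x[i]` is overwritten as soon as it is computed, the newest values are used immediately within the sweep, which is the "successive displacements" idea.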
Relaxation Method

This is also an iterative method and is due to Southwell. To explain the details, consider again the system of equations

$$
\begin{aligned}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1 \\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &= b_2 \\
&\ \ \vdots \\
a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n &= b_n
\end{aligned}
$$
Let
$$X^{(p)} = (x_1^{(p)}, x_2^{(p)}, \ldots, x_n^{(p)})^T$$
be the solution vector obtained iteratively after the p-th iteration. If $R_i^{(p)}$ denotes the residual of the i-th equation of the system given above, that is, of
$$a_{i1}x_1 + a_{i2}x_2 + \cdots + a_{in}x_n = b_i,$$
defined by
$$R_i^{(p)} = b_i - a_{i1}x_1^{(p)} - a_{i2}x_2^{(p)} - \cdots - a_{in}x_n^{(p)},$$

then we can improve the solution vector successively by reducing the largest residual to zero at that iteration. This is the basic idea of the relaxation method.

To achieve fast convergence of the procedure, we take all the terms to one side and then reorder the equations so that the largest negative coefficients in the equations appear on the diagonal.
Now, if at any iteration $R_i$ is the largest residual in magnitude, then we give an increment to $x_i$, $a_{ii}$ being the coefficient of $x_i$:

$$dx_i = \frac{R_i}{a_{ii}}$$

In other words, we change $x_i$ to $(x_i + dx_i)$ to relax $R_i$, that is, to reduce $R_i$ to zero.
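The relaxation step just described can be sketched in Python. This is a minimal sketch, not code from the lecture; `tol` and `max_steps` are illustrative choices, and the loop greedily relaxes the residual of largest magnitude, as Southwell's method prescribes:

```python
def relaxation(A, b, x, tol=1e-4, max_steps=500):
    """Southwell's relaxation: repeatedly zero the residual of largest
    magnitude via the increment dx_i = R_i / a_ii."""
    n = len(A)
    x = x[:]
    for _ in range(max_steps):
        # residuals R_i = b_i - sum_j a_ij x_j
        R = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        i = max(range(n), key=lambda k: abs(R[k]))  # largest residual
        if abs(R[i]) < tol:
            break
        x[i] += R[i] / A[i][i]  # drives R_i to exactly zero
    return x

# The lecture's example system, solved from the starting vector (0, 0, 0):
x = relaxation([[6.0, -3.0, 1.0], [1.0, -7.0, 1.0], [2.0, 1.0, -8.0]],
               [11.0, 10.0, -15.0], [0.0, 0.0, 0.0])
```

Changing $x_i$ by $R_i/a_{ii}$ shifts $R_i$ by $-a_{ii}\,dx_i = -R_i$, so that residual becomes exactly zero, while the other residuals change and are relaxed in later steps.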
Example

Solve the system of equations

$$
\begin{aligned}
6x_1 - 3x_2 + x_3 &= 11 \\
2x_1 + x_2 - 8x_3 &= -15 \\
x_1 - 7x_2 + x_3 &= 10
\end{aligned}
$$

by the relaxation method, starting with the vector (0, 0, 0).
Solution

At first, we transfer all the terms to the right-hand side and reorder the equations, so that the largest coefficients in the equations appear on the diagonal. Thus, we get

$$
\begin{aligned}
0 &= 11 - 6x_1 + 3x_2 - x_3 \\
0 &= 10 - x_1 + 7x_2 - x_3 \\
0 &= -15 - 2x_1 - x_2 + 8x_3
\end{aligned}
$$

after interchanging the 2nd and 3rd equations.
Starting with the initial solution vector (0, 0, 0), that is, taking $x_1 = x_2 = x_3 = 0$, we find the residuals

$$R_1 = 11, \quad R_2 = 10, \quad R_3 = -15$$

of which the largest residual in magnitude is $R_3$, i.e. the 3rd equation has the most error and needs immediate attention for improvement.
Thus, we introduce a change $dx_3$ in $x_3$, which is obtained from the formula

$$dx_3 = \frac{R_3}{a_{33}} = \frac{-15}{-8} = 1.875$$
Similarly, we find the new residual of largest magnitude and relax it to zero, and so on. We continue this process until all the residuals are zero or very small.
The complete iteration table is given below. The variable columns list the values at the start of each iteration; each $dx_i$ is the increment $R_i/a_{ii}$ applied to the variable with the largest residual.

Iteration   R1        R2       R3        Max Ri    dx_i      x1        x2        x3
0           11        10       -15       -15       1.875     0         0         0
1           9.125     8.125    0         9.125     1.5288    0         0         1.875
2           0.0478    6.5962   -3.0576   6.5962    -0.9423   1.5288    0         1.875
3           -2.8747   0.0001   -2.1153   -2.8747   -0.4791   1.0497    -0.9423   1.875
4           -0.0031   0.4792   -1.1571   -1.1571   0.1446    1.0497    -0.9423   1.875
5           -0.1447   0.3346   0.0003    0.3346    -0.0478   1.0497    -0.9423   2.0196
6           0.2881    0.0000   0.0475    0.2881    0.0480    1.0497    -0.9901   2.0196
7           -0.0001   0.0480   0.1435    0.1435    -0.0179   1.0017    -0.9901   2.0196
8           0.0178    0.0659   0.0003    -         -         1.0017    -0.9901   2.0017

At this stage, we observe that all the residuals R1, R2 and R3 are small enough, and therefore we may take the corresponding values of $x_i$ at this iteration as the solution. Hence, the numerical solution is given by

$$x_1 = 1.0017, \quad x_2 = -0.9901, \quad x_3 = 2.0017$$

The exact solution is

$$x_1 = 1.0, \quad x_2 = -1.0, \quad x_3 = 2.0$$