
CE 007 (Numerical Solutions To CE Problems)

This document contains an assignment submission for a numerical methods course. It includes algorithms for solving systems of linear equations using Gauss elimination, Gauss-Jordan, LU decomposition (Crout's method and Doolittle's method), and iterative methods (Gauss-Jacobi, Gauss-Seidel, successive relaxation, conjugate gradient). It also includes sample C++ code implementing Gauss elimination and Crout's LU decomposition method.

TECHNOLOGICAL INSTITUTE OF THE PHILIPPINES

938 Aurora Blvd., Cubao, Quezon City

CE 007 (Numerical Solutions to CE Problems)


CE31S2

Machine Problem 2.1

Numerical Solutions of Equations and Systems of Equations

Submitted by:
Santos, Michelle Anne F.
1820027

ACADEMIC INTEGRITY PLEDGE

I swear on my honor that I did not use any inappropriate aid, nor give such to others, in accomplishing
this coursework. I understand that cheating and/or plagiarism is a major offense, as stated in TIP
Memorandum No. P-04, s. 2017-2018, and that I will be sanctioned appropriately once I have committed
such acts.

Santos, Michelle Anne F.


1820027
Machine Exercise:
A. Make an Algorithm for solving systems of linear algebraic equations using the methods below:
I. Gauss Elimination Method
1. Start the program.
2. Declare the variables and read the order of the matrix n.
3. Take the coefficients of the linear equation as:
• Do for k=1 to n
• Do for j=1 to n+1
• Read a[k][j]
• End for j
• End for k
4. Do for k=1 to n-1
• Do for i=k+1 to n
• Do for j=k+1 to n+1
• a[i][j] = a[i][j] – (a[i][k]/a[k][k]) * a[k][j]
• End for j
• End for i
• End for k
5. Compute x[n] = a[n][n+1]/a[n][n]
6. Do for k=n-1 to 1
• sum = 0
• Do for j=k+1 to n
• sum = sum + a[k][j] * x[j]
• End for j
• x[k] = 1/a[k][k] * (a[k][n+1] – sum)
• End for k
7. Display the result x[k]
8. Stop the program.

II. Gauss-Jordan Method


1. Start the program.
2. Read the order of the matrix ‘n’ and read the coefficients of the linear equations.
3. Do for k=1 to n
• Do for l=k+1 to n+1
• a[k][l] = a[k][l] / a[k][k]
• End for l
• Set a[k][k] = 1
• Do for i=1 to n
• if (i not equal to k) then,
• Do for j=k+1 to n+1
• a[i][j] = a[i][j] – (a[k][j] * a[i][k])
• End for j
• End for i
• End for k
4. Do for m=1 to n
• x[m] = a[m][n+1]
• Display x[m]
• End for m
5. Stop the program.
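The steps above can be sketched in C++ as follows (a minimal illustration using 0-based indexing and no pivoting; the function name gaussJordan and the vector-of-vectors layout are chosen here only for illustration):

```cpp
#include <vector>
#include <cmath>
using namespace std;

// Reduce the n x (n+1) augmented matrix to reduced row echelon form
// and return the solution vector, following steps 3-4 above.
vector<double> gaussJordan(vector<vector<double>> a) {
    int n = a.size();
    for (int k = 0; k < n; k++) {
        double pivot = a[k][k];           // assumed nonzero (no pivoting here)
        for (int l = k; l <= n; l++)      // normalize row k so a[k][k] = 1
            a[k][l] /= pivot;
        for (int i = 0; i < n; i++) {     // eliminate column k from every other row
            if (i == k) continue;
            double factor = a[i][k];
            for (int j = k; j <= n; j++)
                a[i][j] -= factor * a[k][j];
        }
    }
    vector<double> x(n);
    for (int m = 0; m < n; m++)
        x[m] = a[m][n];                   // last column now holds the solution
    return x;
}
```

For the system 2x + y = 5, x + 3y = 10, this returns x = 1, y = 3.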

III. LU Decomposition Method


In numerical analysis and linear algebra, LU decomposition (where ‘LU’ stands for ‘lower upper’; also
called LU factorization) factors a matrix as the product of a lower triangular matrix and an upper triangular
matrix. Computers usually solve square systems of linear equations using the LU decomposition, and it
is also a key step when inverting a matrix or computing the determinant of a matrix. The LU decomposition
was introduced by mathematician Tadeusz Banachiewicz in 1938. Let A be a square matrix. An LU
factorization refers to the factorization of A, with proper row and/or column orderings or permutations,
into two factors, a lower triangular matrix L and an upper triangular matrix U, A=LU.
a. Doolittle’s decomposition
It is always possible to factor a square matrix into a lower triangular matrix and an upper triangular matrix.
That is, [A] = [L][U] Doolittle’s method provides an alternative way to factor A into an LU decomposition
without going through the hassle of Gaussian Elimination. For a general n x n matrix A, we assume that
an LU decomposition exists, and write the form of L and U explicitly. We then systematically solve for the
entries in L and U from the equations that result from the multiplications necessary for A=LU.
Terms of the U matrix are given by:

uij = aij − (li1 u1j + · · · + li,i−1 ui−1,j), for j = i, . . . , n

And the terms for the L matrix, with lii = 1:

lij = ( aij − (li1 u1j + · · · + li,j−1 uj−1,j) ) / ujj, for i = j+1, . . . , n
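These formulas can be sketched in C++ as follows (a minimal illustration assuming the factorization exists without pivoting; the function name doolittle and the output arguments l, u are illustrative only):

```cpp
#include <vector>
#include <cmath>
using namespace std;

// Doolittle factorization A = LU with unit diagonal on L.
// Fills l and u; assumes no zero pivots are encountered.
void doolittle(const vector<vector<double>> &a,
               vector<vector<double>> &l, vector<vector<double>> &u) {
    int n = a.size();
    l.assign(n, vector<double>(n, 0.0));
    u.assign(n, vector<double>(n, 0.0));
    for (int i = 0; i < n; i++) {
        for (int j = i; j < n; j++) {            // row i of U
            double sum = 0.0;
            for (int k = 0; k < i; k++) sum += l[i][k] * u[k][j];
            u[i][j] = a[i][j] - sum;
        }
        l[i][i] = 1.0;                           // unit diagonal of L
        for (int j = i + 1; j < n; j++) {        // column i of L
            double sum = 0.0;
            for (int k = 0; k < i; k++) sum += l[j][k] * u[k][i];
            l[j][i] = (a[j][i] - sum) / u[i][i];
        }
    }
}
```

For A = [[4, 3], [6, 3]], this gives l21 = 1.5 and u22 = −1.5, so that LU = A.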

b. Crout’s decomposition
Crout's method decomposes a nonsingular n × n matrix A into the product of an n×n lower triangular
matrix L and an n×n unit upper triangular matrix U. A unit triangular matrix is a triangular matrix with 1's
along the diagonal. Crout's algorithm proceeds as follows:
1. Evaluate the following pair of expressions for k = 0, . . . , n-1;

Lik = Aik − (Li0 U0k + · · · + Li,k−1 Uk−1,k), for i = k, . . . , n−1,

and
Ukj = ( Akj − (Lk0 U0j + · · · + Lk,k−1 Uk−1,j) ) / Lkk, for j = k+1, . . . , n−1.
2. Note that Lik = 0 for k > i, Uik = 0 for k < i, and Ukk = 1 for k = 0, … , n-1. The matrix U forms a
unit upper triangular matrix, and the matrix L forms a lower triangular matrix. The matrix A =
LU.
3. After the LU decomposition of A is performed, the solution to the system of linear equations A x
= L U x = B is solved by solving the system of linear equations L y = B by forward substitution for
y, and then solving the system of linear equations U x = y by backward substitution for x.

Crout's LU decomposition with pivoting is similar to the above algorithm except that for each k a pivot row
is determined and interchanged with row k, the algorithm then proceeds as before. Source code is
provided for the two different versions of Crout's LU decomposition, one version performs pivoting and
the other version does not. If the matrix A is positive definite symmetric or if the matrix is diagonally
dominant, then pivoting is not necessary; otherwise the version using pivoting should be used.

c. Cholesky’s decomposition
Input data: a symmetric positive definite matrix A whose elements are denoted by aij.
Output data: the lower triangular matrix L whose elements are denoted by lij.
The Cholesky algorithm can be represented in the form

lii = sqrt( aii − (li1² + · · · + li,i−1²) ),
lji = ( aji − (lj1 li1 + · · · + lj,i−1 li,i−1) ) / lii, for j = i+1, . . . , n.
There exist block versions of this algorithm; however, here we consider only its “dot” version. In a number
of implementations, the division by the diagonal element lii is made in the following two steps: the
computation of 1/lii and, then, the multiplication of the result by the modified values of aji. Here we do not
consider this computational scheme, since it has worse parallel characteristics than the one given above.
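The “dot” version described above can be sketched in C++ as follows (a minimal illustration; the function name cholesky is an assumption, and no check for positive definiteness is performed):

```cpp
#include <vector>
#include <cmath>
using namespace std;

// Cholesky factorization A = L * L^T for a symmetric positive definite A.
// Returns the lower triangular factor L.
vector<vector<double>> cholesky(const vector<vector<double>> &a) {
    int n = a.size();
    vector<vector<double>> l(n, vector<double>(n, 0.0));
    for (int i = 0; i < n; i++) {
        double sum = a[i][i];
        for (int k = 0; k < i; k++) sum -= l[i][k] * l[i][k];
        l[i][i] = sqrt(sum);                 // diagonal entry
        for (int j = i + 1; j < n; j++) {    // column i below the diagonal
            double s = a[j][i];
            for (int k = 0; k < i; k++) s -= l[j][k] * l[i][k];
            l[j][i] = s / l[i][i];
        }
    }
    return l;
}
```

For A = [[4, 2], [2, 3]], this gives L = [[2, 0], [1, √2]].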

IV. Iterative Methods


a. Gauss-Jacobi
1. Read the coefficients aij, i,j = 1, 2, …, n and the right hand vector bi, i= 1, 2, …, n of the system
of equations and error tolerance ϵ.
2. Rearrange the given equations, if possible, such that the system becomes diagonally dominant.
3. Rewrite the ith equation as

xi = ( bi − Σj≠i aij xj ) / aii

4. Set the initial solution as

xi(0) = 0, i = 1, 2, …, n

5. Calculate the new value xi(n) of xi as

xi(n) = ( bi − Σj≠i aij xj ) / aii, using the current values of xj

6. If | xi – xi(n)| < ϵ for all i, then goto Step 7 else xi = xi(n) for all i and goto step 5.
7. Print xi(n) , i = 1, 2, …, n as solution.
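Steps 3–7 can be sketched in C++ as follows (a minimal illustration; the function name jacobi, the tolerance default, and the iteration cap are assumptions, and the system is assumed already diagonally dominant):

```cpp
#include <vector>
#include <cmath>
#include <algorithm>
using namespace std;

// Jacobi iteration: every component of the new iterate is computed
// from the previous iterate only (steps 3-7 above).
vector<double> jacobi(const vector<vector<double>> &a, const vector<double> &b,
                      double eps = 1e-6, int maxIter = 1000) {
    int n = a.size();
    vector<double> x(n, 0.0), xNew(n);           // initial solution x(0) = 0
    for (int iter = 0; iter < maxIter; iter++) {
        for (int i = 0; i < n; i++) {
            double sum = b[i];
            for (int j = 0; j < n; j++)
                if (j != i) sum -= a[i][j] * x[j];   // old values only
            xNew[i] = sum / a[i][i];
        }
        double maxDiff = 0.0;
        for (int i = 0; i < n; i++)
            maxDiff = max(maxDiff, fabs(xNew[i] - x[i]));
        x = xNew;
        if (maxDiff < eps) break;                // step 6: converged for all i
    }
    return x;
}
```

For the diagonally dominant system 4x + y = 6, x + 3y = 7, the iterates converge to x = 1, y = 2.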

b. Gauss-Seidel
1. Start the program.
2. Arrange given system of linear equations in diagonally dominant form.
3. Read tolerable error (e)
4. Convert the first equation in terms of first variable, second equation in terms of second variable
and so on.
5. Set initial guesses for x0, y0, z0 and so on.
6. Substitute values of y0, z0, ... from step 5 in the first equation obtained from step 4 to calculate the
new value x1. Use x1, z0, u0, ... in the second equation obtained from step 4 to calculate the new value
of y1. Similarly, use x1, y1, u0, ... to find the new z1, and so on.
7. If | x0 - x1| < e and | y0 - y1| < e and | z0 - z1| < e and so on, then goto step 9.
8. Set x0=x1, y0=y1, z0=z1 and so on and goto step 6.
9. Print values of x1, y1, z1 and so on.
10. Stop the program.
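Steps 2–10 can be sketched in C++ as follows (a minimal illustration; the function name gaussSeidel, the tolerance default, and the iteration cap are assumptions):

```cpp
#include <vector>
#include <cmath>
#include <algorithm>
using namespace std;

// Gauss-Seidel iteration: like Jacobi, but each newly computed component
// is used immediately within the same sweep (step 6 above).
vector<double> gaussSeidel(const vector<vector<double>> &a, const vector<double> &b,
                           double eps = 1e-6, int maxIter = 1000) {
    int n = a.size();
    vector<double> x(n, 0.0);                    // initial guesses (step 5)
    for (int iter = 0; iter < maxIter; iter++) {
        double maxDiff = 0.0;
        for (int i = 0; i < n; i++) {
            double sum = b[i];
            for (int j = 0; j < n; j++)
                if (j != i) sum -= a[i][j] * x[j];   // x[j] may already be updated
            double xi = sum / a[i][i];
            maxDiff = max(maxDiff, fabs(xi - x[i]));
            x[i] = xi;
        }
        if (maxDiff < eps) break;                // step 7: converged
    }
    return x;
}
```

On the same system 4x + y = 6, x + 3y = 7, this converges to x = 1, y = 2 in fewer sweeps than Jacobi.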
c. Successive Relaxation
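Successive relaxation modifies each Gauss-Seidel update by blending it with the previous value through a relaxation factor w (0 < w < 2; w = 1 recovers Gauss-Seidel). A minimal C++ sketch of this standard scheme (the function name sor and the defaults are assumptions, not part of the original algorithm text):

```cpp
#include <vector>
#include <cmath>
#include <algorithm>
using namespace std;

// Successive (over-)relaxation: a Gauss-Seidel sweep blended with the
// old value through the relaxation factor w.
vector<double> sor(const vector<vector<double>> &a, const vector<double> &b,
                   double w, double eps = 1e-6, int maxIter = 1000) {
    int n = a.size();
    vector<double> x(n, 0.0);
    for (int iter = 0; iter < maxIter; iter++) {
        double maxDiff = 0.0;
        for (int i = 0; i < n; i++) {
            double sum = b[i];
            for (int j = 0; j < n; j++)
                if (j != i) sum -= a[i][j] * x[j];
            double xi = (1.0 - w) * x[i] + w * sum / a[i][i];  // relaxation step
            maxDiff = max(maxDiff, fabs(xi - x[i]));
            x[i] = xi;
        }
        if (maxDiff < eps) break;
    }
    return x;
}
```

With w = 1.1 on the system 4x + y = 6, x + 3y = 7, the iterates again converge to x = 1, y = 2.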

d. Conjugate Gradient
B. Write a program for solving the system Ax = b by
I. Gauss Elimination Algorithm
#include<iostream>
#include<iomanip>
#include<math.h>
#include<stdlib.h>
#define SIZE 10
using namespace std;
int main(){
    float a[SIZE][SIZE], x[SIZE], ratio;
    int i,j,k,n;
    cout<< setprecision(3)<< fixed;
    cout<<"Enter number of unknowns: ";
    cin>>n;
    cout<<"Enter Coefficients of Augmented Matrix: "<< endl;
    for(i=1;i<=n;i++){
        for(j=1;j<=n+1;j++){
            cout<<"a["<<i<<"]["<<j<<"]= ";
            cin>>a[i][j];
        }
    }
    /* Forward elimination */
    for(i=1;i<=n-1;i++){
        if(a[i][i] == 0.0){
            cout<<"Mathematical Error!";   /* zero pivot: no pivoting is performed */
            exit(0);
        }
        for(j=i+1;j<=n;j++){
            ratio = a[j][i]/a[i][i];
            for(k=1;k<=n+1;k++){
                a[j][k] = a[j][k] - ratio*a[i][k];
            }
        }
    }
    /* Back substitution */
    x[n] = a[n][n+1]/a[n][n];
    for(i=n-1;i>=1;i--){
        x[i] = a[i][n+1];
        for(j=i+1;j<=n;j++){
            x[i] = x[i] - a[i][j]*x[j];
        }
        x[i] = x[i]/a[i][i];
    }
    cout<< endl<<"Solution: "<< endl;
    for(i=1;i<=n;i++){
        cout<<"x["<<i<<"] = "<<x[i]<< endl;
    }
    return 0;
}

II. LU Decomposition Methods – Crout’s Decomposition


#include<iostream>
using namespace std;

/* Crout's decomposition: L is lower triangular, U is unit upper triangular. */
void LUdecomposition(float a[10][10], float l[10][10], float u[10][10], int n){
    int i, j, k;
    for(i = 0; i < n; i++){
        /* Column i of L */
        for(j = 0; j < n; j++){
            if(j < i)
                l[j][i] = 0;
            else {
                l[j][i] = a[j][i];
                for(k = 0; k < i; k++)
                    l[j][i] = l[j][i] - l[j][k] * u[k][i];
            }
        }
        /* Row i of U (unit diagonal) */
        for(j = 0; j < n; j++){
            if(j < i)
                u[i][j] = 0;
            else if(j == i)
                u[i][j] = 1;
            else {
                u[i][j] = a[i][j] / l[i][i];
                for(k = 0; k < i; k++)
                    u[i][j] = u[i][j] - ((l[i][k] * u[k][j]) / l[i][i]);
            }
        }
    }
}

int main(){
    float a[10][10], l[10][10], u[10][10];
    int n = 0, i = 0, j = 0;
    cout << "Enter size of square matrix: " << endl;
    cin >> n;
    cout << "Enter matrix values: " << endl;
    for(i = 0; i < n; i++)
        for(j = 0; j < n; j++)
            cin >> a[i][j];
    LUdecomposition(a, l, u, n);
    cout << "L Decomposition is as follows..." << endl;
    for(i = 0; i < n; i++){
        for(j = 0; j < n; j++)
            cout << l[i][j] << " ";
        cout << endl;
    }
    cout << "U Decomposition is as follows..." << endl;
    for(i = 0; i < n; i++){
        for(j = 0; j < n; j++)
            cout << u[i][j] << " ";
        cout << endl;
    }
    return 0;
}
III. Iterative Methods (Use x(0) = 0, ε = 10^−6.) – Conjugate Gradient

#include <vector>
#include <cmath>
#include <numeric>
#include <algorithm>
using namespace std;

typedef vector<double> vec;
typedef vector<vec> matrix;
const double NEARZERO = 1.0e-10;   // guard against division by zero

double innerProduct( const vec &U, const vec &V ){
    return inner_product( U.begin(), U.end(), V.begin(), 0.0 ); }
double vectorNorm( const vec &V ){ return sqrt( innerProduct( V, V ) ); }
vec vectorCombination( double a, const vec &U, double b, const vec &V ){
    vec W( U.size() );
    for ( size_t i = 0; i < U.size(); i++ ) W[i] = a * U[i] + b * V[i];
    return W; }
vec matrixTimesVector( const matrix &A, const vec &v ){
    vec C( A.size() );
    for ( size_t i = 0; i < A.size(); i++ ) C[i] = innerProduct( A[i], v );
    return C; }

vec conjugateGradientSolver( const matrix &A, const vec &B ){
    double TOLERANCE = 1.0e-10;
    int n = A.size();
    vec X( n, 0.0 );                                   // x(0) = 0
    vec R = B;                                         // initial residual b - A*0
    vec P = R;                                         // initial search direction
    int k = 0;
    while ( k < n ) {
        vec Rold = R;                                  // store previous residual
        vec AP = matrixTimesVector( A, P );
        double alpha = innerProduct( R, R ) / max( innerProduct( P, AP ), NEARZERO );
        X = vectorCombination( 1.0, X, alpha, P );     // next estimate of solution
        R = vectorCombination( 1.0, R, -alpha, AP );   // residual
        if ( vectorNorm( R ) < TOLERANCE ) break;      // convergence test
        double beta = innerProduct( R, R ) / max( innerProduct( Rold, Rold ), NEARZERO );
        P = vectorCombination( 1.0, R, beta, P );      // next search direction
        k++; }
    return X; }

Machine Problem:
1. (Hilbert matrix) Suppose A = [hij], hij = 1/(i + j − 1), i, j = 1, ..., n. Also, let b = Ax*, x*i = 1,
i = 1, ..., n, for n = 5, 10, 20.
• Solve for Ax' = b (given any value for x(0)), and compare the results for different values of n with
x∗ = 1 (i.e., one vector). Discuss in detail the results obtained for each case.
• Note: Hilbert matrices are notoriously ill-conditioned matrices. Because of this, you may get
almost zero pivot during the process, and as a result, you may not be able to solve the linear
system for some large n.
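For part 1, the Hilbert matrix and the right-hand side b = Ax* with x*i = 1 can be generated as follows (a minimal C++ sketch; the function names hilbert and rhsForOnes are illustrative only):

```cpp
#include <vector>
#include <cmath>
using namespace std;

// Build the n x n Hilbert matrix h_ij = 1/(i+j-1) (1-based indices),
// stored here with 0-based indices so h[i][j] = 1/(i+j+1).
vector<vector<double>> hilbert(int n) {
    vector<vector<double>> h(n, vector<double>(n));
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            h[i][j] = 1.0 / (i + j + 1);   // (i+1)+(j+1)-1 = i+j+1
    return h;
}

// Right-hand side b = A * [1, 1, ..., 1]^T, i.e. the row sums of A.
vector<double> rhsForOnes(const vector<vector<double>> &h) {
    int n = h.size();
    vector<double> b(n, 0.0);
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            b[i] += h[i][j];
    return b;
}
```

For n = 3, for instance, b = (11/6, 13/12, 47/60); feeding this A and b to any of the solvers above and comparing the result with the all-ones vector exposes the ill-conditioning as n grows.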
2. (Use hand computation for methods you did not program.) Solve the equations Ax = b, where A and b
are as given, by:
I. Gauss elimination
II. Gauss-Jordan
III. LU decomposition methods
a. Doolittle’s decomposition
b. Crout’s decomposition
c. Cholesky’s decomposition
IV. Iterative methods
a. Gauss-Jacobi
b. Gauss-Seidel
c. Successive Relaxation
d. Conjugate Gradient
For iterative methods such as Gauss-Jacobi, Gauss-Seidel, Successive Relaxation, and Conjugate
Gradient, the given system cannot be solved by MATLAB or by hand computation because the equations
are not diagonally dominant. Based on our discussion, when using iterative formulas we first rearrange
the equations algebraically into a diagonally dominant system and then solve each equation for one
unknown in terms of the other unknowns.
