Compilation of Algorithms
Graphical Method
We have learnt that the graph of a linear equation is a line, and that the coordinates of each point on
the line are a solution to the equation. For a system of simultaneous linear equations, we graph both
lines on the same axes; every point that the lines have in common is a solution of the system.
Most systems of simultaneous linear equations have one solution, but we saw in the previous
lesson that some systems have no solution, while for others every point on the (coincident) lines is a
solution.
Step 1: Obtain the given system of simultaneous linear equations in two variables. Let the system
of simultaneous linear equations be a1x + b1y = c1 and a2x + b2y = c2.
Step 2: Plot the graph of the first equation and then graph the second equation on the same
rectangular coordinate system. The following three cases may arise.
Case 1 - If the lines intersect at a point, then the given system has a unique solution given
by the coordinates of the point of intersection.
Case 2 - If the lines are coincident, then the system is consistent and has infinitely many
solutions. In this case, every solution of one of the equations is a solution of the system.
Case 3 - If the lines are parallel, then the given system of equations is inconsistent i.e., it
has no solution.
Graphical Method
Example 1: 3x + 2y = 4 and -2x - y = -3
x      y (Eq 1)   y (Eq 2)
-3     6.50       9.00
-2.5   5.75       8.00
-2     5.00       7.00
-1.5   4.25       6.00
-1     3.50       5.00
-0.5   2.75       4.00
0      2.00       3.00
0.5    1.25       2.00
1      0.50       1.00
1.5    -0.25      0.00
2      -1.00      -1.00
2.5    -1.75      -2.00
3      -2.50      -3.00

Example 2: 2x + 7y = 13 and -x + 3y = 0
x      y (Eq 1)   y (Eq 2)
-2     2.429      -0.667
-1.5   2.286      -0.500
-1     2.143      -0.333
-0.5   2.000      -0.167
0      1.857      0.000
0.5    1.714      0.167
1      1.571      0.333
1.5    1.429      0.500
2      1.286      0.667
2.5    1.143      0.833
3      1.000      1.000
3.5    0.857      1.167
4      0.714      1.333

[Plots of the two pairs of lines; each pair intersects at exactly one point.]

Example 1: x = 2, y = -1        Example 2: x = 3, y = 1
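As a quick check of the tabulated values, each system can be plotted and its intersection computed numerically. A minimal sketch for Example 1, using numpy and matplotlib (the library choice is an assumption; any plotting tool works):

import numpy as np
import matplotlib.pyplot as plt

# Example 1: 3x + 2y = 4 and -2x - y = -3, each rewritten as y in terms of x
x = np.linspace(-3, 3, 100)
y_eq1 = (4 - 3 * x) / 2      # from 3x + 2y = 4
y_eq2 = 3 - 2 * x            # from -2x - y = -3

plt.plot(x, y_eq1, label="3x + 2y = 4")
plt.plot(x, y_eq2, label="-2x - y = -3")

# The intersection can also be computed directly from the coefficient matrix
A = np.array([[3.0, 2.0], [-2.0, -1.0]])
b = np.array([4.0, -3.0])
xs, ys = np.linalg.solve(A, b)   # expected: x = 2, y = -1
plt.plot(xs, ys, "ko", label=f"intersection ({xs:.0f}, {ys:.0f})")

plt.legend()
plt.show()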
Cramer’s Rule
Cramer’s Rule is an explicit formula for the solution of a system of linear equations with
as many equations as unknowns, i.e., a square matrix, valid whenever the system has a unique
solution. It expresses the solution in terms of the determinants of the (square) coefficient matrix
and of matrices obtained from it by replacing one column by the vector of the right-hand sides of
the equations.
2 x 2 Systems

Example 1: 3x + 2y = 4 and -2x - y = -3

D  = det[ 3  2 ; -2 -1 ] = (3)(-1) - (2)(-2) = 1
Dx = det[ 4  2 ; -3 -1 ] = (4)(-1) - (2)(-3) = 2
Dy = det[ 3  4 ; -2 -3 ] = (3)(-3) - (4)(-2) = -1
x = Dx / D = 2 / 1 = 2,   y = Dy / D = -1 / 1 = -1

Example 2: 2x + 7y = 13 and -x + 3y = 0

D  = det[ 2  7 ; -1  3 ] = (2)(3) - (7)(-1) = 13
Dx = det[ 13 7 ;  0  3 ] = (13)(3) - (7)(0) = 39
Dy = det[ 2 13 ; -1  0 ] = (2)(0) - (13)(-1) = 13
x = Dx / D = 39 / 13 = 3,   y = Dy / D = 13 / 13 = 1
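A minimal sketch of the same 2 x 2 Cramer computation; numpy's determinant routine is used only for convenience:

import numpy as np

def cramer_2x2(A, b):
    """Solve a 2x2 system A x = b by Cramer's rule (requires det(A) != 0)."""
    D = np.linalg.det(A)
    Dx = np.linalg.det(np.column_stack((b, A[:, 1])))  # replace column 1 with b
    Dy = np.linalg.det(np.column_stack((A[:, 0], b)))  # replace column 2 with b
    return Dx / D, Dy / D

print(cramer_2x2(np.array([[3.0, 2.0], [-2.0, -1.0]]), np.array([4.0, -3.0])))  # (2, -1)
print(cramer_2x2(np.array([[2.0, 7.0], [-1.0, 3.0]]), np.array([13.0, 0.0])))   # (3, 1)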
Elimination of Unknowns
Consider the simultaneous polynomial equations (1), the degree of which is less than m. When (x0, y0)
is a solution of (1), then it satisfies (2). On the other hand, when (x1, y1) is a solution of (3), then (2)
implies that it also satisfies (1), except possibly in the case bn(y1) = 0. We can continue similarly until
we arrive at a pair of equations (4).
Substituting the roots of the former of equations (4) into the latter, which in practice is usually of
first degree with respect to x, one gets the corresponding values of x. Hence, one obtains all solutions
of the original system of equations (1). Since the cases bn(y1) = 0 may yield spurious solutions, one
should check them by substituting into (1).
Elimination of Unknowns
Example 1: 3x + 2y = 4 and -2x - y = -3
                x     y     constant
                3     2     4
                -2    -1    -3
(row 1) x (-2): -6    -4    -8
(row 2) x 3:    -6    -3    -9
difference:     0     -1    1
y = -1,  x = 2

Example 2: 2x + 7y = 13 and -x + 3y = 0
                x     y     constant
                2     7     13
                -1    3     0
(row 1) x (-1): -2    -7    -13
(row 2) x 2:    -2    6     0
difference:     0     -13   -13
y = 1,  x = 3
Answers obtained using Gauss elimination may be checked by substituting them into the
original equations. However, this does not always represent a reliable check for ill-conditioned
systems. Therefore, some measure of condition, such as the determinant of the scaled system,
should be computed if round-off error is suspected. Using partial pivoting and more significant
figures in the computation are two options for mitigating round-off error.
Naïve Gauss Elimination is the application of Gaussian elimination to solve systems of linear
equations with the assumption that pivot values will never be zero.
The Gauss-Jordan method is a variation of Gauss elimination. The major difference is that when an
unknown is eliminated in the Gauss-Jordan method, it is eliminated from all other equations
rather than just the subsequent ones. In addition, all rows are normalized by dividing them by
their pivot elements. Thus, the elimination step results in an identity matrix rather than a
triangular matrix. Consequently, it is not necessary to employ back substitution to obtain the
solution.
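A minimal sketch of the Gauss-Jordan variant just described (no pivoting; nonzero pivots are assumed), using the first system worked out in the tables that follow:

import numpy as np

def gauss_jordan(A, b):
    """Gauss-Jordan: normalize each pivot row and eliminate the unknown
    from every other row, leaving the identity matrix on the left."""
    Ab = np.column_stack((A.astype(float), b.astype(float)))   # augmented matrix
    n = len(b)
    for k in range(n):
        Ab[k] /= Ab[k, k]                  # normalize the pivot row
        for i in range(n):
            if i != k:
                Ab[i] -= Ab[i, k] * Ab[k]  # eliminate column k from every other row
    return Ab[:, -1]                       # the solution sits in the last column

A = np.array([[7, -1, 2, 3], [2, -8, 3, -2], [4, 1, -8, 2], [3, 2, -2, -9]])
b = np.array([-13, -35, 24, -23])
print(gauss_jordan(A, b))   # approximately [-2, 2, -3, 3]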
GAUSSIAN ELIMINATION
x1 x2 x3 x4 B x1 x2 x3 x4 B
7 -1 2 3 -13 -10 2 -2 3 -24
m21= 0.2857 2 -8 3 -2 -35 m21= -0.1 1 8 1 3 8
m31= 0.5714 4 1 -8 2 24 m31= -0.1 1 -3 -1 1 -12
m41= 0.4286 3 2 -2 -9 -23 m41= -0.2 2 2 -2 -9 36
x1 x2 x3 x4 B x1 x2 x3 x4 B
7 -1 2 3 -13 -10 2 -2 3 -24
0 -7.7143 2.42857 -2.8571 -31.286 0 8.2 0.8 3.3 5.6
m32= -0.204 0 1.57143 -9.1429 0.28571 31.4286 m32= -0.3415 0 -2.8 -1.2 1.3 -14.4
m42= -0.315 0 2.42857 -2.8571 -10.286 -17.429 m42= 0.2927 0 2.4 -2.4 -8.4 31.2
x1 x2 x3 x4 B x1 x2 x3 x4 B
7 -1 2 3 -13 -10 2 -2 3 -24
0 -7.7143 2.42857 -2.8571 -31.286 0 8.2 0.8 3.3 5.6
0 0 -8.6481 -0.2963 25.0556 0 0 -0.9268 2.42683 -12.488
m43= 0.242 0 0 -2.0926 -11.185 -27.278 m43= 2.8421 0 0 -2.6341 -9.3659 29.561
x1 x2 x3 x4 B x1 x2 x3 x4 B
7 -1 2 3 -13 -10 2 -2 3 -24
0 -7.7143 2.42857 -2.8571 -31.286 0 8.2 0.8 3.3 5.6
0 0 -8.6481 -0.2963 25.0556 0 0 -0.9268 2.42683 -12.488
0 0 0 -11.113 -33.34 0 0 0 -16.263 65.0526
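A minimal sketch of naive Gauss elimination (forward elimination followed by back substitution), reproducing the first elimination tabulated above:

import numpy as np

def gauss_eliminate(A, b):
    """Naive Gauss elimination: forward elimination then back substitution."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):                 # eliminate column k below the pivot
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]          # multiplier m_ik
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):         # back substitution
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[7, -1, 2, 3], [2, -8, 3, -2], [4, 1, -8, 2], [3, 2, -2, -9]])
b = np.array([-13, -35, 24, -23])
print(gauss_eliminate(A, b))   # approximately [-2, 2, -3, 3]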
LU decomposition methods separate the time-consuming elimination of the matrix [A] from the
manipulations of the right-hand side {B}. Thus, once [A] has been “decomposed,” multiple
right-hand side vectors can be evaluated in an efficient manner.
LU DECOMPOSITION
x1 x2 x3 x4 B
7 -1 2 3 -13
m21= 0.28571 2 -8 3 -2 -35
m31= 0.57143 4 1 -8 2 24
m41= 0.42857 3 2 -2 -9 -23
x1 x2 x3 x4 B
7 -1 2 3 -13
0 -7.7143 2.42857 -2.8571 -31.286
m32= -0.2037 0 1.57143 -9.1429 0.28571 31.4286
m42= -0.3148 0 2.42857 -2.8571 -10.286 -17.429
x1 x2 x3 x4 B
7 -1 2 3 -13
0 -7.7143 2.42857 -2.8571 -31.286
0 0 -8.6481 -0.2963 25.0556
m43= 0.24197 0 0 -2.0926 -11.185 -27.278
x1 x2 x3 x4 B
7 -1 2 3 -13
0 -7.7143 2.42857 -2.8571 -31.286
0 0 -8.6481 -0.2963 25.0556
0 0 0 -11.113 -33.34
B
1 0 0 0 7 -1 2 3 -13
0.28571 1 0 0 0 -7.7143 2.42857 -2.8571 -35
L= U=
0.57143 -0.2037 1 0 0 0 -8.6481 -0.2963 24
0.42857 -0.3148 0.24197 1 0 0 0 -11.113 -23
A = LU
A L x U
7 -1 2 3 1 0 0 0 7 -1 2 3
2 -8 3 -2 0.28571 1 0 0 0 -7.7143 2.42857 -2.8571
=
4 1 -8 2 0.57143 -0.2037 1 0 0 0 -8.6481 -0.2963
3 2 -2 -9 0.42857 -0.3148 0.24197 1 0 0 0 -11.113
LY=B UX=Y
x1 x2 x3 x4 B
-10 2 -2 3 -24
0 8.2 0.8 3.3 5.6
m32= -0.3415 0 -2.8 -1.2 1.3 -14.4
m42= 0.29268 0 2.4 -2.4 -8.4 31.2
x1 x2 x3 x4 B
-10 2 -2 3 -24
0 8.2 0.8 3.3 5.6
0 0 -0.9268 2.42683 -12.488
m43= 2.84211 0 0 -2.6341 -9.3659 29.561
x1 x2 x3 x4 B
-10 2 -2 3 -24
0 8.2 0.8 3.3 5.6
0 0 -0.9268 2.42683 -12.488
0 0 0 -16.263 32.5827
B
1 0 0 0 -10 2 -2 3 -24
-0.1 1 0 0 0 8.2 0.8 3.3 8
L= U=
-0.1 -0.3415 1 0 0 0 -0.9268 2.42683 -12
-0.2 0.29268 2.84211 1 0 0 0 -16.263 36
A = LU
A L x U
-10 2 -2 3 1 0 0 0 -10 2 -2 3
1 8 1 3 -0.1 1 0 0 0 8.2 0.8 3.3
=
1 -3 -1 1 -0.1 -0.3415 1 0 0 0 -0.9268 2.42683
2 2 -2 -9 -0.2 0.29268 2.84211 1 0 0 0 -16.263
LY=B UX=Y
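A minimal sketch of Doolittle-style LU decomposition with forward and back substitution, applied to the second system factored above:

import numpy as np

def lu_decompose(A):
    """Doolittle LU decomposition without pivoting: A = L U, with ones on diag(L)."""
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float).copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]     # store the multiplier
            U[i, k:] -= L[i, k] * U[k, k:]
    return L, U

def lu_solve(L, U, b):
    n = len(b)
    y = np.zeros(n)
    for i in range(n):                      # forward substitution: L y = b
        y[i] = b[i] - L[i, :i] @ y[:i]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):          # back substitution: U x = y
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

A = np.array([[-10, 2, -2, 3], [1, 8, 1, 3], [1, -3, -1, 1], [2, 2, -2, -9]])
L, U = lu_decompose(A)
print(lu_solve(L, U, np.array([-24.0, 8.0, -12.0, 36.0])))  # approximately [1, 2, 3, -4]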
The Jacobi method is a method for solving a matrix equation on a matrix that has no zeros along
its main diagonal. Each diagonal element is solved for, and an approximate value is plugged in.
The process is then iterated until it converges. This algorithm is a stripped-down version of the
Jacobi transformation method of matrix diagonalization.
The Jacobi method is easily derived by examining each of the equations in the linear system
Ax = b in isolation. If, in the ith equation, we solve for the value of xi while assuming the other
entries of x remain fixed, we obtain
xi(k+1) = ( bi - sum over j != i of aij xj(k) ) / aii.
In this method, the order in which the equations are examined is irrelevant, since the Jacobi
method treats them independently. The definition of the Jacobi method can be expressed with
matrices as
x(k+1) = D^-1 (L + U) x(k) + D^-1 b,
where the matrices D, -L, and -U represent the diagonal, strictly lower triangular, and strictly
upper triangular parts of A, respectively.
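A minimal sketch of the component-wise Jacobi iteration (the tolerance and iteration cap are illustrative choices):

import numpy as np

def jacobi(A, b, x0=None, tol=1e-6, max_iter=100):
    """Jacobi iteration: every component is updated from the previous iterate."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float)
    for _ in range(max_iter):
        x_new = np.empty(n)
        for i in range(n):
            s = A[i, :] @ x - A[i, i] * x[i]     # sum of the off-diagonal terms
            x_new[i] = (b[i] - s) / A[i, i]
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    return x

A = np.array([[7, -1, 2, 3], [2, -8, 3, -2], [4, 1, -8, 2], [3, 2, -2, -9]], dtype=float)
b = np.array([-13.0, -35.0, 24.0, -23.0])
print(jacobi(A, b))   # approaches [-2, 2, -3, 3], as in the table below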
Gauss-Seidel Method
The Gauss-Seidel method is the most commonly used iterative method. Assume that we are
given a set of n equations [A]{X} = {B}
Suppose that for conciseness, we limit ourselves to a 3 x 3 set of equations. If the diagonal
elements are all nonzero, the first equation can be solved for x1, the second for x2, and the third
for x3 to yield
Now, we can start the solution process by choosing guesses for the x’s. A simple way to obtain
initial guesses is to assume that they are all zero. These zeros can be substituted into Eq. (11.5a),
which can be used to calculate a new value for x1=b1/a11. Then, we substitute this new value of x1
along with the previous guess of zero for x3 into Eq. (11.5b) to compute a new value for x2. The
process is repeated for Eq. (11.5c) to calculate a new estimate for x3. Then we return to the first
equation and repeat the entire procedure until our solution converges closely enough to the true
values. Convergence can be checked using the criterion |ea,i| = |(xi(j) - xi(j-1)) / xi(j)| x 100% < es for all i.
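A minimal sketch of the Gauss-Seidel sweep; unlike Jacobi, each newly updated component is used immediately within the same pass:

import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-6, max_iter=100):
    """Gauss-Seidel iteration: new values are used as soon as they are computed."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :] @ x - A[i, i] * x[i]     # uses already-updated entries of x
            x[i] = (b[i] - s) / A[i, i]
        if np.max(np.abs(x - x_old)) < tol:
            break
    return x

A = np.array([[-10, 2, -2, 3], [1, 8, 1, 3], [1, -3, -1, 1], [2, 2, -2, -9]], dtype=float)
b = np.array([-24.0, 8.0, -12.0, 36.0])
print(gauss_seidel(A, b))   # approaches [1, 2, 3, -4], as in the tables below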
Jacobi Gauss-Seidel
x1 x2 x3 x4 x1 x2 x3 x4
7 -1 2 3 -13 7 -1 2 3 -13
2 -8 3 -2 -35 2 -8 3 -2 -35
4 1 -8 2 24 4 1 -8 2 24
3 2 -2 -9 -23 3 2 -2 -9 -23
k x1 x2 x3 x4 k x1 x2 x3 x4
0 0 0 0 0 0 0 0 0 0
1 -1.857 4.375 -3.000 2.556 1 -1.857 3.911 -3.440 3.570
2 -1.470 2.147 -2.743 3.575 2 -1.846 1.731 -2.814 2.950
3 -2.299 2.085 -2.573 3.152 3 -2.070 2.065 -3.039 3.000
4 -2.175 2.047 -3.101 2.824 4 -1.979 1.990 -2.991 3.003
5 -1.889 1.962 -3.126 2.975 5 -2.005 2.001 -3.002 2.999
6 -1.959 1.987 -2.956 3.056 6 -1.999 2.000 -3.000 3.000
7 -2.039 2.013 -2.967 3.001 7 -2.000 2.000 -3.000 3.000
8 -2.008 2.002 -3.018 2.983 8 -2.000 2.000 -3.000 3.000
9 -1.987 1.996 -3.008 3.002 9 -2.000 2.000 -3.000 3.000
10 -1.999 2.000 -2.994 3.005 10 -2.000 2.000 -3.000 3.000
11 -2.004 2.001 -2.998 2.999 11 -2.000 2.000 -3.000 3.000
12 -2.000 2.000 -3.002 2.999 12 -2.000 2.000 -3.000 3.000
13 -1.999 2.000 -3.000 3.001 13 -2.000 2.000 -3.000 3.000
14 -2.000 2.000 -2.999 3.000 14 -2.000 2.000 -3.000 3.000
15 -2.000 2.000 -3.000 3.000 15 -2.000 2.000 -3.000 3.000
16 -2.000 2.000 -3.000 3.000 16 -2.000 2.000 -3.000 3.000
17 -2.000 2.000 -3.000 3.000 17 -2.000 2.000 -3.000 3.000
18 -2.000 2.000 -3.000 3.000 18 -2.000 2.000 -3.000 3.000
19 -2.000 2.000 -3.000 3.000 19 -2.000 2.000 -3.000 3.000
20 -2.000 2.000 -3.000 3.000 20 -2.000 2.000 -3.000 3.000
k x1 x2 x3 x4 k x1 x2 x3 x4
0 0 0 0 0 0 0 0 0 0
1 2.400 1.000 12.000 -4.000 1 2.400 0.700 12.300 -6.044
2 -1.000 0.700 7.400 -5.911 2 -1.733 1.946 -1.615 -3.594
3 -0.713 2.417 2.989 -5.711 3 2.034 2.295 3.554 -3.828
4 0.572 2.857 -1.674 -4.286 4 1.000 1.866 3.574 -4.157
5 2.021 2.745 -0.285 -2.866 5 0.811 2.011 2.621 -3.955
6 2.146 1.858 2.920 -2.878 6 1.091 2.019 3.078 -3.993
7 1.324 1.446 5.695 -3.759 7 0.990 1.989 3.031 -4.012
8 0.422 1.532 5.228 -4.650 8 0.988 2.002 2.971 -3.996
9 0.266 2.038 3.176 -4.727 9 1.008 2.001 3.009 -4.000
10 0.754 2.343 1.426 -4.194 10 0.999 1.999 3.001 -4.001
11 1.325 2.300 1.533 -3.629 11 0.999 2.000 2.998 -4.000
12 1.465 2.004 2.796 -3.535 12 1.001 2.000 3.001 -4.000
13 1.181 1.793 3.919 -3.851 13 1.000 2.000 3.000 -4.000
14 0.820 1.806 3.951 -4.210 14 1.000 2.000 3.000 -4.000
15 0.708 1.982 3.190 -4.295 15 1.000 2.000 3.000 -4.000
16 0.870 2.123 2.466 -4.111 16 1.000 2.000 3.000 -4.000
17 1.098 2.125 2.389 -3.883 17 1.000 2.000 3.000 -4.000
18 1.182 2.020 2.841 -3.815 18 1.000 2.000 3.000 -4.000
19 1.091 1.928 3.307 -3.920 19 1.000 2.000 3.000 -4.000
20 0.948 1.920 3.389 -4.064 20 1.000 2.000 3.000 -4.000
21 0.887 1.982 3.124 -4.116
22 0.937 2.042 2.826 -4.057
23 1.026 2.051 2.754 -3.966 Converges at the 13th iteration
24 1.070 2.015 2.908 -3.928
25 1.043 1.976 3.097 -3.961
26 0.988 1.968 3.154 -4.017
27 0.957 1.989 3.067 -4.044
28 0.971 2.014 2.947 -4.027
29 1.005 2.020 2.904 -3.992
30 1.026 2.008 2.953 -3.973
31 1.019 1.993 3.028 -3.982
32 0.998 1.987 3.060 -4.004
33 0.984 1.994 3.033 -4.016
34 0.987 2.004 2.986 -4.012
35 1.000 2.008 2.963 -3.999
36 1.009 2.004 2.978 -3.990
37 1.008 1.998 3.007 -3.992
38 1.001 1.995 3.023 -4.000
39 0.994 1.997 3.015 -4.006
40 0.995 2.001 2.997 -4.005
41 0.999 2.003 2.986 -4.000
42 1.003 2.002 2.990 -3.996
43 1.003 2.000 3.001 -3.997
44 1.001 1.998 3.008 -4.000
45 0.998 1.999 3.007 -4.002
46 0.998 2.000 3.000 -4.002
47 0.999 2.001 2.995 -4.000
48 1.001 2.001 2.996 -3.999
49 1.001 2.000 3.000 -3.999
50 1.000 1.999 3.003 -4.000
51 0.999 1.999 3.003 -4.001
52 0.999 2.000 3.000 -4.001
53 1.000 2.000 2.998 -4.000
54 1.000 2.000 2.998 -4.000
55 1.001 2.000 3.000 -3.999
56 1.000 2.000 3.001 -4.000
57 1.000 2.000 3.001 -4.000
58 1.000 2.000 3.000 -4.000
59 1.000 2.000 2.999 -4.000
60 1.000 2.000 2.999 -4.000
Case Study 1
F1 F2 F3 F4 F5 F6 F7 H2 V2 V3 B
-1 0 0 0 -0.707 0 0.707 0 0 0 0
0 0 0 0 0.707 0 0.707 0 0 0 -500
1 -0.866 0 0.5 0 0 0 0 0 0 0
0 0.5 0 0.866 0 0 0 0 0 0 -100
0 0.866 1 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 -1 0 0 0
0 0 -1 -0.5 0.707 1 0 0 0 0 0
0 0 0 -0.866 -0.707 0 0 0 0 0 0
0 0 0 0 0 -1 -0.707 0 -1 0 0
0 0 0 0 0 0 -0.707 0 0 -1 0
COLUMN 1
F1 F2 F3 F4 F5 F6 F7 H2 V2 V3 B
-1 0 0 0 -0.707 0 0.707 0 0 0 0
0 0 0 0 0.707 0 0.707 0 0 0 -500
0 -0.866 0 0.5 -0.707 0 0.707 0 0 0 0
0 0.5 0 0.866 0 0 0 0 0 0 -100
0 0.866 1 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 -1 0 0 0
0 0 -1 -0.5 0.707 1 0 0 0 0 0
0 0 0 -0.866 -0.707 0 0 0 0 0 0
0 0 0 0 0 -1 -0.707 0 -1 0 0
0 0 0 0 0 0 -0.707 0 0 -1 0
COLUMN 2
*swap R2 & R3
F1 F2 F3 F4 F5 F6 F7 H2 V2 V3 B
-1 0 0 0 -0.707 0 0.707 0 0 0 0
0 -0.866 0 0.5 -0.707 0 0.707 0 0 0 0
0 0 0 0 0.707 0 0.707 0 0 0 -500
0 0.5 0 0.866 0 0 0 0 0 0 -100
0 0.866 1 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 -1 0 0 0
0 0 -1 -0.5 0.707 1 0 0 0 0 0
0 0 0 -0.866 -0.707 0 0 0 0 0 0
0 0 0 0 0 -1 -0.707 0 -1 0 0
0 0 0 0 0 0 -0.707 0 0 -1 0
F1 F2 F3 F4 F5 F6 F7 H2 V2 V3 B
-1 0 0 0 -0.707 0 0.707 0 0 0 0
0 -0.866 0 0.5 -0.707 0 0.707 0 0 0 0
0 0 0 0 0.707 0 0.707 0 0 0 -500
0 0 0 1.1547 -0.408 0 0.4082 0 0 0 -100
0 0 1 0.5 -0.707 0 0.707 0 0 0 0
0 0 0 0 0 0 0 -1 0 0 0
0 0 -1 -0.5 0.707 1 0 0 0 0 0
0 0 0 -0.866 -0.707 0 0 0 0 0 0
0 0 0 0 0 -1 -0.707 0 -1 0 0
0 0 0 0 0 0 -0.707 0 0 -1 0
COLUMN 3
*swap R3 & R5
F1 F2 F3 F4 F5 F6 F7 H2 V2 V3 B
-1 0 0 0 -0.71 0 0.707 0 0 0 0
0 -0.87 0 0.5 -0.71 0 0.707 0 0 0 0
0 0 1 0.5 -0.71 0 0.707 0 0 0 0
0 0 0 1.155 -0.41 0 0.408 0 0 0 -100
0 0 0 0 0.707 0 0.707 0 0 0 -500
0 0 0 0 0 0 0 -1 0 0 0
0 0 -1 -0.5 0.707 1 0 0 0 0 0
0 0 0 -0.87 -0.71 0 0 0 0 0 0
0 0 0 0 0 -1 -0.71 0 -1 0 0
0 0 0 0 0 0 -0.71 0 0 -1 0
F1 F2 F3 F4 F5 F6 F7 H2 V2 V3 B
-1 0 0 0 -0.71 0 0.707 0 0 0 0
0 -0.87 0 0.5 -0.71 0 0.707 0 0 0 0
0 0 1 0.5 -0.71 0 0.707 0 0 0 0
0 0 0 1.155 -0.41 0 0.408 0 0 0 -100
0 0 0 0 0.707 0 0.707 0 0 0 -500
0 0 0 0 0 0 0 -1 0 0 0
0 0 0 0 0 1 0.707 0 0 0 0
0 0 0 -0.87 -0.71 0 0 0 0 0 0
0 0 0 0 0 -1 -0.71 0 -1 0 0
0 0 0 0 0 0 -0.71 0 0 -1 0
COLUMN 4
F1 F2 F3 F4 F5 F6 F7 H2 V2 V3 B
-1 0 0 0 -0.71 0 0.707 0 0 0 0
0 -0.87 0 0.5 -0.71 0 0.707 0 0 0 0
0 0 1 0.5 -0.71 0 0.707 0 0 0 0
0 0 0 1.155 -0.41 0 0.408 0 0 0 -100
0 0 0 0 0.707 0 0.707 0 0 0 -500
0 0 0 0 0 0 0 -1 0 0 0
0 0 0 0 0 1 0.707 0 0 0 0
0 0 0 0 -1.01 0 0.306 0 0 0 -75
0 0 0 0 0 -1 -0.71 0 -1 0 0
0 0 0 0 0 0 -0.71 0 0 -1 0
COLUMN 5
F1 F2 F3 F4 F5 F6 F7 H2 V2 V3 B
-1 0 0 0 -0.71 0 0.707 0 0 0 0
0 -0.87 0 0.5 -0.71 0 0.707 0 0 0 0
0 0 1 0.5 -0.71 0 0.707 0 0 0 0
0 0 0 1.155 -0.41 0 0.408 0 0 0 -100
0 0 0 0 0.707 0 0.707 0 0 0 -500
0 0 0 0 0 0 0 -1 0 0 0
0 0 0 0 0 1 0.707 0 0 0 0
0 0 0 0 0 0 1.319 0 0 0 -792
0 0 0 0 0 -1 -0.71 0 -1 0 0
0 0 0 0 0 0 -0.71 0 0 -1 0
COLUMN 6
*swap R6 & R7
F1 F2 F3 F4 F5 F6 F7 H2 V2 V3 B
-1 0 0 0 -0.71 0 0.707 0 0 0 0
0 -0.87 0 0.5 -0.71 0 0.707 0 0 0 0
0 0 1 0.5 -0.71 0 0.707 0 0 0 0
0 0 0 1.155 -0.41 0 0.408 0 0 0 -100
0 0 0 0 0.707 0 0.707 0 0 0 -500
0 0 0 0 0 1 0.707 0 0 0 0
0 0 0 0 0 0 0 -1 0 0 0
0 0 0 0 0 0 1.319 0 0 0 -792
0 0 0 0 0 -1 -0.71 0 -1 0 0
0 0 0 0 0 0 -0.71 0 0 -1 0
F1 F2 F3 F4 F5 F6 F7 H2 V2 V3 B
-1 0 0 0 -0.71 0 0.707 0 0 0 0
0 -0.87 0 0.5 -0.71 0 0.707 0 0 0 0
0 0 1 0.5 -0.71 0 0.707 0 0 0 0
0 0 0 1.155 -0.41 0 0.408 0 0 0 -100
0 0 0 0 0.707 0 0.707 0 0 0 -500
0 0 0 0 0 1 0.707 0 0 0 0
0 0 0 0 0 0 0 -1 0 0 0
0 0 0 0 0 0 1.319 0 0 0 -792
0 0 0 0 0 0 0 0 -1 0 0
0 0 0 0 0 0 -0.71 0 0 -1 0
COLUMN 7
*swap R7 & R8
F1 F2 F3 F4 F5 F6 F7 H2 V2 V3 B
-1 0 0 0 -0.71 0 0.707 0 0 0 0
0 -0.87 0 0.5 -0.71 0 0.707 0 0 0 0
0 0 1 0.5 -0.71 0 0.707 0 0 0 0
0 0 0 1.155 -0.41 0 0.408 0 0 0 -100
0 0 0 0 0.707 0 0.707 0 0 0 -500
0 0 0 0 0 1 0.707 0 0 0 0
0 0 0 0 0 0 1.319 0 0 0 -792
0 0 0 0 0 0 0 -1 0 0 0
0 0 0 0 0 0 0 0 -1 0 0
0 0 0 0 0 0 -0.71 0 0 -1 0
F1 F2 F3 F4 F5 F6 F7 H2 V2 V3 B
-1 0 0 0 -0.71 0 0.707 0 0 0 0
0 -0.87 0 0.5 -0.71 0 0.707 0 0 0 0
0 0 1 0.5 -0.71 0 0.707 0 0 0 0
0 0 0 1.155 -0.41 0 0.408 0 0 0 -100
0 0 0 0 0.707 0 0.707 0 0 0 -500
0 0 0 0 0 1 0.707 0 0 0 0
0 0 0 0 0 0 1.319 0 0 0 -792
0 0 0 0 0 0 0 -1 0 0 0
0 0 0 0 0 0 0 0 -1 0 0
0 0 0 0 0 0 0 0 0 -1 -424
A = LU
L =
1 0 0 0 0 0 0 0 0 0
-1 1 0 0 0 0 0 0 0 0
0 -1 1 0 0 0 0 0 0 0
0 -0.5774 0 1 0 0 0 0 0 0
0 0 0 0 1 0 0 0 0 0
0 0 0 0 0 1 0 0 0 0
0 0 0 -0.75 -1.433 0 1 0 0 0
0 0 -1 0 0 0 0 1 0 0
0 0 0 0 0 0 -1 0 1 0
0 0 0 0 0 0 0 -0.5359 0 1
U =
-1 0 0 0 -0.707 0 0.707 0 0 0
0 -0.866 0 0.5 -0.707 0 0.707 0 0 0
0 0 1 0.5 -0.707 0 0.707 0 0 0
0 0 0 1.15468 -0.4082 0 0.4082 0 0 0
0 0 0 0 0.707 0 0.707 0 0 0
0 0 0 0 0 1 0.707 0 0 0
0 0 0 0 0 0 1.31929 0 0 0
0 0 0 0 0 0 0 -1 0 0
0 0 0 0 0 0 0 0 -1 0
0 0 0 0 0 0 0 0 0 -1
LY = B, UX = Y
Back substitution gives:
F1 = -348.330 N    F6 = 424.165 N
F2 = -351.670 N    F7 = -599.951 N
F3 = 304.546 N     H2 = 175.833 N
F4 = 87.569 N      V2 = 0.000 N
F5 = -107.263 N    V3 = 424.165 N
Roots of Non-linear Equations
Open Methods:
Fixed Point Iteration Method
The idea of the fixed point iteration methods is to first reformulate an equation to an
equivalent fixed point problem:
f (x) = 0 ⇐⇒ x = g(x)
and then to use the iteration: with an initial guess x0 chosen, compute a sequence
xn+1 = g(xn), n ≥ 0
in the hope that xn → α.
There are infinitely many ways to introduce an equivalent fixed point problem for a given
equation; e.g., for any function G(t) with the property
G(t) = 0 ⇐⇒ t = 0,
we can take g(x) = x + G(f(x)). The resulting iteration method may or may not
converge, though.
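A minimal sketch of the iteration xn+1 = g(xn). The reformulation g(x) = 8/(x + 2) of x^2 + 2x - 8 = 0 is an assumption, chosen only because it reproduces the first tabulation below (0 → 4 → 1.333 → 2.4 → …):

def fixed_point(g, x0, tol=1e-6, max_iter=100):
    """Iterate x_{n+1} = g(x_n) until successive iterates agree to within tol."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Illustrative reformulation (assumption): x^2 + 2x - 8 = 0 rewritten as x = 8 / (x + 2)
print(fixed_point(lambda x: 8.0 / (x + 2.0), x0=0.0))   # converges to 2.0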
Example (two choices of g(x), iterated side by side):
k x f(x) e k x f(x) e
1 0.000 4.000 1 0.000 4.000
2 4.000 1.333 1.000 2 4.000 -4.000 1.000
3 1.333 2.400 -2.000 3 -4.000 -4.000 2.000
4 2.400 1.818 0.444 4 -4.000 -4.000 0.000
5 1.818 2.095 -0.320 5 -4.000 -4.000 0.000
6 2.095 1.953 0.132 6 -4.000 -4.000 0.000
7 1.953 2.024 -0.073 7 -4.000 -4.000 0.000
8 2.024 1.988 0.035 8 -4.000 -4.000 0.000
9 1.988 2.006 -0.018 9 -4.000 -4.000 0.000
10 2.006 1.997 0.009 10 -4.000 -4.000 0.000
11 1.997 2.001 -0.004
12 2.001 1.999 0.002
13 1.999 2.000 -0.001
14 2.000 2.000 0.001
15 2.000 2.000 0.000
16 2.000 2.000 0.000
17 2.000 2.000 0.000
18 2.000 2.000 0.000
19 2.000 2.000 0.000
20 2.000 2.000 0.000
k x f(x) e k x f(x) e
1 0.000 1.250 1 0.000 1.250
2 1.250 0.952 1.000 2 1.250 0.859 1.000
3 0.952 1.010 -0.313 3 0.859 1.065 -0.455
4 1.010 0.998 0.057 4 1.065 0.966 0.193
5 0.998 1.000 -0.012 5 0.966 1.017 -0.103
6 1.000 1.000 0.002 6 1.017 0.992 0.050
7 1.000 1.000 0.000 7 0.992 1.004 -0.025
8 1.000 1.000 0.000 8 1.004 0.998 0.012
9 1.000 1.000 0.000 9 0.998 1.001 -0.006
10 1.000 1.000 0.000 10 1.001 0.999 0.003
11 1.000 1.000 0.000 11 0.999 1.000 -0.002
12 1.000 1.000 0.000 12 1.000 1.000 0.001
13 1.000 1.000 0.000 13 1.000 1.000 0.000
14 1.000 1.000 0.000 14 1.000 1.000 0.000
15 1.000 1.000 0.000 15 1.000 1.000 0.000
Example
Given: Given:
x0 = 0 x0 = 0
x1 = 3 x1 = 3
k Xo X1 f(Xo) f(X1) e k Xo X1 f(Xo) f(X1) e
0 0.000 3.000 -8.000 7.000 0 0.000 3.000 -5.000 16.000
1 3.000 1.600 7.000 -2.240 1.000 1 3.000 0.714 16.000 -1.633 1.000
2 1.600 1.939 -2.240 -0.360 -0.875 2 0.714 0.926 -1.633 -0.439 -3.200
3 1.939 2.004 -0.360 0.026 0.175 3 0.926 1.004 -0.439 0.023 0.229
4 2.004 2.000 0.026 0.000 0.032 4 1.004 1.000 0.023 0.000 0.078
5 2.000 2.000 0.000 0.000 -0.002 5 1.000 1.000 0.000 0.000 -0.004
6 2.000 2.000 0.000 0.000 0.000 6 1.000 1.000 0.000 0.000 0.000
7 2.000 2.000 0.000 0.000 0.000 7 1.000 1.000 0.000 0.000 0.000
8 2.000 2.000 0.000 0.000 0.000 8 1.000 1.000 0.000 0.000 0.000
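The two-guess iterations tabulated above follow the secant update x(k+1) = x(k) - f(x(k))(x(k) - x(k-1)) / (f(x(k)) - f(x(k-1))). A minimal sketch; the test function x^2 + 2x - 8 is an assumption inferred from the tabulated values f(0) = -8 and f(3) = 7:

def secant(f, x0, x1, tol=1e-6, max_iter=50):
    """Secant method: approximate the derivative with a finite difference."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    return x1

print(secant(lambda x: x**2 + 2*x - 8, 0.0, 3.0))   # converges to 2.0, as in the left table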
Bisection Method
Suppose that f(x) is a continuous function that changes sign on the interval [a, b]. Then,
by the Intermediate Value Theorem, f(x) = 0 for some x ∈ [a, b]. How can we find the solution,
knowing that it lies in this interval?
The method of bisection attempts to reduce the size of the interval in which a solution is
known to exist. Suppose that we evaluate f(m), where m = (a + b)/2. If f(m) = 0, then we are
done. Otherwise, f must change sign on the interval [a, m] or [m, b], since f(a) and f(b) have
different signs. Therefore, we can cut the size of our search space in half, and continue this
process until the interval of interest is sufficiently small, in which case we must be close to a
solution.
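A minimal sketch of the bisection loop. The example function f(H) = H^2 - H - 1 is an assumption consistent with the first tabulation below, which changes sign between H = 1.5 and H = 2:

def bisection(f, a, b, tol=1e-6, max_iter=100):
    """Bisection: repeatedly halve an interval [a, b] on which f changes sign."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "f must change sign on [a, b]"
    for _ in range(max_iter):
        m = (a + b) / 2.0
        fm = f(m)
        if fm == 0 or (b - a) / 2.0 < tol:
            return m
        if fa * fm < 0:        # root lies in [a, m]
            b, fb = m, fm
        else:                  # root lies in [m, b]
            a, fa = m, fm
    return (a + b) / 2.0

# Illustrative function (assumption): f(H) = H^2 - H - 1, which changes sign on [1.5, 2]
print(bisection(lambda H: H**2 - H - 1, 1.5, 2.0))   # about 1.618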
Example
Given:
H      f(H)
0      -1.00
0.5    -1.25
1      -1.00
1.5    -0.25
2      1.00
2.5    2.75
[Plot of f(H) versus H; the sign change between H = 1.5 and H = 2 brackets a root.]
Given:
H      f(H)
0      -2.00
0.5    -1.75
1      -1.00
1.5    0.25
2      2.00
2.5    4.25
[Plot of f(H) versus H; the sign change between H = 1 and H = 1.5 brackets a root.]
Given:
f(H) = …
g(H) = …
f'(H) = …
3. Secant Method
f(H) = … (the same function as above)
2. Bisection Method
Bracketing Methods

x      f(x)
0      -5.000
1      3.848
2      21.507
3      44.393
4      70.934
5      100.191
[Plot of f(x) versus x.]

i      Ha        Hb        f(Ha)     f(Hb)     error
1      4.0000    2.0000    70.9344   21.5066
2      2.0000    1.1298    21.5066   5.7586    0.7703
3      1.1298    0.81162   5.7586
4      1.3198                                  0.3921

Therefore, H = 0.702293
Curve Fitting and Interpolation
Least Square Line
If your data shows a linear relationship between the X and Y variables, you will want to
find the line that best fits that relationship. That line is called a Regression Line and has the
equation ŷ = a + bx. The Least Squares Regression Line is the line that makes the vertical
distances from the data points to the line as small as possible. It is called "least squares"
because the best-fitting line is the one that minimizes the sum of the squares of the vertical
errors. This can be a bit hard to visualize, but the main point is that you are aiming to find the
equation that fits the points as closely as possible.
x      y
1.2    1.1
2.3    2.1
3      3.1
3.8    4
4.7    4.9
5.9    5.9        n = 6
[Scatter plot of the data with the fitted line.]

x      y      x^2     xy      y'
1.2    1.1    1.44    1.32    1.117579722
2.3    2.1    5.29    4.83    2.273344235
3      3.1    9       9.3     3.008830744
3.8    4      14.44   15.2    3.849386754
4.7    4.9    22.09   23.03   4.795012265
5.9    5.9    34.81   34.81   6.05584628
sums:  20.9   21.1    87.07   88.49

a = -0.143    b = 1.051
y = -0.143 + 1.051x
x      y
1      3.8
2      6.4
3      7.8
4      10.5
5      13
6      17        n = 6
[Scatter plot of the data with the fitted line.]

x      y      x^2    xy     y'
1      3.8    1      3.8    3.428571429
2      6.4    4      12.8   5.957142857
3      7.8    9      23.4   8.485714286
4      10.5   16     42
5      13     25     65
6      17     36     102

a = 0.900    b = 2.529
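A minimal sketch of the slope and intercept computation used in both examples above (b from the usual normal-equation formula, a from the column sums):

import numpy as np

def least_squares_line(x, y):
    """Fit y = a + b*x by minimizing the sum of squared vertical errors."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    b = (n * np.sum(x * y) - np.sum(x) * np.sum(y)) / (n * np.sum(x**2) - np.sum(x)**2)
    a = (np.sum(y) - b * np.sum(x)) / n
    return a, b

a, b = least_squares_line([1.2, 2.3, 3, 3.8, 4.7, 5.9], [1.1, 2.1, 3.1, 4, 4.9, 5.9])
print(a, b)   # approximately -0.143 and 1.051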
Polynomial Regression
Polynomial regression is a form of regression analysis in which the relationship between the
independent variable x and the dependent variable y is modelled as an nth degree polynomial in x.
Polynomial regression fits a nonlinear relationship between the value of x and the corresponding
conditional mean of y, denoted E(y |x). Although polynomial regression fits a nonlinear model to the
data, as a statistical estimation problem it is linear, in the sense that the regression function E(y | x) is
linear in the unknown parameters that are estimated from the data. For this reason, polynomial
regression is considered to be a special case of multiple linear regression.
The goal of regression analysis is to model the expected value of a dependent variable y in terms of
the value of an independent variable (or vector of independent variables) x. In simple linear
regression, the model
is used, where ε is an unobserved random error with mean zero conditioned on a scalar variable x. In
this model, for each unit increase in the value of x, the conditional expectation of y increases by β1
units.
In many settings, such a linear relationship may not hold. For example, if we are modeling the yield
of a chemical synthesis in terms of the temperature at which the synthesis takes place, we may find
that the yield improves by increasing amounts for each unit increase in temperature. In this case, we
might propose a quadratic model of the form
In this model, when the temperature is increased from x to x + 1 units, the expected yield changes
by β1 + β2(2x + 1). (This can be seen by replacing x in this equation with x + 1 and subtracting the
equation in x from the equation in x + 1.) For infinitesimal changes in x, the effect on y is given by the
total derivative with respect to x: β1 + 2β2x. The fact that the change in yield depends on x is what
makes the relationship between x and y nonlinear even though the model is linear in the parameters
to be estimated.
In general, we can model the expected value of y as an nth degree polynomial, yielding the
general polynomial regression model
Conveniently, these models are all linear from the point of view of estimation, since the
regression function is linear in terms of the unknown parameters β0, β1, .... Therefore, for least
squares analysis, the computational and inferential problems of polynomial regression can be
completely addressed using the techniques of multiple regression. This is done by treating x, x2, ... as
being distinct independent variables in a multiple regression model.
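Because the model is linear in the coefficients, the fit reduces to multiple linear regression on x, x^2, …, x^n. A minimal sketch using a Vandermonde design matrix; the data values are illustrative assumptions:

import numpy as np

def poly_fit(x, y, degree):
    """Least-squares polynomial regression: solve the linear least-squares problem for beta."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    X = np.vander(x, degree + 1, increasing=True)   # columns 1, x, x^2, ..., x^degree
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta                                      # [beta0, beta1, ..., beta_degree]

# Illustrative data (assumption): a roughly quadratic trend
x = np.array([0, 1, 2, 3, 4, 5], dtype=float)
y = np.array([1.1, 2.9, 7.2, 13.1, 20.8, 31.0])
print(poly_fit(x, y, degree=2))   # coefficients of 1, x, x^2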
Euler's Method
The insight that Euler had was to see how this estimate could be improved. The
strategy he used was to divide the interval from x0 to xact into equal subintervals
and recompute the tangent line as you go. This keeps the tangent line from "drifting" far
from the function itself, and hopefully produces a more accurate estimate for yact at the
end.
In general, the coordinates of the point (xn+1, yn+1) can be computed from the coordinates
of the point (xn, yn) as follows:
xn+1 = xn + h,    yn+1 = yn + h * f(xn, yn).
y0 (boundary condition) = 3.5
h (interval) = 0.1
k x f(x,y) y
0 0 -0.033 3.500
1 0.1 1.316 3.497
2 0.2 1.005 3.628
3 0.3 0.731 3.729
4 0.4 0.490 3.802
5 0.5 0.279 3.851
6 0.6 0.095 3.879
7 0.7 -0.066 3.888
8 0.8 -0.205 3.882
9 0.9 -0.325 3.861
10 1 -0.428 3.829
11 1.1 -0.516 3.786
12 1.2 -0.590 3.734
13 1.3 -0.653 3.675
14 1.4 -0.704 3.610
15 1.5 -0.746 3.540
16 1.6 -0.780 3.465
17 1.7 -0.807 3.387
18 1.8 -0.827 3.306
19 1.9 -0.841 3.224
20 2 -0.850 3.140
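A minimal sketch of the Euler march tabulated above. The slope function f(x, y) used in the table is not reproduced in this extract, so a placeholder ODE is assumed for illustration:

def euler(f, x0, y0, h, n_steps):
    """Euler's method: advance with the tangent slope at the current point."""
    x, y = x0, y0
    history = [(x, y)]
    for _ in range(n_steps):
        y = y + h * f(x, y)        # y_{n+1} = y_n + h * f(x_n, y_n)
        x = x + h                  # x_{n+1} = x_n + h
        history.append((x, y))
    return history

# Placeholder ODE (assumption): dy/dx = x - y, with y(0) = 3.5 and h = 0.1
for x, y in euler(lambda x, y: x - y, 0.0, 3.5, 0.1, 20):
    print(f"{x:.1f}  {y:.4f}")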
Trapezoidal Rule
The trapezoidal rule is applied to evaluate a definite integral of the form ∫ab f(x) dx by
approximating the region under the graph of the function f(x) as a trapezoid and calculating its
area. Under the composite trapezoidal rule, we evaluate the area under a curve by dividing the
total area into little trapezoids rather than rectangles.
The trapezoidal rule is the first of the Newton-Cotes closed integration formulas. It
corresponds to the case of first order polynomials:
Geometrically, the trapezoidal rule is equivalent to approximating the area of the trapezoid under a
straight line connecting f(a) and f(b). Recall from geometry that the formula for computing the
area of a trapezoid is the height times the average of the bases. In our case, the concept is the
same, but the trapezoid is on its side. Therefore, the integral estimate can be represented as
or
where, for the trapezoidal rule, the average height is the average of the function
values at the end points, or [f(a) + f(b)]/2.
Trapezoidal Rule
f(x)=1/x
a= 1
b= 2
n= 10
h= 0.1
segment x f(x) I
1 1
1 1.1 0.9090909 0.095454545
2 1.2 0.8333333 0.087121212
3 1.3 0.7692308 0.080128205
4 1.4 0.7142857 0.074175824
5 1.5 0.6666667 0.069047619
6 1.6 0.625 0.064583333
7 1.7 0.5882353 0.060661765
8 1.8 0.5555556 0.057189542
9 1.9 0.5263158 0.054093567
10 2 0.5 0.051315789
sum = 0.693771 (the exact value is ln 2 ≈ 0.693147)
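A minimal sketch of the composite trapezoidal rule applied to the example above (f(x) = 1/x on [1, 2] with n = 10):

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n equal segments."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)       # interior points get full weight
    return h * total

print(trapezoid(lambda x: 1.0 / x, 1.0, 2.0, 10))   # about 0.69377, versus ln 2 = 0.69315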
Simpson’s Rule
Aside from applying the trapezoidal rule with finer segmentation, another way to obtain a
more accurate estimate of an integral is to use higher-order polynomials to connect the points.
For example, if there is an extra point midway between f(a) and f(b), the three points can be
connected with a parabola. If there are two points equally spaced between f(a) and f(b), the four
points can be connected with a third-order polynomial. The formulas that result from taking the
integrals under these polynomials are called Simpson’s rules.
In numerical integration, Simpson's rules are several approximations for definite
integrals, named after Thomas Simpson (1710–1761).
The most basic of these rules, called Simpson's 1/3 rule, or just Simpson's rule, reads
The approximate equality in the rule becomes exact if f is a polynomial up to 3rd degree.
If the 1/3 rule is applied to n equal subdivisions of the integration range [a, b], one obtains the
composite Simpson's rule. Points inside the integration range are given alternating weights 4/3
and 2/3.
Simpson's 3/8 rule, also called Simpson's second rule, requires one more function evaluation
inside the integration range and gives lower error bounds, but it does not improve on the order of
the error.
Simpson's 1/3 and 3/8 rules are two special cases of closed Newton-Cotes formulas.
In naval architecture and ship stability estimation, there also exists Simpson's third rule, which has no
special importance in general numerical analysis; see Simpson's rules (ship stability).
a= 0
b= 2
n= 20
h= 0.1
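A minimal sketch of the composite Simpson's 1/3 rule for the parameters above (a = 0, b = 2, n = 20). The integrand is not reproduced in this extract, so a placeholder function is assumed:

import math

def simpson_13(f, a, b, n):
    """Composite Simpson's 1/3 rule; n must be even."""
    if n % 2:
        raise ValueError("n must be even for Simpson's 1/3 rule")
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        weight = 4 if i % 2 else 2          # alternating interior weights 4, 2, 4, ...
        total += weight * f(a + i * h)
    return h * total / 3.0

# Placeholder integrand (assumption): f(x) = x * exp(-x) on [0, 2] with n = 20
print(simpson_13(lambda x: x * math.exp(-x), 0.0, 2.0, 20))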
The most widely known member of the Runge–Kutta family is generally referred to as
"RK4", the "classic Runge–Kutta method" or simply as "the Runge–Kutta method".
for n = 0, 1, 2, 3, …, using
Runge-Kutta Method x0 = 1
y0 = 4
y' = f(x,y) with y(x0) = y0
h= 0.05
y(n+1) = y(n) + (h/6)*(k1 + 2*k2 + 2*k3 + k4)

n    x(n)    y(n)    k1    k2    k3    k4
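A minimal sketch of the classic RK4 step with x0 = 1, y0 = 4, and h = 0.05. The specific f(x, y) is not reproduced in this extract, so a placeholder ODE is assumed:

def rk4(f, x0, y0, h, n_steps):
    """Classic fourth-order Runge-Kutta method."""
    x, y = x0, y0
    for _ in range(n_steps):
        k1 = f(x, y)
        k2 = f(x + h / 2, y + h * k1 / 2)
        k3 = f(x + h / 2, y + h * k2 / 2)
        k4 = f(x + h, y + h * k3)
        y += (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)   # weighted average of the four slopes
        x += h
    return x, y

# Placeholder ODE (assumption): y' = x + y, with x0 = 1, y0 = 4, h = 0.05
print(rk4(lambda x, y: x + y, 1.0, 4.0, 0.05, 20))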