
Compilation of Algorithms

Numerical Methods and Analysis

Submitted by: Shekinah Gabas

Submitted to: Engr. Irismay Jumawan

January 17, 2022


Table of Contents

Linear Equation System


Small Scale
Graphical Method
Cramer’s Rule
Elimination of Unknowns
Large Scale
Gaussian Elimination with Row Pivoting
LU Decomposition
Iterative Methods
Jacobi Method
Gauss-Seidel Method
Case Study 1
Roots of Non-linear Equations
Open Methods
Fixed Point Iteration
Secant Method
Newton-Raphson Method
Bracketing Methods
Bisection Method
False Position Method
Case Study 2
Curve Fitting and Interpolation
Linear Regression
Least Square Line
Polynomial Regression
Numerical Integration
Euler's Method
Trapezoidal Rule
Simpson’s Rules
Solution of Ordinary Differential Equations: Initial Value Problems
Runge-Kutta Methods
Linear Equation System
For small scale:

Graphical Method

We have learned that the graph of a linear equation is a line, and the coordinates of every point on that line are a solution of the equation. For a system of simultaneous linear equations we graph the two lines, so we can see all the points that satisfy each equation; the points that the lines have in common are the solutions of the system. Most systems of simultaneous linear equations have exactly one solution, but some systems have no solution, and for others every point on the line is a solution.

When a system of two linear equations is represented by the graph of two lines in the same plane, there are three possible cases.

Solving a system of simultaneous linear equations in two variables by drawing their graphs is known as the graphical method. To solve a pair of linear equations in two variables graphically, follow the steps below.

Step 1: Obtain the given system of simultaneous linear equations in two variables. Let the system
of simultaneous linear equations be 

a1x + b1y + c1 = 0 ……(1)

a2x + b2y + c2 = 0 ……(2)

Step 2: Plot the graph of the first equation and then graph the second equation on the same
rectangular coordinate system. The following three cases may arise. 

Case 1 - If the lines intersect at a point, then the given system has a unique solution given
by the coordinates of the point of intersection. 
Case 2 - If the lines are coincident, then the system is consistent and has infinitely many
solutions. In this case, every solution of one of the equations is a solution of the system. 
Case 3 - If the lines are parallel, then the given system of equations is inconsistent i.e., it
has no solution.
Graphical Method

3x + 2y = 4 2x + 7y = 13
-2x -y = -3 -x + 3y = 0
x Eq 1 Eq 2 x Eq 1 Eq 2
-3 6.50 9.00 -2 2.429 -0.667
-2.5 5.75 8.00 -1.5 2.286 -0.500
-2 5.00 7.00 -1 2.143 -0.333
-1.5 4.25 6.00 -0.5 2.000 -0.167
-1 3.50 5.00 0 1.857 0.000
-0.5 2.75 4.00 0.5 1.714 0.167
0 2.00 3.00 1 1.571 0.333
0.5 1.25 2.00 1.5 1.429 0.500
1 0.50 1.00 2 1.286 0.667
1.5 -0.25 0.00 2.5 1.143 0.833
2 -1.00 -1.00 3 1.000 1.000
2.5 -1.75 -2.00 3.5 0.857 1.167
3 -2.50 -3.00 4 0.714 1.333
(Graphs of Eq 1 and Eq 2 for each system; the lines intersect at the solutions given below.)

x = 2, y = -1 x = 3, y = 1
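The graphical solutions above can be reproduced numerically. The following is a small sketch (not part of the original report; function and variable names are mine) that plots both equations of each system and prints the intersection point obtained with NumPy.

```python
# Sketch: plot the two lines of each system and print their intersection.
import numpy as np
import matplotlib.pyplot as plt

systems = {
    "System 1": (np.array([[3.0, 2.0], [-2.0, -1.0]]), np.array([4.0, -3.0])),
    "System 2": (np.array([[2.0, 7.0], [-1.0, 3.0]]), np.array([13.0, 0.0])),
}

x = np.linspace(-3, 4, 100)
for name, (A, b) in systems.items():
    # y expressed from each equation: y = (b_i - a_i1*x) / a_i2
    y1 = (b[0] - A[0, 0] * x) / A[0, 1]
    y2 = (b[1] - A[1, 0] * x) / A[1, 1]
    plt.plot(x, y1, label=f"{name}, Eq 1")
    plt.plot(x, y2, label=f"{name}, Eq 2")
    # The intersection of the two lines is the solution of the system.
    print(name, "intersection:", np.linalg.solve(A, b))

plt.legend()
plt.grid(True)
plt.show()
```

Running it prints approximately [2, -1] and [3, 1], matching the graphical results above.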
Cramer’s Rule

Cramer’s Rule is an explicit formula for the solution of a system of linear equations with
as many equations as unknowns, i.e., a square matrix, valid whenever the system has a unique
solution. It expresses the solution in terms of the determinants of the (square) coefficient matrix
and of matrices obtained from it by replacing one column by the vector of the right-hand sides of
the equations.

2 x 2 systems:

System 1:  3x + 2y = 4,  -2x - y = -3

D  = |  3   2 |  = 1        Dx = |  4   2 |  = 2        Dy = |  3   4 |  = -1
     | -2  -1 |                  | -3  -1 |                  | -2  -3 |

x = Dx / D = 2,   y = Dy / D = -1

System 2:  2x + 7y = 13,  -x + 3y = 0

D  = |  2   7 |  = 13       Dx = | 13   7 |  = 39       Dy = |  2  13 |  = 13
     | -1   3 |                  |  0   3 |                  | -1   0 |

x = Dx / D = 3,   y = Dy / D = 1
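A minimal sketch of Cramer's rule for 2 x 2 systems follows (my own helper name, not from the report); it reproduces the two worked systems above by replacing one column of the coefficient matrix with the right-hand side.

```python
# Sketch of Cramer's rule for a 2x2 system (requires det(A) != 0).
import numpy as np

def cramer_2x2(A, b):
    D = np.linalg.det(A)
    Dx = np.linalg.det(np.column_stack((b, A[:, 1])))   # replace column 1 by b
    Dy = np.linalg.det(np.column_stack((A[:, 0], b)))   # replace column 2 by b
    return Dx / D, Dy / D

print(cramer_2x2(np.array([[3.0, 2.0], [-2.0, -1.0]]), np.array([4.0, -3.0])))  # (2, -1)
print(cramer_2x2(np.array([[2.0, 7.0], [-1.0, 3.0]]), np.array([13.0, 0.0])))   # (3, 1)
```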
Elimination of Unknowns
Consider the simultaneous polynomial equations

in two unknowns x and y, where m ≥ n. It is possible to eliminate one of these unknowns.

First, we form the polynomial

the degree of which is less than m. When (x0, y0) is a solution of (1), then it satisfies

On the other hand, when (x1, y1) is a solution of (3), then (2) implies that it also satisfies (1), except possibly in the case bn(y1) = 0. We can continue similarly until we arrive at a pair of equations

Substituting the roots of the former of the equations (4) into the latter one, which in practice is usually of first degree with respect to x, one can get the corresponding values of x. Hence, one obtains all solutions of the original system of equations (1). Since the cases bn(y1) = 0 may yield spurious solutions, one should check them by substituting into (1).

Elimination of Unknowns

3x + 2y = 4 2x + 7y = 13
-2x-y=-3 -x + 3y = 0
x y constant x y constant
3 2 4 2 7 13
-2 -1 -3 -1 3 0

-6 -4 -8 -2 -7 -13
-6 -3 -9 -2 6 0

0 -1 1 0 -13 -13

y= -1 y= 1
x= 2 x= 3
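For the linear case worked above, the elimination can be written as a few lines of code. This is only an illustrative sketch (the function name is mine): it scales the two equations so the x coefficients match, subtracts to remove x, and back-substitutes.

```python
# Sketch: eliminate x from a1*x + b1*y = c1 and a2*x + b2*y = c2, then back-substitute.
def eliminate_2x2(a1, b1, c1, a2, b2, c2):
    by = a2 * b1 - a1 * b2          # coefficient of y after eliminating x
    cy = a2 * c1 - a1 * c2          # right-hand side after eliminating x
    y = cy / by
    x = (c1 - b1 * y) / a1
    return x, y

print(eliminate_2x2(3, 2, 4, -2, -1, -3))   # (2.0, -1.0)
print(eliminate_2x2(2, 7, 13, -1, 3, 0))    # (3.0, 1.0)
```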

For large scale:


Gaussian Elimination

Elimination of unknowns can be extended to large sets of equations by developing a systematic scheme, or algorithm, to eliminate unknowns and to back-substitute. Gauss elimination is the most basic of these schemes and the most fundamental method for solving simultaneous linear algebraic equations. Although it is one of the earliest techniques developed for this purpose, it is nevertheless an extremely effective algorithm for obtaining solutions to many engineering problems, though there are general issues such as round-off, scaling, and conditioning.

Answers obtained using Gauss elimination may be checked by substituting them into the
original equations. However, this does not always represent a reliable check for ill-conditioned
systems. Therefore, some measure of condition, such as the determinant of the scaled system,
should be computed if round-off error is suspected. Using partial pivoting and more significant
figures in the computation are two options for mitigating round-off error.

Naïve Gauss Elimination is the application of Gaussian elimination to solve systems of linear
equations with the assumption that pivot values will never be zero.

Gauss-Jordan method is a variation of Gauss elimination. The major difference is that when an
unknown is eliminated in the Gauss-Jordan method, it is eliminated from all other equations
rather than just the subsequent ones. In addition, all rows are normalized by dividing them by
their pivot elements. Thus, the elimination step results in an identity matrix rather than a
triangular matrix. Consequently, it is not necessary to employ back substitution to obtain the
solution.
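The sketch below (names are mine, not the report's) shows one way to code forward elimination with partial row pivoting followed by back-substitution, applied to the first 4 x 4 system solved in the tables that follow.

```python
# Sketch of Gauss elimination with partial pivoting and back-substitution.
import numpy as np

def gauss_solve(A, b):
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    # Forward elimination with partial pivoting.
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))      # pivot row = largest |a_ik|
        if p != k:
            A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Back-substitution.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[7, -1, 2, 3], [2, -8, 3, -2], [4, 1, -8, 2], [3, 2, -2, -9]])
b = np.array([-13, -35, 24, -23])
print(gauss_solve(A, b))   # approximately [-2, 2, -3, 3], as in the tables below
```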
GAUSSIAN ELIMINATION

x1 x2 x3 x4 B x1 x2 x3 x4 B
7 -1 2 3 -13 -10 2 -2 3 -24
m21= 0.2857 2 -8 3 -2 -35 m21= -0.1 1 8 1 3 8
m31= 0.5714 4 1 -8 2 24 m31= -0.1 1 -3 -1 1 -12
m41= 0.4286 3 2 -2 -9 -23 m41= -0.2 2 2 -2 -9 36

x1 x2 x3 x4 B x1 x2 x3 x4 B
7 -1 2 3 -13 -10 2 -2 3 -24
0 -7.7143 2.42857 -2.8571 -31.286 0 8.2 0.8 3.3 5.6
m32= -0.204 0 1.57143 -9.1429 0.28571 31.4286 m32= -0.3415 0 -2.8 -1.2 1.3 -14.4
m42= -0.315 0 2.42857 -2.8571 -10.286 -17.429 m42= 0.2927 0 2.4 -2.4 -8.4 31.2

x1 x2 x3 x4 B x1 x2 x3 x4 B
7 -1 2 3 -13 -10 2 -2 3 -24
0 -7.7143 2.42857 -2.8571 -31.286 0 8.2 0.8 3.3 5.6
0 0 -8.6481 -0.2963 25.0556 0 0 -0.9268 2.42683 -12.488
m43= 0.242 0 0 -2.0926 -11.185 -27.278 m43= 2.8421 0 0 -2.6341 -9.3659 29.561

x1 x2 x3 x4 B x1 x2 x3 x4 B
7 -1 2 3 -13 -10 2 -2 3 -24
0 -7.7143 2.42857 -2.8571 -31.286 0 8.2 0.8 3.3 5.6
0 0 -8.6481 -0.2963 25.0556 0 0 -0.9268 2.42683 -12.488
0 0 0 -11.113 -33.34 0 0 0 -16.263 65.0526

x1 -2.0 x3 -3.0 x1 1.0 x3 3.0


x2 2.0 x4 3.0 x2 2.0 x4 -4.0
LU Decomposition

Gauss elimination is designed to solve systems of linear algebraic equations [A]{X} = {B}. Although it certainly represents a sound way to solve such systems, it becomes inefficient when solving equations with the same coefficients [A] but with different right-hand-side constants. Recall that Gauss elimination involves two steps: forward elimination and back-substitution. Of these, the forward-elimination step comprises the bulk of the computational effort, particularly for large systems of equations.

LU decomposition methods separate the time-consuming elimination of the matrix [A] from the
manipulations of the right-hand side {B}. Thus, once [A] has been “decomposed,” multiple
right-hand side vectors can be evaluated in an efficient manner.
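A minimal sketch of a Doolittle-style LU factorization with the two triangular solves L*y = b and U*x = y is given below (function and variable names are mine). Once A is factored, any number of right-hand sides can be handled by the cheap substitution steps only.

```python
# Sketch of LU decomposition (no pivoting) plus forward/back substitution.
import numpy as np

def lu_decompose(A):
    n = A.shape[0]
    L, U = np.eye(n), A.astype(float).copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]        # store the elimination multiplier
            U[i, k:] -= L[i, k] * U[k, k:]
    return L, U

def lu_solve(L, U, b):
    n = len(b)
    y = np.zeros(n)
    for i in range(n):                          # forward substitution: L y = b
        y[i] = b[i] - L[i, :i] @ y[:i]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):              # back substitution: U x = y
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

A = np.array([[7., -1, 2, 3], [2, -8, 3, -2], [4, 1, -8, 2], [3, 2, -2, -9]])
L, U = lu_decompose(A)
print(lu_solve(L, U, np.array([-13., -35, 24, -23])))   # ~[-2, 2, -3, 3]
```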
LU DECOMPOSITION

x1 x2 x3 x4 B
7 -1 2 3 -13
m21= 0.28571 2 -8 3 -2 -35
m31= 0.57143 4 1 -8 2 24
m41= 0.42857 3 2 -2 -9 -23

x1 x2 x3 x4 B
7 -1 2 3 -13
0 -7.7143 2.42857 -2.8571 -31.286
m32= -0.2037 0 1.57143 -9.1429 0.28571 31.4286
m42= -0.3148 0 2.42857 -2.8571 -10.286 -17.429

x1 x2 x3 x4 B
7 -1 2 3 -13
0 -7.7143 2.42857 -2.8571 -31.286
0 0 -8.6481 -0.2963 25.0556
m43= 0.24197 0 0 -2.0926 -11.185 -27.278

x1 x2 x3 x4 B
7 -1 2 3 -13
0 -7.7143 2.42857 -2.8571 -31.286
0 0 -8.6481 -0.2963 25.0556
0 0 0 -11.113 -33.34

B
1 0 0 0 7 -1 2 3 -13
0.28571 1 0 0 0 -7.7143 2.42857 -2.8571 -35
L= U=
0.57143 -0.2037 1 0 0 0 -8.6481 -0.2963 24
0.42857 -0.3148 0.24197 1 0 0 0 -11.113 -23

A = LU

A L x U
7 -1 2 3 1 0 0 0 7 -1 2 3
2 -8 3 -2 0.28571 1 0 0 0 -7.7143 2.42857 -2.8571
=
4 1 -8 2 0.57143 -0.2037 1 0 0 0 -8.6481 -0.2963
3 2 -2 -9 0.42857 -0.3148 0.24197 1 0 0 0 -11.113

LY=B UX=Y

y1= -13 y3= 25.0556 x1= -2 x3= -3


y2= -31.286 y4= -33.34 x2= 2 x4= 3
x1 x2 x3 x4 B
-10 2 -2 3 -24
m21= -0.1 1 8 1 3 8
m31= -0.1 1 -3 -1 1 -12
m41= -0.2 2 2 -2 -9 36

x1 x2 x3 x4 B
-10 2 -2 3 -24
0 8.2 0.8 3.3 5.6
m32= -0.3415 0 -2.8 -1.2 1.3 -14.4
m42= 0.29268 0 2.4 -2.4 -8.4 31.2

x1 x2 x3 x4 B
-10 2 -2 3 -24
0 8.2 0.8 3.3 5.6
0 0 -0.9268 2.42683 -12.488
m43= 2.84211 0 0 -2.6341 -9.3659 29.561

x1 x2 x3 x4 B
-10 2 -2 3 -24
0 8.2 0.8 3.3 5.6
0 0 -0.9268 2.42683 -12.488
0 0 0 -16.263 32.5827

B
1 0 0 0 -10 2 -2 3 -24
-0.1 1 0 0 0 8.2 0.8 3.3 8
L= U=
-0.1 -0.3415 1 0 0 0 -0.9268 2.42683 -12
-0.2 0.29268 2.84211 1 0 0 0 -16.263 36

A = LU

A L x U
-10 2 -2 3 1 0 0 0 -10 2 -2 3
1 8 1 3 -0.1 1 0 0 0 8.2 0.8 3.3
=
1 -3 -1 1 -0.1 -0.3415 1 0 0 0 -0.9268 2.42683
2 2 -2 -9 -0.2 0.29268 2.84211 1 0 0 0 -16.263

LY=B UX=Y

y1= -24 y3= -12.488 x1= 1 x3= 3


y2= 5.6 y4= 65.0526 x2= 2 x4= -4
Jacobi Method

Iterative or approximate methods provide an alternative to the elimination methods described to this point.

The Jacobi method is a method of solving a matrix equation on a matrix that has no zeros along
its main diagonal. Each diagonal element is solved for, and an approximate value plugged in.
The process is then iterated until it converges. This algorithm is a stripped-down version of the
Jacobi transformation method of matrix diagonalization.

The Jacobi method is easily derived by examining each of the equations in the linear system Ax = b in isolation. If the ith equation is

ai1·x1 + ai2·x2 + … + ain·xn = bi,

we solve for the value of xi while assuming the other entries of x remain fixed. This gives the iteration

xi(k+1) = ( bi − Σ(j≠i) aij·xj(k) ) / aii,

which is the Jacobi method.

In this method, the order in which the equations are examined is irrelevant, since the Jacobi method treats them independently. The definition of the Jacobi method can also be expressed with matrices as

x(k+1) = D^(-1) (L + U) x(k) + D^(-1) b,

where the matrices D, -L, and -U represent the diagonal, strictly lower triangular, and strictly upper triangular parts of A, respectively (A = D − L − U).
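The component form above translates directly into code. Below is a short sketch (tolerance, iteration cap, and names are my own choices) that updates every component from the previous iterate, as the Jacobi method requires.

```python
# Sketch of the Jacobi iteration for Ax = b.
import numpy as np

def jacobi(A, b, x0=None, tol=1e-6, max_iter=100):
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float)
    D = np.diag(A)
    R = A - np.diagflat(D)          # off-diagonal part of A
    for k in range(max_iter):
        x_new = (b - R @ x) / D     # every component uses the *old* x
        if np.max(np.abs(x_new - x)) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter

A = np.array([[7., -1, 2, 3], [2, -8, 3, -2], [4, 1, -8, 2], [3, 2, -2, -9]])
b = np.array([-13., -35, 24, -23])
print(jacobi(A, b))   # converges toward [-2, 2, -3, 3]
```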
Gauss-Seidel Method

The Gauss-Seidel method is the most commonly used iterative method. Assume that we are
given a set of n equations [A]{X} = {B}

Suppose that, for conciseness, we limit ourselves to a 3 x 3 set of equations. If the diagonal elements are all nonzero, the first equation can be solved for x1, the second for x2, and the third for x3 to yield

x1 = (b1 − a12·x2 − a13·x3) / a11
x2 = (b2 − a21·x1 − a23·x3) / a22
x3 = (b3 − a31·x1 − a32·x2) / a33

Now we can start the solution process by choosing guesses for the x's. A simple way to obtain initial guesses is to assume that they are all zero. These zeros are substituted into the first equation, which gives a new value x1 = b1/a11. Then we substitute this new value of x1, along with the previous guess of zero for x3, into the second equation to compute a new value for x2. The process is repeated for the third equation to calculate a new estimate for x3. Then we return to the first equation and repeat the entire procedure until the solution converges closely enough to the true values. Convergence can be checked using the criterion

|εa,i| = |(xi(new) − xi(old)) / xi(new)| · 100% < εs   for all i.
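A compact sketch of the Gauss-Seidel sweep is shown below (loop structure and names are mine). The key difference from Jacobi is that each newly computed component is used immediately within the same sweep.

```python
# Sketch of the Gauss-Seidel iteration for Ax = b.
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-6, max_iter=100):
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float)
    for k in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Newly updated components x[:i] are used right away.
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.max(np.abs(x - x_old)) < tol:
            return x, k + 1
    return x, max_iter

A = np.array([[7., -1, 2, 3], [2, -8, 3, -2], [4, 1, -8, 2], [3, 2, -2, -9]])
b = np.array([-13., -35, 24, -23])
print(gauss_seidel(A, b))   # reaches [-2, 2, -3, 3] in fewer sweeps than Jacobi
```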
Jacobi Gauss-Seidel
x1 x2 x3 x4 x1 x2 x3 x4
7 -1 2 3 -13 7 -1 2 3 -13
2 -8 3 -2 -35 2 -8 3 -2 -35
4 1 -8 2 24 4 1 -8 2 24
3 2 -2 -9 -23 3 2 -2 -9 -23

k x1 x2 x3 x4 k x1 x2 x3 x4
0 0 0 0 0 0 0 0 0 0
1 -1.857 4.375 -3.000 2.556 1 -1.857 3.911 -3.440 3.570
2 -1.470 2.147 -2.743 3.575 2 -1.846 1.731 -2.814 2.950
3 -2.299 2.085 -2.573 3.152 3 -2.070 2.065 -3.039 3.000
4 -2.175 2.047 -3.101 2.824 4 -1.979 1.990 -2.991 3.003
5 -1.889 1.962 -3.126 2.975 5 -2.005 2.001 -3.002 2.999
6 -1.959 1.987 -2.956 3.056 6 -1.999 2.000 -3.000 3.000
7 -2.039 2.013 -2.967 3.001 7 -2.000 2.000 -3.000 3.000
8 -2.008 2.002 -3.018 2.983 8 -2.000 2.000 -3.000 3.000
9 -1.987 1.996 -3.008 3.002 9 -2.000 2.000 -3.000 3.000
10 -1.999 2.000 -2.994 3.005 10 -2.000 2.000 -3.000 3.000
11 -2.004 2.001 -2.998 2.999 11 -2.000 2.000 -3.000 3.000
12 -2.000 2.000 -3.002 2.999 12 -2.000 2.000 -3.000 3.000
13 -1.999 2.000 -3.000 3.001 13 -2.000 2.000 -3.000 3.000
14 -2.000 2.000 -2.999 3.000 14 -2.000 2.000 -3.000 3.000
15 -2.000 2.000 -3.000 3.000 15 -2.000 2.000 -3.000 3.000
16 -2.000 2.000 -3.000 3.000 16 -2.000 2.000 -3.000 3.000
17 -2.000 2.000 -3.000 3.000 17 -2.000 2.000 -3.000 3.000
18 -2.000 2.000 -3.000 3.000 18 -2.000 2.000 -3.000 3.000
19 -2.000 2.000 -3.000 3.000 19 -2.000 2.000 -3.000 3.000
20 -2.000 2.000 -3.000 3.000 20 -2.000 2.000 -3.000 3.000

Converges at the 15th iteration Converges at the 7th iteration


Jacobi Gauss-Seidel
x1 x2 x3 x4 x1 x2 x3 x4
-10 2 -2 3 -24 -10 2 -2 3 -24
1 8 1 3 8 1 8 1 3 8
1 -3 -1 1 -12 1 -3 -1 1 -12
2 2 -2 -9 36 2 2 -2 -9 36

k x1 x2 x3 x4 k x1 x2 x3 x4
0 0 0 0 0 0 0 0 0 0
1 2.400 1.000 12.000 -4.000 1 2.400 0.700 12.300 -6.044
2 -1.000 0.700 7.400 -5.911 2 -1.733 1.946 -1.615 -3.594
3 -0.713 2.417 2.989 -5.711 3 2.034 2.295 3.554 -3.828
4 0.572 2.857 -1.674 -4.286 4 1.000 1.866 3.574 -4.157
5 2.021 2.745 -0.285 -2.866 5 0.811 2.011 2.621 -3.955
6 2.146 1.858 2.920 -2.878 6 1.091 2.019 3.078 -3.993
7 1.324 1.446 5.695 -3.759 7 0.990 1.989 3.031 -4.012
8 0.422 1.532 5.228 -4.650 8 0.988 2.002 2.971 -3.996
9 0.266 2.038 3.176 -4.727 9 1.008 2.001 3.009 -4.000
10 0.754 2.343 1.426 -4.194 10 0.999 1.999 3.001 -4.001
11 1.325 2.300 1.533 -3.629 11 0.999 2.000 2.998 -4.000
12 1.465 2.004 2.796 -3.535 12 1.001 2.000 3.001 -4.000
13 1.181 1.793 3.919 -3.851 13 1.000 2.000 3.000 -4.000
14 0.820 1.806 3.951 -4.210 14 1.000 2.000 3.000 -4.000
15 0.708 1.982 3.190 -4.295 15 1.000 2.000 3.000 -4.000
16 0.870 2.123 2.466 -4.111 16 1.000 2.000 3.000 -4.000
17 1.098 2.125 2.389 -3.883 17 1.000 2.000 3.000 -4.000
18 1.182 2.020 2.841 -3.815 18 1.000 2.000 3.000 -4.000
19 1.091 1.928 3.307 -3.920 19 1.000 2.000 3.000 -4.000
20 0.948 1.920 3.389 -4.064 20 1.000 2.000 3.000 -4.000
21 0.887 1.982 3.124 -4.116
22 0.937 2.042 2.826 -4.057
23 1.026 2.051 2.754 -3.966 Converges at the 13th iteration
24 1.070 2.015 2.908 -3.928
25 1.043 1.976 3.097 -3.961
26 0.988 1.968 3.154 -4.017
27 0.957 1.989 3.067 -4.044
28 0.971 2.014 2.947 -4.027
29 1.005 2.020 2.904 -3.992
30 1.026 2.008 2.953 -3.973
31 1.019 1.993 3.028 -3.982
32 0.998 1.987 3.060 -4.004
33 0.984 1.994 3.033 -4.016
34 0.987 2.004 2.986 -4.012
35 1.000 2.008 2.963 -3.999
36 1.009 2.004 2.978 -3.990
37 1.008 1.998 3.007 -3.992
38 1.001 1.995 3.023 -4.000
39 0.994 1.997 3.015 -4.006
40 0.995 2.001 2.997 -4.005
41 0.999 2.003 2.986 -4.000
42 1.003 2.002 2.990 -3.996
43 1.003 2.000 3.001 -3.997
44 1.001 1.998 3.008 -4.000
45 0.998 1.999 3.007 -4.002
46 0.998 2.000 3.000 -4.002
47 0.999 2.001 2.995 -4.000
48 1.001 2.001 2.996 -3.999
49 1.001 2.000 3.000 -3.999
50 1.000 1.999 3.003 -4.000
51 0.999 1.999 3.003 -4.001
52 0.999 2.000 3.000 -4.001
53 1.000 2.000 2.998 -4.000
54 1.000 2.000 2.998 -4.000
55 1.001 2.000 3.000 -3.999
56 1.000 2.000 3.001 -4.000
57 1.000 2.000 3.001 -4.000
58 1.000 2.000 3.000 -4.000
59 1.000 2.000 2.999 -4.000
60 1.000 2.000 2.999 -4.000
Case Study 1
F1 F2 F3 F4 F5 F6 F7 H2 V2 V3 B
-1 0 0 0 -0.707 0 0.707 0 0 0 0
0 0 0 0 0.707 0 0.707 0 0 0 -500
1 -0.866 0 0.5 0 0 0 0 0 0 0
0 0.5 0 0.866 0 0 0 0 0 0 -100
0 0.866 1 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 -1 0 0 0
0 0 -1 -0.5 0.707 1 0 0 0 0 0
0 0 0 -0.866 -0.707 0 0 0 0 0 0
0 0 0 0 0 -1 -0.707 0 -1 0 0
0 0 0 0 0 0 -0.707 0 0 -1 0
COLUMN 1
F1 F2 F3 F4 F5 F6 F7 H2 V2 V3 B
-1 0 0 0 -0.707 0 0.707 0 0 0 0
0 0 0 0 0.707 0 0.707 0 0 0 -500
0 -0.866 0 0.5 -0.707 0 0.707 0 0 0 0
0 0.5 0 0.866 0 0 0 0 0 0 -100
0 0.866 1 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 -1 0 0 0
0 0 -1 -0.5 0.707 1 0 0 0 0 0
0 0 0 -0.866 -0.707 0 0 0 0 0 0
0 0 0 0 0 -1 -0.707 0 -1 0 0
0 0 0 0 0 0 -0.707 0 0 -1 0

COLUMN 2
*swap R2 & R3
F1 F2 F3 F4 F5 F6 F7 H2 V2 V3 B
-1 0 0 0 -0.707 0 0.707 0 0 0 0
0 -0.866 0 0.5 -0.707 0 0.707 0 0 0 0
0 0 0 0 0.707 0 0.707 0 0 0 -500
0 0.5 0 0.866 0 0 0 0 0 0 -100
0 0.866 1 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 -1 0 0 0
0 0 -1 -0.5 0.707 1 0 0 0 0 0
0 0 0 -0.866 -0.707 0 0 0 0 0 0
0 0 0 0 0 -1 -0.707 0 -1 0 0
0 0 0 0 0 0 -0.707 0 0 -1 0

F1 F2 F3 F4 F5 F6 F7 H2 V2 V3 B
-1 0 0 0 -0.707 0 0.707 0 0 0 0
0 -0.866 0 0.5 -0.707 0 0.707 0 0 0 0
0 0 0 0 0.707 0 0.707 0 0 0 -500
0 0 0 1.1547 -0.408 0 0.4082 0 0 0 -100
0 0 1 0.5 -0.707 0 0.707 0 0 0 0
0 0 0 0 0 0 0 -1 0 0 0
0 0 -1 -0.5 0.707 1 0 0 0 0 0
0 0 0 -0.866 -0.707 0 0 0 0 0 0
0 0 0 0 0 -1 -0.707 0 -1 0 0
0 0 0 0 0 0 -0.707 0 0 -1 0

COLUMN 3
*swap R3 & R5
F1 F2 F3 F4 F5 F6 F7 H2 V2 V3 B
-1 0 0 0 -0.71 0 0.707 0 0 0 0
0 -0.87 0 0.5 -0.71 0 0.707 0 0 0 0
0 0 1 0.5 -0.71 0 0.707 0 0 0 0
0 0 0 1.155 -0.41 0 0.408 0 0 0 -100
0 0 0 0 0.707 0 0.707 0 0 0 -500
0 0 0 0 0 0 0 -1 0 0 0
0 0 -1 -0.5 0.707 1 0 0 0 0 0
0 0 0 -0.87 -0.71 0 0 0 0 0 0
0 0 0 0 0 -1 -0.71 0 -1 0 0
0 0 0 0 0 0 -0.71 0 0 -1 0

F1 F2 F3 F4 F5 F6 F7 H2 V2 V3 B
-1 0 0 0 -0.71 0 0.707 0 0 0 0
0 -0.87 0 0.5 -0.71 0 0.707 0 0 0 0
0 0 1 0.5 -0.71 0 0.707 0 0 0 0
0 0 0 1.155 -0.41 0 0.408 0 0 0 -100
0 0 0 0 0.707 0 0.707 0 0 0 -500
0 0 0 0 0 0 0 -1 0 0 0
0 0 0 0 0 1 0.707 0 0 0 0
0 0 0 -0.87 -0.71 0 0 0 0 0 0
0 0 0 0 0 -1 -0.71 0 -1 0 0
0 0 0 0 0 0 -0.71 0 0 -1 0
COLUMN 4
F1 F2 F3 F4 F5 F6 F7 H2 V2 V3 B
-1 0 0 0 -0.71 0 0.707 0 0 0 0
0 -0.87 0 0.5 -0.71 0 0.707 0 0 0 0
0 0 1 0.5 -0.71 0 0.707 0 0 0 0
0 0 0 1.155 -0.41 0 0.408 0 0 0 -100
0 0 0 0 0.707 0 0.707 0 0 0 -500
0 0 0 0 0 0 0 -1 0 0 0
0 0 0 0 0 1 0.707 0 0 0 0
0 0 0 0 -1.01 0 0.306 0 0 0 -75
0 0 0 0 0 -1 -0.71 0 -1 0 0
0 0 0 0 0 0 -0.71 0 0 -1 0

COLUMN 5
F1 F2 F3 F4 F5 F6 F7 H2 V2 V3 B
-1 0 0 0 -0.71 0 0.707 0 0 0 0
0 -0.87 0 0.5 -0.71 0 0.707 0 0 0 0
0 0 1 0.5 -0.71 0 0.707 0 0 0 0
0 0 0 1.155 -0.41 0 0.408 0 0 0 -100
0 0 0 0 0.707 0 0.707 0 0 0 -500
0 0 0 0 0 0 0 -1 0 0 0
0 0 0 0 0 1 0.707 0 0 0 0
0 0 0 0 0 0 1.319 0 0 0 -792
0 0 0 0 0 -1 -0.71 0 -1 0 0
0 0 0 0 0 0 -0.71 0 0 -1 0

COLUMN 6
*swap R6 & R7
F1 F2 F3 F4 F5 F6 F7 H2 V2 V3 B
-1 0 0 0 -0.71 0 0.707 0 0 0 0
0 -0.87 0 0.5 -0.71 0 0.707 0 0 0 0
0 0 1 0.5 -0.71 0 0.707 0 0 0 0
0 0 0 1.155 -0.41 0 0.408 0 0 0 -100
0 0 0 0 0.707 0 0.707 0 0 0 -500
0 0 0 0 0 1 0.707 0 0 0 0
0 0 0 0 0 0 0 -1 0 0 0
0 0 0 0 0 0 1.319 0 0 0 -792
0 0 0 0 0 -1 -0.71 0 -1 0 0
0 0 0 0 0 0 -0.71 0 0 -1 0

F1 F2 F3 F4 F5 F6 F7 H2 V2 V3 B
-1 0 0 0 -0.71 0 0.707 0 0 0 0
0 -0.87 0 0.5 -0.71 0 0.707 0 0 0 0
0 0 1 0.5 -0.71 0 0.707 0 0 0 0
0 0 0 1.155 -0.41 0 0.408 0 0 0 -100
0 0 0 0 0.707 0 0.707 0 0 0 -500
0 0 0 0 0 1 0.707 0 0 0 0
0 0 0 0 0 0 0 -1 0 0 0
0 0 0 0 0 0 1.319 0 0 0 -792
0 0 0 0 0 0 0 0 -1 0 0
0 0 0 0 0 0 -0.71 0 0 -1 0

COLUMN 7
*swap R7 & R8
F1 F2 F3 F4 F5 F6 F7 H2 V2 V3 B
-1 0 0 0 -0.71 0 0.707 0 0 0 0
0 -0.87 0 0.5 -0.71 0 0.707 0 0 0 0
0 0 1 0.5 -0.71 0 0.707 0 0 0 0
0 0 0 1.155 -0.41 0 0.408 0 0 0 -100
0 0 0 0 0.707 0 0.707 0 0 0 -500
0 0 0 0 0 1 0.707 0 0 0 0
0 0 0 0 0 0 1.319 0 0 0 -792
0 0 0 0 0 0 0 -1 0 0 0
0 0 0 0 0 0 0 0 -1 0 0
0 0 0 0 0 0 -0.71 0 0 -1 0

F1 F2 F3 F4 F5 F6 F7 H2 V2 V3 B
-1 0 0 0 -0.71 0 0.707 0 0 0 0
0 -0.87 0 0.5 -0.71 0 0.707 0 0 0 0
0 0 1 0.5 -0.71 0 0.707 0 0 0 0
0 0 0 1.155 -0.41 0 0.408 0 0 0 -100
0 0 0 0 0.707 0 0.707 0 0 0 -500
0 0 0 0 0 1 0.707 0 0 0 0
0 0 0 0 0 0 1.319 0 0 0 -792
0 0 0 0 0 0 0 -1 0 0 0
0 0 0 0 0 0 0 0 -1 0 0
0 0 0 0 0 0 0 0 0 -1 -424

A = LU

A =
-1 0 0 0 -0.707 0 0.707 0 0 0
0 0 0 0 0.707 0 0.707 0 0 0
1 -0.866 0 0.5 0 0 0 0 0 0
0 0.5 0 0.866 0 0 0 0 0 0
0 0.866 1 0 0 0 0 0 0 0
0 0 0 0 0 0 0 -1 0 0
0 0 -1 -0.5 0.707 1 0 0 0 0
0 0 0 -0.866 -0.707 0 0 0 0 0
0 0 0 0 0 -1 -0.707 0 -1 0
0 0 0 0 0 0 -0.707 0 0 -1

L =
1 0 0 0 0 0 0 0 0 0
-1 1 0 0 0 0 0 0 0 0
0 -1 1 0 0 0 0 0 0 0
0 -0.5774 0 1 0 0 0 0 0 0
0 0 0 0 1 0 0 0 0 0
0 0 0 0 0 1 0 0 0 0
0 0 0 -0.75 -1.433 0 1 0 0 0
0 0 -1 0 0 0 0 1 0 0
0 0 0 0 0 0 -1 0 1 0
0 0 0 0 0 0 0 -0.5359 0 1

U =
-1 0 0 0 -0.707 0 0.707 0 0 0
0 -0.866 0 0.5 -0.707 0 0.707 0 0 0
0 0 1 0.5 -0.707 0 0.707 0 0 0
0 0 0 1.15468 -0.4082 0 0.4082 0 0 0
0 0 0 0 0.707 0 0.707 0 0 0
0 0 0 0 0 1 0.707 0 0 0
0 0 0 0 0 0 1.31929 0 0 0
0 0 0 0 0 0 0 -1 0 0
0 0 0 0 0 0 0 0 -1 0
0 0 0 0 0 0 0 0 0 -1

UX = Y

F1 = -348.330 N         F6 = 424.165 N
F2 = -351.67 N          F7 = -599.950778 N
F3 = 304.546 N          H2 = 175.833 N
F4 = 87.569 N           V2 = 0.000 N
F5 = -107.2628005 N     V3 = 424.1652 N
Roots of Non-linear Equations
Open Methods:
Fixed Point Iteration Method
The idea of the fixed point iteration methods is to first reformulate an equation to an
equivalent fixed point problem:
f (x) = 0 ⇐⇒ x = g(x)
and then to use the iteration: with an initial guess x0 chosen, compute a sequence
xn+1 = g(xn), n ≥ 0
in the hope that xn → α.
There are infinitely many ways to introduce an equivalent fixed point problem for a given
equation; e.g., for any function G(t) with the property
G(t) = 0 ⇐⇒ t = 0,
we can take g(x) = x + G(f (x)). The resulting iteration method may or may not
converge, though.
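A minimal sketch of the iteration loop is given below (names and tolerance are mine). The g used here, g(x) = 8/(x + 2), is consistent with the first tabulated example that follows (a rearrangement of x^2 + 2x − 8 = 0, converging to the root x = 2); the report's other g(x) choices are not reproduced.

```python
# Sketch of fixed-point iteration x_{n+1} = g(x_n).
def fixed_point(g, x0, tol=1e-6, max_iter=100):
    x = x0
    for k in range(1, max_iter + 1):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new, k
        x = x_new
    return x, max_iter

# g(x) = 8/(x + 2): starting from 0 the iterates go 4, 1.333, 2.4, 1.818, ... -> 2
root, iters = fixed_point(lambda x: 8.0 / (x + 2.0), x0=0.0)
print(root, iters)
```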
Example:

For g1(x):                                     For g2(x):

k x f(x) e k x f(x) e
1 0.000 4.000 1 0.000 4.000
2 4.000 1.333 1.000 2 4.000 -4.000 1.000
3 1.333 2.400 -2.000 3 -4.000 -4.000 2.000
4 2.400 1.818 0.444 4 -4.000 -4.000 0.000
5 1.818 2.095 -0.320 5 -4.000 -4.000 0.000
6 2.095 1.953 0.132 6 -4.000 -4.000 0.000
7 1.953 2.024 -0.073 7 -4.000 -4.000 0.000
8 2.024 1.988 0.035 8 -4.000 -4.000 0.000
9 1.988 2.006 -0.018 9 -4.000 -4.000 0.000
10 2.006 1.997 0.009 10 -4.000 -4.000 0.000
11 1.997 2.001 -0.004
12 2.001 1.999 0.002
13 1.999 2.000 -0.001
14 2.000 2.000 0.001
15 2.000 2.000 0.000
16 2.000 2.000 0.000
17 2.000 2.000 0.000
18 2.000 2.000 0.000
19 2.000 2.000 0.000
20 2.000 2.000 0.000

Converges at 15th iteration; root = 2 Converges at 4th iteration; root = -4


For g3(x):                                     For g4(x):

k x f(x) e k x f(x) e
1 0.000 1.250 1 0.000 1.250
2 1.250 0.952 1.000 2 1.250 0.859 1.000
3 0.952 1.010 -0.313 3 0.859 1.065 -0.455
4 1.010 0.998 0.057 4 1.065 0.966 0.193
5 0.998 1.000 -0.012 5 0.966 1.017 -0.103
6 1.000 1.000 0.002 6 1.017 0.992 0.050
7 1.000 1.000 0.000 7 0.992 1.004 -0.025
8 1.000 1.000 0.000 8 1.004 0.998 0.012
9 1.000 1.000 0.000 9 0.998 1.001 -0.006
10 1.000 1.000 0.000 10 1.001 0.999 0.003
11 1.000 1.000 0.000 11 0.999 1.000 -0.002
12 1.000 1.000 0.000 12 1.000 1.000 0.001
13 1.000 1.000 0.000 13 1.000 1.000 0.000
14 1.000 1.000 0.000 14 1.000 1.000 0.000
15 1.000 1.000 0.000 15 1.000 1.000 0.000

Converges at 7th iteration; root = 1 Converges at 13th iteration; root = 1


Newton-Raphson Method
The Newton-Raphson method, or Newton Method, is a powerful technique for solving
equations numerically. Like so much of the differential calculus, it is based on the simple idea of
linear approximation. The Newton Method, properly used, usually homes in on a root with
devastating efficiency.
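The linear approximation leads to the update x(n+1) = x(n) − f(x(n))/f'(x(n)). Below is a brief sketch of that loop (names are mine); the f used, f(x) = x^2 + 2x − 8, is consistent with the first example tabulated below (f(0) = −8, f'(0) = 2, root x = 2).

```python
# Sketch of the Newton-Raphson iteration.
def newton(f, df, x0, tol=1e-6, max_iter=50):
    x = x0
    for k in range(1, max_iter + 1):
        x_new = x - f(x) / df(x)      # step along the tangent line to its x-intercept
        if abs(x_new - x) < tol:
            return x_new, k
        x = x_new
    return x, max_iter

f  = lambda x: x**2 + 2*x - 8
df = lambda x: 2*x + 2
print(newton(f, df, x0=0.0))          # root near 2
```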
Example

k x f(x) f'(x) e k x f(x) f'(x) e


1 0.000 -8.000 2.000 1 0.000 -5.000 4.000
2 4.000 16.000 10.000 1.000 2 1.250 1.563 6.500 1.000
3 2.400 2.560 6.800 -0.667 3 1.010 0.058 6.019 -0.238
4 2.024 0.142 6.047 -0.186 4 1.000 0.000 6.000 -0.010
5 2.000 0.001 6.000 -0.012 5 1.000 0.000 6.000 0.000
6 2.000 0.000 6.000 0.000 6 1.000 0.000 6.000 0.000
7 2.000 0.000 6.000 0.000 7 1.000 0.000 6.000 0.000
8 2.000 0.000 6.000 0.000 8 1.000 0.000 6.000 0.000
9 2.000 0.000 6.000 0.000 9 1.000 0.000 6.000 0.000
10 2.000 0.000 6.000 0.000 10 1.000 0.000 6.000 0.000

Converges at 6th iteration; root = 2 Converges at 5th iteration; root = 1


Secant Method
One drawback of Newton’s method is that it is necessary to evaluate f’(x) at various
points, which may not be practical for some choices of f. The secant method avoids this issue by
using a finite difference to approximate the derivative. As a result, f(x) is approximated by a
secant line through two points on the graph of f, rather than a tangent line through one point on
the graph.
Since a secant line is defined using two points on the graph of f(x), as opposed to a tangent line that requires information at only one point, it is necessary to choose two initial iterates x0 and x1. Then, as in Newton's method, the next iterate x2 is obtained by computing the x-value at which the secant line passing through the points (x0, f(x0)) and (x1, f(x1)) has a y-coordinate of zero. This yields the equation

x2 = x1 − f(x1)·(x1 − x0) / (f(x1) − f(x0)).
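A short sketch of the secant loop is given below (names are mine). The f used, f(x) = x^2 + 2x − 8, is consistent with the first example that follows (x0 = 0, x1 = 3, next iterate 1.6, root 2).

```python
# Sketch of the secant method: the derivative is replaced by a finite difference.
def secant(f, x0, x1, tol=1e-6, max_iter=50):
    for k in range(1, max_iter + 1):
        f0, f1 = f(x0), f(x1)
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)   # x-intercept of the secant line
        if abs(x2 - x1) < tol:
            return x2, k
        x0, x1 = x1, x2
    return x1, max_iter

print(secant(lambda x: x**2 + 2*x - 8, 0.0, 3.0))   # root near 2
```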

Example

Given: Given:

x0 = 0 x0 = 0
x1 = 3 x1 = 3
k Xo X1 f(Xo) f(X1) e k Xo X1 f(Xo) f(X1) e
0 0.000 3.000 -8.000 7.000 0 0.000 3.000 -5.000 16.000
1 3.000 1.600 7.000 -2.240 1.000 1 3.000 0.714 16.000 -1.633 1.000
2 1.600 1.939 -2.240 -0.360 -0.875 2 0.714 0.926 -1.633 -0.439 -3.200
3 1.939 2.004 -0.360 0.026 0.175 3 0.926 1.004 -0.439 0.023 0.229
4 2.004 2.000 0.026 0.000 0.032 4 1.004 1.000 0.023 0.000 0.078
5 2.000 2.000 0.000 0.000 -0.002 5 1.000 1.000 0.000 0.000 -0.004
6 2.000 2.000 0.000 0.000 0.000 6 1.000 1.000 0.000 0.000 0.000
7 2.000 2.000 0.000 0.000 0.000 7 1.000 1.000 0.000 0.000 0.000
8 2.000 2.000 0.000 0.000 0.000 8 1.000 1.000 0.000 0.000 0.000

Converges at 6th iteration; root = 2 Converges at 6th iteration; root = 1


Bracketing Methods:

Bisection Method

Suppose that f(x) is a continuous function that changes sign on the interval [a, b]. Then,
by the Intermediate Value Theorem, f(x) = 0 for some x ∈ [a, b]. How can we find the solution,
knowing that it lies in this interval?
The method of bisection attempts to reduce the size of the interval in which a solution is known to exist. Suppose that we evaluate f(m), where m = (a + b)/2. If f(m) = 0, then we are done. Otherwise, f must change sign on the interval [a, m] or [m, b], since f(a) and f(b) have different signs. Therefore, we can cut the size of our search space in half and continue this process until the interval of interest is sufficiently small, in which case we must be close to a solution.
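The halving procedure can be coded in a few lines; the sketch below uses my own names and tolerance. The f used in the usage line, f(H) = H^2 − H − 1, is consistent with the first example that follows (root 1.618...).

```python
# Sketch of the bisection method on an interval [a, b] where f changes sign.
def bisect(f, a, b, tol=1e-6, max_iter=100):
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "f must change sign on [a, b]"
    for k in range(max_iter):
        m = (a + b) / 2.0
        fm = f(m)
        if fm == 0 or (b - a) / 2 < tol:
            return m, k
        if fa * fm < 0:          # root lies in [a, m]
            b, fb = m, fm
        else:                    # root lies in [m, b]
            a, fa = m, fm
    return (a + b) / 2, max_iter

print(bisect(lambda x: x**2 - x - 1, 1.0, 2.0))   # ~1.618
```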
Example
Given:

H      f(H)
0     -1.00
0.5   -1.25
1     -1.00
1.5   -0.25
2      1.00
2.5    2.75

(Plot of f(H) omitted; the sign change between H = 1.5 and H = 2 brackets the root.)
k ak ck bk f(ak) f(ck) f(bk)


0 1.000 1.500 2.000 -1.000 -0.250 1.000
1 1.500 1.750 2.000 -0.250 0.313 1.000
2 1.500 1.625 1.750 -0.250 0.016 0.313
3 1.500 1.563 1.625 -0.250 -0.121 0.016
4 1.563 1.594 1.625 -0.121 -0.054 0.016
5 1.594 1.609 1.625 -0.054 -0.019 0.016
6 1.609 1.617 1.625 -0.019 -0.002 0.016
7 1.617 1.621 1.625 -0.002 0.007 0.016
8 1.617 1.619 1.621 -0.002 0.002 0.007
9 1.617 1.618 1.619 -0.002 0.000 0.002

Since f(ck)=0, root = 1.618

Given:

H      f(H)
0     -2.00
0.5   -1.75
1     -1.00
1.5    0.25
2      2.00
2.5    4.25

(Plot of f(H) omitted; the sign change between H = 1 and H = 1.5 brackets the root.)

k ak ck bk f(ak) f(ck) f(bk)


0 1.000 1.250 1.500 -1.000 -0.438 0.250
1 1.250 1.375 1.500 -0.438 -0.109 0.250
2 1.375 1.438 1.500 -0.109 0.066 0.250
3 1.375 1.406 1.438 -0.109 -0.022 0.066
4 1.406 1.422 1.438 -0.022 0.022 0.066
5 1.406 1.414 1.422 -0.022 0.000 0.022

Since f(ck)=0, root = 1.414


False Position Method

An alternative to the bisection method, which is relatively inefficient, is based on a graphical insight. A shortcoming of the bisection method is that, in dividing the interval from xl to xu into equal halves, no account is taken of the magnitudes of f(xl) and f(xu). For example, if f(xl) is much closer to zero than f(xu), it is likely that the root is closer to xl than to xu. An alternative method that exploits this graphical insight is to join f(xl) and f(xu) by a straight line. The intersection of this line with the x-axis represents an improved estimate of the root. The fact that the replacement of the curve by a straight line gives a "false position" of the root is the origin of the name, method of false position, or in Latin, regula falsi. It is also called the linear interpolation method.

The new value of x, xr, can be found from the formula

xr = xu − f(xu)·(xl − xu) / (f(xl) − f(xu)).
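A short sketch of the regula falsi loop follows (names and tolerance are mine). The f in the usage line, f(H) = H^2 − H − 1, is consistent with the first example given below.

```python
# Sketch of the false-position (regula falsi) method on a bracketing interval.
def false_position(f, xl, xu, tol=1e-6, max_iter=100):
    fl, fu = f(xl), f(xu)
    xr = xu
    for k in range(max_iter):
        xr_old = xr
        xr = xu - fu * (xl - xu) / (fl - fu)   # chord's intersection with the x-axis
        fr = f(xr)
        if abs(xr - xr_old) < tol:
            return xr, k
        if fl * fr < 0:
            xu, fu = xr, fr
        else:
            xl, fl = xr, fr
    return xr, max_iter

print(false_position(lambda x: x**2 - x - 1, 1.0, 2.0))   # ~1.618
```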

Given:

First function:                    Second function:
H      f(H)                        H      f(H)
0     -1.00                        0     -2.00
0.5   -1.25                        0.5   -1.75
1     -1.00                        1     -1.00
1.5   -0.25                        1.5    0.25
2      1.00                        2      2.00
2.5    2.75                        2.5    4.25

(Plots of both functions omitted.)

k ak ck bk f(ak) f(ck) f(bk) k ak ck bk f(ak) f(ck) f(bk)


0 1.0 1.500 2.0 -1.000 -0.250 1.000 0 1.0 1.333 2.0 -1.000 -0.222 2.000
1 1.500 1.600 2.000 -0.250 -0.040 1.000 1 1.333 1.400 2.000 -0.222 -0.040 2.000
2 1.600 1.615 2.000 -0.040 -0.006 1.000 2 1.400 1.412 2.000 -0.040 -0.007 2.000
3 1.615 1.618 2.000 -0.006 -0.001 1.000 3 1.412 1.414 2.000 -0.007 -0.001 2.000
4 1.618 1.618 2.000 -0.001 0.000 1.000 4 1.414 1.414 2.000 -0.001 0.000 2.000

Since f(ck)=0, root = 1.618 Since f(ck)=0, root = 1.414


CASE STUDY 2
Open Methods

1. Fixed-Point Iteration Method: 2. Newton-Raphson Method

Both methods are applied to the same function:

f(H) = 0.471405 (20H)^(5/3) / (20 + 2H)^(2/3) − 5

For fixed-point iteration the equation is rearranged as H = g(H), with

g(H) = 0.2062127416 (20 + 2H)^(2/5)

For the Newton-Raphson method the derivative is

f'(H) = 46.311223 H^(2/3) (50 + 3H) / (20 + 2H)^(5/3)

Fixed-Point Iteration:

i    H        g(H)     H - g(H)   error
1    1.0000   0.7100   0.2900
2    0.7100   0.7025   0.0075     0.4084
3    0.7025   0.7023   0.0002     0.0107
4    0.7023   0.7023   0.0000     0.0003
5    0.7023   0.7023   0.0000     0.0000
6    0.7023   0.7023   0.0000     0.0000
7    0.7023   0.7023   0.0000     0.0000
8    0.7023   0.7023   0.0000     0.0000
9    0.7023   0.7023   0.0000     0.0000
10   0.7023   0.7023   0.0000     0.0000

Therefore at iteration 5, the value of H = 0.7023.

Newton-Raphson:

i    H        f(H)     f'(H)      error
1    1.0000   3.8477   14.2099
2    0.7292   0.3147   11.8168    0.3713
3    0.7026   0.0034   11.5574    0.0379
4    0.7023   0.0000   11.5544    0.0004
5    0.7023   0.0000   11.5544    0.0000
6    0.7023   0.0000   11.5544    0.0000
7    0.7023   0.0000   11.5544    0.0000
8    0.7023   0.0000   11.5544    0.0000
9    0.7023   0.0000   11.5544    0.0000
10   0.7023   0.0000   11.5544    0.0000

Therefore at iteration 5, the value of H = 0.7023.

3. Secant Method

f(H) = 0.471405 (20H)^(5/3) / (20 + 2H)^(2/3) − 5

i    Ha       Hb       f(Ha)     f(Hb)     error
1    4.0000   2.0000   70.9344   21.5066
2    2.0000   1.1298   21.5066    5.7586   0.7703
3    1.1298   0.8116    5.7586    1.3198   0.3921
4    0.8116   0.7169    1.3198    0.1704   0.1320
5    0.7169   0.7029    0.1704    0.0073   0.0200
6    0.7029   0.7023    0.0073    0.0000   0.0009
7    0.7023   0.7023    0.0000    0.0000   0.0000
8    0.7023   0.7023    0.0000    0.0000   0.0000
9    0.7023   0.7023    0.0000    0.0000   0.0000
10   0.7023   0.7023    0.0000    0.0000   0.0000

Therefore at iteration 7, the value of H = 0.7023.

Bracketing Methods

Bisection Method

Given:

x      f(x)
0     -5.000
1      3.848
2     21.507
3     44.393
4     70.934
5    100.191

(Plot of f(x) omitted; the sign change between x = 0 and x = 1 brackets the root.)

ck = (ak + bk)/2

k    ak           ck           bk           f(ak)          f(ck)          f(bk)
0    0            0.5          1            -5             -2.125374184   3.84767287
1    0.5          0.75         1            -2.125374184   0.562294464    3.84767287
2    0.5          0.625        0.75         -2.125374184   -0.863132764   0.562294464
3    0.625        0.6875       0.75         -0.863132764   -0.169842793   0.562294464
4    0.6875       0.71875      0.75         -0.169842793   0.19147996     0.562294464
5    0.6875       0.703125     0.71875      -0.169842793   0.009618817    0.19147996
6    0.6875       0.6953125    0.703125     -0.169842793   -0.08041363    0.009618817
7    0.6953125    0.69921875   0.703125     -0.08041363    -0.035472603   0.009618817
8    0.69921875   0.701171875  0.703125     -0.035472603   -0.012945665   0.009618817
9    0.701171875  0.702148438  0.703125     -0.012945665   -0.001668114   0.009618817
10   0.702148438  0.702636719  0.703125     -0.001668114   0.00397418     0.009618817
11   0.702148438  0.702392578  0.702636719  -0.001668114   0.00115274     0.00397418
12   0.702148438  0.702270508  0.702392578  -0.001668114   -0.00025776    0.00115274
13   0.702270508  0.702331543  0.702392578  -0.00025776    0.000447471    0.00115274
14   0.702270508  0.702301025  0.702331543  -0.00025776    9.4851E-05     0.000447471
15   0.702270508  0.702285767  0.702301025  -0.00025776    -8.14557E-05   9.4851E-05
16   0.702285767  0.702293396  0.702301025  -8.14557E-05   6.69737E-06    9.4851E-05
17   0.702285767  0.702289581  0.702293396  -8.14557E-05   -3.73793E-05   6.69737E-06
18   0.702289581  0.702291489  0.702293396  -3.73793E-05   -1.5341E-05    6.69737E-06
19   0.702291489  0.702292442  0.702293396  -1.5341E-05    -4.3218E-06    6.69737E-06
20   0.702292442  0.702292919  0.702293396  -4.3218E-06    1.18778E-06    6.69737E-06
21   0.702292442  0.702292681  0.702292919  -4.3218E-06    -1.56701E-06   1.18778E-06
22   0.702292681  0.7022928    0.702292919  -1.56701E-06   -1.89613E-07   1.18778E-06

Therefore, H = 0.702293.
Curve Fitting and Interpolation
Least Square Line
If your data shows a linear relationship between the X and Y variables, you will want to find the line that best fits that relationship. That line is called a regression line and has the equation ŷ = a + bx. The least-squares regression line is the line that makes the vertical distance from the data points to the regression line as small as possible. It is called "least squares" because the best line of fit is the one that minimizes the sum of the squares of the errors. This can be a bit hard to visualize, but the main point is that you are aiming to find the equation that fits the points as closely as possible.
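The slope and intercept come from the normal equations built from the sums Σx, Σy, Σx² and Σxy that are tabulated in the worked examples below. The sketch that follows (function name is mine) computes them directly.

```python
# Sketch of the least-squares line y = a + b*x from the normal-equation sums.
def least_squares_line(x, y):
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(xi * xi for xi in x)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # slope
    a = (sy - b * sx) / n                            # intercept
    return a, b

x = [1.2, 2.3, 3.0, 3.8, 4.7, 5.9]
y = [1.1, 2.1, 3.1, 4.0, 4.9, 5.9]
a, b = least_squares_line(x, y)
print(round(a, 3), round(b, 3))   # -0.143 1.051, as in the first example below
```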

x y
1.2 1.1
2.3 2.1
3 3.1
3.8 4
4.7 4.9
5.9 5.9 n= 6

x      y      x^2     xy
1.2    1.1    1.44    1.32
2.3    2.1    5.29    4.83
3      3.1    9       9.3
3.8    4      14.44   15.2
4.7    4.9    22.09   23.03
5.9    5.9    34.81   34.81
Σ      20.9   21.1    87.07   88.49

a = -0.143    b = 1.051

y = -0.143 + 1.051x

x      y      y'
1.2    1.1    1.1176
2.3    2.1    2.2733
3      3.1    3.0088
3.8    4      3.8494
4.7    4.9    4.7950
5.9    5.9    6.0558

x y
1 3.8
2 6.4
3 7.8
4 10.5
5 13
6 17 n= 6

x    y      x^2    xy
1    3.8    1      3.8
2    6.4    4      12.8
3    7.8    9      23.4
4    10.5   16     42
5    13     25     65
6    17     36     102
Σ    21     58.5   91     249

a = 0.900    b = 2.529

y = 0.900 + 2.529x

x    y      y'
1    3.8    3.4286
2    6.4    5.9571
3    7.8    8.4857
4    10.5   11.0143
5    13     13.5429
6    17     16.0714

Polynomial Regression
Polynomial regression is a form of regression analysis in which the relationship between the
independent variable x and the dependent variable y is modelled as an nth degree polynomial in x.
Polynomial regression fits a nonlinear relationship between the value of x and the corresponding
conditional mean of y, denoted E(y |x). Although polynomial regression fits a nonlinear model to the
data, as a statistical estimation problem it is linear, in the sense that the regression function E(y | x) is
linear in the unknown parameters that are estimated from the data. For this reason, polynomial
regression is considered to be a special case of multiple linear regression.
The goal of regression analysis is to model the expected value of a dependent variable y in terms of
the value of an independent variable (or vector of independent variables) x. In simple linear
regression, the model

is used, where ε is an unobserved random error with mean zero conditioned on a scalar variable x. In
this model, for each unit increase in the value of x, the conditional expectation of y increases by β1
units.
In many settings, such a linear relationship may not hold. For example, if we are modeling the yield
of a chemical synthesis in terms of the temperature at which the synthesis takes place, we may find
that the yield improved by increasing amounts for each unit increase in temperature. In this case, we
might propose a quadratic model of the form

In this model, when the temperature is increased from x to x + 1 units, the expected yield changes by β1 + β2(2x + 1). (This can be seen by replacing x in this equation with x + 1 and subtracting the equation in x from the equation in x + 1.) For infinitesimal changes in x, the effect on y is given by the total derivative with respect to x: β1 + 2β2x. The fact that the change in yield depends on x is what makes the relationship between x and y nonlinear even though the model is linear in the parameters to be estimated.
In general, we can model the expected value of y as an nth degree polynomial, yielding the
general polynomial regression model
Conveniently, these models are all linear from the point of view of estimation, since the
regression function is linear in terms of the unknown parameters β0, β1, .... Therefore, for least
squares analysis, the computational and inferential problems of polynomial regression can be
completely addressed using the techniques of multiple regression. This is done by treating x, x2, ... as
being distinct independent variables in a multiple regression model.
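For a second-order fit, the least-squares coefficients come from a 3 x 3 system of normal equations built from the sums of powers of x, exactly as in the worked examples below. A brief sketch (names are mine) is:

```python
# Sketch: fit y = a0 + a1*x + a2*x^2 by solving the 3x3 normal equations.
import numpy as np

def poly2_fit(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    A = np.array([[len(x),        x.sum(),        (x**2).sum()],
                  [x.sum(),       (x**2).sum(),   (x**3).sum()],
                  [(x**2).sum(),  (x**3).sum(),   (x**4).sum()]])
    rhs = np.array([y.sum(), (x * y).sum(), (x**2 * y).sum()])
    return np.linalg.solve(A, rhs)          # returns a0, a1, a2

x = [0, 1, 2, 3, 4, 5]
y = [2.1, 7.7, 13.6, 27.2, 40.9, 61.1]
print(poly2_fit(x, y))   # ~[2.4786, 2.3593, 1.8607], as in the first example below
```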
Polynomial Regression

Fit a second order polynomial to the following data


x y x2 x3 x4 xy x^2 y
0 2.1 0 0 0 0 0
1 7.7 1 1 1 7.7 7.7
2 13.6 4 8 16 27.2 54.4
3 27.2 9 27 81 81.6 244.8
4 40.9 16 64 256 163.6 654.4
5 61.1 25 125 625 305.5 1527.5
15 152.6 55 225 979 585.6 2488.8
n= 6
a0 a1 a2 B
6 15 55 152.6
15 55 225 585.6
55 225 979 2488.8
a0 a1 a2 B
6 15 55 152.6
2.5 0 -17.5 -87.5 -204.1
9.1666667 0 -87.5 -474.83333 -1089.9667
             a0        a1        a2            B
             6         15        55            152.6
             0        -17.5     -87.5         -204.1
(mult = 5)   0         0         37.333333     69.466667

a2 = 1.86071
a1 = 2.35929
a0 = 2.47857

y = 1.8607x^2 + 2.3593x + 2.4786

m = 2,  n = 6

x    y      y'        Sr = (y - a0 - a1x - a2x^2)^2    (y - ybar)^2
0    2.1    2.4786    0.14334                           544.444444
1    7.7    6.6986    1.00286                           314.471111
2    13.6   14.64     1.08160                           140.027778
3    27.2   26.3028   0.80487                             3.121111
4    40.9   41.687    0.61959                           239.217778
5    61.1   60.7926   0.09434                          1272.111110
Σ    152.6            Sr = 3.746593                    St = 2513.393330

Standard error: S = 1.117526
Coefficient of determination: r^2 = 0.99851 = 99.85%
Fit a second order polynomial to the following data
x y x2 x3 x4 xy x^2 y
0 2.2 0 0 0 0 0
1 7.9 1 1 1 7.9 7.9
2 14 4 8 16 28 56
3 30 9 27 81 90 270
4 45 16 64 256 180 720
5 72.4 25 125 625 362 1810
15 171.5 55 225 979 667.9 2863.9
n= 6
a0 a1 a2 B
6 15 55 171.5
15 55 225 667.9
55 225 979 2863.9
a0 a1 a2 B
6 15 55 171.5
2.5 0 -17.5 -87.5 -239.15
9.1666667 0 -87.5 -474.83333 -1291.8167
             a0        a1        a2            B
             6         15        55            171.5
             0        -17.5     -87.5         -239.15
(mult = 5)   0         0         37.333333     96.066667

a2 = 2.57321
a1 = 0.79964
a0 = 2.99643

y = 2.5732x^2 + 0.7996x + 2.9964

m = 2,  n = 6

x    y      y'        Sr = (y - a0 - a1x - a2x^2)^2    (y - ybar)^2
0    2.2    2.9964    0.6343                            696.08
1    7.9    6.3693    2.3431                            427.80
2    14     14.8886   0.7895                            212.67
3    30     28.5542   2.0902                              2.01
4    45     47.3664   5.5996                            269.51
5    72.4   71.3249   1.1559                           1919.90
Σ    171.5            Sr = 12.6126                     St = 3527.97

Standard error: S = sqrt(Sr / (n - (m + 1))) = sqrt(12.6126 / 3) = 2.0504
Coefficient of determination: r^2 = (St - Sr) / St = 0.9964 = 99.64%
Numerical Integration
Euler’s Method
Euler's method is a numerical tool for approximating values for solutions of differential equations.
The method that Euler used to estimate a solution (i.e. the corresponding value of y for a
given value of x) of a differential equation was to follow the tangent line from the initial point to
the terminal point.
Here we use the value obtained from a single tangent-line step to estimate yact; it can be computed directly from the given information as

yact ≈ y1 = y0 + (xact − x0) · f(x0, y0).

The insight that Euler had was to see how this estimate could be improved on. The strategy he used was to divide the interval [x0, xact] into equal subintervals and recompute the tangent line as you go. This does not allow the tangent line to "drift" far from the function itself, and hopefully produces a more accurate estimate for yact at the end.

In general, the coordinates of the point (xn+1, yn+1) can be computed from the coordinates of the point (xn, yn) as follows:

xn+1 = xn + h,     yn+1 = yn + h · f(xn, yn).
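The stepping rule above translates directly into a short loop. In the sketch below (names are mine) the right-hand side f(x, y) = x − y is only a placeholder; the f used to generate the table that follows is not reproduced here.

```python
# Sketch of Euler's method y_{n+1} = y_n + h*f(x_n, y_n).
def euler(f, x0, y0, h, n_steps):
    xs, ys = [x0], [y0]
    for _ in range(n_steps):
        ys.append(ys[-1] + h * f(xs[-1], ys[-1]))   # follow the tangent line
        xs.append(xs[-1] + h)
    return xs, ys

# Hypothetical example: dy/dx = x - y, y(0) = 3.5, step h = 0.1, 20 steps.
xs, ys = euler(lambda x, y: x - y, 0.0, 3.5, 0.1, 20)
print(ys[-1])
```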
y0 (boundary condition) = 3.5
h (interval) = 0.1

k x f(x,y) y
0 0 -0.033 3.500
1 0.1 1.316 3.497
2 0.2 1.005 3.628
3 0.3 0.731 3.729
4 0.4 0.490 3.802
5 0.5 0.279 3.851
6 0.6 0.095 3.879
7 0.7 -0.066 3.888
8 0.8 -0.205 3.882
9 0.9 -0.325 3.861
10 1 -0.428 3.829
11 1.1 -0.516 3.786
12 1.2 -0.590 3.734
13 1.3 -0.653 3.675
14 1.4 -0.704 3.610
15 1.5 -0.746 3.540
16 1.6 -0.780 3.465
17 1.7 -0.807 3.387
18 1.8 -0.827 3.306
19 1.9 -0.841 3.224
20 2 -0.850 3.140
Trapezoidal Rule

The trapezoidal rule is applied to solve the definite integral of the form ∫ from a to b of f(x) dx by approximating the region under the graph of the function f(x) as a trapezoid and calculating its area. Under the trapezoidal rule, we evaluate the area under a curve by dividing the total area into little trapezoids rather than rectangles.

The trapezoidal rule is the first of the Newton-Cotes closed integration formulas; it corresponds to the case where f(x) is replaced by a first-order polynomial. The formula for the trapezoidal rule is

I ≈ (b − a) · [f(a) + f(b)] / 2.

Geometrically, the trapezoidal rule is equivalent to approximating the area of the trapezoid under the straight line connecting f(a) and f(b). Recall from geometry that the formula for computing the area of a trapezoid is the height times the average of the bases. In our case the concept is the same, but the trapezoid is on its side; the integral estimate can therefore be represented as width times average height, where, for the trapezoidal rule, the average height is the average of the function values at the end points, [f(a) + f(b)]/2.
Trapezoidal Rule

f(x)=1/x
a= 1
b= 2
n= 10
h= 0.1

segment x f(x) I
1 1
1 1.1 0.9090909 0.095454545
2 1.2 0.8333333 0.087121212
3 1.3 0.7692308 0.080128205
4 1.4 0.7142857 0.074175824
5 1.5 0.6666667 0.069047619
6 1.6 0.625 0.064583333
7 1.7 0.5882353 0.060661765
8 1.8 0.5555556 0.057189542
9 1.9 0.5263158 0.054093567
10 2 0.5 0.051315789
sum = 0.693771   (exact value: ln 2 ≈ 0.693147)
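The composite form of the rule sums one trapezoid per segment, which is what the table above does for f(x) = 1/x on [1, 2] with n = 10. A brief sketch (names are mine):

```python
# Sketch of the composite trapezoidal rule.
def trapezoid(f, a, b, n):
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)        # interior points get weight 1
    return h * total

print(trapezoid(lambda x: 1.0 / x, 1.0, 2.0, 10))   # ~0.693771 (ln 2 = 0.693147)
```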
Simpson’s Rule

Aside from applying the trapezoidal rule with finer segmentation, another way to obtain a
more accurate estimate of an integral is to use higher-order polynomials to connect the points.
For example, if there is an extra point midway between f(a) and f(b), the three points can be
connected with a parabola. If there are two points equally spaced between f(a) and f(b), the four
points can be connected with a third-order polynomial. The formulas that result from taking the
integrals under these polynomials are called Simpson’s rules.
In numerical integration, Simpson's rules are several approximations for definite
integrals, named after Thomas Simpson (1710–1761).
The most basic of these rules, called Simpson's 1/3 rule, or just Simpson's rule, reads

The approximate equality in the rule becomes exact if f is a polynomial up to 3rd degree.
If the 1/3 rule is applied to n equal subdivisions of the integration range [a, b], one obtains the
composite Simpson's rule. Points inside the integration range are given alternating weights 4/3
and 2/3.
Simpson's 3/8 rule, also called Simpson's second rule, requires one more function evaluation inside the integration range and gives lower error bounds, but does not improve the order of the error.
Simpson's 1/3 and 3/8 rules are two special cases of closed Newton–Cotes formulas.
In naval architecture and ship stability estimation, there also exists Simpson's third rule, which has no
special importance in general numerical analysis, see Simpson's rules (ship stability).
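The composite 1/3 rule weights the endpoints by 1 and the interior points alternately by 4 and 2, as in the worked table below. A minimal sketch (names are mine; the sine integrand in the usage line is only an illustration):

```python
# Sketch of the composite Simpson's 1/3 rule (n must be even).
import math

def simpson_13(f, a, b, n):
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)   # 4 for odd, 2 for even interior points
    return h / 3 * total

print(simpson_13(math.sin, 0.0, math.pi, 20))   # ~2.0, the exact integral of sin on [0, pi]
```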
a= 0
b= 2
n= 20
h= 0.1

x f(x) multiplier f(x)*multiplier


0 -7 1 -7
2 -11 1 -11
0.1 -6.972 4 -27.888
0.2 -6.896 2 -13.792
0.3 -6.784 4 -27.136
0.4 -6.648 2 -13.296
0.5 -6.5 4 -26
0.6 -6.352 2 -12.704
0.7 -6.216 4 -24.864
0.8 -6.104 2 -12.208
0.9 -6.028 4 -24.112
1 -6 2 -12
1.1 -6.032 4 -24.128
1.2 -6.136 2 -12.272
1.3 -6.324 4 -25.296
1.4 -6.608 2 -13.216
1.5 -7 4 -28
1.6 -7.512 2 -15.024
1.7 -8.156 4 -32.624
1.8 -8.944 2 -17.888
1.9 -9.888 4 -39.552
Integral Value = -14
Runge-Kutta Method
The Runge–Kutta methods are a family of implicit and explicit iterative methods, which
include the well-known routine called the Euler Method, used in temporal discretization for the
approximate solutions of ordinary differential equations. These methods were developed around
1900 by the German mathematicians Carl Runge and Wilhelm Kutta.

The most widely known member of the Runge–Kutta family is generally referred to as
"RK4", the "classic Runge–Kutta method" or simply as "the Runge–Kutta method".

Let an initial value problem be specified as follows:

dy/dt = f(t, y),   y(t0) = y0.

Here y is an unknown function (scalar or vector) of time t, which we would like to approximate; we are told that dy/dt, the rate at which y changes, is a function of t and of y itself. At the initial time t0 the corresponding y value is y0. The function f and the initial conditions t0, y0 are given.

RK4 advances the solution by

y(n+1) = y(n) + (h/6)·(k1 + 2k2 + 2k3 + k4),   t(n+1) = t(n) + h

for n = 0, 1, 2, 3, …, using

k1 = f(t(n), y(n))
k2 = f(t(n) + h/2, y(n) + h·k1/2)
k3 = f(t(n) + h/2, y(n) + h·k2/2)
k4 = f(t(n) + h, y(n) + h·k3)
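The classic step translates into a short loop, sketched below with my own names. The tabulated k values in the example that follows are consistent with f(x, y) = x + y, y(1) = 4, and h = 0.05.

```python
# Sketch of the classical fourth-order Runge-Kutta (RK4) method.
def rk4(f, x0, y0, h, n_steps):
    x, y = x0, y0
    for _ in range(n_steps):
        k1 = f(x, y)
        k2 = f(x + h / 2, y + h * k1 / 2)
        k3 = f(x + h / 2, y + h * k2 / 2)
        k4 = f(x + h, y + h * k3)
        y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        x += h
    return x, y

print(rk4(lambda x, y: x + y, 1.0, 4.0, 0.05, 20))   # y(2) ~ 13.3097, as in the table
```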
Runge-Kutta Method

x0 = 1
y0 = 4
y' = f(x,y) with y(x0) = y0
h = 0.05

n   x(n)   y(n)   k1   k2   k3   k4   y(n+1) = y(n) + (h/6)(k1 + 2k2 + 2k3 + k4)

0 1 4 5 5.15 5.15375 5.3076875 4.2576


1 1.05 4.2576266 5.3076266 5.4653172 5.4692595 5.6310895 4.5310
2 1.1 4.5310255 5.6310255 5.7968011 5.8009455 5.9710728 4.8210
3 1.15 4.8210054 5.9710054 6.1452805 6.1496374 6.3284873 5.1284
4 1.2 5.1284165 6.3284165 6.5116269 6.5162071 6.7042268 5.4542
5 1.25 5.4541524 6.7041524 6.8967562 6.9015713 7.099231 5.7992
6 1.3 5.7991527 7.0991527 7.3016315 7.3066935 7.5144874 6.1644
7 1.35 6.1644051 7.5144051 7.7272653 7.7325868 7.9510345 6.5509
8 1.4 6.550948 7.950948 8.1747217 8.180316 8.4099638 6.9599
9 1.45 6.9598729 8.4098729 8.6451197 8.6510009 8.8924229 7.3923
10 1.5 7.3923274 8.8923274 9.1396356 9.1458183 9.3996183 7.8495
11 1.55 7.8495178 9.3995178 9.6595058 9.6660055 9.9328181 8.3327
12 1.6 8.3327125 9.9327125 10.20603 10.212863 10.493356 8.8432
13 1.65 8.8432446 10.493245 10.780576 10.787759 11.082633 9.3825
14 1.7 9.3825158 11.082516 11.384579 11.39213 11.702122 9.9520
15 1.75 9.9519996 11.702 12.01955 12.027488 12.353374 10.5532
16 1.8 10.553245 12.353245 12.687076 12.695422 13.038016 11.1879
17 1.85 11.187881 13.037881 13.388828 13.397601 13.757761 11.8576
18 1.9 11.857618 13.757618 14.126558 14.135782 14.514407 12.5643
19 1.95 12.564257 14.514257 14.902114 14.91181 15.309848 13.3097
20 2 13.30969 15.30969 15.717432 15.727626 16.146071 14.0959
