
Numerical Methods Equation Sheet

1. Roots of Equations
• Bisection Method

    x_{\text{mid}} = \frac{x_{\text{lower}} + x_{\text{upper}}}{2}

• Newton-Raphson Method

    x_{i+1} = x_i - \frac{f(x_i)}{f'(x_i)}

• Secant Method

    x_{i+1} = x_i - f(x_i) \, \frac{x_i - x_{i-1}}{f(x_i) - f(x_{i-1})}
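
These three updates trade robustness for speed: bisection needs only a sign change, Newton-Raphson needs f' but converges quadratically near a root, and the secant method replaces f' with a finite-difference estimate from the two latest iterates. A minimal sketch of the first two (function names, tolerances, and iteration caps are illustrative choices, not from the sheet):

```python
def bisection(f, x_lower, x_upper, tol=1e-10, max_iter=100):
    """Halve the bracketing interval until it is narrower than tol."""
    if f(x_lower) * f(x_upper) > 0:
        raise ValueError("f must change sign on [x_lower, x_upper]")
    for _ in range(max_iter):
        x_mid = (x_lower + x_upper) / 2
        if f(x_lower) * f(x_mid) <= 0:
            x_upper = x_mid          # root lies in the left half
        else:
            x_lower = x_mid          # root lies in the right half
        if x_upper - x_lower < tol:
            break
    return (x_lower + x_upper) / 2

def newton_raphson(f, df, x0, tol=1e-10, max_iter=50):
    """Iterate x <- x - f(x)/f'(x) until the step is below tol."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: the root of x^2 - 2, i.e. sqrt(2), with both methods.
print(bisection(lambda x: x * x - 2, 0.0, 2.0))
print(newton_raphson(lambda x: x * x - 2, lambda x: 2 * x, 1.0))
```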

2. Linear Algebraic Equations


• Gauss Elimination Method: forward elimination to create an upper triangular matrix, then back substitution to solve for the variables.
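
A minimal sketch of both phases. The partial-pivoting row swap is an addition for numerical stability; the sheet itself describes only elimination and back substitution:

```python
import numpy as np

def gauss_eliminate(A, b):
    """Solve Ax = b by forward elimination + back substitution."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    # Forward elimination: zero out the entries below each pivot.
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))        # partial pivoting (assumed)
        A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Back substitution: solve the upper triangular system from the bottom up.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
print(gauss_eliminate(A, b))   # agrees with np.linalg.solve(A, b)
```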

• LU Decomposition

    A = LU, \qquad Ly = b, \qquad Ux = y

  where L is a lower triangular matrix and U is an upper triangular matrix.
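
The point of the factorization is that A is factored once and each new right-hand side costs only the two triangular solves. A sketch using SciPy's packed factorization (assuming scipy is available; lu_factor applies row pivoting, so it actually computes PA = LU):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[4.0, 3.0], [6.0, 3.0]])
b = np.array([10.0, 12.0])

lu, piv = lu_factor(A)       # factor once
x = lu_solve((lu, piv), b)   # then Ly = Pb, Ux = y per right-hand side
print(x)
```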

• Cholesky Factorization (for symmetric positive definite A)

    A = LL^T

  where L is a lower triangular matrix with positive diagonal elements and L^T is the transpose of L.
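
A minimal sketch mirroring the two-solve structure of the LU case, using NumPy's built-in factorization (the scipy triangular solver is an assumed import):

```python
import numpy as np
from scipy.linalg import solve_triangular

A = np.array([[4.0, 2.0], [2.0, 3.0]])    # symmetric positive definite
b = np.array([6.0, 5.0])

L = np.linalg.cholesky(A)                  # A = L @ L.T, L lower triangular
y = solve_triangular(L, b, lower=True)     # solve L y = b
x = solve_triangular(L.T, y, lower=False)  # solve L^T x = y
print(x)
```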

• Gauss-Seidel Iteration

    x_i^{(k+1)} = \frac{1}{a_{ii}} \left( b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k+1)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k)} \right)

• Jacobi Iteration

    x_i^{(k+1)} = \frac{1}{a_{ii}} \left( b_i - \sum_{\substack{j=1 \\ j \neq i}}^{n} a_{ij} x_j^{(k)} \right)

• Successive Over-Relaxation (SOR)

    x_i^{(k+1)} = (1 - \omega) x_i^{(k)} + \frac{\omega}{a_{ii}} \left( b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k+1)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k)} \right)

  where \omega is the relaxation factor, and 0 < \omega < 2.
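
The three iterations share one loop structure: Jacobi uses only the previous sweep's values, and Gauss-Seidel is the ω = 1 special case of SOR. A minimal sketch of the SOR update (the convergence test and the diagonally dominant example are illustrative choices):

```python
import numpy as np

def sor(A, b, omega=1.0, tol=1e-10, max_iter=500):
    """SOR iteration; omega = 1.0 reduces to Gauss-Seidel."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Already-updated x[j] for j < i, previous-sweep values for j > i.
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x

# Diagonally dominant system, for which the iteration converges.
A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([6.0, 9.0])
print(sor(A, b, omega=1.1))
```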

3. Curve Fitting and Interpolation


• Lagrange Interpolating Polynomial

    P(x) = \sum_{i=0}^{n} y_i \prod_{\substack{0 \le j \le n \\ j \neq i}} \frac{x - x_j}{x_i - x_j}

• Newton's Divided Difference Interpolation

    P(x) = f[x_0] + f[x_0, x_1](x - x_0) + \dots + f[x_0, x_1, \dots, x_n](x - x_0)(x - x_1) \cdots (x - x_{n-1})
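
Both forms represent the same degree-n polynomial through the n + 1 points; Newton's form just organizes the work so new points can be appended. A minimal sketch of pointwise Lagrange evaluation (naive O(n²) per evaluation point; names are illustrative):

```python
def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial at x."""
    total = 0.0
    n = len(xs)
    for i in range(n):
        term = ys[i]
        for j in range(n):
            if j != i:
                term *= (x - xs[j]) / (xs[i] - xs[j])
        total += term
    return total

# Passes through (0,1), (1,2), (2,5): the parabola x^2 + 1.
print(lagrange_eval([0.0, 1.0, 2.0], [1.0, 2.0, 5.0], 1.5))  # 3.25
```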

• Least Squares Regression (Linear)

    y = a_0 + a_1 x

    a_1 = \frac{n \sum x_i y_i - \sum x_i \sum y_i}{n \sum x_i^2 - \left( \sum x_i \right)^2}, \qquad a_0 = \frac{\sum y_i - a_1 \sum x_i}{n}
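
The two formulas translate directly into code. A minimal sketch (np.polyfit(x, y, 1) would return the same coefficients; the sample data are illustrative):

```python
import numpy as np

def linear_fit(x, y):
    """Least-squares line y = a0 + a1*x via the formulas above."""
    n = len(x)
    denom = n * np.sum(x**2) - np.sum(x)**2
    a1 = (n * np.sum(x * y) - np.sum(x) * np.sum(y)) / denom
    a0 = (np.sum(y) - a1 * np.sum(x)) / n
    return a0, a1

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.1, 2.9, 5.2, 7.1])
print(linear_fit(x, y))   # close to intercept 1, slope 2
```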

4. Numerical Differentiation and Integration


• Finite Difference Approximation

  – Forward Difference

      f'(x) \approx \frac{f(x + h) - f(x)}{h}

  – Backward Difference

      f'(x) \approx \frac{f(x) - f(x - h)}{h}

  – Central Difference

      f'(x) \approx \frac{f(x + h) - f(x - h)}{2h}
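
Forward and backward differences carry O(h) error; the central difference is O(h²). A quick numerical check on f = sin, whose derivative at x = 1 is cos(1):

```python
import numpy as np

f, x, h = np.sin, 1.0, 1e-4
exact = np.cos(1.0)

forward  = (f(x + h) - f(x)) / h
backward = (f(x) - f(x - h)) / h
central  = (f(x + h) - f(x - h)) / (2 * h)

# The central-difference error (~h^2) is far smaller than the ~h one-sided errors.
print(forward - exact, backward - exact, central - exact)
```
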
• Numerical Integration

  – Trapezoidal Rule

      \int_a^b f(x) \, dx \approx \frac{b - a}{2} \left[ f(a) + f(b) \right]

  – Simpson's 1/3 Rule: for an even number of intervals (n = 2, 4, 6, ...):

      \int_a^b f(x) \, dx \approx \frac{b - a}{6} \left[ f(a) + 4 f\!\left( \frac{a + b}{2} \right) + f(b) \right]

  – Simpson's 3/8 Rule: for a number of intervals that is a multiple of three (n = 3, 6, 9, ...):

      \int_a^b f(x) \, dx \approx \frac{b - a}{8} \left[ f(a) + 3 f\!\left( a + \frac{b - a}{3} \right) + 3 f\!\left( a + \frac{2(b - a)}{3} \right) + f(b) \right]
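
In practice these rules are applied compositely over many subintervals rather than once over [a, b]. A minimal sketch of composite trapezoid and Simpson's 1/3 (the test integral is illustrative):

```python
import numpy as np

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

def simpson13(f, a, b, n):
    """Composite Simpson's 1/3 rule; n must be even."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    # Coefficient pattern 1, 4, 2, 4, ..., 2, 4, 1 over the grid points.
    return h / 3 * (y[0] + 4 * y[1:-1:2].sum() + 2 * y[2:-1:2].sum() + y[-1])

# Integral of sin on [0, pi] is exactly 2.
print(trapezoid(np.sin, 0.0, np.pi, 100))
print(simpson13(np.sin, 0.0, np.pi, 100))
```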

  – Gauss-Legendre Quadrature Rules: the integral approximation is given by

      \int_{-1}^{1} f(x) \, dx \approx \sum_{i=1}^{n} w_i f(x_i)

    where x_i are the Gauss-Legendre nodes and w_i are the corresponding weights. Nodes and weights for the first few orders:

    1. 1st Order (n = 1)

        x_1 = 0, \quad w_1 = 2

    2. 2nd Order (n = 2)

        x_{1,2} = \pm \frac{1}{\sqrt{3}}, \quad w_1 = w_2 = 1

    3. 3rd Order (n = 3)

        x_1 = 0, \quad x_{2,3} = \pm \sqrt{\frac{3}{5}}, \quad w_1 = \frac{8}{9}, \quad w_2 = w_3 = \frac{5}{9}

    4. 4th Order (n = 4)

        x_{1,2} = \pm \sqrt{\frac{3}{7} - \frac{2}{7} \sqrt{\frac{6}{5}}}, \quad x_{3,4} = \pm \sqrt{\frac{3}{7} + \frac{2}{7} \sqrt{\frac{6}{5}}}

        w_{1,2} = \frac{18 + \sqrt{30}}{36}, \quad w_{3,4} = \frac{18 - \sqrt{30}}{36}
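
For a general interval [a, b], map each node by x = (b − a)t/2 + (a + b)/2 and scale the weights by (b − a)/2. A sketch using NumPy's leggauss to generate nodes and weights of any order:

```python
import numpy as np

def gauss_legendre(f, a, b, n):
    """n-point Gauss-Legendre quadrature on [a, b]."""
    t, w = np.polynomial.legendre.leggauss(n)   # nodes/weights on [-1, 1]
    x = 0.5 * (b - a) * t + 0.5 * (a + b)       # map nodes to [a, b]
    return 0.5 * (b - a) * np.sum(w * f(x))

# An n-point rule is exact for polynomials up to degree 2n - 1,
# so 3 points integrate x^5 exactly.
print(gauss_legendre(lambda x: x**5, 0.0, 1.0, 3))   # exact value: 1/6
```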

5. Ordinary Differential Equations (ODEs)


• Euler's Method

    y_{i+1} = y_i + h f(x_i, y_i)

• Runge-Kutta Method (Fourth Order)

    y_{i+1} = y_i + \frac{h}{6} (k_1 + 2 k_2 + 2 k_3 + k_4)

  where:

    k_1 = f(x_i, y_i)

    k_2 = f\!\left( x_i + \frac{h}{2}, \; y_i + \frac{h}{2} k_1 \right)

    k_3 = f\!\left( x_i + \frac{h}{2}, \; y_i + \frac{h}{2} k_2 \right)

    k_4 = f(x_i + h, \; y_i + h k_3)
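
A minimal sketch of Euler and classical RK4 side by side on y' = y, y(0) = 1, whose exact solution is e^x (the step size and test problem are illustrative):

```python
import numpy as np

def euler_step(f, x, y, h):
    """One step of Euler's method."""
    return y + h * f(x, y)

def rk4_step(f, x, y, h):
    """One step of the classical fourth-order Runge-Kutta method."""
    k1 = f(x, y)
    k2 = f(x + h / 2, y + h / 2 * k1)
    k3 = f(x + h / 2, y + h / 2 * k2)
    k4 = f(x + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

f = lambda x, y: y            # y' = y, exact solution e^x
h, n = 0.1, 10
y_e = y_r = 1.0
for i in range(n):
    y_e = euler_step(f, i * h, y_e, h)
    y_r = rk4_step(f, i * h, y_r, h)
print(y_e, y_r, np.exp(1.0))  # RK4 lands far closer to e
```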

6. Partial Differential Equations (PDEs)
• Finite Difference Method for Heat Equation

    u_i^{n+1} = u_i^n + \alpha \frac{\Delta t}{(\Delta x)^2} \left( u_{i+1}^n - 2 u_i^n + u_{i-1}^n \right)

  where \alpha is the thermal diffusivity.
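
This explicit scheme is only conditionally stable: it requires r = αΔt/(Δx)² ≤ 1/2, so Δt is chosen from Δx. A minimal sketch on a rod with fixed zero-temperature ends (grid sizes and the initial spike are illustrative):

```python
import numpy as np

alpha, dx = 1.0, 0.1
dt = 0.4 * dx**2 / alpha           # satisfies the stability limit r <= 1/2
r = alpha * dt / dx**2

u = np.zeros(11)
u[5] = 1.0                         # initial heat spike in the middle
for _ in range(100):
    # Interior update: u_i <- u_i + r (u_{i+1} - 2 u_i + u_{i-1}).
    u[1:-1] = u[1:-1] + r * (u[2:] - 2 * u[1:-1] + u[:-2])
    # u[0] and u[-1] stay 0: fixed-temperature boundaries.
print(u.round(4))
```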

7. Optimization
• Golden Section Search

    x_1 = a + (1 - \phi)(b - a), \qquad x_2 = a + \phi (b - a)

  where \phi = \frac{\sqrt{5} - 1}{2}.

• Newton's Method for Optimization

    x_{i+1} = x_i - \frac{f'(x_i)}{f''(x_i)}
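
Newton's method for optimization is just Newton-Raphson applied to f', while golden-section search needs no derivatives: with φ = (√5 − 1)/2 ≈ 0.618, each iteration shrinks the bracket by a factor of φ and reuses one of the two interior function values. A minimal sketch of the latter (the tolerance and test function are illustrative):

```python
import math

def golden_section(f, a, b, tol=1e-8):
    """Minimize a unimodal f on [a, b] by golden-section search."""
    phi = (math.sqrt(5) - 1) / 2
    x1 = a + (1 - phi) * (b - a)
    x2 = a + phi * (b - a)
    f1, f2 = f(x1), f(x2)
    while b - a > tol:
        if f1 < f2:
            b, x2, f2 = x2, x1, f1           # keep [a, x2]; reuse x1 as new x2
            x1 = a + (1 - phi) * (b - a)
            f1 = f(x1)
        else:
            a, x1, f1 = x1, x2, f2           # keep [x1, b]; reuse x2 as new x1
            x2 = a + phi * (b - a)
            f2 = f(x2)
    return (a + b) / 2

# Minimum of (x - 2)^2 + 1 is at x = 2.
print(golden_section(lambda x: (x - 2)**2 + 1, 0.0, 5.0))
```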

8. Norms
• Vector Norms

– 1-Norm (Taxicab or Manhattan Norm)

    \|x\|_1 = \sum_{i=1}^{n} |x_i|

  where x = [x_1, x_2, \dots, x_n]^T is a vector in \mathbb{R}^n.

– 2-Norm (Euclidean Norm)

    \|x\|_2 = \left( \sum_{i=1}^{n} |x_i|^2 \right)^{1/2}

– Infinity Norm (Maximum Norm)

    \|x\|_\infty = \max_{1 \le i \le n} |x_i|

• Matrix Norms

– 1-Norm (Maximum Absolute Column Sum)

    \|A\|_1 = \max_{1 \le j \le n} \sum_{i=1}^{m} |a_{ij}|

  where A = [a_{ij}] is an m \times n matrix.

– Frobenius Norm

    \|A\|_F = \left( \sum_{i=1}^{m} \sum_{j=1}^{n} |a_{ij}|^2 \right)^{1/2}

– Infinity Norm (Maximum Absolute Row Sum)

    \|A\|_\infty = \max_{1 \le i \le m} \sum_{j=1}^{n} |a_{ij}|

– Matrix 2-Norm (Spectral Norm)

    \|A\|_2 = \sigma_{\max}

  where \sigma_{\max} is the largest singular value of A.
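
All of these norms are available in NumPy through numpy.linalg.norm's ord argument; a quick check against the definitions above:

```python
import numpy as np

x = np.array([3.0, -4.0])
A = np.array([[1.0, -2.0], [3.0, 4.0]])

# Vector norms: |3| + |-4| = 7, sqrt(9 + 16) = 5, max(|3|, |-4|) = 4.
print(np.linalg.norm(x, 1), np.linalg.norm(x, 2), np.linalg.norm(x, np.inf))

print(np.linalg.norm(A, 1),        # max absolute column sum = 6
      np.linalg.norm(A, np.inf),   # max absolute row sum = 7
      np.linalg.norm(A, 'fro'),    # sqrt(1 + 4 + 9 + 16) = sqrt(30)
      np.linalg.norm(A, 2))        # largest singular value of A
```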
