Csitnepal: Numerical Method (2067-Second Batch)
1. Discuss the Half-interval and Newton's methods for solving the non-linear equation f(x) = 0. Illustrate the methods with figures and compare them, stating their advantages and disadvantages.
Half-Interval method:
Suppose that f(x) is a continuous function on the interval [a0, b0] and f(a0)f(b0) < 0. Then, by the intermediate value theorem, there exists a root of f(x) in the interval (a0, b0). We calculate the first approximation of this root as c0 = (a0 + b0)/2. If f(c0) = 0, then c0 is the root of f(x). If not, we bisect the interval [a0, b0] into the two equal-length sub-intervals [a0, c0] and [c0, b0], and set a1 = a0, b1 = c0 if f(a0)f(c0) < 0, or a1 = c0, b1 = b0 if f(c0)f(b0) < 0. The second approximation of the root is now calculated as c1 = (a1 + b1)/2. If f(c1) = 0, then c1 is the root of f(x). If not, we again bisect the interval [a1, b1] into the two equal-length sub-intervals [a1, c1] and [c1, b1], set a2 = a1, b2 = c1 if f(a1)f(c1) < 0, or a2 = c1, b2 = b1 if f(c1)f(b1) < 0, calculate the third approximation c2 = (a2 + b2)/2, and continue the above process.
Advantages:
This method is guaranteed to work for any continuous function 𝑓(𝑥) on the interval [𝑎, 𝑏]
with 𝑓(𝑎)𝑓(𝑏) < 0.
The number of iterations required to achieve a specified accuracy is known in advance.
Disadvantage:
The method converges slowly, i.e., it requires more iterations to achieve the same
accuracy when compared with some other methods for solving non-linear equations.
Newton’s Method:
Let f(x) be a differentiable function and let x0 be an initial point which is sufficiently close to the root of f(x). Let (x1, 0) be the point of intersection of the x-axis and the tangent drawn to the curve f(x) at (x0, f(x0)). Newton's method takes this point as the first approximation of the root of f(x). To calculate this point, we note that the slope of the tangent to f(x) at x = x0 is equal to the slope of the line through the points (x1, 0) and (x0, f(x0)), i.e.
f'(x0) = (f(x0) − 0)/(x0 − x1)  ⇒  x1 = x0 − f(x0)/f'(x0)
If 𝑓(𝑥1 ) = 0, then 𝑥1 is the required root of 𝑓(𝑥). If not, then we take the point of intersection
(𝑥2 , 0) of the x-axis and the tangent to the 𝑓(𝑥) at 𝑥 = 𝑥1 as the next approximation of the root.
As above, we have
x2 = x1 − f(x1)/f'(x1)
In general, the (𝑛 + 1)𝑡ℎ approximation of the root of 𝑓(𝑥) is given by the formula:
x(n+1) = xn − f(xn)/f'(xn) ;  n ≥ 0
We continue to calculate the approximations 𝑥1 , 𝑥2 , 𝑥3 , … using the above formula until we find
the root or its satisfactory approximation.
Advantages:
Unlike the incremental search and bisection methods, the Newton-Raphson method isn’t
fooled by singularities.
Also, it can identify repeated roots, since it does not look for changes in the sign of 𝑓(𝑥)
explicitly.
It can find complex roots of polynomials, provided the iteration is started from a complex initial value.
For many problems, Newton-Raphson converges more quickly than either bisection or incremental search.
Disadvantages:
The Newton-Raphson method only works if you have a functional representation of f'(x). Some functions may be difficult or impossible to differentiate. You may be able to work around this by approximating the derivative as f'(x) ≈ [f(x + Δx) − f(x)]/Δx.
The Newton-Raphson method is not guaranteed to find a root.
2. Derive the equation for Lagrange’s interpolating polynomial and find the value of f(x) at
x=1 for the following:
x:    −1   −2    2    4
f(x): −1   −9   11   69

Solution:
3. Write Newton-Cotes integration formulas in basic form for n = 1, 2, 3 and give their composite rules. Evaluate ∫₀¹ e^(−x²) dx using the Gaussian integration three-point formula.
To find the value of ∫ₐᵇ f(x) dx numerically using the Newton-Cotes method, we first of all divide the interval [a, b] into n equal parts of length h by the points xi = a + ih, i = 0, 1, 2, …, n, where h = (b − a)/n. Then a = x0 < x1 < x2 < ⋯ < xn = b forms a partition of [a, b]. Let Pn(x) be the interpolating polynomial of f(x) interpolating at the n + 1 points (xi, fi), i = 0, 1, 2, …, n, where fi = f(xi). Then Pn(x) is given by the formula
Pn(x) = f0 + SΔf0 + [S(S − 1)/2!]Δ²f0 + ⋯ + [S(S − 1)…(S − n + 1)/n!]Δⁿf0
where S = (x − x0)/h and Δʲf0 = Δʲ⁻¹f1 − Δʲ⁻¹f0 are the jth forward differences.
Replacing f(x) by Pn(x) and integrating gives, for n = 1, 2, 3, the basic Newton-Cotes formulas:
n = 1 (trapezoidal rule): ∫ f(x) dx over [x0, x1] ≈ (h/2)(f0 + f1)
n = 2 (Simpson's 1/3 rule): ∫ f(x) dx over [x0, x2] ≈ (h/3)(f0 + 4f1 + f2)
n = 3 (Simpson's 3/8 rule): ∫ f(x) dx over [x0, x3] ≈ (3h/8)(f0 + 3f1 + 3f2 + f3)
Applying these formulas piecewise over the partition and summing gives the composite rules:
Composite trapezoidal rule: ∫ₐᵇ f(x) dx ≈ (h/2)[f0 + 2(f1 + f2 + ⋯ + f(n−1)) + fn]
Composite Simpson's 1/3 rule (n even): ∫ₐᵇ f(x) dx ≈ (h/3)[f0 + 4(f1 + f3 + ⋯ + f(n−1)) + 2(f2 + f4 + ⋯ + f(n−2)) + fn]
Composite Simpson's 3/8 rule (n a multiple of 3): ∫ₐᵇ f(x) dx ≈ (3h/8)[f0 + 3(f1 + f2 + f4 + f5 + ⋯) + 2(f3 + f6 + ⋯) + fn]
Numerical:
Let x = [(1 − 0)y + 1 + 0]/2 = 0.5y + 0.5.
Then the limits of integration are changed from (0, 1) to (−1, 1), so that
∫₀¹ e^(−x²) dx = [(1 − 0)/2] ∫₋₁¹ e^(−(0.5y + 0.5)²) dy
Using the Gaussian 3-point formula, we get
∫₋₁¹ e^(−(0.5y + 0.5)²) dy
= 0.55556 × e^(−(0.5 × (−0.77460) + 0.5)²) + 0.88889 × e^(−(0.5 × 0 + 0.5)²) + 0.55556 × e^(−(0.5 × 0.77460 + 0.5)²)
= 0.54855 + 0.69227 + 0.25282 = 1.49364
∴ ∫₀¹ e^(−x²) dx = [(1 − 0)/2] × 1.49364 = 0.74682
4. Solve the following system of linear algebraic equations using the Gauss-Jordan algorithm.
[ 0   2   0   1 ] [x1]   [  0 ]
[ 2   2   3   2 ] [x2] = [ −2 ]
[ 4  −3   0   1 ] [x3]   [ −7 ]
[ 6   1  −6  −5 ] [x4]   [  6 ]
The augmented matrix of the system is as follow:
0 2 0 1 0
2 2 3 2 −2
[ ]
4 −3 0 1 −7
6 1 −6 −5 6
Interchanging first row with second row: 𝑅1 ↔ 𝑅2
2 2 3 2 −2
0 2 0 1 0
[ ]
4 −3 0 1 −7
6 1 −6 −5 6
Normalize the first row: R1 → (1/2)R1
[ 1    1    3/2     1   |  −1 ]
[ 0    2    0       1   |   0 ]
[ 4   −3    0       1   |  −7 ]
[ 6    1   −6      −5   |   6 ]
Eliminate x1 from the 2nd, 3rd and 4th rows: R2 → R2 ; R3 → R3 − 4R1 ; R4 → R4 − 6R1
[ 1    1    3/2     1   |  −1 ]
[ 0    2    0       1   |   0 ]
[ 0   −7   −6      −3   |  −3 ]
[ 0   −5  −15     −11   |  12 ]
Normalize the second row: R2 → (1/2)R2
[ 1    1    3/2     1   |  −1 ]
[ 0    1    0      1/2  |   0 ]
[ 0   −7   −6      −3   |  −3 ]
[ 0   −5  −15     −11   |  12 ]
Eliminate x2 from the 1st, 3rd and 4th rows: R1 → R1 − R2 ; R3 → R3 + 7R2 ; R4 → R4 + 5R2
[ 1    0    3/2    1/2   |  −1 ]
[ 0    1    0      1/2   |   0 ]
[ 0    0   −6      1/2   |  −3 ]
[ 0    0  −15    −17/2   |  12 ]
Normalize the third row: R3 → −(1/6)R3
[ 1    0    3/2    1/2   |  −1  ]
[ 0    1    0      1/2   |   0  ]
[ 0    0    1    −1/12   |  1/2 ]
[ 0    0  −15    −17/2   |  12  ]
Eliminate x3 from the 1st, 2nd and 4th rows: R1 → R1 − (3/2)R3 ; R2 → R2 ; R4 → R4 + 15R3
[ 1    0    0      5/8    |  −7/4 ]
[ 0    1    0      1/2    |   0   ]
[ 0    0    1    −1/12    |  1/2  ]
[ 0    0    0   −117/12   |  39/2 ]
Normalize the fourth row: R4 → −(12/117)R4
[ 1    0    0      5/8    |  −7/4 ]
[ 0    1    0      1/2    |   0   ]
[ 0    0    1    −1/12    |  1/2  ]
[ 0    0    0      1      |  −2   ]
Eliminate x4 from the 1st, 2nd and 3rd rows: R1 → R1 − (5/8)R4 ; R2 → R2 − (1/2)R4 ; R3 → R3 + (1/12)R4
[ 1    0    0    0   |  −1/2 ]
[ 0    1    0    0   |   1   ]
[ 0    0    1    0   |  1/3  ]
[ 0    0    0    1   |  −2   ]
Therefore, the solution is x1 = −1/2 ; x2 = 1 ; x3 = 1/3 ; x4 = −2.
5. Write an algorithm and computer program to solve system of linear equation using
Gauss-Seidel iterative method.
Algorithm:
Input:
A diagonally dominant system of linear equations Ax = b of size n, and a tolerance error
Process:
FOR i = 1 TO n SET x_i = b_i / a_ii
BEGIN:
SET key = 0
FOR i = 1 TO n
{
SET sum = b_i
FOR j = 1 TO n: IF j ≠ i THEN SET sum = sum − a_ij × x_j
SET dummy = sum / a_ii
IF key = 0 AND |(dummy − x_i) / dummy| > error
THEN
SET key = 1
SET x_i = dummy
}
IF key = 1 THEN
GOTO BEGIN
Output:
Approximate solution x_i ; i = 1, 2, 3, …, n of Ax = b
Computer program:
#include <iostream>
#include <iomanip>
#include <cmath>
#define MAXIT 50
#define EPS 0.000001
using namespace std;

void gaseid(int n, float a[10][10], float b[10], float x[10], int *count, int *status);

int main()
{
    float a[10][10], b[10], x[10];
    int i, j, n, count, status;
    cout << "** SOLUTION BY GAUSS SEIDEL ITERATION METHOD **" << endl;
    cout << "input the size of the system:" << endl;
    cin >> n;
    cout << "input coefficients, a(i,j)" << endl;
    cout << "one row on each line" << endl;
    for (i = 1; i <= n; i++)
        for (j = 1; j <= n; j++)
            cin >> a[i][j];
    cout << "input vector b:" << endl;
    for (i = 1; i <= n; i++)
        cin >> b[i];
    gaseid(n, a, b, x, &count, &status);
    if (status == 2)
    {
        cout << "no CONVERGENCE in " << MAXIT << " iterations." << endl;
    }
    else
    {
        cout << "SOLUTION VECTOR X" << endl;
        for (i = 1; i <= n; i++)
            cout << setw(12) << x[i];
        cout << endl << "iterations= " << count << endl;
    }
    return 0;
}

void gaseid(int n, float a[10][10], float b[10], float x[10], int *count, int *status)
{
    int i, j, key;
    float sum, dummy;
    // initial guess: x_i = b_i / a_ii
    for (i = 1; i <= n; i++)
        x[i] = b[i] / a[i][i];
    *count = 1;
begin:
    key = 0;
    for (i = 1; i <= n; i++)
    {
        sum = b[i];
        for (j = 1; j <= n; j++)
        {
            if (i == j)
                continue;
            sum = sum - a[i][j] * x[j];   // newest values are used as soon as available
        }
        dummy = sum / a[i][i];
        if (key == 0 && fabs((dummy - x[i]) / dummy) > EPS)
            key = 1;                      // relative change still above tolerance
        x[i] = dummy;
    }
    if (key == 1)
    {
        if (*count == MAXIT)
        {
            *status = 2;
            return;
        }
        *count = *count + 1;
        goto begin;
    }
    *status = 1;
    return;
}
6. Explain the Picard's process of successive approximation. Obtain a solution up to the fifth approximation of the equation dy/dx = y + x such that y = 1 when x = 0 using Picard's process of successive approximation.
Suppose that we are given a differential equation of the form dy/dx = f(x, y) ; y(x0) = y0. Integrating both sides from x0 to x gives
y(x) = y0 + ∫[x0 to x] f(t, y(t)) dt … (i)
The first approximation y1(x) of y(x) is calculated by substituting y0 for y(t) on the right of equation (i) as
y1(x) = y0 + ∫[x0 to x] f(t, y0) dt
The second approximation y2(x) of y(x) is calculated by substituting y1(t) for y(t) on the right of equation (i) as
y2(x) = y0 + ∫[x0 to x] f(t, y1(t)) dt
Proceeding similarly, the nth approximation of y(x) is given by the iteration
yn(x) = y0 + ∫[x0 to x] f(t, y(n−1)(t)) dt
This iterative method of solving the differential equation is known as Picard's method.
Numerical:
For dy/dx = y + x with y = 1 when x = 0, i.e. y0(x) = y(0) = 1.
For n = 1, we get
y1(x) = 1 + ∫[0 to x] f(t, y0(t)) dt = 1 + ∫[0 to x] (t + 1) dt = 1 + [(t + 1)²/2] from 0 to x = 1 + (x + 1)²/2 − 1/2
= 1 + x + x²/2
For n = 2, we get
y2(x) = 1 + ∫[0 to x] f(t, y1(t)) dt = 1 + ∫[0 to x] (t + 1 + t + t²/2) dt = 1 + [t + t² + t³/6] from 0 to x
= 1 + x + x² + x³/6
For n = 3, we get
y3(x) = 1 + ∫[0 to x] f(t, y2(t)) dt = 1 + ∫[0 to x] (t + 1 + t + t² + t³/6) dt = 1 + [t + t² + t³/3 + t⁴/24] from 0 to x
= 1 + x + x² + x³/3 + x⁴/24
For n = 4, we get
y4(x) = 1 + ∫[0 to x] f(t, y3(t)) dt = 1 + ∫[0 to x] (t + 1 + t + t² + t³/3 + t⁴/24) dt
= 1 + [t + t² + t³/3 + t⁴/12 + t⁵/120] from 0 to x = 1 + x + x² + x³/3 + x⁴/12 + x⁵/120
For n = 5, we get
y5(x) = 1 + ∫[0 to x] f(t, y4(t)) dt = 1 + ∫[0 to x] (t + 1 + t + t² + t³/3 + t⁴/12 + t⁵/120) dt
= 1 + x + x² + x³/3 + x⁴/12 + x⁵/60 + x⁶/720
or, u_xx(x, y) = (1/h²)[u(x + h, y) − 2u(x, y) + u(x − h, y)] … (A)
Adding equations (iii) & (iv) and ignoring the terms containing 𝑘 4 and higher powers, we get
𝑢(𝑥, 𝑦 + 𝑘) + 𝑢(𝑥, 𝑦 − 𝑘) = 2𝑢(𝑥, 𝑦) + 𝑘 2 𝑢𝑦𝑦 (𝑥, 𝑦)
or, u_yy(x, y) = (1/k²)[u(x, y + k) − 2u(x, y) + u(x, y − k)] … (B)
Now if u_xx + u_yy = 0 is the given Laplace's equation, then from equations (A) and (B) we have
(1/h²)[u(x + h, y) − 2u(x, y) + u(x − h, y)] + (1/k²)[u(x, y + k) − 2u(x, y) + u(x, y − k)] = 0
Choosing h = k, we get
u(x + h, y) + u(x, y + h) + u(x − h, y) + u(x, y − h) − 4u(x, y) = 0
∴ u(x, y) = (1/4)[u(x + h, y) + u(x, y + h) + u(x − h, y) + u(x, y − h)]
is the difference equation for Laplace’s equation.
Numerical:
From the difference equation for Laplace’s equation, we have
200 + 200 + 𝑢2 + 𝑢3 − 4𝑢1 = 0 ⇒ −4𝑢1 + 𝑢2 + 𝑢3 = −400 … … … (𝑖)
200 + 100 + 𝑢4 + 𝑢1 − 4𝑢2 = 0 ⇒ 𝑢1 − 4𝑢2 + 𝑢4 = −300 … … … (𝑖𝑖)
𝑢1 + 200 + 100 + 𝑢4 − 4𝑢3 = 0 ⇒ 𝑢1 − 4𝑢3 + 𝑢4 = −300 … … … (𝑖𝑖𝑖)
𝑢2 + 𝑢3 + 100 + 100 − 4𝑢4 = 0 ⇒ 𝑢2 + 𝑢3 − 4𝑢4 = −200 … … … (𝑖𝑣)
Solving the equations (𝑖), (𝑖𝑖), (𝑖𝑖𝑖) & (𝑖𝑣), we get
𝑢1 = 175
𝑢2 = 𝑢3 = 150
𝑢4 = 125
(OR) 7. Derive a difference equation to represent Poisson's equation. Solve the Poisson's equation ∇²f = 2x²y² over the square domain 0 ≤ x ≤ 3, 0 ≤ y ≤ 3 with f = 0 on the boundary and h = 1.
Let u = u(x, y) be a function of two independent variables x and y. Then by Taylor's formula:
u(x + h, y) = u(x, y) + h·u_x(x, y) + (h²/2!)u_xx(x, y) + (h³/3!)u_xxx(x, y) + ⋯ … (i)
u(x − h, y) = u(x, y) − h·u_x(x, y) + (h²/2!)u_xx(x, y) − (h³/3!)u_xxx(x, y) + ⋯ … (ii)
u(x, y + k) = u(x, y) + k·u_y(x, y) + (k²/2!)u_yy(x, y) + (k³/3!)u_yyy(x, y) + ⋯ … (iii)
u(x, y − k) = u(x, y) − k·u_y(x, y) + (k²/2!)u_yy(x, y) − (k³/3!)u_yyy(x, y) + ⋯ … (iv)
Adding equations (i) & (ii) and ignoring the terms containing ℎ4 and higher powers, we get
𝑢(𝑥 + ℎ, 𝑦) + 𝑢(𝑥 − ℎ, 𝑦) = 2𝑢(𝑥, 𝑦) + ℎ2 𝑢𝑥𝑥 (𝑥, 𝑦)
or, u_xx(x, y) = (1/h²)[u(x + h, y) − 2u(x, y) + u(x − h, y)] … (A)
Adding equations (iii) & (iv) and ignoring the terms containing 𝑘 4 and higher powers, we get
𝑢(𝑥, 𝑦 + 𝑘) + 𝑢(𝑥, 𝑦 − 𝑘) = 2𝑢(𝑥, 𝑦) + 𝑘 2 𝑢𝑦𝑦 (𝑥, 𝑦)
or, u_yy(x, y) = (1/k²)[u(x, y + k) − 2u(x, y) + u(x, y − k)] … (B)
Now if 𝑢𝑥𝑥 + 𝑢𝑦𝑦 = 𝑔(𝑥, 𝑦) is the given Poisson’s equation, then from equation (A) & (B)
choosing ℎ = 𝑘 we have,
𝑢(𝑥 + ℎ, 𝑦) + 𝑢(𝑥, 𝑦 + ℎ) + 𝑢(𝑥 − ℎ, 𝑦) + 𝑢(𝑥, 𝑦 − ℎ) − 4𝑢(𝑥, 𝑦) = ℎ2 𝑔(𝑥, 𝑦)
which is the difference equation for Poisson’s equation.
Numerical:
With h = 1, the interior mesh points of the square 0 ≤ x ≤ 3, 0 ≤ y ≤ 3 are f1 = f(1, 2), f2 = f(2, 2), f3 = f(1, 1) and f4 = f(2, 1), with f = 0 on the boundary.
Now, from the difference equation for the Poisson's equation with g(x, y) = 2x²y², we have
0 + 0 + f2 + f3 − 4f1 = 1² × 2 × 1² × 2²
or, f2 + f3 − 4f1 = 8 … (i)
0 + 0 + f1 + f4 − 4f2 = 1² × 2 × 2² × 2²
or, f1 + f4 − 4f2 = 32 … (ii)
0 + 0 + f1 + f4 − 4f3 = 1² × 2 × 1² × 1²
or, f1 + f4 − 4f3 = 2 … (iii)
0 + 0 + f2 + f3 − 4f4 = 1² × 2 × 2² × 1²
or, f2 + f3 − 4f4 = 8 … (iv)
Subtracting (iv) from (i) gives f1 = f4, and subtracting (iii) from (ii) gives f3 = f2 + 15/2. Substituting these into (i) and (ii) and solving, we get
f1 = f4 = −5.5 ; f2 = −10.75 ; f3 = −3.25