Numerical Analysis All Lab Reports
1. Find the new iterate x2 using x2 = (x0 + x1)/2 and evaluate f(x2).
2. If f(x2) = 0 then x2 is a root of f(x).
3. If f(x2) ≠ 0 then two things are possible:
i) If f(x0) f(x2) < 0, we compute the new iterate x3 as:
x3 = (x0 + x2)/2 and evaluate f(x3)
ii) If f(x1) f(x2) < 0, we compute the new iterate x3 as:
x3 = (x1 + x2)/2 and evaluate f(x3)
Codes:
% Solution of nonlinear equations
% Bisection method
clear all, close all, clc
format short
f = @(x) 3*x.^4-x.^3+5*x.^2-7;
a=2; b=5;
max_I=20;% maximum number of iterations
tol=10^(-3);
fa=feval(f,a); % function values at the interval endpoints
fb=feval(f,b);
if fa*fb>0
disp('Wrong choice') % f(a) and f(b) must have opposite signs
end
for k=1:max_I
c=(a+b)/2; % midpoint of the current interval
C(k)=c; % store the iterate sequence
fc=feval(f,c);
Fc(k)=fc;
if fc==0,break
end
if fb*fc>0 % root lies in [a,c]
b=c;
fb=fc;
else % root lies in [c,b]
a=c;
fa=fc;
end
if abs(fc)<tol,break
end
end
Solution:
Wrong choice
no of iteration(k), root sequence(x_k), function values fc
ans =
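The run above prints "Wrong choice" because f(2) and f(5) are both positive, so [2, 5] does not bracket a root; in fact f(1) = 3 - 1 + 5 - 7 = 0 for this polynomial. As a cross-check, here is a rough Python sketch of the same bisection loop on a bracketing interval [0, 2] (the interval choice is ours, not the lab's):

```python
# Bisection sketch for f(x) = 3x^4 - x^3 + 5x^2 - 7.
# [0, 2] brackets the root x = 1, since f(0) = -7 < 0 < f(2) = 53.
f = lambda x: 3*x**4 - x**3 + 5*x**2 - 7

def bisection(f, a, b, tol=1e-3, max_iter=20):
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("Wrong choice: f(a) and f(b) must differ in sign")
    c = a
    for k in range(max_iter):
        c = (a + b) / 2          # midpoint of the current interval
        fc = f(c)
        if fc == 0 or abs(fc) < tol:
            break
        if fb * fc > 0:          # root lies in [a, c]
            b, fb = c, fc
        else:                    # root lies in [c, b]
            a, fa = c, fc
    return c

root = bisection(f, 0, 2)
print(root)  # 1.0 — the first midpoint (0+2)/2 is already the exact root
```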
Lab: 02
Introduction:
In numerical analysis, Newton's method, also known as the Newton–Raphson method, named
after Isaac Newton and Joseph Raphson, is a root-finding algorithm which produces successively
better approximations to the roots (or zeroes) of a real-valued function. The most basic version
starts with a single-variable function f defined for a real variable x, the function's derivative f ′,
and an initial guess x0 for a root of f. If the function satisfies sufficient assumptions and the
initial guess is close, then
x1 = x0 - f(x0)/f'(x0)
is a better approximation of the root than x0. Geometrically, (x1, 0) is the intersection of the x-
axis and the tangent of the graph of f at (x0, f (x0)): that is, the improved guess is the unique root
of the linear approximation at the initial point. The process is repeated as
xn+1 = xn - f(xn)/f'(xn)
until a sufficiently precise value is reached. This algorithm is first in the class of Householder's
methods, succeeded by Halley's method. The method can also be extended to complex
functions and to systems of equations.
Solution in MATLAB:
ans =
1.0000 2.0000
2.0000 1.6429
3.0000 1.5624
4.0000 1.5585
5.0000 1.5585
Other example:
SOLUTION:
ans =
1.0000 2.0000
2.0000 1.5502
3.0000 1.2841
4.0000 1.1243
5.0000 1.0276
6.0000 0.9688
7.0000 0.9329
8.0000 0.9111
9.0000 0.8977
10.0000 0.8896
11.0000 0.8846
12.0000 0.8815
13.0000 0.8797
14.0000 0.8785
15.0000 0.8778
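The second run above creeps toward ≈ 0.878, and the roughly constant ratio between successive steps indicates only linear convergence (typical when the root is a multiple root or f' is nearly flat there). Since the MATLAB listing itself is not reproduced here, a rough Python sketch of the Newton iteration, using f(x) = x² - 2 as a stand-in (our choice, not the lab's function):

```python
# Newton's method sketch: x_{n+1} = x_n - f(x_n)/f'(x_n).
# f(x) = x^2 - 2 is a stand-in example; the iteration converges to sqrt(2).
def newton(f, fprime, x0, tol=1e-10, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x = x - fx / fprime(x)   # Newton update
    return x

root = newton(lambda x: x*x - 2, lambda x: 2*x, x0=2.0)
print(root)  # ≈ 1.41421356 (sqrt(2))
```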
LAB: 03
INTRODUCTION:
In mathematics, the regula falsi, method of false position, or false position method is a very old
method for solving an equation with one unknown, that, in modified form, is still in use. In
simple terms, the method is the trial and error technique of using test ("false") values for the
variable and then adjusting the test value according to the outcome. This is sometimes also
referred to as "guess and check". Versions of the method predate the advent of algebra and the
use of equations.
The method of false position provides an exact solution for linear functions, but more direct
algebraic techniques have supplanted its use for these functions. However, in numerical analysis,
double false position became a root-finding algorithm used in iterative numerical approximation
techniques.
Many equations, including most of the more complicated ones, can be solved only by iterative
numerical approximation. This consists of trial and error, in which various values of the
unknown quantity are tried. That trial-and-error may be guided by calculating, at each step of the
procedure, a new estimate for the solution. There are many ways to arrive at a calculated estimate, and regula falsi provides one of these.
SOLUTION:
untitled10
Wrong choice
no of iteration(k), root sequence(x_k), function values fc
ans =
ANOTHER EXAMPLE:
% Solution of nonlinear equations
% Regula-Falsi
clear all,close all
f = @(x) x.*sin(x)-1; % f(x) = x sin(x) - 1
a=0; b=2;
N=20;% maximum number of iterations
tol=10^(-5);
fa=feval(f,a); % function values at the interval endpoints
fb=feval(f,b);
if fa*fb>0
disp('Wrong choice') % f(a) and f(b) must have opposite signs
end
for k=1:N
c=b-(b-a)*fb/(fb-fa); % secant-line intersection with the x-axis
C(k)=c; % store the iterate sequence
fc=feval(f,c);
Fc(k)=fc;
if fc==0,break
end
if fb*fc>0 % root lies in [a,c]
b=c;
fb=fc;
else % root lies in [c,b]
a=c;
fa=fc;
end
if abs(fc)<tol,break
end
end
SOLUTION:
untitled11
no of iteration(k), root sequence(x_k), function values fc
ans =
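For the example above (f(x) = x·sin(x) - 1 on [0, 2]), the same regula falsi loop can be sketched in Python; it stops once |f(c)| < 10^-5, near x ≈ 1.114:

```python
import math

# Regula falsi sketch for the lab's example f(x) = x*sin(x) - 1 on [0, 2].
f = lambda x: x * math.sin(x) - 1

def regula_falsi(f, a, b, tol=1e-5, max_iter=20):
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("Wrong choice: f(a) and f(b) must differ in sign")
    c = a
    for k in range(max_iter):
        # intersection of the secant through (a, fa), (b, fb) with the x-axis
        c = b - (b - a) * fb / (fb - fa)
        fc = f(c)
        if fc == 0 or abs(fc) < tol:
            break
        if fb * fc > 0:          # root lies in [a, c]
            b, fb = c, fc
        else:                    # root lies in [c, b]
            a, fa = c, fc
    return c

root = regula_falsi(f, 0, 2)
print(root)  # ≈ 1.114, where x*sin(x) = 1
```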
LAB:04
INTRODUCTION:
In numerical analysis, the secant method is a root-finding algorithm that uses a succession
of roots of secant lines to better approximate a root of a function f. The secant method can be
thought of as a finite-difference approximation of Newton's method.
The secant method is very similar to the bisection method, except that instead of splitting each
interval at its midpoint, it splits the interval at the point where the secant line through the
endpoints crosses the x-axis.
SOLUTION:
untitled12
no of iteration(k), root sequence(x_k)
ans =
ANOTHER EXAMPLE:
% Solution of nonlinear equations
% Secant method
SOLUTION:
untitled13
no of iteration(k), root sequence(x_k)
ans =
1.0000 0.5000
2.0000 0.8000
3.0000 0.6495
4.0000 0.6516
5.0000 0.6516
6.0000 0.6516
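The secant listing itself is not reproduced above, so here is a rough Python sketch of the iteration; f(x) = cos(x) - x is a stand-in example (our choice, not the lab's), with root ≈ 0.7391:

```python
import math

# Secant method sketch:
# x_{n+1} = x_n - f(x_n) * (x_n - x_{n-1}) / (f(x_n) - f(x_{n-1})).
# f(x) = cos(x) - x is a stand-in example with root ≈ 0.7390851.
def secant(f, x0, x1, tol=1e-10, max_iter=50):
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if abs(f1) < tol or f1 == f0:   # converged, or secant slope is zero
            break
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
        f0, f1 = f1, f(x1)
    return x1

root = secant(lambda x: math.cos(x) - x, 0.0, 1.0)
print(root)  # ≈ 0.7390851 (fixed point of cos)
```

Unlike bisection or regula falsi, the two starting points need not bracket the root, and no sign test is performed.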
LAB:05
INTRODUCTION:
Gaussian elimination, also known as row reduction, is an algorithm in linear algebra for solving
a system of linear equations. It is usually understood as a sequence of operations performed on
the corresponding matrix of coefficients. This method can also be used to find the rank of a
matrix, to calculate the determinant of a matrix, and to calculate the inverse of an invertible
square matrix.
To perform row reduction on a matrix, one uses a sequence of elementary row operations to
modify the matrix until the lower left-hand corner of the matrix is filled with zeros, as much as
possible. There are three types of elementary row operations:
i) swapping two rows,
ii) multiplying a row by a nonzero scalar,
iii) adding a multiple of one row to another row.
Using these operations, a matrix can always be transformed into an upper triangular matrix,
SOLUTION:
A=
4 1 2
2 4 -1
1 1 -3
B=
9
-5
-9
ANOTHER EXAMPLE:
clear all, close all, clc
% Gauss Elimination method
A=[1 2; 2 7; 1 1] % coefficient matrix (must be square N*N; this A is 3*2, so the elimination below does not solve this system correctly)
B=[9 6;-5 1;-9 9] % right-hand side matrix
[N N]= size(A);
X=zeros(N,1);
C=zeros(1, N+1);
Aug= [A B]; % augmented matrix
for p=1:N-1
[Y,j]=max(abs(Aug(p:N,p))); %partial pivoting for column p
% interchange row p and j
C=Aug(p,:);
Aug(p,:)=Aug(j+p-1,:);
Aug(j+p-1,:)=C;
if Aug(p,p)==0, break
end
% elimination of column p
for k=p+1:N
m=Aug(k,p)/Aug(p,p);
Aug(k,p:N+1)=Aug(k,p:N+1)-m*Aug(p,p:N+1);
end
end
% Back substitution
X(N)=Aug(N,N+1)/Aug(N,N);
for k=N-1:-1:1
X(k)=(Aug(k,N+1)-Aug(k,k+1:N)*X(k+1:N))/Aug(k,k);
end
SOLUTION:
A=
1 2
2 7
1 1
B=
9 6
-5 1
-9 9
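The first example's square system (the A and B shown under the first SOLUTION above) can be solved by the same algorithm; a rough Python sketch of Gaussian elimination with partial pivoting and back substitution:

```python
# Gaussian elimination with partial pivoting, applied to the lab's first system:
# A = [[4,1,2],[2,4,-1],[1,1,-3]], B = [9,-5,-9].
def gauss_solve(A, b):
    n = len(A)
    # build the augmented matrix [A | b]
    aug = [row[:] + [bi] for row, bi in zip(A, b)]
    for p in range(n - 1):
        # partial pivoting: bring the largest |entry| in column p to row p
        j = max(range(p, n), key=lambda r: abs(aug[r][p]))
        aug[p], aug[j] = aug[j], aug[p]
        if aug[p][p] == 0:
            raise ValueError("matrix is singular")
        # eliminate column p below the pivot
        for k in range(p + 1, n):
            m = aug[k][p] / aug[p][p]
            for col in range(p, n + 1):
                aug[k][col] -= m * aug[p][col]
    # back substitution on the upper triangular system
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        s = sum(aug[k][c] * x[c] for c in range(k + 1, n))
        x[k] = (aug[k][n] - s) / aug[k][k]
    return x

x = gauss_solve([[4, 1, 2], [2, 4, -1], [1, 1, -3]], [9, -5, -9])
print(x)  # ≈ [1.0, -1.0, 3.0]
```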
LAB:06
INTRODUCTION:
In mathematics, and more specifically in numerical analysis, the trapezoidal rule (also known as
the trapezoid rule or trapezium rule—see Trapezoid for more information on terminology) is a
technique for approximating the definite integral.
∫_a^b f(x) dx
The trapezoidal rule works by approximating the region under the graph of the function f(x) as
a trapezoid and calculating its area. It follows that
∫_a^b f(x) dx ≈ (b − a) · (f(a) + f(b))/2
EXAMPLE ON THIS METHOD:
% Composite Trapezoidal rule
a=0;b=3;n=6;
f = @(x) x.^2/1 + x.^3; % note precedence: this evaluates to x^2 + x^3, not x^2/(1+x^3)
h=(b-a)/n;
s=0;
for k=1:n-1
x=a+h*k;
s=s+feval(f,x);
end
s=h*(feval(f,a)+feval(f,b))/2+h*s;
s
SOLUTION:
s=
29.9375
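The run above can be reproduced with a short Python sketch of the composite trapezoidal rule; with f(x) = x² + x³ (what the anonymous function above evaluates to), a = 0, b = 3, n = 6, it gives the same s = 29.9375:

```python
# Composite trapezoidal rule: h*(f(a)+f(b))/2 + h * sum of interior values.
def trapezoid(f, a, b, n):
    h = (b - a) / n
    s = sum(f(a + h * k) for k in range(1, n))   # interior sample points
    return h * (f(a) + f(b)) / 2 + h * s

s = trapezoid(lambda x: x**2 + x**3, 0, 3, 6)
print(s)  # 29.9375, matching the MATLAB run above
```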
ANOTHER EXAMPLE:
% Composite Trapezoidal rule
a=3;b=7;n=6;
f = @(x) 1 - x.^3/4;
h=(b-a)/n;
s=0;
for k=1:n-1
x=a+h*k;
s=s+feval(f,x);
end
s=h*(feval(f,a)+feval(f,b))/2+h*s;
s
SOLUTION:
s=
-142.1111
LAB:07
INTRODUCTION:
In numerical integration, Simpson's rules are
several approximations for definite integrals, named after Thomas Simpson (1710–1761).
The most basic of these rules, called Simpson's 1/3 rule, or just Simpson's rule, reads
∫_a^b f(x) dx ≈ ((b − a)/6) · [f(a) + 4f((a + b)/2) + f(b)]
If the 1/3 rule is applied to n equal subdivisions of the integration range [a,b],
one obtains the composite Simpson's rule. Points inside the integration range are given
alternating weights 4/3 and 2/3.
Simpson's 3/8 rule, also called Simpson's second rule requests one more function evaluation
inside the integration range, and is exact if f is a polynomial up to cubic degree.
SOLUTION:
untitled7
s=
374.4000
ANOTHER EXAMPLE:
a=0; b=90; n=6;
f =@(x) sin(x)^0.5; % sqrt(sin x); sin(x) < 0 on parts of [0,90] (radians), so the result below is complex
h=(b-a)/n; % note: for the loops below, h should be (b-a)/(2*n) so the sample points stay inside [a,b]
s1=0;
s2=0;
for k=1:n
x=a+h*(2*k-1);
s1=s1+feval(f,x);
end
for k=1:n-1
x=a+h*2*k;
s2=s2+feval(f,x);
end
s=h/3*(feval(f,a)+feval(f,b)+4*s1+2*s2);
s
SOLUTION:
s=
82.3028 +56.0736i
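The complex result above comes from taking the square root of negative sin values on [0, 90] (radians). For reference, here is a rough Python sketch of the composite Simpson's 1/3 rule with the conventional step h = (b − a)/(2n), applied to the stand-in integral ∫ sin(x) dx over [0, π], whose exact value is 2 (our example, not the lab's):

```python
import math

# Composite Simpson's 1/3 rule with 2n subintervals of width h = (b-a)/(2n):
# s = h/3 * (f(a) + f(b) + 4*sum(odd points) + 2*sum(even interior points)).
def simpson(f, a, b, n):
    h = (b - a) / (2 * n)
    s1 = sum(f(a + h * (2 * k - 1)) for k in range(1, n + 1))  # odd-indexed points
    s2 = sum(f(a + h * 2 * k) for k in range(1, n))            # even interior points
    return h / 3 * (f(a) + f(b) + 4 * s1 + 2 * s2)

s = simpson(math.sin, 0, math.pi, 10)
print(s)  # ≈ 2, with an error on the order of 1e-5
```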