
Vani

12825502722

PROGRAM -01
AIM:- Exercises to implement the basic matrix operations in Scilab.

THEORY:-
Matrices
A matrix is a rectangular array of numbers arranged in rows and columns. Matrices are widely used in
various fields such as mathematics, physics, engineering, and computer science. The size of a matrix
is represented as m × n, where m is the number of rows and n is the number of columns.
Types of Matrices:
1. Square Matrix: A matrix where the number of rows is equal to the number of columns.
2. Diagonal Matrix: A square matrix in which all off-diagonal elements are zero.
3. Identity Matrix: A diagonal matrix with all diagonal elements equal to 1.
4. Zero Matrix: A matrix with all elements equal to zero.
5. Transpose of a Matrix: Obtained by swapping rows and columns.
6. Inverse of a Matrix: If A is a square matrix, its inverse A⁻¹ satisfies A * A⁻¹ = I.
7. Symmetric Matrix: A matrix that is equal to its transpose (A = Aᵀ).
8. Skew-Symmetric Matrix: A matrix where Aᵀ = -A.
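Several of these special matrices can be produced directly with Scilab built-ins; a minimal sketch (the checks on the last two lines simply compare a matrix with its transpose):

// Identity, zero, and diagonal matrices via built-ins
I = eye(3, 3); // 3x3 identity matrix
Z = zeros(2, 3); // 2x3 zero matrix
D = diag([1 2 3]); // diagonal matrix with 1, 2, 3 on the diagonal
// Symmetry checks: S is symmetric, K is skew-symmetric
S = [1 2; 2 3];
K = [0 5; -5 0];
disp(and(S == S')); // %T -> symmetric (S equals its transpose)
disp(and(K == -K')); // %T -> skew-symmetric (K equals minus its transpose)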
Basic Matrix Operations:
• Addition: Matrices of the same order can be added by adding corresponding elements.
• Subtraction: Similar to addition but with subtraction of corresponding elements.
• Multiplication: Two matrices can be multiplied if the number of columns of the first matrix matches the number of rows of the second.
• Transpose: Rows and columns are interchanged.
• Determinant: A scalar value that can be computed only for square matrices.
• Inverse: Exists for a square matrix only when its determinant is non-zero.
Introduction to Scilab
Scilab is open-source software for numerical computation, similar to MATLAB. It is widely used
in scientific computing, engineering, and data analysis. It supports matrix operations, visualization,
and programming constructs like loops and conditionals.
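As a small illustrative taste of those programming constructs, a minimal sketch of a loop and a conditional in Scilab:

// A small taste of Scilab control flow: loop over 1..3 and test parity
for k = 1:3
    if modulo(k, 2) == 0 then
        mprintf("%d is even\n", k);
    else
        mprintf("%d is odd\n", k);
    end
end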

EXAMPLE:-

1. Creating a Matrix
A = [1 2 3; 4 5 6; 7 8 10] // note: [1 2 3; 4 5 6; 7 8 9] is singular, so the last entry is 10 to keep A invertible for step 6

2. Matrix Addition and Subtraction



B = [9 8 7; 6 5 4; 3 2 1];
C = A + B;
D = A - B;

3. Matrix Multiplication
E = A * B;

4. Transpose of a Matrix
F = A';

5. Determinant of a Matrix
detA = det(A);

6. Inverse of a Matrix
invA = inv(A);

OUTPUT

PROGRAM -02

AIM:- Exercises to find the eigenvalues and eigenvectors of a matrix in Scilab.

THEORY:-
Eigenvalues and eigenvectors play a crucial role in linear algebra, particularly in solving systems of
equations, stability analysis, and transformations. In Scilab, a powerful open-source computational
software, we can efficiently compute eigenvalues and eigenvectors of a matrix using built-in
functions.
Definition
Given a square matrix A of order n × n, an eigenvalue λ and its corresponding eigenvector v satisfy the equation:
A v = λ v
where v is a nonzero vector.
Eigenvalue and Eigenvector Computation in Scilab
Scilab provides the spec() function: called with one output it returns the eigenvalues, and called with two outputs it returns the eigenvectors as well.
• Syntax:
  o lambda = spec(A) → computes the eigenvalues of matrix A.
  o [V, D] = spec(A) → returns the right eigenvectors in the columns of V and a diagonal matrix D holding the eigenvalues.

Scilab simplifies eigenvalue and eigenvector calculations, making it a useful tool for mathematical
and engineering applications. Understanding these concepts is essential for solving various problems
in linear algebra, physics, and machine learning.

CODE:-

// Define a 3x3 matrix
A = [6 2 1; 2 3 1; 1 1 1];

// Find eigenvalues and eigenvectors:
// V holds the eigenvectors as columns, D is a diagonal matrix of eigenvalues
[V, D] = spec(A);

// Display the results
disp("Eigenvalues:");
disp(diag(D));

disp("Eigenvectors:");
disp(V);
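A quick sanity check is to verify the defining relation A·v = λ·v for each computed eigenpair; a minimal sketch reusing A, V, and D from the code above:

// Verify A*v = lambda*v for every eigenpair (each norm should be near zero)
lambda = diag(D);
for k = 1:size(A, 1)
    residual = A * V(:, k) - lambda(k) * V(:, k);
    disp(norm(residual));
end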

OUTPUT:-

PROGRAM -03

AIM:- Exercises to solve a system of linear equations using the Gauss Elimination, Gauss-Jordan, and Gauss-Seidel methods in Scilab.

THEORY: The Gauss Elimination method is a direct method for solving a system of linear
equations. It involves transforming the system's augmented matrix into an upper triangular form and
then performing back-substitution to find the unknowns.
The Gauss-Jordan method is an extension of Gauss Elimination. It reduces the augmented matrix to
row echelon form and then to reduced row echelon form by making all elements above and below the
main diagonal zero.
The Gauss-Seidel method is an iterative technique for solving linear systems. It improves the
approximation of the solution by using previously computed values in subsequent iterations.

CODE:-
1. Gauss Elimination Method
// Define the augmented matrix [A | b]
A = [2, 3, 1, 1;
     4, 1, 2, 2;
     3, 2, 3, 3];
// Forward elimination: reduce to upper triangular form
// (no partial pivoting; assumes the pivots A(i,i) stay nonzero)
[n, m] = size(A);
for i = 1:n-1
    for j = i+1:n
        factor = A(j, i) / A(i, i);
        A(j, :) = A(j, :) - factor * A(i, :);
    end
end
// Back substitution
x = zeros(n, 1);
x(n) = A(n, m) / A(n, n);
for i = n-1:-1:1
    x(i) = (A(i, m) - A(i, i+1:n) * x(i+1:n)) / A(i, i);
end
// Display the result
disp(x, "Solution using Gauss Elimination:");

OUTPUT:-

2. Gauss-Jordan Method
// Define the augmented matrix
A = [2, 3, 1, 1; 4, 1, 2, 2; 3, 2, 3, 3];
// Perform Gauss-Jordan elimination
[n, m] = size(A);
for i = 1:n
    // Make the diagonal element 1 (assumes a nonzero pivot)
    A(i, :) = A(i, :) / A(i, i);
    // Make the other elements in the column 0
    for j = 1:n
        if i <> j then // Scilab uses <> (or ~=), not !=
            A(j, :) = A(j, :) - A(j, i) * A(i, :);
        end
    end
end
// The solution is in the last column
x = A(:, m);
// Display the result
disp(x, "Solution using Gauss-Jordan:");

OUTPUT:-

3. Gauss-Seidel Method
// Define the coefficient matrix and the constants vector
A = [2, 3, 1;
     4, 1, 2;
     3, 2, 3];
b = [1; 2; 3];
// Initial guess
x = zeros(3, 1);
n = size(A, 1);
tolerance = 1e-6;
max_iterations = 100;
iterations = 0;
// Gauss-Seidel iteration. Scilab has no do...until loop, so a for loop
// with an early break is used. Note: convergence is only guaranteed for
// diagonally dominant (or symmetric positive definite) matrices, so the
// iteration may fail to converge for this particular A.
for k = 1:max_iterations
    x_old = x; // store the previous iterate
    for i = 1:n
        sum1 = 0; sum2 = 0;
        if i > 1 then
            sum1 = A(i, 1:i-1) * x(1:i-1); // already-updated values
        end
        if i < n then
            sum2 = A(i, i+1:n) * x_old(i+1:n); // values from the previous sweep
        end
        x(i) = (b(i) - sum1 - sum2) / A(i, i);
    end
    iterations = iterations + 1;
    if norm(x - x_old) < tolerance then
        break;
    end
end

// Display the result
disp(x, "Solution using Gauss-Seidel:");

OUTPUT:-

PROGRAM -04
AIM:- Exercises to implement the associative, commutative, and distributive properties of matrices in Scilab.
THEORY:-
In mathematics, the associative, commutative, and distributive properties are fundamental properties
of operations. When working with matrices, these properties can be applied to matrix addition and
multiplication.
1. Associative Property:
• For addition: (A + B) + C = A + (B + C)
• For multiplication: (A · B) · C = A · (B · C)
2. Commutative Property:
• For addition: A + B = B + A
• For multiplication: A · B = B · A (Note: this property does not hold for matrix multiplication in general.)
3. Distributive Property:
• Multiplication distributes over addition: A · (B + C) = A · B + A · C

CODE:-
// Define three matrices
A = [1 2; 3 4];
B = [5 6; 7 8];
C = [9 10; 11 12];
// Associative Property
LHS_add = (A + B) + C; // (A + B) + C
RHS_add = A + (B + C); // A + (B + C)
LHS_mul = (A * B) * C; // (A * B) * C
RHS_mul = A * (B * C); // A * (B * C)
// Commutative Property
comm_add = A + B - (B + A); // Should be zero matrix
// Multiplication is NOT generally commutative
// Distributive Property
LHS_dis = A * (B + C);
RHS_dis = (A * B) + (A * C);
// Display results

disp("Associative Property (Addition):"), disp(LHS_add == RHS_add)


disp("Associative Property (Multiplication):"), disp(LHS_mul == RHS_mul)
disp("Commutative Property (Addition):"), disp(comm_add == zeros(2,2))
disp("Distributive Property:"), disp(LHS_dis == RHS_dis)

OUTPUT:-

PROGRAM -05

AIM:- Exercises to compute the Reduced Row Echelon Form (RREF) of a matrix in Scilab.
THEORY:-
The Reduced Row Echelon Form (RREF) of a matrix is a special form obtained using Gaussian
elimination followed by Gauss-Jordan elimination. A matrix is in RREF if it satisfies the following
conditions:
1. Each leading entry (pivot) in a row is 1.
2. Each pivot is the only nonzero entry in its column.
3. The pivot in each row appears to the right of the pivot in the row above.
4. Any rows consisting of only zeros are at the bottom of the matrix.
The row operations used to transform a matrix into RREF are:
• Swapping two rows
• Multiplying a row by a nonzero scalar
• Adding or subtracting a multiple of one row from another
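Each of these row operations is a one-line assignment in Scilab; a minimal illustrative sketch (the matrix M and the scalars are arbitrary choices):

// The three elementary row operations on a sample 2x2 matrix
M = [2 4; 1 3];
M([1 2], :) = M([2 1], :);       // swap rows 1 and 2
M(1, :) = 3 * M(1, :);           // multiply row 1 by the nonzero scalar 3
M(2, :) = M(2, :) - 2 * M(1, :); // subtract 2 times row 1 from row 2
disp(M);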

CODE:-

// Define a matrix
A = [2 4 -2; 1 3 1; 3 7 3];

// Compute the Reduced Row Echelon Form
R = rref(A);

// Display the result
disp("Reduced Row Echelon Form of A:");
disp(R);
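rref is also a convenient way to solve a linear system: applying it to the augmented matrix [A | b] leaves the solution in the last column when the system has a unique solution. A minimal sketch reusing the system from Program 03:

// Solve a system by row-reducing the augmented matrix [A | b]
Ab = [2, 3, 1, 1; 4, 1, 2, 2; 3, 2, 3, 3];
Rb = rref(Ab);
disp(Rb(:, $), "Solution read from the last column of rref([A b]):");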

OUTPUT:-

PROGRAM:-06

AIM:- Exercises to plot the functions and to find its first and second derivatives in Scilab.
THEORY:
1. Derivatives
• The first derivative f′(x) of a function f(x) represents its instantaneous rate of change (the slope of the tangent line).
• The second derivative f″(x) measures the curvature (how the slope itself changes).
2. Numerical Differentiation
Since Scilab computes derivatives numerically (not symbolically), we use finite difference approximations:
• First derivative (central difference):
f′(x) ≈ (f(x + h) − f(x − h)) / (2h)
• Second derivative:
f″(x) ≈ (f(x + h) − 2f(x) + f(x − h)) / h²
where h is a small step size (e.g., 0.001).
CODE:-
// Define the function
function y = f(x)
    y = x.^3 - 2*x.^2 + sin(x);
endfunction
// First derivative (central difference)
function y = derivative(f, x, h)
    y = (f(x + h) - f(x - h)) / (2 * h);
endfunction
// Second derivative (central difference)
function y = second_derivative(f, x, h)
    y = (f(x + h) - 2*f(x) + f(x - h)) / (h^2);
endfunction
// Generate x values
x = linspace(-2, 3, 500);
h = 0.001;
// Compute derivatives
df = derivative(f, x, h);
d2f = second_derivative(f, x, h);
// Plot all
clf(); // Clear previous plots
plot(x, f(x), 'b-', 'LineWidth', 2);
plot(x, df, 'r--', 'LineWidth', 2);
plot(x, d2f, 'g-.', 'LineWidth', 2);
xlabel("x");
ylabel("y");
title("Function and its Derivatives");
// Quotes are doubled inside Scilab strings, so "f''(x)" displays as f'(x)
legend(["f(x)", "f''(x)", "f''''(x)"]);

OUTPUT

PROGRAM:-07
AIM: Exercises to present the data as a frequency table in Scilab.

THEORY:-
Frequency tables are fundamental tools in statistical analysis that display how often different
values or categories occur within a dataset. They serve several important purposes:
1. Data Organization: Frequency tables systematically organize raw data into
meaningful categories
2. Pattern Identification: They help identify patterns, outliers, and the distribution of
values
3. Descriptive Statistics: Provide basic counts and percentages that summarize the data
4. Data Quality Check: Allow researchers to verify data entry and identify potential
errors
Types of Frequency Tables
1. Simple Frequency Table: Shows counts for each unique value
2. Grouped Frequency Table: Groups continuous data into intervals (bins)
3. Cumulative Frequency Table: Shows running totals of frequencies
4. Relative Frequency Table: Displays proportions or percentages rather than counts

CODE:-

Exercise 1: Fixed Numeric Data

data = [15, 16, 18, 19, 21, 22, 24, 25, 28, 30];
class_labels = ["10-19", "20-29", "30-39"];
lower_bounds = [10, 20, 30];
upper_bounds = [19, 29, 39];
frequencies = zeros(1, length(class_labels));
// Count how many data points fall into each class interval
for i = 1:length(class_labels)
    mask = (data >= lower_bounds(i)) & (data <= upper_bounds(i));
    frequencies(i) = sum(mask);
end
disp("Age Range Frequency");
for i = 1:length(class_labels)
    mprintf("%-10s %d\n", class_labels(i), frequencies(i));
end

OUTPUT:

Exercise 2: Discrete Numeric Data

data = [55, 60, 60, 70, 75, 75, 75, 80, 90, 90];
unique_vals = unique(data);
frequencies = zeros(1, length(unique_vals));
// Count the occurrences of each distinct value
for i = 1:length(unique_vals)
    frequencies(i) = sum(data == unique_vals(i));
end
disp("Score Frequency");
for i = 1:length(unique_vals)
    mprintf("%5d %9d\n", unique_vals(i), frequencies(i));
end

OUTPUT:

Exercise 3: Grouped Data

clc; clear;
data = [15, 16, 18, 19, 21, 22, 24, 25, 28, 30]; // Replace with your numbers
ranges = ["10-19", "20-29", "30-39"]; // Bin labels
bin_edges = [10, 20, 30, 40]; // Bin boundaries
// Count the values in each half-open bin [edge_i, edge_(i+1))
// (an explicit loop is used for clarity and portability)
counts = zeros(1, length(ranges));
for i = 1:length(ranges)
    counts(i) = sum(data >= bin_edges(i) & data < bin_edges(i+1));
end
disp(" ")
disp("FREQUENCY DISTRIBUTION")
disp("-----------------------")
disp("Range Count")
disp("-----------------------")
for i = 1:length(ranges)
    printf("%-8s %2d\n", ranges(i), counts(i)) // Shows each range and count
end
disp("-----------------------")
printf("Total %2d\n", sum(counts)) // Shows total count
disp(" ")
bar(counts, 'blue') // Creates blue bars
title("Age Distribution") // Chart title
xlabel("Age Groups") // X-axis label
ylabel("Count") // Y-axis label
// Label each bar: Scilab sets tick labels through the axes x_ticks property
ax = gca();
ax.x_ticks = tlist(["ticks", "locations", "labels"], (1:length(ranges))', ranges');

OUTPUT:

PROGRAM:-08
AIM: Exercises to find the outliers in a dataset in Scilab.
THEORY:

1. What is an Outlier?
An outlier is a data point that significantly differs from other observations in a dataset. Outliers can occur due to:
• Measurement errors (instrument malfunctions, data entry mistakes)
• Natural variability (rare but valid extreme values)
• Data processing issues (incorrect merging or filtering)
2. Importance of Outlier Detection
• Improves model accuracy (outliers can skew statistical analyses)
• Enhances data quality (helps identify errors)
• Supports robust decision-making (avoids misleading conclusions)

3. Common Methods for Outlier Detection

(A) Z-Score Method (For Normally Distributed Data)
• Concept: measures how many standard deviations a point is from the mean.
• Formula:
Z = (X − μ) / σ
where:
  o X = data point
  o μ = mean
  o σ = standard deviation
• Threshold: typically, |Z| > 2.5 or 3 indicates an outlier.
• Best for: normally distributed (Gaussian) data.

CODE:
data = [12, 15, 18, 22, 24, 25, 27, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 150];
// Example with an outlier (150)
mean_val = mean(data);
std_dev = stdev(data);
z_scores = (data - mean_val) / std_dev;
threshold = 2.5;
outliers = data(abs(z_scores) > threshold);
// Display results
disp("Z-Score Outlier Detection:");

disp("--------------------------");
disp("Mean: " + string(mean_val));
disp("Std Dev: " + string(std_dev));
disp("Outliers (Z > " + string(threshold) + "): " + string(outliers));

OUTPUT:

PROGRAM:-09
AIM: Exercises to find the riskier of two mutually exclusive projects in Scilab.
THEORY:
1. Risk Assessment for Mutually Exclusive Projects

When comparing two mutually exclusive projects, we need quantitative measures to evaluate their relative riskiness. Key concepts include:
• Mutually Exclusive Projects: selecting one project automatically excludes the other.
• Risk Measurement: the volatility of returns determines project riskiness.
• Decision Criteria: higher volatility → higher risk.

CODE:
projectA = [1.5, 0.8, 1.2, -0.5, 2.1, 1.8, 0.3, 1.1, 2.4, -1.2, 1.9, 2.6];
projectB = [2.3, -1.2, 4.5, 3.2, -2.1, 5.6, 1.2, -3.4, 6.7, 2.3, -4.5, 7.8];

function [sigma, cv, max_drawdown, prob_loss] = risk_metrics(returns)
    sigma = stdev(returns); // standard deviation of returns
    cv = sigma / mean(returns); // coefficient of variation
    // Maximum drawdown of the cumulative return series
    cumulative = cumsum(returns);
    peak = cumulative(1);
    max_drawdown = 0;
    for i = 2:length(cumulative)
        if cumulative(i) > peak then
            peak = cumulative(i);
        end
        drawdown = peak - cumulative(i);
        if drawdown > max_drawdown then
            max_drawdown = drawdown;
        end
    end
    // Probability of a losing month
    prob_loss = sum(returns < 0) / length(returns);
endfunction

[sigmaA, cvA, drawdownA, lossA] = risk_metrics(projectA);
[sigmaB, cvB, drawdownB, lossB] = risk_metrics(projectB);

printf("\n\nRISK COMPARISON RESULTS");


printf("\n=================================");
printf("\nMetric Project A Project B");
printf("\n---------------------------------");
printf("\nStd Dev (σ) %6.2f %6.2f", sigmaA, sigmaB);
printf("\nCoeff of Variation %6.2f %6.2f", cvA, cvB);
printf("\nMax Drawdown %6.2f %6.2f", drawdownA, drawdownB);
printf("\nProb of Loss %6.2f%% %6.2f%%", lossA*100, lossB*100);
printf("\n=================================\n");
scf(0); clf(0); // Create and clear figure
subplot(2,1,1);
plot(cumsum(projectA), 'b-', 'LineWidth', 2);
plot(cumsum(projectB), 'r--', 'LineWidth', 2);
title("Cumulative Returns Comparison", "fontsize", 3);
xlabel("Month", "fontsize", 2);
ylabel("Cumulative Return (%)", "fontsize", 2);
legend(["Project A", "Project B"], 2);
set(gca(), "grid", [1 1]);
subplot(2,1,2);
histplot(20, projectA, normalization=%t, style=2);
histplot(20, projectB, normalization=%t, style=5);
title("Return Distribution", "fontsize", 3);
xlabel("Monthly Return (%)", "fontsize", 2);
ylabel("Probability Density", "fontsize", 2);
legend(["Project A", "Project B"], 1);
set(gca(), "grid", [1 1]);

OUTPUT:-

PROGRAM:-10
AIM: Exercises to draw a scatter diagram and residual plots, and to identify outliers, leverage points, and influential data points in Scilab.
THEORY:
A scatter diagram plots the paired observations (x, y) and is the first step in examining a linear relationship. After fitting a regression line, the residuals (observed minus fitted values) should show no systematic pattern when plotted against the fitted values. Three related diagnostics flag unusual observations:
• Outliers: points with unusually large residuals (e.g., beyond 2 standard deviations of the residuals).
• Leverage: measured by the diagonal entries hᵢᵢ of the hat matrix H = X(XᵀX)⁻¹Xᵀ; a common cutoff is hᵢᵢ > 2p/n, where p is the number of parameters and n the number of observations.
• Influence: measured by Cook's distance Dᵢ = (rᵢ² / (p·s²)) · (hᵢᵢ / (1 − hᵢᵢ)²), with s² the residual variance; points with Dᵢ > 4/n are commonly flagged as influential.

CODE:
// Data with a deliberate outlier at observation 11 (x = 15, y = 5)
x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 15]';
y = [2.1, 3.8, 5.2, 7.9, 10.1, 12.5, 13.7, 16.2, 18.1, 20.3, 5.0]';

// 1. Scatter diagram
scf(0);
clf();
plot(x, y, 'bo');
xtitle("Scatter Diagram", "X Variable", "Y Variable");
xgrid(color("gray"), 1, 7);
// Fit a least-squares line: reglin returns slope a and intercept b for y = a*x + b
// (reglin expects row vectors, hence the transposes)
[a, b] = reglin(x', y');
y_pred = a*x + b;
// Plot regression line on scatter plot
plot(x, y_pred, 'r-');
legend(["Data points"; "Regression line"]);
// 2. Residual plots
residuals = y - y_pred;
// Residuals vs Fitted values
scf(1);
clf();
plot(y_pred, residuals, 'bo');
xtitle("Residuals vs Fitted Values", "Fitted Values", "Residuals");
xgrid(color("gray"), 1, 7);
plot([min(y_pred), max(y_pred)], [0, 0], 'k--'); // Reference line at 0 (plot, not replot: replot only resets the axis bounds)
// 3. Calculate leverage (hat values)
n = length(x);
p = 2; // number of parameters (intercept + slope)
X_matrix = [ones(n,1), x];
hat_matrix = X_matrix * inv(X_matrix'*X_matrix) * X_matrix';
h = diag(hat_matrix); // leverage values

// Plot leverage values


scf(2);
clf();
bar(h, 'blue');
xtitle("Leverage (Hat) Values", "Observation Index", "Leverage");
xgrid(color("gray"), 1, 7);
// Add cutoff line for high leverage (2p/n rule of thumb)
cutoff = 2*p/n;
plot([0, n+1], [cutoff, cutoff], 'r--');
legend(["Leverage"; "Cutoff for high leverage"]);
// 4. Identify influential points using Cook's distance
s_squared = sum(residuals.^2)/(n-p);
cooksd = (residuals.^2 ./ (p * s_squared)) .* (h ./ (1-h).^2);
// Plot Cook's distance
scf(3);
clf();
bar(cooksd, 'green');
xtitle("Cook's Distance for Influence", "Observation Index", "Cook's Distance");
xgrid(color("gray"), 1, 7);// Add cutoff line (common rule is 4/n)
cutoff_cooksd = 4/n;
replot([0, n+1], [cutoff_cooksd, cutoff_cooksd], 'r--');
legend(["Cook's Distance"; "Cutoff for influential points"]);
// 5. Display all metrics in a table
disp("Observation | Residual | Leverage | Cook's Distance");
disp([(1:n)', residuals, h, cooksd]);
// Identify potential outliers and influential points
high_residual = find(abs(residuals) > 2*stdev(residuals));
high_leverage = find(h > cutoff);
high_influence = find(cooksd > cutoff_cooksd);
disp("Potential outliers (high residuals):");
disp(high_residual);
disp("High leverage points:");

disp(high_leverage);
disp("Influential points (high Cook's distance):");
disp(high_influence);

OUTPUT:
