
LINEAR ALGEBRA

A PROJECT SUBMITTED IN PARTIAL FULFILLMENT OF THE


REQUIREMENTS FOR THE DEGREE OF BACHELOR OF SCIENCE
IN MATHEMATICS
By
RASH BIHARI NANDA
ROLL NO: 21SMTH003

UNDER THE GUIDANCE OF


SANDEEP KUMAR
P.G. DEPARTMENT OF MATHEMATICS,
GOVERNMENT COLLEGE (AUTONOMOUS), ANGUL
ANGUL – 759143, ODISHA, INDIA
GOVERNMENT COLLEGE (AUTONOMOUS), ANGUL

CERTIFICATE

This is to certify that the thesis entitled “LINEAR ALGEBRA” submitted by
RASH BIHARI NANDA for the award of the degree of Bachelor of Science
from Government College (Autonomous), Angul, is absolutely based
upon his own work under the supervision of SANDEEP ROUT. The
results embodied in this thesis are new, and neither this thesis nor any part
of it has been submitted for any degree, diploma, or any other academic
award anywhere before.

Date:                                                        Sandeep Rout

Professor
P.G. Department of Mathematics
Govt. College (Autonomous), Angul
ACKNOWLEDGEMENTS

I would like to thank the Department of Mathematics, Government
College (Autonomous), Angul, for making the resources for this
project work available to me during its preparation. I would like to
express my special thanks and gratitude to my supervisor,
SANDEEP ROUT, who helped me in completing this project; I
came to know about so many new things, and I am really thankful
to him. Secondly, I would like to thank our other faculty members,
Dr. Smita Tapaswani, Sandeep Rout, Dr. Prabhas Jena, and
Nibedita Mohapatra, who helped me a lot in finalizing this
project within the limited time frame. I must also thank my friends
for their support during these three years.
Finally, I must thank my parents, whose blessings enabled me to
do this project and whose encouragement was the most valuable
to me.

Date:                                   Name: Rash Bihari Nanda

Government College (Autonomous), Angul
Angul – 759122, Odisha, India
DECLARATION

I declare that the project entitled “LINEAR ALGEBRA”, prepared
and submitted by me in partial fulfillment of the requirements for
the B.Sc. & B.A. degree in Mathematics (Honours) as a part of the
curriculum, is original and has not been submitted previously to
this or any other institution for any other degree or diploma.

Name: Rash Bihari Nanda (21SMTH003)

Government College (Autonomous), Angul
Angul – 759122, Odisha, India
ABSTRACT

Linear algebra is a principal branch of mathematics. It is concerned with mathematical
structures closed under the operations of addition and scalar multiplication, and it includes the
theory of systems of linear equations, matrices, determinants, vector spaces, and linear
transformations. Linear algebra is thus a mathematical discipline that deals with vectors and
matrices and, more generally, with vector spaces and linear transformations. Unlike other parts
of mathematics that are frequently invigorated by new ideas and unsolved problems, linear
algebra is very well understood. Its value lies in its many applications, from mathematical
physics to modern algebra, and in its usage in the engineering and medical fields, such as image
processing and analysis.

This thesis is a detailed review and explanation of the linear algebra domain, in which the
mathematical concepts and structures concerned with linear algebra are discussed. The thesis's
main aim is to point out significant applications of linear algebra in the medical
engineering field. Hence, the eigenvectors and eigenvalues, which represent the core of linear
algebra, are discussed in detail in order to show how they can be used in many engineering
applications. Principal components analysis is one of the most important compression and
feature-extraction algorithms used in the engineering field. It depends mainly on the
calculation and extraction of eigenvalues and eigenvectors, which are then used to represent an
input, whether an image or a simple matrix. In this thesis, the use of principal components
analysis for the compression of medical images is discussed as an important and novel
application of linear algebra.

Keywords: linear algebra; addition; scalar multiplication; linear equations; matrices;
determinants; vector spaces; linear transformations; image processing; eigenvectors; eigenvalues;
principal components analysis; compression
TABLE OF CONTENTS

ACKNOWLEDGEMENTS
ABSTRACT
TABLE OF CONTENTS

CHAPTER 1: INTRODUCTION
1.1 Introduction
1.2 Aims of the Project

CHAPTER 2: LINEAR ALGEBRA BASICS
2.1 Introduction to Linear Algebra
2.2 Scalars
2.3 Vector Algebra
2.4 Summary

CHAPTER 3: LINEAR COMBINATIONS AND LINEAR INDEPENDENCE
3.1 Linear Combinations
3.1.1 A Basis for a Vector Space
3.2 Testing for Linear Dependence of Vectors

CHAPTER 4: CONCLUSION
4.1 Conclusion

REFERENCES
CHAPTER 1
INTRODUCTION

1.1 Introduction

Linear algebra is an important course for a diverse range of students for at least two reasons.
First, few subjects can claim such widespread applications in other areas of mathematics
(multivariable calculus, differential equations, and probability, for example) as well as in physics,
biology, chemistry, economics, finance, psychology, sociology, and all fields of engineering.
Second, the subject presents the student at the sophomore level with an excellent opportunity to
learn how to handle abstract concepts.

Linear algebra is one of the best-known mathematical disciplines because of its rich theoretical
foundations and its many useful applications to science and engineering. Solving systems of
linear equations and computing determinants are two examples of fundamental problems in
linear algebra that have been studied for a long time. Leibniz found the formula for
determinants in 1693, and in 1750 Cramer presented a method for solving systems of linear
equations, today known as Cramer's Rule. These were the first foundation stones in the
development of linear algebra and matrix theory. At the beginning of the evolution of digital
computers, matrix calculus received a great deal of attention. John von Neumann and Alan
Turing, the world-famous pioneers of computer science, made significant contributions to the
development of computer linear algebra. In 1947, von Neumann and Goldstine investigated the
effect of rounding errors on the solution of linear equations. One year later, Turing [Tur48]
introduced a method for factoring a matrix into the product of a lower triangular matrix and an
echelon matrix (the factorization is known as the LU decomposition). At present, computer
linear algebra is of broad interest, because the field is now recognized as an absolutely essential
tool in many branches of computer applications that require computations which are lengthy
and difficult to get right when done by hand, for example in computer graphics, geometric
modeling, and robotics.
1.2 Aims of the Project

The motivation for this thesis comes mainly from the wish to understand the complexity of
mathematical problems in linear algebra. Many tasks of linear algebra are usually regarded as
elementary problems, but their precise complexity was not known for a long time.

The aim of this thesis is to understand eigenvalues and eigenvectors and to go through
some of their applications in mathematical and engineering areas in order to show their
importance and impact.
CHAPTER 2
LINEAR ALGEBRA BASICS

This chapter reviews the basic concepts of linear algebra. It discusses scalars and their
properties through equations. Moreover, it presents vectors and the operations defined on them,
such as addition, subtraction, and scalar multiplication.

2.1 Introduction to Linear Algebra

Linear algebra is one of the most important foundational areas of mathematics, having at least
as great an impact as calculus, and indeed it provides a substantial part of the machinery
required to generalize calculus to vector-valued functions of many variables. Unlike many
algebraic systems studied in mathematics, a great many of the problems studied in linear
algebra admit exact and even algorithmic solutions, and this makes them implementable on
computers; this explains why so much computational use of computers involves this kind of
algebra and why it is so widely used. Many geometric topics are studied by making use of ideas
from linear algebra, and the notion of a linear transformation is an algebraic version of a
geometric transformation. Finally, much of modern abstract algebra builds on linear algebra
and often provides concrete illustrations of general ideas (Poole, 2010).

The subject of linear algebra can be partly explained by means of the two terms making up the
title. "Linear" is a term you will appreciate better toward the end of this course, and indeed,
achieving this appreciation could be taken as one of the primary goals of this course. For now,
you can understand it to mean anything that is "straight" or "flat". For instance, in the xy-plane
you may be accustomed to describing straight lines (is there any other kind?) as the set of
solutions to an equation of the form y = mx + b, where the slope m and the y-intercept b are
constants that together describe the line. If you have studied multivariable calculus, then you
will have encountered planes. Living in three dimensions, with coordinates described by triples
(x, y, z), they can be described as the set of solutions to equations of the form ax + by + cz = d,
where a, b, c, d are constants that together determine the plane. While we might describe planes
as flat, lines in three dimensions may be described as linear. From a multivariable calculus
course you will recall that lines are sets of points described by equations such as x = 3t − 4,
y = −7t + 2, z = 9t, where t is a parameter that can take on any value.

Another perspective on this idea of flatness is to recognize that the sets of points just described
are solutions to equations of a relatively simple form. These equations involve only addition
and multiplication. We will have a need for subtraction, and occasionally we will divide, but
for the most part you can describe linear equations as involving only addition and
multiplication (Kolman, 1996).

2.2 Scalars
Before examining vectors, we first clarify what is meant by scalars. These are "numbers" of
various kinds, together with algebraic operations for combining them. The principal cases we
will consider are the rational numbers Q, the real numbers R, and the complex numbers C.
However, mathematicians routinely work with other fields, such as the finite fields (also known
as Galois fields), which are essential in coding theory, cryptography, and other modern
applications (Rajendra, 1996).

A field of scalars (or simply a field) consists of a set F whose elements are called scalars,
together with two algebraic operations, addition + and multiplication ×, combining each pair
of scalars x, y ∈ F into new scalars x + y ∈ F and x × y ∈ F. These operations are required to
satisfy the following properties, which are sometimes known as the field axioms.

Associativity: For x, y, z ∈ F,

(x + y) + z = x + (y + z), (2.1)

(x × y) × z = x × (y × z) (2.2)

Zero and unity: There are unique and distinct elements 0, 1 ∈ F such that for x ∈ F,

x + 0 = x = 0 + x, (2.3)

x × 1 = x = 1 × x. (2.4)
Distributivity: For x, y, z ∈ F,

(x + y) × z = x × z + y × z, (2.5)

z × (x + y) = z × x + z × y. (2.6)

Commutativity: For x, y ∈ F,

x + y = y + x, (2.7)

x × y = y × x. (2.8)

Additive and multiplicative inverses: For x ∈ F there is a unique element −x ∈ F (the


additive inverse of x) for which

x + (−x) = 0 = (−x) + x (2.9)

For each non-zero y ∈ F there is a unique element 1/y ∈ F (the multiplicative inverse of y) for
which

y × (1/y) = 1 = (1/y) × y. (2.10)

 Remarks 2.1.
• Usually xy is written instead of x × y, and then we always have xy = yx.

• Because of commutativity, some of the above rules are redundant, in the sense that they are
consequences of the others (Kolman, 1996).

• When working with vectors we will always have a particular field of scalars in mind and will
make use of these rules.
 Definition 2.1
A real vector space is a set V of elements on which we have two operations ⊕ and ⊙ defined
with the following properties:

(a) If u and v are any elements in V, then u ⊕ v is in V (we say that V is closed under the
operation ⊕).
(1) u ⊕ v = v ⊕ u for all u, v in V.
(2) u ⊕ (v ⊕ w) = (u ⊕ v) ⊕ w for all u, v, w in V.
(3) There exists an element 0 in V such that u ⊕ 0 = 0 ⊕ u = u for all u in V.
(4) For each u in V there exists an element −u in V such that u ⊕ (−u) = (−u) ⊕ u = 0.
(b) If u is any element in V and c is any real number, then c ⊙ u is in V (i.e., V is closed under
the operation ⊙).
(5) c ⊙ (u ⊕ v) = c ⊙ u ⊕ c ⊙ v for any u, v in V and any real number c.
(6) (c + d) ⊙ u = c ⊙ u ⊕ d ⊙ u for any u in V and any real numbers c and d.
(7) c ⊙ (d ⊙ u) = (cd) ⊙ u for any u in V and any real numbers c and d.
(8) 1 ⊙ u = u for any u in V.
The elements of V are called vectors; the elements of the set of real numbers R are called
scalars. The operation ⊕ is called vector addition; the operation ⊙ is called scalar multiplication.
The vector 0 in property (3) is called a zero vector. The vector −u in property
(4) is called a negative of u.

 Definition 2.2

Let V be a vector space and W a nonempty subset of V. If W is a vector space with respect to
the operations in V, then W is called a subspace of V.

It follows from Definition 2.2 that to verify that a subset W of a vector space V is a subspace,
one must check that (a), (b), and (1) through (8) of Definition 2.1 hold. However, the next
theorem says that it is enough to check merely that (a) and (b) hold to verify that a subset W of
a vector space V is a subspace. Property (a) is called the closure property for ⊕, and property
(b) is called the closure property for ⊙.
 Theorem 2.1

Let V be a vector space with operations ⊕ and ⊙, and let W be a nonempty subset of V. Then W is
a subspace of V if and only if the following conditions hold:

(a) If u and v are any vectors in W, then u ⊕ v is in W.

(b) If c is any real number and u is any vector in W, then c ⊙ u is in W.

 Proof

If W is a subspace of V, then it is a vector space and (a) and (b) of Definition 2.1 hold; these are
precisely (a) and (b) of the theorem.

Conversely, suppose that (a) and (b) hold. We wish to show that W is a subspace of V. First,
from (b) we have that (−1) ⊙ u is in W for any u in W. From (a) we have that u ⊕ (−1) ⊙ u is in
W. But u ⊕ (−1) ⊙ u = 0, so 0 is in W. Then u ⊕ 0 = u for any u in W. Finally, properties (1), (2),
(5), (6), (7), and (8) hold in W because they hold in V. Hence W is a subspace of V.

 Example 2.1

Let W be the set of all vectors in R3 of the form [a, b, a + b]ᵀ, where a and b are any real
numbers. To verify Theorem 2.1 (a) and (b), we let

u = [a1, b1, a1 + b1]ᵀ and v = [a2, b2, a2 + b2]ᵀ

be two vectors in W. Then

u ⊕ v = [a1 + a2, b1 + b2, (a1 + a2) + (b1 + b2)]ᵀ

is in W, for W consists of all those vectors whose third entry is the sum of the first two entries.
Similarly, c ⊙ u = [ca1, cb1, c(a1 + b1)]ᵀ is in W. Hence W is a subspace of R3.
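The closure checks of Example 2.1 can also be carried out numerically; the helper name `in_W` and the sample vectors below are our own, chosen for illustration.

```python
import numpy as np

def in_W(w):
    """A vector lies in W exactly when its third entry is the sum of the first two."""
    return np.isclose(w[2], w[0] + w[1])

u = np.array([1.0, 2.0, 3.0])    # [a, b, a + b] with a = 1, b = 2
v = np.array([-4.0, 0.5, -3.5])  # a = -4, b = 0.5

assert in_W(u) and in_W(v)
assert in_W(u + v)       # closure under addition, Theorem 2.1 (a)
assert in_W(7.0 * u)     # closure under scalar multiplication, Theorem 2.1 (b)
```

Of course, finitely many checks do not prove the theorem; the algebraic argument in Example 2.1 covers all vectors in W at once.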

2.3 Vector Algebra


Here we introduce a few useful operations which are defined for free vectors.

 Multiplication by a scalar
If we multiply a vector A by a scalar α, the result is a vector B = αA, which has magnitude
B = |α|A. The vector B is parallel to A and points in the same direction if α > 0. For α < 0, the
vector B is parallel to A but points in the opposite direction (antiparallel) (Kolman, 1996).

Once we multiply an arbitrary vector A by the inverse of its magnitude, 1/A, we obtain a unit
vector which is parallel to A. There exist several common notations to denote a unit vector,
e.g. Â, eA, etc. Thus, we have Â = A/A = A/|A|, and A = AÂ, |Â| = 1.
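Scalar multiplication and the unit-vector construction can be sketched numerically; the vector A and scalar α below are arbitrary illustrative values.

```python
import numpy as np

A = np.array([3.0, 4.0])
alpha = -2.0

# B = alpha * A has magnitude |alpha| * |A|; for alpha < 0 it is antiparallel to A.
B = alpha * A
assert np.isclose(np.linalg.norm(B), abs(alpha) * np.linalg.norm(A))

# The unit vector A_hat = A / |A| is parallel to A and has magnitude 1.
A_hat = A / np.linalg.norm(A)
assert np.isclose(np.linalg.norm(A_hat), 1.0)
assert np.allclose(A, np.linalg.norm(A) * A_hat)  # A = |A| * A_hat
```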

 Vector addition
Vector addition has a very simple geometrical interpretation. To add vector B to vector A, we
simply place the tail of B at the head of A. The sum is a vector C from the tail of A to the head of
B. Thus, we write C = A + B. The same result is obtained if the roles of A and B are reversed;
that is, C = A + B = B + A. This commutative property is illustrated below with the
parallelogram construction.
Since the result of adding two vectors is also a vector, we can consider the sum of multiple
vectors. It can easily be verified that vector addition is associative, that is,

(A + B) + C = A + (B + C). (2.11)

 Vector subtraction
Since A − B = A + (−B), in order to subtract B from A, we simply multiply B by −1 and then
add (Golan, 1995).
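The commutativity, associativity, and subtraction-as-negated-addition properties above can be verified on sample vectors (the values are arbitrary):

```python
import numpy as np

A = np.array([1.0, 2.0, 0.0])
B = np.array([3.0, -1.0, 4.0])
C = np.array([0.5, 0.5, 0.5])

# Commutativity: A + B = B + A (the parallelogram construction).
assert np.allclose(A + B, B + A)

# Associativity, equation (2.11): (A + B) + C = A + (B + C).
assert np.allclose((A + B) + C, A + (B + C))

# Subtraction as addition of the negated vector: A - B = A + (-B).
assert np.allclose(A - B, A + (-1.0) * B)
```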

 Scalar product (“Dot” product)


This product involves two vectors and results in a scalar quantity. The scalar product between
two vectors A and B, is denoted by A · B, and is defined as

A · B = AB cos θ. (2.12)

Here θ is the angle between the vectors A and B when they are drawn with a common origin.
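Equation (2.12) can be checked numerically: for two sample vectors, the dot product equals the product of the magnitudes times the cosine of the angle between them.

```python
import numpy as np

A = np.array([1.0, 0.0, 0.0])
B = np.array([1.0, 1.0, 0.0])

# Recover theta from A . B = |A||B| cos(theta).
dot = np.dot(A, B)
cos_theta = dot / (np.linalg.norm(A) * np.linalg.norm(B))
theta = np.arccos(cos_theta)

assert np.isclose(theta, np.pi / 4)  # these two vectors are 45 degrees apart
assert np.isclose(dot, np.linalg.norm(A) * np.linalg.norm(B) * np.cos(theta))
```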

 Vector product (“Cross” product)


This product operation involves two vectors A and B, and results in a new vector C = A×B. The
magnitude of C is given by,

C = AB sin θ, (2.13)

where θ is the angle between the vectors A and B when drawn with a common origin. To
eliminate ambiguity between the two possible choices, θ is always taken as the angle smaller
than π. We can easily show that C is equal to the area enclosed by the parallelogram defined by
A and B. The vector C is orthogonal to both A and B, i.e. it is orthogonal to the plane defined
by A and B. The direction of C is determined by the right-hand rule (Kolman, 1996).
From this definition, it follows that

B × A = −A × B, (2.14)

which indicates that vector multiplication is not commutative (but anticommutative). We also
note that if A × B = 0, then, either A and/or B are zero, or, A and B are parallel, although not
necessarily pointing in the same direction. Thus, we also have A × A = 0. Having defined vector
multiplication, it would appear natural to define vector division. In particular, we could say that
“A divided by B” is a vector C such that A = B × C. We see immediately that there are a
number of difficulties with this definition. In particular, if A is not perpendicular to B, the vector
C does not exist. Moreover, if A is perpendicular to B, then there are an infinite number of
vectors that satisfy A = B × C. To see this, let us assume that C satisfies A = B × C. Then any
vector D = C + βB, for any scalar β, also satisfies A = B × D, since B × D = B × (C + βB) =
B × C = A. We conclude, therefore, that vector division is not a well-defined operation
(Golan, 2007).
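The stated properties of the cross product, including orthogonality, anticommutativity (2.14), A × A = 0, and the parallelogram-area interpretation of (2.13), can be checked on a simple pair of vectors:

```python
import numpy as np

A = np.array([1.0, 0.0, 0.0])
B = np.array([0.0, 1.0, 0.0])

C = np.cross(A, B)

# C is orthogonal to both A and B.
assert np.isclose(np.dot(C, A), 0.0) and np.isclose(np.dot(C, B), 0.0)

# Anticommutativity, equation (2.14): B x A = -(A x B), and A x A = 0.
assert np.allclose(np.cross(B, A), -C)
assert np.allclose(np.cross(A, A), np.zeros(3))

# |C| = |A||B| sin(theta): the unit square spanned by A and B has area 1.
assert np.isclose(np.linalg.norm(C), 1.0)
```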

2.4 Summary
This chapter presented a brief review of the linear algebra as a general topic. Moreover, a review
of scalars and vectors including their properties and transformations was presented.
CHAPTER 3
LINEAR COMBINATIONS AND LINEAR INDEPENDENCE

This chapter presents an explanation of the linear combinations as well as linear independence.

3.1 Linear Combinations


Generally, in mathematics, a linear combination of things is a sum of scalar multiples of
those things (Poole, 2010). Thus, for instance, one linear combination of the functions f(x),
g(x), and h(x) is

2f(x) + 3g(x) − 4h(x) (3.1)

 Definition 3.1
A linear combination of vectors V1, V2, . . . , Vk in a vector space V is an expression of the form

c1v1 + c2v2 + · · · + ckvk (3.2)

where the ci's are scalars; that is, it is a sum of scalar multiples of them (Larry, 1998).
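Expression (3.2) is easy to illustrate numerically; the vectors v1, v2 and scalars c1, c2 below are hypothetical values chosen for the example.

```python
import numpy as np

# Hypothetical vectors and scalars illustrating expression (3.2) with k = 2.
v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([0.0, 1.0, 1.0])
c1, c2 = 2.0, -3.0

# The linear combination c1*v1 + c2*v2.
w = c1 * v1 + c2 * v2
assert np.allclose(w, np.array([2.0, -3.0, 1.0]))
```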

3.1.1 A basis for a vector space

We already know some bases for vector spaces, even if we have not called them by that name.
For example, in R3 the three vectors i = (1, 0, 0), which points along the x-axis, j = (0, 1, 0),
which points along the y-axis, and k = (0, 0, 1), which points along the z-axis, together form
the standard basis for R3. Each vector (x, y, z) in R3 is a unique linear combination of the
standard basis vectors (Henry, 2008):

(x, y, z) = xi + yj + zk. (3.3)

That is the one and only linear combination of i, j, and k that gives (x, y, z).
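Equation (3.3) and the uniqueness of the coefficients can be demonstrated numerically; the sample coordinates (x, y, z) are arbitrary.

```python
import numpy as np

i = np.array([1.0, 0.0, 0.0])
j = np.array([0.0, 1.0, 0.0])
k = np.array([0.0, 0.0, 1.0])

x, y, z = 2.0, -5.0, 7.0

# Equation (3.3): (x, y, z) = x*i + y*j + z*k.
v = x * i + y * j + z * k
assert np.allclose(v, np.array([2.0, -5.0, 7.0]))

# Uniqueness: solving [i j k] a = v recovers exactly the coordinates (x, y, z).
coords = np.linalg.solve(np.column_stack([i, j, k]), v)
assert np.allclose(coords, [x, y, z])
```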
 Definition 3.2
An (ordered) subset β of a vector space V is an (ordered) basis of V if every vector v in V may
be uniquely represented as a linear combination of vectors from β:

v = v1b1 + v2b2 + · · · + vnbn. (3.4)

For an ordered basis, the coefficients in that linear combination are known as the coordinates of
the vector with respect to β.

Later on, when we study coordinates in more detail, we will write the coordinates of a vector v
as a column vector and give it a special notation:

[v]β = [v1 v2 · · · vn]ᵀ. (3.5)

Although we have a standard basis for Rn, there are other bases (Lloyd and David, 1997).

 Example 3.1

In R3 let

The vector v
is a linear combination of V1, V2, and V3 if we can find real numbers a1, a2, and a3 so that
(3.6)

Figure 5: Linear combination of vectors

Substituting for v, V1, V2, and V3, we have

Equating corresponding entries leads to the linear system (verify)

Solving this linear system by the methods of Chapter 2 gives (verify) a1 = 1, a2 = 2, and
a3 = −1, which means that v is a linear combination of V1, V2, and V3. Thus

(3.7)
The figure below shows v as a linear combination of V1, V2, and V3.

Figure 6: Linear combination of V1, V2, and V3

 Definition 3.3

The vectors V1, V2, . . . , Vk in a vector space V are said to be linearly dependent if there exist
constants a1, a2, . . . , ak, not all zero, such that

(3.8)

Otherwise, V1, V2, . . . , Vk are called linearly independent. That is, V1, V2, . . . , Vk are linearly
independent if, whenever a1V1 + a2V2 + · · · + akVk = 0,

a1 = a2 = · · · = ak = 0.

If S = {V1, V2, . . . , Vk}, then we also say that the set S is linearly dependent or linearly
independent if the vectors have the corresponding property.
 Example 3.2
Determine whether the vectors

are linearly independent.

 Solution
Forming Equation (1),

we obtain the homogeneous system (verify)

The corresponding augmented matrix is

whose reduced row echelon form is (verify)

Thus there is a nontrivial solution


so the vectors are linearly dependent.

 Example 3.3
Are the vectors V1 = [1 0 1 2], V2 = [0 1 1 2], and V3 = [1 1 1 3] in R4 linearly
dependent or linearly independent?

 Solution
We form Equation (1).

and solve for a1, a2, and a3 . The resulting homogeneous system is (verify)

The corresponding augmented matrix is (verify)

and its reduced row echelon form is (verify)


Thus the only solution is the trivial solution a1 = a2 = a3 = 0, so the vectors are linearly
independent.
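The conclusion of Example 3.3 can be cross-checked numerically: with V1, V2, V3 as the columns of a 4 × 3 matrix, independence is equivalent to the homogeneous system having only the trivial solution, i.e. the matrix having full column rank.

```python
import numpy as np

V1 = np.array([1.0, 0.0, 1.0, 2.0])
V2 = np.array([0.0, 1.0, 1.0, 2.0])
V3 = np.array([1.0, 1.0, 1.0, 3.0])

A = np.column_stack([V1, V2, V3])  # 4 x 3 matrix with the vectors as columns

# Full column rank (3) means a1*V1 + a2*V2 + a3*V3 = 0 has only the
# trivial solution, so the vectors are linearly independent.
rank = np.linalg.matrix_rank(A)
assert rank == 3
```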

3.2 Testing for Linear Dependence of Vectors

There are numerous circumstances in which we may wish to know whether a set of vectors is
linearly dependent, that is, whether one of the vectors is some combination of the others.

Two vectors u and v are linearly independent if the only numbers x and y satisfying

xu + yv = 0

are x = y = 0. If we let

u = [a, b]ᵀ and v = [c, d]ᵀ, (3.9)

then xu + yv = 0 is equivalent to

0 = x[a, b]ᵀ + y[c, d]ᵀ = [a c; b d][x, y]ᵀ, (3.10)

where [a c; b d] denotes the 2 × 2 matrix with rows [a c] and [b d].

If u and v are linearly independent, then the only solution to this system of equations is the
trivial solution, x = y = 0. For homogeneous systems this happens exactly when the determinant
of the coefficient matrix is non-zero. We have now found a test for determining whether a given
set of vectors is linearly independent: a set of n vectors of length n is linearly independent if the
matrix with these vectors as columns has a non-zero determinant. The set is dependent if the
determinant is zero.
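The determinant test just described can be sketched in a few lines; the sample pairs of vectors are our own.

```python
import numpy as np

# Independent pair: the matrix with u, v as columns has non-zero determinant.
u, v = np.array([1.0, 2.0]), np.array([3.0, 1.0])
assert abs(np.linalg.det(np.column_stack([u, v]))) > 1e-12

# Dependent pair (v = 2u, a multiple of u): the determinant vanishes.
u, v = np.array([1.0, 2.0]), np.array([2.0, 4.0])
assert abs(np.linalg.det(np.column_stack([u, v]))) < 1e-12
```

Note that this test applies only to n vectors of length n; for rectangular cases, such as Example 3.3, one compares the rank of the matrix with the number of vectors instead.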
CHAPTER 4
CONCLUSION

4.1 Conclusion

Overall, in addition to its mathematical uses, linear algebra has broad applications in most
engineering, medical, and biological fields. As science and engineering disciplines grow, so
does the use of mathematics, since new mathematical problems are encountered and new
mathematical skills are required. In this respect, linear algebra has been particularly responsive
to computer science, as it plays a significant role in many important computer science
undertakings.

The broad utility of linear algebra in computer science reflects the deep connection between
the discrete nature of matrix mathematics and digital technology. In this thesis we have seen
one important application of linear algebra, called principal components analysis. This
technique is used broadly in the medical field for compressing medical images while keeping
the good and needed features. However, this is not the only application of linear algebra in this
field. Linear algebra provides many other concepts that are crucial to many areas of computer
science, including graphics, image processing, cryptography, machine learning, computer
vision, optimization, graph algorithms, quantum computation, computational biology,
information retrieval, and web search. Among these applications are face morphing, face
detection, image transformations such as blurring and edge detection, image perspective
removal, classification of tumors as malignant or benign, integer factorization, error-correcting
codes, and secret-sharing.
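The principal-components idea referred to above can be sketched in a few lines: compute the covariance matrix of the mean-centred data, take its eigenvectors, and keep only the components with the largest eigenvalues. This is an illustrative sketch on synthetic data, not the thesis's implementation, and the dimensions chosen are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data: 100 samples of 8 correlated features (an image would be
# flattened rows of pixels in the same way).
base = rng.normal(size=(100, 2))
X = base @ rng.normal(size=(2, 8)) + 0.01 * rng.normal(size=(100, 8))

# PCA via the eigendecomposition of the covariance matrix.
Xc = X - X.mean(axis=0)
cov = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
top = eigvecs[:, np.argsort(eigvals)[::-1][:2]]  # two dominant eigenvectors

# Compress each sample to 2 coefficients, then reconstruct.
codes = Xc @ top
X_rec = codes @ top.T + X.mean(axis=0)

# Because the data are nearly 2-dimensional, little information is lost.
err = np.linalg.norm(X - X_rec) / np.linalg.norm(X)
assert err < 0.05
```

Keeping only the dominant eigenvectors is exactly the compression step the thesis describes: the 8 original features are stored as 2 coefficients per sample, at the cost of a small reconstruction error.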
REFERENCES
1. A. R. Rao and P. Bhimasankaram, Linear Algebra, Hindustan Publishing House.
2. Gilbert Strang, Linear Algebra and Its Applications, Thomson, 2007.
