Kanha Project 4
GOVERNMENT COLLEGE (AUTONOMOUS), ANGUL
CERTIFICATE
ACKNOWLEDGEMENTS
DECLARATION
ABSTRACT
This thesis is a detailed review and explanation of the linear algebra domain, in which the mathematical concepts and structures of linear algebra are discussed. The thesis's main aim is to point out significant applications of linear algebra in the medical engineering field. Hence, eigenvectors and eigenvalues, which represent the core of linear algebra, are discussed in detail in order to show how they can be used in many engineering applications. Principal component analysis is one of the most important compression and feature-extraction algorithms used in the engineering field. It depends mainly on the calculation and extraction of eigenvalues and eigenvectors, which are then used to represent an input, whether it is an image or a simple matrix. In this thesis, the use of principal component analysis for the compression of medical images is discussed as an important and novel application of linear algebra.
TABLE OF CONTENTS
ACKNOWLEDGEMENTS
ABSTRACT
TABLE OF CONTENTS
CHAPTER 1: INTRODUCTION
1.1 Introduction
1.2 Aims of the Project
CHAPTER 2: LINEAR ALGEBRA BASICS
CHAPTER 3: LINEAR COMBINATIONS AND LINEAR INDEPENDENCE
CHAPTER 4: CONCLUSION
CHAPTER 1
INTRODUCTION
1.1 Introduction
Linear algebra is an important course for a diverse range of students for at least two reasons. First, few subjects can claim to have such widespread applications in other areas of mathematics (multivariable calculus, differential equations, and probability, for example) as well as in physics, biology, chemistry, economics, finance, psychology, sociology, and all fields of engineering. Second, the subject presents the student at the sophomore level with an excellent opportunity to learn how to handle abstract concepts.
Linear algebra is one of the best-known mathematical disciplines because of its rich theoretical foundations and its many useful applications to science and engineering. Solving systems of linear equations and computing determinants are two examples of fundamental problems in linear algebra that have been studied for a long time. Leibniz found the formula for determinants in 1693, and in 1750 Cramer presented a method for solving systems of linear equations, today known as Cramer's Rule. These were the first foundation stones in the development of linear algebra and matrix theory. At the beginning of the evolution of digital computers, matrix calculus received a great deal of attention. John von Neumann and Alan Turing, the world-famous pioneers of computer science, made significant contributions to the development of computational linear algebra. In 1947, von Neumann and Goldstine investigated the effect of rounding errors on the solution of linear equations. One year later, Turing [Tur48] introduced a method for factoring a matrix into the product of a lower triangular matrix and an echelon matrix (the factorization is known as LU decomposition). At present, computational linear algebra is of broad interest. This is due to the fact that the field is now recognized as an absolutely essential tool in many branches of computer applications that require computations which are lengthy and difficult to get right when done by hand, for example: in computer graphics, in geometric modeling, in robotics, and so on.
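To illustrate the LU idea concretely (an illustrative sketch, not taken from the sources above), the following Python function factors a small matrix by Doolittle's method. It assumes no zero pivot is encountered, whereas production routines such as scipy.linalg.lu add row pivoting for numerical stability; the sample matrix is arbitrary.

    import numpy as np

    def lu_doolittle(A):
        # Factor a square matrix A as A = L @ U (Doolittle, no pivoting).
        # L is unit lower triangular, U is upper triangular. This sketch
        # assumes no zero pivot is encountered; production routines pivot.
        n = A.shape[0]
        L = np.eye(n)
        U = A.astype(float).copy()
        for k in range(n - 1):
            for i in range(k + 1, n):
                L[i, k] = U[i, k] / U[k, k]       # elimination multiplier
                U[i, k:] -= L[i, k] * U[k, k:]    # zero out below the pivot
        return L, U

    A = np.array([[4.0, 3.0], [6.0, 3.0]])
    L, U = lu_doolittle(A)
    print(np.allclose(L @ U, A))  # True: the factorization reproduces A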
1.2 Aims of the Project
The motivation for this thesis comes mainly from the desire to understand the complexity of mathematical problems in linear algebra. Many tasks of linear algebra are usually regarded as elementary problems, but their precise complexity was unknown for a long time. The aim of this thesis is to understand eigenvalues and eigenvectors and to go through some of their applications in the mathematical and engineering areas in order to show their importance and impact.
CHAPTER 2
LINEAR ALGEBRA BASICS
This chapter reviews the basic concepts and ideas of linear algebra. It discusses scalars and their properties through equations. Moreover, it presents vectors and their operations, such as addition, subtraction, and multiplication.
2.1 Introduction
The subject of linear algebra can be partially explained by the meaning of the two terms making up the title. "Linear" is a term you will appreciate better by the end of this course, and indeed, attaining this appreciation could be taken as one of the primary objectives of the course. For now, you can understand it to mean anything that is "straight" or "flat." For instance, in the xy-plane you might be accustomed to describing straight lines (is there any other kind?) as the set of solutions to an equation of the form y = mx + b, where the slope m and the y-intercept b are constants that together describe the line. If you have studied multivariate calculus, then you will have encountered planes. Living in three dimensions, with coordinates described by triples (x, y, z), planes can be described as the sets of solutions to equations of the form ax + by + cz = d, where a, b, c, d are constants that together determine the plane. While we might describe planes as flat, lines in three dimensions might be described as linear. From a multivariate calculus course you will recall that lines are sets of points described by equations such as x = 3t − 4, y = −7t + 2, z = 9t, where t is a parameter that can take on any value.
Another perspective on this idea of flatness is to recognize that the sets of points just described are solutions to equations of a relatively simple form. These equations involve only addition and multiplication. We will have a need for subtraction, and occasionally we will divide, but for the most part you can describe linear equations as involving only addition and multiplication (Kolman, 1996).
2.2 Scalars
Before examining vectors, we first clarify what is meant by scalars. These are "numbers" of various kinds together with algebraic operations for combining them. The main cases we will consider are the rational numbers Q, the real numbers R and the complex numbers C. However, mathematicians routinely work with other fields, such as the finite fields (also known as Galois fields), which are essential in coding theory, cryptography and other modern applications (Rajendra, 1996).
A field of scalars (or simply a field) consists of a set F whose elements are called scalars, together with two algebraic operations, addition + and multiplication ×, for combining every pair of scalars x, y ∈ F to give new scalars x + y ∈ F and x × y ∈ F. These operations are required to satisfy the following properties, which are sometimes known as the field axioms:
Associativity: For x, y, z ∈ F,
(x + y) + z = x + (y + z), (2.1)
(x × y) × z = x × (y × z) (2.2)
Zero and unity: There are unique and distinct elements 0, 1 ∈ F such that for x ∈ F,
x + 0 = x = 0 + x, (2.3)
x × 1 = x = 1 × x. (2.4)
Distributivity: For x, y, z ∈ F,
(x + y) × z = x × z + y × z, (2.5)
z × (x + y) = z × x + z × y. (2.6)
Commutativity: For x, y ∈ F,
x + y = y + x, (2.7)
x × y = y × x. (2.8)
Inverses: For each x ∈ F there is a unique element −x ∈ F (the negative of x) for which
x + (−x) = 0 = (−x) + x. (2.9)
For each non-zero y ∈ F there is a unique element 1/y ∈ F (the multiplicative inverse of y) for which
y × (1/y) = 1 = (1/y) × y. (2.10)
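As a concrete illustration (my own sketch, not part of the original development), the following Python lines spot-check several of these axioms, including the inverse laws (2.9) and (2.10), in the field Q using the standard library's exact Fraction type; the sample values are arbitrary.

    from fractions import Fraction

    x, y, z = Fraction(2, 3), Fraction(-5, 7), Fraction(1, 4)

    # Distributivity (2.5) and commutativity (2.7)-(2.8)
    assert (x + y) * z == x * z + y * z
    assert x + y == y + x and x * y == y * x

    # Additive inverse (2.9) and multiplicative inverse (2.10)
    assert x + (-x) == 0
    assert y * (1 / y) == 1  # y is non-zero, so 1/y exists in Q

    print("All sampled field axioms hold in Q")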
Remarks 2.1.
• Usually xy is written instead of x × y, and then we always have xy = yx.
• Because of commutativity, some of the above rules are redundant in the sense that they are consequences of others (Kolman, 1996).
• When working with vectors we will always have a particular field of scalars in mind and will make use of these rules.
Definition 2.1
A real vector space is a set V of elements on which we have two operations ⊕ and ⊙ defined with the following properties:
(a) If u and v are any elements in V, then u ⊕ v is in V (we say that V is closed under the operation ⊕).
(1) u ⊕ v = v ⊕ u for all u, v in V.
(2) u ⊕ (v ⊕ w) = (u ⊕ v) ⊕ w for all u, v, w in V.
(3) There exists an element 0 in V such that u ⊕ 0 = 0 ⊕ u = u for all u in V.
(4) For each u in V there exists an element −u in V such that u ⊕ (−u) = (−u) ⊕ u = 0.
(b) If u is any element in V and c is any real number, then c ⊙ u is in V (i.e., V is closed under the operation ⊙).
(5) c ⊙ (u ⊕ v) = c ⊙ u ⊕ c ⊙ v for any u, v in V and any real number c.
(6) (c + d) ⊙ u = c ⊙ u ⊕ d ⊙ u for any u in V and any real numbers c and d.
(7) c ⊙ (d ⊙ u) = (cd) ⊙ u for any u in V and any real numbers c and d.
(8) 1 ⊙ u = u for any u in V.
The elements of V are called vectors; the elements of the set of real numbers R are called scalars. The operation ⊕ is called vector addition; the operation ⊙ is called scalar multiplication. The vector 0 in property (3) is called a zero vector. The vector −u in property (4) is called a negative of u.
Definition 2.2
Let V be a vector space and W a nonempty subset of V. If W is a vector space with respect to
the operations in V, then W is called a subspace of V.
It follows from Definition 2.2 that to verify that a subset W of a vector space V is a subspace, one must check that (a), (b), and (1) through (8) of Definition 2.1 hold. However, the next theorem says that it is enough to check merely that (a) and (b) hold to verify that a subset W of a vector space V is a subspace. Property (a) is called the closure property for ⊕, and property (b) is called the closure property for ⊙.
Theorem 2.1
Let V be a vector space with operations ⊕ and ⊙, and let W be a nonempty subset of V. Then W is a subspace of V if and only if the following conditions hold:
(a) If u and v are any vectors in W, then u ⊕ v is in W.
(b) If c is any real number and u is any vector in W, then c ⊙ u is in W.
Proof
If W is a subspace of V, then it is a vector space and (a) and (b) of Definition 2.1 hold; these are precisely (a) and (b) of the theorem.
Conversely, suppose that (a) and (b) hold. We wish to show that W is a subspace of V. First, from (b) we have that (−1) ⊙ u is in W for any u in W. From (a) we then have that u ⊕ (−1) ⊙ u = 0 is in W, so W contains a zero vector and, with it, a negative (−1) ⊙ u of each of its elements. The remaining properties (1), (2), and (5) through (8) hold in W because they hold for all elements of V. Hence W is a subspace of V.
Example 2.1
Let W be the set of all vectors in R3 of the form (a, b, a + b), where a and b are any real numbers. To verify Theorem 2.1 (a) and (b), we let
u = (a1, b1, a1 + b1) and v = (a2, b2, a2 + b2)
be two vectors in W. Then
u ⊕ v = (a1 + a2, b1 + b2, (a1 + a2) + (b1 + b2))
is in W, for W consists of all those vectors whose third entry is the sum of the first two entries. Similarly, for any real number c,
c ⊙ u = (ca1, cb1, c(a1 + b1))
is in W. Hence W is a subspace of R3.
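The closure checks of Example 2.1 can also be carried out numerically. The following Python snippet (an illustrative sketch; the sample vectors and scalar are arbitrary) tests the two conditions of Theorem 2.1 for this particular W.

    import numpy as np

    def in_W(w):
        # Membership test for W: the third entry equals the sum of the first two.
        return np.isclose(w[2], w[0] + w[1])

    u = np.array([1.0, 2.0, 3.0])    # (a, b, a + b) with a = 1, b = 2
    v = np.array([-4.0, 0.5, -3.5])  # another vector of the same form
    c = -2.5                         # an arbitrary scalar

    print(in_W(u + v))  # True: W is closed under vector addition
    print(in_W(c * u))  # True: W is closed under scalar multiplication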
2.3 Vectors
Once we multiply an arbitrary vector A by the inverse of its magnitude, 1/|A|, we obtain a unit vector which is parallel to A. There exist several common notations to denote a unit vector, e.g. Â, e_A, etc. Thus, we have Â = A/|A|, so that A = |A| Â and |Â| = 1.
Vector addition
Vector addition has a very simple geometrical interpretation. To add vector B to vector A, we simply place the tail of B at the head of A. The sum is a vector C from the tail of A to the head of B. Thus, we write C = A + B. The same result is obtained if the roles of A and B are reversed; that is, C = A + B = B + A. This commutative property corresponds to the familiar parallelogram construction.
Since the result of adding two vectors is also a vector, we can consider the sum of multiple vectors. It can easily be verified that vector addition is associative, that is,
(A + B) + C = A + (B + C). (2.11)
The scalar (dot) product of two vectors A and B is defined as
A · B = |A||B| cos θ. (2.12)
Here θ is the angle between the vectors A and B when they are drawn with a common origin. The magnitude of the vector (cross) product C = A × B is
|C| = |A||B| sin θ, (2.13)
where θ is again the angle between the vectors A and B when drawn with a common origin. To eliminate ambiguity between the two possible choices, θ is always taken as the angle smaller than π. We can easily show that |C| is equal to the area enclosed by the parallelogram defined by A and B. The vector C is orthogonal to both A and B, i.e. it is orthogonal to the plane defined by A and B. The direction of C is determined by the right-hand rule (Kolman, 1996).
From this definition, it follows that
B × A = −A × B, (2.14)
which indicates that vector multiplication is not commutative (but anticommutative). We also note that if A × B = 0, then either A or B is zero, or A and B are parallel, although not necessarily pointing in the same direction. Thus, we also have A × A = 0. Having defined vector multiplication, it would appear natural to define vector division. In particular, we could say that "A divided by B" is a vector C such that A = B × C. We see immediately that there are a number of difficulties with this definition. In particular, if A is not perpendicular to B, the vector C does not exist. Moreover, if A is perpendicular to B, then there are an infinite number of vectors that satisfy A = B × C. To see that, let us assume that C satisfies A = B × C. Then any vector D = C + βB, for any scalar β, also satisfies A = B × D, since B × D = B × (C + βB) = B × C = A. We conclude, therefore, that vector division is not a well-defined operation (Golan, 2007).
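The identities above are easy to confirm numerically. The following Python sketch (my own illustration, with two arbitrary sample vectors) checks the unit-vector property of Section 2.3, Equation (2.13) as the parallelogram area, the orthogonality of A × B to both factors, and the anticommutativity rule (2.14).

    import numpy as np

    A = np.array([1.0, 2.0, 2.0])
    B = np.array([3.0, 0.0, 4.0])

    # Unit vector: A_hat = A / |A| has magnitude 1
    A_hat = A / np.linalg.norm(A)
    print(np.isclose(np.linalg.norm(A_hat), 1.0))  # True

    # Equation (2.13): |A x B| = |A||B| sin(theta), the parallelogram area
    cos_theta = (A @ B) / (np.linalg.norm(A) * np.linalg.norm(B))
    sin_theta = np.sqrt(1.0 - cos_theta**2)  # theta lies in [0, pi], so sin >= 0
    C = np.cross(A, B)
    print(np.isclose(np.linalg.norm(C),
                     np.linalg.norm(A) * np.linalg.norm(B) * sin_theta))  # True

    # C is orthogonal to both A and B
    print(np.isclose(C @ A, 0.0), np.isclose(C @ B, 0.0))  # True True

    # Equation (2.14): anticommutativity, and A x A = 0
    print(np.allclose(np.cross(B, A), -C))           # True
    print(np.allclose(np.cross(A, A), np.zeros(3)))  # True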
2.4 Summary
This chapter presented a brief review of linear algebra as a general topic. Moreover, a review of scalars and vectors, including their properties and operations, was presented.
CHAPTER 3
LINEAR COMBINATIONS AND LINEAR INDEPENDENCE
This chapter presents an explanation of linear combinations as well as linear independence.
Definition 3.1
A linear combination of vectors v1, v2, . . . , vk in a vector space V is an expression of the form
c1 v1 + c2 v2 + · · · + ck vk,
where the ci's are scalars; that is, it is a sum of scalar multiples of the vectors (Larry, 1998). For example, every vector (x, y, z) in R3 can be written as x i + y j + z k, and that is the one and only linear combination of i, j, and k that gives (x, y, z).
Definition 4.2
A (ordered) subset of a vector space V is a (requested) premise of V if every vector v in V may
be interestingly represented as a linear combination of vectors from β.
For a requested basis, the coefficients in that linear combination are known as the coordinates of
the vector as for β.
Later on, when we study arranges in more detail, we'll compose the coordinates of a vector v as a
segment vector and give it a special notation.
[v]_β = (v1, v2, . . . , vn)^T. (3.5)
Although we have a standard basis for Rn, there are other bases (Lloyd and David, 1997).
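As an illustrative sketch (the basis and vector are arbitrarily chosen, not taken from the text), the Python lines below compute coordinates with respect to a non-standard basis by solving B c = v, where the columns of B are the basis vectors, so that c = [v]_β.

    import numpy as np

    # A non-standard basis of R^2, written as the columns of B
    B = np.column_stack(([1.0, 1.0], [1.0, -1.0]))
    v = np.array([3.0, 1.0])

    # Coordinates of v with respect to the basis: solve B @ c = v
    c = np.linalg.solve(B, v)
    print(c)                      # [2. 1.], i.e. v = 2*(1, 1) + 1*(1, -1)
    print(np.allclose(B @ c, v))  # True: the coordinates reproduce v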
Example 3.1
In R3 let
v1 = (1, 2, 1), v2 = (1, 0, 2), v3 = (1, 1, 0).
The vector
v = (2, 1, 5)
is a linear combination of v1, v2, and v3 if we can find real numbers a1, a2, and a3 so that
a1 v1 + a2 v2 + a3 v3 = v. (3.6)
Solving this linear system by the methods of Chapter 2 gives (verify) a1 = 1, a2 = 2, and a3 = −1, which means that v is a linear combination of v1, v2, and v3. Thus
v = v1 + 2 v2 − v3. (3.7)
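The coefficients in Example 3.1 can be checked numerically; the following Python snippet (an illustrative sketch) solves the same 3 × 3 linear system with NumPy.

    import numpy as np

    # Columns of M are v1, v2, v3; we solve M @ a = v for the coefficients
    M = np.column_stack(([1.0, 2.0, 1.0], [1.0, 0.0, 2.0], [1.0, 1.0, 0.0]))
    v = np.array([2.0, 1.0, 5.0])

    a = np.linalg.solve(M, v)
    print(a)  # [ 1.  2. -1.], matching a1 = 1, a2 = 2, a3 = -1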
Definition 3.3
The vectors v1, v2, . . . , vk in a vector space V are said to be linearly dependent if there exist constants a1, a2, . . . , ak, not all zero, such that
a1 v1 + a2 v2 + · · · + ak vk = 0. (3.8)
Otherwise, v1, v2, . . . , vk are called linearly independent. That is, v1, v2, . . . , vk are linearly independent if, whenever a1 v1 + a2 v2 + · · · + ak vk = 0, then
a1 = a2 = · · · = ak = 0.
If S = {v1, v2, . . . , vk}, then we also say that the set S is linearly dependent or linearly independent if the vectors have the corresponding property.
In general, to determine whether given vectors are linearly dependent or linearly independent, we form Equation (3.8) and solve the resulting homogeneous system: a nontrivial solution means the vectors are linearly dependent, while only the trivial solution means they are linearly independent.
Example 3.3
Are the vectors v1 = [1 0 1 2], v2 = [0 1 1 2], and v3 = [1 1 1 3] in R4 linearly dependent or linearly independent?
Solution
We form Equation (3.8),
a1 v1 + a2 v2 + a3 v3 = 0,
and solve for a1, a2, and a3. The resulting homogeneous system is (verify)
a1 + a3 = 0,
a2 + a3 = 0,
a1 + a2 + a3 = 0,
2a1 + 2a2 + 3a3 = 0,
whose only solution is a1 = a2 = a3 = 0. Hence the vectors are linearly independent.
Two vectors u and v are linearly independent if the only numbers x and y satisfying x u + y v = 0 are x = y = 0. If u = (a, b)^T and v = (c, d)^T, then x u + y v = 0 is equivalent to

0 = x [ a ] + y [ c ] = [ a  c ] [ x ]. (3.10)
      [ b ]     [ d ]   [ b  d ] [ y ]
If u and v are linearly independent, then the only solution to this system of equations is the trivial solution, x = y = 0. For homogeneous systems this happens exactly when the determinant of the coefficient matrix is non-zero. We have now discovered a test for determining whether a given set of vectors is linearly independent: a set of n vectors of length n is linearly independent if the matrix with these vectors as columns has a non-zero determinant. The set is linearly dependent if the determinant is zero.
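Both the determinant test and the more general rank criterion (which also covers non-square cases such as Example 3.3) are easy to apply numerically; the Python sketch below illustrates both, with an arbitrary 2 × 2 sample for the determinant case.

    import numpy as np

    # Determinant test: two vectors of length 2 placed as matrix columns
    u, v = np.array([1.0, 2.0]), np.array([3.0, 4.0])
    M = np.column_stack((u, v))
    print(not np.isclose(np.linalg.det(M), 0.0))  # True: u, v independent

    # Example 3.3 has three vectors of length 4, so the matrix is not square;
    # there the rank criterion applies: full column rank means independence.
    V = np.column_stack(([1.0, 0.0, 1.0, 2.0],
                         [0.0, 1.0, 1.0, 2.0],
                         [1.0, 1.0, 1.0, 3.0]))  # columns are v1, v2, v3
    print(np.linalg.matrix_rank(V) == 3)  # True: linearly independent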
CHAPTER 4
CONCLUSION
4.1 Conclusion
Overall, in addition to its mathematical uses, linear algebra has broad applications in most engineering, medical, and biological fields. As science and engineering disciplines grow, so does the use of mathematics: new mathematical problems are encountered and new mathematical skills are required. In this respect, linear algebra has been particularly responsive to computer science, as it plays a significant role in many important computer science undertakings.
The broad utility of linear algebra in computer science reflects the deep connection between the discrete nature of matrix mathematics and digital technology. In this thesis we have seen one important application of linear algebra, principal component analysis. This technique is used widely in the medical field for compressing medical images while preserving their essential features. It is far from the only application of linear algebra in this field, however: linear algebra provides many other concepts that are crucial to many areas of computer science, including graphics, image processing, cryptography, machine learning, computer vision, optimization, graph algorithms, quantum computation, computational biology, information retrieval and web search. Among these applications are face morphing, face detection, image transformations such as blurring and edge detection, image perspective removal, classification of tumors as malignant or benign, integer factorization, error-correcting codes, and secret-sharing.
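Since PCA-based image compression is the thesis's central application, a minimal sketch of the idea follows (my own illustration: it assumes a grayscale image stored as a NumPy array, and a random array stands in for real image data). The rows are treated as samples, the data is centered, and the image is rebuilt from its first k principal components.

    import numpy as np

    rng = np.random.default_rng(1)
    img = rng.random((64, 64))  # stand-in for a grayscale medical image

    # Center the data, then obtain the principal components from the SVD
    mean = img.mean(axis=0)
    U, s, Vt = np.linalg.svd(img - mean, full_matrices=False)

    k = 10  # keep only the first k principal components
    compressed = (U[:, :k] * s[:k]) @ Vt[:k, :] + mean

    # Storing U[:, :k], s[:k], Vt[:k, :] and the mean takes far fewer numbers
    # than the original image, at the cost of some reconstruction error.
    err = np.linalg.norm(img - compressed) / np.linalg.norm(img)
    print(f"relative reconstruction error with k={k}: {err:.3f}")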