24MA201 - Unit III InnerProduct Spaces Digital Material

This document outlines the course structure for 'Linear Algebra and Applications' (24MA201) offered by RMK Group of Educational Institutions for the academic years 2024-2028. It includes course objectives, prerequisites, a detailed syllabus, course outcomes, and assessment schedules, along with lecture plans and activity-based learning strategies. The document emphasizes the importance of inner product spaces, vector spaces, and various linear algebra concepts, and is intended for educational use only.


Please read this disclaimer before proceeding:


This document is confidential and intended solely for the educational purposes of
RMK Group of Educational Institutions. If you have received this document
through email in error, please notify the system manager. This document
contains proprietary information and is intended only for the respective group or
learning community. If you are not the addressee, you should not disseminate,
distribute or copy it through e-mail. Please notify the sender immediately by
e-mail if you have received this document by mistake and delete it from your
system. If you are not the intended recipient, you are notified that disclosing,
copying, distributing or taking any action in reliance on the contents of this
information is strictly prohibited.

24MA201 - LINEAR ALGEBRA AND APPLICATIONS

DEPARTMENT      CSE, ADS, CSD and IT


BATCH/YEAR 2024 - 2028/I

CREATED BY Department of Mathematics

DATE 17.02.2025

Table of Contents

S. NO.  TOPICS                                                      Page No.

1   Course Objectives                                               6
2   Pre-requisites                                                  7
3   Syllabus                                                        8
4   Course Outcomes                                                 9
5   CO - PO/PSO Mapping                                             10
6   Lecture Notes: UNIT III INNER PRODUCT SPACES
    Lecture Plan                                                    11
    Activity Based Learning                                         12
    3.1 Inner Product                                               13
    3.2 Norm or Length of a Vector                                  13
    3.3 Gram-Schmidt Orthogonalization Process                      20
    3.4 Adjoint of a Linear Operator                                33
    3.5 Least Squares Approximation                                 39
    3.6 Least Squares Method of Finding a Curve of Best Fit         46
        3.6.1 Line of Best Fit (or) Least Squares Line              46
        3.6.2 Parabola of Best Fit (or) Least Squares Parabola      49
    3.7 Practice Quiz                                               52
    3.8 Assignment                                                  56
    3.9 Part A Questions and Answers                                61
    3.10 Part B Questions                                           68
7   Supportive Online Certification Courses                         71
8   Real Time Applications                                          72
9   Assessment Schedule                                             73
10  Mini Project                                                    74
11  Prescribed Text Books & Reference Books                         79

COURSE OBJECTIVES

The syllabus is designed to:

S. No.  Course Objectives

1   Comprehend the fundamental concepts of matrices.
2   Illustrate the basic notions associated with vector spaces and their properties.
3   Utilize the Gram-Schmidt orthonormalization process.
4   Understand the components of, and implications for, vector spaces via the
    rank-nullity dimension theorem.
5   Calculate the eigenvalues and eigenvectors of linear transformations.

PREREQUISITES

Subject Code: 24MA201

Subject Name: LINEAR ALGEBRA AND APPLICATIONS

TOPICS: To learn Linear Algebra one has to be strong in Mathematics, including the
basic concepts of matrices and determinants. Familiarity with functions, set theory,
relations and linear equations is mandatory.

COURSE NAME WITH CODE: Higher Secondary Level

SYLLABUS

24MA201    LINEAR ALGEBRA AND APPLICATIONS    L T P C
                                              3 0 2 4
UNIT I Matrices and System of Linear Equations 15
Matrices – Row echelon form – Rank of a matrix – System of linear equations –
Consistency – Gauss elimination method – Gauss Jordan method.
Experiments using C language:
1. Solve the system of equations using Gauss Elimination method.
2. Solve the system of equations using Gauss Jordan method.

UNIT II Vector spaces 15


Real and Complex fields – Vector spaces over Real and Complex fields – Subspace
– Linear space – Linear independence and dependence (Statement only) – Bases
and dimensions.
Experiments using C language:
1. Check whether the given vectors are linearly independent or not.
2. Find the basis and dimension for given vectors.

UNIT III Inner product spaces 15


Inner product space and norms – Properties – Orthogonal, Orthonormal vectors –
Gram- Schmidt ortho normalization process – Least squares approximation.
Experiments using C language:
1. Find the orthogonal vectors using inner product.
2. Find the orthonormal vectors using inner product.
UNIT IV Linear Transformation 15
Linear transformation – Range and null space – Rank and nullity – Rank nullity
Dimension theorem – Matrix representation of linear transformation – Eigenvalues
and eigenvectors of linear transformation.
Experiments using C language:
1. Find the Rank and Nullity of a matrix.
2. Find the eigenvalues and eigenvectors of a matrix.

UNIT V Eigenvalue Problems And Matrix Decomposition 15


Eigenvalue problems – Power method – Jacobi method – Singular value
decomposition – QR decomposition.
Experiments using C language:
1. Solve the system of equations using Jacobi method.
2. Find QR decomposition of a matrix.

TOTAL: 75 PERIODS

Course Outcomes

After the successful completion of the course, the student will be able to:

CO's   Course Outcomes                                            Highest Cognitive Level

CO1    Solve the system of linear equations using Gauss
       elimination and Gauss Jordan methods.                      K2
CO2    Analyze vector spaces to determine their bases and
       dimensions.                                                K2
CO3    Apply the Gram-Schmidt process to orthonormalize sets
       of vectors.                                                K2
CO4    Apply the rank-nullity theorem to analyse linear
       transformations.                                           K2
CO5    Compute the eigenvalues and eigenvectors using
       singular value decomposition.                              K2
CO6    Understand the ideas of least squares approximation
       and its applications.                                      K2

CO-PO/PSO Mapping

CO’s PO1 PO2 PO3 PO4 PO5 PO6 PO7 PO8 PO9 PO10 PO11 PO12

CO1 3 2 - - - - - - - - - 1

CO2 3 2 - - - - - - - - - 1

CO3 3 2 - - 1 - - - - - - -

CO4 3 2 - 1 - - - - - - - -

CO5 3 2 - - 1 - - - - - - -

CO6 3 - 2 - - - - - - - - 1

CO’s PSO1 PSO2 PSO3

CO1 - - -

CO2 - - -

CO3 - - -

CO4 - - -

CO5 - - -

CO6 - - -

LECTURE PLAN - UNIT III

S.No.  Topic to be covered                      Periods  Proposed Date          CO   Knowledge Level  Mode of Delivery

1      Inner product, norms                     1        22.2.2025              CO3  K1   PPT, Board & Marker
2      Problems                                 1        24.2.2025              CO3  K2   PPT, Board & Marker
3      Orthogonal, orthonormal vectors          1        25.2.2025              CO3  K2   PPT, Board & Marker
4      Gram-Schmidt orthogonalization process   1        26.2.2025              CO3  K1   PPT, Board & Marker
5      Problems                                 1        27.2.2025              CO3  K2   PPT, Board & Marker
6      Problems                                 1        28.2.2025              CO3  K2   PPT, Board & Marker
7      Adjoint of linear operators              1        1.3.2025               CO3  K1   PPT, Board & Marker
8      Problems                                 1        4.3.2025               CO3  K2   PPT, Board & Marker
9      Least squares approximation              1        5.3.2025 / 6.3.2025    CO3  K2   PPT, Board & Marker
10     Problems                                 1        7.3.2025               CO3  K2   PPT, Board & Marker
11     Problems                                 1        8.3.2025               CO3  K2   PPT, Board & Marker
12     Problems                                 1        10.3.2025 / 11.3.2025  CO3  K2   PPT, Board & Marker

ACTIVITY BASED LEARNING

Activity based learning enhances students' critical thinking and collaborative
skills. With experiential learning at the core, various activities such as quiz
competitions, group discussions, etc. are conducted for all five units to enhance
the learning abilities of students. The students are the center of these activities:
students' opinions are valued, questions are encouraged, and discussions are
held. These activities empower the students to explore and learn by themselves.

S.No.  TOPICS                                   Activity  Link

1      Inner product space - Norms              Quiz      https://quizizz.com/join/quiz/5f639b1995c6f7001c91bbfa/start?studentShare=true
2      Gram-Schmidt Orthogonalization Process   Quiz      https://study.com/academy/practice/quiz-worksheet-the-gram-schmidt-process.html
3      Least Squares Approximation              Quiz      https://www.varsitytutors.com/linear_algebra-help/least-squares

UNIT III - INNER PRODUCT SPACES

3.1 Introduction
An inner product space or a Hausdorff pre-Hilbert space is a vector
space with an additional structure called an inner product. This additional structure
associates each pair of vectors in the space with a scalar quantity known as the
inner product of the vectors, often denoted using angle brackets. Inner products
allow the rigorous introduction of intuitive geometrical notions, such as the length
of a vector or the angle between two vectors. They also provide the means of
defining orthogonality between vectors (zero inner product). Inner product
spaces generalize Euclidean spaces (in which the inner product is the dot
product, also known as the scalar product) to vector spaces of any (possibly
infinite) dimension, and are studied in functional analysis. Inner product spaces
over the field of complex numbers are sometimes referred to as unitary spaces.
The first usage of the concept of a vector space with an inner product is due
to Giuseppe Peano, in 1898.

Inner Product
An inner product on a vector space V(F) is a function that assigns, to every
ordered pair of vectors u, v ∈ V, a scalar ⟨u, v⟩ in F such that for all u, v, w ∈ V
and α, β ∈ F the following axioms hold.

(i) ⟨u, v⟩ is the complex conjugate of ⟨v, u⟩
(ii) ⟨u + v, w⟩ = ⟨u, w⟩ + ⟨v, w⟩
(iii) ⟨αu, v⟩ = α⟨u, v⟩
(iv) ⟨u, u⟩ ≥ 0, and ⟨u, u⟩ = 0 if and only if u = 0.

A vector space V(F) with an inner product on it is called an inner product space.

3.2 Norm or length of a vector

Let V be an inner product space. For v ∈ V, we define the norm or length of v
by ‖v‖ = √⟨v, v⟩.

The vector v is called a unit vector if ‖v‖ = 1.

Orthogonal Vectors
Let V be an inner product space. The vectors u and v in V are orthogonal (or
perpendicular) if ⟨u, v⟩ = 0.
A subset S of V is orthogonal if any two distinct vectors in S are orthogonal.

Orthonormal Set
A subset S of V is orthonormal if S is orthogonal and consists entirely of unit vectors.

Note:
The process of multiplying a non-zero vector by the reciprocal of its length is
called normalizing. That is, if u is a non-zero vector, then (1/‖u‖)u is a unit vector.

Problem: 1
Let V(F) = R³(R) be a vector space. For u = (a₁, a₂, a₃), v = (b₁, b₂, b₃) define
⟨u, v⟩ = a₁b₁ + a₂b₂ + a₃b₃. Verify that this makes V an inner product space.

Solution:
Let u = (a₁, a₂, a₃), v = (b₁, b₂, b₃) and w = (c₁, c₂, c₃).

(i) ⟨u, v⟩ = a₁b₁ + a₂b₂ + a₃b₃ (real, so conjugation has no effect)
          = b₁a₁ + b₂a₂ + b₃a₃
    ⟨u, v⟩ = ⟨v, u⟩

(ii) ⟨u, u⟩ = a₁a₁ + a₂a₂ + a₃a₃ = a₁² + a₂² + a₃² ≥ 0, and ⟨u, u⟩ > 0 if u ≠ 0
     ⟨u, u⟩ = 0 ⟺ a₁² + a₂² + a₃² = 0 ⟺ a₁ = a₂ = a₃ = 0 ⟺ u = (0, 0, 0)

(iii) αu + βv = α(a₁, a₂, a₃) + β(b₁, b₂, b₃) = (αa₁ + βb₁, αa₂ + βb₂, αa₃ + βb₃)
      ⟨αu + βv, w⟩ = (αa₁ + βb₁)c₁ + (αa₂ + βb₂)c₂ + (αa₃ + βb₃)c₃
                   = α(a₁c₁ + a₂c₂ + a₃c₃) + β(b₁c₁ + b₂c₂ + b₃c₃)
      ⟨αu + βv, w⟩ = α⟨u, w⟩ + β⟨v, w⟩

Hence V = R³(R) is an inner product space.

Problem: 2
Let u = (a₁, a₂, a₃, ..., aₙ), v = (b₁, b₂, b₃, ..., bₙ) ∈ Fⁿ(C).
Define ⟨u, v⟩ = a₁b̄₁ + a₂b̄₂ + a₃b̄₃ + ... + aₙb̄ₙ. Verify that this is an inner product on Fⁿ(C).

Solution:

(i) conj⟨v, u⟩ = conj(b₁ā₁ + b₂ā₂ + ... + bₙāₙ)
             = b̄₁a₁ + b̄₂a₂ + ... + b̄ₙaₙ
             = a₁b̄₁ + a₂b̄₂ + ... + aₙb̄ₙ
             = ⟨u, v⟩

(ii) ⟨u, u⟩ = a₁ā₁ + a₂ā₂ + ... + aₙāₙ = |a₁|² + |a₂|² + ... + |aₙ|² ≥ 0,
     and ⟨u, u⟩ > 0 if u ≠ 0
     ⟨u, u⟩ = 0 ⟺ u = 0

(iii) Let w = (c₁, c₂, c₃, ..., cₙ).
      αu + βv = (αa₁ + βb₁, αa₂ + βb₂, ..., αaₙ + βbₙ)
      ⟨αu + βv, w⟩ = (αa₁ + βb₁)c̄₁ + (αa₂ + βb₂)c̄₂ + ... + (αaₙ + βbₙ)c̄ₙ
                   = α(a₁c̄₁ + a₂c̄₂ + ... + aₙc̄ₙ) + β(b₁c̄₁ + b₂c̄₂ + ... + bₙc̄ₙ)
                   = α⟨u, w⟩ + β⟨v, w⟩

Hence ⟨u, v⟩ defines an inner product on Fⁿ(C).

Problem: 3
Let V be the set of all continuous real functions defined on the closed interval [0,1],
with the inner product on V defined by ⟨f(x), g(x)⟩ = ∫₀¹ f(t)g(t) dt. Prove that V(R) is
an inner product space.

Solution:

(i) ⟨f, g⟩ = ∫₀¹ f(t)g(t) dt = ∫₀¹ g(t)f(t) dt = ⟨g, f⟩

(ii) ⟨f, f⟩ = ∫₀¹ f(t)f(t) dt = ∫₀¹ (f(t))² dt ≥ 0,
     and ⟨f, f⟩ = 0 iff f(t) = 0 for all t ∈ [0,1]

(iii) ⟨αf + βg, h⟩ = ∫₀¹ (αf(t) + βg(t))h(t) dt
                  = α ∫₀¹ f(t)h(t) dt + β ∫₀¹ g(t)h(t) dt
                  = α⟨f, h⟩ + β⟨g, h⟩

Hence ⟨f, g⟩ makes V an inner product space over R.

Problem: 4
Prove that R²(R) is an inner product space with the inner product defined by
⟨u, v⟩ = a₁b₁ + a₂b₁ + a₁b₂ + 2a₂b₂, where u = (a₁, a₂), v = (b₁, b₂).

Solution:

(i) ⟨u, v⟩ = a₁b₁ + a₂b₁ + a₁b₂ + 2a₂b₂
          = b₁a₁ + b₁a₂ + b₂a₁ + 2b₂a₂
          = ⟨v, u⟩

(ii) ⟨u, u⟩ = a₁a₁ + a₂a₁ + a₁a₂ + 2a₂a₂
           = a₁² + 2a₁a₂ + a₂² + a₂²
     ⟨u, u⟩ = (a₁ + a₂)² + a₂² ≥ 0, and ⟨u, u⟩ > 0 if u ≠ 0
     ⟨u, u⟩ = 0 iff (a₁ + a₂)² + a₂² = 0 iff a₁ + a₂ = 0 and a₂ = 0 iff a₁ = a₂ = 0 iff u = 0

(iii) Let w = (c₁, c₂) and α, β ∈ R.
      αu + βv = α(a₁, a₂) + β(b₁, b₂) = (αa₁ + βb₁, αa₂ + βb₂)
      ⟨αu + βv, w⟩ = ⟨(αa₁ + βb₁, αa₂ + βb₂), (c₁, c₂)⟩
                  = (αa₁ + βb₁)c₁ + (αa₂ + βb₂)c₁ + (αa₁ + βb₁)c₂ + 2(αa₂ + βb₂)c₂
                  = α(a₁c₁ + a₂c₁ + a₁c₂ + 2a₂c₂) + β(b₁c₁ + b₂c₁ + b₁c₂ + 2b₂c₂)
                  = α⟨u, w⟩ + β⟨v, w⟩

Hence R²(R) is an inner product space.

Theorem: 1
Let V be an inner product space. Then for u, v, w ∈ V and α ∈ F, the following
statements are true.

(i) ⟨u, v + w⟩ = ⟨u, v⟩ + ⟨u, w⟩
(ii) ⟨u, αv⟩ = ᾱ⟨u, v⟩
(iii) ⟨u, 0⟩ = ⟨0, u⟩ = 0
(iv) If ⟨u, v⟩ = ⟨u, w⟩ for all u ∈ V, then v = w

Proof:
(i) ⟨u, v + w⟩ = conj⟨v + w, u⟩
             = conj(⟨v, u⟩ + ⟨w, u⟩)
             = conj⟨v, u⟩ + conj⟨w, u⟩
    ⟨u, v + w⟩ = ⟨u, v⟩ + ⟨u, w⟩
(ii) ⟨u, αv⟩ = conj⟨αv, u⟩
            = conj(α⟨v, u⟩)
            = ᾱ conj⟨v, u⟩
    ⟨u, αv⟩ = ᾱ⟨u, v⟩
(iii) ⟨u, 0⟩ = ⟨u, 0·v⟩ = 0̄⟨u, v⟩ = 0
     ⟨0, u⟩ = ⟨0·v, u⟩ = 0⟨v, u⟩ = 0
     ⟨u, 0⟩ = ⟨0, u⟩ = 0
(iv) Consider ⟨u, v − w⟩ = ⟨u, v⟩ − ⟨u, w⟩, using (i)
                       = 0 (given)
     Since this holds for every u ∈ V, take u = v − w: then ⟨v − w, v − w⟩ = 0,
     so v − w = 0, i.e. v = w.

Theorem: 2
Let V be an inner product space over F. Then for all u, v ∈ V and α ∈ F we have

(i) ‖αu‖ = |α| ‖u‖
(ii) |⟨u, v⟩| ≤ ‖u‖ ‖v‖ (Cauchy-Schwarz inequality)
(iii) ‖u + v‖ ≤ ‖u‖ + ‖v‖ (Triangle inequality)

Proof:

(i) ‖αu‖² = ⟨αu, αu⟩
          = α⟨u, αu⟩
          = α ᾱ⟨u, u⟩
          = |α|² ⟨u, u⟩
    ‖αu‖² = |α|² ‖u‖²
    ∴ ‖αu‖ = |α| ‖u‖

(ii) Cauchy-Schwarz inequality
Case (i):
If u = 0 (or) v = 0, then ⟨u, v⟩ = 0.
Also ‖u‖ = 0 or ‖v‖ = 0.
Hence |⟨u, v⟩| = ‖u‖ ‖v‖.

Case (ii):
Let u ≠ 0 and v ≠ 0.

Let w = v − (⟨v, u⟩/‖u‖²) u.

Since ⟨v, u⟩/‖u‖² ∈ F, let k = ⟨v, u⟩/‖u‖², so w = v − ku.

Consider ⟨w, w⟩ = ⟨v − ku, v − ku⟩
              = ⟨v, v⟩ − ⟨v, ku⟩ − ⟨ku, v⟩ + ⟨ku, ku⟩
              = ‖v‖² − k̄⟨v, u⟩ − k⟨u, v⟩ + k k̄ ‖u‖²
              = ‖v‖² − (⟨u, v⟩/‖u‖²)⟨v, u⟩ − (⟨v, u⟩/‖u‖²)⟨u, v⟩ + (|⟨v, u⟩|²/‖u‖⁴)‖u‖²
              = ‖v‖² − |⟨u, v⟩|²/‖u‖² − |⟨u, v⟩|²/‖u‖² + |⟨u, v⟩|²/‖u‖²
⟨w, w⟩ = ‖v‖² − |⟨u, v⟩|²/‖u‖²

We know that ⟨w, w⟩ ≥ 0, so

‖v‖² − |⟨u, v⟩|²/‖u‖² ≥ 0
‖v‖² ≥ |⟨u, v⟩|²/‖u‖²
‖u‖² ‖v‖² ≥ |⟨u, v⟩|²
∴ |⟨u, v⟩| ≤ ‖u‖ ‖v‖

(iii) Triangle inequality

‖u + v‖² = ⟨u + v, u + v⟩
         = ⟨u, u⟩ + ⟨u, v⟩ + ⟨v, u⟩ + ⟨v, v⟩
         = ‖u‖² + ⟨u, v⟩ + conj⟨u, v⟩ + ‖v‖²
         = ‖u‖² + 2 Re⟨u, v⟩ + ‖v‖²
         ≤ ‖u‖² + 2 |⟨u, v⟩| + ‖v‖²        (since 2 Re(z) ≤ 2|z|)
         ≤ ‖u‖² + 2 ‖u‖ ‖v‖ + ‖v‖²        (since |⟨u, v⟩| ≤ ‖u‖ ‖v‖)

‖u + v‖² ≤ (‖u‖ + ‖v‖)²

∴ ‖u + v‖ ≤ ‖u‖ + ‖v‖

Theorem: 3
In an inner product space V(R), for x, y ∈ V:

(i) | ‖x‖ − ‖y‖ | ≤ ‖x − y‖
(ii) ‖x + y‖² − ‖x − y‖² = 4⟨x, y⟩

Proof:
(i) x = (x − y) + y
    ‖x‖ ≤ ‖x − y‖ + ‖y‖
    ‖x‖ − ‖y‖ ≤ ‖x − y‖        ...(1)
    y = (y − x) + x
    ‖y‖ ≤ ‖y − x‖ + ‖x‖
    ‖y‖ − ‖x‖ ≤ ‖y − x‖
    −(‖x‖ − ‖y‖) ≤ ‖x − y‖      ...(2)
    From (1) & (2),
    | ‖x‖ − ‖y‖ | ≤ ‖x − y‖

(ii) ‖x + y‖² − ‖x − y‖²
    = ⟨x + y, x + y⟩ − ⟨x − y, x − y⟩
    = ⟨x, x⟩ + ⟨x, y⟩ + ⟨y, x⟩ + ⟨y, y⟩ − (⟨x, x⟩ − ⟨x, y⟩ − ⟨y, x⟩ + ⟨y, y⟩)
    = ⟨x, y⟩ + ⟨y, x⟩ + ⟨x, y⟩ + ⟨y, x⟩
    ‖x + y‖² − ‖x − y‖² = 4⟨x, y⟩      (since V is real, ⟨x, y⟩ = ⟨y, x⟩)
Theorem: 4
In an inner product space V, any subset of non-zero orthogonal vectors is linearly
independent.
Proof:
Let S = {v₁, v₂, v₃, ..., vₙ} be a set of orthogonal vectors, all non-zero.
∴ vᵢ ≠ 0 and ⟨vᵢ, vⱼ⟩ = 0 if i ≠ j.
Let α₁v₁ + α₂v₂ + α₃v₃ + ... + αₙvₙ = 0.
Consider ⟨α₁v₁ + α₂v₂ + α₃v₃ + ... + αₙvₙ, vᵢ⟩ = ⟨0, vᵢ⟩ = 0 (by Theorem 1)
⟹ α₁⟨v₁, vᵢ⟩ + α₂⟨v₂, vᵢ⟩ + ... + αᵢ⟨vᵢ, vᵢ⟩ + ... + αₙ⟨vₙ, vᵢ⟩ = 0
⟹ αᵢ⟨vᵢ, vᵢ⟩ = 0 (all other terms vanish by orthogonality)
⟹ αᵢ ‖vᵢ‖² = 0
⟹ αᵢ = 0 for each i. Hence S is linearly independent.

3.3 Gram-Schmidt Orthogonalization Process:

Orthonormal basis:
Let V be an inner product space. A subset S of V is an orthonormal basis if it is an
ordered basis that is orthonormal.

Example: The set {(1/√5, 2/√5), (2/√5, −1/√5)} is an orthonormal basis for R².

Theorem: 5 (Gram-Schmidt Orthogonalization Process)

Every finite dimensional inner product space has an orthonormal set as a basis.

Proof:
Let V(F) be a finite dimensional inner product space and dim(V) = n.

Let B = {v₁, v₂, v₃, ..., vₙ} be a basis for V(F).

Claim: we construct an orthonormal basis {w₁, w₂, w₃, ..., wₙ} from B.

First we construct an orthogonal basis {u₁, u₂, u₃, ..., uₙ} from B.

We prove this by induction on n.
Take u₁ = v₁ ≠ 0.

Let u₂ = v₂ − (⟨v₂, u₁⟩/‖u₁‖²) u₁. We have to prove u₂ ≠ 0.

For, if u₂ = 0 then v₂ − (⟨v₂, u₁⟩/‖u₁‖²) u₁ = 0

⟹ v₂ = (⟨v₂, u₁⟩/‖u₁‖²) u₁

⟹ v₂, u₁ are dependent, which is a contradiction.

∴ u₂ ≠ 0.

Claim: u₂ is orthogonal to u₁.

⟨u₂, u₁⟩ = ⟨v₂ − (⟨v₂, u₁⟩/‖u₁‖²) u₁, u₁⟩
        = ⟨v₂, u₁⟩ − (⟨v₂, u₁⟩/‖u₁‖²) ⟨u₁, u₁⟩
        = ⟨v₂, u₁⟩ − (⟨v₂, u₁⟩/‖u₁‖²) ‖u₁‖²
        = ⟨v₂, u₁⟩ − ⟨v₂, u₁⟩
⟨u₂, u₁⟩ = 0
∴ u₂ is orthogonal to u₁
∴ {u₁, u₂} is an orthogonal set.

Hence the theorem is true for n = 2.

Now assume the theorem is true for all integers up to k,
i.e. {u₁, u₂, u₃, ..., uₖ} is an orthogonal set.

Now we prove the theorem for n = k + 1.

Let uₖ₊₁ = vₖ₊₁ − (⟨vₖ₊₁, u₁⟩/‖u₁‖²) u₁ − (⟨vₖ₊₁, u₂⟩/‖u₂‖²) u₂ − ... − (⟨vₖ₊₁, uₖ⟩/‖uₖ‖²) uₖ.
Then uₖ₊₁ ≠ 0.

If uₖ₊₁ = 0, then vₖ₊₁ = (⟨vₖ₊₁, u₁⟩/‖u₁‖²) u₁ + (⟨vₖ₊₁, u₂⟩/‖u₂‖²) u₂ + ... + (⟨vₖ₊₁, uₖ⟩/‖uₖ‖²) uₖ

⟹ vₖ₊₁ is a linear combination of u₁, u₂, u₃, ..., uₖ and hence a linear combination of
v₁, v₂, v₃, ..., vₖ, which is a contradiction. Hence uₖ₊₁ ≠ 0.

To prove: uₖ₊₁ is orthogonal to each of u₁, u₂, u₃, ..., uₖ, i.e. ⟨uₖ₊₁, uᵢ⟩ = 0.

⟨uₖ₊₁, uᵢ⟩ = ⟨vₖ₊₁ − (⟨vₖ₊₁, u₁⟩/‖u₁‖²) u₁ − ... − (⟨vₖ₊₁, uₖ⟩/‖uₖ‖²) uₖ, uᵢ⟩
          = ⟨vₖ₊₁, uᵢ⟩ − (⟨vₖ₊₁, uᵢ⟩/‖uᵢ‖²) ⟨uᵢ, uᵢ⟩      (all other terms vanish)
          = ⟨vₖ₊₁, uᵢ⟩ − ⟨vₖ₊₁, uᵢ⟩
⟨uₖ₊₁, uᵢ⟩ = 0

Hence {u₁, u₂, u₃, ..., uₖ₊₁} is an orthogonal set,
and the theorem is true for n.

Finally, normalizing, wᵢ = uᵢ/‖uᵢ‖ gives an orthonormal basis.
Hence every finite dimensional inner product space has an orthonormal set as a
basis.
Theorem: 6
Let V be an inner product space and S = {v₁, v₂, v₃, ..., vₙ} an orthogonal subset of V
consisting of non-zero vectors. If v ∈ L(S), then

v = Σᵢ₌₁ⁿ (⟨v, vᵢ⟩/‖vᵢ‖²) vᵢ.

Proof:
Given: V is an inner product space over F and
S = {v₁, v₂, v₃, ..., vₙ} is a subset of V.

Let v ∈ L(S), v = α₁v₁ + α₂v₂ + α₃v₃ + ... + αₙvₙ.

⟨v, vᵢ⟩ = ⟨α₁v₁ + α₂v₂ + α₃v₃ + ... + αₙvₙ, vᵢ⟩
       = α₁⟨v₁, vᵢ⟩ + α₂⟨v₂, vᵢ⟩ + ... + αᵢ⟨vᵢ, vᵢ⟩ + ... + αₙ⟨vₙ, vᵢ⟩
       = αᵢ⟨vᵢ, vᵢ⟩      [since ⟨vᵢ, vⱼ⟩ = 0 if i ≠ j]
⟨v, vᵢ⟩ = αᵢ ‖vᵢ‖²

so αᵢ = ⟨v, vᵢ⟩/‖vᵢ‖², i = 1, 2, 3, ..., n.

∴ v = Σᵢ₌₁ⁿ αᵢvᵢ = Σᵢ₌₁ⁿ (⟨v, vᵢ⟩/‖vᵢ‖²) vᵢ

Theorem: 7
Let V be an inner product space over F and let {v₁, v₂, v₃, ..., vₙ} be an orthonormal set in
V. Then v₁, v₂, v₃, ..., vₙ are linearly independent.

Proof:

Given: {v₁, v₂, v₃, ..., vₙ} is an orthonormal set in V, so ⟨vᵢ, vᵢ⟩ = 1 and ⟨vᵢ, vⱼ⟩ = 0 if i ≠ j.

To prove: v₁, v₂, v₃, ..., vₙ are linearly independent.

Let α₁v₁ + α₂v₂ + α₃v₃ + ... + αₙvₙ = 0, αᵢ ∈ F.

⟨α₁v₁ + α₂v₂ + α₃v₃ + ... + αₙvₙ, v₁⟩ = ⟨0, v₁⟩ = 0
α₁⟨v₁, v₁⟩ + α₂⟨v₂, v₁⟩ + α₃⟨v₃, v₁⟩ + ... + αₙ⟨vₙ, v₁⟩ = 0
α₁·1 + α₂·0 + α₃·0 + ... + αₙ·0 = 0
α₁ = 0

Similarly, for 1 ≤ i ≤ n,
⟨α₁v₁ + α₂v₂ + ... + αᵢvᵢ + ... + αₙvₙ, vᵢ⟩ = ⟨0, vᵢ⟩ = 0
α₁⟨v₁, vᵢ⟩ + α₂⟨v₂, vᵢ⟩ + ... + αᵢ⟨vᵢ, vᵢ⟩ + ... + αₙ⟨vₙ, vᵢ⟩ = 0
α₁·0 + α₂·0 + ... + αᵢ·1 + ... + αₙ·0 = 0
αᵢ = 0
∴ α₁ = α₂ = α₃ = ... = αₙ = 0
Hence v₁, v₂, v₃, ..., vₙ are linearly independent.

Working rule to find an orthogonal basis and an orthonormal basis
from the given vectors:

Step 1: Let the given vectors be taken as {v₁, v₂, v₃}.

Step 2: Let the orthogonal basis be {u₁, u₂, u₃},
where
u₁ = v₁
u₂ = v₂ − (⟨v₂, u₁⟩/‖u₁‖²) u₁
u₃ = v₃ − (⟨v₃, u₁⟩/‖u₁‖²) u₁ − (⟨v₃, u₂⟩/‖u₂‖²) u₂.

Step 3: Let the orthonormal basis be {w₁, w₂, w₃},
where w₁ = u₁/‖u₁‖, w₂ = u₂/‖u₂‖, w₃ = u₃/‖u₃‖.

Note: The Fourier coefficients of v relative to the orthonormal basis {w₁, w₂, w₃} are
given as ⟨v, w₁⟩, ⟨v, w₂⟩, ⟨v, w₃⟩.

Problem 1:
In R²(R), test whether S = {v₁, v₂}, where v₁ = (1, 1) and v₂ = (1, −1), is a basis. Find
the orthogonal basis with the standard inner product. Also find the Fourier
coefficients of (3, 4) relative to the orthonormal basis.

Solution:
Let S = {v₁, v₂} = {(1, 1), (1, −1)}

Let A = [1 1; 1 −1]     (rows v₁, v₂)

|A| = (−1 − 1) = −2 ≠ 0 and dim R² = 2.

Hence S is a basis of R²(R).

To find the orthogonal basis:

Let the orthogonal basis be {u₁, u₂},
where

u₁ = v₁ = (1, 1)

u₂ = v₂ − (⟨v₂, u₁⟩/‖u₁‖²) u₁

⟨v₂, u₁⟩ = ⟨(1, −1), (1, 1)⟩ = 1 + (−1) = 0

‖u₁‖² = ⟨u₁, u₁⟩ = ⟨(1, 1), (1, 1)⟩ = 1 + 1 = 2

∴ u₂ = (1, −1) − (0/2)(1, 1) = (1, −1) − (0, 0) = (1, −1)

u₁ = (1, 1) and u₂ = (1, −1)

∴ The orthogonal basis {u₁, u₂} = {(1, 1), (1, −1)}.

To find the orthonormal basis:

Let the orthonormal basis be {w₁, w₂},

where w₁ = u₁/‖u₁‖ = (1, 1)/√⟨u₁, u₁⟩ = (1, 1)/√2 = (1/√2, 1/√2)

w₂ = u₂/‖u₂‖

Here ‖u₂‖ = √⟨u₂, u₂⟩ = √⟨(1, −1), (1, −1)⟩ = √2

∴ w₂ = (1, −1)/√2 = (1/√2, −1/√2).

Fourier coefficients:

Given vector v = (3, 4).

The Fourier coefficients of the vector v relative to the orthonormal basis are

⟨v, w₁⟩ = ⟨(3, 4), (1/√2, 1/√2)⟩ = 3/√2 + 4/√2 = 7/√2

⟨v, w₂⟩ = ⟨(3, 4), (1/√2, −1/√2)⟩ = 3/√2 − 4/√2 = −1/√2
Problem 2:
In R⁴, let w₁ = (1, 0, 1, 0), w₂ = (1, 1, 1, 1) and w₃ = (0, 1, 2, 1). Then {w₁, w₂, w₃} is linearly
independent. Use the Gram-Schmidt process to compute orthogonal vectors v₁, v₂, v₃ and
then normalize these vectors to obtain an orthonormal set.

Solution:

Given w₁ = (1, 0, 1, 0), w₂ = (1, 1, 1, 1) and w₃ = (0, 1, 2, 1).

To find the orthogonal basis:

Let the orthogonal basis be {v₁, v₂, v₃},
where

v₁ = w₁ = (1, 0, 1, 0)

v₂ = w₂ − (⟨w₂, v₁⟩/‖v₁‖²) v₁

⟨w₂, v₁⟩ = ⟨(1, 1, 1, 1), (1, 0, 1, 0)⟩ = 1 + 0 + 1 + 0 = 2

‖v₁‖² = ⟨v₁, v₁⟩ = ⟨(1, 0, 1, 0), (1, 0, 1, 0)⟩ = 1 + 0 + 1 + 0 = 2

v₂ = (1, 1, 1, 1) − (2/2)(1, 0, 1, 0) = (1, 1, 1, 1) − (1, 0, 1, 0) = (0, 1, 0, 1)

v₂ = (0, 1, 0, 1).

v₃ = w₃ − (⟨w₃, v₁⟩/‖v₁‖²) v₁ − (⟨w₃, v₂⟩/‖v₂‖²) v₂

⟨w₃, v₁⟩ = ⟨(0, 1, 2, 1), (1, 0, 1, 0)⟩ = 0 + 0 + 2 + 0 = 2

⟨w₃, v₂⟩ = ⟨(0, 1, 2, 1), (0, 1, 0, 1)⟩ = 0 + 1 + 0 + 1 = 2

‖v₂‖² = ⟨v₂, v₂⟩ = ⟨(0, 1, 0, 1), (0, 1, 0, 1)⟩ = 0 + 1 + 0 + 1 = 2

∴ v₃ = (0, 1, 2, 1) − (2/2)(1, 0, 1, 0) − (2/2)(0, 1, 0, 1)
     = (0, 1, 2, 1) − (1, 0, 1, 0) − (0, 1, 0, 1) = (−1, 0, 1, 0)

v₃ = (−1, 0, 1, 0).

The orthogonal basis {v₁, v₂, v₃} = {(1, 0, 1, 0), (0, 1, 0, 1), (−1, 0, 1, 0)}.

The orthogonal vectors are normalized to obtain the orthonormal basis.

Let the orthonormal basis be {u₁, u₂, u₃},

where u₁ = v₁/‖v₁‖ = (1, 0, 1, 0)/√2

u₂ = v₂/‖v₂‖ = (0, 1, 0, 1)/√2

u₃ = v₃/‖v₃‖ = (−1, 0, 1, 0)/√2.

∴ The orthonormal basis {u₁, u₂, u₃} = {(1/√2)(1, 0, 1, 0), (1/√2)(0, 1, 0, 1), (1/√2)(−1, 0, 1, 0)}.
Problem 3:
Let V = P(R) with the inner product ⟨f(x), g(x)⟩ = ∫₋₁¹ f(t)g(t) dt, and consider the
subspace P₂(R) with the standard basis {1, x, x²}. Use the Gram-Schmidt process to
replace this basis by an orthogonal basis {v₁, v₂, v₃} for P₂(R), and then use the
orthogonal basis to obtain an orthonormal basis for P₂(R).

Solution:

The standard basis of P₂(R) is {1, x, x²}.

Let w₁ = 1, w₂ = x and w₃ = x².

To find the orthogonal basis:

Let the orthogonal basis be {v₁, v₂, v₃},
where v₁ = w₁ = 1

v₂ = w₂ − (⟨w₂, v₁⟩/‖v₁‖²) v₁

⟨w₂, v₁⟩ = ⟨x, 1⟩ = ∫₋₁¹ t·1 dt = ∫₋₁¹ t dt = 0, since t is an odd function.

‖v₁‖² = ⟨1, 1⟩ = ∫₋₁¹ 1·1 dt = ∫₋₁¹ dt = [t]₋₁¹ = 1 + 1 = 2

∴ v₂ = x − (0/2)(1) = x − 0 = x

v₂ = x.

v₃ = w₃ − (⟨w₃, v₁⟩/‖v₁‖²) v₁ − (⟨w₃, v₂⟩/‖v₂‖²) v₂

⟨w₃, v₁⟩ = ⟨x², 1⟩ = ∫₋₁¹ t²·1 dt = [t³/3]₋₁¹ = 1/3 + 1/3 = 2/3

⟨w₃, v₂⟩ = ⟨x², x⟩ = ∫₋₁¹ t²·t dt = ∫₋₁¹ t³ dt = 0, since t³ is an odd function.

‖v₂‖² = ⟨x, x⟩ = ∫₋₁¹ t·t dt = ∫₋₁¹ t² dt = [t³/3]₋₁¹ = 1/3 + 1/3 = 2/3

v₃ = x² − ((2/3)/2)(1) − (0/(2/3))(x) = x² − 1/3

v₃ = x² − 1/3.

The orthogonal basis {v₁, v₂, v₃} = {1, x, x² − 1/3}.

The orthogonal vectors are normalized to obtain the orthonormal basis.

Let the orthonormal basis be {u₁, u₂, u₃},

where u₁ = v₁/‖v₁‖ = 1/√2

u₂ = v₂/‖v₂‖ = x/√(2/3) = √(3/2) x

u₃ = v₃/‖v₃‖

‖v₃‖² = ⟨x² − 1/3, x² − 1/3⟩ = ∫₋₁¹ (t² − 1/3)(t² − 1/3) dt
      = ∫₋₁¹ (t⁴ − (2/3)t² + 1/9) dt = 2/5 − 4/9 + 2/9 = 8/45

u₃ = (x² − 1/3)/√(8/45) = √(45/8) (x² − 1/3).

∴ The orthonormal basis for P₂(R) is {u₁, u₂, u₃} = {1/√2, √(3/2) x, √(45/8)(x² − 1/3)}.

Problem 4:
Apply the Gram-Schmidt process to find the orthogonal basis and orthonormal basis for
the subset S = {(1, i, 0), (1 − i, 2, 4i)} of C³. Also find the Fourier coefficients of the
vector v = (3 + i, 4i, −4) relative to the orthonormal basis.

Solution:
Given S = {(1, i, 0), (1 − i, 2, 4i)}.

Let v₁ = (1, i, 0), v₂ = (1 − i, 2, 4i).

To find the orthogonal basis:

Let the orthogonal basis be {u₁, u₂},
where

u₁ = v₁ = (1, i, 0)
u₂ = v₂ − (⟨v₂, u₁⟩/‖u₁‖²) u₁

⟨v₂, u₁⟩ = ⟨(1 − i, 2, 4i), (1, i, 0)⟩ = (1 − i)·1 + 2·(−i) + 4i·0 = 1 − i − 2i = 1 − 3i

‖u₁‖² = ⟨u₁, u₁⟩ = ⟨(1, i, 0), (1, i, 0)⟩ = 1 + i(−i) + 0 = 1 + 1 = 2

u₂ = (1 − i, 2, 4i) − ((1 − 3i)/2)(1, i, 0)
   = (1 − i − (1 − 3i)/2, 2 − (1 − 3i)i/2, 4i)
   = ((1 + i)/2, (1 − i)/2, 4i)

u₁ = (1, i, 0) and u₂ = ((1 + i)/2, (1 − i)/2, 4i)

∴ The orthogonal basis {u₁, u₂} = {(1, i, 0), ((1 + i)/2, (1 − i)/2, 4i)}.

To find the orthonormal basis:

Let the orthonormal basis be {w₁, w₂},

where w₁ = u₁/‖u₁‖ = (1, i, 0)/√2 = (1/√2, i/√2, 0)

w₂ = u₂/‖u₂‖

Here ‖u₂‖² = ⟨u₂, u₂⟩ = (1 + i)(1 − i)/4 + (1 − i)(1 + i)/4 + 4i(−4i) = 1/2 + 1/2 + 16 = 17

∴ w₂ = ((1 + i)/(2√17), (1 − i)/(2√17), 4i/√17).

∴ The orthonormal basis {w₁, w₂} = {(1/√2, i/√2, 0), ((1 + i)/(2√17), (1 − i)/(2√17), 4i/√17)}.

Fourier coefficients:

Let v = (3 + i, 4i, −4).

The Fourier coefficients of v relative to the orthonormal basis are

⟨v, w₁⟩ = ⟨(3 + i, 4i, −4), (1/√2, i/√2, 0)⟩ = (3 + i)/√2 + 4i(−i)/√2 = (7 + i)/√2

⟨v, w₂⟩ = ⟨(3 + i, 4i, −4), ((1 + i)/(2√17), (1 − i)/(2√17), 4i/√17)⟩ = √17 i

Problem 5:
Apply the Gram-Schmidt process to find the orthogonal basis and orthonormal basis for
the subset S = { [3 5; −1 1], [−1 9; 5 −1], [−7 17; −2 6] } of M₂ₓ₂(R), with the inner
product ⟨A, B⟩ = trace(BᵗA). Also find the Fourier coefficients of the matrix
A = [−1 27; −4 8] relative to the orthonormal basis.
(Matrices are written row-wise: [a b; c d] has first row a, b and second row c, d.)

Solution:

Let w₁ = [3 5; −1 1], w₂ = [−1 9; 5 −1], w₃ = [−7 17; −2 6].

To find the orthogonal basis:

Let the orthogonal basis be {v₁, v₂, v₃},
where

v₁ = w₁ = [3 5; −1 1]

v₂ = w₂ − (⟨w₂, v₁⟩/‖v₁‖²) v₁

⟨w₂, v₁⟩ = trace(v₁ᵗ·w₂)
        = trace( [3 −1; 5 1]·[−1 9; 5 −1] ) = trace( [−8 28; 0 44] )
        = sum of diagonal elements = −8 + 44 = 36

‖v₁‖² = ⟨v₁, v₁⟩ = trace(v₁ᵗ·v₁)
      = trace( [3 −1; 5 1]·[3 5; −1 1] ) = trace( [10 14; 14 26] )
      = sum of diagonal elements = 10 + 26 = 36

v₂ = [−1 9; 5 −1] − (36/36)[3 5; −1 1] = [−4 4; 6 −2]

v₂ = [−4 4; 6 −2].

v₃ = w₃ − (⟨w₃, v₁⟩/‖v₁‖²) v₁ − (⟨w₃, v₂⟩/‖v₂‖²) v₂

⟨w₃, v₁⟩ = trace(v₁ᵗ·w₃)
        = trace( [3 −1; 5 1]·[−7 17; −2 6] ) = trace( [−19 45; −37 91] )
        = sum of diagonal elements = −19 + 91 = 72

⟨w₃, v₂⟩ = trace(v₂ᵗ·w₃)
        = trace( [−4 6; 4 −2]·[−7 17; −2 6] ) = trace( [16 −32; −24 56] )
        = sum of diagonal elements = 16 + 56 = 72

‖v₂‖² = trace(v₂ᵗ·v₂)
      = trace( [−4 6; 4 −2]·[−4 4; 6 −2] ) = trace( [52 −28; −28 20] )
      = sum of diagonal elements = 52 + 20 = 72

∴ v₃ = [−7 17; −2 6] − (72/36)[3 5; −1 1] − (72/72)[−4 4; 6 −2] = [−9 3; −6 6]

The orthogonal basis {v₁, v₂, v₃} = { [3 5; −1 1], [−4 4; 6 −2], [−9 3; −6 6] }.

The orthogonal vectors are normalized to obtain the orthonormal basis.

Let the orthonormal basis be {u₁, u₂, u₃},

where u₁ = v₁/‖v₁‖ = (1/6)[3 5; −1 1]

u₂ = v₂/‖v₂‖ = (1/√72)[−4 4; 6 −2] = (1/(6√2))[−4 4; 6 −2]

u₃ = v₃/‖v₃‖

‖v₃‖² = ⟨v₃, v₃⟩ = trace(v₃ᵗ·v₃)
      = trace( [−9 −6; 3 6]·[−9 3; −6 6] ) = trace( [117 −63; −63 45] )
      = sum of diagonal elements = 117 + 45 = 162

u₃ = (1/√162)[−9 3; −6 6] = (1/(9√2))[−9 3; −6 6]

∴ The orthonormal basis {u₁, u₂, u₃}
  = { (1/6)[3 5; −1 1], (1/(6√2))[−4 4; 6 −2], (1/(9√2))[−9 3; −6 6] }.

Fourier coefficients:

Let A = [−1 27; −4 8].

The Fourier coefficients of A relative to the orthonormal basis are

⟨A, u₁⟩ = trace(u₁ᵗ·A) = trace( (1/6)[3 −1; 5 1]·[−1 27; −4 8] )
        = trace( (1/6)[1 73; −9 143] )
        = (1/6)(sum of diagonal elements) = (1/6)(144) = 24
⟨A, u₁⟩ = 24

⟨A, u₂⟩ = trace(u₂ᵗ·A) = trace( (1/(6√2))[−4 6; 4 −2]·[−1 27; −4 8] )
        = trace( (1/(6√2))[−20 −60; 4 92] )
        = (1/(6√2))(sum of diagonal elements) = (1/(6√2))(72) = 6√2
⟨A, u₂⟩ = 6√2

⟨A, u₃⟩ = trace(u₃ᵗ·A) = trace( (1/(9√2))[−9 −6; 3 6]·[−1 27; −4 8] )
        = trace( (1/(9√2))[33 −291; −27 129] )
        = (1/(9√2))(sum of diagonal elements) = (1/(9√2))(162) = 9√2
⟨A, u₃⟩ = 9√2.

3.4 Adjoint of a Linear Operator

Definition: Adjoint of a matrix

Let A ∈ M_{m×n}(F). If A = [a_ij]_{m×n}, then the adjoint (conjugate transpose) of A,
written A*, is defined by A* = (Ā)ᵗ. A* is an n×m matrix.

For example, if A = [1+i  i; −1  2−3i], then Ā = [1−i  −i; −1  2+3i]

and A* = (Ā)ᵗ = [1−i  −1; −i  2+3i].

Definition: Trace of a matrix A

Let A ∈ M_{n×n}(F). If A = [a_ij]_{n×n}, then the trace of A is the sum of the
diagonal elements,

i.e. tr(A) = Σᵢ₌₁ⁿ aᵢᵢ.

Definition:
Let V = M_{n×n}(F) be the vector space where F is R or C. The inner product of A
and B in V defined as ⟨A, B⟩ = tr(B*A) is called the Frobenius inner product on
V.

Problem
If A = [1  2+i; 3  i] and B = [1+i  0; 0  i] belong to M₂ₓ₂(C), compute ⟨A, B⟩, ‖A‖
and ‖B‖ using the Frobenius inner product.

Solution:
Given A = [1  2+i; 3  i], B = [1+i  0; 0  i].
By the definition of the Frobenius inner product, ⟨A, B⟩ = tr(B*A).
Now B* = [1−i  0; 0  −i]

B*A = [1−i  0; 0  −i]·[1  2+i; 3  i]
    = [1−i  (1−i)(2+i); −3i  −i·i]
    = [1−i  3−i; −3i  1]

⟨A, B⟩ = tr(B*A) = (1 − i) + 1 = 2 − i

‖A‖ = √⟨A, A⟩

⟨A, A⟩ = tr(A*A)
A*A = [1  3; 2−i  −i]·[1  2+i; 3  i]
    = [10  2+4i; 2−4i  6]
tr(A*A) = 10 + 6 = 16
‖A‖ = 4

And ‖B‖ = √⟨B, B⟩
⟨B, B⟩ = tr(B*B)
B*B = [1−i  0; 0  −i]·[1+i  0; 0  i]
    = [2  0; 0  1]
tr(B*B) = 2 + 1 = 3
‖B‖ = √3

Definition: Linear functional

Let V(F) be a vector space. A linear functional on V is a linear transformation
f: V → F. For example:

1. In R², the line 2x − 3y = 15 is the set of all points at which the linear
functional f(x, y) = 2x − 3y has value 15.

2. If V(F) is an inner product space and y ∈ V, the function g: V → F
defined by g(x) = ⟨x, y⟩, x ∈ V, is clearly linear. So g is a linear functional.

34
Note: Let V (F ) be a finite dimensional inner product space and g:V  F be a linear
functional. Then there exists a unique vector � ∈ � such that g(x)=x,y,x  V

Definition: Adjoint operator

Let V(F) be a finite dimensional inner product space and T a linear operator on V. The adjoint of T on V, written T*, is the operator defined by ⟨T(u),v⟩ = ⟨u,T*(v)⟩ ∀u,v ∈ V.

Theorem 1: Let V(F) be a finite dimensional inner product space. If T* is the adjoint of T on V, then T* is linear and unique.

Proof:
Given T is a linear operator on V and T* is defined by ⟨T(u),v⟩ = ⟨u,T*(v)⟩ ∀u,v ∈ V.
First we prove T* is linear.

If v, w ∈ V, then for every u ∈ V,

⟨u, T*(v+w)⟩ = ⟨T(u), v+w⟩
= ⟨T(u),v⟩ + ⟨T(u),w⟩
= ⟨u,T*(v)⟩ + ⟨u,T*(w)⟩
= ⟨u, T*(v)+T*(w)⟩

Since this holds ∀u ∈ V, T*(v+w) = T*(v) + T*(w), ∀v,w ∈ V.

For any α ∈ F and every u ∈ V,

⟨u, T*(αv)⟩ = ⟨T(u), αv⟩
= ᾱ⟨T(u),v⟩
= ᾱ⟨u,T*(v)⟩
= ⟨u, αT*(v)⟩

∴ T*(αv) = αT*(v), ∀v ∈ V.

Hence T* is a linear transformation on V.

Next we prove T* is unique.

Suppose S : V → V is a linear transformation such that ⟨T(u),v⟩ = ⟨u,S(v)⟩ ∀u,v ∈ V.

Then ⟨u,T*(v)⟩ = ⟨u,S(v)⟩, ∀u,v ∈ V

⟹ T*(v) = S(v), ∀v ∈ V

⟹ T* = S

Hence T* is unique.

Theorem 2: Let V(F) be a finite dimensional inner product space and B = {v₁, v₂, ..., vₙ} an orthonormal basis of V. If T is a linear operator on V with A = [T]_B, then [T*]_B = A*.

Proof:
Let A = [T]_B and C = [T*]_B. Since B is orthonormal,

C_ij = ⟨T*(v_j), v_i⟩
= conj⟨v_i, T*(v_j)⟩
= conj⟨T(v_i), v_j⟩
= conj(A_ji) = (A*)_ij

Hence C = A*.
Thus if A = [T]_B, then [T*]_B = A*.

Theorem 3: If V(F) is an inner product space and S, T are any linear operators on V, then

1. (T*)* = T
2. (S + T)* = S* + T*
3. (αT)* = ᾱT*
4. (ST)* = T*S*

Proof:
1. For any u, v ∈ V, consider

⟨u, (T*)*(v)⟩ = ⟨T*(u), v⟩
= conj⟨v, T*(u)⟩
= conj⟨T(v), u⟩
= ⟨u, T(v)⟩, ∀u ∈ V

⟹ (T*)*(v) = T(v), ∀v ∈ V

⟹ (T*)* = T

2. ⟨u, (S+T)*(v)⟩ = ⟨(S+T)(u), v⟩
= ⟨S(u) + T(u), v⟩
= ⟨S(u), v⟩ + ⟨T(u), v⟩
= ⟨u, S*(v)⟩ + ⟨u, T*(v)⟩
= ⟨u, S*(v) + T*(v)⟩
= ⟨u, (S* + T*)(v)⟩, ∀u ∈ V

⟹ (S+T)*(v) = (S* + T*)(v), ∀v ∈ V
⟹ (S+T)* = S* + T*

3. ⟨u, (αT)*(v)⟩ = ⟨(αT)(u), v⟩
= ⟨αT(u), v⟩
= α⟨T(u), v⟩
= α⟨u, T*(v)⟩
= ⟨u, ᾱT*(v)⟩, ∀u ∈ V

⟹ (αT)*(v) = ᾱT*(v), ∀v ∈ V
⟹ (αT)* = ᾱT*

4. ⟨u, (ST)*(v)⟩ = ⟨(ST)(u), v⟩
= ⟨S(T(u)), v⟩
= ⟨T(u), S*(v)⟩
= ⟨u, T*(S*(v))⟩ = ⟨u, (T*S*)(v)⟩, ∀u ∈ V

⟹ (ST)*(v) = (T*S*)(v), ∀v ∈ V
⟹ (ST)* = T*S*

Problem 1:

A linear operator on R²(R) with the standard inner product is defined by T(x,y) = (x+2y, x−y). Find T*.

Solution:

Given R²(R) is an inner product space and T on R²(R) is defined by T(x,y) = (x+2y, x−y). An orthonormal basis is B = {e₁, e₂} where e₁ = (1,0), e₂ = (0,1).

T(e₁) = T(1,0) = (1,1) = (1,0) + (0,1)
⟹ T(e₁) = e₁ + e₂ and
T(e₂) = T(0,1) = (2,−1) = (2,0) + (0,−1) = 2(1,0) + (−1)(0,1)
⟹ T(e₂) = 2e₁ − e₂

Matrix [T]_B = [1  1; 2  −1]ᵗ = [1  2; 1  −1]

Matrix [T*]_B = ([T]_B)* = [1  1; 2  −1]

T*(x,y) in the basis B is [1  1; 2  −1][x; y] = [x+y; 2x−y]

∴ T*(x,y) = (x+y, 2x−y)
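For a real matrix in the standard (orthonormal) basis, the adjoint is simply the transpose, and the defining property ⟨T(u),v⟩ = ⟨u,T*(v)⟩ can be spot-checked numerically. A small sketch for the operator of Problem 1 above:

```python
import numpy as np

# T(x, y) = (x + 2y, x - y) in the standard basis
T = np.array([[1.0, 2.0],
              [1.0, -1.0]])
T_star = T.T  # real matrix: adjoint = transpose

rng = np.random.default_rng(0)
for _ in range(100):
    u, v = rng.standard_normal(2), rng.standard_normal(2)
    # <T u, v> must equal <u, T* v> for every u, v
    assert np.isclose((T @ u) @ v, u @ (T_star @ v))
```

This is only a numerical spot check, not a proof, but it illustrates why [T*]_B is the (conjugate) transpose of [T]_B when B is orthonormal.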

Problem 2:

A linear operator T on R²(R) is defined by T(x,y) = (2x+y, x−3y) with the standard inner product. Find T*(x,y) and T*(3,5).

Solution:

Given T(x,y) = (2x+y, x−3y).

An orthonormal basis is B = {e₁, e₂} where e₁ = (1,0), e₂ = (0,1).

T(e₁) = T(1,0) = (2,1) = 2(1,0) + (0,1)
⟹ T(e₁) = 2e₁ + e₂
T(e₂) = T(0,1) = (1,−3) = (1,0) + (0,−3) = (1,0) + (−3)(0,1)
⟹ T(e₂) = e₁ − 3e₂

Matrix [T]_B = [2  1; 1  −3]ᵗ = [2  1; 1  −3]

Matrix [T*]_B = ([T]_B)* = [2  1; 1  −3]

T*(x,y) in the basis B is [2  1; 1  −3][x; y] = [2x+y; x−3y]

∴ T*(x,y) = (2x+y, x−3y)

Hence T*(3,5) = (2·3+5, 3−3·5) = (11, −12)

3.5 Least Square Approximation

A system of linear equations can be represented by a matrix equation AX = b, where A is the coefficient matrix, X is the column matrix of the variables and b is the column matrix of constants. The equation AX = b may be consistent with a unique solution or infinitely many solutions, or inconsistent, i.e. with no solution.

The least square solution of a linear equation AX = b is a vector X₀ of smallest norm that minimizes ‖AX − b‖,

i.e. ‖AX₀ − b‖ ≤ ‖AX − b‖, ∀X

Finding a vector X₀ for which ‖AX − b‖ is as small as possible is called the general least square problem.

When X₀ is the best approximation solution of AX = b, the error in this solution is

E = ‖AX₀ − b‖

Working Rule:
To find the least square solution of AX = b:

Step 1: Compute A*A and A*b.

Step 2: Form the normal equation (A*A)X = A*b.

Step 3: If rank A = n, the number of variables, then A*A is invertible. Find (A*A)⁻¹.

Step 4: The solution is X = (A*A)⁻¹A*b.

Minimal solution:

Suppose AX = b is consistent with infinitely many solutions. Then it has exactly one minimal (least-norm) solution s. If u is a solution of (AA*)X = b, then the minimal solution is s = A*u.
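The working rule translates directly into code. A hedged sketch with NumPy (the solve step assumes A*A is invertible, i.e. rank A = n; the function name is ours):

```python
import numpy as np

def least_squares(A, b):
    """Least square solution of A X = b via the normal equations
    (A* A) X = A* b.  Assumes A has full column rank."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    AtA = A.T @ A                       # Step 1: A*A (A real, so A* = A^t)
    Atb = A.T @ b                       #         and A*b
    X0 = np.linalg.solve(AtA, Atb)      # Steps 2-4: solve the normal equations
    error = np.linalg.norm(A @ X0 - b)  # least square error ||A X0 - b||
    return X0, error
```

Forming A*A explicitly is fine for small hand-sized problems like the ones below; for ill-conditioned A, `np.linalg.lstsq(A, b, rcond=None)` is the numerically preferred route.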

Problem 1:
Find the least square solution and least square error of the equation

[2  1; 1  2; 1  1][x₁; x₂] = [1; 1; 1]

Solution:

Here A = [2  1; 1  2; 1  1], X = [x₁; x₂] and b = [1; 1; 1].

Since A is a real matrix, A* = Aᵗ = [2  1  1; 1  2  1]

The normal equations are (A*A)X = A*b.

A*A = AᵗA = [2  1  1; 1  2  1][2  1; 1  2; 1  1] = [6  5; 5  6]

And A*b = [2  1  1; 1  2  1][1; 1; 1] = [4; 4]

But rank(A) = 2 = number of variables

∴ A*A is invertible.

Normal equations: [6  5; 5  6][x₁; x₂] = [4; 4]

⟹ 6x₁ + 5x₂ = 4 …………(1)
⟹ 5x₁ + 6x₂ = 4 …………(2)

Solving equations (1) and (2), we get x₁ = 4/11 and x₂ = 4/11.

∴ the least square solution is X₀ = [4/11; 4/11]

The least square error E = ‖AX₀ − b‖

Now AX₀ = [2  1; 1  2; 1  1][4/11; 4/11] = [12/11; 12/11; 8/11]

AX₀ − b = [12/11 − 1; 12/11 − 1; 8/11 − 1] = [1/11; 1/11; −3/11]

‖AX₀ − b‖ = √⟨(AX₀ − b), (AX₀ − b)⟩ = √((1/11)² + (1/11)² + (−3/11)²) = √(11/121) = 1/√11 ≈ 0.3

Problem 2: Find the least square solution of the inconsistent system

x − 5y = 3, 2x + 2y = 2, x + y = −5

Also find the least square error.

Solution:

Given system is x − 5y = 3, 2x + 2y = 2, x + y = −5.

Therefore the coefficient matrix A = [1  −5; 2  2; 1  1], X = [x; y] and b = [3; 2; −5]. So the matrix equation is AX = b. A is real, so A* = Aᵗ = [1  2  1; −5  2  1]

The normal equation is (A*A)X = A*b …………………(1)

Now A*A = AᵗA = [1  2  1; −5  2  1][1  −5; 2  2; 1  1] = [6  0; 0  30]

And A*b = [1  2  1; −5  2  1][3; 2; −5] = [2; −16]

But rank(A) = 2 = number of variables

∴ A*A is invertible.

Normal equations: [6  0; 0  30][x; y] = [2; −16]

⟹ 6x = 2, 30y = −16

⟹ x = 1/3, y = −8/15

∴ the least square solution is X₀ = [1/3; −8/15]

The least square error E = ‖AX₀ − b‖

Now AX₀ = [1  −5; 2  2; 1  1][1/3; −8/15] = [3; −2/5; −1/5]

AX₀ − b = [3 − 3; −2/5 − 2; −1/5 + 5] = [0; −12/5; 24/5]

‖AX₀ − b‖ = √⟨(AX₀ − b), (AX₀ − b)⟩ = √(0 + (12/5)² + (24/5)²) = √(720/25) ≈ 5.37

Problem 3:

Solve the system of equations in the least square sense:

2x₁ + 2x₂ + 2x₃ = 1, 2x₁ + 2x₂ + 2x₃ = 3, 2x₁ + 2x₂ + 6x₃ = −2

Solution:

The given equations are

2x₁ + 2x₂ + 2x₃ = 1
2x₁ + 2x₂ + 2x₃ = 3
2x₁ + 2x₂ + 6x₃ = −2

From the first two equations, it is obvious the equations are inconsistent. So we can find an approximate solution only in the least square sense.

The coefficient matrix A = [2  2  2; 2  2  2; 2  2  6], X = [x₁; x₂; x₃], b = [1; 3; −2]

So the matrix equation is AX = b.

The normal equation is (A*A)X = A*b.

Since A is a real matrix, A* = Aᵗ = [2  2  2; 2  2  2; 2  2  6]

Now A*A = AᵗA = [12  12  20; 12  12  20; 20  20  44]

And A*b = Aᵗb = [4; 4; −4]

The normal equation (A*A)X = A*b gives

[12  12  20; 12  12  20; 20  20  44][x₁; x₂; x₃] = [4; 4; −4]

12x₁ + 12x₂ + 20x₃ = 4,
20x₁ + 20x₂ + 44x₃ = −4

Here A*A is singular (rank A = 2), so the least square solution is not unique. Put x₁ = 1; we get

3x₂ + 5x₃ = −2,
5x₂ + 11x₃ = −6

Solving, we get x₂ = 1, x₃ = −1.

A least square solution is x₁ = 1, x₂ = 1, x₃ = −1, i.e.

X₀ = [1; 1; −1]
 
Problem 4:

Find the minimal solution of the system

x + 2y − z = 1, 2x + 3y + z = 2, 4x + 7y − z = 4

Solution:

The given equations are

x + 2y − z = 1,
2x + 3y + z = 2,
4x + 7y − z = 4

The coefficient matrix A = [1  2  −1; 2  3  1; 4  7  −1], X = [x; y; z], b = [1; 2; 4]

So the matrix equation is AX = b.

|A| = 0

Therefore the rows of A are linearly dependent, and the system is consistent with infinitely many solutions.

To find the minimal solution, we must find one solution of (AA*)X = b.

Since A is a real matrix, A* = Aᵗ = [1  2  4; 2  3  7; −1  1  −1]

Now AA* = AAᵗ = [6  7  19; 7  14  28; 19  28  66]

and (AA*)X = b gives

[6  7  19; 7  14  28; 19  28  66][x; y; z] = [1; 2; 4]

6x + 7y + 19z = 1,
7x + 14y + 28z = 2,
19x + 28y + 66z = 4

Since |AA*| = 0, these equations are dependent, and any one solution suffices.

Put x = 0; we get

7y + 19z = 1,
14y + 28z = 2

Solving, we get y = 1/7, z = 0.

Therefore one solution is x = 0, y = 1/7, z = 0, i.e. u = [0; 1/7; 0]

The minimal solution is s = A*u:

s = [1  2  4; 2  3  7; −1  1  −1][0; 1/7; 0] = [2/7; 3/7; 1/7]

The minimal solution is x = 2/7, y = 3/7, z = 1/7.
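The minimal solution can also be obtained with the Moore-Penrose pseudoinverse: for a consistent system, pinv(A) @ b is exactly the solution of smallest norm, because it lies in the row space of A. A sketch for the system above:

```python
import numpy as np

A = np.array([[1.0, 2, -1],
              [2, 3, 1],
              [4, 7, -1]])
b = np.array([1.0, 2, 4])

# Minimum-norm solution of the (consistent, rank-deficient) system
s = np.linalg.pinv(A) @ b
# matches the hand computation s = (2/7, 3/7, 1/7)
```

Note that s = A*u with u = (0, 1/7, 0) puts s in the row space of A; since the null space of A is orthogonal to the row space, no other solution can have smaller norm, which is what the pseudoinverse computes.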

3.6 Least square approximation method of finding a curve of best fit


The general problem of finding the equation of the approximating curve
that fits the given data is called curve fitting.

1. Line of best fit (or) Least square line


Suppose we have n data points (x1, y1 ), (x2 , y2 ),… (xn , yn ) and the
relationship suggested is linear, say y  ax  b.

Then this system can be solved by Least square Method.

46
Problem 1
Find the least square line fitted to the data (1,1), (2,2), (3,2), (4,3). Also find the least square error.

Solution: Let y = ax + b ……….(1)

be the line fitted to the given data (1,1), (2,2), (3,2), (4,3).
Substituting the points in (1), we get

a·1 + b = 1
a·2 + b = 2
a·3 + b = 2
a·4 + b = 3

The coefficient matrix A = [1  1; 2  1; 3  1; 4  1], X = [a; b], B = [1; 2; 2; 3]

The matrix equation is AX = B.

The normal equations are (A*A)X = A*B.

Since A is real, A* = A′ = [1  2  3  4; 1  1  1  1]

Now A*A = A′A = [30  10; 10  4]

And A*B = A′B = [23; 8]

The normal equations become [30  10; 10  4][a; b] = [23; 8]

30a + 10b = 23 ……….(2)
10a + 4b = 8 …………(3)

Solving equations (2) and (3), we get

a = 3/5 and b = 1/2

Therefore X₀ = [3/5; 1/2] is the least square solution,

and the line of best fit is y = (3/5)x + 1/2.

And error = ‖B − AX₀‖

AX₀ = [1  1; 2  1; 3  1; 4  1][3/5; 1/2] = [11/10; 17/10; 23/10; 29/10]

B − AX₀ = [1 − 11/10; 2 − 17/10; 2 − 23/10; 3 − 29/10] = [−1/10; 3/10; −3/10; 1/10]

Error = ‖B − AX₀‖ = √(1²/10² + 3²/10² + 3²/10² + 1²/10²) = √(20/100) ≈ 0.45
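The same fit is available from a ready-made least squares routine; `numpy.polyfit` with degree 1 returns the least square line. A small sketch for the data above:

```python
import numpy as np

x = np.array([1.0, 2, 3, 4])
y = np.array([1.0, 2, 2, 3])

a, b = np.polyfit(x, y, 1)                   # least square line y = a x + b
residual = np.linalg.norm(y - (a * x + b))   # least square error
# a = 3/5, b = 1/2, residual = sqrt(0.2) ≈ 0.447
```

Internally this solves the same normal equations as the hand computation, so the coefficients agree exactly (up to floating point).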

3.6.2 Parabola of Best Fit (or) Least Square Parabola

Suppose we have n data points (x₁,y₁), (x₂,y₂), …, (xₙ,yₙ) and the relationship suggested is quadratic, say y = ax² + bx + c.

Then this system can be solved by the least square method.

Problem:
Find the parabola of best fit to the data (−3,9), (−2,6), (0,2), (1,1). Also find the error.

Solution:
The given points are (−3,9), (−2,6), (0,2), (1,1).

Let y = ax² + bx + c be the curve of best fit. The matrix equation is AX = B,

where A = [9  −3  1; 4  −2  1; 0  0  1; 1  1  1], X = [a; b; c] and B = [9; 6; 2; 1]

We want the least squares approximation of AX = B.

The normal equations are (A*A)X = A*B.

A is a real matrix, so A* = Aᵗ = [9  4  0  1; −3  −2  0  1; 1  1  1  1]

Now A*A = AᵗA = [98  −34  14; −34  14  −4; 14  −4  4]

And A*B = AᵗB = [106; −38; 18]

The normal equations become

[98  −34  14; −34  14  −4; 14  −4  4][a; b; c] = [106; −38; 18]

Therefore by solving the system we get

a = 1/3, b = −4/3 and c = 2

So the best fitted curve is

y = (1/3)x² − (4/3)x + 2

X₀ = [1/3; −4/3; 2] is the least square solution.

And the error is ‖B − AX₀‖.

Now AX₀ = [9  −3  1; 4  −2  1; 0  0  1; 1  1  1][1/3; −4/3; 2] = [9; 6; 2; 1]

B − AX₀ = [9−9; 6−6; 2−2; 1−1] = [0; 0; 0; 0]

Therefore the error is ‖B − AX₀‖ = 0.
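As with the line fit, `numpy.polyfit` with degree 2 gives the least square parabola directly; for this data the four points happen to lie exactly on the parabola, so the residual vanishes:

```python
import numpy as np

x = np.array([-3.0, -2, 0, 1])
y = np.array([9.0, 6, 2, 1])

# Least square parabola y = a x^2 + b x + c (coefficients highest power first)
a, b, c = np.polyfit(x, y, 2)
residual = np.linalg.norm(y - np.polyval([a, b, c], x))
# a = 1/3, b = -4/3, c = 2, residual = 0
```

A zero residual is the special case where the over-determined system AX = B is in fact consistent, so the least square solution is an exact solution.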

51
3.7. Practice Quiz

1. Let V be an inner product space, and let x and y be two vectors in V such
that ‖x‖ = 3 and ‖y‖ = 4. What exactly can we say about ‖x + y‖?

a) ‖x + y‖ = 5
b) ‖x + y‖ is between 0 and 7 inclusive.
c) ‖x + y‖ is between 1 and 7 inclusive.
d) ‖x + y‖ = 7.

2. Any set of linearly independent vectors can be orthonormalized by the:


a) Pound-Smith procedure.
b) Sobolev method.
c) Sobolev-P method.
d) Gram-Schmidt procedure.

3. A unitary matrix is defined by the expression:


a) U = U T , where superscript T means transpose.
b) U = U † , † means Hermitian conjugate.
c) U = U ∗ .
d) U −1 = U † .

4. If B= {v1,v2,v3} is an orthogonal set of vectors with respect to an inner


Product on a vector space V, then the set B

a) Is linearly independent
b) Is linearly dependent
c) Is an orthonormal basis for V
d) Span the vector space V

5. The cosine of the angle between the two vectors v = (1,-1,1) & u = (1,1,1)
1
a) is equal to 3
1
b) is equal to 9

c) is equal to √ 19
1
d) is equal to − 3

52
6. If the vectors v1, v2 and v3 form an orthonormal basis for R3
v1 = (1,0,0) ; v2 = (0, 1 , 1
); v3 = (0,- 1 1
, );
√2 √2 √2 √2
Then The vector w = (1,-1,1) can be expressed as a linear combination of the
vectors v1, v2 and v3 in the following way:

2 2
a. W = 1.v1 + v2 + v3
√2 √2
b. W = 1. v1 + √22 v2

c. W = 1.v1 + √22 v3
2 2
d. W = 1.v1 - v2 + v3
√2 √2

7. Let u = (-1, 1/4) and v = (4, -1/8). Then find ∥u∥.
17
a) √ 16

b) √1025
16

c) √ 577
4

d) √ 174

8. u = (-1, 1/4) and v = (4, -1/8). find ∥v∥ .

17
a) √ 16

b) √1025
16

577
c) √ 4

d) √ 174

9. u = (-1, 1/4) and v = (4, -1/8). find ∥u+v∥ .


17
a) √ 16

b) √1025
16

577
c) √ 4

d) √ 174

53
10. find the unit vector in the direction u.u = (3, 2, -5)

2
a) ( √338, , 5 )
√38
√38 2
b) ( √338, , − 5)
√38
√38

c) ( −3 , −2 −5
, )
√38 √38
d) ( √338, − 2 , − 5 )
√38 √38 √38

11 Find the distance between two vectors u = (1, 2, 0) and v = (-1, 4, 1)


a. 3
b. 6
c. 9
d. √3

12. Find the dot product < u, v> where u = (3, 4) and v = (2, -3)
a) -30
b) -6
c) 13
d) 36

13. Find the dot product < u , v > v where u = (3, 4) and v = (2, -3)
a) (-12, 18)
b) (12, 18)
c) (-12, -18)
d) (12, -18)

14. Find < u + v , 2u – v > where <u , u> = 4 , <u , v> = −5 and <v , v> = 10

a) 7
b) 13
c) -13
d) -7

54
ANSWERS:

1 2 3 4 5 6 7 8 9 10 11 12 13 14

b d d a a d c b c b a b a d

55
3.8 Assignment 1

1. State and prove Schwarz and Triangle inequality.

2. Prove that every finite dimensional inner product space has an orthonormal
basis. (Or)
State and prove Gram-Schmidt orthogonalisation process.

3. Find an orthogonal basis of the inner product space R3(R) with standard inner
product, given the basis B={(1,1,0),(1,-1,1),(-1,1,2)} using Gram-
Schmidt orthogonalisation process.Also find the Fourier coefficients of the
vector (2,1,3) relative to orthonormal basis.

56
3.8 Assignment 2

57
3.8 Assignment 3

1. Find the minimal solution of the following system of linear equations:

x + 2y − z = 1, 2x + 3y + z = 2, 4x + 7y − z = 4.

2. Use the least squares approximation to find the best fits with both
i)a linear function and ii) a Quadratic function. Also
Compute the error E in both cases for the following data
{(-3,9),(- 2,6),(0,2),(1,1)}
3.8 Assignment 4

1. Apply Gram Schmidt process, find the orthogonal basis and


orthonormal basis for the sub set S  {(1,0,1), (0,1,1), (1,3,3)}. Also find the
Fourier coefficients of the vector v  (1,1, 2) relative to orthonormal basis.

2. Find the line of best fit to the data (2,1), (5,2), (7,3), (8,3). Also find the
least square error.
3.8 Assignment 5

1. Use the Frobenius inner product to compute || A ||, || B || and A, B

2. Find the minimal solution of the following system of linear equations

x  2 y  z  4, x  2 y  2z  11, x  5y  19.
3.9 Part – A Questions and Answers –Unit III

S.NO QUESTIONS K CO
Level
1. K1 CO4
Define an inner product space. Give an example.

Solution:
Let V be a real vector space. Suppose to each pair of
vectors, there is assigned a real number, denoted by u, v .

This function is called a real inner product on V if it satisfies


the following axioms

Linear property: au1  bu2 ,v  a u1 ,v  b u2 ,v

Symmetric property: u, v  v, u

Positive definite property: u,u  0 and u,u  0 if and only

if u  0 . The vector space V with an inner product is called


an inner product space.

Example: Let V (C) be the vector space of all polynomials in

t with coefficients in C. If f (t), g (t)V . Let us define


1

f (t), g ( t )   f ( t ) g ( t ) dt .
0

2. K2 CO4

If V is an inner product space, prove that ‖−u‖ = ‖u‖.

Solution:
‖−u‖² = ⟨−u, −u⟩ = (−1)²⟨u,u⟩ = ‖u‖² ⟹ ‖−u‖ = ‖u‖.

61
3. If f(t) = 3t − 5, g(t) = t² in the polynomial space P(t) with inner K2 CO4
product ⟨f(t), g(t)⟩ = ∫₀¹ f(t)g(t)dt, find ‖f‖ and ‖g‖.

Solution:
Given f(t) = 3t − 5, g(t) = t²

‖f‖² = ⟨f(t), f(t)⟩ = ∫₀¹ (3t−5)² dt = ∫₀¹ (9t² − 30t + 25) dt
= [3t³ − 15t² + 25t]₀¹ = 3 − 15 + 25 = 13

∴ ‖f(t)‖ = √13

‖g‖² = ⟨g(t), g(t)⟩ = ∫₀¹ t⁴ dt = [t⁵/5]₀¹ = 1/5 ∴ ‖g(t)‖ = 1/√5.
4. State Cauchy-Schwarz inequality. K1 CO4
Solution:
For any vectors u and v in an inner product space V, |⟨u,v⟩| ≤ ‖u‖ ‖v‖.

5. If u = (2,3,5) and v = (1,−4,3) in R³, find the angle between these K2 CO4
vectors.
Solution:
Given u = (2,3,5) and v = (1,−4,3)

⟨u,v⟩ = ⟨(2,3,5), (1,−4,3)⟩ = 2 − 12 + 15 = 5

‖u‖ = √(2² + 3² + 5²) = √(4+9+25) = √38

‖v‖ = √(1² + (−4)² + 3²) = √(1+16+9) = √26

Then the angle θ between u and v is given by

cos θ = ⟨u,v⟩ / (‖u‖ ‖v‖) = 5 / (√38 √26).
62
6. Define orthogonality. K1 CO4

Solution:
Let V be an inner product space over F and u, v V . The

vector u is said to be orthogonal (perpendicular) to v if


u, v  0 .

7. Define orthonormal set. K1 CO4

Solution:
Let v be an inner product space. A set S  ui  of vectors in V is said

to be an orthonormal set if u , iu j 0  for i  j


1 for i  j

8. Find a unit vector orthogonal to v₁ = (1,2,1) and v₂ = (3,1,0) in R³ with K2 CO4
standard inner product.

Solution:

Let w = (x,y,z) be orthogonal to v₁ and v₂. Then by the definition
of orthogonal vectors, ⟨w,v₁⟩ = 0 and ⟨w,v₂⟩ = 0.

Now ⟨w,v₁⟩ = x + 2y + z = 0 ⋯(1) and ⟨w,v₂⟩ = 3x + y = 0 ⋯(2)

Any non-trivial solution of the above system gives a vector
orthogonal to both v₁ and v₂.

Solving the system of equations (1) and (2), we get

x = −1, y = 3, z = −5.

∴ w = (−1, 3, −5)

‖w‖ = √((−1)² + 3² + (−5)²) = √(1+9+25) = √35

The required unit vector is w/‖w‖ = (−1, 3, −5)/√35.
63
9. Find k so that u = (1,2,k,3) and v = (3,k,7,−5) in R⁴ are K2 CO4
orthogonal.
Solution:
u is said to be orthogonal to v if ⟨u,v⟩ = 0.

Now ⟨u,v⟩ = ⟨(1,2,k,3), (3,k,7,−5)⟩ = 3 + 2k + 7k − 15

0 = 9k − 12 ⟹ k = 12/9 = 4/3.

10. Find the Fourier coefficient c of v = (1,−2,3,4) relative to w = (1,2,1,2) K2 CO4
in R⁴.
Solution:

Now ⟨v,w⟩ = ⟨(1,−2,3,4), (1,2,1,2)⟩ = 1 − 4 + 3 + 8 = 8

‖w‖² = ⟨w,w⟩ = 1² + 2² + 1² + 2² = 1+4+1+4 = 10

c = ⟨v,w⟩ / ‖w‖² = 8/10 = 4/5.

11. If A = [1  2+i; 3  i] belongs to M₂ₓ₂(C), compute ‖A‖. K2 CO4

Solution:
‖A‖ = √⟨A,A⟩
⟨A,A⟩ = tr(A*A)

A* = [1  3; 2−i  −i]
A*A = [10  2+4i; 2−4i  6]
⟨A,A⟩ = 10 + 6 = 16

‖A‖ = √16 = 4.

64
12. In the inner product space V(R), u, v ∈ V are such that u+v K2 CO4
and u−v are orthogonal. Prove that ‖u‖ = ‖v‖.

Solution:
Given u+v and u−v are orthogonal:
⟨u+v, u−v⟩ = 0
⟨u,u⟩ − ⟨u,v⟩ + ⟨v,u⟩ − ⟨v,v⟩ = 0

‖u‖² − ⟨u,v⟩ + ⟨u,v⟩ − ‖v‖² = 0 (since ⟨v,u⟩ = ⟨u,v⟩ in a real space)

∴ ‖u‖ = ‖v‖.
13. If T : C³ → C³ is defined by T(x,y,z) = (ix + (2+3i)y, 3x + (3−i)z, K2 CO4
(2−5i)y + iz), find T*(x,y,z).

Solution:
[T]ₑ = [i  2+3i  0; 3  0  3−i; 0  2−5i  i]. Since e is the standard orthonormal basis

for C³, [T*]ₑ = ([T]ₑ)*

[T*]ₑ = [−i  3  0; 2−3i  0  2+5i; 0  3+i  −i]

∴ T*(x,y,z) = (−ix + 3y, (2−3i)x + (2+5i)z, (3+i)y − iz).

14. Define adjoint of linear operator. K1 CO4

Solution:
A linear operator T on an inner product space V is said to
have an adjoint operator T * on V, if

T ( u) , v  u,T * ( v ) u,v V .

65
15. Prove that if S and T are linear operators on V, then K2 CO4
(ST)* = T*S*.

Solution:
Given that S and T are linear operators,

⟨(ST)(u), v⟩ = ⟨S(T(u)), v⟩ = ⟨T(u), S*(v)⟩ = ⟨u, T*(S*(v))⟩ = ⟨u, (T*S*)(v)⟩

The uniqueness of the adjoint operator implies that

(ST)* = T*S*.

16. Define curve Fitting. K2 CO5

Solution:

The general problem of finding the equation of the approximating


curve that fits the given data is called curve fitting.

67
3.10 PART B

S.NO QUESTIONS K CO
Level
1. Apply Gram Schmidt process, find the orthogonal basis and K2 CO4
orthonormal basis for
the subset S = {(1,0,1), (0,1,1), (1,3,3)}. Also find the Fourier
coefficients of the vector v = (1,1,2) relative to the orthonormal
basis.
ANS:
(i) The orthogonal basis:
(u₁, u₂, u₃) = ((1,0,1), (−1/2, 1, 1/2), (1/3, 1/3, −1/3)).
(ii) The orthonormal basis:
(w₁, w₂, w₃) = ((1/√2, 0, 1/√2), (−1/√6, 2/√6, 1/√6), (1/√3, 1/√3, −1/√3)).
(iii) Fourier coefficients: ⟨v,w₁⟩ = 3/√2, ⟨v,w₂⟩ = √6/2, ⟨v,w₃⟩ = 0.

2. Find the orthogonal and orthonormal basis using Gram K2 CO4

Schmidt method for the basis (sint, cost, 1, t ) using the inner

product  f , g   f ( t ) g ( t ) dt .
0

ANS:
(i) The orthogonal basis

(u1,u 2,u 3,u 4)  sin t, cost, 1 4 sin t,8cos t  2t  2 . 


  
(ii)The orthonormal basis
 
(w1, w2 , w3, w4 )   2 sin t, 2
cos t,
  4 sin t 8 cos t  2 t  2 
, .
    3  8 5  
 32 
3 

68
3. Represent the polynomial f (x) 1 2x 3x2 as a linear K2 CO4

combination of the vectors in the orthonormal basis

 1 2 5 
 , x, 3x 2 1 ( ) for P (R). Where
2
 2 2 8 
1

 f , g   f (t) g (t)dt .
1

ANS:

2 6, 2 10
 f (x ),u 1  2 2,  f (x ),u 2   f (x ),u 3  .
3 5

 1   2 6  3  2 10  5 
f (x )  2 2   x
3  2 

5  8
3x 2
1  ( )
 2 

4. Use the Frobenius inner product to compute K2 CO4


|| A ||, || B || and A, B

1 2  i  and B   1 i 0 .
A  
3 i  
 i i 

Ans: || A ||  4, || B ||  3 and A, B  i

5. Find the line of best fit to the data (2,1), (5,2), (7,3), (8,3). Also K2 CO5
find the least square error.

Ans: y = (5/14)x + 2/7
6. Find the parabola of best fit to the data K2 CO5
(1,1),(0,1),(1,0),(2, 2)Also find the least square error.

Ans: y  x 2  3 x  7
5 10

69
7. Find the minimal solution of the following system K2 CO 5
of inear equations
x  2 y  z  4, x  2 y  2z  11, x  5y  19.

 1 
Ans: s   4 
 
3
 
8. Prove that if S is an orthogonal set of non-zero vectors, then is K2 CO5
S linearly independent. (or)

Suppose u1 , u1 ,..., ur  is an orthogonal set of vectors then prove


that|| u1  u 2 ........  u ||r 2 || u ||1 2  || u ||22 .... || u ||2 r

9. Prove that let T be a linear operator on n-dimensional inner K2 CO4


product space v , then there exists
a unique operator T * on v such that

T ( u) , v  u,T * ( v )  u,v V .

Let V be the inner product space . Prove that the norm in V K2 CO4
satisfy the following properties
10
(i) || v ||  0 and || v ||  0 iff v  0

(ii) || kv ||  || k || || v ||

(iii) || u  v ||  || u |||| v ||

70
SUPPORTIVE ONLINE CERTIFICATION COURSES

The following NPTEL and Coursera courses are the supportive online certification
courses for the subject Linear Algebra

Introduction to Abstract and Linear Algebra


https://swayam.gov.in/nd1_noc20_ma31/preview

Linear Algebra
https://swayam.gov.in/nd1_noc20_ma54/preview

Linear Algebra
https://www.coursera.org/learn/algebra-lineynaya

71
REAL TIME APPLICATIONS

View the following videos on YouTube:

Applications Of Matrices

https://youtu.be/rowWM-MijXU

Essence of Linear Algebra

https://youtu.be/fNk_zzaMoSs

Applications of Linear combination,Span& Basis

https://youtu.be/k7RM-ot2NWY

Applications of Eigen Values & Eigen vectors

https://youtu.be/jUIIm5C-xFs

72
Assessment Schedule (Proposed Date & Actual Date)

Sl. Proposed Actual


ASSESSMENT
No. Date Date

24/2/25 to
1 FIRST INTERNAL ASSESSMENT
25/2/25

01/04/25 to
2 SECOND INTERNAL ASSESSMENT
05/04/25

28/04/25 to
3 MODEL EXAMINATION
07/05/25
MINI PROJECT - 1

Project Title: Real-Time Hand Gesture Recognition Using Inner Products and
Norms

Objective: Build a real-time hand gesture recognition system that classifies different
gestures based on 3D hand pose data using norms and inner products.

Tasks:
Collect data on hand movements (e.g., using a camera or a depth sensor like
Kinect).
Preprocess the data into vectors representing different hand poses.
Apply inner product and norms to classify gestures based on the angle or distance
between gesture vectors.
Use the Gram-Schmidt process to create an orthogonal feature space for gesture
classification.
MINI PROJECT - 2

Project Title: Real-Time Video Compression with Least Squares for Efficient
Encoding
Objective: Implement a real-time video compression algorithm using least squares
approximation to reduce video size while maintaining quality.
Tasks:
Collect video data and represent frames as matrices.
Apply least squares approximation to reduce the dimensionality of the video
frames.
Implement the compression and decompression algorithms, and assess video
quality using metrics like PSNR (Peak Signal-to-Noise Ratio).
Develop a real-time interface to compress and display videos in near real-time with
minimal loss in quality.
MINI PROJECT - 3

Project Title: Real-Time GPS Data Correction Using Least Squares


Approximation
Objective: Develop a real-time system that improves GPS trajectory data by using
least squares to minimize errors caused by signal noise.
Tasks:
Collect raw GPS data (latitude, longitude, and altitude) over time.
Apply a least squares approximation to smooth the trajectory and minimize the
GPS errors.
Implement the Gram-Schmidt process to generate orthogonal components of
the trajectory.
Use the corrected trajectory to estimate the user's true path in real-time.
Create a real-time visualization of the corrected GPS data on a map interface.
MINI PROJECT - 4

Project Title: Real-Time Audio Noise Reduction Using Orthogonalization


Objective: Develop a real-time audio signal processing system that reduces
noise from a noisy audio signal using orthogonalization and least squares
methods.
Tasks:
Capture an audio signal with background noise (e.g., speech with
noise).
Use Gram-Schmidt orthogonalization to separate the noise from the
speech signal.
Apply least squares approximation to reconstruct a cleaner version of
the speech signal.
Implement real-time filtering to process audio and remove noise.
Evaluate the effectiveness of the algorithm by comparing the signal-to-
noise ratio (SNR) before and after processing.
MINI PROJECT - 5

Project Title: Real-Time Image Reconstruction Using Least Squares in Medical


Imaging (e.g., MRI, CT Scan)
Objective: Apply least squares approximation to reconstruct high-quality medical images
in real-time, improving diagnostic accuracy.
Tasks:
Use data from medical imaging scans (e.g., MRI or CT scan) which may have
noise or incomplete information.
Apply least squares approximation techniques to reduce errors and improve the
quality of reconstructed images.
Use the Gram-Schmidt process to optimize image reconstruction by improving the
orthogonality of basis functions.
Implement real-time image reconstruction algorithms for quick medical analysis
and visualization.
Test and compare the quality of reconstructed images before and after using least
squares methods.
PRESCRIBED TEXT BOOKS AND REFERENCE BOOKS

LINEAR ALGEBRA AND APPLICATIONS 24MA201

S. No. TEXT BOOKS

A.H. Friedberg, A. J. Insel, and L. Spence, “Linear Algebra”,


1 Prentice Hall of India, 5th Edition, New Delhi, 2008.

Steven J. Leon, “Linear Algebra with Applications”, Pearson


2 Educational International”, 9th Edition, United States of
America, 2015.

REFERENCES :
G. Strang, “Linear Algebra and its applications”, Thomson
1
(Brooks / Cole), 4th Edition, New Delhi, 2005.

2 C.F. Gerald and P.O. Wheatley, “Applied Numerical Analysis”,


7th Edition, Pearson Education, New Delhi, 2004.

3 Richard Branson, “Matrix Operations”, Schaum's outline


series, 1989.

4 Bernard Kolman, R. David R. Hill, “Introductory Linear


Algebra”,
Pearson Educations, New Delhi, First Reprint, 2009.

5 S. Kumaresan, “Linear Algebra - A geometric approach”,


Prentice Hall of India, New Delhi, Reprint, 2010.

NPTEL course on "Linear Algebra", by Prof. K. C.


6. Sivakumar, IIT Madras:
https://archive.nptel.ac.in/courses/111/106/111106051/#

79
Thank you

Disclaimer:

This document is confidential and intended solely for the educational purpose of RMK Group of
Educational Institutions. If you have received this document through email in error, please notify the
system manager. This document contains proprietary information and is intended only to the
respective group / learning community as intended. If you are not the addressee you should not
disseminate, distribute or copy through e-mail. Please notify the sender immediately by e-mail if you
have received this document by mistake and delete this document from your system. If you are not
the intended recipient you are notified that disclosing, copying, distributing or taking any action in
reliance on the contents of this information is strictly prohibited.
