
ll Jai Sri Gurudev ll

SRI ADICHUNCHANAGIRI SHIKSHANA TRUST®

SJB INSTITUTE OF TECHNOLOGY

Department of Mathematics

Module-5: Inner Product Spaces


by

Dr. Bhavya Shivaraj


BLESSINGS

His Divine Soul Padmabhushana Sri Sri Sri Dr. Balagangadharanatha Maha Swamiji, Founder President, Sri Adichunchanagiri Mutt

His Holiness Jagadguru Sri Sri Sri Dr. Nirmalanandanatha Maha Swamiji, Chief Pontiff, Sri Adichunchanagiri Shikshana Trust (R)

Revered Sri Sri Dr. Prakashnath Swamiji, Managing Director, BGS & SJB Group of Institutions
Inner Product:

A vector space V together with a specified inner product on V is called an inner product space.

Orthogonality:

Let V be an inner product space. The vectors u, v ∈ V are said to be orthogonal (and u is said to be orthogonal to v) if

⟨u, v⟩ = 0

Dr. Bhavya Shivaraj, Dept. Of Mathematics, SJBIT


Inner Product Spaces

• u ⋅ v = dot product (the Euclidean inner product for Rⁿ)
• ⟨u, v⟩ = general inner product for a vector space V.
• Definition: Let u, v, and w be vectors in a vector space V,
and let c be any scalar. An inner product on V is a function
that associates a real number ⟨u, v⟩ with each pair of vectors u
and v and satisfies the following axioms:
1. ⟨u, v⟩ = ⟨v, u⟩
2. ⟨u, v + w⟩ = ⟨u, v⟩ + ⟨u, w⟩
3. c⟨u, v⟩ = ⟨cu, v⟩
4. ⟨v, v⟩ ≥ 0, and ⟨v, v⟩ = 0 iff v = 0.



Conti…
• A vector space V with an inner product is called an inner
product space.
1) Show that the following function defines an inner product on
R²:
⟨u, v⟩ = u₁v₁ + 2u₂v₂, where u = (u₁, u₂), v = (v₁, v₂)
(compare the dot product u ⋅ v = u₁v₁ + u₂v₂).
pf: 1. ⟨u, v⟩ = u₁v₁ + 2u₂v₂ = v₁u₁ + 2v₂u₂ = ⟨v, u⟩
2. Let w = (w₁, w₂). Then
⟨u, v + w⟩ = u₁(v₁ + w₁) + 2u₂(v₂ + w₂)
= (u₁v₁ + 2u₂v₂) + (u₁w₁ + 2u₂w₂)
= ⟨u, v⟩ + ⟨u, w⟩



Conti…
3. If c is any scalar, then
c⟨u, v⟩ = c(u₁v₁ + 2u₂v₂)
= (cu₁)v₁ + 2(cu₂)v₂
= ⟨cu, v⟩

4. Because the square of a real number is nonnegative,

⟨v, v⟩ = v₁² + 2v₂² ≥ 0
Moreover, this expression equals zero iff v = 0.



2) Show that the following function is not an inner product on R³:
⟨u, v⟩ = u₁v₁ − 2u₂v₂ + u₃v₃, where u = (u₁, u₂, u₃), v = (v₁, v₂, v₃).

pf: Let v = (1, 2, 1). Then

⟨v, v⟩ = (1)(1) − 2(2)(2) + (1)(1) = −6 < 0

This violates axiom 4, so the given function is not an inner product on R³.
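These two examples can be verified numerically. Below is a small Python sketch (the helper names `ip_weighted` and `ip_bad` are mine, not from the slides) that checks symmetry and positivity for the R² product and exhibits the axiom-4 failure on R³:

```python
def ip_weighted(u, v):
    # <u, v> = u1*v1 + 2*u2*v2 on R^2 (the valid inner product above)
    return u[0]*v[0] + 2*u[1]*v[1]

def ip_bad(u, v):
    # <u, v> = u1*v1 - 2*u2*v2 + u3*v3 on R^3 (fails positivity)
    return u[0]*v[0] - 2*u[1]*v[1] + u[2]*v[2]

u, v = (1, 2), (3, -1)
# Axiom 1 (symmetry) holds for the weighted product:
assert ip_weighted(u, v) == ip_weighted(v, u)
# Axiom 4 (positivity) holds: <v, v> = v1^2 + 2*v2^2 >= 0
assert ip_weighted(v, v) >= 0
# The R^3 candidate violates axiom 4 at v = (1, 2, 1):
print(ip_bad((1, 2, 1), (1, 2, 1)))   # -6 < 0
```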



1. Consider f(t) = 3t − 5 and g(t) = t², with the inner product
⟨f, g⟩ = ∫₀¹ f(t)g(t) dt. Find ⟨f, g⟩.

⟨f, g⟩ = ∫₀¹ (3t − 5) t² dt
= ∫₀¹ (3t³ − 5t²) dt
= [3t⁴/4 − 5t³/3] from t = 0 to 1
= 3/4 − 5/3
= −11/12.
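This integral inner product is easy to sanity-check numerically. A hedged sketch (the helper `inner` is my own, approximating ∫₀¹ f(t)g(t) dt with the midpoint rule):

```python
def inner(f, g, n=100_000):
    # <f, g> = integral_0^1 f(t) g(t) dt, midpoint-rule approximation
    h = 1.0 / n
    return sum(f((i + 0.5) * h) * g((i + 0.5) * h) for i in range(n)) * h

def f(t):
    return 3*t - 5

def g(t):
    return t**2

print(inner(f, g))   # ≈ -11/12 ≈ -0.9167
```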



Inner Product on M2,2 & Pn

• Let A = [a₁₁ a₁₂; a₂₁ a₂₂] and B = [b₁₁ b₁₂; b₂₁ b₂₂]
be matrices in the vector space M2,2. The function given by

⟨A, B⟩ = a₁₁b₁₁ + a₂₁b₂₁ + a₁₂b₁₂ + a₂₂b₂₂

is an inner product on M2,2.

• Let p = a₀ + a₁x + ⋯ + aₙxⁿ and q = b₀ + b₁x + ⋯ + bₙxⁿ
be polynomials in the vector space Pn. The function given
by ⟨p, q⟩ = a₀b₀ + a₁b₁ + ⋯ + aₙbₙ

is an inner product on Pn.

• The verification of the four inner product axioms is left to you.
Properties of Inner Products
Let u, v, and w be vectors in an inner product
space V, and let c be any real number.

1. ⟨0, v⟩ = ⟨v, 0⟩ = 0
2. ⟨u + v, w⟩ = ⟨u, w⟩ + ⟨v, w⟩
3. ⟨u, cv⟩ = c⟨u, v⟩



Norm, Distance & Angle

Let u and v be vectors in an inner product space V.

1. The norm (or length) of u is ‖u‖ = √⟨u, u⟩
2. The distance between u and v is d(u, v) = ‖u − v‖
3. The angle between two nonzero vectors u and v is given by
cos θ = ⟨u, v⟩ / (‖u‖ ‖v‖), 0 ≤ θ ≤ π
4. u and v are orthogonal if ⟨u, v⟩ = 0

• If ‖v‖ = 1, then v is called a unit vector.
• If v is any nonzero vector, then the vector u = v/‖v‖
is called the unit vector in the direction of v.
Let p(x) = 1 − 2x², q(x) = 4 − 2x + x², and r(x) = x + 2x²
be polynomials in P2, and determine the following.
1. ⟨p, q⟩ = a₀b₀ + a₁b₁ + a₂b₂ = (1)(4) + (0)(−2) + (−2)(1) = 2

2. ⟨q, r⟩ = (4)(0) + (−2)(1) + (1)(2) = 0

3. ‖q‖ = √⟨q, q⟩ = √(4² + (−2)² + 1²) = √21

4. d(p, q) = ‖p − q‖ = ‖(1 − 2x²) − (4 − 2x + x²)‖
= ‖−3 + 2x − 3x²‖
= √((−3)² + 2² + (−3)²) = √22
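The P2 computations above reduce to coefficient arithmetic, so they are easy to script. A sketch (the helper names are mine), representing each polynomial by its coefficient tuple (a₀, a₁, a₂):

```python
def ip(p, q):
    # coefficient inner product on P2: <p, q> = sum of a_i * b_i
    return sum(a * b for a, b in zip(p, q))

def norm(p):
    return ip(p, p) ** 0.5

p = (1, 0, -2)    # 1 - 2x^2
q = (4, -2, 1)    # 4 - 2x + x^2
r = (0, 1, 2)     # x + 2x^2

print(ip(p, q))   # 2
print(ip(q, r))   # 0  (q and r are orthogonal)
print(norm(q))    # sqrt(21)
diff = tuple(a - b for a, b in zip(p, q))
print(norm(diff)) # sqrt(22)
```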



Let u and v be vectors in an inner product space V.
1. Cauchy–Schwarz Inequality: |⟨u, v⟩| ≤ ‖u‖ ‖v‖
2. Triangle Inequality: ‖u + v‖ ≤ ‖u‖ + ‖v‖
3. Pythagorean Theorem:
u and v are orthogonal iff ‖u + v‖² = ‖u‖² + ‖v‖²



Orthogonal

Two vectors in an inner product space are orthogonal if their inner product is zero. The
same definition applies to Euclidean spaces, where angles are defined, and there
orthogonality means that either the angle between the vectors is π/2 or one of the vectors
is zero. So orthogonality is slightly more general than perpendicularity.

A vector v in an inner product space is a unit vector if its length is 1.



Orthonormal basis

We define a set of vectors to be orthonormal if they are all unit vectors and
each one is orthogonal to each of the others. An orthonormal basis is simply
a basis that is orthonormal. Note that there is no such thing as an
‘orthonormal vector’. The property applies to a whole set of vectors, not to
an individual vector.



Theorem: An orthonormal set of vectors {v1, …, vn} is linearly independent.

Proof: Suppose λ1v1 + … + λnvn = 0. Then ⟨λ1v1 + … + λnvn, vr⟩ = 0 for each r.
But this inner product equals λr⟨vr, vr⟩ = λr, since vr is orthogonal to the
other vectors in the set and vr is a unit vector. Hence each λr = 0.

Because of the above theorem, if we want to show that a set of vectors is an
orthonormal basis we need only show that it is orthonormal and that it spans the space.
Linear independence comes free. Another important consequence of the above
theorem is that it is very easy to find the coordinates of a vector relative to an
orthonormal basis.



Orthogonal Projections: R2

• Let u and v be vectors in the plane. If v is nonzero, then u
can be orthogonally projected onto v. This projection is
denoted by projᵥu.
• projᵥu is a scalar multiple of v, i.e., projᵥu = av.
• If a > 0, then cos θ > 0 and
‖av‖ = a‖v‖ = ‖u‖ cos θ = (‖u‖ ‖v‖ cos θ)/‖v‖ = (u ⋅ v)/‖v‖
⇒ a = (u ⋅ v)/‖v‖² = (u ⋅ v)/(v ⋅ v)

Therefore projᵥu = ((u ⋅ v)/(v ⋅ v)) v

[Figure: u, the angle θ, v, and projᵥu along v]
projᵥu in R2
• If a < 0, then cos θ < 0. The orthogonal projection of u onto v is
given by the same formula.
• The orthogonal projection of u = (4, 2) onto
v = (3, 4) is given by

projᵥu = ((u ⋅ v)/(v ⋅ v)) v = (20/25)(3, 4) = (12/5, 16/5)

[Figure: u, v, and projᵥu sketched for both signs of a]
Orthogonal Projection
• Let u and v be vectors in an inner product space V, with v ≠ 0.
Then the orthogonal projection of u onto v is given by
projᵥu = (⟨u, v⟩/⟨v, v⟩) v
• The orthogonal projection of u = (6, 2, 4) onto v = (1, 2, 0) is
given by

projᵥu = ((u ⋅ v)/(v ⋅ v)) v = (10/5)(1, 2, 0) = (2, 4, 0)

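The projection formula is a one-liner in code. A sketch (the helpers `dot` and `proj` are mine) reproducing the example u = (6, 2, 4), v = (1, 2, 0) and checking that the residual u − projᵥu is orthogonal to v:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def proj(u, v):
    # orthogonal projection of u onto v: (<u, v>/<v, v>) v
    c = dot(u, v) / dot(v, v)
    return tuple(c * x for x in v)

u, v = (6, 2, 4), (1, 2, 0)
p = proj(u, v)
print(p)                   # (2.0, 4.0, 0.0)
residual = tuple(a - b for a, b in zip(u, p))
print(dot(residual, v))    # 0.0 -> u - proj_v(u) is orthogonal to v
```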


• Given u = (6, 2, 4), v = (1, 2, 0), and projᵥu = (2, 4, 0),
u − projᵥu = (6, 2, 4) − (2, 4, 0) = (4, −2, 4) is orthogonal to
v = (1, 2, 0).
• In general, if u and v are nonzero vectors in an inner product space,
then u − projᵥu is orthogonal to v.

[Figure: u, projᵥu along v, and the distance d(u, projᵥu)]


Orthogonal Projection and Distance
Let u and v be vectors in an inner product
space V, with v ≠ 0. Then
d(u, projᵥu) < d(u, cv) for all c ≠ ⟨u, v⟩/⟨v, v⟩

[Figure: projᵥu minimizes the distance from u to the line through v]


Orthogonal Bases
• The standard basis for R3: B = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}
1. The three vectors are mutually orthogonal.
2. Each vector in the basis is a unit vector.

• Definition: Orthogonal & Orthonormal Sets

A set S of vectors in an inner product space V is called
orthogonal if every pair of vectors in S is orthogonal. If, in
addition, each vector in the set is a unit vector, then S is called
orthonormal.


Orthonormal Basis
• For S = {v1, v2, …, vn}:
Orthogonal: 1. ⟨vᵢ, vⱼ⟩ = 0 for i ≠ j
Orthonormal: 1. ⟨vᵢ, vⱼ⟩ = 0 for i ≠ j, and 2. ‖vᵢ‖ = 1 for i = 1, 2, …, n
If S is a basis, then it is called an orthogonal basis or an
orthonormal basis, respectively.
• The standard basis for Rn is orthonormal, but it is not the only
orthonormal basis for Rn.
• For example,
B = {(cos θ, sin θ, 0), (−sin θ, cos θ, 0), (0, 0, 1)}
is a nonstandard orthonormal basis for R3.


• Example 1: Show that the following set is an orthonormal basis for R3:
S = {v1, v2, v3} = {(1/√2, 1/√2, 0), (−√2/6, √2/6, 2√2/3), (2/3, −2/3, 1/3)}

Sol: 1. ‖v1‖ = ‖v2‖ = ‖v3‖ = 1
2. v1 ⋅ v2 = 0, v1 ⋅ v3 = 0, v2 ⋅ v3 = 0
Therefore S is an orthonormal set.

• Example 2: In P3, with inner product

⟨p, q⟩ = a₀b₀ + a₁b₁ + a₂b₂ + a₃b₃,
the standard basis B = {1, x, x², x³} is orthonormal.



Orthogonal Sets Are Linearly Independent
If S = {v1, v2, …, vn} is an orthogonal set of nonzero vectors in an
inner product space V, then S is linearly independent.
pf: Because S is orthogonal, ⟨vᵢ, vⱼ⟩ = 0 for i ≠ j.
Suppose c1v1 + c2v2 + … + cnvn = 0. Then for each i,
⟨(c1v1 + c2v2 + … + cnvn), vᵢ⟩
= c1⟨v1, vᵢ⟩ + c2⟨v2, vᵢ⟩ + … + cᵢ⟨vᵢ, vᵢ⟩ + … + cn⟨vn, vᵢ⟩
= cᵢ⟨vᵢ, vᵢ⟩ = 0
Since ⟨vᵢ, vᵢ⟩ = ‖vᵢ‖² ≠ 0, it follows that cᵢ = 0.

Hence every cᵢ must be zero and the set must be linearly
independent.


• Corollary
If V is an inner product space of dimension n, then any orthogonal
set of n nonzero vectors is a basis for V.
Show that the following set is a basis for R4:
S = {(2, 3, 2, −2), (1, 0, 0, 1), (−1, 0, 2, 1), (−1, 2, −1, 1)}
Sol: Because
v1 ⋅ v2 = 0, v1 ⋅ v3 = 0, v1 ⋅ v4 = 0, v2 ⋅ v3 = 0, v2 ⋅ v4 = 0, v3 ⋅ v4 = 0,

S is orthogonal, and by the corollary S is a basis for R4.


Coordinates Relative to an Orthonormal Basis
If B = {v1, v2, …, vn} is an orthonormal basis for an inner product
space V, then the coordinate representation of a vector w
with respect to B is
w = <w, v1>v1 + <w, v2>v2 + … + <w, vn>vn
pf: Because B is a basis for V, there exist unique scalars
c1, c2, …, cn such that w = c1v1 + c2v2 + … + cnvn.
Taking the inner product of both sides of this equation with vᵢ,
⟨w, vᵢ⟩ = ⟨(c1v1 + c2v2 + … + cnvn), vᵢ⟩
= c1⟨v1, vᵢ⟩ + c2⟨v2, vᵢ⟩ + … + cn⟨vn, vᵢ⟩
= cᵢ⟨vᵢ, vᵢ⟩
Because ⟨vᵢ, vᵢ⟩ = 1, ⟨w, vᵢ⟩ = cᵢ.



Coordinate Matrix
• The coordinate representation of w relative to the
orthonormal basis B = {v1, v2, …, vn} is
w = ⟨w, v1⟩v1 + ⟨w, v2⟩v2 + … + ⟨w, vn⟩vn
The corresponding coordinate matrix of w relative to B is
[w]_B = [c1 c2 ⋯ cn]ᵀ = [⟨w, v1⟩ ⟨w, v2⟩ ⋯ ⟨w, vn⟩]ᵀ

• Find the coordinates of w = (5, −5, 2) relative to
B = {(3/5, 4/5, 0), (−4/5, 3/5, 0), (0, 0, 1)}

Sol: Because B is orthonormal,
w ⋅ v1 = −1, w ⋅ v2 = −7, w ⋅ v3 = 2
Thus [w]_B = [−1 −7 2]ᵀ

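Finding coordinates relative to an orthonormal basis needs no system-solving, only inner products. A sketch (the helper `dot` is mine) reproducing the example above:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# The orthonormal basis from the example above
B = [(3/5, 4/5, 0), (-4/5, 3/5, 0), (0, 0, 1)]
w = (5, -5, 2)

# For an orthonormal basis, the i-th coordinate is just <w, v_i>
coords = [dot(w, v) for v in B]
print(coords)   # [-1.0, -7.0, 2.0]
```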


Gram-Schmidt Orthonormalization Process

• Let B = {v1, v2, …, vn} be a basis for an inner product space V.
• Let B′ = {w1, w2, …, wn} be given by
w1 = v1
w2 = v2 − (⟨v2, w1⟩/⟨w1, w1⟩) w1
w3 = v3 − (⟨v3, w1⟩/⟨w1, w1⟩) w1 − (⟨v3, w2⟩/⟨w2, w2⟩) w2
⋮
wn = vn − (⟨vn, w1⟩/⟨w1, w1⟩) w1 − (⟨vn, w2⟩/⟨w2, w2⟩) w2 − ⋯ − (⟨vn, wn₋₁⟩/⟨wn₋₁, wn₋₁⟩) wn₋₁

Then B′ is an orthogonal basis for V.


Conti…

• Let uᵢ = wᵢ/‖wᵢ‖. Then the set B″ = {u1, u2, …, un}
is an orthonormal basis for V. Moreover,
span{v1, v2, …, vk} = span{u1, u2, …, uk} for k = 1, 2, …, n.
• Let {v1, v2} be a basis for R2.
w1 = v1
w2 = v2 − proj_{w1}v2 = v2 − (⟨v2, w1⟩/⟨w1, w1⟩) w1

{w1, w2} is an orthogonal basis, and
{u1, u2} = {w1/‖w1‖, w2/‖w2‖} is an orthonormal basis.
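The steps above translate directly into code. A sketch (the function `gram_schmidt` is my own name; it assumes the input vectors are linearly independent):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(vectors):
    """Return an orthonormal basis spanning the same subspace
    (assumes the input vectors are linearly independent)."""
    ortho = []
    for v in vectors:
        w = list(v)
        # subtract the projection onto each earlier (unit) direction
        for u in ortho:
            c = dot(v, u)
            w = [wi - c * ui for wi, ui in zip(w, u)]
        n = dot(w, w) ** 0.5
        ortho.append([wi / n for wi in w])
    return ortho

# Worked example 1 below: B = {(1, 1), (0, 1)}
u1, u2 = gram_schmidt([(1, 1), (0, 1)])
print(u1)   # ( sqrt(2)/2, sqrt(2)/2)
print(u2)   # (-sqrt(2)/2, sqrt(2)/2)
```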
1) Apply the Gram-Schmidt orthonormalization process to the following
basis for R2: B = {(1, 1), (0, 1)}.
Sol:
w1 = v1 = (1, 1)
⇒ u1 = w1/‖w1‖ = (√2/2, √2/2)
w2 = v2 − ((v2 ⋅ w1)/(w1 ⋅ w1)) w1
= (0, 1) − (1/2)(1, 1)
= (−1/2, 1/2)
⇒ u2 = w2/‖w2‖ = (−√2/2, √2/2)


2) Apply the Gram-Schmidt orthonormalization process to the following
basis for R3: B = {(1, 1, 0), (1, 2, 0), (0, 1, 2)}.
Sol:
w1 = v1 = (1, 1, 0)
⇒ u1 = w1/‖w1‖ = (√2/2, √2/2, 0)
w2 = v2 − ((v2 ⋅ w1)/(w1 ⋅ w1)) w1 = (1, 2, 0) − (3/2)(1, 1, 0) = (−1/2, 1/2, 0)
⇒ u2 = w2/‖w2‖ = (−√2/2, √2/2, 0)
w3 = v3 − ((v3 ⋅ w1)/(w1 ⋅ w1)) w1 − ((v3 ⋅ w2)/(w2 ⋅ w2)) w2
= v3 − (1/2) w1 − ((1/2)/(1/2)) w2
= (0, 0, 2)
⇒ u3 = w3/‖w3‖ = (0, 0, 1)


3) The vectors v1 = (0, 1, 0) and v2 = (1, 1, 1) span a plane in R3.
Find an orthonormal basis for this subspace.
Sol:
w1 = v1 = (0, 1, 0)
⇒ u1 = w1/‖w1‖ = (0, 1, 0)
w2 = v2 − ((v2 ⋅ w1)/(w1 ⋅ w1)) w1
= (1, 1, 1) − (1/1)(0, 1, 0)
= (1, 0, 1)
⇒ u2 = w2/‖w2‖ = (√2/2, 0, √2/2)

[Figure: the plane spanned by (0, 1, 0) and (1, 1, 1), with the orthogonal pair (0, 1, 0) and (1, 0, 1)]


4) Find an orthonormal basis for the solution space of the
following homogeneous system of linear equations:
x1 + x2 + 7x4 = 0
2x1 + x2 + 2x3 + 6x4 = 0

Sol: Row-reducing the augmented matrix,
[1 1 0 7 | 0]  →  [1 0 2 −1 | 0]
[2 1 2 6 | 0]     [0 1 −2 8 | 0]

Let x3 = s and x4 = t. Then
(x1, x2, x3, x4) = s(−2, 2, 1, 0) + t(1, −8, 0, 1)


5) One basis for the solution space is
B = {v1, v2} = {(−2, 2, 1, 0), (1, −8, 0, 1)}

Apply the Gram-Schmidt orthonormalization process to the basis B:
w1 = v1 = (−2, 2, 1, 0)
⇒ u1 = w1/‖w1‖ = (−2/3, 2/3, 1/3, 0)
w2 = v2 − ((v2 ⋅ w1)/(w1 ⋅ w1)) w1 = (1, −8, 0, 1) − (−18/9)(−2, 2, 1, 0)
= (−3, −4, 2, 1)
⇒ u2 = w2/‖w2‖ = (−3/√30, −4/√30, 2/√30, 1/√30)
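The orthonormal basis obtained above can be verified directly: each vector should satisfy both equations, have unit length, and be orthogonal to the other. A Python sketch (the helper `dot` is mine):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

u1 = (-2/3, 2/3, 1/3, 0)
u2 = (-3/30**0.5, -4/30**0.5, 2/30**0.5, 1/30**0.5)

for x1, x2, x3, x4 in (u1, u2):
    # each vector satisfies both homogeneous equations
    print(round(x1 + x2 + 7*x4, 10), round(2*x1 + x2 + 2*x3 + 6*x4, 10))

print(round(dot(u1, u1), 10))   # 1.0 -> unit length
print(round(dot(u2, u2), 10))   # 1.0 -> unit length
print(round(dot(u1, u2), 10))   # 0.0 -> orthogonal
```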


QR Factorization – Singular Example
Recall the weighted-sum-of-columns view of
systems of equations:
[M1 M2 ⋯ MN] [x1 x2 ⋯ xN]ᵀ = [b1 b2 ⋯ bN]ᵀ
i.e.
x1M1 + x2M2 + ⋯ + xNMN = b

In this example M is singular but b is in the span of the columns of M.


QR Factorization – Key idea
If M has orthogonal columns:

Mᵢ • Mⱼ = 0 for i ≠ j

Multiplying the weighted-columns equation by the i-th column:

Mᵢ • (x1M1 + x2M2 + ⋯ + xNMN) = Mᵢ • b

Simplifying using orthogonality:

xᵢ (Mᵢ • Mᵢ) = Mᵢ • b  ⇒  xᵢ = (Mᵢ • b)/(Mᵢ • Mᵢ)

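With orthogonal columns, the solve is just N independent divisions. A small sketch (the column vectors here are my own toy example, not from the slides):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Columns of M (orthogonal but not normalized) and right-hand side b
M1, M2 = (1, 1), (1, -1)
b = (3, 1)

# With orthogonal columns no elimination is needed:
#   x_i = (M_i . b) / (M_i . M_i)
x1 = dot(M1, b) / dot(M1, M1)
x2 = dot(M2, b) / dot(M2, M2)
print(x1, x2)   # 2.0 1.0, since 2*(1,1) + 1*(1,-1) = (3,1)
```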


QR Factorization - M orthonormal
[Figure: two-dimensional picture of b resolved along columns M1 and M2, in the orthogonal and non-orthogonal cases]

M is orthonormal if:
Mᵢ • Mⱼ = 0 for i ≠ j, and Mᵢ • Mᵢ = 1
QR Factorization – Key idea
Replace the original matrix M (columns M1 … MN, unknowns x) by a
matrix Q with orthonormal columns Q1 … QN and unknowns y:

[M1 M2 ⋯ MN] x = b   →   [Q1 Q2 ⋯ QN] y = b

Then Qy = b ⇒ y = Qᵀb

How to perform the conversion?


QR Factorization – Projection formula
Given M1, M2, find Q2 = M2 − r12 M1 so that

M1 • Q2 = M1 • (M2 − r12 M1) = 0

⇒ r12 = (M1 • M2)/(M1 • M1)

[Figure: M2 decomposed into its component r12 M1 along M1 and the orthogonal remainder Q2]



QR Factorization – Normalization
Formulas simplify if we normalize:
Q1 = (1/√(M1 • M1)) M1 = (1/r11) M1  ⇒  Q1 • Q1 = 1

Now find Q̃2 = M2 − r12 Q1 so that Q̃2 • Q1 = 0:

r12 = Q1 • M2

Finally, Q2 = (1/√(Q̃2 • Q̃2)) Q̃2 = (1/r22) Q̃2



QR Factorization – 2x2 case
Mx = b → Qy = b, where Mx = Qy:

x1 M1 + x2 M2 = y1 Q1 + y2 Q2

with M1 = r11 Q1 and M2 = r22 Q2 + r12 Q1. Matching coefficients,

[r11 r12; 0 r22] [x1; x2] = [y1; y2]

So the system [M1 M2] x = b becomes

[Q1 Q2] [r11 r12; 0 r22] [x1; x2] = [b1; b2]

with [Q1 Q2] orthonormal and R = [r11 r12; 0 r22] upper triangular.

Two-step solve given QR:

Step 1) QRx = b ⇒ Rx = Qᵀb = b̃

Step 2) Backsolve Rx = b̃
QR Factorization – General case

[M1 M2 M3] ⇒ [M1, M2 − r12 M1, M3 − r13 M1 − r23 M2]

To ensure the third column is orthogonal:

M1 • (M3 − r13 M1 − r23 M2) = 0
M2 • (M3 − r13 M1 − r23 M2) = 0

In matrix form:

[M1 • M1  M1 • M2] [r13]   [M1 • M3]
[M2 • M1  M2 • M2] [r23] = [M2 • M3]

In general, one must solve an N×N dense linear system for the coefficients.


QR Factorization – General case

To orthogonalize the N-th vector:

[M1 • M1        ⋯  M1 • M(N−1)   ] [r1,N    ]   [M1 • MN    ]
[⋮              ⋱  ⋮             ] [⋮       ] = [⋮          ]
[M(N−1) • M1    ⋯  M(N−1) • M(N−1)] [r(N−1),N]   [M(N−1) • MN]

This takes N² inner products, i.e. O(N³) work overall.



QR Factorization – General case
Modified Gram-Schmidt Algorithm:

[M1 M2 M3] ⇒ [M1, M2 − r12 Q1, M3 − r13 Q1 − r23 Q2]

To ensure the third column is orthogonal:

Q1 • (M3 − Q1 r13 − Q2 r23) = 0 ⇒ r13 = Q1 • M3

Q2 • (M3 − Q1 r13 − Q2 r23) = 0 ⇒ r23 = Q2 • M3

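The modified Gram-Schmidt recurrences above can be collected into a compact QR routine. A sketch (the function `mgs_qr` is my own name; columns are assumed linearly independent, and the zero-column handling discussed next is not included):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def mgs_qr(cols):
    """Modified Gram-Schmidt: factor the given columns as M = QR with
    orthonormal Q-columns and upper-triangular R (columns assumed
    linearly independent)."""
    n = len(cols)
    Q, R = [], [[0.0] * n for _ in range(n)]
    for j, v in enumerate(cols):
        w = list(v)
        for i, q in enumerate(Q):
            R[i][j] = dot(w, q)          # r_ij = Q_i . (current residual)
            w = [wk - R[i][j] * qk for wk, qk in zip(w, q)]
        R[j][j] = dot(w, w) ** 0.5       # length of the residual
        Q.append([wk / R[j][j] for wk in w])
    return Q, R

Q, R = mgs_qr([(1, 1), (0, 1)])
print(Q)   # orthonormal columns
print(R)   # upper triangular
```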


QR Factorization – Zero Column
What if a column becomes zero?

[Q1  0  M̃3  ⋯  M̃N]

The matrix M MUST be singular! In that case:

1) Do not try to normalize the column.
2) Do not use the column as a source for orthogonalization.
3) Perform backward substitution as well as possible.
QR Factorization – Zero Column

Resulting QR factorization:

              [r11  r12  r13  ⋯  r1N]
              [0    0    0    ⋯  0  ]
[Q1 0 Q3 ⋯ QN][0    0    r33  ⋯  r3N]
              [0    0    0    ⋱  ⋮  ]
              [0    0    0    ⋯  rNN]



Least Squares Solution
• Consider a subspace V = im(A) of Rⁿ, where A = [v1 v2 ⋯ vm].
Then V⊥ = {x in Rⁿ : v • x = 0 for all v in V}
= {x in Rⁿ : vᵢ • x = 0 for i = 1, …, m}
= {x in Rⁿ : vᵢᵀ x = 0 for i = 1, …, m}
so V⊥ is the kernel of the matrix Aᵀ.
• For any matrix A, (im A)⊥ = ker(Aᵀ).
• Consider the line V = im([1; 2; 3]). Then V⊥ = ker([1 2 3]) is the plane with
equation x1 + 2x2 + 3x3 = 0.



What are eigenvalues?
• Given a matrix A, x is an eigenvector and λ is
the corresponding eigenvalue if Ax = λx
– A must be square, and the determinant of A − λI
must be equal to zero
Ax − λx = 0 ⇒ (A − λI) x = 0
• The trivial solution is x = 0
• A nontrivial solution occurs when det(A − λI) = 0
• Are eigenvectors unique?
– If x is an eigenvector, then βx is also an eigenvector
with the same eigenvalue λ:
A(βx) = β(Ax) = β(λx) = λ(βx)
Calculating the Eigenvectors/values
• Expand det(A − λI) = 0 for a 2 x 2 matrix:
det(A − λI) = det([a11 a12; a21 a22] − λ[1 0; 0 1]) = 0
det([a11−λ  a12; a21  a22−λ]) = 0 ⇒ (a11 − λ)(a22 − λ) − a12 a21 = 0
λ² − λ(a11 + a22) + (a11 a22 − a12 a21) = 0
• For a 2 x 2 matrix, this is a simple quadratic equation with two solutions
(maybe complex):
λ = [(a11 + a22) ± √((a11 + a22)² − 4(a11 a22 − a12 a21))] / 2

• This "characteristic equation" is then used to solve for the eigenvectors x


Eigenvalue example
• Consider A = [1 2; 2 4]. Then
λ² − λ(a11 + a22) + (a11 a22 − a12 a21) = 0
λ² − λ(1 + 4) + (1·4 − 2·2) = 0
λ² − 5λ = 0 ⇒ λ = 0, λ = 5

• The corresponding eigenvectors can be computed as
λ = 0: [1 2; 2 4][x; y] = [x + 2y; 2x + 4y] = [0; 0]
λ = 5: [1−5 2; 2 4−5][x; y] = [−4x + 2y; 2x − y] = [0; 0]
– For λ = 0, one possible solution is x = (2, −1)
– For λ = 5, one possible solution is x = (1, 2)

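The characteristic quadratic above gives a closed form for 2 x 2 eigenvalues. A sketch (the function `eig2x2` is mine; it assumes a nonnegative discriminant, i.e. real eigenvalues):

```python
def eig2x2(a11, a12, a21, a22):
    # roots of lambda^2 - (a11 + a22) lambda + (a11 a22 - a12 a21) = 0
    tr = a11 + a22                 # trace
    det = a11 * a22 - a12 * a21    # determinant
    disc = (tr * tr - 4 * det) ** 0.5
    return (tr - disc) / 2, (tr + disc) / 2

print(eig2x2(1, 2, 2, 4))   # (0.0, 5.0), matching the example above
```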


Physical interpretation
• Consider a covariance matrix A, i.e., A = (1/n) S Sᵀ for some data matrix S:
A = [1 0.75; 0.75 1] ⇒ λ1 = 1.75, λ2 = 0.25

• The error ellipse has its major axis along the eigenvector of the larger
eigenvalue and its minor axis along the eigenvector of the smaller eigenvalue



Physical interpretation

[Figure: data cloud plotted against original variables A and B, with principal component directions PC 1 and PC 2]

• Orthogonal directions of greatest variance in the data

• Projections along PC 1 (the first Principal Component) discriminate the data most
along any one axis



Physical interpretation
• First principal component is the direction of
greatest variability (covariance) in the data
• Second is the next orthogonal (uncorrelated)
direction of greatest variability
– So first remove all the variability along the first
component, and then find the next direction of greatest
variability
• And so on …
• Thus each eigenvector provides a direction of
data variance, in decreasing order of eigenvalues



Eigen/diagonal Decomposition
• Let S be a square matrix with m
linearly independent eigenvectors (a "non-defective" matrix)
• Theorem: There exists an eigen decomposition
S = UΛU⁻¹
(cf. the matrix diagonalization theorem); Λ is diagonal,
and the decomposition is unique for distinct eigenvalues

• Columns of U are the eigenvectors of S
• Diagonal elements of Λ are the eigenvalues of S



Diagonal decomposition: why/how
Let U have the eigenvectors as columns: U = [v1 ... vn].

Then SU can be written

SU = S[v1 ... vn] = [λ1 v1 ... λn vn] = [v1 ... vn] diag(λ1, …, λn)

Thus SU = UΛ, or U⁻¹SU = Λ,

and S = UΛU⁻¹.
Diagonal decomposition - example
Recall S = [2 1; 1 2]; λ1 = 1, λ2 = 3.

The eigenvectors [1; −1] and [1; 1] form U = [1 1; −1 1].

Inverting, we have U⁻¹ = [1/2 −1/2; 1/2 1/2]. (Recall UU⁻¹ = I.)

Then, S = UΛU⁻¹ = [1 1; −1 1] [1 0; 0 3] [1/2 −1/2; 1/2 1/2]
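The factorization S = UΛU⁻¹ can be checked by direct multiplication. A sketch (the helper `matmul` is mine):

```python
def matmul(A, B):
    # plain triple-loop matrix product of nested lists
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

U    = [[1, 1], [-1, 1]]       # eigenvectors as columns
Lam  = [[1, 0], [0, 3]]        # eigenvalues on the diagonal
Uinv = [[0.5, -0.5], [0.5, 0.5]]

S = matmul(matmul(U, Lam), Uinv)
print(S)   # [[2.0, 1.0], [1.0, 2.0]] -> recovers S = [2 1; 1 2]
```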
Example continued
Let's divide U (and multiply U⁻¹) by √2.

Then, S = [1/√2 1/√2; −1/√2 1/√2] [1 0; 0 3] [1/√2 −1/√2; 1/√2 1/√2]
             Q                      Λ           Qᵀ   (here Q⁻¹ = Qᵀ)

Why? Stay tuned …



Symmetric Eigen Decomposition
• If S is a symmetric matrix:
• Theorem: There exists a (unique) eigen decomposition
S = QΛQᵀ

• where Q is orthogonal:
– Q⁻¹ = Qᵀ
– Columns of Q are normalized eigenvectors
– Columns are orthogonal
– (everything is real)
Singular Value Decomposition
• If A is a rectangular m × k matrix of real numbers, then there exists an
m × m orthogonal matrix U and a k × k orthogonal matrix V such that

A = U Σ Vᵀ, with UUᵀ = VVᵀ = I
(m×k) (m×m)(m×k)(k×k)
– Σ is an m × k matrix whose (i, i)-th entry is σᵢ ≥ 0, i = 1, …, min(m, k), and
whose other entries are zero
• The positive constants σᵢ are the singular values of A
• If A has rank r, then there exist r positive constants σ1, σ2, …, σr,
r orthogonal m × 1 unit vectors u1, u2, …, ur and r orthogonal k × 1 unit
vectors v1, v2, …, vr such that
A = Σᵢ₌₁ʳ σᵢ uᵢ vᵢᵀ
– Similar to the spectral decomposition theorem



Singular Value Decomposition (contd.)
• If A is symmetric and positive definite, then
– SVD = eigen decomposition
• the eigenvalues λᵢ equal the squared singular values σᵢ²
• Here AAᵀ has the eigenvalue-eigenvector
pairs (σᵢ², uᵢ):
AAᵀ = (UΣVᵀ)(UΣVᵀ)ᵀ
= UΣVᵀVΣᵀUᵀ
= UΣ²Uᵀ
• Alternatively, the vᵢ are the eigenvectors of
AᵀA with the same nonzero eigenvalues σᵢ²:
AᵀA = VΣ²Vᵀ



Example for SVD
• Let A = [3 1 1; −1 3 1]
– U can be computed from AAᵀ:
AAᵀ = [3 1 1; −1 3 1] [3 −1; 1 3; 1 1] = [11 1; 1 11]
det(AAᵀ − λI) = 0 ⇒ λ1 = 12, λ2 = 10
⇒ u1ᵀ = (1/√2, 1/√2), u2ᵀ = (1/√2, −1/√2)

– V can be computed from AᵀA:
AᵀA = [3 −1; 1 3; 1 1] [3 1 1; −1 3 1] = [10 0 2; 0 10 4; 2 4 2]
det(AᵀA − λI) = 0 ⇒ λ1 = 12, λ2 = 10, λ3 = 0
⇒ v1ᵀ = (1/√6, 2/√6, 1/√6), v2ᵀ = (2/√5, −1/√5, 0), v3ᵀ = (1/√30, 2/√30, −5/√30)



Example for SVD
• Taking σ1² = 12 and σ2² = 10, the singular value
decomposition of A = [3 1 1; −1 3 1] is
A = √12 [1/√2; 1/√2] (1/√6, 2/√6, 1/√6) + √10 [1/√2; −1/√2] (2/√5, −1/√5, 0)
• Thus U, V and Σ are computed by performing eigen
decompositions of AAᵀ and AᵀA
• Any matrix has a singular value decomposition, but only
square (for example symmetric, positive definite) matrices can have an eigen
decomposition



Applications of SVD in Linear Algebra
• Inverse of an n × n square matrix A
– If A is non-singular, then A⁻¹ = (UΣVᵀ)⁻¹ = VΣ⁻¹Uᵀ, where
Σ⁻¹ = diag(1/σ1, 1/σ2, …, 1/σn)
– If A is singular, then A⁻¹ ≈ (UΣVᵀ)⁻¹ ≈ VΣ₀⁻¹Uᵀ, where
Σ₀⁻¹ = diag(1/σ1, 1/σ2, …, 1/σᵢ, 0, 0, …, 0)
• Least squares solutions of an m × n system
– Ax = b (A is m × n, m ≥ n): (AᵀA)x = Aᵀb ⇒ x = (AᵀA)⁻¹Aᵀb = A⁺b
– If AᵀA is singular, x = A⁺b ≈ (VΣ₀⁻¹Uᵀ)b, where Σ₀⁻¹ = diag(1/σ1,
1/σ2, …, 1/σᵢ, 0, 0, …, 0)
• Condition of a matrix
– The condition number measures the degree of singularity of A
• The larger the value of σ1/σn, the closer A is to being singular



Singular Value Decomposition
• Illustration of SVD dimensions and sparseness: U is m × m, Σ is m × k with
nonzeros only on its diagonal, and Vᵀ is k × k. [Figure omitted]



SVD example
Let A = [1 −1; 0 1; 1 0]
Thus m = 3, n = 2. Its SVD is

    [0     2/√6   1/√3 ] [1  0 ]
A = [1/√2  −1/√6  1/√3 ] [0  √3] [1/√2  1/√2 ]
    [1/√2  1/√6   −1/√3] [0  0 ] [1/√2  −1/√2]

Typically, the singular values are arranged in decreasing order.
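The singular values in this example can be recovered from the eigenvalues of AᵀA, as described earlier. A sketch in plain Python (no libraries; the 2 x 2 eigenvalues come from the characteristic quadratic):

```python
A = [[1, -1], [0, 1], [1, 0]]

# A^T A is 2x2; its eigenvalues are the squared singular values of A
ata = [[sum(A[k][i] * A[k][j] for k in range(3)) for j in range(2)]
       for i in range(2)]
print(ata)   # [[2, -1], [-1, 2]]

# eigenvalues of a 2x2 matrix from trace and determinant
tr = ata[0][0] + ata[1][1]
det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
disc = (tr * tr - 4 * det) ** 0.5
lams = sorted([(tr - disc) / 2, (tr + disc) / 2], reverse=True)
print([l ** 0.5 for l in lams])   # singular values: [sqrt(3), 1.0]
```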
