Module-5: Inner Product Spaces: Dr. Bhavya Shivaraj
Department of Mathematics
Orthogonality:
Two vectors u and v in an inner product space are orthogonal if ⟨u, v⟩ = 0.

Example: Let f(t) = 3t − 5 and g(t) = t², with ⟨f, g⟩ = ∫₀¹ f(t)g(t) dt. Then

⟨f, g⟩ = ∫₀¹ (3t − 5) t² dt
       = ∫₀¹ (3t³ − 5t²) dt
       = [3t⁴/4 − 5t³/3] from t = 0 to 1
       = 3/4 − 5/3
       = −11/12.

Since ⟨f, g⟩ ≠ 0, f and g are not orthogonal.
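This computation can be checked numerically. The sketch below approximates the integral inner product with a simple midpoint rule (the helper name `inner_product` is illustrative):

```python
def inner_product(f, g, a=0.0, b=1.0, n=1000):
    # Approximate <f, g> = integral over [a, b] of f(t) g(t) dt
    # using the midpoint rule with n subintervals.
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) * g(a + (i + 0.5) * h)
                   for i in range(n))

f = lambda t: 3 * t - 5
g = lambda t: t ** 2

print(inner_product(f, g))  # close to -11/12 = -0.9166...
```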
Properties:
1. ⟨0, v⟩ = ⟨v, 0⟩ = 0
2. ⟨u + v, w⟩ = ⟨u, w⟩ + ⟨v, w⟩
3. ⟨u, cv⟩ = c ⟨u, v⟩
3. ‖q‖ = √⟨q, q⟩ = √(4² + (−2)² + 1²) = √21
4. d(p, q) = ‖p − q‖ = ‖(1 − 2x²) − (4 − 2x + x²)‖
           = ‖−3 + 2x − 3x²‖
           = √((−3)² + 2² + (−3)²) = √22
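Identifying each polynomial a₀ + a₁x + a₂x² with its coefficient vector (a₀, a₁, a₂), the norm and distance above can be checked in a few lines (the helper names are illustrative):

```python
import math

def inner(p, q):
    # <p, q> = sum of products of corresponding coefficients
    return sum(a * b for a, b in zip(p, q))

def norm(p):
    return math.sqrt(inner(p, p))

def dist(p, q):
    return norm([a - b for a, b in zip(p, q)])

p = [1, 0, -2]   # 1 - 2x^2
q = [4, -2, 1]   # 4 - 2x + x^2

print(norm(q))     # sqrt(21) = 4.58...
print(dist(p, q))  # sqrt(22) = 4.69...
```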
Two vectors in an inner product space are orthogonal if their inner product is zero. The
same definition applies to Euclidean spaces, where angles are defined, and there
orthogonality means that either the angle between the vectors is π/2 or one of the vectors
is zero. So orthogonality is slightly more general than perpendicularity.
We define a set of vectors to be orthonormal if they are all unit vectors and
each one is orthogonal to each of the others. An orthonormal basis is simply
a basis that is orthonormal. Note that there is no such thing as an
‘orthonormal vector’. The property applies to a whole set of vectors, not to
an individual vector.
Proof (an orthonormal set {v1, …, vn} is linearly independent): Suppose λ1v1 + … + λnvn = 0. Then ⟨λ1v1 + … + λnvn | vr⟩ = 0 for each r. Expanding by linearity, the left-hand side reduces to λr⟨vr | vr⟩ = λr, since vr is orthogonal to the other vectors in the set and vr is a unit vector. Hence each λr = 0.
Therefore proj_v u = ((u ⋅ v)/(v ⋅ v)) v.
Dr. Bhavya Shivaraj, Dept. Of Mathematics, SJBIT
proj_v u in R²
• If u ⋅ v < 0, then cos θ < 0. The orthogonal projection of u onto v is still given by the same formula.
• The orthogonal projection of u = (4, 2) onto v = (3, 4) is given by
  proj_v u = ((u ⋅ v)/(v ⋅ v)) v = (20/25)(3, 4) = (12/5, 16/5).
[Figure: u, v, and proj_v u in R².]
Orthogonal Projection
• Let u and v be vectors in an inner product space V, with v ≠ 0. Then the orthogonal projection of u onto v is given by
  proj_v u = (⟨u, v⟩/⟨v, v⟩) v.
• The orthogonal projection of u = (6, 2, 4) onto v = (1, 2, 0) is given by
  proj_v u = ((u ⋅ v)/(v ⋅ v)) v = (10/5)(1, 2, 0) = (2, 4, 0).
[Figure: among all scalar multiples cv of v, the orthogonal projection proj_v u is the closest point to u, i.e. d(u, proj_v u) ≤ d(u, cv).]
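The projection formula translates directly into code; this brief sketch reproduces both worked examples:

```python
def proj(u, v):
    # proj_v u = ((u . v) / (v . v)) v, valid for any nonzero v
    dot_uv = sum(a * b for a, b in zip(u, v))
    dot_vv = sum(a * a for a in v)
    return [dot_uv / dot_vv * a for a in v]

print(proj((4, 2), (3, 4)))        # (12/5, 16/5)
print(proj((6, 2, 4), (1, 2, 0)))  # (2, 4, 0)
```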
𝐯1 ⋅ 𝐯2 = 0, 𝐯1 ⋅ 𝐯3 = 0, 𝐯1 ⋅ 𝐯4 = 0, 𝐯2 ⋅ 𝐯3 = 0, 𝐯2 ⋅ 𝐯4 = 0,𝐯3 ⋅ 𝐯4 = 0
Thus, S is orthogonal.
Thus
w ⋅ v1 = −1, w ⋅ v2 = −7, w ⋅ v3 = 2
so [w]_B = [−1  −7  2]ᵀ.
u1 = w1/‖w1‖, u2 = w2/‖w2‖ is an orthonormal basis.
1) Apply the Gram-Schmidt orthonormalization process to the following basis for R²: B = {(1, 1), (0, 1)}.
Sol:
w1 = v1 = (1, 1)
⇒ u1 = w1/‖w1‖ = (√2/2, √2/2)
w2 = v2 − ((v2 ⋅ w1)/(w1 ⋅ w1)) w1
   = (0, 1) − (1/2)(1, 1)
   = (−1/2, 1/2)
⇒ u2 = w2/‖w2‖ = (−√2/2, √2/2)
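The same process, for any list of linearly independent vectors, can be sketched as below (subtract from each vector its projections onto the previously built unit vectors, then normalize):

```python
import math

def gram_schmidt(basis):
    # Orthonormalize: remove from each vector its component along the
    # already-built unit vectors, then normalize the remainder.
    ortho = []
    for v in basis:
        w = list(v)
        for u in ortho:
            c = sum(a * b for a, b in zip(v, u))  # <v, u>, u is a unit vector
            w = [wi - c * ui for wi, ui in zip(w, u)]
        n = math.sqrt(sum(x * x for x in w))
        ortho.append([x / n for x in w])
    return ortho

u1, u2 = gram_schmidt([(1, 1), (0, 1)])
print(u1)  # (sqrt(2)/2, sqrt(2)/2)
print(u2)  # (-sqrt(2)/2, sqrt(2)/2)
```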
(x1, x2, x3, x4)ᵀ = s (−2, 2, 1, 0)ᵀ + t (1, −8, 0, 1)ᵀ
x1 M1 + x2 M2 + ⋯ + xN MN = b
[Figure: expressing b in terms of the columns M1, M2 — non-orthogonal case vs. orthogonal case.]
M is orthonormal if:
Mi • Mj = 0 for i ≠ j, and Mi • Mi = 1
QR Factorization – Key idea
  [↑  ↑  ⋯  ↑ ] [x1]   [b1]        [↑  ↑  ⋯  ↑ ] [y1]   [b1]
  [M1 M2 ⋯ MN] [x2] = [b2]        [Q1 Q2 ⋯ QN] [y2] = [b2]
  [↓  ↓  ⋯  ↓ ] [⋮ ]   [⋮ ]        [↓  ↓  ⋯  ↓ ] [⋮ ]   [⋮ ]
               [xN]   [bN]                      [yN]   [bN]
   Original matrix                 Matrix with orthonormal columns

Qy = b ⇒ y = Qᵀ b
Require M1 • Q2 = M1 • (M2 − r12 M1) = 0
        ⇒ r12 = (M1 • M2)/(M1 • M1)
[Figure: M2 split into its component r12 M1 along M1 and the orthogonal part Q2.]
Now find Q̃2 = M2 − r12 Q1 so that Q̃2 • Q1 = 0, where
        r12 = Q1 • M2
Finally Q2 = Q̃2/√(Q̃2 • Q̃2) = Q̃2/r22
  [↑  ↑ ] [x1]                      [↑  ↑ ] [y1]
  [M1 M2] [x2] = x1 M1 + x2 M2  =  [Q1 Q2] [y2] = y1 Q1 + y2 Q2
  [↓  ↓ ]                           [↓  ↓ ]

  [r11 r12] [x1]   [y1]
  [ 0  r22] [x2] = [y2]
QR Factorization – 2x2 case
  [↑  ↑ ] [x1]   [↑  ↑ ] [r11 r12] [x1]   [b1]
  [M1 M2] [x2] = [Q1 Q2] [ 0  r22] [x2] = [b2]
  [↓  ↓ ]        [↓  ↓ ]
               Orthonormal   Upper Triangular
Step 1) QRx = b ⇒ Rx = Qᵀb = b̃
Step 2) Backsolve Rx = b̃
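The two steps can be sketched with NumPy (the matrix M and right-hand side b here are illustrative, not from the slides):

```python
import numpy as np

M = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([5.0, 5.0])

Q, R = np.linalg.qr(M)           # Q has orthonormal columns, R is upper triangular
b_tilde = Q.T @ b                # Step 1: R x = Q^T b
x = np.linalg.solve(R, b_tilde)  # Step 2: backsolve the triangular system

print(x)  # solves M x = b, i.e. x = (1, 2)
```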
QR Factorization – General case
  [↑  ↑  ↑ ]    [↑       ↑               ↑             ]
  [M1 M2 M3] ⇒ [M1  M2 − r12 M1  M3 − r13 M1 − r23 M2]
  [↓  ↓  ↓ ]    [↓       ↓               ↓             ]

M2 • (M3 − r13 M1 − r23 M2) = 0
QR Factorization – General case
M1 • (M3 − r13 M1 − r23 M2) = 0
M2 • (M3 − r13 M1 − r23 M2) = 0

  [M1 • M1  M1 • M2] [r13]   [M1 • M3]
  [M2 • M1  M2 • M2] [r23] = [M2 • M3]

In general:
  [M1 • M1       ⋯  M1 • M(N−1)     ] [r1,N  ]   [M1 • MN     ]
  [   ⋮          ⋱      ⋮           ] [  ⋮   ] = [   ⋮        ]
  [M(N−1) • M1   ⋯  M(N−1) • M(N−1) ] [rN−1,N]   [M(N−1) • MN ]
Resulting QR Factorization
𝑥1 𝑀1 + 𝑥2 𝑀2 + ⋯ + 𝑥𝑁 𝑀𝑁 = 𝑏
• Error ellipse with the major axis along the eigenvector with the larger eigenvalue and the minor axis along the eigenvector with the smaller eigenvalue.
[Figure: principal components PC 1 and PC 2 plotted against the original variables A and B.]
And S = UΛU⁻¹.
Diagonal decomposition - example
Recall S = [2 1]
           [1 2] ;  λ1 = 1, λ2 = 3.

The eigenvectors (1, −1) and (1, 1) form U = [ 1 1]
                                             [−1 1]

Inverting, we have U⁻¹ = [1/2 −1/2]   (Recall UU⁻¹ = I.)
                         [1/2  1/2]

Then, S = UΛU⁻¹ = [ 1 1] [1 0] [1/2 −1/2]
                  [−1 1] [0 3] [1/2  1/2]
Example continued
Let's divide U (and multiply U⁻¹) by √2.

Then, S = [ 1/√2 1/√2] [1 0] [1/√2 −1/√2]
          [−1/√2 1/√2] [0 3] [1/√2  1/√2]
               Q         Λ     Qᵀ (Q⁻¹ = Qᵀ)

• where Q is orthogonal:
  – Q⁻¹ = Qᵀ
  – Columns of Q are normalized eigenvectors
  – Columns are orthogonal.
  – (everything is real)
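NumPy confirms this decomposition; for a symmetric matrix, np.linalg.eigh returns real eigenvalues and an orthogonal matrix of eigenvectors:

```python
import numpy as np

S = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, Q = np.linalg.eigh(S)  # eigenvalues in ascending order
Lam = np.diag(eigvals)

print(eigvals)        # [1. 3.]
print(Q @ Lam @ Q.T)  # reconstructs S
```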
Singular Value Decomposition
• If A is a rectangular m × k matrix of real numbers, then there exist an m × m orthogonal matrix U and a k × k orthogonal matrix V such that
      A  =  U Σ Vᵀ,   UUᵀ = VVᵀ = I
   (m×k)  (m×m)(m×k)(k×k)
  – Σ is an m × k matrix whose (i, i)th entries are σi ≥ 0, i = 1, …, min(m, k), and whose other entries are zero
• The positive constants σi are the singular values of A
• If A has rank r, then there exist r positive constants σ1, σ2, …, σr, r orthogonal m × 1 unit vectors u1, u2, …, ur and r orthogonal k × 1 unit vectors v1, v2, …, vr such that
      A = ∑ᵢ₌₁ʳ σi ui viᵀ
  – Similar to the spectral decomposition theorem
– V can be computed as the matrix of normalized eigenvectors of AᵀA. For example,

  A = [ 3 1 1]         AᵀA = [ 3 −1] [ 3 1 1]   [10  0 2]
      [−1 3 1] ,             [ 1  3] [−1 3 1] = [ 0 10 4]
                             [ 1  1]            [ 2  4 2]

  det(AᵀA − λI) = 0 ⇒ λ1 = 12, λ2 = 10, λ3 = 0

  so the singular values are σ1 = √12 = 2√3 and σ2 = √10. The normalized eigenvectors of AᵀA form the columns of V, and ui = (1/σi) A vi gives the columns of U:

  A = [1/√2  1/√2] [2√3   0  0] [1/√6   2/√6   1/√6 ]
      [1/√2 −1/√2] [ 0  √10  0] [2/√5  −1/√5    0   ]
                                [1/√30  2/√30 −5/√30]
Typically, the singular values are arranged in decreasing order.
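The example above can be verified with NumPy's SVD routine, which returns the singular values in decreasing order:

```python
import numpy as np

A = np.array([[ 3.0, 1.0, 1.0],
              [-1.0, 3.0, 1.0]])

U, s, Vt = np.linalg.svd(A)  # s: singular values, largest first

Sigma = np.zeros(A.shape)
Sigma[:len(s), :len(s)] = np.diag(s)

print(s)               # [sqrt(12), sqrt(10)] = [3.46..., 3.16...]
print(U @ Sigma @ Vt)  # reconstructs A
```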