MTH 311 - COURSE CONTENT
Instructor: KS Senthil Raani: Insights!
Contents
1. Vector Spaces
2. Linear Transformation
3. Triangulation and diagonalization
4. Primary sum (Jordan) decomposition (D-N)
4.1. Example: Triangulable but not diagonalizable matrix
5. Rational and Jordan forms
5.1. Order of a vector
5.2. Companion matrix A_{p(x)} of a polynomial p(x)
5.3. Rational and Jordan Canonical form
5.4. Some insights on S-N decomposition
6. Inner Product Spaces and Spectral theorem
7. Appendix
7.1. Example: Solution space of a system of homogeneous linear equations
7.1.1. Solution space - differential equations
7.2. Example: Space of all polynomials of any degree over the field F
7.3. Quotient spaces
References
1. Vector Spaces
- Important examples:
  the space of all polynomials over the field (see Subsection 7.2),
  and the differential-equations point of view: the solution space of a system of
  homogeneous linear equations (see Subsection 7.1).
- The set of self-adjoint matrices is NOT a subspace of the complex square matrices
  (it is not closed under multiplication by complex scalars).
2. Linear Transformation
How are we going to relate matrices with real examples? Differential equations are
one of the main techniques used to study physical bodies. Similarly,
geometry plays a vital role. How are we going to deal with geometry and differential
equations via matrices? What is the analogue for infinite dimension?
- What is a linear transformation?
  Examples of linear transformations on finite and infinite dimensional vector
  spaces.
- Rank-nullity theorem; null space; hyperspace;
  applications in solving systems of linear equations.
- The algebra of linear transformations; invertible linear transformations.
- Isomorphism: any finite dimensional vector space over the field F is isomorphic
  to F^n, where n is its dimension.
- Matrix of a linear transformation.
- Change of basis.
  Examples of different finite dimensional vector spaces and representing the
  linear transformations on them as matrices.
- Linear functionals.
  Important examples of linear functionals on both finite and infinite dimen-
  sional vector spaces.
- Annihilator of a subspace; dual space; transpose of a linear transformation; dou-
  ble dual; canonical isomorphism between a vector space and its double dual.
- Applications of the annihilator, null space, and dual space in picturing lower
  dimensional lines, planes and so on in Euclidean space, and in various geometric
  and linear-equation problems.
3. Triangulation and diagonalization
We would like to analyse linear operators and make them simpler. We know that
triangular and diagonalizable matrices are easier to understand;
can we similarly simplify arbitrary linear operators?
Definition 3.1. Recall from Section 6.4 in [2] the definition of invariant subspaces.
Let dim(V, F) = n, let T ∈ L(V), and let W be a proper subspace of V which is
T-invariant. The T-conductor polynomial of v ∉ W into W is the monic polynomial
g of lowest degree for which g(T)v ∈ W. The T-conductor polynomial into W is
the monic polynomial p of lowest degree for which p(T)v ∈ W for all v ∈ V.
Note that g | p. If W = {0}, then p is the minimal polynomial m.
4. Primary sum (Jordan) decomposition (D-N)
Now that we know how to diagonalize, we would like to know how to approach any
linear transformation and any vector space in a 'simplified' way.
- Direct sum decompositions, projections, invariant direct sums, the primary decom-
position theorem, nilpotent operators, S-N decomposition.
• Any diagonalizable operator decomposes the vector space (as a direct sum) into
'nice' T-invariant subspaces.
• Consider Example 4.1 to see how the decomposition (that we are going to
discuss) plays a role for a simple matrix.
NOTES: (Refer to [2], Chapter 6, Sections 6.6-6.8.)
4.1. Example: Triangulable but not diagonalizable matrix. Consider the
operator T whose matrix with respect to the standard basis is

        1 1 0
  A =   0 1 1
        0 0 4

- Note that the matrices of (T − I), (T − I)^2 and T − 4I with respect to the
  standard basis are respectively given as

            0 1 0                 0 0 1                    −3  1 0
  (A − I) = 0 0 1 ,  (A − I)^2 =  0 0 3 ,  and (A − 4I) =   0 −3 1
            0 0 3                 0 0 9                     0  0 0

  This gives N_{(T−I)^2} = <(1, 0, 0), (0, 1, 0)> and N_{T−4I} = <(1, 3, 9)>.
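These null spaces can be verified numerically; a minimal sketch assuming Python with numpy, where A is taken to be the matrix of this example, consistent with the displayed differences A − I and A − 4I:

```python
import numpy as np

# Matrix of the example operator T in the standard basis.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 4.0]])
I = np.eye(3)

B2 = (A - I) @ (A - I)   # matrix of (T - I)^2
C = A - 4 * I            # matrix of T - 4I

# (1,0,0) and (0,1,0) lie in the null space of (A - I)^2 ...
assert np.allclose(B2 @ np.array([1.0, 0.0, 0.0]), 0)
assert np.allclose(B2 @ np.array([0.0, 1.0, 0.0]), 0)
# ... and (1,3,9) lies in the null space of A - 4I.
assert np.allclose(C @ np.array([1.0, 3.0, 9.0]), 0)
```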
In fact, the matrix of T with respect to the new basis obtained from the above
subspaces, that is, with respect to B = {(1, 0, 0), (0, 1, 0), (1, 3, 9)}, is

  1 1 0
  0 1 0
  0 0 4

Note that although the given matrix is not diagonalizable, it is similar
to a block-diagonal matrix. We will see the importance of this form in this
chapter.
- The eigenspace N_{T−I} (the null space of T − I) corresponding to the eigen-
  value 1 has dimension 1, which is not equal to the multiplicity of
  (x − 1), that is, 2. Via the theorem on the characterization of diagonalization,
  this says that the operator is not diagonalizable. Check that c(T) ≡ 0. Two
  ways to check:
  Check with the matrix that (A − I_3)^2 (A − 4I_3) = ([T] − I_3)^2 ([T] − 4I_3) =
  0, and hence, by the representation T(v_i) = Σ_j A_{ji} v_j of the operator by a
  matrix, (T − I)^2 (T − 4I)(v_i) = 0 for all v_i ∈ B.
  Thus c(T)(v) ≡ (T − I)^2 (T − 4I)(v) = 0 for all v ∈ V.
  Another way to check: find the matrix representation of (T − I)^2.
  Check that the null space N_{(T−I)^2} is 2-dimensional, and similarly that the
  eigenspace N_{T−4I} corresponding to the eigenvalue 4 is 1-dimensional. Check
  that the vectors in N_{(T−I)^2} and N_{T−4I} are linearly independent. Since
  these two subspaces have 0 as their only common element, their span
  has dimension 3. Thus c(T) ≡ 0.
- Minimal polynomial:
  By the definition of m, m | c. Let m_1(x) = (x − 1)(x − 4). By the property of
  the minimal polynomial, (x − 1)(x − 4) must divide the minimal polynomial. But
  (T − I)(T − 4I) ≢ 0: since eigenspaces of distinct eigenvalues have zero as
  the only common element and dim N_{T−I} = 1 = dim N_{T−4I}, while dim V =
  3, we have a non-zero vector v ∉ N_{T−I} ∪ N_{T−4I} such that m_1(T)v ≠ 0,
  that is, m_1(T) ≢ 0.
  Hence m = c.
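The conclusion m = c can also be checked numerically; a sketch assuming numpy, evaluating m_1 and c at the matrix of the example:

```python
import numpy as np

# Matrix of the example operator T in the standard basis.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 4.0]])
I = np.eye(3)

m1 = (A - I) @ (A - 4 * I)            # (x-1)(x-4) evaluated at A
c = (A - I) @ (A - I) @ (A - 4 * I)   # (x-1)^2 (x-4) evaluated at A

assert not np.allclose(m1, 0)   # m1(T) is not the zero operator
assert np.allclose(c, 0)        # c(T) = 0, so the minimal polynomial is c
```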
- Hence we have V = N_{(T−I)^2} ⊕ N_{T−4I}.
  (Check: why is this a direct sum? Apply the primary decomposition, The-
  orem 4.1.)
  Are both subspaces W_1 = N_{(T−I)^2} and W_2 = N_{T−4I} T-invariant?
  What happens to T_i = T|_{W_i}, the restriction of T to W_i?
- Note that

        1 1 0
  A =   0 1 1   = D + N,
        0 0 4

  where

        1 0 1/3                 0 1 −1/3
  D =   0 1  1      and   N =   0 0   0
        0 0  4                  0 0   0

  which is obtained by the primary sum decomposition (Theorem 4.1), where
  D is diagonalizable and N is nilpotent.
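The stated properties of this decomposition (A = D + N, N nilpotent, D diagonalizable, and, as the theorem guarantees, D and N commuting) can be checked directly; a sketch assuming numpy:

```python
import numpy as np

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 4.0]])
D = np.array([[1.0, 0.0, 1/3],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 4.0]])
N = np.array([[0.0, 1.0, -1/3],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])

assert np.allclose(D + N, A)       # A = D + N
assert np.allclose(N @ N, 0)       # N is nilpotent (here N^2 = 0)
assert np.allclose(D @ N, N @ D)   # D and N commute

# D is diagonalizable: the eigenvalue 1 has geometric multiplicity
# 3 - rank(D - I) = 2, matching its algebraic multiplicity.
assert np.linalg.matrix_rank(D - np.eye(3)) == 1
```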
The matrix of T restricted to the cyclic subspace Z(v; T), when the order of v is
m(x) = p(x)^r with deg p(x) = k, is

              Ã B 0 ··· 0
              0 Ã B ··· 0
  A_{m(x)} =  .  .  .  ··· .    (r diagonal blocks),
              .  .  .  ··· B
              0 0 0 ··· Ã

where Ã = A_{p(x)} and B = (b_{ij})_{k×k}
with b_{1k} = 1 and b_{ij} = 0 for all other i, j.
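A small illustration of companion matrices, assuming numpy and one common layout convention (1's on the subdiagonal, negated coefficients in the last column; other texts use an equivalent transposed form):

```python
import numpy as np

def companion(coeffs):
    """Companion matrix of the monic polynomial
    p(x) = x^k + c_{k-1} x^{k-1} + ... + c_0,
    with coeffs = [c_0, ..., c_{k-1}]."""
    k = len(coeffs)
    A = np.zeros((k, k))
    A[1:, :-1] = np.eye(k - 1)        # 1's on the subdiagonal
    A[:, -1] = -np.asarray(coeffs)    # last column: -c_0, ..., -c_{k-1}
    return A

# p(x) = x^2 - 5x + 6 = (x - 2)(x - 3): eigenvalues of A_p are 2 and 3.
A = companion([6.0, -5.0])
assert np.allclose(sorted(np.linalg.eigvals(A).real), [2.0, 3.0])
# p(A_p) = 0: the characteristic (= minimal) polynomial of A_p is p.
assert np.allclose(A @ A - 5 * A + 6 * np.eye(2), 0)
```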
5.3. Rational and Jordan Canonical form.
Theorem 5.3 (Elementary divisor theorem - Cyclic decomposition theo-
rem). Let T ∈ L(V), where V is a finite dimensional vector space over an arbitrary
field and V ≠ 0. Then there exist non-zero vectors {v_1, v_2, ..., v_k} in V whose
orders are powers of prime polynomials in F[x], {p_1(x)^{r_1}, ..., p_k(x)^{r_k}}, and
such that V is the direct sum of the cyclic subspaces Z(v_i; T). This decomposition
is unique up to rearrangement.
The outline of the proof was discussed in class. We concentrate on the proof
when the field is the complex numbers. For the proof, refer to [4] (Chapter XI,
Section 6).
Recall Example 4.1: the minimal polynomial is m(x) = (x − 1)^2 (x − 4);
check that N_{T−4I} = Z((1, 3, 9); T) and N_{(T−I)^2} = Z((0, 1, 0); T). Note that
Z((1, 0, 0); T) is a subspace of Z((0, 1, 0); T). Here the rational form is the same
as the matrix obtained by the primary decomposition.
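These cyclic subspaces can be computed by iterating T; a sketch assuming numpy, where cyclic_dim is a helper introduced here for illustration:

```python
import numpy as np

# Matrix of the example operator T in the standard basis.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 4.0]])

def cyclic_dim(A, v, steps=5):
    """Dimension of Z(v; T) = span{v, Tv, T^2 v, ...}."""
    vecs = [v]
    for _ in range(steps):
        vecs.append(A @ vecs[-1])
    return np.linalg.matrix_rank(np.column_stack(vecs))

assert cyclic_dim(A, np.array([1.0, 3.0, 9.0])) == 1   # Z((1,3,9); T)
assert cyclic_dim(A, np.array([0.0, 1.0, 0.0])) == 2   # Z((0,1,0); T)
assert cyclic_dim(A, np.array([1.0, 0.0, 0.0])) == 1   # Z((1,0,0); T)
```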
5.4. Some insights on S-N decomposition. For this subsection, let us study
vector spaces over the field of complex numbers.
A semisimple operator T is defined to be a linear operator such that every T-
invariant subspace has a complementary T-invariant subspace, that is, if W is a
T-invariant subspace of V, then there exists W̃ such that V = W ⊕ W̃ and W̃ is
T-invariant.
Lemma 5.4. Let T ∈ L(V) and let V = V_1 ⊕ ··· ⊕ V_k be the primary decomposition.
If W is a T-invariant subspace, then W = (W ∩ V_1) ⊕ ··· ⊕ (W ∩ V_k). (Refer to
[2], Section 7.5, for the proof.)
Lemma 5.5. Let T ∈ L(V ). If m(x) = x − λ, then T is semisimple.
Theorem 5.6. Let V be a finite dimensional vector space over the field of complex
numbers. Then T ∈ L(V) is semisimple if and only if T is diagonalizable.
7. Appendix
Consider the system
  dx_1 = a_1 x_1 dt,
  dx_2 = a_2 x_2 dt.
Many more complicated differential equations can be reduced to this simple system;
you may study why in a later course! The solution is given by x_i(t) = c_i e^{a_i t},
i = 1, 2. In fact, the constants are determined by the initial conditions x_1(t_0)
and x_2(t_0).
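A numerical sanity check of this solution formula; a sketch assuming Python with numpy, where the constants c_i are sample values, not from the text:

```python
import numpy as np

# Check dx_i/dt = a_i x_i for x_i(t) = c_i exp(a_i t) at sample points,
# approximating the derivative by a central finite difference.
a1, a2 = 2.0, -0.5    # the diagonal entries used in the text
c1, c2 = 1.0, 3.0     # sample constants (fixed by initial conditions)
h = 1e-6

for t in np.linspace(0.0, 1.0, 5):
    x1 = c1 * np.exp(a1 * t)
    dx1 = (c1 * np.exp(a1 * (t + h)) - c1 * np.exp(a1 * (t - h))) / (2 * h)
    assert abs(dx1 - a1 * x1) < 1e-4   # dx1/dt = a1 * x1
    x2 = c2 * np.exp(a2 * t)
    dx2 = (c2 * np.exp(a2 * (t + h)) - c2 * np.exp(a2 * (t - h))) / (2 * h)
    assert abs(dx2 - a2 * x2) < 1e-4   # dx2/dt = a2 * x2
```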
Consider the map x(t) = (x_1(t), x_2(t)) from R to R^2. Then the tangent vector
x′(t) to the curve x(t) is given in vector notation by x′ = Ax, where Ax =
(a_1 x_1, a_2 x_2). Suppose A is the diagonal matrix with a_1 = 2 and a_2 = −1/2.
[Figure: the vector field x ↦ Ax in the (x_1, x_2)-plane, with the vectors Ax̃ and
the sums x̃ + Ax̃ attached at sample points x̃.]
This picture clearly describes why we look at the following system of equations,
given by Ax = y or Ax = 0, where A = (a_{ij}) is an m × n matrix:

  a_{11} a_{12} ... a_{1n}     x_1     a_{11} x_1 + a_{12} x_2 + ... + a_{1n} x_n
  a_{21} a_{22} ... a_{2n}     x_2  =  a_{21} x_1 + a_{22} x_2 + ... + a_{2n} x_n
  ........................     ...     ...
  a_{m1} a_{m2} ... a_{mn}     x_n     a_{m1} x_1 + a_{m2} x_2 + ... + a_{mn} x_n

The system is homogeneous if y_i = 0 for all i.
Recall: If A is an n × n matrix,
the homogeneous system AX = 0 has only trivial solution X = 0
if and only if
the system of equations AX = Y has a solution for each n × 1 matrix Y
if and only if
A is invertible.
Solution space of a system of homogeneous linear equations:
Recall:
• Using the row-reduced echelon matrix (RX = PY, where the row-reduced
echelon matrix R equivalent to A is given by R = PA with P an m × m
invertible matrix), recall how you concluded in your basic linear algebra
class that:
If A is an m × n matrix with m < n, then the homogeneous system of
linear equations AX = 0 has a non-trivial solution.
Suppose A is an n × n matrix. A is row-equivalent to n × n identity
matrix if and only if the system of equations AX = 0 has only the
trivial solution.
• Recall, for a non-homogeneous system of linear equations AX = Y, the aug-
mented m × (n + 1) matrix Ã (where Ã_{ij} = A_{ij} if j ≤ n and
Ã_{i(n+1)} = Y_i).
• A ∈ F^{n×n} is invertible ⟺ the homogeneous system AX = 0 has only the
trivial solution ⟺ the system AX = Y has a solution X for each
Y ∈ F^{n×1}.
• Recall Cramer's rule!
Fix A, an m × n matrix over the field F. The solution space of AX = 0 is the
set S of all n × 1 matrices X over F that satisfy AX = 0.
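The solution space can be computed numerically as the null space of A; a sketch assuming numpy, with a sample 2 × 3 matrix (so m < n forces a non-trivial solution, as recalled above):

```python
import numpy as np

# Sample m x n matrix with m = 2 < n = 3.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

# Null space from the SVD: rows of Vt beyond the rank span the
# solution space of AX = 0.
_, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))
null_basis = Vt[rank:].T          # columns span the solution space

assert null_basis.shape[1] == 1          # dim S = n - rank = 3 - 2 = 1
assert np.allclose(A @ null_basis, 0)    # each column solves AX = 0
```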
7.2. Example: Space of all polynomials of any degree over the field F.
Consider F to be either R or C.
F[x]: the space of all polynomials from the field F into itself.
F^m[x]: the space of all polynomials of degree less than or equal to m from the field
F into itself.
Check/Prove the following:
(1) Is {f_j}_{j=0}^m a basis for (F[x])^*? Is it a basis for (F^m[x])^*?
(2)

              0 1 0 ... 0
              0 0 2 ... 0
  [T]_{B0} =  . . .  ... .
              0 0 0 ... m
              0 0 0 ... 0
Let us denote the standard basis elements by e_i(x) = x^{i−1}, and let t ∈ F. Define
B = {g_i} where g_i(x) = (x + t)^{i−1}:
g_1 = e_1
g_2 = t e_1 + e_2
g_3 = t^2 e_1 + 2t e_2 + e_3
...
g_m = t^{m−1} e_1 + (m−1) t^{m−2} e_2 + (m−1)C_2 t^{m−3} e_3 + ··· + (m−1) t e_{m−1} + e_m
Hence

  0 1 0 ... 0
  0 0 2 ... 0
  . . .  ... .   = [D]_B = P^{−1} [D]_{B0} P,
  0 0 0 ... m
  0 0 0 ... 0

where

       1 t  t^2  ...  t^{m−1}
       0 1  2t   ...  (m−1) t^{m−2}
  P =  0 0  1    ...  (m−1)C_2 t^{m−3}
       . .  .    ...  .
       0 0  0    ...  (m−1) t
       0 0  0    ...  1
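The change-of-basis identity above can be verified for a small sample case; a sketch assuming Python with numpy, with sample values of m and t chosen here for illustration:

```python
import numpy as np
from math import comb

m, t = 4, 2.0   # sample degree bound and shift (assumed values)
n = m + 1

# [D]_{B0}: matrix of differentiation in the basis {1, x, ..., x^m},
# since D x^j = j x^{j-1}.
D0 = np.zeros((n, n))
for j in range(1, n):
    D0[j - 1, j] = j

# P: the j-th column holds the coordinates of g_{j+1}(x) = (x + t)^j
# in the standard basis (binomial expansion).
P = np.zeros((n, n))
for j in range(n):
    for i in range(j + 1):
        P[i, j] = comb(j, i) * t ** (j - i)

# [D]_B = P^{-1} [D]_{B0} P equals [D]_{B0}, since D g_i = (i-1) g_{i-1}.
DB = np.linalg.inv(P) @ D0 @ P
assert np.allclose(DB, D0)
```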
Other points to remember:
(1) There is exactly one polynomial function p over R which has degree at most
m and satisfies p(t_j) = c_j, j = 1, ..., m + 1. Refer to Example 22 in
Section 3.5 of [2], which we discussed during the class.
(2) We have discussed the transpose of the differential operator D (it acts on linear
functionals, not on the vector space). See the 2nd question in Quiz 1.
(3) Check that the null space of D − 0I has dimension 1, but the multiplicity
of the polynomial x − 0 in the characteristic polynomial of D is 5. Hence
D is not diagonalizable.
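Point (1) can be illustrated by solving the Vandermonde system for sample data; a sketch assuming numpy, where the points and values are made up for illustration:

```python
import numpy as np

# Unique polynomial of degree at most m through m + 1 points: solve
# the Vandermonde system V a = c for the coefficient vector a.
m = 2
ts = np.array([0.0, 1.0, 2.0])   # distinct points t_1, ..., t_{m+1}
cs = np.array([1.0, 2.0, 5.0])   # prescribed values c_j

V = np.vander(ts, m + 1, increasing=True)   # V[j, i] = t_j ** i
coeffs = np.linalg.solve(V, cs)             # unique: V is invertible

# Here the data came from p(x) = 1 + x^2: p(0)=1, p(1)=2, p(2)=5.
assert np.allclose(coeffs, [1.0, 0.0, 1.0])
```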
7.3. Quotient spaces. Refer to Section 26 in [1].
References
[1] Curtis, C. W., Linear Algebra: An Introductory Approach.
[2] Hoffman, K. and Kunze, R., Linear Algebra, Prentice-Hall, 1961.
[3] Hirsch, M. W. and Smale, S., Differential Equations, Dynamical Systems, and Linear Algebra,
    Vol. 60, Academic Press, 1974.
[4] Lang, S., Linear Algebra (2nd Edition), Addison-Wesley, 1971.
[5] Halmos, P., Finite Dimensional Vector Spaces (2nd Edition), Undergraduate Texts in Mathe-
    matics, Springer-Verlag New York Inc., 1987.
[6] Lang, S., Algebra, Graduate Texts in Mathematics (3rd Edition), Springer-Verlag New York
    Inc., 2005.