
ADVANCED LINEAR ALGEBRA - MTH 311

INSTRUCTOR: KS SENTHIL RAANI

Contents

1. Vector Spaces
2. Linear Transformation
3. Triangulation and diagonalization
4. Primary sum (Jordan) decomposition (D-N)
4.1. Example: Triangulable but not diagonalizable matrix
5. Rational and Jordan forms
5.1. Order of a vector
5.2. Companion matrix A_p(x) of a polynomial p(x)
5.3. Rational and Jordan Canonical form
5.4. Some insights on S-N decomposition
6. Inner Product Spaces and Spectral theorem
7. Appendix
7.1. Example: Solution space of a system of homogeneous linear equations
7.1.1. Solution space - differential equations
7.2. Example: Space of all polynomials of any degree over the field F
7.3. Quotient spaces
References

Insights!

- This course forms a basic step towards abstract algebra.
- For those who are more interested in algebraic concepts, such as properties
  of SL(n) (the n × n matrices with determinant 1), the decomposition theorems
  here are the basics; for further motivation, see for instance the Iwasawa
  decomposition in Appendix 2 of [4].
- This course is also intimately related to geometry, and hence it equally
  concentrates on the geometric point of view. If interested in a more
  geometric view, refer to Chapter 3 of [4]; Chapter 12 can be read by the end
  of the course as a winter project.


1. Vector Spaces

Definition of vector spaces

- criteria to check subspaces
- what are bases and dimension?
   Hamel basis; standard basis
- some examples, including finite-dimensional and infinite-dimensional vector spaces
   also recall how vector spaces behave with respect to subfields
- recall that not every infinite-dimensional vector space has an explicit basis

Thought! - Spaces such as the space of continuous functions do not benefit
from a basis, that is, from expressing each element as a finite linear
combination of basis elements: there is no explicit basis of this space.
Near a point one would want a continuous function to be written in terms of
the same 'nice' functions; however, with finite linear combinations the
basis elements may be functions of any sort, beyond our expectations. So
this kind of algebraic (or Hamel) basis is not useful. In applications, with
the help of various topologies, the notion of a Schauder basis (which you
will study later - not in this course!) becomes more useful.

- important example(s):
   space of all polynomials over the field (see subsection 7.2)
   the differential-equation point of view: solution space of a system of
    homogeneous linear equations (see subsection 7.1)
   the set of self-adjoint matrices is NOT a subspace of the space of
    complex square matrices (over C)
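The last non-example can be checked directly: multiplying a Hermitian matrix by the complex scalar i destroys self-adjointness, so the set is not closed under scalar multiplication over C. A small numpy sketch (ours, not from the notes; the matrix H is an arbitrary illustration):

```python
import numpy as np

# H is Hermitian (self-adjoint): H equals its conjugate transpose.
H = np.array([[2, 1 + 1j],
              [1 - 1j, 3]])
print(np.allclose(H, H.conj().T))    # True

# Scaling by the complex scalar i gives a skew-Hermitian matrix,
# so the set of Hermitian matrices is not a complex subspace.
iH = 1j * H
print(np.allclose(iH, iH.conj().T))  # False
```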

NOTES: (Refer [2] - Chapter 2 Sections: 2.1-2.3)



2. Linear Transformation

How are we going to relate matrices to real examples? Differential equations
are one of the main techniques for studying physical bodies; similarly,
geometry plays a vital role. How are we going to deal with geometry and
differential equations via matrices? What is the analogue in infinite
dimensions?
- What is a linear transformation?
   examples of linear transformations on finite- and infinite-dimensional
    vector spaces
- rank-nullity theorem  null space - hyperspace
   applications in solving systems of linear equations
- the algebra of linear transformations, invertible linear transformations
- isomorphism  any n-dimensional vector space over the field F is
  isomorphic to F^n
- matrix of a linear transformation
- change of basis
   examples of different finite-dimensional vector spaces, and representing
    the linear transformations on them as matrices
- linear functionals
   important examples of linear functionals on both finite- and
    infinite-dimensional vector spaces
- annihilator of a subspace, dual space, transpose of a linear transformation,
  double dual, canonical isomorphism between a vector space and its double dual
- applications of the annihilator, null space and dual space in picturing
  lower-dimensional lines, planes and so on in Euclidean space, and in various
  geometric and linear-equation problems
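The rank-nullity theorem listed above can be checked on a concrete map; a small numpy sketch (the matrix A is our own illustration, not from the notes):

```python
import numpy as np

# A represents a linear map T : R^4 -> R^3.  The third row is the sum of
# the first two, so the rank is less than 3.
A = np.array([[1., 2., 0., 1.],
              [0., 1., 1., 0.],
              [1., 3., 1., 1.]])

rank = np.linalg.matrix_rank(A)
nullity = A.shape[1] - rank      # rank-nullity: dim(null space) = n - rank
print(rank, nullity)             # 2 2
```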

NOTES: (Refer [2] Chapter 3 Sections: 3.1-3.7)



3. Triangulation and diagonalization

We would like to analyse linear operators and make them simple. We know that
triangular and diagonalizable matrices are easier to understand; similarly,
can we simplify linear operators?

One way of studying diagonalizable operators: characteristic polynomials
(refer [2] Chapter 6, Section 6.2)
 characteristic values, eigenvectors, eigenspaces
 importance via diagonalizable linear operators - equivalent notions of a
  diagonalizable operator (in terms of the characteristic polynomial and the
  dimensions of the eigenspaces)
 examples of how to diagonalize a linear operator
Another way of studying diagonalizable operators: the minimal polynomial
(refer to [1])
 what are annihilating polynomials and the minimal polynomial?
 importance of studying them
 characterization of diagonalizable operators using the minimal polynomial
 Cayley-Hamilton theorem
More tools for triangulation and diagonalization:
 invariant subspaces (refer to Section 6.4 in [2])
 simultaneous triangulation, simultaneous diagonalization (refer to
  Section 6.5 in [2])

Triangulable does not imply diagonalizable; see Example 4.1.

Definition 3.1. Recall from Section 6.4 in [2] the definition of invariant
subspaces. Let dim(V, F) = n, T ∈ L(V), and let W be a proper subspace of V
which is T-invariant. The T-conductor polynomial of v ∉ W into W is the
monic polynomial g of lowest degree for which g(T)v ∈ W. The T-conductor
polynomial into W is the monic polynomial p of lowest degree for which
p(T)(v) ∈ W for all v ∈ V. Note that g | p, and if W = {0}, then p is the
minimal polynomial m.
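The diagnostic tools of this section (characteristic polynomial, diagonalizability test) can be explored in sympy; a sketch (ours), using the matrix of Example 4.1 below:

```python
from sympy import Matrix, symbols, factor

x = symbols('x')

# The triangulable-but-not-diagonalizable matrix of Example 4.1.
A = Matrix([[1, 1, 0],
            [0, 1, 1],
            [0, 0, 4]])

# Characteristic polynomial c(x) = det(xI - A).
c = A.charpoly(x).as_expr()
print(factor(c))              # (x - 1)**2 * (x - 4), up to printing order

# The eigenspace of 1 is only 1-dimensional, so A is not diagonalizable.
print(A.is_diagonalizable())  # False
```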

4. Primary sum (Jordan) decomposition (D-N)

Now that we know how to diagonalize, we would like to know how to approach
any linear transformation and any vector space in a 'simplified' way.
- direct sum decompositions, projections, invariant direct sums, primary
  decomposition theorem, nilpotent operators, S-N decomposition.

• Any diagonalizable operator decomposes the vector space into a direct sum
  of 'nice' T-invariant subspaces.
• Consider Example 4.1 to see the role the decomposition (that we are going
  to discuss) plays for a simple matrix.
NOTES: (Refer [2] Chapter 6 Sections: 6.6-6.8)

Theorem 4.1. Primary sum decomposition theorem: (see Theorems 12 and 13,
Section 6.8 [2])
Let T ∈ L(V) where dim V < ∞. Let the minimal polynomial for T be
m(x) = p_1^{r_1}(x) · · · p_k^{r_k}(x), where the p_i are distinct
irreducible (prime) monic polynomials over the field F and the r_i are
positive integers. Then
(i) V = N_{p_1^{r_1}(T)} ⊕ · · · ⊕ N_{p_k^{r_k}(T)};
(ii) each N_{p_i^{r_i}(T)} is T-invariant;
(iii) if T_i is the operator induced on N_{p_i^{r_i}(T)} by T, then the
minimal polynomial for T_i is p_i^{r_i};
(iv) if each p_i is linear, then T = D + N, where the diagonalizable
operator D and the nilpotent operator N on V are uniquely determined such
that DN = ND. This is the Jordan decomposition.
Apart from the following example, also refer to the subsection 7.1.1.

4.1. Example: Triangulable but not diagonalizable matrix. Consider the
matrix A given by an operator T ∈ L(V) with respect to some basis B of
V = R^3:

        [ 1  1  0 ]
    A = [ 0  1  1 ]
        [ 0  0  4 ]
Note the following:
- The characteristic polynomial of this operator T is c(x) = (x − 1)^2 (x − 4).

- Note that the matrices of (T − I), (T − I)^2 and (T − 4I) with respect to
  the standard basis are respectively given by

              [ 0  1  0 ]               [ 0  0  1 ]               [ -3   1   0 ]
    (A − I) = [ 0  0  1 ],  (A − I)^2 = [ 0  0  3 ],  (A − 4I) =  [  0  -3   1 ]
              [ 0  0  3 ]               [ 0  0  9 ]               [  0   0   0 ]

   This gives N_{(T−I)^2} = <(1, 0, 0), (0, 1, 0)> and N_{T−4I} = <(1, 3, 9)>.
6 INSTRUCTOR: KS SENTHIL RAANI

 In fact, the matrix of T with respect to the new basis from the above
  subspaces, that is, with respect to B = {(1, 0, 0), (0, 1, 0), (1, 3, 9)},
  is

    [ 1  1  0 ]
    [ 0  1  0 ]
    [ 0  0  4 ]

  Note that although the given matrix is not diagonalizable, it is similar
  to a block-diagonal matrix. We see the importance of this form in this
  chapter.
- The eigenspace N_{T−I} (the null space of T − I) corresponding to the
  eigenvalue 1 has dimension 1, which is not equal to the multiplicity of
  (x − 1), that is, 2. By the theorem characterizing diagonalization, this
  says that the operator is not diagonalizable. Check that c(T) ≡ 0. Two
  ways to check:
   Check with the matrix that (A − I_3)^2 (A − 4I_3) =
    ([T] − I_3)^2 ([T] − 4I_3) = 0, and hence, by the representation of the
    operator by a matrix, (T − I)^2 (T − 4I)(v_i) = 0 for every basis vector
    v_i ∈ B. Thus c(T)(v) = (T − I)^2 (T − 4I)(v) = 0 for all v ∈ V.
   Another way to check: find the matrix representation of (T − I)^2.
    Check that the null space N_{(T−I)^2} is 2-dimensional, and similarly
    that the eigenspace N_{T−4I} corresponding to the eigenvalue 4 is
    1-dimensional. Check that the vectors in N_{(T−I)^2} and N_{T−4I} are
    linearly independent. Since these two subspaces have 0 as the only
    common element, the subspace spanned by both of them has dimension 3.
    Thus c(T) ≡ 0.
- Minimal polynomial:
  By the definition of m, m | c. Let m_1(x) = (x − 1)(x − 4). By the
  property of the minimal polynomial, (x − 1)(x − 4) should divide the
  minimal polynomial. But (T − I)(T − 4I) ≢ 0: since eigenspaces of distinct
  eigenvalues have zero as the only common element, and
  dim N_{T−I} = 1 = dim N_{T−4I} while dim V = 3, we have a non-zero vector
  v ∉ N_{T−I} ∪ N_{T−4I} such that m_1(T)v ≠ 0, that is, m_1(T) ≢ 0.
  Hence m = c.
- Hence we have V = N_{(T−I)^2} ⊕ N_{T−4I}.
   (Check: why is it a direct sum?) - apply the primary decomposition,
    Theorem 4.1.
   Are both subspaces W_1 = N_{(T−I)^2} and W_2 = N_{T−4I} T-invariant?
   What happens to T_i = T|_{W_i}, the restriction of T to W_i?
- Note that
        [ 1  1  0 ]
    A = [ 0  1  1 ] = D + N
        [ 0  0  4 ]
  where
        [ 1  0  1/3 ]           [ 0  1  -1/3 ]
    D = [ 0  1   1  ]   and N = [ 0  0    0  ]
        [ 0  0   4  ]           [ 0  0    0  ]
  which is obtained by the primary sum decomposition, Theorem 4.1, where D
  is diagonalizable and N is nilpotent.
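The decomposition A = D + N can be verified numerically; a numpy sketch (ours, not from the notes):

```python
import numpy as np

# The S-N (Jordan) decomposition of A from Example 4.1.
A = np.array([[1., 1., 0.],
              [0., 1., 1.],
              [0., 0., 4.]])
D = np.array([[1., 0., 1/3],
              [0., 1., 1. ],
              [0., 0., 4. ]])
N = A - D                # = [[0, 1, -1/3], [0, 0, 0], [0, 0, 0]]
I = np.eye(3)

print(np.allclose(N @ N, 0))       # True: N^2 = 0, so N is nilpotent
print(np.allclose(D @ N, N @ D))   # True: D and N commute
# (D - I)(D - 4I) = 0, so the minimal polynomial of D divides
# (x - 1)(x - 4), a product of distinct linear factors: D is diagonalizable.
print(np.allclose((D - I) @ (D - 4*I), 0))   # True
```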

5. Rational and Jordan forms

We continue to view a vector space in a 'simplified' way for a fixed linear
operator. The characteristic and minimal polynomials, in general, do not
determine whether two matrices are similar. We now see a characterization
of similar matrices.
5.1. Order of a vector. We associate to each non-zero vector a particular
polynomial, called the order of the vector:
Lemma 5.1. Let V be a finite-dimensional vector space over F and T ∈ L(V).
Let v ∈ V be a non-zero vector. There exists a unique non-zero monic
polynomial m_v in F[x] (the space of all polynomials over F) such that
m_v(T)(v) = 0 and m_v | g for every g ∈ F[x] with g(T)(v) = 0.
Proof. We outline the proof of existence; the proof of uniqueness is left
as an exercise. Since V is finite-dimensional, there exists a positive
integer k such that {v, Tv, T^2 v, ..., T^{k−1} v} is linearly independent
and {v, Tv, T^2 v, ..., T^k v} is linearly dependent. Hence
T^k v = Σ_{i=0}^{k−1} a_i T^i v. Conclude, by the division algorithm, that
m_v(x) = x^k − Σ_{i=0}^{k−1} a_i x^i is the required polynomial. 
Properties of the order of v (check the following):
(a) m_v divides the minimal polynomial m of T.
(b) For a fixed T ∈ L(V), consider the cyclic subspace generated by v,
Z(v; T) = {g(T)(v) : g ∈ F[x]}. Let T_{Z(v;T)} be the restriction of T to
Z(v; T). Then the order m_v (of v) is the minimal polynomial of T_{Z(v;T)},
up to a constant factor.
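The existence proof of Lemma 5.1 is constructive; a numpy sketch (ours; the helper name order_of_vector is an invented illustration) that grows {v, Tv, T²v, ...} until it becomes dependent and reads off the coefficients of m_v:

```python
import numpy as np

def order_of_vector(T, v):
    """Coefficients of m_v, lowest degree first, following Lemma 5.1."""
    vecs = [v]
    while True:
        w = T @ vecs[-1]                     # the next power T^k v
        M = np.column_stack(vecs + [w])
        if np.linalg.matrix_rank(M) == len(vecs):
            # w is dependent: solve w = sum a_i T^i v, so that
            # m_v(x) = x^k - sum a_i x^i.
            a, *_ = np.linalg.lstsq(np.column_stack(vecs), w, rcond=None)
            return np.concatenate([-a, [1.0]])
        vecs.append(w)

# The matrix of Example 4.1.
T = np.array([[1., 1., 0.],
              [0., 1., 1.],
              [0., 0., 4.]])
print(order_of_vector(T, np.array([1., 0., 0.])))  # [-1.  1.]: m_v = x - 1
```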
5.2. Companion matrix A_p(x) of a polynomial p(x).
Theorem 5.2. Let p(x) = x^k − a_{k−1} x^{k−1} − · · · − a_0 be an
irreducible (prime) monic polynomial in F[x].
(i) Suppose m_v(x) = p(x) is the order of v. Then the matrix of T_{Z(v;T)}
with respect to the basis {v, Tv, ..., T^{k−1} v} of Z(v; T) is

                [ 0  0  · · ·  0  a_0     ]
                [ 1  0  · · ·  0  a_1     ]
    A_p(x)  :=  [ 0  1  · · ·  0  a_2     ]
                [ .  .  · · ·  .  .       ]
                [ 0  0  · · ·  1  a_{k−1} ]

Proof. By the definition of Z(v; T) and the proof of Lemma 5.1, we have a
basis {v, Tv, ..., T^{k−1} v} of Z(v; T). Since T(T^i(v)) = T^{i+1} v for
all i ≤ k − 2 and
    T(T^{k−1} v) = a_0 v + a_1 Tv + · · · + a_{k−1} T^{k−1} v,
the matrix is clearly as described. 
(ii) Suppose m_v(x) = p^r(x) for some integer r > 1. Then the matrix of
T_{Z(v;T)} with respect to the basis
    {p(T)^{r−1} v, T p(T)^{r−1} v, ..., T^{k−1} p(T)^{r−1} v,
     p(T)^{r−2} v, T p(T)^{r−2} v, ..., T^{k−1} p(T)^{r−2} v,
     ..., v, Tv, ..., T^{k−1} v}
of Z(v; T) is the r × r block matrix

                  [ Ã  B  0  · · ·  0 ]
                  [ 0  Ã  B  · · ·  0 ]
    A_{p^r(x)} := [ .  .  .  · · ·  . ]
                  [ 0  0  0  · · ·  Ã ]

where Ã = A_p(x) and B = (b_ij)_{k×k} with b_{1k} = 1 and b_ij = 0 for all
other i, j.
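The companion matrix can be built and checked against its characteristic polynomial; a numpy sketch (ours; the helper name companion is an invented illustration, and note np.poly returns monic coefficients from highest degree down):

```python
import numpy as np

def companion(a):
    """Companion matrix of p(x) = x^k - a_{k-1} x^{k-1} - ... - a_0,
    in the convention of Theorem 5.2; a = [a_0, ..., a_{k-1}]."""
    k = len(a)
    A = np.zeros((k, k))
    A[1:, :-1] = np.eye(k - 1)   # subdiagonal of 1's
    A[:, -1] = a                 # last column carries a_0, ..., a_{k-1}
    return A

A = companion([2., 3.])          # p(x) = x^2 - 3x - 2
# np.poly recovers the monic characteristic polynomial of A,
# which should be p itself (coefficients approximately [1, -3, -2]).
print(np.poly(A))
```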
5.3. Rational and Jordan Canonical form.
Theorem 5.3. Elementary divisor theorem - Cyclic decomposition theorem:
Let T ∈ L(V) where V is a finite-dimensional vector space over an arbitrary
field and V ≠ 0. Then there exist non-zero vectors {v_1, v_2, ..., v_k} in V
whose orders are powers of prime polynomials in F[x],
{p_1(x)^{r_1}, ..., p_k(x)^{r_k}}, and such that V is the direct sum of the
cyclic subspaces Z(v_i; T). This decomposition is unique up to a
rearrangement.
The outline of the proof was discussed in class. We concentrate on the
proof when the field is the complex numbers; for the proof, refer to [4]
(Chapter XI, Section 6).
Recall Example 4.1: the minimal polynomial is m(x) = (x − 1)^2 (x − 4);
check that N_{T−4I} = Z((1, 3, 9); T) and N_{(T−I)^2} = Z((0, 1, 0); T).
Note that Z((1, 0, 0); T) is a subspace of Z((0, 1, 0); T). Here the
rational form is the same as the matrix obtained by the primary
decomposition.

Thought! - What is the rational canonical form (Jordan form if the field is
algebraically closed) of a nilpotent matrix?

5.4. Some insights on S-N decomposition. For this subsection, let us study
vector spaces over the field of complex numbers.
A semisimple operator T is defined to be a linear operator such that every
T-invariant subspace has a complementary T-invariant subspace; that is, if
W is a T-invariant subspace of V, then there exists W̃ such that
V = W ⊕ W̃ and W̃ is T-invariant.
Lemma 5.4. Let T ∈ L(V) and let V = V_1 ⊕ · · · ⊕ V_k be the primary
decomposition. If W is a T-invariant subspace, then
W = (W ∩ V_1) ⊕ · · · ⊕ (W ∩ V_k). (Refer to [2], Section 7.5, for the
proof.)
Lemma 5.5. Let T ∈ L(V). If m(x) = x − λ, then T is semisimple.
Theorem 5.6. Let V be a finite-dimensional vector space over the field of
complex numbers. Then T ∈ L(V) is semisimple if and only if T is
diagonalizable.

6. Inner Product Spaces and Spectral theorem

- Inner product spaces, Gram-Schmidt
   we see, via orthogonality, why the geometric dimension equals the
    algebraic dimension of the spaces we have studied so far
- linear functionals and adjoints
   we see the existence of adjoint operators
- unitary operators, normal operators, self-adjoint operators, the spectral
  theorem for self-adjoint operators
NOTES: (Refer [2] Chapter 8 and Theorem 9 in Chapter 9)

7. Appendix

7.1. Example: Solution space of a system of homogeneous linear equations.

Motivation: (refer [3], page 3) Let us look at a simple example from
differential equations. Consider the system of two differential equations
in two unknown functions:

    dx_1 = a_1 x_1 dt,
    dx_2 = a_2 x_2 dt.

Many more complicated differential equations can be reduced to this simple
system; you may study why in a later course! The solution is given by

    x_1(t) = C_1 exp(a_1 t),  C_1 = constant,
    x_2(t) = C_2 exp(a_2 t),  C_2 = constant.

In fact the constants are determined by the initial conditions x_1(t_0) and
x_2(t_0). Consider the map x(t) = (x_1(t), x_2(t)) from R to R^2. Then the
tangent vector x'(t) to the curve x(t) is given in vector notation by
x' = Ax, where Ax = (a_1 x_1, a_2 x_2). Suppose we take A to be the
diagonal matrix with a_1 = 2 and a_2 = −1/2. Then

[Figure: the map x ↦ Ax pictured as a vector field in the (x_1, x_2)-plane.]

The solution curve of the differential equation can also be plotted and
seen easily as 'a curved vector field'.

This picture clearly describes why we look at the following system of
equations, given by Ax = y or Ax = 0, where A = (a_ij) is an m × n matrix:

    [ a_11  a_12  ...  a_1n ] [ x_1 ]   [ a_11 x_1 + a_12 x_2 + · · · + a_1n x_n ]
    [ a_21  a_22  ...  a_2n ] [ x_2 ]   [ a_21 x_1 + a_22 x_2 + · · · + a_2n x_n ]
    [ .....................  ] [  .  ] = [                 · · ·                  ]
    [ a_m1  a_m2  ...  a_mn ] [ x_n ]   [ a_m1 x_1 + a_m2 x_2 + · · · + a_mn x_n ]

Homogeneous linear equations:

We call the following a system of m linear equations in n unknowns (we call
it homogeneous if y_i = 0 for all i):

        A_11 x_1 + A_12 x_2 + · · · + A_1n x_n = y_1
        A_21 x_1 + A_22 x_2 + · · · + A_2n x_n = y_2
(1)     ·········
        A_m1 x_1 + A_m2 x_2 + · · · + A_mn x_n = y_m

Recall: if A is an n × n matrix, then
the homogeneous system AX = 0 has only the trivial solution X = 0
if and only if
the system of equations AX = Y has a solution for each n × 1 matrix Y
if and only if
A is invertible.
Solution space of a system of homogeneous linear equations:
Recall:
• Using the row-reduced echelon matrix (RX = PY, where the row-reduced
  echelon matrix R equivalent to A is given by R = PA with P an m × m
  invertible matrix), recall how you concluded in your basic linear algebra
  class that:
   if A is an m × n matrix with m < n, then the homogeneous system of
    linear equations AX = 0 has a non-trivial solution;
   if A is an n × n matrix, then A is row-equivalent to the n × n identity
    matrix if and only if the system of equations AX = 0 has only the
    trivial solution.
• Recall the treatment of a non-homogeneous system of linear equations
  AX = Y via the m × (n + 1) augmented matrix Ã (where Ã_ij = A_ij if
  j ≤ n and Ã_i(n+1) = Y_i).
• A ∈ F^{n×n} is invertible ⟺ the homogeneous system AX = 0 has only the
  trivial solution ⟺ the system AX = Y has a solution X for each
  Y ∈ F^{n×1}.
• Recall Cramer's rule!
Fix A, an m × n matrix over the field F. The solution space of AX = 0 is
the set S of all n × 1 matrices X over F that satisfy AX = 0.

Properties of the solution space:

(1) S is a vector space over F.
    This follows from the observation that A(cX_1 + X_2) = c(AX_1) + AX_2
    for any n × 1 matrices X_1, X_2 and c ∈ F.
(2) If R is the row-reduced echelon matrix of A, then S = S_R, the solution
    space of the system RX = 0. (It is important to know how to relate a
    basis of S_R to a basis of S.) - Example 15, Section 2.3 [2].
(3) Homogeneous systems of linear equations give the motivation to study
    linear functionals:
    if we have the system AX = 0 where A ∈ F^{m×n}, consider
    T_i(x_1, ..., x_n) = A_i1 x_1 + · · · + A_in x_n. The solution space is
    given by {x = (x_1, ..., x_n) : T_i(x) = 0, i = 1, ..., m}, that is,
    the subspace annihilated by T_1, ..., T_m. Thus row reduction gives a
    systematic method of finding the annihilator of the subspace spanned by
    a given finite set of vectors in F^n.
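The recalled procedure (row-reduce, then read off a basis of the solution space) can be run in sympy; a sketch (ours, with an illustrative matrix):

```python
from sympy import Matrix

# A 3 x 4 system AX = 0 with m = 3 < n = 4, so a non-trivial
# solution must exist; here row 3 = row 1 + row 2, so rank(A) = 2.
A = Matrix([[1, 2, 0, 1],
            [0, 1, 1, 0],
            [1, 3, 1, 1]])

R, pivots = A.rref()          # row-reduced echelon form and pivot columns
basis = A.nullspace()         # basis of the solution space S

print(len(basis))             # 2, i.e. n - rank
for b in basis:
    assert A * b == Matrix([0, 0, 0])   # every basis vector solves AX = 0
```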
7.1.1. Solution space - differential equations. (See Example 14 in Section
6.8 of [2].) Fix F = C. Let n be a positive integer and let W be the space
of all n-times continuously differentiable functions f on the real line
which satisfy the differential equation

    d^n f / dx^n + a_{n−1} d^{n−1} f / dx^{n−1} + · · · + a_1 df/dx + a_0 f = 0,

where a_0, ..., a_{n−1} are fixed constants. In fact, W is the null space
of the operator p(D), where p(x) = x^n + a_{n−1} x^{n−1} + · · · + a_1 x + a_0
and D denotes the differentiation operator. Also, W is a D-invariant
subspace of the vector space V of all continuous functions.

Application of the primary decomposition theorem 4.1: if
p(x) = (x − c_1)^{r_1} · · · (x − c_k)^{r_k}, then by the primary
decomposition theorem W = W_1 ⊕ · · · ⊕ W_k, where W_i is the null space of
(D − c_i I)^{r_i}; and if f is a solution of the above differential
equation, then f = f_1 + · · · + f_k, where the f_i are solutions of
(D − c_i I)^{r_i} f_i = 0. Prove by induction that
f_i(x) = e^{c_i x} (b_0^{(i)} + b_1^{(i)} x + · · · + b_{r_i−1}^{(i)} x^{r_i−1})
for some b_j^{(i)} ∈ F.
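The shape of these solutions can be checked symbolically; a sympy sketch (ours) for p(x) = (x − 1)²(x − 4) = x³ − 6x² + 9x − 4:

```python
from sympy import Function, dsolve, symbols, Derivative

x = symbols('x')
f = Function('f')

# p(D)f = f''' - 6 f'' + 9 f' - 4 f, since (x-1)^2 (x-4) = x^3 - 6x^2 + 9x - 4.
ode = (Derivative(f(x), x, 3) - 6*Derivative(f(x), x, 2)
       + 9*Derivative(f(x), x) - 4*f(x))

sol = dsolve(ode, f(x))
# The general solution has the predicted form, e.g.
# f(x) = (C1 + C2*x)*exp(x) + C3*exp(4*x).
print(sol)
```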

7.2. Example: Space of all polynomials of any degree over the field F.
Consider F to be either R or C.
F[x]: the space of all polynomials from the field F into itself.
F_m[x]: the space of all polynomials of degree less than or equal to m from
the field F into itself.
Check/prove the following, comparing F[x] and F_m[x]:

• Vector space (with pointwise addition): F[x]: yes. F_m[x]: yes, a
  subspace of F[x].
• Standard basis B_0: F[x]: {1, x, x^2, ...}. F_m[x]: {1, x, x^2, ..., x^m}.
• Dimension: F[x]: infinite. F_m[x]: finite (= m + 1).
• Identity, I(x) = 1: I ∈ F[x] and I ∈ F_m[x].
• Differentiation operator D, D(p)(x) = Σ_{j=0}^{k} j c_j x^{j−1} for
  p(x) = Σ_{j=0}^{k} c_j x^j: D ∈ L(F[x]); D ∈ L(F_m[x]) (here
  deg(p) = k must be ≤ m).
• N_D, the null space of D: the constant polynomials, in both cases.
• [D]_{B_0}: F[x]: does not exist. F_m[x]: see (2) below.
• Integration operator I, I(p)(x) = Σ_{j=0}^{k} (c_j / (j + 1)) x^{j+1}:
  I ∈ L(F[x]) but I ∉ L(F_m[x]); DI = id while ID ≠ id.
• I is a right inverse of D: F[x]: yes. F_m[x]: no.
• Multiplication by x, M(p)(x) = x p(x): M ∈ L(F[x]) but M ∉ L(F_m[x]).
• Evaluation at x_0, L_{x_0}(p) = p(x_0): L_{x_0} ∈ (F[x])* and
  L_{x_0} ∈ (F_m[x])*.
• L_a^b(p) = ∫_a^b p(x) dx for fixed a, b ∈ F: L_a^b ∈ (F[x])* and
  L_a^b ∈ (F_m[x])*.
• f_j(p) = c_j for all j, where p(x) = Σ_{j=0}^{k} c_j x^j (deg(p) ≤ m in
  the second case): f_j ∈ (F[x])* and f_j ∈ (F_m[x])*.
• Is {f_j}_{j=0}^{m} a basis for (F[x])*? It is a basis for (F_m[x])*.
• Express L_{x_0} and L_a^b in terms of this basis.

(2)
                [ 0  1  0  ...  0 ]
                [ 0  0  2  ...  0 ]
    [D]_{B_0} = [ .  .  .  ...  . ]
                [ 0  0  0  ...  m ]
                [ 0  0  0  ...  0 ]

Let us denote the standard basis elements by e_i(x) = x^{i−1} and fix
t ∈ F. Define B = {g_i} where g_i(x) = (x + t)^{i−1}. Then
    g_1 = e_1
    g_2 = t e_1 + e_2
    g_3 = t^2 e_1 + 2t e_2 + e_3
    ...
    g_m = t^{m−1} e_1 + (m−1) t^{m−2} e_2 + C(m−1, 2) t^{m−3} e_3 + · · ·
          + (m−1) t e_{m−1} + e_m.
Hence [D]_B = P^{−1} [D]_{B_0} P equals the same matrix [D]_{B_0} above,
where

        [ 1  t  t^2  · · ·  t^{m−1}          ]
        [ 0  1  2t   · · ·  (m−1) t^{m−2}    ]
    P = [ 0  0  1    · · ·  C(m−1,2) t^{m−3} ]
        [ .  .  .    · · ·  (m−1) t          ]
        [ 0  0  0    · · ·  1                ]
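The change-of-basis computation can be verified in sympy; a sketch (ours) for the specific value m = 3:

```python
from sympy import Matrix, symbols, binomial

t = symbols('t')
m = 3
n = m + 1   # dim F_m[x] = m + 1

# [D]_{B_0}: D(x^j) = j x^{j-1}, so entry (i, j) is j when j = i + 1.
D0 = Matrix(n, n, lambda i, j: j if j == i + 1 else 0)

# P expresses g_{j+1}(x) = (x + t)^j in the standard basis e_{i+1}(x) = x^i:
# (x + t)^j = sum_i C(j, i) t^{j-i} x^i.
P = Matrix(n, n, lambda i, j: binomial(j, i) * t**(j - i))

DB = (P.inv() * D0 * P).applyfunc(lambda e: e.expand())
print(DB == D0)   # True: [D]_B is again the superdiagonal matrix
```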
Other points to remember:
(1) There is exactly one polynomial function p over R which has degree at
most m and satisfies p(t_j) = c_j, j = 1, ..., m + 1. Refer to Example 22
in Section 3.5 of [2], which we discussed during the class.
(2) We have discussed the transpose of the differentiation operator D (it
acts on linear functionals, not on the vector space). See the 2nd question
in quiz 1.
(3) Check that the null space of D − 0I has dimension 1, but the
multiplicity of the factor x − 0 in the characteristic polynomial of D on
F_m[x] is m + 1. Hence D is not diagonalizable (for m ≥ 1).
7.3. Quotient spaces. Refer to Section 26 in [1].

References
[1] Curtis, C. W., Linear Algebra, an Introductory Approach.
[2] Hoffman, K. and Kunze, R., Linear Algebra, Prentice-Hall, 1961.
[3] Hirsch, M. W. and Smale, S., Differential Equations, Dynamical Systems
and Linear Algebra, Academic Press, 1974.
[4] Lang, S., Linear Algebra (2nd Edition), Addison-Wesley, 1971.
[5] Halmos, P., Finite Dimensional Vector Spaces (2nd Edition),
Undergraduate Texts in Mathematics, Springer-Verlag New York Inc., 1987.
[6] Lang, S., Algebra, Graduate Texts in Mathematics (3rd Edition),
Springer-Verlag New York Inc., 2005.
