
Rubric for End-Semester Exam

List of Common Errors and Marks Deductions:


1. Using an undefined symbol. Please deduct 1/2 mark each time this is
done. More marks may be deducted if the undefined symbol is used in
the argument.
2. Writing an equation in which the LHS and RHS are not comparable, for
example, if the LHS is an m × n matrix and the RHS is a real number.
Please deduct 1/2 mark each time this is done.
3. Writing a meaningless or completely illogical statement. Please deduct 1
mark for every meaningless statement.
4. Please deduct 1/2 mark for every calculation mistake.

Question 1 (10 marks). Suppose that V is a vector space over a field F.

Let T ∈ L(V, V) (i.e. T is a linear transformation from V to V).
Let v ∈ V be a vector such that T^m v ≠ 0 but T^{m+1} v = 0 for some m ≥ 0.
Show that v, Tv, ..., T^m v are linearly independent.
Solution. Let

    c_1 v + c_2 Tv + ··· + c_{m+1} T^m v = 0,

where c_1, ..., c_{m+1} ∈ F. Suppose, if possible, that the c_j are not all zero.
Let k be the smallest index for which c_k ≠ 0. Then

    T^{m-k+1}(c_1 v + c_2 Tv + ··· + c_{m+1} T^m v) = T^{m-k+1}(0) = 0.

Therefore

    c_1 T^{m-k+1} v + c_2 T^{m-k+2} v + ··· + c_{m+1} T^{2m-k+1} v = 0.

Since c_1 = ··· = c_{k-1} = 0 by the choice of k, and every term with exponent greater than m vanishes, this reduces to c_k T^m v = 0.
As T^m v ≠ 0, it follows that c_k = 0, which is a contradiction. Therefore

    c_1 = c_2 = ··· = c_{m+1} = 0.
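As an illustrative sanity check of this fact, one can test a concrete nilpotent operator numerically; the sketch below (Python with NumPy, a hypothetical example with the shift operator on R^4 and m = 3) is not part of the required solution:

```python
import numpy as np

# Hypothetical example: T is the shift operator on R^4 (T e1 = 0, T e_{k+1} = e_k).
# With v = e4 we have T^3 v = e1 != 0 and T^4 v = 0, so m = 3.
T = np.diag(np.ones(3), k=1)          # 4x4 nilpotent shift matrix
v = np.array([0.0, 0.0, 0.0, 1.0])

vectors = [v]
for _ in range(3):                    # build v, Tv, T^2 v, T^3 v
    vectors.append(T @ vectors[-1])

M = np.column_stack(vectors)
print(np.linalg.matrix_rank(M))       # 4, so v, Tv, T^2 v, T^3 v are independent
```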
Rubric.

• Award 1 mark for the step: v, Tv, ..., T^m v are linearly independent iff c_1 v + c_2 Tv + ··· + c_{m+1} T^m v = 0 =⇒ c_1 = ··· = c_{m+1} = 0.

• Award 1 mark for applying T^m to the linear combination c_1 v + c_2 Tv + ··· + c_{m+1} T^m v.

• Award 1 mark for correctly deriving from the above step that the coefficient of v is zero.

• Award 1 mark for applying T^{m-1} to the linear combination c_2 Tv + ··· + c_{m+1} T^m v.

• Award 1 mark for correctly deriving from the above step that the coefficient of Tv is zero.

• Award 5 marks for showing that the remaining coefficients are zero.

• Deduct 1 mark if a proof-by-pattern technique is used but the proof is otherwise correct.

• Deduct 1 mark if the action of the function T is described using the word "multiplication".

• Deduct 2 marks for proving that the coefficient of T^{m+1} v is zero in an arbitrary linear combination.
Question 2 (10 marks). Let V = F^n and consider the operator T : V → V given by

    T(x_1, \ldots, x_n) = \left( \sum_{i=1}^{n} x_i, \; \sum_{i=1}^{n} x_i, \; \ldots, \; \sum_{i=1}^{n} x_i \right)

(a) (1 mark) Construct the matrix A of T relative to any suitable basis of V.

(b) (6 marks) Determine the eigenvalues and corresponding eigenvectors of T.

(c) (3 marks) Is T diagonalizable (YES/NO)? Justify your answer briefly, by referring to a suitable result.

Solution. (a) Let A = (a_ij) be the matrix of T with respect to the standard basis B = {e_1, ..., e_n} of V. Then

    A = [[T(e_1)]_B \; \ldots \; [T(e_n)]_B] = [T(e_1) \; \ldots \; T(e_n)] = \begin{pmatrix} 1 & 1 & \cdots & 1 \\ 1 & 1 & \cdots & 1 \\ \vdots & & \ddots & \vdots \\ 1 & 1 & \cdots & 1 \end{pmatrix},

i.e. a_ij = 1 for i, j = 1, ..., n.

(b) Clearly, the rank of A is 1. Hence det A = 0. Thus zero is an eigenvalue of A.

Now the eigenspace corresponding to the zero eigenvalue is simply Nul A. Further,

    dim Nul A = n − rank(A) = n − 1.

Thus the multiplicity of the zero eigenvalue is at least n − 1. In fact, the vectors

    (1, −1, 0, ..., 0), (0, 1, −1, 0, ..., 0), ..., (0, ..., 0, 1, −1)

are a basis for the eigenspace corresponding to the zero eigenvalue. Also

    A (1, 1, \ldots, 1)^T = (n, n, \ldots, n)^T = n \, (1, 1, \ldots, 1)^T.

Thus n is an eigenvalue of A and (1, 1, ..., 1) is a corresponding eigenvector.

Hence the multiplicity of the zero eigenvalue equals n − 1.

(c) YES. A is a real symmetric matrix, and every real symmetric matrix is diagonalizable. Thus the matrix of T with respect to the standard basis is diagonalizable, and therefore T is diagonalizable.
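The eigenvalue computation can be checked numerically; a minimal sketch (Python with NumPy, taking n = 4 as an illustrative value):

```python
import numpy as np

# All-ones matrix for n = 4: eigenvalues should be 0 (multiplicity n-1) and n.
n = 4
A = np.ones((n, n))
print(np.sort(np.linalg.eigvals(A).real))   # approx [0, 0, 0, 4]
print(A @ np.ones(n))                       # (4, 4, 4, 4) = n * (1, ..., 1)
```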

Rubric.

(a) 1 mark for fixing a basis and finding the matrix of T relative to that basis

(b) • 1 mark for each correctly found eigenvalue
    • 1 mark for finding the eigenvector corresponding to n
    • 3 marks for finding the remaining eigenvectors

(c) 3 marks for the correct justification


Question 3. Let (V, ⟨·,·⟩) be a finite-dimensional inner product space, and let U and W be subspaces of V.

(a) (5 marks) Show that

    (U + W)^⊥ = U^⊥ ∩ W^⊥

(b) (5 marks) Show that

    (U ∩ W)^⊥ = U^⊥ + W^⊥

Solution. (a) Let x ∈ (U + W)^⊥. Then

    ⟨x, y⟩ = 0 for all y ∈ U + W.

For every u ∈ U, u = u + 0. Therefore U ⊂ U + W. Similarly, W ⊂ U + W. Hence

    ⟨x, y⟩ = 0 for all y ∈ U, so x ∈ U^⊥.

Similarly, x ∈ W^⊥. Therefore x ∈ U^⊥ ∩ W^⊥. As the choice of x was arbitrary, (U + W)^⊥ ⊂ U^⊥ ∩ W^⊥.

Next, suppose x ∈ U^⊥ ∩ W^⊥. Let y ∈ U + W. Then y = u + w for some u ∈ U and w ∈ W. Therefore

    ⟨x, y⟩ = ⟨x, u + w⟩ = ⟨x, u⟩ + ⟨x, w⟩ = 0 + 0 = 0.

Hence x ∈ (U + W)^⊥. As the choice of x was arbitrary, U^⊥ ∩ W^⊥ ⊂ (U + W)^⊥. Thus

    U^⊥ ∩ W^⊥ = (U + W)^⊥.

(b) Claim: If X is any subspace of V, then (X^⊥)^⊥ = X.

The proof of the above claim can be found on page 195 of [6]. Using part (a) and the above claim, we obtain

    U ∩ W = (U^⊥)^⊥ ∩ (W^⊥)^⊥ = (U^⊥ + W^⊥)^⊥.

Using the stated claim again, we obtain

    (U ∩ W)^⊥ = ((U^⊥ + W^⊥)^⊥)^⊥ = U^⊥ + W^⊥.
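Part (a) can also be sanity-checked numerically in R^4 with the dot product; in the sketch below (Python with NumPy/SciPy), the subspaces U and W are hypothetical choices given by spanning columns, and X^⊥ is computed as Nul(X^T):

```python
import numpy as np
from scipy.linalg import null_space

# Hypothetical subspaces of R^4, given by spanning columns.
U = np.array([[1., 0.], [0., 1.], [0., 0.], [0., 0.]])   # span{e1, e2}
W = np.array([[0.], [1.], [1.], [0.]])                   # span{e2 + e3}

perp_of_sum = null_space(np.hstack([U, W]).T)     # (U + W)^perp = Nul([U W]^T)
intersection = null_space(np.vstack([U.T, W.T]))  # x with U^T x = 0 and W^T x = 0

# Two subspaces coincide iff stacking their bases does not raise the rank.
r1 = np.linalg.matrix_rank(perp_of_sum)
r2 = np.linalg.matrix_rank(intersection)
r12 = np.linalg.matrix_rank(np.hstack([perp_of_sum, intersection]))
print(r1, r2, r12)   # 1 1 1: the two subspaces are equal
```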
Rubric.

(a) • 2.5 marks for showing that (U + W)^⊥ ⊂ U^⊥ ∩ W^⊥
    • 2.5 marks for showing that U^⊥ ∩ W^⊥ ⊂ (U + W)^⊥

(b) 5 marks for applying the relation (X^⊥)^⊥ = X to derive the required result.
Question 4 (10 marks). Let a, b ∈ R, and let b ≠ 0. Orthogonally diagonalize

    A = \begin{pmatrix} a & 0 & b \\ 0 & a & 0 \\ b & 0 & a \end{pmatrix}
Solution. Let p(λ) be the characteristic polynomial of A. Then

    p(λ) = det(A − λI) = (a − λ)((a − λ)^2 − b^2) = (a − λ)(a − λ + b)(a − λ − b).

As b ≠ 0, the eigenvalues λ_1 = a, λ_2 = a + b, λ_3 = a − b are distinct. Therefore the eigenvectors corresponding to λ_1, λ_2 and λ_3 are mutually orthogonal. Let us find them and normalize them, to obtain an orthonormal basis of R^3 consisting of eigenvectors of A.

    A − λ_1 I = A − aI = \begin{pmatrix} 0 & 0 & b \\ 0 & 0 & 0 \\ b & 0 & 0 \end{pmatrix}

As dim Nul(A − aI) = 1, a basis for the eigenspace corresponding to λ_1 is e_2. Next,

    A − λ_2 I = A − (a + b)I = \begin{pmatrix} -b & 0 & b \\ 0 & -b & 0 \\ b & 0 & -b \end{pmatrix}

As dim Nul(A − (a + b)I) = 1, a basis for the eigenspace corresponding to λ_2 is the eigenvector (1, 0, 1). Next,

    A − λ_3 I = A − (a − b)I = \begin{pmatrix} b & 0 & b \\ 0 & b & 0 \\ b & 0 & b \end{pmatrix}

As dim Nul(A − (a − b)I) = 1, a basis for the eigenspace corresponding to λ_3 is the eigenvector (1, 0, −1).

After normalizing the eigenvectors corresponding to λ_1, λ_2 and λ_3, we take them as the columns of the orthogonal matrix

    P = \begin{pmatrix} 0 & 1/\sqrt{2} & 1/\sqrt{2} \\ 1 & 0 & 0 \\ 0 & 1/\sqrt{2} & -1/\sqrt{2} \end{pmatrix}

Thus

    A = P D P^T,

where

    D = \begin{pmatrix} a & 0 & 0 \\ 0 & a + b & 0 \\ 0 & 0 & a - b \end{pmatrix}

Please note: The matrices P and D are only unique up to a permutation of the columns of P and a corresponding permutation of the diagonal entries of D.
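A quick numerical check of the factorization (Python with NumPy; a = 2, b = 3 are illustrative sample values, since a and b are arbitrary reals with b ≠ 0):

```python
import numpy as np

a, b = 2.0, 3.0                                   # sample values with b != 0
A = np.array([[a, 0, b], [0, a, 0], [b, 0, a]])
s = 1 / np.sqrt(2)
P = np.array([[0, s, s], [1, 0, 0], [0, s, -s]])  # normalized eigenvectors as columns
D = np.diag([a, a + b, a - b])

print(np.allclose(P.T @ P, np.eye(3)))   # True: P is orthogonal
print(np.allclose(P @ D @ P.T, A))       # True: A = P D P^T
```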

Rubric.

• Finding characteristic polynomial: 2 marks

• Finding eigenvalues : 1 mark

• Finding eigenvectors: 3 marks

• Normalizing the eigenvectors: 1 mark

• Justifying the fact that the eigenvectors obtained are mutually orthogonal: 1 mark

• Writing P and D in the proper order (eigenvectors should correspond to eigenvalues in the same order): 2 marks
Question 5 (10 marks). Let

    A = \begin{pmatrix} -49\pi & 20\pi \\ -136\pi & 55\pi \end{pmatrix}

Find an invertible matrix P and a matrix B of the form \begin{pmatrix} a & -b \\ b & a \end{pmatrix} such that

    A = P B P^{-1}.

Solution. Let p(λ) be the characteristic polynomial of A. Then

    p(λ) = det(A − λI) = \det \begin{pmatrix} -49\pi - \lambda & 20\pi \\ -136\pi & 55\pi - \lambda \end{pmatrix} = λ^2 − 6πλ + 25π^2

The eigenvalues are 3π ± 4πi.

Let us find the eigenspace corresponding to the eigenvalue λ = 3π − 4πi.

    Nul(A − λI) = Nul \begin{pmatrix} -49\pi - \lambda & 20\pi \\ -136\pi & 55\pi - \lambda \end{pmatrix} = Nul \begin{pmatrix} -52\pi + 4\pi i & 20\pi \\ -136\pi & 52\pi + 4\pi i \end{pmatrix}

We solve the simultaneous system

    (−52π + 4πi)z_1 + 20πz_2 = 0
    −136πz_1 + (52π + 4πi)z_2 = 0

to obtain the general solution

    \begin{pmatrix} z_1 \\ z_2 \end{pmatrix} = \mu \begin{pmatrix} 5 \\ 13 - i \end{pmatrix},

where μ ∈ C. Let

    v_1 = \operatorname{Re} \begin{pmatrix} 5 \\ 13 - i \end{pmatrix} = \begin{pmatrix} 5 \\ 13 \end{pmatrix}, \quad v_2 = \operatorname{Im} \begin{pmatrix} 5 \\ 13 - i \end{pmatrix} = \begin{pmatrix} 0 \\ -1 \end{pmatrix}

Put P = [v_1 v_2]. Then

    P = \begin{pmatrix} 5 & 0 \\ 13 & -1 \end{pmatrix}

Then

    P^{-1} A P = \begin{pmatrix} 1/5 & 0 \\ 13/5 & -1 \end{pmatrix} \begin{pmatrix} -49\pi & 20\pi \\ -136\pi & 55\pi \end{pmatrix} \begin{pmatrix} 5 & 0 \\ 13 & -1 \end{pmatrix} = \begin{pmatrix} 3\pi & -4\pi \\ 4\pi & 3\pi \end{pmatrix} = B
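As a numerical sanity check of A = P B P^{-1} (Python with NumPy; a minimal sketch, not part of the required solution):

```python
import numpy as np

pi = np.pi
A = pi * np.array([[-49., 20.], [-136., 55.]])
P = np.array([[5., 0.], [13., -1.]])
B = pi * np.array([[3., -4.], [4., 3.]])
print(np.allclose(P @ B @ np.linalg.inv(P), A))   # True
```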

Rubric.

• Finding characteristic polynomial: 1 mark

• Finding eigenvalues: 1 mark

• Finding a complex eigenvector: 3 marks

• Constructing P using real and imaginary parts of this complex eigenvector: 3 marks

• Finding B: 2 marks

Please note: The answers for P and B are not unique. They will vary according to which complex eigenvector is chosen in order to construct P. Please check the calculations so that the equation P B P^{-1} = A (or P^{-1} A P = B) holds.
Question 6.

(a) (4 marks) Find an LU factorization of the following matrix:

    A = \begin{pmatrix} 2 & -4 & 2\pi^2 & -2 \\ 6 & -9 & 7 & -3 \\ -1 & -4 & 8 & 0 \end{pmatrix}

(b) (6 marks) Solve the equation Ax = b by using the LU factorization of A that you obtained in part (a). Do not use any other method.

    b = \begin{pmatrix} -4\pi^2 \\ -12\pi^2 + 7 \\ 8 - 4\pi^2 \end{pmatrix}

Solution.

(a) Step 1. Divide the first column of A by 2 to obtain the first column of L. Perform the corresponding inverse row operations to clear out the entries in the first column of A below the first entry. Resultant matrices:

    L_1 = \begin{pmatrix} 1 & 0 & 0 \\ 3 & 1 & 0 \\ -1/2 & 0 & 1 \end{pmatrix}, \quad A \sim \begin{pmatrix} 2 & -4 & 2\pi^2 & -2 \\ 0 & 3 & 7 - 6\pi^2 & 3 \\ 0 & -6 & 8 + \pi^2 & -1 \end{pmatrix}

Step 2. Divide the entries below the first entry of the second column of the matrix obtained in Step 1 (row equivalent to A) by 3 to obtain the second column of L. Perform the corresponding inverse row operations on the matrix obtained in Step 1 (row equivalent to A). Resultant matrices:

    L_2 = \begin{pmatrix} 1 & 0 & 0 \\ 3 & 1 & 0 \\ -1/2 & -2 & 1 \end{pmatrix}, \quad A \sim \begin{pmatrix} 2 & -4 & 2\pi^2 & -2 \\ 0 & 3 & 7 - 6\pi^2 & 3 \\ 0 & 0 & 22 - 11\pi^2 & 5 \end{pmatrix}

Step 3. As the resultant matrix (row equivalent to A) is in echelon form, the required factorization is

    L = \begin{pmatrix} 1 & 0 & 0 \\ 3 & 1 & 0 \\ -1/2 & -2 & 1 \end{pmatrix}, \quad U = \begin{pmatrix} 2 & -4 & 2\pi^2 & -2 \\ 0 & 3 & 7 - 6\pi^2 & 3 \\ 0 & 0 & 22 - 11\pi^2 & 5 \end{pmatrix}
(b) We first solve the system Ly = b, where y = (y_1, y_2, y_3):

    \begin{pmatrix} 1 & 0 & 0 \\ 3 & 1 & 0 \\ -1/2 & -2 & 1 \end{pmatrix} \begin{pmatrix} y_1 \\ y_2 \\ y_3 \end{pmatrix} = \begin{pmatrix} y_1 \\ 3y_1 + y_2 \\ -y_1/2 - 2y_2 + y_3 \end{pmatrix} = \begin{pmatrix} -4\pi^2 \\ -12\pi^2 + 7 \\ 8 - 4\pi^2 \end{pmatrix}

to obtain

    \begin{pmatrix} y_1 \\ y_2 \\ y_3 \end{pmatrix} = \begin{pmatrix} -4\pi^2 \\ -12\pi^2 + 7 - 3y_1 \\ 8 - 4\pi^2 + y_1/2 + 2y_2 \end{pmatrix} = \begin{pmatrix} -4\pi^2 \\ 7 \\ 22 - 6\pi^2 \end{pmatrix}

We next solve the system Ux = y, using back substitution:

    \begin{pmatrix} 2 & -4 & 2\pi^2 & -2 \\ 0 & 3 & 7 - 6\pi^2 & 3 \\ 0 & 0 & 22 - 11\pi^2 & 5 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} = \begin{pmatrix} -4\pi^2 \\ 7 \\ 22 - 6\pi^2 \end{pmatrix}

As the fourth column of U is not a pivot column, x_4 is a free variable. So

    (22 - 11\pi^2)x_3 + 5x_4 = 22 - 6\pi^2 \implies x_3 = \frac{22 - 6\pi^2 - 5x_4}{22 - 11\pi^2}

From

    3x_2 + (7 - 6\pi^2)x_3 + 3x_4 = 7

we get

    x_2 = \frac{1}{3}\left(7 - 3x_4 - (7 - 6\pi^2)\,\frac{22 - 6\pi^2 - 5x_4}{22 - 11\pi^2}\right) = \frac{\pi^2(97 - 36\pi^2) + (3\pi^2 - 31)x_4}{66 - 33\pi^2}.

From

    2x_1 - 4x_2 + 2\pi^2 x_3 - 2x_4 = -4\pi^2,

we get

    x_1 = 2x_2 - \pi^2 x_3 + x_4 - 2\pi^2 = \frac{2\pi^2(97 - 36\pi^2) + (6\pi^2 - 62)x_4}{66 - 33\pi^2} - \frac{22\pi^2 - 6\pi^4 - 5\pi^2 x_4}{22 - 11\pi^2} + x_4 - 2\pi^2 = \frac{(4 - 12\pi^2)(x_4 - \pi^2)}{66 - 33\pi^2}.

Thus

    \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} = x_4 \begin{pmatrix} \dfrac{4 - 12\pi^2}{66 - 33\pi^2} \\[1ex] \dfrac{3\pi^2 - 31}{66 - 33\pi^2} \\[1ex] \dfrac{-5}{22 - 11\pi^2} \\[1ex] 1 \end{pmatrix} + \begin{pmatrix} \dfrac{-\pi^2(4 - 12\pi^2)}{66 - 33\pi^2} \\[1ex] \dfrac{\pi^2(97 - 36\pi^2)}{66 - 33\pi^2} \\[1ex] \dfrac{22 - 6\pi^2}{22 - 11\pi^2} \\[1ex] 0 \end{pmatrix}
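The factorization and the solution formulas can be verified symbolically; the sketch below (Python with SymPy) checks that LU = A and, taking x_4 = 0 as an illustrative value of the free variable, checks that the resulting particular solution satisfies Ax = b:

```python
import sympy as sp

t = sp.pi**2   # shorthand for pi^2
A = sp.Matrix([[2, -4, 2*t, -2], [6, -9, 7, -3], [-1, -4, 8, 0]])
L = sp.Matrix([[1, 0, 0], [3, 1, 0], [sp.Rational(-1, 2), -2, 1]])
U = sp.Matrix([[2, -4, 2*t, -2], [0, 3, 7 - 6*t, 3], [0, 0, 22 - 11*t, 5]])
b = sp.Matrix([-4*t, -12*t + 7, 8 - 4*t])

print(sp.simplify(L * U - A))    # zero matrix: LU = A

x4 = 0                           # free variable; x4 = 0 picks one solution
x3 = (22 - 6*t - 5*x4) / (22 - 11*t)
x2 = (t*(97 - 36*t) + (3*t - 31)*x4) / (66 - 33*t)
x1 = (4 - 12*t) * (x4 - t) / (66 - 33*t)
print(sp.simplify(A * sp.Matrix([x1, x2, x3, x4]) - b))   # zero vector: Ax = b
```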

Rubric.

Part (a) • 2 marks for Step 1
         • 2 marks for Step 2

Part (b) • 2 marks for solving Ly = b
         • 1 mark for recognizing that x_4 is a free variable in the system Ux = y
         • 1 mark for computing x_3 as a function of x_4
         • 1 mark for computing x_2 as a function of x_4
         • 1 mark for computing x_1 as a function of x_4
Question 7. Let V = C^1[−π, π], the set of all continuously differentiable functions defined on the interval [−π, π].
(A function f is said to be continuously differentiable on [−π, π] if f is differentiable at every point of [−π, π] and its derivative f′ is continuous on [−π, π].)

(a) (2 marks) Show that V is a vector space over R, under the usual operations of pointwise addition of functions and pointwise multiplication of a function by a scalar.

(You may assume without proof that every differentiable function is continuous, i.e. C^1[−π, π] ⊂ C[−π, π].)

(b) (2 marks) Show that the mapping ⟨·,·⟩ : V × V → R, defined by

    \langle f, g \rangle = \int_{-\pi}^{\pi} f(t)g(t)\, dt + \int_{-\pi}^{\pi} f'(t)g'(t)\, dt,

is an inner product on V.

(c) (6 marks) Find an orthogonal basis for the subspace

    W = Span{1, cos t, sin t, cos² t}

of V, with respect to the inner product defined in part (b).

Solution. (a) Let f, g ∈ V. Then

    (f + g)′ = f′ + g′ ∈ C[−π, π].

Therefore f + g is continuously differentiable. Hence V is closed under addition.

Next, if f ∈ V and c ∈ R, then

    (cf)′ = cf′ ∈ C[−π, π].

Therefore cf is continuously differentiable. Hence V is closed under scalar multiplication. Thus V is a subspace of C[−π, π], and therefore V is a vector space.

(b) Let us verify that the mapping ⟨·,·⟩ : V × V → R satisfies the conditions for being an inner product:

(i) Let f, g, h ∈ V. Then

    ⟨f + g, h⟩ = \int_{-\pi}^{\pi} (f + g)(t)h(t)\, dt + \int_{-\pi}^{\pi} (f + g)'(t)h'(t)\, dt
               = \int_{-\pi}^{\pi} (f(t) + g(t))h(t)\, dt + \int_{-\pi}^{\pi} (f'(t) + g'(t))h'(t)\, dt
               = \int_{-\pi}^{\pi} f(t)h(t)\, dt + \int_{-\pi}^{\pi} f'(t)h'(t)\, dt + \int_{-\pi}^{\pi} g(t)h(t)\, dt + \int_{-\pi}^{\pi} g'(t)h'(t)\, dt
               = ⟨f, h⟩ + ⟨g, h⟩

(ii) Let f, g ∈ V and c ∈ R. Then

    ⟨cf, g⟩ = \int_{-\pi}^{\pi} (cf)(t)g(t)\, dt + \int_{-\pi}^{\pi} (cf)'(t)g'(t)\, dt
            = c \left( \int_{-\pi}^{\pi} f(t)g(t)\, dt + \int_{-\pi}^{\pi} f'(t)g'(t)\, dt \right)
            = c⟨f, g⟩

(iii) Let f, g ∈ V. Then

    ⟨f, g⟩ = \int_{-\pi}^{\pi} f(t)g(t)\, dt + \int_{-\pi}^{\pi} f'(t)g'(t)\, dt = \int_{-\pi}^{\pi} g(t)f(t)\, dt + \int_{-\pi}^{\pi} g'(t)f'(t)\, dt = ⟨g, f⟩

(iv) If f = 0, then

    ⟨f, f⟩ = 2 \int_{-\pi}^{\pi} 0\, dt = 0.

Conversely, suppose f ∈ V and ⟨f, f⟩ = 0. Then

    \int_{-\pi}^{\pi} (f(t))^2\, dt + \int_{-\pi}^{\pi} (f'(t))^2\, dt = 0.

Hence f = 0.
(The integral of a non-negative continuous function can only be zero if the function is the zero function. This fact will be covered later in Calculus/Analysis courses; students do not need to mention it in the proof.)
(c) Let f_0 = 1, f_1 = cos t, f_2 = sin t, f_3 = cos² t = (1 + cos 2t)/2. Observe that for any nonzero n ∈ Z,

    \int_{-\pi}^{\pi} \cos nt\, dt = \left[ \frac{\sin nt}{n} \right]_{-\pi}^{\pi} = 0     (1)

and

    \int_{-\pi}^{\pi} \sin nt\, dt = \left[ -\frac{\cos nt}{n} \right]_{-\pi}^{\pi} = 0.     (2)

Method 1: We use the Gram–Schmidt algorithm.

    g_0 = f_0 = 1

    g_1 = f_1 − (⟨f_1, g_0⟩ / ⟨g_0, g_0⟩) g_0
        = cos t − (1/⟨g_0, g_0⟩) \left( \int_{-\pi}^{\pi} \cos t\, dt + \int_{-\pi}^{\pi} 0\, dt \right)
        = cos t

    g_2 = f_2 − (⟨f_2, g_0⟩ / ⟨g_0, g_0⟩) g_0 − (⟨f_2, g_1⟩ / ⟨g_1, g_1⟩) g_1
        = sin t − (1/⟨g_0, g_0⟩) \left( \int_{-\pi}^{\pi} \sin t\, dt + \int_{-\pi}^{\pi} 0\, dt \right) − (g_1/⟨g_1, g_1⟩) \left( \int_{-\pi}^{\pi} \sin t \cos t\, dt − \int_{-\pi}^{\pi} \sin t \cos t\, dt \right)
        = sin t

    g_3 = f_3 − (⟨f_3, g_0⟩ / ⟨g_0, g_0⟩) g_0 − (⟨f_3, g_1⟩ / ⟨g_1, g_1⟩) g_1 − (⟨f_3, g_2⟩ / ⟨g_2, g_2⟩) g_2
        = \frac{1 + \cos 2t}{2} − \frac{1}{2⟨g_0, g_0⟩} \int_{-\pi}^{\pi} (1 + \cos 2t)\, dt
          − \frac{g_1}{2⟨g_1, g_1⟩} \left( \int_{-\pi}^{\pi} \cos t (1 + \cos 2t)\, dt + 2\int_{-\pi}^{\pi} \sin t \sin 2t\, dt \right)
          − \frac{g_2}{2⟨g_2, g_2⟩} \left( \int_{-\pi}^{\pi} \sin t (1 + \cos 2t)\, dt − 2\int_{-\pi}^{\pi} \cos t \sin 2t\, dt \right)

Since ⟨g_0, g_0⟩ = 2π and ∫_{−π}^{π} (1 + cos 2t) dt = 2π, the first correction term equals 1/2. Using the product-to-sum identities 2 cos t cos 2t = cos 3t + cos t, 2 sin t sin 2t = cos t − cos 3t and 2 cos t sin 2t = sin 3t + sin t, every integral in the g_1 and g_2 correction terms vanishes by (1) and (2). Hence

    g_3 = \frac{1 + \cos 2t}{2} − \frac{1}{2} = \frac{\cos 2t}{2}

Method 2: Since cos² t = (1 + cos 2t)/2 and cos 2t = 2cos² t − 1, it follows that

    W = Span{1, cos t, sin t, cos² t} = Span{1, cos t, sin t, cos 2t}

From (1) and (2) it is clear that

    {1, cos t, sin t, cos 2t}

is an orthogonal set of nonzero vectors in V, and is therefore an orthogonal basis for W.
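The orthogonality claim can be verified symbolically with respect to the given inner product; a minimal sketch (Python with SymPy, not part of the required solution):

```python
import sympy as sp

t = sp.symbols('t')

def ip(f, g):
    """The inner product <f, g> = integral of f g plus integral of f' g' over [-pi, pi]."""
    return (sp.integrate(f * g, (t, -sp.pi, sp.pi))
            + sp.integrate(sp.diff(f, t) * sp.diff(g, t), (t, -sp.pi, sp.pi)))

basis = [sp.Integer(1), sp.cos(t), sp.sin(t), sp.cos(2*t)]
for i in range(4):
    for j in range(i + 1, 4):
        print(ip(basis[i], basis[j]))   # all 0: the set is orthogonal
```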
Rubric.

(a) 2 marks for proving that V is a subspace of C[−π, π]

(b) • 1 mark for linearity in the first variable
    • 1/2 mark for the symmetry property ⟨f, g⟩ = ⟨g, f⟩
    • 1/2 mark for the positive definite property ⟨f, f⟩ = 0 ⇐⇒ f = 0

(c) Method 1:
    • 1 mark for computing g_0
    • 1 mark for computing g_1
    • 1 mark for computing g_2
    • 3 marks for computing g_3

    Method 2:
    • 3 marks for showing that W = Span{1, cos t, sin t, cos 2t}
    • 2 marks for showing that {1, cos t, sin t, cos 2t} is an orthogonal set
    • 1 mark for concluding that {1, cos t, sin t, cos 2t} is a basis
Question 8 (10 marks).
Let n ≥ 2. Let V = P_n, the vector space of polynomials of degree at most n, with real coefficients.
Let {p_1, p_2, p_3} be a linearly independent subset of V. Let A = (a_ij) be a 3 × 3 matrix having real entries. Let

    \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix} \begin{pmatrix} p_1 \\ p_2 \\ p_3 \end{pmatrix} = \begin{pmatrix} q_1 \\ q_2 \\ q_3 \end{pmatrix}

Show that {q_1, q_2, q_3} is a linearly independent subset of V if and only if A is invertible.
Solution. Let a_1, a_2 and a_3 denote the columns of A. If ξ = (c_1, c_2, c_3) ∈ R^3, then

    c_1 q_1 + c_2 q_2 + c_3 q_3
      = c_1(a_{11} p_1 + a_{12} p_2 + a_{13} p_3) + c_2(a_{21} p_1 + a_{22} p_2 + a_{23} p_3) + c_3(a_{31} p_1 + a_{32} p_2 + a_{33} p_3)
      = (c_1 a_{11} + c_2 a_{21} + c_3 a_{31}) p_1 + (c_1 a_{12} + c_2 a_{22} + c_3 a_{32}) p_2 + (c_1 a_{13} + c_2 a_{23} + c_3 a_{33}) p_3     (3)
      = (a_1^T ξ) p_1 + (a_2^T ξ) p_2 + (a_3^T ξ) p_3

Suppose A is not invertible. Then A^T is also not invertible. Hence there exists a nontrivial solution ξ = (c_1, c_2, c_3) ∈ R^3 of the equation A^T x = 0. Then

    c_1 q_1 + c_2 q_2 + c_3 q_3 = (a_1^T ξ) p_1 + (a_2^T ξ) p_2 + (a_3^T ξ) p_3 = 0.

Therefore {q_1, q_2, q_3} is linearly dependent.

Conversely, suppose {q_1, q_2, q_3} is linearly dependent. Then there exist numbers c_1, c_2, c_3, not all zero, such that

    c_1 q_1 + c_2 q_2 + c_3 q_3 = 0.

Let ξ = (c_1, c_2, c_3). Then

    (a_1^T ξ) p_1 + (a_2^T ξ) p_2 + (a_3^T ξ) p_3 = 0.

As the set {p_1, p_2, p_3} is linearly independent, it follows that

    a_1^T ξ = a_2^T ξ = a_3^T ξ = 0.

Therefore A^T x = 0 has a nontrivial solution. Therefore A^T is not invertible, so A is not invertible.
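A numerical illustration of the equivalence (Python with NumPy), identifying each polynomial with its coefficient vector; the choices p_1 = 1, p_2 = x, p_3 = x² in P_2 and the two matrices below are hypothetical examples:

```python
import numpy as np

P = np.eye(3)   # rows: coordinate vectors of p1 = 1, p2 = x, p3 = x^2 in P_2

A_inv  = np.array([[1., 2., 0.], [0., 1., 3.], [0., 0., 1.]])   # invertible
A_sing = np.array([[1., 2., 3.], [2., 4., 6.], [0., 0., 1.]])   # rank 2, not invertible

for A in (A_inv, A_sing):
    Q = A @ P   # rows: coordinate vectors of q1, q2, q3
    print(np.linalg.matrix_rank(Q))   # 3 for A_inv (independent), 2 for A_sing
```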
Rubric.
• For the method I’ve used above:
– For establishing the relationship expressed by equation (3) (1 mark each for the first two subequations, 2 marks for the third): 4 marks
– For recognizing that A is invertible iff AT is invertible: 2 marks
– For showing that {q1 , q2 , q3 } is a linearly independent subset of
V implies A is invertible, using the two above ideas: 2 marks
– For showing that A is invertible implies {q1 , q2 , q3 } is a linearly
independent subset of V , using the two above ideas: 2 marks
• For other methods:
– 5 marks for showing that A invertible implies {q1 , q2 , q3 } linearly
independent.
– 5 marks for showing that {q1 , q2 , q3 } linearly independent implies
A invertible.

References

[1] David C. Lay, Linear Algebra and Its Applications. Pearson, 2005.
[2] Gilbert Strang, Linear Algebra and Its Applications.
[3] Seymour Lipschutz, Linear Algebra, Schaum's Outline Series.
[4] Kenneth Hoffman and Ray Kunze, Linear Algebra.
[5] S. Kumaresan, Linear Algebra: A Geometric Approach.
[6] Sheldon Axler, Linear Algebra Done Right. Undergraduate Texts in Mathematics, Springer, 2015.
[7] Paul R. Halmos, Finite-Dimensional Vector Spaces.
[8] Michael Artin, Algebra. Prentice-Hall Inc., New Jersey, 1991.
