Lecture Notes: Matrices (Semester 1 - CC1)

Dr. P. Mandal

October 2021
Contents

1 Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
  1.1 Definitions: Some Terminologies . . . . . . . . . . . . . . . 1
  1.2 Matrix Algebra . . . . . . . . . . . . . . . . . . . . . . . . 3
  1.3 Special Square Matrices . . . . . . . . . . . . . . . . . . . 9
  1.4 Eigenvalue Problems . . . . . . . . . . . . . . . . . . . . . 13
      1.4.1 Corollaries . . . . . . . . . . . . . . . . . . . . . . 18
  1.5 Cayley-Hamilton Theorem . . . . . . . . . . . . . . . . . . . 22
  1.6 Diagonalization of Matrices . . . . . . . . . . . . . . . . . 24
      1.6.1 Corollaries . . . . . . . . . . . . . . . . . . . . . . 25
Chapter 1
Matrices
Syllabus: (a) Addition and Multiplication of Matrices. Null Matrices. Diagonal, Scalar and Unit Matrices. Transpose of a Matrix. Symmetric and Skew-Symmetric Matrices. Conjugate of a Matrix. Hermitian and Skew-Hermitian Matrices. Singular and Non-Singular Matrices. Orthogonal and Unitary Matrices. Trace of a Matrix. Inner Product.

(b) Eigenvalues and Eigenvectors. Cayley-Hamilton Theorem. Diagonalization of Matrices. Solutions of Coupled Linear Ordinary Differential Equations. Functions of a Matrix.
The element which belongs to the i-th row and j-th column is denoted as aij.

2. Row matrix: A row matrix contains only one row i.e. the numbers are arrayed in a single row. The order of such a matrix is thus 1 × n.

    R = ( a11  a12  ...  a1n )
3. Column matrix: A column matrix contains only one column i.e. the numbers are arrayed in a single column. The order of such a matrix is thus m × 1.

        ( a11 )
    C = ( a21 )
        ( ... )
        ( am1 )
5. Square matrix: If the number of rows (m) and the number of columns (n) of a matrix are equal, it is called a square matrix. The matrix S below is a square matrix of order 3.

        ( a11  a12  a13 )
    S = ( a21  a22  a23 )
        ( a31  a32  a33 )

The elements aii are called the diagonal elements. A square matrix whose non-diagonal elements are all zero is called a diagonal matrix, e.g.

        ( a11   0    0  )
    D = (  0   a22   0  )
        (  0    0   a33 )
1.2 Matrix Algebra

1. Addition and subtraction: Two matrices of the same order are added or subtracted element by element, e.g. for two 2 × 3 matrices A and B,

    ∴ A ± B = ( a11 ± b11  a12 ± b12  a13 ± b13 )
              ( a21 ± b21  a22 ± b22  a23 ± b23 )

Properties:

Matrix addition is commutative i.e. A + B = B + A.

Matrix addition is associative i.e. A + (B + C) = (A + B) + C.
2. Multiplication: An m × n matrix A and an n × p matrix B can be multiplied as C = AB, where C is an m × p matrix with elements

    cij = Σ (k = 1 to n) aik bkj
For example, the system of linear equations

    3x − y + 2z = −3
    2x + 3y − z = 2        (1.1)
    x − 2y + z = 5

can be written compactly in matrix form as AX = B, where A is the coefficient matrix, X = (x, y, z)^T and B = (−3, 2, 5)^T.
Properties:

Matrix multiplication is, in general, not commutative i.e. AB ≠ BA.

Matrix multiplication is associative i.e. A(BC) = (AB)C.

Matrix multiplication follows the distributive law i.e. A(B + C) = AB + AC.
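These properties are easy to check numerically. A minimal numpy sketch (the matrix values are my own illustrations, not from the notes):

```python
import numpy as np

# Arbitrary illustrative matrices (not from the notes).
A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 0]])
C = np.array([[2, 0], [0, 3]])

AB = A @ B
BA = B @ A

print(np.array_equal(AB, BA))                      # False: AB != BA in general
print(np.array_equal(A @ (B @ C), (A @ B) @ C))    # True: associativity
print(np.array_equal(A @ (B + C), A @ B + A @ C))  # True: distributivity
```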
(c) Direct product: The direct product is defined for general matrices. Given an n × n matrix A and an m × m matrix B, the direct product of A and B is an nm × nm matrix, defined by C = A ⊗ B. For example, with

    A = ( a11  a12 )        B = ( b11  b12 )
        ( a21  a22 ),           ( b21  b22 ),

                ( a11 B  a12 B )   ( a11 b11  a11 b12  a12 b11  a12 b12 )
    C = A ⊗ B = ( a21 B  a22 B ) = ( a11 b21  a11 b22  a12 b21  a12 b22 )
                                   ( a21 b11  a21 b12  a22 b11  a22 b12 )
                                   ( a21 b21  a21 b22  a22 b21  a22 b22 )
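numpy implements the direct product as `np.kron`; a quick check of the block structure, with illustrative values of my choosing:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 5], [6, 7]])

# C = A (x) B has the block form [[a11*B, a12*B], [a21*B, a22*B]].
C = np.kron(A, B)

print(C.shape)                                 # (4, 4)
print(np.array_equal(C[:2, :2], A[0, 0] * B))  # True: top-left block is a11*B
print(np.array_equal(C[2:, 2:], A[1, 1] * B))  # True: bottom-right block is a22*B
```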
3. Commutator: The commutator of two square matrices of the same order is defined as [A, B] = AB − BA. It satisfies:

    [A, B] = −[B, A]
    [A, B + C] = [A, B] + [A, C]
    [A, BC] = [A, B]C + B[A, C]
    [A, [B, C]] + [B, [C, A]] + [C, [A, B]] = 0
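All four identities hold for any concrete triple of matrices; a sketch with arbitrary 2 × 2 examples of mine:

```python
import numpy as np

def comm(X, Y):
    """Commutator [X, Y] = XY - YX."""
    return X @ Y - Y @ X

# Arbitrary test matrices.
A = np.array([[0., 1.], [0., 0.]])
B = np.array([[0., 0.], [1., 0.]])
C = np.array([[1., 0.], [0., -1.]])

print(np.allclose(comm(A, B), -comm(B, A)))              # antisymmetry: True
print(np.allclose(comm(A, B @ C),
                  comm(A, B) @ C + B @ comm(A, C)))      # Leibniz rule: True
J = comm(A, comm(B, C)) + comm(B, comm(C, A)) + comm(C, comm(A, B))
print(np.allclose(J, 0))                                 # Jacobi identity: True
```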
4. Power of a matrix: For any positive integer n, the power of a square matrix A is defined as Aⁿ = AA...A (n times) i.e. A² = AA, A³ = AAA. In particular, A⁰ = I, the unit matrix¹.
5. Function of a matrix: A function of a matrix maps a matrix to another matrix. For example, consider the matrix function f(A) = 3A² − 2A + I, where I is the unit matrix of the same order. A fancier example is f(A) = Σₖ ak Aᵏ, where ak are scalar coefficients. Another important example is the matrix exponential

    e^A = Σ (k = 0 to ∞) Aᵏ / k!
There are several techniques for lifting a real function to a square matrix function. If the real function f(x) has the Taylor series expansion

    f(x) = f(0) + f′(0) x + f′′(0) x²/2! + ...

then the corresponding matrix function is

    f(A) = f(0) I + f′(0) A + f′′(0) A²/2! + ...

where the powers become matrix powers, the additions become matrix sums and the multiplications become scaling operations.
¹ A⁻ⁿ = A⁻¹A⁻¹...A⁻¹ (n times) is defined if A is a nonsingular matrix i.e. |A| ≠ 0.
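The series definition of e^A can be evaluated directly by truncating the sum; a sketch (for production use, `scipy.linalg.expm` is the robust choice):

```python
import numpy as np

def exp_series(A, terms=30):
    """Truncated Taylor series for the matrix exponential exp(A)."""
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k        # accumulates A^k / k!
        result = result + term
    return result

# Rotation generator: exp of t*[[0,1],[-1,0]] is a rotation matrix.
A = np.array([[0., 1.], [-1., 0.]])
E = exp_series(A)
print(np.allclose(E, [[np.cos(1), np.sin(1)],
                      [-np.sin(1), np.cos(1)]]))  # True
```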
6. Transpose: The transpose matrix A^T of a matrix A is formed by interchanging its rows and columns, i.e. if A = (aij), then A^T = (aji).

Properties:

(A^T)^T = A

(A + B)^T = A^T + B^T

(AB)^T = B^T A^T
7. Complex conjugate: For any matrix A, the complex conjugate matrix A* is formed by taking the complex conjugate of each element of A, i.e. if A = (aij), A* = (a*ij) for all i and j. For example,

    A = (   1    2 + i )    ⇒    A* = (   1    2 − i )
        ( 3 − i    i   )              ( 3 + i   −i   )
Properties:

(A*)* = A

(A + B)* = A* + B*

(AB)* = A* B*
8. Hermitian conjugate: The Hermitian conjugate (adjoint) A† of a matrix A is the transpose of its complex conjugate, A† = (A*)^T.

Properties:

(A†)† = A.

(A + B)† = A† + B†.

(AB)† = B† A†.
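A numerical check of the Hermitian-conjugate properties (matrix values are my own illustrations):

```python
import numpy as np

A = np.array([[1, 2 + 1j], [3 - 1j, 1j]])
B = np.array([[0, 1j], [1, 2]])

# Hermitian conjugate: complex conjugate followed by transpose.
dagger = lambda M: M.conj().T

print(np.allclose(dagger(dagger(A)), A))                  # True
print(np.allclose(dagger(A + B), dagger(A) + dagger(B)))  # True
print(np.allclose(dagger(A @ B), dagger(B) @ dagger(A)))  # True: order reverses
```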
9. Trace: The trace of a square matrix is the sum of its diagonal elements, Tr(A) = Σᵢ aii.

Properties:

Tr(A) = Tr(A^T).

Tr(A + B) = Tr(A) + Tr(B).

Tr(AB) = Tr(BA).
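The cyclic property Tr(AB) = Tr(BA) holds even though AB ≠ BA in general; a quick check with random matrices:

```python
import numpy as np

A = np.random.default_rng(0).random((3, 3))
B = np.random.default_rng(1).random((3, 3))

print(np.isclose(np.trace(A), np.trace(A.T)))        # True
print(np.isclose(np.trace(A @ B), np.trace(B @ A)))  # True, despite AB != BA
```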
10. Determinant of a matrix: The determinant of a square matrix A is defined as the determinant having the same array as that of the matrix, generally denoted |A| or det(A). For example, the determinant of the matrix

    A = ( 2  3 )    is    |A| = | 2  3 | = 2·4 − 3·1 = 5.
        ( 1  4 )                | 1  4 |

If the determinant of a matrix is zero i.e. |A| = 0, A is called a singular matrix; otherwise A is non-singular.
Properties:

|A^T| = |A|.

|AB| = |A| |B|.

11. Cofactor matrix: The cofactor matrix A^c is obtained by replacing every element of A by its cofactor:

        ( a11  a12  a13 )             ( A11  A12  A13 )
    A = ( a21  a22  a23 )   ⇒   A^c = ( A21  A22  A23 )
        ( a31  a32  a33 )             ( A31  A32  A33 )

where

    A11 = (−1)^(1+1) | a22 a23 |,  A12 = (−1)^(1+2) | a21 a23 |,  A13 = (−1)^(1+3) | a21 a22 |
                     | a32 a33 |                    | a31 a33 |                    | a31 a32 |

    A21 = (−1)^(2+1) | a12 a13 |,  A22 = (−1)^(2+2) | a11 a13 |,  A23 = (−1)^(2+3) | a11 a12 |
                     | a32 a33 |                    | a31 a33 |                    | a31 a32 |

    A31 = (−1)^(3+1) | a12 a13 |,  A32 = (−1)^(3+2) | a11 a13 |,  A33 = (−1)^(3+3) | a11 a12 |
                     | a22 a23 |                    | a21 a23 |                    | a21 a22 |

The cofactor matrix of A = ( 2 −1 ; 0 3 ) is A^c = ( 3 0 ; 1 2 ).
12. Inverse matrix: If for a square matrix A there exists a matrix B such that AB = BA = I, then B is called the inverse matrix of A (B = A⁻¹). For example, consider the following matrices:

    A = (  2  −5 )    &    B = ( 3  5 )
        ( −1   3 )             ( 1  2 )

Note that,

    AB = (  2  −5 ) ( 3  5 ) = ( 1  0 ) = I
         ( −1   3 ) ( 1  2 )   ( 0  1 )

Similarly,

    BA = ( 3  5 ) (  2  −5 ) = ( 1  0 ) = I
         ( 1  2 ) ( −1   3 )   ( 0  1 )

    ∴ B = ( 3  5 ) = A⁻¹, the inverse matrix of A.
          ( 1  2 )

The inverse matrix can be found by using the relation

    A⁻¹ = adj(A) / |A|        (1.2)

where adj(A), the adjoint of A, is the transpose of the cofactor matrix.
Problem 1: Find out the inverse matrix of A = ( 2 −1 ; 0 3 ).

Solution: The adjoint matrix of A i.e. adj(A) = ( 3 1 ; 0 2 ). The determinant of the matrix i.e. |A| = | 2 −1 ; 0 3 | = 6.

    ∴ A⁻¹ = (1/6) ( 3  1 )
                  ( 0  2 )
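Eq. (1.2) can be checked numerically for the matrix of Problem 1, with `np.linalg.inv` as the reference:

```python
import numpy as np

A = np.array([[2., -1.], [0., 3.]])

# adj(A) for a 2x2: swap the diagonal entries, negate the off-diagonal ones.
adj = np.array([[A[1, 1], -A[0, 1]],
                [-A[1, 0], A[0, 0]]])
A_inv = adj / np.linalg.det(A)       # eq. (1.2): A^{-1} = adj(A)/|A|

print(np.allclose(A_inv, np.linalg.inv(A)))  # True
print(np.allclose(A @ A_inv, np.eye(2)))     # True
```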
Properties:

(A⁻¹)⁻¹ = A.

(A^T)⁻¹ = (A⁻¹)^T.

(AB)⁻¹ = B⁻¹ A⁻¹.

Note that, in general, (A + B)⁻¹ ≠ A⁻¹ + B⁻¹.
14. Derivative of a matrix: The derivative of a matrix with respect to a variable, say x, is obtained by differentiating each element with respect to x separately. For example,

    d/dx ( x    x²  1   ) = ( 1    2x  0   )
         ( e^x  0   2x³ )   ( e^x  0   6x² )
15. Integral of a matrix: The integral of a matrix with respect to a variable, say x, is equal to the integral of each element with respect to x separately. For example,

    ∫ ( x    3x²  1   ) dx = ( x²/2  x³  x    ) + ( c1  c2  c3 )
      ( e^x  0    2x³ )      ( e^x   c   x⁴/2 )   ( c4  0   c5 )
Problem 3: Any square matrix can be uniquely written as the sum of a symmetric matrix and a skew-symmetric matrix.

Solution: Let A be a square matrix. Write

    A = (1/2)(A + A^T) + (1/2)(A − A^T) = P + Q

Now,

    P^T = (1/2)(A + A^T)^T = (1/2){A^T + (A^T)^T} = (1/2)(A^T + A) = P

    Q^T = (1/2)(A − A^T)^T = (1/2){A^T − (A^T)^T} = (1/2)(A^T − A) = −Q

i.e. P is a symmetric matrix and Q is a skew-symmetric matrix. So any square matrix can be written as the sum of a symmetric matrix and a skew-symmetric matrix. For uniqueness, suppose also A = R + S with R symmetric and S skew-symmetric. Then

    A^T = R^T + S^T = R − S

    ⇒ R = (1/2)(A + A^T) = P,    S = (1/2)(A − A^T) = Q

i.e. the decomposition is unique. Note that the diagonal elements of a skew-symmetric matrix are zero, while those of a skew-Hermitian matrix are either zero or purely imaginary.
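The decomposition of Problem 3 is a two-liner in numpy (the matrix is an arbitrary example of mine):

```python
import numpy as np

A = np.array([[1., 2., 0.],
              [4., 3., 5.],
              [6., 7., 8.]])

P = (A + A.T) / 2   # symmetric part
Q = (A - A.T) / 2   # skew-symmetric part

print(np.allclose(P, P.T))    # True: P is symmetric
print(np.allclose(Q, -Q.T))   # True: Q is skew-symmetric
print(np.allclose(P + Q, A))  # True: they reassemble A
```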
Problem 4: For an arbitrary matrix A, show that A + A† and i(A − A†) are both Hermitian.

Solution: A matrix H is Hermitian if H† = H. Now,

    (A + A†)† = A† + (A†)† = A† + A = A + A†

    {i(A − A†)}† = −i{A† − (A†)†} = −i(A† − A) = i(A − A†)

Hence both are Hermitian.
For an orthogonal matrix O, OO^T = O^T O = I. Taking determinants,

    |OO^T| = |I| = 1
    ⇒ |O||O^T| = 1
    ⇒ |O|² = 1        (∵ |O^T| = |O|)
    ⇒ |O| = ±1.
Problem: Show that the inverse of an orthogonal matrix is equal to its transpose i.e. O⁻¹ = O^T.

Solution: For an orthogonal matrix O, OO^T = O^T O = I. Since |O| = ±1 ≠ 0, the inverse matrix O⁻¹ exists. Now,

    O⁻¹OO^T = O⁻¹I
    ⇒ O^T = O⁻¹        (∵ O⁻¹O = I)
Example: Consider the matrix

    U = ( 0  i )    ⇒    U† = (  0  −i )
        ( i  0 )              ( −i   0 )

    ∴ UU† = ( 0  i ) (  0  −i ) = ( 1  0 ) = I = U†U
            ( i  0 ) ( −i   0 )   ( 0  1 )

i.e. U is a unitary matrix. As with orthogonal matrices,

    U⁻¹UU† = U⁻¹I
    ⇒ U⁻¹ = U†
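A numerical check of this unitary example, including the fact that its eigenvalues lie on the unit circle:

```python
import numpy as np

U = np.array([[0, 1j], [1j, 0]])
Ud = U.conj().T   # Hermitian conjugate of U

print(np.allclose(U @ Ud, np.eye(2)))       # True: U U† = I
print(np.allclose(np.linalg.inv(U), Ud))    # True: U^{-1} = U†
eigvals = np.linalg.eigvals(U)
print(np.allclose(np.abs(eigvals), 1))      # True: |lambda| = 1 for each eigenvalue
```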
1.4 Eigenvalue Problems

Consider a square matrix A and a column matrix X such that

    AX = λX        (1.3)

where λ is a scalar (real or complex) and X is a column matrix. Eq. (1.3) is called the eigenvalue equation of matrix A with eigenvalue λ and eigenvector X. If A is a square matrix of order n, X is a column matrix of order n × 1.

From eq. (1.3), (A − λI)X = 0. In terms of the elements of the matrices A and X,

    ( a11 − λ   a12      ...  ...   a1n     ) ( x1 )
    ( a21       a22 − λ  ...  ...   a2n     ) ( x2 )
    ( ...       ...      ...  ...   ...     ) ( ...)  =  0
    ( an1       an2      ...  ...   ann − λ ) ( xn )
For a non-trivial solution, |A − λI| = 0, the characteristic equation of A. It is a polynomial equation of degree n in λ,

    D(λ) = c0 + c1 λ + c2 λ² + ..... + cn λⁿ = 0        (1.6)

If λ1, λ2, ..., λn are its n roots (the eigenvalues), D(λ) can be factorized as

    D(λ) = (λ1 − λ)(λ2 − λ).....(λn − λ)        (1.7)

By inspection of eq. 1.6 and eq. 1.7 (setting λ = 0),

    λ1 λ2 ....λn = c0 = |A|        (1.8)

Thus the product of the eigenvalues of a matrix is equal to its determinant. Similarly, by inspection of eq. 1.6 and eq. 1.7 (equating the coefficients of λⁿ⁻¹) we find

    λ1 + λ2 + .... + λn = Tr(A)        (1.9)

Thus the sum of the eigenvalues is equal to the trace of the matrix.
Problem 8: Find the trace and determinant of the matrix A = ( 2 −1 ; 3 −2 ) and hence determine its eigenvalues.

Solution: The trace of the matrix is the sum of its diagonal elements i.e. Tr(A) = 2 − 2 = 0.

The determinant of the matrix is

    |A| = | 2  −1 | = −4 + 3 = −1
          | 3  −2 |

If λ1 and λ2 are the eigenvalues of the matrix, by eq. 1.8 and eq. 1.9 we have

    λ1 + λ2 = Tr(A) = 0
    λ1 λ2 = |A| = −1

⇒ λ1 = −1 and λ2 = 1.
The same result follows from the characteristic equation |A − λI| = 0:

    | 2 − λ    −1    | = 0
    |   3    −2 − λ  |

    ⇒ (2 − λ)(−2 − λ) + 3 = 0
    ⇒ λ² − 1 = 0
    or, λ = ±1

Let X1 be the eigenvector corresponding to λ1 = −1. From (A − λ1 I)X1 = 0,

    ( 3  −1 ) ( x1 ) = 0
    ( 3  −1 ) ( x2 )

    ⇒ 3x1 − x2 = 0
    or, x2 = 3x1

so X1 = (a, 3a)^T for an arbitrary a (≠ 0). Similarly, for λ2 = 1, (A − λ2 I)X2 = 0 gives

    ( 1  −1 ) ( x1 ) = 0
    ( 3  −3 ) ( x2 )

    ⇒ x1 − x2 = 0
    or, x1 = x2

so X2 = (b, b)^T for an arbitrary b (≠ 0).
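Problem 8's trace/determinant shortcut is easy to confirm with `np.linalg.eigvals`:

```python
import numpy as np

A = np.array([[2., -1.], [3., -2.]])
w = np.linalg.eigvals(A)

print(np.isclose(w.sum(), np.trace(A)))        # True: sum of eigenvalues = trace
print(np.isclose(w.prod(), np.linalg.det(A)))  # True: product = determinant
print(np.allclose(sorted(w.real), [-1, 1]))    # True: eigenvalues are -1 and 1
```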
Example 2: A = ( 0  −1 )
               ( 1   0 )

The characteristic equation |A − λI| = 0 gives

    λ² + 1 = 0
    or, λ = ±i

Thus the eigenvalues are λ1 = −i and λ2 = i.

Let X1 be the eigenvector of A which corresponds to the eigenvalue λ1 = −i. From the eigenvalue equation AX1 = λ1 X1, we have

    (A − λ1 I)X1 = 0

    ⇒ ( i  −1 ) ( x1 ) = 0
      ( 1   i ) ( x2 )

    ⇒ ix1 − x2 = 0
    or, x2 = ix1

If x1 = a, x2 = ia, where a is an arbitrary number (≠ 0). Therefore, the eigenvector of the given matrix corresponding to the eigenvalue λ1 = −i is X1 = (a, ia)^T. In normalized form, X1n = (1/√2) (1, i)^T.

Similarly, let us consider X2 as the eigenvector of A corresponding to the eigenvalue λ2 = i. From the eigenvalue equation AX2 = λ2 X2, we have

    (A − λ2 I)X2 = 0

    ⇒ ( −i  −1 ) ( x1 ) = 0
      (  1  −i ) ( x2 )

    ⇒ x1 − ix2 = 0
    or, x1 = ix2

If x2 = b, x1 = ib where b is another arbitrary number (≠ 0). Therefore, the eigenvector of the given matrix corresponding to the eigenvalue λ2 = i is X2 = (ib, b)^T. In normalized form, X2n = (1/√2) (i, 1)^T.
Example 3: A = ( 1  0  0 )
               ( 0  0  2 )
               ( 0  2  0 )

The eigenvalue equation of the matrix is AX = λX or, (A − λI)X = 0 where λ is the eigenvalue and X is the corresponding eigenvector. The characteristic equation is |A − λI| = 0 i.e.

    | 1 − λ    0     0  |
    |   0     −λ     2  | = 0
    |   0      2    −λ  |

    ⇒ (1 − λ)(λ² − 4) = 0
    or, λ = 1, ±2

Thus the eigenvalues are λ1 = −2, λ2 = 1 and λ3 = 2.

Let X1 be the eigenvector of A which corresponds to the eigenvalue λ1 = −2. From the eigenvalue equation AX1 = λ1 X1, we have

    (A − λ1 I)X1 = 0

    ⇒ ( 3  0  0 ) ( x1 )
      ( 0  2  2 ) ( x2 ) = 0
      ( 0  2  2 ) ( x3 )

    ⇒ 3x1 = 0  &  x2 + x3 = 0
    or, x1 = 0  &  x2 = −x3

If x3 = a (≠ 0), X1 = (0, −a, a)^T; in normalized form, X1n = (1/√2)(0, −1, 1)^T. For λ2 = 1,

    (A − λ2 I)X2 = 0

    ⇒ ( 0   0   0 ) ( x1 )
      ( 0  −1   2 ) ( x2 ) = 0
      ( 0   2  −1 ) ( x3 )

    ⇒ −x2 + 2x3 = 0  &  2x2 − x3 = 0
    or, x2 = x3 = 0

while x1 is arbitrary, say b (≠ 0). Thus X2 = (b, 0, 0)^T, and X2n = (1, 0, 0)^T.
For λ3 = 2,

    (A − λ3 I)X3 = 0

    ⇒ ( −1   0   0 ) ( x1 )
      (  0  −2   2 ) ( x2 ) = 0
      (  0   2  −2 ) ( x3 )

    ⇒ x1 = 0  &  x2 = x3

Let x2 = x3 = c, where c is an arbitrary number (≠ 0). The eigenvector of the given matrix corresponding to the eigenvalue λ3 = 2 is X3 = (0, c, c)^T. In normalized form,

    X3n = (1/√2) (0, 1, 1)^T.
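Since the matrix of Example 3 is symmetric, `np.linalg.eigh` applies; it returns the eigenvalues in ascending order together with orthonormal eigenvectors, matching the normalized forms found above:

```python
import numpy as np

A = np.array([[1., 0., 0.],
              [0., 0., 2.],
              [0., 2., 0.]])

w, V = np.linalg.eigh(A)   # for symmetric/Hermitian matrices

print(np.allclose(w, [-2., 1., 2.]))    # True: eigenvalues -2, 1, 2
print(np.allclose(A @ V, V * w))        # True: A X = lambda X, column by column
print(np.allclose(V.T @ V, np.eye(3)))  # True: eigenvectors are orthonormal
```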
1.4.1 Corollaries

1. The eigenvalues of a diagonal matrix are its diagonal elements. For a diagonal matrix

    D = ( a11   0   ...  ...   0  )
        (  0   a22  ...  ...   0  )
        ( ...  ...  ...  ...  ... )
        (  0    0   ...  ...  ann )

the characteristic equation |D − λI| = 0 gives

    | a11 − λ     0      ...  ...     0    |
    |    0     a22 − λ   ...  ...     0    |
    |   ...       ...    ...  ...    ...   | = 0
    |    0         0     ...  ...  ann − λ |

    ⇒ (a11 − λ)(a22 − λ)...(ann − λ) = 0

i.e. λ = a11, a22, ..., ann, the diagonal elements of the matrix.
2. At least one eigenvalue of a singular matrix is zero.

Proof: Consider a singular matrix A i.e. |A| = 0. If λ1, λ2, λ3, ... are the eigenvalues of the matrix A, the product of the eigenvalues must be equal to the determinant of A (eq. 1.8) i.e.

    λ1 λ2 λ3 .... = |A| = 0

Hence at least one of the eigenvalues must be zero.

3. The eigenvalues of A⁻¹ are the reciprocals of the eigenvalues of A.

Proof: Let λ and λ′ be the eigenvalues of a non-singular matrix A and its inverse matrix A⁻¹ corresponding to the same eigenvector X. The eigenvalue equations are
    AX = λX
    A⁻¹X = λ′X

Multiplying the first equation by A⁻¹ from the left,

    A⁻¹AX = λA⁻¹X
    ⇒ X = λλ′X
    or, (1 − λλ′)X = 0

Since X ≠ 0, λ′ = 1/λ.
4. The eigenvalues of a unitary matrix have unit modulus.

Proof: Let U be a unitary matrix with eigenvalue λ and eigenvector X, i.e. UX = λX. Taking the Hermitian conjugate, X†U† = λ*X†, and multiplying the two equations,

    X†U†UX = |λ|² X†X
    ⇒ X†X = |λ|² X†X        (∵ U†U = I)
    or, (1 − |λ|²)X†X = 0
    ⇒ 1 − |λ|² = 0        (∵ X†X ≠ 0)
    ⇒ |λ|² = 1
5. The eigenvalues of a Hermitian matrix are real and the eigenvectors corresponding to different eigenvalues are orthogonal.

Proof: Let us consider a Hermitian matrix H having an eigenvalue λ with eigenvector X, i.e. HX = λX. Multiplying by X† from the left,

    X†HX = λX†X        (1.13)

Taking the Hermitian conjugate of eq. (1.13) and using H† = H gives X†HX = λ*X†X, so

    (λ − λ*)X†X = 0  ⇒  λ = λ*        (∵ X†X ≠ 0)

i.e. λ is real.
Now let X1 and X2 be eigenvectors corresponding to two different eigenvalues λ1 and λ2, i.e.

    HX1 = λ1 X1        (1.16)
    HX2 = λ2 X2

Taking the Hermitian conjugate of the second equation, multiplying by X1 from the right, and using H† = H and the reality of λ2,

    X2†HX1 = λ2 X2†X1        (1.19)

Multiplying eq. (1.16) by X2† from the left,

    X2†HX1 = λ1 X2†X1        (1.20)

Comparing eq. 1.19 and eq. 1.20,

    λ1 X2†X1 = λ2 X2†X1
    ⇒ (λ1 − λ2)X2†X1 = 0
    ⇒ X2†X1 = 0        (∵ λ2 ≠ λ1)

Thus X1 and X2 are orthogonal.
1.5 Cayley-Hamilton Theorem

Let the characteristic polynomial of a square matrix A be written as

    D(λ) = c0 + c1 λ + c2 λ² + ..... + cn λⁿ = Σ (i = 0 to n) ci λⁱ        (1.22)
The Cayley-Hamilton theorem states that every square matrix satisfies its own characteristic equation, obtained by replacing λⁱ with Aⁱ (and the constant term c0 with c0 I) in the characteristic polynomial of the matrix i.e.

    D(A) = Σ (i = 0 to n) ci Aⁱ = 0        (1.23)

The theorem can be verified with the following example. Consider a matrix A = ( 2 −1 ; 3 −2 ). The characteristic equation of the matrix is

    D(λ) = | 2 − λ    −1    | = 0
           |   3    −2 − λ  |

    ⇒ D(λ) = λ² − 1 = 0        (1.24)

By the theorem, D(A) = A² − I should vanish, and direct multiplication indeed gives A² = I. In expanded form, the theorem reads

    D(A) = Σ (i = 0 to n) ci Aⁱ = c0 I + c1 A + c2 A² + ..... + cn Aⁿ = 0        (1.25)
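The verification A² = I for this example is a one-line numerical check:

```python
import numpy as np

A = np.array([[2., -1.], [3., -2.]])

# Characteristic polynomial is lambda^2 - 1, so Cayley-Hamilton demands A^2 = I.
print(np.allclose(A @ A, np.eye(2)))  # True
```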
Problem 9: Determine the inverse of the matrix A = ( 2 0 1 ; 1 1 2 ; 0 1 1 ) by using the Cayley-Hamilton theorem.

Solution: The characteristic equation of the matrix is

    | 2 − λ    0      1   |
    |   1    1 − λ    2   | = 0
    |   0      1    1 − λ |

    ⇒ (2 − λ){(1 − λ)² − 2} + 1 = 0
    ⇒ λ³ − 4λ² + 3λ + 1 = 0        (1.26)

By the Cayley-Hamilton theorem,

    A³ − 4A² + 3A + I = 0        (1.27)

Multiplying by A⁻¹,

    A² − 4A + 3I + A⁻¹ = 0

              (  1  −1   1 )
    ⇒ A⁻¹ = (  1  −2   3 )
              ( −1   2  −2 )
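From eq. (1.27), multiplying by A⁻¹ gives A⁻¹ = −A² + 4A − 3I; a numpy check against `np.linalg.inv`:

```python
import numpy as np

A = np.array([[2., 0., 1.],
              [1., 1., 2.],
              [0., 1., 1.]])

# Cayley-Hamilton: A^3 - 4A^2 + 3A + I = 0  =>  A^{-1} = -A^2 + 4A - 3I.
A_inv = -A @ A + 4 * A - 3 * np.eye(3)

print(np.allclose(A @ A_inv, np.eye(3)))     # True
print(np.allclose(A_inv, np.linalg.inv(A)))  # True
```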
1.6 Diagonalization of Matrices

A square matrix A of order n with eigenvalues λ1, λ2, ..., λn is said to be diagonalizable if it can be brought to the diagonal form

    D = ( λ1   0  ...  ...   0 )
        (  0  λ2  ...  ...   0 )
        ( ... ...  ...  ... ...)
        (  0   0  ...  ...  λn )

If A has n linearly independent eigenvectors, a matrix S can be found such that S⁻¹AS = D, the diagonal matrix. The matrix S is called the diagonalizing matrix.
The columns of S are the eigenvectors of A:

    S = (X1 X2 ... Xn) = ( x11  x12  ...  ...  x1n )
                         ( x21  x22  ...  ...  x2n )
                         ( ...  ...  ...  ...  ... )
                         ( xn1  xn2  ...  ...  xnn )
Problem 10: Diagonalize the matrix A = ( 2 −1 ; 3 −2 ).

Solution: Note that the eigenvalues of A are λ1 = −1, λ2 = 1 (see Problem 8). Corresponding eigenvectors are X1 = (1, 3)^T and X2 = (1, 1)^T respectively. The eigenvectors are linearly independent².

Thus the diagonalizing matrix S = ( 1 1 ; 3 1 ).

The inverse matrix S⁻¹ = −(1/2) ( 1 −1 ; −3 1 ). Then

    S⁻¹AS = −(1/2) (  1  −1 ) ( 2  −1 ) ( 1  1 ) = ( −1  0 ) = D
                   ( −3   1 ) ( 3  −2 ) ( 3  1 )   (  0  1 )
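The diagonalization can be confirmed numerically, with the eigenvector columns (1, 3)^T and (1, 1)^T forming S:

```python
import numpy as np

A = np.array([[2., -1.], [3., -2.]])
S = np.array([[1., 1.],
              [3., 1.]])   # columns: eigenvectors for lambda = -1 and +1

D = np.linalg.inv(S) @ A @ S
print(np.allclose(D, np.diag([-1., 1.])))  # True: S^{-1} A S is diagonal
```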
1.6.1 Corollaries

1. The diagonalizing matrix of a real symmetric matrix is orthogonal.

Proof: Let us consider a symmetric matrix A i.e. A^T = A. If λi are the eigenvalues of A and S is the diagonalizing matrix,

    S⁻¹AS = D = diag(λ1, λ2, ...λn)
    ⇒ (S⁻¹AS)^T = D^T
    ⇒ S^T A^T (S⁻¹)^T = D
    ⇒ S^T A (S⁻¹)^T = S⁻¹AS
    ⇒ S^T = S⁻¹
    ⇒ S^T S = I

i.e. S is orthogonal.

² The task is left for the readers.
2. The diagonalizing matrix of a Hermitian matrix is unitary.

Proof: Let H be a Hermitian matrix i.e. H† = H, with diagonalizing matrix S. Since the eigenvalues of H are real, D† = D. Then

    S⁻¹HS = D
    ⇒ (S⁻¹HS)† = D†
    ⇒ S†H†(S⁻¹)† = D
    ⇒ S†H(S⁻¹)† = S⁻¹HS
    ⇒ S† = S⁻¹
    ⇒ S†S = I

i.e. S is unitary.
1.7 Similarity Transformation

Consider a square matrix A of order n and a non-singular matrix S such that S⁻¹AS = B, another square matrix of the same order as A. Matrix B is then said to be obtained from A by a similarity transformation. The characteristic equation of B is |B − λI| = 0 i.e.

    |S⁻¹AS − λI| = 0
    ⇒ |S⁻¹AS − S⁻¹λIS| = 0
    ⇒ |S⁻¹(A − λI)S| = 0
    ⇒ |S⁻¹||A − λI||S| = 0
    ⇒ |S⁻¹S||A − λI| = 0
    ⇒ |A − λI| = 0        (∵ |S⁻¹S| = |I| = 1)

which is the characteristic equation of the original matrix A with the same eigenvalue λ. Thus the eigenvalues remain invariant under a similarity transformation.
1.8. UNITARY TRANSFORMATION 27
al
B = U † AU
d
⇒ B † = (U † AU )†
= U † A† (U † )†
an
= U † AU (∵ A† = A)
M
= B
Further, for any matrix A the norm³ is invariant:

    B† = (U⁻¹AU)† = (U†AU)† = U†A†U
    B†B = U†A†UU†AU = U†A†AU
    ⇒ |B†B| = |U†||A†A||U| = |U†U||A†A| = |A†A|

Thus the norm of the matrix remains invariant under the unitary transformation.

³ The norm of a matrix A is defined as |A†A|. Note that, if |X†X| = 1 for a column matrix X, it is said to be normalized.
If A is diagonalizable, A = SDS⁻¹, any function of A can be evaluated as

    f(A) = S f(D) S⁻¹        (1.28)

where f(D) is the similar function of D. Thus from eq. 1.28, for any power Aⁿ of matrix A,

    Aⁿ = SDⁿS⁻¹        (1.29)
Problem 14: A = ( 2 −1 ; 3 −2 ). Find A⁵⁰.

Solution: Refer to Problem 10. The diagonal matrix D = ( −1 0 ; 0 1 ) and the diagonalizing matrix S = ( 1 1 ; 3 1 ). The inverse matrix S⁻¹ = −(1/2) ( 1 −1 ; −3 1 ).

∴ By eq. 1.29,

    A⁵⁰ = SD⁵⁰S⁻¹ = −(1/2) ( 1  1 ) ( −1  0 )⁵⁰ (  1  −1 )
                           ( 3  1 ) (  0  1 )   ( −3   1 )

        = −(1/2) ( 1  1 ) ( 1  0 ) (  1  −1 )
                 ( 3  1 ) ( 0  1 ) ( −3   1 )

        = ( 1  0 )
          ( 0  1 )  =  I
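`np.linalg.matrix_power` confirms the result at once (since A² = I, every even power of A is I):

```python
import numpy as np

A = np.array([[2., -1.], [3., -2.]])

print(np.allclose(np.linalg.matrix_power(A, 50), np.eye(2)))  # True: A^50 = I
```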
1.9 Solutions of Coupled Linear Ordinary Differential Equations

A set of coupled linear first-order ODEs can be written in matrix form as Y′ = AY. If λi are the eigenvalues of A with eigenvectors Xi, the general solution is

    Y(t) = Σᵢ ai e^(λi t) Xi

where ai are arbitrary constants. Applying the boundary conditions, the ai can be determined and the exact solution is obtained.
Let us consider the following set of equations:

    y1′ = 2y1 + 3y2
    y2′ = 4y1 + y2

In matrix form,

    ( y1′ ) = ( 2  3 ) ( y1 )        i.e. Y′ = AY
    ( y2′ )   ( 4  1 ) ( y2 )

where A = ( 2 3 ; 4 1 ). The eigenvalues of the matrix A are λ1 = −2 and λ2 = 5. Corresponding eigenvectors are X1 = (3, −4)^T and X2 = (1, 1)^T respectively.

Thus, the general solutions are

    Y(t) = Σᵢ ai e^(λi t) Xi

    ⇒ ( y1(t) ) = a1 e^(−2t) (  3 ) + a2 e^(5t) ( 1 )
      ( y2(t) )              ( −4 )             ( 1 )

    or, y1(t) = 3a1 e^(−2t) + a2 e^(5t)
    and y2(t) = −4a1 e^(−2t) + a2 e^(5t)

Applying the boundary conditions y1(0) = 2 and y2(0) = 1,

    y1(0) = 3a1 + a2 = 2
    y2(0) = −4a1 + a2 = 1

    ⇒ a1 = 1/7,  a2 = 11/7

    ∴ y1(t) = (3/7) e^(−2t) + (11/7) e^(5t),   y2(t) = −(4/7) e^(−2t) + (11/7) e^(5t)
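The eigen-decomposition recipe can be coded directly; the finite-difference check at the end (with a step size h of my choosing) confirms that the constructed Y(t) satisfies Y′ = AY:

```python
import numpy as np

# Y' = A Y with Y(0) = (2, 1); solution Y(t) = sum_i a_i e^{lambda_i t} X_i.
A = np.array([[2., 3.], [4., 1.]])
w, V = np.linalg.eig(A)          # eigenvalues -2 and 5 (order may vary)

y0 = np.array([2., 1.])
a = np.linalg.solve(V, y0)       # expansion coefficients of Y(0) in the eigenbasis

t = 0.3
Y = V @ (a * np.exp(w * t))      # Y(t) = V diag(e^{w t}) a

# Finite-difference check that dY/dt equals A Y at time t.
h = 1e-6
Y2 = V @ (a * np.exp(w * (t + h)))
print(np.allclose((Y2 - Y) / h, A @ Y, rtol=1e-4))  # True
```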