Ch-04 Linear Algebra

The document provides a rapid revision guide for Engineering Mathematics, focusing on Linear Algebra concepts such as determinants, properties of symmetric and skew-symmetric matrices, matrix inverses, and eigenvalues. It includes definitions, key properties, and examples of various types of matrices, along with multiple-choice questions for practice. Additionally, it covers the Cayley-Hamilton theorem and diagonalization of matrices, emphasizing the importance of eigenvalues and eigenvectors in linear transformations.


GATE-2025

ENGINEERING MATHEMATICS

Rapid Revision by

(SP Yadav sir)


Rapid Revision of Linear Algebra
Determinant and Properties
• A determinant is a scalar value associated with every square matrix.
• The sum of the products of the elements of any row/column with their corresponding cofactors gives the value of the
determinant of a square matrix.
• The number of algebraic terms in the expansion of the determinant of a square matrix An×n is n!.
❖ If two rows/columns of a determinant are interchanged, the numerical (absolute) value remains
unchanged but the determinant is multiplied by -1.
❖ If two rows/columns of a determinant are equal or in proportion, the value of the determinant is zero.
❖ A similarity transformation does not change the value of the determinant.
❖ If m times the corresponding elements of another row/column are added to the elements of a row/column,
the value of the determinant thus obtained is equal to the value of the original determinant.
❖ The sum of the products of the elements of any row/column with the cofactors of some other
row/column is equal to 0.
❖ The determinant of a skew-symmetric matrix of odd order is 0.
❖ The determinant of a skew-symmetric matrix of even order is always a perfect square (and hence non-negative).
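These properties are easy to spot-check numerically. Below is a minimal NumPy sketch (the example matrices are arbitrary choices made only for this illustration):

```python
import numpy as np

# Arbitrary 3x3 matrix used only for illustration.
A = np.array([[1., 2., 3.],
              [0., 4., 5.],
              [1., 0., 6.]])

# Interchanging two rows multiplies the determinant by -1.
B = A[[1, 0, 2], :]
print(np.isclose(np.linalg.det(B), -np.linalg.det(A)))   # True

# Adding m times another row to a row leaves the determinant unchanged.
C = A.copy()
C[0] = C[0] + 2.5 * C[1]
print(np.isclose(np.linalg.det(C), np.linalg.det(A)))    # True

# A skew-symmetric matrix of odd order has determinant 0.
S = np.array([[ 0.,  2., -1.],
              [-2.,  0.,  3.],
              [ 1., -3.,  0.]])
print(np.isclose(np.linalg.det(S), 0.0))                 # True
```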

Key Properties You Must Know
Q.1 MCQ
Find the determinant of
        [ 1  2  0  0 ]
    A = [ 2  4  2  0 ]
        [ 0  2  9  2 ]
        [ 0  0  2 16 ]

(a) 64

(b) -64

(c) 0

(d) 32
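For reference, a quick NumPy check of this determinant (a verification sketch added alongside the question, not part of the original notes):

```python
import numpy as np

A = np.array([[1, 2, 0, 0],
              [2, 4, 2, 0],
              [0, 2, 9, 2],
              [0, 0, 2, 16]], dtype=float)
print(round(np.linalg.det(A)))   # -64, i.e. option (b)
```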
MSQ

Let A be an n x n matrix over the set of all real numbers R. Let B be a matrix obtained from A by swapping
two rows. Which of the following statements is/are TRUE?

a) The determinant of B is the negative of the determinant of A

b) If A is invertible, then B is also invertible

c) If A is symmetric, then B is also symmetric

d) If the trace of A is zero, then the trace of B is also zero


MCQ

If A and B are square matrices of size n × n, then which of the following statements is not true?
(a) det(AB) = det(A) · det(B)

(b) det(kA) = k^n det(A)

(c) det(A + B) = det(A) + det(B)

(d) det(A^T) = 1 / det(A^-1)
Square Matrices
Symmetric Matrix:
A square matrix A = [aij]n×n is said to be symmetric if A = A^T, i.e. aij = aji for all values
of i and j.

Skew-Symmetric Matrix:
A square matrix A = [aij]n×n is said to be skew-symmetric if A = -A^T, i.e. aij = -aji for all i, j.
Properties of Symmetric Matrices
Properties:
1. For any square matrix A, AA^T is always symmetric and A + A^T is always symmetric.
2. If A is symmetric and B is symmetric, then
(a) A + B is always symmetric
(b) A^n is always symmetric
(c) AB and BA may or may not be symmetric, but if A and B commute, i.e. AB = BA, then AB is symmetric.
(d) Real symmetric matrices not only have real eigenvalues, they are always diagonalizable.
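A minimal NumPy sketch of properties 1 and 2(d); the matrices used here are arbitrary illustrative examples:

```python
import numpy as np

A = np.array([[1., 2.],
              [3., 4.]])          # arbitrary square matrix

# A @ A.T and A + A.T are always symmetric.
print(np.allclose(A @ A.T, (A @ A.T).T))    # True
print(np.allclose(A + A.T, (A + A.T).T))    # True

# A real symmetric matrix has real eigenvalues.
S = np.array([[2., 1.],
              [1., 3.]])
print(np.linalg.eigvals(S))                 # real values, no imaginary part
```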
Properties of Skew- Symmetric Matrices
1. For every square matrix A, (A - A^T)/2 is always a skew-symmetric matrix.
2. If A is a skew-symmetric matrix:
a) A^n is a skew-symmetric matrix if n is odd
b) A^n is a symmetric matrix if n is even
3. The determinant of any skew-symmetric matrix of odd order is always zero.
4. Every square matrix can be uniquely expressed as the sum of a symmetric matrix and a skew-
symmetric matrix.
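The decomposition in property 4, A = (A + A^T)/2 + (A - A^T)/2, can be checked directly; a minimal NumPy sketch with an arbitrary example matrix:

```python
import numpy as np

A = np.array([[1., 2., 0.],
              [4., 3., 7.],
              [5., 6., 9.]])      # arbitrary square matrix

P = (A + A.T) / 2                 # symmetric part
Q = (A - A.T) / 2                 # skew-symmetric part

print(np.allclose(P, P.T))        # True  (P is symmetric)
print(np.allclose(Q, -Q.T))       # True  (Q is skew-symmetric)
print(np.allclose(A, P + Q))      # True  (A = P + Q, and this split is unique)
```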
Square Matrices

Idempotent Matrix: A^2 = A
Involutory Matrix: A^2 = I
Nilpotent Matrix: A^k = 0 for some positive integer k
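Small illustrative examples of each type (these particular matrices are examples chosen here, not taken from the notes):

```python
import numpy as np

P = np.array([[1., 0.],
              [1., 0.]])          # idempotent: P @ P == P
J = np.array([[0., 1.],
              [1., 0.]])          # involutory: J @ J == I
N = np.array([[0., 1.],
              [0., 0.]])          # nilpotent:  N @ N == 0

print(np.allclose(P @ P, P))                  # True
print(np.allclose(J @ J, np.eye(2)))          # True
print(np.allclose(N @ N, np.zeros((2, 2))))   # True
```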
Square Matrices
Orthogonal matrix:
A square matrix A is called an orthogonal matrix if the product of the matrix A with its transpose A^T is
the identity matrix, i.e. AA^T = A^T A = I.
Properties of Orthogonal Matrix
1. If A is orthogonal, then its rows/columns are pairwise orthogonal (the converse is not true) and the
row/column vectors of A are unit vectors.
Properties of Orthogonal Matrix
2. A set of vectors S = {v1, v2, ..., vn} is mutually orthogonal if every pair of vectors is orthogonal, i.e.
vi · vj = 0 ∀ i ≠ j

3. A set of vectors S = {v1, v2, ..., vn} is orthonormal if every vector in S has magnitude 1 and the set of
vectors are mutually orthogonal. Simply an orthonormal set of vectors is an orthogonal set of unit vectors.
An orthonormal set of vectors is linearly independent.
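A rotation matrix is a standard example of an orthogonal matrix; the NumPy sketch below checks the defining property AA^T = I and the orthonormality of the columns:

```python
import numpy as np

t = 0.3                                    # arbitrary rotation angle
Q = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])    # 2D rotation matrix (orthogonal)

print(np.allclose(Q @ Q.T, np.eye(2)))               # True: Q Q^T = I
print(np.allclose(np.linalg.norm(Q, axis=0), 1.0))   # columns are unit vectors
print(np.isclose(Q[:, 0] @ Q[:, 1], 0.0))            # columns are mutually orthogonal
```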
Inverse of Matrix

If |A| ≠ 0, i.e. A is a non-singular matrix, then A^-1 = adj(A) / |A|

adj(A) = (Cofactor Matrix)^T


Inverse of Matrix
Shortcut (2×2)

[ a  b ]^-1        1       [  d  -b ]
[ c  d ]     =  -------  · [ -c   a ]
                ad - bc
Inverse of Matrix
Shortcut (3×3) MLFM Method
Q.
        [  1  0  1 ]
    A = [ -1  1  1 ]      Find A^-1
        [  0  1  0 ]
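For reference, a NumPy verification of this inverse (the MLFM shortcut itself is the intended manual method; this is only a numerical cross-check):

```python
import numpy as np

A = np.array([[ 1., 0., 1.],
              [-1., 1., 1.],
              [ 0., 1., 0.]])

A_inv = np.linalg.inv(A)
print(A_inv)                               # the required A^-1
print(np.allclose(A @ A_inv, np.eye(3)))   # True
```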
Inverse of Matrix
Properties

(A^-1)^-1 = A

(A^-1)^T = (A^T)^-1

(AB)^-1 = B^-1 A^-1

(kA)^-1 = (1/k) A^-1

(A^n)^-1 = (A^-1)^n
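All of these identities can be confirmed numerically; a minimal sketch with arbitrary invertible 2×2 matrices:

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 1.]])
B = np.array([[1., 3.],
              [0., 1.]])
k, n = 2.0, 3

inv = np.linalg.inv
print(np.allclose(inv(inv(A)), A))                        # (A^-1)^-1 = A
print(np.allclose(inv(A.T), inv(A).T))                    # (A^T)^-1 = (A^-1)^T
print(np.allclose(inv(A @ B), inv(B) @ inv(A)))           # (AB)^-1 = B^-1 A^-1
print(np.allclose(inv(k * A), inv(A) / k))                # (kA)^-1 = (1/k) A^-1
print(np.allclose(inv(np.linalg.matrix_power(A, n)),
                  np.linalg.matrix_power(inv(A), n)))     # (A^n)^-1 = (A^-1)^n
```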
Rank of Matrix
Row-Echelon Form
A matrix is said to be in row echelon form if
(i) Zero rows, if any, occupy the last rows.
(ii) The number of zeros before the first non-zero element of each row is less than the number of
such zeros before the first non-zero element in the next row.
(iii) The number of non-zero rows in row echelon form is called the rank, and these rows are linearly independent.
(iv) To reduce any matrix to row echelon form we should use row operations only.
(v) An upper triangular matrix with non-zero diagonal entries is always in row echelon form, but the converse is not true.
MCQ
Find the rank of the given matrix.
    [  5  0  -5  0 ]
    [  0  2   0  1 ]
    [ -5  0   5  0 ]
    [  0  1   0  2 ]

(a) 4 (b) 2
(c) 3 (d) 1
Q.MCQ
Find the rank of the matrix
    [  1  -1   0   0   0 ]
    [  0   0   1  -1   0 ]
    [  0   1  -1   0   0 ]
    [ -1   0   0   0   1 ]
    [  0   0   0   1  -1 ]

(a) 4 (b) 3
(c) 2 (d) 1
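Both ranks can be cross-checked with NumPy (a verification sketch added alongside the questions):

```python
import numpy as np

A = np.array([[ 5, 0, -5, 0],
              [ 0, 2,  0, 1],
              [-5, 0,  5, 0],
              [ 0, 1,  0, 2]])
B = np.array([[ 1, -1,  0,  0,  0],
              [ 0,  0,  1, -1,  0],
              [ 0,  1, -1,  0,  0],
              [-1,  0,  0,  0,  1],
              [ 0,  0,  0,  1, -1]])

print(np.linalg.matrix_rank(A))   # 3
print(np.linalg.matrix_rank(B))   # 4
```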
System of Linear Equations
Types of Linear Equations

• Consistent equations: the system has at least one solution.
• Inconsistent equations: the system has no solution.
Types of Linear Equation

Under-determined: A system of linear equations is considered under-determined if there are fewer
equations than unknowns (m < n).

Over-determined: A system of equations is considered over-determined if there are more equations than
unknowns (m > n).

Equally-determined: A system of equations is considered equally determined if the number of equations is
equal to the number of unknowns (m = n).
Consistency of Non-Homogeneous Linear Equation
For a non-homogeneous system AX = B with augmented matrix [A : B]:
• If ρ(A) ≠ ρ([A : B]), the system is inconsistent (no solution).
• If ρ(A) = ρ([A : B]) = n (the number of unknowns), the system has a unique solution.
• If ρ(A) = ρ([A : B]) < n, the system has infinitely many solutions.
Consistency of Homogeneous Linear Equation
A system of homogeneous equations has either only the trivial solution or infinitely many solutions.
A homogeneous linear system with fewer equations than unknowns always has non-trivial
solutions.
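A minimal sketch of the last point: a homogeneous system with fewer equations than unknowns (a 2-equation, 3-unknown example chosen here) always has non-trivial solutions, which span the null space of A:

```python
import numpy as np

# x + 2y + 3z = 0 and 2x + 4y + 7z = 0  (m = 2 < n = 3)
A = np.array([[1., 2., 3.],
              [2., 4., 7.]])

# Null-space basis via SVD: right singular vectors with (near) zero singular value.
_, s, Vt = np.linalg.svd(A)
s_full = np.concatenate([s, np.zeros(A.shape[1] - len(s))])
N = Vt[s_full < 1e-10]             # basis of the solution space of AX = 0

print(N)                           # one non-trivial solution direction
print(np.allclose(A @ N.T, 0.0))   # True
```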
Eigen Values and Eigen Vectors
Eigenvalues and Eigenvectors
In linear algebra, an eigenvector or characteristic vector of a linear transformation is a non-zero vector that
changes by a scalar factor when that linear transformation is applied to it. The corresponding eigenvalue,
often denoted by λ, is the factor by which the eigenvector is scaled.
If λ is an eigenvalue of a matrix A, then there exists a non-zero vector X such that
AX = λX; this non-zero vector X is called an eigenvector.
Q.
Find the eigenvalues & corresponding eigenvectors of the matrix
    A = [ 2  2 ]
        [ 1  3 ]
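A NumPy cross-check of this example (the eigenvalues work out to 1 and 4, with eigenvectors proportional to (-2, 1) and (1, 1)):

```python
import numpy as np

A = np.array([[2., 2.],
              [1., 3.]])

w, v = np.linalg.eig(A)
print(w)        # eigenvalues: 1. and 4.
print(v)        # columns are the (normalized) corresponding eigenvectors
print(np.allclose(A @ v, v * w))   # A x = λ x holds for each column
```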
Cayley Hamilton Theorem
Every square matrix satisfies its own characteristic equation
Properties of Eigen Value & Eigen Vector
1. A and A^T have the same eigenvalues.

2. λ1 + λ2 + λ3 + … + λn = Trace(A) = a11 + a22 + a33 + … + ann


3. λ1 · λ2 · λ3 · … · λn = det(A)

Note: if det(A) = 0 then at least one eigenvalue is zero, and if one eigenvalue is zero then det(A) will
be zero.
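A NumPy sketch of properties 1-3 for an arbitrary example matrix:

```python
import numpy as np

A = np.array([[4., 1., 0.],
              [2., 3., 1.],
              [0., 1., 5.]])

w = np.linalg.eigvals(A)
print(np.isclose(w.sum(), np.trace(A)))          # sum of eigenvalues = trace
print(np.isclose(w.prod(), np.linalg.det(A)))    # product of eigenvalues = det
print(np.allclose(np.sort(np.linalg.eigvals(A.T)), np.sort(w)))   # A and A^T share eigenvalues
```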
Properties of Eigen Value & Eigen Vector
4. If λ1, λ2, λ3, …, λn are the eigenvalues of matrix A, then the eigenvalues of
i. kA are kλ1, kλ2, …, kλn

ii. A^m are λ1^m, λ2^m, …, λn^m

iii. A^-1 are 1/λ1, 1/λ2, …, 1/λn

iv. A ± kI are λ1 ± k, λ2 ± k, …, λn ± k

v. (A ± kI)^-1 are 1/(λ1 ± k), 1/(λ2 ± k), …, 1/(λn ± k)

vi. (A ± kI)^2 are (λ1 ± k)^2, (λ2 ± k)^2, …, (λn ± k)^2

vii. adj(A) are det(A)/λ1, det(A)/λ2, …, det(A)/λn (i.e. the product of the remaining n-1 eigenvalues)

Properties of Eigen Value & Eigen Vector

5. The maximum number of distinct eigenvalues is equal to the size of matrix A.


6. The eigenvalues of any triangular matrix are just the diagonal elements of the matrix.
7. If one eigenvalue of an orthogonal matrix is λ, then 1/λ is also an eigenvalue of that matrix.
8. The eigenvectors corresponding to different eigenvalues of a real symmetric matrix
are always orthogonal.
9. If all n rows of a square matrix are the same, then at least n-1 eigenvalues will be 0.
Properties of Eigen Value & Eigen Vector
10. The eigenvectors corresponding to different eigenvalues of any square matrix are always linearly
independent.

11. The number of linearly independent eigenvectors corresponding to a repeated eigenvalue λ of a
matrix A = (total number of columns of A) - ρ(A - λI).

12. The number of times an eigenvalue is repeated is known as its algebraic multiplicity (AM), and the
number of linearly independent eigenvectors of a repeated eigenvalue is known as its geometric
multiplicity (GM ≤ AM).
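A small illustration of AM vs GM using a standard example matrix (chosen here, not from the notes): the eigenvalue 2 has algebraic multiplicity 2 but only one linearly independent eigenvector, so GM = 1 = n - ρ(A - 2I).

```python
import numpy as np

A = np.array([[2., 1.],
              [0., 2.]])

lam = 2.0
n = A.shape[1]
gm = n - np.linalg.matrix_rank(A - lam * np.eye(2))
print(np.linalg.eigvals(A))   # [2. 2.]  -> algebraic multiplicity 2
print(gm)                     # 1        -> geometric multiplicity
```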
Q.MCQ
Using the Cayley-Hamilton theorem, for the matrix
    A = [ 1  4 ]
        [ 2  3 ]
express A^5 - 4A^4 - 7A^3 + 11A^2 - A - 10I as a linear polynomial in A.
(a) A+5I

(b) A -5I

(c) I

(d) 5A
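A NumPy verification of this reduction (the characteristic equation of A is λ^2 - 4λ - 5 = 0, so dividing the given polynomial by it leaves the remainder λ + 5):

```python
import numpy as np
from numpy.linalg import matrix_power as mpow

A = np.array([[1., 4.],
              [2., 3.]])
I = np.eye(2)

P = (mpow(A, 5) - 4 * mpow(A, 4) - 7 * mpow(A, 3)
     + 11 * mpow(A, 2) - A - 10 * I)

print(np.allclose(P, A + 5 * I))   # True -> option (a) A + 5I
```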
Diagonalisation
Diagonalisation of Matrix
If a square matrix A of order n has n linearly independent eigenvectors, then a matrix P can be found such that
P^-1 A P is a diagonal matrix.

Note:
(i) The matrix P which diagonalizes A is called the modal matrix, and the resulting diagonal matrix D is known as
the spectral matrix of A.
(ii) The diagonal matrix has the eigenvalues of A as its diagonal elements.
(iii) The matrix P which diagonalizes A is found by grouping the eigenvectors of A as the columns of a square matrix.
Similar Matrix
Definition:
Two matrices A and B are said to be similar if there exists a non-singular matrix P such that
B = P^-1 A P (similarity transform)

Note: Two similar matrices have the same eigenvalues. (B = P^-1 A P is equivalent to PB = AP.)


Diagonalization
Power of Matrix
Diagonalization of a matrix is very useful for obtaining powers of a matrix. Let A be a diagonalizable square matrix; then a
non-singular matrix P can be found such that
D = P^-1 A P
D^2 = (P^-1 A P)(P^-1 A P) = P^-1 A^2 P
D^3 = P^-1 A^3 P
D^4 = P^-1 A^4 P
…..
D^n = P^-1 A^n P
P D^n P^-1 = A^n
A^n = P D^n P^-1

        [ λ1^n    0      0   ]
D^n  =  [  0     λ2^n    0   ]
        [  0      0     λ3^n ]
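A NumPy sketch of computing a power of a matrix through diagonalization, A^n = P D^n P^-1 (the matrix and the exponent below are arbitrary choices for illustration):

```python
import numpy as np

A = np.array([[2., 2.],
              [1., 3.]])
n = 5

w, P = np.linalg.eig(A)                 # D = diag(w), P = modal matrix
Dn = np.diag(w ** n)                    # D^n: just raise the eigenvalues to the power n
An = P @ Dn @ np.linalg.inv(P)

print(np.allclose(An, np.linalg.matrix_power(A, n)))   # True
```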
Q.
Find e^A if A = [ 1  0 ]
                [ 0  2 ]
Thank You
