
Appendix A

Matrix Methods
A.1 REVIEW SUMMARY
A.1.1 Sets
A set of points is denoted by

$$S = \{\bar{x}_1, \bar{x}_2, \bar{x}_3\} \tag{A.1}$$

This shows a set of three points, $\bar{x}_1$, $\bar{x}_2$, and $\bar{x}_3$. Some properties may be assigned to the set, i.e.,

$$S = \{\bar{x}_1, \bar{x}_2, \bar{x}_3 \mid \bar{x}_3 = 0\} \tag{A.2}$$

Equation (A.2) indicates that the last member of the set satisfies $\bar{x}_3 = 0$. Members of a set are called elements of the set. If a point $x$, usually denoted by $\bar{x}$, is a member of the set, it is written as

$$\bar{x} \in S \tag{A.3}$$

If we write

$$\bar{x} \notin S \tag{A.4}$$

then point $\bar{x}$ is not an element of set $S$. If all the elements of a set $S$ are also elements of another set $T$, then $S$ is said to be a subset of $T$, or $S$ is contained in $T$:

$$S \subseteq T \tag{A.5}$$

Alternatively, this is written as

$$T \supseteq S \tag{A.6}$$
The intersection of two sets $S_1$ and $S_2$ is the set of all points $\bar{x}$ such that $\bar{x}$ is an element of both $S_1$ and $S_2$. If the intersection is denoted by $T$, we write:

$$T = S_1 \cap S_2 \tag{A.7}$$
Copyright 2002 by Marcel Dekker, Inc. All Rights Reserved.
The intersection of $n$ sets is

$$T = S_1 \cap S_2 \cap \cdots \cap S_n = \bigcap_{i=1}^{n} S_i \tag{A.8}$$

The union of two sets $S_1$ and $S_2$ is the set of all points $\bar{x}$ such that $\bar{x}$ is an element of either $S_1$ or $S_2$. If the union is denoted by $P$, we write:

$$P = S_1 \cup S_2 \tag{A.9}$$

The union of $n$ sets is written as:

$$P = S_1 \cup S_2 \cup \cdots \cup S_n = \bigcup_{i=1}^{n} S_i \tag{A.10}$$
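As a quick illustration, the operations of Eqs. (A.7)–(A.10) can be tried with Python's built-in set type; the point labels below are arbitrary stand-ins:

```python
# Intersection and union of two point sets, Eqs. (A.7) and (A.9).
S1 = {"x1", "x2", "x3"}
S2 = {"x2", "x3", "x4"}

T = S1 & S2   # intersection: points in both S1 and S2
P = S1 | S2   # union: points in either S1 or S2
print(T, P)
```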
A.1.2 Vectors

A vector is an ordered set of numbers, real or complex. A matrix containing only one row or column may be called a vector:

$$\bar{x} = \begin{vmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{vmatrix} \tag{A.11}$$

where $x_1, x_2, \ldots, x_n$ are called the constituents of the vector. The transposed form is

$$\bar{x}' = \begin{vmatrix} x_1, & x_2, & \ldots, & x_n \end{vmatrix} \tag{A.12}$$

Sometimes the transpose is indicated by a superscript letter $t$. A null vector $\bar{0}$ has all its components equal to zero, and a sum vector $\bar{1}$ has all its components equal to 1.
The following properties are applicable to vectors:

$$\begin{aligned}
\bar{x} + \bar{y} &= \bar{y} + \bar{x} \\
(\bar{x} + \bar{y}) + \bar{z} &= \bar{x} + (\bar{y} + \bar{z}) \\
\alpha_1(\alpha_2\bar{x}) &= (\alpha_1\alpha_2)\bar{x} \\
(\alpha_1 + \alpha_2)\bar{x} &= \alpha_1\bar{x} + \alpha_2\bar{x} \\
0\,\bar{x} &= \bar{0}
\end{aligned} \tag{A.13}$$
Multiplication of two vectors of the same dimensions results in an inner or scalar product:

$$\begin{aligned}
\bar{x}'\bar{y} &= \sum_{i=1}^{n} x_i y_i = \bar{y}'\bar{x} \\
\bar{x}'\bar{x} &= |\bar{x}|^2 \\
\cos\theta &= \frac{\bar{x}'\bar{y}}{|x|\,|y|}
\end{aligned} \tag{A.14}$$

where $\theta$ is the angle between the vectors and $|x|$ and $|y|$ are their geometric lengths. Two vectors $\bar{x}_1$ and $\bar{x}_2$ are orthogonal if:

$$\bar{x}_1'\bar{x}_2 = 0 \tag{A.15}$$
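The inner product and angle formula of Eq. (A.14) can be sketched in a few lines of Python (`inner` and `angle_between` are illustrative names, not from the text):

```python
# Inner (scalar) product of two vectors, Eq. (A.14), and the
# orthogonality test of Eq. (A.15).
import math

def inner(x, y):
    """Return x'y = sum of x_i * y_i."""
    return sum(xi * yi for xi, yi in zip(x, y))

def angle_between(x, y):
    """Angle theta from cos(theta) = x'y / (|x| |y|)."""
    return math.acos(inner(x, y) /
                     (math.sqrt(inner(x, x)) * math.sqrt(inner(y, y))))

x = [1.0, 0.0]
y = [0.0, 2.0]
# x'y = 0, so the vectors are orthogonal and theta = pi/2
print(inner(x, y), angle_between(x, y))
```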
A.1.3 Matrices
1. A matrix is a rectangular array of numbers subject to certain rules of
operation, and usually denoted by a capital letter within brackets [A], a capital letter
in bold, or a capital letter with an overbar. The last convention is followed in this
book. The dimensions of a matrix indicate the total number of rows and columns.
An element $a_{ij}$ lies at the intersection of row $i$ and column $j$.
2. A matrix containing only one row or column is called a vector.
3. A matrix in which the number of rows is equal to the number of columns is
a square matrix.
4. A square matrix is a diagonal matrix if all off-diagonal elements are zero.
5. A unit or identity matrix $\bar{I}$ is a square matrix with all diagonal elements equal to 1 and all off-diagonal elements equal to 0.
6. A matrix is symmetric if, for all values of $i$ and $j$, $a_{ij} = a_{ji}$.
7. A square matrix is a skew symmetric matrix if $a_{ij} = -a_{ji}$ for all values of $i$
and $j$.
8. A square matrix whose elements below the leading diagonal are zero is
called an upper triangular matrix. A square matrix whose elements above the leading
diagonal are zero is called a lower triangular matrix.
9. If in a given matrix rows and columns are interchanged, the new matrix obtained is the transpose of the original matrix, denoted by $\bar{A}'$.
10. A square matrix $\bar{A}$ is an orthogonal matrix if its product with its transpose is an identity matrix:

$$\bar{A}\bar{A}' = \bar{I} \tag{A.16}$$

11. The conjugate of a matrix is obtained by changing all its complex elements to their conjugates, i.e., if

$$\bar{A} = \begin{vmatrix} 1+i & 3+4i & 5 \\ 7+2i & i & 4+3i \end{vmatrix} \tag{A.17}$$

then its conjugate is

$$\bar{A}^{*} = \begin{vmatrix} 1-i & 3-4i & 5 \\ 7-2i & -i & 4-3i \end{vmatrix} \tag{A.18}$$

A square matrix is a unitary matrix if the product of the transpose of the conjugate matrix and the original matrix is an identity matrix:

$$(\bar{A}^{*})'\bar{A} = \bar{I} \tag{A.19}$$

12. A square matrix is called a Hermitian matrix if every $ij$ element is equal to the complex conjugate of the $ji$ element, i.e.,

$$\bar{A} = (\bar{A}^{*})' \tag{A.20}$$

13. A matrix such that

$$\bar{A}^2 = \bar{A} \tag{A.21}$$

is called an idempotent matrix.
14. A matrix is periodic if

$$\bar{A}^{k+1} = \bar{A} \tag{A.22}$$

15. A matrix is called nilpotent if

$$\bar{A}^{k} = 0 \tag{A.23}$$

where $k$ is a positive integer. If $k$ is the least positive integer for which $\bar{A}^k = 0$, then $k$ is called the index of the nilpotent matrix.
16. Addition of matrices follows a commutative law:

$$\bar{A} + \bar{B} = \bar{B} + \bar{A} \tag{A.24}$$

17. A scalar multiple is obtained by multiplying each element of the matrix with a scalar. The product of two matrices $\bar{A}$ and $\bar{B}$ is only possible if the number of columns in $\bar{A}$ equals the number of rows in $\bar{B}$. If $\bar{A}$ is an $m \times n$ matrix and $\bar{B}$ is an $n \times p$ matrix, the product $\bar{A}\bar{B}$ is an $m \times p$ matrix, where

$$c_{ij} = a_{i1}b_{1j} + a_{i2}b_{2j} + \cdots + a_{in}b_{nj} \tag{A.25}$$

Multiplication is not commutative:

$$\bar{A}\bar{B} \neq \bar{B}\bar{A} \tag{A.26}$$

Multiplication is associative if conformability is assured:

$$\bar{A}(\bar{B}\bar{C}) = (\bar{A}\bar{B})\bar{C} \tag{A.27}$$

It is distributive with respect to addition:

$$\bar{A}(\bar{B} + \bar{C}) = \bar{A}\bar{B} + \bar{A}\bar{C} \tag{A.28}$$

The multiplicative inverse exists if $|A| \neq 0$. Also,

$$(\bar{A}\bar{B})' = \bar{B}'\bar{A}' \tag{A.29}$$
18. The transpose of the matrix of cofactors of a matrix is called the adjoint matrix. The product of a matrix $\bar{A}$ and its adjoint is equal to the unit matrix multiplied by the determinant of $\bar{A}$:

$$\bar{A}\bar{A}_{\mathrm{adj}} = \bar{I}\,|A| \tag{A.30}$$

This property can be used to find the inverse of a matrix (see Example A.4).
19. By performing elementary transformations any nonzero matrix can be reduced to one of the following forms, called the normal forms:

$$\begin{vmatrix} I_r \end{vmatrix}, \quad \begin{vmatrix} I_r & 0 \end{vmatrix}, \quad \begin{vmatrix} I_r \\ 0 \end{vmatrix}, \quad \begin{vmatrix} I_r & 0 \\ 0 & 0 \end{vmatrix} \tag{A.31}$$

The number $r$ is called the rank of matrix $\bar{A}$. The form

$$\begin{vmatrix} I_r & 0 \\ 0 & 0 \end{vmatrix} \tag{A.32}$$

is called the first canonical form of $\bar{A}$. Both row and column transformations can be used here. The rank of a matrix is said to be $r$ if (1) it has at least one nonzero minor of order $r$, and (2) every minor of $\bar{A}$ of order higher than $r$ is zero. Equivalently, the rank equals the number of nonzero rows (rows that do not have all elements equal to zero) in the upper triangular matrix obtained by elimination.
Example A.1

Find the rank of the matrix:

$$\bar{A} = \begin{vmatrix} 1 & 4 & 5 \\ 2 & 6 & 8 \\ 3 & 7 & 22 \end{vmatrix}$$

This matrix can be reduced to an upper triangular matrix by elementary row operations:

$$\bar{A} \rightarrow \begin{vmatrix} 1 & 4 & 5 \\ 0 & 1 & 1 \\ 0 & 0 & 12 \end{vmatrix}$$

There are three nonzero rows, so the rank of the matrix is 3.
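A minimal sketch of the procedure of Example A.1 in Python; the `rank` helper is illustrative and simply counts the nonzero rows left after row reduction:

```python
# Rank by reduction to upper triangular form, as in Example A.1:
# the rank equals the number of nonzero rows after elimination.
def rank(a, tol=1e-9):
    m = [row[:] for row in a]
    rows, cols = len(m), len(m[0])
    r = 0
    for c in range(cols):
        # find a pivot row for this column
        piv = next((i for i in range(r, rows) if abs(m[i][c]) > tol), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(r + 1, rows):
            f = m[i][c] / m[r][c]
            m[i] = [m[i][j] - f * m[r][j] for j in range(cols)]
        r += 1
    return r

A = [[1, 4, 5], [2, 6, 8], [3, 7, 22]]
print(rank(A))  # 3
```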


A.2 CHARACTERISTIC ROOTS, EIGENVALUES, AND EIGENVECTORS

For a square matrix $\bar{A}$, the matrix $\bar{A} - z\bar{I}$ is called the characteristic matrix; $z$ is a scalar and $\bar{I}$ is a unit matrix. The determinant $|\bar{A} - z\bar{I}|$, when expanded, gives a polynomial, which is called the characteristic polynomial of $\bar{A}$, and the equation $|\bar{A} - z\bar{I}| = 0$ is called the characteristic equation of matrix $\bar{A}$. The roots of the characteristic equation are called the characteristic roots or eigenvalues.

Some properties of eigenvalues are:

. Any square matrix $\bar{A}$ and its transpose $\bar{A}'$ have the same eigenvalues.
. The sum of the eigenvalues of a matrix is equal to the trace of the matrix (the sum of the elements on the principal diagonal is called the trace of the matrix).
. The product of the eigenvalues of the matrix is equal to the determinant of the matrix.
. If $z_1, z_2, \ldots, z_n$ are the eigenvalues of $\bar{A}$, then the eigenvalues of

$$\begin{aligned}
k\bar{A} \ &\text{are} \ kz_1, kz_2, \ldots, kz_n \\
\bar{A}^m \ &\text{are} \ z_1^m, z_2^m, \ldots, z_n^m \\
\bar{A}^{-1} \ &\text{are} \ 1/z_1, 1/z_2, \ldots, 1/z_n
\end{aligned} \tag{A.33}$$

. Zero is a characteristic root of a matrix only if the matrix is singular.
. The characteristic roots of a triangular matrix are the diagonal elements of the matrix.
. The characteristic roots of a Hermitian matrix are all real.
. The characteristic roots of a real symmetric matrix are all real, as the real symmetric matrix will be Hermitian.
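Two of the properties above (sum of eigenvalues equals the trace, product equals the determinant) can be checked directly for a 2×2 matrix, whose characteristic equation $z^2 - (\text{trace})z + \det = 0$ is solved by the quadratic formula:

```python
# Trace and determinant vs. eigenvalues for a 2x2 symmetric matrix.
import math

A = [[2.0, 1.0], [1.0, 2.0]]
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
# characteristic equation: z^2 - tr*z + det = 0
disc = math.sqrt(tr * tr - 4 * det)
z1, z2 = (tr + disc) / 2, (tr - disc) / 2
print(z1, z2)        # 3.0 1.0
print(z1 + z2, tr)   # sum of eigenvalues equals the trace
print(z1 * z2, det)  # product of eigenvalues equals the determinant
```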
A.2.1 Cayley–Hamilton Theorem

Every square matrix satisfies its own characteristic equation. If

$$|\bar{A} - z\bar{I}| = (-1)^n \left( z^n + a_1 z^{n-1} + a_2 z^{n-2} + \cdots + a_n \right) \tag{A.34}$$

is the characteristic polynomial of an $n \times n$ matrix, then the matrix equation

$$\bar{X}^n + a_1\bar{X}^{n-1} + a_2\bar{X}^{n-2} + \cdots + a_n\bar{I} = 0$$

is satisfied by $\bar{X} = \bar{A}$:

$$\bar{A}^n + a_1\bar{A}^{n-1} + a_2\bar{A}^{n-2} + \cdots + a_n\bar{I} = 0 \tag{A.35}$$

This property can be used to find the inverse of a matrix.
Example A.2

Find the characteristic equation of the matrix:

$$\bar{A} = \begin{vmatrix} 1 & -4 & 2 \\ -3 & 2 & 2 \\ 1 & 1 & 2 \end{vmatrix}$$

and then the inverse of the matrix.

The characteristic equation is given by

$$\begin{vmatrix} 1-z & -4 & 2 \\ -3 & 2-z & 2 \\ 1 & 1 & 2-z \end{vmatrix} = 0$$

Expanding, the characteristic equation is

$$z^3 - 5z^2 - 8z + 40 = 0$$

Then, by the Cayley–Hamilton theorem:

$$\bar{A}^2 - 5\bar{A} - 8\bar{I} + 40\bar{A}^{-1} = 0$$
$$40\bar{A}^{-1} = -\bar{A}^2 + 5\bar{A} + 8\bar{I}$$

We can write:

$$40\bar{A}^{-1} = -\begin{vmatrix} 1 & -4 & 2 \\ -3 & 2 & 2 \\ 1 & 1 & 2 \end{vmatrix}^2 + 5\begin{vmatrix} 1 & -4 & 2 \\ -3 & 2 & 2 \\ 1 & 1 & 2 \end{vmatrix} + 8\begin{vmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{vmatrix}$$

The inverse is

$$\bar{A}^{-1} = \begin{vmatrix} -0.05 & -0.25 & 0.3 \\ -0.2 & 0 & 0.2 \\ 0.125 & 0.125 & 0.25 \end{vmatrix}$$

This is not an effective method of finding the inverse for matrices of large dimensions.
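The Cayley–Hamilton inverse of Example A.2 can be reproduced in plain Python (the `matmul` helper is added only for this sketch):

```python
# Cayley-Hamilton inverse for the matrix of Example A.2:
# from z^3 - 5z^2 - 8z + 40 = 0, 40*inv(A) = -(A*A) + 5*A + 8*I.
def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, -4, 2], [-3, 2, 2], [1, 1, 2]]
A2 = matmul(A, A)
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
inv = [[(-A2[i][j] + 5 * A[i][j] + 8 * I[i][j]) / 40 for j in range(3)]
       for i in range(3)]
print(inv)
```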
A.2.2 Characteristic Vectors

Each characteristic root $z$ has a corresponding nonzero vector $\bar{x}$ which satisfies the equation $(\bar{A} - z\bar{I})\bar{x} = 0$. The nonzero vector $\bar{x}$ is called the characteristic vector or eigenvector. The eigenvector is, therefore, not unique.
A.3 DIAGONALIZATION OF A MATRIX

If a square $n \times n$ matrix $\bar{A}$ has $n$ linearly independent eigenvectors, then a matrix $\bar{P}$ can be found so that

$$\bar{P}^{-1}\bar{A}\bar{P} \tag{A.36}$$

is a diagonal matrix. The matrix $\bar{P}$ is found by grouping the eigenvectors of $\bar{A}$ into a square matrix; the resulting diagonal matrix $\bar{P}^{-1}\bar{A}\bar{P}$ has the eigenvalues of $\bar{A}$ as its diagonal elements.

A.3.1 Similarity Transformation

The transformation of matrix $\bar{A}$ into $\bar{P}^{-1}\bar{A}\bar{P}$ is called a similarity transformation. Diagonalization is a special case of similarity transformation.
Example A.3

Let

$$\bar{A} = \begin{vmatrix} -2 & 2 & -3 \\ 2 & 1 & -6 \\ -1 & -2 & 0 \end{vmatrix}$$

Its characteristic equation is

$$z^3 + z^2 - 21z - 45 = 0$$
$$(z - 5)(z + 3)(z + 3) = 0$$

The eigenvector for $z = 5$ is found by substituting the eigenvalue:

$$\begin{vmatrix} -7 & 2 & -3 \\ 2 & -4 & -6 \\ -1 & -2 & -5 \end{vmatrix} \begin{vmatrix} x \\ y \\ z \end{vmatrix} = \begin{vmatrix} 0 \\ 0 \\ 0 \end{vmatrix}$$

As eigenvectors are not unique, by assuming that the third component $z = -1$ and solving, one eigenvector is

$$\begin{vmatrix} 1, & 2, & -1 \end{vmatrix}^t$$

Similarly, other eigenvectors can be found. A matrix formed of these vectors is

$$\bar{P} = \begin{vmatrix} 1 & 2 & 3 \\ 2 & -1 & 0 \\ -1 & 0 & 1 \end{vmatrix}$$

and the diagonalization is obtained:

$$\bar{P}^{-1}\bar{A}\bar{P} = \begin{vmatrix} 5 & 0 & 0 \\ 0 & -3 & 0 \\ 0 & 0 & -3 \end{vmatrix}$$

This contains the eigenvalues as the diagonal elements.
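The eigenpairs of this example can be verified directly: each column $\bar{p}_i$ of $\bar{P}$ should satisfy $\bar{A}\bar{p}_i = z_i\bar{p}_i$. A small Python check, with `matvec` as an illustrative helper:

```python
# Verify the eigenpairs of Example A.3: for each column p_i of P,
# A @ p_i should equal lambda_i * p_i with eigenvalues (5, -3, -3).
def matvec(a, v):
    return [sum(aij * vj for aij, vj in zip(row, v)) for row in a]

A = [[-2, 2, -3], [2, 1, -6], [-1, -2, 0]]
eigvecs = [[1, 2, -1], [2, -1, 0], [3, 0, 1]]
eigvals = [5, -3, -3]
for lam, v in zip(eigvals, eigvecs):
    print(matvec(A, v), [lam * vi for vi in v])
```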


A.4 LINEAR INDEPENDENCE OR DEPENDENCE OF VECTORS

Vectors $\bar{x}_1, \bar{x}_2, \ldots, \bar{x}_n$ are linearly dependent if all vectors (row or column matrices) are of the same order, and $n$ scalars $z_1, z_2, \ldots, z_n$ (not all zero) exist such that:

$$z_1\bar{x}_1 + z_2\bar{x}_2 + z_3\bar{x}_3 + \cdots + z_n\bar{x}_n = 0 \tag{A.37}$$

Otherwise they are linearly independent. In other words, if a vector $\bar{x}_{k+1}$ can be written as a linear combination of vectors $\bar{x}_1, \bar{x}_2, \ldots, \bar{x}_n$, then it is linearly dependent on them; otherwise it is linearly independent. Consider the vectors:

$$\bar{x}_3 = \begin{vmatrix} 4 \\ 2 \\ 5 \end{vmatrix} \quad \bar{x}_1 = \begin{vmatrix} 1 \\ 0.5 \\ 0 \end{vmatrix} \quad \bar{x}_2 = \begin{vmatrix} 0 \\ 0 \\ 1 \end{vmatrix}$$

Then

$$\bar{x}_3 = 4\bar{x}_1 + 5\bar{x}_2$$

Therefore, $\bar{x}_3$ is linearly dependent on $\bar{x}_1$ and $\bar{x}_2$.
A.4.1 Vector Spaces

If $\bar{x}$ is any vector from all possible collections of vectors of dimension $n$, then for any scalar $\alpha$, the vector $\alpha\bar{x}$ is also of dimension $n$. For any other $n$-vector $\bar{y}$, the vector $\bar{x} + \bar{y}$ is also of dimension $n$. The set of all $n$-dimensional vectors is said to form a linear vector space $E_n$. Transformation of a vector by a matrix is a linear transformation:

$$\bar{A}(\alpha\bar{x} + \beta\bar{y}) = \alpha\bar{A}\bar{x} + \beta\bar{A}\bar{y} \tag{A.38}$$

One property of interest is

$$\bar{A}\bar{x} = 0 \tag{A.39}$$

i.e., whether any nonzero vector $\bar{x}$ exists which is transformed by matrix $\bar{A}$ into a zero vector. Equation (A.39) can only be satisfied by a nonzero $\bar{x}$ if the columns of $\bar{A}$ are linearly dependent. A square matrix whose columns are linearly dependent is called a singular matrix, and a square matrix whose columns are linearly independent is called a nonsingular matrix. If $\bar{x} = \bar{0}$ is the only solution of Eq. (A.39), then the columns of $\bar{A}$ must be linearly independent. The determinant of a singular matrix is zero and its inverse does not exist.
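A quick numerical illustration: a 3×3 matrix whose third column is the sum of the first two has linearly dependent columns, so its determinant is zero and the matrix is singular (the `det3` cofactor expansion is written out for this sketch):

```python
# Determinant test for singularity: dependent columns give det = 0.
def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

# column 3 = column 1 + column 2, so the matrix is singular
S = [[1, 2, 3], [4, 5, 9], [7, 8, 15]]
print(det3(S))  # 0
```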
A.5 QUADRATIC FORM EXPRESSED AS A PRODUCT OF MATRICES

The quadratic form can be expressed as a product of matrices:

$$\text{Quadratic form} = \bar{x}'\bar{A}\bar{x} \tag{A.40}$$

where

$$\bar{x} = \begin{vmatrix} x_1 \\ x_2 \\ x_3 \end{vmatrix} \quad \bar{A} = \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} \tag{A.41}$$

Therefore, for a symmetric matrix $\bar{A}$ ($a_{ij} = a_{ji}$):

$$\bar{x}'\bar{A}\bar{x} = \begin{vmatrix} x_1 & x_2 & x_3 \end{vmatrix} \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} \begin{vmatrix} x_1 \\ x_2 \\ x_3 \end{vmatrix} = a_{11}x_1^2 + a_{22}x_2^2 + a_{33}x_3^2 + 2a_{12}x_1x_2 + 2a_{23}x_2x_3 + 2a_{13}x_1x_3 \tag{A.42}$$
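Equation (A.42) can be verified numerically for a symmetric matrix by computing $\bar{x}'\bar{A}\bar{x}$ both as a double sum and from the expanded formula (the numbers are illustrative):

```python
# Quadratic form x'Ax of Eq. (A.42), computed two ways.
def quad(a, x):
    n = len(x)
    return sum(x[i] * a[i][j] * x[j] for i in range(n) for j in range(n))

A = [[1, 2, 3], [2, 4, 5], [3, 5, 6]]  # symmetric: a_ij = a_ji
x = [1.0, 2.0, 3.0]
expanded = (A[0][0] * x[0]**2 + A[1][1] * x[1]**2 + A[2][2] * x[2]**2
            + 2 * A[0][1] * x[0] * x[1]
            + 2 * A[1][2] * x[1] * x[2]
            + 2 * A[0][2] * x[0] * x[2])
print(quad(A, x), expanded)
```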
A.6 DERIVATIVES OF SCALAR AND VECTOR FUNCTIONS

A scalar function is defined as

$$y = f(x_1, x_2, \ldots, x_n) \tag{A.43}$$

where $x_1, x_2, \ldots, x_n$ are $n$ variables. It can be written as a scalar function of an $n$-dimensional vector, i.e., $y = f(\bar{x})$, where $\bar{x}$ is an $n$-dimensional vector:

$$\bar{x} = \begin{vmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{vmatrix} \tag{A.44}$$

In general, a scalar function could be a function of several vector variables, i.e., $y = f(\bar{x}, \bar{u}, \bar{p})$, where $\bar{x}$, $\bar{u}$, and $\bar{p}$ are vectors of various dimensions. A vector function is a function of several vector variables, i.e., $\bar{y} = \bar{f}(\bar{x}, \bar{u}, \bar{p})$.

A derivative of a scalar function with respect to a vector variable is defined as

$$\frac{\partial f}{\partial \bar{x}} = \begin{vmatrix} \partial f/\partial x_1 \\ \partial f/\partial x_2 \\ \vdots \\ \partial f/\partial x_n \end{vmatrix} \tag{A.45}$$

The derivative of a scalar function with respect to a vector of $n$ dimensions is a vector of the same dimension. The derivative of a vector function $\bar{f}$ of $m$ components with respect to a vector variable $\bar{x}$ is defined as

$$\frac{\partial \bar{f}}{\partial \bar{x}} = \begin{vmatrix} \partial f_1/\partial x_1 & \partial f_1/\partial x_2 & \cdots & \partial f_1/\partial x_n \\ \partial f_2/\partial x_1 & \partial f_2/\partial x_2 & \cdots & \partial f_2/\partial x_n \\ \vdots & & & \vdots \\ \partial f_m/\partial x_1 & \partial f_m/\partial x_2 & \cdots & \partial f_m/\partial x_n \end{vmatrix} \tag{A.46}$$
If a scalar function is defined as

$$s = \bar{z}^T \bar{f}(\bar{x}, \bar{u}, \bar{p}) = z_1 f_1(\bar{x}, \bar{u}, \bar{p}) + z_2 f_2(\bar{x}, \bar{u}, \bar{p}) + \cdots + z_m f_m(\bar{x}, \bar{u}, \bar{p}) \tag{A.47}$$

then $\partial s/\partial \bar{z}$ is

$$\frac{\partial s}{\partial \bar{z}} = \begin{vmatrix} f_1(\bar{x}, \bar{u}, \bar{p}) \\ f_2(\bar{x}, \bar{u}, \bar{p}) \\ \vdots \\ f_m(\bar{x}, \bar{u}, \bar{p}) \end{vmatrix} = \bar{f}(\bar{x}, \bar{u}, \bar{p}) \tag{A.48}$$

and $\partial s/\partial \bar{x}$ is

$$\frac{\partial s}{\partial \bar{x}} = \begin{vmatrix} z_1 \dfrac{\partial f_1}{\partial x_1} + z_2 \dfrac{\partial f_2}{\partial x_1} + \cdots + z_m \dfrac{\partial f_m}{\partial x_1} \\ z_1 \dfrac{\partial f_1}{\partial x_2} + z_2 \dfrac{\partial f_2}{\partial x_2} + \cdots + z_m \dfrac{\partial f_m}{\partial x_2} \\ \vdots \\ z_1 \dfrac{\partial f_1}{\partial x_n} + z_2 \dfrac{\partial f_2}{\partial x_n} + \cdots + z_m \dfrac{\partial f_m}{\partial x_n} \end{vmatrix} = \begin{vmatrix} \dfrac{\partial f_1}{\partial x_1} & \dfrac{\partial f_2}{\partial x_1} & \cdots & \dfrac{\partial f_m}{\partial x_1} \\ \dfrac{\partial f_1}{\partial x_2} & \dfrac{\partial f_2}{\partial x_2} & \cdots & \dfrac{\partial f_m}{\partial x_2} \\ \vdots & & & \vdots \\ \dfrac{\partial f_1}{\partial x_n} & \dfrac{\partial f_2}{\partial x_n} & \cdots & \dfrac{\partial f_m}{\partial x_n} \end{vmatrix} \begin{vmatrix} z_1 \\ z_2 \\ \vdots \\ z_m \end{vmatrix} \tag{A.49}$$

Therefore,

$$\frac{\partial s}{\partial \bar{x}} = \left( \frac{\partial \bar{f}}{\partial \bar{x}} \right)^T \bar{z} \tag{A.50}$$
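Equation (A.50) can be spot-checked numerically. Here a hypothetical vector function $\bar{f}(\bar{x}) = [x_1^2,\ x_1 x_2]$ is used; the analytic gradient $(\partial\bar{f}/\partial\bar{x})^T\bar{z}$ is compared against a finite-difference gradient of $s$:

```python
# Check ds/dx = (df/dx)' z for f(x) = [x1^2, x1*x2] and s = z'f(x).
def s(x, z):
    return z[0] * x[0]**2 + z[1] * x[0] * x[1]

x = [2.0, 3.0]
z = [1.0, 4.0]
# analytic Jacobian of f at x, rows df_i/dx_j
J = [[2 * x[0], 0.0], [x[1], x[0]]]
grad = [J[0][0] * z[0] + J[1][0] * z[1],   # (J' z)_1
        J[0][1] * z[0] + J[1][1] * z[1]]   # (J' z)_2

# finite-difference gradient of s for comparison
h = 1e-6
fd = [(s([x[0] + h, x[1]], z) - s([x[0] - h, x[1]], z)) / (2 * h),
      (s([x[0], x[1] + h], z) - s([x[0], x[1] - h], z)) / (2 * h)]
print(grad, fd)
```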
A.7 INVERSE OF A MATRIX

The inverse of a matrix is often required in power system calculations, though it is rarely calculated directly. The inverse of a square matrix $\bar{A}$ is defined so that

$$\bar{A}^{-1}\bar{A} = \bar{A}\bar{A}^{-1} = \bar{I} \tag{A.51}$$

The inverse can be evaluated in many ways.

A.7.1 By Calculating the Adjoint and Determinant of the Matrix

$$\bar{A}^{-1} = \frac{\bar{A}_{\mathrm{adj}}}{|A|} \tag{A.52}$$
Example A.4

Consider the matrix:

$$\bar{A} = \begin{vmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 3 & 1 & 2 \end{vmatrix}$$

Its adjoint is

$$\bar{A}_{\mathrm{adj}} = \begin{vmatrix} 4 & -1 & -3 \\ 10 & -7 & 6 \\ -11 & 5 & -3 \end{vmatrix}$$

and the determinant of $\bar{A}$ is equal to $-9$. Thus, the inverse of $\bar{A}$ is

$$\bar{A}^{-1} = \begin{vmatrix} -\dfrac{4}{9} & \dfrac{1}{9} & \dfrac{1}{3} \\ -\dfrac{10}{9} & \dfrac{7}{9} & -\dfrac{2}{3} \\ \dfrac{11}{9} & -\dfrac{5}{9} & \dfrac{1}{3} \end{vmatrix}$$
A.7.2 By Elementary Row Operations

The inverse can also be calculated by elementary row operations. The procedure is as follows:

1. A unit matrix of $n \times n$ is first attached to the right side of the $n \times n$ matrix whose inverse is required to be found.
2. Elementary row operations are used to force the augmented matrix so that the matrix whose inverse is required becomes a unit matrix.

Example A.5

Consider a matrix:

$$\bar{A} = \begin{vmatrix} 2 & 6 \\ 3 & 4 \end{vmatrix}$$

It is required to find its inverse. Attach a unit matrix of $2 \times 2$ and perform the operations as shown:

$$\left| \begin{array}{cc|cc} 2 & 6 & 1 & 0 \\ 3 & 4 & 0 & 1 \end{array} \right|
\xrightarrow{R_1/2}
\left| \begin{array}{cc|cc} 1 & 3 & \tfrac{1}{2} & 0 \\ 3 & 4 & 0 & 1 \end{array} \right|
\xrightarrow{R_2 - 3R_1}
\left| \begin{array}{cc|cc} 1 & 3 & \tfrac{1}{2} & 0 \\ 0 & -5 & -\tfrac{3}{2} & 1 \end{array} \right|$$

$$\xrightarrow{R_1 + \tfrac{3}{5}R_2}
\left| \begin{array}{cc|cc} 1 & 0 & -\tfrac{2}{5} & \tfrac{3}{5} \\ 0 & -5 & -\tfrac{3}{2} & 1 \end{array} \right|
\xrightarrow{-R_2/5}
\left| \begin{array}{cc|cc} 1 & 0 & -\tfrac{2}{5} & \tfrac{3}{5} \\ 0 & 1 & \tfrac{3}{10} & -\tfrac{1}{5} \end{array} \right|$$

Thus, the inverse is

$$\bar{A}^{-1} = \begin{vmatrix} -\dfrac{2}{5} & \dfrac{3}{5} \\ \dfrac{3}{10} & -\dfrac{1}{5} \end{vmatrix}$$
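The procedure of Example A.5 can be sketched as a small Gauss–Jordan routine (`inverse` is an illustrative helper; the partial pivoting is an implementation choice added for numerical safety, not from the text):

```python
# Inverse by elementary row operations (Gauss-Jordan): augment with
# the identity and reduce the original matrix to a unit matrix.
def inverse(a):
    n = len(a)
    m = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(a)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(m[r][c]))  # partial pivoting
        m[c], m[piv] = m[piv], m[c]
        p = m[c][c]
        m[c] = [v / p for v in m[c]]          # scale pivot row to 1
        for r in range(n):
            if r != c and m[r][c] != 0.0:     # eliminate the rest of the column
                f = m[r][c]
                m[r] = [v - f * w for v, w in zip(m[r], m[c])]
    return [row[n:] for row in m]

print(inverse([[2.0, 6.0], [3.0, 4.0]]))
```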

Some useful properties of inverse matrices are as follows. The inverse of a matrix product is the product of the matrix inverses taken in reverse order, i.e.,

$$(\bar{A}\bar{B}\bar{C})^{-1} = \bar{C}^{-1}\bar{B}^{-1}\bar{A}^{-1} \tag{A.53}$$

The inverse of a diagonal matrix is a diagonal matrix whose elements are the respective inverses of the elements of the original matrix:

$$\begin{vmatrix} A_{11} & & \\ & B_{22} & \\ & & C_{33} \end{vmatrix}^{-1} = \begin{vmatrix} \dfrac{1}{A_{11}} & & \\ & \dfrac{1}{B_{22}} & \\ & & \dfrac{1}{C_{33}} \end{vmatrix} \tag{A.54}$$

A square matrix composed of diagonal blocks can be inverted by taking the inverse of the respective submatrices of the diagonal blocks:

$$\begin{vmatrix} \text{block } A & & \\ & \text{block } B & \\ & & \text{block } C \end{vmatrix}^{-1} = \begin{vmatrix} (\text{block } A)^{-1} & & \\ & (\text{block } B)^{-1} & \\ & & (\text{block } C)^{-1} \end{vmatrix} \tag{A.55}$$
A.7.3 Inverse by Partitioning

Matrices can be partitioned horizontally and vertically, and the resulting submatrices may contain only one element. Thus, a matrix $\bar{A}$ can be partitioned as shown:

$$\bar{A} = \begin{vmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34} \\ a_{41} & a_{42} & a_{43} & a_{44} \end{vmatrix} = \begin{vmatrix} \bar{A}_1 & \bar{A}_2 \\ \bar{A}_3 & \bar{A}_4 \end{vmatrix} \tag{A.56}$$

where

$$\bar{A}_1 = \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} \tag{A.57}$$

$$\bar{A}_2 = \begin{vmatrix} a_{14} \\ a_{24} \\ a_{34} \end{vmatrix} \quad \bar{A}_3 = \begin{vmatrix} a_{41} & a_{42} & a_{43} \end{vmatrix} \quad \bar{A}_4 = \begin{vmatrix} a_{44} \end{vmatrix} \tag{A.58}$$
Partitioned matrices follow the rules of matrix addition and subtraction. Partitioned matrices $\bar{A}$ and $\bar{B}$ can be multiplied if they are conformable and the columns of $\bar{A}$ and the rows of $\bar{B}$ are partitioned in exactly the same manner:

$$\begin{vmatrix} (\bar{A}_{11})_{2\times2} & (\bar{A}_{12})_{2\times1} \\ (\bar{A}_{21})_{1\times2} & (\bar{A}_{22})_{1\times1} \end{vmatrix} \begin{vmatrix} (\bar{B}_{11})_{2\times3} & (\bar{B}_{12})_{2\times1} \\ (\bar{B}_{21})_{1\times3} & (\bar{B}_{22})_{1\times1} \end{vmatrix} = \begin{vmatrix} \bar{A}_{11}\bar{B}_{11} + \bar{A}_{12}\bar{B}_{21} & \bar{A}_{11}\bar{B}_{12} + \bar{A}_{12}\bar{B}_{22} \\ \bar{A}_{21}\bar{B}_{11} + \bar{A}_{22}\bar{B}_{21} & \bar{A}_{21}\bar{B}_{12} + \bar{A}_{22}\bar{B}_{22} \end{vmatrix} \tag{A.59}$$
Example A.6

Find the product of two matrices $\bar{A}$ and $\bar{B}$ by partitioning:

$$\bar{A} = \begin{vmatrix} 1 & 2 & 3 \\ 2 & 0 & 1 \\ 1 & 3 & 6 \end{vmatrix} \quad \bar{B} = \begin{vmatrix} 1 & 2 & 1 & 0 \\ 2 & 3 & 5 & 1 \\ 4 & 6 & 1 & 2 \end{vmatrix}$$

The product is given by

$$\bar{A}\bar{B} = \begin{vmatrix} \begin{vmatrix} 1 & 2 \\ 2 & 0 \end{vmatrix}\begin{vmatrix} 1 & 2 & 1 \\ 2 & 3 & 5 \end{vmatrix} + \begin{vmatrix} 3 \\ 1 \end{vmatrix}\begin{vmatrix} 4 & 6 & 1 \end{vmatrix} & \begin{vmatrix} 1 & 2 \\ 2 & 0 \end{vmatrix}\begin{vmatrix} 0 \\ 1 \end{vmatrix} + \begin{vmatrix} 3 \\ 1 \end{vmatrix}(2) \\ \begin{vmatrix} 1 & 3 \end{vmatrix}\begin{vmatrix} 1 & 2 & 1 \\ 2 & 3 & 5 \end{vmatrix} + 6\begin{vmatrix} 4 & 6 & 1 \end{vmatrix} & \begin{vmatrix} 1 & 3 \end{vmatrix}\begin{vmatrix} 0 \\ 1 \end{vmatrix} + 6(2) \end{vmatrix} = \begin{vmatrix} 17 & 26 & 14 & 8 \\ 6 & 10 & 3 & 2 \\ 31 & 47 & 22 & 15 \end{vmatrix}$$
A matrix can be inverted by partitioning. In this case, each of the diagonal submatrices must be square. Consider a square matrix partitioned into four submatrices:

$$\bar{A} = \begin{vmatrix} \bar{A}_1 & \bar{A}_2 \\ \bar{A}_3 & \bar{A}_4 \end{vmatrix} \tag{A.60}$$

The diagonal submatrices $\bar{A}_1$ and $\bar{A}_4$ are square, though they can be of different dimensions. Let the inverse of $\bar{A}$ be

$$\bar{A}^{-1} = \begin{vmatrix} \bar{A}_1'' & \bar{A}_2'' \\ \bar{A}_3'' & \bar{A}_4'' \end{vmatrix} \tag{A.61}$$

Then

$$\bar{A}^{-1}\bar{A} = \begin{vmatrix} \bar{A}_1'' & \bar{A}_2'' \\ \bar{A}_3'' & \bar{A}_4'' \end{vmatrix} \begin{vmatrix} \bar{A}_1 & \bar{A}_2 \\ \bar{A}_3 & \bar{A}_4 \end{vmatrix} = \begin{vmatrix} 1 & 0 \\ 0 & 1 \end{vmatrix} \tag{A.62}$$
The following relations can be derived from this identity:

$$\begin{aligned}
\bar{A}_1'' &= \left( \bar{A}_1 - \bar{A}_2\bar{A}_4^{-1}\bar{A}_3 \right)^{-1} \\
\bar{A}_2'' &= -\bar{A}_1''\bar{A}_2\bar{A}_4^{-1} \\
\bar{A}_4'' &= \left( \bar{A}_4 - \bar{A}_3\bar{A}_1^{-1}\bar{A}_2 \right)^{-1} \\
\bar{A}_3'' &= -\bar{A}_4''\bar{A}_3\bar{A}_1^{-1}
\end{aligned} \tag{A.63}$$
Example A.7

Invert the following matrix by partitioning:

$$\bar{A} = \begin{vmatrix} 2 & 3 & 0 \\ 1 & 1 & 3 \\ 1 & 2 & 4 \end{vmatrix}$$

$$\bar{A}_1 = \begin{vmatrix} 2 & 3 \\ 1 & 1 \end{vmatrix} \quad \bar{A}_2 = \begin{vmatrix} 0 \\ 3 \end{vmatrix} \quad \bar{A}_3 = \begin{vmatrix} 1 & 2 \end{vmatrix} \quad \bar{A}_4 = \begin{vmatrix} 4 \end{vmatrix}$$

$$\bar{A}_1'' = \left( \begin{vmatrix} 2 & 3 \\ 1 & 1 \end{vmatrix} - \begin{vmatrix} 0 \\ 3 \end{vmatrix} \frac{1}{4} \begin{vmatrix} 1 & 2 \end{vmatrix} \right)^{-1} = \begin{vmatrix} \dfrac{2}{7} & \dfrac{12}{7} \\ \dfrac{1}{7} & -\dfrac{8}{7} \end{vmatrix}$$

$$\bar{A}_2'' = -\begin{vmatrix} \dfrac{2}{7} & \dfrac{12}{7} \\ \dfrac{1}{7} & -\dfrac{8}{7} \end{vmatrix} \begin{vmatrix} 0 \\ 3 \end{vmatrix} \frac{1}{4} = \begin{vmatrix} -\dfrac{9}{7} \\ \dfrac{6}{7} \end{vmatrix}$$

$$\bar{A}_4'' = \left( 4 - \begin{vmatrix} 1 & 2 \end{vmatrix} \begin{vmatrix} -1 & 3 \\ 1 & -2 \end{vmatrix} \begin{vmatrix} 0 \\ 3 \end{vmatrix} \right)^{-1} = \frac{1}{7}$$

$$\bar{A}_3'' = -\frac{1}{7}\begin{vmatrix} 1 & 2 \end{vmatrix} \begin{vmatrix} -1 & 3 \\ 1 & -2 \end{vmatrix} = \begin{vmatrix} -\dfrac{1}{7} & \dfrac{1}{7} \end{vmatrix}$$

$$\bar{A}^{-1} = \begin{vmatrix} \dfrac{2}{7} & \dfrac{12}{7} & -\dfrac{9}{7} \\ \dfrac{1}{7} & -\dfrac{8}{7} & \dfrac{6}{7} \\ -\dfrac{1}{7} & \dfrac{1}{7} & \dfrac{1}{7} \end{vmatrix}$$
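The relations of Eq. (A.63) can be replayed on Example A.7 in plain Python; since $\bar{A}_4$ is a 1×1 block here, its inverse is a scalar division (all helper names are illustrative):

```python
# Inverse by partitioning, Eq. (A.63), for Example A.7 with
# A1 = [[2,3],[1,1]], A2 = [0,3]', A3 = [1,2], A4 = 4 (1x1 block).
def mat_inv2(m):
    d = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [[m[1][1] / d, -m[0][1] / d], [-m[1][0] / d, m[0][0] / d]]

A1 = [[2.0, 3.0], [1.0, 1.0]]
A2 = [0.0, 3.0]          # column block
A3 = [1.0, 2.0]          # row block
A4 = 4.0                 # scalar block

# A1'' = (A1 - A2 A4^-1 A3)^-1
S = [[A1[i][j] - A2[i] * A3[j] / A4 for j in range(2)] for i in range(2)]
B1 = mat_inv2(S)
# A2'' = -A1'' A2 A4^-1
B2 = [-(B1[i][0] * A2[0] + B1[i][1] * A2[1]) / A4 for i in range(2)]
# A4'' = (A4 - A3 A1^-1 A2)^-1 and A3'' = -A4'' A3 A1^-1
A1i = mat_inv2(A1)
t = [A3[0] * A1i[0][j] + A3[1] * A1i[1][j] for j in range(2)]  # A3 A1^-1
B4 = 1.0 / (A4 - (t[0] * A2[0] + t[1] * A2[1]))
B3 = [-B4 * tj for tj in t]
print(B1, B2, B3, B4)
```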

A.8 SOLUTION OF LARGE SIMULTANEOUS EQUATIONS

The application of matrices to the solution of large simultaneous equations constitutes one important application in power systems. Mostly, these are sparse equations with many coefficients equal to zero. A large power system may have more than 3000 simultaneous equations to be solved.

A.8.1 Consistent Equations

A system of equations is consistent if it has one or more solutions.

A.8.2 Inconsistent Equations

A system of equations that has no solution is called inconsistent; i.e., the following two equations are inconsistent:

$$x + 2y = 4$$
$$3x + 6y = 5$$

A.8.3 Test for Consistency and Inconsistency of Equations

Consider a system of $n$ linear equations:

$$\begin{aligned}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1 \\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &= b_2 \\
&\;\;\vdots \\
a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n &= b_n
\end{aligned} \tag{A.64}$$
Form an augmented matrix $\bar{C}$:

$$\bar{C} = \begin{vmatrix} \bar{A}, & \bar{B} \end{vmatrix} = \begin{vmatrix} a_{11} & a_{12} & \cdots & a_{1n} & b_1 \\ a_{21} & a_{22} & \cdots & a_{2n} & b_2 \\ \vdots & & & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} & b_n \end{vmatrix} \tag{A.65}$$
The following holds for the test of consistency and inconsistency:

. A unique solution of the equations exists if: rank of $\bar{A}$ = rank of $\bar{C}$ = $n$, where $n$ is the number of unknowns.
. There are infinite solutions to the set of equations if: rank of $\bar{A}$ = rank of $\bar{C}$ = $r$, with $r < n$.
. The equations are inconsistent if the rank of $\bar{A}$ is not equal to the rank of $\bar{C}$.
Example A.8

Show that the equations:

$$\begin{aligned}
2x + 6y &= -11 \\
6x + 20y - 6z &= -3 \\
6y - 18z &= -1
\end{aligned}$$

are inconsistent. The augmented matrix is

$$\bar{C} = \begin{vmatrix} \bar{A} & \bar{B} \end{vmatrix} = \begin{vmatrix} 2 & 6 & 0 & -11 \\ 6 & 20 & -6 & -3 \\ 0 & 6 & -18 & -1 \end{vmatrix}$$

It can be reduced by elementary row operations to the following matrix:

$$\begin{vmatrix} 2 & 6 & 0 & -11 \\ 0 & 2 & -6 & 30 \\ 0 & 0 & 0 & -91 \end{vmatrix}$$

The rank of $\bar{A}$ is 2 and that of $\bar{C}$ is 3. The equations are not consistent.
The equations (A.64) can be written as

$$\bar{A}\bar{x} = \bar{b} \tag{A.66}$$

where $\bar{A}$ is a square coefficient matrix, $\bar{b}$ is a vector of constants, and $\bar{x}$ is a vector of unknown terms. If $\bar{A}$ is nonsingular, the unknown vector $\bar{x}$ can be found by

$$\bar{x} = \bar{A}^{-1}\bar{b} \tag{A.67}$$

This requires calculation of the inverse of matrix $\bar{A}$. Large system equations are not solved by direct inversion, but by sparse matrix techniques.
Example A.9

This example illustrates the solution by transforming the coefficient matrix to an upper triangular form and then using backward substitution. The equations:

$$\begin{vmatrix} 1 & 4 & 6 \\ 2 & 6 & 3 \\ 5 & 3 & 1 \end{vmatrix} \begin{vmatrix} x_1 \\ x_2 \\ x_3 \end{vmatrix} = \begin{vmatrix} 2 \\ 1 \\ 5 \end{vmatrix}$$

can be solved by row manipulations on the augmented matrix, as follows:

$$\left| \begin{array}{ccc|c} 1 & 4 & 6 & 2 \\ 2 & 6 & 3 & 1 \\ 5 & 3 & 1 & 5 \end{array} \right|
\xrightarrow{R_2 - 2R_1}
\left| \begin{array}{ccc|c} 1 & 4 & 6 & 2 \\ 0 & -2 & -9 & -3 \\ 5 & 3 & 1 & 5 \end{array} \right|
\xrightarrow{R_3 - 5R_1}
\left| \begin{array}{ccc|c} 1 & 4 & 6 & 2 \\ 0 & -2 & -9 & -3 \\ 0 & -17 & -29 & -5 \end{array} \right|$$

$$\xrightarrow{R_3 - \tfrac{17}{2}R_2}
\left| \begin{array}{ccc|c} 1 & 4 & 6 & 2 \\ 0 & -2 & -9 & -3 \\ 0 & 0 & 47.5 & 20.5 \end{array} \right|$$

Thus,

$$47.5x_3 = 20.5$$
$$-2x_2 - 9x_3 = -3$$
$$x_1 + 4x_2 + 6x_3 = 2$$

which gives

$$\bar{x} = \begin{vmatrix} 1.179 \\ -0.442 \\ 0.432 \end{vmatrix}$$
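The backward-substitution stage of Example A.9 can be sketched as follows, starting from the triangularized augmented system obtained above:

```python
# Backward substitution on the triangularized system of Example A.9.
U = [[1.0, 4.0, 6.0], [0.0, -2.0, -9.0], [0.0, 0.0, 47.5]]
c = [2.0, -3.0, 20.5]
n = 3
x = [0.0] * n
for i in range(n - 1, -1, -1):
    # x_i = (c_i - sum of already-solved terms) / U_ii
    x[i] = (c[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
print([round(v, 3) for v in x])  # [1.179, -0.442, 0.432]
```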

A set of simultaneous equations can also be solved by partitioning:

$$\begin{vmatrix} a_{11} & \cdots & a_{1k} & a_{1m} & \cdots & a_{1n} \\ \vdots & & \vdots & \vdots & & \vdots \\ a_{k1} & \cdots & a_{kk} & a_{km} & \cdots & a_{kn} \\ a_{m1} & \cdots & a_{mk} & a_{mm} & \cdots & a_{mn} \\ \vdots & & \vdots & \vdots & & \vdots \\ a_{n1} & \cdots & a_{nk} & a_{nm} & \cdots & a_{nn} \end{vmatrix} \begin{vmatrix} x_1 \\ \vdots \\ x_k \\ x_m \\ \vdots \\ x_n \end{vmatrix} = \begin{vmatrix} b_1 \\ \vdots \\ b_k \\ b_m \\ \vdots \\ b_n \end{vmatrix} \tag{A.68}$$
Equation (A.68) is horizontally partitioned and rewritten as

$$\begin{vmatrix} \bar{A}_1 & \bar{A}_2 \\ \bar{A}_3 & \bar{A}_4 \end{vmatrix} \begin{vmatrix} \bar{X}_1 \\ \bar{X}_2 \end{vmatrix} = \begin{vmatrix} \bar{B}_1 \\ \bar{B}_2 \end{vmatrix} \tag{A.69}$$

Vectors $\bar{X}_1$ and $\bar{X}_2$ are given by

$$\bar{X}_1 = \left( \bar{A}_1 - \bar{A}_2\bar{A}_4^{-1}\bar{A}_3 \right)^{-1} \left( \bar{B}_1 - \bar{A}_2\bar{A}_4^{-1}\bar{B}_2 \right) \tag{A.70}$$

$$\bar{X}_2 = \bar{A}_4^{-1} \left( \bar{B}_2 - \bar{A}_3\bar{X}_1 \right) \tag{A.71}$$
A.9 CROUT'S TRANSFORMATION

A matrix can be resolved into the product of a lower triangular matrix $\bar{L}$ and a unit upper triangular matrix $\bar{U}$, i.e.,

$$\begin{vmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34} \\ a_{41} & a_{42} & a_{43} & a_{44} \end{vmatrix} = \begin{vmatrix} l_{11} & 0 & 0 & 0 \\ l_{21} & l_{22} & 0 & 0 \\ l_{31} & l_{32} & l_{33} & 0 \\ l_{41} & l_{42} & l_{43} & l_{44} \end{vmatrix} \begin{vmatrix} 1 & u_{12} & u_{13} & u_{14} \\ 0 & 1 & u_{23} & u_{24} \\ 0 & 0 & 1 & u_{34} \\ 0 & 0 & 0 & 1 \end{vmatrix} \tag{A.72}$$
The elements of $\bar{U}$ and $\bar{L}$ can be found by multiplication:

$$\begin{aligned}
l_{11} &= a_{11} \\
l_{21} &= a_{21} & l_{22} &= a_{22} - l_{21}u_{12} \\
l_{31} &= a_{31} & l_{32} &= a_{32} - l_{31}u_{12} & l_{33} &= a_{33} - l_{31}u_{13} - l_{32}u_{23} \\
l_{41} &= a_{41} & l_{42} &= a_{42} - l_{41}u_{12} & l_{43} &= a_{43} - l_{41}u_{13} - l_{42}u_{23} \\
& & & & l_{44} &= a_{44} - l_{41}u_{14} - l_{42}u_{24} - l_{43}u_{34}
\end{aligned} \tag{A.73}$$
and

$$\begin{aligned}
u_{12} &= a_{12}/l_{11} & u_{13} &= a_{13}/l_{11} & u_{14} &= a_{14}/l_{11} \\
u_{23} &= (a_{23} - l_{21}u_{13})/l_{22} & u_{24} &= (a_{24} - l_{21}u_{14})/l_{22} \\
u_{34} &= (a_{34} - l_{31}u_{14} - l_{32}u_{24})/l_{33}
\end{aligned} \tag{A.74}$$
In general, for $j = 1, \ldots, n$:

$$l_{ij} = a_{ij} - \sum_{k=1}^{j-1} l_{ik}u_{kj} \qquad i \geq j \tag{A.75}$$

$$u_{ij} = \frac{1}{l_{ii}} \left( a_{ij} - \sum_{k=1}^{i-1} l_{ik}u_{kj} \right) \qquad i < j \tag{A.76}$$
Example A.10

Transform the following matrix into LU form:

$$\begin{vmatrix} 1 & 2 & 1 & 0 \\ 0 & 3 & 3 & 1 \\ 2 & 0 & 2 & 0 \\ 1 & 0 & 0 & 2 \end{vmatrix}$$

From Eqs. (A.75) and (A.76):

$$\begin{vmatrix} 1 & 2 & 1 & 0 \\ 0 & 3 & 3 & 1 \\ 2 & 0 & 2 & 0 \\ 1 & 0 & 0 & 2 \end{vmatrix} = \begin{vmatrix} 1 & 0 & 0 & 0 \\ 0 & 3 & 0 & 0 \\ 2 & -4 & 4 & 0 \\ 1 & -2 & 1 & 2.33 \end{vmatrix} \begin{vmatrix} 1 & 2 & 1 & 0 \\ 0 & 1 & 1 & 0.33 \\ 0 & 0 & 1 & 0.33 \\ 0 & 0 & 0 & 1 \end{vmatrix}$$

The original matrix has been converted into a product of lower and upper triangular matrices.
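A compact implementation of Eqs. (A.75)–(A.76), checked against Example A.10 (`crout` is an illustrative name; 0.33 and 2.33 in the text are the rounded values of 1/3 and 7/3):

```python
# Crout decomposition per Eqs. (A.75)-(A.76): L lower triangular,
# U unit upper triangular.
def crout(a):
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    U = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for j in range(n):
        for i in range(j, n):       # column j of L, Eq. (A.75)
            L[i][j] = a[i][j] - sum(L[i][k] * U[k][j] for k in range(j))
        for i in range(j + 1, n):   # row j of U, Eq. (A.76)
            U[j][i] = (a[j][i] - sum(L[j][k] * U[k][i] for k in range(j))) / L[j][j]
    return L, U

A = [[1.0, 2.0, 1.0, 0.0],
     [0.0, 3.0, 3.0, 1.0],
     [2.0, 0.0, 2.0, 0.0],
     [1.0, 0.0, 0.0, 2.0]]
L, U = crout(A)
print(L)
print(U)
```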
A.10 GAUSSIAN ELIMINATION

Gaussian elimination provides a natural means to determine the LU pair. Consider

$$\begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} \begin{vmatrix} x_1 \\ x_2 \\ x_3 \end{vmatrix} = \begin{vmatrix} b_1 \\ b_2 \\ b_3 \end{vmatrix} \tag{A.77}$$

First, form an augmented matrix:

$$\begin{vmatrix} a_{11} & a_{12} & a_{13} & b_1 \\ a_{21} & a_{22} & a_{23} & b_2 \\ a_{31} & a_{32} & a_{33} & b_3 \end{vmatrix} \tag{A.78}$$

1. Divide the first row by $a_{11}$. This is the only operation to be carried out on this row. Thus, the new row is

$$\begin{vmatrix} 1 & a_{12}' & a_{13}' & b_1' \end{vmatrix} \qquad a_{12}' = a_{12}/a_{11}, \quad a_{13}' = a_{13}/a_{11}, \quad b_1' = b_1/a_{11} \tag{A.79}$$

This gives

$$l_{11} = a_{11}, \quad u_{11} = 1, \quad u_{12} = a_{12}', \quad u_{13} = a_{13}' \tag{A.80}$$
2. Multiply new row 1 by $a_{21}$ and subtract from row 2, so that $a_{21}$ becomes zero. Row 2 becomes

$$\begin{vmatrix} 0 & a_{22}' & a_{23}' & b_2' \end{vmatrix} \qquad a_{22}' = a_{22} - a_{21}a_{12}', \quad a_{23}' = a_{23} - a_{21}a_{13}', \quad b_2' = b_2 - a_{21}b_1' \tag{A.81}$$

Divide new row 2 by $a_{22}'$. Row 2 becomes

$$\begin{vmatrix} 0 & 1 & a_{23}'' & b_2'' \end{vmatrix} \qquad a_{23}'' = a_{23}'/a_{22}', \quad b_2'' = b_2'/a_{22}' \tag{A.82}$$

This gives

$$l_{21} = a_{21}, \quad l_{22} = a_{22}', \quad u_{22} = 1, \quad u_{23} = a_{23}'' \tag{A.83}$$
3. Multiply new row 1 by $a_{31}$ and subtract from row 3. Row 3 becomes

$$\begin{vmatrix} 0 & a_{32}' & a_{33}' & b_3' \end{vmatrix} \qquad a_{32}' = a_{32} - a_{31}a_{12}', \quad a_{33}' = a_{33} - a_{31}a_{13}', \quad b_3' = b_3 - a_{31}b_1' \tag{A.84}$$

Multiply new row 2 by $a_{32}'$ and subtract from row 3. This row now becomes

$$\begin{vmatrix} 0 & 0 & a_{33}'' & b_3'' \end{vmatrix} \qquad a_{33}'' = a_{33}' - a_{32}'a_{23}'', \quad b_3'' = b_3' - a_{32}'b_2'' \tag{A.85}$$
Divide new row 3 by $a_{33}''$. This gives

$$\begin{vmatrix} 0 & 0 & 1 & b_3''' \end{vmatrix} \qquad b_3''' = b_3''/a_{33}'' \tag{A.86}$$

From these relations:

$$l_{31} = a_{31}, \quad l_{32} = a_{32}', \quad l_{33} = a_{33}'', \quad u_{33} = 1 \tag{A.87}$$

Thus, all the elements of LU have been calculated, and the process of forward substitution has been implemented on vector $\bar{b}$.
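The elimination steps above, applied to the augmented matrix $[\bar{A} \mid \bar{b}]$, can be sketched for the system of Example A.9; the last row should reduce to $[0, 0, 1, b_3''']$:

```python
# Gaussian elimination on the augmented matrix [A | b], as in steps
# 1-3 above: A becomes unit upper triangular and b is forward-substituted.
def gauss_forward(a, b):
    n = len(a)
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for k in range(n):
        piv = m[k][k]
        m[k] = [v / piv for v in m[k]]        # divide row k by its pivot
        for i in range(k + 1, n):
            f = m[i][k]                       # multiplier l_ik
            m[i] = [v - f * w for v, w in zip(m[i], m[k])]
    return m

# the system of Example A.9
m = gauss_forward([[1.0, 4.0, 6.0], [2.0, 6.0, 3.0], [5.0, 3.0, 1.0]],
                  [2.0, 1.0, 5.0])
print([round(v, 3) for v in m[2]])  # row 3 reduced to [0, 0, 1, b'''_3]
```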
A.11 FORWARD–BACKWARD SUBSTITUTION METHOD

The set of sparse linear equations:

$$\bar{A}\bar{x} = \bar{b} \tag{A.88}$$

can be written as

$$\bar{L}\bar{U}\bar{x} = \bar{b} \tag{A.89}$$

or

$$\bar{L}\bar{y} = \bar{b} \tag{A.90}$$

where

$$\bar{y} = \bar{U}\bar{x} \tag{A.91}$$

$\bar{L}\bar{y} = \bar{b}$ is solved for $\bar{y}$ by forward substitution. Thus, $\bar{y}$ is known. Then $\bar{U}\bar{x} = \bar{y}$ is solved by backward substitution.

Solve $\bar{L}\bar{y} = \bar{b}$ by forward substitution:

$$\begin{vmatrix} l_{11} & 0 & 0 & 0 \\ l_{21} & l_{22} & 0 & 0 \\ l_{31} & l_{32} & l_{33} & 0 \\ l_{41} & l_{42} & l_{43} & l_{44} \end{vmatrix} \begin{vmatrix} y_1 \\ y_2 \\ y_3 \\ y_4 \end{vmatrix} = \begin{vmatrix} b_1 \\ b_2 \\ b_3 \\ b_4 \end{vmatrix} \tag{A.92}$$
Thus,

$$\begin{aligned}
y_1 &= b_1/l_{11} \\
y_2 &= (b_2 - l_{21}y_1)/l_{22} \\
y_3 &= (b_3 - l_{31}y_1 - l_{32}y_2)/l_{33} \\
y_4 &= (b_4 - l_{41}y_1 - l_{42}y_2 - l_{43}y_3)/l_{44}
\end{aligned} \tag{A.93}$$
Now solve $\bar{U}\bar{x} = \bar{y}$ by backward substitution:

$$\begin{vmatrix} 1 & u_{12} & u_{13} & u_{14} \\ 0 & 1 & u_{23} & u_{24} \\ 0 & 0 & 1 & u_{34} \\ 0 & 0 & 0 & 1 \end{vmatrix} \begin{vmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{vmatrix} = \begin{vmatrix} y_1 \\ y_2 \\ y_3 \\ y_4 \end{vmatrix} \tag{A.94}$$
Thus,

$$\begin{aligned}
x_4 &= y_4 \\
x_3 &= y_3 - u_{34}x_4 \\
x_2 &= y_2 - u_{23}x_3 - u_{24}x_4 \\
x_1 &= y_1 - u_{12}x_2 - u_{13}x_3 - u_{14}x_4
\end{aligned} \tag{A.95}$$
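Equations (A.93) and (A.95) translate directly into two short loops; here they are tested with the $\bar{L}$ and $\bar{U}$ factors found in Example A.10 (2.33 and 0.33 written exactly as 7/3 and 1/3):

```python
# Forward and backward substitution, Eqs. (A.93) and (A.95).
def forward(Lm, b):
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        y[i] = (b[i] - sum(Lm[i][j] * y[j] for j in range(i))) / Lm[i][i]
    return y

def backward(Um, y):
    n = len(y)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = y[i] - sum(Um[i][j] * x[j] for j in range(i + 1, n))
    return x

Lm = [[1.0, 0, 0, 0], [0.0, 3.0, 0, 0],
      [2.0, -4.0, 4.0, 0], [1.0, -2.0, 1.0, 7 / 3]]
Um = [[1.0, 2.0, 1.0, 0.0], [0, 1.0, 1.0, 1 / 3],
      [0, 0, 1.0, 1 / 3], [0, 0, 0, 1.0]]
b = [1.0, 2.0, 3.0, 4.0]
x = backward(Um, forward(Lm, b))
print(x)
```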
The forward–backward solution is generalized by the following equation:

$$\bar{A} = \bar{L}\bar{U} = (\bar{L}_d + \bar{L}_l)(\bar{I} + \bar{U}_u) \tag{A.96}$$

where $\bar{L}_d$ is the diagonal matrix, $\bar{L}_l$ is the strictly lower triangular matrix, $\bar{I}$ is the identity matrix, and $\bar{U}_u$ is the strictly upper triangular matrix.
Forward substitution becomes

$$\begin{aligned}
\bar{L}\bar{y} &= \bar{b} \\
(\bar{L}_d + \bar{L}_l)\bar{y} &= \bar{b} \\
\bar{L}_d\bar{y} &= \bar{b} - \bar{L}_l\bar{y} \\
\bar{y} &= \bar{L}_d^{-1}(\bar{b} - \bar{L}_l\bar{y})
\end{aligned} \tag{A.97}$$
i.e.,

$$\begin{vmatrix} y_1 \\ y_2 \\ y_3 \\ y_4 \end{vmatrix} = \begin{vmatrix} 1/l_{11} & 0 & 0 & 0 \\ 0 & 1/l_{22} & 0 & 0 \\ 0 & 0 & 1/l_{33} & 0 \\ 0 & 0 & 0 & 1/l_{44} \end{vmatrix} \times \left( \begin{vmatrix} b_1 \\ b_2 \\ b_3 \\ b_4 \end{vmatrix} - \begin{vmatrix} 0 & 0 & 0 & 0 \\ l_{21} & 0 & 0 & 0 \\ l_{31} & l_{32} & 0 & 0 \\ l_{41} & l_{42} & l_{43} & 0 \end{vmatrix} \begin{vmatrix} y_1 \\ y_2 \\ y_3 \\ y_4 \end{vmatrix} \right) \tag{A.98}$$
Backward substitution becomes

$$\begin{aligned}
(\bar{I} + \bar{U}_u)\bar{x} &= \bar{y} \\
\bar{x} &= \bar{y} - \bar{U}_u\bar{x}
\end{aligned} \tag{A.99}$$

i.e.,

$$\begin{vmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{vmatrix} = \begin{vmatrix} y_1 \\ y_2 \\ y_3 \\ y_4 \end{vmatrix} - \begin{vmatrix} 0 & u_{12} & u_{13} & u_{14} \\ 0 & 0 & u_{23} & u_{24} \\ 0 & 0 & 0 & u_{34} \\ 0 & 0 & 0 & 0 \end{vmatrix} \begin{vmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{vmatrix} \tag{A.100}$$
A.11.1 Bifactorization

A matrix can also be split into LU form by sequential operations on the columns and rows. The general equations of the bifactorization method are

$$\begin{aligned}
l_{ip} &= a_{ip}^{(p)} & &\text{for } i \geq p \\
u_{pj} &= \frac{a_{pj}^{(p)}}{a_{pp}^{(p)}} & &\text{for } j > p \\
a_{ij}^{(p+1)} &= a_{ij}^{(p)} - l_{ip}u_{pj} & &\text{for } i > p,\; j > p
\end{aligned} \tag{A.101}$$

Here, $p$ denotes the path or pass, and the superscript $(p)$ marks the elements as modified at the start of pass $p$. This will be illustrated with an example.
Example A.11

Consider the matrix:

$$\bar{A} = \begin{vmatrix} 1 & 2 & 1 & 0 \\ 0 & 3 & 3 & 1 \\ 2 & 0 & 2 & 0 \\ 1 & 0 & 0 & 2 \end{vmatrix}$$

It is required to convert it into LU form. This is the same matrix as in Example A.10. Add an identity matrix, which will ultimately be converted into the U matrix, while the $\bar{A}$ matrix will be converted into the L matrix:

$$\begin{vmatrix} 1 & 2 & 1 & 0 \\ 0 & 3 & 3 & 1 \\ 2 & 0 & 2 & 0 \\ 1 & 0 & 0 & 2 \end{vmatrix} \qquad \begin{vmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{vmatrix}$$

First step, $p = 1$: pivot column 1 becomes column 1 of the L matrix and pivot row 1 becomes row 1 of the U matrix, and the remaining elements of $\bar{A}$ are modified using Eq. (A.101), e.g.,

$$a_{32}^{(2)} = a_{32} - l_{31}u_{12} = 0 - 2 \times 2 = -4$$
$$a_{33}^{(2)} = a_{33} - l_{31}u_{13} = 2 - 2 \times 1 = 0$$

Second step, pivot column 2, $p = 2$: the modified column 2 becomes column 2 of L, row 2 divided by the pivot 3 becomes row 2 of U ($u_{23} = 1$, $u_{24} = 0.33$), and the remaining elements are modified again, e.g., $a_{34}^{(3)} = 0 - (-4)(0.33) = 1.33$ and $a_{44}^{(3)} = 2 - (-2)(0.33) = 2.66$.

Third step, pivot column 3, $p = 3$: this completes the factors:

$$\bar{L} = \begin{vmatrix} 1 & 0 & 0 & 0 \\ 0 & 3 & 0 & 0 \\ 2 & -4 & 4 & 0 \\ 1 & -2 & 1 & 2.33 \end{vmatrix} \qquad \bar{U} = \begin{vmatrix} 1 & 2 & 1 & 0 \\ 0 & 1 & 1 & 0.33 \\ 0 & 0 & 1 & 0.33 \\ 0 & 0 & 0 & 1 \end{vmatrix}$$

This is the same result as derived before in Example A.10.
A.12 LDU (PRODUCT FORM, CASCADE, OR CHOLESKI FORM)

The individual terms of L, D, and U can be found by direct multiplication. Again, consider a $4 \times 4$ matrix:

$$\begin{vmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34} \\ a_{41} & a_{42} & a_{43} & a_{44} \end{vmatrix} = \begin{vmatrix} 1 & 0 & 0 & 0 \\ l_{21} & 1 & 0 & 0 \\ l_{31} & l_{32} & 1 & 0 \\ l_{41} & l_{42} & l_{43} & 1 \end{vmatrix} \begin{vmatrix} d_{11} & 0 & 0 & 0 \\ 0 & d_{22} & 0 & 0 \\ 0 & 0 & d_{33} & 0 \\ 0 & 0 & 0 & d_{44} \end{vmatrix} \begin{vmatrix} 1 & u_{12} & u_{13} & u_{14} \\ 0 & 1 & u_{23} & u_{24} \\ 0 & 0 & 1 & u_{34} \\ 0 & 0 & 0 & 1 \end{vmatrix} \tag{A.102}$$
The following relations exist:

$$\begin{aligned}
d_{11} &= a_{11} \\
d_{22} &= a_{22} - l_{21}d_{11}u_{12} \\
d_{33} &= a_{33} - l_{31}d_{11}u_{13} - l_{32}d_{22}u_{23} \\
d_{44} &= a_{44} - l_{41}d_{11}u_{14} - l_{42}d_{22}u_{24} - l_{43}d_{33}u_{34} \\[4pt]
u_{12} &= a_{12}/d_{11} \qquad u_{13} = a_{13}/d_{11} \qquad u_{14} = a_{14}/d_{11} \\
u_{23} &= (a_{23} - l_{21}d_{11}u_{13})/d_{22} \\
u_{24} &= (a_{24} - l_{21}d_{11}u_{14})/d_{22} \\
u_{34} &= (a_{34} - l_{31}d_{11}u_{14} - l_{32}d_{22}u_{24})/d_{33} \\[4pt]
l_{21} &= a_{21}/d_{11} \qquad l_{31} = a_{31}/d_{11} \qquad l_{41} = a_{41}/d_{11} \\
l_{32} &= (a_{32} - l_{31}d_{11}u_{12})/d_{22} \\
l_{42} &= (a_{42} - l_{41}d_{11}u_{12})/d_{22} \\
l_{43} &= (a_{43} - l_{41}d_{11}u_{13} - l_{42}d_{22}u_{23})/d_{33}
\end{aligned} \tag{A.103}$$
In general:

$$d_{ii} = a_{ii} - \sum_{j=1}^{i-1} l_{ij}d_{jj}u_{ji} \qquad \text{for } i = 1, 2, \ldots, n$$

$$u_{ik} = \left( a_{ik} - \sum_{j=1}^{i-1} l_{ij}d_{jj}u_{jk} \right) \Big/ d_{ii} \qquad \text{for } k = i+1, \ldots, n; \quad i = 1, 2, \ldots, n$$

$$l_{ki} = \left( a_{ki} - \sum_{j=1}^{i-1} l_{kj}d_{jj}u_{ji} \right) \Big/ d_{ii} \qquad \text{for } k = i+1, \ldots, n; \quad i = 1, 2, \ldots, n \tag{A.104}$$
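Equation (A.104) can be sketched as a small routine and checked against Example A.12 below, where $\bar{D}$ should come out as diag(1, 3, 4, 7/3) (`ldu` is an illustrative name):

```python
# LDU factors via Eq. (A.104): unit lower L, diagonal d, unit upper U.
def ldu(a):
    n = len(a)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    d = [0.0] * n
    for i in range(n):
        d[i] = a[i][i] - sum(L[i][j] * d[j] * U[j][i] for j in range(i))
        for k in range(i + 1, n):
            U[i][k] = (a[i][k] - sum(L[i][j] * d[j] * U[j][k]
                                     for j in range(i))) / d[i]
            L[k][i] = (a[k][i] - sum(L[k][j] * d[j] * U[j][i]
                                     for j in range(i))) / d[i]
    return L, d, U

A = [[1.0, 2.0, 1.0, 0.0],
     [0.0, 3.0, 3.0, 1.0],
     [2.0, 0.0, 2.0, 0.0],
     [1.0, 0.0, 0.0, 2.0]]
L, d, U = ldu(A)
print(d)  # diagonal of D
```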
Another scheme is to consider $\bar{A}$ as a product of sequential lower and upper matrices, as follows:

$$\bar{A} = \bar{L}_1\bar{L}_2 \cdots \bar{L}_n \bar{U}_n \cdots \bar{U}_2\bar{U}_1 \tag{A.105}$$

$$\begin{vmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34} \\ a_{41} & a_{42} & a_{43} & a_{44} \end{vmatrix} = \begin{vmatrix} l_{11} & 0 & 0 & 0 \\ l_{21} & 1 & 0 & 0 \\ l_{31} & 0 & 1 & 0 \\ l_{41} & 0 & 0 & 1 \end{vmatrix} \begin{vmatrix} 1 & 0 & 0 & 0 \\ 0 & a_{22}^{(2)} & a_{23}^{(2)} & a_{24}^{(2)} \\ 0 & a_{32}^{(2)} & a_{33}^{(2)} & a_{34}^{(2)} \\ 0 & a_{42}^{(2)} & a_{43}^{(2)} & a_{44}^{(2)} \end{vmatrix} \begin{vmatrix} 1 & u_{12} & u_{13} & u_{14} \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{vmatrix} \tag{A.106}$$
Here the second-step elements are denoted by a superscript (2):

$$\begin{aligned}
l_{11} &= a_{11} \qquad l_{21} = a_{21} \qquad l_{31} = a_{31} \qquad l_{41} = a_{41} \\
u_{12} &= a_{12}/l_{11} \qquad u_{13} = a_{13}/l_{11} \qquad u_{14} = a_{14}/l_{11} \\
a_{ij}^{(2)} &= a_{ij} - l_{i1}u_{1j} \qquad i, j = 2, 3, 4
\end{aligned} \tag{A.107}$$

All elements correspond to step 1 unless indicated by the superscript (2).
All elements correspond to step 1, unless indicated by subscript 2.
In general for the kth step:
d
k
kk
a
k
kk
k 1. 2. . . . . n 1
l
k
ik
a
k
ik
,a
k
kk
u
kj
a
k
kj
,a
k
kk
a
k1
ij
a
k
ij
a
k
ik
a
k
kj
,a
k
kk
k 1. 2. . . . . n 1i. j k 1. . . . . n
A.108
Example A.12

Convert the matrix of Example A.10 into LDU form:

$$\begin{vmatrix} 1 & 2 & 1 & 0 \\ 0 & 3 & 3 & 1 \\ 2 & 0 & 2 & 0 \\ 1 & 0 & 0 & 2 \end{vmatrix} = \bar{L}_1\bar{L}_2\bar{L}_3 \,\bar{D}\, \bar{U}_3\bar{U}_2\bar{U}_1$$

The lower matrices are

$$\bar{L}_1\bar{L}_2\bar{L}_3 = \begin{vmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 2 & 0 & 1 & 0 \\ 1 & 0 & 0 & 1 \end{vmatrix} \begin{vmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & -\dfrac{4}{3} & 1 & 0 \\ 0 & -\dfrac{2}{3} & 0 & 1 \end{vmatrix} \begin{vmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & \dfrac{1}{4} & 1 \end{vmatrix}$$
The upper matrices are

$$\bar{U}_3\bar{U}_2\bar{U}_1 = \begin{vmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & \dfrac{1}{3} \\ 0 & 0 & 0 & 1 \end{vmatrix} \begin{vmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 1 & \dfrac{1}{3} \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{vmatrix} \begin{vmatrix} 1 & 2 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{vmatrix}$$

The matrix $\bar{D}$ is

$$\bar{D} = \begin{vmatrix} 1 & 0 & 0 & 0 \\ 0 & 3 & 0 & 0 \\ 0 & 0 & 4 & 0 \\ 0 & 0 & 0 & \dfrac{7}{3} \end{vmatrix}$$

Thus, the LDU form of the original matrix is

$$\begin{vmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 2 & -\dfrac{4}{3} & 1 & 0 \\ 1 & -\dfrac{2}{3} & \dfrac{1}{4} & 1 \end{vmatrix} \begin{vmatrix} 1 & 0 & 0 & 0 \\ 0 & 3 & 0 & 0 \\ 0 & 0 & 4 & 0 \\ 0 & 0 & 0 & \dfrac{7}{3} \end{vmatrix} \begin{vmatrix} 1 & 2 & 1 & 0 \\ 0 & 1 & 1 & \dfrac{1}{3} \\ 0 & 0 & 1 & \dfrac{1}{3} \\ 0 & 0 & 0 & 1 \end{vmatrix}$$

If the coefcient matrix is symmetrical (for a linear bilateral network), then


L U
t
A.109
Because
l
ip
new a
ip
,a
pp
u
pi
a
pi
,a
pp
a
ip
a
pi

A.110
The LU and LDU forms are extensively used in power systems.
