
Lecture 2  Direct Methods for Solving Linear Systems
§0 Introduction

Consider the system:

  a11 x1 + a12 x2 + … + a1n xn = b1
  a21 x1 + a22 x2 + … + a2n xn = b2
  …
  an1 x1 + an2 x2 + … + ann xn = bn      (1)

or, in matrix form, Ax = b (2), where A is an n-th order square matrix and x and b are n-dimensional vectors. By Cramer's rule, if det(A) ≠ 0, system (2) has the unique solution

  xj = Dj / D,    j = 1, 2, …, n

where D = det(A) and Dj is the determinant of the matrix obtained by substituting the j-th column of A with the right-hand side b.

There exist two classes of methods for solving linear systems: direct methods and iterative methods.

24/6/19 1
§2 Gaussian Elimination Method
2.1 The Idea of GEM

Eg. 2.1 Solve the following linear system of equations:

  x1 + 2x2 + 3x3 = 1
  2x1 + 7x2 + 5x3 = 6
  x1 + 4x2 + 9x3 = -3

Step 1: multiply the first equation by -2 and add it to the second equation; multiply the first equation by -1 and add it to the third equation:

  x1 + 2x2 + 3x3 = 1
  3x2 - x3 = 4
  2x2 + 6x3 = -4
Step 2: multiply the second equation by -2/3 and add it to the third equation:

  x1 + 2x2 + 3x3 = 1
  3x2 - x3 = 4
  (20/3) x3 = -20/3

Step 3: solve the resulting triangular system by backward substitution, obtaining x* = (2, 1, -1)^T.

The above process can be described in matrix form as

  [A : b] = [ 1 2 3 |  1 ]  r2 ← r2 - 2r1  [ 1 2 3  |  1 ]
            [ 2 7 5 |  6 ]  ------------→  [ 0 3 -1 |  4 ]
            [ 1 4 9 | -3 ]  r3 ← r3 - r1   [ 0 2 6  | -4 ]

            r3 ← r3 - (2/3) r2  [ 1 2 3    |   1   ]
            -----------------→  [ 0 3 -1   |   4   ]
                                [ 0 0 20/3 | -20/3 ]
2.2 GEM

Denote Ax = b as A^(1) x = b^(1), where A^(1) and b^(1) have entries aij^(1) and bi^(1), respectively. The goal of the first elimination step is to change the n-1 entries below a11^(1) into 0. We denote the resulting system by A^(2) x = b^(2):

  [ a11^(1) a12^(1) … a1n^(1) | b1^(1) ]  ri ← ri - li1 r1   [ a11^(1) a12^(1) … a1n^(1) | b1^(1) ]
  [ a21^(1) a22^(1) … a2n^(1) | b2^(1) ]  --------------→    [   0     a22^(2) … a2n^(2) | b2^(2) ]
  [   …       …          …    |   …    ]  (i = 2, 3, …, n)   [   …       …          …    |   …    ]
  [ an1^(1) an2^(1) … ann^(1) | bn^(1) ]                     [   0     an2^(2) … ann^(2) | bn^(2) ]
The first elimination step computes

  li1 = ai1^(1) / a11^(1)              i = 2, 3, …, n
  aij^(2) = aij^(1) - li1 a1j^(1)      i, j = 2, 3, …, n
  bi^(2) = bi^(1) - li1 b1^(1)         i = 2, 3, …, n

After k-1 eliminations (supposing akk^(k) ≠ 0), we have

  [A^(k) : b^(k)] = [ a11^(1) a12^(1) … a1k^(1) … a1n^(1) | b1^(1) ]
                    [         a22^(2) … a2k^(2) … a2n^(2) | b2^(2) ]
                    [                     …          …    |   …    ]
                    [                  akk^(k) …  akn^(k) | bk^(k) ]
                    [                     …          …    |   …    ]
                    [                  ank^(k) …  ann^(k) | bn^(k) ]
Like the first elimination step, the goal of the k-th step is to eliminate the entries below akk^(k). In detail, the operations are:

  lik = aik^(k) / akk^(k)              i = k+1, …, n   (multipliers)
  aij^(k+1) = aij^(k) - lik akj^(k)    i, j = k+1, k+2, …, n
  bi^(k+1) = bi^(k) - lik bk^(k)       i = k+1, k+2, …, n

If akk^(k) ≠ 0 (k = 1, 2, …, n-1), the process can continue. Finally, we obtain the matrix

  [A^(n) : b^(n)] = [ a11^(1) a12^(1) … a1n^(1) | b1^(1) ]
                    [         a22^(2) … a2n^(2) | b2^(2) ]
                    [                     …     |   …    ]
                    [                  ann^(n)  | bn^(n) ]
Because A^(n) is an upper triangular matrix, the linear system has been changed into a triangular system. If ann^(n) ≠ 0, we obtain the solution

  xn = bn^(n) / ann^(n)
  xi = ( bi^(i) - Σ_{j=i+1}^{n} aij^(i) xj ) / aii^(i)     i = n-1, n-2, …, 1

In summary, the algorithm of GEM is:

Step 1: elimination process
① set aij^(1) = aij, bi^(1) = bi (i, j = 1, 2, …, n)
② for k = 1 to n-1, if akk^(k) ≠ 0, do

  lik = aik^(k) / akk^(k)              i = k+1, k+2, …, n
  aij^(k+1) = aij^(k) - lik akj^(k)    i, j = k+1, k+2, …, n
  bi^(k+1) = bi^(k) - lik bk^(k)       i = k+1, k+2, …, n

Step 2: backward substitution (aii^(i) ≠ 0)

  xn = bn^(n) / ann^(n)
  xi = ( bi^(i) - Σ_{j=i+1}^{n} aij^(i) xj ) / aii^(i)     i = n-1, n-2, …, 1
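The two steps above translate directly into code. The following is a minimal Python sketch (the name `gauss_eliminate` is my own); it performs no pivoting, so every pivot akk^(k) is assumed nonzero, and it is applied here to the system of Eg. 2.1:

```python
def gauss_eliminate(A, b):
    """Plain Gaussian elimination (no pivoting) plus back substitution.

    A is a list of row lists and b a list; both are copied, not modified.
    Every pivot a_kk^(k) is assumed nonzero, as Step 1 requires.
    """
    n = len(b)
    A = [row[:] for row in A]
    b = b[:]
    # Step 1: for k = 1..n-1, zero the entries below the pivot a_kk.
    for k in range(n - 1):
        for i in range(k + 1, n):
            l = A[i][k] / A[k][k]          # multiplier l_ik
            for j in range(k, n):
                A[i][j] -= l * A[k][j]
            b[i] -= l * b[k]
    # Step 2: backward substitution on the upper triangular system.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

# The system of Eg. 2.1; the exact solution is x* = (2, 1, -1)^T.
x = gauss_eliminate([[1, 2, 3], [2, 7, 5], [1, 4, 9]], [1, 6, -3])
```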
2.3 A Condition on GEM

The previous algorithm assumes that the entry akk^(k) ≠ 0 in the k-th elimination step. In order to finish the GEM, the condition akk^(k) ≠ 0 must hold for all k. But we would like to decide, from A alone, whether the GEM can finish or not. Hence the theorem:

Theorem 2.1 GEM can be carried through iff all leading principal minors of A are nonzero.

The elimination requires 2(n-1)n(n+1)/6 + n(n-1) flops, plus n^2 flops for backward substitution. Therefore, about n^3/3 + 2n^2 flops are needed to solve the linear system using GEM.
§3 Gaussian Elimination with Pivoting

The GEM of the previous section may produce a solution with a large error when |akk^(k)| << 1, because the multipliers can become very large and the computational error is then greatly amplified.

Eg. 2.2 Solve the following linear system of equations using 3 significant digits:

  0.50 x1 + 1.1 x2 + 3.1 x3 = 6.0
  2.0 x1 + 4.5 x2 + 0.36 x3 = 0.020
  5.0 x1 + 0.96 x2 + 6.5 x3 = 0.96

Solution. (The accurate solution is x* = (-2.6, 1, 2)^T.)

Method I: normal GEM, with multipliers l21 = 4, l31 = 10, l32 = -100:

  [ 0.50 1.1  3.1  | 6.0   ]    [ 0.500 1.10  3.10  |  6.00 ]    [ 0.500 1.10  3.10  |  6.00 ]
  [ 2.0  4.5  0.36 | 0.020 ] →  [ 0     0.100 -12.0 | -24.0 ] →  [ 0     0.100 -12.0 | -24.0 ]
  [ 5.0  0.96 6.5  | 0.96  ]    [ 0     -10.0 -24.5 | -59.0 ]    [ 0     0     -1220 | -2460 ]

The computed solution is x = (-5.80, 2.40, 2.02)^T.
Method II: GEM with column pivoting, with multipliers l21 = 0.4, l31 = 0.1, l32 = 0.24272:

  [ 0.50 1.1  3.1  | 6.0   ]    [ 5.0  0.96 6.5  | 0.96  ]    [ 5.00 0.960 6.50  | 0.960  ]    [ 5.00 0.960 6.50  | 0.960  ]
  [ 2.0  4.5  0.36 | 0.020 ] →  [ 2.0  4.5  0.36 | 0.020 ] →  [ 0    4.12  -2.24 | -0.364 ] →  [ 0    4.12  -2.24 | -0.364 ]
  [ 5.0  0.96 6.5  | 0.96  ]    [ 0.50 1.1  3.1  | 6.0   ]    [ 0    1.00  2.45  | 5.90   ]    [ 0    0     2.99  | 5.99   ]

The computed solution is x = (-2.60, 1.00, 2.00)^T.

From this example, we can see that a simple exchange of rows can improve the precision of the algorithm.

1. GEM with Column Pivoting

Suppose that GEM with column pivoting has been done k-1 times (1 ≤ k ≤ n-1), so that the system Ax = b has become A^(k) x = b^(k):

  [A^(k) : b^(k)] = [ a11^(1) a12^(1)   …        …      | b1^(1) ]
                    [         a22^(2)   …        …      | b2^(2) ]
                    [                 akk^(k) … akn^(k) | bk^(k) ]
                    [                    …         …    |   …    ]
                    [                 ank^(k) … ann^(k) | bn^(k) ]

Before the k-th elimination, steps ① and ② must be done:

① determine ik such that |a_{ik,k}^(k)| = max_{k ≤ i ≤ n} |a_{i,k}^(k)|. If a_{ik,k}^(k) = 0, then det(A) = 0; that is, the system Ax = b does not satisfy the hypothesis of Cramer's rule.

② if ik ≠ k, exchange row ik and row k, namely

  akj^(k) ↔ a_{ik,j}^(k)   (k ≤ j ≤ n),    bk^(k) ↔ b_{ik}^(k)

After this process, we solve the resulting system using normal GEM.
Algorithm of GEM with column pivoting

1. Find the pivot element: select ik to satisfy |a_{ik,k}| = max_{k ≤ i ≤ n} |a_{i,k}|. If a_{ik,k} = 0, then A is a singular matrix; break.

2. If ik = k, go to step 3; else exchange rows:

  akj ↔ a_{ik,j},  bk ↔ b_{ik}     (k ≤ j ≤ n)

3. Elimination process (storing the multipliers in place of the eliminated entries):

  aik ← aik / akk                           i = k+1, k+2, …, n
  aij ← aij - aik akj,  bi ← bi - aik bk    i, j = k+1, k+2, …, n

4. If ann = 0, break.

5. Backward substitution (overwriting b with the solution):

  bn ← bn / ann
  bi ← ( bi - Σ_{j=i+1}^{n} aij bj ) / aii     i = n-1, n-2, …, 2, 1

6. Output (b1, b2, ···, bn)^T.

§4 Gauss-Jordan Elimination Method

GJEM is a modification of GEM. The elimination process in GEM changes the matrix A into a triangular matrix. If we also change all entries above the diagonal entries into 0 and change the diagonal entries into 1, we get the GJEM.

Suppose k-1 steps of GJEM have been conducted; then the system Ax = b has been changed into the system A^(k) x = b^(k):

  [A^(k) : b^(k)] = [ 1         a1k     …  a1n     | b1     ]
                    [    1       …           …     |  …     ]
                    [       1  a_{k-1,k} … a_{k-1,n} | b_{k-1} ]
                    [          akk     …  akn      | bk     ]
                    [           …           …      |  …     ]
                    [          ank     …  ann      | bn     ]

In the k-th elimination step, we eliminate all entries above and below the entry akk, and change akk into 1 (k = 1, 2, …, n):

① select the pivot: determine ik such that |a_{ik,k}| = max_{k ≤ i ≤ n} |a_{i,k}|

② if ik ≠ k, exchange row k and row ik:

  akj ↔ a_{ik,j}   (j = k, k+1, …, n),    bk ↔ b_{ik}
③ compute the multipliers

  lik = aik / akk     i = 1, 2, …, n, i ≠ k
  lkk = 1 / akk

④ elimination process

  aij ← aij - lik akj     (i = 1, 2, …, n and i ≠ k; j = k+1, …, n)
  bi ← bi - lik bk        (i = 1, 2, …, n and i ≠ k)

⑤ scale the pivot row

  akj ← akj lkk     (j = k, k+1, …, n)
  bk ← bk lkk
When k = n,

  [A : b] → [A^(n) : b^(n)] = [ 1          | b1 ]
                              [    1       | b2 ]
                              [       …    | …  ]
                              [          1 | bn ]

Obviously, xi = bi, i = 1, 2, …, n is the solution of the system Ax = b.

The elimination process of GJEM is more complex than that of GEM: the computational complexity of GJEM is n^3/2 (that of GEM is n^3/3). Usually, GJEM is used to obtain the inverse of a matrix.

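Steps ① to ⑤ can be sketched in Python as follows (a minimal illustration with my own names, applied to the system of Eg. 2.1; the right-hand side ends up holding the solution):

```python
def gauss_jordan(A, b):
    """Gauss-Jordan elimination with column pivoting.

    Each step scales the pivot row so a_kk becomes 1 and eliminates the
    entries both above and below the pivot; the final right-hand side is
    the solution itself, so no back substitution is needed.
    """
    n = len(b)
    A = [row[:] for row in A]
    b = b[:]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        if A[p][k] == 0:
            raise ValueError("matrix is singular")
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        piv = A[k][k]
        # Scale the pivot row so the diagonal entry becomes 1.
        for j in range(k, n):
            A[k][j] /= piv
        b[k] /= piv
        # Eliminate column k in every other row (above and below).
        for i in range(n):
            if i != k and A[i][k] != 0:
                l = A[i][k]
                for j in range(k, n):
                    A[i][j] -= l * A[k][j]
                b[i] -= l * b[k]
    return b

# The system of Eg. 2.1; the exact solution is (2, 1, -1).
x = gauss_jordan([[1, 2, 3], [2, 7, 5], [1, 4, 9]], [1, 6, -3])
```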
2.4.1 Computing the Inverse of a Matrix

Eg. 2.4 Suppose

  A = [  1  1  0 ]
      [ -2  2 -3 ]
      [ -1 -2  1 ]

Find A^{-1}.

Solution. Apply GJEM with column pivoting to the augmented matrix [A I]:

  [A I] = [  1  1  0 | 1 0 0 ]  r1 ↔ r2  [ -2  2 -3 | 0 1 0 ]
          [ -2  2 -3 | 0 1 0 ] -------→  [  1  1  0 | 1 0 0 ]
          [ -1 -2  1 | 0 0 1 ]           [ -1 -2  1 | 0 0 1 ]

  → [ 1 -1  3/2 | 0 -1/2 0 ]  r2 ↔ r3  [ 1 -1  3/2 | 0 -1/2 0 ]
    [ 0  2 -3/2 | 1  1/2 0 ] -------→  [ 0 -3  5/2 | 0 -1/2 1 ]
    [ 0 -3  5/2 | 0 -1/2 1 ]           [ 0  2 -3/2 | 1  1/2 0 ]

  → [ 1 0  2/3 | 0 -1/3 -1/3 ]    [ 1 0 0 | -4 -1 -3 ]
    [ 0 1 -5/6 | 0  1/6 -1/3 ] →  [ 0 1 0 |  5  1  3 ]
    [ 0 0  1/6 | 1  1/6  2/3 ]    [ 0 0 1 |  6  1  4 ]

As a consequence,

  A^{-1} = [ -4 -1 -3 ]
           [  5  1  3 ]
           [  6  1  4 ]

Modifying GJEM, we get an algorithm for the inverse of a matrix.
Algorithm (computing the inverse of a matrix using GJEM)

1. Input the augmented matrix [A I].

2. For k = 1, 2, ···, n do steps 3 to 6.

3. Find the pivot element: select ik such that |a_{ik,k}| = max_{k ≤ i ≤ n} |a_{i,k}|.

4. If a_{ik,k} = 0, then A is a singular matrix; break.

5. If ik = k, go to step 6; else exchange rows k and ik:

  akj ↔ a_{ik,j}     j = k, k+1, …, 2n

6. Elimination process:

  aik ← aik / akk         i = 1, 2, …, n, i ≠ k
  akk ← 1 / akk
  aij ← aij - aik akj     i = 1, 2, …, n, i ≠ k;  j = k+1, k+2, …, 2n
  akj ← akj akk           j = k+1, k+2, …, 2n

7. Output A^{-1}.
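A compact Python sketch of this algorithm (my own function name; it keeps the pivot row scaling and the two-sided elimination of the steps above, without the in-place multiplier bookkeeping), checked against the matrix of Eg. 2.4:

```python
def gj_inverse(A):
    """Invert a matrix by Gauss-Jordan elimination on [A | I]
    with column pivoting; raises if A is singular."""
    n = len(A)
    # Build the augmented matrix [A | I].
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        if M[p][k] == 0:
            raise ValueError("matrix is singular")
        M[k], M[p] = M[p], M[k]
        piv = M[k][k]
        # Scale the pivot row, then clear column k everywhere else.
        for j in range(k, 2 * n):
            M[k][j] /= piv
        for i in range(n):
            if i != k:
                l = M[i][k]
                for j in range(k, 2 * n):
                    M[i][j] -= l * M[k][j]
    # The right half of [I | A^{-1}] is the inverse.
    return [row[n:] for row in M]

# The matrix of Eg. 2.4.
Ainv = gj_inverse([[1, 1, 0], [-2, 2, -3], [-1, -2, 1]])
```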
§5 LU Factorization of a Matrix

Suppose A is an n-th order square matrix. If there exist a lower triangular matrix L and an upper triangular matrix U such that A = LU, we call LU a triangular factorization of A.

Note 1. The LU factorization of a matrix isn't unique. In fact, if A = LU, then for any nonsingular diagonal matrix D, A = (LD)(D^{-1}U) is also an LU factorization.

Note 2. To make the LU factorization unique, some restrictions are needed.

Doolittle's factorization: restrict L to be a lower triangular matrix with unit diagonal entries.

Crout's factorization: restrict U to be an upper triangular matrix with unit diagonal entries.
Theorem. Suppose all leading principal minors of an n-th order matrix A are nonzero; then the Doolittle factorization (Crout factorization) is unique.

Proof. Assume all leading principal minors of A are nonzero, and let A^(1) = A. The first step of GEM is equivalent to multiplying A^(1) on the left by an elementary matrix L1:

  L1 = [   1              ]
       [ -l21  1          ]      with li1 = ai1 / a11,  i = 2, 3, …, n
       [ -l31  0  1       ]
       [   …          …   ]
       [ -ln1  0  0  …  1 ]

that is, A^(2) = L1 A^(1), b^(2) = L1 b^(1). Similarly, A^(k+1) = Lk A^(k), b^(k+1) = Lk b^(k), where

  Lk = [ 1                        ]
       [    …                     ]
       [       1                  ]
       [      -l_{k+1,k}  1       ]
       [         …            …   ]
       [      -l_{n,k}        …  1 ]

After finishing the n-1 steps, we get A^(n); denote U = A^(n). It is obvious that U is an upper triangular matrix. The entire elimination process can be described as

  L_{n-1} L_{n-2} … L1 A = U
  A = (L_{n-1} L_{n-2} … L1)^{-1} U = L1^{-1} L2^{-1} … L_{n-1}^{-1} U

Set L = L1^{-1} L2^{-1} … L_{n-1}^{-1}; then A = LU. Obviously U is an upper triangular matrix, and L satisfies

  L = [ 1                          ]
      [ l21  1                     ]
      [ l31  l32  1                ]
      [ …    …         …           ]
      [ ln1  ln2  …  l_{n,n-1}  1  ]

That is, L is a lower triangular matrix with unit diagonal entries.
After an LU factorization, it is an easy job to solve the system Ax = b: it is equivalent to the two triangular systems

  Ly = b
  Ux = y

2.5.1 Doolittle's Factorization

For the Doolittle factorization of matrix A, we have

  [ a11 a12 … a1n ]   [ 1            ] [ u11 u12 … u1n ]
  [ a21 a22 … a2n ] = [ l21 1        ] [     u22 … u2n ]
  [  …   …      …  ]  [  …       …   ] [          …    ]
  [ an1 an2 … ann ]   [ ln1 ln2 …  1 ] [           unn ]
According to the multiplication rule of matrices:

① Because a1j = u1j, we have u1j = a1j, j = 1, 2, …, n.

② Because ai1 = li1 u11, we have li1 = ai1 / u11, i = 2, 3, …, n.

Assume that we have obtained the first k-1 rows of U and the first k-1 columns of L (1 ≤ k ≤ n). For the k-th step:

③ Because akj = Σ_{r=1}^{n} lkr urj, and lkr = 0 when r > k with lkk = 1, we have

  ukj = akj - Σ_{r=1}^{k-1} lkr urj     j = k, k+1, …, n

④ According to aik = Σ_{r=1}^{n} lir urk, and urk = 0 when r > k, with ukk now a known value,

  lik = ( aik - Σ_{r=1}^{k-1} lir urk ) / ukk     i = k+1, k+2, …, n

⑤ Solve L y = b:

  y1 = b1
  yk = bk - Σ_{r=1}^{k-1} lkr yr     k = 2, 3, …, n

⑥ Solve U x = y:

  xn = yn / unn
  xk = ( yk - Σ_{r=k+1}^{n} ukr xr ) / ukk     k = n-1, …, 2, 1

The Doolittle factorization method has the same efficiency as GEM.
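Formulas ③ and ④ give the factorization directly; a minimal Python sketch (my own name `doolittle`, assuming all leading principal minors are nonzero), exercised on the 4×4 matrix of Eg. 2.5 later in this section:

```python
def doolittle(A):
    """Doolittle factorization A = L U, with unit diagonal in L.

    Implements formulas (3) and (4) row/column by row/column; all
    leading principal minors of A are assumed nonzero.
    """
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for k in range(n):
        # Row k of U: u_kj = a_kj - sum_{r<k} l_kr u_rj.
        for j in range(k, n):
            U[k][j] = A[k][j] - sum(L[k][r] * U[r][j] for r in range(k))
        # Column k of L: l_ik = (a_ik - sum_{r<k} l_ir u_rk) / u_kk.
        for i in range(k + 1, n):
            L[i][k] = (A[i][k]
                       - sum(L[i][r] * U[r][k] for r in range(k))) / U[k][k]
    return L, U

# The 4x4 matrix of Eg. 2.5.
A = [[1, 2, 3, 4], [1, 4, 9, 16], [1, 8, 27, 64], [1, 16, 81, 256]]
L, U = doolittle(A)
```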
2.5.2 Crout's Factorization

  [ a11 a12 … a1n ]   [ l11             ] [ 1 u12 … u1n ]
  [ a21 a22 … a2n ] = [ l21 l22         ] [   1   … u2n ]
  [  …   …      …  ]  [  …          …   ] [        …    ]
  [ an1 an2 … ann ]   [ ln1 ln2 …  lnn  ] [           1 ]

According to matrix multiplication:

① Because ai1 = li1, we have li1 = ai1, i = 1, 2, …, n.

② Because a1j = l11 u1j, we have u1j = a1j / l11, j = 2, 3, …, n.

Suppose we have finished the first k-1 columns of L and the first k-1 rows of U (1 ≤ k ≤ n).

③ Because aik = Σ_{r=1}^{n} lir urk, and urk = 0 when r > k with ukk = 1, we have

  lik = aik - Σ_{r=1}^{k-1} lir urk     i = k, k+1, …, n

④ Because akj = Σ_{r=1}^{n} lkr urj, and lkr = 0 when r > k, with lkk a known value,

  ukj = ( akj - Σ_{r=1}^{k-1} lkr urj ) / lkk     j = k+1, k+2, …, n
⑤ Solve L y = b:

  y1 = b1 / l11
  yk = ( bk - Σ_{r=1}^{k-1} lkr yr ) / lkk     k = 2, 3, …, n

⑥ Solve U x = y:

  xn = yn
  xk = yk - Σ_{r=k+1}^{n} ukr xr     k = n-1, …, 2, 1
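The Crout version mirrors the Doolittle sketch with the unit diagonal moved to U (a minimal illustration with my own name `crout`, under the same nonzero-minor assumption):

```python
def crout(A):
    """Crout factorization A = L U, with unit diagonal in U,
    following formulas (3) and (4) above."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for k in range(n):
        # Column k of L: l_ik = a_ik - sum_{r<k} l_ir u_rk.
        for i in range(k, n):
            L[i][k] = A[i][k] - sum(L[i][r] * U[r][k] for r in range(k))
        # Row k of U: u_kj = (a_kj - sum_{r<k} l_kr u_rj) / l_kk.
        for j in range(k + 1, n):
            U[k][j] = (A[k][j]
                       - sum(L[k][r] * U[r][j] for r in range(k))) / L[k][k]
    return L, U

A = [[1, 2, 3, 4], [1, 4, 9, 16], [1, 8, 27, 64], [1, 16, 81, 256]]
L, U = crout(A)
```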
Finding the [U] matrix

Using the forward elimination procedure of Gauss elimination on

  A = [ 25  5  1 ]
      [ 64  8  1 ]
      [ 144 12 1 ]

Step 1: 64/25 = 2.56; Row2 ← Row2 - 2.56·Row1:

  [ 25  5    1     ]
  [ 0  -4.8 -1.56  ]
  [ 144 12   1     ]

  144/25 = 5.76; Row3 ← Row3 - 5.76·Row1:

  [ 25  5     1     ]
  [ 0  -4.8  -1.56  ]
  [ 0  -16.8 -4.76  ]

Step 2: -16.8/-4.8 = 3.5; Row3 ← Row3 - 3.5·Row2:

  U = [ 25  5    1    ]
      [ 0  -4.8 -1.56 ]
      [ 0   0    0.7  ]

(Example from http://numericalmethods.eng.usf.edu)
Finding the [L] matrix

  L = [ 1    0   0 ]
      [ l21  1   0 ]
      [ l31 l32  1 ]

Using the multipliers from the forward elimination procedure:

From the first step of forward elimination:

  l21 = a21/a11 = 64/25 = 2.56
  l31 = a31/a11 = 144/25 = 5.76

From the second step of forward elimination:

  l32 = a32/a22 = -16.8/-4.8 = 3.5

  L = [ 1    0   0 ]
      [ 2.56 1   0 ]
      [ 5.76 3.5 1 ]

Does [L][U] = [A]?

  [L][U] = [ 1    0   0 ] [ 25  5    1    ]
           [ 2.56 1   0 ] [ 0  -4.8 -1.56 ] = ?
           [ 5.76 3.5 1 ] [ 0   0    0.7  ]
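The check [L][U] = [A] can be done numerically; a small sketch (plain triple-loop product, with the L and U values from the slides above):

```python
def matmul(X, Y):
    """Plain triple-loop product of two square matrices."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

L = [[1, 0, 0], [2.56, 1, 0], [5.76, 3.5, 1]]
U = [[25, 5, 1], [0, -4.8, -1.56], [0, 0, 0.7]]
A = matmul(L, U)   # should reproduce [[25, 5, 1], [64, 8, 1], [144, 12, 1]]
```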
Using LU Decomposition to solve SLEs

Solve the following set of linear equations using LU decomposition:

  [ 25  5  1 ] [ x1 ]   [ 106.8 ]
  [ 64  8  1 ] [ x2 ] = [ 177.2 ]
  [ 144 12 1 ] [ x3 ]   [ 279.2 ]

Using the procedure for finding the [L] and [U] matrices:

  A = LU = [ 1    0   0 ] [ 25  5    1    ]
           [ 2.56 1   0 ] [ 0  -4.8 -1.56 ]
           [ 5.76 3.5 1 ] [ 0   0    0.7  ]
Example

Set [L][Z] = [C]:

  [ 1    0   0 ] [ z1 ]   [ 106.8 ]
  [ 2.56 1   0 ] [ z2 ] = [ 177.2 ]
  [ 5.76 3.5 1 ] [ z3 ]   [ 279.2 ]

Solve for [Z]:

  z1 = 106.8
  2.56 z1 + z2 = 177.2
  5.76 z1 + 3.5 z2 + z3 = 279.2

Complete the forward substitution:

  z1 = 106.8
  z2 = 177.2 - 2.56(106.8) = -96.21
  z3 = 279.2 - 5.76(106.8) - 3.5(-96.21) = 0.76

  Z = (106.8, -96.21, 0.76)^T
Example

Set [U][X] = [Z]:

  [ 25  5    1    ] [ x1 ]   [ 106.8  ]
  [ 0  -4.8 -1.56 ] [ x2 ] = [ -96.21 ]
  [ 0   0    0.7  ] [ x3 ]   [ 0.76   ]

Solve for [X]. The 3 equations become

  25 a1 + 5 a2 + a3 = 106.8
  -4.8 a2 - 1.56 a3 = -96.21
  0.7 a3 = 0.76

From the 3rd equation:

  a3 = 0.76 / 0.7 = 1.086

Substituting a3 into the second equation:

  a2 = (-96.21 + 1.56 a3) / (-4.8) = (-96.21 + 1.56(1.086)) / (-4.8) = 19.69

Substituting a3 and a2 into the first equation:

  a1 = (106.8 - 5 a2 - a3) / 25 = (106.8 - 5(19.69) - 1.086) / 25 = 0.2905

Hence the solution vector is

  [ a1 ]   [ 0.2905 ]
  [ a2 ] = [ 19.69  ]
  [ a3 ]   [ 1.086  ]
Eg. 2.5 Solve the following system using Doolittle's method: Ax = b, where

  A = [ 1  2  3   4  ]        [ 4  ]
      [ 1  4  9   16 ]    b = [ 10 ]
      [ 1  8  27  64 ]        [ 28 ]
      [ 1  16 81 256 ]        [ 82 ]
Solution. Computing L and U in compact (in-place) form, stage by stage:

  [ 1 2  3  4   ]      [ 1 2  3  4   ]      [ 1 2 3 4   ]      [ 1 2 3 4  ]
  [ 1 4  9  16  ] (1)  [ 1 2  6  12  ] (2)  [ 1 2 6 12  ] (3)  [ 1 2 6 12 ]
  [ 1 8  27 64  ] --→  [ 1 3  27 64  ] --→  [ 1 3 6 24  ] --→  [ 1 3 6 24 ]
  [ 1 16 81 256 ]      [ 1 7  81 256 ]      [ 1 7 6 256 ]      [ 1 7 6 24 ]

That is,

  L = [ 1       ]      U = [ 1 2 3 4  ]
      [ 1 1     ]          [   2 6 12 ]
      [ 1 3 1   ]          [     6 24 ]
      [ 1 7 6 1 ]          [       24 ]
To solve Ly = b:

  [ 1       ] [ y1 ]   [ 4  ]               [ y1 ]   [ 4 ]
  [ 1 1     ] [ y2 ] = [ 10 ] ,  we have    [ y2 ] = [ 6 ]
  [ 1 3 1   ] [ y3 ]   [ 28 ]               [ y3 ]   [ 6 ]
  [ 1 7 6 1 ] [ y4 ]   [ 82 ]               [ y4 ]   [ 0 ]

Then, we solve Ux = y:

  [ 1 2 3 4  ] [ x1 ]   [ 4 ]               [ x1 ]   [ 1 ]
  [   2 6 12 ] [ x2 ] = [ 6 ] ,  we have    [ x2 ] = [ 0 ]
  [     6 24 ] [ x3 ]   [ 6 ]               [ x3 ]   [ 1 ]
  [       24 ] [ x4 ]   [ 0 ]               [ x4 ]   [ 0 ]

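The two triangular solves above can be sketched as follows (my own name `lu_solve`, using the L and U found in Eg. 2.5):

```python
def lu_solve(L, U, b):
    """Solve A x = b given A = L U: forward solve L y = b, then backward
    solve U x = y.  L is assumed to have a unit diagonal (Doolittle form).
    """
    n = len(b)
    # Forward substitution: y_k = b_k - sum_{r<k} l_kr y_r.
    y = [0.0] * n
    for k in range(n):
        y[k] = b[k] - sum(L[k][r] * y[r] for r in range(k))
    # Backward substitution: x_k = (y_k - sum_{r>k} u_kr x_r) / u_kk.
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (y[k] - sum(U[k][r] * x[r] for r in range(k + 1, n))) / U[k][k]
    return x

# The factors found in Eg. 2.5; the solution is (1, 0, 1, 0)^T.
L = [[1, 0, 0, 0], [1, 1, 0, 0], [1, 3, 1, 0], [1, 7, 6, 1]]
U = [[1, 2, 3, 4], [0, 2, 6, 12], [0, 0, 6, 24], [0, 0, 0, 24]]
x = lu_solve(L, U, [4, 10, 28, 82])
```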
§6 Cholesky's Method

If the matrix A is symmetric positive definite, we can simplify the LU factorization further and get a more efficient method.

2.6.1 LDU Factorization

Theorem. If all leading principal minors of A are nonzero, then the matrix can be factorized uniquely as A = LDU, where L is a unit lower triangular matrix, U is a unit upper triangular matrix, and D is a diagonal matrix.

Proof. According to the Doolittle factorization, A = L U*, with uii* ≠ 0, i = 1, 2, …, n. Let

  di = uii*,  i = 1, 2, …, n,    D = diag(d1, d2, …, dn)
  uij = uij* / di,    U = (uij)_{n×n}

Then U* = DU, so A = L U* = LDU.

So, we have

  Ax = b  ⟺  Lz = b,  Dy = z,  Ux = y
2.6.2 Cholesky's Factorization

Theorem. Suppose A is a symmetric positive definite matrix; then there exists a unique nonsingular lower triangular matrix L with positive diagonal entries, satisfying A = L L^T.

Proof. Because A is symmetric positive definite, all its leading principal minors are nonzero, so A = L1 D U1. By symmetry, A^T = U1^T D L1^T = A = L1 D U1. Because the LDU factorization is unique, U1^T = L1, i.e. U1 = L1^T. As a consequence, A = L1 D L1^T.

To show that all diagonal entries of the matrix D are positive: because |L1| = 1 ≠ 0, the linear system L1^T yi = ei has a unique solution yi ≠ 0, and

  yi^T A yi = yi^T L1 D L1^T yi = (L1^T yi)^T D (L1^T yi) = ei^T D ei = di

Because A is positive definite, this quadratic form is positive, so di > 0. Let

  D^{1/2} = diag(√d1, √d2, …, √dn)

Then

  A = L1 D L1^T = L1 D^{1/2} D^{1/2} L1^T = (L1 D^{1/2})(L1 D^{1/2})^T = L L^T

The factorization A = L L^T is unique because the computation process below is completely determined. We call this factorization the Cholesky factorization of A.
2.6.3 Cholesky's Factorization Method

Suppose

  [ a11 a12 … a1n ]   [ l11            ] [ l11 l21 … ln1 ]
  [ a21 a22 … a2n ] = [ l21 l22        ] [     l22 … ln2 ]
  [  …   …      …  ]  [  …         …   ] [          …    ]
  [ an1 an2 … ann ]   [ ln1 ln2 …  lnn ] [           lnn ]

According to the multiplication of matrices:

① Because a11 = l11 l11 and ai1 = li1 l11, we have l11 = √a11 and li1 = ai1 / l11.

Assume we have computed the first k-1 columns of L (1 ≤ k ≤ n). According to

  aik = Σ_{r=1}^{n} lir lkr     (i ≥ k)

and lkr = 0 when r > k, we have

  aik = Σ_{r=1}^{k-1} lir lkr + lik lkk     i = k, k+1, …, n

Hence

② lkk = ( akk - Σ_{r=1}^{k-1} lkr² )^{1/2}     k = 1, 2, …, n

③ lik = ( aik - Σ_{r=1}^{k-1} lir lkr ) / lkk     i = k+1, …, n

Ax = b is then changed into the pair Ly = b, L^T x = y.
④ Solve L y = b:

  y1 = b1 / l11
  yk = ( bk - Σ_{r=1}^{k-1} lkr yr ) / lkk     k = 2, 3, …, n

⑤ Solve L^T x = y:

  xn = yn / lnn
  xk = ( yk - Σ_{r=k+1}^{n} lrk xr ) / lkk     k = n-1, …, 2, 1

The computational complexity of Cholesky's method is about n³/6.
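Formulas ② and ③ can be sketched in a few lines of Python (my own name `cholesky`; the 3×3 matrix below is an illustrative symmetric positive definite example of mine, not from the slides):

```python
import math

def cholesky(A):
    """Cholesky factorization A = L L^T for a symmetric positive
    definite A; returns the lower triangular factor L."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for k in range(n):
        # Diagonal: l_kk = sqrt(a_kk - sum_{r<k} l_kr^2).
        L[k][k] = math.sqrt(A[k][k] - sum(L[k][r] ** 2 for r in range(k)))
        # Below the diagonal: l_ik = (a_ik - sum_{r<k} l_ir l_kr) / l_kk.
        for i in range(k + 1, n):
            L[i][k] = (A[i][k]
                       - sum(L[i][r] * L[k][r] for r in range(k))) / L[k][k]
    return L

# Illustrative SPD matrix; its exact factor is [[2,0,0],[1,2,0],[1,1,2]].
L = cholesky([[4.0, 2.0, 2.0], [2.0, 5.0, 3.0], [2.0, 3.0, 6.0]])
```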
§7 Norm and Condition Number

2.7.1 Norm

1. Norm of a Vector

Definition 1. If for every vector x in R^n there exists a corresponding real number ‖x‖ satisfying

  (1) ‖x‖ ≥ 0 for all x ∈ R^n, and ‖x‖ = 0 ⟺ x = 0
  (2) ‖kx‖ = |k| ‖x‖ for all x ∈ R^n, k ∈ R
  (3) ‖x + y‖ ≤ ‖x‖ + ‖y‖ for all x, y ∈ R^n

then the real number ‖x‖ is called the norm of the vector x.

Obvious examples are the absolute value of a real number, the modulus of a complex number, and the length of a space vector; the norm is an extension of the concept of the length of a vector.

Let x = (x1, x2, …, xn)^T. The common vector norms are:

  ‖x‖1 = Σ_{i=1}^{n} |xi|
  ‖x‖2 = ( Σ_{i=1}^{n} xi² )^{1/2}
  ‖x‖∞ = max_{1≤i≤n} |xi|

It is easy to verify that all three norms satisfy the three conditions of the norm definition.
Eg. 1 Let x = (1, 0.5, 0, -0.3)^T; find ‖x‖1, ‖x‖2, ‖x‖∞.

Solution.

  ‖x‖1 = 1 + 0.5 + 0 + 0.3 = 1.8
  ‖x‖2 = ( 1² + 0.5² + 0.3² )^{1/2} = √1.34 ≈ 1.1576
  ‖x‖∞ = 1
Definition 2. If for any two vectors x and y in R^n there exists a corresponding real number (x, y) satisfying:

  (x, x) ≥ 0 for all x ∈ R^n, and (x, x) = 0 ⟺ x = 0
  (x, y) = (y, x) for all x, y ∈ R^n
  (kx, y) = k (x, y) for all x, y ∈ R^n, k ∈ R
  (x, y + z) = (x, y) + (x, z) for all x, y, z ∈ R^n

then the real number (x, y) is called the inner product of the vectors x and y. For example,

  (x, y) = x1 y1 + x2 y2 + … + xn yn = Σ_{i=1}^{n} xi yi

2.7.2 Norm of Matrices

Definition 3. If for every n-th order matrix A there exists a corresponding real number ‖A‖ satisfying:

  ‖A‖ ≥ 0 for all A ∈ R^{n×n}, and ‖A‖ = 0 ⟺ A = 0
  ‖kA‖ = |k| ‖A‖ for all A ∈ R^{n×n}, k ∈ R
  ‖A + B‖ ≤ ‖A‖ + ‖B‖ for all A, B ∈ R^{n×n}
  ‖AB‖ ≤ ‖A‖ ‖B‖ for all A, B ∈ R^{n×n}

then ‖A‖ is called the norm of the matrix A.

Definition 4. We say that a matrix norm ‖A‖ and a vector norm ‖x‖ are compatible if

  ‖Ax‖ ≤ ‖A‖ ‖x‖

Definition 5.

  ‖A‖ = max_{x ∈ R^n, x ≠ 0} ‖Ax‖ / ‖x‖ = max_{x ∈ R^n, ‖x‖ = 1} ‖Ax‖

is a matrix norm, called the induced matrix norm.
The following norms are often used:

  ‖A‖1 = max_{1≤j≤n} Σ_{i=1}^{n} |aij|     (column sum norm)
  ‖A‖2 = √λ1     (λ1 is the spectral radius of A^T A)
  ‖A‖∞ = max_{1≤i≤n} Σ_{j=1}^{n} |aij|     (row sum norm)

Definition 6. If two norms ‖·‖α and ‖·‖β satisfy

  m ‖·‖α ≤ ‖·‖β ≤ M ‖·‖α

where m and M are positive constants, we say ‖·‖α is equivalent to ‖·‖β.

Theorem 1. (1) All norms on R^n are equivalent to each other. (2) All norms on R^{n×n} are equivalent to each other.
Definition 7. If a sequence of vectors {x^(k)} satisfies

  lim_{k→∞} xj^(k) = xj,    j = 1, 2, …, n

we say {x^(k)} converges to the vector x = (x1, x2, …, xn)^T, denoted lim_{k→∞} x^(k) = x.

Theorem 2. lim_{k→∞} x^(k) = x  ⟺  lim_{k→∞} ‖x^(k) - x‖ = 0

Definition 8. If a sequence of matrices {A^(k)} satisfies

  lim_{k→∞} aij^(k) = aij,    i, j = 1, 2, …, n

we say {A^(k)} converges to A = (aij)_{n×n}, denoted lim_{k→∞} A^(k) = A.

Theorem 3. lim_{k→∞} A^(k) = A  ⟺  lim_{k→∞} ‖A^(k) - A‖ = 0
2.7.3 Spectral Radius

Definition 9. Suppose λj (j = 1, 2, …, n) are the eigenvalues of A. We call

  ρ(A) = max_{1≤j≤n} |λj|

the spectral radius of A.

Theorem 4. For any square matrix A, ρ(A) ≤ ‖A‖.

Proof. Suppose λi is an eigenvalue of A and xi is a corresponding eigenvector, that is, A xi = λi xi. Passing to compatible norms, we have |λi| ‖xi‖ ≤ ‖A‖ ‖xi‖. Because xi ≠ 0, ‖xi‖ > 0, so |λi| ≤ ‖A‖, i = 1, 2, …, n. As a result, ρ(A) = max |λi| ≤ ‖A‖.
Theorem 5. lim_{k→∞} A^k = 0  ⟺  ρ(A) < 1

Eg. 2 Let

  A = [ -2  1  0 ]
      [  1 -2  1 ]
      [  0  1 -2 ]

Find ‖A‖1, ‖A‖2, ‖A‖∞, ρ(A).

Solution. Obviously, ‖A‖1 = 4, ‖A‖∞ = 4.

  A^T A = [  5 -4  1 ]
          [ -4  6 -4 ]
          [  1 -4  5 ]

  |λI - A^T A| = (λ - 4)(λ² - 12λ + 4) = 0
  λ1 = 4,  λ2,3 = 6 ± 4√2,    ‖A‖2 = √(6 + 4√2) ≈ 3.4142

  |λI - A| = (λ + 2)(λ² + 4λ + 2) = 0
  λ1 = -2,  λ2,3 = -2 ± √2
  ρ(A) = |-2 - √2| = 2 + √2 ≈ 3.4142

Theorem 6. If A is a real symmetric matrix, ρ(A) = ‖A‖2 holds.
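The column-sum and row-sum norms follow the formulas above directly; for ρ(A) the sketch below uses a simple power iteration as a stand-in for a proper eigensolver (my own choice, adequate for the small symmetric matrix of Eg. 2):

```python
def mat_norm1(A):      # column sum norm: max_j sum_i |a_ij|
    n = len(A)
    return max(sum(abs(A[i][j]) for i in range(n)) for j in range(n))

def mat_norm_inf(A):   # row sum norm: max_i sum_j |a_ij|
    return max(sum(abs(v) for v in row) for row in A)

def spectral_radius(A, iters=200):
    """Estimate rho(A) by power iteration with max-norm scaling
    (assumes a dominant eigenvalue, as in Eg. 2)."""
    n = len(A)
    x = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        lam = max(abs(v) for v in y)
        x = [v / lam for v in y]
    return lam

A = [[-2, 1, 0], [1, -2, 1], [0, 1, -2]]   # the matrix of Eg. 2
```

Since A is real symmetric, Theorem 6 gives ‖A‖2 = ρ(A) = 2 + √2 here.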
2.7.4 Condition Number of a Matrix and Error Analysis

In this section, we analyze the effect of errors in A and b on the solution of the system Ax = b.

1. Perturbation δb

Suppose there is a perturbation δb of b while the matrix A is accurate; we analyze the resulting error δx:

  A (x + δx) = b + δb
  Ax + A δx = b + δb

Because Ax = b, we get A δx = δb, and further δx = A^{-1} δb, so

  ‖δx‖ ≤ ‖A^{-1}‖ ‖δb‖

Because ‖b‖ = ‖Ax‖ ≤ ‖A‖ ‖x‖, we have ‖x‖ ≥ ‖b‖ / ‖A‖, and then

  ‖δx‖ / ‖x‖ ≤ ‖A‖ ‖A^{-1}‖ · ‖δb‖ / ‖b‖

Similarly, from x = A^{-1} b and A δx = δb, we have ‖δb‖ ≤ ‖A‖ ‖δx‖ and ‖x‖ = ‖A^{-1} b‖ ≤ ‖A^{-1}‖ ‖b‖, so

  ( 1 / (‖A‖ ‖A^{-1}‖) ) · ‖δb‖ / ‖b‖ ≤ ‖δx‖ / ‖x‖
Definition 10. If A is non-singular, then ‖A‖ × ‖A^{-1}‖ is called the condition number of A, denoted by Cond(A):

  Cond(A) = ‖A‖ ‖A^{-1}‖

When different norms are used, we get different condition numbers of the same matrix:

  Cond(A)1 = ‖A‖1 ‖A^{-1}‖1
  Cond(A)2 = ‖A‖2 ‖A^{-1}‖2
  Cond(A)∞ = ‖A‖∞ ‖A^{-1}‖∞

2. Perturbation δA

Suppose there is a perturbation δA, so that (A + δA)(x + δx) = b. We have

  ‖δx‖ / ‖x‖ ≤ [ Cond(A) ‖δA‖/‖A‖ ] / [ 1 - Cond(A) ‖δA‖/‖A‖ ]

where ‖δA‖ ‖A^{-1}‖ < 1.

3. Perturbations δA and δb

Suppose there are perturbations δA and δb, so that (A + δA)(x + δx) = b + δb. We have

  ‖δx‖ / ‖x‖ ≤ [ Cond(A) / ( 1 - Cond(A) ‖δA‖/‖A‖ ) ] · ( ‖δA‖/‖A‖ + ‖δb‖/‖b‖ )

where ‖δA‖ ‖A^{-1}‖ < 1.
Note that Cond(A) = ‖A‖ ‖A^{-1}‖ ≥ ‖A A^{-1}‖ = ‖I‖ = 1. When Cond(A) is close to 1, we say the system is well-conditioned. When Cond(A) >> 1, we say the system is ill-conditioned. An ill-conditioned system can produce a large error in the solution.
Eg. 3 The n-th order Hilbert matrix is

  Hn = [ 1        1/2      1/3  …  1/n      ]
       [ 1/2      1/3      1/4  …  1/(n+1)  ]
       [ …        …        …       …        ]
       [ 1/(n-1)  1/n      …       1/(2n-2) ]
       [ 1/n      1/(n+1)  …       1/(2n-1) ]

When n is large, this matrix is highly ill-conditioned.

Eg. 4 Let

  H2 = [ 1    1/2 ]
       [ 1/2  1/3 ]

Find cond2(H2), cond1(H2) and cond∞(H2).
Solution.

  |λI - H2| = λ² - (4/3)λ + 1/12 = 0,    λ = (4 ± √13)/6

Because H2 is symmetric, ‖H2‖2 = ρ(H2) = (4 + √13)/6.

  H2^{-1} = [  4 -6 ]
            [ -6 12 ] ,    |λI - H2^{-1}| = λ² - 16λ + 12 = 0,    λ = 8 ± 2√13

Thus ‖H2^{-1}‖2 = ρ(H2^{-1}) = 8 + 2√13, and

  cond2(H2) = ‖H2‖2 ‖H2^{-1}‖2 = ((4 + √13)/6)(8 + 2√13) ≈ 19.28

Since ‖H2‖1 = 3/2 and ‖H2^{-1}‖1 = 18, we have cond1(H2) = 27. Similarly, cond∞(H2) = 27.
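The 1-norm condition number of Eg. 4 can be checked directly from the definition, given the explicit inverse (a minimal sketch with my own helper name):

```python
def cond_1(A, Ainv):
    """Condition number in the 1-norm, Cond(A)_1 = ||A||_1 ||A^{-1}||_1
    (maximum absolute column sums)."""
    def n1(M):
        n = len(M)
        return max(sum(abs(M[i][j]) for i in range(n)) for j in range(n))
    return n1(A) * n1(Ainv)

H2 = [[1.0, 0.5], [0.5, 1.0 / 3.0]]
H2inv = [[4.0, -6.0], [-6.0, 12.0]]   # the exact inverse from Eg. 4
c1 = cond_1(H2, H2inv)                # 1.5 * 18 = 27
```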
