
Chapter 6

State Space
Analysis
EE4L005 DIGITAL CONTROL SYSTEMS
N. C. Sahoo
Introduction
• The conventional analysis methods of discrete-time
systems using z-transforms, transfer functions, block
diagrams, and signal flow graphs have so far been
discussed.
• The state space method has the following advantages:
(1) The state variable formulation is natural and convenient for computing solutions.
(2) It allows a unified representation of digital systems with various types of sampling schemes.
(3) It allows a unified representation of single-variable and multivariable systems.
(4) It can be used for nonlinear as well as time-varying systems.
• In state space analysis, a continuous-time system is represented by a set of first-order differential equations, called state equations.
• For discrete-time systems, the state equations are in the form of first-order difference equations.
• Since a sampled-data system contains both continuous and discrete data variables, its state equations generally consist of both first-order differential and difference equations.
Continuous-time System State Space Analysis: Brief Summary
• The LTI system's state space model is:
dx(t)/dt = ẋ(t) = A x(t) + B u(t)
c(t) = D x(t) + E u(t)
where
x(t) = [x_1(t)  x_2(t)  …  x_n(t)]ᵀ   (state vector)
u(t) = [u_1(t)  u_2(t)  …  u_p(t)]ᵀ   (input vector)
c(t) = [c_1(t)  c_2(t)  …  c_q(t)]ᵀ   (output vector)
State Transition Matrix φ(t):
• For a homogeneous state equation ẋ(t) = A x(t), the solution for t ≥ 0 with initial state x(0) is:
x(t) = φ(t) x(0)
where
φ(t) = L^{-1}[(sI - A)^{-1}]
φ(t) = e^{At} = I + At + (1/2!)A^2 t^2 + (1/3!)A^3 t^3 + …
• For a nonzero initial time t_0:
x(t) = φ(t - t_0) x(t_0)
State Transition Equation:
• For the nonhomogeneous state equation ẋ(t) = A x(t) + B u(t),
x(t) = L^{-1}[(sI - A)^{-1}] x(0) + L^{-1}[(sI - A)^{-1} B U(s)]
⇒ x(t) = φ(t) x(0) + ∫_0^t φ(t - τ) B u(τ) dτ ,   t ≥ 0
• For a nonzero initial time t_0:
x(t) = φ(t - t_0) x(t_0) + ∫_{t_0}^t φ(t - τ) B u(τ) dτ ,   t ≥ t_0
c(t) = D φ(t - t_0) x(t_0) + ∫_{t_0}^t D φ(t - τ) B u(τ) dτ + E u(t) ,   t ≥ t_0
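• As a quick numerical sketch (not part of the original slides), φ(t) = e^{At} can be evaluated with scipy.linalg.expm. The A matrix below is the one used in the worked example later in this chapter; t and x(0) are chosen arbitrarily.
```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])     # A matrix of the worked example
x0 = np.array([1.0, 0.0])        # initial state x(0), illustrative

t = 0.5
phi_t = expm(A * t)              # state transition matrix φ(t) = e^{At}
x_t = phi_t @ x0                 # zero-input response x(t) = φ(t) x(0)
print(phi_t)
print(x_t)
```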
State Equations for Discrete-Time Systems with S/H Devices
• The outputs of the ZOH are:
u_i(t) = u_i(kT) = e_i(kT) ,   kT ≤ t < (k+1)T ,   i = 1, 2, …, p
• Because of the ZOH,
u(τ) = u(kT)   for kT ≤ τ < (k+1)T
Thus,
x(t) = φ(t - t_0) x(t_0) + ∫_{t_0}^t φ(t - τ) B dτ u(kT) ,   kT ≤ t < (k+1)T
Setting t_0 = kT,
x(t) = φ(t - kT) x(kT) + ∫_{kT}^t φ(t - τ) B dτ u(kT) ,   kT ≤ t < (k+1)T
Let θ(t - kT) = ∫_{kT}^t φ(t - τ) B dτ . Then
x(t) = φ(t - kT) x(kT) + θ(t - kT) u(kT) ,   kT ≤ t < (k+1)T
• For numerical iteration, it is more convenient to describe x(t) only at the sampling instants, i.e.,
x[(k+1)T] = φ(T) x(kT) + θ(T) u(kT)
φ(T) = e^{AT} = I + AT + (1/2!)A^2 T^2 + (1/3!)A^3 T^3 + …
θ(T) = ∫_{kT}^{(k+1)T} φ[(k+1)T - τ] B dτ
By substituting m = τ - kT (so that dm = dτ), we get
θ(T) = ∫_0^T φ(T - m) B dm = ∫_0^T φ(T - τ) B dτ
where m has simply been renamed τ in the last step.
• This is a set of first-order difference equations, referred to as the discrete state equations of the sampled-data system.
• These state equations, however, describe the dynamics only at the sampling instants. All information between the sampling instants is lost.
• In a similar manner, the output equation is discretized by setting t = kT. Thus
c(kT) = D x(kT) + E u(kT)
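• A minimal sketch (not from the slides) of obtaining φ(T) and θ(T) numerically; scipy.signal.cont2discrete performs exactly this zero-order-hold discretization. The matrices reuse the worked example's A and B, and T is arbitrary.
```python
import numpy as np
from scipy.signal import cont2discrete

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
D = np.array([[1.0, 0.0]])       # output matrix (called D in these slides)
E = np.array([[0.0]])            # direct transmission term (called E here)
T = 0.1                          # sampling period, illustrative

# Zero-order-hold discretization: phi = φ(T) = e^{AT}, theta = θ(T) = ∫_0^T e^{Aτ} dτ B
phi, theta, _, _, _ = cont2discrete((A, B, D, E), T, method='zoh')
print(phi)
print(theta)
```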


• In certain situations, the sampling operation is not done with respect to time, and time is not the independent variable.
• In this situation, the discrete dynamic equations describe discrete "events" and are written as
x(k+1) = φ(1) x(k) + θ(1) u(k)
c(k) = D x(k) + E u(k)
which are obtained by setting T = 1. Another way of looking at this notation is that the sampling period T is normalized to one.
Digital Simulation and Approximation:
• Discrete state equations also result when an analog system is approximated by a discrete-time model.
• Let us consider the dynamic equations of a continuous-time system:
ẋ(t) = A x(t) + B u(t) ,   c(t) = D x(t) + E u(t)
• The digital approximation may be performed by setting t = kT and approximating the derivative of x(t) at t = kT as:
ẋ(t)|_{t=kT} ≈ {x[(k+1)T] - x(kT)} / T = A x(kT) + B u(kT)
⇒ x[(k+1)T] = (I + TA) x(kT) + TB u(kT)
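• A small sketch of this forward-difference approximation (illustrative matrices and sampling period); the update is exactly the (I + TA), TB relation above.
```python
import numpy as np

def euler_discretize(A, B, T):
    """Forward-difference approximation: x[(k+1)T] = (I + T*A) x(kT) + T*B u(kT)."""
    n = A.shape[0]
    return np.eye(n) + T * A, T * B

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # illustrative continuous-time A
B = np.array([[0.0], [1.0]])               # illustrative B
Ad, Bd = euler_discretize(A, B, T=0.1)
print(Ad)
print(Bd)
```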
State Equations of Systems with All-Digital Elements:
• When a digital system is composed of all-digital elements and the inputs and outputs of the system are all digital, the system may be described by the following discrete dynamic equations:
x(k+1) = A x(k) + B u(k)
c(k) = D x(k) + E u(k)
• Matrices A and B can be arbitrary and depend entirely on the characteristics of the digital system.
State Transition Equations
Three types of state equations:
System-I:   x[(k+1)T] = φ(T) x(kT) + θ(T) u(kT)
System-II:  x(k+1) = φ(1) x(k) + θ(1) u(k)
System-III: x(k+1) = A x(k) + B u(k)
• The solutions of all three types of discrete state equations are the state transition equations, which are all of similar form.
• The solutions of the first two types of state equations contain the state transition matrix φ(T) of the A matrix, which is nonsingular.
• Thus, the state transitions of these two types of systems can occur in both the forward and the backward direction.

• However, for the discrete system given by the third equation, the A matrix generally has no restrictions; therefore, for the state transition to occur in the backward direction, A must be nonsingular.
• In the following, the state transition equations for the first type of system,
x[(k+1)T] = φ(T) x(kT) + θ(T) u(kT),
are derived and then extended to the other two types of discrete systems.
(1) Recursive Method:
System-I: x[(k+1)T] = φ(T) x(kT) + θ(T) u(kT)
Recursion is the most straightforward method.
k = 0 :  x(T) = φ(T) x(0) + θ(T) u(0)
k = 1 :  x(2T) = φ(T) x(T) + θ(T) u(T)
               = φ(T) φ(T) x(0) + φ(T) θ(T) u(0) + θ(T) u(T)
               = φ(2T) x(0) + φ(T) θ(T) u(0) + θ(T) u(T)
Continuing this process,
x(NT) = φ(NT) x(0) + Σ_{i=0}^{N-1} φ[(N-i-1)T] θ(T) u(iT)
which is the solution.
• The solutions of the other two types of state equations are obtained similarly:
For System-II:
x(N) = φ(N) x(0) + Σ_{i=0}^{N-1} φ(N-i-1) θ(1) u(i)
For System-III:
x(N) = A^N x(0) + Σ_{i=0}^{N-1} A^{N-i-1} B u(i)
(2) z-Transform Method:
System-I: x[(k+1)T] = φ(T) x(kT) + θ(T) u(kT)
Taking the z-transform of both sides,
z X(z) - z x(0) = φ(T) X(z) + θ(T) U(z)
⇒ X(z) = [zI - φ(T)]^{-1} {z x(0) + θ(T) U(z)}
⇒ x(kT) = Z^{-1}{[zI - φ(T)]^{-1} z} x(0) + Z^{-1}{[zI - φ(T)]^{-1} θ(T) U(z)}
Now, Φ(z) = Σ_{k=0}^{∞} φ(kT) z^{-k}
⇒ [I - φ(T) z^{-1}] Φ(z) = I
⇒ Φ(z) = [zI - φ(T)]^{-1} z
Taking the inverse z-transform,
φ(kT) = Z^{-1}{[zI - φ(T)]^{-1} z}
By the real convolution theorem,
Z^{-1}{[zI - φ(T)]^{-1} θ(T) U(z)} = Σ_{i=0}^{k-1} φ[(k-i-1)T] θ(T) u(iT)
• Thus, the entire state transition equation is
x(kT) = φ(kT) x(0) + Σ_{i=0}^{k-1} φ[(k-i-1)T] θ(T) u(iT)
Example:
The dynamic equations of the linear process are:
[ẋ_1(t)]   [ 0   1] [x_1(t)]   [0]
[ẋ_2(t)] = [-2  -3] [x_2(t)] + [1] u(t)
c(t) = x_1(t)
u(t) = u(kT) = r(kT) ,   kT ≤ t < (k+1)T
A = [ 0   1]      B = [0]
    [-2  -3] ,        [1]
sI - A = [s   -1 ]       (sI - A)^{-1} = 1/(s^2 + 3s + 2) · [s+3   1]
         [2  s+3] ;                                         [-2    s]
φ(t) = L^{-1}[(sI - A)^{-1}] = [ 2e^{-t} - e^{-2t}       e^{-t} - e^{-2t} ]
                               [-2e^{-t} + 2e^{-2t}     -e^{-t} + 2e^{-2t}]
θ(T) = ∫_0^T φ(T - τ) B dτ = ∫_0^T [ e^{-(T-τ)} - e^{-2(T-τ)} ] dτ = [0.5 - e^{-T} + 0.5e^{-2T}]
                                   [-e^{-(T-τ)} + 2e^{-2(T-τ)}]      [e^{-T} - e^{-2T}         ]
Thus,
[x_1(k+1)]   [ 2e^{-T} - e^{-2T}       e^{-T} - e^{-2T} ] [x_1(k)]   [0.5 - e^{-T} + 0.5e^{-2T}]
[x_2(k+1)] = [-2e^{-T} + 2e^{-2T}     -e^{-T} + 2e^{-2T}] [x_2(k)] + [e^{-T} - e^{-2T}         ] u(k)
Therefore,
[x_1(N)]   [ 2e^{-NT} - e^{-2NT}       e^{-NT} - e^{-2NT} ] [x_1(0)]
[x_2(N)] = [-2e^{-NT} + 2e^{-2NT}     -e^{-NT} + 2e^{-2NT}] [x_2(0)]
           N-1  [ (1 - e^{-T}) e^{-(N-k-1)T} - 0.5(1 - e^{-2T}) e^{-2(N-k-1)T}]
         +  Σ   [-(1 - e^{-T}) e^{-(N-k-1)T} + (1 - e^{-2T}) e^{-2(N-k-1)T}   ] u(k)
           k=0
Relationship Between State Equations and Transfer Function
• Consider the MIMO discrete-time system given by
C(z) = G(z) U(z)
U(z) = [U_1(z)  U_2(z)  …  U_p(z)]ᵀ ,   C(z) = [C_1(z)  C_2(z)  …  C_q(z)]ᵀ
G(z) = [G_11(z)  G_12(z)  …  G_1p(z)]
       [G_21(z)  G_22(z)  …  G_2p(z)]
       [   ⋮        ⋮              ⋮ ]
       [G_q1(z)  G_q2(z)  …  G_qp(z)]
• Elements of G(z) are of the following form:
G_ij(z) = (b_m + b_{m-1} z^{-1} + … + b_0 z^{-m}) / (a_n + a_{n-1} z^{-1} + … + a_0 z^{-n})

• In state space,
x(k+1) = A x(k) + B u(k)
c(k) = D x(k) + E u(k)
Thus,
X(z) = (zI - A)^{-1} z x(0) + (zI - A)^{-1} B U(z)
C(z) = D(zI - A)^{-1} z x(0) + D(zI - A)^{-1} B U(z) + E U(z)
• For obtaining the transfer function, the initial state x(0) is assumed to be the null vector. Thus,
G(z) = D(zI - A)^{-1} B + E
• If the discrete system contains S/H devices, A and B have to be replaced by φ(T) and θ(T), respectively.

• The inverse z-transform of the transfer function matrix is called the impulse response matrix, or the weighting sequence matrix, g(kT):
g(kT) = D φ[(k-1)T] B + E δ(kT)
where δ(kT) is the unit pulse (equal to 1 at k = 0 and zero otherwise). Since φ[(k-1)T] is taken as 0 for k < 1,
g(kT) = E                   for k = 0
g(kT) = D φ[(k-1)T] B       for k ≥ 1
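• As a sketch (not part of the slides), G(z) = D(zI - A)^{-1}B + E can be obtained numerically with scipy.signal.ss2tf; the system below is illustrative, and the slides' D and E play the roles of scipy's C and D arguments.
```python
import numpy as np
from scipy.signal import ss2tf

# Illustrative all-digital system x(k+1) = A x(k) + B u(k), c(k) = D x(k) + E u(k)
A = np.array([[0.0, 1.0], [-3.0, -2.0]])
B = np.array([[0.0], [1.0]])
D = np.array([[1.0, 0.0]])
E = np.array([[0.0]])

# G(z) = D (zI - A)^{-1} B + E, returned as numerator/denominator polynomials in z
num, den = ss2tf(A, B, D, E)
print(num)   # numerator coefficients of G(z)
print(den)   # denominator = characteristic polynomial det(zI - A)
```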
Example:
Refer to the system in the previous example.
φ(T) = [ 2e^{-T} - e^{-2T}       e^{-T} - e^{-2T} ]
       [-2e^{-T} + 2e^{-2T}     -e^{-T} + 2e^{-2T}]
θ(T) = [0.5 - e^{-T} + 0.5e^{-2T}]
       [e^{-T} - e^{-2T}         ]
D = [1  0] and E = 0
Thus G(z) is obtained as:
G(z) = [(0.5 - e^{-T} + 0.5e^{-2T}) z + 0.5 e^{-T}(1 - e^{-T})^2] / [z^2 - (e^{-T} + e^{-2T}) z + e^{-3T}]
The characteristic equation of the system is
z^2 - (e^{-T} + e^{-2T}) z + e^{-3T} = 0
• G(z) can also be derived by another approach:
G_1(s) = D(sI - A)^{-1} B = C(s)/U(s) = 1/(s^2 + 3s + 2)
⇒ G(z) = C(z)/R(z) = (1 - z^{-1}) Z[ 1/(s(s+1)(s+2)) ]
This will give the same result as obtained earlier.
Eigenvalues and Eigenvectors
• The characteristic equation of a discrete-time system is defined from A {or φ(T)} of the system as
|zI - φ(T)| = 0
In the case of an all-digital system, the characteristic equation is
|zI - A| = 0
• The roots of the characteristic equation are defined as the eigenvalues of the matrix A {or φ(T)}.
• Important properties of the eigenvalues are:
1) If the coefficients of the characteristic equation are all real, the eigenvalues are either real or occur in complex-conjugate pairs.
2) The trace of A, defined as the sum of the elements on the main diagonal of A, is given by
tr(A) = z_1 + z_2 + … + z_n
where z_i, i = 1, 2, …, n are the eigenvalues of A.
3) The determinant of A is: |A| = z_1 z_2 … z_n
4) If A is nonsingular with eigenvalues z_i, i = 1, 2, …, n, then 1/z_i, i = 1, 2, …, n are the eigenvalues of A^{-1}.
5) If z_i is an eigenvalue of A, then it is also an eigenvalue of Aᵀ.
6) If A is a real symmetric matrix, then its eigenvalues are all real.

• The n×1 vector p_i that satisfies the equation
(z_i I - A) p_i = 0
where z_i, i = 1, 2, …, n are the eigenvalues of A, is called the eigenvector of A associated with the eigenvalue z_i.
• The eigenvectors are not unique.

Example:
The φ(T) and characteristic equation of the system in the previous example are:
φ(T) = [ 2e^{-T} - e^{-2T}       e^{-T} - e^{-2T} ]
       [-2e^{-T} + 2e^{-2T}     -e^{-T} + 2e^{-2T}]
z^2 - (e^{-T} + e^{-2T}) z + e^{-3T} = 0
• The eigenvalues of φ(T) for this system are:
z_1 = e^{-2T} ,   z_2 = e^{-T}
Let the corresponding eigenvectors be p_1 and p_2, respectively.
By definition: [z_i I - φ(T)] p_i = 0 ,   i = 1, 2
Let p_1 = [p_11]    and   p_2 = [p_12]
          [p_21]                [p_22]
Then
[z_1 - 2e^{-T} + e^{-2T}     -(e^{-T} - e^{-2T})   ] [p_11]   [0]
[2e^{-T} - 2e^{-2T}       z_1 + e^{-T} - 2e^{-2T}  ] [p_21] = [0]
and
[z_2 - 2e^{-T} + e^{-2T}     -(e^{-T} - e^{-2T})   ] [p_12]   [0]
[2e^{-T} - 2e^{-2T}       z_2 + e^{-T} - 2e^{-2T}  ] [p_22] = [0]
From the first equation, with z_1 = e^{-2T},
2(e^{-T} - e^{-2T}) p_11 + (e^{-T} - e^{-2T}) p_21 = 0
2(e^{-T} - e^{-2T}) p_11 + (e^{-T} - e^{-2T}) p_21 = 0
• These two equations are linearly dependent. So, an arbitrary value may be assigned to p_11 or p_21, and the other one solved for.
Let p_11 = 1. Then p_21 = -2.
• Note that p_11 should not be assigned the value zero, because in that case p_21 would also be zero, and an eigenvector cannot be a null vector.
• A similar procedure can be followed for the other eigenvector. The two eigenvectors are:
p_1 = [ 1]     p_2 = [ 1]
      [-2] ,         [-1]

Properties of Eigenvectors:
1) An eigenvector cannot be a null vector.
2) The rank of (z_i I - A), where z_i, i = 1, 2, …, n denotes a distinct eigenvalue of A, is n - 1.
3) The eigenvector p_i is given by any nonzero column of the matrix adj(z_i I - A), where z_i is a distinct eigenvalue of A.
4) If the matrix A has distinct eigenvalues, the set of eigenvectors is linearly independent.
5) If p_i is an eigenvector of A, then k p_i is also an eigenvector of A, where k is a nonzero scalar.
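• A numerical sketch (T chosen arbitrarily; not part of the slides) confirming the eigenvalues e^{-2T}, e^{-T} and the eigenvector directions [1, -2] and [1, -1] found in the example above:
```python
import numpy as np
from scipy.linalg import expm

T = 0.5
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
phi_T = expm(A * T)

# Eigenvalues/eigenvectors of φ(T); columns of vecs are (normalized) eigenvectors
vals, vecs = np.linalg.eig(phi_T)
print(vals)                        # compare with e^{-2T} and e^{-T}
print(np.exp(-2 * T), np.exp(-T))
print(vecs / vecs[0, :])           # rescale so the first entry is 1: columns ≈ [1, -2] and [1, -1]
```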

Eigenvectors of Multiple-Order Eigenvalues:
• If one or more eigenvalues of A are of multiple order, a full set of linearly independent eigenvectors may or may not exist.
• The number of linearly independent eigenvectors associated with an eigenvalue z_i of multiplicity m_i is equal to the degeneracy d_i of (z_i I - A).
• The degeneracy d_i of (z_i I - A) is defined as:
d_i = n - r_i
where n = dimension of A and r_i = rank of (z_i I - A).
• There are always d_i linearly independent eigenvectors associated with z_i. Moreover,
1 ≤ d_i ≤ m_i
• The procedure for determining the eigenvectors of a matrix A with multiple-order eigenvalues is given below.
Full degeneracy (d_i = m_i):
• For the eigenvalue z_i of multiplicity m_i, the fully degenerate case has a full set of m_i eigenvectors associated with z_i. These eigenvectors are found from the nonzero independent columns of
[1/(m_i - 1)!] · d^{m_i - 1}/dz^{m_i - 1} [adj(zI - A)]   evaluated at z = z_i
Simple degeneracy (d_i = 1):
• When the degeneracy of (z_i I - A) is equal to one (simple degeneracy), only one eigenvector is associated with z_i regardless of the multiplicity of z_i.
• In this case, the eigenvector can be determined using the same method as for the case of distinct eigenvalues.
• However, for the m_i-th order eigenvalue there are m_i - 1 additional vectors, called the generalized eigenvectors.
• These generalized eigenvectors p_i2, p_i3, …, p_im_i are found from the following m_i - 1 equations:
(z_i I - A) p_i2 = -p_i1
(z_i I - A) p_i3 = -p_i2
⋮
(z_i I - A) p_im_i = -p_i(m_i-1)
where p_i1 is the eigenvector of z_i determined by solving
(z_i I - A) p_i1 = 0
Example:
A = [ 1   2]
    [-2  -3]
The eigenvalues of A are: z_1 = z_2 = -1
Thus, the eigenvalue z_1 = -1 has a multiplicity of two.
To check the degeneracy of the matrix (z_1 I - A):
z_1 I - A = [-2  -2]   has a rank of one.
            [ 2   2]
Thus, the degeneracy of (z_1 I - A) is one.
This means only one independent eigenvector can be found for z_1, from the columns of adj(z_1 I - A) after taking out common factors.
adj(z_1 I - A) = [ 2   2]
                 [-2  -2]
Thus, the eigenvector of z_1 is: p_1 = [ 1]
                                       [-1]
For finding the generalized eigenvector,
(z_1 I - A) p_2 = -p_1   ⇒   [-2  -2] p_2 = [-1]
                             [ 2   2]       [ 1]
Solving the above equation for p_2,
p_2 = [0  ]
      [0.5]
Diagonalization of the “A” Matrix
• Solving the state equations of LTI systems is a simple matter if these equations are decoupled from each other, i.e., if the A matrix is diagonal.
• In general, if A has distinct eigenvalues, it can be diagonalized by a similarity transformation as follows.
x(k+1) = A x(k) + B u(k)
Let P be a nonsingular matrix such that
x(k) = P y(k)   ⇔   y(k) = P^{-1} x(k)
The transformed state equations are:
y(k+1) = Λ y(k) + Γ u(k)
where
Λ = [z_1  0    0   …  0  ]
    [0    z_2  0   …  0  ]
    [0    0    z_3 …  0  ]
    [⋮    ⋮    ⋮       ⋮  ]
    [0    0    0   …  z_n]
and z_1, z_2, …, z_n are the distinct eigenvalues of A.
• These decoupled state equations are known as the canonical form.
Λ = P^{-1} A P   and   Γ = P^{-1} B
• Several methods exist for the determination of P, given the matrix A.
• In fact, the columns of P are the eigenvectors of A.
Let p_i represent the eigenvector of A corresponding to z_i. Then,
P = [p_1  p_2  …  p_n]
Proof:
z_i p_i = A p_i
Thus,
[z_1 p_1  z_2 p_2  …  z_n p_n] = [A p_1  A p_2  …  A p_n] = AP
⇒ [p_1  p_2  …  p_n] Λ = P Λ = AP
⇒ Λ = P^{-1} A P
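• A numerical sketch of the similarity transformation (illustrative matrices): the eigenvector matrix P returned by numpy diagonalizes A when the eigenvalues are distinct.
```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # distinct eigenvalues -1 and -2
B = np.array([[0.0], [1.0]])

# Columns of P are eigenvectors of A
eigvals, P = np.linalg.eig(A)
P_inv = np.linalg.inv(P)

Lambda = P_inv @ A @ P      # ≈ diag(eigvals), up to round-off
Gamma = P_inv @ B
print(np.round(Lambda, 10))
print(Gamma)
```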
Jordan Canonical Form
• When A cannot be diagonalized, a similarity transformation still exists such that Λ = P^{-1} A P, where Λ is the Jordan canonical form, which is nearly a diagonal matrix.
For example, let us say A has eigenvalues z_1, z_2, z_3, z_3, and z_3 (the last three are identical).
Then the Jordan canonical form matrix Λ is given by
Λ = [z_1  0    0    0    0  ]
    [0    z_2  0    0    0  ]
    [0    0    z_3  1    0  ]
    [0    0    0    z_3  1  ]
    [0    0    0    0    z_3]
• A Jordan canonical form matrix has the following properties:
(i) Its diagonal elements are the eigenvalues of A.
(ii) All the elements below the principal diagonal are zeros.
(iii) Some of the elements immediately above the principal diagonal are ones.
(iv) The sub-matrices formed by each eigenvalue are called Jordan blocks.
• The matrix P is again formed using the eigenvectors of A, as done earlier. The eigenvectors associated with distinct eigenvalues are determined in the usual manner.
• Consider the j-th eigenvalue z_j of m-th order. The eigenvectors associated with its m-th order Jordan block are found using generalized eigenvectors, by referring to the structure of the corresponding Jordan block in the Jordan canonical form matrix:
A [p_1  p_2  …  p_m] = [p_1  p_2  …  p_m] Λ_j
where Λ_j is the m×m Jordan block associated with z_j. Thus,
z_j p_1 = A p_1
p_1 + z_j p_2 = A p_2
p_2 + z_j p_3 = A p_3
⋮
p_{m-1} + z_j p_m = A p_m
After rearrangement, the equations become
(z_j I - A) p_1 = 0
(z_j I - A) p_2 = -p_1
⋮
(z_j I - A) p_m = -p_{m-1}
• The eigenvector p_1 and the generalized eigenvectors p_2, …, p_m are determined from the above equations.
Computation of State Transition Matrix
• It is important to emphasize the difference between the formulations of the state transition matrix under two different conditions.
For the sampled-data system
ẋ(t) = A x(t) + B u(t)
where u(t) = u(kT) for kT ≤ t < (k+1)T,
• the discretized equations are expressed as:
x[(k+1)T] = φ(T) x(kT) + θ(T) u(kT)
where φ(T) is the state transition matrix of A:
φ(T) = e^{AT} = φ(t)|_{t=T}
φ(NT) = φ(T) φ(T) … φ(T)   (product of N terms)
• The state equations for a digital control system are expressed as:
x(k+1) = A x(k) + B u(k)
The state transition matrix is defined as:
φ(N) = A^N = A·A·A … A   (product of N terms)

Cayley-Hamilton Theorem Method:


• The theorem states that every square matrix satisfies its own characteristic equation.
For example, consider the following characteristic equation of A:
z^n + a_{n-1} z^{n-1} + … + a_1 z + a_0 = 0
Then,
A^n + a_{n-1} A^{n-1} + … + a_1 A + a_0 I = 0
Thus,
A^n = -(a_{n-1} A^{n-1} + … + a_1 A + a_0 I)
• A similar equation holds for φ(T).
• The crux of this theorem is that A^N, for N ≥ n, can be expressed as an algebraic sum of A^{N-1}, A^{N-2}, … . By applying the relation repeatedly, A^N can eventually be expressed in terms of I, A, …, A^{n-1}.
Example:
A = [3  2]
    [2  3]
Characteristic equation: |zI - A| = z^2 - 6z + 5 = 0
Applying the Cayley-Hamilton theorem,
A^2 = 6A - 5I
⇒ A^3 = 6A^2 - 5A = 31A - 30I
⇒ A^4 = 6A^3 - 5A^2 = 156A - 155I
and so on.
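• A quick numerical check of these reductions (sketch):
```python
import numpy as np

A = np.array([[3.0, 2.0], [2.0, 3.0]])
I = np.eye(2)

# Characteristic equation z^2 - 6z + 5 = 0  =>  A^2 = 6A - 5I, and so on
print(np.allclose(A @ A, 6 * A - 5 * I))                            # True
print(np.allclose(np.linalg.matrix_power(A, 3), 31 * A - 30 * I))   # True
print(np.allclose(np.linalg.matrix_power(A, 4), 156 * A - 155 * I)) # True
```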
State Diagram
• The conventional and sampled signal flow graphs studied earlier apply only to algebraic equations. They can be used only for the derivation of input-output relations in the transform domain.
• The method can be extended to the state transition signal flow graph, or state diagram, to represent the difference equations.
• For a continuous-time system, the state diagram resembles the block diagram of an analog computer. From the state diagram, the problem can be solved either on an analog computer or by pencil and paper.
• The same notation and relations are used for their digital counterparts.
(A) State Diagrams for Continuous-Time Systems:
• The fundamental linear operations that can be performed on an analog computer are: multiplication by a constant, addition, sign change, and integration of variables.
(a) Multiplication by a constant:
• On an analog computer, this is done by potentiometers and amplifiers. The operation is of the form
x_2(t) = a x_1(t)   ⇒   X_2(s) = a X_1(s)
where a is a real constant. If 0 < a < 1, a potentiometer is used; otherwise, an op-amp (inverting or noninverting, depending on the sign of a) is used.
[Figure: (a) block diagram symbol of a potentiometer; (b) block diagram symbol of an op-amp; (c) state diagram]

(b) Algebraic Sum of Variables:
Example: x_4(t) = a_1 x_1(t) + a_2 x_2(t) + a_3 x_3(t)
[Figure: (a) block diagram of summing device; (b) state diagram]
(c) Integration:
Example:
x_1(t) = ∫_{t_0}^{t} a x_2(τ) dτ + x_1(t_0)
The past history of the integrator prior to t_0 is represented by x_1(t_0), and the state transition begins at t = t_0.
Thus, X_1(s) = a X_2(s)/s + x_1(t_0)/s   for t ≥ t_0
[Figure: (a) block diagram of integrator; (b), (c) state diagrams]
(B) State Diagrams for Discrete-Time Systems:
Some of the basic linear operations of a digital system are as follows.
(a) Multiplication by a constant:
x_2(kT) = a x_1(kT)   ⇒   X_2(z) = a X_1(z)
(b) Summing:
x_3(kT) = x_1(kT) + x_2(kT)   ⇒   X_3(z) = X_1(z) + X_2(z)
(c) Time Delay or Storage:
x_2(kT) = x_1[(k+1)T]   ⇒   X_2(z) = z X_1(z) - z x_1(0)
⇒ X_1(z) = z^{-1} X_2(z) + x_1(0)
[Figure: basic elements in a digital state diagram]
Example:
A digital system is described by the difference equation
c(k+2) + 2c(k+1) + 3c(k) = r(k)
• The state diagram of the system can be drawn by first equating the highest-order term to the rest of the terms.
• The state variables of the system are designated as the outputs of the time-delay units.
• By setting the initial states to zero and deleting the time-delay units in the state diagram, the state equations are obtained as
[x_1(k+1)]   [ 0   1] [x_1(k)]   [0]
[x_2(k+1)] = [-3  -2] [x_2(k)] + [1] r(k)
c(k) = [1  0] [x_1(k)]
              [x_2(k)]
The state transition equations, which are the solutions of the state equations, are obtained from the state diagram using the gain formula, with X_1(z) and X_2(z) regarded as output nodes and R(z), x_1(0), and x_2(0) as input nodes:
[X_1(z)]   1 [1 + 2z^{-1}   z^{-1}] [x_1(0)]   1 [z^{-2}]
[X_2(z)] = - [-3z^{-1}      1     ] [x_2(0)] + - [z^{-1}] R(z)
           Δ                                   Δ
Δ = 1 + 2z^{-1} + 3z^{-2}
• The transfer function between the output and the input is determined from the state diagram by the gain formula as
C(z)/R(z) = X_1(z)/R(z) = 1 / (z^2 + 2z + 3)
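• A small sketch (not from the slides) simulating this example both as the difference equation and as the state model, to confirm they produce the same output sequence (zero initial conditions, unit-step input chosen for illustration):
```python
import numpy as np

A = np.array([[0.0, 1.0], [-3.0, -2.0]])
B = np.array([0.0, 1.0])
Dmat = np.array([1.0, 0.0])

N = 10
r = np.ones(N)                 # unit-step input r(k)

# State-model simulation
x = np.zeros(2)
c_state = []
for k in range(N):
    c_state.append(Dmat @ x)
    x = A @ x + B * r[k]

# Direct simulation of c(k+2) + 2c(k+1) + 3c(k) = r(k)
c = np.zeros(N + 2)
for k in range(N):
    c[k + 2] = -2 * c[k + 1] - 3 * c[k] + r[k]

print(np.allclose(c_state, c[:N]))   # True: both give the same sequence
```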
Decomposition of Transfer Functions
• The transfer function of a digital controller or system, D(z), may be realized by a digital processor, digital circuit, microprocessor, or a general-purpose digital computer.
• The steps involved in realizing the state diagram and dynamic equations of a z-transfer function are termed decomposition.
• There are three basic types of decomposition:
(A) Direct decomposition
(B) Cascade decomposition
(C) Parallel decomposition
(A) Direct Decomposition:
• The transfer function of a digital system is typically of the form:
D(z) = C(z)/R(z) = (b_m + b_{m-1} z^{-1} + … + b_0 z^{-m}) / (a_n + a_{n-1} z^{-1} + … + a_0 z^{-n})
For physical realizability, a_n ≠ 0.
• For direct decomposition, the numerator and denominator of the right-hand side are multiplied by a variable X(z):
C(z)/R(z) = [(b_m + b_{m-1} z^{-1} + … + b_0 z^{-m}) X(z)] / [(a_n + a_{n-1} z^{-1} + … + a_0 z^{-n}) X(z)]
⇒ C(z) = (b_m + b_{m-1} z^{-1} + … + b_0 z^{-m}) X(z)
   R(z) = (a_n + a_{n-1} z^{-1} + … + a_0 z^{-n}) X(z)
• For constructing the state diagram, the R(z) equation is first written in a cause-and-effect form. Dividing both sides by a_n and solving for X(z) in terms of the other terms,
X(z) = (1/a_n) R(z) - (a_{n-1}/a_n) z^{-1} X(z) - … - (a_0/a_n) z^{-n} X(z)
• In practice, the transfer function is conveniently manipulated so that a_n = 1.
• The state diagram is then drawn for C(z) and X(z).
State diagram for n = m = 3
• A digital computer program can be derived from the state diagram; the branches with gain z^{-1} are realized by a time delay of T, the sampling period.
• By defining the variables at the output nodes of all time-delay units as state variables, the dynamic equations and state transition equations can all be determined from this state diagram. With α_1 = a_2/a_3, α_2 = a_1/a_3, and α_3 = a_0/a_3,
x_1(k+1) = x_2(k)
x_2(k+1) = x_3(k)
x_3(k+1) = -(a_0/a_3) x_1(k) - (a_1/a_3) x_2(k) - (a_2/a_3) x_3(k) + (1/a_3) r(k)
         = -α_3 x_1(k) - α_2 x_2(k) - α_1 x_3(k) + r(k)        (with a_3 = 1)
c(k) = (b_0 - α_3 b_3) x_1(k) + (b_1 - α_2 b_3) x_2(k) + (b_2 - α_1 b_3) x_3(k) + (b_3/a_3) r(k)
     = (b_0 - α_3 b_3) x_1(k) + (b_1 - α_2 b_3) x_2(k) + (b_2 - α_1 b_3) x_3(k) + b_3 r(k)
In matrix form,
[x_1(k+1)]   [  0     1     0 ] [x_1(k)]   [0]
[x_2(k+1)] = [  0     0     1 ] [x_2(k)] + [0] r(k)
[x_3(k+1)]   [-α_3  -α_2  -α_1] [x_3(k)]   [1]
c(k) = [b_0 - α_3 b_3   b_1 - α_2 b_3   b_2 - α_1 b_3] x(k) + b_3 r(k)
• This form of the state model is known as the controllable (phase-variable) canonical form.
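• A small sketch that builds this controllable canonical form from the coefficients (assumes n = m and a denominator normalized so that a_n = 1; coefficient lists are ordered from the constant term down to the z^{-n} term). Applied to D(z) = z^{-2}/(1 + 2z^{-1} + 3z^{-2}), i.e. 1/(z^2 + 2z + 3) from the earlier state-diagram example, it reproduces that example's matrices.
```python
import numpy as np

def direct_decomposition(b, a):
    """Controllable (phase-variable) canonical form for
    D(z) = (b[0] + b[1] z^-1 + ... + b[n] z^-n) / (1 + a[1] z^-1 + ... + a[n] z^-n),
    assuming equal numerator/denominator order and a[0] = 1."""
    b = np.asarray(b, dtype=float)
    a = np.asarray(a, dtype=float)
    n = len(a) - 1
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)            # shift structure: x_i(k+1) = x_{i+1}(k)
    A[-1, :] = -a[1:][::-1]               # last row: -alpha_n ... -alpha_1
    B = np.zeros((n, 1)); B[-1, 0] = 1.0
    D = (b[1:][::-1] - b[0] * a[1:][::-1]).reshape(1, n)   # output row
    E = np.array([[b[0]]])                # direct feedthrough (constant numerator term)
    return A, B, D, E

A, B, D, E = direct_decomposition([0.0, 0.0, 1.0], [1.0, 2.0, 3.0])
print(A); print(B); print(D); print(E)
```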
(B) Cascade Decomposition:
• If the numerator and denominator of a transfer function D(z) are expressed as products of first-order factors, each such factor can be realized by a simple digital program or state diagram.
• The entire D(z) is then represented by the cascade connection of the individual programs.
Let D(z) = C(z)/R(z) = K (z - z_1) … (z - z_m) / [(z - p_1) … (z - p_n)]
⇒ D(z) = K D_1(z) D_2(z) … D_n(z)
where
D_i(z) = (z - z_i)/(z - p_i) = (1 - z_i z^{-1})/(1 - p_i z^{-1})   for i = 1, 2, …, m
and
D_j(z) = 1/(z - p_j) = z^{-1}/(1 - p_j z^{-1})   for j = m+1, …, n
• The state diagram representations of such functions can be obtained by direct decomposition.
• The overall state diagram for D(z) is obtained by connecting the m diagrams of Fig. (a), the n - m diagrams of Fig. (b), and a branch of gain K, all in series.
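• A minimal sketch (not from the slides) of cascading first-order sections implemented as difference equations; the gain, pole, and zero values are illustrative.
```python
import numpy as np

def first_order_section(p, zero=None):
    """Return a filter for (1 - zero*z^-1)/(1 - p*z^-1) if a zero is given,
    else for z^-1/(1 - p*z^-1)."""
    def run(u):
        y = np.zeros(len(u))
        w = 0.0                       # delayed internal value
        for k, uk in enumerate(u):
            if zero is None:
                y[k] = w              # y(k) = w(k-1): the z^-1/(1 - p z^-1) branch
                w = p * w + uk
            else:
                x = uk + p * w        # pole feedback
                y[k] = x - zero * w   # add the zero
                w = x
        return y
    return run

# Cascade K * (z - 0.2)/(z - 0.5) * 1/(z - 0.8): gain, one zero section, one pole-only section
K = 2.0
u = np.zeros(20); u[0] = 1.0          # unit pulse, to read off the impulse response
sec1 = first_order_section(0.5, zero=0.2)
sec2 = first_order_section(0.8)
y = K * sec2(sec1(u))
print(y[:6])
```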

(C) Parallel Decomposition:
• A transfer function D(z) with real poles can be realized by parallel decomposition, in which case only the denominator must be factored.
D(z) = C(z)/R(z) = K (z^m + b_{m-1} z^{m-1} + … + b_0) / (z^n + a_{n-1} z^{n-1} + … + a_0)
If, among the n real eigenvalues, i are distinct and the rest are of multiplicity n - i, then D(z) is expanded by partial fractions as
D(z) = Σ_{k=1}^{i} K_k/(z - p_k) + Σ_{k=i+1}^{n} K_k/(z - p_{i+1})^{k-i}

• Thus, the individual terms can be realized by the state diagrams used for cascade decomposition, and the overall state diagram is obtained by connecting these in parallel.
• The parallel decomposition leads to a set of state equations that is in the canonical form for distinct eigenvalues, or in the Jordan canonical form for systems with multiple-order eigenvalues.
Example:
D(z) = C(z)/R(z) = 10(z^2 + z + 1) / [z^2 (z - 0.5)(z - 0.8)]
• The eigenvalues are at 0, 0, 0.5, and 0.8. The decomposition is done by the parallel decomposition method. By partial fraction expansion,
D(z) = -233.33/(z - 0.5) + 127.08/(z - 0.8) + 106.25/z + 25/z^2
• It is emphasized that since D(z) is of fourth order, the state diagram should have only four time-delay units for a minimal-order realization.
• Notice that the trick is to share one of the time-delay units between the two terms arising from the double pole at z = 0.
The state equations of the system are:
[x_1(k+1)]   [0.5   0   0  0] [x_1(k)]   [  1   ]
[x_2(k+1)] = [ 0   0.8  0  0] [x_2(k)] + [  1   ] r(k)
[x_3(k+1)]   [ 0    0   0  1] [x_3(k)]   [106.25]
[x_4(k+1)]   [ 0    0   0  0] [x_4(k)]   [  25  ]
The output equation is
c(k) = [-233.33   127.08   1   0] x(k)
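• A sketch checking the partial-fraction constants with scipy (scipy.signal.residue operates on the polynomial ratio irrespective of whether the variable is s or z); the printed ordering of the residues depends on scipy's pole ordering.
```python
import numpy as np
from scipy.signal import residue

# D(z) = 10(z^2 + z + 1) / (z^2 (z - 0.5)(z - 0.8))
num = 10 * np.array([1.0, 1.0, 1.0])
den = np.polymul([1.0, 0.0, 0.0], np.polymul([1.0, -0.5], [1.0, -0.8]))

r, p, k = residue(num, den)
print(np.round(r, 2))   # residues: -233.33, 127.08, and 106.25, 25 for the double pole at 0
print(p)                # poles: 0.5, 0.8, 0, 0
```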
