Chapter IV: Control law synthesis
[Figure: cascade control structure of the electrical machine. The speed controller computes the torque reference Γe*(t) from the speed reference Ωm*(t) and the measured speed Ωm(t); the torque controller computes the control u(t) applied to the electrical machine from Γe*(t) and the measured torque Γe(t); Γr(t) is the load torque.]
IV.2 PI Control
IV.2.1 PI action
The PI controller can be used to control first-order systems or systems with a single
dominant time constant.
The PI controller transfer function is:
$$C_{PI}(s) = K_p + \frac{K_i}{s} = K_p\,\frac{1+T_i s}{T_i s}$$
where $K_i = \frac{K_p}{T_i}$.
The main action of the proportional part $K_p$ is to manage the response time, and the main action of the integral part $\frac{K_i}{s}$ is to cancel the static error.
Then the open loop transfer function (or loop transfer function) is:
$$L(s) = C_{PI}(s)\,G_s(s) = \frac{K K_p (1+T_i s)}{T_i s\,(1+\tau s)} \qquad (4.3)$$
The pole compensation technique consists in choosing $T_i = \tau$ so as to cancel the pole of $G_s(s)$. The loop transfer function then becomes:
$$L(s) = C_{PI}(s)\,G_s(s) = \frac{K K_p}{\tau s} \qquad (4.4)$$
$K_p$ is chosen so that the time constant $\frac{\tau}{K K_p}$ of the closed loop is equal to the desired time constant $\tau_d$, which is set in accordance with the requirements specification: $K_p = \frac{\tau}{K \tau_d}$.
Then, one can approximate the transfer function of the system by the simplified one below:
$$G_s(s) \approx \frac{K}{1+\tau s} \qquad (4.7)$$
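As an illustration of this tuning rule, here is a small MATLAB sketch (the numerical values of K, τ and τ_d are assumed for the example and are not taken from the text; Control System Toolbox functions are used):

```matlab
% Hedged sketch of the pole compensation tuning rule (plant and specification values assumed).
K = 2; tau = 0.1;                 % plant: Gs(s) = K/(1+tau*s)
tau_d = 0.02;                     % desired closed-loop time constant
Ti = tau;                         % pole compensation: Ti = tau
Kp = tau/(K*tau_d);               % Kp = tau/(K*tau_d)
Gs  = tf(K, [tau 1]);
Cpi = tf(Kp*[Ti 1], [Ti 0]);      % Cpi(s) = Kp*(1+Ti*s)/(Ti*s)
Fcl = feedback(Cpi*Gs, 1);        % closed loop, should reduce to 1/(1+tau_d*s)
step(Fcl)
```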
c. Remark
This technique is not well adapted to an input disturbance.
[Block diagram: reference y*(t), comparator, PI controller C_PI(s), control u(t), input disturbance d(t) added at the plant input, plant G_s(s), output y(t).]
And
$$F_d(s) = \frac{G_s(s)}{1+C_{PI}(s)G_s(s)} = \frac{\frac{\tau}{K_p}\,s}{(1+\tau_d s)(1+\tau s)} \qquad (4.9)$$
This means that the disturbance is rejected with the time constant of the system G_s(s). If this time constant is large, the disturbance is rejected slowly.
Identifying the closed-loop characteristic polynomial with the desired second-order form $s^2 + 2\xi\omega_n s + \omega_n^2$ gives:
$$\begin{cases} T_i = \dfrac{2\xi\omega_n\tau - 1}{\omega_n^2\,\tau} \\[1mm] K_p = \dfrac{2\xi\omega_n\tau - 1}{K} \end{cases} \qquad (4.12)$$
$\xi$ sets the desired overshoot $D_1$ (in %) and $\omega_n$ the desired response time $T_{r_d}$, as shown in the figure below:
Fig.4.4. Responses of second-order systems (overshoot $D_1$ and response time $T_{r_d}$)
Overshoot $D_1$:
$$D_1 = e^{-\frac{\pi\xi}{\sqrt{1-\xi^2}}} \qquad (4.13)$$
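A small numerical sketch of how ξ can be deduced from a desired overshoot by inverting (4.13), the gains then following from (4.12); the overshoot, ω_n and plant values below are assumed, not taken from the text:

```matlab
% Hedged sketch: damping ratio from a desired overshoot, then PI gains from (4.12).
D1  = 0.05;                              % desired overshoot (5%), assumed
xi  = -log(D1)/sqrt(pi^2 + log(D1)^2);   % inversion of (4.13): xi ~ 0.69
wn  = 10;                                % rad/s, assumed (chosen from the response-time chart)
K = 2; tau = 0.1;                        % plant parameters, assumed
Ti = (2*xi*wn*tau - 1)/(wn^2*tau);       % (4.12)
Kp = (2*xi*wn*tau - 1)/K;                % (4.12)
```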
The relation between the response time of a second-order system, $\omega_n$ and $\xi$ is given by the figure below:
[Figure: normalized response time $\omega_n T_{r_d}$ of a second-order system as a function of the damping ratio $\xi$; the minimum is obtained around $\xi = 0.707$.]
[Block diagram: the reference y*(t) is filtered by 1/(1 + T_i s) before the comparator; PI controller C_PI(s), control u(t), input disturbance d(t), plant G_s(s), output y(t).]
c. Remark
This technique is well adapted to an input disturbance.
And
$$F_d(s) = \frac{G_s(s)}{1+C_{PI}(s)G_s(s)} = \frac{\frac{K}{\tau}\,s}{s^2 + \frac{1+KK_p}{\tau}s + \frac{KK_p}{T_i\tau}} \qquad (4.15)$$
From this last equation one can remark that the disturbance is rejected with the same dynamics as the reference tracking.
In practice all actuators have limitations. It may therefore happen that the control variable reaches the actuator limits. When this happens the system is no longer controlled and the actuator remains at its limit independently of the system output. If the error between the reference and the output is different from zero, it continues to be integrated by the integral action. This means that the integral term may become very large. The error then has to keep the opposite sign for a long period before things return to normal. The consequence is that any controller with integral action may give large transients when the actuator saturates. To avoid integrator windup, we introduce an anti-windup scheme as presented below:
[Block diagram: anti-windup PI controller. The error y*(t) − y(t) is multiplied by K_p and added to the signal obtained by filtering the saturated control u(t) through 1/(1 + sT_i); the result passes through the actuator saturation to give u(t), which drives the plant G_s(s) together with the input disturbance d(t).]
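A minimal discrete-time simulation sketch of this anti-windup structure, in which u = sat(K_p·e + w) and w is the control u filtered by 1/(1 + sT_i); all numerical values (plant, gains, saturation levels, step sizes) are assumed:

```matlab
% Hedged sketch: PI control with the anti-windup structure of the figure (assumed values).
K = 2; tau = 0.1;                           % plant Gs(s) = K/(1+tau*s)
Kp = 0.5; Ti = 0.1;                         % PI parameters
umin = -1; umax = 1;                        % actuator limits
dt = 1e-3; N = 3000; yref = 1;
y = 0; w = 0; ylog = zeros(1, N);
for k = 1:N
    e = yref - y;
    u = min(max(Kp*e + w, umin), umax);     % saturated control
    w = w + (dt/Ti)*(u - w);                % w = u filtered by 1/(1+s*Ti) (Euler step)
    y = y + (dt/tau)*(-y + K*u);            % plant Gs(s) = K/(1+tau*s) (Euler step)
    ylog(k) = y;
end
plot((1:N)*dt, ylog)
```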
IV.3.1 Principle
[Block diagrams: RST controller structure. The control is u(t) = (1/S(s))·(T(s) y*(t) − R(s) y(t)); d(t) is an input disturbance acting at the input of the plant G_s(s). A second, equivalent arrangement of the same blocks T(s), S(s) and R(s) is also shown.]
This immediately gives rise to closed-loop transfers, for tracking and regulation:
$$y(s) = \frac{B(s)T(s)}{A(s)S(s)+B(s)R(s)}\,y^*(s) + \frac{B(s)S(s)}{A(s)S(s)+B(s)R(s)}\,d(s) \qquad (4.16)$$
(In some cases T(s) can be chosen in rational form instead of polynomial form.)
This equation is called the Bezout equation (or, in some cases, the Diophantine equation).
In the case where y* and d are constant (step functions), their Laplace transforms are of the form 1/s, and S(s) is chosen as S(s) = s·S′(s).
To be sure that the Bezout equation has a unique solution, we simply set the degree of R(s) to n (where n is the degree of A(s)) and the degree of S(s) to n in the case of a proper regulator, or to n + 1 in the case of a strictly proper one. In the case of a proper regulator the degree of D(s) is 2n, and in the case of a strictly proper one its degree is 2n + 1. Remark: any common factor between A(s) and B(s) must be cancelled beforehand.
The question is how to fix the dynamics D(s). To do that, we can place 2n desired poles for the closed-loop system in the case of a proper regulator, or 2n + 1 desired poles in the case of a strictly proper regulator.
In steady state the gain between y* and y is $\frac{B(0)T(0)}{D(0)} = \frac{B(0)T(0)}{A(0)S(0)+B(0)R(0)}$.
As S(0) = 0, the gain becomes $\frac{B(0)T(0)}{B(0)R(0)} = \frac{T(0)}{R(0)}$. Then, to ensure that the gain between y* and y is equal to 1, one just needs to choose T(s) such that T(0) = R(0).
Remark: the gain between d and y in steady state is $\frac{B(0)S(0)}{D(0)} = 0$.
D(s) is factorized as D(s) = C(s)F(s), where C(s) is the dynamics of the control and F(s) the dynamics of the filtering. The degree of C(s) is equal to n, and the degree of F(s) is equal to n in the case of a proper regulator or n + 1 in the case of a strictly proper regulator.
T(s) is then chosen as $T(s) = \frac{R(0)}{F(0)}\,F(s)$, so that T(0) = R(0).
To fix D(s), we have to choose n poles for C(s) and n (or n + 1) poles for F(s).
The robust pole placement strategy consists in choosing two high-level synthesis parameters: T_c (the control horizon) and T_f (the filtering horizon). We can then apply one of the two techniques below.
We deduce the roots of C(s) from the roots of A(s) by using the PPA technique, as shown in the figure below:
[Figure: mapping from the roots of A(s) to the roots of C(s). Each root is moved into the sector of the complex plane delimited by the lines of slope −1 and +1 and located to the left of −1/T_c.]
Let p_i be a root of A(s). The PPA algorithm to calculate the corresponding root p_{c_i} of C(s) is:
p_{c_i} = p_i
If real(p_{c_i}) > 0,
    then p_{c_i} = −real(p_{c_i}) + j·imag(p_{c_i}), end
If abs(imag(p_{c_i})) > abs(real(p_{c_i})),
    then p_{c_i} = abs(p_{c_i})·(−1 + j·sign(imag(p_{c_i}))), end
If real(p_{c_i}) > −1/T_c,
    then p_{c_i} = −1/T_c + j·imag(p_{c_i}), end
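A possible MATLAB transcription of this listing (a sketch written from the three tests above, applied in the same order, to be saved as ppa.m):

```matlab
function pc = ppa(pA, Tc)
% PPA (sketch): map the roots pA of A(s) to the roots pc of C(s).
pc = pA;
for i = 1:numel(pc)
    if real(pc(i)) > 0                        % unstable root: flip its real part
        pc(i) = -real(pc(i)) + 1i*imag(pc(i));
    end
    if abs(imag(pc(i))) > abs(real(pc(i)))    % too oscillatory: move it onto the sector boundary
        pc(i) = abs(pc(i))*(-1 + 1i*sign(imag(pc(i))));
    end
    if real(pc(i)) > -1/Tc                    % too slow: push it to the left of -1/Tc
        pc(i) = -1/Tc + 1i*imag(pc(i));
    end
end
end
```

For instance, for the roots {−2, 1} of A(s) used in the example later in this chapter, ppa([-2 1], 0.5) returns the two poles −2 and −2.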
We deduce the roots of F(s) from the roots of B(s) by using the PPB algorithm, as shown in the figure below:
[Figure: mapping from the roots of B(s) to the roots of F(s), which are placed to the left of −1/T_f.]
Let $z_i$ be a root of B(s). The PPB algorithm to calculate the first m roots $p_{f_i}$ of F(s) starts from:
$p_{f_i} = z_i$
$$F(s) = \prod_{i=1}^{n\ \text{or}\ n+1}\left(s - p_{f_i}\right) \qquad (4.22)$$
Construction of D(s): D(s) = C(s)F(s) where deg C(s) = deg F(s) = deg A(s) = 1.
Either $p_{c_1} = \min\left(-\frac{1}{\tau}, -\frac{1}{T_c}\right)$ and $p_{f_1} = -\frac{1}{T_f}$,
or $p_{c_1} = -\frac{1}{T_c}$ and $p_{f_1} = \min\left(-\frac{1}{\tau}, -\frac{1}{T_f}\right)$.
With A(s) = s + a, B(s) = b, S(s) = s and R(s) = r_0 s + r_1, the Bezout equation gives:
$$(s+a)s + b(r_0 s + r_1) = s^2 + d_1 s + d_2$$
$$s^2 + (a + b r_0)s + b r_1 = s^2 + d_1 s + d_2$$
$$\begin{cases} a + b r_0 = d_1 \\ b r_1 = d_2 \end{cases} \;\Longrightarrow\; \begin{cases} r_0 = \dfrac{d_1 - a}{b} \\ r_1 = \dfrac{d_2}{b} \end{cases}$$
$$T(s) = \frac{R(0)}{F(0)}\,F(s) = \frac{r_1}{-p_{f_1}}\left(s - p_{f_1}\right)$$
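A small MATLAB sketch of this first-order synthesis (proper regulator case); the numerical values of a, b, T_c and T_f are assumed:

```matlab
% Hedged sketch: RST synthesis for A(s) = s + a, B(s) = b, S(s) = s, R(s) = r0*s + r1.
a = 2; b = 3; Tc = 0.5; Tf = 0.1;   % assumed values
pc1 = min(-a, -1/Tc);               % control pole: min(-1/tau, -1/Tc), with -1/tau = -a
pf1 = -1/Tf;                        % filtering pole
D  = conv([1 -pc1], [1 -pf1]);      % D(s) = C(s)F(s) = s^2 + d1*s + d2
d1 = D(2); d2 = D(3);
r0 = (d1 - a)/b;                    % from a + b*r0 = d1
r1 = d2/b;                          % from b*r1  = d2
T  = (r1/(-pf1))*[1 -pf1];          % T(s) = R(0)/F(0) * F(s)
```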
Construction of D(s): D(s) = C(s)F(s) where deg C(s) = deg A(s) = 1 and deg F(s) = deg A(s) + 1 = 2.
Either $p_{c_1} = \min\left(-\frac{1}{\tau}, -\frac{1}{T_c}\right)$ and $p_{f_1} = p_{f_2} = -\frac{1}{T_f}$,
or $p_{c_1} = -\frac{1}{T_c}$, $p_{f_1} = \min\left(-\frac{1}{\tau}, -\frac{1}{T_f}\right)$ and $p_{f_2} = -\frac{1}{T_f}$.
$$(s+a)(s^2 + s_1 s) + b(r_0 s + r_1) = s^3 + d_1 s^2 + d_2 s + d_3$$
$$\begin{cases} a + s_1 = d_1 \\ a s_1 + b r_0 = d_2 \\ b r_1 = d_3 \end{cases} \;\Longrightarrow\; \begin{cases} s_1 = d_1 - a \\ r_0 = \dfrac{d_2 - a s_1}{b} \\ r_1 = \dfrac{d_3}{b} \end{cases}$$
$$T(s) = \frac{R(0)}{F(0)}\,F(s) = \frac{r_1}{p_{f_1} p_{f_2}}\left(s - p_{f_1}\right)\left(s - p_{f_2}\right)$$
If the control input signal has limitations (saturation), since the controller has an integrating action it is essential to introduce an anti-windup scheme. This anti-windup RST controller is shown below:
[Block diagram: anti-windup RST controller. The saturated control u(t) is fed back through the block (S(s) − F(s))/F(s) and the output y(t) through R(s)/F(s); d(t) is the input disturbance acting on the plant G_s(s).]
A ∈ ℝ^{n×n}: state matrix, B ∈ ℝ^{n×m}: input matrix, C ∈ ℝ^{p×n}: output matrix and D ∈ ℝ^{p×m}: direct transmission matrix (generally this matrix is zero for inertial systems).
[Block diagram of the state model: u(t) enters B, the result is integrated to give the state x(t), which is fed back through A and sent through C; D provides the direct transmission from u(t) to the output y(t).]
The transfer function of the system can be calculated from the state model:
Transfer function: $G_s(s) = C(sI - A)^{-1}B + D$; it is the same as the one given in (4.23).
a. Stability
Definition: A system modelled by the state model ẋ(t) = Ax(t) + Bu(t) is stable (internal stability) if and only if all the eigenvalues of the state matrix A have a negative real part.
b. Controllability
Definition: A system modelled by the state model ẋ(t) = Ax(t) + Bu(t) is said to be controllable if and only if it is possible, by means of the input u(t), to transfer the system from any initial state x_i to any other final state x_f in a finite time T.
[Figure: a state trajectory x(t) driven from the initial state x_i to the final state x_f in the finite time T.]
Fig.4.14. Controllability
Algebraic controllability theorem: The time-invariant system ẋ(t) = Ax(t) + Bu(t) is controllable if and only if the rank of the controllability matrix $\mathcal{C}_{A,B} = \begin{bmatrix}B & AB & \cdots & A^{n-1}B\end{bmatrix}$ is equal to n.
If the pair (A, B) is not controllable (uncontrollable), we can make a basis change to separate the controllable part from the uncontrollable one. If T is the transformation matrix, the new state model matrices are:
$$\bar{x} = \begin{bmatrix}\bar{x}_1 \\ \bar{x}_2\end{bmatrix}$$
$$\begin{cases} \bar{A} = T^{-1}AT = \begin{bmatrix}\bar{A}_{11} & \bar{A}_{12} \\ 0 & \bar{A}_{22}\end{bmatrix}, & \bar{B} = T^{-1}B = \begin{bmatrix}\bar{B}_1 \\ 0\end{bmatrix} \\ \bar{C} = CT = \begin{bmatrix}\bar{C}_1 & \bar{C}_2\end{bmatrix}, & \bar{D} = D \end{cases} \qquad (4.25)$$
If the uncontrollable part is stable (i.e. all the eigenvalues of the state matrix 𝐴̅22 have a
negative real part) we say that the system is stabilisable (or the pair (𝑨, 𝑩) is stabilisable).
c. Observability
Definition: A system modelled by the state model ẋ(t) = Ax(t) + Bu(t), y(t) = Cx(t) + Du(t) is said to be observable if and only if it is possible to determine any state x(t) by using only a finite record of u(τ) and y(τ) for t ≤ τ ≤ t + T.
[Figure: an observer reconstructs the state estimate x̂(t) from the input u(t) and the output y(t).]
Fig.4.15. Observability
Algebraic observability theorem: The time-invariant system is observable if and only if the rank of the observability matrix $\mathcal{O}_{C,A}$ is equal to n, where:
$$\mathcal{O}_{C,A} = \begin{bmatrix} C \\ CA \\ \vdots \\ CA^{n-1} \end{bmatrix}$$
We then say that the pair (C, A) is observable.
If the pair (C, A) is not observable (unobservable), we can make a basis change to separate the observable part from the unobservable one. If T is the transformation matrix, the new state model matrices are:
$$\bar{x} = \begin{bmatrix}\bar{x}_1 \\ \bar{x}_2\end{bmatrix}$$
$$\begin{cases} \bar{A} = T^{-1}AT = \begin{bmatrix}\bar{A}_{11} & 0 \\ \bar{A}_{21} & \bar{A}_{22}\end{bmatrix}, & \bar{B} = T^{-1}B = \begin{bmatrix}\bar{B}_1 \\ \bar{B}_2\end{bmatrix} \\ \bar{C} = CT = \begin{bmatrix}\bar{C}_1 & 0\end{bmatrix}, & \bar{D} = D \end{cases} \qquad (4.26)$$
If the unobservable part is stable (i.e. all the eigenvalues of the state matrix 𝐴̅22 have a
negative real part) we say that the system is detectable (or the pair (𝑪, 𝑨) is detectable).
If the system is uncontrollable and unobservable, we can make a basis change to separate all the modes and obtain a new state model:
$$\bar{x} = \begin{bmatrix}\bar{x}_1 \\ \bar{x}_2 \\ \bar{x}_3 \\ \bar{x}_4\end{bmatrix}$$
$$\begin{cases} \bar{A} = T^{-1}AT = \begin{bmatrix}\bar{A}_{11} & \bar{A}_{12} & \bar{A}_{13} & \bar{A}_{14} \\ 0 & \bar{A}_{22} & 0 & \bar{A}_{24} \\ 0 & 0 & \bar{A}_{33} & \bar{A}_{34} \\ 0 & 0 & 0 & \bar{A}_{44}\end{bmatrix}, & \bar{B} = T^{-1}B = \begin{bmatrix}\bar{B}_1 \\ \bar{B}_2 \\ 0 \\ 0\end{bmatrix} \\ \bar{C} = CT = \begin{bmatrix}0 & \bar{C}_2 & 0 & \bar{C}_4\end{bmatrix}, & \bar{D} = D \end{cases} \qquad (4.27)$$
d. Matlab instructions
Calculation of transfer function from state model: ss2tf(A,B,C,D)
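For instance, applied to the two-state system that serves as the numerical example later in this chapter (matrices repeated here; D = 0):

```matlab
% Hedged example of the instruction above (Control System Toolbox).
A = [-2 2; 0 1]; B = [0; 1]; C = [1 0]; D = 0;
[num, den] = ss2tf(A, B, C, D);   % coefficients of Gs(s) = C(sI-A)^(-1)B + D
Gs = tf(num, den)                 % expected: Gs(s) = 2/(s^2 + s - 2)
```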
In the case where all the states are measurable, the state feedback control diagram is the following:
[Figure: state feedback control, u(t) = −K x(t) + K_r y*(t) applied to the system.]
The state feedback gain K must ensure system stability and set the response time.
K is calculated so that all the eigenvalues of A_cl have strictly negative real parts.
The purpose of the gain K_r is to provide a unity static gain between the reference y* and the output signal y; it is calculated so that the closed-loop static gain is equal to 1.
a. Monovariable system
In the case of a monovariable system, it is sufficient to fix the desired 𝑛 eigenvalues of
𝐴𝑐𝑙 to calculate 𝐾 ∈ ℝ1x𝑛 .
𝑛 desired eigenvalues of 𝐴𝑐𝑙 are fixed using the PPA (from the 𝑛 eigenvalues of 𝐴) or the
PPB (from the zero of the system) technique.
Let Λ_c = {λ_{c_1}, ⋯, λ_{c_n}} be the set of desired eigenvalues of A_cl. Then, the desired characteristic polynomial of A_cl is $\pi_c^d(s) = \prod_{i=1}^{n}\left(s - \lambda_{c_i}\right)$ (4.31).
We just need to equate the two characteristic polynomials (4.31) and (4.32) (π_c^d(s) = π_{A_cl}(s)) to calculate the state feedback gain K.
Example: consider the system defined by $A = \begin{bmatrix}-2 & 2 \\ 0 & 1\end{bmatrix}$, $B = \begin{bmatrix}0 \\ 1\end{bmatrix}$, $C = \begin{bmatrix}1 & 0\end{bmatrix}$, used for the numerical examples of this chapter. This system is unstable: indeed the eigenvalues of A are Λ_A = {−2, 1}, and the second eigenvalue is positive.
$$\mathcal{C}_{A,B} = \begin{bmatrix}B & AB\end{bmatrix} = \begin{bmatrix}0 & 2 \\ 1 & 1\end{bmatrix}, \qquad \det\mathcal{C}_{A,B} = -2 \neq 0, \qquad \operatorname{rank}\mathcal{C}_{A,B} = 2$$
System is controllable.
For example, if we use the PPA technique with T_c = 0.5, the desired eigenvalues of A_cl will be Λ_c = {−2, −2}.
The state matrix of the closed-loop system is $A_{cl} = A - BK = \begin{bmatrix}-2 & 2 \\ -k_1 & 1-k_2\end{bmatrix}$.
$$\pi_{A_{cl}}(s) = \det(sI - A + BK) = \begin{vmatrix}s+2 & -2 \\ k_1 & s-1+k_2\end{vmatrix} = (s+2)(s-1+k_2) + 2k_1$$
$$\begin{cases} 1 + k_2 = 4 \\ 2k_1 + 2k_2 - 2 = 4 \end{cases} \;\Longrightarrow\; \begin{cases} k_2 = 3 \\ k_1 = 0 \end{cases}$$
That is, $K = \begin{bmatrix}0 & 3\end{bmatrix}$, and A_cl = A − BK has both eigenvalues at −2 as required.
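The same gain can be checked in MATLAB; acker is used here because the desired pole −2 is repeated (a sketch, Control System Toolbox assumed):

```matlab
% Check of the pole placement computed above.
A = [-2 2; 0 1]; B = [0; 1];
K = acker(A, B, [-2 -2])   % expected: K = [0 3]
eig(A - B*K)               % both closed-loop eigenvalues at -2
```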
b. Multivariable system
In the case of a multivariable system, we cannot apply the previous technique: if we proceeded in this way, we would obtain an infinite number of solutions. How, then, to choose one of them?
The solution is to use a Linear Quadratic (LQ) control to calculate the state feedback gain K that stabilises the system and minimises an energy objective function:
$$J = \int_0^{+\infty}\left(x^T Q_c x + u^T R_c u\right)dt \qquad (4.33)$$
Remark: This technique can be applied to both multivariable and monovariable systems.
𝐾 = 𝑅𝑐 −1 𝐵 𝑇 𝑃𝑐 (4.34)
𝐴𝑇 𝑃𝑐 + 𝑃𝑐 𝐴 + 𝑄𝑐 − 𝑃𝑐 𝐵𝑅𝑐 −1 𝐵 𝑇 𝑃𝑐 = 0 (4.35)
There are different rules to fix matrices 𝑄𝑐 and 𝑅𝑐 . One of them is the De Larminat’s rule.
De Larminat’s rule:
$$\begin{cases} R_c = I \\ Q_c = \left(T_c\,W_c(T_c)\right)^{-1} \end{cases} \qquad (4.36)$$
Where:
$$W_c(T_c) = \int_0^{T_c} e^{At}BB^T e^{A^Tt}\,dt \qquad (4.37)$$
With this choice of weighting matrices Q_c and R_c, the eigenvalues of A_cl are all placed to the left of −1/T_c (it is somewhat analogous to the PPA technique used for monovariable systems).
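A possible numerical sketch of this rule applied to the system of the previous example with T_c = 0.5 (Control System Toolbox assumed); it should reproduce, up to rounding, the values computed by hand below:

```matlab
% Hedged sketch: De Larminat rule followed by an LQ design.
A = [-2 2; 0 1]; B = [0; 1]; Tc = 0.5;
Rc = eye(size(B, 2));
Wc = integral(@(t) expm(A*t)*B*(B.')*expm(A.'*t), 0, Tc, 'ArrayValued', true);
Qc = inv(Tc*Wc);                       % De Larminat choice of Qc
[K, Pc, cl_poles] = lqr(A, B, Qc, Rc); % K ~ [2.08 6.29]; poles to the left of -1/Tc
```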
𝑅𝑐 = 1.
Calculation of 𝑄𝑐 :
$$W_c(T_c) = \int_0^{T_c} e^{At}BB^T e^{A^Tt}\,dt$$
- Diagonalization method:
As the set of eigenvalues of 𝐴 is Λ𝐴 = {−2,1}, then its similar diagonal matrix is:
$$D = \begin{bmatrix}-2 & 0 \\ 0 & 1\end{bmatrix} \qquad e^{Dt} = \begin{bmatrix}e^{-2t} & 0 \\ 0 & e^{t}\end{bmatrix}$$
Then 𝑒 𝐴𝑡 = 𝑇𝑒 𝐷𝑡 𝑇 −1 where 𝑇 is the matrix of eigenvectors.
Calculation of eigenvectors:
$$A v_1 = \lambda_1 v_1:\quad \begin{bmatrix}-2 & 2 \\ 0 & 1\end{bmatrix}\begin{bmatrix}\alpha \\ \beta\end{bmatrix} = -2\begin{bmatrix}\alpha \\ \beta\end{bmatrix} \;\Longrightarrow\; \begin{cases}-2\alpha + 2\beta = -2\alpha \\ \beta = -2\beta\end{cases}$$
so β = 0 and α is arbitrary. For example we can fix α = 1, which gives $v_1 = \begin{bmatrix}1 \\ 0\end{bmatrix}$.
$$A v_2 = \lambda_2 v_2:\quad \begin{bmatrix}-2 & 2 \\ 0 & 1\end{bmatrix}\begin{bmatrix}\alpha \\ \beta\end{bmatrix} = \begin{bmatrix}\alpha \\ \beta\end{bmatrix} \;\Longrightarrow\; \begin{cases}-2\alpha + 2\beta = \alpha \\ \beta = \beta\end{cases} \;\Longrightarrow\; \alpha = \tfrac{2}{3}\beta$$
For example, if we fix β = 3, then $v_2 = \begin{bmatrix}2 \\ 3\end{bmatrix}$.
Then $T = \begin{bmatrix}1 & 2 \\ 0 & 3\end{bmatrix}$ and $T^{-1} = \dfrac{1}{3}\begin{bmatrix}3 & -2 \\ 0 & 1\end{bmatrix}$.
Thus
$$e^{At} = T e^{Dt} T^{-1} = \frac{1}{3}\begin{bmatrix}1 & 2 \\ 0 & 3\end{bmatrix}\begin{bmatrix}e^{-2t} & 0 \\ 0 & e^{t}\end{bmatrix}\begin{bmatrix}3 & -2 \\ 0 & 1\end{bmatrix} = \begin{bmatrix}e^{-2t} & \frac{2}{3}\left(e^{t}-e^{-2t}\right) \\ 0 & e^{t}\end{bmatrix}$$
Then:
$$e^{At}B = \begin{bmatrix}e^{-2t} & \frac{2}{3}\left(e^{t}-e^{-2t}\right) \\ 0 & e^{t}\end{bmatrix}\begin{bmatrix}0 \\ 1\end{bmatrix} = \begin{bmatrix}\frac{2}{3}\left(e^{t}-e^{-2t}\right) \\ e^{t}\end{bmatrix}$$
$$e^{At}BB^T e^{A^Tt} = \begin{bmatrix}\frac{4}{9}\left(e^{t}-e^{-2t}\right)^2 & \frac{2}{3}\left(e^{2t}-e^{-t}\right) \\ \frac{2}{3}\left(e^{2t}-e^{-t}\right) & e^{2t}\end{bmatrix}$$
$$W_c(T_c) = \int_0^{T_c} e^{At}BB^T e^{A^Tt}\,dt = \int_0^{T_c}\begin{bmatrix}\frac{4}{9}\left(e^{t}-e^{-2t}\right)^2 & \frac{2}{3}\left(e^{2t}-e^{-t}\right) \\ \frac{2}{3}\left(e^{2t}-e^{-t}\right) & e^{2t}\end{bmatrix}dt$$
$$W_c(T_c) = \begin{bmatrix}\frac{4}{9}\left(\frac{1}{2}e^{2T_c}+2e^{-T_c}-\frac{1}{4}e^{-4T_c}-\frac{9}{4}\right) & \frac{2}{3}\left(\frac{1}{2}e^{2T_c}+e^{-T_c}-\frac{3}{2}\right) \\ \frac{2}{3}\left(\frac{1}{2}e^{2T_c}+e^{-T_c}-\frac{3}{2}\right) & \frac{1}{2}e^{2T_c}-\frac{1}{2}\end{bmatrix}$$
For $T_c = 0.5$:
$$W_c(0.5) = \begin{bmatrix}\frac{4}{9}\left(\frac{1}{2}e+2e^{-0.5}-\frac{1}{4}e^{-2}-\frac{9}{4}\right) & \frac{2}{3}\left(\frac{1}{2}e+e^{-0.5}-\frac{3}{2}\right) \\ \frac{2}{3}\left(\frac{1}{2}e+e^{-0.5}-\frac{3}{2}\right) & \frac{1}{2}e-\frac{1}{2}\end{bmatrix} = \begin{bmatrix}0.1282 & 0.3104 \\ 0.3104 & 0.8591\end{bmatrix}$$
$$Q_c = \left(T_c\,W_c(T_c)\right)^{-1} = \begin{bmatrix}125.1213 & -45.2122 \\ -45.2122 & 18.6652\end{bmatrix}$$
Checking the conditions of resolution of the ARE:
$$Q_c = \begin{bmatrix}125.1213 & -45.2122 \\ -45.2122 & 18.6652\end{bmatrix} = \begin{bmatrix}m_1^2 & m_1 m_2 \\ m_1 m_2 & m_2^2\end{bmatrix} = M_c^T M_c \quad\text{with } M_c = \begin{bmatrix}m_1 & m_2\end{bmatrix}$$
Observability matrix of the pair $(M_c, A)$:
$$\mathcal{O}_{M_c,A} = \begin{bmatrix}M_c \\ M_c A\end{bmatrix} = \begin{bmatrix}11.1848 & -4.3203 \\ -22.3715 & 18.0512\end{bmatrix}, \qquad \det\mathcal{O}_{M_c,A} = 105.2643 \neq 0$$
Resolution of the ARE:
$$A^T P_c + P_c A + Q_c - P_c B R_c^{-1} B^T P_c = 0$$
$$\begin{bmatrix}-2 & 0 \\ 2 & 1\end{bmatrix}\begin{bmatrix}p_1 & p_2 \\ p_2 & p_3\end{bmatrix} + \begin{bmatrix}p_1 & p_2 \\ p_2 & p_3\end{bmatrix}\begin{bmatrix}-2 & 2 \\ 0 & 1\end{bmatrix} + \begin{bmatrix}125.1213 & -45.2122 \\ -45.2122 & 18.6652\end{bmatrix} - \begin{bmatrix}p_1 & p_2 \\ p_2 & p_3\end{bmatrix}\begin{bmatrix}0 \\ 1\end{bmatrix}\begin{bmatrix}0 & 1\end{bmatrix}\begin{bmatrix}p_1 & p_2 \\ p_2 & p_3\end{bmatrix} = \begin{bmatrix}0 & 0 \\ 0 & 0\end{bmatrix}$$
$$P_c = \begin{bmatrix}30.1965 & 2.0822 \\ 2.0822 & 6.2909\end{bmatrix}$$
$$K = R_c^{-1}B^T P_c = \begin{bmatrix}0 & 1\end{bmatrix}\begin{bmatrix}30.1965 & 2.0822 \\ 2.0822 & 6.2909\end{bmatrix} = \begin{bmatrix}2.0822 & 6.2909\end{bmatrix}$$
$$A_{cl} = A - BK = \begin{bmatrix}-2 & 2 \\ -2.0822 & -5.2909\end{bmatrix}$$
Its characteristic polynomial is $\pi_{A_{cl}}(s) = \det(sI - A_{cl}) = s^2 + 7.2909\,s + 14.7461$.
As the degree of this polynomial is equal to 2, the necessary condition of the Routh criterion becomes sufficient. So we can conclude that the system is stable because all the coefficients of this polynomial have the same sign.
Its eigenvalues are: Λ 𝑐 = {−3.6455 + 1.207𝑖, −3.6455 − 1.207𝑖}. The real parts of all
the eigenvalues are negative.
When the state variables are not measurable and the pair (C, A) is observable, we can estimate x by using an observer:
[Figure: an observer driven by u(t) and y(t) provides the state estimate x̂(t).]
Fig.4.17. Asymptotic state observer
The problem consists in finding the observer gain matrix 𝐿 that stabilises the observer
while making 𝑥ො(𝑡) tend "very quickly" towards 𝑥(𝑡).
Estimation error:
$$\begin{cases} \dot{\tilde{x}}(t) = \underbrace{(A - LC)}_{A_o}\,\tilde{x}(t) \\ \tilde{y}(t) = C\tilde{x}(t) \end{cases} \qquad (4.41)$$
L is calculated to stabilise the observer matrix A_o (the real parts of all the eigenvalues of A_o must be negative).
The problem of finding L is dual to that of finding K. This is because the eigenvalues of A − LC are the same as those of A^T − C^T L^T. Therefore, all solutions likely to lead to a state feedback K stabilising A − BK can be transposed by duality to the problem of finding the filtering gain L: it suffices to replace A by A^T, B by C^T, and to transpose the resulting control gain K to obtain the filtering gain L.
The calculation of 𝑳 is possible on condition that the system is observable or, otherwise,
detectable.
a. Monovariable system
In the case of a monovariable system, it is sufficient to fix the desired 𝑛 eigenvalues of
𝐴𝑜 to calculate 𝐿 ∈ ℝ𝑛x1 .
𝑛 desired eigenvalues of 𝐴𝑜 are fixed using the PPA (from the 𝑛 eigenvalues of 𝐴) or the
PPB (from the zero of the system) technique.
We just need to equate the two characteristic polynomials (4.42) and (4.43) (π_o^d(s) = π_{A_o}(s)) to calculate the observer gain L.
$$\mathcal{O}_{C,A} = \begin{bmatrix}C \\ CA\end{bmatrix} = \begin{bmatrix}1 & 0 \\ -2 & 2\end{bmatrix}, \qquad \det\mathcal{O}_{C,A} = 2 \neq 0, \qquad \operatorname{rank}\mathcal{O}_{C,A} = 2$$
System is observable.
For example, if we use the PPA technique with T_o = 0.5, the desired eigenvalues of A_o will be Λ_o = {−2, −2}.
The observer gain is $L = \begin{bmatrix}l_1 \\ l_2\end{bmatrix}$.
The state matrix of the observer is $A_o = A - LC = \begin{bmatrix}-2-l_1 & 2 \\ -l_2 & 1\end{bmatrix}$.
$$\pi_{A_o}(s) = \det(sI - A + LC) = \begin{vmatrix}s+2+l_1 & -2 \\ l_2 & s-1\end{vmatrix} = (s+2+l_1)(s-1) + 2l_2$$
$$\begin{cases} 1 + l_1 = 4 \\ -l_1 + 2l_2 - 2 = 4 \end{cases} \;\Longrightarrow\; \begin{cases} l_1 = 3 \\ l_2 = 4.5 \end{cases}$$
Finally $L = \begin{bmatrix}3 \\ 4.5\end{bmatrix}$.
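This result can also be obtained by the duality described above, as a quick MATLAB check (sketch):

```matlab
% Observer gain obtained from the dual pole placement problem.
A = [-2 2; 0 1]; C = [1 0];
L = acker(A.', C.', [-2 -2]).'   % expected: L = [3; 4.5]
eig(A - L*C)                     % both observer eigenvalues at -2
```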
b. Multivariable system
As for the LQ control technique, we define two matrices Q_o ≽ 0 and R_o ≻ 0 (Q_o ∈ ℝ^{n×n} and R_o ∈ ℝ^{p×p}).
Remark: This technique can be applied to both multivariable and monovariable systems.
𝐿 = 𝑃𝑜 𝐶 𝑇 𝑅𝑜 −1 (4.44)
There are different rules to fix matrices 𝑄𝑜 and 𝑅𝑜 . One of them is the De Larminat’s rule.
De Larminat’s rule:
$$\begin{cases} R_o = I \\ Q_o = \left(T_o\,W_o(T_o)\right)^{-1} \end{cases} \qquad (4.46)$$
Where:
$$W_o(T_o) = \int_0^{T_o} e^{A^Tt}C^TC\,e^{At}\,dt \qquad (4.47)$$
With this choice of weighting matrices Q_o and R_o, the eigenvalues of A_o are all placed to the left of −1/T_o (it is somewhat analogous to the PPA technique used for monovariable systems).
R_o = 1.
Calculation of 𝑄𝑜 :
$$W_o(T_o) = \int_0^{T_o} e^{A^Tt}C^TC\,e^{At}\,dt$$
Then:
$$Ce^{At} = \begin{bmatrix}1 & 0\end{bmatrix}\begin{bmatrix}e^{-2t} & \frac{2}{3}\left(e^{t}-e^{-2t}\right) \\ 0 & e^{t}\end{bmatrix} = \begin{bmatrix}e^{-2t} & \frac{2}{3}\left(e^{t}-e^{-2t}\right)\end{bmatrix}$$
$$e^{A^Tt}C^TC\,e^{At} = \begin{bmatrix}e^{-4t} & \frac{2}{3}\left(e^{-t}-e^{-4t}\right) \\ \frac{2}{3}\left(e^{-t}-e^{-4t}\right) & \frac{4}{9}\left(e^{t}-e^{-2t}\right)^2\end{bmatrix}$$
$$W_o(T_o) = \int_0^{T_o} e^{A^Tt}C^TC\,e^{At}\,dt = \int_0^{T_o}\begin{bmatrix}e^{-4t} & \frac{2}{3}\left(e^{-t}-e^{-4t}\right) \\ \frac{2}{3}\left(e^{-t}-e^{-4t}\right) & \frac{4}{9}\left(e^{t}-e^{-2t}\right)^2\end{bmatrix}dt$$
$$W_o(T_o) = \begin{bmatrix}-\frac{1}{4}e^{-4T_o}+\frac{1}{4} & \frac{2}{3}\left(-e^{-T_o}+\frac{1}{4}e^{-4T_o}+\frac{3}{4}\right) \\ \frac{2}{3}\left(-e^{-T_o}+\frac{1}{4}e^{-4T_o}+\frac{3}{4}\right) & \frac{4}{9}\left(\frac{1}{2}e^{2T_o}+2e^{-T_o}-\frac{1}{4}e^{-4T_o}-\frac{9}{4}\right)\end{bmatrix}$$
For $T_o = 0.5$:
$$W_o(0.5) = \begin{bmatrix}-\frac{1}{4}e^{-2}+\frac{1}{4} & \frac{2}{3}\left(-e^{-0.5}+\frac{1}{4}e^{-2}+\frac{3}{4}\right) \\ \frac{2}{3}\left(-e^{-0.5}+\frac{1}{4}e^{-2}+\frac{3}{4}\right) & \frac{4}{9}\left(\frac{1}{2}e+2e^{-0.5}-\frac{1}{4}e^{-2}-\frac{9}{4}\right)\end{bmatrix}$$
$$W_o(0.5) = \begin{bmatrix}0.2162 & 0.1182 \\ 0.1182 & 0.1282\end{bmatrix}$$
$$Q_o = \left(T_o\,W_o(T_o)\right)^{-1} = \begin{bmatrix}18.6652 & -17.2144 \\ -17.2144 & 31.4814\end{bmatrix}$$
Checking the conditions of resolution of the ARE:
$$Q_o = \begin{bmatrix}18.6652 & -17.2144 \\ -17.2144 & 31.4814\end{bmatrix} = \begin{bmatrix}m_1^2 & m_1 m_2 \\ m_1 m_2 & m_2^2\end{bmatrix} = M_o M_o^T \quad\text{with } M_o = \begin{bmatrix}m_1 \\ m_2\end{bmatrix}$$
Controllability matrix of the pair $(A, M_o)$:
$$\mathcal{C}_{A,M_o} = \begin{bmatrix}M_o & AM_o\end{bmatrix} = \begin{bmatrix}4.3203 & -19.8623 \\ -5.6108 & -5.6108\end{bmatrix}, \qquad \det\mathcal{C}_{A,M_o} = -135.6847 \neq 0$$
Resolution of the ARE:
$$A P_o + P_o A^T + Q_o - P_o C^T R_o^{-1} C P_o = 0$$
$$\begin{bmatrix}-2 & 2 \\ 0 & 1\end{bmatrix}\begin{bmatrix}p_1 & p_2 \\ p_2 & p_3\end{bmatrix} + \begin{bmatrix}p_1 & p_2 \\ p_2 & p_3\end{bmatrix}\begin{bmatrix}-2 & 0 \\ 2 & 1\end{bmatrix} + \begin{bmatrix}18.6652 & -17.2144 \\ -17.2144 & 31.4814\end{bmatrix} - \begin{bmatrix}p_1 & p_2 \\ p_2 & p_3\end{bmatrix}\begin{bmatrix}1 \\ 0\end{bmatrix}\begin{bmatrix}1 & 0\end{bmatrix}\begin{bmatrix}p_1 & p_2 \\ p_2 & p_3\end{bmatrix} = \begin{bmatrix}0 & 0 \\ 0 & 0\end{bmatrix}$$
$$P_o = \begin{bmatrix}6.2909 & 11.5185 \\ 11.5185 & 50.5975\end{bmatrix}$$
$$L = P_o C^T R_o^{-1} = \begin{bmatrix}6.2909 & 11.5185 \\ 11.5185 & 50.5975\end{bmatrix}\begin{bmatrix}1 \\ 0\end{bmatrix} = \begin{bmatrix}6.2909 \\ 11.5185\end{bmatrix}$$
Checking the stability of the observer.
$$A_o = A - LC = \begin{bmatrix}-8.2909 & 2 \\ -11.5185 & 1\end{bmatrix}$$
Its characteristic polynomial is $\pi_{A_o}(s) = \det(sI - A_o) = s^2 + 7.2909\,s + 14.7461$.
As the degree of this polynomial is equal to 2, the necessary condition of the Routh criterion becomes sufficient. So we can conclude that the observer is stable because all the coefficients of this polynomial have the same sign.
Its eigenvalues are: Λ 𝑜 = {−3.6455 + 1.207𝑖, −3.6455 − 1.207𝑖}. The real parts of all
the eigenvalues are negative.
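These hand computations can be checked by duality with the lqr command (a sketch, Control System Toolbox assumed):

```matlab
% Hedged check of the observer LQ design through the dual problem.
A = [-2 2; 0 1]; C = [1 0]; To = 0.5; Ro = 1;
Wo = integral(@(t) expm(A.'*t)*(C.')*C*expm(A*t), 0, To, 'ArrayValued', true);
Qo = inv(To*Wo);                 % De Larminat choice of Qo
L  = lqr(A.', C.', Qo, Ro).';    % expected: L ~ [6.29; 11.52]
eig(A - L*C)                     % ~ -3.65 +/- 1.21i
```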
$$\begin{cases} \dot{x}_d(t) = A_d\,x_d(t) \\ d(t) = C_d\,x_d(t) \end{cases} \qquad (4.49)$$
As an example, if the disturbance is a constant, its dynamic model is $\begin{cases}\dot{x}_d(t) = 0 \\ d(t) = x_d(t)\end{cases}$.
Let $x_G(t) = \begin{bmatrix}x(t) \\ x_d(t)\end{bmatrix}$. The model of the whole system is:
$$\begin{cases} \dot{x}_G(t) = \underbrace{\begin{bmatrix}A & E \\ 0 & A_d\end{bmatrix}}_{A_G}\,x_G(t) + \underbrace{\begin{bmatrix}B \\ 0\end{bmatrix}}_{B_G}\,u(t) \\ y(t) = \underbrace{\begin{bmatrix}C & 0\end{bmatrix}}_{C_G}\,x_G(t) + D\,u(t) \end{cases} \qquad (4.50)$$
Thus, we observe the state of the system and that of the disturbance.
To simplify the equations, we will consider only the most frequently encountered cases
of inertial systems where 𝐷 = 0.
Or
$$\begin{cases} \begin{bmatrix}\dot{x}(t) \\ \dot{\tilde{x}}(t)\end{bmatrix} = \begin{bmatrix}A-BK & BK \\ 0 & A-LC\end{bmatrix}\begin{bmatrix}x(t) \\ \tilde{x}(t)\end{bmatrix} + \begin{bmatrix}BK_r \\ 0\end{bmatrix}y^*(t) \\ y(t) = \begin{bmatrix}C & 0\end{bmatrix}\begin{bmatrix}x(t) \\ \tilde{x}(t)\end{bmatrix} \end{cases} \qquad (4.54)$$
From this equation we conclude that the set of eigenvalues of the whole system is the union of the set of eigenvalues of A − BK and that of A − LC: σ(A_clo) = σ(A − BK) ∪ σ(A − LC), where σ(·) denotes the set of eigenvalues of a matrix.
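A quick MATLAB check of this property, using the gains K and L computed in the pole placement examples above:

```matlab
% The eigenvalues of the complete loop are those of A-B*K together with those of A-L*C.
A = [-2 2; 0 1]; B = [0; 1]; C = [1 0];
K = [0 3]; L = [3; 4.5];                  % gains from the previous examples
Aclo = [A-B*K, B*K; zeros(2), A-L*C];     % block-triangular matrix of (4.54)
sort(eig(Aclo))                           % four eigenvalues at -2
```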
b. Separation principle
We can first calculate the feedback gain K (using pole placement or the LQ technique) regardless of the observer, and then calculate the observer gain L (using pole placement or the LQ technique) regardless of the state feedback control.
We can also start by calculating L and then K: the order of the calculations is not important.
As with the robust RST control, to ensure robust stability we need to fix completely different dynamics for the state feedback and for the observer. In the case of primal LTR, the dynamics of the observer must be very fast compared with those of the state feedback. In the case of dual LTR, the dynamics of the state feedback must be very fast compared with those of the observer.