
Chapter IV: Control Law Synthesis

[Cascade control block diagram: the speed controller computes the torque reference Γe*(t) from the speed reference Ωm*(t) and the measured speed Ωm(t); the torque controller computes the machine input u(t); the electrical machine, subject to the load torque Γr(t), produces Γe(t) and Ωm(t)]

Fig.4.1. Control of the electrical motor

IV.1. Control objectives


The objective is to propose three different control laws to control the electrical machine: PI, RST and observer-based state feedback control. The first two control laws are designed using the frequency-domain approach (transfer functions), and the last one using the state-space approach.

IV.2 PI Control

IV.2.1 PI action

The PI controller can be used to control first-order systems or systems with a single
dominant time constant.

The transfer function of a PI controller:


C_PI(s) = Kp (1 + 1/(Ti s)) = Kp (1 + Ti s)/(Ti s) = Kp + Ki/s        (4.1)

where Ki = Kp/Ti.

The main action of the proportional part Kp is to set the response time, and the main action of the integral part Ki/s is to cancel the steady-state (static) error.


IV.2.2 Pole compensation technique

a. Case of first-order system

[Block diagram: y*(t) → comparator → C_PI(s) → u(t) → Gs(s) → y(t), with unity feedback]

Fig.4.2. Control scheme

The transfer function of the system to be controlled is:

Gs(s) = K/(1 + τ s)        (4.2)

Then the open loop transfer function (or loop transfer function) is:
L(s) = C_PI(s) Gs(s) = K Kp (1 + Ti s)/(Ti s (1 + τ s))        (4.3)

The pole compensation technique consists in choosing Ti = τ so as to cancel the pole of Gs(s). The loop transfer function then becomes:

L(s) = C_PI(s) Gs(s) = K Kp/(τ s)        (4.4)

Then the closed loop transfer function is:


Fcl(s) = C_PI(s) Gs(s)/(1 + C_PI(s) Gs(s)) = 1/(1 + (τ/(K Kp)) s)        (4.5)

Kp is chosen so that the closed-loop time constant τ/(K Kp) is equal to the desired time constant τd, which is set in accordance with the requirements specification  ⇒  Kp = τ/(K τd).
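For illustration, a minimal Matlab sketch of this pole-compensation design (the numerical values K = 2, τ = 0.5 s and τd = 0.05 s are assumed; tf, feedback and step require the Control System Toolbox):

```matlab
K = 2; tau = 0.5;                 % plant Gs(s) = K/(1 + tau*s) (assumed values)
taud = 0.05;                      % desired closed-loop time constant (assumed)
Ti = tau;                         % pole compensation: Ti = tau
Kp = tau/(K*taud);                % Kp = tau/(K*taud)
Gs  = tf(K, [tau 1]);
Cpi = tf(Kp*[Ti 1], [Ti 0]);      % Kp*(1 + Ti*s)/(Ti*s)
Fcl = feedback(Cpi*Gs, 1);        % closed loop = 1/(1 + taud*s)
step(Fcl)                         % first-order response with time constant taud
```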

b. Case of second-order system


Transfer function of the system to be controlled is:
Gs(s) = K/((1 + τ s)(1 + τf s))        (4.6)

Where 𝜏 ≫ 𝜏𝑓 . Then, 𝜏 is the dominant time constant.

Then, one can approximate the transfer function by the simplified one below:
Gs(s) ≈ K/(1 + τ s)        (4.7)

Ti and Kp are then calculated in the same way as above.


c. Remark
This technique is not well suited to rejecting an input disturbance.

[Block diagram: y*(t) → comparator → C_PI(s) → u(t); the input disturbance d(t) is added to u(t) before Gs(s), which produces y(t); unity feedback]

Fig.4.3. Control scheme with input disturbance

Let d(t) be the input disturbance.

The output response is:

𝑦(𝑠) = 𝐹𝑐𝑙 (𝑠)𝑦 ∗ (𝑠) + 𝐹𝑑 (𝑠)𝑑(𝑠) (4.8)


where Fcl(s) = 1/(1 + (τ/(K Kp)) s)

and Fd(s) = Gs(s)/(1 + C_PI(s) Gs(s)) = (τ/Kp) s / ((1 + τd s)(1 + τ s))        (4.9)

This means that the disturbance is rejected with the time constant of the system Gs(s). If this time constant is large, the disturbance is rejected slowly.

IV.2.3 Pole placement technique

a. Case of first-order system


The open loop transfer function (or loop transfer function) is:
L(s) = C_PI(s) Gs(s) = K Kp (1 + Ti s)/(Ti s (1 + τ s))        (4.10)

Then the closed loop transfer function is:


Fcl(s) = (K Kp/(Ti τ)) (1 + Ti s) / (s² + ((1 + K Kp)/τ) s + K Kp/(Ti τ))        (4.11)

This is a second-order transfer function, whose dynamics can be set through two parameters: the natural frequency ωn and the damping ratio ξ. These two parameters are related to Kp and Ti by:

K Kp/(Ti τ) = ωn²
(1 + K Kp)/τ = 2 ξ ωn

Which leads to:


Ti = (2 ξ ωn τ − 1)/(ωn² τ)
Kp = (2 ξ ωn τ − 1)/K        (4.12)

ξ sets the desired overshoot D1 % and ωn the desired response time Trd, as shown in the figure below:

[Step response of a second-order system, showing the first overshoot D1 and the response time Trd]

Fig.4.4. Responses of second-order systems

Overshoot D1:

D1 = exp(−π ξ/√(1 − ξ²))        (4.13)

The relation between the response time of a second-order system, ωn and ξ is given by the figure below:

[Plot of the normalized response time ωn·Tr of a second-order system versus the damping ratio ξ; the minimum is reached around ξ = 0.707]

Fig.4.5. Normalized response time of a second-order system versus ξ

For example if 𝜉 = 0.707 then 𝐷1 = 4.3% and 𝜔𝑛 𝑇𝑟𝑑 = 3.
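A minimal Matlab sketch of this pole-placement design, starting from a specification (D1, Trd); the plant values K = 2, τ = 0.5 s and the specification ξ = 0.707, Trd = 0.3 s are assumed for illustration (tf, feedback and step require the Control System Toolbox):

```matlab
K = 2; tau = 0.5;                       % plant Gs(s) = K/(1 + tau*s) (assumed)
xi = 0.707; Trd = 0.3;                  % desired damping and response time (assumed)
wn = 3/Trd;                             % from Fig.4.5: wn*Trd ~ 3 for xi = 0.707
Ti = (2*xi*wn*tau - 1)/(wn^2*tau);      % equation (4.12)
Kp = (2*xi*wn*tau - 1)/K;
D1 = exp(-pi*xi/sqrt(1 - xi^2))         % expected overshoot ~ 4.3 %
Gs  = tf(K, [tau 1]);
Cpi = tf(Kp*[Ti 1], [Ti 0]);            % PI controller Kp*(1 + Ti*s)/(Ti*s)
Fcl = feedback(Cpi*Gs, 1);              % second-order closed loop of (4.11)
step(Fcl)                               % step response (extra overshoot due to the zero 1 + Ti*s)
```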


b. Improvement of the control


Note the presence of a zero in the closed-loop transfer function. This zero can induce additional overshoot when the reference presents a jump (a discontinuity such as a step at time zero). In that case, the zero can be compensated by filtering the reference, as shown in the figure below:

[Block diagram: y*(t) is filtered by 1/(1 + Ti s) before the comparator, then C_PI(s) → u(t); the disturbance d(t) is added to u(t) before Gs(s) → y(t); unity feedback]

Fig.4.6. Improvement of control scheme

c. Remark
This technique is well suited to rejecting an input disturbance.

The output response is:

𝑦(𝑠) = 𝐹𝑐𝑙 (𝑠)𝑦 ∗ (𝑠) + 𝐹𝑑 (𝑠)𝑑(𝑠) (4.14)


where Fcl(s) = (K Kp/(Ti τ)) / (s² + ((1 + K Kp)/τ) s + K Kp/(Ti τ))

and Fd(s) = Gs(s)/(1 + C_PI(s) Gs(s)) = (K/τ) s / (s² + ((1 + K Kp)/τ) s + K Kp/(Ti τ))        (4.15)

From this last equation, one can see that the disturbance is rejected with the same dynamics as the reference tracking.

IV.2.4 Anti-windup PI controller

In practice, all actuators have limitations. It may therefore happen that the control variable reaches an actuator limit. When this happens, the feedback loop is broken and the actuator remains at its limit independently of the system output. If the error between the reference and the output is non-zero, it continues to be integrated by the integral action, and the integral term may become very large. The error then has to keep the opposite sign for a long period before the controller returns to normal operation. As a consequence, any controller with integral action may produce large transients when the actuator saturates. To avoid integrator windup, we introduce an anti-windup scheme as presented below:


[Block diagram: the error y*(t) − y(t) is amplified by Kp and added to the output of the filter 1/(1 + Ti s) driven by the saturated control u(t); the sum, after saturation, gives u(t), to which the disturbance d(t) is added before Gs(s) → y(t)]

Fig.4.7. Anti-windup PI controller
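A minimal simulation sketch of the anti-windup PI of Fig.4.7, written as a simple Euler loop in Matlab; the plant and controller values, the saturation level and the step reference are assumptions made for illustration:

```matlab
% Anti-windup PI of Fig.4.7: u = sat(Kp*e + w), with w the output of 1/(1+Ti*s) driven by sat(u)
K = 2; tau = 0.5;            % plant Gs(s) = K/(1+tau*s) (assumed values)
Kp = 1.5; Ti = 0.5;          % PI gains (assumed)
umax = 1;                    % actuator saturation level (assumed)
dt = 1e-3; t = 0:dt:5;
y = 0; w = 0;                % plant state and anti-windup filter state
ylog = zeros(size(t));
for k = 1:numel(t)
    r = 1;                                   % step reference
    e = r - y;
    v = Kp*e + w;                            % unsaturated PI output
    u = min(max(v, -umax), umax);            % actuator saturation
    w = w + dt*(u - w)/Ti;                   % filter 1/(1+Ti*s) driven by the saturated u
    y = y + dt*(-y + K*u)/tau;               % plant K/(1+tau*s), Euler integration
    ylog(k) = y;
end
plot(t, ylog); xlabel('t (s)'); ylabel('y(t)');
```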

IV.3. Robust RST control

IV.3.1 Principle

The principle of the RST controller is shown in the diagram below:

[Block diagram: y*(t) → T(s) → comparator → S⁻¹(s) → u(t); the disturbance d(t) is added before Gs(s) → y(t); the feedback path is R(s)]

Fig.4.8. RST control diagram

The diagram to be implemented in Matlab/Simulink is:

[Block diagram: y*(t) → comparator → T(s)/S(s) → u(t); the disturbance d(t) is added before Gs(s) → y(t); the feedback path is R(s)/T(s)]

Fig.4.9. Implemented RST control diagram


where Gs(s) = B(s)/A(s), and A(s), B(s), R(s), S(s) and T(s) are polynomials¹.

This immediately gives rise to closed-loop transfers, for tracking and regulation:
y(s) = (B(s)T(s)/(A(s)S(s) + B(s)R(s))) y*(s) + (B(s)S(s)/(A(s)S(s) + B(s)R(s))) d(s)        (4.16)

𝑆(𝑠) and 𝑅(𝑠) are calculated so that:

𝐴(𝑠)𝑆(𝑠) + 𝐵(𝑠)𝑅(𝑠) = 𝐷(𝑠) (4.17)

¹ In some cases T(s) can be chosen in rational form instead of polynomial form.

This equation is called the Bezout equation (or, in some cases, the Diophantine equation).

D(s) defines the desired dynamics of the closed-loop system.

T(s) is calculated to handle the tracking problem (y → y*).

The desired dynamics D(s) are obtained via a robust pole placement.

The control law:


u(s) = (T(s)/S(s)) y*(s) − (R(s)/S(s)) y(s)        (4.18)

In the case where y* and d are constant (step functions), their Laplace transforms are of the form 1/s, and S(s) is chosen as S(s) = s S′(s) so that the controller contains an integral action.

IV.3.2 Solving Bezout’s equation

To ensure that the Bezout equation has a unique solution, simply set the degree of R(s) to n (where n is the degree of A(s)) and the degree of S(s) to n for a proper regulator, or to n + 1 for a strictly proper one. For a proper regulator the degree of D(s) is 2n; for a strictly proper one it is 2n + 1. Remark: any common factor between A(s) and B(s) must be cancelled beforehand.

The form of different polynomials:

𝐴(𝑠) = 𝑠 𝑛 + 𝑎1 𝑠 𝑛−1 + ⋯ + 𝑎𝑛−1 𝑠 + 𝑎𝑛

𝐵(𝑠) = 𝑏0 𝑠 𝑚 + ⋯ + 𝑏𝑚−1 𝑠 + 𝑏𝑚 (where 𝑚 ≤ 𝑛)

𝑅(𝑠) = 𝑟0 𝑠 𝑛 + 𝑟1 𝑠 𝑛−1 + ⋯ + 𝑟𝑛−1 𝑠 + 𝑟𝑛

In case of proper regulator:

𝑆(𝑠) = 𝑠 𝑛 + 𝑠1 𝑠 𝑛−1 + ⋯ + 𝑠𝑛−1 𝑠 (𝑠𝑛 = 0 because 𝑆(𝑠) = 𝑠𝑆′(𝑠))

𝐷(𝑠) = 𝑠 2𝑛 + 𝑑1 𝑠 2𝑛−1 + ⋯ + 𝑑2𝑛−1 𝑠 + 𝑑2𝑛

In case of strictly proper regulator:

𝑆(𝑠) = 𝑠 𝑛+1 + 𝑠1 𝑠 𝑛 + ⋯ + 𝑠𝑛 𝑠 (𝑠𝑛+1 = 0 because 𝑆(𝑠) = 𝑠𝑆′(𝑠))

𝐷(𝑠) = 𝑠 2𝑛+1 + 𝑑1 𝑠 2𝑛 + ⋯ + 𝑑2𝑛 𝑠 + 𝑑2𝑛+1

IV.3.3 Robust pole placement strategy

The question is how to choose the dynamics D(s). To do so, we place 2n desired closed-loop poles in the case of a proper regulator, or 2n + 1 desired poles in the case of a strictly proper one.


From equation (4.16):


y(s) = (B(s)T(s)/D(s)) y*(s) + (B(s)S(s)/D(s)) d(s)        (4.19)

In steady state, the gain between y* and y is B(0)T(0)/D(0) = B(0)T(0)/(A(0)S(0) + B(0)R(0)).

As S(0) = 0, the gain becomes B(0)T(0)/(B(0)R(0)) = T(0)/R(0). Then, to ensure that the gain between y* and y is equal to 1, one just needs to choose T(s) such that T(0) = R(0).

Remark: the steady-state gain between d and y is B(0)S(0)/D(0) = 0.

D(s) is factorized as D(s) = C(s)F(s), where C(s) describes the control dynamics and F(s) the filtering dynamics. The degree of C(s) is equal to n, and the degree of F(s) is equal to n for a proper regulator or n + 1 for a strictly proper one.

𝑇(𝑠) is chosen:

𝑇(𝑠) = 𝑘𝑐 𝐹(𝑠) (4.20)


where kc = R(0)/F(0).

Equation (4.19) becomes:


y(s) = (R(0)/F(0)) (B(s)/C(s)) y*(s) + (B(s)S(s)/(C(s)F(s))) d(s)        (4.21)

The second question is how to choose the desired poles?

To do that, we have to choose 𝑛 poles to fix 𝐶(𝑠) and 𝑛 poles, or 𝑛 + 1 poles, to fix 𝐹(𝑠).

The robust pole placement strategy consists in choosing two high-level synthesis parameters: Tc (control horizon) and Tf (filtering horizon). One of the two techniques below can then be applied:

a. Primal LTR (Loop Transfer Recovery): 𝑇𝑓 ≪ 𝑇𝑐


We deduce the roots of C(s) from the roots of A(s), and the roots of F(s) from the roots of B(s) (since B(s) has only m roots, we add n − m, or n − m + 1, extra roots placed at −1/Tf).

1. Construction of 𝐶(𝑠); PPA algorithm:

We deduce the roots of 𝐶(𝑠) from the roots of 𝐴(𝑠) by using the PPA technique, as shown
in figure below:


[Complex-plane sketch: each root of A(s) is mapped to a root of C(s); unstable roots are mirrored into the left half-plane, poorly damped roots are brought onto the ±45° half-lines, and roots slower than −1/Tc are pushed to −1/Tc]

Fig.4.10. PPA technique

Let p_i be a root of A(s). The PPA algorithm computes the corresponding root pc_i of C(s) as follows:

 pc_i = p_i
 If real(pc_i) > 0, then pc_i = −real(pc_i) + j·imag(pc_i), end
 If abs(imag(pc_i)) > abs(real(pc_i)), then pc_i = abs(pc_i)·(−1 + j·sign(imag(pc_i))), end
 If real(pc_i) > −1/Tc, then pc_i = −1/Tc + j·imag(pc_i), end

As the degree of 𝐴(𝑠) is 𝑛 then 𝐶(𝑠) is completely defined by this algorithm:

𝐶(𝑠) = ∏𝑛𝑖=1(𝑠 − 𝑝𝑐 𝑖 ) (4.22)
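A possible Matlab implementation of the PPA algorithm above (the function name ppa is ours, not from the course; save it as ppa.m):

```matlab
% PPA: map the roots of A(s) to the roots of C(s), given the control horizon Tc.
function pc = ppa(pa, Tc)
    pc = pa(:);
    for i = 1:numel(pc)
        p = pc(i);
        if real(p) > 0                          % mirror unstable roots into the left half-plane
            p = -real(p) + 1i*imag(p);
        end
        if abs(imag(p)) > abs(real(p))          % bring poorly damped roots onto the -45 degree half-lines
            p = abs(p)*(-1 + 1i*sign(imag(p)));
        end
        if real(p) > -1/Tc                      % push slow roots to -1/Tc
            p = -1/Tc + 1i*imag(p);
        end
        pc(i) = p;
    end
end
% Example: ppa([-2; 1], 0.5) returns [-2; -2], as used in the examples later in this chapter.
```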

2. Construction of 𝐹(𝑠); PPB algorithm:

We deduce the roots of 𝐹(𝑠) from the roots of 𝐵(𝑠) by using the PPB algorithm, as shown
in figure below:

[Complex-plane sketch: each root of B(s) is mapped to a root of F(s); unstable roots are mirrored into the left half-plane and roots faster than −1/Tf are brought back to −1/Tf]

Fig.4.11. PPB technique

Let z_i be a root of B(s). The PPB algorithm computes the first m roots pf_i of F(s) as follows:

 pf_i = z_i


 If real(pf_i) > 0, then pf_i = −real(pf_i) + j·imag(pf_i), end
 If real(pf_i) < −1/Tf, then pf_i = −1/Tf + j·imag(pf_i), end

The n − m remaining roots of F(s) (n − m + 1 in the case of a strictly proper regulator) are all fixed at −1/Tf: pf_i = −1/Tf for i = m + 1 to n (or n + 1).

The polynomial F(s) is:

F(s) = ∏_{i=1}^{n (or n+1)} (s − pf_i)        (4.22)

b. Dual LTR (Loop Transfer Recovery): 𝑇𝑐 ≪ 𝑇𝑓


We deduce the roots of C(s) from the roots of B(s) by using the PPB technique (since B(s) has only m roots, we add n − m extra roots placed at −1/Tc), and the roots of F(s) from the roots of A(s) by using the PPA technique (in the case of a strictly proper regulator we add one root at −1/Tf).

IV.3.4 Application to a first order system


The transfer function of the system to be controlled is Gs(s) = K/(1 + τ s) = b/(s + a) (with τ > 0), where b = K/τ and a = 1/τ.

Then: B(s) = b and A(s) = s + a.

The degree of B(s) is m = 0, and the degree of A(s) is n = 1.

The root of A(s) is p1 = −1/τ.

a. Case of proper regulator


deg 𝑅(𝑠) = deg 𝑆(𝑠) = deg 𝐴(𝑠) = 1 and deg 𝐷(𝑠) = 2 ∗ deg 𝐴(𝑠) = 2.

Then: 𝑅(𝑠) = 𝑟0 𝑠 + 𝑟1 , 𝑆(𝑠) = 𝑠 and 𝐷(𝑠) = 𝑠 2 + 𝑑1 𝑠 + 𝑑2 .

Construction of D(s): D(s) = C(s)F(s), where deg C(s) = deg F(s) = deg A(s) = 1.

Then C(s) = s − pc1 and F(s) = s − pf1, so D(s) = s² − (pc1 + pf1) s + pc1 pf1.
In the case of primal LTR (Tf chosen much smaller than Tc):

pc1 = min(−1/τ, −1/Tc) and pf1 = −1/Tf


In the case of dual LTR (Tc chosen much smaller than Tf):

pc1 = −1/Tc and pf1 = min(−1/τ, −1/Tf)

Resolution of Bezout’s equation:

𝐴(𝑠)𝑆(𝑠) + 𝐵(𝑠)𝑅(𝑠) = 𝐷(𝑠)

 (𝑠 + 𝑎)𝑠 + 𝑏(𝑟0 𝑠 + 𝑟1 ) = 𝑠 2 + 𝑑1 𝑠 + 𝑑2

 𝑠 2 + (𝑎 + 𝑏𝑟0 )𝑠 + 𝑏𝑟1 = 𝑠 2 + 𝑑1 𝑠 + 𝑑2
 a + b r0 = d1 and b r1 = d2  ⇒  r0 = (d1 − a)/b and r1 = d2/b

T(s) = (R(0)/F(0)) F(s) = (r1/(−pf1)) (s − pf1)
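A numeric sketch of this proper-regulator design (primal LTR; the values K = 2, τ = 0.5, Tc = 0.1 and Tf = 0.02 are assumed for illustration; tf and dcgain require the Control System Toolbox):

```matlab
K = 2; tau = 0.5; b = K/tau; a = 1/tau;   % plant b/(s+a) (assumed values)
Tc = 0.1; Tf = 0.02;                      % control and filtering horizons (assumed)
pc1 = min(-1/tau, -1/Tc);                 % PPA on the root of A(s)
pf1 = -1/Tf;                              % filtering pole
D  = conv([1 -pc1], [1 -pf1]);            % D(s) = (s - pc1)(s - pf1) = s^2 + d1*s + d2
d1 = D(2); d2 = D(3);
r0 = (d1 - a)/b; r1 = d2/b;               % solution of the Bezout equation
R  = [r0 r1];                             % R(s) = r0*s + r1
S  = [1 0];                               % S(s) = s
T  = (r1/(-pf1))*[1 -pf1];                % T(s) = (R(0)/F(0))*F(s)
check = conv([1 a], S) + [0 b*r0 b*r1];   % A*S + B*R, should equal D
disp(norm(check - D))                     % ~0: Bezout identity satisfied
Fcl = tf(b*T, D); dcgain(Fcl)             % tracking transfer B*T/D, unit static gain
```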

b. Case of strictly proper regulator


deg 𝑅(𝑠) = deg 𝐴(𝑠) = 1, deg 𝑆(𝑠) = deg 𝐴(𝑠) + 1 = 2 and deg 𝐷(𝑠) = 2 deg 𝐴(𝑠) +
1 = 3.

Then: 𝑅(𝑠) = 𝑟0 𝑠 + 𝑟1 , 𝑆(𝑠) = 𝑠 2 + 𝑠1 𝑠 and 𝐷(𝑠) = 𝑠 3 + 𝑑1 𝑠 2 + 𝑑2 𝑠 + 𝑑3 .

Construction of D(s): D(s) = C(s)F(s), where deg C(s) = deg A(s) = 1 and deg F(s) = deg A(s) + 1 = 2.

Then C(s) = s − pc1 and F(s) = (s − pf1)(s − pf2).

In the case of primal LTR (Tf chosen much smaller than Tc):

pc1 = min(−1/τ, −1/Tc) and pf1 = pf2 = −1/Tf

In the case of dual LTR (Tc chosen much smaller than Tf):

pc1 = −1/Tc, pf1 = min(−1/τ, −1/Tf) and pf2 = −1/Tf

Resolution of Bezout’s equation:

𝐴(𝑠)𝑆(𝑠) + 𝐵(𝑠)𝑅(𝑠) = 𝐷(𝑠)

 (𝑠 + 𝑎)(𝑠 2 + 𝑠1 𝑠) + 𝑏(𝑟0 𝑠 + 𝑟1 ) = 𝑠 3 + 𝑑1 𝑠 2 + 𝑑2 𝑠 + 𝑑3

 𝑠 3 + (𝑎 + 𝑠1 )𝑠 2 + (𝑎𝑠1 + 𝑏𝑟0 )𝑠 + 𝑏𝑟1 = 𝑠 3 + 𝑑1 𝑠 2 + 𝑑2 𝑠 + 𝑑3


 a + s1 = d1, a s1 + b r0 = d2 and b r1 = d3  ⇒  s1 = d1 − a, r0 = (d2 − a s1)/b and r1 = d3/b

T(s) = (R(0)/F(0)) F(s) = (r1/(pf1 pf2)) (s − pf1)(s − pf2)

IV.3.5 Anti-windup RST controller

If the control input signal is subject to limitations (saturation), then, since the controller has an integral action, it is essential to introduce an anti-windup RST structure. This anti-windup RST controller is shown below:

[Block diagram: u(t) is built from the reference y*(t), from the output y(t) fed back through R(s)/F(s), and from the saturated control fed back through (S(s) − F(s))/F(s); u(t) drives Gs(s), to which the disturbance d(t) is added]

Fig.4.12. Implementation of anti-windup RST controller

IV.4. Robust observer based state feedback control

IV.4.1 State model

A linear system can be modelled using the state-space approach:

ẋ(t) = A x(t) + B u(t)
y(t) = C x(t) + D u(t)        (4.23)

Where: 𝑥 ∈ ℝ𝑛 : state vector, 𝑦 ∈ ℝ𝑝 : output vector and 𝑢 ∈ ℝ𝑚 : input vector.

𝐴 ∈ ℝ𝑛x𝑛 : state matrix, 𝐵 ∈ ℝ𝑛x𝑚 : input matrix, 𝐶 ∈ ℝ𝑝x𝑛 : output matrix and 𝐷 ∈ ℝ𝑝x𝑚 :
direct transmission matrix (generally this matrix is zero for inertial systems).

In case of monovariable system 𝑚 = 𝑝 = 1.

The schematic representation of the state model is:


[Block diagram of the state model: u(t) → B → integrator → x(t) → C → y(t), with state feedback through A and direct feedthrough through D]

Fig.4.13. Schematic representation of the state model

The transfer function of the system can be calculated from the state model:

𝐺𝑠 (𝑠) = 𝐶(𝑠𝐼 − 𝐴)−1 𝐵 + 𝐷 (4.24)

The transfer function is invariant under a change of basis of the state space: if x = T x̄, the new state model is

x̄̇(t) = Ā x̄(t) + B̄ u(t)
y(t) = C̄ x̄(t) + D u(t)

where Ā = T⁻¹AT, B̄ = T⁻¹B and C̄ = CT.

Its transfer function Gs(s) = C̄(sI − Ā)⁻¹B̄ + D is the same as the one given in (4.24).

a. Stability
Definition: A system modelled by the state model ẋ(t) = Ax(t) + Bu(t) is stable (internally stable) if and only if all the eigenvalues of the state matrix A have a negative real part.

b. Controllability
Definition: A system modelled by the state model ẋ(t) = Ax(t) + Bu(t) is said to be controllable if and only if it is possible, by means of the input u(t), to transfer the system from any initial state xi to any other final state xf in a finite time T.

[Sketch: a state trajectory driven by u(t) from the initial state xi to the final state xf within the finite time T]

Fig.4.14. Controllability

Algebraic controllability theorem: The time invariant system 𝑥̇ (𝑡) = 𝐴𝑥(𝑡) + 𝐵𝑢(𝑡) is
controllable if and only if the rank of the controllability matrix 𝒞𝐴,𝐵 is equal to 𝑛.

Where: 𝒞𝐴,𝐵 = [𝐵 𝐴𝐵 ⋯ 𝐴𝑛−1 𝐵 ]

We say that pair (realization) (𝐴, 𝐵) is controllable.

If the pair (A, B) is not controllable (uncontrollable), we can make a basis change to separate the controllable part from the uncontrollable one. If T is the transformation matrix, the new state-model matrices are:


x̄ = [x̄1; x̄2]

Ā = T⁻¹AT = [Ā11 Ā12; 0 Ā22],   B̄ = T⁻¹B = [B̄1; 0]        (4.25)

C̄ = CT = [C̄1 C̄2],   D̄ = D

This representation is called the controllability staircase form.

If the uncontrollable part is stable (i.e. all the eigenvalues of the state matrix 𝐴̅22 have a
negative real part) we say that the system is stabilisable (or the pair (𝑨, 𝑩) is stabilisable).

c. Observability
Definition: A system modelled by the state model ẋ(t) = Ax(t) + Bu(t), y(t) = Cx(t) + Du(t) is said to be observable if and only if it is possible to determine any state x(t) by using only a finite record of u(τ) and y(τ) for t ≤ τ ≤ t + T.

[Block diagram: the dynamic system maps u(t → t+T) to y(t → t+T); an observer uses both records to reconstruct x̂(t)]

Fig.4.15. Observability

Algebraic observability theorem: The time-invariant system ẋ(t) = Ax(t) + Bu(t), y(t) = Cx(t) + Du(t) is observable if and only if the rank of the observability matrix 𝒪_{C,A} is equal to n, where:

𝒪_{C,A} = [C; CA; ⋯; CA^(n−1)]
We say that pair (𝐶, 𝐴) is observable.

Remark: There is a duality between controllability and observability. If we compute the transpose of 𝒪_{C,A}: 𝒪_{C,A}ᵗ = [Cᵗ  AᵗCᵗ  ⋯  (Aᵗ)^(n−1)Cᵗ], we obtain the controllability matrix of the pair (Aᵗ, Cᵗ), i.e. 𝒪_{C,A}ᵗ = 𝒞_{Aᵗ,Cᵗ}. In the same way, if we compute the transpose of 𝒞_{A,B}: 𝒞_{A,B}ᵗ = [Bᵗ; BᵗAᵗ; ⋯; Bᵗ(Aᵗ)^(n−1)], we obtain the observability matrix of the pair (Bᵗ, Aᵗ), i.e. 𝒞_{A,B}ᵗ = 𝒪_{Bᵗ,Aᵗ}. We say that (A, B, C, D) and (Aᵗ, Cᵗ, Bᵗ, Dᵗ) are dual.

If the pair (C, A) is not observable (unobservable), we can make a basis change to separate the observable part from the unobservable one. If T is the transformation matrix, the new state-model matrices are:


x̄ = [x̄1; x̄2]

Ā = T⁻¹AT = [Ā11 0; Ā21 Ā22],   B̄ = T⁻¹B = [B̄1; B̄2]        (4.26)

C̄ = CT = [C̄1 0],   D̄ = D

This representation is called the observability staircase form.

If the unobservable part is stable (i.e. all the eigenvalues of the state matrix 𝐴̅22 have a
negative real part) we say that the system is detectable (or the pair (𝑪, 𝑨) is detectable).

If the system is both uncontrollable and unobservable, we can make a basis change to separate all the modes and obtain a new state model:

x̄ = [x̄1; x̄2; x̄3; x̄4]

Ā = T⁻¹AT = [Ā11 Ā12 Ā13 Ā14; 0 Ā22 0 Ā24; 0 0 Ā33 Ā34; 0 0 0 Ā44],   B̄ = T⁻¹B = [B̄1; B̄2; 0; 0]        (4.27)

C̄ = CT = [0 C̄2 0 C̄4],   D̄ = D

The transfer function is:

𝐺𝑠 (𝑠) = 𝐶(𝑠𝐼 − 𝐴)−1 𝐵 + 𝐷 = 𝐶2̅ (𝑠𝐼 − 𝐴̅22 )−1 𝐵̅2 + 𝐷 (4.28)

and
ẋ̄2(t) = Ā22 x̄2(t) + B̄2 u(t)
y(t) = C̄2 x̄2(t) + D u(t)

d. Matlab instructions
Calculation of transfer function from state model: ss2tf(A,B,C,D)

Calculation of controllability matrix: ctrb(A,B)

To obtain controllability staircase form: ctrbf(A,B,C)

Calculation of observability matrix: obsv(A,C)

To obtain observability staircase form: obsvf(A,B,C)

Calculation of a rank of matrix T: rank(T)

Calculation of eigenvalues of matrix A: eig(A)
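A short illustrative session with these instructions, applied to the example system used later in this chapter (A = [−2 2; 0 1], B = [0; 1], C = [1 0], D = 0; Control System Toolbox assumed):

```matlab
A = [-2 2; 0 1]; B = [0; 1]; C = [1 0]; D = 0;
[num, den] = ss2tf(A, B, C, D);    % transfer function Gs(s) = 2/(s^2 + s - 2)
rank(ctrb(A, B))                   % = 2: the pair (A,B) is controllable
rank(obsv(A, C))                   % = 2: the pair (C,A) is observable
eig(A)                             % {-2, 1}: one unstable eigenvalue
```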


IV.4.2 State feedback control

In the case where all the states are measurable, the state feedback control diagram is:

[Block diagram: y*(t) is scaled by Kr; the state x(t) is fed back through K and subtracted; the result u(t) drives the dynamic system, which produces y(t)]

Fig.4.16. State feedback control

The control law is:

𝑢(𝑡) = −𝐾𝑥(𝑡) + 𝐾𝑟 𝑦 ∗ (𝑡) (4.28)

Where: 𝐾 ∈ ℝ𝑚x𝑛 and 𝐾𝑟 ∈ ℝ𝑚x𝑝

The state model of the closed loop system is:

ẋ(t) = (A − BK) x(t) + B Kr y*(t),  where A_cl = A − BK
y(t) = (C − DK) x(t) + D Kr y*(t)        (4.29)

The state feedback gain 𝐾 must ensure system stability and set response times.

𝐾 is calculated so that all the eigenvalues of 𝐴𝑐𝑙 have strictly negative real parts.

The calculation of 𝑲 is possible on condition that the system is controllable or,


otherwise, stabilisable.

The purpose of the gain 𝐾𝑟 is to provide a static unity gain between the reference 𝑦 ∗ and
the output signal 𝑦. It is calculated so that:

−((𝐶 − 𝐷𝐾)(𝐴 − 𝐵𝐾)−1 𝐵 + 𝐷)𝐾𝑟 = 𝐼𝑝x𝑝 (4.30)

In the case where D = 0, this reduces to −(C(A − BK)⁻¹B) Kr = I_{p×p}.

In the case of a monovariable system: Kr = −1/(C(A − BK)⁻¹B).

a. Monovariable system
In the case of a monovariable system, it is sufficient to fix the desired 𝑛 eigenvalues of
𝐴𝑐𝑙 to calculate 𝐾 ∈ ℝ1x𝑛 .

𝑛 desired eigenvalues of 𝐴𝑐𝑙 are fixed using the PPA (from the 𝑛 eigenvalues of 𝐴) or the
PPB (from the zero of the system) technique.

Let Λ_c = {λ_c1, ⋯, λ_cn} be the set of desired eigenvalues of A_cl. Then, the desired characteristic polynomial of A_cl is:


𝜋𝑐𝑑 (𝑠) = ∏𝑛𝑖=1(𝑠 − 𝜆𝑐 𝑖 ) = 𝑠 𝑛 + 𝑎𝑑 1 𝑠 𝑛−1 + ⋯ + 𝑎𝑑 𝑛−1 𝑠 + 𝑎𝑑 𝑛 (4.31)

From 𝐴𝑐𝑙 = 𝐴 − 𝐵𝐾 we can calculate the characteristic polynomial of 𝐴𝑐𝑙 :

𝜋𝐴𝑐𝑙 (𝑠) = det(𝑠𝐼 − 𝐴 + 𝐵𝐾) (4.32)

We just need to equalise the two characteristic polynomials (4.31) and (4.32) (𝜋𝑐𝑑 (𝑠) =
𝜋𝐴𝑐𝑙 (𝑠)) to calculate the state feedback gain 𝐾.

In Matlab we can use the instruction place or acker to calculate 𝐾: 𝐾=place(𝐴, 𝐵, Λ 𝑐 ) or


𝐾=acker(𝐴, 𝐵, Λ 𝑐 ).

Example: Consider the system ẋ(t) = Ax(t) + Bu(t), y(t) = Cx(t), where A = [−2 2; 0 1], B = [0; 1] and C = [1 0].

This system is unstable. Indeed the eigenvalues of 𝐴 are Λ𝐴 = {−2,1}, the second
eigenvalue is positive.

We can check if the system is controllable. Controllability matrix:

𝒞_{A,B} = [B  AB] = [0 2; 1 1]  ⇒  det 𝒞_{A,B} = −2 ≠ 0  ⇒  rank 𝒞_{A,B} = 2
⇒ The system is controllable.

For example if we use PPA technique with 𝑇𝑐 = 0.5. The desired eigenvalues of 𝐴𝑐𝑙 will
be Λ 𝑐 = {−2, −2}.

Then, the desired characteristic polynomial of 𝐴𝑐𝑙 is:

𝜋𝑐𝑑 (𝑠) = (𝑠 + 2)(𝑠 + 2) = 𝑠 2 + 4𝑠 + 4

The state feedback gain: 𝐾 = [𝑘1 𝑘2 ]

The state matrix of the closed-loop system: A_cl = A − BK = [−2 2; −k1 1 − k2].

From A_cl = A − BK we can calculate the characteristic polynomial of A_cl:

π_Acl(s) = det(sI − A + BK) = det([s + 2  −2; k1  s − 1 + k2]) = (s + 2)(s − 1 + k2) + 2k1

⇒ π_Acl(s) = s² + (1 + k2)s + 2k1 + 2k2 − 2

We identify π_Acl(s) with π_cd(s): s² + (1 + k2)s + 2k1 + 2k2 − 2 = s² + 4s + 4, which gives two equalities:

1 + k2 = 4 and 2k1 + 2k2 − 2 = 4  ⇒  k2 = 3 and k1 = 0

Finally K = [0 3].
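This hand calculation can be checked in Matlab; acker is used here because the desired pole −2 is repeated, which place rejects for a single-input system:

```matlab
A = [-2 2; 0 1]; B = [0; 1];
K = acker(A, B, [-2 -2])     % returns [0 3]
eig(A - B*K)                 % both eigenvalues at -2
```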


b. Multivariable system
In the case of a multivariable system, the previous technique cannot be applied directly: proceeding in this way yields an infinite number of solutions, so how should one of them be chosen?

The solution is to use Linear Quadratic (LQ) control to calculate a state feedback K that stabilises the system and minimises an energy objective function:

J = ∫₀^{+∞} (xᵗ Qc x + uᵗ Rc u) dt        (4.33)

Where 𝑄𝑐 ≽ 0 and 𝑅𝑐 ≻ 0 (𝑄𝑐 ∈ ℝ𝑛x𝑛 and 𝑅𝑐 ∈ ℝ𝑚x𝑚 ).

Matrices 𝑄𝑐 and 𝑅𝑐 are fixed by the user.

Remark: This technique can be applied to both multivariable and monovariable systems.

The state feedback gain is:

𝐾 = 𝑅𝑐 −1 𝐵 𝑇 𝑃𝑐 (4.34)

Where 𝑃𝑐 is a solution of algebraic Riccati equation (ARE) :

𝐴𝑇 𝑃𝑐 + 𝑃𝑐 𝐴 + 𝑄𝑐 − 𝑃𝑐 𝐵𝑅𝑐 −1 𝐵 𝑇 𝑃𝑐 = 0 (4.35)

𝑃𝑐 is a symmetric definite positive matrix.

The conditions for the existence of the solution of ARE are:

 Pair (𝐴, 𝐵) must be stabilisable


 Pair (𝑀𝑐 , 𝐴) must be detectable (𝑀𝑐 is square root matrix of 𝑄𝑐 ; i.e. 𝑄𝑐 = 𝑀𝑐 𝑡 𝑀𝑐 )

Matlab instruction: [𝐾, 𝑃𝑐 ] = 𝑙𝑞𝑟(𝐴, 𝐵, 𝑄𝑐 , 𝑅𝑐 )

There are different rules to fix matrices 𝑄𝑐 and 𝑅𝑐 . One of them is the De Larminat’s rule.

 De Larminat’s rule:

Rc is fixed to the identity matrix. Qc is calculated from the partial controllability Gramian over the control horizon Tc:

Rc = I
Qc = (Tc · Wc(Tc))⁻¹        (4.36)

Where:
Wc(Tc) = ∫₀^Tc e^(At) B Bᵀ e^(Aᵀt) dt        (4.37)

With this choice of the weighting matrices Qc and Rc, the eigenvalues of A_cl are all placed to the left of −1/Tc (it is somewhat equivalent to the PPA technique for monovariable systems).
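A minimal Matlab sketch of De Larminat's rule (the numerical values reproduce the worked example that follows; lqr requires the Control System Toolbox):

```matlab
A = [-2 2; 0 1]; B = [0; 1];
Tc = 0.5; Rc = 1;
Wc = integral(@(t) expm(A*t)*(B*B')*expm(A'*t), 0, Tc, 'ArrayValued', true);
Qc = inv(Tc*Wc);                  % De Larminat weighting
[K, Pc] = lqr(A, B, Qc, Rc);      % K ~ [2.08 6.29]
eig(A - B*K)                      % both eigenvalues to the left of -1/Tc = -2
```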


Example: Consider the same system as above, ẋ(t) = Ax(t) + Bu(t), y(t) = Cx(t), where A = [−2 2; 0 1], B = [0; 1] and C = [1 0].

Let Tc = 0.5 and Rc = 1.

Calculation of 𝑄𝑐 :
Wc(Tc) = ∫₀^Tc e^(At) B Bᵀ e^(Aᵀt) dt

Calculation of e^(At): there are different methods to obtain this matrix exponential (diagonalization method, Sylvester method, Cayley-Hamilton method, …):

- Diagonalization method:

As the set of eigenvalues of A is Λ_A = {−2, 1}, its similar diagonal matrix is:

D = [−2 0; 0 1]  ⇒  e^(Dt) = [e^(−2t) 0; 0 e^t]

Then e^(At) = T e^(Dt) T⁻¹, where T is the matrix of eigenvectors.

Calculation of the eigenvectors:

A v1 = λ1 v1  ⇒  [−2 2; 0 1][α; β] = −2[α; β]  ⇒  −2α + 2β = −2α and β = −2β  ⇒  β = 0 and α arbitrary. For example, α = 1  ⇒  v1 = [1; 0].

A v2 = λ2 v2  ⇒  [−2 2; 0 1][α; β] = [α; β]  ⇒  −2α + 2β = α and β = β  ⇒  α = (2/3)β. For example, β = 3  ⇒  v2 = [2; 3].

Then T = [1 2; 0 3]  ⇒  T⁻¹ = (1/3)[3 −2; 0 1]

Thus e^(At) = T e^(Dt) T⁻¹ = [e^(−2t)  (2/3)(e^t − e^(−2t)); 0  e^t]

Then e^(At) B = [(2/3)(e^t − e^(−2t)); e^t]

⇒ e^(At) B Bᵀ e^(Aᵀt) = [(4/9)(e^t − e^(−2t))²  (2/3)(e^(2t) − e^(−t)); (2/3)(e^(2t) − e^(−t))  e^(2t)]
3


⇒ Wc(Tc) = ∫₀^Tc e^(At) B Bᵀ e^(Aᵀt) dt
= [(4/9)((1/2)e^(2Tc) + 2e^(−Tc) − (1/4)e^(−4Tc) − 9/4)   (2/3)((1/2)e^(2Tc) + e^(−Tc) − 3/2);
   (2/3)((1/2)e^(2Tc) + e^(−Tc) − 3/2)                     (1/2)e^(2Tc) − 1/2]

For Tc = 0.5:

⇒ Wc(0.5) = [0.1282 0.3104; 0.3104 0.8591]

⇒ Qc = (Tc · Wc(Tc))⁻¹ = [125.1213 −45.2122; −45.2122 18.6652]
Checking conditions of resolution of ARE:

Pair (A, B) must be stabilisable: this is the case, as demonstrated above (the pair is even controllable).

Pair (Mc, A) must be detectable:

Mc is such that Qc = Mcᵗ Mc (with Mc = [m1 m2])

⇒ Qc = [125.1213 −45.2122; −45.2122 18.6652] = [m1²  m1m2; m1m2  m2²]

m1 = 11.1848 and m2 = −4.3203, or m1 = −11.1848 and m2 = 4.3203

Observability matrix of the pair (Mc, A): 𝒪_{Mc,A} = [Mc; Mc A] = [11.1848 −4.3203; −22.3715 18.0512]

det 𝒪_{Mc,A} = 105.2643 ≠ 0

Then the pair (𝑀𝑐 , 𝐴) is observable.

Resolution of ARE:

𝐴𝑇 𝑃𝑐 + 𝑃𝑐 𝐴 + 𝑄𝑐 − 𝑃𝑐 𝐵𝑅𝑐 −1 𝐵 𝑇 𝑃𝑐 = 0
With Pc = [p1 p2; p2 p3], this gives:

[−2 0; 2 1][p1 p2; p2 p3] + [p1 p2; p2 p3][−2 2; 0 1] + [125.1213 −45.2122; −45.2122 18.6652] − [p1 p2; p2 p3][0; 1][0 1][p1 p2; p2 p3] = [0 0; 0 0]

⇒ Pc = [30.1965 2.0822; 2.0822 6.2909]

⇒ K = Rc⁻¹ Bᵀ Pc = [0 1][30.1965 2.0822; 2.0822 6.2909]

⇒ K = [2.0822 6.2909]


Checking the stability of the closed-loop system.

A_cl = A − BK = [−2 2; −2.0822 −5.2909]

Its characteristic polynomial: π_Acl(s) = det(sI − A_cl) = s² + 7.2909 s + 14.746

As the degree of this polynomial is 2, the necessary Routh condition becomes sufficient; we can therefore conclude that the system is stable because all the coefficients of this polynomial have the same sign.

Its eigenvalues are Λ_c = {−3.6455 + 1.207i, −3.6455 − 1.207i}; the real parts of both eigenvalues are negative.

IV.4.3 Asymptotic state observer

When the state variables are not measurable and the pair (C, A) is observable, we can estimate x by using an observer:

[Block diagram: the dynamic system maps u(t) to y(t); an observer driven by u(t) and y(t) produces the state estimate x̂(t)]

Fig.4.17. Asymptotic state observer

The state model of the observer:

x̂̇(t) = A x̂(t) + B u(t) + L (y(t) − ŷ(t))
ŷ(t) = C x̂(t) + D u(t)        (4.38)

This observer can be rewritten:

x̂̇(t) = (A − LC) x̂(t) + (B − LD) u(t) + L y(t)
ŷ(t) = C x̂(t) + D u(t)        (4.39)

The problem consists in finding the observer gain matrix 𝐿 that stabilises the observer
while making 𝑥ො(𝑡) tend "very quickly" towards 𝑥(𝑡).

Estimation error:

𝑥̃(𝑡) = 𝑥(𝑡) − 𝑥ො(𝑡) (4.40)

Then, the state model of the estimation error is:

x̃̇(t) = (A − LC) x̃(t),  where A_o = A − LC
ỹ(t) = C x̃(t)        (4.41)


L is calculated to stabilise the observer matrix A_o (the real parts of all eigenvalues of A_o must be negative).

The problem of finding L is dual to that of finding K, because the eigenvalues of A − LC are the same as those of Aᵀ − CᵀLᵀ. Therefore, any solution that leads to a state feedback K stabilising A − BK can be transposed, by duality, to the problem of finding the filtering gain L: it suffices to replace A by Aᵀ and B by Cᵀ, and to transpose the resulting gain to obtain L.

The calculation of 𝑳 is possible on condition that the system is observable or, otherwise,
detectable.

a. Monovariable system
In the case of a monovariable system, it is sufficient to fix the desired 𝑛 eigenvalues of
𝐴𝑜 to calculate 𝐿 ∈ ℝ𝑛x1 .

𝑛 desired eigenvalues of 𝐴𝑜 are fixed using the PPA (from the 𝑛 eigenvalues of 𝐴) or the
PPB (from the zero of the system) technique.

Let Λ_o = {λ_o1, ⋯, λ_on} be the set of desired eigenvalues of A_o. Then, the desired characteristic polynomial of A_o is:

𝜋𝑜𝑑 (𝑠) = ∏𝑛𝑖=1(𝑠 − 𝜆𝑜 𝑖 ) = 𝑠 𝑛 + 𝑎𝑑 1 𝑠 𝑛−1 + ⋯ + 𝑎𝑑 𝑛−1 𝑠 + 𝑎𝑑 𝑛 (4.42)

From 𝐴𝑜 = 𝐴 − 𝐿𝐶 we can calculate the characteristic polynomial of 𝐴𝑜 :

𝜋𝐴𝑜 (𝑠) = det(𝑠𝐼 − 𝐴 + 𝐿𝐶) (4.43)

We just need to equate the two characteristic polynomials (4.42) and (4.43) (π_od(s) = π_Ao(s)) to calculate the observer gain L.

In Matlab we can use the instruction place or acker to calculate L: L = place(Aᵗ, Cᵗ, Λ_o) or L = acker(Aᵗ, Cᵗ, Λ_o), followed by the transposition L = Lᵗ.

Example: Consider the system ẋ(t) = Ax(t) + Bu(t), y(t) = Cx(t), where A = [−2 2; 0 1], B = [0; 1] and C = [1 0].

We can check if the system is observable. Observability matrix:

𝒪_{C,A} = [C; CA] = [1 0; −2 2]  ⇒  det 𝒪_{C,A} = 2 ≠ 0  ⇒  rank 𝒪_{C,A} = 2
⇒ The system is observable.

For example, using the PPA technique with T_o = 0.5, the desired eigenvalues of A_o will be Λ_o = {−2, −2}.

Then, the desired characteristic polynomial of 𝐴𝑜 is:


𝜋𝑜𝑑 (𝑠) = (𝑠 + 2)(𝑠 + 2) = 𝑠 2 + 4𝑠 + 4

The observer gain: L = [l1; l2]

The state matrix of the observer: A_o = A − LC = [−2 − l1  2; −l2  1].

From 𝐴𝑜 = 𝐴 − 𝐿𝐶 we can calculate the characteristic polynomial of 𝐴𝑜 :

π_Ao(s) = det(sI − A + LC) = det([s + 2 + l1  −2; l2  s − 1]) = (s + 2 + l1)(s − 1) + 2l2

⇒ π_Ao(s) = s² + (1 + l1)s − l1 + 2l2 − 2

We identify π_Ao(s) with π_od(s): s² + (1 + l1)s − l1 + 2l2 − 2 = s² + 4s + 4, which gives two equalities:

1 + l1 = 4 and −l1 + 2l2 − 2 = 4  ⇒  l1 = 3 and l2 = 4.5

Finally L = [3; 4.5].
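This observer gain can be checked in Matlab by duality (acker again, since the desired pole −2 is repeated):

```matlab
A = [-2 2; 0 1]; C = [1 0];
L = acker(A', C', [-2 -2])'    % returns [3; 4.5]
eig(A - L*C)                   % both eigenvalues at -2
```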

b. Multivariable system
As for the LQ control technique, we define two matrices Q_o ≽ 0 and R_o ≻ 0 (Q_o ∈ ℝ^{n×n} and R_o ∈ ℝ^{p×p}).

Matrices 𝑄𝑜 and 𝑅𝑜 are fixed by the user.

Remark: This technique can be applied to both multivariable and monovariable systems.

The observer gain is:

𝐿 = 𝑃𝑜 𝐶 𝑇 𝑅𝑜 −1 (4.44)

Where 𝑃𝑜 is a solution of algebraic Riccati equation (ARE) :

𝐴𝑃𝑜 + 𝑃𝑜 𝐴𝑇 + 𝑄𝑜 − 𝑃𝑜 𝐶 𝑇 𝑅𝑜 −1 𝐶𝑃𝑜 = 0 (4.45)

𝑃𝑜 is a symmetric definite positive matrix.

The conditions for the existence of the solution of ARE are:

 Pair (C, A) must be detectable
 Pair (A, M_o) must be stabilisable (M_o is a square root of Q_o, i.e. Q_o = M_o M_oᵗ)

Matlab instruction: [𝐿, 𝑃𝑜 ] = 𝑙𝑞𝑟(𝐴𝑡 , 𝐶 𝑡 , 𝑄𝑜 , 𝑅𝑜 ) and 𝐿 = 𝐿𝑡

There are different rules to fix matrices 𝑄𝑜 and 𝑅𝑜 . One of them is the De Larminat’s rule.


 De Larminat’s rule:

R_o is fixed to the identity matrix. Q_o is calculated from the partial observability Gramian over the observation horizon T_o:

R_o = I
Q_o = (T_o · W_o(T_o))⁻¹        (4.46)

Where:
W_o(T_o) = ∫₀^To e^(Aᵀt) Cᵀ C e^(At) dt        (4.47)

With this choice of the weighting matrices Q_o and R_o, the eigenvalues of A_o are all placed to the left of −1/T_o (it is somewhat equivalent to the PPA technique for monovariable systems).

Example: Consider the same system as above, ẋ(t) = Ax(t) + Bu(t), y(t) = Cx(t), where A = [−2 2; 0 1], B = [0; 1] and C = [1 0].

Let T_o = 0.5 and R_o = 1.

Calculation of 𝑄𝑜 :
W_o(T_o) = ∫₀^To e^(Aᵀt) Cᵀ C e^(At) dt

Then: C e^(At) = [1 0][e^(−2t)  (2/3)(e^t − e^(−2t)); 0  e^t] = [e^(−2t)  (2/3)(e^t − e^(−2t))]

⇒ e^(Aᵀt) Cᵀ C e^(At) = [e^(−4t)  (2/3)(e^(−t) − e^(−4t)); (2/3)(e^(−t) − e^(−4t))  (4/9)(e^t − e^(−2t))²]

⇒ W_o(T_o) = ∫₀^To e^(Aᵀt) Cᵀ C e^(At) dt
= [1/4 − (1/4)e^(−4To)                        (2/3)(−e^(−To) + (1/4)e^(−4To) + 3/4);
   (2/3)(−e^(−To) + (1/4)e^(−4To) + 3/4)      (4/9)((1/2)e^(2To) + 2e^(−To) − (1/4)e^(−4To) − 9/4)]

For T_o = 0.5:

⇒ W_o(0.5) = [0.2162 0.1182; 0.1182 0.1282]


⇒ Q_o = (T_o · W_o(T_o))⁻¹ = [18.6652 −17.2144; −17.2144 31.4814]
Checking conditions of resolution of ARE:

Pair (C, A) must be detectable: this is the case, as demonstrated above (the pair is even observable).

Pair (A, M_o) must be stabilisable:

M_o is such that Q_o = M_o M_oᵗ (with M_o = [m1; m2])

⇒ Q_o = [18.6652 −17.2144; −17.2144 31.4814] = [m1²  m1m2; m1m2  m2²]

m1 = 4.3202 and m2 = −5.6108, or m1 = −4.3202 and m2 = 5.6108

Controllability matrix of the pair (A, M_o): 𝒞_{A,Mo} = [M_o  A M_o] = [4.3203 −19.8623; −5.6108 −5.6108]

det 𝒞_{A,Mo} = −135.6847 ≠ 0

Then the pair (𝐴, 𝑀𝑜 ) is controllable.

Resolution of ARE:

𝐴𝑃𝑜 + 𝑃𝑜 𝐴𝑇 + 𝑄𝑜 − 𝑃𝑜 𝐶 𝑇 𝑅𝑜 −1 𝐶𝑃𝑜 = 0
With P_o = [p1 p2; p2 p3], this gives:

[−2 2; 0 1][p1 p2; p2 p3] + [p1 p2; p2 p3][−2 0; 2 1] + [18.6652 −17.2144; −17.2144 31.4814] − [p1 p2; p2 p3][1; 0][1 0][p1 p2; p2 p3] = [0 0; 0 0]

⇒ P_o = [6.2909 11.5185; 11.5185 50.5975]

⇒ L = P_o Cᵀ R_o⁻¹ = [6.2909 11.5185; 11.5185 50.5975][1; 0]

⇒ L = [6.2909; 11.5185]
Checking the stability of the observer.

A_o = A − LC = [−8.2909 2; −11.5185 1]

Its characteristic polynomial: π_Ao(s) = det(sI − A_o) = s² + 7.2909 s + 14.746

As the degree of this polynomial is 2, the necessary Routh condition becomes sufficient; we can therefore conclude that the observer is stable because all the coefficients of this polynomial have the same sign.

Its eigenvalues are Λ_o = {−3.6455 + 1.207i, −3.6455 − 1.207i}; the real parts of both eigenvalues are negative.


c. Presence of a non-measurable disturbance


In the presence of a non-measurable disturbance, the state model of the system is:

ẋ(t) = A x(t) + B u(t) + E d(t)
y(t) = C x(t) + D u(t)        (4.48)

Where 𝑑(𝑡) is the disturbance.

We need to know the exact dynamic of the disturbance:

ẋ_d(t) = A_d x_d(t)
d(t) = C_d x_d(t)        (4.49)

As an example, if the disturbance is constant, its dynamic model is ẋ_d(t) = 0, d(t) = x_d(t).

Let x_G(t) = [x(t); x_d(t)]. The model of the whole (augmented) system is:

ẋ_G(t) = [A  E; 0  A_d] x_G(t) + [B; 0] u(t) = A_G x_G(t) + B_G u(t)
y(t) = [C  0] x_G(t) + D u(t) = C_G x_G(t) + D u(t)        (4.50)

The state model of the observer:

x̂̇_G(t) = A_G x̂_G(t) + B_G u(t) + L_G (y(t) − ŷ(t))
ŷ(t) = C_G x̂_G(t) + D u(t)        (4.51)

Thus, we observe the state of the system and that of the disturbance.
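A minimal Matlab sketch of this disturbance-augmented observer for a constant input disturbance; the choices E = B, Ad = 0, Cd = 1 and the triple observer pole at −2 are assumptions made for illustration:

```matlab
A = [-2 2; 0 1]; B = [0; 1]; C = [1 0];
E = B; Ad = 0;                       % constant disturbance acting at the plant input (assumed)
AG = [A E; zeros(1,2) Ad];           % augmented state matrix of (4.50)
BG = [B; 0];
CG = [C 0];
rank(obsv(AG, CG))                   % = 3: the augmented pair is observable
LG = acker(AG', CG', [-2 -2 -2])';   % augmented observer gain (repeated pole, hence acker)
eig(AG - LG*CG)                      % all three eigenvalues at -2
% The last component of the estimated augmented state is the disturbance estimate.
```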

IV.4.4 Observer-based state feedback control

The principle diagram of the observer-based state feedback control is:

[Block diagram: u(t) = v(t) − K x̂(t) drives the dynamic system, which produces y(t); an observer with gain L uses u(t) and y(t) to produce x̂(t)]

Fig.4.18. Observer based state feedback control diagram

To simplify the equations, we will consider only the most frequently encountered cases
of inertial systems where 𝐷 = 0.

The dynamic equations of such a system are:


ẋ(t) = A x(t) + B u(t)
y(t) = C x(t)

x̂̇(t) = (A − LC) x̂(t) + B u(t) + L y(t)
u(t) = −K x̂(t) + Kr y*(t)        (4.52)

Or, in matrix form:

[ẋ(t); x̂̇(t)] = [A  −BK; LC  A − BK − LC] [x(t); x̂(t)] + [B Kr; B Kr] y*(t),   where A_clo = [A  −BK; LC  A − BK − LC]
y(t) = [C  0] [x(t); x̂(t)]        (4.53)

a. Stability of the overall system


Question: is this system stable?

Change of basis: let [x(t); x̃(t)] = [I 0; I −I] [x(t); x̂(t)], i.e. [x(t); x̂(t)] = [I 0; I −I] [x(t); x̃(t)].

Equation (4.53) becomes:

[ẋ(t); x̃̇(t)] = [A − BK  BK; 0  A − LC] [x(t); x̃(t)] + [B Kr; 0] y*(t)
y(t) = [C  0] [x(t); x̃(t)]        (4.54)

From this equation we conclude that the set of eigenvalues of the whole system is the union (with multiplicities) of the eigenvalues of A − BK and those of A − LC: σ(A_clo) = σ(A − BK) ⨄ σ(A − LC), where σ(·) denotes the set of eigenvalues of a matrix.

We can therefore conclude that, since A − BK and A − LC are both made stable by design, the overall system is stable.
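A quick numerical check of this separation property in Matlab, using the gains obtained in the monovariable examples above (K = [0 3] and L = [3; 4.5]):

```matlab
A = [-2 2; 0 1]; B = [0; 1]; C = [1 0];
K = [0 3]; L = [3; 4.5];
Aclo = [A -B*K; L*C A-B*K-L*C];        % closed-loop matrix of (4.53)
sort(eig(Aclo))                        % equals the eigenvalues of A-B*K and A-L*C together
sort([eig(A-B*K); eig(A-L*C)])         % all at -2 in this example
```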

b. Separation principle
We can first calculate the feedback gain K (using pole placement or the LQ technique) without considering the observer, and then calculate the observer gain L (using pole placement or the LQ technique) without considering the state feedback control.

We can also start by calculating L and then K: the order of the calculations is not important.

As with the robust RST control, to ensure robust stability we need to set clearly separated dynamics for the state feedback and for the observer. In the case of primal LTR, the observer dynamics must be very fast compared with those of the state feedback. In the case of dual LTR, the state feedback dynamics must be very fast compared with those of the observer.
