Notes3002 2021 Sem1
Simon Illingworth
2021 semester 1
Contents

3 Linearization
  3.1 Finding equilibria
  3.2 Linearization
  3.3 From one state to many states
  3.4 Linearization with inputs

5 Second-order systems
  5.1 Standard form
  5.2 Laplace transforms
  5.3 Poles
  5.4 Free response
  5.5 Transfer function
  5.6 Impulse response
  5.7 Step response
  5.8 Characterizing the underdamped step response
  5.9 Links and further reading

6 First-order systems
  6.1 Cup of tea as a first-order system
  6.2 Laplace transforms
  6.3 Poles
  6.4 Free response
  6.5 Transfer function
  6.6 Impulse response
  6.7 Step response
J = mr²
1.3.1 Capacitor
A capacitor stores energy in an electric field. The simplest implementation of a capacitor consists of two pieces of conducting material separated by a dielectric material (a material in which an electric field can be established without a significant flow of charge through it). The charge on an ideal capacitor is

q = Cv,

where C is the capacitance.
1.3.2 Inductor
An inductor stores energy in a magnetic field. The standard example of an inductor is a helical coil of conducting wire. When current flows through a conducting structure, a magnetic field is established in the space or material around the structure. If the current varies as a function of time then the magnetic field will also vary. Lenz's law then says that this changing magnetic field will generate voltage differences in the conducting structure which oppose the changing current. This property, whereby a voltage difference is set up that opposes the rate of change of current flow, is called inductance, and the governing equation is

i = (1/L) ∫₀ᵗ v dt + i₀,

where L is the inductance with units of henries (H = V·s/A). An ideal inductor provides zero resistance to the flow of charge.
1.3.3 Resistor
A resistor dissipates energy. The simplest form of a resistor is
a thin metal wire. Unlike capacitors and inductors it doesn’t
store energy because it involves no electric-field or magnetic-field
effects.
2. Kirchhoff's voltage law, which says that the sum of all voltage drops around a circuit loop is zero:

   Σ_elements v_drop = 0.    (1.8)
for this is actually quite subtle: one of its terminals is the frame of
reference we have chosen (the ‘mechanical ground’). So the mass
behaves differently in an important way.
We can also define a 'through' variable, so-called because it doesn't change across an element but instead is constant through it. For the mechanical system the through variable is force. For the electrical system the through variable is current.
To summarize, in the velocity-voltage analogy we have the follow-
ing correspondence:
But we now know that the mass does not behave exactly like a
capacitor. Instead it behaves like a grounded capacitor. This is
because one of the mass’s terminals is connected to the mechanical
ground, which is the frame of reference.
Is it possible to come up with a device that behaves like a mass but
has two terminals instead of one? In other words, is it possible to
come up with the true mechanical analogue of the capacitor?
Figure 1.3: An implementation of the inerter. (Source: SICE Annual Conference, Fukui, Japan, 4 August 2003.)

r1 = radius of rack pinion
r2 = radius of gear wheel
r3 = radius of flywheel pinion
γ = radius of gyration of flywheel
m = mass of the flywheel
α1 = γ/r3 and α2 = r2/r1

Equation of motion: F = (m α1² α2²)(v̇2 − v̇1)

(Assumes mass of gears, housing etc is negligible.)

There are two main advantages to the inerter:

1. A broader range of mechanical systems can be realized when compared to using a standard mass. (You can probably imagine that the electrical circuits one could generate would be limited if any capacitors always had to be grounded, and this
Element       Law                  Energy / Power
Mass          F = m dv/dt          E = ½mv²
Spring        F = k ∫v dt          E = F²/(2k)
Damper        F = cv               P = cv²
Rot. mass     T = J dΩ/dt          E = ½JΩ²
Rot. spring   T = K ∫Ω dt          E = T²/(2K)
Capacitor     i = C dv/dt          E = ½Cv²
Inductor      i = (1/L) ∫v dt      E = ½Li²
Resistor      i = v/R              P = v²/R
(Inerter)     F = b dv/dt          E = ½bv²
but it is still true that the total energy of the system is Etotal = Emass + Espring (the damper stores no energy) and so we can still write

dEtotal/dt = q̇(mq̈ + kq).

Now to make use of our modified governing equation we can write this as

dEtotal/dt = q̇(mq̈ + cq̇ + kq − cq̇)
           = −cq̇².
(We have set the forcing to be zero for now.) How would we
represent this system in the state-space form (2.1)? The first
thing to notice is that equation (2.2) is second-order (because the
highest time derivative is two) while the state-space form (2.1)
involves n equations, each of which is first-order (since the highest
derivative in it is one). So we somehow need to convert a second-
order equation into two first-order equations. We can do this by
doubling the number of equations. If we set the state to be
x(t) = [x1(t), x2(t)]ᵀ = [q(t), q̇(t)]ᵀ
Notice that we used up one equation to state the obvious: the first
row of (2.3) just says that q̇(t) = q̇(t). This is the price we pay for
writing one second-order equation as two first-order equations. In
other words, one equation of second order became two equations,
each of first order. All of the dynamics of the mass-spring-damper
is contained in the second row of (2.3) (which is just the original equation of motion).
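A quick numerical sketch of this conversion (Python here; the parameter values m, c, k are illustrative, not from the notes):

```python
import numpy as np

# Mass-spring-damper m*q'' + c*q' + k*q = 0 written as xdot = A x
# with state x = [q, qdot]. Parameter values are illustrative only.
m, c, k = 1.0, 0.5, 4.0

A = np.array([[0.0, 1.0],          # first row: states the obvious, qdot = qdot
              [-k/m, -c/m]])       # second row: qddot = -(k/m) q - (c/m) qdot

# Check against the original second-order equation at one state:
x = np.array([0.3, -1.2])          # q = 0.3, qdot = -1.2
xdot = A @ x
assert np.isclose(xdot[0], x[1])                      # q's derivative is qdot
assert np.isclose(m*xdot[1] + c*x[1] + k*x[0], 0.0)   # m*qddot + c*qdot + k*q = 0
```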
2. Its derivative is

   (d/dt) e^{At} = A e^{At}.    (2.11)

3. It commutes with A (i.e. the order of their multiplication does not matter):

   A e^{At} = e^{At} A.
You can check the solution (2.9) by substituting it into the two
sides of (2.8) and using (2.11) for the derivative of eAt .
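This check is easy to do numerically; a sketch using scipy (the matrix A below is an arbitrary illustrative choice):

```python
import numpy as np
from scipy.linalg import expm

# Properties of the matrix exponential, checked numerically for an
# illustrative 2x2 matrix.
A = np.array([[0.0, 1.0],
              [-4.0, -0.5]])
t, dt = 0.7, 1e-6

E = expm(A * t)

# Property 2: d/dt e^{At} = A e^{At}, i.e. (2.11), via finite differences
dE = (expm(A * (t + dt)) - expm(A * (t - dt))) / (2 * dt)
assert np.allclose(dE, A @ E, atol=1e-4)

# Property 3: e^{At} commutes with A
assert np.allclose(A @ E, E @ A)

# x(t) = e^{At} x0 satisfies the initial condition, since e^{A*0} = I
x0 = np.array([1.0, -2.0])
assert np.allclose(expm(A * 0.0) @ x0, x0)
```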
AV = V Λ    (2.13)

where

Λ = diag(λ1, …, λn),    V = [v1 · · · vn].

That is, the matrix Λ contains on its diagonal the n eigenvalues λi; and the matrix V contains in its columns the n eigenvectors vi. (If you're not sure about the order of terms in (2.13), try writing it out for a 2 × 2 matrix.)

Equation (2.13) can be rearranged for A:

A = V Λ V⁻¹    (2.14)
λi = σi ± jωi .
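The decomposition (2.14) gives a practical way to compute e^{At}: e^{At} = V e^{Λt} V⁻¹, where e^{Λt} is diagonal. A numerical sketch (the matrix is an illustrative choice with a complex-conjugate eigenvalue pair):

```python
import numpy as np
from scipy.linalg import expm

# e^{At} via the eigendecomposition A = V Lambda V^{-1}:
# e^{At} = V e^{Lambda t} V^{-1}, where e^{Lambda t} is diagonal.
A = np.array([[0.0, 1.0],
              [-4.0, -0.5]])
t = 1.3

lam, V = np.linalg.eig(A)                      # eigenvalues and eigenvector columns
E_eig = V @ np.diag(np.exp(lam * t)) @ np.linalg.inv(V)

assert np.allclose(E_eig, expm(A * t))         # matches the direct computation
assert np.allclose(A @ V, V @ np.diag(lam))    # A V = V Lambda, i.e. (2.13)
```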
and so the B matrix for this system is B = [0  1/m]ᵀ.
(We will not derive this.) The solution (2.21) is not terribly easy
to use because the integral term can be quite complicated to eval-
uate. For that reason we will not use (2.21) in this course and
will instead use Laplace transform techniques when there are in-
puts. But (2.21) is included here for completeness. (It also helps
to motivate the Laplace transform techniques of chapter 4, since
(2.21) shows us what we would have to do without them!)
² One way to think about this is that the order of equation (2.25) is zero because the highest time derivative is zero. Then the A matrix is of dimension 0 × 0, which is another way of saying we don't need it.
3 Linearization

3.1 Finding equilibria
ẋ = f(x).    (3.1)

The vector

x̄ = [x̄1, …, x̄n]ᵀ    (3.3)

is an equilibrium if

f(x̄) = [0, …, 0]ᵀ.    (3.4)
In words: the vector x̄ is an equilibrium if, when we 'f' it, we get back a vector of zeros. This makes sense. The governing equation (3.1) tells us that the time derivative of x is given by f(x). So if f(x̄) gives back zero, then the time derivative of x is zero, and the state does not change in time. In other words, we are at an equilibrium.

Note that an equilibrium x̄ is not necessarily unique. A dynamical system of the form (3.1) can have zero, one, or many equilibria.
3.2 Linearization
Once we have an equilibrium x̄, the next thing we are interested in is linearizing the system about this equilibrium. To keep things simple, let's consider a dynamical system with a single state to start with:

ẋ = f(x).

An example function f(x) is plotted in figure 3.1.

Now consider a small perturbation δ away from the equilibrium x̄ so that

x = x̄ + δ
or

δ = x − x̄
δ̇ = ẋ.

Substituting into the governing equation and Taylor-expanding f about x̄:

δ̇ = ẋ = f(x) ≈ f(x̄) + ∂f/∂x|x̄ (x − x̄) + (1/2!) ∂²f/∂x²|x̄ (x − x̄)² + …

or, using f(x̄) = 0 and δ = x − x̄,

δ̇ ≈ ∂f/∂x|x̄ δ + (1/2) ∂²f/∂x²|x̄ δ² + …    (3.5)
Here comes the key point: when we linearize, we retain only the leading-order (i.e. linear) term so that we are left with simply

δ̇ = ∂f/∂x|x̄ δ    (3.6)

or

δ̇ = aδ    (3.7)

with the constant a given by

a = ∂f/∂x|x̄.
Pay careful attention to what this says: it says that ẋ1 (t) depends
on both x1 (t) and x2 (t); and that ẋ2 (t) depends on both x1 (t) and
x2 (t). How would we do the same Taylor series expansion for this
system? This time we will have two equations; and we need to
account for derivatives not only with respect to x1 but also with
respect to x2 . The first equation is
δ̇1 ≈ f1(x̄1, x̄2) + ∂f1/∂x1|x̄ δ1 + (1/2) ∂²f1/∂x1²|x̄ δ1² + …
                  + ∂f1/∂x2|x̄ δ2 + (1/2) ∂²f1/∂x2²|x̄ δ2² + …

Retaining only the linear terms (and using f1(x̄1, x̄2) = 0):

δ̇1 ≈ ∂f1/∂x1|x̄ δ1 + ∂f1/∂x2|x̄ δ2.

Similarly for the second equation:

δ̇2 ≈ ∂f2/∂x1|x̄ δ1 + ∂f2/∂x2|x̄ δ2.
Putting both equations together in matrix form,

δ̇ = [∂f1/∂x1  ∂f1/∂x2; ∂f2/∂x1  ∂f2/∂x2]|x̄ δ = A δ.
So that was the two-dimensional case. You might be able to
imagine how this generalizes to n dimensions. The A matrix be-
comes
    [∂f1/∂x1 · · · ∂f1/∂xn]
A = [   ⋮      ⋱      ⋮   ]        (3.10)
    [∂fn/∂x1 · · · ∂fn/∂xn] x̄
So A is the matrix of derivatives of f, all evaluated at the equilibrium x̄. The dimension of A is n × n because there are n functions f1 … fn, and each function needs to be differentiated with respect to each of the n states x1 … xn. This matrix is important enough that it has a name: we call it the Jacobian of f. It represents the linearization of the nonlinear function f evaluated at the equilibrium x̄.
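The Jacobian can also be approximated by finite differences, which is a useful check on hand-computed derivatives. A sketch for an illustrative 2-state system (a damped pendulum, not the cart-pendulum of the notes):

```python
import numpy as np

# Numerical Jacobian (3.10) by central finite differences, checked against
# the analytic result for an illustrative 2-state system:
# x1dot = x2, x2dot = -sin(x1) - 0.2*x2.
def f(x):
    return np.array([x[1], -np.sin(x[0]) - 0.2 * x[1]])

def jacobian(f, xbar, eps=1e-6):
    n = len(xbar)
    J = np.zeros((n, n))
    for j in range(n):              # derivative of every f_i w.r.t. state x_j
        e = np.zeros(n); e[j] = eps
        J[:, j] = (f(xbar + e) - f(xbar - e)) / (2 * eps)
    return J

xbar = np.zeros(2)                  # equilibrium: f(xbar) = 0
assert np.allclose(f(xbar), 0.0)
A = jacobian(f, xbar)
# analytic Jacobian at the origin: [[0, 1], [-cos(0), -0.2]]
assert np.allclose(A, [[0.0, 1.0], [-1.0, -0.2]], atol=1e-6)
```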
Figure 3.2: Cart-pendulum system.
The equations of motion for the cart-pendulum system are derived
in § 3.A. They are
The first equation represents the equation of motion for the pen-
dulum and the second equation represents the equation of motion
for the cart. Notice that they are nonlinear. We want to put these
two equations in the form
ẋ = f (x, u).
x1 = x
x2 = θ
x3 = ẋ
x4 = θ̇,

giving

ẋ1 = x3
ẋ2 = x4
ẋ3 = (−m l x4² sin x2 − m g cos x2 sin x2 − f) / (m cos² x2 − (M + m))
ẋ4 = ((m + M) g sin x2 + m l x4² sin x2 cos x2 + f cos x2 − (M/m + sin² x2) d) / (l [m cos² x2 − (M + m)])
This is complicated, and you can probably imagine that it’s not
very nice to linearize! But it’s not as bad as it looks for two
reasons:
• The first two equations are easy to linearize because they are
already linear and each only depends on one other state.
¹ A reminder that the inverse of a 2 × 2 matrix is

A⁻¹ = [a b; c d]⁻¹ = (1/det(A)) [d −b; −c a] = (1/(ad − bc)) [d −b; −c a].
• The third and fourth equations are harder, but the only tricky
case is differentiating them with respect to x2 .
So out of the sixteen entries in the A matrix, many of them are
zero, and only two of them are difficult to evaluate. These are
∂f3 /∂x2 and ∂f4 /∂x2 .
Figure 3.3: Free-body diagrams for the pendulum and the cart.
The free-body diagrams for the cart and for the pendulum are
shown in figure 3.3. Note that the coordinates of the cart are
(x, y) and the coordinates of the pendulum are (xp , yp ) with
xp = x + l sin θ
yp = −l cos θ.
3.A.1 Pendulum
First let’s consider the pendulum. Summing forces in the x-
direction:
Rx + d cos θ = m d²xp/dt²
             = m (d²/dt²)(x + l sin θ)
             = m (d/dt)(ẋ + l θ̇ cos θ)

or

Rx = m(ẍ − l θ̇² sin θ + l θ̈ cos θ) − d cos θ    (3.21)
Summing forces in the y-direction:
Ry − mg + d sin θ = m d²yp/dt²
                  = m (d²/dt²)(−l cos θ)
                  = m (d/dt)(l θ̇ sin θ)

or

Ry = m(l θ̈ sin θ + l θ̇² cos θ) + mg − d sin θ    (3.22)
Taking moments about the pendulum’s point-mass, noting that
the moment of inertia of a point-mass is zero (I = 0):
Substituting (3.21) and (3.22) into (3.23) and after some rear-
ranging we get
3.A.2 Cart
Now for the cart. We only need consider the force balance in x
because the cart is supported in y and therefore cannot accelerate
in that direction. The force balance in the x-direction is
M ẍ = f − Rx .
and I ask you to find the solution in time, q(t), when the initial
condition is zero and the forcing is
How would you do it? It turns out that this is a difficult problem
to solve in the time domain. One way to do it would be to convert
the governing equation (4.1) to a linear state-space model and
then solve for q(t) using (2.21). Converting to a state-space model
is straightforward enough but solving the integral in (2.21) will
4.2 Definition of the Laplace transform
s = σ + jω.
with

g(t) = 1/ε for 0 ≤ t ≤ ε, and 0 otherwise,

for some small pulse width ε.
An important property of the impulse function is that, for any function f(t),

∫₀^∞ δ(t − τ) f(t) dt = f(τ),

so the integral 'picks out' the value of f(t) at t = τ. The Laplace transform of the impulse function is

L{δ(t)} = ∫₀^∞ e^{−st} δ(t) dt
        = e^{−st}|_{t=0}
        = e⁰

so

L{δ(t)} = 1    (4.6)
The impulse function δ(t) also goes by the name of the Dirac delta
function after Paul Dirac.
4.4.1 Linearity/superposition
For two functions f1(t) and f2(t),

L{αf1(t) + βf2(t)} = ∫₀^∞ e^{−st} (αf1(t) + βf2(t)) dt
                   = α ∫₀^∞ e^{−st} f1(t) dt + β ∫₀^∞ e^{−st} f2(t) dt
                   = αL{f1(t)} + βL{f2(t)}
                   = αF1(s) + βF2(s)

so

L{αf1(t) + βf2(t)} = αF1(s) + βF2(s)    (4.7)
Note that this does not apply to the product of functions, i.e. L{f1(t)f2(t)} ≠ F1(s)F2(s)!
x(t)                     X(s) = ∫ x(t)e^{−st} dt     Region of convergence
δ(t)                     1                           all s
u(t)                     1/s                         Re(s) > 0
e^{−at} u(t)             1/(s + a)                   Re(s) > −a
t^m e^{−at} u(t)         m!/(s + a)^{m+1}            Re(s) > −a
cos(ω0 t) u(t)           s/(s² + ω0²)                Re(s) > 0
sin(ω0 t) u(t)           ω0/(s² + ω0²)               Re(s) > 0
e^{−at} cos(ω0 t) u(t)   (s + a)/((s + a)² + ω0²)    Re(s) > −a
e^{−at} sin(ω0 t) u(t)   ω0/((s + a)² + ω0²)         Re(s) > −a

Table 4.1: Common Laplace transform pairs.
4.4.2 Time-scale
What if we stretch time by a factor of a? The Laplace transform
of f (at) is
L{f(at)} = ∫₀^∞ e^{−st} f(at) dt = (1/a) F(s/a),

which follows from the substitution t′ = at.
4.4.3 Derivative
The Laplace transform of the derivative of x(t) is
L{ẋ(t)} = ∫₀^∞ e^{−st} (d/dt)x(t) dt.

Now we use integration by parts, ∫ₐᵇ uv′ dt = uv|ₐᵇ − ∫ₐᵇ u′v dt, with u = e^{−st} and v′ = (d/dt)x(t), to give

∫₀^∞ e^{−st} (d/dt)x(t) dt = e^{−st} x(t)|₀^∞ − ∫₀^∞ (−s e^{−st}) x(t) dt
                           = −x(0) + s ∫₀^∞ e^{−st} x(t) dt

so

L{ẋ(t)} = sX(s) − x(0)    (4.9)
4.4.6 Integration
If we integrate x(t) and then take the Laplace transform we get
L{∫₀ᵗ x(t) dt} = (1/s) X(s)    (4.11)
4.4.7 Time-shift
If we time-shift x(t) by an amount of time τ then we get
L{x(t − τ )u(t − τ )} = e−τ s X(s) (4.12)
where u(t) is the step function.
where pi are called the poles of G(s) and the coefficients bi are sometimes called the residues. (We assume for now that there are no repeated poles.)
Example: G(s) = 1/(s² + s)

Factoring, G(s) = 1/(s(s + 1)) = b1/s + b2/(s + 1), and the poles are distinct. Using the 'cover-up' method we have

b1 = [1/(s + 1)]|_{s=0} = 1

and

b2 = [1/s]|_{s=−1} = −1.
Example: G(s) = 1/((s + 2)²(s + 1))

We have

G(s) = 1/((s + 2)²(s + 1)) = b1/(s + 2) + b2/(s + 2)² + c1/(s + 1).

Multiply both sides through by the denominator of G(s):

1 = b1(s + 2)(s + 1) + b2(s + 1) + c1(s + 2)².

Setting s = −1 gives

1 = c1(−1 + 2)² ⟹ c1 = 1.

Setting s = −2 gives

1 = b2(−2 + 1) ⟹ b2 = −1,

and matching the coefficients of s² gives 0 = b1 + c1 ⟹ b1 = −1, and so we have

G(s) = 1/(s + 1) − 1/(s + 2)² − 1/(s + 2).
L⁻¹{1/(s + a)^{m+1}} = (t^m/m!) e^{−at}
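These partial-fraction steps can be checked symbolically, e.g. with sympy:

```python
import sympy as sp

s = sp.symbols('s')

# Example 1: distinct poles (the cover-up method result)
G1 = 1 / (s**2 + s)
assert sp.simplify(sp.apart(G1, s) - (1/s - 1/(s + 1))) == 0

# Example 2: repeated pole at s = -2
G2 = 1 / ((s + 2)**2 * (s + 1))
assert sp.simplify(sp.apart(G2, s) - (1/(s + 1) - 1/(s + 2) - 1/(s + 2)**2)) == 0
```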
5.3 Poles
The two terms on the right-hand side of (5.3) share the same
denominator, which suggests that it has some special significance.
Its significance is that its roots (its poles) give us the natural
frequencies of the system. The poles pi are therefore given by the
roots of the denominator, s2 + 2ζωn s + ωn2 , which are
p1,2 = −ζωn ± ωn √(ζ² − 1).    (5.4)
[Figure: pole locations p1,2 in the complex plane (Im(p1,2) vs Re(p1,2)) as the damping ratio ζ varies.]
% poles.m  plot the poles of a second-order system as the damping ratio is varied
omn = 1;                                  % natural frequency (value assumed here)
zeta = 0:0.01:1;                          % damping ratios to sweep
p1 = -zeta*omn + omn*sqrt(zeta.^2 - 1);   % pole 1 from (5.4), complex for zeta < 1
p2 = -zeta*omn - omn*sqrt(zeta.^2 - 1);   % pole 2 from (5.4)
% plot (a complex vector is plotted as real vs imaginary parts)
figure, hold on, grid on
plot(p1,'b'), plot(p2,'r')
% plot first entry (corresponding to zeta=0) with a cross
plot(p1(1),'bx'), plot(p2(1),'rx')
axis([-1.2*omn 0.2*omn -1.2*omn 1.2*omn])
xlabel('Re(p_{1,2})'), ylabel('Im(p_{1,2})')
set(gca,'Box','On','XMinorTick','On','YMinorTick','On');
tion since any response is due purely to the initial values of the displacement q0 and the velocity q̇0. Without forcing (F(s) = 0) we have

Q(s) = (q0 s + q̇0 + 2ζωn q0)/(s² + 2ζωn s + ωn²).    (5.5)
Write this as

Q(s) = (c1 s + c0)/(s² + 2ζωn s + ωn²)    (5.6)
     = (c1 s + c0)/((s + σ)² + ωd²)
     = c1 (s + σ)/((s + σ)² + ωd²) + ((c0 − σc1)/ωd) · ωd/((s + σ)² + ωd²)

where ωd = ωn √(1 − ζ²) is the damped natural frequency and σ = ζωn. Now substituting in c1 = q0 and c0 = q̇0 + 2ζωn q0 and taking inverse Laplace transforms we get
p1 = −ωn
p2 = −ωn ,
Notice the ‘t’ in the second term. This appears because of the
repeated poles (see the Laplace transforms in table 4.1).
Q(s) = (K/(s² + 2ζωn s + ωn²)) F(s),    (5.10)

or

Q(s) = G(s)F(s).

G(s) is called the transfer function. For the second-order system considered in this chapter it is given by

G(s) = K/(s² + 2ζωn s + ωn²).

G(s) takes as its input F(s) and gives as its output Q(s).

5.6 Impulse response

For an impulse input, F(s) = L{δ(t)} = 1, and so

Q(s) = G(s)F(s) = G(s) · 1 = G(s) = K/(s² + 2ζωn s + ωn²).
Now we have Q(s) but we want q(t). For this we need to take
inverse Laplace transforms but, as for the free response, we need
to be careful to treat the underdamped, critically-damped and
overdamped cases separately. The good thing about having used
the general form (5.6) is that we can now reuse it. Comparing
(5.6) to the expression for Q(s) above, we see that if we set
c0 = K
c1 = 0
then we can reuse the results for the free response for the impulse
response. Since this involves simply repeating the steps we have
already performed (with different values of c0 and c1), we state only the final results here.
5.6.1 Underdamped

The underdamped impulse response is

q(t) = (K/ωd) e^{−ζωn t} sin(ωd t).

5.6.2 Critically damped

The critically damped impulse response is

q(t) = K t e^{−ωn t}.

5.6.3 Overdamped

The overdamped impulse response is

q(t) = (K/(p1 − p2)) (e^{p1 t} − e^{p2 t}).
5.7 Step response

We will now spend some time on the step response for the under-
damped case and carefully characterize it. Before that, we first
consider the two other cases (overdamped and critically damped)
and get them out of the way.
5.7.2 Overdamped

The poles are real and distinct and

Q(s) = A/s + B1/(s − p1) + B2/(s − p2).

Then

q(t) = A + B1 e^{p1 t} + B2 e^{p2 t}

with

A = K/(p1 p2),   B1 = K/(p1(p1 − p2)),   B2 = K/(p2(p2 − p1)).
Substituting these into the expression for the free response and
tidying up we find
q(t) = (K/ωn²) (1 − e^{−ζωn t} (cos ωd t + (ζ/√(1 − ζ²)) sin ωd t))    (5.16)
Tp = π/ωd
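A numerical sketch confirming that the peak of the underdamped step response lands at Tp = π/ωd (parameter values illustrative):

```python
import numpy as np

# Underdamped step response (5.16); its first peak occurs at Tp = pi/omega_d.
K, omn, zeta = 1.0, 2.0, 0.2
omd = omn * np.sqrt(1 - zeta**2)

def q(t):
    env = np.exp(-zeta * omn * t)
    return (K / omn**2) * (1 - env * (np.cos(omd*t)
                                      + zeta/np.sqrt(1-zeta**2) * np.sin(omd*t)))

t = np.linspace(0, 10, 200001)
Tp = np.pi / omd
assert abs(t[np.argmax(q(t))] - Tp) < 1e-3     # numerical peak matches Tp
```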
5.8.2 Overshoot Mp
The overshoot Mp is the amount by which the maximum response
qmax overshoots the final value qfinal = limt→∞ q(t) = K/ωn2 . We
express the overshoot as a ratio:
Mp = (qmax − qfinal)/qfinal,
so that an overshoot of 0.05, for example, means that the max-
imum response qmax is 5 % larger than the final response qfinal .
The overshoot and the peak time are related: Mp tells us the
size of the peak response, while Tp tells us the time at which it
happens. Therefore to get the overshoot Mp we set t = Tp in the
step response (5.16), which gives

Mp = e^{−ζωn π/(ωn √(1−ζ²))}.

Simplifying:

Mp = e^{−ζπ/√(1−ζ²)}    (5.18)
Notice what this says: that the overshoot depends only on the
damping ratio ζ! If someone gives us a step response, we could
therefore estimate ζ from knowledge of Mp alone by rearranging
(5.18). (Remember that to get Mp we need to know both qmax
and qfinal .)
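Rearranging (5.18) for ζ gives ζ = −ln Mp / √(π² + ln² Mp); a small sketch of the round trip:

```python
import math

# Estimating zeta from a measured overshoot by inverting (5.18):
# Mp = exp(-zeta*pi/sqrt(1-zeta^2))  =>  zeta = -ln(Mp)/sqrt(pi^2 + ln(Mp)^2)
def overshoot(zeta):
    return math.exp(-zeta * math.pi / math.sqrt(1 - zeta**2))

def zeta_from_overshoot(Mp):
    L = math.log(Mp)
    return -L / math.sqrt(math.pi**2 + L**2)

Mp = overshoot(0.3)                        # zeta = 0.3 gives Mp of about 0.37
assert abs(zeta_from_overshoot(Mp) - 0.3) < 1e-12
```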
1 − δ ≤ q(t) ≤ 1 + δ.

To answer we use the form (5.17) of the step response (with, remember, K = ωn²):

q(t) = 1 − (e^{−ζωn t}/√(1 − ζ²)) sin(ωd t + φ).

The two envelopes of the overall curve are then

1 ± e^{−ζωn t}/√(1 − ζ²).
(The sine wave part of the solution wiggles within this envelope.)
We want to set this envelope equal to (1±δ) and so we have
±δ = ± e^{−ζωn t}/√(1 − ζ²)
6.2 Laplace transforms
The key difference from the cup of tea that we just considered
is that there is now an input u(t). (For the cup of tea, this
could be heating from a stove-top, for example.) Taking Laplace
transforms, including the effect of any initial conditions,
Y(s)(τs + 1) = τy0 + KU(s)

or

Y(s) = (K/(τs + 1)) U(s) + τy0/(τs + 1).    (6.3)
The first term is the transfer function and the second term repre-
sents the response to the initial condition y0 .
6.3 Poles
Both terms in (6.3) share the same denominator, which suggests
it has some special significance. The roots of this denominator,
τ s + 1 = 0, give the poles of the first-order system:
p = −1/τ.
So the time constant is the time taken for the free response to
decay to about 37 % of its initial value. There is nothing special
about the number 0.368 other than it is equal to e−1 . The time
constant is simply a characteristic time that allows us to compare
one first-order system to another one.
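A small sketch of the time-constant rule (values illustrative):

```python
import math

# Free response of a first-order system: y(t) = y0 * exp(-t/tau).
# After one time constant the response has decayed to e^-1, about 36.8%.
tau, y0 = 2.5, 4.0                   # illustrative values
y = lambda t: y0 * math.exp(-t / tau)

assert abs(y(tau) / y0 - 0.368) < 1e-3               # about 37% left at t = tau
assert abs(y(4 * tau) / y0 - math.exp(-4)) < 1e-12   # under 2% left at t = 4*tau
```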
G(s) = Y(s)/U(s) = K/(τs + 1)    (6.5)

For an impulse input U(s) = 1, so

Y(s) = (K/(τs + 1)) · 1 = (K/τ)/(s + 1/τ)

and taking the inverse Laplace transform,

y(t) = (K/τ) e^{−t/τ}    (6.6)
6.7 Step response

[Figures: first-order step responses omitted.]
7.2 Transfer functions
or
Y (s) = G(s)U (s) + Q(s).
If we assume that the initial conditions are all zero—which is
what we do when considering the transfer function—then we get
simply
Y (s) = G(s)U (s)
G(s) is the transfer function. It characterizes the forced re-
sponse of a linear system.
This is what we did in chapters 5 & 6 to solve for the free re-
sponse, impulse response and step response. The constants Ai
are sometimes called the residues of G(s) and pi are the poles
of G(s).
The third way, which we haven't seen before, is to factor the numerator and denominator of (7.2) using their roots:

G(s) = K ∏_{i=1}^{m}(s − zi) / ∏_{i=1}^{n}(s − pi)    (7.4)

with

r = n − m.
where σkeep is the real part of the poles we keep and σdiscard is the
real part of the poles we throw away. We will look at an example
in lectures.
In words this says that the behaviour in the time domain for t
very large is related to the behaviour in the Laplace domain for
s very small.
We need to be careful when applying (7.5) because there are only
certain conditions under which it can be used. For (7.5) to make
sense, the limit limt→∞ y(t) needs to exist. This means that y(t)
must settle down to a definite value for t → ∞. But this won’t
be true for all systems. For this to be true, all poles of sY (s)
must lie in the left half-plane. If this is true then limt→∞ y(t)
exists. If instead sY (s) has poles on the imaginary axis or in the
right half plane then limt→∞ y(t) does not exist because y(t) will
contain oscillating or exponentially growing terms. Let’s look at
some examples where the final value theorem can and cannot be
applied.
Example

Consider the step response (U(s) = 1/s) of

G(s) = 1/((s² + 0.2s + 1)(s + 10)).
2. G(s) = 1/(s + 1) with u(t) = sin ωt

The Laplace transform of the input is L{sin ωt} = ω/(s² + ω²). Then Y(s) has 'poles' (due to U(s)) at s = ±jω on the imaginary axis, so the FVT cannot be applied. (Again, try plotting in Matlab.)
In words this says that the behaviour in the time domain for t
very small is related to the behaviour in the Laplace domain for
s very large. Unlike the FVT, we can apply the IVT (7.6) even
when there are imaginary-axis or right half-plane poles.
Example
The impulse response of
G(s) = 1/(s² + 0.2s + 1).

Applying the IVT (7.6) we get

y(0⁺) = lim_{s→∞} sY(s) = lim_{s→∞} s/(s² + 0.2s + 1) = 0.
So we know that the system’s impulse response starts at zero.
This is much less work than finding the system’s impulse response
and letting t → 0!
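Both theorems can be checked symbolically; a sketch with sympy:

```python
import sympy as sp

s = sp.symbols('s')

# IVT check: the impulse response of G(s) = 1/(s^2 + 0.2s + 1) starts at 0.
Y = 1 / (s**2 + sp.Rational(1, 5)*s + 1)       # Y(s) = G(s) * 1
y_start = sp.limit(s * Y, s, sp.oo)            # IVT: y(0+) = lim_{s->oo} s Y(s)
assert y_start == 0

# FVT on a step response where it does apply: G(s) = 1/(s+1), U(s) = 1/s.
Y2 = (1 / (s + 1)) * (1 / s)                   # all poles of s*Y2 are in the LHP
assert sp.limit(s * Y2, s, 0) == 1             # y(t) -> 1 as t -> oo
```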
which has two LHP poles and one RHP zero. The system’s step
response is plotted in figure 7.1. Notice that the step response
is initially in the opposite direction to its final value. We can
understand why this happens by using the initial- and final-value theorems. Using the FVT, the system's final response is

lim_{t→∞} y(t) = lim_{s→0} sY(s) = lim_{s→0} sG(s)(1/s) = 1/6.    (7.10)
We would also like to know the initial value of the system's derivative, ẏ(t), which, using the IVT (and L{ẏ(t)} = sY(s)) is

[Figure 7.1: step response y(t); the response initially moves opposite to its final value.]
1. gain:

   L{ku(t)} = kU(s)

2. differentiator:

   L{u̇(t)} = sU(s)

3. integrator:

   L{∫ u(t) dt} = (1/s) U(s)
1. Series connection
2. Parallel connection
3. Feedback connection
8.1 Fluid systems
We also know that the volume flow rate q(t) is related to the
height of water in the tank:
q(t) = (d/dt)(A h(t)) = A dh/dt.
ΔP(s)/Qi(s) = (ρg/A)/(s + g/(RA))    (8.3)

This gives the transfer function between the volume flow-rate in Qi(s) (the input) and the pressure difference ΔP(s) (the output). If we are interested instead in the height of the water, H(s), then we can use Δp(t) = ρgh(t) to give

H(s)/Qi(s) = (1/A)/(s + g/(RA))    (8.4)
Tank 1
1. Conservation of mass.
ṁ = ṁi − ṁo
(d/dt)(ρA1 h1) = ρA1 ḣ1 = ρqi − ρq1
dt
Tank 2
Now we do the same thing for tank 2.
1. Conservation of mass.
ṁ = ṁi − ṁo
ρA2 ḣ2 = ρq1 − ρqo
In matrix form,

(d/dt) [h1; h2] = A [h1; h2] + B qi,    (8.7)

where the states are the two heights h1, h2 and the input is qi. We want to find the two transfer functions H1(s)/Qi(s) and H2(s)/Qi(s). To do
or

[H1(s)/Qi(s); H2(s)/Qi(s)] = (1/D(s)) [A2R1R2 s + g(R1 + R2); R2 g]

with

D(s) = A1A2R1R2 s² + g(A2R2 + A1(R1 + R2)) s + g².
There are two transfer functions and they share the same de-
nominator D(s), which is second-order. The transfer function for
H1 (s) also has a single zero while the transfer function for H2 (s)
does not have any zeros. We just did something that we have not
done before: we took Laplace transforms of a state-space model.
We will look at this again in more detail in chapter 13 when we
look at the connections between state-space models and transfer
functions.
Example
Find the dynamic equations for ∆T (t) in terms of the heat input
qi (t) for the thermal system shown below.
ct ΔṪ(t) = qi(t) − ΔT(t)/Rt.
Taking Laplace transforms (with zero initial conditions) gives the transfer function

ΔT(s)/Qi(s) = Rt/(ct Rt s + 1)
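A forward-Euler sketch of this thermal model (parameter values illustrative): the response to a constant heat input settles to the DC gain Rt·qi, with time constant ct·Rt.

```python
import numpy as np

# Forward-Euler integration of  ct * dT/dt = qi - T/Rt  for a constant
# heat input qi. Parameter values are illustrative only.
ct, Rt, qi = 2.0, 3.0, 1.5
tau = ct * Rt                                  # time constant of the system
dt = 1e-4
T = 0.0
for _ in range(int(10 * tau / dt)):            # integrate for 10 time constants
    T += dt * (qi - T / Rt) / ct

assert abs(T - Rt * qi) < 1e-3                 # settles to the DC gain Rt * qi
```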
This chapter is all about Bode plots: what they are and how to
plot them.
9.1 Introduction
Bode plots characterize the response of linear dynamical systems
to harmonic (i.e. sinusoidal) inputs. To give some simple exam-
ples, we might want to characterize the response of a mass-spring-
damper system to harmonic forcing of different frequencies; or the
response of an RLC circuit to a harmonic voltage of different fre-
quencies. To give a more complicated example, we might want
to characterize the response of a room (such as a concert hall) to
sound sources of different harmonic frequencies. For example if a
violin or a guitar plays here, how will it sound over there, and how
does this vary with frequency? Or we might want to characterize
the response of an aircraft to gusts of different frequencies. What
is the frequency for which a passenger at the back of the plane
is most likely to spill their coffee (a small gust) or be injured (a
large gust)? Can we make these less likely?
Why is it important to characterize a system’s response to har-
monic forcing? There are several reasons:
• The inputs of interest are often harmonic (or composed of a
small number of harmonic frequencies).
Example
We choose the example of a first-order system that is forced at
the frequency ω = π rad/s:
G(s) = 1/(s + 1) and u(t) = sin πt.

First we take the Laplace transform of u(t) (see table 4.1):

U(s) = L{u(t)} = π/(s² + π²).

Then the Laplace transform of the output is

Y(s) = G(s)U(s) = π/((s + 1)(s² + π²))
     = A/(s + 1) + Bπ/(s² + π²) + Cs/(s² + π²).
¹ u(t) = cos ωt would work fine too.
and

φ = tan⁻¹((−π/(π² + 1)) / (1/(π² + 1)))
  = tan⁻¹(−π)
  = −1.263 rad = −72.3 deg.
The input u(t) (black) and the output y(t) (blue) are shown be-
low.
[Figure: input u(t) and output y(t) versus t omitted.]
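The gain and phase from the worked example are easy to reproduce with complex arithmetic:

```python
import numpy as np

# Gain and phase of G(s) = 1/(s+1) at omega = pi, matching the worked example.
omega = np.pi
G = 1 / (1j * omega + 1)             # G evaluated at s = j*omega

gain = abs(G)                        # equals 1/sqrt(pi^2 + 1), about 0.303
phase = np.angle(G, deg=True)        # about -72.3 degrees

assert np.isclose(gain, 1 / np.sqrt(np.pi**2 + 1))
assert abs(phase - (-72.34)) < 0.01
```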
That was quite a lot of work! But don’t worry, we will find a better
way. What did we learn? The key insight from (9.1) is that the
response to a sinusoidal input is made up of two components: a
transient response and a steady-state response.
The transient response decays with time due to the e−t term.
(Notice that the system pole at p = −1 appears in the transient
response via this e−t term.)
The steady-state response does not decay in time. It has the same
frequency as the input (ω = π) but it has a different amplitude
and is phase-shifted.
Now for a better way to get to the same answer: the frequency
response.
u(t) = ejωt .
Therefore, at steady state,

y(t) = G(jω) e^{jωt} = |G(jω)| e^{j(ωt + ∠G(jω))}.    (9.6)
Remember that the input we applied was u(t) = ejωt and so (9.6)
tells us two important things:
• that relative to the input u(t), the output y(t) is scaled by a
factor |G(jω)|. This is the gain of G at frequency ω.
• that relative to the input u(t), the output is phase-shifted by
an amount ∠G(jω). This is the phase of G at frequency ω.
So now we know what the frequency response is, and we know
how to define its gain and phase. We are now ready to tackle
Bode plots.
s = jω.
Let’s try that for the first order system—we just need to write jω
wherever we see an s. This gives
G(jω) = 1/(jω + a).
It is clear that G(jω) is complex. Our task now is to find the
gain, |G(jω)|, and the phase, ∠G(jω).
The gain is

|G(jω)| = |1|/|jω + a| = 1/√(ω² + a²).

The phase ∠G(jω) is

∠G(jω) = −arctan(ω/a).

At low frequencies (ω ≪ a) this gives ∠G(jω) ≈ −arctan(0) = 0, while at high frequencies (ω ≫ a)

∠G(jω) ≈ −90°.
9.5.3 At ω = ωn
For ω = ωn we have
G(jωn) = 1/(j2ζωn²) = −j/(2ζωn²).
The gain and the phase are therefore

|G(jωn)| = 1/(2ζωn²)  and  ∠G(jωn) = −90°.    (9.9)
Finding the peak in the gain plot means maximizing this quantity.
Since the numerator is just 1, this is the same as minimizing the
quantity
[(ωn² − ω²)² + (2ζωn ω)²]^{1/2}.    (9.10)
To find the minimum value of (9.10) we differentiate it with re-
spect to ω and set it to zero. This gives a frequency of
ωpeak = ωn √(1 − 2ζ²).    (9.11)
This is neither the undamped natural frequency ωn nor the damped natural frequency ωd = ωn √(1 − ζ²)! But for small ζ it will be close to both. Substituting (9.11) into the expression for the gain we find a maximum gain of

Gmax = |G(jωpeak)| = 1/(2ζωn² √(1 − ζ²)).
For small ζ this is very close to the value of the gain at ω = ωn
that we found in (9.9).
One final remark: from (9.11) we see that there is no resonant peak if the damping ratio is greater than 1/√2 = 0.707 (since then the argument of the square root becomes negative and ωpeak is no longer real).
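A numerical sketch of the resonant-peak results (9.11) and the maximum gain, checked by sampling the gain over frequency (parameter values illustrative):

```python
import numpy as np

# Resonant peak of G(s) = 1/(s^2 + 2*zeta*omn*s + omn^2).
omn, zeta = 2.0, 0.2
om = np.linspace(0.01, 2 * omn, 400001)
gain = 1 / np.abs((1j*om)**2 + 2*zeta*omn*(1j*om) + omn**2)

om_peak = omn * np.sqrt(1 - 2 * zeta**2)                 # (9.11)
Gmax = 1 / (2 * zeta * omn**2 * np.sqrt(1 - zeta**2))    # peak gain

assert abs(om[np.argmax(gain)] - om_peak) < 1e-3         # peak frequency matches
assert abs(gain.max() - Gmax) < 1e-4                     # peak gain matches
```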
the gain; and ii) the additive property for the phase, both of which we will now explain. For the product of two transfer functions,

20 log10 |G1(jω)G2(jω)| = 20 log10 |G1(jω)| + 20 log10 |G2(jω)|
∠(G1(jω)G2(jω)) = ∠G1(jω) + ∠G2(jω).

So to plot the gain of G1(s)G2(s) (in dB), we can simply add together the gain of G1(s) (in dB) and the gain of G2(s) (in dB). This relies critically on the fact that we plot the gain using a logarithmic scale.
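A one-frequency sketch of these additive properties (the two first-order factors are illustrative):

```python
import numpy as np

# Gains add in dB and phases add (in radians here) for a product G1*G2.
w = 3.0                              # an arbitrary test frequency
G1 = 1 / (1j*w + 1)
G2 = (1j*w + 2) / (1j*w + 10)

dB = lambda G: 20 * np.log10(abs(G))
assert np.isclose(dB(G1 * G2), dB(G1) + dB(G2))
assert np.isclose(np.angle(G1 * G2), np.angle(G1) + np.angle(G2))
```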
Gain plot

1. If there are no zeros or poles at the origin, start at 20 log10 |G(j0)| dB with a horizontal asymptote.

2. If the term 1/s^q is present, start at 20 log10 |G(s)s^q|_{s→0} − 20q log10 ωmin dB with a −20q dB/dec asymptote.

3. At |zi|, add +20 dB/dec to the slope. At |pi|, add −20 dB/dec to the slope. At ωi, add −40 dB/dec to the slope. Around ωi, mark on a resonant peak if ζi < 1/√2 ≈ 0.7.
Phase plot
1. No poles/zeros at the origin: if K > 0 start the graph at 0◦
with a horizontal asymptote; if K < 0 start the graph at −180◦
with a horizontal asymptote.
9.8 Bode plots of common transfer functions
Figure 9.4: G(s) = 1/(s + 1): single pole.
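Two landmark values in figure 9.4 can be verified directly at the corner frequency ω = 1 rad/s (a Python sketch):

```python
import cmath
import math

G = lambda s: 1 / (s + 1)      # the single-pole system of figure 9.4

Gc = G(1j * 1.0)               # evaluate at the corner frequency, w = 1 rad/s
gain_dB = 20 * math.log10(abs(Gc))
phase_deg = math.degrees(cmath.phase(Gc))

assert abs(gain_dB - (-3.0103)) < 1e-3   # about 3 dB below the low-frequency gain
assert abs(phase_deg - (-45.0)) < 1e-9   # exactly -45 degrees at the corner
```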
Figure 9.6: G(s) = 32/(s + 3)²: pair of real poles.
Figure 9.7: G(s) = (s + 1)/(s + 10): single zero followed by a single pole.
Figure 9.8: G(s) = (s + 3)/(s + 1): single pole followed by a single zero.
Figure 9.10: G(s) = (s² + 0.02s + 1)/(s² + s + 100): complex-conjugate pair of zeros and complex-conjugate pair of poles.
9.A Derivation of the frequency response
(We have absorbed the 1/(s − jω) term into the denominator so
that we have an extra pole, pn+1 = jω.)
Now write Y(s) as a sum of partial fractions:
Y(s) = Σ_{i=1}^{n+1} B̃_i / (s − p_i).
For i = 1, . . . , n, we have
B̃_i = B_i / (p_i − jω).
10 Nyquist plots
This chapter is all about Nyquist plots, which also go by the name of polar plots.
In the Bode plot we opt for (10.1) and plot the frequency response
in terms of its gain |G(jω)| and phase ∠G(jω). In the Nyquist
plot we opt for (10.2) and plot the frequency response in terms of
its real and imaginary parts.
One important difference between Bode plots and Nyquist plots is that, in the Nyquist plot, we generally plot the frequency response for negative as well as positive frequencies, i.e. for −∞ < ω < ∞.
10.2 Nyquist plots of common transfer functions
Figure 10.3: G(s) = 1/(s + 1): single LHP pole.
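The Nyquist plot in figure 10.3 is in fact a circle of radius 1/2 centred at 1/2, which we can confirm numerically (a Python sketch):

```python
G = lambda w: 1 / (1j * w + 1)   # frequency response of G(s) = 1/(s + 1)

# every point of the frequency response lies on the circle |G - 1/2| = 1/2
for k in range(-100, 101):
    w = 0.5 * k
    assert abs(abs(G(w) - 0.5) - 0.5) < 1e-12
```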
Figure 10.4: G(s) = 1/(s² + 0.02s + 1): pair of complex LHP poles.
Figure 10.5: G(s) = 32/(s + 3)²: pair of real LHP poles.
Figure 10.6: G(s) = (s + 1)/(s + 10): single LHP zero followed by a single LHP pole.
Figure 10.7: G(s) = (s + 3)/(s + 1): single LHP pole followed by a single LHP zero.
Figure 10.9: G(s) = (s² + 0.02s + 1)/(s² + s + 100): complex-conjugate pair of zeros and complex-conjugate pair of poles.
11 Fourier analysis
So what does eq. (11.1) actually say? It says that some periodic
function, f (t), can be represented as a sum of building blocks,
and that those building blocks are sines and cosines of different
frequencies (and a constant). The amplitudes of the cosine build-
ing blocks (i.e. how much of each we have) are given by the ak
terms; and the amplitudes of the sine building blocks are given
by the bk terms. Finally, the amplitude of the constant term is a0
(which could of course be zero).
Let’s take note of a few important things here:
1. We have specified (in the sentence before eq. (11.1)) that f (t)
should be periodic with period T .
2. There is a constant term, a0 . This accounts for the fact that,
as well as being made up of sines and cosines, f (t) may also
have a contribution that is constant with t.
3. The two sums go from k = 1 to k = ∞. We can generally truncate the series at some finite number of terms, though, and still achieve good results.
4. We have chosen f to be a function of t, f (t), because we had
to choose something, but we could equally have written, say,
f (x) or f (y) instead.
Equation (11.1) tells us how to get f (t) from knowledge of ak and
bk terms. But we are usually interested in the problem the other
way around: given f (t), how do we find the ak and bk terms? We
will go through this in more detail in lectures. Here we will just
write down the final result. The constants a0 , ak and bk are given
by
a0 = (1/T) ∫_0^T f(t) dt (11.2a)
ak = (2/T) ∫_0^T f(t) cos(2πkt/T) dt (11.2b)
bk = (2/T) ∫_0^T f(t) sin(2πkt/T) dt. (11.2c)
11.2 Complex-exponential form
The Fourier series can also be written in complex-exponential form,
f(t) = Σ_{k=−∞}^{∞} ck e^{jωk t}, (11.4)
where
ωk = 2πk/T.
In this case the coefficients ck are given by
ck = (1/T) ∫_0^T f(t) e^{−jωk t} dt. (11.5)
11.4 Example: Square wave
(Sketch: a square wave f(t), equal to 1 for 0 ≤ t < T/2 and 0 for T/2 ≤ t < T, repeating with period T.)
Now for ak:
ak = (2/T) ∫_0^T f(t) cos(2πkt/T) dt
= (2/T) ∫_0^{T/2} 1 · cos(2πkt/T) dt + (2/T) ∫_{T/2}^{T} 0 · cos(2πkt/T) dt
= (2/T) ∫_0^{T/2} cos(2πkt/T) dt
= (2/T) · (T/2πk) [sin(2πkt/T)]_0^{T/2}
= (1/πk) [sin(πk) − sin(0)]
= 0,
so
ak = 0 for all k = 1, 2, 3, . . .
And finally bk:
bk = (2/T) ∫_0^T f(t) sin(2πkt/T) dt
= (2/T) ∫_0^{T/2} 1 · sin(2πkt/T) dt + (2/T) ∫_{T/2}^{T} 0 · sin(2πkt/T) dt
= (2/T) ∫_0^{T/2} sin(2πkt/T) dt
= −(2/T) · (T/2πk) [cos(2πkt/T)]_0^{T/2}
= (1/πk) (cos(0) − cos(πk))
= (1/πk) (1 − (−1)^k)
and therefore
bk = 0 if k is even,
bk = 2/(kπ) if k is odd.
Putting this all together, we can therefore write the Fourier series
of the square wave as
f(t) = 1/2 + Σ_{k=1,3,5,...} (2/kπ) sin(2πkt/T). (11.8)
The Fourier series (11.8) is compared to the true square wave for
different numbers of Fourier modes in figure 11.2.
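We can also check (11.8) numerically by summing a large but finite number of odd harmonics (a Python sketch; the choices T = 1 and the truncation point are arbitrary):

```python
import math

T = 1.0                          # arbitrary period

def f_series(t, k_max=1999):
    # partial sum of the Fourier series (11.8), over odd k only
    s = 0.5
    for k in range(1, k_max + 1, 2):
        s += (2 / (k * math.pi)) * math.sin(2 * math.pi * k * t / T)
    return s

# f(t) = 1 on (0, T/2) and 0 on (T/2, T); check well away from the jumps
assert abs(f_series(0.25 * T) - 1.0) < 1e-3
assert abs(f_series(0.75 * T) - 0.0) < 1e-3
```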
It makes sense that the ak terms came out as zero: once its mean value of 1/2 is subtracted, the square wave is an odd function (f(−t) = −f(t)). Meanwhile cosine waves are even functions (f(t) = f(−t)) and sine waves are odd functions. All of the cosine terms come out to zero because the integral over one period of an odd function multiplied by an even function is zero.
What if we want the input to be (say) half as large?
(Sketches: on the left, a step input of half the height; on the right, a pulse train of the original height with gaps in it.)
One option (on the left above) would be to reduce the size of the
step. Another option (on the right above) would be to reduce the
energy of the input by placing ‘gaps’ in it. Pulse width modulation
uses the second option (on the right above) and we will now look
at this in more detail by considering the Fourier series of the input and the Bode plot of a DC motor, and then combining the two.
Notice that the period of the cycle is T ; that the signal is high
for a length of time of 2T0 ; and that it is low for the rest of the
cycle (i.e. for an amount of time (T − 2T0 )).
Our first task is to find the Fourier series of the pulse and we will find it in complex-exponential form (11.4) because this will make application of the Bode plot easier. This means finding the coefficients ck. For k = 0 we have
c0 = (1/T) ∫_{−T/2}^{T/2} u(t) e^{−j·0·t} dt
= (1/T) ∫_{−T0}^{T0} 1 · e^0 dt
= 2 T0/T,
which is the average value of the signal. For k ≠ 0 we have
ck = (1/T) ∫_{−T0}^{T0} 1 · e^{−jk(2π/T)t} dt (u(t) = 0 outside this range)
= (1/T) [ e^{−jk(2π/T)t} / (−jk 2π/T) ]_{−T0}^{T0}
= (j/2πk) [ e^{−jk(2π/T)T0} − e^{−jk(2π/T)(−T0)} ]
= (j/2πk) [ cos(2πk T0/T) − j sin(2πk T0/T) − cos(2πk T0/T) − j sin(2πk T0/T) ]
= (j/2πk) (−2j) sin(2πk T0/T)
= (1/πk) sin(2πk T0/T).
Noting that (1/x) sin(cx) = (1/(−x)) sin(−cx) (i.e. (1/x) sin(cx) is an even function), we can combine the k > 0 and k < 0 terms:
u(t) = 2 (T0/T) e^{j0} + Σ_{k=1}^{∞} (1/πk) sin(2πk T0/T) e^{jk(2π/T)t} + Σ_{k=−∞}^{−1} (1/πk) sin(2πk T0/T) e^{jk(2π/T)t}
= 2 (T0/T) + Σ_{k=1}^{∞} (1/πk) sin(2πk T0/T) [ e^{jk(2π/T)t} + e^{−jk(2π/T)t} ]
= 2 (T0/T) + Σ_{k=1}^{∞} (2/πk) sin(2πk T0/T) cos(2πkt/T).
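As with the square wave, the resulting series can be checked numerically (a Python sketch; the values T = 1, T0 = 0.2 and the truncation point are arbitrary):

```python
import math

T, T0 = 1.0, 0.2                 # arbitrary period and pulse half-width

def u_series(t, k_max=4000):
    # partial sum of the pulse's Fourier series, collapsed to cosines
    s = 2 * T0 / T
    for k in range(1, k_max + 1):
        s += (2 / (math.pi * k)) * math.sin(2 * math.pi * k * T0 / T) \
             * math.cos(2 * math.pi * k * t / T)
    return s

# u(t) = 1 for |t| < T0 and 0 for T0 < |t| < T/2; check away from the jumps
assert abs(u_series(0.0) - 1.0) < 1e-2
assert abs(u_series(0.4) - 0.0) < 1e-2
```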
θ̇ = ω = ωd = constant
so
α = ((Ra b + Ke Kt)/Kt) ωd = (100/100) ωd = ωd.
with
c0 = 2 T0/T, ck = (2/πk) sin(2πk T0/T), ωk = (2π/T) k.
y0 = c0 e^{j0} G(j0) = 2 (T0/T) · (100/(5 × 20)) = 2 T0/T.
yK (t) ≈ G(j0)c0 = ωd .
|G(jωk)| ≈ 0 for k ≠ 0.
and
fk = (1/N) Σ_{n=0}^{N−1} f̂n e^{+j(2π/N)nk}. (11.11)
N     N² (DFT)    (N/2) log2 N (FFT)    saving
32    1,024       80                    92%
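The saving comes purely from how the operations are organized: the FFT produces exactly the same numbers as the naive DFT. A small self-contained comparison (a Python sketch, rather than the Matlab used elsewhere in these notes):

```python
import cmath

def dft(x):
    # naive DFT: about N^2 complex operations
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * n * k / N) for n in range(N))
            for k in range(N)]

def fft(x):
    # radix-2 Cooley-Tukey FFT: about (N/2) log2 N butterflies; N a power of 2
    N = len(x)
    if N == 1:
        return x[:]
    even, odd = fft(x[0::2]), fft(x[1::2])
    tw = [cmath.exp(-2j * cmath.pi * k / N) * odd[k] for k in range(N // 2)]
    return ([even[k] + tw[k] for k in range(N // 2)] +
            [even[k] - tw[k] for k in range(N // 2)])

x = [complex(n % 5) for n in range(32)]    # arbitrary test signal, N = 32
assert all(abs(a - b) < 1e-9 for a, b in zip(dft(x), fft(x)))
```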
11.D Examples
Perhaps the best thing to do now is to look at the FFT in action
by looking at some examples. Some of these examples are taken
almost verbatim from Matlab’s documentation for its fast Fourier
transform fft() (see links at the end of this chapter).
Figure 11.3: Example 1. Top: the first 0.1 s of the signal in the time domain. Bottom: fast Fourier transform.
Figure 11.4: Example 2. Top: the first 0.1 s of the signal in the time domain. Bottom: fast Fourier transform.
(Figure: x(t) and y(t) in the time domain, and their one-sided spectra |Px(f)| and |Py(f)|.)
and we can note two things. First, that the largest component
of the fast Fourier transform of y(t) is at f = 1 Hz. (Which
is the nearest frequency to the true frequency of f = 1.05 Hz.)
Second, that the fast Fourier transform also has energy at other frequencies: energy has 'leaked' into other frequencies.
11.F Aliasing
Aliasing does not occur for the data you are given for the last
part of assignment 3 so you don’t need to worry about it. But I
feel I should still tell you a little about it.
It can be shown that, for a sampling frequency of Fs , the highest
frequency that can be resolved is
Fnyq = Fs/2
and this is known as the Nyquist frequency. Aliasing can occur
when the frequency at which a signal is sampled is too low when
compared to the frequencies that are present in the signal. In
particular, aliasing will occur if the highest frequency present in
the signal is higher than the Nyquist frequency, Fnyq .
This is best seen with an example, the code and figures for which
follow on the next two pages. The frequency (f0) contained in
the signal is 15 Hz. The signal x1 (t) has a sampling frequency
of 100 Hz, which is high enough to resolve this frequency (since
FNyq = Fs /2 = 100/2 = 50 Hz > 15 Hz). The spectrum therefore
(correctly) shows a peak at f = 15 Hz. The signal x2 (t) has a
sampling frequency of only 20 Hz and so the frequency 15 Hz is
not resolved (since in this case FNyq = 20/2 = 10 Hz < 15 Hz).
Instead it is aliased to f = 5 Hz—it is as if the true frequency has
been ‘folded over’ to a lower frequency.
% sampling frequency of 20 Hz
f0 = 15;                     % frequency contained in the signal (Hz)
Fs = 20; T = 1/Fs; L = 20;
t2 = (0:L-1)*T;              % time vector
x2 = sin(2*pi*f0*t2);        % signal
f2 = Fs*(0:(L/2))/L;         % frequency vector
X2 = fft(x2);
P2 = abs(X2/L); P1 = P2(1:L/2+1);
P1(2:end-1) = 2*P1(2:end-1); % double the non-DC, non-Nyquist bins
PX2 = P1;                    % one-sided spectrum
12.2 Difference equations
G(z) = Y(z)/U(z) = (bm z^{−m} + · · · + b1 z^{−1} + b0) / (an z^{−n} + · · · + a1 z^{−1} + a0).
y[k] = yk
(Sketch: a discrete impulse u[k], equal to 1 at k = 0 and zero otherwise.)
z transform of a delayed discrete impulse
On the left below we delay the impulse by one time-step; on the right we delay it by n time-steps.
(Sketches: the impulse shifted to k = 1, and shifted to k = n.)
If we delay the impulse by one time-step (on the left above) then we get a z transform of
U(z) = 0 + 1 · z^{−1} + 0 + · · · = z^{−1}.
Similarly, delaying the impulse by n time-steps gives
U(z) = 0 + 0 + · · · + 1 · z^{−n} + 0 + · · · = z^{−n}.
(Sketch: the discrete unit step, equal to 1 for all k ≥ 0.)
Its z transform is
U(z) = 1 + z^{−1} + z^{−2} + z^{−3} + · · ·
and we can think of z^{−n} as representing a delay of n time-steps. This mirrors the continuous-time result for a delay of one sample period,
y(t) = u(t − T) −→ Y(s) = e^{−sT} U(s),
and for a delay of n time-steps we would have
y(t) = u(t − nT) −→ Y(s) = e^{−snT} U(s).
The z transform is linear:
ax[k] + by[k] −→ aX(z) + bY(z).
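Both the delay and linearity properties are easy to check by evaluating U(z) = Σ_k u[k] z^{−k} for finite sequences (a Python sketch; the sequences and the evaluation point are arbitrary):

```python
def ztrans_at(u, z):
    # evaluate U(z) = sum_k u[k] z^{-k} for a finite sequence u
    return sum(uk * z ** (-k) for k, uk in enumerate(u))

z = 1.5 + 0.5j                               # arbitrary evaluation point

# an impulse delayed by n = 3 time-steps has z transform z^{-3}
assert abs(ztrans_at([0, 0, 0, 1], z) - z ** (-3)) < 1e-12

# linearity: the z transform of a*x[k] + b*y[k] is a*X(z) + b*Y(z)
x, y, a, b = [1, 2, 3], [4, 5, 6], 2.0, -1.0
combo = [a * xi + b * yi for xi, yi in zip(x, y)]
assert abs(ztrans_at(combo, z)
           - (a * ztrans_at(x, z) + b * ztrans_at(y, z))) < 1e-12
```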
(Figure 12.1: three lines of constant Re(s) in the s-plane, and their images z = e^{sT} in the z-plane.)
T = 1; % sampling period
% choose points in s-plane
s1 = -0.5 + 1j*pi*[-1:0.05:1]; % stable (LHP)
s2 = 0.0 + 1j*pi*[-1:0.05:1]; % marginally stable
s3 = +0.5 + 1j*pi*[-1:0.05:1]; % unstable (RHP)
% map each line to points in z
z1 = exp(s1*T); z2 = exp(s2*T); z3 = exp(s3*T);
% plot points in s plane
figure, set(gcf,'Position', [10 10 200 200])
plot(real(s1), imag(s1), 'bo-'), hold on
plot(real(s2), imag(s2), 'go-')
plot(real(s3), imag(s3), 'ro-')
xlabel('Re(s)'), ylabel('Im(s)')
set(gca, 'Box', 'On'), pbaspect([1 1 1])
axis([-1. 1. -4. 4.])
% plot points in z plane
figure, set(gcf,'Position', [10 10 200 200])
plot(real(z1), imag(z1), 'bo-'), hold on
plot(real(z2), imag(z2), 'go-')
plot(real(z3), imag(z3), 'ro-')
xlabel('Re(z)'), ylabel('Im(z)')
set(gca, 'Box', 'On'), pbaspect([1 1 1])
z = e^{sT}
To invert the mapping, take the logarithm of z = re^{iθ}:
ln z = ln(re^{iθ}) = ln r + ln(e^{iθ}) = ln r + iθ.
So finally we have
s = (ln r + iθ)/T
where r and θ are defined such that z = re^{iθ}.
We can see this from figure 12.1 in the previous section, where
i) points in the left half-plane in s mapped to points inside the
unit circle; ii) points in the right half-plane in s mapped to points
outside the unit circle; and iii) points on the imaginary axis in s
mapped to points on the unit circle.
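These three cases follow from |z| = |e^{sT}| = e^{Re(s)T}, and can be checked directly (a Python sketch):

```python
import cmath

T = 1.0                                       # sampling period
for sigma, region in [(-0.5, "inside"), (0.0, "on"), (0.5, "outside")]:
    for k in range(-5, 6):
        s = complex(sigma, 0.3 * k)           # points with Re(s) = sigma
        r = abs(cmath.exp(s * T))             # |z| for z = e^{sT}
        if region == "inside":
            assert r < 1                      # LHP -> inside the unit circle
        elif region == "on":
            assert abs(r - 1) < 1e-12         # imaginary axis -> unit circle
        else:
            assert r > 1                      # RHP -> outside the unit circle
```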
12.5 Converting from s to z
One way to convert a continuous-time transfer function G(s) to discrete time is to substitute s → (2/T)(z − 1)/(z + 1), giving
G( (2/T) (z − 1)/(z + 1) ).
13.1 Introduction
We have seen that there are two different ways to represent a
linear dynamical system:
1. using a state-space model in chapter 2;
2. using Laplace transforms in chapter 4 (which leads to a transfer
function covered in chapters 5–7).
As we might expect, there are close connections between the trans-
fer function of a system and its state-space representation. We
will now look at these connections: first for autonomous systems
(in which there is no input); and second for non-autonomous sys-
tems (in which the effect of some input(s) is included).
13.2 Autonomous system ẋ = Ax
ẋ(t) = Ax(t)
The two solutions must of course give the same answer, and so
e^{At} = L^{−1}{ (sI − A)^{−1} }.
For a matrix B, the adjugate Adj(B) has (i, j) entry (−1)^{i+j} M_{j,i}, where the minor M_{j,i} is the determinant of the matrix obtained by deleting row j and column i of B.
What is M1,2? Deleting row 1 and column 2 of B gives
B(1),(2) = [ 4 6 ; 7 9 ]
and
M1,2 = det B(1),(2) = 4 · 9 − 7 · 6 = −6.
Application to (sI − A)
Applying this to B = (sI − A), we get
(sI − A)^{−1} = Adj(sI − A) / det(sI − A) = N(s)/D(s).
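The minor and adjugate constructions above can be checked numerically. The worked example is consistent with the 3×3 matrix B = [1 2 3; 4 5 6; 7 8 9], which is assumed here (a Python sketch; this B happens to be singular, so B·Adj(B) = det(B)·I is the zero matrix):

```python
def minor(B, i, j):
    # delete row i and column j (1-indexed) of a 3x3 matrix, then take the
    # determinant of the remaining 2x2 block
    sub = [row[:j - 1] + row[j:] for r, row in enumerate(B, start=1) if r != i]
    return sub[0][0] * sub[1][1] - sub[0][1] * sub[1][0]

def adj(B):
    # the (i, j) entry of Adj(B) is (-1)^(i+j) * M_{j,i}
    n = len(B)
    return [[(-1) ** (i + j) * minor(B, j, i) for j in range(1, n + 1)]
            for i in range(1, n + 1)]

B = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
assert minor(B, 1, 2) == -6                  # M_{1,2} = 4*9 - 7*6, as above

# check B * Adj(B) = det(B) * I (here det(B) = 0, so the product vanishes)
A = adj(B)
P = [[sum(B[i][k] * A[k][j] for k in range(3)) for j in range(3)]
     for i in range(3)]
assert all(P[i][j] == 0 for i in range(3) for j in range(3))
```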
M q̈(t) + K q(t) = 0
with
M = [ m1 0 ; 0 m2 ], K = [ k1+k2 −k2 ; −k2 k2+k3 ].
Both matrices are symmetric: M = M^T, K = K^T.
14.2 The three-mass case
det(M^{−1}K − λI) = 0.
From this we can find the n values of the undamped natural frequencies, ω1², ω2², . . . , ωn². Substituting any one of these back into (14.2),¹ we get a corresponding set of relative values for the vector q̃, which contain the eigenvectors or mode shapes, which we will
¹ By doing this we are saying that we expect the vibrating system to oscillate at some frequency given by e^{st}; and that the oscillation will have some shape given by the vector q̃. In other words, the masses will oscillate together at some common frequency and with some overall shape.
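For the two-mass system above, the eigenvalue problem can be solved numerically; a Python sketch with the (arbitrary) choice m1 = m2 = 1 and k1 = k2 = k3 = 1:

```python
import math

m1 = m2 = 1.0                      # arbitrary masses
k1 = k2 = k3 = 1.0                 # arbitrary stiffnesses

# A = M^{-1} K for the two-mass system
A = [[(k1 + k2) / m1, -k2 / m1],
     [-k2 / m2, (k2 + k3) / m2]]

# eigenvalues of a 2x2 matrix from its trace and determinant
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
lam1 = (tr - math.sqrt(tr**2 - 4 * det)) / 2
lam2 = (tr + math.sqrt(tr**2 - 4 * det)) / 2

# the eigenvalues are the squares of the undamped natural frequencies
w1, w2 = math.sqrt(lam1), math.sqrt(lam2)
assert abs(w1 - 1.0) < 1e-12
assert abs(w2 - math.sqrt(3.0)) < 1e-12
```

With these values M^{-1}K = [2 −1; −1 2], whose eigenvalues 1 and 3 give ω1 = 1 and ω2 = √3.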
14.4 Natural frequencies and mode shapes