
State Space Representation: Digital Filters as Dynamical Systems
Julian Parker
TKK Helsinki University of Technology
Department of Signal Processing and Acoustics
julian.parker@tkk.fi
Abstract
Fundamentally, digital filters are simple discrete dynamical systems. This paper explores the literature surrounding dynamical systems, including their properties and the mathematical constructs which are used to represent them. The application of this form of representation to digital filters is then explored, and related back to more traditional forms of filter representation. Finally, we present a number of applications in which the representation of a digital filter in state space is advantageous. Examples of both filter design and filter analysis applications are given. We conclude that state-space representations and the dynamical systems approach are valuable tools in the design and analysis of digital filters.
Keywords Digital Filters, Stability Analysis, Filter Design, Dynamical Systems, Non-Linear Filtering
1. INTRODUCTION
The mathematical machinery of dynamical systems and their representation in state-space is an extremely powerful way of analysing the behaviour of systems. Digital filters can be treated as dynamical systems, but traditionally most analysis has concentrated on frequency domain methods using the transfer function. Nonetheless, state-space filter representations have been used to tackle a number of problems that are difficult or impossible to solve using frequency domain methods. This report explores the dynamical systems approach, and how it can be applied to digital filter problems.
Section 2 gives an overview of the dynamical systems approach and its formalisms, along with a brief discussion of some interesting properties of linear and non-linear systems, primarily their stability. Section 3 explains how a digital filter can be considered as a dynamical system, and shows how to move between this framework and other more widespread filter representations. Section 4 gives a number of examples of cases in the literature where this approach has been fruitfully applied to digital filters. These applications include time-varying digital filters, numerical non-linearities within digital filter structures, and more. Section 5 concludes the report, and offers a number of potential avenues for further application of this approach to digital filters.
2. DYNAMICAL SYSTEMS
2.1. Overview of Dynamical Systems
2.1.1. What is a Dynamical System?
Literally speaking, any system which evolves with time is a dynamical system. Such systems are obviously widespread throughout all areas of science and engineering, and are analyzed using a wide variety of tools, both mathematical and non-mathematical, quantitative and qualitative. A moving pendulum, an electrical circuit, the water flowing in a river, even a traffic jam; all of these things are dynamical systems.
Figure 1: Schematic diagrams showing (a) an un-oriented dynamical system and (b) an oriented dynamical system.
In this document, we use the term dynamical system to refer to something more specific. We use it to refer to a particular way of viewing these kinds of systems, and the mathematical machinery associated with this view. Fundamentally, a dynamical systems approach views a system as having a state and a number of rules governing the evolution of that state as time passes. The system's state interacts with the environment via the use of variables. When constructing the mathematical model that determines the evolution of the state of the system, it is useful to separate the system's variables into those which exert some form of influence on the system, known as causes or input variables, those which react to the state of the system, known as effects or output variables, and those which represent the internal state of the system, the state variables. A system in which we can divide the variables in this way is called an oriented system, and our discussion will concentrate on this form of system. It is worth noting that, depending on the form of the system, the division of variables between input variables and output variables is not necessarily trivial. Schematic diagrams of un-oriented and oriented dynamical systems are given in Figures 1(a) and 1(b) respectively.
Chapter 1 of Basile and Marro (1992) gives a good overview of the basics of dynamical
systems.
2.1.2. State Space Representation
Consider a simple continuous-time (t ∈ ℝ) dynamical system that consists of one input variable, u(t), one state variable, x(t), and one output variable, y(t). We can describe the evolution of such a system as follows:

\[
\begin{aligned}
\dot{x}(t) &= f(x(t), u(t), t) \\
y(t) &= g(x(t), u(t), t)
\end{aligned} \qquad (1)
\]

where the first equation describes the time evolution of the state of the system based on its current state and input, and the second equation maps from the state of the system and its input to its output.
Equivalently, in discrete time (i ∈ ℤ), we write:

\[
\begin{aligned}
x(i+1) &= f(x(i), u(i), i) \\
y(i) &= g(x(i), u(i), i)
\end{aligned} \qquad (2)
\]
Trivially, we can extend this description to an arbitrary number of input variables, state variables and output variables by defining an input vector, u(t), of length p, a state vector, x(t), of length q, and an output vector, y(t), of length r. This produces:

\[
\begin{aligned}
\dot{x}(t) &= f(x(t), u(t), t) \\
y(t) &= g(x(t), u(t), t)
\end{aligned} \qquad (3)
\]

which are the equations describing a general continuous-time dynamical system.
If the functions f and g are linear with respect to x and u, we can re-write the equations using linear operators:

\[
\begin{aligned}
\dot{x}(t) &= A(t)x(t) + B(t)u(t) \\
y(t) &= C(t)x(t) + D(t)u(t)
\end{aligned} \qquad (4)
\]

where A(t), B(t), C(t) and D(t) are time-dependent matrices of q × q, q × p, r × q and r × p elements respectively. We therefore have the equations describing a general continuous linear time-varying system.
If we remove the time-dependence of the linear operators, we produce:

\[
\begin{aligned}
\dot{x}(t) &= Ax(t) + Bu(t) \\
y(t) &= Cx(t) + Du(t)
\end{aligned} \qquad (5)
\]

which are the equations governing a general continuous linear time-invariant system.
The state vector x(t) exists within a q-dimensional abstract space which we call state-space, and the equations presented above describe the trajectory in this space along which the system travels. We therefore refer to them as state-space representations of the system.
We may trivially repeat the above process to produce a similar set of state-space representations describing discrete-time general systems:

\[
\begin{aligned}
x(i+1) &= f(x(i), u(i), i) \\
y(i) &= g(x(i), u(i), i)
\end{aligned} \qquad (6)
\]

discrete, linear, time-varying systems:

\[
\begin{aligned}
x(i+1) &= A(i)x(i) + B(i)u(i) \\
y(i) &= C(i)x(i) + D(i)u(i)
\end{aligned} \qquad (7)
\]

and discrete, linear, time-invariant systems:

\[
\begin{aligned}
x(i+1) &= Ax(i) + Bu(i) \\
y(i) &= Cx(i) + Du(i)
\end{aligned} \qquad (8)
\]
From here onwards we will mainly discuss discrete-time dynamical systems, as they
are more relevant to our application.
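As a concrete illustration of equation (8), the following sketch steps a discrete LTI state-space system forward in time. It is a minimal example in Python/NumPy; the matrices are arbitrary illustrative values, not taken from any system discussed in this report.

```python
import numpy as np

def simulate_lti(A, B, C, D, u, x0=None):
    """Step the discrete LTI system x(i+1) = A x(i) + B u(i), y(i) = C x(i) + D u(i)."""
    A, B, C, D = map(np.atleast_2d, (A, B, C, D))
    q = A.shape[0]
    x = np.zeros(q) if x0 is None else np.asarray(x0, dtype=float)
    ys = []
    for u_i in u:
        u_i = np.atleast_1d(u_i)
        ys.append(C @ x + D @ u_i)   # output equation
        x = A @ x + B @ u_i          # state update
    return np.array(ys)

# Example: a two-state system driven by a unit impulse.
A = np.array([[0.5, -0.3], [1.0, 0.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[0.2, 0.1]])
D = np.array([[1.0]])
u = np.zeros(8); u[0] = 1.0
print(simulate_lti(A, B, C, D, u).ravel())   # the impulse response of this example system
```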
2.2. Properties of Linear Systems
As we have described above, linear dynamical systems of both time-invariant and time-
varying types can be described using linear operators. We can therefore use the plentiful
apparatus of linear algebra to analytically derive properties of the system from these oper-
ators. This topic is covered well, and in more detail, in Chapter 2 of Scheinerman (1996).
2.2.1. Stability of Time-Invariant Systems
Consider a discrete-time linear time-invariant system of the form given in equation 8.
Since the system is linear, we can simply consider the evolution of the state variables given an arbitrary initial value x = x_0, as this also generalises to stability given a bounded input. Also, since the matrix C simply maps from the state of the system to the output variables, we can assume that if the evolution of x(i) is bounded then the output y(i) is also bounded. Therefore, if we want to derive the conditions under which the output of the system is bounded, we need only consider this system:
\[
x(i+1) = A\,x(i), \qquad x(0) = x_0 \qquad (9)
\]

therefore, the state of the system at time i is clearly given by:

\[
x(i) = A^{i} x_0
\]
If we assume that A is diagonalizable, we can write it as:

\[
A = S \Lambda S^{-1}
\]

where Λ is a diagonal matrix containing the eigenvalues, λ_j, of A and S is a q × q matrix containing the eigenvectors of A. Hence:

\[
A^{i} = \left( S \Lambda S^{-1} \right)^{i}
\]

which expands out to:

\[
A^{i} = (S \Lambda S^{-1})(S \Lambda S^{-1})(S \Lambda S^{-1}) \cdots (S \Lambda S^{-1})
\]

Using the associative property of matrix multiplication, we can re-arrange the brackets in the above expression to produce (S^{-1}S) pairs, which reduce to the identity matrix I. This leaves us with:

\[
A^{i} = S \Lambda^{i} S^{-1}
\]
Since raising a diagonal matrix to a power is equivalent to raising its elements to that power, we can now write:

\[
A^{i} = S
\begin{pmatrix}
\lambda_1^{i} & 0 & 0 & \cdots & 0 \\
0 & \lambda_2^{i} & 0 & \cdots & 0 \\
0 & 0 & \lambda_3^{i} & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & \lambda_q^{i}
\end{pmatrix}
S^{-1}
\]
Clearly, if all the members of Λ^i are bounded as i → ∞, the members of A^i are also bounded. Given that the eigenvalues can be complex valued, they will remain bounded as long as all |λ_j| < 1. This analysis can be repeated for non-diagonalizable matrices by converting them to Jordan canonical form (see Scheinerman (1996), page 60). We therefore reach the conclusion that any linear time-invariant system is stable as long as all the eigenvalues of its state matrix have a magnitude of less than one, i.e.

\[
\max_{1 \le j \le q} |\lambda_j| < 1 \qquad (10)
\]

is a general stability test for all linear time-invariant dynamical systems. Note also that, since the determinant of the state matrix is the product of its eigenvalues,

\[
|A| = \prod_{j=1}^{q} \lambda_j \qquad (11)
\]

a stable system necessarily also satisfies |A| < 1, although this weaker condition alone does not guarantee stability.
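As a quick numerical illustration of this eigenvalue test (a sketch only; the example matrix is an arbitrary companion matrix with a complex pole pair of radius 0.9), the eigenvalues of A can be computed directly:

```python
import numpy as np

def is_stable_lti(A):
    """LTI stability test (10): every eigenvalue of the state matrix must lie
    strictly inside the unit circle."""
    return bool(np.all(np.abs(np.linalg.eigvals(np.asarray(A, dtype=float))) < 1.0))

# Companion matrix with a complex pole pair at radius 0.9, angle pi/4.
r, theta = 0.9, np.pi / 4
A = np.array([[2 * r * np.cos(theta), -r**2],
              [1.0, 0.0]])
print(np.abs(np.linalg.eigvals(A)))   # both magnitudes are 0.9
print(is_stable_lti(A))               # True
```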
2.2.2. Modes
Further to the analysis above, if the state matrix, A, of a dynamical system is diagonalizable, it necessarily has linearly independent eigenvectors. We can therefore express the system as q eigenvalue-eigenvector pairs, (λ_j, v_j). Each of these pairs represents a normal mode of the system, with a frequency dictated by the eigenvalue and an amplitude in state-space dictated by the eigenvector. Conversely, if A is not diagonalizable, it does not have linearly independent eigenvectors. This implies that a number of its modes are coupled and hence influence each other.
2.2.3. Transfer Function of a Dynamical System
Much of traditional linear system theory relies on analysis of the transfer function of a system, the ratio of the output and the input of the system in the frequency domain. The transfer functions of a linear system can be derived easily from the state-space description. We start with equation 8, the state-space description of a discrete linear time-invariant system. Performing a z-transform, we obtain:

\[
\begin{aligned}
z\,x(z) &= A\,x(z) + B\,u(z) \\
y(z) &= C\,x(z) + D\,u(z)
\end{aligned}
\]

We can then perform some rearrangement to gather the terms in x together:

\[
(zI - A)\,x(z) = B\,u(z)
\quad\Rightarrow\quad
x(z) = (zI - A)^{-1} B\,u(z)
\]

as long as we assume that the matrix (zI - A) is invertible. We can substitute this expression for x into the above equation for y, giving:

\[
y(z) = C (zI - A)^{-1} B\,u(z) + D\,u(z)
\]

Now, since we know that the transfer function is the ratio of output over input, we can write:

\[
G(z) = C (zI - A)^{-1} B + D \qquad (12)
\]

Note that G(z) is a matrix containing the transfer functions that relate every possible input-output pair of the system. This notation is therefore a much more compact way of presenting the transfer function when dealing with systems that have multiple inputs and outputs. This approach may be extended trivially to the case where A, B, C, and D are time-varying, giving us an expression for the transfer function at a particular instant in time.
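Equation (12) can also be evaluated numerically on the unit circle, z = e^{jω}, giving the frequency response of a filter directly from its state-space matrices. The following is a minimal sketch; the one-pole example system is arbitrary.

```python
import numpy as np

def transfer_function(A, B, C, D, z):
    """Evaluate G(z) = C (zI - A)^{-1} B + D at a single complex point z."""
    A, B, C, D = map(np.atleast_2d, (A, B, C, D))
    q = A.shape[0]
    return C @ np.linalg.solve(z * np.eye(q) - A, B) + D

# One-pole lowpass example: x(i+1) = 0.9 x(i) + u(i), y(i) = 0.1 x(i)
A, B, C, D = [[0.9]], [[1.0]], [[0.1]], [[0.0]]
for w in (0.0, np.pi / 2, np.pi):   # DC, quarter of the sampling rate, Nyquist
    G = transfer_function(A, B, C, D, np.exp(1j * w))
    print(f"omega = {w:.2f}  |G| = {abs(G[0, 0]):.4f}")
```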
2.2.4. Impulse Response of a Dynamical System
Trivially, by examining the state-space representation, we can define the impulse response matrix, H, of a discrete linear system excited at time n_0 and observed at time n as:

\[
H(n, n_0) =
\begin{cases}
0, & \text{if } n < n_0 \\[1ex]
D(n_0), & \text{if } n = n_0 \\[1ex]
C(n) \left( \prod_{m=n_0+1}^{n-1} A(m) \right) B(n_0), & \text{if } n > n_0
\end{cases}
\qquad (13)
\]

Note that, as for the transfer function matrix G, this matrix gives all the impulse responses between every possible input-output pair.
2.2.5. Stability of Time Varying Systems
The stability of a time-varying linear dynamical system does not follow simply from the condition for stability of an LTI system. In other words, fulfilling the eigenvalue test of equation (10) at every instant i does not necessarily imply stability. This is shown in Laroche (2007). Instead, we have to adopt a different approach to deriving a stability condition. Laroche (2007) also shows that one such condition is that:

\[
0 < \|A(i)\| < 1 \qquad (14)
\]

for all i, where ‖A(i)‖ denotes the norm of A, taken as the norm induced by the Euclidean vector norm. Unfortunately, whilst this test guarantees stability, the converse is not true: there are some systems which fail this test, yet are still stable.
2.2.6. Observability and Controllability
We have so far considered only the properties of the state matrix A from equations 7 and 8 above. However, the linear operators B and C also have some important properties. Since they map from the inputs of the system to the state vector, and from the state vector to the outputs of the system, respectively, they also dictate whether a particular state variable can be influenced by the input of the system and whether a particular state variable can influence the output of the system. These qualities are known as the controllability and observability of a state variable.
The condition for complete controllability of a system is that:

\[
\operatorname{rank}(\mathcal{R}) = q \qquad (15)
\]

where

\[
\mathcal{R} = \begin{pmatrix} B & AB & A^{2}B & \cdots & A^{q-1}B \end{pmatrix} \qquad (16)
\]

i.e. that every state variable can be influenced by the inputs, either directly or via some combination of the other state variables. Recall that q was defined above as the length of the state vector x.
To test for controllability from the n-th input, a similar procedure can be performed, where we require:

\[
\operatorname{rank}(\mathcal{R}_n) = q \qquad (17)
\]

where

\[
\mathcal{R}_n = \begin{pmatrix} b_n & Ab_n & A^{2}b_n & \cdots & A^{q-1}b_n \end{pmatrix} \qquad (18)
\]

and b_n is the n-th column of B.
The condition for complete observability of a system is that:

\[
\operatorname{rank}(\mathcal{O}) = q \qquad (19)
\]

where

\[
\mathcal{O} =
\begin{pmatrix}
C \\ CA \\ CA^{2} \\ \vdots \\ CA^{q-1}
\end{pmatrix} \qquad (20)
\]

i.e. that every state variable can influence the outputs, either directly or via some combination of the other state variables.
To test for observability of the state variables from the n-th output, a similar procedure can be performed, where we require:

\[
\operatorname{rank}(\mathcal{O}_n) = q \qquad (21)
\]

where

\[
\mathcal{O}_n =
\begin{pmatrix}
c_n \\ c_n A \\ c_n A^{2} \\ \vdots \\ c_n A^{q-1}
\end{pmatrix} \qquad (22)
\]

and c_n is the n-th row of C.
Note that these concepts generalise to the controllability and observability of the modes
of the system, as well as the individual state variables.
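These rank tests are easy to carry out numerically. The sketch below builds the controllability and observability matrices from (A, B, C) and checks their ranks; the example system is arbitrary and is chosen so that one test fails while the other passes.

```python
import numpy as np

def controllability_matrix(A, B):
    """R = [B, AB, A^2 B, ..., A^(q-1) B], as in equation (16)."""
    A, B = np.atleast_2d(A), np.atleast_2d(B)
    blocks, M = [], B
    for _ in range(A.shape[0]):
        blocks.append(M)
        M = A @ M
    return np.hstack(blocks)

def observability_matrix(A, C):
    """O = [C; CA; CA^2; ...; CA^(q-1)], as in equation (20)."""
    A, C = np.atleast_2d(A), np.atleast_2d(C)
    blocks, M = [], C
    for _ in range(A.shape[0]):
        blocks.append(M)
        M = M @ A
    return np.vstack(blocks)

# Upper-triangular A: the input b = [1, 0]^T never excites the second state,
# so the system is not completely controllable, but it is observable from c = [1, 0].
A = np.array([[0.5, 0.1], [0.0, 0.3]])
B = np.array([[1.0], [0.0]])
C = np.array([[1.0, 0.0]])
q = A.shape[0]
print("controllable:", np.linalg.matrix_rank(controllability_matrix(A, B)) == q)  # False
print("observable:  ", np.linalg.matrix_rank(observability_matrix(A, C)) == q)    # True
```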
2.3. Properties of Non-Linear Systems
In general, the exact behaviour of non-linear systems is significantly more difficult to analyse than that of linear systems. Apart from special cases, the global behaviour of the system cannot be derived easily. Instead, we have to consider the local behaviour of the system. Scheinerman (1996) gives a good overview of the qualitative properties of non-linear systems.
2.3.1. Fixed Points
The fixed points of a non-linear system are those states in which the system does not evolve with time in the absence of perturbation. If the system's state is at a fixed point, it will remain there forever unless input forces it away. Clearly, by definition, the fixed points of a discrete-time non-linear system are given when:

\[
x(i+1) = x(i)
\]

hence:

\[
\bar{x} = f(\bar{x})
\]

This equation may be solved for a particular non-linear system to find the fixed points, denoted as x̄.
2.3.2. Stability of Fixed Points
A fixed point may exhibit a number of different kinds of stability. A fixed point, x̄, is called stable if, for some arbitrary starting point x_0 near x̄, x(i) → x̄ as i → ∞. A fixed point is called marginally stable if, for some arbitrary starting point x_0 near x̄, x(i) does not converge on x̄ as i → ∞, but does stay close to x̄ (for example, in an orbit around it). A fixed point is called unstable if, for some arbitrary point near x̄, it shows neither stable nor marginally stable behaviour.
The stability of a particular fixed point can be tested in a number of ways. One approach is to linearize the area near the fixed point. This means approximating the function near the fixed point using linear functions constructed from the derivatives of the system with respect to the state variables, and testing the stability of these linear functions. These linear approximations are constructed by evaluating the Jacobian matrix at the fixed point:
\[
J =
\begin{pmatrix}
\dfrac{\partial f_1}{\partial x_1} & \dfrac{\partial f_2}{\partial x_1} & \dfrac{\partial f_3}{\partial x_1} & \cdots & \dfrac{\partial f_q}{\partial x_1} \\[2ex]
\dfrac{\partial f_1}{\partial x_2} & \dfrac{\partial f_2}{\partial x_2} & \dfrac{\partial f_3}{\partial x_2} & \cdots & \dfrac{\partial f_q}{\partial x_2} \\[2ex]
\dfrac{\partial f_1}{\partial x_3} & \dfrac{\partial f_2}{\partial x_3} & \dfrac{\partial f_3}{\partial x_3} & \cdots & \dfrac{\partial f_q}{\partial x_3} \\[2ex]
\vdots & \vdots & \vdots & \ddots & \vdots \\[1ex]
\dfrac{\partial f_1}{\partial x_q} & \dfrac{\partial f_2}{\partial x_q} & \dfrac{\partial f_3}{\partial x_q} & \cdots & \dfrac{\partial f_q}{\partial x_q}
\end{pmatrix}
\qquad (23)
\]
We can then treat the Jacobian matrix like the linear state matrix, A, and hence infer the stability of the system at that point from the magnitudes of the eigenvalues of the Jacobian. Unfortunately, the linearization test fails in some circumstances, and hence a more advanced method of testing the stability of a fixed point of a non-linear system is sometimes needed.
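As a small illustration of the linearization test (a sketch only; the one-dimensional logistic-style map and its parameter value are arbitrary examples, not drawn from the text), the Jacobian can be approximated by finite differences at a fixed point and its eigenvalues inspected:

```python
import numpy as np

def numerical_jacobian(f, x, eps=1e-6):
    """Finite-difference approximation of the Jacobian of the map f at the point x."""
    x = np.asarray(x, dtype=float)
    fx = np.asarray(f(x), dtype=float)
    J = np.zeros((fx.size, x.size))
    for j in range(x.size):
        dx = np.zeros_like(x)
        dx[j] = eps
        J[:, j] = (np.asarray(f(x + dx)) - fx) / eps
    return J

# Example map: x(i+1) = r x(i) (1 - x(i)) with r = 2.8.
r = 2.8
f = lambda x: np.array([r * x[0] * (1.0 - x[0])])

x_fixed = np.array([1.0 - 1.0 / r])     # non-trivial fixed point, where f(x) = x
J = numerical_jacobian(f, x_fixed)
print(np.abs(np.linalg.eigvals(J)))     # approximately 0.8 < 1, so this fixed point is stable
```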
A more advanced method for determining the stability of a fixed point is Lyapunov's method. This method works by finding some function V that acts as a potential for the system, and examining the form of this potential around the fixed point. Finding such a function can be easy when examining a physical system, as it may follow directly from intuition; energy is an obvious choice for many physical systems. Finding a suitable Lyapunov function for abstract systems, where intuition is no help, is much harder. The topic of Lyapunov functions is examined widely in the literature. A good introduction is given in Scheinerman (1996).
Non-linear systems also exhibit a rich variety of other behaviours, many unpredictable. Unfortunately, a discussion of chaos, bifurcations, strange attractors and similar phenomena is beyond the scope of this document.
3. DIGITAL FILTERS AS DYNAMICAL SYSTEMS
Digital filters are traditionally analyzed and designed in the frequency domain, using the mathematical machinery of LTI systems theory and complex analysis. However, digital filters can also be treated as dynamical systems, hence allowing us to leverage the machinery of that area of study, some of which is discussed in section 2 above.
Consider a digital filter in arguably its most basic form, the difference equation:

\[
y(n) = f\big(n, u(n), u(n-1), u(n-2), \ldots, u(n-k_b), y(n-1), y(n-2), \ldots, y(n-k_a)\big) \qquad (24)
\]

where n, k_a, k_b ∈ ℤ and k_a, k_b < n. This system consists of an input value, u(n), a number of internal values derived from this value, u(n-1), y(n-1), etc., and an output value, y(n), which is formed from a combination of the input and the internal values. It is easy to see that such a structure fits the definition of a dynamical system.
This section discusses how digital filters can be represented in a dynamical systems framework, and outlines some of the advantages this brings. A good overview of this subject is also given in Appendix G of Smith (2006).
3.1. State Space Representation of Digital Filters
A standard Single Input Single Output (SISO) LTI digital filter may be represented as a discrete LTI dynamical system, with single input and output variables:

\[
\begin{aligned}
x(i+1) &= A\,x(i) + b\,u(i) \\
y(i) &= c\,x(i) + d\,u(i)
\end{aligned} \qquad (25)
\]

where A is a q × q linear operator, b is a column vector of length q, c is a row vector of length q and d is a scalar. As the choice of state variables is somewhat arbitrary, a given digital filter structure can potentially be represented as a dynamical system in a number of ways.
3.1.1. Converting from a Transfer Function to the State-Space Representation
If we restrict ourselves to LTI cases, the digital filter given in equation 24 has a transfer function of the form:

\[
H(z) = \frac{b_0 + b_1 z^{-1} + \cdots + b_{k_b} z^{-k_b}}{1 + a_1 z^{-1} + \cdots + a_{k_a} z^{-k_a}}
\]
Before we can convert this filter into state-space form, we need to separate the direct path of the filter, given by the coefficient b_0, from the rest of the transfer function. This gives us:

\[
H(z) = b_0 + \frac{\beta_1 z^{-1} + \cdots + \beta_k z^{-k}}{1 + a_1 z^{-1} + \cdots + a_{k_a} z^{-k_a}}
\]

where β_1, ..., β_k = (b_1 - b_0 a_1), ..., (b_k - b_0 a_k) and k = max(k_a, k_b), with a_k = 0 for k > k_a and b_k = 0 for k > k_b. A standard result from control theory, detailed in section 3.3 of Basile and Marro (1992), allows us to convert from such a transfer function into a state-space model which is guaranteed to be either controllable or observable (these properties are defined above in section 2.2.6). These are known as the controller canonical and observer canonical forms. The controller canonical form is given by:
\[
A =
\begin{pmatrix}
-a_1 & -a_2 & \cdots & -a_{k-1} & -a_k \\
1 & 0 & \cdots & 0 & 0 \\
0 & 1 & \cdots & 0 & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & \cdots & 1 & 0
\end{pmatrix}
\qquad
b =
\begin{pmatrix}
1 \\ 0 \\ 0 \\ \vdots \\ 0
\end{pmatrix}
\]
\[
c =
\begin{pmatrix}
\beta_1 & \beta_2 & \cdots & \beta_k
\end{pmatrix}
\qquad
d = b_0
\qquad (26)
\]
The observer canonical form is given by:

\[
A =
\begin{pmatrix}
-a_1 & 1 & 0 & \cdots & 0 \\
-a_2 & 0 & 1 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
-a_{k-1} & 0 & 0 & \cdots & 1 \\
-a_k & 0 & 0 & \cdots & 0
\end{pmatrix}
\qquad
b =
\begin{pmatrix}
\beta_1 \\ \beta_2 \\ \vdots \\ \beta_k
\end{pmatrix}
\]
\[
c =
\begin{pmatrix}
1 & 0 & \cdots & 0
\end{pmatrix}
\qquad
d = b_0
\qquad (27)
\]
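The construction of equation (26) is mechanical and easy to automate. The helper below is a hypothetical function written purely for illustration (not taken from any of the references); it builds the controller canonical matrices from the transfer function coefficients and checks that the eigenvalues of A coincide with the poles of H(z).

```python
import numpy as np

def tf_to_controller_canonical(b, a):
    """Controller canonical state-space form (A, b_vec, c, d) of
    H(z) = (b0 + b1 z^-1 + ...) / (1 + a1 z^-1 + ...)."""
    b, a = np.asarray(b, dtype=float), np.asarray(a, dtype=float)
    b, a = b / a[0], a / a[0]                      # normalise so that a0 = 1
    k = max(len(a), len(b)) - 1                    # filter order
    a = np.pad(a, (0, k + 1 - len(a)))
    b = np.pad(b, (0, k + 1 - len(b)))
    d = b[0]
    beta = b[1:] - b[0] * a[1:]                    # numerator of the strictly proper part
    A = np.zeros((k, k))
    A[0, :] = -a[1:]                               # first row: -a1 ... -ak
    A[1:, :-1] = np.eye(k - 1)                     # sub-diagonal of ones
    b_vec = np.zeros(k); b_vec[0] = 1.0
    return A, b_vec, beta, d

# Biquad example: the eigenvalues of A should match the roots of the denominator.
bq_b, bq_a = [0.2, 0.4, 0.2], [1.0, -0.5, 0.25]
A, b_vec, c, d = tf_to_controller_canonical(bq_b, bq_a)
print(np.sort_complex(np.linalg.eigvals(A)))
print(np.sort_complex(np.roots(bq_a)))             # same values as above
```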
3.1.2. Converting from a Block Diagram to the State-Space Representation
It is also possible to convert a digital filter to a state-space representation via inspection of its block diagram, and assignment of a state variable to each of its unit delay elements. The form of A, b, c and d may then be inferred simply by inspection of the block diagram.
The general biquad filter has the difference equation:

\[
y(n) = b_0 u(n) + b_1 u(n-1) + b_2 u(n-2) - a_1 y(n-1) - a_2 y(n-2) \qquad (28)
\]
Now, consider its implementation in Direct Form 2, as shown in Figure 2. We assign two state variables, x_1 and x_2, to the two nodes following the unit delay elements. Immediately, by inspection, we can see that:

\[
A =
\begin{pmatrix}
-a_1 & -a_2 \\
1 & 0
\end{pmatrix}
\qquad
b =
\begin{pmatrix}
1 \\ 0
\end{pmatrix}
\qquad
c =
\begin{pmatrix}
b_1 - b_0 a_1 & b_2 - b_0 a_2
\end{pmatrix}
\qquad
d = b_0
\qquad (29)
\]

which is equivalent to the controller canonical form in this case.
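As a sanity check (a short sketch, not taken from the paper), the state-space form of equation (29) can be run side by side with the difference equation (28); both realisations should produce identical output for the same input. The coefficient values below are arbitrary.

```python
import numpy as np

b0, b1, b2 = 0.3, 0.2, 0.1
a1, a2 = -0.4, 0.3
N = 16
u = np.zeros(N); u[0] = 1.0

# Difference equation (28), with zero initial conditions.
y_direct = np.zeros(N)
for n in range(N):
    u1 = u[n - 1] if n >= 1 else 0.0
    u2 = u[n - 2] if n >= 2 else 0.0
    y1 = y_direct[n - 1] if n >= 1 else 0.0
    y2 = y_direct[n - 2] if n >= 2 else 0.0
    y_direct[n] = b0 * u[n] + b1 * u1 + b2 * u2 - a1 * y1 - a2 * y2

# State-space form (29) derived from the DF2 structure.
A = np.array([[-a1, -a2], [1.0, 0.0]])
b_vec = np.array([1.0, 0.0])
c = np.array([b1 - b0 * a1, b2 - b0 * a2])
d = b0

y_ss = np.zeros(N)
x = np.zeros(2)
for n in range(N):
    y_ss[n] = c @ x + d * u[n]       # output equation
    x = A @ x + b_vec * u[n]         # state update

print(np.allclose(y_direct, y_ss))   # True: the two realisations match
```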
Figure 2: Block diagram showing the Direct Form 2 implementation of a biquad filter section.

3.1.3. Properties of the State Matrix
As we have seen above, the feedback sections of a digital filter are defined in the state-space representation by the state matrix, A. We might therefore expect A to have some relation to the poles of the filter. This turns out to be correct. If we decompose A into its modes (as discussed in section 2.2.2), i.e. its eigenvalue-eigenvector pairs, we see that the eigenvalues correspond to the poles of the filter. Similarly, the stability of the filter can be inferred from A via the methods discussed in section 2. In the LTI case, the stability condition that the magnitudes of the eigenvalues of A must be less than one is analogous to the familiar digital filter stability condition that requires the poles of the filter to lie within the unit circle on the complex plane. The stability condition for the LTV case given by the state-variable representation has no equivalent in transfer-function based filter analysis, and therefore represents a major advantage of this approach.
3.2. Analysis of Larger Signal-Processing Structures
Another advantage of representing digital filters in state-space form is the relative ease of analysing multiple filters connected in a larger network. Figures 3(a), 3(b) and 3(c) show simple examples of such configurations. Parallel and series connections can be dealt with trivially in the state-space representation, but feedback connections take a little more effort.
Using the state-space form for a SISO digital filter given in equation 25, we can derive the form for the same filter with feedback of gain g. Adding the feedback, the equations become:

\[
\begin{aligned}
x(i+1) &= A\,x(i) + b\,u(i) + g\,b\,y(i) \\
y(i) &= c\,x(i) + d\,u(i) + g\,d\,y(i)
\end{aligned}
\]
Figure 3: Schematic diagrams showing filters connected in (a) series, (b) parallel and (c) with feedback.
Solving for y(i) and substituting into the state equation, we have:

\[
\begin{aligned}
x(i+1) &= \left( A + \frac{g\,b\,c}{1 - g d} \right) x(i) + \left( 1 + \frac{g d}{1 - g d} \right) b\,u(i) \\[1ex]
y(i) &= \frac{c}{1 - g d}\, x(i) + \frac{d}{1 - g d}\, u(i)
\end{aligned} \qquad (30)
\]
Thus the system with feedback can now be analysed in the same way as the system without feedback. This is much harder to achieve using frequency domain representations.
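Equation (30) can be wrapped up as a small helper that returns the closed-loop state-space matrices for a given feedback gain g. This is a sketch under the assumption that 1 - gd ≠ 0; the example one-pole filter and gain value are arbitrary.

```python
import numpy as np

def close_feedback(A, b, c, d, g):
    """Closed-loop state-space form of a SISO filter (A, b, c, d) with
    output feedback of gain g, following equation (30)."""
    A, b, c = np.atleast_2d(A), np.asarray(b, dtype=float), np.asarray(c, dtype=float)
    denom = 1.0 - g * d
    if np.isclose(denom, 0.0):
        raise ValueError("feedback loop is algebraically ill-posed (1 - g*d = 0)")
    A_cl = A + g * np.outer(b, c) / denom
    b_cl = b * (1.0 + g * d / denom)
    c_cl = c / denom
    d_cl = d / denom
    return A_cl, b_cl, c_cl, d_cl

# Example: a one-pole filter with a small amount of positive output feedback.
A, b, c, d = np.array([[0.5]]), np.array([1.0]), np.array([0.2]), 0.1
A_cl, *_ = close_feedback(A, b, c, d, g=0.8)
print(np.abs(np.linalg.eigvals(A_cl)))   # check that the closed loop is still stable
```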
3.3. MIMO Filters
Extension of the dynamical system model of digital filters to Multiple Input Multiple Output (MIMO) cases is trivial. We simply revert to vector instead of scalar inputs and outputs, giving us a representation that looks like:

\[
\begin{aligned}
x(i+1) &= A\,x(i) + B\,u(i) \\
y(i) &= C\,x(i) + D\,u(i)
\end{aligned} \qquad (31)
\]

in the LTI case. This is a very powerful benefit of representing filters in state-space, as it allows us to analyse MIMO filters with little more effort than SISO filters. In contrast, MIMO filters are much messier to deal with when represented as a transfer function or difference equation.
4. APPLICATIONS OF STATE SPACE REPRESENTATION
State-space methods have been applied to digital lter design and analysis in a variety of
ways. In this section, we give a brief overview of some of these applications.
4.1. Sound Synthesis
The State Variable Filter (SVF) is a common filter used in musical sound synthesis, as noted in Dattorro (1997), that provides simultaneous low-pass, band-pass and high-pass outputs. It was originally derived in the analog domain by Kerwin et al. (1967), and the approach was extended by Snelgrove and Sedra (1986). Chamberlin (1985) derived a digital version of the filter by direct replacement of its components with their digital equivalents, and this implementation went on to be widely used in commercial products.
The SVF is based on the observation that the equation

\[
\dot{x}(t) = A\,x(t)
\]

implies that each state variable essentially acts as an integrator. Any state matrix may therefore be implemented directly as a network of integrators, either digital or analog. The SVF is implemented as a cascade of two integrators with feedback. Taps taken before the first integrator, after the first integrator and after the second integrator respectively provide the highpass, bandpass and lowpass outputs. Figure 4 shows how the standard digital SVF is implemented.
digital SVF is implemented. Coefcient q sets the resonance of the lter and coefcient
sets the lter cutoff point in angular frequency. The stability of the lter breaks down near
= 1, which corresponds to around 1/6th of the sampling rate (sampling rate corresponds
to = 2). The digital SVF is therefore often oversampled to provide stability at higher
cutoff frequencies. f Chamberlin (1985) describes how the actual cut-off frequency of the
lter differs fromthe value of , he corrects for this by introducing a newparameterisation
that corrects of the tuning error. This is give by:
= 2 sin
_

f
f
s
_
(32)
where f and f
s
are the desired lter cutoff frequency and the sampling frequency respec-
tively, both given as absolute frequency in Hz.
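A per-sample sketch of the digital SVF using the tuning of equation (32) is given below. The update ordering shown is one common form of the Chamberlin structure and is an assumption for illustration rather than a transcription from the text; the cutoff, resonance and input signal are arbitrary example values.

```python
import numpy as np

def chamberlin_svf(u, f_c, f_s, q=1.0):
    """Digital state variable filter (Chamberlin-style form). Returns the lowpass,
    bandpass and highpass outputs for the input signal u. The parameter q controls
    damping: smaller q gives a more resonant filter."""
    u = np.asarray(u, dtype=float)
    w = 2.0 * np.sin(np.pi * f_c / f_s)          # tuning coefficient, equation (32)
    low = band = 0.0
    lp, bp, hp = (np.zeros_like(u) for _ in range(3))
    for n, x in enumerate(u):
        low = low + w * band                     # first integrator (lowpass tap)
        high = x - low - q * band                # highpass tap
        band = band + w * high                   # second integrator (bandpass tap)
        lp[n], bp[n], hp[n] = low, band, high
    return lp, bp, hp

# Example: 100 Hz cutoff at a 48 kHz sample rate, driven by white noise.
rng = np.random.default_rng(0)
lp, bp, hp = chamberlin_svf(rng.standard_normal(48000), f_c=100.0, f_s=48000.0, q=0.7)
print(lp[:4])
```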
State-space methods have also been used for sound synthesis in a digital waveguide context by Mignot et al. (2009), who leverage the ease with which state-space models can be combined in a modular system. Another advantage of state-space representation in sound synthesis, especially physical modelling, is the ease with which the system can be decomposed into its modes by calculation of the eigenvalue-eigenvector pairs of the state matrix. Matignon (1995) notes this quality.
4.2. Time-Varying Digital Filters
4.2.1. Stability
The application of state-space methods to the problem of the stability of time-varying digital filters is very fruitful. Laroche (2007) gives an excellent overview of the topic,
including providing a number of stability conditions based on the norm of the state matrix. There is also an excellent discussion of the more intuitive aspects of the stability of time-varying digital filters.
Pekonen (2008) and Pekonen et al. (2009) apply the methods of Laroche (2007) to the specific case of coefficient modulation of allpass filters. Pekonen (2008) examines the case of first-order allpass filters, whilst Pekonen et al. (2009) examines the case of large cascades of allpass filters, known as spectral delay filters.
Using the notation introduced above in section 2 for consistency, the stability condition given by Laroche is:

\[
|D(i)| + \sum_{n=-\infty}^{i-1} \left\| C(i) \left( \prod_{k=n+1}^{i-1} A(k) \right) B(n) \right\| < G \qquad (33)
\]

where G is some real number or function which is bounded (i.e. does not diverge to +∞ or −∞ for any value). In the case of the modulated first-order allpass filter given in Pekonen (2008), the coefficients are given by:
\[
\begin{aligned}
A(i) &= \big[ -m(i) \big] \\
B(i) &= \big[\, 1 - m^{2}(i) \,\big] \\
C(i) &= \big[\, 1 \,\big] \\
D(i) &= \big[\, m(i) \,\big]
\end{aligned} \qquad (34)
\]

where m(i) is a modulation function. The linear operator notation has been kept so as to place this low-order case more obviously in the general framework. Also note that in such a simple filter, containing only one delay element, only a single state variable is necessary, as discussed above in section 3.1.2. Substituting these coefficients into equation 33, we have:
\[
|m(i)| + \sum_{n=-\infty}^{i-1} \left| \big(1 - m^{2}(n)\big) \prod_{k=n+1}^{i-1} \big(-m(k)\big) \right| < G \qquad (35)
\]
Now, we need to find an upper bound function, G. We can construct such a function using a basic geometric result, the triangle inequality (|x + y| ≤ |x| + |y|). This gives us:

\[
|m(i)| + \sum_{n=-\infty}^{i-1} \left| \big(1 - m^{2}(n)\big) \prod_{k=n+1}^{i-1} \big(-m(k)\big) \right|
\;\le\;
|m(i)| + \sum_{n=-\infty}^{i-1} \left| 1 - m^{2}(n) \right| \prod_{k=n+1}^{i-1} \left| m(k) \right| \qquad (36)
\]
With the product term now dealing strictly with non-negative numbers, it is possible to say that the upper bound converges if |m(k)| < 1 for all k, and thus the filter is stable under that condition. Clearly the case of |m(i)| = 1 also converges, as in this case the sum term reduces to zero. Hence, we can write a stability condition for this filter:

\[
|m(i)| \le 1 \quad \forall\, i \qquad (37)
\]
The analysis for the spectral delay filter is slightly more complex, but produces the same condition, as shown in Pekonen et al. (2009).
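The condition in equation (37) is easy to probe numerically. The sketch below simulates the single-state realisation of equation (34) with a sinusoidal modulation m(i) that respects |m(i)| ≤ 1 and confirms that the output stays bounded for a bounded input. The specific modulation function, depth and input are arbitrary examples.

```python
import numpy as np

def modulated_allpass(u, m):
    """First-order allpass with time-varying coefficient m(i), using the state-space
    realisation A(i) = -m(i), B(i) = 1 - m(i)^2, C(i) = 1, D(i) = m(i) (equation (34))."""
    x, y = 0.0, np.zeros_like(u)
    for i, (u_i, m_i) in enumerate(zip(u, m)):
        y[i] = x + m_i * u_i                  # y(i) = C x(i) + D(i) u(i)
        x = -m_i * x + (1.0 - m_i**2) * u_i   # x(i+1) = A(i) x(i) + B(i) u(i)
    return y

N = 20000
u = np.random.default_rng(1).standard_normal(N)
m = 0.95 * np.sin(2 * np.pi * np.arange(N) / 64.0)   # |m(i)| <= 0.95 < 1
y = modulated_allpass(u, m)
print(np.max(np.abs(y)))    # remains bounded, consistent with equation (37)
```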
4.2.2. Transients
State-space representations have been used extensively in attempts to suppress the transients generated when the coefficients of a digital filter are varied. Valimaki and Laakso (1998a) describe such an attempt. The state-space representation is used to identify the form of the transient produced on variation of the coefficients, separate from the steady-state response of the filter to its input. An attempt is then made to cancel the transient by altering the state vector, based on the predicted transient form for a particular change in coefficients.
4.3. Implementation Nonlinearities in Digital Filters
The applications of state-space and dynamical systems methods given so far have concentrated on linear filters, whether time-varying or time-invariant. However, the dynamical systems perspective is also powerful for examining the non-linearities inherent in some digital filter implementations. Ling (2007) gives excellent coverage of the non-linearities introduced into otherwise linear digital filters by quantization, numerical saturation and two's complement arithmetic. Non-linear dynamical systems methods are applied to derive the effects of these non-linearities on the behaviour of digital filters.
5. CONCLUSIONS
This report has presented an overview of the dynamical systems approach, along with its application to digital filters. This approach brings a number of powerful tools to the analysis and design of digital filters. Time-varying digital filters can be dealt with easily using the state-space approach, as can the inherent non-linearities of certain implementations of digital filters. The power of the state-space approach when applied to modular systems has also been highlighted. Further work could concentrate on providing more complete stability conditions for time-varying filters. Another potentially fruitful avenue of investigation could be the application of Lyapunov stability analysis to arbitrary non-linear filter structures (for example, polynomial filters) in order to derive stable non-linear filters which could produce sonically interesting results.
References
C. Barnes and A. Fam. Minimum norm recursive digital filters that are free of overflow limit cycles. IEEE Transactions on Circuits and Systems, 24(10):569–574, Jan 1977.
G. Basile and G. Marro. Controlled and Conditioned Invariants in Linear System Theory. Prentice Hall, 1992.
D. Bohn. Overload characteristics of state-variable crossovers. 79th Convention of the AES, (2264), Jan 1985.
H. Chamberlin. Musical Applications of Microprocessors. Pearson Education Canada, Jan 1985.
J. Dattorro. Effect design, part 1: Reverberator and other filters. J. Audio Eng. Soc., 45(9):660–684, Jan 1997.
D. Frey. Low-cost alternatives in high-quality state variable filters. Journal of the Audio Engineering Society, 27(10):750–756, 1979.
W. Kerwin, L. Huelsman, and R. Newcomb. State-variable synthesis for insensitive integrated circuit transfer functions. IEEE Journal of Solid-State Circuits, 2(3):87–92, Jan 1967.
J. Laroche. On the stability of time-varying recursive filters. J. Audio Engineering Society, 55(6):460–471, Jan 2007.
W. Ling. Nonlinear Digital Filters. Elsevier, 2007.
D. Matignon. Physical modelling of musical instruments: analysis-synthesis by means of state space representations. In Proc. Int. Symp. on Musical Acoustics, Jan 1995.
R. Mignot, T. Hélie, and D. Matignon. State-space representations for digital waveguide networks of lossy flared acoustic pipes. In Proc. of the 12th Int. Conference on Digital Audio Effects (DAFx-09), Como, Italy, 2009.
J. Pekonen. Coefficient-modulated first-order allpass filter as distortion effect. In Proceedings of the 11th International Conference on Digital Audio Effects (DAFx-08), Espoo, Finland, Sept 2008.
J. Pekonen, V. Välimäki, J. Abel, and J. Smith. Spectral delay filters with feedback and time-varying coefficients. In Proc. of the 12th Int. Conference on Digital Audio Effects (DAFx-09), Como, Italy, Sept 2009.
E. R. Scheinerman. Invitation to Dynamical Systems. Prentice Hall, 1996.
J. O. Smith. Introduction to Digital Filters with Audio Applications. Printed on demand, 2006. URL https://ccrma.stanford.edu/~jos/filters/filters.html.
W. Snelgrove and A. Sedra. Synthesis and analysis of state-space active filters using intermediate transfer functions. IEEE Transactions on Circuits and Systems, 33(3):287–301, Jan 1986.
N. Steiner. Simultaneous input, variable resonance, voltage controlled filter for signal processing. 58th Convention of the AES, (1270), 1977.
V. Valimaki and T. Laakso. Suppression of transients in time-varying recursive filters for audio signals. In Proceedings of the 1998 IEEE International Conference on Acoustics, Speech and Signal Processing, volume 6, pages 3569–3572, May 1998a.
V. Valimaki and T. Laakso. Suppression of transients in variable recursive digital filters with a novel and efficient cancellation method. IEEE Transactions on Signal Processing, 46(12):3408–3414, Dec 1998b.
D. Wise. A survey of biquad filter structures for application to digital parametric equalization. 105th Convention of the AES, (4820), Jan 1998.
J. Zurada. Programmable state-variable active biquads. J. Audio Engineering Society, 29(11):786–793, 1981.
