Adaptive Filters
Reading assignment

Angelina A. Aquino
EE 374: Digital Signal Processing II
2nd Semester A.Y. 2019–2020
2 Applications

Adaptive filters are used in noise cancellation for various situations such as acoustic, industrial, and biomedical systems [1]. These systems experience background noise which changes rapidly over time (e.g. due to non-stationary environments or AC fluctuations) and cannot be filtered by conventional means. Adaptive filters can operate on a feedback loop as shown in Fig. 1 to generate a cancelling signal y(n) based on an initial reference n2(n) of the expected noise, then continuously optimize their response to minimize the error in the resulting output signal e(n).

Fig. 1: Adaptive noise cancelling [1]

Another use of adaptive filters is in system modeling for communication channels, stock market behavior, or other dynamic processes [2]. In communications, for example, a channel model is needed in order to compensate for any possible distortion in signal transmission. Similar to the noise cancellation process, adaptive filtering can be used to produce an inverse model of the channel, as shown in Fig. 2.

Fig. 2: Inverse system modeling [2]

3 Algorithms

A common method for optimizing adaptive filter coefficients is the Least Mean Squares (LMS) algorithm, which uses gradient descent to reduce the error in the output signal. For every iteration of the algorithm, the gradient (i.e. the vector corresponding to the rate and direction of greatest increase) of the mean square error with respect to the filter coefficients is evaluated, and the coefficients are updated in the direction opposite the gradient until the error has converged to a minimum. The equation for the update step is shown below, where w[n] and x[n] are the coefficient and input vectors at time n, ξ[n] is the mean square error at time n, e[n] is the instantaneous output error at time n, and µ is the step size which determines the rate of convergence. Note that the expression e[n] · x[n] does not give an exact evaluation of the gradient, but is a simple approximation used in the LMS algorithm first presented by Widrow and Hoff in 1960 [3].

    w[n+1] = w[n] − µ · ∇ξ[n]
           = w[n] + µ · e[n] · x[n]
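As a concrete illustration of the update rule above (a sketch, not code from the reading; the function name, tap count, and test signal are invented for this example), the LMS iteration can be written in a few lines of Python. Here the filter identifies an unknown 2-tap system from its input and output, with e[n] computed as the difference between the desired and filtered signals:

```python
import random

def lms_identify(x, d, num_taps, mu):
    """Sketch of the LMS update w[n+1] = w[n] + mu * e[n] * x[n],
    where x[n] holds the num_taps most recent input samples."""
    w = [0.0] * num_taps
    for n in range(num_taps - 1, len(x)):
        x_vec = x[n - num_taps + 1:n + 1][::-1]            # newest sample first
        y = sum(wi * xi for wi, xi in zip(w, x_vec))       # filter output y[n]
        e = d[n] - y                                       # instantaneous error e[n]
        w = [wi + mu * e * xi for wi, xi in zip(w, x_vec)] # step against the gradient
    return w

# Usage: recover the taps of a known system h = [0.5, -0.3].
random.seed(0)
x = [random.uniform(-1, 1) for _ in range(5000)]
d = [0.5 * x[n] - (0.3 * x[n - 1] if n > 0 else 0.0) for n in range(len(x))]
w = lms_identify(x, d, num_taps=2, mu=0.1)
# w converges toward [0.5, -0.3]
```

Since the desired signal here is noiseless, the error shrinks toward zero and the coefficients settle on the true system taps; with noise present, the weights would instead fluctuate around them.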
One problem with the conventional LMS algorithm is that the estimated gradient value is highly dependent on the amplitude of the input signal x[n]. With a time-varying signal amplitude, it is hard to guarantee convergence or stability of the learning algorithm. To address this issue, several normalized LMS (NLMS) algorithms have been independently proposed [4][5]. In NLMS implementations, the step size is normalized with respect to the norm of the input vector, resulting in a better convergence rate that properly scales with the input amplitude and error over time. The equation below gives the update with the normalized step size, with an additional constant α optionally included to avoid computational difficulty for input amplitudes much less than 1.

    w[n+1] = w[n] + (µ / (α + ‖x[n]‖²)) · e[n] · x[n]
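The effect of the normalization can be sketched as follows (again an illustrative example, not from the reading; the amplitude jump and parameter values are invented). The normalized step µ / (α + ‖x[n]‖²) keeps the update stable even when the input amplitude changes abruptly mid-run:

```python
import random

def nlms_step(w, x_vec, d_n, mu=0.5, alpha=1e-4):
    """One NLMS update: w <- w + mu / (alpha + ||x||^2) * e * x."""
    y = sum(wi * xi for wi, xi in zip(w, x_vec))   # filter output
    e = d_n - y                                    # instantaneous error
    norm_sq = sum(xi * xi for xi in x_vec)         # ||x[n]||^2
    g = mu / (alpha + norm_sq)                     # normalized step size
    return [wi + g * e * xi for wi, xi in zip(w, x_vec)], e

# The input amplitude grows tenfold halfway through, which would force a
# plain LMS step size chosen for the quiet segment to be re-tuned.
random.seed(1)
w = [0.0, 0.0]
for n in range(4000):
    scale = 1.0 if n < 2000 else 10.0
    x_vec = [scale * random.uniform(-1, 1), scale * random.uniform(-1, 1)]
    d_n = 0.5 * x_vec[0] - 0.3 * x_vec[1]   # unknown system to identify
    w, e = nlms_step(w, x_vec, d_n)
# w converges toward [0.5, -0.3] regardless of the amplitude change
```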
An alternative to the LMS algorithm and its variants is the recursive least squares (RLS) algorithm, which updates coefficients based on a different objective function. Instead of seeking to reduce the mean square error, the RLS algorithm minimizes a weighted sum of all square error values produced over the whole recursive process [6]. This can be represented by the cost function C shown below, wherein 0 < λ ≤ 1 is a "forgetting factor" which reduces the effect of older error samples over time.

    C(w[n]) = Σ_{i=0}^{n} λ^(n−i) · e²[i]
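To see what the forgetting factor does, the cost above can be evaluated directly (an illustrative sketch; the function name and values are invented). With λ < 1, a constant error stream yields a cost bounded by the geometric sum 1/(1 − λ) instead of growing without limit, which is what lets RLS track a changing system:

```python
def rls_cost(errors, lam):
    """Exponentially weighted cost C(w[n]) = sum_{i=0}^{n} lam**(n-i) * e[i]**2."""
    n = len(errors) - 1
    return sum(lam ** (n - i) * e * e for i, e in enumerate(errors))

# With lam = 1 every error counts equally (plain least squares);
# with lam = 0.9 the 100 unit errors weigh only about 1/(1-0.9) = 10 in total.
full = rls_cost([1.0] * 100, lam=1.0)   # 100.0
forgetful = rls_cost([1.0] * 100, lam=0.9)   # close to 10.0
```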
RLS algorithms result in a much faster convergence rate compared to LMS-based algorithms, at the cost of higher computational complexity and varying stability.

form a window of the output signal, and the spectral content is estimated using the short-time Fourier transform. Once the spectrum is found for each window, a peak tracking algorithm is used to produce best estimates of the current heart rate based on previous measurements.

Using test data from 12 subjects running on a treadmill at set speeds for specific intervals, ranging between 1–2, 6–8, and 12–15 km/hour, the method above produced heart rate estimations with a mean error of 1.57 beats per minute and a standard deviation of 1.11 beats per minute. This showed improvements in mean error and standard deviation over prior methods, indicating comparatively better accuracy and robustness.

References

[1] S. Dixit and D. Nagaria (2017). LMS adaptive filters for noise cancellation: A review. International Journal of Electrical and Computer Engineering 7(5): 2520–2529.

[2] J. Bermudez (2011). Adaptive Filtering - Theory and Applications. Technical report. Available: http://sc.enseeiht.fr/doc/Seminar_Bermudez.pdf

[3] B. Widrow and M. E. Hoff (1960). Adaptive switching circuits. IRE WESCON Convention Record pp. 96–104.

[4] A. E. Albert and L. A. Gardner (1967). Stochastic Approximation and Nonlinear Regression. Cambridge, MA: MIT Press.