
An Introduction to Adaptive Filtering & Its Applications

By
Asst. Prof. Dr. Thamer M. Jamel
Department of Electrical Engineering
University of Technology
Baghdad, Iraq

Introduction
Linear filters: the filter output is a linear function of the filter input.
Design methods:
- The classical approach: frequency-selective filters such as low-pass, band-pass, and notch filters, etc.
- Optimal filter design: mostly based on minimizing the mean-square value of the error signal.

Wiener filter
Based on the work of Wiener in 1942 and Kolmogorov in 1939.
It relies on a priori statistical information about the signal and noise.
When such a priori information is not available, which is usually the case, it is not possible to design a Wiener filter in the first place.

Adaptive filter
The signal and/or noise characteristics are often nonstationary, and the statistical parameters vary with time.
An adaptive filter has an adaptation algorithm that is meant to monitor the environment and vary the filter transfer function accordingly.
Based on the actual signals received, it attempts to find the optimum filter design.

Adaptive filter
The basic operation now involves two processes:
1. A filtering process, which produces an output signal in response to a given input signal.
2. An adaptation process, which aims to adjust the filter parameters (the filter transfer function) to the (possibly time-varying) environment.
Often, the (average) square value of the error signal is used as the optimization criterion.
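As a concrete illustration of these two processes, here is a minimal sketch in Python (NumPy), assuming an FIR filter structure and an LMS-style update; the function and variable names are illustrative, not taken from the slides.

```python
import numpy as np

def adaptive_filter(x, d, M=8, mu=0.01):
    """Run the two adaptive-filter processes over input x and desired signal d."""
    w = np.zeros(M)                    # adjustable filter coefficients
    y = np.zeros(len(x))               # filter output
    e = np.zeros(len(x))               # error signal (criterion: average of e^2)
    for n in range(M - 1, len(x)):
        u = x[n - M + 1:n + 1][::-1]   # tap-input vector [x(n), x(n-1), ..., x(n-M+1)]
        y[n] = np.dot(w, u)            # 1. filtering process
        e[n] = d[n] - y[n]             # error between desired and actual output
        w = w + mu * e[n] * u          # 2. adaptation process (here an LMS-style update)
    return y, e, w
```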

Adaptive filter
Because of the complexity of the optimizing algorithms, most adaptive filters are digital filters that perform digital signal processing.
When processing analog signals, the adaptive filter is preceded by an A/D converter and followed by a D/A converter.

Adaptive filter
The generalization to adaptive IIR filters leads to stability problems.
It is therefore common to use an FIR digital filter with adjustable coefficients.

Applications of Adaptive Filters: Identification
Used to provide a linear model of an unknown plant.
Applications: system identification.

Applications of Adaptive Filters: Inverse Modeling
Used to provide an inverse model of an unknown plant.
Applications: equalization (communication channels).

Applications of Adaptive Filters: Prediction
Used to provide a prediction of the present value of a random signal.
Applications: linear predictive coding.
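A small sketch of adaptive prediction (Python; the signal model and parameter values are illustrative assumptions): the filter sees only past samples of the signal, the present sample acts as the desired response, and the converged tap weights form a one-step-ahead predictor.

```python
import numpy as np

rng = np.random.default_rng(3)
N, M, mu = 5000, 4, 0.05

# Example random signal: a 2nd-order autoregressive process (illustrative coefficients)
s = np.zeros(N)
for n in range(2, N):
    s[n] = 1.2 * s[n - 1] - 0.6 * s[n - 2] + 0.1 * rng.standard_normal()

w = np.zeros(M)                      # predictor tap weights
for n in range(M, N):
    u = s[n - 1::-1][:M]             # past samples s(n-1), ..., s(n-M)
    y = np.dot(w, u)                 # predicted present value
    e = s[n] - y                     # prediction error drives the adaptation
    w += mu * e * u                  # LMS update

print("predictor taps:", w)          # first two taps should move toward 1.2 and -0.6
```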

Applications of Adaptive Filters: Interference Cancellation
Used to cancel unknown interference from a primary signal.
Applications: echo / noise cancellation (hands-free car phones, aircraft headphones, etc.).

Acoustic Echo Cancellation

LMS Algorithm
The most popular adaptation algorithm is LMS.
- The cost function is defined as the mean-squared error.
- It is based on the method of steepest descent: move toward the minimum on the error surface to reach the minimum.
- The gradient of the error surface is estimated at every iteration.

LMS Adaptive Algorithm
- Introduced by Widrow & Hoff in 1959.
- Simple: no matrix calculations are involved in the adaptation.
- In the family of stochastic gradient algorithms; an approximation of the steepest descent method.
- Based on the MMSE (Minimum Mean Square Error) criterion.
- The adaptive process uses two input signals: 1) the filter input, from which the filtering process produces the output signal, and 2) the desired signal (training sequence).
- Adaptation: recursive adjustment of the filter tap weights.

LMS Algorithm Steps

Filter output:

    y(n) = \sum_{k=0}^{M-1} w_k^*(n) \, u(n-k)

Estimation error:

    e(n) = d(n) - y(n)

Tap-weight adaptation:

    w_k(n+1) = w_k(n) + \mu \, u(n-k) \, e^*(n)

In words: the updated value of the tap-weight vector equals the old value of the tap-weight vector plus the learning-rate parameter times the tap-input vector times the error signal.
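For real-valued signals the conjugates drop out, and the three steps can be written as a single function (a Python sketch; the names are illustrative):

```python
import numpy as np

def lms_step(w, u, d, mu):
    """One LMS iteration for a real-valued M-tap filter.

    w  : tap-weight vector w(n), length M
    u  : tap-input vector [u(n), u(n-1), ..., u(n-M+1)]
    d  : desired response d(n)
    mu : learning-rate (step-size) parameter
    """
    y = np.dot(w, u)            # filter output  y(n) = sum_k w_k(n) u(n-k)
    e = d - y                   # estimation error  e(n) = d(n) - y(n)
    w_next = w + mu * u * e     # tap-weight adaptation  w(n+1) = w(n) + mu u(n) e(n)
    return y, e, w_next
```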

Stability of LMS
The LMS algorithm is convergent in the mean square if and only if the step-size parameter satisfies

    0 < \mu < 2 / \lambda_{max}

where \lambda_{max} is the largest eigenvalue of the correlation matrix of the input data.
A more practical test for stability is

    0 < \mu < 2 / (tap-input power)

where the tap-input power is the sum of the mean-square values of the tap inputs.
Larger values of the step size:
- increase the adaptation rate (faster adaptation), but
- increase the residual mean-squared error.
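The sketch below estimates both bounds from a block of input data (Python; it assumes a wide-sense stationary real input and uses sample autocorrelations):

```python
import numpy as np

def lms_step_size_bounds(u, M):
    """Return (2/lambda_max, 2/tap-input power) for an M-tap LMS filter driven by u."""
    # Sample autocorrelation r(0), r(1), ..., r(M-1)
    r = np.array([np.mean(u[k:] * u[:len(u) - k]) for k in range(M)])
    R = np.array([[r[abs(i - j)] for j in range(M)] for i in range(M)])   # Toeplitz estimate of the correlation matrix
    lam_max = np.max(np.linalg.eigvalsh(R))
    return 2.0 / lam_max, 2.0 / (M * r[0])

u = np.random.default_rng(0).standard_normal(10000)
print(lms_step_size_bounds(u, M=8))   # the second (tap-input power) bound is the more conservative of the two
```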

STEEPEST DESCENT EXAMPLE

Given the following function, we need to obtain the vector that gives the absolute minimum:

    y(c_1, c_2) = c_1^2 + c_2^2

It is obvious that c_1 = c_2 = 0 gives the minimum. (The figure shows the quadratic error function, a "quadratic bowl", over the (c_1, c_2) plane.)

Now let us find the solution by the steepest descent method.

STEEPEST DESCENT EXAMPLE

We start by assuming (c_1 = 5, c_2 = 7).

We select the step-size constant \mu. If it is too big, we miss the minimum; if it is too small, it takes a long time to reach the minimum. Here we select \mu = 0.1.

The gradient vector is:

    \nabla y = [ \partial y / \partial c_1 , \; \partial y / \partial c_2 ]^T = [ 2c_1 , \; 2c_2 ]^T

So our iterative equation is:

    [c_1, c_2]^T_{[n+1]} = [c_1, c_2]^T_{[n]} - (\mu/2) \nabla y = [c_1, c_2]^T_{[n]} - 0.1 \, [c_1, c_2]^T_{[n]} = 0.9 \, [c_1, c_2]^T_{[n]}
STEEPEST DESCENT EXAMPLE

Iteration 1 (initial guess):  [c_1, c_2] = [5, 7]
Iteration 2:                  [c_1, c_2] = [4.5, 6.3]
Iteration 3:                  [c_1, c_2] = [4.05, 5.67]
......
Iteration 60:                 [c_1, c_2] = [0.01, 0.013]

    \lim_{n \to \infty} [c_1, c_2]^T_{[n]} = [0, 0]^T   (the minimum)

As we can see, the vector [c_1, c_2] converges to the value that yields the function minimum, and the speed of this convergence depends on \mu.
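The iterations above can be reproduced with a few lines of Python. This sketch uses the slide's function y(c_1, c_2) = c_1^2 + c_2^2, the initial guess (5, 7), and the update written so that each component shrinks by the factor 0.9 per iteration, matching the numbers quoted above.

```python
import numpy as np

c = np.array([5.0, 7.0])                 # iteration 1: the initial guess
mu = 0.1
for n in range(2, 61):
    grad = 2.0 * c                       # gradient of y: (2 c1, 2 c2)
    c = c - (mu / 2.0) * grad            # steepest-descent step; equivalent to c <- 0.9 c
    print(f"Iteration {n}: c1 = {c[0]:.3f}, c2 = {c[1]:.3f}")
# c converges toward the minimum at (0, 0); a larger mu converges faster (up to the stability limit).
```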

LMS CONVERGENCE GRAPH

Example for an unknown channel of 2nd order (figure label: desired combination of taps).

This graph illustrates the LMS algorithm. First we start by guessing the tap weights. Then we repeatedly step in the direction opposite to the gradient vector to calculate the next taps, and so on, until we reach the MMSE, meaning the MSE is 0 or very close to it. (In practice we cannot get an error of exactly 0, because the noise is a random process; we can only decrease the error below a desired minimum.)
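The 2nd-order unknown-channel experiment behind this graph can be sketched as follows (Python; the channel taps, noise level, and step size are illustrative assumptions, not values from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
h = np.array([0.8, 0.5, -0.3])           # "unknown" 2nd-order channel (illustrative taps)
N, M, mu = 5000, 3, 0.01

u = rng.standard_normal(N)                                    # white training input
d = np.convolve(u, h)[:N] + 0.001 * rng.standard_normal(N)    # channel output plus small noise

w = np.zeros(M)                          # initial guess of the tap weights
for n in range(M, N):
    x = u[n::-1][:M]                     # tap-input vector [u(n), u(n-1), u(n-2)]
    e = d[n] - np.dot(w, x)              # error between channel output and filter output
    w += mu * e * x                      # step opposite the estimated gradient

print("estimated taps:", w)              # approaches the desired combination [0.8, 0.5, -0.3]
```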

SMART ANTENNAS

(Figure: adaptive array antenna with a linear combiner used to suppress interference.)

Applications are many:
- Digital communications (OFDM, MIMO, CDMA, and RFID)
- Channel equalisation
- Adaptive noise cancellation
- Adaptive echo cancellation
- System identification
- Smart antenna systems
- Blind system equalisation

Adaptive Equalization

Introduction
Wireless communication is the most interesting field of communication these days, because it supports mobility (mobile users). However, many applications of wireless communication now require high-speed communication (high data rates).
What is ISI?
Inter-symbol interference (ISI) takes place when a given transmitted symbol is distorted by other transmitted symbols.

Cause of ISI
ISI is imposed by the band-limiting effect of the practical channel, and also by multi-path effects (delay spread).

Definition of the equalizer:
The equalizer is a digital filter that provides an approximate inverse of the channel frequency response.

Need for equalization:
Equalization mitigates the effects of ISI to decrease the probability of error that would occur without suppression of ISI. However, this reduction of ISI has to be balanced against the prevention of noise-power enhancement.

Types of Equalization Techniques

Linear equalization techniques:
simple to implement, but they greatly enhance the noise power because they work by inverting the channel frequency response.

Non-linear equalization techniques:
more complex to implement, but they cause much less noise enhancement than linear equalizers.

Equalization Techniques

(Fig. 3: Classification of equalizers.)

(Figure: linear equalizer with N taps and N-1 delay elements.)
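A minimal sketch of such a linear equalizer trained by LMS with a known sequence (Python; the channel, decision delay, and parameter values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
h = np.array([1.0, 0.4, 0.2])            # ISI channel (illustrative)
N, taps, mu, delay = 4000, 11, 0.01, 5

s = rng.choice([-1.0, 1.0], size=N)      # known training sequence (BPSK symbols)
r = np.convolve(s, h)[:N] + 0.01 * rng.standard_normal(N)    # received, ISI-distorted signal

w = np.zeros(taps)                       # linear equalizer: `taps` coefficients, taps-1 delays
for n in range(taps, N):
    x = r[n::-1][:taps]                  # equalizer input vector (received samples)
    d = s[n - delay]                     # desired response: delayed training symbol
    e = d - np.dot(w, x)                 # error between training symbol and equalizer output
    w += mu * e * x                      # LMS adaptation of the equalizer taps

decisions = np.sign(np.convolve(r, w)[:N])   # equalize and make hard decisions on the received block
```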

Table of various algorithms and their trade-offs:

Algorithm    | Multiplying operations | Complexity | Convergence | Tracking
LMS          | 2N + 1                 | Low        | Slow        | Poor
MMSE         | N^2 to N^3             | Very high  | Fast        | Good
RLS          | 2.5N^2 + 4.5N          | High       | Fast        | Good
Fast Kalman  | 20N + 5                | Fairly low | Fast        | Good
RLS-DFE      | 1.5N^2 + 6.5N          | High       | Fast        | Good

Adaptive Noise Cancellation

Adaptive Filter Block Diagram
- d(n): desired signal
- x(n): filter input
- y(n): filter output
- e(n) = d(n) - y(n): error output

The LMS Equation

The Least Mean Squares (LMS) algorithm updates each coefficient on a sample-by-sample basis, based on the error e(n):

    w_k(n+1) = w_k(n) + \mu \, e(n) \, x_k(n)

This equation minimises the power in the error e(n).

The Least Mean Squares Algorithm
The value of \mu (mu) is critical:
- If \mu is too small, the filter reacts slowly.
- If \mu is too large, the filter resolution is poor.
The selected value of \mu is a compromise.

LMS Convergence vs. \mu

Audio Noise Reduction

A popular application of acoustic noise reduction is headsets for pilots. This uses two microphones.

Block Diagram of a Noise Reduction Headset
- Near microphone: d(n) = speech + noise (desired input)
- Far microphone: x(n) = noise' (input to the adaptive filter)
- Adaptive filter output: y(n) (noise estimate)
- e(n) = d(n) - y(n): speech output
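A sketch of the two-microphone arrangement in Python (the speech waveform, the acoustic path from the noise source to the near microphone, and all parameter values are made-up illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
N, M, mu = 8000, 16, 0.005

speech = np.sin(2 * np.pi * 0.01 * np.arange(N))     # stand-in for the pilot's speech
noise = rng.standard_normal(N)                       # cockpit noise picked up by the far microphone
path = np.array([0.6, 0.3, 0.1])                     # acoustic path from noise source to near microphone
d = speech + np.convolve(noise, path)[:N]            # near microphone: d(n) = speech + noise
x = noise                                            # far microphone: x(n) = noise' (reference)

w = np.zeros(M)
e = np.zeros(N)
for n in range(M, N):
    u = x[n::-1][:M]                 # tap-input vector from the noise reference
    y = np.dot(w, u)                 # filter output y(n): estimate of the noise in d(n)
    e[n] = d[n] - y                  # error output e(n): the speech with the noise removed
    w += mu * e[n] * u               # LMS update, as in the equation above

# e[n] is the speech output: the filter converges so that y(n) matches the noise component of d(n).
```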

The Simulink Model

Setting the Step Size (mu)
The rate of convergence of the LMS algorithm is controlled by the step size (mu). This is the critical variable.

Trace of Input to Model
Input = signal + noise.

Trace of LMS Filter Output
The output starts at zero and grows.

Trace of LMS Filter Error
The error contains the noise.

Typical C6713 DSK Setup
(Figure: DSK board connected via USB to a PC, with headphones, a +5 V supply, and a microphone.)

Adaptive Echo Cancellation

Acoustic Echo Canceller

New Trends in Adaptive Filtering
- Partial updating of weights
- Sub-band adaptive filtering
- Adaptive Kalman filtering
- Affine projection methods
- Space-time adaptive processing
- Non-linear adaptive filtering: neural networks
- The Volterra series algorithm
- Genetic and fuzzy techniques
- Blind adaptive filtering

Thank You
