
International Review on Computers and Software (I.RE.CO.S.), Vol. 9, N. 10
ISSN 1828-6003, October 2014
Wavelet Based Adaptive Filtering Algorithms for Acoustic Noise Cancellation

K. Mohanaprasad1, P. Arulmozhivarman2

Abstract – This paper presents an Acoustic Noise Cancellation (ANC) system using wavelet based
adaptive filtering algorithms. The acoustic noise canceller is implemented using adaptive
algorithms such as LMS (Least Mean Square), NLMS (Normalized Least Mean Square), RLS
(Recursive Least Square) and FRLS (Fast Recursive Least Square). The inclusion of a wavelet
based transformation in ANC reduces the number of samples to be processed and increases the
efficiency of the system by minimizing the processing time. The simulation results show that the
wavelet transform based adaptive algorithms produce an improvement in SNR (Signal to Noise
Ratio) with less execution time compared to conventional adaptive algorithms. Copyright © 2014
Praise Worthy Prize S.r.l. - All rights reserved.

Keywords: Acoustic Noise Cancellation, Least Mean Square, Normalized Least Mean Square,
Recursive Least Square, Fast Recursive Least Square, Signal to Noise Ratio

Nomenclature

μ        Step size parameter
d(n)     Noise corrupted speech signal
x(n)     Background noise signal
ŵ(n)     Tap weight coefficient vector of the filter
L        Length of the adaptive filter
a(n)     Clean speech signal
e(n)     Error signal
y(n)     Adaptive filter output
λ        Small positive constant
k(n)     Gain in the RLS algorithm
π(n)     Intermediate quantity in the gain calculation
ε(n)     A priori forward prediction error
ψ(t)     Mother wavelet
W(a,b)   Continuous wavelet transform
a        Scaling parameter
b        Location parameter
W(l,m)   Discrete wavelet transform
l        Discrete translation
m        Discrete dilation

I. Introduction

Noise cancellation is performed in order to enhance the quality of noise corrupted speech. Acoustic noise cancellation (ANC) is required to cancel the background noise, as in the case of mobile phone users with hands free equipment. The acoustic noise cancellation method has to be able to track the continuously changing features of the noise. For this reason, an adaptive filter is required. The adaptive filter [1] works on the principle that its coefficients are restructured constantly in order to obtain the least value of error. The adaptive filter works with a stipulated algorithm of our choice. The adaptive filtering [2] algorithm consists of two procedures, the adaptive process and the filtering process. The adaptive process makes sure that the filter coefficients are attuned according to the error between the corrupted speech signal and the filter output, with the noise signal given as the reference.

The time domain adaptive filtering algorithms such as Least Mean Squares (LMS) [3]-[4], Normalized Least Mean Squares (NLMS) [5], Recursive Least Squares (RLS) [6]-[7] and Fast Recursive Least Squares (FRLS) [8]-[9] were used in the adaptive filtering process. The time domain adaptive algorithm is challenging for high speed real time applications because of its computational complexity. The computational complexity is reduced by using a frequency domain adaptive algorithm based on the Fast Fourier Transform (FFT) [10]-[12]. In the case of the LMS algorithm, the convergence rate is governed by the input autocorrelation matrix and degrades drastically for highly correlated inputs. The filter coefficients generated by a transform domain adaptive algorithm [13]-[14] are applied in the case of high input autocorrelation to improve the convergence rate. The stability and convergence of the LMS algorithm mainly depend upon the step size (μ). For stationary signals the step size is fixed, and for non-stationary signals a variable step size LMS algorithm is proposed to obtain better performance [15]-[18].


The NLMS algorithm uses a variable step size to improve the convergence speed, stability and performance of the LMS algorithm [19]. In order for the system to function efficiently, the value of the signal to noise ratio (SNR) has to be sufficient. The wavelet transform [20] can be used to raise the SNR value. The wavelet transform is a fairly recent method used for signal analysis, especially for the investigation of speech, image and sonar signals. The advantage of the wavelet transform [21] is that good time resolution is obtained in the high frequency regions and good frequency resolution in the low frequency regions. This helps in tracking the constantly varying transients of the signal. It is also computationally simpler than the Fourier transform. The disadvantage of adaptive filtering algorithms for ANC is their low SNR values. This paper presents the analysis of the Discrete Wavelet Transform (DWT) applied to adaptive filtering algorithms, in the form of DWT-LMS, DWT-NLMS, DWT-RLS and DWT-FRLS, to increase the SNR values for the ANC application. The paper is structured as follows: adaptive algorithms for acoustic noise cancellation are briefed in Section II, the proposed wavelet based adaptive algorithm for acoustic noise cancellation is detailed in Section III, simulation and results are discussed in Section IV and the conclusion is given in Section V.

II. Adaptive Algorithms for Acoustic Noise Cancellation

II.1. Acoustic Noise Cancellation

In order to decrease the overall noise of the system, a noise polluted speech signal and a noisy reference signal consisting of just noise are utilized, as shown in Fig. 1. A noise corrupted speech signal d(n) and a background noise signal x(n) are the inputs to the filter.

Fig. 1. Adaptive filter for Acoustic Noise Cancellation

The input noise signal x(n) is an M×1 vector given as:

x(n) = [x(n), x(n-1), \ldots, x(n-M+1)]^T    (1)

The length of the filter is denoted by L and ŵ(n) is the tap weight coefficient vector of the filter. The corrupted speech signal is denoted as:

d(n) = a(n) + x(n)    (2)

where a(n) is the clean speech signal. The enhanced error signal e(n), which is the difference between the noisy speech and the filter output y(n), is given as:

e(n) = d(n) - y(n)    (3)

The LMS, NLMS, RLS and FRLS algorithms have been utilized for this purpose.

II.2. Adaptive LMS and NLMS Algorithms

The function of the adaptive filter is to estimate the noise in the speech signal. The error between the speech and the output of the filter, as well as the background noise input, are used to manipulate the filter coefficients, which are continuously restructured via the adaptive filtering algorithms. The LMS algorithm is one of the most commonly used and the simplest of adaptive algorithms [3]. The fact that it is less complex and more stable than other algorithms is its major advantage, and the algorithm appears to be fairly robust against implementation errors. The LMS algorithm is the first adaptive filtering algorithm implemented in this paper.

As previously discussed, the adaptive filtering algorithms undergo the filtering and the adaptive procedure. During the filtering part, two values are estimated. First, the value of the filter output is generated:

y(n) = \sum_{i=0}^{M-1} w_i\, x(n-i) = w^T(n)\, x(n)    (4)

The LMS algorithm aims at curtailing the mean square error by amending the tap weight vector. The tap weight adjustment at time n+1 is:

\hat{w}(n+1) = \hat{w}(n) + \mu\, x(n)\, e(n)    (5)

where μ is the step size parameter. The value of the error is found by subtracting the filter output from the desired response. In the adaptive process, the filter regulates its coefficients in agreement with the desired response:

e(n) = d(n) - y(n) = d(n) - w^T(n)\, x(n)    (6)

The LMS algorithm suffers from a slow convergence rate and, when the input vector is too large, there arises a complication called the gradient noise amplification problem.
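As an illustration of Eqs. (4)-(6), the following is a minimal sketch of an LMS based acoustic noise canceller in Python with NumPy. It is not the authors' code (which is not given in the paper); the function name lms_anc, the argument names d and x, and the default values of M and mu are illustrative assumptions.

```python
import numpy as np

def lms_anc(d, x, M=32, mu=0.01):
    """LMS acoustic noise canceller sketch.

    d  : noise corrupted speech d(n)
    x  : background noise reference x(n)
    M  : number of tap weights
    mu : step size parameter (Eq. (5))
    Returns the error signal e(n), which approximates the clean speech.
    """
    N = len(d)
    w = np.zeros(M)                            # tap weight vector w(n)
    e = np.zeros(N)
    for n in range(M, N):
        x_vec = x[n - M + 1:n + 1][::-1]       # [x(n), x(n-1), ..., x(n-M+1)], Eq. (1)
        y = np.dot(w, x_vec)                   # filter output, Eq. (4)
        e[n] = d[n] - y                        # error signal, Eq. (6)
        w = w + mu * x_vec * e[n]              # tap weight update, Eq. (5)
    return e
```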


The NLMS algorithm is the second type of adaptive filtering algorithm implemented in this paper [5]. It is analogous to the LMS algorithm, but with a few changes. The NLMS algorithm is improved in terms of accuracy and has a higher rate of convergence than the LMS algorithm. Since the method by which the step size parameter is computed is different for NLMS, it also shows better stability. The tap weight adjustment at time n+1 is given by:

\hat{w}(n+1) = \hat{w}(n) + \frac{\mu}{\|x(n)\|^2}\, x(n)\, e(n)    (7)

where μ is the variable step size parameter. In this way, a greater convergence rate is obtained for the NLMS algorithm. But we constantly try to increase this convergence rate in order to come up with adaptive filtering algorithms of higher caliber. This leads us to examine the Recursive Least Squares (RLS) algorithm.
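A minimal sketch of the NLMS update of Eq. (7), again an illustration rather than the authors' implementation; the small regularization constant eps added to the denominator is an assumption to avoid division by zero.

```python
import numpy as np

def nlms_anc(d, x, M=32, mu=0.5, eps=1e-8):
    """NLMS acoustic noise canceller sketch (Eq. (7))."""
    N = len(d)
    w = np.zeros(M)
    e = np.zeros(N)
    for n in range(M, N):
        x_vec = x[n - M + 1:n + 1][::-1]
        y = np.dot(w, x_vec)
        e[n] = d[n] - y
        # step size normalized by the input energy ||x(n)||^2
        w = w + (mu / (eps + np.dot(x_vec, x_vec))) * x_vec * e[n]
    return e
```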
II.3. Adaptive RLS and FRLS Algorithms

The Recursive Least Squares (RLS) algorithm, a deterministic process, differs from the previous two algorithms, which happen to be stochastic processes. All the algorithms differ in the methodology used for the calculation of the gain. In the RLS algorithm [7] the element of randomness is eliminated. This is achieved by introducing a term called the "forgetting factor", which ensures that only selected data, and not all of it, is sent for processing. This leads to a higher rate of convergence. The fact that matrix inversion is not required is another advantage of the RLS algorithm. The value of the gain is computed as:

k(n) = \frac{\pi(n)}{\lambda + x^H(n)\, \pi(n)}    (8)

where λ is a small positive constant and π(n) is an intermediate quantity in the calculation of the gain. The tap weight vector is estimated as:

\hat{w}(n) = \hat{w}(n-1) + k(n)\, e(n)    (9)


The Fast Recursive Least Squares algorithm was introduced in order to surmount the complexity of the RLS algorithm. The main aim of this algorithm is to combine the merits of the LMS and RLS algorithms, thereby resulting in a superior algorithm that is fast and also simple to execute. Thus it becomes a highly efficient noise cancelling method. This algorithm redefines the methodology by which the gain is calculated. The FRLS algorithm [8] is divided into a filtering element and a prediction element. The filtering element receives a gain value from the prediction element in order to identify the unknown system. The augmented gain vector k_{L+1}(n) is first calculated as an intermediate step towards the gain vector. This requires the calculation of the forward prediction coefficient A(n). Next, the backward prediction coefficient G(n) is calculated. Using all the intermediate steps, the required gain value k(n) is arrived at and calculated as:

k(n) = [1 + \varepsilon(n)\, r_b(n)]^{-1}\, [k_{L+1}(n) - r_b(n)\, G(n-1)]    (10)

where ε(n) is the a priori forward prediction error.

Implementing the above time domain adaptive algorithms for acoustic noise cancellation produces a low value of SNR with a high execution time. This implies that the noise cancellation performance of the conventional adaptive algorithms is low, and that their convergence rate is poor. To improve the noise cancellation performance with less execution time, wavelet based adaptive algorithms are proposed.

III. Proposed Wavelet Based Adaptive Algorithm for Acoustic Noise Cancellation

III.1. Wavelet Transform

The wavelet transform maps L^2(R) into L^2(R^2), but with good time-frequency localization. There are two types of wavelet transform: the continuous wavelet transform (CWT) and the discrete wavelet transform (DWT) [22]. The Continuous Wavelet Transform (CWT) is defined in terms of dilations and translations of a mother wavelet ψ(t) and is given by:

\psi_{ab}(t) = \frac{1}{\sqrt{a}}\, \psi\left(\frac{t-b}{a}\right)    (11)

The CWT of a given signal f(t) is given by:

W(a,b) = \int \psi_{ab}(t)\, f(t)\, dt    (12)

where 'a' is the scaling parameter and 'b' is the location parameter. The wavelet is translated for a given scaling parameter a by varying the parameter b. The CWT of a signal f(t) involves many dilations and translations of the mother wavelet, which results in redundant information. This is the main motivation for the Discrete Wavelet Transform. The Discrete Wavelet Transform (DWT) is the discretization of the CWT obtained by sampling particular wavelet coefficients. Sampling of the CWT is achieved by letting a = 2^{-l} and b = m 2^{-l} in W(a,b), where 'l' is the discrete translation and 'm' is the discrete dilation. The DWT of a given signal f(t) is given by:


W(l,m) = \int f(t)\, 2^{l/2}\, \psi(2^{l} t - m)\, dt    (13)

The DWT is easier to implement and also reduces the computation time. The signal can be analyzed at diverse frequencies with diverse resolutions by using Multi Resolution Analysis (MRA). The given signal is decomposed into an approximation component and a detail component, where the approximation consists of the low frequency information and the detail consists of the high frequency information. Consecutive decomposition of the approximation gives multiple levels, as shown in Fig. 2. The decomposition level [23] can be extended as long as the standard deviation of the approximation component remains significant compared with the standard deviation of the original signal, which is expressed by the condition:

\frac{\sigma_{A_k}}{\sigma_s} \ge 0.1    (14)

where σ_Ak is the standard deviation of the approximation coefficients at the k-th level and σ_s is the standard deviation of the original input signal.
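As an illustration of this multilevel decomposition and the stopping rule of Eq. (14), the sketch below uses the PyWavelets package (an assumption; the paper's own simulations were done in Matlab) to decompose a signal with the db2 wavelet until the criterion is no longer met. The function name and the max_level cap are illustrative choices.

```python
import numpy as np
import pywt

def decomposition_level(signal, wavelet='db2', max_level=8, ratio=0.1):
    """Increase the DWT level while sigma_Ak / sigma_s >= ratio (cf. Eq. (14))."""
    sigma_s = np.std(signal)
    level = 0
    approx = np.asarray(signal, dtype=float)
    while level < max_level:
        # one more decomposition level applied to the current approximation
        approx, detail = pywt.dwt(approx, wavelet)
        if np.std(approx) / sigma_s < ratio:
            break
        level += 1
    return level

# Example: three-level decomposition S = SA3 + SD3 + SD2 + SD1 as in Fig. 2
# coeffs = pywt.wavedec(noisy_speech, 'db2', level=3)  # returns [SA3, SD3, SD2, SD1]
```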

Fig. 2. Multilevel decomposition

where S is the original signal and SA and SD represent the approximation and detail components respectively (S = SA3 + SD3 + SD2 + SD1).

III.2. Wavelet Transform Based Adaptive Filter

In wavelet analysis, the signal under scrutiny is altered to a form that is more constructive to us. A wavelet undergoes two main processes, known as translation and scaling. During translation, the wavelet undergoes a shift in position, and during scaling, the scale gets changed. A better correlation between the wavelet and the signal gives a higher value of the transform. In the Discrete Wavelet Transform, high pass filters and low pass filters are employed to scrutinize the signal. The high frequency regions of the signal are transmitted through the high pass filter and the low frequency regions through the low pass filter for analysis. In this paper, the discrete wavelet transform is implemented as a pre-processing step to the adaptive filtering algorithms, as shown in Fig. 3. The first stage that is implemented is called decomposition. The signal length has to be adjusted and extended. Next, the filtered input noisy speech signal undergoes down sampling: one half of the data is sent through a low pass filter and the other half through a high pass filter. The content that passes through the low pass filter is normally the more important part, since noise is generally of high frequency. The outputs of the low pass filter and the high pass filter are known as the approximation and detail coefficients respectively. In the proposed technique the Haar and Daubechies wavelets are implemented.

Fig. 3. Proposed methodology

Before estimating the noise signal from the noise corrupted speech using the adaptive filter, the noise corrupted speech is decomposed into approximation and detail coefficients using the Daubechies 2 (db2) wavelet and the Haar wavelet.

III.3. The Daubechies Wavelet Based Approach

The Daubechies wavelet transform is named after the mathematician Ingrid Daubechies. We have implemented the Daubechies 2 (db2) wavelet transform. This wavelet transform also consists of a scaling function and a wavelet function, and it has a wide range of applications. The wavelet coefficients used are derived from the scaling function coefficients. The wavelet expansion of a noise corrupted speech signal d(m) has the following form:

d(m) = \sum_{k} c_{j_0 k}\, \phi_{j_0 k}(m) + \sum_{j \ge j_0} \sum_{k} f_{jk}\, \psi_{jk}(m)    (15)


There are two terms in the wavelet expansion; the first term, the approximation, is defined by:

c_{jk} = \int d(m)\, \phi^{*}_{jk}(m)\, dm    (16)

where the φ_jk are the scaling functions:

\phi_{jk}(m) = \frac{1}{\sqrt{2^{j}}}\, \phi\left(\frac{m - k 2^{j}}{2^{j}}\right)    (17)

The detail coefficients are:

f_{jk} = \int d(m)\, \psi^{*}_{jk}(m)\, dm    (18)

where the ψ_jk are the wavelet functions:

\psi_{jk}(m) = \frac{1}{\sqrt{2^{j}}}\, \psi\left(\frac{m - k 2^{j}}{2^{j}}\right)    (19)

The wavelet expansion of the noise reference signal x(m) has the following form:

x(m) = \sum_{k} a_{j_0 k}\, \phi_{j_0 k}(m) + \sum_{j \ge j_0} \sum_{k} b_{jk}\, \psi_{jk}(m)    (20)

The first term, the approximation of the microphone signal, is defined by:

a_{jk} = \int x(m)\, \phi^{*}_{jk}(m)\, dm    (21)

where the φ_jk are the scaling functions:

\phi_{jk}(m) = \frac{1}{\sqrt{2^{j}}}\, \phi\left(\frac{m - k 2^{j}}{2^{j}}\right)    (22)

The detail coefficients are:

b_{jk} = \int x(m)\, \psi^{*}_{jk}(m)\, dm    (23)

where the ψ_jk are the wavelet functions:

\psi_{jk}(m) = \frac{1}{\sqrt{2^{j}}}\, \psi\left(\frac{m - k 2^{j}}{2^{j}}\right)    (24)

The approximation coefficients of the noise corrupted speech signal consist of the clean speech signal together with some part of the noise, whereas the detail coefficients consist mostly of noise. The approximation coefficients are therefore sufficient to separate the clean speech signal, so they are passed to the adaptive filter for cancellation of the remaining noise. The output of the adaptive filtering algorithm is then up sampled by adding new samples to the signal. The wavelet based adaptive algorithm separates the clean speech signal from the noise signal with increased efficiency and takes less computation time compared to the conventional adaptive filter. The proposed method is simulated using Matlab.
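To make the processing chain of Fig. 3 concrete, the following is a hedged end-to-end sketch in Python with NumPy and PyWavelets, not the authors' Matlab code: decompose both inputs with the db2 wavelet, run the adaptive filter on the approximation coefficients only, and reconstruct (up sample) the result. Here lms_anc is the LMS sketch given after Section II.2, and the single decomposition level follows the observation in Section IV that one level gave the best results.

```python
import pywt

def wavelet_lms_anc(d, x, wavelet='db2'):
    """DWT-LMS sketch: adaptive filtering on the approximation coefficients (Fig. 3)."""
    # Decomposition: low pass output = approximation, high pass output = detail
    dA, _ = pywt.dwt(d, wavelet)      # noise corrupted speech d(n)
    xA, _ = pywt.dwt(x, wavelet)      # noise reference x(n)

    # Adaptive noise cancellation on the (half length) approximation coefficients;
    # lms_anc is the LMS sketch defined earlier in this paper's illustration.
    eA = lms_anc(dA, xA)

    # Reconstruction (up sampling) back to the original rate. The detail
    # coefficients, which carry mostly noise, are dropped here; whether the
    # authors reuse them is not stated in the paper.
    return pywt.idwt(eA, None, wavelet)
```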
III.4. Haar Wavelet Based Approach

The Haar wavelet, a square function, is the simplest of all wavelets. The Haar wavelets are discrete-time orthonormal sequences ψ_ij(t), defined by:

\psi_{ij}(t) = \psi_{i0}(t - 2^{i} j)    (25)

with:

\psi_{i0}(t) =
\begin{cases}
2^{-i/2}, & 0 \le t \le 2^{i-1} - 1 \\
-2^{-i/2}, & 2^{i-1} \le t \le 2^{i} - 1 \\
0, & \text{elsewhere}
\end{cases}    (26)

The indices 'i' and 'j' correspond to the scale and the translation respectively; 'i' is a natural number and 'j' is an integer. The Haar wavelet has two properties: the first is that any function can be written as a linear combination of the scaling function φ(x) and its shifted versions; the second is that any function can be written as a linear combination of the Haar wavelet function ψ(x) and its shifted versions. The LMS algorithm is used for the adaptation of the filter coefficients w_ij, i.e.:

w_{ij}(t+1) = w_{ij}(t) + \mu\, r_{ij}(t)\, e(t)    (27)

where μ is the adaptation gain and e(t) is the error between the desired signal and the adaptive filter output:

y(t) = \sum_{i,j \in D} r_{ij}(t)\, w_{ij}(t)    (28)

The index set D is the reduced order used for modelling, and r_ij(t) is the convolution of the input signal x(t) with the wavelet ψ_ij(t):

r_{ij}(t) = \sum_{l} x(l)\, \psi_{ij}(t - l)    (29)

The procedure to obtain the approximation and detail coefficients with the Haar wavelet is as follows. Consider a signal X = [y_1, y_2, y_3, ..., y_N]. This signal can be decomposed into two coarser signals as follows:


 y  y2 y3  y4 y  yN  TABLE II
A 1 , ......, N 1  COMPARISON OF SNR VALUES USING HAAR WAVELET BASED
 2 2 2  ADAPTIVE FILTERING
(30) Input Noise SNR Output of HAAR Wavelet (dB
 y  y2 y3  y4 y  yN 
D 1 , ......, N 1 
(dB) LMS NLMS RLS FRLS
 2 2 2  24 19.802 21.968 30.832 32.686
27 19.954 22.346 30.282 32.246
30 19.361 21.978 31.762 32.546
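A minimal sketch of Eq. (30), computing the coarser approximation and detail signals by pairwise averaging and differencing (plain NumPy, assuming an even-length signal; the function name is an illustrative choice):

```python
import numpy as np

def haar_decompose(X):
    """One level of the Haar decomposition of Eq. (30)."""
    X = np.asarray(X, dtype=float)
    A = (X[0::2] + X[1::2]) / 2.0   # approximation: pairwise averages
    D = (X[0::2] - X[1::2]) / 2.0   # detail: pairwise differences
    return A, D

# Example: haar_decompose([4, 6, 10, 12]) -> A = [5, 11], D = [-1, -1]
```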
IV. Simulation and Results

In order to examine the performance of the proposed wavelet based adaptive algorithms, a number of experiments with DWT-LMS, DWT-NLMS, DWT-RLS and DWT-FRLS have been performed in Matlab, and the results are compared with the conventional time domain adaptive algorithms, namely the LMS, NLMS, RLS and FRLS algorithms.

For the simulation, the TIMIT database is used to generate noisy speech observations with various acoustic environments, room impulse responses and additive noises. A noise corrupted speech signal and a noise reference signal are used, with a sampling frequency of 8 kHz. A -21 dB noisy speech signal that is around 15 seconds long, consisting of 1,25,502 samples, is used. Three different simulations were carried out using three different noise levels of 24 dB, 27 dB and 30 dB to corrupt the clean speech signal. To evaluate the variation between these algorithms, the SNR value is calculated. The SNR values of the speech signal before and after the implementation of the adaptive filtering algorithms are noted:

SNR_{in} = 10 \log_{10} \frac{\mathrm{Variance(input\ speech)}}{\mathrm{Variance(noise\ reference)}}    (31)

SNR_{out} = 10 \log_{10} \frac{\mathrm{Variance(filtered\ speech)}}{\mathrm{Variance(residual\ noise)}}    (32)
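A small helper implementing Eqs. (31)-(32) as variance ratios in dB (a sketch; the function and argument names are assumptions):

```python
import numpy as np

def snr_in(input_speech, noise_reference):
    """Input SNR in dB, Eq. (31)."""
    return 10.0 * np.log10(np.var(input_speech) / np.var(noise_reference))

def snr_out(filtered_speech, residual_noise):
    """Output SNR in dB, Eq. (32)."""
    return 10.0 * np.log10(np.var(filtered_speech) / np.var(residual_noise))
```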
TABLE I
SNR VALUES IN THE TIME DOMAIN ADAPTIVE FILTERING

Input Noise (dB)   Input SNR (dB)   LMS Output SNR (dB)   NLMS Output SNR (dB)   RLS Output SNR (dB)   FRLS Output SNR (dB)
24                 0.33             17.269                19.674                 28.183                29.666
27                 2.661            17.190                19.970                 28.492                30.227
30                 5.661            17.332                20.146                 29.239                29.960
Avg execution time (ms)             32.748                31.399                 33.478                30.45

Table I denotes the SNR values obtained for various values of input SNR in dB. Amongst the adaptive filtering algorithms implemented, FRLS displays the highest SNR values. Its execution speed is also improved compared to RLS. The wavelet transform is introduced in order to reduce the complexity and to improve the SNR values, and the convergence rates show marked improvements. Table II and Table III illustrate the improved values of SNR obtained with the Haar and db2 wavelets respectively.

TABLE II
COMPARISON OF SNR VALUES USING HAAR WAVELET BASED ADAPTIVE FILTERING

Input Noise (dB)   Output SNR of Haar wavelet (dB):   LMS      NLMS     RLS      FRLS
24                                                    19.802   21.968   30.832   32.686
27                                                    19.954   22.346   30.282   32.246
30                                                    19.361   21.978   31.762   32.546
Avg execution time (ms)                               12.164   8.612    20.136   15.473

TABLE III
COMPARISON OF SNR VALUES USING DAUBECHIES WAVELET BASED ADAPTIVE FILTERING

Input Noise (dB)   Output SNR of db2 wavelet (dB):    LMS      NLMS     RLS      FRLS
24                                                    19.807   21.976   30.816   32.519
27                                                    19.798   22.373   30.282   32.277
30                                                    19.362   21.697   31.866   32.589
Avg execution time (ms)                               11.746   7.427    19.649   14.203

The discrete wavelet transform involves the processes of decomposition and reconstruction. The results obtained are best for a single level of decomposition. The FRLS algorithm has the best results in terms of output SNR; it has the simplicity of the LMS algorithm and it is faster than the RLS algorithm. All the adaptive algorithms have improved convergence speeds and SNR values when wavelet decomposition is used. The execution time of the Daubechies wavelet based adaptive algorithms is further reduced compared to the Haar wavelet.

V. Conclusion

In this paper wavelet based adaptive filtering algorithms have been proposed to improve the efficiency of acoustic noise cancellation. The adaptive filtering algorithms have been implemented and compared using SNR and execution time. The simulations have shown that the performance of the adaptive filtering algorithms is significantly improved after the inclusion of the Haar and db2 wavelets. The FRLS algorithm shows superior performance and, when combined with the wavelet transforms, proves to be a better choice in terms of computational speed and performance. The results were tested over a limited range of SNR values.

References

[1] S. Haykin, Adaptive Filter Theory (Prentice Hall, 2002).
[2] A. H. Sayed, Fundamentals of Adaptive Filtering (Wiley, 2008).
[3] B. Widrow, S. Stearns, Adaptive Signal Processing (Prentice Hall, 1985).
[4] D. T. M. Slock, On the convergence behaviour of the LMS and the NLMS algorithms, IEEE Trans. Signal Processing, Vol. 42, pp. 2811-2825, 1993.
[5] S. Ikeda, A. Sugiyama, An adaptive noise canceller with low signal distortion for speech codecs, IEEE Trans. Signal Processing, Vol. 47, n. 3, pp. 665-674, 1999.


[6] M. S. E. Abadi, J. H. Husøy, A. M. Far, Convergence analysis of two recently introduced adaptive filter algorithms (FEDS/RAMP), Iranian Journal of Electrical and Computer Engineering (IJECE), Vol. 7, n. 1, 2008.
[7] M. S. E. Abadi, J. H. Husøy, A comparative study of some simplified RLS type algorithms, Proc. Intl. Symp. on Control, Communications and Signal Processing, Hammamet, Tunisia, 2004.
[8] M. Arezki, A. Benallal, Error propagation analysis of fast recursive least squares algorithms, Proc. 9th IASTED International Conference on Signal and Image Processing, Honolulu, Hawaii, USA, pp. 97-101, 2007.
[9] G. V. Moustakides, S. Theodoridis, Fast Newton transversal filters - A new class of adaptive estimation algorithms, IEEE Trans. Signal Processing, Vol. 39, n. 10, pp. 2184-2193, 1991.
[10] D. Mansour, A. H. Gray Jr., Unconstrained frequency domain adaptive filter, IEEE Trans. Acoust., Speech, Signal Processing, Vol. ASSP-30, n. 5, pp. 726-734, Oct. 1982.
[11] M. Dentino, J. McCool, B. Widrow, Adaptive filtering in the frequency domain, Proc. IEEE, Vol. 66, n. 12, pp. 1658-1659, Dec. 1978.
[12] S. S. Narayan, A. M. Peterson, M. J. Narasimha, Transform domain LMS algorithm, IEEE Trans. Acoust., Speech, Signal Processing, Vol. ASSP-31, n. 3, pp. 609-615, June 1983.
[13] D. I. Kim, P. De Wilde, Performance analysis of the DCT-LMS adaptive filtering algorithm, Signal Processing, Vol. 80, n. 8, pp. 1629-1654, Aug. 2000.
[14] R. C. Bilcu, P. Kuosmanen, K. Egiazarian, A transform domain LMS adaptive filter with variable step-size, IEEE Signal Processing Letters, Vol. 9, n. 2, Feb. 2002.
[15] B. Widrow, J. R. Glover, et al., Adaptive noise cancelling: Principles and applications, Proc. IEEE, pp. 1692-1716, Dec. 1975.
[16] R. W. Harris, D. M. Chabries, F. A. Bishop, A variable step (VS) adaptive filter algorithm, IEEE Trans. Acoust., Speech, Signal Processing, Vol. ASSP-34, pp. 309-316.
[17] T. J. Shan, T. Kailath, Adaptive algorithms with an automatic gain control feature, IEEE Trans. Circuits Syst., Vol. CAS-35, pp. 122-127, Jan. 1988.
[18] R. Ramli, A. Abid Noor, S. Abdul Samad, Modified adaptive line enhancer in variable noise environments using set-membership adaptive algorithm, (2014) International Review on Computers and Software (IRECOS), 9(8), pp. 1468-1475.
[19] Yue Wang, Chun Zhang, Zhihua Wang, A new variable step-size LMS algorithm with application to active noise control, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), China, 6-10 April 2003.
[20] C. S. Burrus, et al., Introduction to Wavelets and Wavelet Transforms (Prentice Hall, 1998).
[21] I. Y. Soon, S. N. Koh, C. K. Yeo, Wavelet for speech denoising, Proc. of the IEEE TENCON, pp. 479-482, 1997.
[22] V. K. Gupta, M. Chandra, S. N. Sharan, Acoustic echo and noise cancellation system for hands-free telecommunication using variable step size algorithms, Radioengineering, Vol. 22, n. 1, pp. 200-207, 2013.
[23] N. Ramesh Babu, P. Arulmozhivarman, Improving forecast accuracy of wind speed using wavelet transform and neural networks, Journal of Electrical Engineering & Technology, Vol. 8, n. 3, pp. 559-564, 2013.

Authors' information

1 Assistant Professor (Senior), School of Electronics Engineering, VIT University, Vellore, India.
E-mail: kmohanaprasad@vit.ac.in

2 Professor, School of Electronics Engineering, VIT University, Vellore, India.
E-mail: parulmozhivarman@vit.ac.in

K. Mohanaprasad was born in 1981. He received his B.E. degree from Madras University, Chennai, in 2003 and his M.E. degree from Anna University, Chennai, in 2006. Currently he is pursuing a Ph.D. at VIT University, Vellore, and has 2 international journal and 2 international conference publications. His research interests include signal processing, speech processing and the wavelet transform.

Pachaiyappan Arulmozhivarman was born in India in 1975. He received his Ph.D. degree in the field of wavefront sensing and adaptive optics from NIT Trichy, India, in 2005, and his M.Sc. and B.Sc. degrees in Applied Physics (Instrumentation) from Bharathidasan University, Trichy, securing a gold medal in both programs. Currently he is working as a Professor in the Signal & Image Processing Division under the School of Electronics Engineering, Vellore Institute of Technology University (VIT), Vellore, India. He is a member of the IEEE Signal Processing Society.

Copyright © 2014 Praise Worthy Prize S.r.l. - All rights reserved International Review on Computers and Software, Vol. 9, N. 10

1681
