
Estimation of the Frequency of Sinusoidal Signals in Laplace Noise

Ta-Hsin Li
Department of Mathematical Sciences
IBM T. J. Watson Research Center
Yorktown Heights, NY 10598-0218, USA
thl@us.ibm.com

Kai-Sheng Song
Department of Statistics
Florida State University
Tallahassee, FL 32306-4330, USA
kssong@stat.fsu.edu

Abstract: Accurate estimation of the frequency of sinusoidal signals from noisy observations is an important problem in signal processing applications such as radar, sonar, and telecommunications. In this paper, we study the problem under the assumption of non-Gaussian noise in general and Laplace noise in particular. We prove that the Laplace maximum likelihood estimator is able to attain the asymptotic Cramér-Rao lower bound under the Laplace assumption, which is one half of the Cramér-Rao lower bound in the Gaussian case. This provides the possibility of improving the currently most efficient methods, such as nonlinear least-squares and periodogram maximization, in non-Gaussian cases. We propose a computational procedure that overcomes the difficulty of local extrema in the likelihood function when computing the maximum likelihood estimator. We also provide some simulation results to validate the proposed approach.

I. INTRODUCTION

Consider the problem of estimating the frequency, ω, of a sinusoidal signal from noisy observations

    y_t := A cos(ωt) + B sin(ωt) + ε_t    (t = 1, ..., n),    (1)

where A ∈ ℝ, B ∈ ℝ, ω ∈ Ω := (0, π) are unknown constants and {ε_t} is a white noise process with mean zero and unknown variance σ² > 0. The literature on this subject is extensive [1]. The most popular approaches include Fourier transform (periodogram) [2]–[5], Gaussian maximum likelihood (or nonlinear least-squares) [6]–[8], autoregression [9]–[12], and eigendecomposition (signal/noise subspace) [13]–[18].
It is well known that under the assumption that {ε_t} is Gaussian white noise, the asymptotic Cramér-Rao lower bound (CRLB) with respect to the frequency parameter ω can be expressed as (12/γ) n^{-3}, where γ := ½(A² + B²)/σ² is the signal-to-noise ratio (SNR). In the following we shall refer to this bound as the Gaussian CRLB. Analytical as well as simulation studies suggest [4], [6] that the Gaussian CRLB can be attained asymptotically by the maximum likelihood estimator (MLE) which maximizes the Gaussian likelihood function, or equivalently, minimizes the sum of squared errors

    ℓ₂(θ) := Σ_{t=1}^n |y_t − (A cos(ωt) + B sin(ωt))|²

as a function of θ := [A, B, ω]^T. Several numerical procedures have been proposed to compute this estimator [6]–[8].

(Footnote: Kai-Sheng Song was supported in part by an IBM Faculty Award. The authors thank Dr. Tailen Hsing of the Ohio State University for helpful discussions concerning Theorem 3.)
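To make the bound concrete, here is a small worked example (illustrative numbers of my choosing, not from the paper): at an SNR of 0 dB (γ = 1) with n = 100 samples,

\[
\mathrm{CRLB}_g(\hat\omega) = \frac{12}{\gamma n^{3}} = \frac{12}{1\cdot 100^{3}} = 1.2\times 10^{-5}\ \mathrm{rad}^{2},
\]

i.e., a frequency standard deviation of about \(3.5\times 10^{-3}\) rad. The n^{-3} rate means that doubling the record length shrinks the standard error by a factor of \(2^{3/2}\approx 2.8\).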
When the noise is not Gaussian, the minimizer of ℓ₂(θ) is known as the nonlinear least-squares (NLS) estimator. Under fairly general assumptions about {ε_t}, it has been shown [2] that the NLS estimator of ω is asymptotically equivalent to the maximizer of the continuous periodogram, I_n(ω) := n^{-1} |Σ_{t=1}^n y_t exp(−iωt)|², with i := √(−1), and the asymptotic variance of this estimator coincides with the Gaussian CRLB.
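As a concrete illustration of this classical approach (an illustrative sketch of mine, not code from the paper, assuming NumPy and SciPy are available), the following Python snippet simulates model (1) with Gaussian noise and estimates ω by maximizing the continuous periodogram: an FFT-grid coarse search followed by a bounded local refinement. All parameter values are arbitrary demo choices.

    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(0)
    n = 512
    A, B, omega = 1.0, 0.5, 0.15 * 2 * np.pi   # true parameters (demo values)
    t = np.arange(1, n + 1)
    y = A * np.cos(omega * t) + B * np.sin(omega * t) + rng.standard_normal(n)

    # Continuous periodogram I_n(w) = n^{-1} |sum_t y_t exp(-i w t)|^2
    def periodogram(w):
        return np.abs(np.sum(y * np.exp(-1j * w * t))) ** 2 / n

    # Coarse search on the Fourier grid (spacing 2*pi/n), then refine locally
    grid = 2 * np.pi * np.arange(1, n // 2) / n
    k = int(np.argmax([periodogram(w) for w in grid]))
    res = minimize_scalar(lambda w: -periodogram(w),
                          bounds=(grid[k] - 2 * np.pi / n, grid[k] + 2 * np.pi / n),
                          method="bounded")
    print("true omega:", omega, "estimate:", res.x)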
Analytical and simulation studies show that typical frequency estimation procedures that exist in the literature either
attain the Gaussian CRLB asymptotically (e.g., the NLS
method [2]) or fall short of it (e.g., the signal/noise subspace
methods [16]). The question is: can we do better than the
Gaussian CRLB in non-Gaussian cases?
In this paper, we provide a positive answer to the question.
Toward that end, we first examine the CRLB in non-Gaussian
cases generally and show that the Gaussian CRLB is the
worst-case performance limit, i.e., the largest lower bound,
among a large family of noise distributions. Then, we focus
on the case of Laplace noise and show that the Laplace MLE
attains asymptotically the Laplace CRLB which is only one
half of the Gaussian CRLB.
The Gaussian assumption of the noise is often made
in practice for its mathematical tractability rather than its
goodness of fit to the data. In reality, departures from the
Gaussian assumption can occur in many different forms, one
of which is heavy tails. A heavy-tailed distribution has greater
tail probabilities than what the Gaussian model suggests.
It manifests itself as outliers in the observations that can
cause algorithms developed under the Gaussian assumption
to malfunction. The Laplace distribution has heavier tails than
the Gaussian distribution and therefore is often used to model
heavy-tailed data in the statistical literature. The Laplace
distribution can also be used as a surrogate in developing more
robust algorithms against outliers or in solving problems that
do not have a solution under the Gaussian assumption (e.g., blind deconvolution of non-minimum-phase systems).
As with the methods of Gaussian maximum likelihood and periodogram maximization, it is very difficult to compute the Laplace MLE without an extremely good initial guess, because the likelihood function is full of local extrema in the vicinity of the desired solution. To solve this problem, we use a well-understood iterative filtering method, called the three-step algorithm (TSA), discussed in [19], [20], to provide the necessary initial guess for a general-purpose optimization routine that computes the final estimates. In addition to its unified architecture suitable for practical implementation, the TSA initialization procedure has the analytically proven property of fast and virtually global convergence to an estimate as accurate as the Gaussian MLE and the periodogram maximizer. Simulation confirms the validity of this approach.

II. CRLB FOR NON-GAUSSIAN NOISE

Let the ε_t be i.i.d. random variables with mean zero, variance σ² > 0, and probability density function p(x). Assume that p(x) is almost everywhere differentiable with derivative p′(x), except at a finite number of points, such that

    0 < η := σ² ∫_{p(x)>0} {p′(x)}²/p(x) dx < ∞.

Theorem 1. Under these assumptions, the Fisher information matrix of y := [y_1, ..., y_n]^T with respect to θ := [A, B, ω]^T can be expressed as

    I(θ) = η I_g(θ).

In this expression, I_g(θ) is the Fisher information matrix under the Gaussian assumption, i.e.,

    I_g(θ) := (1/σ²) X^T X,

where X := [x_1, x_2, x_3], x_1 := vec[cos(ωt)], x_2 := vec[sin(ωt)], and x_3 := vec[−At sin(ωt) + Bt cos(ωt)].

Proof. The log-likelihood function of y can be written as

    L(θ | y) = Σ_{t=1}^n log p(y_t − s_t(θ)),

where s_t(θ) := A cos(ωt) + B sin(ωt). Therefore,

    ∂L/∂A = −Σ_{t=1}^n {p′(ε_t)/p(ε_t)} cos(ωt),
    ∂L/∂B = −Σ_{t=1}^n {p′(ε_t)/p(ε_t)} sin(ωt),
    ∂L/∂ω = −Σ_{t=1}^n {p′(ε_t)/p(ε_t)} {−At sin(ωt) + Bt cos(ωt)},

so that

    I(θ) := E{∇L(θ | y) ∇^T L(θ | y)} = E{(p′(ε)/p(ε))²} (X^T X) = (η/σ²) (X^T X).

The proof is complete upon noting that η = 1 in the Gaussian case where p(x) = (2πσ²)^{−1/2} exp(−x²/(2σ²)).

From Theorem 1, the following result about the CRLB can be obtained immediately.

Theorem 2. Under the assumptions of Theorem 1 and the standard regularity conditions, the asymptotic CRLB for estimating θ on the basis of y can be expressed as

    CRLB(θ) = η^{−1} CRLB_g(θ),

where

    CRLB_g(θ) := γ^{−1} [ (A² + 4B²)/n    −3AB/n          −6B/n²
                          −3AB/n          (4A² + B²)/n     6A/n²
                          −6B/n²           6A/n²           12/n³ ]

is the asymptotic CRLB under the Gaussian assumption.

Proof. The assertion follows from Theorem 1 combined with the fact that X^T X = D_n {Γ + o(1)} D_n, where

    Γ := [ 1/2       0         B/4
           0         1/2      −A/4
           B/4      −A/4      (A² + B²)/6 ]

and D_n := diag(n^{1/2}, n^{1/2}, n^{3/2}).

An example of p(x) that satisfies the assumptions is the Laplace distribution with

    p(x) = (2c)^{−1} exp(−|x|/c),

where c := σ/√2. It is easy to show that η = 2. This implies that the CRLB under the Laplace assumption is one half of the CRLB under the Gaussian assumption.

In the next result, we show that the Gaussian CRLB is the worst-case performance limit among a large family of distributions.

Theorem 3. Let P be the set of probability density functions satisfying the assumptions of Theorem 1 and having the entire real line ℝ as their support. Then, η ≥ 1 for any p(x) ∈ P, with η = 1 if and only if {ε_t} is Gaussian.

Proof. Consider the problem of estimating μ ∈ ℝ on the basis of Y ~ p(y − μ). It is easy to show that the Fisher information of Y equals η/σ², and Y is an unbiased estimator of μ with Var(Y) = σ². So, the Cramér-Rao inequality can be written as η ≥ 1, with η = 1 iff a(y − μ) = (d/dμ) log p(y − μ) for some constant a ≠ 0. With x := y − μ, this condition can be rewritten as d log p(x)/dx = −ax, leading to p(x) = exp(−½ax² + bx + c). Imposing the constraints on the mean and variance completes the proof.
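The claim η = 2 for the Laplace density (and η = 1 for the Gaussian) can also be checked numerically. The following Python sketch (my illustration, not part of the paper, assuming SciPy) evaluates the defining integral for both densities with matched variance σ² = 1:

    import numpy as np
    from scipy.integrate import quad

    sigma = 1.0

    # Gaussian: p(x) = (2*pi*sigma^2)^{-1/2} exp(-x^2/(2 sigma^2)),
    # with derivative p'(x) = -(x/sigma^2) p(x)
    def gauss_integrand(x):
        p = np.exp(-0.5 * x**2 / sigma**2) / np.sqrt(2 * np.pi * sigma**2)
        pdot = -x / sigma**2 * p
        return pdot**2 / p

    # Laplace with variance sigma^2: p(x) = exp(-|x|/c)/(2c), c = sigma/sqrt(2);
    # p'(x) = -sign(x) p(x)/c, so {p'(x)}^2 / p(x) = p(x)/c^2
    c = sigma / np.sqrt(2)
    def laplace_integrand(x):
        p = np.exp(-abs(x) / c) / (2 * c)
        return p / c**2

    eta_gauss = sigma**2 * quad(gauss_integrand, -np.inf, np.inf)[0]
    eta_laplace = sigma**2 * quad(laplace_integrand, -np.inf, np.inf)[0]
    print(eta_gauss, eta_laplace)   # ~1.0 and ~2.0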

III. MAXIMUM LIKELIHOOD ESTIMATION

Whereas typical procedures of frequency estimation only achieve the Gaussian CRLB at best, the CRLB under the Laplace assumption suggests the possibility of reducing the error by 50% when the noise has a Laplace distribution. The question is: can we get there?

To answer this question, we turn to the maximum likelihood method, which typically produces the most efficient estimates for sufficiently large sample sizes. Note that maximizing the Laplace likelihood function is equivalent to minimizing the sum of absolute errors

    ℓ₁(θ) := Σ_{t=1}^n |y_t − (A cos(ωt) + B sin(ωt))|.

Therefore, instead of minimizing the ℓ₂ error, which only achieves the Gaussian CRLB, we minimize the ℓ₁ error. However, proving the efficiency of this estimator is not a simple exercise. We cannot adopt the standard argument of asymptotic normality of maximum likelihood estimators from i.i.d. observations, which can be found in many textbooks such as [21], not only because the y_t do not have the same distribution for each t, but also because the Laplace likelihood function is not everywhere differentiable.
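To make the ℓ₁-versus-ℓ₂ distinction concrete, here is an illustrative Python sketch (mine, not the authors' code) that minimizes both criteria with the derivative-free Nelder-Mead method, anticipating the computational procedure of Section IV. It assumes a sufficiently accurate initial guess theta0 is already available; all parameter values are demo choices.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)
    n = 256
    A, B, omega = 1.0, 0.0, 0.3          # true parameters (demo values)
    t = np.arange(1, n + 1)
    # Laplace noise with variance sigma^2 = 1: scale c = sigma/sqrt(2)
    eps = rng.laplace(scale=1 / np.sqrt(2), size=n)
    y = A * np.cos(omega * t) + B * np.sin(omega * t) + eps

    def resid(theta):
        a, b, w = theta
        return y - (a * np.cos(w * t) + b * np.sin(w * t))

    def ell1(theta):
        return np.sum(np.abs(resid(theta)))   # Laplace ML criterion

    def ell2(theta):
        return np.sum(resid(theta) ** 2)      # Gaussian ML / NLS criterion

    theta0 = [0.9, 0.1, 0.301]   # assumed accurate initial guess (e.g., from TSA)
    mle = minimize(ell1, theta0, method="Nelder-Mead")  # Laplace MLE
    nls = minimize(ell2, theta0, method="Nelder-Mead")  # NLS estimate
    print("Laplace MLE omega:", mle.x[2], "NLS omega:", nls.x[2])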
Nonetheless, by employing more sophisticated mathematical tools and the concept of local asymptotic normality (LAN), we are able to prove the following result.

Theorem 4. Assume that {ε_t} is a Laplace white noise process with mean zero and variance σ². Let Θ_n := {(a, b, c) : |a − A| ≤ δ₁ n^{−1/2}, |b − B| ≤ δ₂ n^{−1/2}, |c − ω| ≤ δ₃ n^{−3/2}} ⊂ ℝ × ℝ × Ω be a neighborhood of the true parameter θ := [A, B, ω]^T for some positive constants δ₁, δ₂, and δ₃. Then, there exists a sequence {θ̂_n} ⊂ Θ_n of local maxima of the Laplace likelihood function such that θ̂_n is asymptotically distributed as N(θ, CRLB(θ)).

Proof. We omit the proof here, except to say that the key in proving the assertion is to establish that asymptotically,

    R_n(δ) := ℓ₁(θ + D_n^{−1} δ) − ℓ₁(θ)

can be approximated in distribution by the random process

    R(δ) := −δ^T z + {√2/(2σ)} δ^T Γ δ

uniformly for all δ in any compact subset of ℝ³, where z ~ N(0, Γ), with Γ defined in the proof of Theorem 2.

Theorem 4 shows that by minimizing the ℓ₁ error instead of the ℓ₂ error, the Laplace MLE is able to reduce the variance of the NLS estimator by 50% in the case of Laplace noise.
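To see where the variance claim of Theorem 4 comes from (my reconstruction of the omitted step, under the quadratic coefficient √2/(2σ) as read above): the approximating process R(δ) is minimized at

\[
\nabla R(\hat\delta) = -z + \frac{\sqrt{2}}{\sigma}\,\Gamma\hat\delta = 0
\quad\Longrightarrow\quad
\hat\delta = \frac{\sigma}{\sqrt{2}}\,\Gamma^{-1}z,
\qquad
\operatorname{Var}(\hat\delta) = \frac{\sigma^{2}}{2}\,\Gamma^{-1},
\]

and undoing the D_n scaling gives Var(θ̂_n) ≈ (σ²/2) D_n^{−1} Γ^{−1} D_n^{−1} = ½ CRLB_g(θ), which is exactly CRLB(θ) for Laplace noise (η = 2).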

IV. A COMPUTATIONAL PROCEDURE


Finding the minimizer of ℓ₁(θ) numerically is a challenging problem, not only because the objective function is not everywhere differentiable, but most importantly because it contains many local minima in the small vicinity of the true parameter. A similar problem was also experienced with the methods of NLS and periodogram maximization, where it has been demonstrated that an initial value of accuracy O(n^{−1}) in standard error is required for iterative search algorithms to converge to the desired solution [22]. A typical remedy to the problem is to use a DFT-based coarse search followed by a local interpolation refinement to generate suitable initial values for standard optimization routines such as the Newton-Raphson algorithm [2]–[8].

Rather than stitching together various methods of different flavors, we propose a unified procedure to produce the required initial values, using the simple parametric filtering (PF) method discussed in [12], [19], [20].

The gist of the method is the following [19]. For any given α ∈ (−2η/(1 + η²), 2η/(1 + η²)) with fixed η ∈ (0, 1), let y_t(α) be obtained by filtering the data with the 2nd-order IIR filter H(z^{−1}) := {1 − α(1 + η²) z^{−1} + η² z^{−2}}^{−1}, i.e.,

    y_t(α) := H(z^{−1}) y_t.

Given {y_t(α)}, let α̂_n(α) be the minimizer with respect to a of the weighted sum of forward and backward prediction error sums of squares Σ_t {y_t(α) − a y_{t−1}(α)}² + η² Σ_t {y_{t−2}(α) − a y_{t−1}(α)}², which turns out to be

    α̂_n(α) = Σ_t y_{t−1}(α) {y_t(α) + η² y_{t−2}(α)} / [(1 + η²) Σ_t {y_{t−1}(α)}²].

Using this function of α, a sequence {α̂_n^{(m)}} can be obtained from the accelerated version of the fixed-point iteration

    α̂_n^{(m)} := 2 α̂_n(α̂_n^{(m−1)}) − α̂_n^{(m−1)}    (m = 1, 2, ...),

which can be regarded as an iteration of linear filtering followed by least squares. It can be shown [19] that with a suitable initial value the sequence converges to a fixed point α̂_n of α̂_n(·), i.e.,

    lim_{m→∞} α̂_n^{(m)} = α̂_n = α̂_n(α̂_n)

almost surely for sufficiently large n. From this fixed point, a frequency estimator,

    ω̂_n := arccos(α̂_n),

is produced. The consistency and asymptotic normality of ω̂_n as an estimator of ω can be established [19].

In this algorithm the bandwidth parameter η plays an important role in determining the required accuracy of the initial values and the final accuracy of the frequency estimator. The relationship is summarized in Table I.

TABLE I
ROLE OF THE BANDWIDTH PARAMETER η (WITH 1 − η = O(n^{−δ}))

    Bandwidth          Initial Accuracy ω̂_n^{(0)} − ω    Final Accuracy ω̂_n − ω
    δ = 0              O(1)                                O(n^{−1/2})
    δ ∈ (1/5, 1/2)     O(n^{−δ})                           O(n^{−(1+3δ)/2})
    δ ∈ (1/2, 1)       O(n^{−δ})                           O(n^{−(2+δ)/2})
    δ = 1              O(n^{−1})                           O(n^{−3/2})

Based on these results, the following three-step algorithm (TSA) was proposed in [19] to bring an initial value of accuracy O(1) to a final estimate of accuracy arbitrarily close to O(n^{−3/2}) at the cost of computational complexity O(n log n):
1. Take 1 − η₁ = O(1) to accommodate initial values of accuracy O(1); iterate O(log n) times to obtain an estimate of accuracy O(n^{−1/2}).
2. Take 1 − η₂ = O(n^{−1/3}) and use the result from Step 1 as the initial value; iterate O(1) times to get an estimate of accuracy O(n^{−1}).
3. Take 1 − η₃ = O(n^{−δ}) with δ close to 1 and use the result from Step 2 as the initial value; iterate O(1) times to obtain an estimate of accuracy arbitrarily close to O(n^{−3/2}).
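The following Python sketch (my rendering of the recursion just described, not the authors' code) implements the filtering step with scipy.signal.lfilter, the closed-form minimizer α̂_n(α), and the accelerated fixed-point update; the default η schedule mirrors the one used in the paper's simulation, described below.

    import numpy as np
    from scipy.signal import lfilter

    def tsa_frequency(y, alpha0, schedule=None):
        """Three-step algorithm (TSA) sketch: returns an estimate of omega."""
        n = len(y)
        if schedule is None:
            # (eta, iterations) per step: 1-eta of order O(1), O(n^-0.6), O(n^-0.9)
            schedule = [(0.85, 6), (1 - n**-0.6, 3), (1 - n**-0.9, 11)]
        alpha = alpha0
        for eta, iters in schedule:
            bound = 2 * eta / (1 + eta**2)   # alpha must stay in (-bound, bound)
            for _ in range(iters):
                # y_t(alpha): filter 1/{1 - alpha(1+eta^2) z^-1 + eta^2 z^-2}
                x = lfilter([1.0], [1.0, -alpha * (1 + eta**2), eta**2], y)
                # closed-form minimizer of the weighted prediction error sum
                num = np.sum(x[1:-1] * (x[2:] + eta**2 * x[:-2]))
                den = (1 + eta**2) * np.sum(x[1:-1] ** 2)
                # accelerated fixed-point update: alpha <- 2*alpha_hat(alpha) - alpha
                alpha = float(np.clip(2 * num / den - alpha,
                                      -0.999 * bound, 0.999 * bound))
        return np.arccos(alpha)

    # demo with an O(1)-accuracy initial guess (demo values of my choosing)
    rng = np.random.default_rng(2)
    t = np.arange(1, 513)
    omega = 0.15 * 2 * np.pi
    y = np.cos(omega * t) + rng.laplace(scale=1 / np.sqrt(2), size=len(t))
    print(tsa_frequency(y, alpha0=0.5), "vs true", omega)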

[Figure] Fig. 1. Trajectory of normalized frequency estimates from the three-step algorithm for different initial values. Sample size is 128. Frequency normalization is defined as ω ↦ f := ω/(2π) ∈ (0, 0.5).

The global convergence property of this algorithm can be appreciated from Fig. 1, where all initial values lead to convergence to the desired solution after about 12 iterations (6 iterations with η = 0.75, 3 more iterations with η = 0.84, and all remaining iterations with η = 0.98).

In practice, one can use Prony's estimator to initialize Step 1. Prony's estimator corresponds to α̂_n(α) with α = 0 and η = 0 (no filtering) and has accuracy O(1) (because it is biased). With this estimator as the initial value instead of other alternatives such as SVD-based estimates, the entire procedure becomes unified in architecture, thus simplifying the hardware/software implementation.

It can be shown [23] that with δ = 1 the estimator ω̂_n is able to attain the Gaussian CRLB asymptotically. Simulation indicates that even with δ < 1 (but sufficiently close to 1) the Gaussian CRLB can be attained by the estimator for sample sizes as small as 50. Therefore, starting with virtually any initial guess as the input, the TSA procedure should be able to produce an output accurate enough to initialize general-purpose optimization routines for minimizing ℓ₁(θ). Initial values of A and B can be obtained easily by least-squares regression using the frequency estimate from the TSA procedure.

Because ℓ₁(θ) is not everywhere differentiable, gradient-based algorithms, such as the Gauss-Newton algorithm for NLS, cannot be used to compute the Laplace MLE. Fortunately, there are plenty of general-purpose algorithms that do not require differentiability. The simplex algorithm of Nelder and Mead [24], available in many software packages such as Mathematica and R, is one example. There are also special algorithms designed for nonlinear regression with the ℓ₁ norm. For example, the interior point algorithm proposed in [25] for nonlinear quantile regression is readily available for R (the function nlrq in the quantreg package).

[Figure] Fig. 2. Reciprocal MSE of the normalized frequency estimates and reciprocal CRLB for a sinusoidal signal in Laplace white noise. Solid line, Laplace CRLB; dotted line, Gaussian CRLB; markers, Laplace MLE initialized by the three-step algorithm, NLS initialized by the three-step algorithm (+), and estimates from the three-step algorithm.

Fig. 2 shows the result of a simulation study in the case of Laplace noise based on 1,000 Monte Carlo trials for each sample size. The signal and noise parameters are: A = 1, B = 0, ω = 0.15 × 2π, and γ = 0 dB (normalized for each trial). The three values of η in the TSA procedure are 0.85, 1 − n^{−0.6}, and 1 − n^{−0.9}. The numbers of iterations with these values are 6, 3, and 11 for each trial. The TSA procedure is initialized with Prony's estimator, which is known to be biased regardless of the sample size and to have a standard error of O(n^{−1/2}), so the initial MSE is merely O(1). Fig. 2 shows that the estimates from the TSA procedure attain the Gaussian CRLB for all the sample sizes. It also shows that using the TSA estimates as initial values to minimize ℓ₂(θ) by a standard optimization routine (in this case the Nelder-Mead algorithm) does not lead to improved accuracy. However, by replacing ℓ₂(θ) with ℓ₁(θ), the initial TSA estimates are improved considerably, and the final estimates (the Laplace MLE) closely follow the Laplace CRLB, as predicted by Theorem 2, except for the smallest sample size n = 50. As expected, the improvement is about 3 dB, or a 50% reduction in MSE.


V. CONCLUDING REMARKS

In this paper we have demonstrated the possibility of achieving more accurate frequency estimates than the Gaussian CRLB suggests for sinusoidal signals in non-Gaussian noise. In particular, we have shown that in the case of Laplace noise the maximum likelihood estimator, which minimizes the sum of absolute errors, is able to attain asymptotically the Laplace CRLB, which is 50% smaller than the Gaussian CRLB attained by nonlinear least squares and periodogram maximization.

In addition to the theoretical findings, we have also proposed a computational procedure to obtain the maximum likelihood estimates numerically. The procedure utilizes an iterative algorithm proposed in [19], [20] to produce sufficiently accurate initial values for standard optimization routines. Owing to the global convergence property of the initialization algorithm, the proposed procedure is able to accommodate poor initial values of accuracy O(1) and produce a final estimator of accuracy O(n^{−3/2}) that attains the Laplace CRLB for sufficiently large sample sizes.

The proposed procedure can also be generalized to the case of multiple sinusoids, provided that the frequencies satisfy a separation condition [20]. In principle, one can also generalize the problem by assuming that the noise has a generalized Gaussian distribution of the form p(x) ∝ exp(−|x/c|^β), where β > 0 is a predetermined constant and c > 0 is the scale parameter. Note that for Gaussian noise β = 2 and for Laplace noise β = 1. Under this assumption, the maximum likelihood estimator of θ can be found by minimizing ℓ_β(θ) := Σ_{t=1}^n |y_t − (A cos(ωt) + B sin(ωt))|^β. Because special mathematical and computational tools are needed to handle the general case of β ≠ 1, 2, this problem deserves further investigation.
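As a closing illustration (my sketch, not from the paper), the generalized-Gaussian criterion ℓ_β is a one-line generalization of the objectives used earlier; β is the assumed shape constant, and all demo values are arbitrary.

    import numpy as np
    from scipy.optimize import minimize

    def ell_beta(theta, y, t, beta):
        """Generalized-Gaussian ML criterion: sum_t |y_t - s_t(theta)|^beta."""
        a, b, w = theta
        return np.sum(np.abs(y - (a * np.cos(w * t) + b * np.sin(w * t))) ** beta)

    # beta = 2 recovers NLS (Gaussian ML); beta = 1 recovers the Laplace MLE.
    rng = np.random.default_rng(3)
    t = np.arange(1, 257)
    y = np.cos(0.3 * t) + rng.laplace(scale=1 / np.sqrt(2), size=len(t))
    theta0 = [0.9, 0.1, 0.301]   # assumed good initial guess (e.g., from TSA)
    for beta in (1.0, 1.5, 2.0):
        est = minimize(ell_beta, theta0, args=(y, t, beta), method="Nelder-Mead")
        print(beta, est.x)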
REFERENCES

[1] P. Stoica, "List of references on spectral line analysis," Signal Processing, vol. 31, no. 3, pp. 329–340, 1993.
[2] A. M. Walker, "On the estimation of a harmonic component in a time series with stationary independent residuals," Biometrika, vol. 58, no. 1, pp. 21–36, 1971.
[3] L. C. Palmer, "Coarse frequency estimation using the discrete Fourier transform," IEEE Trans. Inform. Theory, vol. 20, no. 1, pp. 104–109, 1974.
[4] D. C. Rife and R. R. Boorstyn, "Single-tone parameter estimation from discrete-time observations," IEEE Trans. Inform. Theory, vol. 20, no. 5, pp. 591–598, 1974.
[5] E. Aboutanios and B. Mulgrew, "Iterative frequency estimation by interpolation on Fourier coefficients," IEEE Trans. Signal Processing, vol. 53, no. 4, pp. 1237–1242, 2005.
[6] P. Stoica, R. L. Moses, B. Friedlander, and T. Söderström, "Maximum likelihood estimation of the parameters of multiple sinusoids from noisy measurements," IEEE Trans. Acoust., Speech, Signal Processing, vol. 37, no. 3, pp. 378–392, 1989.
[7] D. Starer and A. Nehorai, "Newton algorithm for conditional and unconditional maximum likelihood estimation of the parameters of exponential signals in noise," IEEE Trans. Signal Processing, vol. 40, no. 6, pp. 1528–1534, 1992.
[8] H. Van Hamme, "Maximum likelihood estimation of superimposed complex sinusoids in white Gaussian noise by reduced effort coarse search (RECS)," IEEE Trans. Signal Processing, vol. 39, no. 2, pp. 536–538, 1991.
[9] S. M. Kay, "Accurate frequency estimation at low signal-to-noise ratio," IEEE Trans. Acoust., Speech, Signal Processing, vol. 32, no. 3, pp. 540–547, 1984.
[10] M. S. Mackisack and D. S. Poskitt, "Autoregressive frequency estimation," Biometrika, vol. 76, no. 3, pp. 565–575, 1989.
[11] P. Stoica, T. Söderström, and F. Ti, "Asymptotic properties of the high-order Yule-Walker estimates of sinusoidal frequencies," IEEE Trans. Acoust., Speech, Signal Processing, vol. 37, no. 11, pp. 1721–1734, 1989.
[12] T. H. Li and B. Kedem, "Iterative filtering for multiple frequency estimation," IEEE Trans. Signal Processing, vol. 42, no. 5, pp. 1120–1132, 1994.
[13] V. F. Pisarenko, "The retrieval of harmonics from a covariance function," Geophys. J. Roy. Astronom. Soc., vol. 33, pp. 347–366, 1973.
[14] D. W. Tufts and R. Kumaresan, "Estimation of frequencies of multiple sinusoids: Making linear prediction perform like maximum likelihood," Proc. IEEE, vol. 70, no. 9, pp. 975–989, 1982.
[15] R. Roy and T. Kailath, "ESPRIT: Estimation of signal parameters via rotational invariance techniques," IEEE Trans. Acoust., Speech, Signal Processing, vol. 37, no. 7, pp. 984–995, 1989.
[16] P. Stoica and T. Söderström, "Statistical analysis of MUSIC and subspace rotation estimates of sinusoidal frequencies," IEEE Trans. Signal Processing, vol. 39, no. 8, pp. 1836–1847, 1991.
[17] K. W. Chan and H. C. So, "An exact analysis of Pisarenko's single-tone frequency estimation algorithm," Signal Processing, vol. 83, no. 3, pp. 685–690, 2003.
[18] K. Mahata, "Subspace fitting approaches for frequency estimation using real-valued data," IEEE Trans. Signal Processing, vol. 53, no. 8, pp. 3099–3110, 2005.
[19] K. S. Song and T. H. Li, "A statistically and computationally efficient method for frequency estimation," Stochastic Processes Appl., vol. 86, pp. 29–47, 2000.
[20] T. H. Li and K. S. Song, "Asymptotic analysis of a fast algorithm for efficient multiple frequency estimation," IEEE Trans. Inform. Theory, vol. 48, no. 10, pp. 2709–2720, 2002. Errata: vol. 49, no. 2, p. 529, 2003.
[21] E. L. Lehmann, Theory of Point Estimation. New York: Wiley, 1983.
[22] J. A. Rice and M. Rosenblatt, "On frequency estimation," Biometrika, vol. 75, no. 3, pp. 477–484, 1988.
[23] B. G. Quinn and J. M. Fernandes, "A fast efficient technique for the estimation of frequency," Biometrika, vol. 78, no. 3, pp. 489–497, 1991.
[24] J. A. Nelder and R. Mead, "A simplex algorithm for function minimization," Comput. J., vol. 7, pp. 308–313, 1965.
[25] R. Koenker and B. Park, "An interior point algorithm for nonlinear quantile regression," J. Econometrics, vol. 71, pp. 265–283, 1996.
