Davisson 1966

740 IEEE TRANSACTIONS ON AUTOMATIC CONTROL OCTOBER

transport lags is that in the former case two sets of differential equations (21), (22) are obtained, each set being obeyed over a different time interval, whereas in the latter case only one set of differential equations is obtained and these are to be obeyed over the entire time interval. When τ_i = 0, i = 1, ..., n + r, the necessary conditions derived above reduce to the necessary conditions for a system without transport lags.

Adaptive Linear Filtering when Signal Distributions are Unknown

LEE D. DAVISSON

Abstract—This paper considers the problem of linear signal estimation when the time-discrete data consists of signal plus additive independent noise. The signal probability distributions are completely unknown but the noise mean and covariance properties are known. The paper considers two main problems. The first is the definition of an adaptive procedure for filtering. The second is the analysis of the procedure for the special case of stationary Gaussian data with zero mean and square integrable spectral density. It is believed that the procedure defined has a wider applicability than other methods and that the analytical approach is entirely new.

INTRODUCTION

The optimum operation for the linear filtering of time series to minimize mean square error is well known when the signal and noise first and second moments are available. When this is not the case there exists no universally agreed upon procedure. This paper considers the problem when the noise is additive and signal independent, with first and second moments known while the signal distributions are completely unknown.

Several filtering procedures are possible. If one is willing to assume stationarity (at least locally), spectral estimates [1] or covariance estimates [2], [3] can be made and used to design the optimum linear filter. A least squares method is proposed in this paper which does not depend on the assumption of stationarity. Like other methods, as the amount of data becomes infinitely large, the filter approaches the optimum linear filter under ergodic conditions. The filter is similar to that proposed by Mann and Wald [7] and Balakrishnan [8] for pure prediction. Because there is noise present in the case considered in this paper, a modified least squares procedure is required to statistically separate the signal and noise.

When the signal and noise are stationary Gaussian time series with zero means and square integrable spectral densities, the asymptotic form of the error variance is given. In the limit as the input signal-to-noise ratio goes to zero and the optimum filter output signal-to-noise ratio goes to infinity, an expression is obtained which does not depend on the spectral densities. This expression provides an asymptotic upper bound on the adaptive filter signal-to-noise performance which is a reasonable basis for the filter parameter specifications. The analysis is similar to that in Davisson [9] for pure prediction.

ADAPTIVE FILTER OPERATION

Let {s_k}, {n_k} be independent sequences corresponding to signal and noise respectively. The sequence of observations {x_k} is given, x_k = s_k + n_k. The noise is assumed to have zero mean. The covariance function of the noise R_n(k, j) = E[n_k n_j] is known. It is desired to make N estimates of the signal by linear weightings of the M points preceding the estimated point and the L points after the estimated point, as well as the point itself:

    ŝ_k = Σ_{j=-M}^{L} B_j(N) x_{k+j},    k = 1, 2, ..., N

or in vector notation ŝ_k = B_N′X_k, where B_N and X_k are the M+L+1 column vectors B_N = {B_{-M}(N), ..., B_L(N)} and X_k = {x_{k-M}, ..., x_{k+L}}.

The weight vector, B_N, is to be determined from the N+M+L samples {x_{1-M}, ..., x_{N+L}}. If the signal covariance R_s(i, j) = E[s_i s_j] were known, then the coefficients would be determined by minimizing the mean square error

    (1/N) Σ_{k=1}^{N} E[(s_k − B_N′X_k)²].    (1)

Since the signal covariance is unknown, however, (1) cannot be evaluated. However, it seems reasonable to take a least squares approach and minimize the sample average

    (1/N) Σ_{k=1}^{N} [(x_k − B_N′X_k)² + 2 Σ_{j=-M}^{L} B_j(N) R_n(k, k+j)].    (2)

The summation in (2) may or may not converge as N → ∞. If it does converge, the same filter is obtained by this method as in procedures involving spectral or covariance estimates. If the summation does not converge, however, the procedure is still valid in the sense that the "average" properties of the signal and noise are taken advantage of over the N+M+L points.

The minimization of (2) represents a compromise between the predictability of the {x_k} from the adjacent samples and the weighting of the noise. This can be illustrated most easily for the white noise case, R_n(i, j) = 0, i ≠ j, when one minimizes

    (1/N) Σ_{k=1}^{N} (x_k − B_N′X_k)² + 2 B_0(N) σ_n².

The first term of the summation is minimized by setting B_0(N) = 1 and B_j(N) = 0 for j ≠ 0. The other term is minimized by making B_0(N) as small as possible (→ −∞). Therefore a tradeoff is involved between "interpolating" x_k from the adjacent samples, which tends to "average out" the noise, and taking x_k itself as the estimate of s_k.

The minimum of (2) is attained for

    [(1/N) Σ_{k=1}^{N} X_k X_k′] B_N = (1/N) Σ_{k=1}^{N} [x_k X_k − ρ_k]

or in matrix form B_N = Φ⁻¹g, where

    Φ = (1/N) Σ_{k=1}^{N} X_k X_k′,    g = (1/N) Σ_{k=1}^{N} [x_k X_k − ρ_k],    (3)

and ρ_k is the vector with elements R_n(k, k+j), j = −M, ..., L.

ANALYSIS

The mean square smoothing error of the above filter is needed so that some basis can be found for specifying M, L, and N. If the data is stationary and the above covariance estimates converge as N → ∞, one should allow M, L → ∞ so that the estimates of each point are based on as much information as possible. On the other hand, for N < ∞ one generally chooses the number of degrees of freedom, M+L+1 << N, so that there is sufficient information to effectively "learn" B_N, but the question remains: how much smaller than N must M+L be to be sure that the increase in mean square error due to finite N is tolerable? The procedure defined in the preceding section is too general for the purposes of analysis. This does not mean, of course, that further restrictions are necessary for the filter operation. Therefore it is natural to assume the usual stationary Gaussian model with mean zero and process spectral densities p_s(λ), p_n(λ) and p_x(λ) = p_s(λ) + p_n(λ). For the purposes of analysis it is further necessary to assume that p_s(λ) is square integrable on [−π, π]:

    ∫_{−π}^{π} p_s²(λ) dλ < ∞.

Manuscript received April 20, 1966; revised June 6, 1966. This work was supported under National Science Foundation Grant GK-269 and use of computer facilities was supported in part by Grant NSF-GP579.
The author is with Princeton University, Princeton, N.J.
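The weight computation described above lends itself to a direct numerical sketch. The following Python fragment is an illustration, not the paper's own program: the function name, the indexing convention, and the stationary white-noise test setup are assumptions. It forms the sample matrix Φ and the noise-corrected vector g from a record of N+M+L observations and solves B_N = Φ⁻¹g, then smooths the same record.

```python
import numpy as np

def adaptive_filter_weights(x, M, L, rho):
    """Solve B_N = Phi^{-1} g for the modified least-squares weights.

    x   : the N+M+L observations x_{1-M}, ..., x_{N+L} as a 0-indexed array
    rho : (M+L+1)-vector of noise covariances R_n(k, k+j), j = -M..L
          (stationary noise assumed, so rho does not depend on k)
    """
    d = M + L + 1
    N = len(x) - M - L
    Phi = np.zeros((d, d))
    g = np.zeros(d)
    for k in range(N):
        Xk = x[k:k + d]              # window x_{k-M}, ..., x_{k+L}; center is x[k+M]
        Phi += np.outer(Xk, Xk)
        g += x[k + M] * Xk
    # subtract the known noise covariance vector from the sample cross term
    return np.linalg.solve(Phi / N, g / N - rho)

# Demonstration on a first-order Markov signal (covariance 0.9^{|i-j|})
# in unit-variance white noise, i.e., 0 dB input signal-to-noise ratio.
rng = np.random.default_rng(0)
M = L = 2                            # five-point midpoint smoothing
N = 2000
n_tot = N + M + L
s = np.empty(n_tot)
s[0] = rng.standard_normal()
for i in range(1, n_tot):
    s[i] = 0.9 * s[i - 1] + np.sqrt(1 - 0.81) * rng.standard_normal()
x = s + rng.standard_normal(n_tot)

rho = np.zeros(M + L + 1)
rho[M] = 1.0                         # white noise: only R_n(k, k) = sigma_n^2 = 1
B = adaptive_filter_weights(x, M, L, rho)
s_hat = np.array([B @ x[k:k + M + L + 1] for k in range(N)])
mse_filter = np.mean((s_hat - s[M:M + N]) ** 2)
mse_raw = np.mean((x[M:M + N] - s[M:M + N]) ** 2)
```

With the noise term subtracted from g, the weights interpolate from the adjacent samples instead of collapsing to the identity B_0 = 1, so mse_filter comes out well below the raw observation error mse_raw.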
1966 SHORT PAPERS 741

This square integrability makes it possible to show that:

a) Covariance estimates have the covariance properties stated in (4) and (5). This follows readily from straightforward calculations (see [6]).

b) The covariance matrix K = E(Φ) is nonsingular for M, L < ∞, that is,

    |K| ≠ 0.    (6)

(4) and (5) will be used in the analysis. Let G = E(g). Since K is nonsingular, the vector B_N can be expanded in the usual fashion about B_∞ = K⁻¹G,

    B_N = B_∞ + K⁻¹[g − ΦK⁻¹G] + o(1/N).    (7)

Now, the mean square error averaged over the N points is

    (1/N) Σ_{k=1}^{N} E{(s_k − B_∞′X_k)² − 2(s_k − B_∞′X_k)(B_N − B_∞)′X_k + [(B_N − B_∞)′X_k]²}.

The first term is simply the error variance when the statistics are known, σ_∞². Before proceeding further, the following is defined: z_k = s_k − B_∞′X_k. Using (7) and (3) and performing a few computations results in (8), where the prime on the expectation signifies that the means should be subtracted on each side of the matrix K⁻¹. That is, the R_n(k, i) of (3) is dropped in (8) for simplicity. Now, using (4) and (5), one obtains (9), where {a_m} is the (M+L+1) × 1 vector with elements a_m, m = −M, ..., L.

This expression is still too complicated for simple application and furthermore depends on the unknown spectral density. Suppose p_n(λ) > 0 for every λ ∈ [−π, π]. Then it is well known (see [4] for example) that the minimum eigenvalue is positive and hence K is nonsingular, for M, L → ∞, and for almost every λ a limiting form holds, where the N subscript denotes limiting with respect to N,

    lim_{N→∞} o_N(1) = 0.

Therefore, from [5], at every point of continuity of p_x(λ) (except λ = 0) the limit applies, and therefore (assuming p_x(λ) to be continuous almost everywhere) an asymptotic expression for the error is obtained. This can be normalized to the signal power, S, to determine the output noise-to-signal ratio, Θ_N, in (10). Now, as the input noise-to-signal ratio, Θ_i, becomes infinite, the integral in (10) approaches its limiting value from below. Thus, the following pessimistic expression, (11), results.

This provides a rough basis for specifying what the relationship of M+L to N should be if Θ_∞ is unknown. For example, if a signal-to-noise ratio of 10 dB is desired and the input signal-to-noise ratio is 3 dB, one would certainly require N > 5(M+L+2). On the other hand, if it were established that (Θ_∞)⁻¹ ≥ 13 dB, then it would be necessary to have N > 10(M+L+2).

To demonstrate the feasibility of the method and to verify the asymptotic formulas, a computer simulation was developed using stationary Gaussian deviates with mean zero and a given spectral density. Results are presented in Fig. 1 for a first-order Markov signal process and "white noise":

    E[s_i s_j] = 0.9^{|i−j|},    E[n_i n_j] = δ_{ij}.

Five point midpoint smoothing was used. The output signal-to-noise ratio was calculated by averaging the mean square smoothing error over a large number of the independent N-length sequences. The exact number chosen was on the basis of a 90 percent confidence interval of ±2 percent. Good agreement is seen between the calculations based on (9) and the empirical results. The predicted signal-to-noise ratios using (10) and (11) are also given. These give progressively rougher approximations for large N, although for small learning periods their pessimistic nature results in a better approximation due to the effect of the o(1/N) term.

CONCLUSION

This paper has presented an adaptive procedure for the linear filtering of time series when the noise first and second moment properties are known. For the procedure defined, the mean square error is found asymptotically under stationary Gaussian conditions when the data has a zero mean and square integrable spectral density.
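The learning-period behavior reported for Fig. 1 can be reproduced in outline. The sketch below is an assumed setup, not the paper's program: it averages the mean square smoothing error of five-point midpoint smoothing over many independent N-length sequences of the 0.9^{|i−j|} Markov signal in unit-variance white noise, for a short and a long learning period.

```python
import numpy as np

def smoothing_mse(N, trials, M=2, L=2, seed=1):
    """Empirical mean square smoothing error, averaged over independent
    N-length learning sequences (first-order Markov signal, white noise)."""
    rng = np.random.default_rng(seed)
    d = M + L + 1
    rho = np.zeros(d)
    rho[M] = 1.0                          # unit-variance white noise correction
    errs = []
    for _ in range(trials):
        n_tot = N + M + L
        s = np.empty(n_tot)
        s[0] = rng.standard_normal()
        for i in range(1, n_tot):
            s[i] = 0.9 * s[i - 1] + np.sqrt(1 - 0.81) * rng.standard_normal()
        x = s + rng.standard_normal(n_tot)
        Phi = np.zeros((d, d))
        g = np.zeros(d)
        for k in range(N):                # learn B_N from this N-length record
            Xk = x[k:k + d]
            Phi += np.outer(Xk, Xk)
            g += x[k + M] * Xk
        B = np.linalg.solve(Phi / N, g / N - rho)
        s_hat = np.array([B @ x[k:k + d] for k in range(N)])
        errs.append(np.mean((s_hat - s[M:M + N]) ** 2))
    return float(np.mean(errs))

mse_short = smoothing_mse(N=20, trials=300)    # short learning period
mse_long = smoothing_mse(N=100, trials=300)    # longer learning period
```

The longer learning period yields the lower average error, mirroring the rise of the output signal-to-noise ratio with N in Fig. 1; the gap reflects the learning penalty, of order (M+L+2)/N, discussed in the analysis.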
742 IEEE TRANSACTIONS ON AUTOMATIC CONTROL OCTOBER

Fig. 1. Output signal-to-noise ratio as a function of learning period. Input signal-to-noise ratio = 0 dB. (Plotted: Equation (9) and computer-simulation points versus learning period N, 0 to 100.)

REFERENCES
[1] R. B. Blackman and J. W. Tukey, The Measurement of Power Spectra. New York: Dover, 1959.
[2] D. Gabor et al., "A universal nonlinear filter, predictor and simulator which optimizes itself by a learning process," Trans. IEE (GB), vol. 108, pp. 422-438, July 1961.
[3] C. S. Weaver, "Adaptive communication filtering," IRE Trans. on Information Theory, vol. IT-8, pp. 169-178, September 1962.
[4] U. Grenander and G. Szegö, Toeplitz Forms and Their Applications. Berkeley, Calif.: University of California Press, 1958.
[5] U. Grenander, "On the estimation of regression coefficients in the case of an autocorrelated disturbance," Ann. Math. Stat., vol. 25, pp. 252-272, 1954.
[6] U. Grenander and M. Rosenblatt, Statistical Analysis of Stationary Time Series. New York: Wiley, 1957.
[7] H. B. Mann and A. Wald, "On the statistical treatment of linear stochastic difference equations," Econometrica, vol. 11, pp. 173-220, July 1943.
[8] A. V. Balakrishnan, "An adaptive non-linear data predictor," Proc. 1962 Nat'l Telemetering Conf., paper 6-5.
[9] L. D. Davisson, "The prediction error of stationary Gaussian time series with unknown covariance," IEEE Trans. on Information Theory, vol. IT-11, pp. 527-532, October 1965.

On Input-Output Stability of Nonlinear Feedback Systems

A. R. BERGEN, MEMBER, IEEE, R. P. IWENS, STUDENT MEMBER, IEEE, AND A. J. RAULT, STUDENT MEMBER, IEEE

Abstract—It is proved that if the input of a nonlinear feedback system and its first derivative are bounded, satisfaction of the V. M. Popov Theorem implies that the output is also bounded.

I. INTRODUCTION

Recently, Sandberg [1], considering a nonlinear feedback system similar to Fig. 1,¹ showed that satisfaction of the Popov inequality implies absolute stability in the bounded-input, bounded-output sense. The result derived here differs only inconsequentially from Sandberg's, but the proof differs significantly. In particular, the proof herein is close in spirit to that of the Popov Theorem [2], but considerably extends the results of that proof.

The notation and terminology in this paper follow those used by Aizerman and Gantmacher in [2].

Fig. 1. System S.

II. DESCRIPTION OF SYSTEM

Assumption 1

The nonlinear element N is characterized by a piecewise continuous function φ(·) defined on (−∞, +∞) such that 0 < φ(u)/u < k < ∞ for all u ≠ 0 and φ(0) = 0. For ease of notation, let φ(u(t)) = v(t).

Assumption 2

The linear plant is assumed to be nonanticipative, time-invariant, completely controllable and observable, and is characterized by its transfer function W(s). W(s) is a rational fraction in s with its numerator polynomial of lower degree than the denominator. W(s) has poles only in the left half s-plane (principal case), or has some poles on the jω axis (particular cases). The zero-input response of the linear plant is also defined.

Assumption 3

The input signal to the system is such that r(t) and ṙ(t) are bounded for all t > 0.

III. MAIN RESULTS

Theorem

Let Assumptions 1, 2, and 3 be satisfied. Given that the input of the system S and its first derivative are bounded, the output is bounded if
a) the nonlinearity is contained in the sector [0, k] for the principal case and in the sector [ε, k] for particular cases (ε > 0 arbitrarily small);
b) there exists a real q and a positive δ such that for all ω ≥ 0 the following inequality is satisfied:

    Re{(1 + jωq)W(jω)} + 1/k ≥ δ > 0.    (P)

In addition, for particular cases, the conditions for stability-in-the-limit² must be satisfied.

Remarks: Without loss of generality the Theorem need only be proved for
i) principal cases of W(s),
ii) 0 < q < ∞,
iii) φ(u) in the reduced sector [ε, k − ε], i.e.,

    ε ≤ φ(u)/u ≤ k − ε,    ∀ u ≠ 0,

where ε > 0 is arbitrarily small. These are justified in reference [2] for the zero-input stability of system S. However, the same arguments that Aizerman and Gantmacher use in [2] can be applied for the non-zero input case.

Auxiliary Lemmas

The proof of the Theorem uses two lemmas. One of them is a well-known lemma concerning frequency domain analysis.

Manuscript received April 1, 1966; revised July 21, 1966. The research herein was supported in part by the Air Force Office of Scientific Research under Grants AF-AFOSR-292-66 and AF-AFOSR-230-66.
The authors are with the Department of Electrical Engineering and Electronics Research Laboratory, College of Engineering, University of California, Berkeley, Calif.
¹ The nonlinear and linear elements are interchanged.
² These conditions require that the system of Fig. 1 be asymptotically stable for a linear gain φ(u) = cu, where c > 0 is arbitrarily small.
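Condition (P) is a frequency-domain test, so it can be checked numerically on a grid of frequencies. The sketch below is an illustration only: the plant W(s) = 1/(s + 1), the sector gain k, the Popov parameter q, and the frequency grid are assumptions, not from the paper. It evaluates Re{(1 + jωq)W(jω)} + 1/k over the grid and reports the minimum, which must stay at least δ above zero.

```python
import numpy as np

def popov_margin(num, den, q, k, omegas):
    """Minimum over the grid of Re{(1 + j*omega*q) * W(j*omega)} + 1/k,
    with W(s) given by numerator/denominator polynomial coefficients."""
    jw = 1j * omegas
    W = np.polyval(num, jw) / np.polyval(den, jw)
    return float(np.min(np.real((1 + jw * q) * W) + 1.0 / k))

# Example: W(s) = 1/(s + 1), sector gain k = 2, Popov parameter q = 1.
# Then (1 + j*omega*q) * W(j*omega) = 1 for every omega, so the
# margin over any grid is exactly 1 + 1/2.
omegas = np.linspace(0.0, 100.0, 10001)
margin = popov_margin(num=[1.0], den=[1.0, 1.0], q=1.0, k=2.0, omegas=omegas)
```

A grid evaluation of this kind only suggests that (P) holds; establishing it rigorously requires an argument over the whole frequency axis.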
