STATISTICAL SPECTRAL ANALYSIS
A Nonprobabilistic Theory

PRENTICE HALL INFORMATION AND SYSTEM SCIENCES SERIES
Thomas Kailath, Editor

Anderson & Moore   Optimal Filtering
Åström & Wittenmark   Computer-Controlled Systems: Theory and Design
Gardner   Statistical Spectral Analysis: A Nonprobabilistic Theory
Goodwin & Sin   Adaptive Filtering, Prediction, and Control
Gray & Davisson   Random Processes: A Mathematical Approach for Engineers
Haykin   Adaptive Filter Theory
Jain   Fundamentals of Digital Image Processing
Johnson   Lectures on Adaptive Parameter Estimation
Kailath   Linear Systems
Kung   VLSI Array Processors
Kung, Whitehouse, & Kailath, Eds.   VLSI and Modern Signal Processing
Macovski   Medical Imaging Systems
Melsa & Sage   An Introduction to Probability and Stochastic Processes
Spilker   Digital Communications by Satellite
Williams   Designing Digital Filters

STATISTICAL SPECTRAL ANALYSIS
A Nonprobabilistic Theory

Dr. William A. Gardner
Professor, Electrical Engineering and Computer Science, University of California, Davis, Davis, California 95616
President, Statistical Signal Processing, Inc., Yountville, California 94599

PRENTICE HALL, Englewood Cliffs, New Jersey 07632

Library of Congress Cataloging-in-Publication Data
Gardner, William A.
Statistical spectral analysis.
Includes bibliographies and index.
1. Time-series analysis. 2. Signal processing. 3. Spectral theory (Mathematics). I. Title.
QA280.G37 1987 519.55 86-3056
ISBN 0-13-844572-9

© 1988 by Prentice-Hall, Inc.
A Division of Simon & Schuster, Englewood Cliffs, New Jersey 07632

All rights reserved. No part of this book may be reproduced, in any form or by any means, without permission in writing from the publisher.

Printed in the United States of America
10 9 8 7 6 5 4 3 2 1

ISBN 0-13-844572-9

Prentice-Hall International (UK) Limited, London
Prentice-Hall of Australia Pty. Limited, Sydney
Prentice-Hall Canada Inc., Toronto
Prentice-Hall Hispanoamericana, S.A., Mexico
Prentice-Hall of India Private Limited, New Delhi
Prentice-Hall of Japan, Inc., Tokyo
Simon & Schuster Asia Pte. Ltd., Singapore
Editora Prentice-Hall do Brasil, Ltda., Rio de Janeiro

To Nancy
In lieu of time we might have spent together

CONTENTS

FOREWORD, xiii
PREFACE, xvii
ACKNOWLEDGMENTS, xxi
GLOSSARIES, xxiii

Part I: Constant Phenomena, 1

1. INTRODUCTION TO SPECTRAL ANALYSIS, 3
A. Objectives and Motives, 3
B. Orientation, 5
   1. What Is Spectral Analysis?, 5
   2. Why Analyze Waveforms Into Sine Wave Components?, 7
C. Origins of Spectral Analysis, 12
D. Spectral Analysis and Periodicity, 20
E. Summary, 21
F. Overview of Part I, 22
Exercises, 23
Appendix 1-1: Linear Time-Invariant Transformations and Fourier Transforms: A Review, 26

2. NONSTATISTICAL SPECTRAL ANALYSIS, 35
A. Temporal and Spectral Resolution, 35
B. Data Tapering, 38
C. Time-Frequency Uncertainty Principle, 42
D. Periodogram-Correlogram Relation, 42
E. Periodogram and Correlogram Relations for Filters, 43
F. Finite-Average Autocorrelation and Pseudo Spectrum, 46
G. Local Average Power Spectral Density, 48
H. Time Sampling and Aliasing, 49
I. Summary, 51
Exercises, 53
Appendix 2-1: Instantaneous Frequency, 63

3. STATISTICAL SPECTRAL ANALYSIS, 67
A. Motivating Example, 68
B. Temporal- and Spectral-Smoothing Equivalence, 72
C. The Limit Spectrum, 74
D. Examples of Spectral Density, 77
   1. White Noise, 77
   2. Sine Wave with Additive Noise, 78
   3. Sine Wave with Multiplicative Noise (Amplitude Modulation), 78
   4. Pulse-Amplitude Modulation, 79
   5. Sine Wave with Amplitude and Phase Modulation, 80
E. Time-Sampling and Aliasing, 81
F. Time-Series Models, 83
   1. The Moving Average Model, 84
   2. The Autoregressive Model, 84
   3. The ARMA Model, 85
G. Statistical Inference, 85
H. Summary, 87
Exercises, 88
Appendix 3-1: Band-pass Time-Series, 98
Appendix 3-2: Random-Signal Detection, 104

4. ANALOG METHODS, 108
A. Temporal and Spectral Smoothing, 109
B. Fourier Transformation of Tapered Autocorrelation, 112
C. Spectral Leakage and Prewhitening, 113
D. Hopped Temporal Smoothing, 116
E. Wave Analysis, 118
   1. Complex Implementation, 118
   2. Real Implementation, 120
F. Demodulation, 120
   1. Complex Implementation, 121
   2. Real Implementation, 122
   3. Swept-Frequency Implementation, 123
G. A General Representation, 125
H. Summary, 126
Exercises, 128
Appendix 4-1: Other Wave-Analysis Methods, 136
   1. The Fano Identity, 136
   2. The Schroeder-Atal Identity, 136

5. FRACTION-OF-TIME PROBABILISTIC ANALYSIS, 138
A. Motivation, 138
B. Fraction-of-Time Probabilistic Model, 140
C. Bias and Variability, 143
   1. The Finite-Time Complex Spectrum, 144
   2. The Finite-Time Spectrum, 145
   3. Statistical Spectra, 147
   4. Time-Frequency Uncertainty Condition, 159
D. Resolution, Leakage, and Reliability: Design Trade-offs, 161
E. Summary, 169
Exercises, 170

6. DIGITAL METHODS, 179
A. Introduction, 179
B. The DFT, 180
   1. Resolution and Zero-Padding, 180
   2. Circular Convolution, 185
   3. The FST and CFT, 187
C. Methods Based on the DFT, 192
   1. Bartlett-Welch Method, 193
   2. Wiener-Daniell Method, 195
   3. Blackman-Tukey Method, 196
   4. Channelizer Methods, 197
   5. Minimum-Leakage Method, 198
D. Fraction-of-Time Probabilistic Analysis, 201
E. Summary, 202
Exercises, 202

7. CROSS-SPECTRAL ANALYSIS, 211
A. Elements of Cross-Spectral Analysis, 211
B. Coherence, 215
C. Autocoherence and Periodicity, 220
D. Measurement Methods, 223
   1. Temporal and Spectral Smoothing, 224
   2. Fourier Transformation of Tapered Cross Correlation, 224
   3. Cross-Wave Analysis, 224
   4. Cross Demodulation, 227
E. Resolution, Leakage, and Reliability, 229
   1. Cross Periodogram, 229
   2. Statistical Cross Spectra, 230
F. Summary, 234
Exercises, 234
Appendix 7-1: Propagation-Path Identification, 239
Appendix 7-2: Distant-Source Detection, 240
Appendix 7-3: Time- and Frequency-Difference-of-Arrival Estimation, 241

8. TIME-VARIANT SPECTRAL ANALYSIS, 244
A. General Variation, 244
   1. The Physical Spectrum, 244
   2. Linear Time-Variant Systems, 246
   3. Local Ergodicity, 249
B. Periodic Variation, 250
C. Summary, 251
Exercises, 252

9. PARAMETRIC METHODS, 254
A. Introduction, 254
B. Autoregressive Modeling Theory, 255
   1. Yule-Walker Equations, 256
   2. Levinson-Durbin Algorithm, 257
   3. Linear Prediction, 258
   4. Wold-Cramér Decomposition, 259
   5. Maximum-Entropy Model, 261
   6. Lattice Filter, 263
   7. Cholesky Factorization and Correlation Matrix Inversion, 265
C. Autoregressive Methods, 266
   1. Introduction, 266
   2. Least Squares Procedures, 273
   3. Model-Order Determination, 281
   4. Singular-Value Decomposition, 283
   5. Maximum Likelihood Approach, 287
   6. Discussion, 288
D. ARMA Methods, 290
   1. Modified Yule-Walker Equations, 291
   2. Estimation of the AR Parameters, 292
   3. Estimation of the MA Parameters, 293
E. Experimental Study, 298
   1. Periodogram Methods, 301
   2. Minimum-Leakage Method, 306
   3. Yule-Walker, Burg, and Forward-Backward Least-Squares AR Methods, 306
   4. Overdetermined-Normal-Equations AR Method, 307
   5. Singular-Value-Decomposition Method, 318
   6. Hybrid Method, 319
F. Summary, 328
Exercises, 329
Appendix 9-1: Table of Data, 343

Part II: Periodic Phenomena, 351

10. INTRODUCTION TO SECOND-ORDER PERIODICITY, 355
A. Motivation and Overview, 355
B. Derivation of Fundamental Statistical Parameters, 359
   1. Generation of Spectral Lines from Second-Order Periodicity, 359
   2. Synchronized Averaging, 362
   3. Cross-Spectral Analysis, 365
   4. Optimum Generation of Spectral Lines, 367
C. Relationships to Woodward Radar Ambiguity and Wigner-Ville Distribution, 369
D. Sine Waves and Principal Components, 373
   1. Linear Periodically Time-Variant Transformations, 373
   2. Cyclostationary Stochastic Processes, 375
E. The Link Between Deterministic and Probabilistic Theories, 376
F. Multiple Periodicities, 378
G. Summary, 380
Exercises, 381

11. CYCLIC SPECTRAL ANALYSIS, 384
A. Cyclic Periodogram and Cyclic Correlogram, 384
B. Temporal and Spectral Smoothing, Resolution, and Reliability, 386
C. The Limit Cyclic Spectrum, 389
   1. Derivation, 389
   2. Spectrum Types and Bandwidths, 390
   3. Symmetries and Parseval Relations, 393
   4. Cyclic Cross Spectra, 396
   5. Spectral Autocoherence, 396
   6. Filtering and Product Modulation, 398
D. Linear Periodically Time-Variant Transformations, 405
   1. General Input-Output Relations, 405
   2. Rice's Representation, 409
E. Summary, 414
Exercises, 414

12. EXAMPLES OF CYCLIC SPECTRA, 419
A. Pulse and Carrier Amplitude Modulation, 420
B. Quadrature-Carrier Amplitude Modulation, 425
C. Phase and Frequency Carrier Modulation, 428
D. Digital Pulse Modulation, 434
E. Digital Carrier Modulation, 442
   1. Amplitude-Shift Keying, 442
   2. Phase-Shift Keying, 443
   3. Frequency-Shift Keying, 448
F. Spread-Spectrum Modulation, 453
   1. Direct Sequence PSK, 453
   2. Frequency-Hopped FSK, 454
G. Summary, 457
Exercises, 457

13. MEASUREMENT METHODS, 463
A. Temporal and Spectral Smoothing, 463
B. Fourier Transformation of Tapered Cyclic Autocorrelation or Ambiguity Function, 467
C. Fourier Transformation of Spectrally Smoothed Wigner-Ville Distribution, 470
D. Cyclic Wave Analysis, 470
E. Cyclic Demodulation, 475
F. Summary, 477
Exercises, 478

14. APPLICATIONS, 481
A. Optimum Cyclic Filtering, 482
B. Adaptive Cyclic Filtering, 485
C. Cyclic System Identification, 488
D. Cyclic Parameter Estimation and Synchronization, 493
E. Cyclic Detection, 497
F. Cyclic Array Processing, 503
G. Summary, 505
Exercises, 506

15. CYCLIC FRACTION-OF-TIME PROBABILISTIC ANALYSIS, 510
A. Cyclic Fraction-of-Time Probabilistic Model, 511
   1. Cyclic Fraction-of-Time Distributions, 511
   2. Cyclic Temporal Expectation, 517
   3. Gaussian Almost Cyclostationary Time-Series, 521
B. Probabilistic Analysis of Cyclic Spectrum Measurements, 525
   1. A General Representation, 526
   2. Resolution and Leakage, 527
   3. Variability, 528
C. Summary, 534
Exercises, 535

REFERENCES FOR PART I, 538
REFERENCES FOR PART II, 548
AUTHOR INDEX, 553
SUBJECT INDEX, 557

FOREWORD

A good deal of our statistical theory, although it is mathematical in nature, originated not in mathematics but in problems of astronomy, geomagnetism and meteorology: examples of fruitful problems in these subjects have included the clustering of stars, also galaxies, on the celestial sphere, tidal analysis, the correlation of fluctuations of the Earth's magnetic field with other solar-terrestrial effects, and the determination of seasonal variations and climatic trends from weather data. All three of these fields are observational. Great figures of the past, such as C. F.
Gauss (1777-1855) (who worked with both astronomical and geomagnetic data, and discovered the method of least-squares fitting of data, the normal error distribution, and the fast Fourier transform algorithm), have worked on observational data analysis and have contributed much to our body of knowledge on time series and randomness. Much other theory has come from gambling, gunnery, and agricultural research, fields that are experimental. Measurements of the fall of shot on a firing range will reveal a pattern that can be regarded as a sample from a normal distribution in two dimensions, together with whatever bias is imposed by pointing and aiming, the wind, air temperature, atmospheric pressure, and Earth rotation. The deterministic part of any one of these influences may be characterized with further precision by further firing tests. In the experimental sciences, as well as in the observational, great names associated with the foundations of statistics and probability also come to mind.

Experimental subjects are traditionally distinguished from observational ones by the property that conditions are under the control of the experimenter. The design of experiments leads the experimenter to the idea of an ensemble, or random process, an abstract probabilistic creation illustrated by the bottomless barrel of well-mixed marbles that is introduced in elementary probability courses. A characteristic feature of the contents of such a barrel is that we know in advance how many marbles there are of each color, because it is we who put them in; thus, a sample set that is withdrawn after stirring must be compatible with the known mix. The observational situation is quite unlike this. Our knowledge of what is in the barrel, or of what Nature has in store for us, is to be deduced from what has been observed to come out of the barrel, to date. The probability distribution, rather than being a given, is in fact to be intuited from experience. The vital stage of connecting the world of experience to the different world of conventional probability theory may be glossed over when foreknowledge of the barrel and its contents—a probabilistic model—is posited as a point of departure. Many experimental situations are like this observational one.

The theory of signal processing, as it has developed in electrical and electronics engineering, leans heavily toward the random process, defined in terms of probability distributions applicable to ensembles of sample signal waveforms. But many students who are adept at the useful mathematical techniques of the probabilistic approach and quite at home with joint probability distributions are unable to make even a rough drawing of the underlying sample waveforms. The idea that the sample waveforms are the deterministic quantities being modeled somehow seems to get lost. When we examine the pattern of fall of shot from a gun, or the pattern of bullet holes in a target made by firing from a rifle clamped in a vise, the distribution can be characterized by its measurable centroid and second moments or other spread parameters. While such a pattern is necessarily discrete, and never much like a normal distribution, we have been taught to picture the pattern as a sample from an infinite ensemble of such patterns; from this point of view the pattern will of course be compatible with the adopted parent population, as with the marbles.
In this probabilistic approach, to simplify mathematical discussion, one begins with a model, or specification of the continuous probability distribution from which each sample is supposed to be drawn. Although this probability distribution is not known, one is comforted by the assurance that it is potentially approachable by expenditure of more ammunition. But in fact it is not. The assumption of randomness is an expression of ignorance. Progress means the identification of systematic effects which, taken as a whole, may initially give the appearance of randomness or unpredictability. Continuing to fire at the target on a rifle range will not refine the probability distribution currently in use but will reveal, to a sufficiently astute planner of experiments, that air temperature, for example, has a determinate effect which was always present but was previously accepted as stochastic. After measurement, to appropriate precision, temperature may be allowed for. Then a new probability model may be constructed to cover the effects that remain unpredictable.

Many authors have been troubled by the standard information theory approach via the random process or probability distribution because it seems to put the cart before the horse. Some sample parameters such as mean amplitudes or powers, mean durations, and variances may be known, to precision of measurement, but if we are to go beyond pure mathematical deduction and make advances in the realm of phenomena, theory should start from the data. To do otherwise risks failure to discover that which is not built into the model. Estimating the magnitude of an earthquake from seismograms, assessing a stress-test cardiogram, or the pollutant in a stormwater drain, are typical exercises where noise, systematic or random, is to be fought against. Problems on the forefront of development are often ones where the probability distributions of neither signal nor noise are known; and such distributions may be essentially unknowable because repetition is impossible. Thus, any account of measurement, data processing, and interpretation of data that is restricted to probabilistic models leaves something to be desired. The techniques used in actual research with real data do not loom large in courses in probability.

Professor Gardner's book demonstrates a consistent approach from data, those things which in fact are given, and shows that analysis need not proceed from assumed probability distributions or random processes. This is a healthy approach and one that can be recommended to any reader.

Ronald N. Bracewell
Stanford, California

PREFACE

This book grew out of an enlightening discovery I made a few years ago, as a result of a long-term attempt to strengthen the tenuous conceptual link between the abstract probabilistic theory of cyclostationary stochastic processes and empirical methods of signal processing that accommodate or exploit periodicity in random data. After a period of unsatisfactory progress toward using the concept of ergodicity¹ to strengthen this link, it occurred to me (perhaps wishfully) that the abstraction of the probabilistic framework of the theory might not be necessary. As a first step in pursuing this idea, I set out to clarify for myself the extent to which the probabilistic framework is needed to explain various well-known concepts and methods in the theory of stationary stochastic processes, especially spectral analysis theory.
To my surprise, I discovered that all the concepts and methods of empirical spectral analysis can be explained in a more straightforward fashion in terms of a deterministic theory, that is, a theory based on time-averages of a single time-series rather than ensemble-averages of hypothetical random samples from an abstract probabilistic model. To be more specific, I found that the fundamental concepts and methods of empirical spectral analysis can be explained without use of probability calculus or the concept of probability and that probability calculus, which is indeed useful for quantification of the notion of degree of randomness or variability, can be based on time-averages of a single time-series without any use of the concept or theory of a stochastic process defined on an abstract probability space. This seemed to be of such fundamental importance for practicing engineers and scientists and so intuitively satisfying that I felt it must already be in the literature. To put my discovery in perspective, I became a student of the history of the subject. I found that the apparent present-day complacence with the abstraction of the probabilistic theory of stochastic processes, introduced by A. N. Kolmogorov in 1941, has been the trend for about 40 years.

¹ Ergodicity is the property of a mathematical model for an infinite set of time-series that guarantees that an ensemble average over the infinite set will equal an infinite time average over one member of the set.

Nevertheless, I found also that many probabilists throughout this period, including Kolmogorov himself, have felt that the concept of randomness should be defined as directly as possible, and that from this standpoint it seems artificial to conceive of a time-series as a sample of a stochastic process. (The first notable attempt to set up the probability calculus more directly was the theory of collectives introduced by Von Mises in 1919; the mathematical development of such alternative approaches is traced by P. R. Masani [Masani 1979].) In the engineering literature, I found that in the early 1960s two writers, D. G. Brennan [Brennan 1961] and E. M. Hofstetter [Hofstetter 1964], had made notable efforts to explain that much of the theory of stationary time-series need not be based on the abstract probabilistic theory of stochastic processes and then linked with empirical method only through the abstract concept of ergodicity, but rather that a probabilistic theory based directly on time-averages will suffice; however, they did not pursue the idea that a theory of empirical spectral analysis can be developed without any use of probability. Similarly, the more recent book by D. R. Brillinger on time-series [Brillinger 1975] briefly explains precisely how the probabilistic theory of stationary time-series can be based on time-averages, but it develops the theory of empirical spectral analysis entirely within the probabilistic framework. Likewise, the early engineering book by R. B. Blackman and J. W. Tukey [Blackman and Tukey 1958] on spectral analysis defines an idealized spectrum in terms of time-averages but then carries out all analysis of measurement techniques within the probabilistic framework of stochastic processes.
In the face of this 40-year trend, I was perplexed to find that the one most profound and influential work in the entire history of the subject of empirical spectral analysis, Norbert Wiener's Generalized Harmonic Analysis, written in 1930 [Wiener 1930], was entirely devoid of probability theory; and yet I found only one book written since then for engineers or scientists that provides more than a brief mention of Wiener's deterministic theory. All other such books that I found emphasize the probabilistic theory of A. N. Kolmogorov, usually to the complete exclusion of Wiener's deterministic theory. This one book was written by a close friend and colleague of Wiener's, Y. W. Lee, in 1960 [Lee 1960]. Some explanation of this apparent historical anomaly is given by P. R. Masani in his recent commentary on Wiener's Generalized Harmonic Analysis [Masani 1979]: "The quick appearance of the Birkhoff ergodic theorem and the Kolmogorov theory of stochastic processes after the publication of Wiener's Generalized Harmonic Analysis created an intellectual climate favoring stochastic analysis rather than generalized harmonic analysis." But Masani goes on to explain that the current opinion, that Wiener's 1930 memoir [Wiener 1930] marks the culmination of generalized harmonic analysis and its supersession by the more advanced theories of stochastic processes, is questionable on several counts, and he states that the "integrity and wisdom" in the attitude expressed in the early 1960s by Kolmogorov, suggesting a possible return to the ideas of Von Mises, ". . . should point the way toward the future. Side by side with the vigorous pursuit of the theory of stochastic processes, must coexist a more direct process-free [deterministic] inquiry of randomness of different classes of functions." In an even stronger stance, T. L. Fine in the concluding section of his book Theories of Probability [Fine 1973] states "Judging from the present confused status of probability theory, the time is at hand for those concerned about the characterization of chance and uncertainty and the design of inference and decision-making systems to reconsider their long-standing dependence on the traditional statistical and probabilistic methodology. . . . Why not ignore the complicated and hard to justify probability-statistics structure and proceed 'directly' to those, perhaps qualitative, assumptions that characterize our source of random phenomena, the means at our disposal, and our task?"

As a result of my discovery and my newly gained historical perspective, I felt compelled to write a book that would have the same goals, in principle, as many existing books on spectral analysis—to present a general theory and methodology for empirical spectral analysis—but that would present a more relevant and palatable (for many applications) deterministic theory following Wiener's original approach rather than the conventional probabilistic theory. As the book developed, I continued to wonder about the apparent fact that no one in the 50 years since Wiener's memoir had considered such a project worthy enough to pursue. However, as I continued to search the literature, I found that one writer, J. Kampé de Fériet, did make some progress along these lines in a tutorial paper [Kampé de Fériet 1954], and other authors have contributed to development of deterministic theories of related subjects in time-series analysis, such as linear prediction and extrapolation [Wold 1948], [Finch 1969], [Fine 1970].
Furthermore, as the book progressed and I observed the favorable reactions of my students and colleagues, my conviction grew to the point that I am now convinced that it is generally beneficial for students of the subject of empirical spectral analysis to study the deterministic theory before studying the more abstract probabilistic theory.

When I had completed most of the development for a book on a deterministic theory of empirical spectral analysis of stationary time-series, I was then able to return to the original project of presenting the results of my research work on cyclostationary time-series but within a nonprobabilistic framework. Once I started, it quickly became apparent that I was able to conceptualize intuitions, hunches, conjectures, and so forth far more clearly than before when I was laboring within the probabilistic framework. The original relatively fragmented research results on cyclostationary stochastic processes rapidly grew into a comprehensive theory of random time-series from periodic phenomena that is every bit as satisfying as the theory of random time-series from constant phenomena (stationary time-series). This theory, which brings to light the fundamental role played by spectral correlation in the study of periodic phenomena, is presented in Part II.

Part I of this book is intended to serve as both a graduate-level textbook and a technical reference. The only prerequisite is an introductory course on Fourier analysis. However, some prior exposure to probability would be helpful for Section B in Chapter 5 and Section A in Chapter 15. The body of the text in Part I presents a thorough development of fundamental concepts and results in the theory of statistical spectral analysis of empirical time-series from constant phenomena, and a brief overview is given at the end of Chapter 1. Various supplements that expand on topics that are in themselves important or at least illustrative but that are not essential to the foundation and framework of the theory are included in appendices and exercises at the ends of chapters.

Part II of this book, like Part I, is intended to serve as both textbook and reference, and the same unifying philosophical framework developed in Part I is used in Part II. However, unlike Part I, the majority of concepts and results presented in Part II are new. Because of the novelty of this material, a brief preview is given in the Introduction to Part II. The only prerequisite for Part II is Part I.

The focus in this book is on fundamental concepts, analytical techniques, and basic empirical methods. In order to maintain a smooth flow of thought in the development and presentation of concepts that steadily build on one another, various derivations and proofs are omitted from the text proper and are put into the exercises, which include detailed hints and outlines of solution approaches. Depending on the students' background, the instructor can either assign these as homework exercises or present them in the lectures. Because the treatment of experimental design and applications is brief and is also relegated to the exercises and concise appendices, some readers might desire supplements on these topics.

REFERENCES

Blackman, R. B. and J. W. Tukey. 1958. The Measurement of Power Spectra. New York: American Telephone and Telegraph Co.
Brennan, D. G. 1961. Probability theory in communication system engineering. Chapter 2 in Communication System Theory. Ed. E. J. Baghdady. New York: McGraw-Hill.
Brillinger, D. R. 1975. Time Series. New York: Holt, Rinehart and Winston.
Finch, P. D. 1969. Linear least squares prediction in non-stochastic time-series. Advances in Applied Prob. 1:111-22.
Fine, T. L. 1970. Extrapolation when very little is known about the source. Information and Control. 16:331-359.
Fine, T. L. 1973. Theories of Probability: An Examination of Foundations. New York: Academic Press.
Hofstetter, E. M. 1964. Random processes. Chapter 3 in The Mathematics of Physics and Chemistry, vol. II. Ed. H. Margenau and G. M. Murphy. Princeton, N.J.: D. Van Nostrand Co.
Kampé de Fériet, J. 1954. Introduction to the statistical theory of turbulence, I and II. J. Soc. Indust. Appl. Math. 2, Nos. 1 and 3:1-9 and 143-74.
Lee, Y. W. 1960. Statistical Theory of Communication. New York: John Wiley & Sons.
Masani, P. R. 1979. "Commentary on the memoir on generalized harmonic analysis." Pp. 333-379 in Norbert Wiener: Collected Works, Volume I. Cambridge, Mass.: Massachusetts Institute of Technology.
Wiener, N. 1930. Generalized harmonic analysis. Acta Mathematica. 55:117-258.
Wold, H. O. A. 1948. On prediction in stationary time-series. Annals of Math. Stat. 19:558-67.

William A. Gardner

ACKNOWLEDGMENTS

I would like to express my gratitude to Mr. William A. Brown for his important technical and moral support in the early stages of this project, and to Professors Enders A. Robinson, Ronald N. Bracewell, and James L. Massey for their enthusiastic encouragement. I also would like to express my appreciation to Professor Thomas Kailath for bringing to my attention several early fundamental papers on nonprobabilistic statistical theory. In addition, I would like to thank Professor Herschel H. Loomis and Dr. Crawford W. Scott for their interest in applications of the theory in Part II, and the resultant financial support, and Messrs. Brian G. Agee, William A. Brown, and Chihkang Chen for their participation in applying the theory of Part II. Credit is due Messrs. Brown and Chen for their contributions to some of the technical material in Chapter 12, and also special credit is due Mr. Brown for his major contribution to Chapter 15, especially Section B. Further credit is due Messrs. Chen and Brown for their substantial joint effort to produce the many excellent computer-generated graphs. It is a pleasure to express my appreciation to Mrs. Patty A. Gemulla and Mrs. Marion T. Franke for their excellent job of typing the manuscript, Dr. Sheldon N. Salinger for critically reading the manuscript, Mr. Randy S. Roberts and Messrs. Brown and Chen for their substantial proofreading efforts, and many other past and present students for their feedback and assistance. My deepest gratitude is expressed to my wife, Nancy, for her patience, understanding, and support throughout this demanding project and the years of work leading up to it.

William A. Gardner

GLOSSARIES

GLOSSARY OF NOTATIONS AND TERMINOLOGY FOR WINDOW FUNCTIONS

a_T(t)   General data-tapering window of unity height and approximate width T.
A_{1/T}(f)   Fourier transform of a_T(t).
E(f)   Effective spectral smoothing window.
g_{Δt}(t)   General time-smoothing window of unity area and approximate width Δt.
G_{1/Δt}(f)   Fourier transform of g_{Δt}(t).
h_{1/Δf}(τ)   General autocorrelation-tapering window of unity height and approximate width 1/Δf.
H_{Δf}(f)   General spectral smoothing window of unity area and approximate width Δf; Fourier transform of h_{1/Δf}(τ).
u_T(t)   Rectangle window of unity area and width T.
v_T(t)   Triangle window of unity area and base width 2T.
w_T(t)   Sinc window of unity area and null-to-null width 2T.
z_T(t)   Squared sinc window of unity area and null-to-null width 2T.

GLOSSARY OF NOTATIONS AND TERMINOLOGY FOR CORRELATIONS AND SPECTRA IN PART I¹

¹ Some notation that is used only within a single chapter is not included in this glossary.

r_h(τ)   Finite autocorrelation of h: (36), Chapter 2.
r_h̃(τ)   Finite autocorrelation of discrete-time h: (86), Chapter 3.
R_{x_T}(t, τ)   Time-variant correlogram (time-variant finite-time autocorrelation) of segment of x of duration T: (19), Chapter 2; for tapered data, (20), Chapter 2.
R_{x̃_T}(t, τ)   Time-variant correlogram (time-variant finite-time autocorrelation) of segment of discrete-time x of duration T: (67), Chapter 2; (44), Chapter 6.
R_x(t, τ)_T   Time-variant finite-average autocorrelation of x: (21), Chapter 2.
R̂_x(τ)   Limit autocorrelation of x: (6), Chapter 1; see also (31), Chapter 2.
R̂_x̃(τ)   Limit autocorrelation of discrete-time x: (79), Chapter 3.
R_x(t, τ)   Probabilistic instantaneous autocorrelation of x: (7), Chapter 8.
S_{x_T}(t, f)   Time-variant periodogram (time-variant finite-time spectrum) of segment of x of duration T: (1), (2), Chapter 2; for tapered data, (1), (11), Chapter 2; Fourier transform of R_{x_T}(t, τ).
S_{x̃_T}(t, f)   Time-variant periodogram (time-variant finite-time spectrum) of segment of discrete-time x of duration T: (69), Chapter 3; (30), Chapter 6; Fourier series transform of R_{x̃_T}(t, τ).
S̄_{x_T}(t, f)   Expected time-variant periodogram (expected time-variant finite-time spectrum) of segment of x of duration T: (3), Chapter 8.
S_x(t, f)_T   Time-variant pseudospectrum of x: (22), Chapter 2; Fourier transform of R_x(t, τ)_T.
S_{x_T}(t, f)_{Δt}   Temporally smoothed spectrum of tapered x: (11), Chapter 3; (1), Chapter 4.
S_{x̃_T}(t, f)_{Δt}   Temporally smoothed spectrum of tapered discrete-time x: (29), Chapter 6.
S_{x_T}(t, f)_{Δf}   Spectrally smoothed spectrum of x: (16)-(17), Chapter 3; (2) and (21), Chapter 4.
S_{x̃_T}(t, f)_{Δf}   Spectrally smoothed spectrum of discrete-time x: (36) and (43), Chapter 6.
S_x(t, f)_{1/Δf, Δt}   Temporally smoothed pseudospectrum of x: (3), Chapter 4.
S_x(t, f)_{1/Δf, Δf}   Spectrally smoothed pseudospectrum of x: (4) and (22), Chapter 4.
Ŝ_x(f)   Limit spectrum of x: (26), Chapter 3; Fourier transform of R̂_x(τ).
Ŝ_x̃(f)   Limit spectrum of discrete-time x: (69a), Chapter 3.
S_x(t, f)   Probabilistic instantaneous spectrum of x: (6), Chapter 8.
X_T(t, f)   Time-variant finite-time complex spectrum of segment of x of duration T: (2), (11), Chapter 2; complex demodulate, (44)-(45), Chapter 4.
X̄_T(t, f)   Normalized time-variant finite-time complex spectrum of segment of x of duration T: (27), Chapter 5.
X̃_T(t, f)   Time-variant finite-time complex spectrum of segment of discrete-time x of length N = 1 + T/T_s: (53), Chapter 2; (28), Chapter 6.
x_T(t, f)   Local sine wave component of x: (44), Chapter 4.
x̃_T(t, f)   Local sine wave component of discrete-time x: (26), Chapter 6; for tapered data, (31), Chapter 6.

GLOSSARY OF NOTATIONS AND TERMINOLOGY FOR CROSS CORRELATIONS AND CROSS SPECTRA IN PART I

C_{xy}(f)   Complex coherence function of x and y: (32), Chapter 7.
R_{xy_T}(t, τ)   Time-variant cross correlogram (time-variant finite-time cross correlation) of segments of x and y of duration T: (5), Chapter 7.
R_{xy}(t, τ)_T   Time-variant finite-average cross correlation of x and y: (13), Chapter 7.
R̂_{xy}(τ)   Limit cross correlation of x and y: (18), (20), Chapter 7.
S_{xy_T}(t, f)   Time-variant cross periodogram (time-variant finite-time cross spectrum) of segments of x and y of duration T: (3), Chapter 7; Fourier transform of R_{xy_T}(t, τ).
S_{xy}(t, f)_T   Time-variant pseudo-cross spectrum of x and y: (12), Chapter 7; Fourier transform of R_{xy}(t, τ)_T.
S_{xy_T}(t, f)_{Δt}   Temporally smoothed cross spectrum of x and y: (9), Chapter 7.
S_{xy_T}(t, f)_{Δf}   Spectrally smoothed cross spectrum of x and y: (8), Chapter 7.
S_{xy}(t, f)_{1/Δf, Δt}   Temporally smoothed pseudo-cross spectrum of x and y: (11), Chapter 7.
S_{xy}(t, f)_{1/Δf, Δf}   Spectrally smoothed pseudo-cross spectrum of x and y: (10), Chapter 7.
Ŝ_{xy}(f)   Limit cross spectrum of x and y: (17), Chapter 7; Fourier transform of R̂_{xy}(τ).

GLOSSARY OF NOTATIONS AND TERMINOLOGY FOR CYCLIC CORRELATIONS AND CYCLIC SPECTRA IN PART II

C_x^α(f)   Spectral autocoherence of x: (35), Chapter 10.
C_{xy}^α(f)   Cyclic cross coherence of x and y: (45b), Chapter 14.
r_h^α(τ)   Finite cyclic autocorrelation of h: (137), Chapter 11.
R_{x_T}^α(t, τ)   Time-variant cyclic correlogram of segment of x of duration T: (12), Chapter 11.
R_x^α(t, τ)_T   Time-variant finite-average cyclic autocorrelation of x: (14), Chapter 11.
R_{xy_T}^α(t, τ)   Time-variant cyclic cross correlogram of segments of x and y of duration T: (67), Chapter 11.
R̂_x^α(τ)   Limit cyclic autocorrelation of x: (25), Chapter 10; (13), (15), Chapter 11.
R̂_x̃^α(τ)   Limit cyclic autocorrelation of discrete-time x: (109), (110), Chapter 11.
R̂_{xy}^α(τ)   Limit cyclic cross correlation of x and y: (69), Chapter 11.
R̂_x(t, τ; T₀)   Limit periodic autocorrelation of x (period = T₀): (99), (102), Chapter 10.
R̂_x(t, τ)   Limit almost periodic autocorrelation of x with multiple periodicity: (100), (103), Chapter 10; or limit periodic autocorrelation of x with single periodicity: (23), (24), Chapter 10.
S_{x_T}^α(t, f)   Time-variant cyclic periodogram of segment of x of duration T: (8), (11), Chapter 11.
S_{xy_T}^α(t, f)   Time-variant cyclic cross periodogram of segments of x and y of duration T: (65), (66), Chapter 11.
S_{x_T}^α(t, f)_{Δt}   Temporally smoothed cyclic spectrum of x: (1), Chapter 11; (4a), Chapter 13.
S_{x̃_T}^α(t, f)_{Δt}   Temporally smoothed cyclic spectrum of discrete-time x: (6), Chapter 13.
S_{x_T}^α(t, f)_{Δf}   Spectrally smoothed cyclic spectrum of x: (4b), Chapter 13.
S_{x̃_T}^α(t, f)_{Δf}   Spectrally smoothed cyclic spectrum of discrete-time x: (5), Chapter 13.
Ŝ_x^α(f)   Limit cyclic spectrum of x: (30), Chapter 10; (43), Chapter 11.
Ŝ_x̃^α(f)   Limit cyclic spectrum of discrete-time x: (110), (112), Chapter 11.
Ŝ_{xy}^α(f)   Limit cyclic cross spectrum of x and y: (63), (68), Chapter 11.
Ŝ_x(t, f; T₀)   Limit periodic spectrum of x (period = T₀); Fourier transform of R̂_x(t, τ; T₀).
Ŝ_x(t, f)   Limit almost periodic spectrum of x with multiple periodicity: (106), (107), Chapter 10; or limit periodic spectrum of x with single periodicity: (58), Chapter 10.

Part I: CONSTANT PHENOMENA

INTRODUCTION

The subject of Part I is the statistical spectral analysis of empirical time-series. The term empirical indicates that the time-series represents data from a physical phenomenon; the term spectral analysis denotes decomposition of the time-series into sine wave components; and the term statistical indicates that the squared magnitude of each measured or computed sine wave component, or the product of pairs of such components, is averaged to reduce random effects in the data that mask the spectral characteristics of the phenomenon under study. The purpose of Part I is to present a comprehensive deterministic theory of statistical spectral analysis and thereby to show that, contrary to popular belief, the theoretical foundations of this subject need not be based on probabilistic concepts.
The motivation for Part I is that for many applications the conceptual gap between practice and the deterministic theory presented herein is narrower and thus easier to bridge than is the conceptual gap between practice and the more abstract probabilistic theory. Nevertheless, probabilistic concepts are not ignored. A means for obtaining probabilistic interpretations of the deterministic theory is developed in terms of fraction-of-time distributions, and ensemble averages are occasionally discussed.

A few words about the terminology used are in order. Although the terms statistical and probabilistic are used by many as if they were synonymous, their meanings are quite distinct. According to the Oxford English Dictionary, statistical means nothing more than "consisting of or founded on collections of numerical facts". Therefore, an average of a collection of spectra is a statistical spectrum. And this has nothing to do with probability. Thus, there is nothing contradictory in the notion of a deterministic or nonprobabilistic theory of statistical spectral analysis. (An interesting discussion of variations in usage of the term statistical is given in Comparative Statistical Inference by V. Barnett [Barnett 1973].) The term deterministic is used here as it is commonly used, as a synonym for nonprobabilistic. Nevertheless, the reader should be forewarned that the elements of the nonprobabilistic theory presented herein are defined by infinite limits of time averages and are therefore no more deterministic in practice than are the elements of the probabilistic theory. (In mathematics, the deterministic and probabilistic theories referred to herein are sometimes called the functional and stochastic theories, respectively.) The term random is often taken as an implication of an underlying probabilistic model. But in this book, the term is used in its broader sense to denote nothing more than the vague notion of erratic unpredictable behavior.

1. INTRODUCTION TO SPECTRAL ANALYSIS

This introductory chapter sets the stage for the in-depth study of spectral analysis taken up in the following chapters by explaining objectives and motives, answering some basic questions about the nature and uses of spectral analysis, and establishing a historical perspective on the subject.

A. OBJECTIVES AND MOTIVES

A premise of this book is that the way engineers and scientists are commonly taught to think about empirical statistical spectral analysis of time-series data is fundamentally inappropriate for many applications. The subject is not really as abstruse as it appears to be from the conventional point of view. The problem is that the subject has been imbedded in the abstract probabilistic framework of stochastic processes, and this abstraction impedes conceptualization of the fundamental principles of empirical statistical spectral analysis. Hence, the probabilistic theory of statistical spectral analysis should be taught to engineers and scientists only after they have learned the fundamental deterministic principles—both qualitative and quantitative. For example, one should first learn 1) when and why sine wave analysis of time-series is appropriate, 2) how and why temporal and spectral resolution interact, 3) why statistical (averaged) spectra are of interest, and 4) what the various methods for measuring and computing statistical spectra are and how they are related.
One should also learn how simultaneously to control the spectral and temporal resolution and the degree of randomness (reliability) of a statistical spectrum. All this can be accomplished in a nonsuperficial way without reference to the probabilistic theory of stochastic processes.

The concept of a deterministic theory of statistical spectral analysis is not new. Much deterministic theory was developed prior to and after the infusion, beginning in the 1930s, of probabilistic concepts into the field of time-series analysis. The most fundamental concept underlying present-day theory of statistical spectral analysis is the concept of an ideal spectrum, and the primary objective of statistical spectral analysis is to estimate the ideal spectrum using a finite amount of data. The first theory to introduce the concept of an ideal spectrum is Norbert Wiener's theory of generalized harmonic analysis [Wiener 1930], and this theory is deterministic. Later, Joseph Kampé de Fériet presented a deterministic theory of statistical spectral analysis that ties Wiener's theory more closely to the empirical reality of finite-length time-series [Kampé de Fériet 1954]. But the very great majority of treatments in the ensuing 30 years consider only a probabilistic theory of statistical spectral analysis, although a few authors do briefly mention the dual deterministic theory (e.g., [Koopmans 1974; Brillinger 1976]).

The primary objective of Part I of this book is to adopt the deterministic viewpoint of Wiener and Kampé de Fériet and show that a comprehensive deterministic theory of statistical spectral analysis, which for many applications relates more directly to empirical reality than does its more popular probabilistic counterpart, can be developed. A secondary objective of Part I is to adopt the empirical viewpoint of Donald G. Brennan [Brennan 1961] and Edward M. Hofstetter [Hofstetter 1964], from which they develop an objective probabilistic theory of stationary random processes based on fraction-of-time distributions, and show that probability theory can be applied to the deterministic theory of statistical spectral analysis without introducing a more abstract mathematical model of empirical reality based on the axiomatic or subjective probabilistic theory of stochastic processes. This can be interpreted as an exploitation of Herman O. A. Wold's isomorphism between an empirical time-series and a probabilistic model of a stationary stochastic process. This isomorphism is responsible for the duality between probabilistic (ensemble-average) and deterministic (time-average) theories of time-series [Wold 1948], [Gardner 1985].

There are two motives for Part I of this book. The first is to stimulate a reassessment of the way engineers and scientists are often taught to think about statistical spectral analysis by showing that probability theory need not play a primary role. The second motive is to pave the way for introducing a new theory and methodology for statistical spectral analysis of random data from periodically time-variant phenomena, which is presented in Part II. The fact that this new theory and methodology, which unifies various emerging—as well as long-established—time-series analysis concepts and techniques, is most transparent when built on the foundation of the deterministic theory developed in Part I is additional testimony that probability theory need not play a primary role in statistical spectral analysis.
The book, although concise, is tutorial and is intended to be comprehensible by graduate students and professionals in engineering, science, mathematics, and statistics. The accomplishments of the book should be appreciated most by those who have studied statistical spectral analysis in terms of the popular probabilistic theory and have struggled to bridge the conceptual gaps between this abstract theory and empirical reality.

B. ORIENTATION

1. What Is Spectral Analysis?

Spectral analysis of functions is used for solving a wide variety of practical problems encountered by engineers and scientists in nearly every field of engineering and science. The functions of primary interest in most fields are temporal or spatial waveforms or discrete data. The most basic purpose of spectral analysis is to represent a function by a sum of weighted sinusoidal functions called spectral components; that is, the purpose is to decompose (analyze) a function into these spectral components. The weighting function in the decomposition is a density of spectral components. This spectral density is also called a spectrum.¹ The reason for representing a function by its spectrum is that the spectrum can be an efficient, convenient, and often revealing description of the function.

¹ The term spectrum, which derives from the Latin for image, was originally introduced by Sir Isaac Newton (see [Robinson 1982]).

As an example of the use of spectral representation of temporal waveforms in the field of signal processing, consider the signal extraction problem of extracting an information-bearing signal from corrupted (noisy) measurements. In many situations, the spectrum of the signal differs substantially from the spectrum of the noise. For example, the noise might have more high-frequency content; hence, the technique of spectral filtering can be used to attenuate the noise while leaving the signal intact. Another example is the data-compression problem of using coding to compress the amount of data used to represent information for the purpose of efficient storage or transmission. In many situations, the information contained in a complex temporal waveform (e.g., a speech segment) can be coded more efficiently in terms of the spectrum.

There are two types of spectral representations. The more elementary of the two shall be referred to as simply the spectrum, and the other shall be referred to as the statistical spectrum. The term statistical indicates that averaging or smoothing is used to reduce random effects in the data that mask the spectral characteristics of the phenomenon under study. For time-functions, the spectrum is obtained from an invertible transformation from a time-domain description of a function, x(t), to a frequency-domain description, or more generally to a joint time- and frequency-domain description. The (complex) spectrum of a segment of data of length T centered at time t and evaluated at frequency f is

X_T(t, f) ≜ ∫_{t−T/2}^{t+T/2} x(u) e^{−i2πfu} du        (1)

for which i = √−1. Because of the invertibility of this transformation, a function can be recovered from its spectrum,

x(u) = ∫_{−∞}^{∞} X_T(t, f) e^{i2πfu} df,   u ∈ (t − T/2, t + T/2].        (2)

In contrast to this, a statistical spectrum involves an averaging or smoothing operation that is not invertible. For example, the statistical spectrum

S_{x_T}(t, f)_{Δt} ≜ (1/Δt) ∫_{t−Δt/2}^{t+Δt/2} S_{x_T}(v, f) dv        (3)

for which S_{x_T}(t, f) is the normalized squared magnitude spectrum

S_{x_T}(t, f) ≜ (1/T) |X_T(t, f)|²        (4)

is obtained from a temporal smoothing operation.
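To make these operations concrete, the following minimal Python sketch (not from the book; the sampling rate, segment length, and test signal are all illustrative assumptions) approximates the finite-time complex spectrum of (1), the normalized squared-magnitude spectrum of (4), and the temporal smoothing of (3) on sampled data, with the sliding average in (3) approximated by an average over successive nonoverlapping segments:

```python
import numpy as np

# Illustrative sketch (not from the book): discrete-time approximations of
# the finite-time complex spectrum X_T(t, f) of Eq. (1), the normalized
# squared-magnitude spectrum S_xT(t, f) of Eq. (4), and the temporally
# smoothed statistical spectrum of Eq. (3). All parameters are arbitrary.

fs = 1000.0                 # sampling rate (Hz) used to discretize the integrals
T = 1.0                     # segment length (s)
N = int(T * fs)             # samples per segment

rng = np.random.default_rng(0)
t = np.arange(16 * N) / fs  # 16 seconds of data
x = np.sin(2 * np.pi * 50.0 * t) + rng.standard_normal(t.size)

def periodogram(segment, fs):
    """S_xT = (1/T)|X_T|^2, Eq. (4); the 1/fs factor turns the DFT sum
    into a Riemann-sum approximation of the integral in Eq. (1)."""
    X = np.fft.rfft(segment) / fs
    f = np.fft.rfftfreq(segment.size, d=1.0 / fs)
    return f, np.abs(X) ** 2 / (segment.size / fs)

# Eq. (3) smooths S_xT(v, f) over a time span Delta t; here the sliding
# average is approximated by averaging 16 successive length-T segments.
segments = x.reshape(-1, N)
f, S_single = periodogram(segments[0], fs)
S_smoothed = np.mean([periodogram(s, fs)[1] for s in segments], axis=0)
# S_single fluctuates wildly from frequency to frequency; S_smoothed is
# far less erratic but describes an average over the full 16-s span.
```

Averaging hopped (nonoverlapping) segments in place of the sliding average of (3) is a standard computational shortcut; hopped temporal smoothing is taken up in Chapter 4.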
Thus, a statistical spectrum is a summary description of a function from which the function x(t) cannot be recovered. Therefore, although the spectrum is useful for both signal extraction and data compression, the statistical spectrum is not directly useful for either. It is, however, quite useful indirectly for analysis, design, and adaptation of schemes for signal extraction and data compression. It is also useful for forecasting or prediction and more directly for other signal-processing tasks such as 1) the modeling and system-identification problems of determining the characteristics of a system from measurements on it, such as its response to excitation, and 2) decision problems, such as the signal-detection problem of detecting the presence of a signal buried in noise. As a matter of fact, the problem of detecting hidden periodicities in random data motivated the earliest work in the development of spectral analysis, as discussed in Section D.

Statistical spectral analysis has diverse applications in areas such as mechanical vibrations, acoustics, speech, communications, radar, sonar, ultrasonics, optics, astronomy, meteorology, oceanography, geophysics, economics, biomedicine, and many other areas. To be more specific, let us briefly consider a few applications. Spectral analysis is used to characterize various signal sources. For example, the spectral purity of a sine wave source (oscillator) is determined by measuring the amounts of harmonics from distortion due, for example, to nonlinear effects in the oscillator and also by measuring the spectral content close in to the fundamental frequency of the oscillator, which is due to random phase noise. Also, the study of modulation and coding of sine wave carrier signals and pulse-train signals for communications, telemetry, radar, and sonar employs spectral analysis as a fundamental tool, as do surveillance systems that must detect and identify modulated and coded signals in a noisy environment. Spectral analysis of the response of electrical networks and components such as amplifiers to both sine wave and random-noise excitation is used to measure various properties such as nonlinear distortion, rejection of unwanted components, such as power-supply components and common-mode components at the inputs of differential amplifiers, and the characteristics of filters, such as center frequencies, bandwidths, pass-band ripple, and stop-band rejection. Similarly, spectral analysis is used to study the magnitude and phase characteristics of the transfer functions as well as nonlinear distortion of various electrical, mechanical, and other systems, including loudspeakers, communication channels and modems (modulator-demodulators), and magnetic tape recorders in which variations in tape motion introduce signal distortions. In the monitoring and diagnosis of rotating machinery, spectral analysis is used to characterize random vibration patterns that result from wear and damage that cause imbalances. Also, structural analysis of physical systems such as aircraft and other vehicles employs spectral analysis of vibrational response to random excitation to identify natural modes of vibration (resonances). In the study of natural phenomena such as weather and the behavior of wildlife and fisheries populations, the problem of identifying cause-effect relationships is attacked using techniques of spectral analysis.
Various physical theories are developed with the assistance of spectral analysis, for example, in studies of atmospheric turbulence and undersea acoustical propagation. In various fields of endeavor involving large, complex systems such as economics, spectral analysis is used in fitting models to time-series for several purposes, such as simulation and forecasting. As might be surmised from this sampling of applications, the techniques of spectral analysis permeate nearly every field of science and engineering.

Spectral analysis applies to both continuous-time functions, called waveforms, and discrete-time functions, called sampled data. Other terms are commonly used also; for example, the terms data and time-series are each used for both continuous-time and discrete-time functions. Since the great majority of data sources are continuous-time phenomena, continuous-time data are focused on in this book, because an important objective is to maintain a close tie between theory and empirical reality. Furthermore, since optical technology has emerged as a new frontier in signal processing and optical quantities vary continuously in time and space, this focus on continuous-time data is well suited to upcoming technological developments. Nevertheless, since some of the most economical implementations of spectrum analyzers and many of the newly emerging parametric methods of spectral analysis operate with discrete time and discrete frequency and since some data are available only in discrete form, discrete-time and discrete-frequency methods also are described.

2. Why Analyze Waveforms Into Sine Wave Components?²

² Readers in need of a brief remedial review of the prerequisite topic of linear time-invariant transformations and the Fourier transform should consult Appendix 1.

The primary reason why sine waves are especially appropriate components with which to analyze waveforms is our preoccupation with linear time-invariant (LTI) transformations, which we often call filters. A secondary reason why statistical (time-averaged) analysis into sine wave components is especially appropriate is our preoccupation with time-invariant phenomena (data sources). To be specific, a transformation of a waveform x(t) into another waveform, say y(t), is an LTI transformation if and only if there exists a weighting function h(t) (here assumed
This follows from the facts that (1) the convolution operation produces a continuous linear combination of time-translates, that is, y(t) is a weighted sum (over v) of x(t — v), and (2) the complex sine wave is the only bounded function whose form is invariant (except for a scale factor) to time-translation, that is, a bounded function x(t) satisfies x(t = v) = ex(t) @ for all ¢ if and only if x(t) = Xe? @) for some real values of X and f (exercise 3). As a consequence, the form of a bounded function x(¢) is invariant to convolution if and only if x() = Xe", in > In Part Il, it is explained that periodic and almost periodic phenomena as well as constant (time-invariant) phenomena satisfy (6). For x(t) to be from a constant phenomenon, it must satisfy not only (6) but also lim Fina + r/Dx(t — 7/2)e-P™ de = O for all a # 0. “In some treatments of time-series analysis (see [Jenkins and Watts 1968}), the function (6) modified by subtraction of the mean im © lim f. x(0) dt from x(1), is called the autocovariance function, and when normalized by R,(0) it is called the autocorrelation function. * If-x(0) is the voltage (in volts) across a one-ohm resistance, then x*(t) is the power dissipation (in watts). 8 Introduction to Spectral Analysis Chap. 1
