
System Identification

Lecture 3: Frequency domain identification

Roy Smith

2018-10-5 3.1

Signal and system relationships

Signal properties that we can calculate/estimate from data:

§ Fourier domain representation (DFT)
§ Periodogram
§ Auto- and cross-correlation
§ Spectral densities

Methods:
§ Sinusoidal correlation methods
§ Empirical Transfer Function Estimation (ETFE)
§ Spectral estimation
Inverse Fourier transform

The inverse Fourier transform is

    x(k) = (1/2π) ∫_{−π}^{π} X(e^{jω}) e^{jωk} dω,

where k = −∞, …, ∞.

Energy spectral density

If x(k) is a finite-energy signal,

    ‖x(k)‖₂² = Σ_{k=−∞}^{∞} |x(k)|² < ∞.

The sequence x(k) has a Fourier transform,

    X(e^{jω}) = Σ_{k=−∞}^{∞} x(k) e^{−jωk},  where ω ∈ [−π, π).

The energy spectral density can be defined as

    S_x(e^{jω}) = |X(e^{jω})|².
Autocorrelation (finite energy signals)

The autocorrelation of x(k) is

    R_x(τ) = Σ_{k=−∞}^{∞} x(k) x(k−τ),  τ = −∞, …, 0, …, ∞.

The energy spectral density is the Fourier transform of the autocorrelation:

    Σ_{τ=−∞}^{∞} R_x(τ) e^{−jωτ} = S_x(e^{jω}).
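The autocorrelation/spectrum pair above can be checked numerically. A small NumPy sketch (the signal here is made up for illustration; the lecture's own calculations use Matlab):

```python
import numpy as np

# Hypothetical finite-energy signal (illustration only).
x = np.array([1.0, 2.0, -1.0, 0.5, 0.0, -0.5])
K = len(x)

# R_x(tau) = sum_k x(k) x(k - tau), for lags tau = -(K-1), ..., K-1.
R = np.correlate(x, x, mode="full")

# Energy spectral density two ways: S_x = |X|^2 (zero-padded to 2K-1
# points) and S_x = Fourier transform of R_x.
n_fft = 2 * K - 1
S_direct = np.abs(np.fft.fft(x, n_fft)) ** 2
S_from_R = np.fft.fft(np.roll(R, -(K - 1))).real  # put tau = 0 first

assert np.allclose(S_direct, S_from_R)
```

Parseval's relation also holds on the same grid: sum(S_direct)/n_fft equals the signal energy Σ|x(k)|².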

Autocorrelation example (finite energy signal)

[Figure: x(k), k = {0, 1, …, 63}, amplitudes within ±2, plotted against index k.]
Autocorrelation example (finite energy signal)

    R_x(τ) = Σ_{k=−∞}^{∞} x(k) x(k−τ),  τ = −∞, …, 0, …, ∞.

[Figure: R_x(τ) plotted against lag τ for τ = −60, …, 60.]

Energy spectral density example (finite energy signal)

    S_x(e^{jω}) = Σ_{τ=−∞}^{∞} R_x(τ) e^{−jωτ}

[Figure: S_x(e^{jω}) plotted against ω (rad/sample) for ω ∈ [−π, π).]
Discrete periodic signals

If x(k) is periodic with period equal to M (assume M even),

    x(k) = x(k + M),  for all k.

The fundamental frequency is

    ω₀ = 2π/M.

There are only M unique harmonics of the sinusoid e^{jω₀}.

The non-negative harmonic frequencies are

    e^{jnω₀},  n = 0, 1, …, M/2.

Discrete Fourier series (periodic signals)

Periodic signal: x(k) (period = M).

Choose the "calculation length" N to be equal to the period (N = M).

The Fourier series is:

    X(e^{jω_n}) = Σ_{k=0}^{N−1} x(k) e^{−jω_n k},  where ω_n = 2πn/N = nω₀,  n = 0, …, N−1.

The inverse transform is:

    x(k) = (1/N) Σ_{n=0}^{N−1} X(e^{jω_n}) e^{jω_n k}.
Autocorrelation (periodic signals, N = M)

The autocorrelation of x(k) (of period M and N = M) is:

    R_x(τ) = (1/N) Σ_{k=0}^{N−1} x(k) x(k−τ).

The Fourier transform of R_x(τ) is defined as the power spectral density,

    φ_x(e^{jω_n}) = Σ_{τ=0}^{N−1} R_x(τ) e^{−jω_n τ} = (1/N) |X(e^{jω_n})|².

Energy in a single period:

    Σ_{k=0}^{N−1} |x(k)|² = Σ_{n=0}^{N−1} φ_x(e^{jω_n}).
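Both identities above (the DFT of the periodic autocorrelation, and the energy in one period) can be verified with a short NumPy sketch on a made-up period:

```python
import numpy as np

# One period of a made-up periodic signal (N = M = 8).
x = np.array([2.0, 1.0, -1.0, -2.0, -1.0, 0.0, 1.0, 2.0])
N = len(x)

# Periodic autocorrelation R_x(tau) = (1/N) sum_k x(k) x((k - tau) mod N).
R = np.array([np.mean(x * np.roll(x, tau)) for tau in range(N)])

# Its DFT is the power spectral density, equal to (1/N)|X(e^{jw_n})|^2.
phi = np.fft.fft(R).real
assert np.allclose(phi, np.abs(np.fft.fft(x)) ** 2 / N)

# Energy in one period: sum_k |x(k)|^2 = sum_n phi_x(e^{jw_n}).
assert np.isclose(np.sum(x**2), np.sum(phi))
```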

Example: periodic signal (N = M)

[Figure: periodic x(k) with amplitudes within ±2, shown over two periods (k = 0 to 2N).]
Autocorrelation example (periodic signal, N = M)

    R_x(τ) = (1/N) Σ_{k=0}^{N−1} x(k) x(k−τ),  τ = −N/2+1, …, N/2.

[Figure: R_x(τ) against lag τ; the autocorrelation repeats with period N.]

Power spectral density (periodic signal, N = M)

    φ_x(e^{jω_n}) = Σ_{τ=0}^{N−1} R_x(τ) e^{−jω_n τ},  ω_n = 2πn/N,  n = −N/2+1, …, N/2.

[Figure: φ_x(e^{jω_n}) against ω (rad/sample); the spectrum repeats with period 2π.]
Cross-correlation (periodic signals, N = M)

The cross-correlation of y(k) and u(k) (both of period M = N) is:

    R_yu(τ) = (1/N) Σ_{k=0}^{N−1} y(k) u(k−τ).

Cross-spectral density (the Fourier transform of the cross-correlation):

    φ_yu(e^{jω_n}) = Σ_{τ=0}^{N−1} R_yu(τ) e^{−jω_n τ} = (1/N) Y(e^{jω_n}) U*(e^{jω_n}),
                      ω_n = 2πn/N,  n = 0, …, N−1.
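The identity φ_yu = (1/N) Y U* is easy to confirm numerically. A NumPy sketch with a made-up period-N input and a periodically shifted output:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16
u = rng.standard_normal(N)                 # one period of a made-up input
y = np.roll(u, 2) + 0.5 * np.roll(u, 5)    # a (periodic) filtered version of u

# Periodic cross-correlation R_yu(tau) = (1/N) sum_k y(k) u((k - tau) mod N).
Ryu = np.array([np.mean(y * np.roll(u, tau)) for tau in range(N)])

# Its DFT equals (1/N) Y(e^{jw_n}) U*(e^{jw_n}).
phi_yu = np.fft.fft(Ryu)
assert np.allclose(phi_yu, np.fft.fft(y) * np.conj(np.fft.fft(u)) / N)
```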

Cross-correlation example (periodic signal, N = M)

[Figure: one period of the input u(k) and output y(k), amplitudes within ±3, against index k.]
Cross-correlation example (periodic signal, N = M)

    R_yu(τ) = (1/N) Σ_{k=0}^{N−1} y(k) u(k−τ),  τ = −N/2+1, …, N/2.

[Figure: R_yu(τ) against lag τ for τ = −N/2+1, …, N/2.]

Cross power spectral density example (periodic signal, N = M)

    φ_yu(e^{jω_n}) = Σ_{τ=0}^{N−1} R_yu(τ) e^{−jω_n τ},  ω_n = 2πn/N,  n = −N/2+1, …, N/2.

[Figure: |φ_yu(e^{jω_n})| against ω (rad/sample).]
Noise models: random signals

Normally distributed noise:

    e(k) ∈ N(0, λ)  ⟹  E{e(k)} = 0 (zero mean),  E{|e(k)|²} = λ (variance).

The e(k) are independent and identically distributed (i.i.d.).

[Block diagram: e(k) → H(e^{jω}) → v(k); u(k) → G(e^{jω}); y(k) is the sum of the two branches.]

    v(k) = Σ_{l=0}^{∞} h(l) e(k−l) = H e(k),  with e(k) ∈ N(0, λ).

Autocovariance (random signals)

For random x(k), with E{x(k)} = 0, define the autocovariance sequence, or covariance function, as:

    R_x(τ) = E{x(k) x(k−τ)}
           = E{x(k) x*(k−τ)}  (in the complex case; * is the conjugate transpose in the multivariable case)

General (non-stationary, non-zero mean) case:

    R_x(s, t) = E{(x(s) − E{x})(x(t) − E{x})}
              = E{x(s) x(t)}  (if zero mean)
              = R_x(s − t)    (if stationary)
Power spectral density (random signals)

The power spectral density is defined as the Fourier transform of R_x(τ),

    φ_x(e^{jω}) := Σ_{τ=−∞}^{∞} R_x(τ) e^{−jωτ},  where ω ∈ [−π, π).

The inverse transform is given by

    R_x(τ) = (1/2π) ∫_{−π}^{π} φ_x(e^{jω}) e^{jωτ} dω.

For a zero-mean random signal,

    lim_{N→∞} (1/N) Σ_{k=0}^{N−1} |x(k)|² = var{x(k)} = (1/2π) ∫_{−π}^{π} φ_x(e^{jω}) dω.

Basic properties

Autocovariance:

    R_x(−τ) = R_x*(τ)
    R_x(0) ≥ |R_x(τ)|  for all τ > 0

Spectral density:

    φ_x(e^{jω}) ∈ ℝ
    φ_x(e^{jω}) ≥ 0  for all ω
    φ_x(e^{jω}) = φ_x(e^{−jω})  for all real-valued x(k)
Cross-covariance (random signals)

For random y(k) and u(k), the cross-covariance is:

    R_yu(τ) = E{(y(k) − E{y(k)})(u(k−τ) − E{u(k)})}.

For zero-mean signals, E{y(k)} = 0 and E{u(k)} = 0,

    R_yu(τ) = E{y(k) u(k−τ)}.

Joint stationarity is required to make the definition depend on τ alone.

If R_yu(τ) = 0 for all τ, then y(k) and u(k) are uncorrelated.

Cross power spectral density (random signals)

The Fourier transform of R_yu(τ) is defined as the cross-spectral density, or cross-spectrum,

    φ_yu(e^{jω}) = Σ_{τ=−∞}^{∞} R_yu(τ) e^{−jωτ},  ω ∈ [−π, π).

The inverse is

    R_yu(τ) = (1/2π) ∫_{−π}^{π} φ_yu(e^{jω}) e^{jωτ} dω.
Discrete Fourier Transform (finite-length signals)

Finite-length signal:

    x(k),  k = 0, …, K−1.

We take our calculation length to be the entire signal (N = K).

The Discrete Fourier Transform (DFT) of x(k) is:

    X_N(e^{jω_n}) = Σ_{k=0}^{N−1} x(k) e^{−jω_n k},  where ω_n = 2πn/N,  n = 0, …, N−1.

The inverse DFT is:

    x(k) = (1/N) Σ_{n=0}^{N−1} X_N(e^{jω_n}) e^{jω_n k},  k = 0, …, N−1.
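These are exactly the conventions used by `fft`/`ifft` in NumPy (and Matlab): no scaling on the forward transform, 1/N on the inverse. A quick sketch with made-up data:

```python
import numpy as np

x = np.array([2.0, 1.0, 0.0, -1.0, -2.0, 0.5, 1.5, -0.5])  # made-up data
N = len(x)
n = np.arange(N)

# DFT by direct summation: X(e^{jw_m}) = sum_k x(k) e^{-jw_m k}.
X_manual = np.array([np.sum(x * np.exp(-2j * np.pi * m * n / N))
                     for m in range(N)])

# numpy's fft/ifft use exactly these conventions (1/N only on the inverse).
assert np.allclose(np.fft.fft(x), X_manual)
assert np.allclose(np.fft.ifft(np.fft.fft(x)).real, x)
```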

Periodogram

The periodogram (for a random signal v(k)) is defined as:

    (1/N) |V_N(e^{jω})|².

See [Schuster, 1900] for an interesting application.

Asymptotically unbiased estimator of the spectrum:

    lim_{N→∞} E{ (1/N) |V_N(e^{jω})|² } = φ_v(e^{jω}).

This assumes that the autocorrelation decays quickly enough:

    lim_{N→∞} (1/N) Σ_{τ=−N}^{N} |τ R_v(τ)| = 0.
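The asymptotic unbiasedness is easy to see for white noise, where φ_v(e^{jω}) = λ at every frequency. A NumPy sketch averaging periodograms over many made-up experiments:

```python
import numpy as np

rng = np.random.default_rng(2)
lam = 0.25                                   # white-noise variance
N, n_exp = 256, 2000
v = rng.normal(0.0, np.sqrt(lam), size=(n_exp, N))

# Periodogram of each record: (1/N)|V_N(e^{jw})|^2.
per = np.abs(np.fft.fft(v, axis=1)) ** 2 / N

# For white noise phi_v(e^{jw}) = lam at every frequency, so the
# periodogram should average to lam (each single periodogram is very noisy).
assert np.allclose(per.mean(axis=0), lam, rtol=0.2)
```

Note that a single periodogram does not converge pointwise: its variance stays of order φ_v² regardless of N; only the average over experiments settles down.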
Bibliography

Fourier transforms:
A.V. Oppenheim, A.S. Willsky & S.H. Nawab, Signals & Systems, Prentice-Hall, 2nd Ed., 1996.

Spectral estimation:
Lennart Ljung, System Identification: Theory for the User (see Section 6.4), Prentice-Hall, 2nd Ed., 1999.

P. Stoica & R. Moses, Introduction to Spectral Analysis (see Chapters 1 and 2), Prentice-Hall, 1997.

Periodograms:
Arthur Schuster, "The Periodogram of Magnetic Declination as obtained from the records of the Greenwich Observatory during the years 1871–1895," Trans. Cambridge Phil. Soc., vol. 18, pp. 107–135, 1900.

Lennart Ljung, System Identification: Theory for the User (see Section 2.2), Prentice-Hall, 2nd Ed., 1999.
Open-loop identification configuration

[Block diagram: e(k) → H(e^{jω}) → v(k); u(k) → G(e^{jω}); y(k) is the sum of the two branches.]

Model assumptions:

Linear, time-invariant system, g(l):

    y(k) = Σ_{l=0}^{∞} g(l) u(k−l) + v(k),  k = 0, 1, …

Assumptions:
§ causal: g(l) = 0 for all l < 0;
§ noise: E{v(k)} = 0 (zero mean), stationary.

Open-loop identification problem

[Block diagram: y(k) = G(e^{jω}) u(k) + v(k), with v(k) = H(e^{jω}) e(k).]

Identification problem:

Given data

    {u(k), y(k)},  k = 0, …, K−1,

which we assume to have come from a noisy experiment with a plant G(e^{jω}):

find a causal, linear, time-invariant plant model Ĝ(e^{jω}) such that

    Ĝ(e^{jω}) ≈ G(e^{jω}).
Identification error: bias and variance

The identified plant estimate, Ĝ, is a stochastic variable.

For any identification procedure giving a plant estimate Ĝ, define:

    Bias:               Bias(Ĝ) = G − E{Ĝ}

    Variance:           var(Ĝ) = E{ |Ĝ − E{Ĝ}|² }

    Mean-square error:  MSE(Ĝ) = E{ |G − Ĝ|² }

Note that

    MSE(Ĝ) = var(Ĝ) + Bias²(Ĝ).

These are common measures of model quality (but we can't calculate them exactly in practice).
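The MSE decomposition is an algebraic identity; it holds exactly even for sample statistics. A NumPy sketch with a made-up noisy, deliberately biased estimator of a complex frequency-response value:

```python
import numpy as np

rng = np.random.default_rng(1)
G_true = 2.0 + 1.0j                # "true" response at one frequency (made up)

# Hypothetical estimator: noisy, with a deliberate offset so the bias is nonzero.
n = 100_000
Ghat = G_true + 0.3 + 0.5 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

bias = G_true - Ghat.mean()
var = np.mean(np.abs(Ghat - Ghat.mean()) ** 2)
mse = np.mean(np.abs(G_true - Ghat) ** 2)

# MSE = var + |bias|^2 holds exactly for the sample statistics.
assert np.isclose(mse, var + np.abs(bias) ** 2)
```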

Input-output relationship: finite-energy signals

[Block diagram: y(k) = G(e^{jω}) u(k) + v(k), with v(k) = H(e^{jω}) e(k).]

    y(k) = Σ_{l=0}^{∞} g(l) u(k−l) + v(k),

    Y(e^{jω}) = G(e^{jω}) U(e^{jω}) + V(e^{jω}).

Idealized case:

    Y(e^{jω})/U(e^{jω}) = G(e^{jω}) + V(e^{jω})/U(e^{jω}) ≈ G(e^{jω}).
Empirical transfer function estimation (ETFE)

Motivation:

    Time domain:       y(k) = Σ_{l=0}^{∞} g(l) u(k−l) + v(k)

    Frequency domain:  Y(e^{jω}) = G(e^{jω}) U(e^{jω}) + V(e^{jω})

    Y(e^{jω})/U(e^{jω}) = G(e^{jω}) + V(e^{jω})/U(e^{jω})

Empirical transfer function estimation

Approximation:

    Y_N(e^{jω_n}) = Σ_{k=0}^{N−1} y(k) e^{−jω_n k} ≈ Σ_{k=−∞}^{∞} y(k) e^{−jω_n k} = Y(e^{jω_n})  (length-N DFT)

    U_N(e^{jω_n}) = Σ_{k=0}^{N−1} u(k) e^{−jω_n k} ≈ Σ_{k=−∞}^{∞} u(k) e^{−jω_n k} = U(e^{jω_n})  (length-N DFT)

ETFE:

    Ĝ_N(e^{jω_n}) := Y_N(e^{jω_n}) / U_N(e^{jω_n})

One choice for N is N = K (the data length).
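In the noise-free, transient-free case (a periodic input in steady state), the ETFE recovers the plant exactly on the DFT grid. A NumPy sketch with a made-up FIR plant, using circular convolution so that the record is one exact period of the response:

```python
import numpy as np

rng = np.random.default_rng(3)
g = np.array([1.0, 0.5, 0.25])     # impulse response of a made-up plant
N = 64
u = rng.standard_normal(N)

# Circular convolution: the steady-state response to a period-N input,
# so there is no transient term (and no noise here).
y = np.real(np.fft.ifft(np.fft.fft(u) * np.fft.fft(g, N)))

U, Y = np.fft.fft(u), np.fft.fft(y)
G_etfe = Y / U                     # the ETFE on the DFT grid

G_true = np.fft.fft(g, N)          # true frequency response at the same points
assert np.allclose(G_etfe, G_true)
```

With noise, V_N(e^{jω_n})/U_N(e^{jω_n}) is added at each frequency; the following slides examine its bias and variance.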
ETFE example: "experimental" data

[Figure: input u(k) and output y(k) records for k = 0, …, 128, amplitudes within ±3.]

ETFE example: FFT calculation results (N = 128)

[Figure: magnitudes of U_N(e^{jω_n}) and Y_N(e^{jω_n}) against ω ∈ [0, π] (rad/sample).]
ETFE example

    Ĝ_N(e^{jω_n}) = Y_N(e^{jω_n}) / U_N(e^{jω_n})

[Figure: magnitude and phase (deg.) of Ĝ_N(e^{jω_n}) against log ω (rad/sample).]

ETFE example

    Ĝ_N(e^{jω_n}) = Y_N(e^{jω_n}) / U_N(e^{jω_n})

[Figure: as above, with the true response G(e^{jω}) overlaid on the magnitude and phase plots.]
ETFE example: absolute error

    E_N(e^{jω_n}) = G(e^{jω_n}) − Ĝ_N(e^{jω_n})

[Figure: magnitudes of G(e^{jω}) and the error E_N(e^{jω_n}) against log ω (rad/sample).]

ETFE example: Matlab calculations

U = fft(u);                            % calculate N-point FFTs
Y = fft(y);
omega = (2*pi/N)*[0:N-1]';             % frequency grid
idx = find(omega > 0 & omega < pi);    % positive frequencies
loglog(omega(idx),abs(U(idx)))
loglog(omega(idx),abs(Y(idx)))

Gest = Y./U;                           % ETFE estimate
Gfresp = squeeze(freqresp(G,omega));   % "true" system response
loglog(omega(idx),abs(Gest(idx)))
semilogx(omega(idx),angle(Gest(idx)))

Err = Gest - Gfresp;                   % calculate error
loglog(omega(idx),abs(Err(idx)))
Empirical transfer function estimation

Periodic input case:

Period-M inputs: u(k) = u(k + M).

If sM = N for some integer s, then

    U_N(e^{jω_n}) = U(e^{jω_n})  for all ω_n = 2πn/N,  n = 0, …, N−1.

Then

    Y_N(e^{jω_n}) = G(e^{jω_n}) U_N(e^{jω_n}) + V_N(e^{jω_n}),

    Ĝ_N(e^{jω_n}) = G(e^{jω_n}) + V_N(e^{jω_n}) / U_N(e^{jω_n}).

ETFE error properties: bias

    Ĝ_N(e^{jω_n}) = Y_N(e^{jω_n}) / U_N(e^{jω_n}) = G(e^{jω_n}) + V_N(e^{jω_n}) / U_N(e^{jω_n})

We find the bias by examining

    E{Ĝ_N(e^{jω_n})} = G(e^{jω_n}) + E{ V_N(e^{jω_n}) / U_N(e^{jω_n}) }
                     = G(e^{jω_n})  (assumes zero-mean noise).

For periodic inputs (with N being an integer number of periods), the ETFE is unbiased.
ETFE error properties: variance

Variance (for the unbiased case, E{Ĝ(e^{jω_n})} = G(e^{jω_n})):

    E{ |Ĝ_N(e^{jω_n}) − G(e^{jω_n})|² } = ( φ_v(e^{jω_n}) + (2/N)c ) / ( (1/N) |U_N(e^{jω_n})|² ),

where |c| ≤ C = Σ_{τ=1}^{∞} |τ R_v(τ)| is assumed to be finite.

For estimates at different frequencies (ω_n ≠ ω_i):

    E{ (Ĝ_N(e^{jω_n}) − G(e^{jω_n})) (Ĝ_N(e^{−jω_i}) − G(e^{−jω_i})) } = 0.

ETFE example: process control

Model: total flow to tank height.

[Figure: Bode magnitude and phase plots of the model P against log ω (rad/sec); the phase falls from 0 to about −450 degrees.]
ETFE example — taking more data

    E_N(e^{jω_n}) = G(e^{jω_n}) − Ĝ_N(e^{jω_n})

[Figure: magnitudes of Ĝ_N(e^{jω_n}) and E_N(e^{jω_n}) for N = 128, 256, 512, 1024, with G(e^{jω}) overlaid. Taking more data gives estimates at more frequencies, but the error level at each frequency does not decrease.]
ETFE example: mean-square error

Mean-square error: rerunning the experiments 10,000 times.

    E{ |E_N(e^{jω_n})|² } ≈ (1/10,000) Σ_{l=1}^{10,000} |G(e^{jω_n}) − Ĝ_N(e^{jω_n})|²

[Figure: magnitude of G(e^{jω}) and the averaged error E{|E_N(e^{jω_n})|} for N = 128, 256, 512, 1024, with the noise spectrum |H(e^{jω})|² overlaid. The averaged error does not decrease with N; it remains at the level set by the noise spectrum.]

Empirical transfer function estimation

Transient responses:

The initial transient corrupts the measurement:

    y(k) = G( u_periodic(k) W_[0,N−1](k) ) + v(k),

with the "window" function:

    W_[0,N−1](k) = 1 if 0 ≤ k < N,  0 otherwise.

For all outputs up to time k = N−1,

    y(k) = G u_periodic(k) − G( u_periodic W_(−∞,−1] )(k) + v(k),
                             └────────── r(k) ──────────┘

    Y_N(e^{jω_n}) = G(e^{jω_n}) U_N(e^{jω_n}) + R_N(e^{jω_n}) + V_N(e^{jω_n}).
Transient responses

[Figure: u_periodic(k), y_periodic(k), the truncated u(k) and y(k), and the transient r(k) = y_periodic(k) − y(k), each plotted for k = −32 to 64.]

Transient responses

Periodic signal with period M = 32; r(k) for k = 0, …, N−1 (N = mM).

[Figure: r(k) for m = 1, 4, 16; the transient appears only at the start of the record.]
Transient responses

R_N(e^{jω_n}) for n = 0, …, N−1 (N = mM).

[Figure: magnitude of R_N(e^{jω_n}) for m = 1, 4, 16 against log ω (rad/sample); the transient contribution stays roughly the same size as m increases.]

Transient responses: input excitation

u(k) for k = 0, …, N−1, where N = mM, M = 32 and m = 1, 4, 16.

[Figure: the periodic input u(k) over 1, 4, and 16 periods.]
Transient responses: input excitation

U_N(e^{jω_n}) for n = 0, …, N/2, where N = mM, M = 32 and m = 1, 4, 16.

[Figure: magnitude of U_N(e^{jω_n}); the peaks at the excited harmonics grow in proportion to m.]

ETFE transient error properties

ETFE transient bias error:

    Ĝ(e^{jω_n}) = Y_N(e^{jω_n}) / U_N(e^{jω_n})
                = G(e^{jω_n}) + R_N(e^{jω_n})/U_N(e^{jω_n}) + V_N(e^{jω_n})/U_N(e^{jω_n})

Periodic u(k): as N = mM, m → ∞,

    |U_N(e^{jω_n})| = m |U_M(e^{jω_n})|.

Random u(k): as N → ∞,

    E{ |U_N(e^{jω_n})| } → √N √(φ_u(e^{jω_n})).

So

    | R_N(e^{jω_n}) / U_N(e^{jω_n}) | → 0  with rate 1/N for periodic inputs, or 1/√N for random inputs.
Transient response example

Estimates: Ĝ_N(e^{jω_n}) = Y_N(e^{jω_n})/U_N(e^{jω_n}) for increasing m.

[Figure: magnitudes of Ĝ_N(e^{jω_n}) for m = 1, 4, 16, with G(e^{jω}) overlaid.]

Transient response example

Transient error decay: E_N(e^{jω_n}) = Ĝ_N(e^{jω_n}) − G(e^{jω_n})

[Figure: magnitude of E_N(e^{jω_n}) for m = 1, 4, 16; the error decreases as m grows.]
ETFE example: average error with noisy data

Previous example with noise (no transient): period M = N in each experiment.

    E{ |E_N(e^{jω_n})|² } ≈ (1/10,000) Σ_{l=1}^{10,000} |G(e^{jω_n}) − Ĝ_N(e^{jω_n})|²

[Figure: averaged error magnitudes for N = 128, 256, 512, 1024, with G(e^{jω}) and the noise spectrum |H(e^{jω})|² overlaid; the error does not decrease with N.]

ETFE example: average error with periodic signals

Noisy example (no transient): period M = 128. Larger N ⟹ more periods.

    E{ |E_N(e^{jω_n})|² } ≈ (1/10,000) Σ_{l=1}^{10,000} |G(e^{jω_n}) − Ĝ_N(e^{jω_n})|²

[Figure: averaged error magnitudes for N = 128, 256, 512, 1024, with G(e^{jω}) and |H(e^{jω})|² overlaid; with a fixed period, averaging over more periods reduces the error as N grows.]

Spectral transformations (random signals)

[Block diagram: y(k) = G(e^{jω}) u(k) + v(k), with v(k) = H(e^{jω}) e(k).]

If v(k) = 0 then

    φ_y(e^{jω}) = G(e^{jω}) φ_u(e^{jω}) Gᵀ(e^{−jω}).

If v(k) ≠ 0 then

    φ_y(e^{jω}) = G(e^{jω}) φ_u(e^{jω}) G*(e^{jω}) + φ_v(e^{jω})
                    + G(e^{jω}) φ_uv(e^{jω}) + φ_vu(e^{jω}) G*(e^{jω})

                = G(e^{jω}) φ_u(e^{jω}) G*(e^{jω}) + φ_v(e^{jω})  (if uncorrelated)

                = |G(e^{jω})|² φ_u(e^{jω}) + λ |H(e^{jω})|².
Spectral transformations (random signals)

[Block diagram: y(k) = G(e^{jω}) u(k) + v(k), with v(k) = H(e^{jω}) e(k).]

Cross-correlation result:

    φ_yu(e^{jω}) = G(e^{jω}) φ_u(e^{jω}) + φ_vu(e^{jω})

                 = G(e^{jω}) φ_u(e^{jω})  (if u(k) and v(k) are uncorrelated).

Spectral estimation methods:

    Ĝ(e^{jω}) = φ̂_yu(e^{jω}) / φ̂_u(e^{jω})

Spectral estimation methods

    φ_y(e^{jω}) = |G(e^{jω})|² φ_u(e^{jω}) + φ_v(e^{jω})

    φ_yu(e^{jω}) = G(e^{jω}) φ_u(e^{jω})

    Ĝ(e^{jω_n}) = φ̂_yu(e^{jω_n}) / φ̂_u(e^{jω_n})
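A NumPy sketch of the φ̂_yu/φ̂_u estimator, with the spectra estimated by averaging per-record periodograms and cross-periodograms over many records (the plant and noise levels here are made up; the slides' own calculations use Matlab):

```python
import numpy as np

rng = np.random.default_rng(7)
g = np.array([1.0, 0.8, 0.3])         # made-up plant impulse response
M, R = 64, 50                         # record length and records to average
u = rng.standard_normal(M * R)
y = np.convolve(u, g)[: M * R] + 0.2 * rng.standard_normal(M * R)

# Averaged spectral estimates over R records (Bartlett-style).
U = np.fft.fft(u.reshape(R, M), axis=1)
Y = np.fft.fft(y.reshape(R, M), axis=1)
phi_u = np.mean(np.abs(U) ** 2, axis=0) / M      # estimate of phi_u
phi_yu = np.mean(Y * np.conj(U), axis=0) / M     # estimate of phi_yu

G_hat = phi_yu / phi_u                           # spectral estimate of G
G_true = np.fft.fft(g, M)
assert np.mean(np.abs(G_hat - G_true) ** 2) < 0.1
```

Unlike a raw ETFE, dividing averaged cross- and auto-spectra suppresses the noise term, since V and U are uncorrelated and E{V U*} → 0 under averaging.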
Spectral estimation (via the periodogram)

The periodogram (for a random signal v(k)) is defined as:

    (1/N) |V_N(e^{jω})|².

Asymptotically unbiased estimator of the spectrum:

    lim_{N→∞} E{ (1/N) |V_N(e^{jω})|² } = φ_v(e^{jω}).

Assumes:

    lim_{N→∞} (1/N) Σ_{τ=−N}^{N} |τ R_v(τ)| = 0.

Spectral estimation (via covariances)

Autocovariance estimate (stochastic v(k)):

    R̂_v(τ) = (1/(N−|τ|)) Σ_{k=τ}^{N−1} v(k) v(k−τ),    for τ ≥ 0,
    R̂_v(τ) = (1/(N−|τ|)) Σ_{k=0}^{N+τ−1} v(k) v(k−τ),  for τ < 0,

giving estimates for −N+1 ≤ τ ≤ N−1.

Unbiased estimator of R_v(τ): E{R̂_v(τ)} = R_v(τ).

Spectral estimate:

    φ̂_v(e^{jω}) = Σ_{τ=−N+1}^{N−1} R̂_v(τ) e^{−jωτ}.
Spectral estimation (periodic signals)

Periodic signal, x(k), with period M (N = mM for some integer m):

    R_x(τ) = (1/M) Σ_{k=0}^{M−1} x(k) x(k−τ)  (using a periodic calculation).

The power spectral density can be calculated exactly and is also equal to the periodogram:

    φ_x(e^{jω_n}) = Σ_{τ=0}^{M−1} R_x(τ) e^{−jω_n τ} = (1/M) |X_M(e^{jω_n})|².

Spectral estimation (more general case)

Alternative autocorrelation estimate:

    R̂_x(τ) = (1/N) Σ_{k=τ}^{N−1} x(k) x(k−τ),    for τ ≥ 0,
    R̂_x(τ) = (1/N) Σ_{k=0}^{N+τ−1} x(k) x(k−τ),  for τ < 0.

Periodic x(k): unbiased (exact) if N = mM (for integer m).

Random x(k): biased,

    E{R̂_x(τ)} = ((N − |τ|)/N) R_x(τ),

but asymptotically unbiased (as N → ∞ with τ/N → 0).
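The (N − |τ|)/N bias factor of the 1/N-normalised estimate can be seen in simulation. A NumPy sketch with a made-up MA(1) signal whose true R_x(1) = 1:

```python
import numpy as np

rng = np.random.default_rng(4)
N, n_exp = 16, 20000
e = rng.standard_normal((n_exp, N + 1))
x = e[:, 1:] + e[:, :-1]          # MA(1) signal: R_x(0) = 2, R_x(1) = 1

# Biased estimate at tau = 1: (1/N) sum_{k=1}^{N-1} x(k) x(k-1).
Rhat1 = np.sum(x[:, 1:] * x[:, :-1], axis=1) / N

# E{Rhat_x(1)} = ((N - 1)/N) R_x(1), i.e. 15/16 here, not 1.
assert np.isclose(Rhat1.mean(), (N - 1) / N, atol=0.03)
```

With N = 16 the bias is visible (15/16 ≈ 0.94); as N grows with τ fixed it disappears, which is the asymptotic-unbiasedness statement above.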
Bibliography

Empirical Transfer Function Estimation (ETFE):
Lennart Ljung, System Identification: Theory for the User (see Section 6.3), Prentice-Hall, 2nd Ed., 1999.

Spectral estimation:
Lennart Ljung, System Identification: Theory for the User (see Section 6.4), Prentice-Hall, 2nd Ed., 1999.

P. Stoica & R. Moses, Introduction to Spectral Analysis (see Chapters 1 and 2), Prentice-Hall, 1997.
Averaging

Multiple estimates:

Multiple experiments: u_r(k), y_r(k), for r = 1, …, R and k = 0, …, K−1.

Multiple estimates (ETFE):

    Ĝ_r(e^{jω_n}) = Y_r(e^{jω_n}) / U_r(e^{jω_n})  (dropping the N from the Y_N(e^{jω_n}) notation).

Averaging to improve the estimate:

    Ĝ(e^{jω_n}) = Σ_{r=1}^{R} α_r Ĝ_r(e^{jω_n}),  with Σ_{r=1}^{R} α_r = 1.

How should we choose α_r? The "average" is calculated with α_r = 1/R.

2018-10-10 4.2
Averaging

Optimal weighted average:

If the variance of Ĝ_r(e^{jω_n}) is σ_r²(e^{jω_n}) (with uncorrelated errors and identical means), then

    var( Ĝ(e^{jω_n}) ) = var( Σ_{r=1}^{R} α_r(e^{jω_n}) Ĝ_r(e^{jω_n}) ) = Σ_{r=1}^{R} α_r² σ_r²(e^{jω_n}).

This is minimized by

    α_r(e^{jω_n}) = (1/σ_r²(e^{jω_n})) / Σ_{r=1}^{R} (1/σ_r²(e^{jω_n})).

If var( Ĝ_r(e^{jω_n}) ) = φ_v(e^{jω_n}) / ( (1/N)|U_r(e^{jω_n})|² ), then

    α_r(e^{jω_n}) = |U_r(e^{jω_n})|² / Σ_{r=1}^{R} |U_r(e^{jω_n})|².
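The inverse-variance weighting and its closed-form optimum can be checked with a few lines of NumPy (the per-experiment variances here are made-up numbers):

```python
import numpy as np

# Per-experiment variances sigma_r^2 at one frequency (made-up values).
sigma2 = np.array([1.0, 4.0, 0.25])

# Optimal weights: alpha_r proportional to 1/sigma_r^2, summing to 1.
alpha = (1 / sigma2) / np.sum(1 / sigma2)
var_opt = np.sum(alpha**2 * sigma2)
var_avg = np.sum((1 / len(sigma2)) ** 2 * sigma2)   # plain average, alpha_r = 1/R

assert np.isclose(alpha.sum(), 1.0)
assert var_opt <= var_avg
# Closed form for the optimal variance: 1 / sum_r (1/sigma_r^2).
assert np.isclose(var_opt, 1 / np.sum(1 / sigma2))
```

When all σ_r² are equal, the optimal weights reduce to α_r = 1/R and the variance to σ²/R, which is the next slide's result.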

Averaging

Variance reduction:

The best result is obtained when |U_r(e^{jω_n})| is the same for all r = 1, …, R. This gives

    var( Ĝ(e^{jω_n}) ) = var( Ĝ_r(e^{jω_n}) ) / R.

If the estimates are biased we will not get as much variance reduction.

(The method of splitting the data record and averaging the periodograms is attributed to Bartlett (1948, 1950).)
Averaging example (noisy data)

Splitting the data record: K = 72, R = 3, N = 24.

[Figure: the full records u(k) and y(k), and the three length-24 output segments y₁(k), y₂(k), y₃(k), plotted against index k.]

Averaging example (noisy data)

Estimates: Ĝ_r(e^{jω_n}), r = 1, 2, 3, and the weighted average Ĝ_avg(e^{jω_n}).

[Figure: magnitudes of the three individual estimates and their weighted average, with G(e^{jω}) overlaid.]
Averaging example (noisy data)

Estimates: Ĝ_r(e^{jω_n}), r = 1, 2, 3, and the weighted average Ĝ_avg(e^{jω_n}).

Errors: E_r(e^{jω_n}) = G(e^{jω_n}) − Ĝ_r(e^{jω_n}).

[Figure: magnitudes of E₁, E₂, E₃ and E_avg; the averaged error is the smallest.]

Averaging with periodic* signals

Splitting the data record: K = 72, N = 24, R = 3; u(k) has period M = 24.

[Figure: u(k), y(k), and the three length-24 output segments y₁(k), y₂(k), y₃(k), plotted against index k.]

* (not really periodic: u(k) = 0 for k < 0)
Averaging with periodic signals

Estimates: Ĝ_r(e^{jω_n}) and Ĝ_avg(e^{jω_n}) = ( Ĝ₂(e^{jω_n}) + Ĝ₃(e^{jω_n}) )/2.

[Figure: magnitudes of Ĝ₂, Ĝ₃ and Ĝ_avg, with G(e^{jω}) overlaid; the first record, which contains the start-up transient, is excluded from the average.]

Averaging with periodic signals

Estimates: Ĝ_r(e^{jω_n}) and Ĝ_avg(e^{jω_n}) = ( Ĝ₂(e^{jω_n}) + Ĝ₃(e^{jω_n}) )/2.

Errors: E_r(e^{jω_n}) = G(e^{jω_n}) − Ĝ_r(e^{jω_n}).

[Figure: magnitudes of E₂, E₃ and E_avg, with the non-periodic case's E_avg shown for comparison; averaging the transient-free periodic records gives the smallest error.]

Bias-variance trade-offs in data record splitting

Bartlett's procedure:
Divide a (long) data record into smaller parts for averaging.

Data: {u(k), y(k)}, k = 0, …, K−1, with K very large.

Choose R records, and a calculation length N, with NR ≤ K:

    u_r(n) = u(rN + n),  r = 0, …, R−1,  n = 0, …, N−1,

and average the resulting estimates:

    G̃(e^{jω_n}) = Σ_{r=0}^{R−1} α_r Ĝ_r(e^{jω_n}) = Σ_{r=0}^{R−1} α_r Ŷ_r(e^{jω_n}) / Û_r(e^{jω_n}).

Bias and variance effects. As R increases:
§ the number of points calculated, N, decreases (so that NR ≤ K);
§ the variance decreases (by up to 1/R);
§ the bias increases (due to non-periodicity transients).
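The variance side of the trade-off can be sketched in NumPy. Everything here is made up (plant, noise level, and an impulse-train input chosen so each record is one exact period, which keeps the transient bias out of the picture and isolates the averaging effect):

```python
import numpy as np

rng = np.random.default_rng(5)
g = np.array([1.0, 0.6, 0.2])         # made-up plant impulse response
M, R = 32, 8                          # record length and number of records
K = M * R

u_period = np.zeros(M)
u_period[0] = 1.0                     # period-M impulse-train input
u = np.tile(u_period, R)
y = np.real(np.fft.ifft(np.fft.fft(u) * np.fft.fft(g, K)))   # periodic response
y += 0.1 * rng.standard_normal(K)     # measurement noise

# Split into R records of length M and average the per-record ETFEs.
Ur = np.fft.fft(u.reshape(R, M), axis=1)
Yr = np.fft.fft(y.reshape(R, M), axis=1)
G_avg = np.mean(Yr / Ur, axis=0)

G_true = np.fft.fft(g, M)
mse_avg = np.mean(np.abs(G_avg - G_true) ** 2)
mse_one = np.mean(np.abs(Yr[0] / Ur[0] - G_true) ** 2)
assert mse_avg < mse_one              # averaging reduces the noise variance
```

With a non-periodic input, each split record would also carry a transient term R_N, and increasing R would eventually raise the MSE again, which is the trade-off plotted on the next slide.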
Bias-variance trade-offs in data record splitting

Mean-square error:

    MSE = bias² + variance.

§ Transient bias grows linearly with the number of data splits.
§ Variance decays at a rate of up to 1/(number of averages).

Optimal bias-variance trade-off:

[Figure: MSE, bias error, and variance error against the number of records R; the MSE has a minimum at an intermediate R.]

Smoothing transfer function estimates

What if we have no option of running periodic input experiments?

§ The system will not tolerate periodic inputs; or
§ the data has already been taken and given to us.
§ More data just gives more frequencies (with the same error variance).

Exploit the assumed smoothness of the underlying system:

G(e^{jω}) is assumed to be a low-order (smooth) system, and

    E{ (G(e^{jω_n}) − Ĝ_N(e^{jω_n})) (G(e^{jω_s}) − Ĝ_N(e^{jω_s})) } → 0  (n ≠ s),

(asymptotically at least).
Smoothing the ETFE

Smooth transfer function assumption:

Assume the true system to be close to constant over a range of frequencies,

    G(e^{jω_{n+r}}) ≈ G(e^{jω_n})  for r = 0, ±1, ±2, …, ±R.

Smooth minimum variance estimate:

The minimum variance smoothed estimate is

    G̃_N(e^{jω_n}) = ( Σ_{r=−R}^{R} α_r Ĝ_N(e^{jω_{n+r}}) ) / ( Σ_{r=−R}^{R} α_r ),

    with α_r = (1/N) |U_N(e^{jω_{n+r}})|² / φ_v(e^{jω_{n+r}}).

(Here we smooth over 2R+1 points.)
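The weighted local average above can be sketched directly in NumPy. This toy example (everything made up: a flat true system G = 1, flat input spectrum, Hann-shaped weights) just shows the variance reduction mechanism:

```python
import numpy as np

rng = np.random.default_rng(6)
N, Rw = 256, 8                        # frequency grid size, half-width R

# Made-up ETFE of a flat system G = 1 corrupted by complex noise.
G_etfe = 1.0 + 0.3 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
a = np.ones(N)                        # weights (1/N)|U_N|^2 / phi_v, flat here

# Hann-shaped weights over the 2R+1 neighbouring frequencies (circular grid).
w = 0.5 * (1 + np.cos(np.pi * np.arange(-Rw, Rw + 1) / (Rw + 1)))
G_smooth = np.empty(N, dtype=complex)
for n in range(N):
    idx = (n + np.arange(-Rw, Rw + 1)) % N
    G_smooth[n] = np.sum(w * a[idx] * G_etfe[idx]) / np.sum(w * a[idx])

# Smoothing cuts the noise-induced error on this (smooth) system.
assert np.mean(np.abs(G_smooth - 1.0) ** 2) < np.mean(np.abs(G_etfe - 1.0) ** 2)
```

On a system that actually varies with frequency, the same averaging introduces bias; the window-width slides that follow quantify that trade-off.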

Smoothing the ETFE

If N is large (many closely spaced frequencies), the summation can be approximated by an integral:

    G̃_N(e^{jω_n}) = ( Σ_{r=−R}^{R} α_r Ĝ_N(e^{jω_{n+r}}) ) / ( Σ_{r=−R}^{R} α_r )

                  ≈ ( ∫_{ω_{n−R}}^{ω_{n+R}} α(e^{jξ}) Ĝ_N(e^{jξ}) dξ ) / ( ∫_{ω_{n−R}}^{ω_{n+R}} α(e^{jξ}) dξ ),

    with α(e^{jξ}) = (1/N) |U_N(e^{jξ})|² / φ_v(e^{jξ}).
Smoothing the ETFE

Smoothing window:

    G̃_N(e^{jω_n}) = ( (1/2π) ∫_{−π}^{π} W_γ(e^{j(ξ−ω_n)}) α(e^{jξ}) Ĝ_N(e^{jξ}) dξ )
                    / ( (1/2π) ∫_{−π}^{π} W_γ(e^{j(ξ−ω_n)}) α(e^{jξ}) dξ ),

    with α(e^{jξ}) = (1/N) |U_N(e^{jξ})|² / φ_v(e^{jξ}).

The 1/2π scaling will make it easier to derive time-domain windows later.

Smoothing the ETFE

Assumptions on φ_v(e^{jω}):

Assume φ_v(e^{jω}) is also a smooth (and flat) function of frequency:

    (1/2π) ∫_{−π}^{π} W_γ(e^{j(ξ−ω_n)}) | 1/φ_v(e^{jξ}) − 1/φ_v(e^{jω_n}) | dξ ≈ 0.

Then use

    α(e^{jξ}) = (1/N) |U_N(e^{jξ})|² / φ_v(e^{jω_n})

to get

    G̃_N(e^{jω_n}) = ( (1/2π) ∫_{−π}^{π} W_γ(e^{j(ξ−ω_n)}) (1/N)|U_N(e^{jξ})|² Ĝ_N(e^{jξ}) dξ )
                    / ( (1/2π) ∫_{−π}^{π} W_γ(e^{j(ξ−ω_n)}) (1/N)|U_N(e^{jξ})|² dξ ).
Weighting functions

[Figure: a typical (Hann) frequency window W_γ(e^{jω_n}) for N = 256 and width parameters γ = 1, 5, 10, 20; larger γ gives a taller, narrower window.]

Weighting functions

Frequency smoothing window characteristics: width (specified by the γ parameter).

The wider the frequency window (i.e. decreasing γ):

§ the more adjacent frequencies are included in the smoothed estimate,
§ the smoother the result,
§ the lower the noise-induced variance,
§ the higher the bias.
Weighting functions

Window characteristics: shape. Some common choices:

    Bartlett:  W_γ(e^{jω}) = (1/γ) ( sin(γω/2) / sin(ω/2) )²

    Hann:      W_γ(e^{jω}) = (1/2) D_γ(ω) + (1/4) D_γ(ω − π/γ) + (1/4) D_γ(ω + π/γ),

               where D_γ(ω) = sin(ω(γ + 0.5)) / sin(ω/2).

Others include: Hamming, Parzen, Kaiser, ...

The differences are mostly in the leakage of energy to adjacent frequencies, and in the ability to resolve closely spaced frequency peaks.

Weighting functions

[Figure: example frequency-domain windows (Welch, Hann, Hamming, Bartlett) for γ = 10, N = 256.]
ETFE smoothing example: Matlab calculations

U = fft(u);                              % calculate N-point FFTs
Y = fft(y);
Gest = Y./U;                             % ETFE estimate
Gs = 0*Gest;                             % smoothed estimate

[omega,Wg] = WfHann(gamma,N);            % window (centered)
zidx = find(omega==0);                   % shift to start at zero
omega = [omega(zidx:N);omega(1:zidx-1)]; % frequency grid
Wg = [Wg(zidx:N);Wg(1:zidx-1)];
a = U.*conj(U);                          % variance weighting

for wn = 1:N,
    Wnorm = 0;                           % reset normalisation
    for xi = 1:N,
        widx = mod(xi-wn,N)+1;           % wrap window index
        Gs(wn) = Gs(wn) + Wg(widx)*Gest(xi)*a(xi);
        Wnorm = Wnorm + Wg(widx)*a(xi);
    end
    Gs(wn) = Gs(wn)/Wnorm;               % weight normalisation
end

Window properties

Properties and characteristic values of window functions:

    (1/2π) ∫_{−π}^{π} W_γ(e^{jξ}) dξ = 1                 (normalised)

    ∫_{−π}^{π} ξ W_γ(e^{jξ}) dξ = 0                      ("even", sort of)

    M(γ) := (1/2π) ∫_{−π}^{π} ξ² W_γ(e^{jξ}) dξ          (bias effect)

    W̄(γ) := (1/2π) ∫_{−π}^{π} W_γ²(e^{jξ}) dξ            (variance effect)

    Bartlett:  M(γ) = 2.78/γ,      W̄(γ) ≈ 0.67γ  (for γ > 5)

    Hamming:   M(γ) = π²/(2γ²),    W̄(γ) ≈ 0.75γ  (for γ > 5)
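The Bartlett values can be checked by numerical integration (a NumPy sketch; the 1/2π convention here matches the normalisation above):

```python
import numpy as np

gamma = 20
w = np.linspace(-np.pi, np.pi, 400001)
dw = w[1] - w[0]

# Bartlett window W_gamma(e^{jw}) = (1/gamma) (sin(gamma w/2)/sin(w/2))^2,
# taking the limiting value gamma at w = 0.
num = np.sin(gamma * w / 2) ** 2
den = np.sin(w / 2) ** 2
W = np.divide(num, den, out=np.full_like(w, float(gamma) ** 2), where=den > 0) / gamma

norm = np.sum(W) * dw / (2 * np.pi)          # (1/2pi) integral of W
M_g = np.sum(w**2 * W) * dw / (2 * np.pi)    # bias measure M(gamma)
Wbar = np.sum(W**2) * dw / (2 * np.pi)       # variance measure Wbar(gamma)

assert np.isclose(norm, 1.0, atol=1e-3)
assert np.isclose(M_g, 2.78 / gamma, rtol=0.05)
assert np.isclose(Wbar, 0.67 * gamma, rtol=0.05)
```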
Smoothed estimate properties

Asymptotic bias properties:

    E{G̃(e^{jω_n})} − G(e^{jω_n})
      = M(γ) ( (1/2) G''(e^{jω_n}) + G'(e^{jω_n}) φ_u'(e^{jω_n})/φ_u(e^{jω_n}) )
        + O(C₃(γ))   [→ 0 as γ → ∞]
        + O(1/√N)    [→ 0 as N → ∞]

Increasing γ:
§ makes the frequency window narrower;
§ averages over fewer frequency values;
§ makes M(γ) smaller; and
§ reduces the bias of the smoothed estimate, G̃(e^{jω_n}).

Smoothed estimate properties

Asymptotic variance properties:

    E{ ( G̃(e^{jω_n}) − E{G̃(e^{jω_n})} )² }
      = (1/N) W̄(γ) φ_v(e^{jω_n})/φ_u(e^{jω_n})
        + o( W̄(γ)/N )   [→ 0 as γ → ∞, N → ∞, γ/N → 0]

Increasing γ:
§ makes the frequency window narrower;
§ averages over fewer frequency values;
§ makes W̄(γ) larger; and
§ increases the variance of the smoothed estimate, G̃(e^{jω_n}).
Smoothed estimate properties

Asymptotic MSE properties:

    E{ |G̃(e^{jω_n}) − G(e^{jω_n})|² } ≈ M²(γ) |F(e^{jω_n})|² + (1/N) W̄(γ) φ_v(e^{jω_n})/φ_u(e^{jω_n}),

where

    F(e^{jω_n}) = (1/2) G''(e^{jω_n}) + G'(e^{jω_n}) φ_u'(e^{jω_n})/φ_u(e^{jω_n}).

If M(γ) = M/γ² and W̄(γ) = W̄γ, then the MSE is minimised by

    γ_optimal = ( 4M² |F(e^{jω_n})|² φ_u(e^{jω_n}) / ( W̄ φ_v(e^{jω_n}) ) )^{1/5} N^{1/5},

and

    MSE at γ_optimal ≈ C N^{−4/5}.

Bibliography

Windowing and ETFE smoothing:
P. Stoica & R. Moses, Introduction to Spectral Analysis (see Chapters 1 and 2), Prentice-Hall, 1997.

Lennart Ljung, System Identification: Theory for the User (see Section 6.4), Prentice-Hall, 2nd Ed., 1999.

M.S. Bartlett, "Smoothing Periodograms from Time-Series with Continuous Spectra," Nature, vol. 161(4096), pp. 686–687, 1948.

M.S. Bartlett, "Periodogram analysis and continuous spectra," Biometrika, vol. 37, pp. 1–16, 1950.
