ELEC-C5231 Lecture2 LTI Systems

Introduction to Signal Processing

Lecture 2: LTI systems and impulse responses

Filip Elvander

Dept. Information and Communications Engineering


Aalto University
Today’s lecture

Course outline:
• Discrete-time signals (Ch. 2)
• Discrete-time systems (Ch. 2)
• Frequency analysis of signals (Ch. 4 and 7)
• z-transform (Ch. 3)
• Freq. analysis of systems and filters (Ch. 5 and 7)
• Implementation of systems (Ch. 9)
• Filter design (Ch. 10)

Today: discrete-time systems.
• Linear time-invariant (LTI) systems.
• Interpretation and analysis in time-domain.
• Important properties: stability and causality.
• Brief description of the concept and use of correlation.
• Reading: Chapter 2.
February 27, 2025


Systems

• A system is an operator (kind of a function) that takes a signal x(n) as


input and produces an output signal y(n).
• Control circuits, guitar amplifiers, ECG machines, . . .
• We write y(n) = T (x(n)), where T denotes the action of the system.

x(n) → T → y(n)

• Example:
• y(n) = T (x(n)) = (x(n + 1) + 5x(n) − 3x(n − 2))².
• y(n) = T (x(n)) = x(−n).
• We will focus on a certain class of systems, called LTI systems.
• Many important real systems belong to this class, or can be approximated by them.
• There is a lot of powerful theory for analysing and building this type of system.
Our basic building blocks

• Gain: x(n) → [a] → a x(n)
• Delay: x(n) → [z⁻¹] → x(n − 1)
• Adder: x1(n), x2(n) → [+] → x1(n) + x2(n)
• Branch: x(n) splits into identical copies of x(n).
LTI systems
A system that is linear and time-invariant is called an LTI system.
• Linearity: T (αx1(n) + βx2(n)) = αT (x1(n)) + βT (x2(n))

[Figure: block diagrams. Linearly combining inputs (left) is the same as linearly combining outputs (right).]

• Time-invariance: if y(n) = T (x(n)), then y(n − D) = T (x(n − D))

[Figure: block diagrams. Delaying the input (left) is the same as delaying the output (right).]
Today’s main points

x(n) → LTI → y(n)   ⇐⇒   y(n) = ∑_{k=−∞}^{∞} h(k) x(n − k)

That is, any LTI system can be completely described by h(n).

• If we have h(n), we can calculate the output of the system for any input x(n).
• h(n) is called the impulse response of the system.

Also, we will see that many useful LTI systems can be represented by a difference equation

∑_{p=0}^{P} a_p y(n − p) = ∑_{q=0}^{Q} b_q x(n − q),

where a0, …, aP and b0, …, bQ are sets of coefficients.
Impulse response
Recall from last lecture:

δ(n) = {…, 0, 1, 0, …}.

The impulse response of an LTI system T is the result

T (δ(n)) = h(n) = {…, h(−1), h(0), h(1), …}.

By time-invariance, we have that

T (δ(n − k)) = h(n − k) = {…, h(−1 − k), h(−k), h(1 − k), …}, for any k ∈ ℤ.

How does this help us if we want to compute T (x(n)) for an arbitrary x(n)?

[Figure: δ(n) → LTI → h(n), and δ(n − k) → LTI → h(n − k).]
Impulse response

Recall that we can represent x(n) as

x(n) = ∑_{k=−∞}^{∞} x(k) δ(n − k).

As the system is linear and time-invariant,

y(n) = T ( ∑_{k=−∞}^{∞} x(k) δ(n − k) )
     = ∑_{k=−∞}^{∞} T ( x(k) δ(n − k) )      (linearity)
     = ∑_{k=−∞}^{∞} x(k) T (δ(n − k))        (linearity)
     = ∑_{k=−∞}^{∞} x(k) h(n − k)            (time-invariance)
     = x(n) ∗ h(n).

[Figure: x(n) decomposed as … + x(−1)δ(n + 1) + x(0)δ(n) + x(1)δ(n − 1) + …, fed through T and summed to give y(n).]
Impulse response

[Figure: block diagrams showing x(n) decomposed into scaled, shifted impulses x(k)δ(n − k), each producing the response x(k)h(n − k), which sum to y(n). Linearly combining inputs (left) is the same as linearly combining outputs (right).]
Impulse responses in every-day life
• A room’s effect on a sound can be well-described as an LTI system.
• Sound waves are reflected off the walls ⇒ each reflection is a delayed and scaled version of the signal.
• In music recordings, this effect is created artificially.

[Figure: original signal waveform, t = 0–8 s. Play.]
[Figure: concert hall impulse response, t ≈ 0–1.8 s, and the resulting convolved signal. Play.]
[Figure: Inchin oil depot impulse response, t ≈ 0–80 s, and the resulting convolved signal. Play.]
Black board example
Compute the convolution of

x(n) = {2, 4, 6, 5, 2}, h(n) = {3, 2, 1}.

y(n) = ∑_{k=−∞}^{∞} x(k)h(n − k) = ∑_{k=−∞}^{∞} h(k)x(n − k)
     = ∑_{k=0}^{2} h(k)x(n − k)
     = h(0)x(n) + h(1)x(n − 1) + h(2)x(n − 2)
     = 3x(n) + 2x(n − 1) + x(n − 2)
     = {6, 16, 28, 31, 22, 9, 2}.

Note:
• convolving a signal of length K with a signal of length L gives a signal of length N = K + L − 1. Here, K = 5, L = 3 gives N = 7.
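As a sanity check, the example can be reproduced numerically; a minimal sketch using NumPy (assumed available; both signals are taken to start at n = 0):

```python
import numpy as np

x = np.array([2, 4, 6, 5, 2])   # x(n)
h = np.array([3, 2, 1])         # h(n)

# full linear convolution, length K + L - 1 = 5 + 3 - 1 = 7
y = np.convolve(x, h)
print(y.tolist())  # [6, 16, 28, 31, 22, 9, 2]
```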

Hint for future: response to exponentials
What happens if our input signal is a complex exponential?

x(n) = e^{n(α+jω)}, n = …, −2, −1, 0, 1, 2, …,

where α ∈ ℝ and ω ∈ ℝ.

y(n) = T (x(n)) = ∑_{k=−∞}^{∞} h(k)x(n − k) = ∑_{k=−∞}^{∞} h(k) e^{(n−k)(α+jω)}
     = ∑_{k=−∞}^{∞} h(k) e^{n(α+jω)} e^{−k(α+jω)} = e^{n(α+jω)} ∑_{k=−∞}^{∞} h(k) e^{−k(α+jω)}
     = x(n) ∑_{k=−∞}^{∞} h(k) e^{−k(α+jω)}.

• y(n) is just a scaled version of x(n), where the scaling ∑_{k=−∞}^{∞} h(k) e^{−k(α+jω)} depends on α, ω, and the IR h.
• Exponentials are eigenfunctions of LTIs: they come back unchanged except for a (complex) scaling.
• Same concept as in linear algebra: Ax = λx if x is an eigenvector of A.
Hint for future: response to exponentials

x(n) = e^{n(α+jω)} → LTI → y(n) = c e^{n(α+jω)}, equivalently x(n) → [multiply by c] → y(n).

The constant gain is c = ∑_{k=−∞}^{∞} h(k) e^{−k(α+jω)}.

• This is one of the most important topics of the course.
• We will return to this when we look at frequency analysis.

[Figure: real parts of x(n) and y(n); the system response to an exponential shows a scaling and a (slight) phase shift.]
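The eigenfunction property can be verified numerically. A minimal sketch with NumPy (the short FIR impulse response and the frequency are arbitrary illustrative choices, not from the slides; α = 0 here): once all filter taps are filled, the output equals c·x(n) with c = ∑ₖ h(k)e^{−kjω}.

```python
import numpy as np

h = np.array([1.0, 0.5, 0.25])           # arbitrary short impulse response
omega = 0.3
n = np.arange(200)
x = np.exp(1j * omega * n)               # complex exponential input (alpha = 0)

y = np.convolve(x, h)[: len(n)]          # causal filtering: y(n) = sum_k h(k) x(n - k)
c = np.sum(h * np.exp(-1j * omega * np.arange(len(h))))  # predicted complex gain

# once all taps are filled (n >= len(h) - 1), y(n) = c * x(n) exactly
print(np.allclose(y[2:], c * x[2:]))     # True
```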
Properties of convolution
For the convolution ∑_{k=−∞}^{∞} x(k)h(n − k) we have the standard properties:

Commutativity

x1(n) ∗ x2(n) = x2(n) ∗ x1(n) = ∑_{k=−∞}^{∞} x1(n − k)x2(k).

Associativity

x1(n) ∗ [x2(n) ∗ x3(n)] = [x1(n) ∗ x2(n)] ∗ x3(n)

Distributivity

x1(n) ∗ [x2(n) + x3(n)] = x1(n) ∗ x2(n) + x1(n) ∗ x3(n)

As LTIs can be represented by convolutions, they inherit these properties.
Consequences for LTI 1

x1(n) ∗ x2(n) = x2(n) ∗ x1(n) (commutativity)
x1(n) ∗ [x2(n) ∗ x3(n)] = [x1(n) ∗ x2(n)] ∗ x3(n) (associativity)
x1(n) ∗ [x2(n) + x3(n)] = x1(n) ∗ x2(n) + x1(n) ∗ x3(n) (distributivity)

[Figure: x(n) → h1(n) → h2(n) → y(n), x(n) → h2(n) → h1(n) → y(n), and x(n) → h(n) = h1(n) ∗ h2(n) → y(n) are all equivalent.]

Systems in series: total system by convolving impulse responses. Order does not matter.
Consequences for LTI 2

x1(n) ∗ x2(n) = x2(n) ∗ x1(n) (commutativity)
x1(n) ∗ [x2(n) ∗ x3(n)] = [x1(n) ∗ x2(n)] ∗ x3(n) (associativity)
x1(n) ∗ [x2(n) + x3(n)] = x1(n) ∗ x2(n) + x1(n) ∗ x3(n) (distributivity)

[Figure: x(n) fed to h1(n) and h2(n) in parallel, outputs added to give y(n); equivalent to the single system h(n) = h1(n) + h2(n).]

Systems in parallel: total system by adding impulse responses.
Causality and stability
In practice, we are interested in LTI systems that are causal and stable.
Causality The system output y(n) depends only on the present and past values of the input, i.e., x(n), x(n − 1), x(n − 2), ….
• Important for real-time applications of the system. If y(n) depends on x(n + 1), x(n + 2), …, x(n + D), then we have to wait until we have observed x(n + D) before we can compute y(n).
• Causality is easily checked using the impulse response:

y(n) = x(n) ∗ h(n) = ∑_{k=−∞}^{∞} x(n − k)h(k)
     = ∑_{k=−∞}^{−1} x(n − k)h(k)  [anti-causal: future]  +  ∑_{k=0}^{∞} x(n − k)h(k)  [causal: past and present]

The only way the anti-causal part is zero for any x(n) is if h(n) = 0 for all n < 0.

LTI system causal ⇐⇒ h(n) = 0, ∀n < 0
BIBO stability
Definition
A system is bounded input - bounded output (BIBO) stable if any bounded input signal yields a bounded output signal:

max_n |x(n)| = Mx < ∞ ⇒ max_n |y(n)| = max_n |T (x(n))| = My < ∞.

• Very important in practice.
• Systems that are unstable are sensitive to noise, can break, or worse.
• For example: chain reaction in a nuclear power plant.
• Feedback at a concert.
• The Tacoma Narrows Bridge collapse (Link).
BIBO stability
Again, we can check stability using the impulse response.

|y(n)| = |∑_{k=−∞}^{∞} x(k)h(n − k)| ≤ ∑_{k=−∞}^{∞} |x(k)| |h(n − k)| ≤ Mx ∑_{k=−∞}^{∞} |h(n − k)| = Mx ∑_{k=−∞}^{∞} |h(k)|.

Thus, ∑_{n=−∞}^{∞} |h(n)| < ∞ ⇒ |y(n)| < ∞.

The converse also holds: choose x(n) = h(−n)*/|h(−n)| (the “complex sign”, with * denoting complex conjugation). Then,

y(0) = ∑_{k=−∞}^{∞} x(k)h(0 − k) = ∑_{k=−∞}^{∞} (h(−k)*/|h(−k)|) h(−k) = ∑_{k=−∞}^{∞} |h(−k)|²/|h(−k)|
     = ∑_{k=−∞}^{∞} |h(−k)| = ∑_{k=−∞}^{∞} |h(k)|.

So if ∑ |h(n)| diverges, this bounded input produces an unbounded output.

LTI system stable ⇐⇒ ∑_{n=−∞}^{∞} |h(n)| < ∞
A simple(?) system
Let a system T have impulse response h(n) = u(n)aⁿ with |a| < 1. Is the system causal/stable?
• u(n) = 0 for n < 0 ⇒ h(n) = 0 for n < 0, system is causal.
• ∑_{n=−∞}^{∞} |h(n)| = ∑_{n=0}^{∞} |aⁿ| = ∑_{n=0}^{∞} |a|ⁿ = 1/(1 − |a|) < ∞, stable.
We have

y(n) = T (x(n)) = x(n) ∗ h(n) = ∑_{k=−∞}^{∞} h(k)x(n − k) = ∑_{k=0}^{∞} aᵏ x(n − k).

The impulse response is infinitely long ⇒ need infinite history of x(n).

[Figure: tapped delay line; x(n) passes through delays z⁻¹, the taps are weighted by a, a², a³, a⁴, …, and summed to give y(n).]
A simple(?) system
By closer inspection, we see that

y(n) = T (x(n)) = ∑_{k=0}^{∞} aᵏ x(n − k) = a⁰ x(n) + ∑_{k=1}^{∞} aᵏ x(n − k)
     = x(n) + a ∑_{k=1}^{∞} aᵏ⁻¹ x(n − k) = x(n) + a ∑_{k=0}^{∞} aᵏ x(n − 1 − k)
     = x(n) + a y(n − 1).

⇒ no need to keep track of the history of input values, we just need the last output y(n − 1)! We can define the system by the difference equation

y(n) − a y(n − 1) = x(n).

[Figure: feedback realization; x(n) plus a times the delayed output y(n − 1) gives y(n).]
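The equivalence between the infinite convolution and the one-tap feedback recursion can be checked numerically; a sketch with an arbitrary a and random input (the impulse response is truncated, which is exact here since the input is finite and shorter than the truncation):

```python
import numpy as np

a = 0.8
rng = np.random.default_rng(0)
x = rng.standard_normal(100)

# direct convolution with h(n) = a^n u(n) (truncated, exact since len(h) > len(x))
h = a ** np.arange(200)
y_conv = np.convolve(x, h)[: len(x)]

# first-order recursion y(n) = x(n) + a * y(n - 1)
y_rec = np.empty(len(x))
prev = 0.0
for n in range(len(x)):
    prev = x[n] + a * prev
    y_rec[n] = prev

print(np.allclose(y_conv, y_rec))  # True
```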
Two broad classes of impulse responses
There are two big classes of impulse responses: FIR and IIR.
Finite-duration impulse response (FIR)
There are N0 < ∞ and N1 < ∞ such that h(n) = 0 for n < N0 and n > N1. Then,

y(n) = ∑_{k=−∞}^{∞} h(k)x(n − k) = ∑_{k=N0}^{N1} h(k)x(n − k).

• The IR has (at most) N = N1 − N0 + 1 non-zero values.
• To compute y(n), we need to keep N values of x(n) in memory, and then perform N multiplications and additions.
• If the system is causal (N0 = 0, N = N1 + 1), we need to ”remember” N past values of x(n).
Two broad classes of impulse responses
Infinite-duration impulse response (IIR)
There are no N0 < ∞ and N1 < ∞ that bound the length of h(n). For simplicity, assume that the system is causal. Then,

y(n) = ∑_{k=0}^{∞} h(k)x(n − k) ≠ ∑_{k=0}^{N} h(k)x(n − k) for any N < ∞.

• Looks like we need infinite memory and to perform infinitely many multiplications/additions.
• But we just saw that we could realize an IIR with a very simple system.
• The trick is that we need feedback: y(n) will depend on old values y(n − 1), y(n − 2), ….
Difference equation representation
Many useful systems can be represented by a difference equation:

∑_{p=0}^{P} a_p y(n − p) = ∑_{q=0}^{Q} b_q x(n − q),  a0 = 1.

• If a1 = … = aP = 0, this is an FIR system with IR h(n) = {b0, b1, …, bQ}.
• If a_p ≠ 0 for some p ≥ 1, then this is an IIR system.
• {y(n − 1), y(n − 2), …, y(n − P)} is called the state of the system at time n.

[Figure: direct-form realization with feed-forward taps b0, b1, b2, b3 and feedback taps −a1, −a2.]
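Since a0 = 1, the difference equation can be solved for y(n) sample by sample. A minimal sketch with arbitrary example coefficients (not from the slides), cross-checked against SciPy’s lfilter, which implements exactly this recursion:

```python
import numpy as np
from scipy.signal import lfilter

b = [1.0, 0.5]    # b_q coefficients (arbitrary example)
a = [1.0, -0.9]   # a_p coefficients, a_0 = 1

rng = np.random.default_rng(1)
x = rng.standard_normal(50)

# y(n) = sum_q b_q x(n - q) - sum_{p>=1} a_p y(n - p)
y = np.zeros(len(x))
for n in range(len(x)):
    y[n] = sum(b[q] * x[n - q] for q in range(len(b)) if n >= q)
    y[n] -= sum(a[p] * y[n - p] for p in range(1, len(a)) if n >= p)

print(np.allclose(y, lfilter(b, a, x)))  # True
```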
Impulse response from difference equation
By definition, h(n) = T (δ(n)). That is, the impulse response satisfies

∑_{p=0}^{P} a_p h(n − p) = ∑_{q=0}^{Q} b_q δ(n − q),  a0 = 1.

This gives us a set of equations:

n < 0:      h(n) = 0   (causal system)
0 ≤ n ≤ Q:  ∑_{p=0}^{P} a_p h(n − p) = b_n   (since ∑_{q=0}^{Q} b_q δ(n − q) = b_n)
n > Q:      ∑_{p=0}^{P} a_p h(n − p) = 0     (since ∑_{q=0}^{Q} b_q δ(n − q) = 0)

• FIR system: simple, h(n) = b_n for n = 0, 1, …, Q.
• IIR system: what should we do here?
Impulse response for IIR
Assume Q ≤ P. Here, we want to solve

∑_{p=0}^{P} a_p h(n − p) = b_n,  0 ≤ n ≤ Q.

We can write the system of equations as (recall causality: h(n) = 0 for n < 0)

⎡ 1                         ⎤ ⎡ h(0) ⎤   ⎡ b0 ⎤
⎢ a1   1                    ⎥ ⎢ h(1) ⎥   ⎢ b1 ⎥
⎢ a2   a1   1               ⎥ ⎢ h(2) ⎥ = ⎢ b2 ⎥
⎢ a3   a2   a1   1          ⎥ ⎢ h(3) ⎥   ⎢ b3 ⎥
⎢ ⋮                ⋱        ⎥ ⎢  ⋮   ⎥   ⎢ ⋮  ⎥
⎣ aQ  aQ−1 aQ−2  …  a1   1  ⎦ ⎣ h(Q) ⎦   ⎣ bQ ⎦

Triangular system of equations ⇒ easy to solve for h(0), h(1), …, h(Q). We can solve for h(Q + 1), …, h(P) in the same manner.
But this gets a bit tedious; can we be smarter as to find h(n) for n > Q?
Impulse response for IIR
For n > Q, we have the homogeneous equation

∑_{p=0}^{P} a_p h(n − p) = 0.   (1)

Let’s try a solution of the form h(n) = λⁿ for λ ∈ ℂ. Then,

∑_{p=0}^{P} a_p λ^{n−p} = 0 ⇐⇒ λ^{n−P} (λ^P + a1 λ^{P−1} + … + a_{P−1} λ + a_P) = 0.

This polynomial equation will in general have P roots λ1, λ2, …, λP.

⇒ h(n) = C1 λ1ⁿ + C2 λ2ⁿ + … + CP λPⁿ satisfies (1) for any C1, …, CP.

If λ1, …, λP are distinct, then the converse is also true:

∑_{p=0}^{P} a_p h(n − p) = 0 ⇐⇒ h(n) = C1 λ1ⁿ + C2 λ2ⁿ + … + CP λPⁿ.

So, we only need to find the correct C1, …, CP to get our infinite-length IR.
Impulse response for IIR

1. Find the roots λ1, …, λP of λ^P + a1 λ^{P−1} + … + a_{P−1} λ + a_P = 0.

2. Find h(0), …, h(P − 1) by solving

∑_{p=0}^{P} a_p h(n − p) = b_n for 0 ≤ n ≤ Q, and = 0 for Q + 1 ≤ n ≤ P − 1.

3. Find C1, …, CP by solving

⎡ λ1⁰       λ2⁰       …  λP⁰       ⎤ ⎡ C1 ⎤   ⎡ h(0)     ⎤
⎢ λ1¹       λ2¹       …  λP¹       ⎥ ⎢ C2 ⎥   ⎢ h(1)     ⎥
⎢ λ1²       λ2²       …  λP²       ⎥ ⎢ C3 ⎥ = ⎢ h(2)     ⎥
⎢ ⋮                                ⎥ ⎢ ⋮  ⎥   ⎢ ⋮        ⎥
⎣ λ1^{P−1}  λ2^{P−1}  …  λP^{P−1}  ⎦ ⎣ CP ⎦   ⎣ h(P − 1) ⎦

(the matrix is invertible for distinct λ1, …, λP)

4. The impulse response is given by

h(n) = C1 λ1ⁿ + C2 λ2ⁿ + … + CP λPⁿ.

We will see that this can be done more easily in a few lectures.
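The four steps can be sketched numerically. The coefficients below are an arbitrary example (P = 2, Q = 0, not from the slides); the closed form is cross-checked against h(n) obtained by running the recursion directly:

```python
import numpy as np

a = np.array([1.0, -1.1, 0.3])   # a_p with a_0 = 1 (arbitrary example, P = 2)
b = np.array([1.0])              # b_q (Q = 0)

# reference: h(n) from the difference equation itself, driven by delta(n)
N = 30
h = np.zeros(N)
for n in range(N):
    bn = b[n] if n < len(b) else 0.0
    h[n] = bn - sum(a[p] * h[n - p] for p in range(1, len(a)) if n >= p)

# step 1: roots of lambda^2 + a1*lambda + a2 = 0 (here 0.6 and 0.5, distinct)
lam = np.roots(a)
# step 3: Vandermonde system for C_1, C_2 from h(0), h(1)
V = np.vander(lam, 2, increasing=True).T        # rows [lam^0; lam^1]
C = np.linalg.solve(V, h[:2])
# step 4: closed form h(n) = C1*lam1^n + C2*lam2^n
h_closed = (C[:, None] * lam[:, None] ** np.arange(N)).sum(axis=0)

print(np.allclose(h, h_closed))  # True
```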
Stability of FIR and IIR
FIR
Any FIR system is stable, as the IR is of finite length:

∑_{n=−∞}^{∞} |h(n)| = ∑_{n=N0}^{N1} |h(n)| < ∞.

IIR
As h(n) = C1 λ1ⁿ + C2 λ2ⁿ + … + CP λPⁿ,

∑_{n=0}^{∞} |h(n)| = ∑_{n=0}^{∞} |∑_{p=1}^{P} Cp λpⁿ| ≤ ∑_{p=1}^{P} |Cp| ∑_{n=0}^{∞} |λp|ⁿ.

The right-hand side is finite if and only if each ∑_{n=0}^{∞} |λp|ⁿ converges, i.e., |λp| < 1. This is in fact both necessary and sufficient:

IIR stable
⇐⇒
all P roots λp of λ^P + a1 λ^{P−1} + … + a_{P−1} λ + a_P = 0 satisfy |λp| < 1.
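In practice this root-magnitude condition is easy to check numerically; a one-line sketch for an arbitrary example polynomial (assumed, not from the slides):

```python
import numpy as np

a = [1.0, -1.1, 0.3]   # characteristic polynomial lambda^2 - 1.1*lambda + 0.3
# stable iff every root lies strictly inside the unit circle
print(np.max(np.abs(np.roots(a))) < 1)  # True: roots 0.6 and 0.5
```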
Black board example (1/3)
Let

y(n) − y(n − 2) = −x(n) + 2x(n − 1).

Find h(n). Is the system stable?

By definition, for the impulse response y(n) = h(n), we have x(n) = δ(n). Thus,
n = 0:  h(0) − h(−2) = −δ(0) + 2δ(−1) = −1,
n = 1:  h(1) − h(−1) = −δ(1) + 2δ(0) = 2,
n = 2:  h(2) − h(0) = −δ(2) + 2δ(1) = 0,
n ≥ 2:  h(n) − h(n − 2) = −δ(n) + 2δ(n − 1) = 0.
We look for a causal system, i.e., h(n) = 0 for n < 0. We thus have the conditions
n = 0:  h(0) = −1,
n = 1:  h(1) = 2,
n ≥ 2:  h(n) − h(n − 2) = 0.
Let’s find h(n) satisfying this.
Black board example (2/3)
Let’s solve the homogeneous equation

h(n) − h(n − 2) = 0,  n ≥ 2.   (2)

The corresponding polynomial equation is

λⁿ − λⁿ⁻² = λⁿ⁻²(λ² − 1) = 0,  n ≥ 2.

We get the roots λ = ±1, and thus

h(n) = C1(−1)ⁿ + C2·1ⁿ,  n ≥ 2   (3)

is a solution to (2) for any C1, C2. To find C1, C2 we need some ”initial” conditions. We have two coefficients, thus we need to know h(n) at two time steps. Take n = 2 and n = 3 (for example). We have from (2) that

h(2) − h(0) = 0 ⇒ h(2) = h(0) = −1 (from previous slide),
h(3) − h(1) = 0 ⇒ h(3) = h(1) = 2 (from previous slide).

We can now plug this into (3) for n = 2 and n = 3.
Black board example (3/3)
This yields

−1 = h(2) = C1(−1)² + C2(1)² = C1 + C2,
2 = h(3) = C1(−1)³ + C2(1)³ = −C1 + C2.

Solving this yields C1 = −3/2 and C2 = 1/2. We thus have

h(n) = −1 for n = 0,  2 for n = 1,  −(3/2)(−1)ⁿ + 1/2 for n ≥ 2.

In fact, it holds that h(n) = −(3/2)(−1)ⁿ + 1/2 for all n ≥ 0 (we could have seen this directly by checking that the formula also reproduces h(0) = −1 and h(1) = 2).

Clearly, the impulse response does not decay: h(n) = {−1, 2, −1, 2, −1, …}, and thus it is not absolutely summable. This means that the system is not stable.
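The blackboard result can be double-checked by driving the difference equation with δ(n) directly; a minimal sketch:

```python
import numpy as np

# y(n) - y(n-2) = -x(n) + 2x(n-1)  =>  a = [1, 0, -1], b = [-1, 2]
a = [1.0, 0.0, -1.0]
b = [-1.0, 2.0]

N = 8
h = np.zeros(N)
for n in range(N):
    bn = b[n] if n < len(b) else 0.0
    h[n] = bn - sum(a[p] * h[n - p] for p in range(1, len(a)) if n >= p)

print(h.tolist())  # [-1.0, 2.0, -1.0, 2.0, -1.0, 2.0, -1.0, 2.0]

# matches the closed form -3/2 * (-1)^n + 1/2 for all n >= 0
n = np.arange(N)
print(bool(np.allclose(h, -1.5 * (-1.0) ** n + 0.5)))  # True
```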
Summary of LTIs

x(n) → LTI → y(n)  ⇐⇒  y(n) = ∑_{k=−∞}^{∞} h(k)x(n − k)

• Any LTI is completely characterized by its impulse response h(n).
• LTI causal ⇐⇒ h(n) = 0 for n < 0.
• LTI stable ⇐⇒ ∑_{n=−∞}^{∞} |h(n)| < ∞.
• Many useful LTIs can be represented by the difference equation

∑_{p=0}^{P} a_p y(n − p) = ∑_{q=0}^{Q} b_q x(n − q).

• FIR: a0 = 1, a_p = 0 for p ≠ 0. An FIR system is always stable.
• IIR: ∃ a_p ≠ 0 for p ≠ 0. Stability depends on the roots of the characteristic polynomial

λ^P + a1 λ^{P−1} + … + a_{P−1} λ + a_P.
Signals in noise
In many signal processing applications, we have measurements y(n) of the form

y(n) = x(n) + w(n),

where x(n) is the signal of interest and w(n) is ”noise”.

• Radar: transmit a waveform x(n) which bounces off an object and comes back to the antenna:

y(n) = x(n − D) + w(n),

where D is the roundtrip delay, i.e., D = 2(d/c)Fs, where d is the distance to the object, c the speed of light, and Fs the sampling frequency. How to find D?

• Radio astronomy: we look for, e.g., gravitational waves, which have a signature x(n). Our measurement is

y(n) = x(n) + w(n)  (there is a gravitational wave there)
y(n) = w(n)         (no wave, only noise).

How should we decide if there is a wave or not?
Gravitational waves
• What we are looking for: x(n)
predicted by General Relativity.
• y(n) = x(n) + w(n) where w(n) is
instrument noise, environmental
background,...

Abbott et al. (2016), Observation of Gravitational Waves from a Binary Black


Hole Merger, Physical Review Letters.

Correlation
Correlation measures the similarity between two signals at different time-lags. Assume that the signals have finite energy:

Ex = ∑_{n=−∞}^{∞} |x(n)|² < ∞,  Ey = ∑_{n=−∞}^{∞} |y(n)|² < ∞.

Auto-correlation

rxx(ℓ) = x(ℓ) ∗ x(−ℓ) = ∑_{n=−∞}^{∞} x(n)x(−(ℓ − n)) = ∑_{n=−∞}^{∞} x(n)x(n − ℓ)

Cross-correlation

rxy(ℓ) = x(ℓ) ∗ y(−ℓ) = ∑_{n=−∞}^{∞} x(n)y(n − ℓ)

• |rxx(ℓ)| ≤ rxx(0) = Ex for all ℓ.
• |rxy(ℓ)| ≤ √(rxx(0)ryy(0)) = √(Ex Ey) for all ℓ.
• By distributivity, r_{x1,x2+x3}(ℓ) = x1(ℓ) ∗ (x2(−ℓ) + x3(−ℓ)) = x1(ℓ) ∗ x2(−ℓ) + x1(ℓ) ∗ x3(−ℓ) = r_{x1,x2}(ℓ) + r_{x1,x3}(ℓ).
Finding signal delay
Let y(n) = x(n − D) + w(n) and say that we want to find D. Then,

ryx(ℓ) = r_{x(n−D)+w(n), x(n)}(ℓ) = r_{x(n−D), x(n)}(ℓ) + r_{w(n), x(n)}(ℓ)
       = ∑_{n=−∞}^{∞} x(n − D)x(n − ℓ) + r_{w(n), x(n)}(ℓ)
       = ∑_{n=−∞}^{∞} x(n)x(n − (ℓ − D)) + r_{w(n), x(n)}(ℓ)
       = rxx(ℓ − D) + r_{w(n), x(n)}(ℓ).

If w(n) is just noise, then probably r_{w(n),x(n)}(ℓ) = ∑_{n=−∞}^{∞} w(n)x(n − ℓ) ≈ 0.

⇒ ryx(ℓ) ≈ rxx(ℓ − D) for all ℓ, and

|ryx(ℓ)| ≈ |rxx(ℓ − D)| ≤ rxx(0), with equality at ℓ = D.

⇒ find D by computing ryx(ℓ) for ℓ = 0, 1, … and see where we get the highest value!
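The procedure above can be sketched numerically. The pulse shape, delay, and noise level below are arbitrary illustrative choices (not the signals from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
n = np.arange(100)
x = np.sin(2 * np.pi * 0.05 * n) * np.hanning(100)   # a finite-energy pulse

D = 37                                               # true delay
y = np.zeros(250)
y[D:D + len(x)] = x                                  # delayed copy x(n - D)
y += 0.1 * rng.standard_normal(len(y))               # additive noise w(n)

# r_yx(l) = sum_n y(n) x(n - l), evaluated for l = 0, 1, ...
lags = np.arange(len(y) - len(x) + 1)
r = np.array([np.dot(y[l:l + len(x)], x) for l in lags])

print(int(lags[np.argmax(r)]))  # 37: the cross-correlation peak recovers D
```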
Finding signal delay

[Figure: x(n) and its delayed copy x(n − D); the normalized cross-correlation peaks exactly at lag D.]

[Figure: the noisy measurement y(n) = x(n − D) + w(n); the delayed copy x(n − D) is buried in the noise and not visible by eye.]

[Figure: the cross-correlation of the noisy y(n) with x(n); despite the noise, a clear peak remains at ℓ = D.]
Summary of today

• Linear time-invariant (LTI) systems.


• Impulse responses and two desirable properties: causality and
stability.
• Brief introduction to correlation.
• Coming up: frequency analysis for signals and systems.
• For next lecture, read Chapter 4 Frequency Analysis of Signals.
• Skip the section on ”The Cepstrum”.
• Skip section ”Relationship of the Fourier Transform to the
z-transform”. This will come later.
• ”The Frequency Ranges of Some Natural Signals” - read if you find
it fun.

