Compressive Nonsensing

Compressive sensing promises to enable the acquisition of signals using far fewer samples or measurements than required by the Nyquist rate, by exploiting the signal's compressibility or sparsity. However, challenges remain, as compressive sensing amplifies noise in the recovered signal and suffers from "tail folding" effects when signals are only approximately sparse. Nonetheless, compressive sensing may provide significant gains in dynamic range by allowing the use of an ADC with higher resolution at a lower sampling rate. Overall, compressive sensing involves tradeoffs between sampling rate, noise amplification, and dynamic range that require adaptation to specific applications.

compressive

nonsensing

Richard Baraniuk
Rice University
Chapter 1

The Problem
challenge 1
data too expensive
Case in Point: MR Imaging
• Measurements are very expensive

• $1–3 million per machine

• 30 minutes per scan
Case in Point: IR Imaging
challenge 2
too much data
Case in Point: DARPA ARGUS-IS
• 1.8 Gpixel image sensor
– video rate output: 444 Gbits/s
– comm data rate: 274 Mbits/s

a factor of 1600x, way out of reach of existing compression technology

• Reconnaissance without conscience
– too much data to transmit to a ground station
– too much data to make effective real-time decisions
Chapter 2

The Promise
COMPRESSIVE SENSING
innovation 1
sparse signal
models
Sparsity

[Figure: an image's pixels are captured by a few large wavelet coefficients (blue = 0); a wideband signal's time samples are captured by a few large Gabor (time-frequency) coefficients.]
Sparsity

[Figure: the few large wavelet coefficients (blue = 0) form a sparse signal; sparsity is a nonlinear signal model with K nonzero entries.]
innovation 2
dimensionality
reduction for
sparse signals
Dimensionality Reduction
• When data is sparse/compressible, we can directly acquire a compressed representation with little or no information loss through linear dimensionality reduction (a minimal acquisition sketch follows)

[Figure: an M×N measurement matrix maps a sparse signal with K nonzero entries to a short vector of M measurements.]
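To make the measurement model concrete, here is a minimal NumPy sketch (my own illustration, not from the talk; N, M, and K are assumed values) of acquiring M random linear measurements of a K-sparse length-N signal:

```python
# Minimal compressive-acquisition sketch: y = Phi @ x with M << N.
# Illustrative only; all sizes are assumed.
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 1024, 128, 10                          # ambient dim, measurements, sparsity

x = np.zeros(N)                                  # K-sparse signal
support = rng.choice(N, size=K, replace=False)
x[support] = rng.standard_normal(K)

Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # random measurement matrix
y = Phi @ x                                      # M numbers instead of N samples
print(y.shape)                                   # (128,)
```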
Stable Embedding
• An information-preserving projection Φ preserves the geometry of the set of sparse signals (a union of K-dimensional subspaces)

• A stable embedding (SE) ensures that
  (1 − δ)‖x1 − x2‖² ≤ ‖Φx1 − Φx2‖² ≤ (1 + δ)‖x1 − x2‖²
  for all K-sparse x1, x2
Random Embedding is Stable
• Measurements y = random linear combinations of the entries of the signal x

• No information loss for sparse vectors, with high probability (a numerical check follows)

[Figure: a random M×N matrix maps a sparse signal with K nonzero entries to M measurements.]
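A quick empirical check, sketched under assumed parameters, that a random Gaussian matrix approximately preserves the norms of sparse difference vectors, which is the stable-embedding property claimed above:

```python
# Check that ||Phi d|| is close to ||d|| when d is sparse (toy parameters).
import numpy as np

rng = np.random.default_rng(1)
N, M, K = 1024, 128, 10
Phi = rng.standard_normal((M, N)) / np.sqrt(M)

ratios = []
for _ in range(1000):
    d = np.zeros(N)                              # a sparse "difference" vector
    idx = rng.choice(N, size=K, replace=False)
    d[idx] = rng.standard_normal(K)
    ratios.append(np.linalg.norm(Phi @ d) / np.linalg.norm(d))

print(f"min/max norm ratio over trials: {min(ratios):.3f} / {max(ratios):.3f}")
# The ratios concentrate near 1, as the stable embedding requires.
```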
innovation 3
sparsity-based
signal recovery
Signal Recovery

• Goal: Recover the signal x from the measurements y = Φx

• Problem: The random projection Φ is not full rank (an ill-posed inverse problem)

• Solution: Exploit the sparse/compressible geometry of the acquired signal

• Recovery via a (convex) sparsity penalty or greedy algorithms (a toy greedy sketch follows)
[Donoho; Candes, Romberg, Tao, 2004]
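As one concrete instance of the greedy algorithms mentioned above, here is a minimal orthogonal matching pursuit sketch (my own illustration with assumed sizes; a convex L1 solver would play the same role):

```python
# Greedy sparse recovery via orthogonal matching pursuit (illustrative sketch).
import numpy as np

def omp(Phi, y, K):
    """Recover an (approximately) K-sparse x from y = Phi @ x."""
    M, N = Phi.shape
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(K):
        j = int(np.argmax(np.abs(Phi.T @ residual)))    # most correlated column
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef           # least-squares residual
    x_hat = np.zeros(N)
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(2)
N, M, K = 512, 80, 8                                    # assumed sizes
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
x = np.zeros(N)
x[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)
x_hat = omp(Phi, Phi @ x, K)
print(np.linalg.norm(x - x_hat) / np.linalg.norm(x))    # tiny with high probability
```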
“Single-Pixel” CS Camera

[Diagram: the scene is focused onto a DMD array displaying a random pattern; the reflected light is collected by a single photon detector, and the measurements feed image reconstruction or processing. w/ Kevin Kelly]

• Flip the mirror array M times to acquire M measurements

• Sparsity-based recovery
Random Demodulator
• Problem: In contrast to Moore’s Law, ADC performance doubles only every 6–8 years

• CS enables sampling near the signal’s (low) “information rate” rather than its (high) Nyquist rate (a toy discrete model is sketched below)

[Diagram: analog-to-information (A2I) converter; the sub-Nyquist sampling rate scales with the number of tones rather than the Nyquist bandwidth.]
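Here is a minimal, idealized discrete sketch of the random-demodulator measurement model (my own illustration with assumed parameters): multiply the Nyquist-rate samples by a pseudorandom ±1 chipping sequence, then integrate-and-dump down to a low output rate.

```python
# Random-demodulator sketch: chip with +/-1, then integrate-and-dump.
# Assumed sizes; an idealized discrete stand-in for the analog front end.
import numpy as np

rng = np.random.default_rng(3)
N, M = 1024, 64                       # Nyquist-rate samples, low-rate measurements
assert N % M == 0
t = np.arange(N) / N

tones = [37, 201, 410]                # sparse-in-frequency input: a few on-grid tones
x = sum(np.cos(2 * np.pi * f * t) for f in tones)

chips = rng.choice([-1.0, 1.0], size=N)           # pseudorandom demodulation sequence
mixed = chips * x
y = mixed.reshape(M, N // M).sum(axis=1)          # integrate-and-dump, N/M below Nyquist

print(y.shape)                        # (64,) acquired at 16x below the Nyquist rate
```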
Example: Frequency Hopper
• Sparse in time-frequency

[Figure: spectrogram from Nyquist-rate sampling alongside the “sparsogram” recovered from 20x sub-Nyquist sampling.]
challenge 1
data too expensive

CS means fewer expensive measurements are needed for the same resolution scan
challenge 2
too much data

CS means we compress on the fly as we acquire data
EXCITING !!!
2004—2014

9797 citations

6640 citations

dsp.rice.edu/cs archive >1500 papers

nuit-blanche.blogspot.com > 1 posting/sec


Chapter 3

The Hype
CS is Growing Up
Gerhard Richter
4096 Colours
muralsoflajolla.com/roy-mcmakin-mural
“L1 is the new L2”
- Stan Osher
Exponential Growth
?
Chapter 4

The Fallout
“L1 is the new L2”
- Stan Osher
CS for “Face Recognition”
From: M. V.
Subject: Interesting application for compressed sensing
Date: June 10, 2011 at 11:37:31 PM EDT
To: candes@stanford.edu, jrom@ece.gatech.edu

Drs. Candes and Romberg,


You may have already been approached about this, but I feel I should say something in case you haven't. I'm writing to you because I recently read an article in Wired Magazine about compressed sensing.

I'm excited about the applications CS could have in many fields, but today I was reminded of a specific application where CS could conceivably settle an area of dispute between mainstream historians and Roswell UFO theorists. As outlined in the linked video below, Dr. Rudiak has analyzed photos from 1947 in which a General Ramey appears holding a typewritten letter, from which Rudiak believes he has been able to discern a number of words which he believes substantiate the extraterrestrial hypothesis for the Roswell Incident. For your perusal, I've located a "hi-res" copy of the cropped image of the letter in Ramey's hand.

I hope to hear back from you. Is this an application where compressed sensing could be useful? Any chance you would consider trying it?

Thank you for your time,


M. V.
Chapter 5

Back to Reality
Back to Reality
• “There's no such thing as a free lunch”

• “Something for Nothing” theorems

• Dimensionality reduction is no exception

• Result: Compressive Nonsensing
Nonsense 1

Robustness
Measurement Noise

• Stable recovery with additive measurement noise

• Noise is added to the measurements: y = Φx + n

• Stability: the noise is only mildly amplified in the recovered signal
Signal Noise

• Often we seek recovery with additive signal noise, i.e. the noise is added to the signal itself: y = Φ(x + n)

• Noise folding: the signal noise is amplified in the recovered signal by 3 dB for every doubling of the undersampling factor N/M

• The same effect is seen in classical “bandpass subsampling”

[Davenport, Laska, Treichler, Baraniuk 2011]
Noise Folding in CS

[Plot: CS recovered-signal SNR vs. undersampling factor; slope = −3 dB per doubling. A small simulation reproducing this is sketched below.]
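A small simulation sketch (my own, with assumed parameters and idealized oracle-support least-squares recovery, which isolates the folding effect) that reproduces the roughly 3 dB SNR loss each time the undersampling factor N/M doubles:

```python
# Noise-folding sketch: signal noise enters before measurement, y = Phi @ (x + n);
# recover on the known (oracle) support and track output SNR vs. N/M.
import numpy as np

rng = np.random.default_rng(4)
N, K, sigma = 2048, 10, 0.01                     # assumed sizes and noise level
support = rng.choice(N, size=K, replace=False)
x = np.zeros(N)
x[support] = rng.standard_normal(K)

for M in [1024, 512, 256, 128, 64]:
    snrs = []
    for _ in range(20):
        Phi = rng.standard_normal((M, N)) / np.sqrt(M)
        y = Phi @ (x + sigma * rng.standard_normal(N))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        err = x.copy()
        err[support] -= coef                     # oracle-support estimation error
        snrs.append(20 * np.log10(np.linalg.norm(x) / np.linalg.norm(err)))
    # The printed SNR drops by roughly 3 dB for each doubling of N/M.
    print(f"N/M = {N // M:3d}: recovered SNR ~ {np.mean(snrs):5.1f} dB")
```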
“Tail Folding”

• Can model compressible (approximately sparse) signals as “signal” + “tail”

• The tail “folds” into the recovered signal as the undersampling increases

[Figure: sorted coefficient magnitudes vs. index, split into a K-term “signal” portion and a “tail”.]

[Davies, Guo, 2011; Davenport, Laska, Treichler, Baraniuk 2011]
All Is Not Lost – Dynamic Range

• In wideband ADC applications

• As the amount of subsampling grows, we can employ an ADC with a lower sampling rate and hence a higher-resolution quantizer
Dynamic Range
• CS can significantly boost the ENOB (effective number of bits) of an ADC system for sparse signals

[Plot: stated number of bits vs. log sampling frequency; a CS ADC exploiting sparsity sits above the conventional ADC trend line.]


Dynamic Range

• As the amount of subsampling grows, we can employ an ADC with a lower sampling rate and hence a higher-resolution quantizer

• Thus the dynamic range of a CS ADC can significantly exceed that of a Nyquist ADC

• With current ADC trends, the dynamic range gain is theoretically 7.9 dB for each doubling of the undersampling factor
Dynamic Range

[Plot: measured dynamic range vs. undersampling factor; slope = +5 dB per doubling (approaching the theoretical 7.9 dB).]
Tradeoff

SNR: 3 dB loss for each doubling of the undersampling factor

Dynamic range: up to 7.9 dB gain for each doubling of the undersampling factor
Adaptivity

• Say we know the locations of the non-zero entries in x

• Then we can boost the SNR by focusing the sensing on the corresponding columns of Φ

• This motivates adaptive sensing strategies that bypass the noise-folding tradeoff

[Haupt, Castro, Nowak, Baraniuk 2009; Candes, Davenport 2011]
Nonsense 2

Quantization
CS and Quantization
• The vast majority of work in CS assumes the measurements are real-valued

• In practice, measurements must be quantized (a nonlinear operation)

• CS performance should be measured in terms of the number of measurement bits rather than the number of (real-valued) measurements

• Limited progress so far:
– a large number of bits per measurement
– 1 bit per measurement
CS and Quantization
[Plot: recovery performance for N=2000, K=20, with M = (total bits)/(bits per measurement); curves for 1, 2, 4, 6, 8, 10, and 12 bits per measurement. A toy bit-budget sketch follows.]
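As a back-of-the-envelope illustration of the M = (total bits)/(bits per measurement) accounting, here is a toy sketch (assumed bit budget and a plain uniform scalar quantizer, not any particular scheme from the literature):

```python
# Fixed measurement-bit budget: trade bits per measurement against
# the number of measurements, M = B // b. Assumed numbers, illustrative only.
import numpy as np

def quantize(y, b, y_max=1.0):
    """Uniform b-bit scalar quantizer on [-y_max, y_max]."""
    levels = 2 ** b
    step = 2 * y_max / levels
    idx = np.clip(np.floor(y / step), -levels // 2, levels // 2 - 1)
    return (idx + 0.5) * step

rng = np.random.default_rng(5)
y = 0.3 * rng.standard_normal(8)                 # pretend CS measurements
B = 240                                          # total measurement-bit budget (assumed)
for b in [1, 2, 4, 6, 8, 12]:
    M = B // b                                   # measurements affordable at b bits each
    print(f"{b:2d} bits/meas -> M = {M:3d}; example quantized y[0] = {quantize(y, b)[0]:+.3f}")
```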
Nonsense 3

Weak Models
Weak Models

• Sparsity models in CS emphasize discrete bases and frames
– DFT, wavelets, …

• But in real data acquisition problems, the world is continuous, not discrete
The Grid Problem
• Consider a “frequency sparse” signal
– suggests the DFT as the sparsity basis

• Easy CS problem: K=1 frequency on the DFT grid

• Hard CS problem: K=1 frequency off the grid
– slow coefficient decay due to sinc interpolation of off-grid sinusoids (asymptotically, the signal is not even in L1); see the numerical sketch below
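The grid problem is easy to see numerically. This short sketch (my own, with an assumed signal length) compares the DFT coefficient decay of an on-grid tone and an off-grid tone:

```python
# Grid-problem sketch: DFT coefficient decay for on-grid vs. off-grid tones.
import numpy as np

N = 256
n = np.arange(N)

on_grid  = np.exp(2j * np.pi * 40.0 * n / N)     # frequency exactly on a DFT bin
off_grid = np.exp(2j * np.pi * 40.5 * n / N)     # frequency halfway between bins

for name, sig in [("on-grid", on_grid), ("off-grid", off_grid)]:
    mags = np.sort(np.abs(np.fft.fft(sig)))[::-1] / N
    print(f"{name:9s} largest DFT coefficients: {np.round(mags[:5], 3)}")
# on-grid: a single coefficient of size 1, the rest ~0;
# off-grid: energy leaks into every bin with slow, sinc-like decay.
```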
Going Off the Grid
• Spectral CS [Duarte, Baraniuk, 2010]
– discrete formulation

• CS Off the Grid [Tang, Bhaskar, Shah, Recht, 2012]
– continuous formulation

[Plot: best-case, average-case, and worst-case performance of Spectral CS, with roughly a 20 dB spread between cases.]
Nonsense 4

Focus on
Recovery
Misguided Focus on Recovery

• Recall the data deluge problem in sensing
– e.g., large-scale imaging, HSI, video, ultrawideband ADC, …
– the ambient data dimension N is too large

• When N ~ billions, signal recovery becomes problematic, if not impossible

• Solution: Perform signal exploitation directly on the compressive measurements
Compressive Signal Processing

• Many applications involve signal inference, not reconstruction

detection < classification < estimation < reconstruction

• Good news: CS supports efficient learning, inference, and processing directly on compressive measurements

• Random projections ~ sufficient statistics for signals with concise geometrical structure
Classification
• Simple object classification problem
– AWGN: nearest neighbor classifier

• Common issue:
– L unknown articulation parameters

• Common solution: matched filter
– find the nearest neighbor under all articulations
CS-based Classification
• Target images form a low-dimensional manifold as the target articulates
– random projections preserve the information in these manifolds provided enough measurements are taken

• CS-based classifier: the smashed filter (a toy sketch follows)
– find the nearest neighbor under all articulations under random projection [Davenport, Baraniuk, et al. 2006]
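To illustrate the smashed-filter idea, here is a toy sketch (assumed setup: a 1-D template under circular shifts standing in for the image-articulation manifold) that classifies the shift by nearest neighbor directly in the compressive domain, with no reconstruction:

```python
# Smashed-filter toy sketch: nearest-neighbor articulation estimation from
# random measurements y = Phi @ x. Assumed sizes and noise level.
import numpy as np

rng = np.random.default_rng(6)
N, M = 512, 40

# Smooth random template; its circular shifts play the role of the manifold.
template = np.convolve(rng.standard_normal(N), np.ones(9) / 9, mode="same")
shifts = np.arange(0, N, 8)                               # candidate articulations
manifold = np.stack([np.roll(template, s) for s in shifts])

Phi = rng.standard_normal((M, N)) / np.sqrt(M)
proj = manifold @ Phi.T                                   # compressive templates

true_shift = 128
y = Phi @ np.roll(template, true_shift) + 0.05 * rng.standard_normal(M)

# Nearest neighbor among projected articulations, no reconstruction needed.
est = shifts[np.argmin(np.linalg.norm(proj - y, axis=1))]
print(f"true shift {true_shift}, estimated shift {est}")
```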
Smashed Filter
• Random shift and rotation (L=3 dim. manifold)
• White Gaussian noise added to the measurements
• Goals: identify the most likely shift/rotation parameters and the most likely class

[Plots: average shift-estimate error and classification rate (%) vs. number of measurements, for increasing noise levels.]


Frequency Tracking
• Compressive Phase-Locked Loop (PLL)
– key idea: the phase detector in a PLL computes the inner product between the signal and the oscillator output
– the RIP ensures we can compute this inner product from the corresponding low-rate CS measurements (checked numerically below)

[Figure: CS-PLL output with 20x undersampling.]
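The inner-product claim behind the compressive PLL is easy to check numerically. This sketch (assumed toy parameters) shows that ⟨Φx, Φz⟩ concentrates around ⟨x, z⟩ for a random Φ whose entries have variance 1/M:

```python
# Inner products are approximately preserved under random projection,
# the property the compressive PLL relies on. Toy parameters, my own sketch.
import numpy as np

rng = np.random.default_rng(7)
N, M = 2048, 512
t = np.arange(N) / N
x = np.cos(2 * np.pi * 50 * t + 0.3)             # "received signal"
z = np.cos(2 * np.pi * 50 * t)                   # "oscillator output"

errs = []
for _ in range(100):
    Phi = rng.standard_normal((M, N)) / np.sqrt(M)
    errs.append((Phi @ x) @ (Phi @ z) - x @ z)

print(f"<x, z> = {x @ z:.1f}")
print(f"mean |<Phi x, Phi z> - <x, z>| over trials = {np.mean(np.abs(errs)):.1f}")
```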
Nonsense 5

Weak
Guarantees
Performance Guarantees

• CS performance guarantees
– RIP, incoherence, phase transitions

• To date, rigorous results exist only for random matrices
– practically not useful
– often pessimistic

• Need rigorous guarantees for non-random, structured sampling matrices with fast algorithms
– analogous to the progress in coding theory from Shannon’s original random codes to modern codes
Chapter 6

All Is Not Lost!

Sparsity
Convex optimization
Dimensionality reduction
12-Step Program
To End Compressive Nonsensing
1. Don’t give in to the hype surrounding CS
2. Resist the urge to blindly apply L1 minimization
3. Face up to robustness issues
4. Deal with measurement quantization
5. Develop more realistic signal models
6. Develop practical sensing matrices beyond random
7. Develop more efficient recovery algorithms
8. Develop rigorous performance guarantees for practical CS systems
9. Exploit signals directly in the compressive domain
10. Don’t give in to the hype surrounding CS
11. Resist the urge to blindly apply L1 minimization
12. Don’t give in to the hype surrounding CS
