One of the central tenets of signal processing is the Shannon/Nyquist sampling theorem: the number of samples needed to capture a signal is dictated by its bandwidth. Very recently, an alternative theory of "compressive sampling" has emerged. By using nonlinear recovery algorithms (based on convex optimization), super-resolved signals and images can be reconstructed from what appears to be highly incomplete data. Compressive sampling shows us how data compression can be implicitly incorporated into the data acquisition process, and gives us a new vantage point on a diverse set of applications, including accelerated tomographic imaging, analog-to-digital conversion, and digital photography.
See examples of compressive sampling in action.
Code
L1-MAGIC is a collection of MATLAB routines for solving the convex optimization programs central to compressive sampling. The algorithms are based on standard interior-point methods and are suitable for large-scale problems.
Download the code (including User's Guide)
Download the User's Guide (pdf)
Papers
A nonlinear sampling theorem
"Robust uncertainty principles: Exact recovery from highly incomplete
Fourier information"
by: Emmanuel Candes, Justin Romberg, and Terence Tao
To appear in IEEE Transactions on Information Theory, February 2006.
The central result of this paper is that a sparse vector can be recovered exactly from a small number of Fourier-domain observations. More precisely, let f be a length-N discrete signal with B nonzero components (we stress that the number and locations of the components are unknown a priori). We collect samples at K randomly selected frequencies. Then for K on the order of B log N, we can recover f perfectly (with very high probability) through l1 minimization. A small numerical sketch of this recovery program follows the download link.
Download (pdf)
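The recovery program in this paper is exactly the kind of problem L1-MAGIC solves. As a self-contained illustration, here is a minimal sketch in Python, with two hedges: random rows of an orthonormal DCT matrix stand in for the paper's random Fourier samples (to keep everything real-valued), and SciPy's generic LP solver stands in for a dedicated l1 solver. All sizes are illustrative choices.

```python
import numpy as np
from scipy.fft import dct
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, B, K = 128, 5, 40                       # signal length, sparsity, number of samples

# Sparse signal: B spikes at random locations with random signs.
f = np.zeros(N)
f[rng.choice(N, B, replace=False)] = rng.choice([-1.0, 1.0], B)

# Sensing matrix: K randomly selected rows of the orthonormal DCT matrix.
D = dct(np.eye(N), axis=0, norm="ortho")
A = D[rng.choice(N, K, replace=False)]
y = A @ f                                  # the K "frequency-domain" observations

# Basis pursuit, min ||x||_1 subject to Ax = y, posed as a linear program:
# split x = u - v with u, v >= 0 and minimize sum(u) + sum(v).
c = np.ones(2 * N)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y, bounds=(0, None))
f_hat = res.x[:N] - res.x[N:]
print("max recovery error:", np.abs(f_hat - f).max())
```

For these sizes (K is comfortably above B log N) the printed error is typically at the solver's tolerance, i.e. recovery is exact up to numerical precision.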
Near-optimal signal recovery and the Uniform Uncertainty Principle
"Near-optimal signal recovery from random projections and universal encoding strategies"
by: Emmanuel Candes and Terence Tao
Submitted to IEEE Transactions on Information Theory, November 2004.
This paper derives precise conditions under which an arbitrary sparse signal f can be recovered from a fixed set of linear measurements y = Mf. If M obeys what it terms the Uniform Uncertainty Principle for sets of size S (which essentially means that all submatrices formed by taking S columns from M are approximate isometries), then every signal f with no more than S nonzero components can be recovered from its measurements y = Mf via an l1 minimization program. It is shown that if M is generated randomly, it will obey the UUP with high probability for sets of size S ~ K log(N/K), where K is the number of rows of M. Using this result, it is shown that if f is compressible rather than sparse (meaning that the sorted components of f decay quickly), then the l1 recovery is near-optimal: as we add more measurements, the recovery error goes to zero almost as fast as the nonlinear approximation error of the original signal. A small numerical illustration of the UUP follows the download link.
Download (pdf)
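The UUP itself is easy to probe numerically. The sketch below (an illustration we add here, not taken from the paper) draws a Gaussian K x N matrix and checks that random S-column submatrices have singular values clustered around 1, i.e. that they act as approximate isometries; all sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
K, N, S = 64, 256, 8
M = rng.standard_normal((K, N)) / np.sqrt(K)    # columns have unit norm on average

# Examine many random column subsets of size S.
extremes = []
for _ in range(200):
    cols = rng.choice(N, S, replace=False)
    s = np.linalg.svd(M[:, cols], compute_uv=False)
    extremes.append((s.min(), s.max()))
lo = min(e[0] for e in extremes)
hi = max(e[1] for e in extremes)
print(f"singular values of all sampled S-column submatrices lie in [{lo:.2f}, {hi:.2f}]")
```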
Stability
"Stable signal recovery from incomplete and inaccurate measurements"
by: Emmanuel Candes, Justin Romberg, and Terence Tao
To appear in Communications on Pure and Applied Mathematics, 2006.
This paper demonstrates that the recovery procedure is stable. Given that the measurement matrix satisfies the UUP, we show that a sparse or compressible signal f can be recovered from corrupted measurements y = Mf + e, where the size of e is at most epsilon, to within an error proportional to epsilon. The proof is short and clean, and encompasses the previous recovery results for the noiseless case. A rough numerical check of this claim follows the download link.
Download (pdf)
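As a rough numerical check of the stability claim, the sketch below recovers a sparse signal from noisy Gaussian measurements. Two hedges: it solves the closely related LASSO problem min 0.5||Mx - y||^2 + lambda ||x||_1 by iterative soft thresholding (ISTA) rather than the paper's constrained program, and all parameter values are ad hoc choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
N, K, B, sigma = 256, 80, 8, 0.02
M = rng.standard_normal((K, N)) / np.sqrt(K)
f = np.zeros(N)
f[rng.choice(N, B, replace=False)] = rng.standard_normal(B)
y = M @ f + sigma * rng.standard_normal(K)        # corrupted measurements

# ISTA: gradient step on the quadratic term, then soft thresholding.
lam = 0.05                                        # ad hoc regularization weight
t = 1.0 / np.linalg.norm(M, 2) ** 2               # step size from the spectral norm
x = np.zeros(N)
for _ in range(2000):
    z = x - t * (M.T @ (M @ x - y))
    x = np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0)

print("recovery error :", np.linalg.norm(x - f))
print("noise level    :", np.linalg.norm(y - M @ f))
```

The printed recovery error is on the order of the noise level, which is the qualitative content of the stability result.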
Statistical Estimation
"The Dantzig selector: Statistical estimation when p is much
smaller than n"
by: Emmanuel Candes and Terence Tao
Submitted to IEEE Transactions on Information Theory, June 2005.
When the errors made in the measurement process are Gaussian, much more can be said about the precision of the recovery. This paper shows that a sparse signal can be estimated from an incomplete set of measurements corrupted by additive white Gaussian noise essentially as well as it could be by observing the entire noisy signal directly (and thresholding). The estimation procedure, which is again a certain type of l1 minimization program, is termed the Dantzig selector. A sketch of this program as a linear program follows the download link.
Download (pdf)
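The Dantzig selector, min ||x||_1 subject to ||M^T(y - Mx)||_inf <= lambda, is itself a linear program. The sketch below poses it for SciPy's LP solver; the threshold lambda is an illustrative choice in the spirit of the sigma*sqrt(2 log N) scaling, not the paper's exact recommendation.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
N, K, B, sigma = 128, 60, 5, 0.05
M = rng.standard_normal((K, N)) / np.sqrt(K)
f = np.zeros(N)
f[rng.choice(N, B, replace=False)] = rng.standard_normal(B)
y = M @ f + sigma * rng.standard_normal(K)

lam = sigma * np.sqrt(2 * np.log(N))      # illustrative threshold
G, b = M.T @ M, M.T @ y

# Split x = u - v with u, v >= 0; the residual-correlation constraint
# -lam <= b - G(u - v) <= lam becomes two sets of linear inequalities.
c = np.ones(2 * N)
A_ub = np.vstack([np.hstack([G, -G]), np.hstack([-G, G])])
b_ub = np.concatenate([b + lam, lam - b])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
x_hat = res.x[:N] - res.x[N:]
print("estimation error:", np.linalg.norm(x_hat - f))
```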
Linear Decoding
"Decoding by Linear Programming"
by: Emmanuel Candes and Terence Tao
IEEE Transactions on Information Theory, December 2005.
This paper demonstrates that, in addition to recovering sparse signals, l1 minimization can be used to detect and correct sparse errors. A codeword c is generated by applying an M x N coding matrix A to a message m: c = Am. It is shown that if A obeys a type of uncertainty principle, then m can be recovered even if c is deviously altered in qM unknown locations (where q is a constant). A small decoding sketch follows the download link.
Download (pdf)
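A standard way to carry out this decoding, which the sketch below follows, is l1 regression: choose the message estimate minimizing ||Am - c'||_1, posed as a linear program. The sizes and the 10% corruption rate are illustrative choices.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)
n_msg, m_code, q = 30, 120, 0.1           # message length N, codeword length M, corruption rate
A = rng.standard_normal((m_code, n_msg))  # the M x N coding matrix
msg = rng.standard_normal(n_msg)
cw = A @ msg                              # clean codeword

# Deviously alter q*M entries of the codeword.
bad = rng.choice(m_code, int(q * m_code), replace=False)
cw[bad] += 10.0 * rng.standard_normal(bad.size)

# l1 decoding: min_m ||A m - cw||_1, as an LP with variables [m; t],
# minimizing sum(t) subject to -t <= A m - cw <= t.
c = np.concatenate([np.zeros(n_msg), np.ones(m_code)])
A_ub = np.vstack([np.hstack([A, -np.eye(m_code)]),
                  np.hstack([-A, -np.eye(m_code)])])
b_ub = np.concatenate([cw, -cw])
bounds = [(None, None)] * n_msg + [(0, None)] * m_code
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print("max decoding error:", np.abs(res.x[:n_msg] - msg).max())
```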
Finding Sparse Decompositions
"Quantitative robust uncertainty principles and optimally sparse
decompositions"
by: Emmanuel Candes and Justin Romberg
To appear in Foundations of Computational Mathematics, 2006.
This paper revisits the now-classical application of l1 minimization to finding sparse representations in unions of bases. The spike-sinusoid system is studied in detail: it is shown that if a signal is a superposition of ~ N/sqrt(log N) spikes and sinusoids, then the sparsest (in the sense of minimum support) decomposition can be found via l1 minimization. The paper makes explicit use of novel uncertainty principles between the time and frequency domains. The extension to finding sparse decompositions in general pairs of bases is also discussed. A small decomposition sketch follows the download link.
Download (pdf)
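As an illustration of the spike-sinusoid setting, the sketch below builds a dictionary of spikes plus an orthonormal DCT (real cosines standing in for the paper's complex sinusoids) and finds the sparse decomposition by basis pursuit with SciPy's LP solver. Sizes are arbitrary, and the 6-term signal is comfortably below the paper's ~ N/sqrt(log N) threshold.

```python
import numpy as np
from scipy.fft import idct
from scipy.optimize import linprog

rng = np.random.default_rng(5)
N = 64
Psi = idct(np.eye(N), axis=0, norm="ortho")   # orthonormal cosine ("sinusoid") basis
D = np.hstack([np.eye(N), Psi])               # spikes + sinusoids, an N x 2N dictionary

# Signal: a superposition of 6 spikes/sinusoids with random signs.
alpha = np.zeros(2 * N)
alpha[rng.choice(2 * N, 6, replace=False)] = rng.choice([-1.0, 1.0], 6)
s = D @ alpha

# Basis pursuit: min ||a||_1 subject to D a = s, split a = u - v, u, v >= 0.
c = np.ones(4 * N)
res = linprog(c, A_eq=np.hstack([D, -D]), b_eq=s, bounds=(0, None))
a_hat = res.x[:2 * N] - res.x[2 * N:]
print("max coefficient error:", np.abs(a_hat - alpha).max())
```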
Links
David Donoho's publications, including work on Compressed Sensing and Sparse Recovery (with Jared Tanner)
The Rice University DSP group's Compressed Sensing resources page; see in particular the very recent work on building a CS camera
Robert Nowak and Jarvis Haupt's paper on Signal Reconstruction from Noisy Random Projections
Terence Tao's summary of the current state of compressive sampling theory
Joel Tropp's web page at the California Institute of Technology; see in particular his work on reconstruction using greedy algorithms (with Anna Gilbert)
Martin Strauss and Anna Gilbert, at the University of Michigan, and their papers on fast algorithms for estimating sparse Fourier transforms
Martin Vetterli and Irena Maravic's work on sampling signals with "finite rate of innovation"
David Brady's Duke Integrated Sensing and Processing page