
Compressed sensing / compressed sampling

The fanciest ways of counting zero

Stand by for higgledy-piggledy notes on the theme of exploiting sparsity to recover signals from few non-local measurements, given that we know they are nearly sparse, in a sense that will be made clear soon. This is another twist on classic sampling theory.

Sparse regression is closely related, but with a stochastic process angle.

See also matrix factorisations, random projections, optimisation, model selection, multiple testing, random linear algebra, concentration inequalities, restricted isometry properties.

Basic Compressed Sensing

I'll follow the intro of CENR11, which tries to unify many variants.

We attempt to recover a signal $x \in \mathbb{R}^d$ from $m$ noisy measurements $y_k$ of the form

$$y_k = \langle a_k, x \rangle + z_k, \qquad k = 1, \dots, m,$$

or, as a matrix equation,

$$y = Ax + z,$$

where $A$ is the $m \times d$ matrix whose rows are the stacked measurement vectors $a_k$, and the terms $z_k$ denote i.i.d. measurement noise.

Now, if $x$ is a sparse vector, and $A$ satisfies an appropriate restricted isometry property, then we can construct an estimate $\hat{x}$ with small error by minimising

$$\hat{x} = \operatorname*{argmin}_{\tilde{x}} \|\tilde{x}\|_1 \quad \text{subject to} \quad \|A\tilde{x} - y\|_2 \leq \varepsilon,$$

where $\varepsilon$ is an upper bound on the size of the noise, $\|z\|_2 \leq \varepsilon$.

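A minimal sketch of that recovery problem, assuming numpy and cvxpy are available; the solver choice and all the dimensions below are my own illustrative assumptions, not anything taken from CENR11:

```python
# Sketch: noisy l1 recovery of an s-sparse signal from random measurements.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
d, m, s = 200, 80, 5           # ambient dimension, measurements, sparsity
sigma = 0.01                   # noise level

# s-sparse ground truth
x_true = np.zeros(d)
support = rng.choice(d, s, replace=False)
x_true[support] = rng.standard_normal(s)

# random Gaussian measurement matrix, scaled so columns are roughly unit norm
A = rng.standard_normal((m, d)) / np.sqrt(m)
z = sigma * rng.standard_normal(m)
y = A @ x_true + z

# min ||x||_1  subject to  ||Ax - y||_2 <= eps
eps = 1.5 * sigma * np.sqrt(m)
x_hat = cp.Variable(d)
prob = cp.Problem(cp.Minimize(cp.norm1(x_hat)),
                  [cp.norm(A @ x_hat - y, 2) <= eps])
prob.solve()
print("recovery error:", np.linalg.norm(x_hat.value - x_true))
```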
In the lecture notes on restricted isometry properties, Candès and Tao talk not about vectors but about functions on Abelian groups like $\mathbb{Z}/d\mathbb{Z}$, which is convenient for some phrasing, since then when I say my signal is $s$-sparse, I mean that its support $S = \operatorname{supp}(x)$ has cardinality $|S| = s$.

In the finite-dimensional vector framing, we can talk about the best $s$-sparse approximation $x_s$ of a signal $x$,

$$x_s := \operatorname*{argmin}_{\|\tilde{x}\|_0 \leq s} \|x - \tilde{x}\|,$$

where all the coefficients apart from the $s$ largest (in magnitude) are zeroed.
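That best $s$-term approximation is just hard thresholding; a tiny sketch in numpy:

```python
import numpy as np

def best_s_sparse(x, s):
    """Best s-sparse approximation: keep the s largest-magnitude entries, zero the rest."""
    x_s = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-s:]   # indices of the s largest-magnitude entries
    x_s[keep] = x[keep]
    return x_s
```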

The basic results give attractive convex problems. There are also greedy optimisation versions, formulated as above but no longer necessarily convex; instead, we talk about Orthogonal Matching Pursuit, Iterative Thresholding and some other methods whose details I do not yet know, and which I think pop up in wavelets and sparse coding.
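For flavour, here is a rough numpy sketch of Orthogonal Matching Pursuit, the simplest of the greedy variants; a sketch only, not a reference implementation:

```python
import numpy as np

def omp(A, y, s):
    """Greedy Orthogonal Matching Pursuit: pick the column most correlated
    with the current residual, refit by least squares on the chosen support,
    and repeat for s steps."""
    m, d = A.shape
    support = []
    residual = y.copy()
    x_hat = np.zeros(d)
    for _ in range(s):
        j = int(np.argmax(np.abs(A.T @ residual)))  # most correlated new atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat[support] = coef
    return x_hat
```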

For all of these, the results tend to be something like:

with data $y$, the difference between my estimate $\hat{x}$ and the oracle estimate $x_s$ is bounded by something-or-other, where the oracle estimate is the one you would construct if you knew the support set $S$ ahead of time.

Candès gives an example result:

$$\|\hat{x} - x\|_2 \leq C_0 \frac{\|x - x_s\|_1}{\sqrt{s}} + C_1 \varepsilon,$$

conditional upon

$$\delta_{2s}(A) < \sqrt{2} - 1,$$

where $\delta_s(A)$ is the restricted isometry constant of the matrix $A$, defined as the smallest constant such that

$$(1 - \delta_s(A)) \|x\|_2^2 \leq \|Ax\|_2^2 \leq (1 + \delta_s(A)) \|x\|_2^2$$

for all $s$-sparse $x$. That is, the measurement matrix does not change the norm of sparse signals “much”, and in particular, does not null them when $\delta_s(A) < 1$.
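For concreteness, a naive sketch of $\delta_s(A)$ computed by enumerating every size-$s$ support; this is only feasible for toy matrices, which is exactly why the quantity is awkward in practice:

```python
import numpy as np
from itertools import combinations

def restricted_isometry_constant(A, s):
    """Brute-force delta_s: smallest delta with
    (1-delta)||x||^2 <= ||Ax||^2 <= (1+delta)||x||^2 for all s-sparse x,
    found by checking the extreme singular values on every size-s support."""
    d = A.shape[1]
    delta = 0.0
    for S in combinations(range(d), s):
        sv = np.linalg.svd(A[:, list(S)], compute_uv=False)
        delta = max(delta, sv.max() ** 2 - 1.0, 1.0 - sv.min() ** 2)
    return delta
```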

This is apparently not the strongest bound out there, but for any bound of that form, the constants look frustrating.

Measuring the restricted isometry constant of a given measurement matrix is presumably hard, although I haven't tried yet. But generating random matrices that have a certain RIC with high probability is easy; that's a neat trick in this area.
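A Monte Carlo sketch of that trick: scale i.i.d. Gaussian entries by $1/\sqrt{m}$ and watch how little $\|Ax\|_2^2$ strays from $\|x\|_2^2$ on random $s$-sparse unit vectors. Note this only probes the constant from below; it certifies nothing:

```python
import numpy as np

rng = np.random.default_rng(1)
m, d, s, trials = 100, 400, 8, 2000
A = rng.standard_normal((m, d)) / np.sqrt(m)  # random Gaussian measurement matrix

worst = 0.0
for _ in range(trials):
    x = np.zeros(d)
    S = rng.choice(d, s, replace=False)
    x[S] = rng.standard_normal(s)
    x /= np.linalg.norm(x)                    # random s-sparse unit vector
    worst = max(worst, abs(np.linalg.norm(A @ x) ** 2 - 1.0))
print("largest observed deviation (a lower bound on delta_s):", worst)
```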

Redundant compressed sensing

TBD. For now see restricted isometry properties.

Introductory texts

…Using random projections

Classic. Notes to come.

…Using deterministic projections

Surely this is close to quasi Monte Carlo?

That phase transition

How well can you recover a matrix from a certain number of measurements? In the obvious metrics there is a sudden jump in how well you do as the number of measurements increases, for a given rank. This looks a lot like a physical phase transition. Hmm.

See statistical mechanics of statistics.
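A rough sketch of the analogous phase transition for sparse-vector recovery (the matrix/rank version is the same idea but heavier to simulate), assuming scikit-learn's OMP solver is available; the success rate jumps abruptly as $m$ grows:

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(2)
d, s, trials = 200, 10, 50

for m in (20, 40, 60, 80, 100, 120):
    successes = 0
    for _ in range(trials):
        x = np.zeros(d)
        x[rng.choice(d, s, replace=False)] = rng.standard_normal(s)
        A = rng.standard_normal((m, d)) / np.sqrt(m)
        y = A @ x                                          # noiseless measurements
        fit = OrthogonalMatchingPursuit(n_nonzero_coefs=s,
                                        fit_intercept=False).fit(A, y)
        successes += np.linalg.norm(fit.coef_ - x) < 1e-3 * np.linalg.norm(x)
    print(f"m={m:4d}  success rate={successes / trials:.2f}")
```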

Weird things to be classified

csgm (BJPD17), compressed sensing using generative models, tries to find a signal which is sparse with respect to… some manifold of the latent variables of… a generative model? Or something?

Sparse FFT.

Refs