
Restricted isometry properties

Plus incoherence, irrepresentability, and other uncertainty bounds for a sparse world, and maybe frame theory, what's that now?

Restricted isometry properties, a.k.a. uniform uncertainty principles (CaTa05, CaRT06), mutual incoherence (DoET06), irrepresentability conditions (ZhYu06)…

This is mostly notes while I learn some definitions; expect no actual thoughts.

Recoverability conditions, as seen in sparse regression, sparse basis dictionaries, function approximation, compressed sensing etc. Uncertainty principles for a sparse world.

Terry Tao mentions the various related conditions for the compressed sensing problem, and which types of random matrices satisfy them. The chatty lecture notes on uniform uncertainty look nice.

Restricted Isometry

The compressed sensing formulation.

The restricted isometry constant of a matrix \(A\) is the smallest constant \(\delta_s(A)\) such that

$$(1-\delta_s(A))\|x\|_2^2\leq \|Ax\|_2^2\leq (1+\delta_s(A))\|x\|_2^2$$

for all \(s\)-sparse \(x\). That is, the measurement matrix does not change the norm of sparse signals “too much”, and in particular does not null them when \(\delta_s(A) \lt 1.\)
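
To get a feel for this, here is a brute-force sketch (Python/NumPy; the function name is mine) that finds \(\delta_s(A)\) by enumerating every \(s\)-column submatrix and reading off its extreme singular values. Clearly this only scales to toy sizes, since the enumeration is combinatorial.

```python
import itertools
import numpy as np

def restricted_isometry_constant(A, s):
    """Brute-force delta_s(A): the worst relative distortion of ||Ax||_2^2
    over all s-sparse x, via the extreme singular values of each
    s-column submatrix. Only feasible for toy dimensions."""
    delta = 0.0
    for support in itertools.combinations(range(A.shape[1]), s):
        sv = np.linalg.svd(A[:, list(support)], compute_uv=False)
        # For x supported on `support`, ||Ax||_2^2 / ||x||_2^2 ranges over
        # [sigma_min^2, sigma_max^2] of the submatrix.
        delta = max(delta, sv.max() ** 2 - 1.0, 1.0 - sv.min() ** 2)
    return delta

# A random Gaussian matrix scaled so that E||Ax||^2 = ||x||^2.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 40)) / np.sqrt(20)
print(restricted_isometry_constant(A, s=2))
```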

Irrepresentability

The setup is a little different for regression-type problems, which is where “representability” comes from. Here we also care about the design (roughly, the dependence structure of the covariates we actually observe) and about the noise distribution.

ZhYu06 present an abstract condition called strong irrepresentability, which guarantees asymptotic sign consistency of selection. See also MeBü06, who call this neighborhood stability, which is even less catchy.

More recently, MeYu09 extend this (and explain the original irrepresentability condition more clearly, IMO):

Here we examine the behavior of the Lasso estimators if the irrepresentable condition is relaxed. Even though the Lasso cannot recover the correct sparsity pattern, we show that the estimator is still consistent in the l2-norm sense for fixed designs under conditions on (a) the number \(s_n\) of nonzero components of the vector \(\beta_n\) and (b) the minimal singular values of design matrices that are induced by selecting small subsets of variables. Furthermore, a rate of convergence result is obtained on the l2 error with an appropriate choice of the smoothing parameter.

They do a good job of uniting the prediction-error and model-selection consistency approaches. In fact, I will base everything on MeYu09, since not only is the prose more lucid, it also gives the background to the design assumptions and the relaxation of coherence.
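
For my own reference, here is my reading of the ZhYu06 condition as a numerical sketch (Python/NumPy; all names are mine): writing \(C = X^TX/n\) and partitioning by the active set \(S\), strong irrepresentability requires \(|C_{S^cS}C_{SS}^{-1}\operatorname{sign}(\beta_S)| \leq 1-\eta\) elementwise, for some \(\eta \gt 0\).

```python
import numpy as np

def strong_irrepresentability_margin(X, beta):
    """Margin in the strong irrepresentable condition (my reading of ZhYu06):
    with C = X.T @ X / n and S the support of beta, the condition is
    |C[S^c, S] @ inv(C[S, S]) @ sign(beta[S])| <= 1 - eta elementwise.
    Returns eta = 1 - max|.|; a positive value means the condition holds."""
    n, p = X.shape
    C = X.T @ X / n
    S = np.flatnonzero(beta)
    Sc = np.setdiff1d(np.arange(p), S)
    lhs = C[np.ix_(Sc, S)] @ np.linalg.solve(C[np.ix_(S, S)], np.sign(beta[S]))
    return 1.0 - np.abs(lhs).max()

# Toy design: one inactive covariate strongly correlated with an active one.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
X[:, 5] += 0.9 * X[:, 0]
beta = np.zeros(10)
beta[:3] = 1.0
print(strong_irrepresentability_margin(X, beta))
```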

TBC.

Incoherence

A noise-free Basis Pursuit setting.

DoET06:

We can think of the atoms in our dictionary as columns in a matrix \(\Phi\), so that \(\Phi\) is \(n\) by \(m\) and \(m \gt n\). A representation of \(y\in\mathbb{R}^n\) can be thought of as a vector \(\alpha\in\mathbb{R}^m\) satisfying \(y=\Phi\alpha.\)

The concept of mutual coherence of the dictionary […] is defined, assuming that the columns of \(\Phi\) are normalized to unit \(\ell^2\)-norm, in terms of the Gram matrix \(G=\Phi^T\Phi\). With \(G(k,j)\) denoting entries of this matrix, the mutual coherence is

$$ M(\Phi) = \max_{1\leq k, j\leq m, k\neq j} |G(k,j)| $$

A dictionary is incoherent if \(M\) is small.
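
In code this is nearly a one-liner; a sketch (Python/NumPy, names are mine): normalise the columns, form the Gram matrix, and take the largest off-diagonal entry.

```python
import numpy as np

def mutual_coherence(Phi):
    """M(Phi): the largest absolute off-diagonal entry of the Gram matrix
    of the column-normalised dictionary."""
    Phi = Phi / np.linalg.norm(Phi, axis=0)  # unit l2-norm columns, as assumed above
    G = Phi.T @ Phi
    np.fill_diagonal(G, 0.0)                 # the diagonal is identically 1; ignore it
    return np.abs(G).max()

# A random Gaussian dictionary with n=64, m=256 is typically fairly incoherent.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((64, 256))
print(mutual_coherence(Phi))
```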

Frame theory

Something I see mentioned in the various conditions above, but don’t really understand.

Morgenshtern and Bölcskei (MoBö11):

Hilbert spaces [1, Def. 3.1-1] and the associated concept of orthonormal bases are of fundamental importance in signal processing, communications, control, and information theory. However, linear independence and orthonormality of the basis elements impose constraints that often make it difficult to have the basis elements satisfy additional desirable properties. This calls for a theory of signal decompositions that is flexible enough to accommodate decompositions into possibly nonorthogonal and redundant signal sets. The theory of frames provides such a tool. This chapter is an introduction to the theory of frames, which was developed by Duffin and Schaeffer [DuSc52] and popularized mostly through [Daub90, Daub92, HeWa89, Youn01]. Meanwhile frame theory, in particular the aspect of redundancy in signal expansions, has found numerous applications such as, e.g., denoising, code division multiple access (CDMA), orthogonal frequency division multiplexing (OFDM) systems, coding theory, quantum information theory, analog-to-digital (A/D) converters, and compressive sensing [DoEl03, Dono06, CaTa06]. A more extensive list of relevant references can be found in [KoCh08]. For a comprehensive treatment of frame theory we refer to the excellent textbook [Chri16].
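
As far as I have gathered: a frame for \(\mathbb{R}^n\) is a possibly redundant spanning set \(\{\varphi_k\}\) admitting constants \(0 \lt A \leq B\) with \(A\|x\|_2^2 \leq \sum_k |\langle x,\varphi_k\rangle|^2 \leq B\|x\|_2^2\) for all \(x\); if the \(\varphi_k\) are the columns of \(\Phi\), the optimal bounds are the extreme eigenvalues of the frame operator \(\Phi\Phi^T\), and \(A=B\) gives a tight frame. A toy sketch (Python/NumPy, names mine):

```python
import numpy as np

def frame_bounds(Phi):
    """Optimal frame bounds (A, B) for the columns of Phi as a frame for R^n:
    the extreme eigenvalues of the frame operator S = Phi @ Phi.T."""
    eig = np.linalg.eigvalsh(Phi @ Phi.T)
    return eig.min(), eig.max()

# The "Mercedes-Benz" frame: three unit vectors at 120 degrees in R^2.
angles = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
Phi = np.vstack([np.cos(angles), np.sin(angles)])  # 2 x 3, redundant
print(frame_bounds(Phi))  # a tight frame: A == B == 3/2
```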

Refs
