# Compressed sensing / compressed sampling

### The fanciest ways of counting zero

Usefulness: 🔧
Novelty: 💡
Uncertainty: 🤪 🤪 🤪
Incompleteness: 🚧 🚧 🚧

Stand by for higgledy-piggledy notes on the theme of exploiting sparsity to recover signals from few non-local measurements, given that we know they are nearly sparse, in a sense that will be made clear soon. This is another twist on classic sampling theory.

Sparse regression is closely related, but with a stochastic process angle.

## Basic Compressed Sensing

I'll follow the intro of CENR11, which tries to unify many variants.

We attempt to recover a signal $$x\in \mathbb{R}^d$$ from $$m\ll d$$ measurements $$y_k$$ of the form

$y_k =\langle a_k, x\rangle + z_k,\, 1\leq k \leq m,$

or, as a matrix equation,

$y = Ax + z$

where $$A$$ is the $$m \times d$$ matrix of stacked measurement vectors $$a_k$$, and the $$z_k$$ terms denote i.i.d. measurement noise.
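To fix notation in code, here is a minimal simulation of this measurement model; a sketch, assuming a Gaussian measurement matrix, which is a common but by no means the only choice:

```python
import numpy as np

rng = np.random.default_rng(42)
d, m, s = 200, 60, 5     # ambient dimension, number of measurements, sparsity
sigma = 0.01             # noise scale

# An s-sparse signal: s random coordinates get Gaussian amplitudes.
x = np.zeros(d)
support = rng.choice(d, size=s, replace=False)
x[support] = rng.normal(size=s)

# Gaussian measurement matrix with N(0, 1/m) entries, so E||Ax||^2 = ||x||^2.
A = rng.normal(size=(m, d)) / np.sqrt(m)

# m noisy, non-local measurements y_k = <a_k, x> + z_k.
z = sigma * rng.normal(size=m)
y = A @ x + z
```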

Now, if $$x$$ is a sparse vector, and $$A$$ satisfies an appropriate restricted isometry property, then we can construct an estimate $$\hat{x}$$ with small error by minimising

$\hat{x}=\operatorname{argmin}_{\dot{x}} \|\dot{x}\|_1 \text{ subject to } \|A\dot{x}-y\|_2 \leq \varepsilon,$

where $$\varepsilon$$ upper-bounds the noise norm, $$\varepsilon\geq \|z\|_2.$$
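A minimal sketch of that estimator, assuming the cvxpy library and the `A`, `y`, `sigma`, `d`, `m` from the simulation above:

```python
import numpy as np
import cvxpy as cp

# epsilon should upper-bound the noise norm ||z||_2; for i.i.d. Gaussian
# noise, E||z||_2 is roughly sigma * sqrt(m), so we pad that slightly.
epsilon = 1.1 * sigma * np.sqrt(m)

x_dot = cp.Variable(d)
problem = cp.Problem(
    cp.Minimize(cp.norm(x_dot, 1)),            # minimise the l1 norm ...
    [cp.norm(A @ x_dot - y, 2) <= epsilon],    # ... subject to the data fit
)
problem.solve()
x_hat = x_dot.value
```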

In the lecture notes on restricted isometry properties, Candès and Tao talk not about vectors $$x\in \mathbb{R}^d$$ but about functions $$f:G \to \mathbb{C}$$ on Abelian groups like $$G=\mathbb{Z}/d\mathbb{Z},$$ which is convenient for some phrasings: saying my signal is $$s$$-sparse then means that its support $$\operatorname{supp} \tilde{f}=S\subset G$$ satisfies $$|S|=s$$.

In the finite-dimensional vector framing, we can talk about the best $$s$$-sparse approximation $$x_s$$ to a non-sparse vector $$x$$:

$x_s = \argmin_{\|\dot{x}\|_0\leq s} \|x-\dot{x}\|_2$

where all the coefficients apart from the $$s$$ largest are zeroed.
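Computing $$x_s$$ needs no optimiser at all; hard thresholding is exact here. A sketch:

```python
import numpy as np

def best_s_sparse(x, s):
    """Best s-sparse approximation: keep the s largest-magnitude
    coefficients of x and zero out the rest."""
    x_s = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-s:]   # indices of the s largest |x_i|
    x_s[keep] = x[keep]
    return x_s
```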

The basic results give us attractive convex problems. There are also greedy optimisation versions, formulated as above but no longer necessarily convex optimisations; instead, we talk about Orthogonal Matching Pursuit, Iterative Thresholding and some other stuff whose details I do not yet know, and which I think pop up in wavelets and sparse coding.
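For a greedy example, scikit-learn ships an Orthogonal Matching Pursuit estimator; a sketch, reusing the `A`, `y` and `s` from above:

```python
from sklearn.linear_model import OrthogonalMatchingPursuit

# Greedily select one column of A at a time (the one most correlated with
# the current residual), refitting by least squares, until s coefficients
# are nonzero.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=s, fit_intercept=False)
omp.fit(A, y)
x_hat_greedy = omp.coef_
```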

For all of these, the results tend to be something like:

with data $$y,$$ the difference between my estimate $$\hat{x}$$ and $$\hat{x}_\text{oracle}$$ is bounded by something-or-other, where the oracle estimate is the one where you know ahead of time the support $$S=\operatorname{supp}(x)$$.

$\|\hat{x}-x\|_2 \leq C_0\frac{\|x-x_s\|_1}{\sqrt{s}} + C_1\varepsilon$

conditional upon

$\delta_{2s}(A) \lt \sqrt{2} -1$

where this $$\delta_s(\cdot)$$ gives the restricted isometry constant of a matrix, defined as the smallest constant such that $$(1-\delta_s(A))\|x\|_2^2\leq \|Ax\|_2^2\leq (1+\delta_s(A))\|x\|_2^2$$ for all $$s$$-sparse $$x$$. That is, the measurement matrix does not change the norm of sparse signals “much”, and in particular, does not null them when $$\delta_s(A) \lt 1.$$

Apparently this is not the strongest bound out there, but in any bound of that form, those constants look frustrating.

Measuring the restricted isometry constant of a given measurement matrix is presumably hard, although I haven't tried yet. But generating random matrices that have a certain RIC with high probability is easy; that's a neat trick in this area.
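For intuition about why it is hard: the obvious exact computation of $$\delta_s(A)$$ enumerates all $$\binom{d}{s}$$ supports, so it is only feasible at toy sizes. A brute-force sketch:

```python
import itertools
import numpy as np

def restricted_isometry_constant(A, s):
    """delta_s(A): largest deviation of the eigenvalues of A_S^T A_S
    from 1, over all column subsets S of size s. Cost: O(d choose s)."""
    d = A.shape[1]
    delta = 0.0
    for S in itertools.combinations(range(d), s):
        cols = A[:, list(S)]
        eigs = np.linalg.eigvalsh(cols.T @ cols)   # ascending order
        delta = max(delta, eigs[-1] - 1.0, 1.0 - eigs[0])
    return delta

# A small Gaussian matrix with N(0, 1/m) entries typically has a modest
# RIC, illustrating the random-matrix trick at toy scale.
rng = np.random.default_rng(0)
m, d, s = 40, 15, 3
A = rng.normal(size=(m, d)) / np.sqrt(m)
print(restricted_isometry_constant(A, s))
```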

## Redundant compressed sensing

🚧 For now see restricted isometry principles.

## Introductory texts

• Aside: see the rather good practical Python notebook in Numerical Tours.

• Terry Tao's exposition is great as usual. See, e.g.

[…]we can at least give an informal geometric argument as to why $$\ell^1$$ minimisation is more likely to recover a sparse solution than $$\ell^2$$ minimisation. The set of all $$f$$ whose Fourier coefficients match the observed data $$c_\xi$$ forms an affine subspace of the space of all functions. The $$\ell^2$$ minimiser can then be viewed geometrically by taking $$\ell^2$$ balls (i.e. Euclidean balls) centred at the origin, and gradually increasing the radius of the ball until the first point of contact with the affine subspace. In general, there is no reason to expect this point of contact to be sparse (i.e. to lie on a high-codimension coordinate subspace). If however we replace $$\ell^2$$ with $$\ell^1$$, then the Euclidean balls are replaced by octahedra, which are much “pointier” (especially in high dimensions) and whose corners lie on coordinate subspaces. So the point of first contact is now much more likely to be sparse. The idea of using $$\ell^1$$ as a “convex relaxation” of $$\ell^0$$ is a powerful one in applied mathematics; see for instance Trop06 on the topic.

A small numerical illustration of this pointiness appears after this list.

• Hegde, Baraniuk, Davenport and Duarte have an open source textbook

• Wes McKinney's intro

• RIP vs JL

• Gabriel Peyré's Compressed Sensing of Images
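Here is the promised illustration of Tao's pointiness argument: solve the same underdetermined system with minimum $$\ell^2$$ norm (via the pseudoinverse) and minimum $$\ell^1$$ norm (as a linear program, via the standard splitting $$x=u-v$$ with $$u,v\geq 0$$). A sketch:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
m, d, s = 20, 50, 3

# A sparse ground truth observed through random measurements.
x_true = np.zeros(d)
x_true[rng.choice(d, size=s, replace=False)] = rng.normal(size=s)
A = rng.normal(size=(m, d))
y = A @ x_true

# Minimum-l2 solution: the pseudoinverse, generically dense.
x_l2 = np.linalg.pinv(A) @ y

# Minimum-l1 solution: min 1'(u + v) s.t. A(u - v) = y, u, v >= 0.
res = linprog(
    c=np.ones(2 * d),
    A_eq=np.hstack([A, -A]),
    b_eq=y,
    bounds=(0, None),
)
x_l1 = res.x[:d] - res.x[d:]

print("nonzero coefficients (|x_i| > 1e-6):",
      int(np.sum(np.abs(x_l2) > 1e-6)),    # typically d
      int(np.sum(np.abs(x_l1) > 1e-6)))    # typically s
```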

## …Using random projections

Classic. Notes to come.

## …Using deterministic projections

Surely this is close to quasi Monte Carlo?

• Dustin G. Mixon Achieving the Welch bound with difference sets

I blogged about constructing harmonic frames using difference sets. We proved that such harmonic frames are equiangular tight frames, thereby having minimal coherence between columns. I concluded the entry by conjecturing that incoherent harmonic frames are as good for compressed sensing as harmonic frames whose rows were randomly drawn from the discrete Fourier transform (DFT) matrix

• A variant on the compressed sensing of Yves Meyer

Recent work of Yves Meyer might be relevant:

• A variant on the compressed sensing of Emmanuel Candes, Basarab Matei and Yves Meyer

• Simple quasicrystals are sets of stable sampling, Basarab Matei and Yves Meyer

These papers are interesting because their approach to compressed sensing is very different. Specifically, their sparse vectors are actually functions of compact support with sufficiently small Lebesgue measure. As such, concepts like conditioning are replaced with that of stable sampling, and the results must be interpreted in the context of functional analysis. The papers demonstrate that sampling frequencies according to a (deterministic) simple quasicrystal will uniquely determine sufficiently sparse functions, and furthermore, the sparsest function in the preimage can be recovered by L1-minimization provided it's nonnegative.

## That phase transition

How well can you recover a matrix from a certain number of measurements? In the obvious metrics, there is a sudden jump in how well you do as the number of measurements increases, for a given rank. This looks a lot like a physical phase transition. Hmm.

## Weird things to be classified

csgm (BJPD17), compressed sensing using generative models, tries to find a model which is sparse with respect to… some manifold of the latent variables of… a generative model? Or something?

# Refs

Achlioptas, Dimitris. 2003. “Database-Friendly Random Projections: Johnson-Lindenstrauss with Binary Coins.” Journal of Computer and System Sciences, Special Issue on PODS 2001, 66 (4): 671–87. https://doi.org/10.1016/S0022-0000(03)00025-4.

Azizyan, Martin, Akshay Krishnamurthy, and Aarti Singh. 2015. “Extreme Compressive Sampling for Covariance Estimation,” June. http://arxiv.org/abs/1506.00898.

Bach, Francis, Rodolphe Jenatton, Julien Mairal, and Guillaume Obozinski. 2012. “Optimization with Sparsity-Inducing Penalties.” Foundations and Trends® in Machine Learning 4 (1): 1–106. https://doi.org/10.1561/2200000015.

Baraniuk, Richard, Mark Davenport, Ronald DeVore, and Michael Wakin. 2008. “A Simple Proof of the Restricted Isometry Property for Random Matrices.” Constructive Approximation 28 (3): 253–63. https://doi.org/10.1007/s00365-007-9003-x.

Baraniuk, Richard G. 2007. “Compressive Sensing.” IEEE Signal Processing Magazine 24 (4). http://users.isr.ist.utl.pt/~aguiar/CS_notes.pdf.

———. 2008. “Single-Pixel Imaging via Compressive Sampling.” IEEE Signal Processing Magazine 25 (2): 83–91. https://doi.org/10.1109/MSP.2007.914730.

Baraniuk, Richard G., Volkan Cevher, Marco F. Duarte, and Chinmay Hegde. 2010. “Model-Based Compressive Sensing.” IEEE Transactions on Information Theory 56 (4): 1982–2001. https://doi.org/10.1109/TIT.2010.2040894.

Baron, Dror, Shriram Sarvotham, and Richard G. Baraniuk. 2010. “Bayesian Compressive Sensing via Belief Propagation.” IEEE Transactions on Signal Processing 58 (1): 269–80. https://doi.org/10.1109/TSP.2009.2027773.

Bayati, Mohsen, and Andrea Montanari. 2011. “The Dynamics of Message Passing on Dense Graphs, with Applications to Compressed Sensing.” IEEE Transactions on Information Theory 57 (2): 764–85. https://doi.org/10.1109/TIT.2010.2094817.

Berger, Bonnie, Noah M. Daniels, and Y. William Yu. 2016. “Computational Biology in the 21st Century: Scaling with Compressive Algorithms.” Communications of the ACM 59 (8): 72–80. https://doi.org/10.1145/2957324.

Bian, W., and X. Chen. 2013. “Worst-Case Complexity of Smoothing Quadratic Regularization Methods for Non-Lipschitzian Optimization.” SIAM Journal on Optimization 23 (3): 1718–41. https://doi.org/10.1137/120864908.

Bingham, Ella, and Heikki Mannila. 2001. “Random Projection in Dimensionality Reduction: Applications to Image and Text Data.” In Proceedings of the Seventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 245–50. KDD '01. New York, NY, USA: ACM. https://doi.org/10.1145/502512.502546.

Blanchard, Jeffrey D. 2013. “Toward Deterministic Compressed Sensing.” Proceedings of the National Academy of Sciences 110 (4): 1146–7. https://doi.org/10.1073/pnas.1221228110.

Bora, Ashish, Ajil Jalal, Eric Price, and Alexandros G. Dimakis. 2017. “Compressed Sensing Using Generative Models.” In International Conference on Machine Learning, 537–46. http://arxiv.org/abs/1703.03208.

Borgerding, Mark, and Philip Schniter. 2016. “Onsager-Corrected Deep Networks for Sparse Linear Inverse Problems,” December. http://arxiv.org/abs/1612.01183.

Bruckstein, A. M., Michael Elad, and M. Zibulevsky. 2008a. “Sparse Non-Negative Solution of a Linear System of Equations Is Unique.” In 3rd International Symposium on Communications, Control and Signal Processing, 2008. ISCCSP 2008, 762–67. https://doi.org/10.1109/ISCCSP.2008.4537325.

———. 2008b. “On the Uniqueness of Nonnegative Sparse Solutions to Underdetermined Systems of Equations.” IEEE Transactions on Information Theory 54 (11): 4813–20. https://doi.org/10.1109/TIT.2008.929920.

Cai, T. Tony, Guangwu Xu, and Jun Zhang. 2008. “On Recovery of Sparse Signals via ℓ1 Minimization,” May. http://arxiv.org/abs/0805.0149.

Cai, T. Tony, and Anru Zhang. 2015. “ROP: Matrix Recovery via Rank-One Projections.” The Annals of Statistics 43 (1): 102–38. https://doi.org/10.1214/14-AOS1267.

Candès, Emmanuel J. 2014. “Mathematics of Sparsity (and a Few Other Things).” ICM 2014 Proceedings, to appear. http://www2.isye.gatech.edu/~yxie77/isye6416/ICM2014.pdf.

Candès, Emmanuel J., and Mark A. Davenport. 2011. “How Well Can We Estimate a Sparse Vector?” April. http://arxiv.org/abs/1104.5246.

Candès, Emmanuel J., Yonina C. Eldar, Deanna Needell, and Paige Randall. 2011. “Compressed Sensing with Coherent and Redundant Dictionaries.” Applied and Computational Harmonic Analysis 31 (1): 59–73. https://doi.org/10.1016/j.acha.2010.10.002.

Candès, Emmanuel J., and Benjamin Recht. 2009. “Exact Matrix Completion via Convex Optimization.” Foundations of Computational Mathematics 9 (6): 717–72. https://doi.org/10.1007/s10208-009-9045-5.

Candès, Emmanuel J., J. Romberg, and T. Tao. 2006a. “Robust Uncertainty Principles: Exact Signal Reconstruction from Highly Incomplete Frequency Information.” IEEE Transactions on Information Theory 52 (2): 489–509. https://doi.org/10.1109/TIT.2005.862083.

Candès, Emmanuel J., Justin K. Romberg, and Terence Tao. 2006b. “Stable Signal Recovery from Incomplete and Inaccurate Measurements.” Communications on Pure and Applied Mathematics 59 (8): 1207–23. https://doi.org/10.1002/cpa.20124.

Candès, Emmanuel J., and Terence Tao. 2006. “Near-Optimal Signal Recovery from Random Projections: Universal Encoding Strategies?” IEEE Transactions on Information Theory 52 (12): 5406–25. https://doi.org/10.1109/TIT.2006.885507.

———. 2008. “The Uniform Uncertainty Principle and Compressed Sensing.”

Candès, Emmanuel J., and M.B. Wakin. 2008. “An Introduction to Compressive Sampling.” IEEE Signal Processing Magazine 25 (2): 21–30. https://doi.org/10.1109/MSP.2007.914731.

Candès, Emmanuel, and Terence Tao. 2005. “Decoding by Linear Programming.” IEEE Transactions on Information Theory 51 (12): 4203–15. https://doi.org/10.1109/TIT.2005.858979.

Carmi, Avishy Y. 2013. “Compressive System Identification: Sequential Methods and Entropy Bounds.” Digital Signal Processing 23 (3): 751–70. https://doi.org/10.1016/j.dsp.2012.12.006.

———. 2014. “Compressive System Identification.” In Compressed Sensing & Sparse Filtering, edited by Avishy Y. Carmi, Lyudmila Mihaylova, and Simon J. Godsill, 281–324. Signals and Communication Technology. Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-642-38398-4_9.

Cevher, Volkan, Marco F. Duarte, Chinmay Hegde, and Richard Baraniuk. 2009. “Sparse Signal Recovery Using Markov Random Fields.” In Advances in Neural Information Processing Systems, 257–64. Curran Associates, Inc. http://papers.nips.cc/paper/3487-sparse-signal-recovery-using-markov-random-fields.

Chartrand, R., and Wotao Yin. 2008. “Iteratively Reweighted Algorithms for Compressive Sensing.” In IEEE International Conference on Acoustics, Speech and Signal Processing, 2008. ICASSP 2008, 3869–72. https://doi.org/10.1109/ICASSP.2008.4518498.

Chen, Xiaojun. 2012. “Smoothing Methods for Nonsmooth, Nonconvex Minimization.” Mathematical Programming 134 (1): 71–99. https://doi.org/10.1007/s10107-012-0569-0.

Chen, Xiaojun, and Weijun Zhou. 2013. “Convergence of the Reweighted ℓ1 Minimization Algorithm for ℓ2–ℓp Minimization.” Computational Optimization and Applications 59 (1-2): 47–61. https://doi.org/10.1007/s10589-013-9553-8.

Chretien, Stephane. 2008. “An Alternating L1 Approach to the Compressed Sensing Problem,” September. http://arxiv.org/abs/0809.0660.

Dasgupta, Sanjoy. 2000. “Experiments with Random Projection.” In Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence, 143–51. UAI'00. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc. http://arxiv.org/abs/1301.3849.

Dasgupta, Sanjoy, and Anupam Gupta. 2003. “An Elementary Proof of a Theorem of Johnson and Lindenstrauss.” Random Structures & Algorithms 22 (1): 60–65. https://doi.org/10.1002/rsa.10073.

Dasgupta, Sanjoy, Daniel Hsu, and Nakul Verma. 2012. “A Concentration Theorem for Projections.” arXiv Preprint arXiv:1206.6813. http://arxiv.org/abs/1206.6813.

Daubechies, I., M. Defrise, and C. De Mol. 2004. “An Iterative Thresholding Algorithm for Linear Inverse Problems with a Sparsity Constraint.” Communications on Pure and Applied Mathematics 57 (11): 1413–57. https://doi.org/10.1002/cpa.20042.

Daubechies, Ingrid, Ronald DeVore, Massimo Fornasier, and C. Sinan Güntürk. 2010. “Iteratively Reweighted Least Squares Minimization for Sparse Recovery.” Communications on Pure and Applied Mathematics 63 (1): 1–38. https://doi.org/10.1002/cpa.20303.

DeVore, Ronald A. 1998. “Nonlinear Approximation.” Acta Numerica 7 (January): 51–150. https://doi.org/10.1017/S0962492900002816.

Diaconis, Persi, and David Freedman. 1984. “Asymptotics of Graphical Projection Pursuit.” The Annals of Statistics 12 (3): 793–815.

Donoho, David L. 2006. “Compressed Sensing.” IEEE Transactions on Information Theory 52 (4): 1289–1306. https://doi.org/10.1109/TIT.2006.871582.

Donoho, David L., and Michael Elad. 2003. “Optimally Sparse Representation in General (Nonorthogonal) Dictionaries via ℓ1 Minimization.” Proceedings of the National Academy of Sciences 100 (5): 2197–2202. https://doi.org/10.1073/pnas.0437847100.

Donoho, David L., A. Maleki, and A. Montanari. 2010. “Message Passing Algorithms for Compressed Sensing: I. Motivation and Construction.” In 2010 IEEE Information Theory Workshop (ITW), 1–5. https://doi.org/10.1109/ITWKSPS.2010.5503193.

Donoho, David L., Arian Maleki, and Andrea Montanari. 2009a. “Message-Passing Algorithms for Compressed Sensing.” Proceedings of the National Academy of Sciences 106 (45): 18914–9. https://doi.org/10.1073/pnas.0909892106.

———. 2009b. “Message Passing Algorithms for Compressed Sensing: II. Analysis and Validation.” In 2010 IEEE Information Theory Workshop (ITW), 1–5. https://doi.org/10.1109/ITWKSPS.2010.5503228.

Donoho, D. L., M. Elad, and V. N. Temlyakov. 2006. “Stable Recovery of Sparse Overcomplete Representations in the Presence of Noise.” IEEE Transactions on Information Theory 52 (1): 6–18. https://doi.org/10.1109/TIT.2005.860430.

Duarte, Marco F., and Richard G. Baraniuk. 2013. “Spectral Compressive Sensing.” Applied and Computational Harmonic Analysis 35 (1): 111–29. https://doi.org/10.1016/j.acha.2012.08.003.

Flammia, Steven T., David Gross, Yi-Kai Liu, and Jens Eisert. 2012. “Quantum Tomography via Compressed Sensing: Error Bounds, Sample Complexity, and Efficient Estimators.” New Journal of Physics 14 (9): 095022. https://doi.org/10.1088/1367-2630/14/9/095022.

Foygel, Rina, and Nathan Srebro. 2011. “Fast-Rate and Optimistic-Rate Error Bounds for L1-Regularized Regression,” August. http://arxiv.org/abs/1108.0373.

Freund, Yoav, Sanjoy Dasgupta, Mayank Kabra, and Nakul Verma. 2007. “Learning the Structure of Manifolds Using Random Projections.” In Advances in Neural Information Processing Systems, 473–80. http://machinelearning.wustl.edu/mlpapers/paper_files/NIPS2007_133.pdf.

Giryes, R., G. Sapiro, and A. M. Bronstein. 2016. “Deep Neural Networks with Random Gaussian Weights: A Universal Classification Strategy?” IEEE Transactions on Signal Processing 64 (13): 3444–57. https://doi.org/10.1109/TSP.2016.2546221.

Graff, Christian G., and Emil Y. Sidky. 2015. “Compressive Sensing in Medical Imaging.” Applied Optics 54 (8): C23–C44. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4669980/.

Hall, Peter, and Ker-Chau Li. 1993. “On Almost Linearity of Low Dimensional Projections from High Dimensional Data.” The Annals of Statistics 21 (2): 867–89.

Harchaoui, Zaid, Anatoli Juditsky, and Arkadi Nemirovski. 2015. “Conditional Gradient Algorithms for Norm-Regularized Smooth Convex Optimization.” Mathematical Programming 152 (1-2): 75–112. https://doi.org/10.1007/s10107-014-0778-9.

Hassanieh, Haitham, Piotr Indyk, Dina Katabi, and Eric Price. 2012. “Nearly Optimal Sparse Fourier Transform.” In Proceedings of the Forty-Fourth Annual ACM Symposium on Theory of Computing, 563–78. STOC '12. New York, NY, USA: ACM. https://doi.org/10.1145/2213977.2214029.

Hassanieh, H., P. Indyk, D. Katabi, and E. Price. 2012. “Simple and Practical Algorithm for Sparse Fourier Transform.” In Proceedings of the Twenty-Third Annual ACM-SIAM Symposium on Discrete Algorithms, 1183–94. Proceedings. Kyoto, Japan: Society for Industrial and Applied Mathematics. http://groups.csail.mit.edu/netmit/sFFT/soda_paper.pdf.

Hegde, Chinmay, and Richard G. Baraniuk. 2012. “Signal Recovery on Incoherent Manifolds.” IEEE Transactions on Information Theory 58 (12): 7204–14. https://doi.org/10.1109/TIT.2012.2210860.

Hormati, A., O. Roy, Y.M. Lu, and M. Vetterli. 2010. “Distributed Sampling of Signals Linked by Sparse Filtering: Theory and Applications.” IEEE Transactions on Signal Processing 58 (3): 1095–1109. https://doi.org/10.1109/TSP.2009.2034908.

Hoyer, Patrik O. n.d. “Non-Negative Matrix Factorization with Sparseness Constraints.” Journal of Machine Learning Research 5 (9): 1457–69. Accessed October 10, 2014. http://arxiv.org/abs/cs/0408058.

Jaggi, Martin. 2013. “Revisiting Frank-Wolfe: Projection-Free Sparse Convex Optimization.” In Journal of Machine Learning Research, 427–35. http://jmlr.csail.mit.edu/proceedings/papers/v28/jaggi13.html.

Kabán, Ata. 2014. “New Bounds on Compressive Linear Least Squares Regression.” In Journal of Machine Learning Research, 448–56. http://jmlr.org/proceedings/papers/v33/kaban14.pdf.

Kim, Daeun, and Justin P. Haldar. 2016. “Greedy Algorithms for Nonnegativity-Constrained Simultaneous Sparse Recovery.” Signal Processing 125 (August): 274–89. https://doi.org/10.1016/j.sigpro.2016.01.021.

Lahiri, Subhaneil, Peiran Gao, and Surya Ganguli. 2016. “Random Projections of Random Manifolds,” July. http://arxiv.org/abs/1607.04331.

Li, Ping, Trevor J. Hastie, and Kenneth W. Church. 2006. “Very Sparse Random Projections.” In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 287–96. KDD '06. New York, NY, USA: ACM. https://doi.org/10.1145/1150402.1150436.

Li, Yingying, and Stanley Osher. 2009. “Coordinate Descent Optimization for ℓ1 Minimization with Application to Compressed Sensing; a Greedy Algorithm.” Inverse Problems and Imaging 3 (3): 487–503. http://ns1.aimsciences.org/journals/pdfs.jsp?paperID=4386&mode=full.

Matei, Basarab, and Yves Meyer. 2010. “Simple Quasicrystals Are Sets of Stable Sampling.” Complex Variables and Elliptic Equations 55 (8-10): 947–64. https://doi.org/10.1080/17476930903394689.

———. n.d. “A Variant on the Compressed Sensing of Emmanuel Candes.” Accessed April 1, 2016. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.154.6694&rep=rep1&type=pdf.

Mishali, Moshe, and Yonina C. Eldar. 2010. “From Theory to Practice: Sub-Nyquist Sampling of Sparse Wideband Analog Signals.” IEEE Journal of Selected Topics in Signal Processing 4 (2): 375–91. https://doi.org/10.1109/JSTSP.2010.2042414.

Montanari, Andrea. 2012. “Graphical Models Concepts in Compressed Sensing.” Compressed Sensing: Theory and Applications, 394–438. http://arxiv.org/abs/1011.4328.

Mousavi, Ali, and Richard G. Baraniuk. 2017. “Learning to Invert: Signal Recovery via Deep Convolutional Networks.” In ICASSP. http://arxiv.org/abs/1701.03891.

Needell, D., and J. A. Tropp. 2008. “CoSaMP: Iterative Signal Recovery from Incomplete and Inaccurate Samples,” March. http://arxiv.org/abs/0803.2392.

Oka, A, and L. Lampe. 2008. “Compressed Sensing of Gauss-Markov Random Field with Wireless Sensor Networks.” In 5th IEEE Sensor Array and Multichannel Signal Processing Workshop, 2008. SAM 2008, 257–60. https://doi.org/10.1109/SAM.2008.4606867.

Olshausen, B. A., and D. J. Field. 1996. “Natural Image Statistics and Efficient Coding.” Network (Bristol, England) 7 (2): 333–39. https://doi.org/10.1088/0954-898X/7/2/014.

Olshausen, Bruno A, and David J Field. 2004. “Sparse Coding of Sensory Inputs.” Current Opinion in Neurobiology 14 (4): 481–87. https://doi.org/10.1016/j.conb.2004.07.007.

Pawar, Sameer, and Kannan Ramchandran. 2015. “A Robust Sub-Linear Time R-FFAST Algorithm for Computing a Sparse DFT,” January. http://arxiv.org/abs/1501.00320.

Peleg, Tomer, Yonina C. Eldar, and Michael Elad. 2010. “Exploiting Statistical Dependencies in Sparse Representations for Signal Recovery.” IEEE Transactions on Signal Processing 60 (5): 2286–2303. https://doi.org/10.1109/TSP.2012.2188520.

Ravishankar, Saiprasad, and Yoram Bresler. 2015. “Efficient Blind Compressed Sensing Using Sparsifying Transforms with Convergence Guarantees and Application to MRI,” January. http://arxiv.org/abs/1501.02923.

Ravishankar, S., and Y. Bresler. 2015. “Sparsifying Transform Learning with Efficient Optimal Updates and Convergence Guarantees.” IEEE Transactions on Signal Processing 63 (9): 2389–2404. https://doi.org/10.1109/TSP.2015.2405503.

Rish, Irina, and Genady Grabarnik. 2014. “Sparse Signal Recovery with Exponential-Family Noise.” In Compressed Sensing & Sparse Filtering, edited by Avishy Y. Carmi, Lyudmila Mihaylova, and Simon J. Godsill, 77–93. Signals and Communication Technology. Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-642-38398-4_3.

Rish, Irina, and Genady Ya Grabarnik. 2015. Sparse Modeling: Theory, Algorithms, and Applications. Chapman & Hall/CRC Machine Learning & Pattern Recognition Series. Boca Raton, FL: CRC Press, Taylor & Francis Group.

Romberg, J. 2008. “Imaging via Compressive Sampling.” IEEE Signal Processing Magazine 25 (2): 14–20. https://doi.org/10.1109/MSP.2007.914729.

Rosset, Saharon, and Ji Zhu. 2007. “Piecewise Linear Regularized Solution Paths.” The Annals of Statistics 35 (3): 1012–30. https://doi.org/10.1214/009053606000001370.

Rubinstein, Ron, T. Peleg, and Michael Elad. 2013. “Analysis K-SVD: A Dictionary-Learning Algorithm for the Analysis Sparse Model.” IEEE Transactions on Signal Processing 61 (3): 661–77. https://doi.org/10.1109/TSP.2012.2226445.

Sarvotham, Shriram, Dror Baron, and Richard G. Baraniuk. 2006. “Measurements Vs. Bits: Compressed Sensing Meets Information Theory.” In Proc. Allerton Conf. on Comm., Control, and Computing. http://hdl.handle.net/1911/20323.

Schniter, P., and S. Rangan. 2012. “Compressive Phase Retrieval via Generalized Approximate Message Passing.” In 2012 50th Annual Allerton Conference on Communication, Control, and Computing (Allerton), 815–22. https://doi.org/10.1109/Allerton.2012.6483302.

Shalev-Shwartz, Shai, and Ambuj Tewari. 2011. “Stochastic Methods for L1-Regularized Loss Minimization.” Journal of Machine Learning Research 12 (July): 1865–92. http://www.machinelearning.org/archive/icml2009/papers/262.pdf.

Smith, Virginia, Simone Forte, Michael I. Jordan, and Martin Jaggi. 2015. “L1-Regularized Distributed Optimization: A Communication-Efficient Primal-Dual Framework,” December. http://arxiv.org/abs/1512.04011.

Song, Ruiyang, Yao Xie, and Sebastian Pokutta. 2015. “Sequential Information Guided Sensing,” August. http://arxiv.org/abs/1509.00130.

Tropp, J.A. 2006. “Just Relax: Convex Programming Methods for Identifying Sparse Signals in Noise.” IEEE Transactions on Information Theory 52 (3): 1030–51. https://doi.org/10.1109/TIT.2005.864420.

Tropp, J. A., and S. J. Wright. 2010. “Computational Methods for Sparse Solution of Linear Inverse Problems.” Proceedings of the IEEE 98 (6): 948–58. https://doi.org/10.1109/JPROC.2010.2044010.

Vetterli, Martin. 1999. “Wavelets: Approximation and Compression–a Review.” In AeroSense'99, 3723:28–31. International Society for Optics and Photonics. https://doi.org/10.1117/12.342945.

Weidmann, Claudio, and Martin Vetterli. 2012. “Rate Distortion Behavior of Sparse Sources.” IEEE Transactions on Information Theory 58 (8): 4969–92. https://doi.org/10.1109/TIT.2012.2201335.

Wipf, David, and Srikantan Nagarajan. 2016. “Iterative Reweighted L1 and L2 Methods for Finding Sparse Solution.” Microsoft Research, July. https://www.microsoft.com/en-us/research/publication/iterative-reweighted-l1-l2-methods-finding-sparse-solution/.

Wu, R., W. Huang, and D. R. Chen. 2013. “The Exact Support Recovery of Sparse Signals with Noise via Orthogonal Matching Pursuit.” IEEE Signal Processing Letters 20 (4): 403–6. https://doi.org/10.1109/LSP.2012.2233734.

Wu, Yan, Mihaela Rosca, and Timothy Lillicrap. 2019. “Deep Compressed Sensing.” In International Conference on Machine Learning, 6850–60. http://arxiv.org/abs/1905.06723.

Yaghoobi, M., Sangnam Nam, R. Gribonval, and M.E. Davies. 2012. “Noise Aware Analysis Operator Learning for Approximately Cosparse Signals.” In 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 5409–12. https://doi.org/10.1109/ICASSP.2012.6289144.

Yang, Wenzhuo, and Huan Xu. 2015. “Streaming Sparse Principal Component Analysis.” In Journal of Machine Learning Research, 494–503. http://jmlr.org/proceedings/papers/v37/yangd15.html.

Zhang, Kai, Chuanren Liu, Jie Zhang, Hui Xiong, Eric Xing, and Jieping Ye. 2017. “Randomization or Condensation?: Linear-Cost Matrix Sketching via Cascaded Compression Sampling.” In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 615–23. KDD '17. New York, NY, USA: ACM. https://doi.org/10.1145/3097983.3098050.