The Living Thing / Notebooks :

Compressed sensing / compressed sampling

The fanciest ways of counting zero

Usefulness: 🔧
Novelty: 💡
Uncertainty: 🤪 🤪 🤪
Incompleteness: 🚧 🚧 🚧

Stand by for higgledy-piggledy notes on the theme of exploiting sparsity to recover signals from few non-local measurements, given that we know they are nearly sparse, in a sense that will be made clear soon. This is another twist on classic sampling theory.

Sparse regression is closely related, but with a stochastic process angle.

See also matrix factorisations, random projections, optimisation, model selection, multiple testing, random linear algebra, concentration inequalities, restricted isometry properties.

Basic Compressed Sensing

I’ll follow the intro of CENR11, which tries to unify many variants.

We attempt to recover a signal \(x\in \mathbb{R}^d\) from \(m\ll d\) noisy measurements \(y_k\) of the form

\[y_k =\langle a_k, x\rangle + z_k,\, 1\leq k \leq m,\]

or, as a matrix equation,

\[ y = Ax + z \]

where \(A\) is the \(m \times d\) matrix whose rows are the stacked measurement vectors \(a_k\), and the \(z_k\) terms denote i.i.d. measurement noise.

Now, if \(x\) is a sparse vector, and \(A\) satisfies an appropriate restricted isometry property, then we can construct an estimate \(\hat{x}\) with small error by minimising

\[ \hat{x}=\argmin_{\dot{x}} \|\dot{x}\|_1 \text{ subject to } \|A\dot{x}-y\|_2 \leq \varepsilon, \]

where \(\varepsilon \geq \|z\|_2.\)
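
To make that concrete, here is a minimal numerical sketch of the convex program above, assuming numpy and cvxpy are available; the dimensions, sparsity, noise level and tolerance are arbitrary illustrative choices of mine, not anything prescribed by CENR11.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
d, m, s = 200, 80, 8           # ambient dimension, measurements, sparsity
sigma = 0.01                   # noise standard deviation

# s-sparse ground truth
x_true = np.zeros(d)
support = rng.choice(d, size=s, replace=False)
x_true[support] = rng.standard_normal(s)

# Gaussian measurement matrix, scaled so ||Ax|| is roughly ||x|| for sparse x
A = rng.standard_normal((m, d)) / np.sqrt(m)
z = sigma * rng.standard_normal(m)
y = A @ x_true + z

# minimise ||x||_1 subject to ||Ax - y||_2 <= eps, with eps >= ||z||_2
eps = 1.1 * np.sqrt(m) * sigma          # rough guess, slightly above E||z||_2
x_hat = cp.Variable(d)
problem = cp.Problem(cp.Minimize(cp.norm1(x_hat)),
                     [cp.norm(A @ x_hat - y, 2) <= eps])
problem.solve()

print("relative error:",
      np.linalg.norm(x_hat.value - x_true) / np.linalg.norm(x_true))
```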

In the lecture notes on restricted isometry properties, Candès and Tao work not with vectors \(x\in \mathbb{R}^d\) but with functions \(f:G \to \mathbb{C}\) on Abelian groups like \(G=\mathbb{Z}/d\mathbb{Z},\) which is convenient for some phrasing: saying the signal is \(s\)-sparse then means that its support \(S=\operatorname{supp} \tilde{f}\subset G\) satisfies \(|S|=s\).

In the finite-dimensional vector framing, we can talk about best sparse approximations \(x_s\) to non-sparse vectors, \(x\).

\[ x_s = \argmin_{\|\dot{x}\|_0\leq s} \|x-\dot{x}\|_2 \]

where all the coefficients apart from the \(s\) largest are zeroed.
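
That best \(s\)-sparse approximation is plain hard thresholding; a tiny numpy helper, just so the notation has a concrete counterpart:

```python
import numpy as np

def best_s_sparse(x, s):
    """Keep the s largest-magnitude coefficients of x, zero the rest."""
    x_s = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-s:]   # indices of the s largest entries
    x_s[keep] = x[keep]
    return x_s
```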

The basic results give us attractive convex problems. There are also greedy optimisation versions, formulated as above but no longer convex; there we talk about Orthogonal Matching Pursuit, Iterative Thresholding and some other methods whose details I do not yet know, and which I think pop up in wavelets and sparse coding.
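
To fix ideas on the greedy side, here is a bare-bones Orthogonal Matching Pursuit in numpy. It is my own sketch of the textbook recipe (grow the support one column at a time, refit by least squares), with no claim to match any particular paper's variant or to be efficient.

```python
import numpy as np

def omp(A, y, s):
    """Plain Orthogonal Matching Pursuit: greedily grow the support,
    re-fitting by least squares at each step."""
    m, d = A.shape
    support = []
    residual = y.copy()
    for _ in range(s):
        # pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares fit restricted to the selected columns
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(d)
    x_hat[support] = coef
    return x_hat
```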

For all of these, the results tend to be something like:

given data \(y,\) the difference between my estimate \(\hat{x}\) and the oracle estimate \(\hat{x}_\text{oracle}\) is bounded by something-or-other, where the oracle estimate is the one where you know ahead of time the support \(S=\operatorname{supp}(x)\).

Candès gives an example result

\[ \|\hat{x}-x\|_2 \leq C_0\frac{\|x-x_s\|_1}{\sqrt{s}} + C_1\varepsilon \]

conditional upon

\[ \delta_{2s}(A) \lt \sqrt{2} -1 \]

where \(\delta_s(\cdot)\) gives the restricted isometry constant of a matrix, defined as the smallest constant such that \((1-\delta_s(A))\|x\|_2^2\leq \|Ax\|_2^2\leq (1+\delta_s(A))\|x\|_2^2\) for all \(s\)-sparse \(x\). That is, the measurement matrix does not change the norm of sparse signals “much”, and in particular, does not null them when \(\delta_s \lt 1.\)

This is apparently not the strongest bound out there, but in any bound of this form those constants look frustrating.

Measuring the restricted isometry constant of a given measurement matrix is presumably hard, although I haven’t tried yet. But generating random matrices that have a certain RIC with high probability is easy; that’s a neat trick in this area.
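
A crude numerical illustration of both points: computing \(\delta_s\) exactly would mean maximising the distortion over all \(\binom{d}{s}\) supports, but sampling random \(s\)-sparse vectors gives a cheap lower bound, and a \(1/\sqrt{m}\)-scaled Gaussian matrix typically comes out looking well behaved. A sketch only; all constants are arbitrary.

```python
import numpy as np

def ric_lower_bound(A, s, n_trials=2000, rng=None):
    """Monte Carlo *lower bound* on the restricted isometry constant
    delta_s(A): sample random s-sparse unit vectors and record the worst
    observed distortion of ||Ax||_2^2. The true constant is a maximum over
    all supports, so this only ever underestimates it."""
    rng = rng or np.random.default_rng()
    m, d = A.shape
    worst = 0.0
    for _ in range(n_trials):
        support = rng.choice(d, size=s, replace=False)
        x = np.zeros(d)
        x[support] = rng.standard_normal(s)
        x /= np.linalg.norm(x)
        worst = max(worst, abs(np.linalg.norm(A @ x) ** 2 - 1.0))
    return worst

# Gaussian matrices scaled by 1/sqrt(m) satisfy the RIP with high probability
rng = np.random.default_rng(1)
m, d, s = 80, 200, 8
A = rng.standard_normal((m, d)) / np.sqrt(m)
print("empirical lower bound on delta_s:", ric_lower_bound(A, s, rng=rng))
```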

Redundant compressed sensing

🚧 For now see restricted isometry principles.

Introductory texts

…Using random projections

Classic. Notes to come.

…Using deterministic projections

Surely this is close to quasi-Monte Carlo?

That phase transition

How well can you recover a matrix from a given number of measurements? In the obvious metrics there is a sudden jump in recovery quality as the number of measurements increases, at a threshold depending on the rank. This looks a lot like a physical phase transition. Hmm.

See statistical mechanics of statistics.
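
A toy version of the phenomenon, for sparse vectors rather than low-rank matrices: fix the dimension and sparsity, sweep the number of measurements, and record how often noiseless basis pursuit (cast as a linear program via scipy) recovers the truth. The sizes here are small and arbitrary so that it runs quickly; this is not a careful reproduction of the Donoho–Tanner diagrams.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """min ||x||_1 s.t. Ax = y, via the standard LP split x = u - v, u, v >= 0."""
    m, d = A.shape
    c = np.ones(2 * d)
    A_eq = np.hstack([A, -A])
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    return res.x[:d] - res.x[d:]

rng = np.random.default_rng(2)
d, s, trials = 60, 5, 20
for m in range(10, 45, 5):
    successes = 0
    for _ in range(trials):
        x_true = np.zeros(d)
        x_true[rng.choice(d, size=s, replace=False)] = rng.standard_normal(s)
        A = rng.standard_normal((m, d)) / np.sqrt(m)
        x_hat = basis_pursuit(A, A @ x_true)
        successes += np.linalg.norm(x_hat - x_true) < 1e-4
    print(f"m={m:3d}  success rate {successes / trials:.2f}")
```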

Weird things to be classified

csgm (BJPD17), compressed sensing using generative models, tries to find a model which is sparse with respect to… some manifold of the latent variables of… a generative model? Or something?

Sparse FFT.

Refs

Achlioptas, Dimitris. 2003. “Database-Friendly Random Projections: Johnson-Lindenstrauss with Binary Coins.” Journal of Computer and System Sciences, Special Issue on PODS 2001, 66 (4): 671–87. https://doi.org/10.1016/S0022-0000(03)00025-4.

Azizyan, Martin, Akshay Krishnamurthy, and Aarti Singh. 2015. “Extreme Compressive Sampling for Covariance Estimation,” June. http://arxiv.org/abs/1506.00898.

Bach, Francis, Rodolphe Jenatton, Julien Mairal, and Guillaume Obozinski. 2012. “Optimization with Sparsity-Inducing Penalties.” Foundations and Trends® in Machine Learning 4 (1): 1–106. https://doi.org/10.1561/2200000015.

Baraniuk, Richard, Mark Davenport, Ronald DeVore, and Michael Wakin. 2008. “A Simple Proof of the Restricted Isometry Property for Random Matrices.” Constructive Approximation 28 (3): 253–63. https://doi.org/10.1007/s00365-007-9003-x.

Baraniuk, Richard G. 2007. “Compressive Sensing.” IEEE Signal Processing Magazine 24 (4). http://users.isr.ist.utl.pt/~aguiar/CS_notes.pdf.

———. 2008. “Single-Pixel Imaging via Compressive Sampling.” IEEE Signal Processing Magazine 25 (2): 83–91. https://doi.org/10.1109/MSP.2007.914730.

Baraniuk, Richard G., Volkan Cevher, Marco F. Duarte, and Chinmay Hegde. 2010. “Model-Based Compressive Sensing.” IEEE Transactions on Information Theory 56 (4): 1982–2001. https://doi.org/10.1109/TIT.2010.2040894.

Baron, Dror, Shriram Sarvotham, and Richard G. Baraniuk. 2010. “Bayesian Compressive Sensing via Belief Propagation.” IEEE Transactions on Signal Processing 58 (1): 269–80. https://doi.org/10.1109/TSP.2009.2027773.

Bayati, Mohsen, and Andrea Montanari. 2011. “The Dynamics of Message Passing on Dense Graphs, with Applications to Compressed Sensing.” IEEE Transactions on Information Theory 57 (2): 764–85. https://doi.org/10.1109/TIT.2010.2094817.

Berger, Bonnie, Noah M. Daniels, and Y. William Yu. 2016. “Computational Biology in the 21st Century: Scaling with Compressive Algorithms.” Communications of the ACM 59 (8): 72–80. https://doi.org/10.1145/2957324.

Bian, W., and X. Chen. 2013. “Worst-Case Complexity of Smoothing Quadratic Regularization Methods for Non-Lipschitzian Optimization.” SIAM Journal on Optimization 23 (3): 1718–41. https://doi.org/10.1137/120864908.

Bingham, Ella, and Heikki Mannila. 2001. “Random Projection in Dimensionality Reduction: Applications to Image and Text Data.” In Proceedings of the Seventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 245–50. KDD ’01. New York, NY, USA: ACM. https://doi.org/10.1145/502512.502546.

Blanchard, Jeffrey D. 2013. “Toward Deterministic Compressed Sensing.” Proceedings of the National Academy of Sciences 110 (4): 1146–7. https://doi.org/10.1073/pnas.1221228110.

Bora, Ashish, Ajil Jalal, Eric Price, and Alexandros G. Dimakis. 2017. “Compressed Sensing Using Generative Models.” In International Conference on Machine Learning, 537–46. http://arxiv.org/abs/1703.03208.

Borgerding, Mark, and Philip Schniter. 2016. “Onsager-Corrected Deep Networks for Sparse Linear Inverse Problems,” December. http://arxiv.org/abs/1612.01183.

Bruckstein, A. M., Michael Elad, and M. Zibulevsky. 2008a. “Sparse Non-Negative Solution of a Linear System of Equations Is Unique.” In 3rd International Symposium on Communications, Control and Signal Processing, 2008. ISCCSP 2008, 762–67. https://doi.org/10.1109/ISCCSP.2008.4537325.

———. 2008b. “On the Uniqueness of Nonnegative Sparse Solutions to Underdetermined Systems of Equations.” IEEE Transactions on Information Theory 54 (11): 4813–20. https://doi.org/10.1109/TIT.2008.929920.

Cai, T. Tony, Guangwu Xu, and Jun Zhang. 2008. “On Recovery of Sparse Signals via ℓ1 Minimization,” May. http://arxiv.org/abs/0805.0149.

Cai, T. Tony, and Anru Zhang. 2015. “ROP: Matrix Recovery via Rank-One Projections.” The Annals of Statistics 43 (1): 102–38. https://doi.org/10.1214/14-AOS1267.

Candès, Emmanuel J. 2014. “Mathematics of Sparsity (and Few Other Things).” ICM 2014 Proceedings, to Appear. http://www2.isye.gatech.edu/~yxie77/isye6416/ICM2014.pdf.

Candès, Emmanuel J., and Mark A. Davenport. 2011. “How Well Can We Estimate a Sparse Vector?” April. http://arxiv.org/abs/1104.5246.

Candès, Emmanuel J., Yonina C. Eldar, Deanna Needell, and Paige Randall. 2011. “Compressed Sensing with Coherent and Redundant Dictionaries.” Applied and Computational Harmonic Analysis 31 (1): 59–73. https://doi.org/10.1016/j.acha.2010.10.002.

Candès, Emmanuel J., and Benjamin Recht. 2009. “Exact Matrix Completion via Convex Optimization.” Foundations of Computational Mathematics 9 (6): 717–72. https://doi.org/10.1007/s10208-009-9045-5.

Candès, Emmanuel J., J. Romberg, and T. Tao. 2006a. “Robust Uncertainty Principles: Exact Signal Reconstruction from Highly Incomplete Frequency Information.” IEEE Transactions on Information Theory 52 (2): 489–509. https://doi.org/10.1109/TIT.2005.862083.

Candès, Emmanuel J., Justin K. Romberg, and Terence Tao. 2006b. “Stable Signal Recovery from Incomplete and Inaccurate Measurements.” Communications on Pure and Applied Mathematics 59 (8): 1207–23. https://doi.org/10.1002/cpa.20124.

Candès, Emmanuel J., and Terence Tao. 2006. “Near-Optimal Signal Recovery from Random Projections: Universal Encoding Strategies?” IEEE Transactions on Information Theory 52 (12): 5406–25. https://doi.org/10.1109/TIT.2006.885507.

———. 2008. “The Uniform Uncertainty Principle and Compressed Sensing.”

Candès, Emmanuel J., and M.B. Wakin. 2008. “An Introduction to Compressive Sampling.” IEEE Signal Processing Magazine 25 (2): 21–30. https://doi.org/10.1109/MSP.2007.914731.

Candès, Emmanuel, and Terence Tao. 2005. “Decoding by Linear Programming.” IEEE Transactions on Information Theory 51 (12): 4203–15. https://doi.org/10.1109/TIT.2005.858979.

Carmi, Avishy Y. 2013. “Compressive System Identification: Sequential Methods and Entropy Bounds.” Digital Signal Processing 23 (3): 751–70. https://doi.org/10.1016/j.dsp.2012.12.006.

———. 2014. “Compressive System Identification.” In Compressed Sensing & Sparse Filtering, edited by Avishy Y. Carmi, Lyudmila Mihaylova, and Simon J. Godsill, 281–324. Signals and Communication Technology. Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-642-38398-4_9.

Cevher, Volkan, Marco F. Duarte, Chinmay Hegde, and Richard Baraniuk. 2009. “Sparse Signal Recovery Using Markov Random Fields.” In Advances in Neural Information Processing Systems, 257–64. Curran Associates, Inc. http://papers.nips.cc/paper/3487-sparse-signal-recovery-using-markov-random-fields.

Chartrand, R., and Wotao Yin. 2008. “Iteratively Reweighted Algorithms for Compressive Sensing.” In IEEE International Conference on Acoustics, Speech and Signal Processing, 2008. ICASSP 2008, 3869–72. https://doi.org/10.1109/ICASSP.2008.4518498.

Chen, Xiaojun. 2012. “Smoothing Methods for Nonsmooth, Nonconvex Minimization.” Mathematical Programming 134 (1): 71–99. https://doi.org/10.1007/s10107-012-0569-0.

Chen, Xiaojun, and Weijun Zhou. 2013. “Convergence of the Reweighted ℓ1 Minimization Algorithm for ℓ2–ℓp Minimization.” Computational Optimization and Applications 59 (1-2): 47–61. https://doi.org/10.1007/s10589-013-9553-8.

Chretien, Stephane. 2008. “An Alternating L1 Approach to the Compressed Sensing Problem,” September. http://arxiv.org/abs/0809.0660.

Dasgupta, Sanjoy. 2000. “Experiments with Random Projection.” In Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence, 143–51. UAI’00. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc. http://arxiv.org/abs/1301.3849.

Dasgupta, Sanjoy, and Anupam Gupta. 2003. “An Elementary Proof of a Theorem of Johnson and Lindenstrauss.” Random Structures & Algorithms 22 (1): 60–65. https://doi.org/10.1002/rsa.10073.

Dasgupta, Sanjoy, Daniel Hsu, and Nakul Verma. 2012. “A Concentration Theorem for Projections.” arXiv Preprint arXiv:1206.6813. http://arxiv.org/abs/1206.6813.

Daubechies, I., M. Defrise, and C. De Mol. 2004. “An Iterative Thresholding Algorithm for Linear Inverse Problems with a Sparsity Constraint.” Communications on Pure and Applied Mathematics 57 (11): 1413–57. https://doi.org/10.1002/cpa.20042.

Daubechies, Ingrid, Ronald DeVore, Massimo Fornasier, and C. Si̇nan Güntürk. 2010. “Iteratively Reweighted Least Squares Minimization for Sparse Recovery.” Communications on Pure and Applied Mathematics 63 (1): 1–38. https://doi.org/10.1002/cpa.20303.

DeVore, Ronald A. 1998. “Nonlinear Approximation.” Acta Numerica 7 (January): 51–150. https://doi.org/10.1017/S0962492900002816.

Diaconis, Persi, and David Freedman. 1984. “Asymptotics of Graphical Projection Pursuit.” The Annals of Statistics 12 (3): 793–815.

Donoho, David L. 2006. “Compressed Sensing.” IEEE Transactions on Information Theory 52 (4): 1289–1306. https://doi.org/10.1109/TIT.2006.871582.

Donoho, David L., and Michael Elad. 2003. “Optimally Sparse Representation in General (Nonorthogonal) Dictionaries via ℓ1 Minimization.” Proceedings of the National Academy of Sciences 100 (5): 2197–2202. https://doi.org/10.1073/pnas.0437847100.

Donoho, David L., A. Maleki, and A. Montanari. 2010. “Message Passing Algorithms for Compressed Sensing: I. Motivation and Construction.” In 2010 IEEE Information Theory Workshop (ITW), 1–5. https://doi.org/10.1109/ITWKSPS.2010.5503193.

Donoho, David L., Arian Maleki, and Andrea Montanari. 2009a. “Message-Passing Algorithms for Compressed Sensing.” Proceedings of the National Academy of Sciences 106 (45): 18914–9. https://doi.org/10.1073/pnas.0909892106.

———. 2009b. “Message Passing Algorithms for Compressed Sensing: II. Analysis and Validation.” In 2010 IEEE Information Theory Workshop (ITW), 1–5. https://doi.org/10.1109/ITWKSPS.2010.5503228.

Donoho, D. L., M. Elad, and V. N. Temlyakov. 2006. “Stable Recovery of Sparse Overcomplete Representations in the Presence of Noise.” IEEE Transactions on Information Theory 52 (1): 6–18. https://doi.org/10.1109/TIT.2005.860430.

Duarte, Marco F., and Richard G. Baraniuk. 2013. “Spectral Compressive Sensing.” Applied and Computational Harmonic Analysis 35 (1): 111–29. https://doi.org/10.1016/j.acha.2012.08.003.

Flammia, Steven T., David Gross, Yi-Kai Liu, and Jens Eisert. 2012. “Quantum Tomography via Compressed Sensing: Error Bounds, Sample Complexity, and Efficient Estimators.” New Journal of Physics 14 (9): 095022. https://doi.org/10.1088/1367-2630/14/9/095022.

Foygel, Rina, and Nathan Srebro. 2011. “Fast-Rate and Optimistic-Rate Error Bounds for L1-Regularized Regression,” August. http://arxiv.org/abs/1108.0373.

Freund, Yoav, Sanjoy Dasgupta, Mayank Kabra, and Nakul Verma. 2007. “Learning the Structure of Manifolds Using Random Projections.” In Advances in Neural Information Processing Systems, 473–80. http://machinelearning.wustl.edu/mlpapers/paper_files/NIPS2007_133.pdf.

Giryes, R., G. Sapiro, and A. M. Bronstein. 2016. “Deep Neural Networks with Random Gaussian Weights: A Universal Classification Strategy?” IEEE Transactions on Signal Processing 64 (13): 3444–57. https://doi.org/10.1109/TSP.2016.2546221.

Graff, Christian G., and Emil Y. Sidky. 2015. “Compressive Sensing in Medical Imaging.” Applied Optics 54 (8): C23–C44. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4669980/.

Hall, Peter, and Ker-Chau Li. 1993. “On Almost Linearity of Low Dimensional Projections from High Dimensional Data.” The Annals of Statistics 21 (2): 867–89.

Harchaoui, Zaid, Anatoli Juditsky, and Arkadi Nemirovski. 2015. “Conditional Gradient Algorithms for Norm-Regularized Smooth Convex Optimization.” Mathematical Programming 152 (1-2): 75–112. https://doi.org/10.1007/s10107-014-0778-9.

Hassanieh, Haitham, Piotr Indyk, Dina Katabi, and Eric Price. 2012. “Nearly Optimal Sparse Fourier Transform.” In Proceedings of the Forty-Fourth Annual ACM Symposium on Theory of Computing, 563–78. STOC ’12. New York, NY, USA: ACM. https://doi.org/10.1145/2213977.2214029.

Hassanieh, H., P. Indyk, D. Katabi, and E. Price. 2012. “Simple and Practical Algorithm for Sparse Fourier Transform.” In Proceedings of the Twenty-Third Annual ACM-SIAM Symposium on Discrete Algorithms, 1183–94. Proceedings. Kyoto, Japan: Society for Industrial and Applied Mathematics. http://groups.csail.mit.edu/netmit/sFFT/soda_paper.pdf.

Hegde, Chinmay, and Richard G. Baraniuk. 2012. “Signal Recovery on Incoherent Manifolds.” IEEE Transactions on Information Theory 58 (12): 7204–14. https://doi.org/10.1109/TIT.2012.2210860.

Hormati, A., O. Roy, Y.M. Lu, and M. Vetterli. 2010. “Distributed Sampling of Signals Linked by Sparse Filtering: Theory and Applications.” IEEE Transactions on Signal Processing 58 (3): 1095–1109. https://doi.org/10.1109/TSP.2009.2034908.

Hoyer, Patrik O. n.d. “Non-Negative Matrix Factorization with Sparseness Constraints.” Journal of Machine Learning Research 5 (9): 1457–69. Accessed October 10, 2014. http://arxiv.org/abs/cs/0408058.

Jaggi, Martin. 2013. “Revisiting Frank-Wolfe: Projection-Free Sparse Convex Optimization.” In Journal of Machine Learning Research, 427–35. http://jmlr.csail.mit.edu/proceedings/papers/v28/jaggi13.html.

Kabán, Ata. 2014. “New Bounds on Compressive Linear Least Squares Regression.” In Journal of Machine Learning Research, 448–56. http://jmlr.org/proceedings/papers/v33/kaban14.pdf.

Kim, Daeun, and Justin P. Haldar. 2016. “Greedy Algorithms for Nonnegativity-Constrained Simultaneous Sparse Recovery.” Signal Processing 125 (August): 274–89. https://doi.org/10.1016/j.sigpro.2016.01.021.

Lahiri, Subhaneil, Peiran Gao, and Surya Ganguli. 2016. “Random Projections of Random Manifolds,” July. http://arxiv.org/abs/1607.04331.

Li, Ping, Trevor J. Hastie, and Kenneth W. Church. 2006. “Very Sparse Random Projections.” In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 287–96. KDD ’06. New York, NY, USA: ACM. https://doi.org/10.1145/1150402.1150436.

Li, Yingying, and Stanley Osher. 2009. “Coordinate Descent Optimization for ℓ 1 Minimization with Application to Compressed Sensing; a Greedy Algorithm.” Inverse Problems and Imaging 3 (3): 487–503. http://ns1.aimsciences.org/journals/pdfs.jsp?paperID=4386&mode=full.

Matei, Basarab, and Yves Meyer. 2010. “Simple Quasicrystals Are Sets of Stable Sampling.” Complex Variables and Elliptic Equations 55 (8-10): 947–64. https://doi.org/10.1080/17476930903394689.

———. n.d. “A Variant on the Compressed Sensing of Emmanuel Candes.” Accessed April 1, 2016. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.154.6694&rep=rep1&type=pdf.

Mishali, Moshe, and Yonina C. Eldar. 2010. “From Theory to Practice: Sub-Nyquist Sampling of Sparse Wideband Analog Signals.” IEEE Journal of Selected Topics in Signal Processing 4 (2): 375–91. https://doi.org/10.1109/JSTSP.2010.2042414.

Montanari, Andrea. 2012. “Graphical Models Concepts in Compressed Sensing.” Compressed Sensing: Theory and Applications, 394–438. http://arxiv.org/abs/1011.4328.

Mousavi, Ali, and Richard G. Baraniuk. 2017. “Learning to Invert: Signal Recovery via Deep Convolutional Networks.” In ICASSP. http://arxiv.org/abs/1701.03891.

Needell, D., and J. A. Tropp. 2008. “CoSaMP: Iterative Signal Recovery from Incomplete and Inaccurate Samples,” March. http://arxiv.org/abs/0803.2392.

Oka, A, and L. Lampe. 2008. “Compressed Sensing of Gauss-Markov Random Field with Wireless Sensor Networks.” In 5th IEEE Sensor Array and Multichannel Signal Processing Workshop, 2008. SAM 2008, 257–60. https://doi.org/10.1109/SAM.2008.4606867.

Olshausen, B. A., and D. J. Field. 1996. “Natural Image Statistics and Efficient Coding.” Network (Bristol, England) 7 (2): 333–39. https://doi.org/10.1088/0954-898X/7/2/014.

Olshausen, Bruno A, and David J Field. 2004. “Sparse Coding of Sensory Inputs.” Current Opinion in Neurobiology 14 (4): 481–87. https://doi.org/10.1016/j.conb.2004.07.007.

Pawar, Sameer, and Kannan Ramchandran. 2015. “A Robust Sub-Linear Time R-FFAST Algorithm for Computing a Sparse DFT,” January. http://arxiv.org/abs/1501.00320.

Peleg, Tomer, Yonina C. Eldar, and Michael Elad. 2010. “Exploiting Statistical Dependencies in Sparse Representations for Signal Recovery.” IEEE Transactions on Signal Processing 60 (5): 2286–2303. https://doi.org/10.1109/TSP.2012.2188520.

Ravishankar, Saiprasad, and Yoram Bresler. 2015. “Efficient Blind Compressed Sensing Using Sparsifying Transforms with Convergence Guarantees and Application to MRI,” January. http://arxiv.org/abs/1501.02923.

Ravishankar, S., and Y. Bresler. 2015. “Sparsifying Transform Learning with Efficient Optimal Updates and Convergence Guarantees.” IEEE Transactions on Signal Processing 63 (9): 2389–2404. https://doi.org/10.1109/TSP.2015.2405503.

Rish, Irina, and Genady Grabarnik. 2014. “Sparse Signal Recovery with Exponential-Family Noise.” In Compressed Sensing & Sparse Filtering, edited by Avishy Y. Carmi, Lyudmila Mihaylova, and Simon J. Godsill, 77–93. Signals and Communication Technology. Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-642-38398-4_3.

Rish, Irina, and Genady Ya Grabarnik. 2015. Sparse Modeling: Theory, Algorithms, and Applications. Chapman & Hall/CRC Machine Learning & Pattern Recognition Series. Boca Raton, FL: CRC Press, Taylor & Francis Group.

Romberg, J. 2008. “Imaging via Compressive Sampling.” IEEE Signal Processing Magazine 25 (2): 14–20. https://doi.org/10.1109/MSP.2007.914729.

Rosset, Saharon, and Ji Zhu. 2007. “Piecewise Linear Regularized Solution Paths.” The Annals of Statistics 35 (3): 1012–30. https://doi.org/10.1214/009053606000001370.

Rubinstein, Ron, T. Peleg, and Michael Elad. 2013. “Analysis K-SVD: A Dictionary-Learning Algorithm for the Analysis Sparse Model.” IEEE Transactions on Signal Processing 61 (3): 661–77. https://doi.org/10.1109/TSP.2012.2226445.

Sarvotham, Shriram, Dror Baron, and Richard G. Baraniuk. 2006. “Measurements Vs. Bits: Compressed Sensing Meets Information Theory.” In In Proc. Allerton Conf. On Comm., Control, and Computing. http://hdl.handle.net/1911/20323.

Schniter, P., and S. Rangan. 2012. “Compressive Phase Retrieval via Generalized Approximate Message Passing.” In 2012 50th Annual Allerton Conference on Communication, Control, and Computing (Allerton), 815–22. https://doi.org/10.1109/Allerton.2012.6483302.

Shalev-Shwartz, Shai, and Ambuj Tewari. 2011. “Stochastic Methods for L1-Regularized Loss Minimization.” Journal of Machine Learning Research 12 (July): 1865–92. http://www.machinelearning.org/archive/icml2009/papers/262.pdf.

Smith, Virginia, Simone Forte, Michael I. Jordan, and Martin Jaggi. 2015. “L1-Regularized Distributed Optimization: A Communication-Efficient Primal-Dual Framework,” December. http://arxiv.org/abs/1512.04011.

Song, Ruiyang, Yao Xie, and Sebastian Pokutta. 2015. “Sequential Information Guided Sensing,” August. http://arxiv.org/abs/1509.00130.

Tropp, J.A. 2006. “Just Relax: Convex Programming Methods for Identifying Sparse Signals in Noise.” IEEE Transactions on Information Theory 52 (3): 1030–51. https://doi.org/10.1109/TIT.2005.864420.

Tropp, J. A., and S. J. Wright. 2010. “Computational Methods for Sparse Solution of Linear Inverse Problems.” Proceedings of the IEEE 98 (6): 948–58. https://doi.org/10.1109/JPROC.2010.2044010.

Vetterli, Martin. 1999. “Wavelets: Approximation and Compression–a Review.” In AeroSense’99, 3723:28–31. International Society for Optics and Photonics. https://doi.org/10.1117/12.342945.

Weidmann, Claudio, and Martin Vetterli. 2012. “Rate Distortion Behavior of Sparse Sources.” IEEE Transactions on Information Theory 58 (8): 4969–92. https://doi.org/10.1109/TIT.2012.2201335.

Wipf, David, and Srikantan Nagarajan. 2016. “Iterative Reweighted L1 and L2 Methods for Finding Sparse Solution.” Microsoft Research, July. https://www.microsoft.com/en-us/research/publication/iterative-reweighted-l1-l2-methods-finding-sparse-solution/.

Wu, R., W. Huang, and D. R. Chen. 2013. “The Exact Support Recovery of Sparse Signals with Noise via Orthogonal Matching Pursuit.” IEEE Signal Processing Letters 20 (4): 403–6. https://doi.org/10.1109/LSP.2012.2233734.

Wu, Yan, Mihaela Rosca, and Timothy Lillicrap. 2019. “Deep Compressed Sensing.” In International Conference on Machine Learning, 6850–60. http://arxiv.org/abs/1905.06723.

Yaghoobi, M., Sangnam Nam, R. Gribonval, and M.E. Davies. 2012. “Noise Aware Analysis Operator Learning for Approximately Cosparse Signals.” In 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 5409–12. https://doi.org/10.1109/ICASSP.2012.6289144.

Yang, Wenzhuo, and Huan Xu. 2015. “Streaming Sparse Principal Component Analysis.” In Journal of Machine Learning Research, 494–503. http://jmlr.org/proceedings/papers/v37/yangd15.html.

Zhang, Kai, Chuanren Liu, Jie Zhang, Hui Xiong, Eric Xing, and Jieping Ye. 2017. “Randomization or Condensation?: Linear-Cost Matrix Sketching via Cascaded Compression Sampling.” In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 615–23. KDD ’17. New York, NY, USA: ACM. https://doi.org/10.1145/3097983.3098050.