
Sparse regression

Usefulness: 🔧
Novelty: 💡
Uncertainty: 🤪 🤪
Incompleteness: 🚧 🚧 🚧

Penalised regression where the penalties are sparsifying. The prediction losses could be anything – likelihood, least-squares, robust Huberised losses, absolute deviation etc.

I will play fast and loose with terminology here regarding theoretical and empirical losses, and the statistical models we attempt to fit.

In nonparametric statistics we might estimate many, many parameters simultaneously, constraining them in some clever fashion that usually boils down to something we can interpret as a smoothing parameter, controlling how many factors, out of the original set, we still have to consider.

I will usually discuss our intent to minimise prediction error, but one could also aim to minimise model selection error.

Then we have a simultaneous estimation and model selection procedure, probably a specific sparse model selection procedure, and we possibly have to choose a clever optimisation method to do the whole thing fast. Related to compressed sensing, but here we must also consider sampling complexity and measurement error.

See also matrix factorisations, optimisation, multiple testing, concentration inequalities, sparse flavoured ice cream.

🚧 disambiguate the optimisation technologies at play – iteratively reweighted least squares etc.

Now! A set of headings under which I will try to understand some things, mostly the LASSO variants.

LASSO

Quadratic prediction loss, absolute coefficient penalty. We estimate the regression coefficients \(\beta\) by solving

\[\begin{aligned} \hat{\beta} = \underset{\beta \in \mathbb{R}^p}{\text{argmin}} \: \frac{1}{2} \| y - {\bf X} \beta \|_2^2 + \lambda \| \beta \|_1, \end{aligned}\]

where the penalty coefficient \(\lambda\) is left for you to choose. One of the magical properties of the lasso is that it is very easy to test many possible values of \(\lambda\) at low marginal cost.

Popular because, amongst other reasons, it turns out to be very fast and convenient in practice, thanks to various nifty hacks that speed it up, e.g. aggressive approximate variable screening.
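
To make the path idea concrete, here is a minimal sketch assuming scikit-learn (my choice of tool, not one named above). Note that scikit-learn scales the quadratic term by \(1/(2n)\), so its alpha corresponds to \(\lambda / n\) in the objective above.

```python
import numpy as np
from sklearn.linear_model import LassoCV, lasso_path

rng = np.random.default_rng(0)
n, p, k = 100, 200, 5                      # more predictors than observations
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:k] = 3.0                        # only k truly nonzero coefficients
y = X @ beta_true + 0.5 * rng.standard_normal(n)

# Coordinate descent with warm starts traces the whole regularisation path cheaply.
alphas, coefs, _ = lasso_path(X, y, n_alphas=50)
print(coefs.shape)                         # (p, 50): one coefficient vector per alpha

# Cross-validate over the same grid to pick the penalty level.
fit = LassoCV(n_alphas=50, cv=5).fit(X, y)
print(fit.alpha_, np.flatnonzero(fit.coef_))
```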

Adaptive LASSO

🚧 This is the one with famous oracle properties if you choose \(\lambda\) correctly. Hui Zou’s paper on this (Zou 2006) is very readable. I am having trouble digesting Sara van de Geer’s paper (van de Geer 2008) on the lasso for generalised linear models, but it seems to offer guarantees for something very similar to the adaptive lasso, with far more general assumptions on the model and loss functions, and some finite-sample guarantees.
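
A minimal sketch of the usual two-stage recipe, again in scikit-learn rather than glmnet: the ridge pilot estimate, the \(\gamma = 1\) exponent and the cross-validated second stage are illustrative assumptions of mine, not prescriptions from (Zou 2006). The weighted lasso is solved as an ordinary lasso on rescaled columns.

```python
import numpy as np
from sklearn.linear_model import LassoCV, Ridge

def adaptive_lasso(X, y, gamma=1.0, eps=1e-8):
    """Two-stage adaptive lasso: weights from a pilot ridge fit, then a
    weighted lasso solved as an ordinary lasso on rescaled columns."""
    beta_pilot = Ridge(alpha=1.0).fit(X, y).coef_   # pilot estimate; OLS also works if n > p
    w = 1.0 / (np.abs(beta_pilot) ** gamma + eps)   # adaptive weights, large where the pilot is small
    X_scaled = X / w                                # divide column j by w_j
    fit = LassoCV(cv=5).fit(X_scaled, y)            # ordinary lasso on the rescaled design
    return fit.coef_ / w                            # undo the rescaling

# beta_hat = adaptive_lasso(X, y)                   # X, y as in the previous snippet
```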

LARS

A confusing one: LASSO and LARS are not the same thing, but they are closely related. LARS (least-angle regression) is a stepwise fitting algorithm, and a small modification of it computes the entire lasso regularisation path (Efron et al. 2004). I still need to work this one through with a pencil and paper.
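
For concreteness, scikit-learn’s lars_path exposes both variants; the data-generating step below is just a placeholder of mine.

```python
import numpy as np
from sklearn.linear_model import lars_path

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
y = X[:, :3] @ np.array([2.0, -1.0, 1.5]) + 0.1 * rng.standard_normal(100)

# method="lar" is plain least-angle regression;
# method="lasso" is the modified LARS that traces the exact lasso path.
alphas_lar, _, coefs_lar = lars_path(X, y, method="lar")
alphas_lasso, _, coefs_lasso = lars_path(X, y, method="lasso")
print(coefs_lasso.shape)                   # (n_features, number of knots on the path)
```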

Graph LASSO

As used in graphical models. 🚧
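
A placeholder sketch with scikit-learn’s GraphicalLassoCV, which puts an \(\ell_1\) penalty on the off-diagonal entries of the precision matrix; zeros there correspond to conditional independencies in a Gaussian graphical model. The data here is pure noise, purely for illustration.

```python
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(0)
Z = rng.standard_normal((200, 10))      # placeholder data: rows are observations

model = GraphicalLassoCV().fit(Z)
precision = model.precision_            # sparse estimate of the inverse covariance
# Nonzero off-diagonal entries are the edges of the estimated graph.
edges = np.count_nonzero(np.abs(np.triu(precision, k=1)) > 1e-8)
print(edges)
```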

Elastic net

Combination of \(L_1\) and \(L_2\) penalties. 🚧
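
In the notation of the lasso objective above (up to scaling conventions), the “naive” elastic net of (Zou and Hastie 2005) solves

\[\begin{aligned} \hat{\beta} = \underset{\beta \in \mathbb{R}^p}{\text{argmin}} \: \frac{1}{2} \| y - {\bf X} \beta \|_2^2 + \lambda_1 \| \beta \|_1 + \lambda_2 \| \beta \|_2^2. \end{aligned}\]

The \(\ell_1\) term still sparsifies; the \(\ell_2\) term stabilises the fit and tends to keep or drop strongly correlated predictors together rather than arbitrarily picking one of them.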

Grouped LASSO

AFAICT this is the usual LASSO but with grouped factors: the penalty is applied to the norm of each group of coefficients, so whole groups are selected or dropped together. See (Yuan and Lin 2006).
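
For a partition of the predictors into groups \(g = 1, \dots, G\) with design blocks \({\bf X}_g\), coefficient blocks \(\beta_g\) and group sizes \(p_g\), the group lasso objective (in the notation above, with the common \(\sqrt{p_g}\) weighting) is

\[\begin{aligned} \hat{\beta} = \underset{\beta}{\text{argmin}} \: \frac{1}{2} \Big\| y - \sum_{g=1}^{G} {\bf X}_g \beta_g \Big\|_2^2 + \lambda \sum_{g=1}^{G} \sqrt{p_g} \, \| \beta_g \|_2. \end{aligned}\]

Because the group norms are not squared, whole groups are zeroed out or kept together; with singleton groups this reduces to the ordinary lasso.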

Model selection

Can be fiddly with sparse regression, which couples variable selection tightly with parameter estimation. See sparse model selection.

Debiased LASSO

There exist a few versions, but the one I have needed is (van de Geer 2008), section 2.1. See also (S. van de Geer 2014b). (🚧 how does that relate to (van de Geer 2008)?)
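
For reference, a commonly quoted de-sparsified form (cf. van de Geer et al. 2014) corrects the lasso estimate \(\hat{\beta}\) with a one-step update,

\[\begin{aligned} \hat{b} = \hat{\beta} + \frac{1}{n} \hat{\Theta} {\bf X}^\top \left( y - {\bf X} \hat{\beta} \right), \end{aligned}\]

where \(\hat{\Theta}\) is an approximate inverse of \(\hat{\Sigma} = {\bf X}^\top {\bf X} / n\), e.g. built from nodewise lasso regressions. The corrected \(\hat{b}\) is no longer sparse, but its coordinates are asymptotically normal, which is what makes confidence intervals and tests possible.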

Sparse basis expansions

Wavelets etc; mostly handled under sparse dictionary bases.

Sparse neural nets

That is, sparse regressions as the layers in a neural network? Sure thing. (Wisdom et al. 2016)

Other coefficient penalties

Put a weird penalty on the coefficients! E.g. the “Smoothly Clipped Absolute Deviation” (SCAD) penalty (Fan and Li 2001). 🚧
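
For the record, (Fan and Li 2001) define the SCAD penalty through its derivative: for \(\theta > 0\) and some \(a > 2\) (they suggest \(a = 3.7\)),

\[\begin{aligned} p_{\lambda}'(\theta) = \lambda \left\{ I(\theta \le \lambda) + \frac{(a \lambda - \theta)_+}{(a - 1) \lambda} I(\theta > \lambda) \right\}, \end{aligned}\]

so it behaves like the \(\ell_1\) penalty near zero but flattens out for large coefficients, reducing the bias the lasso imposes on large effects.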

Other prediction losses

Put a weird loss on the prediction error! E.g. an absolute-deviation (MAD) prediction loss combined with a lasso coefficient penalty.

See (Wang, Li, and Jiang 2007; Portnoy and Koenker 1997) for some analysis and implementations using e.g. least-absolute-deviation prediction error.
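
A LAD-lasso-flavoured sketch, assuming scikit-learn’s QuantileRegressor: the pinball loss at the median is proportional to absolute deviation, and the penalty is \(\ell_1\) on the coefficients. This is only an approximation of the spirit of (Wang, Li, and Jiang 2007), not their algorithm.

```python
import numpy as np
from sklearn.linear_model import QuantileRegressor

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 30))
y = X[:, 0] - 2.0 * X[:, 1] + rng.standard_t(df=2, size=200)   # heavy-tailed noise

# Median (quantile=0.5) pinball loss plus an l1 coefficient penalty ~ LAD-lasso.
lad_lasso = QuantileRegressor(quantile=0.5, alpha=0.05, solver="highs").fit(X, y)
print(np.flatnonzero(np.abs(lad_lasso.coef_) > 1e-8))          # selected predictors
```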

Bayesian Lasso

See Bayesian sparsity.

Implementations

Hastie, Friedman et al.’s glmnet for R is fast and well-regarded, and has a MATLAB version. Here’s how to use it for adaptive lasso.

SPAMS (C++, MATLAB, R, Python), by Mairal, looks interesting. It’s an optimisation library for many, many sparse problems.

liblinear also includes lasso-type solvers, as well as support-vector regression.

Tidbits

Sparse regression as a universal classifier explainer? Local Interpretable Model-agnostic Explanations (Ribeiro, Singh, and Guestrin 2016) uses LASSO for exactly this kind of model interpretation. (See the blog post, or the source.)
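
A toy, LIME-flavoured sketch of the idea, not the actual LIME package (which works on interpretable binary features and does its own feature selection): perturb one instance, weight the perturbations by proximity, and fit a sparse linear surrogate to the black-box outputs. All names and parameter choices here are my own placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 10))
y = (X[:, 0] + X[:, 1] ** 2 > 0).astype(int)
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Explain the prediction at one point x0.
x0 = X[0]
Z = x0 + 0.5 * rng.standard_normal((1000, X.shape[1]))   # local perturbations
prob = black_box.predict_proba(Z)[:, 1]                  # black-box output to be mimicked
w = np.exp(-np.sum((Z - x0) ** 2, axis=1) / 2.0)         # proximity kernel
surrogate = Lasso(alpha=0.01).fit(Z - x0, prob, sample_weight=w)
print(np.flatnonzero(surrogate.coef_))                   # locally influential features
```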

Refs

Abramovich, Felix, Yoav Benjamini, David L. Donoho, and Iain M. Johnstone. 2006. “Adapting to Unknown Sparsity by Controlling the False Discovery Rate.” The Annals of Statistics 34 (2): 584–653. https://doi.org/10.1214/009053606000000074.

Aghasi, Alireza, Nam Nguyen, and Justin Romberg. 2016. “Net-Trim: A Layer-Wise Convex Pruning of Deep Neural Networks,” November. http://arxiv.org/abs/1611.05162.

Aragam, Bryon, Arash A. Amini, and Qing Zhou. 2015. “Learning Directed Acyclic Graphs with Penalized Neighbourhood Regression,” November. http://arxiv.org/abs/1511.08963.

Azizyan, Martin, Akshay Krishnamurthy, and Aarti Singh. 2015. “Extreme Compressive Sampling for Covariance Estimation,” June. http://arxiv.org/abs/1506.00898.

Bach, Francis. 2009. “Model-Consistent Sparse Estimation Through the Bootstrap.” arXiv:0901.3202 [Cs, Stat]. https://hal.archives-ouvertes.fr/hal-00354771/document.

Bach, Francis, Rodolphe Jenatton, Julien Mairal, and Guillaume Obozinski. 2012. “Optimization with Sparsity-Inducing Penalties.” Foundations and Trends® in Machine Learning 4 (1): 1–106. https://doi.org/10.1561/2200000015.

Bahmani, Sohail, and Justin Romberg. 2014. “Lifting for Blind Deconvolution in Random Mask Imaging: Identifiability and Convex Relaxation,” December. http://arxiv.org/abs/1501.00046.

Banerjee, Arindam, Sheng Chen, Farideh Fazayeli, and Vidyashankar Sivakumar. 2014. “Estimation with Norm Regularization.” In Advances in Neural Information Processing Systems 27, edited by Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, 1556–64. Curran Associates, Inc. http://papers.nips.cc/paper/5465-estimation-with-norm-regularization.pdf.

Banerjee, Onureena, Laurent El Ghaoui, and Alexandre d’Aspremont. 2008. “Model Selection Through Sparse Maximum Likelihood Estimation for Multivariate Gaussian or Binary Data.” Journal of Machine Learning Research 9 (Mar): 485–516. http://www.jmlr.org/papers/v9/banerjee08a.html.

Barber, Rina Foygel, and Emmanuel J. Candès. 2015. “Controlling the False Discovery Rate via Knockoffs.” The Annals of Statistics 43 (5): 2055–85. https://doi.org/10.1214/15-AOS1337.

Barbier, Jean. 2015. “Statistical Physics and Approximate Message-Passing Algorithms for Sparse Linear Estimation Problems in Signal Processing and Coding Theory,” November. http://arxiv.org/abs/1511.01650.

Baron, Dror, Shriram Sarvotham, and Richard G. Baraniuk. 2010. “Bayesian Compressive Sensing via Belief Propagation.” IEEE Transactions on Signal Processing 58 (1): 269–80. https://doi.org/10.1109/TSP.2009.2027773.

Barron, Andrew R., Albert Cohen, Wolfgang Dahmen, and Ronald A. DeVore. 2008. “Approximation and Learning by Greedy Algorithms.” The Annals of Statistics 36 (1): 64–94. https://doi.org/10.1214/009053607000000631.

Barron, Andrew R., Cong Huang, Jonathan Q. Li, and Xi Luo. 2008. “MDL, Penalized Likelihood, and Statistical Risk.” In Information Theory Workshop, 2008. ITW’08. IEEE, 247–57. IEEE. https://doi.org/10.1109/ITW.2008.4578660.

Batenkov, Dmitry, Yaniv Romano, and Michael Elad. 2017. “On the Global-Local Dichotomy in Sparsity Modeling,” February. http://arxiv.org/abs/1702.03446.

Battiti, Roberto. 1992. “First-and Second-Order Methods for Learning: Between Steepest Descent and Newton’s Method.” Neural Computation 4 (2): 141–66. https://doi.org/10.1162/neco.1992.4.2.141.

Bayati, M., and A. Montanari. 2012. “The LASSO Risk for Gaussian Matrices.” IEEE Transactions on Information Theory 58 (4): 1997–2017. https://doi.org/10.1109/TIT.2011.2174612.

Bellec, Pierre C., and Alexandre B. Tsybakov. 2016. “Bounds on the Prediction Error of Penalized Least Squares Estimators with Convex Penalty,” September. http://arxiv.org/abs/1609.06675.

Belloni, Alexandre, Victor Chernozhukov, and Lie Wang. 2011. “Square-Root Lasso: Pivotal Recovery of Sparse Signals via Conic Programming.” Biometrika 98 (4): 791–806. https://doi.org/10.1093/biomet/asr043.

Bian, Wei, Xiaojun Chen, and Yinyu Ye. 2014. “Complexity Analysis of Interior Point Algorithms for Non-Lipschitz and Nonconvex Minimization.” Mathematical Programming 149 (1-2): 301–27. https://doi.org/10.1007/s10107-014-0753-5.

Bien, Jacob, Irina Gaynanova, Johannes Lederer, and Christian Müller. 2016. “Non-Convex Global Minimization and False Discovery Rate Control for the TREX,” April. http://arxiv.org/abs/1604.06815.

Bien, Jacob, Irina Gaynanova, Johannes Lederer, and Christian L. Müller. 2018. “Non-Convex Global Minimization and False Discovery Rate Control for the TREX.” Journal of Computational and Graphical Statistics 27 (1): 23–33. https://doi.org/10.1080/10618600.2017.1341414.

Bloniarz, Adam, Hanzhong Liu, Cun-Hui Zhang, Jasjeet Sekhon, and Bin Yu. 2015. “Lasso Adjustments of Treatment Effect Estimates in Randomized Experiments,” July. http://arxiv.org/abs/1507.03652.

Bondell, Howard D., Arun Krishna, and Sujit K. Ghosh. 2010. “Joint Variable Selection for Fixed and Random Effects in Linear Mixed-Effects Models.” Biometrics 66 (4): 1069–77. https://doi.org/10.1111/j.1541-0420.2010.01391.x.

Borgs, Christian, Jennifer T. Chayes, Henry Cohn, and Yufei Zhao. 2014. “An $L^p$ Theory of Sparse Graph Convergence I: Limits, Sparse Random Graph Models, and Power Law Distributions,” January. http://arxiv.org/abs/1401.2906.

Bottou, Léon, Frank E. Curtis, and Jorge Nocedal. 2016. “Optimization Methods for Large-Scale Machine Learning,” June. http://arxiv.org/abs/1606.04838.

Breiman, Leo. 1995. “Better Subset Regression Using the Nonnegative Garrote.” Technometrics 37 (4): 373–84. http://www-personal.umich.edu/~jizhu/jizhu/wuke/Breiman-Technometrics95.pdf.

Bruckstein, A. M., Michael Elad, and M. Zibulevsky. 2008. “On the Uniqueness of Nonnegative Sparse Solutions to Underdetermined Systems of Equations.” IEEE Transactions on Information Theory 54 (11): 4813–20. https://doi.org/10.1109/TIT.2008.929920.

Brunton, Steven L., Joshua L. Proctor, and J. Nathan Kutz. 2016. “Discovering Governing Equations from Data by Sparse Identification of Nonlinear Dynamical Systems.” Proceedings of the National Academy of Sciences 113 (15): 3932–7. https://doi.org/10.1073/pnas.1517384113.

Bu, Yunqi, and Johannes Lederer. 2017. “Integrating Additional Knowledge into Estimation of Graphical Models,” April. http://arxiv.org/abs/1704.02739.

Bühlmann, Peter, and Sara van de Geer. 2011. “Additive Models and Many Smooth Univariate Functions.” In Statistics for High-Dimensional Data, 77–97. Springer Series in Statistics. Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-642-20192-9_5.

———. 2015. “High-Dimensional Inference in Misspecified Linear Models.” Electronic Journal of Statistics 9 (1): 1449–73. https://doi.org/10.1214/15-EJS1041.

Candès, Emmanuel J., and Mark A. Davenport. 2011. “How Well Can We Estimate a Sparse Vector?” April. http://arxiv.org/abs/1104.5246.

Candès, Emmanuel J., Yingying Fan, Lucas Janson, and Jinchi Lv. 2016. “Panning for Gold: Model-Free Knockoffs for High-Dimensional Controlled Variable Selection.” arXiv Preprint arXiv:1610.02351. https://arxiv.org/abs/1610.02351.

Candès, Emmanuel J., and Carlos Fernandez-Granda. 2013. “Super-Resolution from Noisy Data.” Journal of Fourier Analysis and Applications 19 (6): 1229–54. https://doi.org/10.1007/s00041-013-9292-3.

Candès, Emmanuel J., and Y. Plan. 2010. “Matrix Completion with Noise.” Proceedings of the IEEE 98 (6): 925–36. https://doi.org/10.1109/JPROC.2009.2035722.

Candès, Emmanuel J., Justin K. Romberg, and Terence Tao. 2006. “Stable Signal Recovery from Incomplete and Inaccurate Measurements.” Communications on Pure and Applied Mathematics 59 (8): 1207–23. https://doi.org/10.1002/cpa.20124.

Candès, Emmanuel J., Michael B. Wakin, and Stephen P. Boyd. 2008. “Enhancing Sparsity by Reweighted ℓ 1 Minimization.” Journal of Fourier Analysis and Applications 14 (5-6): 877–905. https://doi.org/10.1007/s00041-008-9045-x.

Carmi, Avishy Y. 2014. “Compressive System Identification.” In Compressed Sensing & Sparse Filtering, edited by Avishy Y. Carmi, Lyudmila Mihaylova, and Simon J. Godsill, 281–324. Signals and Communication Technology. Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-642-38398-4_9.

———. 2013. “Compressive System Identification: Sequential Methods and Entropy Bounds.” Digital Signal Processing 23 (3): 751–70. https://doi.org/10.1016/j.dsp.2012.12.006.

Cevher, Volkan, Marco F. Duarte, Chinmay Hegde, and Richard Baraniuk. 2009. “Sparse Signal Recovery Using Markov Random Fields.” In Advances in Neural Information Processing Systems, 257–64. Curran Associates, Inc. http://papers.nips.cc/paper/3487-sparse-signal-recovery-using-markov-random-fields.

Chartrand, R., and Wotao Yin. 2008. “Iteratively Reweighted Algorithms for Compressive Sensing.” In IEEE International Conference on Acoustics, Speech and Signal Processing, 2008. ICASSP 2008, 3869–72. https://doi.org/10.1109/ICASSP.2008.4518498.

Chen, Minhua, J. Silva, J. Paisley, Chunping Wang, D. Dunson, and L. Carin. 2010. “Compressive Sensing on Manifolds Using a Nonparametric Mixture of Factor Analyzers: Algorithm and Performance Bounds.” IEEE Transactions on Signal Processing 58 (12): 6140–55. https://doi.org/10.1109/TSP.2010.2070796.

Chen, Xiaojun. 2012. “Smoothing Methods for Nonsmooth, Nonconvex Minimization.” Mathematical Programming 134 (1): 71–99. https://doi.org/10.1007/s10107-012-0569-0.

Chen, Yen-Chi, and Yu-Xiang Wang. n.d. “Discussion on ‘Confidence Intervals and Hypothesis Testing for High-Dimensional Regression’.” Accessed July 12, 2015. http://www.stat.cmu.edu/~ryantibs/journalclub/hdconf.pdf.

Chen, Y., and A. O. Hero. 2012. “Recursive ℓ1,∞ Group Lasso.” IEEE Transactions on Signal Processing 60 (8): 3978–87. https://doi.org/10.1109/TSP.2012.2192924.

Chernozhukov, Victor, Denis Chetverikov, Mert Demirer, Esther Duflo, Christian Hansen, Whitney Newey, and James Robins. 2016. “Double/Debiased Machine Learning for Treatment and Causal Parameters,” July. http://arxiv.org/abs/1608.00060.

Chernozhukov, Victor, Christian Hansen, Yuan Liao, and Yinchu Zhu. 2018. “Inference for Heterogeneous Effects Using Low-Rank Estimations,” December. http://arxiv.org/abs/1812.08089.

Chernozhukov, Victor, Whitney K. Newey, and Rahul Singh. 2018. “Learning L2 Continuous Regression Functionals via Regularized Riesz Representers,” September. http://arxiv.org/abs/1809.05224.

Chetverikov, Denis, Zhipeng Liao, and Victor Chernozhukov. 2016. “On Cross-Validated Lasso,” May. http://arxiv.org/abs/1605.02214.

Chichignoud, Michaël, Johannes Lederer, and Martin Wainwright. 2014. “A Practical Scheme and Fast Algorithm to Tune the Lasso with Optimality Guarantees,” October. http://arxiv.org/abs/1410.0247.

Dai, Ran, and Rina Foygel Barber. 2016. “The Knockoff Filter for FDR Control in Group-Sparse and Multitask Regression.” arXiv Preprint arXiv:1602.03589. https://arxiv.org/abs/1602.03589.

Daneshmand, Hadi, Manuel Gomez-Rodriguez, Le Song, and Bernhard Schölkopf. 2014. “Estimating Diffusion Network Structures: Recovery Conditions, Sample Complexity & Soft-Thresholding Algorithm.” In ICML. http://arxiv.org/abs/1405.2936.

Descloux, Pascaline, and Sylvain Sardy. 2018. “Model Selection with Lasso-Zero: Adding Straw to the Haystack to Better Find Needles,” May. http://arxiv.org/abs/1805.05133.

Diaconis, Persi, and David Freedman. 1984. “Asymptotics of Graphical Projection Pursuit.” The Annals of Statistics 12 (3): 793–815. http://www.jstor.org/stable/2240961.

Efron, Bradley, Trevor Hastie, Iain Johnstone, and Robert Tibshirani. 2004. “Least Angle Regression.” The Annals of Statistics 32 (2): 407–99. https://doi.org/10.1214/009053604000000067.

Elhamifar, E., and R. Vidal. 2013. “Sparse Subspace Clustering: Algorithm, Theory, and Applications.” IEEE Transactions on Pattern Analysis and Machine Intelligence 35 (11): 2765–81. https://doi.org/10.1109/TPAMI.2013.57.

Ewald, Karl, and Ulrike Schneider. 2015. “Confidence Sets Based on the Lasso Estimator,” July. http://arxiv.org/abs/1507.05315.

Fan, Jianqing, and Runze Li. 2001. “Variable Selection via Nonconcave Penalized Likelihood and Its Oracle Properties.” Journal of the American Statistical Association 96 (456): 1348–60. https://doi.org/10.1198/016214501753382273.

Fan, Rong-En, Kai-Wei Chang, Cho-Jui Hsieh, Xiang-Rui Wang, and Chih-Jen Lin. 2008. “LIBLINEAR: A Library for Large Linear Classification.” Journal of Machine Learning Research 9: 1871–4.

Flynn, Cheryl J., Clifford M. Hurvich, and Jeffrey S. Simonoff. 2013. “Efficiency for Regularization Parameter Selection in Penalized Likelihood Estimation of Misspecified Models,” February. http://arxiv.org/abs/1302.2068.

Foygel, Rina, and Nathan Srebro. 2011. “Fast-Rate and Optimistic-Rate Error Bounds for L1-Regularized Regression,” August. http://arxiv.org/abs/1108.0373.

Friedman, Jerome, Trevor Hastie, Holger Höfling, and Robert Tibshirani. 2007. “Pathwise Coordinate Optimization.” The Annals of Applied Statistics 1 (2): 302–32. https://doi.org/10.1214/07-AOAS131.

Friedman, Jerome, Trevor Hastie, and Robert Tibshirani. 2008. “Sparse Inverse Covariance Estimation with the Graphical Lasso.” Biostatistics 9 (3): 432–41. https://doi.org/10.1093/biostatistics/kxm045.

Fu, Fei, and Qing Zhou. 2013. “Learning Sparse Causal Gaussian Networks with Experimental Intervention: Regularization and Coordinate Descent.” Journal of the American Statistical Association 108 (501): 288–300. https://doi.org/10.1080/01621459.2012.754359.

Gasso, G., A. Rakotomamonjy, and S. Canu. 2009. “Recovering Sparse Signals with a Certain Family of Nonconvex Penalties and DC Programming.” IEEE Transactions on Signal Processing 57 (12): 4686–98. https://doi.org/10.1109/TSP.2009.2026004.

Geer, Sara van de. 2007. “The Deterministic Lasso.” ftp://ftp.stat.math.ethz.ch/pub/Research-Reports/140.pdf.

———. 2016. Estimation and Testing Under Sparsity. Vol. 2159. Lecture Notes in Mathematics. Cham: Springer International Publishing. http://link.springer.com/10.1007/978-3-319-32774-7.

———. 2014a. “Weakly Decomposable Regularization Penalties and Structured Sparsity.” Scandinavian Journal of Statistics 41 (1): 72–86. https://doi.org/10.1111/sjos.12032.

———. 2014b. “Worst Possible Sub-Directions in High-Dimensional Models.” In. Vol. 131. http://arxiv.org/abs/1403.7023.

———. 2014c. “Statistical Theory for High-Dimensional Models,” September. http://arxiv.org/abs/1409.8557.

Geer, Sara A. van de. 2008. “High-Dimensional Generalized Linear Models and the Lasso.” The Annals of Statistics 36 (2): 614–45. https://doi.org/10.1214/009053607000000929.

Geer, Sara A. van de, Peter Bühlmann, and Shuheng Zhou. 2011. “The Adaptive and the Thresholded Lasso for Potentially Misspecified Models (and a Lower Bound for the Lasso).” Electronic Journal of Statistics 5: 688–749. https://doi.org/10.1214/11-EJS624.

Geer, Sara van de, Peter Bühlmann, Ya’acov Ritov, and Ruben Dezeure. 2014. “On Asymptotically Optimal Confidence Regions and Tests for High-Dimensional Models.” The Annals of Statistics 42 (3): 1166–1202. https://doi.org/10.1214/14-AOS1221.

Ghadimi, Saeed, and Guanghui Lan. 2013a. “Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming.” SIAM Journal on Optimization 23 (4): 2341–68. https://doi.org/10.1137/120880811.

———. 2013b. “Accelerated Gradient Methods for Nonconvex Nonlinear and Stochastic Programming,” October. http://arxiv.org/abs/1310.3787.

Girolami, Mark. 2001. “A Variational Method for Learning Sparse and Overcomplete Representations.” Neural Computation 13 (11): 2517–32. https://doi.org/10.1162/089976601753196003.

Giryes, Raja, Guillermo Sapiro, and Alex M. Bronstein. 2014. “On the Stability of Deep Networks,” December. http://arxiv.org/abs/1412.5896.

Greenhill, Catherine, Mikhail Isaev, Matthew Kwan, and Brendan D. McKay. 2016. “The Average Number of Spanning Trees in Sparse Graphs with Given Degrees,” June. http://arxiv.org/abs/1606.01586.

Gu, Jiaying, Fei Fu, and Qing Zhou. 2014. “Adaptive Penalized Estimation of Directed Acyclic Graphs from Categorical Data,” March. http://arxiv.org/abs/1403.2310.

Gui, Jiang, and Hongzhe Li. 2005. “Penalized Cox Regression Analysis in the High-Dimensional and Low-Sample Size Settings, with Applications to Microarray Gene Expression Data.” Bioinformatics 21 (13): 3001–8. https://doi.org/10.1093/bioinformatics/bti422.

Gupta, Pawan, and Marianna Pensky. 2016. “Solution of Linear Ill-Posed Problems Using Random Dictionaries,” May. http://arxiv.org/abs/1605.07913.

Hallac, David, Jure Leskovec, and Stephen Boyd. 2015. “Network Lasso: Clustering and Optimization in Large Graphs,” July. https://doi.org/10.1145/2783258.2783313.

Hansen, Niels Richard, Patricia Reynaud-Bouret, and Vincent Rivoirard. 2015. “Lasso and Probabilistic Inequalities for Multivariate Point Processes.” Bernoulli 21 (1): 83–143. https://doi.org/10.3150/13-BEJ562.

Hastie, Trevor J., Rob Tibshirani, and Martin J. Wainwright. 2015. Statistical Learning with Sparsity: The Lasso and Generalizations. Boca Raton: Chapman and Hall/CRC. https://web.stanford.edu/~hastie/StatLearnSparsity/index.html.

Hawe, S., M. Kleinsteuber, and K. Diepold. 2013. “Analysis Operator Learning and Its Application to Image Reconstruction.” IEEE Transactions on Image Processing 22 (6): 2138–50. https://doi.org/10.1109/TIP.2013.2246175.

He, Dan, Irina Rish, and Laxmi Parida. 2014. “Transductive HSIC Lasso.” In Proceedings of the 2014 SIAM International Conference on Data Mining, edited by Mohammed Zaki, Zoran Obradovic, Pang Ning Tan, Arindam Banerjee, Chandrika Kamath, and Srinivasan Parthasarathy, 154–62. Proceedings. Philadelphia, PA: Society for Industrial and Applied Mathematics. http://epubs.siam.org/doi/abs/10.1137/1.9781611973440.18.

Hebiri, Mohamed, and Sara A. van de Geer. 2011. “The Smooth-Lasso and Other ℓ1+ℓ2-Penalized Methods.” Electronic Journal of Statistics 5: 1184–1226. https://doi.org/10.1214/11-EJS638.

Hegde, Chinmay, and Richard G. Baraniuk. 2012. “Signal Recovery on Incoherent Manifolds.” IEEE Transactions on Information Theory 58 (12): 7204–14. https://doi.org/10.1109/TIT.2012.2210860.

Hegde, Chinmay, Piotr Indyk, and Ludwig Schmidt. 2015. “A Nearly-Linear Time Framework for Graph-Structured Sparsity.” In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), 928–37. http://machinelearning.wustl.edu/mlpapers/paper_files/icml2015_hegde15.pdf.

Hesterberg, Tim, Nam Hee Choi, Lukas Meier, and Chris Fraley. 2008. “Least Angle and ℓ1 Penalized Regression: A Review.” Statistics Surveys 2: 61–93. https://doi.org/10.1214/08-SS035.

Hormati, A., O. Roy, Y. M. Lu, and M. Vetterli. 2010. “Distributed Sampling of Signals Linked by Sparse Filtering: Theory and Applications.” IEEE Transactions on Signal Processing 58 (3): 1095–1109. https://doi.org/10.1109/TSP.2009.2034908.

Hsieh, Cho-Jui, Mátyás A. Sustik, Inderjit S. Dhillon, and Pradeep D. Ravikumar. 2014. “QUIC: Quadratic Approximation for Sparse Inverse Covariance Estimation.” Journal of Machine Learning Research 15 (1): 2911–47. http://www.jmlr.org/papers/volume15/hsieh14a/hsieh14a.pdf.

Hu, Tao, Cengiz Pehlevan, and Dmitri B. Chklovskii. 2014. “A Hebbian/Anti-Hebbian Network for Online Sparse Dictionary Learning Derived from Symmetric Matrix Factorization.” In 2014 48th Asilomar Conference on Signals, Systems and Computers. https://doi.org/10.1109/ACSSC.2014.7094519.

Huang, Cong, G. L. H. Cheang, and Andrew R. Barron. 2008. “Risk of Penalized Least Squares, Greedy Selection and L1 Penalization for Flexible Function Libraries.” http://www.stat.yale.edu/~arb4/publications_files/RiskGreedySelectionAndL1penalization.pdf.

Ishwaran, Hemant, and J. Sunil Rao. 2005. “Spike and Slab Variable Selection: Frequentist and Bayesian Strategies.” The Annals of Statistics 33 (2): 730–73. https://doi.org/10.1214/009053604000001147.

Janson, Lucas, William Fithian, and Trevor J. Hastie. 2015. “Effective Degrees of Freedom: A Flawed Metaphor.” Biometrika 102 (2): 479–85. https://doi.org/10.1093/biomet/asv019.

Javanmard, Adel, and Andrea Montanari. 2014. “Confidence Intervals and Hypothesis Testing for High-Dimensional Regression.” Journal of Machine Learning Research 15 (1): 2869–2909. http://jmlr.org/papers/v15/javanmard14a.html.

Jung, Alexander. 2013. “An RKHS Approach to Estimation with Sparsity Constraints.” In Advances in Neural Information Processing Systems 29. http://arxiv.org/abs/1311.5768.

Kabán, Ata. 2014. “New Bounds on Compressive Linear Least Squares Regression.” In Journal of Machine Learning Research, 448–56. http://jmlr.org/proceedings/papers/v33/kaban14.pdf.

Koppel, Alec, Garrett Warnell, Ethan Stump, and Alejandro Ribeiro. 2016. “Parsimonious Online Learning with Kernels via Sparse Projections in Function Space,” December. http://arxiv.org/abs/1612.04111.

Kowalski, Matthieu, and Bruno Torrésani. 2009. “Structured Sparsity: From Mixed Norms to Structured Shrinkage.” In SPARS’09-Signal Processing with Adaptive Sparse Structured Representations. https://hal.inria.fr/inria-00369577/.

Krämer, Nicole, Juliane Schäfer, and Anne-Laure Boulesteix. 2009. “Regularized Estimation of Large-Scale Gene Association Networks Using Graphical Gaussian Models.” BMC Bioinformatics 10 (1): 384. https://doi.org/10.1186/1471-2105-10-384.

Lam, Clifford, and Jianqing Fan. 2009. “Sparsistency and Rates of Convergence in Large Covariance Matrix Estimation.” Annals of Statistics 37 (6B): 4254–78. https://doi.org/10.1214/09-AOS720.

Lambert-Lacroix, Sophie, and Laurent Zwald. 2011. “Robust Regression Through the Huber’s Criterion and Adaptive Lasso Penalty.” Electronic Journal of Statistics 5: 1015–53. https://doi.org/10.1214/11-EJS635.

Langford, John, Lihong Li, and Tong Zhang. 2009. “Sparse Online Learning via Truncated Gradient.” In Advances in Neural Information Processing Systems 21, edited by D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, 905–12. Curran Associates, Inc. http://papers.nips.cc/paper/3585-sparse-online-learning-via-truncated-gradient.pdf.

Lee, Jason D., Dennis L. Sun, Yuekai Sun, and Jonathan E. Taylor. 2013. “Exact Post-Selection Inference, with Application to the Lasso,” November. http://arxiv.org/abs/1311.6238.

Lim, Néhémy, and Johannes Lederer. 2016. “Efficient Feature Selection with Large and High-Dimensional Data,” September. http://arxiv.org/abs/1609.07195.

Lockhart, Richard, Jonathan Taylor, Ryan J. Tibshirani, and Robert Tibshirani. 2014. “A Significance Test for the Lasso.” The Annals of Statistics 42 (2): 413–68. https://doi.org/10.1214/13-AOS1175.

Lu, W., Y. Goldberg, and J. P. Fine. 2012. “On the Robustness of the Adaptive Lasso to Model Misspecification.” Biometrika 99 (3): 717–31. https://doi.org/10.1093/biomet/ass027.

Mahoney, Michael W. 2016. “Lecture Notes on Spectral Graph Methods.” arXiv Preprint arXiv:1608.04845. http://arxiv.org/abs/1608.04845.

Mairal, J. 2015. “Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning.” SIAM Journal on Optimization 25 (2): 829–55. https://doi.org/10.1137/140957639.

Mazumder, Rahul, Jerome H Friedman, and Trevor J. Hastie. 2009. “SparseNet: Coordinate Descent with Non-Convex Penalties.” Stanford University. http://web.stanford.edu/~hastie/Papers/Sparsenet/jasa_MFH_final.pdf.

Meier, Lukas, Sara van de Geer, and Peter Bühlmann. 2008. “The Group Lasso for Logistic Regression.” Journal of the Royal Statistical Society: Series B (Statistical Methodology) 70 (1): 53–71. ftp://ftp.math.ethz.ch/sfs/pub/Research-Reports/Other-Manuscripts/buhlmann/lukas-sara-peter.pdf.

Meinshausen, Nicolai, and Peter Bühlmann. 2006. “High-Dimensional Graphs and Variable Selection with the Lasso.” The Annals of Statistics 34 (3): 1436–62. https://doi.org/10.1214/009053606000000281.

Meinshausen, Nicolai, and Bin Yu. 2009. “Lasso-Type Recovery of Sparse Representations for High-Dimensional Data.” The Annals of Statistics 37 (1): 246–70. https://doi.org/10.1214/07-AOS582.

Molchanov, Dmitry, Arsenii Ashukha, and Dmitry Vetrov. 2017. “Variational Dropout Sparsifies Deep Neural Networks.” In Proceedings of ICML. http://arxiv.org/abs/1701.05369.

Montanari, Andrea. 2012. “Graphical Models Concepts in Compressed Sensing.” Compressed Sensing: Theory and Applications, 394–438. http://arxiv.org/abs/1011.4328.

Mousavi, Ali, and Richard G. Baraniuk. 2017. “Learning to Invert: Signal Recovery via Deep Convolutional Networks.” In ICASSP. http://arxiv.org/abs/1701.03891.

Müller, Patric, and Sara van de Geer. 2015. “Censored Linear Model in High Dimensions: Penalised Linear Regression on High-Dimensional Data with Left-Censored Response Variable.” TEST, April. https://doi.org/10.1007/s11749-015-0441-7.

Nam, Sangnam, and R. Gribonval. 2012. “Physics-Driven Structured Cosparse Modeling for Source Localization.” In 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 5397–5400. https://doi.org/10.1109/ICASSP.2012.6289141.

Needell, D., and J. A. Tropp. 2008. “CoSaMP: Iterative Signal Recovery from Incomplete and Inaccurate Samples,” March. http://arxiv.org/abs/0803.2392.

Nesterov, Yu. 2012. “Gradient Methods for Minimizing Composite Functions.” Mathematical Programming 140 (1): 125–61. https://doi.org/10.1007/s10107-012-0629-5.

Neville, Sarah E., John T. Ormerod, and M. P. Wand. 2014. “Mean Field Variational Bayes for Continuous Sparse Signal Shrinkage: Pitfalls and Remedies.” Electronic Journal of Statistics 8 (1): 1113–51. https://doi.org/10.1214/14-EJS910.

Ngiam, Jiquan, Zhenghao Chen, Sonia A. Bhaskar, Pang W. Koh, and Andrew Y. Ng. 2011. “Sparse Filtering.” In Advances in Neural Information Processing Systems 24, edited by J. Shawe-Taylor, R. S. Zemel, P. L. Bartlett, F. Pereira, and K. Q. Weinberger, 1125–33. Curran Associates, Inc. http://papers.nips.cc/paper/4334-sparse-filtering.pdf.

Nickl, Richard, and Sara van de Geer. 2013. “Confidence Sets in Sparse Regression.” The Annals of Statistics 41 (6): 2852–76. https://doi.org/10.1214/13-AOS1170.

Oymak, S., A. Jalali, M. Fazel, and B. Hassibi. 2013. “Noisy Estimation of Simultaneously Structured Models: Limitations of Convex Relaxation.” In 2013 IEEE 52nd Annual Conference on Decision and Control (CDC), 6019–24. https://doi.org/10.1109/CDC.2013.6760840.

Peleg, Tomer, Yonina C. Eldar, and Michael Elad. 2010. “Exploiting Statistical Dependencies in Sparse Representations for Signal Recovery.” IEEE Transactions on Signal Processing 60 (5): 2286–2303. https://doi.org/10.1109/TSP.2012.2188520.

Portnoy, Stephen, and Roger Koenker. 1997. “The Gaussian Hare and the Laplacian Tortoise: Computability of Squared-Error Versus Absolute-Error Estimators.” Statistical Science 12 (4): 279–300. https://doi.org/10.1214/ss/1030037960.

Pouget-Abadie, Jean, and Thibaut Horel. 2015. “Inferring Graphs from Cascades: A Sparse Recovery Framework.” In Proceedings of the 32nd International Conference on Machine Learning. http://arxiv.org/abs/1505.05663.

Pourahmadi, Mohsen. 2011. “Covariance Estimation: The GLM and Regularization Perspectives.” Statistical Science 26 (3): 369–87. https://doi.org/10.1214/11-STS358.

Qian, Wei, and Yuhong Yang. 2012. “Model Selection via Standard Error Adjusted Adaptive Lasso.” Annals of the Institute of Statistical Mathematics 65 (2): 295–318. https://doi.org/10.1007/s10463-012-0370-0.

Qin, Zhiwei, Katya Scheinberg, and Donald Goldfarb. 2013. “Efficient Block-Coordinate Descent Algorithms for the Group Lasso.” Mathematical Programming Computation 5 (2): 143–69. https://doi.org/10.1007/s12532-013-0051-x.

Rahimi, Ali, and Benjamin Recht. 2009. “Weighted Sums of Random Kitchen Sinks: Replacing Minimization with Randomization in Learning.” In Advances in Neural Information Processing Systems, 1313–20. Curran Associates, Inc. http://papers.nips.cc/paper/3495-weighted-sums-of-random-kitchen-sinks-replacing-minimization-with-randomization-in-learning.

Ravikumar, Pradeep, Martin J. Wainwright, Garvesh Raskutti, and Bin Yu. 2011. “High-Dimensional Covariance Estimation by Minimizing ℓ1-Penalized Log-Determinant Divergence.” Electronic Journal of Statistics 5: 935–80. https://doi.org/10.1214/11-EJS631.

Ravishankar, Saiprasad, and Yoram Bresler. 2015. “Efficient Blind Compressed Sensing Using Sparsifying Transforms with Convergence Guarantees and Application to MRI,” January. http://arxiv.org/abs/1501.02923.

Ravishankar, S., and Y. Bresler. 2015. “Sparsifying Transform Learning with Efficient Optimal Updates and Convergence Guarantees.” IEEE Transactions on Signal Processing 63 (9): 2389–2404. https://doi.org/10.1109/TSP.2015.2405503.

Reynaud-Bouret, Patricia. 2003. “Adaptive Estimation of the Intensity of Inhomogeneous Poisson Processes via Concentration Inequalities.” Probability Theory and Related Fields 126 (1). https://doi.org/10.1007/s00440-003-0259-1.

Reynaud-Bouret, Patricia, and Sophie Schbath. 2010. “Adaptive Estimation for Hawkes Processes; Application to Genome Analysis.” The Annals of Statistics 38 (5): 2781–2822. https://doi.org/10.1214/10-AOS806.

Ribeiro, Marco Tulio, Sameer Singh, and Carlos Guestrin. 2016. “‘Why Should I Trust You?’: Explaining the Predictions of Any Classifier.” In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–44. KDD ’16. New York, NY, USA: ACM. https://doi.org/10.1145/2939672.2939778.

Rish, Irina, and Genady Grabarnik. 2014. “Sparse Signal Recovery with Exponential-Family Noise.” In Compressed Sensing & Sparse Filtering, edited by Avishy Y. Carmi, Lyudmila Mihaylova, and Simon J. Godsill, 77–93. Signals and Communication Technology. Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-642-38398-4_3.

Rish, Irina, and Genady Ya Grabarnik. 2015. Sparse Modeling: Theory, Algorithms, and Applications. Chapman & Hall/CRC Machine Learning & Pattern Recognition Series. Boca Raton, FL: CRC Press, Taylor & Francis Group.

Ročková, Veronika, and Edward I. George. 2018. “The Spike-and-Slab LASSO.” Journal of the American Statistical Association 113 (521): 431–44. https://doi.org/10.1080/01621459.2016.1260469.

Reddi, Sashank J., Suvrit Sra, Barnabás Póczós, and Alex Smola. 2016. “Stochastic Frank-Wolfe Methods for Nonconvex Optimization,” July. https://arxiv.org/abs/1607.08254.

Schelldorfer, Jürg, Peter Bühlmann, and Sara Van De Geer. 2011. “Estimation for High-Dimensional Linear Mixed-Effects Models Using ℓ1-Penalization.” Scandinavian Journal of Statistics 38 (2): 197–214. https://doi.org/10.1111/j.1467-9469.2011.00740.x.

She, Yiyuan, and Art B. Owen. 2010. “Outlier Detection Using Nonconvex Penalized Regression.” http://statweb.stanford.edu/~owen/reports/theta-ipod.pdf.

Simon, Noah, Jerome Friedman, Trevor Hastie, and Rob Tibshirani. 2011. “Regularization Paths for Cox’s Proportional Hazards Model via Coordinate Descent.” Journal of Statistical Software 39 (5). http://www.jstatsoft.org/v39/i05/paper.

Smith, Virginia, Simone Forte, Michael I. Jordan, and Martin Jaggi. 2015. “L1-Regularized Distributed Optimization: A Communication-Efficient Primal-Dual Framework,” December. http://arxiv.org/abs/1512.04011.

Soh, Yong Sheng, and Venkat Chandrasekaran. 2017. “A Matrix Factorization Approach for Learning Semidefinite-Representable Regularizers,” January. http://arxiv.org/abs/1701.01207.

Soltani, Mohammadreza, and Chinmay Hegde. 2016. “Demixing Sparse Signals from Nonlinear Observations.” Statistics 7: 9. http://home.engineering.iastate.edu/~chinmay/files/papers/demix_ISUTR.pdf.

Starck, J. L., Michael Elad, and David L. Donoho. 2005. “Image Decomposition via the Combination of Sparse Representations and a Variational Approach.” IEEE Transactions on Image Processing 14 (10): 1570–82. https://doi.org/10.1109/TIP.2005.852206.

Stine, Robert A. 2004. “Discussion of ‘Least Angle Regression’ by Efron et al.” The Annals of Statistics 32 (2): 407–99. http://arxiv.org/abs/math/0406471.

Su, Weijie, Malgorzata Bogdan, and Emmanuel J. Candès. 2015. “False Discoveries Occur Early on the Lasso Path,” November. http://arxiv.org/abs/1511.01957.

Taddy, Matt. 2013. “One-Step Estimator Paths for Concave Regularization,” August. http://arxiv.org/abs/1308.5623.

Thisted, Ronald A. 1997. “[The Gaussian Hare and the Laplacian Tortoise: Computability of Squared-Error Versus Absolute-Error Estimators]: Comment.” Statistical Science 12 (4): 296–98. http://www.jstor.org/stable/2246217.

Thrampoulidis, Christos, Ehsan Abbasi, and Babak Hassibi. 2015. “LASSO with Non-Linear Measurements Is Equivalent to One with Linear Measurements.” In Advances in Neural Information Processing Systems 28, edited by C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, 3402–10. Curran Associates, Inc. http://papers.nips.cc/paper/5739-lasso-with-non-linear-measurements-is-equivalent-to-one-with-linear-measurements.pdf.

Tibshirani, Robert. 1996. “Regression Shrinkage and Selection via the Lasso.” Journal of the Royal Statistical Society. Series B (Methodological) 58 (1): 267–88. http://statweb.stanford.edu/~tibs/lasso/lasso.pdf.

———. 2011. “Regression Shrinkage and Selection via the Lasso: A Retrospective.” Journal of the Royal Statistical Society: Series B (Statistical Methodology) 73 (3): 273–82. https://doi.org/10.1111/j.1467-9868.2011.00771.x.

Tibshirani, Ryan J. 2014. “A General Framework for Fast Stagewise Algorithms,” August. http://arxiv.org/abs/1408.5801.

Trofimov, Ilya, and Alexander Genkin. 2015. “Distributed Coordinate Descent for L1-Regularized Logistic Regression.” In Analysis of Images, Social Networks and Texts, edited by Mikhail Yu Khachay, Natalia Konstantinova, Alexander Panchenko, Dmitry I. Ignatov, and Valeri G. Labunets, 243–54. Communications in Computer and Information Science 542. Springer International Publishing. https://doi.org/10.1007/978-3-319-26123-2_24.

———. 2016. “Distributed Coordinate Descent for Generalized Linear Models with Regularization,” November. http://arxiv.org/abs/1611.02101.

Tropp, J. A., and S. J. Wright. 2010. “Computational Methods for Sparse Solution of Linear Inverse Problems.” Proceedings of the IEEE 98 (6): 948–58. https://doi.org/10.1109/JPROC.2010.2044010.

Tschannen, Michael, and Helmut Bölcskei. 2016. “Noisy Subspace Clustering via Matching Pursuits,” December. http://arxiv.org/abs/1612.03450.

Uematsu, Yoshimasa. 2015. “Penalized Likelihood Estimation in High-Dimensional Time Series Models and Its Application,” April. http://arxiv.org/abs/1504.06706.

Unser, Michael A., and Pouya Tafti. 2014. An Introduction to Sparse Stochastic Processes. New York: Cambridge University Press. http://www.sparseprocesses.org/sparseprocesses-123456.pdf.

Unser, M., P. D. Tafti, A. Amini, and H. Kirshner. 2014. “A Unified Formulation of Gaussian Vs Sparse Stochastic Processes - Part II: Discrete-Domain Theory.” IEEE Transactions on Information Theory 60 (5): 3036–51. https://doi.org/10.1109/TIT.2014.2311903.

Unser, M., P. D. Tafti, and Q. Sun. 2014. “A Unified Formulation of Gaussian Vs Sparse Stochastic Processes—Part I: Continuous-Domain Theory.” IEEE Transactions on Information Theory 60 (3): 1945–62. https://doi.org/10.1109/TIT.2014.2298453.

Veitch, Victor, and Daniel M. Roy. 2015. “The Class of Random Graphs Arising from Exchangeable Random Measures,” December. http://arxiv.org/abs/1512.03099.

Wahba, Grace. 1990. Spline Models for Observational Data. SIAM.

Wang, Hansheng, Guodong Li, and Guohua Jiang. 2007. “Robust Regression Shrinkage and Consistent Variable Selection Through the LAD-Lasso.” Journal of Business & Economic Statistics 25 (3): 347–55. https://doi.org/10.1198/073500106000000251.

Wang, L., M. D. Gordon, and J. Zhu. 2006. “Regularized Least Absolute Deviations Regression and an Efficient Algorithm for Parameter Tuning.” In Sixth International Conference on Data Mining (ICDM’06), 690–700. https://doi.org/10.1109/ICDM.2006.134.

Wang, Zhangyang, Shiyu Chang, Qing Ling, Shuai Huang, Xia Hu, Honghui Shi, and Thomas S. Huang. 2016. “Stacked Approximated Regression Machine: A Simple Deep Learning Approach.” In. https://arxiv.org/abs/1608.04062.

Wisdom, Scott, Thomas Powers, James Pitton, and Les Atlas. 2016. “Interpretable Recurrent Neural Networks Using Sequential Sparse Recovery.” In Advances in Neural Information Processing Systems 29. http://arxiv.org/abs/1611.07252.

Woodworth, Joseph, and Rick Chartrand. 2015. “Compressed Sensing Recovery via Nonconvex Shrinkage Penalties,” April. http://arxiv.org/abs/1504.02923.

Wright, S. J., R. D. Nowak, and M. A. T. Figueiredo. 2009. “Sparse Reconstruction by Separable Approximation.” IEEE Transactions on Signal Processing 57 (7): 2479–93. https://doi.org/10.1109/TSP.2009.2016892.

Wu, Tong Tong, and Kenneth Lange. 2008. “Coordinate Descent Algorithms for Lasso Penalized Regression.” The Annals of Applied Statistics 2 (1): 224–44. https://doi.org/10.1214/07-AOAS147.

Xu, H., C. Caramanis, and S. Mannor. 2010. “Robust Regression and Lasso.” IEEE Transactions on Information Theory 56 (7): 3561–74. https://doi.org/10.1109/TIT.2010.2048503.

Yaghoobi, M., Sangnam Nam, R. Gribonval, and M. E. Davies. 2012. “Noise Aware Analysis Operator Learning for Approximately Cosparse Signals.” In 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 5409–12. https://doi.org/10.1109/ICASSP.2012.6289144.

Yang, Wenzhuo, and Huan Xu. 2013. “A Unified Robust Regression Model for Lasso-Like Algorithms.” In ICML (3), 585–93. http://www.jmlr.org/proceedings/papers/v28/yang13e.pdf.

Yoshida, Ryo, and Mike West. 2010. “Bayesian Learning in Sparse Graphical Factor Models via Variational Mean-Field Annealing.” Journal of Machine Learning Research 11 (May): 1771–98. http://www.jmlr.org/papers/v11/yoshida10a.html.

Yuan, Ming, and Yi Lin. 2006. “Model Selection and Estimation in Regression with Grouped Variables.” Journal of the Royal Statistical Society: Series B (Statistical Methodology) 68 (1): 49–67. https://doi.org/10.1111/j.1467-9868.2005.00532.x.

———. 2007. “Model Selection and Estimation in the Gaussian Graphical Model.” Biometrika 94 (1): 19–35. https://doi.org/10.1093/biomet/asm018.

Yun, Sangwoon, and Kim-Chuan Toh. 2009. “A Coordinate Gradient Descent Method for ℓ 1-Regularized Convex Minimization.” Computational Optimization and Applications 48 (2): 273–307. https://doi.org/10.1007/s10589-009-9251-8.

Zhang, Cun-Hui. 2010. “Nearly Unbiased Variable Selection Under Minimax Concave Penalty.” The Annals of Statistics 38 (2): 894–942. https://doi.org/10.1214/09-AOS729.

Zhang, Cun-Hui, and Stephanie S. Zhang. 2014. “Confidence Intervals for Low Dimensional Parameters in High Dimensional Linear Models.” Journal of the Royal Statistical Society: Series B (Statistical Methodology) 76 (1): 217–42. https://doi.org/10.1111/rssb.12026.

Zhang, Lijun, Tianbao Yang, Rong Jin, and Zhi-Hua Zhou. 2015. “Sparse Learning for Large-Scale and High-Dimensional Data: A Randomized Convex-Concave Optimization Approach,” November. http://arxiv.org/abs/1511.03766.

Zhao, Peng, Guilherme Rocha, and Bin Yu. 2009. “The Composite Absolute Penalties Family for Grouped and Hierarchical Variable Selection.” The Annals of Statistics 37 (6A): 3468–97. https://doi.org/10.1214/07-AOS584.

Zhao, Tuo, Han Liu, and Tong Zhang. 2018. “Pathwise Coordinate Optimization for Sparse Learning: Algorithm and Theory.” The Annals of Statistics 46 (1): 180–218. https://doi.org/10.1214/17-AOS1547.

Zhou, Tianyi, Dacheng Tao, and Xindong Wu. 2011. “Manifold Elastic Net: A Unified Framework for Sparse Dimension Reduction.” Data Mining and Knowledge Discovery 22 (3): 340–71. http://link.springer.com/article/10.1007/s10618-010-0182-x.

Zou, Hui. 2006. “The Adaptive Lasso and Its Oracle Properties.” Journal of the American Statistical Association 101 (476): 1418–29. https://doi.org/10.1198/016214506000000735.

Zou, Hui, and Trevor Hastie. 2005. “Regularization and Variable Selection via the Elastic Net.” Journal of the Royal Statistical Society: Series B (Statistical Methodology) 67 (2): 301–20. https://doi.org/10.1111/j.1467-9868.2005.00503.x.

Zou, Hui, Trevor Hastie, and Robert Tibshirani. 2007. “On the ‘Degrees of Freedom’ of the Lasso.” The Annals of Statistics 35 (5): 2173–92. https://doi.org/10.1214/009053607000000127.

Zou, Hui, and Runze Li. 2008. “One-Step Sparse Estimates in Nonconcave Penalized Likelihood Models.” The Annals of Statistics 36 (4): 1509–33. https://doi.org/10.1214/009053607000000802.