
Learning of manifolds

Also topological data analysis; other hip names to follow

Usefulness: 🔧
Novelty: 💡
Uncertainty: 🤪 🤪 🤪
Incompleteness: 🚧 🚧 🚧

Figure: manifolds in genome search (Berger, Daniels and Yu)

As in – handling your high-dimensional, or graphical, data by trying to discover a low(er)-dimensional manifold that contains it. That is, inferring a hidden constraint that happens to have the form of a smooth surface of some low-ish dimension. Related: learning on manifolds.

There are a million different versions of this. Multidimensional scaling seems to be the oldest.

Tangential aside: in dynamical systems we talk about constructing a very high-dimensional Takens embedding for state-space reconstruction of arbitrary nonlinear dynamics. I imagine there are connections between learning the lower-dimensional manifold on which your data lies and the higher-dimensional manifold in which your data’s state space is naturally expressed. But I would not be the first person to notice this, so hopefully it’s been done for me somewhere?

See also kernel methods, which do regression on an implicit manifold (how do you reconcile these two views, btw?), and functional regression, where the manifold isn’t even necessarily low-dimensional, although it is typically still smooth in some sense.

See also information geometry, which doesn’t give you a manifold for your data, but a manifold in which the parametric model itself is embedded.

To look at: Isomap, locally linear embedding, spectral embeddings, multidimensional scaling…

Bioinformatics is leading to some weird uses of data manifolds; see for example BeDY16 for the performance implications of knowing the manifold shape for *-omics search, using compressive manifold storage based on both fractal-dimension and metric-entropy concepts. There is also a suggestive connection with the fitness landscape in evolution.

Neural networks define some implicit manifolds, if you squint right. See Christopher Olah’s visual explanation of how, whose diagrams should be stolen by someone trying to explain VC dimension.

MoSF13 argue:

Manifold learning algorithms have recently played a crucial role in unsupervised learning tasks such as clustering and nonlinear dimensionality reduction[…] Many such algorithms have been shown to be equivalent to Kernel PCA (KPCA) with data dependent kernels, itself equivalent to performing classical multidimensional scaling (cMDS) in a high dimensional feature space (Schölkopf et al., 1998; Williams, 2002; Bengio et al., 2004).[…] Recently, it has been observed that the majority of manifold learning algorithms can be expressed as a regularized loss minimization of a reconstruction matrix, followed by a singular value truncation (Neufeld et al., 2012)
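The KPCA/cMDS equivalence the quote mentions is easiest to check in the linear special case, where classical MDS of the Euclidean distance matrix recovers the PCA scores exactly. A minimal numpy sketch of that sanity check (my own illustration, not code from any of the cited papers):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))

# Classical MDS: double-centre the squared-distance matrix to obtain a Gram
# matrix, then take its top eigenvectors scaled by root-eigenvalues.
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # squared Euclidean distances
n = X.shape[0]
J = np.eye(n) - np.ones((n, n)) / n                  # centring matrix
B = -0.5 * J @ D2 @ J                                # Gram matrix of centred data
w, V = np.linalg.eigh(B)
top = np.argsort(w)[::-1][:2]
cmds = V[:, top] * np.sqrt(w[top])                   # 2-d cMDS embedding

# PCA scores of the centred data, for comparison.
Xc = X - X.mean(axis=0)
U, s, _ = np.linalg.svd(Xc, full_matrices=False)
pca = U[:, :2] * s[:2]

# The two embeddings agree up to a sign flip per column.
print(np.allclose(np.abs(cmds), np.abs(pca), atol=1e-6))  # True
```

Swapping the Euclidean Gram matrix for a data-dependent kernel matrix is, per the quote, what turns this construction into the various manifold learners.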

Implementations

TTK

The Topology ToolKit (TTK) is an open-source library and software collection for topological data analysis in scientific visualization.

TTK can handle scalar data defined either on regular grids or triangulations, in either 2D or 3D. It provides a substantial collection of generic, efficient and robust implementations of key algorithms in topological data analysis.

scikit-learn

scikit-learn implements a grab-bag of these algorithms in its sklearn.manifold module.
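To give a flavour of that grab-bag, here is a minimal sketch (assuming a recent scikit-learn; the classes named are real sklearn.manifold estimators) embedding the classic S-curve, a 2-d manifold sitting in 3-d, with three of the algorithms mentioned above:

```python
from sklearn.datasets import make_s_curve
from sklearn.manifold import Isomap, LocallyLinearEmbedding, TSNE

# 1000 points on a 2-d manifold (the S-curve) embedded in 3-d.
X, _ = make_s_curve(n_samples=1000, random_state=0)

# All of these share the fit_transform interface and a target dimension.
embeddings = {
    "isomap": Isomap(n_neighbors=10, n_components=2).fit_transform(X),
    "lle": LocallyLinearEmbedding(n_neighbors=10, n_components=2).fit_transform(X),
    "t-sne": TSNE(n_components=2, random_state=0).fit_transform(X),
}
for name, Y in embeddings.items():
    print(name, Y.shape)  # each embedding is (1000, 2)
```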

tapkee

C++: Tapkee. Pro-tip – even without coding, tapkee performs a long list of nice dimensionality reductions from the CLI, some of which are explicitly manifold learners (and the rest are matrix factorisations, which is not so different).
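For instance, something like the following should embed a plain-text data matrix with Isomap. This is an assumed invocation from memory of the tapkee docs – the flag names are unverified, so check tapkee --help before trusting it:

```sh
# Assumed invocation – verify the flag names against `tapkee --help`.
tapkee -i data.dat -o embedding.dat -m isomap
```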

Refs

Arjovsky, Martin, Soumith Chintala, and Léon Bottou. 2017. “Wasserstein Generative Adversarial Networks.” In International Conference on Machine Learning, 214–23. http://proceedings.mlr.press/v70/arjovsky17a.html.

Aste, Tomaso, Ruggero Gramatica, and T Di Matteo. 2012. “Exploring Complex Networks via Topological Embedding on Surfaces.” Physical Review E 86 (3): 036109. https://doi.org/10.1103/PhysRevE.86.036109.

Aswani, Anil, Peter Bickel, and Claire Tomlin. 2011. “Regression on Manifolds: Estimation of the Exterior Derivative.” The Annals of Statistics 39 (1): 48–81. https://doi.org/10.1214/10-AOS823.

Belkin, Mikhail, and Partha Niyogi. 2003. “Laplacian Eigenmaps for Dimensionality Reduction and Data Representation.” Neural Computation 15 (6): 1373–96. https://doi.org/10.1162/089976603321780317.

Bengio, Yoshua, Aaron Courville, and Pascal Vincent. 2013. “Representation Learning: A Review and New Perspectives.” IEEE Transactions on Pattern Analysis and Machine Intelligence 35: 1798–1828. https://doi.org/10.1109/TPAMI.2013.50.

Berger, Bonnie, Noah M. Daniels, and Y. William Yu. 2016. “Computational Biology in the 21st Century: Scaling with Compressive Algorithms.” Communications of the ACM 59 (8): 72–80. https://doi.org/10.1145/2957324.

Carlsson, Gunnar, Tigran Ishkhanov, Vin de Silva, and Afra Zomorodian. 2008. “On the Local Behavior of Spaces of Natural Images.” International Journal of Computer Vision 76 (1): 1–12. https://doi.org/10.1007/s11263-007-0056-x.

Chen, Minhua, J. Silva, J. Paisley, Chunping Wang, D. Dunson, and L. Carin. 2010. “Compressive Sensing on Manifolds Using a Nonparametric Mixture of Factor Analyzers: Algorithm and Performance Bounds.” IEEE Transactions on Signal Processing 58 (12): 6140–55. https://doi.org/10.1109/TSP.2010.2070796.

DeVore, Ronald A. 1998. “Nonlinear Approximation.” Acta Numerica 7 (January): 51–150. https://doi.org/10.1017/S0962492900002816.

Diaconis, Persi, and David Freedman. 1984. “Asymptotics of Graphical Projection Pursuit.” The Annals of Statistics 12 (3): 793–815. http://www.jstor.org/stable/2240961.

———. 1986. “On the Consistency of Bayes Estimates.” The Annals of Statistics 14 (1): 1–26. http://www.jstor.org/stable/2241255.

Donoho, David L., and Carrie Grimes. 2003. “Hessian Eigenmaps: Locally Linear Embedding Techniques for High-Dimensional Data.” Proceedings of the National Academy of Sciences 100 (10): 5591–6. https://doi.org/10.1073/pnas.1031596100.

Freund, Yoav, Sanjoy Dasgupta, Mayank Kabra, and Nakul Verma. 2007. “Learning the Structure of Manifolds Using Random Projections.” In Advances in Neural Information Processing Systems, 473–80. http://machinelearning.wustl.edu/mlpapers/paper_files/NIPS2007_133.pdf.

Gashler, Mike, and Tony Martinez. 2012. “Robust Manifold Learning with CycleCut.” Connection Science 24 (1): 57–69. https://doi.org/10.1080/09540091.2012.664122.

Hadsell, R., S. Chopra, and Y. LeCun. 2006. “Dimensionality Reduction by Learning an Invariant Mapping.” In 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2:1735–42. https://doi.org/10.1109/CVPR.2006.100.

Hall, Peter, and Ker-Chau Li. 1993. “On Almost Linearity of Low Dimensional Projections from High Dimensional Data.” The Annals of Statistics 21 (2): 867–89. http://www.jstor.org/stable/2242265.

Hawe, S., M. Kleinsteuber, and K. Diepold. 2013. “Analysis Operator Learning and Its Application to Image Reconstruction.” IEEE Transactions on Image Processing 22 (6): 2138–50. https://doi.org/10.1109/TIP.2013.2246175.

He, Xiaofei, and Partha Niyogi. 2003. “Locality Preserving Projections.” In Proceedings of the 16th International Conference on Neural Information Processing Systems, 16:153–60. NIPS’03. Cambridge, MA, USA: MIT Press. https://papers.nips.cc/paper/2359-locality-preserving-projections.pdf.

Huckemann, Stephan F., Peter T. Kim, Ja-Yong Koo, and Axel Munk. 2010. “Möbius Deconvolution on the Hyperbolic Plane with Application to Impedance Density Estimation.” The Annals of Statistics 38 (4): 2465–98. https://doi.org/10.1214/09-AOS783.

Kemp, Charles, and Joshua B Tenenbaum. 2008. “The Discovery of Structural Form.” Proceedings of the National Academy of Sciences 105 (31): 10687–92. https://doi.org/10.1073/pnas.0802631105.

Lahiri, Subhaneil, Peiran Gao, and Surya Ganguli. 2016. “Random Projections of Random Manifolds,” July. http://arxiv.org/abs/1607.04331.

Maaten, Laurens van der, and Geoffrey Hinton. 2008. “Visualizing Data Using t-SNE.” Journal of Machine Learning Research 9 (Nov): 2579–2605. http://www.jmlr.org/papers/v9/vandermaaten08a.html.

Abou-Moustafa, Karim, Dale Schuurmans, and Frank Ferrie. 2013. “Learning a Metric Space for Neighbourhood Topology Estimation: Application to Manifold Learning.” In JMLR: Workshop and Conference Proceedings, 341–56. http://jmlr.org/proceedings/papers/v29/Moustafa13.html.

Mukherjee, Sayan, Qiang Wu, and Ding-Xuan Zhou. 2010. “Learning Gradients on Manifolds.” Bernoulli 16 (1): 181–207. https://doi.org/10.3150/09-BEJ206.

Roweis, Sam T., and Lawrence K. Saul. 2000. “Nonlinear Dimensionality Reduction by Locally Linear Embedding.” Science 290 (5500): 2323–6. https://doi.org/10.1126/science.290.5500.2323.

Saul, Lawrence K., and Sam T. Roweis. 2003. “Think Globally, Fit Locally: Unsupervised Learning of Low Dimensional Manifolds.” The Journal of Machine Learning Research 4 (December): 119–55. https://doi.org/10.1162/153244304322972667.

Schölkopf, Bernhard, Alexander Smola, and Klaus-Robert Müller. 1997. “Kernel Principal Component Analysis.” In Artificial Neural Networks — ICANN’97, edited by Wulfram Gerstner, Alain Germond, Martin Hasler, and Jean-Daniel Nicoud, 583–88. Lecture Notes in Computer Science. Springer Berlin Heidelberg. https://doi.org/10.1007/BFb0020217.

———. 1998. “Nonlinear Component Analysis as a Kernel Eigenvalue Problem.” Neural Computation 10 (5): 1299–1319. https://doi.org/10.1162/089976698300017467.

Shaw, Blake, and Tony Jebara. 2009. “Structure Preserving Embedding.” In Proceedings of the 26th Annual International Conference on Machine Learning, 937–44. ICML ’09. New York, NY, USA: ACM. https://doi.org/10.1145/1553374.1553494.

Shieh, Albert D., Tatsunori B. Hashimoto, and Edoardo M. Airoldi. 2011. “Tree Preserving Embedding.” Proceedings of the National Academy of Sciences 108 (41): 16916–21. https://doi.org/10.1073/pnas.1018393108.

Smola, Alex J., Robert C. Williamson, Sebastian Mika, and Bernhard Schölkopf. 1999. “Regularized Principal Manifolds.” In Computational Learning Theory, edited by Paul Fischer and Hans Ulrich Simon, 214–29. Lecture Notes in Computer Science 1572. Springer Berlin Heidelberg. http://link.springer.com/chapter/10.1007/3-540-49097-3_17.

Song, Dongjin, and Dacheng Tao. 2010. “Biologically Inspired Feature Manifold for Scene Classification.” IEEE Transactions on Image Processing: A Publication of the IEEE Signal Processing Society 19 (1): 174–84. https://doi.org/10.1109/TIP.2009.2032939.

Steinke, Florian, and Matthias Hein. 2009. “Non-Parametric Regression Between Manifolds.” In Advances in Neural Information Processing Systems 21, 1561–8. Curran Associates, Inc. http://machinelearning.wustl.edu/mlpapers/paper_files/NIPS2008_0692.pdf.

Tenenbaum, Joshua B., Vin de Silva, and John C. Langford. 2000. “A Global Geometric Framework for Nonlinear Dimensionality Reduction.” Science 290 (5500): 2319–23. http://web.mit.edu/cocosci/Papers/sci_reprint.pdf.

Wang, Boyue, Yongli Hu, Junbin Gao, Yanfeng Sun, Haoran Chen, and Baocai Yin. 2017. “Locality Preserving Projections for Grassmann Manifold.” In Proceedings of IJCAI, 2017. http://arxiv.org/abs/1704.08458.

Weinberger, Kilian Q., Fei Sha, and Lawrence K. Saul. 2004. “Learning a Kernel Matrix for Nonlinear Dimensionality Reduction.” In Proceedings of the Twenty-First International Conference on Machine Learning, 106. ICML ’04. New York, NY, USA: ACM. https://doi.org/10.1145/1015330.1015345.

Williams, Christopher K. I. 2002. “On a Connection Between Kernel PCA and Metric Multidimensional Scaling.” Machine Learning 46 (1): 11–19. https://doi.org/10.1023/A:1012485807823.

Wu, Qiang, Justin Guinney, Mauro Maggioni, and Sayan Mukherjee. 2010. “Learning Gradients: Predictive Models That Infer Geometry and Statistical Dependence.” The Journal of Machine Learning Research 11: 2175–98. http://dl.acm.org/citation.cfm?id=1859926.

Yin, M., J. Gao, and Z. Lin. 2016. “Laplacian Regularized Low-Rank Representation and Its Applications.” IEEE Transactions on Pattern Analysis and Machine Intelligence 38 (3): 504–17. https://doi.org/10.1109/TPAMI.2015.2462360.

Yu, Yaoliang, James Neufeld, Ryan Kiros, Xinhua Zhang, and Dale Schuurmans. 2012. “Regularizers Versus Losses for Nonlinear Dimensionality Reduction: A Factored View with New Convex Relaxations.” In ICML 2012. http://arxiv.org/abs/1206.6455.

Zhou, Tianyi, Dacheng Tao, and Xindong Wu. 2011. “Manifold Elastic Net: A Unified Framework for Sparse Dimension Reduction.” Data Mining and Knowledge Discovery 22 (3): 340–71. http://link.springer.com/article/10.1007/s10618-010-0182-x.

Zhu, Jun-Yan, Philipp Krähenbühl, Eli Shechtman, and Alexei A. Efros. 2016. “Generative Visual Manipulation on the Natural Image Manifold.” In Proceedings of European Conference on Computer Vision. http://arxiv.org/abs/1609.03552.