Fast multipole methods

August 18, 2016 — September 20, 2016

Hilbert space
premature optimization

“Efficiently approximating fields made up of many decaying sources.”

Barnes–Hut algorithms, fast Gauss transforms, generalized multipole methods.

Not something I intend to worry about right now, but I needed to clear these refs out of my overcrowded Mercer kernel approximation notebook. Fast multipole methods can also approximate certain Mercer kernels, in the sense of rapidly and approximately evaluating the field they induce at given points, rather than approximating the kernels themselves with something simpler.
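
To make “rapidly evaluating the field” concrete, here is a minimal tree-code-style sketch (plain NumPy; every name is illustrative rather than taken from any FMM library). For a 1D field of decaying sources, a cluster of sources that is well separated from the target can be replaced by a single aggregate “monopole” at its centre of mass, which is the zeroth-order version of the multipole expansions that Barnes–Hut and FMM organise hierarchically over a spatial tree.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=2000)   # source locations in [0, 1]
q = rng.uniform(0.5, 1.5, size=2000)   # source strengths
y = np.array([5.0, 7.5, 10.0])          # well-separated target points

# Exact field phi(y) = sum_i q_i / |y - x_i|: O(N) work per target.
exact = np.array([(q / np.abs(t - x)).sum() for t in y])

# Far-field approximation: collapse the whole source cluster into one
# aggregate "monopole" (total strength at the centre of mass), as a
# Barnes-Hut cell or zeroth-order multipole expansion would.
Q = q.sum()
xc = (q * x).sum() / Q
approx = Q / np.abs(y - xc)

print(np.abs(approx - exact) / exact)   # relative error falls with separation
```

The relative error shrinks as the targets move further from the source cluster; the full algorithms recover short-range accuracy by recursing on the tree and keeping higher-order expansion terms.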

How do these methods compare to/relate to H-matrices?

Overview on Vikas Raykar’s thesis page, Fast Summation Algorithms:

The art of getting ‘good enough’ solutions ‘as fast as possible’.

Huge data sets containing millions of training examples with a large number of attributes (tall fat data) are relatively easy to gather. However one of the bottlenecks for successful inference of useful information from the data is the computational complexity of machine learning algorithms. Most state-of-the-art nonparametric machine learning algorithms have a computational complexity of either \(O(N^2)\) or \(O(N^3)\), where N is the number of training examples. This has seriously restricted the use of massive data sets. The bottleneck computational primitive at the heart of various algorithms is the multiplication of a structured matrix with a vector, which we refer to as matrix-vector product (MVP) primitive. The goal of my thesis is to speed up these MVP primitives by fast approximate algorithms that scale as \(O(N)\) and also provide high accuracy guarantees. I use ideas from computational physics, scientific computing, and computational geometry to design these algorithms. Currently the proposed algorithms have been applied in kernel density estimation, optimal bandwidth estimation, projection pursuit, Gaussian process regression, implicit surface fitting, and ranking.
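
For concreteness, the MVP primitive in its Gaussian guise is the discrete Gauss transform \(G(y_j) = \sum_i q_i \exp(-\lVert y_j - x_i\rVert^2 / h^2)\). The naive NumPy evaluation below (a hedged sketch, not any particular library’s API) is the \(O(NM)\) baseline that the fast Gauss transform and its relatives approximate in roughly linear time.

```python
import numpy as np

def gauss_transform(x, y, q, h):
    """Exact discrete Gauss transform: kernel matrix K applied to weights q."""
    # Pairwise squared distances between targets y (M, d) and sources x (N, d).
    d2 = ((y[:, None, :] - x[None, :, :]) ** 2).sum(axis=-1)
    K = np.exp(-d2 / h ** 2)   # (M, N) dense structured matrix
    return K @ q               # the matrix-vector product: O(M N) time and memory

rng = np.random.default_rng(1)
x = rng.standard_normal((1000, 3))   # sources
y = rng.standard_normal((500, 3))    # targets
q = rng.standard_normal(1000)        # source weights
g = gauss_transform(x, y, q, h=0.8)  # field values at the 500 targets
```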

1 Implementation

figtree (C++, with MATLAB and Python bindings) implements fast approximate (inexact) evaluation of Gaussian fields via the improved fast Gauss transform.

2 References

Barnes, and Hut. 1986. “A Hierarchical O(N Log N) Force-Calculation Algorithm.” Nature.
Board, and Schulten. 2000. “The Fast Multipole Algorithm.” Computing in Science & Engineering.
Dongarra, and Sullivan. 2000. “Guest Editors’ Introduction: The Top 10 Algorithms.” Computing in Science & Engineering.
Greengard, and Strain. 1991. “The Fast Gauss Transform.” SIAM Journal on Scientific and Statistical Computing.
Lange, and Kutz. 2021. “FC2T2: The Fast Continuous Convolutional Taylor Transform with Applications in Vision and Graphics.” arXiv:2111.00110 [cs].
Raykar, and Duraiswami. 2005. “The Improved Fast Gauss Transform with Applications to Machine Learning.”
Rokhlin. 1985. “Rapid Solution of Integral Equations of Classical Potential Theory.” Journal of Computational Physics.
Schwab, and Todor. 2006. “Karhunen–Loève Approximation of Random Fields by Generalized Fast Multipole Methods.” Journal of Computational Physics, Uncertainty Quantification in Simulation Science.
Simoncini, and Szyld. 2003. “Theory of Inexact Krylov Subspace Methods and Applications to Scientific Computing.” SIAM Journal on Scientific Computing.
van der Vorst. n.d. “Krylov Subspace Iteration.” Computing in Science & Engineering.
Yang, Duraiswami, and Davis. 2004. “Efficient Kernel Machines Using the Improved Fast Gauss Transform.” In Advances in Neural Information Processing Systems.
Yang, Duraiswami, Gumerov, et al. 2003. “Improved Fast Gauss Transform and Efficient Kernel Density Estimation.” In Proceedings of the Ninth IEEE International Conference on Computer Vision, Volume 2 (ICCV ’03).