Fast multipole methods

Efficiently approximating fields made up of many decaying sources.

Barnes–Hut algorithms, fast Gauss transforms, generalized multipole methods.

Not something I intend to worry about right now, but I needed to clear these refs out of my overcrowded Mercer kernel approximation notebook. Fast multipole methods can also approximate certain Mercer kernels, in the sense of rapidly and approximately evaluating the field strength induced by the kernel at given points, rather than approximating the kernels themselves with something simpler.
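To fix notation: the primitive being accelerated is the evaluation, at \(M\) target points, of a field generated by \(N\) weighted sources, which costs \(O(NM)\) done directly. A minimal numpy sketch of that direct baseline, using a Gaussian kernel for concreteness; this is the exact sum that the fast methods approximate:

```python
import numpy as np

def direct_gauss_field(sources, weights, targets, h):
    """Naive O(N M) evaluation of the discrete Gauss transform
    g(y_j) = sum_i q_i exp(-||y_j - x_i||^2 / h^2)."""
    # Pairwise squared distances between targets (M, d) and sources (N, d).
    d2 = ((targets[:, None, :] - sources[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / h**2) @ weights

rng = np.random.default_rng(0)
x = rng.uniform(size=(1000, 2))   # N source locations
q = rng.normal(size=1000)         # source weights
y = rng.uniform(size=(500, 2))    # M target locations
g = direct_gauss_field(x, q, y, h=0.2)
```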

See also H-matrices.

Overview on Vikas Raykar’s thesis page, FAST SUMMATION ALGORITHMS:

The art of getting ‘good enough’ solutions ‘as fast as possible’.

Huge data sets containing millions of training examples with a large number of attributes (tall fat data) are relatively easy to gather. However, one of the bottlenecks for successful inference of useful information from the data is the computational complexity of machine learning algorithms. Most state-of-the-art nonparametric machine learning algorithms have a computational complexity of either \(O(N^2)\) or \(O(N^3)\), where \(N\) is the number of training examples. This has seriously restricted the use of massive data sets. The bottleneck computational primitive at the heart of various algorithms is the multiplication of a structured matrix with a vector, which we refer to as the matrix-vector product (MVP) primitive. The goal of my thesis is to speed up these MVP primitives by fast approximate algorithms that scale as \(O(N)\) and also provide high accuracy guarantees. I use ideas from computational physics, scientific computing, and computational geometry to design these algorithms. Currently the proposed algorithms have been applied in kernel density estimation, optimal bandwidth estimation, projection pursuit, Gaussian process regression, implicit surface fitting, and ranking.
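The trick that gets the Gauss transform from \(O(NM)\) down to \(O(N+M)\) (GrSt91) is to expand the kernel about a cluster centre so that the source and target dependence factorize: one sweep over the sources accumulates truncated Hermite moments, and each target then reads the field off those moments. Here is a single-centre, one-dimensional sketch of that factorization; the full algorithm tiles space into boxes and translates expansions between them, which this deliberately omits:

```python
import numpy as np
from math import factorial

def fgt_single_center(x, q, y, h, p=16):
    """Truncated Hermite expansion of the 1-d Gauss transform about a
    single centre c: g(y) ~= sum_{n<p} A_n h_n((y - c)/h), where the
    moments A_n = (1/n!) sum_i q_i ((x_i - c)/h)^n are accumulated in
    one O(N p) pass over sources, then read out in O(M p) over targets."""
    c = x.mean()
    # Source sweep: accumulate the p truncated moments.
    t = (x - c) / h
    A = np.array([(q * t**n).sum() / factorial(n) for n in range(p)])
    # Target sweep: Hermite functions h_n(s) = exp(-s^2) H_n(s) via the
    # recurrence h_{n+1}(s) = 2 s h_n(s) - 2 n h_{n-1}(s).
    s = (y - c) / h
    H = np.empty((p, len(y)))
    H[0] = np.exp(-s**2)
    if p > 1:
        H[1] = 2 * s * H[0]
    for n in range(1, p - 1):
        H[n + 1] = 2 * s * H[n] - 2 * n * H[n - 1]
    return A @ H

rng = np.random.default_rng(1)
h = 0.5
x = rng.normal(0.0, 0.1, size=2000)    # sources clustered within ~h of c
q = rng.normal(size=2000)
y = np.linspace(-0.5, 0.5, 200)        # targets near the same cluster
approx = fgt_single_center(x, q, y, h)
exact = (q * np.exp(-((y[:, None] - x[None, :]) / h) ** 2)).sum(axis=1)
print(np.abs(approx - exact).max())    # error shrinks rapidly with p
```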

Implementation

figtree (C++, with MATLAB and Python bindings) handles Gaussian fields in the inexact case.
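For flavour, here is roughly what driving it from Python might look like. The module name, function name, and argument order below are all my assumptions, not the documented interface, so treat this purely as a sketch and check the figtree README for the real signature:

```python
# Hypothetical sketch only: the binding's actual name and signature
# may differ from what is assumed here.
import numpy as np
from figtree import figtree  # assumed module/function name

rng = np.random.default_rng(2)
X = rng.uniform(size=(10_000, 3))   # source locations
q = rng.normal(size=10_000)         # source weights
Y = rng.uniform(size=(2_000, 3))    # target locations

# Assumed call: approximate g(y_j) = sum_i q_i exp(-||y_j - x_i||^2 / h^2)
# to within tolerance epsilon, choosing direct/IFGT evaluation automatically.
g = figtree(X, Y, q, h=0.4, epsilon=1e-6)
```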

Refs

BaHu86
Barnes, J., & Hut, P. (1986) A hierarchical O(N log N) force-calculation algorithm. Nature, 324(6096), 446–449. DOI.
BaRo02
Baxter, B., & Roussos, G. (2002) A New Error Estimate of the Fast Gauss Transform. SIAM Journal on Scientific Computing, 24(1), 257–259. DOI.
BoSc00
Board, J., & Schulten, K. (2000) The Fast Multipole Algorithm. Computing in Science & Engineering, 2(1), 76–79. DOI.
CoRW93
Coifman, R., Rokhlin, V., & Wandzura, S. (1993) The fast multipole method for the wave equation: a pedestrian prescription. IEEE Antennas and Propagation Magazine, 35(3), 7–12. DOI.
DoSu00
Dongarra, J., & Sullivan, F. (2000) Guest Editors’ Introduction: The Top 10 Algorithms. Computing in Science & Engineering, 2(1), 22–23. DOI.
ElDD03
Elgammal, A., Duraiswami, R., & Davis, L. S. (2003) Efficient kernel density estimation using the fast gauss transform with applications to color modeling and tracking. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(11), 1499–1504. DOI.
GrRo87
Greengard, L., & Rokhlin, V. (1987) A fast algorithm for particle simulations. Journal of Computational Physics, 73(2), 325–348. DOI.
GrSt91
Greengard, L., & Strain, J. (1991) The Fast Gauss Transform. SIAM Journal on Scientific and Statistical Computing, 12(1), 79–94. DOI.
MSRD09
Morariu, V. I., Srinivasan, B. V., Raykar, V. C., Duraiswami, R., & Davis, L. S. (2009) Automatic online tuning for fast Gaussian summation. In Advances in Neural Information Processing Systems (pp. 1113–1120).
RaDu05
Raykar, V. C., & Duraiswami, R. (2005) The improved fast Gauss transform with applications to machine learning. Presented at NIPS.
Rokh85
Rokhlin, V. (1985) Rapid solution of integral equations of classical potential theory. Journal of Computational Physics, 60(2), 187–207. DOI.
ScTo06
Schwab, C., & Todor, R. A. (2006) Karhunen–Loève approximation of random fields by generalized fast multipole methods. Journal of Computational Physics, 217(1), 100–122. DOI.
SiSz03
Simoncini, V., & Szyld, D. (2003) Theory of Inexact Krylov Subspace Methods and Applications to Scientific Computing. SIAM Journal on Scientific Computing, 25(2), 454–477. DOI.
Vors00
Vorst, H. A. van der. (2000) Krylov Subspace Iteration. Computing in Science & Engineering, 2(1), 32–37. DOI.
YaDD04
Yang, C., Duraiswami, R., & Davis, L. S. (2004) Efficient kernel machines using the improved fast Gauss transform. In Advances in Neural Information Processing Systems (pp. 1561–1568).
YDGD03
Yang, C., Duraiswami, R., Gumerov, N. A., & Davis, L. (2003) Improved Fast Gauss Transform and Efficient Kernel Density Estimation. In Proceedings of the Ninth IEEE International Conference on Computer Vision - Volume 2 (p. 464–). Washington, DC, USA: IEEE Computer Society. DOI.