Coarse graining

November 12, 2014 — December 2, 2015

algebra
Bayes
machine learning
networks
physics
sciml
snarks
statmech
surrogate

AFAICT, this is the question ‘how much worse do your predictions get as you discard information in some orderly fashion?’, as framed by physicists.

Do “renormalisation groups”, whatever they are, fit in here? Fast-slow systems?
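Fast-slow systems, at least, have a concrete coarse-graining story via averaging/homogenization (cf. Kelly and Melbourne 2014): the slow variable approximately obeys an ODE in which the fast variable has been averaged out. A minimal sketch of that idea on a toy system of my own invention (nothing here is from the paper):

```python
# Toy fast-slow system: slow x driven by a fast phase y.
# As eps -> 0 the forcing sin(y) averages to zero, so the
# coarse-grained model dX/dt = -X predicts x to O(eps).
import numpy as np
from scipy.integrate import solve_ivp

eps = 1e-2

def full(t, z):
    x, y = z
    return [-x + np.sin(y), 1.0 / eps]  # y sweeps its period every 2*pi*eps

def averaged(t, X):
    return [-X[0]]  # sin(y) has mean zero over the fast timescale

t_eval = np.linspace(0.0, 5.0, 200)
fine = solve_ivp(full, (0.0, 5.0), [1.0, 0.0], t_eval=t_eval, rtol=1e-8)
coarse = solve_ivp(averaged, (0.0, 5.0), [1.0], t_eval=t_eval, rtol=1e-8)

# Discarding the fast variable costs only O(eps) in prediction accuracy.
print(np.max(np.abs(fine.y[0] - coarse.y[0])))
```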

The ML equivalent seems to be multi-fidelity modelling.
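That is: correct a cheap coarse model with a few expensive fine-model runs. A sketch of the standard additive-correction recipe, in the spirit of Kennedy–O'Hagan; the functions, kernels and sample sizes below are all invented for illustration:

```python
# Additive-correction multi-fidelity surrogate (toy construction):
# fit the cheap model from many samples, then learn the discrepancy
# between fidelities from a handful of expensive samples.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def f_hi(x):            # "expensive" high-fidelity truth
    return np.sin(8 * x) + x

def f_lo(x):            # cheap, systematically biased low-fidelity model
    return np.sin(8 * x)

x_lo = np.linspace(0, 1, 50)[:, None]   # plentiful cheap evaluations
x_hi = np.linspace(0, 1, 6)[:, None]    # scarce expensive evaluations

gp_lo = GaussianProcessRegressor(kernel=RBF(0.1), alpha=1e-8)
gp_lo.fit(x_lo, f_lo(x_lo.ravel()))

# Model the discrepancy f_hi - gp_lo at the few high-fidelity points.
resid = f_hi(x_hi.ravel()) - gp_lo.predict(x_hi)
gp_delta = GaussianProcessRegressor(kernel=RBF(0.3), alpha=1e-8)
gp_delta.fit(x_hi, resid)

x_test = np.linspace(0, 1, 200)[:, None]
pred = gp_lo.predict(x_test) + gp_delta.predict(x_test)
print(np.max(np.abs(pred - f_hi(x_test.ravel()))))  # small on this toy
```

The data budget is the point: many cheap runs pin down the shape, a few expensive ones pin down the bias.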

1 Persistent Homology

What’s that? Petri et al. (2014):

Persistent homology is a recent technique in computational topology developed for shape recognition and the analysis of high dimensional datasets.… The central idea is the construction of a sequence of successive approximations of the original dataset seen as a topological space X. This sequence of topological spaces \(X_0, X_1, \dots, X_N = X\) is such that \(X_i \subseteq X_j\) whenever \(i < j\) and is called the filtration.
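Concretely, a common filtration is the Vietoris–Rips construction: connect points whose distance is below a scale \(\varepsilon\), and let \(\varepsilon\) grow. A hand-rolled sketch of just the nesting property (my own illustration, not the Petri et al. pipeline; actual persistence computations would use a library such as GUDHI or Ripser):

```python
# Vietoris-Rips filtration (1-skeleton only): as the scale eps grows,
# each complex contains the previous one, giving X_0 ⊆ X_1 ⊆ ... ⊆ X_N.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))  # the dataset, viewed as a sampled space

def rips_edges(points, eps):
    """Edge set of the Vietoris-Rips complex at scale eps."""
    return {
        (i, j)
        for i, j in combinations(range(len(points)), 2)
        if np.linalg.norm(points[i] - points[j]) <= eps
    }

scales = [0.5, 1.0, 2.0, 4.0]
complexes = [rips_edges(X, eps) for eps in scales]

# The nesting that makes this sequence a filtration:
for smaller, larger in zip(complexes, complexes[1:]):
    assert smaller <= larger
print([len(c) for c in complexes])  # the approximations grow monotonically
```

Persistent homology then tracks which topological features (components, loops, voids) are born and die as the scale sweeps through this filtration; long-lived features are taken as robust structure of the data.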

2 References

Bar-Sinai, Hoyer, Hickey, et al. 2019. “Learning Data-Driven Discretizations for Partial Differential Equations.” Proceedings of the National Academy of Sciences.
Bar-Yam. 2003. Dynamics Of Complex Systems.
Castiglione, and Falcioni. 2008. Chaos and Coarse Graining in Statistical Mechanics.
Kelly, and Melbourne. 2014. “Deterministic Homogenization for Fast-Slow Systems with Chaotic Noise.”
Noid. 2013. “Perspective: Coarse-Grained Models for Biomolecular Systems.” The Journal of Chemical Physics.
Petri, Expert, Turkheimer, et al. 2014. “Homological Scaffolds of Brain Functional Networks.” Journal of The Royal Society Interface.
Plis, Danks, and Yang. 2015. “Mesochronal Structure Learning.” Proceedings of the Conference on Uncertainty in Artificial Intelligence.
Voth. 2008. Coarse-Graining of Condensed Phase and Biomolecular Systems.