# Coarse graining

AFAICT, this is the question ‘how much worse do your predictions get as you discard information in some orderly fashion?’, as framed by physicists.
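One concrete reading of that question: lump together the states of a Markov chain and measure how far the lumped model's one-step predictions drift from the truth. A minimal sketch, with an invented 3-state chain and an invented lumping (all numbers are illustrative, not from any source):

```python
# Fine-grained 3-state Markov chain: rows are current state,
# columns are next-state probabilities. Toy numbers.
P = [[0.8, 0.1, 0.1],
     [0.2, 0.6, 0.2],
     [0.2, 0.2, 0.6]]

# Coarse-graining map: lump states {0, 1} into block A, keep {2} as block B.
blocks = [[0, 1], [2]]

def step(dist, P):
    """One step of the chain: push a distribution through P."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

# Stationary distribution by power iteration.
pi = [1 / 3] * 3
for _ in range(500):
    pi = step(pi, P)

def coarse_matrix(P, pi, blocks):
    """Block-level transition matrix: weight the states inside each
    block by the stationary distribution, then sum over target blocks."""
    Q = []
    for bi in blocks:
        total = sum(pi[s] for s in bi)
        Q.append([sum(pi[s] / total * sum(P[s][t] for t in bj)
                      for s in bi)
                  for bj in blocks])
    return Q

Q = coarse_matrix(P, pi, blocks)

# Prediction loss: total variation distance between the true block-level
# next-step distribution from state 0 and the coarse model's prediction.
true_next = [sum(P[0][t] for t in b) for b in blocks]
coarse_next = Q[0]
tv = 0.5 * sum(abs(a - b) for a, b in zip(true_next, coarse_next))
```

When the chain is exactly lumpable (the block row-sums agree across the states inside each block) the loss is zero; this toy chain is not lumpable, so the coarse model mispredicts slightly, and `tv` quantifies by how much.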

Do “renormalisation groups”, whatever they are, fit in here? How about Scholtes and his time-respecting networks?

Where the coarse graining is itself a stochastic process, is this just a hierarchical model, in the statistical sense?

To consider: the algorithmic statistics angle, the pseudorandomness angle, the topological angle as exemplified by the suggestive utility of sigma-algebras and filtrations here.

Persistent homology is a recent technique in computational topology developed for shape recognition and the analysis of high-dimensional datasets [36,37]. It has been used in very diverse fields, ranging from biology [38,39] and sensor network coverage [40] to cosmology [41]. Similar approaches to brain data [42,43], collaboration data [44] and network structure [45] also exist. The central idea is the construction of a sequence of successive approximations of the original dataset seen as a topological space X. This sequence of topological spaces $X_0, X_1, \dots, X_N = X$ is such that $X_i \subseteq X_j$ whenever $i < j$ and is called the filtration. Choosing how to construct a filtration from the data is equivalent to choosing the type of goggles one wears to analyse the data.
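The nesting condition $X_i \subseteq X_j$ is easy to demonstrate by hand. A minimal sketch of a Vietoris–Rips-style filtration on a toy point cloud (the point coordinates and thresholds are invented for illustration; real work would reach for a dedicated library such as GUDHI):

```python
import itertools
import math

def rips_filtration(points, thresholds, max_dim=2):
    """Build a nested sequence of Vietoris-Rips complexes.

    A simplex (a tuple of point indices) enters the complex at
    threshold t once all pairwise distances among its vertices are
    <= t, which guarantees X_i ⊆ X_j whenever
    thresholds[i] <= thresholds[j].
    """
    n = len(points)
    complexes = []
    for t in thresholds:
        complex_t = set()
        for k in range(1, max_dim + 2):  # simplices with k vertices
            for verts in itertools.combinations(range(n), k):
                if all(math.dist(points[a], points[b]) <= t
                       for a, b in itertools.combinations(verts, 2)):
                    complex_t.add(verts)
        complexes.append(complex_t)
    return complexes

# Four points at the corners of a unit square.
pts = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
filtration = rips_filtration(pts, thresholds=[0.5, 1.0, 1.5])

# The filtration is nested: X_0 ⊆ X_1 ⊆ X_2.
assert all(a <= b for a, b in zip(filtration, filtration[1:]))
```

Swapping the Rips rule for, say, sublevel sets of a function or a witness complex gives a different filtration from the same data, which is exactly the choice of "goggles" the quoted passage describes.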