
Algorithmic statistics

The intersection between probability, ignorance, and algorithms, butting up against computational complexity, coding theory, dynamical systems, and ergodic theory. When is the relation between things sufficiently unstructured that we may treat them as random? Stochastic approximations to deterministic algorithms. Kolmogorov complexity. Compressibility and Shannon information. A sideswipe at deterministic chaos. Chaotic systems treated as if stochastic. (Are “real” systems not precisely that?) Statistical mechanics and ergodicity.
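
A minimal sketch of “chaotic systems treated as if stochastic” (my own illustration, not from any source cited here): the logistic map at r = 4 is completely deterministic, yet thresholding its orbit at 1/2 yields a bit stream that simple checks cannot tell from fair coin flips. The initial condition, burn-in and sample size below are arbitrary choices.

```python
import numpy as np

def logistic_bits(x0, n, burn_in=1000):
    """Iterate the deterministic logistic map x -> 4x(1-x) and emit the bit [x > 1/2].

    The r = 4 map is conjugate to the tent/doubling map, so the symbol sequence
    behaves like independent fair coin flips, despite zero randomness in the rule.
    """
    x = x0
    for _ in range(burn_in):              # discard the transient
        x = 4.0 * x * (1.0 - x)
    bits = np.empty(n, dtype=np.int8)
    for i in range(n):
        x = 4.0 * x * (1.0 - x)
        bits[i] = x > 0.5
    return bits

bits = logistic_bits(0.123456789, 100_000)
print("mean bit             :", bits.mean())                              # ~0.5
print("lag-1 autocorrelation:", np.corrcoef(bits[:-1], bits[1:])[0, 1])   # ~0.0
```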

I saw a provocative talk by Daniela Andrés on, nominally, Parkinson's disease. The grabbing part was her discussion of the care and feeding of neural “codewords” and the information theory of the brain, conducted in the (to me) foreign language of “algorithmic statistics” and “Kolmogorov structure functions”. I have no idea what she meant. This is a placeholder to remind me to come back and see if it is as useful as it sounded like it might be.

To consider: the relationship between an underlying event space and the measures we construct on it. How much topology is lost by laundering our events through the pullback of a (e.g. probability) measure?
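
One way I might make that concrete (my gloss, not anything from the sources here): a measurable map only transports events by preimage, so the induced measure records which events get which mass and nothing of the topology that generated them.

```latex
% T : (\Omega, \mathcal{F}, \mu) \to (X, \mathcal{B}) a measurable map; the induced
% ("pushforward") measure evaluates an event A by pulling it back to \Omega.
% Only the \sigma-algebra is used; any topology that generated \mathcal{F} is invisible.
\[
  T_{*}\mu(A) \;=\; \mu\bigl(T^{-1}(A)\bigr), \qquad A \in \mathcal{B}.
\]
```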

Chazelle:

The discrepancy method has produced the most fruitful line of attack on a pivotal computer science question: What is the computational power of random bits? It has also played a major role in recent developments in complexity theory. This book tells the story of the discrepancy method in a few succinct independent vignettes. The chapters explore such topics as communication complexity, pseudo-randomness, rapidly mixing Markov chains, points on a sphere, derandomization, convex hulls and Voronoi diagrams, linear programming, geometric sampling and VC-dimension theory, minimum spanning trees, circuit complexity, and multidimensional searching. The mathematical treatment is thorough and self-contained, with minimal prerequisites. More information can be found on the book's home page.

Random number generation

Cosma Shalizi's upcoming textbook has the world's pithiest summary:

In fact, what we really have to assume is that the relationships between the causes omitted from the DAG and those included is so intricate and convoluted that it might as well be noise, along the lines of algorithmic information theory (Li and Vitányi, 1997), whose key result might be summed up as “Any determinism distinguishable from randomness is insufficiently complex”.
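
A crude way to poke at that slogan, using a general-purpose compressor as a loose, computable stand-in for Kolmogorov complexity (which is itself uncomputable). The choice of zlib and of these particular byte strings is mine, purely for illustration:

```python
import os
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed size over original size; lower means more regularity was found."""
    return len(zlib.compress(data, 9)) / len(data)

n = 100_000
periodic = (b"algorithmic statistics " * (n // 23 + 1))[:n]   # transparently deterministic
random_bytes = os.urandom(n)                                   # OS-supplied randomness

print("repetitive text:", compression_ratio(periodic))        # far below 1: determinism found
print("random bytes   :", compression_ratio(random_bytes))    # ~1: nothing to exploit
```

Packing the logistic-map bits from the earlier sketch into bytes and compressing them should likewise give a ratio near 1: deterministic, but not a determinism this compressor can distinguish from randomness.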

Here, a John Baez talk on foundational issues.

Information-based complexity theory

Is this a specialty within this field?

IBC website:

Information-based complexity (IBC) is the branch of computational complexity that studies problems for which the information is partial, contaminated, and priced.

To motivate these assumptions about information consider the problem of the numerical computation of an integral. Here, the integrands consist of functions defined over the d-dimensional unit cube. Since a digital computer can store only a finite set of numbers, these functions must be replaced by such finite sets (by, for example, evaluating the functions at a finite number of points). Therefore, we have only partial information about the functions. Furthermore, the function values may be contaminated by round-off error. Finally, evaluating the functions can be expensive, and so computing these values has a price.
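
A toy rendering of that motivating example (my own sketch, not code from the IBC literature): estimate an integral over the d-dimensional unit cube from finitely many function evaluations (partial information), each perturbed by noise (contaminated) and charged a unit cost (priced).

```python
import numpy as np

def integrate_partial_info(f, d, n_evals, noise_sd=1e-3, cost_per_eval=1.0, seed=0):
    """Monte Carlo estimate of the integral of f over the unit cube [0, 1]^d.

    Information is partial (only n_evals point values), contaminated (Gaussian
    noise stands in for round-off error) and priced (each evaluation is charged
    cost_per_eval).
    """
    rng = np.random.default_rng(seed)
    points = rng.random((n_evals, d))                     # finitely many sample points
    values = f(points) + rng.normal(0.0, noise_sd, n_evals)
    return values.mean(), n_evals * cost_per_eval

# Example integrand: sum of squared coordinates; the exact integral over [0,1]^d is d/3.
f = lambda x: (x ** 2).sum(axis=1)
estimate, cost = integrate_partial_info(f, d=5, n_evals=10_000)
print(f"estimate = {estimate:.3f}, exact = {5 / 3:.3f}, information cost = {cost}")
```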

Refs