The intersection of probability, ignorance, and algorithms, butting up against computational complexity, coding theory, dynamical systems, and ergodic theory. When is the relation between things sufficiently unstructured that we may treat them as random? Stochastic approximations to deterministic algorithms. Kolmogorov complexity. Compressibility and Shannon information. A sideswipe at deterministic chaos: chaotic systems treated as if stochastic. (Are “real” systems not precisely that?) Statistical mechanics and ergodicity.
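Kolmogorov complexity itself is uncomputable, but a general-purpose compressor gives a crude, computable upper bound, which is enough to make the structured/random contrast concrete. A minimal sketch (the threshold values in the comments are illustrative, not canonical):

```python
# Sketch: a general-purpose compressor as a crude, computable stand-in for
# (uncomputable) Kolmogorov complexity. Structured strings compress well;
# algorithmically random ones essentially do not compress at all.
import os
import zlib

def compressed_ratio(data: bytes) -> float:
    """Compressed size over original size; a rough complexity proxy."""
    return len(zlib.compress(data, 9)) / len(data)

structured = b"01" * 5000       # highly regular: a tiny program generates it
random_ish = os.urandom(10000)  # incompressible with high probability

print(compressed_ratio(structured))  # far below 1
print(compressed_ratio(random_ish))  # near (or slightly above) 1, from zlib overhead
```

The asymmetry is the point: compression certifies structure when it succeeds, but failure to compress never certifies randomness.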
I saw a provocative talk by Daniela Andrés on, nominally, Parkinson's disease. The grabbing part was her discussion of the care and feeding of neural “codewords” and the information theory of the brain, conducted in the (to me) foreign language of “algorithmic statistics” and “Kolmogorov structure functions”. I have no idea what she meant. This is a placeholder to remind me to come back and see if it is as useful as it sounded like it might be.
To consider: relationship between underlying event space and measures we construct on such spaces. How much topology is lost by laundering our events through pullback of a (e.g. probability) measure?
Chazelle:
The discrepancy method has produced the most fruitful line of attack on a pivotal computer science question: What is the computational power of random bits? It has also played a major role in recent developments in complexity theory. This book tells the story of the discrepancy method in a few succinct independent vignettes. The chapters explore such topics as communication complexity, pseudorandomness, rapidly mixing Markov chains, points on a sphere, derandomization, convex hulls and Voronoi diagrams, linear programming, geometric sampling and VC-dimension theory, minimum spanning trees, circuit complexity, and multidimensional searching. The mathematical treatment is thorough and self-contained, with minimal prerequisites. More information can be found on the book's home page.
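A toy illustration (mine, not from the book) of what discrepancy measures: in one dimension the star discrepancy of a point set has a closed form, and a low-discrepancy van der Corput sequence beats i.i.d. uniform points, which is exactly the gap that derandomization via the discrepancy method trades on:

```python
# Sketch: 1-D star discrepancy D* of a point set in [0,1), comparing the
# base-2 van der Corput sequence (D* = O(log n / n)) with i.i.d. uniform
# random points (D* ~ n**-0.5 in expectation).
import random

def van_der_corput(n: int, base: int = 2) -> list[float]:
    """First n terms of the base-b van der Corput sequence."""
    seq = []
    for i in range(1, n + 1):
        x, denom, k = 0.0, 1.0, i
        while k:
            k, digit = divmod(k, base)
            denom *= base
            x += digit / denom    # reflect the digits of i about the radix point
        seq.append(x)
    return seq

def star_discrepancy_1d(points: list[float]) -> float:
    """Exact D* for a 1-D point set, via the sorted-points formula."""
    xs = sorted(points)
    n = len(xs)
    return max(max((i + 1) / n - x, x - i / n) for i, x in enumerate(xs))

n = 1024
print(star_discrepancy_1d(van_der_corput(n)))                    # small: O(log n / n)
print(star_discrepancy_1d([random.random() for _ in range(n)]))  # larger: ~ n**-0.5
```

In higher dimensions D* has no cheap closed form, which is one reason the discrepancy method is a theory rather than a subroutine.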
Cosma Shalizi's upcoming textbook has the world's pithiest summary:
In fact, what we really have to assume is that the relationships between the causes omitted from the DAG and those included is so intricate and convoluted that it might as well be noise, along the lines of algorithmic information theory (Li and Vitányi, 1997), whose key result might be summed up as “Any determinism distinguishable from randomness is insufficiently complex”.

As far as I can tell, the main research thrust of the Max Planck Institute in Leipzig is to solve a kind of inverse problem: inspecting inferred probabilities for evidence of computation. Is that… sane? Why would one wish to do that?

Just had this recommended by David Donoho: Mézard, M. (2009). Information, physics, and computation. Oxford; New York: Oxford University Press. Online.
Here, a John Baez talk on foundational issues.
Information-based complexity theory
Is this a specialty within this field?
Information-based complexity (IBC) is the branch of computational complexity that studies problems for which the information is partial, contaminated, and priced.
To motivate these assumptions about information consider the problem of the numerical computation of an integral. Here, the integrands consist of functions defined over the d-dimensional unit cube. Since a digital computer can store only a finite set of numbers, these functions must be replaced by such finite sets (by, for example, evaluating the functions at a finite number of points). Therefore, we have only partial information about the functions. Furthermore, the function values may be contaminated by roundoff error. Finally, evaluating the functions can be expensive, and so computing these values has a price.
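The integration setting above can be sketched in a few lines: estimate an integral over the unit cube from finitely many point evaluations (the “partial information”). Plain Monte Carlo does this with error shrinking like n⁻¹ᐟ², independent of dimension; the test integrand here is my own choice:

```python
# Sketch of the IBC motivating example: estimate the integral of f over the
# d-dimensional unit cube using only n point evaluations of f (partial
# information). Monte Carlo error decays like n**-0.5 regardless of d.
import random

def mc_integrate(f, d: int, n: int, rng: random.Random) -> float:
    """Average of f at n uniform random points in [0,1]^d."""
    return sum(f([rng.random() for _ in range(d)]) for _ in range(n)) / n

# Illustrative integrand: f(x) = sum_i x_i over the 5-cube; true integral d/2 = 2.5.
f = lambda x: sum(x)
rng = random.Random(42)
for n in (100, 10_000):
    est = mc_integrate(f, d=5, n=n, rng=rng)
    print(n, est, abs(est - 2.5))
```

IBC then asks for the minimal number of evaluations any algorithm needs to guarantee a given error, rather than the cost of one particular scheme.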

The R. J. Lipton post on IBC, linking it to complexity theory:
Now as ordinary complexity theorists, our first instinct would be to define properties intrinsic to the function {f} and try to prove they cause high complexity for any algorithm. Making a continuous analogy to concepts in discrete Boolean complexity, drawing on papers like this by Noam Nisan and Mario Szegedy, we would try to tailor an effective measure of “sensitivity.” We would talk about functions {f} that resemble the {n}-ary parity function in respect of sensitivity but don’t have a simple known integral. Notions of {f} being “isotropic” could cut both ways—they could make the sensitivity pervasive but could enable a good global estimate of the integral.
IBC, however, focuses on properties of algorithms and restrictions on the kind of inputs they are given. Parlett’s general objection is that doing so begs the question of a proper complexity theory and reverts to the standard—and hallowed enough—domain of ordinary numerical analysis of algorithms.
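The parity function invoked in the quote is the canonical maximally sensitive Boolean function: flipping any single input bit flips the output, at every input. A quick brute-force check (the `majority` comparison is my own illustrative foil):

```python
# Sketch: sensitivity of a Boolean function f: {0,1}^n -> {0,1} is the
# maximum, over inputs x, of the number of single-bit flips that change f(x).
# Parity achieves the maximum possible value, n, at every input.
from itertools import product

def sensitivity(f, n: int) -> int:
    """Brute-force sensitivity of f over all 2^n inputs."""
    return max(
        sum(f(x[:i] + (1 - x[i],) + x[i + 1:]) != f(x) for i in range(n))
        for x in product((0, 1), repeat=n)
    )

parity = lambda x: sum(x) % 2
majority = lambda x: int(sum(x) > len(x) / 2)

print(sensitivity(parity, 5))    # 5: every bit flip changes the parity
print(sensitivity(majority, 5))  # 3: only flips near the threshold matter
```

The continuous analogue Lipton gestures at would replace bit flips with perturbations of a real-valued integrand, which is where the analogy gets delicate.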
Refs
 Solo64a: R.J. Solomonoff (1964a) A formal theory of inductive inference Part I. Information and Control, 7(1), 1–22. DOI
 Solo64b: R.J. Solomonoff (1964b) A formal theory of inductive inference Part II. Information and Control, 7(2), 224–254. DOI
 Vali84: L. G. Valiant (1984) A theory of the learnable. Commun. ACM, 27(11), 1134–1142. DOI
 Chai77: Gregory J Chaitin (1977) Algorithmic information theory. IBM Journal of Research and Development.
 GáTV01: Péter Gács, John T. Tromp, Paul M. B. Vitányi (2001) Algorithmic statistics. IEEE Transactions on Information Theory, 47(6), 2443–2463. DOI
 Nort13: John D. Norton (2013) All Shook Up: Fluctuations, Maxwell’s Demon and the Thermodynamics of Computation. Entropy, 15(10), 4432–4483. DOI
 LiVi09: Ming Li, Paul M. B. Vitányi (2009) An introduction to Kolmogorov complexity and its applications. Springer
 DGJS09: Ilias Diakonikolas, Parikshit Gopalan, Ragesh Jaiswal, Rocco Servedio, Emanuele Viola (2009) Bounded Independence Fools Halfspaces (No. 016)
 Riss07: Jorma Rissanen (2007) Information and complexity in statistical modeling. New York: Springer
 Calu02: Cristian S Calude (2002) Information and Randomness : An Algorithmic Perspective. Springer
 MeMo09: Marc Mézard, Andrea Montanari (2009) Information, physics, and computation. Oxford; New York: Oxford University Press
 Chai88: Gregory J Chaitin (1988) Information, Randomness and Incompleteness: Papers on Algorithmic Information Theory (World Scientific Series in Computer Science, Vol 8). World Scientific Pub Co Inc
 Legg06: Shane Legg (2006) Is There an Elegant Universal Theory of Prediction? In Algorithmic Learning Theory (pp. 274–287). Springer Berlin Heidelberg
 CoGG89: Thomas M. Cover, Péter Gács, Robert M. Gray (1989) Kolmogorov’s Contributions to Information Theory and Algorithmic Complexity. The Annals of Probability, 17(3), 840–865. DOI
 VeVi04: N. K. Vereshchagin, Paul M. B. Vitányi (2004) Kolmogorov’s structure functions and model selection. IEEE Transactions on Information Theory, 50(12), 3265–3290. DOI
 BHMS19: Shai Ben-David, Pavel Hrubeš, Shay Moran, Amir Shpilka, Amir Yehudayoff (2019) Learnability can be undecidable. Nature Machine Intelligence, 1(1), 44. DOI
 KoCM06: Leonid Kontorovich, Corinna Cortes, Mehryar Mohri (2006) Learning Linearly Separable Languages. In Algorithmic Learning Theory (pp. 288–303). Springer Berlin Heidelberg
 Vitá06: Paul M Vitányi (2006) Meaningful information. IEEE Transactions on Information Theory, 52(10), 4617–4626. DOI
 HaYu01: Mark H. Hansen, Bin Yu (2001) Model Selection and the Principle of Minimum Description Length. Journal of the American Statistical Association, 96(454), 746–774.
 BEHW87: Anselm Blumer, Andrzej Ehrenfeucht, David Haussler, Manfred K. Warmuth (1987) Occam’s Razor. Information Processing Letters, 24(6), 377–380. DOI
 CMRR08: Corinna Cortes, Mehryar Mohri, Ashish Rastogi, Michael Riley (2008) On the computation of the relative entropy of probabilistic automata. International Journal of Foundations of Computer Science, 19(01), 219–242. DOI
 Chai66: Gregory J Chaitin (1966) On the length of programs for computing finite binary sequences. Journal of the ACM (JACM), 13(4), 547–569.
 Chai69: Gregory J. Chaitin (1969) On the Simplicity and Speed of Programs for Computing Infinite Sets of Natural Numbers. J. ACM, 16(3), 407–422. DOI
 VaCh71: V. Vapnik, A. Chervonenkis (1971) On the Uniform Convergence of Relative Frequencies of Events to Their Probabilities. Theory of Probability & Its Applications, 16(2), 264–280. DOI
 CFWS06: Alexander Clark, Christophe Costa Florêncio, Chris Watkins, Mariette Serayet (2006) Planar Languages and Learnability. In Grammatical Inference: Algorithms and Applications (pp. 148–160). Springer Berlin Heidelberg
 Brav08: Mark Braverman (2008) Polylogarithmic Independence Fools AC0 Circuits. J. ACM, 57(5), 28:1–28:10. DOI
 ShYC06: Takeshi Shibata, Ryo Yoshinaka, Takashi Chikayama (2006) Probabilistic Generalization of Simple Grammars and Its Application to Reinforcement Learning. In Algorithmic Learning Theory (pp. 348–362). Springer Berlin Heidelberg
 VeVi10: N. K. Vereshchagin, Paul M. B. Vitányi (2010) Rate Distortion and Denoising of Individual Data Using Kolmogorov Complexity. IEEE Transactions on Information Theory, 56(7), 3438–3454. DOI
 LiWa86: N. Littlestone, M. K. Warmuth (1986) Relating Data Compression and Learnability
 Göde40: K. Gödel (1940) The Consistency of the Continuum Hypothesis
 Chaz01: Bernard Chazelle (2001) The discrepancy method: randomness and complexity. Cambridge: Cambridge Univ. Press
 Davi07: Paul C. W. Davies (2007) The Implications of a Cosmological Information Bound for Complexity, Quantum Information and the Nature of Physical Law. arXiv:quant-ph/0703041.
 Cohe63: Paul J. Cohen (1963) The Independence of the Continuum Hypothesis. Proceedings of the National Academy of Sciences, 50(6), 1143–1148. DOI
 Cohe64: Paul J. Cohen (1964) The Independence of the Continuum Hypothesis, II. Proceedings of the National Academy of Sciences, 51(1), 105–110. DOI
 Chai02: Gregory J Chaitin (2002) The intelligibility of the universe and the notions of simplicity, complexity and irreducibility.
 Kolm68: A N Kolmogorov (1968) Three approaches to the quantitative definition of information. International Journal of Computer Mathematics, 2(1), 157–168.
 Göde31: Kurt Gödel (1931) Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I. Monatshefte für Mathematik und Physik, 38(1), 173–198. DOI
 KuNi12: L. Kuipers, H. Niederreiter (2012) Uniform Distribution of Sequences. Dover Publications
 Reyz19: Lev Reyzin (2019) Unprovability comes to machine learning. Nature, 565(7738), 166. DOI