
Probabilistic programming

a.k.a. Bayesian programming

Usefulness: 🔧
Novelty: 💡
Uncertainty: 🤪 🤪
Incompleteness: 🚧 🚧 🚧
George Ho diagrams probabilistic programming frameworks

This is apparently what we call Bayesian inference these days. When we say Bayesian programming, we might mean a simple hierarchical model, but we want to emphasise the hope that we might even succeed in doing inference for very complicated models indeed, possibly ones without tractable likelihoods of any kind, maybe even Turing-complete ones. Hope in this context means something like “we provide the programming primitives to express, in principle, the awful crazy likelihood structure of your complicated problem, although you are on your own in demonstrating any kind of concentration or convergence for your estimates of its posterior likelihood in the light of data.”

Mostly these tools are based on Markov Chain Monte Carlo sampling, which turns out to be a startlingly general way to grind out the necessary calculations. There are other ways, such as classic conjugate priors, variational methods or reparameterisation flows, and many hybrids thereof.
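
To see why MCMC is so startlingly general, note that the basic recipe needs nothing from the model except an unnormalised (log) posterior density. Here is a minimal random-walk Metropolis sketch in Python/NumPy; the sampler, the toy target and all the names are mine, purely for illustration, and are not taken from any of the frameworks below.

import numpy as np

def metropolis(log_post, init, n_samples=5000, step=0.5, seed=0):
    # Random-walk Metropolis: only an unnormalised log density is required.
    rng = np.random.default_rng(seed)
    x = np.asarray(init, dtype=float)
    lp = log_post(x)
    samples = np.empty((n_samples, x.size))
    for i in range(n_samples):
        proposal = x + step * rng.normal(size=x.size)  # symmetric proposal
        lp_prop = log_post(proposal)
        if np.log(rng.uniform()) < lp_prop - lp:       # accept or reject
            x, lp = proposal, lp_prop
        samples[i] = x
    return samples

# Toy target: posterior for a Gaussian mean with a standard normal prior
# and three observations; any log density whatsoever would do here.
data = np.array([1.2, 0.7, 1.9])
log_post = lambda mu: -0.5 * (mu @ mu) - 0.5 * np.sum((data - mu) ** 2)
samples = metropolis(log_post, init=np.zeros(1))
print(samples.mean(), samples.std())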

See George Ho for an in-depth introduction to what is desirable for solving these problems in practice.

A probabilistic programming framework needs to provide six things:

  1. A language or API for users to specify a model
  2. A library of probability distributions and transformations to build the posterior density
  3. At least one inference algorithm, which either draws samples from the posterior (in the case of Markov Chain Monte Carlo, MCMC) or computes some approximation of it (in the case of variational inference, VI)
  4. At least one optimizer, which can compute the mode of the posterior density
  5. An autodifferentiation library to compute gradients required by the inference algorithm and optimizer
  6. A suite of diagnostics to monitor and analyze the quality of inference
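
To make that list concrete, here is a hedged sketch of how the six components show up in PyMC3 (covered below); the toy regression and variable names are mine, and I assume a reasonably recent PyMC3 (≥3.7) so that the sigma keyword is accepted.

import numpy as np
import pymc3 as pm

x = np.random.normal(size=100)
y = 1.0 + 3.0 * x + 0.5 * np.random.normal(size=100)

with pm.Model() as model:                           # 1. model-specification API
    alpha = pm.Normal("alpha", mu=0.0, sigma=10.0)  # 2. distribution library...
    beta = pm.Normal("beta", mu=0.0, sigma=10.0)
    sigma = pm.HalfNormal("sigma", sigma=1.0)       #    ...building up the posterior density
    pm.Normal("obs", mu=alpha + beta * x, sigma=sigma, observed=y)

    trace = pm.sample(1000, tune=1000)              # 3. inference algorithm (NUTS, an MCMC method)
    map_estimate = pm.find_MAP()                    # 4. optimizer for the posterior mode
                                                    # 5. autodiff is supplied by Theano under the hood
    print(pm.summary(trace))                        # 6. diagnostics (R-hat, effective sample size, ...)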

Stan

Stan is the inference toolbox for broad classes of Bayesian models, and the de facto reference point.

Andrew Gelman notes:

The basic execution structure of Stan is in the JSS paper (by Bob Carpenter, Andrew Gelman, Matt Hoffman, Daniel Lee, Ben Goodrich, Michael Betancourt, Marcus Brubaker, Jiqiang Guo, Peter Li, and Allen Riddell) and in the reference manual. The details of autodiff are in the arXiv paper (by Bob Carpenter, Matt Hoffman, Marcus Brubaker, Daniel Lee, Peter Li, and Michael Betancourt). These are sort of background for what we’re trying to do.

If you haven’t read Maria Gorinova’s MS thesis and POPL paper (with Andrew Gordon and Charles Sutton), you should probably start there.

Radford Neal’s intro to HMC is nice, as is the one in David MacKay’s book. Michael Betancourt’s papers are the thing to read to understand HMC deeply—he just wrote another brain bender on geometric autodiff (all on arXiv). Starting with the one on hierarchical models would be good as it explains the necessity of reparameterizations.

Also I recommend our JEBS paper (with Daniel Lee and Jiqiang Guo) as it presents Stan from a user’s rather than a developer’s perspective.
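
For flavour, this is roughly what driving Stan from Python looks like. A minimal sketch assuming the PyStan 2.x interface (CmdStan, CmdStanPy and RStan are alternative front ends); the toy regression model is mine.

import numpy as np
import pystan

# The model itself is written in Stan's own language: declare data,
# parameters, and a log density built up from distribution statements.
model_code = """
data {
  int<lower=0> N;
  vector[N] x;
  vector[N] y;
}
parameters {
  real alpha;
  real beta;
  real<lower=0> sigma;
}
model {
  alpha ~ normal(0, 10);
  beta ~ normal(0, 10);
  sigma ~ normal(0, 1);            // half-normal, thanks to the <lower=0> constraint
  y ~ normal(alpha + beta * x, sigma);
}
"""

x = np.random.normal(size=100)
y = 1.0 + 3.0 * x + 0.5 * np.random.normal(size=100)

sm = pystan.StanModel(model_code=model_code)   # compiles the model to C++
fit = sm.sampling(data=dict(N=100, x=x, y=y), iter=2000, chains=4)  # NUTS by default
print(fit)                                     # per-parameter summaries, R-hat, n_eff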

Edward

From Blei’s lab; it leverages trendy deep-learning machinery (TensorFlow) for variational Bayes.

Pyro

PyTorch + Bayes = Pyro. For the rationale, see the Pyro launch announcement:

We believe the critical ideas to solve AI will come from a joint effort among a worldwide community of people pursuing diverse approaches. By open sourcing Pyro, we hope to encourage the scientific world to collaborate on making AI tools more flexible, open, and easy-to-use. We expect the current (alpha!) version of Pyro will be of most interest to probabilistic modelers who want to leverage large data sets and deep networks, PyTorch users who want easy-to-use Bayesian computation, and data scientists ready to explore the ragged edge of new technology.
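
Concretely, a Pyro model is an ordinary Python function that calls pyro.sample, and stochastic variational inference against an automatically constructed guide is the usual workhorse. A minimal sketch assuming Pyro ≥1.0; the toy regression and the names in it are mine.

import torch
import pyro
import pyro.distributions as dist
from pyro.infer import SVI, Trace_ELBO
from pyro.infer.autoguide import AutoDiagonalNormal
from pyro.optim import Adam

def model(x, y=None):
    # Priors over the regression parameters
    alpha = pyro.sample("alpha", dist.Normal(0., 10.))
    beta = pyro.sample("beta", dist.Normal(0., 10.))
    sigma = pyro.sample("sigma", dist.HalfNormal(1.))
    # Vectorised likelihood over the observations
    with pyro.plate("data", x.shape[0]):
        pyro.sample("obs", dist.Normal(alpha + beta * x, sigma), obs=y)

x = torch.randn(100)
y = 1.0 + 3.0 * x + 0.5 * torch.randn(100)

guide = AutoDiagonalNormal(model)            # mean-field Gaussian variational family
svi = SVI(model, guide, Adam({"lr": 0.01}), loss=Trace_ELBO())
for step in range(2000):
    svi.step(x, y)                           # one gradient step on the ELBO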

pyprob

pyprob: (Le, Baydin, and Wood 2017)

pyprob is a PyTorch-based library for probabilistic programming and inference compilation. The main focus of this library is on coupling existing simulation codebases with probabilistic inference with minimal intervention.

The main advantage of pyprob, compared against other probabilistic programming languages like Pyro, is a fully automatic amortized inference procedure based on importance sampling. pyprob only requires a generative model to be specified. In particular, pyprob allows for efficient inference using inference compilation, which trains a recurrent neural network as a proposal network.

In Pyro such an inference network requires the user to explicitly define the control flow of the network, because Pyro runs the inference network and the generative model sequentially. In pyprob, however, the generative model and the inference network run concurrently, so the control flow of the model is used directly to train the inference network, which removes the need to define that control flow manually.

The flagship application seems to be etalumis (Baydin et al. 2019), a probabilistic programming framework with an emphasis, AFAICT, on Bayesian inverse problems.

Turing.jl

Turing.jl

Turing.jl is a Julia library for (universal) probabilistic programming.

It is one of many Julia options.

Gen

Gen:

Gen simplifies the use of probabilistic modeling and inference, by providing modeling languages in which users express models, and high-level programming constructs that automate aspects of inference.

Like some probabilistic programming research languages, Gen includes universal modeling languages that can represent any model, including models with stochastic structure, discrete and continuous random variables, and simulators. However, Gen is distinguished by the flexibility that it affords to users for customizing their inference algorithm.

Gen’s flexible modeling and inference programming capabilities unify symbolic, neural, probabilistic, and simulation-based approaches to modeling and inference, including causal modeling, symbolic programming, deep learning, hierarchical Bayesian modeling, graphics and physics engines, and planning and reinforcement learning.

There is an impressive talk demonstrating how you would use it to interactively clean data.

Miscellaneous Julia options

DynamicHMC does Hamiltonian/NUTS sampling in a raw likelihood setting.

Possibly it is a competitor of Klara.jl, the JuliaStats MCMC library.

Miletus is a financial product and term-structure modeling package, available for quant stuff in Julia as part of the paid package offerings in finance, although it looks like it is also freely available?

PyMC3/PyMC4

PyMC3 is Python + Theano. PyMC4 will depend upon TensorFlow.

See Chris Fonnesbeck’s example in Python.
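
Since the sketch after George Ho’s six-component list above already walked through a full PyMC3 regression, here is just the flavour of swapping inference strategies inside one model context: pm.sample runs NUTS (MCMC) by default, while pm.fit runs ADVI, with Theano supplying the gradients for both. A hedged sketch assuming a recent PyMC3; the toy model is mine.

import numpy as np
import pymc3 as pm

y = np.random.normal(loc=1.0, scale=2.0, size=200)    # toy data

with pm.Model():
    mu = pm.Normal("mu", mu=0.0, sigma=10.0)
    sigma = pm.HalfNormal("sigma", sigma=5.0)
    pm.Normal("obs", mu=mu, sigma=sigma, observed=y)

    trace = pm.sample(1000, tune=1000, cores=1)        # MCMC: draws posterior samples
    approx = pm.fit(10000, method="advi")              # VI: fits an approximation instead
    vi_trace = approx.sample(1000)                     # then sample from that approximation
    print(pm.summary(trace))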

Greta

Greta

greta models are written right in R, so there’s no need to learn another language like BUGS or Stan

greta uses Google TensorFlow so it’s fast even on massive data sets, and runs on CPU clusters and GPUs

The implication that TensorFlow means speed does not necessarily follow, but yeah.

Soss.jl

Soss.jl

Soss is a library for probabilistic programming.

Let’s jump right in with a simple linear model:

using Soss

m = @model X begin
    β ~ Normal() |> iid(size(X,2))
    y ~ For(eachrow(X)) do x
        Normal(x' * β, 1)
    end
end;

In Soss, models are first-class and function-like, and “applying” a model to its arguments gives a joint distribution.

The Soss README lists quite a few other things we can do, such as sampling from the forward model and conditioning on observed data.

Zhusuan

Zhusuan.

ZhuSuan is a Python probabilistic programming library for Bayesian deep learning, which conjoins the complementary advantages of Bayesian methods and deep learning. ZhuSuan is built upon TensorFlow. Unlike existing deep learning libraries, which are mainly designed for deterministic neural networks and supervised tasks, ZhuSuan provides deep learning style primitives and algorithms for building probabilistic models and applying Bayesian inference. The supported inference algorithms include variational inference, importance sampling, Hamiltonian Monte Carlo, and stochastic gradient MCMC.

Church/Anglican

Level up your esotericism with Church, a general-purpose Turing-complete Monte Carlo Lisp derivative, which is unbearably slow but does some reputedly cute tricks with modeling human problem-solving and other likelihood-free methods, according to creators Noah Goodman and Joshua Tenenbaum.

See also Anglican, which is the same but different, being built in Clojure, and hence also able to leverage ClojureScript in the browser.

WebPPL

WebPPL is a successor to Church, designed as a teaching language for probabilistic reasoning in the browser. For those who like their ML in JavaScript.

Refs

Baydin, Atılım Güneş, Lei Shao, Wahid Bhimji, Lukas Heinrich, Lawrence Meadows, Jialin Liu, Andreas Munk, et al. 2019. “Etalumis: Bringing Probabilistic Programming to Scientific Simulators at Scale,” August. http://arxiv.org/abs/1907.03382.

Carroll, Colin. n.d. “A Tour of Probabilistic Programming Language APIs.” https://colcarroll.github.io/ppl-api/.

Cusumano-Towner, Marco, Benjamin Bichsel, Timon Gehr, Martin Vechev, and Vikash K. Mansinghka. 2018. “Incremental Inference for Probabilistic Programs.” In Proceedings of the 39th ACM SIGPLAN Conference on Programming Language Design and Implementation, 571–85. PLDI 2018. New York, NY, USA: ACM. https://doi.org/10.1145/3192366.3192399.

Cusumano-Towner, Marco F., and Vikash K. Mansinghka. 2017. “Encapsulating Models and Approximate Inference Programs in Probabilistic Modules,” May. http://arxiv.org/abs/1612.04759.

———. 2018. “Using Probabilistic Programs as Proposals,” January. http://arxiv.org/abs/1801.03612.

Cusumano-Towner, Marco F., Feras A. Saad, Alexander K. Lew, and Vikash K. Mansinghka. 2019. “Gen: A General-Purpose Probabilistic Programming System with Programmable Inference.” In Proceedings of the 40th ACM SIGPLAN Conference on Programming Language Design and Implementation, 221–36. PLDI 2019. New York, NY, USA: ACM. https://doi.org/10.1145/3314221.3314642.

Cusumano-Towner, Marco, and Vikash K. Mansinghka. 2018. “A Design Proposal for Gen: Probabilistic Programming with Fast Custom Inference via Code Generation.” In Proceedings of the 2Nd ACM SIGPLAN International Workshop on Machine Learning and Programming Languages, 52–57. MAPL 2018. New York, NY, USA: ACM. https://doi.org/10.1145/3211346.3211350.

Gelman, Andrew, Daniel Lee, and Jiqiang Guo. 2015. “Stan: A Probabilistic Programming Language for Bayesian Inference and Optimization.” Journal of Educational and Behavioral Statistics 40 (5): 530–43. https://doi.org/10.3102/1076998615606113.

Goodrich, Ben, Andrew Gelman, Matthew D. Hoffman, Daniel Lee, Bob Carpenter, Michael Betancourt, Marcus Brubaker, Jiqiang Guo, Peter Li, and Allen Riddell. 2017. “Stan: A Probabilistic Programming Language.” Journal of Statistical Software 76 (1). https://doi.org/10.18637/jss.v076.i01.

Gorinova, Maria I., Andrew D. Gordon, and Charles Sutton. 2019. “Probabilistic Programming with Densities in SlicStan: Efficient, Flexible and Deterministic.” Proceedings of the ACM on Programming Languages 3 (POPL): 1–30. https://doi.org/10.1145/3290348.

Kochurov, Max, Colin Carroll, Thomas Wiecki, and Junpeng Lao. 2019. “PyMC4: Exploiting Coroutines for Implementing a Probabilistic Programming Framework,” September. https://openreview.net/forum?id=rkgzj5Za8H.

Lao, Junpeng. 2019. “A Hitchhiker’s Guide to Designing a Bayesian Library in Python.” Presentation Slides presented at the PyData Córdoba, Córdoba, Argentina, September 29. https://docs.google.com/presentation/d/1xgNRJDwkWjTHOYMj5aGefwWiV8x-Tz55GfkBksZsN3g/edit?usp=sharing.

Le, Tuan Anh, Atılım Güneş Baydin, and Frank Wood. 2017. “Inference Compilation and Universal Probabilistic Programming.” In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (AISTATS), 54:1338–48. Proceedings of Machine Learning Research. Fort Lauderdale, FL, USA: PMLR. http://arxiv.org/abs/1610.09900.

Moore, Dave, and Maria I. Gorinova. 2018. “Effect Handling for Composable Program Transformations in Edward2,” November. http://arxiv.org/abs/1811.06150.

Pradhan, Neeraj, Jonathan P. Chen, Martin Jankowiak, Fritz Obermeyer, Eli Bingham, Theofanis Karaletsos, Rohit Singh, Paul Szerlip, Paul Horsfall, and Noah D. Goodman. 2018. “Pyro: Deep Universal Probabilistic Programming,” October. http://arxiv.org/abs/1810.09538.

PyMC Development Team. 2019. “PyMC3 Developer Guide.” https://docs.pymc.io/developer_guide.html.

Rainforth, Tom. 2017. “Automating Inference, Learning, and Design Using Probabilistic Programming.” PhD Thesis, University of Oxford. http://www.robots.ox.ac.uk/~twgr/assets/pdf/rainforth2017thesis.pdf.

Salvatier, John, Thomas V. Wiecki, and Christopher Fonnesbeck. 2016. “Probabilistic Programming in Python Using PyMC3.” PeerJ Computer Science 2 (April): e55. https://doi.org/10.7717/peerj-cs.55.

Tran, Dustin, Matthew D. Hoffman, Rif A. Saurous, Eugene Brevdo, Kevin Murphy, and David M. Blei. 2017. “Deep Probabilistic Programming.” In ICLR. http://arxiv.org/abs/1701.03757.

Tran, Dustin, Alp Kucukelbir, Adji B. Dieng, Maja Rudolph, Dawen Liang, and David M. Blei. 2016. “Edward: A Library for Probabilistic Modeling, Inference, and Criticism,” October. http://arxiv.org/abs/1610.09787.

Vasudevan, Srinivas, Ian Langmore, Dustin Tran, Eugene Brevdo, Joshua V. Dillon, Dave Moore, Brian Patton, Alex Alemi, Matt Hoffman, and Rif A. Saurous. 2017. “TensorFlow Distributions,” November. http://arxiv.org/abs/1711.10604.