
Nearly sufficient statistics

How about “Sufficient sufficiency”? Is that taken?

TBD.

I’m working through a small realisation, for my own interest, which has been helpful in my understanding of variational Bayes, specifically in relating it to non-Bayesian variational inference, and also to sequential Monte Carlo.

Starting from the idea of sufficient statistics, we arrive at variational inference in a natural way, via some other interesting stopovers.

I doubt this insight is novel, but I will work through it as if it is, for the sake of my own education.

I will contend that most Bayesian inference (maybe also frequentist inference, but the setup here is easier with Bayes) is most naturally considered as filtering. We have some system whose parameters we wish to estimate, and many experiments performed upon that system over time. We would like to update our understanding of the system using the experiments’ data. Possibly the system is changing, possibly it is not. Either way (or both ways) there is a computational challenge.

(Or the system is not changing, but our model grows as we gain more data about it. How do we handle that case?)

We would like the hypothesis updating to happen in a compact way, so that we can do inference without keeping the whole data set around, but rather use some summary statistics. With exponential family models this is trivial to do: just use conjugate updates and you are done. For models where compactness would be more useful, say, big data and big models without conjugate updates, it is not clear how we can do this exactly.
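To make the conjugate-update case concrete, here is a minimal sketch (my own toy, not a canonical recipe) of filtering-style updates for a Gaussian mean with known observation noise: each batch of data is folded into a two-number state in the natural parameterisation, and the raw observations can then be discarded.

```python
import numpy as np

def init_prior(mu0=0.0, sigma0=10.0):
    """Gaussian prior on the unknown mean, stored as natural parameters
    (precision, precision * mean) so that updates are additive."""
    lam = 1.0 / sigma0**2
    return np.array([lam, lam * mu0])

def update(state, batch, noise_sigma=1.0):
    """Absorb a batch of observations x_i ~ N(theta, noise_sigma^2).
    Only the batch's count and sum are needed, not the raw data."""
    lam_obs = 1.0 / noise_sigma**2
    return state + np.array([lam_obs * len(batch), lam_obs * np.sum(batch)])

def posterior(state):
    """Convert the natural-parameter state back to (mean, sd)."""
    lam, eta = state
    return eta / lam, np.sqrt(1.0 / lam)

rng = np.random.default_rng(0)
state = init_prior()
for _ in range(100):                        # a stream of experiments
    batch = rng.normal(2.5, 1.0, size=50)   # true mean is 2.5
    state = update(state, batch)            # the state stays two numbers
print(posterior(state))                     # approximately (2.5, 0.014)
```

The state never grows, however many experiments arrive, which is exactly the compactness we would like from a sufficient statistic.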

See also mixture models, probabilistic deep learning, directed graphical models, other probability metrics. Is there an intuitive connection with the differential privacy of posterior sampling? (DNZM13)

Possibly related: data summarisation via inducing sets or coresets. Bounded-memory learning? To mention: the Bayesian likelihood principle and the Pitman–Koopman–Darmois theorem.

Sufficient statistics in exponential families

Let’s start with sufficient statistics in exponential families, which, for reasons of historical pedagogy, are the Garden of Eden of Inference, the Garden of Edenference for short. I suspect that deep in their hearts, all statisticians regard themselves as prodigal exiles from the exponential family, and long for the innocence of that Garden of Edenference.

Anyway, informally speaking, here’s what’s going on with inference problems involving sufficient statistics. We are interested in estimating some parameter \(\theta\) using realisations \(x\) of some random process \(X\sim \mathbb{P}(x|\theta).\)

Then \(T(x)\) is a sufficient statistic for \(\theta\) iff \[\mathbb{P}(x|T(x),\theta)= \mathbb{P}(x|T(x)).\] That is, our inference about \(\theta\) depends on the data only through the sufficient statistic.
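As a toy numerical check of that definition (my own choice of example), take i.i.d. Bernoulli trials with \(T(x)=\sum_i x_i\): the conditional probability of any particular sequence given its sum is the same whatever \(\theta\) is.

```python
import itertools

def p_x_given_t_and_theta(x, theta):
    """P(x | T(x) = t, theta) for i.i.d. Bernoulli(theta) trials with
    T(x) = sum(x): the joint probability of the sequence divided by the
    probability that the sum takes the observed value."""
    n, t = len(x), sum(x)
    p_x = theta**t * (1 - theta)**(n - t)
    p_t = sum(
        theta**sum(y) * (1 - theta)**(n - sum(y))
        for y in itertools.product([0, 1], repeat=n)
        if sum(y) == t
    )
    return p_x / p_t

x = [1, 0, 1, 1, 0]
for theta in (0.2, 0.5, 0.9):
    print(theta, p_x_given_t_and_theta(x, theta))
# Every theta gives 1 / C(5, 3) = 0.1: conditional on the sufficient
# statistic, the data carry no further information about theta.
```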

(mention size of sufficient statistic)

The Fisher–Neyman factorisation theorem gives an equivalent characterisation: \(T\) is sufficient for \(\theta\) iff the density factorises as \[ \mathbb{P}(x;\theta)=h(x)\,g(T(x);\theta), \] with all dependence on \(\theta\) routed through \(T(x)\).

Famously, maximum likelihood estimators for exponential family models are highly compressible, in that these models admit sufficient statistics: low-dimensional functions of the data which carry all the information in the complete data set, as far as the parameter estimates are concerned. Many models, data sets, and estimation methods do not have this feature, even parametric models with very few parameters.
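As a sketch of what that compressibility buys (again a toy of my own, not anything from the literature): for i.i.d. Gaussian data the triple \((n, \sum_i x_i, \sum_i x_i^2)\) is sufficient, so the maximum likelihood estimates can be recovered from a constant-size running summary however long the data stream is.

```python
import numpy as np

def empty_summary():
    # (count, sum, sum of squares): sufficient for (mu, sigma^2)
    return np.zeros(3)

def absorb(summary, batch):
    """Fold a batch of observations into the running summary; the batch
    itself can then be thrown away."""
    batch = np.asarray(batch, dtype=float)
    return summary + np.array([batch.size, batch.sum(), np.sum(batch**2)])

def gaussian_mle(summary):
    """Recover the maximum likelihood estimates of the mean and variance
    from the summary alone."""
    n, s, ss = summary
    mu = s / n
    return mu, ss / n - mu**2

rng = np.random.default_rng(1)
summary = empty_summary()
for _ in range(1000):                       # a stream of 1000 batches
    summary = absorb(summary, rng.normal(3.0, 2.0, size=100))
print(gaussian_mle(summary))                # close to (3.0, 4.0)
```

(The naive sum-of-squares accumulator is numerically sloppy for serious use; Welford-style updates would be better, but the point here is the constant-size state.)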

This can be a PITA when your data is very big and you wish to benefit from that size, and yet you can’t fit the data in memory. The question then arises: when can I do better? Can I find a “nearly sufficient” statistic, one that is smaller than my data and yet does not worsen my error substantially? Can I quantify this nearness to the original?
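One way to make the question concrete, entirely as a toy: the Cauchy location model has no fixed-dimension sufficient statistic, so approximate the full-data posterior by one computed from a small reweighted subsample (a crude stand-in for a proper coreset construction), and measure the damage with a KL divergence over a parameter grid. The subsampling scheme, the grid, and the KL criterion are all arbitrary choices of mine here, not a standard recipe.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 2000
x = 1.0 + rng.standard_cauchy(n)      # Cauchy location model, true theta = 1

grid = np.linspace(-4.0, 6.0, 801)    # grid over the location parameter
dtheta = grid[1] - grid[0]

def grid_posterior(data, weights=None):
    """Posterior over the grid for a Cauchy location model with a flat
    prior, computed by brute force and normalised numerically."""
    w = np.ones(len(data)) if weights is None else weights
    loglik = (w * stats.cauchy.logpdf(data[None, :], loc=grid[:, None])).sum(axis=1)
    post = np.exp(loglik - loglik.max())
    return post / (post.sum() * dtheta)

exact = grid_posterior(x)

# A "nearly sufficient" summary: a random subsample of m points, each
# log-likelihood term upweighted by n/m to stand in for the full data set.
m = 100
sub = rng.choice(x, size=m, replace=False)
approx = grid_posterior(sub, weights=np.full(m, n / m))

eps = 1e-300                          # guard against log(0) in the far tails
kl = np.sum(exact * np.log((exact + eps) / (approx + eps))) * dtheta
print(f"KL(full posterior || subsample posterior) = {kl:.4f}")
```

Shrinking the subsample size trades summary size against KL divergence, which is one crude way of quantifying “nearness to sufficiency”.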

Refs