Frequentist consistency of Bayesian methods

Bayesian consistency

Life is short. You want to use some tasty tool, such as a hierarchical model, without anyone getting cross at you for apostasy? Why not use whatever estimator works, and then show that it works on both frequentist and Bayesian grounds?

There is a basic result here, due to Doob, which essentially says that the Bayesian learner is consistent, except on a set of data of prior probability zero. That is, the Bayesian is subjectively certain they will converge on the truth. This is not as reassuring as one might wish, and showing Bayesian consistency under the true distribution is harder. In fact, it usually involves assumptions under which non-Bayes procedures will also converge. […]
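
For concreteness, a rough statement of Doob's result (the usual textbook form; a sketch, not a quotation from any source cited here): suppose $X_1, X_2, \dots$ are i.i.d. $P_\theta$ given $\theta$, the parameter is identifiable in the sense that $\theta$ can be recovered as a measurable function of the infinite data sequence, and $\theta \sim \Pi$. Then for $\Pi$-almost every $\theta_0$ and every neighbourhood $U$ of $\theta_0$,

$$\Pi\left(U \mid X_1, \dots, X_n\right) \to 1 \quad P_{\theta_0}\text{-almost surely}.$$

The exceptional set has prior mass zero but can be topologically large, which is why the theorem mostly comforts the subjectivist who chose the prior.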

Concentration of the posterior around the truth is only a preliminary. One would also want to know that, say, the posterior mean converges, or even better that the predictive distribution converges. For many finite-dimensional problems, what’s called the “Bernstein–von Mises theorem” basically says that the posterior mean and the maximum likelihood estimate converge to each other, so if one works, the other will too. This breaks down for infinite-dimensional problems.
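
A sketch of the standard statement (under the usual regularity conditions; not tied to any particular reference here): for a smooth, finite-dimensional model with true parameter $\theta_0$, Fisher information $I(\theta_0)$, and a prior with a continuous, positive density at $\theta_0$,

$$\left\| \Pi\left(\cdot \mid X_1, \dots, X_n\right) - \mathcal{N}\!\left(\hat\theta_n,\; n^{-1} I(\theta_0)^{-1}\right) \right\|_{\mathrm{TV}} \to 0 \quad \text{in } P_{\theta_0}\text{-probability},$$

where $\hat\theta_n$ is the maximum likelihood estimator (or any efficient estimator). In particular the posterior mean and the MLE differ by $o_P(n^{-1/2})$, so they share the same limiting distribution, and credible sets are asymptotically valid confidence sets.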

Regularisation and priors

An excellent answer by Tymoteusz Wołodźko must be in the running for the punchiest summary ever; Andrew Milne makes it precise.
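
The gist of that correspondence, in its usual form (a sketch; not necessarily the exact formulation in either linked answer): MAP estimation is penalised maximum likelihood with the negative log-prior as the penalty,

$$\hat\theta_{\mathrm{MAP}} = \arg\max_\theta \left[ \log p(x \mid \theta) + \log \pi(\theta) \right] = \arg\min_\theta \left[ -\log p(x \mid \theta) - \log \pi(\theta) \right],$$

so a Gaussian prior $\pi(\theta) \propto \exp(-\lambda \|\theta\|_2^2)$ gives an L2/ridge penalty and a Laplace prior $\pi(\theta) \propto \exp(-\lambda \|\theta\|_1)$ gives an L1/lasso penalty, up to rescaling of the penalty weight. For the Gaussian linear model the ridge estimate is exactly the posterior mean under an isotropic Gaussian prior; a minimal numpy sketch (variable names are mine):

    import numpy as np

    rng = np.random.default_rng(0)
    n, p, sigma, lam = 200, 5, 0.7, 3.0
    X = rng.normal(size=(n, p))
    beta = rng.normal(size=p)
    y = X @ beta + sigma * rng.normal(size=n)

    # Ridge / L2-penalised least squares: argmin ||y - Xb||^2 + lam * ||b||^2
    ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

    # Posterior mean under b ~ N(0, (sigma^2/lam) I), y | b ~ N(Xb, sigma^2 I)
    post_prec = X.T @ X / sigma**2 + (lam / sigma**2) * np.eye(p)
    post_mean = np.linalg.solve(post_prec, X.T @ y / sigma**2)

    print(np.allclose(ridge, post_mean))  # True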

Question: What do nonconvex regularisers look like in a Bayesian context, and are they an argument for Bayesian sampling from the posterior rather than the frequentist’s NP-hard search for the optimum? And what does, e.g., the alternative Cauchy prior recommended in GJPS08 look like?
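
One concrete piece of that question is easy to write down (a sketch of the penalty a Cauchy prior implies, not a full answer): the negative log-density of a Cauchy$(0, s)$ prior on a coefficient $\theta$ is

$$-\log \pi(\theta) = \log\!\left(1 + (\theta/s)^2\right) + \text{const},$$

a nonconvex penalty that grows only logarithmically in $|\theta|$, so it shrinks small coefficients strongly while leaving large ones nearly untouched. That is exactly the kind of objective that is awkward for an optimiser but poses no special conceptual problem for a posterior sampler (mixing difficulties aside).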

Refs
