
Bayesians vs frequentists

Just because you both get the same answer doesn't mean neither of you is wrong

Sundry schools of thought on how to stitch mathematics to the world; brief notes and questions thereto.

Avoiding the whole damn issue

You are a frequentist and want to use a Bayesian estimator because it’s tractable and simple? No problem. Discuss prior beliefs in terms of something other than probability, use the Bayesian formalism, then produce a frequentist justification.

Now everyone is happy, apart from you, because you had to miss your family’s weekend in the countryside, and cannot remember the name of your new niece.

This is the best option; just be clear about which guarantees your method of choice will give you. There is a diversity of such guarantees across different fields of statistics, and no free lunches. You know, just like you’d expect.

Some estimators break down on countable state spaces; others don’t. Sometimes one is tractable for big data and another is not. Usually, for any plausibly realistic model, neither is.
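To make the move concrete, here is a minimal toy sketch (my own made-up example, assuming Gaussian noise of known variance, nothing endorsed by anyone cited below): ridge regression computed twice, once as penalised least squares with its usual frequentist risk story, and once as the posterior mode under a Gaussian prior whose width you are free to never call a “belief”.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = X @ beta + Gaussian noise.
n, p = 50, 3
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, 0.0, -2.0])
y = X @ beta_true + rng.normal(scale=0.5, size=n)

sigma2 = 0.25  # noise variance (taken as known for the sketch)
tau2 = 1.0     # prior variance on each coefficient, a.k.a. inverse penalty scale

# Frequentist reading: ridge regression, i.e. penalised least squares
#   argmin_b ||y - X b||^2 + lam * ||b||^2,  with lam = sigma2 / tau2.
lam = sigma2 / tau2
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Bayesian reading: posterior mode/mean under beta ~ N(0, tau2 I),
#   y | beta ~ N(X beta, sigma2 I). Same linear algebra, same numbers.
posterior_precision = X.T @ X / sigma2 + np.eye(p) / tau2
beta_map = np.linalg.solve(posterior_precision, X.T @ y / sigma2)

assert np.allclose(beta_ridge, beta_map)
print(beta_ridge)
```

Same arithmetic either way; the only live question is which guarantee you want to attach to the output.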

Frequentist vs Bayesian acrimony

Was that too simple and practical?

Would you prefer to spend time in an interminable and, to outsiders, useless debate? Is there someone you wish to irritate at the next faculty meeting?

Well then, why not try to use your current data set as a case study to answer the following questions:

Can you recycle conditional probabilistic formalism as a measure of certainty for a hypothesis, or not? Which bizarre edge case can you demonstrate by assuming you can? Or by assuming you can’t? Can you straw-man the “other side” into sounding like idiots?

If you can phrase an estimator in terms of Bayesian belief updates, does it mean that anyone who doesn’t phrase an estimator in terms of Bayesian belief updates is doing it wrong and you need to tell them so? If someone produces a perfectly good estimator by belief updating, do you regard it as broken if it uses the language of probabilities to describe belief, even when it still satisfies frequentist desiderata such as admissibility? If you can find a Bayesian rationale for a given frequentist method - say, regularisation - does it mean that what the frequentist is “really” doing is the Bayesian thing you just rationalised, and that they are merely ignorant for not describing it in terms of priors?

That should give you some controversies. Now, weigh in!
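In case it helps the weighing-in: the regularisation correspondence that last question trades on is, in its simplest Gaussian form, nothing deeper than the identity below (the same one the ridge sketch above exploits).

```latex
% Gaussian likelihood plus Gaussian prior: the negative log posterior is a
% penalised least-squares objective, so the MAP estimate is ridge regression
% with penalty lambda = sigma^2 / tau^2.
\[
  -\log p(\beta \mid y)
  = \frac{1}{2\sigma^{2}} \lVert y - X\beta \rVert^{2}
  + \frac{1}{2\tau^{2}} \lVert \beta \rVert^{2}
  + \text{const.}
\]
```

Whether that identity licenses either side to claim ownership of the estimator is exactly the argument above.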

OTOH, here is a sampling of actual expert opinions:

More or less, claims “Bayesian statistical practice IS science”. Makes frequentists angry.

In this post I want to review an interesting result by David Freedman […]

The result gets very little attention. Most researchers in statistics and machine learning seem to be unaware of the result. The result says that, “almost all” Bayesian prior distributions yield inconsistent posteriors, in a sense we’ll make precise below. The math is uncontroversial but, as you might imagine, the interpretation of the result is likely to be controversial.

[…]as Freedman says in his paper:

“ … it is easy to prove that for essentially any pair of Bayesians, each thinks the other is crazy.”
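For my own notes, the sense in which “almost all” holds, as best I recall Freedman’s 1965 setup (treat this as a pointer to check against the paper, not a statement of it): the parameter is a distribution on the natural numbers, and the well-behaved prior–truth pairs form a topologically negligible set.

```latex
% Rough statement, from memory, of Freedman (1965); verify against the paper.
% \Theta = probability distributions on the natural numbers (weak topology),
% \Pi = priors on \Theta; "consistent at theta_0" means the posterior
% concentrates on weak neighbourhoods of theta_0 under i.i.d. sampling.
\[
  \bigl\{ (\theta_{0}, \pi) \in \Theta \times \Pi :
          \pi(\,\cdot \mid X_{1}, \dots, X_{n})
          \text{ is consistent at } \theta_{0} \bigr\}
  \quad \text{is meager in } \Theta \times \Pi .
\]
```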

I am told I should look at Andrew Gelman’s model of Bayesian methodology, which is supposed to be reasonable even to frequentists (‘I always feel that people who like Gelman would prefer to have no Bayes at all.’)

Mathematical invective from Shalizi, showing that stubbornly applying Bayesian methods to a sufficiently un-cooperative problem with a sufficiently bad model effectively produces a replicator system. Which is to say, the failure modes are interesting. (Question: Is this behaviour much worse than in a mis-specified frequentist parametric model for dependent data? I should read it and find out.)
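As far as I can reconstruct the replicator point (my paraphrase, not Shalizi’s notation): sequential Bayes updating reweights each hypothesis by its likelihood on the newest datum, which is formally a discrete replicator dynamic with likelihood playing the role of fitness.

```latex
% Bayes updating as a replicator dynamic (my paraphrase): hypotheses theta are
% "species", the posterior pi_t is the population share, and the likelihood of
% the newest observation is the fitness.
\[
  \pi_{t+1}(\theta)
  = \frac{\pi_{t}(\theta)\, f_{\theta}(x_{t+1})}
         {\int \pi_{t}(\theta')\, f_{\theta'}(x_{t+1})\, \mathrm{d}\theta'}
  \qquad \text{cf.} \qquad
  p_{t+1}(i) = \frac{p_{t}(i)\, w_{i}}{\bar{w}_{t}} .
\]
```

So a misspecified posterior dutifully concentrates on whatever hypotheses are “fittest” in that sense - under the usual conditions, the ones closest in KL divergence to the data-generating process - whether or not they describe anything you would want to bet on.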

Chatty summary here.

He has a Nobel Memorial Prize, so he gets to speak on behalf of Bayesian econometrics, I guess.