…and other interminable debates on how to do inference. Sundry schools of thought on how to stitch mathematics to the world, with brief notes and questions thereto.
Avoiding the whole damn issue
You want to use a Bayesian estimator because it’s tractable and simple? OK. Discuss prior beliefs in terms of something other than probability, use the Bayesian formalism, then produce a frequentist justification.
Now everyone is happy, apart from you, because you had to miss your family’s weekend in the countryside and cannot remember the name of your new niece.
Frequentist vs Bayesian Acrimony
Was that too simple and practical? Would you prefer to spend time in an interminable and, to outsiders, useless debate? Is there someone you wish to irritate at the next faculty meeting?
Well then, why not try to use your current data set as a case study to answer the following questions: Can you recycle the conditional probability formalism as a measure of certainty for a hypothesis, or not? Which bizarre edge case can you demonstrate by assuming you can? Or by assuming you can’t? There are in fact more than two schools of thought here, and degrees within schools.
So far as I can tell, no one waits to get a grip on such nuance before weighing in with an opinion, though, so let’s keep that heading and mentally visualise the conflict in the usual way.
Myself, I can’t help but feel a lot of the confusion is terminological: if you accept that Bayesian beliefs (as in, prior and posterior distributions) are not the same thing as the probabilities frequentists use (loosely, we expect something with P=0.5 to happen half the time over many experiments), you defang some controversial claims. But this move is unpopular, and I don’t get why.
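The terminological point can be made concrete with a toy simulation. Here is a minimal sketch (my own illustration, not anyone’s official position): the frequentist number is a long-run frequency over repeated trials, while the Bayesian number is a posterior belief about an unknown bias, updated from a prior using the same data. With a fair coin and plenty of flips the two numbers happen to agree, which is part of why the debate is so easy to muddle.

```python
import random

random.seed(1)

# Frequentist reading: P = 0.5 is a long-run frequency over repeated trials.
n = 10_000
heads = sum(random.random() < 0.5 for _ in range(n))
freq = heads / n  # empirical frequency; approaches 0.5 as n grows

# Bayesian reading: a degree of belief about the coin's unknown bias.
# Starting from a uniform Beta(1, 1) prior and conjugate updating,
# the posterior after the same data is Beta(1 + heads, 1 + tails).
alpha, beta = 1 + heads, 1 + (n - heads)
posterior_mean = alpha / (alpha + beta)

print(f"long-run frequency: {freq:.3f}")
print(f"posterior mean:     {posterior_mean:.3f}")
```

The two summaries land on nearly the same number here, but they answer different questions: one describes repeated sampling, the other describes belief about a fixed unknown parameter.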
Now, here is a sampling of actual expert opinions:
Jaynes, E. T., & Bretthorst, G. L. (2003). Probability theory: the logic of science. Cambridge, UK; New York, NY: Cambridge University Press.
More or less, claims “Bayesian statistical practice IS science”. Makes frequentists angry.
Deborah Mayo, as a philosopher of the practice of frequentism, has more than you could possibly wish to know about the details of statistical practice, as well as a rhetorical dissection of the F-vs-B debate, and says, by the way, that “Bayesian statistics ARE NOT science”. Makes Bayesians angry.
Larry Wasserman: Freedman’s neglected theorem
In this post I want to review an interesting result by David Freedman […]
The result gets very little attention. Most researchers in statistics and machine learning seem to be unaware of the result. The result says that, “almost all” Bayesian prior distributions yield inconsistent posteriors, in a sense we’ll make precise below. The math is uncontroversial but, as you might imagine, the interpretation of the result is likely to be controversial.
[…]as Freedman says in his paper:
“ … it is easy to prove that for essentially any pair of Bayesians, each thinks the other is crazy.”
Gelman, A. (2011). Induction and deduction in Bayesian data analysis. Rationality, Markets and Morals, 2(67-78), 1999. Online.
I am told I should look at Andrew Gelman’s model of Bayesian methodology, which is supposed to be reasonable even to non-Bayesians (‘I always feel that people who like Gelman would prefer to have no Bayes at all.’)
Mathematical invective from Shalizi, showing that stubbornly applying Bayesian methods to a sufficiently uncooperative problem effectively produces a replicator system. Which is to say, IMO, neither useless nor especially good. (Question: Is this behaviour much worse than in a mis-specified dependent frequentist parametric model?)
Sims, C. (2010). Understanding non-Bayesians. Unpublished chapter, Department of Economics, Princeton University.
He has a Nobel Memorial Prize, so he gets to speak on behalf of Bayesian econometrics.