We’re out here every day, doing the dirty work: finding noise and then polishing it into the hypotheses everyone loves. It’s not easy. --- John Schmidt, The Noise Miners
Multiple testing across a whole scientific field, with a side helping of uneven data release.
This is a complicated issue which I won't actually engage with right now. Perhaps you'd prefer to see an actual train-wreck in progress in social psychology.
Keywords: the “file-drawer process” and the “publication sieve”, which are the large-scale models of how this works across a scientific community, and “researcher degrees of freedom”, which is the model for how it works at the individual scale.
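The publication sieve is easy to see in a toy simulation. The sketch below (my own illustration, not from any of the sources above) has many labs each study a true-null effect, with only “significant” results clearing the sieve into the literature: about 5% get through, and every one of them reports a spuriously large effect.

```python
import math
import random

def simulate_publication_sieve(n_labs=2000, n=30, seed=1):
    """Each lab runs one study of a true-null effect (mean 0, sd 1).
    Only results with |z| > 1.96 clear the sieve and get 'published'.
    Returns (publication rate, mean published |effect|)."""
    rng = random.Random(seed)
    published = []
    for _ in range(n_labs):
        data = [rng.gauss(0.0, 1.0) for _ in range(n)]
        mean = sum(data) / n
        se = 1.0 / math.sqrt(n)  # sd known to be 1, for simplicity
        if abs(mean / se) > 1.96:
            published.append(abs(mean))
    rate = len(published) / n_labs
    avg_effect = sum(published) / len(published) if published else 0.0
    return rate, avg_effect

rate, avg_effect = simulate_publication_sieve()
# roughly 5% of null studies clear the sieve, and each reports an
# effect of at least ~0.36 sd, even though the true effect is zero
```

The point of the sieve model: nothing here requires fraud, only selective visibility of lucky draws.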
This is particularly pertinent in social psychology, where it turns out there is too much bullshit.
Sanjay Srivastava, Everything is fucked, the syllabus
Some might say: just fix the incentives. But apparently that is off the table. There is also open notebook science; that could be a thing.
Failing the budget for that… pre-registration?
Tom Stafford’s 2-minute guide to experiment pre-registration:
Pre-registration is easy. There is no single, universally accepted way to do it.
You could write your data collection and analysis plan down and post it on your blog.
You can use the Open Science Framework to timestamp and archive a pre-registration, so you can prove you made a prediction ahead of time.
You can visit AsPredicted.org, which provides a form that helps you structure your pre-registration (making sure you include all the relevant information).
“Registered Reports”: more and more journals are committing to publishing pre-registered studies. They review the method and analysis plan before data collection and agree to publish once the results are in (however they turn out).
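The “prove you made a prediction ahead of time” part boils down to a commitment scheme. This is not how OSF actually implements it (they archive and timestamp the document itself); it is just a minimal sketch of the underlying idea, with a made-up analysis plan:

```python
import hashlib

# A minimal commitment sketch: publish a cryptographic digest of your
# analysis plan BEFORE collecting data; later, reveal the plan and
# anyone can verify it matches the digest you posted.
plan = b"H1: condition A > condition B; n = 50 per cell; two-sided t-test, alpha = .05"
commitment = hashlib.sha256(plan).hexdigest()  # post this publicly now

# ... collect data, run the pre-registered analysis ...

# verification later: the revealed plan must hash to the posted digest
assert hashlib.sha256(plan).hexdigest() == commitment
```

The digest reveals nothing about the plan until you choose to disclose it, but once posted it pins you to exactly that plan.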
But should you in fact pre-register?
No, not really. That is not what the system demands:
Ed Hagen, Academic success is either a crapshoot or a scam:
The problem, in a nut shell, is that empirical researchers have placed the fates of their careers in the hands of nature instead of themselves.[…]
Academic success for empirical researchers is largely determined by a count of one’s publications, and the prestige of the journals in which those publications appear […]
the minimum acceptable number of pubs per year for a researcher with aspirations for tenure and promotion is about three. This means that, each year, I must discover three important new things about the world. […]
Let’s say I choose to run 3 studies that each has a 50% chance of getting a sexy result. If I run 3 great studies, mother nature will reward me with 3 sexy results only 12.5% of the time. I would have to run 9 studies to have about a 90% chance that at least 3 would be sexy enough to publish in a prestigious journal.
I do not have the time or money to run 9 new studies every year.
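Hagen’s arithmetic checks out as a straight binomial calculation: with three 50/50 studies, all three pay off with probability 0.5³ = 12.5%, and nine studies are needed to get at least three wins about 91% of the time.

```python
from math import comb

def p_at_least(k, n, p=0.5):
    """P(at least k successes in n independent studies, each with success prob p)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

p3_of_3 = p_at_least(3, 3)  # 0.125: all three studies must come up sexy
p3_of_9 = p_at_least(3, 9)  # ~0.910: nine studies give ~90% odds of 3 wins
```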