
Open notebook science

and other strategies for reproducible research

Usefulness: 🔧
Novelty: 💡
Uncertainty: 🤪 🤪 🤪
Incompleteness: 🚧 🚧 🚧

Tom Gauld: Hazard labels used in our laboratory

Methodologies for publishing your methods, not the technical details. Technical details are under build/pipelines tools and scientific workbooks. The painful process of journal publication and article validation is under academic publishing.

The Knowledge Repository is one open workflow-sharing platform. Motivation:

[…]our process combines the code review of engineering with the peer review of academia, wrapped in tools to make it all go at startup speed. As in code reviews, we check for code correctness and best practices and tools. As in peer reviews, we check for methodological improvements, connections with preexisting work, and precision in expository claims. We typically don’t aim for a research post to cover every corner of investigation, but instead prefer quick iterations that are correct and transparent about their limitations.

This is coupled closely with build tools.

Gaël Varoquaux: Please destroy this software after publication

Actually existing reproducible research tools

Basic steps toward reproducible research. Also useful: scientific workbooks and build/pipelines tools. The fraught process of getting stuff into journals is under academic publishing.

Peer review hacks

Replication Markets claims to provide prediction markets on whether experiments will replicate, which supposedly creates incentives to produce more replicable research.

Open notebooks

What do you get when you take your scientific workbooks and publish them online along with the data? An open notebook! These are huge in the machine-learning pedagogy world right now, and small-to-medium in the applied-machine-learning world, especially at the recruitment end of it. They are noticeable but rare, AFAICS, in the rest of the world.
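
In practice the publishing step can be as simple as rendering the notebook to a static HTML page and uploading it somewhere. A minimal sketch using nbconvert’s Python API — the notebook filename here is a hypothetical placeholder:

    # Render a Jupyter notebook to standalone HTML for publishing.
    # Assumes nbconvert is installed; "analysis.ipynb" is a placeholder name.
    from nbconvert import HTMLExporter

    exporter = HTMLExporter()
    body, resources = exporter.from_filename("analysis.ipynb")

    with open("analysis.html", "w", encoding="utf-8") as f:
        f.write(body)
    # Upload analysis.html (plus the data) to any static hosting.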

If you want in-depth justifications for open notebooks, see Caleb McDaniel or Ben Marwick’s slide deck.

I’m interested in this because it seems like the actual original pitch for how scientific research was supposed to work: rapid improvement upon each other’s ideas. Whether I get around to fostering such stuff, given that my employer does not value it, is the question.

Containerized workflow

Docker is designed for reproducible deployment, which makes it an approximate fit for reproducible research. See docker for reproducible research.
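
To give a flavour of the idea, here is a minimal sketch using the docker Python SDK to run an analysis inside a pinned container image, so the environment is fixed. The script name and image tag are placeholders I made up; this assumes the docker package and a local Docker daemon:

    # Run an analysis inside a pinned container image.
    # Assumes `pip install docker` and a running Docker daemon.
    import os
    import docker

    client = docker.from_env()
    logs = client.containers.run(
        "python:3.11-slim",               # pin an exact image tag (or better, a digest)
        ["python", "/work/analysis.py"],  # analysis.py is a hypothetical script
        volumes={os.getcwd(): {"bind": "/work", "mode": "rw"}},
        working_dir="/work",
        remove=True,                      # clean up the container afterwards
    )
    print(logs.decode())

For stricter reproducibility you would pin the image by digest rather than by tag, since tags can be repointed.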

Build tools

A reproducible experiment is closely coupled with build tools, which recreate all the (possibly complicated and lengthy) steps. Some of the build tools I document have reproducibility as a primary focus, notably DVC, drake, lancet, and pachyderm.
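
As one concrete example, a sketch using DVC’s Python API to read a dataset at a pinned revision, so the analysis always sees the same bytes. The repository URL, file path, and tag are hypothetical:

    # Read a data file from a DVC-tracked repository at a pinned revision.
    # Assumes `pip install dvc`; repo, path, and rev below are placeholders.
    import dvc.api

    with dvc.api.open(
        "data/measurements.csv",                    # hypothetical path in the repo
        repo="https://github.com/example/project",  # hypothetical repository
        rev="v1.0",                                 # pin a tag/commit for reproducibility
    ) as f:
        header = f.readline()
        print(header)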

Sundry data sharing ideas

See Data sharing.

Collaboration

Code Ocean seems to be targeting this:

For the first time, researchers, engineers, developers and scientists can upload code and data in any open source programming language and link working code in a computational environment with the associated article for free. We assign a Digital Object Identifier (DOI) to the algorithm, providing correct attribution and a connection to the published research.

The platform provides open access to the published software code and data to view and download for everyone for free. But the real treat is that users can execute all published code without installing anything on their personal computer. Everything runs in the cloud on CPUs or GPUs according to the user needs. We make it easy to change parameters, modify the code, upload data, run it again, and see how the results change.

They also ran a workshop on this.

Possibly Sylabs cloud is a similar project?

Hybrid environment Nextjournal might also be this; it is a collaborative coding platform that claims to make it easy for you and your colleagues to write in a workbook style together, and it uses containerised environments under the hood.

Less code-obsessed but possibly related, the Open Science Framework describes itself thus:

OSF is a free and open source project management tool that supports researchers throughout their entire project lifecycle.

As a collaboration tool, OSF helps research teams work on projects privately or make the entire project publicly accessible for broad dissemination. As a workflow system, OSF enables connections to the many products researchers already use, streamlining their process and increasing efficiency.
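
OSF also has programmatic access via, e.g., the third-party osfclient package. A sketch of listing the files in a public project, under the assumption that its API looks roughly like this; the project id is a made-up placeholder:

    # List the files stored in a public OSF project.
    # Assumes `pip install osfclient`; the API usage here is my best reading
    # of the osfclient docs, and the project id is hypothetical.
    from osfclient import OSF

    osf = OSF()                     # anonymous access works for public projects
    project = osf.project("abc12")  # hypothetical 5-character OSF project id
    storage = project.storage("osfstorage")

    for file_ in storage.files:
        print(file_.path)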

Refs

Boettiger, Carl. 2015. “An Introduction to Docker for Reproducible Research, with Examples from the R Environment.” ACM SIGOPS Operating Systems Review 49 (1): 71–79. https://doi.org/10.1145/2723872.2723882.

Fitschen, Timm, Alexander Schlemmer, Daniel Hornung, Henrik tom Wörden, Ulrich Parlitz, and Stefan Luther. 2019. “CaosDB - Research Data Management for Complex, Changing, and Automated Research Workflows.” Data 4 (2): 83. https://doi.org/10.3390/data4020083.

Himmelstein, Daniel S., Vincent Rubinetti, David R. Slochower, Dongbo Hu, Venkat S. Malladi, Casey S. Greene, and Anthony Gitter. 2019. “Open Collaborative Writing with Manubot.” Edited by Dina Schneidman-Duhovny. PLOS Computational Biology 15 (6): e1007128. https://doi.org/10.1371/journal.pcbi.1007128.

Tong, Christopher. 2019. “Statistical Inference Enables Bad Science; Statistical Thinking Enables Good Science.” The American Statistician 73 (sup1): 246–61. https://doi.org/10.1080/00031305.2018.1518264.