Brian Nosek: A Crisis of Research Reproducibility

Image Courtesy of Center for Open Science.

Replication is a key tenet of the scientific method. In theory, any discovery should be reproducible with identical procedures. However, scientists have become increasingly aware that in practice, most studies’ findings may be irreplicable, hinting at systemic flaws deep within scholarly research. After all, how can science be trusted if it builds upon unrepeatable results?

Few understand these flaws better than Brian Nosek (GRD ‘02), co-founder and director of the Center for Open Science (COS). In 2015, Nosek and the COS made waves when they published a study in Science reporting that fewer than half of their attempts to replicate one hundred psychology experiments had succeeded. The study was among the first to empirically characterize the extent of the reproducibility crisis, drawing attention from scientists and non-scientists alike.

Nosek traces his interest in interrogating the scientific process to a research methods class he took during his PhD at Yale. He recalls reading papers detailing common issues in scientific practice, from publication bias to the neglect of null results, along with their proposed solutions. What shocked him was that these papers dated back to the 1960s. “We’re reading these papers in the 1990s, and for me, it was like, ‘wait a second—we’ve known the problem, we’ve known the solution; thirty years later, we’ve done nothing… what’s going on?’” Nosek said.

This lack of progress led Nosek to take an interest in improving his own research methodology. One year, he collected data at the beach to obtain larger sample sizes. The next, he created a website to collect data on implicit biases, long before online data collection was commonplace. This would grow to become Project Implicit—since its creation, over twenty million people have taken an implicit association test on the site.

When Nosek became a professor at the University of Virginia, his focus shifted beyond just improving his own methodology. “We started thinking about how we could build tools to help others do the same,” Nosek said. However, this technology-building work was ineligible for most grants, limiting its scope. All this changed in 2013, when Nosek received five million dollars from the Arnold Foundation to scale up his operations, thus creating the COS.

The COS was originally centered on two projects from Nosek’s lab: the psychology “reproducibility project” and the Open Science Framework (OSF). The OSF is a tool that helps researchers document and share their experimental progress while promoting open science practices: making the entire research process transparent. Unlike the traditional system of publishing only completed work, the OSF tracks a project’s whole journey, including the initial research plan, any changes to it, null results, and more.

The COS has grown substantially since its creation. In 2020, it introduced the Transparency and Openness (TOP) Factor, a metric to evaluate research journals’ adoption of the best practices of open science. In 2021, it released the results of a replication project focusing on cancer biology research. Open science has been growing, too. “If you look at every key performance indicator, the growth over the last decade has been nonlinear,” Nosek said. The OSF now has over half a million users, and many others have conducted studies evaluating replicability in fields from ecology to economics. 

Nosek hopes to add tools to the COS to help not only those producing research, but also those utilizing it. “What we want to do in the next ten years is [add] in the interaction between the research producer and the research consumer,” Nosek said. “The consumer could be policymakers or people way outside of the process.” Facilitating dialogue between research producers and consumers could hold researchers accountable for the thoroughness of their experimentation and reporting.

So, how do we fix the reproducibility crisis? 

Nosek believes that changes to the publication process are required. Under the current model, scientists are rewarded for producing publishable findings. His proposed alternative is the “Registered Reports” model, where peer review is conducted prior to data collection, shifting the focus from getting results to having watertight methods. “Registered Reports changes the reward system for researchers… the decisions at journal level are no longer about the outcomes—they’re about the questions,” Nosek said.

Despite his familiarity with the challenges of the knowledge production process, Nosek’s belief in science is unwavering. “The reason to trust science is because science doesn’t trust itself,” Nosek said. While mistakes are inevitable when pushing the boundaries of knowledge, what matters is that science is open for revision. For all the flaws plaguing the research process, science’s self-correcting nature remains ever-present.