The results of the Reproducibility Project – a very cool endeavour to repeat a bunch of published studies in psychology – came out this week [1]. The authors (a team of psychologists from around the world) found that they were able to successfully replicate the results of 39 out of 100 studies, leaving the other 61% unreplicated. This seems like an awful lot of negatives, but the authors argue that it’s more or less what you’d expect. A good chunk of published research is wrong, because of sampling error, experimenter bias, an emphasis on publishing surprising findings that turn out to be false, or more than one of the above. No single study can ever represent the truth – nor is it intended to. The idea is that, with time and collective effort, scientific knowledge progresses towards certainty.

So science crowd-sources certainty.

This very millennial-sounding view goes back to Karl Popper, who in the 1940s wrote that the objectivity of science comes not from the individual scientist, but from the collective community and its practices [2]. According to Popper, there are no unbiased, emotionally detached, perfect scientists. The same goes for individual studies.

Popper championed the hypothetico-deductive model, which may be one of the few philosophies taught as early as elementary school. As kids we learn The Scientific Method. It’s like a recipe: form a hypothesis, H. Develop a prediction, P (if H is true, then P must also be true). Devise a way to test your prediction, and, if P turns out not to be true, reject H. Repeat. Popper maintained that science is based on repeated attempts at falsification. The flip side is that there is no positive evidence in this model, only a failure to disprove things.
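The asymmetry at the heart of this recipe – reject on failure, never confirm on success – is easy to see in code. Here’s a toy sketch in Python (the function names and the simulated “experiment” are invented for illustration, not part of any real analysis):

```python
import random

def prediction_holds(hypothesis: str) -> bool:
    """Toy stand-in for an experiment: derive prediction P from H,
    observe, and report whether the observation matches P.
    Here we simply simulate the outcome at random."""
    return random.random() < 0.5

def falsification_loop(hypotheses, attempts=10):
    """Popper-style loop: each hypothesis faces repeated attempts at
    falsification. One failed prediction rejects H outright; surviving
    every attempt means only 'not yet disproved', never 'proved'."""
    surviving = []
    for h in hypotheses:
        if all(prediction_holds(h) for _ in range(attempts)):
            surviving.append(h)  # retained provisionally, not confirmed
    return surviving

# Hypothetical example hypothesis, just to run the loop:
print(falsification_loop(["H1: helpers at the nest increase fledging success"]))
```

Note what the loop never returns: a verdict of “true”. The best a hypothesis can do is stay in the pool for another round of testing.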

But of course this can’t be the whole story: other philosophers, like Thomas Kuhn, later pointed out that much of what scientists actually do is not very deductive. We describe. We collect observations. And we make probabilistic conjectures. How much of each probably depends on the field: compare studies in paleontology to those in The Journal of Experimental Biology, for instance.

One thing I’ve noticed about scientists is that we love to think we’re deductive, even when we aren’t. It’s always helpful to see a clear explanation of the authors’ predictions or expectations in the Introduction of a paper. Sometimes, though, researchers go too far and pitch their work as a “test” of something when it’s actually not. Not too long ago I reviewed a manuscript in which the authors collected data on numerous ecological traits in two closely related bird populations with different social systems. Setting aside concerns about the limitations of two-group comparisons [3], this kind of study can help build a model of the evolution of behaviour, which the authors did at the end of their manuscript. But it cannot possibly be a “test” of an idea, given that there was no “If H then P” to start with. Some data are descriptive, and there’s nothing wrong with that! Hypotheses, models, and theories have to come from somewhere, right?

Since joining an experimental lab, I’ve noticed that even the hard-nosed experimentalists don’t follow The Scientific Method proper. We may start at point A and get to point B, but the route is circuitous. Our lab notebooks are filled with dead ends. One of the best things about my job is that I get to dream up ideas, and then see if they work. It’s also one of the worst. Most ideas, and most experiments, don’t (work, that is!).

There are other twists and turns. Things get discovered by accident. Predictions get confirmed, but for the wrong reason. Sometimes, critical tests are simply impossible. And yet, when it comes time to communicate our findings, we step back from the chronology of our lab books and work out the deductive logic that takes us directly from point A to point B. Wait, no, make that A2 and B2, because by that time A is probably obsolete and we’ve realized that our initial ideas about B were wrong. Ultimately, our talks and our papers are based on this streamlined story – not the chronological one that we actually experienced. And there’s nothing wrong with that. After all, in an alternate universe a bizarro scientist might have arrived at the same conclusion some other way. That shouldn’t change the underlying logic.

Deductive logic is an important part of science. We use it for planning experiments, diagnosing problems with our analyses [4], and communicating our results. But as Popper pointed out, achieving deductive perfection is a process, bigger than any one person or study. As much as we love deduction, we should keep in mind that some studies can’t be fit into the deductive mold. But even if a descriptive study is not a test, it’s still a valuable part of the collective process.

From August 29, 2015