Science in crisis: good to be skeptical of some research
November 5, 2016
The veracity of scientific research is in full crisis mode. It’s not that investigators with an inclination for test tubes and statistics are fudging results to keep grant funds flowing, although there are always a few of those in any group. No, it’s that the prime tenet of the scientific method is falling on its face. An alarming number of studies cannot be replicated.
Let me be clear. I am not one of those “science is a left-wing conspiracy” people. Quite the contrary. Science is my religion of choice. I prefer my faith to be grounded in what actually is versus beliefs in what can never be known or proved. Besides, what actually IS never fails to fill me with a sense of awe and wonder. Kinda like religion.
Back to science in crisis. Researchers just can’t seem to replicate one another’s studies with consistent success. And it’s not for lack of trying.
In a survey of 1,576 scientists conducted by “Nature” magazine, one of the most-cited scientific research journals in the world, “more than 70% of researchers have tried and failed to reproduce another scientist’s experiments, and more than half have failed to reproduce their own experiments.”
“Nature” came up with some possible reasons why. My interpretation of their findings is as follows:
Part of the problem is that, if the scientific method is being done right, there should be lots of blind alleys and endless barking up the wrong trees. That’s a good thing, right? But shouldn’t findings be replicated before they make their way into print and calcify in the consciousness of scientists and the public?
Part of the problem, which “Nature” points out, is that there isn’t a consensus on what reproducibility actually means. How close to the original results must a study be to be considered replicated? Close enough for horseshoes, or precisely the same, or no cigar?
Part of the problem is the isolation in which scientists operate, leading to a disquieting lack of communication between them. Even “Nature” recognized this. For example, one guy fails to replicate another’s research but doesn’t drop him a line saying, “Hey, tried to come up with your recombinant DNA results but man, I got something completely different.” Just a little hashing out of methodology might uncover where one or the other missed the mark. A retry, possibly a successful one, might ensue.
Part of the problem is love. Yeah, scientists have feelings. You dream up this theorem, a really elegant little gem, a potentially paradigm-blasting proposition. You get to the lab and test out your baby. And well…some of the results are better than the rest. It’s like discovering your lover’s flaws and somehow those quirks make them even more captivating. Perhaps if you just focus on the parts that work and don’t worry too much about the parts that really suck… Well, you get the picture. You publish a beautiful paper. But it’s just a tad delusional. Someone else runs your experiment and sees nothing but warts. Alas, love is blind.
Part of the problem is that, whether or not the original study was successfully replicated, journal editors are loath to print retreads of old research. Ahem. “Nature”? So the original work stands, whether or not it should.
The solution to this crisis in science is to acknowledge the problem, and the survey by “Nature” has certainly done that. Until the issues underlying the irreproducibility of research are ironed out, perhaps it’s best to take every study with a whopping dose of skepticism. Because it looks as though some of it really could be dead wrong.
Bruce Knuteson • Nov 7, 2016 at 10:13 am
Thoughtful article, Jeannette.
The solution to science’s replication crisis is an incentive structure that explicitly rewards accuracy and penalizes inaccuracy, as described at https://ssrn.com/abstract=2835131 and enforced at http://kn-x.com.