Fears that this is resulting in some questionable findings began to emerge in 2005, when Dr. John P. A. Ioannidis, a kind of meta-scientist who researches research, wrote a paper pointedly titled “Why Most Published Research Findings Are False.”
Given the desire of ambitious scientists to break from the pack with a striking new finding, Dr. Ioannidis reasoned, many hypotheses already start with a high chance of being wrong — otherwise, proving them right would not be so surprising, or so good for a scientist’s career. Add the human tendency to see what we want to see, and unconscious bias becomes inevitable. Without any ill intent, a scientist may be nudged toward interpreting the data so that it supports the hypothesis, even if just barely.
The effect is amplified by competition for a shrinking pool of grant money and also by the design of so many experiments — with small sample sizes (cells in a lab dish or people in an epidemiological pool) and weak standards for what passes as statistically significant. That makes it all the easier to fool oneself.
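The arithmetic behind Dr. Ioannidis’s argument can be sketched with a simple positive-predictive-value calculation: of all results that clear the significance bar, what fraction reflect a real effect? The numbers below are hypothetical, chosen only to illustrate the combination of unlikely hypotheses and underpowered studies the article describes.

```python
def ppv(prior, power, alpha):
    """Fraction of 'significant' findings that reflect a true effect."""
    true_pos = power * prior          # real effects correctly detected
    false_pos = alpha * (1 - prior)   # null effects that still pass p < alpha
    return true_pos / (true_pos + false_pos)

# Assumed, illustrative values: 10% of tested hypotheses are true,
# small samples give only 20% power, and the significance cutoff is 0.05.
print(round(ppv(prior=0.10, power=0.20, alpha=0.05), 2))  # → 0.31
```

Under these assumptions, fewer than a third of published positive results would be true — most would be false, exactly the situation the paper’s title describes. Raising the prior or the power pushes the fraction back up.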
... C. Glenn Begley, chief scientific officer at TetraLogic Pharmaceuticals, described an experience he had while at Amgen, another drug company. He and his colleagues could not replicate 47 of 53 landmark papers about cancer. Some of the results could not be reproduced even with the help of the original scientists working in their own labs.
I feel that a big leap forward in science right now would be a category of journals that only commission studies replicating important findings (or asking important new questions) and that publish the results regardless of the answer. That would defeat publication bias: decide what you want to study ex ante, then publish the results however they turn out. Only then can we really know what’s going on.