From a Forbes.com article
by Bill Frezza:
Fraud, plagiarism, cherry-picked results, poor or non-existent controls, confirmation bias, opaque, missing, or unavailable data, and stonewalling when questioned have gone from being rare to being everyday occurrences. Just look at the soaring retraction levels across multiple scientific publications and the increasingly vocal hand-wringing of science vigilantes. Hardly a prestigious university or large pharmaceutical company is immune, with the likes of Harvard, Caltech, Johns Hopkins, Ohio State, the University of Kentucky, and the University of Maryland recently fingered by Retraction Watch.
And if you think science fraud only impacts the scientific literature, consider the horrendous case of Dr. Scott Reuben, formerly chief of the acute pain service at Baystate Medical Center in Massachusetts. He was sentenced to prison for falsifying research data that purportedly demonstrated the efficacy of analgesic medications sold by Pfizer, Merck, and Wyeth, data published in dozens of journals before his fabrications were uncovered. And while Reuben is through as a scientist, the problem lingers on, as his research papers were among the most heavily cited in the field.
When I first began looking into the increasingly vexing problem of irreproducible scientific research, I assumed that the bulk of the problem was caused by sloppy science. Not so, says a National Academy of Sciences study that attributes two-thirds of the retractions in the biomedical and life sciences to scientific misconduct. And remember, these are only the people who have gotten caught.
In fact, it’s amazing that anyone gets caught at all. While the U.S. Office of Research Integrity (ORI), part of the Department of Health and Human Services, is chartered with rooting out science fraud, investigators must rely on allegations submitted by scientists in the field. And yet consider the consequences to the career of any whistleblower. How many graduate students are likely to turn in their Principal Investigator (PI) knowing that this would dash their hopes of earning a Ph.D.? How many post-docs would do the same, throwing away their chance for a faculty appointment? How many assistant professors would jeopardize their shot at tenure by outing a colleague? And how many PIs would be willing to wade into a controversy by bringing charges against the very same peers who review their publications and grant proposals? It isn’t hard to see how this can lead to a culture of omertà (though without worrying about a visit from Luca Brasi).
Conspiracy theory? I have personally spoken to young graduate students, asked to review papers on behalf of their PIs, who detected falsified data, usually by noticing identical noise floors in two different readings, a statistical impossibility. They were told to keep quiet about it. Those fraudulent results are now part of the scientific literature. Every time I write a column like this I get email from more of them, none of whom will come forward for the reasons outlined above.
Something needs to be done to change the culture to make it easier to root out the bad apples. Too much is at stake to let this go—not just because of the research dollars wasted or the misguided public policy that might result, but because bad science threatens to mislead the vast majority of good scientists who wouldn’t dream of doctoring their results.
The change will come not from public policy, but from the conscientious action of brave individuals. If you witness science fraud and you don’t speak out, consider yourself part of the problem.