I recently received an email inviting me to comment on the infographic below, which makes some pretty startling claims. I decided to do a little research into the statistics to unearth the facts behind the data.
The first reference is a famous paper published in PLoS Medicine, controversially titled “Why Most Published Research Findings Are False”. The paper (PDF) is “the most downloaded technical paper that the journal PLoS Medicine has ever published” and has been cited over 1000 times. It outlines an extraordinary range of statistical flaws that are common in scientific research yet rarely stop a study from being published. The key observation is that p-values alone should not be used to interpret research. The discussion is something of a must-read for academics. There is little dispute that the paper highlights a number of statistical truths that scientists often forget and that journalists are often never made aware of, notably:
“research findings may often be simply accurate measures of the prevailing bias”
“The greater the number and the lesser the selection of tested relationships in a scientific field, the less likely the research findings are to be true.”
Ioannidis, J. (2005). Why Most Published Research Findings Are False. PLoS Medicine, 2(8). DOI: 10.1371/journal.pmed.0020124
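The paper's central point about p-values can be made concrete with its positive predictive value (PPV) calculation: the probability that a “significant” finding is actually true depends not just on the significance threshold but on the pre-study odds R that the tested relationship is true. Below is a minimal sketch of that formula (the example numbers R = 0.1, α = 0.05, power = 0.8 are illustrative choices, not figures from the paper):

```python
def ppv(R, alpha=0.05, power=0.8):
    """Positive predictive value of a 'significant' finding.

    R     -- pre-study odds that a tested relationship is true
    alpha -- significance threshold (type I error rate)
    power -- 1 - beta, the probability of detecting a true effect
    """
    # Ioannidis (2005): PPV = (1 - beta) * R / ((1 - beta) * R + alpha)
    return power * R / (power * R + alpha)

# With conventional alpha = 0.05, a respectable 80% power, and only
# 1 in 11 tested hypotheses actually true (R = 0.1):
print(round(ppv(0.1), 3))  # ~0.615
```

In other words, under these (quite generous) assumptions nearly 4 in 10 statistically significant results would be false positives, despite every study using p < 0.05 correctly.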
The underlying argument of the paper is irrefutable: all research should be taken with a pinch of salt until the study has been repeatedly replicated. There are countless analyses of the paper that expand upon the debate and are well worth reading, but first check out the original paper. It is more readable than you might expect:
If you have 15 minutes to spare for an excellent lesson in stats (of course you do), kick back and watch this video, which outlines the paper and how it has been misrepresented by science-deniers:
Tl;dr: Moonesinghe et al. (2007) summed up the caveat (somewhat buried in paragraph 19):
Ioannidis, the author of the original paper, went on to publish another fascinating paper arguing that current publishing models have unintentionally created an environment in which academic malpractice can thrive. The paper provides a wealth of empirical evidence for this theory:
The conclusions are striking. The tendency for null and negative findings to be rejected creates an artificial incentive for scientific malpractice and results in meaningful data becoming buried. Journals that proudly announce that they reject over 92% of papers as a badge of honour potentiate the incentive, causing a “winner’s curse”. I’m not going to go into any greater detail because this discussion merits a dozen blog posts and has been well covered elsewhere:
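The winner's curse is easy to demonstrate with a toy simulation (my own illustrative sketch, not a figure from the paper): if only statistically significant results get published, the published effect sizes are systematically inflated above the true effect.

```python
import random
import statistics

random.seed(42)
true_effect = 0.2   # hypothetical small true effect
se = 0.15           # standard error of each study's estimate

# Simulate many independent studies of the same effect.
estimates = [random.gauss(true_effect, se) for _ in range(10_000)]

# Suppose only significant results (z > 1.96) make it into journals.
published = [e for e in estimates if e / se > 1.96]

print(f"mean of all estimates:       {statistics.mean(estimates):.2f}")
print(f"mean of published estimates: {statistics.mean(published):.2f}")
# Conditioning on significance selects for overestimates, so the
# published mean lands well above the true effect of 0.2.
```

Nothing in the individual studies is fraudulent; the bias is created entirely by the publication filter.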
The statistics displayed in the infographic should be seen as symptomatic of the problems outlined by Ioannidis. They are mainly taken from a meta-analysis of self-report surveys, which is particularly worrying because one would imagine such surveys greatly under-report the true numbers.
In conclusion, science is not impervious to criticism – criticism is the foundation of science. Scientists make mistakes and some scientists do bad things. Science is hard work and major breakthroughs are rare. Errors are more likely to be reported in the press than genuine scientific consensus. Thankfully the cream floats, yet it is normally the dregs which are reported in the papers and misrepresented by those with an agenda. The conclusion is straightforward: always take everything you read with a pinch of salt and, before drawing conclusions, wait until the evidence has been rigorously tested.
Infographic via clinicalpsychology.net/bad-science/