The Frugal Family Doctor has done a great post on an outstanding and superbly simple statistics paper demonstrating that the risk figures printed in the medical press are, more often than not, not what they seem. The source paper is fully open access (PDF) and should be a must-read for all health journalists. That’s never going to happen, so I’m hereby officially adding it to your personal reading list.

“For example, in October of 1995, the UK Committee on Safety of Medicines issued a warning that oral contraceptive pills increased risk of blood clots in legs or lungs by 100%. That number was the relative risk increase. This warning frightened patients and their doctors such that thousands of women stopped their birth control pills. However, the absolute risk increase, if it had been calculated and broadcast, was 0.00014 (1/7000) with a number needed to harm of 7143. An unintended result of the misinformation was that the number of abortions in England and Wales increased by 13,000 during the year following the warning.”

[Image: birth control pill scare]

The paper demonstrates that the statistical fallacy that led to the situation above is so poorly understood that, when tested on it, even gynecologists, for whom the mathematical problem is particularly vital, got the simple question below wrong. In fact, only 21% of the physicians asked got the answer right; with four options to choose from, that’s slightly worse than chance.

Assume you conduct breast cancer screening using mammography in a certain region. You know the following information about the women in this region:

  • The probability that a woman has breast cancer is 1% (prevalence).
  • If a woman has breast cancer, the probability that she tests positive is 90% (sensitivity).
  • If a woman does not have breast cancer, the probability that she nevertheless tests positive is 9% (false-positive rate).

A woman tests positive. She wants to know from you whether that means that she has breast cancer for sure, or what the chances are. What is the best answer?

A. The probability that she has breast cancer is about 81%.

B. Out of 10 women with a positive mammogram, about 9 have breast cancer.

C. Out of 10 women with a positive mammogram, about 1 has breast cancer.

D. The probability that she has breast cancer is about 1%.

You can scroll to the end of this article for the answer. The problem is in two parts. Firstly, humans are inherently crap at computing percentages. The solution to this problem is to calculate and compare the actual values being referred to. Whether we like it or not, our brains tend to find it much easier to juggle whole numbers:

[Figure: Two ways of calculating the probability that a woman who tests positive in mammography screening actually has breast cancer (positive predictive value)]
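To make the figure concrete, here is a minimal Python sketch (mine, not the paper’s) that runs the same calculation both ways, using the numbers from the question above:

```python
# A minimal sketch (mine, not the paper's) of both calculations,
# using the numbers from the question above.

prevalence = 0.01       # P(cancer) = 1%
sensitivity = 0.90      # P(positive | cancer) = 90%
false_positive = 0.09   # P(positive | no cancer) = 9%

# Way 1: conditional probabilities (Bayes' rule)
ppv = (sensitivity * prevalence) / (
    sensitivity * prevalence + false_positive * (1 - prevalence)
)
print(f"PPV via Bayes' rule: {ppv:.1%}")  # 9.2%

# Way 2: natural frequencies, the whole numbers our brains can juggle
women = 1000
with_cancer = round(women * prevalence)                    # 10 women
true_pos = round(with_cancer * sensitivity)                # 9 test positive
false_pos = round((women - with_cancer) * false_positive)  # 89 false alarms
print(f"{true_pos} of {true_pos + false_pos} positive tests are real")
# 9 of 98: about 1 in 10.
```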

Secondly, this effect is compounded by the fact that the percentages we are discussing when we talk about the “relative risk” of a disease are mind-bogglingly small. A percentage alone (as is commonly reported) gives no indication of the underlying risk in the first place:

“Consider one medication that lowers risk of disease from 20% to 10% and another that lowers it from 0.0002% to 0.0001%. Both yield a 50% relative risk reduction”
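The arithmetic behind that quote is easy to check. A quick sketch (again mine, not the paper’s):

```python
# Both medications from the quote show a 50% relative risk reduction,
# but their absolute risk reductions differ by five orders of magnitude.

def risk_summary(baseline, treated):
    arr = baseline - treated  # absolute risk reduction
    rrr = arr / baseline      # relative risk reduction
    nnt = 1 / arr             # number needed to treat
    return rrr, arr, nnt

for baseline, treated in [(0.20, 0.10), (0.000002, 0.000001)]:
    rrr, arr, nnt = risk_summary(baseline, treated)
    print(f"RRR = {rrr:.0%}, ARR = {arr:.7f}, NNT = {nnt:,.0f}")

# RRR = 50%, ARR = 0.1000000, NNT = 10
# RRR = 50%, ARR = 0.0000010, NNT = 1,000,000
```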

This time, journalists are not necessarily to blame. The study reports that relative risks given without the base rate are not just a problem in newspapers and press releases: even such reputable journals as the Annals of Internal Medicine, the British Medical Journal (BMJ), the Journal of the American Medical Association (JAMA), The Lancet, and The New England Journal of Medicine have been guilty of reporting relative risk reductions without reporting the absolute risk. Thankfully this is changing, with most academic publishers now requiring that absolute risks be reported. Unfortunately this rarely trickles down into press releases, and even less frequently into print media, leaving Joe Bloggs with a very distorted picture of medical research findings.

The moral of the story: if you are given a percentage that you think affects you, always find out the raw numbers. They might be much less earth-shattering than you were led to believe.

Answer:

C: 1 out of every 10 women who test positive in screening actually has breast cancer. The other 9 are falsely alarmed. Despite this, the physicians (who all worked in this area) “grossly overestimated the probability, most answering with 90% or 81%”.


Reference:

Gigerenzer, G., Gaissmaier, W., Kurz-Milcke, E., Schwartz, L. M., & Woloshin, S. (2007). Helping Doctors and Patients Make Sense of Health Statistics. Psychological Science in the Public Interest, 8(2), 53–96. DOI: 10.1111/j.1539-6053.2008.00033.x


Statistics are often used by newspapers as the basis for a story. People are far more likely to agree with you if you tell them that they are on the side of the majority, which is why bogus statistics are so effective in moralising comment pieces. It’s a lot easier to say, “hey, most people agree with us” than to convince someone with facts. By bogus, I don’t mean data fabrication (though that happens too), I mean rigging the questions to get the answers you want. Here’s how:

Divide and Rule
A poll that just went up on the Telegraph’s website offers the following response options:

[Image: the Telegraph poll question and response options]

Options one, two and four all appear sensible. Option three sounds outrageous. Unfortunately, you can only pick one answer, which splits the rational vote: those of a right-wing disposition will most likely shun treatment, legalisation and equality (which are apparently all mutually exclusive), handing option three an unfair advantage.
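A toy illustration with entirely made-up shares shows how the split plays out:

```python
# Made-up shares (not the Telegraph's actual results): a 60% majority
# is split three ways, so the outrageous option tops the poll.

votes = {
    "option one": 25,
    "option two": 20,
    "option three": 40,  # the "outrageous" option
    "option four": 15,
}

winner = max(votes, key=votes.get)
opposed = sum(share for option, share in votes.items() if option != winner)
print(f"{winner} wins with {votes[winner]}%, despite {opposed}% choosing otherwise")
```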

The Leading Question

In psychology and law we have a phenomenon called “leading questions”. Evidence is inadmissible if the witness is given a hint of the answer in the question. I don’t know about you, but I’d call a headline titled “Drug gangs controlling parts of British cities” a leading question. Placing the survey within an article also restricts your sample to people choosing to read that article.

The Nullified Answer & The Catch 22

If you look closely at option one in the Telegraph’s poll, it actually includes two answers which are normally considered divisive. The US has lobbied internationally to stop treatment efforts in the hope that making drug use more dangerous will prevent it, and a number of presidential candidates actually believe blocking treatment is a good form of prevention, so they will shun any option that includes treatment. Likewise, very few who support treatment will choose this option, because the poll is rigged so that supporting treatment also means signing up to prevention. In short, next to nobody is going to click this option, but the stat will come in handy when lobbying against treatment.

The PR Firm

Yesterday I highlighted a bogus poll in the Daily Mail that purportedly found that “a quarter of young British women are dating at least three men at once”. After a couple of emails I’ve managed to get them to give me a press release. The research was done by a PR firm hired on behalf of a high street restaurant. Sound fishy? The press release they sent me has lots of information about the restaurant’s “rich food and beverage heritage”, their “juicy burgers”, “tender ribs” and “hearty steaks”, but none of the information normally associated with a study (such as the sample information, methods or the actual questions that were asked) is present. Apparently the research is ongoing so I can’t get access to the data; they’ve assured me I can have it when it’s finished, but I’m not holding my breath. PR firms are great for hiding information and deflecting bad publicity; in fact, that’s what they are for.

The Reporting

[Image: Results from the Daily Telegraph survey as of 28/02/12]

Now you’ve got your bogus results, you can make up your shock-horror headlines as you see fit. Here are some examples we could use (see the sketch after the list):

  • 92% OF BRITS OPPOSED TO TREATMENT OF DRUG ADDICTS
  • ONLY 8% OF BRITS WANT TAXPAYERS’ MONEY SPENT ON TREATING DRUG ADDICTS
  • 56% OF BRITS OPPOSED TO LEGALISATION
  • 39% WANT TOUGHER SENTENCES FOR DRUG USERS
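To see how mechanical the spin is, here is a sketch that generates every one of those headlines from a single set of option shares. The shares below are hypothetical, reverse-engineered only to be consistent with the headline percentages:

```python
# Hypothetical option shares, chosen only to be consistent with the
# headline percentages above -- every headline is the same data spun.

shares = {
    "prevention & treatment": 0.08,
    "legalisation":           0.44,
    "tougher sentences":      0.39,
    "treat users equally":    0.09,
}

print(f"{1 - shares['prevention & treatment']:.0%} OF BRITS OPPOSED TO TREATMENT OF DRUG ADDICTS")
print(f"ONLY {shares['prevention & treatment']:.0%} OF BRITS WANT TAXPAYERS' MONEY SPENT ON TREATING DRUG ADDICTS")
print(f"{1 - shares['legalisation']:.0%} OF BRITS OPPOSED TO LEGALISATION")
print(f"{shares['tougher sentences']:.0%} WANT TOUGHER SENTENCES FOR DRUG USERS")
```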

The greatest part of the plan is that you never have to worry about getting caught because you don’t have to explain how you got your dodgy data! Welcome to the PR industry.

Reference

Tajfel, H., & Turner, J. C. (1986). The social identity theory of intergroup behaviour. In Psychology of Intergroup Relations (pp. 7–24).


I recently received an email inviting me to comment on the infographic below which makes some pretty startling claims. I decided to do a little bit of research on the stats to unearth the facts behind the data.

[Infographic: falsified research]

The first reference is a famous paper published in PLoS Medicine, controversially titled “Why Most Published Research Findings Are False”. The paper (PDF) is “the most downloaded technical paper that the journal PLoS Medicine has ever published” and has been cited over 1000 times. It outlines an extraordinary range of statistical flaws common in a lot of scientific research that still manages to get published. The key observation is that p-values alone should not be used to interpret research. The discussion is something of a must-read for academics. There is little dispute that the paper highlights a number of statistical truths that scientists often forget and that journalists are rarely made aware of at all, notably:

“research findings may often be simply accurate measures of the prevailing bias”

“The greater the number and the lesser the selection of tested relationships in a scientific field, the less likely the research findings are to be true.”

Ioannidis, J. P. A. (2005). Why Most Published Research Findings Are False. PLoS Medicine, 2(8). DOI: 10.1371/journal.pmed.0020124
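To put numbers on that key observation, here is a minimal sketch of the positive predictive value formula at the heart of the paper; the parameter values below are my own illustration, not Ioannidis’s:

```python
# The core formula from Ioannidis (2005): the post-study probability
# that a "significant" finding is true, given pre-study odds R that
# the tested relationship is real, statistical power, and alpha.
#   PPV = (1 - beta) * R / (R - beta * R + alpha)

def ppv(R, power, alpha=0.05):
    return power * R / (power * R + alpha)  # algebraically simplified

# Illustrative values (my choices, not Ioannidis's):
print(f"exploratory field, R = 0.01, power = 0.5: PPV = {ppv(0.01, 0.5):.0%}")  # 9%
print(f"strong hypothesis, R = 1.00, power = 0.8: PPV = {ppv(1.00, 0.8):.0%}")  # 94%
# The same p < 0.05 means very different things in the two settings.
```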

The underlying argument of the paper is irrefutable: all research should be taken with a pinch of salt until the study has been repeatedly replicated. There are countless analyses of the paper that expand upon the debate and are well worth reading, but first check out the original paper. It is more readable than you might expect.

If you have 15 minutes to spare for an excellent lesson in stats (of course you do), kick back and watch this video, which outlines the paper and how it has been misrepresented by science-deniers:

Tl;dr: Moonesinghe et al. (2007) summed up the caveat (somewhat buried in paragraph 19):

Moonesinghe, R., Khoury, M. J., & Janssens, A. C. J. W. (2007). Most Published Research Findings Are False—But a Little Replication Goes a Long Way. PLoS Medicine, 4(2). DOI: 10.1371/journal.pmed.0040028 (PDF)

Ioannidis, the author of the original paper, went on to publish another fascinating paper making the point that current publishing models have unintentionally created an environment in which academic malpractice can thrive. The paper provides a bombshell of empirical evidence for this theory:

Young, N. S., Ioannidis, J. P. A., & Al-Ubaydli, O. (2008). Why Current Publication Practices May Distort Science. PLoS Medicine, 5(10). PMID: 18844432 (PDF)

The conclusions are striking. The tendency of null and negative findings to be rejected creates an artificial incentive for scientific malpractice and results in meaningful data becoming buried. Journals that proudly announce that they reject over 92% of papers as a badge of honour potentiate the incentive, causing a “winner’s curse”. I’m not going to go into any greater detail here, because this discussion merits a dozen blog posts and has been well covered elsewhere.
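A toy simulation (mine, not from the paper) makes the winner’s curse concrete: when only statistically significant results get published, the published literature systematically inflates the true effect.

```python
# Winner's curse, simulated: a small true effect, many underpowered
# studies, and a journal that publishes only "significant" results.

import random
import statistics

random.seed(1)
TRUE_EFFECT, N = 0.1, 30

published = []
for _ in range(10_000):
    sample = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N)]
    mean = statistics.fmean(sample)
    se = statistics.stdev(sample) / N ** 0.5
    if mean / se > 1.96:  # "significant", so it gets published
        published.append(mean)

print(f"true effect = {TRUE_EFFECT}, "
      f"mean published effect = {statistics.fmean(published):.2f}")
# The published record reports an effect several times the true one.
```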

The statistics displayed in the infographic should be seen as symptomatic of the problems outlined by Ioannidis. They are mainly taken from a meta-analysis of self-report surveys, which is particularly worrying because one would imagine self-report greatly under-states the true numbers.

Fanelli, D. (2009). How Many Scientists Fabricate and Falsify Research? A Systematic Review and Meta-Analysis of Survey Data. PLoS ONE, 4(5). PMID: 19478950 (PDF)

In conclusion, science is not impervious to criticism – criticism is the foundation of science. Scientists make mistakes and some scientists do bad things. Science is hard work and major breakthroughs are rare. Errors are more likely to be reported in the press than genuine scientific consensus. Thankfully the cream floats, yet it is normally the dregs that are reported in the papers and misrepresented by those with an agenda. The obvious conclusion is straightforward: always take everything you read with a pinch of salt, and before drawing conclusions wait until the evidence has been rigorously tested.

Infographic via clinicalpsychology.net/bad-science/

