The Frugal Family Doctor has written a great post on an outstanding and superbly simple statistics paper demonstrating that the risk figures printed in the medical press are, more often than not, not what they seem. The source paper is fully open access (PDF) and should be a must-read for all health journalists. That’s never going to happen, so I’m hereby officially adding it to your personal reading list.

“For example, in October of 1995, the UK Committee on Safety of Medicines issued a warning that oral contraceptive pills increased the risk of blood clots in the legs or lungs by 100%. That number was the relative risk increase. The warning frightened patients and their doctors such that thousands of women stopped their birth control pills. However, the absolute risk increase, had it been calculated and broadcast, was 0.00014 (1/7000), with a number needed to harm of 7143. An unintended result of the misinformation was an estimated 13,000 additional abortions in England and Wales during the year following the warning.”
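To see how those numbers fit together, here is a minimal sketch of the arithmetic in Python, assuming the baseline risk implied by the quote: roughly 1 in 7,000, doubling to 2 in 7,000 (the “100% increase”).

```python
# Pill-scare arithmetic: relative vs. absolute risk increase, and the
# number needed to harm (NNH). The baseline risk is an assumption
# inferred from the quote (1 in 7,000, doubled by the newer pills).

baseline_risk = 1 / 7000  # clot risk on the older pills
new_risk = 2 / 7000       # clot risk on the newer pills

relative_increase = (new_risk - baseline_risk) / baseline_risk
absolute_increase = new_risk - baseline_risk
number_needed_to_harm = 1 / absolute_increase

print(f"Relative risk increase: {relative_increase:.0%}")      # 100%
print(f"Absolute risk increase: {absolute_increase:.5f}")      # 0.00014
print(f"Number needed to harm:  {number_needed_to_harm:.0f}")  # 7000
```

(The quoted NNH of 7,143 is just 1/0.00014, the reciprocal of the rounded absolute increase; unrounded it comes out at 7,000. Either way, on the order of 7,000 women would have to switch pills to cause one extra blood clot.)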


The paper demonstrates that the statistical fallacy behind the situation above is so poorly understood that, when tested on it, even gynecologists, for whom this particular calculation is vital, got the simple question below wrong. In fact, only 21% of the physicians asked chose the correct answer, which is slightly worse than chance: with four options, blind guessing would score 25%.

Assume you conduct breast cancer screening using mammography in a certain region. You know the following information about the women in this region:

  • The probability that a woman has breast cancer is 1% (prevalence).
  • If a woman has breast cancer, the probability that she tests positive is 90% (sensitivity).
  • If a woman does not have breast cancer, the probability that she nevertheless tests positive is 9% (false-positive rate).

A woman tests positive. She wants to know from you whether that means that she has breast cancer for sure, or what the chances are. What is the best answer?

A. The probability that she has breast cancer is about 81%.

B. Out of 10 women with a positive mammogram, about 9 have breast cancer.

C. Out of 10 women with a positive mammogram, about 1 has breast cancer.

D. The probability that she has breast cancer is about 1%.

You can scroll to the end of this article for the answer. The problem has two parts. First, humans are inherently crap at reasoning with percentages. The solution is to translate the conditional probabilities into natural frequencies, i.e. counts of actual people. Whether we like it or not, our brains find it much easier to juggle whole numbers:

[Figure 3: Two ways of calculating the probability that a woman who tests positive in mammography screening actually has breast cancer (the positive predictive value)]
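To make the figure concrete, here is a minimal sketch in Python of both routes to the positive predictive value, using the numbers from the question above (1% prevalence, 90% sensitivity, 9% false-positive rate):

```python
# Positive predictive value (PPV) two ways, using the figures above.

prevalence = 0.01           # 1% of women have breast cancer
sensitivity = 0.90          # 90% of those test positive
false_positive_rate = 0.09  # 9% of healthy women also test positive

# Route 1: Bayes' theorem on the conditional probabilities.
p_positive = (sensitivity * prevalence
              + false_positive_rate * (1 - prevalence))
ppv_bayes = sensitivity * prevalence / p_positive

# Route 2: natural frequencies, counting out 1,000 concrete women.
women = 1000
with_cancer = women * prevalence                               # 10 women
true_positives = with_cancer * sensitivity                     # 9 positive
false_positives = (women - with_cancer) * false_positive_rate  # ~89 positive
ppv_frequencies = true_positives / (true_positives + false_positives)

print(f"PPV via Bayes:               {ppv_bayes:.1%}")          # 9.2%
print(f"PPV via natural frequencies: {ppv_frequencies:.1%}")    # 9.2%
```

Both routes land on the same answer, roughly 9%, but the natural-frequency version (9 true positives among about 98 positive tests in 1,000 women) is the one a brain can actually hold on to.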

This effect is compounded by the fact that the percentages in play when we talk about the “relative risk” of a disease are often mind-bogglingly small. A relative percentage alone (which is what is commonly reported) gives no indication of the underlying risk in the first place:

“Consider one medication that lowers risk of disease from 20% to 10% and another that lowers it from 0.0002% to 0.0001%. Both yield a 50% relative risk reduction”
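The same point in code: a quick sketch comparing the two hypothetical medications from the quote, with the number needed to treat (NNT) thrown in to expose the difference that the relative figure hides.

```python
# Two hypothetical medications with identical relative risk reductions
# but wildly different absolute risk reductions.

def risk_reduction(before, after):
    relative = (before - after) / before
    absolute = before - after
    return relative, absolute

for before, after in [(0.20, 0.10), (0.000002, 0.000001)]:
    rrr, arr = risk_reduction(before, after)
    print(f"risk {before:g} -> {after:g}: "
          f"RRR = {rrr:.0%}, ARR = {arr:g}, NNT = {1 / arr:,.0f}")
```

Both medications print a 50% relative risk reduction, but the first helps 1 person in every 10 treated while the second helps 1 in a million.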

This time, journalists are not necessarily to blame. The study reports that relative risks given without the base rate are not just a problem in newspapers and press releases: even such reputable journals as the Annals of Internal Medicine, the British Medical Journal (BMJ), the Journal of the American Medical Association (JAMA), The Lancet, and The New England Journal of Medicine have been guilty of reporting relative risk reductions without the corresponding absolute risks. Thankfully this is changing, with most academic publishers now requiring that absolute risks be reported. Unfortunately the practice rarely trickles down into press releases, and even less frequently into print media, leaving Joe Bloggs with a very distorted picture of medical research findings.

The moral of the story: if you are given a percentage that you think affects you, always find out the raw numbers. They might be much less earth-shattering than you were led to believe.

Answer:

C: 1 out of every 10 women who test positive in screening actually has breast cancer; the other 9 are falsely alarmed. Despite this, the physicians (who all worked in this area) “grossly overestimated the probability, most answering with 90% or 81%”.


Reference:

Gigerenzer, G., Gaissmaier, W., Kurz-Milcke, E., Schwartz, L., & Woloshin, S. (2007). Helping doctors and patients make sense of health statistics. Psychological Science in the Public Interest, 8(2), 53–96. DOI: 10.1111/j.1539-6053.2008.00033.x

