
201 – Reasoning with probabilities

I’m reading a very interesting book called Reckoning With Risk: Learning to Live With Uncertainty by Gerd Gigerenzer (holiday reading). He is a psychologist who specialises in the way that risk is communicated and perceived, or often mis-perceived.

One of his interesting points is that, even when well-educated people (doctors, for example) have accurate knowledge of the probabilities related to a risk, they are often incapable of manipulating those probabilities in fairly simple ways to make sound judgements about the risk.

Here is a striking example. He asked 24 doctors (with an average of 14 years' experience) the following question.

The probability that a randomly chosen woman has breast cancer is 0.8 percent. If a woman who has breast cancer undergoes a mammogram, the probability is 90 percent that the mammogram result will be positive (i.e. indicating that she does have breast cancer). If a woman who does not have breast cancer undergoes a mammogram, the probability is 7 percent that she will have a positive mammogram. Imagine a woman who has a positive mammogram. What is the probability that she actually has breast cancer?

See if you can work out the answer, at least approximately. People who are used to working with statistics and probabilities – like, say, academic economists – should find it easy. (It's an application of Bayes' theorem, explained in the appendix, but you don't need to know that to solve it.)

Not surprisingly, most people find it very difficult to untangle information like this that includes conditional probabilities, even doctors whose job it is to advise people about these sorts of things.

Of the 24 doctors who responded to the question, only four got close. Most were wildly inaccurate, giving probabilities that were far too high. It seems they had no idea how to tackle the question. The most common answer, by far, was 90 percent. But that obviously can't be right: 90 percent is the probability of a positive mammogram given cancer, not the probability of cancer given a positive mammogram, and since 99.2 percent of women don't have cancer, most positive mammograms come from women without it.

This lack of probabilistic reasoning among doctors is a pretty concerning finding. You’d like to think that your doctors could provide sound advice about medical risks, if you wanted it, but in this case, at least, you’d be disappointed. It actually reinforces a feeling I’ve had a number of times that doctors who were advising me did not have a good understanding of probabilities.

Gigerenzer then did something quite clever. He reformulated the question in a way that made it much easier for people to think about, as follows.

Eight out of every 1000 women have breast cancer. Of these 8 women with breast cancer, 7 will have a positive mammogram. Of the remaining 992 women who don’t have breast cancer, 70 will still have a positive mammogram. Imagine a randomly selected sample of women who have positive mammograms. How many of these women actually have breast cancer? ____ out of ____.

This is the same information as in the earlier question (with some rounding). From this description, it is much easier to see that, of the 1000 women we are considering, 77 will have positive mammograms, and of those 77, only 7 will actually have cancer. Thus the answer is that 7/77 = 1/11 ≈ 9% of women with a positive mammogram would actually have cancer.
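If you want to check the arithmetic, here is a minimal sketch of the same frequency calculation in Python (the variable names are mine, not Gigerenzer's):

```python
# The frequency version of the calculation, using the numbers from the question above.
with_cancer = 8          # women with breast cancer, out of 1000
true_positives = 7       # of those 8, 7 have a positive mammogram
false_positives = 70     # of the other 992, 70 also have a positive mammogram

positives = true_positives + false_positives        # 77 positive mammograms in total
share_with_cancer = true_positives / positives      # 7 / 77

print(f"{true_positives} out of {positives} = {share_with_cancer:.0%}")  # 7 out of 77 = 9%
```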

Gigerenzer argues that this approach of reformulating probabilistic information into absolute numbers (i.e. it talks about 8 women with breast cancer, rather than 0.8% of women) will generally improve people’s understanding of risk. It certainly worked pretty well with another sample of 24 doctors. Almost half got the second version of the question exactly right, and most got it approximately right. Only 5 of the 24 were highly inaccurate.
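To make that translation concrete, here is a small hypothetical Python function (my own sketch, not anything from the book) that turns the three probabilities in the original question into approximate counts per 1000 women:

```python
def natural_frequencies(base_rate, sensitivity, false_positive_rate, group=1000):
    """Re-express conditional probabilities as approximate counts out of `group` women."""
    with_cancer = base_rate * group                           # 8 women with cancer
    true_pos = sensitivity * with_cancer                      # 7.2 (rounded to 7 above)
    false_pos = false_positive_rate * (group - with_cancer)   # 69.4 (rounded to 70 above)
    return with_cancer, true_pos, false_pos

with_cancer, true_pos, false_pos = natural_frequencies(0.008, 0.9, 0.07)
print(f"About {round(true_pos)} of the {round(true_pos + false_pos)} women with a "
      f"positive mammogram have cancer ({true_pos / (true_pos + false_pos):.0%}).")
# -> About 7 of the 77 women with a positive mammogram have cancer (9%).
```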

That's a big improvement in the doctors' performance, although I'm still worried that 20% of them got the answer completely wrong even when the question was posed in such a transparent and intuitively obvious way. I hope they aren't my doctors!

In follow-up research, Gigerenzer investigated what the doctors actually did when they were trying to answer these questions: which numbers from the problem description they used and how they combined them. For the first, more difficult description based on conditional probabilities, it became even clearer that the doctors didn't have a clue what to do. Only 20% of them used the same reasoning strategy again when the question was repeated for a different medical test. Clearly, they were just guessing, and overwhelmingly guessing wrong.

Given the intuitive problem description, more doctors used the right reasoning strategy and applied it consistently, but there was still a disturbing number who didn’t.

It will certainly make me want to explore the probabilities more deeply with my doctor when I next have a medical diagnosis.

Appendix: Using Bayes' Theorem to solve the problem

Bayes' Theorem: P(a|b) = P(b|a)*P(a)/P(b)

where
a = has breast cancer
b = has positive mammogram

From the problem description:
P(a) = 0.008
P(b|a) = 0.9
P(b) = 0.008*0.9 + 0.992*0.07 = 0.077

So
P(a|b) = 0.9*0.008/0.077 = 0.094

That is, if a woman at random has a positive mammogram, the probability that she has breast cancer is 9%. The intuitive approach, based on absolute frequencies, is clearly much easier to think about than this, although it amounts to the same thing in the end.
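For completeness, the same Bayes calculation can be written out as a short Python sketch (again, the variable names are mine), using the numbers above:

```python
# Bayes' theorem with the numbers from the appendix.
p_cancer = 0.008               # P(a): a randomly chosen woman has breast cancer
p_pos_given_cancer = 0.9       # P(b|a): positive mammogram given cancer
p_pos_given_no_cancer = 0.07   # positive mammogram given no cancer

# P(b): overall probability of a positive mammogram
p_pos = p_pos_given_cancer * p_cancer + p_pos_given_no_cancer * (1 - p_cancer)

# P(a|b): probability of cancer given a positive mammogram
p_cancer_given_pos = p_pos_given_cancer * p_cancer / p_pos

print(f"P(b)   = {p_pos:.3f}")                 # 0.077
print(f"P(a|b) = {p_cancer_given_pos:.3f}")    # 0.094, i.e. about 9%
```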

Further reading

Gigerenzer, G. (2002). Reckoning With Risk: Learning to Live With Uncertainty, Penguin, London.