(Updated 2016 April 1 to add some additional points and to try to improve the explanation below the video.)
Here’s a video with a surprise ending — a big surprise, in fact. The question addressed is this: What if you take a test for a disease and get a positive result? How worried should you be?
Obviously that depends on the seriousness of the disease and the reliability of the test. A little less obviously, it also depends on how common the disease is. To put some numbers on it, suppose it’s a pretty bad disease that affects one person in a thousand, that the test never misses a real instance of the disease, and that it’s reliable enough that only one healthy person in ten will get a positive result. (Note, incidentally, that since this hypothetical test is perfect at detecting the disease, a negative result means that you’re guaranteed not to have it.)
Here’s a video that analyzes the question using an important piece of mathematics called Bayes’s Theorem, named after the Reverend Thomas Bayes (1701-1761). Below the video I’ll give an alternative explanation that doesn’t require any probability theory:
Link: https://youtu.be/M8xlOm2wPAA
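For readers who do want a bit of algebra first, here is roughly how the Bayes’s Theorem calculation works out with the numbers above (prevalence of 1 in 1000, no false negatives, a 10% false positive rate), writing D for having the disease and + for a positive test:

\[
P(D \mid +) = \frac{P(+ \mid D)\,P(D)}{P(+ \mid D)\,P(D) + P(+ \mid \text{not } D)\,P(\text{not } D)} = \frac{1 \times 0.001}{1 \times 0.001 + 0.1 \times 0.999} \approx 0.0099
\]

In other words, a positive result corresponds to only about a 1% chance of actually having the disease.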
OK, here’s the version without any algebra: Suppose we test a random group of a thousand people for the disease. Given that the disease affects only 1 in 1000 on average, we’d expect about one person in the group to have the disease; it might be a few, or none at all.
The rest of the group, 999 on average, would not have the disease. But about 1/10 of them would get a positive result anyway, so we’d still expect roughly 100 positive results out of 1000 people even if none of them had the disease.
So on average we’d have about 100 false positives and 1 true positive. Of the positive test results, about 99 percent would likely be false positives. They could even all be false positives. And this is with a test that’s pretty accurate (only a 10% rate of false positives and a 0% rate of false negatives).
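If you’d rather see this as a quick experiment than as arithmetic, here’s a minimal simulation sketch in Python, using the same hypothetical numbers as above (1-in-1000 prevalence, no false negatives, a 10% false positive rate):

```python
import random

# Hypothetical parameters matching the example above.
PREVALENCE = 1 / 1000
FALSE_POSITIVE_RATE = 0.10
N = 1000  # size of the group being screened

def run_trial(n=N):
    true_positives = false_positives = 0
    for _ in range(n):
        sick = random.random() < PREVALENCE
        if sick:
            true_positives += 1          # the test never misses a real case
        elif random.random() < FALSE_POSITIVE_RATE:
            false_positives += 1         # healthy person, positive result anyway
    return true_positives, false_positives

tp, fp = run_trial()
total = tp + fp
print(f"true positives: {tp}, false positives: {fp}")
if total:
    print(f"share of positives that are false: {fp / total:.0%}")
```

Run it a few times and the exact counts bounce around, but the false positives swamp the true positives every time.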
This is the problem with screening tests in general. Even with diseases that are less rare or tests that are even more accurate, there will be a lot of false positives. Besides the needless worries caused by false positives, there’s the chance that they might lead to medical interventions that are potentially harmful. This is why some screening tests are now discouraged as potentially leading to more harm than benefit.
And this doesn’t just apply to screening for diseases. Drug testing, for example, isn’t especially accurate, producing significant numbers of false positives. And while it’s true that the rate of drug abuse is a lot higher than one in a thousand, in practice a very high percentage of positive results are indeed false. Even when more accurate and expensive follow-up gas chromatography tests are employed, the number of false positives remains a problem.
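To see why a higher base rate doesn’t fully rescue drug testing, here’s the same calculation packaged as a small function. The numbers plugged in are purely illustrative assumptions, not measured statistics for any real test:

```python
def share_of_positives_that_are_false(prevalence, sensitivity, false_positive_rate):
    """Fraction of positive results that come from people who are actually negative."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * false_positive_rate
    return false_pos / (true_pos + false_pos)

# Illustrative numbers only: even if 5% of those tested actually use drugs,
# a test with a 5% false positive rate and 95% sensitivity yields positives
# that are false roughly half the time.
print(share_of_positives_that_are_false(0.05, 0.95, 0.05))  # ~0.50
```

The point is just that unless the false positive rate is tiny compared to the prevalence, a large share of positives will be wrong.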
This isn’t just a theoretical concern. A handful of states have passed laws requiring drug tests for poor people getting government assistance. That might at first glance seem like a good idea as a way to avoid having tax dollars go to support drug abuse. But in practice, the testing costs millions of dollars and produces a very small total number of positive results, resulting in no net savings. The substantial rate of false positives just makes it worse, and even true positives end up punishing not just drug users but their children. This isn’t an argument against policies to discourage drug abuse, only against bad policies that waste money that could be better spent directly on helping people get off drugs.