Monday's newspaper feed had two excellent op-eds on different aspects of medical care.
In the Washington Post, Daniel Morgan wrote "What the Tests Don't Show: Doctors Are Surprisingly Bad at Reading Lab Results." Here is the gist:
But my research has found that many physicians misunderstand test results or think tests are more accurate than they are. Doctors especially fail to grasp how false positives work, which means they make crucial medical decisions — sometimes life-or-death calls — based on incorrect assumptions that patients have ailments that they probably don’t. When we do this without understanding the science of risk and probability, we unacceptably increase the chances of making the wrong choice. In the worst cases, as with the man whose angiogram caused otherwise avoidable strokes, we increase the odds of unnecessarily putting patients in danger.
And here is a sample:
A 5 percent false-positive rate is typical of many common tests. The primary blood test to check for a heart attack, known as high-sensitivity troponin, has a 5 percent false-positive rate, for instance. U.S. emergency rooms often administer the test to people with a very low probability of a heart attack; as a result, 84 percent of positive results are false, according to a study published last year. These false-positive troponin tests often lead to stress tests, observation visits with expensive co-pays and sometimes invasive cardiac angiograms.
You can read the entire article here. (Paywalled, but with free registration; first published October 5, but in my WaPo feed Monday for some reason.)
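It is worth seeing where a figure like that 84 percent comes from: it is just Bayes' rule applied to a population in which very few of the people tested actually have the condition. Here is a minimal sketch in Python; the 1 percent prevalence and 95 percent sensitivity are numbers I have assumed for illustration (the op-ed reports only the 5 percent false-positive rate and the 84 percent result):

```python
def positive_predictive_value(prevalence, sensitivity, false_positive_rate):
    """Probability that a positive test result is a true positive (Bayes' rule)."""
    true_positives = prevalence * sensitivity
    false_positives = (1 - prevalence) * false_positive_rate
    return true_positives / (true_positives + false_positives)

# Assumed, illustrative numbers: the op-ed gives only the 5% false-positive
# rate and the 84% finding; the 1% prevalence and 95% sensitivity are guesses
# for a low-risk emergency-room population.
ppv = positive_predictive_value(prevalence=0.01,
                                sensitivity=0.95,
                                false_positive_rate=0.05)

print(f"Chance a positive is real:  {ppv:.0%}")      # ~16%
print(f"Chance a positive is false: {1 - ppv:.0%}")  # ~84%
```

With those assumptions, only about one positive in six is a true positive. The general point is that when the base rate in the tested population is low, even a modest false-positive rate swamps the true positives.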
The second op-ed is a personal account of a parent who spent her final year chasing unproven alternative treatments. Here is a sample:
I watched as my 73-year-old mother spent the last year of her life replacing the foods she loved with concoctions of cottage cheese mixed with flaxseed oil. I listened as she justified spending thousands of dollars and hours of her precious remaining time traveling many miles to receive vitamin C infusions.
As she grew more ill, I gently questioned the supplements she was taking -- often so many that there was no room left in her shrinking stomach for food. And at times, when I could find evidence that a treatment had been scientifically tested -- and proved not to be effective -- I shared a strong opinion.
Yet the coffee enemas continued. Like a religious zealot, my mother completely and thoroughly put her faith, hope, and money into treatments and practices promising miracles without any evidence that they actually worked.
You can read the entire op-ed here.
I wish the dubious value of randomly testing people for things where false positives are a real possibility were better known in general. Here in Australia, there is _massive_ random testing of drivers for drunk driving and for drug use while driving. (In the year and a half I've lived here, I've been tested at least 10 times as often as I was in my whole life before that in the US.) These tests are often given in the morning (around 10am) or early afternoon (1pm or so), which don't seem like times well chosen for catching true positives. The tests are surely not 100% accurate. Even with follow-up testing, it's a huge waste of people's time, causes a lot of stress, and probably results in some unfair punishment. All either because people don't understand probabilities or because they don't care.
Posted by: Matt | October 23, 2018 at 07:36 AM
What is the impact of false negatives? I cannot imagine that tests that have a high percentage of false positives would not also have a significant percentage of false negatives.
Posted by: Ellen Wertheimer | October 25, 2018 at 01:16 PM
I am sure you are right, Ellen. I believe that tests are designed, to the extent possible, so that false positives (which can be checked) are more likely than false negatives.
And of course, they can only test for illnesses and conditions that have been identified. Perhaps the greatest false negative is when the physician says "there's nothing wrong with you," when often it should be "we have no test that can determine what is wrong with you."
Posted by: Steve L. | October 25, 2018 at 02:47 PM
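On Ellen's question about false negatives: with the same illustrative numbers assumed above (1 percent prevalence, 95 percent sensitivity, 5 percent false-positive rate), a negative result is almost always correct, which is one way of seeing Steve's point that tests tend to be tuned to avoid false negatives even at the cost of more false positives. Again, a sketch with assumed numbers, not figures from either op-ed:

```python
def negative_predictive_value(prevalence, sensitivity, false_positive_rate):
    """Probability that a negative test result is a true negative (Bayes' rule)."""
    true_negatives = (1 - prevalence) * (1 - false_positive_rate)
    false_negatives = prevalence * (1 - sensitivity)
    return true_negatives / (true_negatives + false_negatives)

# Same assumed numbers as the troponin sketch above: 1% prevalence,
# 95% sensitivity, 5% false-positive rate.
npv = negative_predictive_value(prevalence=0.01,
                                sensitivity=0.95,
                                false_positive_rate=0.05)
print(f"Chance a negative result is correct: {npv:.2%}")  # ~99.95%
```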