Evidence-based medicine is all the rage these days, and its practice is only as good as the studies it’s based on. The gold standard is the prospective, randomized, double-blind, controlled trial. But these can easily lead clinicians astray, Dr. Philip J. Devereaux said at the annual meeting of the American Society of Anesthesiologists.
Imagine, if you will, two gold-standard trials of a treatment to prevent myocardial infarction, identical in every respect except that one enrolls 200 patients and the other 8,000. In study #1, one patient in the treatment group and nine on placebo had heart attacks. That’s a significant difference, with a P value of .02. In study #2, 200 patients on treatment and 250 on placebo had heart attacks. That’s also a significant difference, with the same P value of .02. But the relative risk reduction in study #1 is 90%, while in study #2 it’s 20%, so some people may find study #1 more persuasive.
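Those relative risk reductions are just arithmetic on the event rates. A minimal sketch (the function name is mine, for illustration):

```python
def relative_risk_reduction(events_tx, n_tx, events_ctl, n_ctl):
    # RRR = 1 - (event rate on treatment / event rate on control)
    return 1 - (events_tx / n_tx) / (events_ctl / n_ctl)

# Study #1: 1 of 100 on treatment vs. 9 of 100 on placebo -> about 90%
print(round(relative_risk_reduction(1, 100, 9, 100), 2))        # 0.89
# Study #2: 200 of 4,000 vs. 250 of 4,000 -> 20%
print(round(relative_risk_reduction(200, 4000, 250, 4000), 2))  # 0.2
```

Same P value, very different-sounding headline numbers.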
“I have more concern about small randomized, controlled trials than about observational studies” because people challenge the latter, said Dr. Devereaux of McMaster University, Hamilton, Ont., Canada. Perioperative medicine, where anesthesiologists do most of their work, is dominated by small trials.
Look at what happens if you add two heart attacks in the treatment group of each of these studies. In study #1, the P value becomes .13, meaning the treatment is not significantly different from placebo. In study #2, the P value remains .02, with significance intact.
Dr. Devereaux and his associates developed an Absolute Fragility Index (AFI) that he proposed should accompany the analysis of any trial’s results. The AFI is the minimum number of patients in the treatment group who would have to switch from not having an “event” (such as a heart attack) to having one in order to move the results from significant to nonsignificant. Of the two hypothetical studies, the first has an AFI of 1; the second, an AFI of 9.
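As a sketch of how such an index might be computed — my own illustration, not Dr. Devereaux’s method — assume a two-sided Fisher exact test and a significance threshold of .05, and keep adding events to the treatment arm until significance disappears:

```python
from math import lgamma, exp

def log_comb(n, k):
    # log of the binomial coefficient C(n, k), via log-gamma
    return lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact P value for the 2x2 table [[a, b], [c, d]]:
    sum the probabilities of all tables with the same margins that are
    no more likely than the observed one."""
    row1, row2, col1, n = a + b, c + d, a + c, a + b + c + d
    def p_table(x):
        return exp(log_comb(row1, x) + log_comb(row2, col1 - x)
                   - log_comb(n, col1))
    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))

def fragility_index(events_tx, n_tx, events_ctl, n_ctl, alpha=0.05):
    """Smallest number of treatment-group patients who must switch from
    no-event to event before the result stops being significant."""
    for added in range(0, n_tx - events_tx + 1):
        e = events_tx + added
        p = fisher_exact_two_sided(e, n_tx - e,
                                   events_ctl, n_ctl - events_ctl)
        if p >= alpha:
            return added  # 0 means the result was never significant
    return None
```

Under these assumptions, `fragility_index(1, 100, 9, 100)` returns 1 for the first hypothetical study: a single extra heart attack on treatment is all it takes.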
“If your trial is hinging on one or two events, you should be very humble about the results,” he said. Most medical research fits this description, he added.
The AFI is more than an academic exercise. He looked back at the most highly cited studies in leading medical journals over the past few decades and found that 16% of them were later contradicted and another 16% were later shown to have exaggerated treatment effects. Why? The one factor common to these boo-boos was that the initial trial was small, with relatively few patients. Yet those trials have influenced thousands or millions of clinicians over the years.
His talk had extra resonance for me after I read a provocative article in the November 2010 issue of The Atlantic Monthly entitled “Lies, Damned Lies, and Medical Science.” The article profiles the work of Dr. John P.A. Ioannidis, professor and chairman of epidemiology at the University of Ioannina, Greece, who has spent much of his career exposing the bad science behind some medical practice and asking why so many researchers rely on reports that are exaggerated, wrong, or misleading. (Hat tip to my editor Jane MacNeil for pointing me to this article.)
It would be nice if we could all agree that science is not static, but rather progresses and regresses. We learn, and then find out that some of what we thought we had learned was wrong, and set about using that information to seek the next level of truth. Repeat, ad infinitum. Personally, I’d love it if my doctors couched every bit of advice with, “Here’s what we think we know today.”
But I suspect that wouldn’t sit well with many patients, who want certainty (as if there were such a thing). And it seems an especially difficult proposition in our contentious society, where anti-science naysayers like to seize on contradictory findings to challenge the basic value of science overall.
If you’re a clinician, let us know — do your patients tolerate ambiguity? Do you even have time to express to them your level of confidence in the data you rely on in helping them make decisions about their care?