Wednesday, August 15, 2018

Why Unbox Evidence-Based Medicine?




I’ve been teaching evidence-based medicine with enthusiasm for over 15 years, to undergraduate medical students at my university and to medical doctors in extramural activities. As time passes, my unease with this discipline has grown, to the point that I have come to the conclusion that evidence-based medicine should not be a discipline at all. Nowadays, I finish my course every semester confessing to my students that I have a dream: one day my discipline will cease to exist, because evidence-based medicine will be recognized simply as medicine. 

In fact, I recently looked into the etymology of the word “medicine”. It originates from the Latin “mederi”, which means “to know the best way”. I then realized that medicine is not about knowing the certain way, because “medicine is a science of uncertainty”, as stated by William Osler in the first half of the 20th century. The best choice is not necessarily the right choice; it only provides the best chance. Only afterwards, when we see the outcome, do we learn whether it was right or wrong. 

Maybe we should consider Osler the father of evidence-based medicine. Once he recognized the uncertainty of medicine, he suggested a solution: “medicine is the art of probability”. He proposed probability-based medicine, as empirical evidence is the way to assess diagnostic, prognostic, and treatment probabilities. In fact, “evidence” is just a means to an end: critical thinking grounded in uncertainty. 

First, we need to unbox evidence-based medicine into uncertainty-based medicine. Second, we should stop regarding evidence-based medicine as a medical field or a particular way to practice medicine. It is just medicine as it should be.

I have a feeling that evidence-based medicine has been presented in an overly methodological package, one that creates a gap between the real-world physician and the scientific way of thinking. Evidence-based medicine should be presented with more sensibility and grace. It is not about the evidence; it is about the patient. 

Evidence is the means to acquire the probability used in an individual decision, one that takes into account the clinical aspects of the patient as well as her values and preferences. Evidence-based medicine is the art of medicine, the art of probability. And now I understand why Osler said art of probability instead of science of probability: because we need sensibility to apply probabilities to a unique patient, taking into account her clinical, mental, and spiritual particularities. 

Unboxing evidence-based medicine means presenting it to the consumer of science, as opposed to the researcher. Evidence-based medicine is not the field of trialists or systematic reviewers; it is the field of the caretaker. This caretaker should know how to judge the quality of evidence and how to use the knowledge from a systematic review, but does not need to learn how to build one. He needs to develop attitude more than knowledge. The knowledge to read an article is easy to acquire, but skepticism and a critical attitude are developed over the years. Again, it takes sensibility and affection for the matter.

Unboxing evidence-based medicine means putting emphasis on the general concept of chance in determining outcomes, one step beyond P values or confidence intervals. It means understanding the real world as full of biases that promote illusions. Concepts have to be created in a controlled environment and then applied with art in the real world.

The intention of this blog is not to make a supposedly difficult discipline easy. It is to make it interesting. In fact, medicine is not supposed to be easy, but it can be very interesting and fulfilling. That is what evidence-based medicine should be about. 

2 comments:

  1. This comment has been removed by a blog administrator.

  2. Part II

    How can we explain such large effect size with rather small differences between groups in terms of preventive medications?

    As you suggested, the tables suggest a rather small percentage difference in the rates of medications among groups. The problem here is that by averaging the number of patients on certain therapies, we neglect the fact that a substantial number of patients were taken off preventive measures as a result of normal CTA results. Table S3 shows that 14% of patients in the CT-guided arm received new preventive treatment compared to only 4% in the control arm. At the same time, 4% of the CT-arm patients were taken off preventive therapy vs. essentially zero in the control arm. Therefore, follow-up averages underestimate the difference in new preventive treatment received in the CT-guided arm. Furthermore, the data do not reveal whether the intensity of prevention/medication dosing changed as a result of preventive efforts. It is conceivable that providers increased statin doses in patients already on statins whose CT revealed atherosclerotic disease.

    It is also important to recognize that the results are not due to one intervention, such as statin therapy, but to a whole array of "preventive measures", some not even accounted for. In addition to statins, which have been proven to reduce MI and CV death in similar populations, patients were also started on antiplatelet therapy, which is also associated with reductions in MI and CV death. New interventions are typically tested on top of standard of care, and the added benefit is often small. Here, we have the curious situation that some patients did not get any directed prevention (those with normal stress test results) while others got a whole array of preventive measures (those with coronary atherosclerotic disease by CT). It appears the cumulative effect is quite impressive.

    3) Are the results inflated by ascertainment bias?

    I agree that there is potential bias, since no event adjudication occurred. However, I find it unlikely that there would be a substantial bias based on treatment assignment in these randomized groups. The fact that PROMISE and the European registry showed similar results with shorter follow-up and lower-risk populations suggests to me that while the SCOT-HEART results may be on the upper range of the effect size, they remain within plausibility.


    4) What is the external validity of SCOT-HEART?

    As outlined above, I do not believe that protocolizing preventive measures in response to evidence of coronary atherosclerotic disease reduces the external validity of the trial. On the contrary, I believe this measure is not only reasonable but very appropriate and applicable. In other words, the same prescription should be given in clinical practice if similar results are to be expected.


    5) Conclusions

    I believe the fact that two RCTs and one large registry (together encompassing >100,000 patients in different countries and healthcare systems) revealed a reduction in MI with a CT-guided approach vs. stress testing is of major significance and should result in an immediate change to our practice (as already done in the UK, where CTA is used as the first-line test in patients with stable chest pain). We perform approximately 10,000,000 stress tests in the US each year. Even if the effect size in clinical practice is smaller than in SCOT-HEART, e.g., a 30% risk reduction for MI when combining SCOT-HEART and PROMISE, we may prevent many thousands of myocardial infarctions each year by identifying symptomatic patients who have coronary atherosclerotic disease, not detectable by stress testing, who would benefit from prevention. As such, I believe the SCOT-HEART trial ranks among the most important clinical trials in cardiology of the past years.

    Kind regards,
    Armin


    My disclosure: I am directing a Cardiac CT Lab. My main clinical and investigational focus, however, is CHD and I have been interested in CT solely because I believe it is currently the best tool to identify patients with CHD and guide their management.


