Sunday, December 16, 2018

The Parachute Trial: useful caricature or just a joke?


Caricature studies" have been used successfully in the scientific field to make relevant methodological discussions more palatable. I like this approach and often use them as teaching tools, such as the strong correlation between chocolate consumption and Nobel Prizes as an example of confounding bias.

In 2003, a systematic review on the efficacy of parachute use in people who jump from great heights was published in the British Medical Journal. The review found no randomized clinical trials of this intervention. It was a clever way of demonstrating that not everything needs experimental evidence. That article inspired us to coin the terms "parachute paradigm" and "principle of extreme plausibility".

Yesterday, I received a plethora of enthusiastic messages about the latest clinical trial published in the British Medical Journal as part of the Christmas series: Parachute use to prevent death and major trauma when jumping from aircraft: randomized controlled trial.

In this trial, airplane passengers were invited to enter a study in which they would jump from the plane to the ground after being randomized to a parachute or to a non-parachute backpack as control. The primary outcome was death or severe trauma. Based on the premise that 99% of the control group would suffer the outcome, a power of 99% to detect a huge (and plausible) relative risk reduction of 95% would require only 10 patients per group. This was done and, surprisingly, the study was negative: zero incidence of the primary outcome in both groups. However, only individuals who would jump from planes parked on the ground agreed to participate.
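To make that premise concrete, here is a minimal simulation sketch in Python. It is my own illustration, not the authors' sample size calculation: the 99% control event rate, the 95% relative risk reduction, the 10 participants per arm and the choice of a two-sided Fisher's exact test at a 5% significance level are assumptions taken from the description above.

```python
# Minimal power simulation sketch (illustration only, not the authors' calculation).
# Assumptions: 99% event rate without a parachute, 95% relative risk reduction
# with a parachute, 10 jumpers per arm, two-sided Fisher's exact test, alpha = 0.05.
import numpy as np
from scipy.stats import fisher_exact

rng = np.random.default_rng(2018)

p_control = 0.99            # assumed risk of death or major trauma in the control arm
p_parachute = 0.99 * 0.05   # assumed 95% relative risk reduction
n_per_arm = 10
n_simulations = 5000
alpha = 0.05

significant = 0
for _ in range(n_simulations):
    events_parachute = rng.binomial(n_per_arm, p_parachute)
    events_control = rng.binomial(n_per_arm, p_control)
    table = [[events_parachute, n_per_arm - events_parachute],
             [events_control, n_per_arm - events_control]]
    _, p_value = fisher_exact(table)  # two-sided by default
    if p_value < alpha:
        significant += 1

print(f"Estimated power with {n_per_arm} jumpers per arm: {significant / n_simulations:.2f}")
```

Under these assumptions the estimated power comes out close to 100%, which is why a handful of jumpers per arm would suffice, provided the participants actually faced the assumed 99% baseline risk, which, jumping from parked planes, they did not.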

Funny, but what is the implicit message of this study?

"Randomized trials might selectively enroll individuals with a lower perceived likelihood of benefit, thus diminishing the applicability of the results to clinical practice."

According to the authors, the new parachute study points to the problem that randomized clinical trials select samples less predisposed to benefit from the intervention, a phenomenon that would promote false negative studies. They explain that this happens because patients who are more likely to benefit from a therapy are less likely to agree to enter a study in which they may be randomized to non-treatment. The partial exclusion of patients with a greater chance of therapeutic success would make clinical trial samples less sensitive to the detection of benefit.

Caricatures serve to accentuate true traits. However, if we were to characterize clinical trial samples (the ideal world), they tend to be more predisposed to showing positive results than the real-world target population. Therefore, this study is not a caricature of the real-world clinical trial.

Thus, the present article should lose its caricature status and be considered just a funny joke, with no ability to anchor our minds towards better scientific thinking.

As proofs of concept, clinical trials rely on highly treatment-friendly samples, obtained by applying restrictive inclusion and exclusion criteria. Differences between patients who agree to enter a study and those who refuse are not sufficient to generate a sample less predisposed to treatment benefit than the real-world population.

The "joke study" commits an unusual sample bias: it allows the inclusion of patients who do not need treatment. It would be as if a study aimed at testing thrombolysis allowed the inclusion of any chest pain, regardless of the electrocardiogram. Doctors who already believe in thrombolysis would see the electrocardiogram, thrombolyze ST-elevation patients, and release those who do not need thrombolysis to be randomized to drug or placebo. A joke without scientific value.

Caricature studies are useful when they anchor the community's mind towards sharper criticism of study results. In this case, however, the anchoring occurred in the opposite direction.

First, when we think of the scientific ecosystem, the biggest problem is false positive studies, mediated by several phenomena: confounding bias in observational studies, outcome reporting bias, conclusions spun towards positive findings (spin) and, finally, citation bias favoring positive studies. Behind all this lies the innate predilection of the human mind for false affirmations over true negations.

Secondly, there is the problem of efficacy (ideal world) versus effectiveness (real world). Clinical trials aim to evaluate efficacy, which can be interpreted as the intrinsic potential of the intervention to offer clinical benefit: "Does the treatment have the property of being beneficial?" Clinical trials therefore represent the ideal condition for the treatment to work. In the face of a positive clinical trial, we must always reflect on whether this positivity will be reproduced in the real world, which constitutes effectiveness.

Of course, there is the problem of false negative studies, and it should also be a concern. But the bias suggested by the funny parachute study does not represent an important false-negative mechanism. The most prevalent mechanisms leading to false negatives are reduced statistical power, excessive crossover in the intention-to-treat analysis and inadequate applicability of the intervention.

My concern is that a reader of this funny study would take the following message home: if a promising study is negative, consider that clinical trials tend to include patients less likely to benefit from the intervention. This message is wrong, as clinical trials tend to select samples more predisposed to benefit. Of course, there are exceptions, but if we are to anchor our minds, it should be in the direction of the most prevalent phenomenon.

My prediction is that this study will come to be cited by legions of believers unsatisfied with negative results from well-designed studies, just as the seminal parachute article has been used inadequately to justify many treatments that have nothing to do with the parachute paradigm, under the premise that "there is no evidence at all." A recent study by Vinay Prasad showed that most interventions characterized as parachutes by medical articles are not: many have had clinical trials with negative results.

The great attention received by the parachute clinical trial is an example of how information is shared on social networks. The main criterion for sharing is how interesting, unusual or amusing the information is, to the detriment of its veracity or usefulness. In the appeal to novelty, fake news ends up getting more attention than true news, as was recently demonstrated by a paper published in Science. Although the article we are discussing should not be framed as fake news, it is not a good caricature of the real world either.

The work in question is not a caricature of the ecosystem of randomized clinical trials. It is a mere joke with the potential to bias our minds towards the inadequate idea that heterogeneity between clinical trial samples and the treatment's target population reduces the sensitivity of these studies to detect positive effects. In fact, the samples enrolled in clinical trials usually have a greater chance of detecting positive results (sensitivity) than if the entire target population were included.


When science learning is approached in a fun way, it arouses great interest in the biomedical community. But we should always ask ourselves: what is the implicit message of the caricature? That is the first step in the critical appraisal of such "thought experiments".

2 comments:

  1. "Randomized trials might selectively enroll individuals with a lower perceived likelihood of benefit, thus diminishing the applicability of the results to clinical practice."

    This is undoubtedly true. Even enrollment in an observational study is associated with clinically significant differences in disease risk, e.g.
    https://www.ncbi.nlm.nih.gov/pubmed/11916347

