Category Archives: Research

The Power Of Trauma Research?

The veracity of medical research conclusions, specifically trauma research, has always fascinated me. I see so many people who are content to jump to conclusions based on a paper’s title or the summary in its abstract. However, having read thousands of research papers over the years, I am no longer surprised when these easy-to-digest blurbs don’t entirely match up with the actual details embedded within the manuscript.

The authors of these publications genuinely intend to provide new and valuable information, either for clinical use or for future studies to build upon. Unfortunately, mistakes can be made that degrade their value. Problems with research design are among the top culprits that affect the significance of these papers.

In this post, I will focus on what are considered “gold standard” studies. The randomized controlled trial (RCT) is usually considered the big kahuna among research designs. RCTs can provide solid answers to our clinical questions if designed and carried out properly.

In 2010, a 25-item checklist was created (CONSORT 2010, see reference 1) to guide researchers on the essential items that should be included in all RCTs to ensure high-quality results. A group of investigators from Jackson Memorial in Miami and Denver Health published a paper earlier this month that critically reviewed 187 trauma surgery RCTs published from 2000 to 2021. They analyzed power calculations, adherence to the CONSORT guidelines, and an interesting metric called the fragility index.

Here is a summary of their findings:

  • Only 46% of studies calculated the sample size needed to prove their thesis before beginning data collection. With no pre-defined stopping point, the researchers might never be able to demonstrate significant results, or may waste money on subjects in excess of the number actually needed.
  • Of the studies that did calculate a needed sample size, two-thirds did not achieve it and were not powered to identify even very large effects. Once again, they spent research money while being almost certainly unable to show statistical differences between the groups, even if a difference actually existed.
  • For studies published after the CONSORT checklist was developed in 2010, the average number of criteria met was 20 of 25, and only 11% met all 25. The most common omission was failure to publish the randomization scheme, which makes it impossible to verify that group allocation was free of bias.
  • Among 30 studies that had a statistically significant binary outcome, the mean fragility index was 2. In half of these studies, a different outcome in as few as two patients could have swung the final results and conclusion (see the sketch after this list).
  • The majority of the studies (76%) were single-center trials. Such results frequently cannot be generalized to larger and more diverse populations, and larger confirmatory studies often produce results at odds with the single-center ones.
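The fragility index is straightforward to compute for any trial with a binary outcome. Below is a minimal sketch in Python, assuming made-up event counts and using scipy's Fisher's exact test; it follows the usual approach of flipping one patient's outcome at a time, in the arm with fewer events, until statistical significance disappears:

```python
from scipy.stats import fisher_exact

def fragility_index(events_a, total_a, events_b, total_b, alpha=0.05):
    """Fragility index of a significant 2x2 trial result: the number of
    patients whose outcomes must flip (non-event to event, in the arm
    with fewer events) before Fisher's exact test loses significance."""
    def p_value(ea, eb):
        _, p = fisher_exact([[ea, total_a - ea], [eb, total_b - eb]])
        return p

    if p_value(events_a, events_b) >= alpha:
        return 0  # the result was not significant to begin with

    flips = 0
    while p_value(events_a, events_b) < alpha:
        if events_a <= events_b:
            events_a += 1  # flip one patient in the arm with fewer events
        else:
            events_b += 1
        flips += 1
    return flips

# Hypothetical trial: 3/100 events vs. 12/100 events (significant at p < 0.05)
print(fragility_index(3, 100, 12, 100))
```

A fragility index of 1 or 2 means a study's conclusion hangs on the outcomes of just a couple of patients, which is exactly what the authors found in these trauma trials.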

Bottom line: What does it all mean? Basically, a lot of well-intentioned but poorly designed research gets published. The sheer volume of it and the work needed to interpret it correctly make it very difficult for the average trauma professional to understand and apply. And unfortunately, there is a lot of pressure to pump out a high volume of publications, not necessarily high-quality ones.

My advice for researchers is to ensure you have access to good research infrastructure, including experts in study design and statistical analysis. Resist the temptation to write up that limited registry review from only your center. Go for bigger numbers and better analysis so your contribution to the research literature is a meaningful one!

References:

  1. CONSORT 2010 statement: updated guidelines for reporting parallel group randomised trials. BMJ 340:c332, 2010.
  2. Statistical Power of Randomized Controlled Trials in Trauma Surgery. JACS 237(5):731-736, 2023.

Best Of EAST 2023 #1: The Quality Of Trauma Research

I’ve been reading and reviewing scientific papers for years. One of my biggest pet peeves is the preponderance of studies thrown together with insufficient thought given to research design. One of the most common issues I see is the failure to consider study size and statistical power. The biggest offenders are the underpowered non-inferiority studies that claim two choices are equally valid when there were never enough subjects to show a difference in the first place!

If you want to see this in action, look at the studies claiming that a small chest tube is not inferior to a larger one.
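To illustrate the arithmetic, here is a minimal sketch of the standard normal-approximation sample size formula for a non-inferiority comparison of two proportions. The 20% baseline failure rate and 10% absolute margin are hypothetical numbers for illustration, not figures from any actual chest tube study:

```python
from math import ceil
from scipy.stats import norm

def noninferiority_n(p_control, margin, alpha=0.025, power=0.80):
    """Per-arm sample size for a non-inferiority test of two proportions,
    assuming both arms truly share the control event rate (standard
    normal-approximation formula)."""
    z_alpha = norm.ppf(1 - alpha)  # one-sided type I error
    z_beta = norm.ppf(power)       # corresponds to the desired power
    variance = 2 * p_control * (1 - p_control)
    return ceil((z_alpha + z_beta) ** 2 * variance / margin ** 2)

# Hypothetical example: 20% failure rate, 10% absolute non-inferiority margin
print(noninferiority_n(0.20, 0.10))  # roughly 250 patients per arm
```

Under these assumptions, roughly 250 patients per arm are required; a study that enrolled, say, 50 per arm could never legitimately support a non-inferiority claim.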

But I digress. The first EAST abstract I will discuss critically examined randomized controlled trials (RCTs) relating to trauma published over a ten-year period. The authors, from the Ryder Trauma Center in Miami, reviewed these studies for type (superiority, inferiority, equivalence), sample size calculation, and power analysis.

Here are the factoids:

  • Only 118 randomized clinical trials were identified in 20 journals over the ten years (!!)
  • Only half were registered before performing the research
  • The largest group were equivalence studies (49%)
  • Only half had performed a sample size calculation first (see the sketch after this list), and only half of those actually met their target enrollment (!)
  • 70% of studies had a positive result
  • Overall, only about one-third to one-half of studies were adequately powered to show an effect size
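To make the sample size problem concrete, here is a minimal sketch of the kind of a priori power calculation that should precede any of these trials. It uses Python's statsmodels package, and the 15% versus 10% mortality rates are hypothetical numbers chosen purely for illustration:

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Hypothetical superiority trial: 15% control mortality vs. 10% with treatment
effect = proportion_effectsize(0.15, 0.10)  # Cohen's h for the two rates

n_per_arm = NormalIndPower().solve_power(
    effect_size=effect,
    alpha=0.05,   # two-sided type I error
    power=0.80,   # 80% chance of detecting the effect if it is real
    ratio=1.0,    # equal allocation between arms
)
print(round(n_per_arm))  # on the order of 340 patients per arm
```

Enroll fewer patients than this and the trial can easily miss a real five-point mortality difference, which is precisely the underpowering described in the abstract.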

The authors concluded that a large number of RCTs either did not perform a sample size calculation in advance or did not meet their enrollment targets, and were therefore not powered to detect even a large effect.

Bottom line: Unfortunately, this abstract confirms my informal bias based on reading numerous papers over the years. There is a lot of weak research being published. And this applies not only to the field of trauma but to all scientific work.

There is a tremendous amount of pressure to publish. Those at academic institutions must be productive to keep their job. And the American College of Surgeons Verification Review Committee requires Level I trauma centers to publish twenty papers in peer-reviewed journals every three years. 

Unfortunately, this pressure pushes trauma professionals to come up with weak ideas that may not be well supported statistically. And there is an implicit bias in research publications that rewards positive results. This can be seen in this abstract’s 70% positive result rate. It’s boring to read a paper that shows that some new approach truly didn’t have an appreciable effect. But knowing this fact may help other researchers in the field avoid duplicating ineffective interventions.

This is an important abstract that clearly points out the shortcomings in published randomized controlled trials. But what about the 95+ percent of papers that do not use such a rigorous study design?

Here are my questions/comments for the presenter and authors:

  • Please provide the denominator of all the studies you reviewed; only 118 were RCTs, which is woefully low. Give us an idea of how many less rigorous studies were published over the ten-year study period.
  • Were there any obvious geographic patterns in study quality? Were RCTs from any specific continent of higher quality, from the sample size perspective, than those from others?

This is an important abstract that should stimulate more thought and interest in publishing better papers rather than simply more papers!

Reference: Statistical power of randomized controlled trials (RCT) in the field of trauma surgery. EAST 2023 Podium Abstract #6.


Cognitive Bias – Don’t You Hate It When They Do That?

[Comic: “On Research.” Source: http://chainsawsuit.com/comic/2014/09/16/on-research/]

I sat in on a committee meeting once where the management of a particular clinical problem was being vigorously discussed. One of the participants pulled out his smartphone, did a quick search, and said, “Aha! This article shows that my opinion is correct!”

This approach is wrong on so many levels, it’s almost laughable. But it illustrates a real weakness that all human beings have: susceptibility to cognitive bias. 

Scientists have identified somewhere between 150 and 200 different types of cognitive bias, and trying to sort them all out will make your head spin. For a quick and enlightening read, I recommend the article below. It sifts through the mess and lumps them into four understandable categories.

Bottom line: We are all capable of warping what we read, hear, and see to fit our own vortex of pre-existing beliefs. It’s very important to recognize the possibility of bias when you are seeking information so that you can do everything possible to minimize its impact. If you can’t or won’t do that, you’ll end up being that know-it-all guy with the smartphone.
