I’ve been reading and reviewing scientific papers for years. One of my biggest pet peeves is the preponderance of studies thrown together with insufficient thought given to research design. The most common issue I see is the failure to consider study size and statistical power. The biggest offenders are the underpowered non-inferiority studies that claim two choices are equally valid when there were never enough subjects to show a difference in the first place!
If you want to see this in action, look at the “small chest tube is not inferior to a bigger chest tube” studies.
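To see why this matters, here is a minimal sketch of a power check, assuming Python with the statsmodels package. The complication rates (10% vs. 5%) and the 50-patients-per-arm enrollment are hypothetical numbers chosen for illustration, not figures from any actual chest tube trial:

```python
# A minimal sketch of a power check, assuming the statsmodels package.
# All numbers here are hypothetical, chosen only for illustration.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Suppose the true complication rates are 10% (one arm) vs. 5% (the other).
effect = proportion_effectsize(0.10, 0.05)  # Cohen's h for two proportions

# Power of a two-arm trial with only 50 patients per arm at alpha = 0.05:
power = NormalIndPower().power(effect_size=effect, nobs1=50, alpha=0.05, ratio=1.0)
print(f"Power with 50 patients/arm: {power:.0%}")  # roughly 16%
```

With power that low, a “no difference found” result tells you almost nothing; the trial would miss a real doubling of the complication rate more than eight times out of ten.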
But I digress. The first EAST abstract I will discuss critically examined randomized clinical trials (RCTs) relating to trauma published over a ten-year period. The authors, from the Ryder Trauma Center in Miami, reviewed these studies for type (superiority, non-inferiority, equivalence), sample size calculation, and power analysis.
Here are the factoids:
- Only 118 randomized clinical trials were identified in 20 journals over the ten years (!!)
- Only half were registered before the research was performed
- The most common design was equivalence (49%)
- Only half had performed a sample size calculation first, and only half of those actually met their target enrollment (!)
- 70% of studies had a positive result
- Overall, only about one-third to one-half of the studies were adequately powered to detect their target effect size (see the sketch after this list)
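For context, the up-front sample size calculation that half of these studies skipped is straightforward to do before enrolling a single patient. Here is a minimal sketch, again assuming statsmodels, with hypothetical target mortality rates of 15% vs. 10%:

```python
# A minimal sketch of an up-front sample size calculation.
# The 15% vs. 10% target rates are hypothetical, for illustration only.
from math import ceil
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

effect = proportion_effectsize(0.15, 0.10)  # Cohen's h for the target difference

# Solve for the per-arm enrollment giving 80% power at alpha = 0.05:
n_per_arm = NormalIndPower().solve_power(effect_size=effect, alpha=0.05, power=0.80, ratio=1.0)
print(f"Patients needed per arm: {ceil(n_per_arm)}")  # roughly 680 per arm
```

Numbers like these make it obvious why so many trials fall short of their enrollment targets: honestly detecting even a modest absolute difference can require hundreds of patients per arm.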
The authors concluded that a large number of RCTs did not perform a sample size calculation in advance, did not meet their enrollment targets, or were not adequately powered to detect even a large effect.
Bottom line: Unfortunately, this abstract confirms my informal bias based on reading numerous papers over the years. There is a lot of weak research being published. And this applies not only to the field of trauma but to all scientific work.
There is a tremendous amount of pressure to publish. Those at academic institutions must be productive to keep their job. And the American College of Surgeons Verification Review Committee requires Level I trauma centers to publish twenty papers in peer-reviewed journals every three years.
Unfortunately, this pressure pushes trauma professionals to come up with weak ideas that may not be well supported statistically. And there is an implicit bias in research publications that rewards positive results. This can be seen in this abstract’s 70% positive result rate. It’s boring to read a paper that shows that some new approach truly didn’t have an appreciable effect. But knowing this fact may help other researchers in the field avoid duplicating ineffective interventions.
This is an important abstract that clearly points out the shortcomings in published randomized controlled trials. But what about the 95+ percent of papers that do not use such a rigorous study design?
Here are my questions/comments for the presenter and authors:
- Please provide the denominator of all the studies you reviewed. Only 118 were RCTs, which is woefully low. Please give us an idea of how many less rigorous studies were published over the ten-year study period.
- Were there any obvious geographical patterns in study quality? Were RCTs from any specific continent of higher quality than others from a sample size perspective?
This important abstract should stimulate more thought and interest in publishing better papers rather than more papers!
Reference: Statistical power of randomized controlled trials (RCT) in the field of trauma surgery. EAST 2023, Podium Abstract #6.