
The Power Of Trauma Research?

The veracity of medical research conclusions, specifically trauma research, has always fascinated me. I see so many people who are content to jump to conclusions based on a paper’s title or the conclusions listed in its abstract. However, having read thousands of research papers over the years, I am no longer surprised when these easy-to-digest blurbs don’t entirely match up with the actual details embedded within the manuscript.

The authors of these publications genuinely intend to provide new and valuable information, either for clinical use or to allow future studies to build upon. Unfortunately, mistakes can be made that degrade their value. Problems with research design are among the top culprits that affect the significance of these papers.

I will focus on what are considered “gold standard” studies in this post. The randomized controlled trial (RCT) is usually considered the big kahuna among research designs. RCTs can provide solid answers to our clinical questions if designed and carried out properly.

In 2010, a 25-item checklist (CONSORT 2010, see reference 1) was created to guide researchers on the essential items that should be included in all RCT reports to ensure high-quality results. A group of investigators from Jackson Memorial in Miami and Denver Health published a paper earlier this month that critically reviewed 187 trauma surgery RCTs published from 2000 to 2021. They analyzed power calculations, adherence to the CONSORT guidelines, and an interesting metric called the fragility index.

Here is a summary of their findings:

  • Only 46% of studies calculated the sample size needed to test their hypothesis before beginning data collection. With no pre-defined stopping point, the researchers might never be able to demonstrate significant results, or may waste money on subjects in excess of the number actually needed.
  • Of the studies that did calculate a needed sample size, two-thirds did not achieve it and were not powered to identify even very large effects. Once again, they are spending research money and will almost certainly be unable to show statistical differences between the groups, even if one actually exists.
  • The CONSORT checklist was applied to studies published after it was released in 2010; the average number of criteria met was 20/25, and only 11% met all 25 criteria. The most common issue was failure to publish the randomization scheme, which makes it impossible to verify that allocation was free of bias.
  • Among 30 studies that had a statistically significant binary outcome, the mean fragility index was 2. In half of these studies, having a different outcome in as few as two patients could swing the final results and conclusion of the study (a sketch of how this index is calculated follows this list).
  • The majority of the studies (76%) were single-center trials. Frequently, such trial results cannot be generalized to larger and more disparate populations. Larger, confirmatory studies often have results that are at odds with the single-center ones.
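
For anyone unfamiliar with the fragility index, here is a minimal sketch of how it can be computed, assuming a two-arm trial with a binary outcome and a two-sided Fisher’s exact test at p < 0.05. The function and the example counts are mine, purely for illustration; they are not drawn from the paper.

```python
# A minimal sketch of a fragility index calculation (illustrative only).
# Assumes a two-arm trial, a binary outcome, and a two-sided Fisher's exact
# test at alpha = 0.05; the example counts below are invented.
from scipy.stats import fisher_exact

def fragility_index(events_a, n_a, events_b, n_b, alpha=0.05):
    """Count how many patients must have a different outcome before a
    'significant' result is no longer significant."""
    def p_value(ea, eb):
        _, p = fisher_exact([[ea, n_a - ea], [eb, n_b - eb]])
        return p

    if p_value(events_a, events_b) >= alpha:
        return 0  # not significant to begin with

    flips, ea, eb = 0, events_a, events_b
    while p_value(ea, eb) < alpha:
        if ea <= eb and ea < n_a:   # flip a non-event in the arm with fewer events
            ea += 1
        elif eb < n_b:
            eb += 1
        elif ea < n_a:
            ea += 1
        else:
            break                   # every patient already has the event
        flips += 1
    return flips

# Hypothetical trial arms: prints how many changed outcomes would erase
# the statistical significance of this result.
print(fragility_index(5, 100, 18, 100))
```

Plug in the arm sizes and event counts from any trial with a “significant” binary outcome and you get a quick feel for how precarious its conclusion really is.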

Bottom line: What does it all mean? Basically, a lot of well-intentioned but poorly designed research gets published. The sheer volume of it and the work needed to interpret it correctly make it very difficult for the average trauma professional to understand and apply. And unfortunately, there is a lot of pressure to pump out publication volume and not necessarily quality. 

My advice for researchers is to ensure you have access to good research infrastructure, including experts in study design and statistical analysis. Resist the temptation to write up that limited registry review from only your center. Go for bigger numbers and better analysis so your contribution to the research literature is a meaningful one!

References:

  1. CONSORT 2010 Guidelines Checklist for randomized, controlled trials
  2. Statistical Power of Randomized Controlled Trials in Trauma Surgery. JACS 237(5):731-736, 2023.

Best Of EAST 2023 #1: The Quality Of Trauma Research

I’ve been reading and reviewing scientific papers for years. One of my biggest pet peeves is the preponderance of studies that have been thrown together with insufficient thought given to research design. One of the most common issues I see in any study is the failure to look at study size and statistical power. The biggest offenders are the underpowered non-inferiority studies that claim two choices are equally valid when there were never enough subjects to show a difference in the first place!

If you want to see this in action, look at the studies claiming that a small chest tube is not inferior to a larger one.

But I digress. The first EAST abstract I will discuss critically examined randomized controlled trials (RCTs) relating to trauma published over a ten-year period. The authors, from the Ryder Trauma Center in Miami, reviewed these studies for type (superiority, inferiority, equivalence), sample size calculation, and power analysis.

Here are the factoids:

  • Only 118 randomized clinical trials were identified in 20 journals over the ten years (!!)
  • Only half were registered before performing the research
  • Equivalence studies were the largest group (49%)
  • Only half had performed a sample size calculation first (a sketch of such a calculation follows this list), and only half of those actually met their target enrollment (!)
  • 70% of studies had a positive result
  • Overall, only about one-third to one-half of studies were adequately powered to detect an effect
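
To make the “sample size calculation first” point concrete, here is a rough sketch of the kind of a priori calculation the authors were looking for, assuming a two-arm trial with a binary outcome. The event rates, alpha, and power are hypothetical values chosen only for illustration.

```python
# A rough sketch of an a priori sample size calculation for a two-arm RCT
# with a binary outcome, using statsmodels. All numbers are hypothetical.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

control_rate = 0.20    # assumed event rate with the current standard of care
treatment_rate = 0.12  # event rate the investigators hope to demonstrate

# Cohen's h expresses the difference between two proportions as an effect size
effect = abs(proportion_effectsize(treatment_rate, control_rate))

# Solve for subjects per arm at alpha = 0.05 and 80% power
n_per_arm = NormalIndPower().solve_power(effect_size=effect,
                                         alpha=0.05,
                                         power=0.80,
                                         alternative='two-sided')
print(round(n_per_arm))  # enrollment target per arm, set before the trial starts
```

If enrollment stops short of this target, the trial is underpowered: a real difference between the arms may exist, but the study will probably be unable to demonstrate it.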

The authors concluded that a large number of RCTs either did not perform a sample size calculation in advance or did not meet their enrollment targets, and were therefore not powered to detect even a large effect.

Bottom line: Unfortunately, this abstract confirms my informal bias based on reading numerous papers over the years. There is a lot of weak research being published. And this applies not only to the field of trauma but to all scientific work.

There is a tremendous amount of pressure to publish. Those at academic institutions must be productive to keep their jobs. And the American College of Surgeons Verification Review Committee requires Level I trauma centers to publish twenty papers in peer-reviewed journals every three years.

Unfortunately, this pressure pushes trauma professionals to come up with weak ideas that may not be well supported statistically. And there is an implicit bias in research publications that rewards positive results. This can be seen in this abstract’s 70% positive result rate. It’s boring to read a paper that shows that some new approach truly didn’t have an appreciable effect. But knowing this fact may help other researchers in the field avoid duplicating ineffective interventions.

This is an important abstract that clearly points out the shortcomings in published randomized controlled trials. But what about the 95+ percent of papers that do not use such a rigorous study design?

Here are my questions/comments for the presenter and authors:

  • Please provide the denominator of all the studies you reviewed. Only 118 were RCTs, which is woefully low. Please give us an idea of how many less rigorous studies were published over the ten-year study period.
  • Were there any obvious geographical patterns in study quality? Were RCTs from any specific continent of higher quality, from a sample size perspective, than others?

This is an important abstract that should stimulate more thought and interest in publishing better papers rather than more papers!

Reference: Statistical power of randomized controlled trials (RCT) in the field of trauma surgery. EAST 2023 podium abstract #6.

How To Tell If Research Is Crap

I recently read a very interesting article on research and found it quite pertinent to the state of academic research today. It was published on Manager Mint, a site that considers itself to be “the most valuable business resource.” (?) But the message is very applicable to trauma professionals, medical professionals, and probably anyone else who engages in research pursuits. The full citation is listed at the end of this post.

1. Research is not good because it is true, but because it is interesting.

Interesting research doesn’t just restate what is already known. It creates or explores new territory. Don’t just read and believe existing dogma.

Critique it.

Question it. Then devise a way to see if it’s really true.

2. Good research is innovative.

Some of the best ideas come from combining ideas from various disciplines.

Some of the best research ideas are derived from applying concepts from totally unrelated fields to your own.

That’s why I read so many journals, blogs, and newsfeeds from many different fields. And even if you are not doing the research, a broad background can help you sort out and gain perspective as you read the works of others.

3. Good research is useful.

Yes, basic bench-level research can potentially be helpful in understanding all the nuances of a particular biochemical or disease process. But a lot of the time, it just demonstrates relatively unimportant chemical or biological reactions. And only a very small number actually contribute to the big picture. For most of us working at a macro level, research that could actually change our practice or policies is really what we need.

4. The best research should be empirically derived.

It shouldn’t rely on complicated statistical models. If it does, it means that the effect being measured is very subtle, and potentially not clinically significant. There is a big difference between statistical and clinical relevance.
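
To see the difference, here is a toy example with invented numbers. With a hypothetical 100,000 patients per arm, an absolute difference of less than two-tenths of a percentage point in event rates still comes out “statistically significant,” even though no one would change their practice over it.

```python
# Toy illustration of statistical vs. clinical relevance (invented numbers).
# With an enormous sample, a clinically trivial difference reaches p < 0.05.
from statsmodels.stats.proportion import proportions_ztest

events = [2080, 1920]      # events observed in each arm
n = [100_000, 100_000]     # hypothetical enrollment per arm

stat, p = proportions_ztest(events, n)
print(f"{events[0]/n[0]:.2%} vs {events[1]/n[1]:.2%}, p = {p:.3f}")
# ~2.08% vs ~1.92%: statistically significant, clinically meaningless
```

The reverse is also true: a clinically important effect can fail to reach significance simply because the study enrolled too few patients.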

Reference: If You Can’t Answer “Yes” To These 5 Questions, Your Research Is Rubbish. Garrett Stone, Manager Mint.

Why Is So Much Published Research So Bad?

Welcome to two days of rants about bad research!

I read lots of trauma-related articles every week. And as I browse through them, I often find studies that leave me wondering how they ever got published. And this is not a new phenomenon. Look at any journal a year ago. Five years ago. Twenty years ago. And even older. The research landscape is littered with their carcasses.

And on a related note, sit down with any serious clinical question in your field that you want to answer. Do a deep dive with one of the major search engines and try to get an answer. Or better yet, let the professionals from the Cochrane Library or another organization do it for you. Invariably, you will find hints and pieces of the answer you seek. But never the completely usable solution you desire.

Why is it so hard? Even with tens of thousands of articles being published every year?

Because there is no overarching plan! Individuals are forced to produce research as a condition of their employment. Or to assure career advancement. Or to get into medical school, or a “good” residency. And in the US, Level I trauma centers are required to publish at least 20 papers every three years to maintain their status. So there is tremendous pressure across all disciplines to publish something.

Unfortunately, that something is usually work that is easily conceived and quickly executed. A registry review, or some other type of retrospective study. They are easy to get approval for, take little time to complete and analyze, and have the potential to get published quickly.

But what this “publish or perish” mentality promotes is a random jumble of answers that we didn’t really need and can’t learn a thing from. There is no planning. There is no consideration of what questions we really need to answer. Just a random bunch of thoughts that are easy to get published but never get cited by anyone else.

Bottom line: How do we fix this? Not easily. Give every work a “quality score.” Instead of focusing on the quantity of publications, the “authorities” (tenure committees and the journal editors themselves) need to focus on quality. Extra credit should be given to multicenter trial involvement, prospective studies, and other higher-quality projects. These would increase the quality score. The actual number of publications should not matter as much as how much high-quality work is in progress. Judge the individual or center on their total quality score, not the absolute number of papers they produce. Sure, the sheer number of studies published will decline, but the quality will increase exponentially!
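
As a back-of-the-envelope illustration of this idea, here is a hypothetical scoring scheme. The categories and weights are entirely invented; the point is only that a weighted score separates a center producing a few rigorous studies from one churning out many quick registry reviews.

```python
# A purely hypothetical "quality score" sketch; the categories and weights
# are invented for illustration and are not part of any real scoring system.
QUALITY_WEIGHTS = {
    "multicenter_rct": 10,
    "single_center_rct": 6,
    "prospective_observational": 4,
    "retrospective_registry_review": 1,
}

def quality_score(publications):
    """Sum the weights of a center's publications instead of counting them."""
    return sum(QUALITY_WEIGHTS.get(kind, 0) for kind in publications)

# Eight quick registry reviews score lower than two higher-quality projects.
print(quality_score(["retrospective_registry_review"] * 8))             # 8
print(quality_score(["multicenter_rct", "prospective_observational"]))  # 14
```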

Tomorrow, the big picture view on how to detect bad research.

First, Read The Paper. THEN THINK ABOUT IT!

This is a perfect example of why you cannot simply read an abstract! And in this case, you can’t just read the paper, either. You’ve got to think critically about it and see if the conclusions are reasonable. And if they are not, you need to go back and try to figure out why.

A study was published a few years ago regarding bleeding after nonoperative management of splenic injury. The authors had been performing an early follow-up CT within 48 hours of admission for more than 12 years (!). They wrote this paper comparing their recent experience with the period before they implemented the practice.

Here are the factoids. Pay attention closely:

  • 773 adult patients were retrospectively studied from 1995 to 2012
  • Of 157 studied from 1995 to 1999, 83 (53%) were stable and treated nonoperatively. Ten failed, and all the rest underwent repeat CT after 7 days.
  • After a “sentinel delayed splenic rupture event”, the protocol was revised, and a repeat CT was performed in all patients at 48 hours. Pseudoaneurysm or extravasation initially or after repeat scan prompted a trip to interventional radiology.
  • Of 616 studied from 2000-2012, after the protocol change, 475 (77%) were stable and treated nonoperatively. Three failed, and it is unclear whether this happened before or after the repeat CT at 48 hours.
  • 22 high risk lesions were found after the first scan, and 29 were found after the repeat. 20% of these were seen in Grade 1 and 2 injuries. All were sent for angiography.
  • There were 4 complications of angiography (8%), with one requiring splenectomy.
  • Length of stay decreased from 8 days to 6.

So it sounds like we should be doing repeat CT in all of our nonoperatively managed spleens, right? The failure rate decreased from 12% (10 of 83) to less than 1% (3 of 475). Time in the hospital decreased significantly as well.

Wrong! Here are the problems/questions:

  • Why were so many of their patients considered “unstable” and taken straight to OR (47% and 23%)?
  • CT sensitivity for detecting high risk lesions in the 1990s was nothing like it is today.
  • The accepted success rate for nonop management is about 95%, give or take. The 99.4% in this study suggests that some patients ended up going to OR who didn’t really need to, making this number look artificially high.
  • The authors did not separate pseudoaneurysm from extravasation on CT. And they found them in Grade 1 and 2 injuries, which essentially never fail.
  • 472 people got an extra CT scan
  • 4 people (8%) had complications from angiography, which is higher than the oft-cited 2-3%. And one lost his spleen because of it.
  • Is a 6 day hospital stay reasonable or necessary?

Bottom line: This paper illustrates two things:

  1. If you look at your data without the context of what others have done, you can’t tell if it’s an outlier or not; and
  2. It’s interesting what reflexively reacting to a single adverse event can make us do.

The entire protocol is based on one bad experience at this hospital in 1999. Since then, a substantial number of people have been subjected to additional radiation and the possibility of harm in the interventional suite. How can so many other trauma centers use only a single CT scan and have excellent results?

At Regions Hospital, we see in excess of 100 spleen injuries per year. A small percentage are truly unstable and go immediately to OR. About 97% of the remaining stable patients are successfully managed nonoperatively, and only one or two return annually with delayed bleeding. It is seldom immediately life-threatening, especially if the patient has been informed about clinical signs and symptoms they should be looking for. And our average length of stay is 2-3 days depending on grade.

Never read just the abstract. Take the rest of the manuscript with a grain of salt. And think!

Reference: Delayed hemorrhagic complications in the nonoperative management of blunt splenic trauma: early screening leads to a decrease in failure rate. J Trauma 76(6):1349-1353, 2014.