
The Power Of Trauma Research?

The veracity of medical research conclusions, specifically trauma research, has always fascinated me. I see so many people who are content to jump to conclusions based on a paper’s title or the conclusions listed in its abstract. However, having read thousands of research papers over the years, I am no longer surprised when these easy-to-digest blurbs don’t entirely match up with the actual details embedded within the manuscript.

The authors of these publications genuinely intend to provide new and valuable information, either for clinical use or to allow future studies to build upon. Unfortunately, mistakes can be made that degrade their value. Problems with research design are among the top culprits that affect the significance of these papers.

I will focus on what are considered “gold standard” studies in this post. The randomized, controlled trial (RCT) is usually considered the big kahuna among research designs. When properly designed and carried out, RCTs can provide solid answers to our clinical questions.

In 2010, a 25-item checklist (CONSORT 2010, see reference 1) was created to guide researchers on the essential items that should be included in all RCT reports to ensure high-quality results. A group of investigators from Jackson Memorial in Miami and Denver Health published a paper earlier this month that critically reviewed 187 trauma surgery RCTs published from 2000 to 2021. They analyzed power calculations, adherence to the CONSORT guidelines, and an interesting metric called the fragility index.

Here is a summary of their findings:

  • Only 46% of studies calculated the sample size needed to demonstrate their hypothesis before beginning data collection. Without a predefined enrollment target, researchers may never be able to demonstrate significant results, or may waste money on subjects in excess of the number actually needed.
  • Of the studies that did calculate the needed sample size, two-thirds did not achieve it and were not powered to identify even very large effects. Once again, they spent research money but will almost certainly be unable to show statistical differences between the groups, even if one actually exists.
  • The CONSORT checklist was applied to studies published after its introduction in 2010; the average number of criteria met was 20 of 25, and only 11% met all 25. The most common omission was failure to publish the randomization scheme, making it impossible to verify that group assignment was free of bias.
  • Among 30 studies that had a statistically significant binary outcome, the mean fragility index (the number of patients whose outcome would have to change to erase statistical significance) was 2. In half of these studies, having a different outcome in as few as two patients could swing the final results and conclusion of the study (see the sketch after this list).
  • The majority of the studies (76%) were single-center trials. Frequently, such trial results cannot be generalized to larger and more disparate populations. Larger, confirmatory studies often have results that are at odds with the single-center ones.
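
To make the power and fragility concepts concrete, here is a minimal sketch in Python. It is not taken from the paper; the event rates and counts are hypothetical examples, and it assumes the scipy and statsmodels packages are available. It shows how an up-front sample size calculation and a post-hoc fragility index can be computed for a two-arm trial with a binary outcome.

    # Minimal sketch, not from the paper. All rates and counts are hypothetical.
    from scipy.stats import fisher_exact
    from statsmodels.stats.power import NormalIndPower
    from statsmodels.stats.proportion import proportion_effectsize

    # 1. Up-front sample size: patients needed per arm to detect a drop in
    #    mortality from 20% to 10% with alpha = 0.05 and 80% power.
    effect = proportion_effectsize(0.20, 0.10)
    n_per_arm = NormalIndPower().solve_power(effect_size=effect,
                                             alpha=0.05, power=0.80)
    print(f"Patients needed per arm: {n_per_arm:.0f}")  # about 97

    # 2. Post-hoc fragility index: how many patients' outcomes must change
    #    (non-event -> event, in the arm with fewer events) before Fisher's
    #    exact test loses significance (p >= 0.05)?
    def fragility_index(events_a, n_a, events_b, n_b, alpha=0.05):
        a, b = events_a, events_b
        _, p = fisher_exact([[a, n_a - a], [b, n_b - b]])
        if p >= alpha:
            return 0  # not significant to begin with
        flips = 0
        while p < alpha:
            if a <= b:
                a += 1  # add an event to the arm with fewer events
            else:
                b += 1
            flips += 1
            _, p = fisher_exact([[a, n_a - a], [b, n_b - b]])
        return flips

    # A "significant" trial result of 10/100 vs 25/100 events:
    print(f"Fragility index: {fragility_index(10, 100, 25, 100)}")

A fragility index of 2, the mean in this review, means that a different outcome in just two patients would have erased the statistical significance of the result.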

Bottom line: What does it all mean? Basically, a lot of well-intentioned but poorly designed research gets published. The sheer volume of it and the work needed to interpret it correctly make it very difficult for the average trauma professional to understand and apply. And unfortunately, there is a lot of pressure to pump out publication volume and not necessarily quality. 

My advice for researchers is to ensure you have access to good research infrastructure, including experts in study design and statistical analysis. Resist the temptation to write up that limited registry review from only your center. Go for bigger numbers and better analysis so your contribution to the research literature is a meaningful one!

References:

  1. CONSORT 2010 Guidelines Checklist for randomized, controlled trials
  2. Statistical Power of Randomized Controlled Trials in Trauma Surgery. J Am Coll Surg 237(5):731-736, 2023.

Pet Peeve: “High Index of Suspicion”

How often have you heard this phrase in a talk or seen it in a journal article:

“Maintain a high index of suspicion”

What does this mean??? It’s been popping up in papers and textbooks for at least 30 years. And to me, it’s meaningless. You try to figure out that sentence!

An index is a number, usually mathematically derived in some way. Yet whenever I see or hear this phrase, it doesn’t apply to anything quantifiable. What the author is really referring to is “a high level of suspicion,” not an index.

This term has become a catch-all to caution the reader or listener to think about a (usually) less common diagnostic possibility. As trauma professionals, we are advised to do this about so many things that it really has become sad and meaningless. And don’t we all do this anyway?

Bottom line: Don’t use this phrase in your presentations or writing. It’s silly and doesn’t make any sense. And feel free to chide any of your colleagues who do. Please give us some concrete data so we don’t have to be so suspicious!

Reference: High index of suspicion. Ann Thorac Surg 64:291-292, 1997.

Pet Peeve: Conflicts Of Interest

For the longest time, one of my pet peeves has been potential conflicts of interest (COI) involving authors of research papers. There is no single definition of the term “conflict of interest.” However, a simple way to think of it is a situation in which one’s personal interests may influence one’s professional responsibilities.

Upton Sinclair said it more simply in a book he wrote in the 1930s:

“It is difficult to get a man to understand something when his salary depends upon his not understanding it!”

A corollary of this is more useful when applied to research:

“One can be led to believe anything when their salary depends on it.”

Psychological research has demonstrated time and time again that human behavior is easily influenced. Even receiving a tiny gift leads the receiver to perceive the giver favorably and to be more willing to reciprocate, even without being asked. Ever wonder why your hospital won’t let pharma representatives sponsor lunches anymore? Or even give you a pen? Believe it or not, these little things can change your attitude regarding their product. And even if you firmly believe you can’t be swayed, you can’t change the basic operating system in your brain. Your behavior will change.

We know that behavioral changes after receiving a small gift can be significant. What if the gift is not so small? What about a research grant? A position on an advisory board? Free drugs or devices for your research? Corporate stock? These can significantly improve one’s academic rank, job security, financial status, and more. Think about this with respect to my corollary quote above.

An interesting paper published in the Canadian Medical Association Journal twenty years ago looked at industry-sponsored randomized controlled trials and how often they produced statistically significant positive results. The authors reviewed papers in eight leading surgical and five leading medical journals over three years. They applied a rigorous evaluation method, looking for an association between industry support and positive study results while controlling for study quality and size.

The authors found that, overall, positive results were nearly twice as likely in industry-funded studies. The numbers were even more pronounced for drugs and surgical procedures: drug studies were 5x more likely to be positive, and new surgical procedure studies 8x more likely, when receiving industry support.
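
For the curious, here is what a figure like “nearly twice as likely” means mechanically. This minimal sketch uses made-up counts, not the CMAJ paper’s actual data, to compute an odds ratio from a 2×2 table of funding source versus study outcome (it assumes Python with scipy):

    # Hypothetical counts for illustration only -- not the CMAJ paper's data.
    from scipy.stats import fisher_exact

    #          positive  negative/neutral
    table = [[60, 40],   # industry-funded studies
             [40, 60]]   # non-industry studies

    # scipy returns the sample odds ratio, (60*60)/(40*40) = 2.25 here,
    # along with a p-value from Fisher's exact test.
    odds_ratio, p_value = fisher_exact(table)
    print(f"OR = {odds_ratio:.2f}, p = {p_value:.4f}")

The more lopsided the table, the larger the odds ratio; the 5x and 8x figures simply reflect more skewed tables.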

Does this surprise you? It shouldn’t. There are numerous ways to design (manipulate) studies by playing with the characteristics of the study groups, the statistical analysis, and even the wording of the manuscript. And at worst, the study can simply be trashed at the author’s (or sponsor’s) whim, never to see the light of day. Ever wonder why you (almost) never see a negative or even a neutral result in a study where the authors have received some benefit from industry? The results are very frequently positive (or at least “non-inferior”).

Like other high-quality journals, the Journal of Trauma and Acute Care Surgery has recognized the potential dangers of COI and its impact on the integrity of the papers it publishes. The editors consider it enough of a problem that they have revised their COI policy; a publication in process outlines their new stance (see reference 2).

The Journal will now require detailed COI forms to be filed at the time of manuscript submission; the manuscript will not progress through the review process until they are received. Reviewers cannot see the forms, but are encouraged to independently review data in physician payment databases. (I’m not confident all reviewers will be this meticulous.) If the editors believe a significant conflict exists, they may require revision or retraction. Egregious violations could even result in a ban from future publication.

Bottom line: It’s about time! More and more journals are cracking down on conflicts of interest. This doesn’t mean that they won’t accept manuscripts with such conflicts. It merely means that the authors must provide a detailed list of all their conflicts. It will then be up to you to gauge if these conflicts could have impacted the study and how large a grain of salt to keep on your desk as you read it and decide if it is believable.

References:

  1. Association between industry funding and statistically significant pro-industry findings in medical and surgical randomized trials. Can Med Assn J, 170(4):477-480, 2004.
  2. The Journal of Trauma and Acute Care Surgery Position on the Issue of Disclosure of Conflict of Interests by Authors of Scientific Manuscripts. Journal of Trauma and Acute Care Surgery, Publish Ahead of Print DOI: 10.1097/TA.0000000000004024, 2023.

First, Read The Paper. THEN THINK ABOUT IT!

This is a perfect example of why you cannot simply read an abstract! And in this case, you can’t just read the paper, either. You’ve got to think critically about it and decide whether the conclusions are reasonable. And if they are not, you need to go back and try to figure out why.

A study was published a few years ago regarding delayed bleeding after nonoperative management of splenic injury. The authors had been performing an early follow-up CT within 48 hours of admission for more than 12 years(!). They wrote this paper comparing their recent experience with the period before they implemented the practice.

Here are the factoids. Pay close attention:

  • 773 adult patients were retrospectively studied from 1995 to 2012
  • Of 157 studied from 1995 to 1999, 83 (53%) were stable and treated nonoperatively. Ten failed, and all the rest underwent repeat CT after 7 days.
  • After a “sentinel delayed splenic rupture event,” the protocol was revised, and a repeat CT was performed in all patients at 48 hours. Pseudoaneurysm or extravasation, either initially or on the repeat scan, prompted a trip to interventional radiology (the decision logic is sketched after this list).
  • Of 616 studied from 2000 to 2012, after the protocol change, 475 (77%) were stable and treated nonoperatively. Three failed, and it is unclear whether this happened before or after the 48-hour repeat CT.
  • 22 high-risk lesions were found after the first scan, and 29 more after the repeat. 20% of these were seen in Grade 1 and 2 injuries. All were sent for angiography.
  • There were 4 complications of angiography (8%), with one requiring splenectomy.
  • Length of stay decreased from 8 days to 6.
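
To make the revised protocol easier to follow (and to critique), here is a sketch of its decision logic as I read it from the paper. The function and finding names are mine, purely for illustration; this is obviously not clinical software.

    # Decision logic of the revised (post-1999) protocol, as I read it.
    # All names are illustrative.
    HIGH_RISK = {"pseudoaneurysm", "extravasation"}

    def manage_blunt_splenic_injury(stable, initial_ct, repeat_ct_48h):
        """Return the next step. The CT arguments are sets of findings."""
        if not stable:
            return "laparotomy"       # unstable: straight to OR
        if initial_ct & HIGH_RISK:
            return "angiography"      # high-risk lesion on the first scan
        # the revised protocol repeats the CT at 48 hours in ALL patients
        if repeat_ct_48h & HIGH_RISK:
            return "angiography"      # high-risk lesion on the repeat scan
        return "continue observation"

    # Stable patient, clean initial scan, pseudoaneurysm found at 48 hours:
    print(manage_blunt_splenic_injury(True, set(), {"pseudoaneurysm"}))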

So it sounds like we should be doing repeat CT in all of our nonoperatively managed spleens, right? The failure rate decreased from 12% (10 of 83) to less than 1% (3 of 475). Time in the hospital decreased significantly as well.

Wrong! Here are the problems/questions:

  • Why were so many of their patients considered “unstable” and taken straight to OR (47% and 23%)?
  • CT sensitivity for detecting high-risk lesions in the 1990s was nothing like it is today.
  • The accepted success rate for nonop management is about 95%, give or take. The 99.4% in this study suggests that some patients ended up going to OR who didn’t really need to, making this number look artificially high.
  • The authors did not separate pseudoaneurysm from extravasation on CT. And they found these lesions in Grade 1 and 2 injuries, which essentially never fail.
  • 472 people got an extra CT scan.
  • 4 people (8%) had complications from angiography, which is higher than the oft-cited 2-3%. And one lost his spleen because of it.
  • Is a 6-day hospital stay reasonable or necessary?

Bottom line: This paper illustrates two things:

  1. If you look at your data without the context of what others have done, you can’t tell if it’s an outlier or not; and
  2. It’s interesting what reflexively reacting to a single adverse event can make us do.

The entire protocol is based on one bad experience at this hospital in 1999. Since then, a substantial number of people have been subjected to additional radiation and the possibility of harm in the interventional suite. How can so many other trauma centers use only a single CT scan and have excellent results?

At Regions Hospital, we see in excess of 100 spleen injuries per year. A small percentage are truly unstable and go immediately to OR. About 97% of the remaining stable patients are successfully managed nonoperatively, and only one or two return annually with delayed bleeding. It is seldom immediately life-threatening, especially if the patient has been informed about clinical signs and symptoms they should be looking for. And our average length of stay is 2-3 days depending on grade.

Never read just the abstract. Take the rest of the manuscript with a grain of salt. And think!

Reference: Delayed hemorrhagic complications in the nonoperative management of blunt splenic trauma: early screening leads to a decrease in failure rate. J Trauma 76(6):1349-1353, 2014.

Pet Peeve: (Not So) Clever Medical Study Acronyms

I’m not a big fan of acronyms, although they do serve a purpose. We use them all the time when providing medical care. CBC. CTA. CXR. ROSC. And a zillion others. And they can actually be helpful, so you don’t have to say or write down some ridiculously long phrase. OMG.

But what really bothers me is the rise of researchers designing clever acronyms for medical studies. The first one, the University Group Diabetes Program (UGDP), was developed in the 1970s. The acronym was actually a shortening by journals and the media to make for an easier presentation, not by the study group itself.

But then, in the 1980s, the Multiple Risk Factor Intervention Trial (MRFIT) came along. It evaluated the impact of multiple interventions on cardiovascular mortality. Mr. Fit. Get it? This was the first of an ever-growing number of studies that chose acronyms either cleverly related to the work in some way or forming a catchy new word to help people remember it.

And the number of these acronyms has been growing rapidly: from 245 in 1992 to 4,100 in 2002, a 16-fold increase. There are now so many acronyms that many simple ones are being reused. And it seems like studies without an acronym are becoming the minority.

Plus, we’ve moved away from creating pure acronyms like UGDP that are derived from the first letter of each word. Now we use multiple letters from a word, skip some words altogether, or don’t bother with the words at all. There are MICHELANGELO, MATISSE, PICASSO, and EINSTEIN studies that were given those names purely for the positive association. They have nothing to do with the studies at all.

This is all a warm-up for my next post, which reviews a geriatric trauma prognosis calculator from the PALLIATE consortium (Prognostic Assessment of Life and Limitations After Trauma in the Elderly). Groan! The title itself almost made me not want to read it. But I am compelled. Tune in Monday.

Reference: SearCh for humourIstic and Extravagant acroNyms and Thoroughly Inappropriate names For Important Clinical trials (SCIENTIFIC): qualitative and quantitative systematic study. BMJ. 2014;349:g7092.