Tag Archives: research

First, Read The Paper. THEN THINK ABOUT IT!

This is a perfect example of why you can’t simply read an abstract! And in this case, you can’t just read the paper, either. You’ve got to think critically about it and decide whether the conclusions are reasonable. And if they are not, you need to go back and figure out why.

A study was published a few years ago regarding bleeding after nonoperative management of splenic injury. The authors had been performing an early follow-up CT within 48 hours of admission for more than 12 years(!). They wrote this paper comparing their recent experience with the time interval before they implemented the practice.

Here are the factoids. Pay close attention:

  • 773 adult patients were retrospectively studied from 1995 to 2012
  • Of 157 studied from 1995 to 1999, 83 (53%) were stable and treated nonoperatively. Ten failed, and all the rest underwent repeat CT after 7 days.
  • After a “sentinel delayed splenic rupture event”, the protocol was revised, and a repeat CT was performed in all patients at 48 hours. Pseudoaneurysm or extravasation initially or after repeat scan prompted a trip to interventional radiology.
  • Of 616 studied from 2000-2012, after the protocol change, 475 (77%) were stable and treated nonoperatively. Three failed, and it is unclear whether this happened before or after the repeat CT at 48 hours.
  • 22 high-risk lesions were found after the first scan, and 29 were found after the repeat. 20% of these were seen in Grade 1 and 2 injuries. All were sent for angiography.
  • There were 4 complications of angiography (8%), with one requiring splenectomy.
  • Length of stay decreased from 8 days to 6.

So it sounds like we should be doing repeat CT in all of our nonoperatively managed spleens, right? The failure rate decreased from 12% to less than 1%. Time in the hospital decreased significantly as well.
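The quoted rates are easy to verify from the counts above. Here is a quick back-of-envelope check (all numbers are taken directly from the bullet list; nothing here comes from outside the paper):

```python
# Back-of-envelope check of the rates quoted in the study,
# using only the counts from the bullet list above.

nonop_before, failed_before = 83, 10    # 1995-1999 nonoperative cohort
nonop_after, failed_after = 475, 3      # 2000-2012 nonoperative cohort
angio_patients = 22 + 29                # high-risk lesions sent to IR
angio_complications = 4

fail_rate_before = failed_before / nonop_before
fail_rate_after = failed_after / nonop_after
angio_rate = angio_complications / angio_patients

print(f"Failure rate before protocol:  {fail_rate_before:.1%}")  # 12.0%
print(f"Failure rate after protocol:   {fail_rate_after:.1%}")   # 0.6%
print(f"Angiography complication rate: {angio_rate:.1%}")        # 7.8%
```

So the headline numbers check out arithmetically: roughly 12% down to under 1%, and an angiography complication rate of about 8%. The question is whether those numbers mean what the authors claim.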

Wrong! Here are the problems/questions:

  • Why were so many of their patients considered “unstable” and taken straight to OR (47% and 23%)?
  • CT sensitivity for detecting high risk lesions in the 1990s was nothing like it is today.
  • The accepted success rate for nonop management is about 95%, give or take. The 99.4% in this study suggests that some patients ended up going to OR who didn’t really need to, making this number look artificially high.
  • The authors did not separate pseudoaneurysm from extravasation on CT. And they found them in Grade 1 and 2 injuries, which essentially never fail.
  • 472 people got an extra CT scan.
  • 4 people (8%) had complications from angiography, which is higher than the oft-cited 2-3%. And one lost his spleen because of it.
  • Is a 6 day hospital stay reasonable or necessary?

Bottom line: This paper illustrates two things:

  1. If you look at your data without the context of what others have done, you can’t tell if it’s an outlier or not; and
  2. It’s interesting what reflexively reacting to a single adverse event can make us do.

The entire protocol is based on one bad experience at this hospital in 1999. Since then, a substantial number of people have been subjected to additional radiation and the possibility of harm in the interventional suite. How can so many other trauma centers use only a single CT scan and have excellent results?

At Regions Hospital, we see in excess of 100 spleen injuries per year. A small percentage are truly unstable and go immediately to OR. About 97% of the remaining stable patients are successfully managed nonoperatively, and only one or two return annually with delayed bleeding. It is seldom immediately life-threatening, especially if the patient has been informed about clinical signs and symptoms they should be looking for. And our average length of stay is 2-3 days depending on grade.

Never read just the abstract. Take the rest of the manuscript with a grain of salt. And think!

Reference: Delayed hemorrhagic complications in the nonoperative management of blunt splenic trauma: early screening leads to a decrease in failure rate. J Trauma 76(6):1349-1353, 2014.

Why Is So Much Published Research So Bad?

I read lots of trauma-related articles every week. And as I browse through them, I often find studies that leave me wondering how they ever got published. And this is not a new phenomenon. Look at any journal a year ago. Five years ago. Twenty years ago. And even older. The research landscape is littered with their carcasses.

And on a related note, sit down with any serious clinical question in your field you want to answer. Do a deep dive with one of the major search engines and try to get an answer. Or better yet, let the professionals from the Cochrane Library or another organization do it for you. Invariably, you will find hints and pieces of the answer you seek. But never the completely usable solution you desire.

Why is it so hard? Even with tens of thousands of articles being published every year?

Because there is no overarching plan! Individuals are forced to produce research as a condition of their employment. Or to ensure career advancement. Or to get into medical school, or a “good” residency. And in the US, Level I trauma centers are required to publish at least 20 papers every three years to maintain their status. So there is tremendous pressure across all disciplines to publish something.

Unfortunately, that something is usually work that is easily conceived and quickly executed. A registry review, or some other type of retrospective study. They are easy to get approval for, take little time to complete and analyze, and have the potential to get published quickly.

But what this “publish or perish” mentality promotes is a random jumble of answers that we didn’t really need and can’t learn a thing from. There is no planning. There is no consideration of what questions we really need to answer. Just a random bunch of thoughts that are easy to get published but never get cited by anyone else.

Bottom line: How do we fix this? Not easily. One idea: give every work a “quality score.” Instead of focusing on the quantity of publications, the “authorities” (tenure committees and the journal editors themselves) need to focus on their quality. Extra credit should be given to multicenter trial involvement, prospective studies, and other higher-quality projects. These would increase the quality score. The actual number of publications should not matter as much as how much high-quality work is in progress. Judge the individual or center on their total quality score, not the absolute number of papers they produce. Sure, the sheer number of studies published would decline, but the quality would increase dramatically!
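To make the idea concrete, here is a minimal sketch of how such a score might work. The categories and weights below are entirely hypothetical, invented for illustration; the point is simply that weighting by study type replaces raw paper counting:

```python
# Illustrative sketch of a publication "quality score."
# These categories and weights are hypothetical examples,
# not an established scoring system.

WEIGHTS = {
    "retrospective_review": 1,   # easy, quick, rarely cited
    "prospective_study": 3,      # planned in advance, stronger design
    "multicenter_trial": 4,      # broader applicability, more effort
    "randomized_trial": 5,       # highest-quality evidence
}

def quality_score(publications):
    """Sum weighted credit for a list of study types,
    rather than counting the number of papers."""
    return sum(WEIGHTS.get(kind, 0) for kind in publications)

# Under this scheme, three registry reviews score lower than
# a single multicenter trial.
print(quality_score(["retrospective_review"] * 3))   # 3
print(quality_score(["multicenter_trial"]))          # 4
```

A center churning out registry reviews would no longer outrank one running a single well-designed trial, which is exactly the incentive shift being argued for.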

Why Is So Much Published Research So Bad?

Yesterday, my colleague the Skeptical Scalpel wrote about an interesting (?) paper published in Emergency Medicine Australasia. It was a small study that concluded that ED wait times decreased as the number of people presenting to be seen decreased. Where’s the mystery in that? Overstating the obvious?
