
It’s AAST Month, 2022!

As many of you are aware, the 81st Annual Meeting of the American Association for the Surgery of Trauma is taking place later this month. Keeping with tradition, I will spend the coming weeks analyzing select abstracts that caught my interest.

As in the past, I’ll analyze what’s available in the short abstract format and provide a bit of background about why it might be important. I will then examine the design and methods and review the results.

Finally, I will provide my own analysis, as well as questions for the authors and presenter that they might encounter at the meeting. In addition to sharing all of this with you, my readers, I always send a message to the authors so they can personally check out the post.

I will get started with an analysis of Oral Paper #1, a multi-center trial using the Trauma Specific Frailty Index.

Best Of AAST 2021: Trauma Transfers Discharged From The ED

Aren’t these embarrassing? A referring center sends you a patient with the idea that they will be evaluated and admitted to your hospital. But it doesn’t work out that way. The patient is seen, possibly by a surgical specialist, bandaged up, and then sent home, probably quite a few miles away. Not only is this a nuisance for the patient and an embarrassment for the sending center, it may also consume resources at the trauma center that are already tight.

Transfer patients who are seen and discharged are another form of “ultimate overtriage.” In this case, the incorrect triage takes place at the outside hospital. The trauma group in Oklahoma City reviewed their experience with these patients over a two-year period. They looked exclusively at patients who were transferred in to their Level I center and then discharged.

Here are the factoids:

  • A total of 2,350 patients were transferred in, and 27% were discharged home directly from the trauma bay (!)
  • The three most common culprits by injury pattern were face (51%), hand (31%), and isolated ortho injury (9%)
  • A third of these patients required a bedside procedure, including laceration repair (53%), eye exam (24%), splinting (18%), and joint reduction (5%)
  • Ten facilities accounted for 40% of the transfers

The authors concluded that the typical injuries prompting transfer are predictable. It may be possible to reduce the number of transfers by deploying telemedicine systems to push evaluations out to the referring hospitals.

Bottom line: This is quite interesting. Anyone who works in a Level I or II center is aware of this phenomenon. This abstract went a step further and quantified the specific issues involved. This center ended up discharging over 300 patients per year after transfer in (2,350 transfers × 27% ≈ 630 over two years, or more than 300 annually). This is a tremendous drain on resources by patients who did not truly need them.

The authors speculate that telemedicine evaluation may help reduce some of those transfers. This seems like an easy solution. However, it also raises a lot of questions about who will actually staff the calls and how they will be compensated for their time.

There are a number of important take-aways from this abstract:

  1. Know your referring hospitals. In this study, there were 10 hospitals that generated an outsized number of referrals. Those are the targets / low-hanging fruit. Identify them!
  2. Understand what their needs are. Are they frequently having issues with simple ortho injuries? Eye exams? That tells you exactly what they need!
  3. Provide education and training to make them more comfortable. This allows you to target those hospitals with exactly the material they need and hopefully make them more self-sufficient.

This allows the higher-level centers to reserve phone and/or telemedicine consultation for only the most ambiguous cases. It’s a better use of telehealth resources, which are most often needed at night and on weekends.

Here are my questions for the presenter and authors:

  • Would the common issues that were transferred and discharged be amenable to education and training at the referring centers to decrease the transfer volume?
  • How have you begun to address this issue at your center?

Reference: TRAUMA TRANSFERS DISCHARGED FROM THE EMERGENCY DEPARTMENT – IS THERE A ROLE FOR TELEMEDICINE. AAST 2021, Oral abstract #63.

Best Of AAST 2021: Chest Tube Based On Pneumothorax Size

How big is too big? That has been the question for a long time as it applies to pneumothorax and chest tubes. For many, it is a math problem that takes into account the appearance on chest x-ray, the physiology of the patient, and their ability to tolerate the pneumothorax based on any pre-existing medical conditions.

The group at Froedtert in Milwaukee has been trying to make this decision a bit more objective. They introduced the concept of CT-based size measurement using a 35mm threshold at this very meeting three years ago. Read my review here. My criticisms at the time centered on the need to get a CT scan for diagnosis and their subjective definition of a failure requiring chest tube insertion. That abstract never did make it to publication.

The authors are back now with a follow-on study. This time, they made a rule that any pneumothorax measuring less than 35mm from the chest wall would be observed without tube placement. They performed a retrospective review of their experience and divided it into two time periods: 2015-2016, before the new rule, and 2018-2019, after it. They excluded any chest tubes inserted before the scan was performed, patients with a sizable hemothorax, and patients who were placed on a ventilator or died.

Here are the factoids:

  • There were 93 patients in the early period and 154 in the later period
  • Chest tube use significantly declined from 20% to 10% between the two periods
  • Compliance with the rule significantly increased from 82% to 92%
  • There was no difference in length of stay, complications, or death
  • Observation failure was marginally lower in the later period; whether the difference is statistically significant depends on the method used to calculate it (see the sketch after this list)
  • Patients in the later group were 2x more likely to be observed (by regression analysis)
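
The abstract doesn’t report the raw counts behind that last comparison, so here is a minimal sketch of why the choice of test matters, using counts I reconstructed from the reported chest tube percentages (roughly 19 of 93 early patients vs 15 of 154 later patients). With modest cell counts, an uncorrected chi-square, a Yates-corrected chi-square, and Fisher’s exact test can return noticeably different p-values.

```python
# A minimal sketch (not the authors' analysis): comparing event rates between
# the two periods with three common tests. The counts are my reconstruction
# from the reported percentages (20% of 93 vs 10% of 154 chest tubes).
from scipy.stats import chi2_contingency, fisher_exact

# rows = period (early, later); columns = (chest tube, no chest tube)
table = [[19, 74],    # early period: ~20% of 93 patients
         [15, 139]]   # later period: ~10% of 154 patients

chi2_plain, p_plain, _, _ = chi2_contingency(table, correction=False)
chi2_yates, p_yates, _, _ = chi2_contingency(table)   # Yates correction (default)
_, p_fisher = fisher_exact(table)

print(f"chi-square (uncorrected): p = {p_plain:.3f}")
print(f"chi-square (Yates):       p = {p_yates:.3f}")
print(f"Fisher's exact:           p = {p_fisher:.3f}")
# All three agree that the decline in chest tube use is significant, but the
# p-values differ. For the smaller, unreported observation-failure counts,
# that same spread could easily straddle 0.05.
```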

The authors concluded that the 35mm rule resulted in a two-fold increase in observation and decreased the number of unnecessary CT scans.

Bottom line: I still have a few issues with this series of abstracts. First, the decision to insert a chest tube requires a CT scan in any patient with a pneumothorax. This seems like extra radiation for patients who may not otherwise meet any of the usual blunt imaging criteria. And, like their 2018 abstract, there are no objective criteria for failure requiring tube insertion. So the number of insertions can potentially be quite subjective, based on the whims of the individual surgeon.

What this abstract really shows is that compliance with the new rule increased, and there were no obvious complications from its use. The other numbers (chest tube insertions, observation failure) are just too subjective to learn much from.

Here are my questions for the presenter and authors:

  • Why was there such a large increase in the number of subjects for two identical-length time periods? Both were two years long, yet there were two-thirds more patients in the later period. Did your trauma center volumes go up that much? If not, could this represent some sort of selection bias that might change your numbers?
  • You concluded that your new rule decreased the number of “unnecessary” CT scans? How so? It looks like you are using more of them!
  • Do you routinely get a chest CT on all your patients with pneumothorax? Seems like a lot of radiation just to decide whether or not to put a tube in.
  • How do you manage a pneumothorax found on chest x-ray? Must they get a CT? Or are you willing to watch them and follow with serial x-rays?
  • How do you decide to take out the chest tube? Hopefully not another scan!

There should be some very interesting discussion of this abstract!

Reference: THE 35-MM RULE TO GUIDE PNEUMOTHORAX MANAGEMENT: INCREASES APPROPRIATE OBSERVATION AND DECREASES UNNECESSARY CHEST TUBES. AAST 2021, Oral abstract #56.

Best Of AAST 2021: Comparing Two Different Doses Of Enoxaparin

Oh, look, my favorite topic! Prevention of venous thromboembolism (VTE) and its complications. We’ve grown accustomed to using enoxaparin at the standard 30mg bid dose for a long time. The orthopedic surgeons like to use 40mg qd, and there is some literature showing this is reasonable for fracture patients.

The group at OHSU in Portland wanted to show that the once-daily (qd) regimen is just as safe and effective as the bid dose. They performed a seven-year, prospective, randomized trial of the two dose regimens. Weekly screening duplex exams were performed. The outcome measured was the occurrence of deep venous thrombosis (DVT) in the legs. They also examined missed doses, bleeding complications, and hospital length of stay.

Here are the factoids:

  • There were 267 total patients: 139 in the qd group and 128 in the bid group
  • Average age was 49 and BMI was 28 in both groups
  • DVT occurred in 15 (11%) qd patients and 12 (9%) bid patients
  • Bleeding occurred in 19% of qd patients vs 14% of bid patients
  • There were fewer missed doses in the qd patients
  • None of the differences were statistically significant

The authors concluded that the qd dose was similar to the bid dose and equally efficacious.

Bottom line: Hold on, now. First, this is a non-inferiority study. Daily dosing is presumed to be as good as twice daily dosing since there was no statistical difference seen between groups. This assumes that you have the statistical power (enough patients) to detect a difference. Is this the case here?

I pulled out my sample size calculator to check this over. I worked things backwards to see the magnitude of difference that would have to be present to be detectable with the given number of subjects. It looks like a sample size this small would only be able to detect a difference of about 2x in the DVT occurrence rate!
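
Here is a minimal sketch of that back-of-the-envelope check (my own calculation, not the authors’): given roughly 134 patients per arm and the 9% DVT rate in the bid group, solve for the smallest qd DVT rate that a standard two-sided, two-proportion comparison could reliably detect. The conventional alpha of 0.05 and 80% power are my assumptions; they are not stated in the abstract.

```python
# Reverse power calculation (a sketch, not the authors' method): what is the
# smallest detectable difference given the enrolled sample size?
from math import asin, sin, sqrt
from scipy.stats import norm

n_per_arm = 134            # ~267 subjects split across the two arms
baseline = 0.09            # DVT rate reported in the bid group
alpha, power = 0.05, 0.80  # conventional assumptions, not stated in the abstract

# Minimum detectable effect size (Cohen's h) for a two-sided,
# two-proportion comparison with equal group sizes
h = (norm.ppf(1 - alpha / 2) + norm.ppf(power)) * sqrt(2 / n_per_arm)

# Convert Cohen's h back into the DVT rate the qd arm would need to reach
detectable = sin((h + 2 * asin(sqrt(baseline))) / 2) ** 2
print(f"Smallest reliably detectable qd DVT rate: {detectable:.0%}")
# Prints roughly 21%, a bit more than double the 9% baseline, so the observed
# 11% vs 9% difference is far below what this sample size can resolve.
```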

Let’s look at this in simple terms. The DVT rate was actually higher in the qd group (11% vs 9%). So let’s say qd dosing really is inferior to bid, meaning that the higher occurrence of DVT is real. With the number of subjects enrolled here, the qd rate could approach 20% before the study would have the power to reliably detect the difference.

The other major issue is the potential for selection bias. This study took place over seven years, yet only 267 patients were enrolled, about 38 per year. But this trauma center admits several thousand patients annually. If the enrollment criteria were that strict, the subjects probably don’t represent the general trauma population. And if they weren’t that strict, where did all the other patients go? This is most likely a skewed study group.

I have lots of questions for the presenter and authors on this one!

  • Please show us your power calculations. Are you sure you have the statistical oomph to show non-inferiority?
  • Why did it take so long to accumulate 267 subjects? Show us the statistics for your overall trauma population so we can see whether your study group looks the same.
  • Were you able to detect any other complications like pulmonary embolism?

Lots of questions here! Hopefully there’s much more information in the presentation!

Reference: A PROSPECTIVE RANDOMIZED TRIAL COMPARING TWO STANDARD DOSES OF ENOXAPARIN FOR PREVENTION OF THROMBOEMBOLISM IN TRAUMA. AAST 2021, Oral abstract #40.

Best Of AAST 2021: Individual Surgeon Outcomes In Trauma Laparotomy

Trauma programs use a number of quality indicators and PI filters to evaluate both individual and system performance. The emergent trauma laparotomy (ETL) is the index case for any trauma surgeon and is performed on a regular basis. However, this is one procedure where individual surgeon outcome is rarely benchmarked.

The trauma group in Birmingham, AL performed a retrospective review of 242 ETLs performed at their hospital over a 14-month period. They then excluded patients who underwent resuscitative thoracotomy prior to the laparotomy. Rates of damage control use and mortality at various time points were studied.

Here are the factoids:

[Chart: survival rates after ETL at 24 hours (blue) and to discharge (gray) for the 14 individual surgeons]

  • Six patients died intraoperatively, and damage control laparotomy was performed in one third of cases
  • Mortality was 4% at 24 hours and 7% overall
  • ISS and time in ED were similar, but operative time varied substantially (40-469 minutes)
  • There were significant differences in individual surgeon mortality and use of damage control

The authors concluded that there were significant differences in outcomes by surgeon, and that more granular quality metrics should be developed for quality improvement.

Bottom line: I worry that this work is a superficial treatment of surgeon performance. The use of gross outcomes like death and damage control is not very helpful, in my opinion. There are so, so many other variables involved in determining who is likely to survive, or in the decision to use damage control. I am concerned that a simplistic retrospective review lacking most of those variables will lead to false conclusions.

It may be that there is a lot more information here that just couldn’t fit on the abstract page. In that case, the presentation should clear it all up.  But I am doubtful.

We have already reached a point in medicine where hospitals with better outcomes for patients with certain conditions can be identified. These centers should be selected preferentially to treat stroke or pancreatic cancer, or whatever their benchmark-proven expertise is. It really is time for this to begin to trickle down to individual providers. A specific surgeon should be encouraged to do what they have been demonstrated to be really good at, and other surgeons should handle the things the first surgeon is only average at.

But I don’t think this study can provide the level of benchmarking to suggest changes to a surgeon’s practice or the selection of a specific surgeon for a procedure. A lot more work is needed to identify the pertinent variables needed to develop legitimate benchmarks.

Here are my questions for the presenter and authors:

  • Show us the details of all of the variables you analyzed (ISS, NISS, time in ED, etc) and the breakdown by surgeon.
  • Are there any other variables that influence the outcome that you wish you had collected?
  • There were an average of 17 cases per surgeon in your study. Is it possible to show statistical significance for anything given these small numbers?

The devil is in the details, and I hope these come out during the presentation!

Reference: IT’S TIME TO LOOK IN THE MIRROR: INDIVIDUAL SURGEON OUTCOMES AFTER EMERGENT TRAUMA LAPAROTOMY. AAST 2021, Oral abstract #38.