
Best of AAST 2022 #2: How Much Does It Cost To Be A Trauma Center?

Becoming and remaining a trauma center is an expensive proposition. Some components can pay for themselves (surgical specialists and operating rooms) but others are required yet generate no revenue. These costs must somehow be offset for a trauma center to remain viable.

How much does it actually cost? Two previous papers have dealt with this topic (see references). One was published way back in 2004 and examined readiness costs averaged across 10 Florida trauma centers. The authors commingled data from these hospitals, which were a mix of adult, pediatric, Level I, and Level II centers, and arrived at a median annual readiness cost of $2.1 million.

A similar study was published in 2017 for Level I and Level II centers in Georgia. The authors estimated that the average annual readiness cost was $6.8 million for Level I centers and $2.3 million for Level II centers.

That’s a lot of money! These hospitals tend to be larger and have specialty centers that allow them to generate enough revenue to support the non-revenue parts of the trauma program.

But what about Level III and Level IV centers? They are generally much smaller hospitals. In more rural states, many of them are critical access hospitals with 25 or fewer beds. They don’t have a wealth of other programs that can generate significant excess revenue.

So how much does it cost them? A group at Mercer University in Atlanta attempted to quantify this. They developed a survey tool along the lines of the previous work and sent it to all 14 Level III and Level IV trauma centers in the state, which based their numbers on 2019 data.

Here are the factoids:

  • For Level III centers, the average annual readiness cost was $1.7 million
  • The most expensive component for Level III centers was clinical medical staff, most likely related to stipends for service / call coverage
  • For Level IV centers, the cost was only $82,000 and primarily involved administrative costs (most likely trauma program personnel)
  • Education and outreach programs are mandated for these centers, but they actually spent only $8,000 annually on them. The authors believe this represents significant under-resourcing by the hospitals.

The authors concluded that there is a need for additional trauma center funding to enable Level III and IV centers to meet the requirements set forth by the American College of Surgeons.

My comments: This is a very enlightening paper on the cost of being a trauma center. Only two papers have previously explored this, and only for higher-level centers. However, the devil is in the details: the nuts-and-bolts numbers and the assumptions about how they fit together are key. Still, it provides valuable information on what it costs to be a trauma center, and the disparity between the two levels is fascinating / frightening.

Here are my questions for the authors / presenters:

  • What assumptions did you have to make to arrive at these numbers? Please explain the details of your model and where you think the weaknesses in it may lie.
  • Why is it so much more expensive to be a Level III center? The abstract places the blame on “clinical medical staff.” Are these on-call stipends or something else?
  • What would you tell hospitals that want to become a Level III or IV trauma center? Unfortunately, these numbers might scare some of them off.

Thanks for an intriguing and challenging paper! The discussion will be very interesting!

References: 

  1. ASSESSING TRAUMA READINESS COSTS IN LEVEL III AND LEVEL IV TRAUMA CENTERS. Plenary session paper #10, AAST 2022.
  2. The cost of trauma center readiness. Am J Surg 187(1):7-13, 2004.
  3. What Are the Costs of Trauma Center Readiness? Defining and Standardizing Readiness Costs for Trauma Centers Statewide. Am Surg 83(9):979-990, 2017.

 

Best of AAST 2022 #1: The Trauma-Specific Frailty Index (TSFI)

Let’s start with the paper that is kicking off the 81st Annual Meeting of the AAST. Everyone recognizes that many of our elderly patients don’t do well after trauma. Unfortunately, “elderly” is a very imprecise term. According to the TRISS method for predicting mortality, it begins at age 55. But we have all seen many patients younger than that who appear much older physiologically. And a few older ones who are in excellent condition.

How can we determine who is frail and thus more likely to develop complications or even die after injury? The trauma group at the University of Arizona – Tucson published their original paper on a 50-variable frailty index in 2014 to address this issue. Unfortunately, 50 variables proved very unwieldy, which vastly decreased the tool’s usability.

They then stripped it down to the 15 most significant variables and named it the Trauma-Specific Frailty Index (TSFI). This tool simply predicted whether the patient would have a favorable discharge (home) or an unfavorable one (skilled nursing facility or death). The TSFI was very good at this, and was far better than using age alone.

The authors rolled the TSFI out to the AAST multi-institutional study group. A total of 17 Level I and II trauma centers participated in a three-year prospective, observational study. All patients with age > 65 had their TSFI calculated and were stratified into three groups: non-frail, pre-frail, and frail. The outcomes studied were expanded to include mortality, complications, discharge status, and 3-month readmission, falls, complications, and death.

Here are the factoids:

  • A total of 1,321 patients were enrolled across all centers with a mean age of 77 and median ISS 9
  • A third each were classified as non-frail, pre-frail, and frail
  • The overall study group had a 5% mortality, 14% complication rate, and 42% unfavorable discharge rate
  • Frail patients had a higher complication rate than the pre-frail and non-frail groups (21% vs 14% vs 10%), which was statistically significant
  • They also had a higher mortality rate (7% vs 3% vs 4%, p=0.048), which remained significant on multivariate analysis
  • Overall, 16% were readmitted within 3 months and 2% died. This was not stratified in the abstract by frailty group.

The authors claim that the TSFI is an independent predictor of worse outcomes, and that it is practical, effective, and should be used in the management of geriatric trauma patients.

Comments: I find the concept of the abstract very interesting. I think most of us can identify the obviously frail patients when we see them. The TSFI promises more objective identification using 15 variables. For reference, here they are (a rough scoring sketch follows the list):

  • Comorbidities
    • Cancer history
    • Coronary heart disease
    • Dementia
  • Daily activities
    • Help with grooming
    • Help with managing money
    • Help doing housework
    • Help toileting
    • Help walking
  • Health attitude
    • Feel less useful
    • Feel sad
    • Feel effort to do everything
    • Falls
    • Feel lonely
  • Sexual function
  • Serum albumin

The authors showed that every outcome studied was significantly worse with increasing frailty. The analysis appears reasonable, and the numbers are both statistically and clinically significant.

But the big question now is, how do we use the results? The 15-variable version is reasonably workable. Is it any better than a trauma professional walking into the room and doing a good eyeball test? The study did not look at that. Either way, what can we do when we identify the truly frail patient? What can we alter in hospital care that might make a difference? Right now, options are limited. Much of what led to the patient’s frailty is water under the bridge, the result of possibly decades of lifestyle choices or pre-existing disease.

I think the next step in this train of thought is to start applying specific interventions to patients identified as frail or, better yet, pre-frail. Here are my questions for the authors and presenter:

  1. What’s next? You’ve shown that you have a numerical tool that identifies patients who may have a less than desirable outcome. If we implement this, what can we do to try to reduce those undesirable outcomes?

This was thought-provoking work, and I am looking forward to the full presentation!

Reference: PROSPECTIVE VALIDATION AND APPLICATION OF THE TRAUMA SPECIFIC FRAILTY INDEX: RESULTS OF AN AAST MULTI-INSTITUTIONAL OBSERVATIONAL TRIAL. AAST 2022 Plenary Paper 1.

Best Of AAST 2021: Trauma Transfers Discharged From The ED

Aren’t these embarrassing? A referring center sends you a patient with the idea that they will be evaluated and admitted to your hospital. But it doesn’t work out that way. The patient is seen, possibly by a surgical specialist, bandaged up, and then sent home, probably to one quite a few miles away. Not only is this a nuisance for the patient and an embarrassment for the sending center, it also consumes resources at the trauma center that are already tight.

Transfer patients who are seen and discharged are another form of “ultimate overtriage.” In this case, the incorrect triage takes place at the outside hospital. The trauma group in Oklahoma City reviewed their experience with these patients over a two-year period. They looked exclusively at patients who were transferred in to a Level I center and then discharged.

Here are the factoids:

  • A total of 2,350 patients were transferred in, and 27% were transferred home directly from the trauma bay (!)
  • The three most common culprits by injury pattern were face (51%), hand (31%), and isolated orthopedic injury (9%)
  • A third of these patients required a bedside procedure, including laceration repair (53%), eye exam (24%), splinting (18%), and joint reduction (5%)
  • Ten facilities accounted for 40% of the transfers

The authors concluded that the typical injuries prompting transfer are predictable. It may be possible to reduce the number of transfers by deploying telemedicine systems to push evaluations out to the referring hospitals.

Bottom line: This is quite interesting. Anyone who works in a Level I or II center is aware of this phenomenon. This abstract went a step further and quantified the specific issues involved. This center ended up discharging over 300 patients per year after transfer in. That is a tremendous drain on resources by patients who did not truly need them.

The authors speculate that telemedicine evaluation may help reduce some of these transfers. This seems like an easy solution. However, it also raises real questions about who will actually staff the calls and how they will be compensated for their time.

There are a number of important take-aways from this abstract:

  1. Know your referring hospitals. In this study, 10 hospitals generated an outsized number of referrals. Those are the targets / low-hanging fruit. Identify them!
  2. Understand what their needs are. Are they frequently having issues with simple ortho injuries? Eye exams? That tells you exactly what they need!
  3. Provide education and training to make them more comfortable. This allows you to target those hospitals with exactly the material they need and hopefully make them more self-sufficient.

This allows the higher-level centers to reserve phone and/or telemedicine consultation for only the most ambiguous cases. It’s a better use of telehealth resources, which are typically needed at night and on weekends.

Here are my questions for the presenter and authors:

  • Would the common issues that were transferred and discharged be amenable to education and training at the referring centers to decrease the transfer volume?
  • How have you begun to address this issue at your center?

Reference: TRAUMA TRANSFERS DISCHARGED FROM THE EMERGENCY DEPARTMENT – IS THERE A ROLE FOR TELEMEDICINE. AAST 2021, Oral abstract #63.

Best Of AAST 2021: Chest Tube Based On Pneumothorax Size

How big is too big? That has been the question for a long time as it applies to pneumothorax and chest tubes. For many, it is a math problem that takes into account the appearance on chest x-ray, the physiology of the patient, and their ability to tolerate the pneumothorax based on any pre-existing medical conditions.

The group at Froedtert in Milwaukee has been trying to make this decision a bit more objective. They introduced the concept of CT-based size measurement using a 35mm threshold at this very meeting three years ago. Read my review here. My criticisms at the time centered on the need to obtain a CT scan for diagnosis and on their subjective definition of a failure requiring chest tube insertion. That abstract never did make it to publication.

The authors are back now with a follow-on study. This time, they made a rule that any pneumothorax less than 35mm from the chest wall would be observed without tube placement. They performed a retrospective review of their experience and divided it into two time periods: 2015-2016, before the new rule, and 2018-2019, after it. They excluded chest tubes inserted before the scan was performed, cases with a sizable hemothorax, and patients placed on a ventilator or who died.
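
For readers who like to see the rule written out, here is a minimal sketch of the decision logic as I read it from the abstract. It assumes the measurement is the largest distance from the chest wall to the lung edge on the CT images, and the exclusions mirror those described above; this is my interpretation, not the authors’ actual protocol.

    # Sketch of the 35mm rule as described in the abstract (my interpretation).
    OBSERVATION_THRESHOLD_MM = 35

    def pneumothorax_plan(max_depth_mm: float,
                          on_ventilator: bool = False,
                          sizable_hemothorax: bool = False) -> str:
        """Apply the 35mm rule to a CT-measured pneumothorax depth."""
        if on_ventilator or sizable_hemothorax:
            # These patients were excluded from the rule in the study
            return "outside the rule: manage per surgeon judgment"
        if max_depth_mm < OBSERVATION_THRESHOLD_MM:
            return "observe without a chest tube"
        return "consider chest tube placement"

    print(pneumothorax_plan(22))  # observe without a chest tube
    print(pneumothorax_plan(41))  # consider chest tube placement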

Here are the factoids:

  • There were 93 patients in the early period and 154 in the later period
  • Chest tube use significantly declined from 20% to 10% between the two periods
  • Compliance with the rule significantly increased from 82% to 92%
  • There was no difference in length of stay, complications, or death
  • Observation failure was marginally lower in the later period, and statistical significance depends on which method is used to calculate it
  • Patients in the later group were 2x more likely to be observed (by regression analysis)

The authors concluded that the 35mm rule resulted in a two-fold increase in observation and decreased the number of unnecessary CT scans.

Bottom line: I still have a few issues with this series of abstracts. First, the decision to insert a chest tube requires a CT scan in a patient with a pneumothorax. This seems like extra radiation for patients who may not otherwise meet any of the usual blunt imaging criteria. And, like their 2018 abstract, there are no objective criteria for failure requiring tube insertion. So the number of insertions can be quite subjective, based on the whims of the individual surgeon.

What this abstract really shows is that compliance with the new rule increased, and there were no obvious complications from its use. The other numbers (chest tube insertions, observation failure) are just too subjective to learn much from.

Here are my questions for the presenter and authors:

  • Why was there such a large increase in the number of subjects for two identical-length time periods? Both were two years long, yet there were two-thirds more patients in the later period. Did your trauma center volumes go up that much? If not, could this represent some sort of selection bias that might change your numbers?
  • You concluded that your new rule decreased the number of “unnecessary” CT scans? How so? It looks like you are using more of them!
  • Do you routinely get a chest CT on all your patients with pneumothorax? Seems like a lot of radiation just to decide whether or not to put a tube in.
  • How do you manage a pneumothorax found on chest x-ray? Must they get a CT? Or are you willing to watch them and follow with serial x-rays?
  • How do you decide to take out the chest tube? Hopefully not another scan!

There should be some very interesting discussion of this abstract!

Reference: THE 35-MM RULE TO GUIDE PNEUMOTHORAX MANAGEMENT: INCREASES APPROPRIATE OBSERVATION AND DECREASES UNNECESSARY CHEST TUBES. AAST 2021, Oral abstract #56.

Best Of AAST 2021: Individual Surgeon Outcomes In Trauma Laparotomy

Trauma programs use a number of quality indicators and PI filters to evaluate both individual and system performance. The emergent trauma laparotomy (ETL) is the index case for any trauma surgeon and is performed on a regular basis. However, this is one procedure where individual surgeon outcome is rarely benchmarked.

The trauma group in Birmingham, AL performed a retrospective review of 242 ETLs performed at their hospital over a 14-month period. They then excluded patients who underwent resuscitative thoracotomy prior to the laparotomy. Rates of damage control use and mortality at various time points were studied.

Here are the factoids:

The chart shows the survival rates after ETL at 24 hours (blue) and to discharge (gray) for 14 individual surgeons.

  • Six patients died intraoperatively, and damage control laparotomy was performed in one-third of cases
  • Mortality was 4% at 24 hours and 7% overall
  • ISS and time in ED were similar, but operative time varied substantially (40-469 minutes)
  • There were significant differences in individual surgeon mortality and use of damage control

The authors concluded that there were significant differences in outcomes by surgeon, and that more granular quality metrics should be developed for quality improvement.

Bottom line: I worry that this work is a superficial treatment of surgeon performance. The use of gross outcomes like death and damage control is not very helpful, in my opinion. There are so, so many other variables involved in who is likely to survive, or in the decision to use damage control. I am concerned that a simplistic retrospective review missing most of those variables will lead to false conclusions.

It may be that there is a lot more information here that just couldn’t fit on the abstract page. In that case, the presentation should clear it all up.  But I am doubtful.

We have already reached a point in medicine where hospitals with better outcomes for patients with certain conditions can be identified. These centers should be selected preferentially to treat stroke or pancreatic cancer, or whatever their benchmark-proven expertise is. It really is time for this to begin to trickle down to individual providers. A specific surgeon should be encouraged to do what they are demonstrated to be really good at, and other surgeons should handle the things the first surgeon is only average at.

But I don’t think this study can provide the level of benchmarking to suggest changes to a surgeon’s practice or the selection of a specific surgeon for a procedure. A lot more work is needed to identify the pertinent variables needed to develop legitimate benchmarks.

Here are my questions for the presenter and authors:

  • Show us the details of all of the variables you analyzed (ISS, NISS, time in ED, etc.) and the breakdown by surgeon.
  • Are there any other variables that influence the outcome that you wish you had collected?
  • There were an average of 17 cases per surgeon in your study. Is it possible to show statistical significance for anything with numbers that small? (A rough illustration of this problem follows below.)
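
To put some numbers behind that last question, here is a back-of-envelope check using made-up counts; nothing here comes from the abstract. With roughly 17 laparotomies per surgeon, even a dramatic difference in mortality is unlikely to reach statistical significance.

    # Hypothetical counts, chosen only to illustrate the sample size problem.
    from scipy.stats import fisher_exact

    table = [[3, 14],   # hypothetical surgeon A: 3 deaths, 14 survivors (~18% mortality)
             [0, 17]]   # hypothetical surgeon B: 0 deaths, 17 survivors (0% mortality)
    odds_ratio, p_value = fisher_exact(table)
    print(f"p = {p_value:.2f}")  # roughly 0.23, nowhere near significant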

The devil is in the details, and I hope these come out during the presentation!

Reference: IT’S TIME TO LOOK IN THE MIRROR: INDIVIDUAL SURGEON OUTCOMES AFTER EMERGENT TRAUMA LAPAROTOMY. AAST 2021, Oral abstract #38.