
AAST 2019 #7: Trauma Surgeon Fatigue And Burnout

I love this topic, especially since I’m getting a bit long in the tooth myself. The impact of night call is significant, even if it’s less noticeable in my younger colleagues. Theories (and some real data) abound that long stretches of stressful work are unhealthy and may lead to burnout. Modifications such as reduced call length or varied work type have been tried, but there is little data showing any real effect.

The trauma group at Grant Medical Center in Columbus, Ohio performed a month-long prospective study involving six Level I trauma centers. They set out to monitor fatigue levels due to varying call shift schedules and durations, and to see whether they could identify any relationship to the risk of surgeon burnout.

The authors used an actigraphy-type device to monitor fatigue using an unspecified alertness model. These devices are typically worn on the wrist and have varying levels of sophistication for determining sleep depth and fatigue. The surgeons self-reported their daily work activities as “academic”, “on-call”, “clinical non-call”, or “not working”, along with the length of time spent in each. A validated burnout inventory was administered at the end of the study to gauge burnout risk. The impact of 12- vs 24-hour call shifts was judged based on these variables.

Here are the factoids:

  • The number of surgeons involved in the study was not reported (!!!)
  • Mean and worst fatigue score levels were “significantly worse” after a 24-hour shift compared to 12 hours
  • The proportion of time spent with a fatigue level < 70 (“equivalent to a blood alcohol of 0.08%”) was significantly longer during 24-hour shifts (10% vs 6% of time)
  • There was no real correlation of call shift length or times spent in various capacities on the burnout score
  • Pre-call fatigue levels correlated well with on-call fatigue, but being off work pre-call did not

The authors concluded that fatigue levels relate to call length and correlate strongly with fatigue going into the call shift. They also noted that the longer shifts brought fatigue levels to a point that errors were more likely. They did not find any relationship to burnout.

There are a lot of things here that need explanation. First, the quality of the measurement system (actigraph) is key. Without this, it’s difficult to interpret anything else. And the significance data is hard to understand anyway. 

The burnout information is also a bit confusing. Other than putting an actigraph on the surgeons and having them log what they were doing, there was no real intervention. How can this possibly correlate with burnout?

The authors are trying to address a very good question: the relationship between call duration, configuration, fatigue, and error rates. More important, but less studied in trauma professionals, is the impact of disrupted sleep on health and longevity. These are very important topics, and I encourage the authors to keep at it!

Here are my questions for the presenter and authors:

  • Please provide some detail about the device used for actigraphy and exactly what was measured. There is substantial variation between devices, and very few are able to show sleep disturbance as well as actual brain wave monitoring. If this information is not extremely well-validated, then all of the results become suspect.
  • How many subjects actually participated, and how can you be sure your fatigue score differences are really statistically significant? It’s difficult for me to conceive that a difference of only 3.7 points on the “fatigue level scale” from 83.6 to 87.3 is significant. This is especially relevant since the abstract states that a score < 70 is similar to a blood alcohol level of 0.08%. The average level is well above that. And does statistical significance confer clinical significance?
  • And how about more info on the burnout inventory used? I presume the surgeons didn’t suddenly just start taking call for a month. They’ve been doing it for years. So why would a month of monitoring give any new indication of the possibility of burnout? It would seem that the usual surgeon lifestyle across this group is not leading to burnout, and I’m not sure that is accurate.
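On the statistical-vs-clinical significance point: here is a quick synthetic illustration (my numbers, not the study's; the standard deviation and sample size are assumed) of why a small mean difference in fatigue scores can still be statistically significant when enough observations are collected.

```python
# Synthetic illustration: with enough observations, a 3.7-point difference
# in mean fatigue score (87.3 vs 83.6) easily reaches statistical
# significance, even if its clinical meaning is debatable.
import math
import random

random.seed(42)
n = 500  # hypothetical number of fatigue observations per group
scores_12h = [random.gauss(87.3, 8.0) for _ in range(n)]  # assumed SD of 8
scores_24h = [random.gauss(83.6, 8.0) for _ in range(n)]

def mean(xs):
    return sum(xs) / len(xs)

def variance(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# Two-sample z-test (normal approximation, reasonable at this sample size)
diff = mean(scores_12h) - mean(scores_24h)
se = math.sqrt(variance(scores_12h) / n + variance(scores_24h) / n)
z = diff / se
p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
print(f"difference = {diff:.1f} points, z = {z:.1f}, p = {p:.2g}")
```

Shrink n to 20 per group with the same assumed spread and the same 3.7-point difference is no longer significant, which is exactly why the unreported number of subjects matters so much here.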

Reference: More call does not mean more burnout: a multicenter analysis of trauma surgeon activity with fatigue and burnout risk. AAST 2019, Oral abstract 52.

AAST 2019 #6: DOACs Part 3!

A little further down the direct oral anticoagulants (DOACs) rabbit hole please? The abstract reviewed in my last post suggested that elderly patients taking these agents actually do better than those on warfarin. So if that’s the case, do we need to be so attentive to getting followup CT scans on these patients to ensure that nothing new and unexpected is happening?

The trauma group at UCSF – East Bay performed a multi-center review of the experience at “multiple” Level I trauma centers over a three-year period. They included anticoagulated patients with blunt trauma who had a negative initial head CT. Patients taking only an anti-platelet agent or a non-oral anticoagulant were excluded. They analyzed the data for new, delayed intracranial hemorrhage, use of reversal agents, neurosurgical intervention, readmission, and death.

Here are the factoids:

  • A total of 739 records were studied: 409 on warfarin and 330 on a DOAC. Average age was 79, and half were male.
  • Repeat head CT was performed only half the time (!)
  • Delayed hemorrhage was noted in 4% of warfarin cases (9 of 224) and 2.5% of DOAC cases (4 of 159)
  • There were no interventions or deaths in the DOAC group with followup CT, or in those who did not have the repeat scan
  • There was 1 intervention in the warfarin group and two deaths attributed to TBI
  • Reversal agents were administered to 2% of DOAC patients and 14% of warfarin patients
  • The authors performed a regression analysis that showed the two strong associations with delayed hemorrhage were male sex and AIS head > 2 (!)

The authors concluded that this “largest study” suggests that DOACs “may” have a better safety profile compared to warfarin and repeat head CT is not indicated.

Now, hold on a minute!

Rule #1: No single published paper should ever change your practice. They need to be confirmed by other, hopefully better work.

Rule #2: No single abstract should make you even think about changing your practice! These are preliminary works that always need more detail, more effort, and a lot more thought. They are meant to telegraph what the authors are working on and to raise interesting questions from the audience. They should stimulate others to try to replicate and improve upon the work. In general, if something looks really good as an abstract, the next step is successful publication. This means that peers have reviewed the data and agree that it looks promising. But then it should take several years of work by the original authors and others to prove or refute the claims.

This study was small in the first place, and became smaller because half the patients did not have repeat CT scans. The only statistically significant result was the confirmation that providers were not very good about obtaining followup scans. Just because they didn’t do it doesn’t mean it’s not indicated, especially given the nature of the data and the very small numbers.
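To put numbers on "very small": a quick two-proportion z-test on the reported delayed-hemorrhage counts, 9/224 (warfarin) vs 4/159 (DOAC). This is my back-of-envelope calculation, not anything the authors presented.

```python
# Two-proportion z-test on the abstract's delayed-hemorrhage counts,
# showing how underpowered these numbers are for detecting a difference.
import math

def two_proportion_test(x1, n1, x2, n2):
    """Two-sided two-proportion z-test; returns (z, p_value)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # normal approximation
    return z, p_value

z, p_value = two_proportion_test(9, 224, 4, 159)
print(f"z = {z:.2f}, p = {p_value:.2f}")  # nowhere near significance
```

A p-value this far from 0.05 means the 4% vs 2.5% difference could easily be noise, which is why a much larger study is needed before dropping the repeat scan.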

I consider this another very small piece of the puzzle suggesting that DOACs are not as evil as warfarin. There are several of these low-power studies floating around right now. But we need to hunker down and really do one big study right so we can start to get a clearer picture of what we should do. For now, it’s best to treat all anticoagulants and anti-platelet agents as evil and err on the side of overtreating.

Here are my comments and questions for the presenter and authors:

  • Why was the followup head CT rate so poor? Was this a “however they like to do it” thing, was there a protocol, did the trauma centers just not believe that DOACs could be bad?
  • What were the guidelines for reversal? If the initial head CT was normal, why ever reverse? This suggests that participating centers could do whatever they wanted based on unspecified criteria.
  • Was the regression analysis helpful in any way? Being male and having a mild TBI seem rather nonspecific factors and wouldn’t help select patients for reversal or repeat scan.
  • Please provide more information on the warfarin intervention and deaths.
  • Isn’t the title of this abstract rather bold for the quality of the results presented?

I’m sure there will be some lively debate at the end of this presentation!

Reference: Repeat CT head scan is not indicated in trauma patients taking novel anticoagulation: a multi-institutional study. AAST 2019, Oral Abstract #66.

AAST 2019 #5: DOACs Part 2

In my last post, I reviewed a study that scrutinized reversal of direct oral anticoagulants (DOACs), and the outcomes of using various reversal agents. Today I’ll look at an abstract that compared in-hospital outcomes of elderly patients with severe TBI who were taking a variety of anticoagulant drugs, including DOACs.

The group at St. Joseph Mercy Hospital in Ann Arbor reviewed the dataset from the Michigan Trauma Quality Improvement Program database over a seven year period. To be included, patients needed to be at least 65 years old, suffer a fall, and have a significant head injury (AIS > 3). The final data consisted of records from 8312 patients treated at both Level I and II trauma centers across the state.

Here are the factoids:

  • 40% of patients were taking antiplatelet agents, 13% warfarin, 4% DOAC, and the remaining half or so were taking nothing.
  • The head injuries were severe, with mean AIS of 4.
  • After adjusting for “patient factors”, mortality or poor hospital outcomes were 1.6x more likely when warfarin was used
  • Complication risk increased 1.4x for warfarin and 1.3x for antiplatelet patients, but not for DOACs
  • Hospital length of stay was a day longer in the warfarin group (6.7 days) vs about 5.7 in the others

The authors concluded that elderly patients with severe TBI on DOACs fared better than those on warfarin. They stated that this could help alleviate concerns about DOACs in head trauma patients.

This is yet another interesting and surprising piece of the TBI on anticoagulants puzzle! It is obviously limited due to its retrospective database nature, which prevents us from asking even more interesting questions of this dataset. And it completely prevents us from looking at the specifics of each case including decision making, imaging, etc. But it’s a good start that should prompt us to find even better sources of data to tease out the details we must know in order to improve this patient group’s care.

Here are my questions for the presenter and authors:

  • I am very interested in the “patient factors” that were adjusted for to try to normalize the groups. Please describe in detail the specific ones that were used so we can understand how this influenced your results.
  • This information is intriguing, suggesting that warfarin is more evil than DOACs. What is the next step? What shall we do to further elucidate the problems, and how can we ameliorate the mortality and complication effects?

This is more good stuff about DOACs, and I can’t wait to hear the details.

AAST 2019 #5: DOACs Part 1

A short while ago I wrote about the proper nomenclature of the new or novel oral anticoagulant medications that are replacing warfarin in patients with atrial fibrillation (click here for details). Cut to the chase, the consensus seems to be that they should be called direct oral anticoagulants or DOACs.

These medications strike fear into the average trauma professional, primarily because there is no easy way to reverse them as there is for warfarin. We are finally accumulating enough experience with them to start to see the bigger picture with respect to complications and mortality. Today, I’ll begin the discussion with a series of three abstracts regarding these drugs.

The AAST conducted a multicenter, prospective, observational study that collected DOAC trauma patient information from 15 centers. They reviewed four years of data, specifically examining the use of reversal agents and mortality.

Here are the factoids:

  • A total of 606 patients were enrolled. They were generally elderly with an average age of 75.
  • Most were taking one of the Factor Xa inhibitors (apixaban, rivaroxaban, edoxaban), while just 8% were taking the direct thrombin inhibitor dabigatran.
  • Only 1% of patients received a reversal agent (prothrombin complex concentrate (PCC) 87%, Praxbind 12%, and Andexxa 1%)
  • Those receiving reversal tended to be older than the average and had more severe head injuries
  • Patients who were reversed with PCC had no change in mortality using a regression model
  • Patients reversed with Praxbind or Andexxa had a 15x higher probability of mortality

The authors’ conclusions merely restated their results.

This is fascinating information. Unfortunately, this study was not designed to provide a comparison with patients taking warfarin. However, my next two abstract reviews will cover this very topic. 

There are two interesting tidbits here. First, reversal was only carried out in about one in eight patients. Why is this? No protocol? No product? Too pricey? Patients not hurt badly enough? And how would that be judged anyway?

The second is that reversal with PCC seems to be benign, but use of one of the specifically designed reversal agents really jacked up mortality. These agents (Praxbind and Andexxa) are very expensive ($3.5K and $50K respectively). Furthermore, there are no studies anywhere that show their effectiveness. This one actually seems to show they might be dangerous.

The devil is in the details. Here are my questions for the presenter and authors:

  • Were there any guidelines for reversal? This is key because if not, the statistics just describe “how we do it.” Yes, you can tease out higher ISS or AIS head as potential reasons, but were there directions regarding this built into the study protocol?
  • Do you have any data on the success rates of PCC reversal? Were there provisions to demonstrate lesion stability vs progression after administration?
  • Do you have an impression of why the tailored reversal agents seemed to be so deadly? Were they used as a last resort due to cost? Did the centers have a hard time obtaining them or authorizing their use?

This abstract could be a gold mine!

Reference: The AAST prospective, observational, multicenter study investigating the initial experience with reversal of novel oral anticoagulants in trauma patients. AAST 2019, Oral Paper 58.

AAST 2019 #4: Kidney Injury And The “Random Forest Model”

Brace yourselves, this one is going to be intense! I selected the next paper due to its use of an unusual modeling technique, the random forest model (RFM). What, you say, is that? Exactly!

The RFM is a relatively new method (about 5 years old in the trauma literature) that uses artificial intelligence (AI) to try to tease out relationships in data. It is different from its better-known cousin, the neural network. The RFM strikes a balance between flexibility and overfitting so that it can deduce rules from data sets that may not otherwise be apparent.

The authors from the trauma program at Emory in Atlanta wanted to develop a predictive model to identify factors leading to acute kidney injury (AKI) in trauma patients. They assembled a small data set from 145 patients culled over a four-year period. Some esoteric lab tests were collected on these patients (including serum vascular endothelial growth factor and serum monocyte chemoattractant protein-1), the Sequential Organ Failure Assessment (SOFA) score was calculated, and then all of it was fed to the machine learning system.

The authors go into some detail about how they accomplished this work. The main results are the sensitivity and specificity of the RFM analysis. The RFM output was also converted to a regression equation and similarly examined. The area under the receiver operating characteristic curve (AUROC) was calculated for both.

Here are the factoids when using SOFA and the two biomarkers above:

  • For RFM: sensitivity 0.82, specificity 0.61, AUROC 0.74
  • For the resulting logistic regression: sens 0.77, spec 0.64, AUROC 0.72

The authors conclude that the biomarkers “may have diagnostic utility” in the early identification of patients who go on to develop AKI and that “further refinement and validation” could be helpful.

I’ll say! First, RFM is a very esoteric analysis tool, especially in the trauma world. Typically, its strengths are the following:

  • Requires few statistical assumptions, such as normally distributed data
  • Allows the use of many lower-quality models (weak learners) to come up with a useful result
  • Shows the relative importance of each prediction feature, unlike the opacity of neural networks

The downsides?

  • It’s complicated
  • Doesn’t do well with data outside the ranges found in the dataset
  • May be difficult to interpret
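The head-to-head comparison the authors describe can be sketched in a few lines. This is a minimal illustration on synthetic data using scikit-learn; only the sample size and feature count are borrowed from the abstract, and the generated data, split, and model settings are all my assumptions, not the authors' methods.

```python
# Sketch: fit a random forest and a logistic regression on the same
# features, then compare AUROC, mirroring the abstract's evaluation.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# ~145 patients, 3 predictors standing in for SOFA plus the two biomarkers
X, y = make_classification(n_samples=145, n_features=3, n_informative=2,
                           n_redundant=0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
lr = LogisticRegression().fit(X_tr, y_tr)

auc_rf = roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1])
auc_lr = roc_auc_score(y_te, lr.predict_proba(X_te)[:, 1])
print(f"RFM AUROC: {auc_rf:.2f}  logistic regression AUROC: {auc_lr:.2f}")

# Unlike a neural network, the forest exposes per-feature importance directly
print("feature importances:", rf.feature_importances_)
```

Note that the `feature_importances_` output is what gives the RFM its interpretability advantage over neural networks: it ranks the contribution of each predictor.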

But the real problem here is with the results. At this point, they are weak at best. The algorithm correctly predicts only 4 of 5 actual cases of AKI and identifies barely more than half of the patients who don’t develop it. Coin toss. A good AUROC is above 0.8; the ones obtained here are fair to poor at best.

I understand that this is probably a pilot study. But it seems unlikely that adding more data points will help, especially if the same input parameters are to be used in the future. I think this is an interesting exercise, but I need help seeing any future clinical applicability!

Here are my questions for the presenter and authors:

  • Why did it occur to you to try this technique? Who thought to use it? Your statisticians? What was the rationale, aside from not being able to collect any more data for the study? The origin story should be very interesting!
  • Given the lackluster results, how are you planning to “refine and validate” to make them better?
  • What future do you see for using RFM in other trauma-related studies?

I’m intrigued! Can’t wait to hear the punch lines!

Reference: Random forest model predicts acute kidney injury after trauma laparotomy. AAST 2019, Oral Abstract #11.