All posts by TheTraumaPro

AAST 2019 #6: DOACs Part 2

In my last post, I reviewed a study that scrutinized reversal of direct oral anticoagulants (DOACs), and the outcomes of using various reversal agents. Today I’ll look at an abstract that compared in-hospital outcomes of elderly patients with severe TBI who were taking a variety of anticoagulant drugs, including DOACs.

The group at St. Joseph Mercy Hospital in Ann Arbor reviewed the Michigan Trauma Quality Improvement Program database over a seven-year period. To be included, patients needed to be at least 65 years old, suffer a fall, and have a significant head injury (AIS > 3). The final dataset consisted of records from 8312 patients treated at both Level I and II trauma centers across the state.

Here are the factoids:

  • 40% of patients were taking antiplatelet agents, 13% warfarin, and 4% a DOAC; the remaining 43% were taking nothing.
  • The head injuries were severe, with mean AIS of 4.
  • After adjusting for “patient factors”, mortality or poor hospital outcomes were 1.6x more likely when warfarin was used
  • Complication risk increased 1.4x for warfarin and 1.3x for antiplatelet patients, but not for DOACs
  • Hospital length of stay was a day longer in the warfarin group (6.7 days) vs about 5.7 in the others
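
To make sense of a figure like “1.6x more likely,” it helps to see where an odds ratio comes from. The study used a multivariable model adjusted for patient factors; the sketch below shows only the raw, unadjusted 2x2 version, and every count in it is made up purely for illustration.

```python
# Illustrating what an odds ratio like the reported "1.6x" means.
# The study adjusted for patient factors with a multivariable model;
# this is the raw, unadjusted 2x2 arithmetic with entirely made-up counts.

def odds_ratio(exp_event, exp_no_event, unexp_event, unexp_no_event):
    """Cross-product ratio of a 2x2 exposure/outcome table."""
    return (exp_event * unexp_no_event) / (exp_no_event * unexp_event)

# Hypothetical: 40 of 200 warfarin patients with the outcome
# vs 75 of 555 patients not on warfarin
print(odds_ratio(40, 160, 75, 480))  # 1.6
```

The adjusted version reported in the abstract comes from a logistic regression rather than this simple cross-product, but the interpretation of the resulting ratio is the same.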

The authors concluded that elderly patients with severe TBI on DOACs fared better than those on warfarin. They stated that this could help alleviate concerns about DOACs in head trauma patients.

This is yet another interesting and surprising piece of the TBI on anticoagulants puzzle! It is obviously limited due to its retrospective database nature, which prevents us from asking even more interesting questions of this dataset. And it completely prevents us from looking at the specifics of each case including decision making, imaging, etc. But it’s a good start that should prompt us to find even better sources of data to tease out the details we must know in order to improve this patient group’s care.

Here are my questions for the presenter and authors:

  • I am very interested in the “patient factors” that were adjusted for to try to normalize the groups. Please describe in detail the specific ones that were used so we can understand how this influenced your results.
  • This information is intriguing, suggesting that warfarin is more evil than DOACs. What is the next step? What shall we do to further elucidate the problems, and how can we ameliorate the mortality and complication effects?

This is more good stuff about DOACs, and I can’t wait to hear the details.

AAST 2019 #5: DOACs Part 1

A short while ago I wrote about the proper nomenclature of the new or novel oral anticoagulant medications that are replacing warfarin in patients with atrial fibrillation (click here for details). To cut to the chase, the consensus seems to be that they should be called direct oral anticoagulants or DOACs.

These medications strike fear into the average trauma professional, primarily because there is no easy way to reverse them as there is for warfarin. We are finally accumulating enough experience with them to start to see the bigger picture with respect to complications and mortality. Today, I’ll begin the discussion with a series of three abstracts regarding these drugs.

The AAST conducted a multicenter, prospective, observational study that collected DOAC trauma patient information from 15 centers. They reviewed four years of data, specifically examining the use of reversal agents and mortality.

Here are the factoids:

  • A total of 606 patients were enrolled. They were generally elderly with an average age of 75.
  • Most were taking one of the Factor Xa inhibitors (apixaban, rivaroxaban, edoxaban), while just 8% were taking the direct thrombin inhibitor dabigatran.
  • Only about one in eight patients received a reversal agent: prothrombin complex concentrate (PCC) in 87%, Praxbind in 12%, and Andexxa in 1%
  • Those receiving reversal tended to be older than the average and had more severe head injuries
  • Patients who were reversed with PCC had no change in mortality using a regression model
  • Patients reversed with Praxbind or Andexxa had a 15x higher probability of mortality

The authors’ conclusions merely restated their results.

This is fascinating information. Unfortunately, this study was not designed to provide a comparison with patients taking warfarin. However, my next two abstract reviews will cover this very topic. 

There are two interesting tidbits here. First, reversal was only carried out in about one in eight patients. Why is this? No protocol? No product? Too pricey? Patients not hurt badly enough? And how would that be judged anyway?

The second is that reversal with PCC seems to be benign, but use of one of the specifically designed reversal agents really jacked up mortality. These agents (Praxbind and Andexxa) are very expensive ($3.5K and $50K respectively). Furthermore, there are no studies anywhere that show their effectiveness. This one actually seems to show they might be dangerous.

The devil is in the details. Here are my questions for the presenter and authors:

  • Were there any guidelines for reversal? This is key because if not, the statistics just describe “how we do it.” Yes, you can tease out higher ISS or AIS head as potential reasons, but were there directions regarding this built into the study protocol?
  • Do you have any data on the success rates of PCC reversal? Were there provisions to demonstrate lesion stability vs progression after administration?
  • Do you have an impression of why the tailored reversal agents seemed to be so deadly? Were they used as a last resort due to cost? Did the centers have a hard time getting them or authorizing their use?

This abstract could be a gold mine!

Reference: The AAST prospective, observational, multicenter study investigating the initial experience with reversal of novel oral anticoagulants in trauma patients. AAST 2019, Oral Paper 58.

AAST 2019 #4: Kidney Injury And The “Random Forest Model”

Brace yourselves, this one is going to be intense! I selected the next paper due to its use of an unusual modeling technique, the random forest model (RFM). What, you say, is that? Exactly!

The RFM is a method that is relatively new to trauma research (in use there for only about five years) and applies artificial intelligence (AI) to tease out relationships in data. It is different from its better known cousin, the neural network. The RFM strikes a balance of flexibility so that it can deduce rules from datasets that might not otherwise be apparent.

The authors from the trauma program at Emory in Atlanta wanted to develop a predictive model to identify factors leading to acute kidney injury in trauma patients. They assembled a small dataset from 145 patients culled over a four-year period. Some esoteric lab tests were collected on these patients (including serum vascular endothelial growth factor and serum monocyte chemoattractant protein-1), the sequential organ failure assessment (SOFA) score was calculated, and then all was fed to the machine learning system.

The authors go into some detail about how they accomplished this work. The main results are the sensitivity and specificity of the RFM analysis. The RFM was also converted to a regression equation and similarly examined. The area under the receiver operating characteristic curve (AUROC) was calculated for both.

Here are the factoids when using SOFA and the two biomarkers above:

  • For RFM: sensitivity .82, specificity .61, AUROC 0.74
  • For the resulting logistic regression: sens 0.77, spec 0.64, AUROC 0.72

The authors conclude that the biomarkers “may have diagnostic utility” in the early identification of patients who go on to develop AKI and that “further refinement and validation” could be helpful.

I’ll say! First, RFM is a very esoteric analysis tool, especially in the trauma world. Typically, its strengths are the following:

  • Requires few statistical assumptions like normal distribution
  • Aggregates many weak learners (decision trees) to come up with a result
  • Shows the relative importance of each prediction feature, unlike the opacity of neural networks

The downsides?

  • It’s complicated
  • Doesn’t do well with data outside the ranges found in the dataset
  • May be difficult to interpret
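
For readers unfamiliar with the AUROC metric reported above, it is simply the probability that a randomly chosen positive case receives a higher predicted score than a randomly chosen negative one. A minimal stdlib-only sketch, with entirely made-up labels and scores:

```python
def auroc(labels, scores):
    """AUROC: probability a random positive case outranks a random negative one."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    # Count pairwise wins; a tie counts as half a win (Mann-Whitney formulation)
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Made-up predictions: two patients who developed AKI (1), two who did not (0)
print(auroc([1, 1, 0, 0], [0.9, 0.4, 0.5, 0.2]))  # 0.75
```

A value of 0.5 is a coin toss and 1.0 is perfect discrimination, which is why the 0.74 and 0.72 reported here are only fair.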

But the real problem here is with the results. At this point, they are weak at best. The algorithm predicts only 4 of 5 actual cases of AKI correctly and identifies barely more than half of patients who don’t. Coin toss. A good AUROC number is better than 0.8. The ones obtained here are fair to poor at best.

I understand that this is probably a pilot study. But it seems unlikely that adding more data points will help, especially if the same input parameters are to be used in the future. I think this is an interesting exercise, but I need help seeing any future clinical applicability!

Here are my questions for the presenter and authors:

  • Why did it occur to you to try this technique? Who thought to use it? Your statisticians? What was the rationale, aside from not being able to collect any more data for the study? The origin study should be very interesting!
  • Given the lackluster results, how are you planning to “refine and validate” to make them better?
  • What future do you see for using RFM in other trauma-related studies?

I’m intrigued! Can’t wait to hear the punch lines!

Reference: Random forest model predicts acute kidney injury after trauma laparotomy. AAST 2019, Oral Abstract #11.

AAST 2019 #3: Delayed Splenectomy In Pediatric Splenic Injury

Nonoperative management of the blunt injured spleen is now routine in patients who are hemodynamically stable and have no evidence of other significant intra-abdominal injury. The trauma group at the University of Arizona – Tucson scrutinized the failure rate of this approach in children, where it is not yet as well established.

They reviewed 5 years of data from the National Readmission Database. This is actually a collection of software and databases maintained by the federal government that seeks to provide information on a difficult to track patient group: those readmitted to hospitals after their initial event.

Patients who had sustained an isolated spleen injury who were less than 18 years old and who had either nonoperative management (NOM), angioembolization (AE), or splenectomy were analyzed. Outcome measures included readmission rate, blood transfusion, and delayed splenectomy. Common statistical techniques were used to analyze the data.

Here are the factoids:

  • About 9500 patients were included, with an average age of 14
  • Most (77%) underwent NOM, 16% had splenectomy, and 7% had AE (no combo therapies?)
  • Significantly more patients with high grade injury (4-5) had splenectomy or AE than did the NOM patients (as would be expected)
  • A total of 6% of patients were readmitted within 6 months of their initial injury: 12% of NOM *, 8% of AE *, and 5% of those with splenectomy (* = statistically significant)
  • The NOM and AE patients were also more likely to receive blood transfusions during their first admission
  • Delayed splenectomy occurred in 15% of cases (7% NOM and 5% AE) (these numbers don’t add up, see below)
  • Statistical analysis showed that delayed splenectomy was predicted by high grade injury (of course), blood transfusion (yes), and nonoperative management (huh?)
  • In patients who were readmitted and splenectomized, it occurred after an average of 14 days for the NOM group and 58 days for AE (huh?)

The authors concluded that “one in seven children had failure of conservative management and underwent delayed splenectomy within 6 months of discharge.” They stated that NOM and AE demonstrated only a temporary benefit and that we need to be better about selecting patients for nonoperative management.

Hmm, there are several loose ends here. First, what is the quality of the study group? Was it possible to determine if these patients had been treated in a trauma center? A pediatric vs adult trauma center? We know that there are outcome disparities in spleen trauma care at different types of trauma centers. 

Next, are they really pediatric patients? Probably not, since all patients under age 18 were included and the average age was 14. Injured spleens in pre-pubescent children behave much better than those in adolescents, which are more adult-like.

And what about the inherent bias in the “readmission data set?” You are looking only at patients who were readmitted! By definition, you are looking at a dataset of poorer outcomes. What if you had identified 9,500 initial patient admissions from trauma registries and then tried to find them in the readmission set? I know it’s not possible to do that, but if it were, I would bet the readmission and delayed splenectomy numbers would be far, far lower.

And what about those delayed splenectomy numbers? I can’t get the percentages to match up. If 15% of the 7965 patients who didn’t have an initial splenectomy had it done later, how do 7.2% of the 7318 NOM patients and 5.3% of the 1541 AE patients add up?
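
The mismatch is easy to verify with a few lines of arithmetic, using the patient counts and percentages quoted above:

```python
# Cross-checking the delayed splenectomy numbers quoted in the abstract.
nom_patients, ae_patients = 7318, 1541
nonop_total = 7965  # patients without an initial splenectomy, per the abstract
# (note the quoted counts don't reconcile either: 7318 + 1541 = 8859, not 7965)

# Delayed splenectomies implied by the subgroup rates (7.2% NOM, 5.3% AE)
implied = 0.072 * nom_patients + 0.053 * ae_patients
claimed = 0.15 * nonop_total  # "15% underwent delayed splenectomy"

print(round(implied))  # ~609 implied by the subgroup percentages
print(round(claimed))  # ~1195 implied by the overall 15% claim
```

The subgroup rates imply roughly half the delayed splenectomies that the headline 15% figure does, so at least one of the reported numbers cannot be right.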

Bottom line: The usual success rate tossed around for well-selected nonoperative management is around 93% when optional adjunctive AE is part of the algorithm. That’s a 1 in 14 failure rate, and it generally occurs during the initial hospitalization. In my experience, readmissions are very rare. And that’s for adults; children tend to behave even better!

I wouldn’t consider changing my practice yet based on these findings, but the devil will probably be in the details!

Here are some questions for the presenter and authors:

  • Please provide some detail on the data set. We really need to know an age breakdown and the types of centers they were treated at, if available.
  • Discuss the potential data set bias working backwards from a database that includes only readmitted patients.
  • Please clarify the delayed splenectomy statistics to help match up the numbers.

I’m anticipating a great presentation at the meeting!

Reference: Delayed splenectomy in pediatric splenic injuries: is conservative management overused? AAST 2019, Oral Abstract #8.

AAST 2019 #2: Predicting Abdominal Operation After Blunt Trauma – The RAPTOR Score

Patients with blunt abdominal injury, particularly those with seat belt signs, can be diagnostically very challenging. If the patient is stable and does not have peritonitis, CT scan is typically the first stop after the trauma resuscitation room. As many trauma professionals know, the radiographic findings can be subtle and/or not very convincing.

The trauma group at the University of Tennessee in Memphis sought to identify specific findings that might help us better identify patients who will need laparotomy. They retrospectively identified all of their mesenteric injuries over a five-year period. A single blinded radiologist (is this an oxymoron or not?) reviewed the images of all 151 patients, looking for predictors of bowel or mesenteric injury. All of the predictors were then converted into a scoring system called RAPTOR (radiographic predictors of therapeutic operative intervention; kind of a stretch?). These predictors were then subjected to multivariate regression analyses to try to tease out whether any were independent predictors of injury.

Here are the factoids:

  • A total of 151 patients were identified over the 5 year period; 114 underwent laparotomy
  • Of the 114 operated patients, two thirds underwent a therapeutic laparotomy and the other third were nontherapeutic
  • There were no missed injuries in the non-operated patients
  • The components of the RAPTOR score were culled from all the potential findings, and were determined to be
    • Multifocal hematoma
    • Acute arterial extravasation
    • Bowel wall hematoma
    • Bowel devascularization
    • Fecalization (of what??)
    • Free air
    • Fat pad injury (??)
  • Regression then showed that only three of these (extravasation, bowel devascularization, and fat pad injury) were independent predictors of injury
  • If three or more RAPTOR variables were present, then the sensitivity, specificity, and positive predictive values for injury were 67%, 85%, and 86%, and an area under the receiver operating characteristic curve (AUROC) of 0.91
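
The abstract implies a simple tally: one point per radiographic finding, positive at three or more. Here is a hypothetical sketch of how such a count might be applied; the cutoff of three comes from the abstract, but the variable names and the rest of the code are my own illustration, not the authors' implementation.

```python
# Hypothetical sketch of a RAPTOR-style tally (one point per finding,
# positive at >= 3). Finding names are paraphrased from the abstract.

RAPTOR_FINDINGS = [
    "multifocal_hematoma",
    "arterial_extravasation",
    "bowel_wall_hematoma",
    "bowel_devascularization",
    "fecalization",
    "free_air",
    "fat_pad_injury",
]

def raptor_positive(findings, cutoff=3):
    """Return True when at least `cutoff` RAPTOR findings are present."""
    score = sum(1 for f in RAPTOR_FINDINGS if findings.get(f, False))
    return score >= cutoff

# e.g. a scan showing extravasation, devascularization, and free air would flag
ct = {"arterial_extravasation": True, "bowel_devascularization": True, "free_air": True}
print(raptor_positive(ct))  # True
```

Note that a score built this way treats all seven findings as equally weighted, which is exactly the concern raised below about combining variables that may not be independent.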

The authors concluded that the RAPTOR score provided a simplified approach to detect patients who might benefit from early laparotomy and not serial abdominal exams. They go further and say it could potentially be an invaluable tool when patients don’t have clear indications for operation.

It looks like there are two things going on here at the same time. First, a new potential scoring system is being piloted. And second, a regression analysis is being used to examine the data as well. 

But first, let’s back up to the beginning. This is a retrospective study, with a relatively small size. This makes it far harder to ensure that the results will be significant, or at least meaningful. Use of a single radiologist can also be problematic, especially since many of the CT findings with this mechanism of injury are subtle. 

The reported performance of the RAPTOR score is a bit weak. The listed statistics show that it accurately identified only two thirds of those who needed an operation and 85% of those who didn’t. The AUROC for the regression is very good, though. Could a good old-fashioned serial exam scenario be better?

Bottom line: It will be interesting to hear the background on RAPTOR vs regression, and find out how the authors will use or are using these tools.

Here are my questions for the presenter and authors:

  • Why did you decide to create a scoring system that uses a set of variables that may be dependent on each other? Isn’t the regression equation better?
  • Has this information changed your practice? It seems that two of the three regression variables are fairly obvious reasons to operate (active extravasation and devascularization). Do you really need the rest?
  • Has this study helped you decrease the non-therapeutic laparotomy rate for blunt abdominal injury?
  • And please define fecalization and fat pad injury!

I’m looking forward to hearing this presentation!

Reference: Radiographic predictors of therapeutic operative intervention after blunt abdominal trauma: the RAPTOR score. AAST 2019, Oral Paper 6.