
EAST 2018 #7: Cervical Spine Injury And Dysphagia

One of the under-appreciated complications of cervical spine fractures is dysphagia. This problem disproportionately affects the elderly, and is most common in patients with C1-C3 fractures. Swallowing becomes even more difficult when the head is held in position by a rigid cervical collar, which is the most common treatment for this injury.

How common is dysphagia in patients with cervical spine injury? What is the best way to detect it? These questions were asked by the group at MetroHealth Medical Center in Cleveland. They retrospectively reviewed 14 months of experience with patients presenting with cervical spine injury, then prospectively studied routine, nurse-driven bedside dysphagia screening in similar patients for a year. They wanted to test the utility of screening and judge its impact on outcome.

Here are the factoids:

  • 221 patients were prospectively studied and received a bedside dysphagia screen, but only 114 met all inclusion criteria and had the protocol properly followed (!)
  • 17% had dysphagia overall, with an incidence of 15% in cervical spine injuries and 31% in those with a concomitant spinal cord injury
  • The bedside dysphagia screen was 84% sensitive, 96% specific, with positive and negative predictive values of 80% and 97%, respectively
  • There were 6/214 patients with dysphagia complications in the retrospective group vs 0/114 in the screened group

Bottom line: This abstract actually puts a number on the incidence of dysphagia in this group of patients. I wish the patient numbers had been higher, but they are still respectable. The results are convincing, and the negative predictive value is excellent. If the screen is passed, the patient should do well with feeds. I recommend that all patients with cervical spine injury treated with a rigid collar undergo this simple screen, with appropriate diet adjustments to limit complications.
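As a back-of-the-envelope check (my arithmetic, not data from the abstract), the reported predictive values follow directly from the sensitivity, specificity, and the 17% overall dysphagia rate. A minimal sketch in Python:

```python
# Back-of-the-envelope check: derive predictive values from the reported
# sensitivity, specificity, and prevalence. These are not the study's raw counts.
sens, spec, prev = 0.84, 0.96, 0.17

ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)

print(f"PPV ~ {ppv:.0%}, NPV ~ {npv:.0%}")  # roughly 81% and 97%
```

These land close to the reported 80% and 97%; the small difference in PPV comes from rounding, since the abstract does not give the underlying 2x2 counts.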

Here are some questions for the authors to consider before their presentation:

  • Please share the details of the nurse-driven component of the bedside dysphagia screen, and how you determine when a formal barium swallow is indicated
  • Why did your prospective study group drop from 221 to 114?
  • When did you typically perform the screen? Fracture swelling may not peak for 3 days, so early screening may not be as good as later screening.
  • This was a nice study, with a very practical and actionable result!

Reference: EAST 2018 Podium abstract #10.

EAST 2018 #6: Predicting Deterioration After Rib Fracture

Yes, more on predictions. We all know that chest trauma is one of the bigger causes of morbidity and mortality in trauma patients. A number of methods have been developed to predict which patients might deteriorate after sustaining chest injury, and where to place them in the hospital on admission. Elderly patients are at particular risk, and determining who needs closer observation, or admission to the intensive care unit, can be very helpful.

The authors of this abstract from West Virginia University took a slightly different approach. Let’s say we have already placed a patient in a ward bed after admission for rib fractures. How can we monitor them and preemptively increase interventions or move them to the ICU before they crash and burn?

The WVU group developed a rib fracture guideline nearly 10 years ago, and retrospectively reviewed their experience with 1106 patients over a 6 year period. They measured serial forced vital capacity readings in these patients, and arbitrarily used thresholds of <1, 1-1.5, and >1.5 to predict deterioration.

As a refresher, the vital capacity is the maximum volume of air that can be voluntarily exhaled after a full inspiration.

Here are the factoids:

  • Only patients with initial FVC > 1 were enrolled in the study
  • They were then separated into two groups: those whose FVC remained greater than 1 (83%), and those in whom it decreased below 1 (17%)
  • Patients in the low FVC group had significantly more complications like pneumonia, intubation, or unplanned transfer to ICU (15% vs 3%) and a longer length of stay
  • However, the low FVC group also had a higher chest AIS score, higher ISS, were 10 years older, and were twice as likely to have COPD

Bottom line: Seems like a promising study, right? Follow an easy-to-measure, objective test and step up the level of care if it dips below a critical value? Not so fast. The two study groups look very different. No significance testing was shown for these differences, but they certainly appear real. Couldn’t the deterioration have been predicted from age and degree of chest injury alone?
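One way to get at the confounding question (and the analysis question in the list below) is a simple multivariable adjustment. Here is a minimal sketch, with entirely hypothetical file and column names, testing whether the low-FVC flag still predicts complications after adjusting for age, ISS, chest AIS, and COPD. This is an illustration, not the authors' analysis.

```python
# Hypothetical sketch: does a drop in FVC below the threshold still predict
# complications after adjusting for age, ISS, chest AIS, and COPD?
# The file name and all column names are invented for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("rib_fracture_cohort.csv")  # one row per patient (hypothetical)

# complication: 0/1 outcome; low_fvc: 1 if FVC fell below the threshold; copd: 0/1
model = smf.logit("complication ~ low_fvc + age + iss + chest_ais + copd", data=df).fit()

print(model.summary())
print(np.exp(model.params))  # adjusted odds ratios
```

If the low_fvc odds ratio stays well above 1 after adjustment, the serial FVC measurement adds information beyond what is already known at admission; if it collapses toward 1, the skepticism above is justified.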

Here are some questions for the authors to consider before their presentation:

  • Please provide the details of your rib fracture pathway
  • Your FVC threshold does not have any units assigned. I am assuming that it is in liters. Please clarify.
  • Why did you describe three cohorts initially, then settle on the lowest (FVC < 1) as your final threshold? Was there a method to this? Why not 1.5? Or 0.75?
  • Did you do any further analysis to try to determine if the differences between the groups were responsible for the differences in complication rates?
  • Big picture question: So why couldn’t you just use a specific age/ISS/comorbidity threshold and predict failure at the time of admission, and forget about measuring several FVC values?

Reference: EAST 2018 Podium paper #9.

EAST 2018 #5: Predicting Absence Of Pediatric Abdominal Injury

More on prediction systems today! The authors of this abstract used good old mathematics, albeit very fancy math, instead of a machine learning algorithm. The specifics of this tool were described in an article published in JACS earlier this year (see reference).

The authors were interested in finding a way to decrease the use of CT scan for evaluating blunt abdominal trauma in children. After developing the model using prospectively collected data from 14 Level I pediatric trauma centers, they sought to validate it using a public dataset from the Pediatric Emergency Care Applied Research Network (PECARN). This dataset contained more than 2,400 records; 10% of the patients had an intra-abdominal injury (IAI), and 2.5% had an IAI that required intervention (IAI-I).

Here are the factoids:

  • There were five prediction rule variables: complaint of abdominal pain; tenderness, distension, or contusion on exam; abnormal chest x-ray; AST > 200; and elevated pancreatic enzymes
  • Prediction rule sensitivity was 98% and specificity was 37% for IAI, and 100% / 35% for IAI-I
  • The negative predictive value for finding any abdominal injury was 99.3%, and for injury requiring intervention was 100%
  • Unfortunately, nearly half of the very low risk children underwent CT scanning anyway!

Bottom line: This is a nice validation study for a well-designed prediction tool. It builds on previous work published earlier this year. The variables make clinical sense. Although the number of patients with injury was relatively small, I believe these results should be considered for incorporation into our blunt pediatric trauma evaluation protocols now!
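To make that recommendation concrete, here is a minimal sketch of how a five-variable rule like this could be wired into an evaluation protocol. The class, field names, and threshold handling are my own assumptions for illustration, not the authors' published implementation.

```python
# Illustrative sketch of a five-variable "very low risk" rule for blunt
# pediatric abdominal trauma. Field names are assumptions, not the authors' code.
from dataclasses import dataclass

@dataclass
class PedsAbdominalEval:
    abdominal_pain: bool               # complaint of abdominal pain
    abnormal_abd_exam: bool            # tenderness, distension, or contusion
    abnormal_cxr: bool                 # abnormal chest x-ray findings
    ast: float                         # aspartate aminotransferase (IU/L)
    elevated_pancreatic_enzymes: bool

def very_low_risk(p: PedsAbdominalEval) -> bool:
    """True only if all five rule variables are negative."""
    return not (
        p.abdominal_pain
        or p.abnormal_abd_exam
        or p.abnormal_cxr
        or p.ast > 200
        or p.elevated_pancreatic_enzymes
    )

# Example: normal exam, normal chest x-ray, AST 45, normal pancreatic enzymes
child = PedsAbdominalEval(False, False, False, 45.0, False)
print("Consider omitting CT" if very_low_risk(child) else "CT or further workup")
```

In a rule-out tool like this, any single positive variable removes the patient from the very-low-risk group, which is why the sensitivity and negative predictive value are the numbers that matter.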

Here are some questions for the authors to consider before their presentation:

  • The liver function and pancreatic enzyme test results take some time to return. How much do they actually contribute to the negative predictive value, given that the injuries they screen for are relatively uncommon?
  • What are considered abnormal chest x-ray findings?
  • How do you recommend incorporating this into the care of trauma activation patients? Should they wait 30 minutes in the trauma bay for the lab results to come back? Patients undergoing a more routine evaluation for abdominal trauma would not be unduly delayed.
  • Be prepared to explain how you derived the decision rule in very simple language.

References:

  • EAST 2018 Podium paper #7.
  • Identifying Children at Very Low Risk for Blunt Intra-Abdominal Injury in Whom CT of the Abdomen Can Be Avoided Safely. JACS 224(4):449-458.

EAST 2018 #4: Machine Prediction Of Instability In ICU Patients

In trauma care, as in all of medical care, we try to predict the future. What injuries does my patient have? What will happen if I treat this fracture that way? Is she going to live? How much disability can we expect given this degree of head injury?

Trauma professionals are constantly tapping into their own experience and that of others to predict the future and try to shape it in the best way for their patients. And now more than ever, with the combination of mathematical algorithms and powerful machine learning systems, we’ve been able to move past simple correlations and linear regressions to try to peer into that future.

A group at Washington University in St. Louis previously developed a real-time risk score that claims to predict the need for cardiovascular support in ICU patients. It is called the hemodynamic instability indicator (HII). The exact details of this score are not included in the abstract, and I have not found it published yet, so I have no idea how it was derived. The presenters prospectively applied this system to 126 stable patients who were admitted to the ICU and were expected to stay at least 24 hours and survive at least 48 hours. They wanted to determine how well HII predicted an episode of hemodynamic instability.

Here are the factoids:

  • The majority were male (64%) and acute care surgery patients (55%), with a median age of 60
  • Only 60 of the 126 patients had sufficient data to calculate HII in the pre-intervention period of unstable patients (!)
  • HII predicted the need for pressors/inotropes with a sensitivity of 0.56 and specificity of 0.76. The authors claim that this was statistically significant (p < 0.01) (???)
  • The system’s predictions got better as the time of intervention for instability drew closer

Bottom line: Machine learning and prediction systems can be tricky tools. They are very good at identifying patterns without anything more than a good training dataset. However, they are only as good as that dataset. It is crucial that the system be trained with and tested against other large sets of data with a variety of patients. Otherwise, you will create a great system for predicting events in 60-year-old male acute care surgery patients, and no one else.
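One of the questions below asks about a receiver operating characteristic (ROC) curve. For illustration only, here is a minimal sketch of how a continuous score like HII could be evaluated that way; the scores and outcomes are simulated stand-ins, not study data.

```python
# Illustration only: evaluating a continuous risk score with an ROC curve.
# The scores and outcomes below are simulated stand-ins, not HII or study data.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(42)
unstable = rng.integers(0, 2, size=126)        # 1 = needed pressors/inotropes
score = np.where(unstable == 1,
                 rng.normal(0.6, 0.2, 126),    # scores for unstable patients
                 rng.normal(0.4, 0.2, 126))    # scores for stable patients

fpr, tpr, thresholds = roc_curve(unstable, score)
print(f"AUC = {roc_auc_score(unstable, score):.2f}")

# A single sensitivity/specificity pair (like 0.56 / 0.76) is just one point on
# this curve; the full curve and its AUC describe the trade-off at every threshold.
for f, t, th in list(zip(fpr, tpr, thresholds))[:5]:
    print(f"threshold {th:.2f}: sensitivity {t:.2f}, specificity {1 - f:.2f}")
```

Reporting the full curve (or at least the AUC) would make it much easier to judge whether HII outperforms chance across the range of possible alarm thresholds.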

Here are some questions for the authors to consider before their presentation:

  • Be prepared to describe in detail how you derived the original HII system. How big was the dataset, and what did it look like in terms of demographics?
  • What statistics did you use to conclude that the p value was < 0.01? Your sensitivity and specificity numbers do not look that good. What about negative and positive predictive values?
  • You mentioned that the system made better predictions as the episode of instability grew closer. Predicting an adverse event 24 hours in advance vs 5 minutes in advance is very different. How near did the event have to be for good prediction? Did this factor into your significance calculation above?
  • Why not use a receiver operating characteristic curve to show your data? It is a much better analysis tool.
  • Big picture questions: Why do you expect that you can generalize the results of your HII system to new and disparate datasets? Have you tried it on major trauma patients?

Reference: EAST 2018 Podium paper #5.

EAST 2018 #3: Platelet Transfusion In Patients On Anti-Platelet Agents?

When patients with significant brain injuries present while taking drugs that interfere with clotting, we seem to have this burning desire to neutralize those drugs, right? Warfarin? Give PCC. Aspirin or clopidogrel? Well, not quite so easy. You can’t neutralize them, but can’t you just transfuse some working platelets?

That is the current practice among many clinicians, although there isn’t really much data to support it. A group at Iowa Methodist Hospital in Des Moines looked at using a commercial platelet reactivity test (PRT) to determine if platelets should be given in patients with moderate to severe TBI who were known or suspected to be taking an anti-platelet drug.

This was a retrospective study of 167 patients with a head Abbreviated Injury Scale score of 2 or higher. Patients had to have received at least 2 head CT scans in order to judge progression of any bleeds.

Here are the factoids:

  • Nearly a third of patients (29%) were non-therapeutic on their anti-platelet medication, meaning that platelet function as judged by PRT was not abnormal
  • No platelet transfusions were given to 92% of patients with non-therapeutic meds, and only 2 of these patients (4%) had clinical progression of their bleed
  • Overall, using a selective platelet transfusion policy decreased platelet transfusions and their attendant costs by about half

Bottom line: So this is one of those “how we do it” studies. This means that the authors have been doing it this way for a while, and wanted to examine the results. It is not a comparison to their historical control, but it’s likely that their current usage is much lower than it used to be. Regardless, the results are impressive, and would seem to indicate that we are throwing a lot of platelets away based on a rumor that our patient is taking an anti-platelet medication.

Here are some questions for the authors to consider before their presentation:

  • How did you define “clinically significant bleed” in the two patients that had them? Did they eventually get some platelets? Did it help?
  • Have you looked at patients that did receive platelets for an abnormal PRT to see if their platelet function improves?
  • Big picture question: What evidence is there that PRT results are meaningful? How do we know that abnormal PRT is associated with bleeding in head-injured patients, or that normal PRT is not associated with it? In other words, is it a valid test?

Reference: EAST 2018 Podium abstract #4.