All posts by TheTraumaPro

EAST 2018 #10: Fresh Whole Blood And Survival

Decades ago, our blood bank system began disassembling units of donated blood, ushering in the era of component therapy. Now, it seems, we are seeing the light and starting to revisit the concept of using fresh whole blood. To see the difference between fresh whole blood and “rebuilt” whole blood from components, read this post.

The military has a keen interest in studying the practice of using whole blood, since combat locations have a considerable number of “walking blood banks” (i.e., soldiers). An abstract being presented tomorrow at EAST was submitted by the US Army Institute of Surgical Research. They performed a straightforward study looking at mortality in combat casualties, comparing troops who received fresh whole blood (FWB) to those who received component therapy (kind of). They used regression analysis to try to identify and control for other variables, and also analyzed a subgroup who required massive transfusion.

Here are the factoids:

  • A total of 215 soldiers received FWB, and 896 did not. Of note, the non-FWB patients did not necessarily receive platelets.
  • Overall, survival was similar in both groups at about 94%
  • After controlling for physiologic injury severity and blood product/crystalloid volumes, the risk of death was twice as high in the group that did not receive FWB
  • Survival was higher in FWB patients who underwent massive transfusion (89% vs 80%), although this was only marginally significant

Bottom line: I see this as an interesting but preliminary study, with many unanswered questions. It’s not really a comparison of patients receiving fresh whole blood vs component therapy, because not all of the latter patients received platelets. It also did not take into account the specific anatomic injury areas, particularly critical ones such as brain injury. But this study should certainly stimulate some better-designed projects for follow-up.

Here are some questions for the authors to consider before their presentation:

  • Did you do a power analysis to estimate how many patients would need to be enrolled to discover a real difference? If so, how many?
  • Have you performed a subanalysis on patients in the non-FWB group who received platelets? This would then be a comparison of FWB vs component therapy.
  • Any idea of the age of the components given vs the day 0 FWB?
  • Be sure to show and interpret your significance testing in the presentation
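To put the power-analysis question in concrete terms, here is a rough sample-size estimate for the massive transfusion survival comparison (89% vs 80%) using the standard two-proportion formula. The alpha and power choices below are my assumptions, not anything stated in the abstract:

```python
import math

# Standard normal quantiles (hardcoded to keep this stdlib-only)
Z_ALPHA = 1.96    # two-sided alpha = 0.05
Z_BETA = 0.8416   # power = 0.80

def n_per_group(p1: float, p2: float) -> int:
    """Approximate sample size per group needed to detect a
    difference between proportions p1 and p2 with a two-sided
    two-proportion z-test."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = ((Z_ALPHA + Z_BETA) ** 2 * variance) / (p1 - p2) ** 2
    return math.ceil(n)

# Survival in the massive transfusion subgroup: 89% FWB vs 80% non-FWB
print(n_per_group(0.89, 0.80))  # 250 per group
```

Roughly 250 patients per group would be needed to reliably detect that survival difference, which is why a marginally significant result in a (presumably much smaller) massive transfusion subgroup is not surprising.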

Reference: EAST 2018 Podium paper #15.

EAST 2018 #9: Occupational Exposure During ED Thoracotomy

ED thoracotomy is performed infrequently, under high stress circumstances, and with high stakes for the victim. Thus, it is a setup for mayhem. If not conducted properly, it can be noisy, disorganized, and dangerous due to the possibility of blood exposure. Unfortunately, we don’t know where these trauma patients have been. Previous data shows that the incidence of HIV, hepatitis, and other infectious agents is low but significant.

Occupational exposure of healthcare providers to these infectious agents via needlestick/cut, mucous membrane, open wound, or eyes can happen during any surgical procedure. But the possibility during the less controlled ED thoracotomy would seem to be greater. So the group at the University of Pennsylvania decided to perform a prospective, observational study at 16 trauma centers over a 2 year period. A total of 1360 participants who were involved in 305 ED thoracotomies were surveyed. They analyzed the data for risk of occupational exposure.

Here are the factoids:

  • Mechanism was 68% gunshot, 57% were undergoing prehospital CPR, and 37% arrived with signs of life
  • 22 exposures were documented, or a rate of 7% per thoracotomy and 1% per participant
  • There was no difference by trauma center level (I vs II) or by hours worked at the time of the procedure
  • Those with exposures were typically trainees (68%) who sustained a percutaneous injury (86%) during the actual procedure (73%)
  • Full personal protective precautions were only utilized by 46% of exposed providers (!!)
  • Each additional piece of personal protective equipment reduced the risk of exposure by 32%
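As a quick check of the arithmetic above, and an illustration of how a 32% per-item risk reduction compounds across multiple pieces of PPE (I am treating the reported figure as a simple multiplicative relative risk, which is my assumption; the abstract most likely reports an adjusted odds ratio):

```python
# Figures from the abstract
exposures, thoracotomies, participants = 22, 305, 1360

print(round(exposures / thoracotomies * 100))    # 7 -> ~7% per thoracotomy
print(round(exposures / participants * 100, 1))  # 1.6 -> close to the ~1% per participant reported

# If each added piece of PPE cuts exposure risk by 32%, the
# relative risk after n items (multiplicative assumption) is 0.68^n
for n in range(1, 5):
    print(n, round(0.68 ** n, 2))
```

Under that assumption, a provider in full precautions (four or five items) carries only about a fifth of the baseline exposure risk, which reinforces the bottom line below: wear your gear.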

Bottom line: The authors concluded that the incidence of exposure to patient blood is the same as for other operative procedures. Hmm. They also state that the fear of occupational exposure should not deter providers from performing thoracotomy.

I certainly agree that one should always follow the accepted indications for performing ED thoracotomy. I’m not so sure about the comparison with non-emergent procedures, since the numbers are fairly low. However, of one thing there is no doubt: wear your personal protective equipment! You never know when you might be exposed!

Here are some questions for the authors to consider before their presentation:

  • What kind of power analysis did you do to ensure that you could draw reasonable comparisons between thoracotomy and non-emergent procedures?
  • Please provide a detailed breakdown of how you sliced and diced your numbers in terms of type of provider, hours worked, trainee level, precautions taken, etc.
  • I enjoyed this paper and look forward to hearing the details!

EAST 2018 #8: 4-Factor PCC Plus Plasma. What?

Many trauma centers have moved toward reversing warfarin with prothrombin complex concentrate (PCC) in place of plasma due to the speed and low volume of infusate with the former. In the US, 3-factor PCC was approved by the FDA first, but it has a lower Factor VII content. This usually required infusion of plasma anyway to make up the Factor VII, so what was the point (although there was some debate on this)?

Then 4-factor PCC was approved, and it alone could be used for warfarin reversal. But so far, PCC has not been routinely used for reversal of coagulopathy from trauma. We still rely on plasma infusion for this. The abstract I am discussing today compares reversal with 4-factor PCC alone to reversal with 4-factor PCC and plasma in coagulopathic patients.

This study retrospectively reviewed adult patients who received one of the above treatments over a 3 year period. Patients who were on oral anticoagulants were excluded. The goal INR was 1.5, and time to correction and the number of PRBC units transfused were measured.

Here are the factoids:

  • There were 516 patients who met criteria, but only 80 FFP patients and 40 PCC+FFP patients were analyzed
  • Patients were an average of 58 years old, had an ISS of 29, and 87% had sustained blunt injury
  • PCC+FFP resulted in faster correction of INR (373 min vs 955 min)
  • PCC+FFP received fewer units of PRBC (7 vs 9 units) and FFP (5 vs 7 units)
  • Mortality rate was lower in the PCC+FFP group (25% vs 33%)
  • There was no difference in thrombotic complications

Bottom line: Well, this is an interesting start. I think this abstract suggests that we should incorporate giving 4-factor PCC into the massive transfusion protocol to try to reduce the INR faster. However, the patient numbers are low and several of the results are only weakly significant (units transfused, mortality, p=0.04). Some additional confirmative studies will be needed before this is ready for prime time!

Here are some questions for the authors to consider before their presentation:

  • Why did your study group drop from 516 to 120? What impact might this have had on your analyses?
  • Did you look at the correction times stratified by initial INR? Severely coagulopathic patients could skew the numbers, especially if they were predominantly in only one of the study groups.
  • It did not look like the patients received much PRBC or plasma (<10 units of each). How injured / coagulopathic were they?
  • The mortality rates are rather high for an average ISS of 29. Did you analyze to see what impact ISS had on mortality? Could this have influenced your analysis?
  • Big picture question: Should we consider routinely giving PCC as part of the massive transfusion protocol in patients who are known to be coagulopathic? Based on the graph, it looks like patients will need more than a single dose. Reversal time was still very long for PCC+FFP.

Thanks for an intriguing abstract!

Reference: EAST 2018 Podium paper #12.

EAST 2018 #7: Cervical Spine Injury And Dysphagia

One of the under-appreciated complications of cervical spine fractures is dysphagia. This problem disproportionately affects the elderly, and is most common in patients with C1-C3 fractures. Swallowing becomes even more difficult when the head is held in position by a rigid cervical collar, which is the most common treatment for this injury.

How common is dysphagia in patients with cervical spine injury? What is the best way to detect it? These questions were asked by the group at MetroHealth Medical Center in Cleveland. They retrospectively reviewed their experience with patients presenting with cervical spine injury for 14 months, then prospectively studied the use of routine, nurse-driven bedside dysphagia screening in similar patients for a year. They wanted to test the utility of screening, and judge its impact on outcome.

Here are the factoids:

  • 221 patients were prospectively studied and received a bedside dysphagia screen, but only 114 met all inclusion criteria and had the protocol properly followed (!)
  • 17% had dysphagia overall, with an incidence of 15% in cervical spine injuries and 31% in those with a concomitant spinal cord injury
  • The bedside dysphagia screen was 84% sensitive, 96% specific, with positive and negative predictive values of 80% and 97%, respectively
  • There were 6/214 patients with dysphagia complications in the retrospective group vs 0/114 in the screened group
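The reported predictive values can be sanity-checked against the sensitivity, specificity, and overall dysphagia prevalence using Bayes' rule. All inputs below are taken from the abstract; the calculation is just an internal consistency check, not part of the study:

```python
# Figures from the abstract
sens, spec = 0.84, 0.96
prev = 0.17  # overall dysphagia incidence

# Bayes' rule for predictive values at this prevalence
ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)

print(round(ppv * 100))  # 81 -> matches the reported ~80%
print(round(npv * 100))  # 97 -> matches the reported 97%
```

The numbers hang together nicely, which lends some credibility to the screening data.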

Bottom line: This abstract actually puts a number on the incidence of dysphagia in this group of patients. I wish the patient numbers could have been higher, but the results are still convincing, and the negative predictive value is excellent. If the screen is passed, then the patient should do well with feeds. I recommend that all patients with cervical spine injury treated with a rigid collar undergo this simple screen, with appropriate diet adjustments to limit complications.

Here are some questions for the authors to consider before their presentation:

  • Please share the details of the nurse-driven component of the bedside dysphagia screen, and how you determine when a formal barium swallow is indicated
  • Why did your prospective study group drop from 221 to 114?
  • When did you typically perform the screen? Fracture swelling may not peak for 3 days, so early screening may not be as good as later screening.
  • This was a nice study, with a very practical and actionable result!

Reference: EAST Podium abstract #10.

EAST 2018 #6: Predicting Deterioration After Rib Fracture

Yes, more on predictions. We all know that chest trauma is one of the bigger causes of morbidity and mortality in trauma patients. A number of methods have been developed to predict which patients might deteriorate after sustaining chest injury, and where to place them in the hospital on admission. Elderly patients are at particular risk, and determining who to look after more closely, and/or in the intensive care unit, can be very helpful.

The authors of this abstract from West Virginia University took a slightly different approach. Let’s say we have already placed a patient in a ward bed after admission for rib fractures. How can we monitor them and preemptively increase interventions or move to the ICU before they crash and burn?

The WVU group developed a rib fracture guideline nearly 10 years ago, and retrospectively reviewed their experience with 1106 patients over a 6 year period. They measured serial forced vital capacity readings in these patients, and arbitrarily used thresholds of <1, 1-1.5, and >1.5 to predict deterioration.

To refresh your memory, look at the following chart. The vital capacity is the maximum volume that can be voluntarily exhaled.

Here are the factoids:

  • Only patients with initial FVC > 1 were enrolled in the study
  • They were then separated into two groups: those whose FVC remained greater than 1 (83%), and those in whom it decreased below 1 (17%)
  • Patients in the low FVC group had significantly more complications like pneumonia, intubation, or unplanned transfer to ICU (15% vs 3%) and a longer length of stay
  • However, the low FVC group also had a higher chest AIS score, higher ISS, were 10 years older, and were twice as likely to have COPD
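As a sketch of how the FVC thresholds above could drive a ward-monitoring rule. The function name and tier labels are my own illustration, not the WVU pathway, and I am assuming the thresholds are in liters (the abstract does not state units):

```python
def fvc_risk_tier(fvc_liters: float) -> str:
    """Classify a serial FVC reading against the abstract's
    thresholds (<1, 1-1.5, >1.5; units assumed to be liters)."""
    if fvc_liters < 1.0:
        return "high"      # the group that deteriorated in the study
    if fvc_liters <= 1.5:
        return "moderate"
    return "low"

# Hypothetical serial readings for one ward patient
for reading in (1.8, 1.4, 0.9):
    print(reading, fvc_risk_tier(reading))
```

A falling trend into the "high" tier would trigger stepped-up care under this scheme, which is essentially what the authors tested.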

Bottom line: Seems like a promising study, right? Check out an easy-to-measure, objective test and step up your level of care if it dips below a certain critical value? But not so fast. The two study groups look like they are very different. No significance testing was shown for these differences, but they certainly look substantial. Couldn’t their deterioration have been predicted based on their age and degree of chest injury?

Here are some questions for the authors to consider before their presentation:

  • Please provide the details of your rib fracture pathway
  • Your FVC threshold does not have any units assigned. I am assuming that it is in liters. Please clarify.
  • Why did you describe three cohorts initially, then settle on the lowest (FVC < 1) as your final threshold? Was there a method to this? Why not 1.5? Or 0.75?
  • Did you do any further analysis to try to determine if the differences between the groups were responsible for the differences in complication rates?
  • Big picture question: So why couldn’t you just use a specific age/ISS/comorbidity threshold and predict failure at the time of admission, and forget about measuring several FVC values?

Reference: EAST 2018 Podium paper #9.