Category Archives: Performance Improvement

NFTI: A Nifty Tool To Replace The Cribari Grid?

In my last post, I reviewed using the Cribari grid to evaluate over- and under-triage at your trauma center.  This technique has been a mainstay for over a decade, but it has shortcomings. The most important one is that it relies only on the Injury Severity Score (ISS) to judge whether some type of mistriage occurred.  As you know, the ISS is usually calculated after discharge, so it can only be applied after the fact.

A few years ago, the group at Baylor University Medical Center in Dallas sought to develop an alternate method of determining who needed a full trauma team activation. They chose resource utilization as their surrogate for selecting these cases and reviewed 2.5 years of registry data from their Level I center. After several iterations, they settled on six “need for trauma intervention” (NFTI) criteria:

  • blood transfusion within 4 hours of arrival
  • discharge from ED to OR within 90 minutes of arrival
  • discharge from ED to interventional radiology (IR)
  • discharge from ED to ICU AND ICU length of stay at least 3 days
  • require mechanical ventilation during the first 3 days, excluding anesthesia
  • death within 60 hours of arrival
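The "any one of six" logic above can be sketched as a simple check. This is an illustrative sketch only; the field names are hypothetical and not taken from the paper or any registry standard.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TraumaEncounter:
    """Hypothetical registry fields for one patient; names are illustrative."""
    transfusion_within_4h: bool          # blood within 4 hours of arrival
    ed_to_or_within_90min: bool          # ED to OR within 90 minutes
    ed_to_ir: bool                       # ED discharged to interventional radiology
    ed_to_icu_los_days: Optional[float]  # ICU LOS if admitted to ICU from ED, else None
    vent_first_3d_nonanesthesia: bool    # mechanical ventilation in first 3 days, excluding anesthesia
    death_within_60h: bool               # death within 60 hours of arrival

def is_nfti_positive(e: TraumaEncounter) -> bool:
    """NFTI+ means at least ONE of the six criteria is met."""
    return any([
        e.transfusion_within_4h,
        e.ed_to_or_within_90min,
        e.ed_to_ir,
        e.ed_to_icu_los_days is not None and e.ed_to_icu_los_days >= 3,
        e.vent_first_3d_nonanesthesia,
        e.death_within_60h,
    ])
```

For example, a patient admitted from the ED to the ICU for only two days, with none of the other criteria, would be NFTI negative; a single transfusion within four hours is enough to be NFTI positive.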

Patients who had at least one NFTI criterion were considered candidates for full trauma activation, and those who met none were not. Here are the factoids for this study:

  • There were a total of 2260 full trauma activations and 2348 partial activations during the study period (a little over 900 per year for each level)
  • Roughly 2/3 of full activations were NFTI +, and 1/3 were NFTI –
  • For partial activations, 1/4 were NFTI + and 3/4 were NFTI –
  • Only 13 of 561 deaths were NFTI – and all had DNR orders in place

The authors concluded that NFTI assesses anatomy and physiology using only measures of early resource utilization. They believe that it self-adjusts for age, frailty, and comorbidities, and that it is a simple and effective tool for identifying major trauma patients.

Bottom line: This is an elegant attempt to improve upon the simple (yet admittedly flawed) Cribari matrix method for assessment of major trauma patient triage. It was thoughtfully designed and evaluated at this one center. The authors recognize that it is based on retrospective data, but so is the Cribari technique. 

I believe that NFTI can be used as an adjunct to Cribari. The matrix identifies gross under- and over-triage using ISS as a surrogate for trauma activation criteria. Normally, the trauma program then needs to review the outliers to see if mistriage actually occurred. It is basically a “first pass” that seeks to over-identify potential problem patients.

NFTI uses the need for resource utilization as a surrogate. I recommend that it be applied to the Cribari outliers, and then the remaining few charts can be analyzed to see if your trauma activation criteria were met. Combining both techniques can dramatically reduce the workload for reviewing undertriage cases.

Reference: Asking a Better Question: Development and Evaluation of the Need For Trauma Intervention (NFTI) Metric as a Novel Indicator of Major Trauma. J Trauma Nursing 24(3):150-157, 2017.

The Cribari Grid And Over/Undertriage

Any trauma performance improvement professional understands the importance of undertriage and overtriage.  Overtriage occurs when a patient who does not meet trauma activation criteria receives an activation anyway. Undertriage is the converse: no activation is called even though criteria were met. As you may expect, the latter is much more dangerous for the patient than the former.

I frequently get questions on the “Cribari grid” or “Cribari method” for calculating these numbers. Dr. Chris Cribari is a former chair of the Verification Review Subcommittee of the ACS Committee on Trauma. He developed a table-format grid that provides a simplified method for calculating these numbers.

But remember, the gold standard for calculating over- and undertriage is examining each admission to see if it met any of your trauma activation triage criteria. The Cribari method is designed for those programs that do not check these on every admission. It is a surrogate that allows you to identify patients with higher ISS who might have benefited from a trauma activation.

So, if you use the Cribari method, use it as a first pass to identify potential undertriage. Then, examine every patient’s chart in the undertriage list to see if they meet your activation criteria. If not, they were probably not undertriaged. However, you must then look at their injuries and overall condition to see if they might have been better cared for by your trauma team. If so, you may need to add a new activation criterion. Then, count that patient as undertriage, of course.

I’ve simplified the calculation process even more and provided a Microsoft Word document that automates the task for you. Just download the file, fill in four values in the table, update the formulas, and you’ve got your numbers! Instructions for manual calculations are also included. Download it by clicking the image below or the link at the end of this post.
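The arithmetic the calculator automates can be sketched as follows. This assumes the conventional 2x2 matrix layout (rows = full activation vs. not, columns = ISS 16 or greater vs. ISS under 16); the function name and example numbers are my own, not from the download.

```python
def cribari_rates(full_major: int, full_minor: int,
                  nonfull_major: int, nonfull_minor: int) -> tuple:
    """
    Cribari matrix rates from the four cell counts:
      full_major    = full activation, ISS >= 16
      full_minor    = full activation, ISS < 16
      nonfull_major = no full activation, ISS >= 16
      nonfull_minor = no full activation, ISS < 16
    Overtriage  = minor-injury patients among all full activations.
    Undertriage = major-injury patients among all patients without full activation.
    """
    overtriage = full_minor / (full_major + full_minor)
    undertriage = nonfull_major / (nonfull_major + nonfull_minor)
    return overtriage, undertriage

# Illustrative numbers only:
over, under = cribari_rates(full_major=90, full_minor=60,
                            nonfull_major=10, nonfull_minor=840)
print(f"Overtriage {over:.1%}, undertriage {under:.1%}")
```

Remember that the undertriage list this produces is only a first pass; each chart still needs review against your actual activation criteria.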


Download the calculator by clicking here

In my next post, I’ll examine how the NFTI score (need for trauma intervention) fits into the undertriage/overtriage calculations.

Don’t Write This In Your PI Committee Minutes!

One of the more poorly understood concepts in trauma performance improvement is the focus of the process. Are we really discussing the patient who had a quality issue?

I occasionally see something like the following in the published multidisciplinary trauma PI committee minutes:

“Although an opportunity for improvement was found, it was non-contributory and had no impact on patient outcome.”

Unfortunately, the true purpose of the committee discussion has been lost. The simple truth is that we are trying to learn from a patient we have cared for. None of the events or opportunities for improvement identified can impact them. Time has passed, and if there were any irregularities in their care, it is too late to fix them. For this patient.

However, the proper focus of the performance improvement program is to make things better for the next, similar patient. Here’s an example:

Scenario 1: An elderly patient presents after a fall with a mild head strike. They are awake and alert and present to a trauma center where this is recognized as a high-risk mechanism. A limited activation occurs, the patient is rapidly assessed, and she is whisked off to CT scan 20 minutes after arrival. The report is back in 10 minutes and shows a 1.5cm subdural hematoma with mild ventricular effacement.

Neurosurgery is rapidly consulted and sees the patient within 15 minutes. He plans an emergent operation. The patient is taken to the OR two hours later for a successful craniectomy and drainage. She does well and is discharged home neurologically intact four days later.

Everything looks great, right? Unfortunately, no.

This case could very easily be called a great save. But suppose the patient’s identical twin sister comes in two weeks later with exactly the same presentation. What if she vomits, becomes unresponsive, and blows her pupils just one hour after the neurosurgeon sees her? They get a stat repeat CT, and the neurosurgeon now pronounces the larger lesion a non-survivable injury.

The second case will definitely end up being discussed by your multidisciplinary trauma PI committee as a death. Perhaps the one-hour delay is deemed acceptable because “that’s how we do it here” (shudder, a big red flag).

But what if the PI process picks up that two-hour delay in the first case and deems it suboptimal despite the rosy outcome? Processes are implemented to get an OR ready quicker and ensure the neurosurgeon’s availability. Now, a patient can theoretically be in the OR within 30 minutes of this “emergency” designation. When the second patient arrives two weeks later, this new process works flawlessly, and she, too, has a great outcome.

Bottom line: Your PI program is designed to protect the next similar trauma patient arriving at your center. Don’t forget that. Scrutinize care closely, even if the outcome was great and it’s exactly how you “normally” do it. Ask yourself if you would be satisfied if it were your spouse, parent, or child receiving that care. If not, fix everything that isn’t right. For all you know, that next patient could very well be your family member!

Blame The Trauma Surgeon?

I found an interesting paper published a couple of years ago that purports to examine individual surgeon outcomes after trauma laparotomy. This was presented at the annual AAST meeting in 2021 and then published in the Journal the following year.

Everyone seems to be giving this paper a pass. I won’t be so easy on it. Let me provide some details.

The authors observe that the mortality in patients presenting in shock who require emergent laparotomy averages more than 40%, and hasn’t changed significantly in at least 20 years. They also note that this mortality varies widely from 11-46%, and therefore, “significant differences must exist at the level of the individual surgeon.” They go on to point out that damage control usage varies between individuals and trauma centers, which could lead to the same conclusion.

So the authors designed a retrospective cohort study of results from their hospital to try to look at the impact of individual surgeon performance on survival.

Here are the factoids:

  • Over the 15-month study period, there were over 7,000 trauma activations and 252 emergent laparotomies for hemorrhage control
  • There were 13 different trauma surgeons, and the number of laparotomies for each ranged from 7 to 31, with a median of 15
  • There were no differences in [crude, in my opinion] patient demographics, hemodynamics, or lab values preop
  • “Significant” differences in management and outcomes between surgeons were noted:
    • Median total OR time was significantly different, ranging from 120-197 minutes
    • Median operation time was also different, from 75-151 minutes across the cohort of surgeons
    • Some of the surgeons had a higher proportion of patients with ED LOS < 60 minutes and OR time < 120 minutes
    • Resuscitation with red cells and plasma varied “significantly” across the surgeons
  • Mortality rates “varied significantly” across surgeons at all time points (24-hour and hospital stay)
  • There were no mortality differences based on surgeons’ volume of cases, age, or experience level

The authors acknowledged several limitations, including the study’s retrospective and single-center nature, the limited number of patients, and its limited scope. Yet despite this, they concluded that the study “suggests that differences between individual surgeons appear to affect patient care.” They urge surgeons to evaluate themselves openly and honestly. And of course, they recommend a large, prospective, multicenter study to further develop this idea.

Bottom line: This study is an example of a good idea gone astray. Although the authors tried to find a way to stratify patient injury (using ISS, individual AIS scores, and the presence of specific injuries) and intervention times (time in ED, time to OR, time in OR, op time), these variables just don’t cut it. They are simply too crude. The ability to meaningfully compare these numbers across surgeons is also severely limited by low patient numbers.

The authors found some fancy statistical ways to demonstrate a significant difference. But upon closer inspection, many of these differences are not meaningful clinically. Here are some examples:

  • Intraoperative FFP ranged from 0-7 units between surgeons, with a p value of 0.03
  • Postoperative FFP ranged from 0-7 units, with a p value of 0.01
  • Intraoperative RBC usage was 0-6 units with the exception of one surgeon who used 15 in a case, resulting in a p value of 0.04

The claim that mortality rates varied significantly is difficult to understand. Overall p values were > 0.05, but they singled out one surgeon who had a significant difference from the rest in 22 of 25 mortality parameters listed. This surgeon also had the second highest patient volume, at 25.

The authors are claiming that they are able to detect significant variations in surgeon performance which impacts timing, resuscitation, and mortality. I don’t buy it! They believe that they are able to accurately standardize these patients using simple demographic and performance variables. Unfortunately, the variables selected are far too crude to accurately describe what is wrong inside the patient and what the surgeon will have to do to fix it.

Think about your last 10 trauma laparotomies where your patient was truly bleeding to death. How similar were they? Is there no difference between a patient with a mesenteric laceration with bleeding, an injury near the confluence of the superior mesenteric vessels, and a right hepatic vein injury? Of course there is. And this will definitely affect the parameters measured here and crude outcomes. Then add some unfavorable patient variables like obesity or previous laparotomy.

In my estimation, this paper completely misses the point because it’s not possible to retrospectively categorize all the possible variables impacting “surgeon performance.” This is particularly true of the patient variables that could not possibly be captured. The only way to do this right is to analyze each case as prospectively as possible, as close to the time of the procedure and as honestly as possible. And this is exactly what a good trauma M&M process does!

So forget the strained attempts at achieving statistical significance. Individual surgeon performance and variability will come to light at a proper morbidity and mortality conference, and should be evened out using the peer review and mentoring process. It’s not time to start blaming the surgeon!

Reference: It is time to look in the mirror: Individual surgeon outcomes after emergent trauma laparotomy. J Trauma 92(5):769-780, 2022.

The Implications Of A High Pediatric Readiness Score

In my last post, I described the Pediatric Readiness Score and its components. Today, I’ll explain why maintaining a high score may benefit your trauma center and what it costs to do so.

Research groups at the Oregon Health Sciences University and the University of Utah combined multiple data sources to estimate current levels of ED pediatric readiness, the cost to achieve it, the number of pediatric deaths in emergency departments, and the number of potential lives saved if readiness is maintained.

As you can imagine, this was an extensive data set suffering from the usual glitches. The authors either excluded incomplete data or managed it with sophisticated statistical methods. Data was included from 4,840 emergency departments in all 50 states and the District of Columbia.

Here are the factoids:

  • The authors estimated that nearly 670,000 children receive care in these emergency departments each year
  • Only 15% (842 EDs) had high readiness. The range was 2.9% in Arkansas to 100% in Delaware.
  • The annual cost to achieve high pediatric readiness nationwide was approximately $210 million
  • The annual cost per child to achieve high readiness ranged from $0 in Delaware to $11.84 in North Dakota
  • It was estimated that about 28% of the 7,619 childhood deaths each year could be prevented if the treating ED had high pediatric readiness

Bottom line: This paper has a lot of information to digest. Please remember that these are not precisely measured numbers but estimates based on statistical models. So, minor inaccuracies in those models could change these results.

Nonetheless, the data demonstrate the importance of maintaining high pediatric readiness in your emergency department.  Don’t let the total cost of readiness frighten you. Spread evenly across all the EDs studied, this amounts to only about $43,000 annually.
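As a quick sanity check on that per-ED figure, using only the study totals quoted above:

```python
# Figures from the study as quoted above
total_annual_cost = 210_000_000  # estimated national cost of high readiness
num_eds = 4_840                  # emergency departments included

cost_per_ed = total_annual_cost / num_eds
print(f"${cost_per_ed:,.0f} per ED per year")  # on the order of $43,000
```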

I urge all trauma centers to measure their pediatric readiness score. Then, dedicate the resources your hospital can afford to improve it as much as possible/practical. The number of potential pediatric lives saved is substantial and meaningful.

Reference: State and National Estimates of the Cost of Emergency Department Pediatric Readiness and Lives Saved. JAMA Netw Open. 2024;7(11):e2442154.