Category Archives: Performance Improvement

An Update On The Electronic Trauma Flow Sheet

It’s been five years since I published my series on the use of the electronic trauma flow sheet (eTFS). Anyone who knows me is familiar with my skepticism about this tool. I’ve been writing about the significant problems it can create since 2008! With the progress in computing power and interfaces we have enjoyed since then, one would think this problem would have been solved by now.

But alas, that is not the case. There has been little progress, and at great expense and aggravation for trauma centers. Since I last published the series, I’ve visited numerous hospitals that use the eTFS and a diminishing number that have stuck with the paper trauma flow sheet. Based on this experience, I am updating the series and will republish it here over the next several weeks.

As you read each part of the series, please take a moment to post comments or questions at the end of the piece or email them to me. I will strive to address them in my updates. And I would love to hear your opinions on how this tool is working (or not) for you. If I receive enough comments, I’ll post a summary of them at the end of the series.

I’ll kick off the series with my next post, which describes why your hospital wants you to switch to some newfangled eTFS. Enjoy, or weep, as the case may be!


Blame The Trauma Surgeon?

I just finished reading a recent paper published in the Journal of Trauma that purports to examine individual surgeon outcomes after trauma laparotomy. The paper was presented at AAST last year, and is authored by the esteemed trauma group at the University of Alabama at Birmingham. It was also recently discussed in the trauma literature review series that is emailed to members of EAST regularly.

Everyone seems to be giving this paper a pass. I won’t be so easy on it. Let me provide some detail.

The authors observe that the mortality of patients presenting in shock who require emergent laparotomy averages more than 40%, and hasn’t changed significantly in at least 20 years. They also note that this mortality varies widely, from 11-46%, and therefore “significant differences must exist at the level of the individual surgeon.” They go on to point out that damage control usage varies between individual surgeons and trauma centers, which could lead to the same conclusion.

So the authors designed a retrospective cohort study of results from their hospital to try to look at the impact of individual surgeon performance on survival.

Here are the factoids:

  • Over the 15-month study period, there were over 7,000 trauma activations and 252 emergent laparotomies for hemorrhage control
  • There were 13 different trauma surgeons and the number of laparotomies for each ranged from 7 to 31, with a median of 15
  • There were no differences in [crude, in my opinion] patient demographics, hemodynamics, or lab values preop
  • “Significant” differences in management and outcomes between surgeons were noted:
    • Median total OR time was significantly different, ranging from 120-197 minutes
    • Median operation time was also different, from 75-151 minutes across the cohort of surgeons
    • Some of the surgeons had a higher proportion of patients with ED LOS < 60 minutes and OR time < 120 minutes
    • Resuscitation with red cells and plasma varied “significantly” across the surgeons
  • Mortality rates “varied significantly” across surgeons at all time points (24 hours and hospital stay)
  • There were no mortality differences based on surgeons’ volume of cases, age, or experience level

The authors acknowledged several limitations, including the study’s retrospective and single-center nature, the limited number of patients, and its limited scope. Yet despite this, they concluded that the study “suggests that differences between individual surgeons appear to affect patient care.” They urge surgeons to openly and honestly evaluate themselves. And of course, they recommend a large, prospective, multicenter study to further develop this idea.

Bottom line: This study is an example of a good idea gone astray. Although the authors tried to find a way to stratify patient injury (using ISS, individual AIS scores, and the presence of specific injuries) and intervention times (time in ED, time to OR, time in OR, op time), these variables just don’t cut it. They are simply too crude. The ability to meaningfully compare these numbers across surgeons is also severely limited by low patient numbers.

The authors found fancy statistical ways to demonstrate a significant difference. But upon closer inspection, many of these differences are not meaningful clinically. Here are some examples:

  • Intraoperative FFP ranged from 0-7 units between surgeons, with a p value of 0.03
  • Postoperative FFP ranged from 0-7 units, with a p value of 0.01
  • Intraoperative RBC usage was 0-6 units with the exception of one surgeon who used 15 in a case, resulting in a p value of 0.04

The claim that mortality rates varied significantly is difficult to understand. Overall p values were > 0.05, but the authors singled out one surgeon who differed significantly from the rest in 22 of 25 mortality parameters listed. This surgeon also had the second-highest patient volume, at 25.
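The fragility of these small-sample p values is easy to demonstrate. Here is a toy permutation test in pure Python, using made-up per-case RBC unit counts (NOT the study’s actual data), comparing one surgeon against the pooled rest with and without a single 15-unit outlier case like the one noted above:

```python
import random

def perm_test(group, rest, n_iter=10000, seed=42):
    """Two-sided permutation test on the absolute difference of group means."""
    rng = random.Random(seed)
    observed = abs(sum(group) / len(group) - sum(rest) / len(rest))
    pooled = group + rest
    extreme = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        a, b = pooled[:len(group)], pooled[len(group):]
        diff = abs(sum(a) / len(a) - sum(b) / len(b))
        if diff >= observed:
            extreme += 1
    return extreme / n_iter

# Hypothetical intraoperative RBC units per case (illustrative only)
surgeon_x = [0, 1, 2, 3, 15]   # includes one 15-unit outlier case
others    = [0, 1, 1, 2, 0, 3, 2, 1, 0, 2, 1, 3, 2]

p_with    = perm_test(surgeon_x, others)
p_without = perm_test(surgeon_x[:-1], others)  # drop the single outlier case
print(f"p with the 15-unit case:    {p_with:.3f}")
print(f"p without the 15-unit case: {p_without:.3f}")
```

With numbers this small, one extreme case can drag the comparison toward “significance,” and removing that single case makes the difference evaporate. That is exactly why a p value of 0.04 driven by one 15-unit transfusion tells us little about the surgeon.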

The authors are claiming that they are able to detect significant variations in surgeon performance that impact timing, resuscitation, and mortality. I don’t buy it! They believe that they are able to accurately standardize these patients using simple demographic and performance variables. Unfortunately, the variables selected are far too crude to accurately describe what is wrong inside the patient and what the surgeon will have to do to fix it.

Think about your last 10 trauma laparotomies where your patient was truly bleeding to death. How similar were they? Is there no difference between a patient with a mesenteric laceration with bleeding, an injury near the confluence of the superior mesenteric vessels, and a right hepatic vein injury? Of course there is. And this will definitely affect the parameters measured here and crude outcomes. Then add some unfavorable patient variables like obesity or previous laparotomy.

In my estimation, this paper completely misses the point because it’s not possible to retrospectively categorize all the possible variables impacting “surgeon performance.” This is particularly true of the patient variables that could not possibly be captured. The only way to do this right is to analyze each case as prospectively as possible, as close to the time of the procedure and as honestly as possible. And this is exactly what a good trauma M&M process does!

So forget the strained attempts at achieving statistical significance. Individual surgeon performance and variability will come to light at a proper morbidity and mortality conference, and should be evened out using the peer review and mentoring process. It’s not time to start blaming the surgeon!

Reference: It is time to look in the mirror: Individual surgeon outcomes after emergent trauma laparotomy. J Trauma 92(5):769-780, 2022.


For PI Fans: Cribari, NFTI, And STAT!

I’ve published a two-part series on the Cribari matrix, Need For Trauma Intervention (NFTI), and the Standardized Triage Assessment Tool (STAT). These are performance improvement topics for the real nerds out there and can be found only on my Trauma PI website, TraumaMedEd.com.

If you are interested in optimizing trauma triage and trauma activations at your center, check out my posts by clicking this link:

https://www.traumameded.com/blog/


Best Of AAST 2021: Individual Surgeon Outcomes In Trauma Laparotomy

Trauma programs use a number of quality indicators and PI filters to evaluate both individual and system performance. The emergent trauma laparotomy (ETL) is the index case for any trauma surgeon and is performed on a regular basis. However, this is one procedure where individual surgeon outcome is rarely benchmarked.

The trauma group in Birmingham, AL performed a retrospective review of 242 ETLs performed at their hospital over a 14-month period. They then excluded patients who underwent resuscitative thoracotomy prior to the laparotomy. Rates of damage control use and mortality at various time points were studied.

Here are the factoids:

The chart shows the survival rates after ETL at 24 hours (blue) and to discharge (gray) for 14 individual surgeons.

  • Six patients died intraoperatively, and damage control laparotomy was performed in one third
  • Mortality was 4% at 24 hours and 7% overall
  • ISS and time in ED were similar, but operative time varied substantially (40-469 minutes)
  • There were significant differences in individual surgeon mortality and use of damage control

The authors concluded that there were significant differences in outcomes by surgeon, and that more granular quality metrics should be developed for quality improvement.

Bottom line: I worry that this work is a superficial treatment of surgeon performance. The use of gross outcomes like death and use of damage control is not very helpful, in my opinion. There are so, so many other variables involved in who is likely to survive, or in the decision to use damage control. I am concerned that a simplistic retrospective review without most of those variables will lead to false conclusions.

It may be that there is a lot more information here that just couldn’t fit on the abstract page. In that case, the presentation should clear it all up. But I am doubtful.

We have already reached a point in medicine where hospitals with better outcomes for patients with certain conditions can be identified. These centers should be selected preferentially to treat stroke or pancreatic cancer, or whatever their benchmark-proven expertise is. It really is time for this to begin to trickle down to individual providers. A specific surgeon should be encouraged to do what they are demonstrated to be really good at, and other surgeons should handle the things the first surgeon is only average at.

But I don’t think this study can provide the level of benchmarking to suggest changes to a surgeon’s practice or the selection of a specific surgeon for a procedure. A lot more work is needed to identify the pertinent variables needed to develop legitimate benchmarks.

Here are my questions for the presenter and authors:

  • Show us the details of all of the variables you analyzed (ISS, NISS, time in ED, etc.) and the breakdown by surgeon.
  • Are there any other variables that influence the outcome that you wish you had collected?
  • Your study included an average of 17 cases per surgeon. Is it possible to show statistical significance for anything given these small numbers?
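That last question is easy to sanity-check. With roughly 17 cases per surgeon and single-digit death counts, the uncertainty around any individual surgeon’s mortality rate is enormous. A quick sketch using the Wilson score interval (the per-surgeon death counts below are hypothetical, not from the abstract):

```python
import math

def wilson_ci(k, n, z=1.96):
    """95% Wilson score interval for a binomial proportion (k events in n trials)."""
    p = k / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

# Hypothetical surgeons, each with 17 laparotomies (counts illustrative only)
for k, n in [(1, 17), (2, 17), (4, 17)]:
    lo, hi = wilson_ci(k, n)
    print(f"{k}/{n} deaths: mortality {k/n:.0%}, 95% CI {lo:.0%}-{hi:.0%}")
```

A surgeon with one death in 17 cases has a confidence interval spanning roughly 1% to 27%, which overlaps the interval of a surgeon with four deaths. At these volumes, separating “good” from “bad” surgeons on mortality alone is statistically hopeless.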

The devil is in the details, and I hope these come out during the presentation!

Reference: IT’S TIME TO LOOK IN THE MIRROR: INDIVIDUAL SURGEON OUTCOMES AFTER EMERGENT TRAUMA LAPAROTOMY. AAST 2021, oral abstract #38.


Best Of AAST 2021: Reducing Errors In Trauma Care

Finally, a performance improvement (PI) abstract at AAST!

As many of you know, there are two general types of issues that are encountered in the usual PI processes: provider (peer) vs system. Provider issues are errors of omission or commission by an individual clinician. Examples include a surgeon making a technical error during a procedure, or prescribing the wrong drug or dose for some condition.

One might think that provider issues are the most common type of problem encountered. But they would be wrong. The vast majority of clinicians go to work each day with the idea that they will do their job to the best of their abilities. So how could things go awry?

Because the majority of errors have some degree of system component! Clinicians are set up to fail by factors outside their perception and/or control. Let’s look at a surgeon who has several small bowel anastomoses fall apart. His surgery department head chides/educates him, reports him to hospital quality, and proctors his next ten bowel cases. Everything is good, right?

But then, two months later, the stapler company issues a recall because they found a higher than usual number of anastomotic failures with one of their products. So it wasn’t the surgeon after all, like everyone assumed. This is an extreme example, but you get the idea. System issues often look like peer issues, but it’s frequently difficult for many PI programs to recognize or accept this.

A multi-institutional group reviewed the results of a newly implemented Mortality Reporting System (MRS) to analyze a large number of PI opportunities for improvement (OFIs). More than 300 trauma centers submitted data to the MRS whenever a death occurred in which an OFI was identified. The reports included details of the incident and the mitigation strategies that were applied.

Here are the factoids:

  • A total of 395 deaths were reviewed over a two year period
  • One third of deaths were unanticipated (!!), and a third of those were failure to rescue
  • Half of the errors involved clinical management, clinical performance, or communication
  • Human failures occurred in about two thirds of cases
  • The most common remedy applied was education, which presumes a “provider issue”
  • System strategies like automation, standardization, and fail-safe approaches were seldom used, implying that system issues were seldom recognized
  • In 7%, the trauma centers could not identify a specific strategy to prevent future harm (!!!)

The authors concluded that most strategies to reduce errors focus on individual performance and do not recognize the value of system-level intervention.

Bottom line: Look at the pyramid chart above (interesting choice for a chart, but very effective). The arrow shows progression from provider focus to systems focus. The pyramid shows how the recognition of and intervention for system issues drops off very rapidly.

I am both shocked and fascinated by the last bullet point. A strategy couldn’t be developed to prevent the same thing from happening again. Now, there are a few rare instances where this could be correct. Your patient could have been struck by a bolt of lightning in her room, or a meteorite could have crashed through the wall. But I doubt it. This 7% illustrates the importance of investigating all the angles to try to determine how the system failed!

For once, I have no critique for an abstract. It is a straightforward descriptive study that reveals an issue that many in PI are not fully aware of. I’ll definitely be listening to this one, and I really look forward to the published paper!

Reference: ERROR REDUCTION IN TRAUMA CARE: LESSONS FROM AN ANONYMIZED, NATIONAL, MULTI-CENTER MORTALITY REPORTING SYSTEM. AAST 2021, Oral abstract #17.
