
Pan Scanning for Elderly Falls?

The last abstract for the Clinical Congress of the American College of Surgeons that I will review deals with doing a so-called “pan-scan” for ground level falls. Apparently, patients at this center have been pan-scanned for years, and they wanted to determine if it was appropriate.

This was a retrospective trauma registry review of 9 years’ worth of ground level falls. Patients were divided into young (18-54 years) and old (55+ years) groups. They were included in the study if they received a pan-scan.

Here are the factoids:

  • Hospital admission rates (95%) and ICU admission rates (48%) were the same for young and old
  • ISS was a little higher in the older group (9 in the young vs 12 in the old)
  • Here are the incidence and type of injuries detected:
                                  Young (n=328)   Old (n=257)
  TBI                             35%             40%
  C-spine                         2%              2%
  Blunt cerebrovascular injury *  20%             31%
  Pneumothorax                    14%             15%
  Abdominal injury                4%              2%
  Mortality *                     3%              11%

 * = statistically significant

Bottom line: The argument over pan-scan vs selective scanning is still ongoing. The pan-scanners argue that the increased risk (much of which is delayed or intangible) is worth the extra information. In this study, the authors did not find much difference in injury diagnosis between young and elderly patients, with the exception of blunt cerebrovascular injury.

Most elderly patients who fall sustain injuries to the head, spine (all of it), extremities and hips. The torso is largely spared, with the exception of ribs. In my opinion, chest CT is only for identification of aortic injury, which just can’t happen from falling over. Or even down stairs. And solid organ injury is also rare in this group.

Although the future risk from radiation in an elderly patient is probably low, the risk from the IV contrast needed to see the aorta or solid organs is significant in this group. And keep in mind the dangers of screening for a low probability diagnosis. You may find something that prompts invasive and potentially more dangerous investigations of something that may never have caused a problem!

I recommend selective scanning of the head and cervical spine (if not clinically clearable), and selective conventional imaging of any other suspicious areas. If additional detail of the thoracic and/or lumbar spine is needed, dedicated spine CT without contrast should be used.
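
None of this comes from the abstract, but to illustrate what “clinically clearable” can mean in practice, here is a minimal sketch of one widely used rule, the NEXUS low-risk criteria, encoded as a simple check. The class and function names are my own and purely hypothetical.

```python
from dataclasses import dataclass

@dataclass
class CervicalSpineExam:
    """Findings relevant to the NEXUS low-risk criteria (one common clearance rule)."""
    midline_tenderness: bool
    focal_neuro_deficit: bool
    altered_alertness: bool
    intoxication: bool
    painful_distracting_injury: bool

def needs_cspine_imaging(exam: CervicalSpineExam) -> bool:
    """Return True if the cervical spine cannot be clinically cleared.

    Imaging is indicated when ANY of the NEXUS low-risk criteria is violated.
    """
    return any([
        exam.midline_tenderness,
        exam.focal_neuro_deficit,
        exam.altered_alertness,
        exam.intoxication,
        exam.painful_distracting_injury,
    ])

# Example: awake, sober patient with no tenderness, deficit, or distracting injury
# -> clinically clearable, no imaging needed
print(needs_cspine_imaging(CervicalSpineExam(False, False, False, False, False)))  # False
```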


Reference: Pan-scanning for ground level falls in the elderly: really? ACS Scientific Forum, trauma abstracts, 2016.

CT Crystal Ball – Part 3

And yet another one of these crystal ball abstracts, all presented at the same meeting of the American College of Surgeons Clinical Congress!

This one postulates that more injuries seen on CT scan might predict mortality in "older" trauma patients. Hmmm. The authors pulled info on head CT findings, GCS, AIS Head, lengths of stay, death, functional scores, and discharge disposition. And the age had to be >45 years. Older? Hmmm.

A scoring tool was designed that gave 1 point each for subdural, epidural, subarachnoid, or intraparenchymal blood, cerebral contusion, skull fracture, brain edema/herniation, midline shift, and external trauma to the head/face. The score range was 0-8, even though there were 10 factors.
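
To make the scoring concept concrete: the arithmetic in the abstract doesn’t quite add up (nine findings are listed, the stated range is 0-8, and the text claims ten factors), so the finding names and the one-point-per-finding sum below are my assumptions based on the description above, just a sketch of the general idea.

```python
# Hypothetical sketch of the head CT scoring tool described above:
# one point per finding present on the initial cranial CT, then summed.
CT_FINDINGS = [
    "subdural_blood",
    "epidural_blood",
    "subarachnoid_blood",
    "intraparenchymal_blood",
    "cerebral_contusion",
    "skull_fracture",
    "edema_or_herniation",
    "midline_shift",
    "external_head_face_trauma",
]

def head_ct_score(findings_present: set[str]) -> int:
    """Sum one point for each listed finding seen on the initial head CT."""
    return sum(1 for finding in CT_FINDINGS if finding in findings_present)

# Example: subdural blood + midline shift + skull fracture -> score of 3
print(head_ct_score({"subdural_blood", "midline_shift", "skull_fracture"}))
```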

Let’s look at the factoids:

  • Nearly 10 years of data were analyzed
  • 620 patients meeting criteria were identified
  • The scoring system positively correlated with all of the outcome measures
  • Independent predictors of mortality included GCS, AIS Head, and the CT score (odds ratio 1.3; see the note after this list)
  • The CT score also “predicted” (the authors’ word) neurosurgical intervention (odds ratio 1.2)
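
A quick aside on interpreting that odds ratio (my arithmetic, not the authors’): a ratio of 1.3 per point means the odds of death scale multiplicatively with the score, so a patient scoring 4 carries roughly 1.3^4 ≈ 2.9 times the odds of a patient scoring 0, all else equal.

```python
# An odds ratio of 1.3 per point implies odds scale multiplicatively with the score.
def relative_odds(odds_ratio_per_point: float, score_difference: int) -> float:
    """Relative odds of the outcome for a patient scoring `score_difference` points higher."""
    return odds_ratio_per_point ** score_difference

print(relative_odds(1.3, 4))  # ~2.86: a score of 4 vs 0 roughly triples the odds of death
```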

Bottom line: Oh boy, here we go again. Another correlation study, and a weak one at that. So if someone told you that an “older” patient (beginning after age 45) would do worse clinically the more injuries were seen in and around their head, what would you say? And why did it take 10 years to accumulate data on just 620 patients in this age range (62 per year)? And why not test your scoring system prospectively, and run some really good statistics on the new data? Sadly, I feel this is another rush to get an abstract submitted and presented at a meeting. But thankfully, I don’t think it will ever see the light of print.


Reference: Prognostication of traumatic brain injury outcomes in older trauma patients: a novel risk assessment tool based on initial cranial CT findings. ACS Scientific Forum, trauma abstracts, 2016.


The CT Crystal Ball – Part 2

Yesterday, I wrote about a study that looked at a CT scan-derived index that promised to predict complications and mortality based on the waist-hip ratio. It was actually a very good one. But there is another abstract being presented at the American College of Surgeons Clinical Congress this week that promises miracles from the CT scanner as well.

This next abstract looks at muscle mass in trauma patients, as measured by CT scan. Specifically, the authors characterized the psoas muscle by measuring its cross-sectional area and its density in Hounsfield units. They then looked at the relationship between these measurements and 90-day mortality, complications, and disposition location.
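
The abstract doesn’t describe the measurement pipeline, but the idea is simple. Here is a minimal sketch (my own, with assumed variable names, and assuming the psoas has already been segmented) of how cross-sectional area and mean Hounsfield density could be computed from a single axial CT slice using numpy.

```python
import numpy as np

def psoas_metrics(hu_slice: np.ndarray, psoas_mask: np.ndarray,
                  pixel_spacing_mm: tuple[float, float]) -> tuple[float, float]:
    """Return (cross-sectional area in cm^2, mean density in Hounsfield units)
    for the psoas muscle on one axial CT slice.

    hu_slice: 2D array of Hounsfield units for the slice.
    psoas_mask: boolean 2D array marking psoas pixels (segmentation done elsewhere).
    pixel_spacing_mm: in-plane pixel spacing (row, column) in millimeters.
    """
    pixel_area_cm2 = (pixel_spacing_mm[0] * pixel_spacing_mm[1]) / 100.0  # mm^2 -> cm^2
    area_cm2 = float(psoas_mask.sum()) * pixel_area_cm2
    mean_hu = float(hu_slice[psoas_mask].mean())
    return area_cm2, mean_hu
```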

Really? Well, here are the factoids:

  • The study involved only 152 patients age 45+ from the year 2008
  • Median ISS was only 9
  • Patients with the lowest psoas cross-sectional area had an associated significantly higher death rate
  • Those with lowest psoas density had an associated increase in complications, dependency on discharge, and mortality
  • The authors suggest that these measurements could aid in patients who would benefit from aggressive nutritional support and physical therapy, and could aid in discharge planning

Bottom line: Very different from yesterday’s abstract. This one has no grounding in prior research. It appears to be one that was just dreamed up from nowhere. And it is truly an association study. No causality can or should be inferred.

There were only 152 patients studied. From 2008. Why? Why didn’t the authors use a more contemporary dataset? There is something weird going on behind the scenes. Is this an old study that was forgotten, and is just now being conveniently dusted off for analysis and submission? A power analysis to find out how many patients should be reviewed is not possible, so it is important to err on the high side. Not just 152 patients.

If you were to just read the abstract, and especially the conclusions, you really might get the wrong idea. This is a study that will not see its day in any journal. Read and learn from it. But don’t duplicate it!


Reference: Computed tomography-measured psoas density predicts complications, discharge location, and mortality in trauma patients. ACS Scientific Forum, trauma abstracts, 2016.


ED Use of CT – Everyone Does It Differently

There is tremendous variability in ordering imaging in trauma patients. To some degree, this is due to the dearth of standards pertaining to radiographic imaging, at least in trauma. And when standards do exist, trauma professionals are not very good at adhering to them. We’d rather do it our way. Or the way we were trained to do it.

The group at Jamaica Hospital in Queens, NY quantified some of those differences, studying ordering patterns of trauma surgeons (TS), emergency physicians (EP), and surgery chief residents (CR). Unfortunately, they then tried to draw some interesting conclusions, which I’ll discuss at the end.

They reviewed all blunt trauma activations over a 6 month period at their urban trauma center. At the end of each trauma activation, each of the three physician groups wrote imaging orders, but only the trauma surgeons’ orders were actually submitted. Missed injuries were defined as any that would not have been found based on each provider group’s orders. Extremity injuries and those found on physical exam or plain imaging were excluded.

Here are the factoids:

  • The authors do not state how many patients they saw in this period, but by extrapolation it appears to be about 250
  • Trauma surgeons ordered significantly more studies (1,012) than the EPs (882) or CRs (884)
  • This resulted in essentially a “pan-scan” in 78%, 64%, and 69%, respectively
  • Radiation exposure was said to be the same for all groups (18 vs 13 vs 15 mSv) [I’m having a hard time buying this]
  • But cost was higher in the trauma surgeon group ($344 vs $267 vs $292) [Huh? Is this only the electric bill for the CT scanner? Very low, IMHO]
  • And the trauma surgeons had a missed injury rate of only 1%, vs 11% for EPs and 7% for CRs [Wow!]

Bottom line: Sorry, I just can’t believe these results. There are a lot of things left unsaid in this poster. What were all these missed injuries? What magical CT scan that only the trauma surgeons ordered actually picked them up? And probably most importantly, were they clinically significant? A small hematoma somewhere doesn’t make a difference (see the “tree falls in a forest” post below).

It looks to me like the authors wanted to justify their use of pan-scan, and push their emergency physicians to follow suit. Unfortunately, this is a poster presentation, meaning that there will be limited opportunity to question the authors about the specifics.

The debate regarding pan-scan vs selective imaging is an active one. The evidence is definitely not in yet. While we sort it out, the best path is to develop a reasonable imaging practice guideline based on the literature, where available. Some areas such as head and cervical spine CT have been worked out fairly well. Then fill in the blanks and encourage all trauma professionals in your hospital to follow them. There is great value in adhering to good guidelines, even when there are blanks in our knowledge.


Reference: Variability in computed tomography imaging of trauma patients among emergency department physicians and trauma surgeons with respect to missed injuries, radiation exposure and cost. AAST 2016, Poster #75.

Misleading Abstract Alert: Injuries Identified By Chest CT

Here is another one of those papers with a nicely done abstract that arrives at what seems to be a reasonable conclusion. But then you sit back and think about it. And it’s no longer so reasonable.

This study seems like it should be a good one! It’s a multi-center trial involving data from ten Level I trauma centers. The research infrastructure used to collect the data and the statistical analyses for this review were sound.

Here are the factoids:

  • Of nearly 15,000 patients with blunt chest trauma, about 6,000 (40%) underwent both chest x-ray and CT
  • 25% (1,454) of these patients had new injuries discovered by the CT
  • 954 of these were truly occult, found only on CT; in the remaining 500, the CT found more injuries than were seen on the chest x-ray
  • 202 patients had major interventions (chest tube, ventilator, surgery)
  • 343 had minor interventions (admission, extended observation)
  • Chest x-ray was not very good at detecting aortic or diaphragm injury (surprise)
  • 76% of the major interventions were chest tube insertions
  • 32% of patients with new fractures seen were hospitalized for pain control
  • None of the odds ratios reported were statistically significant

Bottom line: What could possibly go wrong? Ten trauma centers. Six thousand patients. Lots of data points. There are two major issues. First, the primary outcome was a major intervention based on the chest CT. The problem with having so many participating centers is that it is hard to figure out why they performed the interventions. Are they saying that a pneumothorax or hemothorax that was invisible on chest x-ray required a chest tube? Based on whose judgment? Unfortunately, that is a big variable. The authors admit that they did not know whether “interventions based on chest CT were truly necessary or beneficial because we did not study patient outcomes” and that the decisions for intervention “were largely made by residents (usually) or fellows.”

And the secondary outcome was admission or extended observation based on the chest CT. Yet these admissions were primarily for pain management in patients with fractures. Did the patients develop additional pain due to irradiation, or was it there all along?

So adding a chest CT greatly increases the likelihood of doing additional procedures. And it is difficult to tell (from this study) if those procedures were truly necessary. But we know that they can certainly be dangerous. If you back out all of the potentially unnecessary chest tubes and the admissions for pain that should have been admitted anyway, this study demonstrates very little additional value from CT.
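
To put rough numbers on that (my arithmetic using the figures reported above, not the authors’ analysis): about 202 major interventions among roughly 6,000 dual-imaged patients, and 76% of those interventions were chest tubes of uncertain necessity.

```python
# Back-of-the-envelope arithmetic from the numbers reported above (approximate).
dual_imaged = 6000          # patients who had both chest x-ray and chest CT
major_interventions = 202   # chest tube, ventilator, or surgery attributed to the CT
chest_tube_fraction = 0.76  # 76% of major interventions were chest tube insertions

non_chest_tube_major = major_interventions * (1 - chest_tube_fraction)
print(major_interventions / dual_imaged)   # ~0.034 -> about 3% of scanned patients
print(non_chest_tube_major / dual_imaged)  # ~0.008 -> under 1% for anything beyond a chest tube
```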

A well-crafted imaging guideline will help determine which patients really need CT to identify patients with those occult injuries that are dangerous enough that they can’t be missed. The authors even conclude that “a validated decision instrument to support clinical judgment is needed.”


Reference: Prevalence and clinical import of thoracic injury identified by chest computed tomography but not chest radiography in blunt trauma: multicenter prospective cohort study. Annals Emerg Med 66(6):589-600, 2015.