Tag Archives: philosophy

Another Failure Of Shotgun Style Diagnostic Testing: The Trauma Incidentaloma

When our patients present with a problem, there is a time-honored and well-defined sequence that helps us arrive at a final diagnosis.

  • Take a detailed history
  • Examine the patient
  • Order pertinent diagnostic tests, if indicated
  • Then think about it a while

The first two items are a chip shot, and the trauma professional can gain a lot of information by spending a relatively short period of time doing these. And many times the diagnosis can be made without any further action.

However, diagnostic testing of all kinds has become so prevalent and easy to obtain that we rely on it a bit too much. And sometimes, we order it up in lieu of a thorough history and exam. If the clinician skimps on those steps, it’s much more difficult to narrow the list of differential diagnoses to a manageable number.

So what happens then? They use diagnostic tests as a crutch. Instead of being able to select a few focused tests to answer the questions, they essentially put an order sheet on the wall, fire off a shotgun, and order everything that’s been hit by the pellets.

Lots of tests, so they will definitely find the answer, right? Nope! There are two major problems here. First, the so-called signal-to-noise ratio is very low. There are so many results that it is easy to overlook a pertinent positive among all the negatives.

But more significantly, there is always the possibility of more than one positive. One of them might actually be the answer you were seeking. But what about the others? These are the trauma incidentalomas. Some may be truly positive, but there is always the possibility of a false positive. These are the most treacherous, because many trauma professionals then feel obligated to “do something about it.”
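The multiple-testing arithmetic behind this is simple. A minimal sketch, assuming (hypothetically) that each test carries an independent 5% false-positive rate:

```python
# Probability of at least one false positive across n independent tests.
# The 5% false-positive rate is a hypothetical illustration, not a
# measured property of any real diagnostic test.
def p_any_false_positive(n_tests: int, fp_rate: float = 0.05) -> float:
    return 1 - (1 - fp_rate) ** n_tests

for n in (1, 5, 20):
    print(n, round(p_any_false_positive(n), 2))
# 1 test  -> 0.05
# 5 tests -> 0.23
# 20 tests -> 0.64
```

Fire off twenty shotgun pellets and, under these assumptions, you have roughly a two-in-three chance of at least one spurious positive to chase down.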

As we have found from multiple screening tests like PSA, Pap smear, and mammography, a significant number of patients may be harmed while investigating what turns out to be nothing at all (artifact), or something completely benign. This includes not only harm from the complications of unnecessary procedures, but also months of anxiety the patient may suffer while clinicians figure out what that thing inside them really is.

There are only a few studies on trauma incidentalomas available. One reviewed a series of almost 600 head CT scans in patients with TBI and found unexpected findings in 85%. About 90% of those were obviously benign. Unfortunately, it was not possible to follow these patients to find out how many of the remaining lesions turned out to be benign as well. But I would wager that most did.

Bottom line: I shouldn’t even have to say this, but do a good history and physical exam! If you need diagnostic studies, order only the one(s) that have the potential to make your final diagnosis. Don’t shotgun it. One very helpful tool is a well-designed practice guideline for commonly encountered clinical scenarios. This will limit the number of “other” findings you have to deal with. And finally, did I say to do a good history and physical exam?


Reference: Incidental cranial CT findings in head injury patients in a Nigerian tertiary hospital. J Emerg Trauma Shock 8(2):77-82, 2015.

Why Is NPO The Default Diet For Trauma Patients?

I’ve watched it happen for years. A trauma patient is admitted with a small subarachnoid hemorrhage in the evening. The residents put in all the “usual” orders and tuck them away for the night. I am the rounder the next day, and when I saunter into the patient’s room, this is what I find:

They were made NPO. And this isn’t just an issue for patients with a small head bleed. A grade II spleen. An orbital fracture. Cervical spine injury. The list goes on.

What do these injuries have to do with your GI tract?

Here are some pointers on writing the correct diet orders on your trauma patients:

  • Is there a plan to take them to the operating room within the next 8 hours or so? If not, let them eat. If you are not sure, contact the responsible service and ask. Once you have confirmed their OR status, write the appropriate order.
  • Have they just come out of the operating room from a laparotomy? Then yes, they will have an ileus and should be NPO.
  • Are they being admitted to the ICU? If their condition is tenuous enough that they need ICU level monitoring, then they actually do belong to that small group of patients that should be kept NPO.

But here’s the biggest offender. Most trauma professionals don’t think this one through, and reflexively write for the starvation diet.

  • Do they have a condition that will likely require an emergent operation in the very near future? This one is a judgment call. But how often have you seen a patient with subarachnoid hemorrhage have an emergent craniotomy? How often do low-grade solid organ injuries fail if they’ve always had stable vital signs? Or even high-grade injuries? The answer is, not often at all! So let them eat!

Bottom line: Unless your patient is known to be heading to the OR soon, or just had a laparotomy, the default trauma diet should be a regular diet! 
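The checklist above can be sketched as a simple decision function. The function name and parameters are purely illustrative, not from any real order set:

```python
# Hypothetical sketch of the diet-order logic described above.
def trauma_diet_order(or_planned_within_8h: bool,
                      fresh_postop_laparotomy: bool,
                      needs_icu_monitoring: bool,
                      emergent_or_likely: bool = False) -> str:
    """Return the default diet order based on the checklist above."""
    if (or_planned_within_8h or fresh_postop_laparotomy
            or needs_icu_monitoring or emergent_or_likely):
        return "NPO"
    return "regular diet"  # the default for everyone else

# A stable patient with a small subarachnoid hemorrhage and no OR plans:
print(trauma_diet_order(False, False, False))  # regular diet
```

Note that the default branch is the regular diet; NPO is the exception that must be justified, not the reflex.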

Why People Don’t Change Their Minds Despite The Data

Has this happened to you?

Your (emergency physician / neurosurgeon / orthopaedic surgeon) colleague wants to (get rib detail xrays / administer steroids / wait a few days before doing a femur ORIF). You question it based on your interpretation of the literature. You even provide a stack of papers to them to prove your point. Do they buy it? Even in the presence of randomized, double-blinded, placebo-controlled studies with thousands of patients (good luck finding those)?

The answer is generally NO! Why not? It’s science. It’s objective data. WTF?

Sociologists and psychologists have shown that there is a concept that they call the Backfire Effect. Essentially, once you come to believe something, you do your best to protect it from harm. You become more skeptical of facts that refute your beliefs, and less skeptical of the items that support them. Having one’s beliefs challenged, even with objective and authoritative data, causes us to hold them even more deeply. There are plenty of examples of this in everyday life. The absence of weapons of mass destruction in Iraq. The number of shooters in the JFK assassination. President Obama’s citizenship.

Bottom line: It’s human nature to try to pick apart a scientific article that challenges your biases, looking for every possible fault. It’s the Backfire Effect. Be aware of this built-in flaw (protective mechanism?) in our psyche. And always ask yourself, “what if?” Look at the issue through the eyes of someone not familiar with the concepts. If someone challenges your beliefs, welcome it! Be skeptical of both them AND yourself. You might just learn something new!

Why Is So Much Published Research So Bad?

Yesterday, my colleague the Skeptical Scalpel wrote about an interesting (?) paper published in Emergency Medicine Australasia. It was a small study that concluded that ED wait times decreased as the number of people presenting to be seen decreased. Where’s the mystery in that? Overstating the obvious?

But if you look through almost any journal today, you will find studies that leave you wondering how they ever got published. And this is not a new phenomenon. Look at any journal a year ago. Five years ago. Twenty years ago. And even older. The research landscape is littered with their carcasses. 

And on a related note, sit down with any serious clinical question in your field you want to answer. Do a deep dive with one of the major search engines and try to get an answer. Or better yet, let the professionals from the Cochrane Library or another organization do it for you. Invariably, you will find hints and pieces of the answer you seek. But never the completely usable solution you desire.

Why is it so hard? With tens of thousands of articles being published every year?

Because there is no plan! Individuals are forced to produce research as a condition of their employment. Or to ensure career advancement. Or to get into medical school, or a “good” residency. And in the US, Level I trauma centers are required to publish at least 20 papers every three years to maintain their status. So there is tremendous pressure across all disciplines to publish something.

Unfortunately, that something is usually work that is easily conceived and quickly executed. A registry review, or some other type of retrospective study. They are easy to get approval for, take little time to complete and analyze, and have the potential to get published quickly.

But what this “publish or perish” mentality promotes is a random jumble of answers to questions we didn’t really need answered. There is no planning. There is no consideration of what questions we really need to answer. Just a random collection of easily published studies that never get cited by anyone else.

Bottom line: How do we fix this? Not easily. Instead of focusing on the quantity of publications, the “authorities” need to focus on their quality. Extra credit should be given to multicenter trial involvement, prospective studies, and other higher-quality projects. The actual number of publications should not matter as much as how much high-quality work is in progress. Sure, the sheer number of studies published will decline, but the quality will increase exponentially!

How To Tell If Research Is Crap

I recently read a very interesting article on research, and found it to be very pertinent to the state of academic research today. It was published on Manager Mint, a site that considers itself to be “the most valuable business resource.” (?) But the message is very applicable to trauma professionals, medical professionals, and probably anyone else who engages in research pursuits. The link to the full article is listed at the end of this post.

1. Research is not good because it is true, but because it is interesting.

Interesting research doesn’t just restate what is already known. It creates or explores new territory. Don’t just read and believe existing dogma.

Critique it.

Question it. Then devise a way to see if it’s really true.

2. Good research is innovative.

Some of the best ideas come from combining ideas from various disciplines.

Some of the best research ideas are derived from applying concepts from totally unrelated fields to your own.

That’s why I read so many journals, blogs, and newsfeeds from many different fields. And even if you are not doing the research, a broad background can help you sort out and gain perspective as you read the works of others.

3. Good research is useful.

Yes, basic bench-level research can potentially be helpful in understanding all the nuances of a particular biochemical or disease process. But a lot of the time, it just demonstrates relatively unimportant chemical or biological reactions. And only a very small number actually contribute to the big picture. For most of us working at a macro level, research that could actually change our practice or policies is what we really need.

4. The best research should be empirically derived.

It shouldn’t rely on complicated statistical models. If it does, it means that the effect being measured is very subtle, and potentially not clinically significant. There is a big difference between statistical significance and clinical relevance.
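The statistical-versus-clinical distinction can be sketched numerically. All of the numbers below are hypothetical: a 0.5 mmHg mean difference in blood pressure (clinically meaningless) with a standard deviation of 10 becomes “statistically significant” once the sample is large enough:

```python
import math

def z_test_two_means(diff: float, sd: float, n_per_group: int) -> float:
    """Two-sample z statistic for a difference in means (equal SD and n)."""
    se = sd * math.sqrt(2 / n_per_group)  # standard error of the difference
    return diff / se

# Same trivial 0.5 mmHg effect, two very different sample sizes:
z_small_n = z_test_two_means(diff=0.5, sd=10, n_per_group=50)
z_huge_n = z_test_two_means(diff=0.5, sd=10, n_per_group=100_000)

print(round(z_small_n, 2))  # 0.25  -> well below 1.96, not significant
print(round(z_huge_n, 2))   # 11.18 -> "highly significant", still trivial
```

A p-value only tells you the effect is probably real; it says nothing about whether the effect is big enough to matter at the bedside.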

Reference: If You Can’t Answer “Yes” To These 5 Questions, Your Research Is Rubbish. Garrett Stone. Click here to view on Manager Mint.