
Why Is So Much Published Research So Bad?

Welcome to two days of rants about bad research!

I read lots of trauma-related articles every week. And as I browse through them, I often find studies that leave me wondering how they ever got published. And this is not a new phenomenon. Look at any journal a year ago. Five years ago. Twenty years ago. And even older. The research landscape is littered with their carcasses.

And on a related note, sit down with any serious clinical question in your field you want to answer. Do a deep dive with one of the major search engines and try to get an answer. Or better yet, let the professionals from the Cochrane Library or another organization do it for you. Invariably, you will find hints and pieces of the answer you seek. But never the completely usable solution you desire.

Why is it so hard? Even with tens of thousands of articles being published every year?

Because there is no overarching plan! Individuals are forced to produce research as a condition of their employment. Or to assure career advancement. Or to get into medical school, or a “good” residency. And in the US, Level I trauma centers are required to publish at least 20 papers every three years to maintain their status. So there is tremendous pressure across all disciplines to publish something.

Unfortunately, that something is usually work that is easily conceived and quickly executed. A registry review, or some other type of retrospective study. They are easy to get approval for, take little time to complete and analyze, and have the potential to get published quickly.

But what this “publish or perish” mentality promotes is a random jumble of answers that we didn’t really need and can’t learn a thing from. There is no planning. There is no consideration of what questions we really need to answer. Just a random bunch of thoughts that are easy to get published but never get cited by anyone else.

Bottom line: How do we fix this? Not easily. Give every work a “quality score.” Instead of focusing on the quantity of publications, the “authorities” (tenure committees and the journal editors themselves) need to focus on their quality. Extra credit should be given to multicenter trial involvement, prospective studies, and other higher-quality projects. These increase the quality score. The actual number of publications should not matter as much as how much high-quality work is in progress. Judge the individual or center on their total quality score, not the absolute number of papers they produce. Sure, the sheer number of studies published would decline, but the quality would increase dramatically!
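Purely as an illustration, such a scoring scheme might look like the sketch below. The categories and weights here are hypothetical; the post proposes the idea of a quality score but does not specify how to compute one.

```python
# Illustrative "quality score" calculator. All categories and weights are
# made up for this sketch -- a real scheme would need consensus values.
QUALITY_WEIGHTS = {
    "case_report": 1,
    "retrospective_review": 2,
    "prospective_observational": 5,
    "multicenter_trial": 8,
    "randomized_controlled_trial": 10,
}

def quality_score(publications):
    """Sum the weights for a list of publication types (unknown types count 0)."""
    return sum(QUALITY_WEIGHTS.get(p, 0) for p in publications)

# Three quick registry reviews score lower than a single multicenter trial,
# even though the raw publication count is three times higher.
registry_heavy = quality_score(["retrospective_review"] * 3)   # 6
trial_focused = quality_score(["multicenter_trial"])           # 8
```

The point of the design is in the comparison at the bottom: a center judged on count alone would look three times as productive, while the score rewards the single higher-quality project.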

Tomorrow, the big picture view on how to detect bad research.

Pop Quiz: Do We Really Need To Do All That? The Answer

The scenario involved an elderly woman who fell from standing at her care facility 12 hours earlier. The facility wants to send her to your trauma center for evaluation because she seems a bit different from her baseline. You have well-defined practice guidelines for patients with head injuries that dictate what type of monitoring and diagnostics they receive.

What do you need to know to determine what you should do? Thanks to all of you who sent in suggestions.

Here are my thoughts:

  • Which scans should she get? Usually, you would obtain an initial head CT and, due to her age, a cervical spine CT regardless of her physical exam, given the high miss rate in these patients. But now the fun begins. Your subarachnoid / intraparenchymal hemorrhage (IPH) practice guideline would have you admit for neurologic monitoring for 12 hours, obtain a TBI screen, then discharge without a follow-up scan if the screen was passed. But in this case, the clock started 12 hours ago, and the guideline would be finished with the exception of the TBI screen. So an initial scan and a TBI screen in the ED are all that are needed. The observation period is already over, and the patient could potentially be discharged from the ED if a SAH or IPH were found.
    Your subdural guideline mandates all of the above plus a repeat scan at 12 hours. But once again, the clock has already started. Do you just get an initial scan, which also serves as the 12-hour scan? Or do you get yet another one? If the neuro exam is normal, I vote for the former, and your evaluation is complete after the TBI screen. If the neuro exam is not quite normal, then admission for continuing exams and a repeat scan are in order.
  • Does the patient need to be admitted, and for how long? Hopefully, you’ve figured this out in the previous bullet. The clock started running when she fell, so in cases where the physical exam is normal, only the first CT is needed and ongoing monitoring is not. Thus, she could return to her care facility from the ED after the scan.
  • What other important information do you need to know? Of paramount importance is her DNR status and her or her family’s willingness to proceed with brain surgery if a significant lesion is identified. It is extremely important to know the latter. If there is no patient or family intent to ever proceed to surgery, is there any point in obtaining scans at all? In my opinion, no. The whole reason to obtain the scan and monitor is to potentially “do something.” But if the patient and/or family will not let us “do something,” there is no reason to do any of this. It is crucial that the patient and family understand the typical outcomes from surgery given her age and degree of frailty. This is most important in patients who are impaired with dementia, or who have a high-grade lesion from which there is minimal chance of recovery. In most such cases, even if surgery is “successful,” the patient will never recover enough to return to their prior level of care. This should be weighed heavily by the family and care providers.
  • Should a patient with DNR or “no surgery” orders even be sent to the ED? Theoretically, no. There is no need from the standpoint of their future care, since they are not really eligible to have any studies or monitoring done. The facility may insist because of its own liability concerns, but this is not a valid clinical reason.
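The decision logic above can be sketched in code. This is a hypothetical illustration only, not a clinical tool: the 12-hour window and the guideline branches come from the scenario as described, but the function, class, and field names are my own.

```python
from dataclasses import dataclass

OBSERVATION_HOURS = 12  # monitoring window from the guidelines described above

@dataclass
class Patient:
    hours_since_fall: float
    neuro_exam_normal: bool
    has_subdural: bool
    willing_to_have_surgery: bool  # patient/family intent, per the DNR discussion

def workup(p: Patient) -> list:
    """Return the remaining workup steps for a delayed-presentation fall."""
    if not p.willing_to_have_surgery:
        # If no intervention would ever be pursued, imaging changes nothing.
        return ["goals-of-care discussion only"]
    steps = ["initial head CT", "cervical spine CT", "TBI screen"]
    remaining = OBSERVATION_HOURS - p.hours_since_fall
    if remaining > 0:
        # Clock started at the fall, so only the unexpired window remains.
        steps.append(f"observe {remaining:g} more hours")
    if p.has_subdural and not p.neuro_exam_normal:
        # Abnormal exam: admit for serial exams and a repeat scan.
        steps += ["admit for serial neuro exams", "repeat head CT"]
    return steps
```

For the woman in the scenario (12 hours out, normal exam, surgery acceptable), the function returns only the initial CTs and the TBI screen, matching the conclusion that she could go back to her facility from the ED.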

I hope you enjoyed this little philosophical discussion. Feel free to agree/disagree through your comments or tweets!

 

Why People Don’t Change Their Minds Despite The Data

Has this happened to you?

Your (emergency physician / neurosurgeon / orthopaedic surgeon) colleague wants to (get rib detail xrays / administer steroids / wait a few days before doing a femur ORIF). You question it based on your interpretation of the literature. You even provide a stack of papers to them to prove your point. Do they buy it? Even in the presence of randomized, double-blinded, placebo-controlled studies with thousands of patients (good luck finding those)?

The answer is generally NO! Why not? It’s science. It’s objective data. WTF?

Sociologists and psychologists have shown that there is a concept that they call the Backfire Effect. Essentially, once you come to believe something, you do your best to protect it from harm. You become more skeptical of facts that refute your beliefs, and less skeptical of the items that support them. Having one’s beliefs challenged, even with objective and authoritative data, causes us to hold them even more deeply. There are plenty of examples of this in everyday life. The absence of weapons of mass destruction in Iraq. The number of shooters in the JFK assassination. President Obama’s citizenship.

Bottom line: It’s human nature to try to pick apart a scientific article that challenges your biases, looking for every possible fault. It’s the Backfire Effect. Be aware of this built-in flaw (protective mechanism?) in our psyche. And always ask yourself, “what if?” Look at the issue through the eyes of someone not familiar with the concepts. If someone challenges your beliefs, welcome it! Be skeptical of both them AND yourself. You might just learn something new!

But The Radiologist Made Me Do It!

The radiologist made me order that (unnecessary) test! I’ve heard this excuse many, many times. Do these phrases look familiar?

  1. … recommend clinical correlation
  2. … correlation with CT may be of value
  3. … recommend delayed CT imaging through the area
  4. … may represent thymus vs thoracic aortic injury (in a 2 year old who fell down stairs)
Some trauma professionals will read the radiology report and then immediately order more xrays. Others will critically look at the report, the patient’s clinical status, and the mechanism of injury, and then decide the extra studies are not necessary. I am firmly in the latter camp.

But why do some just follow the rad’s suggestions? I believe there are two major camps:

  • Those who are afraid of being sued if they don’t do everything suggested, reasoning that if they’ve done everything, they can’t be faulted for a missed diagnosis
  • Those who don’t completely understand what is known about trauma mechanisms and injury, and assume the radiologist does

Bottom line: The radiologist is your consultant. While radiologists are good at reading images, they do not know the nuances of trauma. Plus, they didn’t get to see the patient, so they don’t have the full context for their read. First, talk to the rad so they know what happened to the patient and what you are looking for. Then critically look at their read. If the mechanism doesn’t support the diagnosis, or they are requesting unusual or unneeded studies, don’t get them! Just document your rationale clearly in the record. This provides the best patient care and minimizes the potential complications (and radiation exposure) from unnecessary tests.

Reference: Pitfalls of the vague radiology report. AJR 174(6):1511-1518, 2000.
