Welcome to two days of rants about bad research!
I read lots of trauma-related articles every week. And as I browse through them, I often find studies that leave me wondering how they ever got published. This is not a new phenomenon, either. Look at any journal from a year ago. Five years ago. Twenty years ago. Or even older. The research landscape is littered with these carcasses.
And on a related note, sit down with any serious clinical question in your field that you want to answer. Do a deep dive with one of the major search engines and try to get an answer. Or better yet, let the professionals at the Cochrane Library or a similar organization do it for you. Invariably, you will find hints and pieces of the answer you seek. But never the complete, usable solution you desire.
Why is it so hard? Even with tens of thousands of articles being published every year?
Because there is no overarching plan! Individuals are forced to produce research as a condition of their employment. Or to ensure career advancement. Or to get into medical school, or a “good” residency. And in the US, Level I trauma centers are required to publish at least 20 papers every three years to maintain their status. So there is tremendous pressure across all disciplines to publish something.
Unfortunately, that something is usually work that is easily conceived and quickly executed. A registry review, or some other type of retrospective study. They are easy to get approval for, take little time to complete and analyze, and have the potential to get published quickly.
But what this “publish or perish” mentality produces is a random jumble of answers that we didn’t really need and can’t learn a thing from. There is no planning. There is no consideration of which questions we really need to answer. Just a scattering of papers that are easy to get published but rarely get cited by anyone else.
Bottom line: How do we fix this? Not easily. My suggestion: give every work a “quality score.” Instead of focusing on the quantity of publications, the “authorities” (tenure committees and the journal editors themselves) need to focus on their quality. Extra credit should be given to multicenter trial involvement, prospective studies, and other higher-quality projects. These would increase the quality score. The actual number of publications should matter less than how much high-quality work is in progress. Judge the individual or center on their total quality score, not the absolute number of papers they produce. Sure, the sheer number of studies published would decline, but the quality would increase dramatically!
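To make the idea concrete, here is a minimal sketch of how such a quality score might be tallied. The study types and weights below are purely hypothetical assumptions for illustration; any real scheme would need its categories and point values set by the reviewing body.

```python
# Purely illustrative sketch of the "quality score" idea described above.
# The study types and weights are hypothetical assumptions, not an
# established scoring system.

# Hypothetical credit assigned to each study type; higher = more rigorous.
STUDY_WEIGHTS = {
    "registry_review": 1,            # easy, retrospective
    "retrospective_cohort": 2,
    "prospective_observational": 4,
    "randomized_trial": 6,
    "multicenter_trial": 8,          # extra credit for multicenter involvement
}


def quality_score(publications):
    """Sum the weights of a center's publications instead of counting them."""
    return sum(STUDY_WEIGHTS.get(study_type, 0) for study_type in publications)


# Example: ten quick registry reviews score lower than three rigorous projects.
center_a = ["registry_review"] * 10
center_b = ["multicenter_trial", "prospective_observational", "randomized_trial"]

print(quality_score(center_a))  # 10
print(quality_score(center_b))  # 18
```

The only point of the sketch is that the total rewards rigor rather than volume: a stack of quick retrospective reviews ends up scoring lower than a handful of prospective or multicenter projects.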
Tomorrow: the big-picture view on how to detect bad research.