Tag Archives: artificial intelligence

ChatGPT And Your Research Paper

Generative artificial intelligence (AI) is the newest shiny toy. The best-known example, ChatGPT, burst onto the scene in November 2022 and caught most of us off guard. The earliest versions were interesting and showed great promise for a variety of applications.

The easiest way to think about this technology is to compare it to the auto-complete feature in your search engine. When you start to type a query, the search engine will show a list of commonly entered queries that begin the same way.

Generative AI does the same thing, just on a vastly expanded scale. It looks at the prompt the user entered and generates its best guess at what should come next, based on the huge amount of data it has trained on. As the algorithms get better and the training data more extensive, the AI continues to improve.
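
To make the analogy concrete, here is a toy sketch of next-word prediction in Python. The tiny corpus and the greedy completion rule are purely illustrative; a real model does the same job with billions of parameters over sub-word tokens rather than a handful of counts.

```python
from collections import defaultdict

# Toy "language model": count which word follows which in a tiny corpus,
# then complete a prompt by repeatedly choosing the most likely next word.
corpus = "the patient was taken to the operating room and the patient recovered".split()

follower_counts = defaultdict(lambda: defaultdict(int))
for current_word, next_word in zip(corpus, corpus[1:]):
    follower_counts[current_word][next_word] += 1

def complete(prompt_word, length=5):
    """Greedily extend a one-word prompt using the bigram counts."""
    words = [prompt_word]
    for _ in range(length):
        followers = follower_counts.get(words[-1])
        if not followers:
            break  # nothing in the training data follows this word
        words.append(max(followers, key=followers.get))
    return " ".join(words)

print(complete("the"))  # -> "the patient was taken to the"
```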

However, there are drawbacks. Early iterations of ChatGPT demonstrated that the AI could very convincingly generate text that didn’t match reality. There are several very public cases of this. One online sports reporting service that used AI extensively found that nearly 50% of the articles it produced were inaccurate. And a lawyer submitted a legal brief prepared by AI without checking it; the judge determined that the cases cited in the brief were totally fabricated by the algorithms.

One of the most significant controversies facing academics and the research community is how to harness this valuable tool reliably. College students have been known to submit papers partially or fully prepared by AI, and there is concern that it could be used to write parts or all of research papers for publication. Some enterprising startups have developed tools for spotting AI-generated text. And most journals have adopted language in their guidelines requiring authors to disclose their use of these tools.

Here’s some sample language from the Elsevier Guide for Authors on their website:

“Where authors use generative artificial intelligence (AI) and AI-assisted technologies in the writing process, authors should only use these technologies to improve readability and language. Applying the technology should be done with human oversight and control, and authors should carefully review and edit the result, as AI can generate authoritative-sounding output that can be incorrect, incomplete, or biased. AI and AI-assisted technologies should not be listed as an author or co-author, or be cited as an author. Authorship implies responsibilities and tasks that can only be attributed to and performed by humans, as outlined in Elsevier’s AI policy for authors.

Authors should disclose in their manuscript the use of AI and AI-assisted technologies in the writing process by following the instructions below. A statement will appear in the published work. Please note that authors are ultimately responsible and accountable for the contents of the work.

Disclosure instructions
Authors must disclose the use of generative AI and AI-assisted technologies in the writing process by adding a statement at the end of their manuscript in the core manuscript file, before the References list. The statement should be placed in a new section entitled ‘Declaration of Generative AI and AI-assisted technologies in the writing process’.

Statement: During the preparation of this work the author(s) used [NAME TOOL / SERVICE] in order to [REASON]. After using this tool/service, the author(s) reviewed and edited the content as needed and take(s) full responsibility for the content of the publication.”

But apparently, some researchers don’t heed the guidelines. Here is a snippet of text from a case report published in an Elsevier journal earlier this year. I’ve highlighted the suspicious text at the end of the discussion section:

It appears the authors tried to use AI to generate this text but triggered the model’s built-in disclaimer, which was then pasted into the manuscript verbatim. This represents a failure on many levels:

  • The authors apparently used AI to generate one or more parts of their manuscript.
  • The authors did not thoroughly proofread the final copy before submission to the journal.
  • The authors did not disclose their use of AI as required.
  • The editors and reviewers for the journal did not appear to read it very closely either.
  • And it’s now in the wild for everyone to see.

The big question is, how reliable is the rest of the text? How much is actually just a bunch of stuff pulled together by generative AI, and how accurate is that?

Bottom line: First, use generative AI responsibly. Some excellent tools are available that allow researchers to gather a large quantity of background information and quickly analyze it. However, it ultimately needs to be fact-checked before being used in a paper.

Second, refrain from copying and pasting AI material into your paper. After fact-checking the material, write your own thoughts independent of the AI text.

Finally, if you do use AI, be sure to follow the editorial guidelines for your journal. Failure to do so may result in your being banned from future submissions!

Reference: Successful management of an iatrogenic portal vein and hepatic artery injury in a 4-month-old female patient: a case report and literature review. Radiology Case Reports 2024;19:2106-2111.

New Technology: Using AI To Interpret Pelvic X-rays

Look out, radiologists! The computers are coming for you!

Radiologists use their extensive understanding of human anatomy and combine it with subtle findings they see on x-ray shadow pictures. In doing this, they can identify a wide variety of diseases, anomalies, and injuries. But as we have seen with vision systems and game playing (think chess), computers are getting pretty good at doing this as well.

Is it only a matter of time until computer artificial intelligence (AI) starts reading x-rays? Look at how good computers already are at interpreting EKGs. The trauma group at Stanford paired up with Chang Gung Memorial Hospital in Taiwan to test the use of AI for interpreting images to identify a specific set of common pelvic fractures.

The Stanford group used a deep learning neural network (Xception) to analyze source x-rays (standard A-P pelvis images) from Chang Gung. These x-rays were divided into training and testing cohorts. The authors also applied varying degrees of blurring, brightness, rotation, and contrast adjustment to the training set to help the AI overcome these issues when interpreting novel images.
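
The paper doesn’t include code, but a pipeline along these lines can be sketched with Keras, which ships both a pretrained Xception model and random-augmentation layers. The image size, augmentation ranges, and binary fracture-vs.-no-fracture framing below are my assumptions, not the authors’ published settings (Keras has no built-in blur layer, so only the other three augmentations are shown):

```python
import tensorflow as tf
from tensorflow.keras import layers

IMG_SIZE = (299, 299)  # Xception's native input resolution (assumed here)

# Random augmentation applied only during training, mirroring the paper's
# rotation / brightness / contrast perturbations. Ranges are guesses.
augment = tf.keras.Sequential([
    layers.RandomRotation(0.05),
    layers.RandomBrightness(0.2),
    layers.RandomContrast(0.2),
])

# Pretrained Xception backbone with a single sigmoid output:
# fracture vs. no fracture.
base = tf.keras.applications.Xception(
    include_top=False, weights="imagenet",
    input_shape=IMG_SIZE + (3,), pooling="avg",
)

inputs = layers.Input(shape=IMG_SIZE + (3,))
x = augment(inputs)  # active in training, pass-through at inference
x = tf.keras.applications.xception.preprocess_input(x)
x = base(x)
outputs = layers.Dense(1, activation="sigmoid")(x)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # datasets not shown
```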

The AI interpreted the test images with a very high degree of sensitivity, specificity, accuracy, and predictive values, with all of them over 0.90. The algorithms generated a “heat map” that showed the areas that were suspicious for fracture. Here are some examples with the original x-ray on the left and the heat map on the right:

The top row shows a femoral neck fracture, the middle row an intertrochanteric fracture, and the bottom row another femoral neck fracture with a contralateral implant. All were handily identified by the AI.
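
For the record, here is what those performance figures actually measure. A minimal sketch, with counts invented purely to illustrate the arithmetic:

```python
# AI calls versus ground truth on the test set; counts are made up
# purely to show the arithmetic behind each reported figure.
tp, fn = 95, 5   # true fractures: flagged vs. missed
tn, fp = 92, 8   # normal films: cleared vs. falsely flagged

sensitivity = tp / (tp + fn)                   # fraction of fractures detected
specificity = tn / (tn + fp)                   # fraction of normals cleared
accuracy    = (tp + tn) / (tp + tn + fp + fn)
ppv         = tp / (tp + fp)                   # positive predictive value
npv         = tn / (tn + fn)                   # negative predictive value

print(f"sens {sensitivity:.2f}, spec {specificity:.2f}, acc {accuracy:.2f}, "
      f"PPV {ppv:.2f}, NPV {npv:.2f}")         # all > 0.90, as in the study
```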

AI applications are usually only as good as their training sets. In general, the bigger the training set, the broader the experience the model gains and the more accurate its interpretations. So it is possible that uncommon, subtle fractures could be missed. But remember, artificial intelligence is meant to supplement the radiologist, not replace him or her. You can all breathe more easily now.

This technology has the potential for broader use in radiographic interpretation. In my mind, the best way to use it is to first let the radiologist read the images as they usually do. Once they have done this, then turn on the heat map so they can see any additional anomalies the AI has found. They can then use this information to supplement the initial interpretation.
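
The paper doesn’t spell out how its heat maps were generated; Grad-CAM is the standard technique for this kind of overlay, and a minimal version is sketched below. It assumes a binary classifier like the earlier Xception sketch, with the final convolutional layer reachable by name:

```python
import numpy as np
import tensorflow as tf

def grad_cam(model, image, conv_layer):
    """Coarse 'where the model looked' map: weight the final conv feature
    maps by the gradient of the fracture score with respect to them.
    `conv_layer` is the layer's name, e.g. "block14_sepconv2_act" in Xception."""
    grad_model = tf.keras.Model(
        model.inputs, [model.get_layer(conv_layer).output, model.output]
    )
    with tf.GradientTape() as tape:
        conv_out, score = grad_model(image[np.newaxis, ...])
        fracture_score = score[:, 0]
    grads = tape.gradient(fracture_score, conv_out)     # d(score)/d(feature map)
    weights = tf.reduce_mean(grads, axis=(1, 2))        # average gradient per channel
    cam = tf.reduce_sum(conv_out * weights[:, None, None, :], axis=-1)
    cam = tf.nn.relu(cam)[0]                            # keep positive evidence only
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()  # normalized map in [0, 1]
```

Resized to the film’s resolution and rendered as a semi-transparent overlay, this map is exactly the kind of toggle described above.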

Expect to see more work like this in the future. I predict that, ultimately, the picture archiving and communication system (PACS) software providers will build this into their products. As the digital images move from the imaging hardware to the digital storage media, the AI can intercept them and begin the augmented interpretation process. The radiologist will then be able to turn on the heat map as soon as the images arrive at their workstation.
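
No PACS vendor exposes a standard hook for this today, so the following is purely a conceptual sketch: it uses the pydicom library to pull pixels from a DICOM file as it lands in storage, reusing the hypothetical model and grad_cam() from the sketches above.

```python
import numpy as np
import pydicom
import tensorflow as tf

# Hypothetical interception hook -- not a real PACS API. `model`, `conv_layer`,
# and grad_cam() are the placeholder pieces from the earlier sketches.
def on_new_image(dicom_path, model, conv_layer):
    ds = pydicom.dcmread(dicom_path)                # parse the stored DICOM file
    pixels = ds.pixel_array.astype(np.float32)

    # Scale to [0, 255] and replicate to 3 channels to match the classifier input.
    pixels = 255 * (pixels - pixels.min()) / (pixels.max() - pixels.min() + 1e-8)
    image = np.stack([pixels] * 3, axis=-1)
    image = tf.image.resize(image, (299, 299)).numpy()

    heat_map = grad_cam(model, image, conv_layer)
    np.save(dicom_path + ".heatmap.npy", heat_map)  # workstation overlays on demand
```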

Stay tuned! I’m sure there is more like this to come!

Reference: Practical computer vision application to detect hip fractures on pelvic X-rays: a bi-institutional study. Trauma Surgery and Acute Care Open 2021;6(1):e000705. http://dx.doi.org/10.1136/tsaco-2021-000705.
