ChatGPT 4 excels at picking the right imaging tests: Study


New York: OpenAI's ChatGPT can support the clinical decision-making process, including picking the correct radiological imaging tests for breast cancer screening or breast pain, a new study has found.

The study, by investigators from Mass General Brigham in the US, suggests that large language models have the potential to assist decision-making for primary care doctors and referring providers when evaluating patients and ordering imaging tests for breast pain and breast cancer screenings. The results are published in the Journal of the American College of Radiology.

"In this scenario, ChatGPT's abilities were impressive," said corresponding author Marc D. Succi, associate chair of Innovation and Commercialisation at Mass General Brigham Radiology and executive director of the MESH Incubator.

"I see it acting like a bridge between the referring healthcare professional and the expert radiologist -- stepping in as a trained consultant to recommend the right imaging test at the point of care, without delay.

"This could reduce administrative time on both referring and consulting physicians in making these evidence-backed decisions, optimise workflow, reduce burnout, and reduce patient confusion and wait times," Succi said.

In the study, the researchers asked ChatGPT 3.5 and ChatGPT 4 to help decide which imaging tests to use for 21 hypothetical patient scenarios involving the need for breast cancer screening or the reporting of breast pain, judging the chatbot's answers against the American College of Radiology's (ACR) Appropriateness Criteria.

They prompted the AI both in an open-ended format and by giving ChatGPT a list of imaging options to choose from, and they tested ChatGPT 3.5 as well as ChatGPT 4, a newer, more advanced version of the model.
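To make the multiple-choice format concrete, here is a minimal sketch of how such a query could be sent to the model programmatically. This is illustrative only, not the study's actual protocol: it assumes the openai Python SDK (v1 or later) with an API key set in the environment, and the scenario text and imaging options are invented for this example.

```python
# Illustrative sketch only -- not the study's actual code. Assumes the
# openai Python SDK (v1+) and an OPENAI_API_KEY in the environment.
# The scenario and option list below are hypothetical.
from openai import OpenAI

client = OpenAI()

scenario = (
    "A 40-year-old woman with average risk and no symptoms presents "
    "for routine breast cancer screening. Which imaging test is most "
    "appropriate?"
)

# Multiple-choice format: the model picks from a fixed list of options,
# the condition under which the study reports the highest accuracy.
options = [
    "A. Digital mammography",
    "B. Breast MRI",
    "C. Breast ultrasound",
    "D. No imaging indicated",
]

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "user", "content": scenario + "\n" + "\n".join(options)},
    ],
)
print(response.choices[0].message.content)
```

Dropping the option list from the prompt yields the open-ended format the researchers also tested; the response would then be checked against the ACR guidance rather than a lettered answer.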

ChatGPT 4 outperformed 3.5, especially when given the available imaging options.

For example, when asked about breast cancer screening and given multiple-choice imaging options, ChatGPT 3.5 answered an average of 88.9 per cent of prompts correctly, while ChatGPT 4 got about 98.4 per cent right.

"This study doesn't compare ChatGPT to existing radiologists because the existing gold standard is actually a set of guidelines from the American College of Radiology, which is the comparison we performed," Succi said.

"This is purely an additive study, so we are not arguing that the AI is better than your doctor at choosing an imaging test but can be an excellent adjunct to optimise a doctor's time on non-interpretive tasks."
