by Gus Iversen, Editor in Chief | August 07, 2023
AI tools that quickly and accurately create detailed narrative reports of a patient’s CT scan or X-ray can greatly ease the workload of busy radiologists.
Instead of merely identifying the presence or absence of abnormalities on an image, these AI reports convey complex diagnostic information, detailed descriptions, nuanced findings, and appropriate degrees of uncertainty. In short, they mirror how human radiologists describe what they see on a scan.
Several AI models capable of generating detailed narrative reports have begun to emerge. With them have come automated scoring systems that periodically assess these tools to help inform their development and augment their performance.
So how well do the current systems gauge an AI model’s radiology performance?
The answer is good but not great, according to a new study by researchers at Harvard Medical School published Aug. 3 in the journal Patterns.
Ensuring that scoring systems are reliable is critical for AI tools to continue to improve and for clinicians to trust them, the researchers said. Yet the metrics tested in the study failed to reliably identify clinical errors in the AI reports, some of them significant. That finding, the researchers said, highlights an urgent need for improvement and for high-fidelity scoring systems that faithfully monitor tool performance.
“Accurately evaluating AI systems is the critical first step toward generating radiology reports that are clinically useful and trustworthy,” said study senior author Pranav Rajpurkar, assistant professor of biomedical informatics in the Blavatnik Institute at HMS.
Improving the score
In an effort to design better scoring metrics, the team designed a new method (RadGraph F1) for evaluating the performance of AI tools that automatically generate radiology reports from medical images.
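The study describes RadGraph F1 as scoring whether a generated report captures the same clinical findings as a radiologist's reference report. As a rough illustration only, the sketch below shows what an entity-and-relation overlap score of that kind can look like; the actual metric relies on the RadGraph information-extraction model to pull clinical entities and relations from report text, which is not shown here, and the extracted tuples below are purely hypothetical.

```python
# Minimal sketch of an entity/relation overlap (F1) score, in the spirit of
# RadGraph F1. The real metric extracts clinical entities and relations with
# the RadGraph model; here the "extracted" tuples are supplied by hand.

def f1_overlap(generated: set, reference: set) -> float:
    """F1 between two sets of extracted (entity, relation, target) tuples."""
    if not generated and not reference:
        return 1.0
    true_positives = len(generated & reference)
    precision = true_positives / len(generated) if generated else 0.0
    recall = true_positives / len(reference) if reference else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical extractions from a generated report and a reference report
generated = {("opacity", "located_at", "left lower lobe"),
             ("effusion", "modify", "small")}
reference = {("opacity", "located_at", "left lower lobe"),
             ("cardiomegaly", "modify", "mild")}

print(f"RadGraph-style F1: {f1_overlap(generated, reference):.2f}")
```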
They also designed a composite evaluation tool (RadCliQ) that combines multiple metrics into a single score that better matches how a human radiologist would evaluate an AI model’s performance.
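RadCliQ is described as rolling several metrics into a single score that tracks how a radiologist would judge a report. As an assumption-laden sketch only, a composite of that kind could weight and sum per-metric scores; the sub-metric names, values, and weights below are invented for illustration, not the published formulation.

```python
# Hypothetical sketch of a composite report-quality score that combines several
# sub-metrics into one number, in the spirit of RadCliQ. Names, values, and
# weights are illustrative assumptions only.

def composite_score(sub_scores: dict, weights: dict) -> float:
    """Weighted combination of per-metric scores (higher = better here)."""
    return sum(weights[name] * score for name, score in sub_scores.items())

sub_scores = {"text_overlap": 0.42,  # e.g., an n-gram overlap metric
              "entity_f1": 0.55}     # e.g., an entity/relation F1 as sketched above
weights = {"text_overlap": 0.4, "entity_f1": 0.6}  # would be fit to radiologist judgments

print(f"Composite score: {composite_score(sub_scores, weights):.2f}")
```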
Using these new scoring tools to evaluate several state-of-the-art AI models, the researchers found a notable gap between the models’ actual score and the top possible score.
“Measuring progress is imperative for advancing AI in medicine to the next level,” said co-first author Feiyang "Kathy" Yu, a research associate in the Rajpurkar lab. “Our quantitative analysis moves us closer to AI that augments radiologists to provide better patient care.”
Long term, the researchers’ vision is to build generalist medical AI models that perform a range of complex tasks, including the ability to solve problems never before encountered. Such systems, Rajpurkar said, could fluently converse with radiologists and physicians about medical images to assist in diagnosis and treatment decisions.
The team also aims to develop AI assistants that can explain and contextualize imaging findings directly to patients using everyday plain language.
“By aligning better with radiologists, our new metrics will accelerate development of AI that integrates seamlessly into the clinical workflow to improve patient care,” Rajpurkar said.