by John R. Fischer, Senior Reporter | March 09, 2020
A new study shows the advantages that could come from radiologists working hand-in-hand with machine learning algorithms for breast cancer screenings.
The findings are derived from the Digital Mammography DREAM Challenge, a crowdsourced competition to determine whether AI algorithms could interpret scans more accurately than their human counterparts. While no single algorithm outperformed the radiologists, combining the two was found to improve overall screening accuracy and could potentially eliminate 500,000 unnecessary workups annually, according to researchers.
“This was done using an algorithm that, from a set of images in a screening exam, uses the results from the eight best deep learning algorithms from the challenge and the radiologist assessment (1 for recall and 0 for not recall), and processes that information to output a prediction (probabilistic) of whether the woman has cancer or not,” Gustavo Stolovitzky, the director of the IBM Translational Systems Biology and Nanobiotechnology Program for IBM Research and founder of the DREAM Challenges, told HCB News. “This algorithm was trained in the training set, and tested in the two evaluation data sets.”
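The study does not specify how the eight model outputs and the radiologist's recall decision were combined, but a minimal sketch of the kind of ensemble Stolovitzky describes, assuming a simple logistic regression over those nine inputs and using synthetic stand-in data, might look like this:

```python
# Hypothetical sketch only: the challenge's actual combination method is not
# described in this article. We assume a logistic regression trained on the
# eight deep-learning scores plus the radiologist's 0/1 recall assessment.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in data for 1,000 screening exams.
n = 1000
y = rng.integers(0, 2, n)                             # ground truth: cancer (1) or not (0)
scores = 0.5 * rng.random((n, 8)) + 0.5 * y[:, None]  # 8 algorithm outputs, correlated with truth
recall = (0.6 * rng.random(n) + 0.4 * y > 0.5).astype(float)  # radiologist: 1 recall, 0 no recall

X = np.column_stack([scores, recall])                 # 9 features per exam

# Train on one split, evaluate on a held-out split, mirroring the
# challenge's separate training and evaluation data sets.
model = LogisticRegression().fit(X[:800], y[:800])
probs = model.predict_proba(X[800:])[:, 1]            # probabilistic cancer prediction per exam
```

The key property is that the output is a probability per exam rather than a hard yes/no, which is what allows the combined system to be tuned for fewer unnecessary recalls.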
Mammography screenings are a common tool for detecting breast cancer early, but they must be assessed and interpreted by a radiologist, who relies on visual perception to identify signs of cancer. This has led to false-positive results for an estimated 10 percent of the 40 million U.S. women who receive routine annual breast cancer screenings.
The study drew on hundreds of thousands of de-identified mammograms and clinical data from the Kaiser Permanente Washington Health Research Institute and the Karolinska Institute in Sweden. Research was conducted by Kaiser Permanente Washington, alongside IBM Research, Sage Bionetworks and the UW School of Medicine.
Participants were invited to submit their algorithms to the study organizers, who developed a system that automatically ran the models on the data. This model-to-data approach avoided distributing data to participants and minimized the risk of sensitive patient data being released.
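The model-to-data pattern can be illustrated with a short sketch. The function and data names below are invented for illustration; the point is only that a submitted model runs on the organizer's side and the participant receives an aggregate score, never the data itself:

```python
# Hypothetical illustration of the model-to-data approach described above.
# The participant's model runs inside the organizer's environment; only a
# summary metric is returned, so the private data never leaves that side.
import numpy as np

def evaluate_submission(predict_fn, private_images, private_labels):
    """Run a participant's model on data held by the organizer and
    return only an aggregate metric, not raw data or per-case output."""
    preds = np.array([predict_fn(img) for img in private_images])
    accuracy = float((preds.round() == private_labels).mean())
    return {"accuracy": accuracy}  # the participant sees only this summary

# Example: a trivial stand-in "model" scored against synthetic private data.
rng = np.random.default_rng(1)
images = rng.random((50, 4, 4))   # stand-in for de-identified mammograms
labels = rng.integers(0, 2, 50)
report = evaluate_submission(lambda img: float(img.mean()), images, labels)
```

In the actual challenge the evaluation system was automated end to end, so submissions could be scored at scale without any human ever forwarding data to a participant.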
Stolovitzky says the biggest takeaway from these findings is that a combination of AI and radiologist assessments could potentially reduce unnecessary diagnostic workups in the U.S. He cautions, however, that more research is required.
“This means that a prospective clinical trial has to be made that checks the accuracy of this methodology,” he said. “In such a study, it would be necessary to study the interaction of a human interpreter with AI algorithm results and how AI would influence radiologists' final assessment.”
The research was funded by the National Cancer Institute and the American Cancer Society.
The study was published in JAMA Network Open.