by John W. Mitchell, Senior Correspondent | December 03, 2019
With the buzz and excitement of AI in medical imaging rising, a panel on the first day of RSNA counseled caution moving forward. Despite the promise, it’s up to every radiologist to make sure that AI applications honor the ethics of trust implicit in the patient-doctor relationship.
In a session titled “Ethics in Radiology: Summary of the European and North American Multisociety Statement,” the panelists found fertile ground for the topic of AI ethics in recent headlines: the Boeing 737 MAX crashes, pedestrian deaths caused by self-driving cars, and the recent discovery that a widely used AI algorithm discriminates against black patients. AI can do harm when ethical competency gets shorted.
“How ethical is ethical enough?” Dr. Raym Geis, ACR Data Science Institute senior scientist, asked the audience in Chicago.
Geis said that imaging AI, with all of its promise, still has a way to go in gaining acceptance with patients. Dr. Elmar Kotter, vice chairman of radiology at the University Hospital Freiburg in Germany, and vice president of the European Society of Medical Imaging Informatics, echoed Geis' caution. Kotter cited a recent survey that found 65 percent of all patients are uncomfortable with AI and prefer a doctor’s judgment.
“AI is both morally and technically challenging," said Geis. “AI must be worthy of patient trust.”
The three panel members, who also included Judy Gichoya, assistant professor of IR Informatics at Emory University School of Medicine, shared an overview of work that several imaging professional societies recently released concerning the ethics of AI data use, algorithms, and practices in medical imaging.
Geis said an advisory panel (“18 people trying to do the right thing”) was composed of a cross-section of professionals. These included patient advocates, lawyers, a philosophy professor, medical clinicians, and information technology experts.
The biggest challenge, according to the panel, is that no medical specialty has any experience using AI to care for patients at the scale that is rapidly emerging. It’s up to radiologists to monitor AI applications going forward to make sure that patients are protected.
Gichoya spoke to several long-standing concerns in medical big data development, such as cybersecurity threats and ensuring that the profit motive behind AI applications takes a back seat to ethically achieved improvements in patient outcomes. She also touched on the technical process of using machine learning to develop medical imaging algorithms.
She also provided a quick overview of the recent revelation of unintentional racial bias in a widely used commercial risk algorithm for managing population health. AI, she said, needs to be able to explain its findings and risks to those relying on its assistance.
Kotter cited several challenges the joint society statement was meant to address. These included:
– AI needs to keep humans in the loop
– Radiologists need to be prepared, including ethically, for AI assist tools
– AI is prone to “automation bias” in the machine learning stage of algorithm development
– Patient preference must always be considered
– AI needs to be traceable and explainable
– Imaging AI will cause workforce disruption
“Radiologists will remain ultimately responsible for patient care,” said Kotter. “We have to steer technological advancements and make the best decisions and actions for and with patients.”