Ethics in artificial intelligence — many questions, but few answers

July 02, 2019
by John W. Mitchell, Senior Correspondent
The Socratic method was in full force over the question of sharing patient data to train deep-learning AI algorithms during an afternoon session at SIIM 2019 titled "Ethics in Radiology: The European and North American Multisociety Statement."

The interactive session was intended to solicit input from SIIM expert attendees on a draft paper about the ethics of sharing patient data in both Europe and the U.S. Some of the authors, from Kenya, the Netherlands, and the U.S., seemed pleased with responses that raised more questions than they answered.

Participating groups include the American College of Radiology (ACR), the European Society of Radiology (ESR), the Radiological Society of North America (RSNA), the Canadian Association of Radiologists (CAR), the Society for Imaging Informatics in Medicine (SIIM), the European Society of Medical Imaging Informatics (EuSoMII), and the American Association of Physicists in Medicine (AAPM).

Moderator Dr. J. Raymond Geis, assistant clinical professor of radiology at the University of Colorado, acknowledged that the topic remains unsettled and that radiologists and others hold strong opinions about it. His colleagues — Dr. Judy Gichoya, fellow and interventional radiologist at Oregon Health & Science University's Dotter Institute, and Dr. Erik Ranschaert, radiologist at the Netherlands Cancer Institute — helped lead the interactive session.

"I sent the draft to a colleague at Princeton, and he shared it with his grad students," reported Geis. "They pretty much chewed my butt about some things.”

He made the point to reinforce that the paper, which shares its title with the session, remains very much a work in progress. The audience spent most of the 90-minute session highly engaged in discussing the topic.

The opening paragraph set the tone for the interactive session:

Artificial intelligence (AI), defined as computers that behave in ways that, until recently, were thought to require human intelligence, has the potential to substantially improve all facets of radiology. AI is complex, has numerous potential pitfalls, and is inevitably biased to some degree. Radiologists and all others who build and use radiology AI products have a duty to understand AI deeply, to provide the most benefit to patients, to understand when and how hazards manifest, to be transparent about benefits and risks, and as much as possible to mitigate any harm they might cause. AI will cause dramatic clinical, social and economic changes. Most changes will be positive, but some may be for the worse.

While no conclusions or solutions were reached, the authors appeared to relish the feedback. Key discussion points included:

– The mostly clinical and academic audience seemed to support the view that sharing patient data is permissible as long as no entity or company profits from the information. However, a few people from AI companies and other commercial interests were quick to point out that medical therapeutic progress depends on shared patient data. The case of Henrietta Lacks, the famous African American woman whose cells powered many important medical discoveries and treatments, was raised as a case study in improper ethics.

– Another critical point was the ethics of actionable data. Do researchers have an obligation to follow up with patients whose data clearly indicates a health risk? Ranschaert said that the answer in Europe is yes, based on existing ethical guidelines. There was also discussion that existing IRB research guidelines allow U.S. researchers to inform patients about a medical threat.

– Opt-in and opt-out consent strategies were also a vigorous point of focus. A few participants suggested that the permission screens most users click through when downloading a new app or visiting a website are useful and adequate for meeting ethical standards in any imaging research.

"We received hundreds of comments back on the paper," said Geis. "The paper is not perfect, and some people disagree with our conclusions. We're working on trying to reach a lot of people."

The paper can be viewed at: