The FDA is looking at new rules to govern AI

December 02, 2019
by Sean Ruck, Contributing Editor
In October’s issue of HCB News, we looked at the history of AI and the fundamentals behind the technology. Rik Primo, the principal of Primo Medical Imaging Informatics Inc., is again with us for part two of our two-part piece to look at the FDA’s efforts to regulate healthcare AI software.

According to Primo, the FDA has a challenge. "The FDA has very clear rules for the approval of medical devices. If you have an X-ray device, for example, you submit the device to the FDA. They will test it; you help them with the testing." But, Primo points out, the intended use there is straightforward — producing an image using radiation. The term to be familiar with here is "software as a medical device," or SaMD. SaMD is defined by the International Medical Device Regulators Forum (IMDRF) as "software intended to be used for one or more medical purposes that perform these purposes without being part of a hardware medical device." Primo says AI should be regulated much like SaMD. According to the IMDRF, he notes, adaptive artificial intelligence and machine learning technologies differ from other SaMD in that they have the potential to adapt and optimize device performance in real time to continuously improve health care for patients. The FDA wants to focus on the risk posed to the user and on patient safety issues that could arise from the use of the device.

Altering an X-ray device, Primo says, is not a simple task. “You cannot just get a different type of X-ray tube or a different cassette tray and install it in an X-ray device that wasn’t designed to work with these parts. But with software, you basically can change the software and make the device do different functions which may take it beyond the scope of the initial intended use it was tested for.”

SaMDs can be locked, though, meaning significant algorithm changes wouldn't happen until the next version is introduced with the accompanying FDA nod. This workaround was created by the FDA and the IMDRF. However, some AI algorithms, once released into the "wild," have the ability to continuously learn (AI-CL) and evolve based on real-world experience. In that case, by the time a new update is introduced, the software may look very different from the version that originally gained FDA approval.

Primo says there may be a mistaken belief that humans are directing the changes in the AI algorithms. In fact, AI-CL algorithms can adjust their behavior based on interaction with, or feedback from, data or users — without requiring explicit direction from users for these changes.

The FDA is considering a total product life cycle-based regulatory framework for these SaMD AI-CL technologies that would allow for modifications based on real-world learning and adaptation, while still ensuring that the safety and effectiveness of the software as a medical device is maintained. For the 510(k) pathway, the framework holds that if a substantially similar device already exists, a premarket notification is sufficient for the device being introduced. For premarket authorization, however, new devices are classified under risk categories ranging from I (the lowest) to IV.

Primo says that although AI SaMD exists across a wide spectrum, from locked to continuously learning systems, a common set of considerations for data management, retraining and performance evaluation can be applied to the entire spectrum.

Last year, the Medical Imaging & Technology Alliance (MITA) organized an AI summit meeting to create an overview and inventory of artificial intelligence initiatives in the medical imaging community. Many organizations and AI experts attended, ranging from professional organizations to user organizations, regulators, manufacturers and standards development organizations.

Among the topics discussed was the fact that AI algorithms trained on images from a particular brand and model of CT or MRI scanner may not always produce exactly the same results with a different scanner. Experience shows that even subtle differences in image quality and signal-to-noise ratio can cause this effect. At the meeting's conclusion, attendees discussed assigning several organizations to work on AI standards in close coordination, to prevent overlapping work, similar to the model under which the DICOM standard was created.

While there's a lot of work to be done, valuable initiatives are already underway, according to Primo. HIMSS started an HL7 AI workgroup, which has defined initial clinical use cases in wound care and breast cancer and plans to develop a library of HL7 tools for AI. MITA, meanwhile, will develop several AI use cases defined by the ACR as a basis for regulatory guidelines addressing automatic notifications by AI programs and diagnostic applications. These use cases will be shared with the FDA for discussion and regulatory purposes. So while the FDA indeed faces challenges with AI, many committed professional organizations, SDOs, user organizations and others stand ready to assist.