By Keri Forsythe-Stephens | July 09, 2025
Call it a double-edged sword. While artificial intelligence (AI) is strengthening healthcare cybersecurity, it's also giving adversaries powerful new tools, warned Claroty's Ty Greenhalgh at the 2025 AAMI eXchange in New Orleans. In his session, "AI and Cybersecurity: Navigating Risks and Innovation in Medical Devices," Greenhalgh outlined how the technology is reshaping medical device cybersecurity, for better and for worse.
Smarter threats, smarter attacks

“Cybercriminals are already weaponizing AI to make their attacks far more potent and personalized,” he said. One tactic? AI-driven phishing and social engineering campaigns that mimic real people with remarkable accuracy. By scanning publicly available data on healthcare professionals and systems, AI tools can generate messages — or even voice deepfakes — that appear to come from a colleague, supervisor, or vendor. These messages don’t just look authentic; they’re nearly indistinguishable from the real thing.
Greenhalgh warned that AI is also enabling more evasive, adaptive malware. “AI is being used to enhance the stealth and efficiency of malicious operations. One significant way is through AI-powered polymorphic malware and evasion techniques,” he said. Traditional signature-based detection tools can’t keep up as AI-generated malware constantly changes its behavior and code.
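His point about signature matching is easy to see in miniature. The sketch below (illustrative only, not tooling from the session; the hash list is a placeholder, not real threat data) implements the exact-match logic that traditional scanners rely on:

```python
import hashlib

# Placeholder signature list; a real scanner would load thousands
# of known-bad digests from a threat-intelligence feed.
KNOWN_BAD_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path: str) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def is_flagged(path: str) -> bool:
    """Exact-match detection: flag the file only if its digest
    appears verbatim in the known-bad list."""
    return sha256_of(path) in KNOWN_BAD_HASHES
```

Because detection hinges on an exact digest match, a variant that rewrites even a single byte of itself yields a new hash and sails past the check. That is precisely the gap polymorphic, AI-mutated malware exploits.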
Proactive defense starts with oversight
To counter these threats, Greenhalgh emphasized the need for robust AI oversight both before deployment and throughout a model's lifecycle. He urged organizations to adopt governance frameworks that define each model's purpose, assess its risks, and test its performance against internal, deidentified data.
He also recommended forming cross-functional review boards, including healthcare technology management (HTM), IT, clinical, and legal stakeholders, to evaluate models for accuracy, bias, security, and privacy. Ongoing monitoring, he stressed, is essential. “The ability to audit, update, and, if necessary, quickly roll back or replace models that don’t meet safety or trustworthiness standards is paramount.”
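As a rough illustration of that lifecycle discipline (a hypothetical sketch, not a framework Greenhalgh named; all field names are invented), a governance record might track each model's documented purpose, its board-approved version, and review findings, while retaining the prior version so a failing model can be rolled back quickly:

```python
from dataclasses import dataclass, field

# Hypothetical governance record; fields are illustrative, not
# drawn from NIST, the EU AI Act, or the session itself.
@dataclass
class ModelRecord:
    name: str
    purpose: str                      # documented intended use
    approved_version: str             # version cleared by the review board
    prior_version: str | None = None  # retained to enable quick rollback
    review_findings: list[str] = field(default_factory=list)

    def promote(self, new_version: str) -> None:
        """Advance to a reviewed version, keeping the old one for rollback."""
        self.prior_version = self.approved_version
        self.approved_version = new_version

    def roll_back(self, reason: str) -> None:
        """Revert to the retained version when monitoring flags a failure."""
        if self.prior_version is None:
            raise RuntimeError("no earlier version retained")
        self.review_findings.append(f"rolled back: {reason}")
        self.approved_version = self.prior_version
        self.prior_version = None

# Example lifecycle: approve, promote, then revert after an audit finding.
record = ModelRecord(
    name="triage-risk-model",
    purpose="prioritize device alerts for HTM review",
    approved_version="1.2.0",
)
record.promote("1.3.0")
record.roll_back("accuracy drift detected in monthly audit")
```

Keeping the previous version on hand in the record is what makes the "quickly roll back or replace" requirement cheap to satisfy in practice.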
While initiatives like the National Institute of Standards and Technology's AI Risk Management Framework and the EU's AI Act are emerging, Greenhalgh said static regulations can quickly become outdated. He called for agile, principles-based frameworks that prioritize outcomes such as safety, privacy, explainability, and accountability, helping organizations balance innovation with security.