Study team tricks AI programs into misclassifying diagnostic images

by John W. Mitchell, Senior Correspondent
With machine learning algorithms recently approved by the FDA to diagnose images without physician input, providers, payers, and regulators may need to be on guard for a new kind of fraud.

That’s the conclusion of a Harvard Medical School/MIT study team of biomedical informatics researchers, physicians, and Ph.D. candidates, in a paper just published in IEEE Spectrum. The team successfully launched “adversarial attacks” against three common automated AI medical imaging tasks, fooling the programs into misdiagnosis up to 100 percent of the time. Their findings have implications for imaging related to fraud, unnecessary treatments, higher insurance premiums, and the possible manipulation of clinical trials.

The team defined adversarial attacks on AI imaging algorithms as “…inputs to machine learning models that an attacker has intentionally designed to cause the model to make a mistake.”
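The article does not spell out the attack mechanics, but a minimal sketch can make the definition concrete. The fast gradient sign method (FGSM) is one widely known way to craft such inputs; the study’s actual attack is not specified here, so the PyTorch sketch below is illustrative only, and `model`, `image`, and `true_label` are hypothetical placeholders.

```python
# Illustrative sketch of the fast gradient sign method (FGSM), one common
# way to build an adversarial example. Not the study's code; `model`,
# `image`, and `true_label` are assumed placeholders.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, true_label, epsilon=0.01):
    """Perturb `image` so the model is pushed toward misclassifying it,
    while keeping each pixel change within +/- epsilon (imperceptible)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Nudge every pixel in the direction that most increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    # Assumes pixel values are normalized to the [0, 1] range.
    return adversarial.clamp(0, 1).detach()
```

The key point the definition captures is that the perturbation is deliberate and optimized: the attacker uses the model’s own gradients against it, which is why the resulting image can look unchanged to a human reader yet flip the algorithm’s output.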

“Adversarial examples have become a major area of research in the field of computer science, but we were struck by the extent to which our colleagues within healthcare IT were unaware of these vulnerabilities,” Dr. Samuel Finlayson, lead author and an M.D.-Ph.D. candidate at Harvard-MIT, told HCB News. “Our goal in writing this paper was to try to bridge the gap between the medical and computer science communities, and to initiate a more complete discussion around both the benefits and risks of using AI in the clinic.”

In the study, the team was able to manipulate the AI programs into indicating positive findings for pneumothorax in chest X-rays, diabetic retinopathy in retinal images, and melanoma in skin images. In the chest X-ray examples, the manipulated images fooled the program into indicating pneumothorax 100 percent of the time.
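A success figure like the 100 percent one above is typically measured by comparing the model’s predictions on perturbed images against the true labels. The sketch below shows that bookkeeping under the same assumptions as the FGSM sketch; all names are illustrative, not from the study.

```python
# Hypothetical sketch of measuring an attack's success rate: run the
# classifier on perturbed images and count misclassifications.
import torch

def attack_success_rate(model, images, labels, attack_fn):
    """Fraction of perturbed images the model misclassifies."""
    model.eval()
    adv_images = attack_fn(model, images, labels)  # e.g. the FGSM sketch above
    with torch.no_grad():
        adv_preds = model(adv_images).argmax(dim=1)
    return (adv_preds != labels).float().mean().item()
```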

“Our results demonstrate that even state-of-the-art medical AI systems can be manipulated,” said Finlayson. “If the output of machine learning algorithms becomes a determinant of healthcare reimbursement or drug approval, then adversarial examples could be used as a tool to control exactly what the algorithms see.”

He also said that such misuse could cause patients to undergo unnecessary treatments, which would increase medical and insurance costs. Adversarial attacks could also be used to “tip the scales” in medical research to achieve desired outcomes.

Another member of the study team, Dr. Andrew Beam, Ph.D., an instructor in the Department of Biomedical Informatics at Harvard Medical School, believes their findings are a warning to the medical informatics sector. While the team stated they were excited about the “bright future” that AI offers for medicine, caution is advised.

"I think our results could be summarized as: 'there is no free lunch'. New forms of artificial intelligence do indeed hold tremendous promise, but as with all technology, it is a double-edged sword,” Beam told HCB News. “Organizations implementing this technology should be aware of the limitations and take active steps to combat potential threats.”
