Study team tricks AI programs into misclassifying diagnostic images

by John W. Mitchell, Senior Correspondent
With machine learning algorithms recently approved by the FDA to diagnose images without physician input, providers, payers, and regulators may need to be on guard for a new kind of fraud.

That’s the conclusion of a Harvard Medical School/MIT study team made up of biomedical informatics researchers, physicians, and Ph.D. candidates, in a paper just published in IEEE Spectrum. The team successfully launched “adversarial attacks” against three common automated AI medical imaging tasks, fooling the programs into misdiagnosis up to 100 percent of the time. Their findings have implications for fraud, unnecessary treatments, higher insurance premiums, and the possible manipulation of clinical trials.

The team defined adversarial attacks on AI imaging algorithms as “…inputs to machine learning models that an attacker has intentionally designed to cause the model to make a mistake.”
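To make the definition concrete, below is a minimal sketch of one well-known attack of this kind, the fast gradient sign method (FGSM), written in Python with PyTorch. The model, image, and true_label names are hypothetical stand-ins for illustration; this is not the study team's own code.

    import torch

    # Hypothetical illustration of FGSM: nudge each pixel slightly in the
    # direction that most increases the classifier's loss, so the image
    # looks unchanged to a human but can flip the model's prediction.
    def fgsm_attack(model, image, true_label, epsilon=0.01):
        image = image.clone().detach().requires_grad_(True)
        loss = torch.nn.functional.cross_entropy(model(image), true_label)
        loss.backward()
        # epsilon bounds the per-pixel change, keeping it imperceptible.
        adversarial = image + epsilon * image.grad.sign()
        return adversarial.clamp(0, 1).detach()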

“Adversarial examples have become a major area of research in the field of computer science, but we were struck by the extent to which our colleagues within healthcare IT were unaware of these vulnerabilities,” Dr. Samuel Finlayson, lead author and M.D.-Ph.D. candidate at Harvard-MIT, told HCB News. “Our goal in writing this paper was to try to bridge the gap between the medical and computer science communities, and to initiate a more complete discussion around both the benefits and risks of using AI in the clinic.”

In the study, the team was able to manipulate AI programs into indicating positive findings for pneumothorax in chest X-rays, diabetic retinopathy in retinal images, and melanoma in skin images. In the chest X-ray examples, the manipulation produced a false pneumothorax finding 100 percent of the time.
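Forcing a specific positive finding, as the team did with pneumothorax, corresponds to what computer scientists call a targeted attack: the attacker chooses the label the model should output. A sketch of one standard technique for this, iterative projected gradient descent (PGD), appears below; as above, the model and labels are illustrative assumptions, not the study's code.

    import torch

    # Hypothetical targeted PGD: repeatedly step the image toward the
    # attacker-chosen diagnosis while keeping every pixel within epsilon
    # of the original, so the altered scan still looks normal.
    def targeted_pgd(model, image, target_label, epsilon=0.02, alpha=0.005, steps=20):
        original = image.clone().detach()
        adv = original.clone()
        for _ in range(steps):
            adv.requires_grad_(True)
            loss = torch.nn.functional.cross_entropy(model(adv), target_label)
            grad = torch.autograd.grad(loss, adv)[0]
            # Descend the loss for the *target* label (opposite of FGSM).
            adv = adv.detach() - alpha * grad.sign()
            # Project back into the epsilon ball around the original image.
            adv = original + (adv - original).clamp(-epsilon, epsilon)
            adv = adv.clamp(0, 1)
        return adv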

“Our results demonstrate that even state-of-the-art medical AI systems can be manipulated,” said Finlayson. “If the output of machine learning algorithms becomes a determinant of healthcare reimbursement or drug approval, then adversarial examples could be used as a tool to control exactly what the algorithms see.”

He also said that such misuse could cause patients to undergo unnecessary treatments, which would increase medical and insurance costs. Adversarial attacks could also be used to “tip the scales” in medical research to achieve desired outcomes.

Another member of the study team, Dr. Andrew Beam, Ph.D., an instructor in the Department of Biomedical Informatics at Harvard Medical School, believes their findings are a warning to the medical informatics sector. While the team stated they are excited about the “bright future” AI offers for medicine, they advise caution.

"I think our results could be summarized as: 'there is no free lunch'. New forms of artificial intelligence do indeed hold tremendous promise, but as with all technology, it is a double-edged sword,” Beam told HCB News. “Organizations implementing this technology should be aware of the limitations and take active steps to combat potential threats.”
