
Study team tricks AI programs into misclassifying diagnostic images

by John W. Mitchell, Senior Correspondent
With machine learning algorithms recently approved by the FDA to diagnose images without physician input, providers, payers, and regulators may need to be on guard for a new kind of fraud.

That’s the conclusion of a Harvard Medical School/MIT study team of biomedical informatics specialists, physicians, and Ph.D. candidates, in a paper just published in IEEE Spectrum. The team successfully launched “adversarial attacks” on three common automated AI medical imaging tasks, fooling the programs into misdiagnosis up to 100 percent of the time. Their findings have implications for fraud, unnecessary treatments, higher insurance premiums, and the possible manipulation of clinical trials.


The team defined adversarial attacks on AI imaging algorithms as: “…inputs to machine learning models that an attacker has intentionally designed to cause the model to make a mistake."
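The mechanics of such an attack can be illustrated in a few lines. The sketch below is purely illustrative — a toy logistic-regression “classifier” with random weights stands in for the deep networks the study actually attacked — and applies a fast-gradient-sign-style perturbation: every pixel is nudged by a small epsilon in whichever direction raises the positive-class score.

```python
import numpy as np

# Toy stand-in for a diagnostic image classifier: logistic regression
# over 64 flattened "pixels". Weights are random for illustration only;
# the study attacked trained deep networks, not a model like this.
rng = np.random.default_rng(0)
w = rng.normal(size=64)               # one weight per pixel
x = rng.uniform(0.0, 1.0, size=64)    # a benign "image"
b = -1.0 - x @ w                      # chosen so x scores as negative

def predict(img):
    """Probability of a positive finding (e.g. pneumothorax)."""
    return 1.0 / (1.0 + np.exp(-(img @ w + b)))

# FGSM-style attack: for this linear model, the gradient of the logit
# with respect to the input is just w, so nudging each pixel by eps in
# the direction sign(w) maximally raises the positive-class score.
eps = 0.1
x_adv = np.clip(x + eps * np.sign(w), 0.0, 1.0)

print(predict(x))      # below 0.5: the benign image reads as negative
print(predict(x_adv))  # well above 0.5: the perturbed image reads as positive
```

No pixel changes by more than 0.1 on a 0-to-1 scale, yet the model’s verdict flips. Against a real convolutional network the perturbation is computed from the trained model’s actual input gradient rather than from `w` directly, but the principle is the same: a change too small to alter a radiologist’s read can completely change the algorithm’s output.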

“Adversarial examples have become a major area of research in the field of computer science, but we were struck by the extent to which our colleagues within healthcare IT were unaware of these vulnerabilities,” Dr. Samuel Finlayson, lead author and M.D.-Ph.D. candidate at Harvard-MIT, told HCB News. “Our goal in writing this paper was to try to bridge the gap between the medical and computer science communities, and to initiate a more complete discussion around both the benefits and risks of using AI in the clinic.”

In the study, the team manipulated the AI programs to indicate positive findings for pneumothorax in chest X-rays, diabetic retinopathy in retinal images, and melanoma in skin images. In the chest X-ray example, the manipulated images fooled the model into indicating pneumothorax 100 percent of the time.

“Our results demonstrate that even state-of-the-art medical AI systems can be manipulated,” said Finlayson. “If the output of machine learning algorithms becomes a determinant of healthcare reimbursement or drug approval, then adversarial examples could be used as a tool to control exactly what the algorithms see.”

He also said that such misuse could cause patients to undergo unnecessary treatments, which would increase medical and insurance costs. Adversarial attacks could also be used to “tip the scales” in medical research to achieve desired outcomes.

Another member of the study team, Dr. Andrew Beam, Ph.D., an instructor in the Department of Biomedical Informatics at Harvard Medical School, believes the findings are a warning to the medical informatics sector. While the team said they are excited about the “bright future” AI offers medicine, they advise caution.

"I think our results could be summarized as: 'there is no free lunch'. New forms of artificial intelligence do indeed hold tremendous promise, but as with all technology, it is a double-edged sword,” Beam told HCB News. “Organizations implementing this technology should be aware of the limitations and take active steps to combat potential threats.”


Property of and Proprietary to DOTmed.com, Inc. Copyright ©2001-2018 DOTmed.com, Inc.
ALL RIGHTS RESERVED