
Research suggests reach of algorithmic modeling in healthcare may exceed grasp

by Robin Lasky, Contributing Reporter | January 24, 2022
Over the last decade, and increasingly since the start of the pandemic, technology developers have touted new advances and applications for AI in healthcare. There have indeed been many promising developments, most of them involving narrow, well-defined medical tasks such as certain types of cancer screening or identifying signs of a stroke.

However, in the case of algorithmic modeling involving large, complex patient data sets, AI has failed to deliver on much of the promise and hype peddled by the media and the tech industry, often producing impressive-looking results that fail to predict real-world outcomes.

“We can’t take paradigms for developing AI tools that have worked in the consumer space and just port them over to the clinical space … The community fools [itself] into thinking we’re developing models that work much better than they actually do,” Visar Berisha, an associate professor at Arizona State University, told Wired Magazine.

This is, in part, because machine learning algorithms are still too primitive to account for the enormous number of possible confounding variables in large healthcare data sets. As Wired reported, these systems work by finding patterns, and when confounding variables are present they tend to use that skill to cheat, latching onto shortcuts that produce the “correct” answer on the training data without reflecting anything clinically real.
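
To make that failure mode concrete, the following is a minimal sketch, using entirely made-up data and not modeled on any specific clinical system, of how a classifier can exploit a confounder: when the hospital a record comes from happens to correlate with the diagnosis, the model leans on that shortcut, looks accurate on in-house test data, and falls apart at a new site.

```python
# Minimal illustration of shortcut learning on a confounded data set.
# All variables and numbers here are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_cohort(n, confound_strength):
    """Simulate patients: one weak genuine biomarker plus a hospital-site flag.

    confound_strength controls how tightly the site tracks the diagnosis,
    e.g. because a referral hospital sees sicker patients.
    """
    y = rng.integers(0, 2, n)                      # diagnosis (0/1)
    biomarker = y + rng.normal(0, 2.0, n)          # weak real signal
    site = np.where(rng.random(n) < confound_strength, y, rng.integers(0, 2, n))
    return np.column_stack([biomarker, site]), y

# Training and internal test data share the confound; the external site does not.
X_train, y_train = make_cohort(5000, confound_strength=0.9)
X_internal, y_internal = make_cohort(2000, confound_strength=0.9)
X_external, y_external = make_cohort(2000, confound_strength=0.0)

model = LogisticRegression().fit(X_train, y_train)
print("internal accuracy:", model.score(X_internal, y_internal))  # looks strong
print("external accuracy:", model.score(X_external, y_external))  # drops sharply
```

The internal accuracy number looks excellent even though the model has learned almost nothing about the biomarker itself; only evaluation on data that breaks the confound exposes the problem, which is why results never tested outside the original institutions can be misleading.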

Additionally, these algorithms are often expected to produce broadly generalizable results from only a handful of regional patient data sets. For instance, a 2019 study found that an algorithm sold by health services company Optum to help decide which patients should get access to extra care mistakenly ranked Black patients as a lower priority than white patients, even though the former are known, on average, to suffer from significantly more chronic health conditions in the U.S.
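
Published analyses of that algorithm attributed the disparity to the score being trained to predict healthcare spending as a proxy for health need; because less money is spent on Black patients at the same level of illness, the score treated them as healthier. The toy sketch below, with entirely hypothetical groups and dollar figures, shows how any proxy that differs systematically between groups can reproduce this kind of ranking error.

```python
# Toy sketch of proxy-label bias; the groups and numbers are invented.
import numpy as np

rng = np.random.default_rng(1)
n = 20_000

group = rng.integers(0, 2, n)                 # two hypothetical patient groups, A (0) and B (1)
conditions = rng.poisson(3.0 + 1.0 * group)   # group B carries more chronic illness on average
# Assumed access gap: group B generates less spending per condition treated.
spending = conditions * np.where(group == 0, 1200, 800) + rng.normal(0, 400, n)

# "Risk score" = predicted spending; flag the top 10% of scores for extra care.
cutoff = np.quantile(spending, 0.90)
flagged = spending >= cutoff

for g, name in [(0, "group A"), (1, "group B")]:
    mask = group == g
    print(name,
          "| avg chronic conditions:", round(conditions[mask].mean(), 2),
          "| share flagged for extra care:", round(flagged[mask].mean(), 3))
# Despite being sicker on average, group B is flagged less often.
```

In this sketch the fix is to change what the score predicts, not the model itself.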

“It’s truly inconceivable to me that anyone else’s algorithm doesn’t suffer from this,” Sendhil Mullainathan, a professor of computation and behavioral science at the University of Chicago Booth School of Business who oversaw the 2019 study, told the Washington Post. “I’m hopeful that this causes the entire industry to say, ‘Oh, my, we’ve got to fix this’.”
