Over the last decade, and increasingly since the start of the pandemic, technology developers have touted new advances and applications for AI in healthcare.
Indeed, there have been many promising developments in AI involving targeted, specific medical tasks, such as certain types of cancer screening or identifying signs of a stroke.
However, in the case of algorithmic modeling involving large, complex patient data sets, AI has failed to deliver on much of the promise and hype peddled by the media and the tech industry, often producing impressive-looking results that fail to predict real-world outcomes.
“We can’t take paradigms for developing AI tools that have worked in the consumer space and just port them over to the clinical space … The community fools [itself] into thinking we’re developing models that work much better than they actually do,” Visar Berisha, an associate professor at Arizona State University, told Wired Magazine.
This is, in part, because machine learning algorithms are still too primitive to account for the extremely large number of possible confounding variables in large healthcare data sets. As reported by Wired, these AI systems work by finding patterns, but when faced with confounded data, they tend to treat those patterns as shortcuts, latching onto spurious correlations that produce the “correct” answer on training data without reflecting any real clinical relationship.
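This shortcut-learning failure mode can be demonstrated on entirely synthetic data (this sketch is illustrative only, not drawn from any study mentioned in the article). Here a hypothetical “scanner type” feature happens to track the diagnosis label at the training hospital, so a standard classifier scores near-perfectly there, then collapses on data from a new site where that correlation disappears:

```python
# Illustrative sketch of "shortcut learning" on synthetic patient records.
# A spurious feature (which scanner a hospital used) is correlated with the
# diagnosis label in the training data but not at a new hospital.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, confounded):
    y = rng.integers(0, 2, n)                 # true diagnosis (0/1)
    signal = y + rng.normal(0, 2.0, n)        # weak genuine biomarker
    if confounded:
        scanner = y.astype(float)             # scanner type tracks the label
    else:
        scanner = rng.integers(0, 2, n).astype(float)  # new site: uncorrelated
    return np.column_stack([signal, scanner]), y

X_train, y_train = make_data(2000, confounded=True)
model = LogisticRegression().fit(X_train, y_train)

X_same, y_same = make_data(1000, confounded=True)   # same confounded setting
X_new, y_new = make_data(1000, confounded=False)    # new hospital

acc_same = model.score(X_same, y_same)  # looks impressive
acc_new = model.score(X_new, y_new)     # roughly chance-level
print(acc_same, acc_new)
```

The model never needed the weak biomarker during training, so its apparent accuracy says little about how it will behave anywhere the confound breaks.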
Additionally, these algorithms are often asked to produce broadly generalizable results from only a few regional patient data sets. For instance, a 2019 study discovered that an algorithm sold by the health services company Optum, intended to improve patient access to care, mistakenly ranked Black patients at a lower priority than white patients, even though Black patients in the U.S. suffer, on average, from significantly more chronic health conditions. The algorithm used past healthcare spending as a proxy for medical need, and because less money has historically been spent on Black patients, it systematically underestimated how sick they were.
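The proxy-label problem behind that finding can be sketched with toy numbers (a hypothetical simulation, not the Optum system or its data): if one group historically receives less spending per unit of true need, ranking patients by spending selects fewer of that group's sickest members for outreach, even when their underlying illness is identical:

```python
# Illustrative sketch: training/ranking on a proxy label (past spending)
# understates need for a group that historically received less care.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
need = rng.gamma(2.0, 2.0, n)               # true chronic-illness burden
group = rng.integers(0, 2, n)               # 0 = full access, 1 = less access
access = np.where(group == 1, 0.6, 1.0)     # less is spent per unit of need
spending = need * access + rng.normal(0, 0.5, n)

# Select the top 10% by the proxy (spending) for extra-care outreach.
selected = spending >= np.quantile(spending, 0.9)

# Among the truly sickest decile, compare selection rates by group.
top_need = need > np.quantile(need, 0.9)
rate = {g: selected[(group == g) & top_need].mean() for g in (0, 1)}
print(rate)
```

Even though `need` is distributed identically in both groups here, the group with reduced access is selected far less often at the same illness level, mirroring the disparity the 2019 study described.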
“It’s truly inconceivable to me that anyone else’s algorithm doesn’t suffer from this,” Sendhil Mullainathan, a professor of computation and behavioral science at the University of Chicago Booth School of Business who oversaw the 2019 study, told the Washington Post. “I’m hopeful that this causes the entire industry to say, ‘Oh, my, we’ve got to fix this.’”