by John R. Fischer, Senior Reporter | October 29, 2019
A new study indicates that algorithms designed to provide objective assessments of patients may in fact be biased in the factors they evaluate.
U.S. researchers have found that AI software programs used in healthcare may unintentionally be racially prejudiced in their findings and potentially affect access to care for millions of Americans. Their claims are based on findings that a type of software used to recommend patients for high-risk health care management programs is more inclined to recommend healthier white patients than sicker black patients.
“The algorithms encode racial bias by using health care costs to determine patient ‘risk,’ or who was most likely to benefit from care management programs,” said Ziad Obermeyer, acting associate professor of health policy and management at the University of California, Berkeley and lead author of the paper, in a statement. “Because of the structural inequalities in our healthcare system, blacks at a given level of health end up generating lower costs than whites. As a result, black patients were much sicker at a given level of the algorithm’s predicted risk.”
Algorithmic bias is common, but often hard to address or even analyze, because the algorithms are proprietary and their designers are private companies. Working with an academic hospital that relies on a risk-based solution to determine preferential access to a high-risk care management program, the researchers evaluated the predicted risk scores of the solution’s algorithms for 43,539 white patients and 6,079 black patients.
Comparing the scores to direct measures of a patient’s health, such as the number of chronic illnesses, they found that black patients had significantly poorer health than white patients at a given risk score. Patients with risk scores at or above the 97th percentile were automatically deemed eligible by the algorithms for enrollment in the care management program, with only 18 percent of these automatic enrollees identifying as black. Tweaking the software to use other variables, such as avoidable costs or the number of chronic conditions requiring treatment in a year, pushed the portion of automatic black enrollees to 47 percent.
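The mechanism the researchers describe, a cost-based label standing in for health, producing a skewed cutoff, can be illustrated with a small simulation. This is a hypothetical sketch with made-up numbers, not the study's data or model: it assumes black patients generate lower recorded costs at the same illness level (the structural gap the study documents) and compares who clears a 97th-percentile cutoff when patients are ranked by cost versus by a direct health measure.

```python
import random

random.seed(0)

# Synthetic patients: (race, chronic_conditions, recorded_cost).
# The 0.6 cost multiplier for black patients is an illustrative
# assumption standing in for unequal access to care.
patients = []
for _ in range(10_000):
    race = "black" if random.random() < 0.12 else "white"
    conditions = random.randint(0, 10)            # direct measure of health
    access_gap = 0.6 if race == "black" else 1.0  # hypothetical cost gap
    cost = conditions * 1_000 * access_gap + random.uniform(0, 500)
    patients.append((race, conditions, cost))

def share_black_auto_enrolled(score_fn, percentile=0.97):
    """Share of patients at/above the cutoff (by score) who are black."""
    ranked = sorted(patients, key=score_fn)
    top = ranked[int(len(ranked) * percentile):]
    return sum(1 for p in top if p[0] == "black") / len(top)

by_cost = share_black_auto_enrolled(lambda p: p[2])    # cost as risk proxy
by_health = share_black_auto_enrolled(lambda p: p[1])  # conditions as label

print(f"share black among auto-enrollees, cost-based label:   {by_cost:.0%}")
print(f"share black among auto-enrollees, health-based label: {by_health:.0%}")
```

Ranking by cost pushes black patients below the cutoff even when they are equally sick, while ranking by a direct health measure restores their share, which mirrors the direction of the study's 18-to-47 percent shift, though the magnitudes here are artifacts of the invented parameters.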
“Instead of being trained to find the sickest, in a physiological sense, [these algorithms] ended up being trained to find the sickest in the sense of those whom we spend the most money on,” said Sendhil Mullainathan, the Roman Family University Professor of Computation and Behavioral Science at Chicago Booth and senior author of the study. “There are systemic racial differences in health care in who we spend money on.”
The team has since reached out to software manufacturers, who were motivated by the findings to address the issue.
“Algorithms can do terrible things, or algorithms can do wonderful things. Which one of those things they do is basically up to us,” said Obermeyer. “We make so many choices when we train an algorithm that feel technical and small. But these choices make the difference between an algorithm that’s good or bad, biased or unbiased. So it’s often very understandable when we end up with algorithms that don’t do what we want them to do, because those choices are hard.”
The study was a collaboration between UC Berkeley, the University of Chicago Booth School of Business and Partners HealthCare in Boston.
Funding was provided, in part, by a grant from the National Institute for Health Care Management Foundation.
The findings were published in the journal Science.