Algorithms may now be able to prioritize emergency-room care and effectively decrease health care inequalities. (AP Photo/John Minchillo)
Researchers have observed that people don't like leaving medical decisions to artificial intelligence, but hearing about racial and economic gaps in health care may warm patients up to the idea of clinicians using algorithms to guide decision-making, a new set of psychological studies suggests.
The new paper, published May 6 in Computers in Human Behavior, was based on four studies in the U.S. and Singapore that involved about 2,800 participants and were conducted against the backdrop of the COVID-19 pandemic. The researchers asked participants whether they preferred to go to a hospital where triage decisions — such as which COVID-19 patients get ventilators — were made by AI or by humans. People's preference for AI-based triage increased modestly after they were warned about the threat of unequal care, said Yochanan Bigman, a postdoctoral research fellow at Yale University and the study's lead author.
"There is a great promise of AI being able to treat people without taking into account certain features," Bigman told The Academic Times. "As a human, some stereotypes might be activated automatically, without people even being aware of them. But there's also a danger, especially since people often make the assumption that AIs are perfect, not biased, and don't make mistakes. And that can make it harder for people to challenge [them]."
In a previous study, Bigman and his colleague Kurt Gray found that people were averse to letting AI influence morally charged decisions, including choices involving health care. "In that paper, that's part of what we argue — people see AI as devoid of emotion and unable to experience bodily stuff like pain and hunger, and that makes people unhappy with them making moral decisions," he said.
That held true in the more recent findings. At baseline, just over 18% of respondents said they'd go to a hospital where triage decisions are made by AI; that proportion rose to roughly 26% when participants were told about health care disparities. Warnings of racial inequality in health care increased the preference for algorithmic decision-making among white participants as well as Black, Latino and Indigenous ones, but Black, Latino and Indigenous participants (32.8%) were more likely than white participants (20.4%) to prefer algorithm-based decisions under the "threat of inequality" condition, likely because the information was more personally relevant, the researchers theorized.
"The best we could do was bring people to a point of indifference between AIs and humans," Bigman said. "We never actually got people to prefer AIs; we got them to say, 'OK, maybe I don't care as much.' … The effect seemed to be stronger for those who suffer from the disparities, which is not super surprising. But it is surprising for me that, even in the group not suffering the disparities, we're also seeing [some effect]."
The results illuminate a potential approach to increasing acceptance of AI-based decisions in health care and could be particularly relevant from a marketing perspective for hospitals and public health groups seeking to sell their communities on AI, Bigman said: "Broadly speaking, it's true that following even simple algorithms can boost our decision-making. Even today in hospitals, doctors type in some of your data, and the algorithm can tell you're at higher risk for heart disease or certain types of cancer.
"But I'm not proposing that algorithmic decisions are better or that they're perfect," he continued. "There are tons of problems. On the one hand, there are strong arguments being made that it's much easier to fix algorithms than it is to fix humans. On the other hand, algorithms are also scalable: If you have a biased person, there's a limit to how much damage that person could do. If you have a biased algorithm, the damage can be [much broader]."
For example, researchers at UC Berkeley's School of Public Health found in a 2019 study that a widely used health care algorithm for referring patients to additional care showed significant racial bias, favoring white patients over equally sick Black patients because it used past health costs as a proxy for medical need. In Bigman's field, such evidence has tempered researchers' initial excitement about the potential for AI to help people make more objective choices.
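One way audits like that 2019 study surface such bias is by comparing an algorithm's referral rates across racial groups among comparably sick patients. The sketch below runs that comparison on made-up records; the group labels, illness scores and referral flags are all invented for illustration.

```python
from statistics import mean

# Made-up audit records: (group, illness_score, referred). Illustration only.
records = [
    ("white", 7.2, True), ("white", 6.8, True), ("white", 5.1, False),
    ("Black", 7.5, False), ("Black", 6.9, True), ("Black", 5.0, False),
]

def referral_rate(group: str, min_illness: float = 6.0) -> float:
    """Share of comparably sick patients in `group` the algorithm referred."""
    flags = [referred for g, score, referred in records
             if g == group and score >= min_illness]
    return mean(flags) if flags else float("nan")

for group in ("white", "Black"):
    print(f"{group}: {referral_rate(group):.0%} referred among similarly ill patients")
# A persistent gap at equal illness levels is the kind of signal the audit
# flagged: cost used as a proxy for need understated Black patients' illness.
```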
"There was this huge early promise that we could use algorithms to make all these decisions, and they would save us from all biases," he said. "There are a ton of documented cases where algorithms, for various reasons, are not carrying out that promise as well as we thought they would."
There's also evidence that algorithms can be powerful tools in health care settings, including a study last year that found a deep learning model effective for the early triage of critically ill COVID-19 patients. On the darker side, the adoption of AI in health care could mean even less human contact for patients — especially elderly ones — who are already lonely. "I think that with robots making medical decisions, this problem would become worse," Bigman said. "Care for the elderly is another category that will probably become more and more automated."
There isn't a simple answer to whether algorithms make less biased decisions than humans, Bigman said. He views the ideal as a system of checks and balances in which humans and machines work in concert, with algorithms helping medical professionals make informed decisions while leaving interactions with patients to physicians.
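A minimal sketch of that division of labor, assuming a simple recommend-then-review flow (the function and field names are hypothetical): the algorithm proposes an action, but nothing takes effect until a clinician confirms or overrides it.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Recommendation:
    patient_id: str
    action: str        # e.g., "refer to cardiology"
    confidence: float  # model's self-reported confidence, 0..1

def final_decision(rec: Recommendation, clinician_approves: bool,
                   override_action: Optional[str] = None) -> str:
    """The algorithm only recommends; the clinician decides."""
    if clinician_approves:
        return rec.action
    # Overrides are recorded alongside the model's suggestion for later review.
    return override_action or "defer to clinician follow-up"

rec = Recommendation("pt-001", "refer to cardiology", 0.83)
print(final_decision(rec, clinician_approves=False,
                     override_action="order stress test"))
```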
"The bottom line is that what will drive acceptance of AI is efficiency," he said. "The potential benefits of AI are so strong that even if people have an aversion, eventually the benefits will overcome it. Hopefully, it will be faster, cheaper and more accurate."
The study, "Threat of racial and economic inequality increases preference for algorithm decision-making," published May 6 in Computers in Human Behavior, was authored by Yochanan E. Bigman, University of North Carolina at Chapel Hill and Yale University; Kai Chi Yam, National University of Singapore; Déborah Marciano, University of California, Berkeley; Scott J. Reynolds, University of Washington; and Kurt Gray, University of North Carolina at Chapel Hill.