Is AI an appropriate source of moral guidance about which patients should be given kidney transplants?
That’s the question pondered by boffins affiliated with Duke University, Carnegie Mellon, University of Oxford, and Yale.
In a preprint paper this month titled “Can AI Model the Complexities of Human Moral Decision-Making? A Qualitative Study of Kidney Allocation Decisions,” authors Vijay Keswani, Vincent Conitzer, Walter Sinnott-Armstrong, Breanna K. Nguyen, Hoda Heidari, and Jana Schaich Borg spend more than 18,000 words exploring a quandary you might assume could be answered with a simple “No.”
The paper is nonetheless a worthwhile wander through processes of moral decision making because it underscores the complexity of translating beliefs into action and of reproducing that process in software.
According to the US National Institutes of Health, more than 800,000 people in the US are living with end-stage renal disease, meaning their survival depends on either regular dialysis or a kidney transplant.
The National Kidney Foundation estimates that 12 people die each day in the US for lack of a kidney transplant, and also notes that one in five kidneys from deceased donors is discarded.
So there’s reason to believe kidney allocation could be handled better.
The authors of the study acknowledge from the outset that prior work in psychology shows human moral decision making is complicated.
“So it should come as no surprise that AI cannot capture all the nuances involved,” they explain. “…Yet, despite idiosyncrasies (e.g., noisy responses to the same queries), there is still the question of whether an AI can at all capture the normative essence of human moral decision-making, i.e., how people process morally relevant factors, develop informed preferences over moral attributes and values, and deliberately combine the available information to make their final judgment, at least in simple decision-making tasks.
“In other words, is AI capable of modeling the critical components of human moral decision-making?”
If you’ve already answered “No,” you may skip to the end. But if you’re inclined to look a bit further, you’ll want to consider how people make moral decisions, to see whether AI might emulate that process to some level of satisfaction.
“The possible utility of AI in moral domains is mainly related to scalability and the potential to address human cognitive biases (eg, address decision-making ‘errors’ resulting from fatigue),” Vijay Keswani, a post-doctoral associate at Duke, told The Register.
“At the same time, these advantages can only be realized if the AI is able to robustly model the way that a person would ideally make moral decisions which, as we show, is something that current AI models fail to do.”
The researchers conducted what they describe as semi-structured interviews with 20 participants, each paid at least $20 for their time.
The respondents were lay people, not medicos, and were asked general questions about the best way to decide which patients should get a kidney. Other questions asked them to choose which of two hypothetical patients should receive a transplant after weighing (a rough, hypothetical sketch of one such choice in code follows the list):
- Years of life expected to be gained from the transplant;
- Number of dependents;
- Obesity level;
- Weekly work hours after transplant;
- Years on the transplant waiting list;
- Number of past serious crimes committed.
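To make the setup concrete, here's a rough sketch (ours, not the researchers') of how one such pairwise choice might be represented and scored with a simple weighted-sum rule. The attribute encodings, values, and weights are all invented for illustration and are not taken from the paper.

```python
# Illustrative only: a toy version of one pairwise kidney-allocation choice.
# All values and weights below are invented, not drawn from the study.
from dataclasses import dataclass

@dataclass
class Patient:
    life_years_gained: float   # years of life expected to be gained from the transplant
    dependents: int            # number of dependents
    obesity_level: int         # e.g. 0 = none, 1 = moderate, 2 = severe (invented coding)
    weekly_work_hours: float   # expected weekly work hours after transplant
    years_on_waitlist: float   # years on the transplant waiting list
    past_serious_crimes: int   # number of past serious crimes committed

# One possible "linear" strategy: a weighted sum over the attributes.
# Positive weights favor a patient; negative weights count against them.
WEIGHTS = {
    "life_years_gained": 1.0,
    "dependents": 0.5,
    "obesity_level": -0.3,
    "weekly_work_hours": 0.1,
    "years_on_waitlist": 0.8,
    "past_serious_crimes": -1.5,
}

def score(p: Patient) -> float:
    return sum(w * float(getattr(p, attr)) for attr, w in WEIGHTS.items())

patient_a = Patient(12.0, 2, 1, 40.0, 3.0, 0)
patient_b = Patient(18.0, 0, 0, 20.0, 1.0, 1)

chosen = "A" if score(patient_a) >= score(patient_b) else "B"
print(f"A scores {score(patient_a):.1f}, B scores {score(patient_b):.1f} -> kidney goes to {chosen}")
```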
Participants were then asked to rate how well the chosen strategies aligned with their own decision-making process, and to give an opinion on the potential benefits of, and concerns about, involving AI in the kidney allocation process.
Survey respondents, unsurprisingly, weighted some criteria more heavily than others. Some favored younger patients, while others expressed concern about discriminating against the elderly. Some considered lifestyle choices (e.g. smoking or drinking), while others felt those shouldn’t matter.
Views expressed by participants sometimes changed as they pondered their decisions.
That’s unsurprising because people’s moral frameworks can be fluid. Modeling that in AI seems bound to fail – someone will always find the AI wanting.
But that process of moral drift, which the authors describe as “a dynamic learning process”, is one of the elements that needs to be incorporated into AI models if they’re to be asked to make moral judgements.
The authors therefore considered the mathematics used to build such a model, since different approaches suit different decision-making strategies. Linear and decision-rule models have the advantage of being interpretable, they observe, but don’t necessarily align with human decision-making processes. Other approaches – neural networks or random forests – yield models that aren’t interpretable.
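To illustrate the trade-off the authors describe, here's a hedged sketch (again ours, not the paper's) that fits both an interpretable linear model and a random forest to synthetic pairwise choices encoded as attribute differences. The feature names, "true" weights, and data are all made up for the sake of the example.

```python
# Hedged sketch, not the paper's code: two model types fitted to synthetic
# pairwise choices, to show the interpretability gap the authors describe.
# Each row is the attribute difference (patient A minus patient B); the label
# is 1 if the imaginary respondent chose A, else 0.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
FEATURES = ["life_years_gained", "dependents", "obesity_level",
            "weekly_work_hours", "years_on_waitlist", "past_serious_crimes"]

# Invented "true" preferences, used only to generate toy labels.
true_w = np.array([1.0, 0.5, -0.3, 0.1, 0.8, -1.5])
X = rng.normal(size=(200, len(FEATURES)))                       # attribute differences
y = (X @ true_w + rng.normal(scale=0.5, size=200) > 0).astype(int)

# Linear model: the learned coefficients can be read as per-attribute weights.
linear = LogisticRegression().fit(X, y)
for name, coef in zip(FEATURES, linear.coef_[0]):
    print(f"{name:>22}: {coef:+.2f}")

# Random forest: often fits the choices at least as well, but offers no
# comparable set of weights to inspect -- effectively a black box.
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("forest accuracy on training data:", forest.score(X, y))
```

The readable per-attribute weights of the linear model are exactly what the forest gives up, which is the interpretability gap the authors are pointing at.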
The paper therefore concludes that current AI modeling techniques are not a good fit for capturing human moral reasoning.
As for how the respondents felt about AI, there was recognition that it could help counter human bias and could be helpful for clinical support. But the bottom line was that people didn’t want AI deciding who gets a kidney.
“Many realized the cognitive flaws of human moral decision-making and were optimistic about AI mitigating these flaws,” the authors conclude. “Yet, they still expressed belief in the qualifications of human experts and preferred that AI defer the final decision to the experts.”
Asked whether some of the notional benefits of AI involvement, such as countering human bias, couldn’t be handled by specific procedural systems, such as blind resource allocation, Keswani acknowledged that could have some value but emphasized the potential utility of figuring out how to instruct AI in morality.
“Some decision biases (eg, biases against certain groups) could be handled through better procedures like group-blind decision-making,” he said.
“But I don’t think all decision-making errors can be addressed through such explicit top-down procedures, especially considering the heterogeneity in decision processes across people. Which is why I believe there’s been an expanding interest in the alternative bottom-up approach of learning ‘idealized’ preferences of a person/community and then aligning an AI to these preferences.
“Of course, once again, achieving alignment with moral preferences requires having a good computational foundation of moral decision-making and we (as a field) don’t really have that right now.” ®