Wrongdoing
Attribution
Morality
Blame
Psychology
Scapegoating
Outcome (game theory)
Social psychology
Affect (linguistics)
Political science
Law
Economics
Communication
Politics
Mathematical economics
Authors
Daniel B. Shank, Alyssa DeSanti, Timothy Maninger
Identifiers
DOI: 10.1080/1369118x.2019.1568515
Abstract
Artificial intelligence (AI) agents make decisions that affect individuals and society and can produce outcomes that would traditionally be considered moral violations if performed by humans. Do people attribute the same moral permissibility and fault to AIs and humans when each produces the same moral violation outcome? Additionally, how do people attribute morality when an AI and a human jointly make the decision that produces the violation? We investigate these questions with an experiment that manipulates written descriptions of four real-world scenarios in which a violation outcome was originally produced by an AI. Our decision-making structures include individual decision-making (either AIs or humans) and joint decision-making (either humans monitoring AIs or AIs recommending to humans). We find that the decision-making structure has little effect on morally faulting AIs, but that humans who monitor AIs are faulted less than solo humans and humans receiving recommendations. Furthermore, in both joint decision-making structures, people attribute more permissibility and less fault to AIs than to humans for the violation. The pattern of blame for joint AI-human wrongdoing suggests the potential for strategic scapegoating of AIs for human moral failings and the need for future research on AI-human teams.