Blame
Moral disengagement
Social cognitive theory of morality
Normativity
Agency (philosophy)
Psychology
Attribution
Moral agency
Social psychology
Robot
Norm (philosophy)
Cognition
Human–computer interaction
Moral development
Epistemology
Computer science
Artificial intelligence
Neuroscience
Philosophy
Authors
John Voiklis,Boyoung Kim,Corey Cusimano,Bertram F. Malle
Identifier
DOI: 10.1109/roman.2016.7745207
Abstract
Robots will eventually perform norm-regulated roles in society (e.g. caregiving), but how will people apply moral norms and judgments to robots? By answering such questions, researchers can inform engineering decisions while also probing the scope of moral cognition. In previous work, we compared people's moral judgments about human and robot agents' behavior in moral dilemmas. We found that robots, compared with humans, were more commonly expected to sacrifice one person for the good of many, and they were blamed more than humans when they refrained from that decision. Thus, people seem to have somewhat different normative expectations of robots than of humans. In the current project we analyzed in detail the justifications people provide for three types of moral judgments (permissibility, wrongness, and blame) of robot and human agents. We found that people's moral judgments of both agents relied on the same conceptual and justificatory foundation: consequences and prohibitions undergirded wrongness judgments; attributions of mental agency undergirded blame judgments. For researchers, this means that people extend moral cognition to nonhuman agents. For designers, this means that robots with credible cognitive capacities will be considered moral agents but perhaps regulated by different moral norms.