Autonomy
Trustworthiness
Ethical decision-making
AI applications
Computer science
Psychology
Human intelligence
Knowledge management
Artificial intelligence
Engineering ethics
Social psychology
Political science
Engineering
Law
Authors
Suzanne Tolmeijer, Markus Christen, Serhiy Kandul, Markus Kneer, Abraham Bernstein
Identifier
DOI:10.1145/3491102.3517732
Abstract
While artificial intelligence (AI) is increasingly applied for decision-making processes, ethical decisions pose challenges for AI applications. Given that humans cannot always agree on the right thing to do, how would ethical decision-making by AI systems be perceived and how would responsibility be ascribed in human-AI collaboration? In this study, we investigate how the expert type (human vs. AI) and level of expert autonomy (adviser vs. decider) influence trust, perceived responsibility, and reliance. We find that participants consider humans to be more morally trustworthy but less capable than their AI equivalent. This shows in participants' reliance on AI: AI recommendations and decisions are accepted more often than the human expert's. However, AI team experts are perceived to be less responsible than humans, while programmers and sellers of AI systems are deemed partially responsible instead.