Harm
Psychology
Utilitarianism
Value (mathematics)
Social psychology
Moral dilemma
Dilemma
Perception
Moral reasoning
Morality
Deontological ethics
Computer science
Epistemology
Machine learning
Philosophy
Neuroscience
Authors
Ryosuke Yokoi,Kazuya Nakayachi
Source
Journal: Human Factors
[SAGE Publishing]
Date: 2020-07-14
Volume/Issue: 63(8): 1465-1484
Cited by: 29
Identifiers
DOI:10.1177/0018720820933041
Abstract
Autonomous cars (ACs) controlled by artificial intelligence are expected to play a significant role in transportation in the near future. This study investigated determinants of trust in ACs.

Trust in ACs influences different variables, including the intention to adopt AC technology. Several studies on risk perception have verified that shared value determines trust in risk managers. Previous research has confirmed the effect of value similarity on trust in artificial intelligence. We focused on moral beliefs, specifically utilitarianism (belief in promoting a greater good) and deontology (belief in condemning deliberate harm), and tested the effects of shared moral beliefs on trust in ACs.

We conducted three experiments (N = 128, 71, and 196, respectively), adopting a thought experiment similar to the well-known trolley problem. We manipulated shared moral beliefs (shared vs. unshared) and driver (AC vs. human), providing participants with different moral dilemma scenarios. Trust in ACs was measured through a questionnaire.

The results of Experiment 1 showed that shared utilitarian belief strongly influenced trust in ACs. In Experiments 2 and 3, however, we did not find statistical evidence that shared deontological belief had an effect on trust in ACs.

The results of the three experiments suggest that the effect of shared moral beliefs on trust varies depending on the values that ACs share with humans.

To promote AC implementation, policymakers and developers need to understand which values are shared between ACs and humans to enhance trust in ACs.