Honesty
Reactance
Psychology
Social psychology
Cheating
Moral reasoning
Framing
Social cognitive theory of morality
Moral development
Authors
Boyoung Kim, Ruchen Wen, Ewart de Visser, Chad C. Tossell, Qin Zhu, Tom Williams, Elizabeth Phillips
Identifier
DOI: 10.1016/j.ijhcs.2024.103217
Abstract
A growing body of human–robot interaction literature is exploring whether and how social robots, by utilizing their physical presence or capacity for verbal and nonverbal behavior, can influence people’s moral behavior. In the current research, we aimed to examine to what extent a social robot can effectively encourage people to act honestly by offering them moral advice. The robot either offered no advice at all or proactively offered moral advice before participants made a choice between acting honestly and cheating, and the underlying ethical framework of the advice was grounded in either deontology (rule-focused), virtue ethics (identity-focused), or Confucian role ethics (role-focused). Across three studies (N=1,693), we did not find a robot’s moral advice to be effective in deterring cheating. These null results held even when we introduced the robot as being equipped with moral capacity, so as to foster common expectations about the robot among participants before they received its advice. The current work led us to an unexpected discovery of a psychological reactance effect associated with participants’ perception of the robot’s moral capacity: stronger perceptions of the robot’s moral capacity were linked to greater probabilities of cheating. These findings demonstrate how psychological reactance may impact human–robot interaction in moral domains and suggest potential strategies for framing a robot’s moral messages to avoid such reactance.