Transparency (behavior)
Robot
Gaze
Psychology
Social robot
Behavior-based robotics
Cognitive psychology
Competence (human resources)
Social cues
Affect (linguistics)
Social psychology
Computer science
Human–computer interaction
Artificial intelligence
Communication
Robotics
Robot control
Mobile robot
Computer security
Authors
Minha Lee, Peter A. M. Ruijten, Lily Frank, Wijnand A. IJsselsteijn
Identifier
DOI: 10.1109/RO-MAN57019.2023.10309653
Abstract
Future robots are expected to be autonomous actors, even capable of moral reasoning. Yet how they can provide transparent explanations while being socially intelligent during morally relevant interactions deserves a close examination. Our mixed-methods lab study on a human-robot moral debate on the footbridge dilemma showed that quantitatively, a robot’s perceived competence was significantly higher with transparency cues (additional information presented on a screen). The robot’s perceived warmth and mind were not influenced by transparency cues, but they did significantly change over time (pre- vs. post-debate). The change in the robot’s perceived mind and social attributes after the debate correlated with people’s trust in the robot; transparency cues did not correlate with trust. Qualitatively, the robot was described to be logical, unemotional, and intentional in making moral decisions; participants focused on its gaze and speech. While transparency may help in theory, if people do not observe relevant cues while attributing intentionality to the robot and its gaze, transparency cues may not be useful during critical decision-making though the robot can seem competent. We discuss the implications and call for broadening the notion of transparency to investigate how robots can be transparent communicators by appealing to both cognition and affect in morally sensitive interactions.