Error
Computer science
Robotics
Key (lock)
Human-computer interaction
Computer security
Artificial intelligence
Internet privacy
Law
Political science
Authors
Paul Robinette, Ayanna M. Howard, Alan R. Wagner
Identifier
DOI:10.1007/978-3-319-25554-5_57
Abstract
Even the best robots will eventually make a mistake while performing their tasks. In our past experiments, we have found that even one mistake can cause a large loss in trust by human users. In this paper, we evaluate the effects of a robot apologizing for its mistake, promising to do better in the future, and providing additional reasons to trust it in a simulated office evacuation conducted in a virtual environment. In tests with 319 participants, we find that each of these techniques can be successful at repairing trust if they are used when the robot asks the human to trust it again, but are not successful when used immediately after the mistake. The implications of these results are discussed.