Transparency (behavior)
Computer science
Process (computing)
Calibration
Automation
Human–computer interaction
Simulation
Computer security
Engineering
Mechanical engineering
Operating system
Statistics
Mathematics
Authors
Johannes Kraus, David Scholz, Dina Stiegemeier, Martin Baumann
Source
Journal: Human Factors
[SAGE Publishing]
Date: 2019-06-24
Volume/Issue: 62 (5): 718-736
Citations: 205
Identifiers
DOI: 10.1177/0018720819853686
Abstract
Objective: This paper presents a theoretical model and two simulator studies on the psychological processes during early trust calibration in automated vehicles.
Background: The positive outcomes of automation can only reach their full potential if a calibrated level of trust is achieved. In this process, information on system capabilities and limitations plays a crucial role.
Method: In two simulator experiments, trust was repeatedly measured during an automated drive. In Study 1, all participants in a two-group experiment experienced a system-initiated take-over, and the occurrence of a system malfunction was manipulated. In Study 2, in a 2 × 2 between-subject design, system transparency was manipulated as an additional factor.
Results: Trust was found to increase progressively during the first interactions. In Study 1, take-overs led to a temporary decrease in trust, as did malfunctions in both studies. Interestingly, trust was reestablished in the course of interaction after both take-overs and malfunctions. In Study 2, the high-transparency condition did not show a temporary decline in trust after a malfunction.
Conclusion: Trust is calibrated based on the information provided prior to and during the initial drive with an automated vehicle. The experience of take-overs and malfunctions leads to a temporary decline in trust that is recovered in the course of error-free interaction. This temporary decrease can be prevented by providing transparent information prior to system interaction.
Application: Transparency, including about potential limitations of the system, plays an important role in this process and should be considered in the design of tutorials and human-machine interaction (HMI) concepts for automated vehicles.