Reliability (semiconductor)
Perception
Set (abstract data type)
Computer science
Factor (programming language)
Automation
Human–computer interaction
Applied psychology
Trustworthiness
Artificial intelligence
Psychology
Computer security
Quantum mechanics
Mechanical engineering
Physics
Engineering
Power (physics)
Neuroscience
Programming language
Authors
Theodore Jensen, Mohammad Maifi Hasan Khan, Yusuf Albayram
Identifier
DOI:10.1007/978-3-030-50334-5_3
Abstract
Trust has been identified as a critical factor in the success and safety of interaction with automated systems. Researchers have referred to "trust calibration" as an apt design goal: user trust should be at an appropriate level given a system's reliability. One factor in user trust is the degree to which a system is perceived as humanlike, or anthropomorphic. However, relevant prior work does not explicitly characterize trust appropriateness, and generally considers visual rather than behavioral anthropomorphism. To investigate the role of humanlike system behavior in trust calibration, we conducted a 2 (communication style: machinelike, humanlike) × 2 (reliability: low, high) between-subjects study online, in which participants collaborated with an Automated Target Detection (ATD) system to classify a set of images in 5 rounds of gameplay. Participants chose how many images to allocate to the automation before each round, where appropriate trust was defined by the number of images that optimized performance. We found that communication style and reliability influenced perceptions of anthropomorphism and trustworthiness. Low- and high-reliability participants demonstrated overtrust and undertrust, respectively. The implications of our findings for the design and research of automated and autonomous systems are discussed in the paper.
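The abstract's notion of appropriate trust (an allocation that optimizes performance) can be illustrated with a toy model. The sketch below is not the paper's actual payoff function; the function names, the human capacity limit, and all parameter values are illustrative assumptions. It assumes the automation classifies each allocated image with accuracy `p_auto`, while the human classifies at most `capacity` images with accuracy `p_human` and must guess (0.5) on any overflow:

```python
def expected_score(k, n_total, p_auto, p_human, capacity):
    """Expected number of correct classifications when k images go to
    the automation and the remaining n_total - k go to the human.

    Toy model (not from the paper): the human handles at most
    `capacity` images at accuracy p_human; overflow is a coin flip.
    """
    human_images = n_total - k
    handled = min(human_images, capacity)   # classified at p_human
    guessed = human_images - handled        # overflow, 50% chance
    return k * p_auto + handled * p_human + guessed * 0.5


def optimal_allocation(n_total, p_auto, p_human, capacity):
    """Allocation k that maximizes expected score; in the abstract's
    terms, this k would define 'appropriate trust' in the toy model."""
    return max(range(n_total + 1),
               key=lambda k: expected_score(k, n_total, p_auto, p_human, capacity))


# Illustrative parameters: 20 images per round, human accuracy 0.9
# with capacity for 10 images.
low_rel = optimal_allocation(20, p_auto=0.6, p_human=0.9, capacity=10)
high_rel = optimal_allocation(20, p_auto=0.95, p_human=0.9, capacity=10)
print(low_rel, high_rel)   # 10 20
```

Under these assumed numbers, allocating more than 10 images to the low-reliability automation would count as overtrust, and allocating fewer than all 20 to the high-reliability automation as undertrust, mirroring the pattern the abstract reports.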