Keywords
Workload
Computer science
Knowledge management
Human intelligence
Data science
Artificial intelligence
Operating systems
Authors
Lindsay Sanneman, Julie Shah
Identifier
DOI: 10.1080/10447318.2022.2081282
Abstract
Recent advances in artificial intelligence (AI) have drawn attention to the need for AI systems to be understandable to human users. The explainable AI (XAI) literature aims to enhance human understanding and human-AI team performance by providing users with necessary information about AI system behavior. Simultaneously, the human factors literature has long addressed important considerations that contribute to human performance, including how to determine human informational needs, human workload, and human trust in autonomous systems. Drawing from the human factors literature, we propose the Situation Awareness Framework for Explainable AI (SAFE-AI), a three-level framework for the development and evaluation of explanations about AI system behavior. Our proposed levels of XAI are based on the informational needs of human users, which can be determined using the levels of situation awareness (SA) framework from the human factors literature. Based on our levels of XAI framework, we also suggest a method for assessing the effectiveness of XAI systems. We further detail human workload considerations for determining the content and frequency of explanations as well as metrics that can be used to assess human workload. Finally, we discuss the importance of appropriately calibrating user trust in AI systems through explanations along with other trust-related considerations for XAI, and we detail metrics that can be used to evaluate user trust in these systems.