Soft systems methodology
Strategic information system
Artificial intelligence
Computer science
Knowledge management
Information system
Cognitive science
Management information systems
Management science
Psychology
Engineering
Electrical engineering
Authors
Maximilian Förster, Hanna Rebecca Broder, Marie Christine Fahr, Mathias Klier, Lior Fink
Identifier
DOI: 10.1080/0960085x.2024.2404028
Abstract
Whereas learning is one of the primary goals of Explainable Artificial Intelligence (XAI), we know little about whether, how, and when explanations enhance users’ learning from feedback provided by Artificial Intelligence (AI). Drawing on Feedback Theory as a fundamental theoretical lens, we formulate a research model wherein explanations enhance informativeness and task performance, contingent on users’ prior knowledge, ultimately leading to a higher learning outcome. This research model is tested in a randomized between-subjects online experiment with 573 participants whose task is to match Google Street View pictures to their city of origin. We find a positive effect of explanations on learning outcome, which is fully mediated by informativeness, for users with less prior knowledge. Furthermore, we find that explanations positively impact users’ task performance, where this effect is direct for more knowledgeable users and fully mediated by informativeness for less knowledgeable users. We seek to elucidate the mechanisms underlying these effects of explanations on learning from AI feedback in focus groups with AI experts and users. By studying the consequences of explanations as part of AI feedback for users in non-routine inference tasks, we advance the understanding of explanations as facilitators of human learning from AI systems.