Cognition
Process (computing)
Mood
Mechanism (biology)
Psychology
Computer science
Decision-making
Artificial intelligence
Social psychology
Knowledge management
Epistemology
Engineering
Philosophy
Operations management
Neuroscience
Procurement
Operating system
Authors
Carolin Ebermann, Matthias Selisky, Stephan Weibelzahl
Identifier
DOI:10.1080/10447318.2022.2126812
Abstract
Providing explanations of an artificial intelligence (AI) system has been suggested as a means to increase users’ acceptance during the decision-making process. However, little research has examined the psychological mechanism by which these explanations cause a positive or negative reaction in the user. To address this gap, we investigate the effect on user acceptance when an AI system’s decisions and the explanations provided for them contradict the user’s own. An interdisciplinary research model was derived and validated in an experiment with 78 participants. The findings suggest that in decision situations with cognitive misfit, users experience negative mood significantly more often and evaluate the AI system’s support negatively. The article therefore provides guidance on new interdisciplinary approaches to human-AI interaction during the decision-making process and sheds light on how explainable AI can increase users’ acceptance of such systems.