Transparency (behavior)
Perception
Psychology
Social psychology
Political science
Law
Neuroscience
Authors
Shih‐Yi Chien,Yi-Fan Wang,Kuang‐Ting Cheng,Yu‐Che Chen
Identifiers
DOI:10.1080/10447318.2024.2441015
Abstract
This study examines the impact of Explainable AI (XAI) on users' cognitive and affective responses, with a particular emphasis on cross-cultural differences. Using the Situation Awareness-Based Agent Transparency model, the XAI mechanisms varied in transparency level and explanation type. Survey studies conducted in the United States (N = 1200) and Taiwan (N = 600) assessed cultural influences on XAI perception. The findings identified significant cultural differences: Western participants demonstrated greater awareness of data privacy and a pronounced reluctance to trust AI services. The results further revealed that Eastern cultures emphasized rational analysis when evaluating privacy risk, whereas Western cultures were more inclined to rely on emotional responses to assess privacy concerns. Regarding the effectiveness of XAI mechanisms, low system transparency with an example-based method and high system transparency with a feature-based method yielded similarly positive outcomes. Additionally, while the U.S. group exhibited little variance between conditions, Taiwanese participants demonstrated heightened sensitivity to differences in XAI approaches.