DOI:10.1177/02666669251346124
Abstract
This study examines the non-linear relationship between transparency and AI use intention, challenging the assumption that increased transparency always enhances AI adoption. A web-based experiment with 491 participants was conducted, manipulating transparency (real, placebic, or absent) across two interactions with AI systems: fake news detection (cognitive) and friending recommendations (social). Using quadratic regression analysis and threshold analysis, we find an inverted U-shaped effect: moderate transparency fosters trust and certainty, but excessive transparency leads to cognitive overload and heightened scrutiny, reducing AI adoption. Additionally, the study identifies key causal pathways, demonstrating that transparency influences AI use intention indirectly by enhancing trust and reducing uncertainty, with certainty and trust serving as significant mediators. These findings contribute to Trust Calibration Theory and Cognitive Load Theory, advocating for adaptive transparency models that optimize AI explanations based on user expertise, task complexity, and engagement levels to maximize usability and trust.
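The abstract's core analytic move, testing an inverted U-shape with quadratic regression and locating its turning point, can be sketched as follows. This is a minimal illustration on synthetic data, not the study's dataset or exact pipeline; the variable names and the peak location are hypothetical.

```python
import numpy as np

# Hypothetical data: an outcome (e.g. AI use intention) that peaks at a
# moderate level of a predictor (e.g. a transparency score), plus noise.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 500)                            # predictor
y = -1.0 * (x - 5) ** 2 + 25 + rng.normal(0, 2, 500)   # inverted U + noise

# Ordinary least squares on the design matrix [1, x, x^2]
X = np.column_stack([np.ones_like(x), x, x ** 2])
b0, b1, b2 = np.linalg.lstsq(X, y, rcond=None)[0]

# An inverted U is signalled by a negative quadratic coefficient;
# the turning point (threshold) lies at x* = -b1 / (2 * b2).
print(f"b2 = {b2:.3f} (negative => inverted U)")
print(f"turning point x* = {-b1 / (2 * b2):.2f}")
```

A significance test on `b2` (e.g. via `statsmodels` OLS) and a two-lines test on either side of `x*` would be the usual next steps for confirming an inverted U rather than a plateau.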