Transparency (behavior)
Dilemma
Business
Computer science
Accounting
Computer security
Epistemology
Philosophy
Identifier
DOI: 10.1108/idd-03-2025-0056
Abstract
Purpose
This study investigates the nonlinear effects of transparency on artificial intelligence (AI) use intention, challenging the assumption that greater transparency always enhances adoption. Drawing on Trust Calibration Theory and Cognitive Load Theory, the author proposes an inverted U-shaped relationship in which moderate transparency fosters trust and certainty, while excessive transparency induces cognitive overload and skepticism, reducing AI adoption.

Design/methodology/approach
A Web-based experiment with 491 participants was conducted across two AI decision-making contexts: fake news detection and friend recommendations. Transparency was manipulated across three conditions (none, placebo and real transparency), and quadratic regression analysis examined the diminishing returns of transparency. Mediation analysis tested its indirect effects via certainty and trust.

Findings
Results confirm an inverted U-shaped effect of transparency on AI use intention. While moderate transparency enhances trust and reduces uncertainty, excessive transparency triggers cognitive overload and skepticism, lowering adoption. Certainty and trust mediate the relationship, demonstrating that transparency influences AI adoption indirectly through trust-building and uncertainty reduction.

Practical implications
The findings suggest that AI developers and policymakers should implement adaptive transparency strategies to balance clarity, usability and trust in AI-driven systems.

Originality/value
This study challenges the linear transparency paradigm, contributing to Trust Calibration Theory and Cognitive Load Theory by demonstrating the need to optimize rather than maximize transparency. It offers practical insights for adaptive transparency models, ensuring AI explanations align with user expertise and task complexity.
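To make the reported analysis strategy concrete, the sketch below simulates the two tests named in the abstract: a quadratic regression probing the inverted U-shape, and a percentile-bootstrap test of an indirect (mediated) effect. The variable names (transparency, trust, use_intention), the simulated data and the use of a single mediator are illustrative assumptions; this is not the authors' code, data or model specification.

```python
# Hypothetical sketch of the abstract's two analyses, on simulated data:
# (1) quadratic regression of AI use intention on transparency, and
# (2) a percentile-bootstrap test of mediation via trust.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 491  # sample size reported in the abstract

# Simulated data with a built-in inverted U-shape, for demonstration only.
transparency = rng.uniform(0, 10, n)
trust = 0.8 * transparency - 0.08 * transparency**2 + rng.normal(0, 1, n)
use_intention = 0.6 * trust + rng.normal(0, 1, n)
df = pd.DataFrame({"transparency": transparency, "trust": trust,
                   "use_intention": use_intention})

# (1) Quadratic regression: an inverted U-shape implies a significant
# negative coefficient on the squared term.
quad = smf.ols("use_intention ~ transparency + I(transparency**2)",
               data=df).fit()
print(quad.params)

# (2) Percentile-bootstrap mediation: indirect effect = a * b, where
# a is transparency -> trust and b is trust -> use_intention
# (controlling for transparency).
indirect = []
for _ in range(2000):
    s = df.sample(n, replace=True)
    a = smf.ols("trust ~ transparency", data=s).fit().params["transparency"]
    b = smf.ols("use_intention ~ trust + transparency",
                data=s).fit().params["trust"]
    indirect.append(a * b)
lo, hi = np.percentile(indirect, [2.5, 97.5])
print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")
```

In this framing, the inverted U-shape is supported when the squared term's coefficient is significantly negative, and mediation is supported when the bootstrap confidence interval for the indirect effect excludes zero.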