Transparency (behavior)
Computer science
Literacy
Legitimacy
Perception
Control (management)
Human–computer interaction
Psychology
Artificial intelligence
Computer security
Political science
Pedagogy
Politics
Neuroscience
Law
Authors
Jang Ho Moon, Se Hoon Kim, Young-Ju Jung, Joona Bang, Yongjun Sung
Source
Journal: Cyberpsychology, Behavior, and Social Networking
[Mary Ann Liebert, Inc.]
Date: 2025-05-07
Identifier
DOI: 10.1089/cyber.2024.0525
Abstract
As algorithms increasingly shape user experiences on digital platforms, concerns have emerged regarding their opacity and potential negative consequences. In response, platforms have introduced transparency features such as algorithm-based recommendation explanations and user control features. However, empirical research on the effects of these approaches and how they vary according to user characteristics remains limited. This study explores the impact of algorithmic explainability and user control on perceptions of algorithmic transparency, legitimacy, and platform satisfaction in short-form video platforms, focusing on how users' algorithmic literacy moderates these relationships. A 2 (explainability: present vs. absent) × 2 (user control: present vs. absent) × 2 (algorithmic literacy: high vs. low) between-subjects experiment was conducted with 240 participants using a fictitious short-form video platform. The results revealed a significant three-way interaction across all the dependent variables. Both explainability and user control enhanced perceived algorithmic transparency, legitimacy, and satisfaction. When neither feature was present, algorithmic literacy had no significant impact. However, when at least one feature was present, literacy significantly influenced the dependent variables. These findings highlight the critical role of algorithmic literacy in moderating transparency mechanisms' effects. This study advances the understanding of how platform-initiated transparency shapes user perceptions, suggesting that literacy creates a new dimension of the digital divide, where transparency benefits are unequally experienced. Implications for platform developers and policymakers are discussed.