Transparency (behavior)
Moderation
Computer science
Internet privacy
Agency (philosophy)
World Wide Web
Human-computer interaction
Computer security
Machine learning
Sociology
Social science
Authors
María D. Molina, S. Shyam Sundar
Abstract
Given the scale of user-generated content online, the use of artificial intelligence (AI) to flag problematic posts is inevitable, but users do not trust such automated moderation of content. We explore whether (a) involving human moderators in the curation process and (b) affording “interactive transparency,” wherein users participate in curation, can promote appropriate reliance on AI. We test this through a 3 (Source: AI, Human, Both) × 3 (Transparency: No Transparency, Transparency-Only, Interactive Transparency) × 2 (Classification Decision: Flagged, Not Flagged) between-subjects online experiment (N = 676) involving classification of hate speech and suicidal ideation. We discovered that users trust AI for the moderation of content just as much as humans, but their trust depends on the heuristic that is triggered when they are told AI is the source of moderation. We also found that allowing users to provide feedback to the algorithm enhances trust by increasing user agency.
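To make the factorial design concrete, below is a minimal Python sketch that enumerates the 18 cells of the 3 × 3 × 2 between-subjects design described in the abstract and randomly assigns participants to one cell each. The factor levels are taken from the abstract; the assignment function and participant IDs are hypothetical illustrations, not the authors' actual procedure.

```python
# Sketch (assumption, not the paper's code): full factorial crossing of the
# three between-subjects factors reported in the abstract, with random
# assignment of each participant to exactly one condition.
import itertools
import random

SOURCES = ["AI", "Human", "Both"]
TRANSPARENCY = ["No Transparency", "Transparency-Only", "Interactive Transparency"]
DECISION = ["Flagged", "Not Flagged"]

# Full crossing: 3 x 3 x 2 = 18 experimental conditions.
CONDITIONS = list(itertools.product(SOURCES, TRANSPARENCY, DECISION))
assert len(CONDITIONS) == 18

def assign(participant_ids):
    """Randomly assign each participant to one between-subjects condition."""
    return {pid: random.choice(CONDITIONS) for pid in participant_ids}

if __name__ == "__main__":
    sample = assign(range(676))  # N = 676, as reported in the abstract
    print(sample[0])             # e.g. ('Human', 'Transparency-Only', 'Flagged')
```

Because the design is between-subjects, each participant sees only one combination of source, transparency level, and classification decision.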