Keywords
Misinformation
Discernment
Social media
Internet privacy
Crowdsourcing
Disinformation
Quality (philosophy)
Computer science
Psychology
Fake news
Censorship
Social psychology
Advertising
Computer security
World Wide Web
Business
Philosophy
Epistemology
Statistics
Mathematics
Authors
Ziv Epstein, N. Foppiani, Sophie Hilgard, Sanjana Sharma, Elena L. Glassman, D. G. Rand
Source
Venue: arXiv (Cornell University)
Date: 2021-12
Identifiers
DOI: 10.48550/arXiv.2112.03450
Abstract
Social media platforms are increasingly deploying complex interventions to help users detect false news. Labeling false news using techniques that combine crowd-sourcing with artificial intelligence (AI) offers a promising way to inform users about potentially low-quality information without censoring content, but also can be hard for users to understand. In this study, we examine how users respond in their sharing intentions to information they are provided about a hypothetical human-AI hybrid system. We ask i) if these warnings increase discernment in social media sharing intentions and ii) if explaining how the labeling system works can boost the effectiveness of the warnings. To do so, we conduct a study ($N=1473$ Americans) in which participants indicated their likelihood of sharing content. Participants were randomly assigned to a control, a treatment where false content was labeled, or a treatment where the warning labels came with an explanation of how they were generated. We find clear evidence that both treatments increase sharing discernment, and directional evidence that explanations increase the warnings' effectiveness. Interestingly, we do not find that the explanations increase self-reported trust in the warning labels, although we do find some evidence that participants found the warnings with the explanations to be more informative. Together, these results have important implications for designing and deploying transparent misinformation warning labels, and AI-mediated systems more broadly.
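The abstract's central outcome, sharing discernment, is commonly operationalized in this literature as the difference between mean sharing intentions for true and for false content, computed per experimental condition. Below is a minimal sketch of that calculation; the data layout, column names, condition labels, and the numbers themselves are illustrative assumptions, not values from the paper.

```python
import pandas as pd

# Hypothetical condition-level summary: mean sharing intention for true
# and false headlines in each of the three conditions described in the
# abstract (control, warning label, label + explanation).
df = pd.DataFrame({
    "condition": ["control", "control",
                  "label", "label",
                  "label+explanation", "label+explanation"],
    "is_true":   [True, False, True, False, True, False],
    "share":     [0.60, 0.40, 0.62, 0.25, 0.63, 0.20],  # made-up means
})

# Sharing discernment per condition: mean sharing intention for true
# headlines minus mean sharing intention for false headlines. Larger
# values indicate sharing that better tracks headline veracity.
by_cond = df.groupby(["condition", "is_true"])["share"].mean().unstack()
discernment = by_cond[True] - by_cond[False]
print(discernment)
```

Under this (assumed) layout, the treatments raise discernment mainly by suppressing the sharing of labeled false content rather than by boosting the sharing of true content, which matches the warning-label mechanism the abstract describes.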