Skepticism
Transparency (behavior)
Psychology
Unintended consequences
Content (measure theory)
Artificial intelligence
Generative grammar
Computer science
Cognitive psychology
Social psychology
Epistemology
Philosophy
Computer security
Mathematics
Mathematical analysis
Authors
Sacha Altay, Fabrizio Gilardi
Source
Journal: PNAS Nexus
[Oxford University Press]
Date: 2024-10-01
Volume/Issue: 3 (10): pgae403
Citations: 35
Identifier
DOI: 10.1093/pnasnexus/pgae403
Abstract
The rise of generative AI tools has sparked debates about the labeling of AI-generated content. Yet, the impact of such labels remains uncertain. In two preregistered online experiments among US and UK participants (N = 4,976), we show that while participants did not equate “AI-generated” with “False,” labeling headlines as AI-generated lowered their perceived accuracy and participants’ willingness to share them, regardless of whether the headlines were true or false, and created by humans or AI. The impact of labeling headlines as AI-generated was three times smaller than labeling them as false. This AI aversion is due to expectations that headlines labeled as AI-generated have been entirely written by AI with no human supervision. These findings suggest that the labeling of AI-generated content should be approached cautiously to avoid unintended negative effects on harmless or even beneficial AI-generated content, and that effective deployment of labels requires transparency regarding their meaning.