Keywords
Misinformation
Artificial intelligence
Test (biology)
Generative grammar
Psychology
Computer science
Cognitive psychology
Machine learning
Computer security
Ecology
Biology
Authors
Yoori Hwang, Se-Hoon Jeong
Source
Journal: Cyberpsychology, Behavior, and Social Networking
[Mary Ann Liebert, Inc.]
Date: 2025-02-24
Identifier
DOI: 10.1089/cyber.2024.0407
Abstract
Generative artificial intelligence (AI) tools can produce statements that are seemingly plausible but factually incorrect. This phenomenon, referred to as AI hallucination, can contribute to the generation and dissemination of misinformation. The present study therefore examines whether forewarning about AI hallucination reduces individuals' acceptance of AI-generated misinformation. An online experiment with 208 Korean adults demonstrated that an AI hallucination forewarning reduced misinformation acceptance (p = 0.001, Cohen's d = 0.45), whereas it did not reduce acceptance of true information (p = 0.91). In addition, the effect of the forewarning on misinformation acceptance was moderated by preference for effortful thinking (p < 0.01), such that the forewarning decreased misinformation acceptance when preference for effortful thinking was high (vs. low).