Harm
Priming (agriculture)
Mediation
Perspective (graphical)
Suspect
Psychology
Premise
Intervention (counseling)
Social psychology
Pattern
Computer science
Epistemology
Sociology
Artificial intelligence
Biology
Germination
Psychiatry
Philosophy
Criminology
Botany
Social science
Authors
Serena Iacobucci,Roberta De Cicco,Francesca Michetti,Riccardo Palumbo,Stefano Pagliaro
Source
Journal: Cyberpsychology, Behavior, and Social Networking
[Mary Ann Liebert, Inc.]
Date: 2021-03-01
Volume/Issue: 24 (3): 194-202
Citations: 34
Identifier
DOI:10.1089/cyber.2020.0149
Abstract
The study aims to test whether simple priming with deepfake (DF) information significantly increases users' ability to recognize DF media. Although undoubtedly fascinating from a technological point of view, these highly realistic artificial intelligence (AI)-generated fake videos hold high deceptive potential. Both practitioners and institutions are thus joining forces to develop debunking strategies to counter the spread of such difficult-to-recognize and potentially misleading video content. On this premise, this study addresses the following research questions: does simple priming with the definition of DFs and information about their potentially harmful applications increase users' ability to recognize DFs? Does bullshit receptivity, as an individual tendency to be overly accepting of epistemically suspect beliefs, moderate the relationship between such priming and DF recognition? Results indicate that strategies to counter the deceitfulness of DFs from an educational and cultural perspective might work well, but only for people with a lower susceptibility to believing willfully misleading claims. Finally, through a serial mediation analysis, we show that DF recognition, in turn, negatively impacts users' sharing intention, thus limiting the potential harm of DFs at the very root of one of their strengths: virality. We discuss the implications of our finding that society's defense against DFs could benefit from a simple reasoned digital literacy intervention.