Misinformation
Social media
Moderation
Government (linguistics)
Originality
Psychology
Value (mathematics)
Social psychology
Computer science
Computer security
World Wide Web
Creativity
Linguistics
Machine learning
Philosophy
Source
Journal: Internet Research
Publisher: Emerald Publishing Limited
Date: 2023-08-14
Volume/Issue: 33 (5): 1971-1989
Citations: 9
Identifier
DOI: 10.1108/intr-07-2022-0578
Abstract
Purpose: While there has been a growing call for insights on algorithms given their impact on what people encounter on social media, it remains unknown how enhanced algorithmic knowledge serves as a countermeasure to problematic information flow. To fill this gap, this study aims to investigate how algorithmic knowledge predicts people's attitudes and behaviors regarding misinformation through the lens of the third-person effect.

Design/methodology/approach: Four national surveys in the USA (N = 1,415), the UK (N = 1,435), South Korea (N = 1,798) and Mexico (N = 784) were conducted between April and September 2021. The survey questionnaire measured algorithmic knowledge, perceived influence of misinformation on self and others, intention to take corrective actions, support for government regulation and content moderation. Collected data were analyzed using multigroup SEM.

Findings: Results indicate that algorithmic knowledge was associated with presumed influence of misinformation on self and others to different degrees. Presumed media influence on self was a strong predictor of intention to take actions to correct misinformation, while presumed media influence on others was a strong predictor of support for government-led platform regulation and platform-led content moderation. There were nuanced but noteworthy differences in the link between presumed media influence and behavioral responses across the four countries studied.

Originality/value: These findings are relevant for grasping the role of algorithmic knowledge in countering rampant misinformation on social media, as well as for expanding US-centered extant literature by elucidating the distinctive views regarding social media algorithms and misinformation in four countries.