Keywords
Misinformation, Social media, Computer science, Internet privacy, Microeconomics, Nash equilibrium, Ideology, Economics, Computer security, Political science, World Wide Web, Politics, Law
Authors
Daron Acemoğlu, Asuman Ozdaglar, James Siderius
Identifier
DOI: 10.1093/restud/rdad111
Abstract
We present a model of online content sharing where agents sequentially observe an article and decide whether to share it with others. This content may or may not contain misinformation. Each agent starts with an ideological bias and gains utility from positive social media interactions but does not want to be called out for propagating misinformation. We characterize the (Bayesian–Nash) equilibria of this social media game and establish that it exhibits strategic complementarities. Under this framework, we study how a platform interested in maximizing engagement would design its algorithm. Our main result establishes that when the relevant articles have low reliability and are thus likely to contain misinformation, the engagement-maximizing algorithm takes the form of a "filter bubble", creating an echo chamber of like-minded users. Moreover, filter bubbles become more likely when there is greater polarization in society and content is more divisive. Finally, we discuss various regulatory solutions to such platform-manufactured misinformation.
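The mechanism the abstract describes lends itself to a small numerical illustration. The sketch below is not the paper's formal model: the functional forms, the parameters (SOCIAL_REWARD, CALLOUT_COST, reliability, slant), and the Beta-distributed ideological biases are all illustrative assumptions. It iterates a best-response map of a stylized sharing game and reproduces the qualitative pattern reported in the abstract: with a low-reliability article, sharing is sustained only inside a filter bubble, while with a reliable article engagement is high under either feed design.

```python
# Illustrative sketch of the sequential-sharing game from the abstract.
# NOT the paper's formal model: payoffs, parameters, and the bias
# distribution are assumptions chosen for demonstration only.
import random

random.seed(0)

N = 2000
SOCIAL_REWARD = 1.0   # utility per like-minded sharer visible in the feed
CALLOUT_COST = 0.5    # disutility of being called out for misinformation

def make_biases(polarization):
    """Ideological biases in [-1, 1]; a larger `polarization` pushes
    mass toward the extremes (a bimodal, polarized society)."""
    return [random.choice([-1, 1]) * random.betavariate(polarization, 1)
            for _ in range(N)]

def equilibrium_engagement(biases, slant, reliability, filter_bubble, iters=60):
    """Iterate the best-response map of the sharing game to a fixed point.

    An agent with bias b shares iff
        b * slant * like_minded * SOCIAL_REWARD > (1 - reliability) * CALLOUT_COST,
    so sharing becomes more attractive as more like-minded users share --
    a strategic complementarity, as in the abstract. Under a filter
    bubble the feed shows only aligned users; a mixed feed dilutes them.
    """
    aligned = [b for b in biases if b * slant > 0]   # only aligned agents may share
    s = 0.5                                          # initial conjecture about sharing
    for _ in range(iters):
        like_minded = s if filter_bubble else 0.5 * s
        sharers = [b for b in aligned
                   if b * slant * like_minded * SOCIAL_REWARD
                      > (1 - reliability) * CALLOUT_COST]
        s = len(sharers) / len(aligned)
    return len(sharers) / len(biases)                # population share rate

biases = make_biases(polarization=3.0)               # fairly polarized society
slant = 1.0                                          # divisive article
for reliability in (0.9, 0.3):                      # reliable vs. low-reliability article
    bubble = equilibrium_engagement(biases, slant, reliability, filter_bubble=True)
    mixed = equilibrium_engagement(biases, slant, reliability, filter_bubble=False)
    print(f"reliability={reliability}: filter bubble {bubble:.2f}, mixed feed {mixed:.2f}")
```

Because each agent's payoff from sharing rises with the number of like-minded sharers, the best-response map is monotone and the initial conjecture selects among multiple equilibria. In this toy version, an engagement-maximizing platform gains nothing from a filter bubble when the article is reliable, but for a low-reliability article only the filter bubble sustains positive sharing, echoing the paper's main result.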