Moderation
Harm
Internet privacy
Psychology
Political science
Sociology
Social psychology
Business
Computer science
Authors
Ariadna Matamoros-Fernández,Nadia Jude
Identifier
DOI:10.1177/14614448251314399
Abstract
This article critically examines the social implications of data infrastructures designed to moderate contested content categories such as disinformation. It does so in the context of new online safety regulation (e.g. the EU Digital Services Act) that pushes digital platforms to improve how they tackle both illegal and ‘legal but harmful’ content. In particular, we investigate and conceptualise X’s Community Notes, a tool that uses ‘human-AI cooperation’ to add context to tweets, as a data infrastructure for ‘soft moderation’. We find that Community Notes is limited when dealing with under-acknowledged online harms, such as those derived from the intersection between disinformation and humour. While research points to the potential of content moderation solutions that combine automation with humans-in-the-loop, we show how this approach can fail when disinformation is poorly defined in policy and practice.