Keywords
Moderation
Corporate governance
Computer science
Sociotechnical system
Government (linguistics)
Computer security
Politics
Unintended consequences
Internet privacy
Data science
Political science
Knowledge management
Law
Business
Machine learning
Linguistics
Philosophy
Finance
Authors
Robert Gorwa,Reuben Binns,Christian Katzenbach
Source
Journal: Big Data & Society [SAGE]
Date: 2020-01-01
Volume/Issue: 7 (1), article 2053951719897945
Citations: 797
Identifiers
DOI:10.1177/2053951719897945
Abstract
As government pressure on major technology companies builds, both firms and legislators are searching for technical solutions to difficult platform governance puzzles such as hate speech and misinformation. Automated hash-matching and predictive machine learning tools – what we define here as algorithmic moderation systems – are increasingly being deployed to conduct content moderation at scale by major platforms for user-generated content such as Facebook, YouTube and Twitter. This article provides an accessible technical primer on how algorithmic moderation works; examines some of the existing automated tools used by major platforms to handle copyright infringement, terrorism and toxic speech; and identifies key political and ethical issues for these systems as the reliance on them grows. Recent events suggest that algorithmic moderation has become necessary to manage growing public expectations for increased platform responsibility, safety and security on the global stage; however, as we demonstrate, these systems remain opaque, unaccountable and poorly understood. Despite the potential promise of algorithms or ‘AI’, we show that even ‘well optimized’ moderation systems could exacerbate, rather than relieve, many existing problems with content policy as enacted by platforms for three main reasons: automated moderation threatens to (a) further increase opacity, making a famously non-transparent set of practices even more difficult to understand or audit, (b) further complicate outstanding issues of fairness and justice in large-scale sociotechnical systems and (c) re-obscure the fundamentally political nature of speech decisions being executed at scale.
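Note: the abstract names automated hash-matching as one class of algorithmic moderation system. As a minimal illustrative sketch (not taken from the paper), the Python snippet below checks an upload against a hypothetical blocklist of known-bad content hashes; production systems such as PhotoDNA rely on perceptual hashes that tolerate re-encoding, whereas the exact SHA-256 match used here only catches byte-identical copies.

# Minimal illustrative sketch of hash-matching moderation (assumed example, not the paper's system).
import hashlib

# Hypothetical blocklist of hashes of previously flagged files.
BLOCKED_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",  # SHA-256 of b"test"
}

def sha256_hex(data: bytes) -> str:
    """Return the hex-encoded SHA-256 digest of the raw upload bytes."""
    return hashlib.sha256(data).hexdigest()

def should_block(upload: bytes) -> bool:
    """Return True if the upload's hash matches a known-bad hash in the blocklist."""
    return sha256_hex(upload) in BLOCKED_HASHES

if __name__ == "__main__":
    print(should_block(b"test"))         # True: hash is on the blocklist
    print(should_block(b"new content"))  # False: unseen content passes through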