Democracy
Computer security
Political science
Computer science
Business
Law
Politics
Authors
Daniel Thilo Schroeder,Meeyoung Cha,Andrea Baronchelli,Nick Bostrom,Nicholas A. Christakis,David García,Amit Goldenberg,Yara Kyrychenko,Kevin Leyton‐Brown,Nina Lutz,Gary Marcus,Filippo Menczer,Gordon Pennycook,David G. Rand,Frank Schweitzer,Christopher Summerfield,Audrey Tang,Jay Joseph Van Bavel,Sander van der Linden,Dawn Song
Identifier
DOI:10.31219/osf.io/qm9yk_v2
Abstract
Advances in AI portend a new era of sophisticated disinformation operations. While individual AI systems already create convincing—and at times misleading—information, an imminent development is the emergence of malicious AI swarms. These systems can coordinate covertly, infiltrate communities, evade traditional detectors, and run continuous A/B tests with round-the-clock persistence. The result can include fabricated grassroots consensus, fragmented shared reality, mass harassment, voter micro-suppression or mobilization, contamination of AI training data, and erosion of institutional trust. With increasing vulnerabilities in democratic processes worldwide, we urge a three-pronged response: (1) platform-side defenses—always-on swarm-detection dashboards, pre-election high-fidelity swarm-simulation stress-tests, transparency audits, and optional client-side "AI shields" for users; (2) model-side safeguards—standardized persuasion-risk tests, provenance-authenticating passkeys, and watermarking; and (3) system-level oversight—a UN-backed AI Influence Observatory.