Keywords
Moderation
Social media
User-generated content
Visibility
Ethnography
Appeal
Internet privacy
Authors
Hibby Thach,Samuel Mayworm,Daniel Delmonaco,Oliver L. Haimson
Source
Journal: New Media & Society
Publisher: SAGE Publishing
Date: 2022-07-18
Volume/Issue: 146144482211098-146144482211098
Citations: 10
Identifiers
DOI:10.1177/14614448221109804
Abstract
Research suggests that marginalized social media users face disproportionate content moderation and removal. However, when content is removed or accounts suspended, the processes governing content moderation are largely invisible, making assessing content moderation bias difficult. To study this bias, we conducted a digital ethnography of marginalized users on Reddit’s /r/FTM subreddit and Twitch’s “Just Chatting” and “Pools, Hot Tubs, and Beaches” categories, observing content moderation visibility in real time. We found that on Reddit, a text-based platform, platform tools make content moderation practices invisible to users, but moderators make their practices visible through communication with users. Yet on Twitch, a live chat and streaming platform, content moderation practices are visible in channel live chats, “unban appeal” streams, and “back from my ban” streams. Our ethnography shows how content moderation visibility differs in important ways between social media platforms, harming those who must see offensive content, and at other times, allowing for increased platform accountability.