To Act or React: Investigating Proactive Strategies For Online Community Moderation

06/27/2019
by Hussam Habib, et al.

Reddit administrators have generally struggled to prevent or contain hateful and offensive discourse for several reasons, including: (1) the inability of a handful of human administrators to track and react to millions of posts and comments per day and (2) fear of backlash as a consequence of administrative decisions to ban or quarantine hateful communities. Consequently, as shown in our background research, administrative actions (community bans and quarantines) are often taken in reaction to media pressure after offensive discourse within a community spills into the real world with serious consequences. In this paper, we investigate the feasibility of proactive moderation on Reddit -- i.e., proactively identifying communities at risk of committing offenses that previously resulted in bans for other communities. Proactive moderation strategies show promise for two reasons: (1) they have the potential to narrow down the communities that administrators need to monitor for hateful content and (2) they give administrators a scientific rationale to back their decisions and interventions. Our work shows that communities constantly evolve in their user base and topics of discourse, and that evolution into hateful or dangerous (i.e., considered bannable by Reddit administrators) communities can often be predicted months ahead of time. This makes proactive moderation feasible. Further, we leverage explainable machine learning to identify the strongest predictors of evolution into dangerous communities, providing administrators with insights into the characteristics of communities at risk of becoming dangerous or hateful. Finally, we investigate, at scale, the impact of participation in hateful and dangerous subreddits and the effectiveness of community bans and quarantines on the behavior of members of these communities.
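To make the predictive-moderation idea concrete, the sketch below trains an interpretable classifier on community-level features observed before any administrative action and reads the strongest risk predictors off its coefficients. This is only an illustration under stated assumptions: the feature names, the synthetic data, and the choice of logistic regression are hypothetical and are not the paper's actual features or pipeline.

```python
# Hypothetical sketch: rank predictors of a community evolving into a
# "bannable" one using an interpretable model. All features and data
# below are illustrative assumptions, not the authors' method.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Assumed per-community features measured months before any ban decision.
feature_names = [
    "toxicity_rate",        # fraction of comments flagged as toxic
    "user_overlap_banned",  # share of users also active in banned subreddits
    "growth_rate",          # month-over-month subscriber growth
    "moderator_activity",   # removals per 1k comments
]

# Synthetic stand-in data: 500 communities, label = later banned/quarantined.
X = rng.normal(size=(500, len(feature_names)))
y = (X[:, 0] + 0.8 * X[:, 1] - 0.5 * X[:, 3]
     + rng.normal(scale=0.5, size=500)) > 0

# Standardize so coefficient magnitudes are comparable across features.
X_scaled = StandardScaler().fit_transform(X)
clf = LogisticRegression().fit(X_scaled, y)

# Coefficients give a simple "explainable" ranking of risk predictors.
ranked = sorted(zip(feature_names, clf.coef_[0]), key=lambda t: -abs(t[1]))
for name, coef in ranked:
    print(f"{name}: {coef:+.2f}")
```

In practice, richer explainability tools (e.g., feature attributions for tree ensembles) could replace the coefficient inspection shown here; the point of the sketch is only that a transparent model lets administrators see which community characteristics drive the predicted risk.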
