Diverse Perspectives Can Mitigate Political Bias in Crowdsourced Content Moderation

by Jacob Thebault-Spieker, et al.

In recent years, social media companies have grappled with defining and enforcing content moderation policies for political content on their platforms, driven in part by concerns about political bias, disinformation, and polarization. These policies have taken many forms, including disallowing political advertising, limiting the reach of political topics, fact-checking political claims, and enabling users to hide political content altogether. However, implementing these policies requires human judgment to label political content, and it is unclear how well human labelers perform at this task or whether their own biases affect the process. In this study, we therefore experimentally evaluate the feasibility and practicality of using crowd workers to identify political content, and we uncover biases that make this content difficult to identify. Our results problematize crowds composed of seemingly interchangeable workers, and provide preliminary evidence that aggregating judgments from heterogeneous workers may help mitigate political biases. In light of these findings, we identify strategies for achieving fairer labeling outcomes while better supporting crowd workers at this task.
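The aggregation idea above can be sketched in code. The snippet below is a minimal illustration, not the authors' actual method: it assumes each label is paired with a worker group (e.g., self-reported ideology) and averages per-group vote rates so that each perspective counts equally, rather than letting an over-represented group dominate a plain majority vote. All names and the example data are hypothetical.

```python
from collections import defaultdict

def balanced_aggregate(labels, threshold=0.5):
    """Aggregate binary 'is political' labels from (group, label) pairs,
    weighting each worker group equally instead of each worker."""
    by_group = defaultdict(list)
    for group, label in labels:
        by_group[group].append(label)
    # Per-group proportion voting "political", then an unweighted
    # average across groups: each perspective counts once.
    group_rates = [sum(votes) / len(votes) for votes in by_group.values()]
    score = sum(group_rates) / len(group_rates)
    return score >= threshold

# Hypothetical example: one group is over-represented in the crowd.
votes = [("left", 1), ("left", 1), ("left", 1), ("left", 0),
         ("right", 0), ("right", 0)]

plain_majority = sum(l for _, l in votes) / len(votes) >= 0.5  # True
balanced = balanced_aggregate(votes)  # (0.75 + 0.0) / 2 = 0.375 -> False
```

Here the plain majority vote labels the item political, while the group-balanced vote does not, showing how the two aggregation rules can disagree when the crowd's composition is skewed.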




Neutral Bots Reveal Political Bias on Social Media

Social media platforms attempting to curb abuse and misinformation have ...

Quantitative Analysis of Forecasting Models: In the Aspect of Online Political Bias

Understanding and mitigating political bias in online social media platf...

Inflating Topic Relevance with Ideology: A Case Study of Political Ideology Bias in Social Topic Detection Models

We investigate the impact of political ideology biases in training data....

ChatGPT-4 Outperforms Experts and Crowd Workers in Annotating Political Twitter Messages with Zero-Shot Learning

This paper assesses the accuracy, reliability and bias of the Large Lang...

The Self-Perception and Political Biases of ChatGPT

This contribution analyzes the self-perception and political biases of O...

Vicarious Offense and Noise Audit of Offensive Speech Classifiers

This paper examines social web content moderation from two key perspecti...

AI in the Gray: Exploring Moderation Policies in Dialogic Large Language Models vs. Human Answers in Controversial Topics

The introduction of ChatGPT and the subsequent improvement of Large Lang...
