Fairness and Bias in Truth Discovery Algorithms: An Experimental Analysis

04/25/2023
by Simone Lazier, et al.

Machine learning (ML) based approaches are increasingly being used in applications with societal impact. Training ML models often requires vast amounts of labeled data, and crowdsourcing is a dominant paradigm for obtaining labels from multiple workers. Crowd workers may sometimes provide unreliable labels, and to address this, truth discovery (TD) algorithms such as majority voting are applied to determine consensus labels from conflicting worker responses. However, these consensus labels can still be biased with respect to sensitive attributes such as gender, race, or political affiliation. Even when sensitive attributes are not involved, labels can be biased due to differing perspectives on subjective aspects such as toxicity. In this paper, we conduct a systematic study of the bias and fairness of TD algorithms. Our findings, based on two existing crowd-labeled datasets, reveal that a non-trivial proportion of workers provide biased responses and that simple approaches to TD are sub-optimal. Our study also demonstrates that popular TD algorithms are not a panacea. Additionally, we quantify the impact of these unfair workers on downstream ML tasks and show that conventional methods for achieving fairness and correcting label biases are ineffective in this setting. We end the paper with a plea for the design of novel bias-aware truth discovery algorithms that can ameliorate these issues.
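To make the setting concrete, below is a minimal sketch (not taken from the paper) of majority-vote truth discovery together with a naive per-group positive-rate check; the field names, toy annotations, and group labels are all hypothetical and only illustrate the kind of aggregation and fairness audit the abstract refers to.

```python
# Minimal sketch: majority-vote truth discovery plus a simple check of how the
# inferred consensus labels differ across a sensitive attribute. Illustrative
# only; the data layout (worker_id, item_id, label, group) is an assumption.
from collections import Counter, defaultdict


def majority_vote(annotations):
    """annotations: iterable of (worker_id, item_id, label).
    Returns {item_id: consensus_label} chosen by simple plurality."""
    votes = defaultdict(Counter)
    for _, item_id, label in annotations:
        votes[item_id][label] += 1
    return {item: counts.most_common(1)[0][0] for item, counts in votes.items()}


def positive_rate_by_group(consensus, item_groups, positive_label=1):
    """Fraction of items given the positive label within each sensitive group."""
    totals, positives = Counter(), Counter()
    for item, group in item_groups.items():
        totals[group] += 1
        positives[group] += int(consensus.get(item) == positive_label)
    return {g: positives[g] / totals[g] for g in totals}


# Toy usage: three workers label four items; items belong to groups "A" / "B".
annotations = [
    ("w1", 1, 1), ("w2", 1, 1), ("w3", 1, 0),
    ("w1", 2, 0), ("w2", 2, 0), ("w3", 2, 1),
    ("w1", 3, 1), ("w2", 3, 0), ("w3", 3, 1),
    ("w1", 4, 0), ("w2", 4, 1), ("w3", 4, 0),
]
item_groups = {1: "A", 2: "A", 3: "B", 4: "B"}

consensus = majority_vote(annotations)
print(consensus)                                        # {1: 1, 2: 0, 3: 1, 4: 0}
print(positive_rate_by_group(consensus, item_groups))   # {'A': 0.5, 'B': 0.5}
```

A large gap between the per-group rates would be one simple signal of the kind of bias the paper studies, although the paper argues that detecting and correcting such bias requires more than this naive aggregation.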


Related research

04/04/2020 · Measuring Social Biases of Crowd Workers using Counterfactual Queries
Social biases based on gender, race, etc. have been shown to pollute mac...

06/28/2023 · Systematic analysis of the impact of label noise correction on ML Fairness
Arbitrary, inconsistent, or faulty decision-making raises serious concer...

07/20/2020 · Crowd, Lending, Machine, and Bias
Big data and machine learning (ML) algorithms are key drivers of many fi...

02/16/2021 · Finding the Ground-Truth from Multiple Labellers: Why Parameters of the Task Matter
Employing multiple workers to label data for machine learning models has...

01/06/2020 · Fair Active Learning
Bias in training data and proxy attributes are probably the main reasons...

10/18/2021 · Demographic Biases of Crowd Workers in Key Opinion Leaders Finding
Key Opinion Leaders (KOLs) are people that have a strong influence and t...

04/13/2023 · ChatGPT-4 Outperforms Experts and Crowd Workers in Annotating Political Twitter Messages with Zero-Shot Learning
This paper assesses the accuracy, reliability and bias of the Large Lang...
