Measuring Social Biases of Crowd Workers using Counterfactual Queries

04/04/2020
by   Bhavya Ghai, et al.

Social biases based on gender, race, etc. have been shown to pollute the machine learning (ML) pipeline, predominantly via biased training datasets. Crowdsourcing, a popular and cost-effective way to gather labeled training data, is not immune to the inherent social biases of crowd workers. To ensure that such biases are not passed on to the curated datasets, it is important to know how biased each crowd worker is. In this work, we propose a new method based on counterfactual fairness to quantify the degree of inherent social bias in each crowd worker. This extra information can be leveraged together with individual worker responses to curate a less biased dataset.
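The core idea — scoring a worker by how often their answers change under a counterfactual query, i.e., the same query with only a sensitive attribute flipped — can be sketched as follows. This is a minimal illustration under assumed names and an assumed scoring scheme (fraction of flipped labels), not the paper's exact formulation:

```python
def counterfactual_bias_score(responses):
    """Illustrative bias score for one crowd worker.

    responses: list of (label_original, label_counterfactual) pairs,
    where each counterfactual query is identical to the original except
    that a sensitive attribute (e.g., gender or race) is flipped.

    Returns the fraction of pairs where the worker's label changed:
    0.0 for a counterfactually consistent worker, up to 1.0 for a
    worker whose answers always depend on the sensitive attribute.
    """
    if not responses:
        return 0.0
    flips = sum(1 for orig, cf in responses if orig != cf)
    return flips / len(responses)


# Example: a worker answers 4 paired queries; the label flips on 2 of them.
worker_responses = [
    ("hire", "hire"),
    ("hire", "reject"),
    ("reject", "reject"),
    ("hire", "reject"),
]
print(counterfactual_bias_score(worker_responses))  # → 0.5
```

A per-worker score like this could then be used to down-weight or filter highly inconsistent workers when aggregating labels, which is the "extra information" the abstract refers to.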

