Fairness Preferences, Actual and Hypothetical: A Study of Crowdworker Incentives

12/08/2020
by Angie Peng, et al.

How should we decide which fairness criteria or definitions to adopt in machine learning systems? To answer this question, we must study the fairness preferences of actual users of machine learning systems. Stringent parity constraints on treatment or impact can come with trade-offs and may not even be preferred by the social groups in question (Zafar et al., 2017). It may therefore be better to elicit a group's preferences directly than to rely on a priori mathematical fairness constraints. Simply asking users for self-reported rankings is also problematic, because research has shown that there are often gaps between people's stated and actual preferences (Bernheim et al., 2013). This paper outlines a research program and experimental designs for investigating these questions. Participants in the experiments perform a set of tasks in exchange for a base payment; they are told upfront that they may receive a bonus later on, and that the bonus could depend on some combination of output quantity and quality. The same group of workers then votes on a bonus payment structure, which elicits their preferences. The vote is hypothetical (not tied to any outcome) for half the group and actual (tied to the real payment outcome) for the other half, allowing us to study the relation between a group's actual and hypothetical (stated) preferences. Connections to, and lessons for, fairness in machine learning are explored.
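To make the design concrete, here is a minimal Python sketch of one round of such an experiment. The two candidate bonus structures (equal split vs. output-weighted), the majority-vote rule, and all names and numbers below are illustrative assumptions for this sketch, not the paper's actual protocol:

# Hypothetical sketch of the bonus-vote experiment described above.
# The candidate bonus structures and the majority-vote rule are
# illustrative assumptions, not the paper's actual design.

from dataclasses import dataclass
from collections import Counter
import random

@dataclass
class Worker:
    quantity: int   # number of tasks completed
    quality: float  # mean quality score in [0, 1]

def equal_split(workers, pool):
    """Divide the bonus pool equally, regardless of output."""
    return [pool / len(workers)] * len(workers)

def output_weighted(workers, pool):
    """Weight each worker's share by quantity * quality."""
    scores = [w.quantity * w.quality for w in workers]
    total = sum(scores) or 1.0  # guard against division by zero
    return [pool * s / total for s in scores]

STRUCTURES = {"equal": equal_split, "output": output_weighted}

def run_vote(workers, votes, pool, hypothetical):
    """Tally the vote; apply the winning structure only in the actual arm."""
    winner = Counter(votes).most_common(1)[0][0]
    if hypothetical:
        return winner, None  # stated preference only; payout unaffected
    return winner, STRUCTURES[winner](workers, pool)

if __name__ == "__main__":
    random.seed(0)
    crew = [Worker(random.randint(5, 20), random.random()) for _ in range(6)]
    votes = [random.choice(list(STRUCTURES)) for _ in crew]
    print(run_vote(crew, votes, pool=60.0, hypothetical=False))

Comparing the winning structure across the hypothetical and actual arms is what would reveal any gap between stated and incentivized preferences.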


Related Research

07/17/2023 · Navigating Fairness Measures and Trade-Offs
In order to monitor and prevent bias in AI systems we can use a wide ran...

09/08/2017 · On Democratic Fairness for Groups of Agents
We study the problem of allocating indivisible goods to groups of intere...

11/25/2018 · 50 Years of Test (Un)fairness: Lessons for Machine Learning
Quantitative definitions of what is unfair and what is fair have been in...

01/08/2021 · Group Fairness: Independence Revisited
This paper critically examines arguments against independence, a measure...

05/27/2023 · Moral Machine or Tyranny of the Majority?
With Artificial Intelligence systems increasingly applied in consequenti...

02/27/2020 · "Do the Right Thing" for Whom? An Experiment on Ingroup Favouritism, Group Assorting and Moral Suasion
In this paper we investigate the effect of moral suasion on ingroup favo...

11/28/2018 · Racial categories in machine learning
Controversies around race and machine learning have sparked debate among...
