
Fairness Preferences, Actual and Hypothetical: A Study of Crowdworker Incentives

12/08/2020
by Angie Peng et al.

How should we decide which fairness criteria or definitions to adopt in machine learning systems? To answer this question, we must study the fairness preferences of actual users of machine learning systems. Stringent parity constraints on treatment or impact can come with trade-offs, and may not even be preferred by the social groups in question (Zafar et al., 2017). It may therefore be better to elicit a group's preferences directly, rather than rely on a priori defined mathematical fairness constraints. Simply asking users for self-reported rankings is also problematic, because research has shown that there are often gaps between people's stated and actual preferences (Bernheim et al., 2013). This paper outlines a research program and experimental designs for investigating these questions. Participants in the experiments are invited to perform a set of tasks in exchange for a base payment; they are told upfront that they may receive a bonus later on, and that the bonus could depend on some combination of output quantity and quality. The same group of workers then votes on a bonus payment structure, to elicit their preferences. The voting is hypothetical (not tied to any outcome) for half the group and actual (tied to the actual payment outcome) for the other half, so that we can understand the relation between a group's actual preferences and its hypothetical (stated) preferences. Connections to and lessons from fairness in machine learning are explored.
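To make the two-arm voting design concrete, here is a minimal Python sketch of the protocol described above. This is not the authors' implementation: the payment amounts, the three candidate bonus structures, and the self-interested voting rule are all illustrative assumptions.

```python
# A minimal sketch (not the authors' implementation) of the experimental
# design: workers complete tasks for a base payment, then vote on a bonus
# structure; half vote hypothetically, half vote with real stakes.

import random
from collections import Counter

random.seed(0)

BASE_PAYMENT = 2.00  # assumed base payment, in dollars

# Hypothetical bonus structures the group can vote on: each maps a
# worker's output (quantity, quality) to a bonus amount.
BONUS_STRUCTURES = {
    "equal_split":    lambda qty, qual: 1.00,        # same bonus for everyone
    "quantity_based": lambda qty, qual: 0.10 * qty,  # pay per unit of output
    "quality_based":  lambda qty, qual: 2.00 * qual, # pay for quality alone
}

# Simulated workers: an output quantity and a quality score in [0, 1].
workers = [
    {"id": i, "qty": random.randint(5, 20), "qual": random.random()}
    for i in range(20)
]

# Randomly split the group: one arm casts hypothetical (stated) votes,
# the other casts actual (incentive-tied) votes.
random.shuffle(workers)
half = len(workers) // 2
arms = {"hypothetical": workers[:half], "actual": workers[half:]}

def vote(worker):
    """Toy voting rule: each worker votes for the structure that would
    maximize their own bonus (a stand-in for elicited preferences)."""
    return max(BONUS_STRUCTURES,
               key=lambda name: BONUS_STRUCTURES[name](worker["qty"], worker["qual"]))

for arm, group in arms.items():
    tally = Counter(vote(w) for w in group)
    winner, _ = tally.most_common(1)[0]
    print(f"{arm} arm: votes={dict(tally)}, winning structure={winner}")

# Only the actual arm's winning structure determines real payouts.
winner = Counter(vote(w) for w in arms["actual"]).most_common(1)[0][0]
for w in arms["actual"]:
    payout = BASE_PAYMENT + BONUS_STRUCTURES[winner](w["qty"], w["qual"])
    print(f"worker {w['id']}: total payout = ${payout:.2f}")
```

Comparing the vote tallies across the two arms is the point of the design: any systematic gap between the hypothetical arm's winning structure and the actual arm's is evidence of a stated-versus-revealed preference divergence.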

