Human Perceptions of Fairness in Algorithmic Decision Making: A Case Study of Criminal Risk Prediction

02/26/2018
by Nina Grgić-Hlača, et al.

As algorithms are increasingly used to make important decisions that affect human lives, ranging from social benefit assignment to predicting risk of criminal recidivism, concerns have been raised about the fairness of algorithmic decision making. Most prior work on algorithmic fairness normatively prescribes how fair decisions ought to be made. In contrast, here we descriptively survey users about how they perceive and reason about fairness in algorithmic decision making. A key contribution of this work is the framework we propose for understanding why people perceive certain features as fair or unfair to use in algorithms. Our framework identifies eight properties of features, such as relevance, volitionality, and reliability, as latent considerations that inform people's moral judgments about the fairness of feature use in decision-making algorithms. We validate our framework through a series of scenario-based surveys with 576 people. We find that, based on a person's assessment of the eight latent properties of a feature in our exemplar scenario, we can accurately (> 85%) predict whether that person will judge the use of the feature as fair. Our findings have important implications. At a high level, we show that people's unfairness concerns are multi-dimensional and argue that future studies need to address unfairness concerns beyond discrimination. At a low level, we find considerable disagreement in people's fairness judgments. We identify root causes of these disagreements and note possible pathways to resolving them.
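The prediction result described above (inferring a respondent's fairness judgment from their ratings of the eight latent feature properties with over 85% accuracy) is, in machine-learning terms, a supervised binary classification task. The sketch below is not the authors' code: it uses simulated survey responses and an off-the-shelf logistic-regression classifier from scikit-learn purely to illustrate the setup. The data-generating process, the 1-7 rating scale, the property encodings, and the model choice are all assumptions made for illustration.

# Minimal sketch (assumptions noted above): predict a respondent's fairness
# judgment about using a feature from their ratings of eight latent properties.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

N_PROPERTIES = 8   # the framework posits eight latent properties per feature
N_RESPONDENTS = 576  # matches the survey size reported in the abstract

# Simulated responses: each row holds one respondent's Likert-style ratings
# (1-7) of the eight properties for a single feature in the exemplar scenario.
X = rng.integers(1, 8, size=(N_RESPONDENTS, N_PROPERTIES)).astype(float)

# Simulated binary fairness judgment (1 = "fair to use", 0 = "unfair"),
# loosely driven by the first three ratings for illustration only.
y = ((X[:, :3].mean(axis=1) + rng.normal(0, 1, N_RESPONDENTS)) > 4).astype(int)

# Fit a logistic-regression classifier and report cross-validated accuracy.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"Cross-validated accuracy: {scores.mean():.2f}")

A linear model is used here only because its coefficients give a direct reading of how strongly each latent property pushes the predicted judgment toward "fair" or "unfair"; the paper's actual modeling choices may differ.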


