Prisoners of Their Own Devices: How Models Induce Data Bias in Performative Prediction

06/27/2022
by José Pombal, et al.

The unparalleled ability of machine learning algorithms to learn patterns from data also enables them to incorporate biases embedded within that data. A biased model can then make decisions that disproportionately harm certain groups in society. Much work has been devoted to measuring unfairness in static ML environments, but not in dynamic, performative prediction ones, in which most real-world use cases operate. In the latter, the predictive model itself plays a pivotal role in shaping the distribution of the data, yet little attention has been paid to how unfairness relates to these model-data interactions. To further the understanding of unfairness in these settings, we propose a taxonomy to characterize bias in the data and study cases where it is shaped by model behaviour. Using a real-world account-opening fraud detection case study as an example, we examine the dangers to both performance and fairness of two biases typical of performative prediction: distribution shifts and the problem of selective labels.
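To make the selective-labels mechanism concrete, below is a minimal sketch (not the authors' code) of the feedback loop the abstract describes: in account-opening fraud detection, fraud outcomes are only observed for applications the model approves, so each retraining round inherits a data distribution shaped by the model's own decisions. The synthetic data-generating process, threshold, and variable names are illustrative assumptions.

```python
# Minimal sketch of the selective-labels feedback loop in fraud detection.
# Assumptions: a synthetic single-feature fraud process, a 0.5 decision
# threshold, and retraining only on approved (label-observed) applications.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample_applications(n):
    """Synthetic account-opening applications with one fraud-correlated feature."""
    x = rng.normal(size=(n, 1))
    p_fraud = 1 / (1 + np.exp(-(2 * x[:, 0] - 1)))  # true fraud probability
    y = rng.binomial(1, p_fraud)                      # 1 = fraud
    return x, y

# Round 0: a small, fully labelled seed set (e.g. from manual review).
x_train, y_train = sample_applications(500)
model = LogisticRegression().fit(x_train, y_train)

for t in range(5):
    x_new, y_new = sample_applications(5000)
    flagged = model.predict_proba(x_new)[:, 1] > 0.5  # declined applications
    approved = ~flagged
    # Selective labels: fraud outcomes are only observed for approved accounts;
    # declined applications never reveal whether they were actually fraudulent.
    x_train = np.vstack([x_train, x_new[approved]])
    y_train = np.concatenate([y_train, y_new[approved]])
    model = LogisticRegression().fit(x_train, y_train)
    print(f"round {t}: observed fraud rate {y_new[approved].mean():.3f} "
          f"vs true rate {y_new.mean():.3f}")
```

Running the sketch shows the observed fraud rate among approved applications drifting away from the true population rate, which is the kind of model-induced data bias the paper studies.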


