A Survey on Preserving Fairness Guarantees in Changing Environments

11/14/2022
by Ainhize Barrainkua, et al.

Human lives are increasingly affected by the outcomes of automated decision-making systems, and it is essential for the latter to be not only accurate but also fair. The literature on algorithmic fairness has grown considerably over the last decade, yet most approaches are evaluated under the strong assumption that the training and test samples are independently and identically drawn from the same underlying distribution. In practice, however, the training and deployment environments differ, which compromises both the performance of the decision-making algorithm and its fairness guarantees on the deployment data. An emergent and rapidly growing line of research studies how to preserve fairness guarantees when the data-generating processes differ between the source (training) and target (test) domains. With this survey, we aim to provide a broad and unifying overview of the topic. To that end, we propose a taxonomy of the existing approaches to fair classification under distribution shift, highlight benchmarking alternatives, point out the relation to other similar research fields, and finally identify future avenues of research.
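The following is a minimal sketch, not taken from the survey, of the phenomenon the abstract describes: a fairness metric (here, the demographic parity gap) that looks acceptable on i.i.d. source data can degrade once the feature distribution shifts at deployment. The data-generating process, the amount of shift, and all names below are hypothetical and chosen only for illustration.

```python
# Hypothetical illustration: fairness measured under the i.i.d. assumption
# need not transfer to a shifted deployment distribution.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample(n, shift=0.0):
    """Draw (features X, sensitive attribute A, label Y).
    `shift` moves the feature distribution of group A=0 in the target domain."""
    a = rng.integers(0, 2, size=n)                      # sensitive attribute
    x = rng.normal(loc=a * 1.0 - shift * (1 - a), scale=1.0, size=n)
    y = (x + 0.5 * a + rng.normal(scale=0.5, size=n) > 0.5).astype(int)
    return x.reshape(-1, 1), a, y

def demographic_parity_gap(y_pred, a):
    """|P(Yhat = 1 | A = 1) - P(Yhat = 1 | A = 0)|."""
    return abs(y_pred[a == 1].mean() - y_pred[a == 0].mean())

# Source (training) environment: no shift.
X_src, a_src, y_src = sample(5000)
clf = LogisticRegression().fit(X_src, y_src)

# Target (deployment) environment: covariate shift affecting group A=0.
X_tgt, a_tgt, y_tgt = sample(5000, shift=1.5)

print("DP gap on source data:", demographic_parity_gap(clf.predict(X_src), a_src))
print("DP gap on shifted target data:", demographic_parity_gap(clf.predict(X_tgt), a_tgt))
```

Under this synthetic shift the gap measured at training time understates the gap observed in deployment, which is the failure mode the surveyed methods aim to prevent.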

