Avoiding Resentment Via Monotonic Fairness

09/03/2019
by Guy W. Cole, et al.

Classifiers that achieve demographic balance by explicitly using protected attributes such as race or gender are often politically or culturally controversial because they lack individual fairness, i.e., individuals with similar qualifications can receive different outcomes. Both individually fair and group-fair decision criteria can produce counter-intuitive results, e.g., the optimal constrained boundary may reject intuitively better candidates due to demographic imbalance among similar candidates. Both approaches can be seen as introducing individual resentment, where some individuals would have received a better outcome either if they belonged to a different demographic class with the same qualifications, or if they remained in the same class but had objectively worse qualifications (e.g., lower test scores). We show that both forms of resentment can be avoided by using monotonically constrained machine learning models to create individually fair, demographically balanced classifiers.


