AdaFair: Cumulative Fairness Adaptive Boosting

09/17/2019
by Vasileios Iosifidis, et al.

The widespread use of ML-based decision making in domains with high societal impact, such as recidivism, job hiring, and loan credit, has raised a lot of concerns regarding potential discrimination. In particular, in certain cases it has been observed that ML algorithms can provide different decisions based on sensitive attributes such as gender or race, and can therefore lead to discrimination. Although several fairness-aware ML approaches have been proposed, their focus has largely been on preserving the overall classification accuracy while improving fairness in predictions for both protected and non-protected groups (defined based on the sensitive attribute(s)). The overall accuracy, however, is not a good indicator of performance in the case of class imbalance, as it is biased towards the majority class. As we will see in our experiments, many of the fairness-related datasets suffer from class imbalance, and therefore tackling fairness also requires tackling the imbalance problem. To this end, we propose AdaFair, a fairness-aware classifier based on AdaBoost that further updates the weights of the instances in each boosting round, taking into account a cumulative notion of fairness based upon all current ensemble members, while explicitly tackling class imbalance by optimizing the number of ensemble members for balanced classification error. Our experiments show that our approach can achieve parity in true positive and true negative rates for both protected and non-protected groups, while it significantly outperforms existing fairness-aware methods by up to 25% in balanced error.
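The abstract outlines the key mechanics: an AdaBoost-style loop whose per-round weight update is scaled by a fairness cost derived from the cumulative (partial) ensemble, plus a post-hoc choice of the number of ensemble members that minimizes balanced error. The sketch below illustrates that idea under simplifying assumptions; the fairness cost u, the helper group_rates, and selecting theta on the training data are illustrative choices, not the paper's exact formulation.

```python
# Minimal sketch of cumulative-fairness-aware boosting in the spirit of AdaFair.
# Names (group_rates, adafair_sketch, u, theta) and the exact weight-update form
# are illustrative assumptions, not the paper's equations.
import numpy as np
from sklearn.tree import DecisionTreeClassifier


def group_rates(y_true, y_pred, protected):
    """TPR and TNR per group (protected vs. non-protected); y in {-1, +1}."""
    rates = {}
    for name, mask in (("prot", protected == 1), ("non_prot", protected == 0)):
        pos, neg = mask & (y_true == 1), mask & (y_true == -1)
        tpr = (y_pred[pos] == 1).mean() if pos.any() else 0.0
        tnr = (y_pred[neg] == -1).mean() if neg.any() else 0.0
        rates[name] = (tpr, tnr)
    return rates


def adafair_sketch(X, y, protected, n_rounds=50):
    """y in {-1, +1}; protected is a 0/1 array marking the protected group."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    learners, alphas = [], []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = np.clip(w[pred != y].sum(), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        learners.append(stump)
        alphas.append(alpha)

        # Cumulative fairness: evaluate the partial ensemble built so far.
        F = np.sign(sum(a * l.predict(X) for a, l in zip(alphas, learners)))
        r = group_rates(y, F, protected)
        d_tpr = r["non_prot"][0] - r["prot"][0]  # >0: protected positives disadvantaged
        d_tnr = r["non_prot"][1] - r["prot"][1]  # >0: protected negatives disadvantaged

        # Fairness cost: extra weight for misclassified instances of the
        # disadvantaged side of each rate disparity.
        u = np.zeros(n)
        mis = F != y
        u[mis & (protected == 1) & (y == 1)] = max(d_tpr, 0.0)
        u[mis & (protected == 0) & (y == 1)] = max(-d_tpr, 0.0)
        u[mis & (protected == 1) & (y == -1)] = max(d_tnr, 0.0)
        u[mis & (protected == 0) & (y == -1)] = max(-d_tnr, 0.0)

        # AdaBoost.M1-style update scaled by (1 + fairness cost), then renormalize.
        w *= np.exp(alpha * (pred != y)) * (1.0 + u)
        w /= w.sum()

    # Pick the number of members theta minimizing balanced error
    # (here on the training data; a validation split would be used in practice).
    def balanced_error(y_true, y_pred):
        tpr = (y_pred[y_true == 1] == 1).mean()
        tnr = (y_pred[y_true == -1] == -1).mean()
        return 1.0 - 0.5 * (tpr + tnr)

    scores = np.zeros(n)
    best_theta, best_ber = 1, np.inf
    for t, (a, l) in enumerate(zip(alphas, learners), start=1):
        scores += a * l.predict(X)
        ber = balanced_error(y, np.sign(scores))
        if ber < best_ber:
            best_ber, best_theta = ber, t
    return learners[:best_theta], alphas[:best_theta]
```

To score a new instance one would sum alpha-weighted predictions of the returned stumps and take the sign, as in standard AdaBoost; only the first theta members are kept, which is how the balanced-error criterion enters the final model in this sketch.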

