Bias Mitigation Post-processing for Individual and Group Fairness

12/14/2018
by Pranay K. Lohia, et al.

Whereas previous post-processing approaches for increasing the fairness of predictions from biased classifiers address only group fairness, we propose a method that increases both individual and group fairness. Our novel framework includes an individual bias detector used to prioritize data samples in a bias mitigation algorithm that aims to improve the group fairness measure of disparate impact. We show performance superior to previous work on the combination of classification accuracy, individual fairness, and group fairness on several real-world datasets, in applications such as credit, employment, and criminal justice.
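As a concrete illustration of the approach described in the abstract, here is a minimal sketch, assuming binary labels (favorable label 1), a single binary protected attribute (1 = privileged), and a generic classifier with a scikit-learn-style predict method. The detector flags a sample as individually biased when its prediction changes once the protected attribute alone is flipped; flagged samples from the unprivileged group then receive their privileged-counterfactual prediction, which pushes the disparate impact ratio toward 1. All names here (disparate_impact, debias_predictions, protected_col) are illustrative and not taken from the authors' implementation.

import numpy as np

def disparate_impact(y_pred, protected):
    # DI = P(y_hat = 1 | unprivileged) / P(y_hat = 1 | privileged);
    # values near 1 indicate group fairness, and values below roughly
    # 0.8 are commonly treated as evidence of disparate impact.
    unpriv_rate = y_pred[protected == 0].mean()
    priv_rate = y_pred[protected == 1].mean()
    return unpriv_rate / priv_rate if priv_rate > 0 else np.inf

def debias_predictions(model, X, protected, protected_col):
    # Original predictions, plus predictions on a counterfactual copy
    # of X in which the binary protected attribute is flipped.
    y_pred = model.predict(X)
    X_flipped = X.copy()
    X_flipped[:, protected_col] = 1 - X_flipped[:, protected_col]
    y_flipped = model.predict(X_flipped)

    # Individual bias detector: a sample is flagged when flipping the
    # protected attribute alone changes the model's decision.
    biased = y_pred != y_flipped

    # Prioritize flagged samples from the unprivileged group and give
    # them the prediction their privileged counterfactual receives.
    fix = biased & (protected == 0)
    y_debiased = y_pred.copy()
    y_debiased[fix] = y_flipped[fix]
    return y_debiased, biased

Repairing only the flagged unprivileged samples, rather than a randomly chosen subset, is what ties the group fairness repair to the individual bias detector: each changed prediction also removes an instance of individual bias.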

Related research

01/31/2021 · Priority-based Post-Processing Bias Mitigation for Individual and Group Fairness
Previous post-processing bias mitigation algorithms on both group and in...

12/20/2020 · Biased Models Have Biased Explanations
We study fairness in Machine Learning (FairML) through the lens of attri...

10/26/2021 · Post-processing for Individual Fairness
Post-processing in algorithmic fairness is a versatile approach for corr...

02/12/2023 · On Testing and Comparing Fair Classifiers under Data Bias
In this paper, we consider a theoretical model for injecting data bias, ...

10/26/2020 · One-vs.-One Mitigation of Intersectional Bias: A General Method to Extend Fairness-Aware Binary Classification
With the widespread adoption of machine learning in the real world, the ...

09/03/2019 · Quantifying Infra-Marginality and Its Trade-off with Group Fairness
In critical decision-making scenarios, optimizing accuracy can lead to a...

07/17/2023 · Certifying the Fairness of KNN in the Presence of Dataset Bias
We propose a method for certifying the fairness of the classification re...
