Simultaneous Improvement of ML Model Fairness and Performance by Identifying Bias in Data

by   Bhushan Chaudhari, et al.

Machine learning models trained on datasets containing discriminative instances, attributed to various underlying factors, produce biased and unfair outcomes. It is well established that existing bias mitigation strategies often sacrifice accuracy in order to ensure fairness. But when an AI engine's predictions drive decisions that affect revenue or operational efficiency, as in credit risk modelling, the business would prefer that accuracy be reasonably preserved. This conflicting requirement of maintaining both accuracy and fairness in AI motivates our research. In this paper, we propose a fresh approach for simultaneously improving the fairness and accuracy of ML models within a realistic paradigm. The essence of our work is a data preprocessing technique that can detect instances ascribing a specific kind of bias that should be removed from the dataset before training, and we further show that such instance removal has no adverse impact on model accuracy. In particular, we claim that in problem settings where instances exist with similar features but different labels caused by variation in protected attributes, an inherent bias is induced in the dataset, which can be identified and mitigated through our novel scheme. Our experimental evaluation on two open-source datasets demonstrates how the proposed method can mitigate bias while improving rather than degrading accuracy, and offers the end user a certain degree of control.
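The abstract does not spell out the detection procedure, but the described kind of bias — instances that agree on all non-protected features yet disagree on the label while differing in a protected attribute — can be sketched as a simple grouping pass. The function and field names below (`find_biased_indices`, `sex`, `y`) are illustrative assumptions, not the paper's actual implementation:

```python
from collections import defaultdict

def find_biased_indices(rows, protected, label):
    """Flag rows that share identical non-protected feature values but
    disagree on the label while also differing on the protected attribute.
    This is a hypothetical reading of the paper's preprocessing step."""
    groups = defaultdict(list)
    for i, row in enumerate(rows):
        # group key: all feature values except the protected attribute and label
        key = tuple(sorted((k, v) for k, v in row.items()
                           if k not in (protected, label)))
        groups[key].append(i)
    flagged = []
    for idxs in groups.values():
        labels = {rows[i][label] for i in idxs}
        prots = {rows[i][protected] for i in idxs}
        # differing labels explained only by a differing protected attribute
        if len(labels) > 1 and len(prots) > 1:
            flagged.extend(idxs)
    return sorted(flagged)

# toy data: rows 0 and 1 are identical except for the protected attribute
rows = [
    {"income": 50, "sex": "A", "y": 1},
    {"income": 50, "sex": "B", "y": 0},
    {"income": 30, "sex": "A", "y": 0},
]
flagged = find_biased_indices(rows, protected="sex", label="y")
clean = [r for i, r in enumerate(rows) if i not in set(flagged)]
```

Removing the flagged pair leaves only consistently labelled instances for training, which is one way such a filter could avoid the usual fairness-versus-accuracy trade-off: the dropped rows carry contradictory supervision rather than useful signal.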

