RENATA: REpreseNtation And Training Alteration for Bias Mitigation

12/11/2020
by William Paul, et al.

We propose a novel method for enforcing AI fairness with respect to protected or sensitive factors. This method uses a dual strategy, Training And Representation Alteration (RENATA), to mitigate two of the most prominent causes of AI bias: a) representation learning alteration via adversarial independence, which suppresses the bias-inducing dependence of the data representation on protected factors; and b) training set alteration via intelligent augmentation, which addresses bias-causing data imbalance by using generative models that allow fine control of sensitive factors related to underrepresented populations. In experiments on image analytics tasks, RENATA significantly or fully debiases baseline models while outperforming competing debiasing methods, e.g., achieving (accuracy, bias metric) scores of (73.71, 11.82) vs. the (69.08, 21.65) baseline for CelebA, with corresponding improvements for EyePACS. As an additional contribution, recognizing certain limitations in current metrics used for assessing debiasing performance, this study proposes novel conjunctive debiasing metrics. Our experiments also demonstrate the ability of these novel metrics to assess the Pareto efficiency of the proposed methods.
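
The first component described above, adversarial independence between the learned representation and protected factors, is commonly realized by training an adversary to predict the protected factor from the representation while the encoder is pushed in the opposite direction. The sketch below is a minimal, hypothetical PyTorch illustration of that general pattern, not the authors' released code; the module sizes, the gradient-reversal formulation, and the weighting `lambda_adv` are illustrative assumptions.

```python
# Minimal sketch of adversarial-independence debiasing (illustrative only).
# Assumes 256-dimensional input features, a binary main task, and a binary
# protected factor; all dimensions and the lambda_adv weight are hypothetical.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient sign on the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

encoder = nn.Sequential(nn.Linear(256, 64), nn.ReLU())  # representation learner
task_head = nn.Linear(64, 2)                            # main prediction task
adversary = nn.Linear(64, 2)                            # tries to recover the protected factor
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(task_head.parameters()) + list(adversary.parameters()),
    lr=1e-3,
)
lambda_adv = 1.0  # strength of the independence penalty (assumed value)

def training_step(x, y_task, y_protected):
    z = encoder(x)
    task_loss = criterion(task_head(z), y_task)
    # The adversary sees the representation through the gradient-reversal layer:
    # minimizing this loss trains the adversary, while the reversed gradient pushes
    # the encoder toward representations carrying less protected-factor information.
    adv_loss = criterion(adversary(GradReverse.apply(z, lambda_adv)), y_protected)
    loss = task_loss + adv_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```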

Related research

12/26/2022
Bias Mitigation Framework for Intersectional Subgroups in Neural Networks
We propose a fairness-aware learning framework that mitigates intersecti...

02/14/2023
When Mitigating Bias is Unfair: A Comprehensive Study on the Impact of Bias Mitigation Algorithms
Most works on the fairness of machine learning systems focus on the blin...

05/31/2022
Inducing bias is simpler than you think
Machine learning may be oblivious to human bias but it is not immune to ...

10/26/2020
One-vs.-One Mitigation of Intersectional Bias: A General Method to Extend Fairness-Aware Binary Classification
With the widespread adoption of machine learning in the real world, the ...

10/24/2022
Simultaneous Improvement of ML Model Fairness and Performance by Identifying Bias in Data
Machine learning models built on datasets containing discriminative inst...

04/28/2020
Addressing Artificial Intelligence Bias in Retinal Disease Diagnostics
Few studies of deep learning systems (DLS) have addressed issues of arti...

03/03/2022
Robustness and Adaptation to Hidden Factors of Variation
We tackle here a specific, still not widely addressed aspect, of AI robu...
