Distraction is All You Need for Fairness

03/15/2022
by   Mehdi Yazdani-Jahromi, et al.

With the recent growth in artificial intelligence models and their expanding role in automated decision making, ensuring that these models are not biased is of vital importance. There is abundant evidence that these models can contain, or even amplify, bias present in the data on which they are trained, inherent to their objective functions and learning algorithms. In this paper, we propose a novel classification algorithm that improves fairness while maintaining prediction accuracy. Utilizing the embedding layer of a classifier pre-trained on the protected attributes, the network uses an attention layer to distract the classification from depending on the protected attribute in its predictions. We compare our model with six state-of-the-art methods from the fairness literature and show that it is superior to those methods in minimizing bias while maintaining accuracy.
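The abstract's "distraction" idea can be illustrated with a minimal sketch: score each feature against a protected-attribute embedding using scaled dot-product attention, then invert those attention weights so features correlated with the protected attribute are down-weighted before classification. This is only an illustrative toy in pure Python, not the paper's actual architecture; the function names and the inversion scheme are assumptions, and a real implementation would use a deep-learning framework with a learned attention layer.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def distraction_weights(features, protected_embedding):
    """Toy 'distraction' weighting (hypothetical, for illustration only).

    features: list of feature vectors feeding the classifier.
    protected_embedding: embedding vector for the protected attribute,
    e.g. taken from a pre-trained protected-attribute classifier.
    """
    d = len(protected_embedding)
    # Scaled dot-product attention score of each feature vector against
    # the protected-attribute embedding; a higher score means the feature
    # is more aligned with the protected attribute.
    scores = [
        sum(f_i * p_i for f_i, p_i in zip(f, protected_embedding)) / math.sqrt(d)
        for f in features
    ]
    attn = softmax(scores)
    # "Distract" the classifier: invert the attention so that features
    # correlated with the protected attribute receive LOWER weight,
    # then renormalize to sum to 1.
    inverted = [1.0 - a for a in attn]
    z = sum(inverted)
    return [w / z for w in inverted]
```

For example, with two features where the first is aligned with the protected embedding, `distraction_weights([[1.0, 0.0], [0.0, 1.0]], [1.0, 0.0])` assigns the smaller weight to the first feature, steering the downstream prediction away from the protected-correlated direction.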


Related research

- Auditing Predictive Models for Intersectional Biases (06/22/2023)
  Predictive models that satisfy group fairness criteria in aggregate for ...
- A Differentiable Distance Approximation for Fairer Image Classification (10/09/2022)
  Naively trained AI models can be heavily biased. This can be particularl...
- Through a fair looking-glass: mitigating bias in image datasets (09/18/2022)
  With the recent growth in computer vision applications, the question of ...
- Model Explanation Disparities as a Fairness Diagnostic (03/03/2023)
  In recent years, there has been a flurry of research focusing on the fai...
- Towards a Measure of Individual Fairness for Deep Learning (09/28/2020)
  Deep learning has produced big advances in artificial intelligence, but ...
- Simultaneous Improvement of ML Model Fairness and Performance by Identifying Bias in Data (10/24/2022)
  Machine learning models built on datasets containing discriminative inst...
- Feature and Label Embedding Spaces Matter in Addressing Image Classifier Bias (10/27/2021)
  This paper strives to address image classifier bias, with a focus on bot...
