Enhancing Fairness of Visual Attribute Predictors

07/07/2022
by Tobias Hänel, et al.

The performance of deep neural networks for image recognition tasks such as predicting a smiling face is known to degrade with under-represented classes of sensitive attributes. We address this problem by introducing fairness-aware regularization losses based on batch estimates of Demographic Parity, Equalized Odds, and a novel Intersection-over-Union measure. The experiments performed on facial and medical images from CelebA, UTKFace, and the SIIM-ISIC melanoma classification challenge show the effectiveness of our proposed fairness losses for bias mitigation as they improve model fairness while maintaining high classification performance. To the best of our knowledge, our work is the first attempt to incorporate these types of losses in an end-to-end training scheme for mitigating biases of visual attribute predictors.
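The regularization idea described in the abstract can be sketched as follows: estimate a fairness gap on each training batch from the model's soft predictions and add it, scaled by a weighting factor, to the standard classification loss. This is a minimal NumPy illustration of a batch estimate of the Demographic Parity gap combined with binary cross-entropy; the function names, the binary sensitive attribute, and the weight `lam` are illustrative assumptions, not the paper's actual implementation (which computes these estimates on differentiable tensors in an end-to-end training scheme and also covers Equalized Odds and an Intersection-over-Union measure).

```python
import numpy as np

def demographic_parity_gap(probs, groups):
    """Batch estimate of the Demographic Parity gap
    |P(yhat=1 | s=0) - P(yhat=1 | s=1)|, computed from soft
    predicted probabilities so the same computation remains
    differentiable when done on autograd tensors."""
    probs = np.asarray(probs, dtype=float)
    groups = np.asarray(groups)
    p0 = probs[groups == 0].mean()  # mean positive rate, group s=0
    p1 = probs[groups == 1].mean()  # mean positive rate, group s=1
    return abs(p0 - p1)

def fairness_regularized_loss(probs, labels, groups, lam=1.0):
    """Binary cross-entropy plus a fairness penalty weighted by lam."""
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels, dtype=float)
    eps = 1e-7  # numerical stability inside the logs
    bce = -(labels * np.log(probs + eps)
            + (1 - labels) * np.log(1 - probs + eps)).mean()
    return bce + lam * demographic_parity_gap(probs, groups)
```

For example, a batch where group 0 receives mostly positive predictions (0.9, 0.8) and group 1 mostly negative ones (0.1, 0.2) yields a gap of 0.7, so the penalty pushes the model toward equal positive rates across groups as training proceeds.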


Related research

09/14/2022 | CAT: Controllable Attribute Translation for Fair Facial Attribute Classification
As the social impact of visual recognition has been under scrutiny, seve...

03/15/2023 | FairAdaBN: Mitigating unfairness with adaptive batch normalization and its application to dermatological disease classification
Deep learning is becoming increasingly ubiquitous in medical research an...

07/20/2020 | Investigating Bias and Fairness in Facial Expression Recognition
Recognition of expressions of emotions and affect from facial images is ...

09/10/2021 | Fairness without the sensitive attribute via Causal Variational Autoencoder
In recent years, most fairness strategies in machine learning models foc...

09/27/2022 | A Survey of Fairness in Medical Image Analysis: Concepts, Algorithms, Evaluations, and Challenges
Fairness, a criterion that focuses on evaluating algorithm performance on dif...

01/08/2023 | Fair Multi-Exit Framework for Facial Attribute Classification
Fairness has become increasingly pivotal in facial recognition. Without ...

06/23/2021 | Fairness via Representation Neutralization
Existing bias mitigation methods for DNN models primarily work on learni...
