Mitigating Gender Bias in Face Recognition Using the von Mises-Fisher Mixture Model

10/24/2022
by Jean-Rémy Conti, et al.

Despite the high performance and reliability of deep learning algorithms in a wide range of everyday applications, many investigations show that numerous models exhibit biases, discriminating against specific subgroups of the population (e.g., gender, ethnicity). This urges practitioners to develop fair systems with uniform/comparable performance across sensitive groups. In this work, we investigate the gender bias of deep Face Recognition networks. To measure this bias, we introduce two new metrics, BFAR and BFRR, that better reflect the inherent deployment needs of Face Recognition systems. Motivated by geometric considerations, we mitigate gender bias through a new post-processing methodology that transforms the deep embeddings of a pre-trained model to give more representation power to discriminated subgroups. It consists of training a shallow neural network by minimizing a Fair von Mises-Fisher loss whose hyperparameters account for the intra-class variance of each gender. Interestingly, we empirically observe that these hyperparameters are correlated with our fairness metrics. In fact, extensive numerical experiments on a variety of datasets show that a careful selection significantly reduces gender bias.
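The loss described above can be sketched with a minimal numerical example. The sketch below is an illustrative assumption, not the authors' exact Fair von Mises-Fisher loss: it models each identity's embeddings as a von Mises-Fisher component with centroid mu_j and concentration kappa_j, and takes a cross-entropy over the resulting logits kappa_j * <mu_j, z>. The per-class kappas are the gender-dependent hyperparameters the paper tunes; the function name and softmax form are this sketch's own choices.

```python
import numpy as np

def fair_vmf_loss(z, labels, centroids, kappas):
    """Illustrative vMF-mixture cross-entropy (a sketch, not the paper's code).

    z:         (n, d) embeddings; L2-normalized onto the unit hypersphere.
    labels:    (n,) identity index of each embedding.
    centroids: (k, d) per-identity centroids; L2-normalized.
    kappas:    (k,) per-identity concentrations, set according to the
               gender of each identity (the fairness hyperparameters).
    """
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    mu = centroids / np.linalg.norm(centroids, axis=1, keepdims=True)
    # vMF log-likelihood of class j is kappa_j * <mu_j, z> up to a constant.
    logits = (z @ mu.T) * kappas                      # (n, k)
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    labels = np.asarray(labels)
    return -log_probs[np.arange(len(labels)), labels].mean()
```

A larger kappa_j sharpens the vMF component around mu_j, i.e., it demands lower intra-class variance for that identity; choosing kappas per gender is how the post-processing step rebalances representation power between groups.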

Related research

An adversarial learning algorithm for mitigating gender bias in face recognition (06/14/2020)
State-of-the-art face recognition networks implicitly encode gender info...

On the Importance of Architectures and Hyperparameters for Fairness in Face Recognition (10/18/2022)
Face recognition systems are deployed across the world by government age...

Unravelling the Effect of Image Distortions for Biased Prediction of Pre-trained Face Recognition Models (08/14/2021)
Identifying and mitigating bias in deep learning algorithms has gained s...

Distill and De-bias: Mitigating Bias in Face Recognition using Knowledge Distillation (12/17/2021)
Face recognition networks generally demonstrate bias with respect to sen...

Deep Learning for Face Recognition: Pride or Prejudiced? (04/02/2019)
Do very high accuracies of deep networks suggest pride of effective AI o...

Fair SA: Sensitivity Analysis for Fairness in Face Recognition (02/08/2022)
As the use of deep learning in high impact domains becomes ubiquitous, i...

Are Explainability Tools Gender Biased? A Case Study on Face Presentation Attack Detection (04/26/2023)
Face recognition (FR) systems continue to spread in our daily lives with...
