Assessing Demographic Bias Transfer from Dataset to Model: A Case Study in Facial Expression Recognition

05/20/2022
by Iris Dominguez-Catena, et al.

The growing number of applications of Artificial Intelligence (AI) has led researchers to study the social impact of these technologies and evaluate their fairness. Unfortunately, current fairness metrics are hard to apply to multi-class, multi-demographic classification problems such as Facial Expression Recognition (FER). We propose a new set of metrics for these problems. Of the three metrics proposed, two focus on the representational and stereotypical bias of the dataset, and the third on the residual bias of the trained model. Combined, these metrics can be used to study and compare diverse bias mitigation methods. We demonstrate their usefulness by applying them to a FER problem based on the popular AffectNet dataset. Like many other FER datasets, AffectNet is a large Internet-sourced dataset with 291,651 labeled images. Sourcing images from the Internet raises concerns about the fairness of any system trained on this data and its ability to generalize properly to diverse populations. We first analyze the dataset and some variants, finding substantial racial bias and gender stereotypes. We then extract several subsets with different demographic properties and train a model on each, observing the amount of residual bias in the different setups. We also provide a second analysis on a different dataset, FER+.
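The abstract does not reproduce the metric definitions, but the two dataset-level quantities it names can be illustrated concretely. The sketch below is a minimal, hypothetical implementation in that spirit, not the paper's exact formulation: it scores representational bias as one minus the normalized Shannon entropy of the demographic distribution, and stereotypical bias as the mutual information between demographic group and expression label, normalized to [0, 1]. The function names, formulas, and toy data are all assumptions for illustration.

```python
import numpy as np
from collections import Counter

def representational_bias(groups):
    """Illustrative representational-bias score: 1 minus the normalized
    Shannon entropy of the demographic group distribution. 0 means the
    groups are perfectly balanced; values near 1 mean one group dominates."""
    counts = np.array(list(Counter(groups).values()), dtype=float)
    p = counts / counts.sum()
    max_entropy = np.log(len(counts))
    if max_entropy == 0.0:   # degenerate case: a single group present
        return 1.0           # treat as maximal imbalance by convention
    return 1.0 - (-(p * np.log(p)).sum()) / max_entropy

def stereotypical_bias(groups, labels):
    """Illustrative stereotypical-bias score: mutual information between
    demographic group and class label, normalized to [0, 1]. 0 means labels
    are independent of group; larger values mean group-label associations."""
    groups, labels = np.asarray(groups), np.asarray(labels)

    def entropy(x):
        _, counts = np.unique(x, return_counts=True)
        p = counts / counts.sum()
        return -(p * np.log(p)).sum()

    mi = 0.0
    for g in np.unique(groups):
        for y in np.unique(labels):
            p_gy = np.mean((groups == g) & (labels == y))
            if p_gy > 0:
                mi += p_gy * np.log(p_gy / (np.mean(groups == g) * np.mean(labels == y)))
    denom = min(entropy(groups), entropy(labels))  # MI <= min of the entropies
    return mi / denom if denom > 0 else 0.0

# Hypothetical toy sample: skewed toward one gender, with a label/gender link.
groups = ["f", "f", "f", "m", "f", "f"]
labels = ["happy", "happy", "happy", "angry", "happy", "angry"]
print(representational_bias(groups))       # > 0: "f" is over-represented
print(stereotypical_bias(groups, labels))  # > 0: labels depend on group
```

Normalizing by the smaller of the two marginal entropies keeps the second score in [0, 1] regardless of the number of groups or classes, which matters in the multi-class, multi-demographic setting the abstract describes.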

Related research

03/28/2023
Metrics for Dataset Demographic Bias: A Case Study on Facial Expression Recognition
Demographic biases in source datasets have been shown as one of the caus...

10/11/2022
Gender Stereotyping Impact in Facial Expression Recognition
Facial Expression Recognition (FER) uses images of faces to identify the...

03/15/2021
Domain-Incremental Continual Learning for Mitigating Bias in Facial Expression and Action Unit Recognition
As Facial Expression Recognition (FER) systems become integrated into ou...

03/21/2021
Responsible AI: Gender bias assessment in emotion recognition
Rapid development of artificial intelligence (AI) systems amplify many c...

03/19/2022
Assessing Gender Bias in Predictive Algorithms using eXplainable AI
Predictive algorithms have a powerful potential to offer benefits in are...

10/08/2021
Measure Twice, Cut Once: Quantifying Bias and Fairness in Deep Neural Networks
Algorithmic bias is of increasing concern, both to the research communit...

06/23/2021
Fairness in Cardiac MR Image Analysis: An Investigation of Bias Due to Data Imbalance in Deep Learning Based Segmentation
The subject of "fairness" in artificial intelligence (AI) refers to asse...
