Fair SA: Sensitivity Analysis for Fairness in Face Recognition

02/08/2022
by   Aparna R. Joshi, et al.

As the use of deep learning in high-impact domains becomes ubiquitous, it is increasingly important to assess the resilience of models. One such high-impact domain is face recognition, with real-world applications involving images affected by various degradations, such as motion blur or high exposure. Moreover, images captured across different attributes, such as gender and race, can also challenge the robustness of a face recognition algorithm. While traditional summary statistics suggest that the aggregate performance of face recognition models has continued to improve, these metrics do not directly measure the robustness or fairness of the models. Visual Psychophysics Sensitivity Analysis (VPSA) [1] provides a way to pinpoint the individual causes of failure by introducing incremental perturbations in the data. However, perturbations may affect subgroups differently. In this paper, we propose a new robustness-based fairness evaluation in the form of a generic framework that extends VPSA. With this framework, we can analyze the ability of a model to perform fairly for different subgroups of a population affected by perturbations, and pinpoint the exact failure modes for a subgroup by measuring targeted robustness. With the increasing focus on the fairness of models, we use face recognition as an example application of our framework and propose to compactly visualize the fairness analysis of a model via AUC matrices. We analyze the performance of common face recognition models and empirically show that certain subgroups are at a disadvantage when images are perturbed, thereby uncovering trends that were not visible using the models' performance on subgroups without perturbations.
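As a rough illustration of the analysis described above, the sketch below computes an AUC matrix over subgroups and perturbation levels for a face-verification model. The box-blur perturbation, the `score_fn` callable, and the data layout are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of a subgroup-by-perturbation AUC
# matrix: rows are subgroups, columns are increasing perturbation levels.
import numpy as np
from sklearn.metrics import roc_auc_score


def box_blur(images, level):
    """Illustrative incremental degradation: `level` passes of a 5-point box
    blur, standing in for the blur/exposure perturbations mentioned above."""
    out = images.astype(float)
    for _ in range(level):
        out = (out
               + np.roll(out, 1, axis=-1) + np.roll(out, -1, axis=-1)
               + np.roll(out, 1, axis=-2) + np.roll(out, -1, axis=-2)) / 5.0
    return out


def auc_matrix(pairs, labels, subgroups, score_fn, levels):
    """pairs:     (N, 2, H, W) verification image pairs
    labels:    (N,) 1 = same identity, 0 = different identity
    subgroups: (N,) subgroup id per pair (e.g. a gender or race annotation)
    score_fn:  callable mapping perturbed pairs to (N,) similarity scores
    levels:    iterable of perturbation intensities (columns of the matrix)"""
    group_ids = np.unique(subgroups)
    M = np.zeros((len(group_ids), len(levels)))
    for j, lvl in enumerate(levels):
        scores = score_fn(box_blur(pairs, lvl))
        for i, g in enumerate(group_ids):
            mask = subgroups == g  # per-subgroup AUC; each subgroup must
            M[i, j] = roc_auc_score(labels[mask], scores[mask])  # contain both classes
    return group_ids, M
```

Under these assumptions, the fairness gap at a given perturbation level can be read off as the spread within the corresponding column of the matrix, and a subgroup's targeted robustness as the decay along its row.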


