The More Secure, The Less Equally Usable: Gender and Ethnicity (Un)fairness of Deep Face Recognition along Security Thresholds

09/30/2022
by   Andrea Atzori, et al.

Face biometrics are playing a key role in making modern smart city applications more secure and usable. Commonly, the recognition threshold of a face recognition system is adjusted to the degree of security required by the use case: for instance, the likelihood of a match can be decreased by setting a high threshold when verifying a payment transaction. Prior work in face recognition has unfortunately shown that error rates are usually higher for certain demographic groups. These disparities have hence brought into question the fairness of systems empowered with face biometrics. In this paper, we investigate the extent to which disparities among demographic groups change under different security levels. Our analysis includes ten face recognition models, three security thresholds, and six demographic groups based on gender and ethnicity. Experiments show that the higher the security level of the system, the greater the disparities in usability among demographic groups. Compelling unfairness issues hence exist and urge countermeasures in real-world high-stakes environments requiring severe security levels.
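To make the threshold/security trade-off concrete, here is a minimal sketch of how a stricter decision threshold can raise the false non-match rate (FNMR, a usability measure) unevenly across demographic groups. All scores and group names below are synthetic illustrations, not data from the paper; a real system would use genuine-pair similarity scores produced by a face recognition model.

```python
def fnmr(genuine_scores, threshold):
    """Fraction of genuine (same-identity) pairs rejected at the threshold."""
    rejected = sum(1 for s in genuine_scores if s < threshold)
    return rejected / len(genuine_scores)

# Hypothetical genuine-pair similarity scores for two demographic groups.
group_a = [0.62, 0.71, 0.80, 0.85, 0.90, 0.93]
group_b = [0.55, 0.63, 0.68, 0.74, 0.88, 0.91]

# Low / medium / high security thresholds (illustrative values).
for threshold in (0.60, 0.70, 0.80):
    gap = abs(fnmr(group_a, threshold) - fnmr(group_b, threshold))
    print(f"threshold={threshold:.2f}  "
          f"FNMR A={fnmr(group_a, threshold):.2f}  "
          f"FNMR B={fnmr(group_b, threshold):.2f}  gap={gap:.2f}")
```

With these synthetic scores, tightening the threshold rejects more genuine attempts overall, and the FNMR gap between the two groups widens, mirroring the trend the paper reports.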


Related research

11/28/2022 · MixFairFace: Towards Ultimate Fairness via MixFair Adapter in Face Recognition
Although significant progress has been made in face recognition, demogra...

08/29/2022 · Towards Explaining Demographic Bias through the Eyes of Face Recognition Models
Biases inherent in both data and algorithms make the fairness of widespr...

12/07/2022 · Leveraging Priority Thresholds to Improve Equitable Housing Access for Unhoused-at-Risk Youth
Approximately 4.2 million youth and young adults experience homelessness...

06/10/2021 · Consistent Instance False Positive Improves Fairness in Face Recognition
Demographic bias is a significant challenge in practical face recognitio...

08/21/2022 · Statistical Methods for Assessing Differences in False Non-Match Rates Across Demographic Groups
Biometric recognition is used across a variety of applications from cybe...

03/09/2022 · Evaluating Proposed Fairness Models for Face Recognition Algorithms
The development of face recognition algorithms by academic and commercia...

04/22/2020 · SensitiveLoss: Improving Accuracy and Fairness of Face Representations with Discrimination-Aware Deep Learning
We propose a new discrimination-aware learning method to improve both ac...
