Evaluating Proposed Fairness Models for Face Recognition Algorithms

03/09/2022
by John J. Howard, et al.

The development of face recognition algorithms by academic and commercial organizations is growing rapidly due to the onset of deep learning and the widespread availability of training data. Though tests of face recognition algorithm performance indicate yearly performance gains, error rates for many of these systems differ based on the demographic composition of the test set. These "demographic differentials" in algorithm performance can contribute to unequal or unfair outcomes for certain groups of people, raising concerns as face recognition systems see increased worldwide adoption. Consequently, regulatory bodies in both the United States and Europe have proposed new rules requiring audits of biometric systems for "discriminatory impacts" (European Union Artificial Intelligence Act) and "fairness" (U.S. Federal Trade Commission). However, no standard for measuring fairness in biometric systems yet exists. This paper characterizes two proposed measures of face recognition algorithm fairness (fairness measures) from scientists in the U.S. and Europe. We find that both proposed methods are challenging to interpret when applied to disaggregated face recognition error rates as they are commonly experienced in practice. To address this, we propose a set of interpretability criteria, termed the Functional Fairness Measure Criteria (FFMC), that outlines properties desirable in a face recognition algorithm fairness measure. We further develop a new fairness measure, the Gini Aggregation Rate for Biometric Equitability (GARBE), and show how, in conjunction with Pareto optimization, this measure can be used to select among alternative algorithms based on the accuracy/fairness trade-space. Finally, we have open-sourced our dataset of machine-readable, demographically disaggregated error rates. We believe this is currently the largest open-source dataset of its kind.
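To make the GARBE idea concrete, the following is a minimal sketch of a Gini-style aggregation over per-group error rates. It assumes GARBE blends the Gini coefficients of false-match and false-non-match rates with a weighting parameter; the function names, the `alpha` weight, and the exact normalization are illustrative assumptions here, not the paper's definitive formulation.

```python
from itertools import combinations

def gini(rates):
    """Gini coefficient of a list of per-group error rates.

    0 means every group experiences the same error rate (most equitable);
    values grow toward 1 as errors concentrate in a few groups.
    Uses the pairwise form: sum_{i,j} |x_i - x_j| / (2 * n^2 * mean);
    combinations() counts each unordered pair once, hence no factor of 2.
    """
    n = len(rates)
    mean = sum(rates) / n
    if mean == 0:
        return 0.0
    diff_sum = sum(abs(a, ) if False else abs(a - b) for a, b in combinations(rates, 2))
    return diff_sum / (n * n * mean)

def garbe(fmr_by_group, fnmr_by_group, alpha=0.5):
    """Illustrative GARBE-style score: a weighted sum of the Gini
    coefficients of false-match (FMR) and false-non-match (FNMR)
    rates across demographic groups. `alpha` is an assumed weight."""
    return alpha * gini(fmr_by_group) + (1 - alpha) * gini(fnmr_by_group)
```

Scores like this can then sit on one axis of an accuracy/fairness trade-space, with Pareto optimization used to discard algorithms that are dominated on both axes.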


