Fairness Properties of Face Recognition and Obfuscation Systems

08/05/2021
by Harrison Rosenberg, et al.

The proliferation of automated facial recognition in various commercial and government sectors has caused significant privacy concerns for individuals. A recent and popular approach to address these privacy concerns is to employ evasion attacks against the metric embedding networks powering facial recognition systems. Face obfuscation systems generate imperceptible perturbations that, when added to an image, cause the facial recognition system to misidentify the user. The key to these approaches is that the perturbations are generated using a pre-trained metric embedding network and then applied to an online system whose model may be proprietary. This dependence of face obfuscation on metric embedding networks, which are known to be unfair in the context of facial recognition, raises a question of demographic fairness: are there demographic disparities in the performance of face obfuscation systems? To address this question, we perform an analytical and empirical exploration of the performance of recent face obfuscation systems that rely on deep embedding networks. We find that metric embedding networks are demographically aware: they cluster faces in the embedding space based on their demographic attributes. We observe that this effect carries through to the face obfuscation systems: faces belonging to minority groups incur reduced utility compared to those from majority groups. For example, the disparity in average obfuscation success rate on the online Face++ API can reach up to 20 percentage points. Further, for some demographic groups, the average perturbation size increases by up to 17% when the chosen target identity belongs to a different demographic group rather than the same one. Finally, we present a simple analytical model to provide insights into these phenomena.
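The perturbation-generation step described above can be sketched as a small optimization problem: minimize the distance between the embedding of the perturbed image and the embedding of a target identity, while penalizing the perturbation's size. The sketch below is illustrative only; the random linear map `W`, the vector dimensions, the L2 imperceptibility penalty, and the finite-difference optimizer are all simplifying assumptions, not the deep embedding networks or attack algorithms studied in the paper.

```python
import numpy as np

# Assumption: a random linear map stands in for a pre-trained deep
# metric embedding network (real systems attack CNN embeddings).
rng = np.random.default_rng(0)
d_img, d_emb = 32, 8
W = rng.normal(size=(d_emb, d_img))  # surrogate "embedding network"

def embed(x):
    """Map a flattened image vector to a unit-norm embedding."""
    v = W @ x
    return v / np.linalg.norm(v)

def loss(delta, source, t_emb, lam=0.01):
    """Distance to the target identity's embedding, plus an L2 penalty
    on the perturbation (a crude stand-in for imperceptibility)."""
    return np.sum((embed(source + delta) - t_emb) ** 2) + lam * np.sum(delta ** 2)

source = rng.normal(size=d_img)   # the user's face, flattened
target = rng.normal(size=d_img)   # a face of the chosen target identity
t_emb = embed(target)

# Gradient descent on the perturbation, using finite differences so the
# sketch needs no autodiff library.
delta, lr, eps = np.zeros(d_img), 0.05, 1e-5
for _ in range(150):
    grad = np.zeros(d_img)
    for i in range(d_img):
        up, dn = delta.copy(), delta.copy()
        up[i] += eps
        dn[i] -= eps
        grad[i] = (loss(up, source, t_emb) - loss(dn, source, t_emb)) / (2 * eps)
    delta -= lr * grad

before = np.sum((embed(source) - t_emb) ** 2)
after = np.sum((embed(source + delta) - t_emb) ** 2)
print(f"embedding distance to target: before={before:.3f}, after={after:.3f}")
```

The paper's fairness question can be read directly off this formulation: if the embedding network clusters faces by demographic group, then for some source/target pairs the required `delta` is systematically larger, which is the perturbation-size disparity reported in the abstract.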


Related research

- Mitigating Face Recognition Bias via Group Adaptive Classifier (06/13/2020): Face recognition is known to exhibit bias - subjects in certain demograp...
- DebFace: De-biasing Face Recognition (11/19/2019): We address the problem of bias in automated face recognition algorithms,...
- SensitiveLoss: Improving Accuracy and Fairness of Face Representations with Discrimination-Aware Deep Learning (04/22/2020): We propose a new discrimination-aware learning method to improve both ac...
- Algorithmic Discrimination: Formulation and Exploration in Deep Learning-based Face Biometrics (12/04/2019): The most popular face recognition benchmarks assume a distribution of su...
- Demographic Fairness in Face Identification: The Watchlist Imbalance Effect (06/15/2021): Recently, different researchers have found that the gallery composition ...
- Evidence of Demographic rather than Ideological Segregation in News Discussion on Reddit (02/15/2023): We evaluate homophily and heterophily among ideological and demographic ...
- A Deep Dive into Dataset Imbalance and Bias in Face Identification (03/15/2022): As the deployment of automated face recognition (FR) systems proliferate...
