Towards Assessing and Characterizing the Semantic Robustness of Face Recognition

02/10/2022
by   Juan C. Pérez, et al.

Deep Neural Networks (DNNs) lack robustness against imperceptible perturbations to their input, and Face Recognition Models (FRMs) based on DNNs inherit this vulnerability. We propose a methodology for assessing and characterizing the robustness of FRMs against semantic perturbations to their input. Our methodology causes FRMs to malfunction by designing adversarial attacks that search for identity-preserving modifications to faces. In particular, given a face, our attacks find identity-preserving variants of the face such that an FRM fails to recognize the images as belonging to the same identity. We model these identity-preserving semantic modifications via direction- and magnitude-constrained perturbations in the latent space of StyleGAN. We further propose to characterize the semantic robustness of an FRM by statistically describing the perturbations that induce the FRM to malfunction. Finally, we combine our methodology with a certification technique, thus providing (i) theoretical guarantees on the performance of an FRM, and (ii) a formal description of how an FRM may model the notion of face identity.
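The core idea of direction- and magnitude-constrained latent perturbations can be sketched in a few lines. The sketch below is illustrative only: `G` and `f` are toy linear stand-ins for the paper's StyleGAN generator and FRM embedding network, and the attack simply searches along one fixed latent direction `v` for the magnitude `alpha` (bounded by `eps` to loosely model identity preservation) that most changes the face embedding. All names and the search procedure are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Toy stand-ins (assumptions, not the paper's actual models) ---
# G: latent code -> image (StyleGAN in the paper; a fixed linear map here).
A = rng.standard_normal((64, 16))
def G(z):
    return A @ z

# f: image -> unit-norm identity embedding (the FRM; a linear map here).
B = rng.standard_normal((32, 64))
def f(x):
    e = B @ x
    return e / np.linalg.norm(e)

def cosine_distance(a, b):
    # Both inputs are unit vectors, so this lies in [0, 2].
    return 1.0 - float(a @ b)

def semantic_attack(z, direction, eps, n_steps=201):
    """Search a magnitude-constrained perturbation along one fixed
    latent direction for the step that most changes the embedding."""
    v = direction / np.linalg.norm(direction)
    e0 = f(G(z))
    best_alpha, best_dist = 0.0, 0.0
    for alpha in np.linspace(-eps, eps, n_steps):
        d = cosine_distance(e0, f(G(z + alpha * v)))
        if d > best_dist:
            best_alpha, best_dist = alpha, d
    return best_alpha, best_dist

z = rng.standard_normal(16)          # latent code of a given face
v = rng.standard_normal(16)          # candidate semantic direction
alpha, dist = semantic_attack(z, v, eps=0.5)
print(f"worst-case alpha = {alpha:+.3f}, embedding distance = {dist:.4f}")
```

Collecting the worst-case `alpha` values over many faces and directions would then give the kind of statistical description of failure-inducing perturbations that the methodology proposes.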


Related research

05/09/2019 · Adversarial Image Translation: Unrestricted Adversarial Examples in Face Recognition Systems
02/22/2018 · Unravelling Robustness of Deep Learning based Face Recognition Against Adversarial Attacks
11/27/2020 · Robust Attacks on Deep Learning Face Recognition in the Physical World
12/31/2017 · Adversarial Generative Nets: Neural Network Attacks on State-of-the-Art Face Recognition
10/13/2022 · Adv-Attribute: Inconspicuous and Transferable Adversarial Attack on Face Recognition
04/07/2023 · Improving Identity-Robustness for Face Models
08/16/2022 · OrthoMAD: Morphing Attack Detection Through Orthogonal Identity Disentanglement
