FACESEC: A Fine-grained Robustness Evaluation Framework for Face Recognition Systems

04/08/2021
by   Liang Tong, et al.

We present FACESEC, a framework for fine-grained robustness evaluation of face recognition systems. FACESEC evaluation is performed along four dimensions of adversarial modeling: the nature of the perturbation (e.g., pixel-level or face accessories), the attacker's system knowledge (of the training data and learning architecture), the attacker's goal (dodging or impersonation), and the attacker's capability (perturbations tailored to individual inputs or shared across sets of inputs). We use FACESEC to study five face recognition systems in both closed-set and open-set settings, and to evaluate the state-of-the-art approach for defending them against physically realizable attacks. We find that accurate knowledge of the neural architecture is significantly more important than knowledge of the training data in black-box attacks. Moreover, we observe that open-set face recognition systems are more vulnerable than closed-set systems under different types of attacks. The efficacy of attacks under other threat model variations, however, appears highly dependent on both the nature of the perturbation and the neural network architecture. For example, attacks that involve adversarial face masks are usually more potent, even against adversarially trained models, and the ArcFace architecture tends to be more robust than the others.
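The four evaluation dimensions described above can be pictured as a simple threat-model configuration. The following is a minimal, hypothetical sketch (the class and attribute names are illustrative and not taken from the FACESEC codebase):

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical encoding of the four threat-model dimensions named in the
# abstract; names are illustrative assumptions, not the paper's actual API.

class Perturbation(Enum):
    PIXEL = "pixel-level"            # unconstrained digital perturbation
    ACCESSORY = "face accessory"     # e.g., eyeglass frames or stickers
    FACE_MASK = "adversarial face mask"

class Goal(Enum):
    DODGING = "dodging"              # evade being matched to one's own identity
    IMPERSONATION = "impersonation"  # be matched to a chosen target identity

@dataclass(frozen=True)
class ThreatModel:
    perturbation: Perturbation
    knows_architecture: bool   # attacker knowledge of the learning architecture
    knows_training_data: bool  # attacker knowledge of the training data
    goal: Goal
    universal: bool            # True: one perturbation shared across a set of inputs

    def is_black_box(self) -> bool:
        # Black-box here means no knowledge of architecture or training data.
        return not (self.knows_architecture or self.knows_training_data)

# Example: a black-box, per-input dodging attack using an adversarial face mask.
tm = ThreatModel(Perturbation.FACE_MASK,
                 knows_architecture=False,
                 knows_training_data=False,
                 goal=Goal.DODGING,
                 universal=False)
print(tm.is_black_box())  # True
```

An evaluation framework in this style would enumerate such configurations and report attack success per cell, which is what lets the paper isolate, e.g., the effect of architecture knowledge versus training-data knowledge.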


