Simulated Adversarial Testing of Face Recognition Models

06/08/2021
by Nataniel Ruiz, et al.

Most machine learning models are validated and tested on fixed datasets, which can give an incomplete picture of a model's capabilities and weaknesses. Such weaknesses may only be revealed at test time in the real world, and the resulting failures can cost profits, time, or even lives in critical applications. To alleviate this issue, we note that simulators can be controlled in a fine-grained manner through interpretable parameters that explore the semantic image manifold. In this work, we propose a framework for learning how to test machine learning algorithms using simulators in an adversarial manner, in order to find weaknesses in a model before deploying it in critical scenarios. We apply this framework to face recognition. To our knowledge, we are the first to show that weaknesses of models trained on real data can be discovered using simulated samples: with our proposed method, we find adversarial synthetic faces that fool contemporary face recognition models, demonstrating that these models have weaknesses that commonly used validation datasets do not measure. We hypothesize that such adversarial examples are not isolated, but usually lie in connected components of the simulator's latent space. We present a method to find these adversarial regions, as opposed to the isolated adversarial points typical of the adversarial example literature.
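The core idea above — searching a simulator's interpretable parameter space for configurations that fool a trained model — can be illustrated with a minimal sketch. This is not the paper's actual method; the `render_face` simulator and `match_score` recognizer below are toy stand-ins for a real renderer and a real face recognition network, and the gradient-free random search is just one simple way to explore the parameter space adversarially.

```python
import numpy as np

def render_face(params):
    # Toy "simulator": maps interpretable parameters (e.g. pose, lighting)
    # to an embedding standing in for a rendered face image. A real system
    # would render an image and embed it with the face recognition model.
    return np.tanh(params)

def match_score(embedding, reference):
    # Toy "recognizer": cosine similarity between the rendered face's
    # embedding and a reference identity embedding.
    denom = np.linalg.norm(embedding) * np.linalg.norm(reference) + 1e-8
    return float(embedding @ reference / denom)

def adversarial_search(reference, dim=4, iters=200, step=0.1, seed=0):
    # Gradient-free random search over simulator parameters: keep any
    # proposal that lowers the match score, i.e. makes the recognizer
    # fail on a face the simulator can plausibly produce.
    rng = np.random.default_rng(seed)
    params = np.ones(dim)  # start from a well-recognized configuration
    best = match_score(render_face(params), reference)
    for _ in range(iters):
        proposal = params + step * rng.standard_normal(dim)
        score = match_score(render_face(proposal), reference)
        if score < best:
            params, best = proposal, score
    return params, best

reference = render_face(np.ones(4))  # identity the model should match
params, score = adversarial_search(reference)
print(f"adversarial match score: {score:.3f}")
```

Because the search operates on the simulator's semantic parameters rather than on raw pixels, each point it visits corresponds to a valid face, which is what makes it possible to map out connected adversarial regions rather than isolated pixel-level perturbations.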


