Adversarial Generative Nets: Neural Network Attacks on State-of-the-Art Face Recognition

12/31/2017
by Mahmood Sharif, et al.

In this paper we show that misclassification attacks against face-recognition systems based on deep neural networks (DNNs) are more dangerous than previously demonstrated, even in contexts where the adversary can manipulate only her physical appearance (versus directly manipulating the image input to the DNN). Specifically, we show how to create eyeglasses that, when worn, can succeed in targeted (impersonation) or untargeted (dodging) attacks while improving on previous work in one or more of three facets: (i) inconspicuousness to onlooking observers, which we test through a user study; (ii) robustness of the attack against proposed defenses; and (iii) scalability in the sense of decoupling eyeglass creation from the subject who will wear them, i.e., by creating "universal" sets of eyeglasses that facilitate misclassification. Central to these improvements are adversarial generative nets, a method we propose to generate physically realizable attack artifacts (here, eyeglasses) automatically.
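The core idea the abstract describes is optimizing an adversarial perturbation that is confined to a physically realizable region (the eyeglass frames) so that a classifier outputs a chosen identity. A minimal sketch of that idea follows, using a toy linear softmax classifier as a stand-in; everything here (the model, the mask layout, the step sizes) is illustrative only — the paper attacks deep face-recognition DNNs and, rather than per-image gradient descent, trains adversarial generative nets to emit inconspicuous, printable eyeglass textures.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a face classifier: a small linear softmax model.
# (Illustrative only; the paper targets deep face-recognition DNNs.)
n_pixels, n_classes = 64, 5
W = rng.normal(size=(n_classes, n_pixels)) * 0.1

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# A "face" image, and a binary mask marking the eyeglass-shaped region
# where the physical perturbation is allowed to live.
face = rng.normal(size=n_pixels)
mask = np.zeros(n_pixels)
mask[:16] = 1.0  # hypothetical eyeglass region

def impersonate(face, mask, target, steps=500, lr=1.0):
    """Targeted (impersonation) attack: gradient-descend a perturbation,
    restricted to the masked pixels, until the classifier assigns the
    image to `target`."""
    delta = np.zeros_like(face)
    onehot = np.eye(n_classes)[target]
    for _ in range(steps):
        p = softmax(W @ (face + mask * delta))
        # For a linear softmax model, d(cross-entropy)/dx = W^T (p - onehot);
        # multiplying by the mask confines the update to the eyeglass pixels.
        delta -= lr * mask * (W.T @ (p - onehot))
    return delta

target = 3
delta = impersonate(face, mask, target)
pred = int(np.argmax(W @ (face + mask * delta)))
```

Dodging (untargeted misclassification) follows the same pattern with the loss sign flipped on the subject's own class. The "universal" eyeglasses in the paper go further still: a single generated texture is optimized to cause misclassification across many wearers, decoupling eyeglass creation from any one subject.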


