Robust Attacks on Deep Learning Face Recognition in the Physical World

11/27/2020
by Meng Shen, et al.

Deep neural networks (DNNs) have been increasingly used in face recognition (FR) systems. Recent studies, however, show that DNNs are vulnerable to adversarial examples, which can potentially mislead DNN-based FR systems in the physical world. Existing attacks on these systems either generate perturbations that work only in the digital world, or rely on customized equipment to generate perturbations and are not robust under varying physical conditions. In this paper, we propose FaceAdv, a physical-world attack that crafts adversarial stickers to deceive FR systems. It consists mainly of a sticker generator and a transformer: the former crafts several stickers with different shapes, while the latter digitally attaches the stickers to human faces and provides feedback to the generator to improve the effectiveness of the stickers. We conduct extensive experiments to evaluate the effectiveness of FaceAdv in attacking three typical FR systems (i.e., ArcFace, CosFace and FaceNet). The results show that, compared with a state-of-the-art attack, FaceAdv significantly improves the success rates of both dodging and impersonation attacks. We also conduct comprehensive evaluations to demonstrate the robustness of FaceAdv.
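To make the generator/transformer feedback loop concrete, below is a minimal PyTorch sketch of one optimization step of a dodging attack. Everything here is an illustrative assumption rather than the authors' implementation: the sticker generator is a toy MLP, the "transformer" is reduced to digitally pasting a patch at a fixed location, and the FR model is a dummy encoder standing in for ArcFace/CosFace/FaceNet.

```python
# Hypothetical sketch of a sticker-generator feedback loop (not the FaceAdv code).
import torch
import torch.nn as nn

class StickerGenerator(nn.Module):
    """Maps a latent code to a small RGB sticker patch in [-1, 1]."""
    def __init__(self, latent_dim=100, sticker_size=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 3 * sticker_size * sticker_size), nn.Tanh(),
        )
        self.sticker_size = sticker_size

    def forward(self, z):
        out = self.net(z)
        return out.view(-1, 3, self.sticker_size, self.sticker_size)

def attach_sticker(face, sticker, top, left):
    """Stand-in for the transformer: digitally paste the sticker onto the face image."""
    patched = face.clone()
    s = sticker.shape[-1]
    patched[..., top:top + s, left:left + s] = (sticker + 1) / 2  # map [-1, 1] -> [0, 1]
    return patched

def dodging_loss(fr_model, patched, anchor_embedding):
    """Dodging objective: push the patched face's embedding away from the victim's own embedding."""
    emb = fr_model(patched)
    return torch.cosine_similarity(emb, anchor_embedding).mean()

# One optimization step; face images and the FR encoder are dummy placeholders.
gen = StickerGenerator()
opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
z = torch.randn(4, 100)                       # batch of latent codes
face = torch.rand(4, 3, 112, 112)             # dummy aligned face images
fr_model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 112 * 112, 128))  # dummy FR encoder
anchor = fr_model(face).detach()              # embeddings of the clean faces

sticker = gen(z)
patched = attach_sticker(face, sticker, top=40, left=40)
loss = dodging_loss(fr_model, patched, anchor)
opt.zero_grad()
loss.backward()                               # feedback from the FR loss flows back to the generator
opt.step()
```

An impersonation variant would instead maximize the similarity between the patched face's embedding and a target identity's embedding; the feedback path through the transformer to the generator stays the same.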
