Adv-Makeup: A New Imperceptible and Transferable Attack on Face Recognition

05/07/2021
by   Bangjie Yin, et al.
Deep neural networks, particularly face recognition models, have been shown to be vulnerable to both digital and physical adversarial examples. However, existing adversarial examples against face recognition systems either lack transferability to black-box models or cannot be implemented in practice. In this paper, we propose a unified adversarial face generation method, Adv-Makeup, which can realize imperceptible and transferable attacks under the black-box setting. Adv-Makeup develops a task-driven makeup generation method with a blending module to synthesize imperceptible eye shadow over the orbital region of faces. To achieve transferability, Adv-Makeup implements a fine-grained meta-learning adversarial attack strategy that learns more general attack features from various models. Compared to existing techniques, extensive visualization results demonstrate that Adv-Makeup is capable of generating much more imperceptible attacks in both digital and physical scenarios. Meanwhile, extensive quantitative experiments show that Adv-Makeup can significantly improve the attack success rate under the black-box setting, even when attacking commercial systems.
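The abstract describes two components: a blending module that pastes a generated eye-shadow patch into the orbital region, and a fine-grained meta-learning loop that trains the attack against several white-box face-recognition models while validating on a held-out one. Below is a minimal PyTorch sketch of these two ideas, not the authors' implementation; names such as `blend_patch`, `cosine_attack_loss`, `orbital_mask`, and the `generator` module are hypothetical placeholders.

```python
# Hedged sketch of Adv-Makeup-style training: eye-shadow blending plus a
# fine-grained meta-learning adversarial attack over multiple FR models.
# Assumes PyTorch >= 2.0 and a list of pre-trained FR models `fr_models`.
import random
import torch
import torch.nn.functional as F

def blend_patch(face, patch, orbital_mask):
    """Alpha-blend a generated eye-shadow patch into the orbital region."""
    return face * (1 - orbital_mask) + patch * orbital_mask

def cosine_attack_loss(model, adv_face, target_face):
    """Impersonation loss: pull the adversarial embedding toward the target identity."""
    emb_adv = F.normalize(model(adv_face), dim=-1)
    emb_tgt = F.normalize(model(target_face), dim=-1)
    return 1.0 - (emb_adv * emb_tgt).sum(dim=-1).mean()

def meta_attack_step(generator, opt, fr_models, face, target, orbital_mask, inner_lr=0.01):
    """One meta-learning step: attack a subset of models (meta-train), then check
    that the simulated update also fools a held-out model (meta-test), so the
    generator learns attack features that transfer across architectures."""
    models = random.sample(fr_models, len(fr_models))
    meta_train, meta_test = models[:-1], models[-1]

    adv_face = blend_patch(face, generator(face), orbital_mask)

    # Meta-train: accumulate attack loss over the sampled white-box models.
    train_loss = sum(cosine_attack_loss(m, adv_face, target) for m in meta_train)

    # Simulated inner update of the generator parameters.
    grads = torch.autograd.grad(train_loss, generator.parameters(), create_graph=True)
    fast_weights = {name: p - inner_lr * g
                    for (name, p), g in zip(generator.named_parameters(), grads)}

    # Meta-test: evaluate the updated generator on the held-out model.
    patch_fast = torch.func.functional_call(generator, fast_weights, (face,))
    test_loss = cosine_attack_loss(meta_test, blend_patch(face, patch_fast, orbital_mask), target)

    opt.zero_grad()
    (train_loss + test_loss).backward()
    opt.step()
    return train_loss.item(), test_loss.item()
```

In this sketch the meta-test loss back-propagates through the simulated inner update (`create_graph=True`), which is what pushes the generator toward perturbations that generalize beyond the models used in the meta-train split.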


research
06/25/2022

RSTAM: An Effective Black-Box Impersonation Attack on Face Recognition using a Mobile and Compact Printer

Face recognition has achieved considerable progress in recent years than...
research
03/24/2020

Adversarial Light Projection Attacks on Face Recognition Systems: A Feasibility Study

Deep learning-based systems have been shown to be vulnerable to adversar...
research
02/13/2018

Query-Free Attacks on Industry-Grade Face Recognition Systems under Resource Constraints

To attack a deep neural network (DNN) based Face Recognition (FR) system...
research
03/22/2023

Sibling-Attack: Rethinking Transferable Adversarial Attacks against Face Recognition

A hard challenge in developing practical face recognition (FR) attacks i...
research
03/09/2022

Controllable Evaluation and Generation of Physical Adversarial Patch on Face Recognition

Recent studies have revealed the vulnerability of face recognition model...
research
06/26/2023

3D-Aware Adversarial Makeup Generation for Facial Privacy Protection

The privacy and security of face data on social media are facing unprece...
research
03/28/2023

Towards Effective Adversarial Textured 3D Meshes on Physical Face Recognition

Face recognition is a prevailing authentication solution in numerous bio...
