Dodging Attack Using Carefully Crafted Natural Makeup

09/14/2021
by Nitzan Guetta, et al.

Deep learning face recognition models are used by state-of-the-art surveillance systems to identify individuals passing through public areas (e.g., airports). Previous studies have demonstrated the use of adversarial machine learning (AML) attacks to successfully evade identification by such systems, both in the digital and physical domains. Attacks in the physical domain, however, require significant manipulation of the human participant's face, which can raise suspicion among human observers (e.g., airport security officers). In this study, we present a novel black-box AML attack that carefully crafts natural makeup which, when applied to a human participant, prevents the participant from being identified by facial recognition models. We evaluated our proposed attack against the ArcFace face recognition model, with 20 participants in a real-world setup that includes two cameras, different shooting angles, and different lighting conditions. The evaluation results show that in the digital domain, the face recognition system was unable to identify all of the participants, while in the physical domain, the face recognition system was able to identify the participants in only 1.22% of the frames (compared to 47.57% without makeup), which is below a reasonable threshold of a realistic operational environment.
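The identification rates above reflect a standard face-verification decision rule: a probe face is "identified" when the similarity between its embedding and an enrolled reference embedding exceeds a threshold, and the makeup attack succeeds when it pushes the similarity below that threshold. The following is a minimal sketch of that decision rule, not the paper's implementation; the 512-dimensional embedding size matches ArcFace, but the cosine-similarity metric and the 0.4 threshold are illustrative assumptions.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_identified(probe: np.ndarray, enrolled: np.ndarray,
                  threshold: float = 0.4) -> bool:
    """A probe face is identified when its embedding is similar enough
    to the enrolled one; a dodging attack aims to drive the similarity
    below the threshold. (Threshold value is a hypothetical choice.)"""
    return cosine_similarity(probe, enrolled) >= threshold

# Toy example with random 512-d vectors (ArcFace embeddings are 512-d).
rng = np.random.default_rng(0)
enrolled = rng.normal(size=512)
same_face = enrolled + 0.1 * rng.normal(size=512)  # small perturbation: matches
other_face = rng.normal(size=512)                  # unrelated vector: no match
```

In an evaluation like the one described, this check would be applied per video frame, and the identification rate is the fraction of frames in which `is_identified` returns true.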

