Light Can Hack Your Face! Black-box Backdoor Attack on Face Recognition Systems

09/15/2020
by Haoliang Li, et al.

Deep neural networks (DNNs) have shown great success in many computer vision applications, but they are also known to be susceptible to backdoor attacks. Most existing backdoor attack approaches assume that the targeted DNN is always available and that the attacker can inject a specific pattern into the training data and then fine-tune the model. In practice, however, such an attack may not be feasible, since the DNN model may be encrypted and accessible only inside a secure enclave. In this paper, we propose a novel black-box backdoor attack technique on face recognition systems that can be conducted without any knowledge of the targeted DNN model. Specifically, we propose a backdoor attack with a novel color-stripe pattern trigger, which can be generated by modulating an LED with a specialized waveform, and we use an evolutionary computing strategy to optimize the waveform for the backdoor attack. Our attack operates under very mild conditions: 1) the adversary cannot manipulate the input in an unnatural way (e.g., by injecting adversarial noise); 2) the adversary cannot access the training database; and 3) the adversary has no knowledge of the victim party's training model or training set. We show that the backdoor trigger can be quite effective: for face recognition and verification with at most three authentication attempts, the attack success rate reaches up to 88% in our simulation study and up to 40% in our physical-domain study. Finally, we evaluate several state-of-the-art potential defenses against backdoor attacks and find that our attack remains effective. We highlight that our study reveals a new physical backdoor attack, which calls attention to the security of existing face recognition/verification techniques.
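The abstract does not detail how the stripe trigger is synthesized or how the waveform is optimized. As a rough, hypothetical illustration only (the parameterization, function names, and scoring procedure below are assumptions, not the authors' method), the following Python sketch overlays a row-wise color-stripe pattern, resembling the banding a flickering LED can induce under a rolling-shutter camera, and tunes its parameters with a simple evolutionary loop against a placeholder scoring function that stands in for querying the victim face recognition system.

```python
import numpy as np

# Hypothetical sketch: simulate a color-stripe trigger such as a flickering LED
# might produce under a rolling-shutter camera, then search for effective
# waveform parameters with a simple evolutionary strategy. The parameterization
# (per-channel amplitude, stripe period, phase) and the scoring function are
# illustrative assumptions, not the paper's exact method.

def apply_stripe_trigger(image, amplitude, period, phase):
    """Overlay horizontal color stripes on an HxWx3 float image in [0, 1].

    amplitude: length-3 array, per-channel stripe strength.
    period:    stripe period in pixel rows.
    phase:     vertical offset of the stripe pattern.
    """
    rows = np.arange(image.shape[0])
    # Sinusoidal row-wise modulation approximating rolling-shutter banding.
    stripe = 0.5 * (1.0 + np.sin(2.0 * np.pi * rows / period + phase))
    stripe = stripe[:, None, None] * np.asarray(amplitude)[None, None, :]
    return np.clip(image + stripe, 0.0, 1.0)


def attack_fitness(params, images, score_fn):
    """Average attack score over a small image set (higher is better).

    score_fn is a stand-in for querying the victim face recognition system;
    it must map a triggered image to a scalar score.
    """
    amplitude, period, phase = params[:3], params[3], params[4]
    scores = [score_fn(apply_stripe_trigger(img, amplitude, period, phase))
              for img in images]
    return float(np.mean(scores))


def evolve_waveform(images, score_fn, pop_size=20, generations=30, seed=0):
    """Simple (mu + lambda)-style evolutionary search over trigger parameters."""
    rng = np.random.default_rng(seed)
    # Parameters: [amp_r, amp_g, amp_b, period, phase]
    low = np.array([0.0, 0.0, 0.0, 8.0, 0.0])
    high = np.array([0.3, 0.3, 0.3, 64.0, 2.0 * np.pi])
    pop = rng.uniform(low, high, size=(pop_size, 5))
    for _ in range(generations):
        fitness = np.array([attack_fitness(p, images, score_fn) for p in pop])
        parents = pop[np.argsort(fitness)[-pop_size // 2:]]          # keep top half
        children = parents + rng.normal(scale=0.05, size=parents.shape) * (high - low)
        pop = np.clip(np.vstack([parents, children]), low, high)
    fitness = np.array([attack_fitness(p, images, score_fn) for p in pop])
    return pop[int(np.argmax(fitness))]


if __name__ == "__main__":
    # Toy usage with random "face" images and a dummy scoring function.
    demo_images = [np.random.default_rng(i).random((112, 112, 3)) for i in range(4)]
    dummy_score = lambda img: float(img[:, :, 0].mean())  # placeholder for a model query
    best = evolve_waveform(demo_images, dummy_score, pop_size=10, generations=5)
    print("best trigger parameters:", best)
```

In a real black-box setting, the placeholder score_fn would be replaced by the recognition or verification score returned by the targeted system for the attacker's chosen identity, and the evolved parameters would then have to be mapped back to a physically realizable LED modulation waveform.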
