Light Can Hack Your Face! Black-box Backdoor Attack on Face Recognition Systems

by Haoliang Li et al.

Deep neural networks (DNNs) have shown great success in many computer vision applications, but they are also known to be susceptible to backdoor attacks. Most existing backdoor attack approaches assume that the targeted DNN is always available, so that an attacker can inject a specific pattern into the training data and fine-tune the model. In practice, however, such an attack may not be feasible, as the DNN model may be encrypted and accessible only within a secure enclave. In this paper, we propose a novel black-box backdoor attack on face recognition systems that can be conducted without any knowledge of the targeted DNN model. Specifically, we propose a backdoor attack with a novel color-stripe trigger pattern, which can be generated physically by modulating an LED with a specially designed waveform, and we use an evolutionary computing strategy to optimize the waveform for the attack. Our attack operates under very mild conditions: 1) the adversary cannot manipulate the input in an unnatural way (e.g., by injecting adversarial noise); 2) the adversary cannot access the training database; 3) the adversary has no knowledge of the victim's model or training set. We show that the backdoor trigger can be quite effective, with an attack success rate of up to 88% in our simulation study and up to 40% in our physical-domain study on face recognition and verification tasks, allowing at most three attempts per authentication. Finally, we evaluate several state-of-the-art defenses against backdoor attacks and find that our attack remains effective. Our study reveals a new physical backdoor attack, which calls attention to the security of existing face recognition/verification techniques.
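The two key ideas in the abstract, a color-stripe trigger produced by LED flicker captured through a rolling-shutter camera, and black-box waveform optimization via evolutionary search, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the sinusoidal stripe model, the function names, and the simple (1+1)-style evolution strategy are all assumptions made for the example.

```python
import numpy as np

def add_stripe_trigger(image, amplitude, frequency, phase, channel=0):
    """Overlay horizontal color stripes on an H x W x 3 uint8 image.

    A rolling-shutter sensor exposes rows at slightly different times, so a
    flickering LED shows up as row-wise brightness stripes; here the flicker
    is modeled (as an assumption) by a sinusoid over the row index.
    """
    h, _, _ = image.shape
    rows = np.arange(h)
    # Per-row intensity offset in [0, amplitude] for the chosen color channel.
    stripe = amplitude * (0.5 + 0.5 * np.sin(2 * np.pi * frequency * rows / h + phase))
    out = image.astype(np.float64)
    out[..., channel] += stripe[:, None]
    return np.clip(out, 0, 255).astype(np.uint8)

def optimize_waveform(fitness, init=(40.0, 8.0, 0.0), sigma=2.0, iters=200, seed=0):
    """Black-box (1+1)-style evolutionary search over waveform parameters.

    `fitness` is any scalar objective queried without gradients (e.g., how
    often the triggered image is accepted by the target system). The paper's
    actual evolutionary strategy may differ; this is a generic sketch.
    """
    rng = np.random.default_rng(seed)
    best = np.asarray(init, dtype=np.float64)
    best_fit = fitness(best)
    for _ in range(iters):
        candidate = best + sigma * rng.standard_normal(best.shape)
        f = fitness(candidate)
        if f > best_fit:  # keep the mutant only if it improves the objective
            best, best_fit = candidate, f
    return best, best_fit
```

In a real attack, the fitness function would involve capturing (or simulating) a face image under the candidate LED waveform and querying the recognition system, so only black-box access is needed.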



Adversarial Light Projection Attacks on Face Recognition Systems: A Feasibility Study

Deep learning-based systems have been shown to be vulnerable to adversar...

Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning

Deep learning models have achieved high performance on many tasks, and t...

Adversarial Generative Nets: Neural Network Attacks on State-of-the-Art Face Recognition

In this paper we show that misclassification attacks against face-recogn...

Measurement-driven Security Analysis of Imperceptible Impersonation Attacks

The emergence of Internet of Things (IoT) brings about new security chal...

Am I a Real or Fake Celebrity? Measuring Commercial Face Recognition Web APIs under Deepfake Impersonation Attack

Recently, significant advancements have been made in face recognition te...

Morphing Attack Potential

In security systems the risk assessment in the sense of common criteria ...

Query-Free Attacks on Industry-Grade Face Recognition Systems under Resource Constraints

To attack a deep neural network (DNN) based Face Recognition (FR) system...
