Adversarial Light Projection Attacks on Face Recognition Systems: A Feasibility Study

03/24/2020
by Luan Nguyen, et al.

Deep learning-based systems have been shown to be vulnerable to adversarial attacks in both the digital and physical domains. While feasible, digital attacks have limited applicability against deployed systems, including face recognition systems, where an adversary typically has access to the input but not to the transmission channel. In such a setting, physical attacks that deliver a malicious input directly through the input channel pose a greater threat. We investigate the feasibility of conducting real-time physical attacks on face recognition systems using adversarial light projections. A setup comprising a commercially available web camera and a projector is used to conduct the attack. The adversary uses a transformation-invariant adversarial pattern generation method to generate a digital adversarial pattern from one or more images of the target available to the adversary. The digital adversarial pattern is then projected onto the adversary's face in the physical domain to either impersonate a target (impersonation) or evade recognition (obfuscation). We conduct preliminary experiments using two open-source and one commercial face recognition system on a pool of 50 subjects. Our experimental results demonstrate the vulnerability of face recognition systems to light projection attacks in both white-box and black-box attack settings.
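The transformation-invariant pattern generation described above can be sketched as an expectation-over-transformation (EOT) style optimization: the adversary ascends the gradient of a similarity loss averaged over random transforms that approximate the projector-camera channel. The sketch below is a minimal, hypothetical illustration using a toy linear embedding model in place of a real face recognizer; the model, dimensions, transform distribution, and budget `eps` are all assumptions for illustration, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a face-embedding network: a fixed random linear map.
# (The paper attacks real CNN-based recognizers; this only shows the loop.)
D_IN, D_EMB = 64, 16
W = rng.normal(size=(D_EMB, D_IN)) / np.sqrt(D_IN)

def embed(x):
    z = W @ x
    return z / np.linalg.norm(z)

def cosine(a, b):
    return float(a @ b)

attacker = rng.normal(size=D_IN)          # adversary's face (flattened)
target = rng.normal(size=D_IN)            # target identity (impersonation)
target_emb = embed(target)

def random_transform(x):
    # Hypothetical projector-camera channel: brightness gain plus noise.
    gain = rng.uniform(0.8, 1.2)
    noise = rng.normal(scale=0.05, size=x.shape)
    return gain * x + noise

pattern = np.zeros(D_IN)                  # the additive light pattern
lr, eps = 0.5, 2.0                        # step size and L-inf budget
for step in range(200):
    grad = np.zeros(D_IN)
    for _ in range(8):                    # expectation over transforms
        x = random_transform(attacker + pattern)
        z = W @ x
        n = np.linalg.norm(z)
        emb = z / n
        # Gradient of cos(target_emb, Wx/|Wx|) with respect to x.
        g_emb = (target_emb - cosine(emb, target_emb) * emb) / n
        grad += W.T @ g_emb
    pattern += lr * grad / 8
    pattern = np.clip(pattern, -eps, eps)  # keep the projection subtle

before = cosine(embed(attacker), target_emb)
after = cosine(embed(attacker + pattern), target_emb)
print(f"similarity to target: {before:+.3f} -> {after:+.3f}")
```

For obfuscation rather than impersonation, the same loop would descend (rather than ascend) the similarity between the transformed adversary image and the adversary's own enrolled embedding.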


Related research:

- Adv-Makeup: A New Imperceptible and Transferable Attack on Face Recognition (05/07/2021)
- Light Can Hack Your Face! Black-box Backdoor Attack on Face Recognition Systems (09/15/2020)
- Dodging Attack Using Carefully Crafted Natural Makeup (09/14/2021)
- Projecting Trouble: Light Based Adversarial Attacks on Deep Learning Classifiers (10/16/2018)
- Adversarial Relighting against Face Recognition (08/18/2021)
- Examining the Human Perceptibility of Black-Box Adversarial Attacks on Face Recognition (07/19/2021)
- On the Need of Neuromorphic Twins to Detect Denial-of-Service Attacks on Communication Networks (10/29/2022)
