Query-Free Attacks on Industry-Grade Face Recognition Systems under Resource Constraints

02/13/2018
by   Di Tang, et al.

To attack a deep neural network (DNN) based Face Recognition (FR) system, one needs to build substitute models that simulate the target, so that adversarial examples discovered on the substitutes also mislead the target. Such transferability is achieved in recent studies by querying the target to obtain data for training the substitutes. A real-world target, like the FR system of a law enforcement agency, however, is less accessible to the adversary. To attack such a system, a substitute of similar quality to the target is needed in order to identify their common defects. This is hard, since the adversary often lacks the resources to train such a model (a commercial FR system can require hundreds of millions of training images). We found in our research, however, that a resource-constrained adversary can still effectively approximate the target's capability to recognize specific individuals, by training biased substitutes on additional images of those who want to evade recognition (the subjects) or the victims to be impersonated (together called Points of Interest, or PoIs). This is made possible by a new property we discovered, called Nearly Local Linearity (NLL), which models the observation that an ideal DNN model produces image representations whose pairwise distances faithfully reflect the differences a human perceives among the input images. By simulating this property around the PoIs using the additional subject or victim data, we improve the transferability of black-box impersonation attacks by nearly 50%. In particular, we successfully attacked a commercial system trained on over 20 million images, using only 4 million images and 1/5 of the training time, yet achieving 60% transferability in an impersonation attack and 89% in a dodging attack.
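The abstract does not spell out how the NLL property is enforced during substitute training. As a rough illustration only, a regularizer in this spirit might require that blending two PoI images in pixel space by a factor lam moves the embedding a proportional fraction of the distance between the two endpoint embeddings. The function name `nll_penalty`, the linear pixel-space interpolation, and the toy backbone below are all illustrative assumptions, not the paper's actual formulation:

```python
# Hypothetical sketch of a "Nearly Local Linearity" (NLL) style regularizer.
# Assumption (ours, not the paper's): embedding distance along the pixel-space
# segment between two PoI images should grow roughly linearly in lam.
import torch
import torch.nn as nn
import torch.nn.functional as F


def nll_penalty(model: nn.Module, x_a: torch.Tensor, x_b: torch.Tensor,
                num_steps: int = 4) -> torch.Tensor:
    """Penalize deviation from near-linear embedding distances along the
    pixel-space segment between x_a and x_b (illustrative formulation)."""
    emb_a = F.normalize(model(x_a), dim=-1)
    emb_b = F.normalize(model(x_b), dim=-1)
    total_dist = (emb_a - emb_b).norm(dim=-1)      # endpoint-to-endpoint distance
    penalty = x_a.new_zeros(())
    for k in range(1, num_steps):
        lam = k / num_steps
        x_mid = (1.0 - lam) * x_a + lam * x_b      # blended image at fraction lam
        emb_mid = F.normalize(model(x_mid), dim=-1)
        dist_mid = (emb_mid - emb_a).norm(dim=-1)
        # under NLL, dist_mid should be roughly lam * total_dist
        penalty = penalty + ((dist_mid - lam * total_dist) ** 2).mean()
    return penalty / (num_steps - 1)


if __name__ == "__main__":
    # Toy stand-in for a face recognition backbone, just to show the call.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))
    x_a = torch.rand(8, 3, 32, 32)   # e.g. images of the subject
    x_b = torch.rand(8, 3, 32, 32)   # e.g. images of the victim (a PoI)
    loss = nll_penalty(model, x_a, x_b)
    loss.backward()                   # differentiable, so it can be added
    print(float(loss))                # to a substitute's training objective
```

In the setting the abstract describes, a penalty of this kind would presumably be added to the substitute's training loss on the extra subject/victim images, biasing the substitute toward the target's behavior around the PoIs rather than everywhere.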

