Perturbation Inactivation Based Adversarial Defense for Face Recognition

07/13/2022
by Min Ren, et al.

Deep learning-based face recognition models are vulnerable to adversarial attacks. To counter these attacks, most defense methods aim to improve the robustness of the recognition model against adversarial perturbations. However, the generalization capacity of these methods is quite limited: in practice, they remain vulnerable to unseen adversarial attacks. At the same time, deep learning models are fairly robust to general perturbations such as Gaussian noise. A straightforward approach is therefore to inactivate adversarial perturbations so that they can be handled as easily as general perturbations. In this paper, a plug-and-play adversarial defense method, named perturbation inactivation (PIN), is proposed to inactivate adversarial perturbations for adversarial defense. We find that perturbations lying in different subspaces influence the recognition model differently. There should be a subspace, called the immune space, in which perturbations have less adverse impact on the recognition model than in other subspaces. Hence, our method estimates the immune space and inactivates adversarial perturbations by restricting them to this subspace. The proposed method generalizes to unseen adversarial perturbations because it does not rely on any specific adversarial attack method. In extensive experiments, the proposed approach not only outperforms several state-of-the-art adversarial defense methods but also demonstrates superior generalization capacity. Moreover, it can be applied to four commercial APIs without additional training, indicating that it readily extends to existing face recognition systems. The source code is available at https://github.com/RenMin1991/Perturbation-Inactivate
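As a rough illustration of the subspace-restriction idea (not the authors' implementation), the sketch below assumes the immune space is already available as an orthonormal basis and pre-processes an input by projecting its deviation from a reference mean onto that basis, so any perturbation component outside the assumed immune space is discarded. The names immune_basis, mean_face, and pin_preprocess are hypothetical; the actual estimation of the immune space follows the paper.

    import numpy as np

    def pin_preprocess(image, immune_basis, mean_face):
        """Project an input image onto an (assumed) immune subspace.

        image:        flattened face image, shape (d,)
        immune_basis: orthonormal basis of the assumed immune space, shape (d, k)
        mean_face:    reference mean image, shape (d,)

        Components of the input (including any adversarial perturbation) that lie
        outside the subspace are removed; only the in-subspace part is kept.
        """
        deviation = image - mean_face                 # deviation carries the perturbation
        coeffs = immune_basis.T @ deviation           # coordinates in the immune space
        return mean_face + immune_basis @ coeffs      # reconstruction restricted to the subspace

    # Toy usage: a random orthonormal basis stands in for an estimated immune space.
    d, k = 112 * 112, 256
    rng = np.random.default_rng(0)
    immune_basis, _ = np.linalg.qr(rng.standard_normal((d, k)))
    mean_face = np.zeros(d)
    adversarial_image = rng.standard_normal(d)
    defended_image = pin_preprocess(adversarial_image, immune_basis, mean_face)

The defended image can then be fed to any face recognition model unchanged, which is what makes such a pre-processing defense plug-and-play.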


