Salient Feature Extractor for Adversarial Defense on Deep Neural Networks

05/14/2021
by   Jinyin Chen, et al.

Recent years have witnessed unprecedented success by deep learning models in the field of computer vision. However, their vulnerability to carefully crafted adversarial examples has attracted increasing attention from researchers. Motivated by the observation that adversarial examples exploit non-robust features learned from the original dataset, we propose the concepts of salient features (SF) and trivial features (TF). The former represent class-related features, while the latter are usually exploited to mislead the model. We extract these two features with a coupled generative adversarial network (GAN) model and put forward a novel detection and defense method named salient feature extractor (SFE). Concretely, detection is realized by separating the input into SF and TF and comparing their difference, while defense is achieved by re-classifying the SF to recover the correct label. Extensive experiments on the MNIST, CIFAR-10, and ImageNet datasets show that SFE achieves state-of-the-art effectiveness and efficiency compared with baselines. Furthermore, we provide an interpretable understanding of the detection and defense process.
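The detection and defense rule sketched in the abstract can be illustrated in a few lines. Everything below is a hypothetical stand-in: the paper's extractor is a coupled GAN, whereas `extract_features`, the threshold `tau`, and the toy `classify` function here are illustrative assumptions only, not the authors' implementation.

```python
def extract_features(x):
    """Hypothetical stand-in for the coupled-GAN extractor: splits the
    input into a salient-feature part (SF) and a trivial-feature part
    (TF) such that x == sf + tf elementwise. Here we fake the split
    with a fixed magnitude cutoff purely for demonstration."""
    sf = [v if abs(v) > 0.5 else 0.0 for v in x]   # "class-related" part
    tf = [a - b for a, b in zip(x, sf)]            # residual perturbation
    return sf, tf

def l2(v):
    """Euclidean norm of a plain-list vector."""
    return sum(e * e for e in v) ** 0.5

def sfe_detect_and_defend(x, classify, tau=0.3):
    """Detection: flag the input as adversarial when the trivial-feature
    component is large relative to the salient one (tau is an assumed
    threshold). Defense: return the label obtained by re-classifying
    the salient features only."""
    sf, tf = extract_features(x)
    is_adversarial = l2(tf) / (l2(sf) + 1e-12) > tau
    return is_adversarial, classify(sf)
```

As a usage example with a toy linear classifier, a clean input whose TF component is zero passes undetected, while an input carrying extra small-magnitude perturbation mass is flagged, and both are labeled from their SF alone.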


Related research:
- 03/13/2021  Attack as Defense: Characterizing Adversarial Examples using Robustness
- 07/18/2017  APE-GAN: Adversarial Perturbation Elimination with GAN
- 02/03/2020  Defending Adversarial Attacks via Semantic Feature Manipulation
- 03/06/2019  GanDef: A GAN based Adversarial Training Defense for Neural Network Classifier
- 12/25/2018  Adversarial Feature Genome: a Data Driven Adversarial Examples Recognition Method
- 12/24/2021  NIP: Neuron-level Inverse Perturbation Against Adversarial Attacks
- 12/10/2019  Feature Losses for Adversarial Robustness
