HAD-GAN: A Human-perception Auxiliary Defense GAN model to Defend Adversarial Examples

09/17/2019
by   Wanting Yu, et al.

Adversarial examples reveal the vulnerability and poorly understood nature of neural networks, so studying defenses against them is of great practical significance. In fact, most adversarial examples that cause networks to misclassify are imperceptible to humans. In this paper, we propose a defense model that trains the classifier into a human-perception classification model with a shape preference. The proposed model, called HAD-GAN (Human-perception Auxiliary Defense GAN), consists of a TTN (Texture Transfer Network) and an auxiliary defense GAN (Generative Adversarial Network). The TTN extends the texture samples of a clean image, making classifiers focus more on shape, while the GAN provides a training framework for the model and generates the images we need. A series of experiments on MNIST, Fashion-MNIST and CIFAR10 shows that the proposed model outperforms state-of-the-art defense methods in network robustness and significantly improves the ability to defend against adversarial examples.
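To make the threat model concrete, the kind of human-imperceptible adversarial example the abstract refers to can be illustrated with the well-known Fast Gradient Sign Method (FGSM). The sketch below is not from the HAD-GAN paper: it uses a hypothetical fixed logistic classifier and a toy 3-feature input, and shows how a small per-feature nudge in the direction of the loss gradient flips the prediction.

```python
# Hypothetical illustration (not the paper's method): FGSM crafts an
# adversarial example for a fixed logistic classifier by nudging each
# input feature by eps in the sign of the loss gradient.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """Return class 0/1 from a logistic classifier."""
    return int(sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) >= 0.5)

def fgsm(w, b, x, y, eps):
    """One FGSM step: x_adv = x + eps * sign(d loss / d x).

    For the logistic loss, d loss / d x = (sigmoid(w.x + b) - y) * w.
    """
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    grad = [(p - y) * wi for wi in w]
    return [xi + eps * (1 if g > 0 else -1 if g < 0 else 0)
            for xi, g in zip(x, grad)]

# Toy classifier and a correctly classified input (both made up).
w, b = [2.0, -3.0, 1.0], 0.0
x, y = [0.5, -0.2, 0.1], 1

x_adv = fgsm(w, b, x, y, eps=0.3)
print(predict(w, b, x))      # 1: clean input classified correctly
print(predict(w, b, x_adv))  # 0: a 0.3-per-feature nudge fools it
```

On high-dimensional images the same idea works with a much smaller eps, which is why the perturbation is invisible to humans even though it changes the network's decision.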


Related research

- APE-GAN: Adversarial Perturbation Elimination with GAN (07/18/2017)
- Generative Adversarial Examples (05/21/2018)
- AID-Purifier: A Light Auxiliary Network for Boosting Adversarial Defense (07/14/2021)
- Improved Network Robustness with Adversary Critic (10/30/2018)
- VectorDefense: Vectorization as a Defense to Adversarial Examples (04/23/2018)
- Scale-free Photo-realistic Adversarial Pattern Attack (08/12/2022)
- Random Spiking and Systematic Evaluation of Defenses Against Adversarial Examples (12/05/2018)
