HAD-GAN: A Human-perception Auxiliary Defense GAN model to Defend Adversarial Examples

09/17/2019, by Wanting Yu et al.

Adversarial examples reveal the vulnerability and poorly understood nature of neural networks, so defending against them is of great practical significance. In fact, most adversarial examples that cause networks to misclassify are imperceptible to humans. In this paper, we propose a defense model that trains the classifier into a human-perception classification model with a preference for shape. The proposed model, consisting of a Texture Transfer Network (TTN) and an auxiliary defense Generative Adversarial Network (GAN), is called HAD-GAN (Human-perception Auxiliary Defense GAN). The TTN extends the texture samples of a clean image, making the classifier focus more on shape, while the GAN provides a training framework for the model and generates the images we need. A series of experiments on MNIST, Fashion-MNIST and CIFAR10 shows that the proposed model outperforms state-of-the-art defense methods in network robustness and significantly improves the ability to defend against adversarial examples.
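The core augmentation idea described above — pairing each clean image with texture-transferred variants so that shape becomes the only label-consistent cue — can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual TTN or training code; the function names, the dict-based "image" representation, and the number of variants per image are all assumptions made for illustration.

```python
import random

def texture_transfer(image, style):
    # Stand-in for the TTN: keep the image's shape content, swap in a new texture.
    # A real TTN would be a learned style-transfer network; this is a placeholder.
    return {"shape": image["shape"], "texture": style}

def augment_with_textures(dataset, styles, k=3):
    # Extend each clean (image, label) pair with k texture-transferred copies,
    # all sharing the same label, so the classifier is pushed toward shape cues.
    augmented = []
    for image, label in dataset:
        augmented.append((image, label))
        for style in random.sample(styles, k):
            augmented.append((texture_transfer(image, style), label))
    return augmented
```

A classifier trained on the augmented set sees the same shape under many textures, which is the "shape preference" the abstract attributes to human perception.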



