Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models

05/17/2018
by Pouya Samangouei, et al.

In recent years, deep neural network approaches have been widely adopted for machine learning tasks, including classification. However, they have been shown to be vulnerable to adversarial perturbations: carefully crafted small perturbations that cause legitimate images to be misclassified. We propose Defense-GAN, a new framework that leverages the expressive power of generative models to defend deep neural networks against such attacks. Defense-GAN is trained to model the distribution of unperturbed images. At inference time, it finds an output close to the given image that does not contain the adversarial changes, and this output is then fed to the classifier. The method can be used with any classification model and does not modify the classifier's architecture or training procedure. It can also serve as a defense against any attack, since it does not assume knowledge of the process by which the adversarial examples are generated. We empirically show that Defense-GAN is consistently effective against different attack methods and improves on existing defense strategies. Our code is publicly available at https://github.com/kabkabm/defensegan.
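To make the inference-time step concrete, below is a minimal sketch of the reconstruction procedure the abstract describes: the input is projected onto the range of a pretrained generator G by minimizing the reconstruction error ||G(z) - x||^2 over the latent code z, using gradient descent with several random restarts. The PyTorch framing, the defense_gan_project name, the latent dimension, and the optimizer settings are illustrative assumptions, not the authors' exact configuration; the linked repository contains the reference implementation.

import torch

def defense_gan_project(x, generator, latent_dim=128, num_restarts=10,
                        num_steps=200, lr=0.05):
    """Project an input x onto the range of a pretrained generator by
    minimizing ||G(z) - x||^2 over the latent code z.

    Sketch of the reconstruction step described in the abstract; the
    latent dimension, restart count, step count, and learning rate are
    illustrative assumptions, not the paper's tuned values.
    """
    best_z, best_loss = None, float("inf")
    for _ in range(num_restarts):  # random restarts to escape poor local minima
        z = torch.randn(1, latent_dim, requires_grad=True)
        opt = torch.optim.SGD([z], lr=lr)
        for _ in range(num_steps):  # gradient descent on the latent code
            opt.zero_grad()
            loss = ((generator(z) - x) ** 2).mean()  # reconstruction error
            loss.backward()
            opt.step()
        if loss.item() < best_loss:  # keep the best restart
            best_loss, best_z = loss.item(), z.detach()
    with torch.no_grad():
        return generator(best_z)  # "purified" image to feed the classifier

if __name__ == "__main__":
    # Stand-in generator for demonstration only; Defense-GAN would use a
    # GAN generator trained on clean images (e.g., MNIST digits).
    G = torch.nn.Sequential(torch.nn.Linear(128, 784), torch.nn.Tanh())
    x = torch.rand(1, 784)  # hypothetical flattened 28x28 input
    x_hat = defense_gan_project(x, G)
    print(x_hat.shape)  # torch.Size([1, 784])

A classifier then consumes the reconstruction instead of the raw input, e.g. logits = classifier(defense_gan_project(x, G)). Because the generator is trained only on unperturbed images, the projection tends to discard perturbations that lie off the learned data manifold, which is why the defense is attack-agnostic.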

Related research

02/28/2019
Enhancing the Robustness of Deep Neural Networks by Boundary Conditional GAN
Deep neural networks have been widely deployed in various machine learni...

01/07/2019
Image Super-Resolution as a Defense Against Adversarial Attacks
Convolutional Neural Networks have achieved significant success across m...

02/24/2020
Utilizing a null class to restrict decision spaces and defend against neural network adversarial attacks
Despite recent progress, deep neural networks generally continue to be v...

02/05/2022
Memory Defense: More Robust Classification via a Memory-Masking Autoencoder
Many deep neural networks are susceptible to minute perturbations of ima...

07/20/2021
Discriminator-Free Generative Adversarial Attack
Deep neural networks are vulnerable to adversarial examples (Figure 1...

07/19/2020
Exploiting vulnerabilities of deep neural networks for privacy protection
Adversarial perturbations can be added to images to protect their conten...

10/10/2019
Defending Neural Backdoors via Generative Distribution Modeling
Neural backdoor attack is emerging as a severe security threat to deep l...
