AutoGAN: Robust Classifier Against Adversarial Attacks

12/08/2018
by Blerta Lindqvist, et al.

Classifiers fail to correctly classify input images that have been purposefully and imperceptibly perturbed to cause misclassification. This susceptibility has been shown to be consistent across classifiers, regardless of their type, architecture, or parameters. Common defenses against adversarial attacks modify the classifier boundary by training on additional adversarial examples created in various ways. In this paper, we introduce AutoGAN, which counters adversarial attacks by enhancing the lower-dimensional manifold defined by the training data and by projecting perturbed data points onto it. AutoGAN mitigates the need to know the attack type and magnitude, as well as the need to have adversarial samples of the attack. Our approach uses a Generative Adversarial Network (GAN) with an autoencoder generator and a discriminator that also serves as a classifier. We test AutoGAN against adversarial samples generated with the state-of-the-art Fast Gradient Sign Method (FGSM), as well as samples perturbed with random Gaussian noise, both on the MNIST dataset. For different magnitudes of perturbation in training and testing, AutoGAN can surpass the accuracy of the FGSM method by up to 25 percentage points on samples perturbed using FGSM. Without an augmented training dataset, AutoGAN achieves 89% accuracy, compared to 1% for the FGSM method, on FGSM adversarial test samples.
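To make the described setup concrete, below is a minimal PyTorch sketch of the two ingredients the abstract names: an FGSM perturbation step, and a GAN-style pairing of an autoencoder generator (which projects inputs back toward the data manifold) with a discriminator that doubles as a classifier. All class names, layer sizes, and the inference-time projection step (fgsm_perturb, AutoencoderGenerator, DiscriminatorClassifier, defended_predict) are illustrative assumptions for MNIST-shaped inputs, not the paper's actual implementation or training procedure.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model, loss_fn, x, y, eps):
    """FGSM attack: x_adv = x + eps * sign(grad_x loss). `model` returns class logits."""
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

class AutoencoderGenerator(nn.Module):
    """Hypothetical autoencoder generator that maps inputs back toward the training-data manifold."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(64, 28 * 28), nn.Sigmoid())

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z).view(-1, 1, 28, 28)

class DiscriminatorClassifier(nn.Module):
    """Discriminator with two heads: 10-way MNIST class logits and a real/fake logit."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU())
        self.class_head = nn.Linear(128, 10)  # classification logits
        self.adv_head = nn.Linear(128, 1)     # real/fake logit for the GAN objective

    def forward(self, x):
        h = self.features(x)
        return self.class_head(h), self.adv_head(h)

def defended_predict(generator, discriminator, x):
    """Illustrative defense at inference: project the (possibly perturbed) input
    through the generator, then classify with the discriminator's class head."""
    with torch.no_grad():
        projected = generator(x)
        logits, _ = discriminator(projected)
    return logits.argmax(dim=1)
```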

