Do we need entire training data for adversarial training?

03/10/2023
by Vipul Gupta, et al.

Deep Neural Networks (DNNs) are used to solve a wide range of problems in many domains, including safety-critical ones such as self-driving cars and medical imaging, yet they remain vulnerable to adversarial attacks. In the past few years, numerous approaches have been proposed to tackle this problem by training networks with adversarial training. Almost all of them generate adversarial examples for the entire training dataset, which increases training time drastically. We show that the training time of any adversarial training algorithm can be reduced by using only a subset of the training data. To select this subset, we filter the adversarially-prone samples from the training data by running a simple adversarial attack on every training example: we add a small perturbation to each pixel and overlay a few grid lines on the input image. We then perform adversarial training on the adversarially-prone subset and mix it with vanilla training on the entire dataset. Our results show that when this method-agnostic approach is plugged into FGSM, we achieve a speedup of 3.52x on MNIST and 1.98x on CIFAR-10 with comparable robust accuracy. We also test our approach on state-of-the-art Free adversarial training and achieve a 1.2x speedup in training time with a marginal drop in robust accuracy on ImageNet.
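The filtering step described in the abstract (a small per-pixel perturbation plus a few grid lines, keeping only samples whose prediction flips) could be sketched as follows. This is a minimal illustration, not the paper's implementation: the perturbation size `eps`, the grid spacing, the grid intensity, and the threshold classifier used in the demo are all illustrative assumptions.

```python
import numpy as np

def simple_attack(image, eps=0.05, grid_spacing=8, grid_value=1.0):
    """Cheap filtering attack (sketch): add a small uniform perturbation
    to every pixel, then overlay horizontal and vertical grid lines.
    eps, grid_spacing, and grid_value are illustrative choices, not
    values taken from the paper."""
    perturbed = np.clip(image + eps, 0.0, 1.0)
    perturbed[::grid_spacing, :] = grid_value  # horizontal grid lines
    perturbed[:, ::grid_spacing] = grid_value  # vertical grid lines
    return perturbed

def select_adversarially_prone(images, labels, predict):
    """Return indices of samples whose prediction flips under the
    simple attack; these form the adversarially-prone subset that
    would receive full adversarial training."""
    return [i for i, (x, y) in enumerate(zip(images, labels))
            if predict(simple_attack(x)) != y]

# Demo with a hypothetical mean-threshold "classifier" (illustrative only):
# the near-mid-gray image flips under the attack, the dark one does not.
if __name__ == "__main__":
    predict = lambda x: int(x.mean() > 0.5)
    dark = np.zeros((16, 16))           # stays class 0 after the attack
    gray = np.full((16, 16), 0.45)      # pushed over the threshold
    print(select_adversarially_prone([dark, gray], [0, 0], predict))
```

Per the abstract, adversarial examples would then be generated only for the returned indices during adversarial training, while the full dataset continues to receive vanilla training.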

Related research

A Multi-strength Adversarial Training Method to Mitigate Adversarial Attacks (05/27/2017)
Some recent works revealed that deep neural networks (DNNs) are vulnerab...

Improving adversarial robustness of deep neural networks by using semantic information (08/18/2020)
The vulnerability of deep neural networks (DNNs) to adversarial attack, ...

Adversarial Coreset Selection for Efficient Robust Training (09/13/2022)
Neural networks are vulnerable to adversarial attacks: adding well-craft...

MACER: Attack-free and Scalable Robust Training via Maximizing Certified Radius (01/08/2020)
Adversarial training is one of the most popular ways to learn robust mod...

Probabilistic Categorical Adversarial Attack & Adversarial Training (10/17/2022)
The existence of adversarial examples brings huge concern for people to ...

One-Pixel Shortcut: on the Learning Preference of Deep Neural Networks (05/24/2022)
Unlearnable examples (ULEs) aim to protect data from unauthorized usage ...

The Limitations of Adversarial Training and the Blind-Spot Attack (01/15/2019)
The adversarial training procedure proposed by Madry et al. (2018) is on...
