Efficient Adversarial Training With Data Pruning

07/01/2022
by   Maximilian Kaufmann, et al.

Neural networks are susceptible to adversarial examples: small input perturbations that cause models to fail. Adversarial training is one of the defences against adversarial examples; models are exposed to attacks during training and learn to be resilient to them. Yet such a procedure is currently expensive: it takes a long time to produce and train models on adversarial samples and, worse, it occasionally fails. In this paper we demonstrate data pruning, a method for increasing adversarial training efficiency through data sub-sampling. We empirically show that data pruning improves the convergence and reliability of adversarial training, albeit with varying levels of utility degradation. For example, we observe that random sub-sampling of CIFAR10 that drops 40% of the data incurs a loss of adversarial accuracy against the strongest attackers, while using only 20% of the data we lose 14% adversarial accuracy. Interestingly, we discover that in some settings data pruning brings benefits from both worlds: it improves both adversarial accuracy and training time.
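To make the idea concrete, here is a minimal sketch of data pruning via random sub-sampling combined with adversarial training. This is not the paper's implementation: it assumes a linear logistic model and a single-step, FGSM-style perturbation as stand-ins for the deep networks and stronger attacks studied in the paper, and all function names are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def prune_indices(n, keep_frac, rng):
    # Data pruning by random sub-sampling: keep only keep_frac of the set.
    k = max(1, int(n * keep_frac))
    return rng.choice(n, size=k, replace=False)

def adv_train(X, y, keep_frac=0.6, eps=0.1, lr=0.5, epochs=200, seed=0):
    """Adversarial training of a logistic model on a randomly pruned subset.

    Labels y are in {-1, +1}; eps bounds the L_inf perturbation.
    (Illustrative sketch only, not the paper's training procedure.)"""
    rng = np.random.default_rng(seed)
    idx = prune_indices(len(X), keep_frac, rng)
    Xs, ys = X[idx], y[idx]
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        # Craft worst-case inputs for the current model (one FGSM step):
        # loss = log(1 + exp(-y * x.w)), so d(loss)/dx = -y * sigmoid(-y x.w) * w
        s = sigmoid(-ys * (Xs @ w))
        X_adv = Xs + eps * np.sign((-ys * s)[:, None] * w)
        # Gradient step on the adversarial examples, not the clean ones.
        s_adv = sigmoid(-ys * (X_adv @ w))
        grad_w = ((-ys * s_adv)[:, None] * X_adv).mean(axis=0)
        w -= lr * grad_w
    return w
```

Because the inner maximisation (crafting `X_adv`) runs only over the pruned subset, each epoch costs a fraction `keep_frac` of the full-data attack budget, which is the source of the speed-up the abstract describes.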


Related research

- Deep Repulsive Prototypes for Adversarial Robustness (05/26/2021): While many defences against adversarial examples have been proposed, fin...
- Towards Understanding Fast Adversarial Training (06/04/2020): Current neural-network-based classifiers are susceptible to adversarial ...
- Optimizing Information Loss Towards Robust Neural Networks (08/07/2020): Neural Networks (NNs) are vulnerable to adversarial examples. Such input...
- ASAT: Adaptively Scaled Adversarial Training in Time Series (08/20/2021): Adversarial training is a method for enhancing neural networks to improv...
- Adversarial Training is Not Ready for Robot Learning (03/15/2021): Adversarial training is an effective method to train deep learning model...
- Blind Adversarial Pruning: Balance Accuracy, Efficiency and Robustness (04/10/2020): With the growth of interest in the attack and defense of deep neural net...
- Coverage-centric Coreset Selection for High Pruning Rates (10/28/2022): One-shot coreset selection aims to select a subset of the training data,...
