Efficient Adversarial Training with Transferable Adversarial Examples

12/27/2019
by   Haizhong Zheng, et al.

Adversarial training is an effective defense method to protect classification models against adversarial attacks. However, one limitation of this approach is that it can require orders of magnitude more training time due to the high cost of generating strong adversarial examples during training. In this paper, we first show that there is high transferability between models from neighboring epochs in the same training process, i.e., adversarial examples from one epoch continue to be adversarial in subsequent epochs. Leveraging this property, we propose a novel method, Adversarial Training with Transferable Adversarial Examples (ATTA), that can enhance the robustness of trained models and greatly improve training efficiency by accumulating adversarial perturbations through epochs. Compared to state-of-the-art adversarial training methods, ATTA enhances adversarial accuracy by up to 7.2% and requires 12-14x less training time on the MNIST and CIFAR10 datasets with comparable model robustness.
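The core idea described above, carrying each example's adversarial perturbation over from one epoch to the next instead of restarting the attack from scratch, can be illustrated with a minimal numpy sketch. This is not the authors' implementation: the toy logistic-regression model, step sizes, and variable names are all invented for illustration, and a single attack step per epoch stands in for the paper's full training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, eps, alpha = 32, 10, 0.3, 0.1   # toy sizes; eps = L_inf attack budget
X = rng.normal(size=(n, d))           # inputs
y = rng.choice([-1.0, 1.0], size=n)   # binary labels in {-1, +1}
w = rng.normal(size=d) * 0.1          # linear classifier weights

# ATTA's key trick (sketched): one stored perturbation per training example,
# reused and refined across epochs rather than reset to zero each epoch.
delta = np.zeros_like(X)

def grad_x(Xp, y, w):
    """Per-example gradient of the logistic loss w.r.t. the input."""
    m = y * (Xp @ w)                   # margin, shape (n,)
    s = 1.0 / (1.0 + np.exp(m))        # sigmoid(-margin)
    return (-y * s)[:, None] * w[None, :]

for epoch in range(5):
    # Resume the attack from the accumulated perturbation: one signed
    # gradient-ascent step, then project back onto the L_inf ball.
    delta = np.clip(delta + alpha * np.sign(grad_x(X + delta, y, w)),
                    -eps, eps)
    # Train the model on the resulting adversarial examples.
    X_adv = X + delta
    m = y * (X_adv @ w)
    s = 1.0 / (1.0 + np.exp(m))
    g_w = ((-y * s)[:, None] * X_adv).mean(axis=0)
    w -= 0.5 * g_w
```

Because each epoch adds only one attack step on top of the stored perturbation, the adversarial examples grow stronger over epochs while the per-epoch attack cost stays at a single step, which is the source of the training-time savings the abstract reports.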


Related research:

- On Norm-Agnostic Robustness of Adversarial Training (05/15/2019): Adversarial examples are carefully perturbed inputs for fooling machine...
- Towards Speeding up Adversarial Training in Latent Spaces (02/01/2021): Adversarial training is widely considered as the most effective way to d...
- Reducing Adversarial Training Cost with Gradient Approximation (09/18/2023): Deep learning models have achieved state-of-the-art performances in vari...
- Adversarial Training for Free! (04/29/2019): Adversarial training, in which a network is trained on adversarial examp...
- Towards Understanding Fast Adversarial Training (06/04/2020): Current neural-network-based classifiers are susceptible to adversarial ...
- Robust Local Features for Improving the Generalization of Adversarial Training (09/23/2019): Adversarial training has been demonstrated as one of the most effective ...
- Optimizing Information Loss Towards Robust Neural Networks (08/07/2020): Neural Networks (NNs) are vulnerable to adversarial examples. Such input...
