Amata: An Annealing Mechanism for Adversarial Training Acceleration

12/15/2020
by   Nanyang Ye, et al.

Despite their empirical success in various domains, deep neural networks have been revealed to be vulnerable to maliciously perturbed input data that significantly degrades their performance. These perturbations are known as adversarial attacks. To counter adversarial attacks, adversarial training, formulated as a form of robust optimization, has been demonstrated to be effective. However, conducting adversarial training incurs substantial computational overhead compared with standard training. To reduce this computational cost, we propose an annealing mechanism, Amata, that lowers the overhead associated with adversarial training. Amata is provably convergent, well motivated from the perspective of optimal control theory, and can be combined with existing acceleration methods to further enhance performance. On standard datasets, Amata achieves similar or better robustness in roughly 1/3 to 1/2 of the computational time required by traditional methods. In addition, Amata can be incorporated into other adversarial training acceleration algorithms (e.g., YOPO, Free, Fast, and ATTA), leading to further reductions in computational time on large-scale problems.
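To make the annealing idea concrete, here is a minimal sketch of an annealed PGD-step schedule for adversarial training. This is an illustrative assumption, not the authors' exact Amata algorithm (which the abstract says is derived from optimal control theory): the sketch simply starts training with few, cheap attack iterations and linearly anneals toward more, finer iterations while keeping the total perturbation budget fixed.

```python
def annealed_pgd_schedule(epoch, total_epochs, min_steps=1, max_steps=10,
                          total_perturbation=8 / 255):
    """Hypothetical linear annealing schedule for PGD adversarial training.

    Early epochs use few PGD steps (cheap, coarse attacks); later epochs
    use more steps. The per-step size shrinks so that steps * step_size
    always equals the fixed perturbation budget.
    """
    # Fraction of training completed, in [0, 1].
    frac = epoch / max(total_epochs - 1, 1)
    # Linearly interpolate the number of attack iterations.
    steps = round(min_steps + frac * (max_steps - min_steps))
    # Keep the total attack budget constant across the schedule.
    step_size = total_perturbation / steps
    return steps, step_size
```

In an adversarial training loop, the returned `(steps, step_size)` pair would parameterize the inner PGD attack each epoch, so early epochs cost roughly `min_steps / max_steps` of the usual per-batch attack time.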


