Adversarial Training Versus Weight Decay

04/10/2018
by   Angus Galloway, et al.

Performance-critical machine learning models should be robust to input perturbations not seen during training. Adversarial training is a method for improving a model's robustness to some perturbations by including them in the training process, but this tends to exacerbate other vulnerabilities of the model. The adversarial training framework has the effect of translating the data with respect to the cost function, while weight decay has a scaling effect. Although weight decay could be considered a crude regularization technique, it appears superior to adversarial training as it remains stable over a broader range of regimes and reduces all generalization errors. Equipped with these abstractions, we provide key baseline results and methodology for characterizing robustness. The two approaches can be combined to yield one small model that demonstrates good robustness to several white-box attacks associated with different metrics.
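The abstract's geometric intuition — adversarial training *translates* the data with respect to the cost function, while weight decay *scales* the weights — can be illustrated with a minimal sketch. The snippet below is not the paper's experimental setup; it is a toy logistic-regression example in plain NumPy, using FGSM-style perturbations (sign of the input gradient) as the adversarial-training step and an L2 penalty as the weight-decay step. All hyperparameters (`lr`, `eps`, `wd`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data: two Gaussian blobs in 2D.
X = np.vstack([rng.normal(-1.0, 1.0, (50, 2)), rng.normal(1.0, 1.0, (50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)])

w = np.zeros(2)
b = 0.0
lr, eps, wd = 0.1, 0.1, 1e-3  # learning rate, FGSM budget, weight-decay strength

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(200):
    # Adversarial training: shift each input in the direction that
    # increases the loss (FGSM), i.e. "translate" the data.
    p = sigmoid(X @ w + b)
    grad_x = np.outer(p - y, w)          # dL/dx for the logistic loss
    X_adv = X + eps * np.sign(grad_x)

    # Gradient step on the perturbed batch, plus L2 weight decay,
    # which shrinks ("scales") the weight vector each step.
    p_adv = sigmoid(X_adv @ w + b)
    err = p_adv - y
    w -= lr * (X_adv.T @ err / len(y) + wd * w)
    b -= lr * err.mean()

acc = float(((sigmoid(X @ w + b) > 0.5) == y).mean())
```

Combining the two, as the abstract suggests, simply means applying both updates in the same loop: the FGSM term hardens the model against the chosen perturbation metric, while the decay term keeps the weight norm (and hence the model's sensitivity to all input directions) small.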


