Data Augmentation Alone Can Improve Adversarial Training

01/24/2023
by Lin Li, et al.

Adversarial training suffers from the issue of robust overfitting, which seriously impairs its generalization performance. Data augmentation, which is effective at preventing overfitting in standard training, has been observed by many previous works to be ineffective at mitigating overfitting in adversarial training. This work shows that, contrary to previous findings, data augmentation alone can significantly boost accuracy and robustness in adversarial training. We find that the hardness and the diversity of data augmentation are important factors in combating robust overfitting. In general, diversity can improve both accuracy and robustness, while hardness can boost robustness at the cost of accuracy within a certain limit, and degrades both beyond that limit. To mitigate robust overfitting, we first propose a new crop transformation, Cropshift, which has improved diversity compared to the conventional Padcrop. We then propose a new data augmentation scheme, based on Cropshift, with much improved diversity and well-balanced hardness. Empirically, our augmentation method achieves state-of-the-art accuracy and robustness among data augmentations in adversarial training. Furthermore, when combined with weight averaging it matches, or even exceeds, the performance of the best contemporary regularization methods for alleviating robust overfitting. Code is available at: https://github.com/TreeLLi/DA-Alone-Improves-AT.
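To make the contrast between the two crop transformations concrete, here is a minimal illustrative sketch in NumPy. The `padcrop` function is the conventional pad-then-crop; `cropshift` is an interpretation of the proposed transform in which both the cropped window and its placement in the output canvas are randomized, yielding a more diverse set of outputs. Function names and parameters (`pad`, `max_shift`) are illustrative, not the paper's exact implementation.

```python
import numpy as np

def padcrop(img, pad=4):
    """Conventional Padcrop: zero-pad the image, then take a random
    crop of the original size. Only the crop offset varies."""
    h, w = img.shape[:2]
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)))
    top = np.random.randint(0, 2 * pad + 1)
    left = np.random.randint(0, 2 * pad + 1)
    return padded[top:top + h, left:left + w]

def cropshift(img, max_shift=4):
    """Cropshift-style transform (illustrative sketch): crop a window of
    random size and position, then paste it at a random location in a
    blank canvas of the original size. Both content and placement vary,
    so the space of possible outputs is larger than with Padcrop."""
    h, w = img.shape[:2]
    dh = np.random.randint(0, max_shift + 1)   # rows removed by the crop
    dw = np.random.randint(0, max_shift + 1)   # cols removed by the crop
    ch, cw = h - dh, w - dw                    # size of the cropped window
    top = np.random.randint(0, dh + 1)
    left = np.random.randint(0, dw + 1)
    crop = img[top:top + ch, left:left + cw]
    out = np.zeros_like(img)
    out_top = np.random.randint(0, dh + 1)
    out_left = np.random.randint(0, dw + 1)
    out[out_top:out_top + ch, out_left:out_left + cw] = crop
    return out
```

Both transforms preserve the input shape, so either can be dropped into a standard adversarial training pipeline as a per-sample augmentation.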


Related research:

- Data Augmentation Can Improve Robustness (11/09/2021)
- AROID: Improving Adversarial Robustness through Online Instance-wise Data Augmentation (06/12/2023)
- On the effectiveness of adversarial training against common corruptions (03/03/2021)
- AugMax: Adversarial Composition of Random Augmentations for Robust Training (10/26/2021)
- Consistency Regularization for Adversarial Robustness (03/08/2021)
- MixBoost: Improving the Robustness of Deep Neural Networks by Boosting Data Augmentation (12/08/2022)
- Enabling Data Diversity: Efficient Automatic Augmentation via Regularized Adversarial Training (03/30/2021)
