On the effectiveness of adversarial training against common corruptions

03/03/2021
by Klim Kireev, et al.

The literature on robustness to common corruptions shows no consensus on whether adversarial training can improve performance in this setting. First, we show that, with an appropriately chosen perturbation radius, ℓ_p adversarial training can serve as a strong baseline against common corruptions. We then explain why adversarial training outperforms data augmentation with simple Gaussian noise, which has previously been observed to be a meaningful baseline for common corruptions. In this context, we identify the σ-overfitting phenomenon, whereby Gaussian augmentation overfits to the particular standard deviation used during training, which significantly degrades accuracy on common corruptions. We discuss how to alleviate this problem and then show how to further enhance ℓ_p adversarial training by introducing an efficient relaxation of adversarial training that uses learned perceptual image patch similarity (LPIPS) as the distance metric. Through experiments on CIFAR-10 and ImageNet-100, we show that our approach not only improves the ℓ_p adversarial training baseline but also yields cumulative gains with data augmentation methods such as AugMix, ANT, and SIN, leading to state-of-the-art performance on common corruptions. The code of our experiments is publicly available at https://github.com/tml-epfl/adv-training-corruptions.
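The σ-overfitting discussion above suggests a simple mitigation: sample the noise level anew for each batch instead of fixing it during training. A minimal NumPy sketch of the two variants (the function names and the uniform sampling range are illustrative assumptions, not the authors' exact recipe):

```python
import numpy as np

def gaussian_augment(batch, sigma=0.1, rng=None):
    """Fixed-sigma Gaussian augmentation: add N(0, sigma^2) pixel noise.

    Training only at a single sigma is what induces the sigma-overfitting
    phenomenon described in the abstract.
    """
    rng = rng or np.random.default_rng(0)
    noisy = batch + rng.normal(0.0, sigma, size=batch.shape)
    return np.clip(noisy, 0.0, 1.0)  # keep pixels in the valid [0, 1] range

def gaussian_augment_random_sigma(batch, sigma_max=0.2, rng=None):
    """Randomized-sigma variant: draw sigma ~ U(0, sigma_max) per batch.

    Exposing the model to a range of noise levels is one way to avoid
    overfitting to a single standard deviation (sigma_max is an assumed
    hyperparameter here).
    """
    rng = rng or np.random.default_rng(0)
    sigma = rng.uniform(0.0, sigma_max)
    noisy = batch + rng.normal(0.0, sigma, size=batch.shape)
    return np.clip(noisy, 0.0, 1.0)
```

In a training loop, the randomized variant would simply replace the fixed-sigma call, leaving the rest of the pipeline unchanged.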

