Efficient and Effective Augmentation Strategy for Adversarial Training

10/27/2022
by Sravanti Addepalli, et al.

Adversarial training of Deep Neural Networks is known to be significantly more data-hungry than standard training. Furthermore, complex data augmentations such as AutoAugment, which have led to substantial gains in standard training of image classifiers, have not been successful with adversarial training. We first explain this contrasting behavior by viewing augmentation during training as a problem of domain generalization, and further propose Diverse Augmentation-based Joint Adversarial Training (DAJAT) to use data augmentations effectively in adversarial training. We aim to handle the conflicting goals of enhancing the diversity of the training dataset and training with data that is close to the test distribution by using a combination of simple and complex augmentations with separate batch normalization layers during training. We further utilize the popular Jensen-Shannon divergence loss to encourage the joint learning of the diverse augmentations, thereby allowing the simple augmentations to guide the learning of the complex ones. Lastly, to improve the computational efficiency of the proposed method, we propose and utilize a two-step defense, Ascending Constraint Adversarial Training (ACAT), that uses an increasing epsilon schedule and weight-space smoothing to prevent gradient masking. The proposed method DAJAT achieves a substantially better robustness-accuracy trade-off than existing methods on the RobustBench leaderboard with ResNet-18 and WideResNet-34-10. The code for implementing DAJAT is available here: https://github.com/val-iisc/DAJAT.
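To make the abstract's three mechanisms concrete, the sketch below illustrates (i) a Jensen-Shannon consistency loss coupling predictions on simple and complex augmentations of the same batch, (ii) batch-norm routing so each augmentation type keeps its own normalization statistics, and (iii) an ascending epsilon schedule of the kind ACAT describes. This is a minimal PyTorch-style sketch: the names (js_divergence, DualBN2d, ascending_epsilon), the linear ramp, and the loss arrangement are our assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F


def js_divergence(logits_list, eps=1e-7):
    # Jensen-Shannon divergence across predictions from different
    # augmented views of the same images: average KL(p_i || M),
    # where M is the mean of the per-view softmax distributions.
    probs = [F.softmax(logits, dim=1) for logits in logits_list]
    log_mixture = torch.stack(probs).mean(dim=0).clamp(eps, 1.0).log()
    # F.kl_div takes log-probabilities as its first argument.
    return sum(F.kl_div(log_mixture, p, reduction="batchmean")
               for p in probs) / len(probs)


class DualBN2d(torch.nn.Module):
    # Separate BatchNorm statistics for "simple" augmentations
    # (e.g. pad-crop, flip) and "complex" ones (e.g. AutoAugment),
    # so complex views do not pull the normalization statistics
    # away from the test distribution.
    def __init__(self, channels):
        super().__init__()
        self.bn = torch.nn.ModuleDict({
            "simple": torch.nn.BatchNorm2d(channels),
            "complex": torch.nn.BatchNorm2d(channels),
        })

    def forward(self, x, aug_type="simple"):
        return self.bn[aug_type](x)


def ascending_epsilon(epoch, total_epochs, eps_max=8 / 255):
    # One plausible instance of an "increasing epsilon schedule":
    # ramp the perturbation bound linearly over the first half of
    # training, then hold it at eps_max.
    return eps_max * min(1.0, (epoch + 1) / (0.5 * total_epochs))
```

A training step would then combine the per-view cross-entropy terms with a weighted js_divergence term, generating adversarial examples under the current ascending_epsilon bound; the exact weighting, schedule, and weight-space smoothing are in the authors' repository linked above.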

Related research

On the effectiveness of adversarial training against common corruptions (03/03/2021)
The literature on robustness towards common corruptions shows no consens...

Once-for-All Adversarial Training: In-Situ Tradeoff between Robustness and Accuracy for Free (10/22/2020)
Adversarial training and its many variants substantially improve deep ne...

Self-supervised Learning of Adversarial Example: Towards Good Generalizations for Deepfake Detection (03/23/2022)
Recent studies in deepfake detection have yielded promising results when...

Challenges of Adversarial Image Augmentations (11/24/2021)
Image augmentations applied during training are crucial for the generali...

Inducing Data Amplification Using Auxiliary Datasets in Adversarial Training (09/27/2022)
Several recent studies have shown that the use of extra in-distribution ...

AugMax: Adversarial Composition of Random Augmentations for Robust Training (10/26/2021)
Data augmentation is a simple yet effective way to improve the robustnes...

Adversarial Unlearning: Reducing Confidence Along Adversarial Directions (06/03/2022)
Supervised learning methods trained with maximum likelihood objectives o...
