Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples

10/07/2020
by Sven Gowal, et al.

Adversarial training and its variants have become de facto standards for learning robust deep neural networks. In this paper, we explore the landscape around adversarial training in a bid to uncover its limits. We systematically study the effect of different training losses, model sizes, activation functions, the addition of unlabeled data (through pseudo-labeling) and other factors on adversarial robustness. We discover that it is possible to train robust models that go well beyond state-of-the-art results by combining larger models, Swish/SiLU activations and model weight averaging. We demonstrate large improvements on CIFAR-10 and CIFAR-100 against ℓ_∞ and ℓ_2 norm-bounded perturbations of size 8/255 and 128/255, respectively. In the setting with additional unlabeled data, we obtain an accuracy under attack of 65.87% on CIFAR-10 against ℓ_∞ perturbations of size 8/255 (an improvement with respect to prior art). Without additional data, we obtain an accuracy under attack of 56.43%. Without any additional modifications, we obtain an accuracy under attack of 80.45% against ℓ_2 perturbations of size 128/255 on CIFAR-10, and of 37.70% against ℓ_∞ perturbations of size 8/255 on CIFAR-100.
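To make the combination of ingredients named in the abstract concrete, below is a minimal PyTorch sketch of ℓ_∞ adversarial training (via standard PGD) together with SiLU (Swish) activations and model weight averaging (implemented here as an exponential moving average). This is not the paper's implementation: the toy network, the attack hyperparameters (eps=8/255, alpha=2/255, 10 steps), the EMA decay, and the optimizer settings are illustrative assumptions, not values taken from the paper.

import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Projected gradient descent attack under an l_inf budget of `eps`."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(torch.clamp(x + delta, 0, 1)), y)
        grad, = torch.autograd.grad(loss, delta)
        # Ascend along the sign of the gradient, then project back into the ball.
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return torch.clamp(x + delta, 0, 1).detach()

# Toy convolutional network using SiLU (Swish) activations in place of ReLU.
model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.SiLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.SiLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 10),
)
ema_model = copy.deepcopy(model)  # weight-averaged copy, used at evaluation time
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

def train_step(x, y, decay=0.995):
    model.train()
    x_adv = pgd_linf(model, x, y)           # craft adversarial examples
    opt.zero_grad()
    F.cross_entropy(model(x_adv), y).backward()
    opt.step()
    with torch.no_grad():                    # update the averaged weights
        for p_ema, p in zip(ema_model.parameters(), model.parameters()):
            p_ema.mul_(decay).add_(p, alpha=1 - decay)

In a setup like this, robust accuracy would be reported on ema_model rather than on the raw training weights; the paper additionally scales this recipe up to much larger models than the stand-in network above.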
