Improving Robustness using Generated Data

10/18/2021
by Sven Gowal, et al.

Recent work argues that robust training requires substantially larger datasets than those required for standard classification. On CIFAR-10 and CIFAR-100, this translates into a sizable robust-accuracy gap between models trained solely on data from the original training set and those trained with additional data extracted from the "80 Million Tiny Images" dataset (TI-80M). In this paper, we explore how generative models trained solely on the original training set can be leveraged to artificially increase the size of the original training set and improve adversarial robustness to ℓ_p norm-bounded perturbations. We identify the sufficient conditions under which incorporating additional generated data can improve robustness, and demonstrate that it is possible to significantly reduce the robust-accuracy gap to models trained with additional real data. Surprisingly, we show that even the addition of non-realistic random data (generated by Gaussian sampling) can improve robustness. We evaluate our approach on CIFAR-10, CIFAR-100, SVHN and TinyImageNet against ℓ_∞ and ℓ_2 norm-bounded perturbations of size ϵ = 8/255 and ϵ = 128/255, respectively. We show large absolute improvements in robust accuracy compared to previous state-of-the-art methods. Against ℓ_∞ norm-bounded perturbations of size ϵ = 8/255, our models achieve 66.10% and 33.49% robust accuracy on CIFAR-10 and CIFAR-100, respectively (improving upon the state-of-the-art by +8.96% and +3.29%). Against ℓ_2 norm-bounded perturbations of size ϵ = 128/255, our model achieves 78.31% robust accuracy on CIFAR-10 (+3.81%). These results beat most prior works that use external data.
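For readers unfamiliar with the ℓ_∞ threat model the abstract evaluates against, the sketch below shows a generic projected gradient descent (PGD) attack with ϵ = 8/255 on a toy binary logistic model with hand-written gradients. This is not the paper's code or architecture; the function `pgd_linf` and the linear model are illustrative assumptions, chosen only to make the norm-ball constraint concrete.

```python
import numpy as np

def pgd_linf(x, y, w, b, eps=8/255, step=2/255, iters=10):
    """Illustrative l-infinity PGD attack on a binary logistic model.

    x: input features in [0, 1] (e.g. flattened image pixels)
    y: label in {0, 1}
    w, b: weights and bias of a toy linear classifier (not the paper's model)
    """
    x_adv = x.copy()
    for _ in range(iters):
        logit = x_adv @ w + b
        p = 1.0 / (1.0 + np.exp(-logit))            # sigmoid prediction
        grad_x = (p - y) * w                        # d(BCE loss)/dx, derived by hand
        x_adv = x_adv + step * np.sign(grad_x)      # ascend the loss (signed step)
        x_adv = np.clip(x_adv, x - eps, x + eps)    # project into the l-inf ball
        x_adv = np.clip(x_adv, 0.0, 1.0)            # keep pixels in a valid range
    return x_adv

# Hypothetical usage: attack a clean point sitting on the decision boundary.
x = np.full(4, 0.5)
w = np.array([1.0, -1.0, 1.0, -1.0])
x_adv = pgd_linf(x, y=1.0, w=w, b=0.0)
```

Every intermediate iterate is clipped back into the ϵ-ball around the clean input, which is what "norm-bounded perturbations of size ϵ = 8/255" refers to throughout the abstract.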


Related research

03/02/2021 - Fixing Data Augmentation to Improve Adversarial Robustness
Adversarial training suffers from robust overfitting, a phenomenon where...

11/09/2021 - Data Augmentation Can Improve Robustness
Adversarial training suffers from robust overfitting, a phenomenon where...

08/17/2022 - Two Heads are Better than One: Robust Learning Meets Multi-branch Models
Deep neural networks (DNNs) are vulnerable to adversarial examples, in w...

04/02/2021 - Defending Against Image Corruptions Through Adversarial Augmentations
Modern neural networks excel at image classification, yet they remain vu...

11/06/2018 - MixTrain: Scalable Training of Verifiably Robust Neural Networks
Making neural networks robust against adversarial inputs has resulted in...

11/22/2022 - Improving Robust Generalization by Direct PAC-Bayesian Bound Minimization
Recent research in robust optimization has shown an overfitting-like phe...

05/31/2018 - Scaling provable adversarial defenses
Recent work has developed methods for learning deep network classifiers ...
