Unlabeled Data Improves Adversarial Robustness

05/31/2019
by Yair Carmon, et al.

We demonstrate, theoretically and empirically, that adversarial robustness can significantly benefit from semi-supervised learning. Theoretically, we revisit the simple Gaussian model of Schmidt et al. (2018) that shows a sample complexity gap between standard and robust classification. We prove that this gap does not pertain to labels: a simple semi-supervised learning procedure (self-training) achieves robust accuracy using the same number of labels required for standard accuracy. Empirically, we augment CIFAR-10 with 500K unlabeled images sourced from 80 Million Tiny Images and use robust self-training to improve upon state-of-the-art robust accuracies by over 5 points in (i) ℓ_∞ robustness against several strong attacks via adversarial training and (ii) certified ℓ_2 and ℓ_∞ robustness via randomized smoothing. On SVHN, adding the dataset's own extra training set with the labels removed provides gains of 4 to 10 points, within 1 point of the gain from using the extra labels as well.
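As a rough illustration of the self-training procedure described above, the sketch below first fits a standard classifier on labeled data, uses it to pseudo-label an unlabeled pool, and then runs ℓ_∞ PGD adversarial training on the combined set. The model, tensor shapes, optimizer, and PGD hyperparameters are illustrative placeholders, not the paper's actual architecture or settings.

# Minimal sketch of robust self-training: (1) standard training on labeled data,
# (2) pseudo-labeling of unlabeled data, (3) adversarial training on the union.
# All shapes and hyperparameters are illustrative stand-ins.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Untargeted L_inf PGD: search the eps-ball for a perturbation that
    maximizes cross-entropy loss, assuming inputs lie in [0, 1]."""
    delta = torch.zeros_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return (x + delta).clamp(0, 1).detach()

def train(model, loader, opt, epochs, adversarial=False):
    for _ in range(epochs):
        for x, y in loader:
            if adversarial:                      # replace each batch with its PGD perturbation
                model.eval()
                x = pgd_attack(model, x, y)
                model.train()
            opt.zero_grad()
            F.cross_entropy(model(x), y).backward()
            opt.step()

def make_model():
    # Toy MLP standing in for the paper's wide residual network.
    return nn.Sequential(nn.Linear(3 * 32 * 32, 256), nn.ReLU(), nn.Linear(256, 10))

# Illustrative random tensors standing in for CIFAR-10-like labeled/unlabeled data.
x_lab, y_lab = torch.rand(256, 3 * 32 * 32), torch.randint(0, 10, (256,))
x_unlab = torch.rand(1024, 3 * 32 * 32)

# Step 1: standard (non-robust) training on the labeled set only.
std_model = make_model()
lab_loader = DataLoader(TensorDataset(x_lab, y_lab), batch_size=64, shuffle=True)
train(std_model, lab_loader, torch.optim.SGD(std_model.parameters(), lr=0.1), epochs=5)

# Step 2: pseudo-label the unlabeled pool with the standard model.
with torch.no_grad():
    y_pseudo = std_model(x_unlab).argmax(dim=1)

# Step 3: adversarial training on labeled + pseudo-labeled data.
x_all, y_all = torch.cat([x_lab, x_unlab]), torch.cat([y_lab, y_pseudo])
all_loader = DataLoader(TensorDataset(x_all, y_all), batch_size=64, shuffle=True)
robust_model = make_model()
train(robust_model, all_loader, torch.optim.SGD(robust_model.parameters(), lr=0.1),
      epochs=5, adversarial=True)

The same pipeline applies to the certified-robustness results by swapping step 3's adversarial training for stability/noise training followed by randomized smoothing certification.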

Related research:
- Are Labels Required for Improving Adversarial Robustness? (05/31/2019)
- RUSH: Robust Contrastive Learning via Randomized Smoothing (07/11/2022)
- Adversarially Robust Generalization Just Requires More Unlabeled Data (06/03/2019)
- Improving Adversarial Robustness via Unlabeled Out-of-Domain Data (06/15/2020)
- LTD: Low Temperature Distillation for Robust Adversarial Training (11/03/2021)
- Rethinking Softmax Cross-Entropy Loss for Adversarial Robustness (05/25/2019)
- Class Mean Vectors, Self Monitoring and Self Learning for Neural Classifiers (10/22/2019)
