Adversarially Robust Generalization Just Requires More Unlabeled Data

06/03/2019
by   Runtian Zhai, et al.

The robustness of neural networks has recently been brought into focus by the existence of adversarial examples. Many previous works show that learned networks do not perform well on perturbed test data, and that significantly more labeled data is required to achieve adversarially robust generalization. In this paper, we show theoretically and empirically that with more unlabeled data alone, we can learn a model with better adversarially robust generalization. The key insight behind our results is a risk decomposition theorem, in which the expected robust risk is separated into two parts: a stability part, which measures prediction stability in the presence of perturbations, and an accuracy part, which evaluates standard classification accuracy. Since the stability part does not depend on any label information, it can be optimized using unlabeled data. We further prove that, for the specific Gaussian mixture problem studied by Schmidt et al. (2018), adversarially robust generalization can be almost as easy as standard generalization in supervised learning if a sufficiently large amount of unlabeled data is provided. Inspired by these theoretical findings, we propose a new algorithm, PASS, which leverages unlabeled data during adversarial training. We show that in both the transductive and semi-supervised settings, PASS achieves higher robust accuracy and defense success rate on the CIFAR-10 task.
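
The decomposition described in the abstract suggests a simple training objective, sketched below in PyTorch as a minimal illustration. This is not the authors' exact PASS algorithm: the L-infinity PGD settings (epsilon, step_size, num_steps), the KL-based consistency penalty, and the weighting factor lam are assumptions made for the example. The accuracy part is an ordinary cross-entropy loss on labeled examples, while the stability part penalizes changes in the model's predictions under perturbation and requires no labels, so it can be estimated from unlabeled data alone.

import torch
import torch.nn.functional as F

def pgd_perturb(model, x, epsilon=8/255, step_size=2/255, num_steps=10):
    # Find a perturbation that maximizes prediction instability; no labels needed.
    # (Illustrative L_inf PGD settings, not the paper's.)
    with torch.no_grad():
        p_clean = F.softmax(model(x), dim=1)
    x_adv = x + epsilon * torch.empty_like(x).uniform_(-1, 1)
    for _ in range(num_steps):
        x_adv.requires_grad_(True)
        loss = F.kl_div(F.log_softmax(model(x_adv), dim=1), p_clean,
                        reduction="batchmean")
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + step_size * grad.sign()
            x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon).clamp(0, 1)
    return x_adv.detach()

def semi_supervised_robust_loss(model, x_lab, y_lab, x_unlab, lam=1.0):
    # Accuracy part: standard cross-entropy on labeled examples.
    accuracy_loss = F.cross_entropy(model(x_lab), y_lab)
    # Stability part: prediction consistency under perturbation, label-free,
    # so it is computed here on the unlabeled batch.
    x_adv = pgd_perturb(model, x_unlab)
    stability_loss = F.kl_div(F.log_softmax(model(x_adv), dim=1),
                              F.softmax(model(x_unlab), dim=1),
                              reduction="batchmean")
    return accuracy_loss + lam * stability_loss

In this sketch, each training step draws one labeled and one unlabeled batch and minimizes the combined loss; only the stability term touches the unlabeled data, mirroring the role the paper assigns to it.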


research  05/31/2019
Are Labels Required for Improving Adversarial Robustness?
Recent work has uncovered the interesting (and somewhat surprising) find...

research  11/20/2019
Where is the Bottleneck of Adversarial Learning with Unlabeled Data?
Deep neural networks (DNNs) are incredibly brittle due to adversarial ex...

research  05/24/2019
Robustness to Adversarial Perturbations in Learning from Incomplete Data
What is the role of unlabeled data in an inference problem, when the pre...

research  06/15/2020
Improving Adversarial Robustness via Unlabeled Out-of-Domain Data
Data augmentation by incorporating cheap unlabeled data from multiple do...

research  05/31/2019
Unlabeled Data Improves Adversarial Robustness
We demonstrate, theoretically and empirically, that adversarial robustne...

research  05/01/2021
RATT: Leveraging Unlabeled Data to Guarantee Generalization
To assess generalization, machine learning scientists typically either (...

research  07/22/2022
Efficient Testing of Deep Neural Networks via Decision Boundary Analysis
Deep learning plays a more and more important role in our daily life due...
