Curriculum Adversarial Training

05/13/2018
by Qi-Zhi Cai et al.

Recently, deep learning has been applied to many security-sensitive applications, such as facial authentication. The existence of adversarial examples hinders such applications. The state-of-the-art result on defense shows that adversarial training can be applied to train a robust model on MNIST against adversarial examples; but it fails to achieve high empirical worst-case accuracy on more complex tasks, such as CIFAR-10 and SVHN. In our work, we propose curriculum adversarial training (CAT) to resolve this issue. The basic idea is to develop a curriculum of adversarial examples generated by attacks with a wide range of strengths. With two techniques to mitigate the forgetting and the generalization issues, we demonstrate that CAT can improve the prior art's empirical worst-case accuracy by a large margin of 25% on CIFAR-10 and 35% on SVHN, while the model's accuracy on non-adversarial inputs remains comparable to state-of-the-art models.
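The core idea — train against adversarial examples whose attack strength follows a curriculum, while mixing in weaker attacks from earlier stages so the model does not forget them — can be sketched on a toy model. The snippet below is a minimal illustration, not the paper's method: it uses logistic regression instead of a DNN, a k-step PGD attack as the strength parameter, and a crude random-strength sampler as a stand-in for the paper's batch-mixing technique. All function and parameter names (`pgd_attack`, `cat_train`, `steps_per_stage`, etc.) are hypothetical.

```python
import numpy as np

def pgd_attack(x, y, w, b, eps, k, alpha):
    """k-step L-infinity PGD against a logistic-regression model.
    k = 0 returns the clean inputs (no attack)."""
    x_adv = x.copy()
    for _ in range(k):
        z = x_adv @ w + b
        p = 1.0 / (1.0 + np.exp(-z))              # sigmoid predictions
        grad_x = np.outer(p - y, w)               # d(logistic loss)/d(x), shape (n, d)
        x_adv = x_adv + alpha * np.sign(grad_x)   # ascend the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project back into the eps-ball
    return x_adv

def cat_train(X, Y, K=5, eps=0.3, alpha=0.1, lr=0.1, steps_per_stage=20, seed=0):
    """Curriculum adversarial training sketch: raise the maximum attack
    strength stage by stage, and keep training on weaker strengths from
    earlier stages to mitigate forgetting."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    b = 0.0
    for stage in range(K + 1):              # curriculum over strengths 0..K
        for _ in range(steps_per_stage):
            # sample an attack strength from the curriculum so far
            # (simplified stand-in for the paper's batch mixing)
            k = int(rng.integers(0, stage + 1))
            X_adv = pgd_attack(X, Y, w, b, eps, k, alpha)
            p = 1.0 / (1.0 + np.exp(-(X_adv @ w + b)))
            w -= lr * (X_adv.T @ (p - Y)) / len(Y)  # logistic-loss gradient step
            b -= lr * np.mean(p - Y)
    return w, b
```

On well-separated toy data the curriculum-trained classifier stays accurate on clean inputs while having been trained against the full range of attack strengths; the real method applies the same schedule to deep networks on image benchmarks.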


