Deep Repulsive Prototypes for Adversarial Robustness

05/26/2021 ∙ by Alex Serban et al.

While many defences against adversarial examples have been proposed, finding robust machine learning models is still an open problem. The most compelling defence to date is adversarial training, which consists of complementing the training data set with adversarial examples. Yet adversarial training severely impacts training time and depends on finding representative adversarial samples. In this paper we propose to train models on output spaces with large class separation in order to gain robustness without adversarial training. We introduce a method to partition the output space into class prototypes with large separation and to train models to preserve it. Experimental results show that models trained with these prototypes – which we call deep repulsive prototypes – gain robustness competitive with adversarial training, while also preserving more accuracy on natural samples. Moreover, the models are more resilient to large perturbation sizes. For example, we obtained over 50% robustness for CIFAR-10, with 92% accuracy on natural samples, and competitive robustness for CIFAR-100, with 71% accuracy on natural samples, all without adversarial training. For both data sets, the models preserved robustness against large perturbations better than adversarially trained models.
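
The abstract's two ingredients – fixing well-separated class prototypes in the output space, then training the network to map each sample close to its class prototype – can be sketched as follows. This is a minimal illustration in PyTorch, not the authors' implementation: the pairwise-repulsion optimisation, the cosine-distance training loss, and the function names (make_repulsive_prototypes, prototype_loss) are assumptions based only on the abstract.

```python
# Hypothetical sketch (not the paper's code): place class prototypes on the
# unit hypersphere with large pairwise separation, then train embeddings
# toward their class prototype instead of using adversarial training.
import torch
import torch.nn.functional as F

def make_repulsive_prototypes(num_classes: int, dim: int,
                              steps: int = 1000, lr: float = 0.1) -> torch.Tensor:
    """Optimise unit vectors so their minimum pairwise separation is large,
    by repeatedly pushing apart each prototype's nearest neighbour."""
    protos = torch.randn(num_classes, dim, requires_grad=True)
    opt = torch.optim.SGD([protos], lr=lr)
    diag = torch.eye(num_classes, dtype=torch.bool)
    for _ in range(steps):
        p = F.normalize(protos, dim=1)
        # Cosine similarity between prototypes; mask out self-similarity.
        sim = (p @ p.t()).masked_fill(diag, -1.0)
        # "Repulsion": minimise each prototype's largest similarity.
        loss = sim.max(dim=1).values.mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    # Prototypes are fixed (not trained further) once separated.
    return F.normalize(protos.detach(), dim=1)

def prototype_loss(embeddings: torch.Tensor, labels: torch.Tensor,
                   prototypes: torch.Tensor) -> torch.Tensor:
    """Pull each normalised embedding toward its class prototype by
    maximising cosine similarity; no adversarial examples are needed."""
    z = F.normalize(embeddings, dim=1)
    return (1.0 - (z * prototypes[labels]).sum(dim=1)).mean()
```

At test time a sample would be assigned the class of its nearest prototype, e.g. pred = (F.normalize(z, dim=1) @ prototypes.t()).argmax(dim=1); under this reading, robustness comes from the large margins between prototypes rather than from training on perturbed samples.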

