Unique properties of adversarially trained linear classifiers on Gaussian data
Machine learning models are vulnerable to adversarial perturbations that, when added to an input, can cause high-confidence misclassifications. The adversarial learning research community has made remarkable progress in understanding the root causes of adversarial perturbations. However, most problems one may consider important for the deployment of machine learning in safety-critical tasks involve high-dimensional, complex manifolds that are difficult to characterize and study. It is common to develop adversarially robust learning theory on simple problems, in the hope that insights will transfer to `real-world datasets'. In this work, we discuss a setting where this approach fails. In particular, we show that, with a linear classifier, it is always possible to solve a binary classification problem on Gaussian data under arbitrary levels of adversarial corruption during training, and that this property is not observed with non-linear classifiers on the CIFAR-10 dataset.
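Below is a minimal sketch of the Gaussian setting the abstract describes, not the paper's exact construction. It assumes labels y in {-1, +1}, inputs x ~ N(y * mu, sigma^2 I), a linear classifier f(x) = sign(w . x), and adversarial training against an L2-bounded adversary of radius eps; for a linear model the inner maximization has the closed form x_adv = x - eps * y * w / ||w||. All names and hyperparameters are illustrative.

```python
# Hypothetical illustration of adversarial training of a linear classifier
# on Gaussian data, with the corruption level eps set far above the class
# separation. Assumptions (not from the paper): logistic loss, L2 adversary,
# unit-norm weight vector maintained throughout.
import numpy as np

rng = np.random.default_rng(0)
d, n, sigma, lr = 20, 2000, 1.0, 0.1
eps = 5.0                          # much larger than the class separation
mu = np.ones(d) / np.sqrt(d)       # unit-norm class mean

y = rng.choice([-1.0, 1.0], size=n)
X = y[:, None] * mu + sigma * rng.standard_normal((n, d))

w = rng.standard_normal(d)
w /= np.linalg.norm(w)
for _ in range(500):
    # Worst-case L2 perturbation against the current linear classifier
    # (closed form, since w is kept at unit norm).
    X_adv = X - eps * y[:, None] * w
    margins = y * (X_adv @ w)
    # Gradient of the logistic loss evaluated on the adversarial examples.
    grad = -np.mean((1.0 / (1.0 + np.exp(margins)))[:, None]
                    * y[:, None] * X_adv, axis=0)
    w -= lr * grad
    w /= np.linalg.norm(w)         # only the direction matters for sign(w . x)

# Despite eps >> ||mu||, the learned direction recovers mu, so the clean
# binary classification problem is still solved.
y_test = rng.choice([-1.0, 1.0], size=n)
X_test = y_test[:, None] * mu + sigma * rng.standard_normal((n, d))
print("cosine(w, mu):", float(w @ mu))
print("clean test accuracy:", float(np.mean(np.sign(X_test @ w) == y_test)))
```

Renormalizing w after each step is a design choice for the sketch: for a linear classifier only the decision direction affects sign(w . x), and it keeps the update stable even when eps is arbitrarily large relative to the signal.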