Unique properties of adversarially trained linear classifiers on Gaussian data

06/06/2020
by Jamie Hayes, et al.

Machine learning models are vulnerable to adversarial perturbations that, when added to an input, cause high-confidence misclassifications. The adversarial learning research community has made remarkable progress in understanding the root causes of adversarial perturbations. However, most problems one might consider important to solve before deploying machine learning in safety-critical tasks involve high-dimensional, complex manifolds that are difficult to characterize and study. It is therefore common to develop adversarially robust learning theory on simple problems, in the hope that insights will transfer to 'real-world' datasets. In this work, we discuss a setting where this approach fails. In particular, we show that, with a linear classifier, a binary classification problem on Gaussian data can always be solved under arbitrary levels of adversarial corruption during training, and that this property is not observed with non-linear classifiers on the CIFAR-10 dataset.
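As a rough illustration of the setting (a sketch, not the paper's experimental code), the snippet below adversarially trains a linear classifier on a two-class Gaussian mixture, x ~ N(y*mu, I). For a linear model, the worst-case l_inf perturbation of budget eps has a closed form (it shrinks every margin by eps*||w||_1), so adversarial training reduces to minimizing the logistic loss on the shrunken margins. The mean vector, perturbation budget, learning rate, and step count are all illustrative assumptions.

# Minimal sketch: adversarial training of a linear classifier on Gaussian
# data. All hyperparameters below are illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(0)
d, n, eps, lr = 10, 1000, 0.5, 0.1
mu = np.ones(d)                                     # class means at +mu / -mu

y = rng.choice([-1.0, 1.0], size=n)                 # labels in {-1, +1}
X = y[:, None] * mu + rng.standard_normal((n, d))   # x ~ N(y*mu, I)

w = np.zeros(d)
for _ in range(500):
    # For a linear classifier, the worst-case l_inf perturbation of size
    # eps is closed form: delta = -eps * y * sign(w), which shrinks each
    # margin y * (w @ x) by exactly eps * ||w||_1.
    margins = y * (X @ w) - eps * np.abs(w).sum()
    # Gradient of the logistic loss evaluated on the adversarial margins
    # (margins are clipped only to avoid overflow in exp).
    s = -1.0 / (1.0 + np.exp(np.clip(margins, -60.0, 60.0)))
    grad = (s[:, None] * (y[:, None] * X - eps * np.sign(w))).mean(axis=0)
    w -= lr * grad

# The learned weights should align with the true mean direction.
cos = w @ mu / (np.linalg.norm(w) * np.linalg.norm(mu) + 1e-12)
print("cosine(w, mu):", cos)

With the class means well separated relative to eps, the learned weight vector aligns closely with mu (cosine near 1): the adversarially trained linear classifier recovers the informative direction for this Gaussian problem, in line with the behaviour the abstract describes.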

