Sample Complexity of Adversarially Robust Linear Classification on Separated Data

12/19/2020
by Robi Bhattacharjee, et al.

We consider the sample complexity of learning with adversarial robustness. Most prior theoretical results for this problem have considered a setting where different classes in the data are close together or overlapping. Motivated by some real applications, we consider, in contrast, the well-separated case where there exists a classifier with perfect accuracy and robustness, and show that the sample complexity tells an entirely different story. Specifically, for linear classifiers, we show a large class of well-separated distributions where the expected robust loss of any algorithm is at least Ω(d/n), whereas the max-margin algorithm has expected standard loss O(1/n). This shows a gap between the standard and robust losses that cannot be obtained via prior techniques. Additionally, we present an algorithm that, given an instance where the robustness radius is much smaller than the gap between the classes, produces a solution with expected robust loss O(1/n). This shows that for very well-separated data, convergence rates of O(1/n) are achievable, which is not the case otherwise. Our results apply to robustness measured in any ℓ_p norm with p > 1 (including p = ∞).
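For concreteness, the robust loss the abstract refers to has a simple closed form for linear classifiers: under an ℓ_p perturbation of radius r, a point (x, y) is robustly classified by x ↦ sign(w·x + b) exactly when the signed margin y(w·x + b) exceeds r·||w||_q, where q is the dual exponent with 1/p + 1/q = 1. The following is a minimal sketch of that computation, assuming this standard dual-norm characterization; it is not code from the paper, and the function name and synthetic data are illustrative.

import numpy as np

def robust_linear_loss(w, b, X, y, r, p=2.0):
    # Empirical adversarially robust 0-1 loss of x -> sign(w @ x + b)
    # under l_p perturbations of radius r (p > 1, including p = inf).
    # Dual-norm fact: the worst-case perturbation shrinks the signed
    # margin y * (w @ x + b) by exactly r * ||w||_q, 1/p + 1/q = 1.
    q = 1.0 if np.isinf(p) else p / (p - 1.0)
    threshold = r * np.linalg.norm(w, ord=q)
    margins = y * (X @ w + b)          # labels y in {-1, +1}, X is (n, d)
    return float(np.mean(margins <= threshold))

# Illustrative usage on synthetic well-separated data (hypothetical setup):
rng = np.random.default_rng(0)
d, n = 5, 200
w_star = np.ones(d) / np.sqrt(d)
y = rng.choice([-1.0, 1.0], size=n)
X = y[:, None] * (2.0 * w_star) + 0.1 * rng.standard_normal((n, d))
print(robust_linear_loss(w_star, 0.0, X, y, r=0.5, p=np.inf))

Setting r = 0 recovers the standard 0-1 loss, which is why a gap between the two quantities, as in the Ω(d/n) versus O(1/n) result above, is a statement about how much the r·||w||_q margin requirement costs in samples.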
