When are Non-Parametric Methods Robust?

03/13/2020
by Robi Bhattacharjee, et al.

A growing body of research has shown that many classifiers are susceptible to adversarial examples: small, strategically chosen modifications to test inputs that lead to misclassification. In this work, we study general non-parametric methods, with a view towards understanding when they are robust to these modifications. We establish general conditions under which non-parametric methods are r-consistent, in the sense that they converge to optimally robust and accurate classifiers in the large-sample limit. Concretely, our results show that when the data is well-separated, nearest neighbors and kernel classifiers are r-consistent, while histograms are not. For general data distributions, we prove that preprocessing with Adversarial Pruning (Yang et al., 2019), which makes the data well-separated, followed by nearest neighbors or kernel classifiers also leads to r-consistency.
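To make the "Adversarial Pruning followed by nearest neighbors" pipeline concrete, here is a minimal illustrative sketch. It is not the authors' implementation: Yang et al. (2019) compute an optimal pruning, whereas the greedy routine below only approximates it, and the radius r, the toy data, and the function name `greedy_adversarial_pruning` are assumptions made for illustration.

```python
# Sketch of: prune the training set so differently-labeled points are
# far apart (well-separated), then fit a 1-nearest-neighbor classifier.
# Greedy pruning is a simplification of the optimal Adversarial Pruning
# of Yang et al. (2019); radius r and data here are illustrative only.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def greedy_adversarial_pruning(X, y, r):
    """Greedily drop points until no two differently-labeled points
    lie within distance 2r of each other (r-separation)."""
    keep = np.ones(len(X), dtype=bool)
    for i in range(len(X)):
        if not keep[i]:
            continue
        for j in range(i + 1, len(X)):
            if keep[j] and y[i] != y[j] and np.linalg.norm(X[i] - X[j]) <= 2 * r:
                keep[j] = False  # drop the later conflicting point
    return X[keep], y[keep]

# Toy usage: prune, then train nearest neighbors on the pruned set.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(int)
X_pruned, y_pruned = greedy_adversarial_pruning(X, y, r=0.05)
clf = KNeighborsClassifier(n_neighbors=1).fit(X_pruned, y_pruned)
print("accuracy on original data:", clf.score(X, y))
```

The intent of the sketch is only to show where the well-separation step sits in the pipeline; the paper's guarantees concern the optimal pruning, not this greedy approximation.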
