Adversarial Examples for Non-Parametric Methods: Attacks, Defenses and Large Sample Limits

06/07/2019
by   Yao-Yuan Yang, et al.

Adversarial examples have received a great deal of recent attention because of their potential to uncover security flaws in machine learning systems. However, most prior work on adversarial examples has been on parametric classifiers, for which generic attack and defense methods are known; non-parametric methods have only been considered on an ad-hoc or classifier-specific basis. In this work, we take a holistic look at adversarial examples for non-parametric methods. We first provide a general region-based attack that applies to a wide range of classifiers, including nearest neighbors, decision trees, and random forests. Motivated by the close connection between non-parametric methods and the Bayes Optimal classifier, we next exhibit a robust analogue to the Bayes Optimal, and we use it to motivate a novel and generic defense that we call adversarial pruning. We empirically show that the region-based attack and adversarial pruning defense are either better than or competitive with existing attacks and defenses for non-parametric methods, while being considerably more generally applicable.
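The abstract does not spell out the pruning procedure, but its description suggests removing training points until no two differently-labeled points remain close together, so that a non-parametric classifier trained on the pruned set has well-separated decision regions. Below is a minimal Python sketch of that idea, assuming Euclidean distance and a separation radius r; the function name `adversarial_prune` and the greedy removal rule are illustrative stand-ins, not the paper's exact algorithm, which treats pruning as a combinatorial subset-selection problem rather than a heuristic.

```python
import numpy as np

def adversarial_prune(X, y, r):
    """Greedy sketch: drop training points until any two remaining points
    with different labels are at least Euclidean distance r apart.
    (Illustrative only; not the paper's exact algorithm.)"""
    n = len(X)
    # Pairwise distances and the cross-label "conflict" graph: an edge
    # joins two points with different labels that are closer than r.
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    conflict = (dists < r) & (y[:, None] != y[None, :])
    keep = np.ones(n, dtype=bool)
    # Repeatedly drop the kept point with the most remaining conflicts
    # until no cross-label pair violates the separation radius.
    while True:
        degrees = (conflict & keep[None, :] & keep[:, None]).sum(axis=1)
        worst = int(degrees.argmax())
        if degrees[worst] == 0:
            break
        keep[worst] = False
    return X[keep], y[keep]
```

A non-parametric classifier (for example, scikit-learn's `KNeighborsClassifier` with k=1) would then be fit on the pruned `(X, y)` instead of the raw training set; the separated classes leave more room before a small perturbation can cross the decision boundary.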

Related research

03/13/2020 · When are Non-Parametric Methods Robust?
A growing body of research has shown that many classifiers are susceptib...

02/18/2021 · Consistent Non-Parametric Methods for Adaptive Robustness
Learning classifiers that are robust to adversarial examples has receive...

02/07/2022 · Adversarial Attacks and Defense for Non-Parametric Two-Sample Tests
Non-parametric two-sample tests (TSTs) that judge whether two sets of sa...

06/13/2017 · Analyzing the Robustness of Nearest Neighbors to Adversarial Examples
Motivated by applications such as autonomous vehicles, test-time attacks...

05/25/2020 · Keyed Non-Parametric Hypothesis Tests
The recent popularity of machine learning calls for a deeper understandi...

10/31/2018 · A Mixture Model Based Defense for Data Poisoning Attacks Against Naive Bayes Spam Filters
Naive Bayes spam filters are highly susceptible to data poisoning attack...

09/23/2020 · Adversarial robustness via stochastic regularization of neural activation sensitivity
Recent works have shown that the input domain of any machine learning cl...
