Minimum-Norm Adversarial Examples on KNN and KNN-Based Models

03/14/2020
by Chawin Sitawarin, et al.

We study the robustness of kNN classifiers, and of classifiers that combine kNN with neural networks, against adversarial examples. The main difficulty lies in the fact that finding an optimal attack on kNN is intractable for typical datasets. In this work, we propose a gradient-based attack on kNN and kNN-based defenses, inspired by the previous work of Sitawarin and Wagner [1]. We demonstrate that our attack outperforms their method on all of the models we tested, with only a minimal increase in computation time. The attack also beats the state-of-the-art attack [2] on kNN when k > 1, using less than 1% of its running time. We hope that this attack can serve as a new baseline for evaluating the robustness of kNN and its variants.
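The full details of the attack are in the paper; as a rough illustration of the general idea, the sketch below shows one common way to make a kNN classifier amenable to gradient-based optimization: replace the hard nearest-neighbor vote with a softmax over negative distances, then minimize the perturbation norm together with a misclassification penalty. This is a minimal sketch of the technique class, not the authors' exact algorithm; all names and hyperparameters here (soft_knn_logits, temp, c, the step counts) are illustrative assumptions.

```python
# Minimal sketch of a gradient-based minimum-norm attack on kNN.
# kNN itself is non-differentiable, so we use a differentiable
# surrogate: softmax weights over negative squared distances.
import torch

def soft_knn_logits(x, train_X, train_y, num_classes, temp=10.0):
    # train_X: (N, D) float tensor; train_y: (N,) long tensor.
    # Soft class scores: each training point contributes to the vote
    # in proportion to a softmax over its negative squared distance.
    d2 = ((train_X - x) ** 2).sum(dim=1)           # squared L2 distances
    w = torch.softmax(-temp * d2, dim=0)           # soft neighbor weights
    onehot = torch.nn.functional.one_hot(train_y, num_classes).float()
    return w @ onehot                              # soft per-class vote

def min_norm_attack(x0, y0, train_X, train_y, num_classes,
                    steps=500, lr=0.05, c=1.0):
    # Carlini-Wagner-style objective adapted to the soft-kNN surrogate:
    # minimize ||delta||^2 + c * margin, where the margin penalizes the
    # true class (index y0) scoring above the best other class.
    delta = torch.zeros_like(x0, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        logits = soft_knn_logits(x0 + delta, train_X, train_y, num_classes)
        true = logits[y0]
        other = torch.cat([logits[:y0], logits[y0 + 1:]]).max()
        loss = (delta ** 2).sum() + c * torch.clamp(true - other, min=0.0)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (x0 + delta).detach()
```

The temperature controls the fidelity of the surrogate: as temp grows, the softmax weights concentrate on the nearest training points and the soft vote approaches the hard nearest-neighbor decision, at the cost of flatter (harder to optimize) gradients.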


