AdvKnn: Adversarial Attacks On K-Nearest Neighbor Classifiers With Approximate Gradients

11/15/2019
by Xiaodan Li, et al.

Deep neural networks have been shown to be vulnerable to adversarial examples: maliciously crafted inputs that cause the target model to misbehave through the addition of imperceptible perturbations. Existing attack methods for k-nearest neighbor (kNN) based algorithms either require large perturbations or are not applicable when k is large. To address this problem, this paper proposes a new method called AdvKNN for evaluating the adversarial robustness of kNN-based models. First, we propose a deep kNN block that approximates the output of kNN methods; because the block is differentiable, it provides gradients that allow attacks to cross the decision boundary with small distortions. Second, we propose a new consistency learning objective defined over output distributions rather than class labels, which makes the attack effective against distribution-based methods. Extensive experimental results indicate that the proposed method significantly outperforms the state of the art in both attack success rate and the magnitude of the added perturbations.
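The key idea of the deep kNN block is to replace the hard, non-differentiable kNN vote with a smooth surrogate so that gradient-based attacks become possible. Below is a minimal PyTorch sketch of one common way to build such a surrogate, assuming a temperature-scaled softmax over the negative distances to the k nearest training points; the paper's exact formulation may differ, and the function name soft_knn_logits, the temperature tau, and the toy data are illustrative, not the authors' code.

import torch
import torch.nn.functional as F

def soft_knn_logits(query, train_feats, train_labels, num_classes, k=5, tau=1.0):
    # Pairwise Euclidean distances from each query to every training point.
    dists = torch.cdist(query, train_feats)                         # (B, N)
    # Hard kNN would take a majority vote over the k nearest points;
    # here the vote is softened with a temperature-scaled softmax so
    # gradients flow back to `query` through the distances.
    knn_dists, knn_idx = dists.topk(k, dim=1, largest=False)        # (B, k)
    weights = F.softmax(-knn_dists / tau, dim=1)                    # (B, k)
    onehot = F.one_hot(train_labels[knn_idx], num_classes).float()  # (B, k, C)
    # Weighted class vote: a differentiable stand-in for the kNN output.
    return (weights.unsqueeze(-1) * onehot).sum(dim=1)              # (B, C)

# Toy usage: one sign-gradient (FGSM-style) step against the surrogate.
feats = torch.randn(100, 32)             # stand-in training features
labels = torch.randint(0, 10, (100,))    # stand-in training labels
x = torch.randn(1, 32, requires_grad=True)
loss = F.cross_entropy(soft_knn_logits(x, feats, labels, 10), torch.tensor([3]))
loss.backward()
x_adv = x + 0.05 * x.grad.sign()         # small perturbation toward the boundary

Because the surrogate is smooth in the query features, any standard gradient attack can be run against it; with a small tau the soft vote closely tracks the hard kNN decision, which is what lets small, targeted distortions cross the true decision boundary.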

Related research

03/20/2019 · On the Robustness of Deep K-Nearest Neighbors
Despite a large amount of attention on adversarial examples, very few wo...

06/02/2023 · Adversarial Attack Based on Prediction-Correction
Deep neural networks (DNNs) are vulnerable to adversarial examples obtai...

10/03/2019 · Perturbations are not Enough: Generating Adversarial Examples with Spatial Distortions
Deep neural network image classifiers are reported to be susceptible to ...

06/27/2021 · ASK: Adversarial Soft k-Nearest Neighbor Attack and Defense
K-Nearest Neighbor (kNN)-based deep learning methods have been applied t...

02/11/2021 · Adversarial Poisoning Attacks and Defense for General Multi-Class Models Based On Synthetic Reduced Nearest Neighbors
State-of-the-art machine learning models are vulnerable to data poisonin...

08/18/2022 · Resisting Adversarial Attacks in Deep Neural Networks using Diverse Decision Boundaries
The security of deep learning (DL) systems is an extremely important fie...

09/06/2023 · SWAP: Exploiting Second-Ranked Logits for Adversarial Attacks on Time Series
Time series classification (TSC) has emerged as a critical task in vario...
