Adversarial Poisoning Attacks and Defense for General Multi-Class Models Based On Synthetic Reduced Nearest Neighbors

02/11/2021
by Pooya Tavallali, et al.

State-of-the-art machine learning models are vulnerable to data poisoning attacks whose purpose is to undermine the integrity of the model. However, the current literature on data poisoning attacks focuses mainly on ad hoc techniques that apply only to specific machine learning models, and existing attacks are limited to either binary classifiers or gradient-based algorithms. To address these limitations, this paper first proposes a novel model-free label-flipping attack that exploits the multi-modality of the data: the adversary targets the clusters of the classes while constrained by a label-flipping budget. The proposed attack algorithm runs in time linear in the size of the dataset and, for the same attack budget, can increase the error by up to a factor of two. Second, a novel defense technique based on the Synthetic Reduced Nearest Neighbor (SRNN) model is proposed; it detects and excludes flipped samples on the fly during the training procedure. Through extensive experimental analysis, we demonstrate that (i) the proposed attack technique can drastically degrade the accuracy of several models, and (ii) under the proposed attack, the proposed defense technique significantly outperforms other conventional machine learning models in recovering the accuracy of the targeted model.
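The abstract describes the attack only at a high level; the sketch below is a hypothetical illustration, not the paper's algorithm, of a budget-constrained, cluster-based label-flipping attack. The clustering choice (per-class k-means), the target-label rule (nearest other-class mean), the smallest-cluster-first ordering, and the function name label_flip_attack are all assumptions made for illustration.

# Hypothetical sketch of a model-free, cluster-based label-flipping attack.
# Assumptions (ours, not the paper's): each class is partitioned with k-means,
# whole clusters are flipped to the class whose mean is nearest to the cluster
# center, and smaller clusters are flipped first until the budget runs out.
import numpy as np
from sklearn.cluster import KMeans

def label_flip_attack(X, y, budget, clusters_per_class=3, seed=0):
    """Return a poisoned copy of y in which at most `budget` labels are flipped."""
    y_poisoned = y.copy()
    classes = np.unique(y)
    candidates = []  # (cluster size, member indices, target label)

    for c in classes:
        idx = np.where(y == c)[0]
        k = min(clusters_per_class, len(idx))
        km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X[idx])
        others = classes[classes != c]
        other_means = np.array([X[y == o].mean(axis=0) for o in others])
        for j in range(k):
            members = idx[km.labels_ == j]
            # Flip toward the class whose mean lies closest to this cluster's center.
            dists = np.linalg.norm(other_means - km.cluster_centers_[j], axis=1)
            candidates.append((len(members), members, others[np.argmin(dists)]))

    # Greedy: flip the smallest clusters first so the budget covers more clusters.
    candidates.sort(key=lambda t: t[0])
    flipped = 0
    for size, members, target in candidates:
        if flipped + size > budget:
            break
        y_poisoned[members] = target
        flipped += size
    return y_poisoned

Each sample is visited a constant number of times here (roughly linear for a fixed number of k-means iterations), which is consistent with the linear-time claim in the abstract; the actual criterion the paper uses to select which clusters to flip, and how the SRNN-based defense detects flipped samples during training, are specified in the full text rather than in this sketch.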

