Defending SVMs against Poisoning Attacks: the Hardness and DBSCAN Approach

06/14/2020
by Hu Ding, et al.

Adversarial machine learning has attracted a great amount of attention in recent years. In a poisoning attack, the adversary injects a small number of specially crafted samples into the training data, causing the decision boundary to deviate severely and producing unexpected misclassifications. Given the importance and widespread use of support vector machines (SVMs), we consider defending SVMs against poisoning attacks in this paper. We study two commonly used defense strategies: designing robust SVM algorithms and data sanitization. Although several robust SVM algorithms have been proposed, most of them either lack adversarial resilience or rely on strong assumptions about the data distribution or the attacker's behavior; moreover, research on their computational complexity is still quite limited. To the best of our knowledge, we are the first to prove that even the simplest hard-margin one-class SVM with outliers problem is NP-complete and admits no fully polynomial-time approximation scheme (FPTAS) unless P=NP, meaning that even an approximate solution is hard to compute efficiently. For the data-sanitization defense, we link its effectiveness to the intrinsic dimensionality of the data; in particular, we provide a sampling theorem in doubling metrics that explains the effectiveness of DBSCAN, a density-based outlier-removal method, for defending against poisoning attacks. In our empirical experiments, we compare several defenses, including DBSCAN and robust SVM methods, and investigate how intrinsic dimensionality and data density influence their performance.
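To make the sanitization defense concrete, the following minimal Python sketch uses scikit-learn's DBSCAN to drop low-density points before training an SVM. This is an illustration of the general density-based idea, not the authors' implementation; the eps, min_samples, kernel, and C values are hypothetical placeholders that would need tuning to the data's density and intrinsic dimensionality.

    # Minimal sketch: sanitize training data with DBSCAN, then fit an SVM.
    # Hyperparameters here are illustrative assumptions, not the paper's.
    import numpy as np
    from sklearn.cluster import DBSCAN
    from sklearn.svm import SVC

    def sanitize_then_fit(X, y, eps=0.5, min_samples=5):
        """Drop points DBSCAN flags as noise (label -1), then train an SVM."""
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X)
        keep = labels != -1                  # -1 marks low-density (suspect) points
        clf = SVC(kernel="linear", C=1.0)    # kernel/C choices are illustrative
        clf.fit(X[keep], y[keep])
        return clf, keep

    # Toy usage: two Gaussian classes plus a few scattered "poison" points.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0.0, 0.5, (100, 2)),
                   rng.normal(3.0, 0.5, (100, 2)),
                   rng.uniform(-5.0, 8.0, (10, 2))])
    y = np.concatenate([np.zeros(100), np.ones(100), np.zeros(10)])
    clf, keep = sanitize_then_fit(X, y, eps=0.4, min_samples=5)
    print(f"kept {keep.sum()} of {len(X)} training points")

DBSCAN labels points in low-density regions as noise (-1), which matches the intuition behind the sanitization defense: a small number of injected poisoning samples tend to lie away from the dense regions occupied by clean data.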


Related research

08/21/2020
Defending Distributed Classifiers Against Data Poisoning Attacks
Support Vector Machines (SVMs) are vulnerable to targeted training data ...

06/27/2012
Poisoning Attacks against Support Vector Machines
We investigate a family of poisoning attacks against Support Vector Mach...

06/01/2022
Support Vector Machines under Adversarial Label Contamination
Machine learning algorithms are increasingly being applied in security-r...

05/20/2017
SVM via Saddle Point Optimization: New Bounds and Distributed Algorithms
Support Vector Machine is one of the most classical approaches for class...

04/24/2019
A Robust Approach for Securing Audio Classification Against Adversarial Attacks
Adversarial audio attacks can be considered as a small perturbation unpe...

12/27/2022
LOSDD: Leave-Out Support Vector Data Description for Outlier Detection
Support Vector Machines have been successfully used for one-class classi...

02/07/2018
A Game-Theoretic Approach to Design Secure and Resilient Distributed Support Vector Machines
Distributed Support Vector Machines (DSVM) have been developed to solve ...
