Poisoning Attacks against Support Vector Machines

06/27/2012
by Battista Biggio, et al.

We investigate a family of poisoning attacks against Support Vector Machines (SVM). Such attacks inject specially crafted training data that increases the SVM's test error. Central to the motivation for these attacks is the fact that most learning algorithms assume that their training data comes from a natural or well-behaved distribution. However, this assumption does not generally hold in security-sensitive settings. As we demonstrate, an intelligent adversary can, to some extent, predict the change of the SVM's decision function due to malicious input and use this ability to construct malicious data. The proposed attack uses a gradient ascent strategy in which the gradient is computed based on properties of the SVM's optimal solution. This method can be kernelized and enables the attack to be constructed in the input space even for non-linear kernels. We experimentally demonstrate that our gradient ascent procedure reliably identifies good local maxima of the non-convex validation error surface, which significantly increases the classifier's test error.
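The abstract describes a gradient ascent attack on the validation error surface. As a rough illustration only: the sketch below poisons a linear SVM with a single crafted training point, but it estimates the error gradient by central finite differences rather than the paper's analytic gradient derived from the SVM's optimality conditions, and the toy data, step sizes, and accept-if-better rule are all assumptions of this sketch.

```python
# Hedged sketch of SVM poisoning via gradient ascent on validation error.
# NOTE: this uses a finite-difference gradient estimate, not the paper's
# analytic gradient from the SVM's optimal-solution properties.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def make_data(n):
    # Two Gaussian blobs as a toy binary classification task (assumed setup).
    X0 = rng.normal(-1.5, 1.0, size=(n, 2))
    X1 = rng.normal(+1.5, 1.0, size=(n, 2))
    return np.vstack([X0, X1]), np.array([-1] * n + [+1] * n)

X_tr, y_tr = make_data(40)
X_val, y_val = make_data(200)

def val_error(x_poison, y_poison):
    """Validation error of an SVM retrained on clean data + one poison point."""
    Xp = np.vstack([X_tr, x_poison])
    yp = np.append(y_tr, y_poison)
    clf = SVC(kernel="linear", C=1.0).fit(Xp, yp)
    return np.mean(clf.predict(X_val) != y_val)

baseline = val_error(np.array([[0.0, 0.0]]), +1)

# Gradient ascent on the (non-convex) validation error surface; the 0/1
# error is piecewise constant, so flat gradient estimates fall back to a
# random direction, and only error-increasing moves are accepted.
x, err = np.array([[0.0, 0.0]]), baseline
eps, step = 0.5, 1.0
for _ in range(20):
    g = np.zeros(2)
    for j in range(2):
        d = np.zeros((1, 2))
        d[0, j] = eps
        g[j] = (val_error(x + d, +1) - val_error(x - d, +1)) / (2 * eps)
    if np.allclose(g, 0):
        g = rng.normal(size=2)  # flat estimate: try a random direction
    cand = x + step * g / (np.linalg.norm(g) + 1e-12)
    cand_err = val_error(cand, +1)
    if cand_err >= err:  # keep only non-worsening poison positions
        x, err = cand, cand_err

print(f"validation error: neutral point {baseline:.3f} -> poisoned {err:.3f}")
```

Because the SVM is retrained from scratch at every probe, this is far more expensive than the paper's approach, which differentiates through the optimal solution; the sketch is meant only to make the "ascend the validation error surface" idea concrete.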


Related research

06/01/2022 · Support Vector Machines under Adversarial Label Contamination
  Machine learning algorithms are increasingly being applied in security-r...

09/27/2010 · General Scaled Support Vector Machines
  Support Vector Machines (SVMs) are popular tools for data mining tasks s...

02/07/2021 · Robust Explanations for Private Support Vector Machines
  We consider counterfactual explanations for private support vector machi...

06/14/2020 · Defending SVMs against Poisoning Attacks: the Hardness and DBSCAN Approach
  Adversarial machine learning has attracted a great amount of attention i...

02/22/2019 · Improving the Security of Image Manipulation Detection through One-and-a-half-class Multiple Classification
  Protecting image manipulation detectors against perfect knowledge attack...

04/06/2017 · Adequacy of the Gradient-Descent Method for Classifier Evasion Attacks
  Despite the wide use of machine learning in adversarial settings includi...

01/27/2014 · Safe Sample Screening for Support Vector Machines
  Sparse classifiers such as the support vector machines (SVM) are efficie...
