Detection of Adversarial Training Examples in Poisoning Attacks through Anomaly Detection

02/08/2018
by Andrea Paudice, et al.

Machine learning has become an important component of many systems and applications, including computer vision, spam filtering, and malware and network intrusion detection. Despite the ability of machine learning algorithms to extract valuable information from data and produce accurate predictions, it has been shown that these algorithms are vulnerable to attacks. Data poisoning is one of the most relevant security threats against machine learning systems: attackers can subvert the learning process by injecting malicious samples into the training data. Recent work in adversarial machine learning has shown that so-called optimal attack strategies can successfully poison linear classifiers, dramatically degrading the performance of the system after compromising only a small fraction of the training dataset. In this paper we propose a defence mechanism based on outlier detection to mitigate the effect of these optimal poisoning attacks. We show empirically that the adversarial examples generated by these attack strategies differ markedly from genuine points, since no detectability constraints are considered when crafting the attack. Hence, they can be detected with an appropriate pre-filtering of the training dataset.
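The idea of pre-filtering the training set with outlier detection can be illustrated with a minimal sketch. The following is an assumed, simplified detector (per-class distance to the class centroid with a quantile threshold), not the paper's exact method; the function name, threshold rule, and data are illustrative only:

```python
import numpy as np

def filter_outliers(X, y, quantile=0.95):
    """Illustrative pre-filter: for each class, flag training points
    whose Euclidean distance to the class centroid exceeds the given
    quantile of within-class distances. Returns a boolean keep-mask.
    (A sketch of the general idea, not the authors' exact detector.)"""
    keep = np.ones(len(y), dtype=bool)
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        centroid = X[idx].mean(axis=0)
        dists = np.linalg.norm(X[idx] - centroid, axis=1)
        threshold = np.quantile(dists, quantile)
        keep[idx[dists > threshold]] = False  # drop suspected poison
    return keep

# Usage: a genuine cluster plus one far-away injected point.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)), [[10.0, 10.0]]])
y = np.zeros(51, dtype=int)
mask = filter_outliers(X, y)
print(mask[-1])  # the injected point is flagged for removal (False)
```

Because the optimal attack points are crafted without detectability constraints, they tend to sit far from the genuine data distribution, which is why even a simple distance-based filter like this can separate them in the toy example above.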
