Poisoning Attacks with Generative Adversarial Nets

06/18/2019
by Luis Muñoz-González, et al.

Machine learning algorithms are vulnerable to poisoning attacks: an adversary can inject malicious points into the training dataset to influence the learning process and degrade the algorithm's performance. Optimal poisoning attacks have already been proposed to evaluate worst-case scenarios, modelling attacks as a bi-level optimisation problem. Solving these problems is computationally demanding and has limited applicability for some models such as deep networks. In this paper we introduce a novel generative model to craft systematic poisoning attacks against machine learning classifiers by generating adversarial training examples, i.e. samples that look like genuine data points but that degrade the classifier's accuracy when used for training. We propose a Generative Adversarial Net with three components: generator, discriminator, and the target classifier. This approach allows us to naturally model the detectability constraints that can be expected in realistic attacks and to identify the regions of the underlying data distribution that are more vulnerable to data poisoning. Our experimental evaluation shows the effectiveness of our attack at compromising machine learning classifiers, including deep networks.
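The following is a minimal PyTorch sketch of the three-component idea described in the abstract: a generator crafts poisoning points, a discriminator enforces that they resemble genuine data (the detectability constraint), and the target classifier is trained on the poisoned mixture. The toy data, network sizes, the single target class y_t, the label assignment, and the loss weighting alpha are all illustrative assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn as nn

def mlp(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, out_dim))

latent_dim, data_dim, n_classes = 8, 2, 2
G = mlp(latent_dim, data_dim)   # generator: noise -> candidate poisoning point
D = mlp(data_dim, 1)            # discriminator: genuine vs. generated (detectability)
C = mlp(data_dim, n_classes)    # target classifier under attack

opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
opt_C = torch.optim.Adam(C.parameters(), lr=1e-3)
bce, ce = nn.BCEWithLogitsLoss(), nn.CrossEntropyLoss()

alpha = 0.7   # detectability vs. damage trade-off (assumed value)
y_t = 1       # target class the poisoning points pretend to belong to (assumed)

def clean_batch(n=64):
    """Toy 2-D data: class 0 centred at (-1, -1), class 1 at (+1, +1)."""
    y = torch.cat([torch.zeros(n // 2, dtype=torch.long),
                   torch.ones(n // 2, dtype=torch.long)])
    x = torch.randn(n, data_dim) * 0.3 + (2.0 * y[:, None].float() - 1.0)
    return x, y

for step in range(2000):
    x_clean, y_clean = clean_batch()
    x_target = x_clean[y_clean == y_t]          # genuine target-class points
    z = torch.randn(32, latent_dim)
    x_poison = G(z)
    y_poison = torch.full((x_poison.size(0),), y_t, dtype=torch.long)

    # 1) Discriminator: tell genuine target-class points from generated poisoning points.
    d_loss = bce(D(x_target), torch.ones(x_target.size(0), 1)) + \
             bce(D(x_poison.detach()), torch.zeros(x_poison.size(0), 1))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # 2) Target classifier: trained on clean data mixed with the poisoning points.
    c_loss = ce(C(x_clean), y_clean) + ce(C(x_poison.detach()), y_poison)
    opt_C.zero_grad(); c_loss.backward(); opt_C.step()

    # 3) Generator: look genuine to D (low detectability) while raising C's loss
    #    on the poisoning points (damage); alpha balances the two objectives.
    g_loss = alpha * bce(D(x_poison), torch.ones(x_poison.size(0), 1)) \
             - (1.0 - alpha) * ce(C(x_poison), y_poison)
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
```

In this sketch, a larger alpha makes the poisoning points harder to distinguish from genuine data but less damaging to the classifier, mirroring the detectability trade-off discussed in the abstract.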


Related research

02/08/2018
Detection of Adversarial Training Examples in Poisoning Attacks through Anomaly Detection
Machine learning has become an important component for many systems and ...

02/28/2020
Regularisation Can Mitigate Poisoning Attacks: A Novel Analysis Based on Multiobjective Bilevel Optimisation
Machine Learning (ML) algorithms are vulnerable to poisoning attacks, wh...

04/19/2021
Manipulating SGD with Data Ordering Attacks
Machine learning is vulnerable to a wide variety of different attacks. I...

12/18/2021
Being Friends Instead of Adversaries: Deep Networks Learn from Data Simplified by Other Networks
Amongst a variety of approaches aimed at making the learning procedure o...

01/25/2019
Generative Adversarial Networks for Black-Box API Attacks with Limited Training Data
As online systems based on machine learning are offered to public or pai...

03/24/2018
A Dynamic-Adversarial Mining Approach to the Security of Machine Learning
Operating in a dynamic real world environment requires a forward thinkin...

03/23/2017
Data Driven Exploratory Attacks on Black Box Classifiers in Adversarial Domains
While modern day web applications aim to create impact at the civilizati...
